arXiv:2510.14883v1 [hep-th] 16 Oct 2025

Low energy dynamics of vibrating Kinks

J. Mateos Guilarte(a)
(a) Instituto Universitario de Física Fundamental y Matemáticas, Universidad de Salamanca, SPAIN

ABSTRACT

The low energy dynamics of Kinks and Kink-AntiKink configurations in the Jackiw-Rebbi model is fully described. The strategy is based on the Collective Coordinates adiabatic approach. The solution of the necessary Quantum Mechanical spectral problems, both for scalar and for spinorial wave functions, is unveiled as an intermediate step.

1 Introduction

Over the last 50 years, research activity on the dynamics of topological defects has experienced strong impetus. Given the relevance of topological defects in Fundamental/Mathematical Physics, Condensed Matter Physics, Cosmology, Biophysics and other branches of Science, this task revealed itself as necessary. Except in Integrable Field Theories such as the sine-Gordon, Korteweg-de Vries and Kadomtsev-Petviashvili equations, no analytical methods are applicable. Recently, however, numerical methods have been applied with success to analyze the scattering of Kinks in several distinguished non-integrable models living in (1+1) dimensions. We mention specifically References [1]-[2]-[3], where numerical methods of integration have been applied to understand scattering processes between Kinks and AntiKinks. Focusing on collisions between Kink-shaped defects, either simply travelling and/or wobbling while travelling, success in understanding their interactions emerged. Of course, there is a vast literature on this subject that can be found in the References just quoted.

A parallel attack on the understanding of the interaction between topological defects has been produced from the adiabatic, low energy, side. Time dependence is restricted to the so-called collective coordinates, and the investigation is led to deal with dynamical systems with a finite number of degrees of freedom. Specifically, when the objects of research are Kinks, the central topic of this paper, the solution of the simplified system in Reference [4] shows astonishing qualitative agreement with the numerical results.

In an a priori completely different framework, a great deal of research has been devoted to studying how Fermions affect the dynamics of systems in Field Theory and Condensed Matter Physics, see [5]-[6]-[7]. In these papers a new phenomenon with far-reaching consequences was discovered: the fractionization of the Fermi number in the presence of topological defects; see also [8], where the connection with index theorems, specifically the eta invariant, was explored. These findings aroused interest in studying physical settings where Fermions live in the presence of topological defects such as Kinks, see e.g. [9]-[10]-[14]. In this work we shall concentrate on developing the collective coordinates adiabatic approximation when Fermions are present in the system. A previous work along this line is Reference [13], but our focus will be the paradigmatic Jackiw-Rebbi model, a simple setting rich enough to yield a great amount of information. More precisely, we shall continue in this paper the analysis developed in References [11] and [12] to fully unveil the adiabatic dynamics of vibrating kinks and kink-antikink configurations.

2 The Jackiw-Rebbi model in $\mathbb{R}^{1,1}$ Minkowski space-time

Let us consider a QFT of fermions and bosons restricted to move on a line.
The dynamics is governed by the action (1):
$$S_{JR}[\phi,\Psi]=\int_{\mathbb{R}^{1,1}}d^2x\,\Big[\frac12\partial_\mu\phi\partial^\mu\phi-\frac{\lambda^2}{2}(\phi^2-1)^2+i\bar\Psi\gamma^\mu\partial_\mu\Psi-g\bar\Psi\phi\Psi\Big]\qquad(1)$$
$$\bar\Psi=\Psi^\dagger\gamma^0\,,\quad [\phi]=1\,,\quad [\Psi]=L^{-1/2}\,,\quad [\lambda]=L^{-1}=[g]$$
encompassing a quartic self-interaction of the bosons plus a Yukawa coupling between fermions and bosons. In the natural system of units where the Planck constant and the speed of light in vacuum are set to one, $\hbar=c=1$, the dimensions of the fields and couplings are as shown below the action.

The Jackiw-Rebbi Hamiltonian $H=H_{FB}+H_B$, (2), is in turn obtained via a Legendre transformation:
$$H_{FB}=\int dx\,\Psi^\dagger(t,x)\Big(-i\alpha\frac{\partial}{\partial x}\Big)\Psi(t,x)+\int dx\,\Psi^\dagger(t,x)\,\{g\phi(t,x)\beta\}\,\Psi(t,x)$$
$$H_B=\frac12\int dx\,\Big\{\Pi^2(t,x)+\Big(\frac{\partial\phi}{\partial x}\Big)^2+\lambda^2\big(\phi^2(t,x)-1\big)^2\Big\}\qquad(2)$$
The Dirac matrices $\alpha=\sigma^2$ and $\beta=\sigma^1$ are chosen as in [5]. Here $\sigma^1$ and $\sigma^2$ are Pauli matrices, and this choice corresponds to the Clifford algebra
$$\gamma^0=\sigma^1\,,\quad \gamma^1=i\sigma^3\,,\quad \gamma^5=\gamma^0\gamma^1=\sigma^2\,,\qquad \{\gamma^\mu,\gamma^\nu\}=2g^{\mu\nu}\,,\quad g^{\mu\nu}={\rm diag}(1,-1)\,,\quad \mu,\nu=0,1$$
The Klein-Gordon and Dirac fields are maps from Minkowski space respectively to the field of the reals and to the fundamental irreducible representation of the ${\rm Spin}(1,1;\mathbb{R})$ group:
$$\phi(t,x):\mathbb{R}^{1,1}\longrightarrow\mathbb{R}\,,\qquad \Psi(t,x)=\begin{pmatrix}\psi_1(t,x)\\ \psi_2(t,x)\end{pmatrix}:\mathbb{R}^{1,1}\longrightarrow{\rm irr}\,P{\rm Spin}(1,1;\mathbb{R})\,,$$
i.e., the transformation generated by $[\gamma^0,\gamma^1]$, characterized by the Lorentz boost parameter $\chi$, acts on a spinor in the form
$$S_L[\chi]=e^{\frac{\chi}{4}[\gamma^0,\gamma^1]}=\begin{pmatrix}\cosh\frac{\chi}{2}&i\sinh\frac{\chi}{2}\\ -i\sinh\frac{\chi}{2}&\cosh\frac{\chi}{2}\end{pmatrix}\,,\qquad \cosh\chi=\frac{1}{\sqrt{1-v^2}}\,,\qquad \Psi_L(t,x)=S_L[\chi]\Psi(t,x)$$
The Euler-Lagrange (classical) field equations read:
$$\Box\phi+2\lambda^2\phi(t,x)(\phi^2(t,x)-1)+g\bar\Psi(t,x)\Psi(t,x)=0\qquad(3)$$
$$i\gamma^\mu\frac{\partial\Psi}{\partial x^\mu}-g\phi(t,x)\Psi(t,x)=0\,.\qquad(4)$$
This system of coupled non-linear PDEs is very difficult to solve. The situation is better, and pertinent to the posterior canonical quantization of the system, if some static solution of the form $(\phi_S(x),\Psi_S=0)$ is discovered:
$$-\frac{d^2\phi_S}{dx^2}+2\lambda^2\phi_S(x)(\phi_S^2(x)-1)=0$$
In that case, one may search for solutions close to these static solutions, which linearize the PDE system (3)-(4):
$$\phi(t,x)=\phi_S(x)+\eta(t,x)\ \Rightarrow\ \Box\eta+2\lambda^2(3\phi_S^2(x)-1)\eta(t,x)=O(\eta^2)\qquad(5)$$
$$i\gamma^\mu\frac{\partial\Psi}{\partial x^\mu}-g\phi_S(x)\Psi(t,x)=O(\eta\Psi)\qquad(6)$$
Since the terms in (5)-(6) with no derivatives of the fields are time independent, it is convenient to solve the linear system via Fourier transform in time:
$$\eta(t,x)=\int_{-\infty}^{\infty}\frac{d\omega_B}{2\pi}\,e^{i\omega_Bt}\eta(x;\omega_B)\,,\qquad \Psi(t,x)=\int_{-\infty}^{\infty}\frac{d\omega_F}{2\pi}\,e^{i\omega_Ft}\Psi(x;\omega_F)$$
The linear PDE system (5)-(6) becomes equivalent to the spectral problems (7)-(8):
$$h_{\rm Sch}\,\eta(x;\omega_B)=\Big[-\frac{d^2}{dx^2}+2\lambda^2(3\phi_S^2(x)-1)\Big]\eta(x;\omega_B)=\omega_B^2\,\eta(x;\omega_B)\qquad(7)$$
$$h_D\,\Psi(x;\omega_F)=\Big[-i\alpha\frac{d}{dx}+g\beta\phi_S(x)\Big]\Psi(x;\omega_F)=\omega_F\,\Psi(x;\omega_F)\qquad(8)$$
where $h_{\rm Sch}$ and $h_D$ are respectively the quantum mechanical Schrödinger and Dirac operators in the background created by the classical solution $\phi_S$.

2.1 Bose/Fermi quanta in homogeneous field backgrounds

The standard canonical quantization procedure, building the space of stationary states of $H$ and evaluating quantum transitions within the Fock space, is based on finding the eigenwave functions of the Schrödinger operator and the eigenspinors of the Dirac operator, to be taken as the one-particle states. The simplest solutions of the classical field equations are the two homogeneous minima of the scalar potential energy, independent of $t$ and $x$, whereas $H_{FB}$ is minimized by $\Psi_V=0$:
$$\phi_S(t,x)_\pm=\phi_V^\pm=\pm1\,,\qquad \Psi_S(x)=\Psi_V=0$$
The choice of one of these two configurations, e.g.
$(\phi_S(t,x)_+=+1,\ \Psi_S(t,x)=0)$, as the ground state spontaneously breaks the $\phi\to-\phi$ symmetry, and the linear Klein-Gordon and Dirac equations become:
$$\frac{\partial^2\phi}{\partial t^2}=\frac{\partial^2\phi}{\partial x^2}-4\lambda^2\phi(t,x)\qquad(9)$$
$$\begin{pmatrix}i\frac{\partial}{\partial t}&0\\ 0&i\frac{\partial}{\partial t}\end{pmatrix}\cdot\begin{pmatrix}\psi_1(t,x)\\ \psi_2(t,x)\end{pmatrix}=\begin{pmatrix}0&-\frac{\partial}{\partial x}+g\\ \frac{\partial}{\partial x}+g&0\end{pmatrix}\cdot\begin{pmatrix}\psi_1(t,x)\\ \psi_2(t,x)\end{pmatrix}\,.\qquad(10)$$
Because there are no time- or space-dependent terms in the PDE operators in the linear equations (9) and (10), it is convenient to search for the general solutions as Fourier transform integrals, see (11)-(12)-(13)^1:
$$\phi(t,x)=\int_{H^+}\frac{dk}{\sqrt{4\pi\omega_B(k)}}\Big(a(k)e^{-i\omega_B(k)t+ikx}+a^*(k)e^{i\omega_B(k)t-ikx}\Big)\qquad(11)$$
$$\Psi(t,x)=\sqrt{g}\int_{H^+}\frac{dk}{\sqrt{4\pi\omega_F(k)}}\Big(b(k)u(k)e^{-i\omega_F(k)t+ikx}+c^*(k)v(k)e^{i\omega_F(k)t-ikx}\Big)\qquad(12)$$
$$\Psi^\dagger(t,x)=\sqrt{g}\int_{H^+}\frac{dk}{\sqrt{4\pi\omega_F(k)}}\Big(b^*(k)u^\dagger(k)e^{i\omega_F(k)t-ikx}+c(k)v^\dagger(k)e^{-i\omega_F(k)t+ikx}\Big)\qquad(13)$$
where the integration is performed over the upper branches $H_B^+$ and $H_F^+$ of the hyperbolas $\omega_B^2=k^2+4\lambda^2$, $\omega_B=+\sqrt{k^2+4\lambda^2}$, and $\omega_F^2=k^2+g^2$, $\omega_F=+\sqrt{g^2+k^2}$.

^1 We denote $\eta(t,x)$ back as $\phi(t,x)$ to follow the conventional notation.

In order for the spinor expansion (12) to be the general solution of (10), $u(k)$ and $v(k)$ must be respectively the eigenspinors of the $2\times2$ matrices below, with eigenvalue $\omega_F$^2:
$$\begin{pmatrix}0&g-ik\\ g+ik&0\end{pmatrix}\cdot\begin{pmatrix}u_1(k)\\ u_2(k)\end{pmatrix}=+\sqrt{k^2+g^2}\begin{pmatrix}u_1(k)\\ u_2(k)\end{pmatrix}\,,\qquad \begin{pmatrix}0&-g-ik\\ -g+ik&0\end{pmatrix}\cdot\begin{pmatrix}v_1(k)\\ v_2(k)\end{pmatrix}=+\sqrt{k^2+g^2}\begin{pmatrix}v_1(k)\\ v_2(k)\end{pmatrix}$$
$$u(k)=\Big(\frac{\sqrt{k^2+g^2}}{2g}\Big)^{1/2}\cdot\begin{pmatrix}1\\ \frac{g+ik}{\sqrt{k^2+g^2}}\end{pmatrix}\,,\qquad v(k)=\Big(\frac{\sqrt{k^2+g^2}}{2g}\Big)^{1/2}\cdot\begin{pmatrix}\frac{-g-ik}{\sqrt{k^2+g^2}}\\ 1\end{pmatrix}$$
which satisfy the standard orthonormality conditions
$$u^\dagger(k)u(k)=\frac{\omega_F}{g}=v^\dagger(k)v(k)\,,\qquad \bar u(k)u(k)=1=-\bar v(k)v(k)$$
together with $u^\dagger(k)v(-k)=0$.

^2 Note however that the spectral equation for $u$ is the same as the spectral equation for $v$ if $(\omega_F,k)$ is replaced by $(-\omega_F,-k)$. Notice also that it is possible to come back to $(\omega_F;k)$ provided that $g$ is transmuted into $-g$, the essential property of antimatter.

In the canonical quantization procedure the Fourier coefficients of the scalar field are promoted to bosonic creation and annihilation operators satisfying the commutation rules:
$$[\hat a(k_1),\hat a^\dagger(k_2)]=\delta(k_1-k_2)\,,\qquad [\hat a(k_1),\hat a(k_2)]=0=[\hat a^\dagger(k_1),\hat a^\dagger(k_2)]$$
The ground state with no meson particles at all is annihilated by all the bosonic destruction operators:
$$\hat a(k)|0\rangle_B=0\,,\ \forall k$$
Meson multiparticle states have the form (14):
$$\prod_{j=1}^N[\hat a^\dagger(k_j)]^{n_j}|0\rangle_B=|n_1n_2\cdots n_N\rangle\,,\quad n_j\in\mathbb{N}\,,\quad \sum_{j=1}^Nn_j=N\qquad(14)$$
and form the basis of the Bosonic Fock space, obtained via symmetric tensor product. Simili modo, the canonical quantization of the Dirac field proceeds via the anticommutation relations (15)-(16):
$$\{\hat b^\dagger(k_1),\hat b(k_2)\}=\delta(k_1-k_2)\,,\qquad \{\hat c^\dagger(k_1),\hat c(k_2)\}=\delta(k_1-k_2)\qquad(15)$$
$$\{\hat b(k_1),\hat b(k_2)\}=0=\{\hat c(k_1),\hat c(k_2)\}\qquad(16)$$
Likewise, the fermionic ground state (17), with neither electrons nor positrons, is annihilated by all the fermionic destruction operators:
$$\hat b(k)|0\rangle_F=0=\hat c(k)|0\rangle_F\,,\ \forall k\qquad(17)$$
Electron/positron^3 multiparticle states (18)-(19) form the basis of the Fermionic Fock space, built from the one-particle states via antisymmetric tensor product:
$$\prod_{j=1}^N[\hat b^\dagger(k_j)]^{n_j^-}|0\rangle_F=|n_1^-n_2^-\cdots n_N^-\rangle\,,\quad n_j^-=0\ {\rm or}\ 1\,,\quad \sum_{j=1}^Nn_j^-=N\qquad(18)$$
$$\prod_{j=1}^N[\hat c^\dagger(k_j)]^{n_j^+}|0\rangle_F=|n_1^+n_2^+\cdots n_N^+\rangle\,,\quad n_j^+=0\ {\rm or}\ 1\,,\quad \sum_{j=1}^Nn_j^+=N\qquad(19)$$
Quantization by anticommutators forces the antisymmetry of the multiparticle states, and thus one state can only be either unoccupied, $n_j^\pm=0$, or occupied by only one Fermion, $n_j^\pm=1$. Note that $n_j^+=1$ and $n_j^-=1$ are simultaneously possible, describing one state with one electron and one positron, both with momentum $k_j$.
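As a consistency check of this plane-wave sector, the eigenvalue equations and orthonormality relations above can be verified numerically. A minimal numpy sketch; the values of g and k are arbitrary test choices, not model inputs:

```python
import numpy as np

# Check of the eigenvalue and orthonormality relations for u(k), v(k).
g, k = 1.3, 0.7
wF = np.sqrt(k**2 + g**2)
s1 = np.array([[0, 1], [1, 0]])                  # gamma^0 = sigma^1

h_u = np.array([[0, g - 1j*k], [g + 1j*k, 0]])   # matrix acting on u(k)
h_v = np.array([[0, -g - 1j*k], [-g + 1j*k, 0]]) # matrix acting on v(k)

norm = np.sqrt(wF/(2*g))
u    = norm * np.array([1, (g + 1j*k)/wF])
v    = norm * np.array([(-g - 1j*k)/wF, 1])
v_mk = norm * np.array([(-g + 1j*k)/wF, 1])      # v(-k)

assert np.allclose(h_u @ u, wF*u) and np.allclose(h_v @ v, wF*v)
assert np.isclose(np.vdot(u, u).real, wF/g)      # u†u = wF/g
assert np.isclose(np.vdot(v, v).real, wF/g)      # v†v = wF/g
assert np.isclose((u.conj() @ s1 @ u).real, 1)   # ubar u = 1
assert np.isclose((v.conj() @ s1 @ v).real, -1)  # vbar v = -1
assert np.isclose(np.vdot(u, v_mk), 0)           # u†(k) v(-k) = 0
print("plane-wave spinor relations verified")
```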
3 Bosonic and Fermionic Kink fluctuations

Besides the homogeneous static solutions, this system also admits static, space-dependent solutions that become, via Lorentz transformations, travelling waves with Kink shape:
$$-\frac{d^2\phi}{dx^2}+2\lambda^2\phi(x)(\phi^2-1)=0\ \Leftarrow\ \phi_K^\pm\Big(\frac{x-vt}{\sqrt{1-v^2}}-a\Big)=\pm\tanh\Big[\lambda\Big(\frac{x-vt}{\sqrt{1-v^2}}-a\Big)\Big]$$
$$-i\alpha\frac{d\Psi_K}{dx}+\beta\big(g\phi_K+im\alpha\big)\Psi_K=0\ \Leftarrow\ \Psi_K=0$$
Defining non-dimensional space-time coordinates $\tau=\lambda t$, $y=\lambda x$ and considering small fluctuations
$$\phi(\tau,y)\simeq\phi_K(y)+\eta(\tau,y)\,,\qquad \Psi(\tau,y)=0+\psi(\tau,y)$$
on the Kink classical background, the expansion above is still a solution of the field equations if the linear system of coupled PDEs holds:
$$\Big(\frac{\partial^2}{\partial\tau^2}-\frac{\partial^2}{\partial y^2}+4-\frac{6}{\cosh^2y}\Big)\phi(\tau,y)=O(\phi^2)$$
$$\Big(i\frac{\partial}{\partial\tau}-i\sigma^2\frac{\partial}{\partial y}+\nu\sigma^1\phi_K(y)\Big)\Psi(\tau,y)=O(\phi\Psi)$$
Note that: (1) we choose $\Psi_K=0$ as the Fermionic ground state, thus neglecting the Fermionic backreaction on the Kink at the classical level; (2) we introduce the important non-dimensional parameter $\nu=\frac{g}{\lambda}$, which measures the strength of the Yukawa coupling versus the scalar self-interaction coupling; (3) again we abuse notation in the linearized equations by writing $\phi(\tau,y)$ instead of $\eta(\tau,y)$.

^3 We shall refer to the Fermi quanta in the Jackiw-Rebbi model as electrons/positrons by analogy with QED. In the JR system there is no electric charge. The Noether invariant associated to the U(1) symmetry is the Fermi number.

Because there are no $\tau$-dependent terms in either operator, it is natural to search for solutions via $\tau$-Fourier transform integrals:
$$\phi(\tau,y)=\int_{-\infty}^{\infty}d\Omega_B\,e^{i\Omega_B\tau}f_{\Omega_B}(y)\,,\quad \Omega_B=\frac{\omega_B}{\lambda}\,,\qquad \Psi(\tau,y)=\int_{-\infty}^{\infty}d\Omega_F\,e^{i\Omega_F\tau}\psi_{\Omega_F}(y)\,,\quad \Omega_F=\frac{\omega_F}{\lambda}$$
such that the general solution of the linearized equations requires the solution of two quantum mechanical spectral problems: one for a Pöschl-Teller/Schrödinger operator, the other for a Dirac operator in a Kink potential background.

3.1 Higgs bosons propagating over Kink topological defects

Starting with the scalar/Bose case, the quantum mechanical spectral problem (20) governing the scalar Kink fluctuations reads:
$$h_{PT}f_{\Omega_B}(y)=\Big(-\frac{d^2}{dy^2}+4-\frac{6}{\cosh^2y}\Big)f_{\Omega_B}(y)=\Omega_B^2f_{\Omega_B}(y)\qquad(20)$$
Fortunately, the eigenvalues and eigenfunctions of this one-particle Hamiltonian are well known. The discrete spectrum possesses two bound state eigenfunctions, whose corresponding eigenvalues are respectively $\Omega_B^2=0$ and $\Omega_B^2=3$, namely:

1. Zero mode
$$\Omega_0=0\,,\qquad f_0(y)=\frac{1}{\cosh^2y}$$
This eigenfunction is due to the spontaneous breaking of the translational symmetry by the Kink.

2. Bound state: the shape fluctuation mode
$$\Omega^2_{\sqrt3}=3\,,\qquad f_{\sqrt3}(y)=\frac{\sinh y}{\cosh^2y}$$
The next eigenfunction in the discrete spectrum responds to vibrations of the Kink, rather than translations, with frequency $\sqrt3$; it is referred to as the shape mode because it is accompanied by variations in the Kink shape^4. Above these two bound fluctuation modes arise the eigenstates with energies above the threshold of the continuous spectrum, $\Omega_B^2(0)=4$.

3. Scattering states: the continuous spectrum
$$\Omega_B^2(q)=q^2+4\,,\qquad f(y;q)=e^{iqy}\big(3\tanh^2y-3iq\tanh y-(1+q^2)\big)=e^{iqy}P_2(\tanh y;q)$$
It is remarkable that the scattering involved is transparent, i.e., the reflection amplitude $r(q)$ is zero and the modulus of the transmission amplitude is one:
$$t(q)=\frac{1-iq}{1+iq}\cdot\frac{2-iq}{2+iq}\,.$$

^4 This fluctuation mode is closely approximated by a Derrick mode obeying a Lorentz dilatation, see Reference [4].

From it the total phase shift and the spectral density are easily computed.
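The quoted spectrum of the Pöschl-Teller operator (20) is easy to reproduce numerically. A minimal sketch via finite-difference diagonalization on a large box; grid size and box length are arbitrary numerical choices:

```python
import numpy as np

# Finite-difference diagonalization of h_PT = -d^2/dy^2 + 4 - 6 sech^2(y)
N, L = 2000, 40.0
y = np.linspace(-L/2, L/2, N)
h = y[1] - y[0]

H = (np.diag(2/h**2 + 4 - 6/np.cosh(y)**2)
     - np.diag(np.ones(N-1), 1)/h**2 - np.diag(np.ones(N-1), -1)/h**2)
print(np.linalg.eigvalsh(H)[:4])
# -> ~[0, 3, 4.006, ...]: zero mode, shape mode, then the continuum
#    threshold at Omega_B^2 = 4 (box-quantized scattering states)

# Reflectionless scattering: |t(q)| = 1 for all real q
q = np.linspace(-5, 5, 11)
t = (1 - 1j*q)/(1 + 1j*q) * (2 - 1j*q)/(2 + 1j*q)
assert np.allclose(np.abs(t), 1.0)
```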
The general solution (21) is obtained as a linear superposition in terms of the eigenfunctions of the one-particle operator:
$$\phi(\tau,y)=\lim_{\varepsilon\to0}\big(A_0e^{-i\varepsilon\tau}+A_0^*e^{i\varepsilon\tau}\big)f_0(y)+\big(A_{\sqrt3}e^{-i\sqrt3\tau}+A_{\sqrt3}^*e^{i\sqrt3\tau}\big)\cdot f_{\sqrt3}(y)+\int\frac{dq}{\sqrt{2\sqrt{q^2+4}}}\Big(A(q)e^{-i\sqrt{q^2+4}\,\tau}f(y;q)+A^*(q)e^{i\sqrt{q^2+4}\,\tau}f^*(y;q)\Big)\qquad(21)$$
Canonical quantization proceeds as usual, replacing the complex coefficients of the spectral expansion by quantum operators satisfying the commutation relations:
$$[\hat A_0,\hat A_0^\dagger]=1\,,\qquad [\hat A_{\sqrt3},\hat A_{\sqrt3}^\dagger]=1\,,\qquad [\hat A(q_1),\hat A^\dagger(q_2)]=\delta(q_1-q_2)$$
The differences with respect to the Bosonic Fock space in the vacuum sector are three: (1) there is one state where a Boson is bound to the Kink center, travelling with it at no cost of energy; (2) the shape mode is one state in the Fock space where one Boson is trapped by the Kink, giving rise to one Kink excited state characterized by its vibration frequency; (3) there are many states where the Higgs quanta are scattered off the Kink, but the outgoing particle waves escape from the Kink center as plane waves times Jacobi polynomials of order 2.

3.2 Electrons/Positrons propagating over Kink topological defects

Spinorial Kink fluctuations are determined from the spectral problem of the one-particle Kink-Dirac Hamiltonian (22):
$$h_{DK}\psi(y;\Omega_F)=\Omega_F\psi(y;\Omega_F)\,,\qquad \psi(y;\Omega_F)=\frac{1}{\sqrt g}\begin{pmatrix}\psi_1(y;\Omega_F)\\ \psi_2(y;\Omega_F)\end{pmatrix}\qquad(22)$$
$$h_{DK}=\begin{pmatrix}0&-\frac{d}{dy}+\nu\tanh y\\ \frac{d}{dy}+\nu\tanh y&0\end{pmatrix}\,,\qquad \nu=\frac{g}{\lambda}\,,\qquad [\psi_1(y;\Omega_F)]=[\psi_2(y;\Omega_F)]=1\,.$$
Instead of solving the spectral problem (22) directly, we notice that the square of the Dirac-Kink operator is a diagonal matrix of contiguous Pöschl-Teller-Schrödinger operators. Moreover, defining the first-order differential operator $d_\nu=\frac{d}{dy}+\nu\tanh y$, the Darboux factorization method may be successfully applied to find the spectrum:
$$h_{DK}^2=\begin{pmatrix}-\frac{d^2}{dy^2}+\nu^2-\frac{\nu(\nu+1)}{\cosh^2y}&0\\ 0&-\frac{d^2}{dy^2}+\nu^2-\frac{\nu(\nu-1)}{\cosh^2y}\end{pmatrix}=\begin{pmatrix}d_\nu^\dagger d_\nu&0\\ 0&d_\nu d_\nu^\dagger\end{pmatrix}=\begin{pmatrix}d_\nu^\dagger d_\nu&0\\ 0&d_{\nu-1}^\dagger d_{\nu-1}+2\nu-1\end{pmatrix}$$

1. Fermionic zero modes

One immediately recognizes the Fermionic zero mode, and the non-normalizable would-be anti-Fermionic zero mode, as living respectively in the kernels of $d_\nu$ and $d_\nu^\dagger$. One finds:
$$h_{DK}\begin{pmatrix}\psi_1^{(0)}(y)\\ 0\end{pmatrix}=0\ \Rightarrow\ \psi_1^{(0)}(y)=\frac{1}{\cosh^\nu y}\,,\qquad h_{DK}\begin{pmatrix}0\\ \psi_2^{(0)}(y)\end{pmatrix}=0\ \Rightarrow\ \psi_2^{(0)}(y)=\cosh^\nu y$$
Needless to say, changing from Kink to anti-Kink, the normalizable zero mode is the one corresponding to antifermions.

2. Fermionic Kink shape modes

Focusing on the case where $g$ is a multiple of $\lambda$ and $\nu=N\in\mathbb{N}^*$ is a positive natural number, there are $N-1$ proper bound states, which correspond to vibrating Kink spinorial shape modes. The eigenvalues are well known and show that these states carry imaginary momentum on the positive imaginary half-axis of the complex $q$-plane, $q=i\kappa_l$^5.

Eigenvalues:
$$\Omega_F^{(l)}(\kappa_l;N)=\sqrt{(2N-l)l}=\sqrt{N^2-\kappa_l^2}\,,\qquad l=1,2,\cdots,N-1\,,\qquad \kappa_l=N-l$$
In terms of Gauss hypergeometric series truncated to polynomials, the eigenspinors are also explicitly known.

Eigenspinors:
$$\psi^{(l)}(y)=\begin{pmatrix}\psi_1^{(l)}(y)\\ \psi_2^{(l)}(y)\end{pmatrix}$$
$$\psi_1^{(l)}(y)=\frac{1}{\cosh^{N-l}y}\cdot{}_2F_1\big(2N+1-l,-l,N-l+1;\tfrac12(1+\tanh y)\big)$$
$$\psi_2^{(l)}(y)=\frac{1}{\cosh^{N-l}y}\cdot{}_2F_1\big(2N-l,-l+1,N-l+1;\tfrac12(1+\tanh y)\big)$$
This identification has been possible because the bound state labelled by $l$ in the upper diagonal component and the bound state labelled by $j=l-1$ in the lower diagonal component of the square of the Dirac operator share identical eigenvalues.

^5 In this case there is one half-bound state just at the threshold of the continuous spectrum, with $l=N$. These "half-states" do not exist if $\nu\notin\mathbb{N}^*$.
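The factorized spectrum can be checked numerically by diagonalizing the upper block of $h_{DK}^2$, which is an ordinary Schrödinger operator. A minimal numpy sketch for $\nu=N=3$, where the bound eigenvalues should be $\Omega_F^2=(2N-l)l=0,5,8$; grid size and box length are arbitrary numerical choices:

```python
import numpy as np

# Bound spectrum of h_DK via its square: the upper block of h_DK^2 is
# -d^2/dy^2 + N^2 - N(N+1) sech^2(y)
N = 3
M, L = 3000, 40.0
y = np.linspace(-L/2, L/2, M)
h = y[1] - y[0]

H2 = (np.diag(2/h**2 + N**2 - N*(N+1)/np.cosh(y)**2)
      - np.diag(np.ones(M-1), 1)/h**2 - np.diag(np.ones(M-1), -1)/h**2)
print(np.linalg.eigvalsh(H2)[:4])   # -> ~[0, 5, 8, 9+...]:
# zero mode, the two shape modes, then the continuum threshold at N^2
```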
3. Fermions scattered off Kinks

It remains to describe the spinorial fluctuations scattered off Kinks, i.e., those not bound to the Kink center. Of course, these fluctuations belong to the continuous spectrum of $h_{DK}$ (23):
$$h_{DK}\psi(y;q)=\Omega_F(q)\psi(y;q)\,,\qquad \psi(y;q)=\begin{pmatrix}\psi_1(y;q)\\ \psi_2(y;q)\end{pmatrix}\qquad(23)$$
Eigenvalues:
$$\Omega_F^2(q)=q^2+N^2\ \Rightarrow\ \Omega_F(q)=+\sqrt{q^2+N^2}$$
whereas the eigenspinors are proper Gauss hypergeometric series.

Eigenspinors:
$$\psi_1(y;q)=A(q)({\rm sech}\,y)^{-iq}\cdot{}_2F_1\Big[1-iq+N,-iq-N,1-iq;\frac{e^y}{e^y+e^{-y}}\Big]$$
$$\psi_2(y;q)=A(q)({\rm sech}\,y)^{-iq}\cdot{}_2F_1\Big[-iq+N,-iq-N+1,1-iq;\frac{e^y}{e^y+e^{-y}}\Big]$$
Scattering scalar or spinor waves are characterized by their phase shifts and/or spectral densities. Considering the system defined on a finite interval of very large length $L$ with periodic boundary conditions, the spectral densities of the scattering through the Kink wells suffered respectively by the upper and lower components are:
$$\rho_F(q)=\rho_F^{(1)}(q)+\rho_F^{(2)}(q)\,,\qquad \rho_F^{(1)}(q)=\frac{gL}{2\pi}+\sum_{j=1}^{N-1}\frac{j}{j^2+q^2}+\frac{N}{N^2+q^2}\,,\qquad \rho_F^{(2)}(q)=\frac{gL}{2\pi}+\sum_{j=1}^{N-1}\frac{j}{j^2+q^2}$$
Note that, besides the precise characterization of the continuous spectrum in terms of the wave number $q$, the bound states resurface as poles in the spectral density.

3.3 Fermionic quanta

To finish this Section we first expand the classical spinor field in terms of the one-particle states:
$$\frac{1}{\sqrt g}\Psi^+(y)=B_0\begin{pmatrix}{\rm sech}^Ny\\ 0\end{pmatrix}+\sum_{l=1}^{N-1}\Big[B_l\begin{pmatrix}\psi_1^{(l)}(y)\\ \psi_2^{(l)}(y)\end{pmatrix}e^{-i\Omega_F^{(l)}\tau}+C_l^*\begin{pmatrix}\varphi_1^{*(l)}(y)\\ \varphi_2^{*(l)}(y)\end{pmatrix}e^{i\Omega_F^{(l)}\tau}\Big]+\int\frac{dq}{\sqrt{4\pi\Omega_F(q)}}\Big[B(q)\begin{pmatrix}\psi_1(y;q)\\ \psi_2(y;q)\end{pmatrix}e^{-i\Omega_F(q)\tau}+C^*(q)\begin{pmatrix}\varphi_1^*(y;q)\\ \varphi_2^*(y;q)\end{pmatrix}e^{i\Omega_F(q)\tau}\Big]$$
We stress that the eigenspinors $\psi$ of the Dirac-Kink operator, together with the eigenspinors $\varphi$ of its $g\to-g$ transform, have been taken as a complete system in the space of spinor fields. The next step is the promotion of the coefficients to Fermi operators, demanding anticommutation rules between them to establish the canonical quantization procedure:
$$\{\hat B_0,\hat B_0^\dagger\}=1\,,\qquad \{\hat B_{l_1},\hat B_{l_2}^\dagger\}=\delta_{l_1l_2}\,,\qquad \{\hat C_{l_1},\hat C_{l_2}^\dagger\}=\delta_{l_1l_2}$$
$$\{\hat B(q_1),\hat B^\dagger(q_2)\}=\delta(q_1-q_2)=\{\hat C(q_1),\hat C^\dagger(q_2)\}\,,\qquad \{\hat\Psi^+(y_1),\hat\Psi^{+\dagger}(y_2)\}=\delta(y_1-y_2)$$
The Fermi ground state, and in general the Fermionic Fock space describing electron/positron multiparticle states with Fermi statistics built in, follows. As a practical computation we show the normal-ordered Fermi number operator, with all the annihilation operators placed to the right of the creation operators, denoted by the $:\hat F:$ symbol:
$$:\hat F:=\int dx\,\frac12\big[\hat\Psi^{+\dagger}(x),\hat\Psi^+(x)\big]\,,\qquad \sigma^1\begin{pmatrix}\psi_1(y;q)\\ \psi_2(y;q)\end{pmatrix}=\begin{pmatrix}\varphi_1(y;q)\\ \varphi_2(y;q)\end{pmatrix}$$
$$:\hat F:=\frac12[\hat B_0^\dagger,\hat B_0]+\sum_{l=1}^{N-1}\big(\hat B_l^\dagger\hat B_l-\hat C_l^\dagger\hat C_l\big)+\int dq\,\rho_F(q)\big(\hat B^\dagger(q)\hat B(q)-\hat C^\dagger(q)\hat C(q)\big)=\hat N_0-\frac12+\sum_{l=1}^{N-1}\big(\hat N_l^--\hat N_l^+\big)+\int dq\,\rho_F(q)\big(\hat N^-(q)-\hat N^+(q)\big)$$
Due to the unpaired zero mode, the expectation value of this operator in any state of the Fermionic Fock space is fractional. Note that, because the $\sigma^1$ matrix maps the eigenspinors of $h_{DK}(g)$ into those of $h_{DK}(-g)$, not only are the states in the discrete spectra paired (except for the zero mode), but the spectral densities in the continuous spectra are also identical.
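The spectral pairing behind the fractional Fermi number can be illustrated on a lattice: the nonzero spectra of $d_\nu^\dagger d_\nu$ and $d_\nu d_\nu^\dagger$ coincide, while the zero eigenvalue appears only in the first block, so every $\Omega\neq0$ level of $h_{DK}$ comes in $\pm$ pairs and the zero mode stands alone. A minimal numpy sketch for $\nu=2$; discretization parameters are arbitrary choices:

```python
import numpy as np

nu = 2
M, L = 2000, 30.0
y = np.linspace(-L/2, L/2, M)
h = y[1] - y[0]

D = (np.diag(np.ones(M-1), 1) - np.diag(np.ones(M-1), -1)) / (2*h)
d = D + np.diag(nu*np.tanh(y))          # d_nu = d/dy + nu tanh(y)

print(np.linalg.eigvalsh(d.T @ d)[:3])  # ~[0, 3, 4+]: zero mode present
print(np.linalg.eigvalsh(d @ d.T)[:3])  # ~[3, 4+, ...]: no zero mode
# At half filling the unpaired mode assigns fermion number -1/2 or +1/2
# to the kink, according to whether it is left empty or occupied.
```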
4 Low energy dynamics of vibrating Kinks. Impact of Bosonic and Fermionic shape modes

The study of the dynamics of topological defects in non-linear, non-integrable field theories is an endeavour beyond the reach of analytical methods. Numerical analysis, helped by more and more powerful computers, has been successfully used to obtain information about this important subject, because many types of topological defects exist in Nature. During the last twenty years of the XX Century an alternative route was investigated by physicists and mathematicians. The idea is that at low energies the dynamics is essentially described by geodesic motion over the moduli space of these extended objects. More recently, internal/shape vibrational modes of fluctuation have been included in these effective finite-dimensional dynamical systems. The degrees of freedom correspond to the collective coordinates of the defect and its vibrational modes. The adiabatic principle dictates that both the Kink moduli space coordinates and the shape mode amplitudes of Kink fluctuations support all the time dependence and describe the adiabatic evolution of the topological defect. The miracle is that the behaviour found by numerical methods in the full field theory is confirmed in this simplified scenario.

In this Section our goal is to construct the effective adiabatic dynamics of the Kink collective coordinates encompassing both bosonic and fermionic shape fluctuation modes. We thus start with the kink solution but incorporate also the lowest bosonic and fermionic shape fluctuation modes.

4.1 Low energy dynamics of a single vibrating Kink

In the simplest $g=2\lambda$ case, $\nu=2$, there is only one shape mode of each type and we focus on the configuration
$$\phi(\tau,y)\simeq\phi_K(y)+A(\tau)\eta_{\sqrt3}(y)=\tanh[y-a(\tau)]+A(\tau)\frac{\sinh[y-a(\tau)]}{\cosh^2[y-a(\tau)]}$$
$$\frac{1}{\sqrt g}\Psi_{\sqrt3}(\tau,y,a,\Lambda)=\Lambda(\tau)\Phi(y,a(\tau))\simeq\frac{\Lambda(\tau)}{\cosh[y-a(\tau)]}\begin{pmatrix}\tanh[y-a(\tau)]\\ 1/\sqrt3\end{pmatrix}$$
where the Kink configuration is supplemented by the bosonic and fermionic shape modes, both of frequency $\Omega=\sqrt3$. The space parametrized by the collective coordinates is thus a four-dimensional supermanifold. There are two bosonic collective coordinates, the Kink center $a$ and the amplitude of the bosonic shape mode $A$, spanning the "body". The "soul" of the supermanifold is spanned in turn by one fermionic collective coordinate, the amplitude of the fermionic shape mode: $\Lambda$.

Now a very subtle point: in classical field theory bosonic fields present no problems, but classical Fermi fields are incompatible with the exclusion principle and thus, strictly speaking, do not exist. A loophole to deal with this problem is to focus on the anticommutation rules and look at their classical limit, which is only satisfied by Grassmann variables. It is then natural to consider classical Fermi fields as Grassmann fields, but then there is no possibility of having the Dirac sea as the ground state: one must cope with the existence of spinor waves propagating with negative energy. In this spirit, we shall consider that the shape mode amplitude, $\Lambda=\Lambda_1+i\Lambda_2$, is a complex Grassmann variable:
$$\Lambda^2=0=\Lambda^{*2}\,,\qquad \Lambda\Lambda^*+\Lambda^*\Lambda=0$$
$$\Lambda_1^2=0=\Lambda_2^2\,,\qquad \Lambda_1\Lambda_2+\Lambda_2\Lambda_1=0\,,\qquad \Lambda^*\Lambda=2i\Lambda_1\Lambda_2$$
Under the adiabatic hypothesis, where the temporal dependence of the system is encoded in the collective coordinates $a(\tau)$, $A(\tau)$, $\Lambda(\tau)$, the dynamics is reduced to a finite-dimensional Lagrangian system. The kinetic energy and the potential energy receive contributions from both the bosonic and the fermionic collective coordinates.
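The Grassmann algebra listed above can be realized concretely by nilpotent matrices (a Jordan-Wigner construction). The explicit $4\times4$ representation in this sketch is purely illustrative and plays no role in the model:

```python
import numpy as np

# Matrix realization of the complex Grassmann amplitude
# Lambda = Lambda_1 + i Lambda_2 and a check of its algebra
s3 = np.array([[1, 0], [0, -1]])
sm = np.array([[0, 0], [1, 0]])                 # nilpotent sigma^-
L1 = np.kron(sm, np.eye(2))                     # Lambda_1
L2 = np.kron(s3, sm)                            # Lambda_2
Lam, LamS = L1 + 1j*L2, L1 - 1j*L2

assert np.allclose(L1 @ L1, 0) and np.allclose(L2 @ L2, 0)
assert np.allclose(L1 @ L2 + L2 @ L1, 0)        # anticommuting generators
assert np.allclose(Lam @ Lam, 0)                # Lambda^2 = 0
assert np.allclose(Lam @ LamS + LamS @ Lam, 0)  # {Lambda, Lambda^*} = 0
assert np.allclose(LamS @ Lam, 2j*(L1 @ L2))    # Lambda^* Lambda = 2i L1 L2
print("Grassmann algebra relations verified")
```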
Remembering the vibrating Kink configuration
$$\phi_K(y,a,A)=\tanh(y-a)\Big(1+\frac{A}{\cosh(y-a)}\Big)$$
and the two components of the Fermionic bound state of energy $\sqrt3$, the Fermionic shape mode of fluctuations over the vibrating Kink,
$$\psi_{1\,\sqrt3}^{(2)}(y,a,\Lambda)=\frac{\Lambda}{\cosh(y-a)}\cdot{}_2F_1\big(4,-1,2;\tfrac12(1+\tanh(y-a))\big)=\Lambda f(y,a)$$
$$\psi_{2\,\sqrt3}^{(2)}(y,a,\Lambda)=\frac{\Lambda}{\cosh(y-a)}\cdot{}_2F_1\big(3,0,2;\tfrac12(1+\tanh(y-a))\big)=\Lambda g(y,a)$$
the contributions of the Bosonic and Fermionic shape modes to the effective kinetic and potential energies are^6:
$$T_{\rm eff}^B=\frac12\int_{-\infty}^{\infty}dy\,\Big(\frac{\partial\phi_K}{\partial a}\dot a+\frac{\partial\phi_K}{\partial A}\dot A\Big)^2=\frac12\Big[\Big(\frac43+\frac{\pi}{2}A+\frac{14}{15}A^2\Big)\dot a\dot a+\frac23\dot A\dot A\Big]$$
$$T_{\rm eff}^F=-\Lambda^*\dot\Lambda\int_{-\infty}^{\infty}dy\,\big(f[y,a]^2+g[y,a]^2\big)+i\Lambda^*\Lambda\int_{-\infty}^{\infty}dy\,\Big(f[y,a]\frac{\partial f}{\partial a}+g[y,a]\frac{\partial g}{\partial a}\Big)\dot a=-\frac83\Lambda^*\dot\Lambda$$

^6 Note that the expressions for the Fermionic kinetic and potential energies are hermitian, because of the anti-hermiticity of $\frac{\partial}{\partial\tau}$ and the anti-commutativity of the Grassmann fields $\Psi^\dagger$ and $\Psi$.

The effective potential energies due to the bosonic and fermionic collective variables are slightly more difficult to compute:
$$V_{\rm eff}^B=\frac12\int_{-\infty}^{\infty}dy\,\bigg(\Big(\frac{d\phi}{dy}\Big)^2+\big(1-\phi^2\big)^2\bigg)=\frac43+A^2+\frac{\pi}{8}A^3+\frac{2}{35}A^4$$
$$V_{\rm eff}^F=i\Lambda^*\Lambda\int_{-\infty}^{\infty}dy\,\Big[f(y,a)\Big(-\frac{d}{dy}+2\phi_K(y,a,A)\Big)g(y,a)+g(y,a)\Big(\frac{d}{dy}+2\phi_K(y,a,A)\Big)f(y,a)\Big]=-i\Lambda^*\Lambda\Big(4+\frac{\pi}{2}A\Big)=i\Lambda^*\Lambda\cdot W[a,A]$$
We notice now the main conceptual facts: (1) fluctuations in the Kink center of mass $a$ and the amplitudes of the vibrating shape modes, both scalar (bosonic) and spinorial (fermionic), are entangled; thus, if the Kink kinetic energy decreases, the frequency of Kink oscillations increases. (2) The amplitude of the Fermionic fluctuations is coupled to the amplitude of the Bosonic ones via a Yukawa interaction between one real and one complex degree of freedom, remarkably independent of the Kink position $a$.

To make these statements more precise we look at the equations of motion derived from the effective Lagrangian $L_{\rm eff}=T_{\rm eff}^B+T_{\rm eff}^F-V_{\rm eff}^B-V_{\rm eff}^F$,
$$L_{\rm eff}^F=-\frac43\Lambda^*\dot\Lambda-iW[a,A]\cdot\Lambda^*\Lambda$$
$$L_{\rm eff}^B=\frac12\Big[\Big(\frac43+\frac{\pi}{2}A+\frac{14}{15}A^2\Big)\dot a\dot a+\frac23\dot A\dot A\Big]-\Big(\frac43+A^2+\frac{\pi}{8}A^3+\frac{2}{35}A^4\Big)$$
which are (24)-(25)-(26):
$$\frac{d}{d\tau}\Big[\Big(\frac43+\frac{\pi}{2}A+\frac{14}{15}A^2\Big)\dot a\Big]=i\frac{\partial W}{\partial a}\Lambda^*\Lambda\qquad(24)$$
$$\frac23\ddot A=\Big(\frac{\pi}{2}+\frac{28}{15}A\Big)\dot a^2-\Big(2A+\frac{3\pi}{8}A^2+\frac{8}{35}A^3\Big)-i\frac{\partial W}{\partial A}\Lambda^*\Lambda\qquad(25)$$
$$i\dot\Lambda^*=W[a,A]\cdot\Lambda^*\qquad(26)$$
Consider first the situation where the Fermionic fluctuations are null: $\Lambda(\tau)=0$, $\forall\tau$. Then the first of the Euler-Lagrange equations gives rise to a constant of motion, because $a$ is a cyclic variable:
$$\Big(\frac43+\frac{\pi}{2}A+\frac{14}{15}A^2\Big)\dot a=C\ \equiv\ \dot a=\frac{C}{\frac43+\frac{\pi}{2}A+\frac{14}{15}A^2}\,.$$
Still in the absence of fermionic fluctuations, and focusing on the regime of small shape mode amplitude, the second equation of motion becomes linear:
$$\ddot A\simeq D(C)-\omega^2(C)A+O(A^2)\,,\qquad D(C)=\frac{9\pi}{32}C^2\,,\qquad \omega^2(C)=3-\Big(\frac{21}{20}-\frac{27\pi^2}{256}\Big)C^2\,.$$
Having chosen the $\Lambda=0=\Lambda^*$ solution of the equations of motion for the Grassmann degree of freedom, the main impact of the shape mode on the Kink dynamics is clearly shown. The frequency of the Kink oscillations gets modified as a function of $C\propto\dot a$, i.e., the faster the vibrating Kink moves, the lower the frequency of its oscillations becomes, and vice versa. This transfer from kinetic to potential energy in Kink dynamics is a semi-classical effect, because the shape mode appears at order $\hbar$ in the $\hbar$ expansion of the Jackiw-Rebbi action. The dependence of the shape mode amplitude on time is easily recognized if Fermionic fluctuations do not enter the game:
$$A(\tau)=\frac{D(C)}{\omega^2(C)}+\mathcal{A}\big(\cos(\omega(C)\tau)+D\big)\,,$$
with $\mathcal{A}$ and $D$ integration constants.
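The coefficients entering these effective kinetic and potential energies come from elementary integrals of hyperbolic functions and can be checked numerically. A minimal scipy sketch; the test amplitude and the finite integration range are arbitrary choices:

```python
import numpy as np
from scipy.integrate import quad

# Coefficients of T^B_eff and V^B_eff for the vibrating kink
# phi_K(y; a, A) = tanh(y-a)(1 + A sech(y-a)), evaluated at a = 0
A, eps = 0.37, 1e-6

def phi(y, a):
    return np.tanh(y - a) * (1 + A / np.cosh(y - a))

dphi_da = lambda y: (phi(y,  eps) - phi(y, -eps)) / (2*eps)
dphi_dA = lambda y: np.tanh(y) / np.cosh(y)          # the shape mode
dphi_dy = lambda y: (phi(y + eps, 0) - phi(y - eps, 0)) / (2*eps)

gaa = quad(lambda y: dphi_da(y)**2, -30, 30)[0]
gAA = quad(lambda y: dphi_dA(y)**2, -30, 30)[0]
V   = 0.5*quad(lambda y: dphi_dy(y)**2 + (1 - phi(y, 0)**2)**2, -30, 30)[0]

print(gaa, 4/3 + np.pi/2*A + 14/15*A**2)   # coefficient of adot^2
print(gAA, 2/3)                            # coefficient of Adot^2
print(V,   4/3 + A**2 + np.pi/8*A**3 + 2/35*A**4)
```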
We stress that there is a critical value of $C$: if $|C|>2\sqrt{\frac57}$, $\omega(C)$ becomes imaginary and the Kink stops oscillating; the evolution is purely kinetic. When $|C|<2\sqrt{\frac57}$ the translational and vibrational movements coexist. Any perturbation producing a variation of $C$ produces a variation of the shape mode frequency, and vice versa, in qualitative agreement with the numerical predictions in the full field-theoretical model.

The mutual influence between Bosonic and Fermionic fluctuations is understood if we consider the dynamics determined by the third equation. Equation (26) reads:
$$i\dot\Lambda^*(\tau)=-\Big(4+\frac{\pi}{2}A\Big)\Lambda^*\qquad(27)$$
The formal solution to (27) is easy to find, see (28):
$$\Lambda^*(\tau)=\delta^*\exp\Big[i\int_0^\tau d\tau'\,\Big(4+\frac{\pi}{2}A(\tau')\Big)\Big]\qquad(28)$$
where $\delta^*=\delta_1-i\delta_2$, $\delta_1^2=\delta_2^2=0$, is a Grassmann integration constant. Thus $i\Lambda^*(\tau)\Lambda(\tau)=i\delta^*\delta$, and therefore
$$\ddot A\simeq D(C)-2i\delta^*\delta-\Big(\omega^2(C)-i\frac{\pi}{2}\delta^*\delta\Big)A+O(A^2)\ \Rightarrow\ A(\tau)\simeq\frac{D(C)-2i\delta^*\delta}{\omega^2(C)-i\frac{\pi}{2}\delta^*\delta}+\mathcal{A}\Big(\cos\big((\omega(C)-i\tfrac{\pi}{2}\delta^*\delta)\tau\big)+D\Big)\,.$$
The spinorial Kink fluctuations interact with the scalar Kink fluctuations, modifying the midpoint of the scalar fluctuation amplitudes and the vibration frequencies.

In order to calibrate how the Kink dynamics depends on the oscillatory shape modes also for $\frac{g}{\lambda}=3$, let us consider the two vibrating modes. The spinor fields describing these oscillations are:
$$\Psi^{(1)}_{\sqrt5}(\tau,y,a)=\Lambda(\tau)\Phi^{(1)}(y,a)=\frac{\Lambda(\tau)}{\cosh^2(y-a(\tau))}\begin{pmatrix}{}_2F_1\big(6,-1,3;\tfrac12(1+\tanh(y-a(\tau)))\big)\\ {}_2F_1\big(5,0,3;\tfrac12(1+\tanh(y-a(\tau)))\big)\end{pmatrix}$$
$$\Psi^{(2)}_{\sqrt8}(\tau,y,a)=\Lambda(\tau)\Phi^{(2)}_{\sqrt8}(y,a)=\frac{\Lambda(\tau)}{\cosh(y-a(\tau))}\begin{pmatrix}{}_2F_1\big(5,-2,2;\tfrac12(1+\tanh(y-a(\tau)))\big)\\ {}_2F_1\big(4,-1,2;\tfrac12(1+\tanh(y-a(\tau)))\big)\end{pmatrix}$$
Only the Fermionic kinetic and potential energies are modified because, even though $g/\lambda=3$, we stick to the standard $\phi^4$ Kink:
$$T_{\rm eff}^{F(1)}=-\Lambda^*\dot\Lambda\int_{-\infty}^{\infty}dy\,\Phi^{T(1)}(y)\Phi^{(1)}(y)=-\frac85\Lambda^*\dot\Lambda\,,\qquad T_{\rm eff}^{F(2)}=-\Lambda^*\dot\Lambda\int_{-\infty}^{\infty}dy\,\Phi^{T(2)}(y)\Phi^{(2)}(y)=-\Lambda^*\dot\Lambda$$
$$V_{\rm eff}^{F(1)}=i\Lambda^*\Lambda\int_{-\infty}^{\infty}dy\,\Phi^{T(1)}(y)\Big[-i\sigma^2\frac{d}{dy}+3\sigma^1\tanh y\Big(1+\frac{A}{\cosh y}\Big)\Big]\Phi^{(1)}(y)=-i\Lambda^*\Lambda\Big(\frac83+\frac{3\pi}{8}A\Big)$$
$$V_{\rm eff}^{F(2)}=i\Lambda^*\Lambda\int_{-\infty}^{\infty}dy\,\Phi^{T(2)}(y)\Big[-i\sigma^2\frac{d}{dy}+3\sigma^1\tanh y\Big(1+\frac{A}{\cosh y}\Big)\Big]\Phi^{(2)}(y)=-i\Lambda^*\Lambda\Big(\frac83+\frac{21}{64}A\Big)$$
Therefore the effective models are Lagrangian dynamical systems whose configuration spaces are supermanifolds. The dynamical variables are both c-numbers, $a$ and $A$, and Grassmann magnitudes, $\Lambda^*$ and $\Lambda$. The fact that the effective super dynamical systems are not supersymmetric is due to the property that shape modes are not BPS. Although we started with no backreaction of the fermions on the Kink at the classical level, this effect has been generated at one-loop level by taking into account the effect of the Fermionic shape modes.
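Before moving on to Kink-AntiKink configurations, the reduced dynamics can be explored directly by integrating (24)-(25), as written above, with $\Lambda=0$. A minimal scipy sketch; the values of $C$ and of the initial amplitude are arbitrary test choices, and the measured oscillation frequency of $A(\tau)$ can be set against the $\omega(C)$ quoted above:

```python
import numpy as np
from scipy.integrate import solve_ivp

# State: (a, A, p_a, Adot), with p_a = (4/3 + pi A/2 + 14 A^2/15) adot
def rhs(tau, s):
    a, A, pa, Adot = s
    m = 4/3 + np.pi/2*A + 14/15*A**2
    adot = pa / m
    Addot = 1.5*((np.pi/2 + 28/15*A)*adot**2
                 - (2*A + 3*np.pi/8*A**2 + 8/35*A**3))
    return [adot, Adot, 0.0, Addot]      # p_a is conserved: a is cyclic

C = 0.4
sol = solve_ivp(rhs, (0, 60), [0.0, 0.2, C, 0.0],
                rtol=1e-10, atol=1e-10, dense_output=True)

# Oscillation frequency of A(tau) from the spacing of its maxima
tt = np.linspace(10, 60, 20000)
A = sol.sol(tt)[1]
peaks = np.where((A[1:-1] > A[:-2]) & (A[1:-1] > A[2:]))[0] + 1
print("measured omega :", 2*np.pi/np.diff(tt[peaks]).mean())
print("omega(C) above :", np.sqrt(3 - (21/20 - 27*np.pi**2/256)*C**2))
```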
4.2 Effective dynamics of vibrating Kink-AntiKink configurations

4.2.1 Effective kinetic energy

The Kink-AntiKink configurations depend on two parameters when the two basic topological defects are vibrating:
$$\phi_{KA}(y,a(\tau),A(\tau))=\tanh(a(\tau)+y)-\tanh(y-a(\tau))-1+A(\tau)\tanh(a(\tau))\big(\tanh(a(\tau)+y)\,{\rm sech}(a(\tau)+y)-\tanh(y-a(\tau))\,{\rm sech}(a(\tau)-y)\big)\qquad(29)$$
where $a$ labels the relative position of the Kink with respect to the AntiKink and $A$ the synchronized amplitudes of vibration of Kink and AntiKink. It is assumed that the Center of Mass is placed at the origin of the reference system. In formula (29) the adiabatic approximation is implemented: only the collective coordinates $a$ and $A$ depend on the time $\tau$. The evolution is so smooth that the spatial coordinate $y$ remains constant in time. Under this hypothesis a non-Euclidean metric arises in the $(a,A)$ plane:
$$g_{aa}(a,A)=\int_{-\infty}^{\infty}dy\,\frac{\partial\phi_{KA}}{\partial a}\cdot\frac{\partial\phi_{KA}}{\partial a}\,,\qquad g_{aA}(a,A)=\int_{-\infty}^{\infty}dy\,\frac{\partial\phi_{KA}}{\partial a}\cdot\frac{\partial\phi_{KA}}{\partial A}\,,\qquad g_{AA}(a,A)=\int_{-\infty}^{\infty}dy\,\frac{\partial\phi_{KA}}{\partial A}\cdot\frac{\partial\phi_{KA}}{\partial A}\,,\qquad \dot a=\frac{\partial a}{\partial\tau}\,,\quad \dot A=\frac{\partial A}{\partial\tau}$$
such that the kinetic energy of the Kink-AntiKink adiabatic evolution becomes:
$$T_{\rm eff}^{KA}=\frac12\big(g_{aa}(a,A)\dot a^2+2g_{aA}(a,A)\dot a\dot A+g_{AA}(a,A)\dot A^2\big)$$
Mathematica computations offer the following results:
$$g_{aa}(a,A)=\frac{480ae^{10a}\big[A^2\cosh(8a)+4(46A^2+7)\cosh(2a)+4(23A^2-4)\cosh(4a)+4(2A^2+1)\cosh(6a)+163A^2-16\big]}{15(e^{2a}-1)^7(e^{2a}+1)^3}$$
$$\quad-\frac{16e^{10a}\sinh(2a)\big[40(93A^2-13)\cosh(2a)+(668A^2+80)\cosh(4a)+40(3A^2+1)\cosh(6a)+(7A^2+10)\cosh(8a)+2219A^2+410\big]-15\pi(e^{2a}-1)^{10}A}{15(e^{2a}-1)^7(e^{2a}+1)^3}$$
$$g_{aA}(a,A)=-A\tanh(a)-5aA\,{\rm csch}^6(a)-7aA\,{\rm csch}^4(a)-3aA\,{\rm csch}^2(a)+(\pi-aA)\,{\rm sech}^2(a)+\coth(a)\Big(5A\,{\rm csch}^4(a)+\frac{11}{3}A\,{\rm csch}^2(a)+A\Big)$$
$$g_{AA}(a)=\frac{1}{24}{\rm csch}^5(a)\,{\rm sech}(a)\big(36a-3\sinh(2a)-12\sinh(4a)+\sinh(6a)+12a\cosh(4a)\big)$$
Starting with the $aa$ component of the metric tensor, we observe that the limits as $a$ tends to $\pm\infty$ are
$$\lim_{a\to\pm\infty}g_{aa}[a,A]=\frac83\pm\pi A+\frac{28}{15}A^2\,.$$
These limits correspond to infinite separation between the Kink and AntiKink centers, with the AntiKink to the right of the Kink (the $+$ sign) or vice versa (the $-$ sign). Clearly, this component of the metric tensor is twice the metric of one excited single Kink. Near the origin, the $aa$ component of the metric tensor behaves as follows:
$$g_{aa}[a,A]\simeq_{a\to0}\pi a^3A+a^2\Big(\frac{248A^2}{63}-\frac{64}{15}\Big)+\frac{16}{3}$$
The limits of the other components of the metric tensor at $\pm\infty$ are
$$\lim_{a\to\pm\infty}g_{aA}[a,A]=0\,,\qquad \lim_{a\to\pm\infty}g_{AA}[a]=\frac43$$
confirming that, when Kink and AntiKink are very far apart, they behave as two isolated single Kinks and their interactions are negligible. Close to the origin, when Kink and AntiKink are superposed, these components of the metric tensor become:
$$g_{aA}[a,A]\simeq_{a\to0}-\pi a\Big(\frac{152}{105}A+a\Big)+\frac{80}{63}Aa^3\,,\qquad g_{AA}[a]\simeq_{a\to0}\frac{56}{15}-\frac{152}{105}a^2$$

Figure 1: Snapshots of (left) $g_{aa}[a,-5]$, (center) $g_{aA}[a,5]$, (right) $g_{AA}[a]$.
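The asymptotic behaviour of the metric can be checked without the closed forms, by evaluating the defining integral of $g_{aa}$ at a large separation. A minimal scipy sketch; the values of $A$ and $a$ and the cutoffs are arbitrary test choices:

```python
import numpy as np
from scipy.integrate import quad

# g_aa(a, A) from its defining integral over the configuration (29),
# compared with the quoted limit 8/3 + pi A + 28 A^2/15 for a -> +inf
def phiKA(y, a, A):
    exc = np.tanh(a + y)/np.cosh(a + y) - np.tanh(y - a)/np.cosh(y - a)
    return np.tanh(a + y) - np.tanh(y - a) - 1 + A*np.tanh(a)*exc

A, a, eps = 0.3, 6.0, 1e-6
dphi = lambda y: (phiKA(y, a + eps, A) - phiKA(y, a - eps, A)) / (2*eps)
gaa = quad(lambda y: dphi(y)**2, -40, 40, limit=300)[0]
print(gaa, 8/3 + np.pi*A + 28/15*A**2)   # should agree closely
```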
4.2.2 Effective potential energy

The last step is the computation of the effective potential energy between the vibrating Kink and AntiKink topological defects in the adiabatic regime of collective coordinates, (30):
$$V_{\rm eff}^{KA}[a,A]=\frac12\int_{-\infty}^{\infty}dy\,\Big[\frac{\partial\phi_{KA}}{\partial y}\cdot\frac{\partial\phi_{KA}}{\partial y}+\big(1-\phi_{KA}^2(y,a,A)\big)^2\Big]\qquad(30)$$
The calculation needs a huge computational effort, and here is the result achieved in a Mathematica environment:
$$V_{\rm eff}^{KA}[a,A]=\frac{1}{210(e^{2a}-1)^{11}(e^{2a}+1)^3}\Big\{\frac{105}{2}\pi(e^{2a}-1)^{11}A\big[(-141e^{2a}+45e^{4a}+e^{6a}-1)A^2+96(e^{2a}-1)\big]$$
$$+16(e^{4a}-1)\big[3e^{12a}A^4\big(10696\cosh(2a)-15105\cosh(4a)+2716\cosh(6a)-986\cosh(8a)+28\cosh(10a)+\cosh(12a)-17510\big)$$
$$-840e^{12a}A^2\sinh^4(a)\big(-288\sinh(2a)+40\sinh(4a)-96\sinh(6a)+4\sinh(8a)+656\cosh(2a)-124\cosh(4a)+112\cosh(6a)-5\cosh(8a)-159\big)$$
$$+35\big(-8e^{4a}+e^{8a}-17\big)(e^{2a}-1)^8\big]+1680a\cdot8e^{11a}\big[3e^{3a}A^4\big(130\cosh(2a)-32\cosh(4a)+29\cosh(6a)-4(\cosh(8a)+7)+\cosh(10a)\big)$$
$$+12e^{3a}A^2\sinh^4(a)\big(8\sinh(2a)-80\sinh(4a)+8\sinh(6a)-8\sinh(8a)-80\cosh(2a)+124\cosh(4a)-16\cosh(6a)+9\cosh(8a)+123\big)$$
$$+11\big(42\sinh(a)-5(6\sinh(3a)-3\sinh(5a)+\sinh(7a))+\sinh(9a)\big)+14\cosh(a)-22\cosh(3a)+5\cosh(5a)+7\cosh(7a)-5\cosh(9a)+8\big]\Big\}$$
Close to the origin the effective Kink-AntiKink potential behaves as
$$V_{\rm eff}^{KA}[a,A]\simeq_{a\to0}\frac{a^3}{420}\big(-3255\pi A^3+11520A^2+1680\pi A-7168\big)+\frac{a^2}{15015}\big(-94912A^4+45045\pi A^3+322036A^2-180180\pi A+192192\big)+\frac{2a}{35}\big(105\pi A^3-608A^2+105\pi A\big)+\frac{5152A^4-3465\pi A^3+15444A^2}{1155}$$
Special values of the effective potential at distinguished points are:
$$\lim_{a\to0}V_{\rm eff}^{KA}[a,A]=\frac{736A^4}{165}-3\pi A^3+\frac{468A^2}{35}\,,\qquad \lim_{a\to\infty}V_{\rm eff}^{KA}[a,A]=\frac83+2A^2+\frac{\pi}{4}A^3+\frac{4}{35}A^4\,,\qquad \lim_{a\to-\infty}V_{\rm eff}^{KA}[a,A]=\infty$$
The qualitative properties of this mechanical potential are encoded in Figures 2 and 3.

Figure 2: Tomographic snapshots of the effective potential when the Kink and the AntiKink are close to each other, for the following amplitudes of the shape mode: $A=0$ (left), $A=1$ (center), $A=3$ (right).

If both Kink and AntiKink are non-excited, i.e., when $A=0$, the low energy dynamics is captured by a Lagrangian mechanical system with a single degree of freedom, the relative position $a$:
$$L=\frac12g[a,0]\dot a\dot a-V_{\rm eff}^{KA}[a,0]$$
Therefore the system is Liouville integrable and, because the energy
$$E=\frac12g[a,0]\dot a\dot a+V_{\rm eff}^{KA}[a,0]$$
is a constant of motion, it is possible to reduce the integration of the system to the quadrature (31):
$$t-t_0=\int da\,\sqrt{\frac{g[a,0]}{2\big(E-V_{\rm eff}^{KA}[a,0]\big)}}\qquad(31)$$

Figure 3: 3D graphics of the effective potential as a function of the $(a,A)$ collective coordinates. Intervals: $a\in(-2,2)$, $A\in(0,1)$ (left); $a\in(-2,2)$, $A\in(1,2)$ (right).

It is not possible to write the integral in terms of analytical functions, nor, even if that were not the case, to invert the outcome and obtain the explicit dependence of $a$ on time. Nevertheless, Figure 2 (left), together with the explicit knowledge of $V_{\rm eff}^{KA}[a,0]$, makes a qualitative analysis of the motion possible. $E=\frac83$ is the threshold for unbounded motion, and bounded motion occurs if $0<E<\frac83$. Thus, for energies greater than $\frac83$, starting with initial conditions $(a(t-t_0\ll0)\gg0,\ \dot a(t-t_0\ll0)<0)$, $a$ decreases as time runs forward until it becomes slightly negative, meaning that the Kink and AntiKink centers cross each other while exchanging their relative positions. For sufficiently high energy, the relative position $a$ becomes negative enough to reach, sooner or later, the infinite potential barrier. Then a bounce is produced and a second exchange between Kink and AntiKink takes place, moving the KAK pair apart again up to very long distances. For energies less than $\frac83$ but greater than $0$ the Kink-AntiKink motion is bounded. These oscillatory motions were christened "bions" by their discoverers in References [15]-[16]. The plots of the effective potentials for $A=1$ and $A=3$, Figure 2 (center) and (right), show a similar pattern, but less room is left for bions with increasing shape mode amplitudes.

A brief digression on the quantum description of the previously described classical dynamics is convenient. Because the momentum conjugate to $a$ is $p_a=g[a,0]\dot a$, the quantum momentum operator becomes $\hat p_a=-i\frac{d}{da}$, while the quantum Hamiltonian (32), Weyl ordered, reads:
$$\hat H=-\frac12\frac{1}{\sqrt{g[a,0]}}\frac{d}{da}\Big(\frac{1}{\sqrt{g[a,0]}}\frac{d}{da}\Big)+V_{\rm eff}^{KA}[a,0]\qquad(32)$$
and the unbounded motion orbits become scattering waves, while bound states arise only from the bounded orbits complying with some Bohr-Sommerfeld quantization condition.

If both Kink and AntiKink are excited and we let the amplitude $A$ vary, things are different. The effective dynamics is captured by a Lagrangian system with two degrees of freedom, $a$ and $A$. Denoting now $a=a_1$ and $A=a_2$, the effective Lagrangian reads
$$L=\frac12g_{a_ia_j}[a_1,a_2]\frac{\partial a_i}{\partial\tau}\cdot\frac{\partial a_j}{\partial\tau}-V_{\rm eff}^{KA}[a_1,a_2]\,,\qquad i,j=1,2\,,$$
and, accordingly, the equations of motion become:
$$\frac{\partial^2a_i}{\partial\tau^2}+\sum_{j,k}\Gamma^{a_i}_{a_ja_k}\frac{\partial a_j}{\partial\tau}\frac{\partial a_k}{\partial\tau}=-\sum_jg^{a_ia_j}\frac{\delta V}{\delta a_j}\,,\qquad \Gamma^{a_i}_{a_ja_k}=\frac12\sum_lg^{a_ia_l}\Big(\frac{\partial g_{a_la_j}}{\partial a_k}+\frac{\partial g_{a_la_k}}{\partial a_j}-\frac{\partial g_{a_ja_k}}{\partial a_l}\Big)$$
The energy is still a constant of motion, but there is no second invariant that would guarantee the Liouville integrability of the system.
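Returning to the integrable $A=0$ reduction, the period of a bion orbit follows from evaluating the quadrature (31) numerically, with $g[a,0]$ and $V_{\rm eff}^{KA}[a,0]$ computed directly from their defining $y$-integrals. A minimal scipy sketch; the energy $E=2$ is an arbitrary choice inside the bion window $0<E<\frac83$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def phiKA(y, a):                        # configuration (29) with A = 0
    return np.tanh(a + y) - np.tanh(y - a) - 1

def g(a, eps=1e-6):                     # g[a,0] = int (dphi/da)^2 dy
    d = lambda y: (phiKA(y, a + eps) - phiKA(y, a - eps)) / (2*eps)
    return quad(lambda y: d(y)**2, -40, 40, limit=300)[0]

def V(a, eps=1e-6):                     # V^KA_eff[a,0], eq. (30)
    dy = lambda y: (phiKA(y + eps, a) - phiKA(y - eps, a)) / (2*eps)
    return 0.5*quad(lambda y: dy(y)**2 + (1 - phiKA(y, a)**2)**2,
                    -40, 40, limit=300)[0]

E = 2.0
am = brentq(lambda a: V(a) - E, -1.5, 0.0)   # turning point (wall side)
ap = brentq(lambda a: V(a) - E,  0.0, 10.0)  # outer turning point
T = 2*quad(lambda a: np.sqrt(g(a)/(2*(E - V(a)))),
           am + 1e-5, ap - 1e-5, limit=300)[0]
print("turning points:", am, ap, " bion period:", T)
```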
Needless to say, obtaining analytical solutions of this two-degree-of-freedom system of ODEs is hopeless. Nevertheless, numerical analysis of this system, for initial conditions corresponding to Kink/AntiKink scattering, has been successfully performed in the seminal Reference [4]. These authors reached conclusions similar to those previously obtained in the numerical treatment of the full field theory: in Kink-AntiKink scattering, quasi-bound states where several bounces occur arise for some windows of initial velocities. Moreover, the pattern shows a very interesting fractal structure.

4.3 Fermionic fluctuations of Kink/AntiKink configurations

Consider the bound spinorial fluctuation over the Kink/AntiKink configuration (33) when the ratio $\frac{g}{\lambda}$ is 2:
$$\frac{1}{\sqrt g}\Psi^{\sqrt3}_{\sqrt3}(y,a,\Lambda)=\Lambda\tanh[a]\times\Big[{\rm sech}(y+a)\begin{pmatrix}{}_2F_1\big(4,-1,2;\tfrac12(1+\tanh(y+a))\big)\\ {}_2F_1\big(3,0,2;\tfrac12(1+\tanh(y+a))\big)\end{pmatrix}-{\rm sech}(y-a)\begin{pmatrix}{}_2F_1\big(4,-1,2;\tfrac12(1+\tanh(y-a))\big)\\ {}_2F_1\big(3,0,2;\tfrac12(1+\tanh(y-a))\big)\end{pmatrix}\Big]\qquad(33)$$

Figure 4: (left) Graphics of the Kink shape mode and the AntiKink shape mode; (right) graphics of the upper and lower components of the Fermionic shape mode.

The contribution of the Fermionic fluctuations to the kinetic energy is encoded in the factor $G(a)$, (34):
$$T_{\rm eff}^F=-\int_{-\infty}^{\infty}dy\,\Psi^{\sqrt3\,\dagger}_{\sqrt3}(y,a,\Lambda)\,\dot\Psi^{\sqrt3}_{\sqrt3}(y,a,\Lambda)=-\Lambda^*\dot\Lambda\,G(a)$$
$$G(a)=\frac{1}{\tanh^2a}\int_{-\infty}^{\infty}dy\,\Big[\Big(\frac{\tanh(y+a)}{\cosh(y+a)}-\frac{\tanh(y-a)}{\cosh(y-a)}\Big)^2+\big({\rm sech}(y+a)-{\rm sech}(y-a)\big)^2\Big]=\frac43\big(12a-3\sinh(2a)-3\sinh(4a)+\sinh(6a)\big)\coth^2(a)\,{\rm csch}^3(2a)\qquad(34)$$

Figure 5: (left) Contribution to the metric of the Fermionic fluctuations, $G(a)$, as a function of the distance between Kink and AntiKink; (right) effective potential $U(a)$ due to Fermionic fluctuations of the Kink/AntiKink configuration, also as a function of $a$.

Next we compute the effective potential contributed by the Fermionic shape mode fluctuating over the Kink/AntiKink configuration:
$$V_{\rm eff}^F(a,\Lambda^*\Lambda)=i\int_{-\infty}^{\infty}dy\,\Psi^{\sqrt3\,\dagger}_{\sqrt3}(y,a,\Lambda)\Big[-i\sigma^2\frac{d}{dy}+2\phi_{KA}(y,a,A)\Big]\Psi^{\sqrt3}_{\sqrt3}(y,a,\Lambda)=i\Lambda^*\Lambda\cdot U(a)$$
$$U(a)=\frac{1}{\tanh^2a}\int_{-\infty}^{\infty}dy\,\bigg\{\Big[\frac{{}_2F_1[4,-1,2;\frac12(1+\tanh(y+a))]}{\cosh(y+a)}-\frac{{}_2F_1[4,-1,2;\frac12(1+\tanh(y-a))]}{\cosh(y-a)}\Big]\Big(-\frac{d}{dy}+2\phi_{KA}(y,a,A)\Big)\Big[\frac{{}_2F_1[3,0,2;\frac12(1+\tanh(y+a))]}{\cosh(y+a)}-\frac{{}_2F_1[3,0,2;\frac12(1+\tanh(y-a))]}{\cosh(y-a)}\Big]$$
$$+\Big[\frac{{}_2F_1[3,0,2;\frac12(1+\tanh(y+a))]}{\cosh(y+a)}-\frac{{}_2F_1[3,0,2;\frac12(1+\tanh(y-a))]}{\cosh(y-a)}\Big]\Big(\frac{d}{dy}+2\phi_{KA}(y,a,A)\Big)\Big[\frac{{}_2F_1[4,-1,2;\frac12(1+\tanh(y+a))]}{\cosh(y+a)}-\frac{{}_2F_1[4,-1,2;\frac12(1+\tanh(y-a))]}{\cosh(y-a)}\Big]\bigg\}$$
where $\phi_{KA}(y,a,A)$ is the excited Kink/AntiKink configuration defined in formula (29). The result for the effective potential is (35):
$$U(a)=-\frac23\coth^2(a)\,{\rm csch}^3(2a)\big(36a-3\sinh(2a)-12\sinh(4a)+\sinh(6a)+12a\cosh(4a)\big)\qquad(35)$$
Since the Fermionic Kink/AntiKink fluctuations do not depend on the shape mode vibration amplitude of Kink and AntiKink, we expect $G(a)$ to be a function of $a$ only, and indeed this is the case. The effective potential $U(a)$ induced by the Fermionic fluctuations on the vibrating Kink-AntiKink configuration, however, does not depend on the amplitude of the Bosonic shape mode either. This unexpected effect is due to the fact, see Figure 4 (left), that Kink and AntiKink oscillate in counter-phase and there are destructive interferences.
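The closed form (34) can be set against the defining integral of $G(a)$: using the identities ${}_2F_1(4,-1,2;\frac12(1+\tanh u))=-\tanh u$ and ${}_2F_1(3,0,2;\cdot)=1$, the spinor components reduce to elementary functions. A minimal scipy sketch:

```python
import numpy as np
from scipy.integrate import quad

m = lambda u: np.tanh(u)/np.cosh(u)      # upper component profile
s = lambda u: 1.0/np.cosh(u)             # lower component profile

def G_integral(a):                       # defining integral in (34)
    f = lambda y: (m(y + a) - m(y - a))**2 + (s(y + a) - s(y - a))**2
    return quad(f, -40, 40, limit=300)[0] / np.tanh(a)**2

def G_closed(a):                         # closed form quoted in (34)
    num = 12*a - 3*np.sinh(2*a) - 3*np.sinh(4*a) + np.sinh(6*a)
    return (4/3)*num / (np.tanh(a)**2 * np.sinh(2*a)**3)

for a in (0.5, 1.0, 2.0, 4.0):
    print(a, G_integral(a), G_closed(a))   # the two columns should agree
```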
Thus, one should expect that interactions between the amplitudes $\Lambda$ and $A$, respectively of the Fermionic and Bosonic fluctuations of Kink/AntiKink configurations, would only arise if the vibration amplitude of the Kink differs from that of the AntiKink. To test this statement, the generalization to a Kink amplitude different from the AntiKink amplitude in the shape modes is implemented by replacing the contribution of the excitations in the Kink-AntiKink configuration by
$$A\,\frac{\tanh(y+a)}{\cosh(y+a)}-B\,\frac{\tanh(y-a)}{\cosh(y-a)}\,.$$
A quick Mathematica run confirms the conceptual argument given above:
$$U(a,A-B)=-\coth^2(a)\,\frac{\frac32\pi(e^{2a}-1)^6(A-B)+16e^{6a}\big(36a-3\sinh(2a)-12\sinh(4a)+\sinh(6a)+12a\cosh(4a)\big)}{3(e^{4a}-1)^3}$$
Understanding how the metric and effective potential induced by the Fermionic Kink fluctuations depend on $\frac{g}{\lambda}$ starts by looking at $N=3$. In this case there are two Fermionic shape modes. The lower frequency is $\sqrt5$ which, implemented on the excited Kink/AntiKink configuration, gives rise to the spinor wave function (36):
$$\frac{1}{\sqrt g}\Psi^{\sqrt5}_{\sqrt5}(y,\Lambda)=\Lambda\tanh a\begin{pmatrix}\frac{{}_2F_1[6,-1,3;\frac12(1+\tanh(y+a))]}{\cosh^2(y+a)}-\frac{{}_2F_1[6,-1,3;\frac12(1+\tanh(y-a))]}{\cosh^2(y-a)}\\ \frac{{}_2F_1[5,0,3;\frac12(1+\tanh(y+a))]}{\cosh^2(y+a)}-\frac{{}_2F_1[5,0,3;\frac12(1+\tanh(y-a))]}{\cosh^2(y-a)}\end{pmatrix}\qquad(36)$$
The effective kinetic energy induced by this Fermionic shape mode is
$$T_{\rm eff}^{F1}=-\int_{-\infty}^{\infty}dy\,\Psi^{\sqrt5\,\dagger}_{\sqrt5}(y,\Lambda)\,\dot\Psi^{\sqrt5}_{\sqrt5}(y,\Lambda)=-\Lambda^*\dot\Lambda\cdot G_3(a)$$
$$G_3(a)=\frac15\coth^2(a)\,{\rm csch}^5(2a)\big(-80\sinh(2a)-15\sinh(6a)+\sinh(10a)+240a\cosh(2a)\big)\qquad(37)$$
where the factor $G_3(a)$ is written in (37). Likewise, the effective potential is derived via similar procedures:
$$V_{\rm eff}^{F1}=i\int_{-\infty}^{\infty}dy\,\Psi^{\sqrt5\,\dagger}_{\sqrt5}(y,\Lambda)\Big[-i\sigma^2\frac{d}{dy}+3\phi_{KA}(y,a,A)\Big]\Psi^{\sqrt5}_{\sqrt5}(y,\Lambda)=i\Lambda^*\Lambda\cdot U_3(a)$$
$$U_3(a)=-\frac{2}{15}\coth^2(a)\,{\rm csch}^5(2a)\big(-350\sinh(2a)-125\sinh(6a)+\sinh(10a)+120a(11\cosh(2a)+\cosh(6a))\big)$$
In sum, an identical pattern is observed in the effective adiabatic dynamics induced by Fermionic Kink shape modes in Kink/AntiKink configurations for $N=2$ and $N=3$. We thus skip writing the results obtained for the Fermionic shape mode of frequency $\sqrt8$, which are qualitatively equivalent but even more cumbersome.

Figure 6: (left) Induced metric $G_3(a)$; (right) induced effective potential $U_3(a)$.

5 Outlook

The theoretical developments in this work have focused on a $(1+1)$-dimensional Field Theory model of bosons and fermions, specifically the Jackiw-Rebbi model introduced in Reference [5]. One interesting way to extend these ideas is to increase the number of fields, both bosonic and fermionic. An early proposal along this line can be found in a paper published circa the year 2000 by an MIT group, see [14]. In that work the authors deal with a Field Theory model with two Bose and two Fermi fields. The interaction between bosonic and fermionic fields is through two Yukawa couplings, and sophisticated phenomena arise. We intend, however, to extend the scalar/Bose two-field models treated by our group in Salamanca by incorporating two spinor/Fermi fields into these systems. The first model that we have in mind was discussed, among other papers, in Reference [17]. This Field Theoretical model exhibits a rich variety of Kinks and possesses interesting integrability properties in the related mechanical analogue system. We expect that, upon adding spinor/Fermi fields, phenomena similar to those found in the Jackiw-Rebbi model will persist, but subtle novelties will probably be unfolded. The second system where we envisage that the analysis developed in the JR model will be fruitful is the massive non-linear $\mathbb{S}^2$-sigma $(1+1)$-dimensional model, [18]. Again, in this "deformed" non-linear sigma model a manifold of Kinks, topological and non-topological, exists. Spinor/Fermi fields will be included as sections of a spinor bundle over the two-sphere rather than as spinor functions.
In any case, we expect that new phenomena will appear with respect to those described in the JR model. The interplay between scalar and spinor fields in this non-linear system promises the appearance of new subtleties. Another playground where the analysis of effective low energy dynamics seems promising is the moduli space of BPS vortices in the Abelian Higgs model; see e.g. [2] for a parallel study devoted to Kinks. The Abelian Higgs model in (2+1) space-time, at the critical ratio between the scalar self-interaction coupling λ and the electromagnetic coupling e², the transition point between Type I and Type II Ginzburg-Landau superconductivity, possesses manifolds of topological, non-interacting, stable BPS vortices characterized by an integer number of magnetic quanta. These topological defects also admit vibrational modes, see References [19]-[20]. The collective coordinate low energy analysis has been developed in [21]-[22]-[23] in the absence of Fermions. The addition of Fermions to the Abelian Higgs model is compelling, including Yukawa and electromagnetic couplings. The system will be much more complex, but the temptation is strong to identify the collective coordinates of these planar topological defects and see how Fermions affect the adiabatic dynamics of BPS vortices.

References

[1] A. Alonso-Izquierdo, D. Miguelez-Caballero, L.M. Nieto, J. Queiroga-Nunes, Wobbling kinks in a two-component scalar field theory: Interaction between shape modes.
[2] A. Alonso-Izquierdo, L.M. Nieto, J. Queiroga-Nunes, Asymmetric scattering between kinks and wobblers, Commun. Nonlinear Sci. Numer. Simul. 107 (2022) 106183.
[3] A. Alonso-Izquierdo, L.M. Nieto, J. Queiroga-Nunes, Scattering between wobbling kinks, Phys. Rev. D 103 (2021) 045003.
[4] N.S. Manton, K. Oles, T. Romanczukiewicz, A. Wereszczynski, Collective coordinates model of kink-antikink collisions in φ⁴ theory, Phys. Rev. Lett. 127 (2021) 071601.
[5] R. Jackiw and C. Rebbi, Solitons with fermion number 1/2, Phys. Rev. D 13 (1976) 3398.
[6] R. Jackiw and R. Schrieffer, Solitons with fermionic number 1/2 in condensed matter physics and relativistic field theory, Nucl. Phys. B 190 [FS3] (1981) 253-332.
[7] A. Niemi and G.W. Semenoff, Fermion number fractionalization in quantum field theory, Phys. Rep. 135 (1986) 99.
[8] A. Alonso-Izquierdo, R. Fresneda, J. Mateos Guilarte, D. Vassilevich, Soliton fermionic number from the heat kernel expansion, Eur. Phys. J. C 79 (2019) 525.
[9] Y. Chu and T. Vachaspati, Fermions on one or fewer kinks, Phys. Rev. D 77 (2008) 025006.
[10] D. Bazeia, A. Mohammadi, Fermionic bound states in distinct kinklike backgrounds, Eur. Phys. J. C 77 (2017) 203.
[11] J. Campos, A. Mohammadi, Kink-antikink collisions in the supersymmetric φ⁴ model, JHEP 08 (2022) 180.
[12] D. Bazeia, J. Campos, A. Mohammadi, Resonance mediated by fermions in kink-antikink collisions, JHEP 12 (2022) 085.
[13] J. Campos, A. Mohammadi, J. Queiruga, A. Wereszczynski, Fermionic spectral walls in kink collisions, JHEP 01 (2023) 071.
[14] E. Farhi, N. Graham, R.L. Jaffe, H. Weigel, A heavy fermion can create a soliton: a 1+1 dimensional example, Phys. Lett. B 475 (2000) 335.
[15] T. Sugiyama, Kink-antikink collisions in the two-dimensional φ⁴ model, Prog. Theor. Phys. 61 (1979) 1550-1563.
[16] D.K. Campbell, J.F. Schonfeld, C.A. Wingate, Resonance structure in kink-antikink interactions in φ⁴ theory, Physica D 9 (1983) 1-32.
[17] A. Alonso-Izquierdo, M.A. Gonzalez Leon, J.
Mateos Guilarte, The Kink variety in systems of two coupled scalar fields in two space-time dimensions, Phys. Rev. D 65 (2002) 085012.
[18] A. Alonso-Izquierdo, M.A. Gonzalez Leon, J. Mateos Guilarte, Kinks in a non-linear massive sigma model, Phys. Rev. Lett. 101 (2008) 131602.
[19] A. Alonso-Izquierdo, W. Garcia Fuertes, J. Mateos Guilarte, A note on BPS vortex bound states, Phys. Lett. B 753 (2016) 29-32.
[20] A. Alonso-Izquierdo, W. Garcia Fuertes, J. Mateos Guilarte, Dissecting zero modes and bound states on BPS vortices in Ginzburg-Landau superconductors, JHEP 05 (2016) 1-36.
[21] A. Alonso-Izquierdo, W. Garcia Fuertes, N.S. Manton, J. Mateos Guilarte, Spectral flow of vortex shape modes over the BPS 2-vortex moduli space, JHEP 01 (2024) 020.
[22] A. Alonso-Izquierdo, N.S. Manton, J. Mateos Guilarte, A. Wereszczynski, Collective coordinate models for 2-vortex shape mode dynamics, Phys. Rev. D 110 (2024) 085006.
[23] A. Alonso-Izquierdo, N.S. Manton, J. Mateos Guilarte, M. Rees, A. Wereszczynski, Dynamics of excited BPS 3-vortices, Phys. Rev. D 111 (2025) 105021.
16 Oct 2025 Low energy dynamics of vibrating Kinks. J. Mateos Guilarte(a) (a) Instituto Universitario de F ́ısica Fundamental y Matem ́aticas Universidad de Salamanca, SPAIN ABSTRACT Low energy dynamics of Kinks and Kink-AntiKink configurations in the Jackiw-Rebbi model is fully described. The strategy is based in the Collective Coordinates adiabatic approach. The necessary solution of Quantum Mechanical spectral problems, both for scalar and spinorial wave functions, is unveiled as an intermediate step. 1 Introduction Throughout the last 50 years a vast research activity in studying the dynamics of topological defects experienced strong impetus. Given the relevance of topological defects in Fundamental/Mathematical Physics, Condensed Matter Physics, Cosmology, Biophysics and other branches of Science this task revealed itself as necessary. Except in Integrable Field Theories like sine-Gordon, Korteweg-de Vries, Kadomtsevt-Petviashvili equations, etcetera, no analitycal methods are appliable. Recently, however, numerical methos has been applied with sucess to analyze scattering of Kinks in several distinguished non integrable mode that live in (1 + 1)-dimensions. We mention specifically the References [1]-[2]- [3] where numerical methods of integration have been applied to understand processes of scattering between Kinks and AntiKinks. Focusing in analyzing collisions between Kink shaped defects, either simply travelling and/or wobling while travelling, succes in understanding their interactions emerged. Of course ,there is a vast Literature on this subject that can be found in the References just quoted. In a parallel attack to the understanding of interaction between topological defects has been alternatively produced from the adiabatic, low energy, side. Time dependence is restricted to the so called collective coordinates and the investigation is led to deal with dynamical systems with a finite number of degrees of freedom. Specifically, when the objects of research are Kinks, the central topic in this paper, the solution of the simplified system in Reference [4] shows astonishing qualitative agreement with the numerical results. In one a priori completely different framework a big deal of research has been devoted to study how Fermions affect the dynamics of systems in Field Theory and Condensed Matter Physics, see [5]-[6]-[7]. In these papers a new phenomenon with far reaching consequences was discovered: the fractionization of the Fermi number in presence of topological defects, see also [8] where the connection with index theorems, specifically the eta invariant, was explored. These findings aroused interest in studying physical settings where Fermions live in presence of topological defects as Kinks, see e.g. [9]-[10]-[14]. In this work we shall concentrate in developing the collective coordinates adiabatic approximation when Fermions are present in the system. A previous work in this line is the Reference [13] but our focus will be the paradigmatic Jackiw-Rebbi model, a simple setting rich enough to obtain a great 1 amount of information. More precisely, we shall continue in this paper the analysis developed in References [11] and [12] to fully unveil the adiabatic dynamics of vibrating kinks and kink-antikink configurations. 2 The Jackiw-Rebbi model in R1,1 Minkowski space-time Let us consider a QFT of fermions and bosons restricted to move on a line. 
The dynamics is governed by the action, (1): SJR[φ, Ψ] = Z R1,1 d2x 1 2∂μφ∂μφ -λ2 2 (φ2 -1)2 + i ̄Ψγμ∂μΨ -g ̄ΨφΨ (1) ̄Ψ = Ψ†γ0 , [φ] = 1 , [Ψ] = L-1/2 , [λ] = L-1 = [g] encompassing a quartic self-interaction of the bosons plus a Yukawa coupling between fermions and bosons. In the natural system of units where the Planck constant and the speed of light in vacuum are set to one, ħ= c = 1, the dimensions of the fields and couplings are shown below the action above. The Jackiw-Rebbi Hamiltonian H = HFB + HB, (2), is in turn obtained via a Legendre transformation : HFB = Z dx Ψ†(t, x) -iα ∂ ∂x Ψ(t, x) + Z dx Ψ†(t, x) {gφ(t, x)β} Ψ(t, x) HB = 1 2 Z dx ( Π2(x) + ∂φ ∂x 2 + λ2 φ2(t, x) -1 2 ) (2) The Dirac matrices α = σ2 and β = σ1 are chosen like in [5]. Here, σ1 and σ2 are Pauli matrices and this choice corresponds to the Clifford algebra γ0 = σ1 , γ1 = iσ3 , γ5 = γ0γ1 = σ2 [γμ, γν] = 2gμν , gμν = diag(1, -1) , μ, ν = 0, 1 The Klein-Gordon and Dirac fields are maps from the Minkowski space respectively to the field of the reals and the fundamental irreducible representation of the Spin(1, 1; R) group: φ(t, x) = R1,1 -→R , Ψ(t, x) = ψ1(t, x) ψ2(t, x) : R1,1 -→irr PSpin(1, 1; R) , i.e., the transformations generated by [γ0, γ1], and characterized by the Lorentz boost parameter χ, acts on one spinor in the form SL[χ] = e χ 4 [γ0,γ1] = cosh χ 2 i sinh χ 2 -i sinh χ 2 cosh χ 2 , cosh χ = 1 √ 1 -v2 , ΨL(t, x) = SL[χ]Ψ(t, x) The Euler-Lagrange (classical) field equations read: □φ + 2λ2φ(t, x)(φ2(t, x) -1) + g ̄Ψ(t, x)Ψ(t, x) = 0 (3) iγμ ∂Ψ ∂xμ -gφ(t, x)Ψ(t, x) = 0 . (4) 2 This system of coupled non-linear PDE's is very difficult to solve. The situation is better -and pertinent to the posterior canonical quantization of the system- if some static solution of the system of the form (φS(x), ΨS = 0) is discovered: -d2φS dx2 + 2λ2φS(x)(φ2 S(x) -1) = 0 In that case, one may search for solutions close to these static solutions which linearize the (3-4) PDE system: φ(t, x) = φS(x) + η(t, x) ⇒□η + 2λ2(3φ2 S(x) -1)η(t, x) = O(η2) (5) iγμ ∂Ψ ∂xμ -gφS(x)Ψ(t, x) = O(ηΨ) (6) Since the terms in (5-6) with no derivatives of the fields are time inedependent it is convenient to solve the linear system via Fourier transform in time: η(t, x) = Z ∞ -∞ dωB 2π eiωBtη(x; ωB) , Ψ(t, x) = Z ∞ -∞ dωF 2π eiωF tΨ(x; ωF) The linear PDE system (5-6) becomes equivalent to the spectral problem (7-8): hSchη(x; ωB) = h -d2 dx2 + 2λ2(3φ2 S(x) -1) i η(x; ωB) = ω2 Bη(x; ωB) (7) hDΨ(x; ωF) = h -iα d dx + gβφS(x) i Ψ(x; ωF) = ωFΨ(x; ωF) (8) where hSch and hD are respectively the quantum mechanical Schr ̈odinger and Dirac operators in the background created by the φs classical solution. 2.1 Bose/Fermi quanta in homogeneous field backgrounds The standard canonical quantization procedure to build the space of stationary states of H and evaluate the quantum transitions within the Fock space states is based on finding the eigenwave functions of the Schr ̈odinger operator and the eigenspinors of the Dirac operator to be taken as the one-particle states. The simplest solutions of the classical field equations are the two homogeneous, independent of t and x, minima of the scalar potential energy whereas HBF is minimized by ΨV = 0: φS(t, x)± = φ± V = ±1 , ΨS(x) = ΨV = 0 Choice of one of these two configurations, e.g. 
(φS(t, x)+ = +1, ΨS(t, x) = 0), as the ground state, spontaneously breaks the φ →-φ symmetry and the linear Klein-Gordon and Dirac equations become: ∂2φ ∂t2 = ∂2φ ∂x2 -4λ2φ(t, x) (9) i ∂ ∂t 0 0 i ∂ ∂t · ψ1(t, x) ψ2(t, x) = 0 -∂ ∂x + g ∂ ∂x + g 0 · ψ1(t, x) ψ2(t, x) . (10) Because there are no time and space dependent terms in the PDE operators in the linear equations (9) and (10) it is convenient the search for the general solutions as Fourier transform integrals, see 3 (11-12)-13)1 φ(t, x) = Z H+ dk p 4πωB(k) a(k)e-i ωB(k) t+ikx + a∗(k)ei ωB(k)t-ikx (11) Ψ(t, x) = √g Z H+ dk p 4πωF(k) b(k)u(k)e-iωF (k)t+ikx + c∗(k)v(k)eiωF (k)t-ikx (12) Ψ†(t, x) = √g Z H+ dk p 4πωF(k) b∗(k)u†(k)eiωF (k)t-ikx + c(k)v†(k)e-iωF (k)t+ikx (13) where the integration is performed over the upper branches H+ B and H+ F of the hyperbolas: ω2 B = k2 + 4λ2, ωB = + √ k2 + 4λ2, and ω2 F = k2 + g2, ωF = + p g2 + k2. In order to be the spinor expansion (12) the general solution of (10), u(k) and v(k) must be respectively the eigenspinors of the 2 × 2-matrices with eigenvalues ωF 2: 0 g -ik g + ik 0 · u1(k) u2(k) = + p k2 + g2 u1(k) u2(k) 0 -g -ik -g + ik 0 · v1(k) v2(k) = + p k2 + g2 v1(k) v2(k) u(k) = + p k2 + g2 2g !1/2 · 1 g+ik √ k2+g2 ! , v(k) = p k2 + g2 2g !1/2 · -g-ik √ k2+g2 1 ! which satisfy the standard orthonormality conditions: u†(k)u(k) = ωF g = v†(k)v(k) , ̄u(k)u(k) = 1 = - ̄v(k)v(k) together with u†(k)v(-k) = 0. In the canonical quantization procedure the Fourier coefficients of the scalar field are promoted to creation and annihilation bosonic operators satisfying the commutationon rules: [ˆa(k1), ˆa†(k2)] = δ(k1 -k2) , [ˆa(k1), ˆa(k2)] = 0 = [ˆa†(k1), ˆa†(k2)] The ground state with no meson particles at all is annihilated by all the destruction bosonic operators ˆa(k)|0⟩B = 0 , ∀k Meson multiparticle states have the form (14: N Y j=1 [ˆa†(kj)]nj|0⟩B = |n1n2 · · · nN⟩, nj ∈N , N X j=1 nj = N (14) and form the basis of the Bosonic Fock space, obtained via symmetric tensor product. Simili modo, the canonical quantization of the Dirac field courses via anticommutation relations (15-16) {ˆb†(k1),ˆb(k2)} = δ(k1 -k2) , {ˆc†(k1), ˆc(k2)} = δ(k1 -k2) (15) 1We denote back η(t, x) as φ(t, x) to follow the conventional notation. 2Note however that the spectral equation for u is the same as the spectral equation for v if (ωF , k) is replaced by (-ωF , -k). Notice also that it is possible to come back to (ωF ; k) provided that g will be transmutted to -g, the essential property of antimatter. 4 {ˆb(k1),ˆb(k2)} = 0 = {ˆc(k1), ˆc(k2)} (16) Likewise the fermionic ground state (17) with neither electrons nor positrons is annihilated by all the fermionic destruction operators ˆb(k)|0⟩F = 0 = ˆc(k)|0⟩F , ∀k (17) Electron/positron3 multiparticle states (18-19) form the basis of the Fermionic Fock space built from the one-particle states via antisymmetric tensor product: N Y j=1 [ˆb†(kj)]nj |0⟩F = |n1 n2 · · · nN⟩, nj = 0 or 1 , N X j=1 nj = N (18) N Y j=1 [ˆc†(kj)]n+ j |0⟩F = |n+ 1 n+ 2 · · · n+ N⟩, n+ j = 0 or 1 , N X j=1 n+ j = N (19) Quantization by anticommutators forces the antisymmetry of the multiparticle states and thus one state can only be either unnoccupied, n± j = 0, or occupied only by one Fermion, n± j = 1. Note that n+ j = 1 and nj = 1 is simultaneously possible describing one state with one electron and one positron, both with momentum kj. 
3 Bosonic and Fermionic Kink fluctuations

Besides the homogeneous static solutions, this system also admits static space-dependent solutions which, via Lorentz transformations, become travelling waves with Kink shape:

-\frac{d^2\phi}{dx^2}+2\lambda^2\phi(x)(\phi^2-1)=0\ \Leftarrow\ \phi_K^{\pm}\Big(\frac{x-vt}{\sqrt{1-v^2}}-a\Big)=\pm\tanh\Big[\lambda\Big(\frac{x-vt}{\sqrt{1-v^2}}-a\Big)\Big]

-i\alpha\frac{d\Psi_K}{dx}+\beta\big(g\phi_K+im\alpha\big)\Psi_K=0\ \Leftarrow\ \Psi_K=0

Defining non-dimensional space-time coordinates τ = λt, y = λx, and considering small fluctuations

\phi(\tau,y)\simeq\phi_K(y)+\eta(\tau,y) , \qquad \Psi(\tau,y)=0+\psi(\tau,y)

on the Kink classical background, the expansion above is still a solution of the field equations if the linear system of coupled PDEs holds:

\Big(\frac{\partial^2}{\partial\tau^2}-\frac{\partial^2}{\partial y^2}+4-\frac{6}{\cosh^2 y}\Big)\phi(\tau,y)=O(\phi^2)

\Big(i\frac{\partial}{\partial\tau}-i\sigma^2\frac{\partial}{\partial y}+\nu\sigma^1\phi_K(y)\Big)\Psi(\tau,y)=O(\phi\Psi)

Note that: (1) we choose Ψ_K = 0 as the fermionic ground state, and thus we neglect the fermionic backreaction on the Kink at the classical level; (2) we introduce the important non-dimensional parameter ν = g/λ, which measures the strength of the Yukawa coupling versus the scalar self-interaction; (3) again we abuse notation in the linearized equations by writing φ(τ,y) instead of η(τ,y).

Because there are no τ-dependent terms in either operator, it is natural to search for solutions via τ-Fourier transform integrals:

\phi(\tau,y)=\int_{-\infty}^{\infty}d\Omega_B\,e^{i\Omega_B\tau}f_{\Omega_B}(y) , \quad \Omega_B=\frac{\omega_B}{\lambda} ; \qquad \Psi(\tau,y)=\int_{-\infty}^{\infty}d\Omega_F\,e^{i\Omega_F\tau}\psi_{\Omega_F}(y) , \quad \Omega_F=\frac{\omega_F}{\lambda}

such that the general solution of the linearized equations requires the solution of two quantum mechanical spectral problems, one for a Pöschl-Teller/Schrödinger operator, the other for a Dirac operator in a Kink potential background.

3.1 Higgs bosons propagating over Kink topological defects

Starting with the scalar/Bose case, the quantum mechanical spectral problem (20) governing the scalar Kink fluctuations reads:

h_{PT}f_{\Omega_B}(y)=\Big[-\frac{d^2}{dy^2}+4-\frac{6}{\cosh^2 y}\Big]f_{\Omega_B}(y)=\Omega_B^2\,f_{\Omega_B}(y)   (20)

Fortunately, the eigenvalues and eigenfunctions of this one-particle Hamiltonian are well known. The discrete spectrum possesses two bound state eigenfunctions, whose corresponding eigenvalues are respectively Ω²_B = 0 and Ω²_B = 3, namely:

1. Zero mode:

\Omega_0=0 , \qquad f_0(y)=\frac{1}{\cosh^2 y}

This eigenfunction is due to the spontaneous breaking of the translational symmetry by the Kink.

2. Bound state, the shape fluctuation mode:

\Omega^2_{\sqrt{3}}=3 , \qquad f_{\sqrt 3}(y)=\frac{\sinh y}{\cosh^2 y}

The next eigenfunction in the discrete spectrum responds to vibrations of the Kink, rather than translations, with a frequency of √3; it is referred to as the shape mode because it is accompanied by variations in the Kink shape.4 Above these two bound fluctuation modes arise the eigenstates with energies over the threshold of the continuous spectrum, Ω²_B(0) = 4.

3. Scattering states, the continuous spectrum:

\Omega_B^2(q)=q^2+4 , \qquad f(y;q)=e^{iqy}\big(3\tanh^2 y-3iq\tanh y-(1+q^2)\big)=e^{iqy}P_2(\tanh y;q)

It is remarkable that the scattering involved is transparent, i.e., the reflection amplitude r(q) is zero and the modulus of the transmission amplitude is one:

t(q)=\frac{1-iq}{1+iq}\cdot\frac{2-iq}{2+iq}

4 This fluctuation mode is closely approximated by a Derrick mode obeying a Lorentz dilatation, see Reference [4].

From it the total phase shift and the spectral density are easily computed.
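The Pöschl-Teller spectrum quoted above is easy to confirm numerically. The following finite-difference diagonalization (an illustration of ours, not the paper's code) recovers the two bound states below the continuum threshold at 4.

```python
# Discretize h_PT = -d^2/dy^2 + 4 - 6/cosh^2(y) on a large box and diagonalize:
# the lowest eigenvalues approach Omega_B^2 = 0 and 3, then the continuum at 4.
import numpy as np

N, ymax = 2000, 20.0
y = np.linspace(-ymax, ymax, N)
dy = y[1] - y[0]
main = 2.0/dy**2 + 4.0 - 6.0/np.cosh(y)**2      # diagonal of the 3-point Laplacian
off = -np.ones(N - 1)/dy**2                     # off-diagonal terms
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

vals = np.linalg.eigvalsh(H)
print(vals[:3])   # approx [0.0, 3.0, 4.0+]: zero mode, shape mode, continuum edge
```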
The general solution (21) is obtained as a linear superposition in terms of the eigenfunctions of the one-particle operator:

\phi(\tau,y)=\lim_{\varepsilon\to 0}\big(A_0e^{-i\varepsilon\tau}+A_0^*e^{i\varepsilon\tau}\big)f_0(y)+\big(A_{\sqrt 3}e^{-i\sqrt 3\,\tau}+A^*_{\sqrt 3}e^{i\sqrt 3\,\tau}\big)\cdot f_{\sqrt 3}(y)+\int\frac{dq}{\sqrt{2\sqrt{q^2+4}}}\Big(A(q)e^{-i\sqrt{q^2+4}\,\tau}f(y;q)+A^*(q)e^{i\sqrt{q^2+4}\,\tau}f^*(y;q)\Big)   (21)

Canonical quantization proceeds as usual, replacing the complex coefficients of the spectral expansion by quantum operators satisfying the canonical commutation relations:

[\hat A_0,\hat A_0^\dagger]=1 , \quad [\hat A_{\sqrt 3},\hat A^\dagger_{\sqrt 3}]=1 , \quad [\hat A(q_1),\hat A^\dagger(q_2)]=\delta(q_1-q_2)

The differences with respect to the bosonic Fock space in the vacuum sector are three: (1) there is one state where a boson is bound to the Kink center, travelling with it at no cost of energy; (2) the shape mode is one state in the Fock space where one boson is trapped by the Kink, giving rise to one Kink excited state characterized by its vibration frequency; (3) there are many states where the Higgs quanta are scattered off the Kink, but the outgoing particle waves escape from the Kink center as plane waves times Jacobi polynomials of order 2.

3.2 Electrons/Positrons propagating over Kink topological defects

Spinorial Kink fluctuations are determined from the spectral problem of the one-particle Kink-Dirac Hamiltonian (22):

h_{DK}\,\psi(y;\Omega_F)=\Omega_F\,\psi(y;\Omega_F) , \qquad \psi(y;\Omega_F)=\frac{1}{\sqrt g}\begin{pmatrix}\psi_1(y;\Omega_F)\\ \psi_2(y;\Omega_F)\end{pmatrix}   (22)

h_{DK}=\begin{pmatrix}0&-\frac{d}{dy}+\nu\tanh y\\ \frac{d}{dy}+\nu\tanh y&0\end{pmatrix} , \quad \nu=\frac{g}{\lambda} , \quad [\psi_1(y;\Omega_F)]=[\psi_2(y;\Omega_F)]=1

Instead of solving the spectral problem (22) directly, we notice that the square of the Dirac-Kink operator is a diagonal matrix of contiguous Pöschl-Teller-Schrödinger operators. Moreover, defining the first-order differential operator d_ν = d/dy + ν tanh y, the Darboux factorization method may be successfully applied to find the spectrum:

h_{DK}^2=\begin{pmatrix}-\frac{d^2}{dy^2}+\nu^2-\frac{\nu(\nu+1)}{\cosh^2 y}&0\\ 0&-\frac{d^2}{dy^2}+\nu^2-\frac{\nu(\nu-1)}{\cosh^2 y}\end{pmatrix}=\begin{pmatrix}d^\dagger_\nu d_\nu&0\\ 0&d_\nu d^\dagger_\nu\end{pmatrix}=\begin{pmatrix}d^\dagger_\nu d_\nu&0\\ 0&d^\dagger_{\nu-1}d_{\nu-1}+2\nu-1\end{pmatrix}

1. Fermionic zero modes

Immediately one recognizes the fermionic zero mode and the non-existent normalizable anti-fermionic zero mode as living respectively in the kernels of d_ν and d†_ν. One finds:

h_{DK}\begin{pmatrix}\psi_1^{(0)}(y)\\ 0\end{pmatrix}=0\ \Rightarrow\ \psi_1^{(0)}(y)=\frac{1}{\cosh^\nu y} , \qquad h_{DK}\begin{pmatrix}0\\ \psi_2^{(0)}(y)\end{pmatrix}=0\ \Rightarrow\ \psi_2^{(0)}(y)=\cosh^\nu y

Needless to say, changing from Kink to anti-Kink, the normalizable zero mode is the one corresponding to antifermions.

2. Fermionic Kink shape modes

Focusing on the case when g is a multiple of λ and ν = N ∈ ℕ* is a positive natural number, there are N − 1 proper bound states which correspond to vibrating Kink spinorial shape modes. The eigenvalues are well known and show that these states carry imaginary momentum on the positive imaginary half-axis in the complex q-plane: q = iκ_l.5

Eigenvalues:

\Omega_F^{(l)}(\kappa_l;N)=\sqrt{(2N-l)l}=\sqrt{N^2-\kappa_l^2} , \quad l=1,2,\cdots,N-1 , \quad \kappa_l=N-l

In terms of Gauss hypergeometric series truncated to polynomials, the eigenspinors are also explicitly known.

Eigenspinors:

\psi^{(l)}(y)=\begin{pmatrix}\psi_1^{(l)}(y)\\ \psi_2^{(l)}(y)\end{pmatrix}

\psi_1^{(l)}(y)=\frac{1}{\cosh^{N-l}y}\cdot{}_2F_1\big(2N+1-l,\,-l,\,N-l+1;\,\tfrac{1}{2}(1+\tanh y)\big)

\psi_2^{(l)}(y)=\frac{1}{\cosh^{N-l}y}\cdot{}_2F_1\big(2N-l,\,-l+1,\,N-l+1;\,\tfrac{1}{2}(1+\tanh y)\big)

This identification has been possible because the bound state labelled by l in the upper diagonal component and the bound state labelled by j = l − 1 in the lower diagonal component of the square of the Dirac operator share identical eigenvalues.

5 In this case there is one half-bound state just at the threshold of the continuous spectrum, with l = N. These "half-states" do not exist if ν ∉ ℕ*.
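Before turning to the scattering states, the factorization above can be verified symbolically; the short SymPy script below (our check, not the paper's) confirms both the annihilation of the zero mode by d_ν and the Pöschl-Teller form of d†_ν d_ν.

```python
# Symbolic check of the Darboux factorization: d_nu = d/dy + nu*tanh(y)
# annihilates sech^nu(y), and d_nu^dag d_nu = -d^2/dy^2 + nu^2 - nu(nu+1)/cosh^2 y.
import sympy as sp

y = sp.symbols('y')
nu = sp.symbols('nu', positive=True)
f = sp.Function('f')(y)

d_nu = lambda h: sp.diff(h, y) + nu*sp.tanh(y)*h       # d_nu
d_nu_dag = lambda h: -sp.diff(h, y) + nu*sp.tanh(y)*h  # d_nu^dagger

zero_mode = sp.cosh(y)**(-nu)
print(sp.simplify(d_nu(zero_mode).rewrite(sp.exp)))    # -> 0

residual = (d_nu_dag(d_nu(f)) + sp.diff(f, y, 2)
            - (nu**2 - nu*(nu + 1)/sp.cosh(y)**2)*f)
print(sp.simplify(residual.rewrite(sp.exp)))           # -> 0
```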
3. Fermions scattered off Kinks

It remains to describe the spinorial fluctuations scattered off Kinks, i.e., not bound to the Kink center. Of course, these fluctuations belong to the continuous spectrum of h_DK (23):

h_{DK}\,\psi(y;q)=\Omega_F(q)\,\psi(y;q) , \qquad \psi(y;q)=\begin{pmatrix}\psi_1(y;q)\\ \psi_2(y;q)\end{pmatrix}   (23)

Eigenvalues:

\Omega_F^2(q)=q^2+N^2\ \Rightarrow\ \Omega_F(q)=+\sqrt{q^2+N^2}

whereas the eigenspinors are proper Gauss hypergeometric series.

Eigenspinors:

\psi_1(y;q)=A(q)\,(\mathrm{sech}\,y)^{-iq}\cdot{}_2F_1\Big[1-iq+N,\,-iq-N,\,1-iq;\,\frac{e^y}{e^y+e^{-y}}\Big]

\psi_2(y;q)=A(q)\,(\mathrm{sech}\,y)^{-iq}\cdot{}_2F_1\Big[-iq+N,\,-iq-N+1,\,1-iq;\,\frac{e^y}{e^y+e^{-y}}\Big]

Scattering scalar or spinor waves are characterized by their phase shifts and/or spectral densities. Considering the system defined on a finite interval of very large length L with PBC, the spectral densities of the scattering through the Kink wells suffered respectively by the upper and lower components are:

\rho_F(q)=\rho_F^{(1)}(q)+\rho_F^{(2)}(q)

\rho_F^{(1)}=\frac{gL}{2\pi}+\sum_{j=1}^{N-1}\frac{j}{j^2+q^2}+\frac{N}{N^2+q^2} , \qquad \rho_F^{(2)}=\frac{gL}{2\pi}+\sum_{j=1}^{N-1}\frac{j}{j^2+q^2}

Note that, besides the precise characterization of the continuous spectrum in terms of the wave number q, the bound states resurface as poles in the spectral density.

3.3 Fermionic quanta

To finish this Section we first expand the classical spinor field in terms of the one-particle states:

\frac{1}{\sqrt g}\Psi^+(y)=B_0\begin{pmatrix}\mathrm{sech}^N y\\ 0\end{pmatrix}+\sum_{l=1}^{N-1}\Big[B_l\begin{pmatrix}\psi_1^{(l)}(y)\\ \psi_2^{(l)}(y)\end{pmatrix}e^{-i\Omega_F^{(l)}gt}+C_l^*\begin{pmatrix}\varphi_1^{*(l)}(y)\\ \varphi_2^{*(l)}(y)\end{pmatrix}e^{i\Omega_F^{(l)}gt}\Big]+\int\frac{dq}{\sqrt{4\pi\Omega_F(q)}}\Big(B(q)\begin{pmatrix}\psi_1(y;q)\\ \psi_2(y;q)\end{pmatrix}e^{-i\Omega_F(q)gt}+C^*(q)\begin{pmatrix}\varphi_1^*(y;q)\\ \varphi_2^*(y;q)\end{pmatrix}e^{i\Omega_F(q)gt}\Big)

We stress that the eigenspinors ψ of the Dirac-Kink operator, together with those of its g → −g transform, φ, have been taken as a complete system in the space of spinor fields. The next step is the promotion of the coefficients to Fermi operators, demanding anticommutation rules between them to establish the canonical quantization procedure:

\{\hat B_0,\hat B_0^\dagger\}=1 , \quad \{\hat B_{l_1},\hat B_{l_2}^\dagger\}=\delta_{l_1 l_2} , \quad \{\hat C_{l_1},\hat C_{l_2}^\dagger\}=\delta_{l_1 l_2}

\{\hat B(q_1),\hat B^\dagger(q_2)\}=\delta(q_1-q_2)=\{\hat C(q_1),\hat C^\dagger(q_2)\} , \quad \{\hat\Psi^+(y_1),i\hat\Psi^{+\dagger}(y_2)\}=i\delta(y_1-y_2)

The Fermi ground state, and in general the fermionic Fock space describing electron/positron multiparticle states with Fermi statistics built in, follows. As a practical computation we show the normal-ordered Fermi number operator, with all annihilation operators placed to the right of the creation operators, denoted by the :F̂: symbol:

:\hat F:=\int dx\,\frac{1}{2}\Big[\hat\Psi^{+\dagger}(x),\hat\Psi^+(x)\Big] , \qquad \sigma^1\begin{pmatrix}\psi_1(y;q)\\ \psi_2(y;q)\end{pmatrix}=\begin{pmatrix}\varphi_1(y;q)\\ \varphi_2(y;q)\end{pmatrix}

:\hat F:=\frac{1}{2}[\hat B_0^\dagger,\hat B_0]+\sum_{l=1}^{N-1}\big(\hat B_l^\dagger\hat B_l-\hat C_l^\dagger\hat C_l\big)+\int dq\,\rho_F(q)\big(\hat B^\dagger(q)\hat B(q)-\hat C^\dagger(q)\hat C(q)\big)=\hat N_0-\frac{1}{2}+\sum_{l=1}^{N-1}\big(\hat N_l^--\hat N_l^+\big)+\int dq\,\rho_F(q)\big(\hat N^-(q)-\hat N^+(q)\big)

Due to the unpaired zero mode, we see that the expectation value of this operator in any state of the fermionic Fock space is fractional. Note that, because the σ¹ matrix maps the eigenspinors of h_DK(g) into those of h_DK(−g), not only are the states in the discrete spectra paired (except the zero mode), but the spectral densities in the continuous spectra are also identical.
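The q-dependent part of the spectral densities above is straightforward to evaluate; the snippet below (an illustration of ours) tabulates it for ν = N = 2, where the bound states show up as the Lorentzian terms j/(j² + q²) whose poles sit at the imaginary momenta q = ±ij.

```python
# Evaluate the q-dependent part of rho_F^(1) and rho_F^(2) for N = 2.
# The divergent gL/2pi box term is dropped here; it cancels in differences.
import numpy as np

N = 2
q = np.linspace(-6, 6, 601)
rho1 = sum(j/(j**2 + q**2) for j in range(1, N)) + N/(N**2 + q**2)  # upper component
rho2 = sum(j/(j**2 + q**2) for j in range(1, N))                    # lower component
print(rho1[q == 0], rho2[q == 0])   # peaks at q = 0 reflect the bound-state poles
```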
4 Low energy dynamics of vibrating Kinks. Impact of Bosonic and Fermionic shape modes

Study of the dynamics of topological defects in non-linear, non-integrable field theories is an endeavour beyond the reach of analytical methods. Numerical analysis, helped by more and more powerful computers, has been successfully used to obtain information about this important subject, because many types of topological defects exist in Nature. During the last twenty years of the XX century an alternative route was investigated by physicists and mathematicians. The idea is that at low energies the dynamics is essentially described by geodesic motion over the moduli space of these extended objects. More recently, internal/shape vibrational modes of fluctuation have been included in these effective finite-dimensional dynamical systems. The degrees of freedom correspond to the collective coordinates of the defect and of its vibrational modes. The adiabatic principle dictates that both the Kink moduli space coordinates and the shape mode amplitudes of Kink fluctuations support all the time dependence and describe the adiabatic evolution of the topological defect. Remarkably, the validity of this approximation has been confirmed by numerical methods in this simplified scenario. In this Section our goal is to construct the effective adiabatic dynamics of the Kink collective coordinates encompassing both bosonic and fermionic shape fluctuation modes. We thus start with the Kink solution but incorporate also the lower bosonic and fermionic shape fluctuation modes.

4.1 Low energy dynamics of a single vibrating Kink

In the simplest g = 2λ case, ν = 2, there is only one shape mode of each type and we focus on the configuration

\phi(\tau,y)\simeq\phi_K(y)+A(\tau)\eta_{\sqrt 3}(y)=\tanh[y-a(\tau)]+A(\tau)\frac{\sinh[y-a(\tau)]}{\cosh^2[y-a(\tau)]}

\frac{1}{\sqrt g}\Psi_{\sqrt 3}(\tau,y,a,\Lambda)=\Lambda(\tau)\Phi(t,a(\tau))\simeq\frac{\Lambda(\tau)}{\cosh[y-a(\tau)]}\begin{pmatrix}\tanh[y-a(\tau)]\\ 1/\sqrt 3\end{pmatrix}

where the Kink configuration is supplemented by the bosonic and fermionic shape modes, both of frequency Ω = √3. The space parametrized by the collective coordinates is thus a four-dimensional supermanifold. There are two bosonic collective coordinates, the Kink center a and the amplitude of the bosonic shape mode A, spanning the "body". The "soul" of the supermanifold is spanned in turn by one fermionic collective coordinate, the amplitude of the fermionic shape mode: Λ. Now a very subtle point: in classical field theory bosonic fields present no problems, but classical Fermi fields are incompatible with the exclusion principle and thus, strictly speaking, do not exist. A loophole to deal with this problem is to focus on the anticommutation rules and look at their classical limit, which is only satisfied by Grassmann variables. It is then natural to consider classical Fermi fields as Grassmann fields, but then there is no possibility of having the Dirac sea as the ground state: one must cope with the existence of spinor waves propagating with negative energy. Within this spirit, we shall consider that the shape mode amplitude, Λ = Λ₁ + iΛ₂, is a complex Grassmann variable:

\Lambda^2=0=\Lambda^{*2} , \quad \Lambda\Lambda^*+\Lambda^*\Lambda=0

\Lambda_1^2=0=\Lambda_2^2 , \quad \Lambda_1\Lambda_2+\Lambda_2\Lambda_1=0 , \quad \Lambda^*\Lambda=2i\Lambda_1\Lambda_2

Under the adiabatic hypothesis, where the temporal dependence of the system is encoded in the collective coordinates a(τ), A(τ), Λ(τ), the dynamics is reduced to a finite-dimensional Lagrangian system. The kinetic energy and the potential energy receive contributions from both the bosonic and fermionic collective coordinates.
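The Grassmann algebra just listed can be realized concretely with nilpotent matrices; the sketch below (our illustration, not the paper's construction) represents Λ₁ and Λ₂ by a Jordan-Wigner pair of fermionic creation operators and checks every relation, including Λ*Λ = 2iΛ₁Λ₂.

```python
# A matrix realization of the two-generator Grassmann algebra used above.
import numpy as np

sp_ = np.array([[0, 1], [0, 0]])     # sigma^+
sz = np.diag([1, -1])                # sigma^z (Jordan-Wigner string)
I2 = np.eye(2)

L1 = np.kron(sp_, I2)                # Lambda_1
L2 = np.kron(sz, sp_)                # Lambda_2
Lam = L1 + 1j*L2                     # Lambda = Lambda_1 + i Lambda_2
LamS = L1 - 1j*L2                    # Lambda^*

assert np.allclose(L1 @ L1, 0) and np.allclose(L2 @ L2, 0)   # Lambda_i^2 = 0
assert np.allclose(L1 @ L2 + L2 @ L1, 0)                     # anticommutation
assert np.allclose(Lam @ Lam, 0) and np.allclose(LamS @ LamS, 0)
assert np.allclose(Lam @ LamS + LamS @ Lam, 0)               # Lambda Lambda^* + Lambda^* Lambda = 0
assert np.allclose(LamS @ Lam, 2j * (L1 @ L2))               # Lambda^* Lambda = 2i Lambda_1 Lambda_2
print("all Grassmann relations hold")
```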
Remembering the vibrating Kink configuration

\phi_K(y,a,A)=\tanh(y-a)\Big(1+\frac{A}{\cosh(y-a)}\Big)

and the two components of the fermionic bound state of energy √3, the fermionic shape mode of fluctuations over the vibrating Kink,

\psi_{1\,\sqrt 3}^{(2)}(y,a,\Lambda)=\frac{\Lambda}{\cosh(y-a)}\cdot{}_2F_1\big(4,-1,2;\tfrac{1}{2}(1+\tanh(y-a))\big)=\Lambda\,f(y,a)

\psi_{2\,\sqrt 3}^{(2)}(y,a,\Lambda)=\frac{\Lambda}{\cosh(y-a)}\cdot{}_2F_1\big(3,0,2;\tfrac{1}{2}(1+\tanh(y-a))\big)=\Lambda\,g(y,a)

the contributions of the bosonic and fermionic shape modes to the effective kinetic and potential energies are:6

T_{eff}^B=\frac{1}{2}\int_{-\infty}^{\infty}dy\,\Big(\frac{\partial\phi_K}{\partial a}\dot a+\frac{\partial\phi_K}{\partial A}\dot A\Big)^2=\frac{1}{2}\Big[\Big(\frac{4}{3}+\frac{\pi}{2}A+\frac{14}{15}A^2\Big)\dot a\,\dot a+\frac{2}{3}\dot A\,\dot A\Big]

T_{eff}^F=-\Lambda^*\dot\Lambda\int_{-\infty}^{\infty}dy\,\big(f[y,a]^2+g[y,a]^2\big)+i\Lambda^*\Lambda\int_{-\infty}^{\infty}dy\,\Big(f[y,a]\frac{\partial f}{\partial a}+g[y,a]\frac{\partial g}{\partial a}\Big)\dot a=-\frac{8}{3}\Lambda^*\dot\Lambda

The effective potential energies due to the bosonic and fermionic collective variables are slightly more difficult to compute:

V_{eff}^B=\frac{1}{2}\int_{-\infty}^{\infty}dy\,\Big[\Big(\frac{d\phi}{dy}\Big)^2+\big(1-\phi^2\big)^2\Big]=\frac{4}{3}+A^2+\frac{\pi}{8}A^3+\frac{2}{35}A^4

V_{eff}^F=i\Lambda^*\Lambda\int_{-\infty}^{\infty}dy\,\Big[f(y,a)\Big(-\frac{d}{dy}+2\phi_K(y,a,A)\Big)g(y,a)+g(y,a)\Big(\frac{d}{dy}+2\phi_K(y,a,A)\Big)f(y,a)\Big]=-i\Lambda^*\Lambda\Big(4+\frac{\pi}{2}A\Big)=i\Lambda^*\Lambda\cdot W[a,A]

We notice now the main conceptual facts: (1) fluctuations in the Kink center of mass a and in the amplitudes of the vibrating shape modes, both scalar (bosonic) and spinorial (fermionic), are entangled; thus, if the Kink kinetic energy decreases, the frequency of Kink oscillations increases. (2) The amplitude of the fermionic fluctuations is coupled to the amplitude of the bosonic ones via a Yukawa interaction between one real and one complex degree of freedom, remarkably independent of the Kink position a. To make these statements more precise we look at the motion equations derived from the effective Lagrangian L_eff = T_eff^B + T_eff^F − V_eff^B − V_eff^F,

L_{eff}^F=-\frac{4}{3}\Lambda^*\dot\Lambda-iW[a,A]\cdot\Lambda^*\Lambda

L_{eff}^B=\frac{1}{2}\Big[\Big(\frac{4}{3}+\frac{\pi}{2}A+\frac{14}{15}A^2\Big)\dot a\,\dot a+\frac{2}{3}\dot A\,\dot A\Big]-\Big(\frac{4}{3}+A^2+\frac{\pi}{8}A^3+\frac{2}{35}A^4\Big)

which are (24)-(26):

\frac{d}{d\tau}\Big[\Big(\frac{4}{3}+\frac{\pi}{2}A+\frac{14}{15}A^2\Big)\dot a\Big]=i\frac{\partial W}{\partial a}\Lambda^*\Lambda   (24)

\frac{2}{3}\ddot A=\Big(\frac{\pi}{2}+\frac{28}{15}A\Big)\dot a^2-\Big(2A+\frac{3\pi}{8}A^2+\frac{8}{35}A^3\Big)-i\frac{\partial W}{\partial A}\Lambda^*\Lambda   (25)

i\dot\Lambda^*=W[a,A]\cdot\Lambda^*   (26)

6 Note that the expressions for the fermionic kinetic and potential energies are hermitian, because of the anti-hermiticity of ∂/∂τ and the anti-commutativity of the Grassmann fields Ψ† and Ψ.

Consider first the situation where the fermionic fluctuations are null: Λ(τ) = 0 for all τ. Then the first of the Euler-Lagrange equations gives rise to a constant of motion, because a is a cyclic variable:

\Big(\frac{4}{3}+\frac{\pi}{2}A+\frac{14}{15}A^2\Big)\dot a=C\ \equiv\ \dot a=\frac{C}{\frac{4}{3}+\frac{\pi}{2}A+\frac{14}{15}A^2}

Still in the absence of fermionic fluctuations, and focusing on the regime of small shape mode amplitude, the second motion equation becomes linear:

\ddot A\simeq D(C)-\omega^2(C)A+O(A^2) , \quad D(C)=\frac{9\pi}{32}C^2 , \quad \omega^2(C)=3-\Big(\frac{21}{20}-\frac{27\pi^2}{256}\Big)C^2

Having chosen the Λ = 0 = Λ* solution of the motion equations for the Grassmann degree of freedom, the main impact of the shape mode on the Kink dynamics is clearly shown: the frequency of the Kink oscillations gets modified as a function of C ∝ ȧ, i.e., the faster the vibrating Kink moves, the lower the frequency of its oscillations becomes, and vice versa. This transference from kinetic to potential energy in Kink dynamics is a semi-classical effect, because the shape mode appears at order ℏ in the ℏ-expansion of the Jackiw-Rebbi action. The dependence of the shape mode amplitude on time is easily recognized if fermionic fluctuations do not enter the game:

A(\tau)=\frac{D(C)}{\omega^2(C)}+A_0\cos\big(\omega(C)\tau+\delta\big)

with integration constants A₀ and δ. We stress that there is a critical value of C: if |C| > 2√(5/7), ω(C) becomes imaginary and the Kink stops oscillating; the evolution is purely kinetic.
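The reduced (a, A) system with Λ = 0 is easy to integrate numerically. The sketch below (ours; coefficients taken as printed in (24)-(25), which is an assumption since some factors in the source are garbled) conserves C and exhibits the shape-mode oscillation whose frequency depends on C.

```python
# Integrate the Lambda = 0 collective-coordinate system: C = g(A)*da/dtau is
# conserved by (24), and (25) drives the shape-mode amplitude A(tau).
import numpy as np
from scipy.integrate import solve_ivp

gA = lambda A: 4/3 + (np.pi/2)*A + (14/15)*A**2      # metric factor g(A)
dgA = lambda A: np.pi/2 + (28/15)*A                   # g'(A)
dV = lambda A: 2*A + (3*np.pi/8)*A**2 + (8/35)*A**3   # V'(A)

def rhs(t, s, C):
    a, A, Ad = s
    ad = C / gA(A)                                    # da/dtau fixed by C
    Add = 1.5*(dgA(A)*ad**2 - dV(A))                  # from (25), times 3/2
    return [ad, Ad, Add]

sol = solve_ivp(rhs, (0, 40), [0.0, 0.1, 0.0], args=(0.5,), max_step=0.01)
A = sol.y[1]
print("A(tau) oscillates in", (A.min(), A.max()))     # compare omega(C) in the text
```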
When |C| > 0 and ȧ < 0 at early times, a decreases as time runs forward until it becomes slightly negative, meaning that the Kink and AntiKink centers cross each other while exchanging their relative positions. For sufficiently high energy, the relative position a becomes negative enough to reach, sooner or later, the infinite potential barrier. Then a bounce is produced and a second exchange between Kink and AntiKink takes place, moving the KAK pair apart again up to very long distances. For energies less than 8/3 but greater than 0, the Kink-AntiKink motion is bounded. These oscillatory motions were christened "bions" by their discoverers in References [15]-[16]. The plots of the effective potentials for A = 1 and A = 3, Figure 2 (center) and (right), show a similar pattern, but less room for bions is left with increasing shape mode amplitudes.

A brief digression on the quantum description of the previously described classical dynamics is convenient. Because the momentum conjugate to a is p_a = g[a,0]ȧ, the quantum momentum operator becomes p̂_a = −i d/da, while the quantum Hamiltonian (32), Weyl ordered, reads:

\hat H=-\frac{1}{\sqrt{g[a,0]}}\frac{d}{da}\sqrt{g^{-1}[a,0]}\frac{d}{da}-V_{eff}^{KA}[a,0]   (32)

and the unbounded motion orbits become scattering backward waves, while bound states arise from the bounded orbits complying with the Bohr-Sommerfeld quantization conditions.

If both Kink and AntiKink are excited and we let the amplitude A vary, things are different. The effective dynamics is captured by a Lagrangian system with two degrees of freedom, a and A. Denoting now a = a₁ and A = a₂, the effective Lagrangian reads

L=\frac{1}{2}g_{a_ia_j}[a_1,a_2]\frac{\partial a_i}{\partial\tau}\cdot\frac{\partial a_j}{\partial\tau}-V_{eff}^{KA}[a_1,a_2] , \quad i,j=1,2

and, accordingly, the motion equations become:

\frac{\partial^2 a_i}{\partial\tau^2}+\sum_{j,k}\Gamma^{a_i}_{a_ja_k}\frac{\partial a_j}{\partial\tau}\frac{\partial a_k}{\partial\tau}=-\sum_j g^{a_ia_j}\frac{\delta V}{\delta a_j} , \qquad \Gamma^{a_i}_{a_ja_k}=\frac{1}{2}\sum_l g^{a_ia_l}\Big(\frac{\partial g_{a_la_j}}{\partial a_k}+\frac{\partial g_{a_la_k}}{\partial a_j}-\frac{\partial g_{a_ja_k}}{\partial a_l}\Big)

The energy is still a constant of motion, but there is no second invariant that would guarantee the Liouville integrability of the system. Needless to say, obtaining analytical solutions of this system of ODEs is hopeless. Nevertheless, numerical analysis of this system for initial conditions corresponding to Kink/AntiKink scattering has been successfully performed in the seminal Reference [4]. These authors reached conclusions similar to those previously obtained in the numerical treatment of the full field theory: in Kink-AntiKink scattering, quasi-bound states where several bounces occur arise for some windows of initial velocities. Moreover, the pattern shows a very interesting fractal structure.
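For readers who want to experiment, the geodesic-plus-potential equations above can be integrated generically; the sketch below (ours, with a placeholder diagonal metric and potential standing in for the paper's g_ij and V^KA_eff) builds the Christoffel symbols by finite differences from any user-supplied metric.

```python
# Generic integrator for d^2 a_i/dtau^2 + Gamma^i_jk da_j da_k = -g^{ij} dV/da_j.
import numpy as np
from scipy.integrate import solve_ivp

def metric(x):                          # placeholder 2x2 metric g_ij(a1, a2)
    a1, a2 = x
    return np.array([[4/3 + (np.pi/2)*a2 + (14/15)*a2**2, 0.0],
                     [0.0, 2/3]])

def V(x):                               # placeholder potential V(a1, a2)
    return x[1]**2

def christoffel(x, h=1e-6):
    dg = [(metric(x + h*e) - metric(x - h*e))/(2*h) for e in np.eye(2)]
    ginv = np.linalg.inv(metric(x))
    G = np.zeros((2, 2, 2))
    for i in range(2):
        for j in range(2):
            for k in range(2):
                G[i, j, k] = 0.5*sum(ginv[i, l]*(dg[k][l, j] + dg[j][l, k] - dg[l][j, k])
                                     for l in range(2))
    return G

def rhs(t, s):
    x, v = s[:2], s[2:]
    gradV = np.array([(V(x + 1e-6*e) - V(x - 1e-6*e))/2e-6 for e in np.eye(2)])
    acc = (-np.einsum('ijk,j,k->i', christoffel(x), v, v)
           - np.linalg.inv(metric(x)) @ gradV)
    return np.concatenate([v, acc])

sol = solve_ivp(rhs, (0, 20), [2.0, 0.1, -0.3, 0.0], max_step=0.01)
print(sol.y[:2, -1])                    # final (a1, a2)
```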
4.3 Fermionic fluctuations of Kink/Anti-Kink configurations

Consider the bounded spinorial fluctuation over the Kink/AntiKink configuration (33) when the ratio g/λ is 2:

\frac{1}{\sqrt g}\Psi_{\sqrt 3}^{\sqrt 3}(y,a,\Lambda)=\frac{\Lambda}{\tanh a}\Big[\mathrm{sech}(y+a)\begin{pmatrix}{}_2F_1\big(4,-1,2;\tfrac{1}{2}(1+\tanh(y+a))\big)\\ {}_2F_1\big(3,0,2;\tfrac{1}{2}(1+\tanh(y+a))\big)\end{pmatrix}-\mathrm{sech}(y-a)\begin{pmatrix}{}_2F_1\big(4,-1,2;\tfrac{1}{2}(1+\tanh(y-a))\big)\\ {}_2F_1\big(3,0,2;\tfrac{1}{2}(1+\tanh(y-a))\big)\end{pmatrix}\Big]   (33)

Figure 4: (left) the Kink shape mode and the AntiKink shape mode; (right) the upper and lower components of the fermionic shape mode.

The contribution of the fermionic fluctuations to the kinetic energy is encoded in the factor G(a), (34):

T_{eff}^F=-\int_{-\infty}^{\infty}dy\,\Psi_{\sqrt 3}^{\sqrt 3\,\dagger}(y,a,\Lambda)\,\dot\Psi_{\sqrt 3}^{\sqrt 3}(y,a,\Lambda)=-\Lambda^*\dot\Lambda\,G(a)

G(a)=\frac{1}{\tanh^2 a}\int_{-\infty}^{\infty}dy\,\Big[\Big(\frac{\tanh(y+a)}{\cosh(y+a)}-\frac{\tanh(y-a)}{\cosh(y-a)}\Big)^2+\big(\mathrm{sech}(y+a)-\mathrm{sech}(y-a)\big)^2\Big]=\frac{4}{3}\big(12a-3\sinh(2a)-3\sinh(4a)+\sinh(6a)\big)\coth^2(a)\,\mathrm{csch}^3(2a)   (34)

Figure 5: (left) contribution G(a) to the metric of fermionic fluctuations as a function of the distance between Kink and AntiKink; (right) effective potential U(a) due to fermionic fluctuations of the Kink/AntiKink configuration, also as a function of a.

Next we compute the effective potential contributed by the fermionic shape mode fluctuating over the Kink/AntiKink configuration:

V_{eff}^F(a,\Lambda^*\Lambda)=i\int_{-\infty}^{\infty}dy\,\Psi_{\sqrt 3}^{\sqrt 3\,\dagger}(y,a,\Lambda)\Big(-i\sigma^2\frac{d}{dy}+2\phi_{KA}(y,a,A)\Big)\Psi_{\sqrt 3}^{\sqrt 3}(y,a,\Lambda)=i\Lambda^*\Lambda\cdot U(a)

U(a)=\frac{1}{\tanh^2 a}\int_{-\infty}^{\infty}dy\,\Big\{F_4(y)\Big(-\frac{d}{dy}+2\phi_{KA}(y,a,A)\Big)F_3(y)+F_3(y)\Big(\frac{d}{dy}+2\phi_{KA}(y,a,A)\Big)F_4(y)\Big\}

with the shorthands

F_4(y)=\frac{{}_2F_1[4,-1,2;\frac{1}{2}(1+\tanh(y+a))]}{\cosh(y+a)}-\frac{{}_2F_1[4,-1,2;\frac{1}{2}(1+\tanh(y-a))]}{\cosh(y-a)} , \qquad F_3(y)=\frac{{}_2F_1[3,0,2;\frac{1}{2}(1+\tanh(y+a))]}{\cosh(y+a)}-\frac{{}_2F_1[3,0,2;\frac{1}{2}(1+\tanh(y-a))]}{\cosh(y-a)}

where φ_KA(y,a,A) is the excited Kink/AntiKink configuration defined in formula (29). The result for the effective potential is (35):

U(a)=-\frac{2}{3}\coth^2(a)\,\mathrm{csch}^3(2a)\big(36a-3\sinh(2a)-12\sinh(4a)+\sinh(6a)+12a\cosh(4a)\big)   (35)

Since the fermionic Kink/AntiKink fluctuations do not depend on the shape mode vibration amplitudes of Kink and AntiKink, we expect G(a) to be a function of a only, and indeed this is the case. The effective potential U(a) induced by the fermionic fluctuations on vibrating Kink/AntiKink configurations does not depend on the amplitude of the bosonic shape mode either. This unexpected effect is due to the fact, see Figure 8 (left), that Kink and AntiKink oscillate in counter-phase and there are destructive interferences. Thus, one should expect that interactions between the amplitudes Λ and A, respectively of fermionic and bosonic fluctuations of Kink/AntiKink configurations, would only arise if the amplitude of vibration of the Kink differs from the AntiKink amplitude. To test this statement, the generalization where the Kink amplitude differs from the AntiKink amplitude in the shape modes is implemented by replacing, in the Kink/AntiKink configuration, the contribution of the excitations by

A\,\frac{\tanh(y+a)}{\cosh(y+a)}-B\,\frac{\tanh(y-a)}{\cosh(y-a)}

A quick Mathematica run confirms the conceptual argument given above:

U(a,A-B)=-\coth^2(a)\Big[\frac{2\pi(A-B)}{3(e^{2a}-1)^6}+\frac{16e^{6a}\big(36a-3\sinh(2a)-12\sinh(4a)+\sinh(6a)+12a\cosh(4a)\big)}{3(e^{4a}-1)^3}\Big]
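The metric factor G(a) is easy to evaluate directly from its integral definition; the short check below (ours, not the paper's code) can be compared against the closed form quoted in (34).

```python
# Numerically evaluate G(a) from its integral definition above.
import numpy as np
from scipy.integrate import quad

def G(a):
    def integrand(y):
        t = np.tanh(y + a)/np.cosh(y + a) - np.tanh(y - a)/np.cosh(y - a)
        s = 1/np.cosh(y + a) - 1/np.cosh(y - a)
        return t**2 + s**2
    val, _ = quad(integrand, -30, 30)
    return val / np.tanh(a)**2

for a in (0.5, 1.0, 2.0, 4.0):
    print(a, G(a))   # tends to 16/3 for well-separated centers (8/3 per mode)
```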
Understanding how the metric and the effective potential induced by fermionic Kink fluctuations depend on g/λ starts by looking at N = 3. In this case there are two fermionic shape modes. The lower frequency is √5 which, implemented on the excited Kink/AntiKink configuration, gives rise to the spinor wave function (36):

\frac{1}{\sqrt g}\Psi_{\sqrt 5}^{\sqrt 5}(y,\Lambda)=\frac{\Lambda}{\tanh a}\begin{pmatrix}\frac{{}_2F_1[6,-1,3;\frac{1}{2}(1+\tanh(y+a))]}{\cosh^2(y+a)}-\frac{{}_2F_1[6,-1,3;\frac{1}{2}(1+\tanh(y-a))]}{\cosh^2(y-a)}\\ \frac{{}_2F_1[5,0,3;\frac{1}{2}(1+\tanh(y+a))]}{\cosh^2(y+a)}-\frac{{}_2F_1[5,0,3;\frac{1}{2}(1+\tanh(y-a))]}{\cosh^2(y-a)}\end{pmatrix}   (36)

The effective kinetic energy induced by this fermionic shape mode is

T_{eff}^{F_1}=-\int_{-\infty}^{\infty}dy\,\Psi_{\sqrt 5}^{\sqrt 5\,\dagger}(y,\Lambda)\,\dot\Psi_{\sqrt 5}^{\sqrt 5}(y,\Lambda)=-\Lambda^*\dot\Lambda\cdot G_3(a)

G_3(a)=\frac{1}{5}\coth^2(a)\,\mathrm{csch}^5(2a)\big(-80\sinh(2a)-15\sinh(6a)+\sinh(10a)+240a\cosh(2a)\big)   (37)

where the factor G₃(a) is written in (37). Likewise, the effective potential is derived via a similar procedure:

V_{eff}^{F_1}=i\int_{-\infty}^{\infty}dy\,\Psi_{\sqrt 5}^{\sqrt 5\,\dagger}(y,\Lambda)\Big\{-i\sigma^2\frac{d}{dy}+3\phi_{KA}(y,a,A)\Big\}\Psi_{\sqrt 5}^{\sqrt 5}(y,\Lambda)=i\Lambda^*\Lambda\cdot U_3(a)

U_3(a)=-\frac{2}{15}\coth^2(a)\,\mathrm{csch}^5(2a)\big(-350\sinh(2a)-125\sinh(6a)+\sinh(10a)+120a\,(11\cosh(2a)+\cosh(6a))\big)

In sum, an identical pattern is observed in the effective adiabatic dynamics induced by fermionic Kink shape modes in Kink/AntiKink configurations for N = 2 and N = 3. We thus skip writing the results obtained for the fermionic shape mode of frequency √8, which are qualitatively equivalent but even more cumbersome.

Figure 6: (left) induced metric G₃(a); (right) induced effective potential U₃(a).

5 Outlook

The theoretical developments in this work have been focused on a (1+1)-dimensional field theory model of bosons and fermions, specifically the Jackiw-Rebbi model introduced in Reference [5]. One interesting way to extend these ideas is to increase the number of fields, both bosonic and fermionic. An early proposal along this line can be found in a paper published circa the year 2000 by an MIT group, see [14]. In that work the authors deal with a field theory model with two Bose and two Fermi fields; the interaction between the bosonic and fermionic fields is through two Yukawa couplings, and sophisticated phenomena arise. We intend, however, to extend the scalar/Bose two-field models treated by our group in Salamanca by incorporating two spinor/Fermi fields in these systems. The first model that we have in mind was discussed, among other papers, in Reference [17]. This field theoretical model exhibits a rich variety of Kinks and possesses interesting integrability properties in the related analogous mechanical system. We expect that, upon adding spinor/Fermi fields, phenomena similar to those found in the Jackiw-Rebbi model will persist, but subtle novelties will probably be unfolded. The second system where we envisage that the analysis developed in the JR model will be fruitful is the massive non-linear S² sigma model in (1+1) dimensions, [18]. Again, in this "deformed" non-linear sigma model a manifold of Kinks, topological and non-topological, exists. Spinor/Fermi fields will be included as sections of a spinor bundle over the two-sphere rather than as spinor functions. In any case we expect that new phenomena will appear with respect to those described in the JR model; the interplay between scalar and spinor fields in this non-linear system promises the appearance of new subtleties. Another playground where the analysis of effective low energy dynamics seems promising is the moduli space of BPS vortices in the Abelian Higgs model, see e.g. [2] for a parallel study devoted to Kinks.
The Abelian Higgs model in (2+1)-dimensional space-time, at the critical ratio between the scalar self-interaction coupling λ and the electromagnetic coupling e², the transition point between Type I and Type II Ginzburg-Landau superconductivity, possesses manifolds of topological, non-interacting, stable BPS vortices characterized by an integer number of magnetic quanta. These topological defects also admit vibrational modes, see References [19]-[20]. The collective coordinate low energy analysis has been developed in [21]-[22]-[23] in the absence of fermions. The addition of fermions to the Abelian Higgs model, including Yukawa and electromagnetic couplings, is compelling. The system will be much more complex, but the temptation is strong to identify the collective coordinates of these planar topological defects and to see how fermions affect the adiabatic dynamics of BPS vortices.

References

[1] A. Alonso-Izquierdo, D. Miguelez-Caballero, L.M. Nieto, J. Queiroga-Nunes, Wobbling kinks in a two-component scalar field theory: Interaction between shape modes
[2] A. Alonso-Izquierdo, L.M. Nieto, J. Queiroga-Nunes, Asymmetric scattering between kinks and wobblers, Comm. Nonl. Sci. Num. Sim. 107 (2022) 106183
[3] A. Alonso-Izquierdo, L.M. Nieto, J. Queiroga-Nunes, Scattering between wobbling kinks, Phys. Rev. D103 (2021) 045003
[4] N.S. Manton, K. Oleś, T. Romanczukiewicz, A. Wereszczynski, Collective coordinate model of kink-antikink collisions in φ⁴ theory, Phys. Rev. Lett. 127 (2021) 071601
[5] R. Jackiw and C. Rebbi, Solitons with fermion number ½, Phys. Rev. D13 (1976) 3398
[6] R. Jackiw and J.R. Schrieffer, Solitons with fermion number ½ in condensed matter physics and relativistic field theory, Nucl. Phys. B190 [FS3] (1981) 253-332
[7] A. Niemi and G.W. Semenoff, Fermion number fractionization in quantum field theory, Phys. Rep. 135 (1986) 99
[8] A. Alonso-Izquierdo, R. Fresneda, J. Mateos Guilarte, D. Vassilevich, Soliton fermionic number from the heat kernel expansion, Eur. Phys. J. C79 (2019) 525
[9] Y. Chu and T. Vachaspati, Fermions on one or fewer kinks, Phys. Rev. D77 (2008) 025006
[10] D. Bazeia, A. Mohammadi, Fermionic bound states in distinct kinklike backgrounds, Eur. Phys. J. C77 (2017) 203
[11] J. Campos, A. Mohammadi, Kink-antikink collisions in the supersymmetric φ⁴ model, JHEP 08 (2022) 180
[12] D. Bazeia, J. Campos, A. Mohammadi, Resonance mediated by fermions in kink-antikink collisions, JHEP 12 (2022) 085
[13] J. Campos, A. Mohammadi, J. Queiruga, A. Wereszczynski, Fermionic spectral walls in kink collisions, JHEP 01 (2023) 071
[14] E. Farhi, N. Graham, R.L. Jaffe, H. Weigel, A heavy fermion can create a soliton: a 1+1 dimensional example, Phys. Lett. B475 (2000) 335
[15] T. Sugiyama, Kink-antikink collisions in the two-dimensional φ⁴ model, Prog. Theor. Phys. 61 (1979) 1550-1563
[16] D.K. Campbell, J.F. Schonfeld, C.A. Wingate, Resonance structure in kink-antikink interactions in φ⁴ theory, Physica D9 (1983) 1-32
[17] A. Alonso-Izquierdo, M.A. Gonzalez Leon, J. Mateos Guilarte, The kink variety in systems of two coupled scalar fields in two space-time dimensions, Phys. Rev. D65 (2002) 085012
[18] A. Alonso-Izquierdo, M.A. Gonzalez Leon, J. Mateos Guilarte, Kinks in a non-linear massive sigma model, Phys. Rev. Lett. 101 (2008) 131602
[19] A. Alonso-Izquierdo, W. Garcia Fuertes, J. Mateos Guilarte, A note on BPS vortex bound states, Phys. Lett. B753 (2016) 29-32
[20] A. Alonso-Izquierdo, W. Garcia Fuertes, J.
Mateos Guilarte, Dissecting zero modes and bound states on BPS vortices in Ginzburg-Landau superconductors, JHEP 05 (2016) 1-36
[21] A. Alonso-Izquierdo, W. Garcia Fuertes, N.S. Manton, J. Mateos Guilarte, Spectral flow of vortex shape modes over the BPS 2-vortex moduli space, JHEP 01 (2024) 020
[22] A. Alonso-Izquierdo, N.S. Manton, J. Mateos Guilarte, A. Wereszczynski, Collective coordinate models for 2-vortex shape mode dynamics, Phys. Rev. D110 (2024) 085006
[23] A. Alonso-Izquierdo, N.S. Manton, J. Mateos Guilarte, M. Rees, A. Wereszczynski, Dynamics of excited BPS 3-vortices, Phys. Rev. D111 (2025) 105021
THE GATEKEEPER KNOWS ENOUGH

Fikresilase W. Abebayew
BoA AI CoE
Fikresilase.Wondmeneh@bankofabyssinia.com

ABSTRACT

Large Language Models (LLMs) are increasingly deployed as autonomous agents, yet their practical utility is fundamentally constrained by a limited context window and state desynchronization resulting from the LLMs' stateless nature and inefficient context management. These limitations lead to unreliable output, unpredictable behavior, and inefficient resource usage, particularly when interacting with large, structured, and sensitive knowledge systems such as codebases and documents. To address these challenges, we introduce the Gatekeeper Protocol, a novel, domain-agnostic framework that governs agent-system interactions. Our protocol mandates that the agent first operate and reason on a minimalist, low-fidelity "latent state" representation of the system to strategically request high-fidelity context on demand. All interactions are mediated through a unified JSON format that serves as a declarative, state-synchronized protocol, ensuring the agent's model of the system remains verifiably grounded in the system's reality. We demonstrate the efficacy of this protocol with Sage, a reference implementation of the Gatekeeper Protocol for software development. Our results show that this approach significantly increases agent reliability, improves computational efficiency by minimizing token consumption, and enables scalable interaction with complex systems, creating a foundational methodology for building more robust, predictable, and grounded AI agents for any structured knowledge domain.

Keywords State Synchronization · Progressive Contextualization · Declarative Protocol · AI Agents · Grounded AI · Human-AI Interaction

1 Introduction

The deployment of Large Language Models (LLMs) as autonomous agents for complex tasks like software development is rapidly advancing using existing techniques [1, 2]. However, their practical utility is fundamentally constrained by the underlying LLM's stateless architecture and by the context engineering design [3, 4] of the agent program. This core limitation leads to state desynchronization, where the agent's internal model diverges from the system's true state [5]. Consequently, agents exhibit unreliable behavior, hallucinate nonexistent entities [6], and inefficiently manage context, precluding their use in high-stakes, real-world applications. In this work, we propose the Gatekeeper Protocol, a domain-agnostic framework that eschews ambiguous, conversational interaction and instead enforces a formal, state-synchronized communication layer between the agent and the system. Our protocol mandates that the agent first operate on a minimalist, low-fidelity "latent state" representation of the system. This forces the agent to reason strategically and request high-fidelity context only when necessary, rather than consuming the entire context. All interactions are mediated through a unified JSON format that serves as a declarative, state-synchronized ledger. This mechanism ensures the agent's model of the system remains verifiably grounded in reality. This approach transforms the agent from an unpredictable conversationalist into a deterministic and reliable partner, significantly improving token efficiency and scalability.

2 Background

The journey for many modern agents began with the "reason-act" loop, a powerful idea popularized by frameworks like ReAct [1].
This approach allows an agent to "think out loud" before acting, which was a major leap forward. But this simple loop has a critical weakness: an agent that can't remember its past is doomed to repeat its mistakes. This led to a wave of research focused on giving agents a memory. Architectures like those in Generative Agents [2] and Reflexion [7] introduced sophisticated memory streams and self-reflection abilities, allowing agents to learn from experience. However, giving an agent a vast memory created a new, more subtle problem: how does it find the right memory at the right time? As surveys on the topic show [6], this has sparked an arms race in memory management techniques. We now have complex systems like MemGPT [3] and HiAgent [4] that treat an agent's context like a virtual memory hierarchy in an operating system. These are brilliant engineering solutions for preventing context overflow and cutting down on redundant information. Yet, we argue that all these approaches share a common blind spot. They are hyper-focused on perfecting the information that goes into the LLM's black box, but they don't formalize the communication channel between the agent and the external world. The result is that the agent's understanding can still get out of sync with reality, a problem so significant that frameworks like SyncMind [5] are now being built just to measure it. The consensus is that state synchronization is essential [8], but the field has yet to coalesce around a standard way to achieve it.

The Gatekeeper Protocol charts a different course. We believe that true agent reliability comes not from a more complex memory system, but from a simpler, more robust interaction protocol. Our contribution isn't a better memory architecture; it's a formal contract that governs how an agent is allowed to perceive and act upon a system. We position our work in three distinct ways. First, we shift the focus from the "OS level" to the "API level." Instead of trying to manage the agent's internal memory (like MemGPT), we enforce a strict, declarative contract for every interaction using a unified JSON format. This makes every action an explicit, verifiable transaction, providing a clear audit trail of the agent's behavior. Second, our protocol champions an "inference-first" philosophy, a stark contrast to the dominant "retrieval-first" model of RAG. The broader community is starting to agree that simply retrieving documents isn't enough, leading to calls for smarter "Context Engineering" [9, 10]. The Gatekeeper Protocol provides a concrete framework for this idea. We force the agent to reason on a cheap, high-level map before it's allowed to ask for the expensive, detailed documents. It has to think before it reads. Finally, our protocol's declarative action space makes agents fundamentally safer. A ReAct-style agent might generate a rm -rf command, but a Gatekeeper agent can only state its intent, for example {"request": {"delete": {}}}. This simple but critical distinction means a trusted system can safely interpret the intent, preventing a whole class of unpredictable and potentially harmful actions.

3 The Gatekeeper Protocol

3.1 Architectural Overview

The protocol's architecture is defined by the manipulation of a single, central data structure: the System State-Context Representation (SCR), denoted as L. This unified JSON object serves three simultaneous roles:
1. A Latent Context Map: it provides a high-level, potentially low-fidelity, structural representation of the system.
2. A State Record: it is the authoritative, ground-truth record of the system's state at a given time.
3. An Action Interface: the agent proposes actions by modifying specific fields within this object.

The interaction is a discrete, time-stepped cycle (Figure 1). At each step t, the agent's policy, π_agent, receives the current SCR, L_t. Based on a given task T, it generates a proposed modification, L′_t, which encodes a desired action A_t. The system executes the action, producing a new SCR, L_{t+1}, which begins the next cycle.

Figure 1: The Gatekeeper Protocol cycle: L_t --(Agent Policy π_agent)--> L′_t (containing A_t) --(System Execution Φ)--> L_{t+1}

3.2 Latent Context and Progressive Contextualization

To ensure token efficiency, the protocol's SCR begins as a low-fidelity latent map. Components within the SCR can have placeholder values (e.g., "unsummarized"). The agent operates on an "inference-first" principle, using this structural information to reason about the task before requesting high-fidelity content. This process of deciding when to request more context is modeled as a policy, π_agent, that seeks to maximize the expected value of future states while minimizing the cost of information retrieval. At each step t, the agent chooses a set of unsummarized components, C, to query via a provide action. This choice, A_t(C), is determined by:

A_t(C)=\pi_{\mathrm{agent}}(L_t,T)=\underset{C\subseteq S_{\mathrm{unsum}}}{\arg\max}\,\big[\mathbb{E}[V(L_{t+1}\mid L_t,A_t(C))]-\lambda\cdot\mathrm{Cost}(C)\big]   (1)

Where:
• S_unsum is the set of all components with unsummarized content in L_t.
• E[V(L_{t+1} | ...)] is the expected value (or utility) of the next state, given the current state and the chosen action. This represents the agent's belief about how much the new information will help in solving task T.
• Cost(C) is the token cost associated with retrieving and processing the content of the components in C.
• λ is a hyperparameter that balances the trade-off between information gain and token cost.

3.3 System State Representation and Synchronization

To eliminate state desynchronization, the protocol enforces that the SCR, L, is the single source of truth. The agent's understanding of the system is not an internal belief but is entirely represented by the SCR it possesses. Synchronization is achieved through a deterministic state transition function, Φ, executed by the system. The agent proposes an action by modifying the request fields in its current SCR, L_t, to create a proposed state change, L′_t. The system validates and executes this proposal. The state transition is defined as a piecewise function:

L_{t+1}=\begin{cases}\Phi(L_t,L'_t)&\text{if }\mathrm{IsValid}(L'_t,L_t)\\ L_t&\text{otherwise}\end{cases}   (2)

Where:
• IsValid(L′_t, L_t) is a validation function that returns true if the action encoded in L′_t is permissible given the current state L_t.
• Φ(L_t, L′_t) is the execution function that applies the valid changes and returns the new, ground-truth SCR.

This mechanism guarantees that the agent's state is always synchronized with the system's state, as any invalid or failed action results in the state remaining unchanged, preventing state drift.
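A minimal sketch of this transition function follows (our illustration, not Sage's actual implementation; the validation rule and the read_source loader are hypothetical stand-ins). It shows an SCR whose components carry request fields, including the provide and delete intents discussed in the next subsection.

```python
# Sketch of the SCR state transition Phi of Eq. (2): the agent proposes L'_t by
# filling "request" fields; the system validates and either applies or rejects.
import copy
import json

def is_valid(proposed, current):
    """Toy rule: permit requests only on components that exist in the current SCR."""
    return all(name in current["components"] for name in proposed["components"])

def read_source(name):                 # hypothetical high-fidelity loader (stub)
    return f"<high-fidelity content of {name}>"

def phi(current, proposed):
    if not is_valid(proposed, current):
        return current                 # invalid action: state stays L_t, no drift
    nxt = copy.deepcopy(current)
    for name, comp in proposed["components"].items():
        req = comp.get("request", {})
        if "provide" in req:           # raise fidelity on demand
            nxt["components"][name]["content"] = read_source(name)
        if "delete" in req:
            del nxt["components"][name]
    return nxt

L_t = {"components": {"src/app.py": {"content": "unsummarized"},
                      "README.md": {"content": "unsummarized"}}}
L_prop = {"components": {"src/app.py": {"request": {"provide": {}}}}}
print(json.dumps(phi(L_t, L_prop), indent=1))
```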
An action At is encoded by populating the request field of one or more components within the proposed SCR, L′ t. For example, an intent to delete a component s is encoded as setting the request field of s in L′ t to "{"delete": {}}". This declarative model decouples the agent’s intent from the system’s execution logic, allowing for a trusted layer to safely validate, log, and perform the requested state transition. 4 Empirical Evaluation To rigorously test the robustness and generalizability of the Gatekeeper Protocol, we conducted a comprehensive empirical evaluation. We compared our protocol against four common context management strategies across a suite of three diverse programming tasks, implementing each strategy with seven different LLMs to ensure our findings were model-agnostic. For our protocol’s implementation, we used Sage, our open-source agent publicly available on GitHub1. 4.1 Experimental Design Tasks. To avoid overfitting to a single problem type, our evaluation suite included three distinct tasks: 1. Python Refactoring (Stateful): A multi-file function renaming task testing state tracking and dependency management. 2. Frontend Component Creation (Structured): A task to build a new React component using a well-defined library (Next.js with Shadcn/ui). 3. Python Web Scraping (Exploratory): A task to write a new script to scrape data from a live, previously unseen website. Models and Strategies. Each task was run with seven models (Google: Gemini 2.0 Flash Experimental, Qwen: Qwen3 Coder 480B A35B, DeepSeek: R1 Microsoft: MAI DS R1, OpenAI: gpt-oss-20b, Mistral: Mistral Small 3.2 24B and NVIDIA: Nemotron Nano 9B V2 ) across five strategies: (1) Full Codebase, (2) Recent Files, (3) RAG, (4) ReAct Agent, and (5) Sage (Gatekeeper Protocol). Baseline Fairness. To ensure a fair comparison, all baselines were implemented with standardized prompts. Our RAG implementation used cosine similarity over 512-token chunks from a ChromaDB vector store. The ReAct agent was given an identical imperative toolset to Sage. 4.2 Metrics We defined three primary metrics, averaged over R = 7 model runs for each task, and then averaged across all three tasks. Average Task Completion Progress (%). For a task decomposed into K sub-tasks, let ci,j be a binary variable for the completion of sub-task j in run i. The average progress is: Progressavg = 100 R · K R X i=1 K X j=1 ci,j (3) Average Grounding Errors. Let Ei be the count of grounding errors (actions based on a false state belief) for run i. The average is: GEavg = 1 R R X i=1 Ei (4) 1https://github.com/Fikresilase/sage 4 Average Total Tokens. This metric measures the cumulative sum of all input and output tokens for the entire duration of a task run, providing a holistic measure of computational cost. 4.3 Quantitative Results The aggregated results, averaged across all models and tasks, are presented in Table 1. Table 1: Comparative Performance of Context Management Strategies (Averaged Across 3 Tasks and 7 LLMs). Values are mean ± standard deviation. Strategy Avg. Task Completion (%) Avg. Grounding Errors Avg. Total Tokens Full Codebase 48% ± 18% 4.3 ± 2.1 19,100 ± 3500 Recent Files 31% ± 22% 5.8 ± 2.5 9,800 ± 4100 RAG 58% ± 15% 3.1 ± 1.5 14,300 ± 2800 ReAct Agent 55% ± 17% 3.4 ± 1.8 15,200 ± 3100 Sage (Gatekeeper) 73% ± 8% 0.8 ± 0.4 6,200 ± 1200 As shown in Table 1, the Gatekeeper Protocol consistently outperforms all baselines. Sage achieved an average task completion of 73%, a notable improvement over the next best strategy, RAG, at 58%. 
As shown in Table 1, the Gatekeeper Protocol consistently outperforms all baselines. Sage achieved an average task completion of 73%, a notable improvement over the next best strategy, RAG, at 58%. Crucially, the low standard deviation (±8%) indicates a high degree of reliability across different models and tasks. It also committed an order of magnitude fewer grounding errors and was over twice as token-efficient.

4.4 Qualitative Analysis and Discussion

Our observations revealed a direct relationship between a codebase's conventionality and the Gatekeeper Protocol's token efficiency. In the highly structured Next.js/Shadcn task, Sage's initial latent map was remarkably accurate, allowing it to solve the task with minimal provide requests and thus very low token usage. Conversely, in the bespoke web scraping task, which required understanding a unique and unknown file structure, Sage was more cautious: it issued more provide requests to build up its high-fidelity context, leading to an increase in token consumption. This demonstrates that the protocol's efficiency is proportional to the "common knowledge" embedded in the system's structure, a feature that allows it to dynamically adapt its cost to the complexity of the task. This adaptive behavior was absent in the baseline models, which consumed high token counts regardless of task structure.

4.5 Threats to Validity

We acknowledge several threats to validity. Our evaluation, while diverse, was confined to three tasks; a larger benchmark suite is needed for full generalization. The performance of RAG is highly sensitive to chunking and embedding strategies, and different configurations might yield different results. However, we believe the consistency of the Gatekeeper Protocol's superior performance across a diverse model set provides strong evidence for its general effectiveness.

5 Discussion

Our results show that a formalized interaction protocol is a more significant driver of agent reliability than the choice of the underlying LLM. The consistent outperformance of our agent, Sage, is not an accident of the model but a direct consequence of the Gatekeeper Protocol's architecture.

5.1 Analysis of Findings

The protocol's success stems from three principles. First, the inference-first model of contextualization forces the agent to reason on a cheap, low-fidelity map before requesting expensive, high-fidelity context. This minimizes grounding errors by ensuring the agent acts only on a relevant, validated subset of the knowledge base. Second, the transactional nature of state synchronization provides a powerful guardrail against context drift. While ReAct agents often wasted resources re-reading files to re-establish state, our agent was guaranteed a fresh, ground-truth representation after every action. This eliminated the need for costly re-validation and was the primary driver of its high task completion rate. Finally, the protocol effectively scaffolds the reasoning process of diverse LLMs. By imposing a correct and efficient procedure, the Gatekeeper's structure compensates for some of the raw reasoning deficiencies of the underlying models, guiding them toward a valid solution.

5.2 Limitations of the Protocol

Despite its strong performance, the protocol has clear boundaries.
1. It requires structured knowledge. The protocol's reliance on a structural "map" makes it unsuitable for monolithic, unstructured data sources.
2. It introduces latency. The reliability gained from its multi-turn, transactional nature comes at the cost of speed compared to one-shot generation.
3. It is bounded by LLM reasoning. The protocol can guide a confused agent, but it cannot fix a fundamental inability to reason about a task.
Agent performance is ultimately limited by the core intelligence of the model.
4. Initial analysis can be costly. Generating the latent map for extremely large systems could be computationally intensive, an area for future optimization.

5.3 Broader Impact

While tested on code, the Gatekeeper Protocol is a domain-agnostic methodology. Its core insight is that for high-stakes professional work, raw generative power is insufficient. The future of reliable AI assistance lies in shifting focus from developing more complex internal memories to designing formal, structured interaction protocols. This provides a foundational pathway toward building autonomous agents that are predictable, verifiable, and ultimately, trustworthy.

6 Conclusion

This paper confronted the critical challenges of state desynchronization, context management, and grounding that limit current LLM agents. We argued that the solution lies not in better memory, but in a better protocol for interaction. We introduced the Gatekeeper Protocol, a framework built on a latent context map, a synchronized state ledger, and a declarative action space. Our empirical evaluation showed that this protocol delivers substantial and consistent improvements in agent reliability and efficiency across a diverse suite of language models, strongly suggesting that architecture, not the model, is the primary driver of robust performance. The work presented here lays a foundational methodology, but the path forward is rich with possibilities. A key direction for future work is the development of a hierarchical latent map, or "latent map tree", to further enhance scalability. In this model, a provide action on a high-level segment (e.g., a folder) would return not raw content, but a more detailed, lower-level latent map for that segment. This would allow the agent to recursively navigate massive knowledge systems, progressively increasing the resolution of its context only where necessary. This structural enhancement could be combined with advanced reasoning techniques like model chaining or query transformation. Moreover, there is a clear need to refine these principles into a universal Gatekeeper specification: a truly domain-agnostic protocol that can be implemented across a variety of fields with minimal adaptation. Ultimately, the Gatekeeper Protocol offers a crucial pathway toward building autonomous systems that are powerful, predictable, and safe enough for high-stakes, real-world applications.

References

[1] Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. ReAct: Synergizing reasoning and acting in language models. In International Conference on Learning Representations (ICLR), 2023.
[2] Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein. Generative agents: Interactive simulacra of human behavior. arXiv preprint arXiv:2304.03442, 2023.
[3] Charles Packer, Vivian Fang, Shishir G. Patil, Kevin Lin, Joseph E. Gonzalez, Ion Stoica, and Michael I. Jordan. MemGPT: Towards LLMs as operating systems. arXiv preprint arXiv:2310.08560, 2023.
[4] Mengkang Hu, Tianxing Chen, Qiguang Chen, Yao Mu, Wenqi Shao, and Ping Luo. HiAgent: Hierarchical working memory management for solving long-horizon agent tasks with large language model. arXiv preprint arXiv:2402.09559, 2024.
[5] Xuehang Guo, Xingyao Wang, Yangyi Chen, Sha Li, Chi Han, Manling Li, and Heng Ji. SyncMind: Measuring agent out-of-sync recovery in collaborative software engineering.
arXiv preprint arXiv:2502.06994, 2025.
[6] Zeyu Zhang, Xiaohe Bo, Chen Ma, Rui Li, Xu Chen, Quanyu Dai, Jieming Zhu, Zhenhua Dong, and Ji-Rong Wen. A survey on the memory mechanism of large language model based agents. arXiv preprint arXiv:2404.13501, 2024.
[7] Noah Shinn, Beck Labash, and Ashwin Gopinath. Reflexion: Language agents with verbal reinforcement learning. arXiv preprint arXiv:2303.11366, 2023.
[8] Victor de Lamo Castrillo, Habtom Kahsay Gidey, Alexander Lenz, and Alois Knoll. Fundamentals of building autonomous LLM agents. arXiv preprint arXiv:2510.09244, 2025.
[9] Anthropic. Effective context engineering for AI agents. Anthropic Engineering Blog, 2024.
[10] Jeff Huber. RAG is dead, context engineering is king. Latent.Space Blog, 2024.
THE GATEKEEPER KNOWS ENOUGH Fikresilase W. Abebayew BoA AI CoE ABSTRACT Large Language Models (LLMs) are increasingly deployed as autonomous agents, yet their practical utility is fundamentally constrained by a limited context window and state desynchronization resulting from the LLMs' stateless nature and inefficient context management. These limitations lead to unreliable output, unpredictable behavior, and inefficient resource usage, particularly when interacting with large, structured, and sensitive knowledge systems such as codebases and documents. To address these challenges, we introduce the Gatekeeper Protocol, a novel, domain-agnostic framework that governs agent-system interactions. Our protocol mandates that the agent first operate and reason on a minimalist, low-fidelity "latent state" representation of the system to strategically request high-fidelity context on demand. All interactions are mediated through a unified JSON format that serves as a declarative, state-synchronized protocol, ensuring the agent's model of the system remains verifiably grounded in the system's reality. We demonstrate the efficacy of this protocol with Sage, a reference implementation of the Gatekeeper Protocol for software development. Our results show that this approach significantly increases agent reliability, improves computational efficiency by minimizing token consumption, and enables scalable interaction with complex systems, creating a foundational methodology for building more robust, predictable, and grounded AI agents for any structured knowledge domain. Keywords State Synchronization · Progressive Contextualization · Declarative Protocol · AI Agents · Grounded AI · Human-AI Interaction 1 Introduction The deployment of Large Language Models (LLMs) as autonomous agents for complex tasks like software development is rapidly advancing using the already existing techniques [1, 2]. However, their practical utility is fundamentally constrained by their LLM's stateless architecture and the context engineering design [3, 4] on the agent program. This core limitation leads to state desynchronization, where the agent's internal model diverges from the system's true state [5]. Consequently, agents exhibit unreliable behavior, hallucinate nonexistent entities [6], and inefficiently manage context, precluding their use in high-stakes, real-world applications. In this work, we propose the Gatekeeper Protocol, a domain-agnostic framework that eschews ambiguous, conversational interaction and instead enforces a formal, state-synchronized communication layer between the agent and the system. Our protocol mandates that the agent first operate on a minimalist, low-fidelity "latent state" representation of the system. This forces the agent to reason strategically and request high-fidelity context only when necessary, rather than consuming the entire context. All interactions are mediated through a unified JSON format that serves as a declarative, state-synchronized ledger. This mechanism ensures the agent's model of the system remains verifiably grounded in reality. This approach transforms the agent from an unpredictable conversationalist into a deterministic and reliable partner, significantly improving token efficiency and scalability. 2 Background The journey for many modern agents began with the "reason-act" loop, a powerful idea popularized by frameworks like ReAct [1]. This approach allows an agent to "think out loud" before acting, which was a major leap forward. 
But this 16 Oct 2025 simple loop has a critical weakness: an agent that can't remember its past is doomed to repeat its mistakes. This led to a wave of research focused on giving agents a memory. Architectures like those in Generative Agents [2] and Reflexion [7] introduced sophisticated memory streams and self-reflection abilities, allowing agents to learn from experience. However, giving an agent a vast memory created a new, more subtle problem: how does it find the right memory at the right time? As surveys on the topic show [6], this has sparked an arms race in memory management techniques. We now have complex systems like MemGPT [3] and HiAgent [4] that treat an agent's context like a virtual memory hierarchy in an operating system. These are brilliant engineering solutions for preventing context overflow and cutting down on redundant information. Yet, we argue that all these approaches share a common blind spot. They are hyper-focused on perfecting the information that goes into the LLM's black box, but they don't formalize the communication channel between the agent and the external world. The result is that the agent's understanding can still get out of sync with reality a problem so significant that frameworks like SyncMind [5] are now being built just to measure it. The consensus is that state synchronization is essential [8], but the field has yet to coalesce around a standard way to achieve it. The Gatekeeper Protocol charts a different course. We believe that true agent reliability comes not from a more complex memory system, but from a simpler, more robust interaction protocol. Our contribution isn't a better memory architecture; it's a formal contract that governs how an agent is allowed to perceive and act upon a system. We position our work in three distinct ways. First, we shift the focus from the "OS level" to the "API level." Instead of trying to manage the agent's internal memory (like MemGPT), we enforce a strict, declarative contract for every interaction using a unified JSON format. This makes every action an explicit, verifiable transaction, providing a clear audit trail of the agent's behavior. Second, our protocol champions an "inference-first" philosophy, a stark contrast to the dominant "retrieval-first" model of RAG. The broader community is starting to agree that simply retrieving documents isn't enough, leading to calls for smarter "Context Engineering" [9, 10]. The Gatekeeper Protocol provides a concrete framework for this idea. We force the agent to reason on a cheap, high-level map before it's allowed to ask for the expensive, detailed documents. It has to think before it reads. Finally, our protocol's declarative action space makes agents fundamentally safer. A ReAct-style agent might generate a rm -rf command, but a Gatekeeper agent can only state its intent-for example, {"request": {"delete": {}}}. This simple but critical distinction means a trusted system can safely interpret the intent, preventing a whole class of unpredictable and potentially harmful actions. 3 The Gatekeeper Protocol 3.1 Architectural Overview The protocol's architecture is defined by the manipulation of a single, central data structure: the System State-Context Representation (SCR), denoted as L. This unified JSON object serves three simultaneous roles: 1. A Latent Context Map: It provides a high-level, potentially low-fidelity, structural representation of the system. 2. A State Record: It is the authoritative, ground-truth record of the system's state at a given time. 3. 
3. An Action Interface: The agent proposes actions by modifying specific fields within this object.

The interaction is a discrete, time-stepped cycle (Figure 1). At each step t, the agent's policy, π_agent, receives the current SCR, L_t. Based on a given task T, it generates a proposed modification, L'_t, which encodes a desired action A_t. The system executes the action, producing a new SCR, L_{t+1}, which begins the next cycle.

Figure 1: The Gatekeeper Protocol cycle. The agent policy π_agent maps L_t to L'_t (containing A_t); system execution Φ produces L_{t+1}.

3.2 Latent Context and Progressive Contextualization

To ensure token efficiency, the protocol's SCR begins as a low-fidelity latent map. Components within the SCR can have placeholder values (e.g., "unsummarized"). The agent operates on an "inference-first" principle, using this structural information to reason about the task before requesting high-fidelity content. This process of deciding when to request more context is modeled as a policy, π_agent, that seeks to maximize the expected value of future states while minimizing the cost of information retrieval. At each step t, the agent chooses a set of unsummarized components, C, to query via a provide action. This choice, A_t(C), is determined by:

A_t(C) = \pi_{agent}(L_t, T) = \operatorname{argmax}_{C \subseteq S_{unsum}} \left[ \mathbb{E}\big[V(L_{t+1} \mid L_t, A_t(C))\big] - \lambda \cdot \mathrm{Cost}(C) \right]   (1)

Where:
• S_unsum is the set of all components with unsummarized content in L_t.
• E[V(L_{t+1} | ...)] is the expected value (or utility) of the next state, given the current state and the chosen action. This represents the agent's belief about how much the new information will help in solving task T.
• Cost(C) is the token cost associated with retrieving and processing the content of the components in C.
• λ is a hyperparameter that balances the trade-off between information gain and token cost.

3.3 System State Representation and Synchronization

To eliminate state desynchronization, the protocol enforces that the SCR, L, is the single source of truth. The agent's understanding of the system is not an internal belief but is entirely represented by the SCR it possesses. Synchronization is achieved through a deterministic state transition function, Φ, executed by the system. The agent proposes an action by modifying the request fields in its current SCR, L_t, to create a proposed state change, L'_t. The system validates and executes this proposal. The state transition is defined as a piecewise function:

L_{t+1} = \begin{cases} \Phi(L_t, L'_t) & \text{if } \mathrm{IsValid}(L'_t, L_t) \\ L_t & \text{otherwise} \end{cases}   (2)

Where:
• IsValid(L'_t, L_t) is a validation function that returns true if the action encoded in L'_t is permissible given the current state L_t.
• Φ(L_t, L'_t) is the execution function that applies the valid changes and returns the new, ground-truth SCR.

This mechanism guarantees that the agent's state is always synchronized with the system's state, as any invalid or failed action results in the state remaining unchanged, preventing state drift.

3.4 The Declarative Action Interface

The SCR itself serves as the action interface. To ensure safety and uniformity, the agent's actions must be declarative, not imperative. The action space A is a finite set of intents (e.g., provide, edit, write, delete). An action A_t is encoded by populating the request field of one or more components within the proposed SCR, L'_t. For example, an intent to delete a component s is encoded as setting the request field of s in L'_t to {"delete": {}}.
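To make the transition semantics concrete, the following minimal Python sketch implements Eq. (2) over a dict-based SCR. It is an illustration under assumed conventions (the component layout, an is_valid stub, and only the delete intent handled), not the protocol's reference implementation.

```python
import copy

# Illustrative declarative action vocabulary (Section 3.4).
ALLOWED_INTENTS = {"provide", "edit", "write", "delete"}

def is_valid(proposed: dict, current: dict) -> bool:
    """Validation stub: every request must use a known intent and
    target a component that exists in the current SCR."""
    for name, component in proposed.get("components", {}).items():
        request = component.get("request")
        if request is None:
            continue
        if name not in current.get("components", {}):
            return False  # action targets a nonexistent component
        if not set(request) <= ALLOWED_INTENTS:
            return False  # unknown intent
    return True

def phi(current: dict, proposed: dict) -> dict:
    """State transition of Eq. (2): apply the proposal only if it
    validates; otherwise return the current SCR unchanged."""
    if not is_valid(proposed, current):
        return current  # invalid proposals cannot cause state drift
    next_scr = copy.deepcopy(current)
    for name, component in proposed["components"].items():
        request = component.get("request") or {}
        if "delete" in request:
            del next_scr["components"][name]
        # provide/edit/write intents would be handled analogously.
    return next_scr

# A toy SCR with one unsummarized component, and a proposal carrying
# the declarative delete intent from the example above.
L_t = {"components": {"src/app.py": {"summary": "unsummarized"}}}
L_t_prime = {"components": {"src/app.py": {"request": {"delete": {}}}}}
assert "src/app.py" not in phi(L_t, L_t_prime)["components"]
```

Because phi returns L_t unchanged on any invalid proposal, the agent's next observation is always the ground-truth state.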
This declarative model decouples the agent's intent from the system's execution logic, allowing a trusted layer to safely validate, log, and perform the requested state transition.

4 Empirical Evaluation

To rigorously test the robustness and generalizability of the Gatekeeper Protocol, we conducted a comprehensive empirical evaluation. We compared our protocol against four common context management strategies across a suite of three diverse programming tasks, implementing each strategy with seven different LLMs to ensure our findings were model-agnostic. For our protocol's implementation, we used Sage, our open-source agent publicly available on GitHub¹.

1 https://github.com/Fikresilase/sage

4.1 Experimental Design

Tasks. To avoid overfitting to a single problem type, our evaluation suite included three distinct tasks:

1. Python Refactoring (Stateful): A multi-file function renaming task testing state tracking and dependency management.
2. Frontend Component Creation (Structured): A task to build a new React component using a well-defined library (Next.js with Shadcn/ui).
3. Python Web Scraping (Exploratory): A task to write a new script to scrape data from a live, previously unseen website.

Models and Strategies. Each task was run with seven models (Google: Gemini 2.0 Flash Experimental, Qwen: Qwen3 Coder 480B A35B, DeepSeek: R1, Microsoft: MAI DS R1, OpenAI: gpt-oss-20b, Mistral: Mistral Small 3.2 24B, and NVIDIA: Nemotron Nano 9B V2) across five strategies: (1) Full Codebase, (2) Recent Files, (3) RAG, (4) ReAct Agent, and (5) Sage (Gatekeeper Protocol).

Baseline Fairness. To ensure a fair comparison, all baselines were implemented with standardized prompts. Our RAG implementation used cosine similarity over 512-token chunks from a ChromaDB vector store. The ReAct agent was given an identical imperative toolset to Sage.

4.2 Metrics

We defined three primary metrics, averaged over R = 7 model runs for each task, and then averaged across all three tasks.

Average Task Completion Progress (%). For a task decomposed into K sub-tasks, let c_{i,j} be a binary variable for the completion of sub-task j in run i. The average progress is:

\mathrm{Progress}_{avg} = \frac{100}{R \cdot K} \sum_{i=1}^{R} \sum_{j=1}^{K} c_{i,j}   (3)

Average Grounding Errors. Let E_i be the count of grounding errors (actions based on a false state belief) for run i. The average is:

\mathrm{GE}_{avg} = \frac{1}{R} \sum_{i=1}^{R} E_i   (4)

Average Total Tokens. This metric measures the cumulative sum of all input and output tokens for the entire duration of a task run, providing a holistic measure of computational cost.
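These aggregates are straightforward to compute; a minimal sketch of Equations (3) and (4), with variable names of our choosing:

```python
def avg_progress(completions: list[list[int]]) -> float:
    """Eq. (3): completions[i][j] is the binary c_{i,j} marking whether
    sub-task j was completed in run i."""
    runs, subtasks = len(completions), len(completions[0])
    return 100 / (runs * subtasks) * sum(sum(run) for run in completions)

def avg_grounding_errors(errors: list[int]) -> float:
    """Eq. (4): errors[i] is the grounding-error count E_i for run i."""
    return sum(errors) / len(errors)

# Toy example: 2 runs over a task decomposed into 4 sub-tasks.
print(avg_progress([[1, 1, 0, 1], [1, 0, 0, 1]]))  # 62.5
print(avg_grounding_errors([2, 0]))                # 1.0
```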
4.3 Quantitative Results

The aggregated results, averaged across all models and tasks, are presented in Table 1.

Table 1: Comparative Performance of Context Management Strategies (Averaged Across 3 Tasks and 7 LLMs). Values are mean ± standard deviation.

Strategy | Avg. Task Completion (%) | Avg. Grounding Errors | Avg. Total Tokens
Full Codebase | 48% ± 18% | 4.3 ± 2.1 | 19,100 ± 3,500
Recent Files | 31% ± 22% | 5.8 ± 2.5 | 9,800 ± 4,100
RAG | 58% ± 15% | 3.1 ± 1.5 | 14,300 ± 2,800
ReAct Agent | 55% ± 17% | 3.4 ± 1.8 | 15,200 ± 3,100
Sage (Gatekeeper) | 73% ± 8% | 0.8 ± 0.4 | 6,200 ± 1,200

As shown in Table 1, the Gatekeeper Protocol consistently outperforms all baselines. Sage achieved an average task completion of 73%, a notable improvement over the next best strategy, RAG, at 58%. Crucially, the low standard deviation (±8%) indicates a high degree of reliability across different models and tasks. It also committed an order of magnitude fewer grounding errors and was over twice as token-efficient.

4.4 Qualitative Analysis and Discussion

Our observations revealed a direct relationship between a codebase's conventionality and the Gatekeeper Protocol's token efficiency. In the highly structured Next.js/Shadcn task, Sage's initial latent map was remarkably accurate, allowing it to solve the task with minimal 'provide' requests and thus very low token usage. Conversely, in the bespoke web scraping task, which required understanding a unique and unknown file structure, Sage was more cautious. It issued more 'provide' requests to build up its high-fidelity context, leading to an increase in token consumption. This demonstrates that the protocol's efficiency is proportional to the "common knowledge" embedded in the system's structure, a feature that allows it to dynamically adapt its cost to the complexity of the task. This adaptive behavior was absent in the baseline models, which consumed high token counts regardless of task structure.

4.5 Threats to Validity

We acknowledge several threats to validity. Our evaluation, while diverse, was confined to three tasks. A larger benchmark suite is needed for full generalization. The performance of RAG is highly sensitive to chunking and embedding strategies, and different configurations might yield different results. However, we believe the consistency of the Gatekeeper Protocol's superior performance across a diverse model set provides strong evidence for its general effectiveness.

5 Discussion

Our results show that a formalized interaction protocol is a more significant driver of agent reliability than the choice of the underlying LLM. The consistent outperformance of our agent, Sage, is not an accident of the model but a direct consequence of the Gatekeeper Protocol's architecture.

5.1 Analysis of Findings

The protocol's success stems from three principles. First, the inference-first model of contextualization forces the agent to reason on a cheap, low-fidelity map before requesting expensive, high-fidelity context. This minimizes grounding errors by ensuring the agent acts only on a relevant, validated subset of the knowledge base.

Second, the transactional nature of state synchronization provides a powerful guardrail against context drift. While ReAct agents often wasted resources re-reading files to re-establish state, our agent was guaranteed a fresh, ground-truth representation after every action. This eliminated the need for costly re-validation and was the primary driver of its high task completion rate.

Finally, the protocol effectively scaffolds the reasoning process of diverse LLMs. By imposing a correct and efficient procedure, the Gatekeeper's structure compensates for some of the raw reasoning deficiencies of the underlying models, guiding them toward a valid solution.

5.2 Limitations of the Protocol

Despite its strong performance, the protocol has clear boundaries.

1. It requires structured knowledge. The protocol's reliance on a structural "map" makes it unsuitable for monolithic, unstructured data sources.
2. It introduces latency. The reliability gained from its multi-turn, transactional nature comes at the cost of speed compared to one-shot generation.
3. It is bounded by LLM reasoning. The protocol can guide a confused agent, but it cannot fix a fundamental inability to reason about a task. Agent performance is ultimately limited by the core intelligence of the model.
4. Initial analysis can be costly. Generating the latent map for extremely large systems could be computationally intensive, an area for future optimization.
5.3 Broader Impact

While tested on code, the Gatekeeper Protocol is a domain-agnostic methodology. Its core insight is that for high-stakes professional work, raw generative power is insufficient. The future of reliable AI assistance lies in shifting focus from developing more complex internal memories to designing formal, structured interaction protocols. This provides a foundational pathway toward building autonomous agents that are predictable, verifiable, and ultimately, trustworthy.

6 Conclusion

This paper confronted the critical challenges of state desynchronization, context management, and grounding that limit current LLM agents. We argued that the solution lies not in better memory, but in a better protocol for interaction. We introduced the Gatekeeper Protocol, a framework built on a latent context map, a synchronized state ledger, and a declarative action space. Our empirical evaluation showed that this protocol delivers substantial and consistent improvements in agent reliability and efficiency across a diverse suite of language models, strongly suggesting that architecture, not the model, is the primary driver of robust performance.

The work presented here lays a foundational methodology, but the path forward is rich with possibilities. A key direction for future work is the development of a hierarchical latent map, or "latent map tree," to further enhance scalability. In this model, a provide action on a high-level segment (e.g., a folder) would return not raw content, but a more detailed, lower-level latent map for that segment. This would allow the agent to recursively navigate massive knowledge systems, progressively increasing the resolution of its context only where necessary. This structural enhancement could be combined with advanced reasoning techniques like model chaining or query transformation. Moreover, there is a clear need to refine these principles into a universal Gatekeeper specification: a truly domain-agnostic protocol that can be implemented across a variety of fields with minimal adaptation. Ultimately, the Gatekeeper Protocol offers a crucial pathway toward building autonomous systems that are powerful, predictable, and safe enough for high-stakes, real-world applications.

References

[1] Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. ReAct: Synergizing reasoning and acting in language models. In International Conference on Learning Representations (ICLR), 2023.
[2] Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein. Generative agents: Interactive simulacra of human behavior. arXiv preprint, 2023.
[3] Charles Packer, Vivian Fang, Shishir G. Patil, Kevin Lin, Joseph E. Gonzalez, Ion Stoica, and Michael I. Jordan. MemGPT: Towards LLMs as operating systems. arXiv preprint, 2023.
[4] Mengkang Hu, Tianxing Chen, Qiguang Chen, Yao Mu, Wenqi Shao, and Ping Luo. HiAgent: Hierarchical working memory management for solving long-horizon agent tasks with large language model. arXiv preprint, 2024.
[5] Xuehang Guo, Xingyao Wang, Yangyi Chen, Sha Li, Chi Han, Manling Li, and Heng Ji. SyncMind: Measuring agent out-of-sync recovery in collaborative software engineering. arXiv preprint, 2025.
[6] Zeyu Zhang, Xiaohe Bo, Chen Ma, Rui Li, Xu Chen, Quanyu Dai, Jieming Zhu, Zhenhua Dong, and Ji-Rong Wen. A survey on the memory mechanism of large language model based agents. arXiv preprint, 2024.
[7] Noah Shinn, Beck Labash, and Ashwin Gopinath.
Reflexion: Language agents with verbal reinforcement learning. arXiv preprint, 2023.
[8] Victor de Lamo Castrillo, Habtom Kahsay Gidey, Alexander Lenz, and Alois Knoll. Fundamentals of building autonomous LLM agents. arXiv preprint, 2025.
[9] Anthropic. Effective context engineering for AI agents. Anthropic Engineering Blog, 2024.
[10] Jeff Huber. RAG is dead, context engineering is king. Latent.Space Blog, 2024.
arXiv:2510.14880v1 [cs.IR] 16 Oct 2025

Fantastic (small) Retrievers and How to Train Them: mxbai-edge-colbert-v0 Tech Report

Rikiya Takehi 1,2,⋆, Benjamin Clavié 1, Sean Lee 1, and Aamir Shakir 1
1 Mixedbread AI
2 Waseda University
{rikiya,ben,sean}@mixedbread.com
⋆ Work performed during an internship at Mixedbread.

Abstract. In this work, we introduce the mxbai-edge-colbert-v0 models, at two different parameter counts: 17M and 32M. As part of our research, we conduct numerous experiments to improve retrieval and late-interaction models, which we intend to distill into smaller models as proof-of-concepts. Our ultimate aim is to support retrieval at all scales, from large-scale retrieval living in the cloud to models that can run locally, on any device. mxbai-edge-colbert-v0 is a model family that we hope will serve as a solid foundation backbone for all future experiments, representing the first version of a long series of small proof-of-concepts. As part of the development of mxbai-edge-colbert-v0, we conducted multiple ablation studies, of which we report the results. In terms of downstream performance, mxbai-edge-colbert-v0 is a particularly capable small model, outperforming ColBERTv2 on common short-text benchmarks (BEIR) and representing a large step forward in long-context tasks, with unprecedented efficiency.

1 Introduction

In the last two years, neural Information Retrieval (IR) has experienced an unprecedented level of interest, owing in large part to the rapid development and deployment of Large Language Models (LLMs) and the proven effectiveness of Retrieval Augmented Generation (RAG) pipelines [13], where retrieval models are used to provide LLMs with useful context.

As part of this wave, end-user interest has grown in multi-vector retrieval methods, also called late interaction models or, more simply, ColBERT, after the model which initially introduced this method [11]. Where the dominant paradigm in neural IR, Dense Passage Retrieval (DPR) [36], leverages a single, large vector to represent documents, ColBERT models instead employ numerous smaller vectors, with each individual token representation projected to a small dimension and then retained (a minimal scoring sketch is given at the end of this section). In order to make this tractable, ColBERT models are frequently used with aggressive index quantization [25,24] or as second-stage rankers in a larger pipeline.

The growing popularity of multi-vector models can be explained by their retrieval performance. ColBERT models have been noted for their particularly robust out-of-domain performance [25], especially in multi-modal settings [27]. They have also recently been demonstrated to provably alleviate certain limitations of single-vector retrieval approaches, with a 150M parameter ColBERT model vastly outperforming 8B parameter single-vector embeddings on benchmarks designed to test the limits of embedding models [34].

In spite of these strong performances, the ecosystem for open ColBERT models has moved more slowly than that of single-vector models. Up until last year, the most widely used ColBERT model was ColBERTv2, originally released in 2021. Subsequently, answerai-colbert-small-v1¹ demonstrated that a 33 million parameter ColBERT model could outperform all existing small retrievers, reaching performance exceeding even that of ColBERTv2 and most <500M parameter retrievers.

1 https://huggingface.co/answerdotai/answerai-colbert-small-v1
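As a concrete illustration of the late-interaction scoring described above, the following minimal NumPy sketch computes the standard MaxSim score between one query and one document; the shapes and the 48-dimensional projection are illustrative.

```python
import numpy as np

def maxsim_score(query_vecs: np.ndarray, doc_vecs: np.ndarray) -> float:
    """Late-interaction (MaxSim) scoring: for each query token vector,
    take its maximum similarity over all document token vectors, then
    sum over query tokens. Inputs are L2-normalized token matrices:
    query_vecs is (n_q, dim), doc_vecs is (n_d, dim)."""
    sim = query_vecs @ doc_vecs.T        # (n_q, n_d) token-level similarities
    return float(sim.max(axis=1).sum())  # max over doc tokens, sum over query

rng = np.random.default_rng(0)
q = rng.standard_normal((8, 48))
d = rng.standard_normal((300, 48))
q /= np.linalg.norm(q, axis=1, keepdims=True)
d /= np.linalg.norm(d, axis=1, keepdims=True)
print(maxsim_score(q, d))
```

Storing one small vector per token, rather than one large vector per document, is what drives both the retrieval quality and the index-size considerations discussed throughout this report.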
Fig. 1. An overview of the full training process.

However, there has until very recently been a lack of late-interaction models with modern capabilities, such as long-context handling, due to backbone limitations, at least in the text modality. Indeed, both ColBERTv2 and answerai-colbert were built on top of BERT variants, namely the original BERT [6] and MiniLM [32], with short context limits and poor efficiency, especially across longer contexts.

ModernBERT [33] spearheaded a new wave of novel encoders, built with efficiency in mind and allowing for long-context encoding. Following its original release, it has been followed by Ettin, a reproduction of it across model sizes, and ModernVBERT, which combines Ettin with a vision encoder, bringing the architectural improvements to multi-modality. GTE-ModernColBERT² [1] was subsequently released, leveraging ModernBERT as a backbone and creating the new de-facto standard for 130M+ parameter ColBERTs, outperforming ColBERTv2 and all dense retrievers in its parameter class.

2 lightonai/GTE-ModernColBERT-v1

A large gap, however, remains: while GTE-ModernColBERT is a strong, "full-sized" model, answerai-colbert-small-v1 remains, by far, the most downloaded ColBERT model, in spite of its architectural limitations. In our own exploratory work at Mixedbread, we found ourselves frequently using it, as its small size and strong performance provided a strong testbed for various experiments. We firmly believe that performance at both ends of the scale spectrum is very important, especially as small models are strong predictors of the performance impact of model modifications.

As such, we decided to address this gap and create the mxbai-edge-colbert-v0 family of ColBERT models. These models come in two different sizes, with 17 and 32 million parameters, and have been created to serve as a strong baseline to support further experiments while addressing the needs of users seeking a modern, low parameter-count ColBERT. To train these models, we first created dense embedding baselines through a series of three training stages, before running numerous ablations resulting in the released models.

The resulting models are strong performers across the board, with considerably improved efficiency over previous models. Notably, mxbai-edge-colbert-v0-17m outperforms ColBERTv2 despite an embedding dimension of 48, one third of the commonly used 128, and an extremely low compute and memory footprint. Its strong performance, combined with long-context handling and very low latencies, makes it particularly suitable for re-ranking applications on-device. These models, the first of a hopefully long series of efficient edge models, represent a solid foundation for further studies on the effectiveness of ColBERT models. We hope that they will support research, both within and outside of Mixedbread, while supporting a large range of real-world uses.

2 Creating a Suitable Dense Base Model

Previous work has demonstrated the importance of beginning ColBERT training from a suitably "warmed-up" model, with considerably better results obtained when training from a dense embedding model rather than initializing training from scratch [4,1], outperforming even those obtained by further training an existing ColBERT model [3,25].
We believe this effect to be due to the fact that dense embedding models now routinely undergo a long, distantly-supervised contrastive alignment training phase [31] before being fine-tuned on high-quality data, which is not commonly done for ColBERT models³.

In light of this, we first set out to create a suitable dense backbone at our target model sizes: 32 and 17 million parameters. We use the Ettin [35] encoder models as starting models, which are a replication of the ModernBERT training recipe across various model sizes [33].

3 As the aim of this work is to create a suitable backbone to identify the effect of individual modifications, we leave the exploration of a ColBERT-specific contrastive warm-up phase to future work, but believe it holds strong potential for further improvements.

2.1 Contrastive Pre-Training

We follow the standardised recipe for our contrastive pre-training phase, as is now commonly adopted by the large majority of embedding models [31,19,16]. Effectively, this phase consists of leveraging many open datasets in which approximate queries can be mapped to documents that are at least somewhat semantically related to them. In practice, this takes many different forms: forum posts with their title acting as a query, QA pairs extracted from common websites, etc. This training is done with a large batch size, facilitated by GradCache [9], and results in better embedding alignment (a minimal sketch of the objective is given at the end of this subsection).

For this stage, we used the contrastor training framework, which was used in the training of the Nomic embeddings models [21]. We used a common selection of pre-training datasets, presented in Table 1. We follow the work done on mxbai-embedding-large [12] and train sequentially, that is, one dataset at a time, rather than all at once, which we empirically found to result in better performance. A similar form of this effect was described in the snowflake-arctic embeddings tech report [19], where stratification of training examples by origin dataset yielded superior results.

Table 1. Datasets used for contrastive pretraining.

Dataset | Size (rows)
synthetic datasets | 2.65M
nomic-embed-unsupervised-data | 172.8M
bge-m3-data | 1.57M
cornstack (subsampled, 8% total) | 20M
Total | 197M

Interestingly, this pre-training phase highlighted an effect that appears common to all the ModernBERT and Ettin models: a higher learning rate is needed to reach satisfying results when compared to previous backbone encoder models. This effect was first described in the ModernBERT paper, where hyperparameter sweeps revealed that a considerably higher learning rate was necessary for ModernBERT to outperform previous encoders on common retrieval tasks [33]. Table 2 shows the NanoBEIR NDCG@10 of multiple training runs with different learning rates.

Table 2. Performance of two models post-contrastive training with varying learning rates (NDCG@10 on NanoBEIR).

Model | Learning Rate | Batch Size | NDCG@10
17M | 3.5e-04 | 24576 | 0.493
17M | 6.0e-04 | 24576 | 0.523
32M | 2.8e-04 | 12288 | 0.543
32M | 5.0e-04 | 12288 | 0.559
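For reference, the contrastive objective used in this phase is the standard in-batch-negatives InfoNCE loss; the minimal PyTorch sketch below is illustrative (the temperature value is an assumption), and GradCache changes only how gradients are accumulated across sub-batches, not the loss itself.

```python
import torch
import torch.nn.functional as F

def info_nce(query_emb: torch.Tensor, doc_emb: torch.Tensor,
             tau: float = 0.05) -> torch.Tensor:
    """In-batch-negatives contrastive loss: row i of doc_emb is the
    positive for query i; all other rows in the batch act as negatives."""
    q = F.normalize(query_emb, dim=-1)
    d = F.normalize(doc_emb, dim=-1)
    logits = q @ d.T / tau                              # (B, B) similarities
    labels = torch.arange(q.size(0), device=q.device)   # positives on diagonal
    return F.cross_entropy(logits, labels)

# Toy batch: 4 query/document pairs with 256-dim embeddings.
print(info_nce(torch.randn(4, 256), torch.randn(4, 256)).item())
```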
2.2 Fine-tuning

Subsequently, we move on to the next step of dense embedding pre-training: supervised fine-tuning on higher-quality data, with mined hard negatives [37]. The mining of hard negatives is a key factor in training embedding models, as it helps provide stronger "counter-examples": for every example that is relevant to the query, we also provide the model with examples that are not.

When using solely random negatives, this task becomes trivial: a completely unrelated negative will rapidly reach only very low similarity scores, and stop meaningfully contributing to learning. This also dulls the learning of what a "match" looks like: if the negative examples are always completely unrelated, then the model does not have to work as hard to learn what makes a document truly relevant. Hard negative mining attempts to solve this problem by gathering a set of harder negatives that look more similar to the positive document. This ensures that the model learns to accurately represent details that differentiate relatively similar documents, rather than just general topics.

On the other hand, negatives that are too hard can also be harmful to the learning process: if the model only ever sees negative examples that are "almost-positives", it might fail to learn good high-level representations. Moreover, negatives that are too hard carry a high false-negative rate: most datasets commonly used for retrieval have sparse labels, and it is highly likely that many of the highest-scoring negative documents could actually be positives.

As such, crafting a good mix of negatives is important. We follow NV-Embedv2 [26] in our mining process, where we used Qwen3-Embedding-8B to mine hard negatives and set the threshold to 0.95. To expose the model to negatives of varying hardness, we also mixed the data with 35% BM25-mined and 30% randomly sampled documents (see the sketch at the end of this subsection). We mine negatives for the de-facto standard set of training datasets for retrieval fine-tuning: MSMARCO, NQ, HotPotQA and PubMed.

Table 3. Performance comparison of Mxbai Edge models (Dense) with and without finetuning (NDCG@10).

Model | NDCG@10
Mxbai Edge 17M (Dense, non-FT) | 0.523
Mxbai Edge 17M (Dense, FT) | 0.556
Mxbai Edge 32M (Dense, non-FT) | 0.559
Mxbai Edge 32M (Dense, FT) | 0.576

We adopt the AnglE training loss [15] for this fine-tuning step, using the AnglE codebase⁵. We train on a selection of common datasets, once again seeking to follow standard practices while avoiding over-contamination in relation to frequent benchmarks.

5 https://github.com/SeanLee97/AnglE
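The exact mining pipeline is not reproduced here; the sketch below only illustrates the negative mix described above, with a teacher-score threshold filter for likely false negatives and the 35%/30% BM25/random blend (the function and argument names are ours).

```python
import random

def build_negative_pool(hard, bm25, corpus, teacher_score, n_neg=15,
                        threshold=0.95, frac_bm25=0.35, frac_random=0.30):
    """Blend negatives of varying hardness for one query.
    hard: teacher-mined candidates; bm25: BM25-mined candidates;
    corpus: full document list for random sampling."""
    # Candidates whose teacher score exceeds the threshold are discarded
    # as likely false negatives (i.e., probably unlabeled positives).
    filtered_hard = [doc for doc in hard if teacher_score(doc) < threshold]
    n_bm25 = int(n_neg * frac_bm25)      # 5 of 15 with the defaults
    n_rand = int(n_neg * frac_random)    # 4 of 15
    n_hard = n_neg - n_bm25 - n_rand     # remaining 6 are hard negatives
    return (filtered_hard[:n_hard]
            + bm25[:n_bm25]
            + random.sample(corpus, n_rand))

pool = build_negative_pool(
    hard=[f"hard{i}" for i in range(8)],
    bm25=[f"bm25_{i}" for i in range(6)],
    corpus=[f"doc{i}" for i in range(100)],
    teacher_score=lambda doc: 0.5,  # stub scorer for illustration
)
print(len(pool))  # 15
```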
2.3 "Stella-style" Distillation

Finally, we add a third stage to our model pre-training, inspired by the Stella [38] model family. Stella, in addition to more commonplace retrieval training, introduces embedding-space distillation: it generates embeddings for queries and documents using a strong teacher, such as an LLM-based embedding model, and designs a teaching process in which various distance-based losses are used to minimise the distance between the embeddings produced by the student model and its teacher. The resulting models, the Stella and Jasper embedding families, are extremely strong embedding models at their respective sizes, and have been frequently demonstrated to reach very strong out-of-domain performance [17].

As part of our training, we initially employed the partial codebase released by the Stella authors⁶, but found the full multi-step process difficult to reproduce, yielding poorly performing models with fluctuating performance and extremely high sensitivity to hyperparameters. Following work such as LEAF [30], we opted to simplify the distillation loss to a simple L2 loss:

L_2(y_i, \hat{y}_i) = \sum_{j=1}^{d} (y_{ij} - \hat{y}_{ij})^2   (1)

which attempts to minimize the distance between our student's vectors and the teacher's vectors.

6 https://github.com/NovaSearch-Team/RAG-Retrieval

As this step relies purely on embedding-space distillation, there is no need to leverage retrieval datasets, since the relationship between queries and documents is not used during this stage. However, it has previously been highlighted that having a variety of inputs corresponding to common retrieval uses [30], especially in terms of input lengths (e.g. longer documents and short queries), is helpful to improve performance. We thus sample many documents from various mixed sources and queries from large retrieval datasets, with a detailed data mix provided in Appendix A.

We used StellaV5 1.5B as our teacher model, with an output dimension of 1024. We found that using higher teacher dimension embeddings resulted in decreasing performance, likely due to the vast dimension difference between our student models and the target sizes, while lower dimensions such as 768 resulted in mildly diminished results. To ensure our models' dimensions matched the teacher's for distillation, we employed a 2-layer feedforward projection with a SiLU [7] (a.k.a. Swish [23]) activation.

Table 4. Average NDCG@10 on NanoBEIR for dense Mxbai Edge variants.

Model | Avg. NDCG@10
Mxbai Edge 17M (Dense, FT) | 0.556
Mxbai Edge 17M (Dense, Distill) | 0.567
Mxbai Edge 32M (Dense, FT) | 0.576
Mxbai Edge 32M (Dense, Distill) | 0.626

Table 4 shows the NanoBEIR NDCG@10 results for the post-finetuning and post-distillation variants of both model sizes. It shows that, despite our streamlined process, this step results in performance gains⁷, albeit unevenly distributed across model sizes. While the 32M variant heavily benefits from this step, the gains on the 17M model are more modest. We theorise that this might be due to the streamlined distillation loss we used struggling to bridge large dimensionality gaps compared to the original, more complex Stella loss mix, but do not explore this effect further.

7 With the 32M variant reaching performance that is competitive with many state-of-the-art small embedding models.

3 ColBERT Training

Finally, we apply our final training stage for the ColBERT models. We run a series of ablations in order to create a strong baseline with this model, able to satisfactorily support both real-world edge use cases and subsequent research uses. We detail our training setting in the subsequent sections about our ablation work. For training data, we restrict ourselves to MSMARCO, so as to ensure that better data does not obscure the impact of training modifications (further details in Section 3.1), and use 16-way training tuples, where each query is associated with a positive example and 15 negative examples, all with a teacher score. We use a batch size of 128 and a KL-divergence loss with normalized scores [3], except when otherwise specified. All experiments are performed using the PyLate [2] framework.
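A minimal PyTorch sketch of this distillation objective over one batch of 16-way tuples follows; the exact normalization and reduction choices in our training runs follow PyLate [2], so treat the details below as illustrative.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_scores: torch.Tensor, teacher_scores: torch.Tensor,
            normalize: bool = True) -> torch.Tensor:
    """KL(teacher || student) over each query's 16 candidate documents
    (1 positive + 15 negatives), each carrying a teacher score.
    student_scores, teacher_scores: (batch, 16) raw relevance scores."""
    if normalize:  # min-max normalize teacher scores per query
        t_min = teacher_scores.min(dim=-1, keepdim=True).values
        t_max = teacher_scores.max(dim=-1, keepdim=True).values
        teacher_scores = (teacher_scores - t_min) / (t_max - t_min + 1e-6)
    log_p_student = F.log_softmax(student_scores, dim=-1)
    p_teacher = F.softmax(teacher_scores, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean")

# Toy batch of 128 queries, matching the batch size used in training.
print(kd_loss(torch.randn(128, 16), torch.rand(128, 16)).item())
```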
3.1 Data

We experimented with various training datasets, comparing the MSMARCO [20] RLHN [29] set scored with Qwen3-Reranker [39] as a teacher against the triplets used by answerai-colbert-small [4] and GTE-ModernColBERT [1], scored by the BGE-Gemma2 reranker⁸ [14], and comparing min-max normalized and unnormalized teacher scores [3].

8 https://huggingface.co/BAAI/bge-reranker-v2-gemma

Table 5. Effect of teachers used for distillation.

Teacher | NDCG@10
Qwen3-8B (no norm) | 0.5991
Qwen3-8B (minmax norm) | 0.5854
BGE-Gemma2 | 0.6286

We present a comparison of these training methods in Table 5. Surprisingly, whether normalized or unnormalized, Qwen3-Reranker is consistently outperformed as a teacher by the older BGE-Gemma2. Upon further analysis, it appears that on common retrieval benchmarks, the scores generated by Qwen3-Reranker are extremely skewed towards the extremes, with very few scores outside of the [0.99, 1] range for positives and the [0, 0.01] range for negatives. We believe that this might be indicative of overfitting from the reranker, resulting in poor distributions for distillation. Our attempts at using both large and very small temperatures did not significantly change performance.

3.2 Ablations

We performed various ablations in order to understand the impact of certain parameters on model performance. To avoid overfitting, hyperparameter ablations (optimizer, distillation impact, learning rate and projection dim) were evaluated on 5 NanoBEIR subsets, so as to provide a good performance indicator without being exposed to the full BEIR sets. The selected subsets are the high-quality search dataset MSMARCO (in-domain), SciFact (OOD), FiQA (OOD), NQ (OOD), and NFCorpus (OOD). Final ablations on projection layers and casing were evaluated on all of NanoBEIR.

Optimizers. We benchmarked both AdamW [18] and Muon [10] across a range of learning rates with a fixed batch size. We present the results of these ablations in Table 6. Our results indicate that even with limited experiments and the relatively small batch size commonly employed to train late-interaction models, Muon appears to be a strong optimizer for ColBERT model training.

Table 6. Comparison of model performance across optimizers and learning rates. NDCG@10 is the average score across the ablation datasets.

Optimizer | Learning Rate | NDCG@10
AdamW | 1e-4 | 0.5911
AdamW | 5e-5 | 0.5780
AdamW | 8e-5 | 0.5923
Muon | 1e-4 (AdamW 8e-5) | 0.5718
Muon | 3e-4 (AdamW 8e-5) | 0.5604
Muon | 5e-4 (AdamW 8e-5) | 0.5862
Muon | 1e-3 (AdamW 8e-5) | 0.5985
Muon | 3e-3 (AdamW 8e-5) | 0.5748

Impact of Stella-style Distillation. In Table 7, we show a comparison of training runs on the dense embedding model resulting from our fine-tuning stage versus the model resulting from our distillation stage. Our results clearly show that Stella-style distillation improves the performance of the resulting ColBERT model, even when projection heads are discarded to retain only the backbone model.

Table 7. ColBERT performs better on a base embedding model trained with Stella-style distillation.

Base Model Variant | NDCG@10
32M model (fine-tuned only) | 0.5771
32M model (with Stella-style distillation) | 0.5911

Projection Dimension. The projection dimension used by ColBERT models is traditionally set to 128, after the one used by the original ColBERT and ColBERTv2 [11,25] models, and this dimension has shown good performance in both text and multimodal settings [8]. As of right now, the current state-of-the-art for smaller ColBERT models uses a projection dimension of 96 [4]. In effect, the final projection dimension has largely been chosen arbitrarily, despite the large consequences it has in terms of both storage requirements and scoring speed.
Table 8. Effect of projection dimension on NDCG@10 (Muon 1e-3, AdamW 8e-5) on the 32M model, using 20% of the training data.

Projection Dimension | NDCG@10
96 | 0.5991
64 | 0.5985
48 | 0.5967
32 | 0.5772
24 | 0.5423
16 | 0.5126

In Table 8, we present the results of ablating a large range of projection dimensions, from 16 to 96. We show that lower dimensions hold up surprisingly well. Indeed, the performance decrease on NanoBEIR is very mild down to a dimension of 48, but considerably degrades at projection dimensions of 32 and below.

Projection Layers. In a recent study, we demonstrated that the use of more complex projection layers outperformed the single-layer linear projection that is ubiquitous in ColBERT models [5]. As part of this work, we experiment with the best variant proposed in our previous study, using a 2-layer feedforward network with an upscaled intermediate dimension and a residual connection, and compare it to a model trained with the "normal" ColBERT projection.

Table 9. Performance of different projection heads on the 17M model, under matched training hyperparameters, on full data.

Projection | NDCG@10
2-layer FFN | 0.6405
Linear Projection | 0.6275

We present the results of this comparison on the 17M parameter model variant in Table 9. As this experiment came later in our training process, results are reported as full NanoBEIR NDCG@10 rather than on the previously defined ablation sets. Our results show that the use of better projection layers contributes positively to performance. While we do not perform significance testing due to the low number of evaluated checkpoints, we note that we reproduced this effect across a range of training seeds, with no single-layer linear projection checkpoint coming within less than 1 NDCG@10 point of the 2-layer projection checkpoints.
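For illustration, the kind of projection head compared above can be sketched as follows; the hidden and intermediate dimensions, and the exact placement of the residual connection, are assumptions rather than the released architecture (see [5] for the studied variants).

```python
import torch
import torch.nn as nn

class FFNProjection(nn.Module):
    """2-layer feedforward projection head with an upscaled intermediate
    dimension and a residual connection around it, replacing the single
    linear projection traditionally used by ColBERT models."""
    def __init__(self, hidden: int = 384, inner: int = 1536, out: int = 48):
        super().__init__()
        self.ffn = nn.Sequential(
            nn.Linear(hidden, inner), nn.GELU(), nn.Linear(inner, hidden))
        self.out = nn.Linear(hidden, out)  # final per-token projection

    def forward(self, token_states: torch.Tensor) -> torch.Tensor:
        # token_states: (batch, seq_len, hidden) backbone outputs
        return self.out(token_states + self.ffn(token_states))

proj = FFNProjection()
print(proj(torch.randn(2, 128, 384)).shape)  # torch.Size([2, 128, 48])
```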
Casing. Virtually all embedding models we consider "previous-generation" either use bert-base-uncased [6] as their backbone, or models largely inspired by it. These encoders are all case-insensitive, meaning that all input text is lower-cased before being tokenized. On the other hand, virtually all Large Language Models employ some form of casing, which ModernBERT [33], and thus subsequently Ettin [35], also adopts. The impact this tokenization change has, if any, has not yet been studied in detail. We conducted an ablation in this sense, with results presented in Table 10.

Table 10. NanoBEIR NDCG@10 comparison with and without lower-casing as a pre-processing step, with all other hyperparameters kept equal and training on the full data.

Model | Casing | NDCG@10
32M | Lower-casing | 0.6520
32M | No lower-casing | 0.6519
17M | Lower-casing | 0.6405
17M | No lower-casing | 0.6317

Our results demonstrate an interesting phenomenon: while there appears to be no significant difference at the 32M parameter scale, the results of the 17M model variant are significantly improved by lower-casing. Across random seeds, we also observed that lower-casing consistently reached stronger performance on the 17M model, but no discernible trend emerged at the 32M scale. As with other ablations, we do not attempt to further understand the underlying mechanism, but theorise that the limited embedding dimensions and parameter count of the 17M model mean that it benefits disproportionately from the learning simplification that lower-casing provides.

4 Results

Table 11. BEIR benchmark results (NDCG@10). Columns show the BEIR average and sampled tasks: MSMARCO, SciFact (SF), Touche2020, FiQA, TREC-COVID, NQ, and DBPedia (DBP). The best results for each size class are in bold. The complete table is in Appendix B.

Model | AVG | MSMARCO | SF | Touche | FiQA | COVID | NQ | DBP
>100M parameters
GTE-ModernColBERT-v1 | 0.547 | 0.453 | 0.763 | 0.312 | 0.453 | 0.836 | 0.618 | 0.480
ColBERTv2 | 0.488 | 0.456 | 0.693 | 0.263 | 0.356 | 0.733 | 0.562 | 0.446
<35M parameters
mxbai-edge-colbert-v0-32m | 0.521 | 0.450 | 0.740 | 0.313 | 0.390 | 0.775 | 0.600 | 0.455
answerai-colbert-small-v1 | 0.534 | 0.434 | 0.740 | 0.250 | 0.410 | 0.831 | 0.594 | 0.464
bge-small-en-v1.5 | 0.517 | 0.408 | 0.713 | 0.260 | 0.403 | 0.759 | 0.502 | 0.400
snowflake-s | 0.520 | 0.402 | 0.722 | 0.235 | 0.407 | 0.801 | 0.509 | 0.410
<25M parameters
mxbai-edge-colbert-v0-17m | 0.490 | 0.416 | 0.719 | 0.316 | 0.326 | 0.713 | 0.551 | 0.410
colbert-muvera-micro | 0.394 | 0.364 | 0.662 | 0.251 | 0.254 | 0.561 | 0.386 | 0.332
all-MiniLM-L6-v2 | 0.419 | 0.365 | 0.645 | 0.169 | 0.369 | 0.472 | 0.439 | 0.323

In Table 11, we present the results of our models on a selected range of BEIR [28] datasets and their average, while in Table 12 we present results on the LongEmbed benchmark [41].

On the BEIR datasets, we note that our models are overall strong performers. While outperformed on the short-context average by the previous small-scale state-of-the-art, answerai-colbert-small, they reach strong performance across the board. Particularly noteworthy is that mxbai-edge-colbert-v0-17m, a 17 million parameter model, outperforms the still-widely-used ColBERTv2, despite having less than 1/6th of the parameters and a projection dimension of just 48, a third of ColBERTv2's 128. It does so with remarkable efficiency, especially as context length increases, thanks to its ModernBERT-based backbone.

Table 12. Detailed LongEmbed benchmark performance. Context length is set to the 4k and 32k variants for models supporting them. Otherwise, it is set to the model's maximum sequence length (8k for granite-embeddings and 512 for others). Best results for each size class are in bold; best overall results are underlined. Models with more parameters than their size class, added for completeness, are in italics.

Model | AVG | NarrQA | QMSum | Wiki | SummScr. | Needle | Passkey
>100M parameters
GTE-ModernColBERT-v1 (32k) | 0.898 | 0.780 | 0.737 | 0.999 | 0.953 | 0.950 | 0.970
GTE-ModernColBERT-v1 (4k) | 0.809 | 0.530 | 0.528 | 0.931 | 0.947 | 0.950 | 0.970
granite-embedding-english-r2⁹ | 0.656 | 0.479 | 0.416 | 0.859 | 0.937 | 0.430 | 0.818
ColBERTv2 | 0.428 | 0.287 | 0.254 | 0.648 | 0.686 | 0.330 | 0.365
<50M parameters
mxbai-edge-colbert-v0-32m (32k) | 0.849 | 0.585 | 0.698 | 0.993 | 0.910 | 0.915 | 0.990
mxbai-edge-colbert-v0-32m (4k) | 0.783 | 0.444 | 0.508 | 0.930 | 0.909 | 0.915 | 0.990
granite-embedding-small-english-r2¹⁰ | 0.637 | 0.413 | 0.365 | 0.799 | 0.899 | 0.550 | 0.798
answerai-colbert-small-v1 | 0.441 | 0.266 | 0.272 | 0.645 | 0.735 | 0.338 | 0.388
bge-small-en-v1.5 | 0.312 | 0.220 | 0.208 | 0.430 | 0.532 | 0.263 | 0.218
snowflake-arctic-embed-s | 0.356 | 0.177 | 0.230 | 0.411 | 0.643 | 0.283 | 0.390
<25M parameters
mxbai-edge-colbert-v0-17m (32k) | 0.847 | 0.621 | 0.733 | 0.977 | 0.943 | 0.950 | 0.858
mxbai-edge-colbert-v0-17m (4k) | 0.776 | 0.437 | 0.566 | 0.909 | 0.935 | 0.950 | 0.858
all-MiniLM-L6-v2 | 0.298 | 0.183 | 0.163 | 0.463 | 0.548 | 0.200 | 0.233
colbert-muvera-micro | 0.405 | 0.230 | 0.244 | 0.566 | 0.689 | 0.318 | 0.385

9 149M parameter model. Results are from the MTEB leaderboard.
10 49M parameter model. Results are from the MTEB leaderboard.

On long-context evaluations, both of our models are extremely strong performers, only outperformed by the larger GTE-ModernColBERT.
As expected, models based on more modern architectures, capable of handling longer context lengths, are the only competitive models on this task, with previous methods unable to process longer documents efficiently and resorting to truncation, thus greatly reducing their performance. Particularly notably, even our 17M parameter variant outperforms the current <1B parameter single-vector retrieval state-of-the-art¹¹ on LongEmbed tasks, such as granite-embedding-r2, by almost 20 NDCG@10 points. Note that Needle and Passkey are computed with NDCG@1 and are calculated by taking the average over all lengths.

11 According to the MTEB Leaderboard as of October 2025.

Interestingly, we note, similarly to [1], that despite being based on a model with a native 8,000-token context window, the mxbai-edge-colbert models are capable of handling 32k sequence lengths and observe performance gains from doing so, despite our retrieval training using documents truncated to 220 tokens.

Finally, we note that the low parameter count of our models, in combination with their highly-efficient architecture, makes them particularly suitable for re-ranking tasks. This is especially true for longer chunks, as there currently does not exist any re-ranker able to reach similarly strong performance while running with low latencies on CPU for long-document reranking.

Efficiency comparison with other small ColBERTs. Table 13 shows the relative performance of our edge ColBERT models against other commonly used small ColBERTs, along with efficiency comparisons. For ease of rapid evaluation, we report overall NanoBEIR NDCG@10 scores, long-context support, projection dimension (a very important factor for edge use cases), and the mean runtime of 10 NanoBEIR evaluation runs on both GPU (a single RTX 4090) and CPU, representing the encoding of around 67,000 documents, as well as 650 queries and as many searches and scoring steps. We also report the memory usage required to store the 16-bit vector representations of 10,000 300-token documents, a direct factor of the projection dimension, to provide a brief overview of suitability for various in-memory encoding usages.

Table 13. Relative performance and efficiency comparisons of small ColBERT models on NanoBEIR, with ColBERTv2 as a reference. CPU and GPU refer to runtimes as the average of 10 runs on each hardware type. Mem. is RAM requirements, in MB, for storing 10,000 300-token document representations in fp16. LoCo. stands for long-context support. Dim. is the projection dimension of each model. Best values are in bold; best values while outperforming ColBERTv2 on retrieval are underlined.

Model | Params | Dim. | NDCG@10 | LoCo | GPU | CPU | Mem.
ColBERTv2 | 130M | 128 | 0.6198 | – | 81s | 1540s | 732
answerai-colbert-small-v1 | 33M | 96 | 0.6545 | – | 59s | 621s | 549
colbert-muvera-micro | 4M | 128 | 0.5599 | – | 45s | 88s | 732
mxbai-edge-colbert-v0-17m | 17M | 48 | 0.6405 | ✓ | 51s | 487s | 275
mxbai-edge-colbert-v0-32m | 32M | 64 | 0.6520 | ✓ | 55s | 589s | 366
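The Mem. column follows directly from the projection dimension: 10,000 documents × 300 tokens × dim × 2 bytes (fp16). A quick check reproduces the reported values:

```python
def index_mib(n_docs: int, tokens_per_doc: int, dim: int,
              bytes_per_val: int = 2) -> float:
    """RAM (MiB) to hold fp16 token vectors for a document collection."""
    return n_docs * tokens_per_doc * dim * bytes_per_val / 1024**2

for name, dim in [("ColBERTv2", 128), ("answerai-colbert-small-v1", 96),
                  ("mxbai-edge-colbert-v0-32m", 64),
                  ("mxbai-edge-colbert-v0-17m", 48)]:
    print(f"{name}: {index_mib(10_000, 300, dim):.0f} MiB")
# -> 732, 549, 366, 275 MiB, matching Table 13
```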
5 Conclusion

We introduce v0 of the Mxbai-edge-ColBERT model family. These models represent the first small ColBERT models to fully benefit from a modern architecture, with long-context support and all the efficiency improvements introduced by the ModernBERT [33] generation of encoder models.

Our intent with these models is two-fold: our main aim is for them to provide a suitable testbed for future experiments and for distillation of our research on larger-scale models, as well as to serve as a strong performance predictor for such experiments, following scaling laws. Our second aim is to support a large range of on-device use-cases, be it local RAG projects or extremely efficient re-ranking on both CPU and GPU.

Mxbai-edge-ColBERT-v0, at both model sizes, reaches strong performance on a variety of datasets. Notably, the 17M parameter variant outperforms ColBERTv2 with an order-of-magnitude fewer parameters, and with vector storage and scoring-time compute requirements reduced by two thirds. We fully intend to continue to upgrade these models with future developments and are looking forward to seeing them used in the real world.

References

1. Chaffin, A.: GTE-ModernColBERT (2025), https://huggingface.co/lightonai/GTE-ModernColBERT-v1
2. Chaffin, A., Sourty, R.: PyLate: Flexible training and retrieval for late interaction models. arXiv preprint arXiv:2508.03555, to be published at CIKM 2025 (2025)
3. Clavié, B.: JaColBERTv2.5: Optimising multi-vector retrievers to create state-of-the-art Japanese retrievers with constrained resources. Journal of Natural Language Processing 32(1), 176–218 (2025)
4. Clavié, B.: Small but mighty: Introducing answerai-colbert-small (August 2024), https://www.answer.ai/posts/2024-08-13-small-but-mighty-colbert.html
5. Clavié, B., Lee, S., Takehi, R., Shakir, A., Kato, M.P.: Simple projection variants improve ColBERT performance (2025), https://arxiv.org/abs/2510.12327
6. Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). pp. 4171–4186 (2019)
7. Elfwing, S., Uchibe, E., Doya, K.: Sigmoid-weighted linear units for neural network function approximation in reinforcement learning. Neural Networks 107, 3–11 (2018)
8. Faysse, M., Sibille, H., Wu, T., Omrani, B., Viaud, G., Hudelot, C., Colombo, P.: ColPali: Efficient document retrieval with vision language models. In: The Thirteenth International Conference on Learning Representations (2025), https://openreview.net/forum?id=ogjBpZ8uSi
9. Gao, L., Zhang, Y., Han, J., Callan, J.: Scaling deep contrastive learning batch size under memory limited setup. In: Proceedings of the 6th Workshop on Representation Learning for NLP (2021)
10. Jordan, K., Jin, Y., Boza, V., You, J., Cesista, F., Newhouse, L., Bernstein, J.: Muon: An optimizer for hidden layers in neural networks (2024), https://kellerjordan.github.io/posts/muon/
11. Khattab, O., Zaharia, M.: ColBERT: Efficient and effective passage search via contextualized late interaction over BERT. In: Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. pp. 39–48 (2020)
12. Lee, S., Shakir, A., Koenig, D., Lipp, J.: Open source strikes bread - new fluffy embeddings model (2024), https://www.mixedbread.ai/blog/mxbai-embed-large-v1
13. Lewis, P., Perez, E., Piktus, A., Petroni, F., Karpukhin, V., Goyal, N., Küttler, H., Lewis, M., Yih, W.t., Rocktäschel, T., et al.: Retrieval-augmented generation for knowledge-intensive NLP tasks. Advances in Neural Information Processing Systems 33, 9459–9474 (2020)
14. Li, C., Liu, Z., Xiao, S., Shao, Y.: Making large language models a better foundation for dense retrieval (2023)
15. Li, X., Li, J.: AnglE-optimized text embeddings. arXiv preprint arXiv:2309.12871 (2023)
16. Li, Z., Zhang, X., Zhang, Y., Long, D., Xie, P., Zhang, M.: Towards general text embeddings with multi-stage contrastive learning. arXiv preprint arXiv:2308.03281 (2023)
17. Liu, F., Enevoldsen, K.C., Solomatin, R., Samoed, T., Chung, I., Aarsen, T., Fődi, Z.: Introducing RTEB: A new standard for retrieval evaluation. Hugging Face Blog (Oct 2025), https://huggingface.co/blog/rteb
18. Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101 (2017)
19. Merrick, L., Xu, D., Nuti, G., Campos, D.: Arctic-Embed: Scalable, efficient, and accurate text embedding models. arXiv preprint arXiv:2405.05374 (2024)
20. Nguyen, T., Rosenberg, M., Song, X., Gao, J., Tiwary, S., Majumder, R., Deng, L.: MS MARCO: A human-generated machine reading comprehension dataset (2016)
21. Nussbaum, Z., Morris, J.X., Duderstadt, B., Mulyar, A.: Nomic Embed: Training a reproducible long context text embedder (2024)
22. Penedo, G., Kydlíček, H., Lozhkov, A., Mitchell, M., Raffel, C.A., Von Werra, L., Wolf, T., et al.: The FineWeb datasets: Decanting the web for the finest text data at scale. Advances in Neural Information Processing Systems 37, 30811–30849 (2024)
23. Ramachandran, P., Zoph, B., Le, Q.V.: Searching for activation functions. arXiv preprint arXiv:1710.05941 (2017)
24. Santhanam, K., Khattab, O., Potts, C., Zaharia, M.: PLAID: An efficient engine for late interaction retrieval. In: Proceedings of the 31st ACM International Conference on Information & Knowledge Management. pp. 1747–1756 (2022)
25. Santhanam, K., Khattab, O., Saad-Falcon, J., Potts, C., Zaharia, M.: ColBERTv2: Effective and efficient retrieval via lightweight late interaction. In: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. pp. 3715–3734 (2022)
26. de Souza P. Moreira, G., Osmulski, R., Xu, M., Ak, R., Schifferer, B., Oldridge, E.: NV-Retriever: Improving text embedding models with effective hard-negative mining (2025), https://arxiv.org/abs/2407.15831
27. Teiletche, P., Macé, Q., Conti, M., Loison, A., Viaud, G., Colombo, P., Faysse, M.: ModernVBERT: Towards smaller visual document retrievers (2025), https://arxiv.org/abs/2510.01149
28. Thakur, N., Reimers, N., Rücklé, A., Srivastava, A., Gurevych, I.: BEIR: A heterogenous benchmark for zero-shot evaluation of information retrieval models. arXiv preprint arXiv:2104.08663 (2021)
29. Thakur, N., Zhang, C., Ma, X., Lin, J.: Fixing data that hurts performance: Cascading LLMs to relabel hard negatives for robust information retrieval (2025), https://arxiv.org/abs/2505.16967
30. Vujanic, R., Rueckstiess, T.: LEAF: Knowledge distillation of text embedding models with teacher-aligned representations (2025), https://arxiv.org/abs/2509.12539
31. Wang, L., Yang, N., Huang, X., Jiao, B., Yang, L., Jiang, D., Majumder, R., Wei, F.: Text embeddings by weakly-supervised contrastive pre-training. arXiv preprint arXiv:2212.03533 (2022)
32. Wang, W., Wei, F., Dong, L., Bao, H., Yang, N., Zhou, M.: MiniLM: Deep self-attention distillation for task-agnostic compression of pre-trained transformers. Advances in Neural Information Processing Systems 33, 5776–5788 (2020)
33. Warner, B., Chaffin, A., Clavié, B., Weller, O., Hallström, O., Taghadouini, S., Gallagher, A., Biswas, R., Ladhak, F., Aarsen, T., Adams, G.T., Howard, J., Poli, I.: Smarter, better, faster, longer: A modern bidirectional encoder for fast, memory efficient, and long context finetuning and inference. In: Che, W., Nabende, J., Shutova, E., Pilehvar, M.T. (eds.) Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). pp. 2526–2547. Association for Computational Linguistics, Vienna, Austria (Jul 2025). https://doi.org/10.18653/v1/2025.acl-long.127, https://aclanthology.org/2025.acl-long.127/
34. Weller, O., Boratko, M., Naim, I., Lee, J.: On the theoretical limitations of embedding-based retrieval (2025), https://arxiv.org/abs/2508.21038
35. Weller, O., Ricci, K., Marone, M., Chaffin, A., Lawrie, D., Durme, B.V.: Seq vs seq: An open suite of paired encoders and decoders (2025), https://arxiv.org/abs/2507.11412
36. Yates, A., Nogueira, R., Lin, J.: Pretrained transformers for text ranking: BERT and beyond. In: Proceedings of the 14th ACM International Conference on Web Search and Data Mining. pp. 1154–1156 (2021)
37. Zhan, J., Mao, J., Liu, Y., Guo, J., Zhang, M., Ma, S.: Optimizing dense retrieval model training with hard negatives. In: Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. pp. 1503–1512 (2021)
38. Zhang, D., Li, J., Zeng, Z., Wang, F.: Jasper and Stella: Distillation of SOTA embedding models (2025), https://arxiv.org/abs/2412.19048
39. Zhang, Y., Li, M., Long, D., Zhang, X., Lin, H., Yang, B., Xie, P., Yang, A., Liu, D., Lin, J., Huang, F., Zhou, J.: Qwen3 Embedding: Advancing text embedding and reranking through foundation models (2025), https://arxiv.org/abs/2506.05176
40. Zhou, F., Wang, Z., Liu, Q., Li, J., Liu, P.: Programming every example: Lifting pre-training data quality like experts at scale. arXiv preprint arXiv:2409.17115 (2024)
41. Zhu, D., Wang, L., Yang, N., Song, Y., Wu, W., Wei, F., Li, S.: LongEmbed: Extending embedding models for long context retrieval (2024), https://arxiv.org/abs/2404.12096

A Distillation Data

The data mix for the distillation stage is provided in Tables 14 and 15.

Table 14. Queries used for distillation.

Dataset | Size (rows)
msmarco | 510k
amazon_qa | 475k
nq | 175k
triviaqa | 70k
pubmed | 67k
arxiv | 50k
cornstk | 50k
lotte | 25k
medqa | 13k
mldr | 12.2k
Total | 1.45M

Table 15. Passages used for distillation.

Dataset | Size (rows)
DCLM-Pro | 1.59M
english_words | 742k
fineweb | 665k
dclm_sent | 400k
ccnews | 370k
stack | 185k
ettin_tokens | 50k
Total | 4.00M

DCLM-Pro [40] and FineWeb [22] are full documents randomly sampled from their respective datasets, while dclm_sent is comprised of individual DCLM-Pro documents broken down into individual sentences, to create more varied small-length inputs, again following Stella [38]. ettin_tokens and english_words were added during the course of this study following the release of LEAF [30], which used a similar method to improve training. ettin_tokens is a dataset comprised of very short inputs, where each document is a single token from our model's tokenizer, while english_words is a large collection of English words along with a definition generated by Gemini 2.0 Flash.

B Full BEIR Results

We show the full BEIR results in Tables 16 and 17.

Table 16. BEIR benchmark (Part A): AVG and (Touche2020, NQ, MSMARCO, SciFact, FiQA2018, NFCorpus, ArguAna). Scores are NDCG@10.
Model | AVG | Touche2020 | NQ | MSMARCO | SciFact | FiQA2018 | NFCorpus | ArguAna
>100M parameters
GTE-ModernColBERT-v1 | 0.547 | 0.312 | 0.618 | 0.453 | 0.763 | 0.453 | 0.379 | 0.485
colbertv2 | 0.488 | 0.263 | 0.562 | 0.456 | 0.693 | 0.356 | 0.338 | 0.463
<35M parameters
mxbai-edge-colbert-v0-32m | 0.521 | 0.313 | 0.600 | 0.450 | 0.740 | 0.390 | 0.362 | 0.454
answerai-colbert-small-v1 | 0.533 | 0.250 | 0.594 | 0.434 | 0.740 | 0.410 | 0.369 | 0.468
bge-small-en-v1.5 | 0.517 | 0.260 | 0.502 | 0.408 | 0.713 | 0.403 | 0.349 | 0.331
snowflake-s | 0.520 | 0.235 | 0.509 | 0.402 | 0.722 | 0.407 | 0.324 | 0.339
<25M parameters
mxbai-edge-colbert-v0-17m | 0.490 | 0.316 | 0.551 | 0.416 | 0.719 | 0.326 | 0.352 | 0.464
colbert-muvera-micro | 0.394 | 0.251 | 0.386 | 0.364 | 0.662 | 0.254 | 0.321 | 0.303
all-MiniLM-L6-v2 | 0.419 | 0.169 | 0.439 | 0.365 | 0.645 | 0.369 | 0.314 | 0.331

Table 17. BEIR benchmark (Part B): Rest of the tasks (QuoraRetrieval, SCIDOCS, TRECCOVID, ClimateFEVER, HotpotQA, DBPedia, CQADupstack, FEVER). Scores are NDCG@10.

Model | QuoraRetrieval | SCIDOCS | TRECCOVID | ClimateFEVER | HotpotQA | DBPedia | CQADupstack | FEVER
>100M parameters
GTE-ModernColBERT-v1 | 0.866 | 0.191 | 0.836 | 0.306 | 0.773 | 0.480 | 0.410 | 0.874
colbertv2 | 0.852 | 0.154 | 0.733 | 0.176 | 0.667 | 0.446 | 0.378 | 0.785
<35M parameters
mxbai-edge-colbert-v0-32m | 0.863 | 0.170 | 0.775 | 0.290 | 0.734 | 0.455 | 0.388 | 0.826
answerai-colbert-small-v1 | 0.879 | 0.187 | 0.831 | 0.328 | 0.769 | 0.464 | 0.394 | 0.887
bge-small-en-v1.5 | 0.887 | 0.198 | 0.759 | 0.253 | 0.699 | 0.400 | 0.391 | 0.866
snowflake-s | 0.884 | 0.218 | 0.801 | 0.352 | 0.665 | 0.410 | 0.397 | 0.871
<25M parameters
mxbai-edge-colbert-v0-17m | 0.839 | 0.169 | 0.713 | 0.224 | 0.713 | 0.410 | 0.356 | 0.784
colbert-muvera-micro | 0.764 | 0.123 | 0.561 | 0.115 | 0.528 | 0.332 | 0.313 | 0.637
all-MiniLM-L6-v2 | 0.876 | 0.217 | 0.472 | 0.203 | 0.465 | 0.323 | 0.412 | 0.519
Fantastic (small) Retrievers and How to Train Them: mxbai-edge-colbert-v0 Tech Report

Rikiya Takehi1,2⋆, Benjamin Clavié1, Sean Lee1, and Aamir Shakir1
1 Mixedbread AI
2 Waseda University

Abstract. In this work, we introduce the mxbai-edge-colbert-v0 models, at two different parameter counts: 17M and 32M. As part of our research, we conduct numerous experiments to improve retrieval and late-interaction models, which we intend to distill into smaller models as proof-of-concepts. Our ultimate aim is to support retrieval at all scales, from large-scale retrieval that lives in the cloud to models that can run locally, on any device. mxbai-edge-colbert-v0 is a model that we hope will serve as a solid foundation backbone for all future experiments, representing the first version of a long series of small proof-of-concepts. As part of the development of mxbai-edge-colbert-v0, we conducted multiple ablation studies, of which we report the results. In terms of downstream performance, mxbai-edge-colbert-v0 is a particularly capable small model, outperforming ColBERTv2 on common short-text benchmarks (BEIR) and representing a large step forward in long-context tasks, with unprecedented efficiency.

⋆ Work performed during an internship at Mixedbread.

1 Introduction

In the last two years, neural Information Retrieval (IR) has experienced an unprecedented level of interest, owing in large part to the rapid development and deployment of Large Language Models (LLMs) and the proven effectiveness of Retrieval Augmented Generation (RAG) pipelines [13], where retrieval models are used to provide LLMs with useful context. As part of this wave, end-user interest has grown in multi-vector retrieval methods, also called late-interaction models or, more simply, ColBERT, after the model which initially introduced this method [11]. Where the dominant paradigm in neural IR, Dense Passage Retrieval (DPR) [36], leverages a single, large vector to represent documents, ColBERT models instead employ numerous smaller vectors: each individual token representation is projected to a small dimension and retained. To make this tractable, ColBERT models are frequently used with aggressive index quantization [25,24] or as second-stage rankers in a larger pipeline.

The growing popularity of multi-vector models can be explained by their retrieval performance. ColBERT models have been noted for their particularly robust out-of-domain performance [25], especially in multi-modal settings [27]. They have also recently been demonstrated to provably alleviate certain limitations of single-vector retrieval approaches, with a 150M parameter ColBERT model vastly outperforming 8B parameter single-vector embedding models on benchmarks designed to test the limits of embedding models [34]. In spite of these strong results, the ecosystem for open ColBERT models has moved more slowly than that of single-vector models. Up until last year, the most widely used ColBERT model was ColBERTv2, originally released in 2021.
Subsequently, answerai-colbert-small-v1 demonstrated that a 33 million parameter ColBERT model could outperform all existing small retrievers, reaching performance exceeding even that of ColBERTv2 and most models in the 100M+ parameter range.
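To make the late-interaction scoring described above concrete, the following is a minimal sketch of MaxSim scoring over token embeddings. It assumes L2-normalized vectors; the function name and the toy shapes are illustrative only, not the mxbai-edge-colbert-v0 implementation.

```python
import numpy as np

def maxsim_score(query_vecs: np.ndarray, doc_vecs: np.ndarray) -> float:
    """Late-interaction (MaxSim) score between one query and one document.

    Both inputs are (num_tokens, dim) arrays of L2-normalized token
    embeddings, so the dot products below are cosine similarities.
    Each query token is matched to its most similar document token,
    and the per-token maxima are summed.
    """
    sims = query_vecs @ doc_vecs.T          # (n_query, n_doc) similarities
    return float(sims.max(axis=1).sum())    # max over doc tokens, sum over query

# Toy usage: random embeddings projected to a small dimension (here 64).
rng = np.random.default_rng(0)
q = rng.normal(size=(8, 64))
d = rng.normal(size=(120, 64))
q /= np.linalg.norm(q, axis=1, keepdims=True)
d /= np.linalg.norm(d, axis=1, keepdims=True)
print(maxsim_score(q, d))
```

Because each query token only needs its best-matching document token, this scoring is what makes aggressive quantization and second-stage reranking pipelines practical for multi-vector models.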
arXiv:2510.14877v1 [astro-ph.CO] 16 Oct 2025
Prepared for submission to JCAP

Astrophysical uncertainties challenge 21-cm forecasts: A primordial black hole case study

Dominic Agius,a Rouven Essig,b Daniele Gaggero,c Sergio Palomares-Ruiz,a Gregory Suczewski,b Mauro Vallid

aInstituto de Física Corpuscular (IFIC), CSIC-Universitat de València, E-46071, València, Spain
bC.N. Yang Institute for Theoretical Physics, Stony Brook University, NY 11794, USA
cINFN Sezione di Pisa, Polo Fibonacci, Largo B. Pontecorvo 3, 56127 Pisa, Italy
dINFN Sezione di Roma, Piazzale Aldo Moro 2, I-00185 Rome, Italy

E-mail: dominic.agius@ific.uv.es

Abstract. The 21-cm signal is a powerful probe of the early Universe's thermal history and could provide a unique avenue for constraining exotic physics. Previous studies have forecasted stringent constraints on energy injections from exotic sources that heat, excite, and ionize the background gas and thereby modify the 21-cm signal. In this work, we quantify the substantial impact that astrophysical uncertainties have on the projected sensitivity to exotic energy injection. In particular, there are significant uncertainties in the minimum star-forming dark matter halo mass, the Lyman-α emission, and the X-ray emission, whose values characterize the fiducial astrophysical model when projecting bounds. As a case study, we investigate the energy injection of accreting primordial black holes of mass ∼1−10³ M⊙, also taking into account uncertainties in the accretion model. We show that, depending on the chosen fiducial model and accretion uncertainties, the sensitivity of future 21-cm data could constrain the abundance of primordial black holes to be either slightly stronger, or significantly weaker, than current limits from the Cosmic Microwave Background.

Contents

1 Introduction
2 The 21-cm signal and exotic energy injection
  2.1 Overview of 21-cm basics
  2.2 Shape of the signal from the dark ages to the cosmic dawn: the crucial role of astrophysical parameters
  2.3 The case study of Primordial Black Holes: impact on the signal
    2.3.1 The Bondi–Hoyle–Lyttleton model
    2.3.2 The Park–Ricotti model
    2.3.3 Environmental properties
3 Methodology
  3.1 Zeus21 calculation of the 21-cm signal
  3.2 Energy injection and energy deposition from accreting matter
    3.2.1 Energy injection
    3.2.2 Energy deposition
  3.3 Mock data generation
  3.4 Statistics
    3.4.1 Sensitivities
  3.5 Marginalization
4 Results
  4.1 The importance of the fiducial model
  4.2 Uncertainty in accretion model
  4.3 21-cm forecasts and discussion
5 Conclusions and Outlook
A Appendix
  A.1 Comparison to previous work
  A.2 Fixed astrophysical parameters

1 Introduction

Recent cosmological measurements have heralded a new era in physics, allowing both for precise measurements of the parameters of the ΛCDM model and for stringent constraints on new physics scenarios [1]. Cosmology provides a particularly important avenue to test a wide range of dark matter (DM) candidates [2], with interesting complementarity to other detection strategies, such as direct and indirect detection or collider searches [3]. The upcoming era of 21-cm cosmology promises to revolutionize our understanding of the early Universe, opening a new window into the dark ages (30 ≲ z ≲ 200), the cosmic dawn (10 ≲ z ≲ 30), and the epoch of reionization (6 ≲ z ≲ 10) [4–8].
By mapping the hyperfine transition of neutral hydrogen through cosmic time, experiments like the Hydrogen Epoch of Reionization Array (HERA) [9] and the Square Kilometer Array (SKA) [10] will probe the 21-cm power spectrum, providing unprecedented sensitivity to the thermal history of the Universe at redshifts z ≃ 6−30. Future lunar-based experiments such as LuSEE Night [11] and FARSIDE [12] may one day measure the 21-cm signal at even higher redshift (in the dark ages), where it could provide a clean cosmological probe free from astrophysical uncertainties [13, 14]. Despite this promise, and the sentiment that "the moon is our future" [15], our focus here is on the two ground-based experiments, which will deliver data in the coming years.

The new window that HERA and SKA will provide into the thermal and ionization properties of the intergalactic medium (IGM) makes the 21-cm signal a powerful upcoming tool for precision cosmology, and a promising avenue for constraining new physics beyond the Standard Model [16]. However, realizing this potential, especially in the context of exotic physics, faces a significant challenge: the dominant, yet uncertain, influence of the first astrophysical sources [17, 18]. The ultimate constraining power of the 21-cm signal is fundamentally dependent on the properties of the first stars, with some astrophysical scenarios inherently offering less constraining power for exotic physics. Key properties of the first stars, such as their X-ray luminosity, Lyman-α emission, and the minimum halo mass required to form the first star-forming regions, are currently poorly constrained, and any exotic physics signature must be disentangled from this complex astrophysical signal. As such, a comprehensive assessment of these uncertainties is essential to determine the true constraining power of upcoming 21-cm experiments, and to assess whether they can probe a wider parameter space than existing observations of the Cosmic Microwave Background (CMB).

The goal of this paper is to quantify how these astrophysical systematics, associated with the fiducial astrophysical scenario, impact forecasts for exotic energy injection. As a concrete and well-motivated case study, we investigate the effects of an exotic energy source: a monochromatic population of accreting primordial black holes (PBHs). PBHs are hypothetical compact objects that may have formed in the early Universe. A common formation mechanism involves the collapse of large overdensities at small scales that exceed a critical threshold [19–29]. Interest in PBHs with masses above O(M⊙) has grown significantly following the gravitational wave detections by the LIGO/VIRGO/KAGRA experiments [30–34] and speculation that a signal may have been of primordial origin [35]. Even a sub-dominant PBH population in this mass range would have important implications, potentially causing early structure formation [36, 37] and contributing to the formation of supermassive black holes at high redshifts [38, 39]. In the mass range (1−10³) M⊙, PBHs would accrete baryonic matter, injecting radiation into the IGM [40]. This process directly alters the IGM gas temperature and ionization fraction, leaving a distinct imprint on the 21-cm signal [41–47].
The same physical mechanism of heating and ionizing the IGM also determines 21-cm forecasts for decaying [46, 48–58] or annihilating DM [17, 47, 49, 50, 53, 55–57, 59–64], and evaporating PBHs [47, 52, 57, 58, 65–73], and our PBH case study can be generalized to these results.¹ The case study of PBHs is particularly relevant, since a previous study has shown that the 21-cm signal has better sensitivity than other probes to PBHs in the mass range (1−10³) M⊙ [44].

¹We note that depositing energy into Lyman-α photons without additional heating or ionization can also provide some constraining power [74].

In addition, we further investigate a second theoretical uncertainty associated with the accretion physics model. Constraints on accreting PBHs from CMB observations are already known to be strongly dependent on how the accretion is modeled [75–82]. Several effects have been explored in this context, including modifications to the accretion rate from DM minihalos around PBHs [79, 83], accretion outflows [80], black hole spin [84], and radiative feedback [81, 82]. In this work, we extend this investigation to the 21-cm signal by comparing the widely used Bondi-Hoyle-Lyttleton (BHL) accretion model [85–90] with the state-of-the-art Park-Ricotti (PR) model, which includes the important effect of radiative feedback [91–93]. This case study allows us to explore the interplay between uncertainties from a scenario with exotic energy injection, and the much larger uncertainties from standard astrophysics.

To summarize, the goal of this paper is twofold: a) to characterize how astrophysical uncertainties in the fiducial 21-cm model can alter the forecasted sensitivity to an exotic energy injection, and b) using PBHs as our case study, to provide updated 21-cm forecasts that account for systematics in the accretion model, in light of the astrophysical uncertainties. This will complement the study presented in ref. [82] and assess the impact of the PR model in the context of 21-cm cosmology.

This paper is structured as follows. In section 2, we provide an overview of 21-cm basics, highlighting the key ingredients that are altered by the presence of DM, with particular reference to accreting PBHs. In section 3, we describe our methodology for providing 21-cm sensitivity forecasts, with specific emphasis on the impact of astrophysical uncertainties entering the fiducial model. We go on to quantify our results in section 4, where we present forecasts for the upcoming SKA interferometer. Finally, we conclude and discuss the outlook in section 5. An appendix provides a comparison to previous work, and lists astrophysical parameters that are held fixed in our study.

2 The 21-cm signal and exotic energy injection

In this section, we introduce the basics of 21-cm phenomenology, and describe the standard astrophysics scenario governing the signal and its uncertainties. We then discuss the impact of PBH accretion on the 21-cm observables: the brightness temperature and the power spectrum.

2.1 Overview of 21-cm basics

The 21-cm signal is often parametrized by the so-called "spin temperature" (T_S), defined by the fraction of neutral hydrogen in the triplet state compared to the singlet state,

\frac{n_1}{n_0} = 3\, e^{-T_0/T_S} ,    (2.1)

where n_1 and n_0 are the number densities of neutral hydrogen in the triplet (excited) and singlet (ground) states, respectively. The factor of 3 comes from the degeneracy of the triplet state compared to the singlet, and k_B T_0 = h\nu_0 is the energy of the 21-cm photons.
By definition, higher spin temperatures correspond to a higher proportion of excited states (with n_1 ≈ 3 n_0 for T_S ≫ T_0), while lower temperatures correspond to a smaller proportion of excited states (with n_1 ≪ n_0 for T_S ≪ T_0). The ratio of these densities (and therefore the spin temperature) depends on the interactions that can induce hyperfine transitions. There are three dominant ingredients that determine the spin temperature, which can be understood as the weighted average of the temperatures associated with each process:

T_S^{-1} = \frac{T_{\rm CMB}^{-1} + x_\alpha T_c^{-1} + x_c T_k^{-1}}{1 + x_\alpha + x_c} .    (2.2)

Each term corresponds to a key interaction. The first corresponds to the stimulated emission and absorption caused by a background radiation field (which we take to be the CMB), a process characterized by the CMB temperature, T_CMB. The second term accounts for the coupling between the spin temperature and the color temperature, T_c, mediated by Lyman-α photons through the Wouthuysen–Field (WF) effect [94, 95]. The color temperature of the Lyman-α radiation field can be understood as the slope of the spectrum near the Lyman-α frequency, and when the radiation spectrum reaches a steady state, this color temperature is equal to the gas kinetic temperature field at the Lyman-α frequency [96]. The strength of this interaction is determined by the coupling coefficient x_α, which is proportional to the flux of Lyman-α photons. Physically, this coupling arises because Lyman-α photons excite electrons from their original hyperfine ground state (with number densities n_0 or n_1). Upon relaxation, the electron can decay to either of the ground state levels, therefore altering the original population between the singlet and triplet states. The third interaction in eq. (2.2) is due to hydrogen–hydrogen collisions, whose strength is determined by the collisional coupling, x_c. This final term becomes negligible after z ≃ 30, as the expansion of the Universe dilutes the density of hydrogen atoms.

Upcoming 21-cm experiments, such as the SKA telescope, are designed to measure the frequencies probing the cosmic dawn and the epoch of reionization (corresponding to z ≃ 6−30). At these times, the collisional coupling is negligible (x_c ≪ 1). Lastly, for all scenarios of interest, the color temperature is approximately equal (within 5% [97]) to the kinetic temperature of the baryons, T_c ≃ T_k, so we use them interchangeably in this work.

In the Rayleigh-Jeans limit (h\nu ≪ k_B T_b(\nu)), the specific intensity of a blackbody is proportional to the brightness temperature, T_b. The primary 21-cm observable is the differential brightness temperature, \delta T_b, which measures the intensity of the 21-cm line relative to the CMB background (or in general, the background radiation), and depends on both the spin temperature and the optical depth of the 21-cm line, \tau_{21},

\delta T_b = \frac{T_S - T_{\rm CMB}}{1+z}\,\left(1 - e^{-\tau_{21}}\right) .    (2.3)

When the spin temperature is greater (less) than the CMB temperature, the signal is in emission (absorption). There is no signal when the spin temperature is in equilibrium with the CMB. For the frequencies (redshifts) of interest, the optical depth is small (\tau_{21} ≪ 1) and eq. (2.3) can be approximated as [4, 98, 99]

\delta T_b \simeq 27\, x_{\rm HI}\, (1+\delta_b) \left(1 - \frac{T_{\rm CMB}}{T_S}\right) \left(\frac{1}{1 + H^{-1}\,\partial v_r/\partial r}\right) \left(\frac{1+z}{10}\right)^{1/2} \,{\rm mK} ,    (2.4)

where H is the Hubble constant, x_HI is the fraction of neutral hydrogen, \delta_b is the fractional baryon density perturbation, and \partial v_r/\partial r is the line-of-sight gradient of the neutral hydrogen velocity.
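As a numerical illustration of eqs. (2.2) and (2.4), the following is a minimal sketch, assuming T_c = T_k (as discussed above) and an illustrative cosmic-dawn parameter point; it is not the paper's pipeline.

```python
import numpy as np

def spin_temperature(z, T_k, x_alpha, x_c=0.0):
    """Weighted-average spin temperature of eq. (2.2), with T_c = T_k."""
    T_cmb = 2.725 * (1 + z)                              # CMB temperature [K]
    inv_TS = (1/T_cmb + x_alpha/T_k + x_c/T_k) / (1 + x_alpha + x_c)
    return 1.0 / inv_TS

def delta_Tb_mK(z, T_S, x_HI=1.0, delta_b=0.0, dvdr_over_H=0.0):
    """Differential brightness temperature of eq. (2.4) [mK]."""
    T_cmb = 2.725 * (1 + z)
    return (27.0 * x_HI * (1 + delta_b) * (1 - T_cmb / T_S)
            / (1 + dvdr_over_H) * np.sqrt((1 + z) / 10.0))

# Cosmic-dawn example: strong Ly-a coupling pulls T_S toward a cold T_k,
# producing an absorption (negative) signal of order -100 mK.
z = 17.0
T_S = spin_temperature(z, T_k=8.0, x_alpha=5.0)
print(T_S, delta_Tb_mK(z, T_S))
```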
Eq. (2.4) captures the signal at a specific spatial point, and its sky average, \overline{\delta T_b}, is what is referred to as the global 21-cm signal.

Since the properties of the IGM are not completely homogeneous, any fluctuations in the 21-cm signal in space and time can be characterized using the statistical properties of the signal. Particularly useful is the 21-cm power spectrum, P_{21}(k, z), which quantifies the variance of the brightness temperature as a function of spatial scale (or wavenumber k), at a given redshift. It is defined via the two-point correlation of the signal in Fourier space,

\langle \delta T_{21}(\mathbf{k}, z)\, \delta T_{21}^*(\mathbf{k}', z) \rangle = (2\pi)^3\, \delta_D(\mathbf{k} + \mathbf{k}')\, P_{21}(k, z) ,    (2.5)

where \delta_D is the Dirac delta function, and \delta T_{21}(\mathbf{k}, z) corresponds to the Fourier transform of \delta T_b(\mathbf{x}, z) - \overline{\delta T_b}(z). Throughout this work we use the dimensional reduced power spectrum,

\Delta^2_{21}(k, z) = \frac{k^3 P_{21}(k, z)}{2\pi^2} \quad [{\rm mK}^2] .    (2.6)

2.2 Shape of the signal from the dark ages to the cosmic dawn: the crucial role of astrophysical parameters

The 21-cm signal is dictated by the changing thermal and ionization properties of the Universe around the epoch of reionization, caused by the formation of the first astrophysical sources and their emission. These first stars determine the 21-cm signal through three key physical effects: i) through their impact on the kinetic temperature of the gas, via X-ray heating, ii) via the strength of WF coupling x_α, induced by Lyman-α pumping, and iii) via changes in the fraction of neutral hydrogen x_HI, shown explicitly in eqs. (2.2) and (2.4).

The epoch of interest for the present study is the redshift range z ≃ 10−25, and begins in the last portion of the dark ages, when no stars have formed yet, and the IGM is neutral. At z ∼ 25, the Universe has expanded sufficiently such that the collisional coupling between the spin temperature and the kinetic temperature of the gas is no longer effective, and \delta T_b = 0 is expected because T_S = T_CMB. When the first astrophysical sources switch on, they emit both Lyman-α and X-ray photons. Initially, Lyman-α emission is expected to couple the spin temperature to the gas temperature, generating an absorption signal, since the gas temperature is below that of the CMB. Then, as sources begin to emit more strongly in X-rays, the gas temperature increases, damping the absorption signal. Eventually, T_k (which is ∼T_S) increases above T_CMB, resulting in an emission signal, which is damped as the fraction of neutral hydrogen vanishes, and the Universe becomes completely ionized.

The sources responsible for the emission of Lyman-α photons (i.e., the first stars and galaxies) are discrete and clustered, and hence the Lyman-α background responsible for the WF coupling builds up inhomogeneously in space. Later, the X-ray flux originating from the first X-ray binaries is also expected to be patchy. Therefore, the temperature contrast between the heated and cold regions leads to spatial variations in T_k and thus in the brightness temperature, and these spatial variations can be captured with the power spectrum.

Both the globally averaged absorption/emission signal and the power spectrum that characterizes the spatial variations are shaped by the details of the star formation process, and eventually depend on some key parameters that are crucial in the current study. In particular, three quantities are certainly relevant: 1) the number of Lyman-α photons emitted per baryon, N_α; 2) the X-ray luminosity, L_X, normalized to the star-formation rate, \dot{M}_*; 3) the minimum mass of star-forming DM halos, M_turn.
The first quantity is responsible for the WF coupling described above, and therefore impacts the coupling of the spin temperature and the kinetic temperature of the gas. The second parameter controls the heating of the cold gas. Both parameters depend on the Star Formation Rate Density (SFRD), which quantifies how much stellar mass is formed per unit volume and per unit time at each redshift. It plays a central role in the 21-cm modeling and determines the normalization of the radiation fields mentioned above that shape the signal and its fluctuations. In section 3, we will address how the SFRD evolves over cosmic history and how it is modeled with the numerical tools that we use. The third quantity, instead, sets the threshold mass for DM halos to cool and form stars: larger values of M_turn restrict star formation to rarer massive halos, delaying the onset of radiation backgrounds. In section 3, we will also provide more details on M_turn.

These three astrophysical parameters are uncertain. As a consequence, both the shape of the absorption signal and the power spectrum that future experiments aim to probe are uncertain, and our current knowledge prevents us from characterizing them accurately. Therefore, the sensitivity of these upcoming data to new physics scenarios depends strongly on the values chosen for the astrophysical parameters, which we will illustrate in this paper by forecasting the sensitivity to PBHs.

2.3 The case study of Primordial Black Holes: impact on the signal

Additional physics phenomena beyond the Standard Model can alter the 21-cm signal described in the previous subsection. In particular, we consider a hypothetical population of massive PBHs that accrete baryonic matter. These objects can affect the 21-cm brightness temperature by altering the spin temperature and the 21-cm optical depth. Through the process of accretion, PBHs would inject high-energy photons, which can deposit energy in the IGM, heating, exciting, and ionizing the gas. Therefore, PBHs would alter the 21-cm optical depth by reducing the neutral hydrogen fraction and also alter the spin temperature by increasing the kinetic temperature from heating. The accretion process by PBHs would also produce Lyman-α photons, which would directly impact the efficiency of the WF effect through the dimensionless coupling x_α, by increasing the flux of Lyman-α photons, J_α,

x_\alpha = 0.416 \times \frac{16\pi^2 e^2}{27\, m_e c\, A_{10}}\, \frac{h\nu_0}{k_B T_k}\, J_\alpha ,    (2.7)

where A_10 is the Einstein coefficient for spontaneous emission, and e and m_e are the charge and mass of the electron. In particular, the Lyman-α flux can be split into two contributions,

J_\alpha = J_\alpha^{\rm astro} + J_\alpha^{\rm PBH} ,    (2.8)

where J_\alpha^{\rm astro} ∝ N_α is the astrophysical contribution from the first stars [100],² and J_\alpha^{\rm PBH} denotes the contribution from accreting primordial black holes.

²Note that Zeus21 does not include the contribution from X-ray excitations of neutral hydrogen [101], which is subdominant for the astrophysical parameters and redshifts we consider [102].

Additionally, injection of energy from accreting PBHs also alters the free electron fraction, x_e, and the baryon temperature, T_k, of the IGM. To account for this, we use the modified evolution equations described in ref. [103] and given by

\frac{{\rm d}x_e(z)}{{\rm d}z} = \frac{1}{(1+z)\, H(z)} \left( R(z) - I(z) - I_X(z) \right) ,
\frac{{\rm d}T_k}{{\rm d}z} = \frac{1}{1+z} \left[ 2 T_k + \gamma\, (T_k - T_{\rm CMB}) \right] + K_h ,    (2.9)

where R(z) and I(z) are the standard recombination and ionization rates, and γ is the dimensionless opacity of the gas. I_X(z) and K_h denote the additional respective ionization and heating rates from exotic injections, which are proportional to the deposited energy due to exotic sources. We will define the deposited energy in section 3.2.2. We refer the reader to refs. [51, 103, 104] for further details on these coefficients.
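A minimal numerical sketch of eq. (2.7) follows; the CGS constant values are standard assumed values (not taken from the paper), and the equation's reconstruction from the extracted text, in particular the placement of A_10 and the appearance of T_k, should be checked against the original.

```python
import numpy as np

# Standard CGS constants (assumed values)
E_ESU = 4.803e-10    # electron charge [esu]
M_E   = 9.109e-28    # electron mass [g]
C     = 2.998e10     # speed of light [cm/s]
H_PL  = 6.626e-27    # Planck constant [erg s]
K_B   = 1.381e-16    # Boltzmann constant [erg/K]
A_10  = 2.85e-15     # 21-cm spontaneous-emission coefficient [1/s]
NU_0  = 1420.4e6     # 21-cm rest frequency [Hz]

def x_alpha(J_alpha, T_k):
    """WF coupling of eq. (2.7); J_alpha in photons/(cm^2 s Hz sr), T_k in K."""
    prefactor = 0.416 * 16 * np.pi**2 * E_ESU**2 / (27 * M_E * C * A_10)
    return prefactor * (H_PL * NU_0) / (K_B * T_k) * J_alpha

# Fluxes of order 1e-10 photons/(cm^2 s Hz sr) give an order-unity coupling.
print(x_alpha(1e-10, T_k=10.0))
```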
We show in figure 1 how the quantities in eq. (2.9) are affected by the presence of accreting PBHs. We show the impact for the two different accretion models: the Bondi-Hoyle-Lyttleton (BHL) model described in section 2.3.1, and the Park-Ricotti (PR) model, described in section 2.3.2. In this figure, we set the PBH mass to M_PBH = 10³ M⊙, and the fraction of dark matter contained in PBHs, f_PBH = Ω_PBH/Ω_DM, to 0.1.

[Figure 1: Impact of accreting PBHs on the free-electron fraction (left panel), x_e, and on the gas temperature (right panel), T_k. Results are shown for two different accretion models, the Park-Ricotti and Bondi-Hoyle-Lyttleton models. For both panels we assume a PBH mass of M_PBH = 10³ M⊙ and a fractional PBH abundance of f_PBH = 0.1.]

As can be clearly seen, the BHL accretion model has a much larger impact than the PR accretion model on x_e and T_k, especially at late times, changing the free electron fraction and the baryon temperature by more than one order of magnitude. Hence, the modeling of accretion strongly influences the ionization fraction and gas temperature, and thus the predicted 21-cm signal.

The key quantity governing the impact of the accreting PBHs on the 21-cm signal is the accretion rate, \dot{M} \equiv {\rm d}M/{\rm d}t, which quantifies the rate at which baryonic matter is captured by a black hole. This quantity is typically a function of the PBH mass and PBH speed, and also depends on the properties of the medium. It is used to calculate the amount of energy that is injected into the IGM, and thus the impact on the observables. In the next sections we will compare the two accretion models, BHL and PR. We remark here that we do not study the impact of dark matter mini-halos on the accretion rate in this work.³ Our conclusions, namely that astrophysical uncertainties associated with the choice of fiducial model can significantly impact the derived constraints, are expected to remain unchanged in this case.

³We refer the interested reader to refs. [79, 82] for an analysis with the CMB, and ref. [47] for a 21-cm analysis.

2.3.1 The Bondi–Hoyle–Lyttleton model

The Bondi-Hoyle-Lyttleton (BHL) model was developed as an interpolation between two limiting cases: the ballistic regime, which describes accretion onto a point mass moving at constant velocity through a uniform-density medium under purely gravitational influence [85–88], and spherical accretion, which considers a stationary, spherically symmetric object accreting matter, while accounting for both pressure and gravity [89, 90]. With these simplifying assumptions, the resulting interpolation formula can provide an order-of-magnitude estimate of the accretion rate [90], without capturing complicated hydrodynamical effects such as radiation feedback. Despite these simplifications, it is commonly used to derive bounds and forecasts in the cosmological setting. We provide a summary of the model below.

Under the BHL accretion model [85–90], the mass accretion rate onto an isolated compact object with mass M and with a velocity v_rel relative to the ambient gas is given by

\dot{M}_{\rm BHL} = \frac{4\pi\lambda\, (GM)^2 \rho_b}{\left(v_{\rm rel}^2 + c_s^2\right)^{3/2}} ,    (2.10)

where \rho_b and c_s denote the density and sound speed of the surrounding medium, respectively, and λ is a dimensionless parameter determining the normalization of the accretion rate.
Bondi originally computed the maximal value of λ as a function of the equation of state of the gas under simplifying assumptions, finding λ ∼ O(1) [90]. Later studies corrected this value of λ, specifically computing it in cosmological scenarios, where the effects of cosmic expansion and DM overdensities were included [75, 76, 105]. These studies generally assume spherical symmetry for the accretion flow and identify bremsstrahlung (free-free) emission near the Schwarzschild radius as the dominant cooling mechanism, while also accounting for the Hubble flow.

However, the calculation of λ from first principles involves simplifying complicated accretion processes, and an alternative phenomenological approach for quantifying λ exists in the literature. This approach involves setting the value of λ to be consistent with astrophysical observations. Indeed, values of λ ≃ 10⁻²−10⁻³ are frequently adopted to align theoretical predictions with observations, such as the absence of a large population of isolated neutron stars [106] or stellar-mass black holes [107] in the local Universe. This correction is also motivated by studies of nearby active galactic nuclei [108] and the accretion environment around the central supermassive black hole of the Milky Way [109]. The suppression factor λ is intended to account for a variety of non-gravitational effects (including pressure forces, viscosity, and radiation feedback) that can act to reduce the accretion rate. In what follows, we adopt λ = 0.01 as a representative value for disk accretion scenarios.

2.3.2 The Park–Ricotti model

The role of radiative feedback in the accretion process was investigated by Park and Ricotti through hydrodynamical simulations of accretion from a homogeneous medium onto a compact object [91–93]. These simulations revealed the formation of an ionization front (sometimes preceded by a shock wave depending on the velocity regime) alongside a sharp suppression of the accretion rate at low speed. The authors developed a simplified analytical prescription, the PR model, which successfully reproduces the accretion rates observed in the simulations, and which we briefly outline in the following.

The high-energy radiation generated during accretion ionizes the surrounding gas, altering its temperature, density, and flow velocity relative to the black hole. These changes, in turn, impact the accretion dynamics. The PR model finds a semi-analytic fit to their simulation that agrees with the functional form of the BHL accretion formula, eq. (2.10), but in this case accreting from the ionized gas. Hence, the PR accretion rate is

\dot{M}_{\rm PR} = \frac{4\pi\, (GM)^2 \rho_{\rm in}}{\left(v_{\rm in}^2 + c_{s,{\rm in}}^2\right)^{3/2}} ,    (2.11)

where \rho_in, v_in, and c_{s,in} are the density, relative velocity, and sound speed of the ionized gas, respectively. The sound speed c_{s,in} is treated as a constant, parameterizing the temperature of the ionized gas. The values of \rho_in and v_in are determined from the ambient gas properties \rho_b and v_rel by enforcing one-dimensional mass conservation and force equilibrium across the ionization front, as detailed in ref. [82] and references therein. At high relative velocities, v_rel ≳ 2 c_{s,in}, the PR and BHL models converge, since the ram pressure dominates over the ionization pressure.
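Both rates share the functional form of eq. (2.10); a minimal sketch in CGS units follows. The illustrative inputs are placeholders, and the mapping from ambient to ionized-gas quantities via the ionization-front jump conditions is deliberately omitted here.

```python
import numpy as np

G = 6.674e-8        # gravitational constant [cm^3 g^-1 s^-2]
M_SUN = 1.989e33    # solar mass [g]

def mdot_bhl(M, rho_b, v_rel, c_s, lam=0.01):
    """BHL accretion rate of eq. (2.10) [g/s]; all inputs in CGS."""
    return 4 * np.pi * lam * (G * M)**2 * rho_b / (v_rel**2 + c_s**2)**1.5

def mdot_pr(M, rho_in, v_in, cs_in):
    """PR accretion rate of eq. (2.11) [g/s]: the BHL form with lambda = 1,
    evaluated with the density, velocity, and sound speed of the ionized gas."""
    return 4 * np.pi * (G * M)**2 * rho_in / (v_in**2 + cs_in**2)**1.5

# Illustrative numbers only: a 1e3 Msun PBH in gas of 1e-27 g/cm^3 moving at
# 30 km/s, with c_s = 6 km/s (neutral) and c_s,in = 23 km/s (ionized).
# NOTE: we naively reuse the ambient density and velocity for the PR call;
# the real PR model derives rho_in and v_in from the jump conditions.
M = 1e3 * M_SUN
print(mdot_bhl(M, 1e-27, 3.0e6, 6.0e5))
print(mdot_pr(M, 1e-27, 3.0e6, 2.3e6))
```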
However, the BHL rate is typically reduced by the normalization parameter λ, resulting in an accretion rate lower than that of the PR model. At lower velocities, v_rel ≲ 2 c_{s,in}, the formation of a shock front leads to significant deviations: whereas the BHL rate increases, the PR rate declines and becomes strongly suppressed relative to BHL. It is this regime that is most relevant in the cosmological setting, and has the largest phenomenological impact.

The value of c_{s,in}, or equivalently the gas temperature within the ionized region, sets the characteristic velocity and hence the peak of the PR accretion rate. While c_{s,in} is determined, in principle, by the balance between radiative heating and cooling, the PR model treats it as a free parameter. A commonly adopted benchmark, which we use in this work, is c_{s,in} = 23 km/s, corresponding to a temperature of 4 × 10⁴ K [93]. A more detailed analysis of this parameter and its implications for cosmological constraints is provided in ref. [82].

Once the accretion rate has been specified, it can be applied to a specific cosmological setup characterized by a redshift-evolving density and relative speed between PBHs and baryons. It can subsequently be used to compute the PBH luminosity, which plays a central role in determining the rate of energy injection. In the following section, we describe in more detail our modeling of the cosmological medium.

2.3.3 Environmental properties

In this work, we adopt both the BHL and PR models to describe the accretion of a gas with a homogeneous density distribution onto PBHs. As mentioned above, the Universe is starting to develop virialized halos in the epoch of interest in this study, and some of these halos are forming stars, giving rise to the cosmic dawn. However, prior to the end of the cosmic dawn and the onset of reionization, the vast majority of PBHs are still expected to be isolated in the cosmological medium rather than in virialized halos (see for instance Figure 14 in ref. [110] and ref. [75]).⁴ Hence, similarly to our previous work [82], we take the cosmological background density of baryons to evolve with redshift as

\rho_b = 200\, m_p \left(\frac{1+z}{1+z_{\rm rec}}\right)^3 \,{\rm g\,cm^{-3}} ,    (2.12)

where m_p is the proton mass in grams and z_rec ≃ 1100 is the redshift at recombination. The baryon sound speed is given by

c_s = \sqrt{\frac{\gamma\,(1+x_e)\,T_k}{m_p}}\;{\rm km\,s^{-1}} ,    (2.13)

where T_k is the gas temperature, x_e is the ionization fraction, and γ is the adiabatic index. We also adopt the linear cosmological relative velocity between baryons and DM as the relative velocity between PBHs and the background gas, v_rel, with the RMS value given by [111, 112]

\sqrt{\langle v_{\rm rel}^2 \rangle} = \min\left[1, (1+z)/1000\right] \times 30\;{\rm km\,s^{-1}} .    (2.14)

The PBH velocity distribution is assumed to follow a Maxwell–Boltzmann profile, where we relate the root-mean-square velocity in eq. (2.14) to the temperature, to uniquely define the distribution.⁵

⁴See, however, ref. [47] for an analysis that also considers the contribution from some PBHs that are contained in virialized halos.
⁵As pointed out in ref. [44], previous works, such as refs. [76, 78], have (incorrectly) used a proxy for this averaging, v_eff, corresponding to a limiting case where the accretion luminosity has the proportionality L_acc ∝ \dot{M}^2. We further discuss the effect of varying the velocity distribution in section A.1.
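The background quantities of eqs. (2.12)–(2.14) are simple to evaluate; a minimal sketch follows. Note that eq. (2.13) as written leaves the Boltzmann constant implicit, so it is inserted explicitly here; the adiabatic index default of 5/3 is an assumption, as the text does not fix its value.

```python
import numpy as np

M_P = 1.673e-24     # proton mass [g]
K_B = 1.381e-16     # Boltzmann constant [erg/K]
Z_REC = 1100.0      # recombination redshift

def rho_b(z):
    """Mean baryon density of eq. (2.12) [g/cm^3]."""
    return 200.0 * M_P * ((1 + z) / (1 + Z_REC))**3

def sound_speed_kms(T_k, x_e, gamma=5.0/3.0):
    """Baryon sound speed of eq. (2.13) [km/s]; T_k in K."""
    return np.sqrt(gamma * (1 + x_e) * K_B * T_k / M_P) / 1e5

def v_rel_rms_kms(z):
    """RMS baryon-DM relative velocity of eq. (2.14) [km/s]."""
    return min(1.0, (1 + z) / 1000.0) * 30.0

# Environment at z = 17 for a mostly neutral, cold IGM:
z = 17.0
print(rho_b(z), sound_speed_kms(T_k=10.0, x_e=2e-4), v_rel_rms_kms(z))
```

These outputs are exactly the (rho_b, c_s, v_rel) inputs the BHL and PR rate functions of the previous sketch expect, after converting the speeds to cm/s.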
We remark here that any contribution to the power spectrum in the form of Poisson noise [113], due to the discreteness of PBHs, is not accounted for here. This has been studied previously (e.g., in refs. [44, 114]), where it was found that for the scales probed by SKA and HERA with wavenumbers k ≃ 0.1−1 Mpc⁻¹, the effect is subdominant. Furthermore, we have also not modeled the potential impact of PBHs on early structure formation [36, 37] or on the formation of the first stars themselves [115], which remain interesting avenues for future investigation.

3 Methodology

Our goal is to provide 21-cm sensitivity forecasts for PBHs, highlighting the impact of the choice of fiducial model on the forecasts. To this end, we implement the exotic physics of accreting PBHs into the Zeus21 code [100, 116], modeled with both the BHL and PR models. In section 3.1, we first describe the numerical modeling of the first stars, as accounted for in Zeus21. We go on to describe our modeling of injected and deposited energy from accreting matter around PBHs in section 3.2. In section 3.3, we introduce three plausible astrophysical scenarios as our fiducial models for mock data generation. In section 3.4, we outline our statistical analysis pipeline, including details on the expected sensitivities of the HERA and SKA telescope configurations. Finally, in section 3.5, we marginalize over astrophysical nuisance parameters and present 21-cm sensitivities to PBHs for three astrophysical scenarios.

3.1 Zeus21 calculation of the 21-cm signal

The 21-cm signal during the cosmic dawn is principally determined by the effects of the radiation fields produced by the first stars, whose formation and emission properties are uncertain. Modeling the formation and associated emission of the first stars is very challenging, especially on cosmological scales. There have been dedicated efforts using hydrodynamical simulations (see, e.g., refs. [117, 118]), but these are computationally very expensive. A faster semi-numerical approach also exists, where 3D realizations of evolved density, temperature, ionization, velocity, and radiation fields are computed using Lagrangian perturbation theory (see, e.g., the popular code 21cmFAST [119, 120]). These Lagrangian methods produce results comparable to full hydrodynamical simulations [121], at a fraction of the computational cost. Despite this speed-up, running parameter scans with these methods remains computationally expensive. In this work, we instead opt to use the fully analytic code Zeus21 [100, 116], where the SFRD is the main quantity determining the 21-cm signal, and both the global signal and power spectrum are computed approximately two orders of magnitude faster than with 21cmFAST. Comparisons have shown that Zeus21 and 21cmFAST agree at the 10% level for both the 21-cm global signal and power spectrum [100, 116]. We summarize below how we use the Zeus21 code in our analysis.

Following ref. [100], we take the SFRD to be an approximately log-normal variable at cosmic dawn. This allows for the fully analytic computation of the global signal and power spectrum. This analytic calculation relies on the assumption that the SFRD scales exponentially with the over/underdensities δ_R, which are set by the cosmological initial conditions. The 21-cm signal thus depends on the sum of SFRDs, averaged over different comoving radii R.
Under these assumptions, the SFRD in a region of comoving radius R, and with density contrast δ_R, is given by [122, 123]

{\rm SFRD}(z|\delta_R) = (1 + \delta_R) \int {\rm d}M_h\, \frac{{\rm d}n}{{\rm d}M_h}(\delta_R)\, \dot{M}_*(M_h) ,    (3.1)

where \frac{{\rm d}n}{{\rm d}M_h}(\delta_R) is the density-modulated halo mass function (HMF), and \dot{M}_*(M_h) \equiv \dot{M}_*(M_h, z) is the star formation rate (SFR) of a galaxy hosted in a halo of mass M_h (see ref. [123] for more details). We use a Sheth-Tormen halo mass function [124], and consider an SFR prescription in which some fraction of the baryonic matter accreted by galaxies is converted into stars,

\dot{M}_* = f_*\, f_b\, \dot{M}_h ,    (3.2)

where f_b = Ω_b/Ω_m is the baryon fraction, \dot{M}_h is the mass accretion rate of a galaxy, and f_* \equiv f_*(M_h) is the SFR efficiency. The efficiency f_*(M_h) is regulated by an exponentially suppressed duty fraction, ensuring that stars do not form efficiently below some threshold M_turn, as given by

f_*(M_h) = \frac{2\epsilon_*}{(M_h/M_{\rm pivot})^{-\alpha_*} + (M_h/M_{\rm pivot})^{-\beta_*}}\, \exp\left(-\frac{M_{\rm turn}}{M_h}\right) ,    (3.3)

where \epsilon_*, M_pivot, \alpha_*, and \beta_* are constant parameters, fixed to the values shown in table 3. In Zeus21, M_turn is taken to be the atomic cooling threshold, corresponding to a fixed virial temperature of T_vir = 10⁴ K. We generalize this implementation following ref. [125], and modify Zeus21 by taking M_turn to depend on the virial temperature as

M_{\rm turn} = 10^{0.58} \times T_{\rm vir}^{3/2} \left(\frac{1+z}{10}\right)^{-3/2} .    (3.4)

Zeus21 additionally uses a broadcasting methodology to compute the effective biases γ_R, by positing that the SFRD in eq. (3.1) is related to its average value,

{\rm SFRD}(z|\delta_R) \approx \overline{\rm SFRD}(z)\, e^{\gamma_R \tilde{\delta}_R} ,    (3.5)

where \overline{\rm SFRD}(z) is computed following [126], and \tilde{\delta}_R = \delta_R - \gamma_R\, \sigma_R^2/2, where δ_R and σ_R are the density contrast and variance in a region of comoving radius R, respectively (see ref. [100] for details). The approximation as an exponential allows an analytic calculation of the correlation function of the SFRD given δ_R, which is a well-known cosmological output of the CLASS code [127]. The nonlinearities associated with structure formation are therefore encoded in the effective bias γ_R.

Using the above prescription for the SFRD, the Lyman-α radiation field J_α^astro and the X-ray radiation field J_X^astro can be written as integrals of the form

J_{\alpha/X}^{\rm astro} \propto \int {\rm d}R\; c_{\alpha/X}(R)\; {\rm SFRD}(R) ,    (3.6)

where the integral is performed over the comoving radius R, and c_{α/X} are coefficients accounting for photon propagation, defined explicitly in ref. [100]. We remark here explicitly that the Lyman-α normalization in eq. (3.6) is J_α^astro ∝ N_α, the number of Lyman-α photons per baryon. Additionally, the X-ray term that contributes to heating is normalized by J_X^astro ∝ L_X/\dot{M}_*, where L_X/\dot{M}_* is the soft-band (E < 2 keV) X-ray luminosity per unit SFR in units of erg s⁻¹ M⊙⁻¹ yr [100, 128]. These normalizations, and their associated uncertainties, are a focus of this paper, and will be discussed in detail in section 3.3.

Zeus21 uses the background cosmological evolution of the ionization fraction and IGM temperature as computed by CLASS [127], and calculates the stellar contribution to these quantities throughout cosmic dawn using the prescription described above. To include the contribution from PBHs, we modify both Zeus21 and CLASS. In Zeus21, we include the Lyman-α contribution from PBHs as in eq. (2.7), and in CLASS, we use the modified evolution equations described in ref. [103] and given in eq. (2.9), evolved using HyRec [129].
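A minimal sketch of the star-formation ingredients in eqs. (3.3) and (3.4) follows. The default parameter values are placeholders for illustration, not the values of the paper's table 3.

```python
import numpy as np

def f_star(Mh, eps_star=0.1, M_pivot=3e12, alpha_star=0.5, beta_star=-0.6,
           M_turn=5e7):
    """SFR efficiency of eq. (3.3): a double power law in halo mass with an
    exponential duty-cycle suppression below M_turn. Default parameter
    values are placeholders, not the paper's table 3 values."""
    dpl = 2.0 * eps_star / ((Mh / M_pivot)**(-alpha_star)
                            + (Mh / M_pivot)**(-beta_star))
    return dpl * np.exp(-M_turn / Mh)

def m_turn(T_vir, z):
    """Threshold halo mass of eq. (3.4) for a given virial temperature [K]."""
    return 10**0.58 * T_vir**1.5 * ((1 + z) / 10.0)**(-1.5)

# Atomic-cooling threshold (T_vir = 1e4 K) at z = 15, and the resulting
# efficiency of a 1e9 Msun halo:
Mt = m_turn(1e4, 15.0)
print(Mt, f_star(1e9, M_turn=Mt))
```

Raising T_vir (as in the more-constraining scenario below) raises M_turn, confining star formation to rarer, more massive halos and delaying the radiation backgrounds.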
3.2 Energy injection and energy deposition from accreting matter

3.2.1 Energy injection

The total energy injected into the medium per unit volume and unit time is given by

\frac{{\rm d}^2E}{{\rm d}V\,{\rm d}t}\bigg|_{\rm inj} = L\, n_{\rm PBH} = L\, f_{\rm PBH}\, \frac{\rho_{\rm DM}}{M_{\rm PBH}} ,    (3.7)

where L is the bolometric luminosity of a PBH with mass M_PBH, n_PBH is the number density of PBHs, f_PBH is the fraction of DM in the form of PBHs, and ρ_DM is the DM density today. The bolometric luminosity of an accreting PBH depends on the accretion rate, \dot{M}, and is connected to the total rest-mass energy inflow of the accreted material through the radiative efficiency parameter ε, defined as

L = \epsilon\, \dot{M} c^2 .    (3.8)

The efficiency ε is generally a function of the accretion rate, often modeled as a power law, \epsilon(\dot{M}) \propto \dot{M}^a. The exponent a, along with the normalization, depends on the nature of the accretion flow, particularly on whether an accretion disk forms and on its specific properties. A key criterion for the formation of an accretion disk is whether the angular momentum of the baryons is sufficient to maintain Keplerian motion at radii larger than the innermost stable circular orbit [130]. Following refs. [44, 78, 82], we assume that an accretion disk always forms.

The modeling of the transport of angular momentum and energy (through turbulence, viscosity, shear, and magnetic fields) has significant uncertainties, which translates into uncertainties in forecasting cosmological constraints. The angular momentum and energy transport and its influence on the radiative output have been explored in depth in prior studies [78, 82]. Nonetheless, in line with refs. [44, 78, 82], we adopt a fiducial model in which a hot, geometrically thick accretion disk develops, known as advection-dominated accretion flow (ADAF). In this regime, the radiative efficiency ε decreases with decreasing accretion rate. In the ADAF scenario, ε is modeled as

\epsilon(\dot{M}) = \epsilon_0 \left(\frac{\dot{M}}{0.01\,\dot{M}_{\rm Edd}}\right)^a ,    (3.9)

where \dot{M}_Edd denotes the Eddington accretion rate. The values of ε₀ and a depend on the accretion rate itself and follow a piecewise definition, provided in table 1 (taken from ref. [131]). The parameter δ in table 1 quantifies the fraction of turbulent energy in the accretion disk that heats the electrons directly, and we assume a benchmark value δ = 0.1 in this work.⁶ For the typical accretion rates arising in both the BHL and PR models, and assuming δ = 0.1, we usually fall in the regime where ε₀ = 0.12 and a = 0.59. The other values of ε₀ and a that occur for larger values of the accretion rate are given in table 1. The functional form in eq. (3.9) encodes the necessary information required to compute the energy injection into the IGM.

⁶See ref. [82] for a study on the effect of varying δ on cosmological bounds, and ref. [131] for more details on the ADAF parameterization.

δ     \dot{M}/\dot{M}_Edd range      ε₀      a
0.1   < 2.9 × 10⁻⁵                   1.58    0.65
      9.4 × 10⁻⁵ − 5.0 × 10⁻³        0.026   0.27
      5.0 × 10⁻³ − 6.6 × 10⁻³        0.50    4.53

Table 1: Piecewise power-law fitting parameters for the radiative efficiency ε, used in eq. (3.9). Values are taken from ref. [131].

3.2.2 Energy deposition

It is well established that energy injected into the medium during the dark ages is not deposited immediately [132–134]. Instead, energy is deposited at later times. Initially, on short time-scales, injected particles initiate an electromagnetic cascade through the interaction with thermal photons, where there is an increase in the number of non-thermal particles at the expense of a decrease in their average energy.
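Before moving on, a minimal sketch of the injection chain of eqs. (3.7)–(3.9) follows; the (ε₀, a) defaults are the regime the text quotes as typical for δ = 0.1, and all inputs are in CGS.

```python
import numpy as np

C_CGS = 2.998e10  # speed of light [cm/s]

def radiative_efficiency(mdot_over_edd, eps0=0.12, a=0.59):
    """ADAF efficiency of eq. (3.9); (eps0, a) = (0.12, 0.59) is the regime
    quoted as typical for delta = 0.1 (other regimes are in table 1)."""
    return eps0 * (mdot_over_edd / 0.01)**a

def luminosity(mdot, mdot_over_edd):
    """Bolometric luminosity of eq. (3.8) [erg/s]; mdot in g/s."""
    return radiative_efficiency(mdot_over_edd) * mdot * C_CGS**2

def injection_rate(L, f_pbh, rho_dm, M_pbh):
    """Injected energy per unit volume and time, eq. (3.7).
    L [erg/s], rho_dm [g/cm^3], M_pbh [g] -> erg cm^-3 s^-1."""
    return L * f_pbh * rho_dm / M_pbh
```

Because ε shrinks with decreasing accretion rate, the suppressed low-velocity PR rates translate into luminosities that drop even faster than \dot{M} itself.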
These non-thermal particles then cool slowly with redshift, on cosmological timescales, and when these particles have energies of the order of keV, they start interacting strongly with the hydrogen atoms of the IGM, thus depositing their energy [103, 133]. This delayed deposition is quantified by the energy deposition functions f_c(z, x_e), which describe the fraction of injected energy deposited at redshift z into a specific channel c. The most relevant deposition channels are heating, ionization, and excitation of atoms,

\frac{{\rm d}^2E}{{\rm d}V\,{\rm d}t}\bigg|_{{\rm dep},\,c} = f_c(z, x_e)\, \frac{{\rm d}^2E}{{\rm d}V\,{\rm d}t}\bigg|_{\rm inj} .    (3.10)

The energy deposition functions f_c(z, x_e) can be computed directly from a given energy-differential luminosity spectrum L_ω using

f_c(z, x_e) = H(z)\, \frac{\int \frac{{\rm d}\ln(1+z')}{H(z')} \int T(z', z, \omega)\, L_\omega\, {\rm d}\omega}{\int L_\omega\, {\rm d}\omega} ,    (3.11)

where T(z', z, ω) are the energy- and redshift-dependent transfer functions tabulated in refs. [135, 136]. These functions are implemented in the DarkAges code [103], which we use to perform the integrals in eq. (3.11). One of the assumptions underlying this approach is that changes to the free electron fraction x_e from any additional energy injection do not significantly alter the evolution of the electromagnetic cascade [103]. Accordingly, the ionization fraction x_e is taken to follow its standard evolution in the absence of energy injection.

In figure 2, we show the energy deposition from PBHs into heating, ionization, and excitation (Lyman-α) for both accretion models. The energy deposition from heating, ionization, and Lyman-α photons decreases monotonically after z ∼ 1000, largely a consequence of the dilution of the background medium. However, the two accretion models differ substantially in how quickly they decrease. Much more energy is deposited in the early Universe in the PR model than in the BHL model, but the PR model shuts off more rapidly at late times than the BHL model. The implications of these differences for the 21-cm signal will be discussed in the next sections.

With the ingredients described above, we can compute the Lyman-α contribution from accreting PBHs as

J_\alpha^{\rm PBH} = \frac{c}{4\pi\, H(z)\, \nu_\alpha^2\, h}\, \frac{{\rm d}^2E}{{\rm d}V\,{\rm d}t}\bigg|_{{\rm dep},\,{\rm Ly}\alpha} ,    (3.12)

where ν_α is the Lyman-α frequency, and we have explicitly written the Lyman-α contribution from eq. (3.10).

[Figure 2: Energy deposition into the IGM from PBHs into Lyman-α (solid lines), heating (dashed lines), and ionization (dotted lines) for the BHL (red lines) and PR (blue lines) accretion models, as a function of redshift. We assume a PBH mass of M_PBH = 10³ M⊙ and a fractional PBH abundance of f_PBH = 0.1.]

Finally, we note that the DarkHistory code [137] improves upon DarkAges by self-consistently accounting for the backreaction of changes to x_e by exotic injection on the evolution of the electromagnetic cascade, by recomputing x_e-dependent transfer functions. This correction becomes particularly relevant when the ionization fraction rises rapidly during the epoch of reionization, or when the ionization fraction is modified substantially by exotic injections in the ionization channel. The first effect becomes most relevant at z ≲ 12 (see left panel of figure 1). Since our analysis is restricted to z > 10, we do not expect substantial deviations from our results stemming from this narrow redshift range.
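Given a deposition rate in the Lyman-α channel from eq. (3.10), eq. (3.12) is a one-line conversion; a minimal sketch, with the Lyman-α frequency as an assumed standard value:

```python
import numpy as np

C_CGS    = 2.998e10    # speed of light [cm/s]
H_PLANCK = 6.626e-27   # Planck constant [erg s]
NU_ALPHA = 2.466e15    # Lyman-alpha frequency [Hz] (assumed standard value)

def j_alpha_pbh(dep_rate_lya, H_z):
    """PBH Lyman-alpha flux of eq. (3.12).

    dep_rate_lya: energy deposited into the Ly-a channel [erg cm^-3 s^-1],
    i.e. the Ly-a instance of eq. (3.10); H_z: Hubble rate [1/s].
    Returns J_alpha^PBH in photons/(cm^2 s Hz sr).
    """
    return C_CGS / (4 * np.pi * H_z * NU_ALPHA**2 * H_PLANCK) * dep_rate_lya
```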
This is especially the case, since at these redshifts heating from astrophysical sources dominates over any additional heating contributions from exotic sources. The second effect could be relevant, especially in less-constraining astrophysical scenarios, where the 21-cm signal does not rule out significant energy injections. In particular, as shown in Figure 6 of ref. [137], constraints become 10%–50% stronger for the cases of interest when the backreaction effect of modifications to the ionization and thermal history induced by exotic sources is included. This is attributed to the larger temperatures obtained when including backreaction effects, as shown in Figure 4 of ref. [137]. We do not attempt to quantify this effect further, given that the systematics that we will present in this paper correspond to order-of-magnitude effects on the bound, and will be most relevant in a region of the parameter space corresponding to large energy injections, which are constrained by other observations (see, e.g., figure 6 below and references in the caption).

3.3 Mock data generation

To forecast constraints using the 21-cm signal, a fiducial set of astrophysics parameters for mock data generation must be chosen. However, the properties of the first stars are poorly known, and consequently, the fiducial 21-cm signal expected to be measured by experiments represents a significant uncertainty. Current theoretical predictions of the power spectrum during cosmic dawn vary by at least one order of magnitude, with similar uncertainties in the astrophysical parameters underlying the fiducial theory model (see, e.g., refs. [16–18, 123, 138–143]).

Several fiducial astrophysical scenarios have been studied in the literature. Some include only atomically cooled Pop II stars (e.g., refs. [17, 44, 140]), while others also consider the effects of an additional population of molecularly cooled halos forming Pop III stars (e.g., refs. [46, 62, 123]). However, when confronted with existing data, the allowed parameter space for each model widens substantially, such that the theory prediction for the global and power spectrum signals spans roughly two orders of magnitude [16, 138, 139, 141–144].

To account for this uncertainty, we consider three distinct fiducial astrophysical scenarios for mock data generation, parametrized similarly to ref. [44]. In particular, we consider an astrophysical model consisting of three parameters that we vary, and several additional parameters that we set to the fiducial values given in table 3 in section A.2. The value (or range of values) of each parameter is chosen to be consistent with existing independent observational constraints from ultraviolet luminosity functions (UVLFs) [140, 145, 146], spectra from the Chandra X-ray observatory [138, 147, 148], and X-ray limits from HERA [149, 150].⁷ For simplicity, we consider an astrophysical model containing only Pop II stars, but remark that the properties of Pop III stars are also poorly understood, so the uncertainties associated with choosing a fiducial model would remain if we included Pop III stars [151].⁸

The three parameters of the astrophysical model that we vary over in this work are:

• N_α: the number of Lyman-α photons between 10.2 eV and 13.6 eV per baryon.
• L_X: the luminosity per unit \dot{M}_* in soft X-rays (0.5 keV ≤ E ≤ 2 keV), in units of erg s⁻¹ M⊙⁻¹ yr.
• T_vir: the minimum virial temperature of galaxy-forming halos.

In particular, we vary the astrophysical parameters between three scenarios for mock data generation.
We label these as benchmark, more-constraining, and less-constraining, with the astrophysics parameter choices for each of these scenarios given in table 2. Each of these three scenarios is chosen to be consistent with current observational and theory constraints, and named according to its constraining power for exotic scenarios. All scenarios are consistent with the 2σ upper limits on the power spectrum reported by HERA [149, 150]. We justify the parameter choices for our three scenarios below. As far as the constraining power associated with each scenario is concerned, the full discussion will be presented in section 4.

To specify the ranges on N_α, we begin by remarking that the typical benchmark scenario of N_α = 9690 Lyman-α photons per baryon is given in ref. [122], and computed from the spectral energy distributions provided by the Starburst99 simulations [152]. To remain consistent with the existing literature, we also use this as our benchmark value. However, the Starburst99 model predictions include data for different metallicities, initial mass functions, ages of the stellar population, and whether the stars formed at a continuous rate or instantaneously. Integrating the Starburst99 spectra for different combinations of these parameters to derive a value of N_α gives different results, ranging from several hundreds to tens of thousands of photons per baryon. Additionally, more recent additions to Starburst99 have highlighted further theoretical uncertainties affecting the emission properties of stellar sources, such as stellar rotation, which can change the UV luminosity by up to a factor of 5 [153, 154]. Given the absence of any robust theoretical model of the UV spectrum of the first stars, we bracket this uncertainty through the parameter choices for N_α. In particular, we choose the less-constraining and more-constraining values of N_α to be 3 × 10³ and 4 × 10⁴ photons per baryon. These values are contained within ranges on N_α implicitly considered in refs. [44, 128, 155–157].

For L_X, an upper bound can be derived using stacked X-ray observations from Chandra, giving L_X/\dot{M}_* ≲ 10⁴² erg s⁻¹ M⊙⁻¹ yr [148, 158]. Recently, the HERA collaboration released a lower limit, finding that L_X/\dot{M}_* > 10³⁹·⁹ erg s⁻¹ M⊙⁻¹ yr at 2σ confidence [149, 150].

⁷These observational constraints necessarily depend on additional modeling assumptions about the star formation history and emission properties of early galaxies. We leave a fully self-consistent study of all independent constraints to future work.
⁸Pop II stars have been found in surveys looking at metal-poor stars in our galaxy and neighboring satellites. However, Pop III stars have not yet been detected [151].

Astrophysical parameter                        Benchmark   Less-Constraining   More-Constraining
N_α                                            9690        3 × 10³             4 × 10⁴
log₁₀(L_X/\dot{M}_* [erg s⁻¹ M⊙⁻¹ yr])         40.5        41                  39.5
T_vir [K]                                      10⁴         10⁴                 10⁵

Table 2: The three astrophysical parameters that we vary over for forecasting the sensitivity of the 21-cm signal to PBHs. These parameters are: the number of photons between Ly-α and the Lyman continuum per baryon, N_α; the soft-band (E < 2 keV) luminosity/SFR in X-rays, L_X; the minimum virial temperature of halos hosting galaxies, T_vir. The second column shows our benchmark scenario, while the third and fourth columns show the sets of parameters leading to less sensitivity to PBHs and more sensitivity to PBHs, respectively.
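For reference, table 2 can be captured in a small configuration container; this is an illustrative structure only, as the paper feeds these values to Zeus21 directly.

```python
# The three fiducial astrophysical scenarios of table 2.
SCENARIOS = {
    "benchmark":         {"N_alpha": 9690.0, "log10_LX": 40.5, "T_vir_K": 1e4},
    "less_constraining": {"N_alpha": 3e3,    "log10_LX": 41.0, "T_vir_K": 1e4},
    "more_constraining": {"N_alpha": 4e4,    "log10_LX": 39.5, "T_vir_K": 1e5},
}
```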
This lower limit is derived from a lower bound on the gas temperature, obtained from the non-detection of the 21-cm power spectrum by HERA, leading to the conclusion that some heating of the gas must be present in the early Universe. However, the translation of this gas-temperature limit into an X-ray limit requires assumptions on the modeling of astrophysical sources. To include the uncertainties associated with the astrophysical modeling, we conservatively choose our lower limit to be L_X/\dot{M}_* = 10³⁹·⁵ erg s⁻¹ M⊙⁻¹ yr. We choose a benchmark scenario contained within these limits, assuming high-mass X-ray binaries consistent with ref. [159], with L_X/\dot{M}_* = 10⁴⁰·⁵ erg s⁻¹ M⊙⁻¹ yr,⁹ and we choose an upper limit of L_X/\dot{M}_* = 10⁴¹ erg s⁻¹ M⊙⁻¹ yr.¹⁰

For T_vir, a lower limit can be set by the atomic cooling threshold, and an upper limit can be set by requiring consistency with observed high-z Lyman break galaxies [145], and from observations of the Lyman-α forest [17, 155, 161, 162].¹¹ We consider our benchmark scenario at the atomic cooling threshold T_vir = 10⁴ K, and choose our less-constraining and more-constraining values at 10⁴ K and 10⁵ K, respectively.

The substantial variation in the astrophysics parameters reflects the absence of powerful observational probes constraining cosmic dawn. Ongoing efforts to constrain these parameters using independent observables include UVLFs [16, 123, 140, 144, 163], quasar dark fraction measurements [164, 165], cosmic X-ray and radio background measurements [138, 141, 143, 147, 148], and the high-redshift Lyman-α forest [142, 166]. As the James Webb Space Telescope (JWST) continues to shed light on the early Universe [167], such approaches may further narrow the allowed parameter space [168]. In addition, upcoming missions such as the Advanced Telescope for High Energy Astrophysics (ATHENA) X-ray telescope will work in synergy with 21-cm observatories to constrain the cosmic dawn [169]. However, until the 21-cm signal is measured, and independent observatories constrain the UV and X-ray properties of the first stars, a significant systematic will remain in the mock data generation for forecasts. Other groups have considered these astrophysical uncertainties on the 21-cm signal, studying the impact of the first galaxies being bright or faint [123, 170], or how altering astrophysical scenarios would impact the constraints on annihilating DM [17] or warm DM [18].

⁹This benchmark value should be understood as an order-of-magnitude estimate of the X-ray luminosity. See, e.g., ref. [160], and references therein.
¹⁰We remark here that using a larger upper limit for L_X would further increase heating of the IGM from astrophysical sources, making distinguishability from exotic sources even more difficult (and can be roughly understood as an even less-constraining scenario). Similarly, using a lower limit would have the opposite effect, allowing for a more-constraining scenario.
¹¹Attempts to constrain M_turn are given, e.g., in ref. [140].

3.4 Statistics

Following ref. [44], we choose a multivariate Gaussian likelihood of the form

\log \mathcal{L} = -\frac{1}{2} \sum_{i_z}^{N_z} \sum_{i_k}^{N_k} \frac{\left[ \Delta^2_{21}(z_{i_z}, k_{i_k})_{\rm test} - \Delta^2_{21}(z_{i_z}, k_{i_k})_{\rm fid} \right]^2}{\sigma^2_{\rm tot}(z_{i_z}, k_{i_k})} ,    (3.13)

where \Delta^2_{21}(z_{i_z}, k_{i_k})_{\rm test/fid} are the 21-cm power spectra evaluated at a given z and k, for either the test or fiducial model, and \sigma_{\rm tot}(z_{i_z}, k_{i_k}) is the total measurement error, defined in the next section.
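Eq. (3.13) is straightforward to evaluate on a gridded power spectrum; a minimal sketch with toy arrays (the array shapes and inputs are illustrative, not the paper's pipeline):

```python
import numpy as np

def log_likelihood(ps_test, ps_fid, sigma_tot):
    """Gaussian log-likelihood of eq. (3.13).

    ps_test, ps_fid: (N_z, N_k) arrays of the reduced power spectrum [mK^2]
    for the test and fiducial models; sigma_tot: total error of eq. (3.14).
    """
    return -0.5 * np.sum((ps_test - ps_fid)**2 / sigma_tot**2)

# Toy usage: 10 z-bins x 16 k-bins, a 5% offset test model, 30% errors.
rng = np.random.default_rng(1)
fid = rng.uniform(1.0, 100.0, size=(10, 16))
print(log_likelihood(1.05 * fid, fid, sigma_tot=0.3 * fid))
```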
The sums run over redshift z and through k-space (in Mpc^−1), with the range we consider explicitly including k = {0.13, 0.18, 0.23, 0.29, 0.34, 0.39, 0.45, 0.5, 0.55, 0.61, 0.66, 0.71, 0.77, 0.82, 0.87, 0.93} Mpc^−1, consistent with those used in ref. [62], and ten log-spaced z measurements, z = {10.27, 11.04, 11.91, 12.93, 14.11, 15.52, 17.21, 19.29, 21.91, 25.30}, matching the 8 MHz bandwidth of the experiments we consider.^12 We adopt the three-parameter model outlined in section 3.3, and explicitly defined in table 2. We compute the likelihood when varying fPBH under the three different astrophysical scenarios defined in table 2.

12 We consider only z ≳ 10, since Zeus21 is designed to be valid until reionization at z ≃ 10 [100].

3.4.1 Sensitivities

The total measurement error in the power spectrum is given by [62, 140, 171]

\sigma^2_{\rm tot}(z, k) = \sigma^2_{\rm exp}(z, k) + \sigma^2_{\rm sample}(z, k) + \left[ 0.2\, \Delta^2_{21}(z, k)_{\rm fid} \right]^2 ,   (3.14)

where σ_exp denotes the experimental error,^13 σ_sample is the error introduced by Poisson noise due to cosmic variance, and the third term accounts for a model uncertainty budget of 20%, in line with the error budget computed for 21cmFAST and propagated to Zeus21 [100, 121].

13 The experimental sensitivities σ_exp depend explicitly on the power spectrum of the fiducial astrophysical model, as shown explicitly in eq. (15) of ref. [172].

We compute the experimental error for each of our telescope configurations, and for each of the three fiducial astrophysical scenarios, using 21cmSense [172–174], assuming the default system temperature given by

T_{\rm sys} = 100~{\rm K} + 260~{\rm K} \left( \frac{\nu(z)}{150~{\rm MHz}} \right)^{-2.6} ,   (3.15)

where ν(z) is the redshifted 21-cm line frequency. Modeling Tsys as in eq. (3.15) accounts for a receiver temperature of 100 K, and includes a sky temperature consistent with the measured diffuse radio background at ∼100 MHz [175].^14 We use the same system temperature for all the experimental configurations considered in this work.

14 As pointed out in ref. [62], other works, including ref. [171], considered the temperature given in eq. (3.15) to correspond to their pessimistic scenario.
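As an illustration, eqs. (3.14) and (3.15) amount to the following short Python functions (a sketch in our own notation; in practice 21cmSense computes σ_exp internally):

```python
import numpy as np

NU_21 = 1420.406  # rest-frame 21-cm line frequency in MHz

def nu_of_z(z):
    """Redshifted 21-cm line frequency nu(z) in MHz."""
    return NU_21 / (1.0 + z)

def t_sys(z):
    """System temperature of eq. (3.15) in K: a 100 K receiver
    temperature plus a sky temperature scaling as (nu/150 MHz)^-2.6."""
    return 100.0 + 260.0 * (nu_of_z(z) / 150.0) ** (-2.6)

def sigma_tot(sigma_exp, sigma_sample, delta2_fid):
    """Total error of eq. (3.14): experimental error, cosmic
    variance, and a 20% modeling-uncertainty term, in quadrature."""
    return np.sqrt(sigma_exp**2 + sigma_sample**2 + (0.2 * delta2_fid)**2)
```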
In this work, we focus our analysis on producing bounds for SKA, but for completeness we include a discussion of the experimental sensitivity of HERA. For SKA, we consider two configurations, the initial SKA AA* configuration and the final SKA AA4 configuration (sometimes equivalently referred to in the literature as SKA1-LOW and SKA2-LOW, respectively [176]). Both follow a spiral layout, where AA* has 307 stations, while AA4 has 512 stations. The SKA-LOW stations are ∼38 m in diameter, but can also be divided into sub-stations of 12 m and 18 m. These sub-stations provide an increased field of view and access to shorter baselines. We assume a Gaussian beam for each baseline, and that the experiment is located at a latitude of 30.7°. We refer the interested reader to ref. [177] for more details and follow their specifications. Both SKA configurations are explicitly implemented in the SKA_forecast notebook provided with 21cmSense. For our sensitivities, we use a deep survey, where we set the number of tracking hours to be the same as the number of observation hours per day. We use a fixed bandwidth of 8 MHz (corresponding to our redshift spacing), and assume an observation time of 1080 hrs (6 hrs per day for 180 days).

For HERA, we use a hexagonal antenna layout comprising 331 antennas (11 on each side), with a dish size of 14 m and a separation of 12.12 m. We assume a Gaussian beam for each baseline, and that the experiment is located at a latitude of 30.8°. For our sensitivities, we perform a drift scan, where the number of tracking hours is equal to the beam-crossing time, similarly to refs. [62, 171]. As for SKA, we use a fixed bandwidth of 8 MHz, and assume an observation time of 1080 hrs (6 hrs per day for 180 days).

3.5 Marginalization

To properly account for degeneracies between accreting PBHs and astrophysical parameters, a marginalization must be performed. In the context of 21-cm studies, this is commonly done using MCMC methods [44, 155], which efficiently sample the posterior without evaluating the full parameter volume, or with Fisher forecasts [46, 62, 171], which provide an estimate of parameter correlations when likelihood evaluations are computationally expensive. In our case, a likelihood evaluation takes ∼1 s, making a full parameter scan feasible. As such, we run a full 4-dimensional parameter scan over the three parameters in table 2, together with the PBH parameter fPBH. We do this for the PBH masses MPBH = {1, 10, 100, 1000} M⊙,^15 keeping the rest of the astrophysical parameters fixed to the values given in table 3 of section A.2. The grid has dimension 20^4, corresponding to 20 log-spaced points per parameter. We verified a posteriori that the grid adequately covers the relevant parameter space, such that the marginalized likelihood vanishes at physically meaningful values, while not missing the region of maximum likelihood.

15 We compute a few additional masses within this range to more precisely locate the PBH mass where the bound vanishes at fPBH = 1, to avoid unphysical features in figure 6.

From the resulting 4-dimensional likelihood, we perform a marginalization over the astrophysical parameters by interpolating the grid and then integrating using the trapezoidal rule. The resulting 1-dimensional posterior is then used to construct a highest-density interval, with the bounds stated at the 95% probability value.^16

16 We compute the bound assuming a flat prior on fPBH over the range (0, 1), in line with the constraint derived from the CMB in ref. [82].
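A minimal sketch of this marginalization and bound-setting step, assuming the likelihood has already been evaluated on the 4-dimensional grid and omitting the interpolation onto a finer grid (variable and function names are ours):

```python
import numpy as np

def marginalize(like_grid, a1, a2, a3, fpbh):
    """Marginalize a likelihood grid of shape (len(a1), len(a2),
    len(a3), len(fpbh)) over the three astrophysical parameters with
    the trapezoidal rule, assuming flat priors, and return the
    normalized 1-d posterior in fPBH."""
    post = np.trapz(like_grid, x=a1, axis=0)
    post = np.trapz(post, x=a2, axis=0)
    post = np.trapz(post, x=a3, axis=0)
    return post / np.trapz(post, x=fpbh)

def bound_95(fpbh, post):
    """95% upper bound from the cumulative posterior, assuming the
    highest-density interval starts at fPBH = 0 (i.e. a posterior
    peaked at small fPBH)."""
    cdf = np.concatenate(([0.0], np.cumsum(
        0.5 * (post[1:] + post[:-1]) * np.diff(fpbh))))
    return np.interp(0.95, cdf / cdf[-1], fpbh)
```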
4 Results

In this section, we show the potential sensitivity of future 21-cm data to accreting PBHs. We begin in section 4.1 by using the BHL accretion model as an example, to show that the constraining power of future experiments on any exotic energy-injection mechanism, whose 21-cm signal is governed by its heating and ionizing impact on the IGM, depends sensitively on the choice of fiducial model. We then show in section 4.2 how the 21-cm sensitivity to PBHs varies between the BHL and PR accretion models. In section 4.3, we combine these elements to present the projected exclusion limits on the PBH abundance, fPBH, illustrating how both systematics impact the projected constraints.

4.1 The importance of the fiducial model

We analyze three distinct astrophysical scenarios by varying the three parameters (Nα, LX, Tvir) defined in section 3.3. These three scenarios are defined in table 2: a benchmark model based on the default parameters in Zeus21, a less-constraining model, and a more-constraining model. The constraining power of each scenario is determined by how prominently the baseline 21-cm signal stands out against experimental noise and how sensitive it is to additional energy injection, where this sensitivity can be understood using the simple physical principles discussed below.

• The less-constraining scenario combines a low flux of Lyman-α photons per baryon (Nα), a high X-ray luminosity (LX), and a low minimum halo virial temperature (Tvir). The combination of weak Lyman-α coupling and strong X-ray heating produces a shallow absorption trough in the global signal: a low Lyman-α flux implies a weak coupling between the spin temperature and the gas temperature, compounded by an increase in the gas temperature, so that the difference between TCMB and Tk is smaller. Moreover, a low Tvir allows star formation to occur in smaller halos and thus at earlier times, shifting the features in the signal to earlier times (lower frequencies), where the expected experimental sensitivity is weaker [17]. The resulting signal, shown in the top row of figure 3, provides a poor baseline for constraining any DM candidate that contributes to the heating of the IGM in this epoch.

• The more-constraining scenario assumes the opposite: high Nα, low LX, and high Tvir. Strong Lyman-α coupling and minimal X-ray heating create a deep, prominent absorption signal, and thus a larger power spectrum. A high Tvir causes star formation to occur only at later times in more massive halos, enhancing the power spectrum fluctuations. As shown in the bottom row of figure 3, this scenario produces a deep global signal, and a power spectrum that is higher in magnitude than in the other cases. As such, this scenario is more sensitive to deviations caused by exotic energy injection, allowing more stringent constraints to be set, as we quantify in section 4.3.

[Figure 3 appears here. Panel titles: less-constraining scenario (Nα = 3 × 10^3, LX = 10^41 erg s^−1 M_⊙^−1 yr, Tvir = 10^4 K); benchmark scenario (Nα = 9690, LX = 10^40.5 erg s^−1 M_⊙^−1 yr, Tvir = 10^4 K); more-constraining scenario (Nα = 4 × 10^4, LX = 10^39.5 erg s^−1 M_⊙^−1 yr, Tvir = 10^5 K). Left panels show δTb [mK] versus z; right panels show Δ²₂₁ [mK²] versus z, with curves for fPBH = 0 and fPBH in the ranges (0, 10^−3], (10^−3, 10^−2], and (10^−2, 10^−1], and sensitivity bands for SKA AA4, SKA AA*, and HERA.]

Figure 3: Predictions of the 21-cm global signal (left panels) and the power spectrum (right panels), as a function of z, for the less-constraining (first row), benchmark (middle row) and more-constraining (bottom row) astrophysical scenarios. We assume the BHL accretion model and PBHs of mass MPBH = 10² M⊙, with the colors corresponding to different values of fPBH as indicated in the legend of the top left panel; for the power spectrum, we fix k = 0.15 Mpc^−1. We also show the 1σ experimental sensitivities, σ_exp(z, k), from the HERA experiment, and the SKA AA* and SKA AA4 configurations, computed using 21cmSense [172–174]. The astrophysical parameters for each row are given in the title, with the definitions provided in table 2. The remaining astrophysical parameters are fixed to those shown in table 3.

• The benchmark scenario assumes an intermediate value of each of these parameters, and thus has an intermediate constraining power compared to the other two scenarios.
It was chosen based on the default parameters defined in Zeus21, in line with the discussion presented in section 3.3, and is roughly consistent with previous fiducial benchmarks [44, 100, 116, 140].

We characterize and visualize each of these three astrophysical scenarios in figure 3. The figure shows the global signal (left panels) and the power spectrum (right panels) as a function of redshift, evaluated at a representative spatial scale k = 0.15 Mpc^−1. Each plot also shows the impact of a sub-dominant population of PBHs with relative abundance fPBH, assuming BHL accretion. We overlay on the power spectrum plots the 1σ experimental sensitivity associated with each of the experimental configurations that we consider. As shown in the plots, the sensitivity bands tighten the constraints progressively for each fiducial scenario from top to bottom. We emphasize that the experimental sensitivity depends on the fiducial model's power spectrum (see footnote 13). This dependence explains the shift in the intersection of the SKA sensitivity curve with the x-axis between the benchmark and more-constraining scenarios, and it occurs because the fiducial power spectrum in the benchmark scenario is greater than that of the more-constraining scenario in the narrow redshift range z ≃ 17−20.

As shown, the less-constraining scenario features a shallower global absorption dip and a suppressed power spectrum, especially at higher redshift. In this scenario, the 1σ experimental sensitivity band encompasses the signal, making it difficult to constrain the PBH hypothesis under our accretion modeling assumptions. This result generalizes to other exotic sources: when the experimental sensitivity for the fiducial scenario is of the same order as the signal, distinguishing between different injections proves difficult, as has already been shown in the context of DM injections [17, 18]. In contrast, the more-constraining scenario plotted in the bottom row clearly shows a very pronounced absorption dip and enhanced fluctuations. As a consequence, even a small amount of exotic injection is expected to alter the signal in a detectable way. The benchmark scenario, depicted in the middle row for comparison, shows an intermediate regime of distinguishability. We quantify the implications of these three scenarios for the resulting PBH bounds in section 4.3.

We remark that by choosing these three fiducial astrophysical scenarios, we take an agnostic approach as far as correlations between the parameters are concerned. This choice avoids relying on a quantitative model that attempts to describe the underlying astrophysical quantities and their evolution amid large uncertainties, and in the absence of complementary observations that fully constrain the properties of the first stars.

4.2 Uncertainty in accretion model

The sensitivity of the 21-cm signal to accreting PBHs is determined by their model-dependent energy injection into the IGM. Of the two accretion models we consider, the BHL accretion model produces a substantially larger impact on the ionization fraction and gas temperature at late times than the PR model, as shown in figure 1. In figure 4, we show how the altered properties of the IGM translate into the 21-cm global signal and power spectrum. In particular, in the top row of figure 4 we show the global signal, where the impact of accreting PBHs under the BHL prescription is significantly larger.
We also show, in the middle and bottom panels of figure 4, the impact of each accretion scenario on the power spectrum, including the 1σ experimental sensitivities of the three telescope configurations. The middle panel shows the power spectrum as a function of redshift at a fixed scale k = 0.15 Mpc^−1, and the bottom panel shows the power spectrum as a function of k at a fixed redshift z = 15. It is apparent from these figures that BHL accretion can be much more strongly constrained by the 21-cm power spectrum than PR accretion, as seen by comparing the bands describing the experimental sensitivities to the signal in the presence of accreting PBHs.

[Figure 4 appears here. Two columns of panels for the BHL (left) and PR (right) accretion models: δTb [mK] versus z (top row), Δ²₂₁ [mK²] versus z at k = 0.15 Mpc^−1 (middle row), and Δ²₂₁ [mK²] versus k at z = 15 (bottom row), with curves for fPBH = 0 and fPBH in the ranges (0, 10^−3], (10^−3, 10^−2], and (10^−2, 10^−1], and sensitivity bands for SKA AA4, SKA AA*, and HERA.]

Figure 4: Predictions of the 21-cm signal for PBHs for the BHL accretion model (left column) and the PR accretion model (right column), for a PBH mass of MPBH = 10² M⊙ and a range of fractional PBH abundances fPBH. Top row: impact on the global differential brightness temperature, δTb, as a function of redshift. Middle row: impact of accreting PBHs on the 21-cm power spectrum, Δ²₂₁, at a fixed scale k = 0.15 Mpc^−1, as a function of redshift. The 1σ sensitivities, σ_exp(z, k), are shown for the HERA telescope, and for the SKA AA* and SKA AA4 configurations, computed using 21cmSense [172–174]. Bottom row: same as the middle row, but showing the power spectrum, Δ²₂₁, at a fixed redshift z = 15, as a function of k. In each plot we assume the benchmark astrophysics scenario, corresponding to the middle row in figure 3 and given in table 2. The legend provided in the bottom left panel applies to all panels.

The difference in constraining power between the two accretion models is further illustrated in figure 5, where we show the joint posteriors for our astrophysical and PBH parameters at 68% and 95% probability for MPBH = 10² M⊙, for BHL and PR. The underlying fiducial model that we use for this scenario is the benchmark astrophysics scenario in table 2. As shown in the triangle plot, PR-accreting PBHs are unconstrained, while the PBH abundance for BHL-accreting PBHs is constrained to fPBH ≲ 10^−2.6. In the next section, we show how this substantial difference in the 21-cm power spectrum between accretion models translates into 21-cm bounds on the abundance of accreting PBHs.

[Figure 5 appears here: a triangle plot of the 1- and 2-dimensional posteriors for log10 Nα, log10 LX/Ṁ∗ (erg s^−1 M_⊙^−1 yr), log10 Tvir/K, and log10 fPBH, with contours for the BHL and PR accretion models.]

Figure 5: Comparison of the posteriors for our astrophysical and PBH parameters at 68% and 95% probability for a PBH mass of MPBH = 10² M⊙ and for two accretion models, BHL (blue lines and contours) and PR (red lines and contours). The fiducial astrophysical model is assumed to be the benchmark scenario in table 2.
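Contours like those in figure 5 can be generated from the gridded likelihood of section 3.5 by treating the grid points as weighted samples; the following getdist-based snippet is our own illustration (not the pipeline used for the figure), assuming a flattened (N, 4) array of parameter points and the corresponding likelihood values for each accretion model:

```python
from getdist import MCSamples, plots

def grid_to_samples(points, weights, tag):
    """Wrap a flattened (N, 4) parameter grid and its likelihood
    values as weighted samples for kernel-density contour plotting."""
    return MCSamples(
        samples=points, weights=weights, label=tag,
        names=["Nalpha", "LX", "Tvir", "fPBH"],
        labels=[r"\log_{10} N_\alpha",
                r"\log_{10} L_X/\dot{M}_*",
                r"\log_{10} T_{\rm vir}",
                r"\log_{10} f_{\rm PBH}"],
    )

# bhl = grid_to_samples(points, bhl_likelihood, "BHL")
# pr = grid_to_samples(points, pr_likelihood, "PR")
# g = plots.get_subplot_plotter()
# g.triangle_plot([bhl, pr], filled=True, contour_colors=["blue", "red"])
```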
[Figure 6 appears here: constraints in the (MPBH [M⊙], fPBH) plane for MPBH = 1 to 10³ M⊙ and fPBH = 10^−6 to 1, showing the "21-cm (BHL)" and "21-cm (PR)" curves for the three astrophysical scenarios, together with the "CMB (BHL)", "CMB (PR)", and "other constraints" curves.]

Figure 6: Sensitivity of SKA AA4 to fPBH, the fractional contribution of PBHs to the DM abundance, at 95% probability versus the PBH mass, assuming a monochromatic PBH mass function. Results are shown as a function of the PBH mass for two accretion models: BHL accretion (blue curves and blue shaded region) and PR accretion (red curves and red shaded region). We show results for three different fiducial astrophysical scenarios: benchmark (solid lines, labeled "21-cm (BHL)" and "21-cm (PR)"), more-constraining (dash-dotted lines), and less-constraining (dashed lines); the astrophysics parameters for each of these three scenarios are provided in table 2. We remark that the less-constraining scenario lies at fPBH = 1 for both models. For comparison, we also show with a thin gray line (labeled "other constraints") the combined constraints from other probes, derived from gravitational waves from merging events [178, 179], radio and X-ray observations [180], microlensing [181–183], dynamical effects [184, 185], and dwarf galaxy heating [186]. We also show the CMB constraints reported in ref. [82] (dotted gray lines), labeled for both the BHL and PR accretion models.^17 We make use of [187] for plotting.

17 The analysis in ref. [82] focused on the mass range MPBH ≥ 10 M⊙, and the bound was not computed for lower masses.

4.3 21-cm forecasts and discussion

The forecasted constraints at 95% probability on fPBH are presented in figure 6, which captures the two primary sources of uncertainty investigated in this work: the astrophysical modeling of the cosmic dawn and the PBH accretion prescription. The results demonstrate that these uncertainties have a significant impact on the projected sensitivity of 21-cm experiments.

Our study reveals two key findings. First, the choice of the fiducial astrophysical scenario is critical. As shown by the spread between the less-constraining and more-constraining lines in figure 6, the projected constraints vary by orders of magnitude. In the less-constraining scenario (characterized by low Nα, high LX, and low Tvir), 21-cm observations may offer no constraining power, whereas the more-constraining scenario (characterized by high Nα, low LX, and high Tvir), combined with the BHL accretion model, could yield the most stringent constraints on stellar-mass PBHs. We claim that this result generalizes to any 21-cm constraint on additional heating and ionization of the IGM from exotic sources.

Secondly, the accretion model itself affects the forecast sensitivity. The PR model, which includes radiative feedback, yields constraints that are at least two orders of magnitude weaker than those from the BHL model, for each set of astrophysical parameters that we consider in this work. In particular, these bounds are weaker than existing, independent limits from other probes. Conversely, the widely used BHL model suggests that 21-cm experiments could probe a large and unconstrained region of the PBH parameter space, particularly for MPBH ≳ 10 M⊙, assuming a more-constraining astrophysical scenario.
We remark here that, despite the substantial late-time decrease in energy injection in the PR model compared to the BHL model, shown in figure 2, where the accretion rate is suppressed by ∼10 orders of magnitude at z ≃ 20, a bound can still be obtained. This is due to the significant energy deposition at early times, which alters the evolution equations at late times. We can understand this effect as a modification of the initial conditions for the later-time evolution, effectively raising the ionization and temperature floor. Thus, despite orders-of-magnitude lower injection at late times, the PR model still allows some constraint to be placed in our more-constraining astrophysical scenario. Counterintuitively, this implies that the 21-cm cosmological probe can be sensitive to early-time injections.

Therefore, our study highlights that disambiguating the underlying astrophysical model is crucial for the interpretation of any future 21-cm signal in the context of bounds on exotic energy injection. This strongly motivates independent observational efforts to constrain the properties of the first stars and galaxies. Indeed, a more robust approach than the one taken here, where we have bracketed the uncertainty between the less-constraining and more-constraining scenarios, would be to use future independent observations to derive posteriors for the astrophysical parameters, and then use these posteriors in a joint 21-cm forecast.^18

18 This would be a natural application of the methodology discussed, e.g., in refs. [16, 140, 144, 155, 165], to future data.

5 Conclusions and Outlook

Observations of the redshifted 21-cm signal with next-generation radio interferometers, such as the SKA [10], are expected to open up a new window into the early Universe. The 21-cm signal is highly sensitive to the thermal and ionization history of the IGM and thus provides a potentially powerful probe of exotic sources of energy injection, such as accreting [42, 44, 45, 47] or evaporating [47, 52, 57, 58, 65–73] PBHs and DM annihilations [17, 47, 49, 50, 53, 55–57, 60–64] or decays [46, 49–58]. However, the potential for this signal to constrain new physics is fundamentally limited by the precision with which the standard astrophysical processes that govern the cosmic dawn can be modeled, introducing substantial uncertainty due to our incomplete knowledge of early galaxy formation and emission.

In this work, we have presented a detailed forecast of the sensitivity of upcoming 21-cm power spectrum measurements to accreting PBHs, quantifying the impact of key systematic uncertainties. By exploring a range of plausible, yet observationally unconstrained, astrophysical scenarios, we demonstrated that the projected constraints on the PBH abundance, fPBH, can vary by several orders of magnitude. In a more-constraining astrophysical scenario, characterized by strong Lyman-α coupling, minimal X-ray heating, and a high mass threshold for star formation in halos, 21-cm observations could yield leading bounds on PBHs with mass MPBH ≳ 10 M⊙. Conversely, for a less-constraining scenario, with weaker Lyman-α coupling, stronger heating, and a lower halo mass threshold, upcoming experiments may offer no improvement over existing limits. Our results highlight that the significant impact of astrophysical uncertainties on constraints on exotic energy injection is a generic feature of any 21-cm forecast in which heating and ionization are the primary channels.
Furthermore, we investigated the impact of the PBH accretion model by comparing the standard Bondi-Hoyle-Lyttleton prescription with the more recent Park-Ricotti model, which incorporates a more comprehensive treatment of radiative feedback. Our analysis shows that the inclusion of feedback, which strongly suppresses the accretion rate at late times, relaxes the forecasted 21-cm constraints by at least two orders of magnitude across both constraining astrophysical scenarios. This result highlights the importance of accurate theoretical modeling of accretion in deriving robust constraints on PBHs from cosmological observables.

Finally, our results emphasize the importance of a multi-probe observational strategy. Synergies with complementary datasets, such as high-redshift UV luminosity functions from JWST [167, 168] and future X-ray data [169], will be essential for mitigating the degeneracy between astrophysics and exotic physics [16–18, 123, 138–144, 163], thereby enabling the 21-cm signal to reach its full potential as a probe of fundamental physics, and narrowing the range of uncertainties identified in this work.

Acknowledgments

We would like to thank Héctor Cruz and Julián Muñoz for helpful clarifications on Zeus21, and James Davies for clarification regarding differences between versions of 21cmFAST. We are grateful to Joshua Foster and Yitian Sun for prompt communication regarding their recent work [47]. We extend our thanks also to Gaétan Facchinetti, Hongwan Liu, Laura Lopez-Honorez, Andrei Mesinger, Diego Redigolo, Justus Schwagereit and Tracy Slatyer for useful discussions. We made use of the SOM Graviton computing infrastructure for this work. DA and SPR acknowledge support from the Generalitat Valenciana grants CIGRIS/2021/054 and CIPROM/2022/36, respectively. DA and SPR are also supported by grant PID2023-151418NB-I00, which is funded by MCIU/AEI/10.13039/501100011033/FEDER, UE, and by the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grants H2020-MSCA-ITN-2019/860881-HIDDeN and HORIZON-MSCA-2021-SE-01/101086085-ASYMMETRY. RE acknowledges support from DOE Grant DE-SC0025309 and Simons Investigator in Physics Awards 623940 and MPS-SIP-00010469. DG acknowledges support from the project Theoretical Astroparticle Physics (TAsP) funded by INFN. MV acknowledges support from the project "Theoretical Particle Physics and Cosmology (TPPC)" funded by INFN. DA thanks INFN Pisa, MIT CTP – a Leinweber Institute, YITP, and University of Amsterdam – GRAPPA for their hospitality, where part of this work was carried out. This work made use of Zeus21 [100], NumPy [188], SciPy [189], CLASS [127], getdist [190], and their dependencies.

A Appendix

A.1 Comparison to previous work

There is a previous study of future constraints on the PBH abundance using 21-cm cosmology [44]. In that work, BHL-accreting PBHs were considered and self-consistently implemented in 21cmFAST [119, 120]. The results that we show in figure 6 are consistent within approximately an order of magnitude with those presented in ref. [44], and we use this section to outline the differences between our work and ref. [44] that could account for this difference.

Firstly, ref. [44] used a different numerical code, 21cmFAST [119, 120], for computing the 21-cm signal and power spectrum, while we make use of Zeus21. While a comprehensive comparison has been carried out in refs.
[100, 116], where differences in observables were shown to be within the 20% level typically found between radiative transfer simulations and approximate semi-numeric models [121], this comparison was carried out for more recent versions of 21cmFAST. In particular, ref. [44] used 21cmFAST v1.2, while this extensive testing compared Zeus21 to 21cmFAST v3. Between versions of 21cmFAST there have been improvements with a significant impact on the resulting power spectrum,^19 of which we give a non-exhaustive list below:

19 We remark that it has been pointed out that additional approximations made in 21cmFAST may also lead to changes in the power spectra above the 20% level [191, 192].

• It was pointed out in Zeus21 [100] that older versions of 21cmFAST had underestimated adiabatic fluctuations. This approximation can cause predictions of the temperature fluctuations to differ by approximately one order of magnitude (see Fig. 13 of ref. [100] for a comparison, and ref. [193] for further details). This has since been updated in 21cmFAST, but was not available to the authors of ref. [44].

• Recent versions of 21cmFAST introduced several updates between versions 3.0 and 4.0, leading to discrepancies exceeding 10% in some cases. These changes include a correction to the adiabatic treatment (as described above), and an updated approach to the construction of density fields, where the cloud-in-cell method is now preferred over the previous nearest-neighbor method.

Secondly, ref. [44] used a minimal set of four astrophysics parameters: the UV ionization efficiency, the number of X-ray photons per solar mass, the minimum virial temperature of halos hosting galaxies, and the number of photons per baryon between Lyman-α and the Lyman limit. In our study, we use the same parameters, but reduce to a minimal model containing three parameters, keeping the UV ionization efficiency fixed, since that parameter has no analogue in Zeus21. This is because the UV ionization parameter defines the efficiency at which a grid point in 21cmFAST is fully ionized, and since Zeus21 does not evolve grids, it is not used. We checked that the three parameters that we have chosen have the largest effect on the bound. At the level of statistics, this implies that these parameters have the strongest correlation with fPBH. We verified this ourselves, and this correlation is also visible in the triangle plot in Fig. 13 of ref. [44].

Thirdly, ref. [44] used a different effective velocity in their computation, compared to our use of a Maxwell-Boltzmann distribution. In particular, ref. [44] chose to apply an effective velocity that assumes a radiative efficiency ϵ ∝ Ṁ^a_PBH with a = 1, despite using a radiative efficiency with an arbitrary power-law dependence in their ADAF model. This is discussed in Sec. III.A of ref. [44], and the general power-law dependence is shown in their eq. (3.6).

Parameter   Value           Parameter        Value                       Parameter   Value
α⋆          0.5             Mesc             10^10 M⊙                    Nα*         9690
β⋆          −0.5            αesc             0.0                         NLW         6200
ϵ⋆          10^−1           log10 LX/Ṁ∗*     40.5 erg s^−1 M_⊙^−1 yr     ALW         2.0
Mpivot      3 × 10^11 M⊙    E0,X             500 eV                      βLW         0.6
Tvir*       10^4 K          Emax,X           2000 eV                     Avcb        1.0
fesc,0      10^−1           αX               −1.0                        βvcb        1.8

Table 3: Parameters for Pop II sources in the fiducial Zeus21 model. Parameters that vary between astrophysical scenarios are marked with an asterisk. See table 2 for the ranges that we vary over, and ref. [116] for definitions of these parameters.
Despite all the caveats discussed there, ref. [44] chose to follow previous works and adopt the expression for the effective velocity assuming the a = 1 result. In taking a Maxwell-Boltzmann velocity distribution as described in section 2.3.3, we use the general power-law dependence and do not fix a = 1. We checked that using a = 1 leads to a larger injection, and thus a more stringent constraint, by a factor of ∼2.

Fourthly, of the three astrophysical scenarios that we consider, the benchmark case is the most similar to, but not an exact reproduction of, the global signal and power spectrum in ref. [44]. Finding a set of parameters in Zeus21 that exactly reproduced that global signal and power spectrum proved unsuccessful, for the reasons stipulated above. Phenomenologically, in figure 3 of ref. [44] the benchmark global signal has an absorption trough at z ∼ 15 with depth δTb ∼ −150 mK, whereas the corresponding trough for our fiducial scenario lies at z ∼ 16 with depth δTb ∼ −80 mK. Since experiments are more sensitive at lower redshifts, and since the depth of this trough translates into the amplitude of the power spectrum (see figure 3 for a visualization of both effects), we would expect the previous study to obtain more stringent bounds, as is indeed reported there. Those limits roughly correspond to our more-constraining scenario, with the caveat that we use a different velocity distribution, as detailed in the previous paragraph.

Finally, since Zeus21 breaks down at z ∼ 10,^20 we restrict to redshifts z > 10, with the explicit binning given in section 3.4. In contrast, ref. [44] used 9 log-spaced measurements in redshift in the interval z ∼ 8.3–19.5. As described above, experiments have increased sensitivity at later times, so including these lower-redshift bins leads to an increase in constraining power, further explaining the more stringent bounds reported there for the BHL model.

20 Since Zeus21 does not evolve reionization, the 21-cm signal in this setup follows the underlying density field by construction.

A.2 Fixed astrophysical parameters

Modeling the cosmic dawn and the formation of the first stars necessitates choosing many astrophysical parameters, each entailing assumptions about the properties of the first stars. In table 3, we provide a list of the Pop II parameters that we assume to be fixed in this work. Furthermore, we list other flags in Zeus21 that we use, to allow for reproducibility of our results. In particular, we use the flags: USE_RELATIVE_VELOCITIES = True, USE_LW_FEEDBACK = True, kmax_CLASS = 500, and zmax_CLASS = 50. Furthermore, we use the Planck 2018 parameters [194]: Ωc = 0.11933, Ωb = 0.02242, h = 0.6766, As = exp(3.047) × 10^−10, ns = 0.9665 and zreio = 7.82.
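For convenience, the fixed parameter choices of table 3 and the flags above can be collected in a single configuration dictionary. The snippet below is purely illustrative; the key names are our own shorthand and the actual Zeus21 keyword names may differ:

```python
# Fiducial Pop II parameters of table 3 and run flags (illustrative
# key names; consult the Zeus21 documentation for the exact keywords).
FIDUCIAL_ASTRO = {
    "alpha_star": 0.5,   "beta_star": -0.5,   "eps_star": 1e-1,
    "M_pivot": 3e11,     # [M_sun]
    "T_vir": 1e4,        # [K], varied per table 2
    "f_esc0": 1e-1,      "M_esc": 1e10,       "alpha_esc": 0.0,
    "log10_LX_over_SFR": 40.5,  # [erg/s per M_sun/yr], varied per table 2
    "E0_X": 500.0,       "Emax_X": 2000.0,    # [eV]
    "alpha_X": -1.0,
    "N_alpha": 9690,     # varied per table 2
    "N_LW": 6200,        "A_LW": 2.0,         "beta_LW": 0.6,
    "A_vcb": 1.0,        "beta_vcb": 1.8,
}

FLAGS = {"USE_RELATIVE_VELOCITIES": True, "USE_LW_FEEDBACK": True,
         "kmax_CLASS": 500, "zmax_CLASS": 50}
```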
References

[1] M. S. Turner, The road to precision cosmology, Annual Review of Nuclear and Particle Science 72 (2022) 1–35.
[2] T. R. Slatyer, What does cosmology teach us about non-gravitational properties of dark matter?, Nuclear Physics B 1003 (2024) 116468.
[3] M. Cirelli, A. Strumia and J. Zupan, Dark Matter, 2406.01705.
[4] S. Furlanetto, S. P. Oh and F. Briggs, Cosmology at low frequencies: The 21 cm transition and the high-redshift Universe, Phys. Rept. 433 (2006) 181–301, [astro-ph/0608032].
[5] J. R. Pritchard and A. Loeb, 21-cm cosmology in the 21st century, Rept. Prog. Phys. 75 (2012) 086901, [1109.6012].
[6] S. R. Furlanetto, The 21-cm line as a probe of reionization, in Understanding the Epoch of Cosmic Reionization: Challenges and Progress (A. Mesinger, ed.), vol. 423, pp. 247–280. Springer International Publishing, 2016. 1511.01131.
[7] P. Villanueva-Domingo, Shedding light on dark matter through 21 cm cosmology and reionization constraints. PhD thesis, U. Valencia, 2021. 2112.08201.
[8] R. Mondal and R. Barkana, Prospects for precision cosmology with the 21 cm signal from the dark ages, Nature Astron. 7 (2023) 1025–1030, [2305.08593].
[9] D. R. DeBoer et al., Hydrogen Epoch of Reionization Array (HERA), Publ. Astron. Soc. Pac. 129 (2017) 045001, [1606.07473].
[10] G. Mellema et al., Reionization and the Cosmic Dawn with the Square Kilometre Array, Exper. Astron. 36 (2013) 235–318, [1210.0197].
[11] S. D. Bale, N. Bassett, J. O. Burns, J. Dorigo Jones, K. Goetz, C. Hellum-Bye et al., LuSEE 'Night': The Lunar Surface Electromagnetics Experiment, arXiv e-prints (2023), [2301.10345].
[12] J. Burns et al., FARSIDE: A low radio frequency interferometric array on the lunar farside, in Bulletin of the American Astronomical Society, vol. 51, p. 178, 2019. 1907.05407.
[13] S. Jester and H. Falcke, Science with a lunar low-frequency array: from the dark ages of the Universe to nearby exoplanets, New Astron. Rev. 53 (2009) 1–26, [0902.0493].
[14] A. Liu and J. R. Shaw, Data analysis for precision 21 cm cosmology, Publ. Astron. Soc. Pac. 132 (2020) 062001, [1907.08211].
[15] J. Silk, The limits of cosmology, Gen. Rel. Grav. 57 (2025) 127, [2509.08066].
[16] O. Z. Katz, N. Outmezguine, D. Redigolo and T. Volansky, Probing new physics at cosmic dawn with 21-cm cosmology, Nucl. Phys. B 1003 (2024) 116502, [2401.10978].
[17] L. Lopez-Honorez, O. Mena, Á. Moliné, S. Palomares-Ruiz and A. C. Vincent, The 21 cm signal and the interplay between dark matter annihilations and astrophysical processes, JCAP 08 (2016) 004, [1603.06795].
[18] Q. Decant, A. Dimitriou, L. L. Honorez and B. Zaldivar, Simulation-based inference on warm dark matter from HERA forecasts, JCAP 07 (2025) 004, [2412.10310].
[19] B. J. Carr, The primordial black hole mass spectrum, Astrophys. J. 201 (1975) 1–19.
[20] P. Ivanov, P. Naselsky and I. Novikov, Inflation and primordial black holes as dark matter, Phys. Rev. D 50 (1994) 7173–7178.
[21] J. C. Niemeyer and K. Jedamzik, Near-critical gravitational collapse and the initial mass function of primordial black holes, Phys. Rev. Lett. 80 (1998) 5481–5484, [astro-ph/9709072].
[22] J. Yokoyama, Formation of MACHO primordial black holes in inflationary cosmology, Astron. Astrophys. 318 (1997) 673, [astro-ph/9509027].
[23] J. Yokoyama, Formation of primordial black holes in the inflationary universe, Phys. Rept. 307 (1998) 133–139.
[24] J. C. Niemeyer and K. Jedamzik, Dynamics of primordial black hole formation, Phys. Rev. D 59 (1999) 124013, [astro-ph/9901292].
[25] M. Shibata and M. Sasaki, Black hole formation in the Friedmann universe: Formulation and computation in numerical relativity, Phys. Rev. D 60 (1999) 084002, [gr-qc/9905064].
[26] I. Musco, J. C. Miller and L. Rezzolla, Computations of primordial black hole formation, Class. Quant. Grav. 22 (2005) 1405–1424, [gr-qc/0412063].
[27] S. Young, C. T. Byrnes and M. Sasaki, Calculating the mass fraction of primordial black holes, JCAP 07 (2014) 045, [1405.7023].
[28] A. Escrivà, C. Germani and R. K.
Sheth, Analytical thresholds for black hole formation in general cosmological backgrounds, JCAP 01 (2021) 030, [2007.05564].
[29] I. Musco, V. De Luca, G. Franciolini and A. Riotto, Threshold for primordial black holes. II. A simple analytic prescription, Phys. Rev. D 103 (2021) 063538, [2011.03014].
[30] LIGO Scientific, Virgo collaboration, B. P. Abbott et al., Observation of gravitational waves from a binary black hole merger, Phys. Rev. Lett. 116 (2016) 061102, [1602.03837].
[31] LIGO Scientific, Virgo collaboration, B. P. Abbott et al., Binary black hole mergers in the first advanced LIGO observing run, Phys. Rev. X 6 (2016) 041015, [1606.04856].
[32] LIGO Scientific, Virgo collaboration, B. P. Abbott et al., GW151226: Observation of gravitational waves from a 22-solar-mass binary black hole coalescence, Phys. Rev. Lett. 116 (2016) 241103, [1606.04855].
[33] LIGO Scientific, VIRGO collaboration, B. P. Abbott et al., GW170104: Observation of a 50-solar-mass binary black hole coalescence at redshift 0.2, Phys. Rev. Lett. 118 (2017) 221101, [1706.01812].
[34] LIGO Scientific, Virgo collaboration, B. P. Abbott et al., GW170814: A three-detector observation of gravitational waves from a binary black hole coalescence, Phys. Rev. Lett. 119 (2017) 141101, [1709.09660].
[35] S. Bird et al., Did LIGO detect dark matter?, Phys. Rev. Lett. 116 (2016) 201301, [1603.00464].
[36] D. Inman and Y. Ali-Haïmoud, Early structure formation in primordial black hole cosmologies, Phys. Rev. D 100 (2019) 083528, [1907.08129].
[37] M. S. Delos, A. Rantala, S. Young and F. Schmidt, Structure formation with primordial black holes: collisional dynamics, binaries, and gravitational waves, JCAP 12 (2024) 005, [2410.01876].
[38] A. Bogdan et al., Evidence for heavy-seed origin of early supermassive black holes from a z ≈ 10 X-ray quasar, Nature Astron. 8 (2024) 126–133, [2305.15458].
[39] M. Sasaki, T. Suyama, T. Tanaka and S. Yokoyama, Primordial black holes—perspectives in gravitational wave astronomy, Class. Quant. Grav. 35 (2018) 063001, [1801.05235].
[40] B. J. Carr, Pregalactic black hole accretion and the thermal history of the Universe, Mon. Not. Roy. Astron. Soc. 194 (1981) 639–668.
[41] H. Tashiro and N. Sugiyama, The effect of primordial black holes on 21 cm fluctuations, Mon. Not. Roy. Astron. Soc. 435 (2013) 3001, [1207.6405].
[42] A. Hektor, G. Hütsi, L. Marzola, M. Raidal, V. Vaskonen and H. Veermäe, Constraining primordial black holes with the EDGES 21-cm absorption signal, Phys. Rev. D 98 (2018) 023503, [1803.09697].
[43] J. Luis Bernal, N. Bellomo, A. Raccanelli and L. Verde, Cosmological implications of primordial black holes, JCAP 10 (2017) 052, [1709.07465].
[44] O. Mena, S. Palomares-Ruiz, P. Villanueva-Domingo and S. J. Witte, Constraining the primordial black hole abundance with 21-cm cosmology, Phys. Rev. D 100 (2019) 043540, [1906.07735].
[45] Y. Yang, Constraints on accreting primordial black holes with the global 21-cm signal, Phys. Rev. D 104 (2021) 063528, [2108.11130].
[46] Y. Sun, J. W. Foster, H. Liu, J. B. Muñoz and T. R. Slatyer, Inhomogeneous energy injection in the 21-cm power spectrum: Sensitivity to dark matter decay, Phys. Rev. D 111 (2025) 043015, [2312.11608].
[47] Y. Sun, J. W. Foster and J. B. Muñoz, Constraining inhomogeneous energy injection from annihilating dark matter and primordial black holes with 21-cm cosmology, 2509.22772.
[48] Y. A. Shchekinov and E. O. Vasiliev, Particle decay in the early Universe: predictions for 21 cm, Mon. Not. Roy. Astron. Soc.
379 (2007) 1003–1010, [astro-ph/0604231].
[49] S. R. Furlanetto, S. P. Oh and E. Pierpaoli, The effects of dark matter decay and annihilation on the high-redshift 21 cm background, Phys. Rev. D 74 (2006) 103502, [astro-ph/0608385].
[50] M. Valdes, A. Ferrara, M. Mapelli and E. Ripamonti, Constraining DM through 21 cm observations, Mon. Not. Roy. Astron. Soc. 377 (2007) 245–252, [astro-ph/0701301].
[51] V. Poulin, J. Lesgourgues and P. D. Serpico, Cosmological constraints on exotic injection of electromagnetic energy, JCAP 03 (2017) 043, [1610.10051].
[52] S. Clark, B. Dutta, Y. Gao, Y.-Z. Ma and L. E. Strigari, 21 cm limits on decaying dark matter and primordial black holes, Phys. Rev. D 98 (2018) 043006, [1803.09390].
[53] H. Liu and T. R. Slatyer, Implications of a 21-cm signal for dark matter annihilation and decay, Phys. Rev. D 98 (2018) 023501, [1803.09739].
[54] A. Mitridate and A. Podo, Bounds on dark matter decay from 21 cm line, JCAP 05 (2018) 069, [1803.11169].
[55] W. Qin, J. B. Munoz, H. Liu and T. R. Slatyer, Birth of the first stars amidst decaying and annihilating dark matter, Phys. Rev. D 109 (2024) 103026, [2308.12992].
[56] B. Novosyadlyj, Y. Kulinich and D. Koval, Global signal in the redshifted hydrogen 21-cm line from the dark ages and cosmic dawn: Dependence on the nature of dark matter and modeling of first light, Phys. Rev. D 111 (2025) 083514, [2410.07380].
[57] M.-L. Zhao, S. Wang and X. Zhang, Prospects for probing dark matter particles and primordial black holes with the Hongmeng mission using the 21 cm global spectrum at cosmic dawn, JCAP 07 (2025) 039, [2412.19257].
[58] M.-L. Zhao, Y. Shao, S. Wang and X. Zhang, Prospects for probing dark matter particles and primordial black holes with the Square Kilometre Array using the 21 cm power spectrum at cosmic dawn, 2507.02651.
[59] A. Natarajan and D. J. Schwarz, Dark matter annihilation and its effect on CMB and Hydrogen 21 cm observations, Phys. Rev. D 80 (2009) 043529, [0903.4485].
[60] C. Evoli, A. Mesinger and A. Ferrara, Unveiling the nature of dark matter with high redshift 21 cm line experiments, JCAP 11 (2014) 024, [1408.1109].
[61] G. D'Amico, P. Panci and A. Strumia, Bounds on dark matter annihilations from 21 cm data, Phys. Rev. Lett. 121 (2018) 011103, [1803.03629].
[62] G. Facchinetti, L. Lopez-Honorez, Y. Qin and A. Mesinger, 21cm signal sensitivity to dark matter decay, JCAP 01 (2024) 005, [2308.16656].
[63] H. Bae, A. L. Erickcek, M. S. Delos and J. B. Muñoz, 21-cm constraints on dark matter annihilation after an early matter-dominated era, Phys. Rev. D 112 (2025) 083013, [2502.08719].
[64] P. K. Natwariya, K. Kadota and A. J. Nishizawa, Sensitivity toward dark matter annihilation imprints on 21-cm signal with SKA-Low: a convolutional neural network approach, 2508.08251.
[65] Y. Yang, Constraints on primordial black holes and curvature perturbations from the global 21-cm signal, Phys. Rev. D 102 (2020) 083538, [2009.11547].
[66] A. Halder and M. Pandey, Probing the effects of primordial black holes on 21-cm EDGES signal along with interacting dark energy and dark matter–baryon scattering, Mon. Not. Roy. Astron. Soc. 508 (2021) 3446–3454, [2101.05228].
[67] A. Halder and S. Banerjee, Bounds on abundance of primordial black hole and dark matter from EDGES 21-cm signal, Phys. Rev. D 103 (2021) 063044, [2102.00959].
[68] S. Mittal, A. Ray, G. Kulkarni and B.
Dasgupta, Constraining primordial black holes as dark matter using the global 21-cm signal with X-ray heating and excess radio background, JCAP 03 (2022) 030, [2107.02190].
[69] P. K. Natwariya, A. C. Nayak and T. Srivastava, Constraining spinning primordial black holes with global 21-cm signal, Mon. Not. Roy. Astron. Soc. 510 (2021) 4236, [2107.12358].
[70] J. Cang, Y. Gao and Y.-Z. Ma, 21-cm constraints on spinning primordial black holes, JCAP 03 (2022) 012, [2108.13256].
[71] A. K. Saha and R. Laha, Sensitivities on nonspinning and spinning primordial black hole dark matter with global 21-cm troughs, Phys. Rev. D 105 (2022) 103026, [2112.10794].
[72] U. Mukhopadhyay, D. Majumdar and A. Halder, Constraining PBH mass distributions from 21cm brightness temperature results and an analytical mapping between probability distribution of 21cm signal and PBH masses, JCAP 10 (2022) 099, [2203.13008].
[73] Y. Yang, Impact of radiation from primordial black holes on the 21-cm angular-power spectrum in the dark ages, Phys. Rev. D 106 (2022) 123508, [2209.00851].
[74] D. Agius and T. Slatyer. In preparation, 2025.
[75] M. Ricotti, J. P. Ostriker and K. J. Mack, Effect of primordial black holes on the cosmic microwave background and cosmological parameter estimates, Astrophys. J. 680 (2008) 829, [0709.0524].
[76] Y. Ali-Haïmoud and M. Kamionkowski, Cosmic microwave background limits on accreting primordial black holes, Phys. Rev. D 95 (2017) 043534, [1612.05644].
[77] B. Horowitz, Revisiting primordial black holes constraints from ionization history, 1612.07264.
[78] V. Poulin, P. D. Serpico, F. Calore, S. Clesse and K. Kohri, CMB bounds on disk-accreting massive primordial black holes, Phys. Rev. D 96 (2017) 083524, [1707.04206].
[79] P. D. Serpico, V. Poulin, D. Inman and K. Kohri, Cosmic microwave background bounds on primordial black holes including dark matter halo accretion, Phys. Rev. Res. 2 (2020) 023204, [2002.10771].
[80] L. Piga, M. Lucca, N. Bellomo, V. Bosch-Ramon, S. Matarrese, A. Raccanelli et al., The effect of outflows on CMB bounds from Primordial Black Hole accretion, JCAP 12 (2022) 016, [2210.14934].
[81] G. Facchinetti, M. Lucca and S. Clesse, Relaxing CMB bounds on primordial black holes: The role of ionization fronts, Phys. Rev. D 107 (2023) 043537, [2212.07969].
[82] D. Agius, R. Essig, D. Gaggero, F. Scarcella, G. Suczewski and M. Valli, Feedback in the dark: a critical examination of CMB bounds on primordial black holes, JCAP 07 (2024) 003, [2403.18895].
[83] K. J. Mack, J. P. Ostriker and M. Ricotti, Growth of structure seeded by primordial black holes, Astrophys. J. 665 (2007) 1277–1287, [astro-ph/0608642].
[84] V. De Luca and N. Bellomo, The accretion, emission, mass and spin evolution of primordial black holes. Springer Series in Astrophysics and Cosmology. Springer, 2025. 2312.14097. 10.1007/978-981-97-8887-3.
[85] F. Hoyle and R. A. Lyttleton, The effect of interstellar matter on climatic variation, Math. Proc. Camb. Philos. Soc. 35 (1939) 405–415.
[86] F. Hoyle and R. A. Lyttleton, On the accretion of interstellar matter by stars, Math. Proc. Camb. Philos. Soc. 36 (1940) 325–330.
[87] F. Hoyle and R. A. Lyttleton, On the physical aspects of accretion by stars, Math. Proc. Camb. Philos. Soc. 36 (1940) 424–437.
[88] F. Hoyle and R. A. Lyttleton, On the accretion theory of stellar evolution, Mon. Not. Roy. Astron. Soc. 101 (1941) 227.
[89] H. Bondi and F. Hoyle, On the mechanism of accretion by stars, Mon. Not. Roy. Astron. Soc. 104 (1944) 273–282.
[90] H.
Bondi, On spherically symmetrical accretion, Mon. Not. Roy. Astron. Soc. 112 (1952) 195.
[91] K. Park and M. Ricotti, Accretion onto intermediate mass black holes regulated by radiative feedback I. Parametric study for spherically symmetric accretion, Astrophys. J. 739 (2011) 2, [1006.1302].
[92] K. Park and M. Ricotti, Accretion onto black holes from large scales regulated by radiative feedback. II. Growth rate and duty cycle, Astrophys. J. 747 (2012) 9, [1110.4634].
[93] K. Park and M. Ricotti, Accretion onto black holes from large scales regulated by radiative feedback. III. Enhanced luminosity of intermediate mass black holes moving at supersonic speeds, Astrophys. J. 767 (2013) 163, [1211.0542].
[94] S. A. Wouthuysen, On the excitation mechanism of the 21-cm (radio-frequency) interstellar hydrogen emission line, Astrophys. J. 57 (1952) 31–32.
[95] G. B. Field, Excitation of the hydrogen 21-cm line, Proceedings of the IRE 46 (1958) 240–250.
[96] X.-L. Chen and J. Miralda-Escude, The spin - kinetic temperature coupling and the heating rate due to Lyman - alpha scattering before reionization: Predictions for 21cm emission and absorption, Astrophys. J. 602 (2004) 1–11, [astro-ph/0303395].
[97] C. M. Hirata, Wouthuysen-Field coupling strength and application to high-redshift 21 cm radiation, Mon. Not. Roy. Astron. Soc. 367 (2006) 259–274, [astro-ph/0507102].
[98] P. Madau, A. Meiksin and M. J. Rees, 21-cm tomography of the intergalactic medium at high redshift, Astrophys. J. 475 (1997) 429, [astro-ph/9608010].
[99] R. Barkana, The rise of the first stars: Supersonic streaming, radiative feedback, and 21-cm cosmology, Phys. Rept. 645 (2016) 1–59, [1605.04357].
[100] J. B. Muñoz, An effective model for the cosmic-dawn 21-cm signal, Mon. Not. Roy. Astron. Soc. 523 (2023) 2587–2607, [2302.08506].
[101] X.-L. Chen and J. Miralda-Escude, The 21cm signature of the first stars, Astrophys. J. 684 (2008) 18–33, [astro-ph/0605439].
[102] L. Lopez-Honorez, O. Mena, S. Palomares-Ruiz, P. Villanueva-Domingo and S. J. Witte, Variations in fundamental constants at the cosmic dawn, JCAP 06 (2020) 026, [2004.00013].
[103] P. Stöcker, M. Krämer, J. Lesgourgues and V. Poulin, Exotic energy injection with ExoCLASS: Application to the Higgs portal model and evaporating black holes, JCAP 03 (2018) 018, [1801.01871].
[104] V. Poulin, P. D. Serpico and J. Lesgourgues, Dark matter annihilations in halos and high-redshift sources of reionization of the universe, JCAP 12 (2015) 041, [1508.01370].
[105] M. Ricotti, Bondi accretion in the early universe, Astrophys. J. 662 (2007) 53–61, [0706.0864].
[106] R. Perna, R. Narayan, G. Rybicki, L. Stella and A. Treves, Bondi accretion and the problem of the missing isolated neutron stars, Astrophys. J. 594 (2003) 936–942, [astro-ph/0305421].
[107] R. Fender, T. Maccarone and I. Heywood, The closest black holes, Mon. Not. Roy. Astron. Soc. 430 (2013) 1538, [1301.1341].
[108] S. Pellegrini, Nuclear accretion in galaxies of the local Universe: Clues from Chandra observations, Astrophys. J. 624 (2005) 155–161, [astro-ph/0502035].
[109] Q. D. Wang et al., Dissecting X-ray-emitting gas around the center of our galaxy, Science 341 (2013) 981, [1307.5845].
[110] P. Jangra, D. Gaggero, B. J. Kavanagh and J. M. Diego, The cosmic history of primordial black hole accretion and its uncertainties, JCAP 08 (2025) 006, [2412.11921].
[111] D. Tseliakhovich and C. Hirata, Relative velocity of dark matter and baryonic fluids and the formation of the first structures, Phys. Rev.
D 82 (2010) 083520, [1005.2416].
[112] C. Dvorkin, K. Blum and M. Kamionkowski, Constraining dark matter-baryon scattering with linear cosmology, Phys. Rev. D 89 (2014) 023519, [1311.2937].
[113] N. Afshordi, P. McDonald and D. N. Spergel, Primordial black holes as dark matter: the power spectrum and evaporation of early structures, Astrophys. J. Lett. 594 (2003) L71–L74, [astro-ph/0302035].
[114] P. S. Cole and J. Silk, Small-scale primordial fluctuations in the 21 cm Dark Ages signal, Mon. Not. Roy. Astron. Soc. 501 (2021) 2627–2634, [1912.02171].
[115] J. M. Koulen, S. Profumo and N. Smyth, Primordial black holes and the first stars, Phys. Rev. D 112 (2025) 043044, [2506.06171].
[116] H. A. G. Cruz, J. B. Munoz, N. Sabti and M. Kamionkowski, Effective model for the 21-cm signal with population III stars, Phys. Rev. D 111 (2025) 083503, [2407.18294].
[117] H. Trac and R. Cen, Radiative transfer simulations of cosmic reionization. 1. Methodology and initial results, Astrophys. J. 671 (2007) 1, [astro-ph/0612406].
[118] N. Y. Gnedin, Cosmic reionization on computers I. Design and calibration of simulations, Astrophys. J. 793 (2014) 29, [1403.4245].
[119] A. Mesinger, S. Furlanetto and R. Cen, 21cmFAST: a fast, semi-numerical simulation of the high-redshift 21-cm signal, Mon. Not. Roy. Astron. Soc. 411 (2011) 955, [1003.3878].
[120] S. G. Murray, B. Greig, A. Mesinger, J. B. Muñoz, Y. Qin, J. Park et al., 21cmFAST v3: a Python-integrated C code for generating 3D realizations of the cosmic 21cm signal, J. Open Source Softw. 5 (2020) 2582, [2010.15121].
[121] O. Zahn, A. Mesinger, M. McQuinn, H. Trac, R. Cen and L. E. Hernquist, Comparison of reionization models: Radiative transfer simulations and approximate, semi-numeric models, Mon. Not. Roy. Astron. Soc. 414 (2011) 727, [1003.3455].
[122] R. Barkana and A. Loeb, Detecting the earliest galaxies through two new sources of 21cm fluctuations, Astrophys. J. 626 (2005) 1–11, [astro-ph/0410129].
[123] J. B. Muñoz, Y. Qin, A. Mesinger, S. G. Murray, B. Greig and C. Mason, The impact of the first galaxies on cosmic dawn and reionization, Mon. Not. Roy. Astron. Soc. 511 (2022) 3657–3681, [2110.13919].
[124] R. K. Sheth, H. J. Mo and G. Tormen, Ellipsoidal collapse and an improved model for the number and spatial distribution of dark matter haloes, Mon. Not. Roy. Astron. Soc. 323 (2001) 1, [astro-ph/9907024].
[125] R. Barkana and A. Loeb, In the beginning: the first sources of light and the reionization of the Universe, Phys. Rept. 349 (2001) 125–238, [astro-ph/0010468].
[126] P. Madau, H. C. Ferguson, M. E. Dickinson, M. Giavalisco, C. C. Steidel and A. Fruchter, High redshift galaxies in the Hubble deep field. Color selection and star formation history to z=4, Mon. Not. Roy. Astron. Soc. 283 (1996) 1388–1404, [astro-ph/9607172].
[127] D. Blas, J. Lesgourgues and T. Tram, The Cosmic Linear Anisotropy Solving System (CLASS) II: approximation schemes, JCAP 07 (2011) 034, [1104.2933].
[128] B. Greig and A. Mesinger, Simultaneously constraining the astrophysics of reionization and the epoch of heating with 21CMMC, Mon. Not. Roy. Astron. Soc. 472 (2017) 2651–2669, [1705.03471].
[129] Y. Ali-Haïmoud and C. M. Hirata, HyRec: A fast and highly accurate primordial hydrogen and helium recombination code, Phys. Rev. D 83 (2011).
[130] S. L. Shapiro and A. P. Lightman, Black holes in X-ray binaries: marginal existence and rotation reversals of accretion disks, Astrophys. J. 204 (1976) 555–560.
[131] F.-G. Xie and F.
Yuan, The radiative efficiency of hot accretion flows, Mon. Not. Roy. Astron. Soc. 427 (2012) 1580, [1207.3113].
[132] J. M. Shull and M. E. van Steenberg, X-ray secondary heating and ionization in quasar emission-line clouds, Astrophys. J. 298 (1985) 268–274.
[133] X.-L. Chen and M. Kamionkowski, Particle decays during the cosmic dark ages, Phys. Rev. D 70 (2004) 043502, [astro-ph/0310473].
[134] T. R. Slatyer, N. Padmanabhan and D. P. Finkbeiner, CMB constraints on WIMP annihilation: energy absorption during the recombination epoch, Phys. Rev. D 80 (2009) 043526, [0906.1197].
[135] T. R. Slatyer, Energy injection and absorption in the cosmic dark ages, Phys. Rev. D 87 (2013) 123513, [1211.0283].
[136] T. R. Slatyer, Indirect dark matter signatures in the cosmic dark ages II. Ionization, heating and photon production from arbitrary energy injections, Phys. Rev. D 93 (2016) 023521, [1506.03812].
[137] H. Liu, G. W. Ridgway and T. R. Slatyer, DarkHistory: a code package for calculating modified cosmic ionization and thermal histories with dark matter and other exotic energy injections, Phys. Rev. D 101 (2020) 023530, [1904.09296].
[138] A. Fialkov, A. Cohen, R. Barkana and J. Silk, Constraining the redshifted 21-cm signal with the unresolved soft X-ray background, Mon. Not. Roy. Astron. Soc. 464 (2017) 3498–3508, [1602.07322].
[139] A. Cohen, A. Fialkov, R. Barkana and M. Lotem, Charting the parameter space of the global 21-cm signal, Mon. Not. Roy. Astron. Soc. 472 (2017) 1915–1931, [1609.02312].
[140] J. Park, A. Mesinger, B. Greig and N. Gillet, Inferring the astrophysics of reionization and cosmic dawn from galaxy luminosity functions and the 21-cm signal, Mon. Not. Roy. Astron. Soc. 484 (2019) 933–949, [1809.08995].
[141] S. Pochinda et al., Constraining the properties of Population III galaxies with multiwavelength observations, Mon. Not. Roy. Astron. Soc. 531 (2024) 1113–1132, [2312.08095].
[142] P. H. Sims et al., Rapid and late cosmic reionization driven by massive galaxies: a joint analysis of constraints from 21-cm, Lyman line & CMB data sets, 2504.09725.
[143] J. Dhandha et al., Narrowing the discovery space of the cosmological 21-cm signal using multi-wavelength constraints, 2508.13761.
[144] O. Z. Katz, D. Redigolo and T. Volansky, Closing in on Pop-III Stars: Constraints and Predictions Across the Spectrum, 2502.03525.
[145] R. J. Bouwens et al., UV luminosity functions at redshifts z ∼ 4 to z ∼ 10: 10000 galaxies from HST legacy fields, Astrophys. J. 803 (2015) 34, [1403.4295].
[146] R. J. Bouwens et al., New determinations of the UV luminosity functions from z ∼ 9 to z ∼ 2 show a remarkable consistency with halo growth and a constant star formation efficiency, Astron. J. 162 (2021) 47, [2102.07775].
[147] N. Cappelluti et al., The nature of the unresolved extragalactic soft CXB, Mon. Not. Roy. Astron. Soc. 427 (2012) 651, [1208.4105].
[148] B. D. Lehmer et al., The 4 Ms Chandra deep field-south number counts apportioned by source class: pervasive active galactic nuclei and the ascent of normal galaxies, Astrophys. J. 752 (2012) 46, [1204.1977].
[149] HERA collaboration, Z. Abdurashidova et al., HERA phase I limits on the cosmic 21 cm signal: constraints on astrophysics and cosmology during the epoch of reionization, Astrophys. J. 924 (2022) 51, [2108.07282].
[150] HERA collaboration, Z. Abdurashidova et al., Improved constraints on the 21 cm EoR power spectrum and the X-ray heating of the IGM with HERA phase I observations, Astrophys. J.
[151] R. S. Klessen and S. C. O. Glover, The first stars: formation, properties, and impact, Ann. Rev. Astron. Astrophys. 61 (2023) 65–130, [2303.12500].
[152] C. Leitherer, D. Schaerer, J. D. Goldader, R. M. Gonzalez Delgado, C. Robert, D. F. Kune et al., Starburst99: synthesis models for galaxies with active star formation, Astrophys. J. Suppl. 123 (1999) 3–40, [astro-ph/9902334].
[153] C. Leitherer et al., A library of theoretical ultraviolet spectra of massive, hot stars for evolutionary synthesis, Astrophys. J. Suppl. 189 (2010) 309–335, [1006.5624].
[154] C. Leitherer, S. Ekström, G. Meynet, D. Schaerer, K. B. Agienko and E. M. Levesque, The effects of stellar rotation. II. A comprehensive set of Starburst99 models, Astrophys. J. Suppl. 212 (2014) 14, [1403.5444].
[155] B. Greig and A. Mesinger, 21CMMC: an MCMC analysis tool enabling astrophysical parameter studies of the cosmic 21 cm signal, Mon. Not. Roy. Astron. Soc. 449 (2015) 4246–4263, [1501.06576].
[156] B. Greig and A. Mesinger, 21CMMC with a 3D light-cone: the impact of the co-evolution approximation on the astrophysics of reionization and cosmic dawn, Mon. Not. Roy. Astron. Soc. 477 (2018) 3217–3229, [1801.01592].
[157] S. Witte, P. Villanueva-Domingo, S. Gariazzo, O. Mena and S. Palomares-Ruiz, EDGES result versus CMB and low-redshift constraints on ionization histories, Phys. Rev. D 97 (2018) 103533, [1804.03888].
[158] B. D. Lehmer et al., The evolution of normal galaxy X-ray emission through cosmic history: constraints from the 6 Ms Chandra deep field-south, Astrophys. J. 825 (2016) 7, [1604.06461].
[159] T. Fragos et al., X-ray binary evolution across cosmic time, Astrophys. J. 764 (2013) 41, [1206.2395].
[160] S. P. Oh and Z. Haiman, Second-generation objects in the universe: radiative cooling and collapse of halos with virial temperatures above 10^4 kelvin, Astrophys. J. 569 (2002) 558, [astro-ph/0108071].
[161] M. Tegmark, J. Silk, M. J. Rees, A. Blanchard, T. Abel and F. Palla, How small were the first cosmological objects?, Astrophys. J. 474 (1997) 1–12, [astro-ph/9603007].
[162] A. Mesinger, A. Ferrara and D. S. Spiegel, Signatures of X-rays in the early Universe, Mon. Not. Roy. Astron. Soc. 431 (2013) 621, [1210.7319].
[163] Y. Qin, A. Mesinger, J. Park, B. Greig and J. B. Muñoz, A tale of two sites – I. Inferring the properties of minihalo-hosted galaxies from current observations, Mon. Not. Roy. Astron. Soc. 495 (2020) 123–140, [2003.04442].
[164] I. McGreer, A. Mesinger and V. D'Odorico, Model-independent evidence in favour of an end to reionization by z ≈ 6, Mon. Not. Roy. Astron. Soc. 447 (2015) 499–505, [1411.5375].
[165] B. Greig and A. Mesinger, The global history of reionization, Mon. Not. Roy. Astron. Soc. 465 (2017) 4838–4852, [1605.05374].
[166] Y. Qin, A. Mesinger, S. E. I. Bosman and M. Viel, Reionization and galaxy inference from the high-redshift Ly α forest, Mon. Not. Roy. Astron. Soc. 506 (2021) 2390–2407, [2101.09033].
[167] J. P. Gardner et al., The James Webb Space Telescope mission, Publ. Astron. Soc. Pac. 135 (2023) 068001.
[168] S. L. Finkelstein et al., CEERS Key Paper. I. An Early Look into the First 500 Myr of Galaxy Formation with JWST, Astrophys. J. Lett. 946 (2023) L13, [2211.05792].
[169] R. Cassano et al., SKA-Athena Synergy White Paper, 1807.09080.
[170] A. Mesinger, B. Greig and E. Sobacchi, The evolution of 21 cm structure (EOS): public, large-scale simulations of Cosmic Dawn and Reionization, Mon. Not. Roy. Astron. Soc. 459 (2016) 2342–2353, [1602.07711].
[171] C. A. Mason, J. B. Muñoz, B. Greig, A. Mesinger and J. Park, 21cmfish: Fisher-matrix framework for fast parameter forecasts from the cosmic 21-cm signal, Mon. Not. Roy. Astron. Soc. 524 (2023) 4711–4728, [2212.09797].
[172] J. C. Pober, A. R. Parsons, D. R. DeBoer, P. McDonald, M. McQuinn, J. E. Aguirre et al., The baryon acoustic oscillation broadband and broad-beam array: design overview and sensitivity forecasts, Astron. J. 145 (2013) 65, [1210.2413].
[173] S. G. Murray, J. Pober and M. Kolopanis, 21cmSense v2: a modular, open-source 21cm sensitivity calculator, J. Open Source Softw. 9 (2024) 6501, [2406.02415].
[174] J. C. Pober et al., What next-generation 21 cm power spectrum measurements can teach us about the epoch of reionization, Astrophys. J. 782 (2014) 66, [1310.7031].
[175] T. J. Mozdzen, J. D. Bowman, R. A. Monsalve and A. E. E. Rogers, Improved measurement of the spectral index of the diffuse radio background between 90 and 190 MHz, Mon. Not. Roy. Astron. Soc. 464 (2017) 4995–5002, [1609.08705].
[176] L. V. E. Koopmans et al., The Cosmic Dawn and Epoch of Reionization with the Square Kilometre Array, PoS AASKA14 (2015) 001, [1505.07568].
[177] S. Seethapuram Sridhar, W. Williams, D. Price, S. Breen and L. Ball, "SKA Low and Mid subarray templates," SKAO, 2025. 10.5281/zenodo.16951088.
[178] B. J. Kavanagh, D. Gaggero and G. Bertone, Merger rate of a subdominant population of primordial black holes, Phys. Rev. D 98 (2018) 023536, [1805.09034].
[179] Z.-C. Chen and Q.-G. Huang, Distinguishing primordial black holes from astrophysical black holes by Einstein Telescope and Cosmic Explorer, JCAP 08 (2020) 039, [1904.02396].
[180] J. Manshanden, D. Gaggero, G. Bertone, R. M. T. Connors and M. Ricotti, Multi-wavelength astronomical searches for primordial black holes, JCAP 06 (2019) 026, [1812.07967].
[181] M. Oguri, J. M. Diego, N. Kaiser, P. L. Kelly and T. Broadhurst, Understanding caustic crossings in giant arcs: characteristic scales, event rates, and constraints on compact dark matter, Phys. Rev. D 97 (2018) 023518, [1710.00148].
[182] T. Blaineau et al., New limits from microlensing on Galactic black holes in the mass range 10 M⊙ < M < 1000 M⊙, Astron. Astrophys. 664 (2022) A106, [2202.13819].
[183] A. Esteban-Gutiérrez, E. Mediavilla, J. Jiménez-Vicente and J. A. Muñoz, Constraints on the abundance of primordial black holes from X-ray quasar microlensing observations: substellar to planetary mass range, Astrophys. J. 954 (2023) 172, [2307.07473].
[184] M. A. Monroy-Rodríguez and C. Allen, The end of the MACHO era revisited: new limits on MACHO masses from halo wide binaries, Astrophys. J. 790 (2014) 159, [1406.5169].
[185] T. D. Brandt, Constraints on MACHO dark matter from compact stellar systems in ultra-faint dwarf galaxies, Astrophys. J. Lett. 824 (2016) L31, [1605.03665].
[186] P. Lu, V. Takhistov, G. B. Gelmini, K. Hayashi, Y. Inoue and A. Kusenko, Constraining primordial black holes with dwarf galaxy heating, Astrophys. J. Lett. 908 (2021) L23, [2007.02213].
[187] B. J. Kavanagh, bradkav/pbhbounds: Release version, 2019. 10.5281/ZENODO.3538999.
[188] C. R. Harris et al., Array programming with NumPy, Nature 585 (2020) 357–362, [2006.10256].
[189] P. Virtanen et al., SciPy 1.0 – Fundamental algorithms for scientific computing in Python, Nature Meth. 17 (2020) 261, [1907.10121].
[190] A. Lewis, GetDist: a Python package for analysing Monte Carlo samples, JCAP 08 (2025) 025, [1910.13970].
[191] J. Flitter and E. D. Kovetz, New tool for 21-cm cosmology. II. Investigating the effect of early linear fluctuations, Phys. Rev. D 109 (2024) 043513, [2309.03948].
[192] J. Flitter, S. Libanore and E. D. Kovetz, More careful treatment of matter density fluctuations in 21-cm simulations, Phys. Rev. D 112 (2025) 023537, [2411.00089].
[193] J. B. Muñoz, Y. Ali-Haïmoud and M. Kamionkowski, Primordial non-gaussianity from the bispectrum of 21-cm fluctuations in the dark ages, Phys. Rev. D 92 (2015).
[194] Planck collaboration, N. Aghanim et al., Planck 2018 results. VI. Cosmological parameters, Astron. Astrophys. 641 (2020) A6, [1807.06209].
Prepared for submission to JCAP

Astrophysical uncertainties challenge 21-cm forecasts: A primordial black hole case study

Dominic Agius,^a Rouven Essig,^b Daniele Gaggero,^c Sergio Palomares-Ruiz,^a Gregory Suczewski,^b Mauro Valli^d

^a Instituto de Física Corpuscular (IFIC), CSIC-Universitat de València, E-46071, València, Spain
^b C.N. Yang Institute for Theoretical Physics, Stony Brook University, NY 11794, USA
^c INFN Sezione di Pisa, Polo Fibonacci, Largo B. Pontecorvo 3, 56127 Pisa, Italy
^d INFN Sezione di Roma, Piazzale Aldo Moro 2, I-00185 Rome, Italy

Abstract. The 21-cm signal is a powerful probe of the early Universe's thermal history and could provide a unique avenue for constraining exotic physics. Previous studies have forecasted stringent constraints on energy injections from exotic sources that heat, excite, and ionize the background gas and thereby modify the 21-cm signal. In this work, we quantify the substantial impact that astrophysical uncertainties have on the projected sensitivity to exotic energy injection. In particular, there are significant uncertainties in the minimum star-forming dark matter halo mass, the Lyman-α emission, and the X-ray emission, whose values characterize the fiducial astrophysical model when projecting bounds. As a case study, we investigate the energy injection of accreting primordial black holes of mass ∼1–10^3 M⊙, also taking into account uncertainties in the accretion model. We show that, depending on the chosen fiducial model and accretion uncertainties, the sensitivity of future 21-cm data could constrain the abundance of primordial black holes to be either slightly stronger, or significantly weaker, than current limits from the Cosmic Microwave Background.

Contents

1 Introduction
2 The 21-cm signal and exotic energy injection
  2.1 Overview of 21-cm basics
  2.2 Shape of the signal from the dark ages to the cosmic dawn: the crucial role of astrophysical parameters
  2.3 The case study of Primordial Black Holes: impact on the signal
    2.3.1 The Bondi-Hoyle-Lyttleton model
    2.3.2 The Park-Ricotti model
    2.3.3 Environmental properties
3 Methodology
  3.1 Zeus21 calculation of the 21-cm signal
  3.2 Energy injection and energy deposition from accreting matter
    3.2.1 Energy injection
    3.2.2 Energy deposition
  3.3 Mock data generation
  3.4 Statistics
    3.4.1 Sensitivities
  3.5 Marginalization
4 Results
  4.1 The importance of the fiducial model
  4.2 Uncertainty in accretion model
  4.3 21-cm forecasts and discussion
5 Conclusions and Outlook
A Appendix
  A.1 Comparison to previous work
  A.2 Fixed astrophysical parameters

1 Introduction

Recent cosmological measurements have heralded a new era in physics, allowing both for precise measurements of the parameters of the ΛCDM model and for stringent constraints on new physics scenarios [1]. Cosmology provides a particularly important avenue to test a wide range of dark matter (DM) candidates [2], with interesting complementarity to other detection strategies, such as direct and indirect detection or collider searches [3]. The upcoming era of 21-cm cosmology promises to revolutionize our understanding of the early Universe, opening a new window into the dark ages (30 ≲ z ≲ 200), the cosmic dawn (10 ≲ z ≲ 30), and the epoch of reionization (6 ≲ z ≲ 10) [4–8].
By mapping the hyperfine transition of neutral hydrogen through cosmic time, experiments like the Hydrogen Epoch of Reionization Array (HERA) [9] and the Square Kilometre Array (SKA) [10] will probe the 21-cm power spectrum, providing unprecedented sensitivity to the thermal history of the Universe at redshifts z ≃ 6–30. Future lunar-based experiments such as LuSEE Night [11] and FARSIDE [12] may one day measure the 21-cm signal at even higher redshift (in the dark ages), where it could provide a clean cosmological probe free from astrophysical uncertainties [13, 14]. Despite this promise, and the sentiment that "the moon is our future" [15], our focus here is on the two ground-based experiments, which will deliver data in the coming years.

The new window that HERA and SKA will provide into the thermal and ionization properties of the intergalactic medium (IGM) makes the 21-cm signal a powerful upcoming tool for precision cosmology, and a promising avenue for constraining new physics beyond the Standard Model [16]. However, realizing this potential, especially in the context of exotic physics, faces a significant challenge: the dominant, yet uncertain, influence of the first astrophysical sources [17, 18]. The ultimate constraining power of the 21-cm signal depends fundamentally on the properties of the first stars, with some astrophysical scenarios inherently offering less sensitivity to exotic physics. Key properties of the first stars, such as their X-ray luminosity, Lyman-α emission, and the minimum halo mass required to form the first star-forming regions, are currently poorly constrained, and any exotic physics signature must be disentangled from this complex astrophysical signal. As such, a comprehensive assessment of these uncertainties is essential to determine the true constraining power of upcoming 21-cm experiments, and to assess whether they can probe a wider parameter space than existing observations of the Cosmic Microwave Background (CMB).

The goal of this paper is to quantify how these astrophysical systematics, associated with the fiducial astrophysical scenario, impact forecasts for exotic energy injection. As a concrete and well-motivated case study, we investigate the effects of an exotic energy source: a monochromatic population of accreting primordial black holes (PBHs). PBHs are hypothetical compact objects that may have formed in the early Universe. A common formation mechanism involves the collapse of large overdensities at small scales that exceed a critical threshold [19–29]. Interest in PBHs with masses above O(M⊙) has grown significantly following the gravitational wave detections by the LIGO/VIRGO/KAGRA experiments [30–34] and speculation that a signal may have been of primordial origin [35]. Even a sub-dominant PBH population in this mass range would have important implications, potentially causing early structure formation [36, 37] and contributing to the formation of supermassive black holes at high redshifts [38, 39]. In the mass range (1–10^3) M⊙, PBHs would accrete baryonic matter, injecting radiation into the IGM [40]. This process directly alters the IGM gas temperature and ionization fraction, leaving a distinct imprint on the 21-cm signal [41–47].
The same physical mechanism of heating and ionizing the IGM also determines 21-cm forecasts for decaying [46, 48–58] or annihilating DM [17, 47, 49, 50, 53, 55–57, 59–64], and evaporating PBHs [47, 52, 57, 58, 65–73], and our PBH case study can be generalized to these results.^1 The case study of PBHs is particularly relevant, since a previous study has shown that the 21-cm signal has better sensitivity than other probes to PBHs in the mass range (1–10^3) M⊙ [44].

^1 We note that depositing energy into Lyman-α photons without additional heating or ionization can also provide some constraining power [74].

In addition, we further investigate a second theoretical uncertainty associated with the accretion physics model. Constraints on accreting PBHs from CMB observations are already known to be strongly dependent on how the accretion is modeled [75–82]. Several effects have been explored in this context, including modifications to the accretion rate from DM minihalos around PBHs [79, 83], accretion outflows [80], black hole spin [84], and radiative feedback [81, 82]. In this work, we extend this investigation to the 21-cm signal by comparing the widely used Bondi-Hoyle-Lyttleton (BHL) accretion model [85–90] with the state-of-the-art Park-Ricotti (PR) model, which includes the important effect of radiative feedback [91–93]. This case study allows us to explore the interplay between uncertainties from a scenario with exotic energy injection, and the much larger uncertainties from standard astrophysics.

To summarize, the goal of this paper is twofold: a) to characterize how astrophysical uncertainties in the fiducial 21-cm model can alter the forecasted sensitivity to an exotic energy injection, and b) using PBHs as our case study, to provide updated 21-cm forecasts that account for systematics in the accretion model, in light of the astrophysical uncertainties. This will complement the study presented in ref. [82] and assess the impact of the PR model in the context of 21-cm cosmology.

This paper is structured as follows. In section 2, we provide an overview of 21-cm basics, highlighting the key ingredients that are altered by the presence of DM, with particular reference to accreting PBHs. In section 3, we describe our methodology for providing 21-cm sensitivity forecasts, with specific emphasis on the impact of astrophysical uncertainties entering the fiducial model. We go on to quantify our results in section 4, where we present forecasts for the upcoming SKA interferometer. Finally, we conclude and discuss the outlook in section 5. An appendix provides a comparison to previous work, and lists astrophysical parameters that are held fixed in our study.

2 The 21-cm signal and exotic energy injection

In this section, we introduce the basics of 21-cm phenomenology, and describe the standard astrophysics scenario governing the signal and its uncertainties. We then discuss the impact of PBH accretion on the 21-cm observables: the brightness temperature and the power spectrum.

2.1 Overview of 21-cm basics

The 21-cm signal is often parametrized by the so-called "spin temperature" (TS), defined by the fraction of neutral hydrogen in the triplet state compared to the singlet state,

$$ \frac{n_1}{n_0} = 3\, e^{-T_0/T_S} \,, \qquad (2.1) $$

where n1 and n0 are the number densities of neutral hydrogen in the triplet (excited) and singlet (ground) states, respectively. The factor of 3 comes from the degeneracy of the triplet state compared to the singlet, and $k_B T_0 = h\nu_0$ is the energy of the 21-cm photons.
By definition, higher spin temperatures correspond to a higher proportion of excited states (with n1 ≈ 3 n0 for TS ≫ T0), while lower temperatures correspond to a smaller proportion of excited states (with n1 ≪ n0 for TS ≪ T0). The ratio of these densities (and therefore the spin temperature) depends on the interactions that can induce hyperfine transitions. There are three dominant ingredients that determine the spin temperature, which can be understood as the weighted average of the temperatures associated with each process:

$$ T_S^{-1} = \frac{T_{\rm CMB}^{-1} + x_\alpha T_c^{-1} + x_c T_k^{-1}}{1 + x_\alpha + x_c} \,. \qquad (2.2) $$

Each term corresponds to a key interaction. The first corresponds to the stimulated emission and absorption caused by a background radiation field (which we take to be the CMB), a process characterized by the CMB temperature, TCMB. The second term accounts for the coupling between the spin temperature and the color temperature, Tc, mediated by Lyman-α photons through the Wouthuysen-Field (WF) effect [94, 95]. The color temperature of the Lyman-α radiation field can be understood as the slope of the spectrum near the Lyman-α frequency, and when the radiation spectrum reaches a steady state, this color temperature is equal to the gas kinetic temperature field at the Lyman-α frequency [96]. The strength of this interaction is determined by the coupling coefficient xα, which is proportional to the flux of Lyman-α photons. Physically, this coupling arises because Lyman-α photons excite electrons from their original hyperfine ground state (with number densities n0 or n1). Upon relaxation, the electron can decay to either of the ground state levels, therefore altering the original population between the singlet and triplet states. The third interaction in eq. (2.2) is due to hydrogen-hydrogen collisions, whose strength is determined by the collisional coupling, xc. This final term becomes negligible after z ≃ 30, as the expansion of the Universe dilutes the density of hydrogen atoms.

Upcoming 21-cm experiments, such as the SKA telescope, are designed to measure the frequencies probing the cosmic dawn and the epoch of reionization (corresponding to z ≃ 6–30). At these times, the collisional coupling is negligible (xc ≪ 1). Lastly, for all scenarios of interest, the color temperature is approximately equal (within 5% [97]) to the kinetic temperature of the baryons, Tc ≃ Tk, so we use them interchangeably in this work.

In the Rayleigh-Jeans limit ($h\nu \ll k_B T_b(\nu)$), the specific intensity of a blackbody is proportional to the brightness temperature, Tb. The primary 21-cm observable is the differential brightness temperature, δTb, which measures the intensity of the 21-cm line relative to the CMB background (or in general, the background radiation), and depends on both the spin temperature and the optical depth of the 21-cm line, τ21,

$$ \delta T_b = \frac{T_S - T_{\rm CMB}}{1 + z} \left( 1 - e^{-\tau_{21}} \right) \,. \qquad (2.3) $$

When the spin temperature is greater (less) than the CMB temperature, the signal is in emission (absorption). There is no signal when the spin temperature is in equilibrium with the CMB. For the frequencies (redshifts) of interest, the optical depth is small (τ21 ≪ 1) and eq. (2.3) can be approximated as [4, 98, 99]

$$ \delta T_b \simeq 27\, x_{\rm HI} \left(1 + \delta_b\right) \left(1 - \frac{T_{\rm CMB}}{T_S}\right) \left(\frac{1}{1 + H^{-1}\, \partial v_r/\partial r}\right) \left(\frac{1 + z}{10}\right)^{1/2} {\rm mK} \,, \qquad (2.4) $$

where H is the Hubble constant, xHI is the fraction of neutral hydrogen, δb is the fractional baryon density perturbation, and ∂vr/∂r is the line-of-sight gradient of the neutral hydrogen velocity.
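To make the expressions above concrete, the following minimal Python sketch evaluates the spin temperature of eq. (2.2) and the optically thin brightness temperature of eq. (2.4); the function names and the numerical inputs of the example are our own illustrative choices, not taken from Zeus21 or any other code.

import numpy as np

def spin_temperature(T_cmb, T_c, T_k, x_alpha, x_c):
    # Weighted harmonic mean of eq. (2.2).
    inv_TS = (1.0 / T_cmb + x_alpha / T_c + x_c / T_k) / (1.0 + x_alpha + x_c)
    return 1.0 / inv_TS

def delta_Tb(z, T_S, x_HI=1.0, delta_b=0.0, dvr_dr_over_H=0.0):
    # Differential brightness temperature in mK, eq. (2.4), in the optically thin limit.
    T_cmb = 2.725 * (1.0 + z)   # CMB temperature scaling with redshift
    return (27.0 * x_HI * (1.0 + delta_b)
            * (1.0 - T_cmb / T_S)
            / (1.0 + dvr_dr_over_H)
            * np.sqrt((1.0 + z) / 10.0))

# Example: cold, strongly Lyman-alpha coupled gas at z = 17 sits in absorption.
z = 17.0
T_S = spin_temperature(T_cmb=2.725 * (1 + z), T_c=10.0, T_k=10.0, x_alpha=10.0, x_c=0.0)
print(f"T_S = {T_S:.1f} K, delta_Tb = {delta_Tb(z, T_S):.1f} mK")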
Eq. (2.4) captures the signal at a specific spatial point, and its sky average, $\overline{\delta T_b}$, is what is referred to as the global 21-cm signal. Since the properties of the IGM are not completely homogeneous, any fluctuations of the 21-cm signal in space and time can be characterized using the statistical properties of the signal. Particularly useful is the 21-cm power spectrum, P21(k, z), which quantifies the variance of the brightness temperature as a function of spatial scale (or wavenumber k), at a given redshift. It is defined via the two-point correlation of the signal in Fourier space,

$$ \langle \delta T_{21}(\mathbf{k}, z)\, \delta T^*_{21}(\mathbf{k}', z) \rangle = (2\pi)^3\, \delta_D(\mathbf{k} + \mathbf{k}')\, P_{21}(k, z) \,, \qquad (2.5) $$

where δD is the Dirac delta function, and δT21(k, z) corresponds to the Fourier transform of $\delta T_b(\mathbf{x}, z) - \overline{\delta T_b}(z)$. Throughout this work we use the dimensional reduced power spectrum,

$$ \Delta^2_{21}(k, z) = \frac{k^3\, P_{21}(k, z)}{2\pi^2} \ [{\rm mK}^2] \,. \qquad (2.6) $$

2.2 Shape of the signal from the dark ages to the cosmic dawn: the crucial role of astrophysical parameters

The 21-cm signal is dictated by the changing thermal and ionization properties of the Universe around the epoch of reionization, caused by the formation of the first astrophysical sources and their emission. These first stars determine the 21-cm signal through three key physical effects: i) through their impact on the kinetic temperature of the gas, via X-ray heating, ii) via the strength of the WF coupling xα, induced by Lyman-α pumping, and iii) via changes in the fraction of neutral hydrogen xHI, shown explicitly in eqs. (2.2) and (2.4).

The epoch of interest for the present study is the redshift range z ≃ 10–25, and begins in the last portion of the dark ages, when no stars have formed yet, and the IGM is neutral. At z ∼ 25, the Universe has expanded sufficiently such that the collisional coupling between the spin temperature and the kinetic temperature of the gas is no longer effective, and δTb = 0 is expected because TS = TCMB. When the first astrophysical sources switch on, they emit both Lyman-α and X-ray photons. Initially, Lyman-α emission is expected to couple the spin temperature to the gas temperature, generating an absorption signal, since the gas temperature is below that of the CMB. Then, as sources begin to emit more strongly in X-rays, the gas temperature increases, damping the absorption signal. Eventually, Tk (which is ∼TS) increases above TCMB, resulting in an emission signal, which is damped as the fraction of neutral hydrogen vanishes, and the Universe becomes completely ionized.

The sources responsible for the emission of Lyman-α photons (i.e., the first stars and galaxies) are discrete and clustered, and hence the Lyman-α background responsible for the WF coupling builds up inhomogeneously in space. Later, the X-ray flux originating from the first X-ray binaries is also expected to be patchy. Therefore, the temperature contrast between the heated and cold regions leads to spatial variations in Tk and thus in the brightness temperature, and these spatial variations can be captured with the power spectrum.

Both the globally averaged absorption/emission signal and the power spectrum that characterizes the spatial variations are shaped by the details of the star formation process, and eventually depend on some key parameters that are crucial in the current study. In particular, three quantities are certainly relevant: 1) the number of Lyman-α photons emitted per baryon, Nα; 2) the X-ray luminosity, LX, normalized to the star-formation rate, $\dot M_*$; 3) the minimum mass of star-forming DM halos, Mturn.
The first quantity is responsible for the WF coupling described above, and therefore impacts the coupling of the spin temperature and the kinetic temperature of the gas. The second parameter controls the heating of the cold gas. Both parameters depend on the Star Formation Rate Density (SFRD), which quantifies how much stellar mass is formed per unit volume and per unit time at each redshift. It plays a central role in the 21-cm modeling and determines the normalization of the radiation fields mentioned above that shape the signal and its fluctuations. In section 3, we will address how the SFRD evolves over cosmic history and how it is modeled with the numerical tools that we use. The third quantity, instead, sets the threshold mass for DM halos to cool and form stars: larger values of Mturn restrict star formation to rarer massive halos, delaying the onset of radiation backgrounds. In section 3, we will also provide more details on Mturn.

These three astrophysical parameters are uncertain. As a consequence, both the shape of the absorption signal and the power spectrum that future experiments aim to probe are uncertain, and our current knowledge prevents us from characterizing them accurately. Therefore, the sensitivity of these upcoming data to new physics scenarios depends strongly on the values chosen for the astrophysical parameters, which we will illustrate in this paper by forecasting the sensitivity to PBHs.

2.3 The case study of Primordial Black Holes: impact on the signal

Additional physics phenomena beyond the Standard Model can alter the 21-cm signal described in the previous subsection. In particular, we consider a hypothetical population of massive PBHs that accrete baryonic matter. These objects can affect the 21-cm brightness temperature by altering the spin temperature and the 21-cm optical depth. Through the process of accretion, PBHs would inject high-energy photons, which can deposit energy in the IGM, heating, exciting, and ionizing the gas. Therefore, PBHs would alter the 21-cm optical depth by reducing the neutral hydrogen fraction and also alter the spin temperature by increasing the kinetic temperature from heating. The accretion process by PBHs would also produce Lyman-α photons, which would directly impact the efficiency of the WF effect through the dimensionless coupling xα, by increasing the flux of Lyman-α photons, Jα,

$$ x_\alpha = 0.416 \times \frac{16\pi^2 e^2}{27\, m_e c\, A_{10}}\, \frac{h\nu_0}{k_B T_k}\, J_\alpha \,, \qquad (2.7) $$

where A10 is the Einstein coefficient for spontaneous emission, and e and me are the charge and mass of the electron. In particular, the Lyman-α flux can be split into two contributions,

$$ J_\alpha = J_\alpha^{\rm astro} + J_\alpha^{\rm PBH} \,, \qquad (2.8) $$

where $J_\alpha^{\rm astro} \propto N_\alpha$ is the astrophysical contribution from the first stars [100],^2 and $J_\alpha^{\rm PBH}$ denotes the contribution from accreting primordial black holes.

^2 Note that Zeus21 does not include the contribution from X-ray excitations of neutral hydrogen [101], which is subdominant for the astrophysical parameters and redshifts we consider [102].

Additionally, injection of energy from accreting PBHs also alters the free electron fraction, xe, and the baryon temperature, Tk, of the IGM. To account for this, we use the modified evolution equations described in ref. [103], given by

$$ \frac{dx_e(z)}{dz} = \frac{1}{(1 + z)\, H(z)} \left( R(z) - I(z) - I_X(z) \right) \,, $$
$$ \frac{dT_k}{dz} = \frac{1}{1 + z} \left[ 2\, T_k + \gamma \left( T_k - T_{\rm CMB} \right) \right] + K_h \,, \qquad (2.9) $$

where R(z) and I(z) are the standard recombination and ionization rates, and γ is the dimensionless opacity of the gas. IX(z) and Kh denote the additional respective ionization and heating rates from exotic injections, which are proportional to the deposited energy due to exotic sources. We will define the deposited energy in section 3.2.2. We refer the reader to refs. [51, 103, 104] for further details on these coefficients.
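The following sketch illustrates how the coupled equations (2.9) can be integrated numerically. We stress that the rate functions and coefficients below (R, I, I_X, gamma, K_h, and the toy H(z)) are hypothetical placeholders chosen only to make the script self-contained; the actual coefficients are those of refs. [51, 103, 104].

import numpy as np
from scipy.integrate import solve_ivp

T_CMB0 = 2.725   # K, CMB temperature today
H0 = 2.2e-18     # 1/s, Hubble constant (~68 km/s/Mpc)

def hubble(z):
    # Toy matter-dominated expansion rate (placeholder).
    return H0 * np.sqrt(0.31) * (1.0 + z) ** 1.5

def rhs(z, y, injection_on):
    x_e, T_k = y
    T_cmb = T_CMB0 * (1.0 + z)
    R = 1e-20 * (1.0 - x_e)      # placeholder recombination rate
    I = 0.0                      # standard ionization, negligible in this epoch
    gamma = 1e-2 * x_e           # placeholder dimensionless gas-CMB opacity
    I_X = 1e-22 * injection_on   # placeholder exotic ionization rate
    K_h = -5e-2 * injection_on   # placeholder exotic heating term [K per unit z]
    dxe_dz = (R - I - I_X) / ((1.0 + z) * hubble(z))
    dTk_dz = (2.0 * T_k + gamma * (T_k - T_cmb)) / (1.0 + z) + K_h
    return [dxe_dz, dTk_dz]

# Integrate downward from z = 300 to z = 10, with and without exotic injection.
y0 = [2e-4, T_CMB0 * 301.0]   # x_e and T_k (gas initially coupled to the CMB)
for label, flag in [("no injection", 0.0), ("with injection", 1.0)]:
    sol = solve_ivp(rhs, (300.0, 10.0), y0, args=(flag,), rtol=1e-8)
    print(f"{label}: x_e = {sol.y[0, -1]:.2e}, T_k = {sol.y[1, -1]:.2f} K at z = 10")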
We show in figure 1 how the quantities in eq. (2.9) are affected by the presence of accreting PBHs. We show the impact for the two different accretion models: the Bondi-Hoyle-Lyttleton (BHL) model described in section 2.3.1, and the Park-Ricotti (PR) model, described in section 2.3.2. We set the PBH mass to MPBH = 10^3 M⊙, and the fraction of dark matter contained in PBHs, fPBH = ΩPBH/ΩDM, to 0.1, in this figure. As can be clearly seen, the BHL accretion model has a much larger impact than the PR accretion model on xe and Tk, especially at late times, changing the free electron fraction and the baryon temperature by more than one order of magnitude. Hence, the modeling of accretion strongly influences the ionization fraction and gas temperature, and thus the predicted 21-cm signal.

[Figure 1: Impact of accreting PBHs on the free-electron fraction, xe (left panel), and on the gas temperature, Tk (right panel), as functions of redshift. Results are shown for the Park-Ricotti and Bondi-Hoyle-Lyttleton accretion models, alongside the no-PBH case, for MPBH = 10^3 M⊙ and fPBH = 0.1.]

The key quantity governing the impact of the accreting PBHs on the 21-cm signal is the accretion rate, $\dot M \equiv dM/dt$, which quantifies the rate at which baryonic matter is captured by a black hole. This quantity is typically a function of the PBH mass and PBH speed, and also depends on the properties of the medium. It is used to calculate the amount of energy that is injected into the IGM, and thus the impact on the observables. In the next sections we will compare the two accretion models, BHL and PR. We remark here that we do not study the impact of dark matter mini-halos on the accretion rate in this work.^3 Our conclusions, namely that astrophysical uncertainties associated with the choice of fiducial model can significantly impact the derived constraints, are expected to remain unchanged in this case.

^3 We refer the interested reader to refs. [79, 82] for an analysis with the CMB, and ref. [47] for a 21-cm analysis.

2.3.1 The Bondi-Hoyle-Lyttleton model

The Bondi-Hoyle-Lyttleton (BHL) model was developed as an interpolation between two limiting cases: the ballistic regime, which describes accretion onto a point mass moving at constant velocity through a uniform-density medium under purely gravitational influence [85–88], and spherical accretion, which considers a stationary, spherically symmetric object accreting matter, while accounting for both pressure and gravity [89, 90]. With these simplifying assumptions, the resulting interpolation formula can provide an order-of-magnitude estimate of the accretion rate [90], without capturing complicated hydrodynamical effects such as radiation feedback. Despite these simplifications, it is commonly used to derive bounds and forecasts in the cosmological setting. We provide a summary of the model below.

Under the BHL accretion model [85–90], the mass accretion rate onto an isolated compact object with mass M and with a velocity vrel relative to the ambient gas is given by

$$ \dot M_{\rm BHL} = \frac{4\pi\lambda\, (GM)^2 \rho_b}{\left( v_{\rm rel}^2 + c_s^2 \right)^{3/2}} \,, \qquad (2.10) $$
where ρb and cs denote the density and sound speed of the surrounding medium, respectively, and λ is a dimensionless parameter determining the normalization of the accretion rate. Bondi originally computed the maximal value of λ as a function of the equation of state of the gas under simplifying assumptions, finding λ ∼ O(1) [90]. Later studies corrected this value of λ, specifically computing it in cosmological scenarios, where the effects of cosmic expansion and DM overdensities were included [75, 76, 105]. These studies generally assume spherical symmetry for the accretion flow and identify bremsstrahlung (free-free) emission near the Schwarzschild radius as the dominant cooling mechanism, while also accounting for the Hubble flow.

However, the calculation of λ from first principles involves simplifying complicated accretion processes, and an alternative phenomenological approach for quantifying λ exists in the literature. This approach involves setting the value of λ to be consistent with astrophysical observations. Indeed, values of λ ≃ 10^-2–10^-3 are frequently adopted to align theoretical predictions with observations, such as the absence of a large population of isolated neutron stars [106] or stellar-mass black holes [107] in the local Universe. This correction is also motivated by studies of nearby active galactic nuclei [108] and the accretion environment around the central supermassive black hole of the Milky Way [109]. The suppression factor λ is intended to account for a variety of non-gravitational effects (including pressure forces, viscosity, and radiation feedback) that can act to reduce the accretion rate. In what follows, we adopt λ = 0.01 as a representative value for disk accretion scenarios.

2.3.2 The Park-Ricotti model

The role of radiative feedback in the accretion process was investigated by Park and Ricotti through hydrodynamical simulations of accretion from a homogeneous medium onto a compact object [91–93]. These simulations revealed the formation of an ionization front (sometimes preceded by a shock wave, depending on the velocity regime) alongside a sharp suppression of the accretion rate at low speed. The authors developed a simplified analytical prescription, the PR model, which successfully reproduces the accretion rates observed in the simulations, and which we briefly outline in the following.

The high-energy radiation generated during accretion ionizes the surrounding gas, altering its temperature, density, and flow velocity relative to the black hole. These changes, in turn, impact the accretion dynamics. The PR model provides a semi-analytic fit to the simulations that follows the functional form of the BHL accretion formula, eq. (2.10), but in this case accreting from the ionized gas. Hence, the PR accretion rate is

$$ \dot M_{\rm PR} = \frac{4\pi\, (GM)^2 \rho_{\rm in}}{\left( v_{\rm in}^2 + c_{s,\rm in}^2 \right)^{3/2}} \,, \qquad (2.11) $$

where ρin, vin, and cs,in are the density, relative velocity, and sound speed of the ionized gas, respectively. The sound speed cs,in is treated as a constant, parameterizing the temperature of the ionized gas. The values of ρin and vin are determined from the ambient gas properties ρb and vrel by enforcing one-dimensional mass conservation and force equilibrium across the ionization front, as detailed in ref. [82] and references therein. At high relative velocities, vrel ≳ 2 cs,in, the PR and BHL models converge, since the ram pressure dominates over the ionization pressure.
However, the BHL rate is typically reduced by the normalization parameter λ, resulting in an accretion rate lower than that of the PR model. At lower velocities, vrel ≲ 2 cs,in, the formation of a shock front leads to significant deviations: whereas the BHL rate increases, the PR rate declines and becomes strongly suppressed relative to BHL. It is this regime that is most relevant in the cosmological setting, and it has the largest phenomenological impact.

The value of cs,in, or equivalently the gas temperature within the ionized region, sets the characteristic velocity and hence the peak of the PR accretion rate. While cs,in is determined, in principle, by the balance between radiative heating and cooling, the PR model treats it as a free parameter. A commonly adopted benchmark, which we use in this work, is cs,in = 23 km/s, corresponding to a temperature of 4 × 10^4 K [93]. A more detailed analysis of this parameter and its implications for cosmological constraints is provided in ref. [82].

Once the accretion rate has been specified, it can be applied to a specific cosmological setup characterized by a redshift-evolving density and relative speed between PBHs and baryons. It can subsequently be used to compute the PBH luminosity, which plays a central role in determining the rate of energy injection. In the following section, we describe in more detail our modeling of the cosmological medium.

2.3.3 Environmental properties

In this work, we adopt both the BHL and PR models to describe the accretion of a gas with a homogeneous density distribution onto PBHs. As mentioned above, the Universe is starting to develop virialized halos in the epoch of interest in this study, and some of these halos are forming stars, giving rise to the cosmic dawn. However, prior to the end of the cosmic dawn and the onset of reionization, the vast majority of PBHs are still expected to be isolated in the cosmological medium rather than in virialized halos (see for instance Figure 14 in ref. [110] and ref. [75]).^4 Hence, similarly to our previous work [82], we take the cosmological background density of baryons to evolve with redshift as

$$ \rho_b = 200\, m_p \left( \frac{1 + z}{1 + z_{\rm rec}} \right)^3 \ {\rm g\, cm^{-3}} \,, \qquad (2.12) $$

where mp is the proton mass in grams and zrec ≃ 1100 is the redshift at recombination. The baryon sound speed is given by

$$ c_s = \sqrt{ \frac{\gamma\, (1 + x_e)\, T_k}{m_p} } \ {\rm km\, s^{-1}} \,, \qquad (2.13) $$

where Tk is the gas temperature, xe is the ionization fraction, and γ is the adiabatic index. We also adopt the linear cosmological relative velocity between baryons and DM as the relative velocity between PBHs and the background gas, vrel, with the RMS value given by [111, 112]

$$ \sqrt{ \langle v_{\rm rel}^2 \rangle } = \min\left[ 1, (1 + z)/1000 \right] \times 30 \ {\rm km\, s^{-1}} \,. \qquad (2.14) $$

The PBH velocity distribution is assumed to follow a Maxwell-Boltzmann profile, where we relate the root-mean-square velocity in eq. (2.14) to the temperature, to uniquely define the distribution.^5

We remark here that any contribution to the power spectrum in the form of Poisson noise [113], due to the discreteness of PBHs, is not accounted for here.

^4 See, however, ref. [47] for an analysis that also considers the contribution from some PBHs that are contained in virialized halos.
^5 As pointed out in ref. [44], previous works, such as refs. [76, 78], have (incorrectly) used a proxy for this averaging, veff, corresponding to a limiting case where the accretion luminosity has the proportionality Lacc ∝ $\dot M^2$. We further discuss the effect of varying the velocity distribution in section A.1.
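As an illustration of eqs. (2.10)–(2.14), the sketch below implements the BHL rate with λ = 0.01 and a deliberately simplified stand-in for the PR rate; the true PR prescription obtains ρin and vin by solving the jump conditions across the ionization front (ref. [82]), which we replace here by a crude high/low-velocity matching purely for brevity.

import numpy as np

G = 6.674e-8        # cm^3 g^-1 s^-2
M_SUN = 1.989e33    # g
M_P = 1.673e-24     # g, proton mass
KMS = 1.0e5         # cm/s per km/s

def rho_b(z, z_rec=1100.0):
    # Cosmological baryon density of eq. (2.12), in g/cm^3.
    return 200.0 * M_P * ((1.0 + z) / (1.0 + z_rec)) ** 3

def v_rel_rms(z):
    # RMS baryon-PBH relative velocity of eq. (2.14), in cm/s.
    return min(1.0, (1.0 + z) / 1000.0) * 30.0 * KMS

def mdot_bhl(M, v_rel, c_s, rho, lam=0.01):
    # BHL accretion rate of eq. (2.10), with the suppression factor lambda.
    return 4.0 * np.pi * lam * (G * M) ** 2 * rho / (v_rel**2 + c_s**2) ** 1.5

def mdot_pr(M, v_rel, rho, c_s_in=23.0 * KMS):
    # Schematic PR-like rate of eq. (2.11): accretion from the ionized gas with
    # sound speed c_s_in; v_in is crudely floored at c_s_in (illustrative only).
    v_in = max(v_rel, c_s_in)
    return 4.0 * np.pi * (G * M) ** 2 * rho / (v_in**2 + c_s_in**2) ** 1.5

# Compare the two rates for a 10^3 M_sun PBH at z = 20, taking c_s ~ 6 km/s:
M, z = 1.0e3 * M_SUN, 20.0
print("BHL:", mdot_bhl(M, v_rel_rms(z), 6.0 * KMS, rho_b(z)), "g/s")
print("PR :", mdot_pr(M, v_rel_rms(z), rho_b(z)), "g/s")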
This has been studied previously (e.g., in refs. [44, 114]), where it was found that for the scales probed by SKA and HERA, with wavenumbers k ≃ 0.1–1 Mpc^-1, the effect is subdominant. Furthermore, we have also not modeled the potential impact of PBHs on early structure formation [36, 37] or on the formation of the first stars themselves [115], which remain interesting avenues for future investigation.

3 Methodology

Our goal is to provide 21-cm sensitivity forecasts for PBHs, highlighting the impact of the choice of fiducial model on the forecasts. To this end, we implement the exotic physics of accreting PBHs into the Zeus21 code [100, 116], modeled with both the BHL and PR models. In section 3.1, we first describe the numerical modeling of the first stars, as accounted for in Zeus21. We go on to describe our modeling of injected and deposited energy from accreting matter around PBHs in section 3.2. In section 3.3, we introduce three plausible astrophysical scenarios as our fiducial models for mock data generation. In section 3.4, we outline our statistical analysis pipeline, including details on the expected sensitivities of the HERA and SKA telescope configurations. Finally, in section 3.5, we marginalize over astrophysical nuisance parameters and present 21-cm sensitivities to PBHs for three astrophysical scenarios.

3.1 Zeus21 calculation of the 21-cm signal

The 21-cm signal during the cosmic dawn is principally determined by the effects of the radiation fields produced by the first stars, whose formation and emission properties are uncertain. Modeling the formation and associated emission of the first stars is very challenging, especially on cosmological scales. There have been dedicated efforts using hydrodynamical simulations (see, e.g., refs. [117, 118]), but these are computationally very expensive. A faster semi-numerical approach also exists, where 3D realizations of evolved density, temperature, ionization, velocity, and radiation fields are computed using Lagrangian perturbation theory (see, e.g., the popular code 21cmFAST [119, 120]). These Lagrangian methods produce results comparable to full hydrodynamical simulations [121], at a fraction of the computational cost. Despite this speed-up, it is still computationally expensive to run parameter scans using these methods. In this work, we instead opt to use the fully analytic code Zeus21 [100, 116], where the SFRD is the main quantity determining the 21-cm signal, and both the global signal and power spectrum are computed approximately two orders of magnitude faster than with 21cmFAST. Comparisons have shown that Zeus21 and 21cmFAST agree at the 10% level for both the 21-cm global signal and power spectrum [100, 116]. We summarize below how we use the Zeus21 code in our analysis.

Following ref. [100], we take the SFRD to be an approximately log-normal variable at cosmic dawn. This allows for the fully analytic computation of the global signal and power spectrum. This analytic calculation relies on the assumption that the SFRD scales exponentially with the over/underdensities δR, where the δR are drawn from the cosmological initial conditions. The 21-cm signal thus depends on the sum of SFRDs, averaged over different comoving radii R.
Under these assumptions, the SFRD in a region of comoving radius R, and with density contrast δR, is given by [122, 123]

$$ {\rm SFRD}(z|\delta_R) = (1 + \delta_R) \int dM_h\, \frac{dn}{dM_h}(\delta_R)\, \dot M_*(M_h) \,, \qquad (3.1) $$

where $\frac{dn}{dM_h}(\delta_R)$ is the density-modulated halo mass function (HMF), and $\dot M_*(M_h) \equiv \dot M_*(M_h, z)$ is the star formation rate (SFR) of a galaxy hosted in a halo of mass Mh (see ref. [123] for more details). We use a Sheth-Tormen halo mass function [124], and consider a SFR that assumes that some fraction of the baryonic matter accreted by galaxies is converted into stars,

$$ \dot M_* = f_*\, f_b\, \dot M_h \,, \qquad (3.2) $$

where fb = Ωb/Ωm is the baryon fraction, $\dot M_h$ is the mass accretion rate of a galaxy, and f∗ ≡ f∗(Mh) is the SFR efficiency. The efficiency f∗(Mh) is regulated by an exponentially suppressed duty fraction, ensuring that stars do not form efficiently below some threshold Mturn, as given by

$$ f_*(M_h) = \frac{2\epsilon_*}{(M_h/M_{\rm pivot})^{-\alpha_*} + (M_h/M_{\rm pivot})^{-\beta_*}}\, \exp\left( -\frac{M_{\rm turn}}{M_h} \right) \,, \qquad (3.3) $$

where ε∗, Mpivot, α∗, and β∗ are constant parameters, fixed to the values shown in table 3. In Zeus21, Mturn is taken to be the atomic cooling threshold, corresponding to a fixed virial temperature of Tvir = 10^4 K. We generalize this implementation following ref. [125], and modify Zeus21 by taking Mturn to depend on the virial temperature as

$$ M_{\rm turn} = 10^{0.58} \times T_{\rm vir}^{3/2} \left( \frac{1 + z}{10} \right)^{-3/2} \,. \qquad (3.4) $$

Zeus21 additionally uses a broadcasting methodology to compute the effective biases γR, by positing that the SFRD in eq. (3.1) is related to its average value,

$$ {\rm SFRD}(z|\delta_R) \approx \overline{\rm SFRD}(z)\, e^{\gamma_R \tilde\delta_R} \,, \qquad (3.5) $$

where $\overline{\rm SFRD}(z)$ is computed following [126], and $\tilde\delta_R = \delta_R - \gamma_R \sigma_R^2/2$, where δR and σR are the density contrast and its variance in a region of comoving radius R, respectively (see ref. [100] for details). The approximation as an exponential allows an analytic calculation of the correlation function of the SFRD given δR, which is a well-known cosmological output of the CLASS code [127]. The nonlinearities associated with structure formation are therefore encoded in the effective bias γR.

Using the above prescription for the SFRD, the Lyman-α radiation field $J_\alpha^{\rm astro}$ and the X-ray radiation field $J_X^{\rm astro}$ can be written as integrals of the form

$$ J^{\rm astro}_{\alpha/X} \propto \int dR\, c_{\alpha/x}(R)\, {\rm SFRD}(R) \,, \qquad (3.6) $$

where the integral is performed over the comoving radius R, and cα/x are coefficients accounting for photon propagation, defined explicitly in ref. [100]. We remark here explicitly that the Lyman-α normalization in eq. (3.6) is $J_\alpha^{\rm astro} \propto N_\alpha$, the number of Lyman-α photons per baryon. Additionally, the X-ray term that contributes to heating is normalized by $J_X^{\rm astro} \propto L_X/\dot M_*$, where $L_X/\dot M_*$ is the soft-band (0.5 keV ≤ E ≤ 2 keV) X-ray luminosity normalized to the star formation rate.

Since our analysis is restricted to z ≳ 10, we do not expect substantial deviations from our results stemming from this narrow redshift range. This is especially the case, since at these redshifts heating from astrophysical sources dominates over any additional heating contributions from exotic sources. A second effect could be relevant, especially in less-constraining astrophysical scenarios, where the 21-cm signal does not rule out significant energy injections. In particular, as shown in Figure 6 of ref. [137], constraints become stronger when considering the backreaction effect of modifications to the ionization and thermal history induced by exotic sources, by 10%–50% stronger for the cases of interest. This is attributed to larger temperatures caused by including backreaction effects, as shown in Figure 4 of ref. [137]. We do not attempt to quantify this effect further, given that the systematics that we will present in this paper correspond to order-of-magnitude effects on the bound, and will be most relevant in a region of the parameter space corresponding to large energy injections, which are constrained by other observations (see, e.g., figure 6 below and references in the caption).
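Returning to the star-formation prescription, the sketch below evaluates the duty-cycle-suppressed efficiency of eq. (3.3) and the turnover mass of eq. (3.4); the values taken for ε∗, Mpivot, α∗, and β∗ are placeholders, since the paper fixes them to the values of its table 3.

import numpy as np

def m_turn(T_vir, z):
    # Minimum star-forming halo mass of eq. (3.4), in solar masses.
    return 10 ** 0.58 * T_vir ** 1.5 * ((1.0 + z) / 10.0) ** -1.5

def f_star(M_h, M_turn_val, eps=0.1, M_pivot=3e11, alpha=0.5, beta=-0.6):
    # Double power law of eq. (3.3) with an exponential suppression below M_turn.
    # eps, M_pivot, alpha, beta are placeholder values, not those of table 3.
    dpl = 2.0 * eps / ((M_h / M_pivot) ** (-alpha) + (M_h / M_pivot) ** (-beta))
    return dpl * np.exp(-M_turn_val / M_h)

# Raising T_vir from 1e4 K to 1e5 K at z = 15 raises the threshold mass by a
# factor 10^(3/2) ~ 32, pushing star formation into rarer, more massive halos:
for T_vir in (1e4, 1e5):
    Mt = m_turn(T_vir, 15.0)
    print(f"T_vir = {T_vir:.0e} K -> M_turn = {Mt:.2e} M_sun, "
          f"f_*(1e10 M_sun) = {f_star(1e10, Mt):.3f}")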
3.3 Mock data generation

To forecast constraints using the 21-cm signal, a fiducial set of astrophysics parameters for mock data generation must be chosen. However, the properties of the first stars are poorly known, and consequently, the fiducial 21-cm signal expected to be measured by experiments represents a significant uncertainty. Current theoretical predictions of the power spectrum during cosmic dawn vary by at least one order of magnitude, with similar uncertainties in the astrophysical parameters underlying the fiducial theory model (see, e.g., refs. [16–18, 123, 138–143]). Several fiducial astrophysical scenarios have been studied in the literature. Some include only atomically cooled Pop II stars (e.g., refs. [17, 44, 140]), while others also consider the effects of an additional population of molecularly cooled halos forming Pop III stars (e.g., refs. [46, 62, 123]). However, when confronted with existing data, the allowed parameter space for each model widens substantially, such that the theory prediction for the global and power spectrum signals spans roughly two orders of magnitude [16, 138, 139, 141–144].

To account for this uncertainty, we consider three distinct fiducial astrophysical scenarios for mock data generation, parametrized similarly to ref. [44]. In particular, we consider an astrophysical model consisting of three parameters that we vary, and several additional parameters that we set to the fiducial values given in table 3 in section A.2. The value (or range of values) of each parameter is chosen to be consistent with existing independent observational constraints from ultraviolet luminosity functions (UVLFs) [140, 145, 146], spectra from the Chandra X-ray observatory [138, 147, 148], and X-ray limits from HERA [149, 150].^7 For simplicity, we consider an astrophysical model containing only Pop II stars, but remark that the properties of Pop III stars are also poorly understood, so the uncertainties associated with choosing a fiducial model would remain if we included Pop III stars [151].^8

^7 These observational constraints necessarily depend on additional modeling assumptions about the star formation history and emission properties of early galaxies. We leave a fully self-consistent study of all independent constraints to future work.
^8 Pop II stars have been found in surveys looking at metal-poor stars in our galaxy and neighboring satellites. However, Pop III stars have not yet been detected [151].

The three parameters of the astrophysical model that we vary over in this work are:

• Nα: the number of Lyman-α photons between 10.2 eV and 13.6 eV per baryon.
• LX: the luminosity per unit SFR, LX/Ṁ∗, in soft X-rays (0.5 keV ≤ E ≤ 2 keV), in units of erg s^-1 M⊙^-1 yr.
• Tvir: the minimum virial temperature of galaxy-forming halos.

In particular, we vary the astrophysical parameters between three scenarios for mock data generation. We label these as benchmark, more-constraining, and less-constraining, with the astrophysics parameter choices for each of these scenarios given in table 2. Each of these three scenarios is chosen to be consistent with current observational and theory constraints, and named according to its constraining power for exotic scenarios. All scenarios are consistent with the 2σ upper limits on the power spectrum reported by HERA [149, 150]. We justify the parameter choices for our three scenarios below. As far as the constraining power associated with each scenario is concerned, the full discussion will be presented in section 4.
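For reference, the three fiducial scenarios of table 2 can be collected in a small configuration structure; this is merely a convenient layout of the values quoted in the table, not the input format of Zeus21.

# The three fiducial astrophysical scenarios of table 2 (values as quoted there).
SCENARIOS = {
    # N_alpha: Lyman-alpha photons per baryon (10.2-13.6 eV)
    # log10_LX: soft-band X-ray luminosity per unit SFR [log10(erg s^-1 M_sun^-1 yr)]
    # T_vir: minimum virial temperature of star-forming halos [K]
    "benchmark":         {"N_alpha": 9690, "log10_LX": 40.5, "T_vir": 1e4},
    "less_constraining": {"N_alpha": 3e3,  "log10_LX": 41.0, "T_vir": 1e4},
    "more_constraining": {"N_alpha": 4e4,  "log10_LX": 39.5, "T_vir": 1e5},
}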
To specify the ranges on Nα, we begin by remarking that the typical benchmark scenario of Nα = 9690 Lyman-α photons per baryon is given in ref. [122], and computed from the spectral energy distributions provided in the Starburst99 simulations [152]. To remain consistent with the existing literature, we also use this as our benchmark value. However, the Starburst99 model predictions include data for different metallicities, initial mass functions, ages of the stellar population, and whether the stars formed at a continuous rate, or instantaneously. Integrating the Starburst99 spectra for different combinations of these parameters to derive a value of Nα gives different results, ranging from several hundreds to tens of thousands of photons per baryon. Additionally, more recent additions to the Starburst99 models extend these predictions [153, 154].

Astrophysical parameter                    Benchmark   Less-Constraining   More-Constraining
Nα                                         9690        3 × 10^3            4 × 10^4
log10 LX/Ṁ∗ [erg s^-1 M⊙^-1 yr]            40.5        41                  39.5
Tvir [K]                                   10^4        10^4                10^5

Table 2: The three astrophysical parameters that we vary over for forecasting the sensitivity of the 21-cm signal to PBHs. These parameters are: the number of photons between Ly-α and the Lyman continuum per baryon, Nα; the soft-band (0.5 keV ≤ E ≤ 2 keV) X-ray luminosity normalized to the star formation rate, LX/Ṁ∗; and the minimum virial temperature of galaxy-forming halos, Tvir.

For LX, HERA observations imply a lower limit of LX/Ṁ∗ > 10^39.9 erg s^-1 M⊙^-1 yr at 2σ confidence [149, 150]. This lower limit is derived from a lower bound on the gas temperature, from the non-detection of the 21-cm power spectrum by HERA, leading to the conclusion that some heating of the gas must be present in the early Universe. However, the translation of this gas temperature limit to an X-ray limit requires assumptions on the modeling of astrophysical sources. To include the uncertainties associated with the astrophysical modeling, we conservatively choose our lower limit to be LX/Ṁ∗ = 10^39.5 erg s^-1 M⊙^-1 yr. We choose a benchmark scenario contained within these limits, assuming high-mass X-ray binaries consistent with ref. [159], with LX/Ṁ∗ = 10^40.5 erg s^-1 M⊙^-1 yr,^9 and we choose an upper limit of LX/Ṁ∗ = 10^41 erg s^-1 M⊙^-1 yr.^10

For Tvir, a lower limit can be set by the atomic cooling threshold, and an upper limit can be set by requiring consistency with observed high-z Lyman break galaxies [145], and from observations of the Lyman-α forest [17, 155, 161, 162].^11 We consider our benchmark scenario at the atomic cooling threshold Tvir = 10^4 K, and choose our less-constraining and more-constraining values at 10^4 K and 10^5 K, respectively.

The substantial variation in the astrophysics parameters reflects the absence of powerful observational probes constraining cosmic dawn. Ongoing efforts to constrain these parameters using independent observables include UVLFs [16, 123, 140, 144, 163], quasar dark fraction measurements [164, 165], cosmic X-ray and radio background measurements [138, 141, 143, 147, 148], and the high-redshift Lyman-α forest [142, 166].

^9 This benchmark value should be understood as an order-of-magnitude estimate on the X-ray luminosity. See, e.g., ref. [160], and references therein.
^10 We remark here that using a larger upper limit for LX would further increase heating of the IGM from astrophysical sources, making distinguishability from exotic sources even more difficult (and can be roughly understood as an even less-constraining scenario). Similarly, using a lower limit would have the opposite effect, allowing for a more-constraining scenario.
^11 Attempts to constrain Mturn are given, e.g., in ref. [140].
As the James Webb Space Telescope (JWST) continues to shed light on the early Universe [167], such approaches may further narrow the allowed parameter space [168]. In addition, upcoming missions such as the Advanced Telescope for High Energy Astrophysics (ATHENA) X-ray telescope will work in synergy with 21-cm observatories to constrain the cosmic dawn [169]. However, until the 21-cm signal is measured, and independent observatories constrain the UV and X-ray properties of the first stars, a significant systematic will remain in the mock data generation for forecasts. Other groups have considered these astrophysical uncertainties on the 21-cm signal, studying the impact of the first galaxies being bright or faint [123, 170], or how altering astrophysical scenarios would impact the constraints on annihilating DM [17] or warm DM [18].

3.4 Statistics

Following ref. [44], we choose a multivariate Gaussian likelihood of the form

$$ \log L = -\frac{1}{2} \sum_{i_z}^{N_z} \sum_{i_k}^{N_k} \frac{ \left[ \Delta^2_{21}(z_{i_z}, k_{i_k})_{\rm test} - \Delta^2_{21}(z_{i_z}, k_{i_k})_{\rm fid} \right]^2 }{ \sigma^2_{\rm tot}(z_{i_z}, k_{i_k}) } \,, \qquad (3.13) $$

where $\Delta^2_{21}(z_{i_z}, k_{i_k})_{\rm test/fid}$ are the 21-cm power spectra evaluated at a given z and k, for either the test or fiducial model, and σtot(z, k) is the total measurement error, defined in the next section. The sums run over redshift z and through k-space (in Mpc^-1), with the range we consider explicitly including k = {0.13, 0.18, 0.23, 0.29, 0.34, 0.39, 0.45, 0.5, 0.55, 0.61, 0.66, 0.71, 0.77, 0.82, 0.87, 0.93} Mpc^-1, consistent with those used in ref. [62], and ten log-spaced z measurements z = {10.27, 11.04, 11.91, 12.93, 14.11, 15.52, 17.21, 19.29, 21.91, 25.30}, matching the 8 MHz bandwidth of the experiments we consider.^12 We adopt the three-parameter model outlined in section 3.3, and explicitly defined in table 2. We compute the likelihood when varying fPBH under the three different astrophysical scenarios defined in table 2.

^12 We consider only z ≳ 10, since Zeus21 is designed to be valid until reionization at z ≃ 10 [100].

3.4.1 Sensitivities

The total measurement error in the power spectrum is given by [62, 140, 171]

$$ \sigma^2_{\rm tot}(z, k) = \sigma^2_{\rm exp}(z, k) + \sigma^2_{\rm sample}(z, k) + \left[ 0.2\, \Delta^2_{21}(z, k)_{\rm fid} \right]^2 \,, \qquad (3.14) $$

where σexp denotes the experimental error,^13 σsample is the error introduced by Poisson noise due to cosmic variance, and the third term accounts for a model uncertainty budget of 20%, in line with the error budget computed for 21cmFAST and propagated for Zeus21 [100, 121]. We compute the experimental error for each of our telescope configurations, and for each of the three fiducial astrophysical scenarios, using 21cmSense [172–174], assuming the default system temperature given by

$$ T_{\rm sys} = 100\ {\rm K} + 260\ {\rm K} \left( \frac{\nu(z)}{150\ {\rm MHz}} \right)^{-2.6} \,, \qquad (3.15) $$

where ν(z) is the redshifted 21-cm line frequency. Modeling Tsys as in eq. (3.15) accounts for a receiver temperature of 100 K, and includes a sky temperature consistent with the measured diffuse radio background at ∼100 MHz [175].^14 We use the same system temperature for all the experimental configurations considered in this work.

^13 The experimental sensitivities σexp are explicitly dependent on the power spectrum of the fiducial astrophysical model, as shown explicitly in eq. (15) of ref. [172].
^14 As pointed out in ref. [62], other works, including ref. [171], considered the temperature given in eq. (3.15) to correspond to their pessimistic scenario.
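A compact implementation of the likelihood of eq. (3.13), the error budget of eq. (3.14), and the system temperature of eq. (3.15) could look as follows; the power-spectrum grids and noise arrays would in practice come from Zeus21 and 21cmSense, so the dummy inputs at the end are placeholders.

import numpy as np

def T_sys(nu_MHz):
    # Default system temperature of eq. (3.15), in K.
    return 100.0 + 260.0 * (nu_MHz / 150.0) ** -2.6

def log_likelihood(d2_test, d2_fid, sigma_exp, sigma_sample):
    # Gaussian log-likelihood of eq. (3.13); all inputs are (N_z, N_k) grids in mK^2.
    sigma2_tot = sigma_exp**2 + sigma_sample**2 + (0.2 * d2_fid) ** 2   # eq. (3.14)
    return -0.5 * np.sum((d2_test - d2_fid) ** 2 / sigma2_tot)

# The (z, k) binning quoted in the text:
z_bins = np.array([10.27, 11.04, 11.91, 12.93, 14.11, 15.52, 17.21, 19.29, 21.91, 25.30])
k_bins = np.array([0.13, 0.18, 0.23, 0.29, 0.34, 0.39, 0.45, 0.50,
                   0.55, 0.61, 0.66, 0.71, 0.77, 0.82, 0.87, 0.93])  # Mpc^-1
print(T_sys(1420.4 / (1.0 + z_bins)))   # system temperature across the band

# Dummy spectra/noise standing in for Zeus21 and 21cmSense outputs:
shape = (len(z_bins), len(k_bins))
d2_fid = 10.0 * np.ones(shape)
print(log_likelihood(1.1 * d2_fid, d2_fid, 0.5 * np.ones(shape), 0.1 * np.ones(shape)))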
In this work, we focus our analysis on producing bounds for SKA, but for completeness include a discussion on the experimental sensitivity of HERA. For SKA, we consider two configurations, the initial SKA AA* configuration and the final SKA AA4 configuration (sometimes equivalently referred to in the literature as SKA1-LOW and SKA2-LOW, respectively [176]). Both follow a spiral layout, where AA* has 307 stations, while AA4 has 512 stations. The SKA-LOW stations are ∼38 m in diameter, but can also be divided into sub-stations of 12 m and 18 m. These sub-stations provide an increased field of view and access to shorter baselines. We assume a Gaussian beam for each baseline, and that the experiment is located at a latitude of 30.7°. We refer the interested reader to ref. [177] for more details and follow their specifications. Both SKA configurations are explicitly implemented in the SKA_forecast notebook provided in 21cmSense. For our sensitivities, we use a deep survey, where we set the number of tracking hours to be the same as the number of observation hours per day. We use a fixed bandwidth of 8 MHz (corresponding to our redshift spacing), and assume an observation time of 1080 hrs (6 hrs per day for 180 days).

For HERA, we use a hexagonal antenna layout comprising 331 antennas (11 on each side), with a dish size of 14 m, and separation of 12.12 m. We assume a Gaussian beam for each baseline, and that the experiment is located at a latitude of 30.8°. For our sensitivities, we perform a drift scan, where the number of tracking hours is equal to the beam-crossing time, similarly to refs. [62, 171]. Similarly to SKA, we use a fixed bandwidth of 8 MHz, and assume an observation time of 1080 hrs (6 hrs per day for 180 days).

3.5 Marginalization

To properly account for degeneracies between accreting PBHs and astrophysical parameters, a marginalization should be performed. In the context of 21-cm studies, this is commonly performed using MCMC methods [44, 155], which efficiently sample the posterior without evaluating the full parameter volume, or with Fisher forecasts [46, 62, 171], which provide an estimate of parameter correlations when likelihood evaluations are computationally expensive. In our case, computations of the likelihood take ∼1 s, making a full parameter scan feasible. As such, we run a full 4-dimensional parameter scan over the three parameters in table 2, together with the PBH parameter fPBH. We do this for the PBH masses MPBH = {1, 10, 100, 1000} M⊙,^15 keeping the rest of the astrophysical parameters fixed to the values given in table 3 of section A.2. The grid has dimension 20^4, corresponding to 20 log-spaced points per parameter. We verified a posteriori that the grid adequately covers the relevant parameter space, such that the marginalized likelihood vanishes at physically meaningful values, while not missing the region of maximum likelihood. From the resulting 4-dimensional likelihood, we perform a marginalization over the astrophysical parameters by interpolating the grid, and then integrating using the trapezoidal rule.

^15 We compute a few additional masses within this range to more precisely find the PBH mass where the bound vanishes at fPBH = 1, to avoid unphysical features in figure 6.
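The grid marginalization described above amounts to integrating the 4-dimensional likelihood over the three astrophysical axes; a minimal sketch, with a random array standing in for the actual likelihood grid and placeholder axis ranges, is shown below.

import numpy as np

n = 20   # 20 log-spaced points per parameter, as in the text
axes = [np.logspace(np.log10(3e2), 5, n),   # N_alpha (placeholder range)
        np.linspace(39.0, 41.5, n),         # log10 LX/Mdot_* (placeholder range)
        np.logspace(4.0, 5.5, n),           # T_vir [K] (placeholder range)
        np.logspace(-4.0, 0.0, n)]          # f_PBH, flat prior over (0, 1)
like = np.random.rand(n, n, n, n)           # stand-in for exp(log L) on the grid

# Integrate out the three astrophysical axes with the trapezoidal rule:
post = like
for x in axes[:3]:
    post = np.trapz(post, x=x, axis=0)      # the leading axis is consumed each pass

post /= np.trapz(post, x=axes[3])           # normalized 1-D posterior in f_PBH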
4 Results

In this section, we show the potential sensitivity of future 21-cm data to accreting PBHs. We begin in section 4.1 by using the BHL accretion model as an example to show that the constraining power of future experiments on any exotic energy-injection mechanism, whose 21-cm signal is governed by its heating and ionizing impact on the IGM, depends sensitively on the choice of fiducial model. We then show in section 4.2 how the 21-cm sensitivity to PBHs varies between the BHL and PR accretion models. In section 4.3, we combine these elements to present the projected exclusion limits on the PBH abundance, fPBH, illustrating how both systematics impact the projected constraints.

4.1 The importance of the fiducial model

We analyze three distinct astrophysical scenarios by varying the three parameters (Nα, LX, Tvir) defined in section 3.3. These scenarios are defined in table 2: a benchmark model based on the default parameters in Zeus21, a less-constraining model, and a more-constraining model. The constraining power of each scenario is determined by how prominently the baseline 21-cm signal stands out against experimental noise and how sensitive it is to additional energy injection; this sensitivity can be understood using simple physical principles, discussed below.

• The less-constraining scenario combines a low flux of Lyman-α photons per baryon (Nα), a high X-ray luminosity (LX), and a low minimum halo virial temperature (Tvir). The combination of weak Lyman-α coupling and strong X-ray heating produces a shallow absorption trough in the global signal: a low Lyman-α flux implies weak coupling between the spin temperature and the gas temperature, compounded by an increase in the gas temperature, so that the difference between TCMB and Tk is smaller. Moreover, a low Tvir allows star formation to occur in smaller halos and thus at earlier times, shifting the features in the signal to earlier times (lower frequencies), where the expected experimental sensitivity is weaker [17]. The resulting signal, shown in the top row of figure 3, provides a poor baseline for constraining any DM candidate that contributes to the heating of the IGM in this epoch.

• The more-constraining scenario assumes the opposite: high Nα, low LX, and a high Tvir. Strong Lyman-α coupling and minimal X-ray heating create a deep, prominent absorption signal, and thus a larger power spectrum. A high Tvir causes star formation to occur only at later times in more massive halos, enhancing the power-spectrum fluctuations. As shown in the bottom row of figure 3, this scenario produces a deep global signal, and a power spectrum that is higher in magnitude than in the other cases. As such, this scenario is more sensitive to deviations caused by exotic energy injection, allowing more stringent constraints to be set, as we will quantify in section 4.3.
• The benchmark scenario assumes an intermediate value for each of these parameters, and thus has an intermediate constraining power compared to the other two scenarios. It was chosen based on the default parameters defined in Zeus21, in line with the discussion presented in section 3.3, and is roughly consistent with previous fiducial benchmarks [44, 100, 116, 140].

[Figure 3: six panels in three rows. Row titles: less-constraining scenario (Nα = 3 × 10³, LX = 10⁴¹ erg s⁻¹ M⊙⁻¹ yr, Tvir = 10⁴ K); benchmark scenario (Nα = 9690, LX = 10^40.5 erg s⁻¹ M⊙⁻¹ yr, Tvir = 10⁴ K); more-constraining scenario (Nα = 4 × 10⁴, LX = 10^39.5 erg s⁻¹ M⊙⁻¹ yr, Tvir = 10⁵ K). Left panels: δTb [mK] vs. z; right panels: ∆²₂₁ [mK²] vs. z, with curves for fPBH = 0, [0, 10⁻³], [10⁻³, 10⁻²], and [10⁻², 10⁻¹], and sensitivity curves for SKA AA4, SKA AA*, and HERA.]

Figure 3 caption: Predictions of the 21-cm global signal (left panels) and the power spectrum (right panels), as a function of z, for the less-constraining (first row), benchmark (middle row) and more-constraining (bottom row) astrophysical scenarios. We assume the BHL accretion model and PBHs of mass MPBH = 10² M⊙, with the colors corresponding to different values of fPBH as indicated in the legend of the top left panel; for the power spectrum, we fix k = 0.15 Mpc⁻¹. We also show the 1σ experimental sensitivities, σexp(z, k), from the HERA experiment, and the SKA AA* and SKA AA4 configurations, computed using 21cmSense [172-174]. The astrophysical parameters for each row are given in the title, with the definitions provided in table 2. The remaining astrophysical parameters are fixed to those shown in table 3.

We characterize and visualize each of these three astrophysical scenarios in figure 3. The figure shows the global signal (left panels) and the power spectrum (right panels) as a function of redshift, evaluated at a representative spatial scale k = 0.15 Mpc⁻¹. Each plot also shows the impact of a sub-dominant population of PBHs with relative abundance fPBH, assuming BHL accretion. We include the 1σ experimental sensitivity associated with each of the experimental configurations that we consider, overlaid on the power-spectrum plots. As shown in the plots, the sensitivity bands permit increasingly tight constraints from the top row to the bottom. We emphasize that the experimental sensitivity depends on the fiducial model's power spectrum (see footnote 13). This dependence explains the shift in the intersection of the SKA sensitivity curve with the x-axis between the benchmark and more-constraining scenarios, and it occurs because the fiducial power spectrum in the benchmark scenario is greater than in the more-constraining one in the narrow redshift range z ≃ 17-20.

As shown, the less-constraining scenario features a shallower global absorption dip and a suppressed power spectrum, especially at higher redshift. In this scenario, the 1σ experimental sensitivity band encompasses the signal, making it difficult to place constraints on the PBH hypothesis under our accretion-modeling assumptions. This result generalizes to other exotic sources: when the experimental sensitivity for a fiducial scenario is of the same order as the signal, distinguishing between different injections proves difficult, as has already been shown in the context of DM injections [17, 18]. In contrast, the more-constraining scenario plotted in the bottom row clearly shows a very pronounced absorption dip and enhanced fluctuations. As a consequence, even a small amount of exotic injection is expected to alter the signal in a detectable way.
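The coupling argument above can be made quantitative with the standard spin-temperature relations (see, e.g., refs. [4, 5]). Below is a toy sketch, assuming the textbook approximation δTb ≈ 27 x_HI (1 − Tγ/TS) [(1+z)/10]^(1/2) mK and illustrative coupling strengths rather than Zeus21 outputs:

    import numpy as np

    def t_spin(t_gamma, t_k, x_alpha, x_c=0.0):
        """Spin temperature from radiative (x_alpha) and collisional (x_c)
        coupling, assuming the color temperature equals the gas temperature."""
        inv_ts = (1.0 / t_gamma + (x_alpha + x_c) / t_k) / (1.0 + x_alpha + x_c)
        return 1.0 / inv_ts

    def delta_tb(z, t_s, x_hi=1.0):
        """Approximate global 21-cm brightness temperature in mK."""
        t_gamma = 2.725 * (1.0 + z)
        return 27.0 * x_hi * (1.0 - t_gamma / t_s) * np.sqrt((1.0 + z) / 10.0)

    z = 17.0
    t_gamma = 2.725 * (1.0 + z)
    cases = [("weak Ly-a coupling, strong heating", 60.0, 0.5),
             ("strong Ly-a coupling, weak heating", 8.0, 10.0)]
    for label, t_k, x_a in cases:
        ts = t_spin(t_gamma, t_k, x_a)
        print(f"{label}: T_S = {ts:5.1f} K, dTb = {delta_tb(z, ts):6.1f} mK")

With weak coupling and a heated IGM, the spin temperature stays close to (or above) Tγ and the trough is shallow; strong coupling to a cold IGM drives TS far below Tγ and produces the deep absorption feature of the more-constraining scenario.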
The benchmark scenario, depicted in the middle row for comparison, shows an intermediate regime of distinguishability. We quantify the implications of these three scenarios for the resulting PBH bounds in section 4.3.

We remark that by choosing these three fiducial astrophysical scenarios, we take an agnostic approach as far as correlations between the parameters are concerned. This choice avoids relying on a quantitative model that attempts to describe the underlying astrophysical quantities and their evolution amid large uncertainties, and in the absence of complementary observations that fully constrain the properties of the first stars.

4.2 Uncertainty in accretion model

The sensitivity of the 21-cm signal to accreting PBHs is determined by their model-dependent energy injection into the IGM. Of the two accretion models we consider, the BHL accretion model produces a substantially larger impact on the ionization fraction and gas temperature at late times than the PR model, as shown in figure 1. In figure 4, we show how the altered properties of the IGM translate into the 21-cm global signal and power spectrum. In particular, in the top row of figure 4 we show the global signal, where the impact of accreting PBHs under the BHL prescription is significantly larger. We also show, in the middle and bottom panels of figure 4, the impact of each accretion scenario on the power spectrum, including the 1σ experimental sensitivities of the three telescope configurations.

[Figure 4: three rows of two panels, for the BHL (left column) and PR (right column) accretion models: δTb [mK] vs. z (top row); ∆²₂₁ [mK²] vs. z at k = 0.15 Mpc⁻¹ (middle row); and ∆²₂₁ [mK²] vs. k [Mpc⁻¹] at z = 15 (bottom row), with curves for fPBH = 0, [0, 10⁻³], [10⁻³, 10⁻²], and [10⁻², 10⁻¹], and sensitivity curves for SKA AA4, SKA AA*, and HERA.]

Figure 4 caption: Predictions of the 21-cm signal for PBHs for the BHL accretion model (left column) and the PR accretion model (right column), for a PBH mass of MPBH = 10² M⊙ and for a range of fractional PBH abundances fPBH. Top row: impact on the global differential brightness temperature, δTb, as a function of redshift. Middle row: impact of accreting PBHs on the 21-cm power spectrum, ∆²₂₁, at a fixed scale k = 0.15 Mpc⁻¹, as a function of redshift. The 1σ sensitivities, σexp(z, k), are shown for the HERA telescope, and for the SKA AA* and SKA AA4 configurations, computed using 21cmSense [172-174]. Bottom row: same as the middle row, but showing the power spectrum, ∆²₂₁, at a fixed redshift z = 15, as a function of k instead. In each plot we assume the benchmark astrophysics scenario, corresponding to the middle row in figure 3 and given in table 2. The legend provided in the bottom left panel applies to all panels.

[Figure 5: triangle plot of the 1D and 2D posteriors in log₁₀ Nα, log₁₀ LX/Ṁ∗ (erg s⁻¹ M⊙⁻¹ yr), log₁₀ Tvir/K, and log₁₀ fPBH, for the PR and BHL accretion models.]

Figure 5 caption: Comparison of the posteriors for our astrophysical and PBH parameters at 68% and 95% probability for a PBH mass of MPBH = 10² M⊙ and for two accretion models, BHL (blue lines and contours) and PR (red lines and contours). The fiducial astrophysical model is assumed to be the benchmark scenario in table 2.
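Posterior triangle plots in the style of figure 5 can be produced directly from the weighted parameter grid of section 3.5 using getdist [190], which this work already uses; below is a minimal sketch with a toy posterior and hypothetical parameter names standing in for the gridded likelihood:

    import numpy as np
    from getdist import MCSamples, plots

    n = 20
    axes = [np.linspace(3.8, 4.4, n),    # log10 N_alpha (illustrative ranges)
            np.linspace(40.3, 40.7, n),  # log10 L_X
            np.linspace(3.7, 4.3, n),    # log10 T_vir
            np.linspace(-4.0, 0.0, n)]   # log10 f_PBH

    # Flatten the grid into weighted samples; the toy Gaussian posterior below
    # stands in for the gridded likelihood of section 3.5.
    pts = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 4)
    logp = -0.5 * np.sum(((pts - pts.mean(0)) / pts.std(0)) ** 2, axis=1)
    weights = np.exp(logp - logp.max())

    samples = MCSamples(samples=pts, weights=weights,
                        names=["Na", "LX", "Tvir", "fPBH"],
                        labels=[r"\log_{10} N_\alpha", r"\log_{10} L_X",
                                r"\log_{10} T_{\rm vir}", r"\log_{10} f_{\rm PBH}"])
    g = plots.get_subplot_plotter()
    g.triangle_plot([samples], filled=True)  # 68% and 95% contours by default
    g.export("triangle_fig5_style.pdf")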
The middle panel shows the power spectrum as a function of redshift at a fixed scale k = 0.15 Mpc⁻¹, and the bottom panel shows the power spectrum as a function of k at a fixed redshift z = 15. Comparing the experimental-sensitivity bands to the signal in the presence of accreting PBHs, it is apparent from these figures that BHL accretion can be constrained much more strongly by the 21-cm power spectrum than PR accretion.

The difference in constraining power between the two accretion models is further illustrated in figure 5, where we show the joint posteriors for our astrophysical and PBH parameters at 68% and 95% probability at MPBH = 10² M⊙ for BHL and PR. The underlying fiducial model that we use for this comparison is the benchmark astrophysics scenario in table 2. As shown in the triangle plot, PR-accreting PBHs are unconstrained, while the PBH abundance for BHL-accreting PBHs is constrained to fPBH ≲ 10^-2.6. In the next section, we show how this substantial difference in the 21-cm power spectrum between accretion models translates into 21-cm bounds on the abundance of accreting PBHs.

[Figure 6: fPBH (from 10⁻⁶ to 1) vs. MPBH [M⊙] (from 1 to 10³), showing the 21-cm (BHL) and 21-cm (PR) curves for the three astrophysical scenarios, the CMB (BHL) and CMB (PR) constraints, and the combined other constraints.]

Figure 6 caption: Sensitivity of SKA AA4 to fPBH, the fractional contribution of PBHs to the DM abundance, at 95% probability versus the PBH mass, assuming a monochromatic PBH mass function. Results are shown as a function of the PBH mass, for two accretion models: BHL accretion (blue curves and blue shaded region) and PR accretion (red curves and red shaded region). We show results for three different fiducial astrophysical scenarios: benchmark (solid lines, labeled "21-cm (BHL)" and "21-cm (PR)"), more-constraining (dash-dotted lines), and less-constraining (dashed lines); the astrophysics parameters for each of these three scenarios are provided in table 2. We remark that the less-constraining scenario is on the x-axis with fPBH = 1, for both models. For comparison, we also show with a thin gray line (labeled "other constraints") the combined constraints from other probes, derived from gravitational waves from merging events [178, 179], radio and X-ray observations [180], microlensing [181-183], dynamical effects [184, 185], and dwarf galaxy heating [186]. We also show the CMB constraints reported in ref. [82] (dotted gray lines), labeled for both the BHL and PR accretion models. [Footnote 17: The analysis in ref. [82] was focused on the mass range MPBH ≥ 10 M⊙, and the bound was not computed for lower masses.] We make use of ref. [187] for plotting.

4.3 21-cm forecasts and discussion

The forecasted constraints on fPBH at 95% probability are presented in figure 6, which captures the two primary sources of uncertainty investigated in this work: the astrophysical modeling of the cosmic dawn and the PBH accretion prescription. The results demonstrate that these uncertainties have a significant impact on the projected sensitivity of 21-cm experiments. Our study reveals two key findings. First, the choice of the fiducial astrophysical scenario is critical. As shown by the spread between the less-constraining and more-constraining lines in figure 6, the projected constraints vary by orders of magnitude. In
the less-constraining scenario (characterized by low Nα, high LX, and low Tvir), 21-cm observations may offer no constraining power, whereas the more-constraining scenario (characterized by high Nα, low LX, and high Tvir), combined with the BHL accretion model, could yield the most stringent constraints on stellar-mass PBHs. We claim that this result generalizes to any 21-cm constraint on additional heating and ionization of the IGM from exotic sources. Secondly, the accretion model itself affects the forecast sensitivity. The PR model, which includes radiative feedback, yields constraints that are at least two orders of magnitude weaker than those from the BHL model, for each set of astrophysical parameters that we consider in this work. In particular, these bounds are weaker than existing, independent limits from other probes. Conversely, the widely used BHL model suggests that 21-cm experiments could probe a large and unconstrained region of the PBH parameter space, particularly for MPBH ≳ 10 M⊙, assuming a more-constraining astrophysical scenario.

We remark that, despite the substantial late-time decrease in energy injection in the PR model relative to the BHL model (shown in figure 2, where there is a suppression of the accretion rate of ∼10 orders of magnitude at z ≃ 20), a bound can still be obtained. This is due to the significant energy deposition at early times, which alters the evolution equations at late times. We can understand this effect as a modification of the initial conditions for the later-time evolution, effectively raising the ionization and temperature floor. Thus, despite orders-of-magnitude lower injection at late times, the PR model still allows some constraint to be placed in our more-constraining astrophysical scenario. Counterintuitively, this implies that the 21-cm cosmological probe can be sensitive to early-time injections.

Therefore, our study highlights that disambiguating the underlying astrophysical model is crucial for the interpretation of any future 21-cm signal in the context of bounds on exotic energy injection. This strongly motivates independent observational efforts to constrain the properties of the first stars and galaxies. Indeed, a more robust approach than the one taken here, where we have bracketed the uncertainty between the less-constraining and more-constraining scenarios, would be to use future independent observations to derive posteriors for the astrophysical parameters, and then use these posteriors in a joint 21-cm forecast. [Footnote 18: This would be a natural application of the methodology discussed, e.g., in refs. [16, 140, 144, 155, 165], to future data.]

5 Conclusions and Outlook

Observations of the redshifted 21-cm signal with next-generation radio interferometers, such as the SKA [10], are expected to open a new window onto the early Universe. The 21-cm signal is highly sensitive to the thermal and ionization history of the IGM and thus provides a potentially powerful probe of exotic sources of energy injection, such as accreting [42, 44, 45, 47] or evaporating [47, 52, 57, 58, 65-73] PBHs and DM annihilations [17, 47, 49, 50, 53, 55-57, 60-64] or decays [46, 49-58]. However, the potential for this signal to constrain new physics is fundamentally limited by the precision with which the standard astrophysical processes that govern the cosmic dawn can be modeled, introducing substantial uncertainty due to our incomplete knowledge of early galaxy formation and emission.
In this work, we have presented a detailed forecast of the sensitivity of upcoming 21-cm power spectrum measurements to accreting PBHs, quantifying the impact of key systematic uncertainties. By exploring a range of plausible, yet observationally unconstrained, astrophysical scenarios, we demonstrated that the projected constraints on the PBH abundance, fPBH, can vary by several orders of magnitude. In a more-constraining astrophysical scenario, characterized by strong Lyman-α coupling, minimal X-ray heating, and a high mass threshold for star formation in halos, 21-cm observations could yield leading bounds on PBHs with mass MPBH ≳ 10 M⊙. Conversely, for a less-constraining scenario, with weaker Lyman-α coupling, stronger heating, and a lower halo mass threshold, upcoming experiments may offer no improvement over existing limits. Our results highlight that the significant impact of astrophysical uncertainties on constraints on exotic energy injection is a generic feature of any 21-cm forecast, with heating and ionization as the primary channels.

Furthermore, we investigated the impact of the PBH accretion model by comparing the standard Bondi-Hoyle-Lyttleton prescription with the more recent Park-Ricotti model, which incorporates a more comprehensive treatment of radiative feedback. Our analysis shows that the inclusion of feedback, which strongly suppresses the accretion rate at late times, relaxes the forecasted 21-cm constraints by at least two orders of magnitude across both constraining astrophysical scenarios. This result highlights the importance of accurate theoretical modeling of accretion in deriving robust constraints on PBHs from cosmological observables.

Finally, our results emphasize the importance of a multi-probe observational strategy. Synergies with complementary datasets, such as high-redshift UV luminosity functions from JWST [167, 168] and future X-ray data [169], will be essential for mitigating the degeneracy between astrophysics and exotic physics [16-18, 123, 138-144, 163], thereby enabling the 21-cm signal to reach its full potential as a probe of fundamental physics, and narrowing the range of uncertainties identified in this work.

Acknowledgments

We would like to thank Héctor Cruz and Julián Muñoz for helpful clarifications on Zeus21, and James Davies for clarification regarding differences between versions of 21cmFAST. We are grateful to Joshua Foster and Yitian Sun for prompt communication regarding their recent work [47]. We extend our thanks also to Gaétan Facchinetti, Hongwan Liu, Laura Lopez-Honorez, Andrei Mesinger, Diego Redigolo, Justus Schwagereit and Tracy Slatyer for useful discussions. We made use of the SOM Graviton computing infrastructure for this work. DA and SPR acknowledge support from the Generalitat Valenciana grants CIGRIS/2021/054 and CIPROM/2022/36, respectively. DA and SPR are also supported by grant PID2023-151418NB-I00, which is funded by MCIU/AEI/10.13039/501100011033/FEDER, UE, and by the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grants H2020-MSCA-ITN-2019/860881-HIDDeN and HORIZON-MSCA-2021-SE-01/101086085-ASYMMETRY. RE acknowledges support from DOE Grant DE-SC0025309 and Simons Investigator in Physics Awards 623940 and MPS-SIP-00010469. DG acknowledges support from the project Theoretical Astroparticle Physics (TAsP) funded by INFN.
MV acknowledges support from the project "Theoretical Particle Physics and Cosmology (TPPC)" funded by INFN. DA thanks INFN Pisa, MIT CTP - a Leinweber Institute, YITP, and GRAPPA for their hospitality, where part of this work was carried out. This work made use of Zeus21 [100], NumPy [188], SciPy [189], CLASS [127], getdist [190], and their dependencies.

A Appendix

A.1 Comparison to previous work

There is a previous study of future constraints on the PBH abundance using 21-cm cosmology [44]. In that work, BHL accreting PBHs were considered, and self-consistently implemented in 21cmFAST [119, 120]. The results that we show in figure 6 are consistent within approximately an order of magnitude with those presented in ref. [44], and we use this section to outline the differences between our work and ref. [44] that could account for this difference.

Firstly, ref. [44] used a different numerical code, 21cmFAST [119, 120], for computing the 21-cm signal and power spectrum, while we make use of Zeus21. While a comprehensive comparison has been carried out in refs. [100, 116], where differences in observables were shown to be within the 20% level typically found between radiative transfer simulations and approximate semi-numeric models [121], this comparison was carried out for more recent versions of 21cmFAST. In particular, ref. [44] used 21cmFASTv1.2, while this extensive testing was carried out comparing Zeus21 to 21cmFASTv3. Between versions of 21cmFAST, there are improvements that have a significant impact on the resulting power spectrum [Footnote 19: We remark that it has been pointed out that additional approximations made in 21cmFAST may also lead to changes in the power spectra above the 20% level [191, 192].], of which we give a non-exhaustive list below:

• It was pointed out in Zeus21 [100] that older versions of 21cmFAST had underestimated adiabatic fluctuations. This approximation can cause predictions of the temperature fluctuations to differ by approximately one order of magnitude (see Fig. 13 of ref. [100] for a comparison, and ref. [193] for further details). This has since been updated in 21cmFAST, but was not available to the authors of ref. [44].

• Recent versions of 21cmFAST introduced several updates between version 3.0 and 4.0, leading to discrepancies exceeding 10% in some cases. These changes include a correction to the adiabatic treatment (as described above), and an updated approach to the construction of density fields, where the cloud-in-cloud method is now preferred over the previous nearest-neighbor method.

Secondly, in ref. [44], a minimal set of four astrophysics parameters was used: namely, the UV ionization efficiency, the number of X-ray photons per solar mass, the minimum virial temperature of halos hosting galaxies, and the number of photons per baryon between Lyman-α and the Lyman limit. In our study, we use the same parameters, but reduce to a minimal model containing three parameters, keeping the UV ionization efficiency fixed, since that parameter does not have an analogue in Zeus21. This is because the UV ionization parameter defines the efficiency at which a grid point in 21cmFAST is fully ionized, and since Zeus21 does not evolve grids, it is not used. We checked that the three parameters that we have chosen have the largest effect on the bound. At the level of statistics, this implies that these parameters have the strongest correlation with fPBH. We verified this ourselves, and this correlation is shown in the triangle plot in Fig. 13 of ref. [44].

Thirdly, ref. [44] uses a different effective velocity in their computation compared to our use of a Maxwell-Boltzmann distribution. In particular, ref.
[44] chooses to apply an effective velocity that assumes a radiative efficiency ε ∝ Ṁ^a_PBH with a = 1, despite using a radiative efficiency with an arbitrary power-law dependence in their ADAF model. This is discussed in Sec. III.A of ref. [44], and the general power-law dependence is shown in their eq. (3.6). Despite all the caveats discussed there, it was chosen there to follow previous works and adopt the expression for the effective velocity assuming the a = 1 result. In taking a Maxwell-Boltzmann velocity distribution as described in section 2.3.3, we use the general power-law dependence and do not fix a = 1 (a minimal illustrative sketch is given at the end of this subsection). We checked that using a = 1 leads to a larger injection, and thus a more stringent constraint, by a factor of ∼2.

Fourthly, among the three astrophysical scenarios that we consider, the benchmark case is the most similar to, but not an exact reproduction of, the global signal and power spectrum in ref. [44]. Finding a set of parameters in Zeus21 that exactly reproduced that global signal and power spectrum proved unsuccessful, given the reasons stipulated above. Phenomenologically, observe that in figure 3 of ref. [44], the benchmark global signal has an absorption trough at z ∼ 15, with depth δTb ∼ −150 mK. Our fiducial scenario has an absorption trough at z ∼ 16, with depth δTb ∼ −80 mK. Since experiments provide more sensitivity at earlier times, and the depth of this trough translates to the amplitude of the power spectrum (see figure 3 for a visualization of both effects), we would expect the previous study to correspond to more stringent bounds, which is indeed what is reported there. Those limits roughly correspond to our optimistic, more-constraining scenario, with the caveat that we use a different velocity distribution, as detailed in the previous paragraph.

Finally, since Zeus21 breaks down at z ∼ 10 [100] [Footnote 20: Since Zeus21 does not evolve reionization, the 21-cm signal in this setup follows the underlying density field by construction.], we restrict to redshifts z > 10, with the explicit binning given in section 3.4. In contrast, ref. [44] used 9 log-spaced measurements in redshift in the interval z ∼ 8.3-19.5. As described above, experiments have increased sensitivity at later times, so including these redshift bins leads to an increase in constraining power, further explaining those more stringent bounds for the BHL model.
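To make the effective-velocity point concrete, here is a minimal illustrative sketch (not the pipeline of section 2.3.3) of averaging a BHL-like luminosity L ∝ Ṁ^(1+a), with Ṁ ∝ (v² + c_s²)^(−3/2), over a Maxwell-Boltzmann distribution of relative velocities. The velocity values, the sound speed, and the a = 0.5 comparison point are all assumed for illustration:

    import numpy as np
    from scipy.integrate import quad
    from scipy.optimize import brentq

    def mdot(v, cs):
        """BHL-like accretion-rate scaling (arbitrary normalization), v, cs in km/s."""
        return (v**2 + cs**2) ** (-1.5)

    def mb_average(p, v_rms, cs):
        """<mdot^p> over a Maxwell-Boltzmann speed distribution with rms speed v_rms."""
        sigma2 = v_rms**2 / 3.0
        def integrand(v):
            pdf = (4.0 * np.pi * v**2 * np.exp(-v**2 / (2.0 * sigma2))
                   / (2.0 * np.pi * sigma2) ** 1.5)
            return pdf * mdot(v, cs) ** p
        return quad(integrand, 0.0, 10.0 * v_rms)[0]

    cs, v_rms = 6.0, 30.0  # illustrative sound speed and rms relative velocity
    for a in (1.0, 0.5):   # efficiency power: a = 1 vs. a generic ADAF-like value
        p = 1.0 + a        # luminosity ~ mdot^(1+a) for efficiency ~ mdot^a
        avg = mb_average(p, v_rms, cs)
        v_eff = brentq(lambda v: mdot(v, cs) ** p - avg, 1e-3, 30.0 * v_rms)
        print(f"a = {a}: v_eff = {v_eff:.1f} km/s")

Solving mdot(v_eff)^(1+a) = ⟨mdot^(1+a)⟩ for v_eff shows explicitly how the inferred effective velocity, and hence the injected luminosity, depends on the assumed power a.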
A.2 Fixed astrophysical parameters

Modeling the cosmic dawn and the formation of the first stars necessitates the choice of many astrophysical parameters, making various assumptions about the properties of the first stars. In table 3, we provide a list of the Pop II parameters that we assume to be fixed in this work.

Table 3: Parameters for Pop II sources in the fiducial Zeus21 model. Parameters that vary between the astrophysical scenarios are marked with an asterisk; see table 2 for the ranges that we vary over, and ref. [116] for definitions of these parameters.

    α⋆ = 0.5                 Mesc = 10¹⁰ M⊙                            Nα* = 9690
    β⋆ = −0.5                αesc = 0.0                                NLW = 6200
    ε⋆ = 10⁻¹                log₁₀ LX/Ṁ∗* = 40.5 (erg s⁻¹ M⊙⁻¹ yr)     ALW = 2.0
    Mpivot = 3 × 10¹¹ M⊙     E0,X = 500 eV                             βLW = 0.6
    Tvir* = 10⁴ K            Emax,X = 2000 eV                          Avcb = 1.0
    fesc,0 = 10⁻¹            αX = −1.0                                 βvcb = 1.8

Furthermore, we list other flags in Zeus21 that we use, to allow for reproducibility of our results. In particular, we use the flags USE_RELATIVE_VELOCITIES = True, USE_LW_FEEDBACK = True, kmax_CLASS = 500, and zmax_CLASS = 50. We also use the Planck 2018 parameters [194]: Ωch² = 0.11933, Ωbh² = 0.02242, h = 0.6766, As = exp(3.047) × 10⁻¹⁰, ns = 0.9665 and zreio = 7.82.

References

[1] M. S. Turner, The road to precision cosmology, Annual Review of Nuclear and Particle Science 72 (Sept., 2022) 1-35.
[2] T. R. Slatyer, What does cosmology teach us about non-gravitational properties of dark matter?, Nuclear Physics B 1003 (June, 2024) 116468.
[3] M. Cirelli, A. Strumia and J. Zupan, Dark Matter, 2406.01705.
[4] S. Furlanetto, S. P. Oh and F. Briggs, Cosmology at low frequencies: The 21 cm transition and the high-redshift Universe, Phys. Rept. 433 (2006) 181-301, [astro-ph/0608032].
[5] J. R. Pritchard and A. Loeb, 21-cm cosmology in the 21st century, Rept. Prog. Phys. 75 (2012) 086901, [1109.6012].
[6] S. R. Furlanetto, The 21-cm line as a probe of reionization, in Understanding the Epoch of Cosmic Reionization: Challenges and Progress (A. Mesinger, ed.), vol. 423, pp. 247-280. Springer International Publishing, 2016. 1511.01131.
[7] P. Villanueva-Domingo, Shedding light on dark matter through 21 cm cosmology and reionization constraints. PhD thesis, U. Valencia, 2021. 2112.08201.
[8] R. Mondal and R. Barkana, Prospects for precision cosmology with the 21 cm signal from the dark ages, Nature Astron. 7 (2023) 1025-1030, [2305.08593].
[9] D. R. DeBoer et al., Hydrogen Epoch of Reionization Array (HERA), Publ. Astron. Soc. Pac. 129 (2017) 045001, [1606.07473].
[10] G. Mellema et al., Reionization and the Cosmic Dawn with the Square Kilometre Array, Exper. Astron. 36 (2013) 235-318, [1210.0197].
[11] S. D. Bale, N. Bassett, J. O. Burns, J. Dorigo Jones, K. Goetz, C. Hellum-Bye et al., LuSEE 'Night': The Lunar Surface Electromagnetics Experiment, arXiv e-prints (Jan., 2023), [2301.10345].
[12] J. Burns et al., FARSIDE: A low radio frequency interferometric array on the lunar farside, in Bulletin of the American Astronomical Society, vol. 51, p. 178, 2019. 1907.05407. DOI.
[13] S. Jester and H. Falcke, Science with a lunar low-frequency array: from the dark ages of the Universe to nearby exoplanets, New Astron. Rev. 53 (2009) 1-26, [0902.0493].
[14] A. Liu and J. R. Shaw, Data analysis for precision 21 cm cosmology, Publ. Astron. Soc. Pac. 132 (2020) 062001, [1907.08211].
[15] J. Silk, The limits of cosmology, Gen. Rel. Grav. 57 (2025) 127, [2509.08066].
[16] O. Z. Katz, N. Outmezguine, D. Redigolo and T. Volansky, Probing new physics at cosmic dawn with 21-cm cosmology, Nucl. Phys. B 1003 (2024) 116502, [2401.10978].
[17] L. Lopez-Honorez, O. Mena, Á. Moliné, S. Palomares-Ruiz and A. C. Vincent, The 21 cm signal and the interplay between dark matter annihilations and astrophysical processes, JCAP 08 (2016) 004, [1603.06795].
[18] Q. Decant, A. Dimitriou, L. L. Honorez and B. Zaldivar, Simulation-based inference on warm dark matter from HERA forecasts, JCAP 07 (2025) 004, [2412.10310].
[19] B. J. Carr, The primordial black hole mass spectrum, Astrophys. J. 201 (1975) 1-19.
[20] P. Ivanov, P. Naselsky and I. Novikov, Inflation and primordial black holes as dark matter, Phys. Rev. D50 (1994) 7173-7178.
[21] J. C. Niemeyer and K. Jedamzik, Near-critical gravitational collapse and the initial mass function of primordial black holes, Phys. Rev. Lett. 80 (1998) 5481-5484, [astro-ph/9709072].
[22] J. Yokoyama, Formation of MACHO primordial black holes in inflationary cosmology, Astron. Astrophys. 318 (1997) 673, [astro-ph/9509027].
[23] J. Yokoyama, Formation of primordial black holes in the inflationary universe, Phys. Rept. 307 (1998) 133-139.
[24] J. C. Niemeyer and K. Jedamzik, Dynamics of primordial black hole formation, Phys. Rev. D 59 (1999) 124013, [astro-ph/9901292].
[25] M. Shibata and M. Sasaki, Black hole formation in the Friedmann universe: Formulation and computation in numerical relativity, Phys. Rev. D60 (1999) 084002, [gr-qc/9905064].
[26] I. Musco, J. C. Miller and L. Rezzolla, Computations of primordial black hole formation, Class. Quant. Grav. 22 (2005) 1405-1424, [gr-qc/0412063].
[27] S. Young, C. T. Byrnes and M. Sasaki, Calculating the mass fraction of primordial black holes, JCAP 1407 (2014) 045, [1405.7023].
[28] A. Escrivà, C. Germani and R. K. Sheth, Analytical thresholds for black hole formation in general cosmological backgrounds, JCAP 01 (2021) 030, [2007.05564].
[29] I. Musco, V. De Luca, G. Franciolini and A. Riotto, Threshold for primordial black holes. II. A simple analytic prescription, Phys. Rev. D 103 (2021) 063538, [2011.03014].
[30] LIGO Scientific, Virgo collaboration, B. P. Abbott et al., Observation of gravitational waves from a binary black hole merger, Phys. Rev. Lett. 116 (2016) 061102, [1602.03837].
[31] LIGO Scientific, Virgo collaboration, B. P. Abbott et al., Binary black hole mergers in the first advanced LIGO observing run, Phys. Rev. X 6 (2016) 041015, [1606.04856].
[32] LIGO Scientific, Virgo collaboration, B. P. Abbott et al., GW151226: Observation of gravitational waves from a 22-solar-mass binary black hole coalescence, Phys. Rev. Lett. 116 (2016) 241103, [1606.04855].
[33] LIGO Scientific, VIRGO collaboration, B. P. Abbott et al., GW170104: Observation of a 50-solar-mass binary black hole coalescence at redshift 0.2, Phys. Rev. Lett. 118 (2017) 221101, [1706.01812].
[34] LIGO Scientific, Virgo collaboration, B. P. Abbott et al., GW170814: A three-detector observation of gravitational waves from a binary black hole coalescence, Phys. Rev. Lett. 119 (2017) 141101, [1709.09660].
[35] S. Bird et al., Did LIGO detect dark matter?, Phys. Rev. Lett. 116 (2016) 201301, [1603.00464].
[36] D. Inman and Y. Ali-Haïmoud, Early structure formation in primordial black hole cosmologies, Phys. Rev. D 100 (2019) 083528, [1907.08129].
[37] M. S. Delos, A. Rantala, S. Young and F. Schmidt, Structure formation with primordial black holes: collisional dynamics, binaries, and gravitational waves, JCAP 12 (2024) 005, [2410.01876].
[38] A. Bogdan et al., Evidence for heavy-seed origin of early supermassive black holes from a z ≈10 X-ray quasar, Nature Astron. 8 (2024) 126-133, [2305.15458].
[39] M. Sasaki, T. Suyama, T. Tanaka and S. Yokoyama, Primordial black holes-perspectives in gravitational wave astronomy, Class. Quant. Grav. 35 (2018) 063001, [1801.05235].
[40] B. J. Carr, Pregalactic black hole accretion and the thermal history of the Universe, Mon. Not. Roy. Astron. Soc. 194 (1981) 639-668.
[41] H. Tashiro and N. Sugiyama, The effect of primordial black holes on 21 cm fluctuations, Mon. Not. Roy. Astron. Soc. 435 (2013) 3001, [1207.6405].
[42] A. Hektor, G. Hütsi, L. Marzola, M. Raidal, V. Vaskonen and H. Veermäe, Constraining primordial black holes with the EDGES 21-cm absorption signal, Phys. Rev. D 98 (2018) 023503, [1803.09697].
[43] J. Luis Bernal, N. Bellomo, A. Raccanelli and L. Verde, Cosmological implications of primordial black holes, JCAP 10 (2017) 052, [1709.07465].
[44] O. Mena, S. Palomares-Ruiz, P. Villanueva-Domingo and S. J. Witte, Constraining the primordial black hole abundance with 21-cm cosmology, Phys. Rev. D 100 (2019) 043540, [1906.07735].
[45] Y. Yang, Constraints on accreting primordial black holes with the global 21-cm signal, Phys. Rev. D 104 (2021) 063528, [2108.11130].
[46] Y. Sun, J. W. Foster, H. Liu, J. B. Muñoz and T. R. Slatyer, Inhomogeneous energy injection in the 21-cm power spectrum: Sensitivity to dark matter decay, Phys. Rev. D 111 (2025) 043015, [2312.11608].
[47] Y. Sun, J. W. Foster and J. B. Muñoz, Constraining inhomogeneous energy injection from annihilating dark matter and primordial black holes with 21-cm cosmology, 2509.22772.
[48] Y. A. Shchekinov and E. O. Vasiliev, Particle decay in the early Universe: predictions for 21 cm, Mon. Not. Roy. Astron. Soc. 379 (2007) 1003-1010, [astro-ph/0604231].
[49] S. R. Furlanetto, S. P. Oh and E. Pierpaoli, The effects of dark matter decay and annihilation on the high-redshift 21 cm background, Phys. Rev. D 74 (2006) 103502, [astro-ph/0608385].
[50] M. Valdes, A. Ferrara, M. Mapelli and E. Ripamonti, Constraining DM through 21 cm observations, Mon. Not. Roy. Astron. Soc. 377 (2007) 245-252, [astro-ph/0701301].
[51] V. Poulin, J. Lesgourgues and P. D. Serpico, Cosmological constraints on exotic injection of electromagnetic energy, JCAP 03 (2017) 043, [1610.10051].
[52] S. Clark, B. Dutta, Y. Gao, Y.-Z. Ma and L. E. Strigari, 21 cm limits on decaying dark matter and primordial black holes, Phys. Rev. D 98 (2018) 043006, [1803.09390].
[53] H. Liu and T. R. Slatyer, Implications of a 21-cm signal for dark matter annihilation and decay, Phys. Rev. D 98 (2018) 023501, [1803.09739].
[54] A. Mitridate and A. Podo, Bounds on dark matter decay from 21 cm line, JCAP 05 (2018) 069, [1803.11169].
[55] W. Qin, J. B. Munoz, H. Liu and T. R. Slatyer, Birth of the first stars amidst decaying and annihilating dark matter, Phys. Rev. D 109 (2024) 103026, [2308.12992].
[56] B. Novosyadlyj, Y. Kulinich and D. Koval, Global signal in the redshifted hydrogen 21-cm line from the dark ages and cosmic dawn: Dependence on the nature of dark matter and modeling of first light, Phys. Rev. D 111 (2025) 083514, [2410.07380].
[57] M.-L. Zhao, S. Wang and X. Zhang, Prospects for probing dark matter particles and primordial black holes with the Hongmeng mission using the 21 cm global spectrum at cosmic dawn, JCAP 07 (2025) 039, [2412.19257].
[58] M.-L. Zhao, Y. Shao, S. Wang and X. Zhang, Prospects for probing dark matter particles and primordial black holes with the Square Kilometre Array using the 21 cm power spectrum at cosmic dawn, 2507.02651.
[59] A. Natarajan and D. J. Schwarz, Dark matter annihilation and its effect on CMB and Hydrogen 21 cm observations, Phys. Rev. D 80 (2009) 043529, [0903.4485].
[60] C. Evoli, A. Mesinger and A. Ferrara, Unveiling the nature of dark matter with high redshift 21 cm line experiments, JCAP 11 (2014) 024, [1408.1109].
[61] G. D'Amico, P. Panci and A. Strumia, Bounds on dark matter annihilations from 21 cm data, Phys. Rev. Lett. 121 (2018) 011103, [1803.03629].
[62] G. Facchinetti, L. Lopez-Honorez, Y. Qin and A. Mesinger, 21cm signal sensitivity to dark matter decay, JCAP 01 (2024) 005, [2308.16656].
[63] H. Bae, A. L. Erickcek, M. S. Delos and J. B. Muñoz, 21-cm constraints on dark matter annihilation after an early matter-dominated era, Phys. Rev. D 112 (2025) 083013, [2502.08719].
[64] P. K. Natwariya, K. Kadota and A. J. Nishizawa, Sensitivity toward dark matter annihilation imprints on 21-cm signal with SKA-Low: a convolutional neural network approach, 2508.08251.
[65] Y. Yang, Constraints on primordial black holes and curvature perturbations from the global 21-cm signal, Phys. Rev. D 102 (2020) 083538, [2009.11547].
[66] A. Halder and M. Pandey, Probing the effects of primordial black holes on 21-cm EDGES signal along with interacting dark energy and dark matter-baryon scattering, Mon. Not. Roy. Astron. Soc. 508 (2021) 3446-3454, [2101.05228].
[67] A. Halder and S. Banerjee, Bounds on abundance of primordial black hole and dark matter from EDGES 21-cm signal, Phys. Rev. D 103 (2021) 063044, [2102.00959].
[68] S. Mittal, A. Ray, G. Kulkarni and B. Dasgupta, Constraining primordial black holes as dark matter using the global 21-cm signal with X-ray heating and excess radio background, JCAP 03 (2022) 030, [2107.02190].
[69] P. K. Natwariya, A. C. Nayak and T. Srivastava, Constraining spinning primordial black holes with global 21-cm signal, Mon. Not. Roy. Astron. Soc. 510 (2021) 4236, [2107.12358].
[70] J. Cang, Y. Gao and Y.-Z. Ma, 21-cm constraints on spinning primordial black holes, JCAP 03 (2022) 012, [2108.13256].
[71] A. K. Saha and R. Laha, Sensitivities on nonspinning and spinning primordial black hole dark matter with global 21-cm troughs, Phys. Rev. D 105 (2022) 103026, [2112.10794].
[72] U. Mukhopadhyay, D. Majumdar and A. Halder, Constraining PBH mass distributions from 21cm brightness temperature results and an analytical mapping between probability distribution of 21cm signal and PBH masses, JCAP 10 (2022) 099, [2203.13008].
[73] Y. Yang, Impact of radiation from primordial black holes on the 21-cm angular-power spectrum in the dark ages, Phys. Rev. D 106 (2022) 123508, [2209.00851].
[74] D. Agius and T. Slatyer. In preparation, 2025.
[75] M. Ricotti, J. P. Ostriker and K. J. Mack, Effect of primordial black holes on the cosmic microwave background and cosmological parameter estimates, Astrophys. J. 680 (2008) 829, [0709.0524].
[76] Y. Ali-Haïmoud and M. Kamionkowski, Cosmic microwave background limits on accreting primordial black holes, Phys. Rev. D 95 (2017) 043534, [1612.05644].
[77] B. Horowitz, Revisiting primordial black holes constraints from ionization history, 1612.07264.
[78] V. Poulin, P. D. Serpico, F. Calore, S. Clesse and K. Kohri, CMB bounds on disk-accreting massive primordial black holes, Phys. Rev. D 96 (2017) 083524, [1707.04206].
[79] P. D. Serpico, V. Poulin, D. Inman and K. Kohri, Cosmic microwave background bounds on primordial black holes including dark matter halo accretion, Phys. Rev. Res. 2 (2020) 023204, [2002.10771].
[80] L. Piga, M. Lucca, N. Bellomo, V. Bosch-Ramon, S. Matarrese, A. Raccanelli et al., The effect of outflows on CMB bounds from Primordial Black Hole accretion, JCAP 12 (2022) 016, [2210.14934].
[81] G. Facchinetti, M. Lucca and S. Clesse, Relaxing CMB bounds on primordial black holes: The role of ionization fronts, Phys. Rev. D 107 (2023) 043537, [2212.07969].
[82] D. Agius, R. Essig, D. Gaggero, F. Scarcella, G. Suczewski and M. Valli, Feedback in the dark: a critical examination of CMB bounds on primordial black holes, JCAP 07 (2024) 003, [2403.18895].
[83] K. J. Mack, J. P. Ostriker and M. Ricotti, Growth of structure seeded by primordial black holes, Astrophys. J. 665 (2007) 1277-1287, [astro-ph/0608642].
[84] V. De Luca and N. Bellomo, The accretion, emission, mass and spin evolution of primordial black holes. Springer Series in Astrophysics and Cosmology. Springer, 2025. 2312.14097. 10.1007/978-981-97-8887-3.
[85] F. Hoyle and R. A. Lyttleton, The effect of interstellar matter on climatic variation, Math. Proc. Camb. Philos. Soc. 35 (1939) 405-415.
[86] F. Hoyle and R. A. Lyttleton, On the accretion of interstellar matter by stars, Math. Proc. Camb. Philos. Soc. 36 (1940) 325-330.
[87] F. Hoyle and R. A. Lyttleton, On the physical aspects of accretion by stars, Math. Proc. Camb. Philos. Soc. 36 (1940) 424-437.
[88] F. Hoyle and R. A. Lyttleton, On the accretion theory of stellar evolution, Mon. Not. Roy. Astron. Soc. 101 (1941) 227.
[89] H. Bondi and F. Hoyle, On the mechanism of accretion by stars, Mon. Not. Roy. Astron. Soc. 104 (1944) 273-282.
[90] H. Bondi, On spherically symmetrical accretion, Mon. Not. Roy. Astron. Soc. 112 (1952) 195.
[91] K. Park and M. Ricotti, Accretion onto intermediate mass black holes regulated by radiative feedback I. Parametric study for spherically symmetric accretion, Astrophys. J. 739 (2011) 2, [1006.1302].
[92] K. Park and M. Ricotti, Accretion onto black holes from large scales regulated by radiative feedback. II. Growth rate and duty cycle, Astrophys. J. 747 (2012) 9, [1110.4634].
[93] K. Park and M. Ricotti, Accretion onto black holes from large scales regulated by radiative feedback. III. Enhanced luminosity of intermediate mass black holes moving at supersonic speeds, Astrophys. J. 767 (2013) 163, [1211.0542].
[94] S. A. Wouthuysen, On the excitation mechanism of the 21-cm (radio-frequency) interstellar hydrogen emission line, Astron. J. 57 (1952) 31-32.
[95] G. B. Field, Excitation of the hydrogen 21-cm line, Proceedings of the IRE 46 (1958) 240-250.
[96] X.-L. Chen and J. Miralda-Escude, The spin-kinetic temperature coupling and the heating rate due to Lyman-alpha scattering before reionization: Predictions for 21cm emission and absorption, Astrophys. J. 602 (2004) 1-11, [astro-ph/0303395].
[97] C. M. Hirata, Wouthuysen-Field coupling strength and application to high-redshift 21 cm radiation, Mon. Not. Roy. Astron. Soc. 367 (2006) 259-274, [astro-ph/0507102].
[98] P. Madau, A. Meiksin and M. J. Rees, 21-cm tomography of the intergalactic medium at high redshift, Astrophys. J. 475 (1997) 429, [astro-ph/9608010].
[99] R. Barkana, The rise of the first stars: Supersonic streaming, radiative feedback, and 21-cm cosmology, Phys. Rept. 645 (2016) 1-59, [1605.04357].
[100] J. B. Muñoz, An effective model for the cosmic-dawn 21-cm signal, Mon. Not. Roy. Astron. Soc. 523 (2023) 2587-2607, [2302.08506].
[101] X.-L. Chen and J. Miralda-Escude, The 21cm signature of the first stars, Astrophys. J. 684 (2008) 18-33, [astro-ph/0605439].
[102] L. Lopez-Honorez, O. Mena, S. Palomares-Ruiz, P. Villanueva-Domingo and S. J. Witte, Variations in fundamental constants at the cosmic dawn, JCAP 06 (2020) 026, [2004.00013].
[103] P. Stöcker, M. Krämer, J. Lesgourgues and V. Poulin, Exotic energy injection with ExoCLASS: Application to the Higgs portal model and evaporating black holes, JCAP 03 (2018) 018, [1801.01871].
[104] V. Poulin, P. D. Serpico and J. Lesgourgues, Dark matter annihilations in halos and high-redshift sources of reionization of the universe, JCAP 12 (2015) 041, [1508.01370].
[105] M. Ricotti, Bondi accretion in the early universe, Astrophys. J. 662 (2007) 53-61, [0706.0864].
[106] R. Perna, R. Narayan, G. Rybicki, L. Stella and A. Treves, Bondi accretion and the problem of the missing isolated neutron stars, Astrophys. J. 594 (2003) 936-942, [astro-ph/0305421].
[107] R. Fender, T. Maccarone and I. Heywood, The closest black holes, Mon. Not. Roy. Astron. Soc. 430 (2013) 1538, [1301.1341].
[108] S. Pellegrini, Nuclear accretion in galaxies of the local Universe: Clues from Chandra observations, Astrophys. J. 624 (2005) 155-161, [astro-ph/0502035].
[109] Q. D. Wang et al., Dissecting X-ray-emitting gas around the center of our galaxy, Science 341 (2013) 981, [1307.5845].
[110] P. Jangra, D. Gaggero, B. J. Kavanagh and J. M. Diego, The cosmic history of primordial black hole accretion and its uncertainties, JCAP 08 (2025) 006, [2412.11921].
[111] D. Tseliakhovich and C. Hirata, Relative velocity of dark matter and baryonic fluids and the formation of the first structures, Phys. Rev. D 82 (2010) 083520, [1005.2416].
[112] C. Dvorkin, K. Blum and M. Kamionkowski, Constraining dark matter-baryon scattering with linear cosmology, Phys. Rev. D 89 (2014) 023519, [1311.2937].
[113] N. Afshordi, P. McDonald and D. N. Spergel, Primordial black holes as dark matter: the power spectrum and evaporation of early structures, Astrophys. J. Lett. 594 (2003) L71-L74, [astro-ph/0302035].
[114] P. S. Cole and J. Silk, Small-scale primordial fluctuations in the 21 cm Dark Ages signal, Mon. Not. Roy. Astron. Soc. 501 (2021) 2627-2634, [1912.02171].
[115] J. M. Koulen, S. Profumo and N. Smyth, Primordial black holes and the first stars, Phys. Rev. D 112 (2025) 043044, [2506.06171].
[116] H. A. G. Cruz, J. B. Munoz, N. Sabti and M. Kamionkowski, Effective model for the 21-cm signal with population III stars, Phys. Rev. D 111 (2025) 083503, [2407.18294].
[117] H. Trac and R. Cen, Radiative transfer simulations of cosmic reionization. 1. Methodology and initial results, Astrophys. J. 671 (2007) 1, [astro-ph/0612406].
[118] N. Y. Gnedin, Cosmic reionization on computers I. Design and calibration of simulations, Astrophys. J. 793 (2014) 29, [1403.4245].
[119] A. Mesinger, S. Furlanetto and R. Cen, 21cmFAST: a fast, semi-numerical simulation of the high-redshift 21-cm signal, Mon. Not. Roy. Astron. Soc. 411 (2011) 955, [1003.3878].
[120] S. G. Murray, B. Greig, A. Mesinger, J. B. Muñoz, Y. Qin, J. Park et al., 21cmFAST v3: a Python-integrated C code for generating 3D realizations of the cosmic 21cm signal, J. Open Source Softw. 5 (2020) 2582, [2010.15121].
[121] O. Zahn, A. Mesinger, M. McQuinn, H. Trac, R. Cen and L. E. Hernquist, Comparison of reionization models: Radiative transfer simulations and approximate, semi-numeric models, Mon. Not. Roy. Astron. Soc. 414 (2011) 727, [1003.3455].
[122] R. Barkana and A. Loeb, Detecting the earliest galaxies through two new sources of 21cm fluctuations, Astrophys. J. 626 (2005) 1-11, [astro-ph/0410129].
[123] J. B. Muñoz, Y. Qin, A. Mesinger, S. G. Murray, B. Greig and C. Mason, The impact of the first galaxies on cosmic dawn and reionization, Mon. Not. Roy. Astron. Soc. 511 (2022) 3657-3681, [2110.13919].
[124] R. K. Sheth, H. J. Mo and G. Tormen, Ellipsoidal collapse and an improved model for the number and spatial distribution of dark matter haloes, Mon. Not. Roy. Astron. Soc. 323 (2001) 1, [astro-ph/9907024].
[125] R. Barkana and A. Loeb, In the beginning: the first sources of light and the reionization of the Universe, Phys. Rept. 349 (2001) 125-238, [astro-ph/0010468].
[126] P. Madau, H. C. Ferguson, M. E. Dickinson, M. Giavalisco, C. C. Steidel and A. Fruchter, High redshift galaxies in the Hubble deep field: color selection and star formation history to z = 4, Mon. Not. Roy. Astron. Soc. 283 (1996) 1388-1404, [astro-ph/9607172].
[127] D. Blas, J. Lesgourgues and T. Tram, The Cosmic Linear Anisotropy Solving System (CLASS) II: approximation schemes, JCAP 07 (2011) 034, [1104.2933].
[128] B. Greig and A. Mesinger, Simultaneously constraining the astrophysics of reionization and the epoch of heating with 21CMMC, Mon. Not. Roy. Astron. Soc. 472 (2017) 2651-2669, [1705.03471].
[129] Y. Ali-Haïmoud and C. M. Hirata, HyRec: A fast and highly accurate primordial hydrogen and helium recombination code, Physical Review D 83 (Feb., 2011).
[130] S. L. Shapiro and A. P. Lightman, Black holes in X-ray binaries: marginal existence and rotation reversals of accretion disks, Astrophys. J. 204 (1976) 555-560.
[131] F.-G. Xie and F. Yuan, The radiative efficiency of hot accretion flows, Mon. Not. Roy. Astron. Soc. 427 (2012) 1580, [1207.3113].
[132] J. M. Shull and M. E. van Steenberg, X-ray secondary heating and ionization in quasar emission-line clouds, Astrophys. J. 298 (1985) 268-274.
[133] X.-L. Chen and M. Kamionkowski, Particle decays during the cosmic dark ages, Phys. Rev. D 70 (2004) 043502, [astro-ph/0310473].
[134] T. R. Slatyer, N. Padmanabhan and D. P. Finkbeiner, CMB constraints on WIMP annihilation: energy absorption during the recombination epoch, Phys. Rev. D 80 (2009) 043526, [0906.1197].
[135] T. R. Slatyer, Energy injection and absorption in the cosmic dark ages, Phys. Rev. D 87 (2013) 123513, [1211.0283].
[136] T. R. Slatyer, Indirect dark matter signatures in the cosmic dark ages II. Ionization, heating and photon production from arbitrary energy injections, Phys. Rev. D 93 (2016) 023521, [1506.03812].
[137] H. Liu, G. W. Ridgway and T. R. Slatyer, DarkHistory: a code package for calculating modified cosmic ionization and thermal histories with dark matter and other exotic energy injections, Phys. Rev. D 101 (2020) 023530, [1904.09296].
[138] A. Fialkov, A. Cohen, R. Barkana and J. Silk, Constraining the redshifted 21-cm signal with the unresolved soft X-ray background, Mon. Not. Roy. Astron. Soc. 464 (2017) 3498-3508, [1602.07322].
[139] A. Cohen, A. Fialkov, R. Barkana and M. Lotem, Charting the parameter space of the global 21-cm signal, Mon. Not. Roy. Astron. Soc. 472 (2017) 1915-1931, [1609.02312].
[140] J. Park, A. Mesinger, B. Greig and N. Gillet, Inferring the astrophysics of reionization and cosmic dawn from galaxy luminosity functions and the 21-cm signal, Mon. Not. Roy. Astron. Soc. 484 (2019) 933-949, [1809.08995].
[141] S. Pochinda et al., Constraining the properties of Population III galaxies with multiwavelength observations, Mon. Not. Roy. Astron. Soc. 531 (2024) 1113-1132, [2312.08095].
[142] P. H. Sims et al., Rapid and late cosmic reionization driven by massive galaxies: a joint analysis of constraints from 21-cm, Lyman line & CMB data sets, 2504.09725.
[143] J. Dhandha et al., Narrowing the discovery space of the cosmological 21-cm signal using multi-wavelength constraints, 2508.13761.
[144] O. Z. Katz, D. Redigolo and T. Volansky, Closing in on Pop-III Stars: Constraints and Predictions Across the Spectrum, 2502.03525.
[145] R. J. Bouwens et al., UV luminosity functions at redshifts z ∼4 to z ∼10: 10000 galaxies from HST legacy fields, Astrophys. J. 803 (2015) 34, [1403.4295].
[146] R. J. Bouwens et al., New determinations of the UV luminosity functions from z ∼9 to z ∼2 show a remarkable consistency with halo growth and a constant star formation efficiency, Astron. J. 162 (2021) 47, [2102.07775].
[147] N. Cappelluti et al., The nature of the unresolved extragalactic soft CXB, Mon. Not. Roy. Astron. Soc. 427 (2012) 651, [1208.4105].
[148] B. D. Lehmer et al., The 4 Ms Chandra deep field-south number counts apportioned by source class: pervasive active galactic nuclei and the ascent of normal galaxies, Astrophys. J. 752 (2012) 46, [1204.1977].
[149] HERA collaboration, Z. Abdurashidova et al., HERA phase I limits on the cosmic 21 cm signal: constraints on astrophysics and cosmology during the epoch of reionization, Astrophys. J. 924 (2022) 51, [2108.07282].
[150] HERA collaboration, Z. Abdurashidova et al., Improved constraints on the 21 cm EoR power spectrum and the X-ray heating of the IGM with HERA phase I observations, Astrophys. J. 945 (2023) 124, [2210.04912].
[151] R. S. Klessen and S. C. O. Glover, The first stars: formation, properties, and impact, Ann. Rev. Astron. Astrophys. 61 (2023) 65-130, [2303.12500].
[152] C. Leitherer, D. Schaerer, J. D. Goldader, R. M. Gonzalez Delgado, C. Robert, D. F. Kune et al., Starburst99: synthesis models for galaxies with active star formation, Astrophys. J. Suppl. 123 (1999) 3-40, [astro-ph/9902334].
[153] C. Leitherer et al., A library of theoretical ultraviolet spectra of massive, hot stars for evolutionary synthesis, Astrophys. J. Suppl. 189 (2010) 309-335, [1006.5624].
[154] C. Leitherer, S. Ekström, G. Meynet, D. Schaerer, K. B. Agienko and E. M. Levesque, The effects of stellar rotation. II. a comprehensive set of Starburst99 models, Astrophys. J. Suppl. 212 (2014) 14, [1403.5444].
[155] B. Greig and A. Mesinger, 21CMMC: an MCMC analysis tool enabling astrophysical parameter studies of the cosmic 21 cm signal, Mon. Not. Roy. Astron. Soc. 449 (2015) 4246-4263, [1501.06576].
[156] B. Greig and A. Mesinger, 21CMMC with a 3D light-cone: the impact of the co-evolution approximation on the astrophysics of reionization and cosmic dawn, Mon. Not. Roy. Astron. Soc. 477 (2018) 3217-3229, [1801.01592].
[157] S. Witte, P. Villanueva-Domingo, S. Gariazzo, O. Mena and S. Palomares-Ruiz, EDGES result versus CMB and low-redshift constraints on ionization histories, Phys. Rev. D 97 (2018) 103533, [1804.03888].
[158] B. D. Lehmer et al., The evolution of normal galaxy X-ray emission through cosmic history: Constraints from the 6 Ms Chandra deep field-south, Astrophys. J. 825 (2016) 7, [1604.06461].
[159] T. Fragos et al., X-ray binary evolution across cosmic time, Astrophys. J. 764 (2013) 41, [1206.2395].
[160] S. P. Oh and Z. Haiman, Second-generation objects in the universe: radiative cooling and collapse of halos with virial temperatures above 10^4 kelvin, Astrophys. J. 569 (2002) 558, [astro-ph/0108071].
[161] M. Tegmark, J. Silk, M. J. Rees, A. Blanchard, T. Abel and F. Palla, How small were the first cosmological objects?, Astrophys. J. 474 (1997) 1-12, [astro-ph/9603007].
[162] A. Mesinger, A. Ferrara and D. S. Spiegel, Signatures of X-rays in the early Universe, Mon. Not. Roy. Astron. Soc. 431 (2013) 621, [1210.7319].
[163] Y. Qin, A. Mesinger, J. Park, B. Greig and J. B. Muñoz, A tale of two sites - I. Inferring the properties of minihalo-hosted galaxies from current observations, Mon. Not. Roy. Astron. Soc. 495 (2020) 123-140, [2003.04442].
[164] I. McGreer, A. Mesinger and V. D'Odorico, Model-independent evidence in favour of an end to reionization by z ≈6, Mon. Not. Roy. Astron. Soc. 447 (2015) 499-505, [1411.5375].
[165] B. Greig and A. Mesinger, The global history of reionization, Mon. Not. Roy. Astron. Soc. 465 (2017) 4838-4852, [1605.05374].
[166] Y. Qin, A. Mesinger, S. E. I. Bosman and M. Viel, Reionization and galaxy inference from the high-redshift Ly α forest, Mon. Not. Roy. Astron. Soc. 506 (2021) 2390-2407, [2101.09033].
[167] J. P. Gardner et al., The James Webb Space Telescope mission, Publ. Astron. Soc. Pac. 135 (2023) 068001.
[168] S. L. Finkelstein et al., CEERS Key Paper. I. An Early Look into the First 500 Myr of Galaxy Formation with JWST, Astrophys. J. Lett. 946 (2023) L13, [2211.05792].
[169] R. Cassano et al., SKA-Athena Synergy White Paper, 1807.09080.
[170] A. Mesinger, B. Greig and E. Sobacchi, The evolution of 21 cm structure (EOS): public, large-scale simulations of Cosmic Dawn and Reionization, Mon. Not. Roy. Astron. Soc. 459 (2016) 2342-2353, [1602.07711].
[171] C. A. Mason, J. B. Muñoz, B. Greig, A. Mesinger and J. Park, 21cmfish: Fisher-matrix framework for fast parameter forecasts from the cosmic 21-cm signal, Mon. Not. Roy. Astron. Soc. 524 (2023) 4711-4728, [2212.09797].
[172] J. C. Pober, A. R. Parsons, D. R. DeBoer, P. McDonald, M. McQuinn, J. E. Aguirre et al., The baryon acoustic oscillation broadband and broad-beam array: design overview and sensitivity forecasts, Astron. J. 145 (2013) 65, [1210.2413].
[173] S. G. Murray, J. Pober and M. Kolopanis, 21cmSense v2: a modular, open-source 21cm sensitivity calculator, J. Open Source Softw. 9 (2024) 6501, [2406.02415].
[174] J. C. Pober et al., What next-generation 21 cm power spectrum measurements can teach us about the epoch of reionization, Astrophys. J. 782 (2014) 66, [1310.7031].
[175] T. J. Mozdzen, J. D. Bowman, R. A. Monsalve and A. E. E. Rogers, Improved measurement of the spectral index of the diffuse radio background between 90 and 190 MHz, Mon. Not. Roy. Astron. Soc. 464 (2017) 4995-5002, [1609.08705].
[176] L. V. E. Koopmans et al., The Cosmic Dawn and Epoch of Reionization with the Square Kilometre Array, PoS AASKA14 (2015) 001, [1505.07568].
[177] S. Seethapuram Sridhar, W. Williams, D. Price, S. Breen and L. Ball, "SKA Low and Mid subarray templates." SKAO, 2025. 10.5281/zenodo.16951088.
[178] B. J. Kavanagh, D. Gaggero and G. Bertone, Merger rate of a subdominant population of primordial black holes, Phys. Rev. D 98 (2018) 023536, [1805.09034].
[179] Z.-C. Chen and Q.-G. Huang, Distinguishing primordial black holes from astrophysical black holes by Einstein Telescope and Cosmic Explorer, JCAP 08 (2020) 039, [1904.02396].
[180] J. Manshanden, D. Gaggero, G. Bertone, R. M. T. Connors and M. Ricotti, Multi-wavelength astronomical searches for primordial black holes, JCAP 06 (2019) 026, [1812.07967].
[181] M. Oguri, J. M. Diego, N. Kaiser, P. L. Kelly and T. Broadhurst, Understanding caustic crossings in giant arcs: characteristic scales, event rates, and constraints on compact dark matter, Phys. Rev. D 97 (2018) 023518, [1710.00148].
[182] T. Blaineau et al., New limits from microlensing on Galactic black holes in the mass range 10 M⊙< M < 1000 M⊙, Astron. Astrophys. 664 (2022) A106, [2202.13819].
[183] A. Esteban-Gutiérrez, E. Mediavilla, J. Jiménez-Vicente and J. A. Muñoz, Constraints on the abundance of primordial black holes from X-ray quasar microlensing observations: Substellar to planetary mass range, Astrophys. J. 954 (2023) 172, [2307.07473].
[184] M. A. Monroy-Rodríguez and C. Allen, The end of the MACHO era revisited: new limits on MACHO masses from halo wide binaries, Astrophys. J. 790 (2014) 159, [1406.5169]. [185] T. D. Brandt, Constraints on MACHO dark matter from compact stellar systems in ultra-faint dwarf galaxies, Astrophys. J. Lett. 824 (2016) L31, [1605.03665]. [186] P. Lu, V. Takhistov, G. B. Gelmini, K. Hayashi, Y. Inoue and A. Kusenko, Constraining primordial black holes with dwarf galaxy heating, Astrophys. J. Lett. 908 (2021) L23, [2007.02213]. [187] B. J. Kavanagh, bradkav/pbhbounds: Release version, 2019. 10.5281/ZENODO.3538999. [188] C. R. Harris et al., Array programming with NumPy, Nature 585 (2020) 357-362, [2006.10256]. [189] P. Virtanen et al., SciPy 1.0-Fundamental algorithms for scientific computing in Python, Nature Meth. 17 (2020) 261, [1907.10121]. [190] A. Lewis, GetDist: a Python package for analysing Monte Carlo samples, JCAP 08 (2025) 025, [1910.13970]. [191] J. Flitter and E. D. Kovetz, New tool for 21-cm cosmology. II. Investigating the effect of early linear fluctuations, Phys. Rev. D 109 (2024) 043513, [2309.03948]. [192] J. Flitter, S. Libanore and E. D. Kovetz, More careful treatment of matter density fluctuations in 21-cm simulations, Phys. Rev. D 112 (2025) 023537, [2411.00089]. [193] J. B. Muñoz, Y. Ali-Haïmoud and M. Kamionkowski, Primordial non-gaussianity from the bispectrum of 21-cm fluctuations in the dark ages, Physical Review D 92 (Oct., 2015) . [194] Planck collaboration, N. Aghanim et al., Planck 2018 results. VI. Cosmological parameters, Astron. Astrophys. 641 (2020) A6, [1807.06209]. - 38 -
2510.14879
Multi-wavelength analysis of the progenitor of GRB 230307A via Bayesian model comparison

Viviane Alfradique∗
Centro Brasileiro de Pesquisas Físicas, Rua Dr. Xavier Sigaud 150, 22290-180 Rio de Janeiro, RJ, Brazil

Rodrigo da Mata, Juan C. Rodríguez-Ramírez, and Clécio R. Bom
Centro Brasileiro de Pesquisas Físicas, Rua Dr. Xavier Sigaud 150, 22290-180 Rio de Janeiro, RJ, Brazil
(Dated: October 17, 2025)
∗ vivianeapa@cbpf.com

GRB 230307A is one of the brightest long-duration gamma-ray bursts (GRBs) ever detected, yet its progenitor remains uncertain due to the variety of plausible astrophysical scenarios. In this work, we investigate four possible progenitors for GRB 230307A: a binary neutron star (BNS), a neutron star–white dwarf (NS–WD) system, a neutron star–black hole (NS–BH) merger, and a tidal disruption event (TDE) involving a white dwarf and a supermassive black hole. Additionally, we explore three distinct central engine models powering the kilonova associated with the BNS: radioactive decay of r-process nuclei in a two-component ejecta model, a magnetar-driven model including magnetic dipole spin-down, and a combined model of magnetar spin-down with 56Ni radioactive decay. We perform Bayesian multi-wavelength light-curve analyses using physically motivated models and priors, and evaluate model performance through Bayes factors and leave-one-out cross-validation (LOO) scores. Our results show a statistical preference for a BNS or NS–WD progenitor producing a kilonova powered by a magnetar and 56Ni decay, characterized by a 56Ni mass of $\sim 4\times10^{-4}\,M_\odot$ and an ejecta mass of $0.06\,M_\odot$. Furthermore, under the assumption of a BNS origin within this model, we infer binary component masses of $m_1 = 1.81^{+0.46}_{-0.61}\,M_\odot$ and $m_2 = 1.61^{+0.65}_{-0.41}\,M_\odot$, with a dimensionless tidal deformability of $\tilde{\Lambda} = 471^{+318}_{-395}$. From the component mass posteriors, we infer that the observed offset can be explained by a natal kick as long as the systemic velocity is nearly aligned with the pre-kick orbital motion. In this case, the required kick velocity (co-moving frame) and binary separation range within $v'_k \sim 100$–$150\,{\rm km\,s^{-1}}$ and $a_0 \sim 2$–$3\,R_\odot$, respectively.

The discovery of GRB 230307A, reported on March 7, 2023, by NASA's Fermi Gamma-ray Space Telescope [1], the Gravitational Wave High-energy Electromagnetic Counterpart All-sky Monitor (GECAM, [2]), and Konus-Wind [3], represents one of the most luminous gamma-ray bursts ever observed. This event exhibited a peak flux of $4.48\times10^{-4}\,{\rm erg\,cm^{-2}\,s^{-1}}$ and a high gamma-ray fluence of $3\times10^{-3}\,{\rm erg\,cm^{-2}}$ within the 10–1000 keV energy range [4]. The burst had a total duration of 42 seconds [4], and even 61 days after the event [5, 6], a faint counterpart remained detectable across multiple wavelengths. Follow-up observations spanned the optical, near-infrared, and soft X-ray bands, with data collected by instruments such as the James Webb Space Telescope (JWST, [5]), Hubble Space Telescope (HST, [6]), Very Large Telescope (VLT, [5]), Gemini South Telescope [5], Swift X-ray Telescope (XRT, [5]), Chandra X-ray Observatory [5], Australia Telescope Compact Array (ATCA, [5]), MeerKAT [5], AGILE [7], and the Lobster Eye Imager for Astronomy (LEIA, [4]). These comprehensive observations enabled a detailed characterization of the burst's afterglow and environment.
The rapid decay of the bolometric luminosity, along with the increase in the photosphere radius, suggests the presence of a thermal component that associates a kilonova with the GRB and supports the presence of lanthanides in the ejected material [5, 6].

One way to investigate the origin of the progenitor is to analyze how variations in kilonova evolution are intrinsically related to the properties of the ejecta and the remnant of the merger. This analysis provides valuable insights into the physical processes that are consistent with observations while excluding those that are not supported by the data. In the case of GRB 230307A, the rapid decay of the optical emission at early times, followed by the dominance of emission in the near-infrared (NIR) band, may indicate the presence of heavy isotopes [5, 6]. Furthermore, the detection of tellurium in the mid-infrared spectrum provides additional support for r-process nucleosynthesis, which is expected in BNS and NS–BH mergers. These observations, together with the significant offset between the burst position and its potential host galaxy at redshift z = 0.065, suggest a compact binary merger origin, possibly explained by a high kick velocity imparted to the neutron star component [8].

Another notable feature of GRB 230307A is the soft X-ray emission observed by LEIA [4] in the 0.5–4.0 keV band. This emission shows a plateau during the first 10 seconds, followed by a decay, a behavior commonly interpreted as the spin-down signature of a magnetar formed after the compact binary merger. However, the nature of the magnetar central engine remains debated. As noted in [9], neutrino emission could suppress the production of lanthanide-rich ejecta, and the mechanisms connecting spin-down luminosity to X-ray emission are still not fully understood [10]. In light of these challenges, Wang et al. [9] proposed an NS–WD merger with a possible remnant magnetar as the progenitor. In this scenario, the long-lived emission is explained by the lower density of the white dwarf, with heavy elements produced by 56Ni decay rather than the r-process typical of BNS mergers. They showed that an NS–WD system synthesizing ∼10⁻³ M⊙ of 56Ni, with an ejecta mass of ∼0.1 M⊙, provides a consistent fit to the multi-wavelength afterglow and kilonova emission. Beyond the magnetar engine, other models have also been proposed, such as disruption of the NS crust during the inspiral phase [11–14] or magnetospheric interactions between the binary components [15–18].

[10] was the first study to investigate the progenitor of the exotic GRB 230307A. Their analysis relied on the phenomenological classification of GRBs into type I, associated with massive star collapses, and type II, resulting from compact object mergers, as proposed by [19, 20]. They found that GRB 230307A has an ϵ value (defined as the ratio between the isotropic gamma-ray energy and the rest-frame spectral peak energy) lying on the boundary between the two classes, with a 3σ deviation from the type II classification. However, the effective amplitude parameter they obtained is consistent with type II. Given the similarity of these values to those of other long GRBs, the authors concluded that GRB 230307A most likely originated from a compact object merger, either a BNS or an NS–WD system, although it remains unclear which of these channels is the progenitor.
Identifying the progenitor system responsible for transient astrophysical events, such as GRB 230307A, is a fundamental step in understanding the physics governing such explosions. The degeneracy in the observable signatures makes it crucial to develop robust model selection strategies. This is especially important as we enter the era of high-cadence, multi-messenger observations enabled by facilities such as the Vera C. Rubin Observatory, which will conduct the Legacy Survey of Space and Time [21], anticipated to reveal a large number of such events. A reliable identification of the progenitor will not only refine our models of compact binary evolution, but also improve our understanding of the equation of state (EOS) of dense matter, nucleosynthesis pathways, and the role of magnetic fields in shaping the observed electromagnetic emission.

In this work, we investigate multiple progenitor scenarios for GRB 230307A within a comprehensive Bayesian framework, aiming to identify the model that best reproduces the observed multi-wavelength data, particularly the afterglow and kilonova components. We perform Bayesian inference using the nested sampling algorithm implemented in dynesty [22] and evaluate competing models with statistical tools including the Bayes factor, the LOO score, and the Kullback–Leibler (KL) divergence. These complementary metrics allow a robust model comparison by balancing goodness of fit, model complexity, and information content.

The paper is structured as follows: Section I describes the observational data and theoretical models; Section II outlines the Bayesian methodology; Section III presents the model comparison and parameter inference results; and Section IV summarizes the implications for the identification of the GRB 230307A progenitor.

I. DATA AND EJECTA MODELS

A. Observational data

The data used to perform the multi-wavelength analysis were collected from the source data provided by [6] (cf. "Source Data Fig. 3"), which gathered data from the HST, JWST, Swift/XRT, Swift/UVOT, X-Shooter, Chandra, Gemini, XMM-Newton, and Fermi/GBM. The follow-up observations cover the optical with the Gemini telescope and the Southern Astrophysical Research telescope; the near-infrared with the Gemini telescope and the JWST, which also explored the mid-infrared and far-infrared bands; the radio with the ATCA; and the X-ray with Swift/XRT, the Chandra X-ray Observatory, and XMM-Newton.

B. Possible scenarios of the GRB 230307A progenitor

To investigate the electromagnetic counterpart of GRB 230307A, we adopt a modeling approach that separates the afterglow and kilonova components, allowing a clear comparison between different scenarios for thermal emission. We use the Python package afterglowpy [23] to model the synchrotron radiation from a Gaussian structured jet (unless stated otherwise), keeping the afterglow fixed across all analyses to isolate the effects of changing the kilonova emission models. The afterglow model includes standard forward-shock emission, with free parameters: isotropic-equivalent kinetic energy E0, circumburst density n0, the magnetic- and electron-energy fractions εB and εe, respectively, electron power-law index p, jet core opening angle θc, and electron participation fraction ξN.
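For concreteness, the following minimal sketch shows how such a Gaussian structured-jet light curve can be evaluated with afterglowpy. The parameter names follow the afterglowpy documentation; all numerical values (and the single NIR frequency) are illustrative placeholders, not our fitted posteriors.

```python
import numpy as np
import afterglowpy as grb

# Gaussian structured-jet forward-shock model. All values are illustrative
# placeholders, not the posteriors reported in Table III.
Z = {
    "jetType": grb.jet.Gaussian,  # Gaussian angular profile of the jet energy
    "specType": 0,                # standard synchrotron spectrum
    "thetaObs": 0.0,              # viewing angle (rad)
    "E0": 1.0e53,                 # isotropic-equivalent kinetic energy (erg)
    "thetaCore": 0.1,             # jet core opening angle theta_c (rad)
    "thetaWing": 0.4,             # truncation angle of the Gaussian wings (rad)
    "n0": 0.1,                    # circumburst density n_0 (cm^-3)
    "p": 2.2,                     # electron power-law index
    "epsilon_e": 1.0e-2,          # electron energy fraction
    "epsilon_B": 1.0e-3,          # magnetic energy fraction
    "xi_N": 1.0,                  # electron participation fraction
    "d_L": 9.0e26,                # luminosity distance (cm), roughly 291 Mpc
    "z": 0.065,                   # redshift of the putative host
}

t = np.geomspace(1.0e3, 1.0e7, 200)   # observer-frame times (s)
nu = np.full_like(t, 1.4e14)          # a single NIR frequency (Hz)
Fnu = grb.fluxDensity(t, nu, **Z)     # flux density (mJy)
print(f"peak F_nu ~ {Fnu.max():.3g} mJy")
```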
Each kilonova model then explores distinct assumptions about the geometry, composition, and energy sources of the ejecta, allowing us to assess their individual contributions to the observed emission without introducing degeneracies from varying afterglow geometry.

1. Binary Neutron Star Merger

We adopt the mixed-component kilonova model described in Metzger [24], where the total emission arises from the sum of two distinct ejecta components: a blue (lanthanide-poor) and a red (lanthanide-rich) component. This framework captures the multichannel nature of neutron star merger ejecta, with fast, polar outflows typically being neutron-rich and of low opacity (κ ∼ 0.5–1 cm² g⁻¹), producing an early, short-lived blue kilonova. In contrast, slower equatorial tidal ejecta are generally lanthanide-rich and highly opaque (κ ∼ 5–10 cm² g⁻¹), resulting in a longer-lasting red kilonova that peaks in the near-infrared. The model assumes that the transient is powered by the radioactive decay of r-process nuclei synthesized in the neutron-rich ejecta. Each component of the ejecta is treated as an expanding black body with fixed opacity and velocity and is characterized by its own mass, opacity, and thermalization efficiency. Following GW170817, this two-component, one-dimensional modeling approach has become standard practice. In multi-component implementations, the components evolve independently, and their emissions are summed to obtain the total light curve (see the sketch after the NS–BH model below). Here, we modified the implementation of this model provided by the Python package nmma [25].

2. Binary Neutron Stars with central engine

We adopt the engine-powered kilonova model introduced by [26] and implemented in the Python package redback [27], which extends traditional radioactive kilonova frameworks to include energy injection from a long-lived, rapidly rotating neutron star (a magnetar) formed as a result of the binary neutron star merger. The model accounts for spin-down of the remnant via both magnetic dipole radiation and gravitational wave emission, depending on the strength and configuration of the internal toroidal and external dipolar magnetic fields. The resulting energy budget, governed by the competition between these two channels, significantly impacts the luminosity and temporal evolution of both the kilonova and its afterglow. In this framework, the magnetar wind interacts with the expanding ejecta, increasing its kinetic and internal energy while also modifying the radiative efficiency through time-dependent gamma-ray leakage. The model solves a set of coupled differential equations tracking the evolution of the Lorentz factor, internal energy, and radius of the ejecta, while accounting for radiative losses and adiabatic expansion.

3. Neutron Star–Black Hole Merger

We adopt the neutron star–black hole kilonova framework developed by [28], which models the expected optical/IR emission following NS–BH mergers by combining radiative transfer simulations with gravitational wave parameter constraints. We use the redback implementation of this model. Kilonova emission is modeled using the radiative transfer code POSSIS [29], with an updated grid of synthetic spectra tailored to NS–BH systems. The model incorporates a two-component ejecta structure: lanthanide-rich dynamical ejecta concentrated in the equatorial plane, and a more isotropic post-merger wind component with intermediate opacity.
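Common to the multi-component models above is the additive treatment of independently evolving photospheres. As rough orientation only, the sketch below sums two expanding blackbodies whose temperatures follow from the Stefan–Boltzmann law; the heating functions are user-supplied stand-ins, so this is a toy version of the idea, not the nmma, redback, or POSSIS implementations.

```python
import numpy as np

SIGMA_SB = 5.670e-5                          # Stefan-Boltzmann constant (cgs)
H, KB, C = 6.626e-27, 1.381e-16, 2.998e10    # Planck, Boltzmann, c (cgs)

def blackbody_fnu(nu, T, R, d):
    """Observed flux density (erg s^-1 cm^-2 Hz^-1) of a photosphere of
    radius R (cm) and temperature T (K) at distance d (cm)."""
    b_nu = (2.0 * H * nu**3 / C**2) / np.expm1(H * nu / (KB * T))
    return np.pi * b_nu * (R / d) ** 2

def summed_kilonova_fnu(nu, t, components, d):
    """Sum independent expanding-photosphere components. Each component is
    (v, L_of_t): expansion velocity (cm/s) and a bolometric heating rate
    L(t) in erg/s, a stand-in for that component's r-process/engine power."""
    total = 0.0
    for v, L_of_t in components:
        R = v * t                                                   # R = v t
        T = (L_of_t(t) / (4.0 * np.pi * SIGMA_SB * R**2)) ** 0.25   # SB law
        total += blackbody_fnu(nu, T, R, d)
    return total

# Toy usage: a fast "blue" and a slow "red" component with power-law heating,
# evaluated 3 days post-merger at a NIR frequency and ~291 Mpc.
comps = [(0.2 * C, lambda t: 1e41 * (t / 8.64e4) ** -1.3),
         (0.1 * C, lambda t: 4e40 * (t / 8.64e4) ** -1.3)]
print(summed_kilonova_fnu(1.4e14, 3 * 8.64e4, comps, 9.0e26))
```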
4. Tidal disruption event

We utilize the tidal disruption event model implemented in the Modular Open Source Fitter for Transients (mosfit [30] and redback), which systematically fits TDEs using a physically motivated framework. The model is based on hydrodynamical simulations of polytropic stars disrupted by supermassive black holes (SMBHs), from which the fallback rate of stellar debris is derived and converted into a bolometric light curve assuming a constant radiative efficiency. The fallback rate Ṁfb depends on the black hole mass, stellar mass, stellar structure (parameterized by a polytropic index), and the impact parameter of the disruption event. To account for potential time delays due to circularization and disk accretion, the model introduces a viscous delay timescale that acts as a low-pass filter on the fallback rate. The resultant accretion-powered luminosity is then reprocessed through a photospheric layer, which is assumed to emit as a blackbody.

5. Neutron Star–White Dwarf Merger

We also consider the semi-analytical model proposed in [9], which interprets the optical and infrared emission of GRB 230307A within the framework of an NS–WD merger. In this scenario, the late-time kilonova-like emission is powered by both the spin-down energy of a long-lived magnetar and the radioactive decay of a small quantity of 56Ni, rather than by r-process nucleosynthesis, which is unlikely to occur in the low-density ejecta typical of NS–WD mergers. While the model is primarily applied to NS–WD mergers, it can also be extended to BNS mergers, as such systems are likewise expected to produce a magnetar accompanied by 56Ni synthesis [31, 32]. The model describes the merger ejecta as multiple concentric shells with fixed velocities and a power-law density profile. The spin-down luminosity of the magnetar follows a magnetic dipole formula with a fixed timescale derived from early X-ray data, while the 56Ni and 57Co radioactive heating is modeled using standard exponential decay laws. The energy evolution of each ejecta shell includes contributions from magnetar injection, radioactive decay, adiabatic expansion, and photon diffusion. The light curve is computed by summing the contributions from all shells and assuming blackbody emission with temperature given by the Stefan–Boltzmann law.

II. BAYESIAN ANALYSIS TESTS

A. Bayes factor

A straightforward way to compare two models in terms of "goodness of fit", based on observed data, is by evaluating the Bayes ratio. Let M denote a model characterized by a set of parameters θ_i^M. The Bayes ratio is defined as the ratio of the evidence terms Z (i.e., the integral of the product between the likelihood L and the prior p(θ_i^M) over the entire parameter space),

$$Z_M = \int \mathcal{L}\left(d \mid \theta_i^M\right)\, p\left(\theta_i^M\right)\, d^n\theta_i^M, \qquad (1)$$

of the two competing models:

$$\ln B_{12} = \ln\left(\frac{Z_1}{Z_2}\right). \qquad (2)$$

The criterion for interpreting the Bayes factor is called Jeffreys' scale [33]. This scale classifies the strength of evidence into four categories in favor of model 1:

1. Strong evidence: if ln B12 > 5;
2. Moderate evidence: if 2 < ln B12 ≤ 5;
3. Weak evidence: if 0 < ln B12 ≤ 2;
4. No evidence (models are equally supported): if ln B12 = 0.

If ln B12 assumes negative values, then model 2 is better supported than model 1, and the evidence is classified as follows:

1. Strong evidence: if ln B12 < −5;
2. Moderate evidence: if −5 ≤ ln B12 < −2;
3. Weak evidence: if −2 ≤ ln B12 < 0.
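In practice, ln B12 is simply the difference of the ln-evidences returned by the sampler; a minimal helper applying the scale above might look as follows (the numerical values are placeholders):

```python
import numpy as np

def jeffreys_verdict(ln_b12):
    """Classify ln B12 = ln(Z1/Z2) according to the Jeffreys scale above."""
    a = abs(ln_b12)
    if a == 0.0:
        return "no evidence: models equally supported"
    strength = "strong" if a > 5 else ("moderate" if a > 2 else "weak")
    favored = "model 1" if ln_b12 > 0 else "model 2"
    return f"{strength} evidence in favor of {favored}"

# ln-evidences as returned, e.g., by a nested-sampling run (placeholders):
ln_Z1, ln_Z2 = -99.9, -125.7
print(jeffreys_verdict(ln_Z1 - ln_Z2))  # -> strong evidence in favor of model 1
```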
Throughout this article, we will compute the Bayes factor relative to our best-fitting model as a reference, which we will refer to as $\ln B_{1,\rm ref} = \ln(Z_1/Z_{\rm ref})$.

B. Leave-One-Out cross-validation

Leave-One-Out cross-validation [34] is a fully Bayesian method for analyzing the predictive performance of a model by estimating how well it predicts each observation when that observation is left out of the fitting process. LOO evaluates the likelihood of each observation when left out of the fit, averaging over the full posterior distribution of the model parameters to approximate the expected log predictive density (elpd) for new data. The LOO estimate is defined as the sum of the logarithm of the posterior predictive probability of each left-out observation, p(y_i | y_{−i}):

$$\mathrm{elpd}_{\rm LOO} = \sum_{i=1}^{n} \log p(y_i \mid y_{-i}), \qquad (3)$$

where y_i is the i-th observation and y_{−i} denotes the data with the i-th observation removed. Higher elpd_LOO values indicate better predictive performance.

C. Kullback–Leibler divergence

The Kullback–Leibler divergence is a primary measure of the discrepancy between two probability distributions, p(θ⃗) and q(θ⃗), quantifying the amount of information lost when the distribution q(θ⃗) is used instead of the distribution p(θ⃗). The KL divergence, D_KL(p||q), is defined as

$$D_{\rm KL}(p \,\|\, q) = \int p(\vec{\theta}) \log\left[\frac{p(\vec{\theta})}{q(\vec{\theta})}\right] d^n\theta, \qquad (4)$$

and quantifies how well q(θ⃗) approximates p(θ⃗). The primary Bayesian analysis in this work compares different kilonova models while fixing the afterglow model. Accordingly, our results explore the KL divergence of the posterior distributions obtained with the Markov Chain Monte Carlo method, focusing solely on the afterglow parameter space. The KL divergence values are interpreted such that values below 0.1 indicate negligible divergence, values between 0.1 and 1 suggest moderate divergence, and values greater than 1 denote substantial divergence between the posterior distributions.

III. RESULTS

In this section, we present our multi-wavelength analyses of GRB 230307A. In the first subsection, we describe our model selection analysis, aimed at investigating the nature of the progenitor and the ejecta mechanism that powered the observed kilonova emission. In subsection III B, we describe the influence of the kilonova emission on the GRB, and in subsection III C, we explore the binary properties of the GRB 230307A progenitor and investigate the possibility of a supernova natal kick to explain the observed offset.

A. Model selection: preferred progenitor system and central engine scenario

Here, we apply the statistical model selection methods described in Section II to a joint analysis of the GRB afterglow and a candidate kilonova signature or emission from a tidal disruption event. We investigate several compact merger progenitor scenarios that could produce the kilonova emission, including a compact binary (either a binary neutron star merger or a white dwarf–neutron star merger) and a neutron star–black hole merger. The median values and corresponding 1σ uncertainties for the physical parameters of each model analyzed in this work are presented in Table III. The results show reasonable agreement, within 1σ to 2σ, with previous estimates reported in [6] and [9].
The parameter inference from the magnetar spin-down model indicates an ejecta mass of 0.06 M⊙ and a 56Ni mass of ∼4 × 10⁻⁴ M⊙, consistent with that expected from a typical NS–WD merger [35], while also remaining compatible with the higher, more characteristic 56Ni production of a BNS merger [31]. Therefore, throughout the remainder of this paper, we adopt this model as a generic compact binary coalescence (CBC) magnetar spin-down scenario, without excluding either of these two progenitor possibilities. Table I summarizes our results for the different possible scenarios, presenting the Bayes factors, LOO, and maximum likelihood values. Based on the Bayes factor, we find that the scenario best supported by the data is the CBC magnetar spin-down model, which yields a maximum log-likelihood of −99.86. This corresponds to likelihood ratios of 1.1, 1.8, 39, and 85 relative to the BNS two-component kilonova, BNS general merger-nova, NS–BH, and TDE models, respectively. We adopt this model as the reference for the Bayes factor comparison. Figure 1 presents the multi-wavelength light curves of GRB 230307A for all models discussed in Section II, assuming the best-fit parameters and prior bounds listed in Table III.

We compare models using both Bayes factors and LOO scores (see results in Table I). The Bayes factor analysis provides strong evidence against all alternative models compared to the reference model, with logarithmic Bayes factor values below −5. The LOO cross-validation analysis indicates that the CBC magnetar spin-down model provides the best predictive accuracy, with elpd_loo = −52 ± 17. The BNS two-component model exhibits lower predictive performance (elpd_loo = −141 ± 42), but its difference relative to the magnetar spin-down model corresponds to only ∼2σ, which does not constitute decisive evidence against it. Therefore, while the CBC magnetar spin-down scenario is statistically preferred, the BNS two-component model remains a possible alternative. The BNS general merger-nova model yields an even lower elpd_loo, with a difference of approximately 2.6σ relative to the CBC magnetar spin-down model, indicating weaker support compared to the BNS two-component model, yet it cannot be entirely excluded. By contrast, the NS+BH and SBH+WD astrophysical scenarios have extremely low elpd_loo values, suggesting poor predictive performance. However, the large standard errors imply that their true predictive ability is highly uncertain; thus, while these models are strongly disfavored, they cannot be strictly ruled out based solely on this analysis. These results reinforce the conclusions of [9] and [6], providing strong evidence, in terms of both goodness of fit and model complexity, in favor of a scenario where the progenitor of GRB 230307A is the coalescence of a compact binary, compared to systems formed by NS–BH mergers producing kilonova emission, or SBH–WD mergers leading to tidal disruption events.

To gain a deeper understanding of our results and establish connections between the underlying emission processes, we analyzed the best-fit parameters of each model in each wavelength band. The BNS two-component model provides the best fit to the NIR data, with a modest improvement in chi-squared (χ² ∼ 3) relative to the CBC magnetar spin-down model. In contrast, in the optical band, the CBC magnetar spin-down model performs better (χ² ∼ 36), while the BNS two-component (χ² ∼ 45) and BNS general merger-nova (χ² ∼ 69) models are slightly less favored.
Both the NS–BH and TDE models are strongly disfavored in both bands, exhibiting significantly higher χ² values (≈160–1650), indicating poor agreement with the observed thermal emission of GRB 230307A. The TDE model's failure is likely due to its lack of lanthanide-rich ejecta, which are essential to reproduce the NIR flux. Additionally, the poor fit of the NS–BH model suggests that the observed data require a blue optical emission component, commonly attributed to lanthanide-poor disk winds, that is typically absent in standard NS–BH kilonova models. On the other hand, the radio and X-ray bands exhibit low χ² values (approximately 0.5–4 for the CBC kilonova models and approximately 15–320 for the TDE and NS–BH models), reflecting the dominant contribution of the afterglow emission relative to the kilonova component in fitting the observed data. Our results show the best fit for the X-ray band with the CBC magnetar spin-down model, while the radio data are better described by the BNS two-component model. Overall, the CBC kilonova models provide a superior description of the afterglow-dominated data, as also reflected by their higher maximum likelihood values (see Table I for a comparison).

TABLE I. Summary statistical results for the multi-wavelength analysis of GRB 230307A.

Transient emission model                                   ln Z1/Zref   Maximum ln(L)   LOO          Dimensions   KN model reference
BNS system (two-component kilonova)                        -25.81       -110.69         -141±42      17           [36]
BNS system (General Merger-Nova)                           -87.08       -181.06         -52±17       13           [26]
CBC system (magnetar spin-down)                            -            -99.86          -170±43      14           [9]
NS+BH system (two-component kilonova - ejecta relation)    -3838.82     -3932.98        -3906±2854   15           [36]
SBH+WD system (TDE - 4/3 polytrope stars)                  -8388.02     -8458.49        -5370±2946   15           [30]

B. Modeling the afterglow of GRB 230307A with and without kilonova emission

It has been established in previous studies that the emission from GRB 230307A is consistent with a gamma-ray burst accompanied by kilonova emission [6, 9]. In particular, these works showed that the inclusion of a kilonova component is essential to reproduce the enhanced brightness of the optical and NIR light curves compared to standard afterglow models. In this context, we analyze the ability of a GRB model with a Gaussian jet structure to fit the observed data and quantify the improvement achieved when including a kilonova component, described by the two best-fit models found in the previous subsection: the magnetar spin-down scenario and the two-component kilonova. Furthermore, we evaluate how the inclusion of transient emission affects the inferred parameters describing the afterglow.

Our analysis shows that the afterglow-only model yields a maximum log-likelihood lower by factors of 23 and 25 compared to models in which the BNS two-component kilonova or CBC magnetar spin-down emission is added to the afterglow, respectively. This result is mainly governed by the NIR band, particularly the F444W and F277W filters, which provide well-measured data and yield considerable χ² values of approximately 4030 and 564, respectively. The large χ² values from these bands reveal the need for an additional emission component to better reproduce the observational data at late times (after ≈2 days post-explosion; see the panel in Fig. 2). The best-fitting light curve for the GRB-only model shows better agreement with the data in the X-ray and optical bands, yielding lower χ² values in the range of 0.5–36.
In Table II, we present the KL divergence values obtained for all transient emission models explored in this work, assuming the CBC magnetar spin-down model as the fiducial model. We find that most parameter distributions from the GRB-only inference, except for the magnetic energy fraction, diverge substantially from the fiducial model, with KL values exceeding 1. This divergence is further supported by the significant disagreement, greater than 1σ, between the posterior means (see Fig. 3). The results obtained for the NS–BH and TDE models show partial agreement within 1σ, with moderate divergence from the true model for log10 E0, log10 n0, log10 ϵe, and log10 θc, and significant divergence for the remaining parameters. In contrast, all BNS scenarios (two-component and general merger-nova models) exhibit negligible divergence for most parameters. This emphasizes that the inference of afterglow parameters is reliable only when the assumed progenitor scenario matches the true one. Otherwise, the inclusion of different transient components, or their omission, results in divergent Bayesian inferences and, consequently, different interpretations of the afterglow emission.

TABLE II. KL divergence values computed from the posterior distributions of the Gaussian jet-structure model parameters across the different scenarios investigated in this work. The results obtained with the CBC magnetar spin-down model (see subsection I B 5) are taken as the reference (true) probability distribution.

Parameter    GRB-only   BNS (two-comp.)   BNS (central eng.)   NS-BH   TDE
log10 E0     5.81       0.09              0.03                 0.72    0.74
log10 n0     2.44       0.14              0.06                 0.51    0.24
log10 θc     2.31       0.07              0.02                 1.04    0.87
log10 ϵe     2.74       0.04              0.03                 0.68    0.48
log10 ϵB     0.62       0.18              0.07                 1.64    1.38
p            7.49       0.03              0.41                 2.77    3.58
log10 ξN     2.26       0.04              0.07                 1.41    1.16

C. Binary progenitor properties and implications for the observed offset of GRB 230307A

The results presented in the previous subsections suggest that the optical emission observed from GRB 230307A is most consistent with a kilonova scenario originating from the merger of a compact binary system. Within this framework, we now explore the physical properties of the potential progenitor system. Assuming that the kilonova originated from a BNS merger and following [37], we adopt phenomenological fits from numerical-relativity simulations that relate the ejecta parameters to the binary properties, such as the individual masses and compactnesses. For the dynamical ejecta mass, we use the fitting relations derived by [38], while the disk mass fits are taken from [39]. We fix the Tolman–Oppenheimer–Volkoff maximum mass to 2.17 M⊙ following the constraint reported in [37]. To estimate the neutron star radius, we use the empirical fit for R1.4 (the radius of a 1.4 M⊙ neutron star) as a function of binary tidal deformability and chirp mass, based on the relations provided in [40] and calibrated across different EOS.

Adopting the model that best fits the observed data, which combines magnetar spin-down and the radioactive decay of 56Ni, we infer that the binary system is composed of a primary with a mass of $m_1 = 1.81^{+0.46}_{-0.61}\,M_\odot$ and a secondary with a mass of $m_2 = 1.61^{+0.65}_{-0.41}\,M_\odot$, with the dimensionless tidal deformability of the binary being $\tilde{\Lambda} = 471^{+318}_{-395}$. Assuming a BNS two-component model for the kilonova emission, we obtain constraints on the binary masses of $m_1 = 1.82^{+0.45}_{-0.61}\,M_\odot$ and $m_2 = 1.42^{+0.51}_{-0.22}\,M_\odot$, together with an associated dimensionless tidal deformability of $\tilde{\Lambda} = 439^{+368}_{-350}$.
FIG. 1. GRB 230307A multi-wavelength light curves. Solid lines show the best-fitting model curves obtained from joint Bayesian inference for different progenitor scenarios (shown in different colors; see App. A for details). Black dots represent observational data points with uncertainties, and triangles denote upper limits. In some cases, the error bars are not visible because their size is smaller than the plotting scale.

Although these results suggest a slightly greater asymmetry between the mass components, they remain in overall very good agreement within the uncertainties. These values correspond to the posterior mean estimates with their associated 68% confidence intervals. For both inferences, we assume uniform prior bounds of [1.2, 2.27] for m1, [1.2, m1] for m2, and [70, 790] for Λ̃. The contour plots of these results are shown in Fig. 4.

Based on the inferred BNS component masses, m1 and m2, we search for the conditions under which the binary can attain the projected offset ℓobs ∼ 38.9 kpc at coalescence. In particular, we consider the scenario in which the binary was orbiting its host galaxy with velocity v̄c and suddenly experienced a kick of systemic velocity v̄′k, where the prime indicates that the quantity is measured in the pre-kick co-moving frame. This event leaves the binary with a velocity v̄0 = v̄c + v̄′k relative to the galaxy centre. We then investigate the minimal kick-velocity magnitude v′k required to reproduce the observed kilonova offset. Hereafter, unbarred variables like vx denote the magnitude of the corresponding velocity vector v̄x.

FIG. 2. Multi-wavelength light curves of GRB 230307A from the afterglow-only emission (green) and with the addition of a kilonova, modeled using the BNS two-component model (blue) and the CBC magnetar spin-down model (orange), for the X-ray, optical, and NIR bands. Black dots represent observational data with uncertainties, and triangles indicate upper limits. For certain data points, the error bars are not shown because they are smaller than the scale of the plot.

Considering that the binary has velocity v0 at a radial location r0 relative to the host-galaxy centre, the classical work–energy theorem gives the binary velocity vb at a radial position r as

$$v_b(r) = \sqrt{v_0^2 + 2\left[\Phi_g(r_0) - \Phi_g(r)\right]}, \qquad (5)$$

where Φg(r) = Φ∗(r) + ΦDM(r) is the total gravitational potential of the galaxy, composed of stellar and dark-matter components, both approximated as spherically symmetric, which we define as a function of the stellar mass M∗ and redshift z, as detailed in App. B. To estimate the flight time elapsed between ejection and coalescence, ∆tP, we assume that the binary velocity is approximately aligned with the radial direction, vb ≈ dr/dt, which is valid when the radial distance at coalescence is much larger than the initial radius r0. Integrating equation (5) then gives

$$\Delta t_P = \frac{5\, c^5 (1+q)^2}{256\, G^3 q\, (m_1+m_2)^3}\, a_0^4\, f_e = \int_{r_0}^{\ell_{\rm obs}/\cos\theta_p} \frac{dr}{\sqrt{v_0^2 + 2\left[\Phi_g(r_0) - \Phi_g(r)\right]}}. \qquad (6)$$

The first equality in equation (6) corresponds to the coalescence time of a binary with initial separation a0, as derived by Peters [41], where q = m2/m1 < 1, fe ∼ 1 is a function of the initial binary eccentricity, G is the gravitational constant, and c is the speed of light. In the second equality, ℓobs/cos θp is the radial position at coalescence, with θp the angle between the coalescence direction and the plane of the sky.
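To illustrate how equation (6) couples the orbital scale a0 to the galactic escape, the sketch below evaluates both sides numerically for a toy spherical potential; the potential parameters and launch conditions are illustrative assumptions, not the fitted host-galaxy model of App. B.

```python
import numpy as np
from scipy.integrate import quad

G_KPC = 4.301e-6    # G in kpc (km/s)^2 / Msun
G_KM = 1.327e11     # G in km^3 s^-2 / Msun
C_KMS = 2.998e5     # speed of light (km/s)
KPC_KM = 3.086e16   # km per kpc
RSUN_KM = 6.957e5   # solar radius (km)
GYR_S = 3.156e16    # seconds per Gyr

def peters_time(a0_rsun, m1, m2, fe=1.0):
    """Left side of eq. (6): circular-orbit coalescence time (s), Peters (1964).
    Uses q(m1+m2)^3/(1+q)^2 = m1*m2*(m1+m2); masses in Msun."""
    a0 = a0_rsun * RSUN_KM
    return 5.0 * C_KMS**5 * a0**4 * fe / (256.0 * G_KM**3 * m1 * m2 * (m1 + m2))

# Toy single-Hernquist stand-in for Phi_g = Phi_* + Phi_DM of App. B
# (illustrative mass and scale, NOT the fitted host values).
M_G, A_G = 2.0e10, 3.0                       # Msun, kpc
phi = lambda r: -G_KPC * M_G / (r + A_G)     # potential in (km/s)^2

def v0_threshold(r0, r_end):
    """Minimum launch speed (km/s) keeping the eq. (6) integrand real."""
    return np.sqrt(2.0 * (phi(r_end) - phi(r0)))

def flight_time(v0, r0, r_end):
    """Right side of eq. (6): coasting time (s) from r0 to r_end (kpc)."""
    f = lambda r: KPC_KM / np.sqrt(v0**2 + 2.0 * (phi(r0) - phi(r)))
    return quad(f, r0, r_end)[0]

r0, r_end = 2.0, 38.9                    # kpc; offset quoted in the main text
v0 = 1.05 * v0_threshold(r0, r_end)      # slightly above the escape threshold
print(f"v0_th = {v0_threshold(r0, r_end):.0f} km/s")
print(f"flight time = {flight_time(v0, r0, r_end) / GYR_S:.2f} Gyr")
print(f"Peters time (1.8+1.6 Msun, a0 = 2.5 Rsun) = "
      f"{peters_time(2.5, 1.8, 1.6) / GYR_S:.2f} Gyr")
```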
From equation (6), we note that the minimum velocity v0 required for the binary to reach the distance ℓobs/cos θp is

$$v_{0,\rm th} = \sqrt{2\left[\Phi_g(\ell_{\rm obs}/\cos\theta_p) - \Phi_g(r_0)\right]}.$$

This corresponds to the threshold value of v0 that ensures the integrand in equation (6) remains real. Finally, since v̄0 = v̄c + v̄′k, the required systemic kick velocity can be obtained as

$$v'_k = \sqrt{v_0^2 - v_c^2 \sin^2\theta_k} - v_c \cos\theta_k, \qquad (7)$$

with θk being the angle between v̄c and v̄′k, and we approximate vc as the circular velocity of the galaxy at the radius r0 (see App. B for details).

In the upper panel of Fig. 5, we display the minimum values of v′k required to produce the observed offset ℓobs = 38.9 kpc, as a function of a0 and the angle θk. The points in this v′k–a0 space were obtained by solving the simultaneous equations (6)-(7), while fixing the values M∗ ∼ 2.4×10⁹ M⊙ and z = 0.0647, consistent with the estimate by [6], using the posterior samples of m1 and m2 derived from the magnetar spin-down model (see Fig. 4), and uniformly sampling the parameters θp ∈ (0, π/3), θk ∈ (0, π), and r0 ∈ (0.5, 5) kpc. In the lower panel of Fig. 5, we show the coalescence time associated with the parameter configurations of the upper panel.

FIG. 3. Posterior comparison of GRB afterglow model parameters (log ϵB, p, log ξN) obtained from joint analyses with different possible astrophysical scenarios. The subplots show the residuals relative to the CBC magnetar spin-down model, defined as 2 × (PDF_KN model − PDF_ref.)/(PDF_KN model + PDF_ref.). The shaded gray region denotes the 1σ credible interval of the reference model's posterior distributions.

FIG. 4. Corner plot of the binary component masses (m1, m2) and dimensionless tidal deformability (Λ̃) from inference results of the magnetar spin-down model (orange) and BNS two-component model (blue). The top panel shows the final inference for the magnetar spin-down model.

We observe from Fig. 5 that the minimum systemic velocity v′k required to produce the offset ℓobs = 38.9 kpc ranges between ∼100 and 270 km s⁻¹. This velocity clearly depends on the kick angle θk, with smaller values of θk leading to the lowest v′k. The corresponding binary separations a0 mostly lie within 2–3 R⊙, regardless of the kick direction. Under such conditions, the ejections would produce the electromagnetic transient within ∆tP ≲ 1 Gyr, as shown in the lower panel of Fig. 5. We note that this period of time is comparable to the galaxy evolution time scale, and much shorter than the Hubble time (in ΛCDM cosmological models).

A possible explanation for the origin of the sudden ejection of the binary is the natal kick imparted to one of its components during an asymmetric supernova explosion [42]. This scenario is particularly plausible for systemic kicks that are quasi-aligned with the binary's orbital motion before the supernova event (i.e. θk ≲ π/4), which would imply systemic velocities of v′k ∼ 100–150 km s⁻¹, as shown in Fig. 5. Natal kicks producing such parameter configurations of v′k and a0 would require finely tuned conditions to avoid binary disruption, yet remain possible according to analyses of similar phenomena discussed in [43, 44] and are consistent with the inferred natal kick velocities of some observed Galactic BNS pulsars [45].
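Equation (7) itself is a one-liner; the toy evaluation below shows the geometric effect just discussed, namely that quasi-aligned kicks (small θk) minimize the required v′k. The v0 and vc values are illustrative placeholders.

```python
import numpy as np

def systemic_kick(v0, vc, theta_k):
    """Eq. (7): kick magnitude v'_k (km/s) needed so that |vc + v'_k| = v0,
    with theta_k the angle between the circular velocity vc and the kick."""
    return np.sqrt(v0**2 - (vc * np.sin(theta_k))**2) - vc * np.cos(theta_k)

# Illustrative numbers: post-kick speed v0 = 180 km/s, circular speed vc = 100 km/s.
for th_deg in (0.0, 45.0, 90.0, 135.0):
    vk = systemic_kick(180.0, 100.0, np.deg2rad(th_deg))
    print(f"theta_k = {th_deg:5.1f} deg -> v'_k = {vk:6.1f} km/s")
```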
IV. CONCLUSIONS

GRB 230307A is the second-brightest gamma-ray burst ever observed. The absence of an associated supernova, combined with persistent late-time near-infrared emission and rapidly decaying optical emission, along with spectroscopic detection of emission lines indicative of heavy elements such as tellurium, strongly supports the presence of a kilonova powered by r-process nucleosynthesis. The long duration of GRB 230307A initially suggested a traditional classification as the result of the collapse of a massive star. However, the presence of soft X-ray emission characterized by a plateau followed by a power-law decay points to the presence of a magnetar central engine, which is typically associated with a CBC. Furthermore, the large observed offset from the host galaxy provides additional support for a CBC origin. These factors pose a challenge to establishing a direct and comprehensive understanding of the progenitor of GRB 230307A.

FIG. 5. Scatter plots for parameter combinations of v′k–a0 (upper) and ∆tP–a0 (lower), consistent with the observed merger offset ℓobs = 38.9 kpc, as derived from equations (6)-(7). Colors indicate different assumed values of θk.

In Section III A, we present a detailed Bayesian multi-wavelength analysis of GRB 230307A, examining various progenitor scenarios such as BNS, NS–WD, and NS–BH mergers, and a white dwarf TDE. Model comparison based on the Bayes factor and LOO score strongly favors a BNS or NS–WD progenitor, in which the emission is powered by magnetar spin-down and 56Ni radioactive decay. This conclusion aligns with previous works [4, 10], which analyzed prompt X-ray and gamma-ray data from LEIA and GECAM and reached similar findings. Our analysis further demonstrates that a kilonova component, in addition to the standard afterglow emission, is essential to reproduce the observations, underscoring the complex nature of the emission mechanisms in GRB 230307A.

We also investigated the source properties of the possible progenitor of GRB 230307A. Assuming a BNS progenitor, we used the relations between the ejecta properties and the masses and tidal deformabilities of the binary components as described by Dietrich and Ujevic [38] and Coughlin et al. [39]. Applying these relations to the two best-fit models, the magnetar spin-down and the two-component kilonova, we performed a Bayesian inference to constrain the BNS source properties. Both models yielded consistent results, indicating component masses of $m_1 = 1.81^{+0.46}_{-0.61}\,M_\odot$ and $m_2 = 1.61^{+0.65}_{-0.41}\,M_\odot$, with a tidal deformability of $\tilde{\Lambda} = 471^{+318}_{-395}$ for the magnetar spin-down model. Moreover, we used the posterior mass samples to investigate the ejection conditions of the binary progenitor responsible for the large projected merger offset. Assuming the system was ejected from the nearest galaxy by a kick imparted on top of its orbital motion, we find that the minimum systemic velocity v′k (in the co-moving frame) required to reproduce the observed offset of ℓobs = 38.9 kpc ranges between ∼100 and 270 km s⁻¹. The corresponding post-kick binary separation falls within a0 ∼ 2–3 R⊙. If the kick is quasi-aligned with the pre-kick orbital motion, the required velocity decreases to v′k ∼ 100–150 km s⁻¹. This special case can be plausibly explained within the natal-kick scenario, where one binary component undergoes an asymmetric supernova explosion, under finely tuned conditions that prevent binary disruption [43, 44].
The results of this work provide statistical evidence supporting a CBC origin for GRB 230307A. The inferred parameter values from the magnetar spin-down model are consistent with either a BNS merger or an NS–WD merger. Distinguishing between these two progenitor channels remains challenging, primarily due to the limited availability of detailed numerical simulations for NS–WD mergers, in contrast to the extensive modeling developed for BNS systems (e.g., [46–53]). Without comparable simulation data, including detailed predictions of electromagnetic counterparts and ejecta properties, it is currently not possible to determine the nature of the progenitor based solely on observational features.

Future progress will require the development of a systematic suite of general-relativistic hydrodynamic simulations targeting mergers between neutron stars and white dwarfs. This poses unique technical challenges due to the broad range of spatial and temporal scales required to accurately model both types of stellar remnants with realistic equations of state. These simulations must also incorporate variations in mass ratios, white dwarf compositions, and neutron star properties [54]. Such efforts are essential for producing reliable predictions of observables, including kilonova light curves, nucleosynthetic yields, and gravitational wave signatures specific to these systems. Ultimately, this work would provide the necessary foundation for distinguishing between different types of compact binary progenitors in future multi-messenger observations.

Gravitational waves emitted during the coalescence could provide complementary observational constraints to resolve this ambiguity. Unfortunately, the distance of GRB 230307A is slightly beyond the current sensitivity of the LIGO, Virgo, and KAGRA detectors, meaning that even if the event had occurred during the fourth observing run, it would have been undetectable. Future GW detectors, with sensitivity extending beyond the distance of GRB 230307A, will open a promising opportunity for multi-messenger astronomy, providing complementary insights into its progenitors.

ACKNOWLEDGEMENTS

The authors made use of the Sci-Mind server machines developed by the CBPF AI LAB team and would like to thank Paulo Russano and Marcelo Portes de Albuquerque for all the support in infrastructure matters. J.C.R.R. acknowledges support from the Rio de Janeiro State Funding Agency FAPERJ, grant E-26/205.635/2022. C.R.B. acknowledges financial support from CNPq (316072/2021-4), from FAPERJ (grants 201.456/2022 and 210.330/2022), and from the FINEP contract 01.22.0505.00 (ref. 1891/22).

Appendix A: Parameter inference from multi-wavelength observations

In this appendix, we present the results of a joint Bayesian inference from the multi-wavelength analysis of GRB 230307A, assuming that the emission consists of an afterglow signature and a transient emission (either kilonova or TDE). The Bayesian inference was performed using the dynamic nested sampling algorithm provided by the dynesty Python package. In each simulation, we employed 500 live points along with a multi-ellipsoidal bounding method, and terminated the sampling process when the change in log-evidence dropped below a tolerance threshold of 0.1. The median values and the 1σ confidence intervals are presented in Table III.
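For reference, the sampler configuration just described corresponds, in dynesty, to something like the following; the toy Gaussian likelihood and two-parameter prior stand in for the full afterglow-plus-transient model.

```python
import numpy as np
from dynesty import DynamicNestedSampler

def loglike(theta):
    # Toy Gaussian likelihood standing in for the multi-band light-curve fit.
    return -0.5 * np.sum(((theta - 1.0) / 0.1) ** 2)

def prior_transform(u):
    # Map the unit cube to uniform priors on [-2, 2).
    return 4.0 * u - 2.0

# Dynamic nested sampling with multi-ellipsoidal bounds, as in the text.
sampler = DynamicNestedSampler(loglike, prior_transform, ndim=2, bound="multi")
sampler.run_nested(nlive_init=500, dlogz_init=0.1)  # 500 live points, tol 0.1
res = sampler.results
print(f"ln Z = {res.logz[-1]:.2f} +/- {res.logzerr[-1]:.2f}")
```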
Appendix B: The gravitational potential of the host galaxy

We model the gravitational potential, Φg, of the putative host galaxy that formed the progenitor binary system discussed in the main text as spherically symmetric, and composed of stellar and dark matter (DM) components:

$$\Phi_g(r) = \Phi_*(r) + \Phi_{\rm DM}(r). \qquad (B1)$$

For the stellar component, we adopt the Hernquist profile [55]:

$$\Phi_*(r) = -\frac{G M_*}{r + a_h}, \qquad (B2)$$

where M∗ is the total stellar mass, and ah is the scale radius, assumed to be ah = 0.55 R50 [56], with R50 the stellar half-mass radius. The latter is obtained as a function of M∗ from the empirical relation derived in [57]. For the DM component, we adopt the Navarro–Frenk–White (NFW) potential [58]:

$$\Phi_{\rm DM}(r) = -\frac{4\pi G \rho_0 R_s^3}{r} \ln\left(1 + \frac{r}{R_s}\right), \qquad (B3)$$

where ρ0 is the characteristic density of the DM halo, and Rs its scale radius. These parameters are related to the halo mass, MDM, through

$$M_{\rm DM} = 4\pi \rho_0 R_s^3 \left[\ln(1+c) - \frac{c}{1+c}\right], \qquad (B4)$$

where c is the concentration parameter. We estimate MDM as a function of the total stellar mass M∗ and the source redshift, using the empirical relation derived by [59]. The concentration parameter c is computed using the colossus toolkit [60], specifying M200, the source redshift, and the mass–concentration relation from Ishiyama et al. [61]. Given c and MDM, the scale radius is defined as Rs = Rvir/c, where

$$R_{\rm vir} = \left(\frac{3 M_{\rm DM}}{4\pi \Delta_{\rm vir} \rho_c}\right)^{1/3}, \qquad (B5)$$

with ∆vir = 200 and the critical density of the Universe, ρc = 1.4 × 10¹¹ M⊙ Mpc⁻³. Finally, the characteristic density ρ0 is obtained from equation (B4). With the definitions above, the galactic potential in equation (B1) is fully determined by specifying the total stellar mass M∗, the source redshift z, and the radial position r. Finally, given the gravitational potentials, the implied circular velocity of the galaxy at radius r is

$$v_c(r) = \sqrt{r \left(\frac{d\Phi_*}{dr} + \frac{d\Phi_{\rm DM}}{dr}\right)}. \qquad (B6)$$
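A direct numerical transcription of equations (B1)–(B3) and (B6) is straightforward; in the sketch below the halo normalization (ρ0, Rs) is fed in by hand with illustrative values, rather than derived from the colossus mass–concentration machinery described above.

```python
import numpy as np

G = 4.301e-6  # G in kpc (km/s)^2 / Msun

def phi_stellar(r, m_star, a_h):
    """Eq. (B2): Hernquist potential of the stellar component, in (km/s)^2."""
    return -G * m_star / (r + a_h)

def phi_dm(r, rho0, rs):
    """Eq. (B3): NFW potential; rho0 in Msun/kpc^3, rs and r in kpc."""
    return -4.0 * np.pi * G * rho0 * rs**3 * np.log(1.0 + r / rs) / r

def v_circ(r, m_star, a_h, rho0, rs, dr=1e-4):
    """Eq. (B6): circular speed from a numerical radial derivative of Phi_g."""
    phi_g = lambda x: phi_stellar(x, m_star, a_h) + phi_dm(x, rho0, rs)
    dphi = (phi_g(r + dr) - phi_g(r - dr)) / (2.0 * dr)
    return np.sqrt(r * dphi)

# Illustrative inputs: M* ~ 2.4e9 Msun with a_h = 0.55*R50 ~ 1 kpc, and a halo
# normalization rho0 ~ 1e7 Msun/kpc^3, rs ~ 15 kpc (placeholders, not fits).
print(f"vc(2 kpc) ~ {v_circ(2.0, 2.4e9, 1.0, 1.0e7, 15.0):.0f} km/s")
```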
TABLE III. Posterior medians and 1σ credible intervals for the model parameters. The GRB afterglow model assumes a Gaussian jet structure with the luminosity distance fixed at 291 Mpc. Prior bounds are indicated in brackets; all priors are assumed to be uniform.

GRB afterglow parameter estimation:

Parameter [prior]              CBC (mag.+56Ni)     BNS (two-comp.)     BNS (central eng.)   NS-BH               TDE
log10 EK,iso (erg) [50, 60]    53.18 +1.74/-1.58   53.11 +1.18/-1.24   53.36 +1.59/-1.52    53.86 +0.56/-1.03   53.34 +0.49/-0.48
log10 n0 (cm⁻³) [-6, 2]        -0.75 +1.99/-2.32   -0.53 +1.90/-4.91   -0.30 +1.80/-2.41    0.05 +0.97/-0.27    0.20 +1.61/-1.57
log10 θc [-2, -0.5]            -0.89 +0.38/-0.55   -0.56 +0.06/-0.94   -0.88 +0.36/-0.54    -1.26 +0.22/-0.10   -1.23 +0.10/-0.20
log10 ϵe [-6, -0.3]            -2.68 +1.40/-1.46   -2.41 +2.10/-3.55   -2.96 +1.48/-1.62    -3.53 +1.07/-0.65   -2.84 +0.57/-0.46
log10 ϵB [-6, -0.3]            -3.22 +1.31/-1.22   -2.61 +2.31/-2.37   -3.56 +1.25/-1.05    -1.00 +0.67/-0.47   -1.63 +0.87/-0.59
p [2.01, 2.9]                  2.12 +0.09/-0.10    2.14 +0.18/-0.12    2.13 +0.06/-0.09     2.80 +0.10/-0.11    2.81 +0.09/-0.12
log10 ξN [-5, 0]               -2.36 +1.23/-1.13   -2.37 +1.30/-1.42   -2.75 +1.40/-1.32    -4.31 +0.85/-0.63   -3.95 +0.66/-0.65

KN model parameter estimation, CBC (mag.+56Ni):
log10 κ (cm² g⁻¹) [-1, 0]: -0.34 +0.24/-0.25
log10 Mej (M⊙) [-2.0, -0.8]: -1.22 +0.29/-0.32
log10 MNi (M⊙) [-5.0, -3.0]: -3.40 +0.10/-0.02
log10 χLsd(0) (erg s⁻¹) [45.0, 48.5]: 47.66 +0.37/-0.52
vmin (c) [0.001, 0.15]: 0.10 +0.01/-0.01
vmax (c) [0.18, 0.35]: 0.26 +0.04/-0.04
δ [1.0, 3.0]: 1.74 +0.70/-0.62

KN model parameter estimation, BNS (two-comp.):
log10 Mej,1 (M⊙) [-3, -1]: -1.18 +0.18/-0.08
log10 vej,1 (c) [-1.0, -0.5]: -0.90 +0.05/-0.07
log10 κ1 (cm² g⁻¹) [-2.0, 0.5]: -0.41 +0.42/-0.40
βv,1 [1, 5]: 2.47 +1.39/-1.27
Ye,1 [0.2, 0.4]: 0.31 +0.06/-0.07
log10 Mej,2 (M⊙) [-3, -1]: -1.11 +0.10/-0.02
log10 vej,2 (c) [-2, -1]: -1.34 +0.11/-0.10
log10 κ2 (cm² g⁻¹) [-0.5, 2.0]: 1.50 +0.39/-0.35
βv,2 [1, 5]: 3.34 +1.26/-1.14
Ye,2 [0.1, 0.2]: 0.14 +0.03/-0.03

KN model parameter estimation, BNS (central eng.):
log10 Mej (M⊙) [-3, -1]: -1.84 +0.12/-0.11
log10 vej (c) [-2, -0.5]: -1.50 +0.20/-0.31
ζ [0, 0.5]: 0.20 +0.16/-0.15
κ (cm² g⁻¹) [0.1, 10.0]: 3.01 +1.56/-1.61
log10 L0 (erg s⁻¹) [45.5, 50.0]: 48.06 +0.63/-0.69
nism (cm⁻³) [3.0, 5.0]: 3.47 +0.60/-0.42

KN model parameter estimation, NS-BH:
MBH (M⊙) [5.0, 100.0]: 49.99 +23.73/-17.33
MNS (M⊙) [1.2, 2.27]: 1.49 +0.33/-0.24
χBH [0.05, 1.0]: 0.68 +0.20/-0.26
ΛNS [5.0, 5000.0]: 1396 +1548/-1163
ζ [0, 0.5]: 0.33 +0.13/-0.17
log10 vej,2 (c) [-2.0, -0.5]: -1.48 +0.59/-0.41
log10 κ1 (cm² g⁻¹) [-2.0, 0.5]: -1.48 +0.59/-0.41
log10 κ2 (cm² g⁻¹) [-0.5, 2.0]: -0.46 +0.53/-0.59

KN model parameter estimation, TDE:
MBH (10⁶ M⊙) [0.1, 100.0]: 0.45 +1.39/-0.34
Mstar (M⊙) [0, 1.44]: 0.69 +0.43/-0.49
tvisc. (days) [1e-3, 1e5]: 72669 +18785/-16641
b [0, 2]: 0.72 +0.77/-0.64
η [0.005, 0.4]: 0.13 +0.13/-0.11
log10 LEdd (erg s⁻¹) [43.1, 46.1]: 43.66 +0.57/-0.48
log10 Rphoto (km) [-4, 4]: -1.69 +2.06/-1.60
Lphoto (erg s⁻¹) [0, 4]: 2.77 +1.12/-1.17
[1] F. G. Team, GRB 230307A: Fermi GBM final real-time localization, GRB Coordinates Network, Circular Service 2023GCN.33405....1F (2023).
[2] S. Xiong, C. Wang, Y. Huang, and G. Team, GRB 230307A: GECAM detection of an extremely bright burst, GRB Coordinates Network, Circular Service 2023GCN.33406....1X (2023).
[3] D. Svinkin, D. Frederiks, M. Ulanov, et al., Konus-Wind detection of GRB 230307A, GRB Coordinates Network, Circular Service 2023GCN.33427....1S (2023).
[4] H. Sun, C.-W. Wang, J. Yang, et al., Magnetar emergence in a peculiar gamma-ray burst from a compact star merger, Natl. Sci. Rev. 12, 3 (2025).
[5] A. J. Levan, B. P. Gompertz, O. S. Salafia, et al., Heavy-element production in a compact object merger observed by JWST, Nature 626, 737–741 (2024).
[6] Y.-H. Yang, E. Troja, B. O'Connor, et al., A lanthanide-rich kilonova in the aftermath of a long gamma-ray burst, Nature 626, 742–745 (2024).
[7] C. Casentini, M. Tavani, C. Pittori, et al., GRB 230307A: AGILE/MCAL detection, GRB Coordinates Network, Circular Service 2023GCN.33412....1C (2023).
[8] V. Kalogera, U. Kolb, and A. R. King, Supernova kicks, magnetic braking, and neutron star binaries, ApJ 504, 927 (1998).
[9] X. I. Wang, Y.-W. Yu, J. Ren, et al., What powered the kilonova-like emission after GRB 230307A in the framework of a neutron star–white dwarf merger?, ApJL 964, L9 (2024).
[10] Z. Du, H. Lü, Y. Yuan, X. Yang, and E. Liang, The progenitor and central engine of a peculiar GRB 230307A, ApJL 962, L27 (2024).
[11] D. Tsang, J. S. Read, T. Hinderer, A. L. Piro, and R. Bondarescu, Resonant shattering of neutron star crusts, Phys. Rev. Lett. 108, 011102 (2012).
[12] C. Palenzuela, L. Lehner, M. Ponce, et al., Electromagnetic and gravitational outputs from binary-neutron-star coalescence, Phys. Rev. Lett. 111, 061105 (2013).
[13] D. Neill, W. G. Newton, and D. Tsang, Resonant shattering flares as multimessenger probes of the nuclear symmetry energy, MNRAS 504, 1129–1143 (2021).
[14] A. G. Suvorov and K. D. Kokkotas, Precessing magnetars as central engines in short gamma-ray bursts, MNRAS 502, 2482–2494 (2021).
[15] S. T. McWilliams and J. Levin, Electromagnetic extraction of energy from black-hole–neutron-star binaries, ApJ 742, 90 (2011).
[16] E. R. Most and A. A. Philippov, Electromagnetic precursors to gravitational-wave events: Numerical simulations of flaring in pre-merger binary neutron star magnetospheres, ApJL 893, L6 (2020).
[17] A. M. Beloborodov, Emission of magnetar bursts and precursors of neutron star mergers, ApJ 921, 92 (2021).
[18] A. J. Cooper, O. Gupta, and Z. Wadiasingh, Pulsar revival in neutron star mergers: multimessenger prospects for the discovery of pre-merger coherent radio emission, MNRAS 519, 3923–3946 (2023).
[19] H.-J. Lü, E.-W. Liang, B.-B. Zhang, and B. Zhang, A new classification method for gamma-ray bursts, ApJ 725, 1965 (2010).
[20] H.-J. Lü, B. Zhang, E.-W. Liang, B.-B. Zhang, and T. Sakamoto, The 'amplitude' parameter of gamma-ray bursts and its implications for GRB classification, MNRAS 442, 1922–1929 (2014).
[21] P. A. Abell, J. Allison, S. F. Anderson, et al., LSST Science Book, version 2.0, arXiv:0912.0201 (2009).
[22] J. S. Speagle, dynesty: a dynamic nested sampling package for estimating Bayesian posteriors and evidences, MNRAS 493, 3132–3158 (2020).
[23] G. Ryan, H. van Eerten, L. Piro, and E. Troja, Gamma-ray burst afterglows in the multimessenger era: Numerical models and closure relations, ApJ 896, 166 (2020).
[24] B. D. Metzger, Kilonovae, Living Rev. Rel. 23, 1 (2020).
[25] P. T. H. Pang, T. Dietrich, M. W. Coughlin, et al., An updated nuclear-physics and multi-messenger astrophysics framework for binary neutron star mergers, Nature Communications 14, 8352 (2023).
[26] N. Sarin, C. M. B. Omand, B. Margalit, and D. I.
Jones, On the diversity of magnetar-driven kilonovae, MNRAS 516, 4949–4962 (2022).
[27] N. Sarin, M. Hübner, C. M. B. Omand, et al., redback: a Bayesian inference software package for electromagnetic transients, MNRAS 531, 1203–1227 (2024).
[28] S. Anand, M. W. Coughlin, M. M. Kasliwal, et al., Optical follow-up of the neutron star–black hole mergers S200105ae and S200115j, Nature Astron. 5, 46 (2021).
[29] M. Bulla, POSSIS: predicting spectra, light curves and polarization for multi-dimensional models of supernovae and kilonovae, MNRAS 489, 5037–5045 (2019).
[30] B. Mockler, J. Guillochon, and E. Ramirez-Ruiz, Weighing black holes using tidal disruption events, ApJ 872, 151 (2019).
[31] M. Jacobi, F. Magistrelli, E. Loffredo, et al., 56Ni production in long-lived binary neutron star merger remnants, arXiv:2503.17445 (2025).
[32] S. Ai, H. Gao, and B. Zhang, Engine-fed kilonovae (mergernovae). II. Radiation, ApJ 978, 52 (2025).
[33] H. Jeffreys, Theory of Probability (Oxford University Press, Oxford, 1961).
[34] A. Vehtari, A. Gelman, and J. Gabry, Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC, Statistics and Computing 27, 1413 (2017).
[35] Y. Zenati, H. B. Perets, and S. Toonen, Neutron star–white dwarf mergers: early evolution, physical properties, and outcomes, MNRAS 486, 1805–1813 (2019).
[36] V. A. Villar, J. Guillochon, E. Berger, et al., The combined ultraviolet, optical, and near-infrared light curves of the kilonova associated with the binary neutron star merger GW170817: Unified data set, analytic models, and physical implications, ApJL 851, L21 (2017).
[37] M. Nicholl, B. Margalit, P. Schmidt, et al., Tight multi-messenger constraints on the neutron star equation of state from GW170817 and a forward model for kilonova light curve synthesis, MNRAS 505, 3016–3032 (2021).
[38] T. Dietrich and M. Ujevic, Modeling dynamical ejecta from binary neutron star mergers and implications for electromagnetic counterparts, Class. Quantum Grav. 34, 105014 (2017).
[39] M. W. Coughlin, T. Dietrich, B. Margalit, and B. D. Metzger, Multimessenger Bayesian parameter inference of a binary neutron star merger, MNRASL 489, L91–L96 (2019).
[40] S. De, D. Finstad, J. M. Lattimer, et al., Tidal deformabilities and radii of neutron stars from the observation of GW170817, PRL 121, 121 (2018).
[41] P. C. Peters, Gravitational radiation and the motion of two point masses, Phys. Rev. 136, B1224 (1964).
[42] N. Brandt and P. Podsiadlowski, The effects of high-velocity supernova kicks on the orbital properties and sky distributions of neutron-star binaries, MNRAS 274, 461 (1995).
[43] T. M. Tauris, M. Kramer, P. C. C. Freire, et al., Formation of double neutron star systems, ApJ 846, 170 (2017).
[44] M. Renzo, E. Zapartas, S. E. de Mink, et al., Massive runaway and walkaway stars: a study of the kinematical imprints of the physical processes governing the evolution and explosion of their binary progenitors, A&A 624, 28 (2019).
[45] P. Disberg, N. Gaspari, and A. J. Levan, Kinematic constraints on the ages and kick velocities of Galactic neutron star binaries, A&A 689, 22 (2024).
[46] K. Kiuchi, S. Fujibayashi, K. Hayashi, K. Kyutoku, Y. Sekiguchi, and M. Shibata, Self-consistent picture of the mass ejection from a one second long binary neutron star merger leaving a short-lived remnant in a general-relativistic neutrino-radiation magnetohydrodynamic simulation, Phys. Rev.
Lett. 131, 011401 (2023). [47] A. Murguia-Berthier et al., HARM3D+NUC: A New Method for Simulating the Post-merger Phase of Binary Neutron Star Mergers with GRMHD, Tabulated EOS, and Neutrino Leakage, Astrophys. J. 919, 95 (2021), arXiv:2106.05356 [astro-ph.HE]. [48] L. Combi and D. M. Siegel, GRMHD Simulations of Neutron-star Mergers with Weak Interactions: r- process Nucleosynthesis and Electromagnetic Signatures of Dynamical Ejecta, Astrophys. J. 944, 28 (2023), arXiv:2206.03618 [astro-ph.HE]. [49] L. Rezzolla and O. Zanotti, Relativistic hydrodynamics (OUP Oxford, 2013). [50] M. W. Coughlin, T. Dietrich, B. Margalit, and B. D. Metzger, Multimessenger Bayesian parameter inference of a binary neutron star merger, Mon. Not. Roy. Astron. Soc. 489, L91 (2019), arXiv:1812.04803 [astro-ph.HE]. [51] T. Dietrich and M. Ujevic, Modeling dynamical ejecta from binary neutron star mergers and implications for electromagnetic counterparts, Class. Quant. Grav. 34, 105014 (2017), arXiv:1612.03665 [gr-qc]. [52] D. Radice, A. Perego, K. Hotokezaka, S. A. Fromm, S. Bernuzzi, and L. F. Roberts, Binary Neutron Star Mergers: Mass Ejection, Electromagnetic Counterparts and Nucleosynthesis, Astrophys. J. 869, 130 (2018), arXiv:1809.11161 [astro-ph.HE]. [53] V. Nedora, F. Schianchi, S. Bernuzzi, D. Radice, B. Daszuta, A. Endrizzi, A. Perego, A. Prakash, and F. Zappa, Mapping dynamical ejecta and disk masses from numerical relativity simulations of neutron star mergers, Class. Quant. Grav. 39, 015008 (2022), arXiv:2011.11110 [astro-ph.HE]. [54] J. Mor´an-Fraile, F. K. R¨opke, R. Pakmor, M. A. Aloy, S. T. Ohlmann, F. R. N. Schneider, G. Leidi, and G. Li- outas, Self-consistent magnetohydrodynamic simulation of jet launching in a neutron star – white dwarf merger, Astron. Astrophys. 681, A41 (2024), arXiv:2310.08623 [astro-ph.HE]. [55] L. Hernquist, An Analytical Model for Spherical Galaxies and Bulges, Astrophys. J. 356, 359 (1990). [56] B. P. Abbott, R. Abbott, T. D. Abbott, and et al., On the Progenitor of Binary Neutron Star Merger GW170817, The Astrophysical Journal Letters 850, L40 (2017), arXiv:1710.05838 [astro-ph.HE]. [57] Y.-C. Zhang and X.-H. Yang, Size distribution of galaxies in sdss dr7: weak dependence on halo environment, Res. Astron. Astrophys. 19, 006 (2019). [58] J. F. Navarro, C. S. Frenk, and S. D. M. White, The Structure of Cold Dark Matter Halos, Astrophys. J. 462, 563 (1996), arXiv:astro-ph/9508025 [astro-ph]. [59] G. Girelli, L. Pozzetti, M. Bolzonella, and et al., The stellar-to-halo mass relation over the past 12 gyr i. stan- dard λcdm model, A&A 634, 23 (2020). [60] B. Diemer, COLOSSUS: A Python Toolkit for Cosmol- ogy, Large-scale Structure, and Dark Matter Halos, The Astrophysical Journal Supplement Series 239, 35 (2018), arXiv:1712.04512 [astro-ph.CO]. [61] T. Ishiyama, F. Prada, A. A. Klypin, and et al., The uchuu simulations: Data release 1 and dark matter halo concentrations, MNRAS 506, 4210–4231 (2021).
Multi-wavelength analysis of the progenitor of GRB 230307A via Bayesian model comparison

Viviane Alfradique*
Centro Brasileiro de Pesquisas Físicas, Rua Dr. Xavier Sigaud 150, 22290-180 Rio de Janeiro, RJ, Brazil

Rodrigo da Mata, Juan C. Rodríguez-Ramírez, and Clécio R. Bom
Centro Brasileiro de Pesquisas Físicas, Rua Dr. Xavier Sigaud 150, 22290-180 Rio de Janeiro, RJ, Brazil
(Dated: October 17, 2025)

GRB 230307A is one of the brightest long-duration gamma-ray bursts (GRBs) ever detected, yet its progenitor remains uncertain due to the variety of plausible astrophysical scenarios. In this work, we investigate four possible progenitors for GRB 230307A: a binary neutron star (BNS), a neutron star-white dwarf (NS-WD) system, a neutron star-black hole (NS-BH) merger, and a tidal disruption event (TDE) involving a white dwarf and a supermassive black hole. Additionally, we explore three distinct central engine models powering the kilonova associated with the BNS: radioactive decay of r-process nuclei in a two-component ejecta model, a magnetar-driven model including magnetic dipole spin-down, and a combined model of magnetar spin-down with 56Ni radioactive decay. We perform Bayesian multi-wavelength light-curve analyses using physically motivated models and priors, and evaluate model performance through Bayes factors and leave-one-out cross-validation (LOO) scores. Our results show a statistical preference for a BNS or NS-WD progenitor producing a kilonova powered by a magnetar and 56Ni decay, characterized by a 56Ni mass of ∼4 × 10^-4 M⊙ and an ejecta mass of 0.06 M⊙. Furthermore, under the assumption of a BNS origin within this model, we infer binary component masses of m1 = 1.81^{+0.46}_{-0.61} M⊙ and m2 = 1.61^{+0.65}_{-0.41} M⊙, with a dimensionless tidal deformability of Λ̃ = 471^{+318}_{-395}. From the component mass posteriors, we infer that the observed offset can be explained by a natal kick as long as the systemic velocity is nearly aligned with the pre-kick orbital motion. In this case, the required kick velocity (co-moving frame) and binary separation range within v'_k ∼ 100-150 km s^-1 and a0 ∼ 2-3 R⊙, respectively.

The discovery of GRB 230307A, reported on March 7, 2023, by NASA's Fermi Gamma-ray Space Telescope [1], the Gravitational Wave High-energy Electromagnetic Counterpart All-sky Monitor (GECAM, [2]), and Konus-Wind [3], represents one of the most luminous gamma-ray bursts ever observed. This event exhibited a peak flux of 4.48 × 10^-4 erg cm^-2 s^-1 and a high gamma-ray fluence of 3 × 10^-3 erg cm^-2 within the 10-1000 keV energy range [4]. The burst had a total duration of 42 seconds [4], and even 61 days after the event [5, 6], a faint counterpart remained detectable across multiple wavelengths. Follow-up observations spanned the optical, near-infrared, and soft X-ray bands, with data collected by instruments such as the James Webb Space Telescope (JWST, [5]), Hubble Space Telescope (HST, [6]), Very Large Telescope (VLT, [5]), Gemini South Telescope [5], Swift X-ray Telescope (XRT, [5]), Chandra X-ray Observatory [5], Australia Telescope Compact Array (ATCA, [5]), MeerKAT [5], AGILE [7], and the Lobster Eye Imager for Astronomy (LEIA, [4]). These comprehensive observations enabled a detailed characterization of the burst's afterglow and environment.
The rapid decay of the bolometric luminosity, along with the increase in the photosphere radius, suggests the presence of a thermal component that associates a kilonova with the GRB and supports the presence of lanthanides in the ejected material [5, 6]. One way to investigate the origin of the progenitor is to analyze how variations in kilonova evolution are intrinsically related to the properties of the ejecta and the remnant of the merger. This analysis provides valuable insights into the physical processes that are consistent with observations while excluding those that are not supported by the data. In the case of GRB 230307A, the rapid decay of the optical emission at early times, followed by the dominance of emission in the near-infrared (NIR) band, may indicate the presence of heavy isotopes [5, 6]. Furthermore, the detection of tellurium in the mid-infrared spectrum provides additional support for r-process nucleosynthesis, which is expected in BNS and NS-BH mergers. These observations, together with the significant offset between the burst position and its potential host galaxy at redshift z = 0.065, suggest a compact binary merger origin, possibly explained by a high kick velocity imparted to the neutron star component [8]. Another notable feature of GRB 230307A is the soft X-ray emission observed by LEIA [4] in the 0.5-4.0 keV band. This emission shows a plateau during the first 10 seconds, followed by a decay, a behavior commonly interpreted as the spin-down signature of a magnetar formed after the compact binary merger. However, the nature of the magnetar central engine remains debated. As noted in [9], neutrino emission could suppress the production of lanthanide-rich ejecta, and the mechanisms connecting spin-down luminosity to X-ray emission are still not fully understood [10]. In light of these challenges, Wang et al. [9] proposed an NS-WD merger with a possible remnant magnetar as the progenitor. In this scenario, the long-lived emission is explained by the lower density of the white dwarf, with heavy elements produced by 56Ni decay rather than the r-process typical of BNS mergers. They showed that an NS-WD system synthesizing ∼10^-3 M⊙ of 56Ni, with an ejecta mass of ∼0.1 M⊙, provides a consistent fit to the multi-wavelength afterglow and kilonova emission. Beyond the magnetar engine, other models have also been proposed, such as disruption of the NS crust during the inspiral phase [11-14] or magnetospheric interactions between the binary components [15-18]. Du et al. [10] were the first to investigate the progenitor of the exotic GRB 230307A. Their analysis relied on the phenomenological classification of GRBs into type I, associated with massive star collapses, and type II, resulting from compact object mergers, as proposed by [19, 20]. They found that GRB 230307A has an ε value (defined as the ratio between the isotropic gamma-ray energy and the rest-frame spectral peak energy) lying on the boundary between the two classes, with a 3σ deviation from the type II classification. However, the effective amplitude parameter they obtained is consistent with type II. Given the similarity of these values to those of other long GRBs, the authors concluded that GRB 230307A most likely originated from a compact object merger, either a BNS or an NS-WD system, although it remains unclear which of these channels is the progenitor.
Identifying the progenitor system responsible for transient astrophysical events, such as GRB 230307A, is a fundamental step in understanding the physics governing such explosions. The degeneracy in the observable signatures makes it crucial to develop robust model selection strategies. This is especially important as we enter the era of high-cadence, multi-messenger observations enabled by facilities such as the Vera C. Rubin Observatory, which will conduct the Legacy Survey of Space and Time [21], anticipated to reveal a large number of such events. A reliable identification of the progenitor will not only refine our models of compact binary evolution, but also improve our understanding of the equation of state (EOS) of dense matter, nucleosynthesis pathways, and the role of magnetic fields in shaping the observed electromagnetic emission. In this work, we investigate multiple progenitor scenarios for GRB 230307A within a comprehensive Bayesian framework, aiming to identify the model that best reproduces the observed multi-wavelength data, particularly the afterglow and kilonova components. We perform Bayesian inference using the nested sampling algorithm implemented in dynesty [22] and evaluate competing models with statistical tools including the Bayes factor, the LOO score, and the Kullback-Leibler (KL) divergence. These complementary metrics allow a robust model comparison by balancing goodness of fit, model complexity, and information content. The paper is structured as follows: Section I describes the observational data and theoretical models; Section II outlines the Bayesian methodology; Section III presents the model comparison and parameter inference results; and Section IV summarizes the implications for the identification of the GRB 230307A progenitor.

I. DATA AND EJECTA MODELS

A. Observational data

The data used to perform the multi-wavelength analysis were collected from the source data provided by [6] (cf. "Source Data Fig. 3"), which gathered data from the HST, JWST, Swift/XRT, Swift/UVOT, X-Shooter, Chandra, Gemini, XMM-Newton, and Fermi/GBM. The follow-up observations cover the optical with the Gemini telescope and the Southern Astrophysical Research telescope; the near-infrared with the Gemini telescope and the JWST, which also explored the mid-infrared and far-infrared bands; the radio with the ATCA; and the X-ray with Swift/XRT, the Chandra X-ray Observatory, and XMM-Newton.

B. Possible scenarios of GRB 230307A progenitor

To investigate the electromagnetic counterpart of GRB 230307A, we adopt a modeling approach that separates the afterglow and kilonova components, allowing a clear comparison between different scenarios for the thermal emission. We use the Python package afterglowpy [23] to model the synchrotron radiation from a Gaussian structured jet (unless stated otherwise), keeping the afterglow fixed across all analyses to isolate the effects of changing the kilonova emission models. The afterglow model includes standard forward shock emission, with free parameters: isotropic-equivalent kinetic energy E0, circumburst density n0, the magnetic- and electron-energy fractions εB and εe, respectively, electron power-law index p, jet core opening angle θc, and electron participation fraction ξN. Each kilonova model then explores distinct assumptions about the geometry, composition, and energy sources of the ejecta, allowing us to assess their individual contributions to the observed emission without introducing degeneracies from varying afterglow geometry.
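To make the afterglow component concrete, the following minimal sketch evaluates a Gaussian structured-jet light curve with afterglowpy in the spirit described above; it relies on afterglowpy's documented fluxDensity interface, and every numerical value is an illustrative placeholder rather than one of our posterior estimates.

import numpy as np
import afterglowpy as grb

# Gaussian structured-jet forward-shock model; all values below are
# placeholders, not fit results of this work.
Z = {
    'jetType': grb.jet.Gaussian,  # Gaussian angular energy profile
    'specType': 0,                # basic synchrotron spectrum
    'thetaObs': 0.0,              # viewing angle (rad)
    'E0': 1.0e53,                 # isotropic-equivalent kinetic energy (erg)
    'thetaCore': 0.1,             # jet core opening angle theta_c (rad)
    'thetaWing': 0.4,             # truncation angle of the Gaussian wing (rad)
    'n0': 0.1,                    # circumburst density (cm^-3)
    'p': 2.1,                     # electron power-law index
    'epsilon_e': 1.0e-3,          # electron energy fraction
    'epsilon_B': 1.0e-3,          # magnetic energy fraction
    'xi_N': 1.0,                  # electron participation fraction
    'd_L': 9.0e26,                # luminosity distance (cm), ~291 Mpc
    'z': 0.065,                   # redshift
}

t = np.geomspace(0.1, 61.0, 100) * 86400.0  # 0.1-61 days, in seconds
nu = np.full_like(t, 3.7e14)                # an optical frequency (Hz)
Fnu = grb.fluxDensity(t, nu, **Z)           # flux density (mJy)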
1. Binary Neutron Star Merger

We adopt the mixed-component kilonova model described in Metzger [24], where the total emission arises from the sum of two distinct ejecta components: a blue (lanthanide-poor) and a red (lanthanide-rich) component. This framework captures the multichannel nature of neutron star merger ejecta, with fast, polar outflows typically being neutron-rich and of low opacity (κ ∼ 0.5-1 cm^2 g^-1), producing an early, short-lived blue kilonova. In contrast, slower equatorial tidal ejecta are generally lanthanide-rich and highly opaque (κ ∼ 5-10 cm^2 g^-1), resulting in a longer-lasting red kilonova that peaks in the near-infrared. The model assumes that the transient is powered by the radioactive decay of r-process nuclei synthesized in the neutron-rich ejecta. Each component of the ejecta is treated as an expanding black body with fixed opacity and velocity and is characterized by its own mass, opacity, and thermalization efficiency. Following GW170817, this two-component, one-dimensional modeling approach has become standard practice. In multicomponent implementations, the components evolve independently, and their emissions are summed to obtain the total light curve. Here, we modified the implementation of this model provided by the Python package nmma [25].
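For intuition, the toy sketch below reproduces the basic construction of this model: two independent one-zone components, each an expanding gray-opacity photosphere heated by an r-process power law (here L_in ∝ t^-1.3, with a placeholder normalization), whose luminosities are simply summed. It is a deliberately crude stand-in for the nmma implementation we actually use, and all numbers are placeholders.

import numpy as np

DAY = 86400.0
MSUN = 1.989e33
C = 2.998e10
SIGMA_SB = 5.67e-5

def kilonova_component(t_day, m_ej, v_ej, kappa):
    # Crude one-zone luminosity for a single ejecta component.
    # Heating: L_in = 4e18 * (m_ej/g) * (t/day)^-1.3 erg/s (placeholder
    # normalization); the factor (1 - exp(-tau)) is a rough stand-in
    # for the thermalization/leakage efficiency near and after peak.
    t = t_day * DAY
    m, v = m_ej * MSUN, v_ej * C
    l_in = 4.0e18 * m * t_day ** (-1.3)
    tau = 3.0 * kappa * m / (4.0 * np.pi * (v * t) ** 2)
    return l_in * (1.0 - np.exp(-tau))

t = np.geomspace(0.1, 30.0, 200)                 # days
L_blue = kilonova_component(t, 0.02, 0.20, 0.5)  # fast, lanthanide-poor
L_red = kilonova_component(t, 0.05, 0.10, 8.0)   # slow, lanthanide-rich
L_tot = L_blue + L_red                           # components are summed
# Blackbody temperature of, e.g., the red photosphere at radius v*t:
T_red = (L_red / (4.0*np.pi*SIGMA_SB*(0.10*C*t*DAY)**2)) ** 0.25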
2. Binary Neutron Stars with central engine

We adopt the engine-powered kilonova model introduced by [26] and implemented in the Python package redback [27], which extends traditional radioactive kilonova frameworks to include energy injection from a long-lived, rapidly rotating neutron star (a magnetar) formed as a result of the binary neutron star merger. The model accounts for spin-down of the remnant via both magnetic dipole radiation and gravitational wave emission, depending on the strength and configuration of the internal toroidal and external dipolar magnetic fields. The resulting energy budget, governed by the competition between these two channels, significantly impacts the luminosity and temporal evolution of both the kilonova and its afterglow. In this framework, the magnetar wind interacts with the expanding ejecta, increasing its kinetic and internal energy while also modifying the radiative efficiency through time-dependent gamma-ray leakage. The model solves a set of coupled differential equations tracking the evolution of the Lorentz factor, internal energy, and radius of the ejecta, while accounting for radiative losses and adiabatic expansion.

3. Neutron Star-Black Hole Merger

We adopt the neutron star-black hole kilonova framework developed by [28], which models the expected optical/IR emission following NS-BH mergers by combining radiative transfer simulations with gravitational wave parameter constraints. We use the redback implementation of this model. Kilonova emission is modeled using the radiative transfer code POSSIS [29], with an updated grid of synthetic spectra tailored to NS-BH systems. The model incorporates a two-component ejecta structure: lanthanide-rich dynamical ejecta concentrated in the equatorial plane, and a more isotropic post-merger wind component with intermediate opacity.

4. Tidal disruption event

We utilize the tidal disruption event model implemented in the Modular Open Source Fitter for Transients (mosfit [30] and redback), which systematically fits TDEs using a physically motivated framework. The model is based on hydrodynamical simulations of polytropic stars disrupted by supermassive black holes (SMBHs), from which the fallback rate of stellar debris is derived and converted into a bolometric light curve assuming a constant radiative efficiency. The fallback rate Ṁfb depends on the black hole mass, stellar mass, stellar structure (parameterized by a polytropic index), and the impact parameter of the disruption event. To account for potential time delays due to circularization and disk accretion, the model introduces a viscous delay timescale that acts as a low-pass filter on the fallback rate. The resultant accretion-powered luminosity is then reprocessed through a photospheric layer, which is assumed to emit as a blackbody.

5. Neutron Star-White Dwarf Merger

We also consider the semi-analytical model proposed in [9], which interprets the optical and infrared emission of GRB 230307A within the framework of an NS-WD merger. In this scenario, the late-time kilonova-like emission is powered by both the spin-down energy of a long-lived magnetar and the radioactive decay of a small quantity of 56Ni, rather than by r-process nucleosynthesis, which is unlikely to occur in the low-density ejecta typical of NS-WD mergers. While the model is primarily applied to NS-WD mergers, it can also be extended to BNS mergers, as such systems are likewise expected to produce a magnetar accompanied by 56Ni synthesis [31, 32]. The model describes the merger ejecta as multiple concentric shells with fixed velocities and a power-law density profile. The spin-down luminosity of the magnetar follows a magnetic dipole formula with a fixed timescale derived from early X-ray data, while the 56Ni and 57Co radioactive heating is modeled using standard exponential decay laws. The energy evolution of each ejecta shell includes contributions from magnetar injection, radioactive decay, adiabatic expansion, and photon diffusion. The light curve is computed by summing the contributions from all shells and assuming blackbody emission with temperature given by the Stefan-Boltzmann law.

II. BAYESIAN ANALYSIS TESTS

A. Bayes factor

A straightforward way to compare two models in terms of "goodness of fit", based on observed data, is by evaluating the Bayes ratio. Let M denote a model characterized by a set of parameters θ_i^M. The Bayes ratio is defined as the ratio of the evidence terms Z (i.e., the integral of the product between the likelihood L and the prior p(θ_i^M) over the entire parameter space),

Z_M = \int \mathcal{L}(d | \theta_i^M)\, p(\theta_i^M)\, d^n\theta_i^M ,   (1)

of the two competing models:

\ln B_{12} = \ln (Z_1 / Z_2) .   (2)

The criterion for interpreting the Bayes factor is called Jeffreys' scale [33]. This scale classifies the strength of evidence in favor of model 1 into four categories:

1. Strong evidence: if ln B_12 > 5;
2. Moderate evidence: if 2 < ln B_12 ≤ 5;
3. Weak evidence: if 0 < ln B_12 ≤ 2;
4. No evidence (models are equally supported): if ln B_12 = 0.

If ln B_12 assumes negative values, then model 2 is better supported than model 1, and the classification is as follows:

1. Strong evidence: if ln B_12 < -5;
2. Moderate evidence: if -5 ≤ ln B_12 < -2;
3. Weak evidence: if -2 ≤ ln B_12 < 0.

Throughout this article, we will compute the Bayes factor relative to our best-fitting model as a reference, which we will refer to as ln B_{1,ref} = ln(Z_1/Z_ref).
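As a concrete illustration of equations (1)-(2), the sketch below estimates each model's log-evidence with dynesty (the sampler we use throughout) and differences them into ln B_12. The Gaussian toy likelihoods and the flat prior are stand-ins for the actual afterglow-plus-transient models.

import numpy as np
from dynesty import NestedSampler

# Toy stand-ins for two competing models; in the real analysis the
# log-likelihood wraps the multi-wavelength light-curve prediction.
def loglike_m1(theta):
    return -0.5 * np.sum((theta - 1.0) ** 2)

def loglike_m2(theta):
    return -0.5 * np.sum((theta + 1.0) ** 2)

def prior_transform(u):
    return 10.0 * u - 5.0  # maps the unit cube to a uniform prior on [-5, 5]

def log_evidence(loglike, ndim=2):
    # 500 live points and dlogz = 0.1, matching the settings quoted in
    # App. A; dynesty's default bound is multi-ellipsoidal.
    sampler = NestedSampler(loglike, prior_transform, ndim, nlive=500)
    sampler.run_nested(dlogz=0.1)
    return sampler.results.logz[-1]

ln_b12 = log_evidence(loglike_m1) - log_evidence(loglike_m2)
print(f"ln B12 = {ln_b12:.2f}")  # ln B12 > 5: strong evidence for model 1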
B. Leave-One-Out cross-validation

Leave-One-Out cross-validation [34] is a fully Bayesian method for assessing the predictive performance of a model by estimating how well it predicts each observation when that observation is left out of the fitting process. LOO evaluates the likelihood of each observation when left out of the fit, averaging over the full posterior distribution of the model parameters to approximate the expected log predictive density (elpd) for new data. The LOO estimate is defined as the sum of the logarithms of the posterior predictive probabilities of each left-out observation, p(y_i | y_{-i}):

\mathrm{elpd}_{LOO} = \sum_{i=1}^{n} \log p(y_i | y_{-i}) ,   (3)

where y_i is the i-th observation and y_{-i} denotes the data with the i-th observation removed. Higher elpd_LOO values indicate better predictive performance.

C. Kullback-Leibler divergence

The Kullback-Leibler divergence is a primary measure of the discrepancy between two probability distributions, p(θ⃗) and q(θ⃗); it quantifies the amount of information lost when the distribution q(θ⃗) is used instead of the distribution p(θ⃗). The KL divergence, D_KL(p||q), is defined as

D_{KL}(p||q) = \int p(\vec{\theta}) \log\frac{p(\vec{\theta})}{q(\vec{\theta})}\, d^n\theta ,   (4)

and quantifies how well q(θ⃗) approximates p(θ⃗). The primary Bayesian analysis in this work compares different kilonova models while fixing the afterglow model. Accordingly, our results explore the KL divergence of the posterior distributions obtained with the Markov Chain Monte Carlo method, focusing solely on the afterglow parameter space. The KL divergence values are interpreted such that values below 0.1 indicate negligible divergence, values between 0.1 and 1 suggest moderate divergence, and values greater than 1 denote substantial divergence between the posterior distributions.
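In practice the posteriors are available only as samples, so equation (4) must be estimated numerically. The sketch below does this for a one-dimensional marginal with Gaussian kernel density estimates and a trapezoidal quadrature; this is one simple choice of estimator, not necessarily the one used to produce Table II, and the sample arrays are placeholders.

import numpy as np
from scipy.stats import gaussian_kde

def kl_divergence_1d(samples_p, samples_q, n_grid=512):
    # Approximate p and q with KDEs and integrate p*log(p/q) on a
    # common grid (a discretized version of equation (4)).
    p_kde, q_kde = gaussian_kde(samples_p), gaussian_kde(samples_q)
    lo = min(samples_p.min(), samples_q.min())
    hi = max(samples_p.max(), samples_q.max())
    grid = np.linspace(lo, hi, n_grid)
    p = np.clip(p_kde(grid), 1e-300, None)  # guard against log(0)
    q = np.clip(q_kde(grid), 1e-300, None)
    return np.trapz(p * np.log(p / q), grid)

# Placeholder samples standing in for, e.g., the log10(E0) marginals of
# the reference (CBC magnetar spin-down) run and of a competing run:
rng = np.random.default_rng(0)
ref = rng.normal(53.2, 0.5, 5000)
alt = rng.normal(53.9, 0.3, 5000)
print(f"D_KL = {kl_divergence_1d(ref, alt):.2f}")  # >1: substantial divergence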
III. RESULTS

In this section, we present our multi-wavelength analyses of GRB 230307A. In the first subsection, we describe our model selection analysis, aimed at investigating the nature of the progenitor and the ejecta mechanism that powered the observed kilonova emission. In subsection III B, we describe the influence of the kilonova emission on the GRB, and in subsection III C, we explore the binary properties of the GRB 230307A progenitor and investigate the possibility of a supernova natal kick to explain the observed offset.

A. Model selection: preferred progenitor system and central engine scenario

Here, we apply the statistical model selection methods described in Section II to a joint analysis of the GRB afterglow and a candidate kilonova signature or emission from a tidal disruption event. We investigate several compact merger progenitor scenarios that could produce the kilonova emission, including a compact binary (either a binary neutron star merger or a white dwarf-neutron star merger) and a neutron star-black hole merger. The median values and corresponding 1σ uncertainties for the physical parameters of each model analyzed in this work are presented in Table III. The results show reasonable agreement, within 1σ to 2σ, with previous estimates reported in [6] and [9]. The parameter inference from the magnetar spin-down model indicates an ejecta mass of 0.06 M⊙ and a 56Ni mass of ∼4 × 10^-4 M⊙, consistent with that expected from a typical NS-WD merger [35], while also remaining compatible with the higher, more characteristic 56Ni production of a BNS merger [31]. Therefore, throughout the remainder of this paper, we adopt this model as a generic compact binary coalescence (CBC) magnetar spin-down scenario, without excluding either of these two progenitor possibilities.

Table I summarizes our results for the different possible scenarios, presenting the Bayes factors, LOO scores, and maximum likelihood values. Based on the Bayes factor, we find that the scenario best supported by the data is the CBC magnetar spin-down model, which yields a maximum log-likelihood of -99.86. This corresponds to likelihood ratios of 1.1, 1.8, 39, and 85 relative to the BNS two-component kilonova, BNS general merger-nova, NS-BH, and TDE models, respectively. We adopt this model as the reference for the Bayes factor comparison. Figure 1 presents the multi-wavelength light curves of GRB 230307A for all models discussed in Section II, assuming the best-fit parameters and prior bounds listed in Table III.

We compare models using both Bayes factors and LOO scores (see results in Table I). The Bayes factor analysis provides strong evidence against all alternative models compared to the reference model, with logarithmic Bayes factor values below -5. The LOO cross-validation analysis indicates that the CBC magnetar spin-down model provides the best predictive accuracy, with elpd_LOO = -52 ± 17. The BNS two-component model exhibits lower predictive performance (elpd_LOO = -141 ± 42), but its difference relative to the magnetar spin-down model corresponds to only ∼2σ, which does not constitute decisive evidence against it. Therefore, while the CBC magnetar spin-down scenario is statistically preferred, the BNS two-component model remains a possible alternative. The BNS general merger-nova model yields an even lower elpd_LOO, with a difference of approximately 2.6σ relative to the CBC magnetar spin-down model, indicating weaker support compared to the BNS two-component model, yet it cannot be entirely excluded. By contrast, the NS-BH and SBH-WD astrophysical scenarios have extremely low elpd_LOO values, suggesting poor predictive performance. However, the large standard errors imply that their true predictive ability is highly uncertain; thus, while these models are strongly disfavored, they cannot be strictly ruled out based solely on this analysis. These results reinforce the conclusions of [9] and [6], providing strong evidence, in terms of both goodness of fit and model complexity, in favor of a scenario where the progenitor of GRB 230307A is the coalescence of a compact binary, compared to systems formed by NS-BH mergers producing kilonova emission, or SBH-WD mergers leading to tidal disruption events.

To gain a deeper understanding of our results and establish connections between the underlying emission processes, we analyzed the best-fit parameters of each model in each wavelength band. The BNS two-component model provides the best fit to the NIR data, with a modest improvement in chi-squared (χ2 ∼ 3) relative to the CBC magnetar spin-down model. In contrast, in the optical band, the CBC magnetar spin-down model performs better (χ2 ∼ 36), while the BNS two-component (χ2 ∼ 45) and BNS general merger-nova (χ2 ∼ 69) models are slightly less favored. Both the NS-BH and TDE models are strongly disfavored in both bands, exhibiting significantly higher χ2 values (≈ 160-1650), indicating poor agreement with the observed thermal emission of GRB 230307A. The TDE model's failure is likely due to its lack of lanthanide-rich ejecta, which are essential to reproduce the NIR flux.
Additionally, the poor fit of the NS-BH model suggests that the observed data require a blue optical emission component, commonly attributed to lanthanide-poor disk winds, that is typically absent in standard NS-BH kilonova models. On the other hand, the radio and X-ray bands exhibit low χ2 values (approximately 0.5-4 for the CBC kilonova models and approximately 15-320 for the TDE and NS-BH models), reflecting the dominant contribution of the afterglow emission relative to the kilonova component in fitting the observed data. Our results show the best fit for the X-ray band with the CBC magnetar spin-down model, while the radio data are better described by the BNS two-component model. Overall, the CBC kilonova models provide a superior description of the afterglow-dominated data, as also reflected by their higher maximum likelihood values (see Table I for a comparison).

TABLE I. Summary statistical results for the multi-wavelength analysis of GRB 230307A.

Transient emission model | ln Z_1/Z_ref | Maximum ln(L) | LOO | Dimensions | KN model reference
BNS system (two-component kilonova) | -25.81 | -110.69 | -141 ± 42 | 17 | [36]
BNS system (General Merger-Nova) | -87.08 | -181.06 | -52 ± 17 | 13 | [26]
CBC system (magnetar spin-down) | - | -99.86 | -170 ± 43 | 14 | [9]
NS+BH system (two-component kilonova - ejecta relation) | -3838.82 | -3932.98 | -3906 ± 2854 | 15 | [36]
SBH+WD system (TDE - 4/3 polytrope stars) | -8388.02 | -8458.49 | -5370 ± 2946 | 15 | [30]

B. Modeling the afterglow of GRB 230307A with and without kilonova emission

It has been established in previous studies that the emission from GRB 230307A is consistent with a gamma-ray burst accompanied by kilonova emission [6, 9]. In particular, these works showed that the inclusion of a kilonova component is essential to reproduce the enhanced brightness of the optical and NIR light curves compared to standard afterglow models. In this context, we analyze the ability of a GRB model with a Gaussian jet structure to fit the observed data and quantify the improvement achieved when including a kilonova component, described by the two best-fit models found in the previous subsection: the magnetar spin-down scenario and the two-component kilonova. Furthermore, we evaluate how the inclusion of transient emission affects the inferred parameters describing the afterglow.

Our analysis shows that the afterglow-only model yields a maximum log-likelihood lower by factors of 23 and 25 compared to models in which the BNS two-component kilonova or CBC magnetar spin-down emission, respectively, is added to the afterglow. This result is mainly governed by the NIR band, particularly the F444W and F277W filters, which provide well-measured data and yield considerable χ2 values of approximately 4030 and 564, respectively. The large χ2 values from these bands reveal the need for an additional emission component to better reproduce the observational data at late times (after ≈ 2 days post-explosion; see the panel in Fig. 2). The best-fitting light curve for the GRB-only model shows better agreement with the data in the X-ray and optical bands, yielding lower χ2 values in the range of 0.5-36.

In Table II, we present the KL divergence values obtained for all transient emission models explored in this work, assuming the CBC magnetar spin-down model as the fiducial model. We find that most parameter distributions from the GRB-only inference, except for the magnetic energy fraction, diverge substantially from the fiducial model, with KL values exceeding 1.
This divergence is further supported by the significant disagreement, greater than 1σ, between the posterior means (see Fig. 3). The results obtained for the NS-BH and TDE models show partial agreement within 1σ, with moderate divergence from the fiducial model for log10 E0, log10 n0, log10 εe, log10 θc, and significant divergence for the remaining parameters. In contrast, all BNS scenarios (two-component and general merger-nova models) exhibit negligible divergence for most parameters. This emphasizes that the inference of afterglow parameters is reliable only when the assumed progenitor scenario matches the true one. Otherwise, the inclusion of different transient components, or their omission, results in divergent Bayesian inferences and, consequently, different interpretations of the afterglow emission.

TABLE II. KL divergence values computed from the posterior distributions of the Gaussian jet-structure parameter model across the different scenarios investigated in this work. The results obtained with the CBC magnetar spin-down model (see subsection I B 5) are taken as the reference (true) probability distribution.

Parameter | GRB-only | BNS (two-comp.) | BNS (central eng.) | NS-BH | TDE
log10 E0 | 5.81 | 0.09 | 0.03 | 0.72 | 0.74
log10 n0 | 2.44 | 0.14 | 0.06 | 0.51 | 0.24
log10 θc | 2.31 | 0.07 | 0.02 | 1.04 | 0.87
log10 εe | 2.74 | 0.04 | 0.03 | 0.68 | 0.48
log10 εB | 0.62 | 0.18 | 0.07 | 1.64 | 1.38
p | 7.49 | 0.03 | 0.41 | 2.77 | 3.58
log10 ξN | 2.26 | 0.04 | 0.07 | 1.41 | 1.16

C. Binary progenitor properties and implications for the observed offset of GRB 230307A

The results presented in the previous subsections suggest that the optical emission observed from GRB 230307A is most consistent with a kilonova scenario originating from the merger of a compact binary system. Within this framework, we now explore the physical properties of the potential progenitor system. Assuming that the kilonova originated from a BNS merger and following [37], we adopt phenomenological fits from numerical-relativity simulations that relate the ejecta parameters to the binary properties, such as the individual masses and compactnesses. For the dynamical ejecta mass, we use the fitting relations derived by [38], while the disk mass fits are taken from [39]. We fix the Tolman-Oppenheimer-Volkoff maximum mass to 2.17 M⊙, following the constraint reported in [37]. To estimate the neutron star radius, we use the empirical fit for R1.4 (the radius of a 1.4 M⊙ neutron star) as a function of the binary tidal deformability and chirp mass, based on the relations provided in [40] and calibrated across different EOS.

Adopting the model that best fits the observed data, which combines magnetar spin-down and the radioactive decay of 56Ni, we infer that the binary system is composed of a primary with a mass of m1 = 1.81^{+0.46}_{-0.61} M⊙ and a secondary with a mass of m2 = 1.61^{+0.65}_{-0.41} M⊙, the dimensionless tidal deformability of the binary being Λ̃ = 471^{+318}_{-395}. Assuming a BNS two-component model for the kilonova emission, we obtain constraints on the binary masses of m1 = 1.82^{+0.45}_{-0.61} M⊙ and m2 = 1.42^{+0.51}_{-0.22} M⊙, together with an associated dimensionless tidal deformability of Λ̃ = 439^{+368}_{-350}.

FIG. 1. GRB 230307A multi-wavelength light curves. Solid lines show the best-fitting model curves obtained from joint Bayesian inference for different progenitor scenarios (shown in different colors; see App. A for details). Black dots represent observational data points with uncertainties, and triangles denote upper limits. In some cases, the error bars are not visible because their size is smaller than the plotting scale.
Although these results suggest a slightly greater asymmetry between the mass components, they remain in overall very good agreement within the uncertainties. These values correspond to the posterior mean estimates with their associated 68% confidence intervals. For both inferences, we assume uniform prior bounds of [1.2, 2.27] for m1, [1.2, m1] for m2, and [70, 790] for Λ̃. The contour plots of these results are shown in Fig. 4.

Based on the inferred BNS component masses, m1 and m2, we search for the conditions under which the binary can attain the projected offset lobs ∼ 38.9 kpc at coalescence. In particular, we consider the scenario in which the binary was orbiting its host galaxy with velocity v̄c and suddenly experienced a kick of systemic velocity v̄'_k, where the prime indicates that the quantity is measured in the pre-kick co-moving frame. This event leaves the binary with a velocity v̄0 = v̄c + v̄'_k relative to the galaxy centre. We then investigate the minimal kick velocity magnitude v'_k required to reproduce the observed kilonova offset. Hereafter, unbarred variables like vx denote the magnitude of the velocity vector v̄x.

Considering the binary has velocity v0 at a radial location r0 relative to the host-galaxy centre, the classical work-energy theorem gives the binary velocity vb at a radial position r as

v_b(r) = \sqrt{ v_0^2 + 2[\Phi_g(r_0) - \Phi_g(r)] } ,   (5)

where Φg(r) = Φ∗(r) + ΦDM(r) is the total gravitational potential of the galaxy, composed of stellar and dark-matter components, both approximated as spherically symmetric, which we define as functions of the stellar mass M∗ and redshift z, as detailed in App. B.

FIG. 2. Multi-wavelength light curves of GRB 230307A from the afterglow-only emission (green) and with the addition of a kilonova, modeled using the BNS two-component model (blue) and the CBC magnetar spin-down model (orange), for the X-ray, optical, and NIR bands. Black dots represent observational data with uncertainties, and triangles indicate upper limits. For certain data points, the error bars are not shown because they are smaller than the scale of the plot.

To estimate the flight time elapsed between ejection and coalescence, ∆tP, we assume that the binary velocity is approximately aligned with the radial direction, vb ≈ dr/dt, which is valid when the radial distance at coalescence is much larger than the initial radius r0. Integrating equation 5 then gives

\Delta t_P = \frac{5 c^5 (1+q)^2}{256\, G^3 q\, (m_1+m_2)^3}\, a_0^4 f_e = \int_{r_0}^{l_{obs}/\cos\theta_p} \frac{dr}{\sqrt{ v_0^2 + 2[\Phi_g(r_0) - \Phi_g(r)] }} .   (6)

The first equality in equation 6 corresponds to the coalescence time of a binary with initial separation a0, as derived by Peters [41], where q = m2/m1 < 1, fe ∼ 1 is a function of the initial binary eccentricity, G is the gravitational constant, and c is the speed of light. In the second equality, lobs/cos θp is the radial position at coalescence, with θp the angle between the coalescence direction and the plane of the sky. From equation 6, we note that the minimum velocity v0 required for the binary to reach the distance lobs/cos θp is

v_{0,th} = \sqrt{ 2[\Phi_g(l_{obs}/\cos\theta_p) - \Phi_g(r_0)] } .

This corresponds to the threshold value of v0 that ensures the integrand in equation 6 remains real. Finally, since v̄0 = v̄c + v̄'_k, the required systemic kick velocity can be obtained as

v'_k = \sqrt{ v_0^2 - \sin^2\theta_k\, v_c^2 } - v_c \cos\theta_k ,   (7)

with θk being the angle between v̄c and v̄'_k, and we approximate vc as the circular velocity of the galaxy at the radius r0 (see App. B for details).
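A minimal numerical sketch of how equations (6)-(7) can be solved is shown below. The galaxy potential phi_g is passed in as a callable (to be built as in App. B), fe is set to 1, and all function names are ours, introduced only for illustration.

import numpy as np
from scipy.integrate import quad

G, C = 6.674e-8, 2.998e10                      # cgs units
MSUN, RSUN, KPC = 1.989e33, 6.957e10, 3.086e21

def v0_threshold(phi_g, r0, r_final):
    # Minimum v0 that keeps the integrand of eq. (6) real out to r_final.
    return np.sqrt(2.0 * (phi_g(r_final) - phi_g(r0)))

def flight_time(phi_g, v0, r0, r_final):
    # Right-hand side of eq. (6): travel time from r0 to l_obs/cos(theta_p).
    integrand = lambda r: 1.0 / np.sqrt(v0**2 + 2.0*(phi_g(r0) - phi_g(r)))
    return quad(integrand, r0, r_final)[0]

def peters_time(m1, m2, a0_rsun, fe=1.0):
    # Left-hand side of eq. (6): GW coalescence time (Peters 1964),
    # with masses in Msun and the separation a0 in Rsun.
    q, m_tot = m2 / m1, (m1 + m2) * MSUN
    return 5.0*C**5*(1.0+q)**2 / (256.0*G**3*q*m_tot**3) * (a0_rsun*RSUN)**4 * fe

def kick_velocity(v0, vc, theta_k):
    # Eq. (7): systemic kick given the pre-kick orbital speed vc.
    return np.sqrt(v0**2 - (np.sin(theta_k)*vc)**2) - vc*np.cos(theta_k)

Scanning a0 until peters_time matches flight_time at v0 = v0_threshold recovers v'_k-a0 pairs of the kind shown in Fig. 5.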
In the upper panel of Fig. 5, we display the minimum values of v'_k required to produce the observed offset lobs = 38.9 kpc, as a function of a0 and the angle θk. The points in this v'_k - a0 space were obtained by solving the simultaneous equations (6-7), while fixing the values M∗ ∼ 2.4 × 10^9 M⊙ and z = 0.0647, consistent with the estimate by [6], using the posterior samples of m1 and m2 derived from the magnetar spin-down model (see Fig. 4), and uniformly sampling the parameters θp ∈ (0, π/3), θk ∈ (0, π), and r0 ∈ (0.5, 5) kpc. In the lower panel of Fig. 5, we show the coalescence time associated with the parameter configurations of the upper panel.

We observe from Fig. 5 that the minimum systemic velocity v'_k required to produce the offset lobs = 38.9 kpc ranges between ∼100 and 270 km s^-1. This velocity clearly depends on the kick angle θk, with smaller values of θk leading to the lowest v'_k. The corresponding binary separations a0 mostly lie within 2-3 R⊙, regardless of the kick direction. Under such conditions, the ejections would produce the electromagnetic transient within ∆tP ≲ 1 Gyr, as shown in the lower panel of Fig. 5. We note that this period of time is comparable to the galaxy evolution time scale, and much lower than the Hubble time (in ΛCDM cosmological models).

A possible explanation for the origin of the sudden ejection of the binary is the natal kick imparted to one of its components during an asymmetric supernova explosion [42]. This scenario is particularly plausible for systemic kicks that are quasi-aligned with the binary's orbital motion before the supernova event (i.e. θk ≲ π/4), which would imply systemic velocities of v'_k ∼ 100-150 km s^-1, as shown in Fig. 5. Natal kicks producing such parameter configurations of v'_k and a0 would require finely tuned conditions to avoid binary disruption, yet remain possible according to analyses of similar phenomena discussed in [43, 44] and are consistent with the inferred natal kick velocities of some observed Galactic BNS pulsars [45].

FIG. 3. Posterior comparison of GRB afterglow model parameters obtained from joint analyses with different possible astrophysical scenarios. The subplots show the residuals relative to the CBC magnetar spin-down model, defined as 2 × (PDF_KN model - PDF_ref.) / (PDF_KN model + PDF_ref.). The shaded gray region denotes the 1σ credible interval of the reference model's posterior distributions.

FIG. 4. Corner plot of the binary component masses (m1, m2) and dimensionless tidal deformability (Λ̃) from inference results of the magnetar spin-down model (orange) and BNS two-component model (blue). The top panel shows the final inference for the magnetar spin-down model.

IV. CONCLUSIONS

GRB 230307A is the second-brightest gamma-ray burst ever observed. The absence of an associated supernova, combined with persistent late-time near-infrared emission and rapidly decaying optical emission, along with the spectroscopic detection of emission lines indicative of heavy elements such as tellurium, strongly supports the presence of a kilonova powered by r-process nucleosynthesis. The long duration of GRB 230307A initially suggested a traditional classification as the result of the collapse of a massive star.
However, the presence of soft X-ray emission characterized by a plateau followed by a power-law decay points to the presence of a magnetar central engine, which is typically associated with a CBC. Furthermore, the large observed offset from the host galaxy provides additional support for a CBC origin. These factors pose a challenge to establishing a direct and comprehensive understanding of the progenitor of GRB 230307A.

FIG. 5. Scatter plots for parameter combinations of v'_k - a0 (upper) and ∆tP - a0 (lower), consistent with the observed merger offset lobs = 38.9 kpc, as derived from equations (6-7). Colors indicate different assumed values of θk.

In Section III A, we present a detailed Bayesian multi-wavelength analysis of GRB 230307A, examining various progenitor scenarios such as BNS, NS-WD, and NS-BH mergers, and a white dwarf TDE. Model comparison based on the Bayes factor and LOO score strongly favors a BNS or NS-WD progenitor, in which the emission is powered by magnetar spin-down and 56Ni radioactive decay. This conclusion aligns with previous works [4, 10], which analyzed prompt X-ray and gamma-ray data from LEIA and GECAM and reached similar findings. Our analysis further demonstrates that a kilonova component, in addition to the standard afterglow emission, is essential to reproduce the observations, underscoring the complex nature of the emission mechanisms in GRB 230307A.

We also investigated the source properties of the possible progenitor of GRB 230307A. Assuming a BNS progenitor, we used the relations between the ejecta properties and the masses and tidal deformabilities of the binary components as described by Dietrich and Ujevic [38] and Coughlin et al. [39]. Applying these relations to the two best-fit models, the magnetar spin-down and the two-component kilonova, we performed a Bayesian inference to constrain the BNS source properties. Both models yielded consistent results, indicating component masses of m1 = 1.81^{+0.46}_{-0.61} M⊙ and m2 = 1.61^{+0.65}_{-0.41} M⊙, with a tidal deformability of Λ̃ = 471^{+318}_{-395} for the magnetar spin-down model.

Moreover, we used the posterior mass samples to investigate the ejection conditions of the binary progenitor responsible for the large projected merger offset. Assuming the system was ejected from the nearest galaxy by a kick imparted on top of its orbital motion, we find that the minimum systemic velocity v'_k (in the co-moving frame) required to reproduce the observed offset of lobs = 38.9 kpc ranges between ∼100 and 270 km s^-1. The corresponding post-kick binary separation falls within a0 ∼ 2-3 R⊙. If the kick is quasi-aligned with the pre-kick orbital motion, the required velocity decreases to v'_k ∼ 100-150 km s^-1. This special case can be plausibly explained within the natal-kick scenario, where one binary component undergoes an asymmetric supernova explosion, under finely tuned conditions that prevent binary disruption [43, 44].

The results of this work provide statistical evidence supporting a CBC origin for GRB 230307A. The inferred parameter values from the magnetar spin-down model are consistent with either a BNS merger or an NS-WD merger. Distinguishing between these two progenitor channels remains challenging, primarily due to the limited availability of detailed numerical simulations for NS-WD mergers, in contrast to the extensive modeling developed for BNS systems (e.g., [46-53]).
Without comparable simulation data, including detailed predictions of electromagnetic counterparts and ejecta properties, it is currently not possible to determine the nature of the progenitor based solely on observational features. Future progress will require the development of a systematic suite of general-relativistic hydrodynamic simulations targeting mergers between neutron stars and white dwarfs. This poses unique technical challenges due to the broad range of spatial and temporal scales required to accurately model both types of stellar remnants with realistic equations of state. These simulations must also incorporate variations in mass ratios, white dwarf compositions, and neutron star properties [54]. Such efforts are essential for producing reliable predictions of observables, including kilonova light curves, nucleosynthetic yields, and gravitational wave signatures specific to these systems. Ultimately, this work would provide the necessary foundation for distinguishing between different types of compact binary progenitors in future multi-messenger observations.

Gravitational waves emitted during the coalescence could provide complementary observational constraints to resolve this ambiguity. Unfortunately, the distance of GRB 230307A is slightly beyond the current sensitivity of the LIGO, Virgo, and KAGRA detectors, meaning that even if the event had occurred during the fourth observing run, it would have been undetectable. Future GW detectors, with sensitivity extending beyond the distance of GRB 230307A, will open a promising opportunity for multi-messenger astronomy, providing complementary insights into its progenitors.

ACKNOWLEDGEMENTS

The authors made use of the Sci-Mind server machines developed by the CBPF AI LAB team and would like to thank Paulo Russano and Marcelo Portes de Albuquerque for all the support in infrastructure matters. J.C.R.R. acknowledges support from the Rio de Janeiro State Funding Agency FAPERJ, grant E26/205.635/2022. C.R.B. acknowledges the financial support from CNPq (316072/2021-4), from FAPERJ (grants 201.456/2022 and 210.330/2022), and from the FINEP contract 01.22.0505.00 (ref. 1891/22).

Appendix A: Parameter inference from multi-wavelength observations

In this appendix, we present the results of a joint Bayesian inference from the multi-wavelength analysis of GRB 230307A, assuming that the emission consists of an afterglow signature and a transient emission (either kilonova or TDE). The Bayesian inference was performed using the dynamic nested sampling algorithm provided by the dynesty Python package. In each simulation, we employed 500 live points along with a multi-ellipsoidal bounding method, and terminated the sampling process when the change in log-evidence dropped below a tolerance threshold of 0.1. The median values and the 1σ confidence intervals are presented in Table III.

Appendix B: The gravitational potential of the host galaxy

We model the gravitational potential, Φg, of the putative host galaxy that formed the progenitor binary system discussed in the main text as spherically symmetric, composed of stellar and dark matter (DM) components:

\Phi_g(r) = \Phi_*(r) + \Phi_{DM}(r) .   (B1)

For the stellar component, we adopt the Hernquist profile [55]:

\Phi_*(r) = -\frac{G M_*}{r + a_h} ,   (B2)

where M∗ is the total stellar mass and ah is the scale radius, assumed to be ah = 0.55 R50 [56], with R50 the stellar half-mass radius. The latter is obtained as a function of M∗ from the empirical relation derived in [57].
For the DM component, we adopt the Navarro-Frenk-White (NFW) potential [58]:

\Phi_{DM}(r) = -\frac{4\pi G \rho_0 R_s^3}{r} \ln\left(1 + \frac{r}{R_s}\right) ,   (B3)

where ρ0 is the characteristic density of the DM halo and Rs its scale radius. These parameters are related to the halo mass, MDM, through

M_{DM} = 4\pi \rho_0 R_s^3 \left[ \ln(1 + c) - \frac{c}{1 + c} \right] ,   (B4)

where c is the concentration parameter. We estimate MDM as a function of the total stellar mass M∗ and the source redshift, using the empirical relation derived by [59]. The concentration parameter c is computed using the colossus toolkit [60], specifying M200, the source redshift, and the mass-concentration relation from Ishiyama et al. [61]. Given c and MDM, the scale radius is defined as Rs = Rvir/c, where

R_{vir} = \left( \frac{3 M_{DM}}{4\pi \Delta_{vir} \rho_c} \right)^{1/3} ,   (B5)

with ∆vir = 200 and the critical density of the Universe, ρc = 1.4 × 10^11 M⊙ Mpc^-3. Finally, the characteristic density ρ0 is obtained from equation B4. With the definitions above, the galactic potential in equation B1 is fully determined by specifying the total stellar mass M∗, the source redshift z, and the radial position r. Finally, given the gravitational potentials, the implied circular velocity of the galaxy at radius r is

v_c(r) = \sqrt{ r \left( \frac{d\Phi_*}{dr} + \frac{d\Phi_{DM}}{dr} \right) } .   (B6)
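Collecting the pieces of this appendix, the sketch below evaluates Φg and the circular velocity of eq. (B6) with a numerical radial derivative. The halo numbers (MDM, c, Rvir) are placeholders set by hand; in the actual analysis c comes from colossus and MDM from the stellar-to-halo mass relation.

import numpy as np

G = 6.674e-8                    # cgs
MSUN, KPC = 1.989e33, 3.086e21

def phi_star(r, m_star, a_h):
    # Hernquist stellar potential, eq. (B2).
    return -G * m_star / (r + a_h)

def phi_dm(r, rho0, r_s):
    # NFW dark-matter potential, eq. (B3).
    return -4.0*np.pi*G*rho0*r_s**3 / r * np.log(1.0 + r/r_s)

def rho0_from_halo(m_dm, c, r_s):
    # Characteristic density from eq. (B4).
    return m_dm / (4.0*np.pi*r_s**3 * (np.log(1.0+c) - c/(1.0+c)))

def v_circ(r, m_star, a_h, rho0, r_s, dr=1.0e-4*KPC):
    # Circular velocity, eq. (B6), with a centered finite difference.
    phi = lambda x: phi_star(x, m_star, a_h) + phi_dm(x, rho0, r_s)
    return np.sqrt(r * (phi(r+dr) - phi(r-dr)) / (2.0*dr))

# Placeholder numbers: M* ~ 2.4e9 Msun as in the main text; the halo
# mass, concentration, and virial radius below are made-up examples.
m_star, a_h = 2.4e9*MSUN, 1.0*KPC
m_dm, c, r_vir = 2.0e11*MSUN, 10.0, 60.0*KPC
r_s = r_vir / c
rho0 = rho0_from_halo(m_dm, c, r_s)
print(v_circ(2.0*KPC, m_star, a_h, rho0, r_s) / 1.0e5, "km/s")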
TABLE III. Posterior medians and 1σ credible intervals for the model parameters. The GRB afterglow model assumes a Gaussian jet structure with the luminosity distance fixed at 291 Mpc. Prior bounds are indicated in brackets; all priors are assumed to be uniform.

GRB model parameter - [prior] | CBC (mag.+56Ni) | BNS (two-comp.) | BNS (central eng.) | NS-BH | TDE
log10 E_K,iso (erg) - [50, 60] | 53.18^{+1.74}_{-1.58} | 53.11^{+1.18}_{-1.24} | 53.36^{+1.59}_{-1.52} | 53.86^{+0.56}_{-1.03} | 53.34^{+0.49}_{-0.48}
log10 n0 (cm^-3) - [-6, 2] | -0.75^{+1.99}_{-2.32} | -0.53^{+1.90}_{-4.91} | -0.30^{+1.80}_{-2.41} | 0.05^{+0.97}_{-0.27} | 0.20^{+1.61}_{-1.57}
log10 θc - [-2, -0.5] | -0.89^{+0.38}_{-0.55} | -0.56^{+0.06}_{-0.94} | -0.88^{+0.36}_{-0.54} | -1.26^{+0.22}_{-0.10} | -1.23^{+0.10}_{-0.20}
log10 εe - [-6, -0.3] | -2.68^{+1.40}_{-1.46} | -2.41^{+2.10}_{-3.55} | -2.96^{+1.48}_{-1.62} | -3.53^{+1.07}_{-0.65} | -2.84^{+0.57}_{-0.46}
log10 εB - [-6, -0.3] | -3.22^{+1.31}_{-1.22} | -2.61^{+2.31}_{-2.37} | -3.56^{+1.25}_{-1.05} | -1.00^{+0.67}_{-0.47} | -1.63^{+0.87}_{-0.59}
p - [2.01, 2.9] | 2.12^{+0.09}_{-0.10} | 2.14^{+0.18}_{-0.12} | 2.13^{+0.06}_{-0.09} | 2.80^{+0.10}_{-0.11} | 2.81^{+0.09}_{-0.12}
log10 ξN - [-5, 0] | -2.36^{+1.23}_{-1.13} | -2.37^{+1.30}_{-1.42} | -2.75^{+1.40}_{-1.32} | -4.31^{+0.85}_{-0.63} | -3.95^{+0.66}_{-0.65}

KN model parameter estimation - CBC (mag.+56Ni):
log10 κ (cm^2 g^-1) - [-1, 0]: -0.34^{+0.24}_{-0.25}
log10 Mej (M⊙) - [-2.0, -0.8]: -1.22^{+0.29}_{-0.32}
log10 MNi (M⊙) - [-5.0, -3.0]: -3.40^{+0.10}_{-0.02}
log10 χL_sd(0) (erg s^-1) - [45.0, 48.5]: 47.66^{+0.37}_{-0.52}
vmin (c) - [0.001, 0.15]: 0.10^{+0.01}_{-0.01}
vmax (c) - [0.18, 0.35]: 0.26^{+0.04}_{-0.04}
δ - [1.0, 3.0]: 1.74^{+0.70}_{-0.62}

KN model parameter estimation - BNS (two-comp.):
log10 Mej,1 (M⊙) - [-3, -1]: -1.18^{+0.18}_{-0.08}
log10 vej,1 (c) - [-1.0, -0.5]: -0.90^{+0.05}_{-0.07}
log10 κ1 (cm^2 g^-1) - [-2.0, 0.5]: -0.41^{+0.42}_{-0.40}
βv,1 - [1, 5]: 2.47^{+1.39}_{-1.27}
Ye,1 - [0.2, 0.4]: 0.31^{+0.06}_{-0.07}
log10 Mej,2 (M⊙) - [-3, -1]: -1.11^{+0.10}_{-0.02}
log10 vej,2 (c) - [-2, -1]: -1.34^{+0.11}_{-0.10}
log10 κ2 (cm^2 g^-1) - [-0.5, 2.0]: 1.50^{+0.39}_{-0.35}
βv,2 - [1, 5]: 3.34^{+1.26}_{-1.14}
Ye,2 - [0.1, 0.2]: 0.14^{+0.03}_{-0.03}

KN model parameter estimation - BNS (central eng.):
log10 Mej (M⊙) - [-3, -1]: -1.84^{+0.12}_{-0.11}
log10 vej (c) - [-2, -0.5]: -1.50^{+0.20}_{-0.31}
ζ - [0, 0.5]: 0.20^{+0.16}_{-0.15}
κ (cm^2 g^-1) - [0.1, 10.0]: 3.01^{+1.56}_{-1.61}
log10 L0 (erg s^-1) - [45.5, 50.0]: 48.06^{+0.63}_{-0.69}
n_ism (cm^-3) - [3.0, 5.0]: 3.47^{+0.60}_{-0.42}

KN model parameter estimation - NS-BH:
MBH (M⊙) - [5.0, 100.0]: 49.99^{+23.73}_{-17.33}
MNS (M⊙) - [1.2, 2.27]: 1.49^{+0.33}_{-0.24}
χBH - [0.05, 1.0]: 0.68^{+0.20}_{-0.26}
ΛNS - [5.0, 5000.0]: 1396^{+1548}_{-1163}
ζ - [0, 0.5]: 0.33^{+0.13}_{-0.17}
log10 vej,2 (c) - [-2.0, -0.5]: -1.48^{+0.59}_{-0.41}
log10 κ1 (cm^2 g^-1) - [-2.0, 0.5]: -1.48^{+0.59}_{-0.41}
log10 κ2 (cm^2 g^-1) - [-0.5, 2.0]: -0.46^{+0.53}_{-0.59}

KN model parameter estimation - TDE:
MBH (10^6 M⊙) - [0.1, 100.0]: 0.45^{+1.39}_{-0.34}
Mstar (M⊙) - [0, 1.44]: 0.69^{+0.43}_{-0.49}
tvisc. (days) - [1e-3, 1e5]: 72669^{+18785}_{-16641}
b - [0, 2]: 0.72^{+0.77}_{-0.64}
η - [0.005, 0.4]: 0.13^{+0.13}_{-0.11}
log10 LEdd (erg s^-1) - [43.1, 46.1]: 43.66^{+0.57}_{-0.48}
log10 Rphoto (km) - [-4, 4]: -1.69^{+2.06}_{-1.60}
Lphoto (erg s^-1) - [0, 4]: 2.77^{+1.12}_{-1.17}

[1] F. G. Team, GRB 230307A: Fermi GBM final real-time localization, GRB Coordinates Network, Circular Service 2023GCN.33405....1F (2023).
[2] S. Xiong, C. Wang, Y. Huang, and G. Team, GRB 230307A: GECAM detection of an extremely bright burst, GRB Coordinates Network, Circular Service 2023GCN.33406....1X (2023).
[3] D. Svinkin, D. Frederiks, M. Ulanov, et al., Konus-Wind detection of GRB 230307A, GRB Coordinates Network, Circular Service 2023GCN.33427....1S (2023).
[4] H. Sun, C.-W. Wang, J. Yang, et al., Magnetar emergence in a peculiar gamma-ray burst from a compact star merger, Natl. Sci. Rev. 12, 3 (2025).
[5] A. J. Levan, B. P. Gompertz, O. S. Salafia, et al., Heavy-element production in a compact object merger observed by JWST, Nature 626, 737-741 (2024).
[6] Y.-H. Yang, E. Troja, B. O'Connor, et al., A lanthanide-rich kilonova in the aftermath of a long gamma-ray burst, Nature 626, 742-745 (2024).
[7] C. Casentini, M. Tavani, C. Pittori, et al., GRB 230307A: AGILE/MCAL detection, GRB Coordinates Network, Circular Service 2023GCN.33412....1C (2023).
[8] V. Kalogera, U. Kolb, and A. R. King, Supernova kicks, magnetic braking, and neutron star binaries, ApJ 504, 927 (1998).
[9] X. I. Wang, Y.-W. Yu, J. Ren, et al., What powered the kilonova-like emission after GRB 230307A in the framework of a neutron star-white dwarf merger?, ApJL 964, L9 (2024).
[10] Z. Du, H. Lü, Y. Yuan, X. Yang, and E. Liang, The progenitor and central engine of a peculiar GRB 230307A, ApJL 962, L27 (2024).
[11] D. Tsang, J. S. Read, T. Hinderer, A. L. Piro, and R. Bondarescu, Resonant shattering of neutron star crusts, Phys. Rev. Lett. 108, 011102 (2012).
[12] C. Palenzuela, L. Lehner, M. Ponce, et al., Electromagnetic and gravitational outputs from binary-neutron-star coalescence, Phys. Rev. Lett. 111, 061105 (2013).
[13] D. Neill, W. G. Newton, and D. Tsang, Resonant shattering flares as multimessenger probes of the nuclear symmetry energy, MNRAS 504, 1129-1143 (2021).
[14] A. G. Suvorov and K. D. Kokkotas, Precessing magnetars as central engines in short gamma-ray bursts, MNRAS 502, 2482-2494 (2021).
[15] S. T. McWilliams and J. Levin, Electromagnetic extraction of energy from black-hole-neutron-star binaries, ApJ 742, 90 (2011).
[16] E. R. Most and A. A. Philippov, Electromagnetic precursors to gravitational-wave events: Numerical simulations of flaring in pre-merger binary neutron star magnetospheres, ApJL 893, L6 (2020).
[17] A. M. Beloborodov, Emission of magnetar bursts and precursors of neutron star mergers, ApJ 921, 92 (2021).
[18] A. J. Cooper, O. Gupta, and Z. Wadiasingh, Pulsar revival in neutron star mergers: multimessenger prospects for the discovery of pre-merger coherent radio emission, MNRAS 519, 3923-3946 (2023).
[19] H.-J. Lü, E.-W. Liang, B.-B. Zhang, and B. Zhang, A new classification method for gamma-ray bursts, ApJ 725, 1965 (2010).
[20] H.-J. Lü, B. Zhang, E.-W. Liang, B.-B. Zhang, and T. Sakamoto, The 'amplitude' parameter of gamma-ray bursts and its implications for GRB classification, MNRAS 442, 1922-1929 (2014).
[21] P. A. Abell, J. Allison, S. F. Anderson, et al., LSST Science Book, Version 2.0, arXiv e-prints, arXiv:0912.0201 [astro-ph.IM] (2009).
[22] J. S. Speagle, dynesty: a dynamic nested sampling package for estimating Bayesian posteriors and evidences, MNRAS 493, 3132-3158 (2020).
[23] G. Ryan, H. van Eerten, L. Piro, and E. Troja, Gamma-ray burst afterglows in the multimessenger era: Numerical models and closure relations, ApJ 896, 166 (2020).
[24] B. D. Metzger, Kilonovae, Living Rev. Rel. 23, 1 (2020).
[25] P. T. H. Pang, T. Dietrich, M. W. Coughlin, et al., An updated nuclear-physics and multi-messenger astrophysics framework for binary neutron star mergers, Nature Communications 14, 8352 (2023).
[26] N. Sarin, C. M. B. Omand, B. Margalit, and D. I. Jones, On the diversity of magnetar-driven kilonovae, MNRAS 516, 4949-4962 (2022).
[27] N. Sarin, M. Hübner, C. M. B. Omand, et al., redback: a Bayesian inference software package for electromagnetic transients, MNRAS 531, 1203-1227 (2024).
[28] S. Anand, M. W. Coughlin, M. M. Kasliwal, et al., Optical follow-up of the neutron star-black hole mergers S200105ae and S200115j, Nature Astron. 5, 46 (2021).
[29] M. Bulla, POSSIS: predicting spectra, light curves and polarization for multi-dimensional models of supernovae and kilonovae, MNRAS 489, 5037-5045 (2019).
[30] B. Mockler, J. Guillochon, and E. Ramirez-Ruiz, Weighing black holes using tidal disruption events, ApJ 872, 151 (2019).
[31] M. Jacobi, F. Magistrelli, E. Loffredo, et al., 56Ni production in long-lived binary neutron star merger remnants, arXiv e-prints, arXiv:2503.17445 [astro-ph.GA] (2025).
[32] S. Ai, H. Gao, and B. Zhang, Engine-fed kilonovae (mergernovae). II. Radiation, ApJ 978, 52 (2025).
[33] H. Jeffreys, Theory of Probability (Oxford University Press, Oxford, 1961).
[34] A. Vehtari, A. Gelman, and J. Gabry, Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC, Statistics and Computing 27, 1413 (2017).
[35] Y. Zenati, H. B. Perets, and S. Toonen, Neutron star-white dwarf mergers: early evolution, physical properties, and outcomes, MNRAS 486, 1805-1813 (2019).
[36] V. A. Villar, J. Guillochon, E. Berger, et al., The combined ultraviolet, optical, and near-infrared light curves of the kilonova associated with the binary neutron star merger GW170817: Unified data set, analytic models, and physical implications, ApJL 851, L21 (2017).
Berger, and et al., The combined ultraviolet, optical, and near-infrared light curves of the kilonova associated with the binary neutron star merger gw170817: Unified data set, analytic models, and physical implications, ApJL 851, L21 (2017). [37] M. Nicholl, B. Margalit, P. Schmidt, and et al., Tight multi-messenger constraints on the neutron star equation of state from gw170817 and a forward model for kilonova light curve synthesis, MNRAS 505, 3016-3032 (2021). [38] T. Dietrich and M. Ujevic, Modeling dynamical ejecta from binary neutron star mergers and implications for electromagnetic counterparts, Class. Quantum Grav. 34, 105014 (2017). [39] M. W. Coughlin, T. Dietrich, B. Margalit, and B. D. Metzger, Multimessenger bayesian parameter inference of a binary neutron star merger, MNRASL 489, L91-L96 (2019). [40] S. De, D. Finstad, J. M. Lattimer, and et al., Tidal deformabilities and radii of neutron stars from the observation of gw170817, PRL 121, 121 (2018). [41] P. C. Peters, Gravitational radiation and the motion of two point masses, Phys. Rev. 136, B1224 (1964). [42] N. Brandt and P. Podsiadlowski, The effects of highvelocity supernova kicks on the orbital properties and sky distributions of neutron-star binaries, MNRAS 274, 461 (1995). [43] T. M. Tauris, M. Kramer, P. C. C. Freire, and et al., Formation of double neutron star systems, ApJ 846, 170 (2017). [44] M. Renzo, E. Zapartas, S. E. d. Mink, and et al., Massive 14 runaway and walkaway stars a study of the kinematical imprints of the physical processes governing the evolution and explosion of their binary progenitors, A&A 624, 28 (2019). [45] P. Disberg, N. Gaspari, and A. J. Levan, Kinematic constraints on the ages and kick velocities of galactic neutron star binaries, A&A 689, 22 (2024). [46] K. Kiuchi, S. Fujibayashi, K. Hayashi, K. Kyutoku, Y. Sekiguchi, and M. Shibata, Self-consistent picture of the mass ejection from a one second long binary neutron star merger leaving a short-lived remnant in a general-relativistic neutrino-radiation magnetohydrodynamic simulation, Phys. Rev. Lett. 131, 011401 (2023). [47] A. Murguia-Berthier et al., HARM3D+NUC: A New Method for Simulating the Post-merger Phase of Binary Neutron Star Mergers with GRMHD, Tabulated EOS, and Neutrino Leakage, Astrophys. J. 919, 95 (2021), . [48] L. Combi and D. M. Siegel, GRMHD Simulations of Neutron-star Mergers with Weak Interactions: rprocess Nucleosynthesis and Electromagnetic Signatures of Dynamical Ejecta, Astrophys. J. 944, 28 (2023), . [49] L. Rezzolla and O. Zanotti, Relativistic hydrodynamics (OUP Oxford, 2013). [50] M. W. Coughlin, T. Dietrich, B. Margalit, and B. D. Metzger, Multimessenger Bayesian parameter inference of a binary neutron star merger, Mon. Not. Roy. Astron. Soc. 489, L91 (2019), . [51] T. Dietrich and M. Ujevic, Modeling dynamical ejecta from binary neutron star mergers and implications for electromagnetic counterparts, Class. Quant. Grav. 34, 105014 (2017), . [52] D. Radice, A. Perego, K. Hotokezaka, S. A. Fromm, S. Bernuzzi, and L. F. Roberts, Binary Neutron Star Mergers: Mass Ejection, Electromagnetic Counterparts and Nucleosynthesis, Astrophys. J. 869, 130 (2018), . [53] V. Nedora, F. Schianchi, S. Bernuzzi, D. Radice, B. Daszuta, A. Endrizzi, A. Perego, A. Prakash, and F. Zappa, Mapping dynamical ejecta and disk masses from numerical relativity simulations of neutron star mergers, Class. Quant. Grav. 39, 015008 (2022), . [54] J. Mor ́an-Fraile, F. K. R ̈opke, R. Pakmor, M. A. Aloy, S. T. Ohlmann, F. R. N. Schneider, G. 
Leidi, and G. Lioutas, Self-consistent magnetohydrodynamic simulation of jet launching in a neutron star - white dwarf merger, Astron. Astrophys. 681, A41 (2024), . [55] L. Hernquist, An Analytical Model for Spherical Galaxies and Bulges, Astrophys. J. 356, 359 (1990). [56] B. P. Abbott, R. Abbott, T. D. Abbott, and et al., On the Progenitor of Binary Neutron Star Merger GW170817, The Astrophysical Journal Letters 850, L40 (2017), . [57] Y.-C. Zhang and X.-H. Yang, Size distribution of galaxies in sdss dr7: weak dependence on halo environment, Res. Astron. Astrophys. 19, 006 (2019). [58] J. F. Navarro, C. S. Frenk, and S. D. M. White, The Structure of Cold Dark Matter Halos, Astrophys. J. 462, 563 (1996), arXiv:astro-ph/9508025 [astro-ph]. [59] G. Girelli, L. Pozzetti, M. Bolzonella, and et al., The stellar-to-halo mass relation over the past 12 gyr i. standard λcdm model, A&A 634, 23 (2020). [60] B. Diemer, COLOSSUS: A Python Toolkit for Cosmology, Large-scale Structure, and Dark Matter Halos, The Astrophysical Journal Supplement Series 239, 35 (2018), . [61] T. Ishiyama, F. Prada, A. A. Klypin, and et al., The uchuu simulations: Data release 1 and dark matter halo concentrations, MNRAS 506, 4210-4231 (2021).
arXiv:2510.14882v1 [cs.CV] 16 Oct 2025
Preprint

SCALEWEAVER: WEAVING EFFICIENT CONTROLLABLE T2I GENERATION WITH MULTI-SCALE REFERENCE ATTENTION

Keli Liu*, Zhendong Wang*, Wengang Zhou†, Shaodong Xu, Ruixiao Dong, Houqiang Li
University of Science and Technology of China
{sa23006063, zhendongwang, xiodon, dongruixiaoyx}@mail.ustc.edu.cn
{zhwg,lihq}@ustc.edu.cn
*Equal contribution. † Corresponding author.

[Figure 1 panels: canny, depth, colorization, palette, sketch, deblur]
Figure 1: Images generated by ScaleWeaver. Our ScaleWeaver enables efficient and precise controllable text-to-image generation based on the visual autoregressive model, supporting diverse condition signals and producing high-fidelity images with strong text alignment and controllability.

ABSTRACT

Text-to-image generation with visual autoregressive (VAR) models has recently achieved impressive advances in generation fidelity and inference efficiency. While control mechanisms have been explored for diffusion models, enabling precise and flexible control within the VAR paradigm remains underexplored. To bridge this critical gap, in this paper we introduce ScaleWeaver, a novel framework designed to achieve high-fidelity, controllable generation upon advanced VAR models through parameter-efficient fine-tuning. The core module in ScaleWeaver is the improved MMDiT block with the proposed Reference Attention module, which efficiently and effectively incorporates conditional information. Different from MM Attention, the proposed Reference Attention module discards the unnecessary attention from image→condition, reducing computational cost while stabilizing control injection. Besides, it strategically emphasizes parameter reuse, leveraging the capability of the VAR backbone itself with a few introduced parameters to process control information, and equipping a zero-initialized linear projection to ensure that control signals are incorporated effectively without disrupting the generative capability of the base model. Extensive experiments show that ScaleWeaver delivers high-quality generation and precise control while attaining superior efficiency over diffusion-based methods, making ScaleWeaver a practical and effective solution for controllable text-to-image generation within the visual autoregressive paradigm. Code and models will be released.

1 INTRODUCTION

Generative models have achieved remarkable progress in recent years, with two dominant paradigms emerging: diffusion models and autoregressive (AR) models. The rise of diffusion-based text-to-image (T2I) models has brought unprecedented fidelity and diversity to generative modeling (Rombach et al., 2022; Podell et al., 2024; Chen et al., 2024; Esser et al., 2024; Wu et al., 2025). In particular, controllable image generation with specified spatial conditions such as edges, depth, or segmentation maps has been extensively explored in diffusion models, giving rise to methods that enable precise and flexible control (Zhang et al., 2023; Mou et al., 2024). These conditional generation approaches have shown great practical value, enabling applications such as guided content creation, image editing, and domain-specific generation in a wide range of scenarios.

In parallel, autoregressive (AR) models leverage the scaling properties and causal modeling capabilities of large language models, offering strong scalability and generalizability, as demonstrated by systems such as LlamaGen (Sun et al., 2024) and Open-MAGVIT2 (Luo et al., 2024).
Within this family, visual autoregressive (VAR) models (Tian et al., 2024) have recently emerged as a promising direction. Unlike conventional AR models that predict the next token in sequence, VAR adopts a next-scale prediction paradigm, allowing the model to capture hierarchical visual structures and improving both quality and efficiency. Representative works, including Infinity (Han et al., 2025), HART (Tang et al., 2025), Switti (Voronov et al., 2024), and Star-T2I (Ma et al., 2024), demonstrate that VAR can achieve high-resolution and high-quality T2I synthesis. Nevertheless, controllable generation with VAR remains largely unexplored. Existing methods such as ControlVAR (Li et al., 2024c) and CAR (Yao et al., 2024) are limited to class-conditioned generation on ImageNet, fail to demonstrate effectiveness in high-quality T2I settings, and rely on resource-intensive fine-tuning, which restricts their scalability. These limitations pose a key challenge: how to efficiently and effectively inject conditional information without disrupting base generation?

In this paper, to address the aforementioned challenge, we propose ScaleWeaver, a novel framework for efficient and effective controllable T2I generation built on VAR models. In ScaleWeaver, conditional inputs (e.g., canny edges or depth maps) are tokenized using the same tokenizer as the input image, which is proven to be effective. These conditional tokens are then processed through a dedicated conditional branch, which is trained with LoRA (Hu et al., 2022) modules across all scales. The resulting conditional tokens then interact with image tokens through our proposed core module, Reference Attention. Reference Attention is an enhanced attention mechanism integrated into the MMDiT block (Esser et al., 2024), designed to incorporate control information with high flexibility and effectiveness. In contrast to the original MM attention, Reference Attention removes the attention from image→condition, which is computationally redundant for condition injection. Besides, we apply a parameter reuse strategy with additional linear projectors to exploit the autoregressive backbone's existing capacity to process conditional information. This greatly reduces training overhead while improving adaptation efficiency, making the method both practical and scalable for large models. The overall design of Reference Attention allows the model to progressively learn meaningful control while preserving the generative quality of the base model.

Extensive experiments demonstrate that ScaleWeaver achieves high-quality generation and robust text-image alignment across controllable generation tasks with a wide range of conditions. Our approach consistently maintains strong performance and adaptability across various types of control signals. Notably, ScaleWeaver offers a substantial improvement in efficiency over state-of-the-art diffusion-based control methods, making it a practical and scalable solution for controllable T2I generation in the autoregressive paradigm. Our main contributions can be summarized as follows:

• We propose ScaleWeaver, a novel framework for controllable generation based on text-to-image VAR models, equipped with lightweight condition injection, inheriting the inference advantage and scale-wise bidirectional modeling.

• We propose the Reference Attention mechanism for stable multi-scale integration of the condition with a significant reduction of computational cost compared with the widely used MM-Attention.
Cooperating with a parameter reuse and zero-init strategy, Reference Attention further reduces training cost while maintaining strong adaptability.

• Extensive qualitative and quantitative evaluations show that our ScaleWeaver achieves superior controllability and efficiency compared to existing diffusion-based controllable generation methods.

2 RELATED WORKS

2.1 AUTOREGRESSIVE MODELS IN VISUAL GENERATION

Autoregressive (AR) models have long been applied to visual generation by modeling images as sequences of discrete tokens. Early works such as VQVAE (van den Oord et al., 2017) and VQGAN (Esser et al., 2021) tokenize images into codebook indices and employ transformer-based decoders to autoregressively generate visual content. More recent advances leverage large-scale language modeling techniques for image synthesis, as seen in LlamaGen (Sun et al., 2024) and Open-MAGVIT2 (Luo et al., 2024), which utilize powerful transformer backbones to scale up AR generation. In parallel, MAR (Li et al., 2024b) and NOVA (Deng et al., 2024) remove vector quantization and directly model continuous latents with a diffusion loss, improving fidelity and scalability for images and videos. Within AR, visual autoregressive (VAR) approaches (Tian et al., 2024) reconceptualize autoregressive modeling by adopting next-scale (coarse-to-fine) prediction rather than next-token prediction, preserving hierarchical structure and enabling scalable autoregressive image synthesis. Recent systems such as Infinity (Han et al., 2025), HART (Tang et al., 2025), Switti (Voronov et al., 2024), and Star-T2I (Ma et al., 2024) further demonstrate that VAR-based text-to-image models can achieve performance comparable to state-of-the-art diffusion models, leveraging scale-wise transformers and coarse-to-fine generation to enable high-resolution synthesis with strong text alignment. These works establish VAR as a competitive backbone for high-quality T2I generation. Besides image generation, VAR-style models extend naturally to other pixel-to-pixel vision tasks, including super-resolution (VARSR) (Qu et al., 2025), image restoration (Varformer) (Wang et al., 2025), and unified generation frameworks (VARGPT) (Zhuang et al., 2025). These results underscore VAR's versatility and scalability across visual generation tasks.

2.2 CONTROLLABLE IMAGE GENERATION

Controllable image generation methods aim to enable fine-grained control by injecting external conditional information into the synthesis process. Adapter-based approaches, notably ControlNet and T2I-Adapter, attach condition encoders and lightweight modulation heads to pretrained diffusion backbones, conditioning on edges, depth, normals, or segmentation while largely freezing the backbone (Zhang et al., 2023; Mou et al., 2024). Subsequent frameworks such as UniControl (Qin et al., 2023), Uni-ControlNet (Zhao et al., 2023), and ControlNet++ (Li et al., 2024a) broaden the condition space and optimize the training strategies. In parallel, attention-based schemes like OminiControl (Tan et al., 2024) and EasyControl (Zhang et al., 2025) leverage multimodal attention to integrate conditions within the denoising transformer, enhancing spatial alignment and flexibility. With the advent of autoregressive (AR) backbones, research on controllable generation has increasingly extended to AR models. ControlAR (Li et al., 2025) introduces conditional decoding for LlamaGen (Sun et al., 2024), enabling precise control in an AR setting.
In the context of visual autoregressive models, ControlVAR (Li et al., 2024c), CAR (Yao et al., 2024), and SCALAR (Xu et al., 2025) incorporate conditioning into scale-wise generation, demonstrating controllability within the VAR framework. However, these methods are restricted to class-conditional generation; high-quality image generation with text prompts has not been well explored.

3 METHOD

3.1 PRELIMINARY OF VISUAL AUTOREGRESSIVE GENERATION

Different from conventional autoregressive models based on raster-scan next-token decoding, the visual autoregressive (VAR) model is an innovative next-scale prediction framework. By conditioning each finer scale on coarser context and the text prompt, VAR aligns with a coarse-to-fine perceptual progression while satisfying the autoregressive premise on newly defined causal units. Essentially, the autoregressive unit is an entire token map at a given scale rather than a single token, which reduces the number of autoregressive rounds while retaining bidirectional correlations within each scale.

The VAR model incorporates a multi-scale visual tokenizer and a generative transformer for image synthesis. The tokenizer encodes an image I into K residual maps {R_1, . . . , R_K} from coarse to fine, and a text encoder transforms the text prompt into an embedding t. The generative transformer is then trained to predict the next-scale image map R_s conditioned on the previous scales' maps and the text embedding, which factorizes the joint probability distribution into conditional distributions:

    p_θ(R_{1:K}) = ∏_{s=1}^{K} p_θ(R_s | R_{<s}, t),    (1)

where θ denotes the weights of the generative transformer. During training, the VAR model applies teacher forcing to supply ground-truth coarse scales; at inference, the model samples sequentially from coarse to fine and detokenizes the residual hierarchy to obtain the final image.

3.2 OVERVIEW OF SCALEWEAVER

To support a condition image input, we extend the vanilla VAR architecture above to incorporate an auxiliary visual condition c. To be specific, the multi-scale tokenizer first maps the conditional input to multi-scale tokens {C_1, . . . , C_K}. The generator then predicts the next-scale image tokens conditioned on the image tokens and condition tokens of coarser scales, together with the text prompt, which is formulated as follows:

    p_θ(R_{1:K} | t, c) = ∏_{s=1}^{K} p_θ(R_s | R_{<s}, t, C_{<s}),    (2)

where the condition tokens serve as side inputs and are not part of the autoregressive output.

Overall design. As illustrated in Fig. 2 (a), ScaleWeaver employs the same multi-scale tokenizer for both image and condition, ensuring spatial and scale alignment between R_s and C_s. Conditional tokens are processed by a conditional branch that mirrors the backbone interfaces and is applied at every scale. During next-scale prediction, image tokens interact with condition tokens via the proposed Reference Attention module inserted in the attention block of an MMDiT-style transformer; this fusion occurs at each scale while the rest of the transformer structure remains unchanged.

Efficient parameter reuse with LoRA. To reduce training cost as much as possible, ScaleWeaver reuses the text-to-image backbone weights and introduces LoRA adapters in the conditional branch projections across scales. LoRA modules (with a small rank r) modulate the condition-side projections and are applied to the projection layers together with the condition embedder; image-side weights are frozen to preserve the base model's generative capability. This strategy inherits the backbone's capacity for condition processing, keeps the number of additional parameters minimal, and enables efficient training.
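To make the factorization in Eq. (2) concrete, the following is a minimal PyTorch-style sketch of the coarse-to-fine sampling loop; the transformer and detokenizer interfaces are hypothetical stand-ins for an Infinity-style backbone, not the released ScaleWeaver API.

import torch

@torch.no_grad()
def sample_next_scale(transformer, detokenizer, text_emb, cond_maps):
    """Sample R_1..R_K per Eq. (2); cond_maps holds the condition token maps C_1..C_K."""
    image_maps = []                                  # R_1, ..., R_K, coarse to fine
    for s in range(len(cond_maps)):
        # One autoregressive round per *scale*: attention is bidirectional inside
        # a scale and causal across scales (R_<s and C_<s act as context).
        logits = transformer(image_maps, text_emb, cond_maps[:s])
        probs = torch.softmax(logits, dim=-1)        # distribution over the token vocab
        r_s = torch.multinomial(probs.flatten(0, -2), 1).view(logits.shape[:-1])
        image_maps.append(r_s)
    return detokenizer(image_maps)                   # compose residual hierarchy -> image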
Multi-modal classifier-free guidance. We adopt standard VAR training with a multi-scale cross-entropy objective on image tokens only; conditional tokens are auxiliary inputs and receive no direct supervision. Besides the original classifier-free guidance (CFG) used in VAR models, to enable CFG with conditional information we randomly drop the condition during training (replacing the condition image with a blank input) with probability p, thereby exposing the model to both conditional and non-conditional regimes. At inference, guidance operates on the logits over the next-scale token vocabulary at each scale s. Let z_s(·) denote the pre-softmax logits predicted under a particular conditioning setup. The final multi-modal CFG is computed by:

    z_s^{guid} = z_s^{base} + γ_{img} (z_s^{img} − z_s^{base}) + γ (z_s^{both} − z_s^{img}),    (3)

where z_s^{base} is the prediction based on neither text nor image, z_s^{img} receives only the image condition, and z_s^{both} receives both text and image; γ_{img}, γ ≥ 0 are hyperparameters controlling the image- and text-guided strengths, respectively. The final output is obtained by Softmax(z_s^{guid}) at each scale.
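As a quick illustration of Eq. (3), here is a minimal sketch of the guidance combination, assuming the three logit tensors come from separate forward passes at the same scale s.

import torch

def multimodal_cfg(z_base: torch.Tensor, z_img: torch.Tensor, z_both: torch.Tensor,
                   gamma_img: float, gamma: float) -> torch.Tensor:
    """Combine pre-softmax logits per Eq. (3); return next-scale token probabilities."""
    # z_base: no text, no image condition; z_img: image condition only;
    # z_both: text + image condition.
    z_guid = z_base + gamma_img * (z_img - z_base) + gamma * (z_both - z_img)
    return torch.softmax(z_guid, dim=-1)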
[Figure 2 diagram: overall pipeline with multi-scale tokenize/detokenize, the Reference Attention block (frozen image-side qkv projections, LoRA-tuned condition-side projections, zero-initialized linear), and an attention-map comparison between MM-Attention and Reference Attention.]
Figure 2: Illustration of the ScaleWeaver framework. The conditional input is tokenized using the same multi-scale tokenizer as the image, processed by a LoRA-based conditional branch, and fused with image tokens via Reference Attention at each scale. In the Reference Attention module, image-side projections are frozen while the conditional branch is LoRA-tuned. Image queries attend to condition keys/values through cross-attention, gated by a zero-initialized projection to preserve the base generation capability and enable gradual control. Unlike MM-Attention, Reference Attention removes the image→condition path, reducing unnecessary computation and stabilizing control injection. For clarity, image-text cross-attention and the FFN within the original Infinity blocks are omitted.

3.3 REFERENCE ATTENTION

The core module in our ScaleWeaver is Reference Attention, which targets efficiently and effectively injecting the control image. We incorporate attention-based fusion of conditional and image tokens, using a mechanism designed to balance flexibility with stability. Unlike OminiControl (Tan et al., 2024) and EasyControl (Zhang et al., 2025), which employ standard multi-modal attention (Esser et al., 2024) for condition injection, the Reference Attention in our approach achieves zero injection at initialization, ensuring that conditional information is introduced without disrupting the base generation process. This design choice allows control to emerge progressively during training while maintaining the autoregressive backbone's generative prior.

What is more, multi-modal attention typically admits four routes: image/condition self-attention and bi-directional cross-attention. Reference Attention explicitly removes the image→condition path, retaining condition→image and both self-attentions. This asymmetry reduces representational entanglement, simplifies optimization, and concentrates parameter updates on the conditional branch, which we found important for stable controllability without degrading image priors.

Let X_s^{(i)} ∈ ℝ^{L_s × d} denote image tokens and C_s^{(i)} ∈ ℝ^{L_s^c × d} denote condition tokens at scale s. We perform self-attention independently on each stream and enable a single cross path from condition to image. We derive query, key, and value projections separately for the two pathways:

    [Q_x, K_x, V_x] = X_s^{(i)} [W_Q, W_K, W_V],
    [Q_c, K_c, V_c] = C_s^{(i)} [W_Q + ΔW_Q, W_K + ΔW_K, W_V + ΔW_V].    (4)

This setup highlights that the image pathway maintains frozen backbone projections, while the condition pathway learns lightweight low-rank updates to encode control information. With these projections, Reference Attention updates tokens as follows:

    \hat{X}_s^{(i)} = Attn(Q_x, K_x, V_x),  \hat{C}_s^{(i)} = Attn(Q_c, K_c, V_c),
    \tilde{X}_s^{(i)} = \hat{X}_s^{(i)} + W_{zero} Attn(Q_x, K_c, V_c),    (5)

where W_{zero} is zero-initialized, ensuring that at the start the module behaves identically to standard self-attention, while gradually learning to incorporate conditional guidance as training progresses.

Reference Attention acts as a drop-in replacement inside the attention block of an MMDiT-style transformer. At each scale s, the module receives X_s^{(i)} and C_s^{(i)}, applies stream-wise self-attention, and then injects the zero-initialized condition path via Eq. (5). The module is applied at all scales, enabling coarse guidance at low s and refinement at high s, while the other layers remain unchanged. Since image-side projections are frozen and condition-side projections are LoRA-tuned with a small rank, the number of trainable parameters is minimal, maintaining computational efficiency.
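The following is a minimal single-head PyTorch sketch of Eqs. (4)-(5); it is an illustrative reading of the module rather than the released implementation, and the LoRA initialization detail is our assumption.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ReferenceAttention(nn.Module):
    """Single-head sketch of Eqs. (4)-(5): frozen image projections, LoRA-updated
    condition projections, one condition->image cross path gated by a zero-init linear."""
    def __init__(self, d: int, rank: int = 16):
        super().__init__()
        self.wq, self.wk, self.wv = (nn.Linear(d, d, bias=False) for _ in range(3))
        for m in (self.wq, self.wk, self.wv):        # backbone projections stay frozen
            m.weight.requires_grad_(False)
        # Low-rank updates Delta W = B A for the condition pathway (Eq. 4).
        self.lora = nn.ModuleDict()
        for name in ("q", "k", "v"):
            a, b = nn.Linear(d, rank, bias=False), nn.Linear(rank, d, bias=False)
            nn.init.zeros_(b.weight)                 # standard LoRA init (assumption)
            self.lora[name] = nn.Sequential(a, b)
        self.w_zero = nn.Linear(d, d, bias=False)    # W_zero in Eq. (5)
        nn.init.zeros_(self.w_zero.weight)

    def forward(self, x: torch.Tensor, c: torch.Tensor):
        qx, kx, vx = self.wq(x), self.wk(x), self.wv(x)
        qc = self.wq(c) + self.lora["q"](c)
        kc = self.wk(c) + self.lora["k"](c)
        vc = self.wv(c) + self.lora["v"](c)
        x_hat = F.scaled_dot_product_attention(qx, kx, vx)   # image self-attention
        c_hat = F.scaled_dot_product_attention(qc, kc, vc)   # condition self-attention
        # Single cross path condition->image; the image->condition route is removed.
        x_tilde = x_hat + self.w_zero(F.scaled_dot_product_attention(qx, kc, vc))
        return x_tilde, c_hat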
4 EXPERIMENTS

4.1 EXPERIMENTAL SETUP

Tasks and conditions. We conduct experiments on six conditional generation tasks that span diverse structural and appearance-based controls: Canny edge, depth (Yang et al., 2024), blur, colorization, color palette, and sketch (Su et al., 2021) images. These tasks are chosen to cover both geometric guidance and appearance- or style-related guidance, thereby providing a comprehensive assessment of controllability and image quality.

Training details. Training is conducted on a composite dataset of 26k high-resolution images generated with FLUX.1-dev from the text-to-image-2M dataset (Hate, 2024) and the fluxdev-controlnet-16k dataset (Kadirnar, 2024). We use Infinity 2B as the autoregressive backbone and fine-tune with the LoRA-Plus (Hayou et al., 2024) optimizer on 4 NVIDIA A40 GPUs. Only LoRA modules and zero-linear layers in the conditional branch are updated, while the backbone remains frozen. For more details, please refer to Appendix A.1.

Evaluation metrics. We assess both generation quality and controllability. For image quality, we report Fréchet Inception Distance (FID) (Heusel et al., 2017) and CLIP-IQA (Wang et al., 2023), and use CLIP Score (Radford et al., 2021) to evaluate text-image alignment. For control fidelity, we use the F1 score for Canny-based edge control and mean squared error (MSE) for the remaining conditions. All evaluations are conducted on 5,000 images from the COCO 2017 (Lin et al., 2014) validation set. This evaluation protocol provides a balanced view of fidelity, alignment, and controllability.
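As a rough sketch of this protocol (our reading, not the authors' evaluation code; the Canny thresholds and the condition re-extraction step are assumptions), the two controllability metrics could be computed as follows.

import numpy as np
import cv2  # OpenCV; the Canny thresholds below are illustrative

def canny_f1(cond_edges: np.ndarray, gen_img_gray: np.ndarray) -> float:
    """F1 between the input edge map (binary {0,1}) and edges re-extracted
    from the generated image (uint8 grayscale)."""
    pred = (cv2.Canny(gen_img_gray, 100, 200) > 0).astype(np.uint8)
    tp = np.logical_and(pred == 1, cond_edges == 1).sum()
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(cond_edges.sum(), 1)
    return float(2 * precision * recall / max(precision + recall, 1e-8))

def condition_mse(cond: np.ndarray, cond_from_gen: np.ndarray) -> float:
    """MSE between the input condition (e.g., a depth map) and the condition
    re-estimated from the generated image."""
    diff = cond.astype(np.float64) - cond_from_gen.astype(np.float64)
    return float(np.mean(diff ** 2))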
4.2 EXPERIMENTAL RESULTS

Quantitative comparison. We compare ScaleWeaver with a comprehensive set of controllable generation baselines, including SD1.5-based ControlNet (Zhang et al., 2023) and T2I-Adapter (Mou et al., 2024), as well as FLUX.1-based ControlNet Pro, OminiControl (Tan et al., 2024), and EasyControl (Zhang et al., 2025). ScaleWeaver is evaluated across diverse condition types, demonstrating effective and precise control in all settings. Notably, for challenging conditions such as depth and blur, our controllability surpasses existing methods. Across all tasks, ScaleWeaver maintains strong generative quality and text-image consistency, achieving results on par with leading diffusion-based approaches. Importantly, these advances are realized with significantly improved efficiency, underscoring the practical advantages of our approach for controllable generation.

Table 1: Quantitative comparison of controllable T2I generation on COCO 2017. Best results are shown in bold and second-best results are underlined. † indicates that the results are reproduced under the same testing protocol to ensure a fair comparison.

Condition      Method           Base model    F1 ↑ / MSE ↓   FID ↓   CLIP-IQA ↑   CLIP-Score ↑
Depth          ControlNet       SD 1.5        923            23.03   0.64         0.308
               T2I-Adapter      SD 1.5        1560           24.72   0.61         0.309
               ControlNet Pro   FLUX.1-dev    2958           62.20   0.55         0.212
               OminiControl†    FLUX.1-dev    556            30.75   0.68         0.307
               EasyControl†     FLUX.1-dev    607            23.04   0.57         0.303
               Ours             Infinity 2B   506            25.80   0.67         0.302
Canny          ControlNet       SD 1.5        0.35           18.74   0.65         0.305
               T2I-Adapter      SD 1.5        0.22           20.06   0.57         0.305
               ControlNet Pro   FLUX.1-dev    0.21           98.69   0.48         0.192
               OminiControl†    FLUX.1-dev    0.45           23.63   0.66         0.306
               EasyControl†     FLUX.1-dev    0.32           18.47   0.60         0.303
               Ours             Infinity 2B   0.30           22.31   0.66         0.299
Colorization   ControlNet Pro   FLUX.1-dev    994            30.38   0.40         0.279
               OminiControl†    FLUX.1-dev    109            9.76    0.56         0.312
               Ours             Infinity 2B   186            9.34    0.48         0.310
Deblur         ControlNet Pro   FLUX.1-dev    338            16.27   0.55         0.294
               OminiControl     FLUX.1-dev    62             18.89   0.59         0.301
               Ours             Infinity 2B   46             17.39   0.53         0.308

Qualitative comparison. As shown in Fig. 3, our method consistently produces images with clear, well-defined subjects and distinct boundaries, while maintaining both foreground and background integrity. The generated images exhibit high visual coherence, with minimal artifacts and strong separation between objects and their surroundings. Furthermore, our approach demonstrates robust text-image alignment: even for prompts with long or complex descriptions, ScaleWeaver accurately captures fine-grained details and generates images that faithfully reflect the input text.

[Figure 3 compares conditions and outputs from Ours, ControlNet, T2I-Adapter, EasyControl, and OminiControl under canny, deblur, colorization, and depth conditions, each paired with its text prompt.]
Figure 3: Qualitative comparison of controllable text-to-image generation results. ScaleWeaver achieves high-fidelity synthesis and precise control across diverse conditions, outperforming diffusion-based baselines in both visual quality and controllability. Best viewed via zoom in.

Diverse generation. As demonstrated in Fig. 4, ScaleWeaver is capable of generating images with diverse styles and content for different user prompts while preserving a fixed semantic structure provided by the condition. This highlights the effectiveness of our controllable generation framework in delivering both high fidelity and precise semantic correspondence, as well as supporting diverse user intent under consistent structural guidance.

Figure 4: Diverse generation with the same sketch condition but different text prompts.

Human evaluation. To further validate the effectiveness of our approach, we conduct a user study in which participants are asked to select the best image according to three criteria: image quality, controllability, and text-image alignment. The results of the user study are consistent with our quantitative findings, confirming that ScaleWeaver achieves competitive controllability and high-quality text-to-image generation compared to baseline methods.

Table 2: Human evaluation in terms of generation quality, controllability, and text-image alignment. Each value indicates the percentage of times a method was preferred by users (higher is better). ScaleWeaver outperforms all baselines across all criteria.

Method         Generation Quality ↑   Controllability ↑   Text-Image Alignment ↑   Overall ↑
OminiControl   0.3053                 0.3474              0.2947                   0.3158
ControlNet     0.0702                 0.1053              0.0912                   0.0889
EasyControl    0.1719                 0.2702              0.2351                   0.2257
Ours           0.4526                 0.2772              0.3789                   0.3696

Analysis of efficiency. We compare ScaleWeaver's efficiency with state-of-the-art diffusion-based methods in Tab. 3. All experiments are conducted on 1024 × 1024 image generation. For diffusion-based baselines, we use 28 sampling steps, following standard practice. Inference latency is measured as the wall-clock time required to generate an image on an NVIDIA A40 GPU. Our method achieves comparable or better generation quality with a significantly smaller model size (2.3B parameters) compared to FLUX.1-based (Labs, 2024) methods. More importantly, ScaleWeaver delivers a substantial speedup in inference, requiring only 7.6 seconds per image, which is 5–9× faster than FLUX.1-based approaches. This demonstrates the efficacy of our approach and highlights the practical advantage of controllable generation with VAR-based models, making ScaleWeaver a highly efficient and scalable solution for real-world applications.

Table 3: Efficiency comparison for 1024 × 1024 image generation.

Method           #Param   Latency (s)
ControlNet Pro   15B      38.7
OminiControl     12B      70.9
EasyControl      12B      60.4
Ours             2.3B     7.6
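A minimal sketch of such a wall-clock latency measurement (our assumption of the setup, not the authors' benchmarking code) is:

import time
import torch

@torch.no_grad()
def mean_latency(generate_fn, n_warmup: int = 2, n_runs: int = 5) -> float:
    """Mean wall-clock seconds per generated image on one GPU."""
    for _ in range(n_warmup):          # warm-up runs exclude one-time setup cost
        generate_fn()
    torch.cuda.synchronize()           # flush queued kernels before timing
    start = time.perf_counter()
    for _ in range(n_runs):
        generate_fn()
    torch.cuda.synchronize()           # include all asynchronous GPU work
    return (time.perf_counter() - start) / n_runs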
4.3 ABLATION STUDY

To better understand which injection mechanisms are most effective for spatial control in visual autoregressive (VAR) text-to-image models, we conducted ablation studies on injection mechanisms and conditional branch configurations. All ablation experiments are conducted on the Canny condition, with each model trained for 24k iterations. During the evaluation, guidance is applied solely to the condition image with a guidance scale of 1.5.

Injection mechanisms. We compare three injection mechanisms: spatial addition, multi-modal attention (MM-Attention), and our proposed Reference Attention. As shown in Tab. 4, MM-Attention achieves the highest controllability (F1 score) but at the cost of degraded generation quality and text-image alignment. Our Reference Attention achieves the best overall balance, delivering strong controllability while preserving high generation quality and text alignment. Please refer to Appendix A.2 for a visual comparison. This demonstrates that Reference Attention effectively integrates conditional information without disrupting the base generative prior.

Table 4: Ablation study on injection methods.

Injection Method   F1 ↑    FID ↓    CLIP Score ↑
Spatial addition   0.315   25.448   0.296
MM-Attention       0.344   24.888   0.294
Ours               0.325   23.224   0.298

Design choices. We perform comprehensive ablation studies to investigate the key components in our ScaleWeaver, including the zero-initialized linear gating, LoRA rank, condition injection location, and number of conditional blocks. As shown in Tab. 5, the default setting (zero-linear enabled, LoRA rank 16, condition injected in the first 16 blocks) achieves the best balance of controllability and generation quality. Removing the zero-linear gate significantly reduces controllability, highlighting its importance for stable control injection. Lowering the LoRA rank from 16 to 4 or 8 slightly decreases controllability and worsens FID. While injecting conditions in the last 16 blocks or all blocks increases controllability, it suffers considerable degradation in image quality, suggesting that early integration is more effective. Finally, reducing the number of conditional blocks from 16 to 4 or 8 degrades controllability, underscoring the need for sufficient capacity to process conditional information. Overall, these ablations confirm that our design choices in ScaleWeaver achieve the best trade-off between controllability and generation quality.

Table 5: Ablation studies on key design choices.

Component          Setting              F1 ↑    FID ↓    CLIP Score ↑
zero-linear        w/ → w/o             0.282   25.059   0.296
LoRA Rank          16 → 4               0.320   25.703   0.296
                   16 → 8               0.322   25.712   0.296
Condition Blocks   first 16 → last 16   0.344   24.510   0.296
                   first 16 → all       0.346   26.211   0.296
Number of Blocks   16 → 4               0.202   25.367   0.299
                   16 → 8               0.229   24.132   0.298
Default            Default              0.325   23.224   0.298

5 CONCLUSION

In this paper, we introduced ScaleWeaver, a parameter-efficient and scalable framework for controllable text-to-image generation based on visual autoregressive models. By leveraging a novel Reference Attention mechanism and a parameter reuse strategy with LoRA, ScaleWeaver enables precise and flexible multi-scale control while maintaining high generation quality and efficiency. Extensive experiments demonstrate that our approach achieves competitive controllability and faster inference compared to diffusion-based baselines, with strong text-image alignment and visual fidelity. We believe ScaleWeaver provides a practical and extensible foundation for future research on efficient and controllable generative modeling in the autoregressive paradigm.
REFERENCES

Junsong Chen, Jincheng Yu, Chongjian Ge, Lewei Yao, Enze Xie, Zhongdao Wang, James Kwok, Ping Luo, Huchuan Lu, and Zhenguo Li. PixArt-α: Fast training of diffusion transformer for photorealistic text-to-image synthesis. In Proceedings of the International Conference on Learning Representations, 2024.

Haoge Deng, Ting Pan, Haiwen Diao, Zhengxiong Luo, Yufeng Cui, Huchuan Lu, Shiguang Shan, Yonggang Qi, and Xinlong Wang. Autoregressive video generation without vector quantization. arXiv preprint arXiv:2412.14169, 2024.

Patrick Esser, Robin Rombach, and Björn Ommer. Taming transformers for high-resolution image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12873–12883, 2021.

Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Müller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, Dustin Podell, Tim Dockhorn, Zion English, and Robin Rombach. Scaling rectified flow transformers for high-resolution image synthesis. In Proceedings of the International Conference on Machine Learning, 2024.

Jian Han, Jinlai Liu, Yi Jiang, Bin Yan, Yuqi Zhang, Zehuan Yuan, Bingyue Peng, and Xiaobing Liu. Infinity: Scaling bitwise autoregressive modeling for high-resolution image synthesis. In Proceedings of the Computer Vision and Pattern Recognition Conference, pp. 15733–15744, 2025.

Jacky Hate. Text-to-image-2M (revision e64fca4), 2024. URL https://huggingface.co/datasets/jackyhate/text-to-image-2M.

Soufiane Hayou, Nikhil Ghosh, and Bin Yu. LoRA+: Efficient low rank adaptation of large models. In Proceedings of the International Conference on Machine Learning, 2024.

Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Proceedings of the Advances in Neural Information Processing Systems, 30, 2017.

Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. LoRA: Low-rank adaptation of large language models. In Proceedings of the International Conference on Learning Representations, 2022.

Kadirnar. Fluxdev-controlnet-16k, 2024. URL https://huggingface.co/datasets/kadirnar/fluxdev_controlnet_16k.

Black Forest Labs. FLUX. https://github.com/black-forest-labs/flux, 2024.

Ming Li, Taojiannan Yang, Huafeng Kuang, Jie Wu, Zhaoning Wang, Xuefeng Xiao, and Chen Chen. ControlNet++: Improving conditional controls with efficient consistency feedback. In Proceedings of the European Conference on Computer Vision, pp. 129–147. Springer, 2024a.

Tianhong Li, Yonglong Tian, He Li, Mingyang Deng, and Kaiming He. Autoregressive image generation without vector quantization. arXiv preprint arXiv:2406.11838, 2024b.

Xiang Li, Kai Qiu, Hao Chen, Jason Kuen, Zhe Lin, Rita Singh, and Bhiksha Raj. ControlVAR: Exploring controllable visual autoregressive modeling. arXiv preprint arXiv:2406.09750, 2024c.

Zongming Li, Tianheng Cheng, Shoufa Chen, Peize Sun, Haocheng Shen, Longjin Ran, Xiaoxin Chen, Wenyu Liu, and Xinggang Wang. ControlAR: Controllable image generation with autoregressive models. In Proceedings of the International Conference on Learning Representations, 2025.
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO: Common objects in context. In Proceedings of the European Conference on Computer Vision, pp. 740–755. Springer, 2014.

Zhuoyan Luo, Fengyuan Shi, Yixiao Ge, Yujiu Yang, Limin Wang, and Ying Shan. Open-MAGVIT2: An open-source project toward democratizing auto-regressive visual generation. CoRR, abs/2409.04410, 2024.

Xiaoxiao Ma, Mohan Zhou, Tao Liang, Yalong Bai, Tiejun Zhao, Biye Li, Huaian Chen, and Yi Jin. STAR: Scale-wise text-conditioned autoregressive image generation. arXiv preprint arXiv:2406.10797, 2024.

Chong Mou, Xintao Wang, Liangbin Xie, Yanze Wu, Jian Zhang, Zhongang Qi, and Ying Shan. T2I-Adapter: Learning adapters to dig out more controllable ability for text-to-image diffusion models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pp. 4296–4304, 2024.

Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. SDXL: Improving latent diffusion models for high-resolution image synthesis. In Proceedings of the International Conference on Learning Representations, 2024.

Can Qin, Shu Zhang, Ning Yu, Yihao Feng, Xinyi Yang, Yingbo Zhou, Huan Wang, Juan Carlos Niebles, Caiming Xiong, Silvio Savarese, et al. UniControl: A unified diffusion model for controllable visual generation in the wild. arXiv preprint arXiv:2305.11147, 2023.

Yunpeng Qu, Kun Yuan, Jinhua Hao, Kai Zhao, Qizhi Xie, Ming Sun, and Chao Zhou. Visual autoregressive modeling for image super-resolution. arXiv preprint arXiv:2501.18993, 2025.

Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In Proceedings of the International Conference on Machine Learning, pp. 8748–8763. PMLR, 2021.

Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10684–10695, 2022.

Zhuo Su, Wenzhe Liu, Zitong Yu, Dewen Hu, Qing Liao, Qi Tian, Matti Pietikäinen, and Li Liu. Pixel difference networks for efficient edge detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 5117–5127, 2021.

Peize Sun, Yi Jiang, Shoufa Chen, Shilong Zhang, Bingyue Peng, Ping Luo, and Zehuan Yuan. Autoregressive model beats diffusion: Llama for scalable image generation. arXiv preprint arXiv:2406.06525, 2024.

Zhenxiong Tan, Songhua Liu, Xingyi Yang, Qiaochu Xue, and Xinchao Wang. OminiControl: Minimal and universal control for diffusion transformer. arXiv preprint arXiv:2411.15098, 2024.

Haotian Tang, Yecheng Wu, Shang Yang, Enze Xie, Junsong Chen, Junyu Chen, Zhuoyang Zhang, Han Cai, Yao Lu, and Song Han. HART: Efficient visual generation with hybrid autoregressive transformer. In Proceedings of the International Conference on Learning Representations, 2025.

Keyu Tian, Yi Jiang, Zehuan Yuan, Bingyue Peng, and Liwei Wang. Visual autoregressive modeling: Scalable image generation via next-scale prediction. Proceedings of the Advances in Neural Information Processing Systems, 37:84839–84865, 2024.

Aaron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. Neural discrete representation learning. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Proceedings of the Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.
Anton Voronov, Denis Kuznedelev, Mikhail Khoroshikh, Valentin Khrulkov, and Dmitry Baranchuk. Switti: Designing scale-wise transformers for text-to-image synthesis. arXiv preprint arXiv:2412.01819, 2024.

Jianyi Wang, Kelvin C. K. Chan, and Chen Change Loy. Exploring CLIP for assessing the look and feel of images. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pp. 2555–2563, 2023.

Siyang Wang, Naishan Zheng, Jie Huang, and Feng Zhao. Navigating image restoration with VAR's distribution alignment prior. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7559–7569, June 2025.

Chenfei Wu, Jiahao Li, Jingren Zhou, Junyang Lin, Kaiyuan Gao, Kun Yan, Shengming Yin, Shuai Bai, Xiao Xu, Yilei Chen, Yuxiang Chen, Zecheng Tang, Zekai Zhang, Zhengyi Wang, An Yang, Bowen Yu, Chen Cheng, Dayiheng Liu, Deqing Li, Hang Zhang, Hao Meng, Hu Wei, Jingyuan Ni, Kai Chen, Kuan Cao, Liang Peng, Lin Qu, Minggang Wu, Peng Wang, Shuting Yu, Tingkun Wen, Wensen Feng, Xiaoxiao Xu, Yi Wang, Yichang Zhang, Yongqiang Zhu, Yujia Wu, Yuxuan Cai, and Zenan Liu. Qwen-Image technical report, 2025. URL https://arxiv.org/abs/2508.02324.

Ryan Xu, Dongyang Jin, Yancheng Bai, Rui Lan, Xu Duan, Lei Sun, and Xiangxiang Chu. SCALAR: Scale-wise controllable visual autoregressive learning. arXiv preprint arXiv:2507.19946, 2025.

Lihe Yang, Bingyi Kang, Zilong Huang, Xiaogang Xu, Jiashi Feng, and Hengshuang Zhao. Depth Anything: Unleashing the power of large-scale unlabeled data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10371–10381, 2024.

Ziyu Yao, Jialin Li, Yifeng Zhou, Yong Liu, Xi Jiang, Chengjie Wang, Feng Zheng, Yuexian Zou, and Lei Li. CAR: Controllable autoregressive modeling for visual generation. arXiv preprint arXiv:2410.04671, 2024.

Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 3836–3847, 2023.

Yuxuan Zhang, Yirui Yuan, Yiren Song, Haofan Wang, and Jiaming Liu. EasyControl: Adding efficient and flexible control for diffusion transformer. arXiv preprint arXiv:2503.07027, 2025.

Shihao Zhao, Dongdong Chen, Yen-Chun Chen, Jianmin Bao, Shaozhe Hao, Lu Yuan, and Kwan-Yee K. Wong. Uni-ControlNet: All-in-one control to text-to-image diffusion models. Proceedings of the Advances in Neural Information Processing Systems, 36:11127–11150, 2023.

Xianwei Zhuang, Yuxin Xie, Yufan Deng, Liming Liang, Jinghan Ru, Yuguo Yin, and Yuexian Zou. VARGPT: Unified understanding and generation in a visual autoregressive multimodal large language model. arXiv preprint arXiv:2501.12327, 2025.

A APPENDIX

A.1 MORE IMPLEMENTATION DETAILS

We train all models using 4 NVIDIA A40 GPUs, with a per-GPU batch size of 2. Training is performed using the LoRA-Plus optimizer with a learning rate of 5e-5 and a LoRA rank of 16. Only the LoRA modules and zero-linear layers in the conditional branch are updated; the image/text backbone remains frozen. The default training hyperparameters are summarized in Table 6.
Table 6: Training configuration.

Hyperparameter                Default Setting
Gradient clipping             5
Adam betas                    (0.9, 0.97)
LoRA+ LR ratio                1.25
Batch size                    8
Learning rate                 5e-5
Learning rate decay           None
Mixed precision               bfloat16
Reweight loss by scale        True
Zero-linear gating            Enabled
LoRA rank                     16
Condition injection location  First 16 blocks

Table 7 summarizes the number of training iterations for each condition type. All models converge efficiently, typically within 30k iterations, demonstrating the practicality of our approach.

Table 7: Training iterations for each condition type.

Condition Type   Training Iterations
Canny Edge       36,000
Depth Map        24,000
Colorization     24,000
Deblur           24,000
Sketch           24,000
Palette          24,000

A.2 VISUALIZATION COMPARISON FOR ABLATION STUDY

In this section, we provide a qualitative comparison of the different injection mechanisms evaluated in our ablation study, as illustrated in Figure 5. By zooming in on the generated images, it is evident that our proposed injection method produces more realistic subjects with fewer artifacts compared to alternative approaches. This demonstrates the effectiveness of our injection strategy in integrating conditional information while maintaining image quality.

[Figure 5 panels: Canny conditions with outputs from Ours, MM-Attention, and Spatial Addition.]
Figure 5: Visualization comparison for the ablation study. We show qualitative results for different injection mechanisms.

A.3 MORE VISUALIZATION RESULTS

Diverse generation with the same sketch condition is shown in Fig. 6 along with the prompts we used. More visualization results under different control conditions are shown in Figs. 7, 8, and 9. These examples further demonstrate that our approach is capable of faithfully following diverse types of control signals, while simultaneously producing outputs with rich variations in appearance and structure. The results indicate that our method not only achieves precise controllability but also preserves a high degree of generative diversity, which is crucial for adapting to different application scenarios.

[Figure 6 pairs one sketch condition with five stylistically diverse prompts, rendering a bird in a forest as a Japanese Ukiyo-e woodblock print, a stained glass window, a stylized illustration, a high-contrast noir film still, and an 8-bit pixel art scene.]
Figure 6: Conditional generation results on diverse condition types.

A.4 LIMITATION AND FUTURE WORK

Limitation. While ScaleWeaver demonstrates strong performance across a range of conditions, there are some limitations to consider. First, the generation capability is ultimately determined by the capacity of the underlying base model; with our efficient training strategy, we believe ScaleWeaver can perform well and further benefit from scaling to larger VAR models. Second, our method does not currently support multi-condition control simultaneously.
Future work. Future research directions include extending ScaleWeaver to support multi-condition control, as the current framework is limited to handling a single condition at a time. The Reference Attention mechanism, with its modular design, naturally supports multi-condition integration, making this a promising avenue for exploration. Additionally, investigating methods to dynamically balance the influence of multiple conditions could further enhance the flexibility and adaptability of the model. Another potential direction is to scale ScaleWeaver to larger VAR backbones, leveraging the scalability of autoregressive models to improve generation quality and controllability.

A.5 THE USE OF LARGE LANGUAGE MODELS (LLMS)

We utilize a large language model to assist in correcting grammar errors and polishing the language of this paper. All ideas, methods, experiments, and conclusions are the sole work of the authors.

Figure 7: Diverse generation by ScaleWeaver, with a Canny image or sketch image as the control image.

Figure 8: Diverse generation by ScaleWeaver, with a blur image or depth map as the control image.

Figure 9: Diverse generation by ScaleWeaver, with a grey image or palette map as the control image.
Preprint SCALEWEAVER: WEAVING EFFICIENT CONTROLLABLE T2I GENERATION WITH MULTI-SCALE REFERENCE ATTENTION Keli Liu*, Zhendong Wang*, Wengang Zhou†, Shaodong Xu, Ruixiao Dong, Houqiang Li {sa23006063, zhendongwang, xiodon, { canny depth colorization palette sketch deblur Figure 1: Images generated by ScaleWeaver. Our ScaleWeaver enables efficient and precise controllable text-to-image generation based on the visual autoregressive model, supporting diverse condition signals and producing high-fidelity images with strong text alignment and controllability. ABSTRACT Text-to-image generation with visual autoregressive (VAR) models has recently achieved impressive advances in generation fidelity and inference efficiency. While control mechanisms have been explored for diffusion models, enabling precise and flexible control within VAR paradigm remains underexplored. To bridge this critical gap, in this paper, we introduce ScaleWeaver, a novel framework designed to achieve high-fidelity, controllable generation upon advanced VAR models through parameter-efficient fine-tuning. The core module in ScaleWeaver is the improved MMDiT block with the proposed Reference Attention module, which efficiently and effectively incorporates conditional information. Different from MM Attention, the proposed Reference Attention module discards the unnecessary attention from image→condition, reducing computational cost while stabilizing control injection. Besides, it strategically emphasizes parameter reuse, leveraging the capability of the VAR backbone itself with a few introduced parameters to process control information, and equipping a zero-initialized linear projection to ensure that control signals are incorporated effectively without disrupting the generative capability of the base model. Extensive experiments show ∗Equal contribution. † Corresponding author. 1 16 Oct 2025 Preprint that ScaleWeaver delivers high-quality generation and precise control while attaining superior efficiency over diffusion-based methods, making ScaleWeaver a practical and effective solution for controllable text-to-image generation within the visual autoregressive paradigm. Code and models will be released. 1 INTRODUCITON Generative models have achieved remarkable progress in recent years, with two dominant paradigms emerging: diffusion models and autoregressive (AR) models. The rise of diffusion-based text-toimage (T2I) models has brought unprecedented fidelity and diversity to generative modeling (Rombach et al., 2022; Podell et al., 2024; Chen et al., 2024; Esser et al., 2024; Wu et al., 2025). In particular, controllable image generation with specified spatial conditions such as edges, depth, or segmentation maps has been extensively explored in diffusion models, giving rise to methods that enable precise and flexible control (Zhang et al., 2023; Mou et al., 2024). These conditional generation approaches have shown great practical value, enabling applications such as guided content creation, image editing, and domain-specific generation in a wide range of scenarios. In parallel, Autoregressive (AR) models leverage the scaling properties and causal modeling capabilities of large language models, offering strong scalability and generalizability, as demonstrated by systems such as LlamaGen (Sun et al., 2024) and Open-MAGVIT2 (Luo et al., 2024). Within this family, visual autoregressive (VAR) (Tian et al., 2024) models have recently emerged as a promising direction. 
Unlike conventional AR models that predict the next token in sequence, VAR adopts a next-scale prediction paradigm, allowing the model to capture hierarchical visual structures and improving both quality and efficiency. Representative works, including Infinity (Han et al., 2025), HART (Tang et al., 2025), Switti (Voronov et al., 2024), and Star-T2I Ma et al. (2024), demonstrate that VAR can achieve high-resolution and high-quality T2I synthesis. Nevertheless, controllable generation with VAR remains largely unexplored. Existing methods such as ControlVAR (Li et al., 2024c) and CAR (Yao et al., 2024) are limited to class-conditioned generation on ImageNet, fail to demonstrate effectiveness in high-quality T2I settings, and rely on resource-intensive fine-tuning, which restricts their scalability. These limitations pose a key challenge-how to efficiently and effectively inject conditional information without disrupting base generation? In this paper, to address the aforementioned challenge, we propose ScaleWeaver, a novel framework for efficient and effective controllable T2I generation built on VAR models. In ScaleWeaver, conditional inputs (e.g., canny edges or depth maps) are tokenized using the same tokenizer as the input image, which is proven to be effective. These conditional tokens are then processed through a dedicated conditional branch, which is trained with LoRA (Hu et al., 2022) modules across all scales. The resulting conditional tokens then interact with image tokens through our proposed core module-Reference Attention. Reference Attention is an enhanced attention mechanism integrated into the MMDiT block (Esser et al., 2024), designed to incorporate control information with high flexibility and effectiveness. In contrast to original MM attention, Reference Attention removes the attention from image→condition that is computationally redundant for condition injection. Besides, we apply a parameter reuse strategy with additional linear projectors to exploit the autoregressive backbone's existing capacity to process conditional information. This greatly reduces training overhead while improving adaptation efficiency, making the method both practical and scalable for large models. The overall design of Reference Attention allows the model to progressively learn meaningful control while preserving the generative quality of the base model. Extensive experiments demonstrate that ScaleWeaver achieves high-quality generation and robust text-image alignment across controllable generation tasks with a wide range of conditions. Our approach consistently maintains strong performance and adaptability across various types of control signals. Notably, ScaleWeaver offers a substantial improvement in efficiency over state-of-the-art diffusion-based control methods, making it a practical and scalable solution for controllable T2I generation in the autoregressive paradigm. Our main contributions can be summarized as follows: • We propose ScaleWeaver, a novel framework for controllable generation based on text-toimage VAR models, equipped with light-weight condition injection, inheriting the inference advantage and scale-wise bidirectional modeling. 2 Preprint • We propose Reference Attention mechanism for stable multi-scale integration of the condition with a significant reduction of computational cost compared with widely-used MMAttention. Cooperating with a parameter reuse and zero-init strategy, Reference Attention further reduces training cost while maintaining strong adaptability. 
• Extensive qualitative and quantitative evaluations show that our ScaleWeaver achieves superior controllability and efficiency compared to existing diffusion-based controllable generation methods.

2 RELATED WORKS

2.1 AUTOREGRESSIVE MODELS IN VISUAL GENERATION

Autoregressive (AR) models have long been applied to visual generation by modeling images as sequences of discrete tokens. Early works such as VQVAE (van den Oord et al., 2017) and VQGAN (Esser et al., 2021) tokenize images into codebook indices and employ transformer-based decoders to autoregressively generate visual content. More recent advances leverage large-scale language modeling techniques for image synthesis, as seen in LlamaGen (Sun et al., 2024) and Open-MAGVIT2 (Luo et al., 2024), which utilize powerful transformer backbones to scale up AR generation. In parallel, MAR (Li et al., 2024b) and NOVA (Deng et al., 2024) remove vector quantization and directly model continuous latents with a diffusion loss, improving fidelity and scalability for images and videos. Within the AR family, visual autoregressive (VAR) (Tian et al., 2024) approaches reconceptualize autoregressive modeling by adopting next-scale (coarse-to-fine) prediction rather than next-token prediction, preserving hierarchical structure and enabling scalable autoregressive image synthesis. Recent systems such as Infinity (Han et al., 2025), HART (Tang et al., 2025), Switti (Voronov et al., 2024), and Star-T2I (Ma et al., 2024) further demonstrate that VAR-based text-to-image models can achieve performance comparable to state-of-the-art diffusion models, leveraging scale-wise transformers and coarse-to-fine generation to enable high-resolution synthesis with strong text alignment. These works establish VAR as a competitive backbone for high-quality T2I generation. Beyond image generation, VAR-style models extend naturally to other pixel-to-pixel vision tasks, including super-resolution (VARSR) (Qu et al., 2025), image restoration (Varformer) (Wang et al., 2025), and unified generation frameworks (VARGPT) (Zhuang et al., 2025). These results underscore VAR's versatility and scalability across visual generation tasks.

2.2 CONTROLLABLE IMAGE GENERATION

Controllable image generation methods aim to enable fine-grained control by injecting external conditional information into the synthesis process. Adapter-based approaches, notably ControlNet and T2I-Adapter, attach condition encoders and lightweight modulation heads to pretrained diffusion backbones, conditioning on edges, depth, normals, or segmentation while largely freezing the backbone (Zhang et al., 2023; Mou et al., 2024). Subsequent frameworks such as UniControl (Qin et al., 2023), Uni-ControlNet (Zhao et al., 2023), and ControlNet++ (Li et al., 2024a) broaden the condition space and optimize training strategies. In parallel, attention-based schemes like OminiControl (Tan et al., 2024) and EasyControl (Zhang et al., 2025) leverage multimodal attention to integrate conditions within the denoising transformer, enhancing spatial alignment and flexibility. With the advent of autoregressive (AR) backbones, research on controllable generation has increasingly extended to AR models. ControlAR (Li et al., 2025) introduces conditional decoding for LlamaGen (Sun et al., 2024), enabling precise control in an AR setting.
In the context of visual autoregressive models, ControlVAR (Li et al., 2024c), CAR (Yao et al., 2024), and SCALAR (Xu et al., 2025) incorporate conditioning into scale-wise generation, demonstrating controllability within the VAR framework. However, these methods are restricted to class-conditional generation; high-quality image generation with text prompts has not been well explored.

3 METHOD

3.1 PRELIMINARY OF VISUAL AUTOREGRESSIVE GENERATION

Different from conventional autoregressive models based on raster-scan next-token decoding, the visual autoregressive (VAR) model is an innovative next-scale prediction framework. By conditioning each finer scale on coarser context and the text prompt, VAR aligns with a coarse-to-fine perceptual progression while satisfying the autoregressive premise on newly defined causal units. Essentially, the autoregressive unit is an entire token map at a given scale rather than a single token, which reduces the number of autoregressive rounds while retaining bidirectional correlations within each scale. The VAR model incorporates a multi-scale visual tokenizer and a generative transformer for image synthesis. The tokenizer encodes an image I into K residual maps {R_1, ..., R_K} from coarse to fine, and a text encoder transforms the text prompt into an embedding t. The generative transformer is then trained to predict the next-scale image map R_s conditioned on the maps of previous scales and the text embedding, which factorizes the joint probability distribution into conditional distributions:

p_θ(R_{1:K}) = ∏_{s=1}^{K} p_θ(R_s | R_{<s}, t),    (1)

where θ denotes the weights of the generative transformer. During training, the VAR model applies teacher forcing to supply ground-truth coarse scales; at inference, the model samples sequentially from coarse to fine and detokenizes the residual hierarchy to obtain the final image.

3.2 OVERVIEW OF SCALEWEAVER

To support a condition image input, we extend the vanilla VAR architecture above to incorporate an auxiliary visual condition c. Specifically, the multi-scale tokenizer is first used to map the conditional input to multi-scale tokens {C_1, ..., C_K}. The generator then predicts the next-scale image tokens conditioned on the image tokens and condition tokens of coarser scales, together with the text prompt, formulated as:

p_θ(R_{1:K} | t, c) = ∏_{s=1}^{K} p_θ(R_s | R_{<s}, t, C_{<s}),    (2)

where condition tokens serve as side inputs and are not autoregressively predicted.

Overall design. As illustrated in Fig. 2 (a), ScaleWeaver employs the same multi-scale tokenizer for both image and condition, ensuring spatial and scale alignment between R_s and C_s. Conditional tokens are processed by a conditional branch that mirrors the backbone interfaces and is applied at every scale. During next-scale prediction, image tokens interact with condition tokens via the proposed Reference Attention module inserted in the attention block of an MMDiT-style transformer; this fusion occurs at each scale while the rest of the transformer structure remains unchanged.

Efficient parameter reuse with LoRA. To reduce training cost as much as possible, ScaleWeaver reuses text-to-image backbone weights and introduces LoRA adapters in the conditional branch projections across scales. LoRA modules (with small rank r) modulate the condition-side projections and are applied to projection layers together with the condition embedder; image-side weights are frozen to preserve the base model's generative capability. This strategy focuses on inheriting the backbone's capacity for condition processing, keeps the number of additional parameters minimal, and enables efficient training.
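To make the factorization in Eq. (2) concrete, the following sketch shows the inference loop with condition tokens as side inputs. It is a minimal illustration under stated assumptions: the `transformer` call signature and the tokenizer interfaces (`encode_multiscale`, `decode_multiscale`) are hypothetical names standing in for the Infinity backbone and its multi-scale tokenizer, not the authors' released API.

```python
import torch

@torch.no_grad()
def generate(transformer, tokenizer, text_emb, cond_img, num_scales):
    # Tokenize the condition with the SAME multi-scale tokenizer as images,
    # yielding scale-aligned token maps C_1..C_K (Sec. 3.2).
    cond_tokens = tokenizer.encode_multiscale(cond_img)
    image_tokens = []  # grows coarse-to-fine: R_1, ..., R_{s-1}
    for s in range(num_scales):
        # p(R_s | R_<s, t, C_<s): condition tokens are side inputs and are
        # never themselves predicted (Eq. 2).
        logits = transformer(image_tokens, text_emb, cond_tokens[:s])
        R_s = torch.distributions.Categorical(logits=logits).sample()
        image_tokens.append(R_s)
    # Detokenize the residual hierarchy into the final image.
    return tokenizer.decode_multiscale(image_tokens)
```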
Multi-modal classifier-free guidance. We adopt standard VAR training with a multi-scale cross-entropy objective on image tokens only; conditional tokens are auxiliary inputs and receive no direct supervision. Beyond the original classifier-free guidance (CFG) used in VAR models, to enable CFG with conditional information we randomly drop the condition during training (replacing the condition image with a blank input) with probability p, thereby exposing the model to both conditional and unconditional regimes. At inference, guidance operates on the logits over the next-scale token vocabulary at each scale s. Let z_s(·) denote the pre-softmax logits predicted under a particular conditioning setup. The final multi-modal CFG is computed by:

z_s^{guid} = z_s^{base} + γ_img (z_s^{img} − z_s^{base}) + γ (z_s^{both} − z_s^{img}),    (3)

where z_s^{base} is the prediction conditioned on neither text nor image, z_s^{img} receives only the image condition, and z_s^{both} receives both text and image; γ_img, γ ≥ 0 are hyperparameters controlling the image- and text-guided strengths, respectively. The final output is obtained by Softmax(z_s^{guid}) at each scale.

Figure 2: Illustration of the ScaleWeaver framework. (a) Overall pipeline; (b) Reference Attention; (c) attention map comparison with MM-Attention. The conditional input is tokenized using the same multi-scale tokenizer as the image, processed by a LoRA-based conditional branch, and fused with image tokens via Reference Attention at each scale. In the Reference Attention module, image-side projections are frozen while the conditional branch is LoRA-tuned. Image queries attend to condition keys/values through cross-attention, gated by a zero-initialized projection to preserve base generation capability and enable gradual control. Unlike MM-Attention, Reference Attention removes the image→condition path, reducing unnecessary computation and stabilizing control injection. For clarity, image-text cross-attention and FFN within the original Infinity blocks are omitted.

3.3 REFERENCE ATTENTION

The core module in our ScaleWeaver is Reference Attention, which aims to inject the control image efficiently and effectively. We incorporate attention-based fusion of conditional and image tokens, using a mechanism designed to balance flexibility with stability. Unlike OminiControl (Tan et al., 2024) and EasyControl (Zhang et al., 2025), which employ standard multi-modal attention (Esser et al., 2024) for condition injection, the Reference Attention in our approach achieves zero injection at initialization, ensuring that conditional information is introduced without disrupting the base generation process. This design choice allows control to emerge progressively during training while maintaining the autoregressive backbone's generative prior. Moreover, multi-modal attention typically admits four routes: image/condition self-attention and bi-directional cross-attention. Reference Attention explicitly removes the image→condition path, retaining condition→image and both self-attentions. This asymmetry reduces representational entanglement, simplifies optimization, and concentrates parameter updates on the conditional branch, which we found important for stable controllability without degrading image priors.

Let X_s^{(i)} ∈ ℝ^{L_s×d} denote image tokens and C_s^{(i)} ∈ ℝ^{L_s^c×d} denote condition tokens at scale s. We perform self-attention independently on each stream and enable a single cross path from condition to image. We derive query, key, and value projections separately for the two pathways:

[Q_x, K_x, V_x] = X_s^{(i)} [W_Q, W_K, W_V],
[Q_c, K_c, V_c] = C_s^{(i)} [W_Q + ΔW_Q, W_K + ΔW_K, W_V + ΔW_V].    (4)

This setup highlights that the image pathway maintains frozen backbone projections, while the condition pathway learns lightweight low-rank updates to encode control information. With these projections, Reference Attention updates tokens as follows:

X̂_s^{(i)} = Attn(Q_x, K_x, V_x),   Ĉ_s^{(i)} = Attn(Q_c, K_c, V_c),
X̃_s^{(i)} = X̂_s^{(i)} + W_zero · Attn(Q_x, K_c, V_c),    (5)

where W_zero is zero-initialized, ensuring that at the start the module behaves identically to standard self-attention, while gradually learning to incorporate conditional guidance as training progresses. Reference Attention acts as a drop-in replacement inside the attention block of an MMDiT-style transformer. At each scale s, the module receives X_s^{(i)} and C_s^{(i)}, applies stream-wise self-attention, then injects the zero-initialized condition via Eq. 5. The module is applied at all scales, enabling coarse guidance at low s and refinement at high s, while other layers remain unchanged. Since image-side projections are frozen and condition-side projections are LoRA-tuned with a small rank, the number of trainable parameters is minimal, maintaining computational efficiency.
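The equations above translate directly into a compact module. Below is a minimal single-head PyTorch sketch of Reference Attention (Eqs. 4-5) together with the multi-modal CFG combination of Eq. (3); it is an illustrative reconstruction, not the authors' implementation, and omits multi-head splitting, the image-text cross-attention, and the FFN of the full Infinity block.

```python
import torch.nn as nn
import torch.nn.functional as F

class ReferenceAttention(nn.Module):
    """Single-head sketch of Reference Attention (Eqs. 4-5): frozen image-side
    projections, LoRA-updated condition-side projections, zero-init output gate."""
    def __init__(self, dim, lora_rank=16):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim, bias=False)  # frozen W_Q, W_K, W_V
        self.qkv.weight.requires_grad_(False)
        self.lora_down = nn.Linear(dim, lora_rank, bias=False)  # ΔW = up ∘ down
        self.lora_up = nn.Linear(lora_rank, 3 * dim, bias=False)
        nn.init.zeros_(self.lora_up.weight)             # ΔW starts at zero
        self.zero_out = nn.Linear(dim, dim, bias=False) # W_zero
        nn.init.zeros_(self.zero_out.weight)

    def forward(self, x, c):
        # Image stream: frozen backbone projections only.
        qx, kx, vx = self.qkv(x).chunk(3, dim=-1)
        # Condition stream: frozen projections + low-rank updates (Eq. 4).
        qc, kc, vc = (self.qkv(c) + self.lora_up(self.lora_down(c))).chunk(3, dim=-1)
        # Stream-wise self-attention; no image→condition path (Eq. 5).
        x_hat = F.scaled_dot_product_attention(qx, kx, vx)
        c_hat = F.scaled_dot_product_attention(qc, kc, vc)
        # Condition→image cross-attention, gated by zero-initialized W_zero.
        return x_hat + self.zero_out(F.scaled_dot_product_attention(qx, kc, vc)), c_hat

def multimodal_cfg(z_base, z_img, z_both, gamma_img, gamma_txt):
    # Eq. 3: image guidance first, then text guidance on top.
    return z_base + gamma_img * (z_img - z_base) + gamma_txt * (z_both - z_img)
```

Because both `lora_up` and `zero_out` start at zero, the module reproduces the frozen backbone's self-attention exactly at initialization; control strength grows only as these weights move away from zero during training.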
4 EXPERIMENTS

4.1 EXPERIMENTAL SETUP

Tasks and conditions. We conduct experiments on six conditional generation tasks that span diverse structural and appearance-based controls: Canny edge, depth (Yang et al., 2024), blur, colorization, color palette, and sketch (Su et al., 2021) images. These tasks are chosen to cover both geometric guidance and appearance- or style-related guidance, thereby providing a comprehensive assessment of controllability and image quality.

Training details. Training is conducted on a composite dataset of 26k high-resolution images generated with FLUX.1-dev from the text-to-image-2M dataset (Hate, 2024) and the fluxdev-controlnet-16k dataset (Kadirnar, 2024). We use Infinity 2B as the autoregressive backbone and fine-tune with the LoRA-Plus (Hayou et al., 2024) optimizer on 4 NVIDIA A40 GPUs. Only the LoRA modules and zero-linear layers in the conditional branch are updated, while the backbone remains frozen. For more details, please refer to Appendix A.1.

Evaluation metrics. We assess both generation quality and controllability. For image quality, we report Fréchet Inception Distance (FID) (Heusel et al., 2017) and CLIP-IQA (Wang et al., 2023), and use CLIP Score (Radford et al., 2021) to evaluate text-image alignment. For control fidelity, we use the F1 score for Canny-based edge control and mean squared error (MSE) for the remaining conditions. All evaluations are conducted on 5,000 images from the COCO 2017 (Lin et al., 2014) validation set. This evaluation protocol provides a balanced view of fidelity, alignment, and controllability.
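The paper does not spell out the metric computation, but the control-fidelity protocol can be sketched as follows: re-extract the condition from each generated image and compare it with the input condition, using F1 for Canny edges and MSE otherwise. The OpenCV thresholds below are illustrative assumptions, not the authors' settings.

```python
import cv2
import numpy as np

def canny_f1(cond_edges, gen_img, lo=100, hi=200):
    """F1 between the condition edge map and edges re-extracted from the output.
    gen_img is assumed to be an 8-bit grayscale image."""
    pred = cv2.Canny(gen_img, lo, hi) > 0
    true = cond_edges > 0
    tp = np.logical_and(pred, true).sum()
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(true.sum(), 1)
    return 2 * precision * recall / max(precision + recall, 1e-8)

def condition_mse(cond_map, re_estimated_map):
    """MSE between the input condition and the condition re-estimated from the
    output (e.g., a depth map re-predicted by Depth Anything)."""
    diff = cond_map.astype(np.float64) - re_estimated_map.astype(np.float64)
    return float(np.mean(diff ** 2))
```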
4.2 EXPERIMENTAL RESULTS

Quantitative comparison. We compare ScaleWeaver with a comprehensive set of controllable generation baselines, including the SD1.5-based ControlNet (Zhang et al., 2023) and T2I-Adapter (Mou et al., 2024), as well as the FLUX.1-based ControlNet Pro, OminiControl (Tan et al., 2024), and EasyControl (Zhang et al., 2025). ScaleWeaver is evaluated across diverse condition types, demonstrating effective and precise control in all settings. Notably, for challenging conditions such as depth and blur, our controllability surpasses existing methods. Across all tasks, ScaleWeaver maintains strong generative quality and text-image consistency, achieving results on par with leading diffusion-based approaches. Importantly, these advances are realized with significantly improved efficiency, underscoring the practical advantages of our approach for controllable generation.

Table 1: Quantitative comparison of controllable T2I generation on COCO 2017. Best results are shown in bold and second-best results are underlined. † indicates that the results are reproduced under the same testing protocol to ensure a fair comparison.

Condition      Method            Base model     F1 ↑ / MSE ↓   FID ↓   CLIP-IQA ↑   CLIP-Score ↑
Depth          ControlNet        SD 1.5          923           23.03   0.64         0.308
               T2I-Adapter       SD 1.5         1560           24.72   0.61         0.309
               ControlNet Pro    FLUX.1-dev     2958           62.20   0.55         0.212
               OminiControl†     FLUX.1-dev      556           30.75   0.68         0.307
               EasyControl†      FLUX.1-dev      607           23.04   0.57         0.303
               Ours              Infinity 2B     506           25.80   0.67         0.302
Canny          ControlNet        SD 1.5         0.35           18.74   0.65         0.305
               T2I-Adapter       SD 1.5         0.22           20.06   0.57         0.305
               ControlNet Pro    FLUX.1-dev     0.21           98.69   0.48         0.192
               OminiControl†     FLUX.1-dev     0.45           23.63   0.66         0.306
               EasyControl†      FLUX.1-dev     0.32           18.47   0.60         0.303
               Ours              Infinity 2B    0.30           22.31   0.66         0.299
Colorization   ControlNet Pro    FLUX.1-dev      994           30.38   0.40         0.279
               OminiControl†     FLUX.1-dev      109            9.76   0.56         0.312
               Ours              Infinity 2B     186            9.34   0.48         0.310
Deblur         ControlNet Pro    FLUX.1-dev      338           16.27   0.55         0.294
               OminiControl      FLUX.1-dev       62           18.89   0.59         0.301
               Ours              Infinity 2B      46           17.39   0.53         0.308

Qualitative comparison. As shown in Fig. 3, our method consistently produces images with clear, well-defined subjects and distinct boundaries, while maintaining both foreground and background integrity. The generated images exhibit high visual coherence, with minimal artifacts and strong separation between objects and their surroundings. Furthermore, our approach demonstrates robust text-image alignment: even for prompts with long or complex descriptions, ScaleWeaver accurately captures fine-grained details and generates images that faithfully reflect the input text.

Figure 4: Diverse generation with the same sketch condition but different text prompts.

Diverse generation. As demonstrated in Fig. 4, ScaleWeaver is capable of generating images with diverse styles and content for different user prompts while preserving a fixed semantic structure provided by the condition. This highlights the effectiveness of our controllable generation framework in delivering both high fidelity and precise semantic correspondence, as well as supporting diverse user intent under consistent structural guidance.

Human evaluation. To further validate the effectiveness of our approach, we conduct a user study in which participants are asked to select the best image according to three criteria: image quality, controllability, and text-image alignment. The results of the user study are consistent with our quantitative findings, confirming that ScaleWeaver achieves competitive controllability and high-quality text-to-image generation compared to baseline methods.
Analysis of efficiency. We compare ScaleWeaver's efficiency with state-of-the-art diffusion-based methods in Tab. 3. All experiments are conducted on 1024 × 1024 image generation. For diffusion-based baselines, we use 28 sampling steps, following standard practice. Inference latency is measured as the wall-clock time required to generate an image on an NVIDIA A40 GPU.

Figure 3: Qualitative comparison of controllable text-to-image generation results (columns: condition, Ours, ControlNet, T2I-Adapter, EasyControl, OminiControl; conditions include canny, deblur, colorization, and depth). ScaleWeaver achieves high-fidelity synthesis and precise control across diverse conditions, outperforming diffusion-based baselines in both visual quality and controllability. Best viewed with zoom-in.

Table 3: Efficiency comparison for 1024 × 1024 image generation.

Method           #Params   Latency (s)
ControlNet Pro   15B       38.7
OminiControl     12B       70.9
EasyControl      12B       60.4
Ours             2.3B       7.6

Our method achieves comparable or better generation quality with a significantly smaller model size (2.3B parameters) compared to FLUX.1 (Labs, 2024) based methods. More importantly, ScaleWeaver delivers a substantial speedup in inference, requiring only 7.6 seconds per image, which is 5-9× faster than FLUX.1-based approaches. This demonstrates the efficacy of our approach and highlights the practical advantage of controllable generation with VAR-based models, making ScaleWeaver a highly efficient and scalable solution for real-world applications.

Table 2: Human evaluation in terms of generation quality, controllability, and text-image alignment. Each value indicates the percentage of times a method was preferred by users (higher is better). ScaleWeaver outperforms all baselines across all criteria.

Method         Generation Quality ↑   Controllability ↑   Text-Image Alignment ↑   Overall ↑
OminiControl   0.3053                 0.3474              0.2947                   0.3158
ControlNet     0.0702                 0.1053              0.0912                   0.0889
EasyControl    0.1719                 0.2702              0.2351                   0.2257
Ours           0.4526                 0.2772              0.3789                   0.3696

4.3 ABLATION STUDY

To better understand which injection mechanisms are most effective for spatial control in visual autoregressive (VAR) text-to-image models, we conducted ablation studies on injection mechanisms and conditional branch configurations.

Table 4: Ablation study on injection methods.

Injection Method   F1 ↑    FID ↓    CLIP Score ↑
Spatial addition   0.315   25.448   0.296
MM-Attention       0.344   24.888   0.294
Ours               0.325   23.224   0.298
All ablation experiments are conducted on the Canny condition, with each model trained for 24k iterations. During evaluation, guidance is applied solely to the condition image with a guidance scale of 1.5.

Injection mechanisms. We compare three injection mechanisms: spatial addition, multi-modal attention (MM-Attention), and our proposed Reference Attention. As shown in Tab. 4, MM-Attention achieves the highest controllability (F1 score) but at the cost of degraded generation quality and text-image alignment. Our Reference Attention achieves the best overall balance, delivering strong controllability while preserving high generation quality and text alignment. Please refer to Appendix A.2 for a visual comparison. This demonstrates that Reference Attention effectively integrates conditional information without disrupting the base generative prior.

Design choices. We perform comprehensive ablation studies to investigate the key components of ScaleWeaver, including zero-initialized linear gating, LoRA rank, condition injection location, and the number of conditional blocks. As shown in Tab. 5, the default setting (zero-linear enabled, LoRA rank 16, condition injected in the first 16 blocks) achieves the best balance of controllability and generation quality. Removing the zero-linear gate significantly reduces controllability, highlighting its importance for stable control injection. Lowering the LoRA rank from 16 to 4 or 8 slightly decreases controllability and worsens FID. While injecting conditions in the last 16 blocks or all blocks increases controllability, it suffers considerable degradation in image quality, suggesting that early integration is more effective. Finally, reducing the number of conditional blocks from 16 to 4 or 8 degrades controllability, underscoring the need for sufficient capacity to process conditional information. Overall, these ablations confirm that our design choices in ScaleWeaver achieve the best trade-off between controllability and generation quality.

Table 5: Ablation studies on key design choices.

Component          Setting               F1 ↑    FID ↓    CLIP Score ↑
Zero-linear        w/ → w/o              0.282   25.059   0.296
LoRA rank          16 → 4                0.320   25.703   0.296
                   16 → 8                0.322   25.712   0.296
Condition blocks   first 16 → last 16    0.344   24.510   0.296
                   first 16 → all        0.346   26.211   0.296
Number of blocks   16 → 4                0.202   25.367   0.299
                   16 → 8                0.229   24.132   0.298
Default            Default               0.325   23.224   0.298

5 CONCLUSION

In this paper, we introduced ScaleWeaver, a parameter-efficient and scalable framework for controllable text-to-image generation based on visual autoregressive models. By leveraging a novel Reference Attention mechanism and a parameter reuse strategy with LoRA, ScaleWeaver enables precise and flexible multi-scale control while maintaining high generation quality and efficiency. Extensive experiments demonstrate that our approach achieves competitive controllability and faster inference compared to diffusion-based baselines, with strong text-image alignment and visual fidelity. We believe ScaleWeaver provides a practical and extensible foundation for future research on efficient and controllable generative modeling in the autoregressive paradigm.

REFERENCES

Junsong Chen, Jincheng Yu, Chongjian Ge, Lewei Yao, Enze Xie, Zhongdao Wang, James Kwok, Ping Luo, Huchuan Lu, and Zhenguo Li. PixArt-α: Fast training of diffusion transformer for photorealistic text-to-image synthesis. In Proceedings of the International Conference on Learning Representations, 2024.
Haoge Deng, Ting Pan, Haiwen Diao, Zhengxiong Luo, Yufeng Cui, Huchuan Lu, Shiguang Shan, Yonggang Qi, and Xinlong Wang. Autoregressive video generation without vector quantization. arXiv preprint, 2024.

Patrick Esser, Robin Rombach, and Björn Ommer. Taming transformers for high-resolution image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12873-12883, 2021.

Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Müller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, Dustin Podell, Tim Dockhorn, Zion English, and Robin Rombach. Scaling rectified flow transformers for high-resolution image synthesis. In Proceedings of the International Conference on Machine Learning, 2024.

Jian Han, Jinlai Liu, Yi Jiang, Bin Yan, Yuqi Zhang, Zehuan Yuan, Bingyue Peng, and Xiaobing Liu. Infinity: Scaling bitwise autoregressive modeling for high-resolution image synthesis. In Proceedings of the Computer Vision and Pattern Recognition Conference, pp. 15733-15744, 2025.

Jacky Hate. Text-to-image-2M (revision e64fca4), 2024. URL https://huggingface.co/datasets/jackyhate/text-to-image-2M.

Soufiane Hayou, Nikhil Ghosh, and Bin Yu. LoRA+: Efficient low rank adaptation of large models. In Proceedings of the International Conference on Machine Learning, 2024.

Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Proceedings of the Advances in Neural Information Processing Systems, 30, 2017.

Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. LoRA: Low-rank adaptation of large language models. In Proceedings of the International Conference on Learning Representations, 2022.

Kadirnar. Fluxdev-controlnet-16k, 2024. URL https://huggingface.co/datasets/kadirnar/fluxdev_controlnet_16k.

Black Forest Labs. FLUX. https://github.com/black-forest-labs/flux, 2024.

Ming Li, Taojiannan Yang, Huafeng Kuang, Jie Wu, Zhaoning Wang, Xuefeng Xiao, and Chen Chen. ControlNet++: Improving conditional controls with efficient consistency feedback. In Proceedings of the European Conference on Computer Vision, pp. 129-147. Springer, 2024a.

Tianhong Li, Yonglong Tian, He Li, Mingyang Deng, and Kaiming He. Autoregressive image generation without vector quantization. arXiv preprint, 2024b.

Xiang Li, Kai Qiu, Hao Chen, Jason Kuen, Zhe Lin, Rita Singh, and Bhiksha Raj. ControlVAR: Exploring controllable visual autoregressive modeling. arXiv preprint, 2024c.

Zongming Li, Tianheng Cheng, Shoufa Chen, Peize Sun, Haocheng Shen, Longjin Ran, Xiaoxin Chen, Wenyu Liu, and Xinggang Wang. ControlAR: Controllable image generation with autoregressive models. In Proceedings of the International Conference on Learning Representations, 2025.

Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft COCO: Common objects in context. In Proceedings of the European Conference on Computer Vision, pp. 740-755. Springer, 2014.

Zhuoyan Luo, Fengyuan Shi, Yixiao Ge, Yujiu Yang, Limin Wang, and Ying Shan. Open-MAGVIT2: An open-source project toward democratizing auto-regressive visual generation. CoRR, abs/2409.04410, 2024.

Xiaoxiao Ma, Mohan Zhou, Tao Liang, Yalong Bai, Tiejun Zhao, Biye Li, Huaian Chen, and Yi Jin. Star: Scale-wise text-conditioned autoregressive image generation. arXiv preprint, 2024.
Chong Mou, Xintao Wang, Liangbin Xie, Yanze Wu, Jian Zhang, Zhongang Qi, and Ying Shan. T2I-Adapter: Learning adapters to dig out more controllable ability for text-to-image diffusion models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pp. 4296-4304, 2024.

Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. SDXL: Improving latent diffusion models for high-resolution image synthesis. In Proceedings of the International Conference on Learning Representations, 2024.

Can Qin, Shu Zhang, Ning Yu, Yihao Feng, Xinyi Yang, Yingbo Zhou, Huan Wang, Juan Carlos Niebles, Caiming Xiong, Silvio Savarese, et al. UniControl: A unified diffusion model for controllable visual generation in the wild. arXiv preprint, 2023.

Yunpeng Qu, Kun Yuan, Jinhua Hao, Kai Zhao, Qizhi Xie, Ming Sun, and Chao Zhou. Visual autoregressive modeling for image super-resolution. arXiv preprint, 2025.

Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In Proceedings of the International Conference on Machine Learning, pp. 8748-8763. PMLR, 2021.

Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10684-10695, 2022.

Zhuo Su, Wenzhe Liu, Zitong Yu, Dewen Hu, Qing Liao, Qi Tian, Matti Pietikäinen, and Li Liu. Pixel difference networks for efficient edge detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 5117-5127, 2021.

Peize Sun, Yi Jiang, Shoufa Chen, Shilong Zhang, Bingyue Peng, Ping Luo, and Zehuan Yuan. Autoregressive model beats diffusion: Llama for scalable image generation. arXiv preprint, 2024.

Zhenxiong Tan, Songhua Liu, Xingyi Yang, Qiaochu Xue, and Xinchao Wang. OminiControl: Minimal and universal control for diffusion transformer. arXiv preprint, 2024.

Haotian Tang, Yecheng Wu, Shang Yang, Enze Xie, Junsong Chen, Junyu Chen, Zhuoyang Zhang, Han Cai, Yao Lu, and Song Han. HART: Efficient visual generation with hybrid autoregressive transformer. In Proceedings of the International Conference on Learning Representations, 2025.

Keyu Tian, Yi Jiang, Zehuan Yuan, Bingyue Peng, and Liwei Wang. Visual autoregressive modeling: Scalable image generation via next-scale prediction. Proceedings of the Advances in Neural Information Processing Systems, 37:84839-84865, 2024.

Aaron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. Neural discrete representation learning. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Proceedings of the Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.

Anton Voronov, Denis Kuznedelev, Mikhail Khoroshikh, Valentin Khrulkov, and Dmitry Baranchuk. Switti: Designing scale-wise transformers for text-to-image synthesis. arXiv preprint, 2024.

Jianyi Wang, Kelvin CK Chan, and Chen Change Loy. Exploring CLIP for assessing the look and feel of images. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pp. 2555-2563, 2023.

Siyang Wang, Naishan Zheng, Jie Huang, and Feng Zhao. Navigating image restoration with VAR's distribution alignment prior. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7559-7569, June 2025.
Chenfei Wu, Jiahao Li, Jingren Zhou, Junyang Lin, Kaiyuan Gao, Kun Yan, Sheng ming Yin, Shuai Bai, Xiao Xu, Yilei Chen, Yuxiang Chen, Zecheng Tang, Zekai Zhang, Zhengyi Wang, An Yang, Bowen Yu, Chen Cheng, Dayiheng Liu, Deqing Li, Hang Zhang, Hao Meng, Hu Wei, Jingyuan Ni, Kai Chen, Kuan Cao, Liang Peng, Lin Qu, Minggang Wu, Peng Wang, Shuting Yu, Tingkun Wen, Wensen Feng, Xiaoxiao Xu, Yi Wang, Yichang Zhang, Yongqiang Zhu, Yujia Wu, Yuxuan Cai, and Zenan Liu. Qwen-Image technical report, 2025. URL https://arxiv.org/abs/2508.02324.

Ryan Xu, Dongyang Jin, Yancheng Bai, Rui Lan, Xu Duan, Lei Sun, and Xiangxiang Chu. SCALAR: Scale-wise controllable visual autoregressive learning. arXiv preprint, 2025.

Lihe Yang, Bingyi Kang, Zilong Huang, Xiaogang Xu, Jiashi Feng, and Hengshuang Zhao. Depth Anything: Unleashing the power of large-scale unlabeled data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10371-10381, 2024.

Ziyu Yao, Jialin Li, Yifeng Zhou, Yong Liu, Xi Jiang, Chengjie Wang, Feng Zheng, Yuexian Zou, and Lei Li. CAR: Controllable autoregressive modeling for visual generation. arXiv preprint, 2024.

Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 3836-3847, 2023.

Yuxuan Zhang, Yirui Yuan, Yiren Song, Haofan Wang, and Jiaming Liu. EasyControl: Adding efficient and flexible control for diffusion transformer. arXiv preprint, 2025.

Shihao Zhao, Dongdong Chen, Yen-Chun Chen, Jianmin Bao, Shaozhe Hao, Lu Yuan, and Kwan-Yee K Wong. Uni-ControlNet: All-in-one control to text-to-image diffusion models. Proceedings of the Advances in Neural Information Processing Systems, 36:11127-11150, 2023.

Xianwei Zhuang, Yuxin Xie, Yufan Deng, Liming Liang, Jinghan Ru, Yuguo Yin, and Yuexian Zou. VARGPT: Unified understanding and generation in a visual autoregressive multimodal large language model. arXiv preprint, 2025.

A APPENDIX

A.1 MORE IMPLEMENTATION DETAILS

We train all models using 4 NVIDIA A40 GPUs, with a per-GPU batch size of 2. Training is performed using the LoRA-Plus optimizer with a learning rate of 5e-5 and a LoRA rank of 16. Only the LoRA modules and zero-linear layers in the conditional branch are updated; the image/text backbone remains frozen. The default training hyperparameters are summarized in Table 6.

Table 6: Training config.

Hyperparameter                 Default Setting
Gradient clipping              5
Adam betas                     (0.9, 0.97)
LoRA-Plus LR ratio             1.25
Batch size                     8
Learning rate                  5e-5
Learning rate decay            None
Mixed precision                bfloat16
Reweight loss by scale         True
Zero-linear gating             Enabled
LoRA rank                      16
Condition injection location   First 16 blocks

Table 7 summarizes the number of training iterations for each condition type. All models converge efficiently, typically within 30k iterations, demonstrating the practicality of our approach.

Table 7: Training iterations for each condition type.

Condition Type   Training Iterations
Canny Edge       36,000
Depth Map        24,000
Colorization     24,000
Deblur           24,000
Sketch           24,000
Palette          24,000
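For reference, the Table 6 defaults translate into a compact training configuration. The field names below are hypothetical illustrations mirroring Table 6, not identifiers from the authors' released code.

```python
# Hypothetical config mirroring Table 6; field names are illustrative.
train_config = {
    "grad_clip": 5.0,
    "adam_betas": (0.9, 0.97),
    "loraplus_lr_ratio": 1.25,        # LoRA-Plus: higher LR on the LoRA "up" matrices
    "batch_size": 8,                  # 4 GPUs x 2 per GPU
    "learning_rate": 5e-5,
    "lr_decay": None,
    "mixed_precision": "bfloat16",
    "reweight_loss_by_scale": True,   # weight the cross-entropy per scale
    "zero_linear_gating": True,
    "lora_rank": 16,
    "condition_blocks": range(0, 16), # inject condition in the first 16 blocks
}
```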
A.2 VISUALIZATION COMPARISON FOR ABLATION STUDY

In this section, we provide a qualitative comparison of the different injection mechanisms evaluated in our ablation study, as illustrated in Figure 5. By zooming in on the generated images, it is evident that our proposed injection method produces more realistic subjects with fewer artifacts compared to alternative approaches. This demonstrates the effectiveness of our injection strategy in integrating conditional information while maintaining image quality.

Figure 5: Visualization comparison for the ablation study. We show qualitative results for the different injection mechanisms (Ours, MM-Attention, Spatial Addition) on the Canny condition.

A.3 MORE VISUALIZATION RESULTS

Diverse generation with the same sketch condition is shown in Fig. 6, along with the prompts we used. More visualization results under different control conditions are shown in Figs. 7, 8, and 9. These examples further demonstrate that our approach is capable of faithfully following diverse types of control signals while simultaneously producing outputs with rich variations in appearance and structure. The results indicate that our method not only achieves precise controllability but also preserves a high degree of generative diversity, which is crucial for adapting to different application scenarios.

Figure 6: Conditional generation results on diverse condition types: the same bird sketch rendered under prompts ranging from Japanese Ukiyo-e woodblock print and stained glass to black-and-white film noir and 8-bit pixel art styles.

A.4 LIMITATION AND FUTURE WORK

Limitation. While ScaleWeaver demonstrates strong performance across a range of conditions, there are some limitations to consider. First, the generation capability is ultimately determined by the capacity of the underlying base model; with our efficient training strategy, we believe ScaleWeaver can perform well and further benefit from scaling to larger VAR models. Second, our method does not currently support multi-condition control simultaneously.

Future work. Future research directions include extending ScaleWeaver to support multi-condition control, as the current framework is limited to handling a single condition at a time. The Reference Attention mechanism, with its modular design, naturally supports multi-condition integration, making this a promising avenue for exploration. Additionally, investigating methods to dynamically balance the influence of multiple conditions could further enhance the flexibility and adaptability of the model.
Another potential direction is to scale ScaleWeaver to larger VAR backbones, leveraging the scalability of autoregressive models to improve generation quality and controllability.

A.5 THE USE OF LARGE LANGUAGE MODELS (LLMS)

We utilize a large language model to assist in correcting grammar errors and polishing the language of this paper. All ideas, methods, experiments, and conclusions are the sole work of the authors.

Figure 7: Diverse generation by ScaleWeaver, with a Canny image or sketch image as the control image.

Figure 8: Diverse generation by ScaleWeaver, with a blur image or depth map as the control image.

Figure 9: Diverse generation by ScaleWeaver, with a grey image or palette map as the control image.
arXiv:2510.14878v1 [cs.LG] 16 Oct 2025
PREDICTING KERNEL REGRESSION LEARNING CURVES FROM ONLY RAW DATA STATISTICS

Dhruva Karkada*,1, Joseph Turnbull*,1, Yuxi Liu1, James B. Simon1
{dkarkada, joeyturnbull, yuxi liu, jsi}@berkeley.edu

*Joint primary authorship. Work completed during summer internship at Imbue. 1UC Berkeley; Imbue. Code to reproduce all experiments is available at https://github.com/JoeyTurn/hermite-eigenstructure-ansatz.

ABSTRACT

We study kernel regression with common rotation-invariant kernels on real datasets including CIFAR-5m, SVHN, and ImageNet. We give a theoretical framework that predicts learning curves (test risk vs. sample size) from only two measurements: the empirical data covariance matrix and an empirical polynomial decomposition of the target function f*. The key new idea is an analytical approximation of a kernel's eigenvalues and eigenfunctions with respect to an anisotropic data distribution. The eigenfunctions resemble Hermite polynomials of the data, so we call this approximation the Hermite eigenstructure ansatz (HEA). We prove the HEA for Gaussian data, but we find that real image data is often "Gaussian enough" for the HEA to hold well in practice, enabling us to predict learning curves by applying prior results relating kernel eigenstructure to test risk. Extending beyond kernel regression, we empirically find that MLPs in the feature-learning regime learn Hermite polynomials in the order predicted by the HEA. Our HEA framework is a proof of concept that an end-to-end theory of learning which maps dataset structure all the way to model performance is possible for nontrivial learning algorithms on real datasets.

1 INTRODUCTION

The quest to understand machine learning is largely motivated by a desire to predict and explain learning behavior in realistic settings. This means that, sooner or later, scientists of machine learning must develop theory that works for real datasets, somehow incorporating task structure into predictions of model performance, optimal hyperparameters, and other objects of interest. This necessity has been the elephant in the room of much of deep learning theory for some time: despite much progress in the study of neural network training and generalization, it has proven difficult to move beyond simplistic models of data and make analytical predictions applicable to real data distributions.

The central difficulty is of course the complexity of real data. There can be no full analytical description of any real data distribution, so it is difficult to see how we might develop mathematical theory that describes how such a dataset is learned. How might we hope to proceed? One way forward may be to identify a comparatively succinct "reduced description" of a data distribution that characterizes its structure, at least insofar as a particular class of learner is concerned. We would like this reduced description to be sufficient to predict quantities of interest yet minimal enough to be a significant reduction in complexity. Ideally, we would like the theory that makes predictions from this reduced description to be mathematically simple, and we would like the description itself to give some insight into how the class of learner in question sees the data.

In this paper, we present such a reduced description of high-dimensional datasets that is suitable for describing their learning by kernel ridge regression (KRR) with rotation-invariant kernels. We find that just the data covariance matrix Σ := E[xx⊤], together with a Hermite decomposition of
the target function,² is sufficient to characterize learning by rotation-invariant kernels. We obtain this reduced description, which we term "Hermite eigenstructure," from a study of Gaussian data, but we nonetheless find it predictive for complex image datasets including CIFAR-5m, SVHN, and ImageNet. From just the covariance matrix, we can predict kernel eigenstructure and learning curves for synthetic functions. Using the true labels to estimate the target function's Hermite decomposition, we are additionally able to predict KRR learning curves on real tasks. Unlike previous approaches for predicting learning curves, this method does not require numerically constructing or diagonalizing a kernel matrix to find the kernel eigensystem.

²See Section 3.2 and Section A for a review of Hermite polynomials.

Figure 1: We provide an end-to-end theory of learning for kernel ridge regression (KRR) that maps minimal statistics of the data distribution to test-time performance. (Top left.) KRR implicitly consists of two steps: (1) the kernel maps the data to high-dimensional nonlinear features x ↦ ψ(x), then (2) it fits a linear estimator to these features. Let the covariance in data space be E[xx⊤] = UΓU⊤ and let the covariance in feature space be E[ψ(x)ψ(x)⊤] = ΞΛΞ⊤. (Lower left.) We introduce an ansatz that predicts the feature covariance statistics (Ξ, Λ) from only the data covariance (U, Γ) and the functional form of ψ(·). This is sufficient to predict average-case test error using a known eigenframework. (Top right.) We are able to predict learning curves for KRR on image tasks (e.g., a Gaussian kernel on CIFAR-5m binary classification tasks) without requiring omniscient knowledge of the feature statistics (i.e., without ever constructing or diagonalizing a kernel matrix). (Bottom right.) We are able to accurately predict the KRR sample complexity, including constant prefactors, for learning linear, quadratic, cubic, and quartic polynomials of the ImageNet dataset (Laplace kernel on ImageNet32). See Figure 3 for additional plots and Section D for experimental details.

Our approach relies on recent results in the theory of KRR which assert that knowledge of the kernel's eigenstructure with respect to a data measure is sufficient to predict learning behavior (Sollich, 2001; Bordelon et al., 2020; Jacot et al., 2020; Simon et al., 2021). These works provide a set of equations that map this eigenstructure to predictions of test-time error. Our central observation is that, despite the complexity of high-dimensional datasets and the great variety of rotation-invariant kernels, this kernel eigenstructure is often very close to a simple analytical form expressible in terms of Hermite polynomials in the original data space. We term this claim the "Hermite eigenstructure ansatz," and we identify a (broad) set of conditions under which it empirically holds.
Concretely, our contributions are as follows:

• We propose the Hermite eigenstructure ansatz (Section 4), a closed-form expression for the eigensystem of rotation-invariant kernels on real datasets. We find empirically that it holds to an excellent approximation for real image datasets (Figure 2).
• We prove that the HEA holds in the case of Gaussian data for two limiting cases of the kernel function (Theorems 1 and 2).
• We use the HEA to predict KRR learning curves on CIFAR-5m, SVHN, and ImageNet from only data covariance statistics and a Hermite decomposition of the target function (Figures 1 and 3).
• We empirically find that MLPs in the feature-learning regime learn Hermite polynomials of CIFAR-5m in the same order as the HEA predicts for KRR (Figure 4).

2 RESEARCH CONTEXT AND RELATED WORKS

Kernel models as proxies for neural networks. Our motivation for studying KRR comes from the "neural tangent kernel" (NTK) line of work, which finds that suitably parametrized infinite-width networks are equivalent to KRR, and that for MLPs, the kernel function is rotation-invariant (Neal, 1996; Lee et al., 2018; Jacot et al., 2018). Kernel methods have proven useful as models of network dynamics and optimization (Chizat et al., 2019; Du et al., 2019; Bordelon & Pehlevan, 2021).

Learning curves for KRR. Motivated by the NTK, many recent works have converged on a set of equations which predict KRR's test-time error from the kernel and task eigenstructure (Sollich, 2001; Bordelon et al., 2020; Jacot et al., 2020; Simon et al., 2021). This "KRR eigenframework" depends on the kernel's eigenvalues and eigenfunctions with respect to the data distribution. Our main result is an approximate analytical expression for these eigenvalues and eigenfunctions, permitting the inductive bias of KRR to be studied directly in the data space. Section C reviews this eigenframework.

Exactly-solved cases of kernel eigenstructure. Exact kernel diagonalizations with machine-learning-relevant kernels are known in many highly-symmetric settings, including stationary kernels on the torus T^d and rotation-invariant kernels on the sphere S^d (Mei & Montanari, 2019). Moving to anisotropic domains, Ghorbani et al. (2020) gave the eigenstructure of rotation-invariant kernels when the measure is a "product of spheres" of different radii. The case of a Gaussian kernel on a Gaussian measure was solved exactly by Zhu et al. (1997). Our Hermite eigenstructure ansatz is consistent with all these results and unifies them in a limiting case.

Modeling data with a Gaussian measure. A developing body of literature argues that MLPs' learning of complex data distributions is similar to the behavior one would see if the data were Gaussian with the same covariance (Goldt et al., 2020; Refinetti et al., 2023). We broadly adopt this lens in the study of KRR and find that, indeed, the data is well-modeled as Gaussian.

Single- and multi-index models. Much recent literature has studied MLPs' learning of single- and multi-index functions which depend only on a rank-one or rank-k projection of the input x (Dudeja & Hsu, 2018; Bietti et al., 2022; Dandi et al., 2023; Lee et al., 2024; Mousavi-Hosseini et al., 2024). This work partially motivated our study, and the multidimensional Hermite basis we use in this work is a basis of multi-index functions. Prior work in this vein has found that higher-order Hermite polynomials require more samples or gradient steps to learn.
Two ways in which we depart from this body of work are that (a) we seek to predict the value of the test error (including constant prefactors), not just asymptotics or scaling laws, and (b) we study anisotropic data, which allows application of our results to real datasets.

Analytical models of data. Several recent works have proposed theoretical models for the hierarchical structure in image and text data with the aim of understanding neural network performance on such datasets (Cagnetta et al., 2024; Sclocchi et al., 2025; Cagnetta & Wyart, 2024). Our work in this paper is undertaken in a similar spirit.

3 PRELIMINARIES

We will work in a standard supervised setting: our dataset consists of n samples X = {x_i}_{i=1}^n drawn i.i.d. from a measure µ over ℝ^d, and we wish to learn a target function f* from noisy training labels y = {y_i}_{i=1}^n where y_i = f*(x_i) + N(0, ϵ²) with noise level ϵ ≥ 0. We will assume with minimal loss of generality that µ has mean zero: E_{x∼µ}[x] = 0. Once a learning rule returns a predicted function f̂, we evaluate its test mean-squared error

MSE_te = E_{x∼µ}[(f*(x) − f̂(x))²] + ϵ².

We write ⟨g, h⟩_µ := E_{x∼µ}[g(x)h(x)] and ||g||²_µ := ⟨g, g⟩_µ for the L² inner product and norm with respect to µ. We write (a_i)_{i∈I} to denote an ordered sequence with index set I, and we write only (a_i) when the index set is clear from context.

3.1 KERNEL REGRESSION AND KERNEL EIGENSYSTEMS

KRR is a learning rule specified by a positive-semidefinite "kernel function" K : ℝ^d × ℝ^d → ℝ and a ridge parameter δ ≥ 0. Given a dataset (X, y), KRR returns the predicted function

f̂(x) = k_{xX} (K_{XX} + δI_n)^{−1} y,    (1)

where the vector [k_{xX}]_i = K(x, x_i) and the matrix [K_{XX}]_{ij} = K(x_i, x_j) contain evaluations of the kernel function. In this paper, we will restrict our attention to two special classes of kernel:

Definition 1 (Rotation-invariant kernel). A kernel function is rotation-invariant if it takes the form K(x, x′) = K(||x||, ||x′||, x⊤x′).

Such a kernel K is called "rotation-invariant" because K(Ux, Ux′) = K(x, x′) for any orthonormal matrix U. Many widely-used kernels are rotation-invariant, including the Gaussian kernel K(x, x′) = e^{−||x−x′||²/(2σ²)}, the Laplace kernel K(x, x′) = e^{−||x−x′||/σ}, and the Neural Network Gaussian Process (NNGP) kernels and NTKs of infinite-width MLPs. We will be particularly interested in a subset of rotation-invariant kernels which discard the explicit radial dependence:

Definition 2 (Dot-product kernel). A kernel function is a dot-product kernel if it takes the form K(x, x′) = K(x⊤x′).

For a dot-product kernel to be positive-semidefinite on all domains, it must admit a Taylor series K(x⊤x′) = Σ_{ℓ≥0} (c_ℓ/ℓ!) (x⊤x′)^ℓ with nonnegative level coefficients c_ℓ ≥ 0 (Schoenberg, 1942). We will find it useful to describe dot-product kernels in terms of their level coefficients (c_ℓ)_{ℓ≥0}. We would like to study arbitrary rotation-invariant kernels, but it is easier to study dot-product kernels, which admit the above series expansion. Fortunately, a rotation-invariant kernel is a dot-product kernel when the domain is restricted to a sphere, and if we know that our data has typical norm r, we may approximate the rotation-invariant kernel as the dot-product kernel which matches it on rS^{d−1}:

Definition 3 (On-sphere level coefficients). The on-sphere level coefficients of a rotation-invariant kernel K at a radius r > 0 are the nonnegative sequence coeffs(K, r) := (c_ℓ)_{ℓ≥0} such that

K(x, x′) = Σ_{ℓ≥0} (c_ℓ/ℓ!) (x⊤x′)^ℓ   for all x, x′ such that ||x|| = ||x′|| = r.    (2)
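Equation (1) is a one-liner in practice. The sketch below is a minimal NumPy implementation of the KRR predictor with a Gaussian kernel; it is illustrative rather than the authors' experimental code.

```python
import numpy as np

def krr_predict(K_fn, X_train, y_train, X_test, ridge=1e-6):
    """Kernel ridge regression, Eq. (1): f̂(x) = k_xX (K_XX + δ I_n)^{-1} y."""
    K_XX = K_fn(X_train, X_train)                  # (n, n) Gram matrix
    K_tX = K_fn(X_test, X_train)                   # (m, n) test-train kernel
    alpha = np.linalg.solve(K_XX + ridge * np.eye(len(X_train)), y_train)
    return K_tX @ alpha

def gaussian_kernel(sigma):
    """K(x, x') = exp(-||x - x'||^2 / (2 sigma^2)) on row-stacked inputs."""
    def K(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2 * sigma ** 2))
    return K
```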
We give the on-sphere level coefficients for various kernels in Section B. An eigenfunction of a kernel K with respect to a measure µ is a function ϕ such that E_{x′∼µ}[K(x, x′)ϕ(x′)] = λϕ(x) for some λ ≥ 0. By Mercer's Theorem (Mohri et al., 2018, Theorem 6.2), any compact kernel admits a complete basis of orthonormal eigenfunctions with ⟨ϕ_i, ϕ_j⟩_µ = δ_{ij} and may be spectrally decomposed³ as K(x, x′) = Σ_i λ_i ϕ_i(x) ϕ_i(x′). We will write eigensystem(µ, K) = (λ_i, ϕ_i)_{i=1}^∞ to denote the sequence of all eigenpairs, indexed in decreasing eigenvalue order (λ_i ≥ λ_{i+1}) unless otherwise specified. It will prove useful to decompose the target function in the kernel eigenbasis as f*(x) = Σ_i v_i ϕ_i(x), where (v_i) are eigencoefficients.

³Here is an intuitive description of a kernel eigensystem, shown visually in Figure 1. Any kernel function K may be viewed as an inner product in a high-dimensional feature space: K(x, x′) = ⟨ψ(x), ψ(x′)⟩. Consider mapping the dataset into this high-dimensional space and then computing the principal components of Σ_ψ := E_x[ψ(x)ψ⊤(x)] = ΞΛΞ⊤. Each eigenvalue λ_i is a kernel eigenvalue. The corresponding eigenfunction is a projection onto the i-th principal direction: ϕ_i(x) = λ_i^{−1/2} ⟨ψ(x), ξ_i⟩.

3.2 HERMITE POLYNOMIALS AS A NATURAL BASIS FOR GAUSSIAN DATA

Throughout, we write (h_k)_{k≥0} for the normalized probabilist's Hermite polynomials. These are the orthogonal polynomials for the standard Gaussian measure, satisfying E_{x∼N(0,1)}[h_k(x)h_m(x)] = δ_{km}. The first few such polynomials are h_0(x) = 1, h_1(x) = x, h_2(x) = (x² − 1)/√2. See Section A for a review of Hermite polynomials. We can use these 1D Hermite polynomials to construct an orthonormal basis for a multivariate Gaussian measure x ∼ N(0, Σ) with positive-definite covariance Σ ≻ 0.
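As a quick numerical illustration (ours, not from the paper), the normalized probabilist's Hermite polynomials can be built from NumPy's `HermiteE` basis, since He_k has squared norm k! under N(0, 1):

```python
import numpy as np
from numpy.polynomial.hermite_e import HermiteE
from math import factorial

def normalized_hermite(k):
    """Normalized probabilist's Hermite polynomial h_k = He_k / sqrt(k!),
    so that E_{x~N(0,1)}[h_k(x)^2] = 1."""
    coefs = np.zeros(k + 1)
    coefs[k] = 1.0
    return HermiteE(coefs / np.sqrt(factorial(k)))

# Monte-Carlo check of orthonormality under the standard Gaussian measure.
x = np.random.randn(1_000_000)
h2, h3 = normalized_hermite(2), normalized_hermite(3)
print(np.mean(h2(x) * h2(x)))   # ≈ 1
print(np.mean(h2(x) * h3(x)))   # ≈ 0
```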
We begin by explicitly defining our Hermite eigensystem: Definition 4 (Hermite eigensystem). Given a data covariance matrix Σ = UΓU ⊤and a sequence of level coefficients (cℓ), we define the (Σ, (cℓ))−Hermite eigensystem to be the set of (scalar, function) pairs HE(Σ, (cℓ)) = {(λα, ϕα) for all α ∈Nd 0} (4) where for each multi-index α the proposed eigenvalue and eigenfunction are constructed as λα = c|α| · d Y i=1 γαi i and ϕα = h(Σ) α , (5) where |α| = P i αi and h(Σ) α is the multivariate Hermite polynomial given in Equation (3). The (Σ, (cℓ))−Hermite eigensystem is a set of Hermite polynomials ϕα and associated positive scalars λα, one for each multi-index α ∈Nd 0. The eigenvalues (λα) are monomials in the data covariance eigenvalues (γi), rescaled by the appropriate level coefficient c|α|. We now present the Hermite eigenstructure ansatz, which attests that this set of (scalar, function) pairs is in fact a close match to the true kernel eigensystem. Let K be a rotation-invariant kernel and let µ be a measure over Rd with zero mean. Then let: • Σ = Ex∼µ  xx⊤ be the data covariance matrix, • r = Tr[Σ] 1 2 be the root-mean-squared data norm, and • (cℓ) = coeffs(K, r) be the level coefficients of K restricted to the sphere rSd−1. The Hermite eigenstructure ansatz asserts that eigensystem(µ, K) ≈HE(Σ, (cℓ)). (HEA) That is, the true kernel eigensystem is approximately equal to the (Σ, (cℓ))-Hermite eigensystem given in Definition 4. 5 theory vs empirics: kernel spectrum 10−8 10−6 10−4 10−2 100 Empirical eigenvalue ¸ (emp) i 10−8 10−6 10−4 10−2 100 Predicted eigenvalue ¸ (th) i Gaussian kernel @ Gaussian data constant mode linear modes quadratic modes cubic modes quartic modes 10−8 10−6 10−4 10−2 100 Empirical eigenvalue ¸ (emp) i 10−8 10−6 10−4 10−2 100 Gaussian kernel @ CIFAR-10 10−4 10−2 100 Empirical eigenvalue ¸ (emp) i 10−5 10−4 10−3 10−2 10−1 100 Laplace kernel @ SVHN 10−4 10−2 100 Empirical eigenvalue ¸ (emp) i 10−5 10−4 10−3 10−2 10−1 100 ReLU NTK @ Imagenet32 theory vs empirics: kernel eigenbasis (grouped in spectral bins) Empirical eigenspace Predicted eigenspace Empirical eigenspace Predicted eigenspace Empirical eigenspace Predicted eigenspace Empirical eigenspace Predicted eigenspace 0.0 0.5 1.0 eigenspace overlap Figure 2: The Hermite eigenstructure ansatz (HEA) accurately predicts the eigenvalues (top) and eigenfunctions (bottom) of various kernel/dataset combinations. For four kernel/dataset settings (columns), we compute the empirical kernel eigensystem {(λ(emp) i , ϕ(emp) i )} and compare to the theoretical eigenpairs {(λ(th) i , ϕ(th) i )} obtained from Definition 4 and indexed in order of decreasing λ(th) i . In the top plot in each column, the i-th point from the top right has coordinates (λ(emp) i , λ(th) i ) and its color indicates the polynomial degree of ϕ(th) i . In the bottom plot in each column, we bin both the predicted and empirical eigenfunctions into logarithmic spectral bins and visualize the pairwise subspace overlap (Equation (44)), with axes matching the top plot. Grey pixels indicate bins with no eigenvalues. In all plots, concentration along the diagonal indicates theory-experiment match. See Section D.2 for further explanation and experimental details. 
The HEA is a strong claim: it asserts that the kernel eigensystem, to a good approximation, has a simple analytical form which depends only on the second-order statistics of $\mu$ and no higher moments, and that the kernel eigenfunctions are multivariate Hermite polynomials independent of the kernel chosen (so long as it is rotation-invariant). Rather than a provable fact, the HEA should be treated as a falsifiable claim that may hold well or poorly in any given setting. In practice, with rotation-invariant kernels and high-dimensional image datasets, we will find that it often holds quite well. Figure 2 examines four such settings, finding that in each, both the kernel spectrum and eigenfunctions are well-predicted by the HEA.

4.1 THE HEA FOR GAUSSIAN DATA: SOME INTUITION AND TWO THEOREMS

When might we expect the HEA to hold? To gain the central intuition, it is sufficient to consider a simple case of univariate Gaussian data $x \sim \mu = \mathcal{N}(0,\gamma)$ and the Gaussian kernel $K_\sigma(x,x') = e^{-\frac{1}{2\sigma^2}(x-x')^2}$. This kernel admits the feature map $K_\sigma(x,x') = \langle \psi_\sigma(x), \psi_\sigma(x') \rangle$ where
\[
\psi_\sigma(x) = e^{-\frac{x^2}{2\sigma^2}} \cdot \left(1, \; \frac{x}{\sigma}, \; \frac{x^2}{\sqrt{2}\,\sigma^2}, \; \cdots, \; \frac{x^\ell}{\sqrt{\ell!}\,\sigma^\ell}, \; \cdots \right). \tag{6}
\]
We would like to find the directions of principal covariance of $\psi_\sigma(x)$. Let us suppose that $\sigma^2 \gg \gamma$: the kernel width dominates the width of the data distribution. Examining Equation (6), we can make two observations. First, the exponential prefactor will be close to one, and we may approximate $\psi_\sigma$ componentwise as $[\psi_\sigma(x)]_\ell \approx \frac{x^\ell}{\sqrt{\ell!}\,\sigma^\ell}$. This amounts to approximating our kernel as $K_\sigma(x,x') = \sum_\ell \frac{(xx')^\ell}{\sigma^{2\ell}\,\ell!}$ — that is, as a dot-product kernel with coefficients $c_\ell = \sigma^{-2\ell}$. Second, each component of $\psi_\sigma$ will dominate all subsequent components:
\[
\mathbb{E}_x\!\left[[\psi_\sigma(x)]_\ell^2\right] \propto \sigma^{-2\ell}\gamma^\ell \;\gg\; \mathbb{E}_x\!\left[[\psi_\sigma(x)]_{\ell+1}^2\right] \propto \sigma^{-2(\ell+1)}\gamma^{\ell+1}. \tag{7}
\]
Since the first element of $\psi_\sigma$ is by far the largest (and since we do not center $\psi_\sigma$ before computing eigendirections), the first direction of principal variation will correspond to $\phi_0(x) \approx 1$, with variance $\lambda_0 \approx 1$ (footnote 4). The next direction will correspond to $\phi_1(x) \approx \gamma^{-1/2}x$ with variance $\lambda_1 \approx \sigma^{-2}\gamma$. The next eigenfunction must incorporate the $x^2$ direction, but we have a problem: $x^2$ is not orthogonal to $\phi_0(x) = 1$. We must therefore orthogonalize it with respect to $\phi_0$ against our measure $\mu$ in the usual Gram-Schmidt fashion. This yields $\phi_2(x) \approx \frac{1}{\sqrt{2}}(\gamma^{-1}x^2 - 1)$, which using standard formulas for Gaussian integrals gives an eigenvalue
\[
\lambda_2 = \int K(x,x')\,\phi_2(x)\,\phi_2(x')\,d\mu(x)\,d\mu(x') \approx \sigma^{-4}\gamma^2. \tag{8}
\]
Continuing this process to higher orders, we find that the kernel eigensystem matches that predicted by the HEA: $\phi_\ell$ is the $\ell$-th orthogonal polynomial with respect to our Gaussian measure — that is, the Hermite polynomial $h_\ell(\gamma^{-1/2}x)$ — and the $\ell$-th eigenvalue is $\lambda_\ell \approx \sigma^{-2\ell}\gamma^\ell = c_\ell\gamma^\ell$. The corrections hidden by every "≈" in this derivation are of relative size $O(\sigma^{-2}\gamma)$ and thus vanish as $\sigma$ grows (footnote 5).

Footnote 4: We index eigenmodes from 0 instead of 1 here to match the polynomial order $\ell$.

Footnote 5: Were we to repeat this calculation with a different measure $\mu$ for $x$, we would obtain the orthogonal polynomials with respect to $\mu$ as eigenfunctions. For example, if $\mu = U[-1,1]$, we get the Legendre polynomials.

This same analysis holds for any dot-product kernel $K(x,x') = \sum_\ell \frac{c_\ell}{\ell!}(xx')^\ell$ with level coefficients $(c_\ell)$ such that $\frac{c_{\ell+1}\gamma}{c_\ell} \ll 1$. It can, with some difficulty, be further extended to apply to multivariate Gaussian data $x \sim \mathcal{N}(0,\Sigma)$. But what if the kernel is not a dot-product kernel, such as the Laplace kernel $K(x,x') = e^{-\frac{1}{\sigma}\|x-x'\|}$? Unlike the Gaussian kernel, the Laplace kernel is not well-approximated by a dot-product kernel even at large width because of its nonanalyticity at zero. However, like all rotation-invariant kernels, the Laplace kernel is a dot-product kernel when restricted to a sphere (and will be close to a dot-product kernel when restricted to a spherical shell whose thickness is not too big).
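The one-dimensional derivation above is easy to check numerically. The sketch below is our own; it assumes the standard Nyström-style approximation that the eigenvalues of $K/n$ on $n$ samples estimate the kernel operator spectrum, and compares them against the predicted $c_\ell\gamma^\ell = \sigma^{-2\ell}\gamma^\ell$.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, sigma, n = 1.0, 10.0, 4000            # sigma^2 >> gamma: wide-kernel regime
x = rng.normal(0.0, np.sqrt(gamma), n)
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / sigma**2)
emp = np.sort(np.linalg.eigvalsh(K / n))[::-1]  # empirical operator spectrum

for l in range(5):
    pred = sigma ** (-2 * l) * gamma**l         # HEA prediction, up to O(gamma/sigma^2)
    print(f"l={l}: empirical {emp[l]:.3e}   predicted {pred:.3e}")
```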
For such kernels, we will require the data to be high-dimensional: for data with a high effective dimension $d_{\mathrm{eff}} := \frac{\mathrm{Tr}[\Sigma]^2}{\mathrm{Tr}[\Sigma^2]} \gg 1$, samples will tend to concentrate in norm (footnote 6). We may thus safely approximate any rotation-invariant $K$ as a dot-product kernel.

Footnote 6: For Gaussian data $x \sim \mathcal{N}(0,\Sigma)$, the relative variance of the norm is $\mathrm{Var}\big[\|x\|^2\big]/\mathbb{E}\big[\|x\|^2\big]^2 = 2/d_{\mathrm{eff}}$, which falls to zero as $d_{\mathrm{eff}}$ grows.

Having given an intuitive derivation of the HEA for Gaussian data, we now move to formal statements. Our first theorem states that the HEA holds for the Gaussian kernel at large width.

Theorem 1 (The HEA holds for a wide Gaussian kernel on a Gaussian measure). Let $\mu = \mathcal{N}(0,\Sigma)$ be a multivariate Gaussian measure and let $K_\sigma(x,x') = e^{-\frac{1}{2\sigma^2}\|x-x'\|^2}$ be the Gaussian kernel with width $\sigma$. Let $r = \mathrm{Tr}[\Sigma]^{1/2}$ and let $(c_\ell) = \mathrm{coeffs}(K_\sigma, r)$, which yields $c_\ell = \sigma^{-2\ell}e^{-\frac{r^2}{\sigma^2}}$. Then: as $\sigma \to \infty$, $\mathrm{eigensystem}(\mu, K_\sigma) \to \mathrm{HE}(\Sigma, (c_\ell))$.

Proof sketch (full proof in Section H). Mehler's formula can give the Gaussian kernel's eigensystem exactly (Mehler, 1866). Taking $\sigma \to \infty$ in the resulting expressions yields agreement with the HEA.

Our second theorem applies to dot-product kernels with fast-decaying level coefficients.

Theorem 2 (The HEA holds for a fast-decaying dot-product kernel on a Gaussian measure). Let $\mu = \mathcal{N}(0,\Sigma)$ be a multivariate Gaussian measure with covariance $\Sigma \succ 0$ and let
\[
K_{(c_\ell)}(x,x') = \sum_{\ell=0}^{\infty} \frac{c_\ell}{\ell!}(x^\top x')^\ell
\]
be a dot-product kernel with coefficients $c_\ell \ge 0$ such that $c_{\ell+1} \le \epsilon \cdot c_\ell$ for some $\epsilon > 0$. Then: as $\epsilon \to 0$, $\mathrm{eigensystem}(\mu, K_{(c_\ell)}) \to \mathrm{HE}(\Sigma, (c_\ell))$ linearly in $\epsilon$.

Proof sketch: Our proof formalizes the intuitive "Gram-Schmidt" derivation of the HEA given above. We use perturbation theory to show that the kernel eigenstructure splits into exponentially-separated segments, with the $\ell$-th segment's eigenstructure determined almost fully by the $\ell$-th order term of $K_{(c_\ell)}$. Due to the complexity of the proof, we break it up into stages: we rigorously state and prove the one-dimensional case in Section I, then state and prove the general case in Section J.

4.2 CONDITIONS FOR SUCCESS: FAST DECAY OF $c_\ell$, HIGH DATA DIMENSION, AND A "GAUSSIAN ENOUGH" DATA DISTRIBUTION

The intuitive theory above suggests three conditions required for the HEA to hold reasonably well. Here we list these conditions and give empirical confirmation that breaking any one usually causes the HEA to fail.

1) Fast decay of level coefficients. As discussed in Section 4.1, we need $c_\ell \gg \gamma_1 c_{\ell+1}$ for the Gram-Schmidt process underlying the HEA to work. In Figure 13, we show that as we decrease the Gaussian kernel's width (and thus increase $\frac{c_{\ell+1}}{c_\ell}$) on a fixed dataset, the HEA eventually breaks.

2) High data dimension (for some kernels). As previously discussed, concentration of norm (via high $d_{\mathrm{eff}}$) is required if we are to approximate an arbitrary rotation-invariant kernel as a dot-product kernel. In Figure 14, we show that for the Laplace kernel and ReLU NTK, agreement with the HEA worsens as $d_{\mathrm{eff}}$ decreases. However, since the Gaussian kernel is smooth at $x = x'$, it does not require concentration of norm, and low $d_{\mathrm{eff}}$ is fine (Figure 15).

3) "Gaussian enough" data distribution.
Common image datasets are complex enough to roughly satisfy simple tests of Gaussianity, such as coordinatewise Gaussian marginals. As we make the dataset simpler (CIFAR → SVHN → MNIST → tabular), these marginals become less Gaussian, and HEA agreement degrades (Figures 16 and 17). It is noteworthy that our theory empirically works better on more complex datasets, thanks to the blessings of dimensionality.

5 THE HEA ALLOWS PREDICTION OF KRR LEARNING CURVES

Under the conditions outlined in the previous section, we expect the HEA to accurately predict kernel eigenstructure. We aim to plug these results directly into the aforementioned KRR eigenframework (of e.g. Simon et al. (2021)) to predict the final test risk of KRR. However, a key challenge remains: using the eigenframework requires knowing the coefficients of the target function in the kernel eigenbasis, $f_\star(x) = \sum_i v_i\phi_i(x)$. We must measure these coefficients $v_i$ from finitely many samples of the target function.

Were the data perfectly Gaussian, the multi-Hermite polynomials would be an orthonormal basis with respect to the measure. We could then estimate the coefficients by simply taking inner products between the target vector and generated Hermite polynomials, and expect the estimation error to decay as $O(N^{-1/2})$ with the total number of samples $N$. However, small amounts of non-Gaussianity in the data introduce cascading non-orthogonality in the Hermite basis. As a result, the naïve method overestimates the power in the overlapping modes. To rectify this effect, we modify our measurement technique by re-orthogonalizing the sampled Hermite polynomials via the Gram-Schmidt process (footnote 7):
\[
\text{iterate over increasing } i: \quad h_i^{(\mathrm{GS})} = \mathrm{unitnorm}\Big(h_i - \sum_{j<i}\big\langle h_j^{(\mathrm{GS})}, h_i\big\rangle\, h_j^{(\mathrm{GS})}\Big) \tag{9}
\]
where $h_i := h_i(X)$ is the $i$th multi-Hermite polynomial evaluated on the samples. The $\{h_i(\cdot)\}_i$ are ordered by increasing degree; there is no dependence on the level coefficients $c_\ell$, and thus our measurement is kernel-independent. We proceed to estimate the coefficients as $\hat{v}_i = \langle h_i^{(\mathrm{GS})}, y\rangle$. As we show in Figure 3, with this single estimate of the target's near-Hermite orthonormal decomposition, we can reliably predict learning curves on a variety of tasks and kernels.

Footnote 7: For a full discussion of this method, see Section D.3.

6 MLPS LEARN HERMITE POLYNOMIALS IN THE ORDER PREDICTED BY THE HEA

One consequence of our theory is that there exists a canonical learning order in which KRR learns Hermite polynomials as sample size increases: each polynomial's learning priority is given by its associated HEA eigenvalue. Here, we check whether this order also predicts the training-time learning order of feature-learning MLPs (Yang & Hu, 2021).

[Figure 3: five panels of learning curves and sample complexities — Gaussian kernel @ CIFAR-5m (multi-Hermite targets such as $z_{10}$, $z_{190}$, $z_0^2$, $z_1z_3$, $z_{16}z_{20}$, $z_0^3$, $z_0z_1z_2$), Laplace kernel @ ImageNet-32 (predicted vs. empirical sample complexity for linear through quartic targets), Gaussian kernel @ SVHN (even vs. odd, prime vs. composite, genus 0 vs. genus ≥ 1, 0 vs. all others, 4 vs. 2), Laplace kernel @ CIFAR-5m (cat vs. all others, car vs. truck, dog vs. frog, vehicles vs. animals), and Shallow ReLU NTK @ ImageNet-32 (powerlaw targets with source exponents β ∈ {1.1, 1.15, 1.25, 1.4, 1.6}); caption follows.]
Figure 3: Using only the empirical data covariance and a polynomial decomposition of the target function, we predict learning curves across a variety of kernels, datasets, and targets. (Top row.) We predict test error on synthetic target functions of real datasets. Each target is the multi-Hermite polynomial (Equation (3)) whose leading term is indicated in the plots. (Recall that $z_i = u_i^\top x/\sqrt{\gamma_i}$ are the rescaled PCA coordinates.) For each of these targets, our ansatz predicts both learning curves (top left) and the sample complexity required to achieve MSE ≤ 0.5 (top right). (Bottom left and center.) We train on binarized true target functions. See Section D.1 for details. (Bottom right.) We construct synthetic targets by drawing from a Gaussian process. The source exponent controls the difficulty of the target. See Section D.1 for details. For all learning curve predictions, we estimate the coefficients of the target function in the predicted eigenbasis using the Gram-Schmidt process described in Section D.3. See Section D.4 for full experimental details.

We train MLPs online on multi-Hermite polynomial target functions of Gaussian data and CIFAR-5m and count the number of steps $n_{\mathrm{iter}}$ required to reach online loss MSE ≤ 0.1. We find that the effective optimization time $\eta \cdot n_{\mathrm{iter}}$ is well predicted by the HEA eigenvalue (Figure 4).

[Figure 4: effective optimization time $\eta \cdot n_{\mathrm{iter}}$ against the inverse HEA eigenvalue, one panel each for MLP @ Gaussian data and MLP @ CIFAR-5m, with targets colored by degree from constant through quartic.]

Figure 4: The HEA accurately predicts polynomial learning order in MLPs in the feature-learning regime. We measure the time it takes to train three-layer ReLU MLPs; each dot is one MLP trained on a single multi-Hermite polynomial target until a test error of MSE ≤ 0.1 is reached. We find that the optimization time is well-predicted by the inverse square root of the HEA eigenvalue, $\lambda_\alpha^{-1/2}$.

Details of our exact MLP experimental setup can be found in Section F, including validation of the feature-learning regime, insight into hyperparameter choices, and model performance when taken into the NTK/lazy and ultra-rich regimes.

7 DISCUSSION

We have presented a theoretical framework which describes how KRR "sees" complex natural datasets: namely, as a nearly-Gaussian measure with Hermite eigenstructure as per Definition 4. This is a proof of concept that end-to-end theories of learning, mapping dataset structure all the way to model performance, are possible for a nontrivial learning algorithm on real datasets. Theories of this sort, applicable to more general algorithms, may be a good end goal for learning theory.

AUTHOR CONTRIBUTIONS

DK led the scientific development of the KRR empirics from early exploration through final experiments, wrote most of the codebase, and together with JT wrote the empirical sections of the main text. JT developed the MLP empirics and determined the conditions under which the HEA holds for KRR. YL provided formal statements of Theorems 1 and 2 and developed the proofs appearing in Sections H to J. JS guessed the HEA idea, developed preliminary empirics and the conjectures that became our theorems, wrote most of the main text, and led the team logistically.

ACKNOWLEDGMENTS

The authors thank Boris Hanin for giving us an introductory schooling in Hermite polynomials.
We are grateful to Berkan Ottlik for many clarifying discussions and detailed comments on the manuscript, and to Evan Ellis for useful comments on the introduction. DK thanks the many friendly faces at Imbue for welcoming us, especially Evan Ellis for interesting conversations, Matthew Schallenkamp for useful tips, and Bowei Liu for being our GPU genie. JT thanks Kanjun Qiu and Josh Albrecht for supporting this research financially. YL acknowledges GPT-5 for multiple wrong attempts at proofs which gave salutary motivation to produce correct proofs, if only to prove someone on the internet wrong. JS thanks Parthe Pandit, Margalit Glasgow, and Jonathan Shi for useful early feedback and Josh Albrecht and Kanjun Qiu for encouragement, guidance, and support during the development of this work and the process of learning to lead. JS additionally thanks the residents of Fort Jones, CA for providing a peaceful and welcoming environment for a November 2024 stay in which the ideas developed in this paper took form.

This work was principally funded by Imbue under the Feature Lab (FLAB) initiative. This work was supported in part by the U.S. Army Research Laboratory and the U.S. Army Research Office under Contract No. W911NF-20-1-0151 awarded to Michael R. DeWeese. YL thanks Stuart Russell for funding during the development of this work.

STATEMENT ON THE USE OF LARGE LANGUAGE MODELS

We used large language models (LLMs) for analytical computations, writing code, detailed literature search on narrow topics, and the Taylor expansions of the Laplace and ReLU kernels appearing in Section B. We found that LLMs performed certain tasks faster than us but with a propensity for miscommunication or overconfidence, especially when fashioning proofs, so we sought or performed independent verification of everything useful we got from an LLM.

REFERENCES

Alexander Atanasov, Alexandru Meterez, James B Simon, and Cengiz Pehlevan. The optimization landscape of SGD across the feature learning strength. arXiv preprint arXiv:2410.04642, 2024. Cited on page 34.

Francis Bach. High-dimensional analysis of double descent for linear regression with random projections. arXiv preprint arXiv:2303.01372, 2023. Cited on page 20.

Alberto Bietti, Joan Bruna, Clayton Sanford, and Min Jae Song. Learning single-index models with shallow neural networks. Advances in Neural Information Processing Systems, 35:9768–9783, 2022. Cited on page 3.

Blake Bordelon and Cengiz Pehlevan. Learning curves for SGD on structured features. arXiv preprint arXiv:2106.02713, 2021. Cited on page 3.

Blake Bordelon, Abdulkadir Canatar, and Cengiz Pehlevan. Spectrum dependent learning curves in kernel regression and wide neural networks. In International Conference on Machine Learning, pp. 1024–1034. PMLR, 2020. Cited on pages 2, 3, and 20.

Francesco Cagnetta and Matthieu Wyart. Towards a theory of how the structure of language is acquired by deep neural networks. Advances in Neural Information Processing Systems, 37:83119–83163, 2024. Cited on page 3.

Francesco Cagnetta, Leonardo Petrini, Umberto M Tomasini, Alessandro Favero, and Matthieu Wyart. How deep neural networks learn compositional data: The random hierarchy model. Physical Review X, 14(3):031001, 2024. Cited on page 3.

Abdulkadir Canatar, Blake Bordelon, and Cengiz Pehlevan. Spectral bias and task-model alignment explain generalization in kernel regression and infinitely wide neural networks. Nature Communications, 12(1):1–12, 2021. Cited on page 20.
Yang Chen and Nigel Lawrence. Small eigenvalues of large Hankel matrices. Journal of Physics A: Mathematical and General, 32(42):7305, 1999. Cited on page 50.

Chen Cheng and Andrea Montanari. Dimension free ridge regression. arXiv preprint arXiv:2210.08571, 2022. Cited on page 20.

Lenaic Chizat, Edouard Oyallon, and Francis Bach. On lazy training in differentiable programming. Advances in Neural Information Processing Systems, 32, 2019. Cited on page 3.

Yatin Dandi, Florent Krzakala, Bruno Loureiro, Luca Pesce, and Ludovic Stephan. How two-layer neural networks learn, one (giant) step at a time. arXiv preprint arXiv:2305.18270, 2023. Cited on page 3.

Chandler Davis and William Morton Kahan. The rotation of eigenvectors by a perturbation. III. SIAM Journal on Numerical Analysis, 7(1):1–46, 1970. Cited on pages 49 and 59.

J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR09, 2009. Cited on page 21.

Edgar Dobriban and Stefan Wager. High-dimensional asymptotics of predictions: Ridge regression and classification. The Annals of Statistics, 46(1):247–279, 2018. Cited on page 20.

Simon Du, Jason Lee, Haochuan Li, Liwei Wang, and Xiyu Zhai. Gradient descent finds global minima of deep neural networks. In International Conference on Machine Learning, pp. 1675–1685. PMLR, 2019. Cited on page 3.

Dheeru Dua and Casey Graff. UCI machine learning repository, 2017. Cited on pages 21 and 30.

Rishabh Dudeja and Daniel Hsu. Learning single-index models in Gaussian space. In Conference On Learning Theory, pp. 1887–1930. PMLR, 2018. Cited on page 3.

Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, and Andrea Montanari. When do neural networks outperform kernel methods? Advances in Neural Information Processing Systems, 33:14820–14830, 2020. Cited on page 3.

Sebastian Goldt, Marc Mézard, Florent Krzakala, and Lenka Zdeborová. Modeling the influence of data structure on learning in neural networks: The hidden manifold model. Physical Review X, 10(4):041044, 2020. Cited on page 3.

Trevor Hastie, Andrea Montanari, Saharon Rosset, and Ryan J Tibshirani. Surprises in high-dimensional ridgeless least squares interpolation. The Annals of Statistics, 50(2):949–986, 2022. Cited on page 20.

Arthur Jacot, Clément Hongler, and Franck Gabriel. Neural tangent kernel: Convergence and generalization in neural networks. In Advances in Neural Information Processing Systems (NeurIPS), 2018. Cited on page 3.

Arthur Jacot, Berfin Şimşek, Francesco Spadaro, Clément Hongler, and Franck Gabriel. Kernel alignment risk estimator: Risk prediction from training data. Advances in Neural Information Processing Systems, 33:15568–15578, 2020. Cited on pages 2, 3, and 20.

Stefan Kaczmarz. Angenäherte Auflösung von Systemen linearer Gleichungen. Bull. Int. Acad. Pol. Sci. Lett., Cl. Sci. Math. Nat., pp. 355–357, 1937. Cited on page 24.

Tosio Kato. Variation of discrete spectra. Communications in Mathematical Physics, 111(3):501–504, 1987. Cited on page 65.

Yann LeCun, Corinna Cortes, and Christopher J. C. Burges. The MNIST database of handwritten digits, 1998. Dataset. Cited on page 21.

Jaehoon Lee, Yasaman Bahri, Roman Novak, Samuel S. Schoenholz, Jeffrey Pennington, and Jascha Sohl-Dickstein. Deep neural networks as Gaussian processes. In International Conference on Learning Representations (ICLR), 2018. Cited on page 3.

Jason D Lee, Kazusato Oko, Taiji Suzuki, and Denny Wu. Neural network learns low-dimensional polynomials with SGD near the information-theoretic limit.
Advances in Neural Information Processing Systems, 37:58716–58756, 2024. Cited on page 3.

Bruno Loureiro, Cedric Gerbelot, Hugo Cui, Sebastian Goldt, Florent Krzakala, Marc Mézard, and Lenka Zdeborová. Learning curves of generic features maps for realistic datasets with a teacher-student model. Advances in Neural Information Processing Systems, 34:18137–18151, 2021. Cited on page 20.

F. Gustav Mehler. Ueber die Entwicklung einer Function von beliebig vielen Variablen nach Laplaceschen Functionen höherer Ordnung. Journal für die reine und angewandte Mathematik, 66:161–176, 1866. Cited on pages 7 and 43.

Song Mei and Andrea Montanari. The generalization error of random features regression: Precise asymptotics and the double descent curve. Communications on Pure and Applied Mathematics, 2019. Cited on page 3.

Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar. Foundations of Machine Learning. Adaptive Computation and Machine Learning. The MIT Press, Cambridge, Massachusetts, second edition, 2018. ISBN 978-0-262-03940-6. Cited on page 4.

Alireza Mousavi-Hosseini, Denny Wu, and Murat A Erdogdu. Learning multi-index models with neural networks via mean-field Langevin dynamics. arXiv preprint arXiv:2408.07254, 2024. Cited on page 3.

Preetum Nakkiran, Behnam Neyshabur, and Hanie Sedghi. The deep bootstrap framework: Good online learners are good offline generalizers. arXiv preprint arXiv:2010.08127, 2020. Cited on page 21.

Radford M Neal. Priors for infinite networks. In Bayesian Learning for Neural Networks, pp. 29–53. Springer, 1996. Cited on page 3.

Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Baolin Wu, and Andrew Y. Ng. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, pp. 7, 2011. Workshop paper. Cited on page 21.

Maria Refinetti, Alessandro Ingrosso, and Sebastian Goldt. Neural networks trained with SGD learn distributions of increasing complexity. In International Conference on Machine Learning, pp. 28843–28863. PMLR, 2023. Cited on page 3.

Dominic Richards, Jaouad Mourtada, and Lorenzo Rosasco. Asymptotics of ridge(less) regression under general source condition. In International Conference on Artificial Intelligence and Statistics, pp. 3889–3897. PMLR, 2021. Cited on page 20.

I. J. Schoenberg. Positive definite functions on spheres. Duke Mathematical Journal, 9(1):96–108, 1942. doi: 10.1215/S0012-7094-42-00908-6. Cited on page 4.

Antonio Sclocchi, Alessandro Favero, and Matthieu Wyart. A phase transition in diffusion models reveals the hierarchical nature of data. Proceedings of the National Academy of Sciences, 122(1):e2408799121, 2025. Cited on page 3.

James B Simon, Madeline Dickens, Dhruva Karkada, and Michael R DeWeese. The eigenlearning framework: A conservation law perspective on kernel ridge regression and wide neural networks. arXiv preprint arXiv:2110.03922, 2021. Cited on pages 2, 3, 8, and 20.

Peter Sollich. Gaussian process regression with mismatched models. In Advances in Neural Information Processing Systems, pp. 519–526. MIT Press, 2001. Cited on pages 2, 3, and 20.

Terence Tao. Topics in Random Matrix Theory, volume 132 of Graduate Studies in Mathematics. American Mathematical Society, Providence, RI, 2012. ISBN 978-0-8218-7430-1. Cited on page 48.

Gerald Teschl. Mathematical Methods in Quantum Mechanics: With Applications to Schrödinger Operators. Volume 99 of Graduate Studies in Mathematics.
American Mathematical Society, Providence, RI, 2009. ISBN 978-0-8218-4660-5. Cited on page 48.

Alexander Wei, Wei Hu, and Jacob Steinhardt. More than a toy: Random matrix models predict how real-world neural representations generalize. In International Conference on Machine Learning, Proceedings of Machine Learning Research, 2022. Cited on page 20.

Denny Wu and Ji Xu. On the optimal weighted $\ell_2$ regularization in overparameterized linear regression. Advances in Neural Information Processing Systems, 33:10112–10123, 2020. Cited on page 20.

Greg Yang and Edward J Hu. Tensor programs IV: Feature learning in infinite-width neural networks. In International Conference on Machine Learning, pp. 11727–11737. PMLR, 2021. Cited on page 8.

Greg Yang, James B Simon, and Jeremy Bernstein. A spectral condition for feature learning. arXiv preprint arXiv:2310.17813, 2023. Cited on page 35.

Huaiyu Zhu, Christopher K. I. Williams, Richard Rohwer, and Michal Morciniec. Gaussian regression and optimal finite dimensional linear models. Technical Report NCRG/97/011, Neural Computing Research Group, Aston University, Birmingham, UK, 1997. Cited on pages 3 and 43.

A REVIEW OF HERMITE POLYNOMIALS

Our ansatz for kernel eigenstructure is constructed from Hermite polynomials, and we use several key properties throughout the paper. In this appendix, we give a brief review of Hermite polynomials.

Let us write $\mathrm{He}_\ell$ for the probabilist's Hermite polynomials (footnote 8). These are the unique set of polynomials satisfying the following properties:

(i) Degree. $\mathrm{He}_\ell$ is a polynomial of degree $\ell$.
(ii) Monic. The leading coefficient is 1, so $\mathrm{He}_\ell(x) = x^\ell + \cdots$ (in particular $\mathrm{He}_0(x) = 1$).
(iii) Orthogonality w.r.t. $\mathcal{N}(0,1)$. For $\ell \ne m$, it holds that $\mathbb{E}_{x\sim\mathcal{N}(0,1)}[\mathrm{He}_\ell(x)\mathrm{He}_m(x)] = 0$.

Footnote 8: Some references use the physicist's Hermite polynomials $H_\ell$. These are related to the probabilist's Hermite polynomials by $H_\ell(x) = 2^{\ell/2}\,\mathrm{He}_\ell(\sqrt{2}\,x)$. When using Hermite polynomials, be sure you know which ones you're talking about!

The probabilist's Hermite polynomials have squared norm $\mathbb{E}_{x\sim\mathcal{N}(0,1)}[\mathrm{He}_\ell^2(x)] = \ell!$. Since we will use the Hermite polynomials as a basis in which to express other functions, we will prefer them to have unit norm, so we use the normalized probabilist's Hermite polynomials $h_\ell(x) := \frac{1}{\sqrt{\ell!}}\mathrm{He}_\ell(x)$, which satisfy $\mathbb{E}_{x\sim\mathcal{N}(0,1)}[h_k(x)h_\ell(x)] = \delta_{k\ell}$. The first several such polynomials are given by
\[
h_0(x) = 1, \quad h_1(x) = x, \quad h_2(x) = \tfrac{1}{\sqrt{2}}(x^2 - 1), \quad h_3(x) = \tfrac{1}{\sqrt{6}}(x^3 - 3x), \quad h_4(x) = \tfrac{1}{\sqrt{24}}(x^4 - 6x^2 + 3), \quad h_5(x) = \tfrac{1}{\sqrt{120}}(x^5 - 10x^3 + 15x).
\]
The Hermite polynomials obey a remarkable number of useful mathematical relations. In particular, single- and multidimensional integrals of Hermite polynomials against exponentials and other polynomials are often computable in closed form. The Wikipedia page (https://en.wikipedia.org/wiki/Hermite_polynomials) is a good first reference. Here we state one such integral which is useful for understanding the intuitive derivation of the HEA in Section 4.1:
\[
\mathbb{E}_{x\sim\mathcal{N}(0,1)}[h_\ell(x)\,x^m] = \begin{cases} \dfrac{m!}{\sqrt{\ell!}\;2^{(m-\ell)/2}\,\big(\tfrac{m-\ell}{2}\big)!}, & m \ge \ell \text{ and } m \equiv \ell \pmod 2, \\ 0, & \text{otherwise.} \end{cases} \tag{10}
\]
That is, the $\ell$-th Hermite polynomial is orthogonal to all monomials $x^m$ whose order is less than $\ell$ (or whose order is simply of a different parity from $\ell$). In particular, this implies that $\mathbb{E}_{x'\sim\mathcal{N}(0,1)}[(xx')^m h_\ell(x')] = 0$ when $m < \ell$: the function $h_\ell$ lies in the nullspace of the rank-one kernel term $K(x,x') = (xx')^m$.

For future reference, we also note the multiplication and differentiation formulas:
\[
\mathrm{He}_n\,\mathrm{He}_m = \sum_{j=0}^{\min(m,n)} \binom{m}{j}\binom{n}{j}\, j!\, \mathrm{He}_{n+m-2j} \tag{11}
\]
\[
\mathrm{He}_n' = n\,\mathrm{He}_{n-1} \tag{12}
\]
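The properties above are easy to verify numerically. The sketch below (ours) builds $\mathrm{He}_\ell$ via the standard three-term recurrence $\mathrm{He}_{k+1}(x) = x\,\mathrm{He}_k(x) - k\,\mathrm{He}_{k-1}(x)$, normalizes by $\sqrt{\ell!}$, and checks orthonormality and Equation (10) by Gauss-Hermite quadrature:

```python
import math
import numpy as np

# Gauss-HermiteE quadrature: nodes/weights for the weight exp(-x^2/2);
# dividing the weights by sqrt(2*pi) normalizes to the N(0,1) measure.
nodes, weights = np.polynomial.hermite_e.hermegauss(60)
weights = weights / np.sqrt(2 * np.pi)

def he(l, x):
    """Probabilist's Hermite He_l via the three-term recurrence."""
    h0, h1 = np.ones_like(x), x
    if l == 0:
        return h0
    for k in range(1, l):
        h0, h1 = h1, x * h1 - k * h0
    return h1

def h(l, x):
    """Normalized probabilist's Hermite h_l = He_l / sqrt(l!)."""
    return he(l, x) / math.sqrt(math.factorial(l))

# Orthonormality: E[h_2^2] = 1 and E[h_2 h_4] = 0
print(np.sum(weights * h(2, nodes) ** 2), np.sum(weights * h(2, nodes) * h(4, nodes)))

# Equation (10) with l=2, m=4: both sides equal 4!/(sqrt(2!) * 2 * 1) ~ 8.485
lhs = np.sum(weights * h(2, nodes) * nodes**4)
rhs = math.factorial(4) / (math.sqrt(math.factorial(2)) * 2**1 * math.factorial(1))
print(lhs, rhs)
```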
B ON-SPHERE LEVEL COEFFICIENTS FOR COMMON ROTATION-INVARIANT KERNELS

In this appendix, we tabulate the on-sphere level coefficients for various kernels. We also use this appendix as the place where we recall the functional forms of the ReLU NNGP kernel and NTK. We begin by recalling Definition 3.

Definition 3 (On-sphere level coefficients). The on-sphere level coefficients of a rotation-invariant kernel $K$ at a radius $r > 0$ are the nonnegative sequence $\mathrm{coeffs}(K, r) := (c_\ell)_{\ell\ge 0}$ such that
\[
K(x,x') = \sum_{\ell\ge 0} \frac{c_\ell}{\ell!}(x^\top x')^\ell \quad \text{for all } x, x' \text{ such that } \|x\| = \|x'\| = r. \tag{13}
\]

B.1 GAUSSIAN KERNEL

For the Gaussian kernel $K(x,x') = e^{-\frac{1}{2\sigma^2}\|x-x'\|^2}$, the level coefficients $(c_\ell) = \mathrm{coeffs}(K, r)$ may be found by noting that, when $\|x\| = \|x'\| = r$, then
\[
K(x,x') = e^{-\frac{r^2}{\sigma^2}} \cdot e^{\frac{x^\top x'}{\sigma^2}} = e^{-\frac{r^2}{\sigma^2}} \cdot \sum_{\ell\ge 0}\frac{1}{\sigma^{2\ell}\,\ell!}(x^\top x')^\ell. \tag{14}
\]
Pattern-matching to Definition 3, we may then observe that
\[
c_0 = e^{-\frac{r^2}{\sigma^2}}, \quad c_1 = e^{-\frac{r^2}{\sigma^2}}\cdot\sigma^{-2}, \quad c_2 = e^{-\frac{r^2}{\sigma^2}}\cdot\sigma^{-4}, \quad \ldots, \quad c_\ell = e^{-\frac{r^2}{\sigma^2}}\cdot\sigma^{-2\ell}.
\]

B.2 EXPONENTIAL KERNEL

Let the exponential kernel be $K(x,x') = e^{\frac{1}{\sigma^2}x^\top x'}$. We do not use this kernel in experiments or theory reported in this paper, but we nonetheless include it here because it is arguably the nicest kernel for the study of the HEA, and we used it extensively in our initial experiments. The blessing of the exponential kernel is that it admits the Taylor expansion
\[
K(x,x') = e^{\frac{1}{\sigma^2}x^\top x'} = \sum_{\ell}\frac{1}{\sigma^{2\ell}\,\ell!}(x^\top x')^\ell. \tag{15}
\]
That is, the exponential kernel is exactly a dot-product kernel with coefficients $c_\ell = \sigma^{-2\ell}$. When $\sigma = 1$ and thus $c_\ell = 1$ for all $\ell$, this is in some sense the "platonic ideal" dot-product kernel (though we must then take $\gamma_1 \ll 1$ for the HEA to hold, as per the intuition developed in Section 4.1). When the domain is restricted to a sphere, the Gaussian kernel and exponential kernel are equal up to a global factor of $e^{-\frac{r^2}{\sigma^2}}$.

B.3 LAPLACE KERNEL

Here we obtain the on-sphere level coefficients for the Laplace kernel $K(x,x') = e^{-\frac{1}{\sigma}\|x-x'\|}$ (footnote 9). Let
\[
s := \frac{x^\top x'}{r^2} \in [-1, 1], \qquad \beta := \frac{\sqrt{2}\,r}{\sigma}. \tag{16}
\]
Since $\|x - x'\| = \sqrt{2}\,r\sqrt{1-s}$ on the sphere,
\[
K(x,x') = \exp\big(-\beta\sqrt{1-s}\big). \tag{17}
\]

Footnote 9: Note for users of our codebase: as defined in our repo, the Laplace kernel is actually $K(x,x') = e^{-\frac{1}{\sqrt{2}\sigma}\|x-x'\|}$; that is, there is an extra factor of $\sqrt{2}$. We later moved away from this convention, but it remains in code.

Closed form for on-sphere level coefficients. Let $y_n(x)$ denote the reverse Bessel polynomials, which satisfy the exponential generating function identity
\[
\exp\big(x\,(1 - \sqrt{1-2t})\big) = 1 + x\sum_{\ell\ge 1} y_{\ell-1}(x)\,\frac{t^\ell}{\ell!}. \tag{18}
\]
Applying Equation (18) with $x = \beta$ and $t = s/2$ gives
\[
\exp\big(\beta(1 - \sqrt{1-s})\big) = 1 + \beta\sum_{\ell\ge 1} y_{\ell-1}(\beta)\,\frac{(s/2)^\ell}{\ell!}. \tag{19}
\]
Multiplying by $e^{-\beta}$ and substituting $s = (x^\top x')/r^2$, the definition $K(x,x') = \sum_{\ell\ge 0}\frac{c_\ell}{\ell!}(x^\top x')^\ell$ implies
\[
c_0 = e^{-\beta}, \qquad c_\ell = \frac{e^{-\beta}}{r^{2\ell}}\,\frac{\beta\, y_{\ell-1}(\beta)}{2^\ell}\;\;(\ell\ge 1), \qquad \beta = \frac{\sqrt{2}\,r}{\sigma}. \tag{20}
\]

The first few coefficients. With $y_0(x) = 1$, $y_1(x) = 1 + x$, $y_2(x) = 3 + 3x + x^2$, $y_3(x) = 15 + 15x + 6x^2 + x^3$, we obtain (for $\beta = \sqrt{2}r/\sigma$)
\[
c_0 = e^{-\beta}, \quad c_1 = \frac{e^{-\beta}}{r^2}\,\frac{\beta}{2}, \quad c_2 = \frac{e^{-\beta}}{r^4}\,\frac{\beta^2 + \beta}{4}, \quad c_3 = \frac{e^{-\beta}}{r^6}\,\frac{\beta^3 + 3\beta^2 + 3\beta}{8}, \quad c_4 = \frac{e^{-\beta}}{r^8}\,\frac{\beta^4 + 6\beta^3 + 15\beta^2 + 15\beta}{16}.
\]

Large-$\ell$ asymptotics. The dominant singularity of $F(s) = e^{-\beta\sqrt{1-s}}$ is at $s = 1$, with $F(s) = 1 - \beta\sqrt{1-s} + O(1-s)$, yielding
\[
[s^\ell]F(s) \sim \frac{\beta}{2\sqrt{\pi}}\,\ell^{-3/2},
\]
where the coefficient extraction operator $[s^\ell]F(s)$ returns the $\ell$-th coefficient in the power series of $F(s)$. Since $c_\ell = r^{-2\ell}\,\ell!\,[s^\ell]F(s)$,
\[
c_\ell \sim \frac{\beta}{2\sqrt{\pi}}\,\frac{\ell!}{r^{2\ell}\,\ell^{3/2}} \quad (\ell \to \infty), \qquad \beta = \frac{\sqrt{2}\,r}{\sigma}. \tag{21}
\]
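Since closed forms involving Bessel polynomials depend on a convention, a convention-free numerical check is useful. The sketch below (ours) computes $c_\ell = r^{-2\ell}\,\ell!\,[s^\ell]F(s)$ directly by symbolic Taylor expansion of Equation (17) with sympy; its output can be compared term by term against the table above.

```python
import sympy as sp

s = sp.symbols("s")
r_val, sigma_val = 1, 1
beta = sp.sqrt(2) * r_val / sigma_val
F = sp.exp(-beta * sp.sqrt(1 - s))          # Equation (17): the on-sphere Laplace kernel
L = 6
series = sp.series(F, s, 0, L).removeO()
for l in range(L):
    # c_l = r^{-2l} * l! * [s^l] F(s)
    c_l = sp.factorial(l) * series.coeff(s, l) / r_val ** (2 * l)
    print(l, sp.simplify(c_l), float(c_l))
```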
Subtlety: fast-growing $c_\ell$ means diverging HEA eigenvalues.

Here we encounter a subtlety with the Laplace kernel coefficients: Equation (21) states that $c_\ell \to \infty$ superexponentially as $\ell$ grows. Recall that the largest eigenvalue in each level predicted by the HEA is $\lambda = c_\ell\gamma_1^\ell$. This superexponential growth means that for any value of $\gamma_1 > 0$, no matter how small, these largest levelwise eigenvalues will eventually begin to increase as $\ell$ grows and continue to grow without bound. A hacky fixed-point calculation using Stirling's formula suggests that the minimum occurs at $\ell_{\min} \approx r^2\gamma_1^{-1}$. See Figure 5 for a numerical illustration of this.

What do we make of this? From a theoretical standpoint, this is the result of the fact that, since the Laplace kernel on the sphere $r\mathbb{S}^{d-1}$ has a singularity at $x^\top x' = r^2$, our on-sphere dot-product kernel approximation to the Laplace kernel will diverge when attempting to evaluate $K(x,x')$ at points $x$ with larger radius $\|x\| > r$. Since our theory is designed to work with Gaussian data, and roughly half of a Gaussian distribution will spill outside its sphere of average radius into this divergent region, the HEA predicts growing eigenvalues and infinite trace.

While this might seem to spell the doom of the HEA insofar as the Laplace kernel is concerned, our experiments (e.g. Figures 2 and 3) attest that this is not the case. We find that we can still get good experimental agreement by (a) ensuring the data has high effective dimension so that $\gamma_1$ is small (footnote 10) and then (b) truncating the HEA at finite order $\ell$, usually $\ell \in [5, 10]$. It seems plausible to us that this series approximation to the Laplace kernel is essentially an asymptotic expansion rather than a true Taylor series, meaning that it gives a good approximation when truncated to a finite number of terms so long as a particular parameter (here $r^{-1}\gamma_1$) is small, but then later diverges, rather than giving a better and better, and ultimately perfect, approximation as the number of terms grows. Essentially this same story also holds for the ReLU NNGP kernel and ReLU NTK, so we will not discuss it again.

Footnote 10: Simply decreasing all eigenvalues does not help, as that also decreases the sphere radius $r$ proportionally, which Equation (21) shows then increases each $c_\ell$ in a manner that compensates.

[Figure 5: HEA eigenvalue $c_\ell \cdot \gamma_1^\ell$ versus level $\ell$ for $\gamma_1 \in \{0.03, 0.1, 0.3\}$.]

Figure 5: We compute the Laplace kernel on-sphere coefficients $c_\ell$ with $\sigma = r = 1$ and plot $c_\ell\gamma_1^\ell$ for various $\ell$. Dashed lines show $r^2\gamma_1^{-1}$, the value of the predicted minimum.

B.4 THE RELU NNGP KERNEL

For a one-hidden-layer ReLU network with first-layer weight variance $\sigma_w^2$ and bias variance $\sigma_b^2$ (and no output bias), the first-layer preactivation kernel is
\[
K_1(x,x') = \sigma_w^2\, x^\top x' + \sigma_b^2. \tag{22}
\]
On the sphere $\|x\| = \|x'\| = r$, set
\[
q := K_1(x,x) = \sigma_w^2 r^2 + \sigma_b^2, \qquad s := \frac{x^\top x'}{r^2}, \qquad \rho := \frac{K_1(x,x')}{\sqrt{K_1(x,x)\,K_1(x',x')}} = \frac{\sigma_w^2 r^2 s + \sigma_b^2}{q}. \tag{23}
\]
The ReLU NNGP kernel is
\[
K_2(x,x') = \frac{\sigma_w^2}{2\pi}\, q\left(\sqrt{1-\rho^2} + (\pi - \arccos\rho)\,\rho\right) =: \frac{\sigma_w^2}{2\pi}\, q\, H(\rho). \tag{24}
\]
We Taylor-expand $K_2$ in powers of $x^\top x'$ and write
\[
K_2(x,x') = \sum_{\ell\ge 0} \frac{c_\ell}{\ell!}(x^\top x')^\ell. \tag{25}
\]
A change of variables gives the coefficients
\[
c_\ell = \frac{\sigma_w^2}{2\pi}\, q\left(\frac{\sigma_w^2}{q}\right)^\ell H^{(\ell)}(a) \quad \text{with} \quad a := \frac{\sigma_b^2}{q}, \qquad q = \sigma_w^2 r^2 + \sigma_b^2, \tag{26}
\]
where $H(\rho) = \sqrt{1-\rho^2} + (\pi - \arccos\rho)\,\rho$ and $H^{(\ell)}$ denotes the $\ell$-th derivative.
The first few coefficients follow from
\[
H(a) = \sqrt{1-a^2} + (\pi - \arccos a)\,a, \quad H'(a) = \pi - \arccos a, \quad H''(a) = \frac{1}{\sqrt{1-a^2}}, \quad H^{(3)}(a) = \frac{a}{(1-a^2)^{3/2}}, \quad H^{(4)}(a) = \frac{2a^2+1}{(1-a^2)^{5/2}},
\]
yielding
\[
c_0 = \frac{\sigma_w^2}{2\pi}q\left[\sqrt{1-a^2} + (\pi-\arccos a)\,a\right], \quad c_1 = \frac{\sigma_w^2}{2\pi}q\left(\frac{\sigma_w^2}{q}\right)(\pi - \arccos a), \quad c_2 = \frac{\sigma_w^2}{2\pi}q\left(\frac{\sigma_w^2}{q}\right)^2\frac{1}{\sqrt{1-a^2}}, \quad c_3 = \frac{\sigma_w^2}{2\pi}q\left(\frac{\sigma_w^2}{q}\right)^3\frac{a}{(1-a^2)^{3/2}}, \quad c_4 = \frac{\sigma_w^2}{2\pi}q\left(\frac{\sigma_w^2}{q}\right)^4\frac{2a^2+1}{(1-a^2)^{5/2}}. \tag{27}
\]

Asymptotics. As $\ell$ grows, the coefficient $c_\ell$ grows as
\[
c_\ell = \Theta\!\left(\frac{\sigma_w^2}{2\pi}\,q\left(\frac{\sigma_w^2}{q}\right)^\ell \frac{\ell!}{\ell^{3/2}}\right), \quad \ell \to \infty. \tag{28}
\]

B.5 THE RELU NTK

As in the previous subsection, we treat a shallow ReLU network. The ReLU NNGP kernel remains
\[
K_2(x,x') = \frac{\sigma_w^2}{2\pi}\,q\left(\sqrt{1-\rho^2} + (\pi-\arccos\rho)\,\rho\right) =: \frac{\sigma_w^2}{2\pi}\,q\,H(\rho). \tag{29}
\]
The corresponding two-layer ReLU NTK is
\[
\Theta_2(x,x') = \underbrace{\sigma_w^2\,K_1(x,x')\cdot\frac{1}{2\pi}(\pi - \arccos\rho)}_{\text{training first layer}} + \underbrace{K_2(x,x')}_{\text{training second layer}} \tag{30}
\]
\[
= \frac{\sigma_w^2}{2\pi}\,q\left(\sqrt{1-\rho^2} + 2\rho(\pi - \arccos\rho)\right) =: \frac{\sigma_w^2}{2\pi}\,q\,J(\rho). \tag{31}
\]
We Taylor-expand $\Theta_2$ in powers of $x^\top x'$ and write
\[
\Theta_2(x,x') = \sum_{\ell\ge 0}\frac{c_\ell}{\ell!}(x^\top x')^\ell. \tag{32}
\]
A change of variables gives the coefficients
\[
c_\ell = \frac{\sigma_w^2}{2\pi}\,q\left(\frac{\sigma_w^2}{q}\right)^\ell J^{(\ell)}(a) \quad \text{with} \quad a := \frac{\sigma_b^2}{q}, \qquad q = \sigma_w^2 r^2 + \sigma_b^2, \tag{33}
\]
where $J(\rho) = \sqrt{1-\rho^2} + 2\rho(\pi - \arccos\rho)$ and $J^{(\ell)}$ denotes the $\ell$-th derivative. The first few coefficients follow from the identities
\[
J(a) = \sqrt{1-a^2} + 2a(\pi - \arccos a), \quad J'(a) = 2(\pi - \arccos a) + \frac{a}{\sqrt{1-a^2}}, \quad J''(a) = \frac{3-2a^2}{(1-a^2)^{3/2}}, \quad J^{(3)}(a) = \frac{a(5-2a^2)}{(1-a^2)^{5/2}}, \quad J^{(4)}(a) = \frac{5+14a^2-4a^4}{(1-a^2)^{7/2}},
\]
yielding
\[
c_0 = \frac{\sigma_w^2}{2\pi}q\left[\sqrt{1-a^2} + 2a(\pi-\arccos a)\right], \quad c_1 = \frac{\sigma_w^2}{2\pi}q\left(\frac{\sigma_w^2}{q}\right)\left[2(\pi-\arccos a) + \frac{a}{\sqrt{1-a^2}}\right], \quad c_2 = \frac{\sigma_w^2}{2\pi}q\left(\frac{\sigma_w^2}{q}\right)^2\frac{3-2a^2}{(1-a^2)^{3/2}}, \quad c_3 = \frac{\sigma_w^2}{2\pi}q\left(\frac{\sigma_w^2}{q}\right)^3\frac{a(5-2a^2)}{(1-a^2)^{5/2}}, \quad c_4 = \frac{\sigma_w^2}{2\pi}q\left(\frac{\sigma_w^2}{q}\right)^4\frac{5+14a^2-4a^4}{(1-a^2)^{7/2}}.
\]

Asymptotics. As $\ell$ grows, the coefficient $c_\ell$ grows as
\[
c_\ell = \Theta\!\left(\frac{\sigma_w^2}{2\pi}\,q\left(\frac{\sigma_w^2}{q}\right)^\ell \frac{\ell!}{\ell^{3/2}}\right), \quad \ell \to \infty. \tag{34}
\]

C REVIEW OF THE KRR EIGENFRAMEWORK FOR PREDICTING TEST PERFORMANCE FROM TASK EIGENSTRUCTURE

The central piece of existing theory which we use in this paper is a set of equations which give the average-case test MSE of KRR in terms of the task eigenstructure. In this appendix, we will review this KRR eigenframework.

This eigenframework has been derived many times by different means in both the statistical physics community and the classical statistics community (which usually phrases the result as applying to linear ridge regression). References studying KRR include Sollich (2001); Bordelon et al. (2020); Jacot et al. (2020); Simon et al. (2021); Loureiro et al. (2021); Wei et al. (2022); references studying linear ridge regression include Dobriban & Wager (2018); Wu & Xu (2020); Hastie et al. (2022); Richards et al. (2021); Cheng & Montanari (2022). The result is essentially the same in all cases. Here we will adopt the terminology and notation of Simon et al. (2021).

Recall that we are studying KRR with a kernel function $K$, with data sampled $x_i \sim \mu$, targets generated as $y_i = f_*(x_i) + \eta$, and noise $\eta \sim \mathcal{N}(0, \epsilon^2)$ with variance $\epsilon^2 \ge 0$. The kernel admits an eigendecomposition $K(x,x') = \sum_i \lambda_i\phi_i(x)\phi_i(x')$ with orthonormal eigenfunctions $(\phi_i)$. Let us decompose the target function in the eigenbasis as $f_*(x) = \sum_i v_i\phi_i(x)$. Suppose we run KRR with $n$ samples and a ridge parameter $\delta \ge 0$, and we compute the population (i.e. test) and train MSEs as
\[
\mathrm{MSE}_{\mathrm{te}} = \mathbb{E}_{x\sim\mu}\!\left[(f_*(x) - \hat{f}(x))^2\right] + \epsilon^2, \tag{35}
\]
\[
\mathrm{MSE}_{\mathrm{tr}} = \frac{1}{n}\sum_i (y_i - \hat{f}(x_i))^2. \tag{36}
\]

C.1 STATEMENT OF THE EIGENFRAMEWORK

We are now ready to state the eigenframework. Let $\kappa \ge 0$ be the unique nonnegative solution to
\[
\sum_i \frac{\lambda_i}{\lambda_i + \kappa} + \frac{\delta}{\kappa} = n. \tag{37}
\]
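Equation (37) is a one-dimensional root-finding problem: its left-hand side decreases monotonically in $\kappa$, so a bisection suffices. The sketch below (ours, on a toy spectrum) solves for $\kappa$, which then feeds directly into the risk formulas stated next.

```python
import numpy as np

def solve_kappa(lambdas, n, delta, tol=1e-12):
    """Solve sum_i lambda_i/(lambda_i + kappa) + delta/kappa = n (Equation (37))."""
    def lhs(kappa):
        return np.sum(lambdas / (lambdas + kappa)) + delta / kappa
    lo, hi = 1e-20, 1e20
    while hi / lo > 1 + tol:
        mid = np.sqrt(lo * hi)                 # bisect in log space
        lo, hi = (mid, hi) if lhs(mid) > n else (lo, mid)  # lhs is decreasing
    return np.sqrt(lo * hi)

lambdas = 1.0 / np.arange(1, 10_000) ** 2      # toy powerlaw spectrum
kappa = solve_kappa(lambdas, n=100, delta=1e-3)
print(kappa)
```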
Then test risk is given approximately by
\[
\mathrm{MSE}_{\mathrm{te}} \approx \mathcal{E}_{\mathrm{te}} := \mathcal{E}_0\, B, \tag{38}
\]
where the overfitting coefficient $\mathcal{E}_0$ is given by
\[
\mathcal{E}_0 := \frac{n}{n - \sum_i \left(\frac{\lambda_i}{\lambda_i+\kappa}\right)^2} \tag{39}
\]
and the bias is given by
\[
B = \sum_i \left(\frac{\kappa}{\lambda_i+\kappa}\right)^2 v_i^2 + \epsilon^2. \tag{40}
\]
Train risk is given by
\[
\mathrm{MSE}_{\mathrm{tr}} \approx \mathcal{E}_{\mathrm{tr}} \equiv \frac{\delta^2}{n^2\kappa^2}\,\mathcal{E}_{\mathrm{te}}. \tag{41}
\]

What is meant by the "≈" in Equations (38) and (41)? This result only becomes exact in certain stringent cases; it is formally derived under an assumption that the eigenfunctions are independent Gaussian (or sub-Gaussian) variables when $x$ is sampled from $\mu$, and it is exact only in an asymptotic limit in which $n$ and the number of eigenmodes in a given eigenvalue range (or the number of duplicate copies of any given eigenmode) both grow large at a proportional rate (Hastie et al., 2022; Bach, 2023). These conditions emphatically do not apply to any realistic instance of KRR. Nonetheless, numerical experiments find that Equations (38) and (41) hold with small error even at modest $n$ (Canatar et al., 2021; Simon et al., 2021; Wei et al., 2022): though derived in an idealized setting and exact only in a limit, this eigenframework holds reliably in practical cases.

In this paper, we use this eigenframework as a tool to map predictions of task eigenstructure to predictions of learning curves. Since we are here using it in settings very similar to those tested by previous works (Bordelon et al., 2020; Jacot et al., 2020; Simon et al., 2021; Wei et al., 2022), we expect it to work well. Its use introduces some small error (as it is not perfect at finite $n$), but this is usually not the dominant source of error.

D EXPERIMENTS CHECKING THE HEA: DETAILS AND FURTHER DISCUSSION

This appendix contains descriptions of the experimental stack used to verify the HEA, as well as a discussion of practical considerations for applying the HEA to real datasets. It is organized as follows:

• In Section D.1 we catalog the kernels, datasets, and target functions we use throughout our experiments.
• In Section D.2 we explain the experiments that directly check whether the kernel eigenstructure matches the HEA prediction (Figure 2).
• In Section D.3 we describe our method for estimating the decomposition of the target function in the Hermite eigenbasis. Unlike previous work, our method does not require constructing or diagonalizing an empirical kernel matrix.
• In Section D.4 we detail the experimental setups for each of the learning curve and sample complexity plots (Figures 1 and 3).
• Finally, in Section D.5 we show the results of various additional experiments.

D.1 KERNELS, DATASETS, AND TARGET FUNCTIONS

Kernels. We use the Gaussian kernel, Laplace kernel, ReLU NNGP kernel, and ReLU NTK in our experiments. A detailed review of these kernels can be found in Section B (footnote 11).

Datasets. We use the following datasets for the main experiments:

• Mean-zero anisotropic Gaussian data. This synthetic dataset is fully specified by its (diagonal) covariance. Different experiments set the data dimension and covariance decay rate differently; see experiment-specific details in Sections D.2, E and F.
• CIFAR-5m (Nakkiran et al., 2020). This dataset consists of more than 5 million samples of synthetic images akin to CIFAR-10. These images were sampled using a deep generative model trained on CIFAR-10. Though the distributions of CIFAR-5m and CIFAR-10 may not be identical, they are typically considered close enough for research purposes.
• SVHN (Netzer et al., 2011).
This dataset contains over 600,000 images of numerals, taken from house numbers found on Google Street View.
• ImageNet-32 (Deng et al., 2009). This dataset contains downsampled ImageNet images (32 × 32 pixels).
• MNIST (LeCun et al., 1998) and Mushroom dataset (Dua & Graff, 2017). We use MNIST and the UCI Mushroom tabular dataset in Section E as examples of insufficiently Gaussian datasets.

We sometimes employ regularized ZCA whitening to increase the effective dimension of the data. This is a data preprocessing technique parameterized by a ZCA strength $\omega^2$ which maps
\[
X = USV^\top \;\mapsto\; U\,\overline{S\,(\omega^2 S^2 + I_d)^{-1/2}}\,V^\top \tag{42}
\]
where $X \in \mathbb{R}^{d\times N}$ is the data matrix, $USV^\top$ is its SVD, and we use the normalization notation $\overline{A} := A/(\|A\|_F^2/d)$. As the ZCA strength $\omega^2 \to 0$, we get no spectral transformation apart from a scalar normalization, $X \to U\overline{S}V^\top$. Conversely, when $\omega^2 \to \infty$, we get full whitening, $X \to UV^\top$. The crossover point of this behavior occurs at $\omega^2 \sim 1$. Note that although partially-whitened Gaussian data are slightly less anisotropic, they are still distributed as a multivariate Gaussian.

We sometimes employ sample normalization, $x \to x/\|x\|$. Note that although the normalized data lie on the hypersphere, their distribution is still anisotropic.

Footnote 11: A note for users of our codebase: in this paper, we define the Laplace kernel to be $K(x,x') = e^{-\frac{1}{\sigma}\|x-x'\|}$, but because we initially used a different convention, in code it is $K(x,x') = e^{-\frac{1}{\sigma\sqrt{2}}\|x-x'\|}$. When we report a kernel width in this paper, this is the width in the parameterization we use in the paper, not in code.

Both sample normalization and ZCA whitening are preprocessing techniques that, on aggregate, shift high-dimensional data samples closer to the hypersphere. Since the HEA relies on an expansion of kernel functions in terms of on-sphere coefficients (see Section B), these methods move any experimental setup closer to the regime well-described by the HEA. See Section E for further discussion of this point.

Targets. We use a variety of synthetic and real target functions in our experiments. All targets are scalar; the synthetic targets take continuous values, whereas the real targets are binarized ($y_i \in \{+1, -1\}$). All targets are mean-zero; for real targets, this means that the binary (super)classes are always balanced (even if the binary superclasses contain a differing number of base classes). We use the following targets for the main experiments:

• (Synthetic.) Multi-Hermite targets. A single (normalized) multi-Hermite polynomial of the PCA components (Equation (3)).
• (Synthetic.) Powerlaw targets. We draw a random sample of the Gaussian process
\[
f_\star(x) = \sum_i^P c_i h_i(x) + \epsilon \cdot (\text{white noise}), \tag{43}
\]
where $h_i(x)$ is shorthand for the $i$th multi-Hermite polynomial $h_{\alpha_i}(z)$ and $c_i$ is a mean-zero Gaussian random variable with variance $(i+6)^{-\beta}$. The so-called source exponent $\beta$ satisfies $\beta > 1$ and controls the Sobolev smoothness of the target: the larger $\beta$ is, the smoother and easier-to-learn the target. We choose a numerical truncation threshold $P = 30{,}000$ for convenience, choosing the target noise level $\epsilon$ to ensure that the target has unit norm, $\mathbb{E}[y_i^2] = 1$. Our results are empirically insensitive to the randomness in the target.
• (Real.) class vs. class. A binarization in which samples are only drawn from two classes.
• (Real.) class vs. all others. A binarization similar to a single output element of practical neural networks with one-hot label encodings. A key difference here is that samples are drawn from each binary superclass in equal proportion so that $\mathbb{E}[y_i] = 0$.
• (Real.) Domesticated vs. wild animals. CIFAR-5m binarization: [cat, dog, horse] vs. [bird, deer, frog].
• (Real.) Vehicles vs. animals. CIFAR-5m binarization: [plane, car, ship, truck] vs. [bird, cat, deer, dog, frog, horse]. Samples are drawn from each superclass in equal proportion so that $\mathbb{E}[y_i] = 0$.
• (Real.) Even vs. odd. SVHN binarization based on parity: [0, 2, 4, 6, 8] vs. [1, 3, 5, 7, 9].
• (Real.) Prime vs. composite. SVHN binarization based on primality: [2, 3, 5, 7] vs. [4, 6, 8, 9]. We leave out [0] and [1], numerals whose primality is undefined.
• (Real.) Genus 0 vs. genus ≥ 1. SVHN binarization based on the numeral's topological genus: [1, 3, 5, 7] vs. [0, 6, 8, 9]. We leave out [2] and [4], numerals whose topological genus is font-dependent.

D.2 DIRECT EIGENSTRUCTURE CHECKS

What is the appropriate way to numerically compare two eigensystems? The spectra are easy to compare: one can simply check whether $\frac{|\lambda_i - \hat\lambda_i|}{\lambda_i}$ is small for all $i$. An easy visual check is to simply scatter one spectrum against the other on a log-log plot; if the points remain close to the $y = x$ line, then the spectra agree.

Comparing the eigenbases, on the other hand, is a subtler matter. One must be careful when the eigensystems have small eigenvalue gaps. This issue is most easily understood by considering the limit: what happens when comparing two diagonalizations of a degenerate matrix? In this case, numerical eigendecomposition is undefined since the true eigenvectors are not unique; the computed eigenvectors are thus arbitrary. Simply comparing $\hat\phi_i$ with $\phi_i$ for all $i$ is therefore insufficient.

Clearly, any good eigenbasis comparison must be spectrum-aware. In particular, differences between eigenvectors belonging to (near-)degenerate subspaces should not be strongly penalized. A coarse but simple way to make this comparison is with spectral binning. We divide $\mathbb{R}^+$ into logarithmically-spaced bins; then, for each eigenbasis, we treat the modes whose eigenvalues fall within the same bin as a single near-degenerate eigenspace. Applying this procedure to the HEA (footnote 12) yields a set of disjoint Hermite eigenspaces $\{\Phi_i^{(\mathrm{th})}\}_{i=1}^{n_{\mathrm{bins}}}$, and likewise for the empirical basis. Let $d_i^{(\mathrm{th})} = \dim(\Phi_i^{(\mathrm{th})})$ and likewise for $d_i^{(\mathrm{emp})}$. Note that in general we do not expect $d_i^{(\cdot)}$ to equal $d_j^{(\cdot)}$ for $i \ne j$; indeed, some bins may contain no modes at all. However, we do expect $d_i^{(\mathrm{th})} = d_i^{(\mathrm{emp})}$ for all $i$ (if the theory is accurate).

Footnote 12: Here, we abuse terminology by referring to the proposed Hermite basis as an eigenbasis; evaluated on finitely many samples of real data, these basis vectors may not be truly orthonormal.

Having handled any issues of spectral near-degeneracy, we may directly compare the two eigenbases by computing the pairwise overlaps between the binned eigenspaces:
\[
\mathrm{Overlap}(i,j) = \begin{cases} \dfrac{1}{d_j^{(\mathrm{emp})}}\left\|\Phi_i^{(\mathrm{th})\top}\Phi_j^{(\mathrm{emp})}\right\|_F^2, & d_i^{(\mathrm{th})} \ne 0 \text{ and } d_j^{(\mathrm{emp})} \ne 0, \\ \text{undefined}, & \text{otherwise}, \end{cases} \tag{44}
\]
where again $1 \le i, j \le n_{\mathrm{bins}}$ enumerate the bins. To visualize this in Figure 2, we plot the overlaps in a heatmap, graying out pixels whose spectral bins do not contain any eigenmodes (and thus have undefined overlap). We note that discrepancies in the tails are primarily caused by distortions in the empirical eigenbasis caused by finite-size effects, rather than genuine disagreement between the theory and the true eigensystem (see Figure 6).

The experiments that generated Figure 2 used the following hyperparameters:

• Gaussian kernel, Gaussian data. Kernel width $\sigma = 6$. Data $x \in \mathbb{R}^{200}$ drawn i.i.d. Gaussian with diagonal covariance $\mathbb{E}[x_i^2] = (i+6)^{-3.0}$. This results in $d_{\mathrm{eff}} \approx 7$. We choose a steep covariance decay exponent to ensure that cubic modes are clearly present in the plot.
• Gaussian kernel, CIFAR-5m.
Kernel width $\sigma = 6$. No ZCA nor sample normalization. This results in $d_{\mathrm{eff}} \approx 9$.
• Laplace kernel, SVHN. Kernel width $\sigma = 8\sqrt{2}$. Whiten data with ZCA strength $\omega^2 = 5\times 10^{-3}$ and then unit-normalize. This results in $d_{\mathrm{eff}} \approx 21$. We generally observed that $d_{\mathrm{eff}} \ge 20$ is a reliable rule of thumb for obtaining good agreement with the HEA using the Laplace kernel.
• ReLU NTK, ImageNet-32. Bias variance $\sigma_b^2 = 1.68$ and weight variance $\sigma_w^2 = 0.56$. Whiten data with ZCA strength $\omega^2 = 5\times 10^{-3}$ and then unit-normalize. This results in $d_{\mathrm{eff}} \approx 40$. Examining the on-sphere coefficients, we see that $\sigma_b^2/\sigma_w^2 \gg 1$ is the ReLU NTK analogue of the wide-kernel condition for Gaussian kernels.

D.3 HOW TO DECOMPOSE THE TARGET FUNCTION

We are interested in recovering the coefficients of the target $f_\star$ in the kernel eigenbasis:
\[
\text{Recover } v_i \text{ from samples of } f_\star(x) = \sum_i v_i\phi_i(x), \tag{45}
\]
where $i$ is the mode index. According to the HEA, this amounts to expanding $f_\star$ in the multi-Hermite basis:
\[
f_\star(x) = \sum_i \tilde{v}_i h_i(x), \tag{46}
\]
where $h_i(x)$ is shorthand for the $i$th multi-Hermite polynomial $h_{\alpha_i}(z)$. We use different notation for the Hermite coefficients since the HEA will not hold exactly in practical settings. Of course, obtaining any full expansion of $f_\star$ from $N$ samples is exactly as hard as the original learning problem. However, we can hope to obtain an expansion that is good enough to predict the behavior of KRR trained with up to $N$ samples.

Let us define $y \in \mathbb{R}^N$ as the vector of target samples and $h_i \in \mathbb{R}^N$ as the $i$th multi-Hermite polynomial evaluated on the samples. Let us stack the top $P < N$ modes $[h_1\; h_2\; \cdots\; h_P]^\top$ and call the resulting matrix $H \in \mathbb{R}^{P\times N}$ (footnote 13). We would like to recover the top $P$ coefficients $\hat{v} \in \mathbb{R}^P$ from $y$ and $H$.

Footnote 13: In our experiments we choose $P = 30{,}000$ and $N = 80{,}000$ since manipulating this tensor maximizes our GPU VRAM utilization.

Naïve first try. As a first pass, let's simply run linear regression: $\hat{v} = H^\dagger y$, where $H^\dagger$ is the pseudo-inverse. Empirically, the recovered coefficients are not very good. There are two main reasons.

• The true target contains Hermite modes which are not represented in our top-$P$ list. This is an essential model misspecification which appears to linear regression as target noise. This is not a problem in itself; the learning curve eigenframework can handle this kind of noise, if it knows how much noise power there is. However, it is unclear how to measure the magnitude of this misspecification noise, since it gets mixed in with the sampling "noise."
• Linear regression has a flat inductive bias: it tends to fit the samples using each of its regressors with equal enthusiasm. Here, the regressors are functions, arranged (roughly) in order of decreasing smoothness. On the other hand, natural targets are relatively smooth; their power tends to concentrate on the early modes. This leads to large estimation errors in the top coefficients.

These two challenges, left unresolved, result in poorly predicted learning curves.

Second pass. Let us instead consider the following greedy iterative algorithm:

Algorithm 1 Greedy Residual Fitting (GRF)
Require: Target vector $y \in \mathbb{R}^N$, Hermite feature matrix $H \in \mathbb{R}^{P\times N}$.
1: $y^{(0)} \leftarrow y$
2: for $p = 1$ to $P$ do
3:   $h_p \leftarrow H_{p,:}$
4:   $\hat{v}_p \leftarrow \langle h_p, y^{(p-1)}\rangle / \|h_p\|_2^2$  ▷ find the coefficient of $h_p$ in $y^{(p-1)}$
5:   $y^{(p)} \leftarrow y^{(p-1)} - \hat{v}_p h_p$  ▷ subtract off the projection onto $h_p$
6: end for
7: return $\hat{v} = (\hat{v}_1, \ldots, \hat{v}_P)$ and $\hat{\epsilon}^2 = \|y^{(P)}\|_2^2$

This is essentially the Kaczmarz method for solving a linear system (Kaczmarz, 1937).
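For concreteness, here is Algorithm 1 transcribed into NumPy (a sketch; variable names are ours). On exactly orthogonal features it recovers the coefficients exactly, matching the Pythagorean bookkeeping described below.

```python
import numpy as np

def greedy_residual_fitting(H, y):
    """Algorithm 1 (GRF). H: (P, N) Hermite features as rows; y: (N,) targets.
    Returns estimated coefficients v_hat (P,) and residual noise power eps2."""
    resid = y.astype(float).copy()
    v_hat = np.zeros(H.shape[0])
    for p in range(H.shape[0]):
        h_p = H[p]
        v_hat[p] = h_p @ resid / (h_p @ h_p)   # project the residual onto h_p
        resid -= v_hat[p] * h_p                # subtract the fitted component
    return v_hat, float(resid @ resid)

# Toy usage: with exactly orthogonal features, coefficients are recovered exactly.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((1000, 5)))
H = Q.T                                        # 5 orthonormal "features"
y = 3.0 * H[0] - 2.0 * H[2] + 0.01 * rng.standard_normal(1000)
v_hat, eps2 = greedy_residual_fitting(H, y)
print(np.round(v_hat, 3), eps2)
```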
This target recovery algorithm has a few notable virtues. First, it is direct; it does not require matrix inverses, so it runs in quadratic rather than cubic time. Second, since it is iterative and stateful, it automatically prioritizes correctly estimating the top coefficients, which is where the target power tends to lie. Third, it naturally estimates the noise power (it is simply the norm of the final residual, $\|y^{(P)}\|^2$). Fourth, it estimates the correct amount of total power: by the Pythagorean theorem, $\sum_p \hat{v}_p^2 + \hat{\epsilon}^2 = \|y\|^2$ if $\|h_p\| = \sqrt{N}$, that is, if the Hermite polynomials are indeed unit norm (footnote 14).

Footnote 14: If they are not unit norm, then for the purposes of the KRR eigenframework, one usually wants to take $\hat{v}_p \mapsto N^{-1/2}\|h_p\|\,\hat{v}_p$ to estimate the total amount of power in Hermite direction $p$. After adopting this scaling, GRF again conserves total power by the Pythagorean theorem: $\sum_p \hat{v}_p^2 + \hat{\epsilon}^2 = \|y\|^2$.

We empirically find that this algorithm works well for synthetic targets on synthetic Gaussian data. However, real data is not perfectly Gaussian. Even small deviations from Gaussianity introduce systematic correlations between the Hermite polynomials $\{h_i\}$; these overlaps cause the greedy algorithm to systematically overestimate the target power in the affected modes. We empirically found that if the target is sufficiently "dense" in the Hermite modes (i.e., not dominated by very few coefficients), then these errors "average out" (loosely speaking) and the predicted learning curves remain accurate. Unfortunately, this is rarely the case in real targets. Real targets often contain much power in a few early modes, many of which suffer from this overlap problem. As a result, the predicted learning curves are often far off the mark. See Figures 9 and 11 for empirics.

Third pass. To rectify these modes, we simply squeeze out the non-orthogonality, starting from the top Hermite modes. A standard technique for iteratively eliminating overlaps is to use the Gram-Schmidt process. In practice, this simply amounts to performing a QR decomposition, $QR = H^\top$, and estimating the coefficients from the orthogonal component as $\hat{v} = Q^\top y$.

The major strength of this coefficient recovery technique is that it accurately predicts learning curves. Another strength is that since the regressors are orthogonal, the noise power is easily obtained as $1 - \hat{v}^\top\hat{v}$. The main drawback is that we must once again resort to a cubic-time algorithm. A natural question is then: if we are going to run a cubic-time algorithm anyway, why not simply diagonalize the empirical kernel matrix? There are several reasons:

• Universal measurement of target. This procedure only needs to be run once for a given target function. The resulting coefficients can then be reused to predict learning behavior across a variety of kernels and hyperparameters.
• Finite-size effects. Diagonalizing the finite-sample empirical kernel matrix typically introduces distortions in the tail modes. The HEA avoids this issue by constructing the basis directly.
• Numerical conditioning. The (non-orthogonalized) Hermite polynomial matrix $H$ is very well-conditioned on natural datasets. Performing Gram-Schmidt orthogonalization tends to be numerically robust, even at 32-bit floating-point precision.
On the other hand, the numerical conditioning of kernel diagonalization worsens as the number of samples (and retrievable modes) increases; in practice, we typically need 64-bit precision to obtain reliable eigenvectors. As a consequence, the prefactors for GPU VRAM space complexity are friendlier for Gram-Schmidt.
• Theoretical insight. Kernel diagonalization is an opaque numerical computation; it does not expose the functional form of the eigenfunctions. The perturbed closed-form expression offered by HEA + Gram-Schmidt reveals the analytical structure of learning in kernels.

Takeaway. The HEA conceptually holds; the kernel eigenfunctions are small perturbations of the multi-Hermite polynomials, even for complex real data. If the target function is sufficiently dense, the perturbations are not important to model, and the target coefficients can be estimated using a direct greedy method. However, for some real targets with prominent coefficients, we must be careful to account for their perturbations.

D.4 LEARNING CURVES AND SAMPLE COMPLEXITIES

In this section, we discuss additional considerations for evaluating learning curves. We end by cataloging the experimental parameters used to generate the learning curve plots.

Evaluating the theoretical predictions requires summing over all the eigenmodes. In practice, we estimate this sum by truncating at some large $P \gg N$ and discarding the tail. For kernels with fast spectral decay, the neglected tail contributes negligibly and the truncation introduces negligible error. However, for slow-decaying kernels (e.g., the Laplace kernel or ReLU NTK) the tail sum is non-negligible and we must account for it. We do this by modifying the ridge as
\[
\tilde\delta = \delta + \sum_{i=P}^{\infty}\lambda_i \approx \delta + \left(\mathrm{Tr}[K] - \sum_{i=0}^{P}\lambda_i\right). \tag{47}
\]
This heuristic arises from examining the eigenframework conservation law
\[
n = \frac{\delta}{\kappa} + \sum_{i=0}^{\infty}\frac{\lambda_i}{\lambda_i+\kappa} \tag{48}
\]
\[
= \frac{\delta}{\kappa} + \sum_{i=P}^{\infty}\frac{\lambda_i}{\lambda_i+\kappa} + \sum_{i=0}^{P-1}\frac{\lambda_i}{\lambda_i+\kappa} \tag{49}
\]
\[
\approx \frac{\delta}{\kappa} + \sum_{i=P}^{\infty}\frac{\lambda_i}{\kappa} + \sum_{i=0}^{P-1}\frac{\lambda_i}{\lambda_i+\kappa} \tag{50}
\]
where the final approximation follows from the fact that $P \gg n$, so $\kappa \gg \lambda_i$ for the tail modes. Combining terms yields the tail-corrected ridge.

Finally, we detail the parameters used in each of our main experiments. All kernel regression experiments use a ridge of $\delta = 10^{-3}$ and run up to 50 trials, each with a new test set of size 5,000. All target function estimates use 80,000 samples.

• Figure 1, top right. Gaussian kernel, width $\sigma = 4$. No ZCA nor normalization.
• Figure 1, bottom right. Laplace kernel, width $\sigma = 4\sqrt{2}$. ZCA strength $\omega^2 = 10^{-3}$ with data sample normalization. Sample complexities are obtained by computing learning curves (empirical curves averaged over 20 trials), performing logarithmic interpolation, and finding where the test risk falls below 0.5.
• Figure 3, top left. Gaussian kernel, width $\sigma = 8$. No ZCA nor normalization. We use a wide kernel for better agreement at targets of high degree; for comparison with a narrower kernel, see Figure 7 below.
• Figure 3, top right. Same as Figure 1, bottom right.
• Figure 3, bottom left. Gaussian kernel, width $\sigma = 10$. No ZCA, but the samples are normalized. For comparison with a narrow kernel and no sample normalization, see Figure 8 below.
• Figure 3, bottom center. Laplace kernel, width $\sigma = 4\sqrt{2}$. ZCA strength $\omega^2 = 5\times 10^{-3}$ with data sample normalization. For comparison with no sample normalization, see Figure 10.
• Figure 3, bottom right. ReLU NTK, bias and weight variances $\sigma_b^2 = 1.96$ and $\sigma_w^2 = 0.49$. ZCA strength $\omega^2 = 10^{-2}$ with data sample normalization.
D.5 ADDITIONAL EXPERIMENTS

[Figure 6: four panels plotting eigenvalue $\lambda_i$ against mode index $i$, with constant through quartic modes marked, for Gaussian kernel @ Gaussian data, Gaussian kernel @ CIFAR-10, Laplace kernel @ SVHN, and ReLU NTK @ Imagenet32.]
Figure 6: We show an alternative visualization of the top row of Figure 2, comparing the predicted and empirical spectra of various kernel/dataset combinations. We see that the HEA accurately predicts the minute details of the kernel spectrum. Furthermore, tail deviations are indeed caused by finite-kernel effects in the empirical spectrum rather than a failure of the HEA.

[Figure 7: test MSE vs. number of training samples $n$ for the Gaussian kernel @ CIFAR-5m at kernel widths 8 and 2, for targets such as $z_{10}$, $z_{190}$, $z_0^2$, $z_1z_3$, $z_{16}z_{20}$, $z_0^3$, and $z_0z_1z_2$.]
Figure 7: We compare the original plot from Figure 3 with kernel width 8 (left) to the same experimental setup except for a kernel width of 2 (right). The true eigenfunctions of the narrower kernel deviate from the HEA prediction, especially for the high-degree modes. For a similar comparison, see Figure 13.

[Figure 8: test MSE vs. $n$ for the Gaussian kernel @ SVHN under three setups, for the targets even vs. odd, genus 0 vs. genus $\geq 1$, 0 vs. all others, and 4 vs. 2.]
Figure 8: We compare the original experimental setup from Figure 3 (sample normalization, kernel width 10) on the left to two similar setups: no sample normalization (center) and a narrower kernel width of 4 (right). Interestingly, the narrow kernel width does not change the agreement much; removing sample normalization worsens agreement slightly.

[Figure 9: test MSE vs. $n$ for the Gaussian kernel @ CIFAR-5m under four estimation techniques, for the targets domesticated vs. wild, dog vs. all others, deer vs. horse, car vs. ship, and plane vs. frog.]
Figure 9: Here we show four plots in which the experimental setup does not change; instead, we compare different techniques for estimating the required theoretical quantities. (Upper left.) HEA prediction, identical to Figure 1. (Upper right.)
We numerically diagonalize the kernel matrix (size 25,000 × 25,000) and use the obtained eigenstructure to make predictions. The accuracy of the prediction degrades as the number of training samples approaches the size of the kernel matrix used to estimate the true eigenstructure. (Lower left.) We use the greedy algorithm described in Algorithm 1 to estimate the target coefficients. We see good initial agreement, but it degrades due to accumulating non-orthogonality. (Lower right.) For comparison, we include the predictions one would obtain by modeling the data distribution as an isotropic Gaussian. Clearly, a major contribution of our work is to handle the anisotropy in natural data, since it strongly affects the resulting learning curves.

[Figure 10: test MSE vs. $n$ for the Laplace kernel @ CIFAR-5m with and without sample normalization, for the targets cat vs. all others, car vs. truck, dog vs. frog, and vehicles vs. animals.]
Figure 10: We compare the plot from Figure 3 on the left with an identical experimental setup, except without sample normalization. We see that normalization is necessary for obtaining agreement, since the HEA is derived using the kernel's on-sphere level coefficients.

[Figure 11: test MSE vs. $n$ for the Laplace kernel @ Imagenet32 with synthetic targets of source exponent $\beta = 1.05, 1.15, 1.25, 1.5$.]
Figure 11: For “dense” synthetic targets (i.e., targets whose power is split among many eigenfunctions), the HEA predicts the performance of the Laplace kernel very well. Contrasting this with the predictions for the Laplace kernel on real targets (Figure 10), we conclude that dense targets are more forgiving of errors in target coefficient recovery. For this experiment, we use ZCA strength $\omega^2 = 10^{-2}$ and sample normalization.

[Figure 12: empirical vs. predicted sample complexity for the Laplace kernel @ CIFAR-5m, for linear, quadratic, cubic, and quartic targets such as $z_1$, $z_5$, $z_0^2$, $z_0z_2$, $z_1z_3$, $z_0^3$, $z_{100}$, $z_0z_1z_2$, $z_{190}$, $z_0^4$, and $z_3^2z_5$.]
Figure 12: Additional sample complexity plot, this time for the Laplace kernel on CIFAR-5m. We use sample normalization and a ZCA strength of $3\times 10^{-3}$.

E WHAT BREAKS THE HEA? CASE STUDIES OF THREE CAUSES OF FAILURE

In the main text, we have primarily shown results in settings in which the HEA works quite well in predicting kernel eigenstructure, learning curves, and monomial learning rates. Here we get our hands dirty and show where it breaks.

First cause of failure: narrow kernel width. The discussion of Section 4.1 suggests that if we make the kernel width too narrow (or, for kernels without a notion of width, if we change the kernel so that $c_{\ell+1}\gamma_1/c_\ell$ is not much less than one), we should start to see the HEA break. In particular, since for a Gaussian kernel $c_\ell \propto \sigma^{-2\ell}$, narrow width means non-small level coefficient ratios and vice versa. Figure 13 shows an empirical test: on the same synthetic data distribution, the kernel is made progressively narrower. As expected, the HEA breaks.

Second cause of failure: low data dimension leading to non-concentration of norm. The HEA treats all rotation-invariant kernels as if they were dot-product kernels. In fact, it throws away all information about the kernel function that cannot be obtained from its value on a sphere. For this to work in practice, we should expect our data to lie fairly close to a sphere. For Gaussian data $x \sim \mathcal{N}(0,\Sigma)$, this is assured if the data has high effective dimension $d_{\mathrm{eff}} := \mathrm{Tr}[\Sigma]^2/\mathrm{Tr}[\Sigma^2]$.
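This diagnostic is trivial to compute; a quick sketch (the function name and the ambient dimension $d = 100$ in the example are our choices, not the paper's):

```python
import numpy as np

def effective_dimension(gammas):
    """d_eff = Tr[Sigma]^2 / Tr[Sigma^2], from covariance eigenvalues gamma_i.

    High d_eff indicates that Gaussian samples concentrate near a sphere.
    """
    gammas = np.asarray(gammas)
    return gammas.sum() ** 2 / (gammas ** 2).sum()

# Example: spectra of the form gamma_i ∝ (i + 3)^(-alpha), as in Figure 14.
for alpha in (1.2, 1.5, 2.0, 3.0):
    gammas = (np.arange(1, 101) + 3.0) ** -alpha
    print(alpha, round(effective_dimension(gammas), 2))
```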
Figure 14 shows that with a Laplace kernel on a Gaussian dataset, the HEA breaks as $d_{\mathrm{eff}}$ decreases. Interestingly, not all rotation-invariant kernels require high effective dimension. For example, our heuristic derivation of the HEA for Gaussian data in Section 4.1 took place in just one dimension! As it turns out, this is because the Gaussian kernel is given by a polynomial feature map (of infinite dimension), but the Laplace kernel has a cusp at $x = x'$ and is not. (Equivalently, the Gaussian kernel is analytic, but the Laplace kernel is not.) For a stationary kernel $K(x,x') = k\!\left(\frac{1}{\sigma^2}\|x-x'\|^2\right)$, which depends only on the distance between points with scaling factor $\sigma$, the wide-width limit will give the HEA on Gaussian data if the function $k(\cdot)$ is real-analytic at zero. The Gaussian kernel satisfies this real-analytic criterion, but the Laplace kernel does not. As evidence of this, Figure 15 repeats the above experiment with a Gaussian kernel and finds that low $d_{\mathrm{eff}}$ is no problem.

We note that there is another problem conflated with this one: as discussed in Section B, the Laplace kernel's level coefficients actually diverge superexponentially, and the predicted HEA eigenvalues diverge faster the closer $\gamma_1/\sum_i\gamma_i$ is to one. One could disentangle the two effects by studying a rotation-invariant kernel that (a) does not factor into a product of one-dimensional kernels like the Gaussian and exponential kernels do, but (b) is still analytic, so that its level coefficients do not diverge superexponentially. Such a kernel is not among those studied here, so we leave this to future work.

Third cause of failure: the data not being “Gaussian enough.” The HEA is theoretically derived for Gaussian data, but nonetheless works for some real datasets. This suggests that the datasets for which it works are “Gaussian enough” in some sense. Here we support this intuition with experiments. First, in Figure 16, we plot the marginal distributions of the first few normalized principal coordinates for four datasets of decreasing complexity: CIFAR-10, SVHN, MNIST, and the tabular UCI Mushrooms dataset (Dua & Graff, 2017). Plotted this way, these datasets appear increasingly non-Gaussian. Then, in Figure 17, we test the HEA on these four datasets, finding that the HEA indeed seems to work worse as the dataset becomes “less Gaussian.” For good measure and out of interest, we plot the first few PCA coordinates for the three image datasets in Figure 18. Of course, we put “Gaussian enough” in quotes because this is a nonrigorous and ill-defined notion: we have not invoked any precise method of determining which of two datasets is “closer” to Gaussian! We do not attempt to give a precise definition here. This seems like a worthwhile direction for future exploration, especially from the statistics community.
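The diagnostic behind Figure 16 is straightforward to reproduce. Below is a minimal sketch (function name ours) that extracts the normalized principal coordinates whose histograms one would compare against a standard Gaussian:

```python
import numpy as np

def normalized_pca_coords(X, k=4):
    """Top-k principal coordinates of the rows of X, standardized to unit variance.

    Histograms of the returned columns can be compared against N(0, 1)
    to eyeball how "Gaussian" a dataset's leading directions are.
    """
    Xc = X - X.mean(axis=0)
    # Right singular vectors give the principal directions.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:k].T
    return Z / Z.std(axis=0)
```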
[Figure 13: panels comparing predicted vs. empirical kernel spectrum for the Gaussian kernel @ Gaussian data at widths $\sigma = 6, 1.5, 0.8, 0.5$, with constant through $\geq$ quintic modes marked, plus predicted-vs-empirical eigenspace overlap grids grouped in spectral bins.]
Figure 13: The HEA breaks when the kernel width is too narrow. We repeat the experiment of Figure 2 with the Gaussian kernel function $K(x,x') = e^{-\frac{1}{2\sigma^2}\|x-x'\|^2}$ and synthetic Gaussian data. The data is generated as $x \sim \mathcal{N}(0,\Gamma)$, where $\Gamma$ has eigenvalues $\gamma_i \propto i^{-3}$ for $i = 1,\ldots,30$, with $\gamma_1 \approx 0.83$ after normalization. As the kernel width $\sigma$ shrinks to $\sigma \lesssim 1$, the HEA breaks: eigenvalue prediction fails, with higher-order modes dominating the predictions, and the clear eigenspace overlap present at large width evaporates.

[Figure 14: panels comparing predicted vs. empirical kernel spectrum for the Laplace kernel @ Gaussian data with $\gamma_i \propto (i+3)^{-\alpha}$ at $d_{\mathrm{eff}} = 38.35, 21.44, 10.40, 4.62$, plus eigenspace overlap grids.]
Figure 14: With the Laplace kernel, the HEA requires high effective data dimension... We repeat the experiment of Figure 13 with two modifications: first, the kernel function is the Laplace kernel $K(x,x') = e^{-\frac{1}{\sigma}\|x-x'\|}$ with width $\sigma = 8\sqrt{2}$, and second, the eigenvalues are $\gamma_i \propto (i+3)^{-\alpha}$ with variable exponent $\alpha$. As $\alpha$ increases (moving left to right), the effective dimension $d_{\mathrm{eff}} = (\sum_i\gamma_i)^2/\sum_i\gamma_i^2$ decreases, and agreement with the HEA degrades. Empirically, we find that $d_{\mathrm{eff}} \approx 20$ is usually high enough to see good agreement.
[Figure 15: panels comparing predicted vs. empirical kernel spectrum for the Gaussian kernel @ Gaussian data with $\gamma_i \propto (i+3)^{-\alpha}$ at $d_{\mathrm{eff}} = 38.35, 21.44, 10.40, 4.62$, plus eigenspace overlap grids.]
Figure 15: ...but with the Gaussian kernel, it does not need high effective dimension. We repeat the experiment of Figure 14 exactly, but use a Gaussian kernel with width $\sigma = 3$. Thanks to the smoothness of the Gaussian kernel, we see little degradation of the theory-experiment match of the HEA.

[Figure 16: histograms of the first four normalized PCA coordinates $z_1,\ldots,z_4$, with Gaussian fits, for CIFAR-10, SVHN, MNIST, and Mushrooms.]
Figure 16: Different datasets have different degrees of Gaussianity in their early principal components. For four natural datasets (CIFAR-10, SVHN, MNIST, and the UCI Mushrooms tabular dataset) we compute the first four normalized principal coordinates $(z_i)_{i=1}^4$ and plot their distributions as histograms. We find that the first few principal coordinates of CIFAR-10 are fairly close to Gaussian; this is less true for SVHN, yet less true for MNIST, and not at all true for the Mushrooms dataset.

[Figure 17: panels comparing predicted vs. empirical kernel spectrum for the Gaussian kernel on CIFAR-10, SVHN, MNIST, and Mushrooms, plus eigenspace overlap grids.]
Figure 17: The HEA works better with complex high-dimensional datasets and worse with simpler datasets. We repeat the experiment of Figure 2 with a Gaussian kernel of width $\sigma = 3$ on 8000 samples from four datasets of increasing non-Gaussianity: CIFAR-10, SVHN, MNIST, and the UCI Mushrooms tabular dataset. As the dataset becomes simpler, the HEA works worse and worse, though the predicted trend is still roughly present, if quantitatively wrong, even for the tabular dataset.

Figure 18: Visualization of the first four PCA directions for the CIFAR-10, SVHN, and MNIST datasets.
F MLP EXPERIMENTS: DETAILS AND FURTHER DISCUSSION

As the HEA requires level coefficients to predict $\lambda_\alpha$, level coefficients for the MLP experiments are computed with the ReLU NTK with both bias and weight variances equal to 1, which matches the empirical variances of our MLP networks at initialization. Unless otherwise stated, all experiments were performed using a feature-learning (µP) MLP with the following data hyperparameters:
$$\gamma_i = (i+6)^{-\alpha}, \qquad \alpha = 1.7,$$
and MLP hyperparameters:
width $w = 8192$; depth (hidden layers + 1) $L = 3$; learning rate $\eta = 10^{-2}$; batch size $\mathrm{bsz} = 1024$; $\zeta = 1$,
where $\zeta$ is the richness parameter (defined by a rescaling of the network's output $f \mapsto f/\zeta$ and either the learning rate $\eta_{\max} \mapsto \eta_{\max}\cdot\zeta^{2/L}$ for $\zeta > 1$ or $\eta_{\max} \mapsto \eta_{\max}\cdot\zeta^{2}$ for $\zeta < 1$), allowing a network to scale between an ultra-rich and a lazy/NTK regime (Atanasov et al., 2024). In order to avoid network outputs exploding for $\zeta \ll 1$, we have all MLPs output the difference $f(x;\theta_t) - f(x;\theta_0)$, where $\theta_t$ denotes the network parameters at gradient step $t$.

[Figure 19: Gram matrices $W_1^\top W_1$ of trained MLPs for the target modes $x_3$, $x_2$, $x_2x_3$, and $x_1^2x_2^2$.]
Figure 19: Validation that the MLPs are feature-learning. We investigate the Gram matrix of the first weight matrix, $W_1^\top W_1$. We train MLPs with $\zeta = 1$ on $d = 10$ Gaussian synthetic data for 1000 GD steps. Yellow denotes higher entries, while blue denotes near-zero entries. The Gram matrices are heavily weighted toward the modes present in the target function, confirming the feature-learning regime.

All experiments were carried out in an online setting, with $n_{\mathrm{iter}}$ being the number of SGD steps it took to reach a train error $\mathrm{MSE}_{\mathrm{tr}} \leq 0.1$. This choice is largely inconsequential to the results, and was only chosen such that (1) each target function is fit well enough, and (2) training does not run for a prolonged time. Random samplings of $n = \mathrm{bsz}$ Gaussian datapoints produce variance in high-order Hermite polynomials, so an exponential moving average (EMA) of $\mathrm{MSE}_{\mathrm{tr}}$ is used with decay constant 0.9: $\mathrm{EMA}_{\mathrm{tr},t} = 0.1\,\mathrm{MSE}_{\mathrm{tr},t} + 0.9\,\mathrm{EMA}_{\mathrm{tr},t-1}$. This corresponds to a half-life for the train error of about 7 steps, and ensures that high-variance random samplings do not skew the effective optimization time $\eta\cdot n_{\mathrm{iter}}$ in favor of higher-order modes. We varied the EMA constant and found that values less than 0.9 underestimate the optimization time of high-order modes (as random samples then have a higher probability of exhibiting a naturally low Hermite norm), while values greater than 0.9 overestimate the optimization time of lower-order modes (since all modes' optimization times are offset by a constant). One could subtract the half-life from an empirically found $n_{\mathrm{iter}}$ with any EMA constant to obtain a more EMA-unbiased estimate of when the train error goes below the set cutoff, although we do not perform such a procedure. We additionally varied the loss cutoff (in the termination condition $\mathrm{MSE}_{\mathrm{tr}} \leq \mathrm{cutoff}$) and found that small changes did not significantly affect our results.
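A minimal sketch of this stopping rule (names ours; `train_step` is assumed to perform one SGD step on a fresh batch and return its train MSE):

```python
def steps_to_fit(train_step, decay=0.9, cutoff=0.1, max_steps=10**7):
    """Online training until the EMA of the train MSE drops below `cutoff`.

    Returns n_iter; multiplying by the learning rate gives the effective
    optimization time eta * n_iter used in the MLP scaling plots.
    """
    ema = None
    for t in range(1, max_steps + 1):
        mse = train_step()
        ema = mse if ema is None else (1 - decay) * mse + decay * ema
        if ema <= cutoff:
            return t
    return max_steps
```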
This network is empirically validated to be in the feature-learning regime by looking at the Gram matrix $W_1^\top W_1$, with $W_1 \in \mathbb{R}^{\mathrm{width}\times\mathrm{dim}}$ initialized with Gaussian entries. A lazy network keeps this Gram matrix constant throughout training, whereas feature learning allows for $\Theta(1)$ changes to the spectral norm of $W_1$, producing $\Theta(1)$ changes to $W_1^\top W_1$ (Yang et al., 2023). We train $\zeta = 1$ networks on simple PCA-mode target functions and find significantly more power in the Gram matrix entries corresponding to the data indices used in the target function, confirming that the MLP is operating in the feature-learning regime (Figure 19).

[Figure 20: effective optimization time $\eta\cdot n_{\mathrm{iter}}$ vs. inverse HEA eigenvalue $\lambda^{-1}$ for the base case and five modified setups: lower depth (1 hidden layer, $L = 2$), higher learning rate ($\eta = 0.5$), narrow width (width 128), lower batch size (64), and a different data exponent ($\alpha = 1.14$), with constant through quartic modes marked.]
Figure 20: Our primary MLP results are largely invariant to hyperparameter choices. We train MLPs on synthetic Gaussian data, with one hyperparameter modified at a time. The base case is shown in the top left for reference. Apart from the top center, all modifications produce no notable changes to our observation of the $\lambda_\alpha^{-1/2} \sim \eta\cdot n_{\mathrm{iter}}$ scaling. (Top center.) The 1-hidden-layer case differs from the rest only by a change in an eigenvalue-independent prefactor; the scaling law still holds.

We check whether our hyperparameter choices were simply lucky, or whether our $\lambda_\alpha^{-1/2} \sim \eta\cdot n_{\mathrm{iter}}$ scaling is real and reproducible. We take our base case and vary all hyperparameters (depth, width, learning rate, batch size, data covariances) that we do not expect to affect the scaling, finding that none do; the only change we observe is an eigenvalue-independent rise in effective optimization time when going to a shallower network. Results are summarized in Figure 20. The only hyperparameter we know will change the exponent in $\lambda_\alpha^{-\mathrm{exp.}} \sim \eta\cdot n_{\mathrm{iter}}$ is the richness parameter $\zeta$: when $\zeta \ll 1$, the kernel analysis in the main text tells us to expect exponent 1. What is unknown is the scaling exponent as $\zeta \gg 1$, which we find to be lower than the baseline 1/2. Lastly, the $\eta$ in our effective optimization time is defined as the base, $\zeta$-independent learning rate. Our findings are shown in Figure 21.

[Figure 21: effective optimization time vs. inverse HEA eigenvalue in the lazy ($\zeta = 10^{-3}$, slope 1), rich ($\zeta = 1$, slope 1/2), and ultra-rich ($\zeta = 10^{3}$, slope $\approx 0.4$) regimes, with constant through quartic modes marked.]
Figure 21: Sweeping the richness parameter changes the slope. (Left) We validate that taking $\zeta \ll 1$ brings us into the lazy/NTK regime, where we know the eigenvalue to be inversely proportional to the effective optimization time. (Right) We investigate the so-called “ultra-rich” regime $\zeta \gg 1$ and find a $\lambda_\alpha^{-0.4} \sim \eta\cdot n_{\mathrm{iter}}$ relationship. We suspect the 0.4 slope in the ultra-rich regime would change with variation of hyperparameters and is thus not fundamental, unlike the slopes of 1 and 1/2 in the lazy and rich regimes.

Lastly, we note that the base synthetic case and the CIFAR-5m case were averaged over 3 trials; upon finding a remarkably small standard deviation in effective optimization time, all further validation experiments were completed with only a single trial.

G REVIEW OF HILBERT SPACE THEORY

In this section we review the basics of Hilbert space theory, which will be very useful in proving the theorems of the next few appendices. Naïvely, a Hilbert space is just $\mathbb{R}^n$ or $\mathbb{C}^n$ where $n$ is allowed to go to infinity. In the standard notation, it is written as $(\mathcal{H}, \langle\cdot|\cdot\rangle)$, where $\mathcal{H}$ is the set of vectors and the angular brackets $\langle\cdot|\cdot\rangle$ denote the inner product on pairs of vectors in the set.
G.1 DIRAC NOTATION

We use the standard Dirac notation.
• Vectors are written as $|v\rangle \in \mathcal{H}$.
• The duals of vectors on $\mathcal{H}$ are written as $\langle v|$, defined via the inner product: $\langle v| : |w\rangle \mapsto \langle v|w\rangle$.
• Linear operators are written as capital letters: $A$.
• Linear operators act on the right: $\langle v|A|w\rangle := \langle v|\,(A|w\rangle)$.

For example, the Euclidean space $\mathbb{R}^d$ is a Hilbert space. Its inner product is the dot product $\langle v|w\rangle = v^\top w$. Write column vectors as kets and their transposes as bras:
$$|v\rangle = \begin{pmatrix} v_1 \\ \vdots \\ v_d \end{pmatrix}, \qquad \langle v| = |v\rangle^\top = [v_1 \cdots v_d], \qquad \langle v|w\rangle = \langle v|\,|w\rangle = v^\top w.$$
A matrix $A \in \mathbb{R}^{d\times d}$ acts on vectors by multiplication on the right. Rank-one operators are of the form $|v\rangle\langle w|$, with action $|x\rangle \mapsto |v\rangle\langle w|x\rangle$.

G.2 OPERATORS

In a Euclidean space such as $\mathbb{R}^3$, the geometry is defined by its lengths and angles. A linear operator on $\mathbb{R}^3$ preserves lengths and angles, and therefore preserves its geometry, iff it preserves the dot product between vectors. Such geometry-preserving linear operators are called orthogonal operators. Generalized to Hilbert spaces, geometry-preserving linear operators are called unitary. An operator $V : \mathcal{H} \to \mathcal{K}$ is unitary iff
$$\langle Vv|Vw\rangle = \langle v|w\rangle, \qquad \|V|v\rangle\| = \||v\rangle\|.$$
For example, a unitary $V : \mathbb{R}^n \to \mathbb{R}^n$ is just an orthogonal matrix. Unitary maps preserve all geometric relationships, such as angles between vectors, lengths of vectors, and orthonormal bases.

The rank of an operator is the dimension of its range. By the analog of the singular value decomposition in Hilbert space, if an operator has finite rank $r$, then it can be written in the form $\sum_{k=1}^{r}|v_k\rangle\langle u_k|$.

In a Euclidean space, operators have transposes; the transposition of operators satisfies $(A^\top w)^\top v = w^\top(Av)$. Generalized to Hilbert space, operators have adjoints. The adjoint of an operator $A$ is written $A^*$, and satisfies
$$\langle w|Av\rangle = \langle A^* w|v\rangle.$$
In a Euclidean space, symmetric operators are operators such that $A = A^\top$. In a Hilbert space, self-adjoint operators are operators such that $A = A^*$. Just as symmetric operators on $\mathbb{R}^d$ are of the form $\sum_{k=1}^{d}a_k v_k v_k^\top$, self-adjoint operators are of the form $\sum_k a_k|v_k\rangle\langle v_k|$, although the summation may be infinite.

The operator norm is a norm defined on operators: $\|A\|_{\mathrm{op}} := \sup_{x\in\mathcal{H},\,\|x\|=1}\|Ax\|$. An operator is positive definite, written $A \succ 0$, iff it is self-adjoint and $\langle v|A|v\rangle > 0$ for all nonzero $|v\rangle$. Similarly, an operator is positive semidefinite, written $A \succeq 0$, iff it is self-adjoint and $\langle v|A|v\rangle \geq 0$ for all $|v\rangle$.

G.3 COMPACT OPERATORS

In infinite dimensions, some finite-dimensional intuitions break. A finite-rank operator, by virtue of being finite-rank, behaves essentially the same as a linear operator between two finite-dimensional spaces; however, finite-rank operators are somewhat too trivial. Compact operators are a compromise: they can have infinite rank, but still allow some finite-dimensional intuitions to apply. An operator $K : \mathcal{H}\to\mathcal{H}$ is compact iff it is the operator-norm limit of a sequence of finite-rank operators.

Symmetric matrices are diagonalizable. Similarly, if an operator is compact and self-adjoint, then it is diagonalizable: there exists an orthonormal basis of eigenvectors $\{|e_j\rangle\}_{j\in J}$ and real eigenvalues $\{\lambda_j\}_{j\in J}$ with
$$K = \sum_{j\in J}\lambda_j|e_j\rangle\langle e_j|, \qquad \lambda_j\in\mathbb{R}, \quad \lambda_j\to 0.$$
Nonzero eigenvalues have finite multiplicity, and the eigenvalues can only accumulate at zero.

We will only study operators of a very specific form that arises naturally from kernel regression. Let $\{|v_n\rangle\}_{n\geq 0}$ be an orthonormal set in $\mathcal{H}$ and let $a_n \geq 0$ with $\sum_n a_n < \infty$.
Define
$$K := \sum_{n=0}^{\infty}a_n|v_n\rangle\langle v_n|.$$
Then:
• $K$ is self-adjoint, positive, has finite trace, and is compact.
• $K$ has eigenpairs $\{(a_n, |v_n\rangle) : n\geq 0\}$, counting multiplicities of repeated $a_n$.
• $\|K\| = \sup_n a_n$ and $\mathrm{Tr}[K] = \sum_n a_n$.

G.4 FUNCTION SPACES

Generally, the vector space of functions is infinite-dimensional. Hilbert space theory shows that certain geometric intuitions honed in low-dimensional Euclidean spaces remain useful even in an infinite-dimensional function space. As a basic example, consider $L^2(dx)$, the set of square-integrable functions $\mathbb{R}\to\mathbb{R}$, with inner product
$$\langle f|g\rangle = \int_{\mathbb{R}}f(x)g(x)\,dx.$$
This generalizes to $L^2(d^Dx)$, the space of square-integrable functions $\mathbb{R}^D\to\mathbb{R}$, with inner product
$$\langle f|g\rangle = \int_{\mathbb{R}^D}f(x)g(x)\,d^Dx.$$
Let $\mu$ be a probability measure on $\mathbb{R}^d$. Then $L^2(\mu)$ is the space of functions $f$ such that $\int_{\mathbb{R}^d}|f(x)|^2\mu(dx) < \infty$; that is, the space of functions $f$ such that if $X\sim\mu$, then $f(X)$ has finite second moment. The inner product is $\langle f|g\rangle = \mathbb{E}_{X\sim\mu}[f(X)g(X)]$. In this formalism, the expectation operator is simply the inner product with the constant-1 vector:
$$\mathbb{E}_{X\sim\mu}[f(X)] = \mathbb{E}_{X\sim\mu}[1\cdot f(X)] = \langle 1|f\rangle.$$
We will mostly study $L^2(\mu)$ in the case where $\mu$ is a Gaussian distribution $\mathcal{N}(0,\Sigma)$; those cases are called Gaussian spaces.

We will concern ourselves mainly with integral kernel operators over $L^2(\mu)$. An integral kernel operator is a linear operator of the type $K : L^2(\mu)\to L^2(\mu)$, defined using a kernel function $K : \mathbb{R}^d\times\mathbb{R}^d\to\mathbb{R}$ by
$$K|v\rangle = |w\rangle: \qquad w(x) = \int_{\mathbb{R}^d}K(x,y)\,v(y)\,\mu(dy).$$
Solving for the eigensystem of $K$ is equivalent to diagonalizing the kernel operator, so that
$$K = \sum_{n=0}^{\infty}\lambda_n|w_n\rangle\langle w_n|$$
for some orthonormal basis $|w_n\rangle$. As an example, the dot-product kernel in $\mathbb{R}^1$ is
$$K(x,y) = \sum_{n=0}^{\infty}\frac{c_n}{n!}(xy)^n.$$
In operator form, it is equivalent to
$$K = \sum_{n=0}^{\infty}\frac{c_n(2n-1)!!}{n!}|v_n\rangle\langle v_n|,$$
where we define the vectors corresponding to the monomial functions:
$$|v_n\rangle, \qquad v_n(x) = \frac{x^n}{\sqrt{(2n-1)!!}}.$$
The factor of $\sqrt{(2n-1)!!}$ is necessary to make the vectors unit-length: $\langle v_n|v_n\rangle = 1$. The problem with this representation is that, while it looks like a diagonalization, it is not: the vectors are unit-length but not orthogonal to each other. Diagonalizing the kernel is impossible in general, but it is possible in certain limiting cases. Our main theoretical work in this paper is just working out these cases.

G.5 1D GAUSSIAN SPACE

Let $\mu(dx) = (2\pi)^{-1/2}e^{-x^2/2}dx$ be the standard Gaussian distribution on $\mathbb{R}^1$. We have the 1D Gaussian space $L^2(\mu)$. The differentiation operator $D_x$ maps $f(x)$ to $f'(x)$ and, by integration by parts, has adjoint
$$D_x^* = -D_x + x.$$
Therefore, we can define the Ornstein–Uhlenbeck (OU) operator
$$L := -D_x^2 + xD_x = D_x^* D_x.$$
Then $L$ is self-adjoint and positive semidefinite on $L^2(\mu)$. The eigenvector equation for $L$ is
$$L|v_n\rangle = \lambda_n|v_n\rangle \;\Longrightarrow\; v_n''(x) - xv_n'(x) + \lambda_n v_n(x) = 0.$$
This is just Hermite's differential equation, with solutions given by the probabilist's Hermite polynomials $\mathrm{He}_n$. They are orthogonal in $L^2(\mu)$, with
$$\mathbb{E}_\mu[\mathrm{He}_m(X)\mathrm{He}_n(X)] = n!\,\delta_{mn},$$
but not normalized. They are normalized by
$$h_n(x) := \frac{\mathrm{He}_n(x)}{\sqrt{n!}}, \qquad |h_n\rangle\in L^2(\mu).$$
Thus we see that the normalized Hermite polynomials are the eigensystem of the OU operator in Gaussian space:
$$L|h_n\rangle = n|h_n\rangle, \qquad L = \sum_{n=0}^{\infty}n\,|h_n\rangle\langle h_n|.$$
Since the OU operator is the simplest way to make a self-adjoint operator out of the differentiation operator $D_x$, it is a natural object to consider.
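These orthogonality facts are easy to verify numerically; a small Monte Carlo sketch using NumPy's probabilist's Hermite module (the sample size is our choice):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from math import factorial

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)  # X ~ N(0, 1)

def h(n, xs):
    """Normalized probabilist's Hermite polynomial h_n = He_n / sqrt(n!)."""
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    return hermeval(xs, coeffs) / np.sqrt(factorial(n))

# E[h_m h_n] = delta_mn under the standard Gaussian:
print(np.mean(h(3, x) * h(3, x)))  # ~ 1.0
print(np.mean(h(3, x) * h(5, x)))  # ~ 0.0
```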
G.6 MULTIDIMENSIONAL GAUSSIAN SPACE

The Gaussian space over $\mathbb{R}^1$ can be generalized to multiple dimensions. Let $\mu_d = \mathcal{N}(0,I_d)$ be the standard Gaussian distribution on $\mathbb{R}^d$. The corresponding OU operator is
$$L = -\Delta + x\cdot\nabla = \sum_{i=1}^{d}\left(-\partial_{x_i}^2 + x_i\partial_{x_i}\right).$$
That is, we can write $L$ as a sum of one-dimensional OU operators, $L = \sum_{i=1}^{d}L_i$. Note that the operator acts on the dimensions $x_1,\ldots,x_d$ without interference between them:
$$L(f_1(x_1)\cdots f_d(x_d)) = (L_1f_1)(x_1)f_2(x_2)\cdots f_d(x_d) + \cdots + f_1(x_1)\cdots f_{d-1}(x_{d-1})(L_df_d)(x_d).$$
Because of this, the eigensystem of $L$ is obtained directly by taking the product of the eigensystems of $L_1,\ldots,L_d$. For a multi-index $\alpha = (\alpha_1,\ldots,\alpha_d)\in\mathbb{N}^d$, set
$$\mathrm{He}_\alpha(x) := \prod_{i=1}^{d}\mathrm{He}_{\alpha_i}(x_i), \qquad h_\alpha(x) := \prod_{i=1}^{d}\frac{\mathrm{He}_{\alpha_i}(x_i)}{\sqrt{\alpha_i!}}, \qquad |\alpha| := \sum_i\alpha_i.$$
Then $L$ is diagonalized as
$$L = \sum_{\alpha\in\mathbb{N}^d}|\alpha|\,|h_\alpha\rangle\langle h_\alpha|.$$
For each $n = 0, 1, 2, \ldots$, the eigenspace $E_n := \mathrm{span}\{|h_\alpha\rangle : |\alpha| = n\}$ has multiplicity
$$\dim E_n = \#\{\alpha\in\mathbb{N}^d : |\alpha| = n\} = \binom{n+d-1}{d-1}.$$
This degeneracy is due to symmetry. For any orthogonal transformation $M : \mathbb{R}^d\to\mathbb{R}^d$ and any smooth $f : \mathbb{R}^d\to\mathbb{R}$, we have $L(f\circ M) = (Lf)\circ M$. Consequently, if $|v_n\rangle$ is a solution to the eigenvector equation $L|v_n\rangle = \lambda_n|v_n\rangle$, then so are its spherical rotations, and any vector sum of them has the same eigenvalue $\lambda_n$. This is similar to the case of solid harmonics in $\mathbb{R}^d$: there, $\Delta$ is also spherically symmetric, and its eigenspaces are accordingly degenerate.

G.7 UNITARY TRANSFORMATIONS OF GAUSSIAN SPACES

Consider two Gaussian spaces $L^2(\mathcal{N}(0,I_d))$ and $L^2(\mathcal{N}(0,\Sigma))$. One of them is standardized; the other is not. Since many properties of the Hermite polynomials are derived over the standard Gaussian space, we would like to translate statements in $L^2(\mathcal{N}(0,I_d))$ into statements in $L^2(\mathcal{N}(0,\Sigma))$. By the geometric viewpoint, as long as a statement is cast in the language of Hilbert space geometry, it will continue to hold true after any unitary transformation. Now, suppose we have a matrix $M$ such that $MM^\top = \Sigma$; then
$$\mathbb{E}_{X\sim\mathcal{N}(0,\Sigma)}[f(X)g(X)] = \mathbb{E}_{Z\sim\mathcal{N}(0,I_d)}[f(MZ)g(MZ)].$$
Thus, we have a unitary transformation $V : L^2(\mathcal{N}(0,\Sigma))\to L^2(\mathcal{N}(0,I_d))$ defined by
$$(Vf)(x) = f(Mx), \qquad (V^*f)(x) = f(M^{-1}x).$$
Note that the matrix $M$ carries an ambiguity: if $MM^\top = \Sigma$, then for any orthogonal matrix $O$ we also have $(MO)(MO)^\top = \Sigma$. Thus, we obtain an entire family of unitary transformations, one per orthogonal matrix $O$. We will exploit this degree of freedom later.

Vectors can be transformed; so can operators. An operator $A$ is transformed to $VAV^*$, so that $(VAV^*)(V|v\rangle) = V(A|v\rangle)$; that is, the square formed by $A$ on $L^2(\mathcal{N}(0,\Sigma))$, the two copies of $V$, and $VAV^*$ on $L^2(\mathcal{N}(0,I_d))$ commutes. If an operator is represented as $\sum_n a_n|w_n\rangle\langle v_n|$, then its transform is represented as $\sum_n a_nV|w_n\rangle\langle v_n|V^*$. In particular, diagonalized operators stay diagonalized:
$$A = \sum_n\lambda_n|v_n\rangle\langle v_n| \;\Longrightarrow\; VAV^* = \sum_n\lambda_nV|v_n\rangle\langle v_n|V^*.$$
Kernels are transformed in the same way as functions, since
$$\mathbb{E}_{X\sim\mathcal{N}(0,\Sigma)}\big[\mathbb{E}_{Y\sim\mathcal{N}(0,\Sigma)}[f(X)K(X,Y)g(Y)]\big] = \mathbb{E}_{X\sim\mathcal{N}(0,I_d)}\big[\mathbb{E}_{Y\sim\mathcal{N}(0,I_d)}[f(MX)K(MX,MY)g(MY)]\big].$$
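A quick numerical illustration of this change of variables (all names and sizes ours; the two sides agree up to Monte Carlo error):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
Sigma = A @ A.T                          # an arbitrary covariance
U, gam, _ = np.linalg.svd(Sigma)
M = U * np.sqrt(gam)                     # M M^T = Sigma

f = lambda x: x[:, 0] * x[:, 1]          # two arbitrary test functions
g = lambda x: x[:, 0] + x[:, 2] ** 2

X = rng.multivariate_normal(np.zeros(3), Sigma, size=500_000)
Z = rng.standard_normal((500_000, 3))
print(np.mean(f(X) * g(X)))              # <f|g> in L2(N(0, Sigma))
print(np.mean(f(Z @ M.T) * g(Z @ M.T)))  # <Vf|Vg> in L2(N(0, I_3))
```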
G.8 SOME USEFUL PROPERTIES OF HERMITE POLYNOMIALS

Let $\mu$ be the standard normal distribution on $\mathbb{R}^1$. For each $n\in\mathbb{N}$, define the normalized monomial vector $|v_n\rangle$ by the function $\frac{x^n}{\sqrt{(2n-1)!!}}$, and define the normalized probabilist's Hermite polynomial vector $|h_n\rangle = \frac{1}{\sqrt{n!}}\mathrm{He}_n(x)$. Both are sequences of unit vectors in $L^2(\mu)$. However, the sequence $|h_0\rangle, |h_1\rangle, \ldots$ is orthonormal, while the sequence $|v_0\rangle, |v_1\rangle, \ldots$ is not orthonormal and is, in fact, increasingly ill-conditioned. By looking up a standard reference table, we have the following basic properties:

1. $|h_0\rangle, |h_2\rangle, |h_4\rangle, \ldots$ are obtained by Gram–Schmidt orthonormalization of $|v_0\rangle, |v_2\rangle, |v_4\rangle, \ldots$
2. $|h_1\rangle, |h_3\rangle, |h_5\rangle, \ldots$ are obtained by Gram–Schmidt orthonormalization of $|v_1\rangle, |v_3\rangle, |v_5\rangle, \ldots$
3. $$|h_n\rangle = \sqrt{n!}\sum_{m=0}^{\lfloor n/2\rfloor}\frac{(-1)^m\sqrt{(2n-4m-1)!!}}{2^m m!\,(n-2m)!}\,|v_{n-2m}\rangle$$
4. $|v_n\rangle = \sum_{m=0}^{\infty}M_{nm}|h_m\rangle$, where $M$ is an invertible lower-triangular matrix satisfying
$$M_{n,n-2m} = \frac{n!}{\sqrt{(2n-1)!!}}\,\frac{1}{2^m m!\,\sqrt{(n-2m)!}}, \qquad M^{-1}_{n,n-2m} = \frac{\sqrt{n!}\,(-1)^m\sqrt{(2n-4m-1)!!}}{2^m m!\,(n-2m)!}$$
for $0\leq 2m\leq n$.
5. $$\langle v_n|v_m\rangle = \begin{cases}\dfrac{(n+m-1)!!}{\sqrt{(2n-1)!!\,(2m-1)!!}} & n\equiv m \bmod 2 \\[4pt] 0 & \text{else}\end{cases}$$
In particular,
$$\langle v_n|v_{n-4}\rangle = \frac{\sqrt{(2n-1)(2n-3)(2n-5)(2n-7)}}{(2n-1)(2n-3)}, \qquad \langle v_n|v_{n-2}\rangle = \frac{\sqrt{(2n-1)(2n-3)}}{2n-1}, \qquad \langle v_n|v_n\rangle = 1,$$
$$\langle v_n|v_{n+2}\rangle = \frac{2n+1}{\sqrt{(2n+1)(2n+3)}}, \qquad \langle v_n|v_{n+4}\rangle = \frac{(2n+1)(2n+3)}{\sqrt{(2n+1)(2n+3)(2n+5)(2n+7)}}.$$

A useful operator is $M_x$, the multiply-by-$x$ operator, which, using the multiplication formula $x\,\mathrm{He}_n = \mathrm{He}_{n+1} + n\,\mathrm{He}_{n-1}$, is given by
$$M_x = \sum_n\sqrt{n+1}\,|h_{n+1}\rangle\langle h_n| + \sqrt{n+1}\,|h_n\rangle\langle h_{n+1}|.$$
It is equivalently expressed as $M_x = a + a^*$, where $a = \sum_n\sqrt{n+1}\,|h_n\rangle\langle h_{n+1}|$ is the lowering ladder operator. Furthermore, because $\mathrm{He}_n' = n\,\mathrm{He}_{n-1}$, the lowering ladder operator is the differentiation operator $D_x$. Therefore, $a^* = D_x^* = M_x - a = M_x - D_x$.

H PROOF OF THEOREM 1

In this appendix, we prove Theorem 1, which states that the eigensystem of a Gaussian kernel $K_\sigma(x,x') = e^{-\frac{1}{2\sigma^2}\|x-x'\|^2}$ on a multidimensional anisotropic Gaussian measure $\mu = \mathcal{N}(0,\Sigma)$ approaches the Hermite eigenstructure given in Definition 4 as $\sigma\to\infty$. To show this, we will obtain exact expressions for the kernel eigensystem, then simply show that the eigenvalues and eigenvectors are those predicted by the HEA up to terms that vanish as $\sigma$ grows. We proceed in three stages of successive generality:
1. We solve the problem for $L^2(\mu)$, where $\mu = \mathcal{N}(0,1)$ is the standard Gaussian. This is the hardest stage.
2. We take an outer product of measures to solve the problem for $L^2(\mu)$, where $\mu = \mathcal{N}(0,I_d)$.
3. We solve the problem for $L^2(\mu)$, where $\mu = \mathcal{N}(0,\Sigma)$, by taking an opportune unitary transformation from $L^2(\mathcal{N}(0,\gamma))$ to $L^2(\mathcal{N}(0,1))$.

H.1 PART 1: THE 1D UNIT GAUSSIAN

We solve the problem for $L^2(\mu)$, where $\mu = \mathcal{N}(0,1)$ is the standard Gaussian. Our approach here is essentially the one used by Zhu et al. (1997). We first quote the Mehler formula (Mehler, 1866):
$$\frac{1}{\sqrt{1-\rho^2}}\exp\!\left(-\frac{\rho^2(x^2+y^2) - 2\rho xy}{2(1-\rho^2)}\right) = \sum_{n=0}^{\infty}\rho^n h_n(x)h_n(y) \qquad \text{for any }\rho\in[0,1),$$
where $h_k$ is the normalized Hermite polynomial. We multiply by $(1-\rho)$ and substitute $\rho = e^{-t}$ to make the form more beautiful:
$$\sqrt{\tanh(t/2)}\,\exp\!\left(-\frac{e^{-t}(x^2+y^2) - 2xy}{4\sinh t}\right) = \sum_{n=0}^{\infty}(1-e^{-t})e^{-nt}h_n(x)h_n(y).$$
Define an integral kernel operator $K_t : L^2(\mu)\to L^2(\mu)$ using the expression on the left:
$$(K_tf)(x) = \int_{\mathbb{R}^1}\sqrt{\tanh(t/2)}\,\exp\!\left(-\frac{e^{-t}(x^2+y^2) - 2xy}{4\sinh t}\right)f(y)\,\mu(dy).$$
The Mehler formula then states that $K_t$ is diagonalized as
$$K_t = \sum_{n=0}^{\infty}(1-e^{-t})e^{-nt}|h_n\rangle\langle h_n|.$$
Thus, we have successfully obtained a one-parameter family of integral kernel operators, all diagonalized in the Hermite basis. Each operator has a Gaussian kernel. However, none of them matches the form that we want:
$$K(x,y) = e^{-\frac{1}{2\sigma^2}(x-y)^2}.$$
In order to reach such a form, we construct $\{T_\tau : \tau\in\mathbb{R}\}$, a one-parameter family of unitary transformations of $L^2(\mu)$. Then we solve for $\tau, t$ such that $T_\tau K_tT_\tau^*$ has the desired kernel form. For any $\tau\in\mathbb{R}$, define the squeezing operator $T_\tau : L^2(\mu)\to L^2(\mu)$ by
$$(T_\tau f)(x) = e^{\tau/2}\,e^{-\frac{e^{2\tau}-1}{4}x^2}f(e^\tau x).$$
This is unitary by direct integration and change of variables. It is a one-parameter family, and it satisfies
$$T_\tau^* = T_{-\tau}, \qquad T_{\tau_1}\circ T_{\tau_2} = T_{\tau_1+\tau_2}.$$
We can interpret this one-parameter family as a continuous “rotation” in $L^2(\mu)$.
Except that, because $L^2(\mu)$ has infinitely many dimensions, the rotation need not return to its starting point. Concretely, consider what happens when the 0th Hermite vector $|h_0\rangle$ is rotated. The family of vectors $T_\tau|h_0\rangle$ does not turn back towards $|h_0\rangle$ again. Instead, it continues to rotate further and further away, with
$$\langle h_0|T_\tau|h_0\rangle = \frac{1}{\sqrt{\cosh\tau}}\to 0$$
as $\tau\to\infty$. Such behavior is possible because there are infinitely many dimensions to rotate into. The one-parameter family $T_\tau$ can be written as $T_\tau = e^{\tau A}$, where $A$ is the infinitesimal generator of the family:
$$A = \partial_\tau\big|_{\tau=0}T_\tau = xD_x + \tfrac{1}{2}(1-x^2) = \mathrm{He}_1D_x - \tfrac{1}{2}\mathrm{He}_2.$$
Using the multiplication and differentiation formulas for the Hermite polynomials, we have
$$A\,\mathrm{He}_n = -\tfrac{1}{2}\big(\mathrm{He}_{n+2} - n(n-1)\mathrm{He}_{n-2}\big), \qquad A^2\mathrm{He}_n = \tfrac{1}{4}\big(\mathrm{He}_{n+4} - 2(n^2+n+1)\mathrm{He}_n + n(n-1)(n-2)(n-3)\mathrm{He}_{n-4}\big).$$
Thus,
$$\langle h_n|A|h_n\rangle = 0, \qquad \langle h_n|A^2|h_n\rangle = -\frac{n^2+n+1}{2}.$$
Therefore, the angle between $|h_n\rangle$ and $T_\tau|h_n\rangle$, in the limit of small $\tau$, is
$$\arccos\langle h_n|e^{\tau A}|h_n\rangle = \sqrt{\frac{n^2+n+1}{2}}\,\tau + O(\tau^2).$$
Equivalently, since
$$A|h_n\rangle = -\tfrac{1}{2}\sqrt{(n+1)(n+2)}\,|h_{n+2}\rangle + \tfrac{1}{2}\sqrt{n(n-1)}\,|h_{n-2}\rangle,$$
we see that the operator rotates $|h_n\rangle$ in the directions of $|h_{n+2}\rangle$ and $|h_{n-2}\rangle$ simultaneously. This explains our previous statement that $e^{\tau A}$ rotates $|h_n\rangle$ further and further away, towards $|h_\infty\rangle$.

Now, for any two functions $f, g$, we have
$$\langle f|T_\tau KT_\tau^*|g\rangle = \iint (T_{-\tau}f)(x)K(x,y)(T_{-\tau}g)(y)\,\mu(dx)\mu(dy) = \iint f(u)K_\tau(u,v)g(v)\,\mu(du)\mu(dv),$$
where the transformed kernel is
$$K_\tau(x,y) = e^{\tau}e^{-\frac{e^{2\tau}-1}{4}(x^2+y^2)}K(e^\tau x, e^\tau y).$$
Consider the previously solved case $K = K_t$. Plugging it in, we find that $T_\tau K_tT_\tau^*$ is an integral kernel operator with kernel function
$$K_{t,\tau}(x,y) = e^{\tau}\sqrt{\tanh(t/2)}\,\exp\!\left(-\left(\frac{e^{2\tau}-1}{4} + \frac{e^{-t}e^{2\tau}}{4\sinh t}\right)(x^2+y^2) + \frac{e^{2\tau}}{2\sinh t}\,xy\right).$$
To match the target form $K(x,y) = \exp(-(x-y)^2/(2\sigma^2))$, we require
$$\frac{e^{2\tau}}{2\sinh t} = \frac{1}{\sigma^2}, \qquad \frac{e^{2\tau}-1}{4} + \frac{e^{-t}e^{2\tau}}{4\sinh t} = \frac{1}{2\sigma^2}.$$
Eliminating $e^{2\tau}$ and solving for $t$, we have $\sigma^2 = e^t + e^{-t} - 2$, that is,
$$\sigma^2 = 4\sinh^2(t/2), \qquad t = 2\,\mathrm{arsinh}(\sigma/2).$$
Plugging back,
$$e^{2\tau} = \frac{2\sinh t}{\sigma^2} = \tanh(t/2)^{-1} = \sqrt{1+\frac{4}{\sigma^2}},$$
so the prefactor simplifies to $e^{\tau}\sqrt{\tanh(t/2)} = 1$, hence
$$(T_\tau K_tT_\tau^*)(x,y) = \exp\!\left(-\frac{(x-y)^2}{2\sigma^2}\right), \qquad t = 2\,\mathrm{arsinh}(\sigma/2), \quad \tau = \frac{1}{4}\ln\!\left(1+\frac{4}{\sigma^2}\right).$$
The diagonalization of $T_\tau K_tT_\tau^*$ is
$$T_\tau K_tT_\tau^* = \sum_{n=0}^{\infty}(1-e^{-t})e^{-nt}\,T_\tau|h_n\rangle\langle h_n|T_\tau^*.$$
Thus, the eigensystem is
$$\mathrm{eigensystem}(\mu, K) = \big\{\big((1-e^{-t})e^{-nt},\ T_\tau|h_n\rangle\big) : n = 0, 1, 2, \ldots\big\}.$$
In the $\sigma\to\infty$ limit, we have
$$t = 2\ln\sigma + 2/\sigma^2 + O(\sigma^{-4}), \qquad \tau = \sigma^{-2} - 2\sigma^{-4} + O(\sigma^{-6}).$$
Now, the Hermite eigensystem corresponding to $e^{-\frac{(x-y)^2}{2\sigma^2}}$ is $\mathrm{HE}(1, (c_n))$ with $c_n = \sigma^{-2n}$, and we see that the HEA is proven:
$$\frac{(1-e^{-t})e^{-nt}}{c_n} = 1 - (2n+2)\sigma^{-2} + O(\sigma^{-4}), \qquad \arccos\langle h_n|T_\tau|h_n\rangle = \sqrt{\frac{n^2+n+1}{2}}\,\sigma^{-2} + O(\sigma^{-4}).$$
We also see that, for any fixed $n$, the rate of convergence is on the order of $n\sigma^{-2} = n\,c_{\ell+1}/c_\ell$. We will show in the next few sections that this is a generic phenomenon. In general, if a kernel has coefficients $c_\ell$ decaying at a rate of $\epsilon$, then the $n$-th entry of the kernel's eigensystem converges to the corresponding $n$-th entry of the Hermite eigensystem at a rate of $O(\epsilon)$. The constant in $O(\epsilon)$ increases with $n$, so the convergence is not uniform: the higher-order entries converge more slowly.

H.2 PART 1 BONUS

We can diagonalize $K(x,y) = e^{xy/\sigma^2}$ in the same way:
$$K(x,y) = (1-2/\sigma^2)^{-1/2}K_{t,\tau}(x,y), \qquad t = \mathrm{arcosh}(\sigma^2/2), \quad \tau = \frac{1}{4}\ln\!\left(1-\frac{4}{\sigma^4}\right).$$
Thus, its eigensystem is
$$\mathrm{eigensystem}(\mu, K) = \big\{\big((1-2/\sigma^2)^{-1/2}(1-e^{-t})e^{-nt},\ T_\tau|h_n\rangle\big) : n = 0, 1, 2, \ldots\big\}.$$
In the $\sigma\to\infty$ limit,
$$t = 2\ln\sigma - \sigma^{-4} + O(\sigma^{-8}), \qquad \tau = -\sigma^{-4} + O(\sigma^{-8}),$$
yielding
$$\frac{(1-e^{-t})e^{-nt}}{c_n} = 1 + \left(n+\tfrac{1}{2}\right)\sigma^{-4} + O(\sigma^{-8}), \qquad \arccos\langle h_n|T_\tau|h_n\rangle = -\sqrt{\frac{n^2+n+1}{2}}\,\sigma^{-4} + O(\sigma^{-8}).$$
This case is special in that the convergence is on the order of $n\sigma^{-4} = n(c_{\ell+1}/c_\ell)^2$, which is faster by one order of magnitude than the generic case. To understand this, we directly expand the operators. The operator for $\exp(-(x-y)^2/2\sigma^2)$ has expansion
$$K_{\exp(-(x-y)^2/2\sigma^2)} = I + \sigma^{-2}\big(|h_1\rangle\langle h_1| - M_x^2\big) + O(\sigma^{-4}),$$
where we note that the multiplication operator $M_x$ is self-adjoint, with expression
$$M_x = \sum_n\sqrt{n+1}\,|h_{n+1}\rangle\langle h_n| + \sqrt{n+1}\,|h_n\rangle\langle h_{n+1}|.$$
Because $M_x^2$ is not diagonal in the Hermite basis, the operator $K_{\exp(-(x-y)^2/2\sigma^2)}$ is not diagonal at order $\sigma^{-2}$, and perturbation occurs at that order. In contrast, the operator for $\exp(xy/\sigma^2)$ has expansion
$$K_{\exp(xy/\sigma^2)} = I + \sigma^{-2}|h_1\rangle\langle h_1| + \sigma^{-4}\big(|h_2\rangle + 2^{-1/2}|h_0\rangle\big)\big(\langle h_2| + 2^{-1/2}\langle h_0|\big) + O(\sigma^{-6}).$$
Therefore, it is not diagonal at order $\sigma^{-4}$, and perturbation occurs at that order.

H.3 PART 2

We take a product to solve the problem for $L^2(\mu)$, where $\mu = \mathcal{N}(0,I_d)$. The kernel $K(x,y) = e^{-\frac{\|x-y\|^2}{2\sigma^2}}$ decomposes into a product of kernels:
$$K(x,y) = \prod_{i=1}^{d}e^{-\frac{(x_i-y_i)^2}{2\sigma^2}}.$$
Therefore, its kernel operator decomposes into a tensor product of kernel operators:
$$K = \bigotimes_{i=1}^{d}K_i = \sum_{\alpha\in\mathbb{N}^d}\prod_{i=1}^{d}(1-e^{-t})e^{-\alpha_it}\,T_\tau|h_{\alpha_i}\rangle\langle h_{\alpha_i}|T_\tau^* = \sum_{\alpha\in\mathbb{N}^d}(1-e^{-t})^de^{-|\alpha|t}\,T_\tau^{\otimes d}|h_\alpha\rangle\langle h_\alpha|(T_\tau^{\otimes d})^*.$$
So its eigensystem is
$$\big\{\big((1-e^{-t})^de^{-|\alpha|t},\ T_\tau^{\otimes d}|h_\alpha\rangle\big) : \alpha\in\mathbb{N}^d\big\}.$$
In the $\sigma\to\infty$ limit, using the previous result, each eigenvalue converges as
$$\frac{(1-e^{-t})^de^{-|\alpha|t}}{c_{|\alpha|}} = 1 - (2|\alpha|+2d)\sigma^{-2} + O(\sigma^{-4}),$$
and each eigenvector converges as
$$\arccos\langle h_\alpha|T_\tau^{\otimes d}|h_\alpha\rangle = \sqrt{\frac{\sum_{i=1}^{d}(\alpha_i^2+\alpha_i+1)}{2}}\,\sigma^{-2} + O(\sigma^{-4}).$$

H.4 PART 3

We solve the problem for $L^2(\mu)$, where $\mu = \mathcal{N}(0,\Sigma)$, by taking an opportune unitary transformation $V : L^2(\mathcal{N}(0,\Sigma))\to L^2(\mathcal{N}(0,I_d))$. As previously stated, if $MM^\top = \Sigma$, then $V : L^2(\mathcal{N}(0,\Sigma))\to L^2(\mathcal{N}(0,I_d))$ defined by
$$(Vf)(x) = f(Mx), \qquad (V^*f)(x) = f(M^{-1}x)$$
is unitary. Now, define the operator using the kernel function $K(x,y) = e^{-\frac{\|x-y\|^2}{2\sigma^2}}$. Under the unitary transformation, its kernel becomes
$$(VKV^*)(x,y) = K(Mx, My) = e^{-\frac{(x-y)^\top M^\top M(x-y)}{2\sigma^2}}.$$
Therefore, the kernel decomposes if $M^\top M$ is diagonal. This can be arranged by taking the SVD of the covariance matrix, $\Sigma = U\Gamma U^\top$ with $\Gamma = \mathrm{diag}(\gamma_1,\ldots,\gamma_d)$; we can then set $M = U\Gamma^{1/2}$, so that $VKV^*$ has kernel
$$\prod_{i=1}^{d}e^{-\frac{(x_i-y_i)^2}{2(\sigma/\sqrt{\gamma_i})^2}}.$$
Thus, $VKV^*$ diagonalizes as
$$VKV^* = \sum_{\alpha\in\mathbb{N}^d}\prod_{i=1}^{d}(1-e^{-t_i})e^{-\alpha_it_i}\,\big(\otimes_{i=1}^{d}T_{\tau_i}|h_{\alpha_i}\rangle\big)\big(\otimes_{i=1}^{d}T_{\tau_i}|h_{\alpha_i}\rangle\big)^*,$$
where $t_i, \tau_i$ are defined by
$$t_i = 2\,\mathrm{arsinh}\!\left(\frac{\sigma}{2\sqrt{\gamma_i}}\right), \qquad \tau_i = \frac{1}{4}\ln\!\left(1+\frac{4\gamma_i}{\sigma^2}\right).$$
Now, converting back to $K$, we obtain
$$K = \sum_{\alpha\in\mathbb{N}^d}\prod_{i=1}^{d}(1-e^{-t_i})e^{-\alpha_it_i}\,\big(V^*\otimes_{i=1}^{d}T_{\tau_i}|h_{\alpha_i}\rangle\big)\big(V^*\otimes_{i=1}^{d}T_{\tau_i}|h_{\alpha_i}\rangle\big)^*.$$
In the limit $\sigma\to\infty$, the eigenvalues converge as
$$\prod_{i=1}^{d}(1-e^{-t_i})e^{-\alpha_it_i} = \left(\sigma^{-2|\alpha|}\prod_i\gamma_i^{\alpha_i}\right)\big(1 - (2|\alpha|+2d)\sigma^{-2} + O(\sigma^{-4})\big)$$
at a rate of $\sigma^{-2}$. As before, the eigenvectors converge to $V^*|h_\alpha\rangle$ at a rate of $\sqrt{\frac{\sum_{i=1}^{d}(\alpha_i^2+\alpha_i+1)}{2}}\,\sigma^{-2}$. Since
$$(V^*h_\alpha)(x) = h_\alpha(\Gamma^{-1/2}U^\top x) = h_\alpha^{(\Sigma)}(x),$$
the theorem is proven fully. Similarly to the case in Section H.2, the kernel $K(x,y) = e^{x^\top y/\sigma^2}$ converges to the Hermite eigensystem at a rate of $O(\sigma^{-4})$, one order of magnitude faster.
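The closed form from Section H.1 is easy to sanity-check numerically. Below is a minimal sketch (the sample size and width are our choices) comparing the empirical spectrum of the kernel matrix against $\lambda_n = (1-e^{-t})e^{-nt}$:

```python
import numpy as np

sigma, N = 4.0, 6000
rng = np.random.default_rng(0)
x = rng.standard_normal(N)

# Empirical operator spectrum: eigenvalues of K / N approximate those of
# the integral operator on L2(N(0, 1)).
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * sigma ** 2))
emp = np.sort(np.linalg.eigvalsh(K / N))[::-1]

# Exact spectrum from Section H.1: lambda_n = (1 - e^{-t}) e^{-nt}.
t = 2 * np.arcsinh(sigma / 2)
exact = (1 - np.exp(-t)) * np.exp(-t * np.arange(8))
print(np.round(emp[:8], 5))
print(np.round(exact, 5))
```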
I PROOF OF THEOREM 2 IN ONE DIMENSION

In this appendix, we prove Theorem 2 in the special case of one dimension. The general case is proven in Section J. The proof of this case is easier than the general case since there is no multiplicity, and the ideas in the proof will be reused in the general case.

Section I.1 is a reference sheet quoting several theorems we need for the proofs; the reader can skip it and refer back when needed. Section I.2 states Theorem 2 rigorously as Theorem 5, then shows that it is a special case of a more general theorem (Theorem 6). The next section proves the general case: Section I.3.1 shows that the eigenvalues of the kernel converge to the desired form with relative error decaying at a rate of $O(\epsilon)$, and Section I.3.2 leverages this to show that the eigenvectors also converge to the desired form at a rate of $O(\epsilon)$.

I.1 SETUP

We need to quote some big-name theorems for later use. We will need to cut the spectrum of an operator into segments, each falling within an interval. The following theorem allows us to construct tight enough bounds on the intervals. It is a special case of the general Courant–Fischer–Weyl min-max principle, strong enough for our purpose: our special case avoids the part about the essential spectrum, which makes the general statement inconvenient to use.

Theorem 3 (Courant–Fischer–Weyl min-max principle). Let $A$ be a compact, positive semidefinite operator over a Hilbert space $\mathcal{H}$. Let its eigenvalues be enumerated as $\lambda_1\geq\lambda_2\geq\cdots\geq 0$; there may be finitely or infinitely many. Then
$$\lambda_k = \max_{\substack{M\subset\mathcal{H}\\\dim M = k}}\;\min_{\substack{x\in M\\\|x\|=1}}\langle x|A|x\rangle \qquad (51)$$
$$\phantom{\lambda_k} = \min_{\substack{M\subset\mathcal{H}\\\dim M = k-1}}\;\max_{\substack{x\in M^\perp\\\|x\|=1}}\langle x|A|x\rangle \qquad (52)$$

Proof. The second equation is (Teschl, 2009, Theorem 4.10). The first equation is proven by essentially the same technique as in the finite-dimensional case. Let $v_1, v_2, \ldots$ be the eigenvectors of $A$. If we set $M$ to be the span of $\{v_1,\ldots,v_k\}$, then we have
$$\lambda_k = \min_{\substack{x\in M\\\|x\|=1}}\langle x|A|x\rangle,$$
so it remains to prove the other half. Any subspace with $\dim M = k$ must have a nontrivial intersection with the closed span of $\{v_k, v_{k+1},\ldots\}$; therefore, there exists some unit vector $x\in M$ with decomposition $x = \sum_{i\geq k}a_iv_i$. With that, we have
$$\langle x|A|x\rangle = \sum_{i\geq k}|a_i|^2\lambda_i \leq \sum_{i\geq k}|a_i|^2\lambda_k = \lambda_k. \qquad\square$$

The min-max principle has a corollary that we will use, more convenient than quoting the full min-max principle.

Corollary 1 (Cauchy interlacing law). For any $n\times n$ Hermitian matrix $A_n$ with top-left $(n-1)\times(n-1)$ minor $A_{n-1}$,
$$\lambda_{i+1}(A_n) \leq \lambda_i(A_{n-1}) \leq \lambda_i(A_n) \qquad \text{for all } 1\leq i < n.$$
(Tao, 2012, Eq. 1.75)

After bounding the eigenvalues, we will use a second theorem to bound the eigenvectors. However, we need something more, because we must bound the eigenvector rotations of segments of the spectrum, and a segment may be badly separated within itself even though it is well-separated from the rest of the spectrum. Therefore, we need to handle the rotations of reducing subspaces. A reducing subspace for an operator $A$ is a closed subspace $V$ such that $A(V)\subset V$. For self-adjoint operators, the reducing subspaces are direct sums of eigenspaces; in particular, the span of an eigenvector is a reducing subspace. Given a segment of the spectrum $\Lambda\subset\sigma(A)$, we define $V_\Lambda$ as the reducing subspace of $\Lambda$. For example, if $\Lambda = \{E\}$, then $V_\Lambda$ is the closed span of all eigenvectors with eigenvalue $E$. We also define $P_\Lambda$ as the orthogonal projector onto $V_\Lambda$.

The following theorem shows that if a segment of the operator spectrum is separated by $\Omega(1)$ from the rest of the spectrum, then its corresponding reducing subspace rotates by only $O(\epsilon)$ under an operator perturbation of size $O(\epsilon)$.

Theorem 4 (Davis–Kahan $\sin\Theta$ theorem (Davis & Kahan, 1970)). Let $A, B$ be self-adjoint operators such that
• $\sigma(A)$ is partitioned into $\Lambda_0, \Lambda_1$;
• $\sigma(B)$ is partitioned into $\Gamma_0, \Gamma_1$;
• the spaces $V_{\Lambda_0}, V_{\Gamma_0}$ have the same dimension, and similarly for $V_{\Lambda_1}, V_{\Gamma_1}$;
• $\Lambda_0$ is contained in an interval $[x, y]$;
• $\Gamma_1$ is disjoint from the enlarged interval $(x-\delta, y+\delta)$.
Then there exists a self-adjoint “angle” operator $\Theta$ such that the rotation operator $\begin{pmatrix}\cos\Theta & \sin\Theta\\ -\sin\Theta & \cos\Theta\end{pmatrix}$ rotates $V_{\Lambda_i}$ to $V_{\Gamma_i}$ for $i = 0, 1$. Furthermore, the angle operator satisfies
$$\|\sin\Theta\| \leq \frac{1}{\delta}\|A-B\| \qquad (53)$$
for any unitarily invariant norm $\|\cdot\|$.

Intuitively, the operator $\Theta$ is just a diagonal matrix of angles in a suitable orthonormal basis. The rotation operator performs simultaneous rotations in many (potentially infinitely many) 2-dimensional planes, such that the two reducing subspaces of $A$ are rotated to the two reducing subspaces of $B$.

I.2 REDUCTION TO A GENERAL CASE

We clean up the form of Theorem 2 by performing a few WLOGs and reductions into Theorem 5, so that we can deduce the theorem as a corollary of the more general Theorem 6. Alternatively, the theorem can be proven by proving the more general, multidimensional case; this is done in Section J. However, it may be worthwhile to study the following special case before reading the more general one, since most of the proof ideas are already present. We begin with a convenient definition.

Definition 5 (fast-decay). A real-valued sequence $c_0, c_1, \ldots$ is fast-decaying iff there exists a number $\epsilon\in[0,1)$ such that $c_{n+1}\leq\epsilon c_n$ for all $n\in\mathbb{N}$.

As in the proof of Theorem 1 in Section H, we can perform a unitary transform of $L^2(\mathcal{N}(0,\gamma))$ to $L^2(\mathcal{N}(0,1))$. Next, we define $a_n = \frac{(2n-1)!!}{n!}c_n$, so that
$$K = \sum_n a_n|v_n\rangle\langle v_n|.$$
By Stirling's approximation,
$$\frac{(2n-1)!!}{n!} = \frac{1}{2^n}\binom{2n}{n} = \frac{2^n}{\sqrt{\pi n}}\big(1+O(1/n)\big).$$
Therefore, if $c_n$ is fast-decaying with parameter $\epsilon$, then $a_n$ is fast-decaying with parameter $2\epsilon$, up to a subleading $1/\sqrt{n}$ prefactor. So we can study the kernel $\sum_n a_n|v_n\rangle\langle v_n|$ with the fast-decaying condition imposed directly on $a_n$; this makes the notation cleaner. Because $\langle v_n|h_n\rangle = \sqrt{\frac{n!}{(2n-1)!!}}$ (see Section G.8), convergence to $c_n$ is equivalent to convergence to $a_n|\langle v_n|h_n\rangle|^2$. With that, we can restate the theorem we wish to prove more rigorously:

Theorem 5 (The HEA holds for a fast-decaying kernel on 1D Gaussian measure). Let $\mu = \mathcal{N}(0,1)$ be the standard Gaussian measure, and let
$$K = \sum_{n=0}^{\infty}a_n|v_n\rangle\langle v_n|$$
be a dot-product kernel with fast-decaying coefficients $a_n$ with parameter $\epsilon$. Then for any $n\in\mathbb{N}$, there exists an eigensystem of $K$, written as $(\lambda_0(K), |v_0(K)\rangle), (\lambda_1(K), |v_1(K)\rangle), \ldots$, such that
$$\frac{\lambda_n(K)}{a_n|\langle v_n|h_n\rangle|^2} = 1 + O(\epsilon), \qquad |\angle(|v_n(K)\rangle, |h_n\rangle)| = O(\epsilon)$$
as $\epsilon\to 0$. Furthermore, the scaling factor in $O(\epsilon)$ depends only on the Gram matrix of the vectors $|v_0\rangle,\ldots,|v_n\rangle$.

The above statement may seem oddly convoluted with “for any $n\in\mathbb{N}$, there exists...”, but this is necessary because the rate of convergence is not uniform over the sequence. In general, higher-order eigenvectors converge more slowly than lower-order eigenvectors, which means the constant in the $O(\epsilon)$ terms is larger for larger $n$, and no uniform convergence rate exists. The scaling factor in $O(\epsilon)$ measures the speed of convergence of the $n$-th eigenpair. It is slower if the Gram matrix of the vectors $|v_0\rangle,\ldots,|v_n\rangle$ is ill-conditioned, because in this case the vectors are almost linearly dependent. Indeed, the Hankel moment matrices of most commonly used probability distributions, including the uniform, Gaussian, and exponential distributions, are exponentially ill-conditioned (Chen & Lawrence, 1999). Therefore, we should expect the constant in $O(\epsilon)$ to grow exponentially with $n$. Also, take note of the phrasing “there exists an eigensystem of $K$ denoted as $(\lambda_k(K), |v_k(K)\rangle)_k$”, because we allow $K$ to suffer multiplicity.
In these cases, the corresponding eigenspace has more than one dimension, and there is therefore freedom in choosing any orthonormal basis of the eigenspace as “the eigenvectors.” The theorem states that, despite the multiplicity, there exists a good choice of eigenvectors that make a small angle with the canonical eigenvectors $|\hat{v}_n\rangle$. This will become especially relevant in the proof of the multidimensional generalization in Section J.

Because the orthonormal basis $|h_0\rangle, |h_1\rangle, \ldots$ is obtained by Gram–Schmidt orthonormalization of $|v_0\rangle, |v_1\rangle, \ldots$, we can generalize the statement, this time with all epsilons and deltas in place for maximal rigor:

Theorem 6. Let:
1. $|v_0\rangle, |v_1\rangle, \ldots$ be a sequence of linearly independent unit vectors in a Hilbert space whose closed span is the whole space;
2. $|\hat{v}_0\rangle, |\hat{v}_1\rangle, \ldots$ be the sequence obtained by performing the Gram–Schmidt process on that sequence.
Then there exist sequences of constants $C_0, C_1, \ldots > 0$ and $\epsilon_0, \epsilon_1, \ldots > 0$ such that for all $n\in\mathbb{N}$, all $\epsilon\in[0,\epsilon_n]$, and all fast-decaying sequences $a_0, a_1, \ldots$ with parameter $\epsilon$, there exists an eigensystem of the kernel $K := \sum_{n=0}^{\infty}a_n|v_n\rangle\langle v_n|$, denoted $(\lambda_k(K), |v_k(K)\rangle)_k$, such that
$$\lambda_n(K)\in a_n|\langle\hat{v}_n|v_n\rangle|^2(1\pm C_n\epsilon) \qquad \text{and} \qquad |\angle(|v_n(K)\rangle, |\hat{v}_n\rangle)|\leq C_n\epsilon.$$
Furthermore, the scaling factor $C_n$ and the bound $\epsilon_n$ depend only on the Gram matrix of the vectors $|v_0\rangle,\ldots,|v_n\rangle$.

Intuitively restated, the theorem says that as $\epsilon\to 0$, the eigensystem of the fast-decaying kernel $\sum_{n=0}^{\infty}a_n|v_n\rangle\langle v_n|$ rotates towards the canonical eigensystem at a rate of $\epsilon$.

The proof has two parts. The first part uses the min-max principle and lowest-order operator perturbation theory (commonly used in quantum mechanics) to segment the spectrum of $K$ into small intervals of the form
$$\lambda_n(K)\in a_n|\langle\hat{v}_n|v_n\rangle|^2(1+O(\epsilon)).$$
In particular, since the $a_n$ are fast-decaying, it shows that the eigenvalues are exponentially separated. The second part applies Davis–Kahan twice, using this exponential separation of eigenvalues, to show that $|\sin\angle(|v_n(K)\rangle, |v_n(\tilde{K})\rangle)| = O(\epsilon)$ and $|\sin\angle(|v_n(\tilde{K})\rangle, |\hat{v}_n\rangle)| = O(\epsilon)$ for a cleverly chosen operator $\tilde{K}$.

Before we launch into the proof, we should look at a simple case that explains why this should be true. Consider the case of two dimensions, where we only have $|v_0\rangle, |v_1\rangle$. In this case, the kernel is $K = a_0|v_0\rangle\langle v_0| + a_1|v_1\rangle\langle v_1|$. Diagonalizing the kernel is equivalent to finding the major and minor axes of the contour ellipse defined by $\{|x\rangle : \langle x|K|x\rangle = 1\}$. This ellipse is the unique ellipse tangent to the four lines defined by $a_0|\langle v_0|x\rangle|^2 = 1$ and $a_1|\langle v_1|x\rangle|^2 = 1$.

Suppose we fix $a_0$ and let $a_1\to 0$. Then the lines of $a_0|\langle v_0|x\rangle|^2 = 1$ remain constant, but the lines of $a_1|\langle v_1|x\rangle|^2 = 1$ diverge to infinity. The ellipse degenerates to two parallel lines. Its minor semiaxis rotates to become perpendicular to the two parallel lines, i.e., parallel to $|v_0\rangle$. Therefore, the eigenpair converges to $(a_0|\langle\hat{v}_0|v_0\rangle|^2, |\hat{v}_0\rangle)$.

Suppose instead we fix $a_1$ and let $a_0\to\infty$. Then the lines of $a_1|\langle v_1|x\rangle|^2 = 1$ remain constant, but the lines of $a_0|\langle v_0|x\rangle|^2 = 1$ converge to the origin. The ellipse degenerates to two line segments. Its major semiaxis rotates to coincide with that line segment, i.e., parallel to the lines $a_0|\langle v_0|x\rangle|^2 = 1$, i.e., perpendicular to $|v_0\rangle$. Therefore, the eigenpair converges to $(a_1|\langle\hat{v}_1|v_1\rangle|^2, |\hat{v}_1\rangle)$.

Intuitively, we see that for a given $n$, the effect of all the $a_{n+1}, a_{n+2}, \ldots$ terms in the kernel is a small perturbation on the $n$-th eigenspace, negligible because the parallel planes diverge to infinity. The effect of $a_{n-1}, a_{n-2}, \ldots, a_0$ is a large but fixed perturbation, forcing the $n$-th eigenspace to be perpendicular to all of $|v_{n-1}\rangle,\ldots,|v_0\rangle$; but once that is done, their effects are also negligible because the parallel planes converge to the origin.
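This convergence is easy to observe numerically. Below is a minimal Monte Carlo sketch (all parameter choices ours): we build the kernel matrix of $\sum_n a_n|v_n\rangle\langle v_n|$ with $a_n = \epsilon^n$ on Gaussian samples and check that its fourth-largest eigenvector aligns with $h_3$ as $\epsilon$ shrinks.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from math import factorial

rng = np.random.default_rng(0)
N, n_max = 3000, 5
x = rng.standard_normal(N)

dfact = lambda n: float(np.prod(np.arange(2 * n - 1, 0, -2)))  # (2n-1)!!
V = np.stack([x**n / np.sqrt(dfact(n)) for n in range(n_max + 1)], axis=1)
h3 = hermeval(x, [0, 0, 0, 1]) / np.sqrt(factorial(3))         # h_3 = He_3/sqrt(3!)

for eps in (0.3, 0.1, 0.03):
    a = eps ** np.arange(n_max + 1)          # fast-decaying coefficients a_n = eps^n
    K = (V * a) @ V.T                        # kernel matrix sum_n a_n v_n(x) v_n(y)
    _, vecs = np.linalg.eigh(K / N)
    # Eigenvectors sample the eigenfunctions; a unit-L2(mu) function has
    # Euclidean norm ~ sqrt(N) on N samples, hence the normalization below.
    print(eps, abs(vecs[:, -4] @ h3) / np.sqrt(N))  # -> 1 as eps -> 0
```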
[Figure 22: diagonalizing the kernel in 2 dimensions in the $a_1/a_0\to 0$ limit. (a) The case of constant $a_0$ and $a_1\to 0$: the ellipse degenerates to two parallel lines. (b) The case of constant $a_1$ and $a_0\to\infty$: the ellipse degenerates to two repeated line segments.]

I.3 PROOF OF THEOREM 6

I.3.1 PART 1

To show: $\lambda_n(K) = a_n|\langle\hat{v}_n|v_n\rangle|^2(1+O(\epsilon))$.

Proof. If $a_n = 0$, then $\lambda_n, \lambda_{n+1}, \ldots = 0$, and the kernel $K$ becomes finite-rank, with range $\mathrm{Span}(|v_{0:n-1}\rangle)$, so the theorem becomes trivial. Otherwise, we assume $a_n > 0$, which means that $a_0,\ldots,a_n > 0$. We apply the min-max principle to obtain an upper bound of the form $\lambda_n(K)\leq a_n|\langle\hat{v}_n|v_n\rangle|^2(1+O(\epsilon))$ and a lower bound of the form $\lambda_n(K)\geq a_n|\langle\hat{v}_n|v_n\rangle|^2(1+O(\epsilon))$, thus completing the estimate.

For the upper bound, we use $V = \mathrm{Span}(|v_{0:n-1}\rangle)$; then
$$\lambda_n \leq \sup_{\substack{x\in\mathrm{Span}(|v_{0:n-1}\rangle)^\perp\\\|x\|=1}}\langle x|K|x\rangle = \sup_{\substack{x\in\mathrm{Span}(|\hat{v}_{n:\infty}\rangle)\\\|x\|=1}}\sum_{k=n}^{\infty}a_k|\langle x|v_k\rangle|^2$$
$$\leq \sup_{\substack{x\in\mathrm{Span}(|\hat{v}_{n:\infty}\rangle)\\\|x\|=1}}a_n|\langle x|v_n\rangle|^2 + \sup_{\substack{x\in\mathrm{Span}(|\hat{v}_{n:\infty}\rangle)\\\|x\|=1}}\sum_{k=n+1}^{\infty}a_k|\langle x|v_k\rangle|^2$$
$$= a_n|\langle\hat{v}_n|v_n\rangle|^2 + \lambda_{\max}\!\left(\sum_{k=n+1}^{\infty}a_k|v_k\rangle\langle v_k|\right) \leq a_n|\langle\hat{v}_n|v_n\rangle|^2 + \mathrm{Tr}\!\left(\sum_{k=n+1}^{\infty}a_k|v_k\rangle\langle v_k|\right)$$
$$= a_n|\langle\hat{v}_n|v_n\rangle|^2 + \sum_{k=n+1}^{\infty}a_k = a_n|\langle\hat{v}_n|v_n\rangle|^2 + a_nO(\epsilon) = a_n|\langle\hat{v}_n|v_n\rangle|^2(1+O(\epsilon)),$$
where the step $\lambda_{\max}\leq\mathrm{Tr}$ holds because all $a_k\geq 0$. Though unnecessary, we can write the upper bound concretely as $\lambda_n\leq a_n|\langle\hat{v}_n|v_n\rangle|^2(1+C\epsilon)$, where
$$C = \frac{1}{|\langle\hat{v}_n|v_n\rangle|^2}\sum_{k=n+1}^{\infty}\frac{a_k}{a_n\epsilon} \leq \frac{1}{|\langle\hat{v}_n|v_n\rangle|^2}\cdot\frac{1}{1-\epsilon}.$$
For the lower bound, we use $V = \mathrm{Span}(|v_{0:n}\rangle)$; then
$$\lambda_n \geq \inf_{\substack{x\in\mathrm{Span}(|v_{0:n}\rangle)\\\|x\|=1}}\langle x|K|x\rangle = \inf_{\substack{x\in\mathrm{Span}(|v_{0:n}\rangle)\\\|x\|=1}}\sum_{k=0}^{\infty}a_k|\langle x|v_k\rangle|^2 \geq \inf_{\substack{x\in\mathrm{Span}(|v_{0:n}\rangle)\\\|x\|=1}}\sum_{k=0}^{n}a_k|\langle x|v_k\rangle|^2.$$
This quantity is a standard problem in quadratic programming, with exact solution $\lambda_{\min}(G^{1/2}AG^{1/2})$, where $A = \mathrm{diag}(a_0,\ldots,a_n)$ and $G = (\langle v_i|v_j\rangle)_{i,j=0}^{n}$ is the Gram matrix. To see this, write $x = \sum_{j=0}^{n}c_jv_j$; then
$$\|x\|^2 = c^\top Gc, \qquad \sum_{k=0}^{n}a_k|\langle x|v_k\rangle|^2 = (Gc)^\top A(Gc) = c^\top GAGc,$$
so the constrained minimum of $\frac{c^\top GAGc}{c^\top Gc}$ equals the smallest eigenvalue of $G^{1/2}AG^{1/2}$ by the Rayleigh–Ritz principle.

Let $M = G^{1/2}AG^{1/2}$. We invert the matrix in order to use standard operator perturbation theory:
$$M^{-1} = \frac{1}{a_n}u_nu_n^\top + \sum_{k=0}^{n-1}\frac{1}{a_k}u_ku_k^\top,$$
where $u_k = G^{-1/2}e_k$ is the $k$-th column of $G^{-1/2}$. The perturbation $\sum_{k=0}^{n-1}\frac{1}{a_k}u_ku_k^\top$ is of order $O(\epsilon)$ compared to the unperturbed part $\frac{1}{a_n}u_nu_n^\top$. The unperturbed operator has maximal eigenvalue $\frac{\|u_n\|^2}{a_n}$ with eigenvector $\hat{u}_n := u_n/\|u_n\|$. The perturbed operator has maximal eigenvalue
$$\frac{\|u_n\|^2}{a_n} + \hat{u}_n^\top\!\left(\sum_{k=0}^{n-1}\frac{1}{a_k}u_ku_k^\top\right)\!\hat{u}_n + O(\epsilon^2).$$
Inverting the eigenvalue,
$$\lambda_{\min}(G^{1/2}AG^{1/2}) = \frac{a_n}{\|u_n\|^2}\left(1 - \frac{a_n/a_{n-1}}{\|u_n\|^2}|u_{n-1}^\top\hat{u}_n|^2 + O(\epsilon^2)\right) = \frac{a_n}{\|u_n\|^2}(1+O(\epsilon)).$$
Now, $\|u_n\|^2 = e_n^\top G^{-1}e_n$, and $G^{-1}$ is the Gram matrix of the dual basis of $|v_{0:n}\rangle$ in $\mathrm{Span}(|v_{0:n}\rangle)$. In particular, because $|\hat{v}_n\rangle$ is perpendicular to all of $|v_0\rangle,\ldots,|v_{n-1}\rangle$, the $n$-th dual vector is just $\frac{|\hat{v}_n\rangle}{\langle v_n|\hat{v}_n\rangle}$. Therefore
$$\|u_n\|^2 = \left\|\frac{|\hat{v}_n\rangle}{\langle v_n|\hat{v}_n\rangle}\right\|^2 = \frac{1}{|\langle v_n|\hat{v}_n\rangle|^2},$$
and we obtain the desired lower bound $\lambda_n\geq a_n|\langle\hat{v}_n|v_n\rangle|^2(1+O(\epsilon))$. $\square$

Some comments on the constants in $O(\epsilon)$. In the above proof, we constructed an upper bound $a_n|\langle\hat{v}_n|v_n\rangle|^2(1+O(\epsilon))$ and a lower bound $a_n|\langle\hat{v}_n|v_n\rangle|^2(1-O(\epsilon))$. The constant in the upper bound is
$$\frac{1}{|\langle\hat{v}_n|v_n\rangle|^2}\cdot\frac{1}{1-\epsilon},$$
which depends on $|\langle\hat{v}_n|v_n\rangle|^2 = \sin^2\theta$, where $\theta$ is the angle between $|v_n\rangle$ and $\mathrm{Span}(|v_{0:n-1}\rangle)$.
We see that the bound widens when either the coefficients decay less strictly exponentially, or the vector $|v_n\rangle$ leans into $\mathrm{Span}(|v_{0:n-1}\rangle)$ and thus becomes less orthonormal. The constant in the lower bound is
\[
\frac{1}{\|u_n\|^2}|u_{n-1}^\top \hat u_n|^2 = \frac{1}{\|u_n\|^4}|u_{n-1}^\top u_n|^2 = |\langle v_n|\hat v_n\rangle|^4 |u_{n-1}^\top u_n|^2 = |\langle v_n|\hat v_n\rangle|^4 \big(G^{-1}_{n-1,n}\big)^2.
\]
Similarly to the previous case, $G^{-1}_{n-1,n}$ gets larger as the vectors $|v_0\rangle, \dots, |v_n\rangle$ get less orthonormal, which worsens eigenvalue convergence.

I.3.2 PART 2

Proof. If $a_n = 0$, then we can trivially select the $n$-th eigenvector to be $|\hat v_n\rangle$. Otherwise, we have $a_0, \dots, a_n > 0$. By the eigenvalue bound (that is, Part 1 of the theorem), the spectral gap around $\lambda_n(K)$ is
\[
\min_{j \ne n} |\lambda_n(K) - \lambda_j(K)| = a_n|\langle\hat v_n|v_n\rangle|^2(1 + O(\epsilon)).
\]
Define the truncated operator $\tilde K = \sum_{k=0}^n a_k |v_k\rangle\langle v_k|$. By Davis–Kahan,
\[
|\sin\angle(v_n(K), v_n(\tilde K))| \le \frac{2\,\|K - \tilde K\|_{\mathrm{op}}}{a_n|\langle\hat v_n|v_n\rangle|^2(1 + O(\epsilon))} \le \frac{2\,\mathrm{Tr}\big[K - \tilde K\big]}{a_n|\langle\hat v_n|v_n\rangle|^2(1 + O(\epsilon))} = \frac{2\sum_{k=n+1}^\infty a_k}{a_n|\langle\hat v_n|v_n\rangle|^2(1 + O(\epsilon))} = O(\epsilon).
\]
Thus, we need only bound $|\sin\angle(v_n(\tilde K), \hat v_n)|$. Because $\tilde K$ lives inside $\mathrm{Span}(|v_{0:n}\rangle)$, we henceforth restrict the Hilbert space to just $\mathrm{Span}(|v_{0:n}\rangle)$. Define the twice-truncated operator $\bar K = \sum_{k=0}^{n-1} a_k |v_k\rangle\langle v_k|$. The eigenvalue bound applies to its first $n$ eigenvalues, and its $(n+1)$-th eigenstate is the ground state, with eigenvalue $0$ and eigenvector $|\hat v_n\rangle$. Thus, the spectral gap around its ground-state eigenvalue is
\[
\min_{j \ne n} |\lambda_n(\bar K) - \lambda_j(\bar K)| = \lambda_{n-1}(\bar K) = a_{n-1}|\langle\hat v_{n-1}|v_{n-1}\rangle|^2(1 + O(\epsilon)).
\]
By Davis–Kahan again,
\[
|\sin\angle(v_n(\tilde K), \hat v_n)| = |\sin\angle(v_n(\tilde K), v_n(\bar K))| \le \frac{2\,\|\tilde K - \bar K\|_{\mathrm{op}}}{a_{n-1}|\langle\hat v_{n-1}|v_{n-1}\rangle|^2(1 + O(\epsilon))} = \frac{2 a_n}{a_{n-1}|\langle\hat v_{n-1}|v_{n-1}\rangle|^2(1 + O(\epsilon))} = O(\epsilon).
\]

There are two occurrences of $O(\epsilon)$ in the proof. Both can be bounded explicitly, to show that they depend only on the Gram matrix of $|v_0\rangle, \dots, |v_n\rangle$, as in Part 1. The first $O(\epsilon)$ has explicit upper-bound constant
\[
\frac{2}{a_n \epsilon\, |\langle\hat v_n|v_n\rangle|^2 (1 + O(\epsilon))} \sum_{k=n+1}^\infty a_k \le \frac{2}{|\langle\hat v_n|v_n\rangle|^2} \cdot \frac{1}{1-\epsilon} \cdot \frac{1}{1 + O(\epsilon)},
\]
where the remaining $O(\epsilon)$ in $\frac{1}{1+O(\epsilon)}$ comes from Part 1 and, as shown there, depends only on the Gram matrix of $|v_0\rangle, \dots, |v_n\rangle$. The second $O(\epsilon)$ has explicit upper-bound constant
\[
\frac{2}{|\langle\hat v_{n-1}|v_{n-1}\rangle|^2 (1 + O(\epsilon))},
\]
where the remaining $O(\epsilon)$ depends only on the Gram matrix of $|v_0\rangle, \dots, |v_{n-1}\rangle$.
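As a quick numerical sanity check of Theorem 6 (our own illustration, not part of the original argument), one can build a small fast-decaying kernel from random unit vectors and compare its eigensystem against the Gram–Schmidt predictions. The sketch below uses NumPy, with arbitrary example sizes `d` and `eps`:

```python
import numpy as np

rng = np.random.default_rng(0)
d, eps = 6, 1e-2
# Linearly independent, mildly non-orthogonal unit vectors |v_0>, ..., |v_{d-1}>.
V = rng.standard_normal((d, d)) + 3 * np.eye(d)
V /= np.linalg.norm(V, axis=0)
a = eps ** np.arange(d)                    # fast-decaying coefficients a_n = eps^n
K = (V * a) @ V.T                          # K = sum_n a_n |v_n><v_n|
Q, _ = np.linalg.qr(V)                     # Gram-Schmidt basis |v-hat_n> (up to signs)

w, U = np.linalg.eigh(K)
w, U = w[::-1], U[:, ::-1]                 # descending eigensystem of K
pred = a * np.sum(Q * V, axis=0) ** 2      # predicted a_n |<v-hat_n|v_n>|^2
angles = np.arccos(np.clip(np.abs(np.sum(U * Q, axis=0)), 0, 1))

print(np.abs(w / pred - 1))                # each relative error is O(eps)
print(angles)                              # each angle is O(eps)
```

Shrinking `eps` shows both the relative eigenvalue errors and the angles decaying linearly in $\epsilon$, as the theorem predicts.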
J PROOF OF THEOREM 2 IN THE GENERAL CASE

In this appendix, we prove Theorem 2 completely. The general idea is to modify the proof given in Section I to account for multiplicity. Section J.1 presents the overall plan of the proof. Section J.2 sets up all the machinery needed to handle multiplicity in eigenvalues and eigenvectors, which is a new occurrence in multiple dimensions. Section J.3 states the theorem in full rigor as Theorem 7. The next two subsections prove two lemmas that apply to generic operators, not just an integral kernel operator: Section J.4 shows that the eigenvalues of a generic fast-decaying kernel split into segments that are exponentially separated, and Section J.5 sharpens this separation, proving that the eigenvalues in the $N$-th segment are only slightly perturbed by all the other segments. Section J.6 specializes to the case of a dot-product kernel, showing convergence of the eigenvalues, and then leverages that convergence into the convergence of eigenspaces.

J.1 PLAN OF THE PROOF

Let $K(x,y) = \sum_{n=0}^\infty \frac{c_n}{n!}(x^\top y)^n$ be a dot-product kernel with fast-decaying coefficients $c_n$. As in Section H.4, to study a spherically symmetric dot-product kernel over a nonstandard Gaussian distribution $N(0, \Sigma)$, we construct a whitening unitary transform $V : L^2(N(0, \Sigma)) \to L^2(N(0, I_d))$, thus converting the problem to solving for the eigenstructure of a spherically asymmetric kernel over the standard Gaussian distribution. Let the SVD of $\Sigma$ be $U\Gamma U^\top$, where $\Gamma = \mathrm{diag}(\gamma_1, \dots, \gamma_d)$ with $\gamma_1, \dots, \gamma_d \ge 0$. Define $V$ by $(Vf)(x) = f(Mx)$, as in Section H.4. This converts the operator $K$ to $VKV^*$, a kernel operator with kernel function satisfying
\[
(VKV^*)(x, y) = \sum_{n=0}^\infty \frac{c_n}{n!}(\gamma_1 x_1 y_1 + \cdots + \gamma_d x_d y_d)^n.
\]
We will prove that the eigensystem of $VKV^*$ converges to
\[
\Big\{\Big(c_{|\alpha|}\prod_{i=1}^d \gamma_i^{\alpha_i},\ |h_\alpha\rangle\Big)\ \text{for all}\ \alpha \in \mathbb{N}_0^d\Big\}
\]
as $\epsilon \to 0$. Then, by reversing the $V$ transform, we find that the eigensystem of $K$ converges to
\[
\Big\{\Big(c_{|\alpha|}\prod_{i=1}^d \gamma_i^{\alpha_i},\ |h^{(\Sigma)}_\alpha\rangle\Big)\ \text{for all}\ \alpha \in \mathbb{N}_0^d\Big\},
\]
as desired.

We eliminate a pesky special case: one or more of $\gamma_1, \dots, \gamma_d$ may be equal to zero. In this case, $VKV^*$ may not be positive definite, but merely positive semidefinite, which is annoying. For example, what if $\gamma_2 = \gamma_4 = 0$? Then the operator $VKV^*$ splits into two halves: it is the zero operator on $\mathrm{Span}(e_2, e_4)$, and it is positive definite on $\mathrm{Span}(e_1, e_3, e_5, \dots, e_d)$. We can then separately prove the eigensystem convergence on the two halves and take their tensor product. The case of the zero operator is obviously trivial, since its eigensystem is just
\[
\big\{\big(0,\ |h_{(\alpha_2, \alpha_4)}\rangle\big)\ \text{for all}\ (\alpha_2, \alpha_4) \in \mathbb{N}_0^2\big\}.
\]
Thus, WLOG, we need only consider the case where $\gamma_1, \dots, \gamma_d > 0$.

We note that, in at least one case, the theorem has already been proven: if $c_n = \sigma^{-2n}$ for some $\sigma$, then it is just a minor variant of Theorem 1, which was proven in Section H.2. Thus, if we prove that the difference between the eigensystem of $VKV^*$ and the eigensystem of $c\,e^{x^\top y/\sigma^2}$ vanishes as $\epsilon \to 0$, for some well-chosen values of $\sigma$ and $c$, we are done. This cannot be done directly, once again due to nonuniform convergence: the higher-order parts of the eigensystem are wilder and harder to control.

To bypass this difficulty, we divide and conquer. We prove that the eigensystem of $VKV^*$ is "segmented" into exponentially separated intervals, such that each segment is $\epsilon$-insensitive to perturbations in all other segments. This allows us to show that, for any fixed $n \in \mathbb{N}_0$, the $n$-th segment of $\mathrm{eigensystem}(VKV^*)$, corresponding to the term $c_n(x^\top\Gamma y)^n$, converges to the $n$-th segment of $\mathrm{eigensystem}(K_{c e^{\epsilon(x^\top\Gamma y)}})$, where $c = c_n\epsilon^{-n}$. Since the $n$-th segment of $\mathrm{eigensystem}(K_{c e^{\epsilon(x^\top\Gamma y)}})$ converges to
\[
\Big\{\Big(\frac{c_n}{\epsilon^n}\,\epsilon^{|\alpha|}\prod_{i=1}^d \gamma_i^{\alpha_i},\ |h_\alpha\rangle\Big) : \alpha \in \mathbb{N}_0^d,\ |\alpha| = n\Big\},
\]
the theorem is proven.

J.2 MACHINERY FOR MULTIPLICITY

The main difficulty, compared to the one-dimensional case, is that we must directly handle the multiplicity of eigensystems. By this, we mean that in $\mathbb{R}^d$, the Hermite basis is no longer of the form $\{|h_n\rangle\}_{n\in\mathbb{N}}$ but rather $\{|h_\alpha\rangle\}_{\alpha\in\mathbb{N}^d}$. This creates degeneracy: if $\sum_i \alpha_i = \sum_i \alpha'_i$, then $|h_\alpha\rangle$ and $|h_{\alpha'}\rangle$ belong to the same eigenspace.

Concretely, define the Ornstein–Uhlenbeck operator $\nabla^2 - x\cdot\nabla$ on $L^2(\mu_d)$. In the $d = 1$ case, its eigenvalues have no multiplicity, and its eigenvectors are precisely the normalized Hermite polynomials $\{|h_n\rangle\}_{n\in 0:\infty}$. In the $d > 1$ case, its eigenvalues suffer multiplicity: its $n$-th eigenspace is $\{|h_\alpha\rangle : |\alpha| = n\}$, with $\binom{n+d-1}{d-1}$ dimensions. This multiplicity is inescapable, because the Ornstein–Uhlenbeck operator is spherically symmetric, and spherical symmetry inevitably leads to multiplicity.
In our case, the dot-product kernel $\sum_n \frac{c_n}{n!}(x^\top\Gamma y)^n$ may have some entries of $\Gamma$ equal, which leads to (partial) spherical symmetry, and thus multiplicity. As a more famous example, the spherical harmonics in $L^2(\mathbb{R}^d)$ are the eigenvectors of the Laplacian operator $\nabla^2$. Due to the spherical symmetry of the operator, its eigenvalues have multiplicity, and thus it splits $L^2(\mathbb{R}^d)$ into eigenspaces; the $n$-th eigenspace is spanned by $\frac{(2n+d-2)(n+d-3)!}{n!\,(d-2)!}$ degree-$n$ homogeneous polynomials.

Let's consider the prototypical case that we want to study: the convergence of dot-product kernels to the Hermite eigensystem. In $\mathbb{R}^d$, there are $\binom{n+d-1}{d-1}$ degree-$n$ monomials and $\binom{n+d-1}{d-1}$ degree-$n$ Hermite polynomials. To obtain the Hermite polynomials, we cannot simply apply the Gram–Schmidt process on the monomials individually. Instead, we need to apply the Gram–Schmidt process simultaneously on each segment of $\binom{n+d-1}{d-1}$ monomials, to obtain the dimension-$\binom{n+d-1}{d-1}$ subspace spanned by the degree-$n$ Hermite polynomials.

To count the multiplicity, we use a function $m : \mathbb{N} \to \mathbb{N}\cup\{\infty\}$, interpreted as saying that the $n$-th eigenspace has dimension $m(n)$. For example, the multiplicity counting function for the Ornstein–Uhlenbeck operator over $L^2(\mu_d)$ is $m(n) = \binom{n+d-1}{d-1}$.

Note that $m$ does not need to be strictly positive; that is, we allow $m(k) = 0$ for some $k$. We even allow $\sum_k m(k)$ to be finite, in the case that the Hilbert space under consideration is finite-dimensional, although we require $\sum_k m(k) > 0$, for otherwise everything would be completely trivial. For convenience, we will from now on assume that $m(k) > 0$ for all $k$, since we do not need more generality; the reader who needs it can read the next few sections and mentally generalize the constructions.

The most important condition on the multiplicity counting function is:

Definition 6 (polynomially bounded multiplicity). A multiplicity counting function is polynomially bounded iff there exist some $A, D > 0$ such that $m(n) < An^D$ for all $n \in \mathbb{N}$.

This is satisfied by the Hermite basis in any dimension, since its $m(n) = O(n^{d-1})$.

Given a multiplicity counting function, we define a system of vectors that it counts:

Definition 7 (vector systems with multiplicity). Given a multiplicity counting function $m$, a vector system with multiplicity $m$ is an indexed set of vectors of the form $\{v_{k,l} : k \in \mathbb{N},\ l \in [1:m(k)]\}$.

As in the last few sections, we will only consider vector systems that consist of linearly independent unit vectors and whose span has closure equal to the entire Hilbert space. Similarly, we define a system of coefficients that it counts:

Definition 8 (coefficient systems with multiplicity). Given a multiplicity counting function $m$, a coefficient system with multiplicity $m$ is an indexed set of real numbers of the form $\{a_{k,l} : k \in \mathbb{N},\ l \in [1:m(k)]\}$.

We next generalize the Gram–Schmidt process to handle multiplicity.

Definition 9 (Gram–Schmidt process on vector systems). Given a multiplicity counting function $m$ and a linearly independent vector system $\{v_{k,l} : k \in \mathbb{N},\ l \in [1:m(k)]\}$, we define the Gram–Schmidt process on the vector system by the following algorithm:
1. Construct an arbitrary orthonormal basis of $\mathrm{Span}(|v_{0,1}\rangle, \dots, |v_{0,m(0)}\rangle)$. Call them $|\hat v_{0,1}\rangle, \dots, |\hat v_{0,m(0)}\rangle$.
2. Select the next smallest $k$ such that $m(k) > 0$, project each of $|v_{k,1}\rangle, \dots, |v_{k,m(k)}\rangle$ onto $\mathrm{Span}(|\hat v_{0,1}\rangle, \dots, |\hat v_{0,m(0)}\rangle)^\perp$ to obtain $|v'_{k,1}\rangle, \dots, |v'_{k,m(k)}\rangle$, then construct an arbitrary orthonormal basis of their span. Call them $|\hat v_{k,1}\rangle, \dots, |\hat v_{k,m(k)}\rangle$.
3. Continue this way inductively.
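A minimal runnable sketch of Definition 9, assuming the vector system is supplied as a list of NumPy arrays (one array of columns per segment); the per-segment QR factorization plays the role of "an arbitrary orthonormal basis":

```python
import numpy as np

def gram_schmidt_with_multiplicity(segments):
    """Orthonormalize a vector system segment by segment (cf. Definition 9).

    segments: list of arrays; segments[k] has shape (dim, m_k), its columns
    being |v_{k,1}>, ..., |v_{k,m_k}>.  Returns a list of arrays whose columns
    are orthonormal bases |v-hat_{k,l}> of the successive projected spans.
    """
    out, P = [], None   # P: orthogonal projector onto span of earlier segments
    for Vk in segments:
        Wk = Vk if P is None else Vk - P @ Vk   # project out earlier segments
        Qk, _ = np.linalg.qr(Wk)                # one valid orthonormal basis
        out.append(Qk)
        P = Qk @ Qk.T if P is None else P + Qk @ Qk.T
    return out
```

The QR step realizes exactly the $O(m(k))$ ambiguity noted next: any rotation of `Qk`'s columns within their span would be an equally valid output.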
Note that the Gram–Schmidt process is not uniquely defined, due to the steps in which arbitrary orthonormal bases are chosen. However, it constructs a sequence of subspaces $\{\mathrm{Span}(|\hat v_{k,1}\rangle, \dots, |\hat v_{k,m(k)}\rangle)\}_{k\in 0:\infty}$, which is uniquely defined. Also note that even the traditional Gram–Schmidt process, without multiplicity, is not uniquely defined, because each $\hat v_k$ could have been $-\hat v_k$ instead; that is, there is a $\{-1, +1\}$ ambiguity per step. Now, $\{-1, +1\}$ is just $O(1)$, the orthogonal group on $\mathbb{R}^1$, and we see that this is a general phenomenon: the Gram–Schmidt process with multiplicity $m$ creates an $O(m(k))$ amount of ambiguity at step $k$.

Each vector system defines a positive semi-definite kernel
\[
K = \sum_{k=0}^\infty \sum_{l=1}^{m(k)} a_{k,l} |v_{k,l}\rangle\langle v_{k,l}|
\]
for any indexed set of non-negative scalars $\{a_{k,l} : k \in \mathbb{N},\ l \in [1:m(k)]\}$, provided that all $a_{k,l} \ge 0$ and $\sum_{k,l} a_{k,l} < \infty$.

To handle the multiplicity of $\mathrm{eigensystem}(K)$, we need to make four changes:
1. Generalize the definition of "fast-decaying coefficients" to fast-decaying segments of coefficients.
2. Prove that segments of adjacent eigenvalues are exponentially separated.
3. Prove the convergence of reducing subspaces (i.e. direct sums of eigenspaces), rather than eigenvectors.
4. Prove the convergence of whole segments of $\mathrm{eigensystem}(K)$, rather than individual entries like $(\lambda_n(K), |v_n(K)\rangle)$.

A fast-decaying sequence of coefficient segments with the specified multiplicity $m$ is obtained by slightly loosening a fast-decaying sequence of coefficients, so that instead of individual coefficients, it is segments of coefficients that decay exponentially. This will force segments of the eigenvalues to be well-separated as well, and thus their corresponding reducing subspaces.

Definition 10 (fast-decay with multiplicity). A coefficient system $a_{k,l}$ is fast-decaying iff there exist a sequence of numbers $\delta_{\mathrm{low},n} \in (0, 1]$, a number $\epsilon \in [0, 1)$, and a sequence of numbers $\bar a_n$, such that
\[
a_{n,l} \in [\bar a_n\delta_{\mathrm{low},n},\ \bar a_n], \qquad \bar a_{n+1} \le \epsilon\bar a_n, \qquad \forall n \in \mathbb{N}.
\]
We say that such a system is a fast-decaying coefficient system with parameters $(\epsilon, \delta_{\mathrm{low},n}, \bar a_n)$.

For example, a fast-decaying sequence is a fast-decaying system in which the multiplicity counting function is $m(k) = 1$ and all $\delta_{\mathrm{low},n} = 1$. We will show that the coefficient system of a dot-product kernel $\sum_n \frac{c_n}{n!}(x^\top\Gamma y)^n$ is fast-decaying in Section J.6.

Given a polynomially bounded multiplicity $m$, a vector system for $m$, and a fast-decaying coefficient system, define
\[
K = \sum_k \sum_l a_{k,l} |v_{k,l}\rangle\langle v_{k,l}|.
\]
It is positive semidefinite and has finite trace. Therefore, its spectrum is discrete, and except for zero, each of its eigenvalues is positive and has only finite multiplicity. Therefore, its eigensystem is well-defined.

Definition 11 (eigensegment). Given $K$, a positive semidefinite operator with finite trace, enumerate its eigenvalues as $\lambda_{0,1} \ge \cdots \ge \lambda_{0,m(0)} \ge \lambda_{1,1} \ge \cdots \ge 0$, counting eigenvalue multiplicity, and construct a corresponding orthonormal eigenbasis $|v_{k,l}\rangle$. If all $\lambda_{n,1}, \dots, \lambda_{n,m(n)}$ are distinct, then the $n$-th eigensegment of $\mathrm{eigensystem}(K)$ is the set $\{(\lambda_{n,l}, \mathrm{Span}(|v_{n,l}\rangle)) : l \in [1:m(n)]\}$. Otherwise, if $\lambda_{n,1} = \lambda_{n,2}$ and all others are distinct, then the $n$-th eigensegment of $\mathrm{eigensystem}(K)$ is the set
\[
\{(\lambda_{n,1}, \mathrm{Span}(|v_{n,1}\rangle, |v_{n,2}\rangle)),\ (\lambda_{n,2}, \mathrm{Span}(|v_{n,1}\rangle, |v_{n,2}\rangle))\} \cup \{(\lambda_{n,l}, \mathrm{Span}(|v_{n,l}\rangle)) : l \in [3:m(n)]\}.
\]
In general, the eigensegment is obtained by merging degenerate eigenspaces.
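To make Definition 11 concrete, here is a minimal sketch (our own illustration, operating on NumPy arrays, with a hypothetical numerical tolerance standing in for exact equality of eigenvalues):

```python
def eigensegments(eigvals, eigvecs, m, tol=1e-9):
    """Group a descending eigensystem into eigensegments (cf. Definition 11).

    eigvals: eigenvalues in descending order; eigvecs: array whose columns are
    the matching eigenvectors; m: segment sizes m(0), m(1), ...
    Within each segment, eigenvectors whose eigenvalues agree up to `tol`
    (relative) are merged into a single eigenspace.
    """
    segments, start = [], 0
    for mk in m:
        lam = eigvals[start:start + mk]
        vec = eigvecs[:, start:start + mk]
        spaces, i = [], 0
        while i < mk:
            j = i + 1   # extend over a run of (numerically) degenerate values
            while j < mk and abs(lam[i] - lam[j]) <= tol * abs(lam[i]):
                j += 1
            spaces.append((list(lam[i:j]), vec[:, i:j]))  # (values, eigenspace basis)
            i = j
        segments.append(spaces)
        start += mk
    return segments
```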
Despite the valiant effort in removing ambiguity, Definition 11 still has some residual ambiguity: if, unluckily, $\lambda_{k,m(k)} = \lambda_{k+1,1}$, then the reducing subspaces of the $k$-th and $(k+1)$-th eigensegments are not uniquely defined, since we can rotate the eigenvector pair $|v_{k+1,1}\rangle, |v_{k,m(k)}\rangle$ arbitrarily. Fortunately, we will demonstrate that this residual ambiguity disappears in the $\epsilon \to 0$ limit for fast-decaying kernels.

Finally, we need to define what it means for two eigensegments to be close together.

Definition 12 (eigensegment bulk closeness). Given two eigensegments of equal length, let $\lambda_{n,1}, \dots, \lambda_{n,m(n)}$ be the eigenvalues from the first segment, and $\lambda'_{n,1}, \dots, \lambda'_{n,m(n)}$ from the second. We say that they are $\epsilon$-close in bulk if
\[
|\lambda_{n,l} - \lambda'_{n,l}| < \epsilon \min(\lambda_{n,l}, \lambda'_{n,l}) \qquad \forall l \in [1:m(n)].
\]

Why "in bulk"? Due to the annoyances of degeneracy, we first show that the eigenvalues converge as $O(\epsilon)$. Once that is done, we can set $\epsilon$ small enough to force the eigenspaces to match up as well, and thus bulk closeness resolves into detailed closeness.

Definition 13 (eigensegment detailed closeness). Given two eigensegments of equal length, we say that the two eigensegments are $\epsilon$-close in detail if they are $\epsilon$-close in bulk, and for each eigenspace $V'$ in the second eigensegment, there exist one or more eigenspaces $V_1, \dots, V_s$ in the first and a unitary angle operator $\Theta$ such that $\begin{pmatrix}\cos\Theta & \sin\Theta\\ -\sin\Theta & \cos\Theta\end{pmatrix}$ rotates $V_1 \oplus \cdots \oplus V_s$ to $V'$, and $\|\sin\Theta\|_{\mathrm{op}} < \epsilon$.

See Section I.1 and (Davis & Kahan, 1970) for details on the meaning of the angle operator. Intuitively, the operator $\begin{pmatrix}\cos\Theta & \sin\Theta\\ -\sin\Theta & \cos\Theta\end{pmatrix}$ is the generalization of multidimensional rotation to a Hilbert space: it performs simultaneous rotation in many (potentially infinitely many) 2-dimensional planes. To say that $\|\sin\Theta\|_{\mathrm{op}} < \epsilon$ means that in every single one of these planes, the angle of rotation is less than $\arcsin(\epsilon)$.

The definition is asymmetric by design, because we will show that if one source eigensegment (think of the dot-product kernel's eigensystem) is always $O(\epsilon)$-close to a target eigensegment (think of the Hermite eigensystem) in the coarse sense, then as $\epsilon \to 0$, the source eigensegment will be forced to be $O(\epsilon)$-close to the target eigensegment in the detailed sense. We would rather not hit a moving target with a static gun, but hit a static target with a moving gun.

We point out that even "convergence in detail" does not imply convergence of each and every eigenvector, because a degeneracy in the target eigensegment may stubbornly remain broken in the source segment. For example, it is possible that $\lambda'_{n,1} = \lambda'_{n,2}$ exactly in the target eigensegment, but $\lambda_{n,1} \ne \lambda_{n,2}$ in the source eigensegment. In this case, the eigenvectors corresponding to $\lambda_{n,1}, \lambda_{n,2}$ may be rotated by 45° relative to the chosen eigenvectors for $\lambda'_{n,1}, \lambda'_{n,2}$.

Concretely, for our dot-product kernels this means the kernel may have no degenerate eigenvectors while the Hermite eigensystem does. In such a case, the best we can possibly do is to prove that each eigenspace of the Hermite eigensystem corresponds to a direct sum of eigenspaces of the kernel's eigensystem that is a small angle's rotation away. Concretely, suppose that $\gamma_1 = \gamma_2$; then the Hermite eigensystem is degenerate, but we may find that the kernel's eigensystem stubbornly remains both nondegenerate and rotated 45° askew of the Hermite basis.
For example, it might stubbornly insist on containing two non-degenerate eigenvectors close to $\sqrt{\tfrac{1}{2}}(|h_{(0,1)}\rangle + |h_{(1,0)}\rangle)$ and $\sqrt{\tfrac{1}{2}}(|h_{(0,1)}\rangle - |h_{(1,0)}\rangle)$. This is fine, since their direct sum does converge to $\mathrm{Span}(|h_{(0,1)}\rangle, |h_{(1,0)}\rangle)$, and that is the best that can be proven. One cannot expect more to be proven given such degeneracy in the target eigensystem.

In the case of Theorem 2, before proving it, we saw why it should be true using a 2-dimensional picture with ellipses (Figure 22). Here, we can also see why it should be true using a 3-dimensional picture with ellipsoids. Consider the case of three dimensions, where we only have $|v_{0,1}\rangle, |v_{1,1}\rangle, |v_{1,2}\rangle$. In this case, the kernel is $K = a_{0,1}|v_{0,1}\rangle\langle v_{0,1}| + a_{1,1}|v_{1,1}\rangle\langle v_{1,1}| + a_{1,2}|v_{1,2}\rangle\langle v_{1,2}|$. Diagonalizing the kernel is equivalent to finding the principal axes of the contour ellipsoid $\{|x\rangle : \langle x|K|x\rangle = 1\}$. This ellipsoid is the unique ellipsoid tangent to the six planes defined by $a_{0,1}|\langle v_{0,1}|x\rangle|^2 = 1$, $a_{1,1}|\langle v_{1,1}|x\rangle|^2 = 1$, and $a_{1,2}|\langle v_{1,2}|x\rangle|^2 = 1$.

Suppose we fix $a_{1,1}, a_{1,2}$ and let $a_{0,1} \to 0$. Then the planes $a_{1,1}|\langle v_{1,1}|x\rangle|^2 = 1$ and $a_{1,2}|\langle v_{1,2}|x\rangle|^2 = 1$ remain constant, but the planes $a_{0,1}|\langle v_{0,1}|x\rangle|^2 = 1$ diverge to infinity. The ellipsoid degenerates to a parallelogram prism defined by the four remaining planes. Two of its principal axes rotate to become perpendicular to the two pairs of parallel planes, and essentially ignore $a_{0,1}$.

Suppose we fix $a_{1,1}, a_{1,2}$ and let $a_{0,1} \to \infty$. Then the planes $a_{1,1}|\langle v_{1,1}|x\rangle|^2 = 1$ and $a_{1,2}|\langle v_{1,2}|x\rangle|^2 = 1$ remain constant, but the planes $a_{0,1}|\langle v_{0,1}|x\rangle|^2 = 1$ converge to the origin. The ellipsoid degenerates to two flat parallelograms. Two of its principal axes rotate to fall within the parallelogram, perpendicular to its two edges.

Intuitively, we see that for a given $n$, the effect of the $a_{n+1,1}, \dots, a_{n+1,m(n+1)}, a_{n+2,1}, \dots$ terms in the kernel is a small perturbation on the $n$-th eigenspace, negligible because the corresponding parallel planes diverge to infinity. The effect of the $a_{n-1,m(n-1)}, a_{n-1,m(n-1)-1}, \dots, a_{0,1}$ terms is a large but fixed perturbation, forcing the $n$-th eigenspace to be perpendicular to all of $|v_{n-1,m(n-1)}\rangle, \dots, |v_{0,1}\rangle$; but once that is done, their effects are also negligible, because the corresponding parallel planes converge to the origin.

[Figure 23: Diagonalizing the kernel in three dimensions at two limits. (a) Constant $a_{1,1}, a_{1,2}$ with $a_{0,1} \to 0$: the ellipsoid degenerates to a parallelogram prism. (b) Constant $a_{1,1}, a_{1,2}$ with $a_{0,1} \to \infty$: the ellipsoid degenerates to two repeated parallelograms.]

Figure 24 shows the eigenstructure of $K$. On the coarse level, it is divided into exponentially separated segments. The $N$-th segment contains $m(N)$ eigenvalues clustered within an interval of magnitude $\Theta(\bar a_N)$. As $\epsilon \to 0$, the segments become ever more cleanly separated, while also converging closer and closer to the $N$-th segment of a target eigenstructure. The target eigensegment may contain multiple eigenvalues with varying multiplicity. If a target eigenvalue $\lambda$ has multiplicity 3, then there will be precisely 3 eigenvalues (counting multiplicity) falling within $\lambda(1 \pm O(\epsilon))$. Notably, these 3 eigenvalues may or may not be degenerate. In either case, the direct sum of their eigenspaces will have the same dimension as the eigenspace corresponding to $\lambda$, and it will make an angle of size $O(\epsilon)$ with it.

[Figure 24: Structure of the kernel spectrum, zooming into one particular $N$-th segment. The eigenvalues converge to a segment of the target spectrum according to the multiplicity of the target spectrum.]
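The three-dimensional picture above is easy to reproduce numerically. The sketch below (our own illustration, with arbitrary example vectors) builds $K = a_{0,1}|v_{0,1}\rangle\langle v_{0,1}| + a_{1,1}|v_{1,1}\rangle\langle v_{1,1}| + a_{1,2}|v_{1,2}\rangle\langle v_{1,2}|$ and checks, via principal angles, that the direct sum of the two small eigenspaces rotates onto the canonical 2-dimensional subspace as $\epsilon \to 0$, even when the individual eigenvectors do not converge:

```python
import numpy as np

rng = np.random.default_rng(1)
V = rng.standard_normal((3, 3)) + 2 * np.eye(3)
V /= np.linalg.norm(V, axis=0)        # columns: |v_{0,1}>, |v_{1,1}>, |v_{1,2}>
Q, _ = np.linalg.qr(V)                # canonical basis from block Gram-Schmidt

for eps in [1e-1, 1e-2, 1e-3]:
    a = np.array([1.0, eps, 0.9 * eps])          # segments m(0)=1, m(1)=2
    K = (V * a) @ V.T
    w, U = np.linalg.eigh(K)                     # ascending eigenvalues
    S = U[:, :2]                                 # direct sum of the 1-segment eigenspaces
    # principal angles between span(S) and span(|v-hat_{1,1}>, |v-hat_{1,2}>)
    s = np.linalg.svd(S.T @ Q[:, 1:], compute_uv=False)
    print(eps, np.degrees(np.arccos(np.clip(s, 0, 1))).max())  # max angle -> 0
```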
J.3 STATEMENT OF THE THEOREM

The statement of the theorem is really unwieldy, so we write it out in a separate section. A dot-product kernel
\[
K(x,y) := \sum_{n=0}^\infty \frac{c_n}{n!}(x^\top y)^n
\]
is defined by its level-coefficients $c_0, c_1, \dots \ge 0$. Given $\epsilon > 0$, it is $\epsilon$-fast-decaying if $c_{n+1} \le \epsilon c_n$ for all $n \in \mathbb{N}$. The multiplicity counting function for it is $m(n) = \binom{n+d-1}{d-1}$, which is polynomially bounded. We enumerate the eigensystem of $K$ as
\[
(\lambda_{0,1}, V_{0,1}),\ (\lambda_{1,1}, V_{1,1}), \dots, (\lambda_{1,d}, V_{1,d}),\ (\lambda_{2,1}, V_{2,1}), \dots, \big(\lambda_{2,\binom{d+1}{2}}, V_{2,\binom{d+1}{2}}\big),\ \dots,\ (\lambda_{n,1}, V_{n,1}), \dots, \big(\lambda_{n,\binom{n+d-1}{d-1}}, V_{n,\binom{n+d-1}{d-1}}\big),\ \dots
\]
where the eigenvalues are arranged in decreasing order: $\lambda_{0,1} \ge \lambda_{1,1} \ge \cdots$. In case of multiplicity, the same eigenvalue is written repeatedly; for example, if $\lambda$ has multiplicity 2, then it is written out twice. Each $V_{k,l}$ is the eigenspace of $\lambda_{k,l}$; in case of eigenvalue multiplicity, the eigenspace is repeated. For example, if $\lambda_{1,1} = \lambda_{1,2}$ is an eigenvalue of multiplicity 2, then it corresponds to a 2-dimensional eigenspace $V$, and we define $V_{1,1} = V_{1,2} = V$.

We also need to perform a similar "merging" on the Hermite eigensystem
\[
\Big\{\Big(c_n \prod_{i=1}^d \gamma_i^{\alpha_i},\ \mathrm{Span}\big(|h^{(\Sigma)}_\alpha\rangle\big)\Big) : n \in \mathbb{N},\ |\alpha| = n\Big\}.
\]
This merging is, unfortunately, not exactly the same, because we can't just say that if $c_n \prod_i \gamma_i^{\alpha_i} = c_{n'} \prod_i \gamma_i^{\alpha'_i}$ then we merge their eigenspaces: we must not merge when $n \ne n'$, even if the eigenvalues happen to coincide in this case. This is because, as $\epsilon \to 0$, eventually $c_n$ and $c_{n'}$ will differ so greatly that this accidental degeneracy is broken. We must only merge eigenspaces that are non-accidentally degenerate.

So why didn't we do this for $K$? Because $K$ is much harder to control! We know everything there is to know about the Hermite eigenstructure, but $K$ is a big unknown that we must laboriously control. One thing that is a big unknown about $K$ is which eigenvalues are accidentally equal and which are non-accidentally equal, so we treat them without special consideration. For the Hermite eigenstructure we do know which are accidental and which are not, so we treat them differently.

So, we perform a non-accidental merge of the Hermite eigensystem. For each $n \in \mathbb{N}$, the $n$-th segment of the Hermite eigensystem is
\[
\Big\{\Big(c_n \prod_{i=1}^d \gamma_i^{\alpha_i},\ \mathrm{Span}\big(|h^{(\Sigma)}_\alpha\rangle\big)\Big) : |\alpha| = n\Big\},
\]
and within each segment, we merge the degenerate eigenspaces. For example, if it happens that $\prod_i \gamma_i^{\alpha_i} = \prod_i \gamma_i^{\alpha'_i}$ for $|\alpha| = |\alpha'| = n$ but for no other $\alpha''$ with $|\alpha''| = n$, then we replace both $\mathrm{Span}(|h^{(\Sigma)}_\alpha\rangle)$ and $\mathrm{Span}(|h^{(\Sigma)}_{\alpha'}\rangle)$ by their direct sum $\mathrm{Span}(|h^{(\Sigma)}_\alpha\rangle, |h^{(\Sigma)}_{\alpha'}\rangle)$.

With all this set up, we can now state Theorem 2 with full rigor.

Theorem 7 (The HEA holds for a fast-decaying dot-product kernel on a Gaussian measure). Let:
1. $\Sigma$ be a covariance matrix;
2. $\Sigma = U\Gamma U^\top$ be its SVD, with $\Gamma = \mathrm{diag}(\gamma_1, \dots, \gamma_d)$;
3. $\mu = N(0, \Sigma)$.
For any $N \in \mathbb{N}$, there exist constants $C_N, D_N, \epsilon_N$ such that if $\epsilon \in [0, \epsilon_N]$ and $K$ is an $\epsilon$-fast-decaying dot-product kernel, the following holds. For any element $(\lambda, V)$ within the merged $N$-th segment of the Hermite eigenstructure, there exist exactly $\dim V$ eigenvalues (counting multiplicity) of $K$ in the interval $\lambda(1 \pm C_N\epsilon)$. Let their corresponding eigenspaces be $V_1, \dots, V_{\dim V}$; then there exists a unitary operator $\begin{pmatrix}\cos\Theta & \sin\Theta\\ -\sin\Theta & \cos\Theta\end{pmatrix}$ that rotates $V_1 \oplus \cdots \oplus V_{\dim V}$ to $V$, such that $\|\sin\Theta\|_{\mathrm{op}} < D_N\epsilon$.
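For concreteness, here is a small sketch (a hypothetical helper of our own, not from the paper's code) that enumerates the merged $N$-th segment of the Hermite eigenstructure appearing in Theorem 7, grouping eigenvalues that coincide within the segment:

```python
import itertools
import numpy as np

def hermite_segment(N, gammas, cN, tol=1e-12):
    """Merged N-th segment of the Hermite eigenstructure (hypothetical helper).

    Returns {eigenvalue: [multi-indices alpha]} with |alpha| = N, grouping
    non-accidentally degenerate eigenvalues within the segment.
    """
    d = len(gammas)
    seg = {}
    # enumerate all alpha in N_0^d with |alpha| = N via stars and bars
    for cuts in itertools.combinations(range(N + d - 1), d - 1):
        alpha, prev = [], -1
        for c in list(cuts) + [N + d - 1]:
            alpha.append(c - prev - 1)
            prev = c
        lam = cN * np.prod(np.asarray(gammas, float) ** alpha)
        key = next((k for k in seg if abs(k - lam) < tol * max(k, lam)), lam)
        seg.setdefault(key, []).append(tuple(alpha))
    return seg
```

For example, `hermite_segment(2, [1.0, 1.0, 0.5], 0.1)` groups $(2,0,0)$, $(0,2,0)$, and $(1,1,0)$ into a single class at eigenvalue $0.1$, reflecting the degeneracy caused by $\gamma_1 = \gamma_2$.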
J.4 PART 1: EXPONENTIAL SEPARATION OF EIGENSYSTEM SEGMENTS

For each segment $N$, we use the min-max principle to construct upper and lower bounds on its eigenvalues.

Lemma 1 (exponential separation of eigensystem segments). Given
1. a polynomially bounded multiplicity counting function $m$,
2. a fast-decaying coefficient system $a_{k,l}$ with parameters $(\epsilon, \delta_{\mathrm{low},n}, \bar a_n)$,
3. a linearly independent vector system $|v_{k,l}\rangle$,
4. the operator $K := \sum_{k=0}^\infty \sum_{l=1}^{m(k)} a_{k,l} |v_{k,l}\rangle\langle v_{k,l}|$,
then for any $n$, there exist constants $\epsilon_n, C_n, D_n > 0$ such that
\[
\lambda_{n,1}, \dots, \lambda_{n,m(n)} \in [C_n\bar a_n,\ D_n\bar a_n]
\]
for all $\epsilon \in [0, \epsilon_n]$. The constants $\epsilon_n, C_n, D_n$ depend only on $\delta_{\mathrm{low},0}, \dots, \delta_{\mathrm{low},n}$ and the Gram matrix of the vectors $|v_{0,1}\rangle, \dots, |v_{n,m(n)}\rangle$.

We bound the entire eigensegment $\lambda_{N,1}(K) \ge \cdots \ge \lambda_{N,m(N)}(K)$ by the min-max principle. This is analogous to what we did in Section I.3.1, but simplified, because we do not need to produce sharp bounds; that comes later.

For the lower bound, we use $V = \mathrm{Span}(|v_{0,1}\rangle, \dots, |v_{N,m(N)}\rangle)$. Writing $\delta_{\min,N} := \min_{k\le N}\delta_{\mathrm{low},k}$,
\begin{align*}
\lambda_{N,m(N)}(K) &\ge \min_{x\in V,\ \|x\|=1} \langle x|K|x\rangle = \min_{x\in V,\ \|x\|=1} \sum_{k=0}^\infty \sum_{l=1}^{m(k)} a_{k,l}|\langle x|v_{k,l}\rangle|^2 \\
&\ge \min_{x\in V,\ \|x\|=1} \sum_{k=0}^N \sum_{l=1}^{m(k)} a_{k,l}|\langle x|v_{k,l}\rangle|^2 \ge \bar a_N\,\delta_{\min,N} \min_{x\in V,\ \|x\|=1} \sum_{k=0}^N \sum_{l=1}^{m(k)} |\langle x|v_{k,l}\rangle|^2 \\
&= \bar a_N\,\delta_{\min,N}\,\lambda_{\min}(G) = \Omega(\bar a_N),
\end{align*}
where $G$ is the Gram matrix of the vectors $|v_{0,1}\rangle, \dots, |v_{N,m(N)}\rangle$. By assumption, these vectors are linearly independent, so the Gram matrix is positive definite.

For the upper bound, we use $V = \mathrm{Span}(|v_{0,1}\rangle, \dots, |v_{N-1,m(N-1)}\rangle)$:
\begin{align*}
\lambda_{N,1}(K) &\le \sup_{x\in V^\perp,\ \|x\|=1} \langle x|K|x\rangle = \sup_{x\in\mathrm{Span}(|\hat v_{N,1}\rangle, \dots),\ \|x\|=1} \langle x|K|x\rangle \\
&\le \sup_{x\in\mathrm{Span}(|\hat v_{N,1}\rangle, \dots),\ \|x\|=1} \sum_{l=1}^{m(N)} a_{N,l}\langle x|v_{N,l}\rangle\langle v_{N,l}|x\rangle + \lambda_{\max}\Big(\sum_{k=N+1}^\infty \sum_{l=1}^{m(k)} a_{k,l}|v_{k,l}\rangle\langle v_{k,l}|\Big) \\
&\le \sup_{x\in\mathrm{Span}(|\hat v_{N,1}\rangle, \dots, |\hat v_{N,m(N)}\rangle),\ \|x\|=1} \sum_{l=1}^{m(N)} a_{N,l}|\langle x|v_{N,l}\rangle|^2 + \mathrm{Tr}\Big(\sum_{k=N+1}^\infty \sum_{l=1}^{m(k)} a_{k,l}|v_{k,l}\rangle\langle v_{k,l}|\Big) \\
&= \lambda_{\max}\Big(\Big(\sum_{l=1}^{m(N)} a_{N,l}\langle\hat v_{N,i}|v_{N,l}\rangle\langle v_{N,l}|\hat v_{N,j}\rangle\Big)_{i,j=1}^{m(N)}\Big) + \sum_{k=N+1}^\infty \sum_{l=1}^{m(k)} a_{k,l} \\
&\le \mathrm{Tr}\Big(\Big(\sum_{l=1}^{m(N)} a_{N,l}\langle\hat v_{N,i}|v_{N,l}\rangle\langle v_{N,l}|\hat v_{N,j}\rangle\Big)_{i,j=1}^{m(N)}\Big) + O(\bar a_N\epsilon) \qquad\text{(using polynomial multiplicity for the tail)} \\
&= \sum_{l=1}^{m(N)}\sum_{i=1}^{m(N)} a_{N,l}\langle\hat v_{N,i}|v_{N,l}\rangle\langle v_{N,l}|\hat v_{N,i}\rangle + O(\bar a_N\epsilon) \\
&\le \bar a_N \sum_{l=1}^{m(N)}\sum_{i=1}^{m(N)} \langle\hat v_{N,i}|v_{N,l}\rangle\langle v_{N,l}|\hat v_{N,i}\rangle + O(\bar a_N\epsilon) = O(\bar a_N).
\end{align*}

J.5 PART 2: CONVERGENCE IN BULK

Now that the spectrum is divided into exponentially separated segments, we can perform a surgical extraction of each $N$-th segment of the spectrum, cutting off both the "head" part $< N$ and the "tail" part $> N$.

Lemma 2 (bulk insensitivity of eigensystem segments). Under the same assumptions as Section J.4, for any $N$, there exists a constant $\epsilon_N > 0$ such that the $N$-th eigensegment is $O(\epsilon)$-close in bulk to the spectrum of the matrix
\[
\Big(\sum_{l=1}^{m(N)} a_{N,l}\langle\hat v_{N,i}|v_{N,l}\rangle\langle v_{N,l}|\hat v_{N,j}\rangle\Big)_{i,j=1}^{m(N)}
\]
for all $\epsilon \in [0, \epsilon_N]$. The constant $\epsilon_N$ and the constant in $O(\epsilon)$ depend only on $\delta_{\mathrm{low},0}, \dots, \delta_{\mathrm{low},N}$ and the Gram matrix of the vectors $|v_{0,1}\rangle, \dots, |v_{N,m(N)}\rangle$.

Intuitively, the lemma states that the eigensystem segments separate very cleanly. First, relative to the $N$-th eigensegment, all the higher-order segments are $O(\epsilon)$-small, and thus ignorable. Second, relative to the $N$-th segment, the only effect of the lower-order segments is to force the $N$-th eigensegment into a safe subspace very close to the orthogonal subspace $\mathrm{Span}(|\hat v_{N,1}\rangle, \dots, |\hat v_{N,m(N)}\rangle)$, within which the lower-order terms $a_{0,1}|v_{0,1}\rangle\langle v_{0,1}|, \dots, a_{N-1,m(N-1)}|v_{N-1,m(N-1)}\rangle\langle v_{N-1,m(N-1)}|$ all vanish.
Stated in another way, the lemma says that the $N$-th segment of the spectrum of $K$ is $O(\epsilon)$-close in bulk to that of $\tilde K$. To obtain $\tilde K$, we first remove the tail $\sum_{n=N+1}^\infty \sum_{l=1}^{m(n)} a_{n,l}|v_{n,l}\rangle\langle v_{n,l}|$, then project onto the space orthogonal to $|v_{0,1}\rangle, \dots, |v_{N-1,m(N-1)}\rangle$ to remove the head, obtaining
\[
\tilde K = \sum_{i,j=1}^{m(N)}\sum_{l=1}^{m(N)} a_{N,l}\,|\hat v_{N,i}\rangle\langle\hat v_{N,i}|v_{N,l}\rangle\langle v_{N,l}|\hat v_{N,j}\rangle\langle\hat v_{N,j}|.
\]
The key of the proof is to ensure that each cut perturbs the eigenvalues by $O(\bar a_N\epsilon)$, so that we extract something that is $O(\epsilon)$-close in bulk to the original eigensegment.

J.5.1 CUTTING OFF THE TAIL

We need:

Theorem 8 (Wielandt–Hoffman inequality (Kato, 1987)). Let $A, B$ be self-adjoint operators such that $C := B - A$ is a trace-class operator. Then we can enumerate the eigenvalues of $A, B, C$ as $\alpha_i, \beta_i, \gamma_i$ (including eigenvalue multiplicity) such that
\[
\sum_i |\alpha_i - \beta_i| \le \sum_i |\gamma_i|. \tag{54}
\]

The tail of the operator $K$ is the part that comes after the $a_{N,l}$ coefficients. It is bounded in trace norm:
\[
\mathrm{Tr}\Big(\sum_{k=N+1}^\infty \sum_{l=1}^{m(k)} a_{k,l}|v_{k,l}\rangle\langle v_{k,l}|\Big) = \sum_{k=N+1}^\infty \sum_{l=1}^{m(k)} a_{k,l} \le \sum_{k=N+1}^\infty \bar a_N\epsilon^{k-N}m(k) \le \bar a_N \sum_{k=N+1}^\infty \epsilon^{k-N}Ak^D = O(\bar a_N\epsilon),
\]
where we use the polynomial bound $m(k) \le Ak^D$. Thus, by Wielandt–Hoffman, removing the tail perturbs the spectrum by only $O(\bar a_N\epsilon)$. Note particularly:
1. all segments from the $0$-th to the $N$-th remain exponentially separated;
2. the perturbed $N$-th segment is $O(\epsilon)$-close in bulk to the original $N$-th segment.

J.5.2 CUTTING OFF THE HEAD

Cutting off the head is simply cutting off the inverted tail. Having cut off the tail, we have a finite-rank operator
\[
\tilde K := \sum_{k=0}^N \sum_{l=1}^{m(k)} a_{k,l}|v_{k,l}\rangle\langle v_{k,l}|
\]
that splits into two reducing subspaces $V, V^\perp$, where $V = \mathrm{Span}(|v_{0,1}\rangle, \dots, |v_{N,m(N)}\rangle)$. It is zero on $V^\perp$ and positive-definite on $V$. Therefore, we drop down from the full Hilbert space to just $V$, where we can reason with matrices. We express $\tilde K$ in matrix form in the orthonormal basis $|\hat v_{k,l}\rangle$, ordered so that $|\hat v_{N,1}\rangle, \dots, |\hat v_{N,m(N)}\rangle$ come before $|\hat v_{0,1}\rangle, \dots, |\hat v_{N-1,m(N-1)}\rangle$:
\[
[\tilde K] = \begin{pmatrix} A & B \\ B^\top & C + D \end{pmatrix},
\]
where the four matrices are
\begin{align*}
A &= \Big(\sum_{l=1}^{m(N)} a_{N,l}\langle\hat v_{N,i}|v_{N,l}\rangle\langle v_{N,l}|\hat v_{N,j}\rangle\Big)_{i,j=1}^{m(N)}, &
B &= \Big(\sum_{l=1}^{m(N)} a_{N,l}\langle\hat v_{N,i}|v_{N,l}\rangle\langle v_{N,l}|\hat v_{n,j}\rangle\Big)_{i\in[1:m(N)],\ (n,j)}, \\
C &= \Big(\sum_{l=1}^{m(N)} a_{N,l}\langle\hat v_{n,j}|v_{N,l}\rangle\langle v_{N,l}|\hat v_{n',j'}\rangle\Big)_{(n,j),(n',j')}, &
D &= \Big(\sum_{k=0}^{N-1}\sum_{l=1}^{m(k)} a_{k,l}\langle\hat v_{n,j}|v_{k,l}\rangle\langle v_{k,l}|\hat v_{n',j'}\rangle\Big)_{(n,j),(n',j')},
\end{align*}
with the indices $(n,j)$ and $(n',j')$ ranging over $n \in [0:N-1]$, $j \in [1:m(n)]$. For a symmetric matrix in block form,
\[
\begin{pmatrix} A & B \\ B^\top & C + D \end{pmatrix}^{-1} = \begin{pmatrix} A^{-1} & 0 \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} A^{-1}BS^{-1}B^\top A^{-1} & -A^{-1}BS^{-1} \\ -S^{-1}B^\top A^{-1} & S^{-1} \end{pmatrix},
\]
where $S = D + C - B^\top A^{-1}B$. We need several crude bounds on $A, B, C, D$ to prove that the first term really is the bulk term and the second term really is an order-$O(\epsilon)$ perturbation upon the bulk term, so that we can safely cut it off using the Wielandt–Hoffman inequality.
1. For each of $A, B, C$, the entries are bounded in absolute value by $O(\bar a_N)$; thus, their spectral radii are bounded by $O(\bar a_N)$.
2. By the Cauchy interlacing law, $\lambda_{\min}(A) \ge \lambda_{\min}([\tilde K]) = \Omega(\bar a_N)$. Thus, the spectrum of $A$ is $\Theta(\bar a_N)$, and the spectrum of $A^{-1}$ is $\Theta(\bar a_N^{-1})$.
3. Notice that the matrix $D$ is constructed similarly to the matrix $[\tilde K]$, except with one more truncation. Therefore, by the same argument, its least eigenvalue is of order $\Omega(\bar a_{N-1}) = \Omega(\bar a_N/\epsilon)$.
4. Therefore, $B^\top A^{-1}$ and $A^{-1}B$ have entries bounded in order $O(1)$, and $B^\top A^{-1}B$ has entries bounded in order $O(\bar a_N)$.
5. Therefore, $S$ has the same spectrum as $D$ up to an order-$O(\bar a_N)$ perturbation. Since $D$ has smallest eigenvalue $\Omega(\bar a_{N-1})$, so does $S$. Therefore, $S^{-1}$ has largest eigenvalue $O(1/\bar a_{N-1}) = O(\epsilon/\bar a_N)$.
Thus, the $N$-th segment of the eigenstructure of $\tilde K$, inverted, is $O(\epsilon)$-close in bulk to the eigenstructure of $A^{-1}$. Inverting again, we find that it is $O(\epsilon)$-close in bulk to the eigenstructure of $A$.

J.6 PART 3: THE SPECIAL CASE OF DOT-PRODUCT KERNELS

Recall that we need to show that, in the standard Gaussian space $L^2(N(0, I_d))$, the operator $K$ defined by the kernel $\sum_n \frac{c_n}{n!}(x^\top\Gamma y)^n$ converges to the Hermite eigensystem. Here, $\Gamma = \mathrm{diag}(\gamma_1, \dots, \gamma_d)$ with $\gamma_1, \dots, \gamma_d > 0$.

Before applying the two generic lemmas, we need to first show that the kernel does have a fast-decaying coefficient system: even though $c_n$ is a fast-decaying coefficient sequence, this does not automatically imply that $K$ has a fast-decaying coefficient system when we express it in the monomial vector system. Once this is done, we apply Section J.4 to conclude that the kernel's spectrum is divided into exponentially separated segments. Then, we "hopscotch" through several eigenstructures, until we show that the $N$-th eigensegment of $K$ is $O(\epsilon)$-close in bulk to the $N$-th eigensegment of a stretched dot-product kernel $c\,e^{\epsilon(x^\top\Gamma y)}$. This argument is nearly the same as the proof of Section J.5, with a small extension to account for the special structure of dot-product kernels. Finally, we hopscotch again, applying Davis–Kahan at every step, to prove that the eigenspaces also converge as $O(\epsilon)$.

J.6.1 COMBINATORICS WITH THE MONOMIAL BASIS

In analogy with the multidimensional Hermite vector system
\[
|h_\alpha\rangle = \bigotimes_{i=1}^d |h_{\alpha_i}\rangle = \prod_{i=1}^d h_{\alpha_i}(x_i),
\]
we define the multidimensional normalized monomial vector system by taking the tensor product of the one-dimensional normalized monomial vectors:
\[
|v_\alpha\rangle = \bigotimes_{i=1}^d |v_{\alpha_i}\rangle = \prod_{i=1}^d \frac{x_i^{\alpha_i}}{\sqrt{(2\alpha_i - 1)!!}}.
\]
Both are vector systems over the polynomially bounded multiplicity counting function
\[
m(n) := \binom{n+d-1}{d-1}.
\]
The $n$-th segment of the monomial vector system consists of $\{|v_\alpha\rangle : |\alpha| = n\}$, and similarly, for Hermite, $\{|h_\alpha\rangle : |\alpha| = n\}$. The Hermite vector system is obtained by the Gram–Schmidt process on the monomial vector system.

Now, consider the form of the dot-product kernel function:
\[
K(x, y) := \sum_{n=0}^\infty \frac{c_n}{n!}\Big(\sum_{i=1}^d \gamma_i x_i y_i\Big)^n = \sum_{n=0}^\infty \frac{c_n}{n!}\sum_{|\alpha|=n} \frac{n!}{\alpha_1!\cdots\alpha_d!}\prod_{i=1}^d \gamma_i^{\alpha_i}x_i^{\alpha_i}y_i^{\alpha_i}.
\]
In bra-ket notation, the operator is
\[
K = \sum_{n=0}^\infty \sum_{\alpha:|\alpha|=n} c_n\Big(\prod_{i=1}^d \gamma_i^{\alpha_i}\frac{(2\alpha_i-1)!!}{\alpha_i!}\Big)|v_\alpha\rangle\langle v_\alpha|.
\]
Since $c_n$ is a fast-decaying sequence, it remains to show that the quantity
\[
\max_{\alpha:|\alpha|=n} \prod_{i=1}^d \gamma_i^{\alpha_i}\frac{(2\alpha_i-1)!!}{\alpha_i!}
\]
grows at most exponentially with $n$; once we show that, we know that the coefficient system is also fast-decaying. Equivalently, using $\frac{(2k-1)!!}{k!} = \binom{2k}{k}/2^k$, we need to show that
\[
\max_{\alpha:|\alpha|=n} \sum_{i=1}^d \Big(\alpha_i\ln(\gamma_i/2) + \ln\binom{2\alpha_i}{\alpha_i}\Big)
\]
grows at most linearly with $n$. Note that, because such a bound does not depend on the choice of the fast-decaying sequence $c_n$, it allows us to change $c_n$ to any other $c'_n$ with the same decay rate $\epsilon$ and still obtain a fast-decaying coefficient system.

The first term is trivial:
\[
\sum_{i=1}^d \alpha_i\ln(\gamma_i/2) \in \Big[n\min_{i\in[1:d]}\ln(\gamma_i/2),\ n\max_{i\in[1:d]}\ln(\gamma_i/2)\Big] = \Theta(n).
\]
To bound the second term, we do some combinatorics. Let $f(k) := \binom{2k}{k}$; then $\frac{f(k+1)}{f(k)} = 4 - \frac{2}{k+1}$ is strictly monotonically increasing, so $f$ is strictly log-convex. Therefore, $\alpha \mapsto \sum_{i=1}^d \ln f(\alpha_i)$ is strictly Schur-convex. Thus, $\sum_{i=1}^d \ln f(\alpha_i)$ achieves its upper bound at $(n, 0, \dots, 0)$, and its lower bound when all $\alpha_i$ are as close to equal as possible.

Upper bound:
\[
\ln f(n) = \ln\binom{2n}{n} = 2n\ln 2 - \tfrac12\ln(\pi n) + O(1/n) = \Theta(n).
\]
Lower bound:
\[
d\ln f(n/d) = d\ln\binom{2n/d}{n/d} = 2n\ln 2 - \tfrac{d}{2}\ln(\pi n/d) + O(d/n) = \Theta(n).
\]
In fact, we have shown that the coefficient block is tightly clustered on the scale that matters: for all $\alpha$ satisfying $|\alpha| = n$,
\[
\sum_{i=1}^d \ln f(\alpha_i) = 2n\ln 2 - O(d\ln n),
\]
so within a segment the coefficients differ by at most a factor polynomial in $n$.
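As a quick numerical check of this combinatorial bound (an illustration of ours, not part of the proof, with assumed example values of $\gamma$), the sketch below evaluates the minimum and maximum of $\sum_i [\alpha_i\ln(\gamma_i/2) + \ln\binom{2\alpha_i}{\alpha_i}]$ over $|\alpha| = n$ and confirms the $\Theta(n)$ growth:

```python
import itertools
import math

def log_coeff_range(n, gammas):
    """Min and max of sum_i [a_i ln(g_i/2) + ln C(2a_i, a_i)] over |alpha| = n."""
    d = len(gammas)
    vals = []
    for alpha in itertools.product(range(n + 1), repeat=d):
        if sum(alpha) == n:
            vals.append(sum(a * math.log(g / 2) + math.log(math.comb(2 * a, a))
                            for a, g in zip(alpha, gammas)))
    return min(vals), max(vals)

for n in [2, 4, 8, 16, 32]:
    lo, hi = log_coeff_range(n, [1.0, 0.7, 0.4])   # assumed gamma values
    print(n, lo / n, hi / n)   # both ratios stay bounded: growth is Theta(n)
```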
J.6.2 CONVERGENCE IN BULK

Like the general case, the proof has multiple parts. First, we apply Section J.4 to conclude that $K$ has exponentially separated eigensegments. Then, we hopscotch through multiple eigensystem segments to show eigensegment convergence in bulk, as in Section J.5.

[Figure 25: Hopscotching through eigensystems. The chain of $O(\epsilon)$-closeness in bulk runs:
$N$-th segment of $K(x,y) = \sum_{n=0}^\infty \frac{c_n}{n!}(\gamma_1 x_1 y_1 + \cdots + \gamma_d x_d y_d)^n$
$\to$ $N$-th segment of $\tilde K(x,y) = \sum_{n=0}^N \frac{c_n}{n!}(\gamma_1 x_1 y_1 + \cdots + \gamma_d x_d y_d)^n$
$\to$ ($0$-th segment of $[\tilde K]^{-1}$)$^{-1}$
$\to$ (eigensystem of just the $a_N$ part of $[\tilde K]^{-1}$)$^{-1}$
$=$ (eigensystem of just the $a_N$ part of $[\tilde K']^{-1}$)$^{-1}$, provided that $c'_N = c_N$
$\to$ ($0$-th segment of $[\tilde K']^{-1}$)$^{-1}$
$\to$ $N$-th segment of $\tilde K'(x,y) = \sum_{n=0}^N \frac{c'_n}{n!}(\gamma_1 x_1 y_1 + \cdots + \gamma_d x_d y_d)^n$
$\to$ $N$-th segment of $K'(x,y) = \sum_{n=0}^\infty \frac{c'_n}{n!}(\gamma_1 x_1 y_1 + \cdots + \gamma_d x_d y_d)^n$; setting $K'(x,y) = c\,e^{\epsilon(x^\top\Gamma y)}$ with $c = c_N/\epsilon^N$,
$\to$ $N$-th segment of $\big\{\big(\frac{c_N}{\epsilon^N}\epsilon^{|\alpha|}\prod_{i=1}^d \gamma_i^{\alpha_i},\ |h_\alpha\rangle\big) : \alpha \in \mathbb{N}_0^d\big\} = \big\{\big(c_N\prod_{i=1}^d \gamma_i^{\alpha_i},\ |h_\alpha\rangle\big) : |\alpha| = N\big\}$.
The new trick here is that we avoid solving for the $N$-th eigensegment of $\tilde K$ directly, by going down and then up again, arriving at a previously solved kernel as if taking a subway.]

J.6.3 CONVERGENCE IN DETAIL

We have successfully proven that the source eigensegment converges in bulk to the target eigensegment. It remains to prove convergence in detail. This requires breaking degeneracies in the source eigensegment whenever the degeneracy is broken in the target eigensegment (but not vice versa), followed by applying Davis–Kahan once per target eigenspace.

Fix some $N$. The $N$-th target eigensegment is
\[
\Big\{\Big(c_N\prod_{i=1}^d \gamma_i^{\alpha_i},\ |h_\alpha\rangle\Big) : \alpha \in \mathbb{N}_0^d,\ |\alpha| = N\Big\}.
\]
Some of the eigenvalues in it may be equal, because $\{\ln\gamma_1, \dots, \ln\gamma_d\}$ may be $\mathbb{N}$-linearly dependent.¹⁵ Therefore, merge the eigenvectors with equal eigenvalues into eigenspaces; the resulting eigenspaces have distinct eigenvalues. Note in particular that this constitutes a static target: the coefficients $c_0, c_1, \dots$ may change, but if $c_N\prod_i \gamma_i^{\alpha_i} = c_N\prod_i \gamma_i^{\alpha'_i}$ for one choice of the coefficients, then it is so for all choices.

Now, fix one such eigenspace $V$ and its corresponding eigenvalue $c_N\zeta$. Let the $N$-th segment of $K$ be
\[
\{(c_N\zeta_{N,1}(K), |v_{N,1}(K)\rangle), \dots, (c_N\zeta_{N,m(N)}(K), |v_{N,m(N)}(K)\rangle)\},
\]
where we do not yet demand that the eigenvalues are all distinct. If some of the eigenvalues are unluckily degenerate, we simply tolerate an arbitrary choice of the eigenvectors for now. As previously argued, this segment is $O(\epsilon)$-close to the target eigensegment in bulk. Therefore, for small enough $\epsilon$, the source eigenvalues $c_N\zeta_{N,1}(K), \dots, c_N\zeta_{N,m(N)}(K)$ will be corralled around their corresponding target eigenvalues, like iron filings around magnets. Because there are only $m(N)$ source eigenvalues to go around, each target eigenvalue can grab only exactly as many source eigenvalues as its multiplicity. In particular, while these "herds" of eigenvalues may be highly degenerate internally, they are separated from other herds by $\bar a_N\Theta(1)$. Consequently, $c_N\zeta$ grabs exactly $\dim V$ source eigenvalues, all stuck within the interval $c_N(\zeta \pm O(\epsilon))$. Enumerate these as $c_N\zeta_{N,i_1}(K), \dots, c_N\zeta_{N,i_{\dim V}}(K)$.
Let their eigenvectors be $|v_{N,i_1}(K)\rangle, \dots, |v_{N,i_{\dim V}}(K)\rangle$, and let the vector space spanned by them be $V_K$. This is a reducing subspace of $K$, and one that we wish to show is $O(\epsilon)$-close to $V$.

Follow through the hopscotching diagram again. At each step in the hopscotching, either there is an exact equality, in which case the reducing subspace is unchanged, or there is a perturbation on the operator that is $O(\epsilon)$ relative to the operator, in which case, by Davis–Kahan, the reducing subspace is perturbed by an angle of only $O(\epsilon)$.

We spell this out explicitly for the top half of the diagram. At the first step, $K = \sum_{n=0}^\infty \frac{c_n}{n!}(x^\top\Gamma y)^n$ is perturbed by truncating the tail $\sum_{n=N+1}^\infty \frac{c_n}{n!}(x^\top\Gamma y)^n$, which has operator norm $O(c_N\epsilon)$. Since the gap between $c_N\zeta_{N,i_1}(K), \dots, c_N\zeta_{N,i_{\dim V}}(K)$ and all other eigenvalues of $K$ is of order $\bar a_N\Theta(1)$, by Davis–Kahan, truncating the tail only perturbs the reducing subspace $V_K$ by an angle $O(\epsilon)$.

The second step is an exact identity: under a matrix inverse, the eigenspaces are preserved, even though the eigenvalues are inverted.

In the third step, the inverted head is truncated. The inverted head has operator norm of order $O(\epsilon/\bar a_N)$, while the remaining operator has operator norm of order $O(1/\bar a_N)$. Since the gap between $(c_N\zeta_{N,i_1}(K))^{-1}, \dots, (c_N\zeta_{N,i_{\dim V}}(K))^{-1}$ and all other inverted eigenvalues of $[\tilde K]^{-1}$ is of order $\frac{1}{\bar a_N}\Theta(1)$, truncating the inverted head only perturbs the reducing subspace by another angle $O(\epsilon)$.

¹⁵Even if they are $\mathbb{N}$-linearly independent, the gaps between successive eigenvalues will get smaller for larger $N$ if $d \ge 3$. By the standard counting argument in Diophantine approximation theory, the gap decays at least as fast as $N^{-(d-2)}$.
PREDICTING KERNEL REGRESSION LEARNING CURVES FROM ONLY RAW DATA STATISTICS

Dhruva Karkada∗,1, Joseph Turnbull∗,1, Yuxi Liu1, James B. Simon1, {dkarkada,joeyturnbull,yuxi

∗Joint primary authorship. Work completed during summer internship at Imbue. 1UC Berkeley; Imbue. Code to reproduce all experiments available at https://github.com/JoeyTurn/hermite-eigenstructure-ansatz.

ABSTRACT

We study kernel regression with common rotation-invariant kernels on real datasets including CIFAR-5m, SVHN, and ImageNet. We give a theoretical framework that predicts learning curves (test risk vs. sample size) from only two measurements: the empirical data covariance matrix and an empirical polynomial decomposition of the target function $f_*$. The key new idea is an analytical approximation of a kernel's eigenvalues and eigenfunctions with respect to an anisotropic data distribution. The eigenfunctions resemble Hermite polynomials of the data, so we call this approximation the Hermite eigenstructure ansatz (HEA). We prove the HEA for Gaussian data, but we find that real image data is often "Gaussian enough" for the HEA to hold well in practice, enabling us to predict learning curves by applying prior results relating kernel eigenstructure to test risk. Extending beyond kernel regression, we empirically find that MLPs in the feature-learning regime learn Hermite polynomials in the order predicted by the HEA. Our HEA framework is a proof of concept that an end-to-end theory of learning which maps dataset structure all the way to model performance is possible for nontrivial learning algorithms on real datasets.

1 INTRODUCTION

The quest to understand machine learning is largely motivated by a desire to predict and explain learning behavior in realistic settings. This means that, sooner or later, scientists of machine learning must develop theory that works for real datasets, somehow incorporating task structure into predictions of model performance, optimal hyperparameters, and other objects of interest. This necessity has been the elephant in the room of much of deep learning theory for some time: despite much progress in the study of neural network training and generalization, it has proven difficult to move beyond simplistic models of data and make analytical predictions applicable to real data distributions.

The central difficulty is of course the complexity of real data. There can be no full analytical description of any real data distribution, so it is difficult to see how we might develop mathematical theory that describes how such a dataset is learned. How might we hope to proceed?

One way forward may be to identify a comparatively succinct "reduced description" of a data distribution that characterizes its structure, at least insofar as a particular class of learner is concerned. We would like this reduced description to be sufficient to predict quantities of interest yet minimal enough to be a significant reduction in complexity. Ideally, we would like the theory that makes predictions from this reduced description to be mathematically simple, and we would like the description itself to give some insight into how the class of learner in question sees the data.

In this paper, we present such a reduced description of high-dimensional datasets that is suitable for describing their learning by kernel ridge regression (KRR) with rotation-invariant kernels. We find that just the data covariance matrix $\Sigma := \mathbb{E}[xx^\top]$, together with a Hermite decomposition of
the target function,² is sufficient to characterize learning by rotation-invariant kernels. We obtain this reduced description, which we term "Hermite eigenstructure," from a study of Gaussian data, but we nonetheless find it predictive for complex image datasets including CIFAR-5m, SVHN, and ImageNet. From just the covariance matrix, we can predict kernel eigenstructure and learning curves for synthetic functions. Using the true labels to estimate the target function's Hermite decomposition, we are additionally able to predict KRR learning curves on real tasks. Unlike previous approaches for predicting learning curves, this method does not require numerically constructing or diagonalizing a kernel matrix to find the kernel eigensystem.

[Figure 1: We provide an end-to-end theory of learning for kernel ridge regression (KRR) that maps minimal statistics of the data distribution to test-time performance. (Top left.) KRR implicitly consists of two steps: (1) the kernel maps the data to high-dimensional nonlinear features $x \mapsto \psi(x)$, then (2) it fits a linear estimator to these features. Let the covariance in data space be $\mathbb{E}[xx^\top] = U\Gamma U^\top$ and let the covariance in feature space be $\mathbb{E}[\psi(x)\psi(x)^\top] = \Xi\Lambda\Xi^\top$. (Lower left.) We introduce an ansatz (blue box) that predicts the feature covariance statistics $(\Xi, \Lambda)$ from only the data covariance $(U, \Gamma)$ and the functional form of $\psi(\cdot)$. This is sufficient to predict average-case test error using known results (green box). (Top right.) We are able to predict learning curves for KRR on image tasks without requiring omniscient knowledge of the feature statistics (i.e., without ever constructing or diagonalizing a kernel matrix); shown: Gaussian kernel @ CIFAR-5m, test MSE vs. n training samples for several binary tasks. (Bottom right.) We are able to accurately predict the KRR sample complexity, including constant prefactors, for learning polynomials of the ImageNet dataset; shown: Laplace kernel @ Imagenet32, empirical vs. predicted sample complexity $n_{\mathrm{lrn}}$ for linear through quartic targets. See Figure 3 for additional plots and Section D for experimental details.]

Our approach relies on recent results in the theory of KRR which assert that knowledge of the kernel's eigenstructure with respect to a data measure is sufficient to predict learning behavior (Sollich (2001); Bordelon et al. (2020); Jacot et al. (2020); Simon et al. (2021)). These works provide a set of equations that map this eigenstructure to predictions of test-time error. Our central observation is that, despite the complexity of high-dimensional datasets and the great variety of rotation-invariant kernels, this kernel eigenstructure is often very close to a simple analytical form expressible in terms of Hermite polynomials in the original data space. We term this claim the "Hermite eigenstructure ansatz," and we identify a (broad) set of conditions under which it empirically holds.
Concretely, our contributions are as follows:
• We propose the Hermite eigenstructure ansatz (Section 4), a closed-form expression for the eigensystem of rotation-invariant kernels on real datasets. We find empirically that it holds to an excellent approximation for real image datasets (Figure 2).
• We prove that the HEA holds in the case of Gaussian data for two limiting cases of the kernel function (Theorems 1 and 2).
• We use the HEA to predict KRR learning curves on CIFAR-5m, SVHN, and ImageNet from only data covariance statistics and a Hermite decomposition of the target function (Figures 1 and 3).
• We empirically find that MLPs in the feature-learning regime learn Hermite polynomials of CIFAR-5m in the same order as the HEA predicts for KRR (Figure 4).

²See Section 3.2 and Section A for a review of Hermite polynomials.

2 RESEARCH CONTEXT AND RELATED WORKS

Kernel models as proxies for neural networks. Our motivation for studying KRR comes from the "neural tangent kernel" (NTK) line of work, which finds that suitably parametrized infinite-width networks are equivalent to KRR, and that for MLPs the kernel function is rotation-invariant (Neal, 1996; Lee et al., 2018; Jacot et al., 2018). Kernel methods have proven useful as models of network dynamics and optimization (Chizat et al., 2019; Du et al., 2019; Bordelon & Pehlevan, 2021).

Learning curves for KRR. Motivated by the NTK, many recent works have converged on a set of equations which predict KRR's test-time error from the kernel and task eigenstructure (Sollich, 2001; Bordelon et al., 2020; Jacot et al., 2020; Simon et al., 2021). This "KRR eigenframework" depends on the kernel's eigenvalues and eigenfunctions with respect to the data distribution. Our main result is an approximate analytical expression for these eigenvalues and eigenfunctions, permitting the inductive bias of KRR to be studied directly in the data space. Section C reviews this eigenframework.

Exactly-solved cases of kernel eigenstructure. Exact kernel diagonalizations with machine-learning-relevant kernels are known in many highly symmetric settings, including stationary kernels on the torus $T^d$ and rotation-invariant kernels on the sphere $S^d$ (Mei & Montanari, 2019). Moving to anisotropic domains, Ghorbani et al. (2020) gave the eigenstructure of rotation-invariant kernels when the measure is a "product of spheres" of different radii. The case of a Gaussian kernel on a Gaussian measure was solved exactly by Zhu et al. (1997). Our Hermite eigenstructure ansatz is consistent with all these results and unifies them in a limiting case.

Modeling data with a Gaussian measure. A developing body of literature argues that MLPs' learning of complex data distributions is similar to the behavior one would see if the data were Gaussian with the same covariance (Goldt et al., 2020; Refinetti et al., 2023). We broadly adopt this lens in the study of KRR and find that, indeed, the data is well-modeled as Gaussian.

Single- and multi-index models. Much recent literature has studied MLPs' learning of single- and multi-index functions, which depend only on a rank-one or rank-$k$ projection of the input $x$ (Dudeja & Hsu, 2018; Bietti et al., 2022; Dandi et al., 2023; Lee et al., 2024; Mousavi-Hosseini et al., 2024). This work partially motivated our study, and the multidimensional Hermite basis we use in this work is a basis of multi-index functions. Prior work in this vein has found that higher-order Hermite polynomials require more samples or gradient steps to learn.
Two ways in which we depart from this body of work are that (a) we seek to predict the value of the test error (including constant prefactors), not just asymptotics or scaling laws, and (b) we study anisotropic data, which allows application of our results to real datasets.

Analytical models of data. Several recent works have proposed theoretical models for the hierarchical structure in image and text data with the aim of understanding neural network performance on such datasets (Cagnetta et al., 2024; Sclocchi et al., 2025; Cagnetta & Wyart, 2024). Our work in this paper is undertaken in a similar spirit.

3 PRELIMINARIES

We will work in a standard supervised setting: our dataset consists of $n$ samples $X = \{x_i\}_{i=1}^n$ drawn i.i.d. from a measure $\mu$ over $\mathbb{R}^d$, and we wish to learn a target function $f_*$ from noisy training labels $y = \{y_i\}_{i=1}^n$, where $y_i = f_*(x_i) + N(0, \varepsilon^2)$ with noise level $\varepsilon \ge 0$. We will assume with minimal loss of generality that $\mu$ has mean zero: $\mathbb{E}_{x\sim\mu}[x] = 0$. Once a learning rule returns a predicted function $\hat f$, we evaluate its test mean-squared error
\[
\mathrm{MSE}_{\mathrm{te}} = \mathbb{E}_{x\sim\mu}\big[(f_*(x) - \hat f(x))^2\big] + \varepsilon^2.
\]
We write $\langle g, h\rangle_\mu := \mathbb{E}_{x\sim\mu}[g(x)h(x)]$ and $\|g\|^2_\mu := \langle g, g\rangle_\mu$ for the $L^2$ inner product and norm with respect to $\mu$. We write $(a_i)_{i\in I}$ to denote an ordered sequence with index set $I$, writing only $(a_i)$ when the index set is clear from context.

3.1 KERNEL REGRESSION AND KERNEL EIGENSYSTEMS

KRR is a learning rule specified by a positive-semidefinite "kernel function" $K : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}$ and a ridge parameter $\delta \ge 0$. Given a dataset $(X, y)$, KRR returns the predicted function
\[
\hat f(x) = k_{xX}(K_{XX} + \delta I_n)^{-1} y, \tag{1}
\]
where the vector $[k_{xX}]_i = K(x, x_i)$ and matrix $[K_{XX}]_{ij} = K(x_i, x_j)$ contain evaluations of the kernel function. In this paper, we will restrict our attention to two special classes of kernel:

Definition 1 (Rotation-invariant kernel). A kernel function is rotation-invariant if it takes the form $K(x, x') = K(\|x\|, \|x'\|, x^\top x')$.

Such a kernel $K$ is called "rotation-invariant" because $K(Ux, Ux') = K(x, x')$ for any orthonormal matrix $U$. Many widely-used kernels are rotation-invariant, including the Gaussian kernel $K(x, x') = e^{-\frac{1}{2\sigma^2}\|x - x'\|^2}$, the Laplace kernel $K(x, x') = e^{-\frac{1}{\sigma}\|x - x'\|}$, and the Neural Network Gaussian Process (NNGP) kernels and NTKs of infinite-width MLPs. We will be particularly interested in a subset of rotation-invariant kernels which discard the explicit radial dependence:

Definition 2 (Dot-product kernel). A kernel function is a dot-product kernel if it takes the form $K(x, x') = K(x^\top x')$.

For a dot-product kernel to be positive-semidefinite on all domains, it must admit a Taylor series $K(x^\top x') = \sum_{l\ge0} \frac{c_l}{l!}(x^\top x')^l$ with nonnegative level coefficients $c_l \ge 0$ (Schoenberg, 1942). We will find it useful to describe dot-product kernels in terms of their level coefficients $(c_l)_{l\ge0}$.

We would like to study arbitrary rotation-invariant kernels, but it is easier to study dot-product kernels, which admit the above series expansion. Fortunately, a rotation-invariant kernel is a dot-product kernel when the domain is restricted to a sphere, and if we know that our data has typical norm $r$, we may approximate the rotation-invariant kernel as the dot-product kernel which matches it on $rS^{d-1}$:

Definition 3 (On-sphere level coefficients). The on-sphere level coefficients of a rotation-invariant kernel $K$ at a radius $r > 0$ are the nonnegative sequence $\mathrm{coeffs}(K, r) := (c_l)_{l\ge0}$ such that
\[
K(x, x') = \sum_{l\ge0} \frac{c_l}{l!}(x^\top x')^l \quad\text{for all } x, x' \text{ such that } \|x\| = \|x'\| = r. \tag{2}
\]
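As a worked instance of Definition 3 (a sketch of ours, with a hypothetical helper name): on the sphere $\|x\| = \|x'\| = r$, the Gaussian kernel factors as $e^{-\|x-x'\|^2/(2\sigma^2)} = e^{-r^2/\sigma^2}\, e^{x^\top x'/\sigma^2}$, so its on-sphere level coefficients are $c_l = e^{-r^2/\sigma^2}\sigma^{-2l}$:

```python
import numpy as np

def gaussian_onsphere_coeffs(sigma2, r2, L):
    """On-sphere level coefficients c_0..c_{L-1} of the Gaussian kernel.

    On ||x|| = ||x'|| = r:  exp(-||x - x'||^2 / (2 sigma^2))
      = exp(-r^2 / sigma^2) * exp(x.x' / sigma^2),
    so c_l = exp(-r^2 / sigma^2) * sigma^(-2l).
    """
    return np.exp(-r2 / sigma2) * sigma2 ** -np.arange(L, dtype=float)
```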
We give the on-sphere level coefficients for various kernels in Section B.

An eigenfunction of a kernel $K$ with respect to a measure $\mu$ is a function $\varphi$ such that $\mathbb{E}_{x'\sim\mu}[K(x, x')\varphi(x')] = \lambda\varphi(x)$ for some $\lambda \ge 0$. By Mercer's Theorem (Mohri et al., 2018, Theorem 6.2), any compact kernel admits a complete basis of orthonormal eigenfunctions with $\langle\varphi_i, \varphi_j\rangle_\mu = \delta_{ij}$ and may be spectrally decomposed³ as $K(x, x') = \sum_i \lambda_i\varphi_i(x)\varphi_i(x')$. We will write $\mathrm{eigensystem}(\mu, K) = (\lambda_i, \varphi_i)_{i=1}^\infty$ to denote the sequence of all eigenpairs, indexed in decreasing eigenvalue order ($\lambda_i \ge \lambda_{i+1}$) unless otherwise specified. It will prove useful to decompose the target function in the kernel eigenbasis as $f_*(x) = \sum_i v_i\varphi_i(x)$, where $(v_i)$ are eigencoefficients.

³Here is an intuitive description of a kernel eigensystem, shown visually in Figure 1. Any kernel function $K$ may be viewed as an inner product in a high-dimensional feature space: $K(x, x') = \langle\psi(x), \psi(x')\rangle$. Consider mapping the dataset into this high-dimensional space and then computing the principal components of $\Sigma_\psi := \mathbb{E}_x[\psi(x)\psi^\top(x)] = \Xi\Lambda\Xi^\top$. Each eigenvalue $\lambda_i$ is a kernel eigenvalue. The corresponding eigenfunction is a projection onto the $i$-th principal direction: $\varphi_i(x) = \lambda_i^{-1/2}\langle\psi(x), \xi_i\rangle$.

3.2 HERMITE POLYNOMIALS AS A NATURAL BASIS FOR GAUSSIAN DATA

Throughout, we write $(h_k)_{k\ge0}$ for the normalized probabilist's Hermite polynomials. These are the orthogonal polynomials for the standard Gaussian measure, satisfying $\mathbb{E}_{x\sim N(0,1)}[h_k(x)h_m(x)] = \delta_{km}$. The first few such polynomials are $h_0(x) = 1$, $h_1(x) = x$, $h_2(x) = \frac{1}{\sqrt2}(x^2 - 1)$. See Section A for a review of Hermite polynomials.

We can use these 1D Hermite polynomials to construct an orthonormal basis for a multivariate Gaussian measure $x \sim N(0, \Sigma)$ with positive-definite covariance $\Sigma \succ 0$. First we diagonalize the covariance as $\Sigma = U\Gamma U^\top$ with orthogonal matrix $U = [u_1 \cdots u_d]$ and diagonal matrix $\Gamma = \mathrm{diag}(\gamma_1, \dots, \gamma_d)$. Then for any multi-index $\alpha \in \mathbb{N}_0^d$, we define
\[
h^{(\Sigma)}_\alpha(x) := \prod_{i=1}^d h_{\alpha_i}(z_i), \quad\text{where } z = \Gamma^{-1/2}U^\top x. \tag{3}
\]
In Equation (3), the elements of $\alpha$ specify the order of the Hermite polynomial along each principal direction of data covariance. These multidimensional Hermite polynomials are orthonormal, satisfying $\mathbb{E}_{x\sim N(0,\Sigma)}\big[h^{(\Sigma)}_\alpha(x)h^{(\Sigma)}_{\alpha'}(x)\big] = \delta_{\alpha\alpha'}$. In the next section, we assert that this naïve guess of basis is in fact often close to the true basis of kernel eigenfunctions for synthetic and real datasets.
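A minimal sketch of Equation (3) (our illustration; `multivariate_hermite` is a hypothetical helper name), using NumPy's probabilist's-Hermite (HermiteE) utilities:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def multivariate_hermite(alpha, X, Sigma):
    """Evaluate h_alpha^{(Sigma)} (Equation 3) at the rows of X.

    alpha: length-d multi-index; X: (n, d) array of samples; Sigma: (d, d)
    covariance. Principal directions follow eigh's ascending-eigenvalue order.
    """
    gam, U = np.linalg.eigh(Sigma)          # Sigma = U diag(gam) U^T
    Z = (X @ U) / np.sqrt(gam)              # whitened coordinates z
    out = np.ones(len(X))
    for i, k in enumerate(alpha):
        c = np.zeros(k + 1)
        c[k] = 1.0                          # select He_k
        # He_k / sqrt(k!) is the normalized probabilist's Hermite polynomial
        out *= hermeval(Z[:, i], c) / math.sqrt(math.factorial(k))
    return out
```

Orthonormality can be sanity-checked by Monte Carlo: for `X` drawn from $N(0, \Sigma)$, the empirical mean of `multivariate_hermite(a, X, Sigma) * multivariate_hermite(b, X, Sigma)` approaches $\delta_{ab}$.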
We begin by explicitly defining our Hermite eigensystem:

Definition 4 (Hermite eigensystem). Given a data covariance matrix $\Sigma = U\Gamma U^\top$ and a sequence of level coefficients $(c_l)$, we define the $(\Sigma, (c_l))$-Hermite eigensystem to be the set of (scalar, function) pairs
$$\mathrm{HE}(\Sigma, (c_l)) = \{(\lambda_\alpha, \varphi_\alpha) \text{ for all } \alpha \in \mathbb{N}_0^d\} \qquad (4)$$
where for each multi-index $\alpha$ the proposed eigenvalue and eigenfunction are constructed as
$$\lambda_\alpha = c_{|\alpha|}\cdot\prod_{i=1}^d \gamma_i^{\alpha_i} \quad \text{and} \quad \varphi_\alpha = h_\alpha^{(\Sigma)}, \qquad (5)$$
where $|\alpha| = \sum_i \alpha_i$ and $h_\alpha^{(\Sigma)}$ is the multivariate Hermite polynomial given in Equation (3).

The $(\Sigma, (c_l))$-Hermite eigensystem is a set of Hermite polynomials $\varphi_\alpha$ and associated positive scalars $\lambda_\alpha$, one for each multi-index $\alpha \in \mathbb{N}_0^d$. The eigenvalues $(\lambda_\alpha)$ are monomials in the data covariance eigenvalues $(\gamma_i)$, rescaled by the appropriate level coefficient $c_{|\alpha|}$.

We now present the Hermite eigenstructure ansatz, which asserts that this set of (scalar, function) pairs is in fact a close match to the true kernel eigensystem. Let $K$ be a rotation-invariant kernel and let $\mu$ be a measure over $\mathbb{R}^d$ with zero mean. Then let:

• $\Sigma = \mathbb{E}_{x\sim\mu}[xx^\top]$ be the data covariance matrix,
• $r = \mathrm{Tr}[\Sigma]^{1/2}$ be the root-mean-squared data norm, and
• $(c_l) = \mathrm{coeffs}(K, r)$ be the level coefficients of $K$ restricted to the sphere $r\mathbb{S}^{d-1}$.

The Hermite eigenstructure ansatz asserts that
$$\mathrm{eigensystem}(\mu, K) \approx \mathrm{HE}(\Sigma, (c_l)). \qquad \text{(HEA)}$$
That is, the true kernel eigensystem is approximately equal to the $(\Sigma, (c_l))$-Hermite eigensystem given in Definition 4.

[Figure 2 here. Top row: "theory vs empirics: kernel spectrum"; bottom row: "theory vs empirics: kernel eigenbasis (grouped in spectral bins)". Columns: Gaussian kernel @ Gaussian data (modes colored by polynomial degree: constant, linear, quadratic, cubic, quartic), Gaussian kernel @ CIFAR-10, Laplace kernel @ SVHN, ReLU NTK @ Imagenet32.]

Figure 2: The Hermite eigenstructure ansatz (HEA) accurately predicts the eigenvalues (top) and eigenfunctions (bottom) of various kernel/dataset combinations. For four kernel/dataset settings (columns), we compute the empirical kernel eigensystem $\{(\lambda_i^{\mathrm{(emp)}}, \varphi_i^{\mathrm{(emp)}})\}$ and compare to the theoretical eigenpairs $\{(\lambda_i^{\mathrm{(th)}}, \varphi_i^{\mathrm{(th)}})\}$ obtained from Definition 4 and indexed in order of decreasing $\lambda_i^{\mathrm{(th)}}$. In the top plot in each column, the $i$-th point from the top right has coordinates $(\lambda_i^{\mathrm{(emp)}}, \lambda_i^{\mathrm{(th)}})$ and its color indicates the polynomial degree of $\varphi_i^{\mathrm{(th)}}$. In the bottom plot in each column, we bin both the predicted and empirical eigenfunctions into logarithmic spectral bins and visualize the pairwise subspace overlap (Equation (44)), with axes matching the top plot. Grey pixels indicate bins with no eigenvalues. In all plots, concentration along the diagonal indicates theory-experiment match. See Section D.2 for further explanation and experimental details.
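Definition 4 is directly computable. A minimal sketch follows (illustrative helper name; we truncate at a maximum level since $\mathbb{N}_0^d$ is infinite):

```python
import numpy as np
from itertools import combinations_with_replacement

def hermite_eigenvalues(gamma, c, max_level):
    # lambda_alpha = c_{|alpha|} * prod_i gamma_i^{alpha_i}  (Equation (5)),
    # enumerated over all multi-indices with |alpha| <= max_level
    d = len(gamma)
    pairs = []
    for level in range(max_level + 1):
        for idx in combinations_with_replacement(range(d), level):
            alpha = np.bincount(np.array(idx, dtype=int), minlength=d)
            pairs.append((c[level] * np.prod(gamma**alpha), tuple(alpha)))
    return sorted(pairs, key=lambda p: -p[0])  # decreasing eigenvalue order
```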
The HEA is a strong claim: it asserts that the kernel eigensystem, to a good approximation, has a simple analytical form which depends only on the second-order statistics of $\mu$ and no higher moments, and that the kernel eigenfunctions are multivariate Hermite polynomials independent of the kernel chosen (so long as it is rotation-invariant). Rather than a provable fact, the HEA should be treated as a falsifiable claim that may hold well or poorly in any given setting. In practice, with rotation-invariant kernels and high-dimensional image datasets, we find that it often holds quite well. Figure 2 examines four such settings, finding that in each, both the kernel spectrum and eigenfunctions are well-predicted by the HEA.

4.1 THE HEA FOR GAUSSIAN DATA: SOME INTUITION AND TWO THEOREMS

When might we expect the HEA to hold? To gain the central intuition, it is sufficient to consider the simple case of univariate Gaussian data $x \sim \mu = \mathcal{N}(0, \gamma)$ and the Gaussian kernel $K_\sigma(x, x') = e^{-(x-x')^2/2\sigma^2}$. This kernel admits the feature map $K_\sigma(x, x') = \langle\psi_\sigma(x), \psi_\sigma(x')\rangle$ where
$$\psi_\sigma(x) = e^{-\frac{x^2}{2\sigma^2}}\cdot\Big(1,\ \frac{x}{\sigma},\ \frac{x^2}{\sqrt{2}\sigma^2},\ \cdots,\ \frac{x^l}{\sqrt{l!}\,\sigma^l},\ \cdots\Big). \qquad (6)$$
We would like to find the directions of principal covariance of $\psi_\sigma(x)$. Let us suppose that $\sigma^2 \gg \gamma$: the kernel width dominates the width of the data distribution. Examining Equation (6), we can make two observations. First, the exponential prefactor will be close to one, and we may approximate $\psi_\sigma$ componentwise as $[\psi_\sigma(x)]_l \approx \frac{x^l}{\sqrt{l!}\sigma^l}$. This amounts to approximating our kernel as $K_\sigma(x, x') = \sum_l \frac{(xx')^l}{\sigma^{2l}l!}$, that is, as a dot-product kernel with coefficients $c_l = \sigma^{-2l}$. Second, each component of $\psi_\sigma$ will dominate all subsequent components:
$$\mathbb{E}_x\big[[\psi_\sigma(x)]_l^2\big] \propto \sigma^{-2l}\gamma^l \ \gg\ \mathbb{E}_x\big[[\psi_\sigma(x)]_{l+1}^2\big] \propto \sigma^{-2(l+1)}\gamma^{l+1}. \qquad (7)$$
Since the first element of $\psi_\sigma$ is by far the largest (and since we do not center $\psi_\sigma$ before computing eigendirections), the first direction of principal variation will correspond to $\varphi_0(x) \approx 1$, with variance $\lambda_0 \approx 1$.⁴ The next direction will correspond to $\varphi_1(x) \approx \gamma^{-1/2}x$ with variance $\lambda_1 \approx \sigma^{-2}\gamma$. The next eigenfunction must incorporate the $x^2$ direction, but we have a problem: $x^2$ is not orthogonal to $\varphi_0(x) = 1$. We must therefore orthogonalize it with respect to $\varphi_0$ against our measure $\mu$ in the usual Gram-Schmidt fashion. This yields $\varphi_2(x) \approx \frac{1}{\sqrt{2}}(\gamma^{-1}x^2 - 1)$, which using standard formulas for Gaussian integrals gives an eigenvalue
$$\lambda_2 = \int K(x, x')\varphi_2(x)\varphi_2(x')\,d\mu(x)\,d\mu(x') \approx \sigma^{-4}\gamma^2. \qquad (8)$$
Continuing this process to higher orders, we find that the kernel eigensystem matches that predicted by the HEA: $\varphi_l$ is the $l$-th orthogonal polynomial with respect to our Gaussian measure, that is, the Hermite polynomial $h_l(\gamma^{-1/2}x)$, and the $l$-th eigenvalue is $\lambda_l \approx \sigma^{-2l}\gamma^l = c_l\gamma^l$. The corrections hidden by every "$\approx$" in this derivation are of relative size $O(\sigma^{-2}\gamma)$ and thus vanish as $\sigma$ grows.⁵

This same analysis holds for any dot-product kernel $K(x, x') = \sum_l \frac{c_l}{l!}(xx')^l$ with level coefficients $(c_l)$ such that $\frac{c_{l+1}\gamma}{c_l} \ll 1$. It can, with some difficulty, be further extended to apply to multivariate Gaussian data $x \sim \mathcal{N}(0, \Sigma)$. But what if the kernel is not a dot-product kernel, such as the Laplace kernel $K(x, x') = e^{-\|x-x'\|/\sigma}$? Unlike the Gaussian kernel, the Laplace kernel is not well-approximated by a dot-product kernel even at large width because of its nonanalyticity at zero. However, like all rotation-invariant kernels, the Laplace kernel is a dot-product kernel when restricted to a sphere (and will be close to a dot-product kernel when restricted to a spherical shell whose thickness is not too big).

⁴We index eigenmodes from 0 instead of 1 here to match the polynomial order $l$.
⁵Were we to repeat this calculation with a different measure $\mu$ for $x$, we would obtain the orthogonal polynomials with respect to $\mu$ as eigenfunctions. For example, if $\mu = U[-1, 1]$, we get the Legendre polynomials.
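The 1D intuition above is easy to check numerically. The following sketch (with illustrative parameter values) estimates the kernel operator's eigenvalues by diagonalizing $K/n$ on samples and compares them to $c_l\gamma^l = \sigma^{-2l}\gamma^l$; agreement should hold up to relative corrections of order $\gamma/\sigma^2$.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, sigma, n = 0.5, 5.0, 2000            # sigma^2 >> gamma: wide-kernel regime
x = rng.normal(0.0, np.sqrt(gamma), size=n)
K = np.exp(-(x[:, None] - x[None, :])**2 / (2 * sigma**2))
empirical = np.sort(np.linalg.eigvalsh(K / n))[::-1]  # top operator eigenvalues
predicted = [sigma**(-2 * l) * gamma**l for l in range(4)]
print(empirical[:4])  # should be close to `predicted`, mode by mode
print(predicted)
```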
In such a case, we will require the data to be high-dimensional: for data with a high effective dimension $d_{\mathrm{eff}} := \mathrm{Tr}[\Sigma]^2/\mathrm{Tr}[\Sigma^2] \gg 1$, samples will tend to concentrate in norm.⁶ We may thus safely approximate any rotation-invariant $K$ as a dot-product kernel.

Having given an intuitive derivation of the HEA for Gaussian data, we now move to formal statements. Our first theorem states that the HEA holds for the Gaussian kernel at large width.

Theorem 1 (The HEA holds for a wide Gaussian kernel on a Gaussian measure). Let $\mu = \mathcal{N}(0, \Sigma)$ be a multivariate Gaussian measure and let $K_\sigma(x, x') = e^{-\|x-x'\|^2/2\sigma^2}$ be the Gaussian kernel with width $\sigma$. Let $r = \mathrm{Tr}[\Sigma]^{1/2}$ and let $(c_l) = \mathrm{coeffs}(K_\sigma, r)$, which yields $c_l = \sigma^{-2l}e^{-r^2/\sigma^2}$. Then: as $\sigma \to \infty$, $\mathrm{eigensystem}(\mu, K_\sigma) \to \mathrm{HE}(\Sigma, (c_l))$.

Proof sketch (full proof in Section H). Mehler's formula gives the Gaussian kernel's eigensystem exactly (Mehler, 1866). Taking $\sigma \to \infty$ in the resulting expressions yields agreement with the HEA.

Our second theorem applies to dot-product kernels with fast-decaying level coefficients.

Theorem 2 (The HEA holds for a fast-decaying dot-product kernel on a Gaussian measure). Let $\mu = \mathcal{N}(0, \Sigma)$ be a multivariate Gaussian measure with covariance $\Sigma \succ 0$ and let
$$K_{(c_l)}(x, x') = \sum_{l=0}^\infty \frac{c_l}{l!}(x^\top x')^l$$
be a dot-product kernel with coefficients $c_l \ge 0$ such that $c_{l+1} \le \varepsilon\cdot c_l$ for some $\varepsilon > 0$. Then: as $\varepsilon \to 0$, $\mathrm{eigensystem}(\mu, K_{(c_l)}) \to \mathrm{HE}(\Sigma, (c_l))$ linearly in $\varepsilon$.

Proof sketch. Our proof formalizes the intuitive "Gram-Schmidt" derivation of the HEA given above. We use perturbation theory to show that the kernel eigenstructure splits into exponentially-separated segments, with the $l$-th segment's eigenstructure determined almost fully by the $l$-th order term of $K_{(c_l)}$. Due to the complexity of the proof, we break it into stages: we rigorously state and prove the one-dimensional case in Section I, then state and prove the general case in Section J.

4.2 CONDITIONS FOR SUCCESS: FAST DECAY OF $c_l$, HIGH DATA DIMENSION, AND A "GAUSSIAN ENOUGH" DATA DISTRIBUTION

The intuitive theory above suggests three conditions required for the HEA to hold reasonably well. Here we list these conditions and give empirical confirmation that breaking any one usually causes the HEA to fail.

1) Fast decay of level coefficients. As discussed in Section 4.1, we need $c_l \gg \gamma_1 c_{l+1}$ for the Gram-Schmidt process underlying the HEA to work. In Figure 13, we show that as we decrease the Gaussian kernel's width (and thus increase $\frac{c_{l+1}}{c_l}$) on a fixed dataset, the HEA eventually breaks.

2) High data dimension (for some kernels). As previously discussed, concentration of norm (via high $d_{\mathrm{eff}}$) is required if we are to approximate an arbitrary rotation-invariant kernel as a dot-product kernel. In Figure 14, we show that for the Laplace kernel and ReLU NTK, agreement with the HEA worsens as $d_{\mathrm{eff}}$ decreases. However, since the Gaussian kernel is smooth at $x = x'$, it does not require concentration of norm, and low $d_{\mathrm{eff}}$ is fine (Figure 15).

3) "Gaussian enough" data distribution.

⁶For Gaussian data $x \sim \mathcal{N}(0, \Sigma)$, the relative variance of the norm is $\mathrm{Var}[\|x\|^2]/\mathbb{E}[\|x\|^2]^2 = 2/d_{\mathrm{eff}}$, which falls to zero as $d_{\mathrm{eff}}$ grows.
Common image datasets are complex enough to roughly satisfy simple tests of Gaussianity, such as coordinatewise Gaussian marginals. As we make the dataset simpler (CIFAR → SVHN → MNIST → tabular), these marginals become less Gaussian, and HEA agreement degrades (Figures 16 and 17). It is noteworthy that our theory empirically works better on more complex datasets thanks to the blessings of dimensionality.

5 THE HEA ALLOWS PREDICTION OF KRR LEARNING CURVES

Under the conditions outlined in the previous section, we expect the HEA to accurately predict kernel eigenstructure. We aim to plug these results directly into the aforementioned KRR eigenframework (of e.g. Simon et al. (2021)) to predict the final test risk of KRR. However, a key challenge remains: using the eigenframework requires knowing the coefficients of the target function in the kernel eigenbasis, $f_\star(x) = \sum_i v_i\varphi_i(x)$. We must measure these coefficients $v_i$ from finitely many samples of the target function.

Were the data perfectly Gaussian, the multi-Hermite polynomials would form an orthonormal basis with respect to the measure. We could then estimate the coefficients by simply taking inner products between the target vector and generated Hermite polynomials and expect the estimation error to decay as $O(N^{-1/2})$ with the total number of samples $N$. However, small amounts of non-Gaussianity in the data introduce cascading non-orthogonality in the Hermite basis. As a result, the naïve method overestimates the power in the overlapping modes. To rectify this effect, we modify our measurement technique by re-orthogonalizing the sampled Hermite polynomials via the Gram-Schmidt process:⁷
$$\text{iterate over increasing } i: \quad h_i^{\mathrm{(GS)}} = \mathrm{unitnorm}\Big(h_i - \sum_{j<i}\cdots\Big)$$

[...]

B ON-SPHERE LEVEL COEFFICIENTS FOR VARIOUS KERNELS

Recall from Definition 3 that the on-sphere level coefficients of a rotation-invariant kernel $K$ at a radius $r > 0$ are the nonnegative sequence $\mathrm{coeffs}(K, r) := (c_l)_{l\ge0}$ such that
$$K(x, x') = \sum_{l\ge0}\frac{c_l}{l!}(x^\top x')^l \quad \text{for all } x, x' \text{ such that } \|x\| = \|x'\| = r. \qquad (13)$$

B.1 GAUSSIAN KERNEL

For the Gaussian kernel $K(x, x') = e^{-\|x-x'\|^2/2\sigma^2}$, the level coefficients $(c_l) = \mathrm{coeffs}(K, r)$ may be found by noting that, when $\|x\| = \|x'\| = r$,
$$K(x, x') = e^{-\frac{r^2}{\sigma^2}}\cdot e^{\frac{x^\top x'}{\sigma^2}} = e^{-\frac{r^2}{\sigma^2}}\cdot\sum_{l\ge0}\frac{1}{\sigma^{2l}l!}(x^\top x')^l. \qquad (14)$$
Pattern-matching to Definition 3, we observe that
$$c_0 = e^{-\frac{r^2}{\sigma^2}}, \quad c_1 = e^{-\frac{r^2}{\sigma^2}}\cdot\sigma^{-2}, \quad c_2 = e^{-\frac{r^2}{\sigma^2}}\cdot\sigma^{-4}, \quad \ldots, \quad c_l = e^{-\frac{r^2}{\sigma^2}}\cdot\sigma^{-2l}.$$

B.2 EXPONENTIAL KERNEL

Let the exponential kernel be $K(x, x') = e^{\frac{x^\top x'}{\sigma^2}}$. We do not use this kernel in experiments or theory reported in this paper, but we nonetheless include it here because it is arguably the nicest kernel for the study of the HEA, and we used it extensively in our initial experiments. The blessing of the exponential kernel is that it admits the Taylor expansion
$$K(x, x') = e^{\frac{x^\top x'}{\sigma^2}} = \sum_l \frac{\sigma^{-2l}}{l!}(x^\top x')^l. \qquad (15)$$
That is, the exponential kernel is exactly a dot-product kernel with coefficients $c_l = \sigma^{-2l}$. When $\sigma = 1$ and thus $c_l = 1$ for all $l$, this is in some sense the "platonic ideal" dot-product kernel (though we must then take $\gamma_1 \ll 1$ for the HEA to hold, as per the intuition developed in Section 4.1). When the domain is restricted to a sphere, the Gaussian kernel and exponential kernel are equal up to a global factor of $e^{-r^2/\sigma^2}$.
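A sketch of these two coefficient sequences in code (illustrative helper names):

```python
import numpy as np

def gaussian_coeffs(r, sigma, max_level):
    # Section B.1: c_l = exp(-r^2 / sigma^2) * sigma^(-2l)
    return np.exp(-r**2 / sigma**2) * sigma**(-2.0 * np.arange(max_level + 1))

def exponential_coeffs(sigma, max_level):
    # Section B.2: c_l = sigma^(-2l); equal to the Gaussian coefficients
    # up to the global factor exp(-r^2 / sigma^2)
    return sigma**(-2.0 * np.arange(max_level + 1))
```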
B.3 LAPLACE KERNEL

Here we obtain the on-sphere level coefficients for the Laplace kernel $K(x, x') = e^{-\|x-x'\|/\sigma}$.⁹ Let
$$s := \frac{x^\top x'}{r^2} \in [-1, 1], \qquad \beta := \frac{\sqrt{2}\,r}{\sigma}. \qquad (16)$$
Since $\|x - x'\| = \sqrt{2}\,r\sqrt{1-s}$ on the sphere,
$$K(x, x') = \exp\big(-\beta\sqrt{1-s}\big). \qquad (17)$$

Closed form for on-sphere level coefficients. Let $\theta_n(x)$ denote the reverse Bessel polynomials
$$\theta_0(x) = 1, \quad \theta_1(x) = x + 1, \quad \theta_2(x) = x^2 + 3x + 3, \quad \theta_3(x) = x^3 + 6x^2 + 15x + 15, \quad \ldots,$$
which obey the recurrence $\theta_n(x) = (2n-1)\,\theta_{n-1}(x) + x^2\,\theta_{n-2}(x)$. Expanding in powers of $s$, one finds
$$e^{\beta(1-\sqrt{1-s})} = 1 + \sum_{l\ge1}\frac{\beta\,\theta_{l-1}(\beta)}{2^l}\,\frac{s^l}{l!}. \qquad (18)$$
Multiplying by $e^{-\beta}$ and substituting $s = (x^\top x')/r^2$, the definition $K(x, x') = \sum_{l\ge0}\frac{c_l}{l!}(x^\top x')^l$ implies
$$c_0 = e^{-\beta}, \qquad c_l = e^{-\beta}\,r^{-2l}\,\frac{\beta}{2^l}\,\theta_{l-1}(\beta) \quad (l \ge 1), \qquad \beta = \frac{\sqrt{2}\,r}{\sigma}. \qquad (20)$$

The first few coefficients. Using the polynomials above, we obtain
$$c_1 = \frac{e^{-\beta}}{r^2}\cdot\frac{\beta}{2}, \qquad c_2 = \frac{e^{-\beta}}{r^4}\cdot\frac{\beta^2+\beta}{4}, \qquad c_3 = \frac{e^{-\beta}}{r^6}\cdot\frac{\beta^3+3\beta^2+3\beta}{8}, \qquad c_4 = \frac{e^{-\beta}}{r^8}\cdot\frac{\beta^4+6\beta^3+15\beta^2+15\beta}{16}.$$

Large-$l$ asymptotics. The dominant singularity of $F(s) = e^{-\beta\sqrt{1-s}}$ is at $s = 1$, with $F(s) = 1 - \beta\sqrt{1-s} + O(1-s)$, yielding
$$[s^l]F(s) \sim \frac{\beta}{2\sqrt{\pi}}\,l^{-3/2},$$
where the coefficient-extraction operator $[s^l]F(s)$ returns the $l$-th coefficient in the power series of $F(s)$. Since $c_l = r^{-2l}\,l!\,[s^l]F(s)$,
$$c_l \sim \frac{\beta}{2\sqrt{\pi}}\,\frac{l!}{r^{2l}\,l^{3/2}} \quad (l \to \infty), \qquad \beta = \frac{\sqrt{2}\,r}{\sigma}. \qquad (21)$$

Subtlety: fast-growing $c_l$ means diverging HEA eigenvalues. Here we encounter a subtlety with the Laplace kernel coefficients: Equation (21) states that $c_l \to \infty$ superexponentially as $l$ grows. Recall that the largest eigenvalue in each level predicted by the HEA is $\lambda = c_l\gamma_1^l$. This superexponential growth means that for any value of $\gamma_1 > 0$, no matter how small, these largest levelwise eigenvalues will eventually begin to increase as $l$ grows and continue to grow without bound. A hacky fixed-point calculation using Stirling's formula suggests that the minimum occurs at $l_{\min} \approx r^2\gamma_1^{-1}$. See Figure 5 for a numerical illustration of this.

What do we make of this? From a theoretical standpoint, this results from the fact that, since the Laplace kernel on the sphere $r\mathbb{S}^{d-1}$ has a singularity at $x^\top x' = r^2$, our on-sphere dot-product approximation to the Laplace kernel will diverge when attempting to evaluate $K(x, x')$ at points $x$ with larger radius $\|x\| > r$. Since our theory is designed to work with Gaussian data, and roughly half of a Gaussian distribution will spill outside its sphere of average radius into this divergent region, the HEA predicts growing eigenvalues and infinite trace.

While this might seem to spell the doom of the HEA insofar as the Laplace kernel is concerned, our experiments (e.g. Figures 2 and 3) attest that this is not the case. We find that we can still get good experimental agreement by (a) ensuring the data has high effective dimension so that $\gamma_1$ is small¹⁰ and then (b) truncating the HEA at finite order $l$, usually $l \in [5, 10]$. It seems plausible to us that this series approximation to the Laplace kernel is essentially an asymptotic expansion rather than a true Taylor series, meaning that it gives a good approximation when truncated to a finite number of terms so long as a particular parameter (here $r^{-1}\gamma_1$) is small, but then later diverges, rather than giving a better and better, and ultimately perfect, approximation as the number of terms grows. Essentially this same story also holds for the ReLU NNGP kernel and ReLU NTK, so we will not discuss it again.

[Figure 5: We compute the Laplace kernel on-sphere coefficients $c_l$ with $\sigma = r = 1$ and plot $c_l\gamma_1^l$ against the level $l$ for $\gamma_1 \in \{0.03, 0.1, 0.3\}$. Dashed lines show $r^2\gamma_1^{-1}$, the value of the predicted minimum.]

⁹Note for users of our codebase: as defined in our repo, the Laplace kernel is actually $K(x, x') = e^{-\frac{1}{\sqrt{2}\sigma}\|x-x'\|}$; that is, there is an extra factor of $\sqrt{2}$. We later moved away from this convention, but it remains in code.
¹⁰Simply decreasing all eigenvalues does not help, as that also decreases the sphere radius $r$ proportionally, which Equation (21) shows then increases each $c_l$ in a manner that compensates.
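A sketch implementing Equation (20) via the reverse Bessel recurrence (illustrative helper; the recurrence $\theta_n = (2n-1)\theta_{n-1} + x^2\theta_{n-2}$ reproduces the polynomials listed above, and the first few outputs can be checked against the displayed coefficients):

```python
import numpy as np

def laplace_coeffs(r, sigma, max_level):
    # c_0 = e^{-beta}; c_l = e^{-beta} r^{-2l} (beta / 2^l) theta_{l-1}(beta), l >= 1,
    # where theta_n are reverse Bessel polynomials built by the recurrence
    # theta_n(x) = (2n-1) theta_{n-1}(x) + x^2 theta_{n-2}(x), theta_0 = 1, theta_1 = x+1
    beta = np.sqrt(2.0) * r / sigma
    theta = [1.0, beta + 1.0]                   # theta_0(beta), theta_1(beta)
    for n in range(2, max_level):
        theta.append((2 * n - 1) * theta[-1] + beta**2 * theta[-2])
    c = [np.exp(-beta)]
    for l in range(1, max_level + 1):
        c.append(np.exp(-beta) * r**(-2.0 * l) * (beta / 2.0**l) * theta[l - 1])
    return np.array(c)
```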
B.4 THE RELU NNGP KERNEL

For a one-hidden-layer ReLU network with first-layer weight variance $\sigma_w^2$ and bias variance $\sigma_b^2$ (and no output bias), the first-layer preactivation kernel is
$$K_1(x, x') = \sigma_w^2\,x^\top x' + \sigma_b^2. \qquad (22)$$
On the sphere $\|x\| = \|x'\| = r$, set
$$q := K_1(x, x) = \sigma_w^2 r^2 + \sigma_b^2, \qquad s := \frac{x^\top x'}{r^2}, \qquad \rho := \frac{K_1(x, x')}{\sqrt{K_1(x, x)K_1(x', x')}} = \frac{\sigma_w^2 r^2 s + \sigma_b^2}{q}. \qquad (23)$$
The ReLU NNGP kernel is
$$K_2(x, x') = \frac{\sigma_w^2}{2\pi}\,q\,\Big(\sqrt{1-\rho^2} + (\pi - \arccos\rho)\,\rho\Big) =: \frac{\sigma_w^2}{2\pi}\,q\,H(\rho). \qquad (24)$$
We Taylor-expand $K_2$ in powers of $x^\top x'$ and write
$$K_2(x, x') = \sum_{l\ge0}\frac{c_l}{l!}(x^\top x')^l. \qquad (25)$$
A change of variables gives the coefficients
$$c_l = \frac{\sigma_w^2}{2\pi}\,q\,\Big(\frac{\sigma_w^2}{q}\Big)^l H^{(l)}(a) \quad \text{with} \quad a := \frac{\sigma_b^2}{q}, \quad q = \sigma_w^2 r^2 + \sigma_b^2, \qquad (26)$$
where $H(\rho) = \sqrt{1-\rho^2} + (\pi - \arccos\rho)\rho$ and $H^{(l)}$ denotes the $l$-th derivative.

The first few coefficients follow from
$$H(a) = \sqrt{1-a^2} + (\pi - \arccos a)\,a, \quad H'(a) = \pi - \arccos a, \quad H''(a) = \frac{1}{\sqrt{1-a^2}}, \quad H^{(3)}(a) = \frac{a}{(1-a^2)^{3/2}}, \quad H^{(4)}(a) = \frac{2a^2+1}{(1-a^2)^{5/2}},$$
yielding
$$c_0 = \frac{\sigma_w^2}{2\pi}q\Big[\sqrt{1-a^2} + (\pi-\arccos a)a\Big], \quad c_1 = \frac{\sigma_w^2}{2\pi}q\Big(\frac{\sigma_w^2}{q}\Big)(\pi-\arccos a), \quad c_2 = \frac{\sigma_w^2}{2\pi}q\Big(\frac{\sigma_w^2}{q}\Big)^2\frac{1}{\sqrt{1-a^2}},$$
$$c_3 = \frac{\sigma_w^2}{2\pi}q\Big(\frac{\sigma_w^2}{q}\Big)^3\frac{a}{(1-a^2)^{3/2}}, \quad c_4 = \frac{\sigma_w^2}{2\pi}q\Big(\frac{\sigma_w^2}{q}\Big)^4\frac{2a^2+1}{(1-a^2)^{5/2}}. \qquad (27)$$

Asymptotics. As $l$ grows, the coefficient $c_l$ grows as
$$c_l = \Theta\!\left(\frac{\sigma_w^2}{2\pi}\,q\,\Big(\frac{\sigma_w^2}{q}\Big)^l\frac{l!}{l^{3/2}}\right), \quad l \to \infty. \qquad (28)$$

B.5 THE RELU NTK

As in the previous subsection, we treat a shallow ReLU network. The ReLU NNGP kernel remains
$$K_2(x, x') = \frac{\sigma_w^2}{2\pi}\,q\,\Big(\sqrt{1-\rho^2} + (\pi - \arccos\rho)\,\rho\Big) =: \frac{\sigma_w^2}{2\pi}\,q\,H(\rho). \qquad (29)$$
The corresponding two-layer ReLU NTK is
$$\Theta_2(x, x') = \underbrace{\sigma_w^2\,K_1(x, x')\cdot\frac{1}{2\pi}(\pi - \arccos\rho)}_{\text{training first layer}} + \underbrace{K_2(x, x')}_{\text{training second layer}} \qquad (30)$$
$$= \frac{\sigma_w^2}{2\pi}\,q\,\Big(\sqrt{1-\rho^2} + 2\rho(\pi - \arccos\rho)\Big) =: \frac{\sigma_w^2}{2\pi}\,q\,J(\rho). \qquad (31)$$
We Taylor-expand $\Theta_2$ in powers of $x^\top x'$ and write
$$\Theta_2(x, x') = \sum_{l\ge0}\frac{c_l}{l!}(x^\top x')^l. \qquad (32)$$
A change of variables gives the coefficients
$$c_l = \frac{\sigma_w^2}{2\pi}\,q\,\Big(\frac{\sigma_w^2}{q}\Big)^l J^{(l)}(a) \quad \text{with} \quad a := \frac{\sigma_b^2}{q}, \quad q = \sigma_w^2 r^2 + \sigma_b^2, \qquad (33)$$
where $J(\rho) = \sqrt{1-\rho^2} + 2\rho(\pi - \arccos\rho)$ and $J^{(l)}$ denotes the $l$-th derivative.

The first few coefficients follow from the identities
$$J(a) = \sqrt{1-a^2} + 2a(\pi - \arccos a), \quad J'(a) = 2(\pi - \arccos a) + \frac{a}{\sqrt{1-a^2}}, \quad J''(a) = \frac{3-2a^2}{(1-a^2)^{3/2}},$$
$$J^{(3)}(a) = \frac{a(5-2a^2)}{(1-a^2)^{5/2}}, \quad J^{(4)}(a) = \frac{5 + 14a^2 - 4a^4}{(1-a^2)^{7/2}},$$
yielding
$$c_0 = \frac{\sigma_w^2}{2\pi}q\Big[\sqrt{1-a^2} + 2a(\pi-\arccos a)\Big], \quad c_1 = \frac{\sigma_w^2}{2\pi}q\Big(\frac{\sigma_w^2}{q}\Big)\Big[2(\pi-\arccos a) + \frac{a}{\sqrt{1-a^2}}\Big], \quad c_2 = \frac{\sigma_w^2}{2\pi}q\Big(\frac{\sigma_w^2}{q}\Big)^2\frac{3-2a^2}{(1-a^2)^{3/2}},$$
$$c_3 = \frac{\sigma_w^2}{2\pi}q\Big(\frac{\sigma_w^2}{q}\Big)^3\frac{a(5-2a^2)}{(1-a^2)^{5/2}}, \quad c_4 = \frac{\sigma_w^2}{2\pi}q\Big(\frac{\sigma_w^2}{q}\Big)^4\frac{5+14a^2-4a^4}{(1-a^2)^{7/2}}.$$

Asymptotics. As $l$ grows, the coefficient $c_l$ grows as
$$c_l = \Theta\!\left(\frac{\sigma_w^2}{2\pi}\,q\,\Big(\frac{\sigma_w^2}{q}\Big)^l\frac{l!}{l^{3/2}}\right), \quad l \to \infty. \qquad (34)$$

C REVIEW OF THE KRR EIGENFRAMEWORK FOR PREDICTING TEST PERFORMANCE FROM TASK EIGENSTRUCTURE

The central piece of existing theory which we use in this paper is a set of equations which give the average-case test MSE of KRR in terms of the task eigenstructure. In this appendix, we review this KRR eigenframework. This eigenframework has been derived many times by different means in both the statistical physics community and the classical statistics community (which usually phrases the result as applying to linear ridge regression). References studying KRR include Sollich (2001); Bordelon et al. (2020); Jacot et al. (2020); Simon et al. (2021); Loureiro et al. (2021); Wei et al. (2022); references studying linear ridge regression include Dobriban & Wager (2018); Wu & Xu (2020); Hastie et al. (2022); Richards et al. (2021); Cheng & Montanari (2022).
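As an aside, the NNGP coefficients of Equation (27) are straightforward to evaluate from the closed-form derivatives of $H$ listed above. A sketch up to $l = 4$ (illustrative helper name):

```python
import numpy as np

def relu_nngp_coeffs(r, sigma_w2, sigma_b2):
    # c_l = (sigma_w^2 q / 2 pi) (sigma_w^2 / q)^l H^{(l)}(a),  l = 0, ..., 4,
    # with q = sigma_w^2 r^2 + sigma_b^2 and a = sigma_b^2 / q  (Equation (26))
    q = sigma_w2 * r**2 + sigma_b2
    a = sigma_b2 / q
    H_derivs = [
        np.sqrt(1 - a**2) + (np.pi - np.arccos(a)) * a,  # H(a)
        np.pi - np.arccos(a),                            # H'(a)
        1.0 / np.sqrt(1 - a**2),                         # H''(a)
        a / (1 - a**2)**1.5,                             # H'''(a)
        (2 * a**2 + 1) / (1 - a**2)**2.5,                # H''''(a)
    ]
    prefactor = sigma_w2 * q / (2 * np.pi)
    return np.array([prefactor * (sigma_w2 / q)**l * H_derivs[l] for l in range(5)])
```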
The result is essentially the same in all cases. Here we adopt the terminology and notation of Simon et al. (2021).

Recall that we are studying KRR with a kernel function $K$, with data sampled $x_i \sim \mu$, targets generated as $y_i = f_*(x_i) + \eta$, and noise $\eta \sim \mathcal{N}(0, \varepsilon^2)$ with variance $\varepsilon^2 \ge 0$. The kernel admits an eigendecomposition $K(x, x') = \sum_i \lambda_i\varphi_i(x)\varphi_i(x')$ with orthonormal eigenfunctions $(\varphi_i)$. Let us decompose the target function in the eigenbasis as $f_*(x) = \sum_i v_i\varphi_i(x)$. Suppose we run KRR with $n$ samples and a ridge parameter $\delta \ge 0$, and we compute the population (i.e. test) and train MSEs as
$$\mathrm{MSE}_{\mathrm{te}} = \mathbb{E}_{x\sim\mu}\big[(f_*(x) - \hat f(x))^2\big] + \varepsilon^2, \qquad (35)$$
$$\mathrm{MSE}_{\mathrm{tr}} = \frac{1}{n}\sum_i (y_i - \hat f(x_i))^2. \qquad (36)$$

C.1 STATEMENT OF THE EIGENFRAMEWORK

We are now ready to state the eigenframework. Let $\kappa \ge 0$ be the unique nonnegative solution to
$$\sum_i \frac{\lambda_i}{\lambda_i + \kappa} + \frac{\delta}{\kappa} = n. \qquad (37)$$
Then test risk is given approximately by
$$\mathrm{MSE}_{\mathrm{te}} \approx \mathcal{E}_{\mathrm{te}} := \mathcal{E}_0\,B, \qquad (38)$$
where the overfitting coefficient $\mathcal{E}_0$ is given by
$$\mathcal{E}_0 := \frac{n}{n - \sum_i\big(\frac{\lambda_i}{\lambda_i + \kappa}\big)^2} \qquad (39)$$
and the bias is given by
$$B = \sum_i\Big(\frac{\kappa}{\lambda_i + \kappa}\Big)^2 v_i^2 + \varepsilon^2. \qquad (40)$$
Train risk is given by
$$\mathrm{MSE}_{\mathrm{tr}} \approx \mathcal{E}_{\mathrm{tr}} \equiv \frac{\delta^2}{n^2\kappa^2}\,\mathcal{E}_{\mathrm{te}}. \qquad (41)$$

What is meant by the "$\approx$" in Equations (38) and (41)? This result only becomes exact in certain stringent cases: it is formally derived under an assumption that the eigenfunctions are independent Gaussian (or sub-Gaussian) variables when $x$ is sampled from $\mu$, and it is exact only in an asymptotic limit in which $n$ and the number of eigenmodes in a given eigenvalue range (or the number of duplicate copies of any given eigenmode) both grow large at a proportional rate (Hastie et al., 2022; Bach, 2023). These conditions emphatically do not apply to any realistic instance of KRR. Nonetheless, numerical experiments find that Equations (38) and (41) hold with small error even at modest $n$ (Canatar et al., 2021; Simon et al., 2021; Wei et al., 2022): though derived in an idealized setting and exact only in a limit, this eigenframework holds reliably in practical cases.

In this paper, we use this eigenframework as a tool to map predictions of task eigenstructure to predictions of learning curves. Since we are using it in settings very similar to those tested by previous works (Bordelon et al., 2020; Jacot et al., 2020; Simon et al., 2021; Wei et al., 2022), we expect it to work well. Its use introduces some small error (as it is not exact at finite $n$), but this is usually not the dominant source of error.
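In practice, the eigenframework is a small numerical routine. A minimal sketch, assuming a strictly positive ridge $\delta > 0$ so that Equation (37) has a root inside the bracketing interval (helper names illustrative):

```python
import numpy as np
from scipy.optimize import brentq

def eigenframework_test_mse(lambdas, v2, n, delta, noise_var=0.0):
    # Solve sum_i lambda_i / (lambda_i + kappa) + delta / kappa = n   (Equation (37)),
    # then return E_te = E_0 * B                                      (Equations (38)-(40))
    def excess(kappa):
        return (lambdas / (lambdas + kappa)).sum() + delta / kappa - n
    kappa = brentq(excess, 1e-15, 1e15)
    learn = lambdas / (lambdas + kappa)                # modewise learnabilities
    e0 = n / (n - (learn**2).sum())                    # overfitting coefficient (39)
    bias = (((1 - learn)**2) * v2).sum() + noise_var   # bias term (40)
    return e0 * bias
```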
D EXPERIMENTS CHECKING THE HEA: DETAILS AND FURTHER DISCUSSION

This appendix contains descriptions of the experimental stack used to verify the HEA, as well as a discussion of practical considerations for applying the HEA to real datasets. It is organized as follows:

• In Section D.1 we catalog the kernels, datasets, and target functions we use throughout our experiments.
• In Section D.2 we explain the experiments that directly check whether the kernel eigenstructure matches the HEA prediction (Figure 2).
• In Section D.3 we describe our method for estimating the decomposition of the target function in the Hermite eigenbasis. Unlike previous work, our method does not require constructing or diagonalizing an empirical kernel matrix.
• In Section D.4 we detail the experimental setups for each of the learning curve and sample complexity plots (Figures 1 and 3).
• Finally, in Section D.5 we show the results of various additional experiments.

D.1 KERNELS, DATASETS, AND TARGET FUNCTIONS

Kernels. We use the Gaussian kernel, Laplace kernel, ReLU NNGP kernel, and ReLU NTK in our experiments. A detailed review of these kernels can be found in Section B.¹¹

Datasets. We use the following datasets for the main experiments:

• Mean-zero anisotropic Gaussian data. This synthetic dataset is fully specified by its (diagonal) covariance. Different experiments set the data dimension and covariance decay rate differently; see experiment-specific details in Sections D.2, E and F.
• CIFAR-5m (Nakkiran et al., 2020). This dataset consists of more than 5 million synthetic images akin to CIFAR-10, sampled using a deep generative model trained on CIFAR-10. Though the distributions of CIFAR-5m and CIFAR-10 may not be identical, they are typically considered close enough for research purposes.
• SVHN (Netzer et al., 2011). This dataset contains over 600,000 images of numerals, taken from house numbers found on Google Street View.
• ImageNet-32 (Deng et al., 2009). This dataset contains downsampled ImageNet images (32 × 32 pixels).
• MNIST (LeCun et al., 1998) and the Mushroom dataset (Dua & Graff, 2017). We use MNIST and the UCI Mushroom tabular dataset in Section E as examples of insufficiently Gaussian datasets.

We sometimes employ regularized ZCA whitening to increase the effective dimension of the data. This is a data preprocessing technique parameterized by a ZCA strength $\omega^2$ which maps
$$X = USV^\top \;\mapsto\; U\bar{S}\,\big(\omega^2\bar{S}^2 + I_d\big)^{-1/2}\,V^\top \qquad (42)$$
where $X \in \mathbb{R}^{d\times N}$ is the data matrix, $USV^\top$ is its SVD, and we use the normalization notation $\bar{A} := A/(\|A\|_F^2/d)$. As the ZCA strength $\omega^2 \to 0$, we get no spectral transformation apart from a scalar normalization $X \to U\bar{S}V^\top$. Conversely, when $\omega^2 \to \infty$, we get full whitening $X \to UV^\top$. The crossover point of this behavior occurs at $\omega^2 \sim 1$. Note that although partially-whitened Gaussian data are slightly less anisotropic, they are still distributed as a multivariate Gaussian.

We sometimes employ sample normalization, $x \to x/\|x\|$. Note that although the normalized data lie on the hypersphere, their distribution is still anisotropic.

Both sample normalization and ZCA whitening are preprocessing techniques that, on aggregate, shift high-dimensional data samples closer to the hypersphere. Since the HEA relies on an expansion of kernel functions in terms of on-sphere coefficients (see Section B), these methods move any experimental setup closer to the regime well-described by the HEA. See Section E for further discussion of this point.

¹¹A note for users of our codebase: in this paper, we define the Laplace kernel to be $K(x, x') = e^{-\|x-x'\|/\sigma}$, but because we initially used a different convention, in code it is $K(x, x') = e^{-\|x-x'\|/(\sigma\sqrt{2})}$. When we report a kernel width in this paper, this is the width in the parameterization we use in the paper, not in code.
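A sketch of the whitening map in Equation (42) (illustrative; assumes $X$ has shape $(d, N)$ as in the text):

```python
import numpy as np

def zca_whiten(X, omega2):
    # X = U S V^T  ->  U S_bar (omega^2 S_bar^2 + I)^{-1/2} V^T   (Equation (42)),
    # where S_bar normalizes S by ||S||_F^2 / d, matching the paper's bar notation
    d = X.shape[0]
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    S_bar = S / ((S**2).sum() / d)
    return U @ np.diag(S_bar / np.sqrt(omega2 * S_bar**2 + 1.0)) @ Vt
```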
Targets. We use a variety of synthetic and real target functions in our experiments. All targets are scalar; the synthetic targets take continuous values, whereas the real targets are binarized ($y_i \in \{+1, -1\}$). All targets are mean-zero; for real targets, this means that the binary (super)classes are always balanced (even if the binary superclasses contain a differing number of base classes). We use the following targets for the main experiments:

• (Synthetic.) Multi-Hermite targets. A single (normalized) multi-Hermite polynomial of the PCA components (Equation (3)).
• (Synthetic.) Powerlaw targets. We draw a random sample of the Gaussian process
$$f_\star(x) = \sum_{i=1}^P c_i h_i(x) + \varepsilon\cdot(\text{white noise}), \qquad (43)$$
where $h_i(x)$ is shorthand for the $i$-th multi-Hermite polynomial $h_{\alpha_i}(z)$ and $c_i$ is a mean-zero Gaussian random variable with variance $(i+6)^{-\beta}$. The so-called source exponent $\beta$ satisfies $\beta > 1$ and controls the Sobolev smoothness of the target: the larger $\beta$ is, the smoother and easier-to-learn the target. We choose a numerical truncation threshold $P = 30{,}000$ for convenience, choosing the target noise level $\varepsilon$ to ensure that the target has unit norm, $\mathbb{E}[y_i^2] = 1$. Our results are empirically insensitive to the randomness in the target.
• (Real.) Class vs. class. A binarization in which samples are only drawn from two classes.
• (Real.) Class vs. all others. A binarization similar to a single output element of practical neural networks with one-hot label encodings. A key difference here is that samples are drawn from each binary superclass in equal proportion so that $\mathbb{E}[y_i] = 0$.
• (Real.) Domesticated vs. wild animals. CIFAR-5m binarization: [cat, dog, horse] vs. [bird, deer, frog].
• (Real.) Vehicles vs. animals. CIFAR-5m binarization: [plane, car, ship, truck] vs. [bird, cat, deer, dog, frog, horse]. Samples are drawn from each superclass in equal proportion so that $\mathbb{E}[y_i] = 0$.
• (Real.) Even vs. odd. SVHN binarization based on parity: [0, 2, 4, 6, 8] vs. [1, 3, 5, 7, 9].
• (Real.) Prime vs. composite. SVHN binarization based on primality: [2, 3, 5, 7] vs. [4, 6, 8, 9]. We leave out [0] and [1], numerals whose primality is undefined.
• (Real.) Genus 0 vs. genus ≥ 1. SVHN binarization based on the numeral's topological genus: [1, 3, 5, 7] vs. [0, 6, 8, 9]. We leave out [2] and [4], numerals whose topological genus is font-dependent.

D.2 DIRECT EIGENSTRUCTURE CHECKS

What is the appropriate way to numerically compare two eigensystems? The spectra are easy to compare: one can simply check whether $|\lambda_i - \hat\lambda_i|/\lambda_i$ is small for all $i$. An easy visual check is to scatter one spectrum against the other on a log-log plot; if the points remain close to the line $y = x$, then the spectra agree.

Comparing the eigenbases, on the other hand, is a subtler matter. One must be careful when the eigensystems have small eigenvalue gaps. This issue is most easily understood by considering the limit: what happens when comparing two diagonalizations of a degenerate matrix? In this case, numerical eigendecomposition is undefined since the true eigenvectors are not unique; the computed eigenvectors are thus arbitrary. Simply comparing $\hat\varphi_i$ with $\varphi_i$ for all $i$ is therefore insufficient.

Clearly, any good eigenbasis comparison must be spectrum-aware. In particular, differences between eigenvectors belonging to (near-)degenerate subspaces should not be strongly penalized. A coarse but simple way to make this comparison is with spectral binning. We divide $\mathbb{R}^+$ into logarithmically-spaced bins; then, for each eigenbasis, we treat the modes whose eigenvalues fall within the same bin as a single near-degenerate eigenspace. Applying this procedure to the HEA¹² yields a set of disjoint Hermite eigenspaces $\{\Phi_i^{\mathrm{(th)}}\}_{i=1}^{n_{\mathrm{bins}}}$, and likewise for the empirical basis. Let $d_i^{\mathrm{(th)}} = \dim(\Phi_i^{\mathrm{(th)}})$ and likewise for $d_i^{\mathrm{(emp)}}$. Note that in general we do not expect $d_i^{(\cdot)}$ to equal $d_j^{(\cdot)}$ for $i \ne j$; indeed, some bins may contain no modes at all. However, we do expect $d_i^{\mathrm{(th)}} = d_i^{\mathrm{(emp)}}$ for all $i$ (if the theory is accurate).
Having handled any issues of spectral near-degeneracy, we may directly compare the two eigenbases by computing the pairwise overlaps between the binned eigenspaces:
$$\mathrm{Overlap}(i, j) = \begin{cases} \dfrac{1}{d_j^{\mathrm{(emp)}}}\,\big\|\Phi_i^{\mathrm{(th)}\top}\Phi_j^{\mathrm{(emp)}}\big\|_F^2, & d_i^{\mathrm{(th)}} \ne 0 \text{ and } d_j^{\mathrm{(emp)}} \ne 0, \\ \text{undefined}, & \text{otherwise}, \end{cases} \qquad (44)$$
where again $1 \le i, j \le n_{\mathrm{bins}}$ enumerate the bins. To visualize this in Figure 2, we plot the overlaps in a heatmap, graying out pixels whose spectral bins do not contain any eigenmodes (and thus have undefined overlap). We note that discrepancies in the tails are primarily caused by distortions in the empirical eigenbasis due to finite-size effects, rather than genuine disagreement between the theory and the true eigensystem (see Figure 6).

The experiments that generated Figure 2 used the following hyperparameters:

• Gaussian kernel, Gaussian data. Kernel width $\sigma = 6$. Data $x \in \mathbb{R}^{200}$ drawn i.i.d. Gaussian with diagonal covariance $\mathbb{E}[x_i^2] = (i+6)^{-3.0}$. This results in $d_{\mathrm{eff}} \approx 7$. We choose a steep covariance decay exponent to ensure that cubic modes are clearly present in the plot.
• Gaussian kernel, CIFAR-5m. Kernel width $\sigma = 6$. No ZCA nor sample normalization. This results in $d_{\mathrm{eff}} \approx 9$.
• Laplace kernel, SVHN. Kernel width $\sigma = 8\sqrt{2}$. We whiten the data with ZCA strength $\omega^2 = 5\times10^{-3}$ and then unit-normalize. This results in $d_{\mathrm{eff}} \approx 21$. We generally observed that $d_{\mathrm{eff}} \ge 20$ is a reliable rule of thumb for obtaining good agreement with the HEA using the Laplace kernel.
• ReLU NTK, ImageNet-32. Bias variance $\sigma_b^2 = 1.68$ and weight variance $\sigma_w^2 = 0.56$. We whiten the data with ZCA strength $\omega^2 = 5\times10^{-3}$ and then unit-normalize. This results in $d_{\mathrm{eff}} \approx 40$. Examining the on-sphere coefficients, we see that $\sigma_b^2/\sigma_w^2 \gg 1$ is the ReLU NTK analogue of the wide-kernel condition for Gaussian kernels.

D.3 HOW TO DECOMPOSE THE TARGET FUNCTION

We are interested in recovering the coefficients of the target $f_\star$ in the kernel eigenbasis:
$$\text{Recover } v_i \text{ from samples of } f_\star(x) = \sum_i v_i\varphi_i(x), \qquad (45)$$
where $i$ is the mode index. According to the HEA, this amounts to expanding $f_\star$ in the multi-Hermite basis:
$$f_\star(x) = \sum_i \tilde v_i h_i(x), \qquad (46)$$
where $h_i(x)$ is shorthand for the $i$-th multi-Hermite polynomial $h_{\alpha_i}(z)$. We use different notation for the Hermite coefficients since the HEA will not hold exactly in practical settings. Of course, obtaining any full expansion of $f_\star$ from $N$ samples is exactly as hard as the original learning problem. However, we can hope to obtain an expansion that is good enough to predict the behavior of KRR trained with up to $N$ samples.

Let us define $y \in \mathbb{R}^N$ as the vector of target samples and $h_i \in \mathbb{R}^N$ as the $i$-th multi-Hermite polynomial evaluated on the samples. Let us stack the top $P$ [...]

¹²Here, we abuse terminology by referring to the proposed Hermite basis as an eigenbasis; evaluated on finitely many samples of real data, these basis vectors may not be truly orthonormal.

[...] an operator is positive definite, written $A \succ 0$, iff it is self-adjoint and $\langle v|A|v\rangle > 0$ for all nonzero $|v\rangle$. Similarly, an operator is positive semidefinite, written $A \succeq 0$, iff it is self-adjoint and $\langle v|A|v\rangle \ge 0$ for all $|v\rangle$.

G.3 COMPACT OPERATORS

In infinite dimensions, some finite-dimensional intuitions break. A finite-rank operator, by virtue of being finite-rank, behaves essentially the same as a linear operator between two finite-dimensional spaces. However, finite-rank operators are somewhat too trivial. Compact operators are a compromise: they can have infinite rank, but still allow some finite-dimensional intuitions to apply. An operator $K : \mathcal{H} \to \mathcal{H}$ is compact iff it is the operator-norm limit of a sequence of finite-rank operators.
Symmetric matrices are diagonalizable. Similarly, if an operator is compact and self-adjoint, then it is diagonalizable: there exists an orthonormal basis of eigenvectors $\{|e_j\rangle\}_{j\in J}$ and real eigenvalues $\{\lambda_j\}_{j\in J}$ with
$$K = \sum_{j\in J}\lambda_j|e_j\rangle\langle e_j|, \qquad \lambda_j \in \mathbb{R}, \qquad \lambda_j \to 0.$$
Nonzero eigenvalues have finite multiplicity, and the eigenvalues can only accumulate at zero.

We will only study operators of a very specific form that arises naturally from kernel regression. Let $\{|v_n\rangle\}_{n\ge0}$ be an orthonormal set in $\mathcal{H}$ and let $a_n \ge 0$ with $\sum_n a_n < \infty$. [...]

Theorem 6. There exist constants $C_0, C_1, \dots > 0$ and $\varepsilon_0, \varepsilon_1, \dots > 0$, such that for all $n \in \mathbb{N}$, all $\varepsilon \in [0, \varepsilon_n]$, and all fast-decaying sequences $a_0, a_1, \dots$ with parameter $\varepsilon$, there exists an eigensystem of the kernel $K := \sum_{n=0}^\infty a_n|v_n\rangle\langle v_n|$, denoted $(\lambda_k(K), |v_k(K)\rangle)_k$, such that
$$\lambda_n(K) \in a_n|\langle\hat v_n|v_n\rangle|^2(1 \pm C_n\varepsilon) \quad \text{and} \quad |\angle(|v_n(K)\rangle, |\hat v_n\rangle)| \le C_n\varepsilon.$$
Furthermore, the scaling factor $C_n$ and the bound $\varepsilon_n$ depend only on the Gram matrix of the vectors $|v_0\rangle, \dots, |v_n\rangle$.

Intuitively restated, the theorem says that as $\varepsilon \to 0$, the eigensystem of the fast-decaying kernel $\sum_{n=0}^\infty a_n|v_n\rangle\langle v_n|$ rotates towards the canonical eigensystem at a rate of $\varepsilon$.

The proof has two parts. The first part uses the min-max principle and lowest-order operator perturbation theory (commonly used in quantum mechanics) to segment the spectrum of $K$ into small intervals of the form
$$\lambda_n(K) \in a_n|\langle\hat v_n|v_n\rangle|^2(1 + O(\varepsilon)).$$
In particular, since the $a_n$ are fast-decaying, it shows that the eigenvalues are exponentially separated. The second part applies Davis-Kahan twice, using this exponential separation of eigenvalues, to show that
$$|\sin\angle(|v_n(K)\rangle, |v_n(\tilde K)\rangle)| = O(\varepsilon) \quad \text{and} \quad |\sin\angle(|v_n(\tilde K)\rangle, |\hat v_n\rangle)| = O(\varepsilon)$$
for a cleverly-chosen operator $\tilde K$.

Before we launch into the proof, we should look at a simple case that explains why this should be true. Consider the case of two dimensions, where we have only $|v_0\rangle, |v_1\rangle$. The kernel is $K = a_0|v_0\rangle\langle v_0| + a_1|v_1\rangle\langle v_1|$. Diagonalizing the kernel is equivalent to finding the major and minor axes of the contour ellipse $\{|x\rangle : \langle x|K|x\rangle = 1\}$. This ellipse is the unique ellipse tangent to the four lines defined by $a_0|\langle v_0|x\rangle|^2 = 1$ and $a_1|\langle v_1|x\rangle|^2 = 1$.

Suppose we fix $a_0$ and let $a_1 \to 0$. Then the lines $a_0|\langle v_0|x\rangle|^2 = 1$ remain constant, but the lines $a_1|\langle v_1|x\rangle|^2 = 1$ diverge to infinity. The ellipse degenerates to two parallel lines. Its minor semiaxis rotates to become perpendicular to the two parallel lines, i.e. parallel to $|v_0\rangle$. Therefore, the eigenpair converges to $(a_0|\langle\hat v_0|v_0\rangle|^2, |\hat v_0\rangle)$.

Suppose we fix $a_1$ and let $a_0 \to \infty$. Then the lines $a_1|\langle v_1|x\rangle|^2 = 1$ remain constant, but the lines $a_0|\langle v_0|x\rangle|^2 = 1$ converge to the origin. The ellipse degenerates to two line segments. Its major semiaxis rotates to become the same as that line segment, i.e. parallel to $a_0|\langle v_0|x\rangle|^2 = 1$, i.e. perpendicular to $|v_0\rangle$. Therefore, the eigenpair converges to $(a_1|\langle\hat v_1|v_1\rangle|^2, |\hat v_1\rangle)$.

Intuitively, we see that for a given $n$, the effect of all the $a_{n+1}, a_{n+2}, \dots$ terms in the kernel is a small perturbation on the $n$-th eigenspace, negligible because the parallel planes diverge to infinity. The effect of $a_{n-1}, a_{n-2}, \dots, a_0$ is a large but fixed perturbation, forcing the $n$-th eigenspace to be perpendicular to all of $|v_{n-1}\rangle, \dots, |v_0\rangle$; but once that is done, their effects are also negligible because the parallel planes converge to the origin.
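This two-dimensional picture can be checked numerically in a few lines. The sketch below (illustrative values) confirms that as $a_1/a_0 \to 0$, the smallest eigenpair of $a_0|v_0\rangle\langle v_0| + a_1|v_1\rangle\langle v_1|$ approaches $(a_1|\langle\hat v_1|v_1\rangle|^2, |\hat v_1\rangle)$.

```python
import numpy as np

v0 = np.array([1.0, 0.0])
v1 = np.array([np.cos(0.3), np.sin(0.3)])  # leans into span(v0): non-orthogonal
v1_hat = np.array([0.0, 1.0])              # Gram-Schmidt of v1 against v0
for eps in [1e-1, 1e-2, 1e-3]:
    K = np.outer(v0, v0) + eps * np.outer(v1, v1)
    lam, vec = np.linalg.eigh(K)           # ascending eigenvalues
    ratio = lam[0] / (eps * (v1_hat @ v1)**2)  # -> 1 as eps -> 0
    align = abs(vec[:, 0] @ v1_hat)            # -> 1 as eps -> 0
    print(f"eps={eps:.0e}  eigenvalue ratio={ratio:.4f}  alignment={align:.4f}")
```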
[Figure 22: Diagonalizing the kernel in 2 dimensions in the $a_1/a_0 \to 0$ limit. (a) Constant $a_0$ with $a_1 \to 0$: the ellipse degenerates to two parallel lines. (b) Constant $a_1$ with $a_0 \to \infty$: the ellipse degenerates to two repeated line segments.]

I.3 PROOF OF THEOREM 6

I.3.1 PART 1

To show: $\lambda_n(K) = a_n|\langle\hat v_n|v_n\rangle|^2(1 + O(\varepsilon))$.

Proof. If $a_n = 0$, then $\lambda_n = \lambda_{n+1} = \dots = 0$, and the kernel $K$ becomes finite-rank, with range $\mathrm{Span}(|v_{0:n-1}\rangle)$, so the theorem is trivial. Otherwise, we assume $a_n > 0$, which means that all of $a_0, \dots, a_n > 0$. We apply the min-max principle to obtain an upper bound of the form $\lambda_n(K) \le a_n|\langle\hat v_n|v_n\rangle|^2(1 + O(\varepsilon))$ and a lower bound of the form $\lambda_n(K) \ge a_n|\langle\hat v_n|v_n\rangle|^2(1 + O(\varepsilon))$, completing the estimate.

For the upper bound, we use $V = \mathrm{Span}(|v_{0:n-1}\rangle)$; then
$$\lambda_n \le \sup_{\substack{x\in\mathrm{Span}(|v_{0:n-1}\rangle)^\perp \\ \|x\|=1}}\langle x|K|x\rangle = \sup_{\substack{x\in\mathrm{Span}(|v_{0:n-1}\rangle)^\perp \\ \|x\|=1}}\sum_{k=0}^\infty a_k|\langle x|v_k\rangle|^2 = \sup_{\substack{x\in\mathrm{Span}(|\hat v_{n:\infty}\rangle) \\ \|x\|=1}}\sum_{k=n}^\infty a_k|\langle x|v_k\rangle|^2$$
$$\le \sup_{\substack{x\in\mathrm{Span}(|\hat v_{n:\infty}\rangle) \\ \|x\|=1}} a_n|\langle x|v_n\rangle|^2 + \sup_{\substack{x\in\mathrm{Span}(|\hat v_{n:\infty}\rangle) \\ \|x\|=1}}\sum_{k=n+1}^\infty a_k|\langle x|v_k\rangle|^2 = a_n|\langle\hat v_n|v_n\rangle|^2 + \lambda_{\max}\Big(\sum_{k=n+1}^\infty a_k|v_k\rangle\langle v_k|\Big)$$
$$\le a_n|\langle\hat v_n|v_n\rangle|^2 + \mathrm{Tr}\Big(\sum_{k=n+1}^\infty a_k|v_k\rangle\langle v_k|\Big) = a_n|\langle\hat v_n|v_n\rangle|^2 + \sum_{k=n+1}^\infty a_k = a_n|\langle\hat v_n|v_n\rangle|^2 + a_n O(\varepsilon) = a_n|\langle\hat v_n|v_n\rangle|^2(1 + O(\varepsilon)),$$
where the step $\lambda_{\max} \le \mathrm{Tr}$ holds because all $a_k \ge 0$. Though unnecessary, we can concretely write down the upper bound as $\lambda_n \le a_n|\langle\hat v_n|v_n\rangle|^2(1 + C\varepsilon)$ where
$$C = \frac{1}{|\langle\hat v_n|v_n\rangle|^2}\sum_{k=n+1}^\infty\frac{a_k}{a_n\varepsilon} \le \frac{1}{|\langle\hat v_n|v_n\rangle|^2}\cdot\frac{1}{1-\varepsilon}.$$

For the lower bound, we use $V = \mathrm{Span}(|v_{0:n}\rangle)$; then
$$\lambda_n \ge \inf_{\substack{x\in\mathrm{Span}(|v_{0:n}\rangle) \\ \|x\|=1}}\langle x|K|x\rangle = \inf_{\substack{x\in\mathrm{Span}(|v_{0:n}\rangle) \\ \|x\|=1}}\sum_{k=0}^\infty a_k|\langle x|v_k\rangle|^2 \ge \inf_{\substack{x\in\mathrm{Span}(|v_{0:n}\rangle) \\ \|x\|=1}}\sum_{k=0}^n a_k|\langle x|v_k\rangle|^2.$$
The quantity is a standard problem in quadratic programming, with exact solution $\lambda_{\min}\big(G^{1/2}AG^{1/2}\big)$, where $A = \mathrm{diag}(a_0, \dots, a_n)$ and $G = (\langle v_i|v_j\rangle)_{i,j=0}^n$ is the Gram matrix. To see this, write $x = \sum_{j\in 0:n}c_j v_j$; then
$$\|x\|^2 = c^\top Gc, \qquad \sum_{k=0}^n a_k|\langle x|v_k\rangle|^2 = (Gc)^\top A(Gc) = c^\top GAGc.$$
So the constrained minimum of $\frac{c^\top GAGc}{c^\top Gc}$ equals the smallest eigenvalue of $G^{1/2}AG^{1/2}$ by the Rayleigh-Ritz principle.

Let $M = G^{1/2}AG^{1/2}$. We invert the matrix to use standard operator perturbation theory:
$$M^{-1} = \frac{1}{a_n}u_nu_n^\top + \sum_{k=0}^{n-1}\frac{1}{a_k}u_ku_k^\top,$$
where $u_k = G^{-1/2}e_k$ is the $k$-th column of $G^{-1/2}$. The perturbation $\sum_{k=0}^{n-1}\frac{1}{a_k}u_ku_k^\top$ is of order $O(\varepsilon)$ compared to the unperturbed part $\frac{1}{a_n}u_nu_n^\top$. The unperturbed operator has maximal eigenvalue $\frac{\|u_n\|^2}{a_n}$ with eigenvector $\hat u_n := u_n/\|u_n\|$. The perturbed operator has maximal eigenvalue
$$\frac{\|u_n\|^2}{a_n} + \hat u_n^\top\Big(\sum_{k=0}^{n-1}\frac{1}{a_k}u_ku_k^\top\Big)\hat u_n + O(\varepsilon^2).$$
Inverting the eigenvalue,
$$\lambda_{\min}\big(G^{1/2}AG^{1/2}\big) = \frac{a_n}{\|u_n\|^2}\Big(1 - \frac{a_n/a_{n-1}}{\|u_n\|^2}|u_{n-1}^\top\hat u_n|^2 + O(\varepsilon^2)\Big) = \frac{a_n}{\|u_n\|^2}(1 + O(\varepsilon)).$$
Now, $\|u_n\|^2 = e_n^\top G^{-1}e_n$, and $G^{-1}$ is the Gram matrix of the dual basis of $|v_{0:n}\rangle$ in $\mathrm{Span}(|v_{0:n}\rangle)$. In particular, because $|\hat v_n\rangle$ is perpendicular to all of $|v_0\rangle, \dots, |v_{n-1}\rangle$, the $n$-th dual vector is just $\frac{|\hat v_n\rangle}{\langle v_n|\hat v_n\rangle}$. Therefore
$$\|u_n\|^2 = \Big\|\frac{|\hat v_n\rangle}{\langle v_n|\hat v_n\rangle}\Big\|^2 = \frac{1}{|\langle v_n|\hat v_n\rangle|^2},$$
and we obtain the desired lower bound $\lambda_n \ge a_n|\langle\hat v_n|v_n\rangle|^2(1 + O(\varepsilon))$. ∎

Some comments on the constants in $O(\varepsilon)$. In the above proof, we constructed an upper bound $a_n|\langle\hat v_n|v_n\rangle|^2(1 + O(\varepsilon))$ and a lower bound $a_n|\langle\hat v_n|v_n\rangle|^2(1 - O(\varepsilon))$. The constant in the upper bound is
$$\frac{1}{|\langle\hat v_n|v_n\rangle|^2}\cdot\frac{1}{1-\varepsilon},$$
which depends on $|\langle\hat v_n|v_n\rangle|^2 = \sin^2\theta$, where $\theta$ is the angle between $|v_n\rangle$ and $\mathrm{Span}(|v_{0:n-1}\rangle)$. We see that the bound is pushed wider when either the coefficients become less strictly exponentially-decaying, or the vector $|v_n\rangle$ leans into $\mathrm{Span}(|v_{0:n-1}\rangle)$ and thus becomes less orthonormal. The constant in the lower bound is
$$\frac{1}{\|u_n\|^2}|u_{n-1}^\top\hat u_n|^2 = \frac{1}{\|u_n\|^4}|u_{n-1}^\top u_n|^2 = |\langle v_n|\hat v_n\rangle|^4|u_{n-1}^\top u_n|^2 = |\langle v_n|\hat v_n\rangle|^4\big(G^{-1}_{n-1,n}\big)^2.$$
Similarly to the previous case, $G^{-1}_{n-1,n}$ gets larger as the vectors $|v_0\rangle, \dots, |v_n\rangle$ get less orthonormal, which worsens eigenvalue convergence.

I.3.2 PART 2

Proof.
If $a_n = 0$, then we can trivially select the $n$-th eigenvector to be $|\hat v_n\rangle$. Otherwise, we have $a_0, \dots, a_n > 0$. By the eigenvalue bound (that is, Part 1 of the theorem), the spectral gap around $\lambda_n(K)$ is
$$\min_{j\ne n}|\lambda_n(K) - \lambda_j(K)| = a_n|\langle\hat v_n|v_n\rangle|^2(1 + O(\varepsilon)).$$
Define the truncated operator $\tilde K = \sum_{k=0}^n a_k|v_k\rangle\langle v_k|$. By Davis-Kahan,
$$|\sin\angle(v_n(K), v_n(\tilde K))| \le \frac{2}{a_n|\langle\hat v_n|v_n\rangle|^2(1 + O(\varepsilon))}\|K - \tilde K\|_{\mathrm{op}} \le \frac{2}{a_n|\langle\hat v_n|v_n\rangle|^2(1 + O(\varepsilon))}\mathrm{Tr}\big[K - \tilde K\big] = \frac{2}{a_n|\langle\hat v_n|v_n\rangle|^2(1 + O(\varepsilon))}\sum_{k=n+1}^\infty a_k = O(\varepsilon).$$
Thus, we need only bound $|\sin\angle(v_n(\tilde K), \hat v_n)|$. Because $\tilde K$ lives inside $\mathrm{Span}(|v_{0:n}\rangle)$, we henceforth restrict the Hilbert space to just $\mathrm{Span}(|v_{0:n}\rangle)$. Define the twice-truncated operator $\bar K = \sum_{k=0}^{n-1}a_k|v_k\rangle\langle v_k|$. The eigenvalue bound applies to its first $n$ eigenvalues, and its $(n+1)$-th eigenstate is the ground state, with eigenvalue $0$ and eigenvector $|\hat v_n\rangle$. Thus, the spectral gap around its ground-state eigenvalue is
$$\min_{j\ne n}|\lambda_n(\bar K) - \lambda_j(\bar K)| = \lambda_{n-1}(\bar K) = a_{n-1}|\langle\hat v_{n-1}|v_{n-1}\rangle|^2(1 + O(\varepsilon)).$$
By Davis-Kahan again,
$$|\sin\angle(v_n(\tilde K), \hat v_n)| = |\sin\angle(v_n(\tilde K), v_n(\bar K))| \le \frac{2}{a_{n-1}|\langle\hat v_{n-1}|v_{n-1}\rangle|^2(1 + O(\varepsilon))}\|\tilde K - \bar K\|_{\mathrm{op}} = \frac{2}{a_{n-1}|\langle\hat v_{n-1}|v_{n-1}\rangle|^2(1 + O(\varepsilon))}\,a_n = O(\varepsilon). \qquad \blacksquare$$

There are two occurrences of $O(\varepsilon)$ in the proof. Both can be bounded explicitly, to show that they depend only on the Gram matrix of $|v_0\rangle, \dots, |v_n\rangle$, as in Part 1. The first $O(\varepsilon)$ has explicit upper-bound constant
$$\frac{2}{a_n\varepsilon|\langle\hat v_n|v_n\rangle|^2(1 + O(\varepsilon))}\sum_{k=n+1}^\infty a_k \le \frac{2}{|\langle\hat v_n|v_n\rangle|^2}\cdot\frac{1}{1-\varepsilon}\cdot\frac{1}{1 + O(\varepsilon)},$$
where the remaining $O(\varepsilon)$ in $\frac{1}{1+O(\varepsilon)}$ came from Part 1, which, as shown there, depends only on the Gram matrix of $|v_0\rangle, \dots, |v_n\rangle$. The second $O(\varepsilon)$ has explicit upper-bound constant
$$\frac{2}{|\langle\hat v_{n-1}|v_{n-1}\rangle|^2(1 + O(\varepsilon))},$$
where the remaining $O(\varepsilon)$ depends only on the Gram matrix of $|v_0\rangle, \dots, |v_{n-1}\rangle$.

J PROOF OF THEOREM 2 IN THE GENERAL CASE

In this appendix, we prove Theorem 2 completely. The general idea is to modify the proof given in Section I to account for multiplicity. Section J.1 presents the overall plan of the proof. Section J.2 sets up all the machinery needed to handle multiplicity in eigenvalues and eigenvectors, which is a new occurrence in multiple dimensions. Section J.3 states the theorem in full rigor as Theorem 7. The next two subsections prove two lemmas that apply to generic operators, not just an integral kernel operator: Section J.4 shows that the eigenvalues of a generic fast-decaying kernel split into segments that are exponentially separated, and Section J.5 sharpens this separation by proving that the eigenvalues in the $N$-th segment are only slightly perturbed by all the other segments. Section J.6 specializes to the case of a dot-product kernel, showing convergence of the eigenvalues, and then leverages that convergence into the convergence of eigenspaces.

J.1 PLAN OF THE PROOF

Let $K(x, y) = \sum_{n=0}^\infty\frac{c_n}{n!}(x^\top y)^n$ be a dot-product kernel with fast-decaying coefficients $c_n$. As in Section H.4, to study a spherically symmetric dot-product kernel over a nonstandard Gaussian distribution $\mathcal{N}(0, \Sigma)$, we construct a whitening unitary transform $V : L^2(\mathcal{N}(0, \Sigma)) \to L^2(\mathcal{N}(0, I_d))$, thus converting the problem to solving for the eigenstructure of a spherically asymmetric kernel over the standard Gaussian distribution. Let the SVD of $\Sigma$ be $U\Gamma U^\top$, where $\Gamma = \mathrm{diag}(\gamma_1, \dots, \gamma_d)$ with $\gamma_1, \dots, \gamma_d \ge 0$. Define $V$ by $(Vf)(x) = f(Mx)$. This converts the operator $K$ to $VKV^*$. The operator $VKV^*$ is a kernel operator with kernel function satisfying
$$(VKV^*)(x, y) = \sum_{n=0}^\infty\frac{c_n}{n!}(\gamma_1x_1y_1 + \dots + \gamma_dx_dy_d)^n.$$
We will prove that the eigensystem of $VKV^*$ converges to
$$\Big\{\Big(c_{|\alpha|}\prod_{i=1}^d\gamma_i^{\alpha_i},\ |h_\alpha\rangle\Big) \text{ for all } \alpha\in\mathbb{N}_0^d\Big\}$$
as $\varepsilon \to 0$. Then, by reversing the $V$ transform, we find that the eigensystem of $K$ converges to
$$\Big\{\Big(c_{|\alpha|}\prod_{i=1}^d\gamma_i^{\alpha_i},\ |h_\alpha^{(\Sigma)}\rangle\Big) \text{ for all } \alpha\in\mathbb{N}_0^d\Big\},$$
as desired.

We eliminate a pesky special case: one or more of $\gamma_1, \dots, \gamma_d$ may equal zero. In this case, $VKV^*$ may not be positive definite, but merely positive semidefinite, which is annoying. For example, what if $\gamma_2 = \gamma_4 = 0$? Then the operator $VKV^*$ splits into two halves: it is the zero operator on $\mathrm{Span}(e_2, e_4)$, and it is positive definite on $\mathrm{Span}(e_1, e_3, e_5, \dots, e_d)$. We can then separately prove the eigensystem convergence on the two halves and take their tensor product. The case of the zero operator is trivial, since its eigensystem is just
$$\big\{\big(0,\ |h_{(\alpha_2,\alpha_4)}\rangle\big) \text{ for all } (\alpha_2, \alpha_4)\in\mathbb{N}_0^2\big\}.$$
Thus, WLOG, we need only consider the case where $\gamma_1, \dots, \gamma_d > 0$.

We note that, in at least one case, the theorem has already been proven: if $c_n = \sigma^{-2n}$ for some $\sigma$, then it is just a minor variant of Theorem 1, which was proven in Section H.2. Thus, if we prove that the difference between the eigensystem of $VKV^*$ and the eigensystem of $c\,e^{x^\top y/\sigma^2}$ vanishes as $\varepsilon \to 0$, for some well-chosen values of $\sigma, c$, we are done. This cannot be done directly, once again due to nonuniform convergence: the higher-order parts of the eigensystem are wilder and harder to control. To bypass this difficulty, we divide and conquer. We prove that the eigensystem of $VKV^*$ is "segmented" into exponentially separated intervals, such that each segment is $\varepsilon$-insensitive to perturbations in all other segments. This allows us to show that, for any fixed $n\in\mathbb{N}_0$, the $n$-th segment of $\mathrm{eigensystem}(VKV^*)$, corresponding to the term $c_n(x^\top\Gamma y)^n$, converges to the $n$-th segment of $\mathrm{eigensystem}(K_{c e^{\varepsilon(x^\top\Gamma y)}})$, where $c = c_n\varepsilon^{-n}$. Since the $n$-th segment of $\mathrm{eigensystem}(K_{ce^{\varepsilon(x^\top\Gamma y)}})$ converges to
$$\Big\{\Big(\frac{c_n}{\varepsilon^n}\,\varepsilon^{|\alpha|}\prod_{i=1}^d\gamma_i^{\alpha_i},\ |h_\alpha\rangle\Big) \text{ for all } \alpha\in\mathbb{N}_0^d,\ |\alpha| = n\Big\},$$
the theorem is proven.

J.2 MACHINERY FOR MULTIPLICITY

The main difficulty, compared to the one-dimensional case, is that we must directly handle the multiplicity of eigensystems. By this, we mean that in $\mathbb{R}^d$, the Hermite basis is no longer of the form $\{|h_n\rangle\}_{n\in\mathbb{N}}$ but rather $\{|h_\alpha\rangle\}_{\alpha\in\mathbb{N}^d}$. This creates degeneracy: if $\sum_i\alpha_i = \sum_i\alpha'_i$, then $|h_\alpha\rangle, |h_{\alpha'}\rangle$ belong to the same eigenspace. Concretely, consider the Ornstein-Uhlenbeck operator $\nabla^2 - x\cdot\nabla$ on $L^2(\mu_d)$. In the $d = 1$ case, its eigenvalues have no multiplicity, and its eigenvectors are precisely the normalized Hermite polynomials $\{|h_n\rangle\}_{n\in 0:\infty}$. In the $d > 1$ case, its eigenvalues suffer multiplicity: its $n$-th eigenspace is $\mathrm{Span}\{|h_\alpha\rangle : |\alpha| = n\}$, with $\binom{n+d-1}{d-1}$ dimensions. This multiplicity is inescapable, because the Ornstein-Uhlenbeck operator is spherically symmetric, and spherical symmetry inevitably leads to multiplicity. In our case, the dot-product kernel $\sum_n\frac{c_n}{n!}(x^\top\Gamma y)^n$ may have some entries of $\Gamma$ equal, which leads to (partial) spherical symmetry and thus multiplicity.

As a more famous example, the spherical harmonics in $L^2(\mathbb{R}^d)$ are the eigenvectors of the Laplacian operator $\nabla^2$. Due to spherical symmetry of the operator, its eigenvalues have multiplicity, and it splits $L^2(\mathbb{R}^d)$ into eigenspaces; the $n$-th eigenspace is spanned by $\frac{(2n+d-2)(n+d-3)!}{n!\,(d-2)!}$ degree-$n$ homogeneous polynomials.

Let us consider the prototypical case that we want to study: the convergence of dot-product kernels to the Hermite eigensystem.
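The multiplicity counting function used throughout this appendix is one line of code (sketch):

```python
from math import comb

def level_multiplicity(n, d):
    # Number of degree-n monomials (equivalently, degree-n Hermite polynomials)
    # in R^d: m(n) = binom(n + d - 1, d - 1), polynomially bounded in n for fixed d
    return comb(n + d - 1, d - 1)
```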
In $\mathbb{R}^d$, there are $\binom{n+d-1}{d-1}$ degree-$n$ monomials and $\binom{n+d-1}{d-1}$ degree-$n$ Hermite polynomials. To obtain the Hermite polynomials, we cannot simply apply the Gram-Schmidt process to the monomials individually. Instead, we need to apply the Gram-Schmidt process simultaneously on each segment of $\binom{n+d-1}{d-1}$ monomials, to obtain the $\binom{n+d-1}{d-1}$-dimensional subspace spanned by the degree-$n$ Hermite polynomials.

To count the multiplicity, we use a function $m : \mathbb{N}\to\mathbb{N}\cup\{\infty\}$, interpreted as saying that the $n$-th eigenspace has dimension $m(n)$. For example, the multiplicity counting function for the Ornstein-Uhlenbeck operator over $L^2(\mu_d)$ is $m(n) = \binom{n+d-1}{d-1}$. Note that $m$ does not need to be strictly positive: we allow $m(k) = 0$ for some $k$. We even allow $\sum_k m(k)$ to be finite, in case the Hilbert space under consideration is finite-dimensional, although we require $\sum_k m(k) > 0$, for otherwise the setup would be completely trivial. For convenience, we will from now on assume that $m(k) > 0$ for all $k$, since we do not need more generality; the reader who needs it can read the next few sections and mentally generalize the constructions.

The most important condition on the multiplicity counting function is:

Definition 6 (Polynomially bounded multiplicity). A multiplicity counting function is polynomially bounded iff there exist some $A, D > 0$ such that $m(n) \le A\,n^D$ [...]

[...] project each of $|v_{k,1}\rangle, \dots, |v_{k,m(k)}\rangle$ to $\mathrm{Span}\big(|\hat v_{0,1}\rangle, \dots, |\hat v_{0,m(0)}\rangle\big)^\perp$, to obtain $|v'_{k,1}\rangle, \dots, |v'_{k,m(k)}\rangle$, then construct an arbitrary orthonormal basis of their span. Call them $|\hat v_{k,1}\rangle, \dots, |\hat v_{k,m(k)}\rangle$.
3. Continue this way inductively.

Note that the Gram-Schmidt process is not uniquely defined, due to the steps where arbitrary orthonormal bases are chosen. However, it constructs a sequence of subspaces $\{\mathrm{Span}(|\hat v_{k,1}\rangle, \dots, |\hat v_{k,m(k)}\rangle)\}_{k\in 0:\infty}$, which are uniquely defined. Also note that even the traditional Gram-Schmidt process, without multiplicity, is not uniquely defined, because each $\hat v_k$ could have been $-\hat v_k$ instead; that is, there is a $\{-1, +1\}$ ambiguity per step. Now, $\{-1, +1\}$ is just $O(1)$, the orthogonal group on $\mathbb{R}^1$, and we see that this is a general phenomenon: the Gram-Schmidt process with multiplicity $m$ creates an $O(m(k))$ amount of ambiguity at step $k$.

Each vector system defines a positive semi-definite kernel
$$K = \sum_{k=0}^\infty\sum_{l=1}^{m(k)}a_{k,l}|v_{k,l}\rangle\langle v_{k,l}|$$
for any indexed set of non-negative scalars $\{a_{k,l} : k\in\mathbb{N},\ l\in[1:m(k)]\}$, provided that all $a_{k,l} \ge 0$ and $\sum_{k,l}a_{k,l} < \infty$. [...] Given $\varepsilon > 0$, it is $\varepsilon$-fast-decaying if $c_{n+1} \le \varepsilon c_n$ for all $n\in\mathbb{N}$. The multiplicity counting function for it is $m(n) = \binom{n+d-1}{d-1}$; it is polynomially bounded.

We enumerate the eigensystem of $K$ as
$$(\lambda_{0,1}, V_{0,1}),\quad (\lambda_{1,1}, V_{1,1}), \dots, (\lambda_{1,d}, V_{1,d}),\quad (\lambda_{2,1}, V_{2,1}), \dots, \big(\lambda_{2,\binom{d+1}{2}}, V_{2,\binom{d+1}{2}}\big),\quad \dots,\quad (\lambda_{n,1}, V_{n,1}), \dots, \big(\lambda_{n,\binom{n+d-1}{d-1}}, V_{n,\binom{n+d-1}{d-1}}\big),\quad \dots$$
where the eigenvalues are arranged in decreasing order: $\lambda_{0,1} \ge \lambda_{1,1} \ge \cdots$. In case of multiplicity, the same eigenvalue is written repeatedly; for example, if $\lambda$ has multiplicity 2, then it is written out twice. Each $V_{k,l}$ is the eigenspace of $\lambda_{k,l}$. In case of eigenvalue multiplicity, repeat the eigenspace; for example, if $\lambda_{1,1} = \lambda_{1,2}$ is an eigenvalue of multiplicity 2, it corresponds to a 2-dimensional eigenspace $V$, and in this case we define $V_{1,1} = V_{1,2} = V$.

We also need to perform a similar "merging" on the Hermite eigensystem:
$$\Big\{\Big(c_n\prod_{i=1}^d\gamma_i^{\alpha_i},\ \mathrm{Span}\,|h_\alpha^{(\Sigma)}\rangle\Big) : n\in\mathbb{N},\ |\alpha| = n\Big\}.$$
This merging is, unfortunately, not exactly the same, because we cannot simply say that if $c_n\prod_{i=1}^d\gamma_i^{\alpha_i} = c_{n'}\prod_{i=1}^d\gamma_i^{\alpha'_i}$ then we merge their eigenspaces: we must not merge if $n \ne n'$, even if the eigenvalues happen to coincide. This is because, as $\varepsilon \to 0$, eventually $c_n$ and $c_{n'}$ will differ so greatly that this accidental degeneracy is broken. We must only merge eigenspaces that are non-accidentally degenerate. So why didn't we do this for $K$? Because $K$ is much harder to control! We know everything there is to know about the Hermite eigenstructure, but $K$ is a big unknown that we must laboriously control. In particular, we do not know which of $K$'s eigenvalues are accidentally equal and which are non-accidentally equal, so we treat them without special consideration; for the Hermite eigenstructure we do know, so we treat them differently.

So, we perform a non-accidental merge of the Hermite eigensystem. For each $n\in\mathbb{N}$, we say that the $n$-th segment of the Hermite eigensystem is
$$\Big\{\Big(c_n\prod_{i=1}^d\gamma_i^{\alpha_i},\ \mathrm{Span}\,|h_\alpha^{(\Sigma)}\rangle\Big) : |\alpha| = n\Big\},$$
and within each segment, we merge the degenerate eigenspaces. For example, if it happens that $\prod_i\gamma_i^{\alpha_i} = \prod_i\gamma_i^{\alpha'_i}$ for $|\alpha| = |\alpha'| = n$, but for no other $\alpha''$ with $|\alpha''| = n$, then we replace both $\mathrm{Span}(|h_\alpha^{(\Sigma)}\rangle)$ and $\mathrm{Span}(|h_{\alpha'}^{(\Sigma)}\rangle)$ by their direct sum $\mathrm{Span}(|h_\alpha^{(\Sigma)}\rangle, |h_{\alpha'}^{(\Sigma)}\rangle)$.

With all this set up, we can now state Theorem 2, this time with full rigor.

Theorem 7 (The HEA holds for a fast-decaying dot-product kernel on Gaussian measure). Let
1. $\Sigma$ be a covariance matrix;
2. $\Sigma = U\Gamma U^\top$ be its SVD, with $\Gamma = \mathrm{diag}(\gamma_1, \dots, \gamma_d)$;
3. $\mu = \mathcal{N}(0, \Sigma)$.
For any $N\in\mathbb{N}$, there exist constants $C_N, D_N, \varepsilon_N$ such that if $\varepsilon\in[0, \varepsilon_N]$ and $K$ is an $\varepsilon$-fast-decaying dot-product kernel, the following holds. For any element $(\lambda, V)$ within the merged $N$-th segment of the Hermite eigenstructure, there exist exactly $\dim V$ eigenvalues (counting multiplicity) of $K$ in the interval $\lambda(1 \pm C_N\varepsilon)$. Letting their corresponding eigenspaces be $V_1, \dots, V_{\dim V}$, there exists a unitary operator $\begin{pmatrix}\cos\Theta & \sin\Theta \\ -\sin\Theta & \cos\Theta\end{pmatrix}$ that rotates $V_1 \oplus\cdots\oplus V_{\dim V}$ to $V$, such that $\|\sin\Theta\|_{\mathrm{op}} \le D_N\varepsilon$. [...]

J.4 PART 1: EXPONENTIAL SEPARATION OF EIGENSEGMENTS

[...] there exist constants $\varepsilon_n, C_n, D_n > 0$ such that
$$\lambda_{n,1}, \dots, \lambda_{n,m(n)} \in [C_n\bar a_n,\ D_n\bar a_n] \quad \text{for all } \varepsilon\in[0, \varepsilon_n].$$
The constants $\varepsilon_n, C_n, D_n$ depend only on $\delta_{\mathrm{low},0}, \dots, \delta_{\mathrm{low},n}$ and the Gram matrix of the vectors $|v_{0,1}\rangle, \dots, |v_{n,m(n)}\rangle$.

We bound the entire eigensegment $\lambda_{N,1}(K) \ge \cdots \ge \lambda_{N,m(N)}(K)$ by the min-max principle. This is analogous to what we did in Section I.3.1, but simplified, because we do not need to produce sharp bounds; that comes later.

For the lower bound, we use $V = \mathrm{Span}\big(|v_{0,1}\rangle, \dots, |v_{N,m(N)}\rangle\big)$:
$$\lambda_{N,m(N)}(K) \ge \min_{\substack{x\in\mathrm{Span}(|v_{0,1}\rangle,\dots,|v_{N,m(N)}\rangle) \\ \|x\|=1}}\langle x|K|x\rangle = \min_{\substack{x\in\mathrm{Span}(|v_{0,1}\rangle,\dots,|v_{N,m(N)}\rangle) \\ \|x\|=1}}\sum_{k=0}^\infty\sum_{l=1}^{m(k)}a_{k,l}|\langle x|v_{k,l}\rangle|^2$$
$$\ge \min_{\substack{x\in\mathrm{Span}(|\hat v_{0,1}\rangle,\dots,|\hat v_{N,m(N)}\rangle) \\ \|x\|=1}}\sum_{k=0}^N\sum_{l=1}^{m(k)}a_{k,l}|\langle x|v_{k,l}\rangle|^2 \ge \bar a_N\,\delta_{\min,N}\min_{\substack{x\in\mathrm{Span}(|\hat v_{0,1}\rangle,\dots,|\hat v_{N,m(N)}\rangle) \\ \|x\|=1}}\sum_{k=0}^N\sum_{l=1}^{m(k)}|\langle x|v_{k,l}\rangle|^2 = \bar a_N\,\delta_{\min,N}\,\lambda_{\min}(G) = \Omega(\bar a_N),$$
where $G$ is the Gram matrix of the vectors $|v_{0,1}\rangle, \dots, |v_{N,m(N)}\rangle$. By assumption, these vectors are linearly independent, so the Gram matrix is positive definite.

For the upper bound, we use $V = \mathrm{Span}\big(|v_{0,1}\rangle, \dots, |v_{N-1,m(N-1)}\rangle\big)$:
$$\lambda_{N,1}(K) \le \sup_{\substack{x\in\mathrm{Span}(|v_{0,1}\rangle,\dots,|v_{N-1,m(N-1)}\rangle)^\perp \\ \|x\|=1}}\langle x|K|x\rangle = \sup_{\substack{x\in\mathrm{Span}(|\hat v_{N,1}\rangle,\dots) \\ \|x\|=1}}\langle x|K|x\rangle$$
$$\le \sup_{\substack{x\in\mathrm{Span}(|\hat v_{N,1}\rangle,\dots) \\ \|x\|=1}}\sum_{l=1}^{m(N)}a_{N,l}\langle x|v_{N,l}\rangle\langle v_{N,l}|x\rangle + \lambda_{\max}\Big(\sum_{k=N+1}^\infty\sum_{l=1}^{m(k)}a_{k,l}|v_{k,l}\rangle\langle v_{k,l}|\Big)$$
$$\le \sup_{\substack{x\in\mathrm{Span}(|\hat v_{N,1}\rangle,\dots,|\hat v_{N,m(N)}\rangle) \\ \|x\|=1}}\sum_{l=1}^{m(N)}a_{N,l}\langle x|v_{N,l}\rangle\langle v_{N,l}|x\rangle + \mathrm{Tr}\Big(\sum_{k=N+1}^\infty\sum_{l=1}^{m(k)}a_{k,l}|v_{k,l}\rangle\langle v_{k,l}|\Big)$$
$$= \lambda_{\max}\bigg(\Big(\sum_{l=1}^{m(N)}a_{N,l}\langle\hat v_{N,i}|v_{N,l}\rangle\langle v_{N,l}|\hat v_{N,j}\rangle\Big)_{i,j=1}^{m(N)}\bigg) + \underbrace{\sum_{k=N+1}^\infty\sum_{l=1}^{m(k)}a_{k,l}}_{\text{use polynomial multiplicity}}$$
$$\le \mathrm{Tr}\bigg(\Big(\sum_{l=1}^{m(N)}a_{N,l}\langle\hat v_{N,i}|v_{N,l}\rangle\langle v_{N,l}|\hat v_{N,j}\rangle\Big)_{i,j=1}^{m(N)}\bigg) + O(\bar a_N\varepsilon) = \sum_{l=1}^{m(N)}\sum_{i=1}^{m(N)}a_{N,l}\langle\hat v_{N,i}|v_{N,l}\rangle\langle v_{N,l}|\hat v_{N,i}\rangle + O(\bar a_N\varepsilon)$$
$$\le \bar a_N\sum_{l=1}^{m(N)}\sum_{i=1}^{m(N)}\langle\hat v_{N,i}|v_{N,l}\rangle\langle v_{N,l}|\hat v_{N,i}\rangle + O(\bar a_N\varepsilon) = O(\bar a_N).$$

J.5 PART 2: CONVERGENCE IN BULK

Now that the spectrum is divided into exponentially separated segments, we can perform a surgical extraction of each $N$-th segment of the spectrum, cutting off both the "head" part (segments below $N$) and the "tail" part (segments above $N$).

Lemma 2 (Bulk insensitivity of eigensystem segments). Under the same assumptions as Section J.4, for any $n$ there exist constants $\varepsilon_n > 0$ such that the $N$-th eigensegment is $O(\varepsilon)$-close in bulk to the spectrum of the matrix
$$\Big(\sum_{l=1}^{m(N)}a_{N,l}\langle\hat v_{N,i}|v_{N,l}\rangle\langle v_{N,l}|\hat v_{N,j}\rangle\Big)_{i,j=1}^{m(N)}$$
for all $\varepsilon\in[0, \varepsilon_n]$. The constant $\varepsilon_n$ and the constant in $O(\varepsilon)$ depend only on $\delta_{\mathrm{low},0}, \dots, \delta_{\mathrm{low},n}$ and the Gram matrix of the vectors $|v_{0,1}\rangle, \dots, |v_{n,m(n)}\rangle$.

Intuitively, the lemma states that the eigensystem segments separate very cleanly. First, relative to the $N$-th eigensegment, all the higher-order segments are $O(\varepsilon)$-small and thus ignorable. Second, relative to the $N$-th segment, the only effect of the lower-order segments is to force the $N$-th eigensegment into a safe subspace very close to the orthogonal subspace $\mathrm{Span}\big(|\hat v_{N,1}\rangle, \dots, |\hat v_{N,m(N)}\rangle\big)$, within which the lower-order terms $a_{0,1}|v_{0,1}\rangle\langle v_{0,1}|, \dots, a_{N-1,m(N-1)}|v_{N-1,m(N-1)}\rangle\langle v_{N-1,m(N-1)}|$ all vanish.

Stated another way, the lemma says that the $N$-th segment of the spectrum of $K$ is $O(\varepsilon)$-close in bulk to that of $\tilde K$. To obtain $\tilde K$, we first remove the tail $\sum_{n=N+1}^\infty\sum_{l=1}^{m(n)}a_{n,l}|v_{n,l}\rangle\langle v_{n,l}|$, then project to the space orthogonal to $|v_{0,1}\rangle, \dots, |v_{N-1,m(N-1)}\rangle$ to remove the head, obtaining
$$\tilde K = \sum_{i,j=1}^{m(N)}\sum_{l=1}^{m(N)}a_{N,l}|\hat v_{N,i}\rangle\langle\hat v_{N,i}|v_{N,l}\rangle\langle v_{N,l}|\hat v_{N,j}\rangle\langle\hat v_{N,j}|.$$
The key of the proof is to ensure that each cut perturbs the eigenvalues by $O(\bar a_N\varepsilon)$, so that we extract something $O(\varepsilon)$-close in bulk to the original eigensegment.

J.5.1 CUTTING OFF THE TAIL

We need:

Theorem 8 (Wielandt-Hoffman inequality (Kato, 1987)). Let $A, B$ be self-adjoint operators, such that $C := B - A$ is a trace-class operator. Then we can enumerate the eigenvalues of $A, B, C$ as $\alpha_i, \beta_i, \gamma_i$ (including eigenvalue multiplicity) such that
$$\sum_i|\alpha_i - \beta_i| \le \sum_i|\gamma_i|. \qquad (54)$$

The tail of the operator $K$ is the part that comes after the $a_{N,l}$ coefficients. It is bounded in trace norm:
$$\mathrm{Tr}\Big(\sum_{k=N+1}^\infty\sum_{l=1}^{m(k)}a_{k,l}|v_{k,l}\rangle\langle v_{k,l}|\Big) = \sum_{k=N+1}^\infty\sum_{l=1}^{m(k)}a_{k,l} \le \sum_{k=N+1}^\infty\bar a_N\,\varepsilon^{k-N}m(k) \le \bar a_N\sum_{k=N+1}^\infty\varepsilon^{k-N}A\,k^D = O(\bar a_N\varepsilon),$$
where we use the polynomial bound $m(k) \le A\,k^D$. Thus, by Wielandt-Hoffman, removing the tail perturbs the spectrum by only $O(\bar a_N\varepsilon)$. Note particularly:
1. all segments from the 0-th to the $N$-th remain exponentially separated;
2. the perturbed $N$-th segment is $O(\varepsilon)$-close in bulk to the original $N$-th segment.

J.5.2 CUTTING OFF THE HEAD

Cutting off the head is simply cutting off the inverted tail. Having cut off the tail, we have a finite-rank operator
$$\tilde K := \sum_{k=0}^N\sum_{l=1}^{m(k)}a_{k,l}|v_{k,l}\rangle\langle v_{k,l}|$$
that splits into two reducing subspaces $V, V^\perp$, where $V = \mathrm{Span}\big(|v_{0,1}\rangle, \dots, |v_{N,m(N)}\rangle\big)$. It is zero on $V^\perp$ and positive-definite on $V$.
J.5.2 CUTTING OFF THE HEAD

Cutting off the head is simply cutting off the inverted tail. Having cut off the tail, we have a finite-rank operator
$$\tilde K := \sum_{k=0}^{N}\sum_{l=1}^{m(k)} a_{k,l}\,|v_{k,l}\rangle\langle v_{k,l}|$$
that splits into two reducing subspaces $V, V^\perp$, where $V = \operatorname{Span}\big(|v_{0,1}\rangle, \dots, |v_{N,m(N)}\rangle\big)$. It is zero on $V^\perp$ and positive-definite on $V$.

Therefore, we drop down from the full Hilbert space to just $V$, where we can reason with matrices. We express $\tilde K$ in matrix form in the orthonormal basis $|\hat v_{k,l}\rangle$, ordered so that $|\hat v_{N,1}\rangle, \dots, |\hat v_{N,m(N)}\rangle$ come before $|\hat v_{0,1}\rangle, \dots, |\hat v_{N-1,m(N-1)}\rangle$:
$$[\tilde K] = \begin{pmatrix} A & B \\ B^\top & C + D \end{pmatrix}$$
where the four blocks are:
$$A = \Big[\sum_{l=1}^{m(N)} a_{N,l}\langle\hat v_{N,i}|v_{N,l}\rangle\langle v_{N,l}|\hat v_{N,j}\rangle\Big]_{i,j=1}^{m(N)}, \qquad B = \Big[\sum_{l=1}^{m(N)} a_{N,l}\langle\hat v_{N,i}|v_{N,l}\rangle\langle v_{N,l}|\hat v_{n,j}\rangle\Big]_{i\in 1:m(N),\ (n,j)\in[0:N-1,\ 1:m]}$$
$$C = \Big[\sum_{l=1}^{m(N)} a_{N,l}\langle\hat v_{n,j}|v_{N,l}\rangle\langle v_{N,l}|\hat v_{n',j'}\rangle\Big]_{(n,j),(n',j')\in[0:N-1,\ 1:m]}, \qquad D = \Big[\sum_{k=0}^{N-1}\sum_{l=1}^{m(k)} a_{k,l}\langle\hat v_{n,j}|v_{k,l}\rangle\langle v_{k,l}|\hat v_{n',j'}\rangle\Big]_{(n,j),(n',j')\in[0:N-1,\ 1:m]}$$
For a symmetric matrix in block form,
$$\begin{pmatrix} A & B \\ B^\top & C+D \end{pmatrix}^{-1} = \begin{pmatrix} A^{-1} & 0 \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} A^{-1}BS^{-1}B^\top A^{-1} & -A^{-1}BS^{-1} \\ -S^{-1}B^\top A^{-1} & S^{-1} \end{pmatrix}$$
where $S = D + C - B^\top A^{-1}B$.

We need several crude bounds on $A, B, C, D$ to prove that the first term really is the bulk term, and the second term really is an order $O(\varepsilon)$ perturbation upon the bulk term, so that we can safely cut it off according to the Wielandt-Hoffman inequality.

1. For each of $A, B, C$, their entries are all bounded in absolute value by $O(\bar a_N)$. Thus, their spectral radii are bounded by $O(\bar a_N)$.
2. By the Cauchy interlacing law, $\lambda_{\min}(A) \ge \lambda_{\min}([\tilde K]) = \Omega(\bar a_N)$. Thus, the spectrum of $A$ is $\Theta(\bar a_N)$, and the spectrum of $A^{-1}$ is $\Theta(\bar a_N^{-1})$.
3. Notice that the matrix $D$ is constructed similarly to the matrix $[\tilde K]$, except with one more truncation. Therefore, by the same argument, its least eigenvalue is on the order of $\Omega(\bar a_{N-1}) = \Omega(\bar a_N/\varepsilon)$.
4. Therefore, $B^\top A^{-1}$ and $A^{-1}B$ have entries bounded in order $O(1)$, and $B^\top A^{-1}B$ has entries bounded in order $O(\bar a_N)$.
5. Therefore, $S$ has the same spectrum as $D$ up to an order $O(\bar a_N)$ perturbation. Since $D$ has smallest eigenvalue $\Omega(\bar a_{N-1})$, so does $S$. Therefore, $S^{-1}$ has largest eigenvalue $O(1/\bar a_{N-1}) = O(\varepsilon/\bar a_N)$.

Thus, the $N$-th segment of the eigenstructure of $\tilde K$, inverted, is $O(\varepsilon)$-close in bulk to the eigenstructure of $A^{-1}$. Inverting again, we find that it is $O(\varepsilon)$-close in bulk to the eigenstructure of $A$.
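Before moving on, here is a sanity check of the block-inverse identity above on a random symmetric positive-definite matrix (dimensions arbitrary, code our own):

```python
import numpy as np

rng = np.random.default_rng(2)
p, q = 3, 4
M = rng.normal(size=(p + q, p + q))
M = M @ M.T + np.eye(p + q)                  # symmetric positive definite
A, B = M[:p, :p], M[:p, p:]
E = M[p:, p:]                                # plays the role of C + D
Ainv = np.linalg.inv(A)
S = E - B.T @ Ainv @ B                       # Schur complement
Sinv = np.linalg.inv(S)

bulk = np.block([[Ainv, np.zeros((p, q))],
                 [np.zeros((q, p)), np.zeros((q, q))]])
corr = np.block([[Ainv @ B @ Sinv @ B.T @ Ainv, -Ainv @ B @ Sinv],
                 [-Sinv @ B.T @ Ainv,            Sinv]])
print(np.allclose(np.linalg.inv(M), bulk + corr))   # True
```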
J.6 PART 3: THE SPECIAL CASE OF DOT-PRODUCT KERNELS

Recall, we need to show that, in the standard Gaussian space $L^2(N(0, I_d))$, an operator $K$ defined by the kernel $\sum_n \frac{c_n}{n!}(x^\top\Gamma y)^n$ converges to the Hermite eigensystem. Here, $\Gamma = \operatorname{diag}(\gamma_1, \dots, \gamma_d)$ with $\gamma_1, \dots, \gamma_d > 0$.

Before applying the two generic lemmas, we need to first show that the kernel does have a fast-decaying coefficient system. This is because, even though $c_n$ is a fast-decaying coefficient sequence, it does not automatically imply that $K$ has a fast-decaying coefficient system if we express it in the monomial vector system. Once this is done, we apply Section J.4 to conclude that the kernel's spectrum is divided into exponentially separated segments. Then, we "hopscotch" through several eigenstructures, until we show that the $N$-th eigensegment of $K$ is $O(\varepsilon)$-close in bulk to the $N$-th eigensegment of a stretched dot-product kernel $c\,e_\varepsilon(x^\top\Gamma y)$. This argument is nearly the same as the proof of Section J.5, with a small extension to account for the special structure of dot-product kernels. Finally, we hopscotch again, applying Davis-Kahan at every step, to prove that the eigenspaces also converge as $O(\varepsilon)$.

J.6.1 COMBINATORICS WITH THE MONOMIAL BASIS

In analogy with the multidimensional Hermite vector system
$$|h_\alpha\rangle = \bigotimes_{i=1}^d |h_{\alpha_i}\rangle = \prod_{i=1}^d h_{\alpha_i}(x_i)$$
we define the multidimensional normalized monomial vector system by taking the tensor product of the single-dimensional normalized monomial vectors:
$$|v_\alpha\rangle = \bigotimes_{i=1}^d |v_{\alpha_i}\rangle = \prod_{i=1}^d \frac{x_i^{\alpha_i}}{\sqrt{(2\alpha_i-1)!!}}$$
Both of them are vector systems over the polynomially bounded multiplicity counting function
$$m(n) := \binom{n+d-1}{d-1}.$$
The $n$-th segment of the monomial vector system consists of $\{|v_\alpha\rangle : |\alpha| = n\}$, and similarly, for Hermite, $\{|h_\alpha\rangle : |\alpha| = n\}$. The Hermite vector system is obtained by the Gram-Schmidt process on the monomial vector system.

Now, consider the form of the dot-product kernel function:
$$K(x, y) := \sum_{n=0}^{\infty} \frac{c_n}{n!}\Big(\sum_{i=1}^d \gamma_i x_i y_i\Big)^n = \sum_{n=0}^{\infty} \frac{c_n}{n!} \sum_{|\alpha|=n} \frac{n!}{\alpha_1!\cdots\alpha_d!} \Big(\prod_{i=1}^d \gamma_i^{\alpha_i} x_i^{\alpha_i} y_i^{\alpha_i}\Big)$$
In bra-ket notation, the operator is
$$K = \sum_{n=0}^{\infty} \sum_{\alpha:|\alpha|=n} c_n \prod_{i=1}^d \frac{\gamma_i^{\alpha_i}(2\alpha_i-1)!!}{\alpha_i!}\,|v_\alpha\rangle\langle v_\alpha|$$
Since $c_n$ is a fast-decaying sequence, it remains to show that the quantity
$$\max_{\alpha:|\alpha|=n} \prod_{i=1}^d \frac{\gamma_i^{\alpha_i}(2\alpha_i-1)!!}{\alpha_i!}$$
grows at most exponentially with $n$. Once we show that, we know that the coefficient system is also fast-decaying. Equivalently, we need to show that
$$\max_{\alpha:|\alpha|=n} \sum_{i=1}^d \Big( \alpha_i \ln(\gamma_i/2) + \ln\binom{2\alpha_i}{\alpha_i} \Big)$$
grows at most linearly with $n$. Note that, because such a bound does not depend on the choice of the fast-decaying sequence $c_n$, it allows us to change $c_n$ to any other $c'_n$ with the same decay rate $\varepsilon$, and still get a fast-decaying coefficient system.

The first term is trivial:
$$\sum_{i=1}^d \alpha_i \ln(\gamma_i/2) \in \Big[\, n \min_{i\in[1:d]} \ln(\gamma_i/2),\ n \max_{i\in[1:d]} \ln(\gamma_i/2) \,\Big] = \Theta(n)$$
To bound the second term, we do some combinatorics. Let $f(k) := \binom{2k}{k}$; then $\frac{f(k+1)}{f(k)} = 4 - \frac{2}{k+1}$ is strictly monotonically increasing. Therefore, $f$ is strictly log-convex. Therefore, $\alpha \mapsto \sum_{i=1}^d \ln f(\alpha_i)$ is strictly Schur-convex. Thus
$$\sum_{i=1}^d \ln f(\alpha_i)$$
achieves its upper bound at $(n, 0, \dots, 0)$, and its lower bound when all $\alpha_i$ are as close to equal as possible. Upper bound:
$$\ln f(n) = \ln\binom{2n}{n} = 2n\ln 2 - \tfrac12\ln(\pi n) + O(1/n) = \Theta(n)$$
Lower bound (with $d$ fixed):
$$d\,\ln f(n/d) = d\,\ln\binom{2n/d}{n/d} = 2n\ln 2 - \tfrac{d}{2}\ln(\pi n/d) + O(1/n) = \Theta(n)$$
In fact, we have shown that the coefficient block is tightly clustered: for every $\alpha$ with $|\alpha| = n$, the sum $\sum_{i=1}^d \ln f(\alpha_i)$ lies between these two bounds, so the whole $n$-th coefficient block spans a window of width $O(\ln n)$ — negligible on the exponential scale relevant here.
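The growth claim is easy to test numerically. The sketch below is our own (brute force over all α with |α| = n, for a small arbitrary spectrum γ); it confirms that the maximal log-coefficient grows linearly in n, so the coefficient system decays at the same exponential rate as c_n:

```python
from math import comb, log
from itertools import combinations_with_replacement

def dfact_over_fact(a):                 # (2a-1)!!/a! = C(2a, a) / 2**a
    return comb(2 * a, a) / 2**a

gammas = [1.3, 0.9, 0.5]                # arbitrary positive spectrum (ours)
d = len(gammas)

def compositions(n, d):                 # all alpha >= 0 with |alpha| = n
    for cuts in combinations_with_replacement(range(n + 1), d - 1):
        yield [cuts[0]] + [cuts[i] - cuts[i - 1]
                           for i in range(1, d - 1)] + [n - cuts[-1]]

for n in range(1, 13):
    best = max(sum(a * log(g) for a, g in zip(alpha, gammas))
               + sum(log(dfact_over_fact(a)) for a in alpha)
               for alpha in compositions(n, d))
    print(n, round(best / n, 4))        # per-step log growth settles to a constant
```

The printed ratio stabilizes near max_i ln(2γ_i), consistent with the Θ(n) bound just derived.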
J.6.2 CONVERGENCE IN BULK

Like the general case, the proof has multiple parts. First, we apply Section J.4 to conclude that $K$ has exponentially separated eigensegments. Then, we hopscotch through multiple eigensystem segments to show eigensegment convergence in bulk, as in Section J.5. The chain of comparisons (Figure 25) runs:

- the $N$-th segment of $K(x,y) = \sum_{n=0}^{\infty}\frac{c_n}{n!}(\gamma_1x_1y_1+\cdots+\gamma_dx_dy_d)^n$ is $O(\varepsilon)$-close to
- the $N$-th segment of $\tilde K(x,y) = \sum_{n=0}^{N}\frac{c_n}{n!}(\gamma_1x_1y_1+\cdots+\gamma_dx_dy_d)^n$, which equals
- the inverse of the 0-th segment of $[\tilde K]^{-1}$, which is $O(\varepsilon)$-close to
- the inverse of the eigensystem of just the $a_N$ part of $[\tilde K]^{-1}$; provided that $c'_N = c_N$, this coincides with the same object for $[\tilde K']^{-1}$, which is $O(\varepsilon)$-close to
- the inverse of the 0-th segment of $[\tilde K']^{-1}$, i.e. the $N$-th segment of $\tilde K'(x,y) = \sum_{n=0}^{N}\frac{c'_n}{n!}(\gamma_1x_1y_1+\cdots+\gamma_dx_dy_d)^n$, which is $O(\varepsilon)$-close to
- the $N$-th segment of $K'(x,y) = \sum_{n=0}^{\infty}\frac{c'_n}{n!}(\gamma_1x_1y_1+\cdots+\gamma_dx_dy_d)^n$; setting $K'(x,y) = c\,e_\varepsilon(x^\top\Gamma y)$ with $c = c_N/\varepsilon^N$, this is
- the $N$-th segment of $\frac{c_N}{\varepsilon^N}\,e_\varepsilon(x^\top\Gamma y)$, which is $O(\varepsilon)$-close to
- the $N$-th segment of $\big\{\big(\frac{c_N}{\varepsilon^N}\varepsilon^{|\alpha|}\prod_{i=1}^d\gamma_i^{\alpha_i},\ |h_\alpha\rangle\big) : \alpha \in \mathbb{N}_0^d\big\}$, namely $\big\{\big(c_N\prod_{i=1}^d\gamma_i^{\alpha_i},\ |h_\alpha\rangle\big) : \alpha \in \mathbb{N}_0^d,\ |\alpha| = N\big\}$.

Figure 25: Hopscotching through eigensystems. The new trick here is that we avoid solving for the $N$-th eigensegment of $\tilde K$, by going down then up again, arriving at a previously solved kernel as if taking a subway.

J.6.3 CONVERGENCE IN DETAIL

We have successfully proven that the source eigensegment converges in bulk to the target eigensegment. It remains to prove convergence in detail. This requires breaking degeneracies in the source eigensegment whenever the degeneracy is broken in the target eigensegment (but not vice versa), followed by applying Davis-Kahan once per target eigenspace.

Fix some $N$. The $N$-th target eigensegment is
$$\Big\{\Big(c_N\prod_{i=1}^d \gamma_i^{\alpha_i},\ |h_\alpha\rangle\Big) : \alpha\in\mathbb{N}_0^d,\ |\alpha|=N\Big\}$$
Some of the eigenvalues in it may be equal, because $\{\ln\gamma_1, \dots, \ln\gamma_d\}$ may be $N$-linearly dependent.¹⁵ Therefore, merge the eigenvectors with equal eigenvalues into eigenspaces; the eigenspaces then have distinct eigenvalues. Note in particular that this constitutes a static target: the coefficients $c_0, c_1, \dots$ may change, but if $c_N\prod_{i=1}^d\gamma_i^{\alpha_i} = c_N\prod_{i=1}^d\gamma_i^{\alpha'_i}$ for one choice of the coefficients, then it is so for all choices.

Now, fix one such eigenspace $V$ and its corresponding eigenvalue $c_N\zeta$. Let the $N$-th segment of $K$ be
$$\{(c_N\zeta_{N,1}(K), |v_{N,1}(K)\rangle), \dots, (c_N\zeta_{N,m(N)}(K), |v_{N,m(N)}(K)\rangle)\}$$
where we do not yet demand that the eigenvalues are all distinct. If some of the eigenvalues are unluckily degenerate, we just tolerate an arbitrary choice of the eigenvectors for now.

As previously argued, it is $O(\varepsilon)$-close to the target eigensegment in bulk. Therefore, for small enough $\varepsilon$, the source eigenvalues $c_N\zeta_{N,1}(K), \dots, c_N\zeta_{N,m(N)}(K)$ will be corralled around their corresponding target eigenvalues, like iron filings around magnets. Because there are only $m(N)$ source eigenvalues to go around, each target eigenvalue can only grab exactly as many source eigenvalues as its multiplicity. In particular, while these "herds" of eigenvalues may be highly degenerate internally, they are separated from other herds by $\bar a_N\,\Theta(1)$. In particular, this means $c_N\zeta$ will be able to grab exactly $\dim V$ source eigenvalues, so that they are all stuck within an interval $c_N(\zeta \pm O(\varepsilon))$. Enumerate these as $c_N\zeta_{N,i_1}(K), \dots, c_N\zeta_{N,i_{\dim V}}(K)$. Let their eigenvectors be $|v_{N,i_1}(K)\rangle, \dots, |v_{N,i_{\dim V}}(K)\rangle$, and let the vector space spanned by them be $V_K$. This is a reducing subspace of $K$, and one that we wish to show is $O(\varepsilon)$-close to $V$.

Follow through the hopscotching diagram again. At each step in the hopscotching, either there is an exact equality, in which case the reducing subspace is unchanged, or there is a perturbation of the operator that is $O(\varepsilon)$ relative to the operator, in which case, by Davis-Kahan, the reducing subspace is perturbed by an angle of only $O(\varepsilon)$.

We spell this out explicitly for the top half of the diagram. At the first step, $K = \sum_{n=0}^{\infty}\frac{c_n}{n!}(x^\top\Gamma y)^n$ is perturbed by truncating the tail $\sum_{n=N+1}^{\infty}\frac{c_n}{n!}(x^\top\Gamma y)^n$, which has operator norm $O(c_N\varepsilon)$. Since the gap between $c_N\zeta_{N,i_1}(K), \dots, c_N\zeta_{N,i_{\dim V}}(K)$ and all other eigenvalues of $K$ is on the order of $\bar a_N\,\Theta(1)$, by Davis-Kahan, truncating the tail only perturbs the reducing subspace $V_K$ by an angle $O(\varepsilon)$.

The second step is an exact identity: under a matrix inverse, the eigenspaces are preserved, even though the eigenvalues are inverted.

In the third step, the inverted head is truncated. The inverted head has operator norm on the order of $O(\varepsilon/\bar a_N)$, while the remaining operator has operator norm on the order of $O(1/\bar a_N)$. Now, since the gap between $(c_N\zeta_{N,i_1}(K))^{-1}, \dots, (c_N\zeta_{N,i_{\dim V}}(K))^{-1}$ and all other inverted eigenvalues of $[\tilde K]^{-1}$ is on the order of $\frac{1}{\bar a_N}\Theta(1)$, truncating the inverted head only perturbs the reducing subspace by another angle $O(\varepsilon)$.

¹⁵ Even if they are $N$-linearly independent, the gaps between successive eigenvalues will get smaller for larger $N$ if $d \ge 3$. By the standard counting argument in Diophantine approximation theory, the gap decays at least as fast as $N^{-(d-2)}$.
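To close the section, a small numerical illustration of the Davis-Kahan steps above (synthetic operator and perturbation, our own numbers, using SciPy): a perturbation of norm ‖E‖ rotates a spectrally isolated eigenspace by an angle of order ‖E‖/gap:

```python
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(3)
A = np.diag([5.0, 5.0, 3.0, 2.0, 1.0])    # target eigenspace: first two coords
E = rng.normal(size=(5, 5))
E = 1e-3 * (E + E.T) / np.linalg.norm(E + E.T, 2)   # symmetric, norm 1e-3

w, U = np.linalg.eigh(A + E)              # eigenvalues in ascending order
V_pert = U[:, -2:]                        # perturbed top-2 eigenspace
V = np.eye(5)[:, :2]                      # unperturbed eigenspace
angle = subspace_angles(V, V_pert).max()
print(angle, "<~", 1e-3 / 2.0)            # gap between 5 and 3 is 2
```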
arXiv:2510.14872v1 [econ.TH] 16 Oct 2025

Strategic Behavior in Crowdfunding: Insights from a Large-Scale Online Experiment

Din Amir¹, Bar Hoter¹, and Moran Koren¹ [0000-0003-0012-0208]
Ben-Gurion University of the Negev, Department of Industrial Engineering and Management

Abstract. This study examines strategic behavior in crowdfunding using a large-scale online experiment. Building on the model of [4], we test predictions about risk aversion (i.e., opting out despite seeing a positive private signal) and mutual insurance (i.e., opting in despite seeing a negative private signal) in a static, single-shot crowdfunding game, focusing on informational incentives rather than dynamic effects. Our results validate key theoretical predictions: crowdfunding mechanisms induce distinct strategic behaviors compared to voting, where participants are more likely to follow private signals (odds ratio: 0.139, p < 0.001). Additionally, the study demonstrates that higher signal accuracy (85% vs. 55%) decreases risk aversion (odds ratio: 0.414, p = 0.024) but increases reliance on mutual insurance (odds ratio: 2.532, p = 0.026). However, contrary to theory, increasing the required participation threshold (50% to 80%) amplifies risk aversion (odds ratio: 3.251, p = 0.005), which, pending further investigation, may indicate cognitive constraints. Furthermore, we show that while mutual insurance supports participation, it may hinder information aggregation, particularly as signal accuracy increases. These findings advance crowdfunding theory by confirming the impact of informational incentives and identifying behavioral deviations that challenge standard models, offering insights for platform design and mechanism refinement.

Keywords: Crowdfunding · Mechanism Design · Threshold Mechanism · Information Aggregation · Experimental Economics

1 Introduction

Crowdfunding platforms — such as Kickstarter or GoFundMe — have become popular tools for funding creative, social, or commercial projects. In these platforms, people decide together whether to support a project, usually based on limited information about its true quality. The basic idea is simple: if enough people commit funds, the project gets financed. If not, no money is collected.

This collective funding approach holds a lot of promise: it gives people the power to support ideas they believe in, without relying on centralized gatekeepers. But it also raises an important question: how do people actually decide whether to contribute?

In this paper, we focus on a key challenge in crowdfunding: people must decide based on imperfect signals about the project's quality. Each person receives a private signal — for example, a hint that the project might succeed or fail — and then chooses whether to contribute. These signals are correct only with a certain probability, known as signal accuracy.

Ideally, people would simply follow their signal — contribute if the signal is positive, and avoid contributing if it is negative. But the crowdfunding mechanism introduces a twist: the project only succeeds if enough others also contribute. This threshold rule creates incentives to behave strategically:

- Some people might choose not to contribute even when they receive a positive signal, fearing that others will not join. This is called risk aversion.
- Others might contribute even after a negative signal, hoping that the crowd knows better. This is called mutual insurance — relying on the wisdom of the group.
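A toy Bayesian calculation (our own numbers, not from the study) shows why mutual insurance can be rational: a backer with a negative signal only pays when the threshold is met, so she can condition on that event, which pushes her posterior about the project back up:

```python
from math import comb

# Group of n = 5, threshold B = 3, signal accuracy p = 0.55,
# both states of the world equally likely a priori (illustrative values).
p, n, B, prior = 0.55, 5, 3, 0.5

def tail(prob, m, k):                  # Pr(Binomial(m, prob) >= k)
    return sum(comb(m, j) * prob**j * (1 - prob)**(m - j)
               for j in range(k, m + 1))

# I saw a negative signal:
post_low = (1 - p) * prior / ((1 - p) * prior + p * (1 - prior))   # = 0.45

# Suppose the other n-1 backers naively follow their signals. My payment only
# matters if at least B-1 of them also contribute, so condition on that event:
lik_good, lik_bad = tail(p, n - 1, B - 1), tail(1 - p, n - 1, B - 1)
post = post_low * lik_good / (post_low * lik_good + (1 - post_low) * lik_bad)

print(f"Pr(good | low signal)                = {post_low:.3f}")
print(f"Pr(good | low signal, threshold met) = {post:.3f}")  # the crowd pulls it up
```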
Our study aims to understand whether this kind of strategic behavior really occurs in practice, and how it compares to a simpler group decision-making mechanism: majority voting. In voting, everyone casts a vote based on their private signal, and the outcome is determined by the majority. Individual payoffs don’t depend on thresholds or others’ behavior — only on whether the group decision was correct. Prior research on crowdfunding has primarily focused on dynamic aspects: how backers respond to accumulating support over time, how campaign mo- mentum affects success, and how project creators can strategically manage their campaigns (see for example [19, 16, 11]). However, these dynamic elements may mask more fundamental strategic considerations that arise purely from the in- formational structure of crowdfunding mechanisms. While theoretically compelling, these pure informational effects have never been tested empirically. Our study provides the first experimental examination of these fundamental strategic behaviors by deliberately stripping away the dy- namic elements that characterize most crowdfunding research. Through a large-scale online experiment with 1,368 participants, we iso- late how individuals navigate the tension between private signals and collec- tive decision-making in a single-shot context. Specifically, we test two primary hypotheses: (1) participants are less likely to follow their private signals in crowd- funding mechanisms compared to voting, due to the strategic considerations in- troduced by thresholds, and (2) strategic behavior, rather than confusion about the mechanism, drives deviations from signal-following behavior, with higher signal accuracy decreasing risk aversion and increasing mutual insurance. Our design systematically varies key parameters—signal accuracy (55% vs. 85%), group size (5 vs. 25), and majority thresholds (50% vs. 80%)—to understand the drivers of strategic behavior in its purest form. Our findings confirm the existence of systematic differences between crowd- funding and simple voting mechanisms, validating the aforementioned theoreti- cal predictions. Most notably, participants are significantly less likely to follow their private signals in crowdfunding scenarios. When focusing on crowdfund- ing, the results on signal accuracy align with theoretical predictions: higher Title Suppressed Due to Excessive Length 3 accuracy reduces risk-averse behavior while increasing reliance on mutual in- surance, suggesting participants actively respond to the quality of their private information. However, some results challenge theoretical predictions, particu- larly around majority thresholds - where higher thresholds increase rather than decrease risk-averse behavior, suggesting cognitive limitations may interact with strategic considerations in ways current models don’t capture. Beyond individual strategic behavior, our study provides insights into how crowdfunding mechanisms aggregate information. Theory suggests that mutual insurance behavior, while individually rational, could impair the crowd’s ability to separate good projects from bad ones. We examine this prediction by com- paring information aggregation between voting and crowdfunding mechanisms under different conditions of signal accuracy, group size, and majority thresholds. These findings advance our understanding of collective funding mechanisms in three key ways. 
First, they provide empirical validation for core theoretical predictions about strategic behavior arising purely from informational consider- ations. Second, they reveal important deviations from theory that suggest direc- tions for future theoretical refinement, particularly around how cognitive limita- tions might interact with strategic decision-making. Third, they offer practical insights for platform design, particularly around threshold mechanisms and in- formation provision. The remainder of the paper is organized as follows. Section 2 reviews the literature on crowdfunding dynamics and situates our work within the broader research on collective decision-making mechanisms. Section 3 presents the theo- retical framework of [3]. Section 4 develops our hypotheses. Section 5 details the experimental design. Section 6 presents the results. Section 7 examines informa- tion aggregation effects. Section 8 discusses implications for theory and practice. Finally, Section 9 concludes with suggestions for future research. 2 Literature Review In recent years, the rise of online crowdfunding platforms has revolutionized the way projects are funded, enabling individuals worldwide to contribute to initia- tives they believe in. This development has not only reshaped the landscape of funding for various ventures but has also provided new opportunities for empir- ical research into the factors influencing crowdfunding success and participant engagement. Experimental design methodologies play a crucial role in this re- search, offering systematic insights into the complex dynamics of donor behavior and project outcomes. Our experimental framework builds upon foundational studies that have em- ployed experimental design to investigate the intricacies of crowdfunding mecha- nisms and their impact on project success and donor behavior. We draw inspira- tion from the provision point mechanism, as discussed by [6], which is commonly used on platforms like Kickstarter. This mechanism ensures that projects must reach or surpass a predetermined funding goal before any financial transactions are processed, mitigating risks for backers and aligning with the all-or-nothing 4 F. Author et al. funding principle. Understanding this mechanism is essential for comprehending engagement patterns and funding dynamics in crowdfunding. Central to our experimental design is the concept of collective purchases, where a group of potential contributors decide whether to financially support a common value good. As explored in existing theoretical work [2–4], the efficiency of such collective decisions is influenced by the information held by individual contributors and their strategic behavior. While the Condorcet Jury Theorem suggests that efficiency should prevail in sufficiently large populations in the absence of strategic behavior, these theoretical findings indicate that this may not always be the case due to the incentives for strategic behavior inherent in collective purchase scenarios. Our study also contributes to the growing literature examining the dynamics of crowdfunding platforms and their impact on project success. Researchers such as [14] and [7] have investigated the role of moral hazard and the efficiency of different crowdfunding models, while [12] and [13] have provided empirical insights into the patterns of backer behavior and the dichotomy of crowdfunding outcomes. 
Our experimental design incorporates these findings, focusing on the strategic considerations of agents in the final stages of a crowdfunding campaign when value-maximizing behavior is most prevalent. Furthermore, our experimental design considers the phenomenon of over- subscription in crowdfunding, where the total contributions pledged exceed the funding goal. This scenario is particularly relevant in the context of collective purchases, as it can influence the strategic behavior of potential contributors. [9] model the concept of information aggregation in voting mechanisms, accounting for various types of agents and their strategic behavior. They demonstrate that uninformed neutral agents, or swing voters, rationally vote to counteract known population biases, amplifying the influence of informed neutral agents’ votes and enabling information aggregation in large populations. [3]’s model extends this discussion by introducing the concept of oversubscription and the nuanced usage of agent information. In this setup, agents, while not fully informed, sometimes act against their own signals and rely on mutual insurance to prevent potential mistakes. This approach harnesses the power of collective wisdom in decision- making but can also hinder the full aggregation of information, even in large populations. [5] discuss a similar model incorporating mutual insurance into a dynamic settings. To simulate the investment and reward dynamics inherent in real-world crowdfunding, our study includes a mock currency payout system. Participants are allocated a certain amount of fictitious currency at the beginning of the experiment, which they can then invest in projects based on their beliefs and perceived value. This setup allows us to explore how potential profits or losses influence backer decision-making and engagement. The conceptualization of this component draws inspiration from the work of [18], who incorporated a similar feature in their crowdfunding experiment. By integrating these methodologies and building upon insights from exist- ing theoretical work and the broader crowdfunding literature, our experiment Title Suppressed Due to Excessive Length 5 aims to provide a comprehensive understanding of the variables that drive suc- cess and engagement within the crowdfunding ecosystem. Through the lens of experimental design, we seek to uncover the nuanced interplay between project characteristics, funding mechanisms, and donor behavior, contributing valuable insights to the rapidly evolving field of crowdfunding research. 3 Theoretical Framework Our theoretical foundations derive from [3] (henceforth AKS), who analyze a collective purchase mechanism where agents must decide whether to contribute to buying a good of uncertain value. We leverage AKS’s model of a sparse, single-shot crowdfunding with a binary signal structure to empirically investigate mutual insurance behavior. In this setting, no dynamic incentives emerge, with the only incentive being informational. In their model, n agents receive private binary signals (correct with proba- bility p > 0.5) about whether the good’s value is high (v = 1) or low (v = 0). The good is purchased only if the number of contributors exceeds a threshold B, in which case contributors pay price τ and receive the good. If the threshold isn’t met, no payments are made and no good is provided. 
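To make the mechanics concrete, here is a minimal Monte Carlo sketch of this one-shot game (the model is AKS's; the parameter names, the strategy parametrization, and the uniform prior over the good's value are our own illustrative choices):

```python
import numpy as np

# One-shot crowdfunding game: n agents, binary signals correct w.p. p,
# purchase happens iff at least B agents contribute, contributors pay tau.
def simulate(n=25, p=0.55, B=13, tau=0.5, sigma_h=1.0, sigma_l=0.0,
             trials=100_000, rng=np.random.default_rng(0)):
    v = rng.random(trials) < 0.5                    # true value high w.p. 1/2
    signals = rng.random((trials, n)) < p           # signal matches v w.p. p
    high = np.where(v[:, None], signals, ~signals)  # agent's signal is "high"?
    contrib = np.where(high,
                       rng.random((trials, n)) < sigma_h,   # mix after high
                       rng.random((trials, n)) < sigma_l)   # mix after low
    funded = contrib.sum(axis=1) >= B
    payoff = np.where(funded[:, None] & contrib, v[:, None] - tau, 0.0)
    correct = np.where(v, funded, ~funded)          # fund iff value is high
    return funded.mean(), correct.mean(), payoff.mean()

print(simulate())               # signal-following baseline
print(simulate(sigma_l=0.2))    # adding some mutual insurance
```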
Unlike voting mechanisms, this setup creates strategic considerations beyond information aggregation, since payoffs depend on both the collective outcome and individual contribution decisions.

A key early result establishes uniqueness of equilibrium behavior.¹ Let the trivial equilibrium be the strategy profile where all players contribute zero funds. AKS proved that, for the crowdfunding game, no more than one symmetric non-trivial Bayes-Nash equilibrium exists. Furthermore, when the population size is sufficiently large, exactly one symmetric non-trivial Bayes-Nash equilibrium exists:

Theorem 1 (AKS Theorem 1). No crowdfunding game has more than one symmetric non-trivial Bayes-Nash equilibrium. For any threshold ratio q ∈ (0, 1], when the population is sufficiently large, a unique symmetric non-trivial equilibrium exists.

While the theoretical model by Arieli et al. (2018) proves that a unique symmetric Bayes-Nash equilibrium emerges in large populations (n = 100–1000), their computational simulations suggest that similar equilibrium patterns also hold for smaller groups under moderate pricing and typical threshold values (e.g., 50% or 80%). We designed our experiment to replicate these conditions: prices are fixed (based on clickcoin incentives), and our thresholds and signal accuracies are aligned with the parameter regions where convergence to equilibrium was observed. Moreover, we selected group sizes of 5 and 25 to capture both small-group dynamics and intermediate-scale aggregation, while maintaining feasibility for controlled online experimentation.

The characterization of equilibrium behavior depends critically on the price level.² When prices are moderate (meaning P(v = 1|sᵢ = l) < τ < P(v = 1|sᵢ = h)), the equilibrium takes a specific form:

Theorem 2 (AKS Theorem 4). For any crowdfunding game with moderate price, there exists a unique symmetric non-trivial Bayes-Nash equilibrium σ* where agents with positive signals always contribute (σ*(h) = 1), while agents with negative signals mix with probability λ ∈ [0, 1).

For large populations, this mixing probability has a precise characterization:

Theorem 3 (AKS Lemma 1). For large populations and a moderate price, when the threshold ratio is q, the equilibrium mixing probability for low-signal agents is:
$$\lim_{n\to\infty} \sigma_n^*(l) = \begin{cases} 0 & \text{if } q \le 1-p, \\[4pt] \dfrac{q-(1-p)}{p} & \text{otherwise.} \end{cases}$$

As one can see from the preceding theorem, agents probabilistically opt in even when their signal is low. We call this a Mutual Insurance equilibrium. In the full characterization of AKS, when the price is high (i.e., above the range of moderate prices), depending on the threshold, the equilibrium can be one in which agents with a low signal surely opt out while those with high signals mix. We call this a Risk Aversion equilibrium.

Taking the frequentist interpretation of a mixed equilibrium, these theoretical predictions guide our experimental design and hypotheses. The precise characterization of equilibrium behavior allows us to test whether observed behavior aligns with strategic equilibrium play or better matches alternative explanations based on mechanism complexity.

¹ [2] presents an early version of the model where prices are held fixed; [3] extends the basic model to one where prior probability and price may vary. [4] extends the discussion to endogenous prices and welfare. [5] characterizes the equilibrium of a sequential version of the model in which mutual insurance and observational learning co-exist.

Performance Metrics.
The theory also characterizes two key performance measures. The "correctness index" measures how well the mechanism aggregates information, while the "participation index" captures market penetration. For moderate prices, these indices are:
$$\theta(\mu, p, \tau) = 1 - \frac{1-p}{p}\cdot\frac{1-\tau}{\tau}\cdot\mu, \qquad R(\mu, p, \tau) = \mu\Big(1 + \frac{1-p}{p}\cdot\frac{1-\tau}{\tau}\Big).$$
² See [3] for the full characterization.

Group Size Considerations. While the main theoretical results are asymptotic, the authors demonstrate through computational analysis that these results provide good approximations even for relatively small groups. Their calculations show that the theoretical predictions hold well for populations of 100–1000 participants, which they note aligns with empirical observations from real crowdfunding campaigns. This finding is particularly relevant for experimental work, as it suggests that strategic behavior patterns should be observable even in laboratory settings with modest group sizes. Moreover, they find that for some parameter combinations, the theoretical predictions are accurate even for very small groups (n ∈ {5, 10}), though this depends on the specific parameter values chosen.

4 Hypotheses Development

Our study tests two main hypotheses derived from the theoretical framework. To clarify, we define the key behavioral patterns we are testing in empirical, observable terms, so that they can be measured directly at the individual level. In our setting, each participant receives a private signal indicating whether a project is likely high or low quality. Based on this signal, they must choose whether to contribute to the project (in the crowdfunding scenario) or cast a vote (in the voting scenario). We define:

- Signal-following behavior: a participant acts in accordance with their private signal (e.g., votes for red after observing a red signal).
- Risk-averse behavior: a participant receives a positive signal (indicating support) but chooses not to contribute.
- Mutual insurance behavior: a participant receives a negative signal (indicating opposition) but chooses to contribute anyway.

These definitions allow us to track each participant's decision relative to their private signal and classify their behavior accordingly.

Hypothesis 1 (H1). Participants in the crowdfunding condition are less likely to follow their private signals compared to those in the voting condition.

This hypothesis captures whether the threshold-based incentive structure in crowdfunding leads to deviations from signal-following, relative to a simpler majority voting mechanism.

Hypothesis 2 (H2). If participants behave strategically (rather than out of confusion), we expect their deviations from signal-following to follow predictable patterns (a short numerical sketch of the equilibrium benchmark follows this list):

a) Among participants with positive signals ("type H"): higher signal accuracy and higher threshold requirements will reduce risk-averse behavior, as these conditions increase the expected value of contributing.
b) Among participants with negative signals ("type L"): higher signal accuracy will increase mutual insurance behavior, as individuals become more confident in the group's ability to correct their own error.
c) Group size will not affect strategic behavior, because according to theory, strategic choices depend on equilibrium beliefs that scale with population size. In other words, participants respond to their perceived probability of success, which remains stable across group sizes due to the aggregation effects described in the theoretical model.
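The comparative statics behind H2 can be read off Theorem 3 directly; the snippet below (function name ours) tabulates the limiting opt-in probability of low-signal agents:

```python
# AKS Theorem 3: sigma*(l) -> 0 if q <= 1-p, else (q - (1-p)) / p.
def sigma_low(q, p):
    return 0.0 if q <= 1 - p else (q - (1 - p)) / p

for p in (0.55, 0.85):
    row = [round(sigma_low(q, p), 3) for q in (0.3, 0.5, 0.8)]
    print(f"p = {p}: sigma*(l) at q = 0.3 / 0.5 / 0.8 -> {row}")
```

The formula is increasing in both q and p (for q > 1 - p): higher thresholds and more accurate signals both push low-signal agents toward opting in, which is the mutual-insurance pattern that H2b tests.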
Note that in our data, risk-averse and mutual insurance behavior are measured as binary outcomes based on the match between signal and action. These allow us to directly test the directional predictions derived from equilibrium analysis. We do not assume that participants are computing full Bayesian equilibria; rather, we use these theoretical benchmarks to detect structured deviations from naive signal-following behavior.

5 Experimental Design

This experimental study investigates group decision-making processes through a randomized factorial design comparing majority voting and crowdfunding models.

5.1 Participants

We recruited participants via Prolific, a widely used online platform for behavioral research. While Prolific users may differ from typical crowdfunding backers in demographics or motivations, their experience with digital tasks and economic games makes them suitable for controlled experimental settings. Our goal is not to replicate the exact population of real-world backers, but to isolate behavioral patterns under incentive-compatible conditions. In that respect, Prolific provides a high-quality, diverse, and reliable participant pool that supports both replication and random assignment across conditions. The sample size was determined using G*Power analysis [8] to ensure sufficient power for logistic regression. The study included a total of 1368 participants, all native English speakers (self-reported to Prolific), of whom 1200 participated in the main experiment and 168 participated in the pilot study. The average age was 39 years (median: 36, SD: 13.6). The sample consisted of 681 male participants, 516 female participants, and 171 participants identifying as other or unspecified gender. Participants had an average of 1047.3 previous Prolific approvals (median: 380.5, SD: 1557.3), suggesting a right-skewed distribution of Prolific approvals, with some highly experienced outliers pulling the mean up.

5.2 Procedure

Crowdfunding platforms operate mostly online; hence, using an online experimental platform naturally reflects real-world behavior and is ideal for studying collective behaviors in this context. Our experiment examines how different mechanisms and group configurations influence choices in uncertain environments.

The study employed a 2 (group size: 5 vs. 25) × 2 (signal volume: 55% vs. 85% ball ratio) × 3 (voting method: regular voting vs. crowdfunding [50% threshold] vs. crowdfunding [80% threshold]) between-subjects design. Participants were randomly assigned to groups of either 5 or 25 members. Each participant received an initial endowment of 420 clickcoins (a currency we created for this experiment, worth approximately 0.03£). Each group was presented with information about an urn containing a known proportion of red and blue balls, with the majority color making up either 55% or 85% of the balls, depending on the experimental condition. Each participant independently drew one ball from the urn and observed only their own draw, with no information about other group members' draws. This private signal formed the basis for their subsequent choices. The urn was either 55% or 85% one color — a parameter we refer to as signal accuracy; it reflects the reliability of the private signal each participant receives. In the voting condition, group decisions are determined by simple majority (i.e., 50%).
We excluded higher-threshold voting rules (e.g., 80%) due to interpretational ambiguity: if the threshold is not met, there is no canonical outcome for the group decision (accept or reject). In contrast, in the crowdfunding mechanism, failure to meet the threshold naturally results in project failure. Therefore, our design includes voting with a 50% threshold only, and crowdfunding with both 50% and 80% thresholds.

We carefully designed the payment structure to maintain equal expected returns across all experimental conditions, ensuring that any behavioral differences could be attributed to the mechanism rather than financial incentives. In the regular voting condition, participants voted on the urn's majority color, with each group member receiving 84 clickcoins for correct decisions and losing 84 clickcoins for incorrect choices. In the crowdfunding conditions, participants faced a more nuanced choice: they decided whether to invest in an outcome based on a predetermined color assigned to their group. Each participant could invest 84 clickcoins to bet that this assigned color was the urn's majority. If the group met its participation threshold (either 50% or 80% of members investing) and the predetermined color matched the urn's majority, investing participants received 168 clickcoins in return. However, if the group reached the participation threshold but the predetermined color did not match the majority, investing participants lost their 84-clickcoin investment. Participants who chose not to invest retained their initial endowment regardless of the outcome. This reward structure created expected returns equivalent to the voting condition.

Due to the undefined nature of regular voting with an 80% threshold (as there is no clear rule when fewer than 80% of members vote), the analysis focuses on two primary comparisons: regular voting versus crowdfunding [50% threshold], and crowdfunding [50% threshold] versus crowdfunding [80% threshold]. These comparisons allow us to isolate the effects of the mechanism and participation threshold separately. Throughout the experiment, all individual decisions remained private, and while participants knew their group size, they had no communication with other group members. This design choice ensures that any observed effects stem from the institutional structure rather than social influence or coordination. The research team aggregated group decisions after the experiment concluded, determining outcomes and calculating payments based on the predetermined rules for each condition.

A preliminary comprehension test ensured participants understood the experimental principles before the main decision-making task.

5.3 Data processing

Participants (N = 1,200) were randomly assigned across the twelve experimental conditions, with approximately 100 participants per condition. The average completion time was 2 minutes and 57 seconds (the experiment involved only a single binary decision based on a clear signal, making it feasible to complete thoughtfully within this timeframe), translating to an average hourly rate of £25.60. To ensure data quality, we excluded participants who either failed attention checks or completed the experiment unusually quickly — within the bottom 10% of completion times (i.e., under 53 seconds).
This led to the exclusion of 119 participants (9.92% of the initial sample), resulting in a final sample of 1081 participants.³

6 Results

Three logistic regression models were employed to analyze different aspects of participant decision-making. The first model evaluated how structural factors — specifically group size, signal volume, and voting method — influenced participants' likelihood of choosing in alignment with their private signals (IsTrue). The second model focused specifically on crowdfunding conditions and examined how group size, signal volume, and relative majority percentage affected risk-averse behavior (RA). This analysis concentrated on instances where participants had received private signals supporting participation, allowing for precise measurement of risk-averse decision-making. The third model also examined crowdfunding conditions but investigated mutual insurance behavior (MutIns). It analyzed how the same factors — group size, signal volume, and relative majority percentage — influenced decisions when participants had received private signals that did not support participation. This approach enabled us to isolate and measure choices driven by mutual insurance considerations.

All logistic regression models were evaluated using odds ratios, coefficients, standard errors, and p-values. Likelihood ratio tests were conducted to compare the fit of each model to a null model without predictors. The significance level was set at α = .05 for all analyses.

³ Our results remained consistent across multiple time thresholds (50, 60, and 75 seconds) and aligned with pilot study findings, demonstrating the robustness of our analysis to time cutoff selection.

Table 1. Logistic Regression Model Predicting Voting According to Signal (IsTrue)

  Predictor      Coefficient  Odds Ratio  Lower 95% CI  Upper 95% CI  p-value
  (Intercept)          1.941       6.969         4.536        10.689  < 0.001
  Crowdfunding        -1.975       0.139         0.092         0.209  < 0.001
  BallRatio85          0.091       1.095         0.754         1.590    0.633
  GroupSize25         -0.051       0.950         0.655         1.380    0.789

  Note: N = 613.

We performed a logistic regression analysis to investigate the influence of Scenario, BallRatio, and GroupSize on the likelihood of acting according to one's private signal (IsTrue). The binary outcome variable "IsTrue" was modeled based on 3 predictors. The model analyzed 610 observations and revealed the following results:

– (Intercept): The intercept was statistically significant (p < 0.001) with an odds ratio of 6.969 (95% CI: 4.536–10.689). This represents the baseline odds of the outcome when all predictor variables are zero; the corresponding baseline probability of the outcome is 87.4%.
– ScenarioCrowdfunding was statistically significant with an odds ratio of 0.139 (p < 0.001). This indicates that moving from the reference level (Voting) to this level (Crowdfunding) decreased the likelihood of the outcome by 86.1% (95% CI: 0.092–0.209). This corresponds to a change in P(Y=1) from 0.872 to 0.490 (a change of -38.2 percentage points).
– BallRatio85 was not statistically significant, with an odds ratio of 1.095 (p = 0.633).
– GroupSize25 was not statistically significant, with an odds ratio of 0.950 (p = 0.789).

The likelihood ratio test showed that the model provided a significantly better fit than an intercept-only model (χ² = 106.08, df = 3, p < 0.001).
Table 2. Logistic Regression Model Predicting Risk Aversion (RA)

  Predictor                     Coefficient  Odds Ratio  Lower 95% CI  Upper 95% CI  p-value
  (Intercept)                       -3.3490       0.035         0.015         0.080  < 0.001
  GroupSize25                        0.1509       1.163         0.572         2.365    0.677
  BallRatio85                       -0.8814       0.414         0.193         0.889    0.024
  RelativeMajorityPercentage80       1.1789       3.251         1.440         7.346    0.005

  Note: N = 628.

Next, we performed a logistic regression analysis to investigate the influence of GroupSize, BallRatio, and RelativeMajorityPercentage on the likelihood of risk aversion (RA). The binary outcome variable "RA" was modeled based on 3 predictors. The model analyzed 628 observations and revealed the following results:

– (Intercept): The intercept was statistically significant (p < 0.001) with an odds ratio of 0.035 (95% CI: 0.015–0.080). This represents the baseline odds of the outcome when all predictor variables are zero; the corresponding baseline probability of the outcome is 3.4%.
– GroupSize25 was not statistically significant, with an odds ratio of 1.163 (p = 0.677).
– BallRatio85 was statistically significant, with an odds ratio of 0.414 (p = 0.024). This indicates that moving from the reference level to this level of BallRatio decreased the likelihood of the outcome by 58.6% (95% CI: 0.193–0.889). This corresponds to a change in P(Y=1) from 0.034 to 0.015 (a drop of 1.9 percentage points, or a relative decrease of 55.9% in probability).
– RelativeMajorityPercentage80 was statistically significant, with an odds ratio of 3.251 (p = 0.005). This indicates that moving from the reference level to this level of RelativeMajorityPercentage increased the likelihood of the outcome by 225.1% (95% CI: 1.440–7.346). This corresponds to a change in P(Y=1) from 0.034 to 0.102 (a change of 6.8 percentage points).

The likelihood ratio test showed that the model provided a significantly better fit than an intercept-only model (χ² = 15.38, df = 3, p = 0.001).

Table 3. Logistic Regression Model Predicting Mutual Insurance (MutIns)

  Predictor                     Coefficient  Odds Ratio  Lower 95% CI  Upper 95% CI  p-value
  (Intercept)                         1.910       6.758         3.437        13.849  < 0.001
  GroupSize25                        -0.021       0.979         0.453         2.115    0.957
  BallRatio85                         0.929       2.532         1.115         5.750    0.026
  RelativeMajorityPercentage80        0.064       1.066         0.494         2.303    0.871

  Note: N = 324.

Finally, we performed a logistic regression analysis to investigate the influence of BallRatio, GroupSize, and RelativeMajorityPercentage on the likelihood of mutual insurance (MutIns). The binary outcome variable "MutIns" was modeled based on 3 predictors. The model analyzed 324 observations and revealed the following results:

– (Intercept) was statistically significant (p < 0.001) with an odds ratio of 6.758 (95% CI: 3.437–13.849). This represents the baseline odds of the outcome when all predictor variables are zero; the corresponding baseline probability of the outcome is 87.1%.
– BallRatio85 was statistically significant, with an odds ratio of 2.532 (p = 0.026). This indicates that moving from the reference level to this level of BallRatio increased the likelihood of the outcome by 150.0% (95% CI: 1.115–5.750). This corresponds to a change in P(Y=1) from 0.871 to 0.944 (a change of 7.3 percentage points).
– GroupSize25 was not statistically significant, with an odds ratio of 0.979 (p = 0.957).
– RelativeMajorityPercentage80 was not statistically significant, with an odds ratio of 1.066 (p = 0.871).
The likelihood ratio test did not show a significantly better fit than an intercept-only model (χ² = 5.34, df = 3, p = 0.148).

6.1 Key Findings and Interpretations

Our experimental results provide strong support for both hypotheses. The significant negative effect of collective purchase on signal-following behavior (coefficient -1.975, p < 0.001) confirms Hypothesis 1, demonstrating that participants are indeed less likely to follow their private signals in crowdfunding scenarios compared to voting.

The analysis of risk-averse and mutual insurance behavior supports Hypothesis 2's predictions about strategic rather than complexity-driven behavior. For type H agents, higher signal accuracy significantly reduces risk-averse behavior (coefficient -0.8814, p = 0.024), as predicted. For type L agents, higher signal accuracy increases mutual insurance behavior (coefficient 0.929, p = 0.026), aligning with the theoretical predictions. Group size shows no significant effect in either analysis, supporting our prediction that participants' responses track the completion probability in each possible state; theory tells us that these probability ratios remain fixed across population sizes due to strategic behavior.

The differential response to mechanism parameters — particularly the significant effects of signal accuracy in theoretically predicted directions while group size remains insignificant — provides compelling evidence that participants' behavior reflects strategic considerations rather than mechanism complexity. These results demonstrate that crowdfunding mechanisms induce systematic deviations from signal-following behavior through strategic channels rather than confusion about the mechanism itself.

One area in which our experimental results diverge from theoretical predictions is the required participation threshold. We predicted that an increase in the threshold would lead to a decrease in risk aversion and an increase in mutual insurance. Our experimental results showed no significant relation between the threshold and participants' tendency to rely on mutual insurance, and an observed increase in risk aversion when the required threshold is raised. We suspect this arises from participant confusion, as deciding under a supermajority rule likely imposes a greater cognitive load than under a regular majority. We discuss this further in Section 8.

7 Information Aggregation

A central question in crowdfunding research is how effectively the mechanism can separate "the wheat from the chaff." Our experimental findings provide insight into this issue. We observe both Risk Aversion and Mutual Insurance, suggesting that while some participants perceive the lottery card as expensive, others perceive it as either cheap or moderately priced. To apply our experimental findings, we adopt a frequentist interpretation of a mixed equilibrium and posit that our participant pool is divided into two groups: one perceiving the lottery price as expensive, and another viewing it as moderate. Let ρ denote the proportion of subjects who perceive the lottery as expensive. We show our calculation for the baseline level (i.e., group size 5, threshold at 50%, and signal accuracy of 55%).

Under the assumption of a mixed equilibrium, those who find the lottery expensive and receive a low signal choose to opt out, whereas those with a high signal mix between opting out and following their signal. At the baseline, we observe that ψ := 3.4% of the H-typed subjects opt out.
Meanwhile, subjects who find the lottery moderately priced would opt in if they receive a high signal, but mix if they receive a low signal; at the baseline, λ := 87.1% of the L-typed subjects opt in despite having a low signal. Let φ denote the proportion of subjects who opted in. We can derive ρ from the following equality:
$$\varphi = \rho\,\Pr(a = y \mid \text{price is expensive}) + (1-\rho)\,\Pr(a = y \mid \text{price is moderate}) = \rho\,\Pr(s = h)\,\psi + (1-\rho)\big(\Pr(s = h) + \Pr(s = l)\,\lambda\big).$$
Since Pr(ω = H) = 0.5, the unconditional signal probabilities are Pr(s = h) = Pr(s = l) = 0.5. We can therefore calculate
$$\rho = \frac{1 + \lambda - 2\varphi}{1 + \lambda - \psi}.$$
In our baseline scenario, φ = 71/81. Hence, the proportion of subjects who perceive the lottery as expensive is approximately 0.0643. Next, we can calculate the probability of seeing the opt-in action given the state of the world:
$$\varphi_H := \Pr(a = y \mid \omega = G) = (1-\rho)\big(q\cdot 1 + (1-q)\,\lambda\big) + \rho\big(q\,\psi + (1-q)\cdot 0\big) = 0.882,$$
$$\varphi_L := \Pr(a = y \mid \omega = B) = (1-\rho)\big((1-q)\cdot 1 + q\,\lambda\big) + \rho\big((1-q)\,\psi + q\cdot 0\big) = 0.870.$$
Recalling that the probability of completion follows a binomial distribution, we can calculate the expected correctness of the crowdfunding mechanism, and find that θ_CF ≈ 0.502. When examining the correctness of a voting mechanism (i.e., where agents are incentivized to follow their signal), we calculate θ_Voting ≈ 0.593. When repeating the calculation for a signal accuracy of 85%, we see that θ_CF ≈ 0.502, while for the voting mechanism we calculate θ_Voting ≈ 0.973 — suggesting that the loss of accuracy due to over-participation in crowdfunding increases dramatically as the accuracy of the private signal improves.

If we take the empirical findings at face value and note that only 87.4% of participants in the voting scenario followed their signals, then we arrive at $\theta^{0.55}_{\mathrm{Voting}} \approx 0.569$ and $\theta^{0.85}_{\mathrm{Voting}} \approx 0.907$. These results suggest that, when signal accuracy is low, crowdfunding empirically falls only slightly below the voting scenario. However, as signal accuracy improves, voting yields much higher correctness.
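These baseline numbers can be reproduced directly from the formulas above (helper names are ours; the binomial completion model is as described):

```python
from math import comb

# Baseline: signal accuracy q = 0.55, group of n = 5, 50% threshold (B = 3).
q, n, B = 0.55, 5, 3
psi, lam, phi = 0.034, 0.871, 71 / 81      # from the regressions above

rho = (1 + lam - 2 * phi) / (1 + lam - psi)        # ~0.064
phi_H = (1 - rho) * (q + (1 - q) * lam) + rho * q * psi          # ~0.882
phi_L = (1 - rho) * ((1 - q) + q * lam) + rho * (1 - q) * psi    # ~0.870

def tail(prob, m, k):                      # Pr(Binomial(m, prob) >= k)
    return sum(comb(m, j) * prob**j * (1 - prob)**(m - j)
               for j in range(k, m + 1))

# Correctness: fund in state G, fail to fund in state B (states equally likely).
theta_cf = 0.5 * tail(phi_H, n, B) + 0.5 * (1 - tail(phi_L, n, B))
theta_vote = 0.5 * tail(q, n, B) + 0.5 * (1 - tail(1 - q, n, B))
print(round(rho, 4), round(theta_cf, 3), round(theta_vote, 3))   # 0.0643 0.502 0.593
```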
7.1 Experimental Observations

In our experiment, each condition included 100 participants, organized into either twenty groups of five or four groups of twenty-five. Although these sample sizes fall below the threshold for statistical significance, they still provide a benchmark for evaluating the validity of the preceding theoretical discussion. In fact, a crowdfunding accuracy of around 50% appears to align quite closely with the performance of the various groups observed in the experiment.

In Table 4 we present the actual correctness we saw in each of the experiment's conditions. Notably, the Voting scenario resulted in much higher correctness than the Crowdfunding scenario. As expected, this difference became more pronounced when signal accuracy increased.

Table 4. Correctness by Ball Ratio, Group Size, and Scenario Type

  Ball Ratio (%)  Group Size  Number of Groups  Voting (50%)  Crowdfunding (50%)  Crowdfunding (80%)
  55              5           20                75.00         40.00               50.00
  55              25          4                 75.00         75.00               25.00
  85              5           20                95.00         50.00               50.00
  85              25          4                 100.00        50.00               50.00

To support our conjecture that the gap was due to oversubscription, in Table 5 we present the percentage of correct group decisions made in each condition. We provide separate data for opting in when ω = G and opting out when ω = B.

Table 5. Information Aggregation in Crowdfunding

  Signal        Majority        Group  G | P(a=y) ≥ T  P(a=y) ≥ T | G  P(a=y) < T | B
  Accuracy (%)  Percentage (%)  Size   (%)             (%)             (%)
  55            50              5      40              100.00          0.00
  55            50              25     75              100.00          0.00
  55            80              5      58.83           76.92           0.00
  55            80              25     0               0.00            33.33
  85            50              5      50              100.00          0.00
  85            50              25     50              100.00          0.00
  85            80              5      52.63           90.91           0.00
  85            80              25     50              100.00          0.00

  T denotes the threshold, G the high-value state, and B the low-value state.

In Table 5 one can observe that, across all conditions, for groups where sufficient participation was reached, fewer than 60% arrived at the correct decision. When ω = G, over 75% of the groups met the required threshold. However, increasing the threshold to 80% appeared to reduce the probability of successful completion as the population size increased, both when ω = B and when ω = G.

In other words, although many participants opted in, this did not necessarily translate into more ex-post correct group outcomes. Moreover, when the threshold was increased to 80%, the probability of actually reaching that threshold decreased (especially in larger groups), but even when groups did meet it, they were not substantially more likely to be correct. Taken together, these patterns suggest that over-subscription can undermine the collective decision process, leading to a large number of groups meeting their thresholds yet failing to choose correctly.

8 Discussion

Our analysis revealed several key findings about strategic behavior in crowdfunding mechanisms. These findings both confirm existing theoretical predictions ([2, 3, 1, 13]) and challenge conventional models in important ways, highlighting areas for further research and practical application.

Systematic Deviations from Signal-Following Behavior: One of the most significant results of this study is the systematic deviation from signal-following behavior in crowdfunding scenarios compared to voting mechanisms. As predicted, participants in crowdfunding were less likely to follow their private signals, reflecting the additional strategic considerations introduced by threshold mechanisms. This validates the theoretical claim that crowdfunding mechanisms create unique tensions between individual incentives and collective outcomes. However, the observed behavior also reveals important deviations that cannot be fully explained by standard equilibrium models.

Mutual Insurance and Strategic Behavior: The increase in mutual insurance behavior under conditions of higher signal accuracy aligns with theoretical predictions and highlights the strategic nature of decision-making in crowdfunding. Participants with negative private signals contributed despite their signals, relying on the perceived collective wisdom of the group. This behavior underscores how individuals balance their private information against the aggregated decisions of the crowd, especially under conditions of uncertainty. While mutual insurance reflects rational strategic behavior, it also raises questions about how such behavior might impair information aggregation, as overreliance on collective decisions can dilute the impact of accurate private signals.

Unexpected Threshold Effects: Contrary to theoretical predictions, higher majority thresholds increased risk-averse behavior among participants. Traditional models suggest that higher thresholds should reduce risk aversion by providing greater protection against adverse outcomes. However, our findings indicate the opposite — participants became more cautious under higher thresholds. This divergence can be understood through the lens of prospect theory, a behavioral framework developed by Kahneman and Tversky (see [10] for a recent review). Prospect theory posits that individuals overweight small probabilities and exhibit loss aversion, where losses loom larger than equivalent gains.
This divergence can be understood through the lens of prospect theory, a behavioral framework developed by Khaneman and Tversky (see [10] for a recent review). Prospect theory posits that individuals overweight small probabilities and exhibit loss aversion, where losses loom larger than equivalent gains. In the Title Suppressed Due to Excessive Length 17 context of crowdfunding, crossing a higher threshold might be perceived as a low-probability event, leading participants to overestimate the risk of a wrong aggregate decision, thus avoid contributing to minimize potential losses. Addi- tionally, the framing of a higher threshold may create a psychological barrier, where participants perceive the task as more daunting and act conservatively (see [17]). Limitations of Current Models: Our results highlight critical gaps in ex- isting theoretical models of crowdfunding. While the models successfully predict certain behaviors, such as mutual insurance and the effects of signal accuracy, they fail to capture the cognitive and psychological complexities introduced by higher thresholds. For instance, the increased cognitive load associated with supermajority rules may interact with strategic considerations in ways that tra- ditional models do not account for (See [15]). Incorporating behavioral factors, such as loss aversion, framing effects, and bounded rationality, could refine these models and improve their predictive power. Additionally, the omission of pricing mechanisms in this study reflects a delib- erate design choice aimed at achieving scale in a large-scale online experiment. Incorporating monetary incentives or pricing structures would have been pro- hibitively expensive and logistically challenging in this setting. Future research should address this limitation by exploring how pricing strategies interact with the behavioral dynamics observed in this study, particularly in environments where economic trade-offs significantly impact participation. Implications for Platform Design: From a practical perspective, our find- ings suggest that crowdfunding platforms should carefully consider the design of threshold mechanisms. While higher thresholds may intuitively seem to improve project quality by increasing the commitment required, they may inadvertently discourage participation due to heightened perceptions of risk. Platforms could address this by providing clearer explanations of thresholds, reducing perceived complexity, or experimenting with alternative mechanisms that balance individ- ual and collective incentives more effectively. Bridging Theory and Real-World Dynamics: Although our controlled experimental setup allowed us to isolate pure informational effects, real-world crowdfunding campaigns involve dynamic elements such as time pressure, social proof, and iterative updates. Future research should explore how these factors interact with the strategic behaviors identified in this study. For example, how do dynamic updates influence risk-averse and mutual insurance behaviors over time? Can social proof mitigate the psychological barriers introduced by higher thresholds? Addressing these questions will enhance the applicability of our find- ings to real-world scenarios. 9 Conclusions This study demonstrates that strategic behavior in crowdfunding exists even when isolating pure informational incentives from the dynamic elements typi- cally studied in crowdfunding research. Through a controlled experimental envi- 18 F. Author et al. 
ronment with 1,368 participants, we show how individuals navigate the tension between private signals and collective decision-making in a single-shot context. Our findings significantly advance crowdfunding theory in two ways. First, we provide empirical validation that strategic behavior emerges even in simple, single-shot scenarios. Second, we identify important limitations in current theo- retical models, particularly in their treatment of threshold effects. The observed increase in risk-averse behavior under higher thresholds suggests that cognitive factors play a more significant role than previously recognized. The study also provides critical insights into information aggregation in crowdfunding mechanisms. Our results show that when signal accuracy is low, crowdfunding may actually outperform simple voting in terms of information aggregation. However, this advantage disappears and even reverses as signal ac- curacy increases, with the performance gap becoming particularly pronounced with highly accurate signals. This finding has important implications for under- standing when crowdfunding mechanisms are most effective at separating good projects from bad ones. For platform design, our results demonstrate the importance of carefully considering threshold mechanisms and their effects on participant behavior. The strong relationship between thresholds and risk-averse behavior suggests that the standard tools used by platforms to ensure project quality may have unintended consequences on participant decision-making. Future research should explore how these fundamental strategic considera- tions and information aggregation properties interact with the dynamic elements of real crowdfunding campaigns. Particularly promising directions include exam- ining how social proof and project updates influence risk-averse and mutual in- surance behaviors, and investigating whether alternative threshold mechanisms could better align individual and collective interests. References 1. Alaei, S., Malekian, A., Mostagir, M.: A Dynamic Model of Crowdfunding. In: Proceedings of the 2016 ACM Conference on Economics and Computation. ACM, Maastricht, The Netherlands (2016). https://doi.org/10.2139/ssrn.2737748 2. Arieli, I., Koren, M., Smorodinsky, R.: The crowdfunding game. In: R. Devanur N., Lu P. (eds) Web and Internet Economics. WINE 2017. Lecture Notes in Computer Science, vol. 10660, pp. 398–399. Springer, Cham (2017) 3. Arieli, I., Koren, M., Smorodinsky, R.: The one-shot crowdfunding game. In: Pro- ceedings of the 2018 ACM Conference on Economics and Computation (EC ’18). pp. 213–214 (2018). https://doi.org/10.1145/3219166.3219215 4. Arieli, I., Koren, M., Smorodinsky, R.: Information aggregation in large collective purchases. Economic Theory pp. 1–51 (2023) 5. Ban, A., Koren, M.: Sequential Fundraising and Social Insurance. In: Proceed- ings of the 21st ACM Conference on Economics and Computation. pp. 45–46. EC ’20, Association for Computing Machinery, New York, NY, USA (Jul 2020). https://doi.org/10.1145/3391403.3399479 6. Burtch, G., Hong, Y., Liu, D.: The role of provision points in online crowdfunding. Journal of Management Information Systems 35(1), 117–144 (2018) Title Suppressed Due to Excessive Length 19 7. Chemla, G., Tinn, K.: Learning through crowdfunding. Management Science 66(5), 1783–1801 (2020) 8. Faul, F., Erdfelder, E., Lang, A.G., Buchner, A.: G* power 3: A flexible statis- tical power analysis program for the social, behavioral, and biomedical sciences. 
Behavior research methods 39(2), 175–191 (2007) 9. Feddersen, T.J., Pesendorfer, W.: The swing voter’s curse. The American Economic Review pp. 408–424 (1996) 10. Kahneman, D., Tversky, A.: Prospect theory: An analysis of decision under risk. In: Handbook of the fundamentals of financial decision making: Part I, pp. 99–127. World Scientific (2013) 11. Koning, R., Model, J.: Experimental Study of Crowdfunding Cas- cades: When Nothing is Better than Something (Oct 2013). https://doi.org/10.2139/ssrn.2308161, https://papers.ssrn.com/abstract=2308161 12. Kuppuswamy, V., Bayus, B.L.: Does my contribution to your crowdfunding project matter? Journal of Business Venturing 32(1), 72–89 (2017) 13. Mollick, E.: The dynamics of crowdfunding: An exploratory study. Journal of Busi- ness Venturing 29(1), 1–16 (2014) 14. Strausz, R.: A theory of crowdfunding: A mechanism design approach with de- mand uncertainty and moral hazard. American Economic Review 107(6), 1430– 1476 (2017) 15. Sweller, J.: Cognitive load theory. In: Psychology of learning and motivation, vol. 55, pp. 37–76. Elsevier (2011) 16. Teunenbroek, C.v., Bekkers, R.: Follow the crowd: Social information and crowdfunding donations in a large field experiment. Journal of Behavioral Public Administration 3(1) (Mar 2020). https://doi.org/10.30636/jbpa.31.87, https://www.journal-bpa.org/index.php/jbpa/article/view/87, number: 1 17. Tversky, A., Kahneman, D.: The framing of decisions and the psychology of choice. Science 211(4481), 453–458 (1981) 18. Weinmann, M., Mishra, A.N., Kaiser, L.F., vom Brocke, J.: The attraction effect in crowdfunding. Information Systems Research 34(3), 1276–1295 (2023) 19. Zaggl, M.A., Block, J.: Do small funding amounts lead to reverse herding? A field experiment in reward-based crowdfunding. Journal of Business Ventur- ing Insights 12, e00139 (Nov 2019). https://doi.org/10.1016/j.jbvi.2019.e00139, https://www.sciencedirect.com/science/article/pii/S2352673419300472
Strategic Behavior in Crowdfunding: Insights from a Large-Scale Online Experiment

Din Amir1, Bar Hoter1, and Moran Koren1[0000-0003-0012-0208]
Ben-Gurion University of the Negev, Israel

This study examines strategic behavior in crowdfunding using a large-scale online experiment. Building on the model of [4], we test predictions about risk aversion (i.e., opting out despite seeing a positive private signal) and mutual insurance (i.e., opting in despite seeing a negative private signal) in a static, single-shot crowdfunding game, focusing on informational incentives rather than dynamic effects. Our results validate key theoretical predictions: crowdfunding mechanisms induce distinct strategic behaviors compared to voting, where participants are more likely to follow private signals (odds ratio: 0.139, p < 0.001).

In the underlying model, each agent receives a private signal (with accuracy p > 0.5) about whether the good's value is high (v = 1) or low (v = 0). The good is purchased only if the number of contributors exceeds a threshold B, in which case contributors pay price τ and receive the good. If the threshold isn't met, no payments are made and no good is provided. Unlike voting mechanisms, this setup creates strategic considerations beyond information aggregation, since payoffs depend on both the collective outcome and individual contribution decisions.

A key early result establishes uniqueness of equilibrium behavior.¹ Let the trivial equilibrium be the strategy profile where all players contribute zero funds. Arieli, Koren, and Smorodinsky (henceforth AKS) proved that for the crowdfunding game, no more than one symmetric non-trivial Bayes-Nash equilibrium exists. Furthermore, when the population size is sufficiently large, exactly one symmetric non-trivial Bayes-Nash equilibrium exists:

Theorem 1 (AKS Theorem 1). No crowdfunding game has more than one symmetric non-trivial Bayes-Nash equilibrium. For any threshold ratio q ∈ (0, 1], when the population is sufficiently large, a unique symmetric non-trivial equilibrium exists.

While the theoretical model by Arieli et al. (2018) proves that a unique symmetric Bayes-Nash equilibrium emerges in large populations (n = 100-1000), their computational simulations suggest that similar equilibrium patterns also hold for smaller groups under moderate pricing and typical threshold values (e.g., 50% or 80%). We designed our experiment to replicate these conditions: prices are fixed (based on clickcoin incentives), and our thresholds and signal accuracies are aligned with the parameter regions where convergence to equilibrium was observed. Moreover, we selected group sizes of 5 and 25 to capture both small-group dynamics and intermediate-scale aggregation, while maintaining feasibility for controlled online experimentation.

¹ [2] presents an early version of the model where prices are held fixed; [3] extends the basic model to one where prior probability and price may vary; [4] extends the discussion to endogenous prices and welfare; [5] characterizes the equilibrium of a sequential version of the model in which mutual insurance and observational learning co-exist.

The characterization of equilibrium behavior depends critically on the price level.² When prices are moderate (meaning P(v = 1 | s_i = l) < τ < P(v = 1 | s_i = h)), the equilibrium takes a specific form:

Theorem 2 (AKS Theorem 4). For any crowdfunding game with moderate price, there exists a unique symmetric non-trivial Bayes-Nash equilibrium σ* where agents with positive signals always contribute (σ*(h) = 1) while agents with negative signals mix with probability λ ∈ [0, 1).

For large populations, this mixing probability has a precise characterization:

Theorem 3 (AKS Lemma 1). For large populations and a moderate price, when the threshold ratio is q, the equilibrium mixing probability for low-signal agents is:

lim_{n→∞} σ*_n(l) = 0 if q ≤ 1 − p, and lim_{n→∞} σ*_n(l) = (q − (1 − p)) / p otherwise.

As one can see from the preceding theorem, agents probabilistically opt in even when their signal is low. We call this a Mutual Insurance equilibrium. In the full characterization of AKS, when the price is high (i.e., above the range of moderate prices), depending on the threshold, the equilibrium can be one in which agents with a low signal surely opt out while those with high signals mix. We call this a Risk Aversion equilibrium.
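To make Theorem 3 concrete, the following sketch (ours, not the authors'; the function name and parameter grid are illustrative) evaluates the large-population mixing probability at the threshold ratios and signal accuracies used in the experimental conditions below.

```python
def limit_mixing_probability(q: float, p: float) -> float:
    """Large-population mixing probability of low-signal agents under a
    moderate price (Theorem 3 / AKS Lemma 1): 0 when q <= 1 - p,
    otherwise (q - (1 - p)) / p."""
    if q <= 1 - p:
        return 0.0
    return (q - (1 - p)) / p

# Threshold ratios (50%, 80%) and signal accuracies (55%, 85%) as in the experiment.
for q in (0.50, 0.80):
    for p in (0.55, 0.85):
        print(f"q={q:.2f}, p={p:.2f} -> sigma*(l) = {limit_mixing_probability(q, p):.3f}")
```

Under this reading, low-signal agents mix with strictly positive probability in all four experimental cells, and the mixing probability rises with the threshold ratio, which is the sense in which theory predicts more mutual insurance (and less risk aversion) at higher thresholds.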
Taking the frequentist interpretation of a mixed equilibrium, these theoretical predictions guide our experimental design and hypotheses. The precise characterization of equilibrium behavior allows us to test whether observed behavior aligns with strategic equilibrium play or better matches alternative explanations based on mechanism complexity.

Performance Metrics. The theory also characterizes two key performance measures. The "correctness index" measures how well the mechanism aggregates information, while the "participation index" captures market penetration. For moderate prices, these indices are:

θ(μ, p, τ) = 1 − ((1 − p)/p) · ((1 − τ)/τ) · μ,    R(μ, p, τ) = μ (1 + ((1 − p)/p) · ((1 − τ)/τ)).

² See [3] for the full characterization.

Group Size Considerations. While the main theoretical results are asymptotic, the authors demonstrate through computational analysis that these results provide good approximations even for relatively small groups. Their calculations show that the theoretical predictions hold well for populations of 100-1000 participants, which they note aligns with empirical observations from real crowdfunding campaigns. This finding is particularly relevant for experimental work, as it suggests that strategic behavior patterns should be observable even in laboratory settings with modest group sizes. Moreover, they find that for some parameter combinations, the theoretical predictions are accurate even for very small groups (n ∈ {5, 10}), though this depends on the specific parameter values chosen.

4 Hypotheses Development

Our study tests two main hypotheses derived from the theoretical framework. To clarify, we define the key behavioral patterns we are testing in empirical, observable terms, so that they can be measured directly at the individual level. In our setting, each participant receives a private signal indicating whether a project is likely high or low quality. Based on this signal, they must choose whether to contribute to the project (in the crowdfunding scenario) or cast a vote (in the voting scenario). We define:

- Signal-following behavior: A participant acts in accordance with their private signal (e.g., votes for red after observing a red signal).
- Risk-averse behavior: A participant receives a positive signal (indicating support) but chooses not to contribute.
- Mutual insurance behavior: A participant receives a negative signal (indicating opposition) but chooses to contribute anyway.

These definitions allow us to track each participant's decision relative to their private signal and classify their behavior accordingly.

Hypothesis 1 (H1) Participants in the crowdfunding condition are less likely to follow their private signals compared to those in the voting condition.
This hypothesis captures whether the threshold-based incentive structure in crowdfunding leads to deviations from signal-following, relative to a simpler majority voting mechanism.

Hypothesis 2 (H2) If participants behave strategically (rather than due to confusion), we expect their deviations from signal-following to follow predictable patterns:
a) Among participants with positive signals ("type H"): Higher signal accuracy and higher threshold requirements will reduce risk-averse behavior, as these conditions increase the expected value of contributing.
b) Among participants with negative signals ("type L"): Higher signal accuracy will increase mutual insurance behavior, as individuals become more confident in the group's ability to correct their own error.
c) Group size will not affect strategic behavior, because according to theory, strategic choices depend on equilibrium beliefs that scale with population size. In other words, participants respond to their perceived probability of success, which remains stable across group sizes due to the aggregation effects described in the theoretical model.

Note that in our data, risk-averse and mutual insurance behavior are measured as binary outcomes based on the match between signal and action. These allow us to directly test the directional predictions derived from equilibrium analysis. We do not assume that participants are computing full Bayesian equilibria; rather, we use these theoretical benchmarks to detect structured deviations from naive signal-following behavior.

5 Experimental Design

This experimental study investigates group decision-making processes through a randomized factorial design comparing majority voting and crowdfunding models.

5.1 Participants

We recruited participants via Prolific, a widely used online platform for behavioral research. While Prolific users may differ from typical crowdfunding backers in demographics or motivations, their experience with digital tasks and economic games makes them suitable for controlled experimental settings. Our goal is not to replicate the exact population of real-world backers, but to isolate behavioral patterns under incentive-compatible conditions. In that respect, Prolific provides a high-quality, diverse, and reliable participant pool that supports both replication and random assignment across conditions. The sample size was determined using G*Power analysis [8] to ensure sufficient power for logistic regression. The study included a total of 1368 participants, all native English speakers (self-reported to Prolific), of whom 1200 participated in the main experiment and 168 participated in the pilot study. The average age was 39 years (median: 36, SD: 13.6). The sample consisted of 681 male participants, 516 female participants, and 171 participants identifying as other or unspecified gender. Participants had an average of 1047.3 previous Prolific approvals (median: 380.5, SD: 1557.3), suggesting a right-skewed distribution of Prolific approvals, with some highly experienced outliers pulling the mean up.

5.2 Procedure

Crowdfunding platforms operate mostly online; hence, using an online experimental platform naturally reflects real-world behavior and is ideal for studying collective behaviors in this context. Our experiment examines how different mechanisms and group configurations influence choices in uncertain environments.
The study employed a 2 (group size: 5 vs. 25) × 2 (signal volume: 55% vs. 85% ball ratio) × 3 (voting method: regular voting vs. crowdfunding [50% threshold] vs. crowdfunding [80% threshold]) between-subjects design. Participants were randomly assigned to groups of either 5 or 25 members. Each participant received an initial endowment of 420 clickcoins (a currency we created for this experiment, worth approximately £0.03). Each group was presented with information about an urn containing a known proportion of red and blue balls, with the majority color making up either 55% or 85% of the balls, depending on the experimental condition. Each participant independently drew one ball from the urn and observed only their own draw, with no information about other group members' draws. This private signal formed the basis for their subsequent choices. The urn was either 55% or 85% one color, a parameter we refer to as signal accuracy. This reflects the reliability of the private signal each participant receives.

In the voting condition, group decisions are determined by simple majority (i.e., 50%). We excluded higher-threshold voting rules (e.g., 80%) due to interpretational ambiguity: if the threshold is not met, there is no canonical outcome for the group decision (accept or reject). In contrast, in the crowdfunding mechanism, failure to meet the threshold naturally results in project failure. Therefore, our design includes voting with a 50% threshold only, and crowdfunding with both 50% and 80% thresholds.

We carefully designed the payment structure to maintain equal expected returns across all experimental conditions, ensuring that any behavioral differences could be attributed to the mechanism rather than financial incentives. In the regular voting condition, participants voted on the urn's majority color, with each group member receiving 84 clickcoins for correct decisions and losing 84 clickcoins for incorrect choices. In the crowdfunding conditions, participants faced a more nuanced choice: they decided whether to invest in an outcome based on a predetermined color assigned to their group. Each participant could invest 84 clickcoins to bet that this assigned color was the urn's majority. If the group met its participation threshold (either 50% or 80% of members investing) and the predetermined color matched the urn's majority, investing participants received 168 clickcoins in return. However, if the group reached the participation threshold but the predetermined color did not match the majority, investing participants lost their 84-clickcoin investment. Participants who chose not to invest retained their initial endowment regardless of the outcome. This reward structure created expected returns equivalent to the voting condition.

Due to the undefined nature of regular voting with an 80% threshold (as there is no clear rule when fewer than 80% of members vote), the analysis focuses on two primary comparisons: regular voting versus crowdfunding [50% threshold], and crowdfunding [50% threshold] versus crowdfunding [80% threshold]. These comparisons allow us to isolate the effects of the mechanism and participation threshold separately. Throughout the experiment, all individual decisions remained private, and while participants knew their group size, they had no communication with other group members. This design choice ensures that any observed effects stem from the institutional structure rather than social influence or coordination.
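The payment rules can be summarized as a payoff function. The following sketch is our own restatement of the rules above; the treatment of an unmet crowdfunding threshold (a net payoff of zero for investors) follows the "no payments are made" provision of the underlying model and is an assumption on our part.

```python
def voting_payoff(group_correct: bool) -> int:
    """Regular voting: every member gains or loses 84 clickcoins with the
    correctness of the majority decision."""
    return 84 if group_correct else -84

def crowdfunding_payoff(invested: bool, threshold_met: bool, color_correct: bool) -> int:
    """Crowdfunding: an 84-clickcoin stake pays 168 back (net +84) when the
    participation threshold is reached and the assigned color is the true
    majority; the stake is lost when the threshold is reached but the color
    is wrong. Non-investors keep their endowment; if the threshold is not
    reached, no payments are made (assumed net 0 for investors)."""
    if not invested or not threshold_met:
        return 0
    return (168 - 84) if color_correct else -84
```

Conditional on the threshold being met, an investor's stake behaves exactly like a vote (±84 clickcoins), which is the sense in which expected returns were equalized across conditions.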
The research team aggregated group decisions after the experiment concluded, determining outcomes and calculating payments based on the predetermined rules for each condition. A preliminary comprehension test ensured participants understood the experimental principles before the main decision-making task.

5.3 Data processing

Participants (N=1,200) were randomly assigned across the twelve experimental conditions, with approximately 100 participants per condition. The average completion time was 2 minutes and 57 seconds (the experiment involved only a single binary decision based on a clear signal, making it feasible to complete thoughtfully within this timeframe), translating to an average hourly rate of £25.60. To ensure data quality, we excluded participants who either failed attention checks or completed the experiment unusually quickly, i.e., within the bottom 10% of completion times (under 53 seconds). This led to the exclusion of 119 participants (9.92% of the initial sample), resulting in a final sample of 1081 participants.³

³ Our results remained consistent across multiple time thresholds (50, 60, and 75 seconds) and aligned with pilot study findings, demonstrating the robustness of our analysis to time cutoff selection.

6 Results

Three logistic regression models were employed to analyze different aspects of participant decision-making. The first model evaluated how structural factors (specifically group size, signal volume, and voting method) influenced participants' likelihood of choosing in alignment with their private signals (IsTrue). The second model focused specifically on crowdfunding conditions and examined how group size, signal volume, and relative majority percentage affected risk-averse behavior (RA). This analysis concentrated on instances where participants had received private signals supporting participation, allowing for precise measurement of risk-averse decision-making. The third model also examined crowdfunding conditions but investigated mutual insurance behavior (MutIns). It analyzed how the same factors (group size, signal volume, and relative majority percentage) influenced decisions when participants had received private signals that did not support participation. This approach enabled us to isolate and measure choices driven by mutual insurance considerations.

All logistic regression models were evaluated using odds ratios, coefficients, standard errors, and p-values. Likelihood ratio tests were conducted to compare the fit of each model to a null model without predictors. The significance level was set at α = .05 for all analyses.

Table 1. Logistic Regression Model Predicting Voting According to Signal (IsTrue)

Predictor      Coefficient   Odds Ratio   Lower 95% CI   Upper 95% CI   p-value
(Intercept)    1.941         6.969        4.536          10.689         < 0.001
Crowdfunding   -1.975        0.139        0.092          0.209          < 0.001
BallRatio85    0.091         1.095        0.754          1.590          0.633
GroupSize25    -0.051        0.950        0.655          1.380          0.789
Note: N = 613

We performed a logistic regression analysis to investigate the influence of Scenario, BallRatio, and groupsize on the likelihood of acting according to one's private signal (IsTrue). The binary outcome variable "IsTrue" was modeled based on 3 predictors. The model analyzed 610 observations and revealed the following results:

- (Intercept) The intercept was statistically significant (p < 0.001) with an odds ratio of 6.969 (95% CI: 4.536-10.689). This represents the baseline odds of the outcome when all predictor variables are zero. The corresponding baseline probability of the outcome is 87.4%.
- scenarioCrowdfunding was statistically significant with an odds ratio of 0.139 (p < 0.001). This indicates that moving from the reference level (Voting) to this level (Crowdfunding) decreased the likelihood of the outcome by 86.1% (95% CI: 0.092-0.209). This corresponds to a probability change of P(Y=1) from 0.872 to 0.490 (a change of -38.0 percentage points).
- BallRatio85 was not statistically significant with an odds ratio of 1.095 (p = 0.633).
- groupsize25 was not statistically significant with an odds ratio of 0.950 (p = 0.789).

The likelihood ratio test showed that the model provided a significantly better fit than an intercept-only model (χ² = 106.08, df = 3, p < 0.001).
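The probability statements above follow from the logit link: an odds value o corresponds to probability o/(1 + o), and effects multiply on the odds scale. The short sketch below (our verification code, not part of the original analysis) reproduces the quoted 87.4% baseline and the drop to roughly 0.49 under crowdfunding.

```python
def odds_to_prob(odds: float) -> float:
    """Convert odds to a probability: p = odds / (1 + odds)."""
    return odds / (1.0 + odds)

baseline_or = 6.969       # intercept odds from Table 1
crowdfunding_or = 0.139   # odds ratio of the crowdfunding indicator

print(f"baseline P(follow signal)     = {odds_to_prob(baseline_or):.3f}")                     # ~0.874
print(f"crowdfunding P(follow signal) = {odds_to_prob(baseline_or * crowdfunding_or):.3f}")   # ~0.492
```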
Table 2. Logistic Regression Model Predicting Risk Aversion (RA)

Predictor                      Coefficient   Odds Ratio   Lower 95% CI   Upper 95% CI   p-value
(Intercept)                    -3.3490       0.035        0.015          0.080          < 0.001
groupsize25                    0.1509        1.163        0.572          2.365          0.677
BallRatio85                    -0.8814       0.414        0.193          0.889          0.024
RelativeMajorityPercentage80   1.1789        3.251        1.440          7.346          0.005
Note: N = 628

Next, we performed a logistic regression analysis to investigate the influence of groupsize, BallRatio, and RelativeMajorityPercentage on the likelihood of Risk Aversion (RA). The binary outcome variable "RA" was modeled based on 3 predictors. The model analyzed 628 observations and revealed the following results:

- (Intercept) The intercept was statistically significant (p < 0.001) with an odds ratio of 0.035 (95% CI: 0.015-0.080). This represents the baseline odds of the outcome when all predictor variables are zero. The corresponding baseline probability of the outcome is 3.4%.
- groupsize25 was not statistically significant with an odds ratio of 1.163 (p = 0.677).
- BallRatio85 was statistically significant with an odds ratio of 0.414 (p = 0.024). This indicates that moving from the reference level to this level of BallRatio decreased the likelihood of the outcome by 58.6% (95% CI: 0.193-0.889). This corresponds to a probability change of P(Y=1) from 0.034 to 0.015 (a change of 1.9 percentage points, or a relative decrease of 55.9% in probability).
- RelativeMajorityPercentage80 was statistically significant with an odds ratio of 3.251 (p = 0.005). This indicates that moving from the reference level to this level of RelativeMajorityPercentage increased the likelihood of the outcome by 225.1% (95% CI: 1.440-7.346). This corresponds to a probability change of P(Y=1) from 0.034 to 0.102 (a change of 6.8 percentage points).

The likelihood ratio test showed that the model provided a significantly better fit than an intercept-only model (χ² = 15.38, df = 3, p = 0.001).

Table 3. Logistic Regression Model Predicting Mutual Insurance (MutIns)

Predictor                      Coefficient   Odds Ratio   Lower 95% CI   Upper 95% CI   p-value
(Intercept)                    1.910         6.758        3.437          13.849         < 0.001
groupsize25                    -0.021        0.979        0.453          2.115          0.957
BallRatio85                    0.929         2.532        1.115          5.750          0.026
RelativeMajorityPercentage80   0.064         1.066        0.494          2.303          0.871
Note: N = 324

Finally, we performed a logistic regression analysis to investigate the influence of BallRatio, groupsize, and RelativeMajorityPercentage on the likelihood of Mutual Insurance (MutIns). The binary outcome variable "MutIns" was modeled based on 3 predictors. The model analyzed 324 observations and revealed the following results:

- (Intercept) was statistically significant (p < 0.001) with an odds ratio of 6.758 (95% CI: 3.437-13.849). This represents the baseline odds of the outcome when all predictor variables are zero. The corresponding baseline probability of the outcome is 87.1%.
- BallRatio85 was statistically significant with an odds ratio of 2.532 (p = 0.026). This indicates that moving from the reference level to this level of BallRatio increased the likelihood of the outcome by 150.0% (95% CI: 1.115-5.750). This corresponds to a probability change of P(Y=1) from 0.871 to 0.944 (a change of 7.3 percentage points).
- groupsize25 was not statistically significant with an odds ratio of 0.979 (p = 0.957).
- RelativeMajorityPercentage80 was not statistically significant with an odds ratio of 1.066 (p = 0.871).

The likelihood ratio test showed that the model did not provide a significantly better fit than an intercept-only model (χ² = 5.34, df = 3, p = 0.148).
6.1 Key Findings and Interpretations

Our experimental results provide strong support for both hypotheses. The significant negative effect of collective purchase on signal-following behavior (coefficient -1.975, p < 0.001) confirms Hypothesis 1, demonstrating that participants are indeed less likely to follow their private signals in crowdfunding scenarios compared to voting.

The analysis of risk-averse and mutual insurance behavior supports Hypothesis 2's predictions about strategic rather than complexity-driven behavior. For type H agents, higher signal accuracy significantly reduces risk-averse behavior (coefficient -0.8814, p = 0.024), as predicted. For type L agents, higher signal accuracy increases mutual insurance behavior (coefficient 0.929, p = 0.026), aligning with the theoretical predictions. Group size shows no significant effect in either analysis, supporting our prediction that participants' responses reflect the completion probability in each possible state. Theory tells us that these probability ratios remain fixed across population sizes due to strategic behavior. The differential response to mechanism parameters, particularly the significant effects of signal accuracy in theoretically predicted directions while group size remains insignificant, provides compelling evidence that participants' behavior reflects strategic considerations rather than mechanism complexity. These results demonstrate that crowdfunding mechanisms induce systematic deviations from signal-following behavior through strategic channels rather than confusion about the mechanism itself.

One area in which our experimental results diverge from theoretical predictions is the required participation threshold. We predicted that an increase in the threshold would lead to a decrease in risk aversion and an increase in mutual insurance. Our experimental results showed no significant relation between the threshold and participants' tendency to utilize mutual insurance, and an observed increase in risk aversion when the required threshold is raised. We suspect this arises from participant confusion, as deciding under a supermajority rule likely imposes a greater cognitive load than under a regular majority. We discuss this further in Section 8.

7 Information Aggregation

A central question in crowdfunding research is how effectively the mechanism can separate "the wheat from the chaff." Our experimental findings provide insight into this issue. We observe both Risk Aversion and Mutual Insurance, suggesting that while some participants perceive the lottery card as expensive, others perceive it as either cheap or moderately priced.
To apply our experimental findings, we adopt a frequentist interpretation of a mixed equilibrium and posit that our participant pool is divided into two groups: one perceiving the lottery price as expensive, and another viewing it as moderate. Let ρ denote the proportion of subjects who perceive the lottery as expensive. We show our calculation for the baseline level (i.e., group size = 5, threshold at 50%, and signal accuracy of 55%). Under the assumption of a mixed equilibrium, those who find the lottery expensive and receive a low signal choose to opt out, whereas those with a high signal mix between opting out and following their signal. At the baseline, we observe that ψ := 3.4% of the H-typed subjects opt out. Meanwhile, subjects who find the lottery moderately priced would opt in if they receive a high signal, but mix if they receive a low signal; at the baseline, λ := 87.1% of the L-typed subjects opt in despite having a low signal. Let φ denote the proportion of subjects who opted in. We can derive ρ from the following equality:

φ = ρ · Pr(a = y | price is expensive) + (1 − ρ) · Pr(a = y | price is moderate)
  = ρ · Pr(s = h) · ψ + (1 − ρ) · (Pr(s = h) + Pr(s = l) · λ).

Since Pr(ω = G) = 0.5, the unconditional signal probabilities are Pr(s = h) = Pr(s = l) = 0.5. We can therefore calculate ρ = (1 + λ − 2φ) / (1 + λ − ψ). In our baseline scenario, φ = 71/81. Hence, the proportion of subjects who perceive the lottery as expensive is approximately 0.0643. Next, we can calculate the probability of seeing the opt-in action given the state of the world:

φ_H := Pr(a = y | ω = G) = (1 − ρ)(q · 1 + (1 − q) · λ) + ρ(q · ψ + (1 − q) · 0) = 0.882
φ_L := Pr(a = y | ω = B) = (1 − ρ)((1 − q) · 1 + q · λ) + ρ((1 − q) · ψ + q · 0) = 0.870.

Recalling that the probability of completion follows a binomial distribution, we can calculate the expected correctness of the crowdfunding mechanism, and find that θ_CF ≈ 0.502. When examining the correctness of a voting mechanism (i.e., where agents are incentivized to follow their signal) we calculate θ_Voting ≈ 0.593. When repeating the calculation for a signal accuracy of 85%, we see that θ_CF ≈ 0.502, while for the correctness of the voting mechanism we calculate θ_Voting ≈ 0.973, suggesting that the loss of accuracy due to over-participation in crowdfunding increases dramatically as the accuracy of the private signal improves. If we take the empirical findings at face value and note that only 87.4% of participants in the voting scenario followed their signals, then we arrive at θ_Voting^0.55 ≈ 0.569 and θ_Voting^0.85 ≈ 0.907. These results suggest that, when signal accuracy is low, empirical crowdfunding correctness is not far below the voting scenario. However, as signal accuracy improves, voting yields much higher correctness.
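These baseline numbers can be reproduced end-to-end. The sketch below is our own verification code (not the authors'); it assumes the baseline condition (n = 5, 50% threshold, i.e., at least 3 opt-ins) and reads correctness as the state-averaged probability of a correct group outcome, which recovers ρ ≈ 0.064, φ_H ≈ 0.882, φ_L ≈ 0.870, θ_CF ≈ 0.502, and θ_Voting ≈ 0.593.

```python
from math import comb

def binom_tail(n: int, k: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n, k = 5, 3        # group of five, 50% threshold => at least 3 opt-ins
q = 0.55           # signal accuracy at baseline
psi = 0.034        # observed opt-out rate of H-typed subjects
lam = 0.871        # observed opt-in rate of L-typed subjects
phi = 71 / 81      # observed overall opt-in rate

rho = (1 + lam - 2 * phi) / (1 + lam - psi)   # share perceiving the price as expensive
phi_H = (1 - rho) * (q + (1 - q) * lam) + rho * q * psi
phi_L = (1 - rho) * ((1 - q) + q * lam) + rho * (1 - q) * psi

# A group outcome is correct if the threshold is met in state G or missed in state B.
theta_cf = 0.5 * binom_tail(n, k, phi_H) + 0.5 * (1 - binom_tail(n, k, phi_L))
theta_voting = binom_tail(n, k, q)            # all voters follow their signals

print(f"rho={rho:.4f}  phi_H={phi_H:.3f}  phi_L={phi_L:.3f}")
print(f"theta_CF={theta_cf:.3f}  theta_Voting={theta_voting:.3f}")
```

Replacing q with the effective per-voter accuracy 0.874·q + 0.126·(1 − q), i.e., imposing the observed 87.4% signal-following rate, yields the adjusted values θ_Voting^0.55 ≈ 0.569 and θ_Voting^0.85 ≈ 0.907 quoted above.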
7.1 Experimental Observations

In our experiment, each condition included 100 participants, organized into either twenty groups of five or four groups of twenty-five. Although these sample sizes fall below the threshold for statistical significance, they still provide a benchmark for evaluating the validity of the preceding theoretical discussion. In fact, a crowdfunding accuracy of around 50% appears to align quite closely with the performance of the various groups observed in the experiment. In Table 4 we present the actual correctness we saw in each of the experiment's conditions. Notably, the Voting scenario resulted in much higher correctness than the Crowdfunding scenario. As expected, this difference became more pronounced when signal accuracy increased.

Table 4. Correctness by Ball Ratio, Group Size, and Scenario Type

Ball Ratio (%)   Group Size   Number of Groups   Voting (50%)   Crowdfunding (50%)   Crowdfunding (80%)
55               5            20                 75.00          40.00                50.00
55               25           4                  75.00          75.00                25.00
85               5            20                 95.00          50.00                50.00
85               25           4                  100.00         50.00                50.00

To support our conjecture that the gap was due to oversubscription, in Table 5 we present the percentage of correct group decisions made in each condition. We provide separate data for opting in when ω = G and opting out when ω = B.

Table 5. Information Aggregation in Crowdfunding

Signal         Majority         Group   G | P(a=y) ≥ T   P(a=y) ≥ T | G   P(a=y) < T | B
Accuracy (%)   Percentage (%)   Size    (%)              (%)              (%)
55             50               5       40               100.00           0.00
55             50               25      75               100.00           0.00
55             80               5       58.83            76.92            0.00
55             80               25      0                0.00             33.33
85             50               5       50               100.00           0.00
85             50               25      50               100.00           0.00
85             80               5       52.63            90.91            0.00
85             80               25      50               100.00           0.00
T denotes the threshold, G the high-value state, and B the low-value state.

In Table 5 one can observe that, across all conditions, for groups where sufficient participation was reached, fewer than 60% arrived at the correct decision. When ω = G, over 75% of the groups met the required threshold. However, increasing the threshold to 80% appeared to reduce the probability of successful completion as the population size increased, both when ω = B and when ω = G. In other words, although many participants opted in, this did not necessarily translate into more ex-post correct group outcomes. Moreover, when the threshold was increased to 80%, the probability of actually reaching that threshold decreased (especially in larger groups), but even when groups did meet it, they were not substantially more likely to be correct. Taken together, these patterns suggest that over-subscription can undermine the collective decision process, leading to a large number of groups meeting their thresholds yet failing to choose correctly.

8 Discussion

Our analysis revealed several key findings about strategic behavior in crowdfunding mechanisms. These findings both confirm existing theoretical predictions ([2, 3, 1, 13]) and challenge conventional models in important ways, highlighting areas for further research and practical application.

Systematic Deviations from Signal-Following Behavior: One of the most significant results of this study is the systematic deviation from signal-following behavior in crowdfunding scenarios compared to voting mechanisms. As predicted, participants in crowdfunding were less likely to follow their private signals, reflecting the additional strategic considerations introduced by threshold mechanisms. This validates the theoretical claim that crowdfunding mechanisms create unique tensions between individual incentives and collective outcomes. However, the observed behavior also reveals important deviations that cannot be fully explained by standard equilibrium models.

Mutual Insurance and Strategic Behavior: The increase in mutual insurance behavior under conditions of higher signal accuracy aligns with theoretical predictions and highlights the strategic nature of decision-making in crowdfunding. Participants with negative private signals contributed despite their signals, relying on the perceived collective wisdom of the group. This behavior underscores how individuals balance their private information against the aggregated decisions of the crowd, especially under conditions of uncertainty.
While mutual insurance reflects rational strategic behavior, it also raises questions about how such behavior might impair information aggregation, as overreliance on collective decisions can dilute the impact of accurate private signals.

Unexpected Threshold Effects: Contrary to theoretical predictions, higher majority thresholds increased risk-averse behavior among participants. Traditional models suggest that higher thresholds should reduce risk aversion by providing greater protection against adverse outcomes. However, our findings indicate the opposite: participants became more cautious under higher thresholds. This divergence can be understood through the lens of prospect theory, a behavioral framework developed by Kahneman and Tversky (see [10] for a recent review). Prospect theory posits that individuals overweight small probabilities and exhibit loss aversion, where losses loom larger than equivalent gains. In the context of crowdfunding, crossing a higher threshold might be perceived as a low-probability event, leading participants to overestimate the risk of a wrong aggregate decision and thus avoid contributing to minimize potential losses. Additionally, the framing of a higher threshold may create a psychological barrier, where participants perceive the task as more daunting and act conservatively (see [17]).

Limitations of Current Models: Our results highlight critical gaps in existing theoretical models of crowdfunding. While the models successfully predict certain behaviors, such as mutual insurance and the effects of signal accuracy, they fail to capture the cognitive and psychological complexities introduced by higher thresholds. For instance, the increased cognitive load associated with supermajority rules may interact with strategic considerations in ways that traditional models do not account for (see [15]). Incorporating behavioral factors, such as loss aversion, framing effects, and bounded rationality, could refine these models and improve their predictive power.

Additionally, the omission of pricing mechanisms in this study reflects a deliberate design choice aimed at achieving scale in a large-scale online experiment. Incorporating monetary incentives or pricing structures would have been prohibitively expensive and logistically challenging in this setting. Future research should address this limitation by exploring how pricing strategies interact with the behavioral dynamics observed in this study, particularly in environments where economic trade-offs significantly impact participation.

Implications for Platform Design: From a practical perspective, our findings suggest that crowdfunding platforms should carefully consider the design of threshold mechanisms. While higher thresholds may intuitively seem to improve project quality by increasing the commitment required, they may inadvertently discourage participation due to heightened perceptions of risk. Platforms could address this by providing clearer explanations of thresholds, reducing perceived complexity, or experimenting with alternative mechanisms that balance individual and collective incentives more effectively.

Bridging Theory and Real-World Dynamics: Although our controlled experimental setup allowed us to isolate pure informational effects, real-world crowdfunding campaigns involve dynamic elements such as time pressure, social proof, and iterative updates.
Future research should explore how these factors interact with the strategic behaviors identified in this study. For example, how do dynamic updates influence risk-averse and mutual insurance behaviors over time? Can social proof mitigate the psychological barriers introduced by higher thresholds? Addressing these questions will enhance the applicability of our findings to real-world scenarios.

9 Conclusions

This study demonstrates that strategic behavior in crowdfunding exists even when isolating pure informational incentives from the dynamic elements typically studied in crowdfunding research. Through a controlled experimental environment with 1,368 participants, we show how individuals navigate the tension between private signals and collective decision-making in a single-shot context.

Our findings significantly advance crowdfunding theory in two ways. First, we provide empirical validation that strategic behavior emerges even in simple, single-shot scenarios. Second, we identify important limitations in current theoretical models, particularly in their treatment of threshold effects. The observed increase in risk-averse behavior under higher thresholds suggests that cognitive factors play a more significant role than previously recognized.

The study also provides critical insights into information aggregation in crowdfunding mechanisms. Our results show that when signal accuracy is low, crowdfunding may actually outperform simple voting in terms of information aggregation. However, this advantage disappears and even reverses as signal accuracy increases, with the performance gap becoming particularly pronounced with highly accurate signals. This finding has important implications for understanding when crowdfunding mechanisms are most effective at separating good projects from bad ones.

For platform design, our results demonstrate the importance of carefully considering threshold mechanisms and their effects on participant behavior. The strong relationship between thresholds and risk-averse behavior suggests that the standard tools used by platforms to ensure project quality may have unintended consequences on participant decision-making.

Future research should explore how these fundamental strategic considerations and information aggregation properties interact with the dynamic elements of real crowdfunding campaigns. Particularly promising directions include examining how social proof and project updates influence risk-averse and mutual insurance behaviors, and investigating whether alternative threshold mechanisms could better align individual and collective interests.

References

1. Alaei, S., Malekian, A., Mostagir, M.: A Dynamic Model of Crowdfunding. In: Proceedings of the 2016 ACM Conference on Economics and Computation. ACM, Maastricht, The Netherlands (2016). https://doi.org/10.2139/ssrn.2737748
2. Arieli, I., Koren, M., Smorodinsky, R.: The crowdfunding game. In: Devanur, N.R., Lu, P. (eds) Web and Internet Economics. WINE 2017. Lecture Notes in Computer Science, vol. 10660, pp. 398-399. Springer, Cham (2017)
3. Arieli, I., Koren, M., Smorodinsky, R.: The one-shot crowdfunding game. In: Proceedings of the 2018 ACM Conference on Economics and Computation (EC '18). pp. 213-214 (2018). https://doi.org/10.1145/3219166.3219215
4. Arieli, I., Koren, M., Smorodinsky, R.: Information aggregation in large collective purchases. Economic Theory pp. 1-51 (2023)
5. Ban, A., Koren, M.: Sequential Fundraising and Social Insurance.
In: Proceedings of the 21st ACM Conference on Economics and Computation. pp. 45-46. EC '20, Association for Computing Machinery, New York, NY, USA (Jul 2020). https://doi.org/10.1145/3391403.3399479
6. Burtch, G., Hong, Y., Liu, D.: The role of provision points in online crowdfunding. Journal of Management Information Systems 35(1), 117-144 (2018)
7. Chemla, G., Tinn, K.: Learning through crowdfunding. Management Science 66(5), 1783-1801 (2020)
8. Faul, F., Erdfelder, E., Lang, A.G., Buchner, A.: G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods 39(2), 175-191 (2007)
9. Feddersen, T.J., Pesendorfer, W.: The swing voter's curse. The American Economic Review pp. 408-424 (1996)
10. Kahneman, D., Tversky, A.: Prospect theory: An analysis of decision under risk. In: Handbook of the fundamentals of financial decision making: Part I, pp. 99-127. World Scientific (2013)
11. Koning, R., Model, J.: Experimental Study of Crowdfunding Cascades: When Nothing is Better than Something (Oct 2013). https://doi.org/10.2139/ssrn.2308161, https://papers.ssrn.com/abstract=2308161
12. Kuppuswamy, V., Bayus, B.L.: Does my contribution to your crowdfunding project matter? Journal of Business Venturing 32(1), 72-89 (2017)
13. Mollick, E.: The dynamics of crowdfunding: An exploratory study. Journal of Business Venturing 29(1), 1-16 (2014)
14. Strausz, R.: A theory of crowdfunding: A mechanism design approach with demand uncertainty and moral hazard. American Economic Review 107(6), 1430-1476 (2017)
15. Sweller, J.: Cognitive load theory. In: Psychology of Learning and Motivation, vol. 55, pp. 37-76. Elsevier (2011)
16. Teunenbroek, C.v., Bekkers, R.: Follow the crowd: Social information and crowdfunding donations in a large field experiment. Journal of Behavioral Public Administration 3(1) (Mar 2020). https://doi.org/10.30636/jbpa.31.87, https://www.journal-bpa.org/index.php/jbpa/article/view/87
17. Tversky, A., Kahneman, D.: The framing of decisions and the psychology of choice. Science 211(4481), 453-458 (1981)
18. Weinmann, M., Mishra, A.N., Kaiser, L.F., vom Brocke, J.: The attraction effect in crowdfunding. Information Systems Research 34(3), 1276-1295 (2023)
19. Zaggl, M.A., Block, J.: Do small funding amounts lead to reverse herding? A field experiment in reward-based crowdfunding. Journal of Business Venturing Insights 12, e00139 (Nov 2019). https://doi.org/10.1016/j.jbvi.2019.e00139, https://www.sciencedirect.com/science/article/pii/S2352673419300472
arXiv:2510.14874v1 [cs.CV] 16 Oct 2025

Under review as a conference paper at ICLR 2026

TOUCH: TEXT-GUIDED CONTROLLABLE GENERATION OF FREE-FORM HAND-OBJECT INTERACTIONS

Guangyi Han1, Wei Zhai1,†, Yuhang Yang1, Yang Cao1, Zheng-Jun Zha1
1 University of Science and Technology of China
{hanguangyi@mail., wzhai056@, yyuhang@mail., forrest@, zhazj@}ustc.edu.cn
† Corresponding Author.

ABSTRACT

Hand-object interaction (HOI) is fundamental for humans to express intent. Existing HOI generation research is predominantly confined to fixed grasping patterns, where control is tied to physical priors such as force closure or generic intent instructions, even when expressed through elaborate language. Such an overly general conditioning imposes a strong inductive bias for stable grasps, thus failing to capture the diversity of daily HOI. To address these limitations, we introduce Free-Form HOI Generation, which aims to generate controllable, diverse, and physically plausible HOI conditioned on fine-grained intent, extending HOI from grasping to free-form interactions, like pushing, poking, and rotating. To support this task, we construct WildO2, an in-the-wild diverse 3D HOI dataset, which includes diverse HOI derived from internet videos. Specifically, it contains 4.4k unique interactions across 92 intents and 610 object categories, each with detailed semantic annotations. Building on this dataset, we propose TOUCH, a three-stage framework centered on a multi-level diffusion model that facilitates fine-grained semantic control to generate versatile hand poses beyond grasping priors. This process leverages explicit contact modeling for conditioning and is subsequently refined with contact consistency and physical constraints to ensure realism. Comprehensive experiments demonstrate our method's ability to generate controllable, diverse, and physically plausible hand interactions representative of daily activities. The project page is here.

1 INTRODUCTION

Hand-Object Interaction (HOI) is fundamental to expressing intent and executing tasks in human daily life, and the ability to generate controllable interactions is crucial for AR/VR, robotics, and embodied AI (Zheng et al., 2025). While existing HOI generation research has progressed from ensuring physical plausibility (Fang et al., 2020) to incorporating semantic controllability (Li et al., 2024; Yang et al., 2023), its scope remains confined to a grasp-centric paradigm (Zhou et al., 2022; 2024), where control signals—spanning from physical constraints like force closure (Nguyen, 1988; Zheng & Qian, 2005) to coarse instructions like verb-noun pairs (Li et al., 2025)—are overly general, imposing a strong inductive bias that favors the generation of stable grasps (Taheri et al., 2020; Turpin et al., 2022), sacrificing interaction diversity. Furthermore, even with more sophisticated control such as detailed natural language (e.g., via LLMs) (Huang et al., 2025; Zhang et al., 2025a;b; Shao et al., 2025), the underlying model designs and inherent inductive biases are still fundamentally geared towards generating only grasping interactions, driven by historical focus and prevailing representations. Consequently, these approaches lack the fine-grained control and inherent capability to capture the diverse non-grasping interactions found in the real world, including varied hand poses, contact details, and nuanced semantic intent.

To bridge the gap between the limited scope of current methods and the complexity of real-world interactions, we introduce the task of Free-Form HOI Generation.
The goal is to break grasp-centric limitations and shift towards generating diverse interactions, including the vast array of non-grasping manipulations. This task emphasizes expressiveness and controllability in the generation process, aiming to synthesize interactions that are not only physically plausible but also semantically rich and truly adaptable to complex human intentions.

Figure 1: Overview. We extend HOI generation beyond laboratory "grasp" settings (left) toward broader daily HOI modalities (right), enabling the modeling of more human-like interactions. Our dataset WildO2, built from Internet videos, covers more contacts, more objects, and more actions, and is enriched with descriptive synthetic captions (DSCs) to support fine-grained semantic controllable HOI generation with our method, TOUCH.

The core challenge of this task lies in two aspects: what to generate and how to generate it. The former pertains to spatial plausibility: the model must break free from restrictive grasping priors (e.g., palm position and orientation, contact region assumptions (Ye et al., 2024; Jiang et al., 2021; Jung & Lee, 2025)) to explore a vast yet physically valid interaction space. To address this, we propose that contact relationships serve as a powerful cue to constrain this high-dimensional space, offering a more nuanced understanding of physically valid interactions. The latter pertains to semantic controllability: the model must accurately map fine-grained textual instructions to specific hand configurations and contact regions. The prior knowledge within Large Language Models (LLMs) offers a promising pathway for this guidance (Tang et al., 2023). A major obstacle is the lack of diverse 3D training data; current datasets (Zhan et al., 2024; Fu et al., 2025) are limited to lab-based grasps, and large-scale real-world data collection remains challenging. In contrast, abundant 2D HOI videos online provide rich and realistic daily interaction behaviors.

To tackle the proposed task and challenges, we present TOUCH, a three-stage framework for controllable free-form HOI generation. First, we explicitly model the contact on the surfaces of the hand and the object separately by jointly encoding spatial point-cloud relations and semantic information, providing strong spatial priors to mitigate uncertainty from the high degrees of freedom in interaction position and pose. We further incorporate part-level hand modeling for more precise action control. Second, we employ a multi-level diffusion model with attention-based fusion of semantics and geometry: coarse-grained intent and global object geometry guide the early diffusion stages, while fine-grained text and local contact features refine detailed motions in deeper stages, enabling fine-grained semantic controllability.
Finally, we introduce self-supervised contact consistency and physical plausibility constraints to optimize the generated interactions, ensuring realism and physical feasibility. Compared to prior methods restricted to grasp generation, TOUCH naturally generalizes to diverse free-form HOI such as pushing, pressing, and rotating. Additionally, based on 3D object reconstruction (Xu et al., 2024), we introduce an automated pipeline to build the dataset WildO2 that jointly recovers and optimizes high-quality 3D hand-object interaction samples from internet videos annotated with interaction intent. By leveraging vision-language models (Bai et al., 2023b), we generate fine-grained semantic annotations, resulting in a 3D daily HOI dataset covering diverse interaction intents.

Our main contributions are: (1) We propose to extend HOI generation from constrained grasping to a broader, more realistic, and more diverse set of daily interactions. (2) We propose TOUCH, a new framework that can generate natural, physically reasonable, and diverse free-form HOI under fine-grained text guidance. (3) We build an automated pipeline and construct WildO2, an in-the-wild 3D dataset for daily HOI, providing a critical resource that enables future research in this domain. Extensive experiments demonstrate the superiority of TOUCH.

2 RELATED WORK

2.1 HAND-OBJECT INTERACTION DATASETS.

Existing 3D hand-object interaction (HOI) datasets are predominantly collected in controlled laboratory settings, relying either on physics-based simulation synthesis (Hasson et al., 2019) or motion capture systems to record real interactions (Hampali et al., 2020; Liu et al., 2022; Yang et al., 2022; Brahmbhatt et al., 2020). Although these datasets provide valuable support for modeling 3D HOI, they suffer from limited diversity due to constrained camera setups, a small number of participants, and a restricted set of object instances. In contrast, large-scale in-the-wild video datasets (Damen et al., 2020; Grauman et al., 2022; Shan et al., 2020) contain abundant HOI clips, but lack high-quality 3D annotations. Some studies have attempted to annotate subsets of these videos in 3D using object template-based optimization methods (Cao et al., 2021; Patel et al., 2022); however, due to the high diversity of open-set objects, scaling such approaches remains challenging.

2.2 TEMPLATE-FREE HOI RECONSTRUCTION.

The core bottleneck in reconstructing HOIs in the wild has long been the recovery of diverse object geometries. While existing template-free approaches (Fan et al., 2024; Ye et al., 2022) avoid predefined object model constraints, they are typically trained on limited datasets and exhibit poor generalization to novel objects. In recent years, multi-view diffusion models (Liu et al., 2023a) and large-scale reconstruction models (LRMs) (Hong et al., 2024) have enabled high-quality 3D mesh reconstruction directly from single images (Xu et al., 2024) or text prompts (Poole et al., 2022), demonstrating strong generalization capabilities. Motivated by these advances, several HOI studies have explored image-to-3D reconstruction pipelines to handle open-set objects in the wild. However, due to severe hand occlusion, these methods often rely on image inpainting to complete occluded regions (Tian et al., 2025; Liu et al., 2024; Wen et al., 2025), or employ text-to-3D generation to align with coarse reconstruction results (Wu et al., 2024; Chen et al., 2025a).
Nonetheless, most of these pipelines depend on heuristic completion or registration strategies, resulting in limited geometric consistency with the input, and have yet to be validated at scale in an automated manner.

2.3 DATA-DRIVEN CONTROLLABLE HOI GENERATION

In the evolution of HOI generation, interaction guidance has progressively advanced: from coarse control based on grasp type (Feix et al., 2015; Chen et al., 2025b), to object-conditioned generation (Karunratanakul et al., 2020), and further to task/action-level intent constraints (Christen et al., 2024; Yu et al., 2025; Yang et al., 2024b;a). To enhance physical plausibility, contact penetration losses and hand anatomical constraints have been widely adopted (Wei et al., 2024). Additionally, explicit modeling of hand part segmentation and contact relationships with objects has been shown to improve physical realism and detailed expression of interactions (Liu et al., 2023b; Zhang et al., 2024). Building on these efforts, we propose a multi-level controllable generation framework trained on our newly constructed daily HOI dataset, enabling finer-grained semantic intent control and the flexible generation of free-form HOIs that align with complex human intentions.

3 DATASET

3.1 DATA COLLECTION AND PROCESSING

Our goal is to construct a diverse dataset of 3D hand-object interactions from in-the-wild videos. A primary challenge in this process is the severe occlusion of the object by the hand, which compromises the quality of 3D object reconstruction. To address this, we introduce a semi-automated data generation pipeline, centered around a novel Object-only to Hand-Object Interaction (O2HOI) frame pairing strategy.

Figure 2: The proposed data generation pipeline for WildO2. The process begins with O2HOI frame pair extraction from in-the-wild videos, followed by a three-stage pipeline for 3D reconstruction, camera alignment, and hand-object refinement that produces high-fidelity interaction data.

We begin by filtering the Something-Something V2 dataset (Goyal et al., 2017), which is rich in goal-directed human actions, to obtain 8k single-hand, single-object interaction clips. For each clip, we automatically extract an O2HOI pair (details in Appendix): an object-only frame Iref, where the object is unoccluded, and a corresponding interaction frame Ihoi. To obtain a complete object mask in the interaction frame, we segment the object in Iref using SAM2 (Ravi et al., 2024) and then transfer this mask to Ihoi via a robust dense matching model (Edstedt et al., 2024), yielding Minpaint. This mask transfer strategy offers a distinct advantage over common alternatives: it avoids the geometric inconsistencies of diffusion-based inpainting (Liu et al., 2024) while being significantly more scalable than manual completion (Wen et al., 2025). Consequently, our approach facilitates the automated, large-scale generation of high-fidelity 3D assets for reconstruction.

3.2 DATA GENERATION PIPELINE

Based on the O2HOI pairs, we build a three-stage pipeline to recover 3D HOI (see Figure 2).

Stage 1: Initialization.
For each pair, we reconstruct a textured object mesh V^O_recon from the object-only frame Iref using an image-to-3D model (Xu et al., 2024). Concurrently, we estimate initial MANO (Romero et al., 2017) hand parameters Hinit from the interaction frame Ihoi using a state-of-the-art hand reconstruction method (Pavlakos et al., 2024).

Stage 2: Camera Alignment. A challenge arises from coordinate system misalignment: the object mesh V^O_recon is created in a canonical space of the object-only frame Iref, while the hand exists in the camera space of the interaction frame Ihoi. To unify them, we align V^O_recon to an object-centric global coordinate system relative to the interaction frame by optimizing the camera projection matrix K and extrinsics (R, t). This is achieved by minimizing a camera alignment loss, Lcam, via differentiable rendering. The optimization proceeds in two phases: we initially use mask IoU, Sinkhorn (Cuturi, 2013) loss, and an edge penalty term (to prevent the object from moving out of view). Once the IoU surpasses a threshold, we introduce scale-invariant depth (Eigen et al., 2014) and RGB reconstruction losses for fine-tuning. The overall objective is formulated as:

min_{K,R,t} Lcam = Lmask + Lsinkhorn + Ledge + λfine (Ldepth + Lrgb).   (1)

Stage 3: Hand-Object Refinement. With the aligned camera and object, we refine the initial hand parameters Hinit to achieve physically plausible contact. Specifically, we cast rays from the camera center through pixels within the interaction mask Minpaint. The intersection points of these rays with the 3D hand and object geometries define a potential 3D contact zone. We then optimize H using a refinement objective Lalign, which combines 2D evidence with 3D physical constraints: hand mask IoU (L^H_mask), 2D joint reprojection error (Lj2d), an ICP loss on the 3D contact zone (Licp), and physical constraints for contact, penetration, and anatomy based on (Yang et al., 2021).

min_H Lalign = L^H_mask + Lj2d + Licp + Lphy,   Lphy = Lcontact + Lpene + Lanatomy + Lself.   (2)

This pipeline yields 4,414 high-quality 3D hand-object interaction samples after a final stage of manual inspection and refinement, which constitute the ground truth of our dataset.
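To make the two-phase schedule of Eq. (1) concrete, the following is a minimal PyTorch sketch. The renderer handle `render_fn`, the gating threshold `iou_gate`, and the omission of the Sinkhorn transport term are our assumptions for illustration, not the paper's implementation.

```python
import torch

def soft_iou(pred_mask, gt_mask, eps=1e-6):
    """Differentiable IoU between soft masks in [0, 1]."""
    inter = (pred_mask * gt_mask).sum()
    union = (pred_mask + gt_mask - pred_mask * gt_mask).sum()
    return inter / (union + eps)

def camera_alignment_loss(render_fn, cam_params, gt, iou_gate=0.6, lambda_fine=1.0):
    """Two-phase objective in the spirit of Eq. (1). `render_fn` is a stand-in
    for a differentiable renderer returning a dict with 'mask' (H, W),
    'depth' (H, W), and 'rgb' (H, W, 3) for the current camera parameters;
    `gt` holds the corresponding targets. Sinkhorn term omitted for brevity."""
    out = render_fn(cam_params)
    iou = soft_iou(out["mask"], gt["mask"])
    l_mask = 1.0 - iou
    # Edge penalty: discourage mask mass on the image border (object leaving view).
    border = torch.ones_like(out["mask"])
    border[2:-2, 2:-2] = 0.0
    l_edge = (out["mask"] * border).mean()
    loss = l_mask + l_edge
    # Phase 2: once coarse alignment succeeds, add depth/RGB reconstruction terms.
    if iou.item() > iou_gate:
        valid = gt["mask"] > 0.5
        # Scale-invariant depth error (Eigen et al., 2014) over object pixels.
        d = torch.log(out["depth"][valid] + 1e-6) - torch.log(gt["depth"][valid] + 1e-6)
        l_depth = (d ** 2).mean() - 0.5 * d.mean() ** 2
        l_rgb = ((out["rgb"] - gt["rgb"]) ** 2 * gt["mask"].unsqueeze(-1)).mean()
        loss = loss + lambda_fine * (l_depth + l_rgb)
    return loss
```

In practice the gate would switch once and stay on, rather than being re-evaluated per step; we show the per-step form only to keep the sketch self-contained.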
Figure 3: Dataset Distribution. (a) An illustration of the interplay between the most frequent object categories, interaction types, and hand contact regions. Object and action definitions are adapted and refined from (Goyal et al., 2017). Contact regions are derived based on our dataset analysis. (b) Segmentation of the 17 hand parts and their contact frequency distribution in the dataset, along with a contact heatmap of the entire hand.

3.3 DATA ANNOTATION AND STATISTICS

We enrich our dataset with a multi-level annotation system, generating over 44k annotations. A statistical overview is provided in Figure 3, with further details in the Appendix.

3D Geometry and Transformation. Each sample includes the final hand-object meshes (V̂H, V̂O) and the corresponding camera parameters derived from our generation pipeline.

Contact Maps. We compute dense contact maps between the hand and object surfaces. To handle varying object scales, our method robustly identifies contact regions by combining relative and absolute distance thresholds with bidirectional nearest-neighbor filtering.

Multi-Level Language Descriptions. We provide two levels of textual descriptions. We inherit the template-based Short Synthetic Captions (SSCs) from Something-Something V2 (e.g., "picking [Something] up"). Additionally, we use a Vision-Language Model (VLM) (Bai et al., 2023b) to generate more detailed Descriptive Synthetic Captions (DSCs), which are manually verified for quality and relevance.

Fine-Grained Hand Part Segmentation. We segment the hand mesh into 17 parts, including finger pads, nails, knuckles, palmar, and the dorsal region. This partitioning scheme goes beyond the coarse divisions commonly used in grasp generation tasks (Hasson et al., 2019; Liu et al., 2023b), which often focus only on contact on the inner hand, by also accounting for contact on the dorsal side. This fine-grained segmentation supports detailed local interaction analysis and facilitates alignment with the semantic descriptions in the DSCs.

4 METHOD

This work aims to generate natural and physically plausible hand-object interaction (HOI) poses, parameterized by H, along with corresponding contact maps CH and CO, conditioned on a multi-level textual prompt T and an object mesh VO. To tackle this problem, we propose a three-stage framework, as illustrated in Figure 4. Specifically, the Contact Map Prediction module (Section 4.1) infers the potential contact regions on the hand and object surfaces based on the text and object geometry. The Multi-Level Conditioned Diffusion module (Section 4.2) synthesizes a coarse hand pose by integrating coarse-to-fine textual and geometric features within a diffusion framework, ensuring alignment with multi-level constraints. Finally, the Physical Constraints Refinement module (Section 4.3) further optimizes the coarse pose to enhance contact realism and prevent penetrations.
4.1 CONTACT MAP PREDICTION

To generate diverse interactions beyond simple grasping, we design two independent yet similar CVAEs (Sohn et al., 2015) to generate binary contact maps for the object and the hand, respectively.

Figure 4: Overview of our three-stage framework TOUCH for generating hand-object interactions from multi-level text prompts and object meshes. CIM stands for the Condition Injection Module.

For the object branch, we sample a point cloud PO ∈ R^{NO×3} (NO = 3000) from its mesh VO, normalize it, and record the scale factor sO. We use PointNet (Qi et al., 2016) to extract its geometric features, which are concatenated with sO to form the object condition FO. For the hand branch, we generate a canonical point cloud P0_H ∈ R^{NH×3} (NH = 778) from MANO's zero pose and shape parameters H0. This point cloud, combined with a hand-part mask initialized from the fine-grained text TDSC, is processed by PointNet to obtain the hand condition FH. This design integrates the topological structure of the point clouds with text-guided emphasis on interaction-relevant hand regions. Both CVAEs are trained conditioned on their respective geometric features (FO, FH) and a shared text feature FDSC = ftext(TDSC), which is extracted using Qwen-7B (Bai et al., 2023a) and processed through a lightweight adapter. The optimization objective is a composite loss function:

Lcontact = Lfocal + Ldice + β LKL   (3)

where Lfocal and Ldice supervise the contact prediction, and LKL structures the latent space. During inference, given the conditional features (FO, FH, FDSC), the model samples from a Gaussian prior z ∼ N(0, I) and decodes it to produce the predicted binary contact maps ĈO ∈ {0, 1}^{NO×1} and ĈH ∈ {0, 1}^{NH×1}.
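A minimal PyTorch sketch of the composite objective in Eq. (3), assuming standard formulations of the focal and Dice losses; the hyperparameter values are illustrative, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def contact_cvae_loss(logits, target, mu, logvar, beta=1e-3, gamma=2.0, alpha=0.25):
    """Composite contact-map objective: focal + Dice + beta * KL (Eq. 3).
    logits/target: (B, N) per-point contact predictions and binary labels;
    mu/logvar: CVAE posterior parameters."""
    p = torch.sigmoid(logits)
    # Focal loss: down-weights easy points so sparse contact regions dominate.
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    p_t = p * target + (1 - p) * (1 - target)
    alpha_t = alpha * target + (1 - alpha) * (1 - target)
    l_focal = (alpha_t * (1 - p_t) ** gamma * bce).mean()
    # Dice loss: overlap-based, robust to foreground/background imbalance.
    inter = (p * target).sum(dim=-1)
    l_dice = (1 - (2 * inter + 1) / (p.sum(-1) + target.sum(-1) + 1)).mean()
    # KL divergence of the posterior to the standard Gaussian prior.
    l_kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return l_focal + l_dice + beta * l_kl
```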
4.2 MULTI-LEVEL CONDITIONED DIFFUSION

The core of our method is a Transformer-based Denoising Diffusion Probabilistic Model (DDPM) (Ho et al., 2020) that synthesizes hand pose parameters Ĥ conditioned on the object point cloud PO, multi-level text T, and predicted contact maps Ĉ. Instead of predicting noise, our model fθ is trained to directly predict the denoised data x̂0 = fθ(xt, t, y), optimized with an L2 loss on the pose parameters:

Ldiff = E_{t,ε} [ ||x̂0 − x0||² ]

Condition Generation: Transformer Inputs. To achieve precise control, our model extracts multi-level conditional features from both geometric and textual modalities. On the geometric side, we use PointNet to extract global features F^O_glb, F^H_glb and point-wise local features from the object point cloud PO, the initial hand point cloud P0_H, and the predicted contact maps Ĉ from the previous stage. To focus on interaction regions, we leverage Ĉ to adaptively select features of N^O_loc = 128 object points and N^H_loc = 64 hand points near contact areas, yielding F̃^O_loc and F̃^H_loc. On the textual side, we utilize ftext to extract both coarse-grained F^SSC_qwen = ftext(TSSC) and fine-grained F^DSC_qwen text features.

Conditional Injection: Coarse-to-Fine Control. We inject these features into the Ninj = 8 blocks of our Transformer model in a hierarchical, coarse-to-fine fashion. This design ensures that global context, defined by SSCs and global geometry, shapes the overall pose in early denoising stages, while local details, defined by DSCs and contact-point features, are refined in later stages. Specifically, for the i-th Transformer block:

Early Stages (i < 4): Global context is injected, with no local features.

y^i_glb = concat(F^O_glb, F^H_glb, sO, F^SSC_qwen, t),   y^i_loc = ∅   (4)

Later Stages (4 ≤ i < Ninj): Local details are injected, switching to fine-grained conditions.

y^i_glb = concat(F^H_glb, sO, F^DSC_qwen, t),   y^i_loc = concat(F̃^O_loc, F̃^H_loc)   (5)

To prevent over-reliance on any single condition and enhance robustness, we randomly drop each component of the global condition with a 10% probability during training. Within each block, the global condition y^i_glb modulates the main features via FiLM (Perez et al., 2018), while the local condition y^i_loc is integrated through cross-attention to provide fine-grained spatial cues. This dual mechanism effectively decouples global contextual guidance from local geometric refinement. Finally, the updated latent goes through self-attention and a Feed-Forward Network (FFN).

Training Loss. To improve training stability and spatial alignment, we introduce two auxiliary losses alongside the primary diffusion loss Ldiff. A global pose loss directly supervises the hand's global rotation rrot and translation T to prevent overall pose drift, an issue exacerbated when directly regressing H, which comprises parameters with disparate numerical ranges (e.g., shape β, pose Θ, rrot, T). A distance map loss ensures precise contact by supervising the distance map dmap ∈ R^{21×NO} from the 21 hand joints to the object surface. The final objective is a weighted sum:

Ltotal = Ldiff + λglobal ( |r̂rot − r^gt_rot| + |T̂ − T^gt| ) + λdmap |d̂map − d^gt_map|   (6)

4.3 PHYSICAL CONSTRAINTS REFINEMENT

To address the common issue of global pose drift in free-form HOI generation, where the hand often fails to make contact with the object, we introduce an efficient physical refinement module. This module is powered by a refiner network, frefiner, which inherits the Transformer architecture of our diffusion model. The process begins with a single forward pass to rapidly correct the global positioning of the initial pose Ĥdiff, establishing primary physical contact. Subsequently, this corrected pose undergoes Ntta iterations of test-time optimization (TTA) to fine-tune local contact details, such as finger placements.

The entire optimization is guided by our self-supervised cycle-consistency loss (Lcyc), which enforces bidirectional mapping consistency between hand and object contact surfaces. The core idea is that a hand contact point, after being mapped to the nearest object point via Φ (hand-to-object), should map back to its original location via the reverse mapping Ψ (object-to-hand), and vice versa. This loss acts as a powerful regularizer, effectively reducing the ambiguity inherent in the mappings. We combine this with Lphy (see Eq. 2). The total refinement loss is defined as follows (a sketch of the cycle term follows below):

Lrefiner = Lphy + λcyc ( E_{Ph ∈ P_CH} ||Ψ(Φ(Ph)) − Ph||₁ + E_{Po ∈ P_CO} ||Φ(Ψ(Po)) − Po||₁ )   (7)
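As referenced above, a minimal sketch of the cycle term in Eq. (7), realizing Φ and Ψ as nearest-neighbor maps between the predicted contact point sets; the brute-force `torch.cdist` lookup is an illustrative choice, not necessarily the paper's implementation.

```python
import torch

def nearest_map(src, dst):
    """Map each source point to its nearest neighbor in dst (brute force).
    src: (Ns, 3), dst: (Nd, 3) -> (Ns, 3)."""
    d = torch.cdist(src, dst)        # (Ns, Nd) pairwise Euclidean distances
    return dst[d.argmin(dim=1)]

def cycle_consistency_loss(hand_contact_pts, obj_contact_pts):
    """Self-supervised cycle term of Eq. (7): a hand contact point mapped to the
    object (Phi) should map back near itself via the reverse map (Psi), and
    symmetrically for object contact points."""
    phi_h = nearest_map(hand_contact_pts, obj_contact_pts)   # hand -> object
    back_h = nearest_map(phi_h, hand_contact_pts)            # ... -> back to hand
    psi_o = nearest_map(obj_contact_pts, hand_contact_pts)   # object -> hand
    back_o = nearest_map(psi_o, obj_contact_pts)             # ... -> back to object
    return (back_h - hand_contact_pts).abs().sum(-1).mean() + \
           (back_o - obj_contact_pts).abs().sum(-1).mean()
```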
5 EXPERIMENTS

5.1 EXPERIMENTAL SETTINGS

Our experiments are conducted on the WildO2 dataset. For each hand part contact category, we perform a random 4:1 split, yielding approximately 3.7k training and 677 test samples. To address the long-tailed distribution of hand part labels, we aggregate 10 less frequent hand part categories and then apply resampling using unique 7-bit labels to balance the data. The model is trained for 1000 epochs using the Adam optimizer with a learning rate of 1e-4 and a batch size of 128. The diffusion model's parameters are frozen during the training of the refiner module. We evaluate our method from four perspectives: (1) Contact Accuracy, assessed by IoU (P-IoU) and F1-score (P-F1) against ground-truth contact parts (see the sketch after this list). (2) Physical Plausibility, measured by Mean Per-Vertex Position Error (MPVPE), Penetration Depth (PD), and Penetration Volume (PV). Note that unlike works focusing on grasping (Jiang et al., 2021), we do not employ physics engine-based stability simulation metrics, as our scope of interactions is broader than force-closure grasps. (3) Diversity, quantified by entropy and cluster size. (4) Semantic Consistency, evaluated using a point cloud-based FID (P-FID) (Nichol et al., 2022), VLM-assisted evaluation, and a perceptual score (PS) from 10 users.
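A sketch of how the contact-accuracy metrics could be computed, assuming binary per-part contact labels over the 17 hand parts; the exact aggregation used in the paper may differ.

```python
import numpy as np

def part_contact_metrics(pred, gt):
    """Per-sample contact IoU and F1 (P-IoU / P-F1), sketched. `pred` and `gt`
    are binary vectors over hand parts marking predicted and ground-truth
    contact; metrics would then be averaged over the test set."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    iou = inter / union if union else 1.0    # both empty counts as a match
    prec = inter / pred.sum() if pred.sum() else 0.0
    rec = inter / gt.sum() if gt.sum() else 0.0
    f1 = 2 * prec * rec / (prec + rec) if (prec + rec) else 0.0
    return iou, f1
```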
Figure 5: Visualization Results. Comparisons of different methods on samples from the WildO2 test set. Each sample consists of SSCs and an object mesh as input, with the output being an interactive hand pose. The last row shows the original authentic 2D HOI frame from internet videos.

5.2 COMPARISONS

As existing methods have not explored fine-grained controlled HOI generation, we select two representative types of baselines: (1) ContactGen (Liu et al., 2023b): an object-conditioned multi-layer CVAE using coarse hand part labels. (2) Text2HOI (Cha et al., 2024): a transformer-based conditional diffusion model guided by coarse text conditions. We remove its temporal axis and adapt it for our setting. Compared to typical grasping datasets, hand poses in WildO2 exhibit higher degrees of freedom, and both baseline methods exhibit noticeable overall hand drift. To ensure fair comparison, we also augment them with an optimization-based post-processing module to correct hand poses. Experimental results in Table 1 show that our method outperforms the baselines across most metrics. Visual results in Fig. 5 further demonstrate that our method generates more realistic HOI poses that better align with input text descriptions.

Method      | P-IoU↑ | P-F1↑ | MPVPE↓ | PD↓   | PV↓  | Ent.↑ | CS↑  | P-FID↓ | VLM↑ | PS↑
ContactGen  | 0.620  | 0.730 | 5.46   | 1.296 | 7.37 | 2.85  | 4.93 | 6.08   | 4.8  | 6.3
Text2HOI    | 0.711  | 0.795 | 4.69   | 1.239 | 4.93 | 2.85  | 5.20 | 15.72  | 6.5  | 7.5
Ours        | 0.776  | 0.844 | 2.97   | 0.932 | 2.67 | 2.93  | 5.40 | 4.13   | 7.1  | 8.8

Table 1: Quantitative comparison on hand-object interaction synthesis. We evaluate all methods on our test set using comprehensive metrics covering contact accuracy, physical plausibility, diversity, and semantic consistency. ↑ indicates higher is better, ↓ indicates lower is better.

5.3 ABLATION STUDY

To evaluate the effectiveness of our contact-guided generation of spatial relations in HOI and the coarse-to-fine text control design, we conduct ablation studies as shown in Table 2 (✗ means without). For clarity, TTA is disabled. We argue for the primacy of contact metrics, as the penetration metrics PD and PV are meaningful only after hand-object contact is established; otherwise, they can be misleading. This is starkly exemplified by the "✗ refiner" variant, which scores poorly on contact yet achieves deceptively low PD/PV values simply because the generated hand drifts away from the object, thus avoiding interaction entirely. This distinguishes our complex task from traditional grasping, where contact is facilitated by priors. The consistent degradation in contact performance upon removing any module confirms their synergistic importance. Fig. 7 offers a qualitative visualization of the step-by-step improvements afforded by our contact guidance.

Metrics      | P-IoU↑ | P-F1↑ | MPVPE↓ | PD↓   | PV↓  | P-FID↓
Ours (✗TTA)  | 0.728  | 0.805 | 3.00   | 1.093 | 4.82 | 4.84
✗ hoc.       | 0.492  | 0.611 | 4.93   | 1.330 | 5.50 | 5.41
✗ refiner    | 0.513  | 0.621 | 5.05   | 0.723 | 2.98 | 5.84
✗ Lcyc       | 0.702  | 0.787 | 3.00   | 1.100 | 5.29 | 5.79
✗ mul.       | 0.525  | 0.631 | 5.00   | 1.464 | 6.52 | 6.84
✗ TDSC       | 0.698  | 0.784 | 3.02   | 1.119 | 5.28 | 6.09
✗ TSSC       | 0.687  | 0.778 | 2.92   | 1.119 | 5.17 | 5.52
CLIP         | 0.713  | 0.798 | 2.87   | 1.136 | 4.85 | 4.84
BERT         | 0.705  | 0.790 | 2.91   | 1.182 | 4.99 | 6.08
MPNet        | 0.704  | 0.788 | 2.87   | 1.114 | 5.06 | 6.02

Table 2: Ablation of the contributions of various components, including the absence of the contact maps MO and MH (hoc.) for guiding spatial relationship generation, the multi-level network structure (mul.), the multi-level text, and alternative text encoders.

Figure 6: Breakdown of WildO2 reconstruction outcomes: 55% success, 31% geometric reconstruction failure, 9% non-interactive failure, 2% pose estimation failure, and 3% other cases.

Figure 7: Contact guidance visualization for the prompt "lifting up [card], applying [thumb, index pad] to gently lift one end of the [edge] of [card] and then release it to drop down": without the refiner, without TTA, and the refined result.
Figure 8: Generation results for a ballpoint pen. Each subfigure shows a unique and plausible hand pose generated by specifying different control signals, e.g., pushing the [body] of the pen versus lifting it by the [handle], with hand contact parts ranging from a single thumb or index pad to a thumb-index-middle combination.

5.4 DISCUSSION

Dataset Reconstruction Analysis. Our dataset reconstruction from in-the-wild images achieved an approximate success rate of 55%. The failures, detailed in Fig. 6, provide insight into the inherent challenges of this task. The primary obstacle is geometric reconstruction failure, where 3D mesh recovery is unsuccessful. Further examples are available in Appendix A.2.4.

Text-guided Model and Usage. We replace the text encoder with other common token- or sentence-level encoders (e.g., CLIP (Radford et al., 2021), BERT (Devlin et al., 2019), MPNet (Song et al., 2020)) to analyze their impact on generation quality. Results indicate that Qwen-7B offers better performance in capturing fine-grained semantic details, as detailed in Tab. 2.

Generation Under Different Control Conditions for the Same Object. Using the same object under varying hand contact regions and interaction intents, our model produces diverse and physically plausible hand poses, as visualized in Fig. 8.

6 CONCLUSION

In this paper, we addressed the limitations of grasp-centric approaches by introducing the Free-form HOI Generation task. Our work expands the synthesis paradigm beyond simple grasping to a broader, more semantically expressive spectrum of interactions. To support this, we built an automated pipeline to construct WildO2, an in-the-wild 3D dataset for daily HOIs, providing a critical resource to enable future research in this domain.

Limitations and Future Directions. Our framework currently focuses on static HOI snapshots, which inherently limits its ability to capture the temporal dynamics of an interaction process. While our pipeline offers rapid expansion, the current dataset scale also presents an area for future growth. In the future, we plan to extend our work to dynamic sequences by leveraging large-scale video datasets and incorporating 6-DoF object pose estimation, thus modeling the entire human-environment interaction process.

REFERENCES

Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, and Tianhang Zhu. Qwen technical report. arXiv preprint arXiv:2309.16609, 2023a.

Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, and Jingren Zhou. Qwen-vl: A versatile vision-language model for understanding, localization, text reading, and beyond. arXiv preprint arXiv:2308.12966, 2023b.

Samarth Brahmbhatt, Chengcheng Tang, Christopher D Twigg, Charles C Kemp, and James Hays. Contactpose: A dataset of grasps with object contact and hand pose. In European Conference on Computer Vision, pp. 361–378. Springer, 2020.
Zhe Cao, Ilija Radosavovic, Angjoo Kanazawa, and Jitendra Malik. Reconstructing hand-object interactions in the wild. ICCV, 2021.

Junuk Cha, Jihyeon Kim, Jae Shin Yoon, and Seungryul Baek. Text2hoi: Text-guided 3d motion generation for hand-object interaction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1577–1585, 2024.

Hongyi Chen, Yunchao Yao, Yufei Ye, Zhixuan Xu, Homanga Bharadhwaj, Jiashun Wang, Shubham Tulsiani, Zackory Erickson, and Jeffrey Ichnowski. Web2grasp: Learning functional grasps from web images of hand-object interactions, 2025a. URL https://arxiv.org/abs/2505.05517.

Jiayi Chen, Yubin Ke, Lin Peng, and He Wang. Dexonomy: Synthesizing all dexterous grasp types in a grasp taxonomy. Robotics: Science and Systems, 2025b.

Sammy Christen, Shreyas Hampali, Fadime Sener, Edoardo Remelli, Tomas Hodan, Eric Sauser, Shugao Ma, and Bugra Tekin. Diffh2o: Diffusion-based synthesis of hand-object interactions from textual descriptions. In SIGGRAPH Asia 2024 Conference Papers, pp. 1–11, 2024.

Marco Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. Advances in Neural Information Processing Systems, 26, 2013.

Dima Damen, Hazel Doughty, Giovanni Maria Farinella, Sanja Fidler, Antonino Furnari, Evangelos Kazakos, Davide Moltisanti, Jonathan Munro, Toby Perrett, Will Price, and Michael Wray. The epic-kitchens dataset: Collection, challenges and baselines. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2020.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186, 2019.

Johan Edstedt, Qiyu Sun, Georg Bökman, Mårten Wadenbäck, and Michael Felsberg. RoMa: Robust dense feature matching. IEEE Conference on Computer Vision and Pattern Recognition, 2024.

David Eigen, Christian Puhrsch, and Rob Fergus. Depth map prediction from a single image using a multi-scale deep network. Advances in Neural Information Processing Systems, 27, 2014.

Zicong Fan, Maria Parelli, Maria Eleni Kadoglou, Xu Chen, Muhammed Kocabas, Michael J Black, and Otmar Hilliges. Hold: Category-agnostic 3d reconstruction of interacting hands and objects from video. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 494–504, 2024.

Hao-Shu Fang, Chenxi Wang, Minghao Gou, and Cewu Lu. Graspnet-1billion: A large-scale benchmark for general object grasping. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11444–11453, 2020.

Thomas Feix, Javier Romero, Heinz-Bodo Schmiedmayer, Aaron M Dollar, and Danica Kragic. The grasp taxonomy of human grasp types. IEEE Transactions on Human-Machine Systems, 46(1):66–77, 2015.

Rao Fu, Dingxi Zhang, Alex Jiang, Wanjia Fu, Austin Funk, Daniel Ritchie, and Srinath Sridhar. Gigahands: A massive annotated dataset of bimanual hand activities. In Proceedings of the Computer Vision and Pattern Recognition Conference, pp. 17461–17474, 2025.

Raghav Goyal, Samira Ebrahimi Kahou, Vincent Michalski, Joanna Materzynska, Susanne Westphal, Heuna Kim, Valentin Haenel, Ingo Fruend, Peter Yianilos, Moritz Mueller-Freitag, et al. The "something something" video database for learning and evaluating visual common sense. In Proceedings of the IEEE International Conference on Computer Vision, pp. 5842–5850, 2017.
Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, et al. Ego4d: Around the world in 3,000 hours of egocentric video. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18995–19012, 2022.

Shreyas Hampali, Mahdi Rad, Markus Oberweger, and Vincent Lepetit. Honnotate: A method for 3d annotation of hand and object poses. In CVPR, 2020.

Yana Hasson, Gül Varol, Dimitris Tzionas, Igor Kalevatykh, Michael J. Black, Ivan Laptev, and Cordelia Schmid. Learning joint reconstruction of hands and manipulated objects. In CVPR, 2019.

Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840–6851, 2020.

Yicong Hong, Kai Zhang, Jiuxiang Gu, Sai Bi, Yang Zhou, Difan Liu, Feng Liu, Kalyan Sunkavalli, Trung Bui, and Hao Tan. Lrm: Large reconstruction model for single image to 3d, 2024. URL https://arxiv.org/abs/2311.04400.

Mingzhen Huang, Fu-Jen Chu, Bugra Tekin, Kevin J Liang, Haoyu Ma, Weiyao Wang, Xingyu Chen, Pierre Gleize, Hongfei Xue, Siwei Lyu, Kris Kitani, Matt Feiszli, and Hao Tang. Hoigpt: Learning long sequence hand-object interaction with language models. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, USA, 2025.

Hanwen Jiang, Shaowei Liu, Jiashun Wang, and Xiaolong Wang. Hand-object contact consistency reasoning for human grasps generation. In Proceedings of the International Conference on Computer Vision, 2021.

Daniel Sungho Jung and Kyoung Mu Lee. Learning dense hand contact estimation from imbalanced data. arXiv preprint arXiv:2505.11152, 2025.

Korrawe Karunratanakul, Jinlong Yang, Yan Zhang, Michael Black, Krikamol Muandet, and Siyu Tang. Grasping field: Learning implicit representations for human grasps, 2020. URL https://arxiv.org/abs/2008.04451.

Kailin Li, Jingbo Wang, Lixin Yang, Cewu Lu, and Bo Dai. Semgrasp: Semantic grasp generation via language aligned discretization. In European Conference on Computer Vision, pp. 109–127. Springer, 2024.

Muchen Li, Sammy Christen, Chengde Wan, Yujun Cai, Renjie Liao, Leonid Sigal, and Shugao Ma. Latenthoi: On the generalizable hand object motion generation with latent hand diffusion. In Proceedings of the Computer Vision and Pattern Recognition Conference, pp. 17416–17425, 2025.

Ruoshi Liu, Rundi Wu, Basile Van Hoorick, Pavel Tokmakov, Sergey Zakharov, and Carl Vondrick. Zero-1-to-3: Zero-shot one image to 3d object. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9298–9309, 2023a.

Shaowei Liu, Yang Zhou, Jimei Yang, Saurabh Gupta, and Shenlong Wang. Contactgen: Generative contact modeling for grasp generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 20609–20620, 2023b.

Yumeng Liu, Xiaoxiao Long, Zemin Yang, Yuan Liu, Marc Habermann, Christian Theobalt, Yuexin Ma, and Wenping Wang. Easyhoi: Unleashing the power of large models for reconstructing hand-object interactions in the wild. arXiv preprint arXiv:2411.14280, 2024.

Yunze Liu, Yun Liu, Che Jiang, Kangbo Lyu, Weikang Wan, Hao Shen, Boqiang Liang, Zhoujie Fu, He Wang, and Li Yi. Hoi4d: A 4d egocentric dataset for category-level human-object interaction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 21013–21022, June 2022.
Joanna Materzynska, Tete Xiao, Roei Herzig, Huijuan Xu, Xiaolong Wang, and Trevor Darrell. Something-else: Compositional action recognition with spatial-temporal interaction networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1049–1059, 2020.

Van-Duc Nguyen. Constructing force-closure grasps. The International Journal of Robotics Research, 7(3):3–16, 1988.

Alex Nichol, Heewoo Jun, Prafulla Dhariwal, Pamela Mishkin, and Mark Chen. Point-e: A system for generating 3d point clouds from complex prompts, 2022. URL https://arxiv.org/abs/2212.08751.

Austin Patel, Andrew Wang, Ilija Radosavovic, and Jitendra Malik. Learning to imitate object interactions from internet videos, 2022. URL https://arxiv.org/abs/2211.13225.

Georgios Pavlakos, Dandan Shan, Ilija Radosavovic, Angjoo Kanazawa, David Fouhey, and Jitendra Malik. Reconstructing hands in 3D with transformers. In CVPR, 2024.

Ethan Perez, Florian Strub, Harm De Vries, Vincent Dumoulin, and Aaron Courville. Film: Visual reasoning with a general conditioning layer. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.

Ben Poole, Ajay Jain, Jonathan T. Barron, and Ben Mildenhall. Dreamfusion: Text-to-3d using 2d diffusion. arXiv, 2022.

Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. arXiv preprint arXiv:1612.00593, 2016.

Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. CoRR, abs/2103.00020, 2021. URL https://arxiv.org/abs/2103.00020.

Nikhila Ravi, Valentin Gabeur, Yuan-Ting Hu, Ronghang Hu, Chaitanya Ryali, Tengyu Ma, Haitham Khedr, Roman Rädle, Chloe Rolland, Laura Gustafson, Eric Mintun, Junting Pan, Kalyan Vasudev Alwala, Nicolas Carion, Chao-Yuan Wu, Ross Girshick, Piotr Dollár, and Christoph Feichtenhofer. Sam 2: Segment anything in images and videos. arXiv preprint arXiv:2408.00714, 2024. URL https://arxiv.org/abs/2408.00714.

Javier Romero, Dimitrios Tzionas, and Michael J. Black. Embodied hands: Modeling and capturing hands and bodies together. ACM Transactions on Graphics (Proc. SIGGRAPH Asia), November 2017. URL http://doi.acm.org/10.1145/3130800.3130883.

Dandan Shan, Jiaqi Geng, Michelle Shu, and David Fouhey. Understanding human hands in contact at internet scale. In CVPR, 2020.

Yawen Shao, Wei Zhai, Yuhang Yang, Hongchen Luo, Yang Cao, and Zheng-Jun Zha. Great: Geometry-intention collaborative inference for open-vocabulary 3d object affordance grounding. In Proceedings of the Computer Vision and Pattern Recognition Conference, pp. 17326–17336, 2025.

Kihyuk Sohn, Honglak Lee, and Xinchen Yan. Learning structured output representation using deep conditional generative models. Advances in Neural Information Processing Systems, 28, 2015.

Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. Mpnet: Masked and permuted pre-training for language understanding. arXiv preprint arXiv:2004.09297, 2020.

Omid Taheri, Nima Ghorbani, Michael J Black, and Dimitrios Tzionas. Grab: A dataset of whole-body human grasping of objects. In European Conference on Computer Vision, pp. 581–600. Springer, 2020.
Chao Tang, Dehao Huang, Wenqi Ge, Weiyu Liu, and Hong Zhang. Graspgpt: Leveraging semantic knowledge from a large language model for task-oriented grasping. IEEE Robotics and Automation Letters, 8(11):7551–7558, 2023.

Yongqi Tian, Xueyu Sun, Haoyuan He, Linji Hao, Ning Ding, and Caigui Jiang. Funhoi: Annotation-free 3d hand-object interaction generation via functional text guidance, 2025. URL https://arxiv.org/abs/2502.20805.

Dylan Turpin, Liquan Wang, Eric Heiden, Yun-Chun Chen, Miles Macklin, Stavros Tsogkas, Sven Dickinson, and Animesh Garg. Grasp'd: Differentiable contact-rich grasp synthesis for multi-fingered hands. In European Conference on Computer Vision, pp. 201–221. Springer, 2022.

Yi-Lin Wei, Jian-Jian Jiang, Chengyi Xing, Xian-Tuo Tan, Xiao-Ming Wu, Hao Li, Mark Cutkosky, and Wei-Shi Zheng. Grasp as you say: Language-guided dexterous grasp generation. Advances in Neural Information Processing Systems, 37:46881–46907, 2024.

Boran Wen, Dingbang Huang, Zichen Zhang, Jiahong Zhou, Jianbin Deng, Jingyu Gong, Yulong Chen, Lizhuang Ma, and Yong-Lu Li. Reconstructing in-the-wild open-vocabulary human-object interactions, 2025. URL https://arxiv.org/abs/2503.15898.

Jane Wu, Georgios Pavlakos, Georgia Gkioxari, and Jitendra Malik. Reconstructing hand-held objects in 3d. arXiv preprint arXiv:2404.06507, 2024.

Jiale Xu, Weihao Cheng, Yiming Gao, Xintao Wang, Shenghua Gao, and Ying Shan. Instantmesh: Efficient 3d mesh generation from a single image with sparse-view large reconstruction models. arXiv preprint arXiv:2404.07191, 2024.

Lixin Yang, Xinyu Zhan, Kailin Li, Wenqiang Xu, Jiefeng Li, and Cewu Lu. CPF: Learning a contact potential field to model the hand-object interaction. In ICCV, 2021.

Lixin Yang, Kailin Li, Xinyu Zhan, Fei Wu, Anran Xu, Liu Liu, and Cewu Lu. Oakink: A large-scale knowledge repository for understanding hand-object interaction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 20953–20962, 2022.

Yuhang Yang, Wei Zhai, Hongchen Luo, Yang Cao, Jiebo Luo, and Zheng-Jun Zha. Grounding 3d object affordance from 2d interactions in images. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 10905–10915, October 2023.

Yuhang Yang, Wei Zhai, Hongchen Luo, Yang Cao, and Zheng-Jun Zha. Lemon: Learning 3d human-object interaction relation from 2d images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16284–16295, 2024a.

Yuhang Yang, Wei Zhai, Chengfeng Wang, Chengjun Yu, Yang Cao, and Zheng-Jun Zha. Egochoir: Capturing 3d human-object interaction regions from egocentric views. arXiv preprint arXiv:2405.13659, 2024b.

Yufei Ye, Abhinav Gupta, and Shubham Tulsiani. What's in your hands? 3d reconstruction of generic objects in hands. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3895–3905, 2022.

Yufei Ye, Abhinav Gupta, Kris Kitani, and Shubham Tulsiani. G-hop: Generative hand-object prior for interaction reconstruction and grasp synthesis. In CVPR, 2024.

Chengjun Yu, Wei Zhai, Yuhang Yang, Yang Cao, and Zheng-Jun Zha. Hero: Human reaction generation from videos. arXiv preprint arXiv:2503.08270, 2025.

Xinyu Zhan, Lixin Yang, Yifei Zhao, Kangrui Mao, Hanlin Xu, Zenan Lin, Kailin Li, and Cewu Lu. Oakink2: A dataset of bimanual hands-object manipulation in complex task completion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 445–456, 2024.
Zhenhao Zhang, Ye Shi, Lingxiao Yang, Suting Ni, Qi Ye, and Jingya Wang. Openhoi: Open-world hand-object interaction synthesis with multimodal large language model. arXiv preprint arXiv:2505.18947, 2025a.

Zhongqun Zhang, Hengfei Wang, Ziwei Yu, Yihua Cheng, Angela Yao, and Hyung Jin Chang. Nl2contact: Natural language guided 3d hand-object contact modeling with diffusion model. In European Conference on Computer Vision, pp. 284–300. Springer, 2024.

Zichen Zhang, Hongchen Luo, Wei Zhai, et al. Pear: Phrase-based hand-object interaction anticipation. Science China Information Sciences, 68(150209), 2025b. doi: 10.1007/s11432-024-4405-4.

Ying Zheng, Lei Yao, Yuejiao Su, Yi Zhang, Yi Wang, Sicheng Zhao, Yiyi Zhang, and Lap-Pui Chau. A survey of embodied learning for object-centric robotic manipulation. Machine Intelligence Research, pp. 1–39, 2025.

Yu Zheng and Wen-Han Qian. Coping with the grasping uncertainties in force-closure analysis. The International Journal of Robotics Research, 24(4):311–327, 2005.

Keyang Zhou, Bharat Lal Bhatnagar, Jan Eric Lenssen, and Gerard Pons-Moll. Toch: Spatio-temporal object-to-hand correspondence for motion refinement. In European Conference on Computer Vision (ECCV). Springer, October 2022.

Keyang Zhou, Bharat Lal Bhatnagar, Jan Eric Lenssen, and Gerard Pons-Moll. Gears: Local geometry-aware hand-object interaction synthesis. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2024.

Figure 9: Visualization results for more WildO2 test set samples. For each sample, the leftmost image is the hand-object interaction frame (Ihoi) extracted from Internet videos; the middle image shows our reconstruction of Ihoi, which serves as the ground truth for WildO2; and the rightmost image illustrates the free-form HOI generated by our method, TOUCH, conditioned on object meshes and DSCs. The bottom row contains the corresponding DSCs.

A APPENDIX

A.1 MORE EXPERIMENT RESULTS

We performed a per-category statistical analysis of the metrics for the most frequent verb categories in the test set, as detailed in Table 3. The results reveal significant variations across different verb categories, reflecting the distinct characteristics of each action. For instance, the Poke category, which typically involves a singular or highly localized contact area, consequently yields higher scores on contact-related metrics such as P-IoU and P-F1. However, this action imposes fewer constraints on the overall hand orientation and the configuration of non-contacting fingers, leading to a higher MPVPE for the generated hand poses. Conversely, the Lift category, which closely resembles a simple grasping motion, is generally less complex. As a result, it demonstrates strong performance across most metrics, which corroborates our earlier assertion regarding the relative simplicity of grasping operations. In general, for a given verb, greater diversity in the associated actions correlates with increased learning difficulty for the model.
       | Push  | Poke  | Lift  | Pick  | Move  | Turn  | Pull  | Put   | Open  | Roll  | Spin
P-IoU  | 0.762 | 0.811 | 0.805 | 0.752 | 0.773 | 0.725 | 0.830 | 0.753 | 0.778 | 0.739 | 0.763
P-F1   | 0.833 | 0.852 | 0.872 | 0.834 | 0.844 | 0.821 | 0.891 | 0.828 | 0.866 | 0.819 | 0.834
MPVPE  | 3.098 | 3.458 | 2.724 | 2.511 | 2.618 | 3.014 | 2.301 | 3.034 | 2.621 | 2.653 | 3.844
PD     | 1.135 | 0.391 | 0.568 | 1.127 | 0.811 | 1.462 | 1.214 | 1.105 | 1.206 | 1.294 | 0.892
PV     | 3.025 | 0.922 | 1.716 | 2.814 | 2.842 | 5.718 | 3.440 | 3.656 | 4.081 | 1.652 | 3.052

Table 3: Per-category performance metrics for the most frequent verb categories in the test set.

Additionally, we evaluate our method on the OakInk dataset. Using its CapGrasp (Li et al., 2024) annotations and computed part contact information, we construct fine-grained text conditions through summarization via Qwen (Bai et al., 2023a) and verification.

Method     | P-IoU↑ | P-F1↑ | MPVPE↓ | PD↓   | PV↓   | Ent.↑ | CS↑  | P-FID↓
ContactGen | 0.654  | 0.769 | 6.69   | 0.790 | 16.59 | 2.93  | 5.33 | 11.25
Text2HOI   | 0.778  | 0.860 | 5.49   | 0.807 | 7.05  | 2.84  | 3.48 | 4.95
Ours       | 0.812  | 0.882 | 4.49   | 0.939 | 5.89  | 2.92  | 3.50 | 3.21

Table 4: Quantitative comparison on hand-object interaction synthesis on the OakInk-Shape dataset. ↑ indicates higher is better, ↓ indicates lower is better.

A.2 DETAILS OF WILDO2 ACQUISITION

A.2.1 RECONSTRUCTABLE CLIP SELECTION FROM WILD VIDEOS

Our data generation process commences with the Something-Something V2 (SSv2) dataset (Goyal et al., 2017), a large-scale collection of over 220,847 videos. We chose this dataset for several key reasons. First, its vast scale and diversity, resulting from a crowdsourced effort where contributors enact specific verb-noun prompts, provide a rich variety of goal-oriented hand-object interactions. Second, many videos feature close-up shots of the hand-held object, which is advantageous for 3D reconstruction. Critically, the associated Something-Else dataset (Materzynska et al., 2020) furnishes bounding box annotations for hands and objects in approximately 180,049 of these clips, offering an invaluable starting point. However, a primary challenge of SSv2 is its low resolution (typically 240 pixels in height), which necessitates a rigorous filtering pipeline to isolate clips suitable for high-fidelity reconstruction.

To ensure the viability of our reconstruction pipeline, we implemented a multi-stage filtering process to distill the initial 180k annotated clips into a high-quality subset. The screening criteria were designed to simplify the interaction scenario (e.g., single hand, single object) and guarantee sufficient visual quality for reliable 3D modeling. This process effectively removes clips that are ambiguous, occluded, or otherwise intractable. The detailed steps and their impact on the dataset size are summarized in Table 5.

Table 5: The multi-stage filtering pipeline for selecting reconstructable clips from the Something-Something V2 dataset. The process starts with clips annotated by Something-Else and progressively refines the selection based on interaction complexity and visual quality.

Step | Criterion                 | Description                                                                | Clips Remaining
1    | Initial Annotated Set     | Clips from SSv2 with hand and object bounding box annotations.             | 180,049
2    | Single-Hand Interaction   | Exclude clips involving multiple hands to simplify interaction modeling.   | 137,578
3    | Single-Object Interaction | Retain only clips where a single object is being manipulated.              | 82,728
4    | Minimum Object Size       | Discard clips where the object is too small to be reliably reconstructed.  | 57,384
5    | Object Visibility         | Exclude clips where the object is too large or moves out of frame.         | 12,888
6    | Frame Stability           | Ensure stable pre-interaction and interaction frames can be identified.    | 8,551
A.2.2 ACQUISITION OF O2HOI FRAME PAIRS

This section details the automated procedure for extracting an "Object-only to Hand-Object Interaction" (O2HOI) frame pair, denoted as (Iref, Ihoi), from each video clip.

1. Mask Generation and Frame Selection Preliminaries. For each frame in a given video sequence, we generate object masks (MO) and hand masks (MH) using SAM2. The interaction period, THOI = [ifirst, ilast], is defined as the contiguous block of frames where the Intersection over Union (IoU) between the object mask and the hand mask, each expanded by several pixels, exceeds a predefined threshold. The reference frame, Iref, is selected as the object-only frame temporally closest to this interaction period.

2. Optimal HOI Frame Selection. The primary objective is to select an index ihoi from THOI such that the object's pose in frame Ihoi has undergone minimal change relative to its pose in Iref. We achieve this by estimating the 2D affine transformation between the object in the reference frame and in every candidate frame within THOI. This estimation utilizes the RoMa dense feature matcher (Edstedt et al., 2024). The detailed steps for comparing a candidate frame It (where t ∈ THOI) to the reference frame Iref are as follows (see also the sketch after this list):

• Feature Matching: For each pair (Iref, It), we extract robust keypoint correspondences (kref, kt) within their respective object masks, M^O_ref and M^O_t.

• Transformation Estimation: The keypoints are used to robustly estimate an affine transformation matrix At using RANSAC.

• Rotation and IoU Calculation: The rotation angle θt and the IoU metric are computed as follows:

RS = At[0:2, 0:2],   U, S, V⊤ = SVD(RS),   R = UV⊤   (8)

θt = arctan2(R21, R11) · 180/π,   IoUt = |warpAffine(M^O_ref, At) ∩ Mt| / |warpAffine(M^O_ref, At) ∪ Mt|   (9)

• Interaction Frame Selection: with IoU gradient ∆IoUt, the selected HOI frame index ihoi is determined by:

ihoi = argmin_{t ∈ THOI} |θt|,   if min_{t ∈ THOI} |θt| > MAX_MIN_ANGLE;
ihoi = argmin_{t ∈ Sstable} |t − imax|,   if max_{t ∈ THOI} |θt| < MIN_MAX_ANGLE;
ihoi = argmin_{t ∈ THOI, |θt| < MAX_MIN_ANGLE} |t − imax|,   otherwise,   (10)

where Sstable = {t ∈ THOI | |∆IoUt| < DT_IOU_THRES}, MAX_MIN_ANGLE = 1, and MIN_MAX_ANGLE = 5.

• Inpainting Mask Generation: In summary, we obtain the O2HOI frame pair (iref, ihoi) and the corresponding inpainting mask for subsequent processing:

Minpaint = warpAffine(M^O_ref, Ahoi)   (11)

This procedure ensures accurate spatial alignment between the object-only and interaction frames, facilitating efficient and reliable inpainting mask generation for downstream tasks.
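As referenced above, the per-frame statistics of Eqs. (8)-(9) can be sketched as follows, assuming OpenCV and a 2x3 affine matrix estimated by RANSAC; the selection rule of Eq. (10) is then applied over the resulting values.

```python
import numpy as np
import cv2

def rotation_and_iou(A, mask_ref, mask_t):
    """Compute the in-plane rotation angle encoded by the affine matrix A
    (Eq. 8) and the IoU between the warped reference object mask and the
    current frame's object mask (Eq. 9). A: (2, 3); masks: (H, W) binary."""
    A = np.asarray(A, np.float64)
    # Polar decomposition of the linear part isolates the rotation (Eq. 8).
    U, _, Vt = np.linalg.svd(A[:2, :2])
    R = U @ Vt
    theta = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    h, w = mask_t.shape
    warped = cv2.warpAffine(mask_ref.astype(np.uint8), A, (w, h)) > 0
    inter = np.logical_and(warped, mask_t > 0).sum()
    union = np.logical_or(warped, mask_t > 0).sum()
    return theta, inter / max(union, 1)
```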
A.2.3 ESTIMATION OF OBJECT ELEVATIONS

The backbone of InstantMesh (Xu et al., 2024), Zero123 (Liu et al., 2023a), generates novel views of objects; however, the predicted elevation angles may not precisely align with the control signals. To address this, we employ a rendering-based matching strategy to estimate the optimal elevation angle for the reconstructed 3D object mesh. Specifically, we render the mesh from candidate elevations and select the angle whose 2D projection best matches the HOI frame, providing a reliable initialization for subsequent optimization. The search is performed in two stages: (a) a coarse search over the range [−40°, 60°] with 11 sampled angles, and (b) a fine search within [best − 10°, best + 10°] using 21 samples. The selection criterion is the point-wise mean squared error (MSE) between the rendered projection and the aligned HOI frame:

elevation = argmin_{θ ∈ Θ} (1/N) Σ_{i=1}^{N} (Iref[i] − Irendered(θ)[i])²   (12)

We employ the CLIP-similarity metric to automatically assess the reconstruction quality of image-to-3D methods. This enables efficient and coarse filtering of clips, ensuring that only those with high-quality and reliable 3D reconstructions are selected for subsequent processing and annotation.

A.2.4 RECONSTRUCTION ANALYSIS

The primary obstacle is Geometric Reconstruction Failure, where 3D mesh recovery is unsuccessful. Another category, Non-Interactive Cases, is defined for instances where the 2D HOI guide image itself showed no discernible contact. Explicit failures in Pose Estimation account for 2.1%. Finally, the Other Cases category contains various failures that bypassed initial rule-based screening; these include some of the most complex scenarios, such as interactions involving deformable or transparent objects, which fall outside the scope of our current reconstruction method. Details and examples of each failure type are provided in Figure 10.

Figure 10: Examples of reconstruction failures. (Left) A screenshot of our annotation UI showing a "Non-Interactive" failure, where the reconstructed hand and object do not interact. (Right Top) Examples of "Geometric Reconstruction Failure" for the object and the hand. This was the most common failure category, with object geometry being more challenging to reconstruct than the hand's. (Right Middle) Examples of "Pose Estimation Failure," where the 6DoF pose of the hand or object was incorrectly recovered. (Right Bottom) Examples of "Other Cases." The first example shows a deformable object, and the second shows an object that is part of a larger structure (a drawer).

A.2.5 CONTACT MAP COMPUTATION

Given an object point cloud O ∈ R^{No×3} and a hand point cloud H ∈ R^{Nh×3}, we propose a robust four-stage contact map computation algorithm (a sketch follows below):

Stage 1: Bidirectional Nearest Neighbor Voting. For each point, we compute the nearest neighbor in the opposite set and accumulate votes:

vote_o(i) = Σ_j 1[NN(h_j) = o_i],   vote_h(j) = Σ_i 1[NN(o_i) = h_j]

Stage 2: Candidate Selection. High-frequency candidate points are selected based on the upper quantile α of the vote distribution:

C_o = {i | vote_o(i) ≥ Q_{1−α}(vote_o)}

Stage 3: Distance Validation. Core contact points are further filtered by requiring their hand-object distance d_i to be below both a quantile threshold β and an absolute threshold ε:

S_o = {i ∈ C_o | d_i < min(Q_β(d), ε)}

Stage 4: Region Expansion. The contact region is expanded by including points within a radius γ · d̄ of any core contact point, where d̄ is the mean k-nearest neighbor distance:

M_o = {i | min_{s ∈ S_o} ||o_i − o_s||₂ ≤ γ · d̄}
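A minimal sketch of the four stages for the object side (the hand side is symmetric), assuming SciPy k-d trees; the threshold values are illustrative, not the dataset's actual settings.

```python
import numpy as np
from scipy.spatial import cKDTree

def contact_map(obj_pts, hand_pts, alpha=0.05, beta=0.1, eps=0.01, gamma=2.0, k=8):
    """Object-side contact map following the four stages of A.2.5.
    obj_pts: (No, 3), hand_pts: (Nh, 3) -> boolean mask of shape (No,)."""
    tree_h, tree_o = cKDTree(hand_pts), cKDTree(obj_pts)
    # Stage 1: bidirectional nearest-neighbor voting.
    d_o, _ = tree_h.query(obj_pts)              # object -> hand distances
    _, nn_ho = tree_o.query(hand_pts)           # hand -> object nearest indices
    votes = np.bincount(nn_ho, minlength=len(obj_pts))
    # Stage 2: keep high-vote candidates (upper alpha-quantile of votes).
    cand = votes >= np.quantile(votes, 1 - alpha)
    # Stage 3: validate by quantile and absolute distance thresholds.
    core = cand & (d_o < min(np.quantile(d_o, beta), eps))
    if not core.any():
        return core
    # Stage 4: expand around core points within gamma * mean kNN spacing.
    d_knn, _ = tree_o.query(obj_pts, k=k + 1)   # column 0 is the self-distance
    mean_spacing = d_knn[:, 1:].mean()
    d_core, _ = cKDTree(obj_pts[core]).query(obj_pts)
    return d_core <= gamma * mean_spacing
```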
A.2.4 RECONSTRUCTION ANALYSIS

The primary obstacle is Geometric Reconstruction Failure, where 3D mesh recovery is unsuccessful. Another category, Non-Interactive Cases, is defined for instances where the 2D HOI guide image itself showed no discernible contact. Explicit failures in Pose Estimation account for 2.1%. Finally, the Other Cases category contains various failures that bypassed initial rule-based screening; these include some of the most complex scenarios, such as interactions involving deformable or transparent objects, which fall outside the scope of our current reconstruction method. Details and examples of each failure type are provided in Figure 10.

Figure 10: Examples of reconstruction failures. (Left) A screenshot of our annotation UI showing a "Non-Interactive" failure, where the reconstructed hand and object do not interact. (Right Top) Examples of "Geometric Reconstruction Failure" for the object and the hand. This was the most common failure category, with object geometry being more challenging to reconstruct than the hand's. (Right Middle) Examples of "Pose Estimation Failure," where the 6DoF pose of the hand or object was incorrectly recovered. (Right Bottom) Examples of "Other Cases." The first example shows a deformable object, and the second shows an object that is part of a larger structure (a drawer).

A.2.5 CONTACT MAP COMPUTATION

Given an object point cloud $O \in \mathbb{R}^{N_o \times 3}$ and a hand point cloud $H \in \mathbb{R}^{N_h \times 3}$, we propose a robust four-stage contact map computation algorithm:

Stage 1: Bidirectional Nearest Neighbor Voting. For each point, we compute the nearest neighbor in the opposite set and accumulate votes:

$\mathrm{vote}_o(i) = \sum_j \mathbb{1}[\mathrm{NN}(h_j) = o_i], \qquad \mathrm{vote}_h(j) = \sum_i \mathbb{1}[\mathrm{NN}(o_i) = h_j]$

Stage 2: Candidate Selection. High-frequency candidate points are selected based on the upper quantile α of the vote distribution:

$C_o = \{i \mid \mathrm{vote}_o(i) \geq Q_{1-\alpha}(\mathrm{vote}_o)\}$

Stage 3: Distance Validation. Core contact points are further filtered by requiring their hand-object distance $d_i$ to be below both a quantile threshold β and an absolute threshold ε:

$S_o = \{i \in C_o \mid d_i < \min(Q_\beta(d), \epsilon)\}$

Stage 4: Region Expansion. The contact region is expanded by including points within a radius $\gamma \cdot \bar{d}$ of any core contact point, where $\bar{d}$ is the mean k-nearest neighbor distance:

$M_o = \left\{ i \;\middle|\; \min_{s \in S_o} \|o_i - o_s\|_2 \leq \gamma \cdot \bar{d} \right\}$
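The four stages translate directly into a short NumPy/SciPy routine. The sketch below computes the object-side map M_o only; the values of α, β, ε, γ and k are illustrative assumptions, as the text leaves them unspecified.

```python
import numpy as np
from scipy.spatial import cKDTree

def contact_map(obj, hand, alpha=0.05, beta=0.2, eps=0.01, gamma=2.0, k=8):
    """Four-stage object contact map; all thresholds are assumed values."""
    obj_tree, hand_tree = cKDTree(obj), cKDTree(hand)
    # Stage 1: bidirectional nearest-neighbor voting (object side)
    _, nn_of_hand = obj_tree.query(hand)           # NN(h_j) among object points
    votes = np.bincount(nn_of_hand, minlength=len(obj))
    # Stage 2: keep the upper alpha-quantile of the vote distribution
    cand = np.where(votes >= np.quantile(votes, 1 - alpha))[0]
    # Stage 3: validate candidates by hand-object distance
    d, _ = hand_tree.query(obj[cand])
    core = cand[d < min(np.quantile(d, beta), eps)]
    if core.size == 0:
        return np.zeros(len(obj), dtype=bool)
    # Stage 4: expand by gamma * mean k-NN spacing of the object cloud
    knn_d, _ = obj_tree.query(obj, k=k + 1)        # column 0 is the point itself
    d_bar = knn_d[:, 1:].mean()
    d_core, _ = cKDTree(obj[core]).query(obj)
    return d_core <= gamma * d_bar
```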
A.2.6 WILDO2 DSCS PROMPT

[Structured Output Protocol] As a hand-object interaction analyzer, generate JSON strictly like this:
{
  "obj_category": "cylindrical",
  "general": "grip [obj_category]",
  "physical": "Apply [hand_contact] to establish stable three-point contact with [obj_contact] of [obj_category] while other fingers form loose sphere.",
  "hand_contact": ["thumb pad", "pinky nail", "palm"],
  "obj_contact": "body"
}
[Key Requirements]
1. Mandatory placeholders: [hand_contact], [obj_contact], [obj_category]
2. Physical Field Rules:
- Use ONLY [hand_contact] placeholder
- ABSOLUTELY NO explicit contact point enumeration
- Must describe contact areas, interaction methods, and force levels considering object functionality
3. hand_contact is where interaction is most likely to occur. Input hand_contact may contain sensing errors from point cloud analysis. Valid hand_contact options (17 total): ['thumb pad', 'index pad', 'middle pad', 'ring pad', 'pinky pad', 'thumb nail', 'index nail', 'middle nail', 'ring nail', 'pinky nail', 'thumb knuckle', 'index knuckle', 'middle knuckle', 'ring knuckle', 'pinky knuckle', 'palm', 'back of palm']
4. ABSOLUTELY NO PREAMBLE/FOOTNOTES, only RAW JSON output
Current Input Interaction Description: "hand_contact": xxx, "general_intention": xxx.

A.2.7 DATASET ANALYSIS

For data integration purposes, Table 6 outlines the mapping used to align labels from the Something-Something dataset with our defined action groups. A comprehensive comparison with existing hand-object interaction datasets is presented in Table 7. We further analyze the statistical properties of our collected objects, including the scale distribution and its relationship with action categories, as visualized in Figure 11. Figure 12 presents the word cloud of object categories in our dataset.

Table 6: The correspondence between the class labels of the Something-Something dataset and the action groups defined in this study.

Action Group | Class Labels
Picking | Pretending to pick something up; Picking something up
Pushing | Pushing something from right to left; Pushing something from left to right; Pushing something so that it almost falls off but doesn't; Pushing something so that it slightly moves; Pushing something so it spins; Pushing something so that it falls off the table; Pushing something off of something; Pushing something onto something; Pushing something with something; Something colliding with something and both are being deflected
Poking | Poking something so that it falls over; Poking a stack of something so the stack collapses; Poking something so it slightly moves; Poking something so lightly that it doesn't or almost doesn't move; Pretending to poke something; Poking something so that it spins around; Poking a stack of something without the stack collapsing; Poking a hole into something soft; Poking a hole into some substance
Lifting | Lifting up one end of something without letting it drop down; Lifting something up completely without letting it drop down; Lifting up one end of something, then letting it drop down; Lifting something up completely, then letting it drop down; Lifting something with something on it; Lifting a surface with something on it but not enough for it to slide down
Moving | Moving something down; Moving something towards the camera; Moving something away from the camera; Moving something across a surface without it falling down; Moving something up; Moving something across a surface until it falls down; Moving part of something
Squeezing | Pretending to squeeze something; Squeezing something
Taking | Taking one of many similar things on the table; Pretending to take something from somewhere; Pretending to take something out of something; Taking something from somewhere
Touching | Touching (without moving) part of something
Opening | Opening something; Pretending to open something without actually opening it
Laying | Laying something on the table on its side, not upright
Turning | Pretending to turn something upside down; Turning something upside down
Rolling | Rolling something on a flat surface; Letting something roll along a flat surface; Letting something roll down a slanted surface
Spinning | Spinning something that quickly stops spinning; Spinning something so it continues spinning
Pulling | Pulling something from right to left; Pulling something from left to right
Throwing | Throwing something; Throwing something onto a surface; Throwing something in the air and letting it fall
Putting | Putting something on a surface; Putting something upright on the table; Putting something on a flat surface without letting it roll; Putting something onto a slanted surface but it doesn't glide down; Putting something into something; Putting something that can't roll onto a slanted surface, so it slides down; Pretending to put something behind something; Putting something that cannot actually stand upright upright on the table, so it falls on its side; Putting something similar to other things that are already on the table
Falling | Something falling like a feather or paper; Something falling like a rock
Bending | Bending something so that it deforms
Tipping | Tipping something over; Tipping something with something in it over, so something in it falls out
Closing | Pretending to close something without actually closing it; Closing something
Twisting | Twisting something; Pretending or trying and failing to twist something
Holding | Holding something
Piling | Piling something up
Showing | Showing that something is empty; Showing something to the camera
Sprinkling | Pretending to sprinkle air onto something
Tearing | Tearing something just a little bit; Pretending to be tearing something that is not tearable; Tearing something into two pieces
Folding | Folding something
Unfolding | Unfolding something
Attaching | Attaching something to something
Tilting | Tilting something with something on it slightly so it doesn't fall down; Tilting something with something on it until it falls off
Stacking | Stacking number of something
Covering | Covering something with something
Spreading | Pretending to spread air onto something
Dropping | Dropping something onto something
Wiping | Pretending or failing to wipe something off of something

Figure 11: The left figure shows KDE curves of object scales in different datasets (HOI4D, OakInk, WildO2). Our dataset, collected without object category bias, shows a more uniform scale distribution, while the right figure depicts the object size distribution across different verb categories. Actions like "squeezing" typically involve smaller objects, whereas "folding" tends to be associated with larger ones.

Dataset | Environment | Authentic | Obj Cate. | Obj Inst. | Clips/Frames | Act. Cate. | Contact
HO-3D | lab | ✓ | 10 | 10 | 27/78k | none | ✗
obman | syn. | ✗ | 8 | 2.7k | -/154K | none | ✗
HOI4D | lab | ✓ | 16 | 800 | 4K/2.4M | 54 | ✗
Oakink* | lab/syn. | ✓/✗ | 34/34 | 100/1.7k | 793/230k | 5 | ✗
ContactPose | lab | ✓ | 25 | 25 | 2.3k/3M | 25 | ✓
GRAB | lab | ✓ | 37 | 51 | 1.3k/1.6M | 4 | ✗
Ours | wild | ✓ | 610 | 4.4k | 4.4k/- | 92 | ✓

Table 7: Comparison with existing hand-object interaction datasets: HO-3D (Hampali et al., 2020), obman (Hasson et al., 2019), HOI4D (Liu et al., 2022), Oakink* (Yang et al., 2022), ContactPose (Brahmbhatt et al., 2020), GRAB (Taheri et al., 2020). "Authentic" indicates whether the hand-object interaction is genuine human behavior.
The Oakink* dataset comprises two parts: real data collected in laboratory settings (img set) and synthetic data (shape set).

A.3 DETAILS OF TOUCH

A.3.1 IMPLEMENTATION DETAILS

Condition Injection Transformer Architecture. Our diffusion model and refiner network f_refiner are both built upon an 8-layer, 4-head Transformer architecture. This architecture features a latent dimension of 512, a feed-forward network size of 1024, the GELU activation function, and a dropout rate of 0.1. To handle the variable number of input points for hands and objects, we designed an Adaptive Feature Selector, which employs a multi-head attention mechanism to aggregate features from the input point clouds into a fixed-size representation (64 for the hand and 128 for the object).

Figure 12: Object category word cloud in the WildO2 dataset. Object categories are derived from SSv2 noun labels, with overly fine-grained descriptions merged (e.g., 'baby oil bottle' and 'perfume bottle' are grouped as 'bottle'). This reduces the number of categories from 1643 (SSv2) to 610 (Ours), followed by further refinement using VLM and manual correction.

Text Encoder. To facilitate effective feature alignment and representation learning, we designed and implemented customized encoder architectures tailored to different sources of textual features. For the 4096-dimensional features from the large language model Qwen-7B, we employed a non-linear feature adapter. This module, inspired by established practices in cross-modal alignment, utilizes a Multi-Layer Perceptron (MLP) incorporating GELU activations and Layer Normalization to project the high-dimensional features into a unified 512-dimensional latent space. For the 768-dimensional token-level sequence features from models such as BERT and MPNet, we diverged from the conventional mean-pooling strategy. Instead, we implemented an attention-based pooling mechanism.

Training. The diffusion model is trained for 1000 epochs using the Adam optimizer with a learning rate of 1e-4 and a batch size of 128. Subsequently, the refiner network is trained using a distinct set of loss weights tailored for physical plausibility. During this phase, the parameters of the diffusion model are kept frozen.

Refinement. At inference time, following a single forward pass through the refiner network, we perform Test-Time Adaptation (TTA) on the generated pose. This process directly optimizes the pose parameters for 500 iterations with a learning rate of 1e-2. The loss weight coefficients used during all training and refinement stages are detailed in Table 8.

Table 8: Weight coefficients for each loss term during the training and refinement stages. "-" indicates that the loss term is not applicable to the corresponding stage.

Parameter | Diffusion Model | Refiner Network | Description
λ_simple | 1.0 | 5.0 | Base L2 loss
λ_dmap | 0.1 | - | Hand-object distance map loss
λ_global | 0.1 | 0.1 | Hand global pose loss
λ_pene | - | 100.0 | Hand-object penetration loss
λ_contact | - | 100.0 | Hand-object contact loss
λ_cyc | - | 10.0 | Cycle-consistency loss
λ_self | - | 10000.0 | Hand self-penetration loss
λ_anatomy | - | 0.1 | Anatomical plausibility loss
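A minimal PyTorch sketch of the TTA step described above, assuming Adam as the optimizer (not stated in the text) and a user-supplied callable returning the individual refiner-stage loss terms; the weights follow the Refiner Network column of Table 8:

```python
import torch

REFINER_W = {"simple": 5.0, "global": 0.1, "pene": 100.0, "contact": 100.0,
             "cyc": 10.0, "self": 10000.0, "anatomy": 0.1}  # Table 8, refiner column

def test_time_adapt(pose, loss_terms, iters=500, lr=1e-2):
    """Directly optimize the generated pose parameters against the weighted
    refiner losses. `loss_terms(pose)` is assumed to return a dict of scalar
    tensors keyed as in REFINER_W."""
    pose = pose.detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([pose], lr=lr)
    for _ in range(iters):
        terms = loss_terms(pose)
        loss = sum(REFINER_W[k] * v for k, v in terms.items() if k in REFINER_W)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return pose.detach()
```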
A.3.2 EXPERIMENT SETTINGS

Training Details. We implement our model using Accelerate and train it on 8 NVIDIA 4090D GPUs with a batch size of 128. We first define 17 detailed hand parts (e.g., pad, nail, and knuckle for each finger, plus palmar and dorsal). Due to the long-tailed distribution of their contact frequencies, we aggregate these parts into 7 semantically coherent categories (the pad of each finger, palmar, and dorsal). The contact state for any interaction is then encoded as a 7-bit binary label, where each bit represents the contact status (1 for contact, 0 for non-contact) of one category, as sketched below. To create a balanced training set, we perform resampling based on these unique 7-bit labels.
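A small sketch of this encoding; the bit order over the 7 aggregated categories is an assumption made for illustration:

```python
# Aggregated hand-part categories, one bit each (order assumed).
CATEGORIES = ["thumb pad", "index pad", "middle pad", "ring pad", "pinky pad",
              "palmar", "dorsal"]

def encode_contact(contacted):
    """Encode a set of contacted categories as a 7-bit integer label."""
    return sum(1 << i for i, c in enumerate(CATEGORIES) if c in contacted)

# e.g., encode_contact({"thumb pad", "index pad"}) == 0b0000011 == 3
```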
Evaluation Metrics

• Physical Plausibility. We employ three metrics to assess the physical plausibility of the generated hand-object interactions: (1) Mean Per-Vertex Position Error (MPVPE, mm), which computes the average L2 distance between the predicted hand mesh Ĥ and the ground truth H; (2) Penetration Depth (PD, cm), measuring the maximum depth of hand vertices penetrating the object surface; (3) Penetration Volume (PV, cm³), quantifying the volumetric intersection by voxelizing the object mesh and calculating the volume within the hand surface.

• Contact Accuracy. Contact accuracy is measured by comparing the predicted contact map C_H with the ground truth C*_H using Intersection over Union (IoU) and F1 score.

• HOI Diversity. Following (Liu et al., 2023b), we assess diversity by clustering generated grasps into 20 clusters using K-means, and report the entropy of cluster assignments and the average cluster size.

• Semantic Consistency. Semantic consistency is evaluated by: (1) P-FID, the Fréchet Inception Distance between point clouds of predicted and ground truth hand meshes, using a pre-trained feature extractor (Nichol et al., 2022); (2) VLM-assisted evaluation, where rendered hand-object interactions are scored for semantic alignment with input captions (0-10 scale); (3) Perceptual Score (PS), the mean rating from 10 volunteers (0-10 scale), reflecting the naturalness and semantic consistency of generated grasps.

TOUCH: TEXT-GUIDED CONTROLLABLE GENERATION OF FREE-FORM HAND-OBJECT INTERACTIONS

Guangyi Han¹, Wei Zhai¹,†, Yuhang Yang¹, Yang Cao¹, Zheng-Jun Zha¹
¹ {hanguangyi@mail., wzhai056@, yyuhang@mail., forrest@,

ABSTRACT

Hand-object interaction (HOI) is fundamental for humans to express intent. Existing HOI generation research is predominantly confined to fixed grasping patterns, where control is tied to physical priors such as force closure or generic intent instructions, even when expressed through elaborate language. Such overly general conditioning imposes a strong inductive bias for stable grasps, thus failing to capture the diversity of daily HOI. To address these limitations, we introduce Free-Form HOI Generation, which aims to generate controllable, diverse, and physically plausible HOI conditioned on fine-grained intent, extending HOI from grasping to free-form interactions, like pushing, poking, and rotating. To support this task, we construct WildO2, an in-the-wild diverse 3D HOI dataset, which includes diverse HOI derived from internet videos. Specifically, it contains 4.4k unique interactions across 92 intents and 610 object categories, each with detailed semantic annotations. Building on this dataset, we propose TOUCH, a three-stage framework centered on a multi-level diffusion model that facilitates fine-grained semantic control to generate versatile hand poses beyond grasping priors. This process leverages explicit contact modeling for conditioning and is subsequently refined with contact consistency and physical constraints to ensure realism. Comprehensive experiments demonstrate our method's ability to generate controllable, diverse, and physically plausible hand interactions representative of daily activities. The project page is here.

1 INTRODUCTION

Hand-Object Interaction (HOI) is fundamental to expressing intent and executing tasks in human daily life, and the ability to generate controllable interactions is crucial for AR/VR, robotics, and embodied AI (Zheng et al., 2025). While existing HOI generation research has progressed from ensuring physical plausibility (Fang et al., 2020) to incorporating semantic controllability (Li et al., 2024; Yang et al., 2023), its scope remains confined to a grasp-centric paradigm (Zhou et al., 2022; 2024), where control signals, spanning from physical constraints like force closure (Nguyen, 1988; Zheng & Qian, 2005) to coarse instructions like verb-noun pairs (Li et al., 2025), are overly general, imposing a strong inductive bias that favors the generation of stable grasps (Taheri et al., 2020; Turpin et al., 2022) and sacrifices interaction diversity. Furthermore, even with more sophisticated control such as detailed natural language (e.g., via LLMs) (Huang et al., 2025; Zhang et al., 2025a;b; Shao et al., 2025), the underlying model designs and inherent inductive biases are still fundamentally geared towards generating only grasping interactions, driven by historical focus and prevailing representations. Consequently, these approaches lack the fine-grained control and inherent capability to capture the diverse non-grasping interactions found in the real world, including varied hand poses, contact details, and nuanced semantic intent. To bridge the gap between the limited scope of current methods and the complexity of real-world interactions, we introduce the task of Free-Form HOI Generation.
The goal is to break grasp-centric limitations and shift towards generating diverse interactions, including the vast array of non-grasping manipulations. This task emphasizes expressiveness and controllability in the generation process, aiming to synthesize interactions that are not only physically plausible but also semantically rich and truly adaptable to complex human intentions.

† Corresponding Author.

Figure 1: Overview. We extend HOI generation beyond laboratory "grasp" settings (left) toward broader daily HOI modalities (right), enabling the modeling of more human-like interactions. Our dataset WildO2, built from Internet videos, covers more contacts, more objects, and more actions, and is enriched with descriptive synthetic captions (DSCs) to support fine-grained semantic controllable HOI generation with our method, TOUCH.

The core challenge of this task lies in two aspects: what to generate and how to generate it. The former pertains to spatial plausibility: the model must break free from restrictive grasping priors (e.g., palm position and orientation, contact region assumptions (Ye et al., 2024; Jiang et al., 2021; Jung & Lee, 2025)) to explore a vast yet physically valid interaction space. To address this, we propose that contact relationships serve as a powerful cue to constrain this high-dimensional space, offering a more nuanced understanding of physically valid interactions. The latter pertains to semantic controllability: the model must accurately map fine-grained textual instructions to specific hand configurations and contact regions. The prior knowledge within Large Language Models (LLMs) offers a promising pathway for this guidance (Tang et al., 2023). A major obstacle is the lack of diverse 3D training data; current datasets (Zhan et al., 2024; Fu et al., 2025) are limited to lab-based grasps, and large-scale real-world data collection remains challenging. In contrast, abundant 2D HOI videos online provide rich and realistic daily interaction behaviors.

To tackle the proposed task and challenges, we present TOUCH, a three-stage framework for controllable free-form HOI generation. First, we explicitly model the contact on the surfaces of the hand and the object separately by jointly encoding spatial point-cloud relations and semantic information, providing strong spatial priors to mitigate uncertainty from the high degrees of freedom in interaction position and pose. We further incorporate part-level hand modeling for more precise action control. Second, we employ a multi-level diffusion model with attention-based fusion of semantics and geometry: coarse-grained intent and global object geometry guide the early diffusion stages, while fine-grained text and local contact features refine detailed motions in deeper stages, enabling fine-grained semantic controllability. Finally, we introduce self-supervised contact consistency and physical plausibility constraints to optimize the generated interactions, ensuring realism and physical feasibility.
Compared to prior methods restricted to grasp generation, TOUCH naturally generalizes to diverse free-form HOI such as pushing, pressing, and rotating. Additionally, based on 3D object reconstruction (Xu et al., 2024), we introduce an automated pipeline to build the dataset WildO2 that jointly recovers and optimizes high-quality 3D hand-object interaction samples from internet videos annotated with interaction intent. By leveraging vision-language models (Bai et al., 2023b), we generate fine-grained semantic annotations, resulting in a 3D daily HOI dataset covering diverse interaction intents. Our main contributions are: (1) We propose to extend HOI from constrained grasping to a broader, more realistic, and more diverse set of daily interactions. (2) We propose TOUCH, a new framework that can generate natural, physically reasonable, and diverse free-form HOI under fine-grained text guidance. (3) We build an automated pipeline and construct WildO2, an in-the-wild 3D dataset for daily HOI, providing a critical resource that enables future research in this domain. Extensive experiments demonstrate the superiority of TOUCH.

2 RELATED WORK

2.1 HAND-OBJECT INTERACTION DATASETS.

Existing 3D hand-object interaction (HOI) datasets are predominantly collected in controlled laboratory settings, relying either on physics-based simulation synthesis (Hasson et al., 2019) or on motion capture systems to record real interactions (Hampali et al., 2020; Liu et al., 2022; Yang et al., 2022; Brahmbhatt et al., 2020). Although these datasets provide valuable support for modeling 3D HOI, they suffer from limited diversity due to constrained camera setups, a small number of participants, and a restricted set of object instances. In contrast, large-scale in-the-wild video datasets (Damen et al., 2020; Grauman et al., 2022; Shan et al., 2020) contain abundant HOI clips, but lack high-quality 3D annotations. Some studies have attempted to annotate subsets of these videos in 3D using object template-based optimization methods (Cao et al., 2021; Patel et al., 2022); however, due to the high diversity of open-set objects, scaling such approaches remains challenging.

2.2 TEMPLATE-FREE HOI RECONSTRUCTION.

The core bottleneck in reconstructing HOIs in the wild has long been the recovery of diverse object geometries. While existing template-free approaches (Fan et al., 2024; Ye et al., 2022) avoid predefined object model constraints, they are typically trained on limited datasets and exhibit poor generalization to novel objects. In recent years, multi-view diffusion models (Liu et al., 2023a) and large-scale reconstruction models (LRMs) (Hong et al., 2024) have enabled high-quality 3D mesh reconstruction directly from single images (Xu et al., 2024) or text prompts (Poole et al., 2022), demonstrating strong generalization capabilities. Motivated by these advances, several HOI studies have explored image-to-3D reconstruction pipelines to handle open-set objects in the wild. However, due to severe hand occlusion, these methods often rely on image inpainting to complete occluded regions (Tian et al., 2025; Liu et al., 2024; Wen et al., 2025), or employ text-to-3D generation to align with coarse reconstruction results (Wu et al., 2024; Chen et al., 2025a). Nonetheless, most of these pipelines depend on heuristic completion or registration strategies, resulting in limited geometric consistency with the input, and have yet to be validated at scale in an automated manner.
2.3 DATA-DRIVEN CONTROLLABLE HOI GENERATION.

In the evolution of HOI generation, interaction guidance has progressively advanced: from coarse control based on grasp type (Feix et al., 2015; Chen et al., 2025b), to object-conditioned generation (Karunratanakul et al., 2020), and further to task/action-level intent constraints (Christen et al., 2024; Yu et al., 2025; Yang et al., 2024b;a). To enhance physical plausibility, contact penetration losses and hand anatomical constraints have been widely adopted (Wei et al., 2024). Additionally, explicit modeling of hand part segmentation and contact relationships with objects has been shown to improve physical realism and the detailed expression of interactions (Liu et al., 2023b; Zhang et al., 2024). Building on these efforts, we propose a multi-level controllable generation framework trained on our newly constructed daily HOI dataset, enabling finer-grained semantic intent control and the flexible generation of free-form HOIs that align with complex human intentions.

3 DATASET

3.1 DATA COLLECTION AND PROCESSING

Our goal is to construct a diverse dataset of 3D hand-object interactions from in-the-wild videos. A primary challenge in this process is the severe occlusion of the object by the hand, which compromises the quality of 3D object reconstruction. To address this, we introduce a semi-automated data generation pipeline, centered around a novel Object-only to Hand-Object Interaction (O2HOI) frame pairing strategy.

Figure 2: The proposed data generation pipeline for WildO2. The process begins with O2HOI frame pair extraction from in-the-wild videos, followed by a three-stage pipeline for 3D reconstruction, camera alignment, and hand-object refinement that produces high-fidelity interaction data.

We begin by filtering the Something-Something V2 dataset (Goyal et al., 2017), which is rich in goal-directed human actions, to obtain 8k single-hand, single-object interaction clips. For each clip, we automatically extract an O2HOI pair (details in Appendix): an object-only frame Iref, where the object is unoccluded, and a corresponding interaction frame Ihoi. To obtain a complete object mask in the interaction frame, we segment the object in Iref using SAM2 (Ravi et al., 2024) and then transfer this mask to Ihoi via a robust dense matching model (Edstedt et al., 2024), yielding Minpaint. This mask transfer strategy offers a distinct advantage over common alternatives: it avoids the geometric inconsistencies of diffusion-based inpainting (Liu et al., 2024) while being significantly more scalable than manual completion (Wen et al., 2025). Consequently, our approach facilitates the automated, large-scale generation of high-fidelity 3D assets for reconstruction.

3.2 DATA GENERATION PIPELINE

Based on the O2HOI pairs, we build a three-stage pipeline to recover 3D HOI (see Figure 2).

Stage 1: Initialization. For each pair, we reconstruct a textured object mesh V^O_recon from the object-only frame Iref using an image-to-3D model (Xu et al., 2024).
Concurrently, we estimate initial MANO (Romero et al., 2017) hand parameters Hinit from the interaction frame Ihoi using a state-of-the-art hand reconstruction method (Pavlakos et al., 2024).

Stage 2: Camera Alignment. A challenge arises from coordinate system misalignment: the object mesh V^O_recon is created in a canonical space of the object-only frame Iref, while the hand exists in the camera space of the interaction frame Ihoi. To unify them, we align V^O_recon to an object-centric global coordinate system relative to the interaction frame by optimizing the camera projection matrix K and extrinsics (R, t). This is achieved by minimizing a camera alignment loss, Lcam, via differentiable rendering. The optimization proceeds in two phases: we initially use mask IoU, a Sinkhorn (Cuturi, 2013) loss, and an edge penalty term (to prevent the object from moving out of view). Once the IoU surpasses a threshold, we introduce scale-invariant depth (Eigen et al., 2014) and RGB reconstruction losses for fine-tuning. The overall objective is formulated as:

$\min_{K,R,t} \; \mathcal{L}_{\mathrm{cam}} = \mathcal{L}_{\mathrm{mask}} + \mathcal{L}_{\mathrm{sinkhorn}} + \mathcal{L}_{\mathrm{edge}} + \lambda_{\mathrm{fine}}(\mathcal{L}_{\mathrm{depth}} + \mathcal{L}_{\mathrm{rgb}}).$ (1)

Stage 3: Hand-Object Refinement. With the aligned camera and object, we refine the initial hand parameters Hinit to achieve physically plausible contact. Specifically, we cast rays from the camera center through pixels within the interaction mask Minpaint. The intersection points of these rays with the 3D hand and object geometries define a potential 3D contact zone. We then optimize H using a refinement objective Lalign, which combines 2D evidence with 3D physical constraints: hand mask IoU (L^H_mask), 2D joint reprojection error (Lj2d), an ICP loss on the 3D contact zone (Licp), and physical constraints for contact, penetration, and anatomy based on (Yang et al., 2021):

$\min_{H} \; \mathcal{L}_{\mathrm{align}} = \mathcal{L}^{H}_{\mathrm{mask}} + \mathcal{L}_{\mathrm{j2d}} + \mathcal{L}_{\mathrm{icp}} + \mathcal{L}_{\mathrm{phy}}, \qquad \mathcal{L}_{\mathrm{phy}} = \mathcal{L}_{\mathrm{contact}} + \mathcal{L}_{\mathrm{pene}} + \mathcal{L}_{\mathrm{anatomy}} + \mathcal{L}_{\mathrm{self}}.$ (2)

This pipeline yields 4,414 high-quality 3D hand-object interaction samples after a final stage of manual inspection and refinement, which constitute the ground truth of our dataset.
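For reference, the Sinkhorn term in Eq. (1) can be realized as a small entropic optimal-transport routine between 2D point sets (e.g., points sampled from the rendered and target silhouettes). The sketch below uses uniform weights; the regularization ε and the iteration count are illustrative choices, not the paper's settings.

```python
import torch

def sinkhorn_loss(x, y, eps=0.01, iters=100):
    """Entropic OT cost between point sets x (n,2) and y (m,2), uniform weights."""
    C = torch.cdist(x, y) ** 2                  # pairwise squared distances
    K = torch.exp(-C / eps)
    a = torch.full((x.shape[0],), 1.0 / x.shape[0], device=x.device)
    b = torch.full((y.shape[0],), 1.0 / y.shape[0], device=y.device)
    u = torch.ones_like(a)
    for _ in range(iters):                      # Sinkhorn fixed-point iterations
        v = b / (K.t() @ u + 1e-12)
        u = a / (K @ v + 1e-12)
    P = u[:, None] * K * v[None, :]             # transport plan
    return (P * C).sum()
```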
Figure 3: Dataset Distribution. (a) An illustration of the interplay between the most frequent object categories, interaction types, and hand contact regions. Object and action definitions are adapted and refined from (Goyal et al., 2017). Contact regions are derived based on our dataset analysis. (b) Specific segmentation of the 17 hand parts and their contact frequency distribution in the dataset, along with a contact heatmap of the entire hand.

3.3 DATA ANNOTATION AND STATISTICS

We enrich our dataset with a multi-level annotation system, generating over 44k annotations. A statistical overview is provided in Figure 3, with further details in the Appendix.

3D Geometry and Transformation. Each sample includes the final hand-object meshes (V̂H, V̂O) and the corresponding camera parameters derived from our generation pipeline.

Contact Maps. We compute dense contact maps between the hand and object surfaces. To handle varying object scales, our method robustly identifies contact regions by combining relative and absolute distance thresholds with bidirectional nearest-neighbor filtering.

Multi-Level Language Descriptions. We provide two levels of textual descriptions. We inherit the template-based Short Synthetic Captions (SSCs) from Something-Something V2 (e.g., "picking [Something] up"). Additionally, we use a Vision-Language Model (VLM) (Bai et al., 2023b) to generate more detailed Descriptive Synthetic Captions (DSCs), which are manually verified for quality and relevance.

Fine-Grained Hand Part Segmentation. We segment the hand mesh into 17 parts, including finger pads, nails, knuckles, the palmar region, and the dorsal region. This partitioning scheme goes beyond the coarse divisions commonly used in grasp generation tasks (Hasson et al., 2019; Liu et al., 2023b), which often focus only on contact on the inner hand, by also accounting for contact on the dorsal side. This fine-grained segmentation supports detailed local interaction analysis and facilitates alignment with the semantic descriptions in the DSCs.

4 METHOD

This work aims to generate natural and physically plausible hand-object interaction (HOI) poses, parameterized by H, along with corresponding contact maps CH and CO, conditioned on a multi-level textual prompt T and an object mesh VO. To tackle this problem, we propose a three-stage framework, as illustrated in Figure 4. Specifically, the Contact Map Prediction module (Section 4.1) infers the potential contact regions on the hand and object surfaces based on the text and object geometry. The Multi-Level Conditioned Diffusion module (Section 4.2) synthesizes a coarse hand pose by integrating coarse-to-fine textual and geometric features within a diffusion framework, ensuring alignment with multi-level constraints. Finally, the Physical Constraints Refinement module (Section 4.3) further optimizes the coarse pose to enhance contact realism and prevent penetrations.

4.1 CONTACT MAP PREDICTION

To generate diverse interactions beyond simple grasping, we design two independent yet similar CVAEs (Sohn et al., 2015) to generate binary contact maps for the object and the hand, respectively.

Figure 4: Overview of our three-stage framework TOUCH for generating hand-object interactions from multi-level text prompts and object meshes. CIM stands for the Condition Injection Module.

For the object branch, we sample a point cloud PO ∈ R^{NO×3} (NO = 3000) from its mesh VO, normalize it, and record the scale factor sO. We use PointNet (Qi et al., 2016) to extract its geometric features, which are concatenated with sO to form the object condition FO. For the hand branch, we generate a canonical point cloud P^0_H ∈ R^{NH×3} (NH = 778) from MANO's zero pose and shape parameters H0. This point cloud, combined with a hand-part mask initialized from the fine-grained text TDSC, is processed by PointNet to obtain the hand condition FH. This design integrates the topological structure of the point clouds with text-guided emphasis on interaction-relevant hand regions. Both CVAEs are trained conditioned on their respective geometric features (FO, FH) and a shared text feature FDSC = ftext(TDSC), which is extracted using Qwen-7B (Bai et al., 2023a) processed through a lightweight adapter. The optimization objective is a composite loss function:

$\mathcal{L}_{\mathrm{contact}} = \mathcal{L}_{\mathrm{focal}} + \mathcal{L}_{\mathrm{dice}} + \beta \mathcal{L}_{\mathrm{KL}}$ (3)

where Lfocal and Ldice supervise the contact prediction, and LKL structures the latent space. During inference, under the conditional features (FO, FH, FDSC), the model samples from a Gaussian prior z ∼ N(0, I) and decodes it to produce the predicted binary contact maps ĈO ∈ {0, 1}^{NO×1} and ĈH ∈ {0, 1}^{NH×1}.
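A minimal PyTorch sketch of the composite objective in Eq. (3), combining a binary focal term, a dice term, and the usual Gaussian-prior KL of a CVAE; the focal exponent γ and the KL weight β are assumed values:

```python
import torch

def contact_cvae_loss(logits, target, mu, logvar, beta=1e-3, gamma=2.0):
    """Eq. (3): focal + dice on binary contact maps, plus KL on the latent."""
    p = torch.sigmoid(logits)
    # binary focal loss: down-weight easy, well-classified points
    pt = torch.where(target > 0.5, p, 1.0 - p)
    focal = (-(1.0 - pt) ** gamma * torch.log(pt.clamp_min(1e-8))).mean()
    # dice loss on the soft contact map
    inter = (p * target).sum()
    dice = 1.0 - (2.0 * inter + 1.0) / (p.sum() + target.sum() + 1.0)
    # KL( q(z|x,c) || N(0, I) )
    kl = -0.5 * torch.mean(1.0 + logvar - mu.pow(2) - logvar.exp())
    return focal + dice + beta * kl
```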
4.2 MULTI-LEVEL CONDITIONED DIFFUSION

The core of our method is a Transformer-based Denoising Diffusion Probabilistic Model (DDPM) (Ho et al., 2020) that synthesizes hand pose parameters Ĥ conditioned on the object point cloud PO, multi-level text T, and predicted contact maps Ĉ. Instead of predicting noise, our model fθ is trained to directly predict the denoised data x̂0 = fθ(xt, t, y), optimized with an L2 loss on the pose parameters:

$\mathcal{L}_{\mathrm{diff}} = \mathbb{E}_{t,\varepsilon}\, \|\hat{x}_0 - x_0\|^2$
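A sketch of one such training step, with the network regressing x0 directly rather than the noise; the linear beta schedule is an assumption made for illustration:

```python
import torch

def diffusion_step_x0(model, x0, cond, T=1000):
    """Single x0-prediction DDPM training step: sample t and noise, form x_t
    via the closed-form forward process, and regress x0 (L_diff)."""
    betas = torch.linspace(1e-4, 0.02, T)
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)
    t = torch.randint(0, T, (x0.shape[0],))
    eps = torch.randn_like(x0)
    ab = alpha_bar[t].view(-1, *([1] * (x0.dim() - 1)))
    xt = ab.sqrt() * x0 + (1.0 - ab).sqrt() * eps   # forward noising q(x_t | x_0)
    x0_hat = model(xt, t, cond)                     # f_theta predicts x0, not eps
    return torch.mean((x0_hat - x0) ** 2)           # L_diff
```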
Condition Generation: Transformer Inputs. To achieve precise control, our model extracts multi-level conditional features from both geometric and textual modalities. On the geometric side, we use PointNet to extract global features F^O_glb, F^H_glb and point-wise local features from the object point cloud PO, the initial hand point cloud P^0_H, and the predicted contact maps Ĉ from the previous stage. To focus on interaction regions, we leverage Ĉ to adaptively select features of N^O_loc = 128 object points and N^H_loc = 64 hand points near contact areas, yielding F̃^O_loc and F̃^H_loc. On the textual side, we utilize ftext to extract both coarse-grained F^SSC_qwen = ftext(TSSC) and fine-grained F^DSC_qwen text features.

Conditional Injection: Coarse-to-Fine Control. We inject these features into the Ninj = 8 blocks of our Transformer model in a hierarchical, coarse-to-fine fashion. This design ensures that global context, defined by SSCs and global geometry, shapes the overall pose in early denoising stages, while local details, defined by DSCs and contact-point features, are refined in later stages. Specifically, for the i-th Transformer block:
arXiv:2510.14873v1 [physics.med-ph] 16 Oct 2025

Sampling Density Compensation using Fast Fourier Deconvolution

Rui Luo, Peng Hu, Haikun Qi∗

Abstract

Density Compensation Function (DCF) is widely used in non-Cartesian MRI reconstruction, either for direct Non-Uniform Fast Fourier Transform (NUFFT) reconstruction or for iterative undersampled reconstruction. Current state-of-the-art methods involve tens of time-consuming iterations, which is one of the main hurdles to widespread application of the highly efficient non-Cartesian MRI. In this paper, we propose an efficient, non-iterative method to calculate the DCF for arbitrary non-Cartesian k-space trajectories using Fast Fourier Deconvolution. Simulation experiments demonstrate that the proposed method is able to yield the DCF for 3D non-Cartesian reconstruction in less than 20 seconds, achieving orders-of-magnitude speed improvement over the state-of-the-art method while achieving similar or slightly better reconstruction quality.

1 Introduction

Non-Cartesian acquisition is a highly efficient acquisition technique for magnetic resonance imaging (MRI). However, non-Cartesian sampling poses challenges for the reconstruction step. Since for most non-Cartesian sampling patterns, samples in the low-frequency region are much denser than in the high-frequency region, direct reconstruction without proper weighting of the non-Cartesian k-space may lead to image blurring. By tradition, the weights that balance the sampling density are termed the Density Compensation Function (DCF) [1]. The DCF is not only essential for straightforward NUFFT reconstruction, but has also been proven to accelerate convergence in iterative reconstruction and deep-learning reconstruction by improving the conditioning of the problem [2].

∗Rui Luo is with the School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai 201210, China (e-mail: luorui2023@shanghaitech.edu.cn). Peng Hu is with the School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai 201210, China (e-mail: hupeng@shanghaitech.edu.cn). Haikun Qi is with the School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai 201210, China (e-mail: qihk@shanghaitech.edu.cn).

A conventional method for calculating the DCF is based on the Voronoi diagram [3], first introduced by Rasche et al. [4]. In this prior work, the Voronoi diagram is employed intuitively: the area of each cell is smaller where the samples are denser, so the cell area can be used directly as the DCF. Although this method is natural and accurate, constructing a Voronoi diagram is extremely computationally expensive and is also numerically unstable for 3D non-Cartesian trajectories.

In 1999, Pipe and Menon proposed to solve for the DCF in an iterative manner [1]. The initial guess of the DCF is a unity sequence, and in each iterative step the DCF is divided by its convolution with a kernel function Ψ. At convergence, the Point Spread Function (PSF) of the sampling is close to an impulse function, which is also the ideal PSF. Since the effect of a non-uniform sampling of k-space can be quantified by the corresponding PSF, making the PSF close to the ideal impulse function is a feasible way to solve this problem. This method serves as the foundation of most DCF methods proposed afterwards.
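For concreteness, a brute-force sketch of the Pipe-Menon iteration w ← w / (w ⊛ Ψ); the Gaussian kernel profile and iteration count are placeholders, and the O(N²) pairwise convolution is for illustration only (practical implementations use gridding instead):

```python
import numpy as np

def pipe_menon_dcf(traj, psi, n_iter=30):
    """Iterative DCF: w_{n+1}(k_i) = w_n(k_i) / sum_j w_n(k_j) Psi(|k_i - k_j|).
    traj: (N, d) sample locations; psi: radial kernel profile."""
    d = np.linalg.norm(traj[:, None, :] - traj[None, :, :], axis=-1)
    K = psi(d)                        # kernel matrix Psi(k_i - k_j)
    w = np.ones(traj.shape[0])
    for _ in range(n_iter):
        w = w / (K @ w)               # divide by the kernel-smoothed weights
    return w

# usage with a placeholder Gaussian kernel profile:
# w = pipe_menon_dcf(k_samples, lambda r: np.exp(-(r / 0.5) ** 2))
```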
There are several further developments based on Pipe and Menon's method [1]. The first concerns the choice of the optimal kernel function Ψ, by Johnson and Pipe [5]. In this work, the authors proposed that the optimal choice of Ψ in 2D is the convolution of two circular binary windows, while in 3D the optimal Ψ is the convolution of two spherical binary windows. This method improves the reconstruction accuracy but is still slow for 3D non-Cartesian trajectories. As reported by Johnson et al., it took 350 seconds per iteration to calculate the DCF for a 256³ sampling pattern.

Another method was proposed by Zwart et al. [6], in which the direct convolution is replaced with a grid convolution: a convolution mapping non-uniform samples to uniform samples, followed by another convolution mapping uniform samples back to non-uniform samples. The advantage of this method comes from avoiding the time-consuming searching process in non-uniform-to-non-uniform convolution. It has been shown that this method can achieve 1.8 seconds per iteration and converge within 40 iterations for a trajectory designed for a matrix size of 100³. However, the method is still iterative, and lengthy computation time can be expected for bigger matrix sizes. A recent work proposed an optimization approach for calculating the DCF [7], which achieves more accurate reconstruction but increases the runtime by two orders of magnitude compared to the earlier iterative method [1].

In this work, we propose a fast, non-iterative method to calculate the DCF using Fast Fourier Deconvolution (FFD), achieving efficient DCF calculation particularly for 3D non-Cartesian sampling patterns.

2 Theory and Methods

2.1 Problem Formulation

For k-space $S_0(k)$, the sampling process can be denoted by:

$S_1(k) = \mathrm{III}(k) \cdot S_0(k)$   (1)

where $\mathrm{III}(k)$ is an impulse sequence, defined by $\sum_{i=1}^{N_k} \delta(k - k_i)$, $k_i$ refers to the i-th sample in the sampling pattern $K = \{k_i\}_{i=1}^{N_k}$, and $N_k$ denotes the total number of k-space samples. After applying the density compensation to the sampled k-space, we obtain:

$S_2(k) = D(k) \cdot S_1(k)$   (2)
$\phantom{S_2(k)} = D(k) \cdot \mathrm{III}(k) \cdot S_0(k)$   (3)

where $D(k)$ is the density compensation function (DCF). Without loss of generality, $D(k)$ is assumed to be non-zero everywhere. Then, the weighted sampling pattern (WSP) can be defined as:

$E(k) = D(k) \cdot \mathrm{III}(k)$   (4)

The point spread function (PSF) $P(x)$ of $E(k)$ is:

$P(x) = \mathcal{F}^{-1}\{E(k)\}$   (5)

where $\mathcal{F}$ denotes the Fourier transform and $x$ represents the spatial domain. Our goal is to find a $D(k)$ such that $P(x) \approx \delta(x)$ in $\|x\| < L$, where $L$ denotes the field of view (FOV) and $\|\cdot\|$ denotes the $\ell_2$-norm.

2.2 PSF Decomposition

Let $\hat{E}(k)$, $\hat{P}(x)$ denote some initial guess of $E(k)$, $P(x)$, respectively. $\hat{P}(x)$ can be decomposed into two parts:

$\hat{P}(x) = \hat{P}_{in}(x) + \hat{P}_{out}(x)$   (6)

where

$\hat{P}_{in}(x) = \hat{P}(x) \cdot W(x)$, $\quad \hat{P}_{out}(x) = \hat{P}(x) \cdot (1 - W(x))$   (7)

and $W(x)$ is a window function such that $W(x) = 0$ for all $\|x\| \ge L$. We can define $\hat{E}_{in}(k)$ and $\hat{E}_{out}(k)$ as the Fourier pairs of $\hat{P}_{in}(x)$ and $\hat{P}_{out}(x)$, respectively:

$\hat{E}_{in}(k) = \mathcal{F}\{\hat{P}_{in}(x)\}$, $\quad \hat{E}_{out}(k) = \mathcal{F}\{\hat{P}_{out}(x)\}$   (8)

By the linearity of the Fourier transform, we have:

$\hat{E}(k) = \hat{E}_{in}(k) + \hat{E}_{out}(k)$   (9)

It is important to note that, unlike $E(k)$, which is an impulse sequence and zero almost everywhere, neither $\hat{E}_{in}(k)$ nor $\hat{E}_{out}(k)$ is an impulse sequence; they are non-zero almost everywhere.
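As a quick numerical illustration of Eqs. (1)-(9), not part of the paper's implementation, the PSF and its decomposition can be computed with direct Fourier sums standing in for the NUFFTs used later; the grid, sample distribution, and window exponent below are placeholder choices.

    import numpy as np

    rng = np.random.default_rng(1)
    k = np.sort(rng.normal(0.0, 1.0 / 3.0, 256))   # sample locations k_i
    d = np.ones_like(k)                            # initial DCF guess
    L = 8.0                                        # field of view
    x = np.linspace(-2 * L, 2 * L, 1024)           # spatial grid, wider than the FOV
    dx = x[1] - x[0]

    # Eq. (5): PSF of the weighted sampling pattern, P(x) = sum_i d_i exp(2*pi*j*k_i*x)
    P = (d[None, :] * np.exp(2j * np.pi * np.outer(x, k))).sum(axis=1)

    # Eqs. (6)-(7): split with a window that vanishes for |x| >= L (exponent arbitrary here)
    W = np.where(np.abs(x) < L, 1.0 - (np.abs(x) / L) ** 2.4, 0.0)
    P_in, P_out = P * W, P * (1.0 - W)

    # Eq. (8): E_in and E_out evaluated back at the sample locations (Riemann sums);
    # by Eq. (9) their sum approximates E_hat there
    kernel = np.exp(-2j * np.pi * np.outer(k, x))
    E_in = (kernel * P_in[None, :]).sum(axis=1) * dx
    E_out = (kernel * P_out[None, :]).sum(axis=1) * dx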
2.3 Optimal Weighted Sampling Pattern

Using the decomposition of Sec. 2.2, we have:

$\hat{P}_{in}(x) = \hat{P}(x) \cdot W(x)$, $\hat{P}_{out}(x) = \hat{P}(x) \cdot (1 - W(x))$, $\hat{E}_{in}(k) = \mathcal{F}\{\hat{P}_{in}(x)\}$, $\hat{E}_{out}(k) = \mathcal{F}\{\hat{P}_{out}(x)\}$   (10)

The proposed WSP is given by:

$E^{\star}(k) = \hat{E}(k) / \hat{E}_{in}(k)$   (11)

which yields:

$E^{\star}_{in}(k) = \hat{E}_{in}(k) / \hat{E}_{in}(k) = 1$, $\quad E^{\star}_{out}(k) = \hat{E}_{out}(k) / \hat{E}_{in}(k)$   (12)

Then, the corresponding PSF is:

$P^{\star}_{in}(x) = \mathcal{F}^{-1}\{1\} = \delta(x)$, $\quad P^{\star}_{out}(x) = \mathcal{F}^{-1}\{\hat{E}_{out}(k) / \hat{E}_{in}(k)\}$   (13)

thus, $P^{\star}(x) \to \delta(x)$ in $\|x\| < L$ holds only if:

$P^{\star}_{out}(x) \to 0, \quad 0 < \|x\| < L$   (14)

The problem of finding the optimal WSP turns into an optimization problem for $W(x)$:

$W^{\star}(x) = \arg\min_{W(x)} \|P^{\star}_{out}(x)\| / P^{\star}_{0}, \quad 0 < \|x\| < L$   (15)

where $P^{\star}_{0}$ denotes the intensity of the impulse at $P^{\star}(0)$.

2.4 Min-Max Parameter Search

For practical implementation, we assume a specific form for $W(x)$: $W(x) = 1 - \|\bar{x}\|^p$, where $\bar{x} = x/L$ and $p$ is the shape parameter to be optimized. Note that $W(x)$ can also take other forms; we use the expression $1 - \|\bar{x}\|^p$ because it is found to be fast to optimize and yields accurate reconstruction, as demonstrated in Sec. 3. A simple line search can be performed to find the optimal $p$ in $W(x)$. Specifically, by assuming that the optimal $W(x)$ is dimension-independent and trajectory-independent, we can run a Monte Carlo test over 1D random sampling patterns to obtain the maximum value of the objective function $\|P^{\star}_{out}(x)\| / P^{\star}_{0}$ for each $p$. The optimal $p$ is then chosen to minimize that maximum. This min-max optimization is formulated as:

$p^{\star} = \arg\min_{p} \{\max_{i_{test}} \|P^{\star}_{out}(x)\| / P^{\star}_{0}\}$   (16)

where $i_{test}$ is the index of a Monte Carlo test and $p^{\star}$ is the optimal shape parameter. For each test, the 1D sampling pattern follows a Gaussian distribution $\mathcal{N}(0, (k_{max}/3)^2)$, where $k_{max}$ denotes the maximum of $|k|$.

In conclusion, following the steps above we can derive the optimal shape parameter $p^{\star}$, after which we obtain $W^{\star}(x)$, $\hat{P}_{in}(x)$ and $\hat{E}_{in}(k)$ sequentially. The optimal weighted sampling pattern $E^{\star}(k)$ is defined by $\hat{E}(k)/\hat{E}_{in}(k)$ in Eq. (11), which is essentially a deconvolution recovering $\hat{P}_{in}(x)$ to $\delta(x)$, and in practice is performed by Fast Fourier Deconvolution (FFD). The relation between $E^{\star}(k)$ and the optimal DCF $D^{\star}(k)$ is given by $E^{\star}(k) = D^{\star}(k) \cdot \mathrm{III}(k)$ in Eq. (4).
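Taken together, Secs. 2.2-2.4 reduce the computation to one adjoint transform, one windowing, one forward transform, and one pointwise division. The following 1D NumPy sketch illustrates this pipeline under stated assumptions: direct Fourier sums replace the FINUFFT calls of the actual implementation, and p = 2.4 anticipates the search result reported in Sec. 4.1.

    import numpy as np

    rng = np.random.default_rng(2)
    k = np.sort(rng.normal(0.0, 1.0 / 3.0, 256))   # non-uniform sample locations
    d0 = np.ones_like(k)                           # initial DCF guess D_hat(k_i)
    L, p = 8.0, 2.4
    x = np.linspace(-L, L, 2048)                   # spatial grid covering the FOV
    dx = x[1] - x[0]

    # adjoint step: P_hat(x), the PSF of the weighted sampling pattern
    P = (d0[None, :] * np.exp(2j * np.pi * np.outer(x, k))).sum(axis=1)

    # windowing + forward step: E_hat_in evaluated at the sample locations
    W = 1.0 - (np.abs(x) / L) ** p
    E_in = (np.exp(-2j * np.pi * np.outer(k, x)) * (P * W)[None, :]).sum(axis=1) * dx

    # Eq. (11) restricted to the samples: D*(k_i) = D_hat(k_i) / E_hat_in(k_i);
    # the magnitude is taken here so the toy DCF stays real-valued
    d_star = d0 / np.abs(E_in)

No iteration is involved: a single pass through these steps produces the final DCF, which is where the reported speedup over the iterative baselines comes from.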
3 Experiments

In this section, simulation experiments are performed to evaluate the speed and reconstruction quality of the proposed method, compared to the conventional iterative method.

3.1 Simulation Settings

The simulation is conducted on a complex-valued 2D/3D digital phantom, shown in Fig. 1, which consists of an elliptical shell, a heart-like structure, and several balls of different sizes. The phase map is generated by sampling white noise followed by a spatial low-pass filter. The matrix size is set to 256 and the FOV to 500 mm. We test both methods on two 2D trajectories and two 3D trajectories: Variable Density Spiral [8] (hereafter termed VdSpiral), Rosette [9], Cones [10] and Yarnball [11]. FINUFFT [12, 13] is adopted for non-Cartesian k-space simulation and reconstruction. A general-purpose gradient waveform design method [14] is adopted to simulate the gradient waveforms and the sampling patterns.

Figure 1: The complex-valued digital phantom used for simulation. The top row displays magnitude images of the (A) axial, (B) coronal, and (C) sagittal center slices. The bottom row shows the corresponding phase images for the same slices (D, E, F).

3.2 Implementation Details

In the proposed method, a 1D DCF is adopted as the initial DCF guess $\hat{D}(k)$ to maintain numerical stability of the deconvolution. It is defined as:

$\forall i, j: \quad \hat{D}(k_i) = \|k_{i+1} - k_i\|_2 \cdot \|k_i\|_2^{N_d - 1}, \quad k_i, k_{i+1} \in K_j$   (17)

where $K_j$ is the j-th interleaf of $K$ and $N_d$ is the number of dimensions of the k-space. $\hat{D}(k)$ of the last point of each interleaf is copied from the previous point. The optimal parameter $p$ of the window function $W(x)$ is obtained by the min-max parameter search of Sec. 2.4. The proposed method was implemented in Python (3.12.8), and FINUFFT was used to perform the deconvolution. The source code of the proposed method will be made available upon publication of this work at https://github.com/RyanShanghaitech/MrAutoDcf.

A state-of-the-art 3D sampling density compensation method by Zwart [6], whose C implementation is publicly available¹, is adopted as the baseline method. This method features the basic iterative structure proposed by Pipe [1], the optimal kernel proposed by Johnson [5], and an efficient grid convolution method. It has been demonstrated to achieve a speed of 1.8 seconds per iteration at a 100³ matrix size [6].

¹https://www.ismrm.org/mri_unbound/sequence.htm
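Returning to Eq. (17), a minimal NumPy sketch of the initial per-interleaf DCF guess follows; the function name and the assumption that the trajectory is supplied as a list of per-interleaf arrays are ours.

    import numpy as np

    def initial_dcf(interleaves, n_dim):
        """interleaves: list of (n_points, n_dim) arrays of k-space samples K_j."""
        out = []
        for K in interleaves:
            step = np.linalg.norm(np.diff(K, axis=0), axis=1)   # ||k_{i+1} - k_i||_2
            radius = np.linalg.norm(K, axis=1) ** (n_dim - 1)   # ||k_i||_2^(Nd-1)
            d = np.empty(len(K))
            d[:-1] = step * radius[:-1]
            d[-1] = d[-2]   # last point of each interleaf copied from the previous one
            out.append(d)
        return out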
3.3 Performance Assessment

Both methods are run on a 4.9 GHz 12-core CPU (Intel® Core™ i7-12700). Normalized Root Mean Square Error (NRMSE) and Structural Similarity Index Measure (SSIM) are calculated for the NUFFT reconstructions using the DCFs generated by the baseline and proposed methods, to assess reconstruction quality. The execution time $T_{exe}$ is recorded to assess computation speed. The reconstructed images of both methods are normalized to zero mean and unit variance before metric calculation, so that the comparison reflects structural fidelity and noise amplification:

$\hat{I} = (I - \mu(I)) / \sigma(I)$   (18)

4 Results

Among the four trajectories, the DCF, the PSF, and the reconstruction results are illustrated for a representative trajectory, Yarnball. Performance metrics, including computation time, NRMSE and SSIM, are reported for all four trajectories.

4.1 Window Function

The results of the brute-force parameter search for the window function $W^{\star}(x) = 1 - \|\bar{x}\|^p$ are shown in Fig. 2. The optimal choice p = 2.4 is adopted in the implementation of the proposed method.

Figure 2: (A) Min-max parameter search in the 1D random sampling experiment. (B) The adopted $W^{\star}(x)$.

4.2 DCF Results

The DCFs of one representative interleaf of the Yarnball trajectory calculated by the baseline method and the proposed method are shown in Fig. 3. The two methods yield similar DCFs. However, significant oscillations can be observed in the DCF calculated by the previous method, while the DCF of the proposed method is smooth. Although the reconstruction results imply that the oscillations do not introduce significant errors in the reconstructed images, a smooth DCF is generally preferred for non-Cartesian MRI reconstruction. It is noted that previous well-established methods also tend to generate smooth DCFs, including the Voronoi method [4] and ad-hoc methods such as the analytical DCF of Spiral [15].

Figure 3: DCF calculated using the baseline and the proposed methods for an interleaf of the Yarnball trajectory.

4.3 PSF Results

By applying the Non-Uniform Inverse Fast Fourier Transform (NUIFFT) to the DCFs calculated by the two methods, we obtain the corresponding PSFs, shown in Fig. 4. The full width at half maximum (FWHM) of the PSFs of both methods is 1.5× the pixel size, indicating that both methods have an equivalent effect on the reconstructed spatial resolution. The PSFs are reconstructed with 10× oversampling for better visualization.

Figure 4: PSF resulting from the DCF calculated by the baseline method (A-C) and the proposed method (D-F). The figure displays logarithmically scaled PSFs of three cross-sectional planes: Z = 0 (A, D), Y = 0 (B, E), and X = 0 (C, F).

4.4 Reconstruction Results

The reconstruction results of the 3D digital phantom using the DCFs calculated with the baseline method and the proposed method are illustrated in Fig. 5, where the reconstructed images and their differences with the ground truth are shown for three orthogonal slices. Besides slight Gibbs ringing artifacts due to k-space truncation, the reconstructed images are free of noticeable distortion and blurring. The quantitative metrics of the reconstructed images are summarized in Table 1.

Table 1: Reconstruction quality comparison. The reconstruction metrics of the proposed method and the baseline method are comparable.

              NRMSE                  SSIM
              Baseline   Proposed    Baseline   Proposed
  VdSpiral    0.018      0.016       0.953      0.956
  Rosette     0.018      0.018       0.943      0.954
  Yarnball    0.028      0.021       0.971      0.976
  Cones       0.023      0.019       0.971      0.976

Figure 5: The digital phantom images of three orthogonal slices reconstructed using the DCF calculated by the baseline method (A-C) and the proposed method (G-I). The difference images between the reconstructed and ground truth images are shown below the reconstruction for each method.

4.5 Speed Comparison

The execution times of the baseline method and the proposed method are compared in Table 2. The proposed method is 1-2 orders of magnitude faster than the baseline method while preserving the reconstruction quality shown in Table 1. In particular, for the 3D non-Cartesian trajectories, the proposed method takes less than 20 seconds for DCF calculation, facilitating highly efficient 3D non-Cartesian reconstruction.

Table 2: Execution time (in seconds) comparison. The proposed method is 1-2 orders of magnitude faster than the baseline method.

              Baseline    Proposed   Improvement
  VdSpiral    3.835       0.044      87×
  Rosette     5.397       0.073      74×
  Yarnball    1399.853    18.542     75×
  Cones       555.792     12.788     43×

5 Conclusion

In this paper, we proposed an efficient DCF calculation method using Fast Fourier Deconvolution. The method is general-purpose, non-iterative and highly efficient. In the simulation experiments, the proposed method reduces the computation time from around 10 minutes to less than 20 seconds for the 3D non-Cartesian trajectories while achieving similar or slightly better reconstruction quality compared to the previous time-consuming iterative method. The proposed DCF calculation method could be a crucial component of an efficient non-Cartesian MRI pipeline.

6 Acknowledgement

This work was supported in part by the High Technology Research and Development Center of the Ministry of Science and Technology of China under Grant SQ2022YFC2400133, and in part by the Explorer Program of the Science and Technology Commission of Shanghai Municipality under Grant 23TS1400300.

References

[1] James G. Pipe and Padmanabhan Menon. Sampling density compensation in MRI: Rationale and an iterative numerical solution. Magnetic Resonance in Medicine, 41(1):179–186, 1999.

[2] Klaas P. Pruessmann, Markus Weiger, Peter Börnert, and Peter Boesiger. Advances in sensitivity encoding with arbitrary k-space trajectories. Magnetic Resonance in Medicine, 46(4):638–651, 2001.

[3] Georges Voronoi.
Nouvelles applications des paramètres continus à la théorie des formes quadratiques. Premier mémoire. Sur quelques propriétés des formes quadratiques positives parfaites. Journal für die reine und angewandte Mathematik (Crelles Journal), 1908(133):97–102, January 1908.

[4] V. Rasche, R. Proksa, R. Sinkus, P. Bornert, and H. Eggers. Resampling of data between arbitrary grids using convolution interpolation. IEEE Transactions on Medical Imaging, 18(5):385–392, May 1999.

[5] Kenneth O. Johnson and James G. Pipe. Convolution kernel design and efficient algorithm for sampling density correction. Magnetic Resonance in Medicine, 61(2):439–447, 2009.

[6] Nicholas R. Zwart, Kenneth O. Johnson, and James G. Pipe. Efficient sample density estimation by combining gridding and an optimized kernel. Magnetic Resonance in Medicine, 67(3):701–710, 2012.

[7] Nicholas Dwork, Daniel O'Connor, Ethan M. I. Johnson, Corey A. Baron, Jeremy W. Gordon, John M. Pauly, and Peder E. Z. Larson. Optimization in the space domain for density compensation with the nonuniform FFT. Magnetic Resonance Imaging, 100:102–111, July 2023.

[8] Bénédicte M. A. Delattre, Robin M. Heidemann, Lindsey A. Crowe, Jean-Paul Vallée, and Jean-Noël Hyacinthe. Spiral demystified. Magnetic Resonance Imaging, 28(6):862–881, July 2010.

[9] D.C. Noll. Multishot rosette trajectories for spectrally selective MR imaging. IEEE Transactions on Medical Imaging, 16(4):372–377, August 1997.

[10] Paul T. Gurney, Brian A. Hargreaves, and Dwight G. Nishimura. Design and analysis of a practical 3D cones trajectory. Magnetic Resonance in Medicine, 55(3):575–582, 2006.

[11] Robert W. Stobbe and Christian Beaulieu. Three-dimensional Yarnball k-space acquisition for accelerated MRI. Magnetic Resonance in Medicine, 85(4):1840–1854, 2021.

[12] A. H. Barnett. Aliasing error of the exp(β√(1−Z²)) kernel in the nonuniform fast Fourier transform, October 2020.

[13] Alex H. Barnett, Jeremy F. Magland, and Ludvig af Klinteberg. A parallel non-uniform fast Fourier transform library based on an "exponential of semicircle" kernel, April 2019.

[14] Rui Luo, Hongzhang Huang, Qinfang Miao, Jian Xu, Peng Hu, and Haikun Qi. Real-Time Gradient Waveform Design for Arbitrary k-Space Trajectories, September 2025.

[15] Richard D. Hoge, Remi K. S. Kwan, and G. Bruce Pike. Density compensation functions for spiral MRI. Magnetic Resonance in Medicine, 38(1):117–128, 1997.
2510.14876
BADAS: Context Aware Collision Prediction Using Real-World Dashcam Data

Roni Goldshmidt (roni.goldshmidt@getnexar.com), Hamish Scott (hamish.scott@getnexar.com), Lorenzo Niccolini (lorenzo.niccolini@getnexar.com), Shizhan Zhu (shizhan.zhu@getnexar.com), Daniel Moura (daniel.moura@getnexar.com), Orly Zvitia (orly.zvitia@getnexar.com)

October 17, 2025

Abstract

Existing collision prediction methods often fail to distinguish between ego-vehicle threats and accidents that do not involve the ego vehicle, leading to excessive false alerts in real-world deployment. We present BADAS, a family of collision prediction models trained on Nexar's real-world dashcam collision dataset, the first benchmark designed explicitly for ego-centric evaluation. We re-annotate major benchmarks to identify ego involvement, add consensus alert-time labels, and synthesize negatives where needed, enabling fair AP/AUC and temporal evaluation. BADAS uses a V-JEPA2 backbone trained end-to-end and comes in two variants: BADAS-Open (trained on our 1.5k public videos) and BADAS1.0 (trained on 40k proprietary videos). Across DAD, DADA-2000, DoTA, and Nexar, BADAS achieves state-of-the-art AP/AUC and outperforms a forward-collision ADAS baseline while producing more realistic time-to-accident estimates. We release our BADAS-Open model weights and code, along with re-annotations of all evaluation datasets, to promote ego-centric collision prediction research.

1. Introduction

Collision prediction is fundamental to Advanced Driver Assistance Systems (ADAS) and autonomous vehicles, yet current approaches fail to meet real-world deployment requirements. Despite decades of research, existing methods struggle with excessive false alarms and miss critical ego-vehicle threats. We present BADAS (V-JEPA2 [1] Based Advanced Driver Assistance System), a new approach that achieves state-of-the-art performance by combining modern video foundation models with high-quality, ego-centric real-world driving data. As shown in Figure 1, BADAS significantly outperforms both academic methods and commercial ADAS systems across major benchmarks, demonstrating the power of aligning training data with actual deployment scenarios.

Figure 1. BADAS achieves state-of-the-art performance on collision prediction benchmarks. Our approach substantially outperforms existing academic methods and commercial vision-based forward collision warning systems by leveraging ego-centric real-world data and modern video foundation models when evaluated on ego-vehicle involved collisions.

The core challenge in collision prediction lies not in the algorithms themselves, but in the fundamental misalignment between training data and actual driving scenarios. More critically, existing real-world datasets such as DAD [2], DoTA [3], and DADA-2000 [4] suffer from a conceptual flaw: they treat all visible accidents equally, training models to detect any collision within the camera's field of view. This approach generates excessive false alarms in deployment: a vehicle accident two lanes over triggers the same alert as an imminent frontal collision threatening the ego vehicle.

Figure 2 illustrates this critical distinction between ego-centric and general collision prediction. While visually salient accidents in adjacent lanes may capture attention, they are irrelevant to the ego vehicle's safety and should not trigger warnings.
Our systematic re-annotation of existing benchmarks reveals the severity of this problem, as shown in Table 1.

Figure 2. The critical distinction between ego-centric and general collision prediction. Top row: ego-vehicle involved events requiring immediate driver intervention. Bottom row: non-ego accidents that are visually prominent but irrelevant to ego-vehicle safety. BADAS focuses exclusively on ego-relevant scenarios to minimize false alarms while maintaining high sensitivity to actual threats. Examples from the DAD dataset [2].

Table 1. Re-annotation results of major collision-prediction datasets. The high percentage of non-ego accidents (40-92%) suggests that these datasets should be used with caution when developing or evaluating ADAS-related methods. Pos-Ego: ego-involved accidents. Pos-Not Ego: non-ego accidents. Less-2s: samples filtered out due to insufficient anticipation time, i.e., cases where the collision occurs within 2 seconds of the beginning of the video; our model requires a full 2-second prediction horizon, so such cases are invalid for anticipation. Negative: normal driving videos. %Pos-Not Ego: percentage of non-ego accidents among retained positive samples.

  Dataset         #Pos-Ego  #Pos-Not Ego  #Less-2s  #Negative  %Pos-Not Ego
  DADA-2000 [4]   75        51            2         0          40.5
  DAD [2]         13        150           2         301        92.0
  DoTA [3]        327       255           16        0          43.8
  Nexar [5]       672       0             0         672        0.0

Beyond the ego-centric issue, existing datasets exhibit additional limitations that pose challenges for effective ADAS development. DoTA and DADA-2000 contain only positive examples, making it impossible to evaluate false positive rates, a crucial metric for user acceptance. They also lack consistency in defining alert timing: DAD bases alerts on abnormal behavior onset, DoTA uses subjective "inevitability" judgments, and DADA-2000 triggers alerts upon third-party vehicle appearance regardless of actual risk.

The recently introduced Nexar Dashcam Collision Prediction Dataset [5, 6] addresses these fundamental problems through a paradigm shift in data collection. Unlike existing datasets, Nexar comprises 1.5k real-world dashcam videos from actual drivers experiencing genuine collisions and near-misses. This dataset provides three key advantages: (1) exclusive focus on ego-vehicle involved events, eliminating irrelevant accidents; (2) inclusion of near-collision events where accidents are avoided through emergency maneuvers, capturing the much more common successfully-resolved dangerous situations that provide rich training signals; and (3) standardized, consensus-based alert timing annotations from 10 human annotators that enable consistent temporal evaluation. The real-world nature of this data is crucial: it captures the true distribution of driving scenarios, including rare events that synthetic datasets cannot replicate and that drivers actually encounter on the road.

Our approach leverages recent advances in video foundation models, particularly V-JEPA2 [1], which has shown strong capabilities in understanding temporal dynamics and visual patterns. These architectures excel when trained on data that connects visual patterns to concrete outcomes, precisely what ego-centric collision and near-collision events provide. By fine-tuning V-JEPA2 on Nexar's high-quality ego-relevant dataset, we harness its temporal reasoning capabilities for collision prediction.
The combination of modern architectures with appropriate real-world data yields the significant performance gains shown in Figure 1, substantially surpassing both task-specific architectures and commercial ADAS systems.

The contributions of this paper are as follows:

1. Ego-centric problem reformulation: We redefine collision prediction as an ego-centric task and systematically re-annotate major benchmarks (DAD, DoTA, DADA-2000) to identify ego-vehicle involvement, revealing that 40-92% of their accidents are irrelevant for ADAS applications. These annotations are publicly released to benefit the community.

2. Standardized temporal evaluation: We establish a coherent definition of alert timing through 10-annotator consensus and contribute precise temporal annotations for all test sets, addressing the inconsistent and subjective definitions across existing datasets. Our results in Figure 8 emphasize that a longer estimated time-to-accident may represent early anticipation rather than better actual prediction.

3. State-of-the-art performance: Our models surpass existing methods by fine-tuning V-JEPA2 [1] on high-quality, ego-centric real-world data, demonstrating that combining advanced architectures with appropriate data significantly outperforms both academic methods and commercial vision-based forward collision warning systems.

4. Real-world data importance: We demonstrate that practical collision prediction requires training on genuine traffic situations rather than synthetic or controlled environments. A preliminary analysis (Figure 9a) already reveals the natural long-tailed structure of real-world collisions, underscoring the value of diverse large-scale data.

The rest of this paper is organized as follows: Section 2 reviews related work on collision prediction datasets and methods and on video understanding. Section 3 presents our methodology, including model architecture and training protocols. Section 4 describes our experimental setup and re-annotation process. Section 5 presents a comprehensive analysis and results. We conclude in Section 6 and discuss potential directions for future research in Section 7.

2. Related Work

2.1. Traffic Accident Prediction Methods

Recent advances in traffic accident prediction have produced various approaches, though we focus our comparison on methods with open-source implementations that enable reproducible evaluation on our ego-centric test sets. UString [7] uses RNN-based architectures with adaptive loss functions that emphasize frames near collision events, dynamically adjusting the loss contribution based on temporal proximity to accidents. DSTA [8] employs transformer-based architectures with dynamic spatial-temporal attention mechanisms, allowing the model to focus on relevant spatial regions while tracking their temporal evolution. Both methods provide open-source implementations, facilitating direct comparison.

While other approaches using graph neural networks [9], reinforcement learning [10], and multi-modal fusion [11] exist in the literature, the lack of publicly available implementations prevents fair comparison on our ego-centric datasets. Therefore, our experimental comparison focuses on UString and DSTA as the current open-source state of the art.

2.2. Collision Prediction Datasets and Temporal Annotations

Existing datasets differ significantly in their temporal annotation approaches. DAD [2] contains 1,750 videos and defines accidents based on abnormal behavior onset rather than impact moments.
DoTA [3] provides 4,990 videos with subjective annotations marking when accidents appear "inevitable," resulting in inconsistent temporal boundaries. DADA-2000 [4] offers 1,962 videos annotated from vehicle appearance to collision, which may mark accidents as predictable too early, since vehicle presence alone does not indicate danger.

The Nexar dataset [6] addresses these limitations through consensus-based alert times from multiple annotators, establishing both the earliest moment of recognizable danger ($t_{\mathrm{alert}}$) and precise collision timing ($t_{\mathrm{collision}}$), enabling evaluation of both prediction accuracy and temporal appropriateness.

2.3. Foundation Models for Video Understanding

Video foundation models have evolved from temporal extensions of image architectures to sophisticated self-supervised approaches. SlowFast networks [12] process videos at multiple temporal resolutions, while Video Swin Transformers [13] extend window-based attention temporally. Self-supervised methods like VideoMAE [14] reconstruct masked spatial-temporal patches, while V-JEPA [15] predicts abstract representations rather than raw pixels. V-JEPA2 [1] further improves the masking strategies and training objectives for temporal understanding, making it particularly suited for anticipatory tasks. BADAS is the first application of V-JEPA2 to collision prediction, demonstrating that fine-tuning on ego-centric data substantially outperforms task-specific architectures.

2.4. Industrial ADAS Systems

Current ADAS Forward Collision Warning (FCW) systems typically combine radar and camera sensors with physics-based models to calculate time-to-collision and trigger alerts [16]. For comparison purposes, we focus on vision-only implementations that process monocular dashcam inputs. The open-source FCW system [17] uses YOLO [18] for object detection and [19] for lane detection, raising alerts when detected objects fall within distance thresholds. This provides a baseline representing currently deployed vision-based ADAS technology.

3. Methodology

3.1. Problem Formulation

Building on the Nexar Challenge [5], we reformulate collision prediction as an ego-centric task. Given a dashcam video sequence $V = \{f_1, \dots, f_T\}$, we predict at each timestep $t$ the probability $p_t = P(\text{ego-collision} \mid f_{1:t})$ that the ego vehicle will be involved in a collision within a time horizon $\tau$.

Ego-vehicle incidents: scenarios where the ego vehicle experiences a collision or executes emergency maneuvers, including direct collisions, near-misses requiring evasive action, and situations prevented only by emergency intervention. Non-ego incidents: accidents visible but irrelevant to ego-vehicle safety, including adjacent-lane collisions and distant events. Near-collisions (emergency maneuvers that prevented accidents) are treated as positive training examples, providing rich supervisory signals from successfully-resolved dangerous situations.

3.2. BADAS Architecture

BADAS uses a V-JEPA2 [1] backbone with patch aggregation and a classification head.

V-JEPA2 Backbone: The encoder patchifies videos into 2×16×16 tubelets and passes the tokens through a ViT-L transformer. We process 16 frames of size 256×256, yielding 2048 latent patches of dimension D = 1024.

Patch Aggregation: We aggregate patches using an attentive probe [20]. Given patches $X \in \mathbb{R}^{P \times D}$, learned queries $Q \in \mathbb{R}^{M \times D}$ compute attention scores $A = \mathrm{softmax}(QX^{\top}/\sqrt{D})$. These aggregate the patches via a weight matrix $W \in \mathbb{R}^{D \times d}$ to produce features $AXW \in \mathbb{R}^{M \times d}$, which are concatenated into a final vector of size $Md$ (with M = 12, d = 64).
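For illustration, a minimal PyTorch sketch of the attentive-probe aggregation just described follows; the dimensions match the text, while the initialization and the omitted V-JEPA2 encoder are assumptions rather than the released implementation.

    import torch
    import torch.nn as nn

    class AttentiveProbe(nn.Module):
        def __init__(self, D=1024, M=12, d=64):
            super().__init__()
            self.Q = nn.Parameter(torch.randn(M, D) / D ** 0.5)  # learned queries
            self.W = nn.Parameter(torch.randn(D, d) / D ** 0.5)  # aggregation weights

        def forward(self, X):                       # X: (batch, P, D) latent patches
            # A = softmax(Q X^T / sqrt(D)), one attention map per query
            A = torch.softmax(self.Q @ X.transpose(1, 2) / X.shape[-1] ** 0.5, dim=-1)
            return (A @ X @ self.W).flatten(1)      # (batch, M*d) pooled feature

    probe = AttentiveProbe()
    feats = probe(torch.randn(2, 2048, 1024))       # -> shape (2, 768), i.e. Md = 12*64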
Prediction Head: A three-layer MLP with GELU activations, layer normalization, and dropout probability 0.1 maps the aggregated features to a collision probability. Hidden dimension: 768.

3.3. Training Protocol

We fine-tune V-JEPA2 end-to-end using the AdamW optimizer with learning rate $\eta = 1 \times 10^{-5}$, weight decay $1 \times 10^{-4}$, and a cosine annealing schedule. Binary cross-entropy loss with mixed-precision training and gradient clipping (norm 5.0) ensures stability. Early stopping on a validation set prevents overfitting.

3.4. Dataset Re-annotation Protocol

We systematically re-annotated DAD, DoTA, and DADA-2000 for ego-centric evaluation:

Ego-Involvement Classification: We manually reviewed all accidents, categorizing them as ego-involved (the recording vehicle is directly involved in the accident) or non-ego involved (accidents in adjacent lanes or perpendicular traffic, or visible but non-threatening).

Human Alert Time: 10 annotators with defensive driving certification marked when they would initiate defensive action. The consensus time (median) represents realistic human response timing. Figure 3 shows human reaction patterns across 726 events: median 1.70 s, mean 1.81 s (SD = 0.82 s), with 90% of alerts between 0.70 and 3.47 s before collision. Alerts beyond the 95th percentile (3.47 s) likely respond to normal variations rather than genuine threats.

Figure 3. Cumulative distribution of human reaction times across 726 ego-involved collisions. Median: 1.70 s; 90% range: 0.70-3.47 s before impact.

Synthetic Negatives: For datasets lacking negative samples, we extract the first 4 seconds from videos where the human alert occurs 4.5 s or later after the video start, ensuring realistic driving without collision precursors.

Table 2. Dataset composition after ego-centric re-annotation.

  Dataset      Real Neg.  Real Pos.  Synth. Neg.  Total
  DADA-2000    0          75         38           113
  DAD          301        13         0            314
  DoTA         0          327        40           367

Our consensus-based protocol provides a unified temporal reference across datasets, addressing the existing inconsistencies and enabling fair comparison. The re-annotations are publicly available¹.

3.5. Model Variants

BADAS-Open: Trained solely on Nexar's public dataset (1,500 videos) with balanced collision/normal-driving representation. Achieves state-of-the-art results on major benchmarks. The model and code are publicly available² under the APSv2 license.

BADAS1.0: A commercial variant trained on 40k proprietary sequences (20k collisions/near-misses, 20k normal driving). It uses identical architecture and training protocols but 25× more data, enabling evaluation of data scaling effects and superior edge-case handling.

¹https://github.com/getnexar/BADAS-Open/tree/main
²https://huggingface.co/nexar-ai/BADAS-Open

Table 3. Ego-centric collision prediction performance across benchmarks.

                DAD (n=116)        DADA (n=113)       DoTA (n=367)       Nexar (n=1344)
  Method        AP    AUC   mTTA   AP    AUC   mTTA   AP    AUC   mTTA   AP    AUC   mTTA
  DSTA          0.06  0.59  2.9    0.74  0.60  6.5    0.92  0.55  4.8    0.53  0.54  9.8
  UString       0.06  0.61  2.4    0.69  0.57  5.3    0.90  0.54  4.3    0.48  0.48  9.1
  FCW ADAS      0.11  0.69  2.0    0.77  0.78  3.8    0.94  0.69  3.0    0.58  0.64  8.7
  BADAS-Open    0.66  0.87  2.7    0.87  0.77  4.3    0.94  0.70  4.0    0.86  0.88  4.9
  BADAS1.0      0.94  0.99  2.7    0.90  0.87  4.6    0.95  0.72  4.0    0.91  0.91  3.9
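To make the protocol of Section 3.3 concrete, here is a minimal PyTorch sketch of the fine-tuning loop with the stated hyperparameters; the model, data loader, and epoch count are placeholders, not the released training code.

    import torch
    from torch import nn, optim

    def finetune(model, loader, epochs=10, device="cuda"):
        opt = optim.AdamW(model.parameters(), lr=1e-5, weight_decay=1e-4)
        sched = optim.lr_scheduler.CosineAnnealingLR(opt, T_max=epochs)
        scaler = torch.cuda.amp.GradScaler()            # mixed-precision training
        bce = nn.BCEWithLogitsLoss()
        model.to(device).train()
        for _ in range(epochs):
            for clips, labels in loader:                # clips: (B, 16, 3, 256, 256)
                opt.zero_grad()
                with torch.cuda.amp.autocast():
                    logits = model(clips.to(device)).squeeze(-1)
                    loss = bce(logits, labels.float().to(device))
                scaler.scale(loss).backward()
                scaler.unscale_(opt)
                nn.utils.clip_grad_norm_(model.parameters(), 5.0)  # gradient clipping
                scaler.step(opt)
                scaler.update()
            sched.step()
        # early stopping on a validation set, as in the paper, is omitted here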
4. Experiments

4.1. Experimental Setup

We evaluate BADAS against state-of-the-art collision prediction methods and industrial ADAS systems. Our baseline methods include UString [7], DSTA [8], and FCW ADAS [17], a vision-only forward collision warning system using YOLOv11 [18] for object detection and a TuSimple ResNet-34 [19] for lane detection.

Evaluation is performed on three benchmarks re-annotated for ego-centric assessment, DAD [2], DADA [4], and DoTA [3], plus the Nexar test set. For datasets originally containing only positive samples (DADA, DoTA), we generate synthetic negatives as described in Section 3. Performance is measured using Average Precision (AP) for ranking quality, Area Under the Curve (AUC) for discrimination capability, and mean Time-To-Accident (mTTA) for temporal accuracy.

5. Results

5.1. Quantitative Performance

Table 3 presents a comprehensive summary of our evaluation results. BADAS models achieve substantial improvements over existing methods, with BADAS1.0 reaching 0.94 AP on DAD compared to 0.06 for the baseline methods. The performance gap is particularly striking on the Nexar dataset, the largest and most comprehensive benchmark, where BADAS-Open achieves 0.86 AP versus 0.48-0.53 for the academic baselines and 0.58 for the industrial FCW system.

Notably, BADAS models maintain consistent performance across diverse datasets, indicating robust generalization. The mTTA values reveal a critical distinction: the baseline methods report physically implausible predictions (9-10 seconds before collision on Nexar), while BADAS maintains more realistic 3-5 second windows, better aligned with human prediction capabilities. The industrial FCW system, despite using state-of-the-art detection models, underperformed because its rule-based logic generates excessive false positives in dense traffic, highlighting the advantages of end-to-end learning from real collision data.

5.2. Qualitative Analysis

Figure 4 illustrates prediction scores over time for representative test samples. BADAS-Open (red) demonstrates stable, confident predictions that rise sharply as collisions approach, while the baseline methods exhibit erratic patterns with premature or inconsistent alerting. This stability translates to practical deployment benefits, leading to fewer false alarms and more reliable threat assessment.

Figure 4. Collision prediction scores over time for 6 samples: UString (blue), DSTA (green) and BADAS-Open (red). Vertical dashed lines indicate the collision time; the horizontal line shows the detection threshold.

Frame-by-frame analysis in Figure 5 further demonstrates that, in urban intersection and residential scenarios, BADAS variants correctly identify ego-relevant threats while maintaining low confidence during safe driving periods.

5.3. Data Scaling Effects

Figure 6 illustrates the effect of training data scale, showing a logarithmic improvement in Nexar validation AP as the dataset size increases from 1.5k to 40k videos. This consistent scaling trend supports our hypothesis that leveraging large-scale real-world data continues to yield performance gains, even at the largest scales tested.

5.4. Ablation Studies

We investigate key design choices through systematic ablations. Table 4 examines label window selection, showing that T = 1.5 seconds optimally balances precision (0.785) and recall (0.939): shorter windows miss developing collision patterns, while longer windows include too many non-indicative frames.

Table 4. Performance for different label windowing strategies.

  Window  Precision  Recall  AP     AUC    mTTA
  1.0s    0.792      0.896   0.930  0.930  3.112
  1.5s    0.785      0.939   0.931  0.935  3.880
  2.0s    0.741      0.964   0.925  0.931  4.646
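As a concrete reading of the windowing rule examined in Table 4, the following sketch assumes (our interpretation) that frames within T seconds before the annotated collision time are labeled positive; the function and variable names are placeholders.

    import numpy as np

    def frame_labels(frame_times, t_collision, T=1.5):
        """1 for frames within T seconds before the collision, else 0."""
        frame_times = np.asarray(frame_times)
        return ((frame_times >= t_collision - T) &
                (frame_times < t_collision)).astype(np.int64)

    # e.g., frames sampled every 0.5 s in a 10 s clip with impact at t = 8.0 s
    labels = frame_labels(np.arange(0.0, 10.0, 0.5), t_collision=8.0)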
Data augmentation and oversampling strategies (Table 5) show significant benefits, with the combination of 2× oversampling of positive samples and augmentations improving AP from 0.922 to 0.939. The augmentations (Figure 7) simulate diverse weather and lighting conditions, improving robustness to real-world variations.

Figure 5. Frame-by-frame comparison on Nexar test footage. Top: urban intersection collision. Bottom: residential street scenario. Alert indicators show prediction confidence (green: safe, orange: caution, red: imminent collision). BADAS variants demonstrate accurate ego-centric threat assessment.

Figure 6. Validation AP versus training data size, showing logarithmic performance improvement with data scaling.

Table 5. Impact of sampling strategies and augmentations.

  Oversampling Rate  Augmentations  AP     AUC
  1×                 No             0.922  0.927
  2×                 No             0.924  0.922
  1×                 Yes            0.931  0.935
  2×                 Yes            0.939  0.941

    import albumentations as A

    A.Compose([
        A.RandomBrightnessContrast(p=0.5),
        A.HueSaturationValue(p=0.5),
        A.GaussianBlur(blur_limit=(3, 7), p=0.3),
        A.CLAHE(p=0.3),
        A.RandomGamma(p=0.3),
        A.RGBShift(p=0.3),
        A.MotionBlur(blur_limit=7, p=0.3),
        A.RandomRain(p=0.2),
        A.RandomSnow(p=0.2),
        A.RandomShadow(p=0.2),
    ])

Figure 7. Data augmentations for weather and lighting robustness.

Architecture ablations (Table 6) show the contribution of each component. We train models with the base V-JEPA2 backbone and: only a linear head; the attentive probe with a linear head; and the attentive probe with the MLP head. We test training both with the base model frozen and fine-tuned. Fine-tuning the V-JEPA2 backbone provides the largest gain (0.707 → 0.928 AP), while the attentive probe and MLP head add incremental improvements, reaching 0.939 AP with the complete architecture.

Table 6. Architecture component contributions.

  Configuration                    AP     AUC
  Frozen base                      0.707  0.744
  Frozen base + probe              0.782  0.811
  Frozen base + probe + MLP        0.771  0.809
  Fine-tuned base                  0.928  0.935
  Fine-tuned base + probe          0.929  0.936
  Fine-tuned base + probe + MLP    0.939  0.941

5.5. Temporal Accuracy Analysis

Figure 8 compares Time-To-Accident distributions at 80% confidence against human consensus annotations. BADAS-Open's median TTA of 3.0 seconds closely aligns with practical requirements, providing sufficient warning while avoiding premature alerts. In contrast, UString and DSTA show median TTAs of 6.2 s and 7.5 s respectively, with high variance extending to implausible 14-second predictions. This unrealistically early detection often manifests as false positives in deployment, posing challenges when integrating these methods into real-world ADAS.

Figure 8. TTA distributions at 80% confidence. Human consensus (green) shows expert annotations. BADAS-Open demonstrates realistic timing while baselines exhibit implausibly early predictions.

5.6. Long-Tail Performance

Beyond common vehicle-to-vehicle interactions, real-world collision prediction must also recognize rare but safety-critical scenarios. To assess BADAS-Open's robustness under such conditions, we measure recall across various third-party long-tail categories at a confidence threshold of 0.85, relative to the recall observed on the dominant "vehicle" class. As shown in Figure 9, the model achieves the highest recall on vehicle-related events, with only a modest drop for larger vehicles (trucks and buses).
6. Conclusion

This work introduces BADAS, a new approach to collision prediction that focuses on ego-vehicle safety through an ego-centric problem formulation. Building on insights from the Nexar Dashcam Collision Prediction Challenge, we demonstrate that focusing exclusively on ego-vehicle threats, rather than general accident detection, dramatically improves real-world performance.

Our systematic re-annotation of major benchmarks reveals fundamental issues with existing datasets: a significant portion of annotated accidents do not involve the ego vehicle, leading models to learn patterns irrelevant to ego-vehicle safety. By filtering for ego-relevance and establishing human baseline reaction times, we create evaluation protocols that better reflect real-world deployment requirements. Our synthetic negative sampling method further improves the balance between positive and negative samples and reduces the bias in AP and AUC measurements.

Figure 9. (a) Distribution of third-party categories, representing the interacting agents. Static infrastructure includes buildings, fences, poles and traffic signs. The vehicle class corresponds to the Nexar [5] test set used throughout this work, while the remaining categories originate from additional annotated data. (b) BADAS-Open recall across these categories at a confidence threshold of 0.85.

We further highlight the necessity of a coherent definition and annotation scheme for alert time, to serve as a reference for the predicted mTTA values. Our findings show varying levels of early prediction in all methods. This is especially important for practical systems, as these early predictions will manifest as false alerts when deployed in real ADAS or AV frameworks.

We present two model variants addressing different deployment needs: BADAS-Open, trained exclusively on 1.5k public Nexar videos, and BADAS1.0, leveraging 40k videos from Nexar's proprietary dataset. Both models achieve state-of-the-art performance when compared to leading research methods and FCW systems. The significant performance gain observed with increased data volume suggests that the potential of data scaling has not yet been fully saturated. The BADAS-Open model and code are released to the research community.

While our model outperforms existing state-of-the-art results, we also highlight the long-tail nature of collision and near-collision distributions, showing that BADAS-Open performance significantly deteriorates on minority classes. This result is expected, as any model trained on an imbalanced dataset naturally focuses on majority classes (e.g., vehicle-to-vehicle accidents). However, edge cases must also be taken into account, first by explicitly evaluating current model performance on them, and later by developing dedicated strategies to improve their prediction.
7. Future Work

While this study provides encouraging evidence for the effectiveness of context-aware architectures in collision prediction, several open challenges remain. Future research directions include expanding the dataset to further enhance generalization, improving mean time-to-alert (mTTA) to reduce false alerts in real-world systems, and addressing long-tail categories to better evaluate and predict diverse and rare driving scenarios. Our model's ability to recognize complex and risky situations even before collisions occur, as illustrated in Figure 5, suggests the potential to extend collision prediction models beyond a binary formulation, toward a three-level taxonomy: normal, warning, and alert. Such an approach could be particularly beneficial for autonomous driving systems, enabling adaptive decision-making based on momentary risk levels. We refer readers to our project page (https://www.nexar-ai.com/badas) for full-length examples.

Ultimately, advancing reliable and context-aware collision prediction can contribute significantly to the broader goal of safer, more anticipatory driver assistance systems, and may play a key role in bridging the gap between current ADAS technologies and fully autonomous driving.

References

[1] M. Assran, A. Bardes, D. Fan, Q. Garrido, R. Howes, M. Komeili, M. Muckley, A. Rizvi, C. Roberts, K. Sinha, A. Zholus, S. Arnaud, A. Gejji, A. Martin, F. R. Hogan, D. Dugas, P. Bojanowski, V. Khalidov, P. Labatut, F. Massa, M. Szafraniec, K. Krishnakumar, Y. Li, X. Ma, S. Chandar, F. Meier, Y. LeCun, M. Rabbat, and N. Ballas, "V-JEPA 2: Self-supervised video models enable understanding, prediction and planning," 2025.

[2] F.-H. Chan, Y.-T. Chen, Y. Xiang, and M. Sun, "Anticipating accidents in dashcam videos," in Computer Vision - ACCV 2016: 13th Asian Conference on Computer Vision, Taipei, Taiwan, November 20-24, 2016, Revised Selected Papers, Part IV. Springer, 2017, pp. 136-153.

[3] Y. Yao, X. Wang, M. Xu, Z. Pu, Y. Wang, E. Atkins, and D. J. Crandall, "DoTA: Unsupervised detection of traffic anomaly in driving videos," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 45, no. 1, pp. 444-459, 2022.

[4] J. Fang, D. Yan, J. Qiao, J. Xue, and H. Yu, "DADA: Driver attention prediction in driving accident scenarios," IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 6, pp. 4959-4971, 2021.

[5] D. C. Moura, S. Zhu, and O. Zvitia, "Nexar dashcam crash prediction challenge," https://kaggle.com/competitions/nexar-collision-prediction, 2025, Kaggle.

[6] D. Moura, S. Zhu, and O. Zvitia, "Nexar dashcam collision prediction dataset and challenge," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2025, pp. 2583-2591.

[7] T. Suzuki, H. Kataoka, Y. Aoki, and Y. Satoh, "Anticipating traffic accidents with adaptive loss and large-scale incident DB," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 3521-3529.

[8] M. M. Karim, Y. Li, R. Qin, and Z. Yin, "A dynamic spatial-temporal attention network for early anticipation of traffic accidents," IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 7, pp. 9590-9600, 2022.

[9] A. V. Malawade, S.-Y. Yu, B. Hsu, D. Muthirayan, P. P. Khargonekar, and M. A. Al Faruque, "Spatiotemporal scene-graph embedding for autonomous vehicle collision prediction," IEEE Internet of Things Journal, vol. 9, no. 12, pp. 9379-9388, 2022.
[10] W. Bao, Q. Yu, and Y. Kong, "DRIVE: Deep reinforced accident anticipation with visual explanation," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 7619-7628.

[11] J. Fang, L.-l. Li, J. Zhou, J. Xiao, H. Yu, C. Lv, J. Xue, and T.-S. Chua, "Abductive ego-view accident video understanding for safe driving perception," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024, pp. 22030-22040.

[12] C. Feichtenhofer, H. Fan, J. Malik, and K. He, "SlowFast networks for video recognition," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 6202-6211.

[13] Z. Liu, J. Ning, Y. Cao, Y. Wei, Z. Zhang, S. Lin, and H. Hu, "Video Swin Transformer," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 3202-3211.

[14] Z. Tong, Y. Song, J. Wang, and L. Wang, "VideoMAE: Masked autoencoders are data-efficient learners for self-supervised video pre-training," Advances in Neural Information Processing Systems, vol. 35, pp. 10078-10093, 2022.

[15] A. Bardes, Q. Garrido, J. Ponce, X. Chen, M. Rabbat, Y. LeCun, M. Assran, and N. Ballas, "Revisiting feature prediction for learning visual representations from video," arXiv preprint arXiv:2404.08471, 2024.

[16] "Euro NCAP assessment protocol - collision avoidance," European New Car Assessment Programme (Euro NCAP), Tech. Rep., version 1.0.4.1, October 2020. Defines Forward Collision Warning (FCW) and Autonomous Emergency Braking (AEB) test procedures. [Online]. Available: https://www.euroncap.com/media/80154/euro-ncap-assessment-protocol-sa-collision-avoidance-v1041.pdf

[17] J. Li, "Vehicle-CV-ADAS," https://github.com/jason-li-831202/Vehicle-CV-ADAS, 2023, accessed: 2025-10-02.

[18] R. Khanam and M. Hussain, "YOLOv11: An overview of the key architectural enhancements," arXiv preprint arXiv:2410.17725, 2024.

[19] Z. Qin, P. Zhang, and X. Li, "Ultra fast deep lane detection with hybrid anchor driven ordinal classification," IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 1-14, 2022.

[20] B. Psomas, D. Christopoulos, E. Baltzi, I. Kakogeorgiou, T. Aravanis, N. Komodakis, K. Karantzalos, Y. Avrithis, and G. Tolias, "Attention, please! Revisiting attentive probing for masked image modeling," 2025. [Online]. Available: https://arxiv.org/abs/2506.10178
Tight bounds towards Zarankiewicz problem in hypergraph

Guorong Gao^{a,b}*, Jianfeng Hou^{b}†, Shuping Huang^{c}‡, Hezhi Wang^{a}§

^{a}Center for Discrete Mathematics and Theoretical Computer Science, Fuzhou University, Fuzhou, Fujian, China
^{b}School of Mathematics and Statistics, Fuzhou University, Fuzhou, Fujian, China
^{c}School of Mathematical Sciences, University of Science and Technology of China, Hefei, China

Abstract

The classical Zarankiewicz problem, which concerns the maximum number of edges in a bipartite graph without a forbidden complete bipartite subgraph, motivates a direct analogue for hypergraphs. Let $K_{s_1,\dots,s_r}$ be the complete $r$-partite $r$-graph such that the $i$-th part has $s_i$ vertices. We say an $r$-partite $r$-graph $H = H(V_1, \dots, V_r)$ contains an ordered $K_{s_1,\dots,s_r}$ if $K_{s_1,\dots,s_r}$ is a subgraph of $H$ and the part of size $s_i$ is embedded in $V_i$. The Zarankiewicz number for $r$-graphs, denoted by $z(m_1, \dots, m_r; s_1, \dots, s_r)$, is the maximum number of edges of an $r$-partite $r$-graph whose $i$-th part has $m_i$ vertices and which does not contain an ordered $K_{s_1,\dots,s_r}$. In this paper, we show that
$$z(m_1, m_2, \dots, m_{r-1}, n; s_1, s_2, \dots, s_{r-1}, t) = \Theta\left(m_1 m_2 \cdots m_{r-1}\, n^{1 - 1/(s_1 s_2 \cdots s_{r-1})}\right)$$
for a range of parameters. This extends a result of Conlon [Math. Proc. Camb. Philos. Soc. (2022)].

Keywords: hypergraph, Zarankiewicz problem, random algebraic method

*Research supported by National Key R&D Program of China (Grant No. 2023YFA1010202), National Natural Science Youth Foundation of China (Grant No. 12401448), Natural Science Foundation of Fujian Province (Grant No. 2024J08030). Email: grgao@fzu.edu.cn
†Research supported by National Key R&D Program of China (Grant No. 2023YFA1010202), National Natural Science Foundation of China (Grant No. 12071077), the Central Guidance on Local Science and Technology Development Fund of Fujian Province (Grant No. 2023L3003). Email: jfhou@fzu.edu.cn
‡Email: hsp@mail.ustc.edu.cn
§Email: sdlgsjwhz@126.com

arXiv:2510.14869v1 [math.CO] 16 Oct 2025

1 Introduction

An $r$-uniform hypergraph (or $r$-graph for convenience) $H = (V(H), E(H))$ is a pair of a vertex set $V(H)$ and an edge set $E(H)$, where the edge set is a collection of $r$-element subsets of the vertex set. Given a graph $F$, the Turán number of $F$, denoted by $\mathrm{ex}(n, F)$, is the maximum possible number of edges in an $n$-vertex $F$-free graph. The classical Erdős–Stone–Simonovits theorem gives an estimate for this function, showing that
$$\mathrm{ex}(n, F) = \left(1 - \frac{1}{\chi(F) - 1} + o(1)\right)\binom{n}{2},$$
where $\chi(F)$ is the chromatic number of $F$. For bipartite $F$, this gives only the bound $\mathrm{ex}(n, F) = o(n^2)$. While more precise estimates are known, a number of notoriously difficult open problems remain. The most intensively studied case is when $F = K_{s,t}$, the complete bipartite graph with parts of order $s$ and $t$. Erdős–Rényi–Brown [2] gave optimal lower bounds for $K_{2,t}$ and $K_{3,t}$. Then Kollár–Rónyai–Szabó [9] and Alon–Rónyai–Szabó [1] proved that
$$\mathrm{ex}(n, K_{s,t}) = \Theta\left(n^{2 - 1/s}\right) \qquad (1)$$
where $t > (s-1)!$. Bukh [3, 4] introduced the powerful random algebraic method and showed that (1) holds as long as $t > 9^{s + o(s)}$.

Write $G = G(m, n)$ for a bipartite graph with parts of size $m$ and $n$. The Zarankiewicz number $z(m, n; s, t)$ is the maximum number of edges in a $G(m, n)$ with parts $U$ and $V$ such that there is no copy of $K_{s,t}$ with $s$ vertices in $U$ and $t$ vertices in $V$. The classical Kővári–Sós–Turán theorem [10] gives $z(m, n; s, t) = O(m n^{1 - 1/s})$. By the random algebraic method, Conlon [6] proved that $z(m, n; s, t) = \Theta(m n^{1 - 1/s})$ for any fixed $2 \le s \le t$ and any $m \le n^{t^{1/(s-1)}/(s(s-1))}$.
Let $K_{s_1,\dots,s_r}$ be the complete $r$-partite $r$-graph such that the $i$-th part has $s_i$ vertices. Ma, Yuan and Zhang [11] proved that
$$\mathrm{ex}(n, K_{s_1,s_2,\dots,s_{r-1},t}) = \Omega\left(n^{r - \frac{1}{s_1 s_2 \cdots s_{r-1}}}\right)$$
for sufficiently large $t$. More recently, Pohoata and Zakharov [13] proved the same lower bound as long as $t > ((r-1)(s-1))!$, and Mubayi [12] improved this bound on $t$ substantially in the Zarankiewicz case, from factorial to exponential, at the expense of a small $o(1)$ error parameter in the exponent.

In this paper, we study the Zarankiewicz problem in hypergraphs. We say that an $r$-partite $r$-graph $H = H(V_1, \dots, V_r)$ contains an ordered $K_{s_1,\dots,s_r}$ if $K_{s_1,\dots,s_r}$ is a subgraph of $H$ and the part of size $s_i$ is embedded in $V_i$. The Zarankiewicz number for $r$-graphs, denoted by $z(m_1, \dots, m_r; s_1, \dots, s_r)$, is the maximum number of edges of an $r$-partite $r$-graph whose $i$-th part has $m_i$ vertices and which does not contain an ordered $K_{s_1,\dots,s_r}$. If $m_1 = m_2 = \cdots = m_r = n$, then we write $z(n; s_1, \dots, s_r)$ for short. Recently, Mubayi [12] proved that
$$z(n; s_1, \dots, s_{r-1}, t) > n^{1 - o(1)} \cdot z(n; s_1, \dots, s_{r-3}, s_{r-2} s_{r-1}, t)$$
for any $r \ge 3$ and positive integers $s_1, \dots, s_{r-1}, t$, as $n \to \infty$.

Our first result is a supersaturation result for $K_{s_1,\dots,s_r}$, which extends a classical theorem of Erdős [7].

Theorem 1.1. Let $H$ be an $r$-partite $r$-graph with parts $V_1, \dots, V_r$, where $|V_i| = m_i$ for $1 \le i \le r$ and $m_r = \min_{1 \le i \le r} m_i$. Then there exist constants $c_1$ and $c_2$ such that if $|E(H)| \ge c_1 \cdot \left(\prod_{i=1}^{r-1} m_i\right) \cdot m_r^{1 - \frac{1}{s_1 s_2 \cdots s_{r-1}}}$, then $H$ contains at least $c_2 \cdot \prod_{i=1}^{r} \binom{m_i}{s_i} \cdot p^{\prod_{i=1}^{r} s_i}$ copies of ordered $K_{s_1, s_2, \dots, s_r}$, where $p = |E(H)| / \prod_{i=1}^{r} m_i$.

As a corollary of Theorem 1.1, we have the following upper bound for the Zarankiewicz problem, which extends the Kővári–Sós–Turán theorem [10].

Corollary 1.2. For any fixed $m_1, m_2, \dots, m_r$ and $s_1, s_2, \dots, s_r$, if $m_i \ge m_r$ for all $1 \le i \le r-1$, then
$$z(m_1, m_2, \dots, m_r; s_1, s_2, \dots, s_r) = O\left(m_1 m_2 \cdots m_{r-1}\, m_r^{1 - 1/(s_1 s_2 \cdots s_{r-1})}\right).$$

By a generalization of Bukh's random algebraic method, we prove the following lower bound for the Zarankiewicz problem, which extends the result of Conlon [6].

Theorem 1.3. For any fixed $m_1, m_2, \dots, m_{r-1}, n$ and $s_1, s_2, \dots, s_{r-1}, t$, let $m = \prod_{i=1}^{r-1} m_i$ and $s = \prod_{i=1}^{r-1} s_i$. If $s \le t$ and $m \le n^{t^{1/(s-1)}/(s(s-1))}$, then
$$z(m_1, m_2, \dots, m_{r-1}, n; s_1, s_2, \dots, s_{r-1}, t) = \Omega\left(m_1 m_2 \cdots m_{r-1}\, n^{1 - 1/(s_1 s_2 \cdots s_{r-1})}\right).$$

Combining Corollary 1.2 and Theorem 1.3, we obtain tight bounds (up to constants) for the Zarankiewicz problem in hypergraphs over a range of parameters.

Corollary 1.4. For any fixed $m_1, m_2, \dots, m_{r-1}, n$ and $s_1, s_2, \dots, s_{r-1}, t$, let $m = \prod_{i=1}^{r-1} m_i$ and $s = \prod_{i=1}^{r-1} s_i$. If $s \le t$, $m \le n^{t^{1/(s-1)}/(s(s-1))}$ and $m_i \ge n$ for all $1 \le i \le r-1$, then
$$z(m_1, m_2, \dots, m_{r-1}, n; s_1, s_2, \dots, s_{r-1}, t) = \Theta\left(m_1 m_2 \cdots m_{r-1}\, n^{1 - 1/(s_1 s_2 \cdots s_{r-1})}\right).$$

In Section 2, we prove Theorem 1.3; Theorem 1.1 is proved in Section 3. Throughout this paper, we adopt standard notation. In particular, for an $r$-graph $H$, the degree of a vertex $v \in V(H)$, denoted by $d_H(v)$, is the number of edges of $H$ containing $v$. We omit the subscript if there is no confusion.

Remark. After we completed this work, we noticed that Chen, Liu and Ye [5] established the same lower bound on $z(m_1, m_2, \dots, m_{r-1}, n; s_1, s_2, \dots, s_{r-1}, t)$ for larger $t$ and without the restriction on $n$. Their method is another generalization of Bukh's random algebraic method.
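A quick consistency check (our observation, immediate from the statement): specializing Corollary 1.4 to $r = 2$, so that $m = m_1$ and $s = s_1$, recovers Conlon's bipartite theorem on the overlap of the two parameter ranges,
\[
z(m_1, n; s_1, t) = \Theta\!\left(m_1 n^{1 - 1/s_1}\right)
\quad\text{for } s_1 \le t \text{ and } n \le m_1 \le n^{t^{1/(s_1-1)}/(s_1(s_1-1))}.
\]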
2 Proof of Theorem 1.3

The proof employs the powerful random algebraic method developed in [3, 4, 6]. For a prime power $q$, let $\mathbb{F}_q$ be the finite field of order $q$. We write a polynomial in $t$ variables over $\mathbb{F}_q$ as $f(X): \mathbb{F}_q^t \to \mathbb{F}_q$, where $X = (X_1, \dots, X_t)$. Let $P_d$ be the set of polynomials in $X$ of degree at most $d$, that is, polynomials of the form
$$f = \sum_{a_1 + \cdots + a_t \le d} \alpha_{a_1,\dots,a_t} X_1^{a_1} X_2^{a_2} \cdots X_t^{a_t}, \qquad \alpha_{a_1,\dots,a_t} \in \mathbb{F}_q.$$
By a random polynomial, we just mean a polynomial chosen uniformly from the set $P_d$. One may produce such a random polynomial by taking the coefficients of the monomials above to be independent random elements of $\mathbb{F}_q$.

The following lemma, due to Conlon [6], estimates the probability that a randomly selected polynomial from $P_d$ vanishes at $m$ distinct points. Let $\overline{\mathbb{F}}_q$ be the algebraic closure of $\mathbb{F}_q$.

Lemma 2.1 ([6]). Suppose that $q > \binom{m}{2}$ and $d \ge m - 1$. If $f$ is a random $t$-variate polynomial of degree $d$ over $\mathbb{F}_q$ and $x_1, \dots, x_m$ are $m$ distinct points in $\overline{\mathbb{F}}_q^{\,t}$, then
$$\mathbb{P}\left(f(x_i) = 0 \text{ for all } i = 1, \dots, m\right) \le 1/q^m.$$

Over an algebraically closed field $F$, a variety is a set of the form
$$W = \left\{x \in F^t : f_1(x) = \cdots = f_s(x) = 0\right\}$$
for some collection of polynomials $f_1, \dots, f_s: F^t \to F$. The variety is irreducible if it cannot be written as the union of two proper subvarieties. The dimension $\dim W$ of $W$ is then the maximum integer $d$ such that there exists a chain of irreducible subvarieties of $W$ of the form
$$\emptyset \subsetneq \{p\} \subsetneq W_1 \subsetneq W_2 \subsetneq \cdots \subsetneq W_d \subseteq W,$$
where $p$ is a point. Another useful description of the dimension is $\dim W = \max\{\dim W_i \mid W_i \text{ is an irreducible component of } W\}$.

In what follows we state three standard lemmas about varieties.

Lemma 2.2. Every variety $W$ over an algebraically closed field $F$ with $\dim W \ge 1$ has infinitely many points.

Lemma 2.3. Suppose that $W$ is an irreducible variety over an algebraically closed field $F$. Then, for any polynomial $g: F^t \to F$, either $W \subseteq \{x : g(x) = 0\}$ or $W \cap \{x : g(x) = 0\}$ is a variety of dimension less than $\dim W$.

Lemma 2.4 (Bézout's theorem [8]). If, for a collection of polynomials $f_1, \dots, f_t: F^t \to F$, the variety $W = \{x \in F^t : f_1(x) = \cdots = f_t(x) = 0\}$ has $\dim W = 0$, then
$$|W| \le \prod_{i=1}^{t} \deg(f_i).$$
Moreover, for a collection of polynomials $f_1, \dots, f_s: F^t \to F$, the variety $W = \{x \in F^t : f_1(x) = \cdots = f_s(x) = 0\}$ has at most $\prod_{i=1}^{s} \deg(f_i)$ irreducible components.

Proof of Theorem 1.3. Let $l = \prod_{i=1}^{r-1} l_i$ and $s = \prod_{i=1}^{r-1} s_i$, and fix $d = \lceil t^{1/(s-1)} \rceil - 1$ and $\ell = \lfloor q^{(d+1)/(s-1)}/2d \rfloor$. Consider the $r$-partite $r$-graph between sets $U_1, U_2, \dots, U_{r-1}$ and $V$, where $V$ may be viewed as a copy of $\mathbb{F}_q^s$ for some prime power $q$ and each $U_i$ has order $l_i$, $i = 1, 2, \dots, r-1$. Every tuple $(u_{p_1}, u_{p_2}, \dots, u_{p_{r-1}})$ with $u_{p_i} \in U_i$ is associated to an $(s_1 s_2 \cdots s_{r-1} - 1)$-variate polynomial $f_{p_1, \dots, p_{r-1}}$ of degree at most $d$ with coefficients in $\mathbb{F}_q$. Each $(u_{p_1}, u_{p_2}, \dots, u_{p_{r-1}})$ is then joined to the set of points
$$S_{p_1, \dots, p_{r-1}} = \left\{\left(x_1, \dots, x_{s-1},\, f_{p_1, \dots, p_{r-1}}(x_1, \dots, x_{s-1})\right) : x_1, \dots, x_{s-1} \in \mathbb{F}_q\right\}$$
in $V$. Note that, for any $1 \le j_i \le s_i$, a choice of indices $1 \le p_1^i < \cdots < p_{j_i}^i \le l_i$ in each coordinate $i = 1, 2, \dots, r-1$ determines $j = \prod_{i=1}^{r-1} j_i$ index tuples $(p_1, p_2, \dots, p_{r-1})$, which we denote by $a_1, a_2, \dots, a_j$. Then
$$S_{a_1} \cap \cdots \cap S_{a_j} = \left\{(x_1, \dots, x_s) : x_s = f_{a_1}(x_1, \dots, x_{s-1}) = \cdots = f_{a_j}(x_1, \dots, x_{s-1})\right\}.$$
This intersection therefore has the same size as $T_{a_1, a_2} \cap \cdots \cap T_{a_1, a_j}$, where
$$T_{a, a'} = \left\{(x_1, \dots, x_{s-1}) : (f_a - f_{a'})(x_1, \dots, x_{s-1}) = 0\right\}.$$
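As an entirely illustrative toy instance of this construction (not part of the proof), the sketch below takes the case $r = 2$, $s = 2$: each vertex class is a single univariate polynomial over a small prime field, joined to the $q$ points of its graph in $\mathbb{F}_q^2$. The parameter values and helper names are our own.

import random

q, d = 5, 2  # toy field F_5 and degree bound; the proof takes q large

def rand_poly():
    # A uniform element of P_d: independent uniform coefficients in F_q.
    return [random.randrange(q) for _ in range(d + 1)]

def evaluate(coeffs, x):
    return sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q

f, g = rand_poly(), rand_poly()
# The class of f is joined to the q points (x, f(x)) of its graph in F_q^2.
S_f = {(x, evaluate(f, x)) for x in range(q)}
S_g = {(x, evaluate(g, x)) for x in range(q)}
# |S_f ∩ S_g| is the number of roots of f - g, hence at most d when f != g
# (a nonzero polynomial of degree <= d < q cannot vanish at all q points),
# mirroring the Bezout-type bound used in the general argument.
assert len(S_f) == q and (f == g or len(S_f & S_g) <= d)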
Lemma 2.5. Let $H$ be an $r$-partite $r$-graph with parts $V_1, V_2, \dots, V_{r-1}$ and $U$, where $|V_i| = l_i$ for $i = 1, 2, \dots, r-1$ and $|U| = n$, and let $l = \prod_{i=1}^{r-1} l_i$, $s = \prod_{i=1}^{r-1} s_i$ and $j = \prod_{i=1}^{r-1} j_i$. There exists a choice of polynomials $f_{p_1, \dots, p_{r-1}}$ of degree at most $d$, $1 \le p_1 \le l_1, \dots, 1 \le p_{r-1} \le l_{r-1}$, such that the dimension of the intersection $T_{a_1, a_2} \cap \cdots \cap T_{a_1, a_j}$ is at most $s - j$.

The main idea of the proof of Lemma 2.5 can be found in [6], although it does not appear there as a separate statement. To keep this paper self-contained, we include the proof of this lemma in the appendix.

By Lemma 2.5, there is a choice of $f_{p_1, \dots, p_{r-1}}$ for $1 \le p_1 \le l_1, \dots, 1 \le p_{r-1} \le l_{r-1}$ such that for any $1 \le j_i \le s_i$ and $1 \le p_1^i < p_2^i < \cdots < p_{j_i}^i \le l_i$, $i = 1, 2, \dots, r-1$, the intersection $T_{a_1, a_2} \cap \cdots \cap T_{a_1, a_s}$ has dimension 0. So, by Lemma 2.4, the number of points in the intersection is at most $d^{s-1} < t$, since $d < t^{1/(s-1)}$ by the choice of $d$. Therefore, for any $1 \le p_1 \le l_1, \dots, 1 \le p_{r-1} \le l_{r-1}$, the intersection $S_{a_1} \cap \cdots \cap S_{a_s}$ has at most $t - 1$ points, so there is no copy of $K_{s_1, s_2, \dots, s_{r-1}, t}$ with $s_i$ vertices in $U_i$ and $t$ vertices in $V$.

Since $l = \Omega(q^{(d+1)/(s-1)})$, $|V| = q^s$ and $|E| = l q^{s-1}$ (note that $q^{s-1} = n^{1-1/s}$ when $n = q^s$), for $m_0 := n^{(d+1)/(s(s-1))} \le n^{t^{1/(s-1)}/(s(s-1))}$ and $m_1 m_2 \cdots m_{r-1} \le m_0$, we obtain
$$z(m_1, m_2, \dots, m_{r-1}, n; s_1, s_2, \dots, s_{r-1}, t) = \Omega\left(m_1 m_2 \cdots m_{r-1}\, n^{1 - 1/(s_1 s_2 \cdots s_{r-1})}\right).$$
By applying Bertrand's postulate, we can extend this result to all $n$, not only those of the form $q^s$ with $q$ a prime power. □

3 Proof of Theorem 1.1

Proof of Theorem 1.1. We prove the theorem by induction on $r$. Let $c_1$ be a sufficiently large constant and $c_2$ a sufficiently small constant. For a number $x \ge 0$ and a positive integer $s$, let
$$\binom{x}{s} = \begin{cases} 0, & \text{if } x < s - 1; \\[4pt] \dfrac{x(x-1)\cdots(x-s+1)}{s!}, & \text{if } x \ge s - 1. \end{cases}$$

For the base case $r = 2$, let $H$ be a bipartite graph with parts $V_1, V_2$ and $|V_1| = m_1$, $|V_2| = m_2$. We use standard double counting to prove that if $e = |E(H)| \ge c_1 m_1 m_2^{1 - 1/s_1}$, then there are many copies of ordered $K_{s_1, s_2}$. Let $t_{s_1, 1}$ be the number of ordered $K_{s_1, 1}$ in $H$ and $t_{s_1, s_2}$ the number of ordered $K_{s_1, s_2}$. Clearly,
$$t_{s_1, 1} = \sum_{v \in V_2} \binom{d(v)}{s_1} \ge m_2 \binom{\sum_{v \in V_2} d(v)/m_2}{s_1} = m_2 \binom{e/m_2}{s_1},$$
where the inequality follows from Jensen's inequality. Let $S \subseteq V_1$ be a vertex set of size $s_1$ and let $f(S)$ be the number of vertices adjacent to all vertices of $S$. Then $\sum_{S \subset V_1} f(S) = t_{s_1, 1}$ and we have
$$t_{s_1, s_2} = \sum_{S \subset V_1} \binom{f(S)}{s_2} \ge \binom{m_1}{s_1} \binom{\sum_{S \subset V_1} f(S)\Big/\binom{m_1}{s_1}}{s_2} = \binom{m_1}{s_1} \binom{t_{s_1, 1}\Big/\binom{m_1}{s_1}}{s_2} \ge \binom{m_1}{s_1} \binom{m_2 \binom{e/m_2}{s_1}\Big/\binom{m_1}{s_1}}{s_2}.$$
As $e \ge c_1 m_1 m_2^{1 - 1/s_1}$ and $c_1$ is sufficiently large, we have $m_2 \binom{e/m_2}{s_1}/\binom{m_1}{s_1} > s_2 - 1$. Thus
$$t_{s_1, s_2} \ge c_2 \binom{m_1}{s_1} \binom{m_2}{s_2} \left(\frac{e}{m_1 m_2}\right)^{s_1 s_2},$$
where $c_2$ is a sufficiently small constant.

Suppose the theorem holds for $r - 1$; we are going to prove that it holds for $r$. Let $H$ be an $r$-partite $r$-graph with parts $V_1, V_2, \dots, V_r$ and $|V_i| = m_i$ for $1 \le i \le r$. To prove the theorem, we assume $m_i \ge m_r$ for all $1 \le i \le r-1$ and $e = |E(H)| \ge c_1 \cdot \left(\prod_{i=1}^{r-1} m_i\right) \cdot m_r^{1 - \frac{1}{s_1 s_2 \cdots s_{r-1}}}$. Without loss of generality, let $m_{r-1} = \min_{1 \le i \le r-1} m_i$. Remove all the vertices of $V_r$ with degree less than $c_1 \cdot \left(\prod_{i=1}^{r-2} m_i\right) \cdot m_{r-1}^{1 - \frac{1}{s_1 s_2 \cdots s_{r-2}}}$. This process removes at most
$$m_r \times c_1 \cdot \left(\prod_{i=1}^{r-2} m_i\right) \cdot m_{r-1}^{1 - \frac{1}{s_1 s_2 \cdots s_{r-2}}} \le c_1 \cdot \left(\prod_{i=1}^{r-1} m_i\right) \cdot m_r^{1 - \frac{1}{s_1 s_2 \cdots s_{r-2}}} = o(e)$$
edges, where the inequality follows from $m_{r-1} \ge m_r$. Let $V_r'$ be the collection of the remaining vertices of $V_r$. Then
$$\sum_{v \in V_r'} d(v) = (1 - o(1))\,e \ge \frac{e}{2}.$$
For each $v \in V_r'$, let $H_v$ be the link hypergraph of $v$, with edge set $\{h \setminus \{v\} : v \in h \in E(H)\}$. Clearly, $|E(H_v)| = d(v) \ge c_1 \cdot \left(\prod_{i=1}^{r-2} m_i\right) \cdot m_{r-1}^{1 - \frac{1}{s_1 s_2 \cdots s_{r-2}}}$ and $H_v$ is an $(r-1)$-partite $(r-1)$-graph. By induction, $H_v$ has at least $c_2' \cdot \prod_{i=1}^{r-1} \binom{m_i}{s_i} \cdot \left(\frac{d(v)}{\prod_{i=1}^{r-1} m_i}\right)^{\prod_{i=1}^{r-1} s_i}$ copies of ordered $K_{s_1, s_2, \dots, s_{r-1}}$.

Let $t_a$ be the number of ordered $K_{s_1, s_2, \dots, s_{r-1}, 1}$ in $H$ and $t_b$ the number of ordered $K_{s_1, s_2, \dots, s_{r-1}, s_r}$. Then, by Jensen's inequality,
$$t_a \ge \sum_{v \in V_r'} c_2' \cdot \prod_{i=1}^{r-1} \binom{m_i}{s_i} \cdot \left(\frac{d(v)}{\prod_{i=1}^{r-1} m_i}\right)^{\prod_{i=1}^{r-1} s_i} \ge c_2' \prod_{i=1}^{r-1} \binom{m_i}{s_i} \cdot |V_r'| \cdot \left(\frac{\sum_{v \in V_r'} d(v)}{|V_r'| \cdot \prod_{i=1}^{r-1} m_i}\right)^{\prod_{i=1}^{r-1} s_i} \ge c_2' \prod_{i=1}^{r-1} \binom{m_i}{s_i} \cdot m_r \cdot \left(\frac{e/2}{\prod_{i=1}^{r} m_i}\right)^{\prod_{i=1}^{r-1} s_i}.$$
Let $\vec{S} = (S_1, S_2, \dots, S_{r-1})$, where $S_i \subset V_i$ and $|S_i| = s_i$ for $1 \le i \le r-1$, and denote by $f(\vec{S})$ the number of vertices $v$ such that $S_1 \cup S_2 \cup \cdots \cup S_{r-1} \cup \{v\}$ induces a copy of ordered $K_{s_1, s_2, \dots, s_{r-1}, 1}$ in $H$. Then $\sum_{\vec{S}} f(\vec{S}) = t_a$. Again by Jensen's inequality,
$$t_b = \sum_{\vec{S}} \binom{f(\vec{S})}{s_r} \ge \prod_{i=1}^{r-1} \binom{m_i}{s_i} \binom{\sum_{\vec{S}} f(\vec{S})\Big/\prod_{i=1}^{r-1} \binom{m_i}{s_i}}{s_r} = \prod_{i=1}^{r-1} \binom{m_i}{s_i} \binom{t_a\Big/\prod_{i=1}^{r-1} \binom{m_i}{s_i}}{s_r} \ge \prod_{i=1}^{r-1} \binom{m_i}{s_i} \binom{c_2' m_r \left(\frac{e/2}{\prod_{i=1}^{r} m_i}\right)^{\prod_{i=1}^{r-1} s_i}}{s_r}.$$
As $e \ge c_1 \cdot \left(\prod_{i=1}^{r-1} m_i\right) \cdot m_r^{1 - \frac{1}{s_1 s_2 \cdots s_{r-1}}}$ and $c_1$ is sufficiently large, we have $c_2' m_r \cdot \left(\frac{e/2}{\prod_{i=1}^{r} m_i}\right)^{\prod_{i=1}^{r-1} s_i} > s_r - 1$. Thus
$$t_b \ge c_2 \prod_{i=1}^{r} \binom{m_i}{s_i} \left(\frac{e}{\prod_{i=1}^{r} m_i}\right)^{s_1 s_2 \cdots s_r},$$
as desired. □

Appendix A. Proof of Lemma 2.5

Proof of Lemma 2.5. We are going to show that there is a choice of $f_{p_1, \dots, p_{r-1}}$ for $1 \le p_1 \le l_1, \dots, 1 \le p_{r-1} \le l_{r-1}$ such that for any $1 \le j_i \le s_i$, $1 \le k_i \le l_i$ and $1 \le p_1^i < p_2^i < \cdots < p_{j_i}^i \le k_i \le l_i$, $i = 1, 2, \dots, r-1$, the intersection $T_{a_1, a_2} \cap \cdots \cap T_{a_1, a_j}$ has dimension at most $s - j$. To do this, we will pick the $f_{p_1, \dots, p_{r-1}}$ in sequence and argue by induction. For convenience, we let $f_{a_1}, f_{a_2}, \dots, f_{a_l}$ denote the $l$ polynomials of the form $f_{p_1, \dots, p_{r-1}}$, $1 \le p_1 \le l_1, \dots, 1 \le p_{r-1} \le l_{r-1}$.

To begin the induction, we let $f_{a_1}$ be any $(s_1 s_2 \cdots s_{r-1} - 1)$-variate polynomial of degree $d$. In this case, the condition that the intersection $T_{a_1, a_2} \cap \cdots \cap T_{a_1, a_j}$ have dimension at most $s - j$ is degenerate, but it can be meaningfully replaced by the observation that the set of all $(x_1, \dots, x_{s-1})$, corresponding to the trivial intersection, equals $\overline{\mathbb{F}}_q^{\,s-1}$, which has dimension $s_1 s_2 \cdots s_{r-1} - 1$, as required.

Let $k = \prod_{i=1}^{r-1} k_i$ and suppose that $f_{a_1}, \dots, f_{a_{k-1}}$ have been chosen consistent with the induction hypothesis. We would like to pick $f_{a_k}$ so that for any $1 \le j_i \le s_i$ and $1 \le p_1^i < p_2^i < \cdots < p_{j_i}^i \le k_i \le l_i$, $i = 1, 2, \dots, r-1$, the intersection $T_{a_1, a_2} \cap \cdots \cap T_{a_1, a_k}$ has dimension at most $s - j$. For now, fix $1 \le j_i \le s_i$ and $1 \le p_1^i < p_2^i < \cdots < p_{j_i}^i \le k_i \le l_i$ for $i = 1, 2, \dots, r-2$, together with $1 \le j_{r-1} \le s_{r-1}$ and $1 \le p_1^{r-1} < p_2^{r-1} < \cdots < p_{j_{r-1}}^{r-1} < k_{r-1} \le l_{r-1}$. By the induction hypothesis, $T_{a_1, a_2} \cap \cdots \cap T_{a_1, a_{j-1}}$ has dimension at most $s - j + 1$. Split the variety $T_{a_1, a_2} \cap \cdots \cap T_{a_1, a_{j-1}}$ into irreducible components $W_1, \dots, W_r$ and suppose that $W_a$ is a component of dimension $s - j + 1 \ge 1$. By Lemma 2.2, $W_a$ has infinitely many points. Fix $d + 1$ points $w_1, \dots, w_{d+1}$ on $W_a$. For any $(s-1)$-variate polynomial $f$, write
$$T_{a_1, f} = \left\{(x_1, \dots, x_{s-1}) : (f - f_{a_1})(x_1, \dots, x_{s-1}) = 0\right\}.$$
By Lemma 2.3, we see that if $\dim(W_a \cap T_{a_1, f}) = \dim W_a$, then $T_{a_1, f}$ must contain all of $W_a$ and, in particular, each of $w_1, \dots, w_{d+1}$. Therefore, for a random $(s-1)$-variate polynomial $f$ of degree $d$, the probability that $W_a \cap T_{a_1, f}$ does not have dimension at most $s - j$ is at most the probability that the polynomial $f - f_{a_1}$ passes through all of $w_1, \dots, w_{d+1}$, which, by Lemma 2.1, is at most $q^{-(d+1)}$.
Appendix A. Proof of Lemma 2.5

Proof of Lemma 2.5. We show that there is a choice of $f_{p_1,\cdots,p_{r-1}}$ for $1 \le p_1 \le l_1, \cdots, 1 \le p_{r-1} \le l_{r-1}$ such that for any $1 \le j_i \le s_i$, $1 \le k_i \le l_i$ and $1 \le p_1^i < p_2^i < \cdots < p_{j_i}^i \le k_i \le l_i$, $i = 1, 2, \cdots, r-1$, the intersection $T_{a_1,a_2} \cap \cdots \cap T_{a_1,a_j}$ has dimension at most $s - j$. To do this, we pick the $f_{p_1,\cdots,p_{r-1}}$ in sequence and argue by induction. For convenience, we let $f_{a_1}, f_{a_2}, \cdots, f_{a_l}$ denote the $l$ polynomials of the form $f_{p_1,\cdots,p_{r-1}}$, $1 \le p_1 \le l_1, \cdots, 1 \le p_{r-1} \le l_{r-1}$.

To begin the induction, we let $f_{a_1}$ be any $(s_1 s_2 \cdots s_{r-1} - 1)$-variate polynomial of degree $d$. In this case, the condition that the intersection $T_{a_1,a_2} \cap \cdots \cap T_{a_1,a_j}$ have dimension at most $s - j$ is degenerate, but it can be meaningfully replaced by the observation that the set of all $(x_1, \ldots, x_{s-1})$, corresponding to the trivial intersection, equals $\overline{\mathbb{F}}_q^{\,s-1}$, which has dimension $s_1 s_2 \cdots s_{r-1} - 1$, as required.

Let $k = \prod_{i=1}^{r-1} k_i$ and suppose that $f_{a_1}, \ldots, f_{a_{k-1}}$ have been chosen consistent with the induction hypothesis. We would like to pick $f_{a_k}$ so that for any $1 \le j_i \le s_i$ and $1 \le p_1^i < p_2^i < \cdots < p_{j_i}^i \le k_i \le l_i$, $i = 1, 2, \cdots, r-1$, the intersection $T_{a_1,a_2} \cap \cdots \cap T_{a_1,a_k}$ has dimension at most $s - j$. For now, fix $1 \le j_i \le s_i$ and $1 \le p_1^i < p_2^i < \cdots < p_{j_i}^i \le k_i \le l_i$ for $i = 1, 2, \cdots, r-2$, together with $1 \le j_{r-1} \le s_{r-1}$ and $1 \le p_1^{r-1} < p_2^{r-1} < \cdots < p_{j_{r-1}}^{r-1} < k_{r-1} \le l_{r-1}$. By the induction hypothesis, $T_{a_1,a_2} \cap \cdots \cap T_{a_1,a_{j-1}}$ has dimension at most $s - j + 1$. Split the variety $T_{a_1,a_2} \cap \cdots \cap T_{a_1,a_{j-1}}$ into irreducible components $W_1, \ldots, W_r$ and suppose that $W_a$ is a component of dimension $s - j + 1 \ge 1$. By Lemma 2.2, $W_a$ has infinitely many points. Fix $d + 1$ points $w_1, \ldots, w_{d+1}$ on $W_a$. For any $(s-1)$-variate polynomial $f$, write
\[
T_{a_1,f} = \left\{ (x_1, \ldots, x_{s-1}) : (f - f_{a_1})\left(x_1, \ldots, x_{\prod_{i=1}^{r-1} s_i - 1}\right) = 0 \right\}.
\]
By Lemma 2.3, if $\dim (W_a \cap T_{a_1,f}) = \dim W_a$, then $T_{a_1,f}$ must contain all of $W_a$ and, in particular, each of $w_1, \ldots, w_{d+1}$. Therefore, for a random $(s-1)$-variate polynomial $f$ of degree $d$, the probability that $W_a \cap T_{a_1,f}$ does not have dimension at most $s - j$ is at most the probability that the polynomial $f - f_{a_1}$ passes through all of $w_1, \ldots, w_{d+1}$, which, by Lemma 2.1, is at most $q^{-(d+1)}$. Since, by Lemma 2.4, the number of irreducible components of $T_{a_1,a_2} \cap \cdots \cap T_{a_1,a_{j-1}}$ is at most $d^{s-1}$, the probability that $T_{a_1,a_2} \cap \cdots \cap T_{a_1,a_{j-1}} \cap T_{a_1,a_k}$ does not have dimension at most $s - j$ is at most $d^{s-1} q^{-(d+1)}$. Taking a union bound over the at most $l^{s-1}$ choices for $j$ and $a_1, a_2, \cdots, a_{j-1}$, the probability that there exist $1 \le j_i \le s_i$ and $1 \le p_1^i < p_2^i < \cdots < p_{j_i}^i \le k_i \le l_i$, $i = 1, 2, \cdots, r-1$, such that $T_{a_1,a_2} \cap \cdots \cap T_{a_1,a_{j-1}} \cap T_{a_1,a_k}$ does not have dimension at most $s - j$ is at most $l^{s-1} d^{s-1} q^{-(d+1)} < 1$ for $q$ sufficiently large. Therefore, there exists an $(s-1)$-variate polynomial $f_{a_k}$ of degree at most $d$ such that $T_{a_1,a_2} \cap \cdots \cap T_{a_1,a_{j-1}} \cap T_{a_1,a_k}$ has dimension at most $s - j$ for any $1 \le j_i \le s_i$ and $1 \le p_1^i < p_2^i < \cdots < p_{j_i}^i \le k_i \le l_i$, $i = 1, 2, \cdots, r-1$. $\Box$

References

[1] N. Alon, L. Rónyai, T. Szabó, Norm-graphs: variations and applications, J. Combin. Theory Ser. B 76 (1999), no. 2, 280-290.
[2] W. G. Brown, On graphs that do not contain a Thomsen graph, Canad. Math. Bull. 9 (1966), 281-285.
[3] B. Bukh, Random algebraic construction of extremal graphs, Bull. London Math. Soc. 47 (2015), 939-945.
[4] B. Bukh, Extremal graphs without exponentially small bicliques, Duke Math. J. 173 (2024), no. 11, 2039-2062.
[5] Q. Chen, H. Liu, K. Ye, Extremal constructions for apex partite hypergraphs, arXiv:2510.07997.
[6] D. Conlon, Some remarks on the Zarankiewicz problem, Math. Proc. Cambridge Philos. Soc. 173 (2022), no. 1, 155-161.
[7] P. Erdős, On extremal problems of graphs and generalized graphs, Israel J. Math. 2 (1964), 183-190.
[8] W. Fulton, Introduction to Intersection Theory in Algebraic Geometry, CBMS Regional Conference Series in Mathematics 54, American Mathematical Society, Providence, RI, 1984.
[9] J. Kollár, L. Rónyai, T. Szabó, Norm-graphs and bipartite Turán numbers, Combinatorica 16 (1996), no. 3, 399-406.
[10] T. Kővári, V. T. Sós, P. Turán, On a problem of K. Zarankiewicz, Colloq. Math. 3 (1954), 50-57.
[11] J. Ma, X. Yuan, M. Zhang, Some extremal results on complete degenerate hypergraphs, J. Combin. Theory Ser. A 154 (2018), 598-609.
[12] D. Mubayi, Hypergraphs without complete partite subgraphs, arXiv:2507.06390.
[13] C. Pohoata, D. Zakharov, Norm hypergraphs, arXiv:2101.00715.
arXiv:2510.14871v1 [cs.CL] 16 Oct 2025
From Loop Nests to Silicon: Mapping AI Workloads onto AMD NPUs with MLIR-AIR

ERWEI WANG, SAMUEL BAYLISS, ANDRA BISCA, ZACHARY BLAIR, SANGEETA CHOWDHARY, KRISTOF DENOLF, JEFF FIFIELD, BRANDON FREIBERGER, ERIKA HUNHOFF, PHIL JAMES-ROXBY, JACK LO, JOSEPH MELBER, STEPHEN NEUENDORFFER, EDDIE RICHTER, ANDRÉ RÖSTI, JAVIER SETOAIN, GAGANDEEP SINGH, ENDRI TAKA, PRANATHI VASIREDDY, ZHEWEN YU, NIANSONG ZHANG, and JINMING ZHUANG, Research and Advanced Development, AMD, USA

General-purpose compilers abstract away parallelism, locality, and synchronization, limiting their effectiveness on modern spatial architectures. As modern computing architectures increasingly rely on fine-grained control over data movement, execution order, and compute placement for performance, compiler infrastructure must provide explicit mechanisms for orchestrating compute and data to fully exploit such architectures. We introduce MLIR-AIR, a novel, open-source compiler stack built on MLIR that bridges the semantic gap between high-level workloads and fine-grained spatial architectures such as AMD's NPUs. MLIR-AIR defines the AIR dialect, which provides structured representations for asynchronous and hierarchical operations across compute and memory resources. AIR primitives allow the compiler to orchestrate spatial scheduling, distribute computation across hardware regions, and overlap communication with computation without relying on ad hoc runtime coordination or manual scheduling. We demonstrate MLIR-AIR's capabilities through two case studies: matrix multiplication and the multi-head attention block from the LLaMA 2 model. For matrix multiplication, MLIR-AIR achieves up to 78.7% compute efficiency and generates implementations with performance almost identical to state-of-the-art, hand-optimized matrix multiplication written using the lower-level, close-to-metal MLIR-AIE framework. For multi-head attention, we demonstrate that the AIR interface supports fused implementations using approximately 150 lines of code, enabling tractable expression of complex workloads with efficient mapping to spatial hardware. MLIR-AIR transforms high-level structured control flow into spatial programs that efficiently utilize the compute fabric and memory hierarchy of an NPU, leveraging asynchronous execution, tiling, and communication overlap through compiler-managed scheduling.

Additional Key Words and Phrases: Compiler, dataflow architecture, hardware acceleration, machine learning, reconfigurable technology, spatial architecture.

1 Introduction

Modern computing architectures are increasingly spatial and asynchronous. They consist of many distributed compute units, partitioned memory hierarchies, and message-passing interconnects. Achieving high performance on such architectures requires precise control over computation, data movement, and execution of tasks.

Mainstream CPUs and GPUs rely on a thread-centric parallel compute model that assumes many threads will be scheduled onto a limited set of hardware resources. Programmers describe large numbers of threads, and the system (hardware for GPUs, software for CPUs) maps them to available compute units.
This model has scaled effectively for decades as semiconductor technology has delivered more cores and vector units. Hardware support for more concurrent threads improves throughput and reduces compute latency. The effectiveness of the thread-centric model depends on two assumptions: (1) that threads operate independently, and (2) that shared resources, particularly memory bandwidth, scale with compute. When these assumptions break down, due to synchronization, data dependencies, or resource contention, execution stalls. Hardware schedulers, particularly in GPUs, respond by context-switching to another group of threads (wavefronts) to maintain forward progress. However, GPUs do not guarantee independent forward progress across all threads. Their schedulers may allocate compute resources to a subset of runnable wavefronts, while deferring others indefinitely. This behavior introduces nondeterminism and limits software visibility into execution dynamics, an increasingly critical shortcoming for latency-sensitive or tightly coupled workloads.

Despite these limitations, the thread-centric model remains widely adopted because it simplifies software development: applications are decomposed into independent tasks that communicate through shared memory, while the hardware handles data reuse and execution interleaving. However, this abstraction comes at a significant cost. Maintaining the illusion of shared memory and uniform execution requires dense interconnects, deep cache hierarchies, and complex runtime mechanisms, all of which consume energy and silicon area and increase design complexity.

An emerging alternative is to return control to the software. A programming model that enables explicit expression of task placement, scheduling order, and inter-task data sharing allows software to better exploit spatial and temporal locality. Rather than relying on implicit reuse via caches, such a model supports deliberate coordination between compute units that are scheduled close in space and time, reducing hardware overhead while improving predictability and efficiency.

To this end, we introduce AIR, a compiler intermediate representation (IR) that exposes spatial and temporal execution structure as explicit, programmable constructs. AIR captures high-level user-described data-movement and compute scheduling intent, including concurrent execution. Implemented as a multi-level intermediate representation (MLIR) [25] dialect, AIR bridges the gap between high-level programs and spatial architectures.
It supports transformations that lower structured control flow into statically scheduled spatial programs, optimized for GPUs and domain-specific neural processing units (NPUs). We demonstrate AIR's effectiveness on two representative AI workloads: matrix multiplication and the multi-head attention (MHA) block from the LLaMA 2 model [44]. Our results demonstrate that AIR produces spatially distributed schedules that overlap communication with computation, exploit locality, and minimize runtime control overhead.

1.1 Contributions

The rapid advance of artificial intelligence (AI) models, algorithms, and accelerators has driven the adoption of diverse programming tools. Some tools focus on end-user productivity, while others are aimed at optimizing the efficient implementation of AI applications on an increasingly diverse range of specialized accelerators. MLIR is a flexible compiler abstraction designed to bridge this gap by allowing progressive lowering of designs through an extensible set of dialects [26]. Users can compose operations from a range of dialects and, in general, select transformations to achieve the goal of lowering high-level programmer intent to low-level optimized implementation. (In some applications, MLIR is instead used to analyze and raise the abstraction of operations, rather than lower them for execution.)

AIR is an MLIR dialect that contains operations to express compute scheduling and memory allocation in spatial architectures. It operates at a level of abstraction that enables portable expression of compute kernels by avoiding explicit support for vendor-specific features. Cross-generational portability and performance scalability are supported by splitting the responsibilities for scheduling compute tasks between the compiler and runtime. This enables the compiler to define tightly coupled and concurrent herds of execution, while giving the runtime flexibility to schedule those herds on devices whose sizes vary within and across generations of accelerator hardware. A design expressed using the AIR dialect can use a vendor-specific lowering for implementation on an accelerator, as the operations included in MLIR-AIR are intended to support common features that we observe emerging in a class of spatial hardware accelerators. Programmers or compilers can use AIR to express compute groupings and data allocations spatially, and see those decisions honored in the subsequent lowering to vendor-specific implementations.

In sum, this work makes the following key contributions:
• We present AIR, a new IR implemented as an MLIR dialect that exposes spatial and temporal structure in programs. AIR enables the compiler to coordinate computation, data movement, and synchronization, capabilities that traditional thread-centric models obscure or defer to hardware. AIR is developed as a set of spatially aware abstractions that enable lowerings from high-level programs to tiled spatial hardware. AIR models spatial partitioning with air.herd, point-to-point communication with air.channel, and explicit synchronization with air.token. These abstractions enable the compiler to control spatial execution without compelling the user to drop down to lower levels of vendor-specific abstractions.
• We build a complete end-to-end compiler flow that uses AIR to lower workloads written using high-level Python frameworks to low-level code for AMD NPUs.
MLIR-AIR compiles structured loop nests into efficient, spatial programs dispatched using the NPU runtime (https://github.com/Xilinx/mlir-air).
• We demonstrate MLIR-AIR's effectiveness on two representative AI workloads. MLIR-AIR produces statically scheduled programs that exploit locality, parallelism, and pipelining on tiled hardware.
MLIR-AIR is open source and modular by design. It integrates into, and composes with, other dialects within the MLIR ecosystem and provides a foundation for targeting a wide range of spatial accelerators beyond AMD's NPU.

2 Background

This section surveys recent trends in spatial hardware that inform the architectural design of modern accelerators and motivate key requirements on modern compilers.

2.1 Trends in Spatial Hardware

In Table 1, we describe six key trends in efficient compute hardware. Taken together, these trends define a general direction in parallel hardware design, where efficient data movement is the driving design philosophy. Control over where compute operations are dispatched and where data is allocated is fundamental in such systems. Emphasizing the importance of physical placement in these systems, we refer to this direction in hardware design as a movement towards spatial hardware. These six hardware trends collectively motivate a corresponding set of compiler features necessary to effectively target spatial hardware.

2.1.1 Complex System Hierarchy

Design reuse is extensive in semiconductor manufacturing because of the high cost of verification. Larger chip designs are often composed of multiple chiplets that may themselves be composed of pre-verified hardware building blocks. Non-uniform performance for similar workloads can occur if a workload is assigned resources that cross spatial or hierarchical boundaries, or if the workload uses components shared at a cluster level. Interactions between spatially arranged components can be positive (e.g., components within a cluster share a level of cache hierarchy) or negative (e.g., components arbitrate for access to a limited number of ports). In order to maximize performance and minimize negative interactions, compilers and schedulers must be aware of spatial boundaries within the chip.

Table 1. Six hardware trends of spatial architectures (trend: description).
• Complex System Hierarchy: Design reuse introduces arbitrary boundaries in systems.
• Dispatch Placement: Schedulers guaranteeing locality enable resource-sharing.
• Multi-root Memory Hierarchy: Devices have independent, physically distant memory channels.
• Peer Memory Movement: Efficient designs are not limited to data transfer through main memory.
• Data Movement Offload: Specialized DMAs coordinate efficient data movement.
• Asynchronous Execution: Distinct hardware scheduled to execute independently via dependencies.

2.1.2 Dispatch Placement

Compilers and schedulers share control of decisions over placement within a spatial architecture. As such, in order to holistically optimize placement, both compilers and runtimes must be able to control or query where a scheduler allocates compute or where a memory allocator places memory. This knowledge or control would allow a compiler optimized for spatial architectures to note the desired spatial affinity of dispatched compute elements as optional or mandatory constraints on the behavior of the runtime scheduler.

2.1.3 Multi-root Memory Hierarchy

Traditional compilers treat main memory as a unifying single root of coherency.
However, many modern devices use multiple independent memory channels to increase aggregate bandwidth. The transparent hardware-based interleaving of data across these channels offers one simple mechanism for accessing this bandwidth, but in a large device, it is likely that there is a non-uniform energy and latency cost for access to these separate memory channels. These NUMA effects have previously been observed in large multi-socket CPU systems, but compilers and runtimes now have a role to play in ensuring physical affinity between memory allocation within channels and compute scheduling, even within a single package.

2.1.4 Peer Memory Movement

CPUs and GPUs incorporate large amounts of on-chip SRAM memory that is used as caches and/or scratchpads. Data transfer between on-chip memories can occur implicitly in the case of coherent caches, or may be explicitly orchestrated. Effective use of on-chip memories can offer lower interconnect energy and achieve higher realized bandwidth compared to when data is fetched multiple times from external memory.

2.1.5 Data Movement Offload

GPUs and NPUs increasingly feature Direct Memory Access (DMA) engines capable of offloading complex address generation from the compute datapath to improve data movement efficiency. This enables efficient pipelined use of the interconnect fabric as well as in-line reshaping and transposition of data for efficient computation.

2.1.6 Asynchronous Execution

Memory and communication operations often have considerable latency. To achieve the most efficient performance, independent actors in the system (e.g., DMAs, compute units, etc.) are kept busy, using techniques to avoid stalling during the round-trip time necessary to synchronize two concurrent components. Increasingly sophisticated hardware schedulers close to those actors interpret explicitly encoded dependencies and select the next suitable thread of work for the actor to perform.

2.2 Identified Needs in Compilers

Taken together, these trends in hardware construction motivate a software model that enables user control over scheduling and memory allocation. Specifically, we see a need for a framework that:
(1) Exposes hardware controls over memory allocation, allowing users to allocate memory in different levels of the memory hierarchy and in different non-uniform memory access (NUMA) domains at each level of the memory hierarchy (see the sketch after this list).
(2) Exposes hardware controls over compute placement, enabling users to describe units of compute that should be scheduled concurrently, enabling tightly-coupled compute elements to optimize data-sharing and local synchronization.
(3) Separates data movement from computation explicitly in the IR, enabling independent scheduling and optimization of each. This decoupling allows the compiler to overlap communication with computation, and to apply architecture-specific optimizations when supported by hardware.
(4) Enables dependency resolution close to hardware to minimize the time taken to observe completion of a predecessor operation. This can be achieved by expressing dependencies explicitly, and supporting lowerings that target platform-specific synchronization capabilities.
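As a concrete illustration of the first need, the sketch below shows how memory-space attributes on MLIR memref types can distinguish levels of the hierarchy. This is a minimal sketch under stated assumptions: the 0/1/2 numbering for L3/L2/L1 follows the convention used by the MLIR-AIR lowerings discussed later, and the buffer shapes are illustrative.

// Minimal sketch: memory-space attributes distinguish hierarchy levels.
// Assumption: space 0 = L3 (external), 1 = L2 (shared scratchpad), 2 = L1
// (worker-local), matching the MLIR-AIR convention; shapes are illustrative.
%l3 = memref.alloc() : memref<512x512xi32>      // default space: global (L3)
%l2 = memref.alloc() : memref<64x64xi32, 1>     // shared L2 scratchpad buffer
%l1 = memref.alloc() : memref<32x32xi32, 2>     // worker-local L1 buffer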
3 Related Work

The past decade has seen rapid evolution in accelerator architectures for machine learning. Many of these accelerators share the key characteristics outlined in Section 2.1: explicit spatial compute and memory hierarchies, high-throughput interconnects, and programmable DMA subsystems. Examples include Google's TPU [23], AMD's and Intel's Neural Processing Units (NPUs) [21, 33], Qualcomm's AI Engine [32], GroqChip [1], Cerebras' Wafer Scale Engine [8], and platforms from SambaNova [31]. A defining common feature of these accelerators is the spatial allocation of compute kernels to fixed hardware regions (e.g., tiles or cores), where data is communicated via explicitly programmed on-chip data-paths, often decoupled from compute [39, 41]. While architectural designs vary, a common challenge remains: enabling compilers to map high-level programs to these platforms by managing spatial locality, data movement, and synchronization [37].

In response, the compiler community has developed a range of spatially aware compilation frameworks that aim to bridge the gap between abstract algorithm specification and low-level hardware control. These works largely focus on flexible frontends for compiler frameworks, compiler transformations that enable efficient computation, compiler techniques for targeting a broad range of accelerators, or any combination thereof. The remainder of this section highlights notable works in each category.

Frontends for Accelerator Programming. There is a large diversity of frontends for accelerator programming frameworks. IRON provides a close-to-metal interface that allows detailed and customized performance tuning [20]. In contrast, frontends that capture intent at a higher level of abstraction are useful for flexibility, reusability, and quick adaptation to new algorithms and emerging programming models. Consequently, MLIR-AIR and other works have focused on this higher level of abstraction. For instance, Union introduces a unified hardware-software co-design ecosystem within the MLIR infrastructure [22], which supports TensorFlow, ONNX, and COMET [28] as inputs. Similarly, SODA-OPT supports various high-level languages as inputs, including Python, C++, and Fortran [2]. Both Union and SODA-OPT use MLIR internally to increase front-end flexibility; XLA, while originally a standalone compiler for TensorFlow and JAX, has increasingly adopted MLIR components to enhance its modularity and extensibility [36]. ARIES [48] provides an MLIR-based flow targeting AMD AI Engines, with a focus on providing a tile-granularity programming interface. Unlike these frameworks, MLIR-AIR defines a spatially explicit intermediate representation that directly models hardware concurrency, locality, and asynchronous execution inline in MLIR IR, uniquely striking a balance between fine-grained compiler-managed scheduling and frontend and backend flexibility.

Polyhedral Compilation for Mapping Tasks to Resources. The extraction and spatial mapping of parallelism implicit in algorithms are central to delivering high quality of results (QoR), especially for accelerators, which are often composed of many parallel compute units. The polyhedral model [6] provides a formal framework for analyzing and transforming loop nests through affine access relations and schedule functions.
Early efforts in this space include Vivado High-Level Synthesis, which demonstrated how affine loop transformations could be applied to high-level code to generate efficient FPGA implementations [11, 38, 40, 42]. AutoSA advanced this direction by introducing a full-stack polyhedral compilation flow targeting systolic arrays on FPGAs [45]. It applies space-time transformations and loop tiling to generate parallel accelerator kernels that maximize throughput while respecting hardware resource constraints. More recent tools extend these capabilities across broader architectural targets. Tools like Diesel [18] and PLUTO [7] utilize the polyhedral model to automatically parallelize and optimize loop nests across multiple hardware architectures, including multicore CPUs, GPUs, and FPGAs. Polygeist further enhances the applicability of polyhedral compilation by translating C to MLIR's Affine and SCF dialects, enabling integration with modern compiler infrastructure and reuse of polyhedral analyses within MLIR-based workflows [27]. In contrast, MLIR-AIR leverages polyhedral analyses not only for loop transformations but also to guide asynchronous scheduling and data movement, integrating these capabilities within a structured, token-based IR.

Compiler Frameworks for Diverse Spatial Accelerators. Alongside the polyhedral model, many tools such as Marvel [10] and AMOS [47] offer plug-and-play mechanisms for diverse spatial accelerator architectures. By abstracting device-specific optimizations and code generation, these tools focus on compute patterns and memory hierarchies common to spatial accelerators, facilitating seamless integration across diverse hardware generations and platforms. Moreover, when targeting reconfigurable FPGA devices, frameworks like HIDA [46] and Revet [35] enable automatic generation of Register Transfer Level (RTL) code, streamlining the hardware design process without requiring extra manual effort. In contrast, MLIR-AIR emphasizes explicit modeling of spatial scheduling and asynchronous execution within the IR itself, enabling precise control over task placement without relying on external runtime coordination or fixed hardware templates.

4 MLIR-AIR: A Novel Compiler Framework for Spatial Architectures

Modern spatial accelerators require the compiler to do more than expose parallelism: they require explicit control over placement, communication, and execution order. MLIR-AIR is built to provide that control natively. MLIR-AIR is a novel, platform-agnostic compiler framework designed to target a wide range of spatial architectures. In this work, we focus on its instantiation for AMD NPUs, a tiled architecture optimized for high-throughput and low-latency AI computations. As shown in Figure 1, the AMD NPU architecture consists of a two-dimensional grid of compute, memory, and shim tiles. Shim tiles form the interfacing row which connects the NPU to host memory and I/O systems. These are the only tiles that can initiate a memory transaction to the SoC memory system. Memory tiles, located adjacent to the shim tiles, provide shared memory resources accessible by compute tiles throughout the array. Compute tiles comprise the majority of the array, each integrating local memory buffers with scalar and vector engines. Every tile features a dedicated DMA engine for block data transfers (represented as buffer descriptors, or BDs) over a reconfigurable streaming interconnect. This enables localized compute-memory communication via the streaming interconnect, and peer-to-peer data movement via either the dedicated cascade
This enables localized compute-memory communication, via the streaming interconnects—and peer-to-peer data movement, via either the dedicated cascade From Loop Nests to Silicon: Mapping AI Workloads onto AMD NPUs with MLIR-AIR 7 Core tile. Memory. Memory tile. Shim tile. Switch box. Streaming interconnect. Fig. 1. AMD NPU architecture. connections between cores or local memory shared by neighbors ( ). The absence of caches, either for data or instructions, and the emphasis on computing on local tiles of data ( eliminating memory access latency variation) means the architecture is characterized by extremely deterministic behaviour. Compilers designed to tile up work to fit into local memories can use the predictable behavior to construct efficient data-flow achieving high utilization. MLIR-AIR is designed to bridge high-level algorithmic representations with the low-level spatial execution requirements of modern accelerators, such as the AMD NPU. It provides the abstractions and transformations necessary to translate structured programs into tiled, explicitly scheduled implementations. Figure 2 illustrates the MLIR-AIR compilation flow for the AMD NPU, highlighting its integration with MLIR’s ecosystem and spatial backend tools. Algorithm-Level Programming Interface. At the compiler frontend ( 1 ), MLIR-AIR interfaces with high-level AI frameworks such as PyTorch, TensorFlow, and Triton through MLIR-compatible frontends including Torch-MLIR, TOSA, and Triton-Shared. These frameworks emit programs using structured MLIR representations that preserve loop nests, tensor operations, and affine indexing. Frontend dialects are lowered into MLIR common components ( 2 ), including structured control flow (SCF) and linear algebra (LinAlg) dialects to provide an algorithm-friendly interface. These dialects offer C-like generic programming abstractions that preserve loop structure and tensor semantics, making AI workloads analyzable by non-domain experts. Unlike traditional low-level compilation flows that are tightly coupled to specific frontends or hardware backends, MLIR-AIR decouples emerging AI frontends from new accelerator architectures, defining a common compute model fit for the new era of spatial hardware. Representation of Asynchronous Parallelism. At the core of MLIR-AIR is the AIR dialect ( 3 ), a set of compiler abstractions that explicitly model hardware scheduling, asynchronous execution, and interactions with the memory hierarchy. Unlike conventional IRs that assume shared memory and centralized scheduling, AIR models the constraints and opportunities of spatial systems directly in the compiler. MLIR-AIR captures fine-grained asynchronous parallelism through an asynchronous control and dataflow graph (ACDG), a directed acyclic graph that encodes MLIR operation-level dependencies sequencing computation and data movement. The ACDG is embedded directly in the MLIR-AIR IR via the production and consumption of air.token values, which are static 8 Wang et al. Frontends MLIR-AIR SCF (Tiled) LinAlg Memref Segment, Herd Channel NPU Config DMAs Async analysis and optimizations Func. call Peano / Vitis MLIR-AIE External Host Program Runtime Sequence Execution (XRT/ROCr) Binary Launch PyTorch Triton Triton-Shared Torch-MLIR MLIR Community JiT Compilable TensorFlow TOSA IREE Tiling NPU 3 1 2 5 4 6 Fig. 2. MLIR-AIR stack overview. single assignment (SSA) values encoding the read-after-write (RAW), write-after-read (WAR), and write-after-write (WAW) dependency types. 
Decoupled Data Movement Primitives. Unlike conventional memory copy operations that couple source and destination within a single construct, MLIR-AIR introduces decoupled air.channel.put and air.channel.get data movement operations (4) to model unidirectional data transfers localized to their respective memory hierarchies. These operations are linked via a globally declared symbolic air.channel, which abstracts the communication path and enforces backpressure-based synchronization between asynchronous code regions. This decoupling enables fine-grained control over dataflow, allowing communication to be scheduled alongside computation when desired, or independently when beneficial for performance or modularity. The design closely aligns air.channel operations with their target memory scopes, allowing the compiler to reason about and optimize data movement via pattern matching on simple, localized code.

Optimization and Performance Feedback. Beyond compilation, MLIR-AIR enhances the optimization and debugging process by providing execution traces that capture key performance metrics during hardware execution. These traces are visualized using tools like Chrome Tracing or Perfetto UI [3], allowing developers to analyze the runtime parallelism between computation and data movement. This profiling capability enables fine-grained performance evaluation, helping developers identify bottlenecks and inefficiencies in execution.

Platform-agnostic Implementation and Runtime. MLIR-AIR supports integration with multiple hardware implementation backends and runtime systems, enabling platform-agnostic compilation. The final lowered IR is consumed by platform-specific tools such as MLIR-AIE (5) for NPUs [20] or LLVM-based pipelines for CPUs and GPUs, which generate hardware-specific control and dataflow representations. These are subsequently compiled and deployed using runtime frameworks (6) such as XRT [17] or ROCr [15], ensuring compatibility with diverse hardware platforms. This modular backend integration facilitates scalable and efficient deployment while preserving the architectural flexibility of MLIR-AIR across spatially heterogeneous systems.

5 AIR Concepts

The AIR dialect provides the core primitives for expressing spatial execution semantics in MLIR-AIR. These primitives are designed to give the compiler fine-grained control over execution, concurrency, communication, and synchronization at various levels of granularity, and they can then be targeted by architecture-specific backends. AIR integrates within existing compilation stacks, and transforms allow designs to be ingested from several different frontends.
The AIR dialect is designed to compose with standard MLIR dialects, reusing existing dialects to describe computation and kernels. This modularity and flexibility allow developers to express fine-grained control over execution, communication, and memory behavior while maintaining portability by decoupling these abstractions from vendor-specific hardware implementations. We group the new operations in the AIR dialect into three categories:
• Scheduling Constructs, which express spatial and hierarchical parallelism across compute resources (Section 5.1).
• Data Locality Constructs, which represent explicit and decoupled data transfers aligned with memory hierarchy and DMA affinity (Section 5.2).
• Synchronization Constructs, which capture explicit operation-level and loop-carried dependencies for asynchronous execution and pipelining (Section 5.3).
Together, these abstractions allow AIR to represent concurrency control, memory placement and movement, and synchronization between many concurrent actors as explicit compiler-visible constructs.

5.1 Scheduling Constructs

AIR introduces scheduling constructs that express how computation is distributed and executed across a spatial accelerator. These constructs define task placement, launch behavior, and hardware resource partitioning, forming the foundation of spatial scheduling in AIR. The dialect includes air.launch, air.segment, and air.herd operations, which are hierarchically composed: an air.launch may contain multiple air.segment operations, each of which may dispatch one or more air.herd operations (a schematic sketch of this nesting appears at the end of Section 5.1).

5.1.1 air.launch

The air.launch operation defines a region of computation to be offloaded from the host processor to the accelerator. It is designed as a construct to support portability and scaling by selecting groups of operations whose dispatch may be deferred to a runtime scheduler. It groups together compute, communication, and synchronization operations into a single launch unit, which is scheduled at runtime. The optional iteration space attribute attached to an air.launch operation describes a set of unique instances of the region body that the runtime scheduler is delegated to manage. Those unique instances must be permutable, i.e., a completely parallel schedule is legal, and instances within the air.launch iteration space must not rely on observing the effects of any other instance. To improve the effectiveness of compile-time optimization, we assume the compiler is free to hand off multiple different variants of the compiled air.launch operation to the runtime, each enabling optimized dispatch of a parameterizable subset of the iteration space. Our lowering for the AMD NPU architecture uses this freedom to offer different opportunistic granularities (e.g., one column or the whole array) for the runtime to schedule work. air.launch also manages the lifetime of the bound resources that implement the operations hierarchically nested within its body. Once all nested operations within a launch iteration are scheduled and begin execution, the launch is able to release unused resources back to the runtime. This hierarchical management of resources ensures efficient resource utilization, especially when tasks are nested within larger compute tasks.

5.1.2 air.segment

The body of an air.segment encapsulates the reservation of pool(s) of compute resources for use in scheduling the operations nested inside them.
Segments can be optionally annotated with architecture-specific attributes describing the pool of resources they are reserving, when targeting backends that benefit from resource-aware scheduling. An architecture might want to define two pools of resources that have physical affinity (e.g., resources in one chiplet) so that it can ensure that the scheduler dispatches air.herd operations within that segment exclusively using the segment resources. Segments have an optional grid space, which allows easy replication of resource pools. Segment instances within that space are dispatched concurrently. Other relationships between the scheduling of segments in time and space can be controlled using the synchronization constructs in Section 5.3.

5.1.3 air.herd

The air.herd operation defines a group of work units that execute concurrently on a grid of physical compute units and their local memories. It contains an index space which, expressed as an affine set of worker indices, generalizes the notion of thread IDs found in traditional parallel programming models (e.g., CUDA [29] or OpenMP [9]). Each worker in the air.herd executes the same region body, but specialization is enabled by indexing: control flow and memory access patterns may diverge based on each worker's coordinates in the air.herd. air.herd operations are scheduled atomically: they are only scheduled when resources for all the workers are available, and must enable independent forward progress for their individual workers. It is implied that workers are allocated as a physically local contiguous block, and lowerings may make use of the grid dimensions to lower to architecture-specific features that exploit that physical locality. The size of air.herd operations indicates the granularity of (concurrent) dispatch; users should be aware that certain architectures may place limits on the size of resources they can guarantee to run concurrently (e.g., lowerings may fail during backend compilation if unimplementable air.herd operations are created). Where multiple air.herd operations are included in an air.launch, their default behavior is to run sequentially. Programmers may use the advanced synchronization constructs in Section 5.3 to set additional constraints on air.herd operations to guarantee sequential or concurrent execution and consequently modify the resource requirements of the surrounding air.launch.
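To make the nesting concrete, the following is a minimal, schematic sketch of the air.launch/air.segment/air.herd hierarchy described in this section. It is illustrative rather than verbatim compiler output: %input, @seg0, @herd0, @kernel, and all sizes are placeholders, and the printed syntax may vary across MLIR-AIR versions.

// Minimal sketch of the air.launch > air.segment > air.herd hierarchy.
%c1 = arith.constant 1 : index
air.launch (%lx) in (%lsize = %c1) args(%a = %input) : memref<512x512xi32> {
  air.segment @seg0 args(%sa = %a) : memref<512x512xi32> {
    %c2 = arith.constant 2 : index
    // a 2x2 herd: four workers dispatched atomically as one contiguous block
    air.herd @herd0 tile (%tx, %ty) in (%hx = %c2, %hy = %c2)
        args(%ha = %sa) : memref<512x512xi32> {
      // each worker specializes its behavior via its (%tx, %ty) coordinates
      func.call @kernel(%ha, %tx, %ty) : (memref<512x512xi32>, index, index) -> ()
    }
  }
}

Each level captures the values it uses through its explicit args(...) list, which is what allows the compiler to reason about placement, data crossing hierarchy boundaries, and resource lifetimes.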
5.2 Data Locality Constructs

Spatial architectures rely heavily on local memory hierarchies and explicit DMA engines to achieve high efficiency. MLIR-AIR introduces constructs that make data locality explicit, enabling the compiler to reason about and optimize data movement across compute tiles and memory spaces. When ingesting code from an AI framework, progressive lowerings are supported by use of MLIR MemRef types—which represent typed memory references to structured data and are assumed to reside in global memory if not explicitly annotated. Existing lowerings support explicit memory allocation and movement into at least two further levels of addressable memory hierarchy (shared cluster scratchpads and private memory local to a worker).

We support two levels of abstraction in our data movement constructs. First, to support lowering from higher-level dialects, MLIR-AIR supports an air.memcpy operation. Second, to provide finer control over memory movement, MLIR-AIR provides an air.channel operation that abstracts architecture-specific optimizations in the device interconnect and specialized tensor DMAs.

5.2.1 air.memcpy
To progressively bridge the gap between high-level memory transfer specifications and spatial hardware implementations, MLIR-AIR introduces an intermediate air.memcpy construct. air.memcpy enhances the conventional memcpy operation with explicit attributes for data layout and memory spaces. This allows users to indicate which levels of the hierarchy they are transferring between, and to express an in-flight physical reshaping of the data, decoupling the logical layout of a MemRef from its physical representation in memory. This is useful because data transfer operations offer an opportunity to specialize data layout on the fly. In subsequent lowering paths, memcpy operations are lowered to make use of air.channel operations.

5.2.2 air.channel
Many modern spatial accelerators, including AMD NPUs and NVIDIA Hopper GPUs, expose hierarchies of data movement engines and memory spaces that require explicit software modeling for efficient execution. To support this, MLIR-AIR introduces the air.channel abstraction, which represents stream-based data transfers between distinct memory regions through paired put and get operations:

• put operations transfer data from a source memory address in one level of the memory hierarchy onto a serialized stream, and
• get operations retrieve data from the stream into a destination memory address, representing a buffer in a particular level of the memory hierarchy.

These operations are placed in the code regions local to their respective memory allocations, enabling the compiler to express and optimize DMA-to-memory affinity. The abstraction captures both endpoints of the communication via a globally scoped symbolic air.channel, enabling an ordered, streamed data transfer. Subsequent compiler passes, detailed in Section 7.4, decouple air.memcpy (Listing 1) into discrete air.channel.put and air.channel.get operations (Listing 2).

Notably, air.channel references can cross levels of the AIR construct hierarchy. For example, an air.channel.put operation that is the immediate child of an air.launch operation may push data into an air.channel whose consumers are more deeply nested air.channel.get operations inside air.herd and air.segment operations.

The air.channel abstraction integrates naturally with the MemRef dialect by adopting the same offset, size, and stride specifications for describing structured memory accesses. As shown in Listing 2, air.channel operations operate over structured MemRef views, allowing tensor access patterns to remain analyzable and composable with other MLIR transformations.

Listing 1. air.memcpy operation.³
air.memcpy (%y, %x)

Listing 2. Equivalent air.channel pair.
air.channel.put @chan1 (%x)
air.channel.get @chan1 (%y)

³For simplicity, we omit the offsets, sizes, and strides lists from this code snippet.

MLIR-AIR also supports broadcasting within air.channel operations, enabling a single source to supply data to multiple consumers without redundant resource usage. Broadcasting is explicitly specified through affine integer sets, which define mappings from input indices to sets of output indices and are specialized into affine.if conditions at lower levels. The detection and lowering of broadcast patterns is further discussed in Section 7.2.
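Listing 2 omits the offset, size, and stride lists; a hedged sketch of the fuller form, with an illustrative channel shape and access pattern (the values carry no significance), is:

air.channel @chanA [1, 1]
air.channel.put @chanA[0, 0] (%x[0, 0] [32, 64] [64, 1])
air.channel.get @chanA[0, 0] (%y[0, 0] [32, 64] [64, 1])

Each transfer names a MemRef view as base[offsets] [sizes] [strides], matching the convention used later in Listing 3.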
Named bundles of air.channel symbols are supported, allowing users to select a specific air.channel on which to put or get buffers (using a numerical index).

5.2.3 air.herd
The air.herd operation not only defines parallel execution but also plays a crucial role in data locality. Since all workers in an air.herd run concurrently on allocated local resources—including compute units, local memories, and tile-local data movers—data movement between them can be optimized using methods such as double buffering to minimize memory latency (see Section 7.4.1). The lowering of air.herd to hardware platforms such as AMD NPUs ensures spatial contiguity of worker placement, allowing architecture-specific features to be exploited for efficient implementation of communications. For example, in its default setting, the physical lowering of air.herd operations to NPU AI Engine arrays guarantees that neighboring workers can write to the local memory of their neighboring cores. This allows an efficient, specialized lowering of certain patterns of channel communication within the air.herd. By constraining data exchange to local resources, air.herd operations improve dataflow efficiency within their scope.

5.2.4 air.segment
As a resource management construct, air.segment also contributes to data locality by controlling memory partitioning and affinity constraints. Since air.segment operations dictate how resources are allocated and shared, they help ensure that accesses to shared memories remain localized to their data producers and consumers, including the data movers and any air.herd operations within an air.segment. This reduces unnecessary data movement, keeping computation and memory access spatially co-located for better efficiency.

5.3 Synchronization Constructs

5.3.1 air.token
The air.token construct provides fine-grained control over execution dependencies in asynchronous workloads, with the granularity tunable from coarse-grained synchronization over regions of code to fine-grained per-operation scheduling. When an operation is marked with the async keyword, it returns an air.token that signals its completion status; this air.token can be used to constrain the relative scheduling or placement of operations in time or space.

Synchronization using air.token is managed through two mechanisms. First, explicit air.wait_all operations can be inserted into synchronous control flow; this prevents further operation dispatch until the tokens specified in the air.wait_all operation have signaled completion. Second, synchronization lists, which explicitly specify the relative scheduling of operations in time and space, describe a scheduling graph. Backend lowerings can use this graph to push dependency resolution to distributed dispatchers in hardware devices, allowing offload of groups of operations. Short sketches of both mechanisms follow.
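As a hedged sketch (operand names illustrative, following Listing 3's simplified syntax), async operations return tokens that air.wait_all can block on synchronously, or fuse into a new token via a dependency list:

%t1 = air.memcpy async (%y1, %x1)
%t2 = air.memcpy async (%y2, %x2)
air.wait_all [%t1, %t2]                   // block dispatch until both complete
%t3 = air.wait_all async deps=[%t1, %t2]  // or: fuse both into one new token

Both forms appear in Listing 3.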
The compute model envisages three types of relationships between operations, controlled by synchronization lists:

• Dependency lists encode directed edges between an operation and the predecessor operations in which its inputs are defined. A dependency list requires that all inputs to an operation that are modified by a source operation in the list are visible before the sink operation is scheduled. This is often implemented as a happens-before relationship to control the scheduling of operations in time.
• Concurrency lists constrain the scheduling of operations in space and time. Each air.token in a concurrency list defines an undirected edge between two operations indicating that they must be scheduled at the same time. This implies that each operation must use exclusive resources.
• Affinity lists constrain the scheduling of operations in space. These are lists of tokens that define undirected edges between operations that must execute using the same resources. In practice, this means the operations' time slots must be disjoint, but the edge does not describe which operation must be scheduled first. The edges indicate where spatial affinity could be exploited by the compiler or runtime.

Details on how MLIR-AIR automatically detects and lowers data dependencies are included in Section 7.3. AIR's parallelism constructs, such as air.launch, signal air.token completion by behaving as a grouped asynchronous task: an air.launch's air.token is released only when all operations within the air.launch have completed.

5.3.2 air.channel
While primarily a data movement construct, air.channel also plays an essential role in synchronization across data movers operating on discrete memory spaces: an air.channel.get operation is synchronized to an air.channel.put on the same air.channel by back pressure, as shown in Listing 2. This synchronization abstraction—combined with asynchronous dependencies on air.channel actors to enforce synchronization local to each memory space—is both simple and effective: enforcing dependencies between distributed data movers does not require complex control-flow dependencies across code regions.

6 AIR Dialect Constructs in Use

To concretely illustrate how AIR dialect constructs appear in practice, we present a simplified example of an element-wise vector addition program. This example bridges the conceptual descriptions of Section 5 and the compiler transformations detailed in Section 7.

The input program, shown in Appendix A, expresses a tiled vector addition in generic SCF using scf.parallel and scf.for. We use explicit memref.copy operations to move data between MemRef objects scheduled in a loop iteration. The program is agnostic to the target hardware and does not yet reflect spatial execution, memory locality, or asynchronous scheduling.

The corresponding AIR-transformed IR, shown in Listing 3, makes the spatial and asynchronous execution explicit. In this transformed version:

• The outer scf.parallel loop is replaced by an air.launch enclosing an air.herd, assigning each iteration to a spatial tile in a 2D compute grid.
• Temporary buffer allocations are restructured with explicit memory hierarchy annotations, and all data movement operations are rewritten using air.memcpy or decoupled air.channel.put and air.channel.get.
• Execution dependencies across asynchronous regions are made explicit with air.token values, enabling pipelined execution between data transfer and compute stages.

The next section describes the key compilation stages in this IR transform.

Listing 3. Element-wise vector add described in the AIR dialect. The syntax has been simplified and reformatted for clarity of presentation; some MLIR dialect annotations and attributes are omitted.
air.channel @channel_0 [1, 2]
air.channel @channel_1 [1, 2]
air.channel @channel_2 [1, 2]
func.func @eltwise_add(%arg0: memref<65536xf32>, %arg1: memref<65536xf32>, %arg2: memref<65536xf32>) {
  %0 = air.launch async (%arg3, %arg4) in (1, 1) args(%arg7=%arg0, %arg8=%arg1, %arg9=%arg2) {
    %1 = air.segment @eltwise_add_0 async args(%arg10=%arg7, %arg11=%arg8, %arg12=%arg9) {
      %2 = air.channel.put async @channel_0[0, 0] (%arg10[0, 0, 0] [32, 2, 512] [2048, 512, 1])
      %3 = air.channel.put async @channel_0[0, 1] (%arg10[0, 0, 1024] [32, 2, 512] [2048, 512, 1])
      %4 = air.channel.put async @channel_1[0, 0] (%arg11[0, 0, 0] [32, 2, 512] [2048, 512, 1])
      %5 = air.channel.put async @channel_1[0, 1] (%arg11[0, 0, 1024] [32, 2, 512] [2048, 512, 1])
      %6 = air.channel.get async @channel_2[0, 0] (%arg12[0, 0, 0] [32, 2, 512] [2048, 512, 1])
      %7 = air.channel.get async @channel_2[0, 1] (%arg12[0, 0, 1024] [32, 2, 512] [2048, 512, 1])
      %8 = air.herd @herd_0 async tile (%arg13, %arg14) in (1, 2) {
        %async_token, %results = memref.alloc() async
        %async_token_4, %results_5 = memref.alloc() async
        %async_token_6, %results_7 = memref.alloc() async
        %async_token_8, %results_9 = memref.alloc() async
        %async_token_10, %results_11 = memref.alloc() async
        %async_token_12, %results_13 = memref.alloc() async
        %9 = air.wait_all async deps=[%async_token_10, %async_token_12]
        %10:3 = scf.for %arg17 = 0 to 65536 step 4096 iter_args(%arg18 = %9, %arg19 = %async_token_12, %arg20 = %async_token_12) {
          %11 = air.channel.get async deps=[%arg18, %arg20] @channel_0[%arg13, %arg14] (%results_9[] [] [])
          %12 = air.channel.get async deps=[%arg18, %arg20] @channel_1[%arg13, %arg14] (%results_13[] [] [])
          %13 = air.wait_all async deps=[%11, %12]
          %14 = scf.for %arg21 = 0 to 1024 step 1 iter_args(%arg22 = %13) {
            %async_token_20, %results_21 = memref.load async deps=[%arg22] %results_9[%arg21]
            %async_token_22, %results_23 = memref.load async deps=[%arg22] %results_13[%arg21]
            %22 = arith.addf %results_21, %results_23
            %async_token_24 = memref.store async deps=[%arg22] %22, %results_11[%arg21]
            %23 = air.wait_all async deps=[%async_token_20, %async_token_22, %async_token_24]
            scf.yield %23
          }
          %15 = air.channel.put async deps=[%arg19, %arg18, %14] @channel_2[%arg13, %arg14] (%results_11[] [] [])
          %16 = air.channel.get async deps=[%arg19] @channel_0[%arg13, %arg14] (%results[] [] [])
          %17 = air.channel.get async deps=[%arg19] @channel_1[%arg13, %arg14] (%results_5[] [] [])
          %18 = air.wait_all async deps=[%16, %17, %arg18]
          %19 = scf.for %arg21 = 0 to 1024 step 1 iter_args(%arg22 = %18) {
            %async_token_20, %results_21 = memref.load async deps=[%arg22] %results[%arg21]
            %async_token_22, %results_23 = memref.load async deps=[%arg22] %results_5[%arg21]
            %22 = arith.addf %results_21, %results_23
            %async_token_24 = memref.store async deps=[%arg22] %22, %results_7[%arg21]
            %23 = air.wait_all async deps=[%async_token_20, %async_token_22, %async_token_24]
            scf.yield %23
          }
          %20 = air.channel.put async deps=[%19, %arg18] @channel_2[%arg13, %arg14] (%results_7[] [] [])
          %21 = air.wait_all async deps=[%16, %17]
          scf.yield %15, %20, %21
        }
        // Memref deallocations omitted.
      }
      air.wait_all [%2, %3, %4, %5, %6, %7, %8]
    }
  }
  return
}

7 Compilation of AIR dialect to Spatial Hardware

This section builds upon the AIR constructs introduced in Section 5 and describes the core compiler optimizations in MLIR-AIR that transform high-level loop-based programs into efficient spatial implementations. These passes progressively transform generic operations into tiled subproblems, resolve dependencies for correct asynchronous execution, optimize data reuse and communication locality, and finally lower the program into hardware-executable IRs targeting AMD NPUs. The following subsections describe each stage in this process:

• Tiling and Parallelism Mapping: Maps high-level operations to distinct hardware tiles via loop tiling and parallel loop conversion (Section 7.1).
• Broadcast Detection and Lowering: Identifies and lowers data reuse patterns as affine-mapped broadcasts to reduce redundant transfers (Section 7.2).
• Asynchronous Dependency Analysis: Constructs fine-grained control and data dependencies represented using ACDGs (Section 7.3).
• Inferring Dataflow via air.channel: Decouples memory transfers into local operations, exposing DMA-to-memory affinity for scheduling (Section 7.4).
• Lowering to AMD NPU Targets: Generates spatial hardware IR and runtime code for deployment on AMD NPUs (Section 7.5).

7.1 Tiling and Parallelism Mapping

Tiling identifies parallel subregions of computation and introduces structured iteration constructs to represent them. These parallel tiles form the basis for spatial parallelism analysis and schedule optimization. In MLIR-AIR, tiling is performed through a compiler pass pipeline that lowers implicit parallelism in high-level operations (e.g., from the Linalg dialect) into explicit scf.parallel or scf.forall loops. These are subsequently mapped to AIR spatial constructs such as air.launch and air.herd, as illustrated by the sketch below. The pipeline leverages upstream MLIR tiling utilities while also ensuring flexibility and compatibility when plugged into broader compiler ecosystems, including IREE [4] and Triton [30].
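As a hedged sketch of this mapping (shapes and names illustrative), a tiled parallel loop and the AIR construct it lowers to might look like:

// Input: explicit spatial parallelism over a 2x2 tile grid.
scf.parallel (%i, %j) = (%c0, %c0) to (%c2, %c2) step (%c1, %c1) {
  // per-tile computation on tile (%i, %j)
}

// After mapping: the same iteration space as a 2x2 air.herd.
air.herd @herd_0 tile (%tx, %ty) in (2, 2) {
  // per-worker computation, specialized by (%tx, %ty)
}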
In Figure 3, we examine how tiling strategies in MLIR-AIR influence the scheduling efficiency of a tiled matrix multiplication (A × B = C, where A ∈ R^{M×K}, B ∈ R^{K×N}, and C ∈ R^{M×N}) mapped to AMD NPUs. We consider a matrix multiplication problem tiled along the M and N dimensions and mapped to three spatial layouts: 1×4, 2×2, and 4×1 air.herd operations. These configurations are selected to cover a range of aspect ratios and communication patterns. In the 1×4 layout, matrix A is broadcast across the four column-aligned cores, while matrix B is privately transferred to each core via separate dataflows. The 4×1 layout exhibits the complementary pattern: B is broadcast across rows, but A must be duplicated. In contrast, the 2×2 layout enables two-dimensional reuse: A is broadcast column-wise and B row-wise, minimizing redundant transfers. Data reuse patterns are automatically inferred by MLIR-AIR (detailed in Section 7.2).

[Fig. 3: (a) 1×4 herd; (b) 4×1 herd; (c) 2×2 herd; (d)-(f) memory tile traces for (a)-(c). Impact of tiling strategy on the data movement schedule in an output-stationary matrix multiplication. See Listing 6 for its schedule described using loop nests, where the for_all loops ii and jj were mapped to the horizontal and vertical directions of the two-dimensional air.herd operations, respectively.]

Figure 3 presents the runtime traces of each strategy, assuming equal tile sizes for A and B. In the 1×4 and 4×1 cases, imbalance in data streaming—due to one matrix requiring separate transfers—results in core stalls as execution waits for both inputs to arrive. By contrast, the 2×2 configuration shows reduced stalls and improved throughput owing to symmetric broadcast reuse on both the A and B paths.

Performance bottlenecks in asymmetric tilings can be mitigated by rebalancing tile sizes to equalize data transfer volumes or by allocating additional DMA channels to heavier dataflows. The latter can be automated through MLIR-AIR's dataflow-aware bufferization (see Section 7.4.3). This case study highlights how MLIR-AIR enables fast, tile-shape-aware schedule selection through explicit representation of data dependencies and broadcast opportunities, guiding design-space exploration for spatial platforms.

7.2 Broadcast Detection and Lowering

Once tiling has spatially partitioned the computational workload, the next optimization opportunity lies in minimizing off-chip data movement through on-chip reuse. To achieve this, MLIR-AIR introduces a systematic way to identify and optimize data broadcasting patterns in the tiled problem space. Compiler passes work in tandem to both detect opportunities and generate optimized AIR code that explicitly captures such patterns using affine maps.

7.2.1 Broadcast Detection
The broadcast detection pass performs static analysis on the iteration domain of the program to discover replication patterns in data movements. When detected, the pass annotates the data movement operation with an affine set representing a projection in the spatial iteration domains from the sources to the broadcast destinations. For example, an affine set S0 representing a broadcast over two-dimensional spatial iterations (e.g., an air.herd), where an array of 4 × 1 air.memcpy sources broadcasts to 4 × 4 destinations, has the form

{(d0, d1) ∈ Z² | ∃ s0 ∈ Z : d0 = s0, 0 ≤ s0 ≤ 3, 0 ≤ d1 ≤ 3},

where the symbol s0 and the dimensions d0 and d1 represent the source and destination spaces, respectively. By expressing this as an affine set in MLIR's Affine dialect (a sketch follows), the AIR dialect retains a precise and analyzable description of the communication pattern, which remains composable with other open-source MLIR dialects thanks to the community-developed Affine dialect utilities.
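As a hedged sketch, the set S0 above could be rendered with MLIR's affine_set syntax roughly as follows (constraint ordering is illustrative, and the existential quantifier is modeled via the symbol list):

// d0, d1: destination grid coordinates; s0: source index.
#bcast = affine_set<(d0, d1)[s0] : (d0 - s0 == 0, s0 >= 0, -s0 + 3 >= 0, d1 >= 0, -d1 + 3 >= 0)>

At lower levels such sets are specialized into affine.if conditions guarding each destination's copy of the transfer.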
7.3 Asynchronous Dependency Analysis

MLIR-AIR captures asynchronous parallelism using ACDGs, represented inline in MLIR code via SSA air.token values, which track execution ordering and ensure correctness.

7.3.1 Capturing Dependencies using ACDGs
In the synchronous code snippet shown in Listing 4, each data movement is implicitly blocked by the previous one, leading to a sequential schedule. However, both DMA operations could theoretically execute simultaneously, assuming independent DMA resources. MLIR-AIR automatically analyzes memory references, identifies these implicit dependencies, and explicitly annotates synchronization tokens, as shown in Listing 5.

Listing 4. Synchronous (sequential) execution.
air.memcpy (%v1, %v2)
air.memcpy (%v3, %v4)
func.call @func(%v1, %v3)

Listing 5. Explicit asynchronous dependencies.
%t1 = air.memcpy async (%v1, %v2)
%t2 = air.memcpy async (%v3, %v4)
%t3 = func.call async deps=[%t1, %t2] @func(%v1, %v3)

This explicit representation clearly indicates that the compute operation must wait for both DMA transfers to complete before proceeding, preserving correctness. It also makes evident that the two DMA operations can execute in parallel, effectively leveraging multiple discrete DMA resources. MLIR-AIR provides compiler passes which, driven by MLIR's native SSA representation and dominance analysis, automatically capture the ACDG arising from the read-after-write, write-after-read, and write-after-write dependencies between MLIR operations, providing robust correctness guarantees for MLIR-AIR's scheduling optimizations.

Loop-Carried Dependency in ACDG. In conventional compiler analysis, loop-carried dependencies are often represented using dependence polyhedra of the form (i, j, k) → (i′, j′, k′), capturing the legal source and destination iteration pairs that must respect data dependence across loops. Compilers such as PLUTO [7] and the work by Baskaran et al. [5] typically model tiling and execution at the level of atomic tiles, where dependencies across tiles dictate scheduling order, while dependencies within a tile are assumed to be resolved independently through external memory accesses. This treatment simplifies global scheduling but leaves intra-tile parallelism and fine-grained asynchronous scheduling opportunities underexplored.

Extending beyond this model, MLIR-AIR's ACDG captures dependencies at the operation level—both within and across loop iterations. As illustrated in Figure 4a, loop-carried dependencies are explicitly represented by passing air.token values (explicit synchronization handles) through the iteration arguments of scf.for loops, tracking per-iteration execution states. An arbitrary number of air.token values can be carried across iterations, each representing an independently progressing thread of execution. This enables precise modeling of parallel pipelines, race conditions, and shared resource usage, as illustrated in Figure 4b. This mechanism is particularly valuable in hardware pipelining, where producer and consumer stages can overlap in time (e.g., using ping-pong buffering; see Section 7.4.1).

Representation of Asynchronous Dependencies via air.token in scf.parallel. Similarly, MLIR-AIR supports asynchronous dependencies within structured parallel execution constructs such as scf.parallel. As visualized in Figure 4c, MLIR-AIR explicitly handles the synchronized initialization of parallel threads via an air.token passed into the initialization argument of the loop; their synchronized termination is represented explicitly via a reduction tree of air.wait_all barriers.
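As a hedged sketch of the single-token pattern of Figure 4a (all names illustrative, following Listing 3's simplified syntax), one air.token threaded through the iter_args of an scf.for sequentializes the loop body across iterations:

%t0 = air.wait_all async
%last = scf.for %i = %c0 to %cN step %c1 iter_args(%t = %t0) {
  // ops in iteration %i are ordered after the previous iteration via %t
  %t1 = air.channel.get async deps=[%t] @chan[%x, %y] (%buf[] [] [])
  scf.yield %t1
}

Carrying additional tokens, as in Figure 4b, simply adds more iter_args, one per independently progressing thread of execution.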
[Fig. 4: (a) scf.for, single token; (b) scf.for, multiple tokens; (c) scf.parallel; (d)-(f) code for (a)-(c). Visualizations of ACDGs in loop iterations, including (a) a sequentialized for loop, (b) a multi-token for loop, and (c) a parallel loop, with their respective MLIR-AIR specifications. A circle represents an air.token, and a polygon represents a group of MLIR operations in the loop body. Listings (d)-(f) demonstrate how each ACDG is represented inline in MLIR code.]

7.3.2 Reasoning using ACDGs
Building on the previous section on ACDG extraction, we now describe how MLIR-AIR progressively transforms a generic loop-based program into finer-grained asynchronous schedules by analyzing and restructuring its control and data dependencies. Figure 5 illustrates this process on an imperfect loop nest. In the original synchronous form (Figure 5a), the loop bodies imply a fully sequential dataflow in the absence of explicit parallelism annotations. Nevertheless, the underlying ACDG reveals opportunities for parallelism, as operations can be partitioned based on the memory buffers they access (annotated by colors).

[Fig. 5: (a) Sync. loop nest; (b) Async. loop nest; (c) Async. loop nest, split. ACDGs of an imperfect loop nest before and after asynchronous dependency analysis and loop splitting, with operations partitioned by whether they access data A or data B.]

MLIR-AIR first applies asynchronous dependency analysis to construct an explicit ACDG using loop-carried air.token values within each loop (Figure 5b). This step exposes parallelism between sub-graphs of each loop's body accessing distinct buffers, while preserving correctness through token synchronization. To further expose optimization opportunities, MLIR-AIR splits the asynchronous loop nest into multiple independent nests (Figure 5c), each exclusively operating on a single memory object. This restructuring systematically uncovers and amplifies the spatial parallelism latent in generic loop-based input programs, isolating it into dataflows that facilitate compiler optimizations.

7.4 Inferring Dataflow via air.channel

The ACDG structure not only enables fine-grained parallelism analysis but also serves as the foundation for identifying and scheduling data movement across disjoint memory spaces. AIR's channel-based abstraction makes such communication patterns explicit and analyzable.

Figure 6 illustrates this transformation using ACDGs. In the pre-transformation ACDG shown in Figure 6a, a memcpy operation moves data from a shared buffer a into a local buffer a′, which is subsequently consumed by a compute kernel. Because the memcpy resides within the body of the air.herd, the producer and consumer of a′ are tightly coupled within a single hierarchical region. While this correctly expresses intra-herd dependencies, it fails to expose the fine-grained asynchronous boundary between the shared-memory and local-memory regions. As a result, the external thread managing a remains blocked until the entire air.herd completes—despite the fact that only the memcpy operation requires synchronization.

The transformed ACDG in Figure 6c resolves this limitation by replacing the memcpy with a decoupled pair of air.channel.put and air.channel.get operations. These operations are hoisted to the respective regions associated with the source and destination memory, each integrated into its own ACDG subgraph via explicit air.token synchronization. To preserve the correctness of the original computation, the hoisted put operation must replicate the parallel semantics of the original memcpy; if the memcpy was nested within an M × N air.herd, then the corresponding put must be nested within a matching M × N scf.parallel loop, as in the sketch below.
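A hedged sketch of this hoisting (names illustrative): the put, now a sibling of the herd rather than its child, is wrapped in an scf.parallel over the same M × N space:

scf.parallel (%i, %j) = (%c0, %c0) to (%cM, %cN) step (%c1, %c1) {
  air.channel.put async @chanA[%i, %j] (%a[] [] [])
}
air.herd @herd_0 async tile (%tx, %ty) in (M, N) {
  %t = air.channel.get async @chanA[%tx, %ty] (%a_local[] [] [])
  // compute on %a_local, ordered after %t
}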
The dashed arrow between air.channel.get and air.channel.put represents the data stream's back pressure; an overlapping schedule across the two sides is enabled if stream buffering is supported in hardware. This ensures that the data produced and consumed match in size across hierarchies.

This decoupling of put from get not only enables overlapping communication and execution but also allows the compiler to infer and instantiate multiple parallel dataflows—subject to available bandwidth and communication resources. When hardware permits, the compiler may emit parallel air.channel instances, increasing aggregate throughput and improving hardware utilization. In this setting, data movement is no longer serialized at the herd boundary, and bandwidth can scale with the degree of inferred parallelism. Furthermore, air.channel operations allow multiple data movement operations to share communication resources. This enables hardware-aware optimizations such as air.channel arbitration in pipelined execution (see Section 7.4.1) and air.channel reuse (see Section 7.4.2).

[Fig. 6: (a) Coupled memcpy; (b) decoupled put/get; (c) decoupled put/get with the put hoisted. Visualizations of ACDGs before and after air.channel decoupling; edges denote air.token and air.channel dependencies.]

7.4.1 Capturing Hardware Pipelining with air.channel in ACDG
Building on the fine-grained asynchronous representations introduced in Section 7.3.1 and the decoupled air.channel abstraction, MLIR-AIR captures hardware pipelining by leveraging the loop-carried air.token semantics of the ACDG. By allowing multiple tokens to flow independently across iterations, MLIR-AIR models the three key dependencies in a hardware pipeline all at once: (i) producer-consumer data dependencies, (ii) producer-side resource contention, and (iii) consumer-side resource contention.

As a motivating example, we consider two-stage pipelining, commonly referred to as ping-pong buffering. To expose the pipeline stages explicitly, the loop must first be unrolled by a factor of two, corresponding to the number of stages, yielding distinct ping and pong threads for ACDG annotation. The resulting structure maps naturally to the generic ACDG with multiple loop-carried tokens shown in Figure 4b, where the ping producer, ping consumer, pong producer, and pong consumer map to the four loop body subgraphs. Two of the four tokens (annotated in gray and green) represent the producer-consumer dataflow for the ping and pong stages, while the other two tokens (red and blue) capture intra-stage resource contention on the producer and consumer side, respectively. The final ACDG representing the two-stage pipeline is illustrated in Figure 7, where the flattened form highlights how each token enforces correctness across iterations; a loop-level sketch follows.

[Fig. 7: Flattened ACDG showing a ping-pong buffering schedule, specialized from the generic ACDG form in Figure 4b, alternating ping/pong producer and consumer operations across iterations.]
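As a hedged sketch of this four-token structure (buffer and channel names illustrative, and the exact dependency wiring simplified), the unrolled loop carries one token per subgraph:

%r:4 = scf.for %i = %c0 to %cN step %c2
    iter_args(%pp = %t0, %pc = %t1, %qp = %t2, %qc = %t3) {
  // ping: the producer waits on the previous ping consumer (buffer reuse)
  %w0 = air.channel.get async deps=[%pp, %pc] @in[%x, %y] (%ping[] [] [])
  %r0 = air.channel.put async deps=[%w0] @out[%x, %y] (%ping[] [] [])
  // pong: proceeds independently of ping, overlapping the two stages
  %w1 = air.channel.get async deps=[%qp, %qc] @in[%x, %y] (%pong[] [] [])
  %r1 = air.channel.put async deps=[%w1] @out[%x, %y] (%pong[] [] [])
  scf.yield %w0, %r0, %w1, %r1
}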
To demonstrate the pipelining transformation process, we implemented a simple case study in which a data stream traverses an AMD NPU memory tile featuring multiple memory banks and data ports. Figure 8 shows that with ping-pong enabled, the MLIR-AIR compiler correctly identifies producer (write) and consumer (read) threads from the input loops and infers an overlapping schedule. The post-transformation runtime trace, shown in Figure 8b, confirms the expected behavior: data reads and writes execute concurrently across two buffers, validating the correctness and effectiveness of the pipelined ACDG transformation.

[Fig. 8: (a) Block diagram and trace with memtile ping-pong buffering disabled; (b) with it enabled. A simple data streaming case study showing the effect of enabling two-stage hardware pipelining.]

7.4.2 Time-multiplexed Data Movement via air.channel Merging
The decoupled air.channel abstraction in MLIR-AIR enables time-multiplexed data movement by allowing multiple dataflows to reuse shared communication resources through air.channel merging. This is particularly valuable in scenarios where data movement hardware—such as memory ports, DMA engines, or network routing resources—is limited.

MLIR-AIR provides compiler passes that automatically detect opportunities for channel merging by analyzing the ACDG structure. Merging is controlled via compiler flags that specify the memory hierarchy at which merging is applied. For the selected hierarchies, all merging opportunities implicit in the control flow are greedily identified and lowered.

[Fig. 9: (a) Before air.channel fusion; (b) after air.channel fusion. Visualizations of ACDGs before and after channel merging.]

Figure 9 illustrates a generic example: in Figure 9a, two imperfect loop nests perform channel put and get operations on separate memory objects a and b, through affine maps f_a, g_a, f_b, and g_b, respectively. Merging is permitted when the iteration domains i, j match, ensuring correctness when interleaving the data movements. The resulting fused ACDG, shown in Figure 9b, sequentializes the data movements by interleaving the two loops, consolidating their use of the air.channel operations @chan1 and @chan2.

Figure 10 further demonstrates the hardware mapping of the fused design onto an NPU memory tile, along with performance traces showing the data movement schedule. Both the original and fused designs apply pipelined execution following the scheme in Section 7.4.1. After merging, data movements are time-multiplexed, reducing contention for ports and buffers and thereby lowering resource utilization while preserving performance.

[Fig. 10: (a) Schematic before air.channel merging; (b) after merging; (c) pre-merge trace at the memory tile; (d) post-merge trace. Impact of air.channel merging on data movement parallelism and resource usage.]

7.4.3 Parallelized Data Movement via air.channel Splitting
While channel merging enables time-multiplexed reuse of constrained DMA resources by sequentializing data transfers, such serialization may limit performance when hardware availability permits greater parallelism. When DMA resources are abundant, MLIR-AIR supports an alternative strategy: exposing and exploiting data movement parallelism through MemRef splitting.

Inputs to MLIR-AIR, often from high-level IRs expressed using generic tensor abstractions, do not always account for the spatial memory connectivity constraints of target architectures such as AMD NPUs during bufferization, leading to degraded performance or mapping failures at implementation time. To address this, MemRef splitting performs a dataflow-aware partitioning analysis that refines buffer allocations based on the actual access patterns and hardware platform constraints.

In a common access pattern, a memory object a is read once and written to multiple outputs. Using the polyhedral representation, the read and write operations within loop nests over i and j can be represented as a[f(i)] and a[f(i)][g(j)], where f and g are affine maps. A concrete example of a g that implies a splittable data access pattern, with i ∈ Z^1, is one with dependence polyhedron {S0[i] → S0[i mod 2]}, indicating two disjoint access patterns. The affine map transformation is made possible by MLIR-AIR's explicit representation of parallelism: by analyzing parallel air.channel operations and the associated asynchronous dependencies, the compiler infers the implicit parallel access patterns, transforms the affine access functions to partition independent memory accesses, and bufferizes them into smaller sub-buffers—guaranteeing parallel, conflict-free access at runtime.

In the original schedule (Figure 11a), a is naively bufferized into a single memory object, leading to sequentialized reads a[f(i)] over time, regardless of the available parallel memory tiles and DMA engines. This limits parallelism across memory tiles and DMA engines, reducing throughput and risking port over-utilization that can cause mapping failures. After MemRef splitting, MLIR-AIR transforms the access map f → ⟨f1, f2⟩, partitioning a[f(i)] into multiple independent sub-tensors ⟨a[f1(i)], a[f2(i)]⟩ that can be allocated to separate memory tiles. This results in an optimized schedule, enabling independent and concurrent data movement across the spatial fabric; see the sketch below.

Following this workflow, we implemented a synthetic data streaming experiment, moving data from shim ports through memory tiles to cores using a unit-stride affine access pattern. The pre-splitting hardware trace in Figure 11c shows serialization of all inbound traffic through a single tile, limiting throughput. After MemRef splitting, the post-splitting trace in Figure 11d demonstrates parallel streaming through disjoint memory tiles and shim ports, significantly improving data movement efficiency.
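As a hedged sketch of the i mod 2 example (buffer sizes and names illustrative), splitting one serialized channel feed into two independent sub-buffers and channels:

// Before: a single buffer %a feeds one channel serially over all i.
// After: even and odd i are partitioned into %a0 and %a1, each with
// its own channel, so the two streams can move concurrently.
air.channel @even [1, 1]
air.channel @odd  [1, 1]
scf.for %i = %c0 to %cN step %c2 {
  air.channel.put async @even[0, 0] (%a0[] [] [])  // f1: i mod 2 == 0
  air.channel.put async @odd[0, 0]  (%a1[] [] [])  // f2: i mod 2 == 1
}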
[Fig. 11: (a) Schematic before MemRef splitting; (b) after splitting; (c) memory tile trace pre-splitting; (d) memory tile trace post-splitting. Impact of MemRef splitting on data movement parallelism; the traces visualize serialization and parallelism pre- and post-splitting.]

7.5 Lowering to AMD NPU Targets

With parallelism, data reuse, and communication patterns fully expressed, the AIR IR is ready to be lowered into hardware-specific representations. First, constructs in MLIR-AIR are lowered to constructs in MLIR-AIE [14]. air.herd operations are lowered to per-core compute kernels. air.channel constructs are lowered to DMA engines, buffer descriptors (BDs), and stream connections between tiles. Synchronization via air.token values is implemented using tile-local locks.

MLIR-AIR supports code generation targeting multiple open-source frameworks, including LLVM IR, AMD XRT, and ROCr, enabling integration into heterogeneous systems involving CPUs and GPUs. Synchronization between the hardware-specific IR and the runtime program is provided by tokens generated by the NPU hardware controller, lowered from the air.token values that synchronize host operations with on-device operations.

8 Integration with AI Model Software Ecosystems

MLIR-AIR is designed to bridge the gap between high-level AI model frameworks and low-level hardware execution platforms. A key feature of MLIR-AIR is its flexible frontend integration, which allows it to ingest AI model specifications from multiple widely used programming environments and IRs. These integrations allow developers to compile high-level AI models directly into MLIR-AIR's asynchronous, tiled execution model, ready for targeting spatial accelerators like GPUs and AMD NPUs.

Python Integration via AIR's Python Bindings. MLIR-AIR includes native Python bindings that expose AIR dialect operations to Python-based workflows. An example vector-add design using these bindings is shown in Appendix B. These bindings allow direct programmatic construction of AIR IR, enabling rapid prototyping and integration with AI model preprocessing, autotuning, or interactive toolchains.

PyTorch Frontend via Torch-MLIR. Through Torch-MLIR, PyTorch models are lowered into MLIR dialects compatible with MLIR-AIR's tiling, scheduling, and asynchronous lowering passes. This allows MLIR-AIR to serve as a backend for PyTorch with no model rewriting, producing spatially executable kernels and runtime binaries for NPUs.

IREE Integration for Portable Deployment. MLIR-AIR interoperates with IREE by consuming its tiled intermediate MLIR representations and producing scheduled AIR programs [4]. These can be integrated with IREE's hardware abstraction layer (HAL), enabling deployment across heterogeneous systems where AIR-based NPUs coexist with CPU and GPU targets under a unified runtime.

Triton Frontend via Triton-Shared. AIR supports Triton through the Triton-Shared project [12], which lowers Triton IR into MLIR dialects consumable by AIR. AIR's compilation pipeline then transforms these into hardware schedules and spatial mappings targeting AMD NPUs, enabling the reuse of GPU-oriented high-level abstractions for spatial architectures. As of the date of publication, MLIR-AIR is the only compiler infrastructure that enables Triton programs to target AMD NPUs.
The Triton-to-AIR workflow is experimental and remains under active development.

9 Design Experience and Results

We evaluate MLIR-AIR across progressively complex AI workloads to assess its abstraction efficiency, expressiveness, and performance portability. Our analysis focuses on three main dimensions: (1) programming abstraction analysis of MLIR-AIR using Halstead metrics [19], (2) performance scaling across multiple backends and hardware configurations for matrix multiplication, and (3) MLIR-AIR's ability to express and optimize fused kernels through a case study on the LLaMA 2 MHA block. These evaluations highlight MLIR-AIR's ability to serve as a spatial compiler abstraction that balances expressiveness and analyzability, positioning it between high-level programming models such as Triton and low-level spatial backends like MLIR-AIE.

9.1 Programming Abstraction Analysis

MLIR-AIR provides a structured, loop-based programming interface that decouples algorithm specification from hardware mapping. Developers express computation at a high level while relying on the compiler to perform hardware-aware transformations. This lets MLIR-AIR serve as an effective bridging layer between high-level programming models, such as Triton and PyTorch, and low-level, spatially explicit representations like MLIR-AIE, which target fine-grained hardware configurations on spatial platforms.

To evaluate the abstraction level of MLIR-AIR, we perform Halstead complexity analysis across representative AI workloads, comparing against ARIES—a compiler stack that similarly bridges high-level models to spatial hardware [48]. Halstead metrics, computed in our experiments using the open-source tool radon, quantify software complexity based on code structure, capturing vocabulary, difficulty, and effort to provide a language-agnostic measure of clarity and maintainability [24].

Table 2. Difference in Halstead vocabulary, difficulty, and effort among Triton, ARIES, MLIR-AIR, and MLIR-AIE implementing the same set of common AI components targeting the AMD NPU (lower is better; × columns are relative to Triton). All designs use the bfloat16 data format. Designs implemented with an externally called μkernel are annotated with ✔. While MLIR-AIE naturally shows higher complexity due to its explicit low-level programming model, MLIR-AIR bridges the gap between Triton and MLIR-AIE, offering lower effort and difficulty while maintaining spatial expressiveness.

Design              Abstraction  Ext. μkernel  Vocabulary        Difficulty        Effort
                                               Value    ×        Value    ×        Value      ×
matrix_scalar_add   Triton       ✘             10       –        1.25     –        62.29      –
(single core)       ARIES        N/A           N/A      –        N/A      –        N/A        –
                    MLIR-AIR     ✘             14       1.40     1.64     1.31     112.14     1.80
                    MLIR-AIE     ✘             13       1.30     1.5      1.20     83.26      1.34
eltwise_binaryop    Triton       ✘             12       –        1.4      –        105.40     –
                    ARIES        N/A           N/A      –        N/A      –        N/A        –
                    MLIR-AIR     ✔             11       0.92     1.50     1.07     62.27      0.38
                    MLIR-AIE     ✔             23       1.92     3.579    2.56     825.67     7.83
softmax             Triton       ✘             14       –        2.4      –        164.48     –
                    ARIES        N/A           N/A      –        N/A      –        N/A        –
                    MLIR-AIR     ✔             11       0.79     1.50     0.63     62.27      0.38
                    MLIR-AIE     ✔             18       1.29     4.615    1.92     692.85     4.21
conv2d              Triton       ✘             53       –        1.74     –        867.09     –
                    ARIES        N/A           N/A      –        N/A      –        N/A        –
                    MLIR-AIR     ✔             11       0.21     1.50     0.86     62.27      0.072
                    MLIR-AIE     ✔             36       0.68     5.0      2.87     1938.72    2.24
matmul              Triton       ✘             86       –        5.73     –        5410.33    –
                    ARIES        ✔             40       0.47     4.76     0.83     2079.31    0.38
                    MLIR-AIR     ✔             78       0.91     3.68     0.64     4713.03    0.87
                    MLIR-AIE     ✔             107      1.24     13.46    2.35     32040.15   5.92
The Halstead metrics were evaluated across a spectrum of representative AI designs, including matrix multiplications, strided and depth-wise convolutions, nonlinear functions such as softmax and exponentiation, and trigonometric operations used in Rotary Positional Embeddings (RoPE) [43]. Table 2 reports the Halstead vocabulary, difficulty, and effort metrics across five representative workloads—each implemented using Triton [12, 30], ARIES [48], MLIR-AIR (via Python bindings), and MLIR-AIE (via IRON [20])—and highlights the key trends in abstraction efficiency and programming overhead. These examples were drawn from publicly available GitHub repositories. The workloads span a range of complexity and include both inline and externally defined μ-kernels—a templatized set of compute instructions specialized to perform a task—as indicated in the 'external μkernel' column. Triton examples include μ-kernel logic inline, which inflates vocabulary and effort metrics. In contrast, MLIR-AIR and MLIR-AIE designs often invoke external kernels, resulting in more compact in-body control logic.

Despite this discrepancy in kernel inclusion, MLIR-AIR consistently maintains Halstead difficulty and effort scores within 2× of Triton across the workloads. This indicates that MLIR-AIR offers a structured parallel programming abstraction that is similarly accessible to Triton's. For smaller, single-core kernels like matrix_scalar_add, MLIR-AIR and MLIR-AIE show near-identical complexity. However, for complex, multi-core designs, such as conv2d and matmul, MLIR-AIR shows a dramatic reduction in overhead, achieving over 80% lower difficulty and effort than MLIR-AIE. This demonstrates MLIR-AIR's strength in managing complexity as spatial parallelism increases.

ARIES presents matrix multiplication examples in its GitHub repository, which we used for the comparisons in Table 2. In this example, the Halstead vocabulary and effort metrics are both very low—lower than Triton's—due to the reduced number of operations and operands in the control logic. However, MLIR-AIR achieves a lower difficulty score, indicating that AIR-based representations use simpler and more regular constructs. This result suggests that MLIR-AIR enables structured parallelism at a comparable or lower cognitive complexity than ARIES, while supporting a broader class of workloads.

These results demonstrate that MLIR-AIR effectively bridges the programming gap between high-level Triton-style control flow and the low-level, highly explicit MLIR-AIE representation. By combining structured, tile-aware abstractions with token-based asynchronous scheduling, MLIR-AIR enables efficient spatial hardware mapping while significantly reducing the complexity developers must manage in their source code.

9.2 Performance Scaling: Mapping Matrix Multiplication to Spatial Hardware

To evaluate the ability of MLIR-AIR to generate efficient spatial compute kernels from generic loop-based programs, we examine its performance on matrix multiplication. Our experiments span a range of problem sizes (256–4096 per iteration dimension) and data formats, measured on a laptop platform featuring the AMD Ryzen AI 7840 NPU [13, 16]. We executed all MLIR-AIR and MLIR-AIE programs using the AMD XRT runtime [17], which manages binary loading, data movement, and kernel dispatch.
We evaluate our MLIR-AIR-generated MLIR-AIE dialect code against MLIR-AIE's published hand-optimized matrix multiplication implementation, which has been adopted by many recent research works as the state-of-the-art baseline for spatial execution on AMD NPUs [20, 34, 48]. The goal of this evaluation is to demonstrate that MLIR-AIR, starting from a naively specified nested loop for matrix multiplication, can produce performant implementations through a sequence of compiler transformations.

Listing 6 shows pseudocode for a tiled matrix multiplication written in a generic loop-nest style, using for loops for sequential execution and for_all for spatial parallelism. Such generic representations are beneficial for portability because the algorithm specification is decoupled from the hardware mapping: AI frameworks can target MLIR-AIR without needing to provide device-specific code, demonstrating ease of frontend integration. MLIR-AIR compiles this form via the series of compilation passes presented in Section 7, which optimize bufferization, data movement scheduling, and concurrency modeling with platform awareness.

Listing 6. Pseudocode for a tiled output-stationary matrix multiplication that drives MLIR-AIR.
for_all (i_outer = 0; i_outer < M; i_outer += t_i) {
  for_all (j_outer = 0; j_outer < N; j_outer += t_j) {
    for (k_outer = 0; k_outer < K; k_outer += t_k) {
      for_all (ii = 0; ii < t_i; ii++) {
        for_all (jj = 0; jj < t_j; jj++) {
          for (kk = 0; kk < t_k; kk++) {
            C[ii][jj] += matmul(A[ii][kk], B[kk][jj]);
}}}}}}

The loop nest structure shown above implements an output-stationary schedule: each compute tile accumulates a portion of the output matrix locally across multiple input tile iterations (the k loop). This is a naive but widely applicable strategy, offering high reuse of output accumulators, low communication cost for partial sums, and simple mapping to spatial arrays. However, it is only one of many possible schedules supported by MLIR-AIR, as MLIR-AIR performs schedule optimizations on generic control-flow constructs, allowing for adaptability to different compute problems and platform constraints.

Figure 12 shows the performance of MLIR-AIR-compiled matrix multiplication kernels generated from a generic loop nest of the form shown in Listing 6, plotted as throughput (GOP/s) versus compute workload (GOPs). Three spatial hardware configurations were evaluated: a 4 × 4 tile array (4 TOP/s peak), a 2 × 4 tile array (2 TOP/s peak), and a 2 × 2 tile array⁵ (1 TOP/s peak).

[Fig. 12: (a) 2 × 2 herd, 1 TOP/s peak, max. eff. 48.2%; (b) 2 × 4 herd, 2 TOP/s peak, max. eff. 65.0%; (c) 4 × 4 herd, 4 TOP/s peak, max. eff. 48.6%. Throughput (GOP/s) versus compute workload (GOPs) for bfloat16 matrix multiplications, with shapes up to M = N = K = 4k, for AIE tile herds sized 2 × 2, 2 × 4, and 4 × 4, respectively. Compute workload increases along the x-axis. Each point represents the maximum throughput achieved across 20 random tests, to filter out any random system and DDR access latency injected at runtime. Each color/shape reflects a distinct K, annotating tests using K = 256, 1024, and 4096, respectively. The dotted line marks the theoretical peak compute throughput⁴ achievable for each herd of AIE cores at bfloat16 precision, and the maximum compute efficiency achieved against it is annotated with an arrow.]
In this plot, the tiling sizes were fixed to M = N = K = 64, using the bfloat16 (bf16) data format for both input and output buffers; this tile size is chosen to fit entirely within the 64 KB local memory of a single NPU tile. Note that this tile size was chosen heuristically and not fine-tuned; higher performance may be possible by adjusting tiling factors based on memory hierarchy and DMA burst sizes.

⁴Theoretical peak compute throughput is calculated as the maximum compute speed achievable by the specified compute units, under the assumption of infinite data movement bandwidth and zero control overhead.
⁵The herd is reshaped to occupy a single column of four AIE tiles.

The three subplots (a)–(c) compare performance as the air.herd dimensions increase. The air.herd dimensions are easily tunable in source code via tiling factors (Section 7.1). The peak throughput achieved scales proportionately with the tile count, demonstrating that MLIR-AIR is able to leverage increased spatial compute automatically, derived from the structured parallelism in the input program.

Larger problem sizes increase the computational intensity (OPs per memory access), which leads to better utilization of compute tiles in the dataflow pipeline. As we sweep across increasing problem sizes (larger K values), the throughput consistently improves. Higher K reduces the start-up effect of the dataflow pipeline by reducing the frequency of flushes as the scheduler refills accumulators with zeros, leading to increased overall performance. This trend reflects AIR's ability to schedule larger compute tiles effectively, reducing the relative overhead of data transfers and synchronization.

Across all configurations, throughput is lower at smaller workloads (the left side of each plot) due to underutilization of compute resources and startup latency at runtime. As the compute workload increases, throughput rises and asymptotically approaches the device peak. This indicates the transition from memory-bound throughput at small sizes to compute-bound throughput at larger sizes. The shape of the performance curve confirms that MLIR-AIR introduces minimal runtime overhead and supports efficient scaling into the compute-bound region. MLIR-AIR achieves up to 48.6% of peak on the 4 TOP/s herd, 65.0% of peak on the 2 TOP/s herd, and 48.2% of peak on the 1 TOP/s herd.

To evaluate the QoR achievable by MLIR-AIR, we benchmark the performance of matrix multiplication workloads across three common AI data types supported by the AMD NPU's vector engine: i16, bf16, and i8. MLIR-AIE's hand-optimized implementations serve as baselines.

[Fig. 13: (a) i16, 2 TOP/s peak; (b) bf16, 4 TOP/s peak; (c) i8, 8 TOP/s peak; series: MLIR-AIE, ARIES, MLIR-AIR, with max. efficiencies of 77.7/76.6/78.7% (i16), 50.8/49.1/48.6% (bf16), and 60.6/56.2/59.1% (i8), respectively. Throughput versus compute workload Pareto frontiers for bfloat16, i16, and i8 matrix multiplications, with shapes swept up to M = N = K = 4k, for an AIE tile herd sized 4 × 4. Output data width was kept at 16 bits for all tests: bfloat16 outputs for bfloat16 inputs, and i16 outputs for i8 and i16 inputs. The dotted line marks the theoretical peak compute throughput⁴ achievable for each herd of AIE cores at the specified precision.]
Figure 13 shows that in all three precision settings, MLIR-AIR's compiler-generated designs achieve throughput closely tracking the Pareto frontier established by the manually optimized designs written in MLIR-AIE. This confirms MLIR-AIR's effectiveness in generating near-optimal performance without requiring handcrafted code. Designs generated by MLIR-AIR achieve 78.7%, 48.6%, and 59.1% maximum efficiencies against the theoretical peak throughput for i16, bf16, and i8, respectively, falling within 5 percentage points of the MLIR-AIE hand-optimized designs.

We also compare MLIR-AIR against ARIES, which demonstrates strong performance on smaller GEMM shapes, particularly for i16 and i8. However, ARIES exhibits lower peak throughputs on these two data formats when saturated, indicating trade-offs between early-stage performance and scalability. These results further underscore MLIR-AIR's ability to combine generality, analyzability, and performance within a unified spatial compilation flow.

9.3 Kernel Merging: LLaMA 2 Multi-Head Attention

To evaluate MLIR-AIR's ability to express and optimize fused AI kernels, we implement a prototype of the LLaMA 2 MHA [44] block on an AMD NPU using a single AIE core. The model uses a head size of 48 and a sequence length of 256, with 6 heads multiplexed in time. The MHA block includes projection (Q, K, V), rotary positional encoding (RoPE), softmax, and two matrix multiplications, separated by key-value (KV) caching [43].

[Fig. 14: An overview of the fused LLaMA 2 MHA schedule, chaining Q/K/V projection, RoPE, matmul (w/ invsqrt), softmax, and a final matmul on the core, with the K and V caches (48 × 256 each) held in DDR. The dataflow through the memory tile is omitted for simplicity.]

Each operation is implemented as a generic function, composed using structured scf.for and scf.parallel loop nests. MLIR-AIR compiles this into a hardware-aware schedule by mapping operations to NPU components such as DMA channels, BDs, and NPU compute tiles. KV caching is implemented as loop-nested air.channel operations targeting persistent DDR memories, holding a cache of size 48 × 256 for each of K and V. DMA channel and BD reuse opportunities are captured via the air.channel merging described previously in Section 7.4.2. Correctness in placement and buffer management is enforced via MLIR-AIR's dependency analysis, which generates proper synchronization via air.token values.

Table 3 profiles the end-to-end latency of each head, implemented with kernels dispatched both individually and fused together, with host and runtime dispatch overheads included. When each component executes as an independent kernel, the total latency is 834 μs. With all kernels fused into one, the latency is reduced to 373 μs—achieving a 2.24× speedup by eliminating dispatch overheads, amortizing reconfiguration cost, and leveraging data locality within the NPU tile's local memory.

While this prototype does not yet exploit spatial parallelism across multiple AIE cores, it highlights MLIR-AIR's ability to concisely represent and optimize non-trivial transformer blocks.
Component                    | Lines of code | Latency (µs) | Speedup
Q, K and V vector projection | 16            | 169          | –
RoPE                         | 37            | 121          | –
matmul (w/ invsqrt)          | 39            | 283          | –
softmax                      | 26            | 115          | –
matmul                       | 37            | 146          | –
Total                        | 155           | 834          | –
Fused                        | 155           | 373          | 2.24×

The full implementation is written in 155 lines of high-level MLIR, demonstrating the expressiveness of AIR abstractions for modeling modern AI computation and communication patterns.

10 Future Directions
We identify three key directions for extending MLIR-AIR's applicability and automation in spatial compiler workflows:

Multi-Target Hardware Support. MLIR-AIR envisages support for multiple hardware platforms beyond NPUs, including compilation to GPUs using long-running persistent kernels and integration with user-developed accelerators implemented in FPGAs. This requires extending our backend abstractions and lowering pipelines to target architecture-specific runtimes, scheduling models and memory hierarchies.

Support for Heterogeneous Runtime Coordination. As modern systems increasingly include CPUs, GPUs, and NPUs on a shared die, MLIR-AIR can serve as a common abstraction supporting runtime coordination across heterogeneous devices. This includes lowering AIR to multiple backend runtimes (e.g., ROCr, XRT) and managing inter-accelerator data movement and synchronization.

Cross-Device Launch Semantics. Some MLIR-AIR features, such as data movement over explicit channels and resource management using segments, may have applicability in scaling beyond a single device; we plan to explore how an appropriate runtime might use air.launch to enable multi-device dispatch. This includes scaling launches dynamically across available hardware based on runtime resource availability, allowing for coordinated execution across multiple accelerators on one host, and across multiple hosts within a compute cluster.

11 Conclusion
MLIR-AIR introduces a structured, extensible compiler abstraction for mapping high-level AI programs onto spatial architectures. By providing explicit constructs for asynchronous parallelism, data movement, and compute scheduling, AIR enables platform-agnostic, analyzable code generation without sacrificing performance. Our extensive evaluation demonstrates that AIR provides both high expressiveness and efficiency, while maintaining a low abstraction overhead. We believe MLIR-AIR provides a strong foundation for future spatial compiler infrastructures.

References
[1] Dennis Abts, John Kim, Garrin Kimmell, Matthew Boyd, et al. 2022. The Groq Software-defined Scale-out Tensor Streaming Multiprocessor: From Chips-to-Systems Architectural Overview. In IEEE Hot Chips 34 Symposium (HCS).
[2] Nicolas Bohm Agostini, Serena Curzel, Vinay Amatya, Cheng Tan, et al. 2022. An MLIR-based Compiler Flow for System-level Design and Hardware Acceleration. In IEEE/ACM International Conference on Computer-Aided Design.
[3] The Chromium Authors. 2025. Perfetto. https://perfetto.dev/docs/
[4] The IREE Authors. 2019. IREE. https://iree.dev/
[5] Muthu Manikandan Baskaran, Uday Bondhugula, Sriram Krishnamoorthy, J. Ramanujam, et al. 2008. A Compiler Framework for Optimization of Affine Loop Nests for GPGPUs. In International Conference on Supercomputing.
[6] Mohamed-Walid Benabderrahmane, Louis-Noël Pouchet, Albert Cohen, and Cédric Bastoul. 2010. The Polyhedral Model is More Widely Applicable than You Think. In Proceedings of the 19th Joint European Conference on Theory and Practice of Software, International Conference on Compiler Construction.
[7] Uday Bondhugula, Albert Hartono, J. Ramanujam, and P. Sadayappan. 2008. A Practical Automatic Polyhedral Program Optimization System. In ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI).
[8] Cerebras. 2025. Cerebras Wafer Scale Engine. https://www.cerebras.ai/chip
[9] Rohit Chandra, Leo Dagum, David Kohr, Ramesh Menon, Dror Maydan, and Jeff McDonald. 2001. Parallel Programming in OpenMP. Morgan Kaufmann.
[10] Prasanth Chatarasi, Hyoukjun Kwon, Angshuman Parashar, Michael Pellauer, Tushar Krishna, and Vivek Sarkar. 2021. Marvel: A Data-centric Approach for Mapping Deep Learning Operators on Spatial Accelerators. ACM Transactions on Architecture and Code Optimization (TACO) 19, 1 (2021), 1–26.
[11] Jason Cong, Bin Liu, Stephen Neuendorffer, Juanjo Noguera, Kees Vissers, and Zhiru Zhang. 2011. High-Level Synthesis for FPGAs: From Prototyping to Deployment. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 30, 4 (2011), 473–491.
[12] Microsoft Corporation. 2025. Triton-shared. https://github.com/microsoft/triton-shared
[13] Advanced Micro Devices. 2025. AI Engine. https://www.amd.com/en/products/adaptive-socs-and-fpgas/technologies/ai-engine.html
[14] Advanced Micro Devices. 2025. MLIR-AIE. https://xilinx.github.io/mlir-aie/
[15] Advanced Micro Devices. 2025. ROCr. https://rocm.docs.amd.com/projects/ROCR-Runtime/en/latest/
[16] Advanced Micro Devices. 2025. Ryzen 7 7840U. https://www.amd.com/en/products/processors/laptop/ryzen/7000-series/amd-ryzen-7-7840u.html
[17] Advanced Micro Devices. 2025. XRT. https://xilinx.github.io/XRT/master/html/index.html
[18] Venmugil Elango, Norm Rubin, Mahesh Ravishankar, Hariharan Sandanagobalane, and Vinod Grover. 2018. Diesel: DSL for Linear Algebra and Neural Net Computations on GPUs. In ACM SIGPLAN International Workshop on Machine Learning and Programming Languages.
[19] T. Hariprasad, G. Vidhyagaran, K. Seenu, and Chandrasegar Thirumalai. 2017. Software Complexity Analysis using Halstead Metrics. In International Conference on Trends in Electronics and Informatics (ICEI).
[20] Erika Hunhoff, Joseph Melber, Kristof Denolf, Andra Bisca, Samuel Bayliss, Stephen Neuendorffer, Jeff Fifield, Jack Lo, Pranathi Vasireddy, Phil James-Roxby, and Eric Keller. 2025. Efficiency, Expressivity, and Extensibility in a Close-to-Metal NPU Programming Interface. In IEEE International Symposium on Field-Programmable Custom Computing Machines (FCCM).
[21] INTEL. 2025. Intel NPU. https://edc.intel.com/content/www/us/en/design/products/platforms/details/arrow-lake-s/core-ultra-200s-series-processors-datasheet-volume-1-of-2/intel-neural-processing-unit-intel-npu/
[22] Geonhwa Jeong, Gokcen Kestor, Prasanth Chatarasi, Angshuman Parashar, Po-An Tsai, Sivasankaran Rajamanickam, Roberto Gioiosa, and Tushar Krishna. 2021. Union: A Unified HW-SW Co-design Ecosystem in MLIR for Evaluating Tensor Operations on Spatial Accelerators. In International Conference on Parallel Architectures and Compilation Techniques (PACT).
[23] Norm Jouppi, George Kurian, Sheng Li, Peter Ma, et al. 2023. TPU v4: An Optically Reconfigurable Supercomputer for Machine Learning with Hardware Support for Embeddings. In Annual International Symposium on Computer Architecture.
[24] Michele Lacchia. 2025. Radon.
https://github.com/rubik/radon/tree/master
[25] Chris Lattner, Mehdi Amini, Uday Bondhugula, Albert Cohen, Andy Davis, Jacques Pienaar, River Riddle, Tatiana Shpeisman, Nicolas Vasilache, and Oleksandr Zinenko. 2021. MLIR: Scaling Compiler Infrastructure for Domain Specific Computation. In IEEE/ACM International Symposium on Code Generation and Optimization (CGO).
[26] Chris Lattner and Jacques Pienaar. 2019. MLIR Primer: A Compiler Infrastructure for the End of Moore's Law.
[27] William S. Moses, Lorenzo Chelini, Ruizhe Zhao, and Oleksandr Zinenko. 2021. Polygeist: Raising C to Polyhedral MLIR. In International Conference on Parallel Architectures and Compilation Techniques (PACT).
[28] Erdal Mutlu, Ruiqin Tian, Bin Ren, Sriram Krishnamoorthy, Roberto Gioiosa, Jacques Pienaar, and Gokcen Kestor. 2022. COMET: A Domain-Specific Compilation of High-Performance Computational Chemistry. In Languages and Compilers for Parallel Computing.
[29] NVIDIA. 2020. CUDA, release: 10.2.89. https://developer.nvidia.com/cuda-toolkit
[30] OpenAI. 2025. TRITON. https://triton-lang.org/main/index.html
[31] Raghu Prabhakar, Ram Sivaramakrishnan, Darshan Gandhi, Yun Du, et al. 2024. SambaNova SN40L: Scaling the AI Memory Wall with Dataflow and Composition of Experts. In IEEE/ACM International Symposium on Microarchitecture (MICRO).
[32] Qualcomm. 2025. Qualcomm AI Engine. https://www.qualcomm.com/products/technology/processors/ai-engine
[33] Alejandro Rico, Satyaprakash Pareek, Javier Cabezas, David Clarke, et al. 2024. AMD XDNA™ NPU in Ryzen™ AI Processors. IEEE Micro 44, 6 (2024), 73–82.
[34] André Rösti and Michael Franz. 2025. Unlocking the AMD Neural Processing Unit for ML Training on the Client Using Bare-Metal-Programming Tools. In IEEE International Symposium on Field-Programmable Custom Computing Machines (FCCM).
[35] Alexander C. Rucker, Shiv Sundram, Coleman Smith, Matthew Vilim, Raghu Prabhakar, Fredrik Kjølstad, and Kunle Olukotun. 2024. Revet: A Language and Compiler for Dataflow Threads. In International Symposium on High-Performance Computer Architecture (HPCA).
[36] Amit Sabne. 2020. XLA: Compiling Machine Learning for Peak Performance. https://research.google/pubs/xla-compiling-machine-learning-for-peak-performance/
[37] Gagandeep Singh. 2022. Designing, Modeling, and Optimizing Data-intensive Computing Systems. arXiv preprint arXiv:2208.08886 (2022).
[38] Gagandeep Singh, Mohammed Alser, Damla Senol Cali, Dionysios Diamantopoulos, Juan Gómez-Luna, Henk Corporaal, and Onur Mutlu. 2021. FPGA-based Near-memory Acceleration of Modern Data-intensive Applications. IEEE Micro 41, 4 (2021), 39–48.
[39] Gagandeep Singh, Mohammed Alser, Kristof Denolf, Can Firtina, Alireza Khodamoradi, Meryem Banu Cavlak, Henk Corporaal, and Onur Mutlu. 2024. RUBICON: A Framework for Designing Efficient Deep Learning-based Genomic Basecallers. Genome Biology 25, 1 (2024), 49.
[40] Gagandeep Singh, Dionysios Diamantopoulos, Christoph Hagleitner, Juan Gomez-Luna, Sander Stuijk, Onur Mutlu, and Henk Corporaal. 2020. NERO: A Near High-bandwidth Memory Stencil Accelerator for Weather Prediction Modeling. In 2020 30th International Conference on Field-Programmable Logic and Applications (FPL). IEEE, 9–17.
[41] Gagandeep Singh, Alireza Khodamoradi, Kristof Denolf, Jack Lo, Juan Gomez-Luna, Joseph Melber, Andra Bisca, Henk Corporaal, and Onur Mutlu. 2023. SPARTA: Spatial Acceleration for Efficient and Scalable Horizontal Diffusion Weather Stencil Computation.
In Proceedings of the 37th International Conference on Supercomputing. 463–476.
[42] Gagandeep Singh, Dionysios Diamantopoulos, Juan Gómez-Luna, Sander Stuijk, Henk Corporaal, and Onur Mutlu. 2022. LEAPER: Fast and Accurate FPGA-based System Performance Prediction via Transfer Learning. In 2022 IEEE 40th International Conference on Computer Design (ICCD). IEEE, 499–508.
[43] Jianlin Su, Murtadha Ahmed, Yu Lu, Shengfeng Pan, Wen Bo, and Yunfeng Liu. 2024. RoFormer: Enhanced Transformer with Rotary Position Embedding. Neurocomputing 568, C (2024).
[44] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, et al. 2023. Llama 2: Open Foundation and Fine-Tuned Chat Models. arXiv:2307.09288 [cs.CL] https://arxiv.org/abs/2307.09288
[45] Jie Wang, Licheng Guo, and Jason Cong. 2021. AutoSA: A Polyhedral Compiler for High-Performance Systolic Arrays on FPGA. In ACM/SIGDA International Symposium on Field-Programmable Gate Arrays.
[46] Hanchen Ye, Hyegang Jun, and Deming Chen. 2024. HIDA: A Hierarchical Dataflow Compiler for High-level Synthesis. In ACM International Conference on Architectural Support for Programming Languages and Operating Systems.
[47] Size Zheng, Renze Chen, Anjiang Wei, Yicheng Jin, Qin Han, Liqiang Lu, Bingyang Wu, Xiuhong Li, Shengen Yan, and Yun Liang. 2022. AMOS: Enabling Automatic Mapping for Tensor Computations on Spatial Accelerators with Hardware Abstraction. In Annual International Symposium on Computer Architecture.
[48] Jinming Zhuang, Shaojie Xiang, Hongzheng Chen, Niansong Zhang, Zhuoping Yang, Tony Mao, Zhiru Zhang, and Peipei Zhou. 2025. ARIES: An Agile MLIR-Based Compilation Flow for Reconfigurable Devices with AI Engines. In ACM/SIGDA International Symposium on Field Programmable Gate Arrays.

A Input IR to MLIR-AIR's Vector-add Example

Listing 7. Element-wise vector add, described in upstream MLIR dialects.

func.func @eltwise_add(%arg0: memref<65536xf32>, %arg1: memref<65536xf32>, %arg2: memref<65536xf32>) {
  %c65536 = arith.constant 65536 : index
  %c2048 = arith.constant 2048 : index
  %c1024 = arith.constant 1024 : index
  %c1 = arith.constant 1 : index
  %c2 = arith.constant 2 : index
  %c0 = arith.constant 0 : index
  // Two parallel workers; each processes a 1024-element slice per step.
  scf.parallel (%arg3) = (%c0) to (%c2) step (%c1) {
    %alloc = memref.alloc() : memref<1024xf32, 2>
    %alloc_0 = memref.alloc() : memref<1024xf32, 2>
    %alloc_1 = memref.alloc() : memref<1024xf32, 2>
    %base = arith.muli %arg3, %c1024 : index
    scf.for %arg4 = %c0 to %c65536 step %c2048 {
      // Loop-dependent offset: worker %arg3 takes the slice at %arg4 + %arg3 * 1024.
      %off = arith.addi %arg4, %base : index
      %subview = memref.subview %arg0[%off] [1024] [1] : memref<65536xf32> to memref<1024xf32, strided<[1], offset: ?>>
      memref.copy %subview, %alloc : memref<1024xf32, strided<[1], offset: ?>> to memref<1024xf32, 2>
      %subview_2 = memref.subview %arg1[%off] [1024] [1] : memref<65536xf32> to memref<1024xf32, strided<[1], offset: ?>>
      memref.copy %subview_2, %alloc_0 : memref<1024xf32, strided<[1], offset: ?>> to memref<1024xf32, 2>
      scf.for %arg5 = %c0 to %c1024 step %c1 {
        %0 = memref.load %alloc[%arg5] : memref<1024xf32, 2>
        %1 = memref.load %alloc_0[%arg5] : memref<1024xf32, 2>
        %2 = arith.addf %0, %1 : f32
        memref.store %2, %alloc_1[%arg5] : memref<1024xf32, 2>
      }
      %subview_3 = memref.subview %arg2[%off] [1024] [1] : memref<65536xf32> to memref<1024xf32, strided<[1], offset: ?>>
      memref.copy %alloc_1, %subview_3 : memref<1024xf32, 2> to memref<1024xf32, strided<[1], offset: ?>>
    }
    // Buffers are released once after the loop, matching the single allocation above.
    memref.dealloc %alloc : memref<1024xf32, 2>
    memref.dealloc %alloc_0 : memref<1024xf32, 2>
    memref.dealloc %alloc_1 : memref<1024xf32, 2>
    scf.reduce
  }
  return
}
B AIR Python Bindings to MLIR-AIR's Vector-add Example

Listing 8. Element-wise vector add, described in AIR's Python bindings.

@module_builder
def build_module(n, tile_n, np_dtype_in):
    a_size = [n]
    b_size = a_size
    out_size = a_size
    xrt_dtype_in = type_mapper(np_dtype_in)
    num_tiles = 2
    assert n % (tile_n * num_tiles) == 0

    # L3 MemRefTypes
    l3memrefTy = MemRefType.get(a_size, xrt_dtype_in)

    # L1 MemRefTypes
    l1MemrefTy = MemRefType.get(
        shape=[tile_n],
        element_type=xrt_dtype_in,
        memory_space=IntegerAttr.get(T.i32(), MemorySpace.L1),
    )

    @FuncOp.from_py_func(l3memrefTy, l3memrefTy, l3memrefTy)
    def eltwise_add(arg0, arg1, arg2):
        @herd(
            name="herd_0",
            sizes=[1, num_tiles],
            operands=[arg0, arg1, arg2],
        )
        def herd_body(_tx, _ty, _sx, _sy, _l3_a, _l3_b, _l3_c):
            l1_a_data = AllocOp(l1MemrefTy, [], [])
            l1_b_data = AllocOp(l1MemrefTy, [], [])
            l1_out_data = AllocOp(l1MemrefTy, [], [])
            for _l_ivx in range_(0, n, tile_n * num_tiles):
                # Each worker (_ty) offsets into its own tile_n-sized slice:
                # offset = _l_ivx + _ty * tile_n.
                offset_map = AffineMap.get(0, 2, [
                    AffineExpr.get_add(
                        AffineSymbolExpr.get(0),
                        AffineExpr.get_mul(
                            AffineSymbolExpr.get(1),
                            AffineConstantExpr.get(tile_n),
                        ),
                    )
                ])
                offset = affine_apply(offset_map, [_l_ivx, _ty])
                dma_memcpy_nd(l1_a_data, _l3_a,
                    src_offsets=[offset],
                    src_sizes=[tile_n],
                    src_strides=[1],
                )
                dma_memcpy_nd(l1_b_data, _l3_b,
                    src_offsets=[offset],
                    src_sizes=[tile_n],
                    src_strides=[1],
                )
                for i in range_(tile_n):
                    val_a = load(l1_a_data, [i])
                    val_b = load(l1_b_data, [i])
                    val_out = arith.addf(val_a, val_b)
                    store(val_out, l1_out_data, [i])
                    yield_([])
                dma_memcpy_nd(_l3_c, l1_out_data,
                    dst_offsets=[offset],
                    dst_sizes=[tile_n],
                    dst_strides=[1],
                )
                yield_([])
            # Buffers are released once, after the tiling loop completes.
            DeallocOp(l1_a_data)
            DeallocOp(l1_b_data)
            DeallocOp(l1_out_data)
From Loop Nests to Silicon: Mapping AI Workloads onto AMD NPUs with MLIR-AIR

ERWEI WANG, SAMUEL BAYLISS, ANDRA BISCA, ZACHARY BLAIR, SANGEETA CHOWDHARY, KRISTOF DENOLF, JEFF FIFIELD, BRANDON FREIBERGER, ERIKA HUNHOFF, PHIL JAMES-ROXBY, JACK LO, JOSEPH MELBER, STEPHEN NEUENDORFFER, EDDIE RICHTER, ANDRÉ RÖSTI, JAVIER SETOAIN, GAGANDEEP SINGH, ENDRI TAKA, PRANATHI VASIREDDY, ZHEWEN YU, NIANSONG ZHANG, and JINMING ZHUANG, Research and Advanced Development, AMD, USA

General-purpose compilers abstract away parallelism, locality, and synchronization, limiting their effectiveness on modern spatial architectures. As modern computing architectures increasingly rely on fine-grained control over data movement, execution order, and compute placement for performance, compiler infrastructure must provide explicit mechanisms for orchestrating compute and data to fully exploit such architectures. We introduce MLIR-AIR, a novel, open-source compiler stack built on MLIR that bridges the semantic gap between high-level workloads and fine-grained spatial architectures such as AMD's NPUs. MLIR-AIR defines the AIR dialect, which provides structured representations for asynchronous and hierarchical operations across compute and memory resources. AIR primitives allow the compiler to orchestrate spatial scheduling, distribute computation across hardware regions, and overlap communication with computation without relying on ad hoc runtime coordination or manual scheduling. We demonstrate MLIR-AIR's capabilities through two case studies: matrix multiplication and the multi-head attention block from the LLaMA 2 model. For matrix multiplication, MLIR-AIR achieves up to 78.7% compute efficiency and generates implementations with performance almost identical to state-of-the-art, hand-optimized matrix multiplication written using the lower-level, close-to-metal MLIR-AIE framework. For multi-head attention, we demonstrate that the AIR interface supports fused implementations using approximately 150 lines of code, enabling tractable expression of complex workloads with efficient mapping to spatial hardware. MLIR-AIR transforms high-level structured control flow into spatial programs that efficiently utilize the compute fabric and memory hierarchy of an NPU, leveraging asynchronous execution, tiling, and communication overlap through compiler-managed scheduling.

Additional Key Words and Phrases: Compiler, dataflow architecture, hardware acceleration, machine learning, reconfigurable technology, spatial architecture.

Authors' Contact Information: Erwei Wang; Samuel Bayliss; Andra Bisca; Zachary Blair; Sangeeta Chowdhary; Kristof Denolf; Jeff Fifield; Brandon Freiberger, brandon.freiberger@amd.com; Erika Hunhoff; Phil James-Roxby; Jack Lo, jack.lo@amd.com; Joseph Melber; Stephen Neuendorffer; Eddie Richter; André Rösti; Javier Setoain; Gagandeep Singh; Endri Taka; Pranathi Vasireddy; Zhewen Yu; Niansong Zhang; Jinming Zhuang, Research and Advanced Development, AMD, San Jose, USA.

1 Introduction
Modern computing architectures are increasingly spatial and asynchronous. They consist of many distributed compute units, partitioned memory hierarchies, and message-passing interconnects. Achieving high performance on such architectures requires precise control over computation, data movement, and execution of tasks.

Mainstream CPUs and GPUs rely on a thread-centric parallel compute model that assumes many threads will be scheduled onto a limited set of hardware resources. Programmers describe large numbers of threads, and the system (hardware for GPUs, software for CPUs) maps them to available compute units. This model has scaled effectively for decades as semiconductor technology has
delivered more cores and vector units. Hardware support for more concurrent threads improves throughput and reduces compute latency.

The effectiveness of the thread-centric model depends on two assumptions: (1) that threads operate independently, and (2) that shared resources, particularly memory bandwidth, scale with compute. When these assumptions break down, due to synchronization, data dependencies, or resource contention, execution stalls. Hardware schedulers, particularly in GPUs, respond by context-switching to another group of threads (wavefronts) to maintain forward progress. However, GPUs do not guarantee independent forward progress across all threads. Their schedulers may allocate compute resources to a subset of runnable wavefronts, while deferring others indefinitely. This behavior introduces nondeterminism and limits software visibility into execution dynamics, an increasingly critical shortcoming for latency-sensitive or tightly coupled workloads.

Despite these limitations, the thread-centric model remains widely adopted because it simplifies software development: applications are decomposed into independent tasks that communicate through shared memory, while the hardware handles data reuse and execution interleaving. However, this abstraction comes at a significant cost. Maintaining the illusion of shared memory and uniform execution requires dense interconnects, deep cache hierarchies, and complex runtime mechanisms, all of which consume energy and silicon area, and increase design complexity.

An emerging alternative is to return control to the software. A programming model that enables explicit expression of task placement, scheduling order, and inter-task data sharing allows software to better exploit spatial and temporal locality. Rather than relying on implicit reuse via caches, such a model supports deliberate coordination between compute units that are scheduled close in space and time, reducing hardware overhead while improving predictability and efficiency.

To this end, we introduce AIR, a compiler intermediate representation (IR) that exposes spatial and temporal execution structure as explicit, programmable constructs. AIR captures high-level, user-described data movement and compute scheduling intent, including concurrent execution. Implemented as a multi-level intermediate representation (MLIR) [25] dialect, AIR bridges the gap between high-level programs and spatial architectures. It supports transformations that lower structured control flow into statically scheduled spatial programs, optimized for GPUs and domain-specific neural processing units (NPUs).

We demonstrate AIR's effectiveness on two representative AI workloads: matrix multiplication and the multi-head attention (MHA) block from the LLaMA 2 model [44]. Our results demonstrate that AIR produces spatially distributed schedules that overlap communication with computation, exploit locality, and minimize runtime control overhead.

1.1 Contributions
The rapid advance of artificial intelligence (AI) models, algorithms, and accelerators has driven the adoption of diverse programming tools.
Some tools focus on end-user productivity, while others are aimed at optimizing the efficient implementation of AI applications on an increasingly diverse range of specialized accelerators. MLIR is a flexible compiler abstraction designed to bridge this gap by allowing progressive lowering of designs through an extensible set of dialects [26]. Users can compose operations from a range of dialects and, in general, select transformations to achieve the goal of lowering high-level programmer intent to low-level optimized implementation.¹

AIR is an MLIR dialect that contains operations to express compute scheduling and memory allocation in spatial architectures. It operates at a level of abstraction that enables portable expression of compute kernels by avoiding explicit support for vendor-specific features. Cross-generational portability and performance scalability are supported by splitting the responsibilities for scheduling compute tasks between the compiler and runtime. This enables the compiler to define tightly coupled and concurrent herds of execution, while giving the runtime flexibility to schedule those herds on devices whose sizes vary within and across generations of accelerator hardware. A design expressed using the AIR dialect can use a vendor-specific lowering for implementation on an accelerator, as the operations included in MLIR-AIR are intended to support common features that we observe emerging in a class of spatial hardware accelerators. Programmers or compilers can use AIR to express compute groupings and data allocations spatially, and see those decisions honored in the subsequent lowering to vendor-specific implementations.

¹In some applications, MLIR is used to analyze and raise the abstraction of operations, rather than lower them for execution.

In sum, this work makes the following key contributions:
• We present AIR, a new IR implemented as an MLIR dialect that exposes spatial and temporal structure in programs. AIR enables the compiler to coordinate computation, data movement, and synchronization, capabilities that traditional thread-centric models obscure or defer to hardware. AIR is developed as a set of spatially aware abstractions that enable lowerings from high-level programs to tiled spatial hardware. AIR models spatial partitioning with air.herd, point-to-point communication with air.channel, and explicit synchronization with air.token. These abstractions enable the compiler to control spatial execution, without compelling the user to drop down to lower levels of vendor-specific abstractions.
• We build a complete end-to-end compiler flow that uses AIR to lower workloads written using high-level Python frameworks to low-level code for AMD NPUs. MLIR-AIR compiles structured loop nests into efficient spatial programs dispatched using the NPU runtime.²
• We demonstrate MLIR-AIR's effectiveness on two representative AI workloads. MLIR-AIR produces statically scheduled programs that exploit locality, parallelism, and pipelining on tiled hardware.

MLIR-AIR is open source and modular by design. It integrates into, and composes with, other dialects in the MLIR ecosystem and provides a foundation for targeting a wide range of spatial accelerators beyond AMD's NPU.

²https://github.com/Xilinx/mlir-air

2 Background
This section surveys recent trends in spatial hardware that inform the architectural design of modern accelerators, which motivate key requirements on modern compilers.
2.1 Trends in Spatial Hardware
In Table 1, we describe six key trends in efficient compute hardware. Taken together, these trends define a general direction in parallel hardware design, where efficient data movement is the driving design philosophy. Control over where compute operations are dispatched and where data is allocated is fundamental in such systems. Emphasizing the importance of physical placement in these systems, we refer to this direction in hardware design as a movement towards spatial hardware. These six hardware trends collectively motivate a corresponding set of compiler features necessary to effectively target spatial hardware.

Trend                       | Description
Complex System Hierarchy    | Design reuse introduces arbitrary boundaries in systems.
Dispatch Placement          | Schedulers guaranteeing locality enable resource-sharing.
Multi-root Memory Hierarchy | Devices have independent, physically distant memory channels.
Peer Memory Movement        | Efficient designs are not limited to data transfer through main memory.
Data Movement Offload       | Specialized DMAs coordinate efficient data movement.
Asynchronous Execution      | Distinct hardware scheduled to execute independently via dependencies.
Table 1. Six hardware trends of spatial architectures.

2.1.1 Complex System Hierarchy
Design reuse is extensive in semiconductor manufacturing because of the high cost of verification. Larger chip designs are often composed of multiple chiplets that may themselves be composed of pre-verified hardware building blocks. Non-uniform performance for similar workloads can occur if a workload is assigned resources that cross spatial or hierarchical boundaries, or if the workload uses components shared at a cluster level. Interactions between spatially arranged components can be positive (e.g., components within a cluster share a level of cache hierarchy) or negative (e.g., components arbitrate for access to a limited number of ports). In order to maximize performance and minimize negative interactions, compilers and schedulers must be aware of spatial boundaries within the chip.

2.1.2 Dispatch Placement
Compilers and schedulers share control of decisions over placement within a spatial architecture. As such, in order to holistically optimize placement, both compilers and runtimes must be able to control or query where a scheduler allocates compute or where a memory allocator places memory. This knowledge or control would allow a compiler optimized for spatial architectures to note the desired spatial affinity of dispatched compute elements as optional or mandatory constraints on the behavior of the runtime scheduler.

2.1.3 Multi-root Memory Hierarchy
Traditional compilers treat main memory as a unifying single root of coherency. However, many modern devices use multiple independent memory channels to increase aggregate bandwidth. The transparent hardware-based interleaving of data across these channels offers one simple mechanism for accessing this bandwidth, but in a large device, it is likely that there is a non-uniform energy and latency cost for access to these separate memory channels. These NUMA effects have previously been observed in large multi-socket CPU systems, but compilers and runtimes now have a role to play in ensuring physical affinity between memory allocation within channels and compute scheduling, even within a single package.

2.1.4 Peer Memory Movement
CPUs and GPUs incorporate large amounts of on-chip SRAM memory that is used as caches and/or scratchpads.
Data transfer between on-chip memories can occur implicitly in the case of coherent caches, or may be explicitly orchestrated. Effective use of on-chip memories can offer lower interconnect energy and achieve higher realized bandwidth compared to when data is fetched multiple times from external memory.

2.1.5 Data Movement Offload
GPUs and NPUs increasingly feature Direct Memory Access (DMA) engines capable of offloading complex address generation from the compute datapath to improve data movement efficiency. This enables efficient pipelined use of the interconnect fabric as well as in-line reshaping and transposition of data for efficient computation.

2.1.6 Asynchronous Execution
Memory and communication operations often have considerable latency. To achieve the most efficient performance, independent actors in the system (e.g., DMAs, compute units, etc.) are kept busy, using techniques to avoid stalling during the round-trip time necessary to synchronize two concurrent components. Increasingly sophisticated hardware schedulers close to those actors interpret explicitly encoded dependencies and select the next suitable thread of work for the actor to perform.

2.2 Identified Needs in Compilers
Taken together, these trends in hardware construction motivate a desire for a software model that enables user control over scheduling and memory allocation. Specifically, we see a need for a framework that:
(1) Exposes hardware controls over memory allocation, allowing users to allocate memory in different levels of the memory hierarchy and in different non-uniform memory access (NUMA) domains at each level of the memory hierarchy.
(2) Exposes hardware controls over compute placement, enabling users to describe units of compute that should be scheduled concurrently, enabling tightly-coupled compute elements to optimize data-sharing and local synchronization.
(3) Separates data movement from computation explicitly in the IR, enabling independent scheduling and optimization of each. This decoupling allows the compiler to overlap communication with computation, and apply architecture-specific optimizations when supported by hardware.
(4) Enables dependency resolution close to hardware to minimize the time taken to observe completion of a predecessor operation. This can be achieved by expressing dependencies explicitly, and supporting lowerings that target platform-specific synchronization capabilities.

3 Related Work
The past decade has seen rapid evolution in accelerator architectures for machine learning. Many of these accelerators share the key characteristics outlined in Section 2.1: explicit spatial compute and memory hierarchies, high-throughput interconnects, and programmable DMA subsystems. Examples include Google's TPU [23], AMD's and Intel's Neural Processing Units (NPUs) [21, 33], Qualcomm's AI Engine [32], GroqChip [1], Cerebras' Wafer Scale Engine [8], and platforms from SambaNova [31]. A defining common feature of these accelerators is the spatial allocation of compute kernels to fixed hardware regions (e.g., tiles or cores), where data is communicated via explicitly programmed on-chip data-paths, often decoupled from compute [39, 41]. While architectural designs vary, a common challenge remains: enabling compilers to map high-level programs to these platforms by managing spatial locality, data movement, and synchronization [37].
In response, the compiler community has developed a range of spatially aware compilation frameworks that aim to bridge the gap between abstract algorithm specification and low-level hardware control. These works largely focus on flexible frontends for compiler frameworks, compiler transformations that enable efficient computation, compiler techniques for targeting a broad range of accelerators, or any combination thereof. The remainder of this section highlights notable works in each category.

Frontends for Accelerator Programming. There is a large diversity of frontends for accelerator programming frameworks. IRON provides a close-to-metal interface that allows detailed and customized performance tuning [20]. In contrast, frontends that capture intent at a higher level of abstraction are useful for flexibility, reusability, and quick adaptation to new algorithms and emerging programming models. Consequently, MLIR-AIR and other works have focused on this higher level of abstraction. For instance, Union introduces a unified hardware-software co-design ecosystem within the MLIR infrastructure [22], which supports TensorFlow, ONNX, and COMET [28] as inputs. Similarly, SODA-OPT supports various high-level languages as inputs, including Python, C++, and Fortran [2]. Both Union and SODA-OPT use MLIR internally to increase front-end flexibility. XLA, while originally a standalone compiler for TensorFlow and JAX, has increasingly adopted MLIR components to enhance its modularity and extensibility [36]. ARIES [48] provides an MLIR-based flow targeting AMD AI Engines, with a focus on providing a tile-granularity programming interface. Unlike these frameworks, MLIR-AIR defines a spatially explicit intermediate representation that directly models hardware concurrency, locality, and asynchronous execution inline in the MLIR IR, uniquely striking a balance between fine-grained compiler-managed scheduling and frontend and backend flexibility.

Polyhedral Compilation for Mapping Tasks to Resources. The extraction and spatial mapping of parallelism implicit in algorithms are central to delivering high quality of results (QoR), especially for accelerators, which are often composed of many parallel compute units. The polyhedral model [6] provides a formal framework for analyzing and transforming loop nests through affine access relations and schedule functions. Early efforts in this space include Vivado High-Level Synthesis, which demonstrated how affine loop transformations could be applied to high-level code to generate efficient FPGA implementations [11, 38, 40, 42]. AutoSA advanced this direction by introducing a full-stack polyhedral compilation flow targeting systolic arrays on FPGAs [45]. It applies space-time transformations and loop tiling to generate parallel accelerator kernels that maximize throughput while respecting hardware resource constraints. More recent tools extend these capabilities across broader architectural targets. Tools like Diesel [18] and PLUTO [7] utilize the polyhedral model to automatically parallelize and optimize loop nests across multiple hardware architectures, including multicore CPUs, GPUs, and FPGAs.
Polygeist further enhances the applicability of polyhedral compilation by translating C to MLIR's Affine and SCF dialects, enabling integration with modern compiler infrastructure and reuse of polyhedral analyses within MLIR-based workflows [27]. In contrast, MLIR-AIR leverages polyhedral analyses not only for loop transformations but also to guide asynchronous scheduling and data movement, integrating these capabilities within a structured, token-based IR.

Compiler Frameworks for Diverse Spatial Accelerators. Alongside the polyhedral model, many tools such as Marvel [10] and AMOS [47] offer plug-and-play mechanisms for diverse spatial accelerator architectures. By abstracting device-specific optimizations and code generation, these tools focus on compute patterns and memory hierarchies common to spatial accelerators, facilitating seamless integration across diverse hardware generations and platforms. Moreover, when targeting reconfigurable FPGA devices, frameworks like HIDA [46] and Revet [35] enable automatic generation of Register Transfer Level (RTL) code, streamlining the hardware design process without requiring extra manual effort. In contrast, MLIR-AIR emphasizes explicit modeling of spatial scheduling and asynchronous execution within the IR itself, enabling precise control over task placement without relying on external runtime coordination or fixed hardware templates.

4 MLIR-AIR: A Novel Compiler Framework for Spatial Architectures
Modern spatial accelerators require the compiler to do more than expose parallelism: they require explicit control over placement, communication, and execution order. MLIR-AIR is built to provide that control natively.

MLIR-AIR is a novel, platform-agnostic compiler framework designed to target a wide range of spatial architectures. In this work, we focus on its instantiation for AMD NPUs, a tiled architecture optimized for high-throughput and low-latency AI computations. As shown in Figure 1, the AMD NPU architecture consists of a two-dimensional grid of compute, memory, and shim tiles. Shim tiles form the interfacing row which connects the NPU to host memory and I/O systems. These are the only tiles that can initiate a memory transaction to the SoC memory system. Memory tiles, located adjacent to the shim tiles, provide shared memory resources accessible by compute tiles throughout the array. Compute tiles comprise the majority of the array, each integrating local memory buffers with scalar and vector engines. Every tile features a dedicated DMA engine for block data transfers (represented in buffer descriptors, or BDs) over a reconfigurable streaming interconnect. This enables localized compute-memory communication via the streaming interconnect, and peer-to-peer data movement via either the dedicated cascade connections between cores or local memory shared by neighbors.

[Figure: a 2D array of core tiles, memory tiles, and shim tiles connected by switch boxes over a streaming interconnect.]
Fig. 1. AMD NPU architecture.

The absence of caches, either for data or instructions, and the emphasis on computing on local tiles of data (eliminating memory access latency variation) means the architecture is characterized by extremely deterministic behavior. Compilers designed to tile up work to fit into local memories can use this predictable behavior to construct efficient dataflow, achieving high utilization.
MLIR-AIR is designed to bridge high-level algorithmic representations with the low-level spatial execution requirements of modern accelerators, such as the AMD NPU. It provides the abstractions and transformations necessary to translate structured programs into tiled, explicitly scheduled implementations. Figure 2 illustrates the MLIR-AIR compilation flow for the AMD NPU, highlighting its integration with MLIR's ecosystem and spatial backend tools.

Algorithm-Level Programming Interface. At the compiler frontend (1), MLIR-AIR interfaces with high-level AI frameworks such as PyTorch, TensorFlow, and Triton through MLIR-compatible frontends including Torch-MLIR, TOSA, and Triton-Shared. These frameworks emit programs using structured MLIR representations that preserve loop nests, tensor operations, and affine indexing. Frontend dialects are lowered into MLIR common components (2), including the structured control flow (SCF) and linear algebra (LinAlg) dialects, to provide an algorithm-friendly interface. These dialects offer C-like generic programming abstractions that preserve loop structure and tensor semantics, making AI workloads analyzable by non-domain experts. Unlike traditional low-level compilation flows that are tightly coupled to specific frontends or hardware backends, MLIR-AIR decouples emerging AI frontends from new accelerator architectures, defining a common compute model fit for the new era of spatial hardware.

[Figure: MLIR-AIR stack, from frontends (PyTorch, TensorFlow, Triton via Torch-MLIR, TOSA, IREE, and Triton-Shared), through MLIR community dialects (tiled SCF, LinAlg, MemRef), to the AIR dialect (segment, herd, channel) with asynchronous analysis and optimizations, lowered via MLIR-AIE (Peano/Vitis) to NPU configuration, DMAs, and a runtime sequence executed on the host (XRT/ROCr).]
Fig. 2. MLIR-AIR stack overview.

Representation of Asynchronous Parallelism. At the core of MLIR-AIR is the AIR dialect (3), a set of compiler abstractions that explicitly model hardware scheduling, asynchronous execution, and interactions with the memory hierarchy. Unlike conventional IRs that assume shared memory and centralized scheduling, AIR models the constraints and opportunities of spatial systems directly in the compiler. MLIR-AIR captures fine-grained asynchronous parallelism through an asynchronous control and dataflow graph (ACDG), a directed acyclic graph that encodes MLIR operation-level dependencies sequencing computation and data movement. The ACDG is embedded directly in the MLIR-AIR IR via the production and consumption of air.token values, which are static single assignment (SSA) values encoding the read-after-write (RAW), write-after-read (WAR), and write-after-write (WAW) dependency types. This token-based mechanism integrates with MLIR's native SSA dominance and verification infrastructure, enabling automatic correctness checks and transformation legality throughout the compilation process.

ACDG constructs are composable with MLIR's structured control flow, supporting structured parallelism through tokens yielded by scf.parallel and AIR spatial operations (see Section 5). This allows explicit encoding of both inter- and intra-loop parallelism. Furthermore, loop-carried dependencies and pipelining opportunities are represented explicitly via tokens passed through scf.for iteration arguments, enabling the compiler to reason about and optimize fine-grained execution schedules, including pipeline stages and resource contention.
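To illustrate this token threading, consider the following hedged sketch of a loop-carried air.token passed via scf.for iteration arguments (simplified syntax in the style of the listings in this paper; the constants, buffer, and channel declarations are illustrative and elided):

// Hedged sketch: a loop-carried air.token threaded through scf.for.
%t0 = air.wait_all async                        // seed token with no dependencies
%t_done = scf.for %i = %c0 to %c8 step %c1
    iter_args(%t = %t0) -> (!air.async.token) {
  // The put for iteration i is only dispatched once the token carried
  // from iteration i-1 has signaled completion (a RAW edge in the ACDG).
  %t1 = air.channel.put async [%t] @chan0 (%buf)
  scf.yield %t1 : !air.async.token
}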
Decoupled Data Movement Primitives. Unlike conventional memory copy operations that couple source and destination within a single construct, MLIR-AIR introduces decoupled air.channel.put and air.channel.get data movement operations (4) to model unidirectional data transfers localized to their respective memory hierarchies. These operations are linked via a globally declared symbolic air.channel, which abstracts the communication path and enforces backpressure-based synchronization between asynchronous code regions. This decoupling enables fine-grained control over dataflow, allowing communication to be scheduled alongside computation when desired, or independently when beneficial for performance or modularity. The design closely aligns air.channel operations with their target memory scopes, allowing the compiler to reason about and optimize data movement via pattern matching of simple and localized code.

Optimization and Performance Feedback. Beyond compilation, MLIR-AIR enhances the optimization and debugging process by providing execution traces that capture key performance metrics during hardware execution. These traces are visualized using tools like Chrome Tracing or Perfetto UI [3], allowing developers to analyze the runtime parallelism between computation and data movement. This profiling capability enables fine-grained performance evaluation, helping developers identify bottlenecks and inefficiencies in execution.

Platform-agnostic Implementation and Runtime. MLIR-AIR supports integration with multiple hardware implementation backends and runtime systems, enabling platform-agnostic compilation. The final lowered IR is consumed by platform-specific tools such as MLIR-AIE (5) for NPUs [20] or LLVM-based pipelines for CPUs and GPUs, which generate hardware-specific control and dataflow representations. These are subsequently compiled and deployed using runtime frameworks (6) such as XRT [17] or ROCr [15], ensuring compatibility with diverse hardware platforms. This modular backend integration facilitates scalable and efficient deployment while preserving the architectural flexibility of MLIR-AIR across spatially heterogeneous systems.

5 AIR Concepts
The AIR dialect provides the core primitives for expressing spatial execution semantics in MLIR-AIR. These primitives are designed to give the compiler fine-grained control over execution, concurrency, communication, and synchronization at various levels of granularity. These primitives can then be targeted by architecture-specific backends. AIR integrates within existing compilation stacks, and transforms allow designs to be ingested from several different frontends. The AIR dialect is designed to compose with standard MLIR dialects, reusing existing dialects to describe computation and kernel code. This modularity and flexibility allow developers to describe fine-grained control over execution, communication, and memory behavior while maintaining portability by decoupling these abstractions from vendor-specific hardware implementations.

We group the new operations in the AIR dialect into three categories:
• Scheduling Constructs, which express spatial and hierarchical parallelism across compute resources (Section 5.1).
• Data Locality Constructs, which represent explicit and decoupled data transfers aligned with memory hierarchy and DMA affinity (Section 5.2).
• Synchronization Constructs, which capture explicit operation-level and loop-carried dependencies for asynchronous execution and pipelining (Section 5.3).

Together, these abstractions allow AIR to represent concurrency control, memory placement and movement, and synchronization between many concurrent actors as explicit compiler-visible constructs.

5.1 Scheduling Constructs
AIR introduces scheduling constructs that express how computation is distributed and executed across a spatial accelerator. These constructs define task placement, launch behavior, and hardware resource partitioning, forming the foundation of spatial scheduling in AIR. The dialect includes the air.launch, air.segment, and air.herd operations, which are hierarchically composed: an air.launch may contain multiple air.segment operations, each of which may dispatch one or more air.herd operations.

5.1.1 air.launch
The air.launch operation defines a region of computation to be offloaded from the host processor to the accelerator. It is designed as a construct to support portability and scaling by selecting groups of operations whose dispatch may be deferred to a runtime scheduler. It groups together compute, communication, and synchronization operations into a single launch unit, which is scheduled at runtime. The optional iteration space attribute attached to an air.launch operation describes a set of unique instances of the region body that the runtime scheduler is delegated to manage. Those unique instances must be permutable, i.e., a completely parallel schedule is legal, and instances within the air.launch iteration space must not rely on observing the effects of any other instance.

To improve the effectiveness of compile-time optimization, we assume the compiler is free to hand off multiple different variants of the compiled air.launch operation to the runtime, each enabling optimized dispatch of a parameterizable subset of the iteration space. Our lowering for the AMD NPU architecture uses this freedom to offer different opportunistic granularities (e.g., one column or the whole array) for the runtime to schedule work.

air.launch also manages the lifetime of the bound resources that implement the operations hierarchically nested within its body. Once all nested operations within a launch iteration are scheduled and begin execution, the launch is able to release unused resources back to the runtime. This hierarchical management of resources ensures efficient resource utilization, especially when tasks are nested within larger compute tasks.

5.1.2 air.segment
The body of an air.segment encapsulates the reservation of pool(s) of compute resources for use in scheduling the operations nested inside them. Segments can be optionally annotated with architecture-specific attributes describing the pool of resources they are reserving, when targeting backends that benefit from resource-aware scheduling. An architecture might want to define two pools of resources that have physical affinity (e.g., resources in one chiplet), so that the scheduler is guaranteed to dispatch air.herd operations within that segment exclusively using the segment's resources. Segments have an optional grid space. This allows easy replication of resource pools. Segment instances within that space are dispatched concurrently. Other relationships between the scheduling of segments in time and space can be controlled using the synchronization constructs in Section 5.3.
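To make this hierarchy concrete, the following hedged sketch (simplified syntax in the style of Listing 3; the names, sizes, constants, and terminators are illustrative and partially elided) shows an air.launch dispatching a segment that in turn runs a 2 × 2 herd:

// Hedged sketch of the launch > segment > herd hierarchy.
air.launch (%lx) in (%lsize = %c4) {          // 4 runtime-scheduled instances
  air.segment @seg0 {                          // reserves a pool of tiles
    air.herd @herd0 tile (%tx, %ty) in (%sx = %c2, %sy = %c2) {
      // Each of the 2x2 workers executes this body, specialized
      // by its (%tx, %ty) coordinates.
    }
  }
}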
5.1.3 air.herd
The air.herd operation defines a group of work units that execute concurrently on a grid of physical compute units and their local memories. It contains an index space which, expressed as an affine set of worker indices, generalizes the notion of thread IDs found in traditional parallel programming models (e.g., CUDA [29] or OpenMP [9]). Each worker in the air.herd executes the same region body, but specialization is enabled by indexing: control flow and memory access patterns may diverge based on each worker's coordinates in the air.herd. air.herd operations are scheduled atomically: they are only scheduled when resources for all the workers are available, and must enable independent forward progress for their individual workers. It is implied that workers are allocated as a physically local contiguous block, and lowerings may make use of the grid dimensions to lower to architecture-specific features that exploit that physical locality. The size of air.herd operations indicates the granularity of (concurrent) dispatch; users should be aware that certain architectures may place limits on the size of resources they can guarantee to run concurrently (e.g., lowerings may fail during backend compilation if unimplementable air.herd operations are created). Where multiple air.herd operations are included in an air.launch, their default behavior is to run sequentially. Programmers may use the advanced synchronization constructs in Section 5.3 to set additional constraints on air.herd operations to guarantee sequential or concurrent execution and consequently modify the resource requirements of the surrounding air.launch.

5.2 Data Locality Constructs
Spatial architectures rely heavily on local memory hierarchies and explicit DMA engines to achieve high efficiency. MLIR-AIR introduces constructs that make data locality explicit, enabling the compiler to reason about and optimize data movement across compute tiles and memory spaces. When ingesting code from an AI framework, progressive lowerings are supported by use of MLIR MemRef types, which represent typed memory references to structured data and are assumed to reside in global memory if not explicitly annotated. Existing lowerings support explicit memory allocation and movement into at least two further levels of addressable memory hierarchy (shared cluster scratchpads and private memory local to a worker).

We support two levels of abstraction in our data movement constructs. First, to support lowering from higher-level dialects, MLIR-AIR supports an air.memcpy operation. Second, to provide further control over memory movement, MLIR-AIR provides an air.channel operation that abstracts architecture-specific optimizations in the device interconnect and specialized tensor-DMAs.

5.2.1 air.memcpy
To progressively bridge the gap between high-level memory transfer specifications and spatial hardware implementations, MLIR-AIR introduces an intermediate air.memcpy construct. air.memcpy enhances the conventional memcpy operation with explicit attributes for data layout and memory spaces. This allows users to indicate which levels of the hierarchy they are transferring between, and to express the desire for an in-flight physical reshaping of the data, decoupling the logical layout of a MemRef from its physical representation in memory. This is useful because data transfer operations offer an opportunity to specialize data layout on the fly.
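As a hedged sketch of such a transfer (the offset/size/stride operands and memory-space annotation are illustrative assumptions; compare Listing 1 in Section 5.2.2, which omits these details):

// Hedged sketch: air.memcpy staging a 32x32 tile of a larger L3 buffer
// into a tile-local L1 buffer; the offsets/sizes/strides express the
// in-flight reshaping described above. Constants elided.
%l1 = memref.alloc() : memref<32x32xbf16, 2>   // memory space 2: tile-local L1
air.memcpy (%l1, %l3[%off0, %off1] [%c32, %c32] [%c128, %c1])
    : (memref<32x32xbf16, 2>, memref<128x128xbf16>)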
In subsequent lowering paths, memcpy operations are lowered to make use of air.channel operations.

5.2.2 air.channel
Many modern spatial accelerators, including AMD NPUs and NVIDIA Hopper GPUs, expose hierarchies of data movement engines and memory spaces that require explicit software modeling for efficient execution. To support this, MLIR-AIR introduces the air.channel abstraction, which represents stream-based data transfers between distinct memory regions through paired put and get operations:
• put operations transfer data from a source memory address in one level of the memory hierarchy onto a serialized stream, and
• get operations retrieve data from the stream into a destination memory address, representing a buffer in a particular level of the memory hierarchy.

These operations are placed in the code regions local to their respective memory allocations, enabling the compiler to express and optimize DMA-to-memory affinity. Both endpoints of the communication are captured via a globally scoped symbolic air.channel, enabling an ordered and streamed data transfer. Subsequent compiler passes, detailed in Section 7.4, then decouple air.memcpy (Listing 1) into discrete air.channel.put and air.channel.get operations (Listing 2). Notably, air.channel operation references can cross levels of the AIR construct hierarchy. For example, an air.channel.put operation that is the immediate child of an air.launch operation may push data into an air.channel whose consumers are more deeply nested air.channel.get operations inside air.herd and air.segment operations.

The air.channel abstraction integrates naturally with the MemRef dialect by adopting the same offset, size, and stride specifications for describing structured memory accesses. As shown in Listing 2, air.channel operations operate over structured MemRef views, allowing tensor access patterns to remain analyzable and composable with other MLIR transformations.

Listing 1. air.memcpy operation.³
air.memcpy (%y, %x)

Listing 2. Equivalent air.channel pair.
air.channel.put @chan1 (%x)
air.channel.get @chan1 (%y)

³For simplicity, we omit the offsets, sizes, and strides lists from this code snippet.

MLIR-AIR also supports broadcasting within air.channel operations, enabling a single source to supply data to multiple consumers without redundant resource usage. Broadcasting is explicitly specified through affine integer sets, which define mappings from input indices to sets of output indices and are specialized into affine.if conditions at lower levels. The detection and lowering of broadcast patterns is further discussed in Section 7.2. Named bundles of air.channel symbols are supported to allow users to select a specific air.channel on which to put/get buffers (using a numerical index).

5.2.3 air.herd
The air.herd operation not only defines parallel execution but also plays a crucial role in data locality. Since all workers in an air.herd run concurrently on allocated local resources (including compute units, local memories, and tile-local data movers), we can optimize data movement between them using methods such as double buffering to minimize memory latency (see Section 7.4.1). The lowering of air.herd to hardware platforms such as AMD NPUs ensures spatial contiguity of worker placement, allowing architecture-specific features to be exploited for efficient implementation of communications. For example, in its default setting, the physical lowering of air.herd operations to NPU AI Engine arrays guarantees that neighboring workers can write to the local memory of their neighboring cores. This allows an efficient specialized lowering of certain patterns of channel communication within the air.herd. By constraining data exchange to local resources, air.herd operations improve the dataflow efficiency within their scope.
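A hedged sketch of the cross-hierarchy pattern described in Section 5.2.2 (simplified syntax in the style of Listing 2; the channel shape, loop bounds, buffer details, and names are illustrative assumptions):

// Hedged sketch: puts issued at launch level feed gets nested in a herd.
air.channel @chan0 [2, 2]                      // a 2x2 bundle of channel endpoints
air.launch (%lx) in (%ls = %c1) {
  scf.parallel (%i, %j) = (%c0, %c0) to (%c2, %c2) step (%c1, %c1) {
    air.channel.put @chan0[%i, %j] (%l3_slice) // one slice per worker
    scf.reduce
  }
  air.segment @seg0 {
    air.herd @herd0 tile (%tx, %ty) in (%sx = %c2, %sy = %c2) {
      %l1 = memref.alloc() : memref<1024xf32, 2>
      air.channel.get @chan0[%tx, %ty] (%l1)   // each worker pulls its slice
      // ... compute on %l1 ...
    }
  }
}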
For example, in its default setting, the physical lowering of air.herd operations to NPU AI Engine arrays guarantees that neighboring workers can write to the local memory of their neighboring cores. This allows an efficient specialized lowering of certain patterns of channel communication within the air.herd. By constraining data exchange to local resources, air.herd operations improve the dataflow efficiency within their scope.
5.2.4 air.segment
As a resource management construct, air.segment also contributes to data locality by controlling memory partitioning and affinity constraints. Since air.segment operations dictate how resources are allocated and shared, they help ensure that accesses to the shared memories remain localized within their data producers and consumers, including the data movers and any air.herd operations within an air.segment. This reduces unnecessary data movement, keeping computation and memory access spatially co-located for better efficiency.
5.3 Synchronization Constructs
5.3.1 air.token
The air.token construct provides fine-grained control over execution dependencies in asynchronous workloads, with the granularity tunable from coarse-grained synchronization over regions of code to fine-grained per-operation scheduling. When an operation is marked with the async keyword, it returns an air.token that signals its completion status; this air.token can be used to constrain the relative scheduling or placement of operations in time or space.
Synchronization using air.token is managed through two mechanisms. Firstly, explicit air.wait_all operations can be inserted into synchronous control flow. This allows us to prevent further operation dispatch until the tokens specified in the air.wait_all operation have signaled completion (a minimal sketch follows the list below). Secondly, synchronization lists, which explicitly specify the relative scheduling of operations in time and space, describe a scheduling graph. Backend lowerings can use this graph to push dependency resolution to distributed dispatchers in hardware devices, allowing offload of groups of operations. The compute model envisages three types of relationships between operations controlled by synchronization lists.
• Dependency lists encode directed edges between an operation and the predecessor operations in which its inputs are defined. They require that all inputs to an operation that are modified by a source operation in the dependency list are visible before the sink operation is scheduled. This is often implemented as a happens-before relationship to control the scheduling of operations in time.
• Concurrency lists constrain the scheduling of operations in space and time. Each air.token in a concurrency list defines an undirected edge between two operations indicating that they must be scheduled at the same time. This implies that each operation must use exclusive resources.
• Affinity lists constrain the scheduling of operations in space. These are lists of tokens that define undirected edges between operations that must execute using the same resources. In practice, this means these operations' time-slots must be disjoint, but the edge does not describe which operation must be scheduled first. The edges provide information on where spatial affinity could be exploited by the compiler or runtime.
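A minimal sketch of the air.wait_all mechanism, in the simplified syntax of Listing 3 below (the channel and buffer names here are illustrative assumptions):

// Two asynchronous transfers each return an air.token; air.wait_all
// blocks further operation dispatch until both tokens have signaled
// completion.
%t0 = air.channel.get async @chan_a[%tx, %ty] (%buf_a[] [] [])
%t1 = air.channel.get async @chan_b[%tx, %ty] (%buf_b[] [] [])
%t2 = air.wait_all async deps = [%t0, %t1]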
Details on how MLIR-AIR automatically detects and lowers data dependencies are included in Section 7.3. AIR's parallelism constructs, such as air.launch, signal air.token completion by behaving as a grouped asynchronous task: an air.launch's air.token is released only when all operations within the air.launch have completed.
5.3.2 air.channel
While primarily a data movement construct, air.channel also plays an essential role in synchronization across data movers operating on discrete memory spaces: an air.channel.get operation is synchronized to an air.channel.put on the same air.channel by back pressure, as shown in Listing 2. This synchronization abstraction, combined with asynchronous dependencies on air.channel actors to enforce synchronization local to each memory space, is both simple and effective: enforcement of dependencies between distributed data movers does not require complex control-flow dependencies across code regions.
6 AIR Dialect Constructs in Use
To concretely illustrate how AIR dialect constructs appear in practice, we present a simplified example of an element-wise vector addition program. This example bridges the conceptual descriptions of Section 5 and the compiler transformations detailed in Section 7.
The input program, shown in Appendix A, expresses a tiled vector addition in generic SCF using scf.parallel and scf.for. We use explicit memref.copy operations to move data between MemRef objects scheduled in a loop iteration. The program is agnostic to the target hardware and does not yet reflect spatial execution, memory locality, or asynchronous scheduling.
The corresponding AIR-transformed IR, shown in Listing 3, makes the spatial and asynchronous execution explicit. In this transformed version:
• The outer scf.parallel loop is replaced by an air.launch enclosing an air.herd, assigning each iteration to a spatial tile in a 2D compute grid.
• Temporary buffer allocations are restructured with explicit memory hierarchy annotations, and all data movement operations are rewritten using air.memcpy or decoupled air.channel.put and air.channel.get.
• Execution dependencies across asynchronous regions are made explicit with air.token values, enabling pipelined execution between data transfer and compute stages.
The next section describes the key compilation stages in this IR transform.

Listing 3. Element-wise vector add described in AIR dialect. The syntax has been simplified and reformatted for clarity of presentation; some MLIR dialect annotations and attributes are omitted.
air.channel @channel_0 [1, 2]
air.channel @channel_1 [1, 2]
air.channel @channel_2 [1, 2]
func.func @eltwise_add(%arg0: memref, %arg1: memref, %arg2: memref) {
  %0 = air.launch async (%arg3, %arg4) in (1, 1) args(%arg7 = %arg0, %arg8 = %arg1, %arg9 = %arg2) {
    %1 = air.segment @eltwise_add_0 async args(%arg10 = %arg7, %arg11 = %arg8, %arg12 = %arg9) {
      %2 = air.channel.put async @channel_0[0, 0] (%arg10[0, 0, 0] [32, 2, 512] [2048, 512, 1])
      %3 = air.channel.put async @channel_0[0, 1] (%arg10[0, 0, 1024] [32, 2, 512] [2048, 512, 1])
      %4 = air.channel.put async @channel_1[0, 0] (%arg11[0, 0, 0] [32, 2, 512] [2048, 512, 1])
      %5 = air.channel.put async @channel_1[0, 1] (%arg11[0, 0, 1024] [32, 2, 512] [2048, 512, 1])
      %6 = air.channel.get async @channel_2[0, 0] (%arg12[0, 0, 0] [32, 2, 512] [2048, 512, 1])
      %7 = air.channel.get async @channel_2[0, 1] (%arg12[0, 0, 1024] [32, 2, 512] [2048, 512, 1])
      %8 = air.herd @herd_0 async tile (%arg13, %arg14) in (1, 2) {
        %async_token, %results = memref.alloc() async
        %async_token_4, %results_5 = memref.alloc() async
        %async_token_6, %results_7 = memref.alloc() async
        %async_token_8, %results_9 = memref.alloc() async
        %async_token_10, %results_11 = memref.alloc() async
        %async_token_12, %results_13 = memref.alloc() async
        %9 = air.wait_all async deps = [%async_token_10, %async_token_12]
        %10:3 = scf.for %arg17 = 0 to 65536 step 4096 iter_args(%arg18 = %9, %arg19 = %async_token_12, %arg20 = %async_token_12) {
          %11 = air.channel.get async deps = [%arg18, %arg20] @channel_0[%arg13, %arg14] (%results_9[] [] [])
          %12 = air.channel.get async deps = [%arg18, %arg20] @channel_1[%arg13, %arg14] (%results_13[] [] [])
          %13 = air.wait_all async deps = [%11, %12]
          %14 = scf.for %arg21 = 0 to 1024 step 1 iter_args(%arg22 = %13) {
            %async_token_20, %results_21 = memref.load async deps = [%arg22] %results_9[%arg21]
            %async_token_22, %results_23 = memref.load async deps = [%arg22] %results_13[%arg21]
            %22 = arith.addf %results_21, %results_23
            %async_token_24 = memref.store async deps = [%arg22] %22, %results_11[%arg21]
            %23 = air.wait_all async deps = [%async_token_20, %async_token_22, %async_token_24]
            scf.yield %23
          }
          %15 = air.channel.put async deps = [%arg19, %arg18, %14] @channel_2[%arg13, %arg14] (%results_11[] [] [])
          %16 = air.channel.get async deps = [%arg19] @channel_0[%arg13, %arg14] (%results[] [] [])
          %17 = air.channel.get async deps = [%arg19] @channel_1[%arg13, %arg14] (%results_5[] [] [])
          %18 = air.wait_all async deps = [%16, %17, %arg18]
          %19 = scf.for %arg21 = 0 to 1024 step 1 iter_args(%arg22 = %18) {
            %async_token_20, %results_21 = memref.load async deps = [%arg22] %results[%arg21]
            %async_token_22, %results_23 = memref.load async deps = [%arg22] %results_5[%arg21]
            %22 = arith.addf %results_21, %results_23
            %async_token_24 = memref.store async deps = [%arg22] %22, %results_7[%arg21]
            %23 = air.wait_all async deps = [%async_token_20, %async_token_22, %async_token_24]
            scf.yield %23
          }
          %20 = air.channel.put async deps = [%19, %arg18] @channel_2[%arg13, %arg14] (%results_7[] [] [])
          %21 = air.wait_all async deps = [%16, %17]
          scf.yield %15, %20, %21
        }
        /* Memref deallocations were omitted. */
      }
      air.wait_all [%2, %3, %4, %5, %6, %7, %8]
    }
  }
  return
}

7 Compilation of AIR dialect to Spatial Hardware
This section builds upon the AIR constructs introduced in Section 5 and describes the core compiler optimizations in MLIR-AIR that transform high-level loop-based programs into efficient spatial implementations. These passes progressively transform generic operations into tiled subproblems, resolve dependencies for correct asynchronous execution, optimize data reuse and communication locality, and finally lower the program into hardware-executable IRs targeting AMD NPUs. The following subsections describe each stage in this process:
• Tiling and Parallelism Mapping: Maps high-level operations to distinct hardware tiles via loop tiling and parallel loop conversion (Section 7.1).
• Broadcast Detection and Lowering: Identifies and lowers data reuse patterns as affine-mapped broadcasts to reduce redundant transfers (Section 7.2).
• Asynchronous Dependency Analysis: Constructs fine-grained control and data dependencies represented using ACDG (Section 7.3).
• Inferring Dataflow via air.channel: Decouples memory transfers into local operations, exposing DMA-to-memory affinity for scheduling (Section 7.4).
• Lowering to AMD NPU Targets: Generates spatial hardware IR and runtime code for deployment on AMD NPUs (Section 7.5).
7.1 Tiling and Parallelism Mapping
Tiling identifies parallel subregions of computation and introduces structured iteration constructs to represent them. These parallel tiles form the basis for spatial parallelism analysis and schedule optimization. In MLIR-AIR, tiling is performed through a compiler pass pipeline that lowers implicit parallelism in high-level operations (e.g., from the LinAlg dialect) into explicit scf.parallel or scf.for_all loops. These are subsequently mapped to AIR spatial constructs such as air.launch and air.herd. The pipeline leverages upstream MLIR tiling utilities while also ensuring flexibility and compatibility when plugged into broader compiler ecosystems, including IREE [4] and Triton [30].
In Figure 3, we examine how tiling strategies in MLIR-AIR influence the scheduling efficiency of a tiled matrix multiplication (A × B = C, where A ∈ R^{M×K}, B ∈ R^{K×N}, and C ∈ R^{M×N}) mapped to AMD NPUs. We consider a matrix multiplication problem tiled along the M and N dimensions and mapped to three spatial layouts: 1 × 4, 2 × 2, and 4 × 1 air.herd operations. These configurations are selected to cover a range of aspect ratios and communication patterns. In the 1 × 4 layout, matrix A is broadcast across the four column-aligned cores, while matrix B is privately transferred to each core via separate dataflows. The 4 × 1 layout exhibits the complementary pattern: B is broadcast across rows, but A must be duplicated. In contrast, the 2 × 2 layout enables two-dimensional reuse: A is broadcast column-wise and B row-wise, minimizing redundant transfers. Data reuse patterns are automatically inferred by MLIR-AIR (detailed in Section 7.2).
Figure 3 presents the runtime traces of each strategy, under the assumption of equal tile sizes for A and B. In the 1 × 4 and 4 × 1 cases, imbalance in data streaming, due to one matrix requiring separate transfers, results in core stalls as execution waits for both inputs to arrive. By contrast, the 2 × 2 configuration shows reduced stalls and improved throughput owing to symmetric broadcast reuse on both the A and B paths.
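For orientation, a sketch of the tiled loop structure that Section 7.1 produces for this matmul before AIR mapping (the tile-loop bounds and names are assumed for illustration; the two parallel dimensions are the ones subsequently bound to the air.herd coordinates, corresponding to the for_all loops ii and jj referenced by Listing 6):

// Output-stationary tiling: the scf.parallel dimensions become the
// spatial air.herd coordinates; the scf.for loop is the sequential
// reduction over K executed by each worker.
scf.parallel (%ii, %jj) = (%c0, %c0) to (%cMT, %cNT) step (%c1, %c1) {
  scf.for %kk = %c0 to %cKT step %c1 {
    // per-tile matmul accumulating A(%ii, %kk) x B(%kk, %jj) into C(%ii, %jj)
  }
  scf.reduce
}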
Performance bottlenecks in asymmetric tilings can be mitigated by rebalancing tile sizes to equalize data transfer volumes or by allocating additional DMA channels to heavier dataflows. The latter can be automated through MLIR-AIR's dataflow-aware bufferization (see Section 7.4.3). This case study highlights how MLIR-AIR enables fast tile-shape-aware schedule selection through explicit representation of data dependencies and broadcast opportunities, guiding design-space exploration for spatial platforms.

[Figure 3 panels: (a) 1 × 4 herd; (b) 4 × 1 herd, with the A and B dataflows annotated between core tiles and the memory tile; (c) 2 × 2 herd; (d)-(f) memory tile traces (W/R events) for (a)-(c), respectively.]
Fig. 3. Impact of tiling strategy on data movement schedule in an output-stationary matrix multiplication. See Listing 6 for its schedule described using loop nests, where the for_all loops ii and jj were mapped to the horizontal and vertical directions in the two-dimensional air.herd operations, respectively.

7.2 Broadcast Detection and Lowering
Once tiling has spatially partitioned the computational workload, the next optimization opportunity lies in minimizing off-chip data movement through on-chip reuse. To achieve this, MLIR-AIR introduces a systematic way to identify and optimize data broadcasting patterns in the tiled problem space. Compiler passes work in tandem to both detect opportunities and generate optimized AIR code that explicitly captures such patterns using affine maps.
7.2.1 Broadcast Detection
The broadcast detection pass performs static analysis on the iteration domain of the program to discover replication patterns in data movements. When detected, the pass annotates the data movement operation with an affine set representing a projection in the spatial iteration domains from the sources to the broadcast destinations. For example, an affine set S0 representing a broadcast on two-dimensional spatial iterations (e.g., an air.herd), where an array of 4 × 1 air.memcpy sources broadcasts to 4 × 4 destinations, has the form

S0 = {(d0, d1) ∈ Z² | ∃ s0 ∈ Z : d0 = s0, 0 ≤ s0 ≤ 3, 0 ≤ d1 ≤ 3},

where the symbol s0 and the dimensions d0 and d1 represent the source and destination spaces, respectively. By expressing this as an affine set in MLIR's Affine dialect, the AIR dialect retains a precise and analyzable description of the communication pattern, which remains composable with other open-source MLIR dialects thanks to the community-developed Affine dialect utilities.
7.3 Asynchronous Dependency Analysis
MLIR-AIR captures asynchronous parallelism using ACDG, represented inline in the MLIR code via the SSA air.token, which tracks the execution ordering and ensures correctness.
7.3.1 Capturing Dependencies using ACDGs
In the synchronous code snippet shown in Listing 4, each data movement is implicitly blocked by the previous one, leading to a sequential schedule. However, both DMA operations could theoretically execute simultaneously, assuming independent DMA resources. MLIR-AIR automatically analyzes memory references, identifies these implicit dependencies, and explicitly annotates synchronization tokens as shown in Listing 5.

Listing 4. Synchronous (sequential) execution.
air.memcpy (%v1, %v2)
air.memcpy (%v3, %v4)
func.call @func(%v1, %v3)

Listing 5. Explicit asynchronous dependencies.
%t1 = air.memcpy async (%v1, %v2)
%t2 = air.memcpy async (%v3, %v4)
%t3 = func.call async deps = [%t1, %t2] @func(%v1, %v3)

This explicit representation clearly indicates that the compute operation must wait for both DMA transfers to complete before proceeding, preserving correctness. Furthermore, it also makes evident that the two DMA operations can execute in parallel, effectively leveraging multiple discrete DMA resources. In MLIR-AIR, we provide compiler passes which, driven by MLIR's native SSA representation and dominance analysis, automatically capture the ACDG arising from the read-after-write, write-after-read, and write-after-write dependencies between MLIR operations, providing robust guarantees of correctness in MLIR-AIR's scheduling optimizations.
Loop-Carried Dependency in ACDG. In conventional compiler analysis, loop-carried dependencies are often represented using dependence polyhedra of the form (i, j, k) → (i′, j′, k′), capturing legal source and destination iteration pairs that must respect data dependence across loops. Compilers such as PLUTO [7] and work by Baskaran et al. [5] typically model tiling and execution at the level of atomic tiles, where dependencies across tiles dictate scheduling order, while dependencies within a tile are assumed to be resolved independently through external memory accesses. This treatment simplifies global scheduling but leaves intra-tile parallelism and fine-grained asynchronous scheduling opportunities underexplored.
Extending beyond this model, MLIR-AIR's ACDG captures dependencies at the operation level, both within and across loop iterations. As illustrated in Figure 4a, loop-carried dependencies are explicitly represented by passing air.token values (explicit synchronization handles) through the iteration arguments of scf.for loops, tracking per-iteration execution states. An arbitrary number of air.token values can be carried across iterations, each representing an independently progressing thread of execution. This enables precise modeling of parallel pipelines, race conditions, and shared resource usage, as illustrated in Figure 4b. This mechanism is particularly valuable in hardware pipelining, where producer and consumer stages can overlap in time (e.g., using ping-pong buffering; see Section 7.4.1).
Representation of Asynchronous Dependencies via air.token in scf.parallel. Similarly, MLIR-AIR supports asynchronous dependencies within structured parallel execution constructs such as scf.parallel. As visualized in Figure 4c, MLIR-AIR explicitly handles the synchronized initialization of parallel threads via an air.token passed into the initialization argument of the loop; their synchronized termination is represented explicitly via a reduction tree of air.wait_all barriers.
7.3.2 Reasoning using ACDGs
Building on the previous section on ACDG extraction, we now describe how MLIR-AIR progressively transforms a generic loop-based program into finer-grained asynchronous schedules by analyzing and restructuring its control and data dependencies. Figure 5 illustrates this process on an imperfect loop nest. In the original synchronous form (Figure 5a), the loop bodies imply a fully sequential dataflow in the absence of explicit parallelism annotations. Nevertheless, the underlying ACDG reveals opportunities for parallelism, as operations can be partitioned based on the memory buffers they access (annotated by colors).
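As a concrete sketch of the two loop-carried token patterns of Figures 4a and 4b (simplified syntax as in Listing 3; the channel and buffer names are placeholders):

// Figure 4a pattern: one token threads the iterations, fully
// sequentializing the loop body across iterations.
%r = scf.for %i = %c0 to %cN step %c1 iter_args(%t = %t_init) {
  %t_next = air.channel.get async deps = [%t] @chan[%tx, %ty] (%buf[] [] [])
  scf.yield %t_next
}

// Figure 4b pattern: two tokens progress independently, so the two
// sub-threads of the loop body may overlap across iterations.
%r2:2 = scf.for %i = %c0 to %cN step %c1 iter_args(%ta = %t0, %tb = %t1) {
  %ta_next = air.channel.get async deps = [%ta] @chan_a[%tx, %ty] (%buf_a[] [] [])
  %tb_next = air.channel.put async deps = [%tb] @chan_b[%tx, %ty] (%buf_b[] [] [])
  scf.yield %ta_next, %tb_next
}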
[Figure 4 panels: ACDG diagrams of (a) scf.for with a single token, (b) scf.for with multiple tokens, and (c) scf.parallel with an init token and a reduce, alongside (d)-(f) the corresponding code.]
Fig. 4. Visualizations of ACDGs in loop iterations, including (a) sequentialized for loop, (b) multi-token for loop, and (c) parallel loop, and their respective MLIR-AIR specification. A circle represents an air.token, and a polygon represents a group of MLIR operations in the loop body. Listings (d)-(f) demonstrate how each ACDG is represented inline in MLIR code.

MLIR-AIR first applies asynchronous dependency analysis to construct an explicit ACDG using loop-carried air.token values within each loop (Figure 5b). This step exposes parallelism between sub-graphs of each loop's body accessing distinct buffers, while preserving correctness through token synchronization. To further expose optimization opportunities, MLIR-AIR splits the asynchronous loop nest into multiple independent nests (Figure 5c), each exclusively operating on a single memory object. This restructuring systematically uncovers and amplifies the spatial parallelism latent in generic loop-based input programs, isolating it into dataflows that facilitate compiler optimizations.
7.4 Inferring Dataflow via air.channel
The ACDG structure not only enables fine-grained parallelism analysis, but also serves as the foundation for identifying and scheduling data movement across disjoint memory spaces. AIR's channel-based abstraction makes such communication patterns explicit and analyzable.
Figure 6 illustrates this transformation using ACDGs. In the pre-transformation ACDG shown in Figure 6a, a memcpy operation moves data from a shared buffer a into a local buffer a′, which is subsequently consumed by a compute kernel. Because the memcpy resides within the body of the air.herd, the producer and consumer of a′ are tightly coupled within a single hierarchical region. While this correctly expresses intra-herd dependencies, it fails to expose the fine-grained asynchronous boundary between the shared-memory and local-memory regions. As a result, the external thread managing a remains blocked until the entire air.herd completes, despite the fact that only the memcpy operation requires synchronization.

[Figure 5 panels: ACDG diagrams of (a) a synchronous loop nest, (b) an asynchronous loop nest, and (c) the asynchronous loop nest split into independent nests; loop bodies are colored by whether they access data A only or data B only, with data flow and dependency edges shown.]
Fig. 5. Visualizations of ACDGs for (a) a synchronous loop nest, (b) an asynchronous loop nest, and (c) the asynchronous loop nest split into independent nests, each operating on a single memory object.

The transformed ACDG in Figure 6c resolves this limitation by replacing the memcpy with a decoupled pair of air.channel.put and air.channel.get operations. These operations are hoisted to the respective regions associated with the source and destination memory, each integrated into its own ACDG subgraph via explicit air.token synchronization. To preserve the correctness of the original computation, the hoisted put operation must replicate the parallel semantics of the original memcpy; if the memcpy was nested within an M × N air.herd, then the corresponding put must be nested within a matching M × N scf.parallel loop.
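A minimal sketch of the hoisted form of Figure 6c (simplified syntax; the names and the 2 × 2 grid are illustrative assumptions):

// The put is hoisted next to the source buffer %a and wrapped in an
// scf.parallel that matches the herd's shape, replicating the parallel
// semantics of the original memcpy; each worker's get stays local to
// its private buffer %a_local.
scf.parallel (%i, %j) = (%c0, %c0) to (%c2, %c2) step (%c1, %c1) {
  %tp = air.channel.put async @chan_a[%i, %j] (%a[] [] [])
  scf.reduce
}
%h = air.herd @herd_0 async tile (%tx, %ty) in (2, 2) {
  %tg = air.channel.get async @chan_a[%tx, %ty] (%a_local[] [] [])
  // ... compute consuming %a_local ...
}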
The dashed arrow between air.channel.get and air.channel.put represents the data-stream back pressure; an overlapping schedule across the two sides is enabled if stream buffering is supported in hardware. This back pressure also ensures that the data produced and consumed match in size across hierarchies.
This decoupling of put from get not only enables overlapping communication and execution but also allows the compiler to infer and instantiate multiple parallel dataflows, subject to available bandwidth and communication resources. When the hardware permits, the compiler may emit parallel air.channel instances, increasing aggregate throughput and improving hardware utilization. In this setting, data movement is no longer serialized at the herd boundary, and bandwidth can scale with the degree of inferred parallelism. Furthermore, the use of air.channel operations allows multiple data movement operations to share communication resources. This enables hardware-aware optimizations such as air.channel arbitration in pipelined execution (see Section 7.4.1) and air.channel reuse (see Section 7.4.2).
7.4.1 Capturing Hardware Pipelining with air.channel in ACDG
Building on the fine-grained asynchronous representations introduced in Section 7.3.1 and the decoupled air.channel abstraction, MLIR-AIR captures hardware pipelining by leveraging the loop-carried air.token semantics in the ACDG. By allowing multiple tokens to flow independently across iterations, MLIR-AIR models the three key dependencies in a hardware pipeline all at once: (i) producer-consumer data dependencies, (ii) producer-side resource contention, and (iii) consumer-side resource contention.

[Figure 6 panels: ACDG diagrams of the allocations, data movement, compute, and deallocations around an M × N herd for (a) coupled memcpy, (b) decoupled put/get, and (c) decoupled put/get with the put hoisted into an M × N parallel loop; the legend distinguishes operations, air.token dependencies, and air.channel dependencies.]
Fig. 6. Visualizations of ACDGs before and after air.channel decoupling.

As a motivating example, we consider two-stage pipelining, commonly referred to as ping-pong buffering. To expose pipeline stages explicitly, the loop must first be unrolled by a factor of two, corresponding to the number of stages, yielding distinct ping and pong threads for ACDG annotation. The resulting structure maps naturally to the generic ACDG with multiple loop-carried tokens shown in Figure 4b, where the ping producer, ping consumer, pong producer, and pong consumer map to the four loop-body subgraphs. Two of the four tokens (annotated in gray and green) represent the producer-consumer dataflow for the ping and pong stages, while the other two tokens (red and blue) capture intra-stage resource contention on the producer and consumer sides, respectively. The final ACDG representing the two-stage pipeline is illustrated in Figure 7, where the flattened form highlights how each token enforces correctness across iterations.

[Figure 7: flattened ACDG alternating 'ping' producer/consumer and 'pong' producer/consumer operation groups.]
Fig. 7. Flattened ACDG showing a ping-pong buffering schedule, specialized from the generic ACDG form in Figure 4b.

To demonstrate the pipelining transformation process, we implemented a simple case study in which a data stream traverses an AMD NPU memory tile, featuring multiple memory banks and data ports.
Figure 8 shows that with ping-pong enabled, the MLIR-AIR compiler correctly identifies producer (write) and consumer (read) threads from the input loops and infers an overlapping schedule. The post-transformation runtime trace, shown in Figure 8b, confirms the expected behavior: data reads and writes execute concurrently across two buffers, validating the correctness and effectiveness of the pipelined ACDG transformation.
7.4.2 Time-multiplexed Data Movement via air.channel Merging
The decoupled air.channel abstraction in MLIR-AIR enables time-multiplexed data movement by allowing multiple dataflows to reuse shared communication resources through air.channel merging. This is particularly valuable in scenarios where data movement hardware, such as memory ports, DMA engines, or network routing resources, is limited.

[Figure 8 panels: block diagrams and traces of WRITE_PORT_0/READ_PORT_0 at a memory tile with (a) ping-pong buffering disabled and (b) ping-pong buffering enabled.]
Fig. 8. A simple data streaming case study showing the effect of enabling two-stage hardware pipelining.

MLIR-AIR provides compiler passes that automatically detect opportunities for channel merging by analyzing the ACDG structure. Merging is controlled via compiler flags that specify the memory hierarchy at which merging is applied. For selected hierarchies, all merging opportunities implicit in the control flow are greedily identified and lowered.

[Figure 9 panels: (a) before air.channel fusion, two loop nests over i in 0 to m and j in 0 to n, one performing get chan1 a[f_a(i)] / put chan2 a[f_a(i)][g_a(j)] and the other get chan3 b[f_b(i)] / put chan4 b[f_b(i)][g_b(j)]; (b) after air.channel fusion, a single loop nest interleaving get chan1 a[f_a(i)], get chan1 b[f_b(i)], put chan2 a[f_a(i)][g_a(j)], and put chan2 b[f_b(i)][g_b(j)].]
Fig. 9. Visualizations of ACDGs before and after channel merging.

Figure 9 illustrates a generic example: in Figure 9a, two imperfect loop nests perform channel put and get operations on separate memory objects a and b, through affine maps f_a, g_a, f_b, and g_b, respectively. Merging is permitted when the iteration domains i, j match, ensuring correctness when interleaving the data movements. The resulting fused ACDG, shown in Figure 9b, sequentializes the data movements by interleaving the two loops, consolidating their use of the air.channel operations @chan1 and @chan2.
Figure 10 further demonstrates the hardware mapping of the fused design onto an NPU memory tile, along with performance traces showing the data movement schedule. Both the original and fused designs apply pipelined execution following the scheme in Section 7.4.1. After merging, data movements are time-multiplexed, reducing contention for ports and buffers, thereby lowering resource utilization while preserving performance.

[Figure 10 panels: (a) before air.channel merging, two shim ports feed separate read/write port pairs through the memory tiles to the core tiles; (b) after merging, a single shim port and memory tile carry the fused dataflow; (c) pre-merge memory tile trace with writes/reads of a[f_a(i)], a[f_a(i)][g_a(j)], b[f_b(i)], and b[f_b(i)][g_b(j)] on separate ports; (d) post-merge trace with the same transfers time-multiplexed on one port pair.]
Fig. 10. Impact of air.channel merging on data movement parallelism and resource usage. (a) and (b) show schematic diagrams before and after air.channel merging; (c) and (d) show performance traces pre- and post-merging.

7.4.3 Parallelized Data Movement via air.channel Splitting
While channel merging enables time-multiplexed reuse of constrained DMA resources by sequentializing data transfers, such serialization
may limit performance when hardware availability permits greater parallelism. When DMA resources are abundant, MLIR-AIR supports an alternative strategy: exposing and exploiting data movement parallelism through MemRef splitting.
Inputs to MLIR-AIR, often from high-level IRs expressed using generic tensor abstractions, do not always account for the spatial memory connectivity constraints of target architectures such as AMD NPUs during bufferization, leading to degraded performance or mapping failures at implementation time. To address this, MemRef splitting performs a dataflow-aware partitioning analysis that refines buffer allocations based on the actual access patterns and hardware platform constraints.
In a common access pattern, a memory object a is written once and then read out to multiple outputs. Using the polyhedral representation, the accesses within loop nests over i and j can be represented as a[f(i)] and a[f(i)][g(j)], where f and g are affine maps. A concrete example of g that implies a splittable data access pattern, with i ∈ Z¹, is one with dependence polyhedron {S0[i] → S0[i mod 2]}, indicating two disjoint access patterns. The affine map transformation is made possible by MLIR-AIR's explicit representation of parallelism: by analyzing parallel air.channel operations and the associated asynchronous dependencies, the compiler infers the implicit parallel access patterns, transforms the affine access functions to partition independent memory accesses, and bufferizes them into smaller sub-buffers, guaranteeing parallel, conflict-free access at runtime.
In the original schedule (Figure 11a), a is naively bufferized into a single memory object, leading to sequentialized accesses a[f(i)] over time, regardless of the available parallel memory tiles and DMA engines. This limits parallelism across memory tiles and DMA engines, leading to reduced throughput and potential port over-utilization that can cause mapping failures. After MemRef splitting, MLIR-AIR transforms the access map f → ⟨f1, f2⟩, partitioning a[f(i)] into multiple independent sub-tensors ⟨a[f1(i)], a[f2(i)]⟩ that can be allocated to separate memory tiles. This results in an optimized schedule, enabling independent and concurrent data movement across the spatial fabric.
Following this workflow, we implemented a synthetic data streaming experiment, moving data from shim ports through memory tiles to cores using a unit-stride affine access pattern. The pre-splitting hardware trace in Figure 11c shows serialization of all inbound traffic through a single tile, limiting throughput. After MemRef splitting, the post-splitting trace in Figure 11d demonstrates parallel streaming through disjoint memory tiles and shim ports, significantly improving data movement efficiency.

[Figure 11 panels: (a) before MemRef splitting, a single shim port (WRITE_PORT_1) feeds one memory tile whose READ_PORT_1/READ_PORT_2 serve the core tiles; (b) after splitting, two shim ports (WRITE_PORT_1/WRITE_PORT_2) feed two memory tiles (READ_PORT_1/READ_PORT_2); (c) pre-splitting memory tile trace with one write t[f(i)] followed by reads t[f(i)][g1(j)] and t[f(i)][g2(j)]; (d) post-splitting trace with writes t[f1(i)], t[f2(i)] and their reads t[f1(i)][g1(j)], t[f2(i)][g2(j)] proceeding concurrently.]
Fig. 11. Impact of MemRef splitting on data movement parallelism. (a) and (b) show schematic diagrams before and after MemRef splitting; (c) and (d) show performance traces visualizing serialization and parallelism pre- and post-splitting.

7.5 Lowering to AMD NPU Targets
With parallelism, data reuse, and communication patterns fully expressed, the AIR IR is ready to be lowered into hardware-specific representations. First, constructs in MLIR-AIR are lowered to constructs in MLIR-AIE [14]. air.herd operations are lowered to per-core compute kernels. air.channel constructs are lowered to DMA engines, buffer descriptors (BDs), and stream connections between tiles. Synchronization via air.token values is implemented using tile-local locks.
MLIR-AIR supports code generation targeting multiple open-source frameworks, including LLVM IR, AMD XRT, and ROCr, enabling integration into heterogeneous systems involving CPUs and GPUs. Synchronization between the hardware-specific IR and the runtime program is provided by tokens generated from the NPU hardware controller, which are lowered from the air.token values synchronizing host operations with on-device operations.
8 Integration with AI Model Software Ecosystems
MLIR-AIR is designed to bridge the gap between high-level AI model frameworks and low-level hardware execution platforms. A key feature of MLIR-AIR is its flexible frontend integration, which allows it to ingest AI model specifications from multiple widely-used programming environments and IRs. These integrations allow developers to compile high-level AI models directly into MLIR-AIR's asynchronous, tiled execution model, ready for targeting spatial accelerators like GPUs and AMD NPUs.
Python Integration via AIR's Python Bindings. MLIR-AIR includes native Python bindings that expose AIR dialect operations to Python-based workflows. An example vector-add design using these bindings is shown in Appendix B. These bindings allow direct programmatic construction of AIR IR, enabling rapid prototyping and integration with AI model preprocessing, autotuning, or interactive toolchains.
PyTorch Frontend via Torch-MLIR. Through Torch-MLIR, PyTorch models are lowered into MLIR dialects which are compatible with MLIR-AIR's tiling, scheduling, and asynchronous lowering passes. This allows MLIR-AIR to serve as a backend for PyTorch with no model rewriting, producing spatially executable kernels and runtime binaries for NPUs.
IREE Integration for Portable Deployment. MLIR-AIR interoperates with IREE by consuming its tiled intermediate MLIR representations and producing scheduled AIR programs [4]. These can be integrated with IREE's hardware abstraction layer (HAL), enabling deployment across heterogeneous systems where AIR-based NPUs coexist with CPU and GPU targets under a unified runtime.
Triton Frontend via Triton-Shared. AIR supports Triton through the Triton-Shared project [12], which lowers Triton IR into MLIR dialects consumable by AIR. AIR's compilation pipeline then transforms these into hardware schedules and spatial mappings targeting AMD NPUs. As of the date of publication, MLIR-AIR is the only compiler infrastructure that enables Triton programs to target AMD NPUs, allowing the reuse of GPU-oriented high-level abstractions for spatial architectures. The Triton-to-AIR workflow is experimental and remains under active development.
9 Design Experience and Results
We evaluate MLIR-AIR across progressively complex AI workloads to assess its abstraction efficiency, expressiveness, and performance portability. Our analysis focuses on three main dimensions: (1) a programming abstraction analysis of MLIR-AIR using Halstead metrics [19], (2) performance scaling across multiple backends and hardware configurations for matrix multiplication, and (3) MLIR-AIR's ability to express and optimize fused kernels through a case study on the LLaMA 2 MHA block. These evaluations highlight MLIR-AIR's ability to serve as a spatial compiler abstraction that balances expressiveness and analyzability, positioning it between high-level programming models such as Triton and low-level spatial backends like MLIR-AIE.
9.1 Programming Abstraction Analysis
MLIR-AIR provides a structured, loop-based programming interface that decouples algorithm specification from hardware mapping. Developers express computation at a high level while relying on the compiler to perform hardware-aware transformations. This makes MLIR-AIR serve as an effective bridging layer between high-level programming models, such as Triton and PyTorch, and low-level, spatially explicit representations like MLIR-AIE, which target fine-grained hardware configurations on spatial platforms.

Table 2. Difference in Halstead vocabulary, difficulty, and effort among Triton, ARIES, MLIR-AIR, and MLIR-AIE, implementing the same set of common AI components to target AMD NPUs (lower is better). All designs use the bfloat16 data format. A green shade (in the original) annotates the lowest value. Designs implemented with a μkernel called externally are annotated with ✔. While MLIR-AIE naturally shows higher complexity due to its explicit low-level programming model, MLIR-AIR bridges the gap between Triton and MLIR-AIE, offering lower effort and difficulty while maintaining spatial expressiveness.

Design / Abstraction | External μkernel | Vocabulary (Value, ×) | Difficulty (Value, ×) | Effort (Value, ×)
matrix_scalar_add (single core)
  Triton   | ✘   | 10, -      | 1.25, -      | 62.29, -
  ARIES    | N/A | N/A, -     | N/A, -       | N/A, -
  MLIR-AIR | ✘   | 14, 1.40   | 1.64, 1.31   | 112.14, 1.80
  MLIR-AIE | ✘   | 13, 1.30   | 1.5, 1.20    | 83.26, 1.34
eltwise_binaryop
  Triton   | ✘   | 12, -      | 1.4, -       | 105.40, -
  ARIES    | N/A | N/A, -     | N/A, -       | N/A, -
  MLIR-AIR | ✔   | 11, 0.92   | 1.50, 1.07   | 62.27, 0.38
  MLIR-AIE | ✔   | 23, 1.92   | 3.579, 2.56  | 825.67, 7.83
softmax
  Triton   | ✘   | 14, -      | 2.4, -       | 164.48, -
  ARIES    | N/A | N/A, -     | N/A, -       | N/A, -
  MLIR-AIR | ✔   | 11, 0.79   | 1.50, 0.63   | 62.27, 0.38
  MLIR-AIE | ✔   | 18, 1.29   | 4.615, 1.92  | 692.85, 4.21
conv2d
  Triton   | ✘   | 53, -      | 1.74, -      | 867.09, -
  ARIES    | N/A | N/A, -     | N/A, -       | N/A, -
  MLIR-AIR | ✔   | 11, 0.21   | 1.50, 0.86   | 62.27, 0.072
  MLIR-AIE | ✔   | 36, 0.68   | 5.0, 2.87    | 1938.72, 2.24
matmul
  Triton   | ✘   | 86, -      | 5.73, -      | 5410.33, -
  ARIES    | ✔   | 40, 0.47   | 4.76, 0.83   | 2079.31, 0.38
  MLIR-AIR | ✔   | 78, 0.91   | 3.68, 0.64   | 4713.03, 0.87
  MLIR-AIE | ✔   | 107, 1.24  | 13.46, 2.35  | 32040.15, 5.92

To evaluate the abstraction level of MLIR-AIR, we perform a Halstead complexity analysis across representative AI workloads, comparing against ARIES, a compiler stack that similarly bridges high-level models to spatial hardware [48]. Halstead metrics, computed in our experiments using the open-source tool radon, quantify software complexity based on code structure, capturing vocabulary, difficulty, and effort to provide a language-agnostic measure of clarity and maintainability [24].
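For reference, the Halstead quantities used in Table 2 are the standard ones from [19] (restated here; they are not re-derived in this paper). With η1 distinct operators, η2 distinct operands, and N1, N2 their total occurrence counts:

vocabulary η = η1 + η2,  length N = N1 + N2,  volume V = N log2 η,
difficulty D = (η1/2) · (N2/η2),  effort E = D · V.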
The Halstead metrics were evaluated across a spectrum of representative AI designs, including matrix multiplications, strided and depth-wise convolutions, nonlinear functions such as softmax and exponentiation, and trigonometric operations used in Rotary Positional Embeddings (RoPE) [43].
Table 2 reports the Halstead vocabulary, difficulty, and effort metrics across five representative workloads, each implemented using Triton [12, 30], ARIES [48], MLIR-AIR (via Python bindings), and MLIR-AIE (via IRON [20]), and highlights the key trends in abstraction efficiency and programming overhead. These examples were drawn from publicly available GitHub repositories. The workloads span a range of complexity and include both inline and externally defined μ-kernels (a templatized set of compute instructions specialized to perform a task), as indicated in the 'external μkernel' column. Triton examples include μ-kernel logic inline, which inflates vocabulary and effort metrics. In contrast, MLIR-AIR and MLIR-AIE designs often invoke external kernels, resulting in more compact in-body control logic.
Despite this discrepancy in kernel inclusion, MLIR-AIR consistently maintains Halstead difficulty and effort scores within 2× of Triton across the workloads. This indicates that MLIR-AIR offers a similarly accessible structured parallel programming abstraction as Triton. For smaller, single-core kernels like matrix_scalar_add, MLIR-AIR and MLIR-AIE show near-identical complexity. However, for complex, multi-core designs, such as conv2d and matmul, MLIR-AIR shows a dramatic reduction in overhead, achieving over 80% lower difficulty and effort than MLIR-AIE. This demonstrates MLIR-AIR's strength in managing complexity as spatial parallelism increases.
ARIES presents matrix multiplication examples in its GitHub repository, which we used for the comparisons in Table 2. In this example, the Halstead vocabulary and effort metrics are both very low, lower than Triton's, due to the reduced number of operations and operands in control logic. However, MLIR-AIR achieves a lower difficulty score, indicating that AIR-based representations use simpler and more regular constructs. This result suggests that MLIR-AIR enables structured parallelism at a comparable or lower cognitive complexity than ARIES, while supporting a broader class of workloads.
These results demonstrate that MLIR-AIR effectively bridges the programming gap between the high-level Triton-style control flow and the low-level, highly explicit MLIR-AIE representation. By combining structured, tile-aware abstractions with token-based asynchronous scheduling, MLIR-AIR enables efficient spatial hardware mapping while significantly reducing the complexity developers must manage in their source code.
9.2 Performance Scaling: Mapping Matrix Multiplication to Spatial Hardware
To evaluate the ability of MLIR-AIR to generate efficient spatial compute kernels from generic loop-based programs, we examine its performance on matrix multiplication. Our experiments span a range of problem sizes (256-4096 per iteration dimension) and data formats, measured on a laptop platform featuring the AMD Ryzen AI 7840 NPU [13, 16]. We executed all MLIR-AIR and MLIR-AIE programs using the AMD XRT runtime [17], which manages binary loading, data movement, and kernel dispatch.
We evaluate our MLIR-AIR-generated MLIR-AIE dialect code against MLIR-AIE's published hand-optimized matrix multiplication implementation, which has been adopted by many recent research works as the state-of-the-art baseline for spatial execution on AMD NPUs [20, 34, 48]. The goal of this evaluation is to demonstrate that MLIR-AIR, starting from a naively specified nested loop for matrix multiplication, can produce performant implementations through a sequence of compiler transformations.
Listing 6 shows pseudocode for tiled matrix multiplication written in a generic loop-nest style, using for loops for sequential execution and for_all for spatial parallelism. Such generic representations are beneficial for portability by using an algorithm specification decoupled from hardware mapping, where AI frameworks can target MLIR-AIR without needing to provide device-specific code, demonstrating ease of frontend integration. MLIR-AIR compiles this form via the series of compilation passes presented in Section 7, which optimize the bufferization, data movement scheduling, and concurrency modeling with platform awareness.

[Figure 12 panels: log-log plots of throughput (GOPS) versus compute workload (GOPs) for (a) a 2 × 2 herd (1 TOPS peak, max. eff. 48.2%), (b) a 2 × 4 herd (2 TOPS peak, max. eff. 65.0%), and (c) a 4 × 4 herd (4 TOPS peak, max. eff. 48.6%).]
Fig. 12. Throughput versus compute workload for bfloat16 matrix multiplications, with shapes up to M = N = K = 4k, for AIE tile herds sized (a) 2 × 2, (b) 2 × 4, and (c) 4 × 4, respectively. Compute workload increases along the x-axis. Each point represents the maximum throughput achieved across 20 random tests, to filter out any random system and DDR access latency injected at runtime. Each color/shape reflects a distinct K, with ( ), ( ) and ( ) annotating tests using K = 256, 1024 and 4096, respectively. The dotted line marks the theoretical peak compute throughput4 achievable for each herd of AIE cores at bfloat16 precision, and the maximum compute efficiency achieved against it is annotated with an arrow.

Listing 6. Pseudocode for a tiled output-stationary matrix multiplication that drives MLIR-AIR.
for_all (i_outer = 0; i_outer

A Input SCF Program for the Vector-add Example

Listing 7. Element-wise vector add expressed in generic SCF.
func.func @eltwise_add(%arg0: memref, %arg1: memref, %arg2: memref) {
  %c65536 = arith.constant 65536 : index
  %c2048 = arith.constant 2048 : index
  %c1024 = arith.constant 1024 : index
  %c1 = arith.constant 1 : index
  %c2 = arith.constant 2 : index
  %c0 = arith.constant 0 : index
  scf.parallel (%arg3) = (%c0) to (%c2) step (%c1) {
    %alloc = memref.alloc() : memref
    %alloc_0 = memref.alloc() : memref
    %alloc_1 = memref.alloc() : memref
    scf.for %arg4 = %c0 to %c65536 step %c2048 {
      %subview = memref.subview %arg0[0] [1024] [1] : memref to memref
      memref.copy %subview, %alloc : memref to memref
      %subview_2 = memref.subview %arg1[0] [1024] [1] : memref to memref
      memref.copy %subview_2, %alloc_0 : memref to memref
      scf.for %arg5 = %c0 to %c1024 step %c1 {
        %0 = memref.load %alloc[%arg5] : memref
        %1 = memref.load %alloc_0[%arg5] : memref
        %2 = arith.addf %0, %1 : f32
        memref.store %2, %alloc_1[%arg5] : memref
      }
      %subview_3 = memref.subview %arg2[0] [1024] [1] : memref to memref
      memref.copy %alloc_1, %subview_3 : memref to memref
      memref.dealloc %alloc : memref
      memref.dealloc %alloc_0 : memref
      memref.dealloc %alloc_1 : memref
    }
    scf.reduce
  }
  return
}

B AIR Python Bindings to MLIR-AIR's Vector-add Example

Listing 8.
Element-wise vector add, described in AIR's Python bindings.

@module_builder
def build_module(n, tile_n, np_dtype_in):
    a_size = [n]
    b_size = a_size
    out_size = a_size
    xrt_dtype_in = type_mapper(np_dtype_in)
    num_tiles = 2
    assert n % (tile_n * num_tiles) == 0

    # L3 MemRefTypes
    l3memrefTy = MemRefType.get(a_size, xrt_dtype_in)

    # L1 MemRefTypes
    l1MemrefTy = MemRefType.get(
        shape=[tile_n],
        element_type=xrt_dtype_in,
        memory_space=IntegerAttr.get(T.i32(), MemorySpace.L1),
    )

    @FuncOp.from_py_func(l3memrefTy, l3memrefTy, l3memrefTy)
    def eltwise_add(arg0, arg1, arg2):
        @herd(
            name="herd_0",
            sizes=[1, num_tiles],
            operands=[arg0, arg1, arg2],
        )
        def herd_body(_tx, _ty, _sx, _sy, _l3_a, _l3_b, _l3_c):
            l1_a_data = AllocOp(l1MemrefTy, [], [])
            l1_b_data = AllocOp(l1MemrefTy, [], [])
            l1_out_data = AllocOp(l1MemrefTy, [], [])
            for _l_ivx in range_(0, n, tile_n * num_tiles):
                offset_map = AffineMap.get(
                    0, 2,
                    [
                        AffineExpr.get_add(
                            AffineSymbolExpr.get(0),
                            AffineExpr.get_mul(
                                AffineSymbolExpr.get(1),
                                AffineConstantExpr.get(tile_n),
                            ),
                        )
                    ],
                )
                offset = affine_apply(offset_map, [_l_ivx, _ty])
                dma_memcpy_nd(
                    l1_a_data, _l3_a,
                    src_offsets=[offset],
                    src_sizes=[tile_n],
                    src_strides=[1],
                )
                dma_memcpy_nd(
                    l1_b_data, _l3_b,
                    src_offsets=[offset],
                    src_sizes=[tile_n],
                    src_strides=[1],
                )
                for i in range_(tile_n):
                    val_a = load(l1_a_data, [i])
                    val_b = load(l1_b_data, [i])
                    val_out = arith.addf(val_a, val_b)
                    store(val_out, l1_out_data, [i])
                    yield_([])
                dma_memcpy_nd(
                    _l3_c, l1_out_data,
                    dst_offsets=[offset],
                    dst_sizes=[tile_n],
                    dst_strides=[1],
                )
                DeallocOp(l1_a_data)
                DeallocOp(l1_b_data)
                DeallocOp(l1_out_data)
                yield_([])
Electron transport in junctions between altermagnets

Shubham Ghadigaonkar,1, ∗ Sachchidanand Das,1, ∗ and Abhiram Soori1, †
1School of Physics, University of Hyderabad, Prof. C. R. Rao Road, Gachibowli, Hyderabad-500046, India
∗ These authors contributed equally to this work
† abhirams@uohyd.ac.in

ABSTRACT
We theoretically investigate electron transport in junctions between two altermagnets (AMs) in the strong and weak altermagnetic phases. The charge and spin conductivities are analyzed as functions of the angle θ between the Néel vectors of the two AMs. In the strong AM regime, the charge conductivity vanishes as θ → π, while in the weak AM phase it remains finite. Introducing a normal metal between the two AMs leads to Fabry–Pérot-type oscillations in the charge conductivity. In the strong phase, transport is dominated by up-spin electrons, whereas both spin channels contribute in the weak phase. These results highlight the potential of AM-based heterostructures for spintronic applications, such as spin filters and quantum-interference-based spintronic devices, where tunable spin-dependent transport and interference effects can be utilized in electronic devices.

I. INTRODUCTION

Altermagnets (AMs), materials having d-wave magnetic order, have generated tremendous interest among condensed matter physicists in the past couple of years [1–6]. Characterized by traits of both ferromagnets and antiferromagnets, they have zero net spin polarization. They are known to carry spin current under a voltage bias [7]. Junctions of AMs with normal metals, ferromagnets, and superconductors have been studied by many groups [7–10]. The spin operator that commutes with the Hamiltonian of an AM defines the Néel vector of the AM. Several candidate materials for AMs exist, such as MnTe, Mn5Si3, and KV2Se2O [11–13].
Magnetic tunnel junctions (MTJs) are junctions between two ferromagnetic metals separated by a thin insulator, wherein electrons are able to pass through the thin insulating barrier. The relative orientation of the magnetic moments in these layers dictates the possibility of electron tunneling. A parallel alignment of the magnetizations in the ferromagnetic layers exhibits a low electrical resistance in the junction, whereas an antiparallel alignment results in a high resistance. This variation in resistance between the two magnetic configurations gives rise to the tunneling magnetoresistance (TMR) effect, which forms the fundamental basis for the operation of MTJ-based spintronic devices [14–17]. Magnetic tunnel junctions built from altermagnetic RuO2 and a single ferromagnetic electrode have been shown to result in a high tunneling magnetoresistance [18–20]. Magnetic tunnel junctions between altermagnets have recently been studied, where the authors claim to achieve a tunneling magnetoresistance of over 1000% by rotating the AM and tuning the altermagnetic strength and the Fermi energy [21, 22].
The orientation of the Néel vector can be tuned using spin-orbit torques or ultrafast optical excitations [23, 24]. It influences how spin-polarized currents propagate through the material: the orientation of the Néel vector modifies the spin-dependent scattering and gives rise to a spin Hall response. A recent study on an AM/p-wave magnet junction, rotating the Néel vector of the latter relative to that of the former, shows a similar response [25]. In MnTe, it is found that domains having different directions of the Néel vector are formed, much like in ferromagnets [13].
Motivated by these developments, we first study electron transport in junctions between two altermagnets having different directions of their Néel vectors, modeled by a continuum model. We write down boundary conditions that characterize the junction, and we then calculate the charge and spin conductivities using the Landauer–Büttiker scattering formalism. We also sandwich a normal metal (NM) between the two AMs, having different directions of the Néel vectors, to study how the inclusion of the NM affects the conductivity.

II. OUTLINE OF CALCULATION

A simple model for AMs consists of a Hamiltonian for electrons with spin- and direction-dependent hopping on a two-dimensional square lattice. It breaks time-reversal symmetry but is invariant under the product of time reversal and a π/2 rotation. The tight-binding Hamiltonian can be written as a sum of two terms, the first describing a normal metal and the second describing the altermagnetic order:

H = -2t(cos k_x a + cos k_y a) σ_0 - 2t_J(cos k_x a - cos k_y a) σ_z,

where a is the lattice spacing and the σ_j are Pauli spin matrices (σ_0 being the identity). By Taylor expanding this Hamiltonian in momentum space near the band bottom, its continuum form can be obtained. Depending on the relative strength of the altermagnetic term compared to the normal-metal term in the Hamiltonian, the phase of the AM can be classified as strong or weak. The strong phase corresponds to t_J > t ≥ 0, whereas the weak phase corresponds to 0 ≤ t_J < t.
We will consider junctions between two AMs having different Néel vectors. The charge (spin) current density corresponds to the current density that obeys the continuity equation along with the charge (spin) density defined by ρ_χ = e ψ†ψ (ρ^s_χ = ℏ ψ† σ_χ ψ / 2), where σ_χ = σ_z cos χ + σ_x sin χ. The spin density corresponding to the Néel vector direction defined by χ commutes with the Hamiltonian and is different on the two sides of the junction.

III. JUNCTION BETWEEN AMS IN WEAK PHASE

A. Details of the calculation

In the weak phase, the band bottom for both spins is located at k⃗ = 0, so the Hamiltonian for the weak phase near the band bottom can be written as

H_W(χ) = -(t σ_0 - t_J σ_χ) a² ∂²_x - (t σ_0 + t_J σ_χ) a² ∂²_y.   (1)

Here we consider a junction between two AMs in the weak phase differing in the directions of their Néel vectors. More precisely, in the region to the left of x = 0 the Hamiltonian is H_W(χ = 0), and in the region to the right of x = 0 the Hamiltonian is H_W(χ = θ). In the weak AM, the dispersions for ↑- and ↓-electrons are given by

E = (t - t_J) k²_{x↑} a² + (t + t_J) k²_{y↑} a²,   (2)
E = (t + t_J) k²_{x↓} a² + (t - t_J) k²_{y↓} a².   (3)

The boundary conditions, which can be derived from probability current conservation along x̂, are

ψ(0⁻) = c ψ(0⁺),
[c (t σ_0 - t_J σ_z) a ∂_x ψ + t a q_0 ψ]_{0⁻} = [(t σ_0 - t_J σ_θ) a ∂_x ψ]_{0⁺}.   (4)

Here, c and q_0 characterize the junction. In certain limits, c can be thought of physically as the ratio of the hopping strength on the bond that forms the junction to t, and q_0 corresponds to the strength of a delta-function impurity at the junction [7].
The scattering eigenfunction for an ↑-electron approaching from the left at energy E and angle of incidence α can be expressed as ψ(x) e^{i k_{y↑} y}, where

ψ(x) = (e^{i k_{x↑} x} + r_{↑↑} e^{-i k_{x↑} x}) |↑⟩ + r_{↓↑} e^{-i k_{x↓} x} |↓⟩, for x < 0,
     = t_{↑↑} e^{i k_{x↑} x} |↑_θ⟩ + t_{↓↑} e^{i k_{x↓} x} |↓_θ⟩, for x > 0.   (5)

Here |↑⟩ = |↑_{χ=0}⟩, |↓⟩ = |↓_{χ=0}⟩, |↑_χ⟩ = [cos(χ/2), sin(χ/2)]^T, |↓_χ⟩ = [-sin(χ/2), cos(χ/2)]^T, and

k_{y↑} = √(E/(t + t_J)) sin α,
k_{x↑} = √(E/(t - t_J)) cos α,
k_{x↓} = √{[E/(t + t_J)](1 - η sin²α)},   (6)

with η = (t - t_J)/(t + t_J).
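A quick numerical illustration (not in the source; straightforward arithmetic from Eq. (6)): in the weak phase 0 ≤ t_J < t, so 0 < η ≤ 1 and k_{x↓} is real for every angle of incidence α, i.e., the spin-flip transmitted wave for ↑ incidence always propagates. For t_J = 0.2t (a value used later in Fig. 1(c)),

η = (t - t_J)/(t + t_J) = 0.8/1.2 = 2/3,   1 - η sin²α ≥ 1/3 > 0 for all α.

By contrast, for ↓ incidence (Eq. (9) below) the factor 1 - sin²α/η turns negative beyond the critical angle α_c = arcsin √η ≈ 54.7° for this η, making the spin-flip channel evanescent.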
The scattering coefficients r_{σ′σ} and t_{σ′σ} can be found using the boundary conditions in Eq. (4), where σ = ↑ and σ′ = ↑ or ↓. The charge and spin current densities in the system due to this wavefunction are given by

J^c_↑(α) = (2e/ℏ)[(t - t_J) k_{x↑}(1 - |r_{↑↑}|²) - (t + t_J) k_{x↓} |r_{↓↑}|²],
J^{s-}_↑(α) = (t - t_J) k_{x↑}(1 - |r_{↑↑}|²) + (t + t_J) k_{x↓} |r_{↓↑}|²,
J^{s+}_↑(α) = (t - t_J) k_{x↑} |t_{↑↑}|² - (t + t_J) k_{x↓} |t_{↓↑}|².   (7)

Note that while the charge current is the same in the two AMs, the spin current need not be, since none of the Pauli spin matrices commutes with the Hamiltonians on both sides. J^{s+} (J^{s-}) is the spin current density on the right (left) side of the junction. While the spin current density on the left corresponds to σ_z, the one on the right corresponds to σ_θ.
The scattering eigenfunction corresponding to a ↓-electron incident at an angle of incidence α at energy E has the form ψ(x) e^{i k_{y↓} y}, where

ψ(x) = (e^{i k_{x↓} x} + r_{↓↓} e^{-i k_{x↓} x}) |↓⟩ + r_{↑↓} e^{-i k_{x↑} x} |↑⟩, for x < 0,
     = t_{↑↓} e^{i k_{x↑} x} |↑_θ⟩ + t_{↓↓} e^{i k_{x↓} x} |↓_θ⟩, for x > 0.   (8)

Here,

k_{y↓} = √(E/(t - t_J)) sin α,
k_{x↓} = √(E/(t + t_J)) cos α,
k_{x↑} = √(E/(t - t_J)) √(1 - sin²α/η).   (9)

When sin²α/η > 1, k_{x↑} is imaginary. It is chosen in such a way that the wavefunction of the ↑-electron decays away from the junction. Using the boundary conditions in Eq. (4), the scattering coefficients r_{σ′σ} and t_{σ′σ} can be calculated, where σ = ↓ and σ′ = ↓ or ↑. This wavefunction results in the following charge and spin current densities:

J^c_↓(α) = (2e/ℏ)[(t + t_J) k_{x↓}(1 - |r_{↓↓}|²) - (t - t_J) Re[k_{x↑}] |r_{↑↓}|²],
J^{s-}_↓(α) = -(t + t_J) k_{x↓}(1 - |r_{↓↓}|²) - (t - t_J) Re[k_{x↑}] |r_{↑↓}|²,
J^{s+}_↓(α) = (t - t_J) Re[k_{x↑}] |t_{↑↓}|² - (t + t_J) k_{x↓} |t_{↓↓}|².   (10)

The differential charge and spin conductivities are given by

G = [e / (8π² √(t² - t_J²))] ∫_{-π/2}^{π/2} dα [J^c_↑(α) + J^c_↓(α)],
G^{s±} = [e / (8π² √(t² - t_J²))] ∫_{-π/2}^{π/2} dα [J^{s±}_↑(α) + J^{s±}_↓(α)].   (11)

B. Results

In Figs. 1(b,c), we plot the conductivities versus the angle between the Néel vectors of the two AMs. In Fig. 1(b), the charge conductivity is shown versus the angle θ for different ratios t_J/t, with parameters q_0 = 1/a, c = 1, and E = t. For larger values of the altermagnetic strength t_J, the conductivity exhibits significant variation. We see [Fig. 1(b)] that for θ = 0 the charge conductivity is maximum, and it then decreases monotonically as θ increases in the range [0, π]. As θ deviates away from 0, the orientations of the spins corresponding to the same values of k⃗ on the two sides of the junction differ, leading to reduced conductivity. Interestingly, the conductivity at θ = 0 increases with increasing t_J. This feature can be understood by taking the limit where all the reflection coefficients are zero and using Eq. (11). As t_J decreases, this variation becomes progressively weaker, and at t_J = 0 the conductivity remains constant across all angles. This behaviour corresponds to the complete absence of altermagnetic effects, rendering the system equivalent to a normal metal.

FIG. 1. (a) Schematic of the junction with the Fermi surfaces in each region. The Néel vectors on the two sides of the junction differ by an angle θ. (b) Differential charge conductivity versus θ for different values of t_J/t indicated in the legend. Other parameters: q_0 = 1/a, c = 1, E = t. (c) Spin conductivity versus θ in the left and right AMs for q_0 = 0, c = 1.2, t_J = 0.2, and E = t.

Fig. 1(c) illustrates the variation of the spin conductivities in the left and right AM regions versus θ, for parameters q_0 = 0, c = 1.2, t_J = 0.2, and E = t.
B. Results

In Figs. 1(b,c) we plot the conductivities versus the angle between the Néel vectors of the two AMs. In Fig. 1(b), the charge conductivity is shown versus the angle θ for different ratios $t_J/t$, with parameters $q_0 = 1/a$, $c = 1$ and $E = t$. For larger values of the altermagnetic strength $t_J$, the conductivity exhibits significant variation. We see [Fig. 1(b)] that the charge conductivity is maximum at θ = 0 and decreases monotonically as θ increases in the range [0, π]. As θ deviates from 0, the orientations of the spins corresponding to the same values of $\vec{k}$ on the two sides of the junction differ, leading to reduced conductivity. Interestingly, the conductivity at θ = 0 increases with increasing $t_J$. This feature can be understood by taking the limit where all the reflection coefficients are zero and using Eq. (11). As $t_J$ decreases, this variation becomes progressively weaker, and at $t_J = 0$ the conductivity remains constant across all angles. This behaviour corresponds to the complete absence of altermagnetic effects, rendering the system equivalent to a normal metal.

FIG. 1. (a) Schematic of the junction with the Fermi surfaces in each region. The Néel vectors on the two sides of the junction differ by an angle θ. (b) Differential charge conductivity versus θ for different values of $t_J/t$ indicated in the legend. Other parameters: $q_0 = 1/a$, $c = 1$, $E = t$. (c) Spin conductivity versus θ in the left and right AM for $q_0 = 0$, $c = 1.2$, $t_J = 0.2$ and $E = t$.

Fig. 1(c) illustrates the variation of the spin conductivities in the left and right AM regions versus θ, for parameters $q_0 = 0$, $c = 1.2$, $t_J = 0.2$ and $E = t$. The spin conductivity, defined as the difference between the up-spin and down-spin conductivities, has the same value in both regions at θ = 0 and gradually increases with θ in both regions. In the left region the spin current remains mostly negative, while in the right region it stays entirely negative across all θ, indicating that down-spin contributions dominate over up-spin ones. This is because the down-spin Fermi surface is elongated along y, so there are more states close to normal incidence than for up-spin incidence, where the Fermi surface is elongated along x. At θ = π, the spin conductivities in the left and right regions shift symmetrically above and below zero. This can be understood as follows. The left and right AM regions are structurally identical, differing only in the orientation of their magnetization by an angle θ. At θ = 0, both regions have their magnetization aligned in the same direction, leading to identical up-spin and down-spin transport channels. As a result, the difference between these channels, and hence the spin conductivity, is the same on both sides. However, at θ = π, the magnetization in the right AM is completely reversed relative to the left AM, effectively corresponding to a spin inversion. In this case the up-spins in the left region correspond to the down-spins in the right region and vice versa, producing spin conductivities of equal magnitude but opposite sign.

IV. JUNCTION BETWEEN AM/NM/AM IN WEAK PHASE

A. Details of the calculation

Now a normal metal (NM) of length $L$ is sandwiched between the two AMs, whose Néel vectors differ by χ. In the region $x < 0$ the Hamiltonian is $H_W(\chi = 0)$ and in the region $x > L$ it is $H_W(\chi = \theta)$. In the region $0 < x < L$, the Hamiltonian is $H_W(\chi = 0, t_J = 0)$. The dispersions for ↑- and ↓-spin electrons in the AMs are given in Eqs. (2)-(3), while the dispersion of the normal metal is
$$E = t_0a^2(q_x^2 + q_y^2) - \mu. \quad (12)$$
Conservation of the probability current density at the left AM/NM junction at $x = 0$ and the right AM/NM junction at $x = L$ gives the boundary conditions
$$\psi(0^-) = c\,\psi(0^+), \qquad c\big[(t\sigma_0 - t_J\sigma_z)a\partial_x\psi\big]\big|_{0^-} = \big[t\sigma_0(a\partial_x - aq_0)\psi\big]\big|_{0^+},$$
$$\psi(L^-) = c\,\psi(L^+), \qquad c\big[t(\sigma_0 a\partial_x + aq_0)\psi\big]\big|_{L^-} = \big[(t\sigma_0 - t_J\sigma_{\theta})a\partial_x\psi\big]\big|_{L^+}. \quad (13)$$
The scattering eigenfunction corresponding to an ↑-electron with energy $E$, incident at an angle $\alpha$, takes the form $\psi(x)e^{ik_{y\uparrow}y}$, with
$$\psi(x) = (e^{ik_{x\uparrow}x} + r_{\uparrow\uparrow}e^{-ik_{x\uparrow}x})\,|{\uparrow}\rangle + r_{\downarrow\uparrow}e^{-ik_{x\downarrow}x}\,|{\downarrow}\rangle \ \ \text{for } x < 0,$$
$$\psi(x) = (A_Re^{iq_xx} + A_Le^{-iq_xx})\,|{\uparrow}\rangle + (B_Re^{iq_xx} + B_Le^{-iq_xx})\,|{\downarrow}\rangle \ \ \text{for } 0 < x < L,$$
$$\psi(x) = t_{\uparrow\uparrow}e^{ik_{x\uparrow}x}\,|{\uparrow_\theta}\rangle + t_{\downarrow\uparrow}e^{ik_{x\downarrow}x}\,|{\downarrow_\theta}\rangle \ \ \text{for } x > L, \quad (14)$$
where the wavevectors in the two AMs are given by Eq. (6) and $q_xa = \sqrt{(E + \mu)/t - k_{y\uparrow}^2}$. The scattering coefficients $A_R$, $B_R$, $A_L$, $B_L$, $r_{\sigma'\sigma}$ and $t_{\sigma'\sigma}$ can be found using the boundary conditions in Eq. (13).
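For this three-region geometry, the eight scalar conditions of Eq. (13) determine the eight amplitudes. A compact NumPy sketch for ↑ incidence is given below; it is our own construction with illustrative defaults ($a = \hbar = 1$), and we take the NM-side hopping in Eq. (13) to be the NM parameter $t_0$ (the text writes $t$ there; the two coincide for $t_0 = t$).

```python
import numpy as np

def am_nm_am_up(E, alpha, theta, L, t=1.0, tJ=0.2, t0=1.0, mu=1.0,
                c=1.0, q0=1.0):
    """Eight amplitudes [r_uu, r_du, AR, AL, BR, BL, t_uu, t_du] for up-spin
    incidence on the weak-phase AM/NM/AM junction, Eqs. (13)-(14). Sketch."""
    eta = (t - tJ)/(t + tJ)
    kxu = np.sqrt(E/(t - tJ))*np.cos(alpha)
    kyu = np.sqrt(E/(t + tJ))*np.sin(alpha)
    kxd = np.sqrt((E/(t + tJ))*(1 - eta*np.sin(alpha)**2))
    qx  = np.sqrt((E + mu)/t0 - kyu**2 + 0j)       # NM wavevector, cf. Eq. (14)

    up, dn = np.eye(2)                              # basis spinors
    upT = np.array([np.cos(theta/2),  np.sin(theta/2)])
    dnT = np.array([-np.sin(theta/2), np.cos(theta/2)])
    K = np.diag([t - tJ, t + tJ])                   # t*sigma0 - tJ*sigmaz
    eL, emL = np.exp(1j*qx*L), np.exp(-1j*qx*L)

    A = np.zeros((8, 8), dtype=complex); b = np.zeros(8, dtype=complex)
    # x = 0: psi_L(0) = c psi_M(0)
    A[0:2, 0] = up;     A[0:2, 1] = dn
    A[0:2, 2] = -c*up;  A[0:2, 3] = -c*up
    A[0:2, 4] = -c*dn;  A[0:2, 5] = -c*dn
    b[0:2] = -up
    # x = 0: c K psi_L' = t0 (psi_M' - q0 psi_M)
    A[2:4, 0] = c*(K @ (-1j*kxu*up)); A[2:4, 1] = c*(K @ (-1j*kxd*dn))
    A[2:4, 2] = -t0*( 1j*qx - q0)*up; A[2:4, 3] = -t0*(-1j*qx - q0)*up
    A[2:4, 4] = -t0*( 1j*qx - q0)*dn; A[2:4, 5] = -t0*(-1j*qx - q0)*dn
    b[2:4] = -c*(K @ (1j*kxu*up))
    # x = L: psi_M(L) = c psi_R(L)
    A[4:6, 2] = eL*up;  A[4:6, 3] = emL*up
    A[4:6, 4] = eL*dn;  A[4:6, 5] = emL*dn
    A[4:6, 6] = -c*np.exp(1j*kxu*L)*upT
    A[4:6, 7] = -c*np.exp(1j*kxd*L)*dnT
    # x = L: c t0 (psi_M' + q0 psi_M) = (t s0 - tJ s_theta) psi_R'
    A[6:8, 2] = c*t0*( 1j*qx + q0)*eL*up
    A[6:8, 3] = c*t0*(-1j*qx + q0)*emL*up
    A[6:8, 4] = c*t0*( 1j*qx + q0)*eL*dn
    A[6:8, 5] = c*t0*(-1j*qx + q0)*emL*dn
    A[6:8, 6] = -(t - tJ)*1j*kxu*np.exp(1j*kxu*L)*upT
    A[6:8, 7] = -(t + tJ)*1j*kxd*np.exp(1j*kxd*L)*dnT
    return np.linalg.solve(A, b)
```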
The charge and spin current densities on the two sides of the junction due to this wavefunction are given by Eq. (7), where $J^{s-}_{\uparrow}(\alpha)$ and $J^{s+}_{\uparrow}(\alpha)$ are the spin currents in the left and right AM, respectively.

When a down-spin electron is incident from the left AM, its scattering eigenfunction with energy $E$, incident at an angle $\alpha$, takes the form $\psi(x)e^{ik_{y\downarrow}y}$, with
$$\psi(x) = (e^{ik_{x\downarrow}x} + r_{\downarrow\downarrow}e^{-ik_{x\downarrow}x})\,|{\downarrow}\rangle + r_{\uparrow\downarrow}e^{-ik_{x\uparrow}x}\,|{\uparrow}\rangle \ \ \text{for } x < 0,$$
$$\psi(x) = (A_Re^{iq_xx} + A_Le^{-iq_xx})\,|{\uparrow}\rangle + (B_Re^{iq_xx} + B_Le^{-iq_xx})\,|{\downarrow}\rangle \ \ \text{for } 0 < x < L,$$
$$\psi(x) = t_{\uparrow\downarrow}e^{ik_{x\uparrow}x}\,|{\uparrow_\theta}\rangle + t_{\downarrow\downarrow}e^{ik_{x\downarrow}x}\,|{\downarrow_\theta}\rangle \ \ \text{for } x > L, \quad (15)$$
where the wavevectors in the two AMs are given by Eq. (9) and $q_xa = \sqrt{(E + \mu)/t - k_{y\downarrow}^2}$. The scattering coefficients $A_R$, $B_R$, $A_L$, $B_L$, $r_{\sigma'\sigma}$ and $t_{\sigma'\sigma}$ can be found using the boundary conditions in Eq. (13). The charge and spin current densities on the two sides of the junction due to this wavefunction are given by Eq. (10), where $J^{s-}_{\downarrow}(\alpha)$ and $J^{s+}_{\downarrow}(\alpha)$ are the spin currents in the left and right AM, respectively. The differential charge and spin conductivities in this system are given by Eq. (11).

B. Results

FIG. 2. (a) Schematic of the system. Fermi surfaces in each region are indicated by curves. The Néel vectors of the left AM and right AM differ by an angle θ. Differential conductivity (b) versus $L$ in units of $a$, keeping $\mu = t_0$, and (c) versus $\mu$ in units of $t_0$, keeping $L = 20$, for two values of θ (θ = 0 and π) indicated in the legend. Other parameters $q_0 = 1/a$, $c = 1$, $t_J = 0.2t_0$, $E = t_0$ are the same for (b) and (c).

Figure 2(b) shows the variation of the total conductivity with the length $L$ of the NM at θ = 0 (blue solid line) and θ = π (red dotted line). The conductivity shows an oscillatory dependence on $L$: the oscillation amplitude is initially large at small $L$ and then decreases, eventually stabilizing at an almost constant amplitude for larger $L$. This behaviour arises because, for $k_y$ away from 0, the down-spin electrons from the AM do not have plane-wave states in the NM [see Fig. 2(a)], making those states evanescent. For small $L$, electrons are transmitted through evanescent waves and contribute to the total conductivity. However, as $L$ increases, the contribution from spin-down electrons with large $k_y$ vanishes.

Such oscillations originate from quantum-interference effects: multiple reflections within the finite NM segment generate constructive or destructive interference determined by the accumulated phase, known as Fabry-Pérot interference (FPI) [26, 27]. As $L$ increases, the phase acquired by the electron wavefunction varies, leading to alternating constructive and destructive interference. The FPI condition is $\Delta L = \pi/q$, where $\Delta L$ is the interval between successive peaks and $q$ is the wavenumber of the interfering mode in the NM. This condition is obtained by considering normally incident electrons, which give the dominant contribution to the conductivity. $\Delta L$ calculated from this condition is $2.221a$, in agreement with the value of $2.212a$ obtained numerically from the results in Fig. 2(b).
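Both Fabry-Pérot spacing estimates quoted for Fig. 2 can be reproduced in a few lines (the Δq value anticipates the discussion of Fig. 2(c) just below); parameters are those of the figure caption:

```python
import numpy as np

# Peak spacing in L at fixed energy: Delta_L = pi/q with qa = sqrt((E+mu)/t0),
# at normal incidence; E = t0 and mu = t0 as in Fig. 2(b).
qa = np.sqrt((1.0 + 1.0) / 1.0)
print(np.pi / qa)        # 2.2214... in units of a  (quoted: 2.221a)

# Peak spacing in q at fixed length: Delta_q = pi/L with L = 20a, Fig. 2(c).
print(np.pi / 20.0)      # 0.15708... in units of 1/a  (quoted: 0.157/a)
```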
While the oscillatory dependence on the NM length is present for both spin configurations due to quantum interference, the difference in overall magnitude between the two spin orientations originates from spin-dependent tunneling at the AM/NM interfaces. The Fermi surfaces show that transverse-momentum matching is better for up-spin electrons than for ↓-electrons at the AM/NM interface: every $k_y$ of an up-spin electron matches a $k_y$ in the NM, but for down-spin electrons this is not the case. For θ = 0, the spin polarizations of the left and right AMs are collinear, which maximizes the effective overlap (both up and down) between the transmitted and incident spin states across the junction; in this case most of the current is carried by ↑-electrons, leading to a higher conductivity. In contrast, for θ = π, the spin orientations of the two AMs are antiparallel, so most of the current is carried by down-spin electrons. Since fewer $k_y$ states are available for down-spin electrons to carry current, we observe a slightly lower conductivity.

Figure 2(c) presents the variation of the total conductivity versus the chemical potential μ of the normal metal at two values of θ. Similar to the case of varying NM length, the conductivity for both θ = 0 and θ = π exhibits oscillations due to FPI. However, the oscillation amplitude here remains nearly constant after an initial increase. The initial increase in conductivity is due to the increase in the size of the Fermi surface of the NM, which accommodates more electrons from the AM. The FPI condition at fixed length $L$ is $\Delta q = \pi/L$, where $\Delta q$ is the interval between successive peaks at $\mu_1$ and $\mu_2$. $\Delta q$ calculated from this condition is $0.157/a$, which closely matches the value $0.148/a$ obtained from the results in Fig. 2(c) via $\Delta q = \sqrt{(E + \mu_2)/t} - \sqrt{(E + \mu_1)/t}$.

V. JUNCTION BETWEEN AMS IN STRONG PHASE

A. Details of the calculation

In the strong phase, electrons of the two spins have different band bottoms: for ↑ the band bottom lies at $k_x = \pi/a$, $k_y = 0$, whereas for ↓ it lies at $k_x = 0$, $k_y = \pm\pi/a$. The Hamiltonian near the band bottom for the strong phase can therefore be written as
$$H_S(\chi) = -\Big[(t_J - t)\Big(\partial_x - \frac{i\pi}{a}\Big)^2 + (t_J + t)\partial_y^2\Big]a^2\,|{\uparrow_\chi}\rangle\langle{\uparrow_\chi}| - \Big[(t_J + t)\partial_x^2 + (t_J - t)\Big(\partial_y \pm \frac{i\pi}{a}\Big)^2\Big]a^2\,|{\downarrow_\chi}\rangle\langle{\downarrow_\chi}|. \quad (16)$$
To the left of $x = 0$ the Hamiltonian is $H_S(\chi = 0)$ and to the right of $x = 0$ it is $H_S(\chi = \theta)$. The dispersions of the altermagnet in the strong phase for up-spin and down-spin electrons are
$$E = (t_J - t)(k_{x\uparrow}a - \pi)^2 + (t_J + t)k_{y\uparrow}^2a^2, \quad (17)$$
$$E = (t_J + t)k_{x\downarrow}^2a^2 + (t_J - t)(k_{y\downarrow}a \mp \pi)^2. \quad (18)$$
The probability current densities for $x < 0$ and $x > 0$ are, respectively,
$$J^-_S = \frac{2}{\hbar}\,\mathrm{Im}\Big[\psi^{\dagger}\Big\{(t_J\sigma_0 - t\sigma_z)\Big(\partial_x - \frac{i\pi}{a}\sigma_{\uparrow}\Big)\psi\Big\}\Big], \quad (19)$$
$$J^+_S = \frac{2}{\hbar}\,\mathrm{Im}\Big[\psi^{\dagger}\Big\{(t_J\sigma_0 - t\sigma_{\theta})\Big(\partial_x - \frac{i\pi}{a}\sigma_{\uparrow\theta}\Big)\psi\Big\}\Big], \quad (20)$$
where $\sigma_{\uparrow} = (\sigma_0 + \sigma_z)/2$ and $\sigma_{\uparrow\theta} = (\sigma_0 + \sigma_{\theta})/2$. Conservation of the probability current across the junction provides the necessary boundary conditions:
$$\psi(0^-) = c\,\psi(0^+), \qquad c\Big[(t_J\sigma_0 - t\sigma_z)\Big(\partial_x\psi - \frac{i\pi}{a}\sigma_{\uparrow}\psi\Big) + q_0\psi\Big]\Big|_{0^-} = \Big[(t_J\sigma_0 - t\sigma_{\theta})\Big(\partial_x\psi - \frac{i\pi}{a}\sigma_{\uparrow\theta}\psi\Big)\Big]\Big|_{0^+}. \quad (21)$$
When an ↑-spin electron with energy $E$ is incident from the left AM, making an angle $\alpha$ with the x-axis at the junction, the scattering wavefunction associated with the electron has the form $\psi = \psi(x)\,e^{ik_{y\uparrow}y}$, where
$$\psi(x) = (e^{ik_{x\uparrow}x} + r_{\uparrow\uparrow}e^{i(2\pi/a - k_{x\uparrow})x})\,|{\uparrow}\rangle + r_{\downarrow\uparrow}e^{-ik_{x\downarrow}x}\,|{\downarrow}\rangle \ \ \text{for } x < 0,$$
$$\psi(x) = t_{\uparrow\uparrow}e^{ik_{x\uparrow}x}\,|{\uparrow_\theta}\rangle + t_{\downarrow\uparrow}e^{ik_{x\downarrow}x}\,|{\downarrow_\theta}\rangle \ \ \text{for } x > 0, \quad (22)$$
with
$$k_{x\uparrow}a = \pi + \sqrt{\frac{E}{t_J - t}}\cos\alpha, \quad k_{y\uparrow}a = \sqrt{\frac{E}{t_J + t}}\sin\alpha, \quad k_{x\downarrow}a = \sqrt{\frac{E}{t + t_J} - \eta\Big(\sqrt{\frac{E}{t + t_J}}\sin\alpha - \pi\,\mathrm{sgn}(\alpha)\Big)^2}. \quad (23)$$
For ↑-spin incident electrons, $k_{x\downarrow}a$ becomes imaginary, which means there is no $k_y$ for the ↓-spin reflected and transmitted electrons that matches the $k_y$ of the incident electrons. The charge and spin current densities corresponding to this wavefunction are
$$J^c_{\uparrow}(\alpha) = \frac{2e}{\hbar}\big[(t_J - t)(k_{x\uparrow} - \pi/a)(1 - |r_{\uparrow\uparrow}|^2) - (t_J + t)\,\mathrm{Re}(k_{x\downarrow})|r_{\downarrow\uparrow}|^2\big],$$
$$J^{s-}_{\uparrow}(\alpha) = (t_J - t)(k_{x\uparrow} - \pi/a)(1 - |r_{\uparrow\uparrow}|^2) + (t_J + t)\,\mathrm{Re}(k_{x\downarrow})|r_{\downarrow\uparrow}|^2,$$
$$J^{s+}_{\uparrow}(\alpha) = (t_J - t)(k_{x\uparrow} - \pi/a)|t_{\uparrow\uparrow}|^2 - (t_J + t)\,\mathrm{Re}(k_{x\downarrow})|t_{\downarrow\uparrow}|^2. \quad (24)$$
Similarly, the scattering eigenfunction for a ↓-electron with energy $E$, incident at an angle $\alpha$ with the x-axis, takes the form $\psi = \psi(x)\,e^{ik_{y\downarrow}y}$, with
$$\psi(x) = (e^{ik_{x\downarrow}x} + r_{\downarrow\downarrow}e^{-ik_{x\downarrow}x})\,|{\downarrow}\rangle + r_{\uparrow\downarrow}e^{i(2\pi/a - k_{x\uparrow})x}\,|{\uparrow}\rangle \ \ \text{for } x < 0,$$
$$\psi(x) = t_{\uparrow\downarrow}e^{ik_{x\uparrow}x}\,|{\uparrow_\theta}\rangle + t_{\downarrow\downarrow}e^{ik_{x\downarrow}x}\,|{\downarrow_\theta}\rangle \ \ \text{for } x > 0, \quad (25)$$
where
$$k_{x\downarrow}a = \sqrt{\frac{E}{t_J + t}}\cos\alpha, \quad k_{x\uparrow}a = \pi + \sqrt{\frac{E}{t_J - t} + \frac{1}{\eta}k_{y\downarrow}^2a^2}, \quad k_{y\downarrow}a = -\mathrm{sgn}(\alpha)\,\pi + \sqrt{\frac{E}{t_J - t}}\sin\alpha. \quad (26)$$
Similar to the case above, here $k_{x\uparrow}$ becomes imaginary for ↓-spin incidence, with the same physical interpretation but with the spins interchanged.
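The evanescence of the spin-flipped channel follows from simple kinematics: the transverse-momentum windows allowed to the two spins at energy E, read off from Eqs. (17)-(18), do not overlap. A short check with illustrative strong-phase parameters (our own script):

```python
import numpy as np

# Illustrative strong-phase parameters (tJ > t): tJ = 1, t = 0.2*tJ, E = tJ
tJ, t, E = 1.0, 0.2, 1.0

# Transverse momenta (units of 1/a) allowed at energy E:
# up-spin (Eq. 17):   |ky| <= sqrt(E/(tJ+t)), centred on ky = 0
# down-spin (Eq. 18): ky within sqrt(E/(tJ-t)) of +/- pi
ky_up_max = np.sqrt(E / (tJ + t))            # ~0.91
ky_dn_min = np.pi - np.sqrt(E / (tJ - t))    # ~2.02
print(ky_up_max, ky_dn_min, ky_up_max < ky_dn_min)
# True: the windows are disjoint, so the spin-flipped wave is evanescent
```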
The spin current densities on the two sides of the junction are given below. Since the charge current is conserved across the junction, only the charge current in the left AM is shown:
$$J^c_{\downarrow}(\alpha) = \frac{2e}{\hbar}\big[(t_J - t)\,\mathrm{Re}(\pi/a - k_{x\uparrow})|r_{\uparrow\downarrow}|^2 + (t_J + t)k_{x\downarrow}(1 - |r_{\downarrow\downarrow}|^2)\big],$$
$$J^{s-}_{\downarrow}(\alpha) = (t_J - t)\,\mathrm{Re}(\pi/a - k_{x\uparrow})|r_{\uparrow\downarrow}|^2 - (t_J + t)k_{x\downarrow}(1 - |r_{\downarrow\downarrow}|^2),$$
$$J^{s+}_{\downarrow}(\alpha) = (t_J - t)\,\mathrm{Re}(k_{x\uparrow} - \pi/a)|t_{\uparrow\downarrow}|^2 - (t_J + t)k_{x\downarrow}|t_{\downarrow\downarrow}|^2. \quad (27)$$
The differential charge and spin conductivities are given by Eq. (11).

B. Results

FIG. 3. (a) Schematic of the system. The curves indicate the Fermi surfaces. The Néel vectors of the left AM and right AM differ by an angle θ. (b) Differential charge conductivity versus θ and (c) spin conductivity versus θ for different values of $t_J$ as indicated in the legend. Other parameters: $q_0 = 1/a$, $c = 1$, $E = t_J$. (d) Fermi surfaces of up- and down-spin electrons for different values of $t$.

Figure 3(b) shows the variation of the charge conductivity with the spin polarization angle θ for different values of $t$. In the strong phase, the conductivity is dominated by a single spin channel, and electron transport occurs only when the transverse momentum matches across the two AM regions. As shown in Fig. 3(a), an electron incident with a given spin is transmitted into the right AM with the same spin, so there is no down-spin current for up-spin incidence and vice versa. The conductivity is maximum at θ = 0 and decreases monotonically with increasing θ, vanishing at θ = π. This behavior arises because for θ = 0 the spin orientations in both regions are collinear, enabling large transmission. With increasing θ, the spin overlap is continuously reduced, suppressing the conductivity. At θ = π, the spins are fully antiparallel, eliminating the overlap and hence blocking transmission into the right AM, resulting in zero conductivity.

Figure 3(c) illustrates the variation of the spin conductivity with the spin polarization angle θ for different values of the hopping parameter $t$ on the two sides of the interface. The spin conductivities on either side of the junction coincide for identical parameter values and decrease progressively to zero as θ approaches π. This behavior arises from the spin-dependent Fermi surface of the altermagnet. For θ = 0, the spin polarizations in the two AM regions are collinear, allowing efficient transmission of spin-polarized electrons and yielding maximum spin conductivity. As θ increases, the relative spin alignment between the two AMs is reduced, causing a mismatch between spin states and thereby suppressing the spin-resolved conductivities. Since there is no down-spin current for up-spin incidence and no up-spin current for down-spin incidence, for a given θ the difference between the up- and down-spin currents remains the same on the two sides of the junction. The complete overlap thus reflects the high symmetry of the AM Fermi surface, which ensures equivalent transport characteristics on both sides.

The spin conductivity, defined as the difference between the up-spin and down-spin currents, can take negative values, which shows that the contribution to the current from down-spin electrons exceeds that from up-spin electrons. This is explained by Fig. 3(d), where the Fermi surfaces for up and down spin are drawn for different values of $t$. It is clear from the figure that in the region $0 < t < t_J$, as $t$ grows from 0 to $t_J$, the Fermi surface for up-spin electrons shrinks along $k_y$, occupying fewer transverse-momentum states, whereas the Fermi surface for down-spin electrons is elongated along $k_y$, occupying more $k_y$ states, resulting in a higher down-spin conductivity for $t \neq 0$.
When $t = 0$, the spin conductivity vanishes completely due to equal contributions of up- and down-spin electrons, because at this value of $t$ the Fermi surfaces of the two spins are exactly identical and occupy the same number of $k_y$ states. Thus, the sign of the spin conductivity reflects the imbalance between the spin-resolved transport channels.

VI. JUNCTION BETWEEN AM/NM/AM IN STRONG PHASE

A. Details of the calculation

Now a normal metal (NM) of length $L$ is sandwiched between the two AMs, whose Néel vectors differ by χ. The Hamiltonian in the region to the left of $x = 0$ is $H_S(\chi = 0)$, whereas to the right of $x = L$ it is $H_S(\chi = \theta)$. In the region $0 < x < L$, the Hamiltonian of the NM is
$$H_{nm} = -t_0\sigma_0a^2(\partial_x^2 + \partial_y^2) - \mu \quad \text{for } 0 < x < L. \quad (28)$$
Conservation of the probability current density at the two interfaces $x = 0$ and $x = L$ results in the boundary conditions
$$\psi(0^-) = c\,\psi(0^+), \qquad c\big[(t_J\sigma_0 - t\sigma_z)(a\partial_x - i\pi\sigma_{\uparrow})\psi\big]\big|_{0^-} = \big[t_0\sigma_0(a\partial_x - aq_0)\psi\big]\big|_{0^+},$$
$$\psi(L^-) = c\,\psi(L^+), \qquad c\big[t_0(\sigma_0 a\partial_x + aq_0)\psi\big]\big|_{L^-} = \big[(t_J\sigma_0 - t\sigma_{\theta})(a\partial_x - i\pi\sigma_{\uparrow\theta})\psi\big]\big|_{L^+}. \quad (29)$$
A ↑-spin electron with energy $E$, incident from the left AM at an angle $\alpha$ relative to the x-axis, is described by a scattering wavefunction of the form $\psi(x)e^{ik_{y\uparrow}y}$, where
$$\psi(x) = (e^{ik_{x\uparrow}x} + r_{\uparrow\uparrow}e^{i(2\pi/a - k_{x\uparrow})x})\,|{\uparrow}\rangle + r_{\downarrow\uparrow}e^{-ik_{x\downarrow}x}\,|{\downarrow}\rangle \ \ \text{for } x < 0,$$
$$\psi(x) = (A_Re^{iq_xx} + A_Le^{-iq_xx})\,|{\uparrow}\rangle + (B_Re^{iq_xx} + B_Le^{-iq_xx})\,|{\downarrow}\rangle \ \ \text{for } 0 < x < L,$$
$$\psi(x) = t_{\uparrow\uparrow}e^{ik_{x\uparrow}x}\,|{\uparrow_\theta}\rangle + t_{\downarrow\uparrow}e^{ik_{x\downarrow}x}\,|{\downarrow_\theta}\rangle \ \ \text{for } x > L, \quad (30)$$
where the expressions for $k_{x\uparrow}a$, $k_{x\downarrow}a$ and $k_{y\uparrow}$ are given by Eq. (23) and $q_xa = \sqrt{(E + \mu)/t - k_{y\uparrow}^2}$. The scattering coefficients $A_R$, $B_R$, $A_L$, $B_L$, $r_{\sigma'\sigma}$ and $t_{\sigma'\sigma}$ can be found using the boundary conditions in Eq. (29). The charge and spin current densities on the two sides of the junction due to this wavefunction are given by Eq. (24), where $J^{s-}_{\uparrow}(\alpha)$ and $J^{s+}_{\uparrow}(\alpha)$ are the spin currents in the left and right AM, respectively.

Similarly, a ↓-spin electron with energy $E$, incident from the left AM at an angle $\alpha$ with respect to the x-axis, is described by a scattering wavefunction of the form $\psi(x)e^{ik_{y\downarrow}y}$, where
$$\psi(x) = (e^{ik_{x\downarrow}x} + r_{\downarrow\downarrow}e^{-ik_{x\downarrow}x})\,|{\downarrow}\rangle + r_{\uparrow\downarrow}e^{i(2\pi/a - k_{x\uparrow})x}\,|{\uparrow}\rangle \ \ \text{for } x < 0,$$
$$\psi(x) = (A_Re^{iq_xx} + A_Le^{-iq_xx})\,|{\uparrow}\rangle + (B_Re^{iq_xx} + B_Le^{-iq_xx})\,|{\downarrow}\rangle \ \ \text{for } 0 < x < L,$$
$$\psi(x) = t_{\uparrow\downarrow}e^{ik_{x\uparrow}x}\,|{\uparrow_\theta}\rangle + t_{\downarrow\downarrow}e^{ik_{x\downarrow}x}\,|{\downarrow_\theta}\rangle \ \ \text{for } x > L, \quad (31)$$
where the expressions for $k_{x\downarrow}a$, $k_{x\uparrow}a$ and $k_{y\downarrow}$ are given by Eq. (26) and $q_xa = \sqrt{(E + \mu)/t - k_{y\downarrow}^2}$. Using the boundary conditions in Eq. (29), the scattering coefficients $A_R$, $B_R$, $A_L$, $B_L$, $r_{\sigma'\sigma}$ and $t_{\sigma'\sigma}$ are calculated. The charge and spin current densities on the two sides of the junction due to this wavefunction are given by Eq. (27), where $J^{s-}_{\downarrow}(\alpha)$ and $J^{s+}_{\downarrow}(\alpha)$ are the spin currents in the left and right AM, respectively. The differential charge and spin conductivities in this system are calculated using Eq. (11).

B. Results

Figure 4(b) shows the total charge conductivity with respect to the spin polarization angle θ. In particular, the up-spin conductivity is maximum at θ = 0 and decreases gradually to zero as θ approaches π. The spin-dependent band structure and the transverse-momentum matching across the junction explain this behaviour. At θ = 0, the spin polarization in both AMs is aligned along the same direction, allowing large transmission of up-spin electrons. As θ increases, the spin alignment between the two AM regions deviates, introducing a spin mismatch that reduces the transmission probability of the up-spin electrons. At θ = π, the magnetizations are fully anti-aligned and the right AM supports only down-spin states, thereby completely blocking the transmission of up-spin electrons and resulting in zero conductivity.
On the other hand, the conductivity due to down-spin electrons is zero irrespective of θ because of the transverse-momentum mismatch at the junction between the left AM and the NM, so the down-spin transmission is suppressed exponentially.

Figure 4(c) shows the variation of the total charge conductivity with respect to the chemical potential μ. Here also, we observe that the conductivity due to down-spin incident electrons remains zero for all values of μ, reflecting the same underlying spin-dependent transport mechanism described earlier.

FIG. 4. (a) Schematic of the system. The curves show the Fermi surfaces in each region. The Néel vectors of the left AM and right AM differ by an angle θ. Total charge conductivity (b) versus θ keeping $L = 3a$ and $\mu = t_J$, (c) versus μ keeping $L = 4a$ and θ = 0, and (d) versus $L$ keeping $\mu = 2t_J$ and θ = 0, for ↑ and ↓ spin electrons as indicated in the legend. Other parameters: $q_0 = 1/a$, $c = 1$, $t_0 = 0.1t_J$, $E = t_J$.

In contrast, up-spin incident electrons exhibit a finite conductivity in the right AM region, which shows an oscillatory dependence on μ due to FPI effects in the normal metal. As μ varies, the corresponding change in the Fermi wavevector alters the phase accumulated by the electron wavefunction on back-and-forth reflection within the NM. This results in constructive or destructive interference, thereby causing oscillations in the transmission probability and hence in the conductivity. The FPI condition $\Delta q = \pi/L$ gives $0.72/a$, in comparison to the $0.68/a$ found in Fig. 4(c) from two chemical potentials $\mu_1$ and $\mu_2$ measured at the peaks, via $\Delta q = \sqrt{(E + \mu_2)/t} - \sqrt{(E + \mu_1)/t}$.

Figure 4(d) shows the variation of the total charge conductivity with respect to the length $L$ of the NM. The oscillatory behavior of the conductivity versus $L$ arises from FPI caused by multiple reflections of the electron wavefunction within the NM region. Similar to the cases above, the FPI condition $\Delta L = \pi/q_x$ at normal incidence gives $2.22a$, whereas the value obtained numerically from the results in Fig. 4(d) is $2a$. This discrepancy arises because the Fabry-Pérot interference condition applied above assumes normal incidence, whereas the electrons also propagate at oblique angles, each with its own distinct interference condition. Most of the contribution to the conductivity comes from up-spin transmitted electrons. Down-spin electrons contribute only at small lengths and decay exponentially inside the NM as evanescent modes due to the transverse-momentum mismatch at the interface.

VII. SUMMARY

We have investigated electron transport across junctions involving altermagnets by analyzing two distinct regimes: the strong and weak altermagnetic phases. For each case, we calculate both the charge and spin conductivities as functions of the angle θ between the Néel vectors. In the strong AM regime, the total charge conductivity gradually decreases and eventually vanishes as θ approaches π. In contrast, for the weak AM case, the charge conductivity remains finite even at θ = π, reflecting the partial spin polarization characteristic of the weak phase. Similarly, the spin conductivities on the left and right AM electrodes are identical in the strong AM regime, signifying symmetric spin transport, whereas in the weak AM phase this symmetry is lost except when θ = 0.
To further explore the transport behavior, we introduce a normal metal between the two AM layers and study the variation of the charge conductivity versus the chemical potential and the NM length for both phases. The conductivity exhibits clear Fabry-Pérot-type oscillations originating from quantum interference between multiple reflections at the AM/NM interfaces. The oscillation frequency is noticeably higher in the weak AM case and lower in the strong AM case, consistent with the difference in the effective wavevectors and spin-dependent potentials of the two regimes. Moreover, in the strong AM phase the transport is almost completely dominated by up-spin electrons, demonstrating strong spin selectivity, while in the weak AM phase both spin channels contribute significantly to the overall conductivity. This distinction highlights the tunable nature of spin-dependent transport in altermagnetic materials, where varying the angle between the Néel vectors of two AMs can effectively control the degree of spin filtering and the interference characteristics.

ACKNOWLEDGMENTS

SG, SD and AS thank the Science and Engineering Research Board (now Anusandhan National Research Foundation) Core Research Grant (CRG/2022/004311) for financial support.

[1] L. Šmejkal, A. B. Hellenes, R. González-Hernández, J. Sinova, and T. Jungwirth, Giant and tunneling magnetoresistance in unconventional collinear antiferromagnets with nonrelativistic spin-momentum coupling, Phys. Rev. X 12, 011028 (2022).
[2] L. Šmejkal, J. Sinova, and T. Jungwirth, Beyond conventional ferromagnetism and antiferromagnetism: A phase with nonrelativistic spin and crystal rotation symmetry, Phys. Rev. X 12, 031042 (2022).
[3] L. Šmejkal, J. Sinova, and T. Jungwirth, Emerging research landscape of altermagnetism, Phys. Rev. X 12, 040501 (2022).
[4] R. M. Fernandes, V. S. de Carvalho, T. Birol, and R. G. Pereira, Topological transition from nodal to nodeless Zeeman splitting in altermagnets, Phys. Rev. B 109, 024404 (2024).
[5] X. Zhou, W. Feng, E.-W. Zhang, L. Šmejkal, J. Sinova, Y. Mokrousov, and Y. Yao, Crystal thermal transport in altermagnetic RuO2, Phys. Rev. Lett. 132, 056701 (2024).
[6] H. Yan, X. Zhou, P. Qin, and Z. Liu, Review on spin-split antiferromagnetic spintronics, Appl. Phys. Lett. 124, 030503 (2024).
[7] S. Das, D. Suri, and A. Soori, Transport across junctions of altermagnets with normal metals and ferromagnets, J. Phys.: Condens. Matter 35, 435302 (2023).
[8] M. Papaj, Andreev reflection at the altermagnet-superconductor interface, Phys. Rev. B 108, L060508 (2023).
[9] C. Sun, A. Brataas, and J. Linder, Andreev reflection in altermagnets, Phys. Rev. B 108, 054511 (2023).
[10] S. Das and A. Soori, Crossed Andreev reflection in altermagnets, Phys. Rev. B 109, 245424 (2024).
[11] H. Reichlová, R. L. Seeger, R. González-Hernández, I. Kounta, R. Schlitz, D. Kriegner, P. Ritzinger, M. Lammel, M. Leiviskä, V. Petříček, P. Doležal, E. Schmoranzerová, A. Baďura, A. Thomas, V. Baltz, L. Michez, J. Sinova, S. T. B. Goennenwein, T. Jungwirth, and L. Šmejkal, Macroscopic time reversal symmetry breaking by staggered spin-momentum interaction (2021), arXiv:2012.15651 [cond-mat.mes-hall].
[12] B. Jiang, M. Hu, J. Bai, Z. Song, C. Mu, G. Qu, W. Li, W. Zhu, H. Pi, Z. Wei, Y.-J. Sun, Y. Huang, X. Zheng, Y. Peng, L. He, S. Li, J. Luo, Z. Li, G. Chen, H. Li, H. Weng, and T. Qian, A metallic room-temperature d-wave altermagnet, Nat. Phys. 21, 754 (2025).
[13] O. J. Amin, A. Dal Din, E. Golias, Y. Niu, A. Zakharov, S. C. Fromage, C. J. B. Fields, S. L. Heywood, R. B. Cousins, F. Maccherozzi, J. Krempaský, J. H. Dil, D. Kriegner, B. Kiraly, R. P. Campion, A. W. Rushforth, K. W. Edmonds, S. S. Dhesi, L. Šmejkal, T. Jungwirth, and P. Wadley, Nanoscale imaging and control of altermagnetism in MnTe, Nature 636, 348 (2024).
[14] M. Julliere, Tunneling between ferromagnetic films, Phys. Lett. A 54, 225 (1975).
[15] T. Miyazaki and N. Tezuka, Giant magnetic tunneling effect in Fe/Al2O3/Fe junction, J. Magn. Magn. Mater. 139, L231 (1995).
[16] S. Yuasa and D. D. Djayaprawira, Giant tunnel magnetoresistance in magnetic tunnel junctions with a crystalline MgO(001) barrier, J. Phys. D: Appl. Phys. 40, R337 (2007).
[17] J. S. Moodera, L. R. Kinder, T. M. Wong, and R. Meservey, Large magnetoresistance at room temperature in ferromagnetic thin film tunnel junctions, Phys. Rev. Lett. 74, 3273 (1995).
[18] Y.-Y. Jiang, Z.-A. Wang, K. Samanta, S.-H. Zhang, R.-C. Xiao, W. J. Lu, Y. P. Sun, E. Y. Tsymbal, and D.-F. Shao, Prediction of giant tunneling magnetoresistance in RuO2/TiO2/RuO2 (110) antiferromagnetic tunnel junctions, Phys. Rev. B 108, 174439 (2023).
[19] S. Noh, G.-H. Kim, J. Lee, H. Jung, U. Seo, G. So, J. Lee, S. Lee, M. Park, S. Yang, Y. S. Oh, H. Jin, C. Sohn, and J.-W. Yoo, Tunneling magnetoresistance in altermagnetic RuO2-based magnetic tunnel junctions, Phys. Rev. Lett. 134, 246703 (2025).
[20] K. Samanta, Y.-Y. Jiang, T. R. Paudel, D.-F. Shao, and E. Y. Tsymbal, Tunneling magnetoresistance in magnetic tunnel junctions with a single ferromagnetic electrode, Phys. Rev. B 109, 174407 (2024).
[21] Y.-F. Sun, Y. Mao, Y.-C. Zhuang, and Q.-F. Sun, Tunneling magnetoresistance effect in altermagnets, Phys. Rev. B 112, 094411 (2025).
[22] M. Ezawa, Tunneling magnetoresistance in a junction made of x-wave magnets with x = p, d, f, g, i (2025), arXiv:2509.16867 [cond-mat.mes-hall].
[23] P. Zhang, C.-T. Chou, H. Yun, B. C. McGoldrick, J. T. Hou, K. A. Mkhoyan, and L. Liu, Control of Néel vector with spin-orbit torques in an antiferromagnetic insulator with tilted easy plane, Phys. Rev. Lett. 129, 017203 (2022).
[24] A. Ono and S. Ishihara, Ultrafast reorientation of the Néel vector in antiferromagnetic Dirac semimetals, npj Comput. Mater. 7, 171 (2021).
[25] S. Das and A. Soori, Orientation dependent anomalous Hall and spin Hall currents at the junctions of altermagnets with p-wave magnets (2025), arXiv:2508.15723 [cond-mat.mes-hall].
[26] A. Soori, S. Das, and S. Rao, Magnetic-field-induced Fabry-Pérot resonances in helical edge states, Phys. Rev. B 86, 125312 (2012).
[27] W. Liang, M. Bockrath, D. Bozovic, J. H. Hafner, M. Tinkham, and H. Park, Fabry-Perot interference in a nanotube electron waveguide, Nature 411, 665 (2001).
Electron transport in junctions between altermagnets Shubham Ghadigaonkar,1, ∗Sachchidanand Das,1, ∗and Abhiram Soori1, † 1 . C. R. Rao Road, Gachibowli, Hyderabad-500046, India We theoretically investigate electron transport in junctions between the two AMs in strong and weak altermagnetic phases. The charge and spin conductivities are analyzed as functions of angle between the N ́eel vectors of the two AMs θ. In the strong AM regime, the charge conductivity vanishes as θ →π, while in the weak AM phase it remains finite. Introducing a normal metal between two AMs leads to Fabry-P ́erot-type oscillations in charge conductivity. In the strong phase, transport is dominated by up-spin electrons, whereas both spin channels contribute in the weak phase. These results highlight the potential of AM-based heterostructures for spintronic applications, such as spin filters, and quantum interference-based spintronic devices, where tunable spin-dependent transport and interference effects can be utilized in electronic devices. I. INTRODUCTION Altermagnets (AMs), materials having d-wave magnetic order have generated tremendous interest among condensed matter physicists in the past couple of years [1-6]. Characterized by traits of both ferromagnets and antiferromagnets, their net spin polarization is zero. They are known to carry spin current under a voltage bias [7]. Junctions of AMs with normal metals, ferromagnets and superconductors have been studied by many groups [7-10]. The spin that commutes with the Hamiltonian of AM defines the N ́eel vector for the AM. Several candidate materials such as MnTe, Mn5Si3, KV2Se2O exist for AMs [11-13]. Magnetic tunnel junctions (MTJs) are junctions between two ferromagnetic metals separated by a thin insulator, wherein electrons are able to pass through the thin insulating barrier. The relative orientation of the magnetic moments in these layers dictates the possibility of electron tunneling. The parallel alignment of magnetizations in the ferromagnetic layers exhibits a low electrical resistance in the junction, whereas an antiparallel alignment results in a high resistance. This variation in resistance between the two magnetic configurations gives rise to the tunneling magnetoresistance (TMR) effect, which forms the fundamental basis for the operation of MTJbased spintronic devices [14-17]. Magnetic tunnel junctions in altermagnetic RuO2, and single ferromagnetic electrode have been shown to result in a high tunneling magnetoresistance [18-20]. Magnetic tunnel junctions between the altermagnets has just recently been studied where the authors claim to achieve tunneling magnetoresistance over 1000% by just rotating the AM and tuning the altermagnetic strength and Fermi energy [21, 22]. The orientation of the N ́eel vector can be tuned using spin-orbit torques or ultrafast optical excitations [23, 24]. It influences how spin-polarized currents propagate through the materials. This orientation of N ́eel vector modifies the spin-dependent scattering and shows spin ∗These authors contributed equally to this work † Hall response. A recent study on AM/p-wave magnet junction by rotating the N ́eel vector of the latter relative to that of the former shows similar response [25]. In MnTe, it is found that domains which have different directions for the N ́eel vectors are formed very much like that in ferromagnets [13]. 
Motivated by these developments, we first study electron transport in junctions between two altermagnets having different directions of N ́eel vectors modeled by continuum model. We write down boundary conditions that characterize the junction. Then we calculate charge and spin conductivities using the LandauerButtiker scattering formalism. Also, we sandwich a normal metal (NM) in between the two AMs, having different direction of N ́eel vectors, to study how the inclusion of NM affects the conductivity. II. OUTLINE OF CALCULATION A simple model for AMs consists of a Hamiltonian for electrons with spin- and direction- dependent hopping in a two-dimensional square lattice. It breaks time reversal symmetry but is invariant under time reversal times π/2-rotation. The Hamiltonian in tight-binding model can be written as sum of two terms: first describing a normal metal and the second describing altermagnetic order. The Hamiltonian can be written as H = -2t(cos kxa+cos kya)σ0 -2tJ(cos kxa-cos kya)σz, where a is the lattice spacing, σj's are Pauli spin matrices. By Taylor expanding such a Hamiltonian in momentum space near the band bottom, its continuum form can be obtained. Depending on the relative strength of the altermagnetic term compared to the normal metal term in the Hamiltonian, the phase of AM can be classified into strong and weak. The strong phase corresponds to tJ > t ≥0 whereas the weak phase corresponds to 0 ≤tJ 0. (5) Here |↑⟩= |↑χ=0⟩, |↓⟩= |↓χ=0⟩|↑χ⟩= [cos χ 2 , sin χ 2 ]T , |↓χ⟩= [-sin χ 2 , cos χ 2 ]T ., ky↑= p E/(t + tJ) sin α kx↑= p E/(t -tJ) cos α, kx↓= q {E/(t + tJ)}(1 -η sin2 α) (6) and η = (t-tJ)/(t+tJ). The scattering coefficients rσ′σ and tσ′σ can be found using the boundary conditions in Eq. (4), where σ =↑and σ′ =↑or ↓. The charge- and spin- current densities in the system due to this wavefunction are given by Jc ↑(α) = 2e ħ (t -tJ)kx↑(1 -|r↑↑|2) -(t + tJ)kx↓|r↓↑|2 , Js- ↑(α) = (t -tJ)kx↑(1 -|r↑↑|2) + (t + tJ)kx↓|r↓↑|2, Js+ ↑(α) = (t -tJ)kx↑|t↑↑|2 -(t + tJ)kx↓|t↓↑|2. (7) Note that while the charge current is same in the two AMs, the spin current need not be the same, since none of the Pauli spin matrices commute with Hamiltonians on both the sides. Js+ (Js-) is the spin current density on the right (left) side of the junction. While the spin current density on the left corresponds to σz, the one on the right corresponds to σθ. The scattering eigenfunction corresponding to a ↓- electron incident at an angle of incidence α at energy E has the form ψ(x)eiky↓y where, ψ(x) = (eikx↓x + r↓↓e-ikx↓x) |↓⟩+ r↑↓e-ikx↑x |↑⟩, for x 0. (8) Here, ky↓= p E/(t -tJ) sin α, kx↓= p E/(t + tJ) cos α, kx↑= p E/(t -tJ) q 1 -(sin2 α)/η (9) When sin2 α/η > 1, kx↑is imaginary. It is chosen in such a way that the wavefunction for ↑-electron decays away from the junction. Using the boundary conditions shown in Eq. (4), the scattering coefficients rσ′σ and tσ′σ can be calculated, where σ =↓and σ′ =↓or ↑. This wavefunction results in the following charge and spin current densities Jc ↓(α) = 2e ħ (t + tJ)kx↓(1 -|r↓↓|2) -(t -tJ)Re[kx↑]|r↑↓|2 , Js- ↓(α) = -(t + tJ)kx↓(1 -|r↓↓|2) -(t -tJ)Re[kx↑]|r↑↓|2, Js+ ↓(α) = (t -tJ)Re[kx↑]|t↑↓|2 -(t + tJ)kx↓|t↓↓|2 (10) The differential charge and the spin conductivities are given by G = e 8π2p t2 -t2 J Z π/2 -π/2 dα[Jc ↑(α) + Jc ↓(α)], Gs± = e 8π2p t2 -t2 J Z π/2 -π/2 dα[Js± ↑(α) + Js± ↓(α)](11) 3 B. Results In Fig. 1(b,c), we plot the conductivities versus the angle between the N ́eel vectors of the two AMs. In Fig. 
1(b), the charge conductivity is shown wavevector the angle θ for different ratios of tJ/t, with parameters q0 = 1/a, c = 1, and E = t. For larger values of the altermagnetic strength tJ, the conductivity exhibits significant variation. We see [Fig. 1(b)] that for θ = 0, the charge conductivity is maximum and then decreases monotonically as θ increases in the range [0, π]. As θ deviates away from 0, the orientations of the spins corresponding to the same values of ⃗k on either sides of the junction is different leading to reduced conductivity. Interestingly, the conductivity at θ = 0 increases with increasing value of tJ. This feature can be understood by taking the limit where all the reflection coefficients are zero and using Eq. (11). As tJ decreases, this variation becomes progresFIG. 1. (a) Schematic of the junction with the Fermi surfaces on each region. The N ́eel vectors on either sides of the junction differ by an angle θ. (b) Differential charge conductivity versus θ for different values of tJ/t indicated in the legend. Other parameters: q0 = 1/a, c = 1, E = t (c) Spin conductivity versus θ in the left and right AM for q0 = 0, c = 1.2, tJ = 0.2 and E = t. sively weaker, and at tJ = 0, the conductivity remains constant across all angles. This behaviour corresponds to the complete absence of altermagnetic effects, rendering the system equivalent to a normal metal. Fig. 1(c) illustrates the variation of spin conductivities in the left and right AM regions wavevector θ, for parameters q0 = 0, c = 1.2, tJ = 0.2, and E = t. The spin conductivity, defined as the difference between the upspin and down-spin conductivities, has the same value in both regions at θ = 0. It gradually increases with θ in both the regions. In the left region, the spin current remains mostly negative, while in the right region it stays entirely negative across all θ, indicating that down-spin contributions dominate over up-spins. This is because, the down-spin Fermi surface is elongated along y and there are more states that are closer to normal incidence than those for the up-spin incidence wherein the Fermi surface is elongated along x. At θ = π, the spin conductivities in the left and right regions shift symmetrically above and below zero. This can be understood as follows. The left and right AM regions are structurally identical, differing only in the orientation of their magnetization by an angle θ. At θ = 0, both regions have their magnetization aligned in the same direction, leading to identical up-spin and down-spin transport channels. As a result, the difference between these channels-and hence the spin conductivity-is the same on both sides. However, at θ = π, the magnetization in the right AM is completely reversed relative to the left AM, effectively corresponding to a spin inversion. In this case, the up-spins in the left region correspond to the down-spins in the right region and vice versa, producing spin conductivities of equal magnitude but opposite sign. IV. JUNCTION BETWEEN AM/NM/AM IN WEAK PHASE A. Details of the calculation Now a normal metal (NM) of length L is sandwitched between the two AMs having different N ́eel vectors differing by χ. The region to the left of x L is HW (χ = θ). In the region 0 L. (14) where the wavevectors in the two AMs are given by the Eq.(6) and qxa = q (E + μ)/t -k2 y↑. The scattering coefficients AR, BR, AL, BL, rσ′σ and tσ′σ can be found using the boundary conditions in Eq. (13). 
The charge- and spin- current densities on two sides of the junction due to this wavefunction are given by Eq. (7) where Js- ↑(α) and Js+ ↑(α) are the spin current for the left and right AM respectively. Now when a down-spin electron is incident from the left AM, the scattering eigenfunction of it with energy E, incident at an angle α, takes the form ψ(x)eiky↓y ψ(x) = (eikx↓x + r↓↓e-ikx↓x) |↓⟩+ r↑↓e-ikx↑x |↑⟩, for x L. (15) here the wavevectors in the two AMs are given by the Eq.(9) and qx = q (E + μ)/t -k2 y↓. The scattering coefficients AR, BR, AL, BL, rσ′σ and tσ′σ can be found using the boundary conditions in Eq. (13). The charge- and spin- current densities on two sides of the junction due to this wavefunction are given by Eq. (10), where Js- ↓(α) and Js+ ↓(α) are the spin current for the left and right AM respectively. The differential charge and spin conductivities in this system are given by Eq.(11). B. Results Figure 2(b) shows variation of total conductivity with respect to the Length (L) of the NM at θ = 0 (blue solid line) and at θ = π (red dotted line).The conductivity shows an oscillatory dependence on the length L, where the oscillation amplitude initially is large at lower L and then decreases, eventually stabilizing with an almost constant amplitude for larger L. This behaviour arises because, for ky away from 0, the down spin electrons from AM do not have plane wave states on the NM [see Fig. 1(a)] making the states evanescent. For small L, electrons are transmitted through evanescent waves and FIG. 2. (a) Schematic of the system. Fermi surfaces in each region are indicated by curves. The N ́eel vectors on the left AM and right AM differ by an angle θ. Differential conductivity (b) versus L in the units of a keeping μ = t0, (c) versus μ in the units of t0 keeping L = 20 for two different values of θ i.e θ = 0 and π indicated in the legend. Other parameters: q0 = 1/a, c = 1,tJ = 0.2t0,E = t0 are same for (b) and (c) contribute to the total conductivity. However, as L increases, the contribution from spin-down electrons with large ky vanishes. Such oscillations originate from quantum-interference effects: multiple reflections within the finite NM segment generate constructive or destructive interference determined by the accumulated phase, known as Fabry-P ́erot interference(FPI) [26, 27] . As L increases, the phase acquired by the electron wavefunction varies, leading to alternating constructive and destructive interference. FPI condition given by ∆L = π/q, where ∆L is the interval between successive peaks and q is the wave number of the interfering mode in the NM. This condition is obtained by considering the normal incident electrons which are the dominant contributions to conductivity. ∆L calculated by the above condition is 2.221a, in agreement with the numerically obtained value of 2.212a obtained in the results in Fig. 2(b). While the oscillatory dependence on NM length is present for both spin configurations due to quantum interference, the relative amplitude difference in overall magnitude between the two spin orientations originates from the spin-dependent tunneling at the AM/NM interfaces. Fermi surface shows that transverse momentum matching is higher for up-spin electrons as compared to the ↓electrons at AM/NM interface as all the ky for up-spin electrons matches with the ky for NM, but for down-spin it is not the case. 
So, for θ = 0, the spin polarizations of the left and right AM are collinear, which 5 maximizes the effective overlap (both up and down) between the transmitted and incident spin states across the junction. For this case most of the current is carried by ↑electrons leading to higher conductivity. In contrast, for θ = π, the spin orientations of the two AMs are antiparallel. So most of the current is carried by the downspin electrons. Since there are less number of ky states for down-spin electrons to carry current, we observe a slightly lower conductivity. Figure 2(c) presents the variation of the total conductivity wavevector the chemical potential (μ) of the normal metal at two different values of θ. Similar to the case of varying NM length, the conductivity for both θ = 0 and θ = π exhibits oscillations due to FPI. However, the oscillation amplitude here remains nearly constant after an initial increase. The initial increase in conductivity is due to the increase in the size of the Fermi surface of the NM, which accommodates more electrons from AM. The FPI condition is ∆q = π/L at fixed length L, where ∆q is the interval between the successive peaks at μ1 and μ2. ∆q calculated by this condition is 0.157/a which closely matches with 0.148/a that is obtained in the results in Fig. 2(c) by ∆q = p (E + μ2)/t - p (E + μ1)/t. V. JUNCTION BETWEEN AMS IN STRONG PHASE A. Details of the calculation In the strong phase, electrons of the two spins have different band bottoms. For ↑-the band bottom lies at kx = π/a, ky = 0 whereas for ↓- band bottom lies at kx = 0, ky = π/a. So the Hamiltonian near the band bottom for the strong phase can be written as HS(χ) = - (tJ -t) ∂x -iπ a 2 + (tJ + t)∂2 y a2 |↑χ⟩⟨↑χ| - (tJ + t)∂2 x + (tJ -t) ∂y ± iπ a 2 a2 |↓χ⟩⟨↓χ| , (16) To the left of x = 0, the Hamiltonian is HS(χ = 0) and to the right of x = 0 the Hamiltonian is HS(χ = θ). Dispersion of altermagnet in the strong phase for up-spin and down-spin electrons are given byE = (tJ -t)(kx↑a -π)2 + (tJ + t)k2 y↑a2 (17) E = (tJ + t)k2 x↓a2 + (tJ -t)(ky↓a ∓π)2 (18) Probability current density for x 0 respectively is given byJ- S = 2 ħIm " ψ†n (tJσ0 -t0σz)(∂x -iπ a σ↑)ψ o# (19) J+ S = 2 ħIm " ψ†n (tJσ0 -t0σθ)(∂x -iπ a σ↑θ)ψ o# (20) where σ↑= (σ0 + σz)/2, σ↑θ = (σ0 + σθ)/2. The conservation of probability current across the junction provides the necessary boundary conditions and is given below - ψ(0-) = cψ(0+) c h tJσ0 -t0σz)(∂xψ -iπ a σ↑ψ + q0ψ i 0- = h tJσ0 -t0σθ)(∂xψ -iπ a σ↑θψ i 0+ (21) When an ↑-spin electron with energy E is incident from the left AM making an angle α with x-axis at the junction, the scattering wavefunction associated with the electron has the form ψ = ψ(x) eiky↑y, where ψ(x) = (eikx↑x + r↑↑ei(2π/a-kx↑)x) |↑⟩+ r↓↑e-ikx↓x |↓⟩, for x 0. (22) where kx↑a = π + r E tJ -t0 cos α, ky↑a = r E tJ + t0 sin α and kx↓a = s E t + tJ -η r E t + tJ sin α -π sgn(α) 2 (23) For the ↑-spin incident electrons, kx↓a becomes imaginary, that means there is no ky for the ↓-spin reflected and transmitted electrons which matches with the ky of incident electrons. 
The charge and spin current densities corresponding to this wavefunction are given by - Jc ↑(α) = 2e ħ h (tJ -t)(kx↑-π/a)(1 -|r↑↑|2) -(tJ + t)Re(kx↓)|r↓↑|2i , Js- ↑(α) = (tJ -t)(kx↑-π/a)(1 -|r↑↑|2) +(tJ + t)Re(kx↓)|r↓↑|2, Js+ ↑(α) = (tJ -t)(kx↑-π/a)|t↑↑|2 -(tJ + t)Re(kx↓)|t↓↑|2 (24) Similarly scattering eigenfunction for a ↓electron, having energy E, being incident at an angle α with x-axis takes the form ψ = ψ(x) eiky↓y, ψ(x) = (eikx↓x + r↓↓e-ikx↓x) |↓⟩+ r↑↓ei(2π/a-kx↑)x |↑⟩, for x 0. (25) where kx↓a = r E tJ + t cos α, kx↑a = π + s E tJ -t + 1 η k2 y↓a2 ky↓a = -sgn(α) π a + r E tJ -t sin α (26) 6 Similar to the above case, here kx↑becomes imgaginary for ↓-spin incidence exhibiting the same physical interpretation but with opposite spin. Spin current densities across both the sides of the junction is given below. Since charge current is conserved on both the regions across junction, only charge current on the left AM is shown below. Jc ↓(α) = 2e ħ h (tJ -t) Re(π/a -kx↑)|r↑↓|2) +(tJ + t)kx↓(1 -|r↓↓|2) i , Js- ↓(α) = (tJ -t) Re(π/a -kx↑)|r↑↓|2 -(tJ + t)kx↓(1 -|r↓↓|2), Js+ ↓(α) = (tJ -t)Re(kx↑-π/a)|t↑↓|2 -(tJ + t)kx↓|t↓↓|2 (27) The differential charge and the spin conductivities are given by Eqn. (11) B. Results Figure 3(b) shows variation of charge conductivity with the spin polarization angle θ for different values of t. In the strong phase, conductivity is dominated by a single spin channel, and electron transport occurs only when the transverse momentum matches across the two AM regions. As shown in Fig. 3(a), an electron incident with a given spin is transmitted into the right AM with the same spin. So, there is no down-spin current for the up-spin electron incidence and vice versa. The conductivity is maximum at θ = 0 and decreases monotonically with increasing θ, vanishing at θ = π. This behavior arises because, for θ = 0, the spin orientations in both regions are collinear, enabling large transmission. With increasing θ, the spin overlap is continuously reduced, thus suppressing the conductivity. At θ = π, the spins are fully antiparallel, eliminating overlap and hence blocking transmission into the right AM, resulting in zero conductivity. Figure 3(c) illustrates the variation of spin conductivity with the spin polarization angle θ for different values of the hopping parameter t on both sides of the interface. The spin conductivities corresponding to either side of the junction coincide for identical parameter values and decrease progressively to zero as θ approaches π. This behavior arises from the spin-dependent Fermi surface of the altermagnet. For θ = 0, the spin polarizations in the two AM regions are collinear, allowing efficient transmission of spin-polarized electrons and yielding maximum spin conductivity. As θ increases, the relative spin alignment between the two AMs is reduced, causing a mismatch between spin states and thereby suppressing the spin-resolved conductivities. Since there is no down-spin current for the up-spin incidence and no up-spin current FIG. 3. (a) Schematic of the system. The curves indicate Fermi surface. The N ́eel vectors on the left AM and right AM differ by an angle θ. (b) Differential charge conductivity versus θ and (c) spin conductivity versus θ for different values of tJ as indicated in the legend. Other parameters: q0 = 1/a, c = 1, E = tJ (d) Fermi surface of up- and down-spin electrons for different values of t. 
for the down spin incidence, so for a particular θ difference between the up- and down- currents remains the same on either sides of the junction. The complete overlap thus reflects the high symmetry of the AM Fermi surface, which ensures equivalent transport characteristics on both sides. The spin conductivity, defined as the difference between up-spin and down-spin current, can take negative values which shows that contribution to the current due to down-spin electrons is higher than the up-spin electrons. This is explained by Fig. 3(d) where the Fermi surface for up- and down-spin is drawn for different values of t. It is clear from the above figure that in the 7 region 0 L (30) where the expressions for kx↑a, kx↓a and ky↑are given by equation (23) and qxa = q (E + μ)/t -k2 y↑. The scattering coefficients AR, BR, AL, BL, rσ′σ and tσ′σ can be found using the boundary conditions in Eq. (29). The charge- and spin- current densities on two sides of the junction due to this wavefunction are given by Eq. (24), where Js- ↓(α) and Js+ ↓(α) are the spin current for the left and right AM respectively. Similarly a ↓-spin electron with energy E, when incident from the left AM at an angle α with respect to the x-axis, is described by a scattering wavefunction of the form ψ(x)eiky↓y, where ψ(x) = (eikx↓x + r↓↓e-ikx↓x) |↓⟩+ r↑↓ei(2π/a-kx↑)x |↑⟩, for x L. (31) where the expressions for kx↓a, kx↑a and ky↓are given by equation (26) and qxa = q (E + μ)/t -k2 y↓. Using the boundary conditions in Eq. (29) the scattering coefficients AR, BR, AL, BL, rσ′σ and tσ′σ are calculated. The charge- and spin- current densities on two sides of the junction due to this wavefunction are given by Eq. (27), where Js- ↓(α) and Js+ ↓(α) are the spin currents for the left and right AM respectively. The differential charge and spin conductivities in this system are calculated using Eq.(11). B. Results Figure 4(b) shows total charge conductivity with respect to spin polarization angle θ. In particular, the up-spin conductivity is maximum at θ = 0 and decreases gradually to zero as θ approaches π. The spin-dependent band structure and transverse momentum matching across the junction explains this behaviour. At θ = 0, the spin polarization in both the AMs is aligned along the same direction, allowing large transmission of up-spin electrons. As θ increases, the spin alignment between the two AM regions deviates, introducing a spin mismatch that reduces the probability of transmission of the up-spin electrons. At θ = π, the magnetizations are fully anti-aligned, and the right AM supports only down-spin states, thereby completely blocking the transmission of up-spin electrons and resulting in zero conductivity. On the other hand conductivity due to the down-spin electrons is completely zero irrespective of θ because of mismatching of transverse momentum at the junction of left AM and NM. So the down-spin transmission suppressed exponentially. Figure 4(c) shows variation total charge conductivity with respect to the chemical potential μ. Here also, we observe that the conductivity due to down-spin incident electrons remains zero for all values of μ, reflecting 8 FIG. 4. (a) Schematic of the system. The curves show Fermi surfaces in each region. The N ́eel vectors on the left AM and right AM differ by an angle θ. Total charge conductivity (b) versus θ keeping L = 3a and μ = tJ (c) versus μ keeping L = 4a and θ = 0, and (d) versus L keeping μ = 2tJ and θ = 0 for ↑and ↓spin electrons indicated in the legend. 
Other parameters: q0 = 1/a, c = 1, t0 = 0.1tJ,E = tJ. the same underlying spin-dependent transport mechanism described earlier. In contrast, up-spin incident electrons exhibit finite conductivity in the right AM region, which shows an oscillatory dependence on μ due to FPI effects in the normal metal. As μ varies, the corresponding change in the Fermi wavevector alters the phase accumulation of the electron wavefunction due to back and forth reflection within the NM. This results in either constructive or destructive interference, thereby causing oscillations in the transmission probability and hence in the conductivity. The FPI condition, ∆q = π/L, gives 0.72/a in comparison to 0.68/a that is found in Fig. 4(c) at two different chemical potential μ1 and μ2 measured at the peaks calculated by ∆q = p (E + μ2)/t - p (E + μ1)/t. Figure 4(d) shows variation of total charge conductivity with respect to the length (L) of the NM . The oscillatory behavior of the conductivity wavevector L arises due to FPI caused by multiple reflections of electron wavefunctions within the NM region. Similar to the above cases the FPI condition, ∆L = π/qx, at normal incidence gives 2.22a, whereas the numerically obtained value observed in the results from the Fig. 4(d) is 2a. This discrepancy arises because the Fabry-P ́erot interference condition applied above assumes normal incidence, but actually the electrons also propagate at oblique angles, each with its own distinct interference condition. Most of the contribution to the conductivity is due to the up-spin transmitted electrons. Down-spin electrons contribute only at smaller lengths and decay exponentially inside the NM as evanescent modes due to transverse momentum mismatch at the interface. VII. SUMMARY We have investigated electron transport across junctions involving altermagnets by analyzing two distinct regimes - the strong altermagnetic and weak altermagnetic phases. For each case, we calculate both the charge and spin conductivities as functions of angle between the N ́eel vectors θ. In the strong AM regime, the total charge conductivity gradually decreases and eventually vanishes as θ approaches π. In contrast, for the weak AM case, the charge conductivity remains finite even at θ = π, reflecting the partial spin polarization characteristic of the weak phase. Similarly, the spin conductivity on the left and right AM electrodes are identical in the strong AM regime, signifying symmetric spin transport, whereas in the weak AM phase this symmetry is lost except when θ = 0. To further explore the transport behavior, we introduce a normal metal between the two AM layers and study the variation of charge conductivity versus chemical potential and the NM length for both phases. The conductivity exhibits clear Fabry-P ́erot type oscillations originating from quantum interference between multiple reflections at the AM/NM interfaces. The oscillation frequency is noticeably higher in the weak AM case and lower in the strong AM, consistent with the difference in the effective wavevectors and spin-dependent potentials of the two regimes. Moreover, in the strong AM phase, the transport is almost completely dominated by up-spin electrons, demonstrating strong spin selectivity, while in the weak AM phase, both spin channels contribute significantly to the overall conductivity. 
This distinction highlights the tunable nature of spin-dependent transport in altermagnetic materials, where varying the angle between N ́eel vectors of two AMs can effectively control the degree of spin filtering and interference characteristics. ACKNOWLEDGMENTS SG, SD and AS thank Science and Engineering Research Board (now Anusandhan National Research Foundation) Core Research grant (CRG/2022/004311) for financial support. 9 [1] L. ˇSmejkal, A. B. Hellenes, R. Gonz ́alez-Hern ́andez, J. Sinova, and T. Jungwirth, Giant and tunneling magnetoresistance in unconventional collinear antiferromagnets with nonrelativistic spin-momentum coupling, Phys. Rev. X 12, 011028 (2022). [2] L. ˇSmejkal, J. Sinova, and T. Jungwirth, Beyond conventional ferromagnetism and antiferromagnetism: A phase with nonrelativistic spin and crystal rotation symmetry, Phys. Rev. X 12, 031042 (2022). [3] L. ˇSmejkal, J. Sinova, and T. Jungwirth, Emerging research landscape of altermagnetism, Phys. Rev. X 12, 040501 (2022). [4] R. M. Fernandes, V. S. de Carvalho, T. Birol, and R. G. Pereira, Topological transition from nodal to nodeless Zeeman splitting in altermagnets, Phys. Rev. B 109, 024404 (2024). [5] X. Zhou, W. Feng, E.-W. Zhang, L. ˇSmejkal, J. Sinova, Y. Mokrousov, and Y. Yao, Crystal thermal transport in altermagnetic RuO2, Phys. Rev. Lett. 132, 056701 (2024). [6] H. Yan, X. Zhou, P. Qin, and Z. Liu, Review on spinsplit antiferromagnetic spintronics, App. Phys. Lett. 124, 030503 (2024). [7] S. Das, D. Suri, and A. Soori, Transport across junctions of altermagnets with normal metals and ferromagnets, J. Phys.: Condens. Matter 35, 435302 (2023). [8] M. Papaj, Andreev reflection at the altermagnetsuperconductor interface, Phys. Rev. B 108, L060508 (2023). [9] C. Sun, A. Brataas, and J. Linder, Andreev reflection in altermagnets, Phys. Rev. B 108, 054511 (2023). [10] S. Das and A. Soori, Crossed Andreev reflection in altermagnets, Phys. Rev. B 109, 245424 (2024). [11] H. Reichlov ́a, R. L. Seeger, R. Gonz ́alez-Hern ́andez, I. Kounta, R. Schlitz, D. Kriegner, P. Ritzinger, M. Lammel, M. Leivisk ̈a, V. Petˇr ́ıˇcek, P. Doleˇzal, E. Schmoranzerov ́a, A. Bad'ura, A. Thomas, V. Baltz, L. Michez, J. Sinova, S. T. B. Goennenwein, T. Jungwirth, and L. ˇSmejkal, Macroscopic time reversal symmetry breaking by staggered spin-momentum interaction (2021), . [12] B. Jiang, M. Hu, J. Bai, Z. Song, C. Mu, G. Qu, W. Li, W. Zhu, H. Pi, Z. Wei, Y.-J. Sun, Y. Huang, X. Zheng, Y. Peng, L. He, S. Li, J. Luo, Z. Li, G. Chen, H. Li, H. Weng, and T. Qian, A metallic room-temperature dwave altermagnet, Nat. Phys. 21, 754 (2025). [13] O. J. Amin, A. Dal Din, E. Golias, Y. Niu, A. Zakharov, S. C. Fromage, C. J. B. Fields, S. L. Heywood, R. B. Cousins, F. Maccherozzi, J. Krempask ́y, J. H. Dil, D. Kriegner, B. Kiraly, R. P. Campion, A. W. Rushforth, K. W. Edmonds, S. S. Dhesi, L. ˇSmejkal, T. Jungwirth, and P. Wadley, Nanoscale imaging and control of altermagnetism in MnTe, Nature 636, 348 (2024). [14] M. Julliere, Tunneling between ferromagnetic films, Physics Letters A 54, 225 (1975). [15] T. Miyazaki and N. Tezuka, Giant magnetic tunneling effect in Fe/Al2O3/Fe junction, Journal of Magnetism and Magnetic Materials 139, L231 (1995). [16] S. Yuasa and D. D. Djayaprawira, Giant tunnel magnetoresistance in magnetic tunnel junctions with a crystalline MgO(001) barrier, Journal of Physics D: Applied Physics 40, R337 (2007). [17] J. S. Moodera, L. R. Kinder, T. M. Wong, and R. 
Disorder-assisted Spin-Filtering at Metal/Ferromagnet Interfaces: An Alternative Route to Anisotropic Magnetoresistance

Ivan Iorsh1 and Mikhail Titov2
1Department of Physics, Engineering Physics & Astronomy, Queen's University, Kingston, Canada
2Institute for Molecules and Materials, Radboud University, Nijmegen, The Netherlands
(Dated: October 17, 2025)

We introduce a minimal interface-scattering mechanism that produces a sizable anisotropic magnetoresistance (AMR) in metal/ferromagnet bilayers (e.g., Pt/YIG) without invoking bulk spin or orbital Hall currents. In a δ-layer model with interfacial exchange and Rashba spin-orbit coupling, charge transfer at a high-quality interface creates a spin-selective phase condition (interfacial spin filtering) that suppresses backscattering for one spin projection while enhancing momentum relaxation for the other. The resulting resistance anisotropy peaks at an optimal metal thickness of a few nanometers, quantitatively reproducing the thickness and angular dependences typically attributed to spin Hall magnetoresistance (SMR), as well as its characteristic magnitude. Remarkably, the maximal AMR scales linearly with the smaller of the two coupling strengths, exchange or spin-orbit, highlighting a mechanism fundamentally distinct from SMR. Our scattering formulation maps onto Boltzmann boundary conditions and predicts other clear discriminants from SMR, including strong sensitivity to interfacial charge transfer and disorder.

The spin Hall effect (SHE) and its inverse have become cornerstones of spintronics [1, 2]. In heavy metals with strong spin-orbit coupling, an applied charge current is proposed to generate a transverse spin accumulation detectable via magnetoresistance or spin-torque phenomena. Recently, the orbital Hall effect (OHE) has been advanced as an additional channel for angular momentum transport, with theory and experiments indicating sizable orbital responses in transition metals [3-5]. These frameworks have been widely used to interpret magnetotransport in heavy-metal/ferromagnet heterostructures, often under the umbrella of spin Hall magnetoresistance (SMR) or its orbital analogue.

Despite their success as interpretative tools, both SHE- and OHE-based pictures face conceptual ambiguities. The definition of spin or orbital current operators is not unique, and such current operators neither correspond to conserved quantities nor couple to external fields in the respective effective Hamiltonians [6, 7]. Consequently, their expectation values lack the status of genuine observables, raising the question of whether "spin currents" or "orbital magnetization currents" necessarily provide a physically sound basis for interpreting transport experiments.

A paradigmatic case is anisotropic magnetoresistance (AMR) in metal/ferromagnet bilayers. In Pt/YIG, a system with an insulating ferromagnet, AMR is widely explained as SMR originating from reflection/absorption of spin-Hall currents at the Pt/YIG interface [8-12]. Related interpretations have been extended to metallic stacks (e.g., Co/Pt), where conventional AMR in the ferromagnet can coexist with SMR in the normal metal. More recently, interfacial spin-orbit magnetoresistance and Rashba-Edelstein magnetoresistance (REMR) have highlighted the role of interfacial spin-orbit scattering even without spin-current absorption [13-16].

FIG. 1. Schematic of spin filtering for charge flow parallel to a metal/ferromagnet interface.
Interfacial disorder strongly relaxes the momentum of one spin projection, whereas the other experiences nearly specular reflection (minimal momentum loss). The effect is maximized when one of the spin channels acquires an interface scattering phase equal to π.

OHE-based scenarios, in which orbital transport is converted into spin accumulation at the interface, have likewise been proposed for Pt/Co, NiFe/Pt, and Pt/YIG [3-5, 17-20]. These developments underscore the growing complexity of the field and the increasing difficulty of disentangling spin and orbital degrees of freedom.

In this Letter, we advance an alternative, disorder-assisted spin-filtering mechanism of anisotropic magnetoresistance (SFMR) in metal/ferromagnet bilayers that does not rely on bulk spin or orbital currents. We argue that interfacial charge transfer can form a positively charged δ-layer at a high-quality metal/ferromagnet interface. For suitable charge transfer, this δ-layer, which is sensitive to both exchange and interfacial spin-orbit coupling (ISOC), mediates a resonant interface scattering channel akin to resonant surface/interface states in tunneling anisotropic magnetoresistance [21, 22]. The interference between ordinary non-magnetic impurity scattering and the spin-selective resonant interface channel produces strong spin filtering: electrons with one spin projection experience nearly specular reflection, while those with the opposite projection undergo enhanced momentum relaxation, as shown schematically in Fig. 1. The net outcome is a robust anisotropic magnetoresistance.

A salient prediction of SFMR is that, at optimal conditions, the magnitude of the anisotropic resistance scales linearly with the interface magnetic exchange or with the ISOC strength (whichever scale is smaller). For an ideal interface the effect is also suppressed by the metal disorder parameter 1/E_F τ, where τ is the mean scattering time in the metal bulk and E_F is the Fermi energy. Contrary to intuition, the effect may be enhanced by additional interfacial disorder. This naturally yields an AMR amplitude Δρ/ρ in the 10^{-4}-10^{-3} range observed in heavy-metal films. The resulting signal is strongly enhanced near an optimal charge transfer (for Pt/YIG, approximately one electron per three Pt unit cells) and reproduces SMR/SOMR systematics, including a non-monotonic Pt thickness dependence with an optimum of a few nanometers [10, 11, 23]. Crucially, in our framework the spin current is not a necessary construct [15, 16, 24].

For definiteness, we consider a metal film of thickness W, occupying 0 < z < W, that is placed on a ferromagnetic dielectric for z < 0. We neglect spin-orbit coupling in the metal bulk and adopt the effective interfacial model of Amin and Stiles [16],

    H = \frac{p^2}{2m} + V(\mathbf{r}) + U_0\,\Theta(-z) + \hbar v_F\,\delta(z)\,\Gamma_{\mathbf{p}},    (1)

where v_F is the Fermi velocity, V(\mathbf{r}) denotes non-magnetic disorder, Θ(z) is the Heaviside theta function and U_0 is the spin-independent confinement, while the last term represents an interface potential,

    \Gamma_{\mathbf{p}} = u_0 + \gamma\,\boldsymbol{\sigma}\cdot\hat{\mathbf{m}} + \lambda\,\boldsymbol{\sigma}\cdot(\hat{\mathbf{p}}\times\hat{\mathbf{z}}),    (2)

where u_0 sets the spin-independent barrier strength defined by the charge transfer across the interface, γ parameterizes the interfacial exchange, and λ quantifies the interfacial Rashba coupling; \hat{\mathbf{m}} is the unit magnetization vector in the ferromagnet, \hat{\mathbf{p}} = \mathbf{p}/mv_F the unit momentum vector, and \hat{\mathbf{z}} the interface normal.
This δ-layer model captures the minimal ingredients of SFMR (charge transfer, interfacial exchange and ISOC) and can be employed straightforwardly for the derivation of scattering-matrix boundary conditions for Boltzmann transport [15, 16, 25-28].

The role of interfacial disorder in SMR was recently examined in Ref. [25]. That work applies Boltzmann kinetic theory to the model of Eq. (1) together with the Okulov-Ustinov boundary conditions at the metal/ferromagnet interface [26]. It shows that disorder scattering at or near the interface can drive a small spin current across the interface that scales as λ^3/E_F τ [25]. Within that framework the magnetoresistance vanishes; invoking the usual SHE-inverse-SHE logic [2] would then suggest an MR of order λ^6/(E_F τ)^2, an exceptionally small effect. Below we re-examine the same model and evaluate the spin current slightly away from the interface. We find that it scales as λ/E_F τ and, more importantly, that it varies by roughly six orders of magnitude within a Fermi wavelength λ_F of the interface.

In contrast, allowing for a finite interfacial exchange γ in the boundary condition immediately yields a non-zero anisotropic magnetoresistance of the metal film without any appeal to spin transport. Moreover, we identify a resonant regime at 2u_0 ≃ -\sqrt{U_0/E_F}, where the film magnetoresistance reaches its maximum provided γ ≃ λ. In this regime, for thin metal films, the resulting AMR is set by Δρ/ρ ≃ min{|γ|, |λ|}/E_F τ, which naturally matches the observed magnitude of the AMR [8-12]. For thicker films, the interfacial SFMR contribution is additionally suppressed by a factor ℓ/W, where ℓ = v_F τ is the mean free path.

We start by writing down the semiclassical Boltzmann equation (SBE) for the distribution function \hat{f} of electrons in a metal subject to an electric field \mathbf{E} = E\hat{\mathbf{x}} applied along the x direction. Although both electron scattering and propagation in the metal bulk are spin-independent, we must allow for a non-trivial spin structure of the electron distribution due to the spin-selective boundary condition at the metal/ferromagnet interface. The stationary SBE, written in the plane-wave basis with momentum \mathbf{p}, takes the form

    \mathbf{v}_p\cdot\nabla_r \hat{f}(\mathbf{p}, \mathbf{r}) - e\mathbf{E}\cdot\nabla_p \hat{f}(\mathbf{p}, \mathbf{r}) = -\frac{\hat{f}(\mathbf{p}, \mathbf{r}) - f_0(\varepsilon_p)}{\tau},    (3)

where \mathbf{v}_p = \mathbf{p}/m is the electron velocity, \varepsilon_p = p^2/2m the energy dispersion, \hat{f} a 2 × 2 matrix in spin space, τ the scattering time, and f_0(\varepsilon) the angle-averaged equilibrium distribution function, which is spin-independent. We look for the solution of Eq. (3) in the form

    \hat{f}(\mathbf{p}, \mathbf{r}) = f_0(\varepsilon_p) + f_1(\mathbf{p}, z) + \hat{f}_2(\mathbf{p}, z),    (4)

where f_1 and \hat{f}_2 are the non-equilibrium corrections proportional to the applied electric field E. The non-trivial spin structure is contained only in \hat{f}_2, and the condition \hat{f}_2 ≪ f_1 is assumed.

The SBE is supplemented by two boundary conditions: one at the metal/air interface (z = W) and another at the metal/ferromagnet interface (z = 0). We express f_{1,2} as f_{1,2} = f^+_{1,2} Θ(p_z) + f^-_{1,2} Θ(-p_z), distinguishing electrons moving away from and toward the interface, respectively. At z = W we impose the standard Fuchs-Sondheimer boundary condition for a perfectly diffusive surface [29, 30]: f^-_{1,2}(\mathbf{p}, W) = 0. At z = 0, we assume almost specular scattering, where \hat{f}_2 represents deviations from perfect specularity.
In this case, the boundary condition for f_1 is f^-_1(\varepsilon_p, \hat{\mathbf{p}}, 0) = f^+_1(\varepsilon_p, \hat{\mathbf{p}}, 0), while for \hat{f}_2 we adopt the general Okulov-Ustinov boundary condition [31],

    |v_z|\left[\hat{f}^+_2(\mathbf{p}, 0) - \hat{f}^-_2(\mathbf{p}, 0)\right] = \int_{p'_z<0} \hat{W}(\mathbf{p}, \mathbf{p}')\left[f^-_1(\mathbf{p}, 0) - f^-_1(\mathbf{p}', 0)\right] \frac{d^3p'}{(2\pi)^3},    (5)

where v_α is the velocity component, and \hat{W}(\mathbf{p}, \mathbf{p}') is the (yet unspecified) scattering rate. On the right-hand side we have already used that \hat{f}_2 ≪ f_1.

FIG. 2. The AMR constant Δρ/ρ as a function of the charge transfer parameter u_0 for different values of the interface exchange and Rashba couplings, γ and λ. Panel (a): λ = 0.1 with γ = 0.01, 0.05, 0.07, 0.1; panel (b): γ = 0.05 with λ = 0.01, 0.05, 0.1, 0.2. Both figures correspond to a sufficiently narrow metal film with W/ℓ = 0.5. A large AMR signal of both signs is observed for 2u_0 ≃ -\sqrt{U_0/E_F} and γ ≃ λ.

The solution for f_1 reads:

    f_1(\mathbf{p}, z) = eEv_F\tau\cos\phi_p\,\frac{\sqrt{p^2 - p_z^2}}{p}\,\frac{\partial f_0}{\partial\varepsilon_p}\left[\left(1 - e^{\frac{z-W}{|v_z|\tau}}\right)\Theta(-\hat{p}_z) + \left(1 - e^{-\frac{W+z}{v_z\tau}}\right)\Theta(\hat{p}_z)\right].    (6)

The solution for \hat{f}_2 has the form

    \hat{f}_2(\mathbf{p}, z) = \hat{f}_2(\mathbf{p}, 0)\,\Theta(p_z)\,e^{-z/v_z\tau},    (7)

and its boundary value is obtained from

    \hat{f}_2(\mathbf{p}, 0) = \int_{p'_z<0} \frac{d^3p'}{(2\pi)^3}\,\frac{\hat{W}(\mathbf{p}, \mathbf{p}')}{|v_z|}\left[f^-_1(\mathbf{p}, 0) - f^-_1(\mathbf{p}', 0)\right].

The scattering rate is given by the Fermi golden rule,

    \hat{W}(\mathbf{p}, \mathbf{p}') = 2\pi N_i V_0^2\,\hat{s}_p \hat{s}_{p'} \hat{s}^\dagger_{p'} \hat{s}^\dagger_p\,\delta(\varepsilon_p - \varepsilon_{p'}),    (8)

where \hat{s}_p = 1 + \hat{r}_p, N_i is the surface impurity concentration, and V_0 the impurity potential strength. The interface reflection matrix \hat{r}_p appears because specularly reflected electrons near the interface remain coherent with incident ones, leading to interference effects. The spin structure of \hat{W} reflects spin mixing during specular reflection at an interface with spin-orbit coupling.

Once \hat{f}_2 is known, we can compute the two-dimensional charge and spin current densities using the standard thermodynamic definition

    J^\alpha_\beta = \int_0^W dz \int \frac{d^3p}{(2\pi)^3}\, e v_\beta\,\mathrm{Tr}\left[\sigma^\alpha\left(f_1(\mathbf{p}, z) + \hat{f}_2(\mathbf{p}, z)\right)\right],    (9)

where α = 0, x, y, z is the spin index (α = 0 denotes charge), while the index β = x, y, z refers to the velocity directions.

The total current separates into an isotropic, spin-independent part from f_1 and an anisotropic, spin-dependent part from \hat{f}_2. The corresponding conductivity can be decomposed as σ^α_{ββ'} = σ_1 δ_{ββ'} δ_{α0} + σ^α_{2,ββ'}. Assuming σ_1 ≫ \hat{σ}_2, the relative anisotropic contribution to the resistivity is Δρ_{∥(⊥)}(φ)/ρ ≈ -σ_2/σ_1, where ∥ and ⊥ denote directions parallel and perpendicular to the electric field, respectively, and ρ is the isotropic resistivity of the film. The expressions for Δρ_{∥,⊥} yield:

    \frac{\Delta\rho_{\parallel,\perp}}{\rho} = \frac{3(E_F\tau)^{-1}}{8\pi^2\,\Phi(2w)}\,\frac{N_i}{N_0\lambda_F}\int \frac{d^2k\,d^2k'}{qq'}\,\frac{k_{\parallel,\perp}\left(1 - e^{-w/q}\right)}{w}\; S_{k,k'}\left[k\left(1 - e^{-w/q}\right)\cos\phi_k - k'\left(1 - e^{-w/q'}\right)\cos\phi_{k'}\right],    (10)

where N_0 is the bulk impurity concentration, N_i is the two-dimensional surface impurity concentration, w = W/ℓ, k_∥ = p_x/mv_F = k cos φ_k and k_⊥ = p_y/mv_F = k sin φ_k are the dimensionless in-plane momenta, q = \sqrt{1 - k^2}, and

    S_{k,k'} = \mathrm{Tr}[\hat{M}_k \hat{M}_{k'}], \qquad \hat{M}_k = \hat{s}^\dagger_k \hat{s}_k = \hat{s}_k + \hat{s}^\dagger_k.    (11)

The function Φ(2w) describes the classical size effect in thin metallic films [32] (see the Supplementary Information for its explicit form). For w ≫ 1, Φ(2w) ≈ 1, while for w ≪ 1, Φ(2w) ≈ -(3w/4) ln w.
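For readers who want to probe the resonance numerically, the following sketch assembles the interface matrix M̂_k = ŝ_k + ŝ_k† of Eq. (11) from the δ-layer reflection matrix derived in the Supplementary Information (Eqs. (s2), (s8)) and evaluates its eigenvalues for a few values of u_0, including the resonant value 2u_0 = -√(U_0/E_F). All parameter values are illustrative; this is a minimal check of the construction, not the full conductivity calculation of Eq. (10).

import numpy as np

# Assemble M_k = s_k + s_k^dagger from the delta-layer reflection matrix of
# Eq. (s8) and watch its eigenvalues peak at the resonance 2*u0 = -sqrt(U0/EF).
# All parameter values are illustrative.
U0_over_EF = 2.0

def M_matrix(q, phi_k, u0, gamma, lam, phi_m=0.0):
    k = np.sqrt(1.0 - q**2)
    kappa = np.sqrt(U0_over_EF - q**2)
    Z = gamma * np.exp(1j * phi_m) - 1j * lam * k * np.exp(1j * phi_k)
    Gamma = np.array([[u0, np.conj(Z)], [Z, u0]])   # Eq. (s2) with theta = pi/2
    K = (2.0 * Gamma + kappa * np.eye(2)) / q       # Eq. (s8)
    r = np.linalg.solve(np.eye(2) + 1j * K, np.eye(2) - 1j * K)
    s = np.eye(2) + r
    return s + s.conj().T                           # Eq. (11)

for u0 in (-0.3, -0.5 * np.sqrt(U0_over_EF), -1.1):
    eig = np.linalg.eigvalsh(M_matrix(q=0.7, phi_k=0.4, u0=u0,
                                      gamma=0.05, lam=0.1))
    print(f"u0 = {u0:+.3f}: eigenvalues of M = {eig.round(2)}")
# Near the resonance one eigenvalue approaches its maximal value 4: one spin
# channel scatters strongly off interface disorder while the other stays
# closer to specular, which is the spin-filtering regime.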
FIG. 3. AMR constant as a function of film thickness W/ℓ for three values of the interface exchange parameter γ (0.01, 0.015, 0.02) and for E_F/U_0 = 0.8, λ = 0.01, 2u_0 = -\sqrt{U_0/E_F}.

For a perfect interface N_i ≃ N_0 λ_F due to the bulk impurities within a distance λ_F of the interface. The presence of interfacial defects ensures N_i ≳ N_0 λ_F. The small parameter 1/E_F τ indicates that bulk corrections of the same order may not be captured by the SBE. This is, however, of no importance for the AMR, since the bulk Boltzmann equation lacks spin-dependent terms and such "weak localization" corrections can be safely neglected.

The angular dependence of the longitudinal and transverse resistivity components is Δρ_∥ ∼ Δρ cos 2φ and Δρ_⊥ ∼ sin 2φ. In Figs. 2(a,b) we plot Δρ/ρ versus the charge transfer parameter u_0 for various λ and γ. A resonant enhancement of the magnetoresistance appears within a certain range of u_0, originating from the structure of the matrix \hat{M}_k:

    \hat{M}_k = \frac{2(\mathrm{Im}\,K)^2}{\left||Z_k|^2 - K^2\right|^2}\begin{pmatrix} |Z_k|^2 + |K|^2 & -2Z^*_k\,\mathrm{Re}\,K\,\mathrm{Im}\,K \\ -2Z_k\,\mathrm{Re}\,K\,\mathrm{Im}\,K & |Z_k|^2 + |K|^2 \end{pmatrix},    (12)

where Z_k = γe^{iφ} - iλk e^{iφ_k}, with φ and φ_k being the angles of the exchange field and the momentum, respectively, and K = \sqrt{U_0/E_F - q^2} + 2u_0 + iq. For generic u_0 and γ, λ ≪ 1, we have |Z_k|^2 ≪ |K|^2, and the anisotropic term in Eq. (10) scales as λ^2 γ^2. When 2u_0 ≃ -\sqrt{U_0/E_F}, one may find values of q ∈ (0, 1) that yield the condition Re K = ±|Z_k|. In this case, one of the eigenphases of the matrix \hat{r}_k equals π, while the other remains close to zero. As a result, the denominator in Eq. (12) becomes arbitrarily small, leading to a resonant enhancement of the AMR signal. To leading order in λ, γ, we find:

    \frac{\Delta\rho}{\rho} \approx \frac{\min(|\lambda|, |\gamma|)\,\Phi(2w)}{E_F\tau}\left[\frac{1}{5} + \frac{4}{5}\left(\frac{\min(|\lambda|, |\gamma|)}{\max(|\lambda|, |\gamma|)}\right)^2\right],    (13)

where Φ(2w) depends only on the film thickness. At resonance, the AMR is linear in the smaller of the two parameters, γ or λ. This condition corresponds to the situation shown in Fig. 1, where one spin projection becomes immune to impurity scattering, an interface spin-filtering regime determined by min(γ, λ).

In Fig. 3 we plot the dependence of the anisotropic magnetoresistance on the film thickness. It generally reproduces the experimentally observed phenomenology, with a maximum at a thickness W approximately equal to the mean free path ℓ, followed by a W^{-1} decay for larger W. The suppression of the AMR in the limit W/ℓ ≪ 1 is largely due to the strong deviation of Φ(2w) from one, the classical size effect.

FIG. 4. Spatial dependence of the spin current density σ_y v_z for vanishing exchange parameter γ = 0 (λ = 0.02, 0.04, 0.06), u_0 = -0.4, E_F/U_0 = 0.8, and film thickness W/ℓ = 8.

Equation (9) can also be used to define the spin current. Even for γ = 0 (when the AMR is forbidden by symmetry), a finite spin current with polarization along y flows in the z direction, as shown in Fig. 4. The current density demonstrates a strong dependence on the choice of z, indicating that the spin current is not conserved on length scales of the order of the mean free path. At the resonant condition 2u_0 ≃ -\sqrt{U_0/E_F} and away from the interface, the spin current scales as λ ln(λ^{-1}) for λ ≪ 1, while exactly at the interface (z = 0) it scales as λ^3 ln(λ^{-1}). In contrast, the proposed spin-filtering magnetoresistance scales linearly with λ for λ ≪ γ and does not depend on λ for λ ≫ γ. Thus, it cannot be attributed to a combination of spin Hall and inverse spin Hall effects.
The microscopic origin of the exchange parameter γ requires further study. As shown experimentally [8], the AMR persists even when a thin non-magnetic spacer separates the FM and the metal, suggesting that direct exchange is suppressed. An indirect exchange mediated, for example, by the RKKY interaction may therefore play the dominant role.

In our calculations, we assumed no additional interface impurities; scattering arises from bulk impurities in a coherence layer of thickness λ_F near the interface, leading to the smallness parameter 1/E_F τ. Increasing N_i enhances the effect, and in the limit N_i ≃ ℓ^{-2} the factor 1/E_F τ cancels out, leaving min(λ, γ) as the only small parameter. This indicates that careful interface engineering and control over the charge transfer parameter u_0 can, in principle, enhance the AMR by orders of magnitude.

In conclusion, we have proposed a new mechanism for anisotropic magnetoresistance in FM/NM bilayers that does not rely on the spin Hall or inverse spin Hall effect phenomenology but rather on anisotropic interfacial electron scattering. The effect requires no bulk spin-orbit coupling, yet reproduces the correct magnitude, angular dependence, and film-thickness behavior observed experimentally. These results underscore the importance of quantitatively accurate boundary conditions for a proper description of magnetoresistance in FM/NM bilayers.

We appreciate discussions with Joseph Barker, Sebastian Goennenwein, Igor Gornyi and Yuriy Mokrousov.

[1] A. Hoffmann, IEEE Trans. Magn. 49, 5172 (2013).
[2] J. Sinova, S. O. Valenzuela, J. Wunderlich, C. Back, and T. Jungwirth, Rev. Mod. Phys. 87, 1213 (2015).
[3] D. Go, D. Jo, C. Kim, and H.-W. Lee, Phys. Rev. Lett. 121, 086602 (2018).
[4] D. Jo, D. Go, B.-C. Min, and H.-W. Lee, Phys. Rev. B 98, 214405 (2018).
[5] Y.-G. Choi, D. Jo, K.-H. Ko, D. Go, K.-H. Kim, H. G. Park, C. Kim, B.-C. Min, G.-M. Choi, and H.-W. Lee, Nature 619, 52 (2023).
[6] E. I. Rashba, Phys. Rev. B 68, 241315 (2003).
[7] J. Shi, P. Zhang, D. Xiao, and Q. Niu, Phys. Rev. Lett. 96, 076604 (2006).
[8] H. Nakayama, M. Althammer, Y.-T. Chen, et al., Phys. Rev. Lett. 110, 206601 (2013).
[9] Y.-T. Chen, S. Takahashi, H. Nakayama, M. Althammer, S. T. B. Goennenwein, E. Saitoh, and G. E. W. Bauer, Phys. Rev. B 87, 144411 (2013).
[10] M. Althammer, S. Meyer, H. Nakayama, et al., Phys. Rev. B 87, 224401 (2013).
[11] N. Vlietstra, J. Shan, V. Castel, J. Ben Youssef, and B. J. van Wees, Phys. Rev. B 87, 184421 (2013).
[12] Y.-T. Chen, S. Takahashi, H. Nakayama, M. Althammer, S. T. B. Goennenwein, E. Saitoh, and G. E. W. Bauer, J. Phys.: Condens. Matter 28, 103004 (2016).
[13] L. Zhou, H. Song, K. Liu, Z. Luan, P. Wang, L. Sun, S. Jiang, H. Xiang, Y. Chen, J. Du, H. Ding, K. Xia, J. Xiao, and D. Wu, Sci. Adv. 4, eaao3318 (2018).
[14] H. Nakayama, Y. Kanno, H. An, T. Tashiro, S. Haku, A. Nomura, and K. Ando, Phys. Rev. Lett. 117, 116602 (2016).
[15] V. P. Amin and M. D. Stiles, Phys. Rev. B 94, 104419 (2016).
[16] V. P. Amin and M. D. Stiles, Phys. Rev. B 94, 104420 (2016).
[17] S. Ding et al., Phys. Rev. Lett. 128, 067201 (2022).
[18] G. Sala and P. Gambardella, Phys. Rev. Research 4, 033037 (2022).
[19] D. Jo, D. Go, H.-W. Lee, et al., Nat. Commun. 14, 2230 (2023).
[20] L. Salemi, M. Berritta, A. K. Nandy, and P. M. Oppeneer, Nat. Commun. 10, 5381 (2019).
[21] A. N. Chantis, K. D. Belashchenko, E. Y. Tsymbal, and M. van Schilfgaarde, Phys. Rev. Lett. 98, 046601 (2007).
[22] C. Shen, T. Leeney, A. Matos-Abiague, B. Scharf, J. E. Han, and I. Žutić, Phys. Rev. B 102, 045312 (2020).
[23] S. R. Marmion, M. Ali, M. McLaren, D. A. Williams, and B. J. Hickey, Phys. Rev. B 89, 220404(R) (2014).
[24] K. Gupta, R. J. H. Wesselink, R. Liu, Z. Yuan, and P. J. Kelly, Phys. Rev. Lett. 124, 087702 (2020).
[25] A. V. Shumilin and V. V. Kabanov, Phys. Rev. Research 5, 033215 (2023).
[26] V. I. Okulov and V. V. Ustinov, Sov. J. Low Temp. Phys. 5, 101 (1979).
[27] P. G. Wolynes, Phys. Rev. A 13, 1235 (1976).
[28] L. A. Falkovsky, Adv. Phys. 32, 753 (1983).
[29] K. Fuchs, Proc. Cambridge Philos. Soc. 34, 100 (1938).
[30] E. H. Sondheimer, Adv. Phys. 50, 499 (2001).
[31] V. I. Okulov and V. V. Ustinov, Sov. J. Low Temp. Phys. 5, 101 (1979).
[32] J. Parrott, Proc. Phys. Soc. 85, 1143 (1965).

SUPPLEMENTARY INFORMATION
Disorder-assisted Spin-Filtering at Metal/Ferromagnet Interfaces: An Alternative Route to Anisotropic Magnetoresistance
Ivan Iorsh and Mikhail Titov

In this supplementary we provide the details of the derivations for the interested reader.

I. DERIVATION OF THE ELASTIC REFLECTION MATRIX

We start from the Hamiltonian of Eq. (1) of the main text,

    H = \frac{p^2}{2m} + V(\mathbf{r}) + U_0\,\Theta(-z) + \hbar v_F\,\delta(z)\,\Gamma_{\mathbf{p}},    (s1)

where v_F is the Fermi velocity, V(\mathbf{r}) denotes non-magnetic (Coulomb) disorder in the metal bulk, and U_0 Θ(-z) is the confinement potential for conduction electrons. Here U_0 has the meaning of a work function (the energy needed for an electron to leave the metal for the ferromagnetic dielectric at z < 0). The δ-layer potential is set by

    \Gamma_{\mathbf{p}} = u_0 + \gamma\,\boldsymbol{\sigma}\cdot\hat{\mathbf{m}} + \lambda\,\boldsymbol{\sigma}\cdot(\hat{\mathbf{p}}\times\hat{\mathbf{z}}) = \begin{pmatrix} u_0 + \gamma\cos\theta & Z^*_{\mathbf{p}} \\ Z_{\mathbf{p}} & u_0 - \gamma\cos\theta \end{pmatrix},    (s2)

where \hat{\mathbf{p}} = \mathbf{p}/mv_F = (k_x, k_y, q), \hat{\mathbf{m}} = (\sin\theta\cos\varphi, \sin\theta\sin\varphi, \cos\theta) is the unit vector along the magnetization direction of the ferromagnet, and \hat{\mathbf{z}} is the unit vector normal to the interface. Note that the operator \Gamma_{\mathbf{p}} commutes with δ(z). Here, the dimensionless parameters u_0, γ and λ control the interface charge transfer, the effective exchange interaction and the Rashba interface spin-orbit coupling, respectively. The absence of either spin-orbit coupling or exchange would immediately lead to an identically vanishing AMR on symmetry grounds. In a plane-wave eigenstate the operator \Gamma_{\mathbf{p}} is replaced by the momentum-dependent matrix function above, where

    Z_{\mathbf{p}} = \gamma\sin\theta\,e^{i\varphi} - i\lambda k\,e^{i\phi_k}, \qquad k = \sqrt{1 - q^2},    (s3)

and φ_k is the angle of the in-plane electron momentum, k_x = k cos φ_k, k_y = k sin φ_k.

The reflection matrix is defined from the scattering problem for the interface reflection in a clean system, V(\mathbf{r}) = 0. For elastic scattering we fix the conduction electron energy to E_F = mv_F^2/2 and use the dimensionless coordinate ζ = mv_F z. In this case the scattering problem (ℏ = 1) for propagation in the z direction reduces to the spectral equation

    \left[-\partial^2_\zeta + 2\Gamma_k\,\delta(\zeta)\right]\Psi_k = \left[1 - k^2 - U_0\,\Theta(-\zeta)/E_F\right]\Psi_k,    (s4)

where we can also use q = \sqrt{1 - k^2}. Equation (s4) is solved by

    \Psi_k(\zeta > 0) = (a_k e^{-iq\zeta} + b_k e^{iq\zeta})/\sqrt{q}, \qquad \Psi_k(\zeta < 0) = c_k e^{\kappa\zeta}/\sqrt{q},    (s5)

where q is defined as positive, κ = \sqrt{U_0/E_F - q^2}, a_k, b_k and c_k are spinors, and the factor 1/\sqrt{q} ensures the correct normalization of the scattering state. The reflection matrix is obtained from the relation b_k = \hat{r}_k a_k using the matching conditions at ζ = 0, which read

    \Psi_k(\zeta < 0)\big|_{\zeta=0} = \Psi_k(\zeta > 0)\big|_{\zeta=0}, \qquad \left[\frac{\partial\Psi_k(\zeta > 0)}{\partial\zeta} - \frac{\partial\Psi_k(\zeta < 0)}{\partial\zeta}\right]_{\zeta=0} = 2\Gamma_k\Psi_k(\zeta = 0).    (s6)
Thus, we get c_k = (1 + \hat{r}_k)a_k and (2\Gamma_k + \kappa)c_k = -iq(1 - \hat{r}_k)a_k, which gives

    \frac{1 - \hat{r}_k}{1 + \hat{r}_k} = i\,\frac{2\Gamma_k + \kappa}{q}.    (s7)

The reflection matrix is, therefore, given by

    \hat{r}_k = \frac{1 - iK}{1 + iK} = -\frac{2\Gamma_k + \kappa + iq}{2\Gamma_k + \kappa - iq}, \qquad K = \frac{2\Gamma_k + \kappa}{q}.    (s8)

The eigenvalues of the matrix K can be written as

    K_\pm = \frac{1}{q}\left[2u_0 + \kappa \pm 2\sqrt{\gamma^2 + \lambda^2 k^2 + 2\lambda k\gamma\sin\theta\sin(\phi_k - \varphi)}\right].    (s9)

The corresponding eigenvalues of the reflection matrix \hat{r}_k are often parameterized by the scattering phases δ_±, as r_± = \exp(i\delta_\pm). The latter are defined by

    \delta_\pm = -i\ln\left(\frac{u_0 + \kappa/2 \pm \sqrt{\gamma^2 + \lambda^2 k^2 + 2\lambda k\gamma\sin\theta\sin(\phi_k - \varphi)} + iq/2}{u_0 + \kappa/2 \pm \sqrt{\gamma^2 + \lambda^2 k^2 + 2\lambda k\gamma\sin\theta\sin(\phi_k - \varphi)} - iq/2}\right).    (s10)

For realistic interfaces we normally have γ ≪ λ ≪ 1. This means that, for an interface with u_0 + κ/2 ≫ 1, both scattering phases δ_± are identical and close to zero. However, for an interface with an optimal charge transfer, u_0 + κ/2 ≪ γ (i.e., for negative u_0 close to -\sqrt{U_0/E_F}/2), one may expect a large difference between the scattering phases of the two spin components. In this case one may find incident waves (values of q) for which one of the scattering phases equals π, i.e., corresponds to strong resonant scattering at the interface. In the presence of disorder near the interface this leads to a diffusive boundary condition for one spin component, while the other spin component is specularly reflected, thus strongly affecting the AMR of the entire metal film. We refer to this effect as interfacial spin filtering.

Interfacial spin filtering takes place when either δ_+ or δ_- turns into π for a certain value of q within the range 0 < q < 1. As a result, the AMR as a function of u_0 demonstrates two corresponding distinct peaks, illustrated in Fig. 2 of the main text.
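The spin-filtering condition can be made concrete with a few lines of code: the sketch below evaluates the scattering phases δ_± of Eq. (s10) across several incidence parameters q at the optimal charge transfer u_0 = -√(U_0/E_F)/2, showing one eigenphase approaching π while the other stays small. Parameter values are illustrative assumptions.

import numpy as np

# Scattering phases delta_pm of Eq. (s10) at the optimal charge transfer
# u0 = -sqrt(U0/EF)/2; illustrative parameter values only.
U0_over_EF = 2.0
u0 = -0.5 * np.sqrt(U0_over_EF)
gamma, lam = 0.02, 0.1
theta, phi, phi_k = np.pi / 2, 0.0, np.pi / 2

for q in (0.1, 0.4, 0.7, 0.9):
    k = np.sqrt(1.0 - q**2)
    kappa = np.sqrt(U0_over_EF - q**2)
    root = np.sqrt(gamma**2 + (lam * k)**2
                   + 2 * lam * k * gamma * np.sin(theta) * np.sin(phi_k - phi))
    for sign in (+1, -1):
        num = u0 + kappa / 2 + sign * root + 1j * q / 2
        delta = (-1j * np.log(num / np.conj(num))).real   # Eq. (s10)
        print(f"q = {q:.1f}, branch {sign:+d}: delta = {delta:+.3f} rad")
# Near q ~ 0.7 the "+" branch reaches |delta| ~ pi (strong, spin-selective
# scattering) while the other branch remains small: interfacial spin filtering.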
II. CALCULATION OF THE AMR AND SPIN CURRENT

The stationary SBE takes the form

    \mathbf{v}_p\cdot\nabla_r \hat{f}(\mathbf{p}, \mathbf{r}) - e\mathbf{E}\cdot\nabla_p \hat{f}(\mathbf{p}, \mathbf{r}) = -\frac{\hat{f}(\mathbf{p}, \mathbf{r}) - f_0(\varepsilon_p)}{\tau},    (s11)

where \mathbf{v}_p = \mathbf{p}/m is the electron velocity, \varepsilon_p = p^2/2m is the dispersion relation, \hat{f} is a matrix in spin space, τ is the scattering time, and f_0(\varepsilon) is the momentum-angle-averaged distribution function, which is a unit matrix in spin space. The SBE is subject to two boundary conditions: at the z = 0 metal/ferromagnet interface and at the z = W metal/air interface. We assume completely diffuse scattering at the z = W interface, and almost specular reflection at z = 0. Strong deviations from specular reflection for one of the spin components, due to the resonant spin-dependent scattering off the metal/ferromagnet interface, are responsible for the strong AMR in our model. At optimal conditions the only smallness of the AMR signal comes from the effective exchange parameter γ (assuming γ ≪ λ). A finite symmetry-breaking parameter γ is, in any case, required in any model of the effect.

We look for the solution of Eq. (s11) in the form

    \hat{f}(\mathbf{p}, \mathbf{r}) = f_0(\varepsilon_p) + f_1(\varepsilon_p, \hat{\mathbf{p}}, z) + \hat{f}_2(\varepsilon_p, \hat{\mathbf{p}}, z),    (s12)

where f_1(\varepsilon_p, \hat{\mathbf{p}}, z) is the spin-independent angular harmonic of the distribution function that defines the main (spin-independent) contribution to the charge current, while \hat{f}_2(\varepsilon_p, \hat{\mathbf{p}}, z) is the smaller spin-dependent correction (f_2 ≪ f_1) that is a matrix in spin space and is sensitive to the magnetization direction \hat{\mathbf{m}} in the ferromagnet. The correction \hat{f}_2 originates from the scattering off the high-quality metal/ferromagnet interface.

For f_1 we employ the standard Fuchs boundary conditions, which relate the distribution functions of electrons coming to the interface and electrons leaving the interface. In what follows we formally decompose f_1 = f^+_1 + f^-_1 and introduce the distribution functions f^\pm_1(\varepsilon_p, \hat{\mathbf{p}}, z), where "+" corresponds to \hat{p}_z > 0 and "-" corresponds to \hat{p}_z < 0. The spin-independent harmonic f_1 is then set by the boundary conditions

    f^-_1(\varepsilon_p, \hat{\mathbf{p}}, W) = 0,    (s13a)
    f^+_1(\varepsilon_p, \hat{\mathbf{p}}, 0) = f^-_1(\varepsilon_p, \hat{\mathbf{p}}, 0).    (s13b)

These boundary conditions correspond to purely diffusive scattering at the z = W interface, and specular reflection at the z = 0 interface. The solution for f_1 to linear order in the electric field E takes the form

    f_1(\varepsilon_p, \hat{\mathbf{p}}, z) = eEv_F\tau\cos\phi_p\,\frac{\partial f_0(\varepsilon_p)}{\partial\varepsilon_p}\left[\left(1 - e^{\frac{z-W}{|v_z|\tau}}\right)\Theta(-\hat{p}_z) + \left(1 - e^{-\frac{z+W}{v_z\tau}}\right)\Theta(\hat{p}_z)\right],    (s14)

where v_z = p_z/m and φ_p is the angle of the in-plane projection of the vector \mathbf{p} with respect to the x direction (equivalent to φ_k of the previous section).

The spin-dependent part of the distribution function satisfies f^-_2(\varepsilon_p, \hat{\mathbf{p}}, z) = 0 due to the diffusive boundary condition at z = W, as in Eq. (s13a). At z = 0 it obeys, however, the most general form of the boundary condition from Ref. [31], which can be written as

    \hat{f}_2(\varepsilon_p, \hat{\mathbf{p}}, z = 0) = \frac{\Theta(p_z)}{v_z}\int \frac{d^3p'}{(2\pi)^3}\,\hat{W}(\mathbf{p}, \mathbf{p}')\left[f^-_1(\varepsilon_p, \hat{\mathbf{p}}', 0) - f^-_1(\varepsilon_p, \hat{\mathbf{p}}, 0)\right],    (s15)

where the interface collision rate is set by

    \hat{W}(\mathbf{p}, \mathbf{p}') = 2\pi N_i V_0^2\,(1 + \hat{r}_p)(1 + \hat{r}_{p'})(1 + \hat{r}^\dagger_{p'})(1 + \hat{r}^\dagger_p)\,\delta(\varepsilon_p - \varepsilon_{p'}).    (s16)

The coordinate dependence of the spin-dependent component is simply found as

    \hat{f}_2(\varepsilon_p, \hat{\mathbf{p}}, z) = \hat{f}_2(\varepsilon_p, \hat{\mathbf{p}}, z = 0)\,e^{-z/v_z\tau}.    (s17)

Thus, the analysis of the AMR is mostly reduced to the integration in Eq. (s15). For the sake of generality we compute the charge current density alongside the spin-current densities using the standard thermodynamic definition

    j^{(\alpha)}_\beta(z) = \int \frac{d^3p}{(2\pi)^3}\, e v_\beta\,\mathrm{Tr}\left[\sigma^\alpha\left(f_1(\varepsilon_p, \hat{\mathbf{p}}, z) + \hat{f}_2(\varepsilon_p, \hat{\mathbf{p}}, z)\right)\right] = j_1(z)\,\delta_{\alpha,0}\,\delta_{\beta x} + j^{(\alpha)}_{2,\beta}(z),    (s18)

where α = 0, x, y, z is the spin index (0 corresponds to the usual charge current), while the index β = x, y, z denotes the spatial (velocity) directions. Since the metal layer is usually rather thin, it is also convenient to define the two-dimensional current density as

    J^{(\alpha)}_\beta = \int_0^W dz\, j^{(\alpha)}_\beta(z) = J_1\,\delta_{\alpha,0}\,\delta_{\beta x} + J^{(\alpha)}_{2,\beta}.    (s19)

The current J_1 is associated with the spin-independent angular harmonic of the distribution function f_1. Due to the absence of spin structure in this harmonic, it is simply the charge current flowing in the direction of the field E (the x direction). With the help of the solution for f_1 from Eq. (s14) we obtain

    J_1 = \sigma^{2D}_0\,\Phi(2W/\ell)\,E = \sigma_1 E,    (s20)

    \Phi(x) = 1 - \frac{3}{8x} + \frac{e^{-x}}{16x}\left(x^3 - x^2 - 10x + 6\right) + \frac{\mathrm{Ei}(1, x)}{16x}\left(-x^4 + 12x^2\right),    (s21)

where Ei is the exponential integral and \sigma^{2D}_0 = n_F e^2\tau/mW is the two-dimensional Drude conductivity of the film without the effect of the boundaries (here n_F is the three-dimensional electron concentration). The function Φ(x) was first introduced by Sondheimer [30] to describe the classical size effect in the conduction of thin metal films. For x ≪ 1 one finds Φ(x) ≈ (3x/4) ln(x^{-1}), while for x ≫ 1 one obtains Φ(x) ≈ 1 - 3/8x.
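Since Φ(x) in Eq. (s21) involves the exponential integral, a direct numerical implementation is a useful sanity check of the quoted limits; the sketch below uses scipy.special.exp1 for Ei(1, x) and compares Φ(x) with its small-x and large-x asymptotics.

import numpy as np
from scipy.special import exp1   # Ei(1, x) = E_1(x)

# Direct implementation of the Sondheimer function Phi(x) of Eq. (s21),
# checked against the asymptotics quoted in the text.
def Phi(x):
    return (1.0 - 3.0 / (8.0 * x)
            + np.exp(-x) / (16.0 * x) * (x**3 - x**2 - 10.0 * x + 6.0)
            + exp1(x) / (16.0 * x) * (-x**4 + 12.0 * x**2))

for x in (0.05, 0.5, 2.0, 20.0):
    small = 0.75 * x * np.log(1.0 / x)   # x << 1 limit, (3x/4) ln(1/x)
    large = 1.0 - 3.0 / (8.0 * x)        # x >> 1 limit, 1 - 3/8x
    print(f"x = {x:5.2f}: Phi = {Phi(x):+.4f}   "
          f"small-x ~ {small:+.4f}   large-x ~ {large:+.4f}")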
The contribution J^{(0)}_{2,x} is sensitive to the direction of \hat{\mathbf{m}} and defines the anisotropic resistance of the metal film. From Eqs. (s15) and (s17) we obtain

    J^{(0)}_{2,x} = \sigma^{2D}_0 E\,\frac{3}{16\pi^2}\,\frac{1}{E_F\tau}\,\frac{N_i}{N_0\lambda_F}\,\frac{\ell}{W} \int_0^1 dk \int_0^{2\pi} d\phi_k\,\frac{k}{q}\,k\cos\phi_k\left(1 - e^{-W/\ell q}\right) \int_0^1 dk' \int_0^{2\pi} d\phi'_k\,\frac{k'}{q'}\,S_{k,k'}\left[k'\cos\phi'_k\left(1 - e^{-W/\ell q'}\right) - k\cos\phi_k\left(1 - e^{-W/\ell q}\right)\right],    (s22)

where λ_F is the Fermi wavelength, N_i is the two-dimensional concentration of interfacial defects, N_0 is the three-dimensional concentration of non-magnetic impurities in the metal bulk, S_{k,k'} = \mathrm{Tr}[(\hat{s}_k + \hat{s}^\dagger_k)(\hat{s}_{k'} + \hat{s}^\dagger_{k'})], q = \sqrt{1 - k^2}, and q' = \sqrt{1 - (k')^2}. Here we have defined \hat{s}_k = 1 + \hat{r}_k.

For a perfect interface one finds N_i = N_0\lambda_F. The interface disorder, in this case, comes from "bulk" non-magnetic impurities located within a distance λ_F of the interface. Normally, the interface brings additional scattering, hence even for high-quality interfaces one finds N_i/N_0\lambda_F ≫ 1. Thus, for W ≃ ℓ the prefactor (1/E_F\tau)(N_i/N_0\lambda_F)(\ell/W) in Eq. (s22) may still be of order 1.

The charge current J^{(0)}_{2,x} contains an angle-dependent contribution that varies as cos 2φ with respect to the in-plane direction of \hat{\mathbf{m}}. This angle-dependent part defines the AMR and can be quantified by the ratio

    \Delta\rho_\parallel/\rho = -\left(J^{(0)}_{2,x} - \left\langle J^{(0)}_{2,x}\right\rangle_\varphi\right)/J_1,    (s23)

where \langle J^{(0)}_{2,x}\rangle_\varphi ≪ J_1 is the part of J^{(0)}_{2,x} that is independent of φ (the angular brackets ⟨...⟩_φ denote averaging over the directions of φ). Similarly one can define the transversal current contribution J^{(0)}_{2,y}, which is proportional to sin 2φ. The coefficient of cos 2φ in J^{(0)}_{2,x} and the coefficient of sin 2φ in J^{(0)}_{2,y} are identical in our theory and represent the interfacial AMR, consistent with that observed in SMR experiments. We demonstrate this point explicitly in the next section.

It is also instructive to see that the physics of the spin-filtering magnetoresistance (SFMR) we have presented has nothing to do with the mechanism of the SHE. This can be seen most straightforwardly within the same model by computing the corresponding spin-current density. The spin Hall effect is usually associated in this geometry with the spin-current density component j^{(y)}_z/e in the metal bulk. The latter is found from the same SBE as

    \frac{j^{(y)}_z(z)}{e} = \frac{\sigma_0 E}{e}\,\frac{3}{16\pi^2}\,\frac{1}{E_F\tau}\,\frac{N_i}{N_0\lambda_F} \int_0^1 \frac{k\,dk}{q}\,e^{-z/\ell q} \int_0^{2\pi} d\phi_k \int_0^1 \frac{k'\,dk'}{q'} \int_0^{2\pi} d\phi'_k\, S^{(y)}_{k,k'}\left[k'\cos\phi'_k\left(1 - e^{-W/\ell q'}\right) - k\cos\phi_k\left(1 - e^{-W/\ell q}\right)\right],    (s24)

where \sigma_0 = n_F e^2\tau/m and S^{(y)}_{k,k'} = \mathrm{Tr}[\sigma_y\,\hat{s}_k(\hat{s}_{k'} + \hat{s}^\dagger_{k'})\hat{s}^\dagger_k]. The quantity S^{(y)}_{k,k'} brings an additional smallness in the spin-orbit strength that makes the transversal spin current, and consequently the Hall angle, several orders of magnitude smaller than the relative AMR. As a result, the SMR, which is formally given by the squared Hall angle, is absolutely negligible compared to the SFMR even outside the optimal parameter range u_0 ≃ -\sqrt{U_0/E_F}. Interestingly, the value of the spin current at the interface, j^{(y)}_z(z = 0), scales as λ^3 and is practically vanishing for any realistic values of the parameters.

III. ANGULAR DEPENDENCE OF THE INTERFACIAL AMR SIGNAL

In order to illuminate the dependence of the AMR on the in-plane magnetization angle φ analytically, we have to make a few simplifications in Eq. (s22). The entire dependence on φ arises exclusively from the interface scattering kernel

    S_{k,k'} = \mathrm{Tr}\left[(\hat{s}_k + \hat{s}^\dagger_k)(\hat{s}_{k'} + \hat{s}^\dagger_{k'})\right] = \mathrm{Tr}\left[\hat{M}_k \hat{M}_{k'}\right],    (s25)

where \hat{M}_k = \hat{s}_k + \hat{s}^\dagger_k. In the vicinity of the spin-filtering resonance that dominates the AMR signal, the leading contribution comes from the product of the diagonal elements of the matrix \hat{M}_k.
Furthermore, in order to simplify the logic, we may neglect the thickness dependence (assuming W > ℓ and omitting the exponentials exp(-W/ℓq)). We may also notice that the result is dominated by q ≪ 1; hence, for an estimate, we may set k = 1 in the integrand. After that, the integration over k and k' is performed trivially, with the result

    j^{(0)}_{2,x} = A\int_0^{2\pi} d\phi_k\,\cos^2\phi_k\left(\frac{2}{3} - \frac{\pi|Z_k|}{2}\right)\int_0^{2\pi} d\phi'_k\left(1 - \frac{\pi|Z_{k'}|}{2}\right) + B\int_0^{2\pi} d\phi_k\,\cos\phi_k\left(1 - \frac{|Z_k|}{2}\right)\int_0^{2\pi} d\phi'_k\,\cos\phi'_k\left(1 - \frac{|Z_{k'}|}{2}\right),    (s26)

where A and B are numerical coefficients that we do not need to specify. Similarly, the expression for the y component of the charge current can be simplified in the same approximation to

    j^{(0)}_{2,y} = A\int_0^{2\pi} d\phi_k\,\cos\phi_k\sin\phi_k\left(\frac{2}{3} - \frac{\pi|Z_k|}{2}\right)\int_0^{2\pi} d\phi'_k\left(1 - \frac{\pi|Z_{k'}|}{2}\right) + B\int_0^{2\pi} d\phi_k\,\sin\phi_k\left(1 - \frac{|Z_k|}{2}\right)\int_0^{2\pi} d\phi'_k\,\cos\phi'_k\left(1 - \frac{|Z_{k'}|}{2}\right).    (s27)

In the expressions of Eqs. (s26, s27) we have defined

    |Z_k| = \sqrt{\gamma^2 + \lambda^2 + 2\gamma\lambda\sin(\phi_k - \varphi)}.    (s28)
Since we have to consider both γ ≪ 1 and λ ≪ 1, we have |Z_k| ≪ 1. It is easy to see that the expressions in front of the coefficient A in both Eq. (s26) and Eq. (s27) contain linear terms in |Z_k|, while the expressions in front of the coefficient B contain only terms quadratic in |Z_k|, which we can therefore neglect. Subtracting the isotropic part from j^{(0)}_{2,x}, we obtain

    j^{(0)}_{2,x} - \left\langle j^{(0)}_{2,x}\right\rangle_\varphi = -\frac{\pi^2 A}{2}\int d\phi_k\,\cos 2\phi_k\,|Z_k|,    (s29a)
    j^{(0)}_{2,y} = -\frac{\pi^2 A}{2}\int d\phi_k\,\sin 2\phi_k\,|Z_k|,    (s29b)

where we have neglected all terms of order |Z_k|^2. We can now define α = φ_k - φ and take advantage of the identity

    \int_0^{2\pi} d\alpha\,\sin 2\alpha\,\sqrt{\gamma^2 + \lambda^2 + 2\gamma\lambda\sin\alpha} = 0.    (s30)

As a result we obtain the two anisotropic components of the charge current as

    j^{(0)}_{2,x} - \left\langle j^{(0)}_{2,x}\right\rangle_\varphi = \delta j\,\cos 2\varphi, \qquad j^{(0)}_{2,y} = \delta j\,\sin 2\varphi,    (s31)

where δj is a common constant. To obtain appreciable deviations from the angular dependence of Eq. (s31), one would need both interfacial parameters γ and λ to be of order 1, which is hardly possible for any known interface.
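The identity (s30) follows from the substitution α → π - α, under which sin α is invariant while sin 2α changes sign; a quick numerical check (with arbitrary γ and λ) is sketched below.

import numpy as np
from scipy.integrate import quad

# Numerical check of the identity (s30); gamma and lambda are arbitrary here.
gamma, lam = 0.3, 0.7
integrand = lambda a: np.sin(2.0 * a) * np.sqrt(gamma**2 + lam**2
                                                + 2.0 * gamma * lam * np.sin(a))
val, err = quad(integrand, 0.0, 2.0 * np.pi)
print(f"integral = {val:.2e} (quadrature error estimate {err:.1e})")
# The substitution alpha -> pi - alpha leaves sin(alpha) invariant but flips
# sin(2*alpha), so the integral vanishes identically.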
Computation of attractor dimension and maximal sums of Lyapunov exponents using polynomial optimization

Jeremy P. Parker1, David Goluskin2
1Division of Mathematics, University of Dundee, Dundee, DD1 4HN, United Kingdom
2Department of Mathematics and Statistics, University of Victoria, Victoria, BC, V8P 5C2, Canada

Abstract

Two approaches are presented for computing upper bounds on Lyapunov exponents and their sums, and on Lyapunov dimension, among all trajectories of a dynamical system governed by ordinary differential equations. The first approach expresses a sum of Lyapunov exponents as a time average in an augmented dynamical system and then applies methods for bounding time averages. This generalizes the work of Oeri & Goluskin (Nonlinearity 36:5378-5400, 2023), who bounded the single leading Lyapunov exponent. The second approach considers a different augmented dynamical system, where bounds on sums of Lyapunov exponents are implied by stability of certain sets, and such stability is verified using Lyapunov function methods. Both of our approaches can also be adapted to directly compute bounds on Lyapunov dimension, which in turn implies bounds on the fractal dimension of a global attractor. For systems of ordinary differential equations with polynomial right-hand sides, all of our bounding formulations lead to polynomial optimization problems with sum-of-squares constraints. These sum-of-squares problems can be solved computationally for any particular system to yield numerical bounds, provided the number of variables and degree of polynomials is not prohibitive. Most of our upper bounds are proven to be sharp under relatively weak assumptions. In the case of the polynomial optimization problems, sharpness means that upper bounds converge to the exact values as polynomial degrees are raised. Computational examples demonstrate upper bounds that are sharp to several digits, including for a six-dimensional dynamical system where sums of Lyapunov exponents are maximized on periodic orbits.

1 Introduction

The detection and characterization of chaos is a major computational challenge for many dynamical systems. Chaotic properties are often quantified via the spectrum of Lyapunov exponents (LEs), which can be computed over individual trajectories to obtain local information about that trajectory and, perhaps, global implications such as the existence of hyperchaos or estimates of attractor dimension. In this paper we consider the problem of finding upper bounds on sums of LEs among all possible trajectories, without knowledge of any particular trajectories. Upper bounds on sums of LEs can be used to construct upper bounds on the fractal dimension of the global attractor. Our upper bounds are complementary to typical trajectory-based computations, which give corresponding lower bounds on sums of LEs and on the Lyapunov dimension of global attractors.

The LE spectrum quantifies the asymptotic rates at which linearized perturbations in different directions grow or decay along trajectories of a dynamical system. For dynamics in n dimensions there are n exponents, defined in decreasing order. Partial sums of the first k exponents describe the exponential rates of growth or contraction of k-dimensional volumes in state space, so that the first exponent governs the stretching of lines, the sum of the first two governs the growth of areas, and so on.
In dissipative systems, these sums become negative beyond a certain value of k, reflecting the contraction of higher-dimensional volumes. One way of defining a value at which this contraction begins leads to the Lyapunov dimension, which is non-integer in general, and which provides a natural connection between dynamical stability and the fractal geometry of invariant sets such as strange attractors.

We consider autonomous, continuous-time dynamics governed by an ordinary differential equation (ODE) system,

    \frac{d}{dt}X(t) = F(X(t)), \qquad X(0) = X_0,    (1)

with F ∈ C^1(Ω, R^n), where C^k(A, B) denotes the space of functions mapping A to B that are k times continuously differentiable. The state space Ω ⊂ R^n may be all of R^n, a subdomain of R^n or a lower-dimensional manifold embedded in R^n.

In the present work, we apply existing methods for proving global properties of ODE dynamical systems to the problem of bounding LEs and their sums. These existing methods rely on finding auxiliary functions V ∈ C^1(Ω, R) subject to certain pointwise inequalities that, in turn, imply global statements about the dynamics. The best known examples of auxiliary functions are Lyapunov functions, whose inequality conditions imply statements about stability. (Our use of Lyapunov functions to study Lyapunov exponents is not typical, despite both objects bearing the same person's name.) Among the lesser known types of auxiliary functions are those whose inequality conditions imply bounds on infinite-time averages. For each quantity studied in the present work, we formulate two different upper bounds: a "shifted spectrum" approach based on Lyapunov function conditions for stability, and a "sphere projection" approach based on auxiliary function conditions for bounding time averages.

The shifted spectrum approach that we present in section 2.3 uses conditions on V that are similar to standard Lyapunov function conditions. In stability theory, the pointwise inequality conditions required of a Lyapunov function may be, for instance,

    V(X) \ge 0, \qquad F(X)\cdot DV(X) \le 0 \quad \forall X \in \Omega,    (2)

where DV denotes the gradient of V with respect to the state space variable X. Here F(X) · DV(X) is the Lie derivative of V with respect to the flow, since the usual chain rule gives \frac{d}{dt}V(X(t)) = F(X(t))\cdot DV(X(t)). The existence of V satisfying (2) would imply that the set of X ∈ Ω where V(X) = 0 is Lyapunov stable, meaning that trajectories remain arbitrarily close to this set if they start sufficiently close [17].

Until about 25 years ago, there was no broadly applicable method for finding V subject to pointwise inequalities like those in (2). Now, however, it is often possible to construct such V computationally using optimization methods based on sum-of-squares (SOS) polynomials [36, 39]. In particular, if the function F defining the dynamics is polynomial in the components of X, and if V is sought from a finite-dimensional space of polynomials, then F · DV is also a polynomial. Deciding whether the polynomials V and -F · DV are nonnegative on Ω is prohibitively hard in general [32], but this nonnegativity can be ensured by stronger and more tractable SOS conditions. For instance, simply requiring V to be SOS, meaning that it can be represented as a sum of squares of other polynomials, would imply V ≥ 0 on all of R^n. Other SOS conditions can imply nonnegativity on Ω but not on all of R^n, as explained in section 2.4.
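As a toy illustration of the conditions (2), hand-picked rather than taken from this paper: for dx/dt = -x + x^3 on Ω = (-1, 1), the candidate V(x) = x^2 satisfies both inequalities, certifying Lyapunov stability of the origin. The symbolic check below uses sympy.

import sympy as sp

# Toy check of the pointwise conditions (2), hand-picked (not from the paper):
# dx/dt = -x + x**3 on Omega = (-1, 1), candidate V(x) = x**2.
x = sp.symbols('x', real=True)
F = -x + x**3
V = x**2
lie = sp.expand(F * sp.diff(V, x))   # Lie derivative F * DV
print(lie)                            # -2*x**2 + 2*x**4
print(sp.factor(lie))                 # 2*x**2*(x - 1)*(x + 1)
# V >= 0 everywhere, and on Omega = (-1, 1) we have x**2 - 1 < 0, so
# F*DV = 2*x**2*(x**2 - 1) <= 0 there: the set {V = 0} = {0} is stable.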
The sphere projection approach that we present in section 2.2 uses conditions on an auxiliary function V for bounding time averages. For any Φ ∈ C^0(Ω, R) evaluated along a trajectory X(t) with initial condition X_0, define the infinite-time average \overline{\Phi} as

    \overline{\Phi}(X_0) = \limsup_{T\to\infty} \frac{1}{T}\int_0^T \Phi(X(t))\,dt.    (3)

On each X(t) that remains bounded forward in time, boundedness of V(X(t)) implies \overline{\tfrac{d}{dt}V} = 0, and thus \overline{F\cdot DV} = 0. Thus, on bounded trajectories,

    \overline{\Phi} = \overline{\Phi + F\cdot DV} \le \sup_{X\in\Omega}\left[\Phi + F\cdot DV\right].    (4)

The right-hand upper bound is useful because it does not involve individual trajectories. Since (4) holds for all bounded trajectories and all V ∈ C^1, one can maximize the left-hand side over X_0 and minimize the right-hand side over V to find

    \sup_{X_0\in\Omega} \overline{\Phi} \le \inf_{V\in C^1(\Omega)} \sup_{X\in\Omega}\left[\Phi + F\cdot DV\right].    (5)

Furthermore, the inequality in (5) is an equality when Ω is a compact set in which trajectories remain [3, 46], and this equality underlies the sharpness of our sphere projection approach. When Φ, V and F are polynomial, so is the expression Φ + F · DV on the right-hand side of (5). In this case the min-max problem can be relaxed into a minimization over polynomial V subject to an SOS constraint, as explained in section 2.4. This constitutes an SOS optimization problem, where the optimization objective is to minimize the resulting upper bound on \overline{\Phi}. These ideas were applied by Oeri and Goluskin [34] to bound the single leading LE. Here we extend this approach to sums of LEs and to Lyapunov dimension.

Our two approaches are the first broadly applicable computational methods that can give convergent upper bounds on global maxima of LE sums or Lyapunov dimension. Unlike methods that rely on pointwise optimization over state space [7], our methods remain sharp when the maximizing orbits are periodic rather than stationary. Furthermore, our methods give a systematic way to search for the maximizing trajectories themselves.
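A minimal worked instance of the bound (5), with a hand-picked system and auxiliary function rather than one from this paper: for dx/dt = x - x^3 with Φ = x^2 and V = x^2/2, one finds Φ + F·DV = 2x^2 - x^4, whose supremum is 1, so every bounded trajectory satisfies \overline{x^2} ≤ 1. The bound is sharp, since the equilibria x = ±1 attain it. The sketch below evaluates the supremum numerically.

import numpy as np

# Hand-picked instance of the bound (5): dx/dt = x - x**3, Phi = x**2,
# V = x**2/2, so Phi + F*DV = x**2 + (x - x**3)*x = 2*x**2 - x**4.
xs = np.linspace(-3.0, 3.0, 200001)
bound = np.max(2.0 * xs**2 - xs**4)
print("sup [Phi + F*DV] =", bound)   # ~1, attained at x = +/-1
# Hence mean(x**2) <= 1 on every bounded trajectory; the equilibria
# x = +/-1 have mean(x**2) = 1, so this V certifies a sharp bound.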
The rest of the paper is organized as follows. Section 2 defines sums of LEs and Lyapunov dimension and then describes our two general approaches to bounding these quantities. After arriving at the more general optimization problems, we strengthen their constraints to obtain SOS optimization problems, then we describe how to exploit symmetries. Section 3 presents computational results from applications of our methods to two examples: the chaotic Duffing oscillator, which can be formulated as an autonomous system in 4 variables, and a 3-degree-of-freedom Hamiltonian system adapted from [35], which is an autonomous system in 6 variables. For all LE sums in both examples, we confirm that our computed bounds are sharp to several digits by finding periodic orbits on which the bounds are approximately saturated. Concluding remarks are given in section 4. For completeness, appendix A summarizes necessary material about exterior products, and appendix B proves a non-standard variant of an existence theorem for Lyapunov functions.

2 Optimization problems and SOS relaxations

This section defines the objects we seek to bound and formulates our methods for doing so. Section 2.1 gives expressions for LE sums and Lyapunov dimension that are useful for what follows. In section 2.2 we describe approaches for bounding these quantities based on projecting tangent space vectors onto the unit sphere. In section 2.3 we describe approaches that instead shift the spectrum of the Jacobian so that tangent vectors remain bounded. Section 2.4 describes how each formulation is relaxed into SOS optimization problems that can be implemented numerically. Finally, section 2.5 describes how symmetries may be exploited in the optimization problems.

2.1 Tangent vectors, Lyapunov exponents and Lyapunov dimension

In this section we consider autonomous ODE systems on a domain B ⊂ R^n, just as in (1) but denoted here using lowercase x and f,

    \frac{d}{dt}x(t) = f(x(t)), \qquad x(0) = x_0,    (6)

with f ∈ C^1(B, R^n). Uppercase X and F and the domain Ω will be reserved for associated ODEs on augmented state spaces that combine the dynamics of x with dynamics on its tangent space. Trajectories are assumed to remain in B for all t ≥ 0. In cases where B is unbounded, we further assume that each trajectory remains bounded for t ≥ 0, meaning there is no forward-time blowup. The differentiability of f and the assumption of no blowup guarantee well-posedness for all positive times, meaning there exists a differentiable flow map φ : R^+ × B → B satisfying x(t) = φ^t(x_0) for every initial condition x_0 ∈ B and all t ≥ 0.

The LEs associated with a trajectory x(t) quantify the convergence or divergence of infinitesimally nearby trajectories. Linearization of (6) around each point x(t) defines the dynamics of the tangent vector y(t), which evolves in the tangent space R^n according to

    \frac{d}{dt}y(t) = Df(x(t))\,y(t), \qquad y(0) = y_0,    (7)

where Df(x) denotes the n × n Jacobian matrix of f at x, and y_0 is the initial tangent vector. Equivalently, y(t) is given by the linearized finite-time flow map as

    y(t) = D\varphi^t(x_0)\,y_0.    (8)

We restrict y_0 to the unit sphere,

    S^{n-1} = \{x \in R^n : |x| = 1\},    (9)

because the tangent vector dynamics are linear in y and thus are unaffected by normalization of the initial condition. Considering the separation between trajectories starting at x_0 and at x_0 + εy_0, Taylor expanding the flow map and using (8) gives

    |\varphi^t(x_0) - \varphi^t(x_0 + \varepsilon y_0)| = \varepsilon|y(t)| + O(\varepsilon^2|y_0|^2).    (10)

The tangent vector typically shrinks or grows exponentially like |y(t)| ≈ e^{μt} for some average rate μ(x_0, y_0), in which case (10) suggests

    |\varphi^t(x_0) - \varphi^t(x_0 + \varepsilon y_0)| \approx \varepsilon|y_0|e^{\mu(x_0, y_0)t}.    (11)

If μ < 0 this approximation can be valid for all time. If μ > 0 this approximation can be valid only until the time when the separation between trajectories grows too large for (10) to apply, but this time approaches infinity as ε → 0. Motivated by the expectation that |y(t)| ≈ e^{μt}, the corresponding LE can be defined precisely as

    \mu(x_0, y_0) = \limsup_{t\to\infty} \frac{1}{t}\log|y(t)|.    (12)

For each trajectory x(t), which is determined by its initial condition x_0, the LE μ defined by (12) generally depends on the initial tangent vector direction y_0. The Oseledets ergodic theorem guarantees that different y_0 give at most n different LEs [40], the largest of which is called the leading LE. We define this by definition 2.1 and then give equivalent expressions in sections 2.2 and 2.3 that underlie the methods we introduce there. The literature on LEs uses various definitions that are equivalent in many settings [40].

Definition 2.1. For x(t) evolving by the ODE (6), the leading Lyapunov exponent is

    \mu_1(x_0) = \sup_{y_0\in S^{n-1}} \limsup_{t\to\infty} \frac{1}{t}\log|y(t)|,    (13)

where the tangent vector y(t) evolves by (7).

The leading LE is the average growth rate of lengths in the tangent space, maximized over the initial direction. A natural generalization is to consider growth rates of k-dimensional volumes in the tangent space for any k ≤ n. From this one can define leading sums of LEs and, in turn, the full spectrum of LEs [45]. It is also possible to define the spectrum of LEs directly using singular values of Dφ^t [40], but their sums are easier to define, and the sums are what we study in the present work.
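As a point of comparison for the bounds developed below, the standard trajectory-based estimate of the leading LE in (12)-(13) integrates the tangent equation (7) with periodic renormalization. The sketch below applies it to the Lorenz system, a common test case that is not one of this paper's examples; forward Euler is used only for brevity.

import numpy as np

# Standard renormalization estimate of the leading LE of definition 2.1 for
# the Lorenz system (illustrative test case; forward Euler for brevity).
def f(x):
    return np.array([10.0 * (x[1] - x[0]),
                     x[0] * (28.0 - x[2]) - x[1],
                     x[0] * x[1] - (8.0 / 3.0) * x[2]])

def Df(x):
    return np.array([[-10.0, 10.0, 0.0],
                     [28.0 - x[2], -1.0, -x[0]],
                     [x[1], x[0], -8.0 / 3.0]])

dt, n_steps = 1e-3, 500_000
x = np.array([1.0, 1.0, 1.0])
y = np.array([1.0, 0.0, 0.0])
log_growth = 0.0
for _ in range(n_steps):
    x = x + dt * f(x)
    y = y + dt * Df(x) @ y          # tangent dynamics, Eq. (7)
    norm = np.linalg.norm(y)
    log_growth += np.log(norm)
    y /= norm                        # renormalize; growth accumulated above
print("leading LE estimate:", log_growth / (n_steps * dt))   # ~0.9 for Lorenz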
Let |y_1 ∧ ··· ∧ y_k| denote the volume of the parallelepiped whose edges coincide with vectors y_1, ..., y_k ∈ R^n. If each y_i is a tangent vector that evolves in time according to (7), the volume of the corresponding parallelepiped generally grows or shrinks exponentially in time, so we can define an average exponential rate analogously to (12). Maximizing this rate over all initial directions of the tangent vectors gives [45]

    M_k(x_0) = \sup_{y_1(0),...,y_k(0) ∈ R^n,\ |y_1(0)∧···∧y_k(0)|=1} \limsup_{t→∞} \frac{1}{t} \log |y_1(t) ∧ ··· ∧ y_k(t)|,    (14)

where each y_i(t) evolves as in (7). The value M_k(x_0) is the maximum time-averaged growth rate of k-dimensional volumes in the tangent space for the trajectory x(t) starting at x_0. This maximum growth rate will be attained by almost every choice of tangent vectors, under the assumptions of certain multiplicative ergodic theorems [40].

The wedge products y_1 ∧ ··· ∧ y_k span a vector space called the exterior product space of k-vectors on R^n. The space of k-vectors on R^n has dimension n_k = \binom{n}{k} = \frac{n!}{k!(n−k)!}. We can therefore identify each y_1 ∧ ··· ∧ y_k with a vector denoted by y^k ∈ R^{n_k}, whose Euclidean norm |y^k| gives the volume of the parallelepiped with edges y_1, ..., y_k. For time-dependent k-vectors where each y_i(t) is a tangent vector satisfying the ODE (7), the k-vectors satisfy a corresponding ODE (see lemma A.4),

    \frac{d}{dt} y^k(t) = Df^{[k]}(x(t))\, y^k(t).    (15)

Here Df^{[k]} denotes the kth additive compound of the matrix Df, and later Df^{(k)} denotes the kth multiplicative compound. These compounds are linear transformations acting on the space of k-vectors, induced by the linear transformation Df on vectors, and they can be computed explicitly as n_k × n_k matrices. Exterior products and compound matrices are reviewed in appendix A. The expression (14) for M_k can be equivalently stated using y^k(t), resulting in the following definition 2.2 that we choose for M_k.

Definition 2.2. For x(t) evolving by the ODE (6) from initial condition x_0, the Lyapunov exponents are defined such that the sum of the k leading Lyapunov exponents is

    M_k(x_0) = \sup_{y^k_0 ∈ S^{n_k−1}} \limsup_{t→∞} \frac{1}{t} \log |y^k(t)|,    (16)

where y^k(t) evolves by (15) from initial condition y^k_0.

Having defined M_k, we can define the LE spectrum μ_1 ≥ μ_2 ≥ ··· ≥ μ_n such that the M_k are necessarily LE sums. When k = 1 we already have μ_1 = M_1 since definition 2.2 reduces to definition 2.1 when y^k = y and Df^{[k]} = Df. For k ≥ 2, the LE μ_k can be defined recursively by

    μ_k = M_k − M_{k−1},    (17)

so indeed M_k = \sum_{i=1}^k μ_i. There are equivalent ways to define the μ_k as the fundamental objects and then define the M_k as their partial sums, but for our purposes it is simpler to define M_k directly by (16).

For systems with attractors, a primary use of LE sums is to define and study the dimension of an attractor. Definition 2.3 gives the Kaplan-Yorke formula [16] for the global Lyapunov dimension d_L of an ODE system; see [18] for equivalent definitions. Hausdorff dimension is perhaps a more natural way to define the dimension of a fractal set in general. For the global attractor of a dynamical system, Hausdorff dimension can be very difficult to estimate directly, but it is bounded above by d_L [8]. Thus, the upper bounds we produce for d_L are upper bounds on Hausdorff dimension also.

Definition 2.3. For x(t) evolving in B ⊂ R^n by the ODE (6), let j be the smallest integer such that \sup_{x_0∈B} M_{j+1}(x_0) < 0. The global Lyapunov dimension over B is

    d_L = j + \sup_{x_0∈B} \frac{M_j(x_0)}{−μ_{j+1}(x_0)}.    (18)
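Since the definitions above are phrased in terms of compound matrices, a short sketch of how Df^{[k]} and the multiplicative compound can be built may be helpful. This is our own summary of the standard subset-indexed constructions reviewed in appendix A; the function names are ours.

```julia
using Combinatorics, LinearAlgebra

# kth multiplicative compound A^(k): entries are the k×k minors of A,
# with rows and columns indexed by the k-element subsets of 1:n in
# lexicographic order.
function multiplicative_compound(A::AbstractMatrix, k::Int)
    S = collect(combinations(1:size(A, 1), k))
    return [det(A[r, c]) for r in S, c in S]
end

# kth additive compound A^[k], the generator of k-volume growth in (15).
# Standard entrywise formula: diagonal entries are traces over a subset;
# if two subsets differ in exactly one index the entry is a signed entry
# of A; all other entries vanish.
function additive_compound(A::AbstractMatrix, k::Int)
    S = collect(combinations(1:size(A, 1), k))
    N = length(S)                       # n_k = binomial(n, k)
    B = zeros(eltype(A), N, N)
    for p in 1:N, q in 1:N
        if S[p] == S[q]
            B[p, q] = sum(A[i, i] for i in S[p])
        else
            out = setdiff(S[p], S[q])
            if length(out) == 1         # subsets differ in one index
                i = out[1]
                j = setdiff(S[q], S[p])[1]
                l = findfirst(==(i), S[p])
                m = findfirst(==(j), S[q])
                B[p, q] = (-1)^(l + m) * A[i, j]
            end
        end
    end
    return B
end
```

The two compounds are consistent in the sense that additive_compound(A, k) agrees with the derivative of multiplicative_compound(I + tA, k) at t = 0, which gives one way to test the implementation.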
For particular ODEs, the Kaplan-Yorke formula is usually evaluated numerically along a chaotic trajectory, rather than being maximized over all trajectories. The result indicates a fractal dimension of the chaotic attractor, rather than of the global attractor. Global Lyapunov dimension pertains to the latter, and the so-called Eden conjecture [9, 22] predicts that the maximum in (18) is attained on an equilibrium or periodic orbit, not a chaotic orbit. Lemma 2.4 states the fact that j ≤ d_L < j + 1, which we prove for completeness, along with a restatement of the definition of d_L that we will use in sections 2.2 and 2.3.

Lemma 2.4. The Lyapunov dimension d_L has the following properties.

1. For the integer j in definition 2.3, Lyapunov dimension is bounded by j ≤ d_L < j + 1.

2. The expression (18) for d_L is equivalent to

    d_L = \inf_{B∈[0,1)} (j + B)   s.t.   B [M_j(x_0) − M_{j+1}(x_0)] − M_j(x_0) ≥ 0   ∀ x_0 ∈ B.    (19)

Proof. The definition of j implies that there exists ε > 0 such that

    M_{j+1}(x_0) ≤ −ε   and   −μ_{j+1}(x_0) ≥ ε   for all x_0 ∈ B,    (20)

and that \sup_{x_0∈B} M_j(x_0) ≥ 0. From these facts it follows that the supremum in the definition (18) of d_L cannot be negative, which gives the lower bound j ≤ d_L in the first part of the lemma. For the upper bound, note that

    \frac{M_j(x_0)}{−μ_{j+1}(x_0)} ≤ \frac{−μ_{j+1}(x_0) − ε}{−μ_{j+1}(x_0)} < 1,    (21)

which gives d_L < j + 1.

For the second part of the lemma, note that the supremum in the definition (18) of d_L is equivalent to

    \inf B   s.t.   \frac{M_j(x_0)}{−μ_{j+1}(x_0)} ≤ B   ∀ x_0 ∈ B.    (22)

Since −μ_{j+1} = M_j − M_{j+1} > 0, rearranging the constraint in (22) gives the constraint in (19) as an equivalent form. This infimum is in [0, 1) and so is unchanged by the B ∈ [0, 1) constraint in (19). Thus expression (19) for d_L is equivalent to expression (18) in the definition.

Remark 2.5. Combining both parts of lemma 2.4 implies that, if we can find B ∈ R and k ≥ j where

    B [M_k(x_0) − M_{k+1}(x_0)] − M_k(x_0) ≥ 0    (23)

for all x_0 ∈ B, then the Lyapunov dimension is bounded above by d_L ≤ k + B. Note that (23) cannot hold for all x_0 unless B ≥ 0.

2.2 Sphere projection approach

Our first approach to bounding the LE sums M_k of definition 2.2 uses the formulation (5) for bounding infinite-time averages. This is a generalization of the approach in [34], which was described only for the k = 1 case. First note that the time average in the definition of M_k can be expressed as a time integral along trajectories in the augmented state space for x(t) and y^k(t):

    \limsup_{t→∞} \frac{1}{t} \log |y^k(t)| = \limsup_{T→∞} \frac{1}{T} \int_0^T \frac{d}{dt} \log |y^k(t)| \, dt    (24)
    = \limsup_{T→∞} \frac{1}{T} \int_0^T \frac{(y^k)^T \frac{d}{dt} y^k}{|y^k|^2} \, dt    (25)
    = \limsup_{T→∞} \frac{1}{T} \int_0^T \frac{(y^k)^T Df^{[k]}(x)\, y^k}{|y^k|^2} \, dt,    (26)

where (y^k)^T is the transpose of the vector y^k. The formulation (5) for bounding time averages can be applied to the quantity in (26) only if each (x, y^k) remains bounded forward in time, but y^k(t) will be unbounded when M_k is positive. The same tangent space dynamics can be captured with bounded variables by projecting y^k onto a sphere, by introducing z^k = y^k/|y^k|. Expressing the right-hand side of (26) in terms of z^k and then maximizing both sides over the initial tangent space directions z^k_0 gives

    M_k(x_0) = \sup_{z^k_0 ∈ S^{n_k−1}} \overline{Φ_k},    (27)

where the overline denotes a time average of

    Φ_k(x, z^k) = (z^k)^T Df^{[k]}(x)\, z^k    (28)

along trajectories in the (x, z^k) state space. The ODE governing z^k(t) follows from the ODE (15) for y^k(t).
The x(t) dynamics together with the projected dynamics of k-vectors in its tangent space gives the ODE system

    \frac{d}{dt} \begin{pmatrix} x \\ z^k \end{pmatrix} = \begin{pmatrix} f(x) \\ ℓ_k(x, z^k) \end{pmatrix},    (29)

on the augmented state space B × S^{n_k−1}, where

    ℓ_k(x, z^k) = Df^{[k]}(x)\, z^k − Φ_k(x, z^k)\, z^k.    (30)

Representations of M_k as time averages over the projected tangent space dynamics (29) have often appeared in the study of stochastic dynamics, where they are called Furstenberg-Khasminskii formulas [2, chapter 6]. (In the stochastic case, Φ_k is defined with additional terms and is averaged with respect to stationary measures.)

Auxiliary function methods for bounding time averages can be applied to M_k by using its representation (27) as a time average in the augmented state space for (x, z^k). In particular, we directly apply the formula (5) with state variable X = (x, z^k) on domain B × S^{n_k−1} and with F(X) being the right-hand side of (29). The resulting inequality is the first part of proposition 2.6 below, for which no further proof is needed. If F ∈ C^1 and the domain is compact, then (5) holds with equality, as proved in [46]. This yields the second part of proposition 2.6 since assuming f ∈ C^2 gives that F ∈ C^1. The k = 1 special case of proposition 2.6 is proposition 1 in [34].

Proposition 2.6. Let \frac{d}{dt} x(t) = f(x(t)) with f ∈ C^1(B, R^n) and dynamics forward invariant on B ⊂ R^n. Let Φ_k(x, z^k) and ℓ_k(x, z^k) be defined as in (28) and (30), respectively.

1. Suppose each trajectory x(t) in B is bounded for t ≥ 0. For each k ∈ {1, ..., n}, the maximum leading LE sum M_k among trajectories in B is bounded above by

    \sup_{x_0∈B} M_k(x_0) ≤ \inf_{V∈C^1(B×R^{n_k})} \sup_{x∈B,\ z^k∈S^{n_k−1}} [Φ_k + f · D_xV + ℓ_k · D_{z^k}V],    (31)

where D_xV and D_{z^k}V denote the gradients of V(x, z^k) with respect to x and z^k, respectively.

2. Suppose B is compact and f ∈ C^2(B, R^n). Then (31) holds with equality.

The optimization problem on the right-hand side of (31) may be intractable, but it can be relaxed into computationally tractable optimization problems that also yield upper bounds on M_k. Relaxations using SOS polynomials are given in section 2.4 below.

The Lyapunov dimension (18) can be bounded above using similar ideas. One way is to apply upper bounds M_k ≤ B_k that have already been found using the right-hand side of (31) or its SOS relaxations. If such bounds are computed with B_k ≥ 0 but B_{k+1} < 0 for some k ∈ {1, ..., n}, then proposition 2.7 below gives an upper bound on d_L. To apply this result, one does not need to know whether k is the minimal integer j appearing in the definition (18) of d_L. It is enough to know that B_{k+1} < 0, which implies k ≥ j.

Proposition 2.7. Suppose that, for some k ∈ {1, ..., n} and values B_k ≥ 0 and B_{k+1} < 0,

    \sup_{x_0∈B} M_k(x_0) ≤ B_k   and   \sup_{x_0∈B} M_{k+1}(x_0) ≤ B_{k+1}.    (32)

Then, the Lyapunov dimension (18) is bounded above by

    d_L ≤ k + \frac{B_k}{B_k − B_{k+1}}.    (33)

Proof. The meaning of j in definition 2.3 and the fact that B_{k+1} < 0 together imply k ≥ j. According to remark 2.5, it suffices for the inequality (23) to hold for all x_0 with B = \frac{B_k}{B_k − B_{k+1}}. For any x_0 ∈ B, the assumptions give M_k(x_0) ≤ B_k and 0 < −B_{k+1} ≤ −M_{k+1}(x_0), so

    −B_{k+1} M_k(x_0) ≤ −B_k M_{k+1}(x_0),    (34)

and

    B_k M_k(x_0) − B_{k+1} M_k(x_0) ≤ B_k M_k(x_0) − B_k M_{k+1}(x_0).    (35)

Dividing by the positive quantity B_k − B_{k+1} and rearranging gives

    \frac{B_k}{B_k − B_{k+1}} [M_k(x_0) − M_{k+1}(x_0)] − M_k(x_0) ≥ 0,    (36)

which is the required inequality (23). This concludes the proof.
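Returning to the projected system (29), the augmented vector field is straightforward to assemble from the additive compound. Here is a minimal sketch, reusing additive_compound from the earlier code block and assuming user-supplied f and Df:

```julia
# Φk of (28) and the sphere-projected tangent dynamics ℓk of (30).
# If zk starts on the unit sphere it remains there, since d|zk|²/dt = 0.
Phik(Dfk, zk) = dot(zk, Dfk * zk)
ellk(Dfk, zk) = Dfk * zk - Phik(Dfk, zk) * zk

# Right-hand side of the augmented system (29) on B × S^(n_k − 1).
function augmented_rhs(f, Df, k, x, zk)
    Dfk = additive_compound(Df(x), k)
    return vcat(f(x), ellk(Dfk, zk))
end
```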
The upper bound on Lyapunov dimension given by proposition 2.7 cannot be sharp for certain systems, even if B_k and B_{k+1} are each sharp bounds in (32). The reason is that the inequality (34), when maximized over x_0, is an equality only if M_k(x_0) and M_{k+1}(x_0) attain their maxima on the same orbits. This is not true in general; see section 3.2 for a counterexample.

We thus give a second formulation for upper bounds on d_L that is generally sharper than proposition 2.7. This formulation directly considers the ratio in the Kaplan-Yorke formula (18), rather than separately bounding the M_k and M_{k+1} terms in that ratio. In particular, we consider the x(t) dynamics together with the projected dynamics of k-vectors and (k + 1)-vectors in its tangent space,

    \frac{d}{dt} \begin{pmatrix} x \\ z^k \\ z^{k+1} \end{pmatrix} = \begin{pmatrix} f(x) \\ ℓ_k(x, z^k) \\ ℓ_{k+1}(x, z^{k+1}) \end{pmatrix},    (37)

where ℓ_k is defined by (30), and likewise for ℓ_{k+1}. The augmented state space of this system is (x, z^k, z^{k+1}) ∈ B × S^{n_k−1} × S^{n_{k+1}−1}. The constraint in (19) from lemma 2.4 can then be expressed in terms of time averages in the ODE system (37). For this we define

    Ψ_B(x, z^k, z^{k+1}) = (1 − B) Φ_k(x, z^k) + B Φ_{k+1}(x, z^{k+1}),    (38)

and we consider the condition

    \overline{Ψ_B}(x_0, z^k_0, z^{k+1}_0) ≤ 0   for all x_0 ∈ B, z^k_0 ∈ S^{n_k−1}, z^{k+1}_0 ∈ S^{n_{k+1}−1},    (39)

where the time average denoted by the overbar is along trajectories of (37). Condition (39) is equivalent to the first condition in (19) of lemma 2.4, as explained below in the proof of proposition 2.8. The condition (39) can be enforced using the general formulation (5) for bounding time averages, applied to the augmented system (37). This yields the first part of proposition 2.8, whose sharpness is addressed by its second part. Proposition 2.8 can be applied only when the LE sum M_{k+1} for certain k is known to be negative on every trajectory. Such negativity can be confirmed by using (31), or its SOS relaxation that we introduce in section 2.4, to obtain a negative upper bound on M_{k+1}.

Proposition 2.8. Let \frac{d}{dt} x(t) = f(x(t)) with f ∈ C^1(B, R^n) and dynamics forward invariant on B ⊂ R^n. Let Φ_k(x, z^k) and ℓ_k(x, z^k) be as defined by (28) and (30), and likewise for Φ_{k+1}(x, z^{k+1}) and ℓ_{k+1}(x, z^{k+1}).

1. Suppose each trajectory x(t) in B is bounded for t ≥ 0. Let the positive integer k ≤ n be such that \sup_{x_0∈B} M_{k+1}(x_0) < 0. Then the global Lyapunov dimension d_L over B is bounded above by

    d_L ≤ \inf_{B∈[0,1],\ V∈C^1} (k + B)   s.t.   Ψ_B + f · D_xV + ℓ_k · D_{z^k}V + ℓ_{k+1} · D_{z^{k+1}}V ≤ 0
    for all x ∈ B, z^k ∈ S^{n_k−1}, z^{k+1} ∈ S^{n_{k+1}−1},    (40)

where V : B × S^{n_k−1} × S^{n_{k+1}−1} → R, and D_xV, D_{z^k}V and D_{z^{k+1}}V denote the gradients of V(x, z^k, z^{k+1}) with respect to each argument.

2. Suppose B is compact and f ∈ C^2(B, R^n). Let j be the minimum admissible k, as in definition 2.3 for d_L. Then, for all ε > 0,

    d_L − ε \left| \sup_{x_0∈B} M_{j+1}(x_0) \right|^{−1} ≤ j + B_ε ≤ d_L,    (41)

where

    B_ε = \inf_{B∈[0,1],\ V∈C^1} B   s.t.   Ψ_B + f · D_xV + ℓ_j · D_{z^j}V + ℓ_{j+1} · D_{z^{j+1}}V ≤ ε
    for all x ∈ B, z^j ∈ S^{n_j−1}, z^{j+1} ∈ S^{n_{j+1}−1}.    (42)

Proof. For the first part, the assumption that \sup_{x_0∈B} M_{k+1}(x_0) < 0 means that k ≥ j where j is as defined in definition 2.3. Applying remark 2.5, we therefore have

    d_L ≤ \inf_{B∈[0,1)} (k + B)   s.t.   B [M_k(x_0) − M_{k+1}(x_0)] − M_k(x_0) ≥ 0   for all x_0 ∈ B.    (43)

The constraint in (43) is equivalent to the \overline{Ψ_B} ≤ 0 condition (39). To see this, we rearrange the constraint in (43) and use the expression (27) for M_k to obtain an equivalent inequality,

    (1 − B) \sup_{z^k_0 ∈ S^{n_k−1}} \overline{Φ_k}(x_0, z^k_0) + B \sup_{z^{k+1}_0 ∈ S^{n_{k+1}−1}} \overline{Φ_{k+1}}(x_0, z^{k+1}_0) ≤ 0   ∀ x_0 ∈ B.    (44)

This inequality holds if and only if it holds without the suprema for all z^k_0 and z^{k+1}_0, which is exactly the \overline{Ψ_B} ≤ 0 condition (39). Writing this latter condition in place of the equivalent constraint in (43) gives

    d_L ≤ \inf_{B∈[0,1)} (k + B)   s.t.   \overline{Ψ_B}(x_0, z^k_0, z^{k+1}_0) ≤ 0   for all (x_0, z^k_0, z^{k+1}_0) ∈ B × S^{n_k−1} × S^{n_{k+1}−1}.    (45)
We want to replace the constraint in (45) with one that does not involve time averages. To do so we upper bound \overline{Ψ_B} using the general formulation (5) for time averages by letting X = (x, z^k, z^{k+1}) on domain Ω = B × S^{n_k−1} × S^{n_{k+1}−1}, and letting F(X) be the right-hand side of (37). This gives

    \sup_{x_0∈B,\ z^k_0∈S^{n_k−1},\ z^{k+1}_0∈S^{n_{k+1}−1}} \overline{Ψ_B} ≤ \inf_{V∈C^1} \sup_{x∈B,\ z^k∈S^{n_k−1},\ z^{k+1}∈S^{n_{k+1}−1}} [Ψ_B + f · D_xV + ℓ_k · D_{z^k}V + ℓ_{k+1} · D_{z^{k+1}}V].    (46)

The constraint of (43), which is equivalent to nonpositivity of the left-hand side of (46), can be strengthened by requiring there to exist V ∈ C^1 for which the inner supremum on the right-hand side is nonpositive. This gives the first part of the proposition.

For the second part, note that (43) is an equality when k = j, according to lemma 2.4. That equality can be expressed as

    d_L = \inf_{B∈[0,1)} (j + B)   s.t.   \overline{Ψ_B}(x_0, z^j_0, z^{j+1}_0) ≤ 0   for all (x_0, z^j_0, z^{j+1}_0) ∈ B × S^{n_j−1} × S^{n_{j+1}−1},    (47)

where equivalence of the constraints in (43) and (47) is as in the first part. To prove the second inequality in the claim (41), let B be any value for which the constraint in (47) holds. It suffices to prove existence of V for which the constraint in the definition (42) of B_ε holds also. This will imply that the infimum over B in (42) is at least as small as in (47). Under the assumptions that B is compact and f ∈ C^2, the X domain Ω is compact, and F(X) defined as the right-hand side of (37) is C^1 on an open region containing Ω. For such X and F, the general upper bound (5) is an equality [46], so in particular (46) is an equality. Choosing k = j, equality in (47) implies that the constraint in (47) is equivalent to

    0 ≥ \inf_{V∈C^1} \sup_{x∈B,\ z^j∈S^{n_j−1},\ z^{j+1}∈S^{n_{j+1}−1}} [Ψ_B + f · D_xV + ℓ_j · D_{z^j}V + ℓ_{j+1} · D_{z^{j+1}}V].    (48)

This inequality implies that, for any ε > 0, there exists V ∈ C^1 such that

    Ψ_B + f · D_xV + ℓ_j · D_{z^j}V + ℓ_{j+1} · D_{z^{j+1}}V ≤ ε   ∀ (x, z^j, z^{j+1}) ∈ B × S^{n_j−1} × S^{n_{j+1}−1}.    (49)

(Such V need not exist with ε = 0 if the infimum over V in the constraint of (46) is not attained.) Condition (49) is the same as the constraint in the minimization (42) defining B_ε, and this implies B_ε ≤ B. We have shown that B_ε ≤ B for any B satisfying the constraint in (47), and thus B_ε ≤ d_L − j. This establishes the second inequality in (41).

To prove the first inequality in the claim (41), suppose that the inequality constraint in the definition (42) of B_ε is satisfied for a pair of B and V. Averaging this inequality along any trajectory and then maximizing over the initial directions z^j_0 and z^{j+1}_0 gives

    M_j(x_0) + B μ_{j+1}(x_0) ≤ ε,    (50)

which we divide by −μ_{j+1} > 0 to find

    \frac{M_j(x_0)}{−μ_{j+1}(x_0)} − \frac{ε}{−μ_{j+1}(x_0)} ≤ B.    (51)

Maximizing the first term over x_0, and bounding the second term using μ_{j+1}(x_0) ≤ \sup_{x_0∈B} M_{j+1}(x_0) < 0, we find

    (d_L − j) − \frac{ε}{\left| \sup_{x_0∈B} M_{j+1}(x_0) \right|} ≤ B.    (52)

Since (52) holds at all B where the constraint set in the definition (42) of B_ε is not empty, minimizing over such B gives (52) with B_ε on the right-hand side. This establishes the first inequality in (41), which completes the proof.

Remark 2.9. The second part of proposition 2.8 implies that

    d_L ≤ j + B_ε + ε \left| \sup_{x_0∈B} M_{j+1}(x_0) \right|^{−1}    (53)

at each ε > 0, and that this upper bound converges to d_L as ε → 0.

2.3 Shifted spectrum approach

In the preceding subsection, the possibly unbounded k-vectors y^k in the tangent space are captured by z^k, which is bounded because it is the projection of y^k onto the unit sphere.
In the alternative approach of the present subsection, we capture the dynamics of y^k by letting

    w^k(t) = e^{−Bt} y^k(t),    (54)

where y^k(t) solves (15) with initial condition w^k_0, which can be any k-vector. This w^k(t) is bounded forward in time for sufficiently large B. The ODE for w^k is the same as the ODE (15) for y^k, except the spectrum of the Jacobian is shifted by B. Together with the ODE (6) for x(t) this gives the system

    \frac{d}{dt} \begin{pmatrix} x \\ w^k \end{pmatrix} = \begin{pmatrix} f(x) \\ m^B_k(x, w^k) \end{pmatrix},    (55)

where

    m^B_k(x, w^k) = Df^{[k]}(x)\, w^k − B w^k.    (56)

The following lemma asserts that if B is large enough to prevent unbounded growth of w^k, then B is an upper bound on the LE sum M_k, and the infimum over such B is equal to M_k.

Lemma 2.10. Let \frac{d}{dt} x(t) = f(x(t)) with f ∈ C^1(B, R^n) and dynamics forward invariant on B ⊂ R^n. Let the initial condition x_0 give a trajectory that is bounded for t ≥ 0. Let w^k(t) satisfy the ODE (55) with initial condition w^k_0. For each k ∈ {1, ..., n}:

1. The leading LE sum M_k(x_0) is equivalent to

    M_k(x_0) = \inf_{B∈R} B   s.t.   for each w^k_0 ∈ R^{n_k}, w^k(t) is bounded for all t ≥ 0.    (57)

2. For any B > M_k(x_0) there exist K(x_0, B) > 0 and λ(x_0, B) > 0 such that

    |w^k(t)| ≤ K e^{−λt} |w^k_0|   for all t ≥ 0 and w^k_0 ∈ R^{n_k}.    (58)

3. If B is compact and B > \sup_{x_0∈B} M_k(x_0), there exist K(B) and λ(B) independent of x_0 such that (58) holds for all x_0 ∈ B.

Proof. If there exists y^k_0 such that |y^k(t)| → ∞ in finite time then, according to (54), no B satisfies the condition in (57). Then both sides of (57) are ∞, and the second part holds vacuously. Henceforth we assume that w^k(t) exists for all t ≥ 0.

We will first show that if B satisfies the condition of (57) then M_k(x_0) ≤ B. Substituting y^k = e^{Bt} w^k into the time average on the right-hand side of (16) gives

    \limsup_{t→∞} \frac{1}{t} \log |y^k(t)| = \limsup_{t→∞} \frac{1}{t} \log \left( e^{Bt} |w^k(t)| / |w^k_0| \right) = B + \limsup_{t→∞} \frac{1}{t} \log |w^k(t)| ≤ B,    (59)

where the final inequality follows from the boundedness of w^k(t). Taking the supremum of the left-hand side over y^k_0 gives M_k(x_0) ≤ B.

Next we let B > M_k(x_0). Showing that (58) holds will prove the second part of the lemma, and it will complete the proof of the first part by ensuring boundedness of w^k(t). Without loss of generality, we assume |w^k_0| = 1. Since

    M_k(x_0) ≥ \limsup_{t→∞} \frac{1}{t} \log |y^k(t)| = B + \limsup_{t→∞} \frac{1}{t} \log |w^k(t)|,    (60)

we can choose some λ < B − M_k(x_0) so that \limsup_{t→∞} \frac{1}{t} \log |w^k(t)| < −λ. Therefore by Grönwall's inequality there exists a minimal T(w^k_0) ≥ 0 such that |w^k(t)| ≤ e^{−λt} for all t ≥ T(w^k_0). By compactness, we can let

    T^* = \max_{w^k_0 ∈ S^{n_k−1}} T(w^k_0),    (61)

so that |w^k(t)| ≤ e^{−λt} for all t ≥ T^* and all w^k_0 ∈ S^{n_k−1}. Finally we let

    K = \max_{t∈[0,T^*],\ w^k_0∈S^{n_k−1}} e^{λt} |w^k(t)|,    (62)

which completes the proof of the first two parts of the lemma.

For the third part, assume that M_k(x_0) is bounded above (otherwise the result is vacuously true), so that we can choose λ < B − \sup_{x_0∈B} M_k(x_0). Let

    K = \max_{x_0∈B,\ t∈[0,T^†],\ w^k_0∈S^{n_k−1}} e^{λt} |w^k(t)|,    (63)

where T^† = \max_{x_0∈B} T^*(x_0) for the T^*(x_0) defined in (61). This maximum is valid because (t, x_0, w^k_0) ↦ e^{λt} |w^k(t)| is continuous on the compact set [0, T^†] × B × S^{n_k−1}. With this value of λ and K, we must then have (58) for every x_0, which completes the proof.
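Lemma 2.10 already suggests a crude numerical test: integrate (55) for a trial shift B and bisect on whether |w^k| decays or grows. The sketch below is purely illustrative (the method of section 2.4 instead bisects over SDP feasibility) and reuses additive_compound from the earlier code block; the decay test at the final time is a heuristic.

```julia
using OrdinaryDiffEq, LinearAlgebra

# Rough estimate of Mk via lemma 2.10, part 1: B upper-bounds Mk exactly
# when the shifted tangent flow (55)-(56) keeps w^k bounded.
function mk_bisect(f, Df, x0, k; lo=-5.0, hi=5.0, T=50.0, tol=1e-3)
    n, nk = length(x0), binomial(length(x0), k)
    while hi - lo > tol
        B = (lo + hi) / 2
        rhs! = (du, u, p, t) -> begin
            x, w = u[1:n], u[n+1:end]
            du[1:n] .= f(x)
            du[n+1:end] .= additive_compound(Df(x), k) * w .- B .* w
        end
        u0 = vcat(x0, normalize(randn(nk)))
        sol = solve(ODEProblem(rhs!, u0, (0.0, T)), Tsit5(); save_everystep=false)
        norm(sol.u[end][n+1:end]) < 1 ? (hi = B) : (lo = B)   # decayed ⇒ B ≥ Mk
    end
    return hi
end
```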
Practical upper bounds on the LE sum M_k are found by combining lemma 2.10 with the specialised Lyapunov theorem given as lemma B.1 applied to the system (55). This yields the following proposition.

Proposition 2.11. Let \frac{d}{dt} x(t) = f(x(t)) with f ∈ C^1(B, R^n) and dynamics forward invariant on B ⊂ R^n. Let m^B_k(x, w^k) be defined as in (56).

1. Suppose each trajectory x(t) in B is bounded for t ≥ 0. For each k ∈ {1, ..., n}, the maximum leading LE sum M_k among trajectories in B is bounded above by

    \sup_{x_0∈B} M_k(x_0) ≤ \inf_{V∈C^1(B×R^{n_k})} B   s.t.   V ≥ |w^k|^2   and   f · D_xV + m^B_k · D_{w^k}V ≤ 0,    (64)

where the right-hand constraints are imposed for all (x, w^k) ∈ B × R^{n_k}.

2. Suppose B is compact and f ∈ C^2(B, R^n). Then (64) holds with equality between the supremum and the infimum, and the Lyapunov functions may be restricted to the form V(x, w^k) = (w^k)^T U(x)\, w^k with U ∈ C^1(B, R^{n_k×n_k}).

Proof. For the first part, assume B and V satisfy the constraints on the right-hand side of (64). Letting A(x) = Df^{[k]}(x) − B I_{n_k}, the first part of lemma B.1 applies to the system (55) and guarantees boundedness of w^k(t) for t ≥ 0. Boundedness of w^k(t) means that B satisfies the constraint on the right-hand side of lemma 2.10, so M_k(x_0) ≤ B for all x_0 ∈ B. This implies (64), so the first part of the proposition is proven.

For the second part, let B > \sup_{x_0} M_k(x_0). By the third part of lemma 2.10, there exist K > 0 and λ > 0 such that (135) holds for all x_0 ∈ B. Observe also that A ∈ C^1(B, R^{n_k×n_k}) since f ∈ C^2, and that A(x) is bounded uniformly in x since B is compact. Thus the second part of lemma B.1 applies, so there exists V(x, w^k) = (w^k)^T U(x)\, w^k satisfying the constraint in (64). This is true for any B > \sup_{x_0} M_k(x_0), so (64) is an equality.

This method gives bounds for the different sums of LEs, which can then be used with proposition 2.7 to bound the Lyapunov dimension. Analogously to proposition 2.8, a direct bound for the dimension is also possible with a modified shifted spectrum approach, using the augmented system

    \frac{d}{dt} \begin{pmatrix} x \\ y^{k+1} \\ w \end{pmatrix} = \begin{pmatrix} f(x) \\ Df^{[k+1]}(x)\, y^{k+1} \\ S^B_k(x, y^{k+1}, w) \end{pmatrix},    (65)

where we have defined

    S^B_k(x, y^{k+1}, w) = Df^{[k]}(x)\, w + \frac{B}{1 − B} \frac{(y^{k+1})^T Df^{[k+1]}(x)\, y^{k+1}}{|y^{k+1}|^2}\, w.    (66)

No shifting is needed in the y^{k+1} dynamics because this quantity decays exponentially on average when M_{k+1} < 0. The w dynamics (66) shift the Df^{[k]} spectrum not by a negative constant, as in (56), but by a factor proportional to the instantaneous growth rate of y^{k+1}, which is negative on average.

Proposition 2.12. Let \frac{d}{dt} x(t) = f(x(t)) with f ∈ C^1(B, R^n) and dynamics forward invariant on B ⊂ R^n with each trajectory x(t) bounded for t ≥ 0. Let the positive integer k ≤ n be such that \sup_{x_0∈B} M_{k+1}(x_0) < 0. Then the global Lyapunov dimension d_L over B is bounded above by

    d_L ≤ \inf_{B,\ V∈C^1} (k + B)   s.t.   V(x, y^{k+1}, w) ≥ |w|^2   and   f · D_xV + Df^{[k+1]} y^{k+1} · D_{y^{k+1}}V + S^B_k · D_wV ≤ 0,    (67)

where the right-hand constraints are imposed for all (x, y^{k+1}, w) ∈ B × R^{n_{k+1}} × R^{n_k}, and D_xV, D_{y^{k+1}}V and D_wV denote the gradients of V(x, y^{k+1}, w) with respect to each argument.

Proof. We will prove that, for any B and V satisfying the constraints in (67), d_L ≤ k + B. We will use remark 2.5 to do this. First note that \sup_{x_0∈B} M_{k+1}(x_0) < 0 implies that k ≥ j where j is the minimal integer of definition 2.3. Observe that the dynamics of w in the system (65) are of the form \frac{dw}{dt} = A(x, y^{k+1})\, w for some matrix A, so we may apply lemma B.1 to this system. This tells us that given any B and V satisfying the constraints, w(t) is bounded for any initial conditions.

We now suppose that we have such a V and B and show that (23) holds for all x_0. Consider any initial conditions for (65). Let y^k = |y^{k+1}|^{−B/(1−B)}\, w, which combined with (66) implies that y^k satisfies the usual tangent dynamics (15). We then have

    \frac{w^T Df^{[k]}\, w}{|w|^2} = \frac{(y^k)^T Df^{[k]}\, y^k}{|y^k|^2}.    (68)
Since |w(t)| is bounded,

    0 ≥ \limsup_{T→∞} \frac{1}{T} \log \frac{|w(T)|}{|w(0)|}    (69)
    = \limsup_{T→∞} \frac{1}{T} \int_0^T \frac{d}{dt} \log \frac{|w(t)|}{|w(0)|} \, dt    (70)
    = \limsup_{T→∞} \frac{1}{T} \int_0^T \frac{\tfrac{1}{2} \frac{d|w|^2}{dt}}{|w|^2} \, dt    (71)
    = \limsup_{T→∞} \frac{1}{T} \int_0^T \left( \frac{w^T Df^{[k]}\, w}{|w|^2} + \frac{B}{1 − B} \frac{(y^{k+1})^T Df^{[k+1]}\, y^{k+1}}{|y^{k+1}|^2} \right) dt.    (72)

Using (68) and (26), this gives

    0 ≥ \limsup_{t→∞} \frac{1}{t} \log |y^k(t)| + \frac{B}{1 − B} \limsup_{t→∞} \frac{1}{t} \log |y^{k+1}(t)|.    (73)

Since this holds for all y^{k+1} and y^k, by (27) we must have

    0 ≥ M_k(x_0) + \frac{B}{1 − B} M_{k+1}(x_0),    (74)

which rearranges to

    B [M_k(x_0) − M_{k+1}(x_0)] − M_k(x_0) ≥ 0.    (75)

Remark 2.5 finishes the proof.

2.4 Sum-of-squares relaxations

In sections 2.2 and 2.3 we have formulated upper bounds on LE sums and on Lyapunov dimension that involve pointwise inequality constraints on functions over sets or, equivalently, pointwise suprema of functions over sets. All of these bounds can be expressed as solutions to optimization problems with pointwise nonnegativity constraints. When all expressions are polynomial, nonnegativity can be strengthened into polynomial SOS conditions. This approach is also the basis of the previous paper [34], and we briefly recall the main concepts here.

In order for a function S : R^n → R to satisfy S(X) ≥ 0 for all X ∈ R^n, it suffices that S ∈ Σ, where Σ denotes the set of polynomials that are representable as a sum of squares of other polynomials. The set of nonnegative polynomials strictly contains Σ, and the relationship between these sets has been extensively studied [43]. Often one wants to enforce S(X) ≥ 0 on a subset of R^n without requiring nonnegativity on all of R^n. For instance, one may be interested in a set Ω with a 'semialgebraic' definition in terms of polynomial equalities and inequalities,

    Ω = {X ∈ R^n : g_i(X) ≥ 0 for i = 1, ..., I,   h_j(X) = 0 for j = 1, ..., J},    (76)

where the g_i ∈ R[X] and h_j ∈ R[X] are given. Here R[X] denotes the ring of polynomials in X with real coefficients. There are various ways to define a set Σ^Ω of polynomials whose nonnegativity on Ω is implied by SOS conditions. Here we choose the set of polynomials that is called the quadratic module generated by the polynomials that define Ω,

    Σ^Ω = \left\{ σ_0 + \sum_{i=1}^I σ_i g_i + \sum_{j=1}^J ρ_j h_j : σ_i ∈ Σ for i = 0, ..., I,   ρ_j ∈ R[X] for j = 1, ..., J \right\}.    (77)

Any polynomial in Σ^Ω must be nonnegative on Ω because σ_0(X) ≥ 0 at every X ∈ R^n, while each σ_i(X) g_i(X) ≥ 0 and each ρ_j(X) h_j(X) = 0 at every X ∈ Ω. When Ω = R^n, there are no g_i or ρ_j, so Σ^Ω = Σ. In the preceding paper [34], the condition S ∈ Σ^Ω was expressed in a different but equivalent way: by saying there exist σ_i and ρ_j such that S − \sum σ_i g_i − \sum ρ_j h_j is SOS.

Computational formulations enforce membership in Σ^Ω via a stronger constraint of membership in a finite-dimensional subspace. The simplest choice is to enforce membership in Σ^Ω_ν, which is the truncation of the quadratic module (77) where each σ_i and ρ_j has polynomial degree no larger than ν. This gives a hierarchy of computational formulations, where raising ν cannot worsen results, and typically improves them, but makes computations more expensive. In practice, it is often useful to choose the spaces for σ_i and ρ_j more carefully. In our computations of section 3, for instance, we impose symmetries and allow different maximum degrees in different variables. For simplicity in the present subsection, however, we write formulations in terms of the simple truncation Σ^Ω_ν.
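To make the passage from pointwise nonnegativity to SOS constraints concrete, consider a toy scalar example of our own (not one from the paper): for dx/dt = x − x^3, trajectories approach x = ±1 or remain at the equilibrium x = 0, so the maximal time average of Φ(x) = x^2 is 1. The auxiliary function V = x^2/2 certifies this exactly, since B − Φ − fV′ = 1 − 2x^2 + x^4 = (1 − x^2)^2 is SOS at B = 1. In SumOfSquares.jl this reads:

```julia
using SumOfSquares, DynamicPolynomials, JuMP
import Clarabel   # any JuMP-compatible SDP solver; the paper mainly used Mosek

@polyvar x
f = x - x^3
Phi = x^2

model = SOSModel(Clarabel.Optimizer)
@variable(model, B)
@variable(model, V, Poly(monomials([x], 0:4)))
# Enforce B - Phi - f·DV ∈ Σ, i.e. the Ω = ℝ case of the quadratic module.
@constraint(model, B - Phi - f * differentiate(V, x) >= 0)
@objective(model, Min, B)
optimize!(model)
println(value(B))   # ≈ 1.0, the sharp bound on the time average of x²
```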
The upper bounds on LE sums and Lyapunov dimension formulated in sections 2.2 and 2.3 are stated as, or can be reformulated as, minimizations subject to pointwise inequalities on V. When the ODE right-hand side f(x) is polynomial, and when we restrict to auxiliary functions V in some finite-dimensional space of polynomials, then the pointwise inequalities can be enforced using SOS conditions. This gives SOS optimization problems that are computationally tractable when they are not too large, and whose minima give upper bounds on the desired quantities. For the shifted spectrum approach of section 2.3, the SOS relaxations are immediate after choosing a polynomial space for V and enforcing nonnegativity on some set Ω via membership in Σ^Ω. For the sphere projection approach of section 2.2, the relevant min-max problems can be expressed as constrained minimizations, after which the SOS relaxations are simple.

2.4.1 Sphere projection

The sphere projection approach of section 2.2 is based on the general upper bound (5) on time averages, which can be expressed as

    \max_{X(0)∈Ω} \overline{Φ} ≤ \inf_{V∈C^1(Ω)} B   s.t.   B − Φ − F · DV ≥ 0   ∀ X ∈ Ω.    (78)

An SOS relaxation of the right-hand side gives an upper bound, stated in the first part of proposition 2.13 below. Furthermore, this upper bound is an equality if Ω is forward invariant and its semialgebraic specification (76) has the Archimedean property, as stated in the second part of proposition 2.13 below. The Archimedean property implies compactness of Ω and is only slightly stronger; any specification of compact Ω that is not Archimedean can be given this property by adding the polynomial g_i(X) = R^2 − |X|^2 with a constant R that is large enough to not change Ω itself. A proof of the second part of proposition 2.13 can be found in [19, Theorem 1]. Briefly, since Ω is compact and forward invariant, equality in (78) is given by the main theorem of [46], whose applicability on compact manifolds is addressed after (8) in [34]. Then, the Archimedean property guarantees that Σ^Ω contains B − Φ − F · DV for each suboptimal B, by Putinar's Positivstellensatz [41, Lemma 4.1].

Proposition 2.13. Let \frac{d}{dt} X(t) = F(X(t)) with F ∈ R^n[X], and let Φ ∈ R[X]. Let the dynamics be forward invariant on a semialgebraic set Ω ⊂ R^n, whose specification defines the quadratic module Σ^Ω.

1. If each trajectory X(t) in Ω is bounded for t ≥ 0, then

    \max_{X_0∈Ω} \overline{Φ} ≤ \inf_{V∈R[X]} B   s.t.   B − Φ − F · DV ∈ Σ^Ω.    (79)

2. If Ω is compact and its semialgebraic specification is Archimedean, then

    \max_{X_0∈Ω} \overline{Φ} = \inf_{V∈R[X]} B   s.t.   B − Φ − F · DV ∈ Σ^Ω.    (80)

All of the upper bounds from section 2.2 can be relaxed into SOS optimization problems by the same reasoning that leads from (5) to (79). Proposition 2.15 below is an SOS relaxation of proposition 2.6 for bounding LE sums. Computations giving such bounds also imply bounds on Lyapunov dimension via proposition 2.7, but potentially sharper bounds on dimension can be computed directly using proposition 2.16, which is an SOS relaxation of proposition 2.8. In the proposition statements, R^n[x] denotes the space of polynomials in x taking vector values in R^n, and the x(t) domain is a semialgebraic set,

    B = {x ∈ R^n : g_i(x) ≥ 0 for i = 1, ..., I,   h_j(x) = 0 for j = 1, ..., J}.    (81)

This is understood to include the B = R^n case when there are no g_i or h_j. The k = 1 special case of proposition 2.15 is essentially Proposition 2 of [34], although the SOS conditions are more explicit in [34], and the restriction to finite-dimensional polynomial spaces is more explicit here.
Remark 2.14. In the following propositions 2.15 to 2.18, for fixed k ∈ {1, ..., n} and polynomial degrees ν and ν′, the computation of the resulting upper bound C_i(k, ν, ν′) is an SOS program that can be solved using standard software (see section 3). For certain combinations (k, ν, ν′), there will not exist any V in the chosen space of polynomials that satisfies the SOS constraints, in which case we understand the infimum over bounds to be +∞, meaning that we obtain no upper bound. When bounding Lyapunov dimension, choosing k > d_L − 1 is necessary for the constraint set to not be empty. In all cases, raising the polynomial degrees ν and ν′ cannot shrink the constraint set and will often enlarge it.

Proposition 2.15. Let \frac{d}{dt} x(t) = f(x(t)) with f ∈ R^n[x], for which a semialgebraic set B is forward invariant. Let Σ^Ω be the quadratic module (77) for the set Ω = B × S^{n_k−1} whose semialgebraic specification includes the polynomial constraints on x in (81) and the equality |z^k|^2 − 1 = 0. Let Φ_k(x, z^k) and ℓ_k(x, z^k) be defined as in (28) and (30), respectively.

1. Suppose that each trajectory x(t) in B is bounded for t ≥ 0. For each k ∈ {1, ..., n}, and each pair of polynomial degrees (ν, ν′), the maximum leading LE sum M_k among trajectories in B is bounded above by

    \sup_{x_0∈B} M_k(x_0) ≤ C_1(k, ν, ν′),    (82)

where

    C_1(k, ν, ν′) = \inf_{V∈R[x,z^k]_ν} B   s.t.   B − Φ_k − f · D_xV − ℓ_k · D_{z^k}V ∈ Σ^Ω_{ν′}.    (83)

2. Suppose further that the invariant set B is compact, and its semialgebraic specification is Archimedean. Then, for each k,

    \sup_{x_0∈B} M_k(x_0) = \inf_{ν,ν′∈Z_{≥0}} C_1(k, ν, ν′).    (84)

Proof. The first part is an SOS relaxation of the upper bound (31) on M_k for each k. Equivalently, it is a particular case of (79) with X, F and Φ as described above in proposition 2.6, where V is further constrained by having degree no larger than ν, and where σ_i and ρ_j in the quadratic module representation (77) have degrees no larger than ν′. The second part is the analogous special case of (80). This equality is applicable because B is specified as an Archimedean subset of R^n, and so adding |z^k|^2 − 1 = 0 gives an Archimedean specification of Ω in R^{n+n_k}. All polynomial degrees are admitted in (80), which is equivalent to the infimum over ν and ν′ in (84).

Proposition 2.16. Let \frac{d}{dt} x(t) = f(x(t)) with f ∈ R^n[x], for which a semialgebraic set B is forward invariant, and let the integer k ≤ n be such that \sup_{x_0∈B} M_{k+1}(x_0) < 0. Let Σ^Ω be the quadratic module (77) for the set Ω = B × S^{n_k−1} × S^{n_{k+1}−1} whose semialgebraic specification includes the polynomial constraints on x in (81), and the equalities |z^k|^2 − 1 = 0 and |z^{k+1}|^2 − 1 = 0. Let Φ_k(x, z^k) and ℓ_k(x, z^k) be defined as in (28) and (30), respectively, and likewise for Φ_{k+1}(x, z^{k+1}) and ℓ_{k+1}(x, z^{k+1}).

1. Suppose that each trajectory x(t) in B is bounded for t ≥ 0. For each pair of polynomial degrees (ν, ν′), the maximum Lyapunov dimension d_L among trajectories in B is bounded above by

    d_L ≤ k + C_2(k, ν, ν′, 0),    (85)

where

    C_2(k, ν, ν′, ε) = \inf_{B∈[0,1],\ V∈R[x,z^k,z^{k+1}]_ν} B   s.t.   ε − [Ψ_B + f · D_xV + ℓ_k · D_{z^k}V + ℓ_{k+1} · D_{z^{k+1}}V] ∈ Σ^Ω_{ν′}.    (86)

2. Suppose further that the invariant set B is compact, and its semialgebraic specification is Archimedean. Let j be the minimum admissible k, as in definition 2.3 for d_L. Then,

    d_L = j + \inf_{ν,ν′∈Z_{≥0},\ ε>0} C_2(j, ν, ν′, ε).    (87)

Proof. The first part follows from the upper bound (40) on d_L because the SOS constraint in (86) with ε = 0 implies the nonpositivity constraint on the right-hand side of (40). For the second part, we first establish upper and lower bounds on C_2 in terms of d_L.
For the lower bound, note that the constraint in (86) implies the constraint in the definition (42) of B_ε. This implies B_ε ≤ C_2(j, ν, ν′, ε), with which the first inequality in (41) gives

    d_L − ε \left| \sup_{x_0∈B} M_{j+1}(x_0) \right|^{−1} ≤ j + C_2(j, ν, ν′, ε)    (88)

for all ε > 0 and ν, ν′ ∈ Z_{≥0}. Minimizing over these parameters gives

    d_L ≤ j + \inf_{ν,ν′∈Z_{≥0},\ ε>0} C_2(j, ν, ν′, ε).    (89)

The infimum will be approached or attained as ν, ν′ → ∞ and ε → 0^+.

For the upper bound on C_2, note that since B has an Archimedean specification, so does Ω. Thus we can apply the general SOS upper bound with equality (80), in particular with F(X) being the right-hand side of the ODE system (37), with Φ(X) being Ψ_B, and with k = j. This gives

    \sup_{x_0∈B,\ z^j_0∈S^{n_j−1},\ z^{j+1}_0∈S^{n_{j+1}−1}} \overline{Ψ_B} = \inf_{V∈R[x,z^j,z^{j+1}]} ε   s.t.   ε − [Ψ_B + f · D_xV + ℓ_j · D_{z^j}V + ℓ_{j+1} · D_{z^{j+1}}V] ∈ Σ^Ω.    (90)

Note that the variable called B in (80) is ε in (90), where B has a different meaning. Let B be such that the constraint holds in expression (47) for d_L. This constraint is equivalent to both sides of (90) being nonpositive, and nonpositivity of the right-hand infimum in (90) implies that there exists a polynomial V satisfying the SOS constraint for any ε > 0. Since such V exists for any B > d_L − j,

    d_L − j ≥ \inf_{B∈[0,1],\ V∈R[x,z^j,z^{j+1}]} B   s.t.   ε − [Ψ_B + f · D_xV + ℓ_j · D_{z^j}V + ℓ_{j+1} · D_{z^{j+1}}V] ∈ Σ^Ω    (91)

for all ε > 0. Restricting the right-hand minimization to V of degree ν and weights σ_i and ρ_j of degree ν′ gives the definition of C_2(j, ν, ν′, ε). Thus, for all ε > 0,

    j + \inf_{ν,ν′∈Z_{≥0}} C_2(j, ν, ν′, ε) ≤ d_L.    (92)

(This inequality need not hold with finite ν and ν′.) Combining (89) and (92) establishes the claim (87) of the second part and completes the proof.

The minimization problems defining C_1 and C_2 in (83) and (86) of propositions 2.15 and 2.16 are SOS programs. That is, polynomials of finite degree are constrained to be SOS, and these polynomials as well as the optimization objective are affine in the optimization parameters. Here the optimization parameters include B as well as the coefficients of V in whatever basis it is represented. Such SOS programs are computationally tractable so long as the dimension of X and the degree of polynomials in the SOS constraints are not too large. Increasing ν and ν′ gives a hierarchy of SOS programs with increasing computational cost whose minima cannot increase. Typically these minima decrease as ν and ν′ are raised, and the second part of each theorem guarantees convergence to the sharp bound under mild conditions. More generally, a hierarchy of SOS programs can be defined by choosing different sequences of finite-dimensional polynomial spaces, rather than simply truncating by maximum degrees ν and ν′. This often includes imposing symmetries on each polynomial subspace due to symmetries of f(x), as explained in section 2.5 below.

2.4.2 Shifted spectrum

Next we give the SOS formulations based on the shifted spectrum approach. Proposition 2.17 below is an SOS relaxation of proposition 2.11 for bounding LE sums. Computations giving such bounds also imply bounds on Lyapunov dimension via proposition 2.7, but potentially sharper bounds on dimension can be computed directly using proposition 2.18, which is an SOS relaxation of proposition 2.12. Unlike our other methods, we do not prove that proposition 2.18 gives sharp results in general.

Proposition 2.17. Let \frac{d}{dt} x(t) = f(x(t)) with f ∈ R^n[x], for which a semialgebraic set B is forward invariant. Let m^B_k(x, w^k) be defined as in (56).
Let Σ^Ω be the quadratic module (77) for the set Ω = B × R^{n_k} whose semialgebraic specification includes the same constraints (81) defining B.

1. For each k ∈ {1, ..., n}, and each pair of polynomial degrees (ν, ν′), the maximum leading LE sum M_k among trajectories in B is bounded above by

    \sup_{x_0∈B} M_k(x_0) ≤ C_3(k, ν, ν′),    (93)

where

    C_3(k, ν, ν′) = \inf_{V∈R[x,w^k]_ν} B   s.t.   V − |w^k|^2 ∈ Σ^Ω_{ν′}   and   −[f · D_xV + m^B_k · D_{w^k}V] ∈ Σ^Ω_{ν′}.    (94)

2. Suppose that the invariant set B is compact, and that its semialgebraic specification is Archimedean. Then, for each k,

    \sup_{x_0∈B} M_k(x_0) = \inf_{ν,ν′∈Z_{≥0}} C_3(k, ν, ν′),    (95)

where the infimum in (94) is over V(x, w^k) = (w^k)^T U(x)\, w^k with U ∈ R^{n_k×n_k}[x] a symmetric polynomial matrix, and where σ_i(x, w^k) and ρ_j(x, w^k) in the SOS representation (77) are likewise quadratic in w^k.

Proof. The first part follows from the upper bound (64) on M_k because the SOS constraints in (94) imply the nonnegativity constraints in (64) from proposition 2.11.

For the second part, let B_1 and B_2 be arbitrary with B_1 > B_2 > \sup_{x_0∈B} M_k(x_0). It suffices to show that there exists a polynomial matrix U(x) such that V(x, w^k) = (w^k)^T U(x)\, w^k satisfies the SOS constraints of (94) for B = B_1 with no restrictions on polynomial degree. Since B_2 > \sup_{x_0∈B} M_k(x_0), the second part of proposition 2.11 guarantees the existence of a symmetric U ∈ C^1(R^n, R^{n_k×n_k}) for which V satisfies the pointwise inequalities in (64) with B = B_2. We will show that this implies existence of the desired polynomial U under the present assumptions.

The pointwise inequalities in (64), with B = B_2 in the V = (w^k)^T U w^k case, can be written in the form (w^k)^T \hat{Z}_i(x)\, w^k ≥ 0, where

    \hat{Z}_1(x) = U(x) − I_{n_k},    (96)
    \hat{Z}_2(x) = −f(x) · D_xU(x) + 2 U(x) (B_2 I_{n_k} − Df^{[k]}(x)),    (97)

with I_{n_k} denoting the n_k × n_k identity matrix. The pointwise inequalities on B × R^{n_k} are equivalent to the condition that both \hat{Z}_i(x) ⪰ 0 for all x ∈ B. In fact, by scaling U we can satisfy both of these conditions with the additional restriction that \hat{Z}_1 ≻ 0 strictly on B. Let

    Z_1(x) = U(x) − I_{n_k},    (98)
    Z_2(x) = −f(x) · D_xU(x) + 2 U(x) (B_1 I_{n_k} − Df^{[k]}(x)),    (99)

which are the same definitions with B_2 replaced by B_1. Since B_1 > B_2, we now have both Z_i(x) ≻ 0 strictly for all x ∈ B. Because there exists U ∈ C^1 such that both Z_i(x) ≻ 0 on B, and B is compact, there also exists a polynomial matrix U ∈ R^{n_k×n_k}[x] for which both Z_i(x) ≻ 0 on B. This follows from a strengthened version of the Stone-Weierstrass approximation theorem [25].

This polynomial U satisfies the SOS constraints of (94) due to B being Archimedean and the Z_i being strictly positive definite on B. Recalling that B is the set where all g_i(x) ≥ 0 and h_j(x) = 0, applying a matrix Positivstellensatz [44, theorem 2] guarantees existence of polynomial matrices S_i(x) and T_j(x), with the S_i(x) being sums of squares of other polynomial matrices, such that

    Z_1(x) = S_0(x) + \sum_{i=1}^I g_i(x) S_i(x) + \sum_{j=1}^J h_j(x) T_j(x),    (100)

and likewise for Z_2. The existence of such representations implies that both (w^k)^T Z_i w^k belong to the quadratic module Σ^Ω. Therefore V = (w^k)^T U w^k satisfies the constraints of (94) with B = B_1, and the second part of the proposition is proven.
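As with the sphere projection relaxation, a toy example may clarify how (94) is used. For dx/dt = x − x^3 and k = 1 we have n_k = 1, so V = U(x) w^2 and the two constraints of (94) reduce to U − 1 ∈ Σ and −[fU′ + 2(Df − B)U] ∈ Σ. Because B multiplies the unknown coefficients of U, we fix B and test feasibility, bisecting as described in section 3; here sup M_1 = 1, attained at the equilibrium x = 0. This sketch is our own and hypothetical:

```julia
using SumOfSquares, DynamicPolynomials, JuMP
import Clarabel

# Feasibility test of the SOS constraints in (94) for a fixed shift B,
# with dx/dt = x - x^3, k = 1, and V(x, w) = U(x) w^2.
function shifted_feasible(B; deg=4)
    @polyvar x
    f, Dfx = x - x^3, 1 - 3x^2
    model = SOSModel(Clarabel.Optimizer)
    set_silent(model)
    @variable(model, U, Poly(monomials([x], 0:deg)))
    @constraint(model, U - 1 >= 0)                                  # V ≥ |w|²
    @constraint(model, -(f * differentiate(U, x) + 2 * (Dfx - B) * U) >= 0)
    optimize!(model)
    return termination_status(model) == MOI.OPTIMAL
end

println(shifted_feasible(1.05))   # true:  B above sup M1 = 1
println(shifted_feasible(0.95))   # false: at x = 0 we would need U(0) ≤ 0
```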
Proposition 2.18. Let \frac{d}{dt} x(t) = f(x(t)) with f ∈ R^n[x], for which a semialgebraic set B ⊂ R^n is forward invariant, with each trajectory x(t) bounded for t ≥ 0. Let the positive integer k ≤ n be such that \sup_{x_0∈B} M_{k+1}(x_0) < 0. Let Σ^Ω be the quadratic module (77) for the set Ω = B × R^{n_{k+1}} × R^{n_k} whose semialgebraic specification includes the polynomial constraints on x in (81). For each pair of polynomial degrees (ν, ν′), the maximum Lyapunov dimension d_L among trajectories in B is bounded above by

    d_L ≤ k + C_4(k, ν, ν′),    (101)

where

    C_4(k, ν, ν′) = \inf_{B∈[0,1],\ V∈R[x,y^{k+1},w]_ν} B   s.t.   V − |w|^2 ∈ Σ^Ω_{ν′}   and
    −(1 − B) |y^{k+1}|^2 [f · D_xV + Df^{[k+1]} y^{k+1} · D_{y^{k+1}}V + S^B_k · D_wV] ∈ Σ^Ω_{ν′},    (102)

in which the factor (1 − B)|y^{k+1}|^2 clears the denominator of S^B_k in (66) so that the constrained expression is polynomial.

2.5 Symmetry exploitation

If an ODE being studied using auxiliary function methods has symmetry, then in many cases the same symmetry can be imposed on the auxiliary functions, leading to symmetries in SOS programs that can be exploited computationally by block diagonalizing the corresponding semidefinite programs (SDPs) [11]. More precisely, an ODE system (1) with right-hand side F(X) is invariant under a map Λ : R^n → R^n if and only if F is Λ-equivariant, meaning that F(ΛX) = Λ F(X) for all X. In such cases, X(t) solves the ODE if and only if Λ X(t) does. The trajectory itself may be Λ-invariant, meaning that X(t) = Λ X(t), or it may not be. For any ODE, the set of such transformations forms a group. We focus on the common situation where each Λ is an orthogonal linear transformation on R^n, meaning the group of such Λ is a subgroup of the orthogonal group O(n). (In some contexts the set of Λ would be called the linear representation of a group, rather than a group itself, but we do not draw this distinction.) In many formulations of auxiliary function methods for ODEs, if the ODE is invariant under Λ then the results are provably unchanged if the optimization is over the restricted set of V which are Λ-invariant, meaning V(X) = V(ΛX) for all X. In the context of bounding time averages, see the appendix of [34] for a proof that V(X) can be taken as Λ-invariant under certain conditions. For our shifted spectrum methods, the Lyapunov function in lemma B.1 can be taken as Λ-invariant also. We omit the proof, which closely follows the argument in [34].

In order to apply such results in the context of bounding LE sums and Lyapunov dimension, we must determine the symmetries of ODEs on the augmented state spaces for (x, z^k) and (x, w^k), governed by (29) and (55). First, regardless of whether the x ODE has any symmetry, the augmented ODEs have a sign symmetry due to linearity of the y^k dynamics. These sign symmetries are summarized by proposition 2.19. Second, if the right-hand side f(x) of the x ODE is equivariant under an orthogonal transformation, this induces equivariance on the right-hand sides of the augmented ODEs. This is summarized by proposition 2.20, which invokes the kth multiplicative compound Λ^{(k)} of Λ (cf. appendix A). Note that I_n denotes the n × n identity matrix, and diag(A, ..., B) is a block diagonal matrix with square blocks A, ..., B.

Proposition 2.19. Let \frac{d}{dt} x(t) = f(x(t)) with f : B → R^n. The augmented ODEs are invariant under the following orthogonal transformations.

1. The ODE system (29) is invariant under diag(I_n, −I_{n_k}).

2. The ODE system (37) is invariant under diag(I_n, ±I_{n_k}, ∓I_{n_{k+1}}).

3. The ODE system (55) is invariant under diag(I_n, −I_{n_k}).

4. The ODE system (65) is invariant under diag(I_n, ±I_{n_{k+1}}, ∓I_{n_k}).

Proof. All four parts have very similar proofs, so we prove only the last. The claim amounts to the right-hand side of (65) being equivariant under negation of y^{k+1}. For f(x) this statement is trivial. The requirement that Df^{[k+1]}(x)(−y^{k+1}) = −Df^{[k+1]}(x)\, y^{k+1} is also immediate.
The final requirement that S^B_k(x, −y^{k+1}, w) = S^B_k(x, y^{k+1}, w) holds because y^{k+1} appears quadratically in both the numerator and denominator in (66).

Proposition 2.20. Let B ⊂ R^n be invariant under Λ ∈ O(n). If \frac{d}{dt} x(t) = f(x(t)) with f : B → R^n is invariant under Λ, then the augmented ODEs are invariant under the following orthogonal transformations.

1. The ODE system (29) is invariant under diag(Λ, Λ^{(k)}).

2. The ODE system (37) is invariant under diag(Λ, Λ^{(k)}, Λ^{(k+1)}).

3. The ODE system (55) is invariant under diag(Λ, Λ^{(k)}).

4. The ODE system (65) is invariant under diag(Λ, Λ^{(k+1)}, Λ^{(k)}).

Proof. By assumption f(x) is equivariant under the orthogonal transformation Λ, meaning f(Λx) = Λ f(x). The multiplicative compound Λ^{(k)} is orthogonal by proposition A.2, thus all transformations in the proposition are orthogonal. The claim is proved by showing that the right-hand sides of the augmented ODEs are equivariant under the claimed transformations. All four parts are very similar, so we prove only the first. For this we must show that ℓ_k(Λx, Λ^{(k)} z^k) = Λ^{(k)} ℓ_k(x, z^k). It is sufficient to show that

    Df^{[k]}(Λx)\, Λ^{(k)} z^k = Λ^{(k)} Df^{[k]}(x)\, z^k,    (103)

so that

    Φ_k(Λx, Λ^{(k)} z^k) = Φ_k(x, z^k)    (104)

by orthogonality of Λ^{(k)}. Oeri and Goluskin [34] showed that Df(Λx) = Λ Df(x) Λ^T. Taking the kth additive compound of this equation gives, via proposition A.3,

    Df^{[k]}(Λx) = Λ^{(k)} Df^{[k]}(x) (Λ^{(k)})^T.    (105)

Orthogonality then gives (103). The proofs of the other parts follow similarly, noting that (103) remains valid with y^k or w^k in place of z^k.

Combining both propositions gives any finite symmetry group for the augmented systems. For example, if B is invariant and f is equivariant under a group generated by {Λ_1, ..., Λ_j} then the right-hand sides of the augmented systems (29) and (55) are equivariant under the group generated by

    { diag(Λ_1, Λ^{(k)}_1), ..., diag(Λ_j, Λ^{(k)}_j), diag(I_n, −I_{n_k}) },    (106)

and we can restrict to V that are invariant under the same transformations. This can be enforced by representing V in a symmetry-invariant basis. For sign symmetries this can be done with monomial bases, but general rotations require more sophisticated choices of symmetry-invariant polynomial basis. Note that for the shifted spectrum method of proposition 2.17 it is sufficient to consider V that are quadratic in w^k, which automatically gives invariance of V with respect to the symmetry of proposition 2.19.

3 Computational examples

We now present computational examples in which our methods based on SOS optimization are applied to particular ODEs. It has been conjectured that, for all systems with equilibria embedded in chaotic attractors, the supremum in the formula for the Lyapunov dimension (18) is attained on an equilibrium [9, 22]. This indeed is the case for many simple systems, including Lorenz's 1963 model [23]. In order to demonstrate the power of our methods, we focus on two example systems for which we know there are no equilibria.

For all of our examples, the code to replicate the results, implemented in Julia, is available at https://github.com/jeremypparker/Parker_Goluskin_LEs. We used SumOfSquares.jl [20] to reformulate SOS optimization problems as SDPs, along with SymbolicWedderburn.jl [15] for symmetry exploitation. To solve the resulting SDPs, we primarily used Mosek [1]. Certain SDPs were also solved using Clarabel.jl [14] and Hypatia.jl [5], which gave very similar results with longer runtimes.
In applications of the sphere projection approach, the bounds B can be minimized as the objective of the SDPs, and the bounds we report are the numerical optima returned by Mosek. In applications of the shifted spectrum approach, the value of B must be fixed during each SDP solution because it multiplies coefficients of V in the SOS constraints, and the coefficients to be optimized over can appear only linearly. In such computations, if there exists a polynomial V satisfying the SOS constraints, then the SDP will have a nonempty constraint set, and the SDP solver should report feasibility. We thus perform a bisection search in B to find the boundary between large B values where the SDPs are feasible and small B values where they are infeasible. Close to this boundary the numerical conditioning of the SDPs becomes increasingly poor, making it difficult to precisely identify the infimum over admissible B. Confirming that near-optimal bounds are valid calls for rigorous and/or multiple-precision computations [12, 37], but for simplicity we have used only double precision arithmetic in our examples. We call a bound 'feasible' if Mosek reports a dual objective value of less than 10^{−6}, 'infeasible' if the final dual objective value is greater than 1, and 'indeterminate' otherwise, but we stress that these computations are not rigorously validated.

To confirm the accuracy of our computed bounds, we computed various periodic orbits and the LE sums along each of these orbits. We used the Tsit5 solver from the OrdinaryDiffEq.jl package [42] to integrate the ODE systems, find periodic orbits and compute LEs. For a given periodic orbit of period T, the LEs μ_k were found using the eigenvalues of the Jacobian Dφ^T. To compute the sums M_k, we simply sum the μ_k in descending order.

3.1 The Duffing Oscillator

The Duffing oscillator is a second-order nonautonomous ODE governing the amplitude of oscillations that are damped and sinusoidally forced. Different versions exist, but we choose

    \ddot{A} + δ \dot{A} + β A + α A^3 = γ \cos ωt    (107)

with parameters (α, β, γ, δ, ω) = (1, −1, 0.3, 0.2, 1). We numerically estimate the Lyapunov exponents on the chaotic attractor to be approximately 0.158 and −0.358, which gives a local Lyapunov dimension of d_L ≈ 1.441. Kuznetsov and Reitmann [18] derive a rigorous bound on the Lyapunov dimension of the global attractor in the Duffing oscillator which, for these parameter values, gives a bound d_L ≤ 1.9910. Note that we have not defined LEs or Lyapunov dimension for nonautonomous systems. This is straightforward to do, but instead we convert (107) to an autonomous system.

The ODE (107) is not a polynomial system, as required for the formulations presented in section 2, but it can be transformed into a polynomial system by a standard trick (cf. Parker et al. [38]). We let x_1 = A and x_2 = \dot{A}, and we introduce x_3 = \sin ωt and x_4 = \cos ωt. Then the x(t) vector evolves according to

    \frac{dx}{dt} = \begin{pmatrix} x_2 \\ −δ x_2 − β x_1 − α x_1^3 + γ x_4 \\ ω x_4 \\ −ω x_3 \end{pmatrix}.    (108)

Recalling that x_3 and x_4 are sine and cosine of the same argument, we are interested only in dynamics on the manifold

    B = {x ∈ R^4 : x_3^2 + x_4^2 = 1}.    (109)

Since only odd powers of x appear in (108), the right-hand side is equivariant under the simple sign symmetry Λ = −I, and B is invariant, so we can also impose that our polynomials V are invariant under the same symmetry. This is exploited in our code using the methods described in section 2.5.
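The autonomous Duffing system and its Jacobian are easy to write down explicitly; the following sketch (our illustration, not the repository code) can be combined with the leading_le routine sketched in section 2.1 to reproduce the LE estimate quoted above.

```julia
# Autonomous Duffing system (108) with (α, β, γ, δ, ω) = (1, -1, 0.3, 0.2, 1).
const α, β, γ, δ, ω = 1.0, -1.0, 0.3, 0.2, 1.0

f_duff(x) = [x[2],
             -δ*x[2] - β*x[1] - α*x[1]^3 + γ*x[4],
             ω*x[4],
             -ω*x[3]]

Df_duff(x) = [0.0 1.0 0.0 0.0;
              (-β - 3α*x[1]^2) -δ 0.0 γ;
              0.0 0.0 0.0 ω;
              0.0 0.0 -ω 0.0]

# Start on the invariant manifold (109); on the chaotic attractor the
# leading LE should come out near the value 0.158 quoted above.
println(leading_le(f_duff, Df_duff, [0.1, 0.0, 0.0, 1.0]; T=2.0, N=2000))
```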
For the system (108), the Jacobian is

    Df(x) = \begin{pmatrix} 0 & 1 & 0 & 0 \\ −β − 3α x_1^2 & −δ & 0 & γ \\ 0 & 0 & 0 & ω \\ 0 & 0 & −ω & 0 \end{pmatrix}.    (110)

From this, we compute the additive compound matrices

    Df^{[2]}(x) = \begin{pmatrix} −δ & 0 & γ & 0 & 0 & 0 \\ 0 & 0 & ω & 1 & 0 & 0 \\ 0 & −ω & 0 & 0 & 1 & 0 \\ 0 & −β − 3α x_1^2 & 0 & −δ & ω & −γ \\ 0 & 0 & −β − 3α x_1^2 & −ω & −δ & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix},    (111)

    Df^{[3]}(x) = \begin{pmatrix} −δ & ω & −γ & 0 \\ −ω & −δ & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & −β − 3α x_1^2 & −δ \end{pmatrix},    (112)

and

    Df^{[4]}(x) = −δ.    (113)

In this case, the fourth additive compound matrix, which is simply the trace of the Jacobian, is independent of x, and therefore either of our methods gives the sharp result that M_4 = −δ on any orbit. This implies that the dimension d_L is optimized on the same orbit as the optimizer for M_3 and, given a sharp bound for M_3, proposition 2.7 will give a sharp bound for d_L. Furthermore, since there are no equilibria, every trajectory must have a zero LE associated with the flow, and an additional vanishing LE corresponding to amplitude change in the x_3 and x_4 variables. Therefore M_1 = M_2 = M_3 for all orbits of interest, and it suffices to compute bounds on M_1 only in order to deduce bounds for M_2, M_3 and d_L. Nevertheless, to test our methods we computed bounds using our formulations for M_1, M_2, M_3 and d_L.

To formulate SOS optimization problems, we must choose finite-dimensional spaces for the polynomials to be optimized over. In all cases we choose a maximum degree of V and then construct a basis from symmetry-invariant monomials. For each SOS constraint, the restriction that h_j(x) = 0, where h_j = x_3^2 + x_4^2 − 1, is enforced via an additional unknown polynomial ρ_j, whose maximum degree is chosen so that h_j ρ_j matches the maximum degree of other terms in the expression. See the accompanying code for full details.

For the sphere projection methods of propositions 2.15 and 2.16, computational results are reported in tables 1 and 2 respectively. In both cases, clearly converged bounds are achieved with V of degree 4 in x and 2 in z^k.

    dx  dz    k = 1    k = 2    k = 3    k = 4
     2   0  27.3625  31.8231  29.7823  −0.2000
     2   2  14.8295  15.5894  13.9366  −0.2000
     4   0   1.3666   1.3668   1.3666  −0.2000
     4   2   0.8879   0.8880   0.8879  −0.2000
     4   4   0.8879   0.8879   0.8879  −0.2000

Table 1: Upper bounds on the sum M_k of the k leading LEs, computed by solving the SOS optimization problem in proposition 2.15 for the Duffing oscillator in the variables of (108). Polynomial degrees are at most dx in x and dz in z^k. Tabulated values are rounded up to the precision shown, and the underlines indicate the digits that agree (after rounding) with the values computed by numerical integration for the periodic orbit of figure 1. On this orbit M_4 = −0.2 and M_1 = M_2 = M_3 ≈ 0.88785.

For the shifted spectrum methods of propositions 2.17 and 2.18, numerical errors are large when the degree of V in x is 4 or smaller, so we let V have degree 6 in x. In this case, we find feasible bounds M_1 ≤ 0.887898 and d_L ≤ 3.8162, and we find infeasible bounds when each final digit is decreased by 1. These bounds perfectly match those found via the sphere projection methods.

To find an orbit that maximises the leading LE M_1, and consequently d_L also, we follow a strategy similar to that of Tobasco et al. [46]. We restrict to a Poincaré section of B with x_3 = 0 and x_4 = 1, then we attempt to find the global minimum of the expression

    B − Φ_k − f · D_xV − ℓ_k · D_{z^k}V    (114)
in the constraint of (83), where B and V are the approximately optimal values returned by the SDP solver for the sphere projection method. This expression must vanish on average over the optimal orbit, and it is everywhere non-negative, so close to the orbit it must be very small in value. The minimization strategy is to plot, at each x_1 and x_2 over the approximate range of the chaotic attractor, the minimum of this expression over all z^k ∈ S^{n_k−1}, which is approximated by random sampling on the unit sphere. The approximate minimizer is then passed to a Newton shooting algorithm to converge to the true periodic orbit, which approximately intersects the point

    (x_1, x_2, x_3, x_4) = (−0.14984, 0.01520, 0, 1)    (115)

and has a period of 2π. This is the small, approximately circular periodic orbit depicted in figure 1.

    dx  dz  Bound
     0   0  4.0000
     0   2  4.0000
     0   4  4.0000
     2   0  3.9912
     2   2  3.9774
     2   4  3.9614
     4   0  3.8724
     4   2  3.8162
     4   4  3.8162
     6   0  3.8555
     6   2  3.8162

Table 2: Upper bounds on the Lyapunov dimension d_L, computed by solving the SOS optimization problem in proposition 2.16 with k = 3 for the Duffing oscillator in the variables of (108). Polynomial degrees are at most dx in x and dz in z^k and z^{k+1}. Tabulated values are rounded up to the precision shown, and underlines indicate digits that agree (after rounding) with the value d_L ≈ 3.81615 computed by numerical integration for the periodic orbit of figure 1.

Figure 1: A chaotic trajectory (grey) in the system (108), and the unstable periodic orbit of period 2π discussed in the text (blue). The plot shows the (x_1, x_2) plane.

We computed LEs on this orbit to be approximately

    μ_1 = 0.88785,   μ_2 = 0,   μ_3 = 0,   μ_4 = −1.08785,    (116)

giving

    M_1 = M_2 = M_3 = 0.88785,   M_4 = −0.2.    (117)

The Kaplan-Yorke formula (18) then gives d_L = 3.81615, given that the supremum must be attained on this orbit. These values perfectly match the bounds given by C_1, C_2, C_3 and C_4. In the limit γ → 0 of (107), the orbit becomes an equilibrium point at the origin. This matches previous observations that global Lyapunov dimension is attained on equilibria in systems that have such points. Finally, we note that our bounds on d_L given by C_2 and C_4 for (108) are equivalent to d_L ≤ 1.8162 for (107), a significant improvement over the bound of Kuznetsov and Reitmann [18], and that this result must be sharp as an equivalent periodic orbit exists within the nonautonomous system (107).

3.2 A three-degree-of-freedom Hamiltonian system

For an example where LE sums for different k are maximized on different orbits, we consider a higher-dimensional system whose large number of symmetries keeps the computational cost tractable. In particular, we consider a Hamiltonian system with Hamiltonian function

    H(x) = \frac{1}{2} |x|^2 + x_1^2 x_2^2 + x_2^2 x_3^2 + x_3^2 x_1^2,    (118)

the first three components of x being the position variables, and the final three the corresponding momenta. This is a specific form of a triaxially symmetric potential studied by Palacián et al. [35], which has applications in galactic dynamics [4, 6]. The Hamiltonian (118) gives rise to a system of ODEs in 6 variables,

    \frac{dx}{dt} = \begin{pmatrix} x_4 \\ x_5 \\ x_6 \\ −x_1 − 2x_1 x_2^2 − 2x_1 x_3^2 \\ −x_2 − 2x_2 x_1^2 − 2x_2 x_3^2 \\ −x_3 − 2x_3 x_1^2 − 2x_3 x_2^2 \end{pmatrix}.    (119)
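The vector field (119) follows from the Hamiltonian (118) via dq/dt = ∂H/∂p and dp/dt = −∂H/∂q with q = (x_1, x_2, x_3) and p = (x_4, x_5, x_6). A brief sketch (ours; names hypothetical), with a check that H is conserved along a short integration:

```julia
using OrdinaryDiffEq, LinearAlgebra

H(x) = 0.5 * sum(abs2, x) + x[1]^2 * x[2]^2 + x[2]^2 * x[3]^2 + x[3]^2 * x[1]^2

# Hamiltonian vector field (119): positions q = x[1:3], momenta p = x[4:6].
function f_ham(x)
    q, p = x[1:3], x[4:6]
    dp = [-q[1] - 2q[1] * (q[2]^2 + q[3]^2),
          -q[2] - 2q[2] * (q[1]^2 + q[3]^2),
          -q[3] - 2q[3] * (q[1]^2 + q[2]^2)]
    return vcat(p, dp)
end

# A point on the level set H = 1, near the orbit family S1 described below.
x0 = [0.83871, 0.0, 0.0, 0.0, 0.0, 1.13867]
sol = solve(ODEProblem((u, p, t) -> f_ham(u), x0, (0.0, 10.0)), Tsit5())
println(H(sol.u[end]))   # ≈ 1, since the flow conserves H
```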
We computed LEs on this orbit to be approximately

μ1 = 0.88785,  μ2 = 0,  μ3 = 0,  μ4 = −1.08785,   (116)

giving

M1 = M2 = M3 = 0.88785,  M4 = −0.2.   (117)

The Kaplan–Yorke formula (18) then gives dL = 3.81615, given that the supremum in (18) must be attained on this orbit. These values perfectly match the bounds given by C1, C2, C3 and C4.

In the limit γ → 0 of (107), the orbit becomes an equilibrium point at the origin. This matches previous observations that the global Lyapunov dimension is attained on equilibria in systems that have such points. Finally, we note that our bounds on dL given by C2 and C4 for (108) are equivalent to dL ≤ 1.8162 for (107), a significant improvement over the bound of Kuznetsov and Reitmann [18], and this result must be sharp, as an equivalent periodic orbit exists within the nonautonomous system (107).
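For reference, the quoted dimension follows from (116) by simple arithmetic: dL = 3 + M3/|μ4| = 3 + 0.88785/1.08785 ≈ 3.81615. A minimal Julia sketch of this computation (our own; it assumes the index j in the Kaplan–Yorke formula satisfies j < n, as it does here):

```julia
# Kaplan-Yorke dimension from an ordered LE spectrum, following (18):
# j is the largest index with M_j = μ_1 + ... + μ_j >= 0 (assumed j < n)
function kaplan_yorke(μ::AbstractVector)
    M = cumsum(μ)                          # partial sums M_1, ..., M_n
    j = findlast(k -> M[k] >= 0, eachindex(μ))
    return j + M[j] / abs(μ[j + 1])
end

μ = [0.88785, 0.0, 0.0, -1.08785]   # the LEs (116) on the periodic orbit
kaplan_yorke(μ)                      # ≈ 3.81615, matching the text
```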
3.2 A three-degree-of-freedom Hamiltonian system

For an example where LE sums for different k are maximized on different orbits, we consider a higher-dimensional system whose large number of symmetries keeps the computational cost tractable. In particular, we consider a Hamiltonian system with Hamiltonian function

\[ H(x) = \frac{1}{2}|x|^2 + x_1^2 x_2^2 + x_2^2 x_3^2 + x_3^2 x_1^2, \tag{118} \]

the first three components of x being the position variables, and the final three the corresponding momenta. This is a specific form of a triaxially symmetric potential studied by Palacián et al. [35], which has applications in galactic dynamics [4, 6]. The Hamiltonian (118) gives rise to a system of ODEs in 6 variables,

\[ \frac{dx}{dt} = \begin{pmatrix}
x_4 \\ x_5 \\ x_6 \\
-x_1 - 2x_1 x_2^2 - 2x_1 x_3^2 \\
-x_2 - 2x_2 x_1^2 - 2x_2 x_3^2 \\
-x_3 - 2x_3 x_1^2 - 2x_3 x_2^2
\end{pmatrix}. \tag{119} \]

The right-hand side of (119) is equivariant under x ↦ Λx, where Λ is any matrix in the subgroup of O(6) generated by

\[
\Lambda_1 = \begin{pmatrix}
-1 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & -1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}, \quad
\Lambda_2 = \begin{pmatrix}
0 & 1 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}, \quad
\Lambda_3 = \begin{pmatrix}
0 & 0 & 1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 & 0 & 0
\end{pmatrix}.
\]
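To make these structural properties concrete, here is a short Julia check (our own illustration; the helper names are not from the accompanying code) that the right-hand side of (119) conserves H, i.e. ∇H · f = 0, and is equivariant under Λ2:

```julia
using LinearAlgebra

H(x) = 0.5 * sum(abs2, x) + x[1]^2*x[2]^2 + x[2]^2*x[3]^2 + x[3]^2*x[1]^2

# Right-hand side of (119): Hamilton's equations for (118)
f(x) = [x[4], x[5], x[6],
        -x[1] - 2x[1]*x[2]^2 - 2x[1]*x[3]^2,
        -x[2] - 2x[2]*x[1]^2 - 2x[2]*x[3]^2,
        -x[3] - 2x[3]*x[1]^2 - 2x[3]*x[2]^2]

# Gradient of H, written out by hand
gradH(x) = [x[1] + 2x[1]*x[2]^2 + 2x[1]*x[3]^2,
            x[2] + 2x[2]*x[1]^2 + 2x[2]*x[3]^2,
            x[3] + 2x[3]*x[1]^2 + 2x[3]*x[2]^2,
            x[4], x[5], x[6]]

Λ2 = [0 1 0 0 0 0; 1 0 0 0 0 0; 0 0 1 0 0 0;
      0 0 0 0 1 0; 0 0 0 1 0 0; 0 0 0 0 0 1]

x = randn(6)
@assert abs(dot(gradH(x), f(x))) < 1e-10      # energy is conserved on orbits
@assert norm(f(Λ2 * x) - Λ2 * f(x)) < 1e-10   # equivariance under Λ2
```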
We consider the fixed energy level H = 1. At this value of H, there are no equilibria, so μ3 = μ4 = 0 on all orbits. We consider only orbits whose position variables remain in the region where

\[ \tfrac{1}{2}\left(x_1^2 + x_2^2 + x_3^2\right) + x_1^2 x_2^2 + x_2^2 x_3^2 + x_3^2 x_1^2 \le 1, \]

depicted in figure 2, and whose full state vector x remains in the compact ball ½|x|² ≤ 1. Two families of particularly short periodic orbits, which are also depicted in figure 2, were found by a brute-force search over random initial conditions. The first family, which we call S1, has a period of approximately 5.278 and passes near x = (0.83871, 0, 0, 0, 0, 1.13867), or near points that map to this x under a symmetry. The S1 orbits have LEs of approximately

(μ1, μ2, μ3, μ4, μ5, μ6) = (0.42913, 0, 0, 0, 0, −0.42913),

corresponding to LE sums of

(M1, M2, M3, M4, M5, M6) = (0.42913, 0.42913, 0.42913, 0.42913, 0.42913, 0).

The second family of orbits, which we call S2, has a period of approximately 4.857 and passes near x = (−0.57566, 0.57566, 0, 0.39004, 0.39004, 0.90186), or near a symmetry-related point. These S2 orbits have LEs of approximately

(μ1, μ2, μ3, μ4, μ5, μ6) = (0.25919, 0.25919, 0, 0, −0.25919, −0.25919),

corresponding to sums of

(M1, M2, M3, M4, M5, M6) = (0.25919, 0.51837, 0.51837, 0.51837, 0.25919, 0).

Figure 2: Plots of the position variables (x1, x2, x3) for the Hamiltonian system (118) at the energy level H = 1. Left: the family of periodic orbits S1. Right: the family of periodic orbits S2. Trajectories are confined to stay within the region ½(x1² + x2² + x3²) + x1²x2² + x2²x3² + x3²x1² ≤ 1, which is bounded by the shaded surface. On S1, the LE sums are M1 = M2 = M3 = M4 = M5 = 0.42913, and on S2, M1 = M5 = 0.25919 and M2 = M3 = M4 = 0.51837.

Because this system is Hamiltonian, volumes in state space are conserved, and our methods for bounding Lyapunov dimension are not relevant. We carried out SOS computations to bound LE sums using the sphere projection and shifted spectrum approaches, and tables 3 and 4 give the results. In both cases we add the explicit constraint |x|² ≤ 2 along with H(x) = 1 in the definition of the set B because this improves the numerical results.

dx  dz   k = 1    k = 2    k = 3    k = 4    k = 5    k = 6
2   0    1.2361   2.0232   2.1082   2.0232   1.2361   0.0000
2   2    0.4843   0.9216   –        0.9216   0.4843   0.0000
4   0    1.1855   2.0046   2.0611   2.0046   1.1855   0.0000
4   2    0.4292   –        –        –        0.4292   0.0000

Table 3: Upper bounds on the sum Mk of the k leading LEs, computed by solving the SOS optimization problem in proposition 2.15 for the Hamiltonian system (119). Polynomial degrees are at most dx in x and dz in zk. Tabulated values are rounded up to the precision shown, and the underlines indicate the digits that agree (after rounding) with the values computed by numerical integration for the periodic orbits S1 (k = 1, 5, 6) or S2 (k = 2, 3, 4) of figure 2. Entries marked "–" are missing in cases where computations were prohibitively memory-intensive.

dx   k = 1    k = 2    k = 3    k = 4    k = 5    k = 6
2    0.4311   0.6141   0.7579   0.6141   0.4311   0.0000
4    0.4292   0.5184   0.5711   0.5184   0.4292   0.0000
6    0.4292   0.5184   0.5300   0.5184   0.4292   0.0000

Table 4: Upper bounds on the sum Mk of the k leading LEs, computed by solving the SOS optimization problem in proposition 2.17 for the Hamiltonian system (119). Polynomial degrees are at most dx in x and 2 in wk. The values in this table are not rounded. The underlines indicate the digits that agree (after rounding) with the values computed by numerical integration for the periodic orbits S1 (k = 1, 5, 6) or S2 (k = 2, 3, 4) of figure 2.

Using the shifted spectrum approach, we find sharp bounds for M1, M2, M4, M5 and M6. Using the sphere projection approach, we found sharp bounds on M1, M5 and M6; sharp bounds for the other values of k seemed to require larger polynomial degrees that were beyond our computational resources. Given the Hamiltonian structure of the system, bounding M5 is directly equivalent to bounding M1, bounding M4 is equivalent to bounding M2, and bounding M6 is trivial, since this sum is always zero. These facts are borne out by the results. Bounding the sum M3 is not a priori identical to any other case, but the lack of equilibria means that we expect its true value to equal that of M2 and M4. Furthermore, bounds for M3 require a particularly high-dimensional augmented system of dimension n + nk = 26, which is computationally challenging at even modest polynomial degree despite exploiting all symmetries of the system. This is especially true for the sphere projection method, which has higher polynomial degree in the tangent space dynamics. In table 3, the missing values could not be computed when solving the SDPs with Mosek, due to insufficient memory despite 3 TB being available. This could be surmounted by using a first-order SDP solver at the expense of very long runtimes and/or lower precision. In table 4, the M3 result is not sharp at the maximum degree dx = 6 that was computationally feasible.

4 Conclusion

We have described two distinct approaches to bounding sums of Lyapunov exponents, as well as Lyapunov dimension, in ODE systems. The general versions of our formulations require optimizing over auxiliary functions in the class C1. We prove that most of these formulations give sharp bounds under weak assumptions, but the optimization problems are not tractable in general. In the case of ODEs with polynomial right-hand sides, our bounding formulations can be relaxed using SOS constraints, which we again prove are convergent, and which are often computationally tractable using SOS optimization. Carrying out such SOS computations for particular ODEs, we obtained nearly perfect bounds in challenging examples. For one of the examples, the sphere projection method gave much better conditioned SDPs in practice. For the other example, the shifted spectrum method was more appropriate due to the onerous memory requirements of the sphere projection method. In section 3.1, we demonstrated how the results of our bounding methods could be used to find the optimal, potentially previously undetected, periodic orbits which maximize the bounds.

All of our methods could be used to construct fully rigorous bounds via a computer-assisted proof, making use of interval arithmetic or rational projection [12, 37]. In both of our examples, the results of the SOS programs allowed us to find periodic orbits that attained our bounds to within several digits, thus confirming their sharpness. In principle, computer-assisted proofs based on our framework could also give rigorous upper bounds on LEs in partial differential equations, by combining our methods with some in Goluskin and Fantuzzi [13], but such computations would be challenging.

In the thesis of Oeri [33], an analogous approach to the sphere projection method was formulated for bounding the leading LE of discrete-time dynamical systems, with an SOS implementation for polynomial maps. This can be directly adapted to sums of LEs by using multiplicative (rather than additive) compound matrices, since for maps it is products of Jacobians that propagate tangent volumes. An approach analogous to our shifted spectrum method is also possible for bounding sums of LEs for discrete-time systems, and methods can additionally be derived for the Lyapunov dimension. In general, these are computationally difficult because the equivalent of the Lie derivative is a composition of the dynamics with an auxiliary function, leading to very high polynomial degrees. This may be the focus of a future publication.

Data availability statement

Code to reproduce all the examples in this paper is available at https://github.com/jeremypparker/Parker_

Acknowledgements

The authors thank Samuel Punshon-Smith and Anthony Quas for useful discussions. During this work DG was supported by the NSERC Discovery Grants Program (awards RGPIN-2018-04263 and RGPIN-2025-06823). Computational resources were provided by the Digital Research Alliance of Canada.

A Exterior products and compound matrices

The exterior (or wedge or Grassmann) product is a multilinear map of vectors, defined by the property that x2 ∧ x1 = −x1 ∧ x2, and x1 ∧ x2 = 0 only if x1 = αx2 for some α, or x2 = 0. An exterior product of k vectors in a vector space of dimension n ≥ k is called a k-vector. The properties of the exterior product imply that permuting the xi in x1 ∧ ··· ∧ xk either leaves the k-vector unchanged or negates it, and that the k-vector is nonzero if and only if the xi are linearly independent. The set of k-vectors in Rn is therefore a linear space of dimension

\[ n_k = \binom{n}{k} = \frac{n!}{k!\,(n-k)!}. \]

We identify this space with R^{nk} and choose as a basis the set of k-vectors of the form

\[ e_{i_1} \wedge \cdots \wedge e_{i_k}, \tag{120} \]

where i1 < ··· < ik and ei is the ith standard basis vector in Rn. The standard Euclidean norm on this space satisfies the property that

\[ |x_1 \wedge \cdots \wedge x_k| \tag{121} \]

is the volume of the parallelepiped spanned by x1, . . . , xk ∈ Rn [47].
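Both facts are easy to verify numerically. The following small Julia sketch (our own) checks that the lexicographic minor coordinates of x1 ∧ x2, in the basis (120), have Euclidean norm equal to the Gram-determinant area of the parallelogram:

```julia
using LinearAlgebra, Combinatorics

x1, x2 = randn(4), randn(4)
X = hcat(x1, x2)

# Coordinates of x1 ∧ x2 in the lexicographic basis (120) are the
# 2-by-2 minors of the 4-by-2 matrix X = [x1 x2]
w = [det(X[[i, j], :]) for (i, j) in combinations(1:4, 2)]

# Their Euclidean norm equals the parallelogram area sqrt(det(X'X))
@assert isapprox(norm(w), sqrt(det(X' * X)); rtol = 1e-10)
```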
Linear maps on vectors in Rn induce linear maps on corresponding k-vectors in R^{nk}. Likewise, linear ODEs on Rn induce linear ODEs on k-vectors. As defined below, the map on k-vectors induced by a map A is captured by the multiplicative compound A^{(k)}, and the ODE for k-vectors induced by an ODE with right-hand side A is captured by the additive compound A^{[k]}. The multiplicative compound is common in many different areas of mathematics, and is known variously as the compound representation, exterior power representation, wedge power representation or simply the induced action of a linear map. Compound matrices have been applied previously in the calculation and bounding of LEs [18, 26, 29, 30] and also in the stability analysis of dynamical systems [31]. Here we state key definitions and theorems; a more detailed introduction can be found in Marshall et al. [28, Chapter 19F], in more generality for complex non-square matrices.

Definition A.1. For any real matrix A ∈ R^{n×n} and integer k ∈ {1, . . . , n}:

1. The kth multiplicative compound of A is the real matrix A^{(k)} ∈ R^{nk×nk} whose entries are the k × k subdeterminants of A, taken in lexicographic order.

2. The kth additive compound of A is the real matrix A^{[k]} ∈ R^{nk×nk} such that

\[ A^{[k]} = \frac{d}{d\delta}\,(I + \delta A)^{(k)}\Big|_{\delta = 0}. \tag{122} \]

The lexicographic order in definition A.1 refers to the ordering of all nk increasing k-tuples of integers chosen from {1, . . . , n}. As an example, the lexicographic order for k = 3 in the case n = 4 is

(1, 2, 3), (1, 2, 4), (1, 3, 4), (2, 3, 4).   (123)

Therefore, the (2, 3) entry of A^{(3)} is

\[ A^{(3)}_{2,3} = \det \begin{pmatrix} a_{1,1} & a_{1,3} & a_{1,4} \\ a_{2,1} & a_{2,3} & a_{2,4} \\ a_{4,1} & a_{4,3} & a_{4,4} \end{pmatrix}, \tag{124} \]

where a_{p,q} denotes the (p, q) element of A. Note that the row and column indices come from the second and third tuples, respectively.

It follows from the definitions that A^{(1)} = A^{[1]} = A, and that A^{(n)} = det A whereas A^{[n]} = tr A. In fact, the eigenvalues of the multiplicative and additive compounds are respectively the k-fold products and sums of the eigenvalues of the original matrix, which gives rise to their names. An explicit example of additive compound matrices is given in (111).
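As a quick numerical illustration of this eigenvalue property (our own sketch, repeating the illustrative helpers from section 3.1 so that the block is self-contained): the spectrum of A^{[2]} for a random 4 × 4 matrix should consist of the pairwise sums of eigenvalues of A.

```julia
using LinearAlgebra, Combinatorics

# mult_compound / add_compound as in the sketch of section 3.1
mult_compound(A, k) = (idx = collect(combinations(1:size(A, 1), k));
                       [det(A[R, C]) for R in idx, C in idx])
add_compound(A, k; d = 1e-6) =
    (mult_compound(I + d * A, k) - mult_compound(I - d * A, k)) / (2d)

A = randn(4, 4)
λ = eigvals(A)

# pairwise sums λ_i + λ_j (i < j) should match the spectrum of A^[2]
pair_sums = [λ[i] + λ[j] for (i, j) in combinations(1:4, 2)]
bykey = x -> (real(x), imag(x))   # deterministic ordering of complex values
@assert isapprox(sort(eigvals(add_compound(A, 2)), by = bykey),
                 sort(pair_sums, by = bykey); atol = 1e-6)
```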
The next two propositions list additional properties that will be useful to us for multiplicative and additive compounds, respectively. In proposition A.2, part 1 is a special case of the Cauchy–Binet theorem. Parts 2–4 follow quickly from the definition, and the last part is theorem 6.2.10 of Fiedler [10].

Proposition A.2 (Properties of multiplicative compound matrices). For any real square matrices A and B:

1. (AB)^{(k)} = A^{(k)} B^{(k)}.

2. (A^{(k)})^T = (A^T)^{(k)}.

3. If A is invertible, then (A^{-1})^{(k)} = (A^{(k)})^{-1}.

4. A is orthogonal if and only if A^{(k)} is orthogonal.

5. For k-vectors identified with R^{nk} and for A ∈ R^{n×n},

\[ A^{(k)}\,(x_1 \wedge \cdots \wedge x_k) = (Ax_1) \wedge \cdots \wedge (Ax_k). \tag{125} \]

Proposition A.3 (Properties of additive compound matrices). For any real square matrices:

1. (A + B)^{[k]} = A^{[k]} + B^{[k]}.

2. A^{[k]}(x_1 ∧ ··· ∧ x_k) = (Ax_1 ∧ x_2 ∧ ··· ∧ x_k) + (x_1 ∧ Ax_2 ∧ ··· ∧ x_k) + ··· + (x_1 ∧ x_2 ∧ ··· ∧ Ax_k).

3. If Q is invertible, (QAQ^{-1})^{[k]} = Q^{(k)} A^{[k]} (Q^{(k)})^{-1}.

Proof. Part 1 is immediate from the definition. For part 2, see theorem 2f of [24]. Part 3 follows by calculation,

\[
(QAQ^{-1})^{[k]}
= \frac{d}{d\delta}\,(I + \delta QAQ^{-1})^{(k)}\Big|_{\delta=0}
= \frac{d}{d\delta}\,\big(Q (I + \delta A) Q^{-1}\big)^{(k)}\Big|_{\delta=0}
= \frac{d}{d\delta}\,\Big(Q^{(k)} (I + \delta A)^{(k)} \big(Q^{(k)}\big)^{-1}\Big)\Big|_{\delta=0},
\]

where the last equality uses properties 1 and 3 from proposition A.2.

Multiplicative compound matrices are straightforward to compute directly, at least when n is small. Additive compound matrices can be computed with symbolic manipulation. In our computational examples we compute additive compounds using Julia with the DynamicPolynomials.jl package [21]. Explicit formulas for the elements of additive compounds also exist [27].

We now have the ingredients to prove (15):

Lemma A.4. If

\[ \frac{d}{dt} y_i(t) = Df(x(t))\, y_i(t) \tag{126} \]

for i = 1, . . . , k, and we define

\[ y^k(t) = y_1(t) \wedge \cdots \wedge y_k(t), \tag{127} \]

then

\[ \frac{d}{dt} y^k(t) = Df^{[k]}(x(t))\, y^k(t). \tag{128} \]

Proof. We have

\[ y_i(t) = D\varphi^t(x_0)\, y_i(0), \tag{129} \]

so, using proposition A.2,

\[ y^k(t) = \big(D\varphi^t(x_0)\, y_1(0)\big) \wedge \cdots \wedge \big(D\varphi^t(x_0)\, y_k(0)\big) = \big(D\varphi^t(x_0)\big)^{(k)}\, y^k(0). \tag{130} \]

Observe that φ^0(x_0) = x_0, and φ^{t+δ}(x_0) = φ^δ(φ^t(x_0)), so that

\[ D\varphi^{t+\delta}(x_0) = D\varphi^{\delta}(x(t))\, D\varphi^{t}(x_0). \tag{131} \]

Then by proposition A.2,

\[ \big(D\varphi^{t+\delta}(x_0)\big)^{(k)} = \big(D\varphi^{\delta}(x(t))\big)^{(k)}\, \big(D\varphi^{t}(x_0)\big)^{(k)}, \tag{132} \]

so by the usual definition of the derivative,

\[
\frac{d}{dt} y^k
= \lim_{\delta \to 0} \frac{\big(D\varphi^{t+\delta}(x_0)\big)^{(k)} y^k(0) - \big(D\varphi^{t}(x_0)\big)^{(k)} y^k(0)}{\delta}
= \lim_{\delta \to 0} \Bigg( \frac{\big(D\varphi^{\delta}(x(t))\big)^{(k)} - I}{\delta} \Bigg)\, y^k(t).
\]

Furthermore, we have

\[ \big(D\varphi^{\delta}(x(t))\big)^{(k)} = \big[I + \delta\, Df(x(t)) + o(\delta)\big]^{(k)}, \]

and so

\[
\frac{d}{dt} y^k
= \lim_{\delta \to 0} \Bigg( \frac{\big[I + \delta\, Df(x(t)) + o(\delta)\big]^{(k)} - I}{\delta} \Bigg)\, y^k(t)
= \frac{d}{d\delta}\,\big[I + \delta\, Df(x(t))\big]^{(k)}\Big|_{\delta=0}\, y^k(t)
= \big(Df(x(t))\big)^{[k]}\, y^k(t),
\]

by definition A.1.

B A specialised Lyapunov theorem

Here we present a Lyapunov theorem for the stability of one variable w ∈ Rm whose linear dynamics is controlled by a second variable x. This is applicable to both proposition 2.11 and proposition 2.12. The second part, whose proof follows that of similar results in Khalil [17], is the so-called converse Lyapunov theorem, which states that under stronger conditions it is always possible to find a Lyapunov function, and that this V is quadratic in w.

Lemma B.1. For a domain B ⊂ Rn, let x(t) and w(t) solve the ODE system

\[ \frac{d}{dt} \begin{pmatrix} x \\ w \end{pmatrix} = \begin{pmatrix} f(x) \\ A(x)\, w \end{pmatrix} \tag{133} \]

on B × Rm, with f ∈ C0(B, Rn) and A ∈ C0(B, R^{m×m}).

1. If there exists V ∈ C1(B × Rm) such that

\[ V(x, w) \ge |w|^2 \quad \text{and} \quad f(x) \cdot D_x V(x, w) + [A(x)w] \cdot D_w V(x, w) \le 0 \tag{134} \]

for all (x, w) ∈ B × Rm, then w(t) is bounded forward in time for each w(0) ∈ Rm.

2. If A ∈ C1(B, R^{m×m}) is bounded and f ∈ C1(B, Rn), and if there exist K > 0 and λ > 0 such that

\[ |w(t)| \le K e^{-\lambda t} |w(0)| \tag{135} \]

for all solutions to (133), then there exists U ∈ C1(B, R^{m×m}) such that (134) is satisfied with V(x, w) = w^T U(x) w.

Proof. For the first part, let V ∈ C1 satisfy the two inequality conditions. For contradiction, suppose that there is a solution with |w(t)| unbounded. The first V condition then implies that V(x(t), w(t)) is unbounded forward in time. The second V condition implies

\[ \frac{d}{dt} V(x(t), w(t)) = f(x(t)) \cdot D_x V(x(t), w(t)) + [A(x(t))w(t)] \cdot D_w V(x(t), w(t)) \le 0. \tag{136} \]

This expression is continuous in t, so we can integrate to show that V remains bounded for all time, which is a contradiction.

For the second part, fix a time T > 0, to be chosen later. Define

\[ V(x_0, w_0) = \frac{2L}{1 - e^{-2LT}} \int_0^T |w(t)|^2 \, dt, \tag{137} \]

where (x(t), w(t)) evolves under (133) from initial condition (x(0), w(0)) = (x_0, w_0), and L is an upper bound for the operator norm ∥A(x)∥ over x ∈ B. Observe that, by linearity of w(t) with respect to w_0, this is quadratic in w_0 and thus can be written

\[ V(x_0, w_0) = w_0^T U(x_0)\, w_0 \tag{138} \]

for some symmetric matrix U(x_0). Since A ∈ C1(B, R^{m×m}), it follows that U ∈ C1(B, R^{m×m}). Note that

\[ \frac{d}{dt} |w|^2 \le 2\, |w|\, |A(x)w| \le 2\, \|A(x)\|\, |w|^2 \le 2L\, |w|^2, \tag{139} \]

and likewise d|w|²/dt ≥ −2L|w|². Thus Grönwall's inequality gives us

\[ |w(t)|^2 \ge e^{-2Lt} |w_0|^2 \tag{140} \]

and so

\[ V(x_0, w_0) \ge \frac{2L}{1 - e^{-2LT}} \int_0^T e^{-2Lt} |w_0|^2 \, dt = |w_0|^2, \tag{141} \]

satisfying the first condition of (134). For the second condition of (134), we define

\[ v(t) = V(x(t), w(t)) = \frac{2L}{1 - e^{-2LT}} \int_t^{t+T} |w(s)|^2 \, ds. \tag{142} \]

Then,

\[
f(x(t)) \cdot D_x V(x(t), w(t)) + [A(x(t))w(t)] \cdot D_w V(x(t), w(t)) = \frac{dv}{dt}
= \frac{2L}{1 - e^{-2LT}} \left( |w(t+T)|^2 - |w(t)|^2 \right)
\le \frac{2L}{1 - e^{-2LT}} \left( K^2 e^{-2\lambda T} - 1 \right) |w(t)|^2, \tag{143}
\]

so if T ≥ (1/λ) log K the condition is satisfied.

References

[1] M. ApS. MOSEK Optimizer API for Julia, 2025.

[2] L. Arnold. Random dynamical systems. Springer Monographs in Mathematics. Springer-Verlag, 1998.

[3] J. Bochi. Ergodic optimization of Birkhoff averages and Lyapunov exponents. In Proc. Int. Congr. Math., pages 1825–1846, 2018.

[4] N. Caranicolas. 1:1:1 resonant periodic orbits in 3-dimensional galactic-type Hamiltonians. Astronomy and Astrophysics, 282:34–36, 1994.

[5] C. Coey, L. Kapelevich, and J. P. Vielma. Solving natural conic formulations with Hypatia.jl. INFORMS Journal on Computing, 34(5):2686–2699, 2022.

[6] T. de Zeeuw. Motion in the core of a triaxial potential. Monthly Notices of the Royal Astronomical Society, 215(4):731–760, 1985.

[7] C. R. Doering and J. Gibbon. On the shape and dimension of the Lorenz attractor. Dynamics and Stability of Systems, 10(3):255–268, 1995.
[8] A. Eden, C. Foias, and R. Temam. Local and global Lyapunov exponents. Journal of Dynamics and Differential Equations, 3:133–177, 1991.

[9] A. O. Eden. An abstract theory of L-exponents with applications to dimension analysis. Indiana University, 1989.

[10] M. Fiedler. Special matrices and their applications in numerical mathematics. Courier Corporation, 2008.

[11] K. Gatermann and P. A. Parrilo. Symmetry groups, semidefinite programs, and sums of squares. Journal of Pure and Applied Algebra, 192(1-3):95–128, 2004.

[12] D. Goluskin. Bounding averages rigorously using semidefinite programming: mean moments of the Lorenz system. Journal of Nonlinear Science, 28:621–651, 2018.

[13] D. Goluskin and G. Fantuzzi. Bounds on mean energy in the Kuramoto–Sivashinsky equation computed using semidefinite programming. Nonlinearity, 32(5):1705, 2019.

[14] P. J. Goulart and Y. Chen. Clarabel: An interior-point solver for conic programs with quadratic objectives, 2024.

[15] M. Kaluba, D. Kielak, and P. W. Nowak. On property (T) for Aut(Fn) and SLn(Z). Annals of Mathematics, 193(2):539–562, 2021.

[16] J. Kaplan and J. Yorke. Chaotic behavior of multidimensional difference equations. Functional Differential Equations and Approximation of Fixed Points, pages 204–227, 1979.

[17] H. Khalil. Nonlinear systems. Prentice Hall, 2002.

[18] N. Kuznetsov and V. Reitmann. Attractor dimension estimates for dynamical systems: theory and computation. Springer, 2020.

[19] M. Lakshmi, G. Fantuzzi, J. Fernández-Caballero, Y. Hwang, and S. Chernyshenko. Finding extremal periodic orbits with polynomial optimisation, with application to a nine-mode model of shear flow. SIAM J. Appl. Dyn. Syst., 19:763–787, 2020.

[20] B. Legat, C. Coey, R. Deits, J. Huchette, and A. Perry. Sum-of-squares optimization in Julia. In The First Annual JuMP-dev Workshop, 2017.

[21] B. Legat, S. Timme, T. Weisser, L. Kapelevich, N. Sajko, Manuel, A. Demin, C. Hansknecht, C. Rackauckas, J. TagBot, M. Forets, and T. Holy. JuliaAlgebra/DynamicPolynomials.jl: v0.5.7, May 2024.

[22] G. A. Leonov and S. Lyashko. Eden's hypothesis for a Lorenz system. Vestnik St. Petersburg University: Mathematics, 26(3):15–18, 1993.

[23] G. A. Leonov, N. V. Kuznetsov, N. Korzhemanova, and D. Kusakin. Lyapunov dimension formula for the global attractor of the Lorenz system. Communications in Nonlinear Science and Numerical Simulation, 41:84–103, 2016.

[24] D. London. On derivations arising in differential equations. Linear and Multilinear Algebra, 4(3):179–189, 1976.

[25] G. G. Lorentz. Approximation of Functions. Chelsea Publishing Company, 2nd edition, 1986.

[26] D. Manika. Application of the compound matrix theory for the computation of Lyapunov exponents of autonomous Hamiltonian systems. Master's thesis, Aristotle University of Thessaloniki, 2013.

[27] M. Margaliot and E. D. Sontag. Revisiting totally positive differential systems: A tutorial and new results. Automatica, 101:1–14, 2019.

[28] A. Marshall, I. Olkin, and B. Arnold. Inequalities: Theory of Majorization and Its Applications. Springer Series in Statistics. Springer New York, 2010.

[29] D. Martini, D. Angeli, G. Innocenti, and A. Tesi. Ruling out positive Lyapunov exponents by using the Jacobian's second additive compound matrix. IEEE Control Systems Letters, 6:2924–2928, 2022.

[30] D. Martini, D. Angeli, G. Innocenti, and A. Tesi. Bounding Lyapunov exponents through second additive compound matrices: Case studies and application to systems with first integral. International Journal of Bifurcation and Chaos, 33(10):2350114, 2023.
[31] J. S. Muldowney. Compound matrices and ordinary differential equations. The Rocky Mountain Journal of Mathematics, pages 857–872, 1990.

[32] K. G. Murty and S. N. Kabadi. Some NP-complete problems in quadratic and nonlinear programming. Mathematical Programming: Series A and B, 39(2):117–129, 1987.

[33] H. Oeri. Convex optimization methods for bounding Lyapunov exponents. PhD thesis, University of Victoria, 2023.

[34] H. Oeri and D. Goluskin. Convex computation of maximal Lyapunov exponents. Nonlinearity, 36(10):5378, 2023.

[35] J. F. Palacián, C. Vidal, J. Vidarte, and P. Yanguas. Periodic solutions and KAM tori in a triaxial potential. SIAM Journal on Applied Dynamical Systems, 16(1):159–187, 2017.

[36] A. Papachristodoulou and S. Prajna. On the construction of Lyapunov functions using the sum of squares decomposition. In Proceedings of the 41st IEEE Conference on Decision and Control, 2002, volume 3, pages 3482–3487. IEEE, 2002.

[37] J. P. Parker. The Lorenz system as a gradient-like system. Nonlinearity, 37(9):095022, 2024.

[38] J. P. Parker, D. Goluskin, and G. Vasil. A study of the double pendulum using polynomial optimization. Chaos, 31(10):103102, 2021.

[39] P. Parrilo. Structured Semidefinite Programs and Semialgebraic Geometry Methods in Robustness and Optimization. PhD thesis, California Institute of Technology, 2000.

[40] A. Pikovsky and A. Politi. Lyapunov exponents: a tool to explore complex dynamics. Cambridge University Press, 2016.

[41] M. Putinar. Positive polynomials on semi-algebraic sets. Indiana University Mathematics Journal, 42:969–984, 1993.

[42] C. Rackauckas and Q. Nie. DifferentialEquations.jl: a performant and feature-rich ecosystem for solving differential equations in Julia. Journal of Open Research Software, 5(1):15, 2017.

[43] B. Reznick. Some concrete aspects of Hilbert's 17th problem. Contemporary Mathematics, 253:251–272, 2000.

[44] C. W. Scherer and C. W. J. Hol. Matrix sum of squares relaxations for robust semidefinite programs. Mathematical Programming, 107:189–211, 2006.

[45] R. Temam. Infinite-dimensional dynamical systems in mechanics and physics, volume 68. Springer Science & Business Media, 2012.

[46] I. Tobasco, D. Goluskin, and C. R. Doering. Optimal bounds and extremal trajectories for time averages in nonlinear dynamical systems. Physics Letters A, 382(6):382–386, 2018.

[47] S. Winitzki. Linear algebra via exterior products. Sergei Winitzki, 2009.
Recalling that B is the set where all gi(x) ≥ 0 and hj(x) = 0, applying a matrix Positivstellensatz [44, theorem 2] guarantees existence of polynomial matrices Si(x) and Ti(x), with the Si(x) being squares of other polynomial matrices, such that

    Z1(x) = S0(x) + Σ_{i=1}^{I} gi(x) Si(x) + Σ_{j=1}^{J} hj(x) Tj(x),    (100)

and likewise for Z2. The existence of such representations implies that both (w^k)^T Zi w^k belong to the quadratic module Σ_Ω. Therefore V = (w^k)^T U w^k satisfies the constraints of (94) with B = B1, and the second part of the proposition is proven.

Proposition 2.18. Let d/dt x(t) = f(x(t)) with f ∈ R^n[x], for which a semialgebraic set B ⊂ R^n is forward invariant with each trajectory x(t) bounded for t ≥ 0. Let the positive integer k ≤ n be such that sup_{x0∈B} M_{k+1}(x0) < 0.

If there exist K > 0 and λ > 0 such that

    |w(t)| ≤ K e^{-λt} |w(0)|    (135)

for all solutions to (133), then there exists U ∈ C1(B, R^{m×m}) such that (134) is satisfied with V(x, w) = w^T U(x) w.

Proof. For the first part, let V ∈ C1 satisfy the two inequality conditions. For contradiction, suppose that there is a solution with |w(t)| unbounded. The first V condition then implies that V(x(t), w(t)) is unbounded forward in time. The second V condition implies

    d/dt V(x(t), w(t)) = f(x(t))·D_xV(x(t), w(t)) + [A(x(t))w(t)]·D_wV(x(t), w(t)) ≤ 0.    (136)

This expression is continuous in t, so we can integrate to show that V remains bounded for all time, which is a contradiction.

For the second part, fix a time T > 0, to be chosen later. Define

    V(x0, w0) = (2L / (1 - e^{-2LT})) ∫_0^T |w(t)|^2 dt,    (137)

where (x(t), w(t)) evolves under (133) from initial condition (x(0), w(0)) = (x0, w0), and L is an upper bound for the operator norm ∥A(x)∥ over x ∈ B. Observe that, by linearity of w(t) with respect to w0, this is quadratic in w0 and thus can be written

    V(x0, w0) = w0^T U(x0) w0    (138)

for some symmetric matrix U(x0). Since A ∈ C1(B, R^{m×m}), it follows that U ∈ C1(B, R^{m×m}). Note that

    |d/dt |w|^2| ≤ 2|w| |A(x)w| ≤ 2∥A(x)∥|w|^2 ≤ 2L|w|^2,    (139)

i.e., d/dt |w|^2 ≥ -2L|w|^2. Thus Grönwall's inequality gives us

    |w(t)|^2 ≥ e^{-2Lt} |w0|^2,    (140)

and so

    V(x0, w0) ≥ (2L / (1 - e^{-2LT})) ∫_0^T e^{-2Lt} |w0|^2 dt = |w0|^2,    (141)

satisfying the first condition of (134). For the second condition of (134), we define

    v(t) = V(x(t), w(t)) = (2L / (1 - e^{-2LT})) ∫_t^{t+T} |w(s)|^2 ds.    (142)

Then,

    f(x(t))·D_xV(x(t), w(t)) + [A(x(t))w(t)]·D_wV(x(t), w(t)) = dv/dt = (2L / (1 - e^{-2LT})) (|w(t + T)|^2 - |w(t)|^2) ≤ (2L / (1 - e^{-2LT})) (K^2 e^{-2λT} - 1) |w(t)|^2,    (143)

so if T ≥ (1/λ) log K the condition is satisfied.
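The compound matrices Df^[k] appearing in (97) and (99) may be unfamiliar; the short Python sketch below (our illustration, not code from this work) builds the k-th additive compound of any square matrix directly from its defining action on the wedge basis. Its eigenvalues are the k-fold sums of the eigenvalues of A, which is why it governs sums of Lyapunov exponents:

```python
# Sketch: the k-th additive compound A^[k] (the matrix called Df^[k] above
# when A is the Jacobian). It is defined by its action on the wedge basis,
# A^[k](e_i1 ^ ... ^ e_ik) = sum_j e_i1 ^ ... ^ (A e_ij) ^ ... ^ e_ik.
import itertools
import numpy as np

def additive_compound(A, k):
    n = A.shape[0]
    subsets = list(itertools.combinations(range(n), k))
    index = {S: a for a, S in enumerate(subsets)}
    out = np.zeros((len(subsets), len(subsets)))
    for col, S in enumerate(subsets):
        for slot, i in enumerate(S):
            rest = S[:slot] + S[slot + 1:]
            for i_new in range(n):
                if i_new in rest:
                    continue  # repeated index: the wedge product vanishes
                T = tuple(sorted(rest + (i_new,)))
                # sign from sorting i_new into place within the wedge
                sign = (-1) ** abs(T.index(i_new) - slot)
                out[index[T], col] += sign * A[i_new, i]
    return out

A = np.diag([1.0, 2.0, 3.0])
print(additive_compound(A, 2))  # diag(3, 4, 5): pairwise eigenvalue sums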
References

[1] MOSEK ApS. MOSEK Optimizer API for Julia, 2025.
[2] L. Arnold. Random dynamical systems. Springer Monographs in Mathematics. Springer-Verlag, 1998.
[3] J. Bochi. Ergodic optimization of Birkhoff averages and Lyapunov exponents. In Proc. Int. Congr. Math., pages 1825-1846, 2018.
[4] N. Caranicolas. 1:1:1 resonant periodic orbits in 3-dimensional galactic-type Hamiltonians. Astronomy and Astrophysics, 282:34-36, 1994.
[5] C. Coey, L. Kapelevich, and J. P. Vielma. Solving natural conic formulations with Hypatia.jl. INFORMS Journal on Computing, 34(5):2686-2699, 2022.
[6] T. de Zeeuw. Motion in the core of a triaxial potential. Monthly Notices of the Royal Astronomical Society, 215(4):731-760, 1985.
[7] C. R. Doering and J. Gibbon. On the shape and dimension of the Lorenz attractor. Dynamics and Stability of Systems, 10(3):255-268, 1995.
[8] A. Eden, C. Foias, and R. Temam. Local and global Lyapunov exponents. Journal of Dynamics and Differential Equations, 3:133-177, 1991.
[9] A. O. Eden. An abstract theory of L-exponents with applications to dimension analysis. PhD thesis, Indiana University, 1989.
[10] M. Fiedler. Special matrices and their applications in numerical mathematics. Courier Corporation, 2008.
[11] K. Gatermann and P. A. Parrilo. Symmetry groups, semidefinite programs, and sums of squares. Journal of Pure and Applied Algebra, 192(1-3):95-128, 2004.
[12] D. Goluskin. Bounding averages rigorously using semidefinite programming: mean moments of the Lorenz system. Journal of Nonlinear Science, 28:621-651, 2018.
[13] D. Goluskin and G. Fantuzzi. Bounds on mean energy in the Kuramoto-Sivashinsky equation computed using semidefinite programming. Nonlinearity, 32(5):1705, 2019.
[14] P. J. Goulart and Y. Chen. Clarabel: An interior-point solver for conic programs with quadratic objectives, 2024.
[15] M. Kaluba, D. Kielak, and P. W. Nowak. On property (T) for Aut(Fn) and SLn(Z). Annals of Mathematics, 193(2):539-562, 2021.
[16] J. Kaplan and J. Yorke. Chaotic behavior of multidimensional difference equations. Functional Differential Equations and Approximation of Fixed Points, pages 204-227, 1979.
[17] H. Khalil. Nonlinear systems. Prentice Hall, 2002.
[18] N. Kuznetsov and V. Reitmann. Attractor dimension estimates for dynamical systems: theory and computation. Springer, 2020.
[19] M. Lakshmi, G. Fantuzzi, J. Fernández-Caballero, Y. Hwang, and S. Chernyshenko. Finding extremal periodic orbits with polynomial optimisation, with application to a nine-mode model of shear flow. SIAM J. Appl. Dyn. Syst., 19:763-787, 2020.
[20] B. Legat, C. Coey, R. Deits, J. Huchette, and A. Perry. Sum-of-squares optimization in Julia. In The First Annual JuMP-dev Workshop, 2017.
[21] B. Legat, S. Timme, T. Weisser, L. Kapelevich, N. Sajko, Manuel, A. Demin, C. Hansknecht, C. Rackauckas, J. TagBot, M. Forets, and T. Holy. JuliaAlgebra/DynamicPolynomials.jl: v0.5.7, May 2024.
[22] G. A. Leonov and S. Lyashko. Eden's hypothesis for a Lorenz system. Vestnik St. Petersburg University: Mathematics, 26(3):15-18, 1993.
[23] G. A. Leonov, N. V. Kuznetsov, N. Korzhemanova, and D. Kusakin. Lyapunov dimension formula for the global attractor of the Lorenz system. Communications in Nonlinear Science and Numerical Simulation, 41:84-103, 2016.
[24] D. London. On derivations arising in differential equations. Linear and Multilinear Algebra, 4(3):179-189, 1976.
[25] G. G. Lorentz. Approximation of Functions. Chelsea Publishing Company, 2nd edition, 1986.
[26] D. Manika. Application of the compound matrix theory for the computation of Lyapunov exponents of autonomous Hamiltonian systems. Master's thesis, Aristotle University of Thessaloniki, 2013.
[27] M. Margaliot and E. D. Sontag. Revisiting totally positive differential systems: A tutorial and new results. Automatica, 101:1-14, 2019.
[28] A. Marshall, I. Olkin, and B. Arnold. Inequalities: Theory of Majorization and Its Applications. Springer Series in Statistics. Springer New York, 2010.
[29] D. Martini, D. Angeli, G. Innocenti, and A. Tesi. Ruling out positive Lyapunov exponents by using the Jacobian's second additive compound matrix. IEEE Control Systems Letters, 6:2924-2928, 2022.
[30] D. Martini, D. Angeli, G. Innocenti, and A. Tesi. Bounding Lyapunov exponents through second additive compound matrices: Case studies and application to systems with first integral. International Journal of Bifurcation and Chaos, 33(10):2350114, 2023.
[31] J. S. Muldowney. Compound matrices and ordinary differential equations. The Rocky Mountain Journal of Mathematics, pages 857-872, 1990.
[32] K. G. Murty and S. N. Kabadi. Some NP-complete problems in quadratic and nonlinear programming. Mathematical Programming: Series A and B, 39(2):117-129, 1987.
[33] H. Oeri. Convex optimization methods for bounding Lyapunov exponents. PhD thesis, 2023.
[34] H. Oeri and D. Goluskin. Convex computation of maximal Lyapunov exponents. Nonlinearity, 36(10):5378, 2023.
[35] J. F. Palacián, C. Vidal, J. Vidarte, and P. Yanguas. Periodic solutions and KAM tori in a triaxial potential. SIAM Journal on Applied Dynamical Systems, 16(1):159-187, 2017.
[36] A. Papachristodoulou and S. Prajna. On the construction of Lyapunov functions using the sum of squares decomposition. In Proceedings of the 41st IEEE Conference on Decision and Control, volume 3, pages 3482-3487. IEEE, 2002.
[37] J. P. Parker. The Lorenz system as a gradient-like system. Nonlinearity, 37(9):095022, 2024.
[38] J. P. Parker, D. Goluskin, and G. Vasil. A study of the double pendulum using polynomial optimization. Chaos, 31(10):103102, 2021.
[39] P. Parrilo. Structured Semidefinite Programs and Semialgebraic Geometry Methods in Robustness and Optimization. PhD thesis, California Institute of Technology, 2000.
[40] A. Pikovsky and A. Politi. Lyapunov exponents: a tool to explore complex dynamics. Cambridge University Press, 2016.
[41] M. Putinar. Positive polynomials on semi-algebraic sets. Indiana University Mathematics Journal, 42:969-984, 1993.
[42] C. Rackauckas and Q. Nie. DifferentialEquations.jl: a performant and feature-rich ecosystem for solving differential equations in Julia. Journal of Open Research Software, 5(1):15, 2017.
[43] B. Reznick. Some concrete aspects of Hilbert's 17th problem. Contemporary Mathematics, 253:251-272, 2000.
[44] C. W. Scherer and C. W. J. Hol. Matrix sum of squares relaxations for robust semidefinite programs. Mathematical Programming, 107:189-211, 2006.
[45] R. Temam. Infinite-dimensional dynamical systems in mechanics and physics, volume 68. Springer Science & Business Media, 2012.
[46] I. Tobasco, D. Goluskin, and C. R. Doering. Optimal bounds and extremal trajectories for time averages in nonlinear dynamical systems. Physics Letters A, 382(6):382-386, 2018.
[47] S. Winitzki. Linear algebra via exterior products. Sergei Winitzki, 2009.
BENCHMARKING MULTIMODAL LARGE LANGUAGE MODELS FOR FACE RECOGNITION

Hatef Otroshi Shahreza and Sébastien Marcel
Idiap Research Institute, Switzerland
{hatef.otroshi, sebastien.marcel}@idiap.ch

ABSTRACT

Multimodal large language models (MLLMs) have achieved remarkable performance across diverse vision-and-language tasks. However, their potential in face recognition remains underexplored. In particular, the performance of open-source MLLMs needs to be evaluated and compared with existing face recognition models on standard benchmarks with a similar protocol. In this work, we present a systematic benchmark of state-of-the-art MLLMs for face recognition on several face recognition datasets, including LFW, CALFW, CPLFW, CFP, AgeDB and RFW. Experimental results reveal that while MLLMs capture rich semantic cues useful for face-related tasks, they lag behind specialized models in high-precision recognition scenarios in zero-shot applications. This benchmark provides a foundation for advancing MLLM-based face recognition, offering insights for the design of next-generation models with higher accuracy and generalization. The source code of our benchmark is publicly available on the project page.

Index Terms— Benchmark, Face Recognition, Foundation Models, Multimodal Large Language Models (MLLMs)

1. INTRODUCTION

Multimodal large language models (MLLMs) have recently gained significant attention from the research community for visual and linguistic understanding tasks. By combining pretrained visual encoders with large language models (LLMs), systems such as Flamingo [1], QwenVL [2], and GPT-4o [3] have achieved state-of-the-art performance across diverse tasks, including image captioning and visual question answering (VQA). These models showcase the ability of LLMs to reason over perceptual inputs and generate coherent, contextually grounded output text, enabling general-purpose image processing in zero-shot and few-shot settings. Leveraging large-scale pretraining, they have accelerated the development of foundation models that are capable of interpreting and responding to complex visual questions without requiring task-specific supervision.

This work was funded by the Hasler foundation through the Responsible Face Recognition (SAFER) project and also by the European Union project CarMen (Grant Agreement No. 101168325).

Face recognition is also a popular computer vision task and is increasingly used in different applications [4, 5, 6]. In particular, face recognition is used as a secure authentication tool in a broad range of applications such as smartphone unlocking, border control, etc. In addition to security purposes, face recognition is used for entertainment and also in social media. Face recognition models have been extensively studied in the literature, and there are also standard benchmarks to evaluate and compare the performance of face recognition models.

With the surge of MLLMs, we can consider potential applications of MLLMs for face recognition [7]. However, before replacing existing face recognition models with MLLMs, it is important to know the performance of MLLMs compared to typical models on standard benchmarks with a similar protocol. In this paper, we investigate how open-source MLLMs perform on face recognition benchmarks. While there are some previous works on the evaluation of MLLMs for different face understanding tasks [8, 9, 10], to our knowledge this paper is the first work that benchmarks MLLMs for face recognition on standard datasets with similar protocols.
In the remainder of this paper, we first review previous work in the literature in Section 2. We then describe our benchmark in Section 3 and present our results in Section 4. Finally, the paper is concluded in Section 5.

2. RELATED WORK

Recently, several papers have investigated the application of MLLMs for face-related tasks, including face recognition, attribute analysis, forgery detection, anti-spoofing, and multimodal reasoning. A recent survey [7] provides an extensive review of how foundation models and MLLMs are being applied in biometrics and face recognition.

Early studies investigated the use of pretrained MLLMs, such as ChatGPT [3], for face verification [9], and for predicting soft biometrics, such as age, gender, and ethnicity. Jia et al. [11] also evaluated the application of ChatGPT for zero-shot face deepfake detection. Shi et al. [12] explored chain-of-thought prompting for Gemini and ChatGPT in face anti-spoofing and deepfake detection tasks. Komaty et al. [10] investigated in-context learning of ChatGPT [3] for face anti-spoofing. Sony et al. [13] evaluated the performance of several foundation models (such as CLIP, BLIP, etc.) for face recognition, and showed that fusion of face recognition models with foundation models can improve recognition accuracy.

Fig. 1. Sample questions in FaceXBench [8] and our benchmark (panels: FaceXBench and FaceRecBench [ours]; the prompt in each is "Are these two images of the same person? Answer 'yes' or 'no'.").

In addition to these studies, some benchmarks were proposed for different face processing tasks. FaceBench [14] proposed a visual question-answering benchmark for facial attributes. Benchmarks such as FaceXBench [8] and Face-Human-Bench [15] were also proposed to benchmark MLLMs across various face processing tasks, including facial expression recognition, attribute prediction, anti-spoofing, etc. FaceXBench [8] also includes face recognition, using face recognition datasets such as LFW [16], AgeDB [17], CFP-FF [18], CFP-FP [18], CALFW [19], CPLFW [20]. However, they used multiple-choice questions for evaluating the performance of MLLMs. Fig. 1 illustrates two example questions from FaceXBench for the face recognition task. While FaceXBench is a useful benchmark for comparing MLLMs in face processing tasks, the reported accuracy values are not comparable to the accuracy of face recognition models in the literature. In fact, for evaluating typical face recognition models, we simply have two face images and want to see if they have the same identity. However, values reported for face recognition based on questions with multiple images and multiple choices are not consistent with values reported in the literature. In this paper, we focus on face recognition and benchmark MLLMs with similar protocols as typical face recognition models.

3. BENCHMARKING MLLMS FOR FACE RECOGNITION

To evaluate MLLMs for face recognition, we consider a verification task where two face images are available and the question is whether the given images belong to the same identity. Hence, we provide the MLLM with both images along with the following prompt:

Prompt: Are these two images of the same person? Answer "yes" or "no".

The output of the MLLM is then expected to be "yes" or "no", indicating whether or not the images are predicted to correspond to the same person.
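Concretely, the protocol reduces to a loop over labeled image pairs. The following is a minimal sketch of this loop (our paraphrase, not the released benchmark code); query_mllm is a hypothetical wrapper around whatever backend serves the model:

```python
# Minimal sketch of the pairwise verification protocol: query an MLLM with
# two face images and the fixed prompt, parse the free-form reply, and
# score accuracy over labeled pairs. `query_mllm` is hypothetical.
PROMPT = 'Are these two images of the same person? Answer "yes" or "no".'

def evaluate_pairs(pairs, query_mllm):
    """pairs: iterable of (image_path_1, image_path_2, is_same_identity)."""
    correct = total = 0
    for img1, img2, is_same in pairs:
        reply = query_mllm([img1, img2], PROMPT).strip().lower()
        predicted_same = reply.startswith("yes")  # parse the reply
        correct += int(predicted_same == is_same)
        total += 1
    return 100.0 * correct / total  # recognition accuracy in percent
```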
In our benchmark, we consider different standard datasets, including Labeled Faces in the Wild (LFW) [16], Cross-Age LFW (CALFW) [19], Cross-Pose LFW (CPLFW) [20], Celebrities in Frontal-Profile in the Wild (CFP) [18], AgeDB-30 [17], and Racial Faces in-the-Wild (RFW) [21]. Our evaluation for each of these datasets includes 6,000 pairs of images, with 3,000 positive and 3,000 negative pairs. For consistency with prior works on face recognition, we report recognition accuracy on these datasets. In the following, we briefly describe each dataset:

Labeled Faces in the Wild (LFW): LFW [16] is a widely used benchmark dataset for unconstrained face verification. It contains 13,233 face images collected from the web, covering 5,749 unique individuals. The dataset is designed to evaluate how well face recognition algorithms generalize to real-world conditions, with variations in pose, expression, illumination, and background. Since its release, LFW has served as a standard reference point for measuring progress in face recognition under unconstrained settings.

Cross-Age LFW (CALFW): CALFW [19] extends LFW by introducing cross-age variation, aiming to make the verification task more challenging. It contains image pairs of the same individual captured at different ages, highlighting the difficulty of recognizing faces over long time spans. This dataset primarily focuses on age-related intra-class variations while maintaining inter-class diversity, making it a valuable benchmark for studying the robustness of face recognition systems to aging effects.

Cross-Pose LFW (CPLFW): CPLFW [20] is another variant of LFW, created to evaluate face recognition under cross-pose conditions. It includes face pairs where the same subject appears in significantly different poses, thus introducing large intra-class variations in pose. By emphasizing pose differences, CPLFW complements LFW and CALFW in testing how well recognition systems handle extreme viewpoint changes.

Celebrities in Frontal-Profile (CFP): The CFP [18] dataset was introduced to test face recognition across frontal and profile views. It consists of images of 500 celebrities, with both frontal and profile face shots, and provides verification protocols for frontal-to-frontal (CFP-FF) and frontal-to-profile (CFP-FP) matching.
Table 1. Comparison of recognition accuracy (%) of MLLMs with face recognition models on different face datasets.

Model | LFW | AgeDB30 | CALFW | CPLFW | CFP-FP | CFP-FF | Average
Open source MLLMs:
LLaVA-v1.5-7b | 50.00 | 50.00 | 50.00 | 50.00 | 50.00 | 50.00 | 50.00
LLaVA-v1.5-13b | 49.92 | 49.68 | 50.02 | 49.83 | 50.09 | 50.01 | 49.93
LLaVA-OneVision-Qwen2-0.5b | 55.52 | 52.63 | 50.08 | 51.55 | 55.00 | 55.61 | 53.40
LLaVA-NeXT-Vicuna-13b | 65.95 | 57.50 | 51.53 | 54.95 | 67.44 | 70.87 | 61.37
LLaVA-NeXT-Vicuna-7b | 53.02 | 50.72 | 50.13 | 50.13 | 53.87 | 56.86 | 52.45
LLaVA-NeXT-Mistral-7b | 50.00 | 49.98 | 50.02 | 50.22 | 49.99 | 50.03 | 50.04
GLM-4v-9b | 52.58 | 50.42 | 48.52 | 50.27 | 50.39 | 49.93 | 50.35
Idefics-9b-Instruct | 50.13 | 50.05 | 50.02 | 49.98 | 49.99 | 50.40 | 50.09
Idefics2-8b | 72.40 | 68.53 | 54.93 | 55.98 | 74.11 | 74.43 | 66.73
Idefics3-8B-Llama3 | 88.83 | 57.98 | 61.18 | 70.90 | 80.26 | 85.11 | 74.05
ShareGPT4v-7b | 49.98 | 50.00 | 50.00 | 49.95 | 50.00 | 50.00 | 49.99
ShareGPT4v-13b | 49.92 | 49.98 | 50.00 | 49.87 | 50.16 | 50.00 | 49.99
PaliGemma-3b-mix-448 | 48.48 | 49.40 | 48.43 | 50.37 | 50.26 | 50.33 | 49.54
Ovis1.5-Llama3-8B | 52.63 | 52.40 | 50.85 | 50.23 | 53.67 | 54.97 | 52.46
Ovis1.5-Gemma2-9B | 73.93 | 57.87 | 56.95 | 54.93 | 75.09 | 78.49 | 66.21
Llama-3.2-11B-Vision-Instruct | 50.55 | 48.20 | 51.83 | 49.50 | 49.47 | 49.94 | 49.92
InternVL2.5-1B | 54.22 | 50.62 | 50.80 | 51.10 | 51.44 | 51.26 | 51.57
InternVL3-1B | 69.28 | 56.25 | 56.15 | 63.35 | 60.80 | 64.06 | 61.65
InternVL3-8B | 87.92 | 52.27 | 59.08 | 72.30 | 79.20 | 81.84 | 72.10
InternVL3-38B | 90.10 | 55.72 | 61.37 | 71.20 | 72.50 | 84.40 | 72.55
FaceLLM-8B | 90.65 | 53.38 | 61.48 | 73.50 | 80.06 | 84.89 | 73.99
Valley2 | 92.93 | 60.75 | 68.58 | 74.55 | 84.33 | 92.27 | 78.90
Qwen2-VL-2B-Instruct | 63.38 | 58.55 | 50.77 | 51.17 | 61.27 | 62.33 | 57.91
Qwen2-VL-7B-Instruct | 93.28 | 66.03 | 71.95 | 75.28 | 86.93 | 93.11 | 81.10
Qwen2.5-VL-3B-Instruct | 77.52 | 54.03 | 60.92 | 59.43 | 67.00 | 82.70 | 66.93
Qwen2.5-VL-7B-Instruct | 89.48 | 59.07 | 69.08 | 73.43 | 79.09 | 89.46 | 76.60
Qwen2.5-VL-32B-Instruct | 79.32 | 59.07 | 64.48 | 66.15 | 69.70 | 83.01 | 70.29
Face Recognition Models:
IResNet-50 (HyperFace) | 98.27 | 90.40 | 91.48 | 85.60 | 92.24 | 98.86 | 92.81
IResNet-50 (MS1MV2) | 99.83 | 98.28 | 95.45 | 92.08 | 98.27 | 99.99 | 97.31

AgeDB: AgeDB [17] is a benchmark dataset focused on age-related variations in face recognition. It contains 16,516 images of 570 subjects with a wide age range. The dataset provides predefined verification protocols with increasing age gaps (e.g., 5, 10, 20, and 30 years), enabling systematic evaluation of how well algorithms handle the challenge of age progression. AgeDB is commonly used to study long-term face recognition performance. We use the 30-year protocol in our benchmark.

Racial Faces in-the-Wild (RFW): The RFW [21] dataset was introduced to evaluate bias and fairness in face recognition systems across different demographic groups. RFW is constructed by reorganizing images from MS-Celeb [22] into four balanced subsets: Caucasian, Asian, Indian, and African. Each subset contains approximately 10,000 images from around 3,000 individuals, with 6,000 comparisons. By providing a benchmark focused on racial diversity, RFW enables systematic analysis of demographic disparities in recognition performance and has become a widely used dataset for studying fairness in face recognition.

We use the cropped images for each dataset available in the InsightFace repository (https://github.com/deepinsight/insightface). We also use the VLMEvalKit repository (https://github.com/open-compass/VLMEvalKit) to implement our benchmark. The source code of our benchmark is publicly available on the project page (https://www.idiap.ch/paper/facerecbench).

4. EXPERIMENTAL RESULTS

We evaluate and benchmark open-source MLLMs on various standard face recognition datasets, including LFW [16], CALFW [19], CPLFW [20], CFP [18], AgeDB-30 [17], and RFW [21]. (Note that, given the license restrictions of each of the benchmark datasets, we were not able to use commercial MLLMs in this study.)
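For illustration, loading one benchmark's pairs might look as follows. This sketch assumes (our reading of the InsightFace tooling, not something stated in this paper) that each dataset ships as a pickled (bins, issame_list) pair, with encoded aligned crops in bins and issame_list[i] labeling the pair (2i, 2i+1):

```python
# Sketch of loading one benchmark's pairs. The (bins, issame_list) pickle
# layout is an assumption about the InsightFace evaluation files, not a
# format documented by this paper.
import pickle

def load_pairs(bin_path):
    with open(bin_path, "rb") as f:
        bins, issame_list = pickle.load(f, encoding="bytes")
    pairs = []
    for i, is_same in enumerate(issame_list):
        pairs.append((bins[2 * i], bins[2 * i + 1], bool(is_same)))
    return pairs  # 6,000 pairs per dataset: 3,000 positive, 3,000 negative

pairs = load_pairs("lfw.bin")  # hypothetical local path
```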
The MLLMs used in our experiments include LLaVA-v1.5-7b [23], LLaVA-v1.5-13b [23], LLaVA-OneVision-Qwen2-0.5b [24], LLaVA-NeXT-Vicuna-7b [24], LLaVA-NeXT-Vicuna-13b [24], LLaVA-NeXT-Mistral-7b [24], GLM-4v-9b [25], Idefics-2-8b [26], Idefics-9b-Instruct [27], Idefics3-8B-Llama3 [28], ShareGPT4v-13b [29], ShareGPT4v-7b [29], PaliGemma-3b-mix-448 [30], Ovis-1.5-Llama3-8B [31], Ovis1.5-Gemma2-9B [31], Llama-3.2-11B-Vision-Instruct [32], InternVL2.5-1B [33], InternVL3-1B [34], InternVL3-2B [34], InternVL3-8B [34], FaceLLM-8B [35], Valley2 [36], Qwen2-VL-2B-Instruct [2], Qwen2-VL-7B-Instruct [2], Qwen2.5-VL-3B-Instruct [37], Qwen2.5-VL-7B-Instruct [37], and Qwen2.5-VL-32B-Instruct [37]. We run our evaluations on a system equipped with an NVIDIA H100 GPU.

Table 2. Comparison of recognition accuracy (%) of MLLMs on different demographic groups in RFW.

Model | African | Asian | Caucasian | Indian | Average | Std.
Open source MLLMs:
LLaVA-NeXT-Vicuna-13b | 51.28 | 53.53 | 56.88 | 53.00 | 53.67 | 2.03
Idefics3-8B-Llama3 | 60.78 | 66.15 | 70.38 | 66.38 | 65.92 | 3.41
Ovis1.5-Gemma2-9B | 52.62 | 54.92 | 61.93 | 55.83 | 56.33 | 3.44
InternVL3-8B | 64.37 | 63.08 | 66.37 | 64.03 | 64.46 | 1.20
FaceLLM-8B | 66.00 | 66.78 | 68.82 | 66.02 | 66.90 | 1.15
Valley2 | 63.53 | 68.77 | 75.57 | 70.28 | 69.54 | 4.29
Qwen2-VL-7B-Instruct | 60.40 | 65.55 | 76.68 | 68.35 | 67.75 | 5.90
Qwen2.5-VL-7B-Instruct | 62.65 | 66.63 | 70.55 | 68.98 | 67.20 | 2.98
Face Recognition Models:
IResNet-50 (HyperFace) | 88.27 | 82.98 | 84.33 | 78.02 | 83.40 | 3.66
IResNet-50 (MS1MV2) | 98.32 | 97.73 | 99.33 | 98.23 | 98.40 | 0.58

Table 1 reports the performance of different MLLMs on several face recognition benchmarks (LFW, CALFW, CPLFW, CFP, and AgeDB-30). This table also compares the performance of MLLMs with IResNet-50 (trained with the AdaFace loss [6] on the MS-Celeb [22] dataset) as a state-of-the-art face recognition model. As another baseline, we also consider a face recognition model with IResNet-50 trained on the HyperFace [38] synthetic dataset. As the results show, there is a significant gap between the performance of MLLMs and that of face recognition models. While increasing the size of an MLLM can improve its performance on the benchmarks, the gains saturate within each MLLM family (as can be seen for InternVL3 and Qwen2.5-VL).

Among the benchmarked MLLMs, FaceLLM is based on InternVL3 and finetuned for face understanding. The results in Table 1 show that the finetuning in FaceLLM has increased the performance compared to the base model (InternVL3) on different face recognition benchmarks. This suggests that by using domain-specific data instead of general-purpose data, we can expect improvement in MLLMs for the face recognition task.

We also compare the top-performing models from Table 1 on the RFW dataset. Table 2 reports the performance of different models for the four demographic groups. The results in this table also indicate a significant gap between the performance of MLLMs and typical models.

5. CONCLUSION

In this paper, we presented a benchmark for MLLMs with a protocol similar to that used to evaluate typical face recognition models. Although MLLMs have shown considerable potential in broad applications, most are trained mainly on general-purpose datasets or large-scale image–text pairs collected from the web.
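As a consistency check on Table 2's summary columns (our computation, not part of the paper): the Average column is the mean over the four demographic groups, and the Std. column is consistent with the population standard deviation, e.g. for Valley2:

```python
# Reproducing the Average and Std. columns of Table 2 from the per-group
# accuracies; Std. matches the population standard deviation (pstdev).
import statistics

valley2 = {"African": 63.53, "Asian": 68.77,
           "Caucasian": 75.57, "Indian": 70.28}  # values from Table 2
print(round(statistics.mean(valley2.values()), 2))    # 69.54 (Average)
print(round(statistics.pstdev(valley2.values()), 2))  # 4.29 (Std.)
```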
Consequently, these models are able to generate high-level image descriptions but often lack task-specific precision. For instance, while they can describe a person's appearance or identify basic demographic attributes such as age and gender, they frequently struggle with the finer details required to recognize identity or to verify whether the identity is the same in two images. This limitation poses challenges for applications of MLLMs in face recognition and requires further study in the future. Our benchmark can be used by future researchers to compare MLLMs with face recognition models.

6. REFERENCES

[1] Jean-Baptiste Alayrac et al., "Flamingo: a visual language model for few-shot learning," Advances in Neural Information Processing Systems, vol. 35, pp. 23716–23736, 2022.
[2] Peng Wang et al., "Qwen2-vl: Enhancing vision-language model's perception of the world at any resolution," arXiv preprint arXiv:2409.12191, 2024.
[3] Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al., "Gpt-4o system card," arXiv preprint arXiv:2410.21276, 2024.
[4] Jiankang Deng, Jia Guo, Niannan Xue, and Stefanos Zafeiriou, "Arcface: Additive angular margin loss for deep face recognition," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 4690–4699.
[5] Anjith George, Christophe Ecabert, Hatef Otroshi Shahreza, Ketan Kotwal, and Sébastien Marcel, "Edgeface: Efficient face recognition model for edge devices," IEEE Transactions on Biometrics, Behavior, and Identity Science, vol. 6, no. 2, pp. 158–168, 2024.
[6] Minchul Kim, Anil K Jain, and Xiaoming Liu, "Adaface: Quality adaptive margin for face recognition," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 18750–18759.
[7] Hatef Otroshi Shahreza and Sébastien Marcel, "Foundation models and biometrics: A survey and outlook," IEEE Transactions on Information Forensics and Security, 2025.
[8] Kartik Narayan, Vibashan VS, and Vishal M Patel, "Facexbench: Evaluating multimodal llms on face understanding," arXiv preprint arXiv:2501.10360, 2025.
[9] Ahmad Hassanpour, Yasamin Kowsari, Hatef Otroshi Shahreza, Bian Yang, and Sébastien Marcel, "Chatgpt and biometrics: an assessment of face recognition, gender detection, and age estimation capabilities," in 2024 IEEE International Conference on Image Processing (ICIP). IEEE, 2024, pp. 3224–3229.
[10] Alain Komaty, Hatef Otroshi Shahreza, Anjith George, and Sebastien Marcel, "Exploring chatgpt for face presentation attack detection in zero and few-shot in-context learning," arXiv preprint arXiv:2501.08799, 2025.
[11] Shan Jia, Reilin Lyu, Kangran Zhao, Yize Chen, Zhiyuan Yan, Yan Ju, Chuanbo Hu, Xin Li, Baoyuan Wu, and Siwei Lyu, "Can chatgpt detect deepfakes? a study of using multimodal large language models for media forensics," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024, pp. 4324–4333.
[12] Yichen Shi et al., "Shield: An evaluation benchmark for face spoofing and forgery detection with multimodal large language models," arXiv preprint arXiv:2402.04178, 2024.
[13] Redwan Sony, Parisa Farmanifard, Arun Ross, and Anil K Jain, "Foundation versus domain-specific models: Performance comparison, fusion, and explainability in face recognition," arXiv preprint arXiv:2507.03541, 2025.
[14] Xiaoqin Wang, Xusen Ma, Xianxu Hou, Meidan Ding, Yudong Li, Junliang Chen, Wenting Chen, Xiaoyang Peng, and Linlin Shen, "Facebench: A multi-view multi-level facial attribute vqa dataset for benchmarking face perception mllms," arXiv preprint arXiv:2503.21457, 2025.
[15] Lixiong Qin et al., "Face-human-bench: A comprehensive benchmark of face and human understanding for multi-modal assistants," arXiv preprint arXiv:2501.01243, 2025.
[16] Gary B Huang, Marwan Mattar, Tamara Berg, and Eric Learned-Miller, "Labeled faces in the wild: A database for studying face recognition in unconstrained environments," in Workshop on Faces in 'Real-Life' Images: Detection, Alignment, and Recognition, 2008.
[17] Stylianos Moschoglou, Athanasios Papaioannou, Christos Sagonas, Jiankang Deng, Irene Kotsia, and Stefanos Zafeiriou, "Agedb: the first manually collected, in-the-wild age database," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2017, pp. 51–59.
[18] Soumyadip Sengupta, Jun-Cheng Chen, Carlos Castillo, Vishal M Patel, Rama Chellappa, and David W Jacobs, "Frontal to profile face verification in the wild," in 2016 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 2016, pp. 1–9.
[19] Tianyue Zheng, Weihong Deng, and Jiani Hu, "Cross-age lfw: A database for studying cross-age face recognition in unconstrained environments," arXiv preprint arXiv:1708.08197, 2017.
[20] Tianyue Zheng and Weihong Deng, "Cross-pose lfw: A database for studying cross-pose face recognition in unconstrained environments," Beijing University of Posts and Telecommunications, Tech. Rep., vol. 5, no. 7, 2018.
[21] Mei Wang, Weihong Deng, Jiani Hu, Xunqiang Tao, and Yaohai Huang, "Racial faces in the wild: Reducing racial bias by information maximization adaptation network," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 692–702.
[22] Yandong Guo, Lei Zhang, Yuxiao Hu, Xiaodong He, and Jianfeng Gao, "Ms-celeb-1m: A dataset and benchmark for large-scale face recognition," in European Conference on Computer Vision. Springer, 2016, pp. 87–102.
[23] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee, "Visual instruction tuning," in NeurIPS, 2023.
[24] Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee, "Improved baselines with visual instruction tuning," 2023.
[25] Team GLM, Aohan Zeng, et al., "Chatglm: A family of large language models from glm-130b to glm-4 all tools," 2024.
[26] Hugo Laurençon, Léo Tronchon, Matthieu Cord, and Victor Sanh, "What matters when building vision-language models?," 2024.
[27] Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, and Victor Sanh, "Obelics: An open web-scale filtered dataset of interleaved image-text documents," 2023.
[28] Hugo Laurençon, Andrés Marafioti, Victor Sanh, and Léo Tronchon, "Building and better understanding vision-language models: insights and future directions," 2024.
[29] Lin Chen et al., "Sharegpt4v: Improving large multimodal models with better captions," in European Conference on Computer Vision. Springer, 2024, pp. 370–387.
[30] Lucas Beyer, Andreas Steiner, André Susano Pinto, Alexander Kolesnikov, Xiao Wang, Daniel Salz, Maxim Neumann, Ibrahim Alabdulmohsin, Michael Tschannen, Emanuele Bugliarello, et al., "Paligemma: A versatile 3b vlm for transfer," arXiv preprint arXiv:2407.07726, 2024.
[31] Shiyin Lu, Yang Li, Qing-Guo Chen, Zhao Xu, Weihua Luo, Kaifu Zhang, and Han-Jia Ye, "Ovis: Structural embedding alignment for multimodal large language model," arXiv preprint arXiv:2405.20797, 2024.
[32] Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, et al., "The llama 3 herd of models," arXiv preprint arXiv:2407.21783, 2024.
[33] Zhe Chen et al., "Expanding performance boundaries of open-source multimodal models with model, data, and test-time scaling," arXiv preprint arXiv:2412.05271, 2024.
[34] Jinguo Zhu et al., "Internvl3: Exploring advanced training and test-time recipes for open-source multimodal models," arXiv preprint arXiv:2504.10479, 2025.
[35] Hatef Otroshi Shahreza and Sébastien Marcel, "Facellm: A multimodal large language model for face understanding," arXiv preprint arXiv:2507.10300, 2025.
[36] Ziheng Wu, Zhenghao Chen, Ruipu Luo, Can Zhang, Yuan Gao, Zhentao He, Xian Wang, Haoran Lin, and Minghui Qiu, "Valley2: Exploring multimodal models with scalable vision-language design," arXiv preprint arXiv:2501.05901, 2025.
[37] Qwen Team: An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, et al., "Qwen2.5 technical report," 2025.
[38] Hatef Otroshi Shahreza and Sébastien Marcel, "Hyperface: Generating synthetic face recognition datasets by exploring face embedding hypersphere," in The Thirteenth International Conference on Learning Representations, 2025.
2510.14875
Draft version October 17, 2025

AREPO-RSG: Aspherical Circumstellar Material and Winds from Pulsating Dusty Red Supergiants in Global 3D Radiation Hydrodynamic Simulations

Jing-Ze Ma (马竟泽),1 Stephen Justham,1 Rüdiger Pakmor,1 Andrea Chiavassa,2,1 Taeho Ryu,3,4,1 and Selma E. de Mink1

1 Max Planck Institute for Astrophysics, Karl-Schwarzschild-Str. 1, 85748 Garching, Germany
2 Université Côte d'Azur, Observatoire de la Côte d'Azur, CNRS, Lagrange, CS 34229, Nice, France
3 JILA, University of Colorado and National Institute of Standards and Technology, 440 UCB, Boulder, CO 80308, USA
4 Department of Astrophysical and Planetary Sciences, 391 UCB, Boulder, CO 80309, USA

jingze@mpa-garching.mpg.de

ABSTRACT
Recent observations have revealed a surprisingly large fraction of hydrogen-rich supernovae (SNe) interacting with dense confined circumstellar material (CSM), whose origin is heavily debated. Exploiting our recent implementation of a sophisticated radiation transport scheme in the moving-mesh code AREPO, we perform full-sphere 3D radiation hydrodynamic simulations of red supergiant envelopes. For 10 M⊙ and 20 M⊙ core-carbon-burning stars, we find that large-amplitude radial pulsations lift surface material of density 10^-14–10^-12 g cm^-3 into the circumstellar environment up to 3 × 10^14 cm, consistent with the inferred density for the interacting SN 2013fs. There, radiation acts on dust to drive highly anisotropic outflows of 10^-6–10^-5 M⊙ yr^-1. The total CSM masses for both simulations are ∼0.01 M⊙. Due to convection, the CSM density structure has order-of-magnitude angular variations, dominated by large-scale asymmetries. We suggest that (1) the CSM around the progenitor is bound material instead of the widely-assumed steady wind, (2) highly aspherical CSM is common and can be created by surface convection rather than only by binary interactions, and (3) 3D effects need to be incorporated in 1D SN modeling, potentially via effective clumping. Based on our simulations, we propose a 1D analytical CSM model to be used directly in modeling SN observables. We predict that progenitor pulsations (seen in SN 2023ixf) and highly-confined CSM (seen in SN 2013fs) should be common among most hydrogen-rich SNe. This can be tested with progenitor monitoring using the Rubin Observatory and near-future high-cadence surveys such as ULTRASAT and UVEX.

1. INTRODUCTION
Modern photometric surveys and spectroscopic observations have discovered a significant fraction of supernovae (SNe) showing evidence of interactions (interacting SNe), e.g., enhanced luminosity, delayed shock breakout, or transient narrow emission lines (see e.g., Smith 2017; Dessart 2024, for reviews). These interacting SNe indicate that the progenitor stars do not explode in vacuum, but in dense, confined circumstellar material (CSM). This opens up the possibility of probing stellar evolution and mass loss at late stages, which were previously not accessible from observations of normal stars.

Observationally, most constraints on the CSM come from hydrogen-rich SNe (Type II), which are core-collapse SNe of evolved massive stars with hydrogen-rich envelopes. The progenitor stars are predominantly red supergiants (RSGs) and a small population of yellow or blue supergiants and luminous blue variables (Smartt 2009). The observed fraction of interacting SNe in the entire Type II SN population is at least 30%–40% (Bruch et al. 2021, 2023; Hinds et al. 2025),
but the actual fraction is likely higher (Morozova et al. 2018; Förster et al. 2018). The derived CSM density is typically > 10^-14 g cm^-3 within 10^14–10^15 cm from the center of the star (or 1.5–15 stellar radii for a 1000 R⊙ RSG; e.g., Yaron et al. 2017; Zimmerman et al. 2024; Jacobson-Galán et al. 2024a). This indicates the presence of dense, confined CSM close to the progenitor stars. Most analyses assume a wind-like CSM profile to interpret the observational data. The inferred mass-loss rates are > 10^-4 M⊙ yr^-1, or even reach 1 M⊙ yr^-1 (e.g., Fransson et al. 2014; Boian & Groh 2020; Hinds et al. 2025; Ransome & Villar 2025), at least two orders of magnitude higher than the mass-loss rate of core-helium-burning RSGs (e.g., de Jager et al. 1988; Beasor et al. 2020; Antoniadis et al. 2024).

This leaves us with a theoretical puzzle: what is the origin of the dense CSM? One hypothesis is wave-driven mass loss, as proposed by Quataert & Shiode (2012), where core convection triggers gravity waves that propagate outwards and launch an eruptive wind (Shiode & Quataert 2014; Fuller 2017). However, later studies showed that the wave-heating rate is not strong enough to drive significant mass ejections (Mcley & Soker 2014; Wu & Fuller 2021, 2022; Leung et al. 2021). Alternatively, the explosive burning may unbind part of the envelope directly without invoking gravity waves, even though this mechanism is limited to a small progenitor mass range (Smith & Arnett 2014; Woosley & Heger 2015). Another hypothesis is mass ejection during binary interactions (e.g., Smith & Arnett 2014; Mcley & Soker 2014; Ouchi & Maeda 2017; Matsuoka & Sawada 2024; Ercolino et al. 2024), but whether this channel can explain all of the interacting SNe is not clear: although a large fraction of hydrogen-rich SNe are thought to be binary products (Zapartas et al. 2019; Ercolino et al. 2025), only a small fraction of them may interact shortly before the explosion (e.g., Kozyreva et al. 2022; Ercolino et al. 2024).

The CSM may also be purely related to stellar surface variability, in particular for RSGs, the predominant progenitors of Type II SNe. RSGs are known to be large-amplitude pulsators (Kiss et al. 2006), have large-scale surface convection (Gilliland & Dupree 1996), and are surrounded by extended atmospheres (Arroyo-Torres et al. 2015). The Great Dimming of Betelgeuse is an example of possible mass ejections from RSGs (Montargès et al. 2021; Dupree et al. 2022).

It was suggested that the extended atmosphere of RSGs may be enough to explain the CSM (Dessart et al. 2017), but the origin of the extended atmosphere is debated. Analytical arguments suggest that bound material may be lifted from the surface by convection and pulsation (Soker 2021) or by shocks generated by transonic convection (Fuller & Tsuna 2024). Large-amplitude pulsations (Goldberg et al. 2020; Bronner et al. 2025; Laplace et al. 2025) or convection (Goldberg et al. 2022a) may change the density structure of the pre-SN RSG, thereby changing the lightcurves. Using 3D simulations, Goldberg et al. (2025) found that yellow supergiants can launch eruptive mass loss via large-amplitude pulsations. For RSGs, it was also proposed that the pulsation amplitude grows when the luminosity-to-mass ratio (L/M) is large, resulting in episodic mass loss (Heger et al. 1997; Yoon & Cantiello 2010; Sengupta et al. 2025; Suzuki & Shigeyama 2025),
but the actual mass-loss process has only been simulated once, in 1D (Clayton 2018), and is difficult to reproduce due to numerical issues (Vincent Bronner, priv. comm.). Such pulsations are already present in 3D simulations of RSGs (Chiavassa et al. 2024; Goldberg et al. 2022b), but none of the current 1D or 3D models so far successfully produce a CSM dense enough to explain the interacting Type II SNe. A potential issue is that 1D hydrodynamic models break before mass ejections (Bronner et al. 2025; Suzuki & Shigeyama 2025) and, as we suggest in this work, 3D models do not include enough envelope for waves to fully steepen into shocks.

Recently, we overcame those technical barriers by demonstrating a RSG simulation an order of magnitude deeper than previous simulations (Ma et al. 2025). This is credited to the flexible mesh refinement and local time-stepping supported by our radiation transport module AREPO-IDORT (Ma et al. 2025) in the moving-mesh code AREPO (Springel 2010). This enables us to perform a radiation hydrodynamic simulation spanning 6 orders of magnitude in time-scale and 4 orders of magnitude in length-scale (see Figure 18 in Ma et al. 2025). In this work, we present a subset of our 3D AREPO-RSG model grid focusing on the later evolutionary stages, aiming to produce circumstellar material self-consistently from pre-SN RSGs.

2. METHODS
We use the 3D finite-volume moving-mesh code AREPO (Springel 2010; Pakmor et al. 2016; Weinberger et al. 2020) to perform two radiation hydrodynamic simulations of core-carbon-burning RSG envelopes. A detailed description of the 1D initial conditions and numerical methods is presented in Appendix A, and the stellar and numerical parameters are summarized in Table 1. The limitations of our simulations are further discussed in Appendix C.3. Here, we only provide a brief overview.

First, we use the 1D stellar evolution code MESA (version 15140; Paxton et al. 2011, 2013, 2015, 2018, 2019; Jermyn et al. 2023) to construct 1D non-rotating RSG profiles at near-solar metallicity (Z = 0.02). We select a 10 M⊙ RSG (initially 11.5 M⊙) at 200 years before core collapse and a 20 M⊙ RSG (initially 22 M⊙) at 8000 years before core collapse. We define their radii as RMESA. Then, we follow Ohlmann et al. (2017) to replace the inner 3% RMESA with a less dense artificial core in hydrostatic equilibrium, map the modified profile onto AREPO, and damp the velocities globally for one stellar sound-crossing timescale to aid relaxation. Within the artificial core, we further apply a constant luminosity source and a continuous damping term that we keep after relaxation. The simulation box is 300 RMESA wide, filled with a pseudo-vacuum of low density ρ_bg = 10^-17 g cm^-3 and low temperature T_bg = 530 K.

The equations of hydrodynamics are solved using a second-order accurate finite-volume approach with an HLLD Riemann solver (Springel 2010; Pakmor et al. 2016). We solve full self-gravity using an oct-tree multipole expansion method (Weinberger et al. 2020).

The radiation transport is coupled to the hydrodynamics via our newly-implemented AREPO-IDORT module, which solves for gray specific intensities at discrete directions by solving the time-independent radiative transfer equations using an implicit first-order discrete ordinates method (Ma et al. 2025).
Figure 1. Locations of the two 3D pre-SN AREPO-RSG simulations on the Hertzsprung-Russell diagram. The evolutionary tracks of the MESA models are indicated by black solid lines. We mark the 1D MESA models (empty stars) used as initial conditions for the 3D simulations (filled stars, with errorbars indicating the 3σ variations due to temporal variability). The effective temperatures of the simulations are spherically averaged (see Appendix A.8). Background gray scatter dots indicate the observed Galactic RSG population obtained through TiO bands (triangles; Levesque et al. 2005) or SED fitting (circles; Gazak et al. 2014). The small gray stars mark the Type IIP/L supernova progenitors identified in pre-explosion images compiled by Van Dyk (2025), where we highlight the well-studied interacting SNe 2023ixf and 2024ggi. We also highlight the dusty progenitor of SN 2025pht detected by the James Webb Space Telescope (Kilpatrick et al. 2025). Both the absolute values and the uncertainties of the bolometric luminosity may be significantly underestimated (Beasor et al. 2025). Background gray lines show contours of constant radius.

The radiation transport is performed on the globally synchronized hydrodynamic timesteps.

We include a realistic equation of state and gray opacity tables in our simulations. We use the high-temperature OPAL equation of state (Rogers & Nayfonov 2002) blended with an ideal gas below a temperature of 1870 K. We use Rosseland and Planck opacities from the high-temperature Los Alamos OPLIB table (Colgan et al. 2016; https://aphysics2.lanl.gov/apps/; the opacity due to electron scattering is included in the Rosseland opacity table), stitched to a low-temperature Ferguson et al. (2005) table (https://www.wichita.edu/academics/fairmount_las/physics/Research/opacity.php) below a temperature of 30000 K. These EOS and opacity tables cover the temperature and density range of RSG envelopes and include the ionization/recombination between ionized and atomic species. The low-temperature opacity table also includes contributions from molecular lines and dust in equilibrium chemistry (Ferguson et al. 2005). We use the Rosseland opacity as the flux-weighted opacity, and the Planck opacity as the energy-weighted opacity.

We do not explicitly model dust in our simulations, but we include the effects of radiation acting on dust through the opacity. Below 1200 K in the optically-thin regions, we take the Planck opacity values as both the energy-weighted and the flux-weighted opacities to take into account the high opacities from dust. Between 1500 K and 1200 K, we take a linear interpolation between the Rosseland opacity and the Planck opacity for the opacity value to guarantee a smooth transition. We therefore make the implicit assumption that the dust forms instantly in chemical equilibrium with other species according to Ferguson et al. (2005), with the size distribution of Mathis et al. (1977).

Each simulation consists of approximately 20 million cells and took 3.5 months to reach 70 stellar years running on 504 CPU cores (1.3 million CPU hours). The radiation transport module AREPO-IDORT accounts for about half of the computational cost. Given the flexible mesh construction in AREPO, we employ a multi-shell refinement criterion, where we set different target volume resolutions at different radii (see Appendix A.6). The Voronoi mesh is allowed to move with the gas in a quasi-Lagrangian way, which gives us the advantage of resolving the outflow.
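To make the opacity blending above concrete, the following is a minimal sketch of the temperature interpolation just described, assuming the tabulated Rosseland and Planck opacities and an optically-thin flag are already available per cell; the function name and signature are illustrative, not the actual AREPO implementation.

```python
import numpy as np

def flux_weighted_opacity(T, kappa_ross, kappa_planck, optically_thin,
                          T_lo=1200.0, T_hi=1500.0):
    """Blend Rosseland and Planck opacities across the dust window.

    Above T_hi the flux-weighted opacity is the Rosseland value; below
    T_lo (in optically-thin regions) it is the Planck value, which
    carries the high dust opacities; in between we interpolate linearly
    in temperature.
    """
    # w = 0 -> pure Rosseland, w = 1 -> pure Planck
    w = np.clip((T_hi - T) / (T_hi - T_lo), 0.0, 1.0)
    blended = (1.0 - w) * kappa_ross + w * kappa_planck
    # only switch toward dust (Planck) opacities where the gas is thin
    return np.where(optically_thin, blended, kappa_ross)
```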
Throughout this paper, we only present analyses of the simulations after they reach steady states. We determine the steady state as the time when the mass ejection is no longer influenced by the initial transient ejection, i.e., when the spherically-averaged density contour at 10^-16 g cm^-3 begins to rise again with time. But we also check that two extra conditions are met during the steady state: (1) both the bolometric luminosity and the spherically-averaged radius vary around an approximately constant value, as in 3D CO5BOLD simulations (Ahmad et al. 2023); (2) the temporally-averaged total radial energy flux is approximately spatially constant, as in 3D Athena++ simulations (Goldberg et al. 2022b, 2025). Detailed checks are presented in Appendix A.7. How we obtain the spherically-averaged and global quantities, e.g., luminosity, radius, and effective temperature, is described in Appendix A.8.

3. RESULTS
We show the locations of our two 3D simulations on the Hertzsprung-Russell diagram in Figure 1. They appear redder than the 1D models due to more extended superadiabatic layers (Appendix C.1.3), but are consistent with the derived luminosities and effective temperatures of observed Type IIP SN progenitors.

3.1. Episodically-lifted CSM via Radial Pulsation
After the initial relaxation phase, we find that both simulations settle into steady states with semi-regular variability in the bolometric luminosity. The magnitude of this time variability is represented by the errorbars for the 3D simulations in Figure 1, and shown in the top row of Figure 2. The dominant periods align with the expectations for the fundamental modes of RSGs (see Appendix B). The mechanism that sustains the radial pulsations is likely the κγ mechanism, where the opacity peak due to hydrogen and helium recombination leads to unstable growth of perturbations (Heger et al. 1997; Joyce et al. 2020), with a non-linear boost from recombination energy (Clayton 2018; Bronner et al. 2025). We defer detailed analyses of the pulsation properties to future explorations.

The large-amplitude radial pulsation acts as a piston that pushes dense material from the stellar surface into the circumstellar environment. As shown in the second row of Figure 2, the lifted material forms CSM of ∼0.01 M⊙ in both simulations. We define the CSM mass as the total mass of material with density between 10^-16 and 10^-10 g cm^-3, to exclude the background pseudo-vacuum and the stellar interior.

In the bottom two rows of Figure 2, we plot the spherically-averaged density ⟨ρ⟩ and radial velocity ⟨vr⟩. As shown by the radial velocity inside the stars, the dominant internal motion is the global contraction and expansion of the entire envelope, which further supports that the radial pulsation is governed by the fundamental mode. The large-amplitude pulsations lift the dense material from the stellar surface. The lifted material is then slowed down by both the gravitational pull and collisions with previously lifted material. Eventually, most of the material falls back and collides with the material lifted in the next several pulsations. Over time, this develops into a quasi-static CSM structure in which the inner CSM close to the stellar surface varies on the pulsation period, while the outer CSM at 2 × 10^14 cm varies on a longer timescale, several times the pulsation period.

3.2. Spherically-averaged Density Profiles: A Two-zone Model
In Figure 3, we show the spherically-averaged CSM density profiles of the 10 M⊙ (orange) and 20 M⊙ (red) simulations.
Different curves indicate the spherically-averaged density profiles at different times. The white-edged curves indicate the spherically-and-temporally-averaged density profiles. Deep inside the stars, the density structures do not vary significantly with time, and agree well with the initial profiles from MESA (black solid lines). For reference, the gray solid lines show the density structures of steady winds with mass-loss rates of 10^-6–1 M⊙ yr^-1, assuming a constant wind velocity of 30 km s^-1.

Our simulations predict a two-zone CSM density structure: a dense quasi-static CSM confined within 3 × 10^14 cm, attached to a less dense CSM produced by a dust-driven wind outside. Such a two-zone CSM structure was also found necessary to explain some interacting SNe, e.g., SN 2013fs (Yaron et al. 2017), SN 2020tlf (Jacobson-Galán et al. 2022), SN 2023ixf (Singh et al. 2024; Zimmerman et al. 2024; Nayana et al. 2025), SN 2024ggi (Ertini et al. 2025), and PS1-11aop (Ibik et al. 2025).

We find that the CSM density values in our simulations are consistent with the CSM density inferred for SN 2013fs (Yaron et al. 2017), and approximately one order of magnitude less than for SN 2023ixf (compiled by Nayana et al. 2025) and SN 2024ggi (Zhang et al. 2024; Jacobson-Galán et al. 2024b; Shrestha et al. 2024; Ertini et al. 2025; Chen et al. 2025). However, there are uncertainties both in our treatment of dust and in deriving the density structure from SN observations. We expect that forward modeling from our simulations and direct comparison with the observed SN lightcurves and spectra will be a more reliable test of our predictions.

Here, we show that the first, dense inner zone of the CSM in our simulations is bound material, as opposed to enhanced mass loss as interpreted in most studies. Such a dense atmosphere is supported by a train of periodic shocks generated by large-amplitude pulsations (see Figure 2), as also proposed for the atmospheric structure of asymptotic giant branch (AGB) stars (Bertschinger & Chevalier 1985; Bowen 1988).

Figure 2. Dense CSM episodically lifted by semi-regular pulsation in the 10 M⊙ simulation (left) and the 20 M⊙ simulation (right). From top to bottom, we show the bolometric luminosity, CSM mass, spherically-averaged density, and spherically-averaged radial velocity. In the top two rows, black solid lines show the general trends of the variations, filtering out the high-frequency variability. The bolometric luminosities of the MESA models used as initial conditions are indicated with gray dashed horizontal lines. In the bottom two rows, contour lines indicate the iso-density levels [10^-8, 10^-12, 10^-14, 10^-16] g cm^-3. The cyan lines indicate the spherically-averaged Rosseland radius defined in Appendix A.8. The region to the right of the vertical dotted lines indicates the relaxed phase as defined at the end of Section 2.

The density profile as a function of radial distance r is well described by a shock-supported quasi-static atmosphere (Equation 5 in Fuller & Tsuna 2024):

ρ(r) = ρ(R) (R/r)^2 exp[ −(v_esc^2(R) / (2 v_s^2)) (1 − R/r) ] .   (1)

We take R and ρ(R) to be the Rosseland radius and the associated density, averaged over time and spherical shells in our simulations. These two quantities can also be provided by 1D stellar evolution models if a proper convective efficiency is chosen to match the radius predicted by our 3D simulations.
For the shock velocity, we take the sound speed predicted by the 1D MESA model at the location where the acoustic timescale becomes comparable to the radiative cooling timescale, i.e., where τ = c/c_s (Jiang et al. 2015; Cheng et al. 2024). Here, τ is the optical depth integrated from the stellar surface, c is the speed of light, and c_s is the local sound speed. This is motivated by the theoretical consideration that the weak shock speed approximately follows the local sound speed until radiative cooling yields a nearly isothermal atmosphere. For both models, the shock speeds are approximately 11 km s^-1 (supersonic near the surface), which is consistent with the spherically-averaged radial velocities shown in Figure 2.

Figure 3. The CSM density structure from our 3D simulations is consistent with the range of densities inferred for SN 2013fs. Different colored lines indicate the spherically-averaged density profiles at different times, for the 10 M⊙ simulation (orange) and the 20 M⊙ simulation (red), which are much more extended than the initial conditions from MESA (black solid lines). The white-edged solid lines represent the density profiles averaged both in time and in angle. The black dashed lines show the density profiles described by the analytical 'two-zone model' detailed in Section 3.2. We also highlight the inferred density profiles from three well-studied interacting SNe, SN 2013fs (Yaron et al. 2017), SN 2023ixf (compiled by Nayana et al. 2025), and SN 2024ggi (Zhang et al. 2024; Jacobson-Galán et al. 2024b; Shrestha et al. 2024; Ertini et al. 2025; Chen et al. 2025), as labeled. The background gray solid lines indicate density profiles for constant mass-loss rates assuming a constant wind velocity of 30 km s^-1.

The density profiles of Equation 1 are shown in Figure 3 as the inner parts of the black dashed lines.

The outer zone of the CSM in our simulations is an extended tail due to the dust-driven wind. As the shock-supported atmosphere extends to large distances, the temperature drops below 1200–1500 K, where dust forms (Höfner & Olofsson 2018), and the radiation force acts on the dust opacities to drive a wind (Fuller & Tsuna 2024). In our simulations, the wind is driven by radiation acting on the high Planck opacity taken from the Ferguson et al. (2005) table below 1200–1500 K. We find that the outward radiation force on this material marginally exceeds the gravitational attraction in our simulations, even though the simulations are not long enough for us to see the wind material becoming unbound. The density profile can also be predicted from analytical theory. The dust-forming radius can be approximated by R_d = (T(R)/T_d)^2 R (Höfner & Olofsson 2018; Fuller & Tsuna 2024), where T_d is the dust-forming temperature and the term T(R)^2 R can be derived from the luminosity of the MESA model, L = 4πR^2 σ T(R)^4. Here, σ is the Stefan-Boltzmann constant. The density ρ(R_d) at the dust-forming radius follows from plugging R_d into Equation 1. The wind mass-loss rate is then Ṁ = 4π R_d^2 ρ(R_d) v_s, and the density structure of the dust-driven wind is

ρ(r) = Ṁ / (4π r^2 v_∞) = ρ(R_d) (R_d^2 v_s) / (r^2 v_∞) = ρ(R) (R/r)^2 (v_s/v_∞) exp[ −(v_esc^2(R) / (2 v_s^2)) (1 − R/R_d) ] ,   (2)

where we assume a terminal wind speed of v_∞ = 30 km s^-1 (Mauron & Josselin 2011). The order of magnitude of the wind mass-loss rate (and therefore of the density) is not sensitive to the terminal wind speed, because the terminal speed mostly ranges from 10 to 50 km s^-1 (Mauron & Josselin 2011; Decin et al. 2024); the terminal wind speed can also be predicted analytically based on an estimate of the dust opacity (e.g., Equation 14 in Fuller & Tsuna 2024). The entire CSM density structure, plotted as the black dashed lines in Figure 3, is found by attaching Equation 1 to this wind solution.
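For readers who want to apply this two-zone model directly, the following is a minimal numerical sketch of Equations 1 and 2 in cgs units; the function name and default parameter values are placeholders for illustration, not fits from this work.

```python
import numpy as np

G = 6.674e-8  # gravitational constant [cgs]

def two_zone_csm_density(r, R, rho_R, M, T_R, v_s,
                         T_d=1300.0, v_inf=30e5):
    """Two-zone CSM density profile (Equations 1 and 2), cgs units.

    Inner zone: shock-supported quasi-static atmosphere out to the
    dust-forming radius R_d; outer zone: dust-driven wind with a
    constant terminal speed v_inf.
    """
    v_esc2 = 2.0 * G * M / R            # squared escape speed at R
    R_d = (T_R / T_d) ** 2 * R          # dust-forming radius

    def atmosphere(x):                  # Equation 1
        return rho_R * (R / x) ** 2 * np.exp(
            -v_esc2 / (2.0 * v_s ** 2) * (1.0 - R / x))

    # Equation 2: wind density anchored at rho(R_d)
    wind = atmosphere(R_d) * (R_d / r) ** 2 * (v_s / v_inf)
    return np.where(r < R_d, atmosphere(r), wind)
```

The wind branch is algebraically identical to the right-hand side of Equation 2, since substituting R_d into Equation 1 and multiplying by (R_d/r)^2 (v_s/v_∞) recovers the same expression.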
We find that taking T_d = 1300 K for the 10 M⊙ simulation yields a good fit, while the 20 M⊙ simulation does not drive a wind, potentially due to its small radius and low luminosity for its mass. The 20 M⊙ star will become more luminous in the final thousand years than simulated here, and is therefore expected to have more violent mass ejections prior to core collapse than in our simulation. Both the 3D simulations and the analytical descriptions predict a mass-loss rate between 10^-6 and 10^-5 M⊙ yr^-1, which is broadly consistent with the observed mass-loss rates (e.g., de Jager et al. 1988; Beasor et al. 2020; Antoniadis et al. 2024; Decin et al. 2024).

Figure 4. Convection creates a clumpy stellar surface and aspherical circumstellar material, leading to an anisotropic outflow. Left: a 2D density slice of the 10 M⊙ simulation at 60.5 years. Right: projected density maps along three spheres indicated by the dashed circles in the left plot (where the color at the edge of each density map corresponds to that of the appropriate circle). An interactive 3D visualization can be found here: https://jingzema.com/AREPO-RSG/arepo_rsg_csm_fast.html

3.3. 3D Effects Due to Convection: Clumpy Surfaces, Aspherical CSM, and Anisotropic Outflows
In our simulations, large-scale convection in the RSG envelope leads to significant deviations from spherical symmetry, which create the clumpy surface, aspherical CSM, and anisotropic dust-driven outflows illustrated in the left panel of Figure 4. We take three spheres at different radii (dotted circles in the left panel of Figure 4) and plot the projected density maps along those spheres in the right panels. Inside the star near the photosphere (bottom right), the density shows small-scale clumps. The clumpy surface is due to the surface convection driven by radiative cooling near the photosphere (Ma et al. to be subm.). This is where the supernova shock will propagate through and break out from the surface. It has been suggested that such a clumpy progenitor surface can prolong the shock breakout duration (Goldberg et al. 2022a). In the circumstellar environment (middle right panel in Figure 4), the density can still differ by several orders of magnitude between different angles. This is particularly evident in the ejected material (top right panel in Figure 4), where the density distribution is dominated by large-scale asymmetries, reflecting the large convective cells inside the star from which the CSM is lifted.

The temporal variations of the convective structure and the aspherical circumstellar material are illustrated in Figure 5. The plot shows the variations during one pulsation cycle of the 10 M⊙ model, for the bolometric intensity (second row), the mid-plane slice of the density (third row), and the mid-plane slice of the radial velocity (last row). Generally, the stellar surface exhibits filament-like structures and dark clumps in the intensity map, almost identical to those seen in the CO5BOLD 3D simulations of AGB stars and RSGs (e.g., Freytag et al. 2024).
The dark clumps are pulsation-lifted, dense, opaque material shaped by large-scale convection (Freytag et al. 2024). The overall convective pattern on the surface is controlled by the intense cooling, which creates a density inversion subject to the Rayleigh-Taylor instability, resulting in dense, low-entropy material being mixed into the envelope through fast downdrafts (Ma et al. to be subm.).

Figure 5. Time sequence of the convective envelope variations within one pulsation cycle of the 10 M⊙ simulation. The top row shows the variability of the absolute bolometric magnitude Mbol in black and the spherically-averaged Rosseland radius ⟨Rross⟩ in orange. During this pulsation cycle, we select 4 snapshots equally spaced in time (indicated by gray vertical lines in the top row), and plot the bolometric intensity viewed from the x axis (second row), the y-z mid-plane slice of the density (third row), and the y-z mid-plane slice of the radial velocity (last row).

In the top row of Figure 5, we show the nearly anti-phase variability between the absolute bolometric magnitude Mbol and the spherically-averaged Rosseland radius. At the luminous phase at 19196 days (defined as +0 days in the second row of the plot), the star begins its expansion. At day +207, the expanding star drives shocks and lifts dense material from the surface. By day +414, the star begins to contract. Most of the dense material falls back onto the star while the outer ejecta expand. At day +621, the star resumes its peak luminosity and begins its next pulsation cycle. Long filaments appear in the density slice where gas is compressed by shock fronts formed in collisions between laterally-expanding ejecta. These long filaments are in fact sheets in 3D, and their separation reflects the length scale of deep convection. This is because the horizontal motions are governed by deep convection (Ma et al. to be subm.), and when a pulse passes near the surface, it steepens into a laterally expanding shock front that lifts the material. Two adjacent shock fronts meet at a plane extending radially outwards from the star, which creates the over-dense sheets approximately perpendicular to the stellar surface. Through multiple pulsation cycles, the star episodically fills its surroundings with dense aspherical material.

We therefore suggest that the SN progenitor profile is clumpy from the stellar surface out to the CSM, which should be taken into account in 1D SN modeling, possibly through micro/macroclumping (Dessart et al. 2018; Dessart & Audit 2019). This may help explain the different densities derived from different wavelengths for interacting SNe (Berger et al. 2023; Nayana et al. 2025). Another naive expectation is that clumpiness allows the radiation to leak out through low-density chimneys, thereby enhancing the cooling, but detailed radiation hydrodynamic models are needed to assess the effects of clumping in progenitors and their associated CSM.

There is a variety of observational evidence suggesting that the CSM is aspherical or clumpy, as indicated by different densities inferred from different wavelengths, multi-peaked emission lines, and polarization signals (e.g., Chandra et al. 2012; Smith et al. 2015; Andrews & Smith 2018; Andrews et al. 2019; Brennan et al. 2022; Kozyreva et al. 2022; Smith et al. 2023; Vasylyev et al. 2023; Bilinski et al. 2024; Singh et al. 2024; Shrestha et al. 2025; Andrews et al. 2025; Nayana et al. 2025; Vasylyev et al. 2025).
The aspherical CSM is mostly assumed to be associated with binary interactions (e.g., Smith et al. 2015; Andrews & Smith 2018; Brennan et al. 2022; Smith et al. 2023; Vasylyev et al. 2023; Bilinski et al. 2024; Singh et al. 2024; Andrews et al. 2025). Here, we show instead that convection can also result in highly aspherical CSM dominated by large-scale structures. Future spectropolarimetric forward modeling (e.g., Dessart et al. 2025) from our simulations will be useful for direct comparison with observations.

4. DISCUSSION AND CONCLUSIONS
This is the first of a series of papers in which we describe the scientific results from the 3D AREPO-RSG models. In this work, we perform global 3D radiation hydrodynamic simulations of two pre-explosion RSGs during core carbon burning: a 10 M⊙ RSG at 200 years before it explodes, and a 20 M⊙ RSG at 8000 years before it reaches core collapse. Our multi-scale simulations include 97% of the convective envelope in radius and the atmosphere up to 300 stellar radii. The differences between our results and other 1D or 3D simulations are discussed in Appendix C, along with the caveats of our simulations.

We find that dense confined CSM of ∼0.01 M⊙ is self-consistently produced in the simulations, episodically lifted by large-amplitude radial pulsations. We interpret those pulsations as fundamental modes excited by the κγ-mechanism (e.g., Bronner et al. 2025). The pulsations steepen into shocks and lift the dense surface material into the circumstellar environment up to 3 × 10^14 cm, where dust forms and radiation acts on the dust to drive outflows of 10^-6–10^-5 M⊙ yr^-1. This process is very similar to the pulsation-enhanced dust-driven wind of AGB stars (Höfner & Olofsson 2018). The CSM density values from our simulations fit well with the CSM density inferred for SN 2013fs and are about one order of magnitude lower than the CSM inferred for SN 2023ixf and SN 2024ggi. This is already a reasonable agreement considering the uncertainties in both the simulations and the observational inference. Based on our simulations, we propose a 1D analytical two-zone model to describe the CSM density profile in Section 3.2.

In our simulations, the CSM and the dust-driven outflow are highly aspherical, dominated by large-scale asymmetries. This is because the large-scale convection in the RSG envelope breaks the spherical symmetry of the material ejection. We therefore propose that:
• The confined CSM observed in interacting hydrogen-rich SNe may be bound material episodically lifted from the surface with a velocity dispersion of < 30 km s^-1, rather than CSM from mass loss as assumed in most works.
• Highly aspherical CSM, as inferred from spectroscopy and spectropolarimetry, can also come from the surface convection of single stars, rather than only from binary interactions. If true, most CSM in Type II SNe should be aspherical.
• 3D effects, including the consequences of clumpy stellar surfaces, aspherical CSM, and anisotropic outflows, should be considered in 1D SN modeling, potentially as effective clumping.

Our 3D simulations can be readily used for 1D and 3D simulations of SN explosions, wind launching, and binary interactions. By taking 1D slices of a 3D profile along different angles and performing 1D radiation hydrodynamic simulations of SN explosions, we can start to study how the 3D geometry affects the observed lightcurve and spectral evolution.
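As a sketch of how such angle-dependent 1D profiles could be extracted from 3D cell data, the following assumes a simple array layout (cell positions, densities, and a chosen line-of-sight unit vector); the cone-averaging choice and all names are illustrative rather than the pipeline actually used in this work.

```python
import numpy as np

def radial_profile_along_direction(pos, rho, center, n_hat,
                                   half_angle=0.2, nbins=200):
    """Average the density of all cells inside a narrow cone around the
    unit vector n_hat, binned logarithmically in radius, to build a 1D
    profile usable as input to 1D SN explosion models."""
    d = pos - center                           # positions relative to star
    r = np.linalg.norm(d, axis=1)
    mu = d @ n_hat / np.maximum(r, 1e-30)      # cosine of angle to n_hat
    inside = mu > np.cos(half_angle)
    edges = np.logspace(np.log10(r[inside].min()),
                        np.log10(r[inside].max()), nbins + 1)
    which = np.digitize(r[inside], edges)
    profile = np.array([rho[inside][which == i].mean()
                        if np.any(which == i) else np.nan
                        for i in range(1, nbins + 1)])
    return 0.5 * (edges[1:] + edges[:-1]), profile
```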
In addition, the dust-driven outflow observed in our simulations is preliminary and needs to be studied further with more detailed physics. Finally, a subset of these pre-explosion RSGs are likely to be in interacting binaries (Ercolino et al. 2024), which may create very extended CSM (e.g., Landri & Pejcha 2024) or trigger precursors (e.g., Tsuna et al. 2024, 2025).

The highly aspherical CSM and mass ejection simulated in this work are quantitatively consistent with observed high-luminosity RSGs and hypergiants, e.g., IRC+10420 (Humphreys et al. 1997), VY CMa (Smith et al. 2001; Singh et al. 2023), NML Cyg (Schuster et al. 2006; De Beck et al. 2025), VX Sgr (Chiavassa et al. 2022), WOH G64 (Ohnaka et al. 2024; Munoz-Sanchez et al. 2024), and DFK 52 (Siebert et al. 2025). Long-term monitoring and spatially-resolved interferometric observations of these stars will help reveal their evolutionary stages and their connections to interacting SNe.

We also provide clear predictions to be tested by observations: for most hydrogen-rich SNe with a final progenitor mass > 10 M⊙, SN 2023ixf-like progenitor pulsations and SN 2013fs-like highly-confined CSM should be present. Observations of those phenomena are still scarce, limited by our detection capability (Dessart 2024; Van Dyk 2025). However, within the next several years, variable SN progenitors are expected to be detected more frequently in the Legacy Survey of Space and Time (LSST) at the Vera C. Rubin Observatory (Ivezic et al. 2019; Hambleton et al. 2023). With a wide-field near-ultraviolet (NUV) photometric survey such as ULTRASAT (Shvartzvald et al. 2024), in combination with early follow-up UV spectroscopy from UVEX (Kulkarni et al. 2021) and other ground-based instruments, we will have a chance to detect more SN 2013fs-like early interaction events in the coming years.

We thank Jim Fuller for providing valuable comments on an early draft. We thank Sebastian Ohlmann for sharing the module to construct stable 3D AREPO giant stars from MESA profiles. We thank Mike Lau, Jared Goldberg, Luc Dessart, Fabian Schneider, Vincent Bronner, Philipp Podsiadlowski, Andrei Beloborodov, and Raffaella Margutti for helpful discussions. A.C. acknowledges support from the French National Research Agency (ANR) funded project PEPPER (ANR-20-CE31-0002). This research project was partly conducted using computational resources (and/or scientific computing services) at the Max-Planck Computing & Data Facility. The authors gratefully acknowledge the scientific support and HPC resources provided by the Erlangen National High Performance Computing Center (NHR@FAU) of the Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) under the NHR project b234dd. NHR funding is provided by federal and Bavarian state authorities. NHR@FAU hardware is partially funded by the German Research Foundation (DFG) – 440719683. Part of this work was done while the author was attending the TDE24 program at the Kavli Institute for Theoretical Physics (KITP), which is supported in part by grant NSF PHY-2309135.

Software: AREPO (Springel 2010; Pakmor et al. 2016; Weinberger et al. 2020), MESA (Paxton et al. 2011, 2013, 2015, 2018, 2019; Jermyn et al. 2023), Astropy (Astropy Collaboration et al. 2013, 2018, 2022), NumPy (Harris et al. 2020), SciPy (Virtanen et al. 2020), Matplotlib (Hunter 2007), Jupyter (Kluyver et al. 2016)

APPENDIX

A. DETAILED METHODS
A.1. 1D MESA Red Supergiant Models
To construct the 1D initial conditions, we use the 1D stellar evolution code MESA (version 15140; Paxton et al. 2011, 2013, 2015, 2018, 2019; Jermyn et al. 2023) to provide the 1D structures of RSGs. To this end, we evolve a grid of single non-rotating massive stars at metallicity Z = 0.02 from the zero-age main sequence (ZAMS) to the onset of core collapse (defined as the phase when the maximum infall speed inside the iron core reaches 300 km s^-1). We then select models with different masses and luminosities along the RSG branch.

Table 1. Parameters of the two 3D pre-SN AREPO-RSG simulations. The stellar parameters are the mass Mtot, bolometric luminosity Lbol, radius Rross where the Rosseland optical depth ≈ 1, and effective temperature Teff. All these surface quantities from the simulation outputs are averaged over spherical shells as described in Appendix A.8, with errorbars indicating the 3σ variations due to temporal variability. The stellar Rosseland radii from MESA are referred to as RMESA hereafter. We also list the numerical parameters for the two simulations: the mass of the gas in the simulation Msim, the mass of the central point particle MIB, the radius of the artificial core RIB, the size of the simulation box lbox, the resolution near the stellar surface ∆rsurf, the total number of cells Ncell, the total number of directions used in the radiation transport NRT, and the total simulation duration tsim.

Model              Mtot   Lbol             Rross          Teff            Msim   MIB    RIB      lbox      ∆rsurf  Ncell   NRT  tsim
                   [M⊙]   [10^5 L⊙]        [R⊙]           [K]             [M⊙]   [M⊙]   [RMESA]  [RMESA]   [R⊙]    [10^7]  -    [yrs]
10 M⊙ pre-SN
  1D MESA          9.73   0.94             722            3757            5.34   4.43   3%       300       6       2.3     80   66
  3D AREPO         9.78   1.09 +0.24/−0.24  990 +132/−125  3331 +376/−274
20 M⊙ pre-SN
  1D MESA          19.45  2.59             1132           3867            9.52   10.08  3%       300       8       2.0     80   73
  3D AREPO         19.60  2.61 +0.58/−0.51  1317 +111/−134  3594 +220/−254

Convection is modeled using mixing-length theory (MLT; Böhm-Vitense 1958) with a mixing-length parameter αMLT and the Ledoux criterion. To account for the high convective efficiency in the RSG envelope (Dessart et al. 2013; Chun et al. 2018; Goldberg et al. 2022b), we follow the procedure in Paxton et al. (2018) and use a core mixing-length parameter αMLT = 1.5 for hydrogen mass fraction XH ≤ 0.5 and αMLT = 3 for the hydrogen-rich envelope with XH > 0.5. We also include semi-convection (Langer et al. 1983) with a semi-convection parameter αSC = 1. We switch on convective overshooting with step-overshoot parameters f = 0.385 and f0 = 0.05, as calibrated by Brott et al. (2011).

For the wind mass loss, we use the Vink et al. (2001) recipe reduced by a factor of 3 for effective temperatures Teff ≥ 10^4 K. This reduction factor is suggested by both theoretical (Krtička & Kubát 2017; Björklund et al. 2021; Gormaz-Matamala et al. 2022) and observational studies (Šurlan et al. 2013; Cohen et al. 2014; Hawcroft et al. 2021). We use the Decin et al. (2024) recipe for Teff < 10^4 K, which is comparable to Beasor et al. (2020) and likely at the lower end of the RSG wind uncertainties (de Jager et al. 1988; Yang et al. 2023; Massey et al. 2023; Antoniadis et al. 2024).

A.2. Initial Conditions, 1D-3D Mapping, and Relaxation
The central part of a giant star is extremely dense and hot, and thus requires prohibitively high spatial and temporal resolutions to simulate in 3D. We therefore introduce an 'artificial core' region, which occupies the inner 3% RMESA of the star.
Here, RMESA is the stellar radius from MESA. To reconstruct the core region of the star, we use the script described in Ohlmann et al. (2017). We cut out the inner 3% of the star in terms of stellar radius, place a point mass at the center, and replace the inner 3% of the profile with a modified γ = 4/3 polytrope with a proper gravitational softening length. This makes the core region less dense while keeping it marginally stably stratified, such that computational power is not wasted on updating the core region. The procedure ensures that the cut-out mass is replaced by the same amount of mass and that the resulting profile is in hydrostatic equilibrium. This modified 1D profile is then mapped onto the 3D AREPO domain using a HEALPix grid constructed in multiple spherical shells (see Ohlmann et al. 2017).

We place the star at the middle of the simulation box, and fill the surroundings with a pseudo-vacuum of low density ρ_bg = 10^-17 g cm^-3 and low temperature T_bg = 530 K. We choose the box size to be 300 RMESA, wide enough that no sound wave reaches the box boundary within the simulation time (effectively there is no boundary, because only waves or outflows exceeding 20 km s^-1 could reach the box boundary within 60 years of simulation time, while the typical outgoing outflow speed is below that). This is done by adding hierarchical layers of boxes with lower and lower resolutions around previously constructed ones, such that almost all the mesh points remain concentrated in the star.

During the initial dynamical timescale of the star, we continuously apply a global damping term to the momentum across the simulation box to aid relaxation, following Ohlmann et al. (2017). We drop the global damping afterwards and allow the convection to set in.
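As an illustration of such a momentum-damping source term, here is a minimal sketch of an exponential velocity damping applied per timestep; the function name and the timescale in the example are placeholders, and the actual AREPO implementation may differ in detail.

```python
import numpy as np

def apply_velocity_damping(vel, dt, tau):
    """Damp cell velocities as dv/dt = -v / tau, integrated exactly
    over one timestep dt (an exponential decay), driving the mapped
    1D profile toward hydrostatic equilibrium during relaxation."""
    return vel * np.exp(-dt / tau)

# Example: one step of 1e4 s with a (placeholder) damping time of 1e6 s
# vel = apply_velocity_damping(vel, dt=1e4, tau=1e6)
```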
With the local time-stepping tech- nique (green), we run simulations up to approximately 70 years (horizontal dotted line), capturing the convective time- scale (blue), pulsation time-scale (orange), and thermal time- scale in the upper 90% of the stellar envelope. The time- scales are calculated as in Section 5.2 in Ma et al. (2025). We use the 3D moving-mesh code AREPO to perform radiation hydrodynamic simulations. The hydrodynam- ics is solved via a second-order accurate finite-volume approach with an HLLD Riemann solver. We solve full self-gravity using a tree-particle-mesh method. Com- pared to typical 3D codes, AREPO performs calculations on an unstructured Voronoi moving-mesh, which allows for flexible and adaptive spatial resolutions. AREPO also exploits local time-stepping, i.e. different parts of the simulation can take different timesteps grouped into a power-of-two hierarchy, which significantly reduces the computational cost for multi-scale problems. This gives us the unique advantage of simulating the star spanning 7 orders of magnitude in time-scales, as shown in Fig- ure 6. Details for hydrodynamics, gravity solver, mesh construction, and time-stepping are presented in e.g., Springel (2010), Pakmor et al. (2016), and Weinberger et al. (2020). We use periodic boundary conditions for hydrodynam- ics and outflow boundary conditions for radiation trans- port on all sides to guarantee a smooth mesh construc- tion. The boundary conditions are not used for the grav- ity solver. A.5. Treatment of Radiation Transport For radiation transport, we use our recently- implemented AREPO-IDORT module to solve the time- independent gray radiative transfer equations with an implicit discrete ordinates method (Ma et al. 2025). This module is based on the method of Jiang (2021) (a current radiation module in Athena++; Stone et al. 2020), but we generalize it to support local time- stepping, moving Voronoi mesh, and tabulated equation of state (EOS). The module solves for specific intensities along discrete directions via a first-order accurate itera- tive finite-volume solver, updates the temperature field implicitly, and couples radiation and gas with energy and momentum exchange. We discretize the angular space into 80 directions, which yields a decent coverage of the 4π solid angle without introducing significant ar- tifacts (see the resolution test in figure 16 of Ma et al. 2025). Since we solve the time-independent radiation trans- port equations, it is more accurate to include the radia- tion pressure in the EOS in radiation-dominated stellar interior. We therefore introduce a transition zone be- tween radiation-included EOS and radiation-excluded EOS. We include the radiation energy and radiation pressure assuming local thermal equilibrium in the EOS for density ρ > 10−9 g cm−3 and smoothly transition to a pure gas EOS for density ρ < 10−11 g cm−3 us- ing a suppression factor multiplied onto the radiation component, such that the radiation is not included in the EOS but in the radiation transport source terms in the optically-thin regime. In the hydrodynamic solver, the radiation force is taken into account via the EOS for ρ > 10−9 g cm−3 and via the radiation transport source terms for ρ < 10−11 g cm−3. In the transition region between 10−11–10−9 g cm−3, we linearly interpo- late between the radiation EOS and radiation transport source terms to calculate the radiation force. We also include the radiation source terms in the energy equa- tion for radiative heating/cooling. 
A.6. Resolution Criterion: Multi-shell Refinement

Figure 7. Spatial resolution as a function of fractional radius in the 10 M⊙ simulation, compared to other 3D RSG simulations. The x axis shows the radial coordinate in units of the stellar radius, and the y axis shows the cell size as a fraction of the local radial coordinate. Blue indicates the 1σ range of the density scale height in our 10 M⊙ simulation. An ideal resolution is 10 cells per scale height. The CO5BOLD simulation (gray line; the 8 M⊙ simulation in Freytag et al. 2024) has the highest resolution, but does not simulate the inner envelope or the circumstellar environment. The Athena++ simulation (black line; the 13 M⊙ simulation in Goldberg et al. 2022b) includes the circumstellar environment but not the interior. Our AREPO simulation (orange line) includes the deep envelope all the way to the circumstellar environment, and has a resolution comparable to Athena++. This is enough to resolve the physics throughout the simulation domain except near the stellar surface.

To resolve the stellar interior, we apply a global target mass resolution of ∆m = 2.7 × 10^-7 Mtot, which guarantees approximately 10 cells per density scale height in the interior. However, as the density drops by orders of magnitude with radial coordinate, the mass resolution criterion is insufficient to resolve the stellar surface and the CSM. We therefore apply a target cell radius of r_cell = 2% RMESA in the spherical shell between 0.2 and 3 RMESA. To resolve the stellar surface, we further apply a target cell radius of r_cell,surf = 6 R⊙ (8 R⊙) when the density of the cell falls within [10^-10, 10^-7] g cm^-3 for the 10 M⊙ (20 M⊙) simulation. To resolve the CSM structure, we then use a target cell-radius-to-distance ratio r_cell/r = 0.02 in the spherical shell between 2 and 30 RMESA. To avoid numerical issues when any outflow propagates beyond that, we further apply a target mass ∆m = 10^-12 g cm^-3 × 4π∆r^3/3 with r_cell = 2% RMESA when the radial coordinate reaches beyond 2.5 RMESA. The actual target cell size is taken to be the minimum value of all the criteria above, as shown in Figure 7; a sketch of this logic follows.
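The sketch below assembles the refinement criteria above into a single target-size function for the 10 M⊙ run; the constants are the values quoted in this subsection and in Table 1, the final 'outflow' criterion is omitted for brevity, and the function itself is illustrative rather than the actual AREPO refinement code.

```python
import numpy as np

RSUN = 6.957e10               # cm
R_MESA = 722 * RSUN           # 1D MESA radius of the 10 Msun model
M_TOT = 9.73 * 1.989e33       # total stellar mass in g

def target_cell_radius(r, rho):
    """Target cell radius (cm) as the minimum of the multi-shell
    refinement criteria of Appendix A.6 (simplified sketch)."""
    candidates = []
    # interior: global target mass -> radius from the local density
    dm = 2.7e-7 * M_TOT
    candidates.append((3.0 * dm / (4.0 * np.pi * rho)) ** (1.0 / 3.0))
    # shell 0.2-3 R_MESA: fixed cell radius of 2% R_MESA
    if 0.2 * R_MESA < r < 3.0 * R_MESA:
        candidates.append(0.02 * R_MESA)
    # stellar surface: fixed cell radius where the density is surface-like
    if 1e-10 <= rho <= 1e-7:
        candidates.append(6.0 * RSUN)
    # CSM shell 2-30 R_MESA: cell radius proportional to distance
    if 2.0 * R_MESA < r < 30.0 * R_MESA:
        candidates.append(0.02 * r)
    return min(candidates)
```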
A.7. How to Determine a Steady State
We determine the steady state based on three general considerations: (1) the mass ejection is no longer influenced by the initial transient ejection; (2) the global quantities (e.g., luminosity and radius) do not show an increasing or decreasing trend (Ahmad et al. 2023); (3) the time-averaged total radial energy flux does not vary with radius (Goldberg et al. 2022b, 2025). We find that the first criterion is the most stringent one, and we also check that the other two are satisfied during our defined steady state.

The first criterion is illustrated in the third row of Figure 2, where we use the spherically-averaged density contour at 10^-16 g cm^-3 as an indicator. This density contour line shows a general decreasing trend after the first initial transient ejections, until the mass ejections are no longer influenced by the fallback material and grow again. We determine the onset of the steady state as the time corresponding to the minimum point of this density contour line.

We also qualitatively check the second and third criteria. The bolometric luminosity and the spherically-averaged Rosseland radius vary around constant values, as shown in Figure 2 and Figure 8. The time-averaged total energy flux is also constant as a function of radial coordinate down to the inner boundary (Figure 9), indicating an energy equilibrium state.

A.8. Method for Analyzing the 3D Data
For all the spherically-averaged quantities shown in this work, we use a radial binning method to analyze the AREPO simulation data. We average the quantities of all the cells falling inside thin spherical shells of radius r and thickness δR = RMESA/1000. In this work, the spherically-averaged density and energy fluxes are weighted by volume, whereas all other quantities are weighted by mass. The spherically-averaged quantity q weighted by volume is

⟨q⟩_V(r) ≡ [ Σ_{|r_i − r| < δR/2} q_i ∆V_i ] / [ Σ_{|r_i − r| < δR/2} ∆V_i ] ,

where ∆V_i is the volume enclosed in each cell, and the sum runs over all cells i whose radial distance r_i from the center falls into (r − δR/2, r + δR/2). The spherically-averaged quantity weighted by mass is

⟨q⟩_m(r) ≡ [ Σ_{|r_i − r| < δR/2} q_i ∆m_i ] / [ Σ_{|r_i − r| < δR/2} ∆m_i ] ,

where ∆m_i is the mass enclosed in cell i.

To obtain the fundamental parameters of the star from our simulations, we adopt a more specific approach (as in Ma et al. 2025). The bolometric luminosity Lbol is calculated by taking the radial component of the radiation flux F_rad,r from the simulation output and spherically averaging 4πr^2 F_rad,r within the spherical shell at 4–5 RMESA. (It would be more intuitive to calculate the bolometric luminosity by integrating the normal radiation flux over the box boundaries; however, since the box boundary is far from the star and has very low resolution, it is more accurate to calculate the luminosity closer to the star.) The spherically-averaged Rosseland optical depth is obtained by summing up the optical depth from the outer boundary to the distance r, i.e.,

⟨τross⟩(r) ≡ Σ_{r_i > r} [ κ_R ∆m / (4πr^2) ]_i .

Here, for each cell i, κ_R is the Rosseland opacity, ∆m is the mass enclosed in the cell, and r is the distance of the cell from the stellar center. We define the Rosseland radius ⟨Rross⟩ as the radius where ⟨τross⟩(r = ⟨Rross⟩) = 1. The spherically-averaged effective temperature ⟨Teff⟩ is defined via the Stefan-Boltzmann law, i.e., by finding the temperature that satisfies Lbol = 4π⟨Rross⟩^2 σ⟨Teff⟩^4, where σ is the Stefan-Boltzmann constant.

Figure 8. Spherically-averaged Rosseland radii vary around constant values after the relaxation phase.

Figure 9. The time-averaged total energy flux is spatially constant above the artificial core during the steady states of both simulations. The colored lines show the total energy flux at different times, and the black line shows the time average of the colored lines.

B. IDENTIFYING THE DOMINANT PULSATION MODE
By taking the standard Lomb-Scargle periodogram (Lomb 1976; Scargle 1982) of the simulated lightcurves in the top row of Figure 2, we obtain the power spectra of the lightcurves (Figure 10) and identify the dominant period as that with the maximum power. In Figure 11, we compare the dominant periods of both simulations with the period-luminosity relations of observed RSG populations from Jiang et al. (2024).
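A minimal sketch of this period identification using Astropy (part of our software stack); the arrays below are synthetic placeholders standing in for the simulated lightcurve, not data from this work.

```python
import numpy as np
from astropy.timeseries import LombScargle

# Placeholder lightcurve: 2000 samples over ~25000 days with a
# 620-day sinusoidal modulation mimicking a fundamental-mode pulsation
t_days = np.linspace(0.0, 25000.0, 2000)
L_bol = 1.0e5 * (1.0 + 0.2 * np.sin(2.0 * np.pi * t_days / 620.0))

# Power spectrum and dominant period (maximum peak)
frequency, power = LombScargle(t_days, L_bol).autopower()
dominant_period = 1.0 / frequency[np.argmax(power)]  # days
print(f"Dominant pulsation period: {dominant_period:.0f} days")
```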
We find that the dominant pulsation modes in our simulations fit reasonably well with the fundamental modes according to the period-luminosity relations. This agrees with most theoretical and observational results, which suggest that Galactic RSGs pulsate predominantly in the fundamental mode (Heger et al. 1997; Kiss et al. 2006; Yang & Jiang 2012; Ren et al. 2019; Joyce et al. 2020; Jiang et al. 2024; Suzuki & Shigeyama 2025); however, see Guo & Li (2002).

Figure 10. Power spectrum of the simulated lightcurves in Figure 2. We obtain the period of the dominant pulsation mode in our simulations by identifying the maximum peak in the power spectrum. The high-frequency part of the power spectrum falls off more steeply than the 1/f trend observed in normal RSGs (where f is the frequency; Kiss et al. 2006), but is closer to that of simulated yellow supergiants (Goldberg et al. 2025) and other observed massive stars (Bowman 2023).

Figure 11. The dominant pulsation periods in our 3D simulations agree with the fundamental modes of RSGs according to the empirical period-luminosity relation. The pulsation periods of the 3D simulations are found by identifying the dominant peak in the power spectra (Figure 10) of the simulated lightcurves (Figure 2, top row). We plot the observed period-luminosity relations of RSGs from Jiang et al. (2024) as solid lines, separated into the fundamental mode (FM), first overtone (O1), and long secondary period (LSP). As gray scatter dots, we show the observed RSG populations in the Galaxy (Chatys et al. 2019), M31 (Soraisam et al. 2018), and M33 (Ren et al. 2019). All the observed data points and relations are in absolute K-band magnitude M_K, and we convert them into bolometric luminosities L_bol following the reddening relation in Massey et al. (2009), assuming an overall effective temperature of 3800 K, which is a crude approximation. We highlight the pulsation periods identified from the pre-explosion lightcurves of two interacting SN progenitors, SN 2023ixf (Kilpatrick et al. 2023; Soraisam et al. 2023; Qin et al. 2024; Xiang et al. 2024a) and SN 2024ggi (Xiang et al. 2024b).

In Figure 11, we also mark the pulsation period and luminosity inferred from pre-explosion images of two interacting SN progenitors, SN 2023ixf (Kilpatrick et al. 2023; Soraisam et al. 2023; Qin et al. 2024; Xiang et al. 2024a) and SN 2024ggi (Xiang et al. 2024b). By comparing the pulsation period of the SN 2023ixf progenitor with our simulations and the empirical period-luminosity relations, we support the claim of Soraisam et al. (2023) and Hsu et al. (2024) that the pulsation period of the SN 2023ixf progenitor fits better with a luminous RSG of ∼20 M⊙. The apparent pulsation period of the SN 2024ggi progenitor instead favors a very low-mass RSG, but whether this inferred pulsation period is real is debated (Laplace et al. 2025).

C. ADDITIONAL DISCUSSION

C.1. Comparison with Other 1D Models

Current analyses of observed Type II SN lightcurves and spectra rely heavily on comparison with 1D SN modeling, which is sensitive to the 1D progenitor model and the CSM profile (e.g., Dessart et al. 2013, 2017; Morozova et al. 2017; Dessart & Hillier 2019; Goldberg et al. 2019; Goldberg & Bildsten 2020; Moriya et al. 2018; Boian & Groh 2020; Moriya et al. 2023). We find that our 3D simulations differ from the 1D models in many aspects.
The differences in the 1D-averaged quantities include a different CSM density profile, the presence of large-amplitude pulsations, and larger radii. We discuss those differences separately in this subsection.

C.1.1. CSM Density Profile

When interpreting the observed Type II SN lightcurves and spectra, most works assume that the CSM follows a steady-wind density profile governed by ρ ∝ r^−2 (e.g., Morozova et al. 2017; Boian & Groh 2020; Jacobson-Galán et al. 2024a). However, it has been shown that the inferred CSM density, mass, and radial extent are sensitive to the assumed density structure, where variations include wind acceleration (Moriya et al. 2017, 2018; Dessart 2025) and extended atmospheres (Dessart et al. 2017; Soker 2021; Dessart & Jacobson-Galán 2023; Fuller & Tsuna 2024).

Our 3D simulations broadly support the idea of an extended atmosphere, but the detailed physics and the CSM density structure differ from previous works. Dessart et al. (2017) and Dessart & Jacobson-Galán (2023) used an ad hoc atmospheric extension by assuming an exponentially-decaying CSM density profile as a function of the distance from the stellar surface. Soker (2021) proposed an 'effervescent zone' where bound clumps are ejected by stellar activity and fall back. The radial extent is determined by the balance of gravity, wind drag, and radiation force, and the density profile can be uncertain (Soker 2023). Fuller & Tsuna (2024) proposed a 'chromosphere' model where shock waves launched by transonic convection can support a dense atmosphere that extends to the dust-formation radius, and radiation acts on dust to drive a wind.

Our simulations support part of the Soker (2021) model and part of the Fuller & Tsuna (2024) model: we find large-scale clumps episodically lifted by pulsations and shaped by convection, in agreement with Soker (2021), but the wind is not present in the immediate surroundings of the star. The wind is launched later, when some clumps reach far enough out to cool down and form dust, in agreement with Fuller & Tsuna (2024). The density profile is steeper than proposed by Fuller & Tsuna (2024), because we find that pulsation, instead of convection, is the main driver of the shock waves, although convection can break the shock waves into multiple curved shock fronts. The spherically-averaged shock wave speeds are nearly constant due to the semi-regular pulsation, instead of following the Gaussian distribution due to convection adopted in Fuller & Tsuna (2024). Generally, our simulations suggest a 'two-zone model' composed of a periodic-shock-supported atmosphere attached to a dust-driven wind, as described in Section 3.2, which is effectively a combination of the Soker (2021) model and the Fuller & Tsuna (2024) model.

C.1.2. Large-amplitude Pulsation

RSGs are known to be large-amplitude semi-regular radial pulsators from both observations (e.g., Kiss et al. 2006; Soraisam et al. 2018; Jiang et al. 2024) and theory (e.g., Guo & Li 2002; Joyce et al. 2020; Bronner et al. 2025; Sengupta et al. 2025; Suzuki & Shigeyama 2025). However, large-amplitude radial pulsations are non-linear, so their amplitudes are difficult to predict. Especially for pre-explosion RSGs, 1D models suggest that their high L/M ratio amplifies the pulsation amplitude (Heger et al. 1997; Joyce et al. 2020; Bronner et al. 2025; Suzuki & Shigeyama 2025) and may even trigger a 'superwind' or mass loss (Yoon & Cantiello 2010; Clayton 2018; Sengupta et al. 2025).
However, all the current 1D models suffer from significant numerical damping, and the surface cooling is not correctly captured (Heger et al. 1997; Clayton 2018; Joyce et al. 2020; Bronner et al. 2025; Suzuki & Shigeyama 2025), both of which are important for predicting the growth and damping rates of the pulsations.

In this work, we self-consistently predict the pulsation amplitudes (Figure 2). We also find that the dominant pulsation periods of our simulations agree with the fundamental mode expected for RSGs (Figure 11). If true, this means that the fundamental mode dominates over higher-order modes in pre-explosion RSGs. This leads to different density structures, especially near the stellar surface, and different radii when the star explodes, which is expected to create some diversity in SN lightcurves (Goldberg et al. 2020; Bronner et al. 2025). We also find that, contrary to the suggestions in Heger et al. (1997), Yoon & Cantiello (2010), Clayton (2018), and Sengupta et al. (2025), the pulsations are not strong enough to unbind any material without the help of molecules, dust, or magnetic fields.

C.1.3. Larger Radii and Convective Efficiency

The RSG radii are extremely sensitive to the convective efficiency in the envelope: to transport the same amount of energy, if convection is more efficient, then the radiation flux is lower and the temperature gradient is flatter, which yields a larger surface temperature and therefore a smaller radius. In 1D stellar models, convection is commonly described using mixing-length theory (MLT; Böhm-Vitense 1958). The convective efficiency is controlled by αMLT, the ratio between the mixing length and the local pressure scale height. A larger αMLT means more efficient convective energy transport, resulting in more compact RSGs (e.g., Henyey et al. 1965; Stothers & Chin 1995; Goldberg et al. 2022b).

We find that the averaged radii in our 3D simulations are slightly larger than those of the 1D models with αMLT = 3 (by 40% for the 10 M⊙ simulation and 20% for the 20 M⊙ simulation). To better constrain the convective efficiency, we compare the averaged entropy profiles from our 3D simulations to the 1D initial conditions in Figure 12. The 3D entropy profiles are flatter than the 1D MESA profiles in the interior of the envelope, but the entropy drop in the superadiabatic layer is less steep than in the 1D MESA profiles. This indicates that convection in our 3D simulations is more efficient than in the 1D MESA models in the interior of the envelope, but less efficient in the superadiabatic layer. We therefore suggest that the mixing-length parameter may be better described by αMLT ≳ 4 in the efficient convection zone below the hydrogen opacity bump. However, near the surface in the superadiabatic layer, we either favor αMLT ≲ 2, or the turbulent pressure needs to be taken into account to inflate the envelope. In our simulations, the inflated superadiabatic layer is the main reason for the larger radii compared to 1D models with αMLT = 3. Our findings for the convective efficiency are fully consistent with the results of 3D RSG simulations with Athena++ (Goldberg et al. 2022b).

However, the large radii found in our simulations potentially contradict the results of Dessart et al. (2013), who argued for compact pre-explosion RSGs with αMLT = 3, because SN radiation from RSGs with large radii would have weaker cooling and remain blue for too long. This controversy may be complicated in two ways.
First, it is not clear whether the Rosseland radius determined in the simulation is the same as the stellar radius that the SN shock is sensitive to (Dessart et al. 2013). Second, clumping may have a countering effect on the color evolution (Dessart et al. 2018), thereby softening the degeneracy. A detailed analysis of the temperature gradient and the turbulent pressure in our 3D simulations is needed to assess the actual convective efficiency, as in e.g., Goldberg et al. (2022b).

C.2. Comparison with Other 3D Radiation Hydrodynamic Simulations

RSGs have been simulated with 3D radiation hydrodynamics by e.g., Freytag et al. (2002, 2024) and Chiavassa et al. (2011) using the CO5BOLD code (Freytag et al. 2012), and by Goldberg et al. (2022b) using the Athena++ code (Stone et al. 2020). For a review summarizing the simulation results from both codes, see Chiavassa et al. (2024). Other multi-dimensional hydrodynamic simulations also exist for RSGs, especially in the context of transients (e.g., Leung & Fuller 2020; Antoni & Quataert 2022), albeit without detailed radiation transport.

In this comparison, one major puzzle is why propagating shock waves and extended CSM are clearly present in CO5BOLD simulations (Kravchenko et al. 2019; Freytag et al. 2024) and in our AREPO simulations (this work), but are rather weak in Athena++ simulations (Goldberg et al. 2022b). Fuller & Tsuna (2024) suggested that this discrepancy may be due to a numerical transient in the Athena++ simulations, which ejects material during the initial relaxation phase that later falls back and quenches the shocks propagating outwards. This numerical transient is also present in our simulations (Figure 2) and is especially strong in our 20 M⊙ simulation, where it quenches the material ejection for up to 45 years. Fuller & Tsuna (2024) also suggested that the recombination energy neglected in the Athena++ simulations may result in weaker pulses. Indeed, the recombination energy is included in our AREPO simulations and in CO5BOLD, but not in Athena++, where an ideal gas was used.

However, we point out here that this discrepancy in producing shocks and CSM may have a physical origin related to the growth of strong pulsations. We have performed two additional simulations at lower luminosities, using 1D MESA models at the core helium burning stage instead of the pre-explosion stage. We find that they also have weak pulsations and do not produce any extended CSM, as shown in Figure 13. This behavior is very similar to the Athena++ simulations and indicates a physical origin. One possible explanation is that the L/M ratios used in the Athena++ simulations and in our two additional core-helium-burning simulations are too low to foster the growth of the pulsation amplitudes. Indeed, 1D models suggest that strong pulsations are only present for RSGs with large L/M ratios (Heger et al. 1997; Yoon & Cantiello 2010; Clayton 2018; Joyce et al. 2020; Laplace et al. 2025; Sengupta et al. 2025; Suzuki & Shigeyama 2025).

Besides these differences in physical parameters, we have also made improvements over previous 3D simulations, both by expanding the simulation domain and by including more physics. Athena++ simulations only include a wedge of the star, but can extend the simulation domain to more than 10 times the stellar radius to capture the CSM structure, outflow, and shock breakout (Goldberg et al. 2022b,a).
Figure 12. Comparison of 1D MESA profiles (black) and spherically-averaged 3D AREPO profiles (colored). Different colored curves show the spherically-averaged 3D profiles at different times. From top to bottom, we show the spherically-averaged specific entropy ⟨s⟩, radiative luminosity ⟨L_rad⟩, density ⟨ρ⟩, and temperature ⟨T⟩. The profiles to the right of the vertical dotted lines are thermally relaxed, i.e. the thermal timescale is smaller than the simulation duration. In our simulations, the deep entropy profile is nearly flat (first row), and radiation transports only a small fraction of the energy (second row), in agreement with the expectation of efficient convection. As pointed out in Goldberg et al. (2022b) and Chiavassa et al. (2024), this was correctly simulated in Athena++ (Goldberg et al. 2022b) but not in CO5BOLD (Chiavassa et al. 2011). As shown in all panels, the deep interior of the 3D simulations agrees very well with the 1D profiles. This direct 1D-3D agreement inside the star was not achieved in previous simulations (Chiavassa et al. 2011; Goldberg et al. 2022b).

Figure 13. Core helium burning RSGs have very weak pulses and do not produce CSM in our 3D simulations. Similar to Figure 2, but for two additional 3D simulations with lower luminosities than the simulations we present elsewhere in this work, taken during core helium burning. The mass-lifting rate is defined as 4πr^2⟨ρ⟩⟨v_r⟩.

CO5BOLD RSG simulations simulate the entire 4π sphere to capture the global convective structure and to facilitate synthetic observations (e.g., Chiavassa et al. 2009; Ma et al. 2024), but are limited to small box sizes, about 3 times larger than the star (but see Freytag & Höfner 2023, which extends the box for AGB stars). In our AREPO simulations, we achieve both points and therefore have the capability to simulate the global 4π convection together with the CSM and outflow in a single simulation, which made this work possible. Furthermore, the local time-stepping technique in AREPO gives us the unique advantage of including the deep envelope in our simulations, ensuring that the deep physical conditions match the initial 1D profiles from the stellar evolution code (Figure 12), which was not achieved in previous simulations. For reference, our effective inner boundary is at 3% of the stellar radius in AREPO, while the inner boundary is at 20% of the stellar radius in CO5BOLD (Chiavassa et al. 2011; Ahmad et al. 2023) and at 30%–50% in Athena++ (Goldberg et al. 2022b). We have performed test simulations showing that moving the boundary of the artificial core too high up in the envelope (to 10% of the stellar radius) results in weaker pulsations, which is especially important for this study.

For the included physics, CO5BOLD simulations did not include radiation pressure or self-gravity, which are important for massive stars and loosely-bound envelopes. Athena++ simulations included radiation pressure and self-gravity, but used an ideal gas for the equation of state, thereby neglecting the recombination energy, which is important for ejecting envelope material. We have included all of this physics in our AREPO simulations. Our 3D RSG simulations are also the first to experiment with the effects of dust, albeit under crude approximations.
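For reference, the mass-lifting rate quoted in the caption of Figure 13 combines the two weighting conventions of Appendix A.8 (a volume-weighted ⟨ρ⟩ and a mass-weighted ⟨v_r⟩). A minimal sketch of this diagnostic (with hypothetical per-cell array names, in cgs units; not our actual analysis code) is:

```python
import numpy as np

def mass_lifting_rate(r, rho, v_r, volume, r_shell, dr):
    """Evaluate 4*pi*r^2*<rho>*<v_r> at radius r_shell, with <rho>
    volume-weighted and <v_r> mass-weighted, as in Appendix A.8."""
    in_shell = np.abs(r - r_shell) < 0.5 * dr
    mass = rho[in_shell] * volume[in_shell]              # per-cell masses
    rho_avg = mass.sum() / volume[in_shell].sum()        # volume-weighted <rho>
    vr_avg = (v_r[in_shell] * mass).sum() / mass.sum()   # mass-weighted <v_r>
    return 4.0 * np.pi * r_shell**2 * rho_avg * vr_avg   # [g/s]
```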
There are also certain aspects where previous simulations perform better than ours. Our simulations and the Athena++ RSG simulations are so far limited to gray radiation transport, but it has been shown in CO5BOLD simulations that non-gray effects create a steeper temperature gradient near the surface and in the atmosphere, which is important for the stellar spectrum and for measuring stellar radii (Chiavassa et al. 2011). Furthermore, resolving the stellar photosphere is important to obtain the correct luminosity and cooling rate. Our cell size near the stellar surface is about 0.8% of the stellar radius, which is comparable to the Athena++ simulations (1% of the stellar radius) but not as resolved as the latest CO5BOLD simulations (0.4%–0.5% of the stellar radius), as shown in Figure 7. In the optically-thick regime, we separate the radiative heating/cooling provided by the radiation transport from the radiation pressure and radiation energy provided by the equation of state, which presumably yields higher-order accuracy in the force balance but is not self-consistent. The Athena++ simulations calculate all the radiation quantities and coupling terms from the time-dependent radiation transport, which is physically more self-consistent. In addition, due to the transition from a radiation-included equation of state to a radiation-excluded one, we introduce an artificial temperature jump in our simulations at 10^−11 g cm^−3 in the atmosphere, which is not present in other simulations. Future efforts will be devoted to solving these minor issues.

C.3. Simulation Caveats

In this subsection, we summarize the caveats known to us, for future improvements.

In our simulations, the first strong pulsations are triggered by the mapping from 1D to 3D, which is a numerical artifact. This numerical transient launches a strong ejection that affects the subsequent CSM structure near the star for 30–45 years, until most of the material has fallen back (Figure 2). The same issue was also found in the Athena++ simulations (Goldberg et al. 2022b). The way we deal with this is to run the simulations long enough that the effects of the initial numerical transient die down. However, the dust-driven outflow launched by the numerical transient still continues to propagate outwards and affects the long-range atmospheric structure, which is so far not properly treated.

This also raises the question of whether the strong pulsations seen in our simulations are real or due to numerical artifacts. To address this question, we ran two additional simulations at lower luminosities by taking 1D MESA profiles not at the pre-explosion stage but at the core helium burning stage. As shown in Figure 13, in comparison with Figure 2, the 3D core-helium-burning models only have weak pulsations and do not create an extended CSM. This means that the strong pulsations in the pre-explosion models are likely not due to numerical artifacts, but are related to the enhanced luminosity and the different stellar structure. In addition, without a physical mechanism to continuously inject energy into the pulsations, the radial pulsations are expected to be damped, e.g., by convection on a timescale of several years (MacLeod et al. 2023). Instead, in our 10 M⊙ simulation, the pulsation grows stronger again after 30 years, indicating self-excited pulsations. However, the exact pulsation amplitude can be influenced by numerical resolution and by uncertain non-gray effects.

Even though we have included a larger portion of the convective envelope than all previous simulations, we still cannot reach the bottom of the convective envelope.
Our artificial core (or the effective inner boundary) still extends into the convective zone. Given the non-locality of convection in these RSG envelopes, the artificial core will likely affect the convection higher up in the envelope. These effects can only be quantified by performing test simulations with the boundary of the artificial core placed at a different radius, but such simulations are currently too expensive to run until they reach a steady state. We have performed short test simulations suggesting that a higher inner boundary, at 10% of the stellar radius, yields weaker pulses and weaker convection. Since we damp the velocities in the artificial core, we also do not fully conserve the total energy, momentum, or angular momentum, but we conserve the total mass to near machine precision. Furthermore, even if we could simulate the entire convective envelope, the deep thermal timescale is still one order of magnitude larger than the plausible simulation time, so the deep envelope will not be fully thermally relaxed. It is reasonable to assume that, by starting from a 1D model from a stellar evolution code, the deep envelope is not far from the actual thermally-relaxed profile, but this is difficult to test.

Another concern is spurious wave generation from the artificial core. In our simulations, we observe sphere-shaped perturbations propagating from the core to the stellar surface. The perturbations appear to be sound waves generated in the central regions of our simulations. Given that the time interval of the episodic material ejections fits the fundamental mode instead of the much shorter interval of these spurious waves, we think the spurious waves do not drive the material ejections and are therefore dynamically unimportant. However, it would be helpful to diminish the spurious waves, whose origin is not clear. One possibility is that the convective velocities are damped so sharply in the artificial core that they introduce pressure perturbations on top of the hydrostatic equilibrium background. Another possibility is that the hydrostatic equilibrium structure is modified by the different convective energy transport in 3D, which results in a small mismatch of entropy between the artificial core and the envelope (see Figure 12) and thereby seeds the pressure perturbations.

The spatial resolution of our simulations is not enough to resolve the photosphere. This leads to a time-averaged luminosity output different from the input luminosity from the artificial core (Figure 2; for a test illustrating this, see Figure 17 in Ma et al. 2025). We have performed short test simulations suggesting that decreasing the cell size by a factor of two is still not enough to obtain the correct luminosity. Chiavassa et al. (2011) also showed that increasing the spatial resolution helps resolve smaller convective structures near the surface. Another potential issue is whether the recombination layer inside the envelope is well-resolved in our simulations. We have checked that our resolution is enough to resolve the opacity peak due to He and H recombination, but it is not clear whether the variation in internal energy due to recombination is also resolved. In 1D models, the recombination zones can be very thin during the contraction phase (Clayton 2018; Bronner et al. 2025). Further analyses are needed to assess whether we resolve the recombination layer in 3D.
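Such resolution checks can be made quantitative with a simple diagnostic: the number of cells per local density scale height, the criterion behind Figure 7. A minimal sketch (with hypothetical input names, evaluated on spherically-averaged profiles; not our actual analysis code) is:

```python
import numpy as np

def cells_per_scale_height(r, rho_avg, cell_size):
    """Number of cells per density scale height H_rho = rho/|drho/dr|;
    the target in Appendix A.6 is roughly 10 cells per scale height."""
    drho_dr = np.gradient(rho_avg, r)
    h_rho = rho_avg / np.maximum(np.abs(drho_dr), 1e-300)  # avoid division by zero
    return h_rho / cell_size
```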
Despite using 80 angles for the radiation transport, we find that fixing the discretized angles still results in ray effects several stellar radii away from the star. The ray effects manifest as flower-like patterns with multiple peaks in the radiation field in the optically-thin atmosphere (e.g., Figure 16 in Ma et al. 2025). Since the temperature is tightly coupled to the radiation, and the opacity depends strongly on temperature, the ray effects also create artificial peaks in the spatial distribution of the opacity. This is particularly important for outflows driven by radiation pushing on dust, which depend critically on both the opacity and the radiation field. Such effects can in principle be avoided by introducing extra diffusion through intermittent changes of the transport directions (e.g., Freytag et al. 2012; Peter et al. 2023), which will be explored in the future.

Our treatment of the equation of state needs to be further improved. For now, we transition from a radiation-included equation of state to a radiation-excluded one at densities of 10^−9–10^−11 g cm^−3 (see Appendix A.5 for details). This means the radiation force is artificially reduced in the transition region near the stellar surface, which weakens the mass ejections. In addition, the advection components of the radiation flux are neglected for densities < 10^−9 g cm^−3. This is likely a reasonable approximation, because the radiation flux is dominated by the comoving term in the cooling-dominated surface layer and atmosphere. We also transition from the OPAL equation of state to an ideal gas at a temperature of 1870 K. Both of these 'stitches' of the equation of state do not guarantee the consistency of the thermodynamical variables. Since the temperature field is also tightly coupled to the radiation, we find that these 'stitches' introduce artificially enhanced temperatures at the transition boundaries in the RSG atmosphere. These artificial temperature jumps have limited effects on the atmospheric dynamics, as they are confined to a thin layer at the transition boundaries. However, they likely need to be removed in post-processing for synthetic observations.

Finally, our simulations are limited to gray radiation hydrodynamics, and other important physics is still missing, e.g., a detailed treatment of dust, and magnetic fields. Multi-group radiation transport or more realistic opacities are important for the atmospheric structure, in particular for the temperature stratification (e.g., Malygin et al. 2014). How the dense molecular lines in the RSG atmosphere affect the dynamics is also not clear (Kee et al. 2021). Another important physical ingredient is dust. In this work, we only include the effects of dust as a high opacity below 1500 K, but we ignore the detailed treatment of dust formation, advection, and the momentum and energy exchange between radiation, dust, and gas. In AGB star atmospheres, the dust growth timescale can be comparable to the pulsation period (Höfner & Olofsson 2018), which means a time-dependent treatment of dust grain growth is needed (Höfner et al. 2016). One step forward would be to include part of those effects, as in e.g., CO5BOLD simulations of AGB stars (Freytag & Höfner 2023). We also do not include magnetic fields in our simulations so far. The convective dynamo is expected to sustain a magnetic field. Since the RSG surface pressure is dominated by turbulent pressure (Goldberg et al. 2022b; Chiavassa et al. 2024),
we also expect the magnetic field to be dynamically important, assuming the magnetic field energy is comparable to the convective energy. Alfvén wave dissipation may also provide another heating mechanism to launch the wind (e.g., Hartmann & MacGregor 1980).

REFERENCES

Ahmad, A., Freytag, B., & Höfner, S. 2023, Astronomy and Astrophysics, 669, A49, doi: 10.1051/0004-6361/202244555
Andrews, J. E., & Smith, N. 2018, Monthly Notices of the Royal Astronomical Society, 477, 74, doi: 10.1093/mnras/sty584
Andrews, J. E., Sand, D. J., Valenti, S., et al. 2019, The Astrophysical Journal, 885, 43, doi: 10.3847/1538-4357/ab43e3
Andrews, J. E., Shrestha, M., Bostroem, K. A., et al. 2025, The Astrophysical Journal, 980, 37, doi: 10.3847/1538-4357/ada555
Antoni, A., & Quataert, E. 2022, Monthly Notices of the Royal Astronomical Society, 511, 176, doi: 10.1093/mnras/stab3776
Antoniadis, K., Bonanos, A. Z., de Wit, S., et al. 2024, Astronomy and Astrophysics, 686, A88, doi: 10.1051/0004-6361/202449383
Arroyo-Torres, B., Wittkowski, M., Chiavassa, A., et al. 2015, Astronomy and Astrophysics, 575, A50, doi: 10.1051/0004-6361/201425212
Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, Astronomy and Astrophysics, 558, A33, doi: 10.1051/0004-6361/201322068
Astropy Collaboration, Price-Whelan, A. M., Sipőcz, B. M., et al. 2018, The Astronomical Journal, 156, 123, doi: 10.3847/1538-3881/aabc4f
Astropy Collaboration, Price-Whelan, A. M., Lim, P. L., et al. 2022, The Astrophysical Journal, 935, 167, doi: 10.3847/1538-4357/ac7c74
Beasor, E. R., Davies, B., Smith, N., et al. 2020, Monthly Notices of the Royal Astronomical Society, 492, 5994, doi: 10.1093/mnras/staa255
Beasor, E. R., Smith, N., & Jencson, J. E. 2025, The Astrophysical Journal, 979, 117, doi: 10.3847/1538-4357/ad8f3f
Berger, E., Keating, G. K., Margutti, R., et al. 2023, The Astrophysical Journal, 951, L31, doi: 10.3847/2041-8213/ace0c4
Bertschinger, E., & Chevalier, R. A. 1985, The Astrophysical Journal, 299, 167, doi: 10.1086/163690
Bilinski, C., Smith, N., Williams, G. G., et al. 2024, Monthly Notices of the Royal Astronomical Society, 529, 1104, doi: 10.1093/mnras/stae380
Björklund, R., Sundqvist, J. O., Puls, J., & Najarro, F. 2021, Astronomy and Astrophysics, 648, A36, doi: 10.1051/0004-6361/202038384
Boian, I., & Groh, J. H. 2020, Monthly Notices of the Royal Astronomical Society, 496, 1325, doi: 10.1093/mnras/staa1540
Bowen, G. H. 1988, The Astrophysical Journal, 329, 299, doi: 10.1086/166378
Bowman, D. M. 2023, Astrophysics and Space Science, 368, 107, doi: 10.1007/s10509-023-04262-7
Brennan, S. J., Fraser, M., Johansson, J., et al. 2022, Monthly Notices of the Royal Astronomical Society, 513, 5642, doi: 10.1093/mnras/stac1243
Bronner, V. A., Laplace, E., Schneider, F. R. N., & Podsiadlowski, P. 2025, Explosions of pulsating red supergiants: a natural pathway for the diversity of Type II-P/L supernovae, arXiv. https://ui.adsabs.harvard.edu/abs/2025arXiv250811077B
Brott, I., de Mink, S. E., Cantiello, M., et al. 2011, Astronomy and Astrophysics, 530, A115, doi: 10.1051/0004-6361/201016113
Bruch, R. J., Gal-Yam, A., Schulze, S., et al. 2021, The Astrophysical Journal, 912, 46, doi: 10.3847/1538-4357/abef05
Bruch, R. J., Gal-Yam, A., Yaron, O., et al. 2023, The Astrophysical Journal, 952, 119, doi: 10.3847/1538-4357/acd8be
Böhm-Vitense, E. 1958, Zeitschrift für Astrophysik, 46, 108. https://ui.adsabs.harvard.edu/abs/1958ZA.....46..108B
Chandra, P., Chevalier, R. A., Chugai, N., et al. 2012, The Astrophysical Journal, 755, 110, doi: 10.1088/0004-637X/755/2/110
Chatys, F. W., Bedding, T. R., Murphy, S. J., et al. 2019, Monthly Notices of the Royal Astronomical Society, 487, 4832, doi: 10.1093/mnras/stz1584
Chen, T.-W., Yang, S., Srivastav, S., et al. 2025, The Astrophysical Journal, 983, 86, doi: 10.3847/1538-4357/adb428
Cheng, S. J., Goldberg, J. A., Cantiello, M., et al. 2024, The Astrophysical Journal, 974, 270, doi: 10.3847/1538-4357/ad701e
Chiavassa, A., Freytag, B., Masseron, T., & Plez, B. 2011, Astronomy and Astrophysics, 535, A22, doi: 10.1051/0004-6361/201117463
Chiavassa, A., Kravchenko, K., & Goldberg, J. A. 2024, Living Reviews in Computational Astrophysics, 10, 2, doi: 10.1007/s41115-024-00020-w
Chiavassa, A., Plez, B., Josselin, E., & Freytag, B. 2009, Astronomy and Astrophysics, 506, 1351, doi: 10.1051/0004-6361/200911780
Chiavassa, A., Kravchenko, K., Montargès, M., et al. 2022, Astronomy and Astrophysics, 658, A185, doi: 10.1051/0004-6361/202142514
Chun, S.-H., Yoon, S.-C., Jung, M.-K., Kim, D. U., & Kim, J. 2018, The Astrophysical Journal, 853, 79, doi: 10.3847/1538-4357/aa9a37
Clayton, M. 2018, PhD thesis, University of Oxford. https://ora.ox.ac.uk/objects/uuid:3aa54f27-e13e-4b16-bafd-28b3e7059e8d
Cohen, D. H., Wollman, E. E., Leutenegger, M. A., et al. 2014, Monthly Notices of the Royal Astronomical Society, 439, 908, doi: 10.1093/mnras/stu008
Colgan, J., Kilcrease, D. P., Magee, N. H., et al. 2016, The Astrophysical Journal, 817, 116, doi: 10.3847/0004-637X/817/2/116
De Beck, E., Andrews, H., Quintana-Lacaci, G., & Vlemmings, W. H. T. 2025, Astronomy and Astrophysics, 698, A179, doi: 10.1051/0004-6361/202554748
de Jager, C., Nieuwenhuijzen, H., & van der Hucht, K. A. 1988, Astronomy and Astrophysics Supplement Series, 72, 259. https://ui.adsabs.harvard.edu/abs/1988A%26AS...72..259D/abstract
Decin, L., Richards, A. M. S., Marchant, P., & Sana, H. 2024, Astronomy and Astrophysics, 681, A17, doi: 10.1051/0004-6361/202244635
Dessart, L. 2024, Interacting supernovae, doi: 10.48550/arXiv.2405.04259
—. 2025, Astronomy and Astrophysics, 694, A132, doi: 10.1051/0004-6361/202452769
Dessart, L., & Audit, E. 2019, Astronomy and Astrophysics, 629, A17, doi: 10.1051/0004-6361/201935794
Dessart, L., & Hillier, D. J. 2019, Astronomy and Astrophysics, 625, A9, doi: 10.1051/0004-6361/201834732
Dessart, L., Hillier, D. J., & Audit, E. 2017, Astronomy and Astrophysics, 605, A83, doi: 10.1051/0004-6361/201730942
Dessart, L., Hillier, D. J., Waldman, R., & Livne, E. 2013, Monthly Notices of the Royal Astronomical Society, 433, 1745, doi: 10.1093/mnras/stt861
Dessart, L., Hillier, D. J., & Wilk, K. D. 2018, Astronomy and Astrophysics, 619, A30, doi: 10.1051/0004-6361/201833278
Dessart, L., & Jacobson-Galán, W. V. 2023, Astronomy and Astrophysics, 677, A105, doi: 10.1051/0004-6361/202346754
Dessart, L., Leonard, D. C., Vasylyev, S. S., & Hillier, D. J. 2025, Astronomy and Astrophysics, 696, L12, doi: 10.1051/0004-6361/202452327
Dupree, A. K., Strassmeier, K. G., Calderwood, T., et al. 2022, The Astrophysical Journal, 936, 18, doi: 10.3847/1538-4357/ac7853
Ercolino, A., Jin, H., Langer, N., & Dessart, L. 2024, Astronomy and Astrophysics, 685, A58, doi: 10.1051/0004-6361/202347646
Ercolino, A., Jin, H., Langer, N., et al. 2025, The demographics of core-collapse supernovae I. The role of binary evolution and CSM interaction, arXiv, doi: 10.48550/arXiv.2510.04872
Ertini, K., Regna, T. A., Ferrari, L., et al. 2025, Astronomy and Astrophysics, 699, A60, doi: 10.1051/0004-6361/202554333
Ferguson, J. W., Alexander, D. R., Allard, F., et al. 2005, The Astrophysical Journal, 623, 585, doi: 10.1086/428642
Fransson, C., Ergon, M., Challis, P. J., et al. 2014, The Astrophysical Journal, 797, 118, doi: 10.1088/0004-637X/797/2/118
Freytag, B., & Höfner, S. 2023, Astronomy and Astrophysics, 669, A155, doi: 10.1051/0004-6361/202244992
Freytag, B., Höfner, S., Aringer, B., & Chiavassa, A. 2024, Astronomy and Astrophysics, 692, A223, doi: 10.1051/0004-6361/202450829
Freytag, B., Steffen, M., & Dorch, B. 2002, Astronomische Nachrichten, 323, 213, doi: 10.1002/1521-3994(200208)323:3/4<213::AID-ASNA213>3.0.CO;2-H
Freytag, B., Steffen, M., Ludwig, H. G., et al. 2012, Journal of Computational Physics, 231, 919, doi: 10.1016/j.jcp.2011.09.026
Fuller, J. 2017, Monthly Notices of the Royal Astronomical Society, 470, 1642, doi: 10.1093/mnras/stx1314
Fuller, J., & Tsuna, D. 2024, The Open Journal of Astrophysics, 7, 47, doi: 10.33232/001c.120130
Förster, F., Moriya, T. J., Maureira, J. C., et al. 2018, Nature Astronomy, 2, 808, doi: 10.1038/s41550-018-0563-4
Gazak, J. Z., Davies, B., Kudritzki, R., Bergemann, M., & Plez, B. 2014, The Astrophysical Journal, 788, 58, doi: 10.1088/0004-637X/788/1/58
Gilliland, R. L., & Dupree, A. K. 1996, The Astrophysical Journal, 463, L29, doi: 10.1086/310043
Goldberg, J. A., & Bildsten, L. 2020, The Astrophysical Journal, 895, L45, doi: 10.3847/2041-8213/ab9300
Goldberg, J. A., Bildsten, L., & Paxton, B. 2019, The Astrophysical Journal, 879, 3, doi: 10.3847/1538-4357/ab22b6
—. 2020, The Astrophysical Journal, 891, 15, doi: 10.3847/1538-4357/ab7205
Goldberg, J. A., Jiang, Y.-F., & Bildsten, L. 2022a, The Astrophysical Journal, 933, 164, doi: 10.3847/1538-4357/ac75e3
—. 2022b, The Astrophysical Journal, 929, 156, doi: 10.3847/1538-4357/ac5ab3
Goldberg, J. A., Jiang, Y.-F., Bildsten, L., & Cantiello, M. 2025, Type IIb Supernova Progenitors in 3D: Variability and Episodic Mass Loss revealed by Radiation-Hydrodynamics Simulations, arXiv. https://ui.adsabs.harvard.edu/abs/2025arXiv250812486G
Gormaz-Matamala, A. C., Curé, M., Lobel, A., et al. 2022, Astronomy and Astrophysics, 661, A51, doi: 10.1051/0004-6361/202142383
Guo, J. H., & Li, Y. 2002, The Astrophysical Journal, 565, 559, doi: 10.1086/324295
Hambleton, K. M., Bianco, F. B., Street, R., et al. 2023, Publications of the Astronomical Society of the Pacific, 135, 105002, doi: 10.1088/1538-3873/acdb9a
Harris, C. R., Millman, K. J., van der Walt, S. J., et al. 2020, Nature, 585, 357, doi: 10.1038/s41586-020-2649-2
Hartmann, L., & MacGregor, K. B. 1980, The Astrophysical Journal, 242, 260, doi: 10.1086/158461
Hawcroft, C., Sana, H., Mahy, L., et al. 2021, Astronomy and Astrophysics, 655, A67, doi: 10.1051/0004-6361/202140603
Heger, A., Jeannin, L., Langer, N., & Baraffe, I. 1997, Astronomy and Astrophysics, 327, 224, doi: 10.48550/arXiv.astro-ph/9705097
Henyey, L., Vardya, M. S., & Bodenheimer, P. 1965, The Astrophysical Journal, 142, 841, doi: 10.1086/148357
Hinds, K. R., Perley, D. A., Sollerman, J., et al. 2025, Monthly Notices of the Royal Astronomical Society, 541, 135, doi: 10.1093/mnras/staf888
Hsu, B., Smith, N., Goldberg, J. A., et al. 2024, One Year of SN 2023ixf: Breaking Through the Degenerate Parameter Space in Light-Curve Models with Pulsating Progenitors, arXiv, doi: 10.48550/arXiv.2408.07874
Humphreys, R. M., Smith, N., Davidson, K., et al. 1997, The Astronomical Journal, 114, 2778, doi: 10.1086/118686
Hunter, J. D. 2007, Computing in Science and Engineering, 9, 90, doi: 10.1109/MCSE.2007.55
Höfner, S., Bladh, S., Aringer, B., & Ahuja, R. 2016, Astronomy and Astrophysics, 594, A108, doi: 10.1051/0004-6361/201628424
Höfner, S., & Olofsson, H. 2018, Astronomy and Astrophysics Review, 26, 1, doi: 10.1007/s00159-017-0106-5
Ibik, A. L., Drout, M. R., Margutti, R., et al. 2025, The Astrophysical Journal, 979, 16, doi: 10.3847/1538-4357/ad9336
Ivezic, Z., Kahn, S. M., Tyson, J. A., et al. 2019, The Astrophysical Journal, 873, 111, doi: 10.3847/1538-4357/ab042c
Jacobson-Galán, W. V., Dessart, L., Jones, D. O., et al. 2022, The Astrophysical Journal, 924, 15, doi: 10.3847/1538-4357/ac3f3a
Jacobson-Galán, W. V., Dessart, L., Davis, K. W., et al. 2024a, The Astrophysical Journal, 970, 189, doi: 10.3847/1538-4357/ad4a2a
Jacobson-Galán, W. V., Davis, K. W., Kilpatrick, C. D., et al. 2024b, The Astrophysical Journal, 972, 177, doi: 10.3847/1538-4357/ad5c64
Jermyn, A. S., Bauer, E. B., Schwab, J., et al. 2023, The Astrophysical Journal Supplement Series, 265, 15, doi: 10.3847/1538-4365/acae8d
Jiang, B., Ren, Y., & Yang, M. 2024, in IAU Symposium, Vol. 376, 292–305, doi: 10.1017/S1743921323003356
Jiang, Y.-F. 2021, The Astrophysical Journal Supplement Series, 253, 49, doi: 10.3847/1538-4365/abe303
Jiang, Y.-F., Cantiello, M., Bildsten, L., Quataert, E., & Blaes, O. 2015, The Astrophysical Journal, 813, 74, doi: 10.1088/0004-637X/813/1/74
Joyce, M., Leung, S.-C., Molnár, L., et al. 2020, The Astrophysical Journal, 902, 63, doi: 10.3847/1538-4357/abb8db
Kee, N. D., Sundqvist, J. O., Decin, L., de Koter, A., & Sana, H. 2021, Astronomy and Astrophysics, 646, A180, doi: 10.1051/0004-6361/202039224
Kilpatrick, C. D., Foley, R. J., Jacobson-Galán, W. V., et al. 2023, The Astrophysical Journal, 952, L23, doi: 10.3847/2041-8213/ace4ca
Kilpatrick, C. D., Suresh, A., Davis, K. W., et al. 2025, The Type II SN 2025pht in NGC 1637: A Red Supergiant with Carbon-rich Circumstellar Dust as the First JWST Detection of a Supernova Progenitor Star, arXiv, doi: 10.48550/arXiv.2508.10994
Kiss, L. L., Szabó, G. M., & Bedding, T. R. 2006, Monthly Notices of the Royal Astronomical Society, 372, 1721, doi: 10.1111/j.1365-2966.2006.10973.x
Kluyver, T., Ragan-Kelley, B., Pérez, F., et al. 2016, in Positioning and Power in Academic Publishing: Players, Agents and Agendas (IOS Press), doi: 10.3233/978-1-61499-649-1-87
Kozyreva, A., Klencki, J., Filippenko, A. V., et al. 2022, The Astrophysical Journal, 934, L31, doi: 10.3847/2041-8213/ac835a
Kravchenko, K., Chiavassa, A., Eck, S. V., et al. 2019, Astronomy and Astrophysics, 632, A28, doi: 10.1051/0004-6361/201935809
Krtička, J., & Kubát, J. 2017, Astronomy and Astrophysics, 606, A31, doi: 10.1051/0004-6361/201730723
Kulkarni, S. R., Harrison, F. A., Grefenstette, B. W., et al. 2021, Science with the Ultraviolet Explorer (UVEX), doi: 10.48550/arXiv.2111.15608
Landri, C., & Pejcha, O. 2024, Monthly Notices of the Royal Astronomical Society, 531, 3391, doi: 10.1093/mnras/stae1379
Langer, N., Fricke, K. J., & Sugimoto, D. 1983, Astronomy and Astrophysics, 126, 207. https://ui.adsabs.harvard.edu/abs/1983A&A...126..207L/abstract
Laplace, E., Bronner, V. A., Schneider, F. R. N., & Podsiadlowski, P. 2025, Pulsations change the structures of massive stars before they explode: interpreting the nearby supernova SN 2023ixf, arXiv. https://ui.adsabs.harvard.edu/abs/2025arXiv250811088L
Leung, S.-C., & Fuller, J. 2020, The Astrophysical Journal, 900, 99, doi: 10.3847/1538-4357/abac5d
Leung, S.-C., Wu, S., & Fuller, J. 2021, The Astrophysical Journal, 923, 41, doi: 10.3847/1538-4357/ac2c63
Levesque, E. M., Massey, P., Olsen, K. A. G., et al. 2005, The Astrophysical Journal, 628, 973, doi: 10.1086/430901
Lomb, N. R. 1976, Astrophysics and Space Science, 39, 447, doi: 10.1007/BF00648343
Ma, J.-Z., Chiavassa, A., de Mink, S. E., et al. 2024, The Astrophysical Journal Letters, 962, L36, doi: 10.3847/2041-8213/ad24fd
Ma, J.-Z., Pakmor, R., Justham, S., & de Mink, S. E. 2025, submitted to A&A. https://ui.adsabs.harvard.edu/abs/2025arXiv250316627M
MacLeod, M., Antoni, A., Huang, C. D., Dupree, A., & Loeb, A. 2023, The Astrophysical Journal, 956, 27, doi: 10.3847/1538-4357/aced4b
Malygin, M. G., Kuiper, R., Klahr, H., Dullemond, C. P., & Henning, T. 2014, Astronomy and Astrophysics, 568, A91, doi: 10.1051/0004-6361/201423768
Massey, P., Neugent, K. F., Ekström, S., Georgy, C., & Meynet, G. 2023, The Astrophysical Journal, 942, 69, doi: 10.3847/1538-4357/aca665
Massey, P., Silva, D. R., Levesque, E. M., et al. 2009, The Astrophysical Journal, 703, 420, doi: 10.1088/0004-637X/703/1/420
Mathis, J. S., Rumpl, W., & Nordsieck, K. H. 1977, The Astrophysical Journal, 217, 425, doi: 10.1086/155591
Matsuoka, T., & Sawada, R. 2024, The Astrophysical Journal, 963, 105, doi: 10.3847/1538-4357/ad1829
Mauron, N., & Josselin, E. 2011, Astronomy and Astrophysics, 526, A156, doi: 10.1051/0004-6361/201013993
Mcley, L., & Soker, N. 2014, Monthly Notices of the Royal Astronomical Society, 445, 2492, doi: 10.1093/mnras/stu1952
Montargès, M., Cannon, E., Lagadec, E., et al. 2021, Nature, 594, 365, doi: 10.1038/s41586-021-03546-8
Moriya, T. J., Förster, F., Yoon, S.-C., Gräfener, G., & Blinnikov, S. I. 2018, Monthly Notices of the Royal Astronomical Society, 476, 2840, doi: 10.1093/mnras/sty475
Moriya, T. J., Subrayan, B. M., Milisavljevic, D., & Blinnikov, S. I. 2023, Publications of the Astronomical Society of Japan, 75, 634, doi: 10.1093/pasj/psad024
Moriya, T. J., Yoon, S.-C., Gräfener, G., & Blinnikov, S. I. 2017, Monthly Notices of the Royal Astronomical Society, 469, L108, doi: 10.1093/mnrasl/slx056
Morozova, V., Piro, A. L., & Valenti, S. 2017, The Astrophysical Journal, 838, 28, doi: 10.3847/1538-4357/aa6251
—. 2018, The Astrophysical Journal, 858, 15, doi: 10.3847/1538-4357/aab9a6
Munoz-Sanchez, G., Kalitsounaki, M., de Wit, S., et al. 2024, The dramatic transition of the extreme Red Supergiant WOH G64 to a Yellow Hypergiant, arXiv, doi: 10.48550/arXiv.2411.19329
Nayana, A. J., Margutti, R., Wiston, E., et al. 2025, The Astrophysical Journal, 985, 51, doi: 10.3847/1538-4357/adc2fb
Ohlmann, S. T., Röpke, F. K., Pakmor, R., & Springel, V. 2017, Astronomy and Astrophysics, 599, A5, doi: 10.1051/0004-6361/201629692
Ohnaka, K., Hofmann, K. H., Weigelt, G., et al. 2024, Astronomy and Astrophysics, 691, L15, doi: 10.1051/0004-6361/202451820
Ouchi, R., & Maeda, K. 2017, The Astrophysical Journal, 840, 90, doi: 10.3847/1538-4357/aa6ea9
Pakmor, R., Springel, V., Bauer, A., et al. 2016, Monthly Notices of the Royal Astronomical Society, 455, 1134, doi: 10.1093/mnras/stv2380
Paxton, B., Bildsten, L., Dotter, A., et al. 2011, The Astrophysical Journal Supplement Series, 192, 3, doi: 10.1088/0067-0049/192/1/3
Paxton, B., Cantiello, M., Arras, P., et al. 2013, The Astrophysical Journal Supplement Series, 208, 4, doi: 10.1088/0067-0049/208/1/4
Paxton, B., Marchant, P., Schwab, J., et al. 2015, The Astrophysical Journal Supplement Series, 220, 15, doi: 10.1088/0067-0049/220/1/15
Paxton, B., Schwab, J., Bauer, E. B., et al. 2018, The Astrophysical Journal Supplement Series, 234, 34, doi: 10.3847/1538-4365/aaa5a8
Paxton, B., Smolec, R., Schwab, J., et al. 2019, The Astrophysical Journal Supplement Series, 243, 10, doi: 10.3847/1538-4365/ab2241
Peter, T., Klessen, R. S., Kanschat, G., Glover, S. C. O., & Bastian, P. 2023, Monthly Notices of the Royal Astronomical Society, 519, 4263, doi: 10.1093/mnras/stac3034
Qin, Y.-J., Zhang, K., Bloom, J., et al. 2024, Monthly Notices of the Royal Astronomical Society, 534, 271, doi: 10.1093/mnras/stae2012
Quataert, E., & Shiode, J. 2012, Monthly Notices of the Royal Astronomical Society, 423, L92, doi: 10.1111/j.1745-3933.2012.01264.x
Ransome, C. L., & Villar, V. A. 2025, The Astrophysical Journal, 987, 13, doi: 10.3847/1538-4357/adce03
Ren, Y., Jiang, B.-W., Yang, M., & Gao, J. 2019, The Astrophysical Journal Supplement Series, 241, 35, doi: 10.3847/1538-4365/ab0825
Rogers, F. J., & Nayfonov, A. 2002, The Astrophysical Journal, 576, 1064, doi: 10.1086/341894
Scargle, J. D. 1982, The Astrophysical Journal, 263, 835, doi: 10.1086/160554
Schuster, M. T., Humphreys, R. M., & Marengo, M. 2006, The Astronomical Journal, 131, 603, doi: 10.1086/498395
Sengupta, S., Sujit, D., & Sarangi, A. 2025, Dance to Demise – How Massive Stars May Form Dense Circumstellar Shells Before Explosion, arXiv, doi: 10.48550/arXiv.2508.04497
Shiode, J. H., & Quataert, E. 2014, The Astrophysical Journal, 780, 96, doi: 10.1088/0004-637X/780/1/96
Shrestha, M., Bostroem, K. A., Sand, D. J., et al. 2024, The Astrophysical Journal, 972, L15, doi: 10.3847/2041-8213/ad6907
Shrestha, M., DeSoto, S., Sand, D. J., et al. 2025, The Astrophysical Journal, 982, L32, doi: 10.3847/2041-8213/adbb63
Shvartzvald, Y., Waxman, E., Gal-Yam, A., et al. 2024, The Astrophysical Journal, 964, 74, doi: 10.3847/1538-4357/ad2704
Siebert, M. A., De Beck, E., Quintana-Lacaci, G., & Vlemmings, W. H. T. 2025, Astronomy and Astrophysics, 700, L11, doi: 10.1051/0004-6361/202555975
Singh, A., Teja, R. S., Moriya, T. J., et al. 2024, The Astrophysical Journal, 975, 132, doi: 10.3847/1538-4357/ad7955
Singh, A. P., Richards, A. M. S., Humphreys, R. M., Decin, L., & Ziurys, L. M. 2023, The Astrophysical Journal, 954, L1, doi: 10.3847/2041-8213/ace7cb
Smartt, S. J. 2009, Annual Review of Astronomy and Astrophysics, 47, 63, doi: 10.1146/annurev-astro-082708-101737
Smith, N. 2017, in Handbook of Supernovae, ed. A. W. Alsabti & P. Murdin (Cham: Springer International Publishing), 403–429, doi: 10.1007/978-3-319-21846-5_38
Smith, N., & Arnett, W. D. 2014, The Astrophysical Journal, 785, 82, doi: 10.1088/0004-637X/785/2/82
Smith, N., Humphreys, R. M., Davidson, K., et al. 2001, The Astronomical Journal, 121, 1111, doi: 10.1086/318748
Smith, N., Pearson, J., Sand, D. J., et al. 2023, The Astrophysical Journal, 956, 46, doi: 10.3847/1538-4357/acf366
Smith, N., Mauerhan, J. C., Cenko, S. B., et al. 2015, Monthly Notices of the Royal Astronomical Society, 449, 1876, doi: 10.1093/mnras/stv354
Soker, N. 2021, The Astrophysical Journal, 906, 1, doi: 10.3847/1538-4357/abca8f
—. 2023, Research in Astronomy and Astrophysics, 23, 081002, doi: 10.1088/1674-4527/ace51f
Soraisam, M. D., Bildsten, L., Drout, M. R., et al. 2018, The Astrophysical Journal, 859, 73, doi: 10.3847/1538-4357/aabc59
Soraisam, M. D., Szalai, T., Van Dyk, S. D., et al. 2023, The Astrophysical Journal, 957, 64, doi: 10.3847/1538-4357/acef22
Springel, V. 2010, Monthly Notices of the Royal Astronomical Society, 401, 791, doi: 10.1111/j.1365-2966.2009.15715.x
Stone, J. M., Tomida, K., White, C. J., & Felker, K. G. 2020, The Astrophysical Journal Supplement Series, 249, 4, doi: 10.3847/1538-4365/ab929b
Stothers, R. B., & Chin, C.-W. 1995, The Astrophysical Journal, 440, 297, doi: 10.1086/175270
Suzuki, A., & Shigeyama, T. 2025, Radial pulsation runaway in massive red supergiants in late evolutionary stage and implications to hydrogen-rich supernovae, arXiv, doi: 10.48550/arXiv.2509.14928
Tsuna, D., Huang, X., Fuller, J., & Piro, A. L. 2025, The Astrophysical Journal, 979, 20, doi: 10.3847/1538-4357/ad9bad
Tsuna, D., Matsumoto, T., Wu, S. C., & Fuller, J. 2024, The Astrophysical Journal, 966, 30, doi: 10.3847/1538-4357/ad3637
Van Dyk, S. D. 2025, Galaxies, 13, 33, doi: 10.3390/galaxies13020033
Vasylyev, S. S., Yang, Y., Filippenko, A. V., et al. 2023, The Astrophysical Journal, 955, L37, doi: 10.3847/2041-8213/acf1a3
Vasylyev, S. S., Dessart, L., Yang, Y., et al. 2025, Spectropolarimetric Evolution of SN 2023ixf: an Asymmetric Explosion in a Confined Aspherical Circumstellar Medium, arXiv, doi: 10.48550/arXiv.2505.03975
Vink, J. S., de Koter, A., & Lamers, H. J. G. L. M. 2001, Astronomy and Astrophysics, 369, 574, doi: 10.1051/0004-6361:20010127
Virtanen, P., Gommers, R., Oliphant, T. E., et al. 2020, Nature Methods, 17, 261, doi: 10.1038/s41592-019-0686-2
Weinberger, R., Springel, V., & Pakmor, R. 2020, The Astrophysical Journal Supplement Series, 248, 32, doi: 10.3847/1538-4365/ab908c
Woosley, S. E., & Heger, A. 2015, The Astrophysical Journal, 810, 34, doi: 10.1088/0004-637X/810/1/34
Wu, S., & Fuller, J. 2021, The Astrophysical Journal, 906, 3, doi: 10.3847/1538-4357/abc87c
Wu, S. C., & Fuller, J. 2022, The Astrophysical Journal, 930, 119, doi: 10.3847/1538-4357/ac660c
Xiang, D., Mo, J., Wang, L., et al. 2024a, Science China Physics, Mechanics, and Astronomy, 67, 219514, doi: 10.1007/s11433-023-2267-0
Xiang, D., Mo, J., Wang, X., et al. 2024b, The Astrophysical Journal, 969, L15, doi: 10.3847/2041-8213/ad54b3
Yang, M., & Jiang, B. W. 2012, The Astrophysical Journal, 754, 35, doi: 10.1088/0004-637X/754/1/35
Yang, M., Bonanos, A. Z., Jiang, B., et al. 2023, Astronomy and Astrophysics, 676, A84, doi: 10.1051/0004-6361/202244770
Yaron, O., Perley, D. A., Gal-Yam, A., et al. 2017, Nature Physics, 13, 510, doi: 10.1038/nphys4025
Yoon, S.-C., & Cantiello, M. 2010, The Astrophysical Journal, 717, L62, doi: 10.1088/2041-8205/717/1/L62
Zapartas, E., de Mink, S. E., Justham, S., et al. 2019, Astronomy and Astrophysics, 631, A5, doi: 10.1051/0004-6361/201935854
Zhang, J., Dessart, L., Wang, X., et al. 2024, The Astrophysical Journal, 970, L18, doi: 10.3847/2041-8213/ad5da4
Zimmerman, E. A., Irani, I., Chen, P., et al. 2024, Nature, 627, 759, doi: 10.1038/s41586-024-07116-6
Šurlan, B., Hamann, W.-R., Aret, A., et al. 2013, Astronomy and Astrophysics, 559, A130, doi: 10.1051/0004-6361/201322390
Draft version October 17, 2025

AREPO-RSG: Aspherical Circumstellar Material and Winds from Pulsating Dusty Red Supergiants in Global 3D Radiation Hydrodynamic Simulations

Jing-Ze Ma (马竟泽),1 Stephen Justham,1 Rüdiger Pakmor,1 Andrea Chiavassa,2,1 Taeho Ryu,3,4,1 and Selma E. de Mink1

1 Max Planck Institute for Astrophysics, Karl-Schwarzschild-Str. 1, 85748 Garching, Germany
2 Université Côte d'Azur, Observatoire de la Côte d'Azur, CNRS, Lagrange, CS 34229, Nice, France
3 JILA, 440 UCB, Boulder, CO 80308, USA
4 391 UCB, Boulder, CO 80309, USA

ABSTRACT

Recent observations have revealed a surprisingly large fraction of hydrogen-rich supernovae (SNe) interacting with dense confined circumstellar material (CSM), whose origin is heavily debated. Exploiting our recent implementation of a sophisticated radiation transport scheme in the moving-mesh code AREPO, we perform full-sphere 3D radiation hydrodynamic simulations of red supergiant envelopes. For 10 M⊙ and 20 M⊙ core-carbon-burning stars, we find that large-amplitude radial pulsations lift surface material of density 10^−14–10^−12 g cm^−3 into the circumstellar environment out to 3 × 10^14 cm, consistent with the density inferred for the interacting SN 2013fs. There, radiation acts on dust to drive highly anisotropic outflows of 10^−6–10^−5 M⊙ yr^−1. The total CSM masses for both simulations are ∼0.01 M⊙. Due to convection, the CSM density structure has order-of-magnitude angular variations, dominated by large-scale asymmetries. We suggest that (1) the CSM around the progenitor is bound material instead of the widely-assumed steady wind, (2) highly aspherical CSM is common and can be created by surface convection rather than only by binary interactions, and (3) 3D effects need to be incorporated in 1D SN modeling, potentially via effective clumping. Based on our simulations, we propose a 1D analytical CSM model to be used directly for modeling SN observables. We predict that progenitor pulsations (seen in SN 2023ixf) and highly-confined CSM (seen in SN 2013fs) should be common among most hydrogen-rich SNe. This can be tested with progenitor monitoring using the Rubin Observatory and near-future high-cadence surveys such as ULTRASAT and UVEX.

1. INTRODUCTION

Modern photometric surveys and spectroscopic observations have discovered a significant fraction of supernovae (SNe) showing evidence of interaction (interacting SNe), e.g., enhanced luminosity, delayed shock breakout, or transient narrow emission lines (see e.g., Smith 2017; Dessart 2024, for reviews). These interacting SNe indicate that the progenitor stars do not explode in vacuum, but in dense, confined circumstellar material (CSM). This opens up the possibility to probe stellar evolution and mass loss at late stages, which were previously not accessible from observations of normal stars.

Observationally, most constraints on the CSM come from hydrogen-rich SNe (Type II), which are core-collapse SNe of evolved massive stars with hydrogen-rich envelopes. The progenitor stars are predominantly red supergiants (RSGs) and a small population of yellow or blue supergiants and luminous blue variables (Smartt 2009). The observed fraction of interacting SNe in the entire Type II SN population is at least 30%–40% (Bruch et al. 2021, 2023; Hinds et al. 2025), but the actual fraction is likely higher (Morozova et al. 2018; Förster et al. 2018).
The derived CSM density is typically > 10^−14 g cm^−3 within 10^14–10^15 cm from the center of the star (or 1.5–15 stellar radii for a 1000 R⊙ RSG; e.g., Yaron et al. 2017; Zimmerman et al. 2024; Jacobson-Galán et al. 2024a). This indicates the presence of dense, confined CSM close to the progenitor stars. Most analyses assume a wind-like CSM profile to interpret the observational data. The inferred mass loss rates are > 10^−4 M⊙ yr^−1, or even reach 1 M⊙ yr^−1 (e.g., Fransson et al. 2014; Boian & Groh 2020; Hinds et al. 2025; Ransome & Villar 2025), at least two orders of magnitude higher than the mass loss rate of core-helium-burning RSGs (e.g., de Jager et al. 1988; Beasor et al. 2020; Antoniadis et al. 2024).

These observations leave us with a theoretical puzzle: what is the origin of the dense CSM? One hypothesis is wave-driven mass loss, as proposed by Quataert & Shiode (2012), where core convection triggers gravity waves that propagate outwards and launch an eruptive wind (Shiode & Quataert 2014; Fuller 2017). However, later studies showed that the wave-heating rate is not strong enough to drive significant mass ejections (Mcley & Soker 2014; Wu & Fuller 2021, 2022; Leung et al. 2021). Alternatively, explosive burning may unbind part of the envelope directly, without invoking gravity waves, even though this mechanism is limited to a small progenitor mass range (Smith & Arnett 2014; Woosley & Heger 2015). Another hypothesis is mass ejection during binary interactions (e.g., Smith & Arnett 2014; Mcley & Soker 2014; Ouchi & Maeda 2017; Matsuoka & Sawada 2024; Ercolino et al. 2024), but whether this channel can explain all of the interacting SNe is not clear: although a large fraction of hydrogen-rich SNe are thought to be binary products (Zapartas et al. 2019; Ercolino et al. 2025), only a small fraction of them may interact shortly before the explosion (e.g., Kozyreva et al. 2022; Ercolino et al. 2024).

The CSM may also be purely related to stellar surface variability, in particular for RSGs, the predominant progenitors of Type II SNe. RSGs are known to be large-amplitude pulsators (Kiss et al. 2006), to have large-scale surface convection (Gilliland & Dupree 1996), and to be surrounded by extended atmospheres (Arroyo-Torres et al. 2015). The Great Dimming of Betelgeuse is an example of a possible mass ejection from an RSG (Montargès et al. 2021; Dupree et al. 2022). It was suggested that the extended atmosphere of RSGs may be enough to explain the CSM (Dessart et al. 2017), but the origin of the extended atmosphere is debated. Analytical arguments suggest that bound material may be lifted from the surface by convection and pulsation (Soker 2021) or by shocks generated by transonic convection (Fuller & Tsuna 2024). Large-amplitude pulsations (Goldberg et al. 2020; Bronner et al. 2025; Laplace et al. 2025) or convection (Goldberg et al. 2022a) may change the density structure of the pre-SN RSG, thereby changing the lightcurves. Using 3D simulations, Goldberg et al. (2025) found that yellow supergiants can launch eruptive mass loss via large-amplitude pulsations. For RSGs, it was also proposed that the pulsation amplitude grows when the luminosity-to-mass ratio (L/M) is large, resulting in episodic mass loss (Heger et al. 1997; Yoon & Cantiello 2010; Sengupta et al. 2025; Suzuki & Shigeyama 2025), but the actual mass loss process has only been simulated once, in 1D (Clayton 2018), and is difficult to reproduce due to numerical issues (Vincent Bronner priv. comm.).
Such pulsations are already present in 3D simulations of RSGs (Chiavassa et al. 2024; Goldberg et al. 2022b), but none of the current 1D or 3D models has so far successfully produced a CSM dense enough to explain the interacting Type II SNe. A potential issue is that 1D hydrodynamic models break down before mass ejection (Bronner et al. 2025; Suzuki & Shigeyama 2025) and, as we suggest in this work, 3D models do not include enough of the envelope for waves to fully steepen into shocks. Recently, we overcame those technical barriers by demonstrating an RSG simulation an order of magnitude deeper than previous simulations (Ma et al. 2025). This is made possible by the flexible mesh refinement and local time-stepping supported by our radiation transport module AREPO-IDORT (Ma et al. 2025) in the moving-mesh code AREPO (Springel 2010), which enables us to perform a radiation hydrodynamic simulation spanning 6 orders of magnitude in timescale and 4 orders of magnitude in length-scale (see Figure 18 in Ma et al. 2025). In this work, we present a subset of our 3D AREPO-RSG model grid focusing on the later evolutionary stages, aiming to produce circumstellar material self-consistently from pre-SN RSGs.

2. METHODS
We use the 3D finite-volume moving-mesh code AREPO (Springel 2010; Pakmor et al. 2016; Weinberger et al. 2020) to perform two radiation hydrodynamic simulations of core-carbon-burning RSG envelopes. A detailed description of the 1D initial conditions and numerical methods is presented in Appendix A, where the stellar parameters and numerical parameters are summarized in Table 1. The limitations of our simulations are further discussed in Appendix C.3. Here, we only provide a brief overview.

First, we use the 1D stellar evolution code MESA (version 15140; Paxton et al. 2011, 2013, 2015, 2018, 2019; Jermyn et al. 2023) to construct 1D non-rotating RSG profiles at near-solar metallicity (Z = 0.02). We select a 10 M⊙ RSG (initially 11.5 M⊙) at 200 years before core collapse and a 20 M⊙ RSG (initially 22 M⊙) at 8000 years before core collapse. We define their radii as RMESA. Then, we follow Ohlmann et al. (2017) to replace the inner 3% RMESA with a less dense artificial core in hydrostatic equilibrium, map the modified profile onto AREPO, and damp the velocities globally for one stellar sound-crossing timescale to aid relaxation. Within the artificial core, we further apply a constant luminosity source and a continuous damping term that we keep after relaxation. The simulation box is 300 RMESA wide, filled with a low-density (ρbg = 10^-17 g cm^-3), low-temperature (Tbg = 530 K) pseudo-vacuum.

The equations of hydrodynamics are solved using a second-order accurate finite-volume approach with an HLLD Riemann solver (Springel 2010; Pakmor et al. 2016). We solve for full self-gravity using an oct-tree multipole expansion method (Weinberger et al. 2020). The radiation transport is coupled to the hydrodynamics via our newly-implemented AREPO-IDORT module, which solves for gray specific intensities along discrete directions by solving the time-independent radiative transfer equations with an implicit first-order discrete ordinates method (Ma et al. 2025). The radiation transport is performed on the globally synchronized hydrodynamic timesteps.
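The global velocity damping used during relaxation is only described in words above. As a minimal sketch (not the AREPO implementation), assuming an Ohlmann et al. (2017)-style damping term of the form dv/dt = -v/τ with the timescale τ set to one sound-crossing time, one explicit update could look as follows; the function name and the explicit-update form are our own illustrative choices.

import numpy as np

def damp_velocities(vel, dt, tau_damp):
    """Apply one step of global velocity damping, dv/dt = -v/tau.

    vel      : (N, 3) array of cell velocities [cm s^-1]
    dt       : hydro timestep [s]
    tau_damp : damping timescale, e.g. one sound-crossing time [s]
    """
    # Exponential decay is stable for any dt, unlike the naive v -= v*dt/tau.
    return vel * np.exp(-dt / tau_damp)

# Example: damping for one sound-crossing time shrinks velocities by ~1/e.
rng = np.random.default_rng(42)
vel = rng.normal(scale=1e5, size=(1000, 3))   # ~1 km/s random motions [cm/s]
tau = 3.0e7                                   # illustrative ~1 yr [s]
for _ in range(100):
    vel = damp_velocities(vel, dt=tau / 100, tau_damp=tau)
print(np.std(vel) / 1e5)                      # ~0.37 km/s, i.e. reduced by e^-1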
Figure 1. Locations of the two 3D pre-SN AREPO-RSG simulations on the Hertzsprung-Russell diagram. The evolutionary tracks of the MESA models are indicated by black solid lines. We mark the 1D MESA models (empty stars) used as initial conditions for the 3D simulations (filled stars, with errorbars indicating the 3σ variations due to temporal variability). The effective temperatures of the simulations are spherically averaged (see Appendix A.8). Background gray scatter points indicate the observed Galactic RSG population obtained through TiO bands (triangles; Levesque et al. 2005) or SED fitting (circles; Gazak et al. 2014). The small gray stars mark the Type IIP/L supernova progenitors identified in pre-explosion images compiled by Van Dyk (2025), where we highlight the well-studied interacting SNe 2023ixf and 2024ggi. We also highlight the dusty progenitor of SN 2025pht detected by the James Webb Space Telescope (Kilpatrick et al. 2025). Both the absolute values and the uncertainties of the bolometric luminosity may be significantly underestimated (Beasor et al. 2025). Background gray lines show contours of constant radius.

We include a realistic equation of state and gray opacity tables in our simulations. We use the high-temperature OPAL equation of state (Rogers & Nayfonov 2002), blended with an ideal gas below a temperature of 1870 K. We use Rosseland and Planck opacities from the high-temperature Los Alamos OPLIB table (Colgan et al. 2016)[1], stitched to the low-temperature Ferguson et al. (2005) table below a temperature of 30000 K.[2] These EOS and opacity tables cover the temperature and density range of RSG envelopes and include ionization/recombination between ionized and atomic species. The low-temperature opacity table also includes contributions from molecular lines and from dust in equilibrium chemistry (Ferguson et al. 2005). We use the Rosseland opacity as the flux-weighted opacity, and the Planck opacity as the energy-weighted opacity. We do not explicitly model dust in our simulations, but we include the effects of radiation acting on dust through the opacity. Below 1200 K in optically-thin regions, we take the Planck opacity as both the energy-weighted and the flux-weighted opacity, to account for the high opacities from dust. Between 1200 K and 1500 K, we linearly interpolate between the Rosseland and the Planck opacity to guarantee a smooth transition. We therefore make the implicit assumption that dust forms instantly, in chemical equilibrium with the other species according to Ferguson et al. (2005), with the size distribution of Mathis et al. (1977).

1 https://aphysics2.lanl.gov/apps/ The opacity due to electron scattering is included in the Rosseland opacity table.
2 https://www.wichita.edu/academics/fairmount_las/physics/Research/opacity.php
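As a concrete reading of this opacity switch, the following sketch blends the two means as a function of the local temperature. Here kappa_ross and kappa_planck stand in for lookups in the stitched opacity tables, and the linear-in-temperature blend between 1200 K and 1500 K is our interpretation of the transition described above, not the exact AREPO-IDORT implementation.

def flux_weighted_opacity(T, kappa_ross, kappa_planck, optically_thin=True,
                          T_lo=1200.0, T_hi=1500.0):
    """Blend Rosseland and Planck opacities [cm^2 g^-1] as described in the text.

    Above T_hi the flux-weighted opacity is the Rosseland mean; below T_lo
    in optically-thin regions it is the (dust-enhanced) Planck mean; in
    between, a linear interpolation guarantees a smooth transition.
    """
    if not optically_thin or T >= T_hi:
        return kappa_ross
    if T <= T_lo:
        return kappa_planck
    w = (T - T_lo) / (T_hi - T_lo)   # w = 1 at 1500 K, w = 0 at 1200 K
    return w * kappa_ross + (1.0 - w) * kappa_planck

# Example with purely illustrative table values at one cell:
print(flux_weighted_opacity(1350.0, kappa_ross=1e-3, kappa_planck=5e-1))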
Each simulation consists of approximately 20 million cells and took 3.5 months on 504 CPU cores (1.3 million CPU hours) to reach 70 stellar years. The radiation transport module AREPO-IDORT accounts for about half of the computational cost. Given the flexible mesh construction in AREPO, we employ a multi-shell refinement criterion, setting different target volume resolutions at different radii (see Appendix A.6). The Voronoi mesh is allowed to move with the gas in a quasi-Lagrangian way, which gives us the advantage of resolving the outflow.

Throughout this paper, we only present analyses of the simulations after they reach steady states. We determine the steady state as the point when the mass ejection is no longer influenced by the initial transient ejection, i.e., when the spherically-averaged density contour at 10^-16 g cm^-3 begins to rise again with time. We also check that two extra conditions are met during the steady state: (1) both the bolometric luminosity and the spherically-averaged radius vary around an approximately constant value, as in 3D CO5BOLD simulations (Ahmad et al. 2023); and (2) the temporally-averaged total radial energy flux is approximately spatially constant, as in 3D Athena++ simulations (Goldberg et al. 2022b, 2025). Detailed checks are presented in Appendix A.7. How we obtain the spherically-averaged and global quantities, e.g., luminosity, radius, and effective temperature, is described in Appendix A.8.

3. RESULTS
We show the locations of our two 3D simulations on the Hertzsprung-Russell diagram in Figure 1. They appear redder than the 1D models due to more extended superadiabatic layers (Appendix C.1.3), but are consistent with the luminosities and effective temperatures derived for observed Type IIP SN progenitors.

3.1. Episodically-lifted CSM via Radial Pulsation
After the initial relaxation phase, we find that both simulations settle into steady states with semi-regular variability in bolometric luminosity. The magnitude of this time variability is represented by the errorbars for the 3D simulations in Figure 1, and is shown in the top row of Figure 2. The dominant periods align with the expected fundamental modes of RSGs (see Appendix B). The mechanism that sustains the radial pulsations is likely the κγ mechanism, in which the opacity peak due to hydrogen and helium recombination leads to unstable growth of perturbations (Heger et al. 1997; Joyce et al. 2020), with a non-linear boost from recombination energy (Clayton 2018; Bronner et al. 2025). We defer detailed analyses of the pulsation properties to future work.

The large-amplitude radial pulsation acts as a piston that pushes dense material from the stellar surface into the circumstellar environment. As shown in the second row of Figure 2, the lifted material forms a CSM of ∼0.01 M⊙ in both simulations. We define the CSM mass as the total mass of material with density between 10^-16 and 10^-10 g cm^-3, which excludes the background pseudo-vacuum and the stellar interior. In the bottom two rows of Figure 2, we plot the spherically-averaged density ⟨ρ⟩ and radial velocity ⟨vr⟩. As shown by the radial velocity inside the stars, the dominant internal motion is the global contraction and expansion of the entire envelope, which further supports that the radial pulsation is governed by the fundamental mode.

The large-amplitude pulsations lift dense material from the stellar surface. The lifted material is then slowed down both by the gravitational pull and by collision with previously lifted material. Eventually, most of the material falls back and collides with the material lifted in the next several pulsations. Over time, this develops into a quasi-static CSM structure in which the inner CSM close to the stellar surface varies on the pulsation period, while the outer CSM at 2 × 10^14 cm varies on a longer timescale, several times the pulsation period.

Figure 2. Dense CSM episodically lifted by semi-regular pulsation in the 10 M⊙ simulation (left) and the 20 M⊙ simulation (right). From top to bottom, we show the bolometric luminosity, CSM mass, spherically-averaged density, and spherically-averaged radial velocity. In the top two rows, black solid lines show the general trends of the variations, filtering out the high-frequency variability. The bolometric luminosities of the MESA models used as initial conditions are indicated with gray dashed horizontal lines. In the bottom two rows, contour lines indicate the iso-density levels [10^-8, 10^-12, 10^-14, 10^-16] g cm^-3. The cyan lines indicate the spherically-averaged Rosseland radius defined in Appendix A.8. The region to the right of the vertical dotted lines indicates the relaxed phase as defined at the end of Section 2.
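For concreteness, the CSM mass defined by this density window reduces to a one-line cut on the cell data; the array names below are illustrative placeholders, not AREPO output fields.

import numpy as np

def csm_mass(rho, m_cell, rho_min=1e-16, rho_max=1e-10):
    """Total CSM mass [g]: sum of cell masses with rho_min < rho < rho_max.

    The window excludes the background pseudo-vacuum (rho <= 1e-16 g cm^-3)
    and the stellar interior (rho >= 1e-10 g cm^-3), as defined in the text.
    """
    sel = (rho > rho_min) & (rho < rho_max)
    return np.sum(m_cell[sel])

# Illustrative snapshot: 10^6 cells with log-uniform densities.
rng = np.random.default_rng(0)
rho = 10.0 ** rng.uniform(-18, -6, size=1_000_000)   # g cm^-3
m_cell = np.full(rho.shape, 1e28)                    # g per cell (toy value)
print(csm_mass(rho, m_cell) / 1.989e33, "Msun")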
3.2. Spherically-averaged Density Profiles: A Two-zone Model
In Figure 3, we show the spherically-averaged CSM density profiles of the 10 M⊙ simulation (orange) and the 20 M⊙ simulation (red). Different curves indicate the spherically-averaged density profiles at different times. The white-edged curves indicate the spherically-and-temporally-averaged density profiles. Deep inside the stars, the density structures do not vary significantly with time and agree well with the initial profiles from MESA (black solid lines). For reference, in gray solid lines, we also plot the density structure of steady winds with a constant wind velocity of 30 km s^-1 for mass-loss rates of 10^-6–1 M⊙ yr^-1.

Our simulations predict a two-zone CSM density structure: a dense quasi-static CSM confined within 3 × 10^14 cm, attached to a less dense CSM from a dust-driven wind outside. Such a two-zone CSM structure was also found necessary to explain some interacting SNe, e.g., SN 2013fs (Yaron et al. 2017), SN 2020tlf (Jacobson-Galán et al. 2022), SN 2023ixf (Singh et al. 2024; Zimmerman et al. 2024; Nayana et al. 2025), SN 2024ggi (Ertini et al. 2025), and PS1-11aop (Ibik et al. 2025). We find that the CSM densities in our simulations are consistent with the CSM density inferred for SN 2013fs (Yaron et al. 2017), and approximately one order of magnitude lower than for SN 2023ixf (compiled by Nayana et al. 2025) and SN 2024ggi (Zhang et al. 2024; Jacobson-Galán et al. 2024b; Shrestha et al. 2024; Ertini et al. 2025; Chen et al. 2025). However, there are uncertainties both in our treatment of dust and in deriving the density structure from SN observations. We expect that forward modeling from our simulations, with direct comparison to observed SN lightcurves and spectra, will be a more reliable test of our predictions.

Here, we show that the dense inner zone of the CSM in our simulations is bound material, as opposed to the enhanced mass loss assumed in most studies. Such a dense atmosphere is supported by a train of periodic shocks generated by the large-amplitude pulsations (see Figure 2), as also proposed for the atmospheric structure of asymptotic giant branch (AGB) stars (Bertschinger & Chevalier 1985; Bowen 1988). The density profile as a function of radial distance r is well described by a shock-supported quasi-static atmosphere (Equation 5 in Fuller & Tsuna 2024):

ρ(r) = ρ(R) (R/r)^2 exp[ -v_esc^2(R)/(2 v_s^2) (1 - R/r) ].   (1)

We take R and ρ(R) to be the Rosseland radius and the associated density, averaged over time and spherical shells, from our simulations. These two quantities can also be provided by 1D stellar evolution models if a proper convective efficiency is chosen to match the radius predicted by our 3D simulations. For the shock velocity v_s, we take the sound speed predicted by the 1D MESA model at the location where the acoustic timescale becomes comparable to the radiative cooling timescale, i.e., τ = c/c_s (Jiang et al. 2015; Cheng et al. 2024). Here, τ is the optical depth integrated from the stellar surface, c is the speed of light, and c_s is the local sound speed.

Figure 3. The CSM density structure from our 3D simulations is consistent with the range of densities inferred for SN 2013fs. Different colored lines indicate the spherically-averaged density profiles at different times, for the 10 M⊙ simulation (orange) and the 20 M⊙ simulation (red), which are much more extended than the initial conditions from MESA (black solid lines). The white-edged solid lines represent the density profiles averaged over both time and angles. The black dashed lines show the density profiles described by the analytical 'two-zone model' detailed in Section 3.2. We also highlight the inferred density profiles of three well-studied interacting SNe, SN 2013fs (Yaron et al. 2017), SN 2023ixf (compiled by Nayana et al. 2025), and SN 2024ggi (Zhang et al. 2024; Jacobson-Galán et al. 2024b; Shrestha et al. 2024; Ertini et al. 2025; Chen et al. 2025), as labeled. The background gray solid lines indicate density profiles for constant mass-loss rates, assuming a constant wind velocity of 30 km s^-1.
This choice is motivated by the theoretical consideration that the weak-shock speed approximately follows the local sound speed until radiative cooling yields a nearly isothermal atmosphere. For both models, the shock speeds are approximately 11 km s^-1 (supersonic near the surface), consistent with the spherically-averaged radial velocities shown in Figure 2. The density profiles of Equation 1 are shown in Figure 3 as the inner parts of the black dashed lines.

The outer zone of the CSM in our simulations is an extended tail due to the dust-driven wind. As the shock-supported atmosphere extends to large distances, the temperature drops below 1200–1500 K, where dust forms (Höfner & Olofsson 2018), and the radiation force acts on the dust opacities to drive a wind (Fuller & Tsuna 2024). In our simulations, the wind is driven by radiation acting on the high Planck opacity taken from the Ferguson et al. (2005) table below 1200–1500 K. We find that the outward radiation force on this material marginally exceeds the gravitational attraction, even though the simulations are not long enough for us to see the wind material become unbound.

The density profile can also be predicted from analytical theory. The dust-forming radius can be approximated by R_d = (T(R)/T_d)^2 R (Höfner & Olofsson 2018; Fuller & Tsuna 2024), where T_d is the dust-forming temperature and the term T(R)^2 R can be derived from the luminosity of the MESA model, L = 4πR^2 σ T(R)^4. Here, σ is the Stefan-Boltzmann constant. The density ρ(R_d) at the dust-forming radius follows by inserting R_d into Equation 1. The wind mass-loss rate is then Ṁ = 4π R_d^2 ρ(R_d) v_s, and the density structure of the dust-driven wind is

ρ(r) = Ṁ/(4π r^2 v_∞) = ρ(R_d) R_d^2 v_s/(r^2 v_∞) = ρ(R) (R/r)^2 (v_s/v_∞) exp[ -v_esc^2(R)/(2 v_s^2) (1 - R/R_d) ],   (2)

where we assume a terminal wind speed of v_∞ = 30 km s^-1 (Mauron & Josselin 2011).[3] The entire CSM density structures, plotted as black dashed lines in Figure 3, are found by attaching Equation 1 to this wind solution. We find that taking T_d = 1300 K for the 10 M⊙ simulation yields a good fit.

3 The order of magnitude of the wind mass-loss rate (and therefore of the density) is not sensitive to the terminal wind speed, because the terminal speed mostly ranges from 10 to 50 km s^-1 (Mauron & Josselin 2011; Decin et al. 2024). The terminal wind speed can also be predicted analytically from an estimate of the dust opacity (e.g., Equation 14 in Fuller & Tsuna 2024).
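Equations 1 and 2 can be stitched into a single density profile. The sketch below implements this two-zone model; the structure follows the equations above, but the numerical values in the example are round numbers loosely inspired by the 10 M⊙ model, not the exact simulation averages.

import numpy as np

RSUN = 6.957e10   # cm
MSUN = 1.989e33   # g
G = 6.674e-8      # cgs

def two_zone_density(r, M, R, rho_R, v_s, T_R, T_d=1300.0, v_inf=30e5):
    """Two-zone CSM density [g cm^-3] at radius r [cm] (Eqs. 1 and 2).

    Inner zone (R < r < R_d): shock-supported quasi-static atmosphere.
    Outer zone (r > R_d): dust-driven wind, rho ~ r^-2 at constant speed.
    """
    v_esc2 = 2 * G * M / R                       # squared escape speed at R
    R_d = (T_R / T_d) ** 2 * R                   # dust-formation radius

    def atmosphere(x):                           # Eq. 1
        return rho_R * (R / x) ** 2 * np.exp(-v_esc2 / (2 * v_s**2) * (1 - R / x))

    wind = atmosphere(R_d) * (R_d / r) ** 2 * (v_s / v_inf)   # Eq. 2
    return np.where(r < R_d, atmosphere(r), wind)

# Illustrative evaluation (round numbers only):
r = np.geomspace(1000 * RSUN, 1e15, 6)
print(two_zone_density(r, M=10 * MSUN, R=990 * RSUN, rho_R=1e-11,
                       v_s=11e5, T_R=3300.0))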
The 20 M⊙ simulation, by contrast, does not drive a wind, potentially due to its small radius and low luminosity for its mass. The 20 M⊙ star will become more luminous in its final thousand years than simulated here, and is therefore expected to undergo more violent mass ejections prior to core collapse than in our simulation. Both the 3D simulations and the analytical descriptions predict a mass-loss rate between 10^-6 and 10^-5 M⊙ yr^-1, broadly consistent with observed mass-loss rates (e.g., de Jager et al. 1988; Beasor et al. 2020; Antoniadis et al. 2024; Decin et al. 2024).

3.3. 3D Effects Due to Convection: Clumpy Surfaces, Aspherical CSM, and Anisotropic Outflows
In our simulations, large-scale convection in the RSG envelope leads to significant deviations from spherical symmetry, creating a clumpy surface, aspherical CSM, and anisotropic dust-driven outflows, as illustrated in the left panel of Figure 4. We take three spheres at different radii (dotted circles in the left panel of Figure 4) and plot the projected density maps along those spheres in the right panels.

Figure 4. Convection creates a clumpy stellar surface and aspherical circumstellar material, leading to an anisotropic outflow. Left: a 2D density slice of the 10 M⊙ simulation at 60.5 years. Right: projected density maps along three spheres indicated by the dashed circles in the left panel (where the color at the edge of each density map corresponds to that of the matching circle). An interactive 3D visualization can be found here: https://jingzema.com/AREPO-RSG/arepo_rsg_csm_fast.html

Inside the star, near the photosphere (bottom right), the density shows small-scale clumps. The clumpy surface is due to surface convection driven by radiative cooling near the photosphere (Ma et al., to be subm.). This is where the supernova shock will propagate through and break out from the surface. It has been suggested that such a clumpy progenitor surface can prolong the shock breakout duration (Goldberg et al. 2022a). In the circumstellar environment (middle right panel of Figure 4), the density can still differ by several orders of magnitude between different angles. This is particularly evident in the ejected material (top right panel of Figure 4), where the density distribution is dominated by large-scale asymmetries, reflecting the large convective cells inside the star from which the CSM is lifted.

The temporal variations of the convective structure and the aspherical circumstellar material are illustrated in Figure 5, which shows the variations during one pulsation cycle of the 10 M⊙ model: the bolometric intensity (second row), a mid-plane slice of the density (third row), and a mid-plane slice of the radial velocity (last row). Generally, the stellar surface exhibits filament-like structures and dark clumps in the intensity map, almost identical to those seen in CO5BOLD 3D simulations of AGB stars and RSGs (e.g., Freytag et al. 2024). The dark clumps are pulsation-lifted dense opaque material shaped by large-scale convection (Freytag et al. 2024). The overall convective pattern on the surface is controlled by the intense cooling, which creates a density inversion subject to the Rayleigh-Taylor instability, resulting in dense, low-entropy material being mixed into the envelope through fast downdrafts (Ma et al., to be subm.).
Figure 5. Time sequence of the convective envelope variations within one pulsation cycle of the 10 M⊙ simulation. The top row shows the variability of the absolute bolometric magnitude Mbol in black and the spherically-averaged Rosseland radius ⟨Rross⟩ in orange. During this pulsation cycle, we select 4 snapshots equally spaced in time (indicated by gray vertical lines in the top row), and plot the bolometric intensity viewed along the x axis (second row), the y-z mid-plane slice of the density (third row), and the y-z mid-plane slice of the radial velocity (last row).

The top row of Figure 5 shows the nearly antiphase variability between the absolute bolometric magnitude Mbol and the spherically-averaged Rosseland radius. At the luminous phase at 19196 days (defined as +0 days in the second row of the figure), the star begins its expansion. At day +207, the expanding star drives shocks and lifts dense material from the surface. By day +414, the star begins to contract; most of the dense material falls back onto the star while the outer ejecta expand. At day +621, the star resumes its peak luminosity and begins its next pulsation cycle.

Long filaments appear in the density slices where gas is compressed by shock fronts formed in collisions between laterally-expanding ejecta. These long filaments are in fact sheets in 3D, and their separation reflects the length scale of deep convection. This is because the horizontal motions are governed by deep convection (Ma et al., to be subm.), and when a pulse passes near the surface, it steepens into a laterally-expanding shock front that lifts the material. Two adjacent shock fronts meet at a plane extending radially outwards from the star, which creates the overdense sheets approximately perpendicular to the stellar surface. Through multiple pulsation cycles, the star episodically fills its surroundings with dense aspherical material.

We therefore suggest that the SN progenitor profile is clumpy from the stellar surface out to the CSM, which should be taken into account in 1D SN modeling, possibly through micro/macro-clumping (Dessart et al. 2018; Dessart & Audit 2019). This may help explain the different densities derived at different wavelengths for interacting SNe (Berger et al. 2023; Nayana et al. 2025). Another naive expectation is that clumpiness allows radiation to leak out through low-density chimneys, thereby enhancing the cooling, but detailed radiation hydrodynamic models are needed to assess the effects of clumping in progenitors and their associated CSM.

A variety of observational evidence suggests that the CSM is aspherical or clumpy, as indicated by different densities inferred at different wavelengths, multi-peaked emission lines, and polarization signals (e.g., Chandra et al. 2012; Smith et al. 2015; Andrews & Smith 2018; Andrews et al. 2019; Brennan et al. 2022; Kozyreva et al. 2022; Smith et al. 2023; Vasylyev et al. 2023; Bilinski et al. 2024; Singh et al. 2024; Shrestha et al. 2025; Andrews et al. 2025; Nayana et al. 2025; Vasylyev et al. 2025). The aspherical CSM is mostly assumed to be associated with binary interactions (e.g., Smith et al. 2015; Andrews & Smith 2018; Brennan et al. 2022; Smith et al. 2023; Vasylyev et al. 2023; Bilinski et al. 2024; Singh et al. 2024; Andrews et al. 2025). Here, we show instead that convection can also result in highly aspherical CSM dominated by large-scale structures. Future spectropolarimetric forward modeling (e.g., Dessart et al. 2025) from our simulations will be useful for direct comparison with observations.
4. DISCUSSION AND CONCLUSIONS
This is the first of a series of papers describing the scientific results of the 3D AREPO-RSG models. In this work, we perform global 3D radiation hydrodynamic simulations of two pre-explosion RSGs during core carbon burning: a 10 M⊙ RSG at 200 years before it explodes, and a 20 M⊙ RSG at 8000 years before it reaches core collapse. Our multi-scale simulations include 97% of the convective envelope in radius and the atmosphere out to 300 stellar radii. The differences between our results and other 1D or 3D simulations are discussed in Appendix C, along with the caveats of our simulations.

We find that a dense confined CSM of ∼0.01 M⊙ is self-consistently produced in the simulations, episodically lifted by large-amplitude radial pulsations. We interpret those pulsations as fundamental modes excited by the κγ mechanism (e.g., Bronner et al. 2025). The pulsations steepen into shocks and lift the dense surface material into the circumstellar environment out to 3 × 10^14 cm, where dust forms and radiation acts on the dust to drive outflows of 10^-6–10^-5 M⊙ yr^-1. This process is very similar to the pulsation-enhanced dust-driven winds of AGB stars (Höfner & Olofsson 2018). The CSM densities from our simulations fit well the CSM density inferred for SN 2013fs, and lie about one order of magnitude below the CSM inferred for SN 2023ixf and SN 2024ggi. This is already a reasonable agreement considering the uncertainties in both the simulations and the observational inference. Based on our simulations, we propose the 1D analytical two-zone model of Section 3.2 to describe the CSM density profile.

In our simulations, the CSM and the dust-driven outflow are highly aspherical, dominated by large-scale asymmetries, because the large-scale convection in the RSG envelope breaks the spherical symmetry of the material ejection. We therefore propose that:
• The confined CSM observed in interacting hydrogen-rich SNe may be bound material episodically lifted from the surface, rather than the widely-assumed steady wind.
• Highly aspherical CSM is common and can be created by surface convection, rather than only by binary interactions.
• 3D effects need to be incorporated in 1D SN modeling, potentially via effective clumping.

We predict that, for most hydrogen-rich SN progenitors, SN 2023ixf-like progenitor pulsations and SN 2013fs-like highly-confined CSM should be present. Observations of these phenomena are still scarce, limited by our detection capability (Dessart 2024; Van Dyk 2025). However, within the next several years, variable SN progenitors are expected to be detected far more frequently in the Legacy Survey of Space and Time (LSST) at the Vera C. Rubin Observatory (Ivezić et al. 2019; Hambleton et al. 2023). With a wide-field near-ultraviolet (NUV) photometric survey such as ULTRASAT (Shvartzvald et al. 2024), in combination with early follow-up UV spectroscopy from UVEX (Kulkarni et al. 2021) and other ground-based instruments, we will have a chance to detect more SN 2013fs-like early interaction events in the coming years.

We thank Jim Fuller for valuable comments on an early draft. We thank Sebastian Ohlmann for sharing the module to construct stable 3D AREPO giant stars from MESA profiles. We thank Mike Lau, Jared Goldberg, Luc Dessart, Fabian Schneider, Vincent Bronner, Philipp Podsiadlowski, Andrei Beloborodov, and Raffaella Margutti for helpful discussions. A.C. acknowledges support from the French National Research Agency (ANR) funded project PEPPER (ANR-20-CE31-0002). This research project was partly conducted using computational resources (and/or scientific computing services) at the Max Planck Computing & Data Facility.
The authors gratefully acknowledge the scientific support and HPC resources provided by the Erlangen National High Performance Computing Center (NHR@FAU) of the Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) under the NHR project b234dd. NHR funding is provided by federal and Bavarian state authorities. NHR@FAU hardware is partially funded by the German Research Foundation (DFG) - 440719683. Part of this work was done while the author was attending the TDE24 program at the Kavli Institute for Theoretical Physics (KITP), which is supported in part by grant NSF PHY-2309135.

Software: AREPO (Springel 2010; Pakmor et al. 2016; Weinberger et al. 2020), MESA (Paxton et al. 2011, 2013, 2015, 2018, 2019; Jermyn et al. 2023), Astropy (Astropy Collaboration et al. 2013, 2018, 2022), NumPy (Harris et al. 2020), SciPy (Virtanen et al. 2020), Matplotlib (Hunter 2007), Jupyter (Kluyver et al. 2016)

APPENDIX
A. DETAILED METHODS
A.1. 1D MESA Red Supergiant Models
To construct the 1D initial conditions, we use the 1D stellar evolution code MESA (version 15140; Paxton et al. 2011, 2013, 2015, 2018, 2019; Jermyn et al. 2023) to provide the 1D structures of RSGs. To this end, we evolve a grid of single non-rotating massive stars at metallicity Z = 0.02 from the zero-age main sequence (ZAMS) to the onset of core collapse (defined as the phase when the maximum infall speed inside the iron core reaches 300 km s^-1). We then select models with different masses and luminosities along the RSG branch.

Table 1. Parameters of the two 3D pre-SN AREPO-RSG simulations. The stellar parameters are the mass Mtot, bolometric luminosity Lbol, radius Rross where the Rosseland optical depth ≈ 1, and effective temperature Teff. All surface quantities from the simulation outputs are averaged over spherical shells as described in Appendix A.8, with errorbars indicating the 3σ variations due to temporal variability. The stellar Rosseland radii from MESA are referred to as RMESA hereafter. We also list the numerical parameters of the two simulations: the gas mass in the simulation Msim, the mass of the central point particle MIB, the radius of the artificial core RIB, the size of the simulation box lbox, the resolution near the stellar surface ∆rsurf, the total number of cells Ncell, the total number of directions used in the radiation transport NRT, and the total simulation duration tsim.

Stellar parameters:
Model                  | Mtot [M⊙] | Lbol [10^5 L⊙]  | Rross [R⊙]    | Teff [K]
10 M⊙ 1D MESA          | 9.73      | 0.94            | 722           | 3757
10 M⊙ pre-SN 3D AREPO  | 9.78      | 1.09 +0.24/-0.24 | 990 +132/-125 | 3331 +376/-274
20 M⊙ 1D MESA          | 19.45     | 2.59            | 1132          | 3867
20 M⊙ pre-SN 3D AREPO  | 19.60     | 2.61 +0.58/-0.51 | 1317 +111/-134 | 3594 +220/-254

Numerical parameters:
Model  | Msim [M⊙] | MIB [M⊙] | RIB [RMESA] | lbox [RMESA] | ∆rsurf [R⊙] | Ncell [10^7] | NRT | tsim [yr]
10 M⊙  | 5.34      | 4.43     | 3%          | 300          | 6           | 2.3          | 80  | 66
20 M⊙  | 9.52      | 10.08    | 3%          | 300          | 8           | 2.0          | 80  | 73

Convection is modeled using mixing-length theory (MLT; Böhm-Vitense 1958) with a mixing-length parameter αMLT and the Ledoux criterion. To account for the high convective efficiency in the RSG envelope (Dessart et al. 2013; Chun et al. 2018; Goldberg et al. 2022b), we follow the procedure of Paxton et al. (2018) and use a core mixing-length parameter αMLT = 1.5 for hydrogen mass fraction XH ≤ 0.5 and αMLT = 3 for the hydrogen-rich envelope with XH > 0.5. We also include semi-convection (Langer et al. 1983) with a semi-convection parameter αSC = 1. We switch on convective overshooting with step-overshoot parameters f = 0.385 and f0 = 0.05, as calibrated by Brott et al. (2011). For the wind mass loss, we use the Vink et al. (2001) recipe reduced by a factor of 3 for effective temperatures Teff ≥ 10^4 K. This reduction factor is suggested by both theoretical (Krtička & Kubát 2017; Björklund et al. 2021; Gormaz-Matamala et al. 2022) and observational studies (Šurlan et al. 2013; Cohen et al. 2014; Hawcroft et al. 2021). We use the Decin et al. (2024) recipe for Teff < 10^4 K.

The equation of state includes radiation for densities ρ > 10^-9 g cm^-3 and smoothly transitions to a pure gas EOS at lower densities, where the coupling to radiation enters only via the radiation transport source terms (see Appendix C.3). The spherically-averaged Rosseland optical depth at radius r is computed as ⟨τross⟩(r) = Σ_i κR,i ∆m_i/(4π r_i^2), where the sum runs over the cells i outside r. Here, for each cell i, κR,i is the Rosseland opacity, ∆m_i is the mass enclosed in the cell, and r_i is the distance of the cell from the stellar center. We define the Rosseland radius ⟨Rross⟩ as the radius where ⟨τross⟩(r = ⟨Rross⟩) = 1. The spherically-averaged effective temperature ⟨Teff⟩ is defined via the Stefan-Boltzmann law, i.e., by finding the temperature that satisfies Lbol = 4π⟨Rross⟩^2 σ⟨Teff⟩^4, where σ is the Stefan-Boltzmann constant.[5]

5 It would be more intuitive to calculate the bolometric luminosity by integrating the normal radiation flux over the box boundaries. However, since the box boundary is far away from the star, with very low resolution, it is more accurate to calculate the luminosity closer to the star.
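The spherical averaging above can be summarized in a short sketch: integrate the shell-contributed optical depth inward until ⟨τross⟩ reaches 1, then invert the Stefan-Boltzmann law. The radial binning and variable names below are our own illustrative assumptions, not the analysis pipeline of the paper.

import numpy as np

SIGMA_SB = 5.6704e-5   # erg cm^-2 s^-1 K^-4
LSUN = 3.828e33        # erg s^-1

def rosseland_radius_and_teff(r_i, kappa_i, dm_i, L_bol):
    """Spherically-averaged Rosseland radius [cm] and effective temperature [K].

    <tau_ross>(r) = sum over cells i outside r of kappa_i * dm_i / (4 pi r_i^2);
    <R_ross> is where this inward-integrated optical depth first reaches 1,
    and <T_eff> follows from L_bol = 4 pi <R_ross>^2 sigma <T_eff>^4.
    Assumes the total optical depth exceeds 1 somewhere in the domain.
    """
    order = np.argsort(r_i)[::-1]                  # integrate from outside in
    dtau = (kappa_i * dm_i / (4 * np.pi * r_i**2))[order]
    tau = np.cumsum(dtau)                          # monotonically increasing
    R_ross = r_i[order][np.searchsorted(tau, 1.0)]
    T_eff = (L_bol / (4 * np.pi * R_ross**2 * SIGMA_SB)) ** 0.25
    return R_ross, T_eff

# Toy example: 10^4 shells with constant opacity and shell mass.
r = np.geomspace(1e13, 3e14, 10_000)
kappa = np.full_like(r, 1e-3)                      # cm^2 g^-1
dm = np.full_like(r, 1e29)                         # g per shell
print(rosseland_radius_and_teff(r, kappa, dm, L_bol=1.09e5 * LSUN))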
B. IDENTIFYING THE DOMINANT PULSATION MODE
By taking the standard Lomb-Scargle periodogram (Lomb 1976; Scargle 1982) of the simulated lightcurves in the top row of Figure 2, we obtain the power spectra of the lightcurves (Figure 10) and identify the dominant period as the one with maximum power. In Figure 11, we compare the dominant periods of both simulations with the period-luminosity relations of observed RSG populations from Jiang et al. (2024). We find that the dominant pulsation modes in our simulations fit reasonably well with the fundamental modes according to the period-luminosity relations. This agrees with most theoretical and observational results, which suggest that Galactic RSGs pulsate predominantly in the fundamental mode (Heger et al. 1997; Kiss et al. 2006; Yang & Jiang 2012; Ren et al. 2019; Joyce et al. 2020; Jiang et al. 2024; Suzuki & Shigeyama 2025; however, see Guo & Li 2002).

Figure 10. Power spectra of the simulated lightcurves in Figure 2. We obtain the period of the dominant pulsation mode in our simulations by identifying the maximum peak in the power spectrum. The high-frequency part of the power spectrum falls off more steeply than the 1/f trend observed in normal RSGs (where f is the frequency; Kiss et al. 2006), but is closer to simulated yellow supergiants (Goldberg et al. 2025) and other observed massive stars (Bowman 2023).
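As a sketch of this procedure, the dominant period can be recovered from an unevenly sampled lightcurve with Astropy's Lomb-Scargle implementation; the synthetic lightcurve below merely stands in for the simulation output.

import numpy as np
from astropy.timeseries import LombScargle

# Synthetic semi-regular lightcurve: ~600-day fundamental mode plus noise.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 70 * 365.25, 2000))        # days, uneven sampling
lum = 1.0 + 0.2 * np.sin(2 * np.pi * t / 600.0) + 0.05 * rng.normal(size=t.size)

# Power spectrum and dominant period (maximum peak), as in Appendix B.
frequency, power = LombScargle(t, lum).autopower(maximum_frequency=1 / 50.0)
print("dominant period [days]:", 1.0 / frequency[np.argmax(power)])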
Figure 11. The dominant pulsation periods in our 3D simulations agree with the fundamental modes of RSGs according to the empirical period-luminosity relation. The pulsation periods of the 3D simulations are found by identifying the dominant peak in the power spectra (Figure 10) of the simulated lightcurves (top row of Figure 2). We plot the observed period-luminosity relations of RSGs from Jiang et al. (2024) as solid lines, separated into the fundamental mode (FM), first overtone (O1), and long secondary period (LSP). Gray scatter points show the observed RSG populations in the Galaxy (Chatys et al. 2019), M31 (Soraisam et al. 2018), and M33 (Ren et al. 2019). All the observed data points and relations are in absolute K-band magnitude MK; we convert them into bolometric luminosities Lbol following the reddening relation of Massey et al. (2009), assuming an overall effective temperature of 3800 K, which is a crude approximation. We highlight the pulsation periods identified from the pre-explosion lightcurves of two interacting SN progenitors, SN 2023ixf (Kilpatrick et al. 2023; Soraisam et al. 2023; Qin et al. 2024; Xiang et al. 2024a) and SN 2024ggi (Xiang et al. 2024b).

In Figure 11, we also mark the pulsation periods and luminosities inferred from pre-explosion images of two interacting SN progenitors, SN 2023ixf (Kilpatrick et al. 2023; Soraisam et al. 2023; Qin et al. 2024; Xiang et al. 2024a) and SN 2024ggi (Xiang et al. 2024b). By comparing the pulsation period of the SN 2023ixf progenitor with our simulations and the empirical period-luminosity relations, we support the claim of Soraisam et al. (2023) and Hsu et al. (2024) that the pulsation period of the SN 2023ixf progenitor fits better with a luminous RSG of ∼20 M⊙. The apparent pulsation period of the SN 2024ggi progenitor instead favors a very low-mass RSG, but whether this inferred pulsation period is real is debated (Laplace et al. 2025).

C. ADDITIONAL DISCUSSION
C.1. Comparison with Other 1D Models
Current analyses of observed Type II SN lightcurves and spectra rely heavily on comparison with 1D SN modeling, which is sensitive to the 1D progenitor model and the CSM profile (e.g., Dessart et al. 2013, 2017; Morozova et al. 2017; Dessart & Hillier 2019; Goldberg et al. 2019; Goldberg & Bildsten 2020; Moriya et al. 2018; Boian & Groh 2020; Moriya et al. 2023). We find that our 3D simulations differ from the 1D models in many aspects. The differences in 1D-averaged quantities include a different CSM density profile, the presence of large-amplitude pulsations, and larger radii. We discuss these differences separately in this subsection.

C.1.1. CSM Density Profile
When interpreting observed Type II SN lightcurves and spectra, most works assume that the CSM follows a steady-wind density profile, ρ ∝ r^-2 (e.g., Morozova et al. 2017; Boian & Groh 2020; Jacobson-Galán et al. 2024a). However, it has been shown that the inferred CSM density, mass, and radial extent are sensitive to the assumed density structure, where variations include wind acceleration (Moriya et al. 2017, 2018; Dessart 2025) and an extended atmosphere (Dessart et al. 2017; Soker 2021; Dessart & Jacobson-Galán 2023; Fuller & Tsuna 2024).

Our 3D simulations broadly support the idea of an extended atmosphere, but the detailed physics and the CSM density structure differ from previous works. Dessart et al. (2017) and Dessart & Jacobson-Galán (2023) used an ad hoc atmospheric extension, assuming an exponentially-decaying CSM density profile as a function of distance from the stellar surface. Soker (2021) proposed an 'effervescent zone' in which bound clumps are ejected by stellar activity and fall back; the radial extent is determined by the balance of gravity, wind drag, and radiation force, and the density profile can be uncertain (Soker 2023). Fuller & Tsuna (2024) proposed a 'chromosphere' model in which shock waves launched by transonic convection support a dense atmosphere that extends to the dust-formation radius, where radiation acts on dust to drive a wind.
Our simulations support part of the Soker (2021) model and part of the Fuller & Tsuna (2024) model: we find large-scale clumps episodically lifted by pulsations and shaped by convection, in agreement with Soker (2021), but no wind in the immediate surroundings of the star. The wind is launched later, when some clumps reach far enough out to cool down and form dust, in agreement with Fuller & Tsuna (2024). The density profile is steeper than proposed by Fuller & Tsuna (2024), because we find that pulsation, rather than convection, is the main driver of the shock waves, although convection can break the shock waves into multiple curved shock fronts. The spherically-averaged shock speeds are nearly constant due to the semi-regular pulsation, instead of following the Gaussian distribution due to convection adopted in Fuller & Tsuna (2024). Overall, our simulations suggest a 'two-zone model' composed of a periodic-shock-supported atmosphere attached to a dust-driven wind, as described in Section 3.2, which is effectively a combination of the Soker (2021) and Fuller & Tsuna (2024) models.

C.1.2. Large-amplitude Pulsation
RSGs are known to be large-amplitude semi-regular radial pulsators from both observations (e.g., Kiss et al. 2006; Soraisam et al. 2018; Jiang et al. 2024) and theory (e.g., Guo & Li 2002; Joyce et al. 2020; Bronner et al. 2025; Sengupta et al. 2025; Suzuki & Shigeyama 2025). However, large-amplitude radial pulsations are non-linear, so their amplitudes are difficult to predict. Especially for pre-explosion RSGs, 1D models suggest that the high L/M ratio amplifies the pulsation amplitude (Heger et al. 1997; Joyce et al. 2020; Bronner et al. 2025; Suzuki & Shigeyama 2025) and may even trigger a 'superwind' or mass loss (Yoon & Cantiello 2010; Clayton 2018; Sengupta et al. 2025). However, all current 1D models suffer from significant numerical damping, and the surface cooling is not correctly captured (Heger et al. 1997; Clayton 2018; Joyce et al. 2020; Bronner et al. 2025; Suzuki & Shigeyama 2025), both of which are important for predicting the growth and damping rates of the pulsation.

In this work, we self-consistently predict the pulsation amplitudes (Figure 2). We also find that the dominant pulsation periods of our simulations agree with the fundamental mode expected for RSGs (Figure 11). If true, this means the fundamental mode dominates over higher-order modes in pre-explosion RSGs. This leads to different density structures, especially near the stellar surface, and different radii when the star explodes, which is expected to create some diversity in SN lightcurves (Goldberg et al. 2020; Bronner et al. 2025). We also find that, contrary to the suggestions of Heger et al. (1997), Yoon & Cantiello (2010), Clayton (2018), and Sengupta et al. (2025), the pulsations are not strong enough to unbind any material without the help of molecules, dust, or magnetic fields.

C.1.3. Larger Radii and Convective Efficiency
RSG radii are extremely sensitive to the convective efficiency in the envelope: to transport the same amount of energy, more efficient convection implies a lower radiation flux and a flatter temperature gradient, which yields a higher surface temperature and therefore a smaller radius. In 1D stellar models, convection is commonly described using mixing-length theory (MLT; Böhm-Vitense 1958). The convective efficiency is controlled by αMLT, the ratio between the mixing length and the local pressure scale height.
Larger αMLT means more efficient convective energy transport, resulting in more compact RSGs (e.g., Henyey et al. 1965; Stothers & Chin 1995; Goldberg et al. 2022b). We find that the averaged radii in our 3D simulations are slightly larger than those of the 1D models with αMLT = 3 (by 40% for the 10 M⊙ simulation and 20% for the 20 M⊙ simulation).

To better constrain the convective efficiency, we compare the averaged entropy profiles from our 3D simulations to the 1D initial conditions in Figure 12. The 3D entropy profiles are flatter than the 1D MESA profiles in the interior of the envelope, but the entropy drop in the superadiabatic layer is less steep than in the 1D MESA profiles. This indicates that convection in our 3D simulations is more efficient than in the 1D MESA models in the interior of the envelope, but less efficient in the superadiabatic layer. We therefore suggest that the mixing-length parameter may be better described by αMLT ≳ 4 in the efficient convection zone below the hydrogen opacity bump. Near the surface, in the superadiabatic layer, we instead either favor αMLT ≲ 2 or require the turbulent pressure to be taken into account to inflate the envelope. In our simulations, the inflated superadiabatic layer is the main reason for the larger radii compared to 1D models with αMLT = 3. Our findings on the convective efficiency are fully consistent with the results of 3D RSG simulations with Athena++ (Goldberg et al. 2022b).

However, the large radii found in our simulations potentially contradict the results of Dessart et al. (2013), who argued for compact pre-explosion RSGs with αMLT = 3 because SN radiation from RSGs with large radii would experience weaker cooling and remain blue for too long. This controversy may be complicated in two ways. First, it is not clear whether the Rosseland radius determined in the simulation is the same as the stellar radius the SN shock is sensitive to (Dessart et al. 2013). Second, clumping may have a counteracting effect on the color evolution (Dessart et al. 2018), thereby softening the degeneracy. A detailed analysis of the temperature gradient and turbulent pressure in our 3D simulations is needed to assess the actual convective efficiency, as in, e.g., Goldberg et al. (2022b).

C.2. Comparison with Other 3D Radiation Hydrodynamic Simulations
RSGs have been simulated with 3D radiation hydrodynamics by, e.g., Freytag et al. (2002, 2024) and Chiavassa et al. (2011) using the CO5BOLD code (Freytag et al. 2012), and by Goldberg et al. (2022b) using the Athena++ code (Stone et al. 2020). For a review summarizing the simulation results from both codes, see Chiavassa et al. (2024). Other multi-dimensional hydrodynamic simulations also exist for RSGs, especially in the context of transients (e.g., Leung & Fuller 2020; Antoni & Quataert 2022), albeit without detailed radiation transport.

In this comparison, one major puzzle is why propagating shock waves and extended CSM are clearly present in CO5BOLD simulations (Kravchenko et al. 2019; Freytag et al. 2024) and in our AREPO simulations (this work), but are rather weak in Athena++ simulations (Goldberg et al. 2022b). Fuller & Tsuna (2024) suggested that this discrepancy may be due to a numerical transient in the Athena++ simulations, which ejects material during the initial relaxation phase that later falls back and quenches the shocks propagating outwards. This numerical transient is also present in our simulations (Figure 2) and is especially strong in our 20 M⊙ simulation, where it quenches the material ejection for up to 45 years.
Fuller & Tsuna (2024) also suggested that the neglect of recombination energy in the Athena++ simulations may result in weaker pulses. Indeed, recombination energy is included in our AREPO simulations and in CO5BOLD, but not in Athena++, which used an ideal gas. However, we point out here that this discrepancy in producing shocks and CSM may have a physical reason, related to the growth of strong pulsations. We have performed two additional simulations at lower luminosities, using 1D MESA models at the core-helium-burning stage instead of the pre-explosion stage. We find that they also have weak pulsations and do not produce any extended CSM, as shown in Figure 13. This behavior is very similar to the Athena++ simulations and indicates a physical origin. One possible explanation is that the L/M ratios used in the Athena++ simulations and in our two additional core-helium-burning simulations are too low to foster the growth of the pulsation amplitudes. Indeed, 1D models suggest that strong pulsations are only present for RSGs with large L/M ratios (Heger et al. 1997; Yoon & Cantiello 2010; Clayton 2018; Joyce et al. 2020; Laplace et al. 2025; Sengupta et al. 2025; Suzuki & Shigeyama 2025).

Besides differences in physical parameters, we have also made improvements over previous 3D simulations, both by expanding the simulation domain and by including more physics. Athena++ simulations only include a wedge of the star, but can extend the simulation domain to more than 10 times the stellar radius to capture the CSM structure, outflow, and shock breakout (Goldberg et al. 2022b,a). CO5BOLD RSG simulations cover the entire 4π sphere to capture the global convective structure and facilitate synthetic observations (e.g., Chiavassa et al. 2009; Ma et al. 2024), but are limited to small box sizes of about 3 times the stellar radius (but see Freytag & Höfner 2023, which extends the box for AGB stars).

Figure 12. Comparison of 1D MESA profiles (black) and spherically-averaged 3D AREPO profiles (colored). Different colored curves show the spherically-averaged 3D profiles at different times. From top to bottom, we show the spherically-averaged specific entropy ⟨s⟩, radiative luminosity ⟨Lrad⟩, density ⟨ρ⟩, and temperature ⟨T⟩. The profiles to the right of the vertical dotted lines are thermally relaxed, i.e., the thermal timescale is smaller than the simulation duration. In our simulations, the deep entropy profile is nearly flat (first row), and radiation transports only a small fraction of the energy (second row), in agreement with the expectation of efficient convection. As pointed out by Goldberg et al. (2022b) and Chiavassa et al. (2024), this was correctly simulated in Athena++ (Goldberg et al. 2022b) but not in CO5BOLD (Chiavassa et al. 2011). As shown in all panels, the deep interiors of the 3D simulations agree very well with the 1D profiles. This direct 1D-3D agreement inside the star was not achieved in previous simulations (Chiavassa et al. 2011; Goldberg et al. 2022b).

Figure 13. Core-helium-burning RSGs have very weak pulses and do not produce CSM in our 3D simulations. Similar to Figure 2, but for two additional 3D simulations with lower luminosities than the simulations presented elsewhere in this work, taken during core helium burning. The mass-lifting rate is defined as 4πr^2⟨ρ⟩⟨vr⟩.
In our AREPO simulations, we achieve both: we simulate the global 4π convection together with the CSM and outflow in a single simulation, which made this work possible. Furthermore, the local time-stepping technique in AREPO gives us the unique advantage of including the deep envelope in our simulations, ensuring that the deep physical conditions match the initial 1D profiles from the stellar evolution code (Figure 12), which was not achieved in previous simulations. For reference, our effective inner boundary is at 3% of the stellar radius in AREPO, while the inner boundary is at 20% of the stellar radius in CO5BOLD (Chiavassa et al. 2011; Ahmad et al. 2023) and at 30%-50% in Athena++ (Goldberg et al. 2022b). We have performed test simulations showing that moving the boundary of the artificial core too high up in the envelope (10% of the stellar radius) results in weaker pulsations, which is especially important for this study.

Regarding the included physics, CO5BOLD simulations did not include radiation pressure or self-gravity, which are important for massive stars and loosely-bound envelopes. Athena++ simulations included radiation pressure and self-gravity, but used an ideal-gas equation of state, thereby neglecting recombination energy, which is important for ejecting envelope material. We include all of this physics in our AREPO simulations. Our 3D RSG simulations are also the first to experiment with the effects of dust, albeit under crude approximations.

There are also certain aspects in which previous simulations perform better than ours. Our simulations and the Athena++ RSG simulations are so far limited to gray radiation transport, but it has been shown in CO5BOLD simulations that non-gray effects create a steeper temperature gradient near the surface and in the atmosphere, which is important for the stellar spectrum and for measuring stellar radii (Chiavassa et al. 2011). Furthermore, resolving the stellar photosphere is important for obtaining the correct luminosity and cooling rate. Our cell size near the stellar surface is about 0.8% of the stellar radius, comparable to the Athena++ simulations (1% of the stellar radius) but not as fine as the latest CO5BOLD simulations (0.4%-0.5% of the stellar radius), as shown in Figure 7. In the optically-thick regime, we separate the radiative heating/cooling provided by the radiation transport from the radiation pressure and radiation energy provided by the equation of state, which presumably yields higher-order accuracy in the force balance but is not self-consistent. The Athena++ simulations calculate all radiation quantities and coupling terms from the time-dependent radiation transport, which is physically more self-consistent. In addition, due to the transition from a radiation-included equation of state to a radiation-excluded one, we introduce an artificial temperature jump in our simulations at 10^-11 g cm^-3 in the atmosphere, which is not present in other simulations. Future efforts will be devoted to solving these minor issues.

C.3. Simulation Caveats
In this subsection, we summarize the caveats known to us, for future improvement. In our simulations, the first strong pulsations are triggered by the mapping from 1D to 3D, which is a numerical artifact. This numerical transient launches a strong ejection that affects the subsequent CSM structure near the star for 30-45 years, until most of the material has fallen back (Figure 2). The same issue was also found in the Athena++ simulations (Goldberg et al. 2022b).
Our way of dealing with this is to run the simulations long enough that the effects of the initial numerical transient die down. However, the dust-driven outflow launched by the numerical transient still continues to propagate outwards and affects the long-range atmospheric structure, which is so far not properly treated.

This also raises the question of whether the strong pulsations seen in our simulations are real or numerical artifacts. To address this question, we ran two additional simulations at lower luminosities, taking 1D MESA profiles not at the pre-explosion stage but during core helium burning. As shown in Figure 13, in comparison with Figure 2, the 3D core-helium-burning models only have weak pulsations and do not create an extended CSM. This means the strong pulsations in the pre-explosion models are likely not numerical artifacts but are related to the enhanced luminosity and the different stellar structure. In addition, without a physical mechanism to continuously inject energy into the pulsations, the radial pulsations are expected to be damped, e.g., by convection on a timescale of several years (MacLeod et al. 2023). Instead, in our 10 M⊙ simulation, the pulsation grows stronger again after 30 years, indicating self-excited pulsations. However, the exact pulsation amplitude can be influenced by the numerical resolution and by uncertain non-gray effects.

Even though we have included a larger portion of the convective envelope than all previous simulations, we still cannot reach the bottom of the convective envelope. Our artificial core (or effective inner boundary) still extends into the convective zone. Given the non-locality of convection in these RSG envelopes, the artificial core will likely affect the convection higher up in the envelope. These effects can only be quantified by performing test simulations with the boundary of the artificial core at a different radius, but such simulations are currently too expensive to run until they reach a steady state. We have performed short test simulations suggesting that a higher inner boundary, at 10% of the stellar radius, yields weaker pulses and weaker convection. Since we damp the velocities in the artificial core, we also do not fully conserve the total energy, momentum, or angular momentum, but we conserve the total mass to near machine precision. Furthermore, even if we could simulate the entire convective envelope, the deep thermal timescale is still one order of magnitude larger than the plausible simulation time, so the deep envelope will not be fully thermally relaxed. It is reasonable to assume that, by starting from a 1D model from a stellar evolution code, the deep envelope is not far from the actual thermally-relaxed profile, but this is difficult to test.

Another concern is spurious wave generation from the artificial core. In our simulations, we observe sphere-shaped perturbations propagating from the core to the stellar surface. The perturbations appear to be sound waves generated in the central regions of our simulations. Given that the time interval of the episodic material ejection fits the fundamental mode rather than the much shorter interval of these spurious waves, we think the spurious waves do not drive the material ejections and are therefore dynamically unimportant. However, it would be helpful to diminish the spurious waves, whose origins are not clear.
One possibility is that the convective velocities are damped so sharply in the artificial core that they introduce pressure perturbations on top of the hydrostatic-equilibrium background. Another possibility is that the hydrostatic-equilibrium structure is modified by the different convective energy transport in 3D, which results in a small mismatch of entropy between the artificial core and the envelope (see Figure 12) and therefore seeds the pressure perturbations.

The spatial resolution of our simulations is not sufficient to resolve the photosphere. This leads to a time-averaged luminosity output that differs from the input luminosity of the artificial core (Figure 2; for a test illustrating this, see Figure 17 in Ma et al. 2025). We have performed short test simulations suggesting that decreasing the cell size by a factor of two is still not enough to obtain the correct luminosity. Chiavassa et al. (2011) also showed that increasing the spatial resolution helps resolve smaller convective structures near the surface. Another potential issue is whether the recombination layer inside the envelope is well resolved in our simulations. We have checked that our resolution is sufficient to resolve the opacity peak due to He and H recombination, but it is not clear whether the variation in internal energy due to recombination is also resolved. In 1D models, the recombination zones can be very thin during the contraction phase (Clayton 2018; Bronner et al. 2025). Further analyses are needed to assess whether we resolve the recombination layer in 3D.

Despite using 80 angles for the radiation transport, we find that fixing the discretized angles still results in ray effects several stellar radii away from the star. The ray effects manifest as flower-like patterns with multiple peaks in the radiation field in the optically-thin atmosphere (e.g., Figure 16 in Ma et al. 2025). Since the temperature is tightly coupled to the radiation, and the opacity depends strongly on the temperature, the ray effects also create artificial peaks in the spatial distribution of the opacity. This is particularly important for outflows driven by radiation pushing on dust, which depend critically on both the opacity and the radiation field. Such effects can in principle be avoided by introducing extra diffusion, changing the transport directions intermittently (e.g., Freytag et al. 2012; Peter et al. 2023), which will be explored in the future.

Our treatment of the equation of state also needs further improvement. For now, we transition from a radiation-included equation of state to a radiation-excluded one at densities of 10^-9–10^-11 g cm^-3 (see Appendix A.5 for details). This means the radiation force is artificially reduced in the transition region near the stellar surface, which weakens the mass ejections. In addition, the advection components of the radiation flux are neglected for densities < 10^-9 g cm^-3. This is likely a reasonable approximation, because the radiation flux is dominated by the comoving term in the cooling-dominated surface layer and atmosphere. We also transition from the OPAL equation of state to an ideal gas at a temperature of 1870 K. Neither of these 'stitches' of the equation of state guarantees the consistency of the thermodynamic variables. Since the temperature field is also tightly coupled to the radiation, we find that these 'stitches' introduce artificially enhanced temperatures at the transition boundaries in the RSG atmosphere.
Finally, our simulations are limited to gray radiation hydrodynamics, and other important physics is missing, e.g., a detailed treatment of dust, and magnetic fields. Multi-group radiation transport or more realistic opacities are important for the atmospheric structure, in particular for the temperature stratification (e.g., Malygin et al. 2014). How the dense molecular lines in the RSG atmosphere affect the dynamics is also not clear (Kee et al. 2021). Another important physical ingredient is dust. In this work, we only include the effects of dust as a high opacity below 1500 K, but we ignore the detailed treatment of dust formation, advection, and momentum and energy exchange between radiation, dust, and gas. In AGB star atmospheres, the dust growth timescale can be comparable to the pulsation period (Höfner & Olofsson 2018), which means a time-dependent treatment of dust grain growth is needed (Höfner et al. 2016). One step forward would be to include part of those effects, as in, e.g., CO5BOLD simulations of AGB stars (Freytag & Höfner 2023). We also do not include magnetic fields in our simulations so far. The convective dynamo is expected to sustain a magnetic field. Since the RSG surface pressure is dominated by turbulent pressure (Goldberg et al. 2022b; Chiavassa et al. 2024), we also expect the magnetic field to be dynamically important, assuming the magnetic field energy is comparable to the convective energy. Alfvén wave dissipation may also provide another heating mechanism to launch the wind (e.g., Hartmann & MacGregor 1980).

REFERENCES

Ahmad, A., Freytag, B., & Höfner, S. 2023, Astronomy and Astrophysics, 669, A49. Andrews, J. E., & Smith, N. 2018, Monthly Notices of the Royal Astronomical Society, 477, 74. Andrews, J. E., Sand, D. J., Valenti, S., et al. 2019, The Astrophysical Journal, 885, 43. Andrews, J. E., Shrestha, M., Bostroem, K. A., et al. 2025, The Astrophysical Journal, 980, 37. Antoni, A., & Quataert, E. 2022, Monthly Notices of the Royal Astronomical Society, 511, 176. Antoniadis, K., Bonanos, A. Z., de Wit, S., et al. 2024, Astronomy and Astrophysics, 686, A88. Arroyo-Torres, B., Wittkowski, M., Chiavassa, A., et al. 2015, Astronomy and Astrophysics, 575, A50. Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, Astronomy and Astrophysics, 558, A33. Astropy Collaboration, Price-Whelan, A. M., Sipőcz, B. M., et al. 2018, The Astronomical Journal, 156, 123. Astropy Collaboration, Price-Whelan, A. M., Lim, P. L., et al. 2022, The Astrophysical Journal, 935, 167. Beasor, E. R., Davies, B., Smith, N., et al. 2020, Monthly Notices of the Royal Astronomical Society, 492, 5994. Beasor, E. R., Smith, N., & Jencson, J. E. 2025, The Astrophysical Journal, 979, 117. Berger, E., Keating, G. K., Margutti, R., et al. 2023, The Astrophysical Journal, 951, L31. Bertschinger, E., & Chevalier, R. A. 1985, The Astrophysical Journal, 299, 167. Bilinski, C., Smith, N., Williams, G. G., et al. 2024, Monthly Notices of the Royal Astronomical Society, 529, 1104. Björklund, R., Sundqvist, J. O., Puls, J., & Najarro, F. 2021, Astronomy and Astrophysics, 648, A36. Boian, I., & Groh, J. H. 2020, Monthly Notices of the Royal Astronomical Society, 496, 1325. Bowen, G. H.
1988, The Astrophysical Journal, 329, 299. Bowman, D. M. 2023, Astrophysics and Space Science, 368, 107. Brennan, S. J., Fraser, M., Johansson, J., et al. 2022, Monthly Notices of the Royal Astronomical Society, 513, 5642. Bronner, V. A., Laplace, E., Schneider, F. R. N., & Podsiadlowski, P. 2025, Explosions of pulsating red supergiants: a natural pathway for the diversity of Type II-P/L supernovae, arXiv. https://ui.adsabs.harvard.edu/abs/2025arXiv250811077B Brott, I., de Mink, S. E., Cantiello, M., et al. 2011, Astronomy and Astrophysics, 530, A115. Bruch, R. J., Gal-Yam, A., Schulze, S., et al. 2021, The Astrophysical Journal, 912, 46. Bruch, R. J., Gal-Yam, A., Yaron, O., et al. 2023, The Astrophysical Journal, 952, 119. Böhm-Vitense, E. 1958, Zeitschrift für Astrophysik, 46, 108. https://ui.adsabs.harvard.edu/abs/1958ZA.....46..108B Chandra, P., Chevalier, R. A., Chugai, N., et al. 2012, The Astrophysical Journal, 755, 110. Chatys, F. W., Bedding, T. R., Murphy, S. J., et al. 2019, Monthly Notices of the Royal Astronomical Society, 487, 4832. Chen, T.-W., Yang, S., Srivastav, S., et al. 2025, The Astrophysical Journal, 983, 86. Cheng, S. J., Goldberg, J. A., Cantiello, M., et al. 2024, The Astrophysical Journal, 974, 270. Chiavassa, A., Freytag, B., Masseron, T., & Plez, B. 2011, Astronomy and Astrophysics, 535, A22. Chiavassa, A., Kravchenko, K., & Goldberg, J. A. 2024, Living Reviews in Computational Astrophysics, 10, 2. Chiavassa, A., Plez, B., Josselin, E., & Freytag, B. 2009, Astronomy and Astrophysics, 506, 1351. Chiavassa, A., Kravchenko, K., Montargès, M., et al. 2022, Astronomy and Astrophysics, 658, A185. Chun, S.-H., Yoon, S.-C., Jung, M.-K., Kim, D. U., & Kim, J. 2018, The Astrophysical Journal, 853, 79. Clayton, M. 2018. https://ora.ox.ac.uk/objects/uuid:3aa54f27-e13e-4b16-bafd-28b3e7059e8d Cohen, D. H., Wollman, E. E., Leutenegger, M. A., et al. 2014, Monthly Notices of the Royal Astronomical Society, 439, 908. Colgan, J., Kilcrease, D. P., Magee, N. H., et al. 2016, The Astrophysical Journal, 817, 116. De Beck, E., Andrews, H., Quintana-Lacaci, G., & Vlemmings, W. H. T. 2025, Astronomy and Astrophysics, 698, A179. de Jager, C., Nieuwenhuijzen, H., & van der Hucht, K. A. 1988, Astronomy and Astrophysics Supplement Series, 72, 259. https://ui.adsabs.harvard.edu/abs/1988A%26AS...72..259D/abstract Decin, L., Richards, A. M. S., Marchant, P., & Sana, H. 2024, Astronomy and Astrophysics, 681, A17. Dessart, L. 2024, Interacting supernovae. —. 2025, Astronomy and Astrophysics, 694, A132. Dessart, L., & Audit, E. 2019, Astronomy and Astrophysics, 629, A17. Dessart, L., & Hillier, D. J. 2019, Astronomy and Astrophysics, 625, A9. Dessart, L., Hillier, D. J., & Audit, E. 2017, Astronomy and Astrophysics, 605, A83. Dessart, L., Hillier, D. J., Waldman, R., & Livne, E. 2013, Monthly Notices of the Royal Astronomical Society, 433, 1745. Dessart, L., Hillier, D. J., & Wilk, K. D. 2018, Astronomy and Astrophysics, 619, A30. Dessart, L., & Jacobson-Galán, W. V. 2023, Astronomy and Astrophysics, 677, A105. Dessart, L., Leonard, D. C., Vasylyev, S. S., & Hillier, D. J. 2025, Astronomy and Astrophysics, 696, L12. Dupree, A. K., Strassmeier, K. G., Calderwood, T., et al. 2022, The Astrophysical Journal, 936, 18. Ercolino, A., Jin, H., Langer, N., & Dessart, L. 2024, Astronomy and Astrophysics, 685, A58. Ercolino, A., Jin, H., Langer, N., et al. 2025, The demographics of core-collapse supernovae I.
The role of binary evolution and CSM interaction, arXiv. Ertini, K., Regna, T. A., Ferrari, L., et al. 2025, Astronomy and Astrophysics, 699, A60. Ferguson, J. W., Alexander, D. R., Allard, F., et al. 2005, The Astrophysical Journal, 623, 585. Fransson, C., Ergon, M., Challis, P. J., et al. 2014, The Astrophysical Journal, 797, 118. Freytag, B., & Höfner, S. 2023, Astronomy and Astrophysics, 669, A155. Freytag, B., Höfner, S., Aringer, B., & Chiavassa, A. 2024, Astronomy and Astrophysics, 692, A223. Freytag, B., Steffen, M., & Dorch, B. 2002, Astronomische Nachrichten, 323, 213. Freytag, B., Steffen, M., Ludwig, H. G., et al. 2012, Journal of Computational Physics, 231, 919. Fuller, J. 2017, Monthly Notices of the Royal Astronomical Society, 470, 1642. Fuller, J., & Tsuna, D. 2024, The Open Journal of Astrophysics, 7, 47. Förster, F., Moriya, T. J., Maureira, J. C., et al. 2018, Nature Astronomy, 2, 808. Gazak, J. Z., Davies, B., Kudritzki, R., Bergemann, M., & Plez, B. 2014, The Astrophysical Journal, 788, 58. Gilliland, R. L., & Dupree, A. K. 1996, The Astrophysical Journal, 463, L29. Goldberg, J. A., & Bildsten, L. 2020, The Astrophysical Journal, 895, L45. Goldberg, J. A., Bildsten, L., & Paxton, B. 2019, The Astrophysical Journal, 879, 3. —. 2020, The Astrophysical Journal, 891, 15. Goldberg, J. A., Jiang, Y.-F., & Bildsten, L. 2022a, The Astrophysical Journal, 933, 164. —. 2022b, The Astrophysical Journal, 929, 156. Goldberg, J. A., Jiang, Y.-F., Bildsten, L., & Cantiello, M. 2025, Type IIb Supernova Progenitors in 3D: Variability and Episodic Mass Loss revealed by Radiation-Hydrodynamics Simulations, arXiv. https://ui.adsabs.harvard.edu/abs/2025arXiv250812486G Gormaz-Matamala, A. C., Curé, M., Lobel, A., et al. 2022, Astronomy and Astrophysics, 661, A51. Guo, J. H., & Li, Y. 2002, The Astrophysical Journal, 565, 559. Hambleton, K. M., Bianco, F. B., Street, R., et al. 2023, Publications of the Astronomical Society of the Pacific, 135, 105002. Harris, C. R., Millman, K. J., van der Walt, S. J., et al. 2020, Nature, 585, 357. Hartmann, L., & MacGregor, K. B. 1980, The Astrophysical Journal, 242, 260. Hawcroft, C., Sana, H., Mahy, L., et al. 2021, Astronomy and Astrophysics, 655, A67. Heger, A., Jeannin, L., Langer, N., & Baraffe, I. 1997, Astronomy and Astrophysics, 327, 224. Henyey, L., Vardya, M. S., & Bodenheimer, P. 1965, The Astrophysical Journal, 142, 841. Hinds, K. R., Perley, D. A., Sollerman, J., et al. 2025, Monthly Notices of the Royal Astronomical Society, 541, 135. Hsu, B., Smith, N., Goldberg, J. A., et al. 2024, One Year of SN 2023ixf: Breaking Through the Degenerate Parameter Space in Light-Curve Models with Pulsating Progenitors, arXiv. Humphreys, R. M., Smith, N., Davidson, K., et al. 1997, The Astronomical Journal, 114, 2778. Hunter, J. D. 2007, Computing in Science and Engineering, 9, 90. Höfner, S., Bladh, S., Aringer, B., & Ahuja, R. 2016, Astronomy and Astrophysics, 594, A108. Höfner, S., & Olofsson, H. 2018, Astronomy and Astrophysics Review, 26, 1. Ibik, A. L., Drout, M. R., Margutti, R., et al. 2025, The Astrophysical Journal, 979, 16. Ivezic, Z., Kahn, S. M., Tyson, J. A., et al. 2019, The Astrophysical Journal, 873, 111. Jacobson-Galán, W. V., Dessart, L., Jones, D. O., et al. 2022, The Astrophysical Journal, 924, 15. Jacobson-Galán, W. V., Dessart, L., Davis, K. W., et al. 2024a, The Astrophysical Journal, 970, 189. Jacobson-Galán, W. V., Davis, K. W., Kilpatrick, C. D., et al.
2024b, The Astrophysical Journal, 972, 177. Jermyn, A. S., Bauer, E. B., Schwab, J., et al. 2023, The Astrophysical Journal Supplement Series, 265, 15. Jiang, B., Ren, Y., & Yang, M. 2024, in IAU Symposium, Vol. 376, 292-305. Jiang, Y.-F. 2021, The Astrophysical Journal Supplement Series, 253, 49. Jiang, Y.-F., Cantiello, M., Bildsten, L., Quataert, E., & Blaes, O. 2015, The Astrophysical Journal, 813, 74. Joyce, M., Leung, S.-C., Molnár, L., et al. 2020, The Astrophysical Journal, 902, 63. Kee, N. D., Sundqvist, J. O., Decin, L., de Koter, A., & Sana, H. 2021, Astronomy and Astrophysics, 646, A180. Kilpatrick, C. D., Foley, R. J., Jacobson-Galán, W. V., et al. 2023, The Astrophysical Journal, 952, L23. Kilpatrick, C. D., Suresh, A., Davis, K. W., et al. 2025, The Type II SN 2025pht in NGC 1637: A Red Supergiant with Carbon-rich Circumstellar Dust as the First JWST Detection of a Supernova Progenitor Star, arXiv. Kiss, L. L., Szabó, G. M., & Bedding, T. R. 2006, Monthly Notices of the Royal Astronomical Society, 372, 1721. Kluyver, T., Ragan-Kelley, B., Pérez, F., et al. 2016, in Positioning and Power in Academic Publishing: Players, Agents and Agendas (IOS Press). Kozyreva, A., Klencki, J., Filippenko, A. V., et al. 2022, The Astrophysical Journal, 934, L31. Kravchenko, K., Chiavassa, A., Van Eck, S., et al. 2019, Astronomy and Astrophysics, 632, A28. Krtička, J., & Kubát, J. 2017, Astronomy and Astrophysics, 606, A31. Kulkarni, S. R., Harrison, F. A., Grefenstette, B. W., et al. 2021, Science with the Ultraviolet Explorer (UVEX). Landri, C., & Pejcha, O. 2024, Monthly Notices of the Royal Astronomical Society, 531, 3391. Langer, N., Fricke, K. J., & Sugimoto, D. 1983, Astronomy and Astrophysics, 126, 207. https://ui.adsabs.harvard.edu/abs/1983A&A...126..207L/abstract Laplace, E., Bronner, V. A., Schneider, F. R. N., & Podsiadlowski, P. 2025, Pulsations change the structures of massive stars before they explode: interpreting the nearby supernova SN 2023ixf, arXiv. https://ui.adsabs.harvard.edu/abs/2025arXiv250811088L Leung, S.-C., & Fuller, J. 2020, The Astrophysical Journal, 900, 99. Leung, S.-C., Wu, S., & Fuller, J. 2021, The Astrophysical Journal, 923, 41. Levesque, E. M., Massey, P., Olsen, K. A. G., et al. 2005, The Astrophysical Journal, 628, 973. Lomb, N. R. 1976, Astrophysics and Space Science, 39, 447. Ma, J.-Z., Chiavassa, A., de Mink, S. E., et al. 2024, The Astrophysical Journal Letters, 962, L36. Ma, J.-Z., Pakmor, R., Justham, S., & de Mink, S. E. 2025, submitted to A&A. https://ui.adsabs.harvard.edu/abs/2025arXiv250316627M MacLeod, M., Antoni, A., Huang, C. D., Dupree, A., & Loeb, A. 2023, The Astrophysical Journal, 956, 27. Malygin, M. G., Kuiper, R., Klahr, H., Dullemond, C. P., & Henning, T. 2014, Astronomy and Astrophysics, 568, A91. Massey, P., Neugent, K. F., Ekström, S., Georgy, C., & Meynet, G. 2023, The Astrophysical Journal, 942, 69. Massey, P., Silva, D. R., Levesque, E. M., et al. 2009, The Astrophysical Journal, 703, 420. Mathis, J. S., Rumpl, W., & Nordsieck, K. H. 1977, The Astrophysical Journal, 217, 425. Matsuoka, T., & Sawada, R. 2024, The Astrophysical Journal, 963, 105. Mauron, N., & Josselin, E. 2011, Astronomy and Astrophysics, 526, A156. Mcley, L., & Soker, N. 2014, Monthly Notices of the Royal Astronomical Society, 445, 2492. Montargès, M., Cannon, E., Lagadec, E., et al. 2021, Nature, 594, 365. Moriya, T. J., Förster, F., Yoon, S.-C., Gräfener, G., & Blinnikov, S. I.
2018, Monthly Notices of the Royal Astronomical Society, 476, 2840. Moriya, T. J., Subrayan, B. M., Milisavljevic, D., & Blinnikov, S. I. 2023, Publications of the Astronomical Society of Japan, 75, 634. Moriya, T. J., Yoon, S.-C., Gräfener, G., & Blinnikov, S. I. 2017, Monthly Notices of the Royal Astronomical Society, 469, L108. Morozova, V., Piro, A. L., & Valenti, S. 2017, The Astrophysical Journal, 838, 28. —. 2018, The Astrophysical Journal, 858, 15. Munoz-Sanchez, G., Kalitsounaki, M., de Wit, S., et al. 2024, The dramatic transition of the extreme Red Supergiant WOH G64 to a Yellow Hypergiant, arXiv. Nayana, A. J., Margutti, R., Wiston, E., et al. 2025, The Astrophysical Journal, 985, 51. Ohlmann, S. T., Röpke, F. K., Pakmor, R., & Springel, V. 2017, Astronomy and Astrophysics, 599, A5. Ohnaka, K., Hofmann, K. H., Weigelt, G., et al. 2024, Astronomy and Astrophysics, 691, L15. Ouchi, R., & Maeda, K. 2017, The Astrophysical Journal, 840, 90. Pakmor, R., Springel, V., Bauer, A., et al. 2016, Monthly Notices of the Royal Astronomical Society, 455, 1134. Paxton, B., Bildsten, L., Dotter, A., et al. 2011, The Astrophysical Journal Supplement Series, 192, 3. Paxton, B., Cantiello, M., Arras, P., et al. 2013, The Astrophysical Journal Supplement Series, 208, 4. Paxton, B., Marchant, P., Schwab, J., et al. 2015, The Astrophysical Journal Supplement Series, 220, 15. Paxton, B., Schwab, J., Bauer, E. B., et al. 2018, The Astrophysical Journal Supplement Series, 234, 34. Paxton, B., Smolec, R., Schwab, J., et al. 2019, The Astrophysical Journal Supplement Series, 243, 10. Peter, T., Klessen, R. S., Kanschat, G., Glover, S. C. O., & Bastian, P. 2023, Monthly Notices of the Royal Astronomical Society, 519, 4263. Qin, Y.-J., Zhang, K., Bloom, J., et al. 2024, Monthly Notices of the Royal Astronomical Society, 534, 271. Quataert, E., & Shiode, J. 2012, Monthly Notices of the Royal Astronomical Society, 423, L92. Ransome, C. L., & Villar, V. A. 2025, The Astrophysical Journal, 987, 13. Ren, Y., Jiang, B.-W., Yang, M., & Gao, J. 2019, The Astrophysical Journal Supplement Series, 241, 35. Rogers, F. J., & Nayfonov, A. 2002, The Astrophysical Journal, 576, 1064. Scargle, J. D. 1982, The Astrophysical Journal, 263, 835. Schuster, M. T., Humphreys, R. M., & Marengo, M. 2006, The Astronomical Journal, 131, 603. Sengupta, S., Sujit, D., & Sarangi, A. 2025, Dance to Demise - How Massive Stars May Form Dense Circumstellar Shells Before Explosion, arXiv. Shiode, J. H., & Quataert, E. 2014, The Astrophysical Journal, 780, 96. Shrestha, M., Bostroem, K. A., Sand, D. J., et al. 2024, The Astrophysical Journal, 972, L15. Shrestha, M., DeSoto, S., Sand, D. J., et al. 2025, The Astrophysical Journal, 982, L32. Shvartzvald, Y., Waxman, E., Gal-Yam, A., et al. 2024, The Astrophysical Journal, 964, 74. Siebert, M. A., De Beck, E., Quintana-Lacaci, G., & Vlemmings, W. H. T. 2025, Astronomy and Astrophysics, 700, L11. Singh, A., Teja, R. S., Moriya, T. J., et al. 2024, The Astrophysical Journal, 975, 132. Singh, A. P., Richards, A. M. S., Humphreys, R. M., Decin, L., & Ziurys, L. M. 2023, The Astrophysical Journal, 954, L1. Smartt, S. J. 2009, Annual Review of Astronomy and Astrophysics, 47, 63. Smith, N. 2017, in Handbook of Supernovae, ed. A. W. Alsabti & P. Murdin (Cham: Springer International Publishing), 403-429. Smith, N., & Arnett, W. D. 2014, The Astrophysical Journal, 785, 82. Smith, N., Humphreys, R. M., Davidson, K., et al.
2001, The Astronomical Journal, 121, 1111. Smith, N., Pearson, J., Sand, D. J., et al. 2023, The Astrophysical Journal, 956, 46. Smith, N., Mauerhan, J. C., Cenko, S. B., et al. 2015, Monthly Notices of the Royal Astronomical Society, 449, 1876. Soker, N. 2021, The Astrophysical Journal, 906, 1. —. 2023, Research in Astronomy and Astrophysics, 23, 081002. Soraisam, M. D., Bildsten, L., Drout, M. R., et al. 2018, The Astrophysical Journal, 859, 73. Soraisam, M. D., Szalai, T., Van Dyk, S. D., et al. 2023, The Astrophysical Journal, 957, 64. Springel, V. 2010, Monthly Notices of the Royal Astronomical Society, 401, 791. Stone, J. M., Tomida, K., White, C. J., & Felker, K. G. 2020, The Astrophysical Journal Supplement Series, 249, 4. Stothers, R. B., & Chin, C.-W. 1995, The Astrophysical Journal, 440, 297. Suzuki, A., & Shigeyama, T. 2025, Radial pulsation runaway in massive red supergiants in late evolutionary stage and implications to hydrogen-rich supernovae, arXiv. Tsuna, D., Huang, X., Fuller, J., & Piro, A. L. 2025, The Astrophysical Journal, 979, 20. Tsuna, D., Matsumoto, T., Wu, S. C., & Fuller, J. 2024, The Astrophysical Journal, 966, 30. Van Dyk, S. D. 2025, Galaxies, 13, 33. Vasylyev, S. S., Yang, Y., Filippenko, A. V., et al. 2023, The Astrophysical Journal, 955, L37. Vasylyev, S. S., Dessart, L., Yang, Y., et al. 2025, Spectropolarimetric Evolution of SN 2023ixf: an Asymmetric Explosion in a Confined Aspherical Circumstellar Medium, arXiv. Vink, J. S., de Koter, A., & Lamers, H. J. G. L. M. 2001, Astronomy and Astrophysics, 369, 574. Virtanen, P., Gommers, R., Oliphant, T. E., et al. 2020, Nature Methods, 17, 261. Weinberger, R., Springel, V., & Pakmor, R. 2020, The Astrophysical Journal Supplement Series, 248, 32. Woosley, S. E., & Heger, A. 2015, The Astrophysical Journal, 810, 34. Wu, S., & Fuller, J. 2021, The Astrophysical Journal, 906, 3. Wu, S. C., & Fuller, J. 2022, The Astrophysical Journal, 930, 119. Xiang, D., Mo, J., Wang, L., et al. 2024a, Science China Physics, Mechanics, and Astronomy, 67, 219514. Xiang, D., Mo, J., Wang, X., et al. 2024b, The Astrophysical Journal, 969, L15. Yang, M., & Jiang, B. W. 2012, The Astrophysical Journal, 754, 35. Yang, M., Bonanos, A. Z., Jiang, B., et al. 2023, Astronomy and Astrophysics, 676, A84. Yaron, O., Perley, D. A., Gal-Yam, A., et al. 2017, Nature Physics, 13, 510. Yoon, S.-C., & Cantiello, M. 2010, The Astrophysical Journal, 717, L62. Zapartas, E., de Mink, S. E., Justham, S., et al. 2019, Astronomy and Astrophysics, 631, A5. Zhang, J., Dessart, L., Wang, X., et al. 2024, The Astrophysical Journal, 970, L18. Zimmerman, E. A., Irani, I., Chen, P., et al. 2024, Nature, 627, 759. Šurlan, B., Hamann, W.-R., Aret, A., et al. 2013, Astronomy and Astrophysics, 559, A130.
arXiv:2510.14864v2 [cs.IT] 22 Oct 2025
The Whole Is Less than the Sum of Parts: Subsystem Inconsistency in Partial Information Decomposition

Aobo Lyu⋆, Andrew Clark⋆, and Netanel Raviv†
⋆Department of Electrical and Systems Engineering, Washington University in St. Louis, St. Louis, MO, USA
†Department of Computer Science and Engineering, Washington University in St. Louis, St. Louis, MO, USA
aobo.lyu@wustl.edu, andrewclark@wustl.edu, netanel.raviv@wustl.edu

Abstract—Partial Information Decomposition (PID) was proposed by Williams and Beer in 2010 as a tool for analyzing fine-grained interactions between multiple random variables, and has since found numerous applications ranging from neuroscience to privacy. However, a unified theoretical framework remains elusive due to key conceptual and technical challenges. We identify and illustrate a crucial problem: PID violates the set-theoretic principle that the whole equals the sum of its parts (WESP). Through a counterexample in a three-variable system, we demonstrate how such violations naturally arise, revealing a fundamental limitation of current lattice-based PID frameworks. To address this issue, we introduce a new axiomatic framework, termed System Information Decomposition (SID), specifically tailored for three-variable systems. SID resolves the WESP violation by redefining the summation rules of decomposed information atoms based on synergistic relationships. However, we further show that for systems with four or more variables, no partial summation approach within the existing lattice-based structures can fully eliminate WESP inconsistencies. Our results thus highlight the inherent inadequacy of (antichain) lattice-based decompositions for general multivariate systems.

I. INTRODUCTION

Understanding how information is shared among multiple variables is a fundamental challenge in information theory. Partial Information Decomposition (PID), introduced by Williams and Beer [1], addresses this by decomposing the mutual information between a group of source variables and a target variable into distinct and independent information atoms, which reveal how the sources individually and collectively influence a target. The implementation of this framework is based on a structure called the redundancy lattice [2], which is inspired by and closely aligned with the principles of classical set theory. Since its conception, PID has been widely applied, providing deep insights into neural correlations [3] and cognitive processes [4], privacy and fairness in data disclosure [5], [6], causal emergence phenomena [7], and more.

However, despite these successful applications, the PID framework itself remains incomplete and faces fundamental challenges. Notably, despite extensive research efforts [8]-[13], no existing PID measure simultaneously satisfies all PID axioms and desired properties. We argue that this persistent difficulty arises primarily from conceptual flaws inherent to the PID framework. Specifically, PID implicitly relies on the assumption that the whole equals the sum of its parts (WESP), a set-theoretic principle which may be violated by information measures. Such violations have previously been noted [14] through the perspective of the inclusion-exclusion principle [15], [16]. In this paper, we point out that enforcing WESP overlooks higher-order synergistic effects, resulting in inconsistent behavior in subsystems, where the sum of PID components differs from the total information of the system.
In light of these challenges, there is a need for a new framework that can explore the rules that information decomposition should follow without relying on the WESP principle. In this direction, this paper makes three main contributions: (i) We reveal an inherent inconsistency in PID by explicitly demonstrating WESP violation with a three-source counterexample. (ii) We introduce System Information Decomposition (SID), a novel axiomatic framework for three-source systems with a target equal to the sources, resolving PID's WESP inconsistency by redefining information summation rules within a simplified antichain structure. Then, it is shown that an extension of the well-known Gács-Körner common information measure [17] satisfies this axiomatic framework. (iii) We demonstrate that for a general multivariate system, no antichain lattice-based summation method can fully resolve the WESP inconsistency, by presenting two systems with identical information atoms but different mutual information due to different synergistic dynamics. Our results advocate for a shift beyond antichain (set-partition) logic toward new structural foundations capable of capturing higher-order synergistic relationships.

The remainder of the paper is structured as follows. Section II reviews PID and presents our counterexample illustrating WESP violations. Section III proposes the SID axiomatic framework for a three-variable system and an operational definition of redundancy, demonstrating how SID resolves these inconsistencies. Section IV extends the analysis to four-variable systems, showing why the antichain-based approach is insufficient. Finally, Section V discusses the implications of the work and future directions.

II. PARTIAL INFORMATION DECOMPOSITION

Williams and Beer [1] introduced the Partial Information Decomposition (PID) framework to decompose multivariate information axiomatically. Consider sources S1, S2 and target T: the mutual information I(S1, S2; T) decomposes into redundant, unique, and synergistic atoms (see Figure 1). The redundancy Red(S1, S2 → T) is shared information; the unique atom Un(S1 → T|S2) is information exclusively from S1 (similarly Un(S2 → T|S1)); and the synergy Syn(S1, S2 → T) emerges only from joint observation of S1 and S2.

Fig. 1. The structure of PID with two source variables; cf. (1) and (2).

Together, we have that for the whole system (S1, S2, T),

I((S1, S2); T) = Red(S1, S2 → T) + Syn(S1, S2 → T) + Un(S1 → T|S2) + Un(S2 → T|S1), (1)

and for each subsystem (S1, T) and (S2, T),

I(S1; T) = Red(S1, S2 → T) + Un(S1 → T|S2), and I(S2; T) = Red(S1, S2 → T) + Un(S2 → T|S1). (2)

For general systems with source variables S = {S1, . . . , Sn} and target T, PID uses the redundancy lattice A(S) [1], [2], which is the set of antichains formed from the power set of S under set inclusion, with a natural order ⪯_S.

Definition 1 (PID Redundancy Lattice). For the set of source variables S, the set of antichains is:

A(S) = {α ∈ P1(P1(S)) : ∀Ai, Aj ∈ α, Ai ⊄ Aj}, (3)

where P1(S) = P(S) \ {∅} is the set of all nonempty subsets of S, and for every α, β ∈ A(S), we say that β ⪯_S α if for every A ∈ α there exists B ∈ β such that B ⊆ A. For ease of exposition, we denote elements of A(S) using their indices (e.g., we write {{S1}{S2}} as {{1}{2}}).
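As a concrete illustration of Definition 1 (a brute-force sketch of ours, not code from the paper; function names are arbitrary), the lattice A(S) can be enumerated directly, recovering the four atoms of (1)-(2) for two sources and the 18 antichains shown in Figure 2 for three sources:

```python
from itertools import combinations

# Enumerate the redundancy lattice A(S) of Definition 1: all nonempty
# collections of nonempty subsets of S in which no set contains another.

def nonempty_subsets(s):
    s = list(s)
    return [frozenset(c) for r in range(1, len(s) + 1) for c in combinations(s, r)]

def antichains(sources):
    subsets = nonempty_subsets(sources)
    out = []
    for r in range(1, len(subsets) + 1):
        for alpha in combinations(subsets, r):
            # keep alpha only if its members are pairwise incomparable
            if all(not (a < b or b < a) for a in alpha for b in alpha if a != b):
                out.append(alpha)
    return out

print(len(antichains({1, 2})))     # 4 antichains: the PI-atoms of (1)-(2)
print(len(antichains({1, 2, 3})))  # 18 antichains, as in Figure 2
```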
Based on PID Definition 1, the values of the PI-atoms can be expressed as a partial information function Π^T_A.

Definition 2 (Partial Information Decomposition Framework). Let S be a collection of sources and let T be the target. The set of PI-atoms is defined as a family of partial information functions (PI-functions) Π^T_A : A(A) → R for all A ⊆ S.

Intuitively, for every α ∈ A(A), the atom Π^T_A(α) measures the amount of information provided by each set in the antichain α to T and not provided by any β ⪯_A α. For simplicity we write Π^T_12(·) for Π^T_{{S1,S2}}(·), and so on; e.g., Π^T_12({{1}}) = Π^T_{{S1,S2}}({{S1}}). Note that in the case S = {S1, S2}, Definition 2 reduces to

Red(S1, S2 → T) = Π^T_12({{1}{2}}), Un(S1 → T|S2) = Π^T_12({{1}}), Syn(S1, S2 → T) = Π^T_12({{12}}), Un(S2 → T|S1) = Π^T_12({{2}}),

recovering the terms in (1) and (2).

Fig. 2. The structure of PID with 3 source variables.

For general systems, PID requires the following mutual information constraints [1] (i.e., the equivalent of (1) and (2)).

PID Axiom 1. For any subsets A, B of sources S with A ⊆ B, the sum of PI-atoms decomposed from system B satisfies

I(A; T) = Σ_{β ⪯_B {A}} Π^T_B(β), (4)

where {A} is the antichain with the single element A.

It is worth noting that (4) constrains the consistency of the sums of PI-atoms decomposed from different subsystems [18]-[21], obtained by taking different sets B on the right-hand side.

Lemma 1 (Subsystem Consistency). For A, B, C ⊆ S such that C ⊆ A ∩ B, and for Π as in Definition 2 satisfying PID Axiom 1, we have that

Σ_{β ⪯_A {C}} Π^T_A(β) = Σ_{β ⪯_B {C}} Π^T_B(β). (5)

Consider the system in Figure 1. For the atoms decomposed from the system (S1, T), the quantity Π^T_1({{1}}) reflects the (redundant) information that S1 provides about T. If we add a source S2 to this system, this information will be further decomposed into the redundant information from S1, S2 and the unique information from S1 alone but not from S2. Below are three axioms regarding the redundant information Red(S1, . . . , SN → T), which is reflected by the PI-atom Π^T_S({{1}{2} . . . {N}}), for any multivariate system S.

PID Axiom 2 (Commutativity¹). Redundant information is invariant under any permutation σ of the sources, i.e., Red(S1, . . . , SN → T) = Red(S_{σ(1)}, . . . , S_{σ(N)} → T).

[Footnote 1: We note that in general, commutativity is an axiom that follows from the intuitive understanding of redundancy. Yet, when using the antichain perspective, it is trivially satisfied since the respective atom Π^T_S({{1}{2} . . . {N}}) depends on the antichain of all singletons, which is not affected by reordering.]

PID Axiom 3 (Monotonicity). Redundant information decreases monotonically as more sources are included, i.e., Red(S1, . . . , SN, SN+1 → T) ≤ Red(S1, . . . , SN → T).

PID Axiom 4 (Self-redundancy). Redundant information for a single source variable Si equals the mutual information, i.e., Red(Si → T) = I(Si; T).

Axiom 3 also implies another lemma, as follows.

Lemma 2 (Nonnegativity). Partial Information Decomposition satisfies Red(S1, . . . , SN → T) ≥ 0.

Proof. Add a constant variable S′ to the sources and obtain Red(A → T) ≥ Red(A, S′ → T) = 0.

Besides, another intuitive property is often considered [9].

Property 1 (Independent Identity). If I(S1; S2) = 0 and T = (S1, S2), then Red(S1, S2 → T) = 0.

Remark 1. From an information perspective, there is no difference between "T = (S1, S2)" and "H(T|S1, S2) = H(S1, S2|T) = 0," which we denote by T =det (S1, S2) for brevity.

However, this framework inherently becomes contradictory for three or more source variables, as shown in [22, Thm. 2]. To set the stage for our results, we briefly recall this finding for the following system and Lemma 3, proved in Appendix B1.

System (S̄1, S̄2, S̄3, T̄): Let x1 and x2 be two independent Bernoulli(1/2) variables, and x3 = x1 ⊕ x2. Define S̄1 = x1, S̄2 = x2, S̄3 = x3, and target T̄ = (x1, x2, x3).
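Before stating the lemma, a short numerical sanity check (ours; helper names are arbitrary) confirms the information bookkeeping of this system: the whole carries I(T̄; S̄) = H(S̄1, S̄2, S̄3) = 2 bits while each source alone carries I(T̄; S̄i) = 1 bit, the ingredients from which Lemma 3 derives its contradiction:

```python
import numpy as np
from itertools import product

# XOR system above: outcome = (s1, s2, s3) with s3 = s1 ^ s2, and the target
# T is the whole tuple. Entropies in bits; helper names are ours.

def H(pmf):
    p = np.array([v for v in pmf.values() if v > 0])
    return float(-(p * np.log2(p)).sum())

def marginal(joint, idx):
    out = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in idx)
        out[key] = out.get(key, 0.0) + p
    return out

joint = {(x1, x2, x1 ^ x2): 0.25 for x1, x2 in product((0, 1), repeat=2)}

H_S = H(joint)  # since T determines and is determined by S: I(T;S) = H(S) = 2
I_T_Si = [H(marginal(joint, (i,))) for i in range(3)]  # I(T;Si) = H(Si) = 1 each
print(H_S, I_T_Si)  # 2.0 [1.0, 1.0, 1.0]
# Lemma 3 below forces three synergy-type atoms of value 1, summing to 3 > 2.
```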
Lemma 3 ([22]). For the system (S̄1, S̄2, S̄3, T̄), any candidate PID measure Π^T̄_A : A(A) → R, ∀A ⊆ S̄, that satisfies PID Axioms 2, 3, 4, Property 1, and PID Axiom 1 for every |A| ≤ 2, violates PID Axiom 1 for |A| = 3, i.e.,

I(T̄; S̄) < Σ_{β ⪯_S̄ {S̄}} Π^T̄_S̄(β).

The redundancy lattice of PID was built under the assumption that mutual information could be partitioned into disjoint atoms summing to the total, i.e., WESP. However, Lemma 3 demonstrates that this assumption is incompatible with synergistic phenomena in multivariate systems. This phenomenon is inherent to multivariate information measures: the sum of synergistic components can exceed the total entropy [23], which is described rigorously in the following observation.

Observation 1 (Synergistic Phenomena [23]). For any system X = {X1, . . . , XN}, denote by Syn(X∖Xi → Xi) the information that can only be provided jointly by the sources X∖Xi to the target Xi. The sum of all this information can be greater than the joint entropy of the system, i.e., in general it may hold that

Σ_{i∈{1,...,N}} Syn(X∖Xi → Xi) > H(X). (6)

In particular, in the system given in Lemma 3, the l.h.s. of (6) equals 3, whereas the r.h.s. equals 2.

In summary, PID's reliance on the WESP principle (Axiom 1) is incompatible with the existence of synergy. Observation 1 quantifies this incompatibility: the sum of synergistic components can exceed the whole's entropy. Thus, to fix our decomposition, we must relax or modify Axiom 1. In the next section, we propose a new framework addressing this limitation explicitly for three-variable systems.

III. THREE-VARIABLE SYSTEM INFORMATION DECOMPOSITION

In this section we resolve the issue from Section II for the case of three source variables S1, S2, S3 and T = (S1, S2, S3). When T = (S1, S2, S3), the PID of I(T; S1, S2, S3) amounts to a decomposition of the joint entropy H(S1, S2, S3). We define a new framework in which Axiom 1 is replaced by a summation over a subset of the atoms, in a way which circumvents the overcounting in Observation 1. We make use of the following lattice.

Definition 3 (SID Half Lattice). For S = {S1, S2, S3}, let

A°(S) = {α ∈ P1(P1(S)) : ∃Ak ∈ α with |Ak| = 1, and ∀Ai, Aj ∈ α, Ai ⊄ Aj} (7)
= { {{1}{2}{3}}, {{1}{2}}, {{1}{3}}, {{2}{3}}, {{1}{23}}, {{2}{13}}, {{3}{12}}, {{1}}, {{2}}, {{3}} },

where P1(S) and ⪯_S are as in Definition 1.

For every A ⊆ S and every α ∈ A°(A), our aim is to measure the information contributed by every subset in α to the whole system A, which is not already accounted for by any antichain β ∈ A°(A) such that β ⪯_A α.

Definition 4 (System Information Decomposition Framework). Let S be a collection of variables. The set of system information atoms (SI-atoms) is defined as a family of system information functions Ψ_A : A°(A) → R for all A ⊆ S.

The SID half lattice can be understood as the restriction of the PID redundancy lattice for three sources (Definition 1) obtained by removing all antichains that do not contain any singleton source (see Figure 3(B)). See Appendix A for a further comparison between SID and two-source PID.

Fig. 3. Comparison between SID and three-source PID. (A) Three-variable SID. (B) Three-source PID, where the antichains in bold contain at least one singleton source; their structure is consistent with SID.

The original PID Axioms 2, 3, and 4 are required for SID as well. PID Axiom 1, which leads to the inconsistency demonstrated in Lemma 3, will be modified shortly.
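A quick check of Definition 3 (again a brute-force sketch of ours, self-contained and independent of the earlier snippet): filtering the antichains of the three-source redundancy lattice down to those containing a singleton leaves exactly the 10 elements listed in (7):

```python
from itertools import combinations

# Among all antichains of nonempty subsets of S = {1,2,3}, keep those that
# contain a singleton set; this is the half lattice A°(S) of Definition 3.

subsets = [frozenset(c) for r in (1, 2, 3) for c in combinations((1, 2, 3), r)]
lattice = [alpha for r in range(1, len(subsets) + 1)
           for alpha in combinations(subsets, r)
           if all(not (a < b or b < a) for a in alpha for b in alpha if a != b)]
half = [alpha for alpha in lattice if any(len(a) == 1 for a in alpha)]
print(len(lattice), len(half))  # 18 antichains in A(S), 10 in A°(S) as in (7)
```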
Similar to PID, we define the SID redundant information as Red(S1, S2, S3) = Ψ_S({{S1}{S2}{S3}}), and for all distinct i, j in S, let Red(Si, Sj) = Ψ_{{Si,Sj}}({{Si}{Sj}}).

SID Axiom 2 (Commutativity). SID redundant information is invariant under any permutation σ of the sources, i.e., Red(S1, S2, S3) = Red(S_{σ(1)}, S_{σ(2)}, S_{σ(3)}).

SID Axiom 3 (Monotonicity). SID redundant information decreases monotonically as more sources are included, i.e., Red(S1, S2, S3) ≤ min_{i,j∈[3]} {Red(Si, Sj)}.

SID Axiom 4 (Self-redundancy). SID redundant information for two variables Si, Sj equals the mutual information, i.e., Red(Si, Sj) = I(Si; Sj).

Then, we revisit PID Axiom 1 and aim to propose an alternative axiom. In SID, the mutual information between any two variables and the third one can be decomposed similarly to two-source PID. That is, for any distinct i, j, k ∈ {1, 2, 3}, I(Si, Sj; Sk) splits into four SI-atoms (analogous to (1)):

I(Si, Sj; Sk) = Ψ_S({{i}{j}{k}}) + Ψ_S({{i}{k}}) + Ψ_S({{j}{k}}) + Ψ_S({{ij}{k}}), (8)

and the two-variable mutual information I(Si; Sk) corresponds to two of those atoms (analogous to (2)):

I(Si; Sk) = Ψ_S({{i}{j}{k}}) + Ψ_S({{i}{k}}). (9)

Recall that we have H(Sk) = I(Si, Sj; Sk) + H(Sk|Si, Sj) for any k ∈ [3], and Ψ_S({{k}}) represents the information provided by Sk alone, i.e., Ψ_S({{k}}) = H(Sk|Si, Sj). Therefore, we have

H(Sk) = I(Si, Sj; Sk) + H(Sk|Si, Sj) = Ψ_S({{i}{j}{k}}) + Ψ_S({{ij}{k}}) + Ψ_S({{j}{k}}) + Ψ_S({{i}{k}}) + Ψ_S({{k}}) = Σ_{β ⪯_S {{Sk}}} Ψ_S(β), (10)

where the second equality uses (8). Similarly, for any two variables {Si, Sk} ⊆ S, by combining H(Sk|Si) = H(Sk) − I(Si; Sk) with (9) and (10), we have

H(Sk|Si) = Ψ_S({{ij}{k}}) + Ψ_S({{j}{k}}) + Ψ_S({{k}}),

which, combined with the fact that H(Si, Sk) = H(Si) + H(Sk|Si) and with (10), shows that the joint entropy of any two variables is the sum of all atoms dominated by that pair:

H(Si, Sk) = Ψ_S({{i}{j}{k}}) + Ψ_S({{i}{k}}) + Ψ_S({{i}{j}}) + Ψ_S({{jk}{i}}) + Ψ_S({{i}}) + Ψ_S({{ij}{k}}) + Ψ_S({{j}{k}}) + Ψ_S({{k}}) = Σ_{i,k}, (11)

where Σ_{i,k} denotes the sum of all atoms corresponding to antichains that are dominated either by {{Si}} or by {{Sk}}.

However, when extending the decomposition to the joint entropy of all three variables, the SID framework deviates from WESP due to the presence of synergy-induced redundancy. This discrepancy can be demonstrated directly as follows. By combining the fact that H(Si, Sj, Sk) = H(Si, Sk) + H(Sj|Si, Sk) with Ψ_S({{j}}) = H(Sj|Si, Sk) and (11),

H(Si, Sj, Sk) = Σ_{i,k} + Ψ_S({{j}}) = Σ − Ψ_S({{ik}{j}}), (12)

where Σ is the sum of all 10 atoms Ψ_S(α), α ∈ A°(S). Thus, unlike PID Axiom 1, we find that the total entropy is less than the sum of its decomposed parts, by exactly Ψ_S({{ik}{j}}). In other words, WESP does not hold in SID due to this necessary exclusion. Motivated by (10), (11), and (12), we propose the following alternative to PID Axiom 1 within the SID framework.

SID Axiom 1. For any set of variables B ⊆ S with |B| ≤ 2, |S| ≤ 3, the entropy of B is decomposed as H(B) = Σ_B, and when |B| = |S| = 3 we have, for all distinct i, j, k ∈ [3],

H(S) = Σ − Ψ_S({{ij}{k}}), (13)

where Σ_B and Σ are as defined above.

Going back to the example in Lemma 3 with T = (S1, S2, S3), there are three equivalent ways to express H(T) via SID Axiom 1, corresponding to which term Ψ_S({{ij}{k}}) is excluded; i.e., for every distinct i, j, k ∈ [3] we have H(T) = Ψ_S({{i}{jk}}) + Ψ_S({{j}{ik}}) = 2, where the zero-valued SI-atoms are omitted.
The following lemma shows that only the redundancy atom needs to be defined; the remaining atoms are then uniquely determined via the linear constraints implied by SID Axiom 1. A proof is provided in Appendix B2, alongside explicit definitions of all SI-atoms given Red(S1, S2, S3) (see (25)).

Lemma 4. Let S = {S1, S2, S3} be a three-variable system in the SID framework, and let Ψ_123(·) denote its SI-atoms. Then, once the value of any one SI-atom is fixed, the values of all remaining SI-atoms are uniquely determined by SID Axiom 1.

Therefore, any definition of Red(S1, S2, S3) implies unique definitions of all SI-atoms that automatically satisfy SID Axiom 1. To satisfy Axioms 2, 3, and 4, we adopt an operational definition of redundancy that generalizes the Gács-Körner common information² [17].

Definition 5 (Operational Definition of Redundancy). For a system S1, S2, S3, the redundant information is defined as Red(Si, Sj) ≜ I(Si; Sj) for all distinct i, j ∈ {1, 2, 3}, and

Red(S1, S2, S3) ≜ max_Q {H(Q) : H(Q|Si) = 0, ∀i ∈ [3]},

where the maximization is taken over all variables Q defined over the Cartesian product of the alphabets of S1, S2, S3.

[Footnote 2: The Gács-Körner common information is defined as CI(S1, S2) ≜ max_Q H(Q), s.t. H(Q|S1) = H(Q|S2) = 0.]

The following lemma is proved in Appendix B3.

Lemma 5. Definition 5 satisfies SID Axioms 2, 3, and 4.

In the next section we show that new complications arise in systems with four or more variables. Moreover, we show that they cannot be resolved by summing over lattice subsets as done in this section, leaving a challenging open problem.

IV. LIMITATIONS OF ANTICHAIN-BASED INFORMATION DECOMPOSITION

In this section, it is shown that the partial summation approach taken in Section III is insufficient for systems of three or more source variables with a target that is not necessarily equal to the sources.

Theorem 1 (Subsystem inconsistency in lattice-based PID). For any set of functions Π^T_A : A(A) → R, ∀A ⊆ S, which satisfy PID Axioms 2, 3, 4, and Property 1, there is no way to redefine PID Axiom 1 so that (5) (Lemma 1) is satisfied. Specifically, there is no fixed subset O ⊂ A(S) such that

I(S; T) = Σ_{β∈O} Π^T_S(β) (14)

for all systems with |S| = 3 source variables and a target T.

We emphasize that Theorem 1 shows that for any PI-function Π_A, there is no set O satisfying (14) for all systems of three or more variables S and a target T. In cases where T = (S1, S2, S3) (or equivalently, where T =det (S1, S2, S3)), such a solution exists, as shown in Section III. To prove this theorem, we construct the following two systems (Fig. 4); the subsequent Lemma 6 is proved in Appendix B4.

a) System 1 (Ŝ1, Ŝ2, Ŝ3, T̂): Define six independent bits x1, x2, x4, x5, x7, x8 ~ Bernoulli(1/2), and three additional bits x3 = x1 ⊕ x2, x6 = x4 ⊕ x5, and x9 = x7 ⊕ x8. Define the system as Ŝ1 = (x1, x4, x7), Ŝ2 = (x2, x5, x8), Ŝ3 = (x3, x6, x9), and T̂ = (x1, x5, x9).

b) System 2 (S̄1, S̄2, S̄3, T̄): Let x1 and x2 be two independent Bernoulli(1/2) variables, and x3 = x1 ⊕ x2. Define S̄1 = x1, S̄2 = x2, S̄3 = x3, and set the target as T̄ = (x1, x2, x3) (this is the system from Lemma 3).

Fig. 4. Construction of (Ŝ1, Ŝ2, Ŝ3, T̂) and (S̄1, S̄2, S̄3, T̄).

Lemma 6. For System 1 (Ŝ = {Ŝ1, Ŝ2, Ŝ3}, T̂) and System 2 (S̄ = {S̄1, S̄2, S̄3}, T̄), (i) there is a bijection ψ : A(Ŝ) → A(S̄) with Π^T̂_Â(β̂) = Π^T̄_{ψ(Â)}(ψ(β̂)) for all atoms β̂ ∈ A(Â), Â ⊆ Ŝ, and for any family of Π functions satisfying PID Axioms 2, 3, 4, Property 1 and (5); and (ii) I(Ŝ; T̂) ≠ I(S̄; T̄).
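Condition (ii) of Lemma 6 is easy to verify numerically ahead of the formal proof in Appendix B4; since each target is a deterministic function of its sources, I(S; T) = H(T) in both systems (a sketch of ours; helper names are arbitrary):

```python
import numpy as np
from itertools import product

# Check Lemma 6(ii) under the stated constructions. Entropies in bits.

def H(counts):
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

# System 1: six free bits, three XOR bits; T_hat = (x1, x5, x9)
t1 = {}
for x1, x2, x4, x5, x7, x8 in product((0, 1), repeat=6):
    x9 = x7 ^ x8
    key = (x1, x5, x9)
    t1[key] = t1.get(key, 0) + 1

# System 2: T_bar = (x1, x2, x1 ^ x2)
t2 = {}
for x1, x2 in product((0, 1), repeat=2):
    key = (x1, x2, x1 ^ x2)
    t2[key] = t2.get(key, 0) + 1

print(H(t1), H(t2))  # 3.0 vs 2.0: identical atoms, different total information
```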
Given Lemma 6, Theorem 1 is immediate: since all atoms are identical, it follows that Σ_{β̂∈O} Π^T̂_Ŝ(β̂) = Σ_{β̂∈O} Π^T̄_{ψ(Ŝ)}(ψ(β̂)) for every O in the respective lattices, yet I(Ŝ; T̂) ≠ I(S̄; T̄). This result shows that PID based purely on a fixed collection of antichain-defined atoms will face insurmountable problems when extended beyond three variables. This part of the analysis is discussed further in the next section.

V. DISCUSSION

This work identifies a fundamental inconsistency of the PID framework. Section II shows that PID violates the WESP principle in multivariate systems. To address this, we introduced the SID framework (Section III), which resolves WESP inconsistencies in three-variable systems by redefining the summation rules for information atoms within a reduced antichain structure. However, Section IV demonstrates via another counterexample that antichain-based PID methods inherently fail to capture higher-order interactions, thus producing unresolvable inconsistency.

Conceptual Implications of SID: SID is a target-free commutative framework in which the target is equal to the sources. While similar to Partial Entropy Decomposition (PED) [4], [18], SID uses a reduced antichain lattice (Definition 3) that excludes antichains lacking singleton elements. The removed antichains correspond to the irreducible synergistic information atoms that cannot be provided by any single source alone; these are unnecessary when the target is a subset of the sources. This insight is inspired by the decomposition of the joint entropy of a two-variable system using Shannon's laws, i.e., H(X, Y) = H(X|Y) + H(Y), where each component of the decomposition comes directly from at least one of the source variables. This insight can be easily extended through the chain rule [24, Ch. 2] to multivariable systems. SID Axiom 1 is motivated by these observations; furthermore, it reveals that synergistic information is inherently symmetric and higher-order, leading to multivariate information interactions that fundamentally differ from classical set theory.

Beyond Three Variables – Limitations of Antichains: Section IV illustrates scenarios where two distinct systems share identical PID decompositions yet differ in total joint information. This discrepancy arises because synergy is fundamentally a holistic (or collective) higher-order symmetric relationship among variables: an n-order synergy (among n variables) contributes its information collectively to the system as a whole, appearing n − 1 times in the summation rule for joint entropy (SID Axiom 1). PID frameworks fail to capture this holistic nature of synergy, as they focus solely on mutual information (i.e., I(S; T)) and isolate information to single antichain atoms, while ignoring the collective synergistic structure. Thus, we suggest that future multivariate decomposition frameworks must: (1) decompose total system entropy rather than partial/mutual information; (2) capture higher-order synergistic structures across variable subsets; (3) move beyond traditional set-theoretic assumptions (e.g., WESP) to accommodate information's unique properties.

REFERENCES

[1] Paul L Williams and Randall D Beer. Nonnegative decomposition of multivariate information. arXiv preprint arXiv:1004.2515, 2010. [2] Jason Crampton and George Loizou. The completion of a poset in a lattice of antichains. International Mathematical Journal, 1(3):223-238, 2001. [3] Elad Schneidman, William Bialek, and Michael J Berry.
Synergy, redundancy, and independence in population codes. Journal of Neuroscience, 23(37):11539-11553, 2003. [4] Thomas F Varley, Maria Pope, Maria Grazia, Joshua, and Olaf Sporns. Partial entropy decomposition reveals higher-order information structures in human brain activity. Proceedings of the National Academy of Sciences, 120(30):e2300888120, 2023. [5] Borzoo Rassouli, Fernando E Rosas, and Deniz Gündüz. Data disclosure under perfect sample privacy. IEEE Transactions on Information Forensics and Security, 15:2012-2025, 2019. [6] Faisal Hamman and Sanghamitra Dutta. Demystifying local and global fairness trade-offs in federated learning using partial information decomposition. arXiv preprint arXiv:2307.11333, 2023. [7] Fernando E Rosas, Pedro AM Mediano, Henrik J Jensen, Anil K Seth, Adam B Barrett, Robin L Carhart-Harris, and Daniel Bor. Reconciling emergences: An information-theoretic approach to identify causal emergence in multivariate data. PLoS Computational Biology, 16(12):e1008289, 2020. [8] Virgil Griffith, Edwin KP Chong, Ryan G James, Christopher J Ellison, and James P Crutchfield. Intersection information based on common randomness. Entropy, 16(4):1985-2000, 2014. [9] Robin AA Ince. Measuring multivariate redundant information with pointwise common change in surprisal. Entropy, 19(7):318, 2017. [10] Nils Bertschinger, Johannes Rauh, Eckehard Olbrich, and Jürgen Jost. Shared information—new insights and problems in decomposing information in complex systems. In Proceedings of the European Conference on Complex Systems 2012, pages 251-269. Springer, 2013. [11] Malte Harder, Christoph Salge, and Daniel Polani. Bivariate measure of redundant information. Physical Review E, 87(1):012130, 2013. [12] Nils Bertschinger, Johannes Rauh, Eckehard Olbrich, Jürgen Jost, and Nihat Ay. Quantifying unique information. Entropy, 16(4):2161-2183, 2014. [13] Aobo Lyu, Andrew Clark, and Netanel Raviv. Explicit formula for partial information decomposition. In 2024 IEEE International Symposium on Information Theory (ISIT), pages 2329-2334. IEEE, 2024. [14] Artemy Kolchinsky. A novel approach to the partial information decomposition. Entropy, 24(3):403, 2022. [15] Hu Kuo Ting. On the amount of information. Theory of Probability & Its Applications, 7(4):439-447, 1962. [16] Raymond W Yeung. A new outlook on Shannon's information measures. IEEE Transactions on Information Theory, 37(3):466-474, 1991. [17] Peter Gács, János Körner, et al. Common information is far less than mutual information. Problems of Control and Information Theory, 2:149-162, 1973. [18] Robin AA Ince. The partial entropy decomposition: Decomposing multivariate entropy and mutual information via pointwise common surprisal. arXiv preprint arXiv:1702.01591, 2017. [19] Daniel Chicharro and Stefano Panzeri. Synergy and redundancy in dual decompositions of mutual information gain and information loss. Entropy, 19(2):71, 2017. [20] Fernando E Rosas, Pedro AM Mediano, Borzoo Rassouli, and Adam B Barrett. An operational information decomposition via synergistic disclosure. Journal of Physics A: Mathematical and Theoretical, 53(48):485001, 2020. [21] Joseph T Lizier, Benjamin Flecker, and Paul L Williams. Towards a synergy-based approach to measuring information modification. In 2013 IEEE Symposium on Artificial Life (ALIFE), pages 43-51. IEEE, 2013. [22] Johannes Rauh, Nils Bertschinger, Eckehard Olbrich, and Jürgen Jost. Reconsidering unique information: Towards a multivariate information decomposition.
In 2014 IEEE International Symposium on Information Theory, pages 2232-2236. IEEE, 2014. [23] Aobo Lyu, Bing Yuan, Ou Deng, Mingzhe Yang, Andrew Clark, and Jiang Zhang. System information decomposition. arXiv preprint arXiv:2306.08288, 2023. [24] Raymond W Yeung. A first course in information theory. Springer Science & Business Media, 2012.

APPENDIX

A. Comparison between SID and two-source PID

Fig. 5. Comparison between SID and two-source PID.

SID extends the scope of 2-source PID from the mutual information I(S∖Si; Si) to the joint entropy H(S) of the system. In SID (target-free), each SI-atom represents information that a certain combination of variables provides redundantly to the system as a whole. For instance, in Fig. 5(A), the SI-atom Ψ_123({{3}{12}}) represents information in S3 that is also contributed synergistically by S1 and S2. This directly corresponds to the PI-atom Π^{S3}_12({{12}}) in the PID view (Fig. 5(B)), where we have a target T = S3 and sources S1, S2.

B. Proofs of Main Results

To prove the lemmas in the paper, we first need the following corollary of Lemma 1.

Corollary 1. For the system (S1, S2, S3, T) and its subsystems (S1, S2, T) and (S1, T), the PI-atoms decomposed from the different subsystems satisfy

Π^T_1({{1}}) = Π^T_12({{1}{2}}) + Π^T_12({{1}}); (15)

similarly, for the system (S1, S2, S3, T) and (S1, S2, T),

Π^T_12({{1}{2}}) = Π^T_123({{1}{2}{3}}) + Π^T_123({{1}{2}}), (16)

Π^T_12({{1}}) = Π^T_123({{1}{3}}) + Π^T_123({{1}{23}}) + Π^T_123({{1}}). (17)

Proof. For the systems (S1, S2, T) and (S1, T), according to Lemma 1, let A = {S1, S2}, B = {S1}, and C = {S1}; then we have

Π^T_1({{1}}) = Π^T_12({{1}{2}}) + Π^T_12({{1}}). (18)

Similarly, for the systems (S1, S2, T) and (S2, T), we have Π^T_2({{2}}) = Π^T_12({{1}{2}}) + Π^T_12({{2}}), where the information atom contained in both Π^T_1({{1}}) and Π^T_2({{2}}) is Π^T_12({{1}{2}}).

Then, following the same approach, we focus on the systems (S1, S2, S3, T) and (S1, T), i.e., we let A = {S1, S2, S3}, B = {S1}, and C = {S1}. Then, by Lemma 1 we have

Π^T_1({{1}}) = Π^T_123({{1}{2}{3}}) + Π^T_123({{1}{2}}) + Π^T_123({{1}{3}}) + Π^T_123({{1}{23}}) + Π^T_123({{1}}). (19)

Similarly, for the systems (S1, S2, S3, T) and (S2, T), we have

Π^T_2({{2}}) = Π^T_123({{1}{2}{3}}) + Π^T_123({{1}{2}}) + Π^T_123({{2}{3}}) + Π^T_123({{2}{13}}) + Π^T_123({{2}}),

where the information atoms contained in both Π^T_1({{1}}) and Π^T_2({{2}}) are Π^T_123({{1}{2}{3}}) and Π^T_123({{1}{2}}). Hence, we have Π^T_12({{1}{2}}) = Π^T_123({{1}{2}{3}}) + Π^T_123({{1}{2}}), where {Π^T_12({{1}{2}})} and {Π^T_123({{1}{2}{3}}), Π^T_123({{1}{2}})} are the only atom(s) contained in both I(S1; T) (i.e., Π^T_1({{1}})) and I(S2; T) (i.e., Π^T_2({{2}})) under the decompositions with respect to (S1, S2, T) and (S1, S2, S3, T), respectively. Therefore, (16) is proved. Then, by (16), (18), and (19), we have

Π^T_12({{1}}) = Π^T_123({{1}{3}}) + Π^T_123({{1}{23}}) + Π^T_123({{1}}),

which is (17).

Using Corollary 1, we prove Lemmas 3, 4, 5, and 6 sequentially.

1) Proof of Lemma 3: Proof. In (S̄1, S̄2, S̄3, T̄), let S̄1 and S̄2 be two independent Bernoulli(1/2) variables, let S̄3 = S̄1 ⊕ S̄2, and let T̄ = (S̄1, S̄2, S̄3). Therefore, we have

I(T̄; S̄1, S̄2, S̄3) = 2. (20)

Our subsequent proof idea is to use Property 1 to obtain the values of all PI-atoms in any system with two sources and the target variable (i.e.,
(S̄1, S̄2, T̄), (S̄1, S̄3, T̄), and (S̄2, S̄3, T̄)), and then to show that their sum is greater than the joint mutual information of the system (S̄1, S̄2, S̄3, T̄). For simplicity, throughout the following proof we adopt the convention that all statements are considered for distinct i, j, k ∈ {1, 2, 3}.

Firstly, by Property 1 (Independent Identity and Remark 1), and since T̄ = (S̄1, S̄2, S̄3) =det (S̄i, S̄j), we have

Π^T̄_ij({{i}{j}}) = 0. (21)

Considering that Π^T̄_ij({{i}{j}}) = Π^T̄_123({{1}{2}{3}}) + Π^T̄_123({{i}{j}}), which is identical to (16), and by Axiom 3 (Monotonicity) and Lemma 2 (Nonnegativity), we have

Π^T̄_123({{1}{2}{3}}) = Π^T̄_123({{i}{j}}) = 0. (22)

Similarly, (2) implies that I(T̄; S̄i) = Π^T̄_ij({{i}{j}}) + Π^T̄_ij({{i}}), and since I(T̄; S̄i) = 1 and due to (21), it follows that Π^T̄_ij({{i}}) = 1, which by Corollary 1 (specifically (17)) equals

Π^T̄_123({{i}}) + Π^T̄_123({{i}{jk}}) + Π^T̄_123({{i}{k}}). (23)

Then, by (22) and (23), we have

Π^T̄_123({{i}}) + Π^T̄_123({{i}{jk}}) = 1, (24)

and hence,

I(T̄; S̄1, S̄2, S̄3) ≥ Π^T̄_123({{1}}) + Π^T̄_123({{1}{23}}) + Π^T̄_123({{2}}) + Π^T̄_123({{2}{13}}) + Π^T̄_123({{3}}) + Π^T̄_123({{3}{12}}) = 3,

which contradicts (20).

2) Proof of Lemma 4: Proof. We consider the linear constraints relating the following ten unknowns (the ten SI-atoms of a three-variable system). Define the following vector of atoms:

X = [Ψ_123({{1}{2}{3}}), Ψ_123({{1}{2}}), Ψ_123({{1}{3}}), Ψ_123({{2}{3}}), Ψ_123({{1}{23}}), Ψ_123({{2}{13}}), Ψ_123({{3}{12}}), Ψ_123({{1}}), Ψ_123({{2}}), Ψ_123({{3}})]^T,

and the following vector of entropies:

Y = [H(S1), H(S2), H(S3), H(S1, S2), H(S1, S3), H(S2, S3), H(S1, S2, S3), H(S1, S2, S3), H(S1, S2, S3)]^T.

Then, the nine constraints which arise from SID Axiom 1 read MX = Y, with

M =
[1 1 1 0 1 0 0 1 0 0]
[1 1 0 1 0 1 0 0 1 0]
[1 0 1 1 0 0 1 0 0 1]
[1 1 1 1 1 1 0 1 1 0]
[1 1 1 1 1 0 1 1 0 1]
[1 1 1 1 0 1 1 0 1 1]
[1 1 1 1 1 1 0 1 1 1]
[1 1 1 1 1 0 1 1 1 1]
[1 1 1 1 0 1 1 1 1 1].

Solving the system provides the following definition of all SI-atoms given Red(S1, S2, S3):

Ψ_123({{1}{2}{3}}) ≜ Red(S1, S2, S3),
Ψ_123({{i}{j}}) = H(Si) + H(Sj) − H(Si, Sj) − Red(S1, S2, S3),
Ψ_123({{i}{jk}}) = −H(S1) − H(S2) − H(S3) + H(S1, S2) + H(S1, S3) + H(S2, S3) − H(S1, S2, S3) + Red(S1, S2, S3),
Ψ_123({{i}}) = H(S1, S2, S3) − H(Sj, Sk), (25)

for all i, j, k such that {i, j, k} = {1, 2, 3}.

3) Proof of Lemma 5: Proof. SID Axiom 2 (Commutativity) is clearly satisfied by Definition 5, since the condition is symmetric with respect to the input variables; SID Axiom 4 (Self-redundancy) is also satisfied by the definition. SID Axiom 3 (Monotonicity) follows from Definition 5 since adding a new variable imposes additional constraints on the maximization:

Red(S1, S2, S3) = max_Q {H(Q) : H(Q|Si) = 0, ∀i ∈ [3]} ≤ max_Q {H(Q) : H(Q|Si) = 0, H(Q|Sj) = 0} = CI(Si, Sj),

for every distinct i and j in {1, 2, 3}, where the last equality follows from the definition CI(S1, S2) ≜ max_Q H(Q), s.t. H(Q|S1) = H(Q|S2) = 0 [17]. Moreover, since CI(Si, Sj) ≤ I(Si; Sj) [17], it follows that Red(S1, S2, S3) ≤ I(Si; Sj) for every distinct i, j in {1, 2, 3}; hence SID Axiom 3 follows.
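As a numerical cross-check of Lemma 4 and the closed forms (25) (a sketch of ours; helper names are arbitrary), one can evaluate them on the XOR system of Lemma 3, where Red(S1, S2, S3) = 0 under Definition 5 because only a constant Q satisfies H(Q|Si) = 0 for every i:

```python
import numpy as np
from itertools import product

# Evaluate (25) on the XOR system of Lemma 3 with Red(S1,S2,S3) = 0, and
# verify SID Axiom 1, i.e., (13). Entropies in bits; this system is fully
# symmetric, so each family in (25) takes a single value.

def H(joint, idx):
    counts = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in idx)
        counts[key] = counts.get(key, 0.0) + p
    p = np.array(list(counts.values()))
    return float(-(p * np.log2(p)).sum())

joint = {(x1, x2, x1 ^ x2): 0.25 for x1, x2 in product((0, 1), repeat=2)}
Red = 0.0
H1, H2, H3 = (H(joint, (i,)) for i in range(3))
H12, H13, H23 = H(joint, (0, 1)), H(joint, (0, 2)), H(joint, (1, 2))
H123 = H(joint, (0, 1, 2))

pair = H1 + H2 - H12 - Red                          # Psi({{i}{j}})
syn = -H1 - H2 - H3 + H12 + H13 + H23 - H123 + Red  # Psi({{i}{jk}})
single = H123 - H23                                 # Psi({{i}})

total = Red + 3 * pair + 3 * syn + 3 * single       # sum of all 10 atoms
print(pair, syn, single)   # 0.0 1.0 0.0
print(total - syn, H123)   # (13): Sigma - Psi({{ij}{k}}) = H(S) = 2.0
```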
4) Proof of Lemma 6: Proof. Recall that System 1 contains six independent Bernoulli(1/2) variables x1, x2, x4, x5, x7, x8 and three additional bits

x3 = x1 ⊕ x2, x6 = x4 ⊕ x5, x9 = x7 ⊕ x8; (26)

the source variables are Ŝ1 = (x1, x4, x7), Ŝ2 = (x2, x5, x8), Ŝ3 = (x3, x6, x9), the target is T̂ = (x1, x5, x9), and I(T̂; Ŝ1, Ŝ2, Ŝ3) = H(T̂) = H(x1, x5, x9) = 3.

Intuitively, observe that each one of x1, x5, x9 can be predicted from the other two sources via its respective XOR equation in (26). Therefore, given Ŝ2 and Ŝ3 one can determine x1. Thus, the information about x1 is uniquely shared by Ŝ2 and Ŝ3 in a synergistic way. This corresponds to the PID atom Π^T̂_123({{1}{23}}) = H(x1) = 1. Similarly, x5 = x4 ⊕ x6 implies that Ŝ1 and Ŝ3 together determine x5, yielding Π^T̂_123({{2}{13}}) = H(x5) = 1, and x9 = x7 ⊕ x8 implies that Ŝ1 and Ŝ2 together determine x9, yielding Π^T̂_123({{3}{12}}) = H(x9) = 1. All remaining atoms are zero in this system. A similar intuition for System 2 is given below.

To justify the above claims regarding System 1 in formal terms, we analyze the system by considering the following three sub-target variables separated from the target T̂: T̂1 = x1, T̂2 = x5, T̂3 = x9, where T̂ = (T̂1, T̂2, T̂3) and the three sub-targets are mutually independent. We operate in three steps: in the first we identify the zero PI-atoms, in the second we identify the nonzero ones, and in the third we combine the conclusions.

Step 1: Establishing zero PI-atoms. By PID Axiom 4 we have Red(Ŝi → T̂) = I(Ŝi; T̂), and since I(Ŝ2; T̂1) = I(Ŝ3; T̂1) = 0, it follows that

Π^T̂1_2({{2}}) = Π^T̂1_3({{3}}) = 0. (27)

Next, by PID Axiom 3 (Monotonicity), we have Π^T̂1_ij({{i}{j}}) ≤ Π^T̂1_i({{i}}), where Π^T̂1_i({{i}}) = 0 for every i ≠ 1 by (27). Therefore, since Lemma 2 (Nonnegativity) implies that Π^T̂1_ij({{i}{j}}) ≥ 0, we have

Π^T̂1_ij({{i}{j}}) = 0, ∀i ≠ j ∈ {1, 2, 3}. (28)

Similarly, applying the consistency from Lemma 1, i.e., (16), to any system (Ŝi, Ŝj, T̂1) and to (Ŝ1, Ŝ2, Ŝ3, T̂1), we have

Π^T̂1_ij({{i}{j}}) = Π^T̂1_123({{1}{2}{3}}) + Π^T̂1_123({{i}{j}}), (29)

and by Axiom 3 (Monotonicity) and Lemma 2 (Nonnegativity) we obtain

0 ≤ Π^T̂1_123({{1}{2}{3}}) ≤ Π^T̂1_ij({{i}{j}}) = 0, ∀i ≠ j,

where the last equality is (28); this implies that

Π^T̂1_123({{1}{2}{3}}) = 0. (30)

Then, by (29) and (28), we have

Π^T̂1_123({{i}{j}}) = 0, ∀i ≠ j. (31)

Finally, observe that H(T̂1, Ŝ1|Ŝ2, Ŝ3) = 0, i.e., the entire system entropy is provided by Ŝ2, Ŝ3. Therefore, all PID atoms that do not include either {Ŝ2}, or {Ŝ3}, or {Ŝ2, Ŝ3} are zero, i.e.,

Π^T̂1_123({{1}}) = Π^T̂1_123({{12}}) = Π^T̂1_123({{13}}) = Π^T̂1_123({{123}}) = Π^T̂1_123({{12}{13}}) = 0. (32)

Step 2: Determining the nonzero PI-atoms. First, we have I(T̂1; Ŝ1) = 1; then, using PID Axiom 4 (I(T̂1; Ŝ1) = Π^T̂1_1({{1}})) and Lemma 1, i.e., (5), we get

Π^T̂1_1({{1}}) = Π^T̂1_123({{1}{2}{3}}) + Π^T̂1_123({{1}{23}}) + Π^T̂1_123({{1}{3}}) + Π^T̂1_123({{1}{2}}) + Π^T̂1_123({{1}}) = 1. (33)

Combining this with (30), (31), and (32), we obtain Π^T̂1_123({{1}{23}}) = 1. The same reasoning applies symmetrically to (Ŝ1, Ŝ2, Ŝ3, T̂2) and (Ŝ1, Ŝ2, Ŝ3, T̂3), yielding Π^T̂2_123({{2}{13}}) = 1 and Π^T̂3_123({{3}{12}}) = 1.

Step 3: Final conclusion. Since T̂ = (T̂1, T̂2, T̂3) and the three sub-targets are independent, the final conclusion for the system (Ŝ1, Ŝ2, Ŝ3, T̂) is: Π^T̂_123({{i}{jk}}) = 1, for all distinct i, j, k ∈ [3].
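The independence and determinism claims used in Steps 1 and 2 can also be checked numerically under the stated construction (a sketch of ours; helper names are arbitrary): T̂1 = x1 is independent of Ŝ2 and of Ŝ3 individually, yet fully determined by the pair (Ŝ2, Ŝ3):

```python
import numpy as np
from itertools import product

# Check for System 1: I(T1;S2) = I(T1;S3) = 0 and H(T1|S2,S3) = 0,
# with T1 = x1, S2 = (x2,x5,x8), S3 = (x3,x6,x9). Entropies in bits.

def H(joint, idx):
    counts = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in idx)
        counts[key] = counts.get(key, 0.0) + p
    p = np.array(list(counts.values()))
    return float(-(p * np.log2(p)).sum())

joint = {}
p = 1.0 / 64
for x1, x2, x4, x5, x7, x8 in product((0, 1), repeat=6):
    x3, x6, x9 = x1 ^ x2, x4 ^ x5, x7 ^ x8
    key = (x1, (x2, x5, x8), (x3, x6, x9))  # outcome = (T1, S2, S3)
    joint[key] = joint.get(key, 0.0) + p

I_T1_S2 = H(joint, (0,)) + H(joint, (1,)) - H(joint, (0, 1))
I_T1_S3 = H(joint, (0,)) + H(joint, (2,)) - H(joint, (0, 2))
H_T1_given_S23 = H(joint, (0, 1, 2)) - H(joint, (1, 2))
print(I_T1_S2, I_T1_S3, H_T1_given_S23)  # 0.0 0.0 0.0
```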
Recall that System 2 contains two independent Bernoulli(1/2) vari- ables x1, x2 and their exclusive or x3 = x1 ⊕x2. Then, ¯S1 = x1, ¯S2 = x2, ¯S3 = x3, ¯T = (x1, x2, x3), and I( ¯T; ¯S1, ¯S2, ¯S3) = H( ¯T) = H(x1, x2, x3) = 2. Firstly, by Property 1 (Independent Identity and Remark 1), and since ¯T = ( ¯S1, ¯S2, ¯S3) det = ( ¯Si , ¯Sj ) for every distinct i, j ∈ [3], we have Π ¯T ij (  {i}{j} ) = 0. (34) Considering that Π ¯T ij (  {i}{j} ) = Π ¯T 123(  {1}{2}{3} ) + Π ¯T 123(  {i}{j} ), which is identical to (16), and by Axiom 3 (Monotonicity) and Lemma 2 (nonnegativity) we have Π ¯T 123(  {1}{2}{3} ) = Π ¯T 123(  {i}{j} ) = 0 (35) Similarly, (2) implies that I( ¯T; ¯Si ) = Π ¯T ij (  {i}{j} ) + Π ¯T ij (  {i} ), and moreover since I( ¯T; ¯Si ) = 1 and (34), it follows that Π ¯T ij (  {i} ) = 1, which by Corollary 1, (17), equals Π ¯T 123(  {i} ) + Π ¯T 123(  {i}{jk} ) + Π ¯T 123(  {i}{k} ). (36) Then, by (35) and (36), we have Π ¯T 123(  {i} ) + Π ¯T 123(  {i}{jk} ) = 1. (37) Similar to System 1, observe that H( ¯T, ¯S1| ¯S2, ¯S3) = 0, i.e., the entire system entropy is provided by ¯S2, ¯S3. Therefore, all PID atoms that do not include either { ¯S2}, or { ¯S3}, or { ¯S2, ¯S3} are zero, i.e., Π ¯T 123({{1}}) = Π ¯T 123({{12}}) = Π ¯T 123({{13}}) = Π ¯T 123({{123}}) = Π ¯T 123({{12}{13}}) = 0. (38) Taking (37) with i = 1, we obtain: Π ¯T 123({{1}{23}}) = 1. Next, observing that H( ¯T, ¯S2| ¯S1, ¯S3) = H( ¯T, ¯S3| ¯S1, ¯S2) = 0, it is readily verified (similar to (38)) that Π ¯T 123({{2}}) = Π ¯T 123({{12}}) = Π ¯T 123({{23}}) = Π ¯T 123({{123}}) = Π ¯T 123({{12}{23}}) = 0, and that Π ¯T 123({{3}}) = Π ¯T 123({{23}}) = Π ¯T 123({{13}}) = Π ¯T 123({{123}}) = Π ¯T 123({{23}{13}}) = 0. Finally, taking (37) with i = 2 and i = 3, it follows that Π ¯T 123({{2}{13}}) = Π ¯T 123({{3}{12}}) = 1. Clearly, System 1 and System 2 above satisfy condition (i) with respect to the bijection ψ which maps an antichain in System 1 to an antichain in System 2 having identical indices to all sources (e.g., ψ({{ ˆS1}} = {{ ¯S1}}). Condition (ii) is also clearly satisfied.
The Whole Is Less than the Sum of Parts: Subsystem Inconsistency in Partial Information Decomposition

Aobo Lyu⋆, Andrew Clark⋆, and Netanel Raviv†
⋆ Washington University in St. Louis, St. Louis, MO, USA
† Washington University in St. Louis, St. Louis, MO, USA

Abstract—Partial Information Decomposition (PID) was proposed by Williams and Beer in 2010 as a tool for analyzing fine-grained interactions between multiple random variables, and has since found numerous applications ranging from neuroscience to privacy. However, a unified theoretical framework remains elusive due to key conceptual and technical challenges. We identify and illustrate a crucial problem: PID violates the set-theoretic principle that the whole equals the sum of its parts (WESP). Through a counterexample in a three-variable system, we demonstrate how such violations naturally arise, revealing a fundamental limitation of current lattice-based PID frameworks. To address this issue, we introduce a new axiomatic framework, termed System Information Decomposition (SID), specifically tailored for three-variable systems. SID resolves the WESP violation by redefining the summation rules of decomposed information atoms based on synergistic relationships. However, we further show that for systems with four or more variables, no partial summation approach within the existing lattice-based structures can fully eliminate WESP inconsistencies. Our results thus highlight the inherent inadequacy of (antichain) lattice-based decompositions for general multivariate systems.

I. INTRODUCTION

Understanding how information is shared among multiple variables is a fundamental challenge in information theory. Partial Information Decomposition (PID), introduced by Williams and Beer [1], addresses this by decomposing the mutual information between a group of source variables and a target variable into distinct and independent information atoms, which reveal how the sources individually and collectively influence a target. The implementation of this framework is based on a structure called the redundancy lattice [2], which is inspired by and closely aligned with the principles of classical set theory. Since its conception, PID has been widely applied, providing deep insights into neural correlations [3] and cognitive processes [4], privacy and fairness in data disclosure [5], [6], causal emergence phenomena [7], and more.

However, despite these successful applications, the PID framework itself remains incomplete and faces fundamental challenges. Notably, despite extensive research efforts [8]-[13], no existing PID measure simultaneously satisfies all PID axioms and desired properties. We argue that this persistent difficulty arises primarily from conceptual flaws inherent to the PID framework. Specifically, PID implicitly relies on the assumption that the whole equals the sum of its parts (WESP), a set-theoretic principle which might be violated by information measures. Such violations have previously been noted [14] through the perspective of the inclusion-exclusion principle [15], [16]. In this paper, we point out that enforcing WESP overlooks higher-order synergistic effects, resulting in inconsistent behavior in subsystems, where the sum of PID components differs from the total information of the system. In light of these challenges, there is a need for a new framework that can explore the rules that information decomposition should follow without relying on the WESP principle.
In this direction, this paper makes three main contributions: (i) We reveal an inherent inconsistency in PID by explicitly demonstrating WESP violation with a three-source counterexample. (ii) We introduce System Information Decomposition (SID), a novel axiomatic framework for three-source systems with a target equal to the sources, resolving PID's WESP inconsistency by redefining information summation rules within a simplified antichain structure. Then, it is shown that an extension of the well-known Gács-Körner common information measure [17] satisfies this axiomatic framework. (iii) We demonstrate that for a general multivariate system, no antichain lattice-based summation method can fully resolve the WESP inconsistency, by presenting two systems with identical information atoms but different mutual information due to different synergistic dynamics. Our results advocate for a shift beyond antichain (set-partition) logic toward new structural foundations capable of capturing higher-order synergistic relationships.

The remainder of the paper is structured as follows. Section II reviews PID and presents our counterexample illustrating WESP violations. Section III proposes the SID axiomatic framework for a three-variable system and an operational definition of redundancy, demonstrating how SID resolves these inconsistencies. Section IV extends the analysis to four-variable systems, showing why the antichain-based approach is insufficient. Finally, Section V discusses the implications of the work and future directions.

II. PARTIAL INFORMATION DECOMPOSITION

Williams and Beer [1] introduced the Partial Information Decomposition (PID) framework to decompose multivariate information axiomatically. Consider sources S1, S2 and target T: the mutual information I(S1, S2; T) decomposes into redundant, unique, and synergistic atoms (see Figure 1). The redundancy Red(S1, S2 → T) is shared information; the unique atom Un(S1 → T|S2) is information exclusively from S1 (similarly Un(S2 → T|S1)); and the synergy Syn(S1, S2 → T) emerges only from joint observation of S1 and S2.

Fig. 1. The structure of PID with two source variables, i.e., (1) and (2).

Together, we have that for the whole system (S1, S2, T),

I((S1, S2); T) = Red(S1, S2 → T) + Syn(S1, S2 → T) + Un(S1 → T|S2) + Un(S2 → T|S1),   (1)

and for each subsystem (S1, T) and (S2, T),

I(S1; T) = Red(S1, S2 → T) + Un(S1 → T|S2), and
I(S2; T) = Red(S1, S2 → T) + Un(S2 → T|S1).   (2)

For general systems with source variables S = {S_1, ..., S_n} and target T, PID uses the redundancy lattice A(S) [1], [2], which is the set of antichains formed from the power set of S under set inclusion with a natural order ⪯_S.

Definition 1 (PID Redundancy Lattice). For the set of source variables S, the set of antichains is:

A(S) = {α ∈ P_1(P_1(S)) : ∀A_i, A_j ∈ α, A_i ⊄ A_j},   (3)

where P_1(S) = P(S) \ {∅} is the set of all nonempty subsets of S, and for every α, β ∈ A(S), we say that β ⪯_S α if for every A ∈ α there exists B ∈ β such that B ⊆ A. For ease of exposition, we denote elements of A(S) using their indices (e.g., write {{S1}{S2}} as {{1}{2}}).

Based on Definition 1, the value of the PI-atoms can be expressed as a partial information function Π^T_A.

Definition 2 (Partial Information Decomposition Framework). Let S be a collection of sources and let T be the target. The set of PI-atoms is defined as a family of partial information functions (PI-functions) Π^T_A : A(A) → R for all A ⊆ S.
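For intuition about the size and shape of this lattice, A(S) can be enumerated directly. The brute-force Python sketch below is ours, for illustration only; it recovers the 18 antichains of the three-source redundancy lattice (cf. Fig. 2).

from itertools import combinations

def nonempty_subsets(sources):
    # P_1(S): all nonempty subsets of the source index set.
    return [frozenset(c) for r in range(1, len(sources) + 1)
            for c in combinations(sources, r)]

def antichains(sources):
    # A(S) from Definition 1: families of nonempty subsets in which
    # no member is properly contained in another.
    subsets = nonempty_subsets(sources)
    lattice = []
    for r in range(1, len(subsets) + 1):
        for family in combinations(subsets, r):
            if all(not (a < b or b < a) for a, b in combinations(family, 2)):
                lattice.append(set(family))
    return lattice

print(len(antichains([1, 2, 3])))  # 18 antichains for three sources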
Intuitively, for every α ∈ A(A), the atom Π^T_A(α) measures the amount of information provided to T by the sets in the antichain α that is not provided by any β ⪯_A α. For simplicity we write Π^T_{i...}(·) for Π^T_{{S_i,...}}(·), e.g., Π^T_12({{1}}) = Π^T_{{S1,S2}}({{S1}}). Note that in the case S = {S1, S2}, Definition 2 reduces to

Red(S1, S2 → T) = Π^T_12({{1}{2}}),   Un(S1 → T|S2) = Π^T_12({{1}}),
Syn(S1, S2 → T) = Π^T_12({{12}}),   Un(S2 → T|S1) = Π^T_12({{2}}),

recovering the terms in (1) and (2).

Fig. 2. The structure of PID with 3 source variables.

For general systems, PID requires the following mutual information constraints [1] (i.e., the equivalent of (1) and (2)).

PID Axiom 1. For any subsets A, B of sources S with A ⊆ B, the sum of PI-atoms decomposed from system B satisfies

I(A; T) = Σ_{β ⪯_B {A}} Π^T_B(β),   (4)

where {A} is the antichain with a single element A.

It is worth noting that (4) constrains the consistency of the sum of PI-atoms decomposed from different subsystems [18]-[21] by taking different sets B on the right-hand side.

Lemma 1 (Subsystem Consistency). For A, B, C ⊆ S such that C ⊆ A ∩ B, let Π be as in Definition 2 and satisfy PID Axiom 1. Then,

Σ_{β ⪯_A {C}} Π^T_A(β) = Σ_{β ⪯_B {C}} Π^T_B(β).   (5)

Consider the system in Figure 1. For the atoms decomposed from the system (S1, T), the quantity Π^T_1({{1}}) reflects the (redundant) information that S1 provides about T. If we add a source S2 to this system, this information will be further decomposed into the redundant information from S1, S2 and the unique information only from S1 but not S2. Below are three axioms regarding the redundant information Red(S1, ..., SN → T), which is reflected by the PI-atom Π^T_S({{1}}...{N}}), for any multivariate system S.

PID Axiom 2 (Commutativity¹). Redundant information is invariant under any permutation σ of the sources, i.e., Red(S1, ..., SN → T) = Red(S_σ(1), ..., S_σ(N) → T).

PID Axiom 3 (Monotonicity). Redundant information decreases monotonically as more sources are included, i.e., Red(S1, ..., SN, SN+1 → T) ≤ Red(S1, ..., SN → T).

¹We note that in general, commutativity is an axiom that follows from the intuitive understanding of redundancy. Yet, when using the antichain perspective, it is trivially satisfied since the respective atom Π^T_S({{1}{2}...{N}}) depends on the antichain of all singletons, which is not affected by reordering.

PID Axiom 4 (Self-redundancy). Redundant information for a single source variable S_i equals the mutual information, i.e., Red(S_i → T) = I(S_i; T).

Axiom 3 also implies another lemma, as follows.

Lemma 2 (Nonnegativity). Partial Information Decomposition satisfies Red(S1, ..., SN → T) ≥ 0.

Proof. Add a constant variable S_∅ to the sources and obtain Red(A → T) ≥ Red(A, S_∅ → T) = 0.

Besides, another intuitive property is often considered [9].

Property 1 (Independent Identity). If I(S1; S2) = 0 and T = (S1, S2), then Red(S1, S2 → T) = 0.

Remark 1. From an information perspective, there is no difference between "T = (S1, S2)" and "H(T|S1, S2) = H(S1, S2|T) = 0," which we denote by T =det (S1, S2) for brevity.

However, this framework inherently becomes contradictory for three or more source variables, as shown in [22, Thm. 2]. To set the stage for our results, we briefly recall this finding via the following system and Lemma 3, which is proved in Appendix B1.

System (S̄1, S̄2, S̄3, T̄): Let x1 and x2 be two independent Bernoulli(1/2) variables, and x3 = x1 ⊕ x2. Define S̄1 = x1, S̄2 = x2, S̄3 = x3, and target T̄ = (x1, x2, x3).

Lemma 3 ([22]).
For the system (S̄1, S̄2, S̄3, T̄), any candidate PID measure Π^{T̄}_A : A(A) → R, ∀A ⊆ S̄, that satisfies PID Axioms 2, 3, 4, Property 1, and PID Axiom 1 for any |A| ≤ 2, violates PID Axiom 1 for |A| = 3, i.e.,

I(T̄; S̄) < Σ_{β ⪯_S̄ {S̄}} Π^{T̄}_{S̄}(β).

The redundancy lattice of PID was built under the assumption that mutual information could be partitioned into disjoint atoms summing to the total, i.e., WESP. However, Lemma 3 demonstrates that this assumption is incompatible with synergistic phenomena in multivariate systems. This phenomenon is inherent to multivariate information measures: the sum of synergistic components can exceed the total entropy [23], which is described rigorously in the following observation.

Observation 1 (Synergistic Phenomena [23]). For any system X = {X_1, ..., X_N}, denote by Syn(X \ X_i → X_i) the information that can only be provided jointly by the sources X \ X_i to the target X_i. The sum of all such terms can be greater than the joint entropy of the system, i.e., in general it is possible that

Σ_{i ∈ {1,...,N}} Syn(X \ X_i → X_i) > H(X).   (6)

In particular, in the system given in Lemma 3, the l.h.s. of (6) equals 3, whereas the r.h.s. equals 2 (these values are verified numerically in the sketch later in this section).

In summary, PID's reliance on the WESP principle (Axiom 1) is incompatible with the existence of synergy. Observation 1 quantifies this incompatibility: the sum of synergistic components can exceed the whole's entropy. Thus, to fix our decomposition, we must relax or modify Axiom 1. In the next section, we propose a new framework addressing this limitation explicitly for three-variable systems.

III. THREE-VARIABLE SYSTEM INFORMATION DECOMPOSITION

In this section we resolve the issue from Section II for the case of three source variables S1, S2, S3, and T = (S1, S2, S3). When T = (S1, S2, S3), the PID of I(T; S1, S2, S3) amounts to a decomposition of the joint entropy H(S1, S2, S3). We define a new framework in which Axiom 1 is replaced by a summation over a subset of the atoms, in a way which circumvents the overcounting in Observation 1. We make use of the following lattice.

Definition 3 (SID Half Lattice). For S = {S_1, S_2, S_3}, let

A'(S) = {α ∈ P_1(P_1(S)) : ∃A_k ∈ α, |A_k| = 1, and ∀A_i, A_j ∈ α, A_i ⊄ A_j}   (7)
      = { {{1}{2}{3}}, {{1}{2}}, {{1}{3}}, {{2}{3}}, {{1}{23}}, {{2}{13}}, {{3}{12}}, {{1}}, {{2}}, {{3}} },

where P_1(S) and ⪯_S are as in Definition 1.

For every A ⊆ S and every α ∈ A'(A), our aim is to measure the information contributed by every subset in α to the whole system A, which is not already accounted for by any antichain β ∈ A'(A) such that β ⪯_A α.

Definition 4 (System Information Decomposition Framework). Let S be a collection of variables. The set of system information atoms (SI-atoms) is defined as a family of system information functions Ψ_A : A'(A) → R for all A ⊆ S.

The SID half lattice can be understood as a refinement of the PID redundancy lattice for three sources (Definition 1), obtained by removing all antichains that do not contain any singleton source (see Figure 3(B)). See Appendix A for further comparison between SID and two-source PID.

Fig. 3. Comparison between SID and three-source PID. (A) Three-variable SID. (B) Three-source PID, where the antichains in bold contain at least one singleton source, whose structure is consistent with SID.

The original PID Axioms 2, 3, and 4 are required for SID as well. PID Axiom 1, which leads to the inconsistency demonstrated in Lemma 3, will be modified shortly.
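Before proceeding, the XOR system driving Lemma 3 and Observation 1 is small enough to verify by machine. The minimal Python sketch below is ours, not part of the paper (entropies in bits, uniform distribution); it confirms I(T̄; S̄1, S̄2, S̄3) = 2, each I(T̄; S̄_i) = 1, and that every pair of sources fully determines T̄, so the three synergistic contributions in (6) sum to 3 > 2.

from itertools import product
from math import log2
from collections import Counter

def H(f, outcomes):
    # Entropy (bits) of f(outcome) when outcomes are equally likely.
    n, counts = len(outcomes), Counter(f(o) for o in outcomes)
    return -sum(c / n * log2(c / n) for c in counts.values())

def I(f, g, outcomes):
    # Mutual information I(f; g) = H(f) + H(g) - H(f, g).
    return H(f, outcomes) + H(g, outcomes) - H(lambda o: (f(o), g(o)), outcomes)

# System of Lemma 3: x1, x2 independent fair bits, x3 = x1 XOR x2.
outcomes = [(a, b, a ^ b) for a, b in product((0, 1), repeat=2)]
T = lambda o: o                              # target is the whole triple
S = [lambda o, i=i: o[i] for i in range(3)]  # the three sources

print(I(T, T, outcomes))                     # I(T; S1,S2,S3) = 2.0
print([I(T, Si, outcomes) for Si in S])      # each I(T; S_i) = 1.0
# Each pair of sources determines T (hence the remaining bit), so the
# three synergistic contributions of Observation 1 sum to 3 > 2:
for i, j in ((1, 2), (0, 2), (0, 1)):
    pair = lambda o, i=i, j=j: (o[i], o[j])
    print(H(T, outcomes) - I(T, pair, outcomes))  # H(T | S_i, S_j) = 0.0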
Similar to PID, we define SID redundant information as Red(S1, S2, S3) = Ψ_S({{S1}{S2}{S3}}), and for all distinct i, j, let Red(S_i, S_j) = Ψ_{{S_i,S_j}}({{S_i}{S_j}}).

SID Axiom 2 (Commutativity). SID redundant information is invariant under any permutation σ of the sources, i.e., Red(S1, S2, S3) = Red(S_σ(1), S_σ(2), S_σ(3)).

SID Axiom 3 (Monotonicity). SID redundant information decreases monotonically as more sources are included, i.e., Red(S1, S2, S3) ≤ min_{i,j ∈ [3]} Red(S_i, S_j).

SID Axiom 4 (Self-redundancy). SID redundant information for two variables S_i, S_j equals the mutual information, i.e., Red(S_i, S_j) = I(S_i; S_j).

Then, we revisit PID Axiom 1 and aim to propose an alternative axiom. In SID, the mutual information between any two variables and the third one can be decomposed similarly to two-source PID. That is, for any distinct i, j, k ∈ {1, 2, 3}, I(S_i, S_j; S_k) splits into four SI-atoms (analogous to (1)):

I(S_i, S_j; S_k) = Ψ_S({{i}{j}{k}}) + Ψ_S({{i}{k}}) + Ψ_S({{j}{k}}) + Ψ_S({{ij}{k}}),   (8)

and the two-variable mutual information I(S_i; S_k) corresponds to two of those atoms (analogous to (2)):

I(S_i; S_k) = Ψ_S({{i}{j}{k}}) + Ψ_S({{i}{k}}).   (9)

Recall that we have H(S_k) = I(S_i, S_j; S_k) + H(S_k|S_i, S_j) for any k ∈ [3], and Ψ_S({{k}}) represents the information provided by S_k alone, i.e., Ψ_S({{k}}) = H(S_k|S_i, S_j). Therefore, by (8) we have

H(S_k) = I(S_i, S_j; S_k) + H(S_k|S_i, S_j)
       = Ψ_S({{i}{j}{k}}) + Ψ_S({{ij}{k}}) + Ψ_S({{j}{k}}) + Ψ_S({{i}{k}}) + Ψ_S({{k}})
       = Σ_{β ⪯_S {S_k}} Ψ_S(β).   (10)

Similarly, for any two variables {S_i, S_k} ⊆ S, by combining H(S_k|S_i) = H(S_k) − I(S_i; S_k) with (9) and (10), we have

H(S_k|S_i) = Ψ_S({{ij}{k}}) + Ψ_S({{j}{k}}) + Ψ_S({{k}}),

which, combined with the fact that H(S_i, S_k) = H(S_i) + H(S_k|S_i) and with (10), shows that the joint entropy of any two variables is the sum of all atoms dominated by that pair:

H(S_i, S_k) = Ψ_S({{i}{j}{k}}) + Ψ_S({{i}{k}}) + Ψ_S({{i}{j}}) + Ψ_S({{jk}{i}}) + Ψ_S({{i}}) + Ψ_S({{ij}{k}}) + Ψ_S({{j}{k}}) + Ψ_S({{k}}) = Σ_{i,k},   (11)

where Σ_{i,k} is the summation of all atoms corresponding to antichains that are dominated either by {{S_i}} or by {{S_k}}.

However, when extending the decomposition to the joint entropy of all three variables, the SID framework deviates from WESP due to the presence of synergy-induced redundancy. This discrepancy can be directly demonstrated as follows. By combining the fact that H(S_i, S_j, S_k) = H(S_i, S_k) + H(S_j|S_i, S_k) with Ψ_S({{j}}) = H(S_j|S_i, S_k) and (11),

H(S_i, S_j, S_k) = Σ_{i,k} + Ψ_S({{j}}) = Σ − Ψ_S({{ik}{j}}),   (12)

where Σ is the summation of all 10 atoms Ψ_S(α), α ∈ A'(S). Thus, unlike PID Axiom 1, we find that the total entropy is less than the sum of its decomposed parts by exactly Ψ_S({{ik}{j}}). In other words, WESP does not hold in SID due to this necessary exclusion. Motivated by (10), (11), and (12), we propose the following alternative to PID Axiom 1 within the SID framework.

SID Axiom 1. For any set of variables B ⊆ S with |B| ≤ 2, |S| ≤ 3, the entropy of B is decomposed as H(B) = Σ_B, and when |B| = |S| = 3 we have, for all distinct i, j, k ∈ [3],

H(S) = Σ − Ψ_S({{ij}{k}}),   (13)

where Σ_B and Σ are as defined above.

Going back to the example in Lemma 3 with T = (S1, S2, S3), there are three equivalent ways to express H(T) via Axiom 1, corresponding to which Ψ_S({{ij}{k}}) term is excluded, i.e., for every distinct i, j, k ∈ [3] we have H(T) = Ψ_S({{i}{jk}}) + Ψ_S({{j}{ik}}) = 2, where the zero-valued SI-atoms are omitted.
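To make (13) concrete, the SI-atoms of the XOR system can be computed from entropies alone via the closed-form expressions derived in Appendix B2 (eq. (25)), taking Red(S1, S2, S3) = 0 for this system (per the operational definition introduced next in Definition 5, no nontrivial variable is a function of all three sources). The numerical sketch below is ours, not the paper's code:

# Entropies (in bits) for the XOR system: S1, S2 fair coins, S3 = S1 XOR S2.
H1 = H2 = H3 = 1.0
H12 = H13 = H23 = 2.0
H123 = 2.0
Red = 0.0  # Red(S1,S2,S3): no nontrivial common function of all three sources

# SI-atoms via the closed form (25) in Appendix B2.
atom_triple = Red                                              # Ψ({{1}{2}{3}})
atom_pair   = H1 + H2 - H12 - Red                              # Ψ({{i}{j}}), symmetric here
atom_syn    = -(H1 + H2 + H3) + H12 + H13 + H23 - H123 + Red   # Ψ({{i}{jk}})
atom_solo   = H123 - H23                                       # Ψ({{i}}), symmetric here

total = atom_triple + 3 * atom_pair + 3 * atom_syn + 3 * atom_solo  # Σ over all 10 atoms
print(atom_syn)          # 1.0: each synergy-type atom carries one bit
print(total - atom_syn)  # 2.0 = H(S1,S2,S3), matching SID Axiom 1, eq. (13)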
The following lemma shows that only the redundancy atom needs to be defined; the remaining atoms are then uniquely determined via linear constraints implied by Axiom 1. A proof is provided in Appendix B2, alongside explicit definitions of all SI-atoms given Red(S1, S2, S3) (see (25)).

Lemma 4. Let S = {S_1, S_2, S_3} be a three-variable system in the SID framework, and let Ψ_123(·) denote its SI-atoms. Then, once the value of any one SI-atom is fixed, the values of all remaining SI-atoms are uniquely determined by SID Axiom 1.

Therefore, any definition of Red(S1, S2, S3) implies unique definitions of all SI-atoms that automatically satisfy SID Axiom 1. To satisfy Axioms 2, 3, and 4, we adopt an operational definition of redundancy that generalizes the Gács-Körner common information² [17].

Definition 5 (Operational Definition of Redundancy). For a system S1, S2, S3, the redundant information is defined as Red(S_i, S_j) ≜ I(S_i; S_j) for all distinct i, j ∈ {1, 2, 3}, and

Red(S1, S2, S3) ≜ max_Q {H(Q) : H(Q|S_i) = 0, ∀i ∈ [3]},

where the maximization is taken over all variables Q defined over the Cartesian product of the alphabets of S1, S2, S3.

The following lemma is proved in Appendix B3.

Lemma 5. Definition 5 satisfies SID Axioms 2, 3, and 4.

²The Gács-Körner common information is defined as CI(S_1, S_2) ≜ max_Q H(Q), s.t. H(Q|S_1) = H(Q|S_2) = 0.

In the next section we detail new complications that arise in systems with four or more variables. Moreover, we show that they cannot be resolved by summing over lattice subsets as done in this section, leaving a challenging open problem.

IV. LIMITATIONS OF ANTICHAIN-BASED INFORMATION DECOMPOSITION

In this section, it is shown that the partial summation approach taken in Section III is insufficient for systems with three or more source variables and a target that is not necessarily equal to the sources.

Theorem 1 (Subsystem inconsistency in lattice-based PID). For any set of functions Π^T_A : A(A) → R, ∀A ⊆ S, which satisfy PID Axioms 2, 3, 4, and Property 1, there is no way to redefine PID Axiom 1 so that (5) (Lemma 1) is satisfied. Specifically, there is no fixed subset O ⊂ A(S) such that

I(S; T) = Σ_{β ∈ O} Π^T_S(β)   (14)

for all systems with |S| = 3 source variables and a target T.

We emphasize that Theorem 1 shows that for any PI-function Π_A, there is no set O satisfying (14) for all systems of three or more variables S and a target T. In cases where T = (S1, S2, S3) (or equivalently, where T =det (S1, S2, S3)), such a solution exists, as shown in Section III. To prove this theorem, we construct the following two systems (Fig. 4); the subsequent Lemma 6 is proved in Appendix B4.

a) System 1 (Ŝ1, Ŝ2, Ŝ3, T̂): Define six independent bits x1, x2, x4, x5, x7, x8 ~ Bernoulli(1/2), and three additional bits x3 = x1 ⊕ x2, x6 = x4 ⊕ x5, and x9 = x7 ⊕ x8. Define the system as Ŝ1 = (x1, x4, x7), Ŝ2 = (x2, x5, x8), Ŝ3 = (x3, x6, x9), and T̂ = (x1, x5, x9).

b) System 2 (S̄1, S̄2, S̄3, T̄): Let x1 and x2 be two independent Bernoulli(1/2) variables, and x3 = x1 ⊕ x2. Define S̄1 = x1, S̄2 = x2, S̄3 = x3, and set the target as T̄ = (x1, x2, x3) (this is the system from Lemma 3).

Fig. 4. Construction of (Ŝ1, Ŝ2, Ŝ3, T̂) and (S̄1, S̄2, S̄3, T̄).

Lemma 6. For System 1 (Ŝ = {Ŝ1, Ŝ2, Ŝ3}, T̂) and System 2 (S̄ = {S̄1, S̄2, S̄3}, T̄), (i) there is a bijection ψ : A(Ŝ) → A(S̄) with Π^{T̂}_{Â}(β̂) = Π^{T̄}_{ψ(Â)}(ψ(β̂)) for all atoms β̂ ∈ A(Ŝ), Â ⊆ Ŝ, and for any family of Π functions satisfying PID Axioms 2, 3, 4, Property 1 and (5); and (ii) I(Ŝ; T̂) ≠ I(S̄; T̄).
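Condition (ii) is easy to verify numerically. The following sketch is ours (entropies in bits, uniform distribution over the listed outcomes); it computes I(Ŝ; T̂) = 3 for System 1 and I(S̄; T̄) = 2 for System 2.

from itertools import product
from math import log2
from collections import Counter

def entropy(values):
    # Entropy (bits) of a list of equally likely outcomes (with repeats).
    n, counts = len(values), Counter(values)
    return -sum(c / n * log2(c / n) for c in counts.values())

def mi(system):
    # I(S; T) = H(S) + H(T) - H(S, T) over (source-tuple, target) pairs.
    S = [s for s, _ in system]
    T = [t for _, t in system]
    return entropy(S) + entropy(T) - entropy(list(zip(S, T)))

# System 1: six fair bits plus three XOR bits; target (x1, x5, x9).
sys1 = []
for x1, x2, x4, x5, x7, x8 in product((0, 1), repeat=6):
    x3, x6, x9 = x1 ^ x2, x4 ^ x5, x7 ^ x8
    sys1.append((((x1, x4, x7), (x2, x5, x8), (x3, x6, x9)), (x1, x5, x9)))

# System 2: two fair bits and their XOR; target (x1, x2, x3).
sys2 = [(((x1, x2, x1 ^ x2)), (x1, x2, x1 ^ x2))
        for x1, x2 in product((0, 1), repeat=2)]

print(mi(sys1))  # 3.0 = I(S-hat; T-hat)
print(mi(sys2))  # 2.0 = I(S-bar; T-bar), so condition (ii) holds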
Given Lemma 6, Theorem 1 is immediate: since all atoms are identical, it follows that Σ_{β̂ ∈ O} Π^{T̂}_{Ŝ}(β̂) = Σ_{β̂ ∈ O} Π^{T̄}_{ψ(Ŝ)}(ψ(β̂)) for every O in the respective lattices, yet I(Ŝ; T̂) ≠ I(S̄; T̄). This result shows that PID based purely on a fixed collection of antichain-defined atoms will face insurmountable problems when extended beyond three variables. This part of the analysis is discussed in the next section.

V. DISCUSSION

This work identifies a fundamental inconsistency of the PID framework. Section II shows that PID violates the WESP principle in multivariate systems. To address this, we introduced the SID framework (Section III), which resolves WESP inconsistencies in three-variable systems by redefining the summation rules for information atoms within a reduced antichain structure. However, Section IV demonstrates via another counterexample that antichain-based PID methods inherently fail to capture higher-order interactions, thus producing unresolvable inconsistency.

Conceptual Implications of SID: SID is a target-free commutative framework in which the target is equal to the sources. While similar to Partial Entropy Decomposition (PED) [4], [18], SID uses a reduced antichain lattice (Definition 3) that excludes antichains lacking singleton elements. The removed antichains correspond to the irreducible synergistic information atoms that cannot be provided by any single source alone; these are unnecessary when the target is a subset of the sources. This insight is inspired by the decomposition of the joint entropy of a two-variable system using Shannon's laws, i.e., H(X, Y) = H(X|Y) + H(Y), where each component of the decomposition comes directly from at least one of the source variables. This insight can be easily extended through the chain rule [24, Ch. 2] to multivariable systems. SID Axiom 1 is motivated by these observations, and furthermore, it reveals that synergistic information is inherently symmetric and higher-order, leading to multivariate information interactions that fundamentally differ from classical set theory.

Beyond Three Variables - Limitations of Antichains: Section IV illustrates scenarios where two distinct systems share identical PID decompositions yet differ in total joint information. This discrepancy arises because synergy is fundamentally a holistic (or collective) higher-order symmetric relationship among variables: an n-order synergy (among n variables) contributes its information collectively to the system as a whole, appearing n − 1 times in the summation rule for joint entropy (SID Axiom 1). PID frameworks fail to capture this holistic nature of synergy, as they focus solely on mutual information (i.e., I(S; T)) and isolate information to single antichain atoms, while ignoring the collective synergistic structure. Thus, we suggest that future multivariate decomposition frameworks must: (1) decompose total system entropy rather than partial/mutual information; (2) capture higher-order synergistic structures across variable subsets; and (3) move beyond traditional set-theoretic assumptions (e.g., WESP) to accommodate information's unique properties.

REFERENCES

[1] Paul L Williams and Randall D Beer. Nonnegative decomposition of multivariate information. arXiv preprint, 2010.
[2] Jason Crampton and George Loizou. The completion of a poset in a lattice of antichains. International Mathematical Journal, 1(3):223-238, 2001.
[3] Elad Schneidman, William Bialek, and Michael J Berry. Synergy, redundancy, and independence in population codes. Journal of Neuroscience, 23(37):11539-11553, 2003.
[4] Thomas F Varley, Maria Pope, Maria Grazia Puxeddu, Joshua Faskowitz, and Olaf Sporns. Partial entropy decomposition reveals higher-order information structures in human brain activity. Proceedings of the National Academy of Sciences, 120(30):e2300888120, 2023.
[5] Borzoo Rassouli, Fernando E Rosas, and Deniz Gündüz. Data disclosure under perfect sample privacy. IEEE Transactions on Information Forensics and Security, 15:2012-2025, 2019.
[6] Faisal Hamman and Sanghamitra Dutta. Demystifying local and global fairness trade-offs in federated learning using partial information decomposition. arXiv preprint, 2023.
[7] Fernando E Rosas, Pedro AM Mediano, Henrik J Jensen, Anil K Seth, Adam B Barrett, Robin L Carhart-Harris, and Daniel Bor. Reconciling emergences: An information-theoretic approach to identify causal emergence in multivariate data. PLoS Computational Biology, 16(12):e1008289, 2020.
[8] Virgil Griffith, Edwin KP Chong, Ryan G James, Christopher J Ellison, and James P Crutchfield. Intersection information based on common randomness. Entropy, 16(4):1985-2000, 2014.
[9] Robin AA Ince. Measuring multivariate redundant information with pointwise common change in surprisal. Entropy, 19(7):318, 2017.
[10] Nils Bertschinger, Johannes Rauh, Eckehard Olbrich, and Jürgen Jost. Shared information: new insights and problems in decomposing information in complex systems. In Proceedings of the European Conference on Complex Systems 2012, pages 251-269. Springer, 2013.
[11] Malte Harder, Christoph Salge, and Daniel Polani. Bivariate measure of redundant information. Physical Review E, 87(1):012130, 2013.
[12] Nils Bertschinger, Johannes Rauh, Eckehard Olbrich, Jürgen Jost, and Nihat Ay. Quantifying unique information. Entropy, 16(4):2161-2183, 2014.
[13] Aobo Lyu, Andrew Clark, and Netanel Raviv. Explicit formula for partial information decomposition. In 2024 IEEE International Symposium on Information Theory (ISIT), pages 2329-2334. IEEE, 2024.
[14] Artemy Kolchinsky. A novel approach to the partial information decomposition. Entropy, 24(3):403, 2022.
[15] Hu Kuo Ting. On the amount of information. Theory of Probability & Its Applications, 7(4):439-447, 1962.
[16] Raymond W Yeung. A new outlook on Shannon's information measures. IEEE Transactions on Information Theory, 37(3):466-474, 1991.
[17] Peter Gács and János Körner. Common information is far less than mutual information. Problems of Control and Information Theory, 2:149-162, 1973.
[18] Robin AA Ince. The partial entropy decomposition: Decomposing multivariate entropy and mutual information via pointwise common surprisal. arXiv preprint, 2017.
[19] Daniel Chicharro and Stefano Panzeri. Synergy and redundancy in dual decompositions of mutual information gain and information loss. Entropy, 19(2):71, 2017.
[20] Fernando E Rosas, Pedro AM Mediano, Borzoo Rassouli, and Adam B Barrett. An operational information decomposition via synergistic disclosure. Journal of Physics A: Mathematical and Theoretical, 53(48):485001, 2020.
[21] Joseph T Lizier, Benjamin Flecker, and Paul L Williams. Towards a synergy-based approach to measuring information modification. In 2013 IEEE Symposium on Artificial Life (ALIFE), pages 43-51. IEEE, 2013.
[22] Johannes Rauh, Nils Bertschinger, Eckehard Olbrich, and Jürgen Jost. Reconsidering unique information: Towards a multivariate information decomposition. In 2014 IEEE International Symposium on Information Theory, pages 2232-2236. IEEE, 2014.
[23] Aobo Lyu, Bing Yuan, Ou Deng, Mingzhe Yang, Andrew Clark, and Jiang Zhang. System information decomposition. arXiv preprint, 2023.
[24] Raymond W Yeung. A first course in information theory. Springer Science & Business Media, 2012.

APPENDIX

A. Comparison between SID and two-source PID

Fig. 5. Comparison between SID and two-source PID.

SID extends the scope of two-source PID from the mutual information I(S \ S_i; S_i) to the joint entropy H(S) of the system. In SID (target-free), each SI-atom represents information that a certain combination of variables provides redundantly to the system as a whole. For instance, in Fig. 5(A), the SI-atom Ψ_123({{3}{12}}) represents information in S3 that is also contributed synergistically by S1 and S2. This directly corresponds to the PI-atom Π^3_12({{12}}) in the PID view (Fig. 5(B)), where we have a target T = S3 and sources S1, S2.

B. Proofs of Main Results

To prove the lemmas in the paper, we first need the following corollary of Lemma 1.

Corollary 1. For the system (S1, S2, S3, T) and its sub-systems (S1, S2, T) and (S1, T), the decomposed PI-atoms from different sub-systems have the following relationship:

Π^T_1({{1}}) = Π^T_12({{1}{2}}) + Π^T_12({{1}});   (15)

similarly, for the systems (S1, S2, S3, T) and (S1, S2, T),

Π^T_12({{1}{2}}) = Π^T_123({{1}{2}{3}}) + Π^T_123({{1}{2}}),   (16)
Π^T_12({{1}}) = Π^T_123({{1}{3}}) + Π^T_123({{1}{23}}) + Π^T_123({{1}}).   (17)

Proof. For the systems (S1, S2, T) and (S1, T), according to Lemma 1, let A = {S_1, S_2}, B = {S_1}, and C = {S_1}; then we have

Π^T_1({{1}}) = Π^T_12({{1}{2}}) + Π^T_12({{1}}).   (18)

Similarly, for the systems (S1, S2, T) and (S2, T), we have Π^T_2({{2}}) = Π^T_12({{1}{2}}) + Π^T_12({{2}}), where the information atom contained in both Π^T_1({{1}}) and Π^T_2({{2}}) is Π^T_12({{1}{2}}).

Then, following the same approach, we focus on the systems (S1, S2, S3, T) and (S1, T), i.e., we let A = {S_1, S_2, S_3}, B = {S_1}, and C = {S_1}. Then, by Lemma 1 we have

Π^T_1({{1}}) = Π^T_123({{1}{2}{3}}) + Π^T_123({{1}{2}}) + Π^T_123({{1}{3}}) + Π^T_123({{1}{23}}) + Π^T_123({{1}}).   (19)

Similarly, for the systems (S1, S2, S3, T) and (S2, T), we have

Π^T_2({{2}}) = Π^T_123({{1}{2}{3}}) + Π^T_123({{1}{2}}) + Π^T_123({{2}{3}}) + Π^T_123({{2}{13}}) + Π^T_123({{2}}),

where the information atoms contained in both Π^T_1({{1}}) and Π^T_2({{2}}) are Π^T_123({{1}{2}{3}}) and Π^T_123({{1}{2}}). Hence, we have Π^T_12({{1}{2}}) = Π^T_123({{1}{2}{3}}) + Π^T_123({{1}{2}}), where {Π^T_12({{1}{2}})} and {Π^T_123({{1}{2}{3}}), Π^T_123({{1}{2}})} are the only atom(s) contained in both I(S1; T) (i.e., Π^T_1({{1}})) and I(S2; T) (i.e., Π^T_2({{2}})) in the decompositions under the scopes of (S1, S2, T) and (S1, S2, S3, T), respectively. Therefore, (16) is proved. Then, by (16), (18), and (19), we have

Π^T_12({{1}}) = Π^T_123({{1}{3}}) + Π^T_123({{1}{23}}) + Π^T_123({{1}}),

which is (17).

Using Corollary 1, we prove Lemmas 3, 4, 5, and 6 sequentially.

1) Proof of Lemma 3:

Proof. In (S̄1, S̄2, S̄3, T̄), let S̄1 and S̄2 be two independent Bernoulli(1/2) variables, let S̄3 = S̄1 ⊕ S̄2, and let T̄ = (S̄1, S̄2, S̄3). Therefore, we have

I(T̄; S̄1, S̄2, S̄3) = 2.   (20)

Our subsequent proof idea is to use Property 1 to obtain the values of all PI-atoms in any system with two sources and the target variable (i.e., (S̄1, S̄2, T̄), (S̄1, S̄3, T̄) and (S̄2, S̄3, T̄)), and then show that their sum is greater than the joint mutual information of the system (S̄1, S̄2, S̄3, T̄).
For simplicity, throughout the following proof, we adopt the convention that all statements are considered for distinct i, j, k ∈ {1, 2, 3}. Firstly, by Property 1 (Independent Identity and Remark 1), and since T̄ = (S̄1, S̄2, S̄3) =det (S̄i, S̄j), we have

Π^{T̄}_ij({{i}{j}}) = 0.   (21)

Considering that Π^{T̄}_ij({{i}{j}}) = Π^{T̄}_123({{1}{2}{3}}) + Π^{T̄}_123({{i}{j}}), which is identical to (16), and by Axiom 3 (Monotonicity) and Lemma 2 (Nonnegativity), we have

Π^{T̄}_123({{1}{2}{3}}) = Π^{T̄}_123({{i}{j}}) = 0.   (22)

Similarly, (2) implies that I(T̄; S̄i) = Π^{T̄}_ij({{i}{j}}) + Π^{T̄}_ij({{i}}), and since I(T̄; S̄i) = 1 and due to (21), it follows that Π^{T̄}_ij({{i}}) = 1, which by Corollary 1 (specifically (17)) equals

Π^{T̄}_123({{i}}) + Π^{T̄}_123({{i}{jk}}) + Π^{T̄}_123({{i}{k}}).   (23)

Then, by (22) and (23), we have

Π^{T̄}_123({{i}}) + Π^{T̄}_123({{i}{jk}}) = 1,   (24)

and hence,

I(T̄; S̄1, S̄2, S̄3) ≥ Π^{T̄}_123({{1}}) + Π^{T̄}_123({{1}{23}}) + Π^{T̄}_123({{2}}) + Π^{T̄}_123({{2}{13}}) + Π^{T̄}_123({{3}}) + Π^{T̄}_123({{3}{12}}) = 3,

which contradicts (20).

2) Proof of Lemma 4:

Proof. We consider the linear constraints relating the following ten unknowns (the ten SI-atoms of a three-variable system). Define the following vector of atoms:

X = [Ψ_123({{1}{2}{3}}), Ψ_123({{1}{2}}), Ψ_123({{1}{3}}), Ψ_123({{2}{3}}), Ψ_123({{1}{23}}), Ψ_123({{2}{13}}), Ψ_123({{3}{12}}), Ψ_123({{1}}), Ψ_123({{2}}), Ψ_123({{3}})]^T,

and the following vector of entropies:

Y = [H(S1), H(S2), H(S3), H(S1, S2), H(S1, S3), H(S2, S3), H(S1, S2, S3), H(S1, S2, S3), H(S1, S2, S3)]^T.

Then, the nine constraints that arise from SID Axiom 1 are as follows:

[ 1 1 1 0 1 0 0 1 0 0 ]
[ 1 1 0 1 0 1 0 0 1 0 ]
[ 1 0 1 1 0 0 1 0 0 1 ]
[ 1 1 1 1 1 1 0 1 1 0 ]
[ 1 1 1 1 1 0 1 1 0 1 ]   X = Y.
[ 1 1 1 1 0 1 1 0 1 1 ]
[ 1 1 1 1 1 1 0 1 1 1 ]
[ 1 1 1 1 1 0 1 1 1 1 ]
[ 1 1 1 1 0 1 1 1 1 1 ]

Solving the system provides the following definition of all SI-atoms given Red(S1, S2, S3):

Ψ_123({{1}{2}{3}}) ≜ Red(S1, S2, S3),
Ψ_123({{i}{j}}) = H(S_i) + H(S_j) − H(S_i, S_j) − Red(S1, S2, S3),
Ψ_123({{i}{jk}}) = −H(S1) − H(S2) − H(S3) + H(S1, S2) + H(S1, S3) + H(S2, S3) − H(S1, S2, S3) + Red(S1, S2, S3),
Ψ_123({{i}}) = H(S1, S2, S3) − H(S_j, S_k),   (25)

for all i, j, k such that {i, j, k} = {1, 2, 3}.

3) Proof of Lemma 5:

Proof. SID Axiom 2 (Commutativity) is clearly satisfied by Definition 5, since the condition is symmetric with respect to the input variables; SID Axiom 4 (Self-redundancy) is also satisfied by the definition. SID Axiom 3 (Monotonicity) follows from Definition 5 since adding a new variable imposes additional constraints on the maximization:

Red(S1, S2, S3) = max_Q {H(Q) : H(Q|S_i) = 0, ∀i ∈ [3]}
               ≤ max_Q {H(Q) : H(Q|S_i) = 0, H(Q|S_j) = 0} = CI(S_i, S_j),

for every distinct i and j in {1, 2, 3}, where the last equality follows from the definition CI(S1, S2) ≜ max_Q H(Q), s.t. H(Q|S1) = H(Q|S2) = 0 [17]. Moreover, since CI(S_i, S_j) ≤ I(S_i; S_j) [17], it follows that Red(S1, S2, S3) ≤ I(S_i; S_j) for every distinct i, j in {1, 2, 3}; hence SID Axiom 3 follows.
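The maximizer in Definition 5 has a concrete combinatorial form: since H(Q|S_i) = 0 forces Q to take equal values on any two outcomes that share a value of some S_i, the optimal Q labels the connected components of the corresponding agreement graph, as in the Gács-Körner construction. The sketch below is ours (uniform distribution over the listed joint outcomes), not the paper's code:

from itertools import product
from math import log2
from collections import Counter, defaultdict

def red(outcomes):
    # Red(S1,...,Sn) per Definition 5 for a uniform distribution on the
    # joint outcomes (each a tuple of source values). Any Q with
    # H(Q|S_i) = 0 for all i is constant on components of the graph
    # linking outcomes that agree on some S_i; labeling the components
    # maximizes H(Q).
    parent = {o: o for o in outcomes}
    def find(o):
        while parent[o] != o:
            parent[o] = parent[parent[o]]
            o = parent[o]
        return o
    # Union outcomes that share the value of some single source S_i.
    for i in range(len(outcomes[0])):
        groups = defaultdict(list)
        for o in outcomes:
            groups[o[i]].append(o)
        for grp in groups.values():
            for o in grp[1:]:
                parent[find(o)] = find(grp[0])
    n, labels = len(outcomes), Counter(find(o) for o in outcomes)
    return -sum(c / n * log2(c / n) for c in labels.values())

# XOR system: every pair of outcomes is linked through some S_i, so Red = 0.
print(red([(a, b, a ^ b) for a, b in product((0, 1), repeat=2)]))  # 0.0
# Copy system: S1 = S2 = S3 = one fair bit, so Red = 1 bit.
print(red([(0, 0, 0), (1, 1, 1)]))  # 1.0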
4) Proof of Lemma 6:

Proof. Recall that System 1 contains six independent Bernoulli(1/2) variables x1, x2, x4, x5, x7, x8 and three additional bits

x3 = x1 ⊕ x2, x6 = x4 ⊕ x5, x9 = x7 ⊕ x8;   (26)

the source variables are Ŝ1 = (x1, x4, x7), Ŝ2 = (x2, x5, x8), Ŝ3 = (x3, x6, x9); the target is T̂ = (x1, x5, x9); and I(T̂; Ŝ1, Ŝ2, Ŝ3) = H(T̂) = H(x1, x5, x9) = 3.

Intuitively, observe that each one of x1, x5, x9 can be predicted by the other two sources in its respective XOR equation (26). Therefore, given Ŝ2 and Ŝ3 one can determine x1 (since x1 = x2 ⊕ x3); thus, the information about x1 is uniquely shared by Ŝ2 and Ŝ3 in a synergistic way. This corresponds to the PID atom Π^{T̂}_123({{1}{23}}) = H(x1) = 1. Similarly, x5 = x4 ⊕ x6 implies that Ŝ1 and Ŝ3 together determine x5, yielding Π^{T̂}_123({{2}{13}}) = H(x5) = 1, and x9 = x7 ⊕ x8 implies that Ŝ1 and Ŝ2 together determine x9, yielding Π^{T̂}_123({{3}{12}}) = H(x9) = 1. All remaining atoms are zero in this system. A similar intuition for System 2 is given below.

To justify the above claims regarding System 1 in formal terms, we analyze the system by considering the following three sub-target variables separated from the target T̂: T̂1 = x1, T̂2 = x5, T̂3 = x9, where T̂ = (T̂1, T̂2, T̂3) and the three sub-targets are mutually independent. We operate in three steps: in the first we identify the zero PI-atoms, in the second we identify the nonzero ones, and in the third we combine the conclusions.

Step 1: Establishing zero PI-atoms. By PID Axiom 4 we have Red(Ŝi → T̂) = I(Ŝi; T̂), and since I(Ŝ2; T̂1) = I(Ŝ3; T̂1) = 0, it follows that

Π^{T̂1}_2({{2}}) = Π^{T̂1}_3({{3}}) = 0.   (27)

Next, by PID Axiom 3 (Monotonicity), we have Π^{T̂1}_ij({{i}{j}}) ≤ Π^{T̂1}_i({{i}}), where Π^{T̂1}_i({{i}}) = 0 for every i ≠ 1 by (27). Therefore, since Lemma 2 (Nonnegativity) implies that Π^{T̂1}_ij({{i}{j}}) ≥ 0, we have

Π^{T̂1}_ij({{i}{j}}) = 0, ∀i ≠ j ∈ {1, 2, 3}.   (28)

Similarly, applying the consistency from Lemma 1, i.e., (16), to any system (Ŝi, Ŝj, T̂1) and to (Ŝ1, Ŝ2, Ŝ3, T̂1), we have

Π^{T̂1}_ij({{i}{j}}) = Π^{T̂1}_123({{1}{2}{3}}) + Π^{T̂1}_123({{i}{j}}),   (29)

and by Axiom 3 (Monotonicity) and Lemma 2 (Nonnegativity) we obtain

0 ≤ Π^{T̂1}_123({{1}{2}{3}}) ≤ Π^{T̂1}_ij({{i}{j}}) = 0, ∀i ≠ j,

where the last equality is (28); this implies that

Π^{T̂1}_123({{1}{2}{3}}) = 0.   (30)

Then, by (29) and (28), we have

Π^{T̂1}_123({{i}{j}}) = 0, ∀i ≠ j.   (31)

Finally, observe that H(T̂1, Ŝ1|Ŝ2, Ŝ3) = 0, i.e., the entire system entropy is provided by Ŝ2, Ŝ3. Therefore, all PID atoms that do not include either {Ŝ2}, {Ŝ3}, or {Ŝ2, Ŝ3} are zero, i.e.,

Π^{T̂1}_123({{1}}) = Π^{T̂1}_123({{12}}) = Π^{T̂1}_123({{13}}) = Π^{T̂1}_123({{123}}) = Π^{T̂1}_123({{12}{13}}) = 0.   (32)

Step 2: Determining nonzero PI-atoms. First, we have I(T̂1; Ŝ1) = 1. Then, using PID Axiom 4 (I(T̂1; Ŝ1) = Π^{T̂1}_1({{1}})) and Lemma 1, i.e., (5), we get

Π^{T̂1}_1({{1}}) = Π^{T̂1}_123({{1}{2}{3}}) + Π^{T̂1}_123({{1}{23}}) + Π^{T̂1}_123({{1}{3}}) + Π^{T̂1}_123({{1}{2}}) + Π^{T̂1}_123({{1}}) = 1.   (33)

Combining this with (30), (31), and (32), we obtain Π^{T̂1}_123({{1}{23}}) = 1. The same reasoning applies symmetrically to (Ŝ1, Ŝ2, Ŝ3, T̂2) and (Ŝ1, Ŝ2, Ŝ3, T̂3), yielding Π^{T̂2}_123({{2}{13}}) = 1 and Π^{T̂3}_123({{3}{12}}) = 1.

Step 3: Final conclusion. Since T̂ = (T̂1, T̂2, T̂3) and the three sub-targets are independent, the final conclusion for the system (Ŝ1, Ŝ2, Ŝ3, T̂) is: Π^{T̂}_123({{i}{jk}}) = 1 for all distinct i, j, k ∈ [3].

We now turn to discuss System 2 (S̄1, S̄2, S̄3, T̄). Recall that System 2 contains two independent Bernoulli(1/2) variables x1, x2 and their exclusive or x3 = x1 ⊕ x2. Then, S̄1 = x1, S̄2 = x2, S̄3 = x3, T̄ = (x1, x2, x3), and I(T̄; S̄1, S̄2, S̄3) = H(T̄) = H(x1, x2, x3) = 2.
Firstly, by Property 1 (Independent Identity and Remark 1), and since T̄ = (S̄1, S̄2, S̄3) =det (S̄i, S̄j) for every distinct i, j ∈ [3], we have

Π^{T̄}_ij({{i}{j}}) = 0.   (34)

Considering that Π^{T̄}_ij({{i}{j}}) = Π^{T̄}_123({{1}{2}{3}}) + Π^{T̄}_123({{i}{j}}), which is identical to (16), and by Axiom 3 (Monotonicity) and Lemma 2 (Nonnegativity), we have

Π^{T̄}_123({{1}{2}{3}}) = Π^{T̄}_123({{i}{j}}) = 0.   (35)

Similarly, (2) implies that I(T̄; S̄i) = Π^{T̄}_ij({{i}{j}}) + Π^{T̄}_ij({{i}}), and moreover, since I(T̄; S̄i) = 1 and by (34), it follows that Π^{T̄}_ij({{i}}) = 1, which by Corollary 1, (17), equals

Π^{T̄}_123({{i}}) + Π^{T̄}_123({{i}{jk}}) + Π^{T̄}_123({{i}{k}}).   (36)

Then, by (35) and (36), we have

Π^{T̄}_123({{i}}) + Π^{T̄}_123({{i}{jk}}) = 1.   (37)

Similar to System 1, observe that H(T̄, S̄1|S̄2, S̄3) = 0, i.e., the entire system entropy is provided by S̄2, S̄3. Therefore, all PID atoms that do not include either {S̄2}, {S̄3}, or {S̄2, S̄3} are zero, i.e.,

Π^{T̄}_123({{1}}) = Π^{T̄}_123({{12}}) = Π^{T̄}_123({{13}}) = Π^{T̄}_123({{123}}) = Π^{T̄}_123({{12}{13}}) = 0.   (38)

Taking (37) with i = 1, we obtain Π^{T̄}_123({{1}{23}}) = 1. Next, observing that H(T̄, S̄2|S̄1, S̄3) = H(T̄, S̄3|S̄1, S̄2) = 0, it is readily verified (similarly to (38)) that

Π^{T̄}_123({{2}}) = Π^{T̄}_123({{12}}) = Π^{T̄}_123({{23}}) = Π^{T̄}_123({{123}}) = Π^{T̄}_123({{12}{23}}) = 0,

and that

Π^{T̄}_123({{3}}) = Π^{T̄}_123({{23}}) = Π^{T̄}_123({{13}}) = Π^{T̄}_123({{123}}) = Π^{T̄}_123({{23}{13}}) = 0.

Finally, taking (37) with i = 2 and i = 3, it follows that Π^{T̄}_123({{2}{13}}) = Π^{T̄}_123({{3}{12}}) = 1.

Clearly, System 1 and System 2 above satisfy condition (i) with respect to the bijection ψ which maps an antichain in System 1 to an antichain in System 2 having identical indices for all sources (e.g., ψ({{Ŝ1}}) = {{S̄1}}). Condition (ii) is also clearly satisfied.
arXiv:2510.14861
LabOS: The AI-XR Co-Scientist That Sees and Works With Humans

Le Cong¹,²,*,**, Zaixi Zhang³,*, Xiaotong Wang¹,²,*, Yin Di¹,²,*, Ruofan Jin³, Michal Gerasimiuk¹,², Yinkai Wang¹,², Ravi K. Dinesh¹,², David Smerkous⁴, Alex Smerkous⁵, Xuekun Wu²,⁶, Shilong Liu³, Peishan Li¹,², Yi Zhu¹,², Simran Serrao¹,², Ning Zhao¹,², Imran A. Mohammad²,⁷, John B. Sunwoo²,⁷, Joseph C. Wu²,⁶, Mengdi Wang³,**

Affiliations:
¹ Department of Pathology, Department of Genetics, Stanford University School of Medicine, Stanford, CA, USA
² Institute for Stem Cell Biology and Regenerative Medicine, Stanford Cancer Institute, Stanford University School of Medicine, Stanford, CA, USA
³ Princeton AI Lab, Department of Electrical & Computer Engineering, Princeton University, Princeton, NJ, USA
⁴ School of Electrical Engineering and Computer Science, The Ohio State University, Columbus, OH, USA
⁵ Department of Bioengineering, University of Washington, Seattle, WA, USA
⁶ Division of Cardiology, Department of Medicine, Stanford University School of Medicine, Stanford, CA, USA
⁷ Department of Otolaryngology-Head and Neck Surgery, Stanford University School of Medicine, Stanford, CA, USA
* Co-first and core contributing authors
** Corresponding authors: Le Cong (congle@stanford.edu), Mengdi Wang (mengdiw@princeton.edu)

Abstract: Modern science advances fastest when thought meets action. LabOS represents the first AI co-scientist that unites computational reasoning with physical experimentation through multimodal perception, self-evolving agents, and XR-enabled human-AI collaboration. By connecting multi-model AI agents, smart glasses, and human-AI collaboration, LabOS allows AI to see what scientists see, understand experimental context, and assist in real-time execution. Across applications, from cancer immunotherapy target discovery to stem-cell engineering, LabOS shows that AI can move beyond computational design to participation, turning the laboratory into an intelligent, collaborative environment where human and machine discovery evolve together.

Introduction

Science advances on two coupled fronts: computation that proposes and predicts, and experimentation that validates and reveals. Recent AI has transformed the computational front [1,2,3,4], accelerating simulation, prediction, and design, yet the physical laboratory remains the bottleneck, where perception, coordination, and reproducibility still limit progress, and outcomes hinge on hard-to-transfer or tough-to-reproduce skills. Meanwhile, today's "agentic AI" largely operates in the digital realm, planning experiments and synthesizing tools from text, data, and simulations without perceiving or acting in the dynamic lab. In parallel, robotic automation in laboratories can be powerful but is mostly rule-based and bespoke: costly to retarget, challenging to relocate, and brittle to real-world variability.

LabOS addresses this gap through a unified human-AI collaborative intelligence platform that makes laboratories AI-perceivable and AI-operable. It integrates agentic AI systems for dry-lab reasoning with extended reality (XR)-enabled, multimodal interfaces for human-in-the-loop wet-lab execution, creating an end-to-end framework that links hypothesis generation, experimental design, physical validation, and automated documentation. The LabOS co-scientist adopts a multi-agent reasoning architecture, comprising planning, development, critique, and tool-creation agents, that collectively perform hypothesis generation, experimental design, data analysis, and adaptive improvement.
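A structural sketch may help make this multi-agent loop concrete. The skeleton below is ours, not the paper's implementation; `llm` and `run_code` are stand-in callables for an LLM backend and a sandboxed code executor.

from typing import Callable

def run_task(objective: str, llm: Callable[[str, str], str],
             run_code: Callable[[str], str], max_rounds: int = 5) -> str:
    # Manager decomposes the objective; Developer writes and runs code;
    # Critic accepts the result or sends the plan back for revision.
    plan = llm("manager", f"Decompose into analysis steps: {objective}")
    result = ""
    for _ in range(max_rounds):
        code = llm("developer", f"Write Python implementing:\n{plan}\nPrior output:\n{result}")
        result = run_code(code)  # sandboxed execution in the real system
        verdict = llm("critic", f"Objective: {objective}\nOutput: {result}\nReply ACCEPT or a critique.")
        if verdict.startswith("ACCEPT"):
            return result
        plan = llm("manager", f"Revise the plan given this critique:\n{verdict}")
    return result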
The AI co-scientist is self-improving, continuously expanding its analytical capabilities via a "Tool Ocean" of modules autonomously generated from web search, scientific literature, and data. This self-evolving capacity enables the AI to solve novel research tasks via inference-time scaling. In this paper, we focus on an end-to-end instantiation of LabOS for the biomedical domain and demonstrate state-of-the-art performance on leading biomedical reasoning benchmarks, including Humanity's Last Exam (HLE): Biomedicine [5], LAB-Bench: DBQA, and LAB-Bench: LitQA [6], while closing the loop from dry-lab planning to wet-lab execution.

On the physical side, LabOS connects AI reasoning directly to the laboratory via AR/XR smart-glass interfaces and real-time multimodal sensing. Researchers wearing XR glasses receive adaptive, context-aware guidance from the AI agent: step-by-step instructions, error detection and correction cues, and gesture or voice interactions for sterile workflows. To enable the AI co-scientist to "see" in the lab, we collected >200 egocentric video sessions from researcher-worn cameras and glasses during real experiments, assembling these into the LabSuperVision (LSV) benchmark for evaluating AI models' lab perception and reasoning capabilities. Because leading AI models struggled on this benchmark, we post-trained a lab-specialized Vision-Language Model (VLM) using a combination of public experimental videos, in-house recordings, and expert annotations. The resulting model, the LabOS VLM, decodes visual input from XR glasses and aligns the visual embeddings with a language model to interpret and reason about lab video scenes. The model demonstrates markedly improved visual reasoning capability in scientific settings, enabling LabOS to monitor actions, detect deviations, verify results, and synchronize multimodal data streams with reference protocols, allowing the AI to perceive, understand, and act as a co-pilot within real laboratories, as an AI co-scientist.

LabOS also supports 3D/4D spatial modeling of laboratory workflows. These digital twins capture spatial and temporal relationships between instruments, samples, and human actions, enabling replay, "what-if" analysis, and simulation-based training. The resulting spatial grounding provides the foundation for safe, reproducible, and transferable laboratory automation.

For real-world validation, we demonstrate LabOS's ability in three biomedical research studies: cancer immunology, mechanistic research, and stem cell engineering (Fig. 1c). In the first study, we sought to uncover cancer immunotherapy targets that could boost natural killer (NK) cell killing of tumors. We used LabOS to generate hypotheses and perform target identification via multi-step reasoning and analysis; the AI agent nominated CEACAM6 as a putative target, which was validated in a wet-lab NK-tumor killing assay. In the second study, we used LabOS to perform mechanistic research, showcasing identification of a gene regulating cell fusion, ITSN1. In the third study, two researchers wore smart glasses during stem-cell engineering projects and interacted with the AI co-scientist. The latest LabOS streams egocentric data to the server and invokes the VLM agent every ~5-10 s. In a gene-knockout experiment, LabOS monitored the workflow, provided step-level guidance, and could flag operational deviations, such as when the researcher did not follow sterile technique, or used a reagent incubation time that deviated from gold standards.
In a lentiviral transduction experiment in human iPSCs, LabOS was used to automatically record and digitize the workflow of an experienced researcher, which can then be used to coach a novice researcher through the experiment, demonstrating its potential for capturing and transferring advanced human skills.

Figure 1. LabOS: Multi-Modal Human-AI Collaboration in the Science Laboratory. LabOS comprises a self-evolving agentic AI for dry-lab tasks and an XR interface for human-in-the-loop wet-lab execution, creating an end-to-end system for lab research. a, Dry Lab: agentic AI as the computational core of LabOS. The system employs a multi-agent framework, including a Manager Agent for planning, a Dev Agent for coding and execution, and a Critic Agent for evaluation and refinement. The agents continuously improve their analytical capabilities. A Tool Creation Agent expands the system's functions by generating new tools from sources like PubMed, which are then stored in a shared Tool Ocean. This component automates complex data analysis. b, Wet Lab: capture, guidance, and live feedback via AR/XR for human-AI collaboration in the physical laboratory. A scientist wearing AR/XR glasses receives real-time guidance from the AI. A specially trained Vision-Language Model (VLM) monitors the procedure, providing on-the-fly error detection and correction to ensure correctness and reproducibility. c, Results on leading benchmarks (HLE: Biomedicine, LAB-Bench: DBQA, and LAB-Bench: LitQA) show LabOS outperforming frontier LLMs/agents on biomedical reasoning tasks. d, The self-evolving agent's performance scales with inference-time compute. e, Use cases. The first case is drug target discovery: the agentic AI analyzes functional screening data to identify and rank NK cancer immunotherapy targets. Secondly, for mechanistic investigation, LabOS generates testable hypotheses that are then validated by a human scientist to identify a cell fusion regulator. Thirdly, LabOS enabled co-piloted, reproducible processes for complex experiments like stem cell engineering. All photographs shown are of the authors.

Together, these features make LabOS a true AI co-scientist: the first multimodal human-AI system to unify dry-lab reasoning and wet-lab execution in a single, adaptive framework. By giving AI the ability to think with us and work alongside human scientists, seeing what we see, checking what we do, and learning from every run, LabOS turns the lab into a dynamic feedback loop. It moves us toward autonomous, self-improving discovery, where human intuition and machine rigor co-evolve to accelerate breakthroughs.

Results

1. LabOS Overview

LabOS links agentic AI for dry-lab reasoning with XR-enabled, human-in-the-loop wet-lab execution into an end-to-end workflow from design to validation. A multi-agent core of Manager (planning), Developer (coding/execution), and Critic (evaluation), plus a Tool-Creation module that auto-extends a shared Tool Ocean, nominates targets, plans experiments, runs analyses, and continually improves [7,8] (Fig. 1a). For multimodal human-AI collaboration in the wet-lab module, protocols can be generated by or input into LabOS, then streamed to the connected XR glasses. The interface on the XR glasses (i) renders the stepwise protocol in a Unity/Android application, (ii) verifies physical actions from the first-person video stream by invoking an embedded VLM for visual reasoning, and (iii) returns context-aware feedback in real time (Fig. 1b). All streams are time-stamped and logged with metadata for automated documentation.
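The (i)-(iii) loop above can be pictured as a small server-side handler. The sketch below is illustrative only; the JSON field names are our assumptions, not the shipped schema, and `vlm` is a stand-in for the embedded model.

import json
from typing import Callable, List

def handle_segment(frames: List, protocol_step: str,
                   vlm: Callable[[List, str], str]) -> str:
    # A short video segment arrives from the glasses; the VLM is asked to
    # check it against the current protocol step, and a structured JSON
    # message is returned for the XR app to render as visual/audio feedback.
    answer = vlm(frames, f"Current step: {protocol_step}. "
                         "Does the video match? List any deviations.")
    message = {
        "step": protocol_step,
        "status": "warning" if "deviation" in answer.lower() else "ok",
        "feedback": answer,
    }
    return json.dumps(message)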
2. LabOS Supports a Self-Evolving AI Agent for Biomedical Reasoning

The dry-lab module of LabOS builds on and expands the STELLA self-evolving agent framework [8], a multi-agent reasoning system for biomedical research that integrates planning, development, critique, and tool creation. Fig. 1a illustrates the agentic architecture. The Manager/Planner Agent decomposes scientific objectives into structured modules: candidate molecules, reagents, procedures, materials lists, instrument settings, and quality-control checkpoints. The Developer Agent executes these steps by generating and running Python code for complex bioinformatics analyses, while the Critic Agent evaluates intermediate results and refines the workflow, forming an iterative reasoning loop. This AI co-scientist learns and improves from every problem it solves, continuously enhancing its technical abilities by proactively expanding reasoning strategies and creating new tools [7,8]. Two mechanisms drive its continuous self-improvement. First, a Template Library of successful reasoning workflows is dynamically updated, allowing the system to generalize from prior solutions. Second, a Tool Ocean maintains and expands a repository of analytical tools/code, databases, and APIs. The Tool-Creation Agent autonomously identifies, tests, and integrates new resources as needed. This architecture enables LabOS to plan and solve complex research tasks efficiently while continually enhancing its reasoning and analytical capabilities.

To evaluate the efficacy of the LabOS research agent, we ran it on a suite of challenging biomedical reasoning benchmarks. Our results show that LabOS consistently establishes a new state of the art, achieving top accuracy scores of approximately 32% on Humanity's Last Exam: Biomedicine, 61% on LAB-Bench: DBQA, and 65% on LAB-Bench: LitQA, outperforming the next-best models by up to 8% (Fig. 1c). Crucially, its performance systematically improves with continued use and test-time scaling, providing direct evidence of its self-evolving design (Fig. 1d). The self-improving framework enables LabOS to learn and grow like a human scientist, dynamically scaling to meet the ever-expanding challenges of biomedical discovery.

3. Training the LabOS AI Co-Scientist To See and Reason in Physical Labs

To enable LabOS to see, understand, and reason in a physical lab environment, we sought to post-train a Vision-Language Model (VLM) on a broad set of lab research videos. A VLM is a multimodal AI model that jointly learns from visual and textual inputs, allowing it to connect what it sees with what it reads or describes. It typically combines a vision encoder that processes images or video frames with a language model that interprets and generates text. By aligning these two modalities in a shared representation space, the VLM can recognize lab scenes, interpret actions, and reason about experimental workflows and outcomes.

3.1 LabSuperVision (LSV): Benchmarking AI for Scientific Visual Reasoning

We asked researchers to wear XR glasses or action cameras while performing their laboratory research, to collect real-world data. We then processed and annotated the collected data for evaluating AI models on lab research tasks. The result is LabSuperVision (LSV), an expert-annotated laboratory video dataset designed for lab-operation video understanding and reasoning.
LSV comprises >200 high-quality video sessions (typically 2-10 min; up to 45 min) recorded by 7 researchers across diverse lab scenes (bench, tissue culture room, and instrument bays), capturing realistic operations and movement between spaces. Each session is associated with an expert-generated gold-standard protocol; human scientists performed the corresponding experiments wearing cameras or smart glasses, recording all details. For annotation, a team of five human experts annotated each video session with: (1) step segments, with start/stop times aligned to the gold-standard protocol; (2) error and issue events, labeled by type (e.g., sterile breach, step mismatch, timing deviation); and (3) critical parameters, materials, and reagents, when applicable (Fig. 3).

With the LSV benchmark, we evaluated whether leading commercial AI models can interpret and troubleshoot scientific procedures from lab experiment videos. We tested four leading AI models spanning both open and proprietary foundation models: Gemini 2.5 Pro, Cosmos-Reason-1, Qwen 2.5-VL-7B, and GPT-4o (Fig. 3c). When prompted with a new video, each model is tasked with one of two objectives: (1) protocol alignment, where the model must generate a stepwise protocol describing procedural actions and parameters; or (2) issue identification, where the model must troubleshoot based on the video and identify any deviation from the gold standard or any handling errors, if present. The model outputs are then compared against the ground truth using two methods: a total of five human experts compared VLM outputs against the gold-standard protocol and error labels on a 0-5 scale (5 being the highest, for perfect alignment); in parallel, we used GPT-5 as a comparator to examine and score outputs using the same rubrics (a minimal sketch of this scoring loop appears at the end of this subsection).

Figure 2. LabOS-VLM for visual reasoning in laboratories. a, LabSuperVision (LSV) benchmark dashboard with egocentric and fixed-view lab videos paired to gold-standard protocols, including notes, issues, or errors labeled by human experts. b, LSV benchmarking pipeline in four stages: (1) data collection: multi-modal recordings from diverse facilities via fixed cameras and XR smart glasses; (2) data processing: expert curation with light automation, aligned to reference protocols and parameters; (3) dataset assembly: compressed corpus of 200+ experiment-session videos; (4) evaluation: human and GPT-5 assessments of model performance on LSV. c, Benchmarking AI models on LSV via human experts (n=5) and GPT-5. Scores range 0-5 (5 = perfect). d, LabOS-VLM post-training using FineBio, JoVE, and LSV datasets via supervised fine-tuning (SFT) and reinforcement learning (GRPO). e, The LabOS-VLM family outperforms baselines on protocol generation quality (left) and error-detection rate (right) on LSV. f-g, Real-world testing of LabOS-VLM: (f) error detection/correction in a tissue culture experiment; (g) context-aware, AI-generated step-by-step instructions grounded in visual understanding.

Our findings show that leading AI models struggle to understand fine-grained laboratory research workflows: the top-performing model, Gemini 2.5 Pro, scored only 2.86 out of 5 on protocol alignment, moderately better than the open-source NVIDIA Cosmos-Reason-1, which scored 2.24; on issue/error identification, leading models like Gemini and GPT-4o only managed to score ~2 out of 5 (Fig. 2c). This reflects the gap between open-world foundation models' capabilities and what is needed for scientific research in physical labs. Our results surface where today's AI models have some success (protocol alignment) and where they have substantial limitations (error detection/correction).
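The scoring loop referenced above can be captured in a few lines. This harness sketch is ours; `model` and `judges` are stand-in callables (a human-expert rubric or an LLM judge, each returning a 0-5 score).

from statistics import mean
from typing import Callable, List, Tuple

def benchmark(sessions: List[Tuple[str, str]],
              model: Callable[[str], str],
              judges: List[Callable[[str, str], float]]) -> float:
    # Each session pairs a video with its gold-standard protocol; the model
    # output (protocol alignment or issue identification) is scored 0-5 by
    # every judge, and the mean rubric score is reported per model.
    scores = []
    for video, gold in sessions:
        output = model(video)
        scores.extend(judge(output, gold) for judge in judges)
    return mean(scores)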
Our results surface where today’s AI models have some success (protocol alignment) and where they have substantial limitations (error detection/correction). 3.2 LabOS-VLM: Vision-Language-Model Trained for Visual Reasoning in Laboratories. Given the aforementioned gap, we sought to train a vision–language model for visual reasoning in lab environments. For this purpose, we assemble an experimental video collection, including FineBio (expert-annotated wet-lab videos), JoVE (standardized procedure videos), and LSV. We split the datasets 80/10/10 for training, validation, and held-out testing. Using Qwen-VL as the base model, we performed post-training by supervised fine-tuning (SFT) with LoRA on paired video–text examples and then reinforcement finetuning to improve visual reasoning. In the latter reinforcement learning step, we used Group Relative Policy Optimization (GRPO) [9] with LoRA updates, where the model rolled out a group of multiple candidate responses per prompted scenario and received rewards designed to evaluate the candidate outputs. The reward is designed to be rule-based to account for lab procedural accuracy, safety compliance, and experimental detailed level; as well as to account for relative rewards within each group to favor expert-consistent reasoning. By using this SFT->RL pipeline, we adapted the base model and obtained the LabOS-VLM (7B, 32B, 72B, 235B; Fig. 2e–f) family. Across model scales, the LabOS-VLM family consistently outperforms the base model on lab video reasoning tasks (measured via protocol generation quality and error detection accuracy). On the held-out subset of evaluation data, LabOS VLM-235B achieves >90% accuracy in error detection accuracy, outperforming Claude Opus-4.1, GPT-5, and Gemini 2.5 Pro on all evaluated metrics, establishing LabOS VLM as a strong module for multimodal lab intelligence and AI-assisted experimentation (Fig. 2e). Our SFT->RL pipeline has proved to enhance the AI model’s ability in step-wise visual reasoning, error detection and correction, which is a basis for building AI co-scientist to see and work with researchers real-time in scientific settings. We next evaluated the fine-tuned LabOS-VLM on egocentric videos from real experiments. First, we supplied a series of videos about delivering CRISPR gene-editors to human cells via transfection. The model successfully distinguished correct vs. incorrect operations, pinpointing the specific errors that the human operator made across 2 distinct issues (Fig. 2f). Second, we provided experiment records, from a multi-step setup on preparing Cas9 RNA complex for gene-editing along with a reference protocol, to the LabOS co-scientist. The VLM-powered AI agent recognized each step, generated step-wise guidance, matched actions to the protocol, issued context-aware warnings, and suggested the next action. These results validate LabOS VLM capabilities in authentic wet-lab settings. Figure 3. LabOS on XR glasses enables spatially-grounded human–AI collaboration in physical laboratories. a, Live streaming from XR glasses to the LabOS VLM server enables real-time scene understanding, feedback, and interactions between human and AI agents. b, Deployment of LabOS AI+XR system across lab settings. c, Live action feed from the wearer’s perspective. d, LabOS AI+XR provides guidance and summary on the lab operation. e, Detected deviations trigger inline error prompts and suggested corrections, to mitigate human researcher’s oversight. 
f, Feed-forward Gaussian splatting supports camera tracking and multi-view, metric-depth 3D reconstruction, enabling object localization and spatio-temporal reasoning for scientific workflows. All photographs shown are of the authors. 4. Extended-Reality (XR) Glasses Enable Real-Time Human-AI Collaboration Human–AI interaction in LabOS is mediated through the XR interface. Researcher engages with the AI co-scientist primarily via the LabOS XR glasses, where the glasses live-stream what the human scientist sees and hears while reconstructing the 3D lab environment. Modern XR hardware supports interface rendering, gesture recognition, and running a Unity/Android application. Data from the glasses are streamed to a local GPU server (or the cloud) for real- time agentic inference. The server receives short video segments (e.g., 5–10s), forwards them to the LabOS AI agent for analysis and reasoning (input at 4 frames/s for videos), and returns a structured JSON output to the XR glasses. The XR app parses this JSON message and provides real-time visual and audio feedback to researchers at the bench (Fig. 3a-e). We tested both AR/XR glasses and VR/XR headsets, while both supported the above functions, we chose to launch initial deployment of LabOS using AR/XR glasses as they achieved key specifications needed for human-centered intelligent lab: open-peripheral, light-weight hardware (less than 3- ounces/85-grams) allows convenient wearing in labs where goggles are already used; 2+ hours battery life for extended operation (via a wireless neckband or mobile battery); display brightness over 1200+ Nits sufficient for in-door labs; 6DoF and hand gesture support for 3D- aware human-AI interactions. LabOS also supports 3D modeling of lab environments and workflows (Fig. 3f), via videos captured on researcher-worn smart glasses — optionally augmented with multiview cameras — using state-of-the-art 3D/4D reconstruction algorithms. In our experiment, we utilize MapAnything [10] for multiview environment + egocentric for tracking, camera positioning, depth maps, and point cloud reconstructions, and incorporate 3D Gaussian splatting. Gaussian splatting models scenes as sets of millions of Gaussian distributions whose parameters are inferred from input 2D images captured in different camera positions and (and time if needed), enabling photorealistic, temporally consistent reconstruction. 4DLangSplat [11] can further help produce a time-aware, semantically indexable 3D environment, to support object-centric tracking. Through remote supervision and guidance, the LabOS AI Co-Scientist boosts reproducibility by standardizing experiment capture and logging non-fatal deviations and context variables. Expert-run recordings from XR glasses also serve as instructional material and training data to improve LabOS agents and downstream models. 5. LabOS In Action: Human-AI Collaboration for Biomedical Discovery 5.1 LabOS for cancer immunotherapy target discovery The challenge in cancer immunology is to identify regulators of immune evasion that mediate tumor resistance to cytotoxic lymphocytes such as natural killer (NK) cells. Traditional screening approaches are limited by throughput and rely on human expertise for downstream analysis. We prompted LabOS to investigate genes regulating sensitivity and resistance to NK cell-mediated killing of melanoma cancer cells based on a functional screen. 
Using a functional CRISPR activation (CRISPRa) screen in A375 melanoma tumor cells, treated with or without primary human NK cells, LabOS AI co-scientist autonomously identified and refined candidate regulators of NK-mediated cytotoxic resistance [12]. Within a wet-lab-in-the- loop pipeline, the LabOS AI agent dynamically re-ranked genes through iterative functional enrichment analysis, revealing CEACAM6 as a top regulator of NK resistance. Figure 4. LabOS applications in target discovery, mechanistic investigation, and stem cell research. a-e, Functional screening study for Natural Killer (NK) immunotherapy target identification. LabOS was prompted with the research task to identify regulators of tumor resistance to NK killing, based on results from a CRISPR activation screen in A375 melanoma cells co-cultured with primary human NK cells. b-e, AI-generated screen analysis reveals differential gene enrichment upon NK treatment (b), and the AI agent re-ranked key target—CEACAM6—moving it from low to top-ranked position (c) and automated survival analysis using cancer patient data, stratified by CEACAM6 expression (d). Wet-lab functional validation confirms enhanced NK resistance upon CRISPRa activation of CEACAM6 (e). f, LabOS applies to mechanistic investigation of cell fusion factors. LabOS proposes ITSN1 as a regulator of cell-cell fusion, providing its found evidence and reasoning trajectories. Experimental knock-out of top candidate gene ITSN1 confirmed this gene’s impact on cell fusion in the wet lab, validating the AI co-scientist’s hypothesis. g, LabOS provides live copiloting to researchers in stem cell gene-editing experiments. The AI-XR agent enables live guidance, error detection to safeguard multi-step experiments, where scientists can use LabOS to monitor and track issues automatically. h, The AI-XR agent can learn advanced skills in experiments and document them such as the lentiviral transduction of human iPSC stem cells. All photographs shown are of the authors. AI reasoning further generated explainable evidence by automatically performing survival analyses on The Cancer Genome Atlas (TCGA) datasets, stratified by CEACAM6 expression, thereby linking functional screening results to patient outcomes. Experimental validation using individual CRISPRa perturbation confirmed that CEACAM6 activation significantly increased tumor resistance to NK killing (Fig. 4a-g). 5.2 LabOS guides mechanistic investigation of cell fusion regulator Beyond functional screening, LabOS extends to mechanistic hypothesis generation and validation. In the next study, we prompt LabOS with the research task to identify key genes that control cell-cell fusion. Cell fusion is a fundamental biological process, key to basic biology (e.g. muscle development, viral infection all require cell fusion) and translational technology (e.g. cell fusion is a key step mediating efficient gene delivery). LabOS proposed and ranked candidates using pathway enrichment, interaction priors, and functional evidence. The AI co-scientist prioritized ITSN1 as a key regulator of cell-cell fusion, with automated evidence generation. Then human researchers used CRISPR interference (CRISPRi) coupled with cell fusion assay (induced by the fusogenic protein FAST [13]) in U2OS cells, to measure if ITSN1 perturbation will impact cell fusion phenotype. 
Researchers further performed quantitative imaging and cell-based assay and observed significant inhibition of cell fusion upon ITSN1 knockdown in the wet lab, confirming the role of ITSN1 (Fig. 4f). 5.3 LabOS copilots researchers in complex stem-cell experiments Reproducibility in advanced wet-lab domains is hindered by tacit expertise: critical steps are encoded in human memory and lab-specific “know-how,” not necessarily in written protocols. For stem cell engineering, as an example, small deviations in the timing, reagent handling, or cell density and cell state assessment can drive large outcome variation, impeding transfer of skills across operators and laboratories. First, we used LabOS with XR glasses to guide researchers through CRISPR gene-editing of human stem cells for disease-modeling workflows [14]. Via XR glasses, LabOS agents captured all details as scientists perform gene knock-out experiments in human induced pluripotent stem cells (iPSCs)-derived cardiac fibroblast, useful for modeling diseases such as heart fibrosis [15]. LabOS copiloted the multi-step procedure with visual reasoning, precisely interpreting bench actions and flagging issues in real time (Fig. 4g). Second, because efficient gene delivery underpins construction of engineered stem-cell lines for perturbation and drug screening, we used LabOS with XR glasses to guide and automatically document lentiviral transduction experiments in human iPSCs from expert-level scientists (Fig. 4h). LabOS then could monitor and help train junior scientists on this complex experiment. Here, LabOS can act as an AI tutor: recording expert practice, digitizing key parameters, and coaching a novice to expert-level performance without requiring side-by-side training or extended period of trial-and-error. Through real-time, multimodal human-AI interactions, the LabOS system can provide context-aware guidance, validation, and performance tracking. Summary LabOS prototypes what an AI co-scientist can be: a system that sees, reasons, and helps run the lab. By pairing AI agents with real-time, XR-guided human–AI interaction and data-driven reasoning, it enables faster discovery, reproducible training, and precise operation. Across use cases—hypothesis generation, automated documentation, error correction, rapid skill transfer, iPSC experiment guidance, and insights into NK–tumor pathways and cell-fusion regulators— LabOS turns the lab into an adaptive, collaborative workspace where human scientists and AI work side-by-side to accelerate discovery, generate reproducible science, and improve together. Acknowledgement We extend our sincere gratitude to the NVIDIA, in particular the XR/AR-VR, VLM/Agentic-AI, edge computing, and business partnership team for their invaluable collaboration on the VLM fine-tuning development, agentic AI integration, and XR live streaming, as well as for GPU server support. We would like to acknowledge we procured the AR/XR glass hardware from Viture, and thank the Viture team for support on wireless streaming, Unity/Android app support. We would like to acknowledge we procured the VR/XR glass thanks to Dr. Ze Yuan from the UReality team and their software support. We also would like to thank the Nebius team for GPU compute and infrastructure support. The results on cancer patient survival analysis here are in whole or part based upon data generated by the TCGA Research Network: https://www.cancer.gov/tcga. Author contributions L.C. and M.W. conceived and supervised the project. L.C. 
led the research plan/organization, hardware/XR solution build, study design and paper writing. AI &Data: Z.Z, R.J. co-led the STELLA AI agents and benchmarking. X.W. led the data collection, annotation and human evaluation efforts, contributed to setting up XR glasses and live experimentation. Z.Z., Y.W., M.G. contributed to VLM training and benchmarking. D.S.,S.L., A.S. co-led 3D/4D modeling, object recognition and tracking of lab workflows. D.Y., X.W., P.L., Y.Z., S.S., N.Z. contributed to data collection, scientific annotation, and human evaluation of VLMs. Experiment: X. W and R.D. co-led NK experiments, to which J.W. also contributed and supervised. D.Y. led the fusogen experiment and co-lead stem cell experiments with X.W., J.C.W. co-supervised the stem cell experiments. Reference 1. Jumper, J. et al. Highly accurate protein structure prediction with AlphaFold. Nature 596, 583–589 (2021). 2. Wang, H. et al. Scientific discovery in the age of artificial intelligence. Nature 620, 47–60 (2023). 3. Baker, B. et al. Emergent autonomous scientific research capabilities of large language models. Nat. Mach. Intell. 6, 876–887 (2024). 4. Qu, Y. et al. CRISPR-GPT for agentic automation of gene-editing experiments. Nat. Biomed. Eng. (2025). 5. Phan, L. et al. Humanity’s Last Exam (HLE). arXiv preprint arXiv:2501.14249 (2025). 6. Laurent, J. M., Janizek, J. D., Ruzo, M. et al. LAB-Bench: Measuring Capabilities of Language Models for Biology Research. arXiv preprint arXiv:2407.10362 (2024). 7. Qiu, J. et al. Alita: Generalist agent enabling scalable agentic reasoning with minimal predefinition and maximal self-evolution. arXiv preprint arXiv:2505.20286 (2025). 8. Jin, R. et al. STELLA: self-evolving LLM agent for biomedical research. arXiv preprint arXiv:2507.02004 (2025). 9. DeepSeek Team. Group Relative Policy Optimization (GRPO) for reinforcement learning of reasoning. arXiv preprint arXiv:2504.09374 (2025). 10. Keetha, N., Müller, N., Schönberger, J., Porzi, L., Zhang, Y., Fischer, T. & Kontschieder, P. MapAnything: Universal feed-forward metric 3D reconstruction. arXiv preprint arXiv:2509.13414 (2025). 11. Li, W., Zhou, R., Zhou, J., Song, Y., Herter, J., Qin, M. & Pfister, H. 4D-LangSplat: 4D language Gaussian splatting via multimodal large language models. In Proc. CVPR, 22001– 22011 (2025). 12. Dinesh RK, Wang X, Mohammad IA, Gunasekaran P, Stiklioraitis K, Villafuerte JR, Rao A, Hernandez-Lopez RA, Sunwoo JB, & Cong L. Surfaceome CRISPR activation screening uncovers ligands regulating tumor sensitivity to NK cell killing. bioRxiv. 2025:2025-09 (2025). 13. Duncan, R. Fusogenic reoviruses and their fusion-associated small transmembrane (FAST) proteins. Annu. Rev. Virol. 6, 341–363 (2019). 14. Takahashi K, Yamanaka S. Induction of pluripotent stem cells from mouse embryonic and adult fibroblast cultures by defined factors. Cell. 126(4):663-76 (2006). 15. Zhang H, Thai PN, Shivnaraine RV, Ren L, Wu X, Siepe DH, Liu Y, Tu C, Shin HS, Caudal A, Mukherjee S. Multiscale drug screening for cardiac fibrosis identifies MD2 as a therapeutic target. Cell. 187(25):7143-63 (2024).
LabOS: The AI-XR Co-Scientist That Sees and Works With Humans

Le Cong1,2,*,**, Zaixi Zhang3,*, Xiaotong Wang1,2,*, Yin Di1,2,*, Ruofan Jin3, Michal Gerasimiuk1,2, Yinkai Wang1,2, Ravi K. Dinesh1,2, David Smerkous4, Alex Smerkous5, Xuekun Wu2,6, Shilong Liu3, Peishan Li1,2, Yi Zhu1,2, Simran Serrao1,2, Ning Zhao1,2, Imran A. Mohammad2,7, John B. Sunwoo2,7, Joseph C. Wu2,6, Mengdi Wang3,**

Affiliations:
1
2 Institute for Stem Cell Biology and Regenerative Medicine, Stanford Cancer Institute, Stanford University
3 Princeton AI Lab,
4
5
6 Division of Cardiology,
7 -Head and Neck Surgery, Stanford University
* Co-first and core contributing authors
** Corresponding authors: Le Cong ( ), Mengdi Wang ( )

Abstract:

Modern science advances fastest when thought meets action. LabOS represents the first AI co-scientist that unites computational reasoning with physical experimentation through multimodal perception, self-evolving agents, and XR-enabled human-AI collaboration. By connecting multi-model AI agents, smart glasses, and human-AI collaboration, LabOS allows AI to see what scientists see, understand experimental context, and assist in real-time execution. Across applications, from cancer immunotherapy target discovery to stem-cell engineering, LabOS shows that AI can move beyond computational design to participation, turning the laboratory into an intelligent, collaborative environment where human and machine discovery evolve together.

Introduction

Science advances on two coupled fronts: computation that proposes and predicts, and experimentation that validates and reveals. Recent AI has transformed the computational front [1,2,3,4], accelerating simulation, prediction, and design, yet the physical laboratory remains the bottleneck, where perception, coordination, and reproducibility still limit progress, and outcomes hinge on hard-to-transfer or tough-to-reproduce skills. Meanwhile, today's "agentic AI" largely operates in the digital realm, planning experiments and synthesizing tools from text, data, and simulations without perceiving or acting in the dynamic lab. In parallel, robotic automation in laboratories can be powerful but is mostly rule-based and bespoke: costly to retarget, challenging to relocate, and brittle to real-world variability.

LabOS addresses this gap through a unified human-AI collaborative intelligence platform that makes laboratories AI-perceivable and AI-operable. It integrates agentic AI systems for dry-lab reasoning with extended reality (XR)-enabled, multimodal interfaces for human-in-the-loop wet-lab execution, creating an end-to-end framework that links hypothesis generation, experimental design, physical validation, and automated documentation.

The LabOS co-scientist adopts a multi-agent reasoning architecture, comprising planning, development, critique, and tool-creation agents, that collectively performs hypothesis generation, experimental design, data analysis, and adaptive improvement. The AI co-scientist is self-improving, continuously expanding its analytical capabilities via a "Tool Ocean" of modules autonomously generated from web search, scientific literature, and data. This self-evolving capacity enables the AI to solve novel research tasks via inference-time scaling.
In this paper, we focus on an end-to-end instantiation of LabOS for the biomedical domain and demonstrate state-of-the-art performance on leading biomedical reasoning benchmarks, including Humanity's Last Exam (HLE): Biomedicine [5], LAB-Bench: DBQA, and LAB-Bench: LitQA [6], while closing the loop from dry-lab planning to wet-lab execution.

On the physical side, LabOS connects AI reasoning directly to the laboratory via AR/XR smart glass interfaces and real-time multimodal sensing. Researchers wearing XR glasses receive adaptive, context-aware guidance from the AI agent: step-by-step instructions, error detection and correction cues, and gesture or voice interactions for sterile workflows.

To enable the AI co-scientist to "see" in the lab, we collected >200 egocentric video sessions from researcher-worn cameras/glasses during real experiments, assembling these into the LabSuperVision (LSV) benchmark for evaluating AI models' lab perception and reasoning capabilities. Because leading AI models struggled on this benchmark, we post-trained a lab-specialized Vision-Language-Model (VLM) using a combination of public experimental videos, in-house recordings, and expert annotations. The resulting model, the LabOS VLM, is able to decode visual input from XR glasses and align the visual embedding with a language model to interpret and reason about lab video scenes. The model demonstrates markedly improved visual reasoning capability in scientific settings, enabling LabOS to monitor actions, detect deviations, verify results, and synchronize multimodal data streams with reference protocols, allowing the AI to perceive, understand, and act/co-pilot within a real laboratory as an AI co-scientist.

LabOS also supports 3D/4D spatial modeling of laboratory workflows. These digital twins capture spatial and temporal relationships between instruments, samples, and human actions, enabling replay, "what-if" analysis, and simulation-based training. The resulting spatial grounding provides the foundation for safe, reproducible, and transferable laboratory automation.

For real-world validation, we demonstrate LabOS's ability in three biomedical research studies: cancer immunology, mechanistic research, and stem cell engineering (Fig. 1c). In the first study, we sought to uncover cancer immunotherapy targets that could boost natural killer (NK) cell killing of tumors. We used LabOS to generate hypotheses and perform target identification via multi-step reasoning and analysis; the AI agent nominated CEACAM6 as a putative target, which was validated in a wet-lab NK-tumor killing assay. In the second study, we used LabOS to perform mechanistic research, showcasing identification of a gene regulating cell fusion, ITSN1. In the third study, two researchers wore smart glasses during stem-cell engineering projects and interacted with the AI co-scientist. The latest LabOS streams egocentric data to the server and invokes the VLM agent every ~5-10 s. In a gene-knockout experiment, LabOS monitored the workflow, provided step-level guidance, and could flag operational deviations, such as a researcher not following sterile technique, or a wrong reagent incubation time relative to gold standards. In a lentiviral transduction experiment in human iPSCs, LabOS was used to automatically record and digitize the workflow of an experienced researcher, which can then be used to coach a novice researcher through the experiment, demonstrating its potential for capturing and transferring advanced human skills.
Figure 1. LabOS: Multi-Modal Human-AI Collaboration in the Science Laboratory. LabOS comprises a self-evolving agentic AI for dry-lab tasks and an XR interface for human-in-the-loop wet-lab execution, creating an end-to-end system for lab research. a, Dry lab: agentic AI as the computational core of LabOS. The system employs a multi-agent framework, including a Manager Agent for planning, a Dev Agent for coding and execution, and a Critic Agent for evaluation and refinement. The agents continuously improve analytical capabilities. A Tool Creation Agent expands the system's functions by generating new tools from sources like PubMed, which are then stored in a shared Tool Ocean. This component automates complex data analysis. b, Wet lab: capture, guidance, and live feedback via AR/XR for human-AI collaboration in the physical laboratory. A scientist wearing AR/XR glasses receives real-time guidance from the AI. A specially trained Vision-Language Model (VLM) monitors the procedure, providing on-the-fly error detection and correction to ensure correctness and reproducibility. c, Results on leading benchmarks (HLE: Biomedicine, LAB-Bench: DBQA, and LAB-Bench: LitQA) show LabOS outperforming frontier LLMs/agents in biomedical reasoning tasks. d, The self-evolving agent's performance scales with inference-time compute. e, Use cases. The first case is drug target discovery: the agentic AI analyzes functional screening data to identify and rank NK cancer immunotherapy targets. Second, for mechanistic investigation, LabOS generates testable hypotheses that are then validated by a human scientist to identify a cell fusion regulator. Third, LabOS enables copiloted, reproducible processes for complex experiments like stem cell engineering. All photographs shown are of the authors.

Together, these features make LabOS a true AI co-scientist: the first multimodal human-AI system to unify dry-lab reasoning and wet-lab execution in a single, adaptive framework. By giving AI the ability to think with us and work alongside human scientists, seeing what we see, checking what we do, and learning from every run, LabOS turns the lab into a dynamic feedback loop. It moves us toward autonomous, self-improving discovery, where human intuition and machine rigor co-evolve to accelerate breakthroughs.

Results

1. LabOS Overview

LabOS links agentic AI for dry-lab reasoning with XR-enabled, human-in-the-loop wet-lab execution into an end-to-end workflow from design to validation. A multi-agent core (Manager for planning, Developer for coding/execution, and Critic for evaluation), plus a Tool-Creation module that auto-extends a shared Tool Ocean, nominates targets, plans experiments, runs analyses, and continually improves [7,8] (Fig. 1a). For multimodal human-AI collaboration in the wet-lab module, protocols can be generated by or input into LabOS, then streamed to the connected XR glasses. The interface on the XR glasses (i) renders the stepwise protocol in a Unity/Android application, (ii) verifies physical actions from the first-person video stream by invoking an embedded VLM for visual reasoning, and (iii) returns context-aware feedback in real time (Fig. 1b). All streams are time-stamped and logged with metadata for automated documentation.

2. LabOS Supports Self-Evolving AI Agent for Biomedical Reasoning

The dry-lab module of LabOS builds on and expands the STELLA self-evolving agent framework [8], a multi-agent reasoning system for biomedical research that integrates planning, development, critique, and tool creation.
Fig. 1(a) illustrates the agentic architecture. The Manager/Planner Agent decomposes scientific objectives into structured modules: candidate molecules, reagents, procedures, materials lists, instrument settings, and quality control checkpoints. The Developer Agent executes these steps by generating and running Python code for complex bioinformatics analyses, while the Critic Agent evaluates intermediate results and refines the workflow, forming an iterative reasoning loop. This AI co-scientist learns and improves from every problem it solves, continuously enhancing its technical abilities by proactively expanding reasoning strategies and creating new tools [7,8]. Two mechanisms drive its continuous self-improvement. First, a Template Library of successful reasoning workflows is dynamically updated, allowing the system to generalize from prior solutions. Second, a Tool Ocean maintains and expands a repository of analytical tools/code, databases, and APIs. The Tool-Creation Agent autonomously identifies, tests, and integrates new resources as needed. This architecture enables LabOS to plan and solve complex research tasks efficiently while continually enhancing its reasoning and analytical capabilities.
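To make the loop concrete, the sketch below mocks up how a Manager/Developer/Critic cycle with a Template Library and Tool Ocean could be wired together. This is a minimal illustration, not LabOS's actual implementation: all class names, method signatures, and the toy task are invented for exposition, and the real agents are LLM-backed rather than deterministic stubs.

```python
# Minimal, runnable sketch of a Manager/Developer/Critic loop with a
# Template Library and Tool Ocean. All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ToolOcean:
    tools: dict = field(default_factory=dict)  # tool name -> callable
    def ensure(self, name, factory):
        # Tool-Creation Agent behavior: register a missing tool on demand.
        if name not in self.tools:
            self.tools[name] = factory()
        return self.tools[name]

@dataclass
class TemplateLibrary:
    templates: list = field(default_factory=list)  # past successful plans
    def closest(self, task):
        return next((t["plan"] for t in self.templates if t["task"] in task), None)
    def record(self, task, plan):
        self.templates.append({"task": task, "plan": plan})

class Manager:
    def plan(self, task, seed=None):
        # Reuse a prior workflow if one matches, else start from scratch.
        return seed or ["rank_genes"]
    def replan(self, plan, feedback):
        return plan + [feedback]  # naively append the critic's suggestion

class Developer:
    def execute(self, plan, ocean):
        ranker = ocean.ensure("rank_genes", lambda: (lambda g: sorted(g)))
        genes = ["ITSN1", "CEACAM6", "B2M"]
        return {"ranked": ranker(genes), "enriched": "enrichment" in plan}

class Critic:
    def review(self, result):
        # Accept only once enrichment analysis has been added to the plan.
        ok = result["enriched"]
        return ok, None if ok else "enrichment"

def run_task(task, manager, developer, critic, ocean, library, max_iters=5):
    plan = manager.plan(task, seed=library.closest(task))
    for _ in range(max_iters):
        result = developer.execute(plan, ocean)
        accept, feedback = critic.review(result)
        if accept:
            library.record(task, plan)  # self-improvement: remember what worked
            return result
        plan = manager.replan(plan, feedback)
    return None

print(run_task("rank NK-resistance genes", Manager(), Developer(),
               Critic(), ToolOcean(), TemplateLibrary()))
```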
To evaluate the efficacy of the LabOS research agent, we test-ran it on a suite of challenging biomedical reasoning benchmarks. Our results show that LabOS consistently establishes a new state of the art, achieving top accuracy scores of approximately 32% on Humanity's Last Exam: Biomedicine, 61% on LAB-Bench: DBQA, and 65% on LAB-Bench: LitQA, outperforming the next-best models by up to 8% (Fig. 1c). Crucially, its performance systematically improves with continued use and test-time scaling, providing direct evidence of its self-evolving design (Fig. 1d). The self-improving framework enables LabOS to learn and grow like a human scientist, dynamically scaling to meet the ever-expanding challenges of biomedical discovery.

3. Training LabOS AI Co-Scientist To See and Reason in Physical Labs

To enable LabOS to see, understand, and reason in a physical lab environment, we sought to post-train a Vision-Language-Model (VLM) on a broad set of lab research videos. A VLM is a multimodal AI model that jointly learns from visual and textual inputs, allowing it to connect what it sees with what it reads or describes. It typically combines a vision encoder that processes images or video frames with a language model that interprets and generates text. By aligning these two modalities in a shared representation space, the VLM can recognize lab scenes, interpret actions, and reason about experimental workflows and outcomes.

3.1 LabSuperVision (LSV): Benchmarking AI for Scientific Visual Reasoning

To collect real-world data, we asked researchers to wear XR glasses or action cameras while performing their laboratory research. We then processed and annotated the collected data for evaluating AI models in lab research, assembling LabSuperVision (LSV), an expert-annotated laboratory video dataset designed for lab operation video understanding and reasoning. LSV comprises >200 high-quality video sessions (typically 2-10 min; up to 45 min) recorded by 7 researchers across diverse lab scenes (bench, tissue culture room, and instrument bays), capturing realistic operations and movement between spaces. Each session is associated with an expert-generated gold-standard protocol; human scientists then performed the corresponding experiments while wearing cameras or smart glasses, recording all details. For annotation, a team of five human experts annotated each video session with: (1) step segments with start/stop times aligned to the gold-standard protocol; (2) error and issue events labeled by type (e.g., sterile breach, step mismatch, timing deviation); and (3) critical parameters, materials, and reagents when applicable (Fig. 3).

With the LSV benchmark, we evaluated whether leading commercial AI models can interpret and troubleshoot scientific procedures from lab experiment videos. We tested four leading AI models spanning both open and proprietary foundation models: Gemini 2.5 Pro, Cosmos-Reason-1, Qwen 2.5-VL-7B, and GPT-4o (Fig. 3c). When prompted with a new video, each model is tasked with one of two jobs: (1) protocol alignment, where the model must generate a stepwise protocol describing procedural actions and parameters; or (2) issue identification, where the model must troubleshoot based on the video and identify any deviation from the gold standard or any handling errors, if present. The model outputs are then compared against the ground truth using two methods: a total of five human experts compared VLM outputs against the gold-standard protocol and error labeling on a 0-5 scale (5 being the highest, for perfect alignment); in parallel, we used GPT-5 as a comparator to examine and score outputs using the same rubrics.

Figure 2. LabOS-VLM for visual reasoning in laboratories. a, LabSuperVision (LSV) benchmark dashboard with egocentric and fixed-view lab videos paired to gold-standard protocols, including notes, issues, or errors labeled by human experts. b, LSV benchmarking pipeline in four stages: (1) data collection: multi-modal recordings from diverse facilities via fixed cameras and XR smart glasses; (2) data processing: expert curation with light automation, aligned to reference protocols and parameters; (3) dataset assembly: compressed corpus of 200+ experiment-session videos; (4) evaluation: human and GPT-5 assessments of model performance on LSV. c, Benchmarking AI models on LSV via human experts (n=5) and GPT-5. Scores range 0-5 (5 = perfect). d, LabOS-VLM post-training using FineBio, JoVE, and LSV datasets via supervised fine-tuning (SFT) and reinforcement learning (GRPO). e, The LabOS-VLM family outperforms baselines on protocol generation quality (left) and error-detection rate (right) on LSV. f-g, Real-world testing of LabOS-VLM: (f) error detection/correction in a tissue culture experiment; (g) context-aware AI-generated step-by-step instructions grounded in visual understanding.

Our findings show that leading AI models struggle to understand fine-grained laboratory research workflows: the top-performing model, Gemini 2.5 Pro, scored only 2.86 out of 5 in protocol alignment, moderately better than the open-source NVIDIA Cosmos-1, which scored 2.24; for issue/error identification, leading models like Gemini and GPT-4o only managed to score ~2 out of 5 (Fig. 2c). This reflects the gap between open-world foundation models' capabilities and what is needed for scientific research in physical labs. Our results surface where today's AI models have some success (protocol alignment) and where they have substantial limitations (error detection/correction).
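The paper reports rubric scores aggregated over five human experts and a GPT-5 comparator, but the exact aggregation is not specified. A minimal sketch of one plausible aggregation (mean and spread per model, task, and rater type) follows, with all records and naming conventions invented for illustration.

```python
# Hypothetical aggregation of 0-5 rubric scores on the two LSV tasks.
# Each record: (model, task, rater, score); raters are five human experts
# plus a GPT-5 comparator scored with the same rubric.
from statistics import mean, stdev
from collections import defaultdict

scores = [
    ("Gemini 2.5 Pro", "protocol_alignment", "expert1", 3.0),
    ("Gemini 2.5 Pro", "protocol_alignment", "gpt5", 2.5),
    ("GPT-4o", "issue_identification", "expert1", 2.0),
    # ... one row per (model, task, rater, video) in the real evaluation
]

by_key = defaultdict(list)
for model, task, rater, s in scores:
    kind = "llm" if rater == "gpt5" else "human"
    by_key[(model, task, kind)].append(s)

for (model, task, kind), vals in sorted(by_key.items()):
    sd = stdev(vals) if len(vals) > 1 else 0.0
    print(f"{model:16s} {task:22s} {kind:5s} mean={mean(vals):.2f} sd={sd:.2f}")
```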
3.2 LabOS-VLM: Vision-Language Model Trained for Visual Reasoning in Laboratories

Given the aforementioned gap, we sought to train a vision-language model for visual reasoning in lab environments. For this purpose, we assembled an experimental video collection including FineBio (expert-annotated wet-lab videos), JoVE (standardized procedure videos), and LSV. We split the datasets 80/10/10 for training, validation, and held-out testing. Using Qwen-VL as the base model, we performed post-training by supervised fine-tuning (SFT) with LoRA on paired video-text examples, followed by reinforcement fine-tuning to improve visual reasoning. In the latter reinforcement learning step, we used Group Relative Policy Optimization (GRPO) [9] with LoRA updates, where the model rolled out a group of multiple candidate responses per prompted scenario and received rewards designed to evaluate the candidate outputs. The reward is rule-based, accounting for lab procedural accuracy, safety compliance, and level of experimental detail, and rewards are computed relative to the other candidates within each group to favor expert-consistent reasoning. Using this SFT->RL pipeline, we adapted the base model and obtained the LabOS-VLM family (7B, 32B, 72B, 235B; Fig. 2e-f).
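The exact reward rules are not published, but the group-relative mechanic of GRPO can be illustrated compactly: each candidate in a rollout group is scored by a rule-based reward, and its advantage is its standardized score relative to the group. In the sketch below, the three reward terms are stand-ins for the paper's procedural-accuracy, safety-compliance, and detail-level rubric, not the actual LabOS implementation.

```python
# Sketch of GRPO-style group-relative advantages with a rule-based reward.
# The reward terms are illustrative stand-ins for the paper's rubric.
import numpy as np

def rule_based_reward(candidate: str, reference_steps: list[str]) -> float:
    steps_hit = sum(s in candidate for s in reference_steps)         # procedural accuracy
    safety = 0.0 if "open flame near ethanol" in candidate else 1.0  # safety compliance
    detail = min(len(candidate.split()) / 100.0, 1.0)                # level of detail
    return steps_hit / max(len(reference_steps), 1) + safety + 0.5 * detail

def group_relative_advantages(rewards: np.ndarray) -> np.ndarray:
    # GRPO: standardize rewards within the rollout group; each candidate's
    # advantage is its z-score relative to its peers.
    return (rewards - rewards.mean()) / (rewards.std() + 1e-8)

reference = ["thaw cells", "add media", "centrifuge 300g 5min"]
group = [
    "thaw cells, add media, centrifuge 300g 5min, label tubes",
    "add media then image",
    "thaw cells near open flame near ethanol",
]
r = np.array([rule_based_reward(c, reference) for c in group])
print("rewards:   ", r.round(3))
print("advantages:", group_relative_advantages(r).round(3))
```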
Across model scales, the LabOS-VLM family consistently outperforms the base model on lab video reasoning tasks (measured via protocol generation quality and error detection accuracy). On the held-out subset of the evaluation data, LabOS-VLM-235B achieves >90% error detection accuracy, outperforming Claude Opus 4.1, GPT-5, and Gemini 2.5 Pro on all evaluated metrics, establishing LabOS-VLM as a strong module for multimodal lab intelligence and AI-assisted experimentation (Fig. 2e). Our SFT->RL pipeline proved to enhance the model's step-wise visual reasoning and error detection and correction, a basis for building an AI co-scientist that can see and work with researchers in real time in scientific settings.

We next evaluated the fine-tuned LabOS-VLM on egocentric videos from real experiments. First, we supplied a series of videos about delivering CRISPR gene editors to human cells via transfection. The model successfully distinguished correct vs. incorrect operations, pinpointing the specific errors that the human operator made across 2 distinct issues (Fig. 2f). Second, we provided experiment records from a multi-step setup on preparing Cas9 RNA complex for gene editing, along with a reference protocol, to the LabOS co-scientist. The VLM-powered AI agent recognized each step, generated step-wise guidance, matched actions to the protocol, issued context-aware warnings, and suggested the next action. These results validate LabOS-VLM's capabilities in authentic wet-lab settings.

Figure 3. LabOS on XR glasses enables spatially-grounded human-AI collaboration in physical laboratories. a, Live streaming from XR glasses to the LabOS VLM server enables real-time scene understanding, feedback, and interactions between human and AI agents. b, Deployment of the LabOS AI+XR system across lab settings. c, Live action feed from the wearer's perspective. d, LabOS AI+XR provides guidance and a summary of the lab operation. e, Detected deviations trigger inline error prompts and suggested corrections, mitigating human researchers' oversights. f, Feed-forward Gaussian splatting supports camera tracking and multi-view, metric-depth 3D reconstruction, enabling object localization and spatio-temporal reasoning for scientific workflows. All photographs shown are of the authors.

4. Extended-Reality (XR) Glasses Enable Real-Time Human-AI Collaboration

Human-AI interaction in LabOS is mediated through the XR interface. Researchers engage with the AI co-scientist primarily via the LabOS XR glasses, which live-stream what the human scientist sees and hears while reconstructing the 3D lab environment. Modern XR hardware supports interface rendering, gesture recognition, and running a Unity/Android application. Data from the glasses are streamed to a local GPU server (or the cloud) for real-time agentic inference. The server receives short video segments (e.g., 5-10 s), forwards them to the LabOS AI agent for analysis and reasoning (video input at 4 frames/s), and returns a structured JSON output to the XR glasses. The XR app parses this JSON message and provides real-time visual and audio feedback to researchers at the bench (Fig. 3a-e).
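The schema of the structured JSON is not given in the paper; as an illustration of the glasses-side half of this loop, a handler might map one server message to overlay, alert, and audio events as follows (all field names here are hypothetical).

```python
# Illustrative glasses-side handler for the server's structured output.
# The JSON field names are invented; the paper only states that the server
# returns structured JSON that the XR app parses for visual/audio feedback.
import json

example_message = """
{
  "segment_id": 42,
  "current_step": "add 500 uL media to well A1",
  "step_status": "in_progress",
  "deviations": [
    {"type": "timing_deviation", "detail": "incubation exceeded 10 min"}
  ],
  "next_action": "aspirate supernatant",
  "tts_prompt": "Incubation is running long; aspirate the supernatant now."
}
"""

def handle_feedback(raw: str) -> list[tuple[str, str]]:
    """Turn one server message into (channel, payload) UI events."""
    msg = json.loads(raw)
    events = [("overlay", f"Step: {msg['current_step']} [{msg['step_status']}]")]
    for dev in msg.get("deviations", []):
        events.append(("alert", f"{dev['type']}: {dev['detail']}"))
    if msg.get("next_action"):
        events.append(("overlay", f"Next: {msg['next_action']}"))
    if msg.get("tts_prompt"):
        events.append(("audio", msg["tts_prompt"]))
    return events

for channel, payload in handle_feedback(example_message):
    print(f"[{channel}] {payload}")
```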
We tested both AR/XR glasses and VR/XR headsets. While both supported the above functions, we chose to launch the initial deployment of LabOS using AR/XR glasses, as they achieved the key specifications needed for a human-centered intelligent lab: open-peripheral, lightweight hardware (less than 3 ounces/85 grams) allows convenient wearing in labs where goggles are already used; 2+ hours of battery life supports extended operation (via a wireless neckband or mobile battery); display brightness over 1200 nits is sufficient for indoor labs; and 6DoF and hand-gesture support enable 3D-aware human-AI interactions.

LabOS also supports 3D modeling of lab environments and workflows (Fig. 3f) via videos captured on researcher-worn smart glasses, optionally augmented with multiview cameras, using state-of-the-art 3D/4D reconstruction algorithms. In our experiment, we utilize MapAnything [10] for multiview environment and egocentric tracking, camera positioning, depth maps, and point-cloud reconstruction, and incorporate 3D Gaussian splatting. Gaussian splatting models scenes as sets of millions of Gaussian distributions whose parameters are inferred from input 2D images captured from different camera positions (and times, if needed), enabling photorealistic, temporally consistent reconstruction. 4DLangSplat [11] can further help produce a time-aware, semantically indexable 3D environment to support object-centric tracking.
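For intuition about the splatting step only, the toy sketch below forward-renders a few isotropic 3D Gaussians by pinhole projection and front-to-back alpha compositing. Real pipelines such as those cited above additionally optimize the Gaussian parameters against the input images and use anisotropic covariances; both are omitted here.

```python
# Toy forward rendering of isotropic 3D Gaussians ("splats") onto an image.
# Real pipelines fit means/covariances/colors/opacities to input photos;
# this sketch only shows the projection + alpha-compositing step.
import numpy as np

H, W, f = 64, 64, 60.0                     # image size and focal length
means = np.array([[0.0, 0.0, 4.0],         # one Gaussian per row: x, y, z
                  [0.5, 0.2, 3.0],
                  [-0.4, -0.3, 5.0]])
colors = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
scales = np.array([0.15, 0.10, 0.20])      # isotropic 3D standard deviations
opacities = np.array([0.8, 0.9, 0.7])

order = np.argsort(means[:, 2])            # composite front-to-back by depth
img = np.zeros((H, W, 3))
transmittance = np.ones((H, W))

ys, xs = np.mgrid[0:H, 0:W]
for i in order:
    x, y, z = means[i]
    u, v = W / 2 + f * x / z, H / 2 + f * y / z   # pinhole projection
    sigma_px = f * scales[i] / z                  # projected std dev in pixels
    g = np.exp(-((xs - u) ** 2 + (ys - v) ** 2) / (2 * sigma_px ** 2))
    alpha = np.clip(opacities[i] * g, 0, 0.999)
    img += (transmittance * alpha)[..., None] * colors[i]
    transmittance *= 1 - alpha                    # light blocked by this splat

print("rendered image shape:", img.shape, "max pixel:", img.max().round(3))
```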
Through remote supervision and guidance, the LabOS AI co-scientist boosts reproducibility by standardizing experiment capture and logging non-fatal deviations and context variables. Expert-run recordings from XR glasses also serve as instructional material and training data to improve LabOS agents and downstream models.

5. LabOS In Action: Human-AI Collaboration for Biomedical Discovery

5.1 LabOS for cancer immunotherapy target discovery

A central challenge in cancer immunology is to identify regulators of immune evasion that mediate tumor resistance to cytotoxic lymphocytes such as natural killer (NK) cells. Traditional screening approaches are limited by throughput and rely on human expertise for downstream analysis. We prompted LabOS to investigate genes regulating sensitivity and resistance to NK cell-mediated killing of melanoma cancer cells based on a functional screen. Using a functional CRISPR activation (CRISPRa) screen in A375 melanoma tumor cells, treated with or without primary human NK cells, the LabOS AI co-scientist autonomously identified and refined candidate regulators of NK-mediated cytotoxic resistance [12]. Within a wet-lab-in-the-loop pipeline, the LabOS AI agent dynamically re-ranked genes through iterative functional enrichment analysis, revealing CEACAM6 as a top regulator of NK resistance.

Figure 4. LabOS applications in target discovery, mechanistic investigation, and stem cell research. a-e, Functional screening study for natural killer (NK) immunotherapy target identification. LabOS was prompted with the research task of identifying regulators of tumor resistance to NK killing, based on results from a CRISPR activation screen in A375 melanoma cells co-cultured with primary human NK cells. b-e, AI-generated screen analysis reveals differential gene enrichment upon NK treatment (b); the AI agent re-ranked the key target, CEACAM6, moving it from a low to a top-ranked position (c), and performed automated survival analysis using cancer patient data, stratified by CEACAM6 expression (d). Wet-lab functional validation confirms enhanced NK resistance upon CRISPRa activation of CEACAM6 (e). f, LabOS applied to mechanistic investigation of cell fusion factors. LabOS proposes ITSN1 as a regulator of cell-cell fusion, providing its supporting evidence and reasoning trajectories. Experimental knock-out of the top candidate gene ITSN1 confirmed this gene's impact on cell fusion in the wet lab, validating the AI co-scientist's hypothesis. g, LabOS provides live copiloting to researchers in stem cell gene-editing experiments. The AI-XR agent enables live guidance and error detection to safeguard multi-step experiments, where scientists can use LabOS to monitor and track issues automatically. h, The AI-XR agent can learn advanced skills in experiments and document them, such as the lentiviral transduction of human iPSC stem cells. All photographs shown are of the authors.

AI reasoning further generated explainable evidence by automatically performing survival analyses on The Cancer Genome Atlas (TCGA) datasets, stratified by CEACAM6 expression, thereby linking functional screening results to patient outcomes. Experimental validation using individual CRISPRa perturbation confirmed that CEACAM6 activation significantly increased tumor resistance to NK killing (Fig. 4a-g).

5.2 LabOS guides mechanistic investigation of a cell fusion regulator

Beyond functional screening, LabOS extends to mechanistic hypothesis generation and validation. In the next study, we prompted LabOS with the research task of identifying key genes that control cell-cell fusion. Cell fusion is a fundamental biological process, key to basic biology (e.g., muscle development and viral infection both require cell fusion) and to translational technology (e.g., cell fusion is a key step mediating efficient gene delivery). LabOS proposed and ranked candidates using pathway enrichment, interaction priors, and functional evidence. The AI co-scientist prioritized ITSN1 as a key regulator of cell-cell fusion, with automated evidence generation. Human researchers then used CRISPR interference (CRISPRi) coupled with a cell fusion assay (induced by the fusogenic protein FAST [13]) in U2OS cells to measure whether ITSN1 perturbation impacts the cell fusion phenotype. Researchers further performed quantitative imaging and cell-based assays and observed significant inhibition of cell fusion upon ITSN1 knockdown in the wet lab, confirming the role of ITSN1 (Fig. 4f).

5.3 LabOS copilots researchers in complex stem-cell experiments

Reproducibility in advanced wet-lab domains is hindered by tacit expertise: critical steps are encoded in human memory and lab-specific "know-how," not necessarily in written protocols. In stem cell engineering, for example, small deviations in timing, reagent handling, or cell density and cell state assessment can drive large outcome variation, impeding transfer of skills across operators and laboratories.

First, we used LabOS with XR glasses to guide researchers through CRISPR gene-editing of human stem cells for disease-modeling workflows [14]. Via XR glasses, LabOS agents captured all details as scientists performed gene knock-out experiments in human induced pluripotent stem cell (iPSC)-derived cardiac fibroblasts, useful for modeling diseases such as heart fibrosis [15]. LabOS copiloted the multi-step procedure with visual reasoning, precisely interpreting bench actions and flagging issues in real time (Fig. 4g). Second, because efficient gene delivery underpins construction of engineered stem-cell lines for perturbation and drug screening, we used LabOS with XR glasses to guide and automatically document lentiviral transduction experiments in human iPSCs performed by expert-level scientists (Fig. 4h). LabOS could then monitor and help train junior scientists on this complex experiment. Here, LabOS can act as an AI tutor: recording expert practice, digitizing key parameters, and coaching a novice to expert-level performance without requiring side-by-side training or extended periods of trial and error. Through real-time, multimodal human-AI interactions, the LabOS system can provide context-aware guidance, validation, and performance tracking.

Summary

LabOS prototypes what an AI co-scientist can be: a system that sees, reasons, and helps run the lab. By pairing AI agents with real-time, XR-guided human-AI interaction and data-driven reasoning, it enables faster discovery, reproducible training, and precise operation. Across use cases (hypothesis generation, automated documentation, error correction, rapid skill transfer, iPSC experiment guidance, and insights into NK-tumor pathways and cell-fusion regulators), LabOS turns the lab into an adaptive, collaborative workspace where human scientists and AI work side by side to accelerate discovery, generate reproducible science, and improve together.
Acknowledgement

We extend our sincere gratitude to NVIDIA, in particular the XR/AR-VR, VLM/agentic-AI, edge computing, and business partnership teams, for their invaluable collaboration on VLM fine-tuning development, agentic AI integration, and XR live streaming, as well as for GPU server support. We procured the AR/XR glasses hardware from Viture, and we thank the Viture team for support with wireless streaming and the Unity/Android app. We procured the VR/XR glasses thanks to Dr. Ze Yuan from the UReality team, whom we also thank for their software support. We also thank the Nebius team for GPU compute and infrastructure support. The results on cancer patient survival analysis here are in whole or part based upon data generated by the TCGA Research Network: https://www.cancer.gov/tcga.

Author contributions

L.C. and M.W. conceived and supervised the project. L.C. led the research plan/organization, hardware/XR solution build, study design, and paper writing. AI & Data: Z.Z. and R.J. co-led the STELLA AI agents and benchmarking. X.W. led the data collection, annotation, and human evaluation efforts, and contributed to setting up the XR glasses and live experimentation. Z.Z., Y.W., and M.G. contributed to VLM training and benchmarking. D.S., S.L., and A.S. co-led 3D/4D modeling, object recognition, and tracking of lab workflows. D.Y., X.W., P.L., Y.Z., S.S., and N.Z. contributed to data collection, scientific annotation, and human evaluation of VLMs. Experiment: X.W. and R.D. co-led the NK experiments, to which J.W. also contributed and which J.W. supervised. D.Y. led the fusogen experiment and co-led the stem cell experiments with X.W.; J.C.W. co-supervised the stem cell experiments.

Reference

1. Jumper, J. et al. Highly accurate protein structure prediction with AlphaFold. Nature 596, 583-589 (2021).
2. Wang, H. et al. Scientific discovery in the age of artificial intelligence. Nature 620, 47-60 (2023).
3. Baker, B. et al. Emergent autonomous scientific research capabilities of large language models. Nat. Mach. Intell. 6, 876-887 (2024).
4. Qu, Y. et al. CRISPR-GPT for agentic automation of gene-editing experiments. Nat. Biomed. Eng. (2025).
5. Phan, L. et al. Humanity's Last Exam (HLE). arXiv preprint arXiv:2501.14249 (2025).
6. Laurent, J. M., Janizek, J. D., Ruzo, M. et al. LAB-Bench: Measuring Capabilities of Language Models for Biology Research. arXiv preprint arXiv:2407.10362 (2024).
7. Qiu, J. et al. Alita: Generalist agent enabling scalable agentic reasoning with minimal predefinition and maximal self-evolution. arXiv preprint arXiv:2505.20286 (2025).
8. Jin, R. et al. STELLA: Self-evolving LLM agent for biomedical research. arXiv preprint arXiv:2507.02004 (2025).
9. DeepSeek Team. Group Relative Policy Optimization (GRPO) for reinforcement learning of reasoning. arXiv preprint arXiv:2504.09374 (2025).
10. Keetha, N., Müller, N., Schönberger, J., Porzi, L., Zhang, Y., Fischer, T. & Kontschieder, P. MapAnything: Universal feed-forward metric 3D reconstruction. arXiv preprint arXiv:2509.13414 (2025).
11. Li, W., Zhou, R., Zhou, J., Song, Y., Herter, J., Qin, M. & Pfister, H. 4D-LangSplat: 4D language Gaussian splatting via multimodal large language models. In Proc. CVPR, 22001-22011 (2025).
12. Dinesh, R. K., Wang, X., Mohammad, I. A., Gunasekaran, P., Stiklioraitis, K., Villafuerte, J. R., Rao, A., Hernandez-Lopez, R. A., Sunwoo, J. B. & Cong, L. Surfaceome CRISPR activation screening uncovers ligands regulating tumor sensitivity to NK cell killing. bioRxiv 2025-09 (2025).
13. Duncan, R. Fusogenic reoviruses and their fusion-associated small transmembrane (FAST) proteins. Annu. Rev. Virol. 6, 341-363 (2019).
14. Takahashi, K. & Yamanaka, S. Induction of pluripotent stem cells from mouse embryonic and adult fibroblast cultures by defined factors. Cell 126(4), 663-676 (2006).
15. Zhang, H., Thai, P. N., Shivnaraine, R. V., Ren, L., Wu, X., Siepe, D. H., Liu, Y., Tu, C., Shin, H. S., Caudal, A. & Mukherjee, S. Multiscale drug screening for cardiac fibrosis identifies MD2 as a therapeutic target. Cell 187(25), 7143-7163 (2024).
Deuterated water ice on the satellites of Saturn

Michael E. Brown,1 Samantha K. Trumbo,2 M. Ryleigh Davis,1 and Swaroop Chandra1

1 Division of Geological and Planetary Sciences, California Institute of Technology, Pasadena, CA 91125, USA
2 Department of Astronomy & Astrophysics, University of California, San Diego, La Jolla, CA 92093, USA

Email: mbrown@caltech.edu

ABSTRACT

The deuterium to hydrogen ratio in water ice in a planetary body carries important information on the history of water processing and delivery in the protostellar nebula. For a giant planet satellite, the D/H ratio is also affected by the processes and temperatures of the circumplanetary or circumstellar environment in which the satellites formed. Here we present robust JWST spectroscopic detections of the 4.14 µm O-D stretch absorption line (analogous to the 3 µm water O-H stretch) on the mid-sized Saturnian satellites and use these detections to infer a D/H ratio on each satellite. Within the limitations of the technique, we find that all of the satellites are consistent with having a D/H ratio of about 1.5× Vienna Standard Mean Ocean Water (VSMOW), which is about an order of magnitude higher than the value of the atmosphere of Saturn. A much higher previously reported D/H ratio for Phoebe is ruled out at the 10σ level, and a 3σ upper limit of 2.3× VSMOW is obtained. The elevated D/H ratios demonstrate that the solid planetesimals and pebbles that built the satellites never sublimed and re-equilibrated with the gaseous circumplanetary disk. The similarity of the D/H measurements across all satellites suggests that the D/H ratio of water ice in the vicinity of Saturn at the time of satellite formation was also approximately 1.5× VSMOW.

1. INTRODUCTION

Deuterated water is a powerful tracer of the processing of interstellar ice in planetary systems, providing a window into how interstellar ices, organics, and dust are incorporated into the disks (Cleeves et al. 2014; Yang et al. 2013; Albertsson et al. 2014). In our own protoplanetary nebula, dust grains delivered from cold molecular clouds could have carried water ice with a D/H ratio enriched by orders of magnitude above the bulk solar system value of about 2.1 × 10^-5 (Geiss & Gloeckler 1998). In warmer regions of the disk, sublimation of these ices into the gas phase would cause quick equilibration of the D/H ratio with the bulk H2 (Lécluse & Robert 1994), leading to water with solar D/H values. In the outer regions of the disk, direct incorporation of this ice into growing bodies, or sublimation of the ice in regions too cold to re-equilibrate with H2, would preserve the elevated D/H values (Yang et al. 2013). The values of D/H across the solar system thus tell the story of transport, sublimation, and temperature in the disk.

In the inner solar system, all water ice grains would have completely sublimated, so D/H values should be expected to be solar. Nonetheless, the inner solar system is enriched by about a factor of ∼7 compared to the solar value, with a value of about 1.5 × 10^-4, as measured on Earth, Mars, Vesta, and C- and S-type asteroids (Hallis 2017). Such elevated values already show that some material from interstellar ices is eventually transported as solids into the terrestrial region, though the precise mechanisms are debated (Alexander 2017).
In the colder outer portions of the disk, interstellar ices should be able to be more directly incorporated into icy bodies like comets, Kuiper belt objects, or icy satellites. Models for outer solar system water ice D/H ratios predict values ranging from a factor of several enriched over the terrestrial value (Yang et al. 2013; Furuya et al. 2017) to a factor of 100 over terrestrial (Albertsson et al. 2014), depending on differences in stellar outflow, ice transport and incorporation, and disk temperatures.

Comets are the best studied messengers from the outer solar system, and they have been found to have D/H ranging from the terrestrial value to enrichments by about a factor of 4 (e.g., Bockelée-Morvan et al. 2015; Lis et al. 2019; Müller et al. 2022). The sublimation and jetting processes active on these rapidly heating comets could lead to fractionation effects that could change the D/H measured in the gaseous coma (Brown et al. 2012), making interpretation more difficult, but it appears plausible that comets may have formed over a variety of distances and temperatures in the nebula and that some increase in D/H with distance is present.

The D/H ratios of the satellites of the giant planets should hold additional information on the D/H values in the middle solar system and on the processing of ices within the circumplanetary environments. Based on our understanding of isotopic exchange in circumstellar environments (Yang et al. 2013; Albertsson et al. 2014), satellite formation in a hot circumplanetary environment should lead to complete re-equilibration of the D/H ratio to the value of the dominant H2 gas, which, for Jupiter and Saturn, is approximately the solar value (Pierel et al. 2017). Temperature gradients across the formation region could be revealed by systematic D/H gradients. Little is known of these D/H ratios, but using spectra from the Visual and Infrared Mapping Spectrograph (VIMS) onboard the Cassini spacecraft, Clark et al. (2019, hereafter C19) demonstrated that the fundamental O-D stretch absorption feature, analogous to the 3 µm O-H stretch feature in water ice, is detectable in the rings and icy satellites of Saturn at approximately 4.14 µm. While the absorption features were near the limit of detection for the VIMS data, new JWST observations have robustly shown the 4.14 µm feature in the rings of Saturn (Hedman et al. 2024). Here we present JWST reflectance spectra of the 4.14 µm region of the icy Saturnian satellites at higher signal-to-noise and spectral resolution than obtained for the VIMS observations. We then discuss the implications of these detections of deuterated water on the surfaces of these satellites for both the formation of the solar system and the Saturnian system.

2. OBSERVATIONS AND ANALYSIS

JWST observations of the Saturnian satellites were obtained between 16-Oct-2023 and 25-Jul-2024. The observations and data reduction are fully described in Brown et al. (2025), so are only briefly summarized here. Spectra of the leading and trailing hemispheres of the inner medium-sized satellites of Saturn (Mimas, Tethys, Dione, and Rhea) and of the outer co-rotating satellite Iapetus, as well as single observations of the non-corotating Hyperion and Phoebe, were taken using the G235H and G395M grisms, covering the wavelength range from 1.7 to 5.2 µm, with a gap between 2.38 and 2.48 µm in the G235H setting.
Critically, the G395M grating does not have the 4.08–4.26 µm gap that the higher resolution G395H grating does; the O-D 4.14 µm line would fall within the gap for higher resolution data. The full spectrum of each satellite can be seen in Brown et al. (2025).

Figure 1. JWST spectrum of the leading hemisphere of Dione with an inset highlighting the 4.14 µm O-D stretch absorption and the 4.26 µm CO2 absorption.

Fig. 1 shows the JWST spectrum of the leading hemisphere of Dione, which is dominated by the usual 2, 3, and 4.5 µm absorption features of water ice. The inset shows two small absorption features between 4.1 and 4.3 µm. These features are the 4.14 µm O-D stretch absorption and the 4.26 µm CO2 absorption discussed in Brown et al. (2025).

The absorption due to the O-D stretch is seen on nearly all of the satellites. To better visualize each of the regions around the O-D stretch, we divide each spectrum by a continuum, which we construct by fitting the spectrum from 4.0 to 4.2 µm to a second order polynomial while excluding the region from 4.10 to 4.17 µm. Each of the continuum-divided spectra is shown in Fig. 2. In addition, we show a least-squares gaussian fit to the continuum-divided spectrum, where we fix the width of the gaussian to be 0.0124 µm – a value found from first allowing this parameter to be free and then taking the average of the results. For our least-squares fit, we derive uncertainties in the original spectrum by calculating the root-mean-square deviation from the spread after our continuum division. The fractional absorption, which we define as the depth of the absorption compared to the continuum-divided spectrum, using our fixed gaussian width, is shown in Table 1.

3. THE D/H RATIO

While the measurements of the 4.14 µm absorption feature confirm the detection of deuterated water on these objects, converting this detection into a D/H ratio is more complicated. The detected O-D stretch feature is analogous to the 3 µm O-H stretch for non-deuterated water. The O-H stretch feature is saturated in all of the spectra, so a simple ratio of the 4.14 to the 3 µm depth will not yield meaningful results.

C19 use a reflectance spectrum radiative transfer model to understand the spectral effects of incorporation of deuterated water into a spectrum. For objects with significant non-water ice components, they use this method for their final derived D/H values. For clean water ice, they showed from their modeling that for grain sizes between about 5 and 100 µm – which brackets the range of grain sizes on these satellites – the ratio of the 4.14 µm HDO absorption to that of the 2 µm H2O combination band stays approximately constant and can be used to estimate the D/H ratio. They calibrated their modeling approach with three laboratory spectra of water ice with D/H values of 1, 3.2, and 16 times that of Vienna Standard Mean Ocean Water (VSMOW) – which has a D/H value of 1.56 × 10−4 (Hagemann et al. 1970) – to which they fit a third-order polynomial and suggest an accuracy of ∼10% for clean water ice. As will be shown below, the D/H ratios derived for the inner icy satellites using this method match that found from the in situ measurements of the Enceladus plume within the uncertainties (Waite Jr et al. 2009), increasing confidence in the technique.
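As a concrete illustration, the band-depth measurement described in Section 2 can be sketched in a few lines of Python. This is a minimal sketch, not a released pipeline; it assumes the spectrum is available as NumPy arrays of wavelength (in µm) and relative reflectance, and all function and variable names are ours.

```python
import numpy as np
from scipy.optimize import curve_fit

GAUSS_WIDTH = 0.0124  # fixed gaussian width in um, from the text

def od_band_depth(wave, refl):
    """Fractional depth of the 4.14 um O-D band: second-order polynomial
    continuum fit from 4.0-4.2 um (excluding the band), continuum division,
    then a fixed-width gaussian fit, following the procedure in the text."""
    region = (wave > 4.0) & (wave < 4.2)
    band = (wave > 4.10) & (wave < 4.17)
    cont = region & ~band
    # Continuum fit excludes the band region.
    coeffs = np.polyfit(wave[cont], refl[cont], 2)
    divided = refl / np.polyval(coeffs, wave)
    # Per-point uncertainty from the rms scatter after continuum division.
    rms = np.std(divided[cont])

    def gauss(w, depth, center):
        return 1.0 - depth * np.exp(-0.5 * ((w - center) / GAUSS_WIDTH) ** 2)

    popt, pcov = curve_fit(gauss, wave[region], divided[region],
                           p0=[0.03, 4.14],
                           sigma=np.full(region.sum(), rms),
                           absolute_sigma=True)
    return popt[0], np.sqrt(pcov[0, 0])  # depth and its 1-sigma uncertainty
```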
Table 1. Measured and derived spectral parameters

Satellite            Longitude   O-D absorption   2 µm absorption   4.14-to-2 µm ratio   D/H
                     (deg)       (%)              (%)                                    (× VSMOW)
Mimas (leading)      52          4.5 ± 0.3        67                0.066 ± 0.005        1.6 ± 0.1
Mimas (trailing)     262         3.6 ± 0.3        60                0.060 ± 0.005        1.4 ± 0.1
Tethys (leading)     78          3.9 ± 0.3        66                0.059 ± 0.005        1.4 ± 0.1
Tethys (trailing)    271         3.2 ± 0.3        63                0.051 ± 0.004        1.2 ± 0.1
Dione (leading)      98          5.1 ± 0.4        60                0.085 ± 0.006        2.1 ± 0.2
Dione (trailing)     274         3.1 ± 0.3        45                0.068 ± 0.007        1.6 ± 0.2
Rhea (leading)       76          4.0 ± 0.2        65                0.061 ± 0.004        1.4 ± 0.1
Rhea (trailing)      261         3.2 ± 0.3        56                0.058 ± 0.006        1.3 ± 0.1
Hyperion             –           3.1 ± 0.5        54                0.057 ± 0.009        1.3 ± 0.2
Iapetus (leading)    80          0.8 ± 0.2        18                0.045 ± 0.009        1.0 ± 0.2
Iapetus (trailing)   267         2.5 ± 0.3        69                0.036 ± 0.005        0.8 ± 0.1
Phoebe^a             65          1.3 ± 0.8        15                –                    1.7 ± 1.1

Note – Longitude is the sub-observer longitude at the time of observation. Hyperion has no defined longitude system. The D/H ratio is given relative to VSMOW.
^a The D/H value for Phoebe is derived by comparison to radiative transfer models of C19, rather than the ratio method used for the other satellites.

This band ratio approach has important benefits and important limitations. The most important benefit is that the method is straightforward, directly related to the observations, and easily reproducible. In addition, the method is insensitive to grain size over the range relevant for the Saturnian satellites, and the ratio of the depth of the 2 to 4.14 µm absorption lines is preserved if the water ice is linearly mixed with a spectrally neutral material, which is possibly relevant on the darker satellites. This band ratio is not preserved if water ice is intimately mixed with a spectrally neutral material. In this case, the band ratio can either rise, if the material is darker than the water ice at all wavelengths, or it can fall, if the material is brighter than the water ice. Phoebe, which is optically dark and has more muted water signatures than the other mid-sized satellites (Clark et al. 2005), is likely affected by these issues. Other potential uncertainties can arise owing to the different possible states of the water ice (crystalline vs. amorphous vs. a mixture) and their possibly different effects on both the 2 µm and 4.14 µm features, which could affect all measurements.

With the caveats noted above in mind, we adopt the C19 ratio method for converting the spectra to values of D/H for this initial analysis for all satellites except for Phoebe (discussed below). We calculate the depth of the 2 µm absorption by fitting a linear continuum to the median of the spectrum between 1.79 and 1.871 µm and the median between 2.23 and 2.26 µm, dividing the full spectrum by this continuum, and measuring the maximum fractional depth in the region between 1.79 and 2.24 µm. The depth of the 4.14 µm line is taken from the gaussian fits in Figure 2.

Figure 2. The continuum-divided spectra of the Saturnian satellites, in the region of the O-D absorption. The red line shows a gaussian fit to the data using a fixed width. Most satellites have separate measurements for the leading and trailing hemispheres, but Hyperion and Phoebe, which are not synchronously rotating, only have single measurements. The O-D absorption is robustly detected at nearly every satellite.
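The 2 µm depth measurement and the ratio-to-D/H conversion can be sketched the same way. A minimal sketch follows; note that the calibration coefficients below are illustrative placeholders roughly consistent with Table 1, not the actual C19 polynomial, which must be taken from that work.

```python
import numpy as np

def two_micron_depth(wave, refl):
    """Maximum fractional depth of the 2 um H2O band: a linear continuum
    anchored on the medians of 1.79-1.871 um and 2.23-2.26 um, as in the text."""
    left = (wave >= 1.79) & (wave <= 1.871)
    right = (wave >= 2.23) & (wave <= 2.26)
    x0, x1 = np.median(wave[left]), np.median(wave[right])
    y0, y1 = np.median(refl[left]), np.median(refl[right])
    continuum = y0 + (y1 - y0) * (wave - x0) / (x1 - x0)
    band = (wave >= 1.79) & (wave <= 2.24)
    return np.max(1.0 - refl[band] / continuum[band])

def dh_from_ratio(ratio, calib=(0.0, 25.0, 0.0, 0.0)):
    """Convert the 4.14-to-2 um depth ratio to D/H (x VSMOW) via the C19
    third-order polynomial calibration. The default coefficients are
    placeholders (an approximately linear mapping), not C19's values."""
    c0, c1, c2, c3 = calib
    return c0 + c1 * ratio + c2 * ratio**2 + c3 * ratio**3
```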
Table 1 lists these values for each of the satellites as well as the uncertainties for the 4.14 µm depth. No uncertainty is given for the 2 µm depth as our ratio uncertainties are dominated by the 4.14 µm depth uncertainty. Note that the band area ratio would make a better observational measurement, as this ratio is unaffected by the spectral resolution of the measurements, but here we use band depth to remain consistent with C19. The bands here are fully resolved, so the area and depth ratios will be the same. The D/H values derived from these ratios are also given in Table 1, while Figure 3 shows both the ratio and the derived D/H for each satellite.

3.1. The inner satellites

The D/H values derived for the inner icy satellites are consistent with the value derived in C19 for Rhea (1.2 ± 0.2× VSMOW) and within the uncertainties of the Cassini Ion and Neutral Mass Spectrometer (INMS) measurement of D/H in the plume of Enceladus, which found a value of 1.85 (+0.95, −0.45) compared to VSMOW (Waite Jr et al. 2009). For most satellites, we found consistent values for the leading and trailing hemispheres.

Figure 3. The ratio of the depth of the 4.14 µm absorption to 2 µm absorption for each of the satellites. The leading hemispheres are shown as black points while the trailing hemispheres are red. Hyperion and Phoebe, which are non-synchronously rotating, are shown in blue. The D/H value derived using the C19 calibration is shown on the right. The value measured for Enceladus is shown as the bold horizontal line, while the 1σ upper and lower limits are shown as thinner dashed lines. The D/H value shown for Phoebe is derived from calibration to a radiative transfer model, rather than from the 4.14 to 2 µm depth ratio.

Two exceptions to this general trend are seen. Dione has a derived D/H value elevated above the other inner satellites, and its leading hemisphere has a significantly higher value than its trailing hemisphere. The implied ∼30% variation between the leading and trailing hemispheres of Dione is difficult to explain with a native source for deuterated water. The leading hemisphere of Dione receives significantly more mass influx from the E-ring than the trailing hemisphere (Kempf et al. 2018), which could plausibly bring enhanced amounts of deuterated water, but we would then expect Mimas, which receives a higher E-ring flux on the trailing hemisphere, to have the opposite asymmetry, which is not observed. The most conservative interpretation is that all of the D/H values of these inner icy satellites are identical and that the spread seen in derived D/H values reflects the limitations of converting the 4.14 to 2 µm absorption ratio to a D/H value, though it should be noted that in almost all cases the leading and trailing hemisphere measurements are identical within the uncertainties. In this interpretation, the deuterated water detected could either be native to each satellite or could still be brought in from the E-ring if all surfaces are at least covered enough to be optically thick, as suggested by radar measurements (Le Gall et al. 2019).
3.2. Hyperion and Iapetus

Hyperion, which receives little Enceladus material but does contain patches of what is likely Phoebe-ring dark material, has a derived D/H ratio consistent with those of the inner satellites. If the dark material on Hyperion is primarily a coating and thus spatially segregated, the 4.14 to 2 µm ratio should accurately estimate D/H. If, on the other hand, the surface ice of Hyperion is more intimately mixed with Phoebe-like dark material, this material will suppress the 2 µm feature (where Hyperion is bright) more than the 4.14 µm feature (where Hyperion is dark), leading to an elevated 4.14 to 2 µm ratio and a higher inferred value of D/H. We thus take the derived value for Hyperion to be an upper limit to the true value. More detailed modeling will be required to accurately determine the value of D/H for Hyperion.

The derived values for the leading and trailing hemispheres of Iapetus are consistent in spite of the nearly factor-of-four difference in the 2 µm band depth and factor of 2 difference in reflectance around 4.5 µm seen by these observations (Brown et al. 2025). This consistency suggests that the band ratio method is adequately accounting for these differences, though, like for Hyperion, we consider the measurement for the dark leading hemisphere of Iapetus to be an upper limit. Formally, the values derived for Iapetus are ∼40% lower than those on the inner icy satellites, but we again conservatively interpret this spread as being within the range of uncertainty of our approximation. As with Hyperion, detailed modeling of the spectrum of Iapetus could help to obtain more accurate results.

3.3. Phoebe

C19 suggest an elevated value of D/H of 8 ± 2× VSMOW for Phoebe using VIMS data and a radiative transfer spectral model. The details of the model are difficult to reproduce for comparison, so we instead chose to use a simple spectral comparison to evaluate the claim of elevated D/H on Phoebe. In Figure 4 we show the JWST spectrum of Phoebe, the VIMS disk-average spectrum of Phoebe, and the averaged spectra of the other satellites. As can be seen from the Figure, the C19 VIMS spectrum appears to have a strong ∼4.15 µm absorption with an average depth of 4.5% from 4.124 to 4.158 µm – slightly longer wavelengths than the D/H features seen on the other satellites by JWST. The JWST Phoebe spectrum rules out such a line at about the 10σ level. While we cannot reproduce the model of C19, if we take their modeling as a calibration point and assume the ∼6% depth of the feature seen by VIMS indeed corresponds to a D/H ratio of 8× VSMOW, we would infer that the 1.3 ± 0.8% value measured for the 4.14 µm depth at Phoebe corresponds to a value of 1.7 ± 1.1× VSMOW, or, more appropriately, a 1σ upper limit of 2.8× VSMOW. Interestingly, this value is not dissimilar to the value of 2.1 (+1.6, −1.4) that would be derived from using the simple ratio method, lending support to our use of that method for Hyperion and Iapetus.
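The Phoebe inference above is a simple linear scaling against the single C19 calibration point; a worked version of that arithmetic, with values taken from the text, is below.

```python
# Scale Phoebe's measured 4.14 um depth against the C19 calibration point,
# taking the ~6% VIMS depth to correspond to 8x VSMOW (values from the text).
depth, depth_err = 1.3, 0.8              # measured depth and 1-sigma error (%)
cal_depth, cal_dh = 6.0, 8.0             # calibration: 6% depth <-> 8x VSMOW

dh = depth / cal_depth * cal_dh          # ~1.7x VSMOW
dh_err = depth_err / cal_depth * cal_dh  # ~1.1x VSMOW
upper_1sigma = dh + dh_err               # ~2.8x VSMOW, as quoted above
```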
Figure 4. A comparison of the continuum-divided JWST spectrum of Phoebe and the VIMS spectrum of Phoebe from C19. Also shown is the average spectrum of all of the Saturnian satellites with robust 4.14 µm absorption detections. The JWST data rule out an absorption with the position and depth of that seen by VIMS at the 10σ level.

4. THE ORIGINS OF THE SATELLITES

The D/H ratios of the satellites of Saturn are enhanced by at least an order of magnitude compared to the atmosphere of Saturn, which has a D/H similar to the solar value (Blake et al. 2021). The strong implication of this simple fact – even with the uncertainties in the D/H calibration – is that the satellites did not form out of material that condensed out of a hot planetary sub-nebula, as initially explored by Pollack & Consolmagno (1984). In such a case the deuterated water would have re-equilibrated with the much more abundant H2 and the D/H value would be quickly diluted to the Saturnian value. Multiple other lines of evidence have shown that satellite formation in a hot circumplanetary nebula is implausible, however, and no modern formation model posits such a formation pathway.

Current models for the formation of the Saturnian satellites suggest either slow formation in a gas-starved disk (e.g., Canup & Ward 2006; Batygin & Morbidelli 2020) or formation from reaccreted ring material, where the ring can be the result of a larger satellite disruption (Canup 2010) or the result of the tidal disruption of a large passing icy body (Dones 1991; Hyodo et al. 2017). All of these scenarios can be made consistent with the approximately constant D/H value across the Saturnian system under the assumption that the planetesimals and pebbles which form the satellites all came from regions with elevated D/H.

In the hybrid scenario suggested by Blanc et al. (2025), informed by recent ideas of faster migration for the Saturnian satellites, large satellites form at some distance from Saturn and migrate inward, where some are perhaps lost. The penultimate satellite is tidally disrupted and forms the rings – which eventually spawn the icy mid-sized satellites – while the proto-Titan survives inward migration and, upon dissipation of the disk, tidally migrates outward. Hyperion and Iapetus presumably also form out of this initial disk, though no satisfactory explanation of their formation pathway has been presented (Castillo-Rogez et al. 2018). In this scenario, the original satellites would have to have formed from icy planetesimals and pebbles which did not equilibrate with the gas disk. The lack of a rise in the D/H ratio beyond Titan would then suggest that no strong temperature gradient existed across the satellite formation region and that the D/H value of the satellites largely reflects that of the planetesimals and pebbles entering the system at the distance of Saturn. The similarity of the D/H ratio of the icy mid-sized satellites is a natural consequence of their origin from a single body.

The inferred upper limit for the D/H ratio of Phoebe is consistent with that derived for the other satellites. Phoebe appears to have a D/H ratio broadly consistent with the range seen in comets, suggesting a pre-capture origin in the outer solar system.

5. CONCLUSIONS

Deuterated water is detected on all of the observed mid-sized icy satellites and on Hyperion and Iapetus. The D/H ratio on these satellites is approximately an order of magnitude higher than in the atmosphere of Saturn, demonstrating that the ices incorporated into these satellites never sublimated and equilibrated with the gaseous circumplanetary nebula. Only upper limits can be placed on Phoebe, but it cannot have a significantly higher D/H ratio than the other satellites in the system.
The similarity in D/H ratio of all of the satellites suggests that this ratio is representative of the D/H ratio of the planetesimals and pebbles in the vicinity of Saturn at the time of satellite formation. The inferred value of ∼1.5× VSMOW is enriched compared to the inner solar system, but does not appear to be as high as some of the cometary values seen.

This moderate enhancement in D/H at the distance of Saturn suggests perhaps a mild increase in D/H as a function of heliocentric distance in the solar system, similar to that seen in the laminar models of Albertsson et al. (2014) that do not include transport, which seems unrealistic. Models tracking D/H ratios incorporating modern ideas of disk structure, pebble transport and accretion, and satellite formation have yet to be constructed, but will be critical for interpreting D/H ratios measured in planetary environments.

One interesting prediction from these values of D/H in the ices in the vicinity of the forming Saturn is that the ices at Uranus should be similar. Indeed, the presence of the disk gap at Jupiter suggests that most solid material would be flowing into the Saturnian system from beyond Saturn. Uranus, which makes no such gap, would then likely be bathed in the same solid material. The D/H ratio at Uranus is depleted compared to VSMOW by a factor of 3.5 (Feuchtgruber et al. 2013), a value more than 5 times lower than that we have inferred for the Saturnian satellites. Uranus is expected – though not confirmed – to have fully mixed its atmosphere and interior in its past, and thus the D/H ratio is expected to reflect the bulk average of the nebular H2 and the ices that initially formed the planet. Using this expectation, Feuchtgruber et al. (2013) derive a D/H ratio for the ices that formed Uranus of ∼0.4× VSMOW, a value more than 3.5 times lower than that we derive for the ices that formed the Saturnian satellites. Feuchtgruber et al. (2013) themselves are uncomfortable with this conclusion and suggest multiple possible solutions, including the possibility that the interior is not mixed and that the interior is significantly more rocky than currently assumed. The high values of D/H for the Saturnian satellites highlight this continued discrepancy.

Using the 4.14 µm O-D stretch feature to infer a D/H ratio via reflectance spectroscopy is a powerful technique, particularly with the advent of JWST coverage in this wavelength region. We caution, however, that significant laboratory and modeling work is still needed to understand the robustness of this technique and its application across the solar system. While the nearly-pure water ice surfaces of the inner mid-sized satellites of Saturn present perhaps the ideal test case for this technique, understanding how to reliably apply the technique to more complicated surfaces remains work in progress.

ACKNOWLEDGMENTS

We thank Francis Nimmo for conversations that led to this work. This work is based on observations made with the NASA/ESA/CSA James Webb Space Telescope. The data were obtained from the Mikulski Archive for Space Telescopes (MAST) at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127 for JWST. These observations are associated with program #3716.
Support for program #3716 was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127. The specific observations analyzed can be accessed via DOI:10.17909/30px-vq56.

Facilities: JWST/NIRSpec

REFERENCES

Albertsson, T., Semenov, D., & Henning, T. 2014, ApJ, 784, 39, doi: 10.1088/0004-637X/784/1/39
Alexander, C. M. O. 2017, Phil. Trans. R. Soc. London Ser. A, 375, 20150384, doi: 10.1098/rsta.2015.0384
Batygin, K., & Morbidelli, A. 2020, ApJ, 894, 143, doi: 10.3847/1538-4357/ab8937
Blake, J. S. D., Fletcher, L. N., Greathouse, T. K., et al. 2021, A&A, 653, A66, doi: 10.1051/0004-6361/202038229
Blanc, M., Crida, A., Shibaike, Y., et al. 2025, Space Sci. Rev., 221, 35, doi: 10.1007/s11214-025-01156-8
Bockelée-Morvan, D., Calmonte, U., Charnley, S., et al. 2015, Space Sci. Rev., 197, 47, doi: 10.1007/s11214-015-0156-9
Brown, M. E., Wong, I., & Belyakov, M. 2025, Planet. Sci. J., 6, 22, doi: 10.3847/psj/ad9a60
Brown, R. H., Lauretta, D. S., Schmidt, B., & Moores, J. 2012, Planet. Space Sci., 60, 166, doi: 10.1016/j.pss.2011.07.023
Canup, R. M. 2010, Nature, 468, 943, doi: 10.1038/nature09661
Canup, R. M., & Ward, W. R. 2006, Nature, 441, 834, doi: 10.1038/nature04860
Castillo-Rogez, J. C., Hemingway, D., Rhoden, A., et al. 2018, in Enceladus and the Icy Moons of Saturn (The University of Arizona Press), doi: 10.2458/azu_uapress_9780816537075-ch014
Clark, R. N., Brown, R. H., Cruikshank, D. P., & Swayze, G. A. 2019, Icarus, 321, 791, doi: 10.1016/j.icarus.2018.11.029
Clark, R. N., Brown, R. H., Jaumann, R., et al. 2005, Nature, 435, 66, doi: 10.1038/nature03558
Cleeves, L. I., Bergin, E. A., Alexander, C. M. O., et al. 2014, Science, 345, 1590, doi: 10.1126/science.1258055
Dones, L. 1991, Icarus, 92, 194, doi: 10.1016/0019-1035(91)90045-U
Feuchtgruber, H., Lellouch, E., Orton, G., et al. 2013, A&A, 551, A126, doi: 10.1051/0004-6361/201220857
Furuya, K., Drozdovskaya, M. N., Visser, R., et al. 2017, A&A, 599, A40, doi: 10.1051/0004-6361/201629269
Geiss, J., & Gloeckler, G. 1998, Space Sci. Rev., 84, 239, doi: 10.1023/A:1005039822524
Hagemann, R., Nief, G., & Roth, E. 1970, Tellus A: Dynamic Meteorology and Oceanography, 22, 712, doi: 10.3402/tellusa.v22i6.10278
Hallis, L. J. 2017, Phil. Trans. R. Soc. London Ser. A, 375, 20150390, doi: 10.1098/rsta.2015.0390
Hedman, M. M., Tiscareno, M. S., Showalter, M. R., et al. 2024, J. Geophys. Res. Planets, 129, e2023JE008236, doi: 10.1029/2023JE008236
Hyodo, R., Charnoz, S., Ohtsuki, K., & Genda, H. 2017, Icarus, 282, 195, doi: 10.1016/j.icarus.2016.09.012
Kempf, S., Horányi, M., Hsu, H.-W., et al. 2018, in Enceladus and the Icy Moons of Saturn (The University of Arizona Press), doi: 10.2458/azu_uapress_9780816537075-ch010
Le Gall, A., West, R. D., & Bonnefoy, L. E. 2019, Geophys. Res. Lett., 46, 11747, doi: 10.1029/2019GL084218
Lis, D. C., Bockelée-Morvan, D., Güsten, R., et al. 2019, A&A, 625, L5, doi: 10.1051/0004-6361/201935554
Lécluse, C., & Robert, F. 1994, Geochim. Cosmochim. Acta, 58, 2927, doi: 10.1016/0016-7037(94)90126-0
Müller, D. R., Altwegg, K., Berthelier, J. J., et al. 2022, A&A, 662, A69, doi: 10.1051/0004-6361/202142922
Pierel, J. D. R., Nixon, C. A., Lellouch, E., et al. 2017, AJ, 154, 178, doi: 10.3847/1538-3881/aa899d
Pollack, J. B., & Consolmagno, G. 1984, in Saturn, 811–866. https://ui.adsabs.harvard.edu/abs/1984satn.book..811P
Waite Jr, J. H., Lewis, W. S., Magee, B. A., et al. 2009, Nature, 460, 487, doi: 10.1038/nature08153
Yang, L., Ciesla, F. J., & Alexander, C. M. O. D. 2013, Icarus, 226, 256, doi: 10.1016/j.icarus.2013.05.027
Preprint

MIDTRAINING BRIDGES PRETRAINING AND POSTTRAINING DISTRIBUTIONS

Emmy Liu, Graham Neubig, Chenyan Xiong
Language Technologies Institute
Carnegie Mellon University
Pittsburgh, PA 15213, USA
{mengyan3, gneubig, cx}@cs.cmu.edu

ABSTRACT

Recently, many language models have been pretrained with a "midtraining" phase, in which higher quality, often instruction-formatted data, is mixed in at the end of pretraining. Despite the popularity of this practice, there is little scientific understanding of this phase of model training or why it is effective. In this work, we conduct the first systematic investigation of midtraining through controlled experiments with language models pretrained from scratch and fine-tuned on supervised finetuning datasets in different domains. We find that when compared after supervised fine-tuning, the effectiveness of midtraining is highest in the math and code domains, where midtraining can best reduce the syntactic gap between pretraining and posttraining data. In these cases, midtraining consistently outperforms continued pretraining in both in-domain validation loss as well as pretraining data forgetting after posttraining. We conduct ablations on the starting time of the midtraining phase and mixture weights of the midtraining data, using code midtraining as a case study, and find that timing has a greater impact than mixture weights, with earlier introduction of specialized data yielding greater benefits in-domain as well as preserving general language modeling better. These findings establish midtraining as a domain adaptation technique that, compared to continued pretraining, yields better performance through reduced forgetting.1

1 INTRODUCTION

The success of large language models has mostly been driven by scaling model and data size. Though many interventions seem promising, they may wash out at scale. Therefore, when methodological interventions are simple yet widely adopted across model scales, they merit attention. One such intervention is midtraining: breaking pretraining into two or more stages in which the latter stages incorporate higher-quality data from specialized domains such as mathematics and coding, as well as instruction-formatted data (Hu et al., 2024b; Dubey et al., 2024; OLMo Team et al., 2025). While this approach has shown improved performance after posttraining (Hu et al., 2024b), there is surprisingly little systematic study of when and why midtraining works (Wang et al., 2025).

This raises key questions for practitioners: When is midtraining expected to work better than simply posttraining a model, or doing continued pretraining on a target domain? What types of data are most effective in a midtraining data mixture? Finally, how does midtraining compare to continued pretraining in specialized domains?

To address these questions, we conduct the first systematic investigation of midtraining through controlled experiments on models pretrained from scratch with different midtraining settings. We systematically vary the type of midtraining data, the timing of its introduction and mixture weights, followed by supervised fine-tuning on diverse datasets. Our investigation reveals three key findings:

1Data and code are available at https://github.com/nightingal3/all_in_one_pretraining.

• Midtraining is highly effective at improving downstream loss on a few key domains such as math and code.
In particular, it is effective in improving accuracy on domains that are farther from the general pretraining distribution as represented by web text.
• Midtraining reduces catastrophic forgetting compared to both direct fine-tuning of a pretrained model as well as continued pretraining given the same number of tokens.
• The timing of introducing midtraining data into the training process is important, more so than the mixture weight of the midtraining data source.

Put together, these findings cast a new light on our understanding of midtraining's role in the LM training process: it can be viewed as effectively "bridging" the distribution gap between pretraining and out-of-domain fine-tuning data, introducing a smooth transition between the two at an appropriate time in the training process.

2 PRELIMINARIES

In this section, we define what we refer to as midtraining throughout this paper. While this term has been used colloquially by model developers, it lacks a standard definition, so we establish our working definition for clarity.

2.1 TRAINING SEQUENCE DEFINITIONS

Modern language model training typically follows a sequential approach where models are trained on different data distributions in a specific order. Rather than attempting to define training phases by their data characteristics or objectives, which often overlap in practice, we adopt a sequence-based framework that defines phases by their order in the training process.

Let θ ∈ Θ denote model parameters and X, Y be input and output spaces. A training phase is a tuple (D, L, n) where D is a distinct data distribution over X × Y, L : Θ × (X × Y) → R is a loss function, and n ∈ N+ is the number of training steps taken in that phase. A training sequence is an ordered collection S = {(D_i, L_i, n_i)}_{i=0}^{N} of training phases, where the parameters θ_i resulting from training up to phase i serve as the initialization for phase i + 1.

Pretraining is the initial training phase (D_0, L_0, n_0) in the sequence. In practice, pretraining often uses large, diverse sources of data, such as from the web. The objective for decoder-based language models, which we examine, is typically next-token prediction, L_0(θ; x) = −∑_{t=1}^{|x|} log p_θ(x_t | x_{<t}). Pretraining typically consumes the majority of steps taken during training, so that n_0 ≫ n_i for all i > 0.

Midtraining refers to intermediate training phases (D_i, L_i, n_i) where 0 < i < N. Midtraining data is typically more specialized than general pretraining data, often including domain-specific content (code, math) and instruction-formatted data, while maintaining a mixture with general pretraining data. However, like pretraining, midtraining updates all model parameters. Typically, midtraining is a longer phase compared to finetuning, but shorter than the preceding pretraining phase, so that n_0 > n_i > n_N. There can potentially be multiple midtraining phases as well in the case of multi-stage pretraining curricula, but we focus on one-stage midtraining in this paper.

Finetuning is the final training phase (D_N, L_N, n_N) in the sequence. Finetuning usually uses task-specific datasets and may employ diverse objectives or preference-based methods rather than next-token prediction. Typically, finetuning uses the fewest steps, although this is not necessarily always the case.

Note that our framework reflects current popular practices for language model training, rather than prescribing an optimal training methodology.
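To make the (D, L, n) framing concrete, a minimal sketch follows. This is our illustration rather than the paper's released code; TrainingPhase, run_sequence, and the placeholder dataset/step names are ours.

    from dataclasses import dataclass
    from typing import Callable, Iterator

    @dataclass
    class TrainingPhase:
        """One (D, L, n) tuple: a batch stream, an update rule, a step budget."""
        name: str
        batches: Iterator      # stand-in for draws from the distribution D_i
        step_fn: Callable      # one update: step_fn(params, batch) -> new params (wraps L_i)
        num_steps: int         # n_i

    def run_sequence(params, phases):
        # Parameters left by phase i initialize phase i+1, as in the definition above.
        for phase in phases:
            for _ in range(phase.num_steps):
                params = phase.step_fn(params, next(phase.batches))
        return params

    # Example schedule mirroring n_0 >> n_i > n_N (dataset/step names are placeholders):
    # sequence = [
    #     TrainingPhase("pretrain", c4_batches,        sgd_step, num_steps=61_000),
    #     TrainingPhase("midtrain", c4_plus_starcoder, sgd_step, num_steps=20_000),
    #     TrainingPhase("finetune", sft_batches,       sgd_step, num_steps=1_000),
    # ]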
Additional edge cases include multi-stage finetuning, cyclic training schedules, and mixture schedules where distribution weights vary continuously.

2.2 RELATIONSHIP TO CURRICULUM LEARNING AND CONTINUED PRETRAINING

Curriculum learning. The original definition of curriculum learning focused on gradually increasing the difficulty or diversity of training examples throughout the course of training (Elman, 1993; Bengio et al., 2009). However, this term has evolved to generally mean any strategic ordering of training data (Soviany et al., 2022). In this general view, it encompasses any training strategy in which the sequence S = {(D_i, L_i, n_i)}_{i=0}^{N} is designed according to some principled ordering criterion, whether based on example difficulty, domain progression, task complexity, data quality, or other strategic considerations. Therefore, midtraining can be viewed as a type of curriculum learning that operates at a distributional level rather than over individual examples: instead of ordering specific training instances by difficulty or other criteria, midtraining strategically organizes different data distributions or domain mixtures across discrete training phases. This coarse-grained approach focuses on when to introduce entire types of data during pretraining rather than sequencing of individual examples.

Continued pretraining. Continued pretraining typically refers to additional pretraining of an already-pretrained model on domain-specific data, where the data distribution shifts completely to the target domain (Gururangan et al., 2020; Beltagy et al., 2020). This approach commits entirely to domain-specific data after a certain point in training, potentially losing general capabilities. In contrast, midtraining maintains mixed distributions throughout the intermediate phase, preserving general pretraining data alongside specialized data. Formally, continued pretraining can be viewed as the limiting case where the midtraining distribution D_i consists entirely of domain-specific data (0% original pretraining data, 100% domain data), though our results suggest this pure approach may be suboptimal. Implementation details such as optimizer state handling and learning rate schedule may vary, but most often midtraining continues on the original pretraining schedule, preserving optimizer state, while continued pretraining often operates on a new schedule.

3 EXPERIMENTAL SETTING

3.1 TRAINING SETUP

Pretraining. We pretrain models from the Pythia family ranging in size from 70M to 410M parameters on C4 web data (Raffel et al., 2020; Biderman et al., 2023). In all cases, we train for 128B tokens (approx. 61k steps) with a cosine learning rate schedule with a maximum learning rate of 3e-4 and the AdamW optimizer (Loshchilov & Hutter, 2019). We chose to fix the training budget at a point past Chinchilla-optimality for all models (Hoffmann et al., 2022), in order to ensure that models have stabilized by the point at which midtraining data has been introduced, at least for later insertion points of midtraining data. We describe the exact training setup in Appendix A.

Midtraining. We use five midtraining mixtures spanning popular domains: code (Starcoder), math, instructions (FLAN), general knowledge/QA, and high-quality web data (DCLM). Table 1 details each mixture's composition and sources. All mixtures are introduced at varying start points (Starcoder: 6k steps, Math: 20k steps, others: 40k steps) based on data availability to prevent repetition.
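As one concrete way to realize such a mixture, the sketch below interleaves a specialized stream with the general pretraining stream at a fixed sampling weight. This is our reading of the setup, not the authors' data loader; the stream names are placeholders.

    import random
    from typing import Iterator

    def mixed_stream(specialized: Iterator, general: Iterator,
                     weight: float, seed: int = 0) -> Iterator:
        """Yield batches from `specialized` with probability `weight`,
        otherwise from `general` -- e.g. weight=0.2 for the default
        20% Starcoder / 80% C4 midtraining mix."""
        rng = random.Random(seed)
        while True:
            source = specialized if rng.random() < weight else general
            yield next(source)

    # Usage (placeholder iterators):
    # midtrain_batches = mixed_stream(starcoder_batches, c4_batches, weight=0.2)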
We compare against a control condition continuing C4 pretraining for the same number of tokens, keeping all other training details identical.

Table 1: Midtraining mixes used in our experiments and dataset(s) from which they were derived.

Midtrain mix    Num. Tokens (B)    Sources
Starcoder       196                (Li et al., 2023)
Math            12                 (Yue et al., 2023; Toshniwal et al., 2024)
FLAN            3.5                (Wei et al., 2022)
KnowledgeQA     9.6                (Hu et al., 2024a)
DCLM            51                 (Li et al., 2024b)

STARCODER (CODE) Our code mixture is a subset of the Starcoder pretraining dataset (Li et al., 2023), which contains code in many languages. Note that we use code from all languages, rather than Python only.

MATH The math mixture combines mathematical reasoning problems from the MAmmoTH (Yue et al., 2023) and OpenMathInstruct (Toshniwal et al., 2024) datasets, featuring step-by-step explanations.

FLAN (INSTRUCTIONS) Our instruction-formatted data comes from a processed version of the FLAN collection, which includes diverse task instructions and responses across natural language tasks (Wei et al., 2022).

KNOWLEDGEQA (GENERAL KNOWLEDGE AND QA) The KnowledgeQA mixture is taken from Hu et al. (2024b)'s midtraining mix, and focuses on general knowledge and dialogue. However, to distinguish the midtraining mixes further, the StackOverflow portion of this dataset is removed.

DCLM (HIGH-QUALITY WEB) Our high-quality web data is a subset of the DCLM pretraining dataset, representing web content with improved quality filtering compared to C4 (Li et al., 2024b).

Downstream Evaluation. We fine-tune models on the datasets GSM8k (Cobbe et al., 2021), SciQ (Welbl et al., 2017), CodeSearchNet-Python (Husain et al., 2019), and LIMA (Zhou et al., 2023), chosen to span the domains covered by our midtraining mixtures. This allows us to test cases where the midtraining mixture is aligned or misaligned with the SFT dataset. We used standard language model supervised fine-tuning for all datasets. For information on the posttraining setup, see Appendix B.

Catastrophic Forgetting Evaluation. A key concern with supervised fine-tuning is whether introducing specialized data causes models to forget general capabilities acquired during pretraining. We measure catastrophic forgetting by evaluating cross-entropy loss on the original pretraining distribution, i.e., loss on held-out C4 data. This approach follows established practices for measuring forgetting in language models (Luo et al., 2024; Kemker et al., 2018; Li et al., 2024a).

4 WHICH DOWNSTREAM TASKS BENEFIT MOST FROM MIDTRAINING?

To identify which tasks benefit most from midtraining, we evaluate all midtraining mix and SFT dataset combinations. For each combination, we measure validation loss on the target domain (adaptation effectiveness) and C4 validation loss after fine-tuning (forgetting). We average results over 5 seeds after hyperparameter search for each checkpoint.

Midtraining benefits are highly domain-specific: specialization on code yields the largest gains on code tasks, while math-focused midtraining helps mathematical-reasoning tasks. Mismatched midtraining provides minimal benefit, and general instruction mixes (e.g., FLAN) produce little improvement. Full per-dataset results and numerical comparisons are reported in Table 2 and Appendix D.
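The forgetting number reported throughout is just average next-token cross-entropy on held-out pretraining text. A minimal sketch with a Hugging Face causal LM follows; this is our illustration, and the authors' evaluation code may differ.

    import torch

    @torch.no_grad()
    def heldout_ce(model, tokenizer, texts, device="cuda", max_len=1024):
        """Average next-token cross-entropy (nats/token) on held-out text,
        e.g. a C4 validation slice; a rise after SFT indicates forgetting."""
        model.eval().to(device)
        total_nll, total_tokens = 0.0, 0
        for text in texts:
            enc = tokenizer(text, return_tensors="pt",
                            truncation=True, max_length=max_len).to(device)
            out = model(**enc, labels=enc["input_ids"])  # HF returns mean CE over shifted tokens
            n = enc["input_ids"].size(1) - 1             # positions contributing after the shift
            total_nll += out.loss.item() * n
            total_tokens += n
        return total_nll / total_tokens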
5 WHAT DATA IS MOST EFFECTIVE FOR MIDTRAINING?

Having established that midtraining effects are domain-specific, we now ask: what determines the strength of these domain-specific effects? Across midtraining-target pairs, we see improvements ranging from negligible (e.g. FLAN → coding) to strong (e.g. Starcoder → coding).

                C4      StarCoder   Math Combined   PyCode   GSM8K
C4              1.00    0.83        0.95            0.54     0.61
StarCoder       0.83    1.00        0.85            0.70     0.57
Math Combined   0.95    0.85        1.00            0.59     0.65
PyCode          0.54    0.70        0.59            1.00     0.48
GSM8K           0.61    0.57        0.65            0.48     1.00

Figure 1: Example similarity matrix between pre/midtrain and posttraining datasets. For the complete matrix, see Appendix C.

Although the pairs that work the best are intuitive, we ask whether we can quantify this intuition through measurable proximity between data distributions. To test whether optimal midtraining data "bridges" the gap between pretraining and target distributions, we simply use token-distributional similarity between datasets, which we discuss in more detail in Appendix C. Figure 1 shows a reduced subset as an example: we can see that C4 and Pycode are fairly dissimilar (0.54 similarity), but the blend containing 29% Starcoder data brings this closer to Pycode (0.70 similarity).

Table 2: SFT and C4 validation losses for the 410M model across downstream datasets and midtraining mixtures (5 seeds per SFT dataset). Bold indicates best within each dataset. Percentages denote the specialized data proportion mixed with C4 during midtraining.

Dataset   Midtrain Mix        SFT Val Loss   C4 Val Loss
Pycode    C4                  2.264          5.058
          Starcoder (20%)     2.091          4.611
          Math (12%)          2.274          5.096
          FLAN (5%)           2.252          5.047
          KnowledgeQA (20%)   2.264          5.094
          DCLM (20%)          2.262          5.157
GSM8K     C4                  0.956          4.937
          Starcoder (20%)     0.944          4.951
          Math (12%)          0.918          5.038
          FLAN (5%)           0.958          4.970
          KnowledgeQA (20%)   0.961          4.966
          DCLM (20%)          0.949          4.966
LIMA      C4                  3.490          3.166
          Starcoder (20%)     3.422          3.155
          Math (12%)          3.507          3.168
          FLAN (5%)           3.513          3.164
          KnowledgeQA (20%)   3.501          3.163
          DCLM (20%)          3.496          3.158
SciQ      C4                  3.271          6.873
          Starcoder (20%)     3.261          6.831
          Math (12%)          3.257          6.894
          FLAN (5%)           3.277          6.771
          KnowledgeQA (20%)   3.273          6.814
          DCLM (20%)          3.270          6.874

Finding 1. Midtraining benefits are highly domain-specific. Code-focused midtraining (Starcoder) yields large gains on coding tasks, and math-focused midtraining improves mathematical reasoning. Mismatched domains provide little benefit, and broad instruction data (FLAN) also shows minimal effect.

5.1 PROXIMITY AND BRIDGING EFFECTS

To understand why certain midtraining mixtures are effective, we test the hypothesis that optimal midtraining data "bridges" the gap between pretraining (C4) and target SFT datasets. Using the simple token similarity metric, we measure the "proximity advantage" of each midtraining mixture: how much closer it brings the model to the target posttraining distribution compared to continuing with C4 alone. Results are shown in Figure 2.

We find a clear relationship between proximity advantage and downstream performance improvements across model sizes. The correlations are particularly strong for smaller models (r = 0.869, p < 0.001 for 70m), suggesting that effective midtraining data serves as a distributional stepping stone from general pretraining to specialized target domains. This bridging effect appears to be most beneficial when the gap between pretraining and target distributions is large, consistent with our hypothesis that midtraining helps models adapt gradually rather than requiring abrupt distributional shifts during fine-tuning.

Finding 2. Effective midtraining data acts as a distributional bridge between pretraining and target datasets when considering token patterns.
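As a worked example of the proximity-advantage computation, assuming dist = 1 − similarity (the constant cancels, so any such transform gives the same number); the function names are ours:

    def proximity_advantage(sim, pretrain, midtrain, sft):
        """dist(C4, SFT) - dist(midtrain, SFT) with dist(a, b) = 1 - sim[a][b].
        Positive values mean the midtraining mix sits closer to the SFT target
        than the base pretraining data does."""
        dist = lambda a, b: 1.0 - sim[a][b]
        return dist(pretrain, sft) - dist(midtrain, sft)

    # Using the Figure 1 values: 0.70 - 0.54, i.e. ~0.16 in Starcoder's favor on PyCode.
    sim = {"C4": {"PyCode": 0.54}, "StarCoder": {"PyCode": 0.70}}
    print(proximity_advantage(sim, "C4", "StarCoder", "PyCode"))  # ~0.16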
[Figure 2: three scatter panels, Proximity Advantage vs. Relative Improvement, one per model size: 70m (r = 0.869, p = 0.000), 160m (r = 0.419, p = 0.047), 410m (r = 0.497, p = 0.026). Points are colored by SFT dataset (GSM8k, LIMA, Pycode, SciQ) and shaped by midtrain dataset (DCLM (20%), FLAN (5%), KnowledgeQA (20%), Math (12%), Starcoder (100%), Starcoder (20%)); points with |PA| < 0.02 are greyed.]

Figure 2: Relationship between proximity advantage and midtraining performance improvements for pairs of midtraining and SFT datasets. Each data point represents a (midtrain, SFT) pair, where the color indicates the SFT dataset and shape represents midtrain dataset. Proximity advantage (dist(C4, SFT) - dist(midtrain, SFT)) indicates how much closer midtraining data brings the model to the target SFT dataset compared to the base pretraining data. Proximity advantage pairs near zero are greyed out for clarity but included in calculations. Relative improvement is measured against the base model pretrained on C4.

5.2 MIDTRAINING VS. CONTINUED PRETRAINING

Our results so far suggest that effective midtraining data serves as a bridge between general pretraining and specialized posttraining data. However, a question that follows is why midtraining is necessary: continued pretraining on domain-specific data also aims to adapt the model toward a target domain. Why not simply pretrain normally and then switch to domain-specific data entirely?

To examine this, we compare the effect of midtraining with continued pretraining in which the mixture weight switches to 100% specialized data. For code, we compare the default Starcoder midtraining mix (20% mixture weight, starting from 12.6B tokens) with 100% Starcoder data starting from 83B tokens. For math, we compare the math midtraining mix with 100% math data starting from 105B tokens.²

Results in Table 3 show that midtraining consistently outperforms continued pretraining across both domains and model sizes for both in-domain performance and C4 retention after fine-tuning. As this pattern holds for both code and math domains, this suggests that maintaining some general pretraining data is useful during domain adaptation, even for models specialized for a specific domain. This supports our intuition gained from prior sections that domain adaptation benefits from gradual distributional shifts at the token level rather than abrupt changes.

Finding 3. Maintaining a mixture with general data in midtraining is preferable to continued pretraining on specialized data alone.

6 WHEN AND HOW MUCH MIDTRAINING DATA SHOULD BE INTRODUCED?

Given that a midtraining dataset's proximity to the finetuning dataset matters, and that we want to balance midtraining data with general pretraining data, we next investigate when to integrate midtraining data and what mixture weight to use. Using the Starcoder midtraining mixture, we vary both timing (10%-80% through pretraining) and mixture weights (10%-80% of training data). When testing the effect of timing, we use a fixed mixture weight of 20% for Starcoder and vary the time at which the midtraining phase begins.
When testing the effect of mixture weight, we fix the step at which midtraining starts at two specific points in training, 63B and 105B tokens, while varying the mixture weight of Starcoder as compared to C4.

²The different starting points are due to data availability, to ensure the midtraining mix does not repeat.

Table 3: SFT and C4 validation losses for 70M and 160M models comparing default midtraining mixes to continued pretraining on only the midtraining data (100%), averaged across 5 seeds. Bold indicates best performance within each dataset/model combination.

Model   Dataset   Mix                         SFT     C4
70M     Pycode    Pretrain-only               2.633   6.067
                  Starcoder (20%)             2.530   5.996
                  Ctd. pretrain (Starcoder)   2.580   6.160
        GSM8K     Pretrain-only               1.405   6.409
                  Math (12%)                  1.359   6.391
                  Ctd. pretrain (Math)        1.401   6.422
160M    Pycode    Pretrain-only               2.382   5.247
                  Starcoder (20%)             2.205   5.104
                  Ctd. pretrain (Starcoder)   2.283   5.373
        GSM8K     Pretrain-only               1.188   5.339
                  Math (12%)                  1.139   5.198
                  Ctd. pretrain (Math)        1.180   5.364

[Figure 3: six line plots of SFT (in-domain) and C4 (forgetting) validation loss for 70M and 160M models: (a) timing ablation vs. midtraining start in B tokens, (b) mixture-weight ablation at 105B tokens, (c) mixture-weight ablation at 63B tokens.]

Figure 3: Pycode ablations. (A) Timing effect (fixed 20%). (B) Mixture effect at 105B. (C) Mixture effect at 63B. Earlier start increases sensitivity to mixture weight, especially for 70M; 160M is more robust. Top: SFT after Pycode finetuning. Bottom: C4 losses (forgetting).

When varying mixture proportions from a fixed starting point, effects were modest compared to varying the timing of introduction across different pretraining stages at a fixed 20% mixture weight (Figure 3(a)). The timing effects are substantial: for the 160m model, early introduction at 12B tokens achieves 2.205 validation loss compared to 2.374 when delayed to 105B tokens. Similarly, the 70m model performs best with introduction at 42B tokens (2.505) versus later introduction. However, we note that timing and mixture weight also interact, and mixture weight appears to be more impactful when midtraining data is introduced earlier, as the gap between the best and worst mixture weights roughly doubles at 63B tokens compared to 105B tokens. This suggests that when to introduce specialized data may be more critical than how much specialized data to include, at least within our experimental ranges. See Figure 3(b–c) for mixture weight comparisons.

Relatedly, Figure 4 illustrates how midtraining benefits evolve over the course of pretraining for the 20% Starcoder mix (160m model). We finetune checkpoints from different pretraining steps on Pycode and measure both in-domain and C4 validation loss after fine-tuning. In-domain advantages emerge quickly after midtraining introduction (6k steps), while the C4 retention benefits develop more gradually, becoming apparent after approximately 20k steps. This temporal pattern suggests that early introduction of specialized data provides sufficient time for both immediate domain adaptation and gradual integration with general capabilities.
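The two ablation axes can be phrased as a single step-dependent sampling weight; a sketch of our reading, not the training code:

    def mixture_weight_at(tokens_seen, start_tokens, weight):
        """Specialized-data sampling probability during training: zero before
        the midtraining start point, `weight` afterwards (e.g. 0.2 for the
        default Starcoder mix). The timing ablation sweeps start_tokens at a
        fixed weight; the mixture ablation fixes start_tokens (63B or 105B
        tokens) and sweeps weight."""
        return weight if tokens_seen >= start_tokens else 0.0

    # e.g. with a 63B-token start: 0.0 at 50B tokens seen, 0.2 from 63B onward.
    print([mixture_weight_at(t * 1e9, 63e9, 0.2) for t in (50, 63, 100)])  # [0.0, 0.2, 0.2]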
[Figure 4: two line plots over pretraining tokens (20B–120B): in-domain validation loss and C4 validation loss after SFT, for the midtrained and base models.]

Figure 4: Validation loss and C4 loss for the Starcoder-midtrained model (160M) and base pretrained model after supervised fine-tuning on the Pycode dataset, with each point on the x-axis representing the number of tokens the pretrained checkpoint was trained on.

Finding 4. The timing of midtraining is more critical than the mixture weight; early introduction of specialized data in the code domain leads to stronger in-domain gains and better retention of general capabilities.

7 HOW DOES MIDTRAINING CHANGE MODEL REPRESENTATIONS?

In the previous sections, we have established that midtraining improves downstream performance as well as retention of the original pretraining distribution, and that this operates in a domain- and timing-sensitive manner. However, this raises the question of how midtraining changes the model weights themselves. As a first step toward answering this question, we investigate the changes after finetuning experienced by a midtrained as well as a non-midtrained model in the code domain.

We use linear Centered Kernel Alignment (CKA) to measure layer-wise similarity between model states in the code domain (Kornblith et al., 2019). We extract activations from all layers using probe datasets (C4 and APPS (Hendrycks et al., 2021)) and compute CKA similarity matrices between four key model states: base pretrained, midtrained (Starcoder), base fine-tuned, and midtrained fine-tuned. If midtraining creates better representations for downstream tasks, we expect to see smaller representational changes during fine-tuning for midtrained models compared to base models.

Figure 5 shows the representational analysis for the 70M model. The midtrained model exhibits greater stability in the final layer after fine-tuning, a pattern consistent across model sizes (see Appendix G for the remaining results). However, the final fine-tuned models show high similarity regardless of whether models underwent midtraining. These effects are less pronounced for C4, which can be seen in Appendix H.
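Linear CKA is compact enough to state directly; the sketch below follows the published formula (activation extraction is elided, and variable names are ours):

    import numpy as np

    def linear_cka(x, y):
        """Linear CKA between activation matrices of shape (n_examples, d_a)
        and (n_examples, d_b), as in Kornblith et al. (2019)."""
        x = x - x.mean(axis=0, keepdims=True)   # center each feature
        y = y - y.mean(axis=0, keepdims=True)
        num = np.linalg.norm(y.T @ x, ord="fro") ** 2
        den = (np.linalg.norm(x.T @ x, ord="fro") *
               np.linalg.norm(y.T @ y, ord="fro"))
        return float(num / den)

    # e.g. entry (i, j) of a heatmap in Figure 5:
    # linear_cka(acts_model_a[layer_i], acts_model_b[layer_j])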
[Figure 5: four 7×7 layer-wise CKA heatmaps over layers L0–L6: Base vs Midtrained (effect of midtraining), Base vs Base-SFT (base model finetuning changes), Midtrained vs Midtrained-SFT (midtrained model finetuning changes), and Base-SFT vs Midtrained-SFT (final models comparison).]

Figure 5: CKA analysis of model activations in the 70M model, probed with the APPS code dataset.

Finding 5. Models exposed to midtraining require smaller representational shifts during fine-tuning, especially in the final layer, indicating smoother adaptation.

8 RELATED WORK

Specific midtrained models. Recently, several language model families have adopted midtraining approaches with varying implementation details (Hu et al., 2024b; Dubey et al., 2024; OLMo Team et al., 2025; Chameleon Team, 2024). The midtraining phase duration varies from 2% (Hu et al., 2024b) to 20% (Chameleon Team, 2024) of total training, motivating our systematic investigation of timing effects. Common midtraining domains include code, math, instructions, and higher-quality web data (OLMo Team et al., 2025), the domains we investigate. Beyond general-purpose models, midtraining has shown benefits for specific tasks like RL (Wang et al., 2025) and GUI agents (Zhang et al., 2025a). This widespread adoption motivates our questions of when and why midtraining provides downstream benefits.

Multi-stage pretraining. Several works explore multi-stage pretraining: Feng et al. (2024) and Blakeney et al. (2024) focus on two-stage pretraining, and Zhang et al. (2025b) propose four-stage pretraining. These approaches demonstrate improvements over single-stage pretraining. However, these works evaluate base model performance after pretraining, whereas we focus on the post-finetuning setting to capture benefits that also carry over to posttraining.

Continual Learning. Domain-adaptive pretraining (DAPT) and related approaches continue pretraining on domain-specific data (Gururangan et al., 2020). Krishna et al. (2023) show that pretraining on downstream data alone can rival full pretraining when evaluated after fine-tuning, suggesting pretraining-posttraining alignment matters, consistent with our findings. Mehta et al. (2023) find pretraining reduces catastrophic forgetting during sequential fine-tuning; similarly, we observe midtrained models serve as better initializations with less forgetting.
Stability in Continued Pretraining. Recent work addresses stability challenges during continued pretraining. Guo et al. (2024) identify a "stability gap" where performance temporarily drops before recovering when shifting to new domains, Yang et al. (2024) synthesize larger training corpora from small domain-specific datasets, and Lin et al. (2024) introduce selective training on useful tokens only. While these works target training dynamics during continued pretraining, our approach examines how midtraining data selection affects post-fine-tuning performance, representing a complementary focus on end-task effectiveness.

Relationship between Pretraining and Finetuning. Several recent works have explored incorporating instruction-formatted data during pretraining. Allen-Zhu & Li (2023) show with an experiment on synthetic Wikipedia-style data that augmenting pretraining data with QA-formatted data improves subsequent fine-tuning, and Jiang et al. (2024) and Cheng et al. (2024) demonstrate this in a practical context as well. Sun & Dredze (2024) find continual pretraining benefits emerge only after fine-tuning, while Springer et al. (2025) show extended pretraining causes catastrophic forgetting ("overtraining"), particularly on math/code domains least aligned with web data. It is possible midtraining may prevent overtraining by introducing specialized data earlier and providing a better initialization for posttraining.

9 CONCLUSION

We conduct the first systematic investigation of midtraining through controlled experiments. We demonstrate that midtraining benefits are domain-specific, with the most substantial improvements in math and code domains that are not well represented in standard web pretraining corpora. Furthermore, we find that midtraining mitigates catastrophic forgetting of general language modeling abilities after task-specific supervised fine-tuning and consistently outperforms continued pretraining on specialized data alone. Finally, the timing of data introduction appears to have a stronger effect than the mixture weight of that specialized data, though more work is needed to clarify this effect.

Based on these findings, we recommend that model trainers focus midtraining on domains which exhibit substantially different token patterns compared to the base pretraining data, given that they expect to posttrain on these domains. Furthermore, it may be worthwhile to begin the midtraining stage earlier than commonly practiced, in proportion to how different a domain is from general pretraining data. Lastly, even when adapting models to a specific domain, midtraining should take precedence over continued pretraining.

However, despite these initial results, there is substantial future work to be done. A natural next step is to test the findings at much larger scales and with more diverse domains such as medicine, music, or other specific fields. Another direction is understanding how midtraining, and pretraining data distributions more broadly, interact with reinforcement learning based posttraining, and whether there are any differences between this and supervised finetuning based posttraining when it comes to adaptation. Finally, developing principled curricula that span multiple midtraining stages could shed light on how best to structure pretraining to prepare a model for posttraining while expanding its general knowledge.
Further work in these directions could not only improve language model development but also improve our understanding of how data and curricula shape the emergence and stability of model capabilities.

REPRODUCIBILITY STATEMENT

Anonymized data and code have been provided at https://github.com/nightingal3/all_in_one_pretraining. Due to the size of pretraining data sources and checkpoints, these have not yet been uploaded to the cloud, but will be provided after acceptance. Additionally, multiple sections of the appendix describe pretraining and posttraining settings for ready reproduction (particularly Appendix A and Appendix B).

ACKNOWLEDGMENTS

We would like to thank Scott Wen-tau Yih for his mentorship and guidance on this project. Additionally, we would like to thank Lucio Dery, Kaiser Sun and Jacob Springer for useful discussions on this project. EL was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC), [funding reference number 578085].

REFERENCES

Zeyuan Allen-Zhu and Yuanzhi Li. Physics of language models: Part 3.1, knowledge storage and extraction. arXiv preprint arXiv:2309.14316, 2023. doi: 10.48550/arXiv.2309.14316. URL https://arxiv.org/abs/2309.14316.

Iz Beltagy, Kyle Lo, and Arman Cohan. Scibert: A pretrained language model for scientific text. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 3615–3620, 2020. doi: 10.18653/v1/2020.emnlp-demos.82. URL https://aclanthology.org/2020.emnlp-demos.82.

Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In Proceedings of the 26th International Conference on Machine Learning (ICML), pp. 41–48, 2009. URL https://doi.org/10.1145/1553374.1553380.

Stella Biderman, Hailey Schoelkopf, Quentin Gregory Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, Usvsn Sai Prashanth, Edward Raff, Aviya Skowron, Lintang Sutawika, and Oskar Van Der Wal. Pythia: A suite for analyzing large language models across training and scaling. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett (eds.), Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pp. 2397–2430. PMLR, 23–29 Jul 2023. URL https://proceedings.mlr.press/v202/biderman23a.html.

Cody Blakeney, Mansheej Paul, Brett W. Larsen, Sean Owen, and Jonathan Frankle. Does your data spark joy? Performance gains from domain upsampling at the end of training, June 2024. URL http://arxiv.org/abs/2406.03476. arXiv:2406.03476 [cs].

Chameleon Team. Chameleon: Mixed-modal early-fusion foundation models, May 2024. URL http://arxiv.org/abs/2405.09818. arXiv:2405.09818 [cs].

Daixuan Cheng, Yuxian Gu, Shaohan Huang, Junyu Bi, Minlie Huang, and Furu Wei. Instruction pre-training: Language models are supervised multitask learners. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pp. 2529–2550, 2024. doi: 10.18653/v1/2024.emnlp-main.148. URL https://aclanthology.org/2024.emnlp-main.148.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, Anirudh Goyal, Anthony Hartshorn, Aobo Yang, Archi Mitra, Archie Sravankumar, Artem Korenev, Arthur Hinsvark, Arun Rao, Aston Zhang, Aurélien Rodriguez, Austen Gregerson, Ava Spataru, Baptiste Roziere, Bethany Biron, Binh Tang, Bobbie Chern, Charlotte Caucheteux, Chaya Nayak, Chloe Bi, Chris Marra, Chris McConnell, Christian Keller, Christophe Touret, Chunyang Wu, Corinne Wong, Cristian Canton Ferrer, Cyrus Nikolaidis, Damien Allonsius, Daniel Song, Danielle Pintz, Danny Livshits, David Esiobu, Dhruv Choudhary, Dhruv Mahajan, Diego Garcia-Olano, Diego Perino, Dieuwke Hupkes, Egor Lakomkin, Ehab AlBadawy, Elina Lobanova, Emily Dinan, Eric Michael Smith, Filip Radenovic, Frank Zhang, Gabriel Synnaeve, Gabrielle Lee, Georgia Lewis Ander- son, Graeme Nail, Gregoire Mialon, Guan Pang, Guillem Cucurull, Hailey Nguyen, Hannah Ko- revaar, Hu Xu, Hugo Touvron, Iliyan Zarov, Imanol Schiano, Isabel Kloumann, Ishan Misra, Ivan Evtimov, Jade Copet, Jaewon Lee, Jan Geffert, Jana Vranes, Jason Park, Jay Mahadeokar, Jeet Shah, Jelmer van der Linde, Jennifer Billock, Jenny Hong, Jenya Lee, Jeremy Fu, Jianfeng Chi, Jianyu Huang, Jiawen Liu, Jie Wang, Jiecao Yu, Joanna Bitton, Joe Spisak, Jongsoo Park, Joseph Rocca, Joshua Johnstun, Joshua Saxe, Junteng Jia, Kalyan Vasuden Alwala, Kartikeya Upasani, Kate Plawiak, Ke Li, Kenneth Heafield, Kevin Stone, Khalid El-Arini, Krithika Iyer, Kshitiz Malik, Kuenley Chiu, Kunal Bhalla, Lauren Rantala-Yeary, Laurens van der Maaten, Lawrence Chen, Liang Tan, Liz Jenkins, Louis Martin, Lovish Madaan, Lubo Malo, Lukas Blecher, Lukas Landzaat, Luke de Oliveira, Madeline Muzzi, Mahesh Pasupuleti, Mannat Singh, Manohar Paluri, Marcin Kardas, Mario Caudy, Mario Rodriguez, Mathew Lithgow-Bertelloni, Matthew Seastrom, Matthew White, Matthias Reso, Maxim Groshev, Maxim Naumov, Maya Lathi, Meghan Ke- neally, Michael L. 
Seltzer, Michal Valko, Michelle Restrepo, Mihir Patel, Mik Vyatskov, Mikayel Samvelyan, Mike Clark, Mike Lewis, Mikel Artetxe, Mikesh Jain, Mikko Kokkonen, Miled Za- kharia, Misha Peysakhovich, Mohammad Shihadeh, Molly Fanton, Moya Chen, Mudassir Shab- bir, Naman Goyal, Narjes Torabi, Nikolay Bashlykov, Nikolay Bogoychev, Niladri Chatterji, Olivier Duchenne, Onur Çelebi, Patrick Alrassy, Pengchuan Zhang, Pengwei Li, Petar Vasic, Peter Weng, Prajjwal Bhargava, Pratik Dubal, Praveen Krishnan, Punit Singh Koura, Puxin Xu, Qing He, Qingxiao Dong, Ragavan Srinivasan, Raj Ganapathy, Ramon Calderer, Ricardo Silveira Cabral, Robert Stojnic, Roberta Raileanu, Rohit Maheswari, Russ Howes, Ruty Rinott, Sai Jayesh Bondu, Samyak Datta, Sara Chugh, Sara Hunt, Sargun Dhillon, Sasha Sidorov, Satadru Pan, Saurabh Verma, Seiji Yamamoto, Shaoliang Nie, Shaqu Shiqi, Sharan Narang, Sharath Raparthy, Sheng Shen, Shengye Wan, Shruti Bhosale, Shun Zhang, Simon Vandenhende, Soumya Batra, Spencer Whitman, Sten Sootla, Stephane Collot, Suchin Gururangan, Sudarshan Gupta, Sumit Gupta, Sungmin Cho, Sunny Virk, Suraj Subramanian, Sy Choudhury, Sydney Goldman, Tal Re- mez, Tamar Glaser, Tamara Best, Thilo Kohler, Thomas Robinson, Tianhe Li, Tianjun Zhang, Tim Matthews, Timothy Chou, Tzook Shaked, Varun Vontimitta, Victoria Ajayi, Victoria Montanez, Vijai Mohan, Vinay Satish Kumar, Vishal Mangla, Vítor Albiero, Vlad Ionescu, Vlad Poenaru, Vlad Tiberiu Mihailescu, Vladimir Ivanov, Wei Li, Wenchen Wang, Wenwen Jiang, Wes Bouaziz, Will Constable, Xiaocheng Tang, Xiaofang Wang, Xiaojian Wu, Xiaolan Wang, Xide Xia, Xilun Wu, Xinbo Gao, Yanjun Chen, Ye Hu, Ye Jia, Ye Qi, Yenda Li, Yilin Zhang, Ying Zhang, Yossi Adi, Youngjin Nam, Yu Wang, Yuchen Hao, Yundi Qian, Yuzi He, Zach Rait, Zachary DeVito, Zef Rosnbrick, Zhaoduo Wen, Zhenyu Yang, and Zhiwei Zhao. The llama 3 herd of models, 2024. URL https://arxiv.org/abs/2407.21783. Jeffrey L. Elman. Learning and development in neural networks: The importance of starting small. Cognition, 48(1):71–99, 1993. doi: 10.1016/0010-0277(93)90058-4. Steven Feng, Shrimai Prabhumoye, Kezhi Kong, Dan Su, Mostofa Patwary, Mohammad Shoeybi, and Bryan Catanzaro. Maximize Your Data’s Potential: Enhancing LLM Accuracy with Two-Phase Pretraining, December 2024. URL http://arxiv.org/abs/2412.15285. arXiv:2412.15285 [cs]. 11 Preprint Yiduo Guo, Jie Fu, Huishuai Zhang, Dongyan Zhao, and Yikang Shen. Efficient continual pre- training by mitigating the stability gap. arXiv preprint arXiv:2406.14833, 2024. Suchin Gururangan, Ana Marasovi´c, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. Don’t stop pretraining: Adapt language models to domains and tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 8342–8360, 2020. Dan Hendrycks, Steven Basart, Saurav Kadavath, Mantas Mazeika, Akul Arora, Ethan Guo, Collin Burns, Samir Puranik, Horace He, Dawn Song, and Jacob Steinhardt. Measuring cod- ing challenge competence with apps. In Advances in Neural Information Processing Sys- tems (NeurIPS) 2021, 2021. URL https://proceedings.neurips.cc/paper/2021/ file/be83ab3ecd0db773eb2dc1b0a17836a1-Paper.pdf. Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Henni- gan, Eric Noland, Katie Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karen Simonyan, Erich Elsen, Jack W. 
Rae, Oriol Vinyals, and Laurent Sifre. Training compute-optimal large language models. In Advances in Neural Information Processing Systems, volume 35, pp. 20122–20134, 2022. URL https://arxiv.org/abs/2203.15556. Shengding Hu, Zepeng Bai, Tiannan Chen, Siran Jiang, Xin Jiang, Chang Liu, Wenhao Li, Guanqun Lai, Zhiyang Cui, Chengyi Zhang, Ruibin Zhao, Junhao Han, Shaokun Dong, Junlong Fang, Boyu Shi, Ting Wang, Zhiyuan Liu, Zhixing Tang, Lei Li, Benyou Wu, Tao Wang, Chenghu Zheng, Tianxiang Zhao, Zhiyi Zhang, Wanli Fu, Bowen Dai, Jiaqi Zhou, Renze Li, Zijun Zheng, Haoran Xu, Xiao Sun, Bailing Zhou, Shuai Jiao, Jian Li, Bowen Cao, Xuming Zhao, Yikang Lu, Zixiang Qi, Jun Shi, Haonan Xiang, Jinhua Wu, and Maosong Sun. Minicpm: Unveiling the potential of small language models with scalable training strategies. arXiv preprint arXiv:2404.06395, 2024a. Shengding Hu, Yuge Tu, Xu Han, Chaoqun He, Ganqu Cui, Xiang Long, Zhi Zheng, Yewei Fang, Yuxiang Huang, Weilin Zhao, Xinrong Zhang, Zheng Leng Thai, Kaihuo Zhang, Chongyi Wang, Yuan Yao, Chenyang Zhao, Jie Zhou, Jie Cai, Zhongwu Zhai, Ning Ding, Chao Jia, Guoyang Zeng, Dahai Li, Zhiyuan Liu, and Maosong Sun. MiniCPM: Unveiling the Potential of Small Language Models with Scalable Training Strategies, April 2024b. URL http://arxiv.org/ abs/2404.06395. arXiv:2404.06395 [cs]. Hamel Husain, Ho-Hsiang Wu, Tiferet Gazit, Miltiadis Allamanis, and Marc Brockschmidt. Codesearchnet challenge: Evaluating the state of semantic code search. arXiv preprint arXiv:1909.09436, 2019. Zhengbao Jiang, Zhiqing Sun, Weijia Shi, Pedro Rodriguez, Chunting Zhou, Graham Neubig, Xi Lin, Wen-tau Yih, and Srinivasan Iyer. Instruction-tuned language models are better knowl- edge learners. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 5421–5434, 2024. doi: 10.18653/v1/2024.acl-long.296. URL https://aclanthology.org/2024.acl-long.296. Ronald Kemker, Marc McClure, Angelina Abitino, Tyler Hayes, and Christopher Kanan. Measuring catastrophic forgetting in neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018. Simon Kornblith, Mohammad Norouzi, Honglak Lee, and Geoffrey Hinton. Similarity of neural net- work representations revisited. In Proceedings of the 36th International Conference on Machine Learning (ICML), volume 97 of Proceedings of Machine Learning Research, pp. 3519–3529. PMLR, 2019. URL https://proceedings.mlr.press/v97/kornblith19a.html. Kundan Krishna, Saurabh Garg, Jeffrey Bigham, and Zachary Lipton. Downstream datasets make surprisingly good pretraining corpora. In Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki (eds.), Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 12207–12222, Toronto, Canada, July 2023. Asso- ciation for Computational Linguistics. doi: 10.18653/v1/2023.acl-long.682. URL https: //aclanthology.org/2023.acl-long.682/. 12 Preprint Hongyu Li, Liang Ding, Meng Fang, and Dacheng Tao. Revisiting catastrophic forgetting in large language model tuning. In Findings of the Association for Computational Linguistics: EMNLP 2024, pp. 4297–4308, Miami, Florida, USA, 2024a. Association for Computational Linguistics. URL https://aclanthology.org/2024.findings-emnlp.249/. 
Jeffrey Li, Alex Fang, Georgios Smyrnis, Maor Ivgi, Matt Jordan, Samir Gadre, Hritik Bansal, Etash Guha, Sedrick Keh, Kushal Arora, Saurabh Garg, Rui Xin, Niklas Muennighoff, Reinhard Heckel, Jean Mercat, Mayee Chen, Suchin Gururangan, Mitchell Wortsman, Alon Albalak, Yonatan Bit- ton, Marianna Nezhurina, Amro Abbas, Cheng-Yu Hsieh, Dhruba Ghosh, Josh Gardner, Maciej Kilian, Hanlin Zhang, Rulin Shao, Sarah Pratt, Sunny Sanyal, Gabriel Ilharco, Giannis Daras, Kalyani Marathe, Aaron Gokaslan, Jieyu Zhang, Khyathi Chandu, Thao Nguyen, Igor Vasiljevic, Benjamin Recht, Luke Zettlemoyer, Sham Iyer, Tianyi Zhuang, Percy Liang, Alexander Rush, Naveen Jain, Vikas Raunak, Claire Cardie, and Ludwig Schmidt. Datacomp-lm: In search of the next generation of training sets for language models. arXiv preprint arXiv:2406.11794, 2024b. Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, Qian Liu, Evgenii Zheltonozhskii, Terry Yue Zhuo, Thomas Wang, Olivier Dehaene, Mishig Davaadorj, Joel Lamy-Poirier, João Monteiro, Oleh Shliazhko, Nicolas Gontier, Nicholas Meade, Armel Zebaze, Ming-Ho Yee, Lo- gesh Kumar Umapathi, Jian Zhu, Benjamin Lipkin, Muhtasham Oblokulov, Zhiruo Wang, Naman Rao, Roza Stojnic, Miltiadis Allamanis, Pierre Laffitte, Pawan Kumar Rustamov, Kaspar Val- ter, Chenyu Mittal, Komila Khamidov Tesfamikael, Christopher Murati, Sacha Lee, Albert Q. Wan, Armel Suharyanto, Juliette Copet, David R. So, Leandro Kolubako, Gabriel Machado Pina, Soroush Behtash, Denis Moskovskiy, Deepan Siddarth, Nicolas Luce-Rainville, Mostafa Dehghani, Marcin Szafraniec, Piotr Cardozo, Jenia Jitsev, Ekaterina Kochmar, Antonio Tor- ralba, Dragomir Radev, Alexander M. Rush, Preslav Nakov, Thomas Wang, Wolf Zuo, Hugh Echikson, Laura Schuelke, Jeffrey Carmichael, Kumar Shridhar Sadagopan, Zhuang Ling, Colin Kwiatkowski, Andy Lohn, Jonas Mueller, and Harm de Vries Floetenmeyer. Starcoder: may the source be with you! arXiv preprint arXiv:2305.06161, 2023. Lightning AI. Litgpt. https://github.com/Lightning-AI/litgpt, 2023. Zhenghao Lin, Zhibin Gou, Yeyun Gong, Xiao Liu, Yelong Shen, Ruochen Xu, Chen Lin, Yujiu Yang, Jian Jiao, Nan Duan, et al. Rho-1: Not all tokens are what you need. arXiv preprint arXiv:2404.07965, 2024. Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In International Con- ference on Learning Representations (ICLR), 2019. URL https://openreview.net/ forum?id=Bkg6RiCqY7. Yun Luo, Zhen Yang, Fandong Meng, Yafu Li, Jie Zhou, and Yue Zhang. An empirical study of catastrophic forgetting in large language models during continual fine-tuning. arXiv preprint arXiv:2308.08747, 2024. Sanket Vaibhav Mehta, Darshan Patil, Sarath Chandar, and Emma Strubell. An empirical investi- gation of the role of pre-training in lifelong learning. Journal of Machine Learning Research, 24 (214):1–50, 2023. URL http://jmlr.org/papers/v24/22-0496.html. OLMo Team, Pete Walsh, Luca Soldaini, Dirk Groeneveld, Kyle Lo, Shane Arora, Akshita Bha- gia, Yuling Gu, Shengyi Huang, Matt Jordan, Nathan Lambert, Dustin Schwenk, Oyvind Tafjord, Taira Anderson, David Atkinson, Faeze Brahman, Christopher Clark, Pradeep Dasigi, Nouha Dziri, Michal Guerquin, Hamish Ivison, Pang Wei Koh, Jiacheng Liu, Saumya Malik, William Merrill, Lester James V. 
Miranda, Jacob Morrison, Tyler Murray, Crystal Nam, Valentina Pyatkin, Aman Rangapur, Michael Schmitz, Sam Skjonsberg, David Wadden, Christopher Wilhelm, Michael Wilson, Luke Zettlemoyer, Ali Farhadi, Noah A. Smith, and Hannaneh Hajishirzi. 2 olmo 2 furious. arXiv preprint arXiv:2501.00656, 2025.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1–67, 2020. URL https://arxiv.org/abs/1910.10683.

Petru Soviany, Radu Tudor Ionescu, Paolo Rota, and Nicu Sebe. Curriculum learning: A survey, 2022. URL https://arxiv.org/abs/2101.10382.

Jacob Mitchell Springer, Sachin Goyal, Kaiyue Wen, Tanishq Kumar, Xiang Yue, Sadhika Malladi, Graham Neubig, and Aditi Raghunathan. Overtrained language models are harder to fine-tune. arXiv preprint arXiv:2503.19206, 2025. URL https://arxiv.org/abs/2503.19206.

Kaiser Sun and Mark Dredze. Amuro and char: Analyzing the relationship between pre-training and fine-tuning of large language models. arXiv preprint arXiv:2408.06663, 2024. doi: 10.48550/arXiv.2408.06663. URL https://arxiv.org/abs/2408.06663.

Shubham Toshniwal, Ivan Moshkov, Sean Narenthiran, Daria Gitman, Fei Jia, and Igor Gitman. Openmathinstruct-1: A 1.8 million math instruction tuning dataset. arXiv preprint arXiv:2402.10176, 2024.

Zengzhi Wang, Fan Zhou, Xuefeng Li, and Pengfei Liu. Octothinker: Mid-training incentivizes reinforcement learning scaling, 2025.

Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. Finetuned language models are zero-shot learners. In International Conference on Learning Representations, 2022.

Johannes Welbl, Nelson F. Liu, and Matt Gardner. Crowdsourcing multiple choice science questions. In Proceedings of the 3rd Workshop on Noisy User-generated Text, pp. 94–106, Copenhagen, Denmark, September 2017. Association for Computational Linguistics. doi: 10.18653/v1/W17-4413. URL https://aclanthology.org/W17-4413/.

Zitong Yang, Neil Band, Shuangping Li, Emmanuel Candès, and Tatsunori Hashimoto. Synthetic continued pretraining. arXiv preprint arXiv:2409.07431, 2024.

Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen. Mammoth: Building math generalist models through hybrid instruction tuning. arXiv preprint arXiv:2309.05653, 2023.

Junlei Zhang, Zichen Ding, Chang Ma, Zijie Chen, Qiushi Sun, Zhenzhong Lan, and Junxian He. Breaking the data barrier: Building GUI agents through task generalization, 2025a.

Xuemiao Zhang, Feiyu Duan, Liangyu Xu, Yongwei Zhou, Sirui Wang, Rongxiang Weng, Jingang Wang, and Xunliang Cai. Frame: Boosting LLMs with a four-quadrant multi-stage pretraining strategy, 2025b. URL https://arxiv.org/abs/2502.05551.

Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, Aritra Ghosh, Liangchen Xiao, Allyson Ettinger, Sheng Chang, Baolin Peng, Yiren Dai, Harsh Jain, Sebastian Cruz, Ananya Gupta, Haokun Zhang, Soundararajan Srinivasan, Taylor Berg-Kirkpatrick, Eduard Hovy, Christopher D. Manning, Luke Zettlemoyer, and Omer Levy. Lima: Less is more for alignment. arXiv preprint arXiv:2305.11206, 2023.

A PRETRAINING SETTINGS

We pretrained Pythia 70M, 160M, and 410M from scratch on the C4 dataset.
All three models were trained for 128B tokens or approximately 61k steps, with very similar settings (documented in Table 4). L40S GPUs were used for all pretraining and midtraining runs. Models were trained with the LitGPT library (Lightning AI, 2023).

Table 4: Core pretraining hyperparameters for Pythia-70M, 160M, and 410M.

Hyperparameter      70M                    160M                   410M
Global batch size   1024                   1024                   1024
Micro batch size    16                     16                     8
LR schedule         Cosine w/ 10% warmup   Cosine w/ 10% warmup   Cosine w/ 10% warmup
Max LR              3e-4                   3e-4                   3e-4
Min LR              1e-6                   1e-6                   1e-6
Optimizer           AdamW                  AdamW                  AdamW
Betas               (0.9, 0.95)            (0.9, 0.95)            (0.9, 0.95)
Weight decay        0.1                    0.1                    0.1
Precision           BF16                   BF16                   BF16
Num ranks           4                      4                      8

B POSTTRAINING SETTINGS

We fine-tuned all models on four downstream datasets: Pycode (our 5K-sample subset of CodeSearchNet-Python), GSM8K (7.5K math problems), LIMA (1K instruction examples), and SciQ (13.7K science questions). For GSM8K only, the prompt/question portion was masked during loss; for the others the loss was computed over the full sequence. A summary of the datasets is given in Table 5. All runs used a cosine learning rate schedule with 10% linear warmup, trained for 4 epochs, global batch size 64, and micro-batch size 16 for 70M/160M (8 for 410M). Peak learning rates were selected by grid search on the base pretrained checkpoint before midtraining, and the LR grid is given in Table 6. Selected LRs for the final checkpoint of each model size are given in Table 7.

Table 5: Finetuning datasets.

Dataset                                # Train Samples   Prompt masked
Pycode (CodeSearchNet-Python subset)   5,000             No
GSM8K                                  7,500             Yes
LIMA                                   1,000             No
SciQ                                   13,679            No

Table 6: Grid of candidate peak learning rates swept during tuning.

4e-6, 8e-6, 1e-5, 2e-5, 4e-5, 5e-5, 6e-5, 7e-5, 8e-5, 9e-5, 1e-4, 1.2e-4, 1.4e-4, 1.6e-4, 1.8e-4, 2e-4, 2.4e-4, 4e-4, 5e-4, 6e-4, 8e-4, 1e-3, 2e-3, 3e-3, 4e-3, 6e-3

Table 7: Selected peak learning rates for fine-tuning (cosine schedule with 10% warmup).

Dataset   70M      160M     410M
GSM8K     8e-4     4e-4     4e-4
LIMA      1.2e-4   5e-5     5e-5
Pycode    1e-3     5e-4     4e-4
SciQ      8e-4     2.4e-4   6e-4

C DATASET SIMILARITY MATRIX

We compute dataset similarity using surface-level token statistics, after initial experimentation with embedding models gave implausible results for code datasets' similarities to other natural language datasets. For each pair of pretrain/midtrain and downstream datasets, we sample min(dataset_size, 10,000) examples. Midtrain mixes are simulated by their actual compositions (e.g., Starcoder is treated as 20% Starcoder + 80% C4). From the (possibly mixed) texts we build unigram frequency vectors at a token level, normalize to probabilities, and compute: vocabulary Jaccard, overlap ratio, token-frequency cosine similarity, and a Jensen–Shannon-based similarity. These are combined as

Combined = 0.4 · cosine + 0.3 · Jaccard + 0.3 · JS_similarity,

and used to fill the similarity matrix (diagonal entries are 1). This mixture-aware score reflects both specialty content and dilution by C4. Figure 6 shows the resulting similarity matrix between pre/midtrain datasets and SFT datasets.
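A sketch of this combined score as described above; this is our rendering, with tokenization reduced to whitespace splitting and JS_similarity taken as 1 − JSD with log base 2 (an assumption, since the text does not pin down the transform):

    import math
    from collections import Counter

    def combined_similarity(tokens_a, tokens_b):
        """0.4*cosine + 0.3*Jaccard + 0.3*JS_similarity over unigram statistics."""
        ca, cb = Counter(tokens_a), Counter(tokens_b)
        vocab = set(ca) | set(cb)
        pa = {t: ca[t] / len(tokens_a) for t in vocab}
        pb = {t: cb[t] / len(tokens_b) for t in vocab}
        # Token-frequency cosine similarity
        dot = sum(pa[t] * pb[t] for t in vocab)
        cosine = dot / (math.sqrt(sum(v * v for v in pa.values())) *
                        math.sqrt(sum(v * v for v in pb.values())))
        # Vocabulary Jaccard
        jaccard = len(set(ca) & set(cb)) / len(vocab)
        # Jensen-Shannon divergence, in [0, 1] with log base 2
        m = {t: 0.5 * (pa[t] + pb[t]) for t in vocab}
        kl = lambda p: sum(p[t] * math.log2(p[t] / m[t]) for t in vocab if p[t] > 0)
        jsd = 0.5 * kl(pa) + 0.5 * kl(pb)
        return 0.4 * cosine + 0.3 * jaccard + 0.3 * (1.0 - jsd)

    # combined_similarity("def f ( x )".split(), "def g ( y )".split())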
                C4     StarCoder   Math Combined   FLAN Combined   DCLM   KnowledgeQA   GSM8K   PyCode   LIMA   SciQ
C4              1.00   0.83        0.95            0.99            0.95   0.97          0.61    0.54     0.82   0.52
StarCoder       0.83   1.00        0.85            0.84            0.85   0.86          0.57    0.70     0.75   0.39
Math Combined   0.95   0.85        1.00            0.95            0.93   0.94          0.65    0.59     0.81   0.51
FLAN Combined   0.99   0.84        0.95            1.00            0.95   0.97          0.62    0.56     0.82   0.53
DCLM            0.95   0.85        0.93            0.95            1.00   0.96          0.61    0.58     0.83   0.53
KnowledgeQA     0.97   0.86        0.94            0.97            0.96   1.00          0.62    0.59     0.83   0.51
GSM8K           0.61   0.57        0.65            0.62            0.61   0.62          1.00    0.48     0.63   0.46
PyCode          0.54   0.70        0.59            0.56            0.58   0.59          0.48    1.00     0.60   0.32
LIMA            0.82   0.75        0.81            0.82            0.83   0.83          0.63    0.60     1.00   0.52
SciQ            0.52   0.39        0.51            0.53            0.53   0.51          0.46    0.32     0.52   1.00

Figure 6: Token-based similarity matrix for pre/midtrain and SFT datasets. Note that these midtrain datasets are corrected for mix weight in this matrix.

D SFT IN-DOMAIN LOSS AND C4 LOSSES AFTER FINETUNING FOR 70M AND 160M MODELS

Table 8 depicts validation losses as well as C4 validation losses after finetuning on each SFT dataset, for the 70m and 160m models.

Table 8: SFT and C4 validation losses for 70m and 160m models across downstream datasets and midtraining mixtures, averaged across 5 seeds for each SFT dataset. Bold values indicate best performance within each dataset and model size combination.

Model Size   Downstream Dataset   Midtrain Mix        SFT Val Loss   C4 Val Loss
70m          Pycode               C4                  2.633          6.067
                                  Starcoder (20%)     2.530          5.996
                                  Math (12%)          2.605          6.049
                                  FLAN (5%)           2.648          6.112
                                  KnowledgeQA (20%)   2.602          6.046
                                  DCLM (20%)          2.592          6.007
             GSM8k                C4                  1.405          6.409
                                  Starcoder (20%)     1.363          6.367
                                  Math (12%)          1.359          6.391
                                  FLAN (5%)           1.401          6.407
                                  KnowledgeQA (20%)   1.373          6.434
                                  DCLM (20%)          1.397          6.421
             LIMA                 C4                  4.458          4.159
                                  Starcoder (20%)     4.382          4.168
                                  Math (12%)          4.418          4.180
                                  FLAN (5%)           4.460          4.158
                                  KnowledgeQA (20%)   4.371          4.152
                                  DCLM (20%)          4.395          4.132
             SciQ                 C4                  3.573          7.334
                                  Starcoder (20%)     3.594          7.245
                                  Math (12%)          3.599          7.289
                                  FLAN (5%)           3.550          7.236
                                  KnowledgeQA (20%)   3.537          7.268
                                  DCLM (20%)          3.529          7.269
160m         Pycode               C4                  2.382          5.247
                                  Starcoder (20%)     2.205          5.104
                                  Math (12%)          2.382          5.228
                                  FLAN (5%)           2.389          5.253
                                  KnowledgeQA (20%)   2.420          5.342
                                  DCLM (20%)          2.405          5.268
             GSM8k                C4                  1.188          5.339
                                  Starcoder (20%)     1.186          5.362
                                  Math (12%)          1.139          5.198
                                  FLAN (5%)           1.071          5.320
                                  KnowledgeQA (20%)   1.190          5.346
                                  DCLM (20%)          1.186          5.336
             LIMA                 C4                  4.184          3.606
                                  Starcoder (20%)     4.276          3.556
                                  Math (12%)          4.421          3.599
                                  FLAN (5%)           4.472          3.580
                                  KnowledgeQA (20%)   4.498          3.566
                                  DCLM (20%)          4.446          3.572
             SciQ                 C4                  3.371          5.537
                                  Starcoder (20%)     3.362          5.471
                                  Math (12%)          3.316          5.437
                                  FLAN (5%)           3.362          5.444
                                  KnowledgeQA (20%)   3.339          5.373
                                  DCLM (20%)          3.341          5.383

E MIDTRAINING LEG LENGTH VS. BENEFIT

Figure 7 shows the relationship between the two "legs" dist(C4, midtrain) and dist(midtrain, SFT) and the benefit of midtraining. As in Figure 2, this is computed based on the similarity matrix in Appendix C.

[Figure 7: six scatter panels of relative improvement vs. leg distance, per model size: C4–midtrain distance vs. benefit (quadratic R² = 0.364 for 70m, 0.018 for 160m, 0.313 for 410m) and midtrain–SFT distance vs. benefit (quadratic R² = 0.163 for 70m, 0.403 for 160m, 0.092 for 410m); points colored by SFT dataset (GSM8k, LIMA, Pycode, SciQ) and shaped by midtrain dataset (DCLM (20%), FLAN (5%), KnowledgeQA (20%), Math (12%), Starcoder (100%), Starcoder (20%)).]

Figure 7: Relationship between the two "legs" dist(C4, midtrain) and dist(midtrain, SFT) and the benefit of midtraining.

F REPRESENTATIVE TRAINING LOSS CURVES FOR MIDTRAINED VS. BASE MODELS

Figure 8 shows a representative training loss curve for a midtrained model when its domain is aligned to SFT data.

[Figure 8: training loss vs. step (0–5000) for the base and Starcoder-midtrained Pythia-410m models on Pycode.]

Figure 8: Representative training loss curve for a midtrained model and base model on Pycode, for Pythia-410m. The midtrained model starts with a lower training loss, and maintains a slight gap throughout training.

G ADDITIONAL CKA RESULTS ON APPS

Figure 9 and Figure 10 display the CKA layer similarity for 160m and 410m models.

Figure 9: CKA layer analysis for Pythia-160M with APPS as a probe.

Figure 10: CKA layer analysis for Pythia-410M with APPS as a probe.

H CKA RESULTS ON C4

Figure 11, Figure 12, and Figure 13 show the CKA layer similarity for all model sizes with C4 as a probe.

Figure 11: CKA layer analysis for Pythia-70M with C4 as a probe.

Figure 12: CKA layer analysis for Pythia-160M with C4 as a probe.

Figure 13: CKA layer analysis for Pythia-410M with C4 as a probe.

I STATEMENT ON LLM USAGE

Large language models (LLMs) were used to assist with refining writing in this submission, including summarizing paragraphs in order to shorten the submission, correcting grammar, and giving suggestions to improve organization. LLMs were not used in the ideation process, and analyses and experimental setups were designed fully by the authors. Copilot and other coding agents were used to generate some utility scripts in the process of coding.
MIDTRAINING BRIDGES PRETRAINING AND POSTTRAINING DISTRIBUTIONS

Emmy Liu, Graham Neubig, Chenyan Xiong
Language Technologies Institute
Carnegie Mellon University
Pittsburgh, PA 15213, USA
{mengyan3, gneubig,

ABSTRACT

Recently, many language models have been pretrained with a "midtraining" phase, in which higher quality, often instruction-formatted data is mixed in at the end of pretraining. Despite the popularity of this practice, there is little scientific understanding of this phase of model training or why it is effective. In this work, we conduct the first systematic investigation of midtraining through controlled experiments with language models pretrained from scratch and fine-tuned on supervised finetuning datasets in different domains. We find that when compared after supervised fine-tuning, the effectiveness of midtraining is highest in the math and code domains, where midtraining can best reduce the syntactic gap between pretraining and posttraining data. In these cases, midtraining consistently outperforms continued pretraining in both in-domain validation loss as well as pretraining data forgetting after posttraining. We conduct ablations on the starting time of the midtraining phase and mixture weights of the midtraining data, using code midtraining as a case study, and find that timing has a greater impact than mixture weights, with earlier introduction of specialized data yielding greater benefits in-domain as well as preserving general language modeling better. These findings establish midtraining as a domain adaptation technique that, compared to continued pretraining, yields better performance through reduced forgetting.1

1 Data and code are available at https://github.com/nightingal3/all_in_one_pretraining.

1 INTRODUCTION

The success of large language models has mostly been driven by scaling model and data size. Though many interventions seem promising, they may wash out at scale. Therefore, when methodological interventions are simple yet widely adopted across model scales, they merit attention. One such intervention is midtraining: breaking pretraining into two or more stages in which the latter stages incorporate higher-quality data from specialized domains such as mathematics and coding, as well as instruction-formatted data (Hu et al., 2024b; Dubey et al., 2024; OLMo Team et al., 2025). While this approach has shown improved performance after posttraining (Hu et al., 2024b), there is surprisingly little systematic study of when and why midtraining works (Wang et al., 2025).

This raises key questions for practitioners: When is midtraining expected to work better than simply posttraining a model, or doing continued pretraining on a target domain? What types of data are most effective in a midtraining data mixture? Finally, how does midtraining compare to continued pretraining in specialized domains?

To address these questions, we conduct the first systematic investigation of midtraining through controlled experiments on models pretrained from scratch with different midtraining settings. We systematically vary the type of midtraining data, the timing of its introduction, and mixture weights, followed by supervised fine-tuning on diverse datasets. Our investigation reveals three key findings:

• Midtraining is highly effective at improving downstream loss on a few key domains such as math and code.
In particular, it is effective in improving accuracy on domains that are farther from the general pretraining distribution as represented by web text.

• Midtraining reduces catastrophic forgetting compared to both direct fine-tuning of a pretrained model as well as continued pretraining given the same number of tokens.

• The timing of introducing midtraining data into the training process is important, more so than the mixture weight of the midtraining data source.

Put together, these findings cast a new light on our understanding of midtraining's role in the LM training process: it can be viewed as effectively "bridging" the distribution gap between pretraining and out-of-domain fine-tuning data, introducing a smooth transition between the two at an appropriate time in the training process.

2 PRELIMINARIES

In this section, we define what we refer to as midtraining throughout this paper. While this term has been used colloquially by model developers, it lacks a standard definition, so we establish our working definition for clarity.

2.1 TRAINING SEQUENCE DEFINITIONS

Modern language model training typically follows a sequential approach where models are trained on different data distributions in a specific order. Rather than attempting to define training phases by their data characteristics or objectives, which often overlap in practice, we adopt a sequence-based framework that defines phases by their order in the training process.

Let θ ∈ Θ denote model parameters and X, Y be input and output spaces. A training phase is a tuple (D, L, n) where D is a distinct data distribution over X × Y, L : Θ × (X × Y) → R is a loss function, and n ∈ N+ is the number of training steps taken in that phase. A training sequence is an ordered collection S = {(D_i, L_i, n_i)}_{i=0}^{N} of training phases, where the parameters θ_i resulting from training up to phase i serve as the initialization for phase i + 1.

Pretraining is the initial training phase (D_0, L_0, n_0) in the sequence. In practice, pretraining often uses large, diverse sources of data, such as from the web. The objective for decoder-based language models, which we examine, is typically next-token prediction, L_0(θ; x) = −Σ_{t=1}^{|x|} log p_θ(x_t | x_{<t}). Typically, for i > 0, n_0 ≫ n_i. (A minimal code sketch of this objective is given at the end of this subsection.)

Midtraining refers to intermediate training phases (D_i, L_i, n_i) where 0 < i < N; typically n_0 > n_i > n_N. There can potentially be multiple midtraining phases as well in the case of multi-stage pretraining curricula, but we focus on one-stage midtraining in this paper.

Finetuning is the final training phase (D_N, L_N, n_N) in the sequence. Finetuning usually uses task-specific datasets and may employ diverse objectives or preference-based methods rather than next-token prediction. Typically, finetuning uses the least number of steps, although this is not necessarily always the case.

Note that our framework reflects current popular practices for language model training, rather than prescribing an optimal training methodology. Additional edge cases include multi-stage finetuning, cyclic training schedules, and mixture schedules where distribution weights vary continuously.
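To make the next-token objective above concrete, the following is a minimal PyTorch-style sketch (our illustration only; the actual training code uses LitGPT, and all names here are ours):

```python
import torch
import torch.nn.functional as F

def next_token_loss(logits: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
    """L_0(theta; x) = -sum_t log p_theta(x_t | x_{<t}), averaged over tokens.

    logits: (batch, seq_len, vocab) model outputs
    tokens: (batch, seq_len) input token ids
    """
    # Predict token t from positions < t: drop the last logit, shift targets.
    shift_logits = logits[:, :-1, :].contiguous()
    shift_targets = tokens[:, 1:].contiguous()
    return F.cross_entropy(
        shift_logits.view(-1, shift_logits.size(-1)),
        shift_targets.view(-1),
    )
```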
2.2 RELATIONSHIP TO CURRICULUM LEARNING AND CONTINUED PRETRAINING

Curriculum learning. The original definition of curriculum learning focused on gradually increasing the difficulty or diversity of training examples throughout the course of training (Elman, 1993; Bengio et al., 2009). However, this term has evolved to generally mean any strategic ordering of training data (Soviany et al., 2022). In this general view, it encompasses any training strategy in which the sequence S = {(D_i, L_i, n_i)}_{i=0}^{N} is designed according to some principled ordering criterion, whether based on example difficulty, domain progression, task complexity, data quality, or other strategic considerations. Therefore, midtraining can be viewed as a type of curriculum learning that operates at a distributional level rather than over individual examples: instead of ordering specific training instances by difficulty or other criteria, midtraining strategically organizes different data distributions or domain mixtures across discrete training phases. This coarse-grained approach focuses on when to introduce entire types of data during pretraining rather than sequencing of individual examples.

Continued pretraining. Continued pretraining typically refers to additional pretraining of an already-pretrained model on domain-specific data, where the data distribution shifts completely to the target domain (Gururangan et al., 2020; Beltagy et al., 2020). This approach commits entirely to domain-specific data after a certain point in training, potentially losing general capabilities. In contrast, midtraining maintains mixed distributions throughout the intermediate phase, preserving general pretraining data alongside specialized data. Formally, continued pretraining can be viewed as the limiting case where the midtraining distribution D_i consists entirely of domain-specific data (0% original pretraining data, 100% domain data), though our results suggest this pure approach may be suboptimal. Implementation details such as optimizer state handling and learning rate schedule may vary, but most often midtraining continues on the original pretraining schedule, preserving optimizer state, while continued pretraining often operates on a new schedule.

3 EXPERIMENTAL SETTING

3.1 TRAINING SETUP

Pretraining. We pretrain models from the Pythia family ranging in size from 70M to 410M parameters on C4 web data (Raffel et al., 2020; Biderman et al., 2023). In all cases, we train for 128B tokens (approx. 61k steps) with a cosine learning rate schedule with a maximum learning rate of 3e-4 and the AdamW optimizer (Loshchilov & Hutter, 2019). We chose to fix the training budget at a point past Chinchilla-optimality for all models (Hoffmann et al., 2022), in order to ensure that models have stabilized by the point at which midtraining data has been introduced, at least for later insertion points of midtraining data. We describe the exact training setup in Appendix A.

Midtraining. We use five midtraining mixtures spanning popular domains: code (Starcoder), math, instructions (FLAN), general knowledge/QA, and high-quality web data (DCLM). Table 1 details each mixture's composition and sources. All mixtures are introduced at varying start points (Starcoder: 6k steps, Math: 20k steps, others: 40k steps) based on data availability to prevent repetition. We compare against a control condition continuing C4 pretraining for the same number of tokens, keeping all other training details identical. (A minimal sketch of mixture sampling is given after Table 1.)

Table 1: Midtraining mixes used in our experiments and dataset(s) from which they were derived.

Midtrain mix   Num. Tokens (B)   Sources
Starcoder      196               (Li et al., 2023)
Math           12                (Yue et al., 2023; Toshniwal et al., 2024)
FLAN           3.5               (Wei et al., 2022)
KnowledgeQA    9.6               (Hu et al., 2024a)
DCLM           51                (Li et al., 2024b)
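As a hypothetical illustration of how such a mixture can be realized (this is not the paper's data loader; the sampling scheme and names are our assumptions), each training document can be drawn from the specialized or general stream with the mixture weight as the sampling probability:

```python
import random
from typing import Iterator

def mixture_stream(specialized: Iterator[str], general: Iterator[str],
                   weight: float, seed: int = 0) -> Iterator[str]:
    """Yield documents from `specialized` with probability `weight`,
    otherwise from `general` (e.g., weight=0.2 for the 20% Starcoder mix).

    Both input streams are assumed to be effectively infinite
    (e.g., wrapped in itertools.cycle) for the sake of this sketch.
    """
    rng = random.Random(seed)
    while True:
        source = specialized if rng.random() < weight else general
        yield next(source)
```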
STARCODER (CODE) Our code mixture is a subset of the Starcoder pretraining dataset (Li et al., 2023), which contains code in many languages. Note that we use code from all languages, rather than Python alone.

MATH The math mixture combines mathematical reasoning problems from the MAmmoTH (Yue et al., 2023) and OpenMathInstruct (Toshniwal et al., 2024) datasets, featuring step-by-step explanations.

FLAN (INSTRUCTIONS) Our instruction-formatted data comes from a processed version of the FLAN collection, which includes diverse task instructions and responses across natural language tasks (Wei et al., 2022).

KNOWLEDGEQA (GENERAL KNOWLEDGE AND QA) The KnowledgeQA mixture is taken from Hu et al. (2024b)'s midtraining mix, and focuses on general knowledge and dialogue. However, to distinguish the midtraining mixes further, the StackOverflow portion of this dataset is removed.

DCLM (HIGH-QUALITY WEB) Our high-quality web data is a subset of the DCLM pretraining dataset, representing web content with improved quality filtering compared to C4 (Li et al., 2024b).

Downstream Evaluation. We fine-tune models on the datasets GSM8k (Cobbe et al., 2021), SciQ (Welbl et al., 2017), CodeSearchNet-Python (Husain et al., 2019), and LIMA (Zhou et al., 2023), chosen to span the domains covered by our midtraining mixtures. This allows us to test cases where the midtraining mixture is aligned or misaligned with the SFT dataset. We used standard language model supervised fine-tuning for all datasets. For information on the posttraining setup, see Appendix B.

Catastrophic Forgetting Evaluation. A key concern with supervised fine-tuning is whether introducing specialized data causes models to forget general capabilities acquired during pretraining. We measure catastrophic forgetting by evaluating cross-entropy loss on held-out C4 data from the original pretraining distribution. This approach follows established practices for measuring forgetting in language models (Luo et al., 2024; Kemker et al., 2018; Li et al., 2024a).

4 WHICH DOWNSTREAM TASKS BENEFIT MOST FROM MIDTRAINING?

To identify which tasks benefit most from midtraining, we evaluate all midtraining mix and SFT dataset combinations. For each combination, we measure validation loss on the target domain (adaptation effectiveness) and C4 validation loss after fine-tuning (forgetting). We average results over 5 seeds after hyperparameter search for each checkpoint. Midtraining benefits are highly domain-specific: specialization on code yields the largest gains on code tasks, while math-focused midtraining helps mathematical-reasoning tasks. Mismatched midtraining provides minimal benefit, and general instruction mixes (e.g., FLAN) produce little improvement. Full per-dataset results and numerical comparisons are reported in Table 2 and Appendix D.

5 WHAT DATA IS MOST EFFECTIVE FOR MIDTRAINING?

Having established that midtraining effects are domain-specific, we now ask: what determines the strength of these domain-specific effects? Across midtraining-target pairs, we see improvements ranging from negligible (e.g. FLAN → coding) to strong (e.g. Starcoder → coding).

               C4    StarCoder  Math Combined  PyCode  GSM8K
C4             1.00  0.83       0.95           0.54    0.61
StarCoder      0.83  1.00       0.85           0.70    0.57
Math Combined  0.95  0.85       1.00           0.59    0.65
PyCode         0.54  0.70       0.59           1.00    0.48
GSM8K          0.61  0.57       0.65           0.48    1.00

Figure 1: Example similarity matrix between pre/midtrain and posttraining datasets. For the complete matrix, see Appendix C.

Although the pairs that work the best are intuitive, we ask whether we can quantify this intuition through measurable proximity between data distributions.
To test whether optimal midtraining data "bridges" the gap between pretraining and target distributions, we simply use token-distributional similarity between datasets, which we discuss in more detail in Appendix C. Figure 1 shows a reduced subset as an example: we can see that C4 and Pycode are fairly dissimilar (0.54 similarity), but the blend containing 29% Starcoder data brings this closer to Pycode (0.7 similarity).

Table 2: SFT and C4 validation losses for the 410M model across downstream datasets and midtraining mixtures (5 seeds per SFT dataset). Bold indicates best within each dataset. Percentages denote the specialized data proportion mixed with C4 during midtraining.

Dataset  Midtrain Mix       SFT Val Loss  C4 Val Loss
Pycode   C4                 2.264         5.058
         Starcoder (20%)    2.091         4.611
         Math (12%)         2.274         5.096
         FLAN (5%)          2.252         5.047
         KnowledgeQA (20%)  2.264         5.094
         DCLM (20%)         2.262         5.157
GSM8K    C4                 0.956         4.937
         Starcoder (20%)    0.944         4.951
         Math (12%)         0.918         5.038
         FLAN (5%)          0.958         4.970
         KnowledgeQA (20%)  0.961         4.966
         DCLM (20%)         0.949         4.966
LIMA     C4                 3.490         3.166
         Starcoder (20%)    3.422         3.155
         Math (12%)         3.507         3.168
         FLAN (5%)          3.513         3.164
         KnowledgeQA (20%)  3.501         3.163
         DCLM (20%)         3.496         3.158
SciQ     C4                 3.271         6.873
         Starcoder (20%)    3.261         6.831
         Math (12%)         3.257         6.894
         FLAN (5%)          3.277         6.771
         KnowledgeQA (20%)  3.273         6.814
         DCLM (20%)         3.270         6.874

Finding 1. Midtraining benefits are highly domain-specific. Code-focused midtraining (Starcoder) yields large gains on coding tasks, and math-focused midtraining improves mathematical reasoning. Mismatched domains provide little benefit, and broad instruction data (FLAN) also shows minimal effect.

5.1 PROXIMITY AND BRIDGING EFFECTS

To understand why certain midtraining mixtures are effective, we test the hypothesis that optimal midtraining data "bridges" the gap between pretraining (C4) and target SFT datasets. Using the simple token similarity metric, we measure the "proximity advantage" of each midtraining mixture: how much closer it brings the model to the target posttraining distribution compared to continuing with C4 alone. Results are shown in Figure 2; a code sketch of the computation follows the figure. We find a clear relationship between proximity advantage and downstream performance improvements across model sizes. The correlations are particularly strong for smaller models (r = 0.869, p < 0.001 for 70m), suggesting that effective midtraining data serves as a distributional stepping stone from general pretraining to specialized target domains. This bridging effect appears to be most beneficial when the gap between pretraining and target distributions is large, consistent with our hypothesis that midtraining helps models adapt gradually rather than requiring abrupt distributional shifts during fine-tuning.

Finding 2. Effective midtraining data acts as a distributional bridge between pretraining and target datasets when considering token patterns.

[Figure 2: three scatter panels, "Proximity Advantage vs. Benefit," for 70m (r = 0.869, p = 0.000), 160m (r = 0.419, p = 0.047), and 410m (r = 0.497, p = 0.026); x-axis: proximity advantage, y-axis: relative improvement (%); points with |PA| < 0.02 are greyed out; markers encode SFT dataset (GSM8k, LIMA, Pycode, SciQ) and midtrain dataset (DCLM 20%, FLAN 5%, KnowledgeQA 20%, Math 12%, Starcoder 100%, Starcoder 20%).]

Figure 2: Relationship between proximity advantage and midtraining performance improvements for pairs of midtraining and SFT datasets. Each data point represents a (midtrain, SFT) pair, where the color indicates the SFT dataset and shape represents midtrain dataset. Proximity advantage (dist(C4, SFT) − dist(midtrain, SFT)) indicates how much closer midtraining data brings the model to the target SFT dataset compared to the base pretraining data. Proximity advantage pairs near zero are greyed out for clarity but included in calculations. Relative improvement is measured against the base model pretrained on C4.
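To make the proximity-advantage computation concrete, here is a minimal sketch (ours, not the paper's released code; we assume distance is one minus the combined similarity of Appendix C):

```python
def proximity_advantage(sim: dict, midtrain: str, sft: str, base: str = "C4") -> float:
    """dist(base, sft) - dist(midtrain, sft), with dist = 1 - similarity.

    Positive values mean the midtraining mix sits closer to the target
    SFT dataset than the base pretraining data does.
    """
    dist_base = 1.0 - sim[(base, sft)]
    dist_mid = 1.0 - sim[(midtrain, sft)]
    return dist_base - dist_mid

# Example using Figure 1's values: sim(C4, PyCode)=0.54, sim(StarCoder mix, PyCode)=0.70.
sim = {("C4", "PyCode"): 0.54, ("StarCoder", "PyCode"): 0.70}
print(proximity_advantage(sim, "StarCoder", "PyCode"))  # 0.16
```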
5.2 MIDTRAINING VS. CONTINUED PRETRAINING

Our results so far suggest that effective midtraining data serves as a bridge between general pretraining and specialized posttraining data. However, a question that follows is why midtraining is necessary: continued pretraining on domain-specific data also aims to adapt the model toward a target domain. Why not simply pretrain normally and then switch to domain-specific data entirely? To examine this, we compare the effect of midtraining with continued pretraining in which the mixture weight switches to 100% specialized data. For code, we compare the default Starcoder midtraining mix (20% mixture weight, starting from 12.6B tokens) with 100% Starcoder data starting from 83B tokens. For math, we compare the math midtraining mix with 100% math data starting from 105B tokens.2

Results in Table 3 show that midtraining consistently outperforms continued pretraining across both domains and model sizes for both in-domain performance and C4 retention after fine-tuning. As this pattern holds for both code and math domains, this suggests that maintaining some general pretraining data is useful during domain adaptation, even for models specialized for a specific domain. This supports our intuition gained from prior sections that domain adaptation benefits from gradual distributional shifts at the token level rather than abrupt changes.

Finding 3. Maintaining a mixture with general data in midtraining is preferable to continued pretraining on specialized data alone.

6 WHEN AND HOW MUCH MIDTRAINING DATA SHOULD BE INTRODUCED?

Given that a midtraining dataset's proximity to the finetuning dataset matters, and that we want to balance midtraining data with general pretraining data, we next investigate when to integrate midtraining data and what mixture weight to use. Using the Starcoder midtraining mixture, we vary both timing (10%-80% through pretraining) and mixture weights (10%-80% of training data). When testing the effect of timing, we use a fixed mixture weight of 20% for Starcoder and vary the time at which the midtraining phase begins. When testing the effect of mixture weight, we fix the step at which midtraining starts at two specific points in training: 63B and 105B tokens, while varying the mixture weight of Starcoder as compared to C4.

2 The different starting points are due to data availability, to ensure the midtraining mix does not repeat.

Table 3: SFT and C4 validation losses for 70M and 160M models comparing default midtraining mixes to continued pretraining on only the midtraining data (100%), averaged across 5 seeds.
Bold indicates best performance within each dataset/model combination.

Model  Dataset  Mix                        SFT    C4
70M    Pycode   Pretrain-only              2.633  6.067
                Starcoder (20%)            2.530  5.996
                Ctd. pretrain (Starcoder)  2.580  6.160
       GSM8K    Pretrain-only              1.405  6.409
                Math (12%)                 1.359  6.391
                Ctd. pretrain (Math)       1.401  6.422
160M   Pycode   Pretrain-only              2.382  5.247
                Starcoder (20%)            2.205  5.104
                Ctd. pretrain (Starcoder)  2.283  5.373
       GSM8K    Pretrain-only              1.188  5.339
                Math (12%)                 1.139  5.198
                Ctd. pretrain (Math)       1.180  5.364

[Figure 3: three panel pairs; (a) timing ablation at a fixed 20% mixture weight (x-axis: midtraining start in B tokens), (b) mixture-weight ablation starting at 105B tokens, (c) mixture-weight ablation starting at 63B tokens; top row: SFT (in-domain) validation loss, bottom row: C4 validation loss (forgetting); curves for 70M and 160M.]

Figure 3: Pycode ablations. (A) Timing effect (fixed 20%). (B) Mixture effect at 105B. (C) Mixture effect at 63B. Earlier start increases sensitivity to mixture weight, especially for 70M; 160M is more robust. Top: SFT after Pycode finetuning. Bottom: C4 losses (forgetting).

When varying mixture proportions from a fixed starting point, effects were modest compared to varying the timing of introduction across different pretraining stages at a fixed 20% mixture weight (Figure 3(a)). The timing effects are substantial: for the 160m model, early introduction at 12B tokens achieves 2.205 validation loss compared to 2.374 when delayed to 105B tokens. Similarly, the 70m model performs best with introduction at 42B tokens (2.505) versus later introduction. However, we note that timing and mixture weight also interact, and mixture weight appears to be more impactful when midtraining data is introduced earlier, as the range of best and worst mixture weights roughly doubles at 63B tokens compared to at 105B tokens. This suggests that when to introduce specialized data may be more critical than how much specialized data to include, at least within our experimental ranges. See Figure 3(b-c) for mixture weight comparisons.

Relatedly, Figure 4 illustrates how midtraining benefits evolve over the course of pretraining for the 20% Starcoder mix (160m model). We finetune checkpoints from different pretraining steps on Pycode and measure both in-domain and C4 validation loss after fine-tuning. In-domain advantages emerge quickly after midtraining introduction (6k steps), while the C4 retention benefits develop more gradually, becoming apparent after approximately 20k steps. This temporal pattern suggests that early introduction of specialized data provides sufficient time for both immediate domain adaptation and gradual integration with general capabilities.

[Figure 4: two panels, in-domain validation loss and C4 validation loss vs. pretraining tokens (20-120B).]

Figure 4: Validation loss and C4 loss for the Starcoder-midtrained model (160M) and base pretrained model after supervised fine-tuning on the Pycode dataset, with each point on the x-axis representing the number of tokens the pretrained checkpoint was trained on.

Finding 4.
The timing of midtraining is more critical than the mixture weight; early introduction of specialized data in the code domain leads to stronger in-domain gains and better retention of general capabilities.

7 HOW DOES MIDTRAINING CHANGE MODEL REPRESENTATIONS?

In the previous sections, we have established that midtraining improves downstream performance as well as retention of the original pretraining distribution, and that this operates in a domain- and timing-sensitive manner. However, this raises the question of how midtraining changes the model weights themselves. As a first step toward answering this question, we investigate the changes experienced during finetuning by a midtrained as well as a non-midtrained model in the code domain.

We use linear Centered Kernel Alignment (CKA) to measure layer-wise similarity between model states in the code domain (Kornblith et al., 2019); a sketch of the computation is given at the end of this section. We extract activations from all layers using probe datasets (C4 and APPS (Hendrycks et al., 2021)) and compute CKA similarity matrices between four key model states: base pretrained, midtrained (Starcoder), base fine-tuned, and midtrained fine-tuned. If midtraining creates better representations for downstream tasks, we expect to see smaller representational changes during fine-tuning for midtrained models compared to base models.

Figure 5 shows the representational analysis for the 70M model. The midtrained model exhibits greater stability in the final layer after fine-tuning, a pattern consistent across model sizes (see Appendix G for the remaining results). However, the final fine-tuned models show high similarity regardless of whether models underwent midtraining. These effects are less pronounced for C4, which can be seen in Appendix H.

[Figure 5: four 7-by-7 layer-wise CKA heatmaps over layers L0-L6: "Base vs Midtrained (Effect of midtraining)", "Base vs Base-SFT (Base model finetuning changes)", "Midtrained vs Midtrained-SFT (Midtrained model finetuning changes)", and "Base-SFT vs Midtrained-SFT (Final models comparison)".]

Figure 5: CKA analysis of model activations in the 70M model, probed with the APPS code dataset.

Finding 5. Models exposed to midtraining require smaller representational shifts during finetuning, especially in the final layer, indicating smoother adaptation.
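For reference, the linear CKA score used above can be computed as follows; this is the standard formulation from Kornblith et al. (2019), sketched by us rather than taken from the paper's code:

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between activation matrices X (n, d1) and Y (n, d2)
    extracted for the same n probe examples (Kornblith et al., 2019)."""
    # Center each feature dimension.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # CKA = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    numerator = np.linalg.norm(Y.T @ X, "fro") ** 2
    denominator = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return float(numerator / denominator)
```

Applying this to every pair of layers across two model states yields one cell of the heatmaps in Figure 5.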
8 RELATED WORK

Specific midtrained models. Recently, several language model families have adopted midtraining approaches with varying implementation details (Hu et al., 2024b; Dubey et al., 2024; OLMo Team et al., 2025; Chameleon Team, 2024). The midtraining phase duration varies from 2% (Hu et al., 2024b) to 20% (Chameleon Team, 2024) of total training, motivating our systematic investigation of timing effects. Common midtraining domains include code, math, instructions, and higher-quality web data (OLMo Team et al., 2025), which are the domains we investigate. Beyond general-purpose models, midtraining has shown benefits for specific tasks like RL (Wang et al., 2025) and GUI agents (Zhang et al., 2025a). This widespread adoption motivates our questions of when and why midtraining provides downstream benefits.

Multi-stage pretraining. Several works explore multi-stage pretraining: Feng et al. (2024) and Blakeney et al. (2024) focus on two-stage pretraining, and Zhang et al. (2025b) propose four-stage pretraining. These approaches demonstrate improvements over single-stage pretraining. However, these works evaluate base model performance after pretraining, whereas we evaluate in the post-finetuning setting to capture benefits that also affect posttraining.

Continual Learning. Domain-adaptive pretraining (DAPT) and related approaches continue pretraining on domain-specific data (Gururangan et al., 2020). Krishna et al. (2023) show that pretraining on downstream data alone can rival full pretraining when evaluated after fine-tuning, suggesting that pretraining-posttraining alignment matters, consistent with our findings. Mehta et al. (2023) find pretraining reduces catastrophic forgetting during sequential fine-tuning; similarly, we observe midtrained models serve as better initializations with less forgetting.

Stability in Continued Pretraining. Recent work addresses stability challenges during continued pretraining. Guo et al. (2024) identify a "stability gap" where performance temporarily drops before recovering when shifting to new domains, Yang et al. (2024) synthesize larger training corpora from small domain-specific datasets, and Lin et al. (2024) introduce selective training on useful tokens only. While these works target training dynamics during continued pretraining, our approach examines how midtraining data selection affects post-fine-tuning performance, representing a complementary focus on end-task effectiveness.

Relationship between Pretraining and Finetuning. Several recent works have explored incorporating instruction-formatted data during pretraining. Allen-Zhu & Li (2023) show with an experiment on synthetic Wikipedia-style data that augmenting pretraining data with QA-formatted data improves subsequent fine-tuning, and Jiang et al. (2024) and Cheng et al. (2024) demonstrate this in a practical context as well. Sun & Dredze (2024) find continual pretraining benefits emerge only after fine-tuning, while Springer et al. (2025) show extended pretraining causes catastrophic forgetting ("overtraining"), particularly on math/code domains least aligned with web data. It is possible that midtraining prevents overtraining by introducing specialized data earlier and providing a better initialization for posttraining.

9 CONCLUSION

We conduct the first systematic investigation of midtraining through controlled experiments.
We demonstrate that midtraining benefits are domain-specific, with the most substantial improvements in math and code domains that are not well represented in standard web pretraining corpora. Furthermore, we find that midtraining mitigates catastrophic forgetting of general language modeling abilities after domain-specific supervised fine-tuning and consistently outperforms continued pretraining on specialized data alone. In addition, the timing of data introduction appears to have a stronger effect than the mixture weight of that specialized data, though more work is needed to clarify this effect.

Based on these findings, we recommend that model trainers focus midtraining on domains which exhibit substantially different token patterns compared to the base pretraining data, given that they expect to posttrain on these domains. Furthermore, it may be worthwhile to begin the midtraining stage earlier than commonly practiced, in proportion to how different a domain is from general pretraining data. Lastly, even when adapting models to a specific domain, midtraining should take precedence over continued pretraining.

However, despite these initial results, there is substantial future work to be done. A natural next step is to test the findings at much larger scales and with more diverse domains such as medicine, music, or other specific fields. Another direction is understanding how midtraining, and pretraining data distributions more broadly, interact with reinforcement-learning-based posttraining, and whether there are any differences between this and supervised-finetuning-based posttraining when it comes to adaptation. Finally, developing principled curricula that span multiple midtraining stages could shed light on how best to structure pretraining to prepare a model for posttraining while expanding its general knowledge. Further work in these directions could not only improve language model development but also improve our understanding of how data and curricula shape the emergence and stability of model capabilities.

REPRODUCIBILITY STATEMENT

Anonymized data and code have been provided at the URL https://github.com/nightingal3/all_in_one_pretraining. Due to the size of pretraining data sources and checkpoints, these have not yet been uploaded to the cloud, but will be provided after acceptance. Additionally, multiple sections of the appendix describe pretraining and posttraining settings for ready reproduction (particularly Appendix A and Appendix B).

10 ACKNOWLEDGMENTS

We would like to thank Scott Wen-tau Yih for his mentorship and guidance on this project. Additionally, we would like to thank Lucio Dery, Kaiser Sun, and Jacob Springer for useful discussions on this project. EL was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) [funding reference number 578085].

REFERENCES

Zeyuan Allen-Zhu and Yuanzhi Li. Physics of language models: Part 3.1, knowledge storage and extraction. arXiv preprint, 2023. URL https://arxiv.org/abs/2309.14316.

Iz Beltagy, Kyle Lo, and Arman Cohan. SciBERT: A pretrained language model for scientific text. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 3615-3620, 2020. URL https://aclanthology.org/2020.emnlp-demos.82.

Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In Proceedings of the 26th International Conference on Machine Learning (ICML), pp. 41-48, 2009. URL https://doi.org/10.1145/1553374.1553380.
Stella Biderman, Hailey Schoelkopf, Quentin Gregory Anthony, et al. Pythia: A suite for analyzing large language models across training and scaling. In Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pp. 2397-2430. PMLR, 2023. URL https://proceedings.mlr.press/v202/biderman23a.html.

Cody Blakeney, Mansheej Paul, Brett W. Larsen, Sean Owen, and Jonathan Frankle. Does your data spark joy? Performance gains from domain upsampling at the end of training. arXiv preprint, June 2024. URL http://arxiv.org/abs/2406.03476.

Chameleon Team. Chameleon: Mixed-modal early-fusion foundation models. arXiv preprint, May 2024. URL http://arxiv.org/abs/2405.09818.

Daixuan Cheng, Yuxian Gu, Shaohan Huang, Junyu Bi, Minlie Huang, and Furu Wei. Instruction pre-training: Language models are supervised multitask learners. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pp. 2529-2550, 2024. URL https://aclanthology.org/2024.emnlp-main.148.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, et al. Training verifiers to solve math word problems. arXiv preprint, 2021.

Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, et al. The Llama 3 herd of models. arXiv preprint, 2024. URL https://arxiv.org/abs/2407.21783.

Jeffrey L. Elman. Learning and development in neural networks: The importance of starting small. Cognition, 48(1):71-99, 1993.

Steven Feng, Shrimai Prabhumoye, Kezhi Kong, Dan Su, Mostofa Patwary, Mohammad Shoeybi, and Bryan Catanzaro. Maximize your data's potential: Enhancing LLM accuracy with two-phase pretraining. arXiv preprint, December 2024. URL http://arxiv.org/abs/2412.15285.

Yiduo Guo, Jie Fu, Huishuai Zhang, Dongyan Zhao, and Yikang Shen. Efficient continual pretraining by mitigating the stability gap. arXiv preprint, 2024.

Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. Don't stop pretraining: Adapt language models to domains and tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 8342-8360, 2020.

Dan Hendrycks, Steven Basart, Saurav Kadavath, et al. Measuring coding challenge competence with APPS. In Advances in Neural Information Processing Systems (NeurIPS), 2021. URL https://proceedings.neurips.cc/paper/2021/file/be83ab3ecd0db773eb2dc1b0a17836a1-Paper.pdf.

Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, et al. Training compute-optimal large language models. In Advances in Neural Information Processing Systems, volume 35, pp. 20122-20134, 2022. URL https://arxiv.org/abs/2203.15556.

Shengding Hu, Zepeng Bai, Tiannan Chen, et al. MiniCPM: Unveiling the potential of small language models with scalable training strategies. arXiv preprint, 2024a.

Shengding Hu, Yuge Tu, Xu Han, et al. MiniCPM: Unveiling the potential of small language models with scalable training strategies. arXiv preprint, April 2024b. URL http://arxiv.org/abs/2404.06395.

Hamel Husain, Ho-Hsiang Wu, Tiferet Gazit, Miltiadis Allamanis, and Marc Brockschmidt. CodeSearchNet challenge: Evaluating the state of semantic code search. arXiv preprint, 2019.

Zhengbao Jiang, Zhiqing Sun, Weijia Shi, Pedro Rodriguez, Chunting Zhou, Graham Neubig, Xi Lin, Wen-tau Yih, and Srinivasan Iyer. Instruction-tuned language models are better knowledge learners. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 5421-5434, 2024. URL https://aclanthology.org/2024.acl-long.296.

Ronald Kemker, Marc McClure, Angelina Abitino, Tyler Hayes, and Christopher Kanan. Measuring catastrophic forgetting in neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.

Simon Kornblith, Mohammad Norouzi, Honglak Lee, and Geoffrey Hinton. Similarity of neural network representations revisited. In Proceedings of the 36th International Conference on Machine Learning (ICML), volume 97 of Proceedings of Machine Learning Research, pp. 3519-3529. PMLR, 2019. URL https://proceedings.mlr.press/v97/kornblith19a.html.

Kundan Krishna, Saurabh Garg, Jeffrey Bigham, and Zachary Lipton. Downstream datasets make surprisingly good pretraining corpora. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 12207-12222, Toronto, Canada, July 2023. Association for Computational Linguistics. URL https://aclanthology.org/2023.acl-long.682/.

Hongyu Li, Liang Ding, Meng Fang, and Dacheng Tao. Revisiting catastrophic forgetting in large language model tuning. In Findings of the Association for Computational Linguistics: EMNLP 2024, pp. 4297-4308, Miami, Florida, USA, 2024a. Association for Computational Linguistics. URL https://aclanthology.org/2024.findings-emnlp.249/.

Jeffrey Li, Alex Fang, Georgios Smyrnis, et al. DataComp-LM: In search of the next generation of training sets for language models. arXiv preprint, 2024b.

Raymond Li, Loubna Ben Allal, Yangtian Zi, et al. StarCoder: may the source be with you! arXiv preprint, 2023.

Lightning AI. LitGPT. https://github.com/Lightning-AI/litgpt, 2023.

Zhenghao Lin, Zhibin Gou, Yeyun Gong, et al. Rho-1: Not all tokens are what you need. arXiv preprint, 2024.

Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In International Conference on Learning Representations (ICLR), 2019. URL https://openreview.net/forum?id=Bkg6RiCqY7.

Yun Luo, Zhen Yang, Fandong Meng, Yafu Li, Jie Zhou, and Yue Zhang. An empirical study of catastrophic forgetting in large language models during continual fine-tuning. arXiv preprint, 2024.

Sanket Vaibhav Mehta, Darshan Patil, Sarath Chandar, and Emma Strubell. An empirical investigation of the role of pre-training in lifelong learning. Journal of Machine Learning Research, 24(214):1-50, 2023. URL http://jmlr.org/papers/v24/22-0496.html.

OLMo Team, Pete Walsh, Luca Soldaini, et al. 2 OLMo 2 Furious. arXiv preprint, 2025.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67, 2020. URL https://arxiv.org/abs/1910.10683.

Petru Soviany, Radu Tudor Ionescu, Paolo Rota, and Nicu Sebe. Curriculum learning: A survey. arXiv preprint, 2022. URL https://arxiv.org/abs/2101.10382.

Jacob Mitchell Springer, Sachin Goyal, Kaiyue Wen, Tanishq Kumar, Xiang Yue, Sadhika Malladi, Graham Neubig, and Aditi Raghunathan. Overtrained language models are harder to fine-tune. arXiv preprint, 2025. URL https://arxiv.org/abs/2503.19206.

Kaiser Sun and Mark Dredze. Amuro and Char: Analyzing the relationship between pre-training and fine-tuning of large language models. arXiv preprint arXiv:2408.06663, 2024. URL https://arxiv.org/abs/2408.06663.

Shubham Toshniwal, Ivan Moshkov, Sean Narenthiran, Daria Gitman, Fei Jia, and Igor Gitman. OpenMathInstruct-1: A 1.8 million math instruction tuning dataset. arXiv preprint, 2024.

Zengzhi Wang, Fan Zhou, Xuefeng Li, and Pengfei Liu. OctoThinker: Mid-training incentivizes reinforcement learning scaling, 2025.

Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. Finetuned language models are zero-shot learners. In International Conference on Learning Representations, 2022.

Johannes Welbl, Nelson F. Liu, and Matt Gardner. Crowdsourcing multiple choice science questions. In Proceedings of the 3rd Workshop on Noisy User-generated Text, pp. 94-106, Copenhagen, Denmark, September 2017. Association for Computational Linguistics. URL https://aclanthology.org/W17-4413/.

Zitong Yang, Neil Band, Shuangping Li, Emmanuel Candès, and Tatsunori Hashimoto. Synthetic continued pretraining. arXiv preprint, 2024.

Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen. MAmmoTH: Building math generalist models through hybrid instruction tuning. arXiv preprint, 2023.

Junlei Zhang, Zichen Ding, Chang Ma, Zijie Chen, Qiushi Sun, Zhenzhong Lan, and Junxian He. Breaking the data barrier: Building GUI agents through task generalization, 2025a.

Xuemiao Zhang, Feiyu Duan, Liangyu Xu, Yongwei Zhou, Sirui Wang, Rongxiang Weng, Jingang Wang, and Xunliang Cai. FRAME: Boosting LLMs with a four-quadrant multi-stage pretraining strategy, 2025b. URL https://arxiv.org/abs/2502.05551.

Chunting Zhou, Pengfei Liu, Puxin Xu, et al. LIMA: Less is more for alignment. arXiv preprint, 2023.

A PRETRAINING SETTINGS

We pretrained Pythia 70M, 160M, and 410M from scratch on the C4 dataset. All three models were trained for 128B tokens, or approximately 61k steps, with very similar settings (documented in Table 4). L40S GPUs were used for all pretraining and midtraining runs. Models were trained with the LitGPT library (Lightning AI, 2023).

B POSTTRAINING SETTINGS

We fine-tuned all models on four downstream datasets: Pycode (our 5K-sample subset of CodeSearchNet-Python), GSM8K (7.5K math problems), LIMA (1K instruction examples), and SciQ (13.7K science questions).
For GSM8K only, the prompt/question portion was masked during loss; for the others the loss was computed over the full sequence. A summary of the datasets is given in Table 5. All runs used a cosine learning rate schedule with 10% linear warmup, trained for 4 epochs, global batch size 64, and micro-batch size 16 for 70M/160M (8 for 410M). Peak learning rates were selected by grid search on the base pretrained checkpoint before midtraining, and the LR grid is given in Table 6. Selected LRs for the final checkpoint of each model size are given in Table 7.

Table 4: Core pretraining hyperparameters for Pythia-70M, 160M, and 410M.

Hyperparameter     70M                   160M                  410M
Global batch size  1024                  1024                  1024
Micro batch size   16                    16                    8
LR schedule        Cosine w/ 10% warmup  Cosine w/ 10% warmup  Cosine w/ 10% warmup
Max LR             3e-4                  3e-4                  3e-4
Min LR             1e-6                  1e-6                  1e-6
Optimizer          AdamW                 AdamW                 AdamW
Betas              (0.9, 0.95)           (0.9, 0.95)           (0.9, 0.95)
Weight decay       0.1                   0.1                   0.1
Precision          BF16                  BF16                  BF16
Num ranks          4                     4                     8

Table 5: Finetuning datasets.

Dataset                               # Train Samples  Prompt masked
Pycode (CodeSearchNet-Python subset)  5,000            No
GSM8K                                 7,500            Yes
LIMA                                  1,000            No
SciQ                                  13,679           No

Table 6: Grid of candidate peak learning rates swept during tuning.

LR grid: 4e-6, 8e-6, 1e-5, 2e-5, 4e-5, 5e-5, 6e-5, 7e-5, 8e-5, 9e-5, 1e-4, 1.2e-4, 1.4e-4, 1.6e-4, 1.8e-4, 2e-4, 2.4e-4, 4e-4, 5e-4, 6e-4, 8e-4, 1e-3, 2e-3, 3e-3, 4e-3, 6e-3

Table 7: Selected peak learning rates for fine-tuning (cosine schedule with 10% warmup).

Dataset  70M     160M    410M
GSM8K    8e-4    4e-4    4e-4
LIMA     1.2e-4  5e-5    5e-5
Pycode   1e-3    5e-4    4e-4
SciQ     8e-4    2.4e-4  6e-4

C DATASET SIMILARITY MATRIX

We compute dataset similarity using surface-level token statistics after initial experimentation with embedding models gave implausible results for code datasets' similarities to other natural language datasets. For each pair of pretrain/midtrain and downstream datasets, we sample max(dataset_size, 10,000) examples. Midtrain mixes are simulated by their actual compositions (e.g., Starcoder is treated as 20% Starcoder + 80% C4). From the (possibly mixed) texts we build unigram frequency vectors at a token level, normalize to probabilities, and compute: vocabulary Jaccard, overlap ratio, token-frequency cosine similarity, and a Jensen-Shannon-based similarity. These are combined as

Combined = 0.4 · cosine + 0.3 · Jaccard + 0.3 · JS_similarity,

and used to fill the similarity matrix (diagonal entries are 1). This mixture-aware score reflects both specialty content and dilution by C4. Figure 6 shows the resulting similarity matrix between pre/midtrain datasets and SFT datasets; a code sketch of the combined score follows the figure.

               C4    StarCoder  Math Comb.  FLAN Comb.  DCLM  KnowledgeQA  GSM8K  PyCode  LIMA  SciQ
C4             1.00  0.83       0.95        0.99        0.95  0.97         0.61   0.54    0.82  0.52
StarCoder      0.83  1.00       0.85        0.84        0.85  0.86         0.57   0.70    0.75  0.39
Math Comb.     0.95  0.85       1.00        0.95        0.93  0.94         0.65   0.59    0.81  0.51
FLAN Comb.     0.99  0.84       0.95        1.00        0.95  0.97         0.62   0.56    0.82  0.53
DCLM           0.95  0.85       0.93        0.95        1.00  0.96         0.61   0.58    0.83  0.53
KnowledgeQA    0.97  0.86       0.94        0.97        0.96  1.00         0.62   0.59    0.83  0.51
GSM8K          0.61  0.57       0.65        0.62        0.61  0.62         1.00   0.48    0.63  0.46
PyCode         0.54  0.70       0.59        0.56        0.58  0.59         0.48   1.00    0.60  0.32
LIMA           0.82  0.75       0.81        0.82        0.83  0.83         0.63   0.60    1.00  0.52
SciQ           0.52  0.39       0.51        0.53        0.53  0.51         0.46   0.32    0.52  1.00

Figure 6: Token-based similarity matrix for pre/midtrain and SFT datasets. Note that these midtrain datasets are corrected for mix weight in this matrix.
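A minimal sketch of the combined score described above (our reading of this appendix; the tokenizer and the exact form of the Jensen-Shannon-based similarity, taken here as one minus the JS divergence with base-2 logarithms, are assumptions):

```python
import math
from collections import Counter

def combined_similarity(tokens_a: list[str], tokens_b: list[str]) -> float:
    """Combined = 0.4*cosine + 0.3*Jaccard + 0.3*JS_similarity over
    unigram token distributions (overlap ratio is computed in the paper
    but does not enter the combined score)."""
    ca, cb = Counter(tokens_a), Counter(tokens_b)
    vocab = set(ca) | set(cb)
    pa = {w: ca[w] / len(tokens_a) for w in vocab}
    pb = {w: cb[w] / len(tokens_b) for w in vocab}
    # Token-frequency cosine similarity.
    dot = sum(pa[w] * pb[w] for w in vocab)
    cos = dot / (math.sqrt(sum(p * p for p in pa.values()))
                 * math.sqrt(sum(p * p for p in pb.values())))
    # Vocabulary Jaccard.
    jaccard = len(set(ca) & set(cb)) / len(vocab)
    # Jensen-Shannon divergence against the midpoint distribution.
    def kl(p, q):
        return sum(p[w] * math.log2(p[w] / q[w]) for w in vocab if p[w] > 0)
    m = {w: 0.5 * (pa[w] + pb[w]) for w in vocab}
    js = 0.5 * kl(pa, m) + 0.5 * kl(pb, m)
    return 0.4 * cos + 0.3 * jaccard + 0.3 * (1.0 - js)
```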
D SFT IN-DOMAIN LOSS AND C4 LOSSES AFTER FINETUNING FOR 70M AND 160M MODELS

Table 8 depicts validation losses as well as C4 validation losses after finetuning on each SFT dataset, for the 70m and 160m models.

E MIDTRAINING LEG LENGTH VS. BENEFIT

Figure 7 shows the relationship between the two "legs" dist(C4, midtrain) and dist(midtrain, SFT) and the benefit of midtraining. As in Figure 2, this is computed based on the similarity matrix in Appendix C.

F REPRESENTATIVE TRAINING LOSS CURVES FOR MIDTRAINED VS. BASE MODELS

Figure 8 shows a representative training loss curve for a midtrained model when its domain is aligned to SFT data.

G ADDITIONAL CKA RESULTS ON APPS

Figure 9 and Figure 10 display the CKA layer similarity for 160m and 410m models.

Table 8: SFT and C4 validation losses for 70m and 160m models across downstream datasets and midtraining mixtures, averaged across 5 seeds for each SFT dataset. Bold values indicate best performance within each dataset and model size combination.

Model Size  Downstream Dataset  Midtrain Mix       SFT Val Loss  C4 Val Loss
70m         Pycode              C4                 2.633         6.067
                                Starcoder (20%)    2.530         5.996
                                Math (12%)         2.605         6.049
                                FLAN (5%)          2.648         6.112
                                KnowledgeQA (20%)  2.602         6.046
                                DCLM (20%)         2.592         6.007
            GSM8k               C4                 1.405         6.409
                                Starcoder (20%)    1.363         6.367
                                Math (12%)         1.359         6.391
                                FLAN (5%)          1.401         6.407
                                KnowledgeQA (20%)  1.373         6.434
                                DCLM (20%)         1.397         6.421
            LIMA                C4                 4.458         4.159
                                Starcoder (20%)    4.382         4.168
                                Math (12%)         4.418         4.180
                                FLAN (5%)          4.460         4.158
                                KnowledgeQA (20%)  4.371         4.152
                                DCLM (20%)         4.395         4.132
            SciQ                C4                 3.573         7.334
                                Starcoder (20%)    3.594         7.245
                                Math (12%)         3.599         7.289
                                FLAN (5%)          3.550         7.236
                                KnowledgeQA (20%)  3.537         7.268
                                DCLM (20%)         3.529         7.269
160m        Pycode              C4                 2.382         5.247
                                Starcoder (20%)    2.205         5.104
                                Math (12%)         2.382         5.228
                                FLAN (5%)          2.389         5.253
                                KnowledgeQA (20%)  2.420         5.342
                                DCLM (20%)         2.405         5.268
            GSM8k               C4                 1.188         5.339
                                Starcoder (20%)    1.186         5.362
                                Math (12%)         1.139         5.198
                                FLAN (5%)          1.071         5.320
                                KnowledgeQA (20%)  1.190         5.346
                                DCLM (20%)         1.186         5.336
            LIMA                C4                 4.184         3.606
                                Starcoder (20%)    4.276         3.556
                                Math (12%)         4.421         3.599
                                FLAN (5%)          4.472         3.580
                                KnowledgeQA (20%)  4.498         3.566
                                DCLM (20%)         4.446         3.572
            SciQ                C4                 3.371         5.537
                                Starcoder (20%)    3.362         5.471
                                Math (12%)         3.316         5.437
                                FLAN (5%)          3.362         5.444
                                KnowledgeQA (20%)  3.339         5.373
                                DCLM (20%)         3.341         5.383

[Figure 7: six scatter panels, one row per model size; left column "C4 Midtrain Distance vs. Relative Improvement" (quad R² = 0.364 for 70m, 0.018 for 160m, 0.313 for 410m), right column "Midtrain SFT Distance vs. Relative Improvement" (quad R² = 0.163 for 70m, 0.403 for 160m, 0.092 for 410m); markers encode SFT dataset (GSM8k, LIMA, Pycode, SciQ) and midtrain dataset (DCLM 20%, FLAN 5%, KnowledgeQA 20%, Math 12%, Starcoder 100%, Starcoder 20%).]
Figure 7: Relationship between the two "legs" dist(C4, midtrain) and dist(midtrain, SFT) and the benefit of midtraining.

[Figure 8: training loss vs. step (0-5000) for the base and Starcoder-midtrained Pythia-410M on Pycode.]

Figure 8: Representative training loss curve for a midtrained model and base model on Pycode, for Pythia-410M. The midtrained model starts with a lower training loss, and maintains a slight gap throughout training.

Figure 9: CKA layer analysis for Pythia-160M with APPS as a probe.

Figure 10: CKA layer analysis for Pythia-410M with APPS as a probe.

Figure 11: CKA layer analysis for Pythia-70M with C4 as a probe.

H  CKA RESULTS ON C4

Figure 11, Figure 12, and Figure 13 show the CKA layer similarity for all model sizes with C4 as a probe.

I  STATEMENT ON LLM USAGE

Large language models (LLMs) were used to assist with refining writing in this submission, including summarizing paragraphs in order to shorten the submission, correcting grammar, and giving suggestions to improve organization. LLMs were not used in the ideation process, and analyses and experimental setups were designed fully by the authors. Copilot and other coding agents were used to generate some utility scripts in the process of coding.

Figure 12: CKA layer analysis for Pythia-160M with C4 as a probe.

Figure 13: CKA layer analysis for Pythia-410M with C4 as a probe.
arXiv:2510.14863v1 [math.DG] 16 Oct 2025

SINGULARITIES OF CURVE SHORTENING FLOW WITH CONVEX PROJECTIONS

QI SUN

Abstract. We show that any closed immersed curve in Rn with a one-to-one convex projection onto some 2-plane develops a Type I singularity and becomes asymptotically circular under Curve Shortening flow in Rn. As an application, we prove an analog of Huisken's conjecture for Curve Shortening flow in Rn, showing that any closed immersed curve in Rn can be perturbed to a closed immersed curve in Rn+2 which shrinks to a round point under Curve Shortening flow.

1. Introduction

Let us consider the Curve Shortening flow (CSF) in higher codimensions:

(1.1) γt = γss

where γ : S1 × [0, T) → Rn is smooth (S1 = R/2πZ), u → γ(u, t) is an immersion and ∂s = ∂/∂s is the derivative with respect to arc-length, defined by

(1.2) ∂/∂s := (1/|γu|) ∂/∂u.

When we want to emphasize that we are working in higher codimensions, we shall refer to the evolution as space CSF.

1.1. Background. For planar CSF, in [GH86] Gage and Hamilton proved that if the initial curve is convex, it shrinks to a point and becomes asymptotically circular. Their work built on earlier works of Gage [Gag83, Gag84]. In [Gra87] Grayson extended their results and proved that if the initial curve is embedded, it will become convex before developing any singularities. Since then, many other proofs of the Gage-Hamilton-Grayson theorem have been discovered, including [Ham95b, Hui98, AB11, And12]. Beyond CSF, (mean) convexity has played a central role for evolution of hypersurfaces, see [Hui84, HS99, Whi00, Whi03, Cho85, And99, BCD17]. See also [Wan02, AB10] for Mean Curvature flow (MCF) in higher codimensions.

For space CSF, in [Sun24] the author shows that if the initial space curve has a one-to-one convex projection onto some 2-plane, its space CSF retains this property and shrinks to a point. See previous works [Hät15, MB20] for space CSF with convex projections and [Sun24, Page 3-4] for a comparison with these works. See also [AG92, Alt91, AAAW13, Wan11, Smo11].

Date: October 17, 2025. This work was partially supported by NSF grant DMS-2348305.

One natural and fascinating question arises: whether a curve with a one-to-one convex projection becomes asymptotically circular under space CSF. In [Sun25], the author establishes a variant of Huisken's distance comparison principle in higher codimensions for reflection symmetric space CSF and answers this question in a special case.

In this paper, we answer the above question affirmatively in full generality.

1.2. Notation. Let Pxy : Rn = R2 × Rn−2 → R2 be the orthogonal projection onto the first two coordinates, which we call x and y. For a space curve γ, let Pxy|γ : γ → xy-plane be its restriction to γ.

Definition 1.1. We say that a smooth curve γ ⊂ Rn has a one-to-one convex projection (onto the xy-plane) if Pxy|γ is injective and the projection curve Pxy(γ) is convex.

The class of curves with one-to-one convex projections includes planar convex curves as a special case. Recall the following terminology on singularity formation.

Definition 1.2. As t → T, we say CSF γ(·, t) develops a Type I singularity if

lim sup_{t→T} sup_{u∈S1} k²(u, t)(T − t) < +∞

and a Type II singularity otherwise, where k(u, t) is the curvature at the point γ(u, t).

Definition 1.3. Let γ(·, t) be a space CSF that shrinks to the origin as t → T. We say CSF γ(·, t) becomes asymptotically circular as t → T if the rescaled CSF γ(·, t)/√(2T − 2t) converges in C∞ to a unit circle of multiplicity one in some 2-plane P2 ⊂ Rn as t → T.
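Both definitions are instantiated by the round shrinking circle; the following standard computation is included as a sketch for orientation and is not quoted from this paper. In LaTeX:

\[
\gamma(u,t) = \sqrt{2T-2t}\,(\cos u, \sin u, 0, \dots, 0), \qquad
\gamma_t = -\frac{(\cos u, \sin u, 0, \dots, 0)}{\sqrt{2T-2t}} = \gamma_{ss},
\]

so the curvature is k(u, t) = (2T − 2t)^{−1/2} and k²(u, t)(T − t) ≡ 1/2: the singularity at t = T is Type I in the sense of Definition 1.2, and γ(·, t)/√(2T − 2t) is the unit circle for every t, so the flow is asymptotically circular in the sense of Definition 1.3.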
1.3. Main result. For a smooth space curve γ0 : S1 → Rn, let γ : S1 × [0, T) → Rn be the solution to the CSF with γ(u, 0) = γ0(u).

It is proved in [Sun24] that if the initial curve γ0 has a one-to-one convex projection, then the CSF γ(·, t) has a one-to-one convex projection and shrinks to a point, which we may assume to be the origin, as t → T. Our main result is the following.

Theorem 1.4. If the initial curve γ0 has a one-to-one convex projection onto the xy-plane, then CSF γ(·, t) develops a Type I singularity and becomes asymptotically circular as t → T.

See Figure 1 for numerical illustrations of Theorem 1.4.

[Figure 1. Examples of CSF with a one-to-one convex projection.]

1.4. Application. Huisken's generic singularities conjecture [Ilm03, #8] for embedded MCF of surfaces has been settled recently by remarkable works, particularly [CM12, CCMS24a, CCS23, BK24]; see also [CMI19, CM21, CCMS24b, SX21, SX25].

It was pointed out by Altschuler [Alt91] that embedded space curves can evolve to have self-intersections under space CSF, and as a corollary of [Ang91], there exist¹ planar immersed curves whose Type II singularities cannot be perturbed away in R2. However, it is not known whether a space CSF that starts at a generic closed immersed curve in Rn (n ≥ 3) remains smooth until it becomes asymptotically circular. This formulation, in the spirit of Huisken's conjecture, is often considered a folk conjecture.

As a corollary of our results, we confirm an extra-codimension version of this folk conjecture. By embedding Rn in Rn+2 ≅ R2 × Rn, we are able to perturb any immersed curve in Rn to have a one-to-one convex projection as follows, so that the perturbed curve in Rn+2 will shrink to a round point according to Theorem 1.4.

Corollary 1.5 (Perturbing immersed closed curves). For any closed immersed curve γ0 : S1 → Rn with parameter u (|γ0u| ≠ 0) and any ϵ > 0, the perturbation²

(1.3) γ^ϵ_0 : S1 → R2 × Rn, γ^ϵ_0(u) := (ϵ cos u, ϵ sin u, γ0(u))

has a one-to-one convex projection onto the xy-plane, hence develops a Type I singularity and becomes asymptotically circular under space CSF.

For curves described in [Sun24, Lemma 1.8], it suffices to perturb the given curve in Rn+1 rather than Rn+2. In particular, this applies to the planar figure-eight (cos u, 0, sin 2u) as follows.

Corollary 1.6 (Perturbing a planar figure-eight). For any ϵ > 0, the CSF γ(·, t) that starts with the initial curve γ^ϵ_0(u) = (cos u, ϵ sin u, sin 2u) develops a Type I singularity and becomes asymptotically circular as it approaches the singularity.

¹As a corollary of [Ang91], any cardioid-like curve (i.e. a curve with positive curvature and one transverse self-intersection) develops a Type II singularity. Any small enough smooth perturbation in R2 of a cardioid-like curve is still a cardioid-like curve and thus develops a Type II singularity.

²This perturbation, referred to as the wave approximation, has been used in [Hät15, §5.5] to prove the existence of weak solutions, replacing the ramps used by [AG92].

Corollary 1.6 confirms the numerical observations mentioned in [Sun24, the last paragraph on Page 4]. See Figure 2.

[Figure 2. Snapshots of the evolution of a perturbation of the planar figure-eight curve from different angles (xy-projection and xz-projection). Previously appeared in [Sun24].]

1.5. Strategy of our proof. The principal part of the proof is devoted to ruling out Type II singularities; the argument proceeds by contradiction.
For a CSF with a convex projection that develops a Type II singularity, we first improve the known blow-up results in the literature, showing that every tangent flow is a line of multiplicity two (Theorem 1.13). Then we show that the directions of the lines, and thus the tangent flows, are both non-unique (Theorem 1.15) and unique (Theorem 1.16). This gives a contradiction, and hence only Type I singularities can occur.

Once Type I is established, asymptotic circularity can be proved quickly, sketched as follows. A Type I blow-up argument [Hui90] implies that the singularity satisfies the shrinker equation. All one-dimensional shrinkers in Rn are planar (see for example [AAAW13, Lemma 5.1]), thus are classified as Abresch-Langer curves [AL86]. Because a circle of multiplicity one is the only Abresch-Langer curve with a one-to-one convex projection, it follows from [Sch14] that CSF with a one-to-one convex projection becomes asymptotically circular; see Proposition 3.6 for more details.

In improving the known blow-up results, we rely on White's blow-up results in [Cho15, page 12-13], [Ede15, Theorem 5.6, page 9-10]. To prove non-uniqueness of tangent flows, we enhance the barrier in [Sun24], making use of viscosity subsolutions ([CIL92]). To prove uniqueness of tangent flows, we extend the Allard-Almgren method [AA81] in [CSSZ25, §8], based on estimates derived in a way different from [CSSZ25, §7].

We now introduce some terminology before discussing our proof strategy and related works in more detail. Recall that we have assumed CSF γ(·, t) shrinks to the origin as t → T based on [Sun24].

Definition 1.7 (Following Huisken [Hui90]). We define the rescaled CSF to be:

(1.4) Γ(u, τ) := γ(u, t)/√(2T − 2t), τ := −(1/2) log(T − t).

In addition, we denote by σ the arc-length parameter of Γ, defined by ∂/∂σ = (1/|Γu|) ∂/∂u = √(2T − 2t) ∂/∂s.

Definition 1.8. For any sequence τj = −(1/2) log(T − tj) → +∞, we define the j-th rescaled CSF along the sequence {τj} to be

(1.5) Γj(σ, τ) := Γ(σ, τj + τ).

Throughout this paper, when taking subsequences, we always keep the original labels. In addition, all convergence results arising from the blow-up analysis are understood up to reparameterization in the following sense.

Definition 1.9. We say the j-th rescaled CSF Γj locally smoothly converges to a rescaled flow Γ∞ if for any real numbers a < b and R > 0, there is a finite union of closed intervals I = ∪Iα, an integer J and smooth time-dependent reparameterizations ϕj such that for j ≥ J, Γj(ϕj(u, τ), τ), u ∈ I, reparameterizes all the arcs Γj(·, τ) ∩ BR(0) for every τ ∈ [a, b] and

(1.6) Γj(ϕj(u, τ), τ) → Γ∞(u, τ) as smooth functions on I × [a, b] as j → ∞.

Definition 1.10. A rescaled flow Γ∞ is a parameterized tangent flow if there exists a sequence τj → +∞ such that the corresponding j-th rescaled CSF Γj locally smoothly converges to Γ∞.

In the literature, the notion of tangent flow refers to the Brakke flow obtained as the limit of the parabolic rescalings; see for example [Sch14, Page 165]. Our notion of parameterized tangent flow is more restrictive than tangent flow, as it requires convergence in the smooth sense. Moreover, our parameterized tangent flow is the limit of Huisken's rescaled CSF instead of the limit of the parabolic rescalings.

In contrast to mean curvature flow of surfaces in R3, in which case Bamler and Kleiner prove the Multiplicity One conjecture in [BK24], tangent flows of higher multiplicity do appear for space CSF.
By solving an ODE, [AAAW13, §3] classifies space CSF with S1 symmetry. As a corollary, based on explicit ODE solutions, a shrinking circle with any positive integer multiplicity can appear as a tangent flow of embedded CSF.

Definition 1.11. For a unit vector ⃗v, the rescaled flow

Γ∞ : (R ⊔ R) × R → Rn, (λ, τ) → λ⃗v

is called a stationary line of multiplicity two.

Remark 1.12. Here the term "stationary" is different from the term "static" introduced in [Whi00, §5, page 676], because we reparameterize time as in Definition 1.7 and Definition 1.8 instead of using the parabolic rescalings. In our case, CSF with a one-to-one convex projection shrinks to one point, and the tangent flows at the spacetime point (lim_{t→T} γ(·, t), T) are actually quasi-static in the sense of [Whi00, §5, page 676].

Now we discuss our proof strategy and related works in more detail.

Improvement of blow-up results. It was first established by Huisken [Hui90], via his powerful monotonicity formula, that MCF developing a Type I singularity is asymptotic to a self-shrinker subsequentially. For space CSF, Altschuler showed in [Alt91] that when a Type II singularity develops, Hamilton's Harnack inequality [Ham89, Ham95a] implies that regions of the curve where the curvature is comparable to the maximum of the curvature are asymptotic to Grim Reapers. For locally convex planar curves, see also [Ang91]. See [Naf22] for results of the same type as [Alt91] for MCF in higher codimensions.

It has been pointed out in [MM14], [Cho15, page 12-13], [Ede15, Theorem 5.6, page 9-10] that for CSF, one can apply the Sobolev embedding to extract a C1 convergent subsequence of curves (not flows) without assuming the Type I condition.

For CSF with convex projections, our improvement of the blow-up results when Type II singularities occur is as follows.

Theorem 1.13. Assume the initial curve γ0 has a one-to-one convex projection onto the xy-plane and its CSF γ(·, t) develops a Type II singularity as t → T. Then for any sequence τj → +∞, there exists a subsequence and a line L in Rn such that the j-th rescaled CSF Γj locally smoothly converges to the stationary line L of multiplicity two. Moreover, the line L is not perpendicular to the xy-plane.

We show that the convergence is smooth even though the limit is of multiplicity two. In the multiplicity one case, smooth convergence follows from the Brakke regularity theorem, which we are unable to apply to our case.

Non-uniqueness of tangent flows.

Definition 1.14. A limit line is a line L obtained from Theorem 1.13 for some sequence τj → +∞.

By enhancing the barrier in [Sun24], we are able to show that the directions of the limit lines along different sequences {τj} in Theorem 1.13 are non-unique. In fact, the projections of all the limit lines onto the xy-plane cover all horizontal directions.

Theorem 1.15. Assume the same hypotheses as in Theorem 1.13. Then for every nonzero vector ⃗v in the xy-plane, there exists a limit line L⃗v in Rn with PxyL⃗v parallel to the vector ⃗v. Thus the tangent flows are non-unique.

Uniqueness of tangent flows. The uniqueness of tangent flows is not known in general. In the multiplicity one case, uniqueness of tangent flows has been proved by Schulze [Sch14] for compact tangent flows, by Colding-Minicozzi [CM15] (see also [CMI25, GK15]) for cylindrical tangent flows, and by Chodosh-Schulze [CS21] for asymptotically conical tangent flows. See also generalizations by Zhu [Zhu20] and Lee-Zhao [LZ24]. For Lagrangian MCF, see [Nev07, LS24].
Our proof of uniqueness of tangent flows of CSF with convex projections is an extension of the Allard-Almgren method [AA81]³ (see also [All24]) in [CSSZ25, §8], which proves uniqueness of tangent flows at infinity for ancient finite-entropy planar embedded CSF. See also the previous works [BC19, BC21] on MCF with similar ideas.

Theorem 1.16. Assume the same hypotheses as in Theorem 1.13. Then the direction of the limit line is independent of the sequence {τj}. Thus the tangent flow is unique.

To use the argument in [CSSZ25, §8], we need an analog of [CSSZ25, Theorem 7.3 (Graphical radius lower bound)], the proof of which does not apply to our setting for the following reason. In higher codimensions, we cannot keep track of sharp vertices (local maximum points of the curvature function) as in [CSSZ25]. In more detail, when the curvature k > 0, the evolution equation of the curvature is:

(1.7) k_t = k_ss + k³ − kτ²,

where τ in (1.7) denotes the torsion. As a result, one cannot apply the Sturmian theorem to the evolution equation of the derivative k_s of the curvature function, because of the torsion term τ². Even so, we are still able to achieve estimates (Proposition 7.5 and Proposition 7.7) similar to [CSSZ25, Theorem 7.3], making use of the blow-up results, the lower bound of the rescaled area of the projection curves, and the geometric properties of CSF with convex projections; see §7 for more details.

1.6. Outline of the paper. In §2 we summarize the established blow-up results of CSF in the literature. In §3 we recall the geometric properties of CSF with convex projections. In §4 we improve the blow-up results (Theorem 1.13) for CSF with convex projections when Type II singularities occur, building upon results in §2 and §3. In §5 we construct a barrier, which is a viscosity subsolution to the heat equation. In §6 we make use of the barrier in §5 and show that the tangent flows are non-unique (Theorem 1.15). In §7 we establish estimates at line scales. In §8 we make use of the estimates in §7 and show that the tangent flows are unique (Theorem 1.16).

Acknowledgments. The author is indebted to his advisors Sigurd Angenent, for introducing him to the study of curve shortening flow in higher codimensions, and Hung Vinh Tran, for sharing his expertise on viscosity solutions; he thanks them for many inspiring discussions and their support. The author also wants to thank Jonathan J. Zhu for a helpful conversation and to thank Ilyas Khan, Keaton Naff, Tang-Kai Lee, Mat Langford and Jacob Bernstein for their interest.

³In general, a line of multiplicity two does not satisfy the integrability condition in [AA81, Page 215 (1)], because of the situation that two lines are rotating at different speeds. However, in our case, a heuristic explanation is that this is compensated by the projection convexity, because the projection curve should be embedded and cannot be intersecting lines.

2. Known results on blow-up

In this section, we recall the blow-up results for general immersed CSF in Rn, and we restrict to the case that, as t → T, γ(·, t) shrinks to one point, which we may assume to be the origin. This is the only section in which we do not assume CSF γ(·, t) has a one-to-one convex projection.
Remark 2.1. Without the assumption that γ(·, t) shrinks to a point, to the best of the author's knowledge, in the case that γ(·, t) develops a Type I singularity, it is not known whether there could be a subsequence {τj} along which the rescaled curves Γ(·, τj) converge to a finite union of lines with multiplicity. The first Theorem in [Alt91, Page 492] did not include this case, but the author does not think it was justified in his proof. The assumption that γ(·, t) shrinks to a point allows for a cleaner formulation of Proposition 2.3.

The results in the literature are mostly stated in the planar case, but the arguments also work for general dimension n ≥ 2.

We start by explaining how the assumption that γ(·, t) shrinks to one point precludes the scenario in Remark 2.1:

Lemma 2.2. If an immersed CSF γ(·, t) develops a Type I singularity and shrinks to one point as t → T, then the rescaled CSF, introduced in Definition 1.7, is bounded.

Proof. We already assume that CSF γ(·, t) shrinks to the origin. Then, with M the Type I bound, for all p ∈ S1 and t ∈ [0, T),

|γ(p, t)| ≤ ∫_t^T |k(p, t̃)| dt̃ ≤ M ∫_t^T (T − t̃)^{−1/2} dt̃ = 2M √(T − t). □

We now summarize the known results in the literature on Type I blow-up for CSF in Rn.

Proposition 2.3 (Type I blow-up). Assuming γ(·, t) shrinks to the origin as t → T, the following three statements are equivalent:
(a) As t → T, γ(·, t) develops a Type I singularity.
(b) For any sequence tj → T, there exists a subsequence such that γ(·, tj)/√(2T − 2tj) converges in the C∞ sense to some Abresch-Langer curve with multiplicity at least one.
(c) For any sequence τj → +∞, there exists a subsequence such that the j-th rescaled CSF Γj(·, τ) (Definition 1.8) converges on S1 × [−(1/2) log T, +∞) in the C∞_loc sense to a stationary solution of the rescaled CSF corresponding to some Abresch-Langer curve with multiplicity at least one.

Proof of Proposition 2.3. (a) ⇒ (b). Due to the argument of Huisken [Hui90] (see also [Man11, Proposition 3.2.10]), for any sequence tj → T, there exists a subsequence such that the rescaled CSF locally smoothly converges to a shrinker, potentially with multiplicity. All shrinking curves in Rn are planar; see for example [AAAW13, Lemma 5.1]. By Lemma 2.2, the shrinker is bounded. In addition, the total curvature of the shrinker is bounded by [Alt91, Theorem 5.1]. Thus the shrinker is one of the Abresch-Langer curves classified in [AL86].

(b) ⇒ (c). Combined with the smooth dependence on initial conditions for solutions of PDE, one may take a convergent subsequence according to (b) at times τj − m, m = 1, 2, · · ·. Then (c) is proved by a diagonal argument.

(c) ⇒ (a). It follows from the classification [AL86] that there are only finitely many Abresch-Langer curves with total curvature ∫_{S1} k ds smaller than a fixed upper bound. Thus the rescaled curvature is bounded for all time t: for any sequence tj → T, we can take a subsequence such that the rescaled curvature is bounded by a uniform constant, which can be taken to be the maximum of the rescaled curvatures of the finitely many Abresch-Langer curves just mentioned. □

To the best of the author's knowledge, the uniqueness of tangent flows is not fully known in the literature even in the Type I case. It is potentially possible that along two sequences, the blow-up limits are two different Abresch-Langer curves with different multiplicities. Geometrically, it has also not been ruled out that a singularity is a rotating Abresch-Langer curve in Rn.
When one tangent flow is a shrinking circle of multiplicity one, the uniqueness of tangent flows follows from the work of Schulze [Sch14]. As a corollary, one has the following proposition.

Proposition 2.4. If there exists one sequence tj → T such that γ(·, tj)/√(2T − 2tj) converges, up to reparameterization, in the C1 sense to a circle of multiplicity one, then γ(·, t)/√(2T − 2t) converges in C∞ to the circle as t → T.

Proof. By smooth dependence of solutions to parabolic PDE on initial conditions, there is one tangent flow which is a shrinking circle of multiplicity one. Then this proposition follows from [Sch14]. □

Now let us summarize the known blow-up results for CSF in Rn in the literature without assuming the Type I condition.

Proposition 2.5. For a rescaled CSF Γ, let τj → +∞ be a given sequence and Γj be the corresponding j-th rescaled CSF (Definition 1.8). Then for almost every τ ∈ R, at least one of the following two cases happens:
(a) There exists a subsequence such that the curve Γj(·, τ) converges, up to reparameterization, in the C1 sense to some Abresch-Langer curve with finite multiplicity.
(b) There exists a subsequence such that the curve⁴ Γj(·, τ) converges, up to reparameterization, in the C1_loc sense to a finite union of lines, each with finite multiplicity.
The choice of the subsequence depends on τ.

⁴We emphasize that one only has convergence at time τ, not at later times. The smooth dependence of solutions to parabolic PDE on initial conditions fails in the non-compact setting.

The proof of Proposition 2.5 is mainly White's argument in [Cho15, page 12-13] and [Ede15, Theorem 5.6, page 9-10]. See also [MMN16, Proposition 2.19], [MM14] and the estimates in [Sto94, Lemma 2.9]. In the proof, in order to apply the Sobolev embedding to what is potentially a union of broken arcs in a ball BR(0) for some R > 0, one can keep track of the arcs that intersect the unit ball B1(0) based on the Sturmian theorem [Ang88] and show that the other arcs, which we cannot keep track of, are outside of the ball B_{R/2}(0). One can then apply the Sobolev embedding to each of the arcs that intersects the unit ball B1(0).

As a corollary,

Lemma 2.6. The sum of the multiplicities of the lines in Proposition 2.5 (b) is independent of the choice of the subsequence.

This is because it equals the limit of one half of the intersection number,

lim_{τ→+∞} (1/2) |Γ(·, τ) ∩ ∂B1(0)|.

As far as the author knows, for Proposition 2.5, in general it is not known whether one can improve the proposition from almost every τ to all τ and from C1_loc to C∞_loc. However, for CSF with a one-to-one convex projection, we are able to make these two improvements in §4.

3. CSF with one-to-one convex projections

In this section, we always assume the initial curve γ0 ⊂ Rn has a one-to-one convex projection onto the xy-plane.

3.1. Geometry of CSF with convex projections.

Lemma 3.1 (Theorem 1.5 of [Sun24]). For each t > 0, CSF γ(·, t) has a one-to-one uniformly convex projection onto the xy-plane. As t → T, γ(·, t) shrinks to a point.

The next lemma follows from Corollary 5.7 of [Sun24].

Lemma 3.2 (Bounded slope lemma). For an arbitrary ϵ > 0, there exists M > 0 such that for each t ∈ [ϵ, T) and arbitrary two points p1, p2 on γ(·, t),

(3.1) |p1 − p2| ≤ M |Pxy(p1) − Pxy(p2)|,

where |·| stands for the standard Euclidean distance.

Recall that ∂/∂s is the arc-length derivative defined via equation (1.2).

Lemma 3.3 (Corollary 5.8 of [Sun24]). For an arbitrary ϵ > 0, there exists δ > 0 such that x_s² + y_s² ≥ δ > 0 for all t ∈ [ϵ, T).
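For intuition (a sketch under the stated hypotheses; the cited proofs are in [Sun24]), the conclusion of Lemma 3.3 can be read off from Lemma 3.2 by letting the two points in (3.1) collapse along the curve:

\[
1 = |\gamma_s| = \lim_{h\to 0}\frac{|\gamma(s+h,t)-\gamma(s,t)|}{h}
\le M \lim_{h\to 0}\frac{|P_{xy}\gamma(s+h,t)-P_{xy}\gamma(s,t)|}{h}
= M\,|P_{xy}\gamma_s|,
\]

so x_s² + y_s² = |Pxy γs|² ≥ 1/M², i.e., the bound of Lemma 3.3 holds with δ = 1/M².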
Because in this paper we only consider the asymptotic behavior as t → T, replacing the initial curve by γ(·, ϵ) if needed, we may assume that the properties described in Lemma 3.2 and Lemma 3.3, which hold for t ∈ [ϵ, T), hold for all t ≥ 0. Recall that we have assumed CSF γ(·, t) shrinks to the origin.

3.2. Type I singularity and compact blow-up limits. Based on geometric properties of CSF with convex projections, we can rule out immersed Abresch-Langer curves and higher multiplicity.

Lemma 3.4. If there exists a subsequence such that γ(·, tj)/√(2T − 2tj) converges, up to reparameterization, in the C1 sense to some Abresch-Langer curve ΓAL with finite multiplicity m ≥ 1 in some 2-plane P2 ⊂ Rn, then the Abresch-Langer curve is the unit circle and the multiplicity is one. Moreover, the linear map Pxy|P2 : P2 → xy-plane is a linear isomorphism.

Proof of Lemma 3.4.

Claim 3.5. The linear map Pxy|P2 : P2 → xy-plane is injective.

Proof of the Claim. If this were not true, then there would exist a nonzero vector ⃗v ∈ P2 such that Pxy|P2(⃗v) = 0. Then there would exist two different points p^1_∞, p^2_∞ on ΓAL such that the vector pointing from p^1_∞ to p^2_∞ is parallel to the vector ⃗v. Pick two sequences of points p^1_j, p^2_j on the curve γ(·, tj)/√(2T − 2tj) satisfying p^1_j → p^1_∞, p^2_j → p^2_∞. Then by the bounded slope lemma, Lemma 3.2, where equation (3.1) is scaling invariant, one has that

(3.2) |p^1_j − p^2_j| ≤ M |Pxy(p^1_j) − Pxy(p^2_j)|,

where lim_{j→+∞} |p^1_j − p^2_j| = |p^1_∞ − p^2_∞| > 0 because p^1_∞, p^2_∞ are two different points. However,

lim_{j→+∞} |Pxy(p^1_j) − Pxy(p^2_j)| = |Pxy(p^1_∞) − Pxy(p^2_∞)| = |Pxy(p^1_∞ − p^2_∞)| = 0

because Pxy|P2(⃗v) = 0 and the vector pointing from p^1_∞ to p^2_∞ is parallel to ⃗v. Taking the limit j → +∞ in equation (3.2) leads to a contradiction. □

Thus the map Pxy|P2 is a linear isomorphism, by comparing the dimensions of P2 and the xy-plane.

By the Sturmian theorem [Ang88], ΓAL can only have transverse self-intersections, because ΓAL is a shrinker. Since the linear map Pxy|P2 is bijective, Pxy|P2(ΓAL) also can only have transverse self-intersections. Therefore, if ΓAL had self-intersections, then Pxy|P2(ΓAL), and thus Pxy(γ(·, tj)), would have transverse self-intersections for large j. This contradicts the fact that γ(·, t) has a one-to-one convex projection onto the xy-plane. Therefore ΓAL is embedded and thus is a circle.

If the multiplicity m were not one, then the winding number of Pxy|P2(ΓAL), and hence of Pxy(γ(·, tj)), with respect to the origin would equal m > 1. Thus the curve Pxy(γ(·, tj)) would have a self-intersection, which gives a contradiction. □

For Type I singularities, we fully understand the asymptotic behavior.

Proposition 3.6. If γ(·, t) develops a Type I singularity as t → T, then γ(·, t)/√(2T − 2t) converges in C∞ to a unit circle of multiplicity one in some 2-plane P2 ⊂ Rn as t → T. Moreover, the linear map Pxy|P2 : P2 → xy-plane is a linear isomorphism.

Proof of Proposition 3.6. By Proposition 2.3, there exists a sequence {tj} such that γ(·, tj)/√(2T − 2tj) converges to some Abresch-Langer curve ΓAL in some 2-plane P2 with multiplicity m ≥ 1. By Lemma 3.4, the limit is a circle of multiplicity one. This proposition then follows from Schulze's uniqueness of tangent flows (see Proposition 2.4). □

Lemma 3.7. If there exists a subsequence such that Γj(·, τ) converges, up to reparameterization, in the C1 sense to some Abresch-Langer curve with finite multiplicity, then γ develops a Type I singularity.
Proof. By Lemma 3.4, the limit is a circle of multiplicity one. By smooth dependence of solutions of PDE on initial conditions, we may assume the convergence is in the C∞ sense, by picking another sequence at nearby times if necessary. This lemma follows from Schulze's uniqueness theorem (see Proposition 2.4). □

3.3. Type II singularity and non-compact blow-up limits. For any sequence τj = −(1/2) log(T − tj) → +∞, recall that we denote by Γj the j-th rescaled CSF along {τj} defined in Definition 1.8. With convex projections, the sequential limit of the rescaled CSF cannot be transverse lines.

Lemma 3.8. If there exists a sequence τj → +∞ such that, as j → +∞, Γj(·, τ) converges in the sense of C1_loc to a finite union of lines, each with finite multiplicity, then the union of these lines is a single line of multiplicity two. In addition, the line is not perpendicular to the xy-plane.

Proof. By the bounded slope lemma, Lemma 3.2, none of the lines in the union is perpendicular to the xy-plane. As a result, the projection of each of the above lines onto the xy-plane is a line and cannot be a point. It follows from [Whi05] that the sum of the multiplicities of the lines is at least 2.

Claim 3.9. The projection curves PxyΓj(·, τ) converge to one line with multiplicity m ≥ 2.

Proof of the Claim. If this claim were not true, then the projection curves PxyΓj(·, τ) would converge to a union of two or more transverse lines in the xy-plane. Being C1_loc close to transverse lines implies that PxyΓj(·, τ) should have self-intersections for large j. But for each j, the projection curve PxyΓj(·, τ) is convex and thus embedded. □

Because the projection curve PxyΓj(·, τ) is convex, the multiplicity m is at most 2. As a result, m = 2.

Claim 3.10. The space curves Γj(·, τ) converge to one line of multiplicity two in Rn.

Proof of the Claim. If this claim were not true, then the space curves Γj(·, τ) would converge to a union of two intersecting lines L1, L2 in Rn with PxyL1 = PxyL2 but L1 ≠ L2. Then there exist points pi ∈ Li for i = 1, 2 such that Pxy(p1) = Pxy(p2) but p1 ≠ p2. This contradicts the bounded slope lemma, Lemma 3.2. □ □

In summary,

Lemma 3.11. Let γ be a CSF with a one-to-one convex projection, and assume γ develops a Type II singularity. If τj → +∞ is a given sequence, then for almost every τ ∈ R, there exists a subsequence such that Γ(·, τ + τj) converges, up to reparameterization, in the C1_loc sense to a line of multiplicity two. In addition, the line is not perpendicular to the xy-plane. The choice of the subsequence depends on τ.

4. Improved blow-up results for curves with convex projections

In this section, as discussed in §3.1, for all t ∈ [0, T), γ(·, t) has a one-to-one uniformly convex projection onto the xy-plane with no tangent lines perpendicular to the xy-plane. Recall that we have assumed γ(·, t) shrinks to the origin. In this section, we always assume γ(·, t) develops a Type II singularity as t → T.

We will improve Lemma 3.11 from almost every τ to all τ and from C1_loc to C∞_loc. More generally, we will prove Theorem 1.13.

4.1. Preparations. For any sequence τj = −(1/2) log(T − tj) → +∞, recall that we denote by Γj the corresponding j-th rescaled CSF defined in Definition 1.8. For any sequence τj → +∞, by Lemma 3.11, we may pick numbers a < 0 < b such that, by taking subsequences, Γj(·, a) converges to a line La ⊂ Rn of multiplicity two and Γj(·, b) converges to some line Lb ⊂ Rn of multiplicity two, as j → +∞.
The lines La, Lb are not perpendicular to the xy-plane by Lemma 3.8. Our goal is to prove that, for the chosen numbers a, b, there exists a subsequence along which Γj(·, τ), τ ∈ [a, b], converges to a line of multiplicity two. Theorem 1.13 then follows by taking a subsequence via the diagonal argument, picking am < −m, bm > m + 1, m ∈ N. We may assume the projection PxyLa of the line La is the x-axis.

By the maximum principle and the definition of the rescaled CSF, we can establish the following lemma, which will be useful in the proofs of Lemma 4.2 and Lemma 4.3.

Lemma 4.1. If, for some real numbers a < b, some nonzero vector ⃗v ∈ Rn and some index j ∈ N,

sup_{σ∈S1} ⃗v · Γj(σ, a) ≤ R,

then

sup_{σ∈S1} ⃗v · Γj(σ, τ) ≤ e^{τ−a} R ≤ e^{b−a} R for any τ ∈ [a, b],

for the same index j.

Proof. If ⃗v · γ(·, t0) ≤ C, then by the maximum principle ⃗v · γ(·, t) ≤ C for all t ≥ t0. It follows from Definition 1.7 and Definition 1.8 that

⃗v · Γj(·, τ) = ⃗v · Γ(·, τj + τ) = ⃗v · γ(·, t̃j(τ)) / √(2T − 2t̃j(τ)), where T − t̃j(τ) = e^{−2τj−2τ}.

Thus, for τ ∈ [a, b],

√[(2T − 2t̃j(a)) / (2T − 2t̃j(τ))] = √(e^{−2τj−2a} / e^{−2τj−2τ}) = e^{−a}/e^{−τ} = e^{τ−a}

and

⃗v · Γj(·, τ) = [⃗v · γ(·, t̃j(τ)) / √(2T − 2t̃j(a))] · √[(2T − 2t̃j(a)) / (2T − 2t̃j(τ))] ≤ R e^{τ−a}. □

4.2. Graphicality. We denote by ⃗e1 the vector (1, 0, · · · , 0) and by ⃗e2 the vector (0, 1, 0, · · · , 0). Based on our assumption in §4.1, the curves PxyΓj(·, a) converge to the x-axis of multiplicity two in the C1_loc sense. We will extend the linear estimates at τ = a to the time interval [a, b], taking advantage of the convex projection.

Lemma 4.2. For any constant R > 0 large and H > 0 small, there exists j1 ∈ N such that for j ≥ j1 and all (σ, τ) satisfying −R ≤ ⃗e1 · Γj(σ, τ) ≤ R, one has −H ≤ ⃗e2 · Γj(σ, τ) ≤ H for any τ ∈ [a, b].

Proof. We first bound the projection of the rescaled curve at time τ = a by lines from above and below. Since PxyΓj(·, a) converges to the x-axis of multiplicity two as j → +∞, for any R, H > 0 we can pick j1 ∈ N such that for all σ satisfying −R ≤ ⃗e1 · Γj(σ, a) ≤ R, one has

(4.1) −(H/2) e^{a−b} ≤ ⃗e2 · Γj(σ, a) ≤ (H/2) e^{a−b}

and the gradient bound

(4.2) |(⃗e2 · Γj)σ / (⃗e1 · Γj)σ| (σ, a) ≤ H/(2R)

for any j ≥ j1. In addition, we denote by Γj(σ1, a), Γj(σ2, a) the points whose x-coordinates are 0, with ⃗e2 · Γj(σ1, a) > 0 and ⃗e2 · Γj(σ2, a) < 0.

Consider the tangent lines L^1_j, L^2_j of the projection of the rescaled curves Pxy(Γj(·, a)) at the points PxyΓj(σ1, a), PxyΓj(σ2, a). The equations of the tangent lines L^1_j, L^2_j are

ỹ1(x̃, a) = [(⃗e2 · Γj)σ / (⃗e1 · Γj)σ](σ1, a) x̃ + ⃗e2 · Γj(σ1, a)

and

ỹ2(x̃, a) = [(⃗e2 · Γj)σ / (⃗e1 · Γj)σ](σ2, a) x̃ + ⃗e2 · Γj(σ2, a).

Since for each j, PxyΓj(·, a) is a uniformly convex curve, the projection PxyΓj(·, a) is pinched between the lines L^1_j, L^2_j. In other words, PxyΓj(·, a) is contained in the domain Ω(a) defined by:

Ω(a) = {(x̃, ỹ) | ỹ2(x̃, a) ≤ ỹ ≤ ỹ1(x̃, a)}.

We consider a family of lines parameterized by τ,

ỹ1(x̃, τ) = [(⃗e2 · Γj)σ / (⃗e1 · Γj)σ](σ1, a) x̃ + e^{τ−a} ⃗e2 · Γj(σ1, a)

and

ỹ2(x̃, τ) = [(⃗e2 · Γj)σ / (⃗e1 · Γj)σ](σ2, a) x̃ + e^{τ−a} ⃗e2 · Γj(σ2, a).

Since for each j, PxyΓj(·, a) is a uniformly convex curve, we can apply Lemma 4.1, where we choose ⃗v to be either of the two normal vectors to the tangent lines L^1_j, L^2_j in the xy-plane. As a result, for any τ ∈ [a, b], PxyΓj(·, τ) is contained in the domain Ω(τ) defined as follows:

{(x̃, ỹ) | ỹ2(x̃, τ) ≤ ỹ ≤ ỹ1(x̃, τ)}.
According to equation (4.1) and equation (4.2), the domain Ω(τ) is contained in the domain W(τ) defined as follows:

{(x̃, ỹ) | |ỹ| ≤ (H/2) e^{τ−b} + (H/(2R)) |x̃|}.

This lemma is proved because for all τ ∈ [a, b], the domain W(τ) ∩ {(x̃, ỹ) | −R ≤ x̃ ≤ R} is contained in {(x̃, ỹ) | −R ≤ x̃ ≤ R, |ỹ| ≤ H}. □

Lemma 4.3. For any constant R > 0, there exists j2 ∈ N such that

sup_{σ∈S1} ⃗e1 · Γj(σ, τ) > R and inf_{σ∈S1} ⃗e1 · Γj(σ, τ) < −R

for any j ≥ j2 and any τ ∈ [a, b].

Proof. It suffices to prove the first inequality, as the proof of the second one is similar. If the first inequality were not true, there would exist a subsequence such that sup_{σ∈S1} ⃗e1 · Γj(σ, τ̃j) ≤ R for some times τ̃j ∈ [a, b]; then by Lemma 4.1,

sup_{σ∈S1} ⃗e1 · Γj(σ, b) ≤ e^{b−τ̃j} R ≤ e^{b−a} R.

In other words, the curves Γj(·, b) are bounded from the right. Combined with Lemma 4.2, the curves PxyΓj(·, b) are bounded from above, from below and from the right. However, Γj(·, b) converges to some line Lb of multiplicity two, where the line Lb is not perpendicular to the xy-plane. Thus PxyLb cannot be bounded from above, below and from the right. By taking j large, PxyΓj(·, b) is close to the line PxyLb on some large ball. This gives a contradiction. □

As a result, for any constant R > 0, for large enough j and any τ ∈ [a, b], Γj(·, τ) is graphical over the x-axis on the interval [−R, R], because the projection curve PxyΓj(·, τ) is convex and thus graphical over the x-axis except at the maximum and minimum points of the function x(·, τ) = ⃗e1 · Γj(·, τ).

4.3. Gradient and curvature estimates. For any constant R > 0 large and H > 0 small, we take j0 = max{j1, j2}, as chosen in Lemma 4.2 and Lemma 4.3.

Lemma 4.4 (Gradient estimates). For j ≥ j0 and all (σ, τ) satisfying −R/2 ≤ ⃗e1 · Γj(σ, τ) ≤ R/2, one has

|(⃗e2 · Γj)σ / (⃗e1 · Γj)σ| (σ, τ) ≤ 8H/R for any τ ∈ [a, b].

Proof. If this lemma were not true, then there would exist some jg ≥ j0 and a point (σ0, τ0) with −R/2 ≤ ⃗e1 · Γjg(σ0, τ0) ≤ R/2 but

|(⃗e2 · Γjg)σ / (⃗e1 · Γjg)σ| (σ0, τ0) > 8H/R.

We denote by L0 the tangent line of the convex curve Pxy(Γjg(·, τ0)) at the point PxyΓjg(σ0, τ0). We may assume the point PxyΓjg(σ0, τ0) is on the upper branch; thus the convex curve Pxy(Γjg(·, τ0)) must lie below the line L0. The line L0 intersects the line segment ℓ = {(x̃, −2H) | −R ≤ x̃ ≤ R} at some point. The curve Pxy(Γjg(·, τ0)) therefore also intersects ℓ, which contradicts Lemma 4.2. □

Lemma 4.5 (Curvature estimates). There exists a constant M > 0 such that for all j ≥ j0 and all (σ, τ) satisfying −R/4 ≤ ⃗e1 · Γj(σ, τ) ≤ R/4 and a/2 ≤ τ ≤ b/2, one has |Γjσσ(σ, τ)| ≤ M.

Proof. Any given branch of Γj in the region |x| ≤ R is a graph

x → (x, ỹ(x, τ), z̃1(x, τ), · · · , z̃n−2(x, τ)).

The function ỹ satisfies the graphical rescaled CSF equation:

(4.3) ỹτ = ỹxx / (1 + ỹ²x + z̃²1x + · · · + z̃²(n−2)x) − x ỹx + ỹ.

Thus,

(ỹx)τ = [ (ỹx)x / (1 + ỹ²x + z̃²1x + · · · + z̃²(n−2)x) ]x − x (ỹx)x,

which is a parabolic equation for ỹx in divergence form with a lower order term. By the gradient estimates and Lemma 3.3, ỹ²x + z̃²1x + · · · + z̃²(n−2)x is bounded from above. Thus, by De Giorgi-Nash-Moser type estimates, ỹx is Hölder continuous; see for example [LSU68, Theorem 1.1, Chapter V, §1, page 419]. Similarly, z̃1x, · · · , z̃(n−2)x are Hölder continuous. Thus we obtain curvature estimates by applying Schauder estimates to equation (4.3). All the estimates mentioned in this proof are uniform for all j ≥ j0. □
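Since equation (4.3) is used without derivation, here is a sketch of where it comes from (an editorial reconstruction under the graphicality just established, not the author's written proof). The rescaled flow moves by Γτ = Γσσ + Γ up to tangential reparameterization; writing a branch as Γ = (x̃, ỹ(x̃, τ), z̃ℓ(x̃, τ)) and v² = 1 + ỹ²x̃ + Σℓ z̃²ℓx̃, one computes in LaTeX:

\[
\Gamma_{\sigma\sigma} = \frac{(0,\,\tilde y_{\tilde x\tilde x},\,\tilde z_{\ell\tilde x\tilde x})}{v^{2}}
- \frac{v_{\tilde x}}{v^{3}}\,(1,\,\tilde y_{\tilde x},\,\tilde z_{\ell\tilde x}),
\qquad
\tilde y_\tau = W_y - \tilde y_{\tilde x}\,W_x \quad\text{for } W = \Gamma_{\sigma\sigma}+\Gamma,
\]
\[
\tilde y_\tau
= \Big(\frac{\tilde y_{\tilde x\tilde x}}{v^{2}} - \frac{\tilde y_{\tilde x} v_{\tilde x}}{v^{3}} + \tilde y\Big)
- \tilde y_{\tilde x}\Big(-\frac{v_{\tilde x}}{v^{3}} + \tilde x\Big)
= \frac{\tilde y_{\tilde x\tilde x}}{1+\tilde y_{\tilde x}^{2}+\sum_\ell \tilde z_{\ell\tilde x}^{2}}
- \tilde x\,\tilde y_{\tilde x} + \tilde y,
\]

which is (4.3); the same computation yields the z̃ℓ equations recorded later in Lemma 7.8.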
4.4. Uniform convergence. By Lemma 4.4, Lemma 4.5 and a priori estimates for parabolic PDEs, we have higher order estimates. By the Arzelà-Ascoli theorem, we may assume Γj(·, τ) converges locally smoothly to some limit flow Γ∞(·, τ), because we have (upper and lower) bounds on length locally for τ ∈ [a, b], replacing a, b by 2a, 2b in Lemma 4.5 if necessary.

It follows from Lemma 4.2, choosing constants R = m and H = 1/m for all m ∈ N large, that for τ ∈ [a, b], Γj(·, τ) converges to some line whose projection is the x-axis, which is the projection of the line that Γj(·, a) converges to. That is to say, for τ ∈ [a, b], PxyΓ∞(·, τ) and PxyΓ∞(·, a) are the same line.

Next, we will show that not only the projections, but the space curves Γ∞(·, τ) and Γ∞(·, a), are the same line for any τ ∈ [a, b].

Proof of Theorem 1.13. Due to the convergence Γj → Γ∞, the limit flow satisfies the rescaled CSF

(4.4) (Γ∞)τ = (Γ∞)σσ + Γ⊥∞,

up to a tangential motion. Take a subsequence such that τj + a > τj−1 + b. Then by Huisken's monotonicity formula [Hui90],

(4.5) Σ_{j=1}^{+∞} ∫_{τj+a}^{τj+b} ∫_{Γ(·,τ)} e^{−|Γ|²/2} |Γσσ + Γ⊥|² dσ dτ = Σ_{j=1}^{+∞} ∫_a^b ∫_{Γj(·,τ)} e^{−|Γ|²/2} |Γσσ + Γ⊥|² dσ dτ

is finite. As a result,

lim_{j→+∞} ∫_a^b ∫_{Γj(·,τ)} e^{−|Γ|²/2} |Γσσ + Γ⊥|² dσ dτ = 0

and, for any R > 0,

lim_{j→+∞} ∫_a^b ∫_{Γj(·,τ)∩BR(0)} e^{−|Γ|²/2} |Γσσ + Γ⊥|² dσ dτ = 0,

where we denote by BR(0) the ball centered at the origin with radius R. Because Γj(·, τ) converges locally smoothly to Γ∞(·, τ),

∫_a^b ∫_{Γ∞(·,τ)∩BR(0)} e^{−|Γ|²/2} |Γσσ + Γ⊥|² dσ dτ = 0.

As a result, the time derivative (Γ∞)τ vanishes and Γ∞(·, τ) remains the same line for any τ ∈ [a, b]. □

We conclude this section with a corollary which will be useful in §6.

Corollary 4.6. For arbitrary R, ϵ > 0, there exists τ0 ∈ [−(1/2) log T, +∞) such that for any τ ≥ τ0 and Γ(σ1, τ), Γ(σ2, τ) ∈ BR(0), one has

|Γσ(σ1, τ) − Γσ(σ2, τ)| < ϵ.

Proof. Assume this is not true; then there exist R0, ϵ0 > 0 and a sequence of times τj with points Γ(σ^j_1, τj), Γ(σ^j_2, τj) ∈ BR0(0) but

(4.6) |Γσ(σ^j_1, τj) − Γσ(σ^j_2, τj)| ≥ ϵ0.

We may take a subsequence of {τj} such that Γ(σ, τj) converges in the ball BR0(0). This contradicts equation (4.6). □

5. The barrier as a subsolution

In this section, as discussed in §3.1, the initial curve γ0 has a one-to-one uniformly convex projection onto the xy-plane and CSF γ(·, t) shrinks to the origin.

Definition 5.1. We define the functions

xmax(t) = max_{u∈S1} x(u, t) and xmin(t) = min_{u∈S1} x(u, t).

Lemma 5.2. The functions xmax(t), xmin(t) are C1 in the variable t.

Proof. Since the projection curve is uniformly convex, at each time t, x(·, t) has a unique maximum point. We can view x as a function of y and denote by y0(t) the value of y where x(·, t) achieves its maximum. In other words, x(y0(t), t) = xmax(t). Thus the derivative vanishes at the maximum point: xy(y0(t), t) = 0. Since the projection curve is uniformly convex, the curvature xyy/(1 + x²y)^{3/2} is nonzero.
We denote by (yu(x, t), zu 1 (x, t), · · · , zu n−2(x, t)) and (yl(x, t), zl 1(x, t), · · · , zl n−2(x, t)) the solutions corresponding to the upper and lower branch. Definition 5.3. We define the difference of y between the upper and lower branch to be (5.3) Y (x, t) := yu(x, t) −yl(x, t), where x ∈[xmin(t), xmax(t)] and t ∈[0, T). By definition of Y , one has that (5.4) Y (x = xmin(t), t) = 0 and Y (x = xmax(t), t) = 0 for all t ∈[0, T). The next lemma follows from the equations of the graph flow and the convexity of the projection curves. Lemma 5.4. The function Y is a supersolution for the linear heat equation, i.e. (5.5) Yt ≥Yxx. Proof. Because of the convexity of the projection curve, yu xx ≤0, yl xx ≥0. Because 1 1 + y2x + z2 1x + · · · + z2 (n−2)x ≤1, we have yu t = yu xx 1 + (yux)2 + (zu 1x)2 + · · · + (zu (n−2)x)2 ≥yu xx, and yl t = yl xx 1 + (ylx)2 + (zl 1x)2 + · · · + (zl (n−2)x)2 ≤yl xx. Thus, Yt = yu t −yl t ≥yu xx −yl xx = Yxx. □ 20 QI SUN 5.2. The barrier and its regularity. We define the function (5.6) f(t) := Z t 0 π2 4 max  1 x2max(τ), 1 x2 min(τ)  − 1 2(T −τ)  dτ, and the function (5.7) θ(x, t) := ( π 2 x xmax(t) if 0 ≤x ≤xmax(t), π 2 x −xmin(t) if xmin(t) ≤x ≤0. where the functions xmax(t), xmin(t) are defined in Definition 5.1. Let ϵ > 0 be a small constant to be chosen in inequality (5.13). Definition 5.5. With above notation, we define the barrier to be (5.8) φ(x, t) = ϵe−f(t)√ T −t cos θ(x, t), where t ∈[0, T), x ∈[xmin(t), xmax(t)]. By the definition of the function θ, (5.9) φ(xmin(t), t) = φ(xmax(t), t) = 0. We start with the regularity of the barrier. Lemma 5.6. The function cos θ(x, t) is (a) C1 in the variables x, t. (b) C2 in x on [xmin(t), xmax(t)]\{0}. (c) one-sided C2 in x at x = 0 from the left and right. Proof. By direct computations, (cos θ)x =        −sin  π 2 x xmax(t)  π 2 1 xmax(t) if x > 0, 0 if x = 0, −sin  π 2 x −xmin(t)  π 2 1 −xmin(t) if x < 0. (cos θ)t =        sin  π 2 x xmax(t)  π 2 x x2max(t)x′ max(t) if x > 0, 0 if x = 0, −sin  π 2 x −xmin(t)  π 2 x x2 min(t)x′ min(t) if x < 0. Thus part (a) is proved and part (b) is direct. Next, we prove part (c). Let us consider the left and right derivative of (cos θ)x at the point x = 0: (5.10) lim x→0+ (cos θ)x −0 x = −π2 4 1 x2max(t), and (5.11) lim x→0− (cos θ)x −0 x = −π2 4 1 x2 min(t). Thus the function cos θ(x, t) is one-sided C2 in x at x = 0 from the left and right. □ SINGULARITIES OF CURVE SHORTENING FLOW WITH CONVEX PROJECTIONS 21 5.3. The barrier as a subsolution. A comprehensive reference on viscosity so- lutions is [CIL92]. We adopt the following notion of the viscosity subsolutions. Definition 5.7. We say a function φ satisfies φt −φxx ≤0 at the point (x0, t0) in the viscosity sense, if ψt(x0, t0) −ψxx(x0, t0) ≤0 for any smooth test function ψ with φ(x0, t0) = ψ(x0, t0) and φ(x, t) ≤ψ(x, t) for all t ≤t0 and x ∈[xmin(t), xmax(t)]. Remark 5.8. Here we require the test function to touch from above only for t ≤t0. See [Tra21, Lemma 1.22 on Page 21] for a similar treatment. The next lemma follows from a direct verification. Lemma 5.9. A smooth function φ that satisfies φt −φxx ≤0 at the point (x0, t0) in the smooth sense also satisfies it in the viscosity sense. Proposition 5.10. Let φ be the barrier, introduced in Definition 5.5. Then (5.12) φt −φxx ≤0 on {(x, t)|t ∈[0, T), x ∈[xmin(t), xmax(t)]} in the viscosity sense. Proof. At x ∈[xmin(t), xmax(t)]\{0}, by Lemma 5.2, Lemma 5.6 and Lemma 5.9, we can show equation (5.12) in the classical sense. 
We can compute the derivative in t:

φt = ϵ e^{−f(t)} [ −ft √(T − t) cos θ − cos θ / (2√(T − t)) − √(T − t) sin θ · θt ]
   = ϵ e^{−f(t)} √(T − t) [ −(π²/4) max( 1/x²max(t), 1/x²min(t) ) cos θ − sin θ · θt ],

where we used equation (5.6). For the derivatives in x:

φx = ϵ e^{−f(t)} √(T − t) [ −sin θ · θx ],
φxx = ϵ e^{−f(t)} √(T − t) [ −cos θ · θ²x − sin θ · θxx ] = ϵ e^{−f(t)} √(T − t) [ −cos θ · θ²x ],

where we use the fact that θxx = 0 at x ≠ 0. It follows from

θx(x, t) = (π/2) · 1/xmax(t) if 0 < x ≤ xmax(t); (π/2) · 1/(−xmin(t)) if xmin(t) ≤ x < 0

(so that cos θ · θ²x ≤ (π²/4) max(1/x²max(t), 1/x²min(t)) cos θ, since cos θ ≥ 0) that

φt − φxx ≤ ϵ e^{−f(t)} √(T − t) [ −sin θ · θt ],

where

θt = −(π/2) · (x/x²max(t)) x′max(t) if x > 0; (π/2) · (x/x²min(t)) x′min(t) if x < 0.

It follows from the facts that sin θ has the same sign as x, x′max(t) ≤ 0 and x′min(t) ≥ 0 that φt − φxx ≤ 0 for x ∈ [xmin(t), xmax(t)]\{0}.

It remains to show this lemma at x = 0 in the viscosity sense. At x = 0 and t0 ∈ [0, T) fixed, we must show, for any smooth test function ψ = ψ(x, t) satisfying ψ(0, t0) = φ(0, t0) and ψ(x, t) ≥ φ(x, t) for t ≤ t0, that ψt(0, t0) − ψxx(0, t0) ≤ 0.

It follows from ψ ≥ φ and Lemma 5.6 that the second order derivative of ψ in x at x = 0 is no less than the one-sided second order derivatives of φ, that is to say:

ψxx(0, t0) ≥ max{ lim_{x→0+} [φx(x, t0) − φx(0, t0)]/x, lim_{x→0−} [φx(x, t0) − φx(0, t0)]/x }.

By equations (5.10) and (5.11),

ψxx(0, t0) ≥ ϵ e^{−f(t0)} √(T − t0) max{ −(π²/4)/x²max(t0), −(π²/4)/x²min(t0) }
           = ϵ e^{−f(t0)} √(T − t0) · (−1) · min{ (π²/4)/x²max(t0), (π²/4)/x²min(t0) }.

It follows from ψ(0, t0) = φ(0, t0), ψ ≥ φ for t ≤ t0, Lemma 5.2 and Lemma 5.6 that ψt(0, t0) ≤ φt(0, t0). By taking the derivative of φ(0, t) = ϵ e^{−f(t)} √(T − t) in t, we find

φt(0, t) = ϵ e^{−f(t)} [ −ft √(T − t) − 1/(2√(T − t)) ] = ϵ e^{−f(t)} [ −(π²/4) max( 1/x²max(t), 1/x²min(t) ) √(T − t) ],

where we used equation (5.6).
The Nx form an open cover of the compact set {(x, t1)|xmin(t1) ≤x ≤xmax(t1)}, so we can take a finite subcover. As a result, there exists a t2 > t1 such that Y1(x, t) > φ(x, t) for all t ∈[0, t2], x ∈[xmin(t), xmax(t)], which contradicts the definition of the time t1. □ Claim 5.13. One has t1 = T. Proof of Claim 5.13. If t1 < T, because φ is a viscosity subsolution (Proposition 5.10), viewing Y1 as a smooth test function (Claim 5.12 and inequality (5.17)), one has Y1t(x1, t1) −Y1xx(x1, t1) ≤0 by Definition 5.7. But due to Lemma 5.4, Y1t(x1, t1) −Y1xx(x1, t1) = Yt(x1, t1) −Yxx(x1, t1) + ϵ1et1 ≥ϵ1et1 > 0. So we have reached a contradiction. □ Thus, for all t ∈[0, T), x ∈[xmin(t), xmax(t)] and all ϵ1 > 0, Y1(x, t) = Y (x, t) + ϵ1et ≥φ(x, t). Taking the limit ϵ1 →0, one has that Y (x, t) ≥φ(x, t). □ 6. Non-uniqueness of tangent flows In this section, as discussed in §3.1, the initial curve γ0 has a one-to-one uniformly convex projection onto the xy-plane and the CSF γ(·, t) shrinks to the origin. The goal of this section is to prove Theorem 1.15. We prove the following theorem first. Theorem 6.1. Assume the initial curve γ0 has a one-to-one convex projection onto the xy-plane and the CSF γ(·, t) develops a Type II singularity as t →T. For every nonzero vector ⃗v in the xy-plane, there exists a line L⃗v in Rn with PxyL⃗v parallel to the vector ⃗v and a sequence of times {t⃗v j}, such that the curves γ(·,t⃗v j ) q 2T −2t⃗v j locally smoothly converge to the line L⃗v as j →+∞. One key step is the following proposition. SINGULARITIES OF CURVE SHORTENING FLOW WITH CONVEX PROJECTIONS 25 Proposition 6.2. If along some sequence of times {tj}, γ(·,tj) √ 2T −2tj locally smoothly converges to a line L1 of multiplicity two, then there exists a line L2 with PxyL1 ⊥ PxyL2 and another sequence of times {t′ j} such that the rescaled curves γ(·,t′ j) √ 2T −2t′ j locally smoothly converge to the line L2 of multiplicity two. Proof. We may assume the projection of the line L1 onto the xy-plane is the x-axis. Thus as j →+∞, (6.1) Y (x = 0, tj) = o( p T −tj), where Y is the difference between the upper and lower branches, which we had defined in equation (5.3). We can establish the following pivotal lemma: Lemma 6.3. For the chosen sequence {tj}, one has that (6.2) lim j→+∞f(tj) = lim j→+∞ Z tj 0 π2 4 max  1 x2max(τ), 1 x2 min(τ)  − 1 2(T −τ)  dτ = +∞, where we had defined the function f in equation (5.6). Proof. Lemma 5.11 tells us that Y (x, t) ≥φ(x, t). Thus by the definition of the barrier φ (Definition 5.5), we have Y (0, tj) ≥φ(0, tj) = ϵe−f(tj)p T −tj. Equation (6.1) implies that e−f(tj) = o(1) as j →+∞. That is to say f(tj) →+∞as j →+∞. This lemma then follows from the definition of f (equation (5.6)). □ By Lemma 6.3, there exists another sequence {t′ j} with t′ j →T such that the integrand of equation (6.2) is positive. In other words, (6.3) min{|xmin(t′ j)|, |xmax(t′ j)|} < π 2 q 2T −2t′ j. By taking a subsequence based on Theorem 1.13, without relabeling, the rescaled curves γ(·,t′ j) √ 2T −2t′ j converge to a line of multiplicity two. The projection of this line onto the xy-plane has to be the y-axis by equation (6.3). □ Now we turn to the proof of Theorem 6.1. Based on Proposition 6.2, we may assume there exist two sequences {tj}, {t′ j} such that γ(·,tj) √ 2T −2tj converges to a line L1 of multiplicity two with PxyL1 = x-axis and γ(·,t′ j) √ 2T −2t′ j converges to a line L2 of multiplicity two with PxyL2 = y-axis. We start by explaining the proof of Theorem 6.1 intuitively. 
By continuity, Pxy γ(·,t) √2T −2t sweeps from x-axis to y-axis. We may assume this occurs through the 26 QI SUN first and third quadrants. By Proposition 6.2, it also sweeps through the second and fourth quadrants. We now proceed to the details of the proof. Recall that by Lemma 3.3, |Pxyγs(u, t)| ≥ √ δ > 0 for all u, t. Definition 6.4. We denote by θ(u, t) the turning angle between the vector Pxyγs(u, t) and the positive x-axis. For the sequences {tj}, {t′ j} chosen, we have: Lemma 6.5. For arbitrary R, ϵ > 0, there exists j0 ∈N such that for any j ≥j0 and any γ(u1, tj) ∈BR√ 2T −2tj(0), γ(u2, t′ j) ∈BR√ 2T −2t′ j(0), one has that, Pxyγs(u1, tj) |Pxyγs(u1, tj)| · (0, 1) < ϵ and Pxyγs(u2, t′ j) |Pxyγs(u2, t′ j)| · (1, 0) < ϵ. We will use an unrescaled version of Corollary 4.6: Lemma 6.6. For arbitrary R, ϵ > 0, there exists t0 ∈[0, T) such that for any t ≥t0 and any γ(u1, t), γ(u2, t) ∈BR√2T −2t(0), one has that, |γs(u1, t) −γs(u2, t)| < ϵ. Lemma 6.7. For arbitrary R > 0, there exists tR ∈[0, T) such that for any t ≥tR, γ(·, t) ∩BR√2T −2t(0) has exactly two connected components, and |γ|2 has exactly one minimum point on each component. Furthermore, we can smoothly track these minimum points. Proof. If the first part of this lemma were not true, then there would exist a sequence {tl} such that the number of the connected components of γ(·, tl) ∩BR√2T −2tl(0) is not two. By taking a subsequence, γ(·,tl) √2T −2tl locally smoothly converges to a line of multiplicity two. This line passes through the origin and intersects transversely with the sphere ∂BR(0). This gives a contradiction. Claim 6.8. The function |γ|2 has exactly one minimum point on each component of γ(·, t) ∩BR√2T −2t(0). Proof of the Claim. The author learned the following argument from [CSSZ25, Proof of Lemma 3.5]. On each component, |γ|2 has at least one minimum point since each component of γ(·,t) √2T −2t ∩BR(0) is close to some line passing through the origin. By direct computations, |γ|2 s = 2γ · γs and (6.4) |γ|2 ss = 2γ · γss + 2 = 2 γ √ 2T −2t · √ 2T −2tγss + 2. SINGULARITIES OF CURVE SHORTENING FLOW WITH CONVEX PROJECTIONS 27 Thus, for points γ(u, t) ∈BR√2T −2t(0), because γ(u,t) √2T −2t ≤R and the rescaled curvature √ 2T −2t|γss(u, t)| is small, one has |γ|2 ss (u, t) > 0. Thus the function |γ|2 has at most one critical point on each component. □ We can smoothly track the minimum points γ(umin(t), t) by applying the implicit function theorem to (|γ|2)s(umin(t), t) = 0 and (|γ|2)ss(umin(t), t) > 0. □ Proof of Theorem 6.1. We may assume the sequences {tj} and {t′ j} are alternating, t1 < t′ 1 < t2 < t′ 2 < · · · < tj < t′ j < · · · . As a result of Lemma 6.7, we may continuously track these two minimum points γ(u1 min(t), t) and γ(u2 min(t), t) of |γ|2. Based on Lemma 6.5, using the notion in Definition 6.4, we may assume lim j→+∞θ(u1 min(tj), tj) = 0, lim j→+∞θ(u1 min(t′ j), t′ j) = π 2 . We may also assume that, for each nonzero vector ⃗v = (v1, v2) with v1, v2 > 0 in the xy-plane, there exists a sequence of times {t⃗v j} with tj < t⃗v j < t′ j such that, (6.5) θ(u1 min(t⃗v j), t⃗v j) = arctan v2 v1 for large enough j. By taking a subsequence of {t⃗v j}, we may assume that γ(·,t⃗v j ) q 2T −2t⃗v j converges to some line L⃗v whose projection PxyL⃗v is parallel to the vector ⃗v, because of equation (6.5). For each nonzero vector ⃗v = (v1, v2) in R2 with v1 < 0, v2 > 0, we define ⃗v⊥= (v2, −v1). 
By the previous argument, since v₂ > 0 and −v₁ > 0, there exists a sequence of times {t^{⃗v⊥}_j} such that γ(·, t^{⃗v⊥}_j)/√(2T − 2t^{⃗v⊥}_j) converges to some line L_{⃗v⊥} whose projection P_xy L_{⃗v⊥} is parallel to the vector ⃗v^⊥. Since (v₁, v₂) ⊥ (v₂, −v₁), it follows from Proposition 6.2 that there exists a sequence of times {t^⃗v_j} such that γ(·, t^⃗v_j)/√(2T − 2t^⃗v_j) converges to some line L_⃗v whose projection P_xy L_⃗v is parallel to the vector ⃗v. □

Proof of Theorem 1.15. Pick a sequence of times {t^⃗v_j} as in Theorem 6.1 with τ^⃗v_j := −(1/2) log(T − t^⃗v_j). We denote by Γ_j the j-th rescaled CSF corresponding to τ^⃗v_j. By Theorem 6.1, the curves Γ_j(·, τ = 0) locally smoothly converge to the line L_⃗v. By Theorem 1.13, there exists a subsequence such that Γ_j locally smoothly converges to a stationary line L of multiplicity two. By uniqueness of the limits at τ = 0, L = L_⃗v. □

7. Linear scales

For the rest of this paper, δ₀ refers to a fixed constant depending on the initial curve but independent of time τ. The constant δ₀ will be chosen according to Lemma 7.13.

Definition 7.1. For a constant δ > 0, we define the linear scale ρ_δ : R_{>0} → R_{>0} to be
(7.1) ρ_δ(H) := δ/(20H).

Notation 7.2. For convenience, we omit δ and simply write ρ = ρ_δ when no ambiguity arises. In this section, ρ always refers to ρ_δ₀. We will choose a different δ in the next section; see Notation 8.14.

Definition 7.3. In Rⁿ, we define a horizontal rotation to be a rotation in SO(n) that rotates the vectors that are parallel to the xy-plane and keeps the vectors that are perpendicular to the xy-plane invariant.

The property that Γ has a one-to-one convex projection is preserved by horizontal rotations.

Remark 7.4. In our estimates, all constants are allowed to depend on the initial curve, particularly on the constant in the three-point condition ([Sun24, Definition 4.8]). This is harmless because that constant is independent of time along the flow ([Sun24, Proposition 5.3 and Page 21]) and our analysis focuses only on the singularity.

The goal of this section is to establish C² estimates (Proposition 7.5 and Proposition 7.7) at linear scales over time intervals. We start with C¹ estimates at a fixed time.

Proposition 7.5. There exist small constants λ₀ ∈ (0, 1), H₀ > 0 and a large time τ₀ for which the following is true. Suppose that at a time τ ≥ τ₀, there is a horizontal rotation S_τ and a vector ⃗θ = ⃗θ(τ) = (θ₁(τ), · · · , θ_{n−2}(τ)) ∈ [0, π/2)^{n−2} ⊂ R^{n−2} such that S_τΓ(·, τ) ∩ {|x| ≤ 2} consists of the graphs of functions y¹, y², z¹_ℓ, z²_ℓ (1 ≤ ℓ ≤ n − 2) with
∥y^i(x, τ)∥_{C¹[−2,2]} + Σ_{ℓ=1}^{n−2} ∥z^i_ℓ(x, τ) − (tan θ_ℓ)x∥_{C¹[−2,2]} ≤ H ≤ H₀
for i = 1, 2, where the indices i = 1, 2 label the upper and lower branches of the projection respectively. Then S_τΓ(·, τ) ∩ {|x| ≤ (3/4)ρ_δ₀(H)} is a union of the graphs of functions y^i(·, τ), z^i_ℓ(·, τ), i = 1, 2. In addition, for x ∈ [−(1/2)ρ_δ₀(H), (1/2)ρ_δ₀(H)], one has
|y^i(x, τ)| ≤ CH(|x| + 1), |y^i_x(x, τ)| ≤ CH for i = 1, 2,
and for x ∈ [−(1/2)λ₀ρ_δ₀(H), (1/2)λ₀ρ_δ₀(H)], i = 1, 2 and 1 ≤ ℓ ≤ n − 2, one has
|z^i_ℓ(x, τ) − (tan θ_ℓ)x| ≤ CH(|x| + 1), |z^i_{ℓx}(x, τ) − tan θ_ℓ| ≤ CH,
where the constants δ₀, λ₀, C are independent of the time τ.

Remark 7.6. By the bounded slope lemma (Lemma 3.2), we have an a priori upper bound 0 ≤ θ_M < π/2 such that tan²θ₁ + · · · + tan²θ_{n−2} ≤ tan²θ_M for all possible ⃗θ = ⃗θ(τ) = (θ₁(τ), · · · , θ_{n−2}(τ)) in Proposition 7.5.

Based on De Giorgi–Nash–Moser estimates and Schauder estimates, we are able to derive C² estimates over time intervals on which we have C¹ estimates.
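The terminology "linear scale" in Definition 7.1 can be motivated by a quick computation (illustrative, not part of the original argument): on the window |x| ≤ ρ_δ₀(H)/2 provided by Proposition 7.5, the height bound is at most linear in |x| and uniformly small in total,
\[
|y^i(x,\tau)| \;\le\; CH\,(|x|+1)\;\le\; CH\Big(\frac{\rho_{\delta_0}(H)}{2}+1\Big) \;=\; \frac{C\delta_0}{40} + CH,
\]
so as H → 0 the graphical window grows like 1/H while the curve stays within a slab of height O(δ₀).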
In the following proposition, we clarify that the constant is independent of the spacetime domain.

Proposition 7.7. Let the constants λ₀, H₀, τ₀ be as in Proposition 7.5. Suppose that for some τ′ ≥ τ₀ and T > 0, there is a horizontal rotation S_τ′ and a vector ⃗θ = ⃗θ(τ′) = (θ₁(τ′), · · · , θ_{n−2}(τ′)) ∈ [0, π/2)^{n−2} ⊂ R^{n−2} such that for any τ ∈ [τ′, τ′ + T], S_τ′Γ(·, τ) ∩ {|x| ≤ 2} consists of the graphs of functions y¹, y², z¹_ℓ, z²_ℓ (1 ≤ ℓ ≤ n − 2) with
∥y^i(x, τ)∥_{C¹[−2,2]} + Σ_{ℓ=1}^{n−2} ∥z^i_ℓ(x, τ) − (tan θ_ℓ)x∥_{C¹[−2,2]} ≤ H ≤ H₀
for i = 1, 2. Then for x ∈ [−(1/4)λ₀ρ_δ₀(H), (1/4)λ₀ρ_δ₀(H)], τ ∈ [τ′ + 1/2, τ′ + T], i = 1, 2 and 1 ≤ ℓ ≤ n − 2, one has
|y^i_xx(x, τ)| ≤ CH, |z^i_{ℓxx}(x, τ)| ≤ CH,
where the constant C is independent of τ′, ρ_δ₀(H) and T.

We first prove Proposition 7.7 based on Proposition 7.5. Then we devote the rest of this section to the proof of Proposition 7.5.

Lemma 7.8. One can compute the equations of the graphical rescaled CSF:
(7.2) y_τ = y_xx/(1 + y_x² + z²_{1x} + · · · + z²_{(n−2)x}) − x y_x + y,
(7.3) z_{ℓτ} = z_{ℓxx}/(1 + y_x² + z²_{1x} + · · · + z²_{(n−2)x}) − x z_{ℓx} + z_ℓ,
where 1 ≤ ℓ ≤ n − 2.

Proof of Proposition 7.7. We drop the index i for simplicity. By Proposition 7.5, S_τ′Γ(·, τ) ∩ {|x| ≤ (3/4)ρ_δ₀(H)} is a union of the graphs of functions y^i(·, τ), z^i_ℓ(·, τ), i = 1, 2, and for x ∈ [−(1/2)λ₀ρ_δ₀(H), (1/2)λ₀ρ_δ₀(H)],
|y^i_x(x, τ)| ≤ CH, |z^i_{ℓx}(x, τ) − tan θ_ℓ(τ′)| ≤ CH
for i = 1, 2, where θ(τ′) is bounded away from π/2 by Remark 7.6. The function y satisfies the following equation of the rescaled CSF (Lemma 7.8):
y_τ = a(x, τ) y_xx − x y_x + y, where a = 1/(1 + y_x² + z²_{1x} + · · · + z²_{(n−2)x}).
To clarify that the constant C is independent of τ′, ρ and T, for arbitrary τ₁ ∈ [τ′ + 1, τ′ + T] and |x₁| ≤ ρ/2 − 2, we perform a change of variables
x̄ = x − x₁e^{τ−τ₁}, τ̄ = τ − τ₁, y(x, τ) = ȳ(x̄, τ̄), a(x, τ) = ā(x̄, τ̄).

Claim 7.9. With the above change of variables, one has ȳ_τ̄ = ā(x̄, τ̄) ȳ_x̄x̄ − x̄ ȳ_x̄ + ȳ.

Proof of the Claim. By direct computations, y_τ = ȳ_τ̄ − x₁e^{τ−τ₁} ȳ_x̄ and y_x = ȳ_x̄, y_xx = ȳ_x̄x̄. Thus
ȳ_τ̄ − x₁e^{τ−τ₁} ȳ_x̄ = ā(x̄, τ̄) ȳ_x̄x̄ − (x̄ + x₁e^{τ−τ₁}) ȳ_x̄ + ȳ. □

By Claim 7.9, the function ȳ_x̄ satisfies the following equation in divergence form:
(7.4) (ȳ_x̄)_τ̄ = (ā (ȳ_x̄)_x̄)_x̄ − x̄ (ȳ_x̄)_x̄.
For |x̄| ≤ 2 and τ̄ ∈ [−1, 0], the functions ȳ_x̄, (z̄₁)_x̄, · · · , (z̄_{n−2})_x̄ are uniformly bounded, independently of τ₁ and x₁. Thus, by De Giorgi–Nash–Moser type estimates, see for example [LSU68, Theorem 1.1, Chapter V, §1, page 419], the function ȳ_x̄ is Hölder continuous. Similarly (z̄₁)_x̄, · · · , (z̄_{n−2})_x̄ are Hölder continuous. As a result, ā is Hölder continuous. Thus, by Schauder estimates for equation (7.4) and Proposition 7.5, for |x̄| ≤ 1 and τ̄ ∈ [−1/2, 0],
(7.5) |ȳ_x̄x̄| ≤ C|ȳ_x̄| ≤ CH
for some constant C independent of τ₁ and x₁. As a result, by taking x₁ = ±(ρ/2 − k) for integers k ∈ [2, ρ/2] and taking τ₁ ∈ [τ′ + 1, τ′ + T], it follows from equation (7.5) that, for |x| ≤ (1/√e)(ρ/2 − 2) + 1 and τ ∈ [τ′ + 1/2, τ′ + T],
|y_xx(x, τ)| = |ȳ_x̄x̄| ≤ CH,
where the constant C is independent of τ′, ρ and T as long as we have gradient estimates. The estimates for |z^i_{ℓxx}| can be obtained similarly because z^i_ℓ(x, τ) − (tan θ_ℓ)x satisfies the same equation and the same C¹ estimates as y^i. □
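The change of variables in Claim 7.9 admits a brief motivation (our paraphrase of a standard device, not spelled out in the text): the drift term −x∂_x in Lemma 7.8 has an unbounded coefficient, and its integral curves are exactly x(τ) = x₁e^{τ−τ₁}, since
\[
y_\tau + x\,y_x = a\,y_{xx} + y, \qquad \frac{dx}{d\tau} = x \;\Longrightarrow\; x(\tau) = x_1 e^{\tau-\tau_1}.
\]
Recentering along the characteristic through (x₁, τ₁) therefore replaces the coefficient −x by −x̄, which is bounded by 2 on the parabolic cylinder |x̄| ≤ 2, τ̄ ∈ [−1, 0], so the interior De Giorgi–Nash–Moser and Schauder estimates apply with constants independent of x₁ and τ₁.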
7.1. Setup and a sketch of the proof of Proposition 7.5. We denote by Γ the rescaled CSF and by Γ̄ the projection of Γ onto the xy-plane. The next lemma follows from the improved blow-up results (Theorem 1.13) and can be proved by the same argument as in the proof of Lemma 6.7.

Lemma 7.10. For arbitrary R > 0, there exists τ^pt_R ∈ [−(1/2) log T, +∞) such that for any τ ≥ τ^pt_R, the projection of the rescaled CSF inside the ball, Γ̄(·, τ) ∩ B_R(0), has exactly two connected components, and the function |Γ̄|² has exactly one minimum point on each component. In addition, we can smoothly track these two minimum points.

Definition 7.11. For each τ ≥ τ^pt_1, we label these two minimum points of the function |Γ̄|² in Lemma 7.10 by p(τ), q(τ) ∈ R².

For convenience, we adopt the notation p(τ), q(τ) independently of the horizontal rotations that we will choose. By Lemma 7.10, for R ≥ 1 and τ ≥ τ^pt_R, the points p(τ), q(τ) are minimum points of |Γ̄|² inside the ball Γ̄(·, τ) ∩ B_R(0) and thus are independent of the choice of R.

Definition 7.12. For each τ ≥ τ^pt_1, we denote by O the origin and by A₁(τ), A₂(τ) the areas of the two domains enclosed by the line segments Op(τ), Oq(τ) and the projection of the rescaled curve Γ̄(·, τ).

We are able to bound the rescaled area from below.

Lemma 7.13. There exist δ₀ > 0 and τ_δ₀ such that A₁(τ), A₂(τ) ≥ δ₀ for all τ ∈ [τ_δ₀, +∞).

Proof. We prove A₁(τ) ≥ δ₀; the proof for A₂ is similar. We consider the rate of change of the area enclosed by the projection of the (unrescaled) CSF onto the xy-plane. Recall that τ = τ(t) = −(1/2) log(T − t). By [Sun24, Lemma 3.3] and the improved blow-up results (Theorem 1.13), for τ large,
(7.6) d/dt ((2T − 2t)A₁(τ(t))) = −∫_{q(τ)}^{p(τ)} (x_s² + y_s²) k̄ ds̄ + o(1) ≤ −(π/2) δ₁,
where we used x_s² + y_s² ≥ δ₁ for some δ₁ > 0 ([Sun24, Corollary 5.8]) and the fact that the turning angle between the points p(τ), q(τ) is π + o(1) for τ large, because p(τ), q(τ) are minimum points of the function |Γ̄|². Because the CSF γ shrinks to a point,
lim_{t→T} (2T − 2t)A₁(τ(t)) = 0.
As a result, by integrating equation (7.6) from t to T,
(7.7) 0 − (2T − 2t)A₁(τ(t)) ≤ −(π/2) δ₁(T − t).
Pick δ₀ = (π/4) δ₁. □

We denote by |Op(τ)|, |Oq(τ)| the distances of the minimum points p(τ), q(τ) to the origin.

Lemma 7.14. If there exists a horizontal rotation S_τ such that the projection P_xy(S_τΓ(·, τ) ∩ {|x| ≤ 2}) consists of the graphs of functions y¹, y² with ∥y^i(x, τ)∥_{C⁰[−2,2]} ≤ H for i = 1, 2, then one has
(7.8) |Op(τ)|, |Oq(τ)| ≤ H.

Proof. Because p(τ), q(τ) are minimum points of the function |Γ̄|², one has
(7.9) |Op(τ)|, |Oq(τ)| ≤ max_{i=1,2} |y^i(0, τ)| ≤ max_{i=1,2} ∥y^i(x, τ)∥_{C⁰[−2,2]} ≤ H. □

Recall Definition 7.11 and Definition 7.3.

Definition 7.15. We define S¹_τ to be the horizontal rotation such that for the rotated curve S¹_τΓ(·, τ), the minimum point q(τ) is on the negative y-axis. We may assume that for the rotated curve S¹_τΓ(·, τ), the x-coordinate x(p(τ)) of the point p(τ) is non-positive (see Figure 3).

Definition 7.16. We define the angle-bisecting rotation S^bis_τ to be the horizontal rotation such that the x-axis bisects the angle formed by Op(τ) and Oq(τ). We may assume that for the rotated curve S^bis_τ Γ(·, τ), the x-coordinates x(p(τ)) and x(q(τ)) are non-positive (see Figure 4).

Figure 3. Points p(τ), q(τ) on S¹_τΓ(·, τ).
Figure 4. Points p(τ), q(τ) on S^bis_τ Γ(·, τ).

Sketch of the proof of Proposition 7.5. In §7.2 we derive gradient estimates for the upper branch of the rotated curve S¹_τΓ(·, τ). Based on the estimates for S¹_τΓ(·, τ) in §7.2, we derive gradient estimates in §7.3 for the upper branch of the rotated curve S^bis_τ Γ(·, τ). Because of the choice of the horizontal rotation S^bis_τ (Definition 7.16), the estimates on the lower branch are no different from the estimates for the upper branch of S^bis_τ Γ(·, τ).
Thus we have gradient estimates for both upper and lower branches of the rotated curve S^bis_τ Γ(·, τ). In §7.4, we use the derived estimates for S^bis_τ Γ(·, τ) to establish the desired C¹ estimates for y from Proposition 7.5. In §7.5, we consider the C¹ estimates for z_ℓ (1 ≤ ℓ ≤ n − 2) from Proposition 7.5.

7.2. Gradient estimates on the upper branch. In this subsection, we always consider the rotated curve S¹_τΓ(·, τ) and assume |Op(τ)|, |Oq(τ)| ≤ H. We denote by
x_max(τ) := max_{u∈S¹} x(u, τ), x_min(τ) := min_{u∈S¹} x(u, τ)
the maximum and minimum values of the function x at time τ. And we denote by x(y_max(τ)) the x-coordinate of the maximum point of the function y(·, τ). One has x(y_max(τ)) ≥ x(p(τ)) because the projection curve is convex and the slope of the upper branch is decreasing. See Figure 3.

By comparing areas, we first estimate |x_max(τ)| and |x_min(τ)|.

Lemma 7.17. Let δ₀ be the constant in Lemma 7.13. One has
(7.10) 2H|x_min(τ)| ≥ δ₀
and
(7.11) (y_max(τ) + H)(x_max(τ) + H) ≥ δ₀.

Proof. The areas A₁(τ), A₂(τ) are no bigger than the areas of the following rectangles, respectively:
{(x, y) | x_min(τ) ≤ x ≤ 0, −H ≤ y ≤ H} and {(x, y) | −H ≤ x ≤ x_max(τ), −H ≤ y ≤ y_max(τ)}.
This lemma then follows from Lemma 7.13. □

For ρ = ρ_δ₀(H) = δ₀/(20H), our goal is to get the gradient estimates for |x| ≤ ρ. By inequality (7.10), |x_min(τ)| ≥ δ₀/(2H) = 10ρ > ρ. Recall that the indices i = 1, 2 label the upper and lower branches respectively. We first estimate the gradient at x = −ρ on the upper branch.

Lemma 7.18. One has that
0 < y¹_x(−ρ, τ) ≤ 2H/(|x_min(τ)| − ρ).

Proof. If this lemma were not true, then y¹_x(−ρ, τ) > 2H/(|x_min(τ)| − ρ). Because the projection curve is convex and the slope of the upper branch is decreasing, for x < −ρ,
y¹_x(x, τ) ≥ y¹_x(−ρ, τ) > 2H/(|x_min(τ)| − ρ).
As a result,
2H = [2H/(|x_min(τ)| − ρ)] (|x_min(τ)| − ρ) < ∫_{x_min(τ)}^{−ρ} y¹_x(x, τ) dx ≤ 2H,
which gives a contradiction. □

Lemma 7.19. If x(y_max(τ)) ≥ ρ, then for x ∈ [−ρ, ρ],
0 ≤ y¹_x(x, τ) ≤ 2H/(|x_min(τ)| − ρ).

Proof. Because the projection curve is convex and the slope of the upper branch is decreasing, for x satisfying −ρ ≤ x ≤ ρ ≤ x(y_max(τ)), by Lemma 7.18,
0 ≤ y¹_x(x, τ) ≤ y¹_x(−ρ, τ) ≤ 2H/(|x_min(τ)| − ρ). □

Lemma 7.20. If x(y_max(τ)) < ρ, then for x ∈ [−ρ, x(y_max(τ))),
(7.12) 0 < y¹_x(x, τ) ≤ 2H/(|x_min(τ)| − ρ).
As a result,
(7.13) y_max(τ) ≤ 4Hρ/(|x_min(τ)| − ρ) + H.
In addition, for x ∈ [x(y_max(τ)), ρ],
(7.14) |y¹_x(x, τ)| ≤ (y_max(τ) + H)/(x_max(τ) − ρ).

Proof. Equation (7.12) holds by the same argument as in the proof of Lemma 7.19. Equation (7.12) implies that
y_max(τ) ≤ ∫_{−ρ}^{x(y_max(τ))} y¹_x(x, τ) dx + y¹(−ρ, τ) ≤ [2H/(|x_min(τ)| − ρ)] (2ρ) + H.
As in the proof of Lemma 7.18, we can estimate the gradient at x = ρ:
|y¹_x(ρ, τ)| ≤ (y_max(τ) + H)/(x_max(τ) − ρ).
Thus equation (7.14) holds because the slope of the upper branch is decreasing and x(y_max(τ)) < ρ. □

Recall that ρ has been chosen according to equation (7.1).

Proposition 7.21 (Gradient estimates on the upper branch). For the rotated curve S¹_τΓ(·, τ) with |Op(τ)|, |Oq(τ)| ≤ H, for H chosen small enough and x ∈ [−ρ, ρ], one has that
|y¹_x(x, τ)| ≤ (18/δ₀) H²,
where δ₀ is the constant from the area lower bound in Lemma 7.13.

Proof of Proposition 7.21. Case 1: x(y_max(τ)) ≥ ρ. By Lemma 7.19, equation (7.10) and the choice of ρ (equation (7.1)),
(7.15) 0 ≤ y¹_x ≤ 2H/(δ₀/(2H) − ρ) = 4H²/(δ₀ − 2Hρ) ≤ 4H²/(δ₀/2) = (8/δ₀) H².

Case 2: x(y_max(τ)) < ρ.

Claim 7.22. In this case, one has that y_max(τ) ≤ 2H.

Proof of the Claim.
Combining equation (7.10) and equation (7.13),
y_max(τ) ≤ 4Hρ/(δ₀/(2H) − ρ) + H = 8H²ρ/(δ₀ − 2Hρ) + H.
Because of the choice of ρ (equation (7.1)),
y_max(τ) ≤ 8H(Hρ)/(δ₀/2) + H ≤ 2H. □

By equation (7.11),
(7.16) x_max(τ) ≥ δ₀/(3H) − H.
Combined with equation (7.14), for x ∈ [x(y_max(τ)), ρ],
(7.17) |y¹_x| ≤ (y_max(τ) + H)/(x_max(τ) − ρ) ≤ 3H/(δ₀/(3H) − H − ρ) = 9H²/(δ₀ − 3H² − 3Hρ) ≤ 9H²/(δ₀/2) = (18/δ₀) H²
for H small. □

7.3. Angle-bisecting rotation and C¹ estimates on the lower branch. In this subsection, we always consider the rotated curve S^bis_τ Γ(·, τ) and assume |Op(τ)|, |Oq(τ)| ≤ H. Let S¹_τ be the horizontal rotation defined in Definition 7.15 and (S¹_τ)⁻¹ be its inverse. Then by Proposition 7.21, the rotation S^bis_τ (S¹_τ)⁻¹ is a horizontal rotation by an angle µ with
(7.18) |tan(2µ)| ≤ (18/δ₀) H².
The goal of this subsection is to get C¹ estimates on the upper branch for S^bis_τ Γ(·, τ). Because of the choice of the rotation S^bis_τ (Definition 7.16), the estimates on the lower branch are no different from those on the upper branch.

Lemma 7.23. For |θ₁|, |θ₂| ≤ π/6,
|tan(θ₁ + θ₂)| ≤ 2|tan θ₁| + 2|tan θ₂|.

Proof. One has that
|tan(θ₁ + θ₂)| = |sin(θ₁ + θ₂)|/|cos(θ₁ + θ₂)| ≤ 2|sin(θ₁ + θ₂)| = 2|(tan θ₁ + tan θ₂) cos θ₁ cos θ₂| ≤ 2|tan θ₁ + tan θ₂|. □

We now prove the estimates for the rotation S^bis_τ based on the estimates for the rotation S¹_τ.

Lemma 7.24. Assume |Op(τ)|, |Oq(τ)| ≤ H. On the upper branch of S^bis_τ Γ(·, τ), for x ∈ [−(3/4)ρ, (3/4)ρ], one has that
|y¹_x| ≤ (72/δ₀) H².

Proof. By Proposition 7.21 and Lemma 7.23,
|y¹_x| ≤ 2|tan µ| + 2·(18/δ₀)H² ≤ 2|tan 2µ| + 2·(18/δ₀)H².
Combined with equation (7.18), this lemma is true. □

7.4. C¹ estimates for a rotation by an angle no more than H. In this subsection, we use the established gradient estimates (Lemma 7.24), which are with respect to the time-dependent directions associated with the rotation S^bis_τ, to get the desired C¹ estimates for y in Proposition 7.5. Recall that δ₀ is the constant in Lemma 7.13.

Lemma 7.25. There exist H′₀ > 0 small and a time τ′₀ large such that the following holds. Suppose that at a time τ ≥ τ′₀, there is a horizontal rotation S_τ such that P_xy(S_τΓ(·, τ) ∩ {|x| ≤ 2}) consists of the graphs of functions y¹, y² with
(7.19) ∥y^i(x, τ)∥_{C¹[−2,2]} ≤ H ≤ H′₀, i = 1, 2.
Then P_xy(S_τΓ(·, τ) ∩ {|x| ≤ (3/4)ρ_δ₀(H)}) is a union of the graphs of functions y^i(·, τ), i = 1, 2. In addition, there exists an angle λ = λ(τ) with |tan λ| ≤ 2H such that for x ∈ [−(1/2)ρ_δ₀(H), (1/2)ρ_δ₀(H)],
|y^i_x(x, τ) − tan λ(τ)| ≤ CH², i = 1, 2.
Moreover, for x ∈ [−(1/2)ρ_δ₀(H), (1/2)ρ_δ₀(H)], one has
|y^i| ≤ 3H(|x| + 1), |y^i_x(x, τ)| ≤ 3H.

Proof of Lemma 7.25. Recalling Definition 7.11, we define
λ(τ) = (arctan y_x(p(τ)) + arctan y_x(q(τ)))/2.
By equation (7.19), combined with Lemma 7.23 and |2 tan(θ/2)| ≤ |tan θ|,
|tan λ(τ)| ≤ |y_x(p(τ))| + |y_x(q(τ))| ≤ 2H.
Based on the definition of λ(τ) and the definition of the rotation S^bis_τ (Definition 7.16), by Lemma 7.14 and Lemma 7.24, for x ∈ [−(1/2)ρ_δ₀(H), (1/2)ρ_δ₀(H)],
|tan(arctan y^i_x(x, τ) − λ(τ))| ≤ CH².
Because |tan(θ₁ − θ₂)| ≥ (1/2)|tan θ₁ − tan θ₂| for θ₁, θ₂ small,
|y^i_x(x, τ) − tan λ(τ)| ≤ CH².
As a result, |y^i_x(x, τ)| ≤ CH² + |tan λ| ≤ CH² + 2H ≤ 3H.
Recall that we denote by p(τ) the minimum point of the function |Γ̄|² on the upper branch and that |Op(τ)| ≤ H (Lemma 7.14). By comparing the upper branch with its tangent line at the point p(τ), one has
y¹(x, τ) ≤ y¹(p(τ)) + |x − x(p(τ))| sup_{|x|≤ρ/2} |y¹_x| ≤ H + (|x| + H)(3H) ≤ 3H(|x| + 1)
for x ∈ [−ρ/2, ρ/2]. Similarly, for the lower branch, we have y² ≥ −3H(|x| + 1).
Because y¹ ≥ y², for x ∈ [−ρ/2, ρ/2],
(7.20) −3H(|x| + 1) ≤ y² ≤ y¹ ≤ 3H(|x| + 1). □

7.5. Estimates for z_ℓ (Proof of Proposition 7.5). Recall that we label a point in Rⁿ by (x, y, z₁, · · · , z_{n−2}). For 1 ≤ ℓ ≤ n − 2, we denote by ⃗e₁, ⃗e₂ and ⃗e_{ℓ+2} the unit vectors in the directions of the positive x-axis, y-axis and z_ℓ-axis. We fix an index ℓ with 1 ≤ ℓ ≤ n − 2 from now on. The argument in this subsection applies to each such ℓ, and the dependence on ℓ will remain implicit. For α ∈ [0, π/2), we adopt the notation
(7.21) ⃗e_α := cos α ⃗e₂ + sin α ⃗e_{ℓ+2}.

Definition 7.26. We denote by P_α the 2-plane spanned by the vectors ⃗e₁ and ⃗e_α.

The next lemma follows from [Sun24, Definition 4.8, Proposition 5.3 for n = 3 and Proof of Theorem 1.5(b) on Page 21].

Lemma 7.27. There exists α₀ ∈ (0, π/2) such that the CSF Γ(·, τ) has a one-to-one convex projection onto the 2-plane P_α for all α ∈ [0, α₀] and τ ≥ τ_α₀ for some τ_α₀.

We denote by A^α₁(τ), A^α₂(τ) the areas described in Definition 7.12, with respect to the projection onto the 2-plane P_α instead of the xy-plane (which is the same as P₀). Thus A⁰₁(τ) = A₁(τ) and A⁰₂(τ) = A₂(τ), where A₁(τ), A₂(τ) are defined in Definition 7.12. Analogously to Lemma 7.13, we may assume
A^α₁(τ), A^α₂(τ) ≥ δ₀/2
for the following reasons, where δ₀ is the constant in Lemma 7.13. By [Sun24, Proof of Corollary 5.8, particularly the dependence of δ on ∆ℓ], we may assume that for α ∈ [0, α₀] (by picking α₀ smaller if necessary), one has
(Γ_σ · ⃗e₁)² + (Γ_σ · ⃗e_α)² ≥ (2/π) δ₀.
The claim then follows from the proof of Lemma 7.13, particularly equation (7.7). We fix α ∈ (0, α₀] from now on.

Proof of Proposition 7.5. What we have in mind is that the limit line is in the direction of the vector
⃗v = (⃗e₁ + Σ_{ℓ=1}^{n−2} tan θ_ℓ ⃗e_{ℓ+2}) / √(1 + Σ_{ℓ=1}^{n−2} tan² θ_ℓ).
To lighten our notation, we define an angle θ ∈ [0, π/2) by
cos θ = 1/√(1 + Σ_{ℓ=1}^{n−2} tan² θ_ℓ).
Then we can compute the components of the projection of the vector ⃗v onto the 2-plane P_α:
⃗v · ⃗e₁ = cos θ, ⃗v · ⃗e_α = tan θ_ℓ sin α cos θ.
We define an angle β ∈ [0, π/2) by
(7.22) tan β = tan θ_ℓ sin α.
By Remark 7.6, there exists β₀ < π/2, depending only on the initial curve, such that β ∈ [0, β₀). We define S_α(β) to be the rotation of the 2-plane P_α by an angle β. That is to say, with respect to the orthonormal basis {⃗e₁, ⃗e_α} of the 2-plane P_α, one has
S_α(β) = [cos β, −sin β; sin β, cos β], β = β(τ).
We define the following rotated orthonormal basis of the 2-plane P_α:
(7.23) ⃗e′₁ := S_α(β)⃗e₁, ⃗e′_α := S_α(β)⃗e_α.
The previous estimates (Lemma 7.25) for the projection onto the xy-plane also apply to the projection onto the 2-plane P_α with respect to the rotation S_α(β). In other words, we consider the curve S_α(−β)Γ(·, τ). As a result, for τ large and H small, for |Γ(σ, τ) · ⃗e′₁| ≤ (1/4)ρ_δ₀(H), by Lemma 7.25, one has that
(7.24) |Γ(σ, τ) · ⃗e′_α| ≤ 3H(|Γ(σ, τ) · ⃗e′₁| + 1)
and
(7.25) |(Γ_σ · ⃗e′_α)/(Γ_σ · ⃗e′₁)|(σ, τ) ≤ 3H.

Lemma 7.28. One has that
(7.26) ⃗e_{ℓ+2} − tan θ_ℓ ⃗e₁ = (1/sin α)(−cos α ⃗e₂ + √(1 + tan² β) ⃗e′_α).

Proof. By equation (7.23),
⃗e_α − tan β ⃗e₁ = √(1 + tan² β) ⃗e′_α,
and by equation (7.21),
sin α ⃗e_{ℓ+2} − tan β ⃗e₁ = −cos α ⃗e₂ + √(1 + tan² β) ⃗e′_α.
This lemma then follows from equation (7.22). □
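As a consistency check of (7.26) (a verification we add for the reader, not part of the original proof), one can compare the squared norms of the two sides. Using |⃗e′_α| = 1, ⃗e₂ · ⃗e′_α = cos β cos α (from ⃗e′_α = −sin β ⃗e₁ + cos β ⃗e_α and (7.21)) and 1 + tan²β = 1/cos²β:
\[
\Big|\frac{-\cos\alpha\,\vec e_2+\sqrt{1+\tan^2\beta}\,\vec e_\alpha'}{\sin\alpha}\Big|^2
=\frac{\cos^2\alpha+(1+\tan^2\beta)-2\cos^2\alpha}{\sin^2\alpha}
=1+\frac{\tan^2\beta}{\sin^2\alpha}=1+\tan^2\theta_\ell,
\]
which equals |⃗e_{ℓ+2} − tan θ_ℓ ⃗e₁|², as it must; the last step uses (7.22).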
Lemma 7.29. One has that
(7.27) |Γ · ⃗e′_α| ≤ CH(|x| + 1).

Proof. It follows from ⃗e′₁ = cos β ⃗e₁ + sin β ⃗e_α that
|Γ · ⃗e′₁| ≤ cos β |Γ · ⃗e₁| + sin β |Γ · ⃗e_α|.
Combined with ⃗e′_α = −sin β ⃗e₁ + cos β ⃗e_α,
|Γ · ⃗e′₁| ≤ cos β |Γ · ⃗e₁| + tan β (|Γ · ⃗e′_α| + sin β |Γ · ⃗e₁|) ≤ (cos β + sin β tan β)|Γ · ⃗e₁| + tan β |Γ · ⃗e′_α|.
It follows from equation (7.24) that
(1 − 3H tan β)|Γ · ⃗e′_α| ≤ 3H((cos β + sin β tan β)|Γ · ⃗e₁| + 1).
As a result, for H small enough, because β ≤ β₀ < π/2,
|Γ · ⃗e′_α| ≤ CH(|Γ · ⃗e₁| + 1). □

Now we are ready to estimate the z_ℓ-direction. One has
|z_ℓ − (tan θ_ℓ)x| = |Γ · ⃗e_{ℓ+2} − (tan θ_ℓ) Γ · ⃗e₁|.
By equation (7.26),
|z_ℓ − (tan θ_ℓ)x| = (1/sin α)|Γ · (−cos α ⃗e₂ + √(1 + tan² β) ⃗e′_α)|.
As a result, by Lemma 7.25 and equation (7.27),
(7.28) |z_ℓ − (tan θ_ℓ)x| ≤ (cos α/sin α)|Γ · ⃗e₂| + (√(1 + tan² β)/sin α)|Γ · ⃗e′_α| ≤ CH(|x| + 1)
for |x| ≤ λρ(H) for some constant λ = λ(β) ∈ (0, 1); for example, λ(β) = (1/8) cos β.

For the gradient estimates,
|z_{ℓx} − tan θ_ℓ| = |Γ_σ · ⃗e_{ℓ+2}/(Γ_σ · ⃗e₁) − tan θ_ℓ| = |Γ_σ · (⃗e_{ℓ+2} − tan θ_ℓ ⃗e₁)/(Γ_σ · ⃗e₁)|.
By equation (7.26),
|z_{ℓx} − tan θ_ℓ| = (1/sin α)|Γ_σ · (−cos α ⃗e₂ + √(1 + tan² β) ⃗e′_α)/(Γ_σ · ⃗e₁)|.
As a result,
|z_{ℓx} − tan θ_ℓ| ≤ (cos α/sin α)|Γ_σ · ⃗e₂/(Γ_σ · ⃗e₁)| + (√(1 + tan² β)/sin α)|Γ_σ · ⃗e′_α/(Γ_σ · ⃗e′₁)| · |Γ_σ · ⃗e′₁/(Γ_σ · ⃗e₁)|.
By Lemma 3.3 and Lemma 7.24,
|Γ_σ · ⃗e′₁/(Γ_σ · ⃗e₁)| ≤ 1/|Γ_σ · ⃗e₁| ≤ C < +∞.
By Lemma 7.24 and equation (7.25),
(7.29) |z_{ℓx} − tan θ_ℓ| ≤ [cos α/sin α + (√(1 + tan² β)/sin α)|Γ_σ · ⃗e′₁/(Γ_σ · ⃗e₁)|] (3H) ≤ CH.

In summary, Proposition 7.5 follows from combining Lemma 7.25, equation (7.28) and equation (7.29). The constants depend on α, which is independent of time τ; see Remark 7.4.

8. Uniqueness of tangent flows

In this section, we prove uniqueness of tangent flows for CSF with a one-to-one convex projection developing a Type II singularity, by using the method of Allard–Almgren [AA81]. Our proof is a modification of [CSSZ25, §8]. The proof mainly relies on iterations of two procedures: gluing (Proposition 8.6) and improvement of flatness (Proposition 8.7). We will postpone the proofs of these two iteration procedures and prove uniqueness of tangent flows first. To formulate these two propositions, we need to introduce the following definitions.

Definition 8.1. We denote by P^τ_{r,τ₀} the spacetime region in R × [−(1/2) log T, +∞):
(8.1) P^τ_{r,τ₀} := (−r, r) × [τ, τ + τ₀].

Definition 8.2. We say a rescaled CSF Γ is H-linear at ⃗θ at time τ′, for some vector ⃗θ = ⃗θ(τ′) = (θ₁(τ′), · · · , θ_{n−2}(τ′)) ∈ [0, π/2)^{n−2} ⊂ R^{n−2}, if there exist functions
(8.2) y^i, z^i₁, · · · , z^i_{n−2} : P^{τ′}_{4,2} → R for i = 1, 2
such that for each τ ∈ [τ′, τ′ + 2], Γ(·, τ) ∩ {(x, y, z₁, · · · , z_{n−2}) ∈ Rⁿ : |x| < 4} consists of the graphs of the functions y^i, z^i₁, · · · , z^i_{n−2} with
(8.3) ∥y^i(x, τ)∥_{C²(P^{τ′}_{4,2})} + Σ_{ℓ=1}^{n−2} ∥z^i_ℓ(x, τ) − (tan θ_ℓ)x∥_{C²(P^{τ′}_{4,2})} ≤ H
for the indices i = 1, 2, which label the upper and lower branches.

Definition 8.3. We say a time τ^linear_H is an H-linear time if for every τ′ ≥ τ^linear_H, there is a horizontal rotation S^linear_τ′ and a vector ⃗θ = ⃗θ(τ′) ∈ [0, π/2)^{n−2} such that S^linear_τ′ Γ is H-linear at ⃗θ at time τ′.

Let H₀, τ₀ be as in Proposition 7.5.

Definition 8.4. We say the rescaled CSF Γ is (H, τ′, T)-flat at ⃗θ = ⃗θ(τ′), for some vector ⃗θ(τ′) = (θ₁(τ′), · · · , θ_{n−2}(τ′)) ∈ [0, π/2)^{n−2}, if H ≤ H₀, τ′ ≥ τ₀ and for any τ ∈ [τ′, τ′ + T], Γ(·, τ) ∩ {(x, y, z₁, · · · , z_{n−2}) : |x| ≤ 2} consists of the graphs of functions y^i, z^i₁, · · · , z^i_{n−2} with
∥y^i(x, τ)∥_{C¹[−2,2]} + Σ_{ℓ=1}^{n−2} ∥z^i_ℓ(x, τ) − (tan θ_ℓ)x∥_{C¹[−2,2]} ≤ H
for i = 1, 2.

Notation 8.5. We sometimes omit the index i when an argument applies to both the upper and lower branches.

We will establish the following iteration procedures.

Proposition 8.6 (Gluing). There is a constant C₀ > 1 such that the following holds.
For any L ∈ N_{≥2} and H ≤ 1/(1000L), if τ^linear_H is an H-linear time, then for every τ′ ≥ τ^linear_H, the rotated rescaled CSF S^linear_τ′ Γ is (C₀LH, τ′, 4L)-flat at ⃗θ = ⃗θ(τ′), where the horizontal rotation S^linear_τ′ and the vector ⃗θ(τ′) are as in Definition 8.3.

Proposition 8.7 (Improvement of flatness). Let C₀ be as in Proposition 8.6. There exists a constant L ∈ N_{≥2} such that for any T ≥ 2L + 4, there is an H₁ ∈ (0, C₀/1000] such that the following holds. If for some horizontal rotation S_τ′ and some vector ⃗θ = ⃗θ(τ′), the rescaled CSF S_τ′Γ is (H, τ′, T)-flat at ⃗θ for some H ≤ H₁, then there is a horizontal rotation S̄_{τ′+L+2} and a vector ⃗θ̄ = ⃗θ̄(τ′ + L + 2) such that S̄_{τ′+L+2}Γ is H/(2C₀L)-linear at ⃗θ̄ at time τ′ + L + 2. In addition, there is a uniform constant C₁ such that
|S̄_{τ′+L+2} − S_τ′| ≤ C₁H and |⃗θ̄(τ′ + L + 2) − ⃗θ(τ′)| ≤ C₁H.

We now turn to proving uniqueness of tangent flows.

Proof of Theorem 1.16. Throughout this proof, we fix L, H₁ chosen in Proposition 8.7. By our improved blow-up result (Theorem 1.13), there exists an H₁/(C₀L)-linear time, which we denote by τ₁. By gluing (Proposition 8.6), for every τ′ ≥ τ₁, S^linear_τ′ Γ is (H₁, τ′, 4L)-flat at ⃗θ(τ′).

For every τ′ ≥ τ₁, by improving the flatness (Proposition 8.7), there is a horizontal rotation S¹_{τ′+L+2} and a vector ⃗θ¹ = ⃗θ¹(τ′ + L + 2) such that S¹_{τ′+L+2}Γ is H₁/(2C₀L)-linear at ⃗θ¹ at time τ′ + L + 2 with
|S^linear_τ′ − S¹_{τ′+L+2}| ≤ C₁H₁ and |⃗θ − ⃗θ¹(τ′ + L + 2)| ≤ C₁H₁.
We define τ₂ = τ₁ + L + 2. Thus τ₂ is an H₁/(2C₀L)-linear time.

We define τ_k = τ₁ + (k − 1)(L + 2). By iteration, for every τ′ ≥ τ_k, there is a horizontal rotation S^k_{τ′+L+2} and a vector ⃗θ^k = ⃗θ^k(τ′ + L + 2) such that S^k_{τ′+L+2}Γ is H₁/(2^k C₀L)-linear at ⃗θ^k at time τ′ + L + 2 with
|S^k_{τ′+L+2} − S^{k−1}_τ′| ≤ C₁H₁/2^{k−1} and |⃗θ^k(τ′ + L + 2) − ⃗θ^{k−1}(τ′)| ≤ C₁H₁/2^{k−1}.
We denote S^linear_τ′ by S⁰_τ′ and ⃗θ by ⃗θ⁰ to make our notation consistent. In summary, τ_{k+1} is an H₁/(2^k C₀L)-linear time. By gluing (Proposition 8.6), for every τ′ ≥ τ_{k+1}, S^k_τ′Γ is (H₁/2^k, τ′, 4L)-flat.

For any ε > 0, there exists k_ε ∈ N such that Σ_{k≥k_ε} 1/2^k < ε. As a result, the potentially rotating limit line actually has a limit. The directions of the limit lines, and thus the tangent flows, are unique. □

Outline of this section. §8.1 and §8.2 introduce the basics. §8.3 proves the gluing procedure (Proposition 8.6). The rest of this section is devoted to proving the procedure of improvement of flatness (Proposition 8.7).

8.1. Setup. We use x̃ instead of x as our variable to introduce the following notions because we will later perform a change of variables according to equation (8.26).

8.1.1. The shifted Ornstein–Uhlenbeck operator. We introduce the linear operator
(8.4) L = ∂²_x̃ − x̃ ∂_x̃ + 1,
which differs from the Ornstein–Uhlenbeck operator L − 1 by 1. We use the Gaussian-weighted inner product
(8.5) ⟨f, g⟩_H := (1/√(2π)) ∫_R f g e^{−x̃²/2} dx̃,
with associated norm
(8.6) ∥f∥_H := √(⟨f, f⟩_H).
We sometimes omit the subscript H when there is no confusion. The eigenfunctions of the linear operator −L are Hermite polynomials; we list the first three of them:
φ₁ := 1, −Lφ₁ = −φ₁;
φ₂ := x̃, −Lφ₂ = 0;
φ₃ := (1/√2)(x̃² − 1), −Lφ₃ = 1 · φ₃,
where ∥φ_i∥_H = 1, i = 1, 2, 3. We define the projection P₋₁ onto the space spanned by φ₁, the projection P₀ onto the space spanned by φ₂ and the projection P_{≥1} onto the space spanned by the φ_i, i ≥ 3, respectively:
(8.7) P₋₁f := ⟨f, φ₁⟩_H φ₁, P₀f := ⟨f, φ₂⟩_H φ₂, P_{≥1}f := f − P₋₁f − P₀f.
As a result, because the least positive eigenvalue of the operator −L is 1, for any f ∈ P_{≥1}H,
⟨−Lf, f⟩_H ≥ ∥f∥²_H.
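For the reader's convenience, the eigenvalue assertions above can be verified directly (a routine check, not in the original): for φ₃ = (x̃² − 1)/√2,
\[
L\varphi_3=\varphi_3''-\tilde x\,\varphi_3'+\varphi_3
=\sqrt2-\sqrt2\,\tilde x^{2}+\frac{\tilde x^{2}-1}{\sqrt2}
=-\frac{\tilde x^{2}-1}{\sqrt2}=-\varphi_3,
\]
so that −Lφ₃ = φ₃; and with X ~ N(0,1), the normalization follows from the Gaussian moments E[X²] = 1, E[X⁴] = 3:
\[
\|\varphi_3\|_H^2=\tfrac12\,\mathbb E\big[(X^2-1)^2\big]=\tfrac12\,(3-2+1)=1.
\]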
8.1.2. Cut-off. Throughout the rest of this paper, we fix a smooth cut-off function 0 ≤ η ∈ C^∞(R) satisfying |η_x̃|, |η_x̃x̃| ≤ 2 in R and
(8.8) η(x̃) = 1 for |x̃| ≤ 1, η(x̃) ∈ (0, 1) for 1 < |x̃| < 2, η(x̃) = 0 for |x̃| ≥ 2.

Definition 8.8. For a function u, given ρ > 0, we define the cut-off profile û at scale ρ to be
(8.9) û(x̃, τ) := u(x̃, τ) η(x̃/ρ).

Definition 8.9. We define the error term E to be
(8.10) E := û_τ − Lû.

8.2. Preparatory lemmas. We denote by
(8.11) S(β) = [cos β, −sin β; sin β, cos β]
the rotation by angle β. One has that
(8.12) S(α)S(β)⁻¹ = S(α − β).
For a matrix S = (s_{ij}), we use the norm |S| = √(Σ_{i,j} s²_{ij}).

Lemma 8.10. By direct computations,
(8.13) |S(α) − S(β)|² = 8 sin²((α − β)/2).
As a result, for |α − β| ≤ π,
(8.14) (2√2/π)|α − β| ≤ |S(α) − S(β)| ≤ √2 |α − β|.

Proof. By direct computations,
|S(α) − S(β)|² = 2|cos α − cos β|² + 2|sin α − sin β|² = 4 − 4 cos α cos β − 4 sin α sin β = 4 − 4 cos(α − β) = 8 sin²((α − β)/2).
Inequalities (8.14) follow from the fact that for |x| ≤ π/2, (2/π)|x| ≤ |sin x| ≤ |x|. □

Lemma 8.11. For any real numbers a < b, there is a constant C, depending on a and b, such that
(8.15) ∥f∥_{L²[a,b]} ≤ C∥f∥_H.

Proof. One has that
(8.16) ∥f∥²_{L²[a,b]} = ∫_a^b f²(x̃) dx̃ ≤ [1/min{e^{−a²/2}, e^{−b²/2}}] ∫_a^b f²(x̃) e^{−x̃²/2} dx̃. □

Lemma 8.12. For ρ large enough,
(8.17) ∥η(x̃/ρ)x̃ − x̃∥_H ≤ C/ρ¹⁰⁰.

Proof. By the definition of the H norm,
∥η(x̃/ρ)x̃ − x̃∥²_H = (1/√(2π)) ∫_R (η(x̃/ρ)x̃ − x̃)² e^{−x̃²/2} dx̃ = (1/√(2π)) ∫_R (η(x̃/ρ) − 1)² x̃² e^{−x̃²/2} dx̃.
By the definition of the cut-off function η (equation (8.8)),
∥η(x̃/ρ)x̃ − x̃∥²_H = (1/√(2π)) ∫_{|x̃|≥ρ} (η(x̃/ρ) − 1)² x̃² e^{−x̃²/2} dx̃ ≤ (1/√(2π)) ∫_{|x̃|≥ρ} x̃² e^{−x̃²/2} dx̃ ≤ C/ρ²⁰⁰. □

8.3. Gluing of the domains (Proof of Proposition 8.6). To lighten notation, we abbreviate S^linear_τ′ by S_τ′ in this proof. By Definition 8.3, for every τ′ ≥ τ^linear_H, there is a horizontal rotation S_τ′ and a vector ⃗θ = ⃗θ(τ′) = (θ₁(τ′), · · · , θ_{n−2}(τ′)) ∈ [0, π/2)^{n−2} such that the rotated CSF S_τ′Γ is H-linear at ⃗θ at time τ′. Furthermore, based on the improved blow-up results (Theorem 1.13), we may assume |S_τ′ − S_{τ′+1}| ≤ 1/100 < √2 for all τ′ under consideration, in view of the fact that with an extra horizontal rotation by angle π the rotated CSF is also H-linear.

By Definition 8.2, for each j = 0, 1, · · · , 4L − 2 and each τ ∈ [τ′ + j, τ′ + j + 2], S_{τ′+j}Γ(·, τ) ∩ {(x, y, z₁, · · · , z_{n−2}) ∈ Rⁿ : |x| < 4} consists of the graphs of the functions y^{i,j}, z^{i,j}₁, · · · , z^{i,j}_{n−2} with the estimates
(8.18) ∥y^{i,j}(x, τ)∥_{C²(P^{τ′+j}_{4,2})} + Σ_{ℓ=1}^{n−2} ∥z^{i,j}_ℓ(x, τ) − (tan θ_ℓ(τ′ + j))x∥_{C²(P^{τ′+j}_{4,2})} ≤ H
for i = 1, 2. We denote by Γ^{i,j} the graph of the vector-valued function (y^{i,j}, z^{i,j}₁, · · · , z^{i,j}_{n−2}). In the overlapping interval [τ′ + j, τ′ + j + 1],
(8.19) S⁻¹_{τ′+j}Γ^{i,j} = S⁻¹_{τ′+j−1}Γ^{i,j−1}.

Claim 8.13. For each j = 1, · · · , 4L − 2, one has |S_{τ′+j} − S_{τ′+j−1}| ≤ 4√2 H.

Proof of the Claim. We keep track of the minimum point of |Γ̄|² on the upper branch in the ball B₁(0). We use (σ_min(τ), τ) to label the minimum point at time τ. The slopes at the minimum point with respect to the two rotations S_{τ′+j} and S_{τ′+j−1} are y^{i,j}_x(σ_min(τ), τ) and y^{i,j−1}_x(σ_min(τ), τ) respectively. Say S_{τ′+j} is a horizontal rotation by angle α_j. By inequality (8.14),
|S_{τ′+j} − S_{τ′+j−1}| ≤ √2 |α_j − α_{j−1}|.
Equation (8.19) and equation (8.12) imply that
|α_j − α_{j−1}| = |arctan y^{i,j}_x(σ_min(τ), τ) − arctan y^{i,j−1}_x(σ_min(τ), τ)|.
Because |α| ≤ |tan α| for |α| < π/2,
|α_j − α_{j−1}| ≤ |tan(arctan y^{i,j}_x(σ_min(τ), τ) − arctan y^{i,j−1}_x(σ_min(τ), τ))|.
Combined with Lemma 7.23,
(8.20) |S_{τ′+j} − S_{τ′+j−1}| ≤ 2√2 (|y^{i,j}_x(σ_min(τ), τ)| + |y^{i,j−1}_x(σ_min(τ), τ)|).
Because the gradients of the functions y^{i,j}, y^{i,j−1} are bounded by H by equation (8.18), |S_{τ′+j} − S_{τ′+j−1}| ≤ 4√2 H. □

Claim 8.13 implies that for all j = 0, 1, · · · , 4L − 2,
(8.21) |S_τ′ − S_{τ′+j}| ≤ 16√2 HL ≤ 16√2/1000 ≤ 3/100,
where we used the assumption H ≤ 1/(1000L). Thus, for any τ ∈ [τ′, τ′ + 4L], S_τ′Γ(·, τ) ∩ {(x, y, z₁, · · · , z_{n−2}) : |x| ≤ 2} consists of graphs of functions, which we denote by y¹, y², z¹_ℓ, z²_ℓ (1 ≤ ℓ ≤ n − 2).

Recall that S_{τ′+j} is a horizontal rotation by angle α_j. For |x| ≤ 2 and τ ∈ [τ′, τ′ + 4L], by Lemma 7.23, |tan α| ≤ 2|α| for |α| ≤ π/4 and equation (8.18),
(8.22) |y^i_x(x, τ)| ≤ 2 tan|α_j − α₀| + 2∥y^{i,j}∥_{C¹} ≤ 4|α_j − α₀| + 2H.
By inequality (8.14) and inequality (8.21),
|y^i_x(x, τ)| ≤ 32πHL + 2H ≤ 200HL.
Combined with |y^i(σ_min(τ), τ)| ≤ H for all τ ∈ [τ′, τ′ + 4L], the estimate
(8.23) |y^i(x, τ)| ≤ H + 4(200HL) ≤ 1000HL
holds for |x| ≤ 2.

The C¹ estimates for the functions z^i_ℓ (1 ≤ ℓ ≤ n − 2) can be obtained by repeating the argument in §7.5. Loosely speaking, the estimates we have derived for y also apply to the function y_α = cos α · y + sin α · (z_ℓ − tan θ_ℓ x)/√(1 + tan² θ_ℓ) for some fixed α > 0 by Lemma 7.27; then we have C¹ estimates for (z_ℓ − tan θ_ℓ x)/√(1 + tan² θ_ℓ) = (y_α − cos α · y)/sin α, which is a linear combination of the functions y, y_α. The constant C₀ in Proposition 8.6 depends on the chosen α, which is independent of time τ.

8.4. C² estimates at linear scales. Let the constants λ₀, H₀, τ₀ be as in Proposition 7.5. We define δ₁ := λ₀δ₀/8 and
(8.24) ρ_δ₁(H) := δ₁/(20H).

Notation 8.14. In this section, ρ = ρ(H) always refers to ρ_δ₁(H), which differs from ρ_δ₀ by the time-independent coefficient λ₀/8. Compare with Notation 7.2.

Recall Definition 8.4. Let us restate what has been established at scale 2ρ_δ₁ in Proposition 7.5 and Proposition 7.7.

Lemma 8.15. If the rescaled CSF Γ is (H, τ′, T)-flat at ⃗θ = ⃗θ(τ′) for some vector ⃗θ(τ′) = (θ₁(τ′), · · · , θ_{n−2}(τ′)), then for τ ∈ [τ′, τ′ + T],
Γ(·, τ) ∩ {(x, y, z₁, · · · , z_{n−2}) : |x| ≤ 2ρ_δ₁(H)}
is a union of the graphs of functions y^i(·, τ), z^i_ℓ(·, τ), i = 1, 2, 1 ≤ ℓ ≤ n − 2. In addition, the estimates
|y^i| ≤ CH(|x| + 1), |y^i_x| ≤ CH and |z^i_ℓ − (tan θ_ℓ)x| ≤ CH(|x| + 1), |z^i_{ℓx} − tan θ_ℓ| ≤ CH
hold in P^{τ′}_{2ρ_δ₁(H),T}. Moreover, the estimates
|y^i_xx| ≤ CH, |z^i_{ℓxx}| ≤ CH
hold in P^{τ′+1/2}_{2ρ_δ₁(H),T−1/2}.

8.5. Change of variables. In this subsection, we always assume the rescaled CSF Γ is (H, τ′, T)-flat at a vector ⃗θ = ⃗θ(τ′) = (θ₁(τ′), · · · , θ_{n−2}(τ′)). We define an angle θ = θ(τ′) ∈ [0, π/2) by
(8.25) 1/cos² θ = 1 + Σ_{ℓ=1}^{n−2} tan² θ_ℓ.
By Remark 7.6, there is an angle θ₀ ∈ [0, π/2), independent of time τ′, such that θ(τ′) ∈ [0, θ₀]. We choose new coordinates for τ ∈ [τ′, τ′ + T]:
(8.26) x̃ = x/cos θ, ỹ^i = y^i, z̃^i_ℓ = z^i_ℓ − tan θ_ℓ · x.
The reason for this change of variables, instead of an orthonormal transformation, is that this choice rescales the x-direction while leaving the y-direction unchanged. Thus the rescaled CSF still has a one-to-one convex projection onto the x̃ỹ-plane. This will be important in the proof of Proposition 8.24, particularly equation (8.50).

Notation 8.16. We sometimes omit the superscript i in the functions ỹ^i, z̃^i_ℓ for simplicity when there is no confusion.
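The normalization in (8.26) can be checked explicitly (a verification we include for convenience): the shear sends the reference line {y = 0, z_ℓ = (tan θ_ℓ)x} to the x̃-axis, and x̃ is the arc-length parameter along that line, since the point of the line with coordinate x lies at distance
\[
x\,\sqrt{1+\sum_{\ell=1}^{n-2}\tan^{2}\theta_{\ell}}\;=\;\frac{x}{\cos\theta}\;=\;\tilde x
\]
from the origin, by (8.25). This is consistent with the Gaussian weight e^{−x̃²/2} in (8.5) being taken in the variable x̃.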
Throughout the rest of this subsection, we denote by u any one of the functions ỹ^i, z̃^i_ℓ, where i = 1, 2 and 1 ≤ ℓ ≤ n − 2.

Lemma 8.17. The function u satisfies the following evolution equation:
(8.27) u_τ = u_x̃x̃ / (1 + ỹ²_x̃ + Σ_{ℓ=1}^{n−2} (z̃²_{ℓx̃} + 2 z̃_{ℓx̃} cos θ tan θ_ℓ)) − x̃ u_x̃ + u.

Proof. By direct computations, based on ∂_x̃ = cos θ ∂_x, one has z̃_{ℓx} = z_{ℓx} − tan θ_ℓ and
(8.28) z_{ℓx} = z̃_{ℓx} + tan θ_ℓ = z̃_{ℓx̃}/cos θ + tan θ_ℓ.
Because −x(z_ℓ − tan θ_ℓ x)_x + (z_ℓ − tan θ_ℓ x) = −x z_{ℓx} + z_ℓ, by Lemma 7.8 the function u satisfies the following linear equation:
(8.29) u_τ = u_xx / (1 + y²_x + z²_{1x} + · · · + z²_{(n−2)x}) − x u_x + u.
Because ∂_x̃ = cos θ ∂_x,
u_τ = (u_x̃x̃/cos² θ) / (1 + y²_x̃/cos² θ + Σ_{ℓ=1}^{n−2} z²_{ℓx}) − x̃ u_x̃ + u.
By equation (8.28),
u_τ = (u_x̃x̃/cos² θ) / (1 + y²_x̃/cos² θ + Σ_{ℓ=1}^{n−2} (z̃_{ℓx̃}/cos θ + tan θ_ℓ)²) − x̃ u_x̃ + u
= (u_x̃x̃/cos² θ) / (1 + y²_x̃/cos² θ + Σ_{ℓ=1}^{n−2} (z̃²_{ℓx̃}/cos² θ + 2 (z̃_{ℓx̃}/cos θ) tan θ_ℓ) + Σ_{ℓ=1}^{n−2} tan² θ_ℓ) − x̃ u_x̃ + u.
By the definition of θ (equation (8.25)),
u_τ = (u_x̃x̃/cos² θ) / (1/cos² θ + y²_x̃/cos² θ + Σ_{ℓ=1}^{n−2} (z̃²_{ℓx̃}/cos² θ + 2 (z̃_{ℓx̃}/cos θ) tan θ_ℓ)) − x̃ u_x̃ + u
= u_x̃x̃ / (1 + y²_x̃ + Σ_{ℓ=1}^{n−2} (z̃²_{ℓx̃} + 2 z̃_{ℓx̃} cos θ tan θ_ℓ)) − x̃ u_x̃ + u. □

By Lemma 8.15, because |x̃| = |x|/cos θ ≥ |x|, we have the following estimates.

Lemma 8.18. If the rescaled CSF Γ is (H, τ′, T)-flat at ⃗θ, then the estimates
(8.30) |u| ≤ CH(|x̃| + 1), |u_x̃| ≤ CH
hold for |x̃| ≤ 2ρ and τ ∈ [τ′, τ′ + T]. In addition, the estimate
(8.31) |u_x̃x̃| ≤ CH
holds for |x̃| ≤ 2ρ and τ ∈ [τ′ + 1/2, τ′ + T].

Recall the cut-off function η = η(x̃) defined in equation (8.8).

Lemma 8.19. The cut-off û = η(x̃/ρ)u satisfies
(8.32) ∥û∥_H ≤ CH
for τ ∈ [τ′, τ′ + T].

Proof. By definition,
∥û∥²_H = (1/√(2π)) ∫_{|x̃|≤2ρ} (η(x̃/ρ)u)² e^{−x̃²/2} dx̃ ≤ ∫_{|x̃|≤2ρ} u² e^{−x̃²/2} dx̃.
By Lemma 8.18,
∥û∥²_H ≤ C ∫_{|x̃|≤2ρ} H²(|x̃| + 1)² e^{−x̃²/2} dx̃ ≤ CH². □

Lemma 8.20. The following estimates hold for û²_x̃ = [(u η(x̃/ρ))_x̃]²:
(8.33) û²_x̃ = u²_x̃ η² + 2 u u_x̃ η η′ (1/ρ) + u² (η′)² (1/ρ²) ≤ CH².
As a result,
(8.34) ∥û_x̃∥_H ≤ CH.

Proof. By direct computations,
(8.35) û_x̃ = (u η(x̃/ρ))_x̃ = u_x̃ η + u η′ (1/ρ).
By the definition of the cut-off function η (equation (8.8)) and Lemma 8.18, for |x̃| ≤ 2ρ,
(8.36) û²_x̃ = (u_x̃ η + u η′/ρ)² ≤ 2 u²_x̃ η² + 2 u² (η′)²/ρ² ≤ 2 u²_x̃ + 8 u²/ρ²
(8.37) ≤ C (H² + (H²/ρ²)(|x̃| + 1)²) ≤ CH²,
where we used the definition of ρ = ρ_δ₁ (equation (8.24)). □

8.6. Estimates along the evolution. In this subsection, we always assume the rescaled CSF Γ is (H, τ′, T)-flat at ⃗θ. Let u be any one of the functions ỹ^i, z̃^i_ℓ, where i = 1, 2 and 1 ≤ ℓ ≤ n − 2. Recall the following notation from §8.1:
(8.38) û = η(x̃/ρ)u, L = ∂²_x̃ − x̃ ∂_x̃ + 1 and E = û_τ − Lû.

Lemma 8.21. The estimate
(8.39) ∥E∥_H ≤ CH²
holds for τ ∈ [τ′ + 1/2, τ′ + T].

Proof. By the evolution equation of u (Lemma 8.17),
E = d/dτ (u η(x̃/ρ)) − (∂²_x̃ − x̃ ∂_x̃ + 1)(u η(x̃/ρ))
= −[(ỹ²_x̃ + Σ_{ℓ=1}^{n−2} (z̃²_{ℓx̃} + 2 z̃_{ℓx̃} cos θ tan θ_ℓ)) / (1 + ỹ²_x̃ + Σ_{ℓ=1}^{n−2} (z̃²_{ℓx̃} + 2 z̃_{ℓx̃} cos θ tan θ_ℓ))] u_x̃x̃ η − 2 u_x̃ η′ (1/ρ) − u η″ (1/ρ²) + x̃ u η′ (1/ρ).
We want to decompose E into two quantities. We define
E₁ := −[(ˆỹ²_x̃ + Σ_{ℓ=1}^{n−2} (ˆz̃²_{ℓx̃} + 2 ˆz̃_{ℓx̃} cos θ tan θ_ℓ)) / (1 + ỹ²_x̃ + Σ_{ℓ=1}^{n−2} (z̃²_{ℓx̃} + 2 z̃_{ℓx̃} cos θ tan θ_ℓ))] u_x̃x̃,
where ˆỹ_x̃ = (ỹη)_x̃ and ˆz̃_{ℓx̃} = (z̃_ℓ η)_x̃.
We also define
E₂ := −2 u_x̃ η′ (1/ρ) − u η″ (1/ρ²) + x̃ u η′ (1/ρ)
+ [u_x̃x̃ / (1 + ỹ²_x̃ + Σ_{ℓ=1}^{n−2} (z̃²_{ℓx̃} + 2 z̃_{ℓx̃} cos θ tan θ_ℓ))] · [(ỹ²_x̃ + Σ_{ℓ=1}^{n−2} z̃²_{ℓx̃})(η² − η) + 2(ỹ ỹ_x̃ + Σ_{ℓ=1}^{n−2} z̃_ℓ z̃_{ℓx̃}) η η′ (1/ρ) + (ỹ² + Σ_{ℓ=1}^{n−2} z̃²_ℓ)(η′)² (1/ρ²) + Σ_{ℓ=1}^{n−2} 2 cos θ tan θ_ℓ z̃_ℓ η′ (1/ρ)].
By equation (8.33), one has E = E₁ + E₂.

Based on Lemma 8.18, by taking H small enough, we have
(8.40) 1/2 ≤ 1 + ỹ²_x̃ + Σ_{ℓ=1}^{n−2} (z̃²_{ℓx̃} + 2 z̃_{ℓx̃} cos θ tan θ_ℓ) ≤ 2.
Lemma 8.18, equation (8.40) and the definition of the cut-off function η (equation (8.8)) imply that
|E₂| ≤ CH(|x̃| + 1) if ρ ≤ |x̃| ≤ 2ρ, and E₂ = 0 if |x̃| < ρ or |x̃| > 2ρ.
Thus,
∥E₂∥²_H ≤ C ∫_ρ^{2ρ} H²(|x̃| + 1)² e^{−x̃²/2} dx̃ ≤ C (H²/ρ⁸) ∫_ρ^{2ρ} |x̃|⁸(|x̃| + 1)² e^{−x̃²/2} dx̃ ≤ C H²/ρ⁸ ≤ CH¹⁰.
In addition, by equation (8.40) and the definition of E₁,
∥E₁∥²_H ≤ C ∫_{−2ρ}^{2ρ} E₁² e^{−x̃²/2} dx̃ ≤ C ∫_{−2ρ}^{2ρ} |u_x̃x̃|² (ˆỹ⁴_x̃ + Σ_{ℓ=1}^{n−2} (ˆz̃⁴_{ℓx̃} + ˆz̃²_{ℓx̃})) e^{−x̃²/2} dx̃ ≤ CH² ∫_{−2ρ}^{2ρ} (ˆỹ⁴_x̃ + Σ_{ℓ=1}^{n−2} (ˆz̃⁴_{ℓx̃} + ˆz̃²_{ℓx̃})) e^{−x̃²/2} dx̃,
where we used Lemma 8.18. Combined with Lemma 8.20, particularly inequality (8.33),
(8.41) ∥E₁∥²_H ≤ CH²(H⁴ + H⁴ + H²) ≤ CH⁴.
As a result,
(8.42) ∥E∥_H ≤ ∥E₁∥_H + ∥E₂∥_H ≤ CH² + CH⁵ ≤ CH². □

Lemma 8.22. One has the following estimates for τ ∈ [τ′ + 1/2, τ′ + T]:
(8.43) |d/dτ ⟨û, 1⟩_H − ⟨û, 1⟩_H| + |d/dτ ⟨û, x̃⟩_H| ≤ CH²,
(8.44) d/dτ ∥P_{≥1}û∥²_H ≤ −∥P_{≥1}û∥²_H + CH⁴.

Proof. Proof of inequality (8.43):
d/dτ ⟨û, 1⟩_H = ⟨û_τ, 1⟩_H = ⟨Lû, 1⟩_H + ⟨E, 1⟩_H = ⟨û, L1⟩_H + ⟨E, 1⟩_H = ⟨û, 1⟩_H + ⟨E, 1⟩_H.
Combined with ∥1∥_H = 1,
|d/dτ ⟨û, 1⟩_H − ⟨û, 1⟩_H| ≤ |⟨E, 1⟩_H| ≤ ∥E∥_H.
In addition,
d/dτ ⟨û, x̃⟩_H = ⟨û_τ, x̃⟩_H = ⟨Lû, x̃⟩_H + ⟨E, x̃⟩_H = 0 + ⟨E, x̃⟩_H.
Combined with ∥x̃∥_H = 1,
|d/dτ ⟨û, x̃⟩_H| ≤ |⟨E, x̃⟩_H| ≤ ∥E∥_H.
Inequality (8.43) then follows from Lemma 8.21.

Proof of inequality (8.44): By direct computations,
d/dτ ∥P_{≥1}û∥²_H = 2⟨P_{≥1}û, (d/dτ) P_{≥1}û⟩ = 2⟨P_{≥1}û, P_{≥1} (d/dτ) û⟩ = 2⟨P_{≥1}û, P_{≥1}Lû + P_{≥1}E⟩.
Because P_{≥1}L = LP_{≥1},
d/dτ ∥P_{≥1}û∥²_H = 2⟨P_{≥1}û, LP_{≥1}û + P_{≥1}E⟩ ≤ 2⟨P_{≥1}û, LP_{≥1}û⟩ + 2∥P_{≥1}û∥_H ∥P_{≥1}E∥_H.
Because the smallest positive eigenvalue of the operator −L is 1, and by the definition of P_{≥1} (equation (8.7)),
d/dτ ∥P_{≥1}û∥²_H ≤ −2∥P_{≥1}û∥²_H + 2∥P_{≥1}û∥_H ∥P_{≥1}E∥_H ≤ −2∥P_{≥1}û∥²_H + ∥P_{≥1}E∥²_H + ∥P_{≥1}û∥²_H ≤ −∥P_{≥1}û∥²_H + ∥P_{≥1}E∥²_H.
Inequality (8.44) then follows from Lemma 8.21. □

Lemma 8.23. For any integer L ≥ 2 and T > 2L, one has the following estimate for τ ∈ [τ′ + L, τ′ + T − L]:
(8.45) ∥û(x̃, τ) − ⟨û(x̃, τ′ + L), x̃⟩ x̃∥²_H ≤ C (e^{−L}H² + T H³ + T² H⁴).

Proof. By Lemma 8.22, particularly inequality (8.44), for τ ∈ [τ′ + 1/2, τ′ + T],
d/dτ ∥P_{≥1}û∥²_H ≤ −∥P_{≥1}û∥²_H + CH⁴.
We set α(τ) := ∥P_{≥1}û(·, τ)∥²_H; then
(e^τ α)_τ = e^τ(α_τ + α) ≤ C e^τ H⁴.
By Lemma 8.19, for τ ∈ [τ′ + 1/2, τ′ + T],
e^τ α(τ) ≤ e^{τ′+1/2} α(τ′ + 1/2) + C T e^τ H⁴ ≤ C e^{τ′} H² + C T e^τ H⁴.
Thus, for τ ∈ [τ′ + L, τ′ + T],
α(τ) ≤ C e^{τ′−τ} H² + C T H⁴ ≤ C e^{−L} H² + C T H⁴.
Next, we set β := ⟨û, 1⟩²_H. By Lemma 8.22 (particularly inequality (8.43)) and Lemma 8.19, for τ ∈ [τ′ + 1/2, τ′ + T],
β_τ ≥ 2β − CH·H².
Then one has
(e^{−2τ} β)_τ = e^{−2τ}(β_τ − 2β) ≥ −C e^{−2τ} H³.
Thus, for τ ∈ [τ′ + 1/2, τ′ + T],
e^{−2(τ′+T)} β(τ′ + T) − e^{−2τ} β(τ) ≥ −C T e^{−2τ} H³.
By Lemma 8.19, for τ ∈ [τ′ + 1/2, τ′ + T − L],
β(τ) ≤ e^{2τ−2(τ′+T)} β(τ′ + T) + C T H³ ≤ C e^{−2L} H² + C T H³.
Finally, we set λ(τ) := ⟨û(·, τ), x̃⟩_H − ⟨û(·, τ′ + L), x̃⟩_H. By Lemma 8.22, for τ ∈ [τ′ + 1/2, τ′ + T],
(8.46) |λ_τ| ≤ CH².
By the definition of λ, λ(τ′ + L) = 0. Thus, for τ ∈ [τ′ + L, τ′ + T − L],
(8.47) |λ(τ)| = |λ(τ) − λ(τ′ + L)| ≤ C(T − 2L)H² ≤ C T H².
Combining the previous estimates, for τ ∈ [τ′ + L, τ′ + T − L],
∥û(x̃, τ) − ⟨û(x̃, τ′ + L), x̃⟩ x̃∥²_H ≤ α(τ) + β(τ) + λ²(τ) ≤ C (e^{−L}H² + T H⁴ + e^{−2L}H² + T H³ + T² H⁴). □

Proposition 8.24. There exist slopes K_ỹ = K_ỹ(τ′), K_z̃ℓ = K_z̃ℓ(τ′) ∈ R with |K_ỹ|, |K_z̃ℓ| ≤ CH such that for i = 1, 2, the estimate
(8.48) ∥ˆỹ^i(x̃, τ) − K_ỹ x̃∥_H + Σ_{ℓ=1}^{n−2} ∥ˆz̃^i_ℓ(x̃, τ) − K_z̃ℓ x̃∥_H ≤ C(e^{−L/2}H + T^{1/2}H^{3/2} + TH²)
holds for τ ∈ [τ′ + L, τ′ + T − L].

Proof of Proposition 8.24. For α ∈ [0, π/2), recall the vector ⃗e_α (equation (7.21)) and the 2-plane P_α (Definition 7.26); we define y_α := Γ(·, τ) · ⃗e_α, where the dependence on ℓ will remain implicit. Thus, by the definition of ỹ, z̃_ℓ (equation (8.26)),
(8.49) y_α = (cos α)y + (sin α)z_ℓ = (cos α)ỹ + sin α (z̃_ℓ + cos θ tan θ_ℓ x̃).
Let α₀ ∈ (0, π/2), τ_α₀ be as in Lemma 7.27. We denote by y_{α,1}, y_{α,2} the values of y_α on the upper and lower branches. For α ∈ [0, α₀], we have
(8.50) y_{α,1}(x̃, τ) > y_{α,2}(x̃, τ)
for τ ≥ τ_α₀. Let β^α_i denote ⟨ŷ_{α,i}(·, τ′ + L), x̃⟩ := ⟨η(x̃/ρ) y_{α,i}(·, τ′ + L), x̃⟩ for i = 1, 2.

Claim 8.25. For i = 1, 2 and α ∈ [0, α₀], one has
∥ŷ_{α,i} − β^α_i x̃∥_H ≤ C(e^{−L/2}H + T^{1/2}H^{3/2} + TH²).

Proof. One has
∥ηx̃ − ⟨ηx̃, x̃⟩x̃∥_H = ∥(ηx̃ − x̃) + ⟨x̃, x̃⟩x̃ − ⟨ηx̃, x̃⟩x̃∥_H ≤ ∥ηx̃ − x̃∥_H + ∥⟨x̃ − ηx̃, x̃⟩x̃∥_H = ∥ηx̃ − x̃∥_H + |⟨x̃ − ηx̃, x̃⟩| · 1 ≤ 2∥ηx̃ − x̃∥_H.
Because y_α is a linear combination of ỹ, z̃_ℓ, x̃ (equation (8.49)),
∥ŷ_{α,i} − β^α_i x̃∥_H ≤ ∥ˆỹ^i − ⟨ˆỹ^i(x̃, τ′ + L), x̃⟩x̃∥_H + C∥ˆz̃^i_ℓ − ⟨ˆz̃^i_ℓ(x̃, τ′ + L), x̃⟩x̃∥_H + C∥η(x̃/ρ)x̃ − x̃∥_H.
By Lemma 8.23 and Lemma 8.12,
∥ŷ_{α,i} − β^α_i x̃∥_H ≤ C √(e^{−L}H² + T H³ + T² H⁴ + 1/ρ²⁰⁰).
By the definition of ρ (equation (8.24)), 1/ρ²⁰⁰ ≤ CH²⁰⁰ ≤ T H³ because T > 2. □

Claim 8.26. For α ∈ [0, α₀], one has
(8.51) |β^α₁ − β^α₂| ≤ C(e^{−L/2}H + T^{1/2}H^{3/2} + TH²).

Proof of Claim 8.26. If β^α₁ ≥ β^α₂, then by 1 < ∥x̃∥_{L²[−2,0]} and y_{α,1} > y_{α,2}, one has
(8.52) |β^α₁ − β^α₂| ≤ ∥(β^α₁ − β^α₂)x̃∥_{L²[−2,0]} ≤ ∥y_{α,1} − y_{α,2} − (β^α₁ − β^α₂)x̃∥_{L²[−2,0]},
where we used −(β^α₁ − β^α₂)x̃ ≥ 0 for x̃ ∈ [−2, 0]. Thus, it follows from inequality (8.52) that, for ρ large enough, by Lemma 8.11,
|β^α₁ − β^α₂| ≤ ∥y_{α,1} − β^α₁ x̃∥_{L²[−2,0]} + ∥y_{α,2} − β^α₂ x̃∥_{L²[−2,0]} = ∥ŷ_{α,1} − β^α₁ x̃∥_{L²[−2,0]} + ∥ŷ_{α,2} − β^α₂ x̃∥_{L²[−2,0]} ≤ C∥ŷ_{α,1} − β^α₁ x̃∥_H + C∥ŷ_{α,2} − β^α₂ x̃∥_H.
Combined with Claim 8.25, one has
(8.53) |β^α₁ − β^α₂| ≤ C(e^{−L/2}H + T^{1/2}H^{3/2} + TH²).
If β^α₁ ≤ β^α₂, then the same argument works with L²[0, 2] in place of L²[−2, 0]. □

As a result, for i = 1, 2, by Claim 8.25 and Claim 8.26,
∥ŷ_{α,i} − β^α₁ x̃∥_H ≤ ∥ŷ_{α,i} − β^α_i x̃∥_H + ∥(β^α_i − β^α₁)x̃∥_H ≤ C(e^{−L/2}H + T^{1/2}H^{3/2} + TH²).
Combined with the definition of y_α (equation (8.49)),
(8.54) ∥ˆỹ^i − β⁰₁ x̃∥_H = ∥ŷ^i − β⁰₁ x̃∥_H = ∥ŷ_{0,i} − β⁰₁ x̃∥_H ≤ C(e^{−L/2}H + T^{1/2}H^{3/2} + TH²)
and
∥(cos α)ˆỹ^i + sin α(ˆz̃^i_ℓ + cos θ tan θ_ℓ ˆx̃) − β^α₁ x̃∥_H = ∥ŷ_{α,i} − β^α₁ x̃∥_H.
As a result, combined with Lemma 8.12,
(8.55) ∥ˆz̃^i_ℓ − (1/sin α₀)(β^{α₀}₁ − β⁰₁ cos α₀ − sin α₀ cos θ tan θ_ℓ) x̃∥_H ≤ C(e^{−L/2}H + T^{1/2}H^{3/2} + TH²).
We pick K_ỹ = β⁰₁ and K_z̃ℓ = (1/sin α₀)(β^{α₀}₁ − β⁰₁ cos α₀ − sin α₀ cos θ tan θ_ℓ).

Claim 8.27. One has |K_ỹ|, |K_z̃ℓ| ≤ CH.

Proof. By the definition of β^α₁ and Lemma 8.19,
|β⁰₁| = |⟨ˆỹ¹(·, τ′ + L), x̃⟩| ≤ ∥ˆỹ¹∥_H ≤ CH.
By Lemma 8.19 and Lemma 8.12,
|β^{α₀}₁ − sin α₀ cos θ tan θ_ℓ| = |⟨ŷ_{α₀,1}(·, τ′ + L), x̃⟩ − sin α₀ cos θ tan θ_ℓ ⟨x̃, x̃⟩| ≤ |⟨(cos α₀)ˆỹ¹ + sin α₀ ˆz̃¹_ℓ, x̃⟩| + |sin α₀ cos θ tan θ_ℓ| |⟨ηx̃ − x̃, x̃⟩| ≤ |⟨(cos α₀)ˆỹ¹ + sin α₀ ˆz̃¹_ℓ, x̃⟩| + C/ρ¹⁰⁰ ≤ CH + CH¹⁰⁰. □

Claim 8.27, together with equation (8.54) and equation (8.55), proves Proposition 8.24. □
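To orient the reader (a remark we add, anticipating §8.7): for T comparable to L, say T = 2L + 4, the right-hand side of (8.48) is of the order
\[
C\big(e^{-L/2}H + L^{1/2}H^{3/2} + LH^{2}\big),
\]
so the deviation from a fixed line is improved below H/(2C₀L) by first taking L large, which controls the e^{−L/2} term, and then taking H small, which controls the terms superlinear in H.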
Next, we establish a PDE lemma which will be used to control the C² norms of the rotated rescaled CSF.

Lemma 8.28. For a constant M > 0, there exists ϵ₀ > 0 such that the following holds. Suppose the graph Γ(·, τ) of a vector-valued function (y, z₁, · · · , z_{n−2}) ∈ C^∞(P^{−4}_{8,4}) is a rescaled CSF with
(8.56) ∥y(x, τ)∥_{C²(P^{−4}_{8,4})} + Σ_{ℓ=1}^{n−2} ∥z_ℓ(x, τ) − (tan θ_ℓ)x∥_{C²(P^{−4}_{8,4})} ≤ ϵ ≤ ϵ₀
for some θ₁, · · · , θ_{n−2}. Then, given ϕ ∈ (−Mϵ, Mϵ) and θ′_ℓ ∈ (θ_ℓ − Mϵ, θ_ℓ + Mϵ), and denoting by S_{−ϕ} the horizontal rotation (Definition 7.3) by angle −ϕ (in the sense of equation (8.11)), the profile (y′, z′₁, · · · , z′_{n−2}) of the rotated flow S_{−ϕ}Γ(·, τ) is well defined in P^{−4}_{6,4} and the following holds:
∥y′(x, τ)∥_{C²(P^{−2}_{4,2})} + Σ_{ℓ=1}^{n−2} ∥z′_ℓ(x, τ) − (tan θ′_ℓ)x∥_{C²(P^{−2}_{4,2})} ≤ C sup_{τ∈[−4,0]} [∥y(x, τ) − (tan ϕ)x∥_{L²[−8,8]} + Σ_{ℓ=1}^{n−2} ∥z_ℓ(x, τ) − (tan θ′_ℓ)x∥_{L²[−8,8]}] + Cϵ²,
where the constant C depends on M.

Proof. The area between the graph of y′ and the x-axis in a ball B_r(0) equals the area between the graph of y and the line (tan ϕ)x in B_r(0). Thus, by Hölder's inequality,
(8.57) ∥y′∥_{L¹[−6,6]} ≤ ∥y − (tan ϕ)x∥_{L¹[−8,8]} ≤ 4∥y − (tan ϕ)x∥_{L²[−8,8]}.
We define x′ := (cos ϕ)x + (sin ϕ)y. Then we have the estimates:
∫_{−6}^{6} |z′_ℓ(x′, τ) − (tan θ′_ℓ)x′| dx′ ≤ ∫_{−8}^{8} |z_ℓ(x, τ) − tan θ′_ℓ ((cos ϕ)x + (sin ϕ)y)| · |cos ϕ + sin ϕ · y_x| dx
= ∫_{−8}^{8} |z_ℓ(x, τ) − tan θ′_ℓ (x + (cos ϕ − 1)x + (sin ϕ)y)| · |1 + (cos ϕ − 1) + sin ϕ · y_x| dx
≤ 2 ∫_{−8}^{8} |z_ℓ(x, τ) − (tan θ′_ℓ)x| dx + Cϵ²,
where we used the assumptions |ϕ| < Mϵ and ∥y∥_{C¹} ≤ Mϵ. Thus, by Hölder's inequality,
(8.58) ∥z′_ℓ(x, τ) − (tan θ′_ℓ)x∥_{L¹[−6,6]} ≤ 8∥z_ℓ(x, τ) − (tan θ′_ℓ)x∥_{L²[−8,8]} + Cϵ².
Based on the bounds on the second-order derivatives, we can write the evolution equations of the functions y′, z′_ℓ − (tan θ′_ℓ)x in divergence form. By the local L^∞ estimates for these equations,
∥y′(x, τ)∥_{L^∞(P^{−3}_{5,3})} + Σ_{ℓ=1}^{n−2} ∥z′_ℓ(x, τ) − (tan θ′_ℓ)x∥_{L^∞(P^{−3}_{5,3})} ≤ C sup_{τ∈[−4,0]} [∥y′(x, τ)∥_{L¹[−6,6]} + Σ_{ℓ=1}^{n−2} ∥z′_ℓ(x, τ) − (tan θ′_ℓ)x∥_{L¹[−6,6]}].
Then this lemma follows from the Schauder estimates, equation (8.57) and equation (8.58). □

8.7. Improvement of flatness (Proof of Proposition 8.7). By assembling Lemma 8.28, Proposition 8.24 and Lemma 8.11, where inequality (8.56) is ensured by Lemma 8.15, there is a horizontal rotation S̄_{τ′+L+2} and a vector ⃗θ̄ = ⃗θ̄(τ′ + L + 2) = (θ̄₁, · · · , θ̄_{n−2}) such that S̄_{τ′+L+2}Γ is H̄-linear at ⃗θ̄ at time τ′ + L + 2, where
H̄ = C(e^{−L/2}H + T^{1/2}H^{3/2} + TH² + CH²) = C(e^{−L/2} + T^{1/2}H^{1/2} + TH + H)H.
The goal is H̄ ≤ H/(2C₀L). First pick L large so that Ce^{−L/2} ≤ 1/(10C₀L). Then pick H = H(L, T) small so that CT^{1/2}H^{1/2} ≤ 1/(10C₀L), CTH ≤ 1/(10C₀L) and CH ≤ 1/(10C₀L). The estimates |S̄_{τ′+L+2} − S_τ′| ≤ CH and |θ̄_ℓ − θ_ℓ| ≤ CH hold because of Proposition 8.24, particularly |K_ỹ|, |K_z̃ℓ| ≤ CH.
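The bookkeeping behind the last step can be made explicit (an elementary check we spell out for the reader): with each of the four terms bounded by 1/(10C₀L), the choices above give
\[
\bar H \;\le\; \Big(\tfrac{1}{10C_0L}+\tfrac{1}{10C_0L}+\tfrac{1}{10C_0L}+\tfrac{1}{10C_0L}\Big)H \;=\; \frac{4}{10\,C_0L}\,H \;<\; \frac{H}{2C_0L},
\]
as required.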
References

[AA81] William K. Allard and Frederick J. Almgren. On the radial behavior of minimal surfaces and the uniqueness of their tangent cones. Annals of Mathematics, 113(2):215–265, 1981.
[AAAW13] Dylan J. Altschuler, Steven J. Altschuler, Sigurd B. Angenent, and Lani F. Wu. The zoo of solitons for curve shortening in Rⁿ. Nonlinearity, 26(5):1189, 2013.
[AB10] Ben Andrews and Charles Baker. Mean curvature flow of pinched submanifolds to spheres. Journal of Differential Geometry, 85(3):357–396, 2010.
[AB11] Ben Andrews and Paul Bryan. Curvature bound for curve shortening flow via distance comparison and a direct proof of Grayson's theorem. Journal für die reine und angewandte Mathematik, 2011(653):179–187, 2011.
[AG92] Steven J. Altschuler and Matthew A. Grayson. Shortening space curves and flow through singularities. Journal of Differential Geometry, 35(2):283–298, 1992.
[AL86] Uwe Abresch and Joel Langer. The normalized curve shortening flow and homothetic solutions. Journal of Differential Geometry, 23(2):175–196, 1986.
[All24] William K. Allard. Corrections to a paper of Allard and Almgren on the uniqueness of tangent cones. arXiv preprint arXiv:2407.06344, 2024.
[Alt91] Steven J. Altschuler. Singularities of the curve shrinking flow for space curves. Journal of Differential Geometry, 34(2):491–514, 1991.
[And99] Ben Andrews. Gauss curvature flow: the fate of the rolling stones. Inventiones mathematicae, 138(1):151–161, 1999.
[And12] Ben Andrews. Noncollapsing in mean-convex mean curvature flow. Geometry & Topology, 16(3):1413–1418, 2012.
[Ang88] Sigurd Angenent. The zero set of a solution of a parabolic equation. Journal für die reine und angewandte Mathematik, 1988.
[Ang91] Sigurd Angenent. On the formation of singularities in the curve shortening flow. Journal of Differential Geometry, 33(3):601–633, 1991.
[BC19] Simon Brendle and Kyeongsu Choi. Uniqueness of convex ancient solutions to mean curvature flow in R³. Inventiones mathematicae, 217(1):35–76, 2019.
[BC21] Simon Brendle and Kyeongsu Choi. Uniqueness of convex ancient solutions to mean curvature flow in higher dimensions. Geometry & Topology, 25(5):2195–2234, 2021.
[BCD17] Simon Brendle, Kyeongsu Choi, and Panagiota Daskalopoulos. Asymptotic behavior of flows by powers of the Gaussian curvature. Acta Mathematica, 219(1):1–16, 2017.
[BK24] Richard H. Bamler and Bruce Kleiner. On the multiplicity one conjecture for mean curvature flows of surfaces, 2024.
[CCMS24a] Otis Chodosh, Kyeongsu Choi, Christos Mantoulidis, and Felix Schulze. Mean curvature flow with generic initial data. Inventiones mathematicae, 237(1):121–220, 2024.
[CCMS24b] Otis Chodosh, Kyeongsu Choi, Christos Mantoulidis, and Felix Schulze. Revisiting generic mean curvature flow in R³. arXiv preprint arXiv:2409.01463, 2024.
[CCS23] Otis Chodosh, Kyeongsu Choi, and Felix Schulze. Mean curvature flow with generic initial data II, 2023.
[Cho85] Bennett Chow. Deforming convex hypersurfaces by the nth root of the Gaussian curvature. Journal of Differential Geometry, 22(1):117–138, 1985.
[Cho15] Otis Chodosh. Mean curvature flow (Math 258) lecture notes. Unpublished notes of a class taught by Brian White, 2015.
[CIL92] Michael G. Crandall, Hitoshi Ishii, and Pierre-Louis Lions. User's guide to viscosity solutions of second order partial differential equations. Bulletin of the American Mathematical Society, 27(1):1–67, 1992.
[CM12] Tobias H. Colding and William P. Minicozzi. Generic mean curvature flow I: generic singularities. Annals of Mathematics, pages 755–833, 2012.
[CM15] Tobias Holck Colding and William P. Minicozzi. Uniqueness of blowups and Łojasiewicz inequalities. Annals of Mathematics, pages 221–285, 2015.
[CM21] Tobias Holck Colding and William P. Minicozzi. Wandering singularities. Journal of Differential Geometry, 119(3):403–420, 2021.
[CMI19] Tobias Holck Colding and William P. Minicozzi II. Dynamics of closed singularities. In Annales de l'Institut Fourier, volume 69, pages 2973–3016, 2019.
[CMI25] Tobias Holck Colding and William P. Minicozzi II. Quantitative uniqueness for mean curvature flow. arXiv preprint arXiv:2502.03634, 2025.
[CS21] Otis Chodosh and Felix Schulze. Uniqueness of asymptotically conical tangent flows. Duke Mathematical Journal, 170(16):3601–3657, 2021.
[CSSZ25] Kyeongsu Choi, Dong-Hwi Seo, Wei-Bo Su, and Kai-Wei Zhao. Uniqueness of tangent flows at infinity for finite-entropy shortening curves. Geometric and Functional Analysis, pages 1–55, 2025.
[Ede15] Nick Edelen. Notes from Brian White's class on mean curvature flow. Unpublished notes of a class taught by Brian White, 2015.
[Gag83] Michael E. Gage. An isoperimetric inequality with applications to curve shortening. Duke Mathematical Journal, 50(4):1225–1229, 1983.
[Gag84] Michael E. Gage. Curve shortening makes convex curves circular. Inventiones mathematicae, 76(2):357–364, 1984.
[GH86] M. Gage and R. S. Hamilton. The heat equation shrinking convex plane curves. Journal of Differential Geometry, 23(1):69–96, 1986.
[GK15] Zhou Gang and Dan Knopf. Universality in mean curvature flow neckpinches. Duke Mathematical Journal, 164(12):2341–2406, 2015.
[Gra87] Matthew A. Grayson. The heat equation shrinks embedded plane curves to round points. Journal of Differential Geometry, 26(2):285–314, 1987.
[Ham89] R. S. Hamilton. CBMS conference, Hawaii, lecture notes, 1989.
[Ham95a] Richard S. Hamilton. Harnack estimate for the mean curvature flow. Journal of Differential Geometry, 41(1):215–226, 1995.
[Ham95b] Richard S. Hamilton. Isoperimetric estimates for the curve shrinking flow in the plane. Modern methods in complex analysis (Princeton, NJ, 1992), 137:201–222, 1995.
[Hät15] J. Hättenschweiler. Curve Shortening Flow in Higher Dimension. ETH Zürich, 2015.
[HS99] Gerhard Huisken and Carlo Sinestrari. Convexity estimates for mean curvature flow and singularities of mean convex surfaces. Acta Mathematica, 1999.
[Hui84] Gerhard Huisken. Flow by mean curvature of convex surfaces into spheres. Journal of Differential Geometry, 20(1):237–266, 1984.
[Hui90] Gerhard Huisken. Asymptotic behavior for singularities of the mean curvature flow. Journal of Differential Geometry, 31(1):285–299, 1990.
[Hui98] Gerhard Huisken. A distance comparison principle for evolving curves. Asian Journal of Mathematics, 2(1):127–133, 1998.
[Ilm03] Tom Ilmanen. Problems in mean curvature flow. Unpublished manuscript, ETH Zürich, 2003. Available at https://people.math.ethz.ch/~ilmanen/classes/eil03/problems03.pdf.
[LS24] Yang Li and Gábor Székelyhidi. Singularity formations in Lagrangian mean curvature flow. arXiv preprint arXiv:2410.22172, 2024.
[LSU68] Olga Aleksandrovna Ladyzhenskaia, Vsevolod Alekseevich Solonnikov, and Nina N. Ural'tseva. Linear and quasi-linear equations of parabolic type, volume 23. American Mathematical Society, 1968.
[LZ24] Tang-Kai Lee and Xinrui Zhao. Uniqueness of conical singularities for mean curvature flows. Journal of Functional Analysis, 286(1):110200, 2024.
[Man11] Carlo Mantegazza. Lecture notes on mean curvature flow, volume 290. Springer Science & Business Media, 2011.
[MB20] Jiří Minarčík and Michal Beneš. Long-term behavior of curve shortening flow in R³. SIAM Journal on Mathematical Analysis, 52(2):1221–1231, 2020.
[MM14] Annibale Magni and Carlo Mantegazza. A note on Grayson's theorem. Rendiconti del Seminario Matematico della Università di Padova, 131, 06 2014.
[MMN16] Annibale Magni, Carlo Mantegazza, and Matteo Novaga. Motion by curvature of planar networks II. Ann. Sc. Norm. Super. Pisa Cl. Sci., XV:117–144, 2016.
[Naf22] Keaton Naff. A planarity estimate for pinched solutions of mean curvature flow. Duke Mathematical Journal, 171(2):443–482, 2022.
[Nev07] André Neves. Singularities of Lagrangian mean curvature flow: zero-Maslov class case. Inventiones mathematicae, 168(3):449–484, 2007.
[Sch14] Felix Schulze. Uniqueness of compact tangent flows in mean curvature flow. Journal für die reine und angewandte Mathematik (Crelles Journal), 2014(690):163–172, 2014.
[Smo11] Knut Smoczyk. Mean curvature flow in higher codimension: introduction and survey. In Global differential geometry, pages 231–274. Springer, 2011.
[Sto94] Andrew Stone. A density function and the structure of singularities of the mean curvature flow. Calculus of Variations and Partial Differential Equations, 2:443–480, 1994.
[Sun24] Qi Sun. Curve shortening flow of space curves with convex projections. arXiv preprint arXiv:2410.08399, 2024.
[Sun25] Qi Sun. Huisken's distance comparison principle in higher codimension. arXiv preprint arXiv:2509.16823, 2025.
[SX21] Ao Sun and Jinxin Xue. Initial perturbation of the mean curvature flow for closed limit shrinker. arXiv preprint arXiv:2104.03101, 2021.
[SX25] Ao Sun and Jinxin Xue. Generic dynamics of mean curvature flows with asymptotically conical singularities. Science China Mathematics, pages 1–38, 2025.
[Tra21] Hung V. Tran. Hamilton–Jacobi equations: theory and applications, volume 213. American Mathematical Society, 2021.
[Wan02] Mu-Tao Wang. Long-time existence and convergence of graphic mean curvature flow in arbitrary codimension. Inventiones mathematicae, 148(3):525–543, 2002.
[Wan11] Mu-Tao Wang. Lectures on mean curvature flows in higher codimensions. arXiv preprint arXiv:1104.3354, 2011.
[Whi00] Brian White. The size of the singular set in mean curvature flow of mean-convex sets. Journal of the American Mathematical Society, 13(3):665–695, 2000.
[Whi03] Brian White. The nature of singularities in mean curvature flow of mean-convex sets. Journal of the American Mathematical Society, 16(1):123–138, 2003.
[Whi05] Brian White. A local regularity theorem for mean curvature flow. Annals of Mathematics, pages 1487–1519, 2005.
[Zhu20] Jonathan J. Zhu. Łojasiewicz inequalities, uniqueness and rigidity for cylindrical self-shrinkers. arXiv preprint arXiv:2011.01633, 2020.

Qi Sun, Department of Mathematics, University of Wisconsin-Madison
Email address: qsun79@wisc.edu
SINGULARITIES OF CURVE SHORTENING FLOW WITH CONVEX PROJECTIONS

QI SUN

Abstract. We show that any closed immersed curve in R^n with a one-to-one convex projection onto some 2-plane develops a Type I singularity and becomes asymptotically circular under Curve Shortening flow in R^n. As an application, we prove an analog of Huisken's conjecture for Curve Shortening flow in R^n, showing that any closed immersed curve in R^n can be perturbed to a closed immersed curve in R^(n+2) which shrinks to a round point under Curve Shortening flow.

1. Introduction

Let us consider the Curve Shortening flow (CSF) in higher codimensions:

(1.1) γ_t = γ_ss,

where γ : S^1 × [0, T) → R^n is smooth (S^1 = R/2πZ), u ↦ γ(u, t) is an immersion and ∂_s = ∂/∂s is the derivative with respect to arc-length, defined by

(1.2) ∂/∂s := (1/|γ_u|) ∂/∂u.

When we want to emphasize that we are working in higher codimensions, we shall refer to the evolution as space CSF.

1.1. Background. For planar CSF, Gage and Hamilton proved in [GH86] that if the initial curve is convex, it shrinks to a point and becomes asymptotically circular. Their work built on earlier works of Gage [Gag83, Gag84]. In [Gra87] Grayson extended their results and proved that if the initial curve is embedded, it becomes convex before developing any singularities. Since then, many other proofs of the Gage-Hamilton-Grayson theorem have been discovered, including [Ham95b, Hui98, AB11, And12]. Beyond CSF, (mean) convexity has played a central role in the evolution of hypersurfaces; see [Hui84, HS99, Whi00, Whi03, Cho85, And99, BCD17]. See also [Wan02, AB10] for Mean Curvature flow (MCF) in higher codimensions.

For space CSF, the author shows in [Sun24] that if the initial space curve has a one-to-one convex projection onto some 2-plane, its space CSF retains this property and shrinks to a point. See the previous works [Hät15, MB20] for space CSF with convex projections and [Sun24, Pages 3-4] for a comparison with these works. See also [AG92, Alt91, AAAW13, Wan11, Smo11].

Date: October 17, 2025. This work was partially supported by NSF grant DMS-2348305.

One natural and fascinating question arises: does a curve with a one-to-one convex projection become asymptotically circular under space CSF? In [Sun25], the author establishes a variant of Huisken's distance comparison principle in higher codimensions for reflection-symmetric space CSF and answers this question in a special case. In this paper, we answer the above question affirmatively in full generality.

1.2. Notation. Let P_xy : R^n = R^2 × R^(n-2) → R^2 be the orthogonal projection onto the first two coordinates, which we call x and y. For a space curve γ, let P_xy|_γ : γ → xy-plane be its restriction to γ.

Definition 1.1. We say that a smooth curve γ ⊂ R^n has a one-to-one convex projection (onto the xy-plane) if P_xy|_γ is injective and the projection curve P_xy(γ) is convex.

The class of curves with one-to-one convex projections includes planar convex curves as a special case. Recall the following terminology on singularity formation.

Definition 1.2. As t → T, we say CSF γ(·, t) develops a Type I singularity if

lim sup_{t→T} sup_{u∈S^1} k^2(u, t)(T - t) < +∞.

Otherwise, we say it develops a Type II singularity.

For any ε > 0, the perturbation^2

(1.3) γ^ε_0(u) := (ε cos u, ε sin u, γ_0(u)), γ^ε_0 : S^1 → R^2 × R^n,

has a one-to-one convex projection onto the xy-plane, hence develops a Type I singularity and becomes asymptotically circular under space CSF. For curves described in [Sun24, Lemma 1.8], it suffices to perturb the given curve in R^(n+1) rather than R^(n+2).
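The following is an illustrative sketch, not part of the paper: it builds the perturbation (1.3) for the planar figure-eight γ_0(u) = (cos u, sin 2u) ⊂ R^2, so that γ^ε_0 lives in R^2 × R^2 = R^4, and checks numerically that its xy-projection (here the round circle of radius ε) is convex and embedded. The sample curve and grid size are assumptions made for this illustration only.

    # Sketch: the perturbation (1.3) applied to a planar figure-eight.
    import numpy as np

    def gamma_eps(u, eps=0.1):
        # first two coordinates: the added circle; last two: the original curve
        return np.stack([eps*np.cos(u), eps*np.sin(u), np.cos(u), np.sin(2*u)], axis=-1)

    u = np.linspace(0, 2*np.pi, 400, endpoint=False)
    c = gamma_eps(u)
    pxy = c[:, :2]                      # projection onto the xy-plane
    # discrete convexity test: cross products of consecutive edges keep one sign
    e = np.roll(pxy, -1, axis=0) - pxy
    cross = e[:, 0]*np.roll(e, -1, axis=0)[:, 1] - e[:, 1]*np.roll(e, -1, axis=0)[:, 0]
    print(np.all(cross > 0) or np.all(cross < 0))   # True: the projection is convex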
Particularly, it applies to the planar figure-eight (cos u, 0, sin 2u) as follows.

Corollary 1.6 (Perturbing a planar figure-eight). For any ε > 0, the CSF γ(·, t) that starts with the initial curve γ^ε_0(u) = (cos u, ε sin u, sin 2u) develops a Type I singularity and becomes asymptotically circular as it approaches the singularity.

^1 As a corollary of [Ang91], any cardioid-like curve (i.e. a curve with positive curvature and one transverse self-intersection) develops a Type II singularity. Any small enough smooth perturbation in R^2 of a cardioid-like curve is still a cardioid-like curve and thus develops a Type II singularity.
^2 This perturbation, referred to as the wave approximation, has been used in [Hät15, §5.5] to prove the existence of weak solutions, replacing the ramps used by [AG92].

Corollary 1.6 confirms the numerical observations mentioned in [Sun24, the last paragraph on Page 4]. See Figure 2.

[Figure 2: Snapshots of the evolution of a perturbation of the planar figure-eight curve from different angles (xy-projection and xz-projection). Previously appeared in [Sun24].]

1.5. Strategy of our proof. The principal part of the proof is devoted to ruling out Type II singularities; the argument proceeds by contradiction. For CSF with convex projections developing Type II singularities, we first improve the known blow-up results in the literature, showing that every tangent flow is a line of multiplicity two (Theorem 1.13). Then we show that the directions of the lines, and thus the tangent flows, are both non-unique (Theorem 1.15) and unique (Theorem 1.16). This gives a contradiction, and hence only Type I singularities can occur.

Once Type I is established, asymptotic circularity can be proved quickly, sketched as follows. A Type I blow-up argument [Hui90] implies that the singularity satisfies the shrinker equation. All one-dimensional shrinkers in R^n are planar (see for example [AAAW13, Lemma 5.1]) and are thus classified as Abresch-Langer curves [AL86]. Because a circle of multiplicity one is the only Abresch-Langer curve with a one-to-one convex projection, it follows from [Sch14] that CSF with a one-to-one convex projection becomes asymptotically circular; see Proposition 3.6 for more details.

In improving the known blow-up results, we rely on White's blow-up results in [Cho15, pages 12-13] and [Ede15, Theorem 5.6, pages 9-10]. To prove non-uniqueness of tangent flows, we enhance the barrier in [Sun24], making use of viscosity subsolutions ([CIL92]). To prove uniqueness of tangent flows, we extend the Allard-Almgren method [AA81] in [CSSZ25, §8], based on estimates derived in a way different from [CSSZ25, §7].

We now introduce some terminology before discussing our proof strategy and related works in more detail. Recall that we have assumed, based on [Sun24], that CSF γ(·, t) shrinks to the origin as t → T.

Definition 1.7 (Following Huisken [Hui90]). We define the rescaled CSF to be:

(1.4) Γ(u, τ) := γ(u, t)/√(2T - 2t), τ := -(1/2) log(T - t).

In addition, we denote by σ the arc-length parameter of Γ, defined by ∂/∂σ = (1/|Γ_u|) ∂/∂u = √(2T - 2t) ∂/∂s.

Definition 1.8. For any sequence τ_j = -(1/2) log(T - t_j) → +∞, we define the j-th rescaled CSF along the sequence {τ_j} to be

(1.5) Γ_j(σ, τ) := Γ(σ, τ_j + τ).
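As an illustrative sketch, not part of the paper, Huisken's rescaling (1.4) can be tested on the exact shrinking circle γ(u, t) = √(2(T - t)) (cos u, sin u), which becomes extinct at time T: the rescaled curve Γ is the stationary unit circle, and it satisfies the shrinker equation Γ_σσ + Γ^⊥ = 0 that appears later in the blow-up analysis. The grid size is an assumption of this illustration.

    # Sketch: the rescaled shrinking circle is a stationary shrinker.
    import numpy as np

    T = 1.0
    u = np.linspace(0, 2*np.pi, 1000, endpoint=False)
    du = u[1] - u[0]
    for t in [0.0, 0.9, 0.999]:
        gamma = np.sqrt(2*(T - t)) * np.stack([np.cos(u), np.sin(u)], axis=-1)
        Gamma = gamma / np.sqrt(2*(T - t))   # the unit circle, independent of t
        # the unit circle is parameterized by arc length, so sigma = u
        Gss = (np.roll(Gamma, -1, 0) - 2*Gamma + np.roll(Gamma, 1, 0)) / du**2
        Tv = (np.roll(Gamma, -1, 0) - np.roll(Gamma, 1, 0)) / (2*du)
        Tv /= np.linalg.norm(Tv, axis=1, keepdims=True)
        Gperp = Gamma - np.sum(Gamma*Tv, axis=1, keepdims=True)*Tv
        print(t, np.abs(Gss + Gperp).max())   # ~1e-5: shrinker equation holds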
Throughout this paper, when taking subsequences, we always keep the original labels. In addition, all convergence results arising from the blow-up analysis are understood up to reparameterization in the following sense.

Definition 1.9. We say the j-th rescaled CSF Γ_j locally smoothly converges to a rescaled flow Γ_∞ if for any real numbers a < b and R > 0, there is a finite union of closed intervals I = ∪I_α, an integer J and smooth time-dependent reparameterizations φ_j such that for j ≥ J, Γ_j(φ_j(u, τ), τ), u ∈ I, reparameterizes all the arcs Γ_j(·, τ) ∩ B_R(0) for every τ ∈ [a, b] and

(1.6) Γ_j(φ_j(u, τ), τ) → Γ_∞(u, τ) as smooth functions on I × [a, b] as j → ∞.

Definition 1.10. A rescaled flow Γ_∞ is a parameterized tangent flow if there exists a sequence τ_j → +∞ such that the corresponding j-th rescaled CSF Γ_j locally smoothly converges to Γ_∞.

In the literature, the notion of tangent flow refers to the Brakke flow obtained as the limit of the parabolic rescalings; see for example [Sch14, Page 165]. Our notion of parameterized tangent flow is more restrictive than tangent flow, as it requires convergence in the smooth sense. Moreover, our parameterized tangent flow is the limit of Huisken's rescaled CSF instead of the limit of the parabolic rescalings.

In contrast to mean curvature flow of surfaces in R^3, in which case Bamler and Kleiner prove the Multiplicity One conjecture in [BK24], tangent flows of higher multiplicity do appear for space CSF. By solving an ODE, [AAAW13, §3] classifies space CSF with S^1 symmetry. As a corollary, based on explicit ODE solutions, a shrinking circle with any positive integer multiplicity can appear as a tangent flow of embedded CSF.

Definition 1.11. For a unit vector ⃗v, the rescaled flow

Γ_∞ : (R ⊔ R) × R → R^n, (λ, τ) ↦ λ⃗v,

is called a stationary line of multiplicity two.

Remark 1.12. Here the term "stationary" differs from the term "static" introduced in [Whi00, §5, page 676], because we reparameterize time as in Definition 1.7 and Definition 1.8 instead of using the parabolic rescalings. In our case, CSF with a one-to-one convex projection shrinks to one point and the tangent flows at the spacetime point (lim_{t→T} γ(·, t), T) are actually quasi-static in the sense of [Whi00, §5, page 676].

Now we discuss our proof strategy and related works in more detail.

Improvement of blow-up results. It was first established by Huisken [Hui90], via his powerful monotonicity formula, that MCF developing a Type I singularity is subsequentially asymptotic to a self-shrinker. For space CSF, Altschuler showed in [Alt91] that when a Type II singularity develops, Hamilton's Harnack inequality [Ham89, Ham95a] implies that regions of the curve where the curvature is comparable to the maximum of the curvature are asymptotic to Grim Reapers. For locally convex planar curves, see also [Ang91]. See [Naf22] for results of the same type as [Alt91] for MCF in higher codimensions.

It has been pointed out in [MM14], [Cho15, pages 12-13] and [Ede15, Theorem 5.6, pages 9-10] that for CSF, one can apply the Sobolev embedding to extract a C^1 convergent subsequence of curves (not flows) without assuming the Type I condition. For CSF with convex projections, our improvement of the blow-up results when Type II singularities occur is as follows.

Theorem 1.13. Assume the initial curve γ_0 has a one-to-one convex projection onto the xy-plane and its CSF γ(·, t) develops a Type II singularity as t → T. Then for any sequence τ_j → +∞, there exists a subsequence and a line L in R^n such that the j-th rescaled CSF Γ_j locally smoothly converges to the stationary line L of multiplicity two. Moreover, the line L is not perpendicular to the xy-plane.

We show that the convergence is smooth even though the limit is of multiplicity two.
In the multiplicity one case, smooth convergence follows from the Brakke regularity theorem, which we are unable to apply to our case.

Non-uniqueness of tangent flows.

Definition 1.14. A limit line is a line L obtained from Theorem 1.13 for some sequence τ_j → +∞.

By enhancing the barrier in [Sun24], we are able to show that the directions of the limit lines along different sequences {τ_j} in Theorem 1.13 are non-unique. Actually, the projections of all the limit lines onto the xy-plane cover all horizontal directions.

Theorem 1.15. Assume the same hypotheses as in Theorem 1.13. Then for every nonzero vector ⃗v in the xy-plane, there exists a limit line L_⃗v in R^n with P_xy L_⃗v parallel to the vector ⃗v. Thus the tangent flows are non-unique.

Uniqueness of tangent flows. The uniqueness of tangent flows is not known in general. In the multiplicity one case, uniqueness of tangent flows has been proved by Schulze [Sch14] for compact tangent flows, by Colding-Minicozzi [CM15] (see also [CMI25, GK15]) for cylindrical tangent flows, and by Chodosh-Schulze [CS21] for asymptotically conical tangent flows. See also the generalizations by Zhu [Zhu20] and Lee-Zhao [LZ24]. For Lagrangian MCF, see [Nev07], [LS24].

Our proof of uniqueness of tangent flows of CSF with convex projections is an extension of the Allard-Almgren method [AA81]^3 (see also [All24]) in [CSSZ25, §8], which proves uniqueness of tangent flows at infinity for ancient finite-entropy planar embedded CSF. See also the previous works [BC19, BC21] on MCF with similar ideas.

Theorem 1.16. Assume the same hypotheses as in Theorem 1.13. Then the direction of the limit line is independent of the sequence {τ_j}. Thus the tangent flow is unique.

To use the argument in [CSSZ25, §8], we need an analogue of [CSSZ25, Theorem 7.3 (Graphical radius lower bound)], whose proof does not apply to our setting for the following reason. In higher codimensions, we cannot keep track of sharp vertices (local maximum points of the curvature function) as in [CSSZ25]. In more detail, when the curvature k > 0, the evolution equation of the curvature is:

(1.7) k_t = k_ss + k^3 - kτ^2,

where τ here denotes the torsion. As a result, one cannot apply the Sturmian theorem to the evolution equation of the derivative k_s of the curvature function, because of the torsion term τ^2. Even so, we are still able to achieve estimates (Proposition 7.5 and Proposition 7.7) similar to [CSSZ25, Theorem 7.3], making use of the blow-up results, the lower bound on the rescaled area of the projection curves, and the geometric properties of CSF with convex projections; see §7 for more details.

1.6. Outline of the paper. In §2 we summarize the established blow-up results for CSF in the literature. In §3 we recall the geometric properties of CSF with convex projections. In §4 we improve the blow-up results (Theorem 1.13) for CSF with convex projections when Type II singularities occur, building upon the results in §2 and §3. In §5 we construct a barrier, which is a viscosity subsolution to the heat equation. In §6 we make use of the barrier in §5 and show that the tangent flows are non-unique (Theorem 1.15). In §7 we establish estimates at line scales. In §8 we make use of the estimates in §7 and show that the tangent flows are unique (Theorem 1.16).
Acknowledgments. The author is indebted to his advisors, Sigurd Angenent, for introducing him to the study of curve shortening flow in higher codimensions, and Hung Vinh Tran, for sharing his expertise on viscosity solutions; he thanks them for many inspiring discussions and their support. The author also wants to thank Jonathan J. Zhu for a helpful conversation, and to thank Ilyas Khan, Keaton Naff, Tang-Kai Lee, Mat Langford and Jacob Bernstein for their interest.

^3 In general, a line of multiplicity two does not satisfy the integrability condition in [AA81, Page 215 (1)], because of the situation where the two lines rotate at different speeds. However, in our case, a heuristic explanation is that this is made up for by the projection convexity, because the projection curve should be embedded and cannot be a pair of intersecting lines.

2. Known results on blow-up

In this section, we recall the blow-up results for general immersed CSF in R^n, and we restrict to the case that, as t → T, γ(·, t) shrinks to one point, which we may assume to be the origin. This is the only section in which we do not assume that CSF γ(·, t) has a one-to-one convex projection.

Remark 2.1. Without the assumption that γ(·, t) shrinks to a point, to the best of the author's knowledge, in the case that γ(·, t) develops a Type I singularity it is not known whether there could be a subsequence {τ_j} along which the rescaled curves Γ(·, τ_j) converge to a finite union of lines with multiplicity. The first theorem in [Alt91, Page 492] did not include this case, but the author does not think it was justified in its proof. The assumption that γ(·, t) shrinks to a point allows for a cleaner formulation of Proposition 2.3.

The results in the literature are mostly stated in the planar case, but the arguments also work in general dimension n ≥ 2. We start by explaining how the assumption that γ(·, t) shrinks to one point precludes the scenario in Remark 2.1:

Lemma 2.2. If an immersed CSF γ(·, t) develops a Type I singularity and shrinks to one point as t → T, then the rescaled CSF, introduced in Definition 1.7, is bounded.

Proof. We already assume that CSF γ(·, t) shrinks to the origin. By the Type I condition, |k(p, t)| ≤ M/√(T - t) for some constant M. Then for all p ∈ S^1 and t ∈ [0, T),

|γ(p, t)| ≤ ∫_t^T |k(p, t̃)| dt̃ ≤ M ∫_t^T (T - t̃)^(-1/2) dt̃ = 2M √(T - t). □
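The following is an illustrative closed-form check, not part of the paper: for the shrinking round circle, the Type I quantity k^2(T - t) is constant and the bound of Lemma 2.2 holds with equality. The radius R_0 is an assumption of this illustration.

    # Sketch: Lemma 2.2 for the round circle gamma(u,t) = R(t)(cos u, sin u),
    # R(t) = sqrt(R0^2 - 2t), extinction time T = R0^2/2, curvature k = 1/R(t).
    # Here k^2*(T-t) = 1/2 for all t, so M = 1/sqrt(2), and the Lemma 2.2 bound
    # |gamma| <= 2M*sqrt(T-t) = sqrt(2)*sqrt(T-t) holds with equality.
    import numpy as np

    R0 = 1.0
    T = R0**2 / 2
    for t in np.linspace(0, T, 6, endpoint=False):
        R = np.sqrt(R0**2 - 2*t)
        k = 1.0 / R
        print(f"t={t:.3f}  k^2*(T-t)={k**2*(T-t):.3f}  |gamma|/sqrt(T-t)={R/np.sqrt(T-t):.3f}")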
We now summarize the known results in the literature on Type I blow-up for CSF in R^n.

Proposition 2.3 (Type I blow-up). Assuming γ(·, t) shrinks to the origin as t → T, the following three statements are equivalent:
(a) As t → T, γ(·, t) develops a Type I singularity.
(b) For any sequence t_j → T, there exists a subsequence such that γ(·, t_j)/√(2T - 2t_j) converges in the C^∞ sense to some Abresch-Langer curve with multiplicity at least one.
(c) For any sequence τ_j → +∞, there exists a subsequence such that the j-th rescaled CSF Γ_j(·, τ) (Definition 1.8) converges on S^1 × [-(1/2) log T, +∞) in the C^∞_loc sense to a stationary solution of the rescaled CSF corresponding to some Abresch-Langer curve with multiplicity at least one.

Proof of Proposition 2.3. (a) ⇒ (b). Due to the argument of Huisken [Hui90] (see also [Man11, Proposition 3.2.10]), for any sequence t_j → T, there exists a subsequence such that the rescaled CSF locally smoothly converges to a shrinker, potentially with multiplicity. All shrinking curves in R^n are planar; see for example [AAAW13, Lemma 5.1]. By Lemma 2.2, the shrinker is bounded. In addition, the total curvature of the shrinker is bounded by [Alt91, Theorem 5.1]. Thus the shrinker is one of the Abresch-Langer curves classified in [AL86].

(b) ⇒ (c). Combined with the smooth dependence of solutions of PDE on initial conditions, one may take a convergent subsequence according to (b) at times τ_j - m, m = 1, 2, · · ·. Then (c) is proved by a diagonal argument.

(c) ⇒ (a). It follows from the classification [AL86] that there are only finitely many Abresch-Langer curves with total curvature ∫_{S^1} k ds smaller than a fixed upper bound. Thus the rescaled curvature is bounded for all time t: for any sequence t_j → T, we can take a subsequence such that the rescaled curvature is bounded by a uniform constant, which can be taken to be the maximum of the rescaled curvatures of the aforementioned finitely many Abresch-Langer curves. □

To the best of the author's knowledge, the uniqueness of tangent flows is not fully known in the literature, even in the Type I case. It is potentially possible that along two sequences, the blow-up limits are two different Abresch-Langer curves with different multiplicities. Geometrically, it has also not been ruled out that a singularity is a rotating Abresch-Langer curve in R^n. When one tangent flow is a shrinking circle of multiplicity one, the uniqueness of tangent flows follows from the work of Schulze [Sch14]. As a corollary, one has the following proposition.

Proposition 2.4. If there exists one sequence t_j → T such that γ(·, t_j)/√(2T - 2t_j) converges, up to reparameterization, in the C^1 sense to a circle of multiplicity one, then γ(·, t)/√(2T - 2t) converges in C^∞ to the circle as t → T.

Proof. By smooth dependence of solutions to parabolic PDE on initial conditions, there is one tangent flow which is a shrinking circle of multiplicity one. Then this proposition follows from [Sch14]. □

Now let us summarize the known blow-up results for CSF in R^n in the literature without assuming the Type I condition.

Proposition 2.5. For a rescaled CSF Γ, let τ_j → +∞ be a given sequence and Γ_j be the corresponding j-th rescaled CSF (Definition 1.8). Then for almost every τ ∈ R, at least one of the following two cases happens:
(a) There exists a subsequence such that the curve Γ_j(·, τ) converges, up to reparameterization, in the C^1 sense to some Abresch-Langer curve with finite multiplicity.
(b) There exists a subsequence such that the curve^4 Γ_j(·, τ) converges, up to reparameterization, in the C^1_loc sense to a finite union of lines, each with finite multiplicity.
The choice of the subsequence depends on τ.

^4 We emphasize that one only has convergence at time τ, not at later times. The smooth dependence of solutions to parabolic PDE on initial conditions fails in the non-compact setting.

The proof of Proposition 2.5 is mainly White's argument in [Cho15, pages 12-13] and [Ede15, Theorem 5.6, pages 9-10]. See also [MMN16, Proposition 2.19], [MM14] and the estimates in [Sto94, Lemma 2.9]. In the proof, to apply the Sobolev embedding to what is potentially a union of broken arcs in a ball B_R(0) for some R > 0, one can keep track of the arcs that intersect the unit ball B_1(0) based on the Sturmian theorem [Ang88] and show that the other arcs, which we cannot keep track of, are outside of the ball B_{R/2}(0). One can then apply the Sobolev embedding to each of the arcs that intersects the unit ball B_1(0). As a corollary:

Lemma 2.6. The sum of the multiplicities of the lines in Proposition 2.5 (b) is independent of the choice of the subsequence.
This is because it equals the limit of one half of the intersection number, lim_{τ→+∞} (1/2) |Γ(·, τ) ∩ ∂B_1(0)|.

As far as the author knows, for Proposition 2.5 it is in general not known whether one can improve the proposition from almost every τ to all τ, or from C^1_loc to C^∞_loc. However, for CSF with a one-to-one convex projection, we are able to make these two improvements in §4.

3. CSF with one-to-one convex projections

In this section, we always assume the initial curve γ_0 ⊂ R^n has a one-to-one convex projection onto the xy-plane.

3.1. Geometry of CSF with convex projections.

Lemma 3.1 (Theorem 1.5 of [Sun24]). For each t > 0, CSF γ(·, t) has a one-to-one uniformly convex projection onto the xy-plane. As t → T, γ(·, t) shrinks to a point.

The next lemma follows from Corollary 5.7 of [Sun24].

Lemma 3.2 (Bounded slope lemma). For an arbitrary ε > 0, there exists M > 0 such that for each t ∈ [ε, T) and for arbitrary two points p_1, p_2 on γ(·, t),

(3.1) |p_1 - p_2| ≤ M |P_xy(p_1) - P_xy(p_2)|,

where | · | stands for the standard Euclidean distance.

Recall that ∂/∂s is the arc-length derivative defined via equation (1.2).

Lemma 3.3 (Corollary 5.8 of [Sun24]). For an arbitrary ε > 0, there exists δ > 0 such that x_s^2 + y_s^2 ≥ δ > 0 for all t ∈ [ε, T).

Because in this paper we only consider the asymptotic behavior as t → T, replacing the initial curve by γ(·, ε) if needed, we may assume that the properties described in Lemma 3.2 and Lemma 3.3, which hold for t ∈ [ε, T), hold for all t ≥ 0. Recall that we have assumed CSF γ(·, t) shrinks to the origin.

3.2. Type I singularity and compact blow-up limits. Based on the geometric properties of CSF with convex projections, we can rule out immersed Abresch-Langer curves and higher multiplicity.

Lemma 3.4. If there exists a subsequence such that γ(·, t_j)/√(2T - 2t_j) converges, up to reparameterization, in the C^1 sense to some Abresch-Langer curve Γ_AL with finite multiplicity m ≥ 1 in some 2-plane P^2 ⊂ R^n, then the Abresch-Langer curve is the unit circle and the multiplicity is one. Moreover, the linear map P_xy|_{P^2} : P^2 → xy-plane is a linear isomorphism.

Proof of Lemma 3.4.

Claim 3.5. The linear map P_xy|_{P^2} : P^2 → xy-plane is injective.

Proof of the Claim. If this were not true, then there would exist a nonzero vector ⃗v ∈ P^2 such that P_xy|_{P^2}(⃗v) = 0. Then there would exist two different points p_∞^1, p_∞^2 on Γ_AL such that the vector pointing from p_∞^1 to p_∞^2 is parallel to the vector ⃗v. Pick two sequences of points p_j^1, p_j^2 on the curve γ(·, t_j)/√(2T - 2t_j) satisfying p_j^1 → p_∞^1, p_j^2 → p_∞^2. Then by the bounded slope lemma, Lemma 3.2, where equation (3.1) is scaling invariant, one has

(3.2) |p_j^1 - p_j^2| ≤ M |P_xy(p_j^1) - P_xy(p_j^2)|,

where lim_{j→+∞} |p_j^1 - p_j^2| = |p_∞^1 - p_∞^2| > 0 because p_∞^1, p_∞^2 are two different points. However,

lim_{j→+∞} |P_xy(p_j^1) - P_xy(p_j^2)| = |P_xy(p_∞^1) - P_xy(p_∞^2)| = |P_xy(p_∞^1 - p_∞^2)| = 0,

because P_xy|_{P^2}(⃗v) = 0 and the vector pointing from p_∞^1 to p_∞^2 is parallel to the vector ⃗v. Taking the limit j → +∞ in equation (3.2) leads to a contradiction. □

Thus the map P_xy|_{P^2} is a linear isomorphism, by comparing the dimensions of P^2 and the xy-plane. By the Sturmian theorem [Ang88], Γ_AL can only have transverse self-intersections, because Γ_AL is a shrinker. Since the linear map P_xy|_{P^2} is bijective, P_xy|_{P^2}(Γ_AL) also can only have transverse self-intersections. Therefore, if Γ_AL had self-intersections, then P_xy|_{P^2}(Γ_AL), and thus P_xy(γ(·, t_j)), would have transverse self-intersections for large j.
This contradicts the fact that γ(·, t) has a one-to-one convex projection onto the xy-plane. Therefore Γ_AL is embedded and thus is a circle.

If the multiplicity m were not one, then the winding number of P_xy|_{P^2}(Γ_AL), and hence of P_xy(γ(·, t_j)), with respect to the origin would equal m > 1. Thus the curve P_xy(γ(·, t_j)) would have a self-intersection, which gives a contradiction. □

For Type I singularities, we fully understand the asymptotic behavior.

Proposition 3.6. If γ(·, t) develops a Type I singularity as t → T, then γ(·, t)/√(2T - 2t) converges in C^∞ to a unit circle of multiplicity one in some 2-plane P^2 ⊂ R^n as t → T. Moreover, the linear map P_xy|_{P^2} : P^2 → xy-plane is a linear isomorphism.

Proof of Proposition 3.6. By Proposition 2.3, there exists a sequence {t_j} such that γ(·, t_j)/√(2T - 2t_j) converges to some Abresch-Langer curve Γ_AL in some 2-plane P^2 with multiplicity m ≥ 1. By Lemma 3.4, the limit is a circle of multiplicity one. The proposition then follows from Schulze's uniqueness of tangent flows (see Proposition 2.4). □

Lemma 3.7. If there exists a subsequence such that Γ_j(·, τ) converges, up to reparameterization, in the C^1 sense to some Abresch-Langer curve with finite multiplicity, then γ develops a Type I singularity.

Proof. By Lemma 3.4, the limit is a circle of multiplicity one. By smooth dependence of solutions of PDE on initial conditions, we may assume the convergence is in the C^∞ sense, by picking another sequence at nearby times if necessary. The lemma then follows from Schulze's uniqueness theorem (see Proposition 2.4). □

3.3. Type II singularity and non-compact blow-up limits. For any sequence τ_j = -(1/2) log(T - t_j) → +∞, recall that we denote by Γ_j the j-th rescaled CSF along {τ_j} defined in Definition 1.8. With convex projections, the sequential limit of the rescaled CSF cannot be transverse lines.

Lemma 3.8. If there exists a sequence τ_j → +∞ such that, as j → +∞, Γ_j(·, τ) converges in the sense of C^1_loc to a finite union of lines, each with finite multiplicity, then the union of these lines is a single line of multiplicity two. In addition, the line is not perpendicular to the xy-plane.

Proof. By the bounded slope lemma, Lemma 3.2, none of the lines in the union is perpendicular to the xy-plane. As a result, the projection of each of the above lines onto the xy-plane is a line and cannot be a point. It follows from [Whi05] that the sum of the multiplicities of the lines is at least 2.

Claim 3.9. The projection curves P_xy Γ_j(·, τ) converge to one line with multiplicity m ≥ 2.

Proof of the Claim. If this claim were not true, then the projection curves P_xy Γ_j(·, τ) would converge to a union of two or more transverse lines in the xy-plane. Then C^1_loc closeness to transverse lines implies that P_xy Γ_j(·, τ) should have self-intersections for large j. But for each j, the projection curve P_xy Γ_j(·, τ) is convex and thus embedded. □

Because the projection curve P_xy Γ_j(·, τ) is convex, the multiplicity m is at most 2. As a result, m = 2.

Claim 3.10. The space curves Γ_j(·, τ) converge to one line of multiplicity two in R^n.

Proof of the Claim. If this claim were not true, then the space curves Γ_j(·, τ) would converge to a union of two intersecting lines L_1, L_2 in R^n with P_xy L_1 = P_xy L_2 but L_1 ≠ L_2. Then there exist points p_i ∈ L_i for i = 1, 2 such that P_xy(p_1) = P_xy(p_2) but p_1 ≠ p_2. This contradicts the bounded slope lemma, Lemma 3.2. □ □

In summary:
Lemma 3.11. Let γ be a CSF with a one-to-one convex projection, and assume γ develops a Type II singularity. If τ_j → +∞ is a given sequence, then for almost every τ ∈ R, there exists a subsequence such that Γ(·, τ + τ_j) converges, up to reparameterization, in the C^1_loc sense to a line of multiplicity two. In addition, the line is not perpendicular to the xy-plane. The choice of the subsequence depends on τ.

4. Improved blow-up results for curves with convex projections

In this section, as discussed in §3.1, for all t ∈ [0, T), γ(·, t) has a one-to-one uniformly convex projection onto the xy-plane with no tangent lines perpendicular to the xy-plane. Recall that we have assumed γ(·, t) shrinks to the origin. In this section, we always assume γ(·, t) develops a Type II singularity as t → T. We will improve Lemma 3.11 from almost every τ to all τ and from C^1_loc to C^∞_loc. More generally, we will prove Theorem 1.13.

4.1. Preparations. For any sequence τ_j = -(1/2) log(T - t_j) → +∞, recall that we denote by Γ_j the corresponding j-th rescaled CSF defined in Definition 1.8. For any sequence τ_j → +∞, by Lemma 3.11, we may pick numbers a < b with b - a > m + 1, m ∈ N, such that Γ_j(·, a) converges, up to a subsequence, to a line L_a of multiplicity two. We may assume the projection P_xy L_a of the line L_a is the x-axis.

By the maximum principle and the definition of the rescaled CSF, we can establish the following lemma, which will be useful in the proofs of Lemma 4.2 and Lemma 4.3.

Lemma 4.1. Suppose that for some real numbers a < b, the projections P_xy Γ_j(·, a) converge to the x-axis with multiplicity two. Then for any R > 0 large and H > 0 small, there exists j_1 ∈ N such that for j ≥ j_1 and for all (σ, τ) satisfying -R ≤ ⃗e_1 · Γ_j(σ, τ) ≤ R, one has -H ≤ ⃗e_2 · Γ_j(σ, τ) ≤ H for any τ ∈ [a, b].

Proof. We first bound the projection of the rescaled curve at time τ = a by lines from above and below. Since P_xy Γ_j(·, a) converges to the x-axis of multiplicity two as j → +∞, for any R, H > 0 we can pick j_1 ∈ N such that for all σ satisfying -R ≤ ⃗e_1 · Γ_j(σ, a) ≤ R, one has

(4.1) -(H/2) e^(a-b) ≤ ⃗e_2 · Γ_j(σ, a) ≤ (H/2) e^(a-b)

and the gradient bound

(4.2) |(⃗e_2 · Γ_j)_σ / (⃗e_1 · Γ_j)_σ|(σ, a) ≤ H/(2R)

for any j ≥ j_1. In addition, we denote by Γ_j(σ_1, a), Γ_j(σ_2, a) the points whose x-coordinates are 0, with ⃗e_2 · Γ_j(σ_1, a) > 0 and ⃗e_2 · Γ_j(σ_2, a) < 0.

Lemma 4.3. For any R > 0, there exists j_2 ∈ N such that for j ≥ j_2,

sup_{σ∈S^1} ⃗e_1 · Γ_j(σ, τ) > R and inf_{σ∈S^1} ⃗e_1 · Γ_j(σ, τ) < -R

for any τ ∈ [a, b].

For any R > 0, for large enough j and any τ ∈ [a, b], Γ_j(·, τ) is graphical over the x-axis on the interval [-R, R], because the projection curve P_xy Γ_j(·, τ) is convex and is thus graphical over the x-axis except at the maximum and minimum points of the function x(·, τ) = ⃗e_1 · Γ_j(·, τ).

4.3. Gradient and curvature estimates. For any constant R > 0 large and H > 0 small, we take j_0 = max{j_1, j_2}, as chosen in Lemma 4.2 and Lemma 4.3.

Lemma 4.4 (Gradient estimates). For j ≥ j_0 and for all (σ, τ) satisfying -R/2 ≤ ⃗e_1 · Γ_j(σ, τ) ≤ R/2, one has

|(⃗e_2 · Γ_j)_σ / (⃗e_1 · Γ_j)_σ|(σ, τ) ≤ 8H/R

for any τ ∈ [a, b].

Proof. If this lemma were not true, then there would exist some j_g ≥ j_0 and a point (σ_0, τ_0) with -R/2 ≤ ⃗e_1 · Γ_{j_g}(σ_0, τ_0) ≤ R/2 but

|(⃗e_2 · Γ_{j_g})_σ / (⃗e_1 · Γ_{j_g})_σ|(σ_0, τ_0) > 8H/R.

We denote by L_0 the tangent line of the convex curve P_xy(Γ_{j_g}(·, τ_0)) at the point P_xy Γ_{j_g}(σ_0, τ_0). We may assume the point P_xy Γ_{j_g}(σ_0, τ_0) is on the upper branch; thus the convex curve P_xy(Γ_{j_g}(·, τ_0)) must lie below the line L_0. The line L_0 intersects the line segment l = {(x̃, -2H) | -R ≤ x̃ ≤ R} at some point. The curve P_xy(Γ_{j_g}(·, τ_0)) therefore also intersects l, which contradicts Lemma 4.2. □

Lemma 4.5 (Curvature estimates). There exists a constant M > 0 such that for all j ≥ j_0 and for all (σ, τ) satisfying -R/4 ≤ ⃗e_1 · Γ_j(σ, τ) ≤ R/4 and a/2 ≤ τ ≤ b/2, one has |Γ_{jσσ}(σ, τ)| ≤ M.
Proof. Any given branch of Γ_j in the region |x| ≤ R is a graph x ↦ (x, ỹ(x, τ), z̃_1(x, τ), · · · , z̃_{n-2}(x, τ)). The function ỹ satisfies the graphical rescaled CSF equation:

(4.3) ỹ_τ = ỹ_xx / (1 + ỹ_x^2 + z̃_{1x}^2 + · · · + z̃_{(n-2)x}^2) - x ỹ_x + ỹ.

Thus,

(ỹ_x)_τ = ( (ỹ_x)_x / (1 + ỹ_x^2 + z̃_{1x}^2 + · · · + z̃_{(n-2)x}^2) )_x - x (ỹ_x)_x,

which is a parabolic equation for ỹ_x in divergence form with a lower order term. By the gradient estimates and Lemma 3.3, ỹ_x^2 + z̃_{1x}^2 + · · · + z̃_{(n-2)x}^2 is bounded from above. Thus, by De Giorgi-Nash-Moser type estimates, ỹ_x is Hölder continuous; see for example [LSU68, Theorem 1.1, Chapter V, §1, page 419]. Similarly, z̃_{1x}, · · · , z̃_{(n-2)x} are Hölder continuous. Thus we obtain curvature estimates by applying Schauder estimates to equation (4.3). All the estimates mentioned in this proof are uniform for all j ≥ j_0. □

4.4. Uniform convergence. By Lemma 4.4, Lemma 4.5 and a priori estimates for parabolic PDEs, we have higher order estimates. By the Arzelà-Ascoli theorem, we may assume Γ_j(·, τ) converges locally smoothly to some limit flow Γ_∞(·, τ), because we have (upper and lower) bounds on length locally for τ ∈ [a, b], replacing a, b by 2a, 2b in Lemma 4.5 if necessary. It follows from Lemma 4.2, choosing constants R = m and H = 1/m for all m ∈ N large, that for τ ∈ [a, b], Γ_j(·, τ) converges to some line whose projection is the x-axis, which is the projection of the line that Γ_j(·, a) converges to. That is to say, for τ ∈ [a, b], P_xy Γ_∞(·, τ) and P_xy Γ_∞(·, a) are the same line. Next, we will show that not only the projections, but the space curves Γ_∞(·, τ) and Γ_∞(·, a), are the same line for any τ ∈ [a, b].

Proof of Theorem 1.13. Due to the convergence Γ_j → Γ_∞, the limit flow satisfies the rescaled CSF

(4.4) (Γ_∞)_τ = (Γ_∞)_σσ + Γ_∞^⊥,

up to a tangential motion. Take a subsequence such that τ_j + a > τ_{j-1} + b. Then by Huisken's monotonicity formula [Hui90],

(4.5) Σ_{j=1}^{+∞} ∫_{τ_j+a}^{τ_j+b} ∫_{Γ(·,τ)} e^{-|Γ|²/2} |Γ_σσ + Γ^⊥|² dσ dτ = Σ_{j=1}^{+∞} ∫_a^b ∫_{Γ_j(·,τ)} e^{-|Γ|²/2} |Γ_σσ + Γ^⊥|² dσ dτ

is finite. As a result,

lim_{j→+∞} ∫_a^b ∫_{Γ_j(·,τ)} e^{-|Γ|²/2} |Γ_σσ + Γ^⊥|² dσ dτ = 0,

and for any R > 0,

lim_{j→+∞} ∫_a^b ∫_{Γ_j(·,τ)∩B_R(0)} e^{-|Γ|²/2} |Γ_σσ + Γ^⊥|² dσ dτ = 0,

where we denote by B_R(0) the ball centered at the origin with radius R. Because Γ_j(·, τ) converges locally smoothly to Γ_∞(·, τ),

∫_a^b ∫_{Γ_∞(·,τ)∩B_R(0)} e^{-|Γ|²/2} |Γ_σσ + Γ^⊥|² dσ dτ = 0.

As a result, the time derivative (Γ_∞)_τ vanishes and Γ_∞(·, τ) remains the same line for any τ ∈ [a, b]. □

We conclude this section with a corollary which will be useful in §6.

Corollary 4.6. For arbitrary R, ε > 0, there exists τ_0 ∈ [-(1/2) log T, +∞) such that for any τ ≥ τ_0 and Γ(σ_1, τ), Γ(σ_2, τ) ∈ B_R(0), one has

|Γ_σ(σ_1, τ) - Γ_σ(σ_2, τ)| < ε.

Proof. If this corollary were not true, then there would exist R_0, ε_0 > 0 such that there exists a sequence of times τ_j with points Γ(σ_1^j, τ_j), Γ(σ_2^j, τ_j) ∈ B_{R_0}(0) but

(4.6) |Γ_σ(σ_1^j, τ_j) - Γ_σ(σ_2^j, τ_j)| ≥ ε_0.

We may take a subsequence of {τ_j} such that Γ(σ, τ_j) converges in the ball B_{R_0}(0). This contradicts equation (4.6). □
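The following is an illustrative sketch, not part of the paper: an explicit finite-difference step for the graphical rescaled CSF equation (4.3), written for one branch (y, z) with n = 3 for brevity. The frozen endpoint values stand in for the boundary data supplied by the convergence analysis, and the step sizes are assumptions chosen for stability rather than accuracy.

    # Sketch: explicit Euler step for y_tau = y_xx/(1+y_x^2+z_x^2) - x*y_x + y.
    import numpy as np

    def step(x, y, z, dtau):
        dx = x[1] - x[0]
        def d1(f): return (f[2:] - f[:-2]) / (2*dx)
        def d2(f): return (f[2:] - 2*f[1:-1] + f[:-2]) / dx**2
        yx, zx = d1(y), d1(z)
        denom = 1 + yx**2 + zx**2
        y_new, z_new = y.copy(), z.copy()
        y_new[1:-1] += dtau * (d2(y)/denom - x[1:-1]*yx + y[1:-1])
        z_new[1:-1] += dtau * (d2(z)/denom - x[1:-1]*zx + z[1:-1])
        return y_new, z_new

    R = 2.0
    x = np.linspace(-R, R, 201)
    y, z = 0.05*np.cos(np.pi*x/(2*R)), 0.05*np.sin(np.pi*x/(2*R))
    for _ in range(200):
        y, z = step(x, y, z, dtau=1e-4)
    print(float(np.max(np.abs(y))), float(np.max(np.abs(z))))  # branch stays flat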
5. The barrier as a subsolution

In this section, as discussed in §3.1, the initial curve γ_0 has a one-to-one uniformly convex projection onto the xy-plane and CSF γ(·, t) shrinks to the origin.

Definition 5.1. We define the functions x_max(t) = max_{u∈S^1} x(u, t) and x_min(t) = min_{u∈S^1} x(u, t).

Lemma 5.2. The functions x_max(t), x_min(t) are C^1 in the variable t.

Proof. Since the projection curve is uniformly convex, at each time t the function x(·, t) has a unique maximum point. We can view x as a function of y and denote by y_0(t) the value of y at which x(·, t) achieves its maximum; in other words, x(y_0(t), t) = x_max(t). Thus the derivative vanishes at the maximum point: x_y(y_0(t), t) = 0. Since the projection curve is uniformly convex, the curvature x_yy / (1 + x_y^2)^{3/2} is nonzero. Hence x_yy(y_0(t), t) ≠ 0. It then follows from the implicit function theorem that y_0(t) is C^1 in t. Thus x_max(t) = x(y_0(t), t) is C^1 in t. □

5.1. The difference Y between the upper and lower branch. We denote by (x, y, z_1, · · · , z_{n-2}) a point in R^n. Let us consider the graph flow over the x-axis:

(5.1) y_t = y_xx / (1 + y_x^2 + z_{1x}^2 + · · · + z_{(n-2)x}^2)

and

(5.2) z_{it} = z_{ixx} / (1 + y_x^2 + z_{1x}^2 + · · · + z_{(n-2)x}^2),

where x ∈ [x_min(t), x_max(t)], t ∈ [0, T) and z_{ix}^2 = (∂z_i/∂x)^2, 1 ≤ i ≤ n - 2. We denote by (y^u(x, t), z_1^u(x, t), · · · , z_{n-2}^u(x, t)) and (y^l(x, t), z_1^l(x, t), · · · , z_{n-2}^l(x, t)) the solutions corresponding to the upper and lower branch.

Definition 5.3. We define the difference of y between the upper and lower branch to be

(5.3) Y(x, t) := y^u(x, t) - y^l(x, t),

where x ∈ [x_min(t), x_max(t)] and t ∈ [0, T). By the definition of Y, one has

(5.4) Y(x_min(t), t) = 0 and Y(x_max(t), t) = 0

for all t ∈ [0, T).

The next lemma follows from the equations of the graph flow and the convexity of the projection curves.

Lemma 5.4. The function Y is a supersolution of the linear heat equation, i.e.

(5.5) Y_t ≥ Y_xx.

Proof. Because of the convexity of the projection curve, y_xx^u ≤ 0 and y_xx^l ≥ 0. Because

1 / (1 + y_x^2 + z_{1x}^2 + · · · + z_{(n-2)x}^2) ≤ 1,

we have

y_t^u = y_xx^u / (1 + (y_x^u)^2 + (z_{1x}^u)^2 + · · · + (z_{(n-2)x}^u)^2) ≥ y_xx^u

and

y_t^l = y_xx^l / (1 + (y_x^l)^2 + (z_{1x}^l)^2 + · · · + (z_{(n-2)x}^l)^2) ≤ y_xx^l.

Thus Y_t = y_t^u - y_t^l ≥ y_xx^u - y_xx^l = Y_xx. □

5.2. The barrier and its regularity. We define the function

(5.6) f(t) := ∫_0^t [ (π²/4) max( 1/x_max²(τ), 1/x_min²(τ) ) - 1/(2(T - τ)) ] dτ

and the function

(5.7) θ(x, t) := (π/2) x / x_max(t) if 0 ≤ x ≤ x_max(t); (π/2) x / (-x_min(t)) if x_min(t) ≤ x ≤ 0,

where the functions x_max(t), x_min(t) are defined in Definition 5.1. Let ε > 0 be a small constant to be chosen in inequality (5.13).

Definition 5.5. With the above notation, we define the barrier to be

(5.8) φ(x, t) = ε e^{-f(t)} √(T - t) cos θ(x, t),

where t ∈ [0, T), x ∈ [x_min(t), x_max(t)]. By the definition of the function θ,

(5.9) φ(x_min(t), t) = φ(x_max(t), t) = 0.
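The following is an illustrative numerical sketch, not part of the paper: it evaluates the barrier (5.8) for the model trajectory x_max(t) = -x_min(t) = √(2(T - t)) (the Type I scale) and checks on a grid that φ_t - φ_xx ≤ 0, so that φ acts as a subsolution of the heat equation, as the section title suggests. The trajectory, grid sizes and constants are assumptions made only for this illustration.

    # Sketch: the barrier phi of Definition 5.5 as a heat subsolution.
    import numpy as np

    T, eps = 1.0, 0.1
    xmax = lambda t: np.sqrt(2*(T - t))

    def f(t, n=4000):  # the integral (5.6); here x_max = -x_min
        tau = np.linspace(0, t, n)
        g = (np.pi**2/4)/xmax(tau)**2 - 1/(2*(T - tau))
        return np.sum(g[1:] + g[:-1])/2 * (tau[1] - tau[0])

    def phi(x, t):
        theta = (np.pi/2) * x / xmax(t)          # (5.7) for 0 <= x <= x_max(t)
        return eps * np.exp(-f(t)) * np.sqrt(T - t) * np.cos(theta)

    t, ht, hx = 0.3, 1e-5, 1e-4
    for x in np.linspace(0.05, 0.9*xmax(t), 5):
        phi_t = (phi(x, t+ht) - phi(x, t-ht)) / (2*ht)
        phi_xx = (phi(x+hx, t) - 2*phi(x, t) + phi(x-hx, t)) / hx**2
        print(f"x={x:.2f}  phi_t - phi_xx = {phi_t - phi_xx:.2e}")  # <= 0 expected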
We start with the regularity of the barrier.

Lemma 5.6. The function cos θ(x, t) is
(a) C^1 in the variables x, t;
(b) C^2 in x on [x_min(t), x_max(t)] \ {0};
(c) one-sided C^2 in x at x = 0 from the left and from the right.

Proof. By direct computations,

(cos θ)_x = -sin((π/2) x/x_max(t)) (π/2) / x_max(t) if x > 0; 0 if x = 0; -sin((π/2) x/(-x_min(t))) (π/2) / (-x_min(t)) if x < 0,

and

(cos θ)_t = sin((π/2) x/x_max(t)) (π/2) x x_max′(t) / x_max²(t) if x > 0; 0 if x = 0; -sin((π/2) x/(-x_min(t))) (π/2) x x_min′(t) / x_min²(t) if x < 0.

Both expressions extend continuously by 0 across x = 0, which gives (a). The one-sided second derivatives in x at x = 0 exist but may differ, since x_max(t) ≠ -x_min(t) in general; this gives (b) and (c). □

Lemma 5.11. For all t ∈ [0, T) and x ∈ [x_min(t), x_max(t)], one has Y(x, t) ≥ φ(x, t).

Proof. For ε_1 > 0, we define the perturbed function Y_1(x, t) = Y(x, t) + ε_1 e^t and the time

t_1 = sup{ t̃ ∈ [0, T) | Y_1(x, t) > φ(x, t) for all t ∈ [0, t̃], x ∈ [x_min(t), x_max(t)] }.

Based on equation (5.4) and equation (5.9),

(5.15) Y_1(x_min(t), t) = Y_1(x_max(t), t) = ε_1 e^t > 0

and

(5.16) φ(x_min(t), t) = φ(x_max(t), t) = 0.

Because ε in the definition of the barrier (Definition 5.5) is small, the time t_1 is positive. By definition of the time t_1,

(5.17) Y_1(x, t) ≥ φ(x, t) for all t ≤ t_1.

Claim 5.12. If t_1 < T, then Y_1(x, t_1) > φ(x, t_1) for all x ∈ (x_min(t_1), x_max(t_1)).

Proof of the Claim. It follows from equation (5.15) and equation (5.16) that Y_1(x_min(t_1), t_1) > φ(x_min(t_1), t_1) and Y_1(x_max(t_1), t_1) > φ(x_max(t_1), t_1). If Y_1(·, t_1) touched φ at an interior point, then at the touching point one would have (Y_1 - φ)_t ≤ 0 and (Y_1 - φ)_xx ≥ 0, contradicting (Y_1 - φ)_t - (Y_1 - φ)_xx ≥ ε_1 e^{t_1} > 0. Thus, for each x ∈ [x_min(t_1), x_max(t_1)], we can pick a spacetime neighborhood N_x of the point (x, t_1) such that Y_1 > φ in the neighborhood. The N_x form an open cover of the compact set {(x, t_1) | x_min(t_1) ≤ x ≤ x_max(t_1)}, so we can take a finite subcover. As a result, there exists a t_2 > t_1 such that Y_1(x, t) > φ(x, t) for all t ∈ [0, t_2], x ∈ [x_min(t), x_max(t)], which contradicts the definition of the time t_1. □

Claim 5.13. One has t_1 = T.

Proof of Claim 5.13. If t_1 < T, then Claim 5.12 would allow us to extend the strict inequality past t_1, because (Y_1 - φ)_t - (Y_1 - φ)_xx ≥ ε_1 e^t > 0. So we have reached a contradiction. □

Thus, for all t ∈ [0, T), x ∈ [x_min(t), x_max(t)] and all ε_1 > 0, Y_1(x, t) = Y(x, t) + ε_1 e^t ≥ φ(x, t). Taking the limit ε_1 → 0, one has Y(x, t) ≥ φ(x, t). □

6. Non-uniqueness of tangent flows

In this section, as discussed in §3.1, the initial curve γ_0 has a one-to-one uniformly convex projection onto the xy-plane and the CSF γ(·, t) shrinks to the origin. The goal of this section is to prove Theorem 1.15. We prove the following theorem first.

Theorem 6.1. Assume the initial curve γ_0 has a one-to-one convex projection onto the xy-plane and the CSF γ(·, t) develops a Type II singularity as t → T. For every nonzero vector ⃗v in the xy-plane, there exists a line L_⃗v in R^n with P_xy L_⃗v parallel to the vector ⃗v and a sequence of times {t_j^⃗v} such that the curves γ(·, t_j^⃗v)/√(2T - 2t_j^⃗v) locally smoothly converge to the line L_⃗v as j → +∞.

One key step is the following proposition.

Proposition 6.2. If along some sequence of times {t_j}, γ(·, t_j)/√(2T - 2t_j) locally smoothly converges to a line L_1 of multiplicity two, then there exists a line L_2 with P_xy L_1 ⊥ P_xy L_2 and another sequence of times {t_j′} such that the rescaled curves γ(·, t_j′)/√(2T - 2t_j′) locally smoothly converge to the line L_2 of multiplicity two.

Proof. We may assume the projection of the line L_1 onto the xy-plane is the x-axis. Thus, as j → +∞,

(6.1) Y(x = 0, t_j) = o(√(T - t_j)),

where Y is the difference between the upper and lower branches, which we defined in equation (5.3). We can establish the following pivotal lemma:

Lemma 6.3. For the chosen sequence {t_j}, one has

(6.2) lim_{j→+∞} f(t_j) = lim_{j→+∞} ∫_0^{t_j} [ (π²/4) max( 1/x_max²(τ), 1/x_min²(τ) ) - 1/(2(T - τ)) ] dτ = +∞,

where the function f was defined in equation (5.6).

Proof. Lemma 5.11 tells us that Y(x, t) ≥ φ(x, t). Thus, by the definition of the barrier φ (Definition 5.5), we have

Y(0, t_j) ≥ φ(0, t_j) = ε e^{-f(t_j)} √(T - t_j).

Equation (6.1) implies that e^{-f(t_j)} = o(1) as j → +∞; that is to say, f(t_j) → +∞ as j → +∞. The lemma then follows from the definition of f (equation (5.6)). □

By Lemma 6.3, there exists another sequence {t_j′} with t_j′ → T such that the integrand of equation (6.2) is positive at the times t_j′. In other words,

(6.3) min{ |x_min(t_j′)|, |x_max(t_j′)| } < (π/√2) √(T - t_j′).

Recall that the projection curve is uniformly convex, so its curvature is > 0 for all u, t.

Definition 6.4. We denote by θ(u, t) the turning angle between the vector P_xy γ_s(u, t) and the positive x-axis.

For the chosen sequences {t_j}, {t_j′}, we have:

Lemma 6.5. For arbitrary R, ε > 0, there exists j_0 ∈ N such that for any j ≥ j_0 and any γ(u_1, t_j) ∈ B_{R√(2T-2t_j)}(0), γ(u_2, t_j′) ∈ B_{R√(2T-2t_j′)}(0), one has

| (P_xy γ_s(u_1, t_j) / |P_xy γ_s(u_1, t_j)|) · (0, 1) | < ε and | (P_xy γ_s(u_2, t_j′) / |P_xy γ_s(u_2, t_j′)|) · (1, 0) | < ε.

Lemma 6.6. For arbitrary R, ε > 0, there exists t_0 ∈ [0, T) such that for any t ≥ t_0 and any γ(u_1, t), γ(u_2, t) ∈ B_{R√(2T-2t)}(0), one has

|γ_s(u_1, t) - γ_s(u_2, t)| < ε.

Lemma 6.7. For arbitrary R > 0, there exists t_R ∈ [0, T) such that for any t ≥ t_R, γ(·, t) ∩ B_{R√(2T-2t)}(0) has exactly two connected components, and |γ|² has exactly one minimum point on each component. Furthermore, we can smoothly track these minimum points.
Proof. If the first part of this lemma were not true, then there would exist a sequence {t_l} such that the number of connected components of γ(·, t_l) ∩ B_{R√(2T-2t_l)}(0) is not two. By taking a subsequence, γ(·, t_l)/√(2T - 2t_l) locally smoothly converges to a line of multiplicity two. This line passes through the origin and intersects the sphere ∂B_R(0) transversely. This gives a contradiction.

Claim 6.8. The function |γ|² has exactly one minimum point on each component of γ(·, t) ∩ B_{R√(2T-2t)}(0).

Proof of the Claim. The author learned the following argument from [CSSZ25, Proof of Lemma 3.5]. On each component, |γ|² has at least one minimum point, since each component of γ(·, t)/√(2T - 2t) ∩ B_R(0) is close to some line passing through the origin. By direct computations, (|γ|²)_s = 2γ · γ_s and

(6.4) (|γ|²)_ss = 2γ · γ_ss + 2 = 2 (γ/√(2T - 2t)) · (√(2T - 2t) γ_ss) + 2.

Thus, for points γ(u, t) ∈ B_{R√(2T-2t)}(0), because |γ(u, t)/√(2T - 2t)| ≤ R and the rescaled curvature √(2T - 2t) |γ_ss(u, t)| is small, one has (|γ|²)_ss(u, t) > 0. Thus the function |γ|² has at most one critical point on each component. □

We can smoothly track the minimum points γ(u_min(t), t) by applying the implicit function theorem to (|γ|²)_s(u_min(t), t) = 0 and (|γ|²)_ss(u_min(t), t) > 0. □

Proof of Theorem 6.1. We may assume the sequences {t_j} and {t_j′} are alternating, t_1 < t_1′ < t_2 < t_2′ < · · ·. For every nonzero vector ⃗v = (v_1, v_2) with v_1, v_2 > 0 in the xy-plane, there exists a sequence of times {t_j^⃗v} with t_j < t_j^⃗v < t_j′ such that the rescaled curves converge to a line whose projection onto the xy-plane is parallel to ⃗v. For ⃗v = (v_1, v_2) with v_1 < 0, v_2 > 0, we define ⃗v⊥ = (v_2, -v_1). By the previous argument, since v_2 > 0 and -v_1 > 0, there exists a sequence of times {t_j^{⃗v⊥}} such that γ(·, t_j^{⃗v⊥})/√(2T - 2t_j^{⃗v⊥}) converges to some line L_{⃗v⊥} whose projection P_xy L_{⃗v⊥} is parallel to the vector ⃗v⊥. Since (v_1, v_2) ⊥ (v_2, -v_1), it follows from Proposition 6.2 that there exists a sequence of times {t_j^⃗v} such that γ(·, t_j^⃗v)/√(2T - 2t_j^⃗v) converges to some line L_⃗v whose projection P_xy L_⃗v is parallel to the vector ⃗v. □

Proof of Theorem 1.15. Pick a sequence of times {t_j^⃗v} as in Theorem 6.1 with τ_j^⃗v := -(1/2) log(T - t_j^⃗v). We denote by Γ_j the j-th rescaled CSF corresponding to τ_j^⃗v. By Theorem 6.1, the curves Γ_j(·, τ = 0) locally smoothly converge to the line L_⃗v. By Theorem 1.13, there exists a subsequence such that Γ_j locally smoothly converges to a stationary line L of multiplicity two. By uniqueness of the limits at τ = 0, L = L_⃗v. □

7. Linear scales

For the rest of this paper, δ_0 refers to a fixed constant depending on the initial curve but independent of the time τ. The constant δ_0 will be chosen according to Lemma 7.13.

Definition 7.1. For a constant δ > 0, we define the linear scale ρ_δ : R_{>0} → R_{>0} to be

(7.1) ρ_δ(H) := δ/(20H).

Notation 7.2. For convenience, we omit δ and simply write ρ = ρ_δ when no ambiguity arises. In this section, ρ always refers to ρ_{δ_0}. We will choose a different δ in the next section; see Notation 8.14.

Definition 7.3. In R^n, we define a horizontal rotation to be a rotation in SO(n) that rotates the vectors parallel to the xy-plane and keeps the vectors perpendicular to the xy-plane invariant. The property that Γ has a one-to-one convex projection is preserved by horizontal rotations.

Remark 7.4. In our estimates, all constants are allowed to depend on the initial curve, particularly the constant in the three-point condition ([Sun24, Definition 4.8]). This is harmless because it is independent of time along the flow ([Sun24, Proposition 5.3 and Page 21]) and our analysis focuses only on the singularity.
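The following is an illustrative sketch, not part of the paper: a horizontal rotation in the sense of Definition 7.3 acts as a 2×2 rotation on the (x, y)-coordinates and as the identity on the remaining n - 2 coordinates, and ρ_δ is the linear scale (7.1). The sample values are assumptions of this illustration.

    # Sketch: horizontal rotations and the linear scale.
    import numpy as np

    def horizontal_rotation(n, beta):
        S = np.eye(n)
        S[:2, :2] = [[np.cos(beta), -np.sin(beta)],
                     [np.sin(beta),  np.cos(beta)]]
        return S

    def rho(delta, H):
        return delta / (20.0 * H)   # equation (7.1)

    S = horizontal_rotation(5, 0.3)
    p = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    q = S @ p
    print(np.allclose(q[2:], p[2:]))    # True: z-coordinates are unchanged
    print(rho(0.05, 0.01))              # the scale grows as the flatness H shrinks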
The goal of this section is to establish C² estimates (Proposition 7.5 and Proposition 7.7) at linear scales over time intervals. We start with C¹ estimates at a fixed time.

Proposition 7.5. There exist small constants λ_0 ∈ (0, 1), H_0 > 0 and a large time τ_0 for which the following is true. Suppose that at a time τ ≥ τ_0, there is a horizontal rotation S_τ and a vector ⃗θ = ⃗θ(τ) = (θ_1(τ), · · · , θ_{n-2}(τ)) ∈ [0, π/2)^{n-2} ⊂ R^{n-2} such that S_τ Γ(·, τ) ∩ {|x| ≤ 2} consists of the graphs of functions y^1, y^2, z_l^1, z_l^2 (1 ≤ l ≤ n - 2) with

‖y^i(x, τ)‖_{C^1[-2,2]} + Σ_{l=1}^{n-2} ‖z_l^i(x, τ) - (tan θ_l) x‖_{C^1[-2,2]} ≤ H ≤ H_0 for i = 1, 2,

where the indices i = 1, 2 label the upper and lower branches of the projection respectively. Then S_τ Γ(·, τ) ∩ {|x| ≤ (3/4) ρ_{δ_0}(H)} is a union of the graphs of functions y^i(·, τ), z_l^i(·, τ), i = 1, 2. In addition, for x ∈ [-(1/2) ρ_{δ_0}(H), (1/2) ρ_{δ_0}(H)], one has

|y^i(x, τ)| ≤ CH(|x| + 1), |y_x^i(x, τ)| ≤ CH for i = 1, 2,

and for x ∈ [-(1/2) λ_0 ρ_{δ_0}(H), (1/2) λ_0 ρ_{δ_0}(H)], i = 1, 2 and 1 ≤ l ≤ n - 2, one has

|z_l^i(x, τ) - (tan θ_l) x| ≤ CH(|x| + 1), |z_{lx}^i(x, τ) - tan θ_l| ≤ CH,

where the constants δ_0, λ_0, C are independent of the time τ.

Remark 7.6. By the bounded slope lemma (Lemma 3.2), we have an a priori upper bound 0 ≤ θ_l ≤ θ_M < π/2 for the angles θ_l.

Proposition 7.7. Suppose that at a time τ′ ≥ τ_0 and for some T > 0, there is a horizontal rotation S_{τ′} and a vector ⃗θ = ⃗θ(τ′) = (θ_1(τ′), · · · , θ_{n-2}(τ′)) ∈ [0, π/2)^{n-2} ⊂ R^{n-2} such that for any τ ∈ [τ′, τ′ + T], S_{τ′} Γ(·, τ) ∩ {|x| ≤ 2} consists of the graphs of functions y^1, y^2, z_l^1, z_l^2 (1 ≤ l ≤ n - 2) with

‖y^i(x, τ)‖_{C^1[-2,2]} + Σ_{l=1}^{n-2} ‖z_l^i(x, τ) - (tan θ_l) x‖_{C^1[-2,2]} ≤ H ≤ H_0 for i = 1, 2.

Then for x ∈ [-(1/4) λ_0 ρ_{δ_0}(H), (1/4) λ_0 ρ_{δ_0}(H)], τ ∈ [τ′ + 1/2, τ′ + T], i = 1, 2 and 1 ≤ l ≤ n - 2, one has

|y_xx^i(x, τ)| ≤ CH, |z_{lxx}^i(x, τ)| ≤ CH,

where the constant C is independent of τ′, ρ_{δ_0}(H) and T.

We first prove Proposition 7.7 based on Proposition 7.5; we then devote the rest of this section to the proof of Proposition 7.5.

Lemma 7.8. One can compute the equations of the graphical rescaled CSF:

(7.2) y_τ = y_xx / (1 + y_x² + z_{1x}² + · · · + z_{(n-2)x}²) - x y_x + y,
(7.3) z_{lτ} = z_{lxx} / (1 + y_x² + z_{1x}² + · · · + z_{(n-2)x}²) - x z_{lx} + z_l,

where 1 ≤ l ≤ n - 2.

Proof of Proposition 7.7. We drop the index i for simplicity. By Proposition 7.5, S_{τ′} Γ(·, τ) ∩ {|x| ≤ (3/4) ρ_{δ_0}(H)} is a union of the graphs of functions y^i(·, τ), z_l^i(·, τ), i = 1, 2, and for x ∈ [-(1/2) λ_0 ρ_{δ_0}(H), (1/2) λ_0 ρ_{δ_0}(H)],

|y_x^i(x, τ)| ≤ CH, |z_{lx}^i(x, τ) - tan θ_l(τ′)| ≤ CH for i = 1, 2,

where θ(τ′) is bounded away from π/2 by Remark 7.6. The function y satisfies the following equation of the rescaled CSF (Lemma 7.8):

y_τ = a(x, τ) y_xx - x y_x + y, where a = 1 / (1 + y_x² + z_{1x}² + · · · + z_{(n-2)x}²).

To clarify that the constant C is independent of τ′, ρ and T, for arbitrary τ_1 ∈ [τ′ + 1, τ′ + T] and |x_1| ≤ ρ/2 - 2, we perform the change of variables

x̄ = x - x_1 e^{τ-τ_1}, τ̄ = τ - τ_1, y(x, τ) = ȳ(x̄, τ̄), a(x, τ) = ā(x̄, τ̄).

Claim 7.9. With the above change of variables, one has ȳ_τ̄ = ā(x̄, τ̄) ȳ_x̄x̄ - x̄ ȳ_x̄ + ȳ.

Proof of the Claim. By direct computations, y_τ = ȳ_τ̄ - x_1 e^{τ-τ_1} ȳ_x̄ and y_x = ȳ_x̄, y_xx = ȳ_x̄x̄. Thus

ȳ_τ̄ - x_1 e^{τ-τ_1} ȳ_x̄ = ā(x̄, τ̄) ȳ_x̄x̄ - (x̄ + x_1 e^{τ-τ_1}) ȳ_x̄ + ȳ. □
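The following is an illustrative symbolic check, not part of the paper, of the change of variables in Claim 7.9. The explicit profiles standing in for ȳ and ā are arbitrary test functions assumed only for this verification.

    # Sketch: verify Claim 7.9 with concrete test profiles.
    import sympy as sp

    x, tau, x1, tau1 = sp.symbols('x tau x1 tau1')
    X = x - x1*sp.exp(tau - tau1)   # xbar
    S = tau - tau1                  # taubar
    # arbitrary concrete profiles standing in for ybar and abar:
    ybar = lambda X, S: sp.sin(X)*sp.exp(S) + X**2*S
    abar = lambda X, S: 1/(1 + sp.cos(X + S)**2)
    y = ybar(X, S)
    a = abar(X, S)
    # original equation: y_tau - (a*y_xx - x*y_x + y)
    orig = sp.diff(y, tau) - (a*sp.diff(y, x, 2) - x*sp.diff(y, x) + y)
    # transformed equation, evaluated at (xbar, taubar):
    XX, SS = sp.symbols('XX SS')
    trans = (sp.diff(ybar(XX, SS), SS)
             - (abar(XX, SS)*sp.diff(ybar(XX, SS), XX, 2)
                - XX*sp.diff(ybar(XX, SS), XX) + ybar(XX, SS)))
    print(sp.simplify(orig - trans.subs({XX: X, SS: S})))   # 0: the claim holds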
By Claim 7.9, the function ȳ_x̄ satisfies the following equation in divergence form:

(7.4) (ȳ_x̄)_τ̄ = (ā (ȳ_x̄)_x̄)_x̄ - x̄ (ȳ_x̄)_x̄.

For |x̄| ≤ 2 and τ̄ ∈ [-1, 0], the functions ȳ_x̄, (z̄_1)_x̄, · · · , (z̄_{n-2})_x̄ are uniformly bounded, independently of τ_1 and x_1. Thus, by De Giorgi-Nash-Moser type estimates (see for example [LSU68, Theorem 1.1, Chapter V, §1, page 419]), the function ȳ_x̄ is Hölder continuous. Similarly, (z̄_1)_x̄, · · · , (z̄_{n-2})_x̄ are Hölder continuous. As a result, ā is Hölder continuous. Thus, by Schauder estimates for equation (7.4) and Proposition 7.5, for |x̄| ≤ 1 and τ̄ ∈ [-1/2, 0],

(7.5) |ȳ_x̄x̄| ≤ C |ȳ_x̄| ≤ CH

for some constant C independent of τ_1 and x_1. As a result, by taking x_1 = ±(ρ/2 - k) for integers k ∈ [2, ρ/2] and taking τ_1 ∈ [τ′ + 1, τ′ + T], it follows from equation (7.5) that, for |x| ≤ (1/√e)(ρ/2 - 2) + 1 and τ ∈ [τ′ + 1/2, τ′ + T],

|y_xx(x, τ)| = |ȳ_x̄x̄| ≤ CH,

where the constant C is independent of τ′, ρ and T as long as we have gradient estimates. The estimates for |z_{lxx}^i| can be obtained similarly, because z_l^i(x, τ) - (tan θ_l) x satisfies the same equation and the same C¹ estimates as y^i. □

7.1. Setup and a sketch of the proof of Proposition 7.5. We denote by Γ the rescaled CSF and by Γ̄ the projection of Γ onto the xy-plane. The next lemma follows from the improved blow-up results (Theorem 1.13) and can be proved by the same argument as in the proof of Lemma 6.7.

Lemma 7.10. For arbitrary R > 0, there exists τ_R^pt ∈ [-(1/2) log T, +∞) such that for any τ ≥ τ_R^pt, the projection of the rescaled CSF inside the ball, Γ̄(·, τ) ∩ B_R(0), has exactly two connected components, and the function |Γ̄|² has exactly one minimum point on each component. In addition, we can smoothly track these two minimum points.

Definition 7.11. For each τ ≥ τ_1^pt, we label these two minimum points of the function |Γ̄|² in Lemma 7.10 by p(τ), q(τ) ∈ R².

For convenience, we adopt the notation p(τ), q(τ) independently of the horizontal rotations that we will choose. By Lemma 7.10, for R ≥ 1 and τ ≥ τ_R^pt, the points p(τ), q(τ) are minimum points of |Γ̄|² inside the ball Γ̄(·, τ) ∩ B_R(0) and are thus independent of the choice of R.

Definition 7.12. For each τ ≥ τ_1^pt, we denote by O the origin and by A_1(τ), A_2(τ) the areas of the two domains enclosed by the line segments Op(τ), Oq(τ) and the projection of the rescaled curve Γ̄(·, τ).

We are able to bound the rescaled area from below.

Lemma 7.13. There exist δ_0 > 0 and τ_{δ_0} such that A_1(τ), A_2(τ) ≥ δ_0 for all τ ∈ [τ_{δ_0}, +∞).

Proof. We prove A_1(τ) ≥ δ_0; the proof for A_2 is similar. We consider the rate of change of the area enclosed by the projection of the (unrescaled) CSF onto the xy-plane. Recall that τ = τ(t) = -(1/2) log(T - t). By [Sun24, Lemma 3.3] and the improved blow-up results (Theorem 1.13), for τ large,

(7.6) d/dt ( (2T - 2t) A_1(τ(t)) ) = -∫_{q(τ)}^{p(τ)} (x_s² + y_s²) k̄ ds̄ + o(1) ≤ -(π/2) δ_1,

where we used x_s² + y_s² ≥ δ_1 for some δ_1 > 0 ([Sun24, Corollary 5.8]), and the fact that the turning angle between the points p(τ), q(τ) is π + o(1) for τ large, because p(τ), q(τ) are minimum points of the function |Γ̄|². Because CSF γ shrinks to a point, lim_{t→T} (2T - 2t) A_1(τ(t)) = 0. As a result, by integrating equation (7.6),

(7.7) 0 - (2T - 2t) A_1(τ) ≤ -(π/2) δ_1 (T - t).

Pick δ_0 = (π/4) δ_1. □

We denote by |Op(τ)|, |Oq(τ)| the distances of the minimum points p(τ), q(τ) to the origin.

Lemma 7.14. If there exists a horizontal rotation S_τ such that the projection P_xy(S_τ Γ(·, τ) ∩ {|x| ≤ 2}) consists of the graphs of functions y^1, y^2 with ‖y^i(x, τ)‖_{C^0[-2,2]} ≤ H for i = 1, 2, then one has

(7.8) |Op(τ)|, |Oq(τ)| ≤ H.

Proof. Because p(τ), q(τ) are minimum points of the function |Γ̄|², one has

(7.9) |Op(τ)|, |Oq(τ)| ≤ max_{i=1,2} {|y^i(0, τ)|} ≤ max_{i=1,2} ‖y^i(x, τ)‖_{C^0[-2,2]} ≤ H. □
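The following is an illustrative sketch, not part of the paper: computing the two areas A_1, A_2 of Definition 7.12 for a discretized projection curve via the shoelace formula. The sample curve (an ellipse, which is convex and embedded with the origin inside) and the index bookkeeping are assumptions of this illustration.

    # Sketch: the areas A_1, A_2 cut out by Op, Oq and the projection curve.
    import numpy as np

    def shoelace(poly):
        x, y = poly[:, 0], poly[:, 1]
        return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

    u = np.linspace(0, 2*np.pi, 800, endpoint=False)
    curve = np.stack([2*np.cos(u), 0.5*np.sin(u)], axis=-1)

    r2 = (curve**2).sum(axis=1)
    ip = np.argmin(r2)                      # first minimum point p of |Gamma|^2
    # second minimum q: search on the circularly "far" half of the curve
    d = (np.arange(len(u)) - ip + len(u)//2) % len(u) - len(u)//2
    mask = np.abs(d) > len(u)//4
    iq = np.where(mask)[0][np.argmin(r2[mask])]
    i0, i1 = sorted([ip, iq])
    O = np.zeros((1, 2))
    A1 = shoelace(np.vstack([O, curve[i0:i1+1]]))          # arc p -> q plus O
    A2 = shoelace(np.vstack([O, curve[i1:], curve[:i0+1]]))
    print(A1, A2, A1 + A2, np.pi*2*0.5)    # A1 + A2 ~ total area of the ellipse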
Recall Definition 7.11 and Definition 7.3.

Definition 7.15. We define S_τ^1 to be the horizontal rotation such that, for the rotated curve S_τ^1 Γ(·, τ), the minimum point q(τ) lies on the negative y-axis. We may assume that for the rotated curve S_τ^1 Γ(·, τ), the x-coordinate x(p(τ)) of the point p(τ) is non-positive (see Figure 3).

Definition 7.16. We define the angle-bisecting rotation S_τ^bis to be the horizontal rotation such that the x-axis bisects the angle formed by Op(τ) and Oq(τ). We may assume that for the rotated curve S_τ^bis Γ(·, τ), the x-coordinates x(p(τ)) and x(q(τ)) are non-positive (see Figure 4).

[Figure 3: The points p(τ), q(τ) on S_τ^1 Γ(·, τ). Figure 4: The points p(τ), q(τ) on S_τ^bis Γ(·, τ).]

Sketch of the proof of Proposition 7.5. In §7.2 we derive gradient estimates for the upper branch of the rotated curve S_τ^1 Γ(·, τ). Based on the estimates for S_τ^1 Γ(·, τ) in §7.2, we derive gradient estimates in §7.3 for the upper branch of the rotated curve S_τ^bis Γ(·, τ). Because of the choice of the horizontal rotation S_τ^bis (Definition 7.16), the estimates on the lower branch are no different from the estimates for the upper branch of S_τ^bis Γ(·, τ). Thus we obtain gradient estimates for both the upper and lower branches of the rotated curve S_τ^bis Γ(·, τ). In §7.4 we use the derived estimates for S_τ^bis Γ(·, τ) to establish the desired C¹ estimates for y from Proposition 7.5. In §7.5 we consider the C¹ estimates for z_l (1 ≤ l ≤ n - 2) from Proposition 7.5.

7.2. Gradient estimates on the upper branch. In this subsection, we always consider the rotated curve S_τ^1 Γ(·, τ) and assume |Op(τ)|, |Oq(τ)| ≤ H. We denote by

x_max(τ) := max_{u∈S^1} x(u, τ), x_min(τ) := min_{u∈S^1} x(u, τ)

the maximum and minimum values of the function x at time τ, and we denote by x(y_max(τ)) the x-coordinate of the maximum point of the function y(·, τ). One has x(y_max(τ)) ≥ x(p(τ)), because the projection curve is convex and the slope of the upper branch is decreasing. See Figure 3.

By comparing areas, we first estimate |x_max(τ)| and |x_min(τ)|.

Lemma 7.17. Let δ_0 be the constant in Lemma 7.13. One has

(7.10) 2H |x_min(τ)| ≥ δ_0

and

(7.11) (y_max(τ) + H)(x_max(τ) + H) ≥ δ_0.

Proof. The areas A_1(τ), A_2(τ) are no bigger than the areas of the following rectangles respectively:

{(x, y) | x_min(τ) ≤ x ≤ 0, -H ≤ y ≤ H} and {(x, y) | -H ≤ x ≤ x_max(τ), -H ≤ y ≤ y_max(τ)}.

The lemma then follows from Lemma 7.13. □

For ρ = ρ_{δ_0}(H) = δ_0/(20H), our goal is to get the gradient estimates for |x| ≤ ρ. By inequality (7.10), |x_min(τ)| ≥ δ_0/(2H) = 10ρ > ρ. Recall that the indices i = 1, 2 label the upper and lower branches respectively. We first estimate the gradient at x = -ρ on the upper branch.

Lemma 7.18. One has

0 < y_x^1(-ρ, τ) ≤ 2H/(|x_min(τ)| - ρ).

Proof. Because the projection curve is convex and the slope of the upper branch is decreasing, for x < -ρ one has y_x^1(x, τ) ≥ y_x^1(-ρ, τ). If we had y_x^1(-ρ, τ) > 2H/(|x_min(τ)| - ρ), then

2H = ( 2H/(|x_min(τ)| - ρ) ) (|x_min(τ)| - ρ) < y^1(-ρ, τ) - y^1(x_min(τ), τ) ≤ 2H,

a contradiction.

Lemma 7.25. There exist H_0′ > 0 small and a time τ_0′ large such that the following holds. Suppose that at a time τ ≥ τ_0′, there is a horizontal rotation S_τ such that P_xy(S_τ Γ(·, τ) ∩ {|x| ≤ 2}) consists of the graphs of functions y^1, y^2 with

(7.19) ‖y^i(x, τ)‖_{C^1[-2,2]} ≤ H ≤ H_0′, i = 1, 2.

Then P_xy(S_τ Γ(·, τ) ∩ {|x| ≤ (3/4) ρ_{δ_0}(H)}) is a union of the graphs of functions y^i(·, τ), i = 1, 2. In addition, there exists an angle λ = λ(τ) with |tan λ| ≤ 2H such that for x ∈ [-(1/2) ρ_{δ_0}(H), (1/2) ρ_{δ_0}(H)],

|y_x^i(x, τ) - tan λ(τ)| ≤ CH², i = 1, 2.

Moreover, for x ∈ [-(1/2) ρ_{δ_0}(H), (1/2) ρ_{δ_0}(H)], one has |y^i| ≤ 3H(|x| + 1), |y_x^i(x, τ)| ≤ 3H.
Proof of Lemma 7.25. Recalling Definition 7.11, we define

λ(τ) = ( arctan y_x(p(τ)) + arctan y_x(q(τ)) ) / 2.

By equation (7.19), combined with Lemma 7.23 and |2 tan(θ/2)| ≤ |tan θ|,

|tan λ(τ)| ≤ |y_x(p(τ))| + |y_x(q(τ))| ≤ 2H.

Based on the definition of λ(τ) and the definition of the rotation S_τ^bis (Definition 7.16), by Lemma 7.14 and Lemma 7.24, for x ∈ [-(1/2) ρ_{δ_0}(H), (1/2) ρ_{δ_0}(H)],

|tan( arctan y_x^i(x, τ) - λ(τ) )| ≤ CH².

Because |tan(θ_1 - θ_2)| ≥ (1/2) |tan θ_1 - tan θ_2| for θ_1, θ_2 small,

|y_x^i(x, τ) - tan λ(τ)| ≤ CH².

As a result, |y_x^i(x, τ)| ≤ CH² + |tan λ| ≤ CH² + 2H ≤ 3H. Recall that we denote by p(τ) the minimum point of the function |Γ̄|² on the upper branch and that |Op(τ)| ≤ H (Lemma 7.14). By comparing the upper branch with the tangent line at the point p(τ), one has

y^1(x, τ) ≤ y^1(p(τ)) + |x - x(p(τ))| sup_{|x|≤ρ/2} |y_x^1| ≤ H + (|x| + H)(3H) ≤ 3H(|x| + 1)

for x ∈ [-ρ/2, ρ/2]. Similarly, for the lower branch, we have y^2 ≥ -3H(|x| + 1). Because y^1 ≥ y^2, for x ∈ [-ρ/2, ρ/2],

(7.20) -3H(|x| + 1) ≤ y^2 ≤ y^1 ≤ 3H(|x| + 1). □

7.5. Estimates for z_l (Proof of Proposition 7.5). Recall that we label a point in R^n by (x, y, z_1, · · · , z_{n-2}). For 1 ≤ l ≤ n - 2, we denote by ⃗e_1, ⃗e_2 and ⃗e_{l+2} the unit vectors in the directions of the positive x-axis, y-axis and z_l-axis. We fix an index l with 1 ≤ l ≤ n - 2 from now on. The argument in this subsection applies to each such l, and the dependence on l will remain implicit. For α ∈ [0, π/2), we adopt the notation

(7.21) ⃗e_α := cos α ⃗e_2 + sin α ⃗e_{l+2}.

Definition 7.26. We denote by P_α the 2-plane spanned by the vectors ⃗e_1 and ⃗e_α.

The next lemma follows from [Sun24, Definition 4.8, Proposition 5.3 for n = 3 and Proof of Theorem 1.5(b) on Page 21].

Lemma 7.27. There exists α_0 ∈ (0, π/2) such that CSF Γ(·, τ) has a one-to-one convex projection onto the 2-plane P_α for all α ∈ [0, α_0] and τ ≥ τ_{α_0}, for some τ_{α_0}.

We denote by A_1^α(τ), A_2^α(τ) the areas described in Definition 7.12, with respect to the projection onto the 2-plane P_α instead of the xy-plane (which is the same as P_0). Thus A_1^0(τ) = A_1(τ) and A_2^0(τ) = A_2(τ), where A_1(τ), A_2(τ) are defined in Definition 7.12. Analogously to Lemma 7.13, we may assume A_1^α(τ), A_2^α(τ) ≥ δ_0/2 for the following reasons, where δ_0 is the constant in Lemma 7.13. By [Sun24, Proof of Corollary 5.8, particularly the dependence of δ on Δl], we may assume that for α ∈ [0, α_0] (picking α_0 smaller if necessary), one has

(Γ_σ · ⃗e_1)² + (Γ_σ · ⃗e_α)² ≥ (2/π) δ_0.

The claim then follows from the proof of Lemma 7.13, particularly equation (7.7). We fix α ∈ (0, α_0] from now on.

Proof of Proposition 7.5. What we have in mind is that the limit line is in the direction of the vector

⃗v = ( ⃗e_1 + Σ_{l=1}^{n-2} tan θ_l ⃗e_{l+2} ) / √( 1 + Σ_{l=1}^{n-2} tan² θ_l ).

To lighten our notation, we define an angle θ ∈ [0, π/2) by

cos θ = 1 / √( 1 + Σ_{l=1}^{n-2} tan² θ_l ).

Then we can compute the components of the projection of the vector ⃗v onto the 2-plane P_α:

⃗v · ⃗e_1 = cos θ, ⃗v · ⃗e_α = tan θ_l sin α cos θ.

We define an angle β ∈ [0, π/2) by

(7.22) tan β = tan θ_l sin α.

By Remark 7.6, there exists β_0 < π/2 such that β ≤ β_0.

Proposition 8.6 (Gluing). There exists a constant C_0 > 1 such that the following holds. For any L ∈ N_{≥2} and H ≤ 1/(1000L), if τ_H^linear is an H-linear time, then for every τ′ ≥ τ_H^linear, the rotated rescaled CSF S_{τ′}^linear Γ is (C_0 LH, τ′, 4L)-flat at ⃗θ = ⃗θ(τ′), where the horizontal rotation S_{τ′}^linear and the vector ⃗θ(τ′) are as in Definition 8.3.
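The following is an illustrative sketch, not part of the paper, of the mechanism that combines gluing (Proposition 8.6) with the improvement of flatness (Proposition 8.7, stated next): one step turns (H, τ′, 4L)-flatness into H/(2C_0 L)-linearity, gluing turns that back into (H/2)-flatness, and the rotation moves by at most C_1 H, so the rotation increments form a geometric series. The constants in the code are placeholders, not the constants of the paper.

    # Sketch: geometric decay of flatness and summable rotation drift.
    C0, C1, L = 2.0, 5.0, 4
    H1 = 1e-3                   # initial flatness H_1
    H, drift = H1, 0.0
    for k in range(40):
        drift += C1 * H         # |S^{k+1} - S^k| <= C1 * H_k (Proposition 8.7)
        H = H / 2.0             # H_{k+1} = H_k / 2 after improve-then-glue
    print(drift, 2*C1*H1)       # total drift <= C1*H1*sum(2^-k) = 2*C1*H1

Since the total drift is bounded by a convergent series, the rotations form a Cauchy sequence; this is the heart of the uniqueness proof below.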
There exists a constant L ∈N≥2 such that for any T ≥2L + 4, there is an H1 ∈(0, C0 1000] such that the following holds. If for some horizontal rotation Sτ ′ and some vector ⃗θ = ⃗θ(τ ′), the rescaled CSF Sτ ′Γ is (H, τ ′, T )-flat for some H ≤H1 at ⃗θ, then there is a horizontal rotation ̄Sτ ′+L+2 and a vector ̄⃗θ = ̄⃗θ(τ ′ + L + 2) such that ̄Sτ ′+L+2Γ is H 2C0L-linear at ̄⃗θ at time τ ′ + L + 2. In addition, there is a uniform constant C1 such that | ̄Sτ ′+L+2 -Sτ ′| ≤C1H and | ̄⃗θ(τ ′ + L + 2) -⃗θ(τ ′)| ≤C1H. We now turn to prove uniqueness of tangent flows. Proof of Theorem 1.16. Throughout this proof, we fix L, H1 chosen in Proposition 8.7. By our improved blow-up result (Theorem 1.13), there exists an H1 C0L-linear time, which we denote by τ1. By gluing (Proposition 8.6), for every τ ′ ≥τ1, Slinear τ ′ Γ is (H1, τ ′, 4L)-flat at ⃗θ(τ ′). For every τ ′ ≥τ1, by improving the flatness (Proposition 8.7), there is a horizontal rotation S1 τ ′+L+2 and a vector ⃗θ1 = ⃗θ1(τ ′ + L + 2) such that S1 τ ′+L+2Γ is H1 2C0L-linear at ⃗θ1 at time τ ′ + L + 2 with |Slinear τ ′ -S1 τ ′+L+2| ≤C1H1 and |⃗θ -⃗θ1(τ ′ + L + 2)| ≤C1H1. We define τ2 = τ1 + L + 2. Thus τ2 is an H1 2C0L-linear time. We define τk = τ1 + (k -1)(L + 2). By iterations, for every τ ′ ≥τk, there is a horizontal rotation Sk τ ′+L+2 and a vector ⃗θk = ⃗θk(τ ′ + L + 2) such that Sk τ ′+L+2Γ is H1 2kC0L-linear at ⃗θk at time τ ′ + L + 2 with |Sk τ ′+L+2 -Sk-1 τ ′ | ≤C1H1 2k-1 and |⃗θk(τ ′ + L + 2) -⃗θk-1(τ ′)| ≤C1H1 2k-1 . We can denote Slinear τ ′ by S0 τ ′ and denote ⃗θ by ⃗θ0 to make our notation consistent. In summary, τk+1 is an H1 2kC0L-linear time. By gluing (Proposition 8.6), for every τ ′ ≥τk+1, Sk τ ′Γ is ( 1 2k H1, τ ′, 4L)-flat. For any ε > 0, there exists kε ∈N such that X k≥kε 1 2k 0, we define the cut-off profile ˆu at scale ρ to be (8.9) ˆu( ̃x, τ) := u( ̃x, τ)η( ̃x ρ). Definition 8.9. We define the error term E to be (8.10) E := ˆuτ -Lˆu. SINGULARITIES OF CURVE SHORTENING FLOW WITH CONVEX PROJECTIONS 43 8.2. Preparatory Lemmas. We denote by (8.11) S(β) = cos β -sin β sin β cos β the rotation by angle β. One has that (8.12) S(α)S(β)-1 = S(α -β). For a matrix S = (sij), we use the norm |S| = rP i,j s2 ij. Lemma 8.10. By direct computations, (8.13) |S(α) -S(β)|2 = 8 sin2 α -β 2 . As a result, for |α -β| ≤π, (8.14) 2 √ 2 π |α -β| ≤|S(α) -S(β)| ≤ √ 2|α -β|. Proof. By direct computations, |S(α) -S(β)|2 = 2| cos α -cos β|2 + 2| sin α -sin β|2 = 4 -4 cos α cos β -4 sin α sin β = 4 -4 cos(α -β) = 8 sin2 α -β 2 . Inequalities (8.14) follow from the fact that for |x| ≤π 2 , 2 π|x| ≤| sin x| ≤|x|. □ Lemma 8.11. For any real numbers a 0 by Lemma 7.27, then we have C1 estimates for (zl-tan θlx) √ 1+tan2 θl= yα-cos αy sin α , which is a linear combination of the functions y, yα. The constant C0 in Proposition 8.6 depends on chosen α, which is independent of time τ. 8.4. C2 estimates at linear scales. Let constants λ0, H0, τ0 be as in Proposition 7.5. We define δ1 := λ0δ0 8 and (8.24) ρδ1(H) := δ1 20H . Notation 8.14. In this section, ρ = ρ(H) always refers to ρδ1(H), which differs from ρδ0 by a time-independent coefficient λ0 8 . Compare with Notation 7.2. Recall Definition 8.4. Let us restate what has been established at scale 2ρδ1 in Proposition 7.5 and Proposition 7.7. 46 QI SUN Lemma 8.15. 
If the rescaled CSF Γ is (H, τ ′, T )-flat at ⃗θ = ⃗θ(τ ′) for some vector ⃗θ(τ ′) = (θ1(τ ′), · · · , θn-2(τ ′)), then for τ ∈[τ ′, τ ′ + T ], Γ(·, τ) ∩{(x, y, z1, · · · , zn-2)||x| ≤2ρδ1(H)} is a union of the graphs of functions yi(·, τ), zi l(·, τ), i = 1, 2, 1 ≤l≤n -2. In addition, the estimates |yi| ≤CH(|x| + 1), |yi x| ≤CH and |zi l-(tan θl)x| ≤CH(|x| + 1), |zi lx -tan θl| ≤CH hold in Pτ ′ 2ρδ1(H),T . Moreover, the estimates |yi xx| ≤CH, |zi lxx| ≤CH hold in P τ ′+ 1 2 2ρδ1(H),T -1 2 . 8.5. Change of variables. In this subsection, we always assume the rescaled CSF Γ is (H, τ ′, T )-flat at a vector ⃗θ = ⃗θ(τ ′) = (θ1(τ ′), · · · , θn-2(τ ′)). We define an angle θ = θ(τ ′) ∈[0, π 2 ) by (8.25) 1 cos2 θ = 1 + n-2 X l=1 tan2 θl. By Remark 7.6, there is an angle θ0 ∈[0, π 2 ), independent of time τ ′, such that θ(τ ′) ∈[0, θ0]. We choose new coordinates for τ ∈[τ ′, τ ′ + T ]: (8.26) ̃x = x cos θ, ̃yi = yi, ̃zi l= zi l-tan θlx. The reason for this change of variables, instead of an orthonormal transformation, is that this choice rescales the x-direction while leaving the y-direction unchanged. Thus the rescaled CSF still has a one-to-one convex projection onto the ̃x ̃y-plane. This will be important in the proof of Proposition 8.24, particularly equation (8.50). Notation 8.16. We sometimes omit the superscripts i in the functions ̃yi, ̃zi lfor simplicity when there is no confusion. Throughout the rest of this subsection, we denote by u any one of the functions ̃yi, ̃zi l, where i = 1, 2 and 1 ≤l≤n -2. Lemma 8.17. The function u satisfies the following evolution equation: (8.27) uτ = u ̃x ̃x 1 + ̃y2 ̃x + n-2 P l=1 ( ̃z2 l ̃x + 2 ̃zl ̃x cos θ tan θl) - ̃xu ̃x + u. Proof. By direct computations, based on ∂ ̃x = cos θ∂x, one has ̃zlx = zlx -tan θl and (8.28) zlx = ̃zlx + tan θl= ̃zl ̃x cos θ + tan θl. SINGULARITIES OF CURVE SHORTENING FLOW WITH CONVEX PROJECTIONS 47 Because -x(zl-tan θlx)x+(zl-tan θlx) = -xzlx+zl, by Lemma 7.8, the function u satisfies the following linear equation: (8.29) uτ = uxx 1 + y2x + z2 1x + · · · + z2 (n-2)x -xux + u. Because ∂ ̃x = cos θ∂x, uτ = 1 cos2 θu ̃x ̃x 1 + 1 cos2 θy2 ̃x + n-2 P l=1 z2 lx - ̃xu ̃x + u. By equation (8.28), uτ = 1 cos2 θu ̃x ̃x 1 + 1 cos2 θy2 ̃x + n-2 P l=1 ̃zl ̃x cos θ + tan θl 2 - ̃xu ̃x + u = 1 cos2 θu ̃x ̃x 1 + 1 cos2 θy2 ̃x + n-2 P l=1 ̃z2 l ̃x cos2 θ + 2 ̃zl ̃x cos θ tan θl + n-2 P l=1 tan2 θl - ̃xu ̃x + u. By definition of θ (equation (8.25)), uτ = 1 cos2 θu ̃x ̃x 1 cos2 θ + 1 cos2 θy2 ̃x + n-2 P l=1 ̃z2 l ̃x cos2 θ + 2 ̃zl ̃x cos θ tan θl - ̃xu ̃x + u = u ̃x ̃x 1 + y2 ̃x + n-2 P l=1 ( ̃z2 l ̃x + 2 ̃zl ̃x cos θ tan θl) - ̃xu ̃x + u. □ By Lemma 8.15, because | ̃x| = x cos θ ≥|x|, we have the following estimates. Lemma 8.18. If the rescaled CSF Γ is (H, τ ′, T )-flat at ⃗θ, then the estimates (8.30) |u| ≤CH(| ̃x| + 1), |u ̃x| ≤CH hold for | ̃x| ≤2ρ and τ ∈[τ ′, τ ′ + T ]. In addition, the estimates (8.31) |u ̃x ̃x| ≤CH hold for | ̃x| ≤2ρ and τ ∈[τ ′ + 1 2, τ ′ + T ]. Recall the cut-off function η = η( ̃x) defined in equation (8.8). Lemma 8.19. The cut-off ˆu = η( ̃x ρ)u satisfies (8.32) ∥ˆu∥H ≤CH for τ ∈[τ ′, τ ′ + T ]. Proof. By definition, ∥ˆu∥2 H = 1 √ 2π Z | ̃x|≤2ρ η( ̃x ρ)u 2 e- ̃x2 2 d ̃x ≤ Z | ̃x|≤2ρ u2e- ̃x2 2 d ̃x. 48 QI SUN By Lemma 8.18, ∥ˆu∥2 H ≤C Z | ̃x|≤2ρ H2(| ̃x| + 1)2e- ̃x2 2 d ̃x ≤CH2. □ Lemma 8.20. The following estimates hold for ˆu2 ̃x = h uη( ̃x ρ) ̃x i2 : (8.33) ˆu2 ̃x = u2 ̃xη2 + 2uu ̃xηη′ 1 ρ + u2(η′)2 1 ρ2 ≤CH2. As a result, (8.34) ∥ˆu ̃x∥H ≤CH. Proof. 
By direct computations, (8.35) ˆu ̃x = (uη( ̃x ρ)) ̃x = u ̃xη + uη′ 1 ρ. By the definition of the cut-off function η (equation (8.8)) and Lemma 8.18, for | ̃x| ≤2ρ, ˆu2 ̃x = u ̃xη + uη′ 1 ρ 2 ≤2u2 ̃xη2 + 2u2(η′)2 1 ρ2 ≤2u2 ̃x + 8u2 1 ρ2 (8.36) ≤C H2 + H2 ρ2 (| ̃x| + 1)2 ≤CH2, (8.37) where we used the definition of ρ = ρδ1 (equation (8.24)). □ 8.6. Estimates along evolution. In this subsection, we always assume the rescaled CSF Γ is (H, τ ′, T )-flat at ⃗θ. Let u be any one of the functions ̃yi, ̃zi l, where i = 1, 2 and 1 ≤l≤n -2. Recall the following notion in §8.1: (8.38) ˆu = η( ̃x ρ)u, L = ∂2 ̃x - ̃x∂ ̃x + 1 and E = ˆuτ -Lˆu. Lemma 8.21. The estimates (8.39) ∥E∥H ≤CH2 hold for τ ∈[τ ′ + 1 2, τ ′ + T ]. Proof. By the evolution equation of u (Lemma 8.17), E = d dτ uη( ̃x ρ) -(∂2 ̃x - ̃x∂ ̃x + 1) uη( ̃x ρ) = - ̃y2 ̃x + n-2 P l=1 ̃z2 l ̃x + 2 ̃zl ̃x cos θ tan θl 1 + ̃y2 ̃x + n-2 P l=1 ( ̃z2 l ̃x + 2 ̃zl ̃x cos θ tan θl) u ̃x ̃xη -2u ̃xη′ 1 ρ -uη′′ 1 ρ2 + ̃xuη′ 1 ρ. We want to decompose E into two quantities. SINGULARITIES OF CURVE SHORTENING FLOW WITH CONVEX PROJECTIONS 49 We define E1 := - ˆ ̃y2 ̃x + n-2 P l=1 ˆ ̃z2 l ̃x + 2ˆ ̃zl ̃x cos θ tan θl 1 + ̃y2 ̃x + n-2 P l=1 ( ̃z2 l ̃x + 2 ̃zl ̃x cos θ tan θl) u ̃x ̃x, where ˆ ̃y ̃x = ( ̃yη) ̃x and ˆ ̃zl ̃x = ( ̃zlη) ̃x. We also define E2 := -2u ̃xη′ 1 ρ -uη′′ 1 ρ2 + ̃xuη′ 1 ρ + 1 1 + ̃y2 ̃x + n-2 P l=1 ( ̃z2 l ̃x + 2 ̃zl ̃x cos θ tan θl) · u ̃x ̃x " ( ̃y2 ̃x + n-2 X l=1 ̃z2 l ̃x)(η2 -η) + 2( ̃y ̃y ̃x + n-2 X l=1 ̃zl ̃zl ̃x)ηη′ 1 ρ + ( ̃y2 + n-2 X l=1 ̃z2 l)(η′)2 1 ρ2 + n-2 X l=1 2 cos θ tan θl ̃zlη′ 1 ρ # . By equation (8.33), one has E = E1 + E2. Based on Lemma 8.18, by taking H small enough, we have (8.40) 1 2 ≤1 + ̃y2 ̃x + n-2 X l=1 ̃z2 l ̃x + 2 ̃zl ̃x cos θ tan θl ≤2. Lemma 8.18, equation (8.40) and the definition of the cut-off function η (equation (8.8)) implies that |E2| ≤    CH(| ̃x| + 1), if ρ ≤| ̃x| ≤2ρ, 0, if | ̃x| 2ρ. Thus, ∥E2∥2 H ≤C Z 2ρ ρ H2(| ̃x| + 1)2e- ̃x2 2 d ̃x ≤C H2 ρ8 Z 2ρ ρ | ̃x|8(| ̃x| + 1)2e- ̃x2 2 d ̃x ≤C H2 ρ8 ≤CH10. In addition, by equation (8.40) and definition of E1, ∥E1∥2 H ≤C Z 2ρ -2ρ E2 1e- ̃x2 2 d ̃x ≤C Z 2ρ -2ρ |u ̃x ̃x|2 ˆ ̃y4 ̃x + n-2 X l=1 (ˆ ̃z4 l ̃x + ˆ ̃z2 l ̃x) ! e- ̃x2 2 d ̃x ≤CH2 Z 2ρ -2ρ ˆ ̃y4 ̃x + n-2 X l=1 (ˆ ̃z4 l ̃x + ˆ ̃z2 l ̃x) ! e- ̃x2 2 d ̃x, where we used Lemma 8.18. Combined with Lemma 8.20, particularly inequality (8.33), (8.41) ∥E1∥2 H ≤CH2(H4 + H4 + H2) ≤CH4. 50 QI SUN As a result, (8.42) ∥E∥H ≤∥E1∥H + ∥E2∥H ≤CH2 + CH10 ≤CH2. □ Lemma 8.22. One has the following estimates for τ ∈[τ ′ + 1 2, τ ′ + T ], (8.43) d dτ ⟨ˆu, 1⟩H -⟨ˆu, 1⟩H + d dτ ⟨ˆu, ̃x⟩H ≤CH2, (8.44) d dτ ∥P≥1ˆu∥2 H ≤-∥P≥1ˆu∥2 H + CH4. Proof. Proof of inequality (8.43): d dτ ⟨ˆu, 1⟩H = ⟨ˆuτ, 1⟩H = ⟨Lˆu, 1⟩H + ⟨E, 1⟩H = ⟨ˆu, L1⟩H + ⟨E, 1⟩H = ⟨ˆu, 1⟩H + ⟨E, 1⟩H. Combined with ∥1∥H = 1, d dτ ⟨ˆu, 1⟩H -⟨ˆu, 1⟩H ≤|⟨E, 1⟩H| ≤∥E∥H. In addition, d dτ ⟨ˆu, ̃x⟩H = ⟨ˆuτ, ̃x⟩H = ⟨Lˆu, ̃x⟩H + ⟨E, ̃x⟩H = 0 + ⟨E, ̃x⟩H. Combined with ∥ ̃x∥H = 1, d dτ ⟨ˆu, ̃x⟩H ≤|⟨E, ̃x⟩H| ≤∥E∥H. Inequality (8.43) then follows from Lemma 8.21. Proof of inequality (8.44): By direct computations, d dτ ∥P≥1ˆu∥2 H = 2⟨P≥1ˆu, d dτ P≥1ˆu⟩= 2⟨P≥1ˆu, P≥1 d dτ ˆu⟩ = 2⟨P≥1ˆu, P≥1Lˆu + P≥1E⟩. Because P≥1L = LP≥1, d dτ ∥P≥1ˆu∥2 H = 2⟨P≥1ˆu, LP≥1ˆu + P≥1E⟩ ≤2⟨P≥1ˆu, LP≥1ˆu⟩+ 2∥P≥1ˆu∥H∥P≥1E∥H. Because the smallest positive eigenvalue of the operator -L is 1 and by the definition of P≥1 (equation (8.7)), d dτ ∥P≥1ˆu∥2 H ≤-2∥P≥1ˆu∥2 H + 2∥P≥1ˆu∥H∥P≥1E∥H ≤-2∥P≥1ˆu∥2 H + ∥P≥1E∥2 H + ∥P≥1ˆu∥2 H ≤-∥P≥1ˆu∥2 H + ∥P≥1E∥2 H. Inequality (8.44) then follows from Lemma 8.21. 
□ SINGULARITIES OF CURVE SHORTENING FLOW WITH CONVEX PROJECTIONS 51 Lemma 8.23. For any integer L ≥2 and T > 2L, one has the following estimates for τ ∈[τ ′ + L, τ ′ + T -L], ∥ˆu( ̃x, τ) -⟨ˆu( ̃x, τ ′ + L), ̃x⟩ ̃x∥2 H ≤C e-LH2 + T H3 + T 2H4 . (8.45) Proof. By Lemma 8.22, particularly inequality (8.44), for τ ∈[τ ′ + 1 2, τ ′ + T ], d dτ ∥P≥1ˆu∥2 H ≤-∥P≥1ˆu∥2 H + CH4. We set α(τ) := ∥P≥1ˆu(·, τ)∥2 H, then (eτα)τ = eτ(ατ + α) ≤CeτH4. By Lemma 8.19, for τ ∈[τ ′ + 1 2, τ ′ + T ], eτα(τ) ≤eτ ′+ 1 2 α(τ ′ + 1 2) + CT eτH4 ≤Ceτ ′H2 + CT eτH4. Thus, for τ ∈[τ ′ + L, τ ′ + T ], α(τ) ≤Ceτ ′-τH2 + CT H4 ≤Ce-LH2 + CT H4. Next, we set β := ⟨ˆu, 1⟩2 H, by Lemma 8.22 (particularly inequality (8.43)) and Lemma 8.19, for τ ∈[τ ′ + 1 2, τ ′ + T ], βτ ≥2β -CH(H2). Then one has (e-2τβ)τ = e-2τ(βτ -2β) ≥-Ce-2τH3. Thus, for τ ∈[τ ′ + 1 2, τ ′ + T ], e-2(τ ′+T )β(τ ′ + T ) -e-2τβ(τ) ≥-CT e-2τH3. By Lemma 8.19, for τ ∈[τ ′ + 1 2, τ ′ + T -L], β(τ) ≤e2τ-2(τ ′+T )β(τ ′ + T ) + CT H3 ≤Ce-2LH2 + CT H3. Finally, we set λ(τ) := ⟨ˆu(·, τ), ̃x⟩H -⟨ˆu(·, τ ′ + L), ̃x⟩H. By Lemma 8.22, for τ ∈[τ ′ + 1 2, τ ′ + T ], (8.46) |λτ| ≤CH2. By definition of λ, λ(τ ′ + L) = 0. Thus, for τ ∈[τ ′ + L, τ ′ + T -L], (8.47) |λ(τ)| = |λ(τ) -λ(τ ′ + L)| ≤C(T -2L)H2 ≤CT H2. Combine previous estimates, for τ ∈[τ ′ + L, τ ′ + T -L], ∥ˆu( ̃x, τ) -⟨ˆu( ̃x, τ ′ + L), ̃x⟩ ̃x∥2 H ≤α(τ) + β(τ) + λ2(τ) ≤C e-LH2 + T H4 + e-2LH2 + T H3 + T 2H4 . □ Proposition 8.24. There exist slopes K ̃y = K ̃y(τ ′), K ̃zl= K ̃zl(τ ′) ∈R with |K ̃y|, |K ̃zl| ≤CH such that for i = 1, 2, (8.48) ∥ˆ ̃yi( ̃x, τ) -K ̃y ̃x∥H + n-2 X l=1 ∥ˆ ̃zi l( ̃x, τ) -K ̃zl ̃x∥H ≤C(e-L 2 H + T 1 2 H 3 2 + T H2). holds for τ ∈[τ ′ + L, τ ′ + T -L]. 52 QI SUN Proof of Proposition 8.24. For α ∈[0, π 2 ), recall the vector ⃗eα (equation (7.21)) and the 2-plane Pα (Definition 7.26), we define yα := Γ(·, τ) · ⃗eα, where the dependence on lwill remain implicit. Thus, by definition of ̃y, ̃zl(equation (8.26)), yα = (cos α)y + (sin α)zl= (cos α) ̃y + sin α( ̃zl+ cos θ tan θl ̃x). (8.49) Let α0 ∈(0, π 2 ), τα0 be as in Lemma 7.27. We denote by yα,1, yα,2 the values of yα on the upper and lower branches. For α ∈[0, α0], we have (8.50) yα,1( ̃x, τ) > yα,2( ̃x, τ) for τ ≥τα0. Let βα i denote ⟨ˆyα,i(·, τ ′ + L), ̃x⟩:= ⟨η( ̃x ρ)yα,i(·, τ ′ + L), ̃x⟩for i = 1, 2. Claim 8.25. For i = 1, 2 and α ∈[0, α0], one has ∥ˆyα,i -βα i ̃x∥H ≤C(e-L 2 H + T 1 2 H 3 2 + T H2). Proof. One has, ∥η ̃x -⟨η ̃x, ̃x⟩ ̃x∥H = ∥(η ̃x - ̃x) + ⟨ ̃x, ̃x⟩ ̃x -⟨η ̃x, ̃x⟩ ̃x∥H ≤∥η ̃x - ̃x∥H + ∥⟨ ̃x -η ̃x, ̃x⟩ ̃x∥H = ∥η ̃x - ̃x∥H + |⟨ ̃x -η ̃x, ̃x⟩| · 1 ≤2∥η ̃x - ̃x∥H. Because yα is a linear combination of ̃y, ̃zl, ̃x (equation (8.49)), ∥ˆyα,i -βα i ̃x∥H ≤∥ˆ ̃yi -⟨ˆ ̃yi( ̃x, τ ′ + L), ̃x⟩ ̃x∥H + C∥ˆ ̃zi l-⟨ˆ ̃zi l( ̃x, τ ′ + L), ̃x⟩ ̃x∥H + C∥η( ̃x ρ) ̃x - ̃x∥H. By Lemma 8.23 and Lemma 8.12, ∥ˆyα,i -βα i ̃x∥H ≤C r (e-LH2 + T H3 + T 2H4) + 1 ρ200 . By the definition of ρ (equation (8.24)), 1 ρ200 ≤CH200 ≤T H3 because T > 2. □ Claim 8.26. For α ∈[0, α0], one has (8.51) |βα 1 -βα 2 | ≤C(e-L 2 H + T 1 2 H 3 2 + T H2). Proof of Claim 8.26. If βα 1 ≥βα 2 , then by 1 yα,2, one has (8.52) |βα 1 -βα 2 | ≤∥(βα 1 -βα 2 ) ̃x∥L2[-2,0] ≤∥yα,1 -yα,2 -(βα 1 -βα 2 ) ̃x∥L2[-2,0], where we used -(βα 1 -βα 2 ) ̃x ≥0 for ̃x ∈[-2, 0]. Thus, it follows from inequality (8.52) that, for ρ large enough, by Lemma 8.11, |βα 1 -βα 2 | ≤∥yα,1 -βα 1 ̃x∥L2[-2,0] + ∥yα,2 -βα 2 ̃x∥L2[-2,0] = ∥ˆyα,1 -βα 1 ̃x∥L2[-2,0] + ∥ˆyα,2 -βα 2 ̃x∥L2[-2,0] ≤C∥ˆyα,1 -βα 1 ̃x∥H + C∥ˆyα,2 -βα 2 ̃x∥H. 
SINGULARITIES OF CURVE SHORTENING FLOW WITH CONVEX PROJECTIONS 53 Combined with Claim 8.25, one has (8.53) |βα 1 -βα 2 | ≤C(e-L 2 H + T 1 2 H 3 2 + T H2). If βα 1 ≤βα 2 , then the same process works for L2[0, 2] in place of L2[-2, 0]. □ As a result, for i = 1, 2, by Claim 8.25 and Claim 8.26, ∥ˆyα,i -βα 1 ̃x∥H ≤∥ˆyα,i -βα i ̃x∥H + ∥(βα i -βα 1 ) ̃x∥H ≤C(e-L 2 H + T 1 2 H 3 2 + T H2). Combined with definition of yα (equation (8.49)), ∥ˆ ̃yi -β0 1 ̃x∥H = ∥ˆyi -β0 1 ̃x∥H = ∥ˆy0,i -β0 1 ̃x∥H ≤C(e-L 2 H + T 1 2 H 3 2 + T H2). (8.54) and ∥(cos α)ˆ ̃yi + sin α(ˆ ̃zi l+ cos θ tan θlˆ ̃x) -βα 1 ̃x∥H = ∥ˆyα,i -βα 1 ̃x∥H. As a result, combined with Lemma 8.12, ∥ˆ ̃zi l1 sin α0 (βα0 1 -β0 1 cos α0 -sin α0 cos θ tan θl) ̃x∥H ≤C(e-L 2 H + T 1 2 H 3 2 + T H2). (8.55) We pick K ̃y = β0 1 and K ̃zl= 1 sin α0 (βα0 1 -β0 1 cos α0 -sin α0 cos θ tan θl). Claim 8.27. One has, |K ̃y|, |K ̃zl| ≤CH. Proof. By the definition of βα 1 and Lemma 8.19, |β0 1| = |⟨ˆ ̃y1(·, τ ′ + L), ̃x⟩| ≤∥ˆ ̃y1∥H ≤CH. By Lemma 8.19 and Lemma 8.12, |βα0 1 -sin α0 cos θ tan θl| = |⟨ˆyα0,1(·, τ ′ + L), ̃x⟩-sin α0 cos θ tan θl⟨ ̃x, ̃x⟩| ≤ ⟨(cos α0)ˆ ̃y1 + sin α0ˆ ̃z1 l, ̃x⟩ + | sin α0 cos θ tan θl||⟨η ̃x - ̃x, ̃x⟩| ≤ ⟨(cos α0)ˆ ̃y1 + sin α0ˆ ̃z1 l, ̃x⟩ + C 1 ρ100 ≤CH + CH100. □ Claim 8.27, together with equation (8.54) and equation (8.55), proves Proposition 8.24. □ Next, we establish a PDE lemma which will be used to control C2 norms of the rotated rescaled CSF. 54 QI SUN Lemma 8.28. For a constant M > 0, there exists ε0 > 0 such that the following holds. Suppose the graph Γ(·, τ) of a vector-valued function (y, z1, · · · , zn-2) ∈ C∞(P-4 8,4) is a rescaled CSF with (8.56) ∥y(x, τ)∥C2(P-4 8,4) + n-2 X l=1 ∥zl(x, τ) -(tan θl)x∥C2(P-4 8,4) ≤ε ≤ε0 for θ1, · · · , θl. Then given φ ∈(-Mε, Mε) and θ′ l∈(θl-Mε, θl+Mε), denoting by S-φ the horizontal rotation (Definition 7.3) by angle -φ (in the sense of equation (8.11)), the profile (y′, z′ 1, · · · , z′ n-2) of the rotated flow S-φΓ(·, τ) is well defined in P-4 6,4 and the following holds: ∥y′(x, τ)∥C2(P-2 4,2) + n-2 X l=1 ∥z′ l(x, τ) -(tan θ′ l)x∥C2(P-2 4,2) ≤C sup τ∈[-4,0] " ∥y(x, τ) -(tan φ)x∥L2[-8,8] + n-2 X l=1 ∥zl(x, τ) -(tan θ′ l)x∥L2[-8,8] + Cε2 # , where the constant C depends on M. Proof. The area between the graph of y′ and the x-axis in a ball Br(0) equals the area between the graph of y and (tan φ)x in Br(0). Thus, by H ̈older's inequality, (8.57) ∥y′∥L1[-6,6] ≤∥y -(tan φ)x∥L1[-8,8] ≤4∥y -(tan φ)x∥L2[-8,8]. We define x′ := (cos φ)x + (sin φ)y. Then we have the estimates: Z 6 -6 |z′ l(x′, τ) -(tan θ′ l)x′|dx′ ≤ Z 8 -8 |zl(x, τ) -tan θ′ l(cos φx + sin φy)| · | cos φ + sin φyx|dx = Z 8 -8 |zl(x, τ) -tan θ′ l(x + (cos φ -1)x + sin φy)| · |1 + (cos φ -1) + sin φyx|dx ≤2 Z 8 -8 |zl(x, τ) -(tan θ′ l)x|dx + Cε2, where we use the assumption that |φ| < Mε and ∥y∥C1 ≤Mε. Thus, by H ̈older's inequality, (8.58) ∥z′ l(x, τ) -(tan θ′ l)x∥L1[-6,6] ≤8∥zl(x, τ) -(tan θ′ l)x∥L2[-8,8] + Cε2. Based on the bounds on the second order derivatives, we can write the evolution equations of the functions y′, z′ l-(tan θ′ l)x in divergence form. By the local L∞ estimates of the equations, ∥y′(x, τ)∥L∞(P-3 5,3) + n-2 X l=1 ∥z′ l(x, τ) -(tan θ′ l)x∥L∞(P-3 5,3) ≤C sup τ∈[-4,0] " ∥y′(x, τ)∥L1[-6,6] + n-2 X l=1 ∥z′ l(x, τ) -(tan θ′ l)x∥L1[-6,6] # . SINGULARITIES OF CURVE SHORTENING FLOW WITH CONVEX PROJECTIONS 55 Then this lemma follows from the Schauder estimates, equation (8.57) and equation (8.58). □ 8.7. Improvement of flatness (Proof of Proposition 8.7). 
By assembling Lemma 8.28, Proposition 8.24 and Lemma 8.11, where inequality (8.56) is ensured by Lemma 8.15, there is a horizontal rotation S̄_{τ′+L+2} and a vector θ̄ = θ̄(τ′ + L + 2) = (θ̄_1, · · · , θ̄_{n−2}) such that S̄_{τ′+L+2}Γ is H̄-linear at θ̄ at time τ′ + L + 2, where

H̄ = C(e^{−L/2}H + T^{1/2}H^{3/2} + TH^2) + CH^2 = C(e^{−L/2} + T^{1/2}H^{1/2} + TH + H)H.

The goal is H̄ ≤ H/(2C0L). First pick L large so that Ce^{−L/2} ≤ 1/(10C0L). Then pick H = H(L, T) small so that CT^{1/2}H^{1/2} ≤ 1/(10C0L), CTH ≤ 1/(10C0L) and CH ≤ 1/(10C0L). The estimates |S̄_{τ′+L+2} − S_{τ′}| ≤ CH and |θ̄_l − θ_l| ≤ CH hold because of Proposition 8.24, particularly |K_ỹ|, |K_z̃l| ≤ CH.

References

[AA81] William K. Allard and Frederick J. Almgren. On the radial behavior of minimal surfaces and the uniqueness of their tangent cones. Annals of Mathematics, 113(2):215–265, 1981.
[AAAW13] Dylan J. Altschuler, Steven J. Altschuler, Sigurd B. Angenent, and Lani F. Wu. The zoo of solitons for curve shortening in R^n. Nonlinearity, 26(5):1189, 2013.
[AB10] Ben Andrews and Charles Baker. Mean curvature flow of pinched submanifolds to spheres. Journal of Differential Geometry, 85(3):357–396, 2010.
[AB11] Ben Andrews and Paul Bryan. Curvature bound for curve shortening flow via distance comparison and a direct proof of Grayson's theorem. Journal für die reine und angewandte Mathematik, 2011(653):179–187, 2011.
[AG92] Steven J. Altschuler and Matthew A. Grayson. Shortening space curves and flow through singularities. Journal of Differential Geometry, 35(2):283–298, 1992.
[AL86] Uwe Abresch and Joel Langer. The normalized curve shortening flow and homothetic solutions. Journal of Differential Geometry, 23(2):175–196, 1986.
[All24] William K. Allard. Corrections to a paper of Allard and Almgren on the uniqueness of tangent cones. arXiv preprint, 2024.
[Alt91] Steven J. Altschuler. Singularities of the curve shrinking flow for space curves. Journal of Differential Geometry, 34(2):491–514, 1991.
[And99] Ben Andrews. Gauss curvature flow: the fate of the rolling stones. Inventiones mathematicae, 138(1):151–161, 1999.
[And12] Ben Andrews. Noncollapsing in mean-convex mean curvature flow. Geometry & Topology, 16(3):1413–1418, 2012.
[Ang88] Sigurd Angenent. The zero set of a solution of a parabolic equation. Journal für die reine und angewandte Mathematik, 1988.
[Ang91] Sigurd Angenent. On the formation of singularities in the curve shortening flow. Journal of Differential Geometry, 33(3):601–633, 1991.
[BC19] Simon Brendle and Kyeongsu Choi. Uniqueness of convex ancient solutions to mean curvature flow in R^3. Inventiones mathematicae, 217(1):35–76, 2019.
[BC21] Simon Brendle and Kyeongsu Choi. Uniqueness of convex ancient solutions to mean curvature flow in higher dimensions. Geometry & Topology, 25(5):2195–2234, 2021.
[BCD17] Simon Brendle, Kyeongsu Choi, and Panagiota Daskalopoulos. Asymptotic behavior of flows by powers of the Gaussian curvature. Acta Mathematica, 219(1):1–16, 2017.
[BK24] Richard H. Bamler and Bruce Kleiner. On the multiplicity one conjecture for mean curvature flows of surfaces, 2024.
[CCMS24a] Otis Chodosh, Kyeongsu Choi, Christos Mantoulidis, and Felix Schulze. Mean curvature flow with generic initial data. Inventiones mathematicae, 237(1):121–220, 2024.
[CCMS24b] Otis Chodosh, Kyeongsu Choi, Christos Mantoulidis, and Felix Schulze. Revisiting generic mean curvature flow in R^3. arXiv preprint, 2024.
[CCS23] Otis Chodosh, Kyeongsu Choi, and Felix Schulze. Mean curvature flow with generic initial data II, 2023.
[Cho85] Bennett Chow. Deforming convex hypersurfaces by the n-th root of the Gaussian curvature. Journal of Differential Geometry, 22(1):117–138, 1985.
[Cho15] Otis Chodosh. Mean curvature flow (Math 258) lecture notes. Unpublished notes of a class taught by Brian White, 2015.
[CIL92] Michael G. Crandall, Hitoshi Ishii, and Pierre-Louis Lions. User's guide to viscosity solutions of second order partial differential equations. Bulletin of the American Mathematical Society, 27(1):1–67, 1992.
[CM12] Tobias H. Colding and William P. Minicozzi. Generic mean curvature flow I; generic singularities. Annals of Mathematics, pages 755–833, 2012.
[CM15] Tobias Holck Colding and William P. Minicozzi. Uniqueness of blowups and Łojasiewicz inequalities. Annals of Mathematics, pages 221–285, 2015.
[CM21] Tobias Holck Colding and William P. Minicozzi. Wandering singularities. Journal of Differential Geometry, 119(3):403–420, 2021.
[CMI19] Tobias Holck Colding and William P. Minicozzi II. Dynamics of closed singularities. In Annales de l'Institut Fourier, volume 69, pages 2973–3016, 2019.
[CMI25] Tobias Holck Colding and William P. Minicozzi II. Quantitative uniqueness for mean curvature flow. arXiv preprint, 2025.
[CS21] Otis Chodosh and Felix Schulze. Uniqueness of asymptotically conical tangent flows. Duke Mathematical Journal, 170(16):3601–3657, 2021.
[CSSZ25] Kyeongsu Choi, Dong-Hwi Seo, Wei-Bo Su, and Kai-Wei Zhao. Uniqueness of tangent flows at infinity for finite-entropy shortening curves. Geometric and Functional Analysis, pages 1–55, 2025.
[Ede15] Nick Edelen. Notes from Brian White's class on mean curvature flow. Unpublished notes of a class taught by Brian White, 2015.
[Gag83] Michael E. Gage. An isoperimetric inequality with applications to curve shortening. Duke Mathematical Journal, 50(4):1225–1229, 1983.
[Gag84] Michael E. Gage. Curve shortening makes convex curves circular. Inventiones mathematicae, 76(2):357–364, 1984.
[GH86] M. Gage and R. S. Hamilton. The heat equation shrinking convex plane curves. Journal of Differential Geometry, 23(1):69–96, 1986.
[GK15] Zhou Gang and Dan Knopf. Universality in mean curvature flow neckpinches. Duke Mathematical Journal, 164(12):2341–2406, 2015.
[Gra87] Matthew A. Grayson. The heat equation shrinks embedded plane curves to round points. Journal of Differential Geometry, 26(2):285–314, 1987.
[Ham89] R. S. Hamilton. CBMS conference, Hawaii, lecture notes, 1989.
[Ham95a] Richard S. Hamilton. Harnack estimate for the mean curvature flow. Journal of Differential Geometry, 41(1):215–226, 1995.
[Ham95b] Richard S. Hamilton. Isoperimetric estimates for the curve shrinking flow in the plane. Modern methods in complex analysis (Princeton, NJ, 1992), 137:201–222, 1995.
[Hät15] J. Hättenschweiler. Curve Shortening Flow in Higher Dimension. ETH Zürich, 2015.
[HS99] Gerhard Huisken and Carlo Sinestrari. Convexity estimates for mean curvature flow and singularities of mean convex surfaces. Acta Mathematica, 1999.
[Hui84] Gerhard Huisken. Flow by mean curvature of convex surfaces into spheres. Journal of Differential Geometry, 20(1):237–266, 1984.
[Hui90] Gerhard Huisken. Asymptotic behavior for singularities of the mean curvature flow. Journal of Differential Geometry, 31(1):285–299, 1990.
[Hui98] Gerhard Huisken. A distance comparison principle for evolving curves. Asian Journal of Mathematics, 2(1):127–133, 1998.
[Ilm03] Tom Ilmanen. Problems in mean curvature flow.
Unpublished manuscript, ETH Zürich, 2003. Available at https://people.math.ethz.ch/~ilmanen/classes/eil03/problems03.pdf.
[LS24] Yang Li and Gábor Székelyhidi. Singularity formations in Lagrangian mean curvature flow. arXiv preprint, 2024.
[LSU68] Olga Aleksandrovna Ladyzhenskaia, Vsevolod Alekseevich Solonnikov, and Nina N. Ural'tseva. Linear and quasi-linear equations of parabolic type, volume 23. American Mathematical Society, 1968.
[LZ24] Tang-Kai Lee and Xinrui Zhao. Uniqueness of conical singularities for mean curvature flows. Journal of Functional Analysis, 286(1):110200, 2024.
[Man11] Carlo Mantegazza. Lecture notes on mean curvature flow, volume 290. Springer Science & Business Media, 2011.
[MB20] Jiří Minarčík and Michal Beneš. Long-term behavior of curve shortening flow in R^3. SIAM Journal on Mathematical Analysis, 52(2):1221–1231, 2020.
[MM14] Annibale Magni and Carlo Mantegazza. A note on Grayson's theorem. Rendiconti del Seminario Matematico della Università di Padova, 131, 2014.
[MMN16] Annibale Magni, Carlo Mantegazza, and Matteo Novaga. Motion by curvature of planar networks II. Ann. Sc. Norm. Super. Pisa Cl. Sci., XV:117–144, 2016.
[Naf22] Keaton Naff. A planarity estimate for pinched solutions of mean curvature flow. Duke Mathematical Journal, 171(2):443–482, 2022.
[Nev07] André Neves. Singularities of Lagrangian mean curvature flow: zero-Maslov class case. Inventiones mathematicae, 168(3):449–484, 2007.
[Sch14] Felix Schulze. Uniqueness of compact tangent flows in mean curvature flow. Journal für die reine und angewandte Mathematik (Crelles Journal), 2014(690):163–172, 2014.
[Smo11] Knut Smoczyk. Mean curvature flow in higher codimension: introduction and survey. In Global differential geometry, pages 231–274. Springer, 2011.
[Sto94] Andrew Stone. A density function and the structure of singularities of the mean curvature flow. Calculus of Variations and Partial Differential Equations, 2:443–480, 1994.
[Sun24] Qi Sun. Curve shortening flow of space curves with convex projections. arXiv preprint, 2024.
[Sun25] Qi Sun. Huisken's distance comparison principle in higher codimension. arXiv preprint, 2025.
[SX21] Ao Sun and Jinxin Xue. Initial perturbation of the mean curvature flow for closed limit shrinker. arXiv preprint, 2021.
[SX25] Ao Sun and Jinxin Xue. Generic dynamics of mean curvature flows with asymptotically conical singularities. Science China Mathematics, pages 1–38, 2025.
[Tra21] Hung V. Tran. Hamilton-Jacobi equations: theory and applications, volume 213. American Mathematical Society, 2021.
[Wan02] Mu-Tao Wang. Long-time existence and convergence of graphic mean curvature flow in arbitrary codimension. Inventiones mathematicae, 148(3):525–543, 2002.
[Wan11] Mu-Tao Wang. Lectures on mean curvature flows in higher codimensions. arXiv preprint, 2011.
[Whi00] Brian White. The size of the singular set in mean curvature flow of mean-convex sets. Journal of the American Mathematical Society, 13(3):665–695, 2000.
[Whi03] Brian White. The nature of singularities in mean curvature flow of mean-convex sets. Journal of the American Mathematical Society, 16(1):123–138, 2003.
[Whi05] Brian White. A local regularity theorem for mean curvature flow. Annals of Mathematics, pages 1487–1519, 2005.
[Zhu20] Jonathan J. Zhu. Łojasiewicz inequalities, uniqueness and rigidity for cylindrical self-shrinkers. arXiv preprint, 2020.

Qi Sun, University of Wisconsin-Madison
Email address:
arXiv:2510.14860v1 [math.QA] 16 Oct 2025
Differential equations for intertwining operators among untwisted and twisted modules

Daniel Tan

Abstract

Given any vertex operator algebra V with an automorphism g, we derive a Jacobi identity for an intertwining operator Y of type \binom{W3}{W1 W2} when W1 is an untwisted V-module, and W2 and W3 are g-twisted V-modules. We say such an intertwining operator is of \binom{g}{1 g}-type. Using the Jacobi identity, we obtain homogeneous linear differential equations satisfied by the multi-series ⟨w0, Y1(w1, z1) · · · YN(wN, zN)wN+1⟩ when the Yj are of \binom{g}{1 g}-type and the modules are C1-cofinite and discretely graded. In the special case that V is an affine vertex operator algebra, we derive the "twisted KZ equations" and show that their solutions have regular singularities at certain prescribed points when g has finite order. When V is general and g has finite order, we use the theory of regular singular points to prove that the multi-series ⟨w0, Y1(w1, z1) · · · YN(wN, zN)wN+1⟩ converges absolutely to a multivalued analytic function when |z1| > · · · > |zN| > 0 and analytically extends to the region zi, zi − zj ≠ 0. Furthermore, when N = 2, we show that these multivalued functions have regular singularities at certain prescribed points.

0 Introduction

The representation theory of vertex operator algebras has made major progress in mathematically constructing two-dimensional conformal field theories (CFTs). One interesting feature of vertex operator algebra representation theory is the concept of "twisting" a representation by an automorphism. The first-discovered elements of this concept within mathematics include the twisted vertex operators in [LW78] and [Lep85], and the construction of the Moonshine Module in [FLM84]. In [FLM88], a vertex operator algebra structure was constructed on the Moonshine Module. This came to be understood as the first example of a process in physics known as orbifolding, where a known CFT is "quotiented" by a finite group of its symmetries.

The general study of orbifold CFT was initiated by physicists Dixon, Harvey, Vafa and Witten in [DHVW85; DHVW86] on a physical level of rigor. Historically, the group was induced from a finite group G of symmetries of the target manifold M in a string theory, and the orbifold theory described the physics of a string propagating in the orbifold M/G; hence the term "orbifold". However, in the current general study, the group consists of abstract automorphisms of the CFT, and does not need to come from any manifold. As discussed in [DHVW85; DHVW86], the orbifolding procedure has two parts. First, twisted sectors Hg are included in the state space for each g ∈ G. These sectors are characterized by the property that the fields of the original theory are twisted by the action of g when they circle an insertion of a state in Hg. By circling other twisted sectors, each twisted sector Hg gains an action of the centralizer CG(g). And second, there is a projection of each Hg onto the CG(g)-invariant states. In this paper, we focus on just the first step. More specifically, we study how the untwisted sector H1 acts on a g-twisted sector Hg.

The chiral properties of orbifold CFTs were studied by Dijkgraaf, Vafa, E. Verlinde and H. Verlinde in [DVVV89]. They explained, on a physical level of rigor, how orbifold CFTs can be constructed from the twisted representations of the chiral algebra (i.e. twisted modules for a vertex operator algebra) of the original CFT.
The fields and their correlation functions are assumed to satisfy the usual convergence, radial ordering, and operator product expansion properties. The chiral study is closest in spirit to the vertex-operator-algebra theoretic study of orbifold CFT. If we are to rigorously construct orbifold CFTs using the representation theory of vertex operator algebras, we must prove that the convergence, radial ordering, and operator product expansion properties of the correlation functions follow from certain natural conditions for vertex operator algebras and their modules that are relatively easy to verify.

The mathematical study of chiral orbifold CFT is outlined as follows. Given an automorphism g of a vertex operator algebra V, a g-twisted V-module is a vector space equipped with an action of V, similar to a V-module action, but twisted by g as the vertex operator formally circles the origin, that is, Y(u, x) = e^{2πix d/dx} Y(gu, x). Examples of twisted vertex operators were first discovered in mathematics in [LW78] when constructing the affine Lie algebra A_1^{(1)}. More generally, twisted modules for vertex operator algebras with an automorphism of finite order were axiomatized in [FFR91] and [Don94] by compiling the basic properties discovered and proved in [FLM88], notably the "twisted Jacobi identity". The notion of twisted module introduced in [Hua10] allows for general automorphisms, but the algebraic Jacobi identity is replaced with the analytic axiom of "duality".

Given a group G of automorphisms of V, the subspace V^G of elements fixed under the action of G is a subalgebra of V. One can extend V^G by first adjoining g-twisted V-modules to V, for each g ∈ G, and intertwining operators between them before taking the fixed-point subalgebra. The moonshine module V♮ was the first example of a vertex operator algebra obtained by an orbifold construction. In [FLM88], Frenkel, Lepowsky and Meurman constructed V♮ from the Leech lattice vertex operator algebra VΛ and the group generated by the involution induced by negation of the Leech lattice. The physical interpretation of the process as an orbifold CFT was explained in [DGH88], and explained conceptually with mathematical rigor in [Hua96]. In [EMS20], a general orbifold construction was carried out for a simple, rational, C2-cofinite, holomorphic vertex operator algebra of CFT-type (i.e. positive energy) with G = ⟨g⟩ finite cyclic, by use of [Miy15; CM16].

A key part of a general study of the orbifolding procedure is the study of intertwining operators Y(·, x)· : W1 ⊗ W2 → W3{x}[log x] among twisted V-modules W1, W2 and W3, as introduced in the most general form in [Hua18; DH25]. These twisted intertwining operators are believed to give an intertwining-operator-algebraic structure to the direct sum of twisted modules, which provides the vertex-operator-algebraic structure when restricted to the fixed-point subspace. Twisted intertwining operators can be thought of as the mathematical counterpart to the chiral factors of the fields (or chiral vertex operators) corresponding to states in the twisted sectors acting on other twisted sectors. It is not only necessary to define twisted intertwining operators, but one must also prove that the series ⟨w0, Y1(w1, x1) · · · YN(wN, xN)wN+1⟩|_{x1=z1,...,xN=zN} converges to a multivalued analytic function in the domain |z1| > · · · > |zN| > 0 and analytically continues to the entire region zi, zi − zj ≠ 0, i ≠ j.
We refer to this as the convergence property for products of N twisted intertwining operators. Furthermore, one must also prove that ⟨w0, Y1(w1, x1)Y2(w2, x2)w3⟩|_{x1=z1,x2=z2} analytically continues to ⟨w0, Y3(Y4(w1, x0)w2, x2)w3⟩|_{x0=z1−z2,x2=z2} and ⟨w0, Y5(w2, x2)Y6(w1, x1)w3⟩|_{x1=z1,x2=z2} in the domains |z2| > |z1 − z2| > 0 and |z2| > |z1| > 0, respectively, for some twisted intertwining operators Y3, Y4, Y5, Y6. This condition is known as associativity and commutativity for twisted intertwining operators. Physically speaking, it says that the chiral factors of correlation functions respect radial ordering, and fields have operator product expansions.

One of the main requirements to prove associativity and commutativity is to prove that the analytic continuation of the product of two twisted intertwining operators has "regular singularities" at certain points. This also plays an important role in constructing a G-crossed tensor category structure on the category of the twisted V-modules, as explained in [Hua21b]. Similarly to the construction of the braided tensor category structure on the category of V-modules in the eight-part series [HLZ14; HLZ12a; HLZ12b; HLZ12c; HLZ12d; HLZ12e; HLZ12f; HLZ12g], the convergence property together with regular singularities at certain points are used to construct the associativity natural isomorphism and verify its properties.

Little is known about the convergence property for general V and G, with the present understanding explained in [Hua21b]. If G is trivial, the most general results are proved by Huang (originally in [Hua05] and extended in [HLZ12f]) by deriving certain differential equations when the modules are C1-cofinite and discretely graded. This includes the special case that V is C2-cofinite. In [Miy15], Miyamoto proved that V^G is C2-cofinite when V is C2-cofinite, CFT-type (i.e. positive energy) and simple, and G is finite and solvable. Every g-twisted V-module, with g ∈ G, restricts to a V^G-module. And in this context, twisted intertwining operators among V-modules twisted by elements in G become intertwining operators among V^G-modules. Hence, Huang's theory [Hua05] can be applied to the C2-cofinite vertex operator algebra V^G to show the convergence property among g-twisted V-modules, for g ∈ G. One downside is that this approach relies on knowing that V^G is C2-cofinite, which is difficult to prove and is not the most general assumption.

We hope to prove the convergence property directly by generalizing Huang's method [Hua05] of differential equations with regular singularities to twisted intertwining operators. In fact, physicists have discovered explicit examples of differential equations in the twisted case in [BHO01] and [DE20]. The twisted KZ equations derived by de Boer, Halpern and Obers in [BHO01] are explicit enough to see their regular singularities. We emphasize that their results are for a certain special case. Their derivation relies on the assumption that the chiral vertex operators in the chiral correlation function correspond to states from the untwisted sector in the presence of one twisted sector. We formulate this more precisely by saying a twisted intertwining operator Y(·, x)· : W1 ⊗ W2 → W3 is of \binom{g}{1 g}-type if W1 is an untwisted V-module, and W2 and W3 are g-twisted V-modules. That is, a twisted intertwining operator of \binom{g}{1 g}-type is one among an untwisted and a twisted module (the third module is necessarily a twisted module twisted by the same element).
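The distinction between untwisted and twisted modules in this definition is exactly a statement about monodromy: for u in the generalized eigenspace V[α] (so that g acts on u through e^{2πiα} on the semisimple part), the corresponding mode expansion on a g-twisted module has powers in α + Z, and one circuit of z around the origin multiplies it by e^{−2πiα}. The following is a minimal numerical sketch of this fact (ours, not the paper's; the exponent alpha and the random coefficients below are arbitrary stand-ins for an eigenvalue and for the modes u_n):

```python
import cmath
import random

alpha = 0.3          # stand-in exponent: g u = e^{2 pi i alpha} u, with Re(alpha) in [0, 1)
random.seed(0)
coeffs = {m: complex(random.random(), random.random()) for m in range(-5, 6)}

def mode_sum(z, p):
    """Truncated series sum_{n in alpha+Z} u_n z^{-n-1}, on the branch log z + 2 pi i p."""
    lz = cmath.log(z) + 2j * cmath.pi * p
    return sum(c * cmath.exp((-(alpha + m) - 1) * lz) for m, c in coeffs.items())

z = 0.7 + 0.4j
once_around = mode_sum(z, 1)   # continue z once counterclockwise around the origin
expected = cmath.exp(-2j * cmath.pi * alpha) * mode_sum(z, 0)
print(abs(once_around - expected))   # ~1e-15: the series picks up exactly e^{-2 pi i alpha}
```

When alpha = 0 (as for every eigenvector in the untwisted case) the factor is 1 and the expansion is single-valued, which is why the untwisted module W1 contributes no extra monodromy in a \binom{g}{1 g}-type intertwining operator.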
By generalizing the method of differential equations used in [Hua05], we derive differential equations satisfied by the "chiral correlation functions" ⟨w0, Y1(w1, x1) · · · YN(wN, xN)wN+1⟩ given by the product of N \binom{g}{1 g}-type intertwining operators Yi. These differential equations hold when the modules are C1-cofinite and discretely graded, and when g has a finitely-generated set of eigenvalues, for which there are many natural examples. Our result does not require any C2-cofiniteness assumption on V or V^G. In fact, we do not require any knowledge of a larger group G containing g. When the order of g is finite, we use these differential equations and the theory of regular singular points to prove the convergence property for the product of N \binom{g}{1 g}-type intertwining operators. We still have homogeneous linear differential equations when g has infinite order. There are, however, logarithms in the coefficients that prevent us from using the theory of regular singular points at the singularities.

Physically speaking, the correlation functions that we study in this paper describe how the chiral CFT acts on the g-twisted sector. This should not be confused with the description of the chiral algebra (i.e. vertex operator algebra) acting on the g-twisted sector, which is just the notion of a g-twisted V-module. A chiral CFT consists of more than a vertex operator algebra in general; it also contains modules for the vertex operator algebra and intertwining operators among them satisfying certain properties. The consideration of \binom{g}{1 g}-type intertwining operators can be avoided if V is assumed to be holomorphic (i.e. V has one irreducible module up to isomorphism, namely itself), as is usually done in the currently known mathematical orbifold constructions [FLM88; Hua96; EMS20; HM22; GK21]. The present work does not require V to be holomorphic, hence is more general.

Though not proved here, we hope to extend Huang's method of differential equations to correlation functions involving twisted intertwining operators that are not of \binom{g}{1 g}-type. The main difficulty in doing so comes from the multivalued V-action on the twisted modules. Formal calculus is used to derive differential equations without assuming any convergence. The Jacobi identity is the duality axiom written in the language of formal calculus, and hence is used in our computations. While duality for twisted intertwining operators is easy to write down [Hua18], the multivaluedness occurring in the twisted modules makes it difficult to convert to a Jacobi identity. Furthermore, once a Jacobi identity is obtained for twisted intertwining operators, it is much more computationally difficult to work with than the Jacobi identity for untwisted intertwining operators. Despite this, we believe that Huang's method of differential equations can be generalized to the case where the modules are twisted by elements of a general finite group.

In this paper, we first set notation and conventions for a multivalued version of formal calculus. In Section 2, we derive a Jacobi identity for \binom{g}{1 g}-type intertwining operators, which must be done for the formal calculus approach of deriving differential equations. We immediately obtain some useful relations satisfied by \binom{g}{1 g}-type intertwining operators. In Section 3, we derive higher order homogeneous linear differential equations satisfied by the correlation functions of a product of N \binom{g}{1 g}-type intertwining operators under certain technical assumptions.
In Section 4, we explicitly derive a first order homogeneous linear system of partial differential equations, known as the twisted KZ equations, when V is an affine vertex operator algebra, generalizing results of [BHO01]. Then, in Section 5, we show that the solutions to the twisted KZ equations have regular singularities when the order of g is finite. In Section 6, we assume that V is general and g has finite order to prove that the systems derived in Section 3 can be chosen to have regular singularities at certain prescribed points. We use this to prove the convergence property for products of N \binom{g}{1 g}-type intertwining operators, and we show that the product of two \binom{g}{1 g}-type intertwining operators has regular singularities at certain points.

Acknowledgments

I would like to thank Professor Yi-Zhi Huang and Professor James Lepowsky for their helpful discussions, guidance and suggestions.

1 Notation and conventions

Our base field for vector spaces will always be C, which will be important for the theory. The notion of vertex operator algebra and its morphisms can be found in Definition 3.1.22 of [LL04]. The framework of logarithmic formal calculus that we use can be found in [HLZ12a]. To easily distinguish between formal variables (all formal variables commute) and complex numbers, we will use x, log x and xi, log xi for formal variables and use z, zi, ζ, ζi, ξ, ξi, η, ηi for complex numbers.

For a vector space W, we use W{x1, . . . , xN} to denote the space of formal generalized power series Σ_{n1,...,nN} w_{n1,...,nN} x1^{n1} · · · xN^{nN} with coefficients in W and powers in C. We will regularly consider formal series of the form f(x1, . . . , xN, log x1, . . . , log xN) ∈ C{x1, . . . , xN}[log x1, . . . , log xN], but we suppress the log variables and write f(x1, . . . , xN) for brevity. We use ∂/∂xi to denote the derivation defined by (∂/∂xi) xj^n = n xj^{n−1} δ_{i,j} and (∂/∂xi) log xj = xi^{−1} δ_{i,j}. We sometimes write d/dx when there is only one pair of variables x and log x involved.

For any operator A on a vector space W, locally nilpotent operator N on W, and formal variables x, x1, log x, we have the following meanings for formal logarithms and exponentials, using logarithm and exponential notation:

log(1 + x) = Σ_{k=1}^∞ ((−1)^{k+1}/k) x^k,  log(x + x1) = log x + log(1 + x1/x),

e^{Ax} = Σ_{k=0}^∞ (1/k!) A^k x^k,  x^N = e^{N log x},  (1 + x)^A = Σ_{k=0}^∞ \binom{A}{k} x^k.  (1.1)

These formulas remain valid when we can make a substitution of a formal series (in possibly multiple variables) that is still well defined, for example

(x + x1)^N = e^{N log(x+x1)} = e^{N(log x + log(1+x1/x))} = x^N e^{N log(1+x1/x)} = x^N (1 + x1/x)^N.

When x^S is used for a semisimple operator S on W, it is understood to mean x^S u = x^α u when Su = αu, and then extended to all of W. If an operator A has commuting semisimple and locally nilpotent parts S and N, we use x^A to mean x^S x^N and (x + x1)^A = (x + x1)^S (x + x1)^N = x^A (1 + x1/x)^A.

Given a formal series

f(x1, . . . , xN) = Σ_{r1,...,rN ∈ C; k1,...,kN ∈ Z≥0} a_{r1,...,rN,k1,...,kN} x1^{r1} · · · xN^{rN} (log x1)^{k1} · · · (log xN)^{kN},

we can specialize a formal variable to a sum of two formal variables by understanding logarithms of formal variables as explained above in (1.1). For example,

f(x + x0, . . . , xN) = Σ_{r1,...,rN ∈ C; k1,...,kN ∈ Z≥0} a_{r1,...,rN,k1,...,kN} (x + x0)^{r1} · · · xN^{rN} (log(x + x0))^{k1} · · · (log xN)^{kN}.

For each z ∈ C×, define arg z to be the argument of z satisfying 0 ≤ arg z < 2π. For each p ∈ Z, we use lp(z) to denote the logarithm of z given by log |z| + (arg z + 2pπ)i. When a formal series f(x1, . . .
, xN) has countable support, it can be specialized to the series f p1,...,pN(z1, . . . , zN) = X r1,...,rN∈C k1,...,kN∈Z≥0 ar1,...,rN,k1,...,kNer1lp1(z1) . . . erNlpN (zN)(lp1(z1))k1 . . . (lpN(zN))kN, for any z1, . . . , zN ∈C×. If a series f does not have the variables explicitly stated, we will write f|p1,...,pN x1=z1,...,xN=zN to denote f p1,...,pN(z1, . . . , zN), again suppressing the log variables for brevity. If f p1,...,pN(z1, . . . , zN) converges absolutely in some appropriate region, it converges to a multivalued analytic function f(z1, . . . , zN) with branches given by f p1,...,pN(z1, . . . , zN). We also write expressions such as ⟨w0, Y(w1, z1) · · · Y(wN, zN)wN+1⟩:= ⟨w0, Y(w1, x1) · · · Y(wN, xN)wN+1⟩|x1=z1,...,xN=zN to denote the (multivalued) series of complex numbers. Each branch will have the value of zr i = er log zi dependent on the choice of value log zi. Suppose we have operators g and Lg on a space W such that g = e2πiLg, and Lg has commuting semisimple and locally nilpotent parts Sg and Ng. (This is always possible when W is the direct sum of finite-dimensional subspaces on which g acts invertibly.) Then Sg is determined up to its eigenvectors with eigenvalues in C/Z. We will always assume Sg is chosen so that its eigenvalues lie in {z ∈C : 0 ≤ℜ(z) < 1}. We sometimes omit the subscript g from L, S and N when the dependence on g is clear from context. We will use V(n) to denote the L(0)-eigenspaces with eigenvalue n ∈Z of a vertex operator algebra V , and use V+ to denote ` n∈Z>0 V(n). If g is an automorphism of V , we use V [α], V [α] (n) and V [α] + to denote the spaces of generalized eigenvectors of g (acting on V , V(n) and V+, respectively) with eigenvalue e2πiα. We will use W[n] to denote the L(0)-generalized eigenspaces of a twisted V - module (see Definition 3.1 of [Hua18]) with eigenvalue n ∈C. We say wt w = n is the (conformal) weight of a vector in w ∈W[n]. Suppose we have a tensor product W0 ⊗· · · ⊗WN+1 of C-graded spaces graded by weight. Given homogeneous elements wi ∈Wi, i = 0, . . . , N + 1, we will frequently use σ to denote ℜ(wt w0+· · ·+wt wN+1), i.e. the real component of the weight of the tensor product w0⊗· · ·⊗wN+1. 2 A Jacobi identity for g 1 g  -type intertwining operators We use the definition of vertex operator algebra V = (V, Y, 1, ω) in Definition 3.1.22 of [LL04]. It is important that our base field is C. Recall that an automorphism g of a vertex operator algebra V is a linear automorphism of V such that gY (u, x)v = Y (gu, x)gv for all u, v ∈V , g1 = 1 and gω = ω. We will use the notions of twisted modules and intertwining operators among twisted modules as defined in Definition 3.1 and Definition 4.1 of [Hua18], respectively. In this paper, we will not require the g-action on the g-twisted module. We recall the definition of an intertwining operator among twisted modules. 6 Definition 2.1. Let V be a vertex operator algebra with automorphisms g1, g2, g3. Let Wi be a gi-twisted V -module, for i = 1, 2, 3. An intertwining operator of type W3 W1 W2  is a linear map Y(·, x)· : W1 ⊗W2 →W3{x}[log x], w1 ⊗w2 7→ K X k=0 X n∈C Yn,k(w1)w2x−n−1(log x)k (2.1) satisfying the following conditions: (i) (Lower truncation) For all w1 ∈W1, w2 ∈W2, n ∈C and k = 0, . . . , K, Yn+l,k(w1)w2 = 0 for all sufficiently large l ∈Z. (ii) (L(−1)-derivative property) For all w1 ∈W1, d dxY(w1, x) = Y(L(−1)w1, x). 
(iii) (Duality) For all u ∈V , w1 ∈W1, w2 ∈W2 and w′ 3 ∈W ′ 3, there exists a formal series f(x0, x1, x2) = N X i,j,k,l,m,n=0 aijklmnxri 0 x sj 1 xtk 2 (log x0)l(log x1)m(log x2)n ∈C{x0, x1, x2}[log x0, log x1, log x2] (2.2) such that for p1, p2, p12 ∈Z, the series ⟨w′ 3, (YW3)p1(u, z1)Yp2(w1, z2)w2⟩= ⟨w′ 3, YW3(u, x1)Y(w1, x2)w2⟩|p1,p2 x1=z1,x2=z2, ⟨w′ 3, Yp2(w1, z2)(YW2)p1(u, z1)w2⟩= ⟨w′ 3, Y(w1, x2)YW2(u, x1)w2⟩|p1,p2 x1=z1,x2=z2, ⟨w′ 3, Yp2((YW1)p12(u, z1 −z2)w1, z2)w2⟩= ⟨w′ 3, Y(YW1(u, x0)w1, x2)w2⟩|p12,p2 x0=z1−z2,x2=z2, (2.3) are absolutely convergent in the regions |z1| > |z2| > 0, |z2| > |z1| > 0 and |z2| > |z1−z2| > 0, respectively. Moreover, they converge to f p1,p1,p2(z1 −z2, z1, z2), f p2,p1,p2(z1 −z2, z1, z2), f p12,p2,p2(z1 −z2, z1, z2) (2.4) in the regions |z1| > |z2| > 0 and −π 2 < arg(z1 −z2) −arg(z1) < π 2 , |z2| > |z1| > 0 and −3π 2 < arg(z1 −z2) −arg(z1) < −π 2 , |z2| > |z1 −z2| > 0 and −π 2 < arg(z1) −arg(z2) < π 2 , (2.5) respectively. To avoid saying “intertwining operator among twisted modules”, we say twisted intertwining op- erator, or simply, intertwining operator. We are interested in the following special case. 7 Definition 2.2. Let V be a vertex operator algebra with an automorphism g. Let W1 be an untwisted V -module (i.e. twisted by 1 = idV ). Let W2 and W3 be g-twisted V -modules. A twisted intertwining operator of type W3 W1 W2  is said to be of g 1 g  -type. We derive a Jacobi identity for g 1 g  -type intertwining operators in what follows. Fix V , g, and a g 1 g  -type intertwining operator Y(·, x) : W1 ⊗W2 →W3{x}[log x]. Denote by L the logarithm of g divided by 2πi with commuting simple and locally nilpotent parts S and N such that the real part of the eigenvalues of S are in [0, 1). Let u ∈V . Recall from [HY19] that, for i = 2, 3, we have YWi,0(u, x) = YWi(xNu, x), where xN = eN log x and YWi,0(u, x) = P n∈C un,0x−n−1 contains exactly the (log x)-free terms of YWi(u, x). For each k ∈Z≥0, there is a formal series fk(x0, x1, x2) as in (2.2), such that ⟨w′ 3, YW3(N ku, x1)Y(w1, x2)w2⟩|p1,p2 x1=z1,x2=z2, ⟨w′ 3, Y(w1, x2)YW2(N ku, x1)w2⟩|p1,p2 x1=z1,x2=z2, ⟨w′ 3, Y(YW1(N ku, x0)w1, x2)w2⟩|p12,p2 x0=z1−z2,x2=z2, converge to the respective branches of fk(z1 −z2, z1, z2) in the respective regions as in (2.4) and (2.5). This implies that ⟨w′ 3, YW3((log x1)kN ku, x1)Y(w1, x2)w2⟩|p1,p2 x1=z1,x2=z2, ⟨w′ 3, Y(w1, x2)YW2((log x1)kN ku, x1)w2⟩|p1,p2 x1=z1,x2=z2, ⟨w′ 3, Y(YW1((log x1)kN ku, x0)w1, x2)w2⟩|p12,p2,p2 x0=z1−z2,x1=z1,x2=z2, converge to the respective branches of (log z1)kfk(z1 −z2, z1, z2) in the respective regions. Hence, by defining f(x0, x1, x2) = K X k=0 1 k!(log x1)kfk(x0, x1, x2), (2.6) where K is sufficiently large so that N K+1u = 0, we find that ⟨w′ 3, YW3,0(u, x1)Y(w1, x2)w2⟩|p1,p2 x1=z1,x2=z2, ⟨w′ 3, Y(w1, x2)YW2,0(u, x1)w2⟩|p1,p2 x1=z1,x2=z2, K X k=0 1 k!⟨w′ 3, Y(YW1((log x1)kN ku, x0)w1, x2)w2⟩|p12,p2,p2 x0=z1−z2,x1=z1,x2=z2, (2.7) converge to the respective branches of f(z1 −z2, z1, z2) in the respective regions. Since log x1|p2 x1=z1 and (log x2 + log(1 + x0/x2)) |p12,p2 x0=z1−z2,x2=z2 both converge to the same func- tion in the region |z2| > |z1 −z2| > 0 and −π 2 < arg(z1) −arg(z2) < π 2, we can replace the third series in (2.7) with ⟨w′ 3, Y(YW1(x2 + x0)Nu, x0)w1, x2)w2⟩|p12,p2 x0=z1−z2,x2=z2, where (x2 + x0)N means P∞ k=0 1 k! (log x2 + log (1 + x0/x2))k N k, for the same result. 8 Now assume that Su = αu for some α ∈C with ℜ(α) ∈[0, 1), i.e. u ∈V [α]. 
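Before using this eigenvector assumption, here is a quick sanity check (ours, not the paper's) of the formal convention behind (x2 + x0)^N used just above: for a nilpotent operator N, the exponential definition (1 + t)^N = e^{N log(1+t)} truncates to finitely many terms and agrees, order by order in t, with the binomial series Σ_k \binom{N}{k} t^k from (1.1). The 3×3 matrix below is a hypothetical stand-in for the nilpotent part of L.

```python
import sympy as sp

t = sp.symbols('t')
N = sp.Matrix([[0, 1, 0], [0, 0, 1], [0, 0, 0]])  # hypothetical nilpotent stand-in, N**3 = 0

# Exponential convention: (1+t)^N := exp(N log(1+t)); nilpotency truncates the exponential.
L = sp.log(1 + t)
lhs = sp.eye(3) + N * L + (N * L)**2 / 2  # N**3 = 0 kills all higher terms

# Binomial-series convention: sum_k binom(N, k) t^k with binom(N, k) = N(N-1)...(N-k+1)/k!.
rhs = sp.zeros(3, 3)
term = sp.eye(3)  # binom(N, 0) = I
for k in range(8):
    rhs += term * t**k
    term = term * (N - k * sp.eye(3)) / (k + 1)  # binom(N, k+1) from binom(N, k)

# The two definitions agree as formal power series in t (checked here through order t^7).
check = (lhs - rhs).applyfunc(lambda e: sp.series(e, t, 0, 8).removeO())
print(check)  # zero matrix
```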
The equivariance property e2πix d dxYWi(gv, x) = YWi(v, x), for all v ∈V (2.8) satisfied by the g-twisted modules Wi for i = 2, 3, gives e2πix d dxYWi,0(u, x) = e2πix d dxYWi(xNu, x) = K X k=0 e2πix d dx 1 k!(log x)kYWi(N ku, x) = K X k=0 1 k!(log x + 2πi)ke2πix d dxYWi(N ku, x) = K X k=0 1 k!(log x + 2πi)kYWi(g−1N ku, x) = YWi(g−1eN(log x+2πi)u, x) = YWi(e−2πiSxNu, x) = YWi,0(e−2πiSu, x) = e−2πiαYWi,0(u, x). Hence, YWi,0(u, x1)xα 1 ∈(End Wi)[[x1, x−1 1 ]] for i = 2, 3. Since W1 is untwisted, we have YW1(u, x0) ∈(End W1)[[x0, x−1 0 ]]. If w1, w2, w′ 3 and u are homogeneous, then we can see that the coefficients in ⟨w′ 3, YW3,0(u, x1)Y(w1, x2)w2⟩= X n∈α+Z X h∈C K X k=0 ⟨w′ 3, unYh,k(w1)w2⟩x−n−1 1 x−h−1 2 (log x2)k, ⟨w′ 3, Y(w1, x2)YW2,0(u, x1)w2⟩= X n∈α+Z X h∈C K X k=0 ⟨w′ 3, Yh,k(w1)unw2⟩x−n−1 1 x−h−1 2 (log x2)k, ⟨w′ 3, Y(YW1(u, x0)w1, x2)w2⟩= X n∈Z X h∈C K X k=0 ⟨w′ 3, Yh,k(unw1)w2⟩x−n−1 0 x−h−1 2 (log x2)k (2.9) are zero unless wt w′ 3 = wt u + wt w1 + wt w2 −n −1 −h −1. Extending this to the case where w1, w2, w′ 3 and u need not be homogeneous, we see that these series have support only when the powers of x2 are real numbers added to finitely many complex numbers. By the truncation property, the real numbers that are added to the finitely many complex numbers form a strictly monotonic sequence indexed by Z>0. We recall that a unique expansion set is a subset S of C×C satisfying the condition: if a series X (α,β)∈S aα,βzα(log z)β = X (α,β)∈S aα,βeα log z(log z)β is absolutely convergent and convergent to 0 on some non-empty open subset of C× (for some chosen branch of log z), then aα,β = 0 for all (α, β) ∈S. In [Hua17], it was shown that any strictly monotonic sequence {ni}i∈Z>0 of real numbers together with a list m1, . . . , mℓof distinct real numbers produces a unique expansion set {ni + mji : i ∈Z>0, j = 1, . . . , ℓ} × {0, . . . , N} (2.10) 9 for any N ∈Z≥0. We can see that the three series in (2.9), and the three series f(x1 −x2, x1, x2), f(−x2 +x1, x1, x2) and f(x0, x2 +x0, x2) are double sums over unique expansion sets of type (2.10) with the directions of the monotonic sequences matching. Assume that S and T are unique expansion sets and we have two series X (α,β)∈S X (γ,δ)∈T aα,β,γ,δzα 1 (log z1)βzγ 2(log z2)δ and X (α,β)∈S X (γ,δ)∈T bα,β,γ,δzα 1 (log z1)βzγ 2(log z2)δ (2.11) are both absolutely convergent in the some non-empty open set of C× × C×, and furthermore converge to the same values. Then for fixed z1 (in some open set of C×), X (γ,δ)∈T  X (α,β)∈S (aα,β,γ,δ −bα,β,γ,δ)zα 1 (log z1)β  zγ 2(log z2)δ is absolutely convergent for all z2 in some open set of C× and is furthermore convergent to zero. Since T is a unique expansion set, X (α,β)∈S (aα,β,γ,δ −bα,β,γ,δ)zα 1 (log z1)β converges to zero for all (γ, δ) ∈T. Furthermore, this series converges absolutely for all z1 in an open set of C×. Since S is a unique expansion set, aα,β,γ,δ −bα,β,γ,δ = 0 for all (α, β, γ, δ) ∈S × T. Hence, the two series in (2.11) are equivalent. We now use this argument to show that the following series are equivalent. Since xα 1⟨w′ 3, YW3,0(u, x1)Y(w1, x2)w2⟩|p1,p2 x1=z1,x2=z2 = xα 1f(x0, x1, x2)|p1,p1,p2 x0=z1−z2,x1=z1,x2=z2 = xα 1f(x1 −x2, x1, x2)|p1,p2 x1=z1,x2=z2 in the region |z1| > |z2| > 0 and −π 2 < arg(z1 −z2) −arg(z1) < π 2, we have xα 1f(x1 −x2, x1, x2) = xα 1⟨w′ 3, YW3,0(u, x1)Y(w1, x2)w2⟩∈C{x2}[[x1, x−1 1 ]][log x2] being lower truncated in x2. 
Since xα 1⟨w′ 3, Y(w1, x2)YW2,0(u, x1)w2⟩|p1,p2 x1=z1,x2=z2 = xα 1f(x0, x1, x2)|p2,p1,p2 x0=z1−z2,x1=z1,x2=z2 = xα 1f(−x2 + x1, x1, x2)|p1,p2 x1=z1,x2=z2 in the region |z2| > |z1| > 0 and −3π 2 < arg(z1 −z2) −arg(z1) < −π 2, we have xα 1f(−x2 + x1, x1, x2) = xα 1⟨w′ 3, Y(w1, x2)YW2,0(u, x1)w2⟩∈C{x2}[[x1, x−1 1 ]][log x2] being lower truncated in x1. Since (x2 + x0)α⟨w′ 3, Y(YW1((x2 + x0)Nu, x0)w1, x2)w2⟩|p12,p2 x0=z1−z2,x2=z2 = (x2 + x0)αf(x0, x1, x2)|p12,p2,p2 x0=z1−z2,x1=z1,x2=z2 = (x2 + x0)αf(x0, x2 + x0, x2)|p12,p2 x0=z1−z2,x2=z2 10 in the region |z2| > |z1 −z2| > 0 and −π 2 < arg(z1) −arg(z2) < π 2, we have (x2 + x0)αf(x0, x2 + x0, x2) = (x2 + x0)α⟨w′ 3, Y(YW1((x2 + x0)Nu, x0)w1, x2)w2⟩ ∈C{x2}[[x0, x−1 0 ]][log x2] being lower truncated in x0. So we have xα 1f(x0, x1, x2) ∈C{x2}[x0, x−1 0 , x1, x−1 1 , log x2] such that the formal series xα 1⟨w′ 3, YW3,0(u, x1)Y(w1, x2)w2⟩= xα 1f(x1 −x2, x1, x2) ∈C{x2}[[x1, x−1 1 ]][log x2], xα 1⟨w′ 3, Y(w1, x2)YW2,0(u, x1)w2⟩= xα 1f(−x2 + x1, x1, x2) ∈C{x2}[[x1, x−1 1 ]][log x2], (x2 + x0)α⟨w′ 3, Y(YW1((x2 + x0)Nu, x0)w1, x2)w2⟩ = (x2 + x0)αf(x0, x2 + x0, x2) ∈C{x2}[[x0, x−1 0 ]][log x2], are lower truncated in x2, x1 and x0, respectively. Since we can multiply both sides of x−1 0 δ x1 −x2 x0  −x−1 0 δ −x2 + x1 x0  = x−1 1 δ x2 + x0 x1  by xα 1f(x0, x1, x2), which has no logarithms of x0 and x1 and only integral powers of x0 and x1, we have x−1 0 δ x1 −x2 x0  xα 1f(x1 −x2, x1, x2) −x−1 0 δ −x2 + x1 x0  xα 1f(−x2 + x1, x1, x2) = x−1 1 δ x2 + x0 x1  (x2 + x0)αf(x0, x2 + x0, x2), hence x−1 0 δ x1 −x2 x0  xα 1⟨w′ 3, YW3,0(u, x1)Y(w1, x2)w2⟩−x−1 0 δ −x2 + x1 x0  xα 1⟨w′ 3, Y(w1, x2)YW2,0(u, x1)w2⟩ = x−1 1 δ x2 + x0 x1  (x2 + x0)α⟨w′ 3, Y(YW1((x2 + x0)Nu, x0)w1, x2)w2⟩, for each fixed u ∈V , w1 ∈W1 and for all w2 ∈W2, w′ 3 ∈W ′ 3. Hence, we have the following result. Proposition 2.3. Let Y(·, x)· : W1⊗W2 →W3{x}[log x] be an intertwining operator of g 1 g  -type. Then, for all u ∈V [α], α ∈C with α ∈[0, 1), and for all w1 ∈W1, we have x−1 0 δ x1 −x2 x0  xα 1YW3,0(u, x1)Y(w1, x2) −x−1 0 δ −x2 + x1 x0  xα 1Y(w1, x2)YW2,0(u, x1) = x−1 1 δ x2 + x0 x1  Y YW1 (x2 + x0)Lu, x0  w1, x2  . (2.12) Here we have used a more condensed form of writing (x2 + x0)L = (x2 + x0)S(x2 + x0)N, since S and N commute. 11 Recalling that YWi,0(x−Nu, x) = YWi(u, x), for i = 2, 3, and observing this Jacobi identity holds for u replaced with N ku (another eigenvector of S with eigenvalue α), we can multiply both sides by (−1)k k! (log x1)k, and sum over k = 0, . . . , K to obtain x−1 0 δ x1 −x2 x0  YW3(u, x1)Y(w1, x2) −x−1 0 δ −x2 + x1 x0  Y(w1, x2)YW2(u, x1) = x−1 1 δ x2 + x0 x1  Y YW1 x2 + x0 x1 L u, x0 ! w1, x2 ! . Since this form of the Jacobi identity makes no reference to the eigenvalue of u, we can extend this to all u ∈V that are sums of generalized eigenvectors of g. Since V is spanned by generalized eigenvectors of g, we have the following general result. Proposition 2.4. Let Y(·, x)· : W1⊗W2 →W3{x}[log x] be an intertwining operator of g 1 g  -type. Then, for all u ∈V and w1 ∈W1, we have x−1 0 δ x1 −x2 x0  YW3(u, x1)Y(w1, x2) −x−1 0 δ −x2 + x1 x0  Y(w1, x2)YW2(u, x1) = x−1 1 δ x2 + x0 x1  Y YW1 x2 + x0 x1 L u, x0 ! w1, x2 ! . (2.13) In what follows, the subscript of YWi and the 0 in un,0 will be dropped for brevity. The Jacobi identities (2.12) and (2.13) allow us to work with intertwining operators using for- mal calculus. Equation (2.12) has the benefit of being free of log x1 with only integral powers of x1, allowing residues with respect to x1 to be taken. 
Since both equations are free of log x0 with only integral powers of x0, we can use (2.13) when taking residues with respect to x0. We will now prove several results using the Jacobi identities. Given u ∈V [α] for α ∈C with ℜ(α) ∈[0, 1), we define Y +(u, x) := K X k=0 X n∈α+Z<0 un,kx−n−1(log x)k and Y −(u, x) := K X k=0 X n∈α+Z≥0 un,kx−n−1(log x)k. (2.14) We use the notation for regular and singular parts of Y (u, x) because they contain the non-negative and negative integral powers of x, respectively, when W is untwisted. Lemma 2.5. Let u ∈V [α] for α ∈C with ℜ(α) ∈[0, 1), and let w1 ∈W1. Then Y +(u, x2)Y(w1, x2) + Y(w1, x2)Y −(u, x2) = X k≥0 x−k 2 Y L k  u  −1+k w1, x2  . (2.15) 12 Proof. Multiplying the right-hand side of (2.12) by x−1 0 and then taking Resx0 Resx1 gives Resx0 x−1 0 Y(Y ((x2 + x0)Lu, x0)w1, x2) = Resx0 x−1 0 X n∈Z X k≥0 Y L k  xL−k 2 u  n w1, x2  xk 0x−n−1 0 = X k≥0 Y L k  xL−k 2 u  −1+k w1, x2  And doing the same to the left-hand side of (2.12) gives Resx1 xα 1 (x1 −x2)−1Y0(u, x1)Y(w1, x2) −(−x2 + x1)−1Y(w1, x2)Y0(u, x1)  = Resx1  X k∈Z≥0 x−1−k 1 xk 2 X n∈Z uα+nx−n−1 1 Y(w1, x2) + X k∈Z≥0 x−1−k 2 xk 1Y(w1, x2) X n∈Z uα+nx−n−1 1   = X n∈Z<0 uα+nx−n−1 2 Y(w1, x2) + Y(w1, x2) X n∈Z≥0 uα+nx−n−1 2 = xα 2 X n∈α+Z<0 unx−n−1 2 Y(w1, x2) + xα 2Y(w1, x2) X n∈α+Z≥0 unx−n−1 2 = xα 2Y + 0 (u, x2)Y(w1, x2) + xα 2Y(w1, x2)Y − 0 (u, x2), where we have defined Y + 0 (u, x) := X n∈α+Z<0 unx−n−1 and Y − 0 (u, x) := X n∈α+Z≥0 unx−n−1. (2.16) Hence, we have Y + 0 (u, x2)Y(w1, x2) + Y(w1, x2)Y − 0 (u, x2) = X k≥0 Y L k  xN−k 2 u  −1+k w1, x2  . (2.17) Replacing u with (−1)j j! (log x2)jN ju and summing over j = 0, . . . , K gives the desired identity. Lemma 2.6. Let u ∈V [α] for α ∈C with ℜ(α) ∈[0, 1), and let w1 ∈W1. Then [Y −(u, x1), Y(w1, x2)] = X j,k≥0 (x1 −x2)−1−jx−k 2 Y   x2 x1 L L k  u ! j+k w1, x2  , [Y +(u, x1), Y(w1, x2)] = − X j,k≥0 (−x2 + x1)−1−jx−k 2 Y   x2 x1 L L k  u ! j+k w1, x2  . (2.18) 13 Proof. From equation (2.13), we have [Y (u, x1), Y(w1, x2)] = Resx0  x−1 0 δ x1 −x2 x0  Y (u, x1)Y(w1, x2) −x−1 0 δ −x2 + x1 x0  Y(w1, x2)Y (u, x1)  = Resx0 x−1 1 δ x2 + x0 x1  Y Y x2 + x0 x1 L u, x0 ! w1, x2 ! = Resx0 ex0 ∂ ∂x2  x−1 1 δ x2 x1  Y Y x2 x1 L (1 + x0/x2)L u, x0 ! w1, x2 ! = Resx0 X j≥0 1 j!xj 0  ∂ ∂x2 j  x−1 1 δ x2 x1  X n∈Z X k≥0 Y x2 x1 L L k  u ! n w1, x2 ! xk 0x−k 2 x−n−1 0 = X j≥0 X k≥0 1 j!  ∂ ∂x2 j  x−1 1 δ x2 x1  Y   x2 x1 L L k  u ! j+k w1, x2  x−k 2 . (2.19) Using 1 j!  ∂ ∂x2 j x−1 1 δ x2 x1  = (x1 −x2)−1−j −(−x2 + x1)−1−j, we can separate the terms with x−α+n 1 , for n ∈Z<0, from the terms with x−α+n 1 , for n ∈Z≥0. Hence, we obtain the desired identities. Lemma 2.7. Let u ∈V [α] for α ∈C with ℜ(α) ∈[0, 1), and let w1 ∈W1. Then [uα−1, Y(w1, x2)] = X k≥0 x−1−k 2 Y  xL 2 L −1 k  u  k w1, x2  . (2.20) Proof. Applying Resx0 Resx1 x−1 1 to both sides of (2.12) gives [uα−1, Y(w1, x2)] = Resx0(x2 + x0)−1Y Y (x2 + x0)Lu, x0  w1, x2  = Resx0 Y Y (x2 + x0)L−1u, x0  w1, x2  = Resx0 X n∈Z X k≥0 Y  xL 2 L −1 k  u  n w1, x2  x−1−k 2 xk 0x−n−1 0 = X k≥0 x−1−k 2 Y  xL 2 L −1 k  u  k w1, x2  , our desired result. These lemmas will be used in the following section to derive differential equations for g 1 g  -type intertwining operators among twisted modules satisfying certain finiteness conditions. The special version below will be used to derive a “twisted KZ equation”. We say that an element w of a twisted module W is of lowest weight if unw = 0 for all u ∈V+ and n ∈C with ℜ(n) > 0. 14 Lemma 2.8. 
Let u ∈V [α] (1) with α ∈C such that ℜ(α) ∈[0, 1), and let w1 ∈W1 be of lowest weight. Then Y(u−1w1, x2) = Y +(u, x2)Y(w1, x2) + Y(w1, x2)Y −(u, x2) −x−1 2 Y((Lu)0w1, x2), (2.21) [Y −(u, x1), Y(w1, x2)] = (x1 −x2)−1Y  (x2/x1)L u  0 w1, x2  , [Y +(u, x1), Y(w1, x2)] = (x2 −x1)−1Y  (x2/x1)L u  0 w1, x2  . (2.22) Proof. Follows from the equations (2.15) and (2.18) using unw1 = 0 when ℜ(n) > 0. 3 Differential equations for g 1 g  -type intertwining operators In this section, we generalize Huang's method in [Hua05] to derive differential equations for products of g 1 g  -type intertwining operators. The main idea of his method is to show that differential equations exist as a consequence of certain finiteness conditions. Definition 3.1. Let V be a vertex operator algebra with an automorphism g. Given a g-twisted V -module W, define C1(W) to be the subspace of W spanned by the elements of the form uα−1,0w for all u ∈V [α] + , and α ∈C with ℜ(α) ∈[0, 1). We say that W is C1-cofinite if W satisfies the condition dim W/C1(W) < ∞. Definition 3.2. Let W = ` n∈C W[n] be a C-graded vector space. We say that W is discretely graded (also known as quasi-finite dimensional) if dim a n∈C ℜ(n)<r W[n] < ∞, for all r ∈R. (3.1) Remark 3.3. When g = idV = 1, the above definition of C1-cofiniteness reduces to the usual notion of C1-cofiniteness (see, for example, [Hua05]). In this case, there are many well-known examples of C1-cofinite discretely graded (untwisted) V -modules. When g ̸= 1, there are non-trivial C1-cofinite discretely graded g-twisted V -modules, for example, those constructed in [Hua20]. We now provide the setting for the current section. Let N ∈Z>0. Let W1, . . . , WN be untwisted V -modules, and let f W0, . . . , f WN be g-twisted V -modules. We assume that all modules are C1-cofinite and discretely graded. Let Yi : Wi ⊗f Wi →f Wi−1{x}[log x] be intertwining operators, for i = 1, . . . , N. To aid with notation later on, we also denote f W ′ 0 by W0 (a g−1-twisted V -module) and f WN by WN+1. Let A be the additive subgroup of C generated by the union of {1} and the set of eigenvalues of S. Let R be the subring of C{x1, . . . , xN}[(xi −xj)−1 : i, j = 1, . . . , N, i < j][log x1, . . . , log xN] generated by {xα i , (xi −xj)−1, log xi : i, j = 1, . . . , N, i < j, α ∈A}. We will write the elements of R in function notation as f(x1, . . . , xN) with explicit dependence on x1, . . . , xN only. [Figure 1: A diagram summarizing the untwisted and twisted V -modules (W1, . . . , WN untwisted; f W0, . . . , f WN g-twisted; W ′ 0 = f W0 and WN+1 = f WN), the automorphism twisting each module, and the intertwining operators Y1, . . . , YN in the chiral correlation function. All intertwining operators are of g 1 g  -type.] In the case that g acts semisimply on V , we can remove all of the log xi from R. This occurs in the important case that g has finite order (see Section 6). We have the map ι|x1|>···>|xN| : R →C{x1, . . . , xN}[log x1, . . . , log xN] (3.2) that expands (xi −xj)−1 in non-negative powers of xj when j > i. Note that R is Noetherian if A is finitely generated. For example, if g has finite order t, then A = 1 tZ. This will be important in Section 6. In the case that V is finitely generated, we can project the finitely many generators onto the subspaces V(n) and consider the finitely many subspaces V(n) where the image of the projection is non-trivial. The automorphism g decomposes these V(n) into eigenspaces for S. We can then take a basis of eigenvectors for S in each of these subspaces.
Now we have a finite generating set for V consisting of eigenvectors for S. Hence, A is finitely generated when V is finitely generated. Consider the R-module T = R ⊗W0 ⊗· · · ⊗WN+1. (3.3) Since T is isomorphic as an R-module to (R ⊗W0)⊗R · · ·⊗R (R ⊗WN+1), we will sometimes write elements in T as (f0(x1, . . . , xN)w0) ⊗· · · ⊗(fN+1(x1, . . . , xN)wN+1) in place of f0(x1, . . . , xN) · · · fN+1(x1, . . . , xN) ⊗w0 ⊗· · · ⊗wN+1. 16 The C-gradings by conformal weight on W0, . . . , WN+1, together with the trivial grading on R, induce a grading on T called the weight. We denote the subspace spanned by all homogeneous elements of weight n ∈C with ℜ(n) = r by T(r) so that T = ` r∈R T(r). Observe that the discrete gradings on W0, . . . , WN+1, ensure that W0⊗· · ·⊗WN+1 is a discretely graded vector space. Hence, each subspace Fr(T) = a s∈R≤r T(s) (3.4) is a finitely-generated R-module. Hence, we have a filtration Fr(T) ⊆Fs(T), for r ≤s, of finitely-generated submodules of T. (The chosen grading provides a filtration of finitely-generated submodules, whereas the grading wt(w0 ⊗· · · ⊗wN+1) = −wt w0 + · · · + wt wN+1 does not.) Define the map ϕ : T →C{x1, . . . , xN}[log x1, . . . , log xN], ϕ(f(x1, . . . , xN) ⊗w0 ⊗· · · ⊗wN+1) = ι|x1|>···>|xN|f(x1, . . . , xN)⟨w0, Y1(w1, x1) · · · YN(wN, xN)wN+1⟩ (3.5) that evaluates elements in T as matrix coefficients using Y1, . . . , YN. We seek to define a submod- ule J of T that lies in the kernel of ϕ, satisfying certain properties to be discussed later in Lemma 3.4. Given u ∈V [α] for α ∈C with α ∈[0, 1), we use the notation α′ = ( −α if ℜ(α) = 0, 1 −α if ℜ(α) > 0. (3.6) This is the eigenvalue of Sg−1 corresponding to u. We also drop the subscripts from YW and Yi, which can be deduced by looking at the subscripts of their arguments. We now define elements that will generate the submodule J of T. Let u ∈V [α] and wi ∈Wi for i = 0, . . . , N + 1. Then for ℓ= 1, . . . , N, by (2.15) and (2.18), we observe that 0 = ⟨w0, Y(w1, x1) · · ·  Y +(u, xℓ)Y(wℓ, xℓ) + Y(wℓ, xℓ)Y −(u, xℓ) − X k≥0 x−k ℓY L k  u  −1+k wℓ, xℓ   · · · Y(wN, xN)wN+1⟩ = ⟨w0, Y +(u, xℓ)Y(w1, x1) · · · Y(wN, xN)wN+1⟩ + X p=1,...,N p̸=ℓ ⟨w0, Y(w1, x1) · · · X j,k≥0 : (xℓ−xp)−1−j : x−k p Y   xp xℓ L L k  u ! j+k wp, xp  · · · Y(wN, xN)wN+1⟩ −⟨w0, Y(w1, x1) · · · X k≥0 x−k ℓY L k  u  −1+k wℓ, xℓ  · · · Y(wN, xN)wN+1⟩ + ⟨w0, Y(w1, x1) · · · Y(wN, xN)Y −(u, xℓ)wN+1⟩, 17 where we use the notation : (xℓ−xp)−1−j : := ι|x1|>···>|xN|(xℓ−xp)−1−j = ( (xℓ−xp)−1−j if ℓ< p, (−xp + xℓ)−1−j if p < ℓ. Hence, we define Aℓ(u, w0, . . . , wN+1) = − X j≥0 K X k=0 x−α+j ℓ (log xℓ)ku∗ α−1−j,kw0 ⊗w1 ⊗· · · ⊗wN+1 − X p=1,...,N p̸=ℓ X j,k≥0 (xℓ−xp)−1−jx−k p w0 ⊗· · · ⊗ xp xℓ L L k  u ! j+k wp ⊗· · · ⊗wN+1 + X k≥0 x−k ℓw0 ⊗· · · ⊗ L k  u  −1+k wℓ⊗· · · ⊗wN+1 − X j≥0 K X k=0 x−α−j−1 ℓ (log xℓ)kw0 ⊗· · · ⊗wN ⊗uα+j,kwN+1, (3.7) where u∗ n,k denotes the adjoint of an operator un,k. Observe that the lower truncation property for the modules keeps the sums finite, hence Aℓ(u, w0, . . . , wN+1) is indeed an element of T in the kernel of ϕ. By (2.20), we have ⟨w0, uα−1Y(w1, x1) · · · Y(wN, xN)wN+1⟩−⟨w0, Y(w1, x1) · · · Y(wN, xN)uα−1wN+1⟩ = N X p=1 ⟨w0, Y(w1, x1) · · · X k≥0 x−1−k p Y  xL p L −1 k  u  k wp, xp  · · · Y(wN, xN)wN+1⟩. Hence, we define AN+1(u, w0, . . . , wN+1) = −u∗ α−1w0 ⊗w1 ⊗· · · ⊗wN+1 + N X p=1 X k≥0 x−1−k p w0 ⊗· · · ⊗  xL p L −1 k  u  k wp ⊗· · · ⊗wN+1 + w0 ⊗· · · ⊗wN ⊗uα−1wN+1, (3.8) which is an element of T in the kernel of ϕ. 
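For orientation, in the simplest case N = 1 and g = 1 (so α = α′ = 0, L = N = 0 and K = 0), the generator (3.7) specializes to the untwisted expression familiar from [Hua05]; written out (a direct specialization of the formula above, in our display notation):

```latex
A_1(u, w_0, w_1, w_2)
  = -\sum_{j\ge 0} x_1^{\,j}\, u^*_{-1-j} w_0 \otimes w_1 \otimes w_2
    + w_0 \otimes u_{-1} w_1 \otimes w_2
    - \sum_{j\ge 0} x_1^{-j-1}\, w_0 \otimes w_1 \otimes u_j w_2 .
```

Applying ϕ to this element recovers the usual commutator formula for a single intertwining operator, which is the prototype for why J lies in the kernel of ϕ.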
Recall that, by Theorem 6.1 in [Hua18], we have the intertwining operators A±(Y) defined by ⟨A±(Y)(w1, x)w′ 3, w2⟩= ⟨w′ 3, Y(exL(1)e±πiL(0)(x−L(0))2w1, x−1)w2⟩, which is equivalent to ⟨A±(Y)((x−L(0))2e∓πiL(0)e−x−1L(1)w1, x−1)w′ 3, w2⟩= ⟨w′ 3, Y(w1, x)w2⟩. We define the operators O±(x), O−1 ± (x) : W{x}[log x] →W{x}[log x], O±(x)w = exL(1)e±πiL(0)(x−L(0))2w, O−1 ± (x)w = (xL(0))2e∓πiL(0)e−xL(1)w, (3.9) 18 so we can briefly write ⟨A±(Y)(w1, x)w′ 3, w2⟩= ⟨w′ 3, Y(O±(x)w1, x−1)w2⟩, ⟨A±(Y)(O−1 ± (x−1)w1, x−1)w′ 3, w2⟩= ⟨w′ 3, Y(w1, x)w2⟩. By (2.20), we have [uα′−1, A±(Y)(O−1 ± (x−1 p )wp, x−1 p )] = X k≥0 x1+k p A±(Y)  x−L p L −1 k  u  k O−1 ± (x−1 p )wp, x−1 p  . So, ⟨uα′−1w0, Y(w1, x1) · · · Y(wN, xN)wN+1⟩ = ⟨A±(Y)(O−1 ± (x−1 N )wN, x−1 N ) · · · A±(Y)(O−1 ± (x−1 1 )w1, x−1 1 )uα′−1w0, wN+1⟩ = ⟨uα′−1A±(Y)(O−1 ± (x−1 N )wN, x−1 N ) · · · A±(Y)(O−1 ± (x−1 1 )w1, x−1 1 )w0, wN+1⟩ − N X p=1 X k≥0 x1+k p ⟨A±(YN)(O−1 ± (x−1 N )wN, x−1 N ) · · · A±(Y)  x−L p L −1 k  u  k O−1 ± (x−1 p )wp, x−1 p  · · · A±(Y1)(O−1 ± (x−1 1 )w1, x−1 1 )w0, wN+1⟩ = ⟨w0, Y1(w1, x1) · · · YN(wN, xN)u∗ α′−1wN+1⟩ − N X p=1 X k≥0 x1+k p ⟨w0, Y1(w1, x1) · · · Y  O±(x−1 p )  x−L p L −1 k  u  k O−1 ± (x−1 p )wp, xp  · · · YN(wN, xN)wN+1⟩. Hence, we define A0(u, w0, . . . , wN+1) = uα′−1w0 ⊗w1 ⊗· · · ⊗wN+1 + N X p=1 X k≥0 x1+k p w0 ⊗· · · ⊗O±(x−1 p )  x−L p L −1 k  u  k O−1 ± (x−1 p )wp ⊗· · · ⊗wN+1 −w0 ⊗· · · ⊗wN ⊗u∗ α′−1wN+1, (3.10) which is an element of T in the kernel of ϕ. Define J to be the R-submodule of T generated by Aj(u, w0, . . . , wN+1) for all u ∈V [α] + , α ∈C with ℜ(α) ∈[0, 1), and all wi ∈Wi, i = 0, . . . , N + 1, and for all j = 0, . . . , N + 1. Define Fr(J) = Fr(T) ∩J so that we have the filtration Fr(J) ⊆Fs(J), r ≤s. The following lemma explains why we have chosen the generators Ai(u, w0, . . . , wN+1) to define J. Lemma 3.4. Let u ∈V [α] + , for ℜ(α) ∈[0, 1), wi ∈Wi for i = 0, . . . , N +1, be weight homogeneous. If ℜ(wt uα′−1w0 ⊗· · ·⊗wN+1) = s, then uα′−1w0 ⊗· · ·⊗wN+1 ∈Fs(J)+Fr(T) for some r < s. For p = 1, . . . , N, if ℜ(wt w0⊗· · ·⊗u−1wp⊗· · ·⊗wN+1) = s, then w0⊗· · ·⊗u−1wp⊗· · ·⊗wN+1 ∈Fs(J)+ Fr(T) for some r < s. If ℜ(wt w0⊗· · ·⊗uα−1wN+1) = s, then w0⊗· · ·⊗uα−1wN+1 ∈Fs(J)+Fr(T) for some r < s. 19 Proof. We use σ to denote ℜ(wt w0 + · · · + wt wN+1). Let s = ℜ(wt uα′−1w0 ⊗· · · ⊗wN+1) = wt u −ℜ(α′) + σ. Then ℜ(wt w0 ⊗· · · ⊗u∗ α′−1wN+1) = −wt u + ℜ(α′) + σ < σ < s, ℜ(wt w0 ⊗· · · ⊗ukwp ⊗· · · ⊗wN+1) = wt u −k −1 + σ ≤wt u −1 + σ < wt u −ℜ(α′) + σ = s, for all k ≥0 and p = 1, . . . , N. Since wt L = 0 and wt L(1) = −1, the real part of the weight of each homogeneous component of x1+k p w0 ⊗· · · ⊗O±(x−1 p ) x−L p L−1 k  u  k O−1 ± (x−1 p )wp ⊗· · · ⊗wN+1 is no more than ℜ(wt w0 ⊗. . . ukwp ⊗· · · ⊗wN+1), which is less than s. Hence uα′−1w0 ⊗· · · ⊗wN+1 = A0(u, w0, . . . , wN+1) − N X p=1 X k≥0 x1+k p w0 ⊗· · · ⊗O±(x−1 p )  x−L p L −1 k  u  k O−1 ± (x−1 p )wp ⊗· · · ⊗wN+1 + w0 ⊗· · · ⊗u∗ α′−1wN+1 ∈Fs(J) + Fr(T), for some r < s. Now let s = ℜ(wt w0 ⊗· · · ⊗u−1wp ⊗· · · ⊗wN+1) = wt u + σ. Then ℜ(wt u∗ α−1−j,kw0 ⊗w1 ⊗· · · ⊗wN+1) = ℜ(α) −j −wt u + σ < σ < s, ℜ(wt w0 ⊗· · · ⊗ukwp ⊗· · · ⊗wN+1) = wt u −k −1 + σ ≤σ < s, ℜ(wt w0 ⊗· · · ⊗wN ⊗uα+j,kwN+1) = wt u −ℜ(α) −j −1 + σ < wt u + σ = s, for all j, k ≥0 and p = 1, . . . , N. Similarly to before, since wt L = 0, we have w0 ⊗· · · ⊗u−1wℓ⊗· · · ⊗wN+1 = Aℓ(u, w0, . . . , wN+1) + X j≥0 K X k=0 x−α+j ℓ (log xℓ)ku∗ α−1−j,kw0 ⊗w1 ⊗· · · ⊗wN+1 + X p=1,...,N p̸=ℓ X j,k≥0 : (xℓ−xp)−1−j : x−k p w0 ⊗· · · ⊗ xp xℓ L L k  u ! 
j+k wp ⊗· · · ⊗wN+1 − X k>0 x−k ℓw0 ⊗· · · ⊗ L k  u  −1+k wℓ⊗· · · ⊗wN+1 + X j≥0 K X k=0 x−α−j−1 ℓ (log xℓ)kw0 ⊗· · · ⊗wN ⊗uα+j,kwN+1 ∈Fs(J) + Fr(T) for some r < s. Now let s = ℜ(wt w0 ⊗· · · ⊗wN ⊗uα−1wN+1) = wt u −ℜ(α) + σ. Then ℜ(wt u∗ α−1w0 ⊗w1 ⊗· · · ⊗wN+1) = ℜ(α) −wt u + σ < σ < s, ℜ(wt w0 ⊗· · · ⊗ukwp ⊗· · · ⊗wN+1) = wt u −k −1 + σ < wt u −ℜ(α) + σ = s, for all k ≥0 and p = 1, . . . , N. Similarly to before, since wt L = 0, we have w0 ⊗· · · ⊗wN ⊗uα−1wN+1 = AN+1(u, w0, . . . , wN+1) + u∗ α−1w0 ⊗w1 ⊗· · · ⊗wN+1 − N X p=1 X k≥0 x−1−k p w0 ⊗· · · ⊗  xL p L −1 k  u  k wp ⊗· · · ⊗wN+1 ∈Fs(J) + Fr(T) for some r < s. Proposition 3.5. There exists M ∈Z such that for any r ∈R, Fr(T) ⊆Fr(J) + FM(T). Furthermore, T = J + FM(T). Proof. Since dim Wi/C1(Wi) < ∞, for i = 0, . . . , N + 1, there is a finite-dimensional subspace Ui ⊆Wi such that Wi = Ui + C1(Wi). Hence we can find Mi ∈Z sufficiently large such that any element in (Wi)[m] is necessarily in C1(Wi) whenever ℜ(m) > Mi. Taking M = PN+1 i=0 Mi, we see that any homogeneous element in T that has real component of its weight larger than M must be in N+1 X i=0 R ⊗W0 ⊗· · · ⊗C1(Wi) ⊗· · · ⊗WN+1. Hence, we have M ∈Z such that a m>M T(m) ⊆ N+1 X i=0 R ⊗W0 ⊗· · · ⊗C1(Wi) ⊗· · · ⊗WN+1. (3.11) We immediately have Fr(T) ⊆FM(T) ⊆Fr(J) + FM(T) for all r ≤M. Since T is discretely graded, we can use induction on r ≥M. Assume that we have some s > M such that Fr(T) ⊆ Fr(J) + FM(T) for all r < s. We want to show that T(s) ⊆Fs(J) + FM(T). Since T(s) ⊆` m>M T(m), we can consider elements in the summands of the right-hand side of (3.11), without loss of generality. By Lemma 3.4, these elements are in Fs(J) + Fr(T) for some r < s. By the induction hypothesis, Fr(T) ⊆Fr(J) + FM(T) ⊆Fs(J) + FM(T), so these elements are in Fs(J) + FM(T), and we have proved the first statement. Furthermore, we have T = [ r∈R Fr(T) ⊆ [ r∈R (Fr(J) + FM(T)) ⊆ [ r∈R Fr(J) + FM(T) = J + FM(T). And J + FM(T) ⊆T, so we have T = J + FM(T). Corollary 3.6. The R-module T/J is finitely generated. Proof. Since T = J + FM(T), the image of FM(T) in T/J is all of T/J. Since FM(T) is finitely generated, so is T/J. For w ∈T, we use [w] to denote the equivalence class w + J in T/J. Consider a change of variables from (z1, . . . , zN) to (ζ1, . . . , ζN) given by zj = N X i=1 bijζi + γj, (3.12) for B = (bij)N i,j=1 ∈GL(N, C) and (γ1, . . . , γN) ∈CN. Then ∂ ∂ζi = N X j=1 ∂zj ∂ζi ∂ ∂zj = N X j=1 bij ∂ ∂zj . (3.13) Define the operators bLζℓ on W0 ⊗· · · ⊗WN+1 by bLζℓ(w0 ⊗· · · ⊗wN+1) = N X j=1 bℓjw0 ⊗· · · ⊗L(−1)wj ⊗· · · ⊗wN+1, (3.14) for each ℓ= 1, . . . , N. Corollary 3.7. Assume that R is Noetherian. For any wi ∈Wi, i = 0, . . . , N + 1, and for any ℓ= 1, . . . , N, there exist mℓ∈Z≥0 and ak,ℓ(x1, . . . , xN) ∈R for k = 1, . . . , mℓ such that [bLmℓ ζℓ(w0 ⊗· · · ⊗wN+1)] + a1,ℓ(x1, . . . , xN)[bLmℓ−1 ζℓ (w0 ⊗· · · ⊗wN+1)] + · · · + amℓ−1,ℓ(x1, . . . , xN)[bLζℓ(w0 ⊗· · · ⊗wN+1)] + amℓ,ℓ(x1, . . . , xN)[w0 ⊗· · · ⊗wN+1] = 0. (3.15) Proof. Consider the ascending chain (Mj)∞ j=0 of submodules Mj of T/J generated by {[bLk ζℓ(w0 ⊗· · · ⊗wN+1)] : k ∈Z, 0 ≤k ≤j}. (3.16) Since R is a Noetherian ring and T/J is a finitely generated R-module, this chain stabilizes. So there is some mℓ∈Z≥0 such that Mmℓ= Mmℓ−1. Hence [bLmℓ ζℓ(w0 ⊗· · · ⊗wN+1)] ∈Mmℓ−1, and we obtain our desired equation. Theorem 3.8. Assume that R is Noetherian. Then there exists a system of differential equations  ∂ ∂ζℓ mℓ ψ + mℓ X k=1 ak,ℓ(z1, . . . , zN)  ∂ ∂ζℓ mℓ−k ψ = 0, ℓ= 1, . . .
, N, (3.17) satisfied by the series ⟨w0, Y1(w1, z1) · · · YN(wN, zN)wN+1⟩ in the region |z1| > · · · > |zN|. 22 Proof. Since J is contained in the kernel of ϕ, we have ϕ : T/J →C{x1, . . . , xN}[log x1, . . . , log xN]. Using this map on (3.15) followed by the L(−1)-derivative property, we obtain N X j=1 bℓj ∂ ∂xj !mℓ ⟨w0, Y1(w1, x1) · · · YN(wN, xN)wN+1⟩ + mℓ X k=1 ι|x1|>···>|xN|ak,ℓ(x1, . . . , xN) N X j=1 bℓj ∂ ∂xj !mℓ−k ⟨w0, Y1(w1, x1) · · · YN(wN, xN)wN+1⟩= 0. (3.18) Evaluating x1 = z1, . . . , xN = zN gives the desired differential equations. Remark 3.9. Recall that R is Noetherian when V is finitely generated, for which there are many natural examples. Furthermore, R is Noetherian when the order of g is finite, which is an assumption in Section 6. Remark 3.10. We have shown that the series ⟨w0, Y1(w1, z1) · · · YN(wN, zN)wN+1⟩formally sat- isfies (3.17), but this does not show that the series converges. Since there are non-integral powers and logarithms of zi, we cannot use the usual power series convergence for analytic systems (even if we reduce to ordinary differential equations in zi). To show convergence, we instead use the theory of differential equations with regular singular points in Section 6 while assuming that g has finite order. We do not have a method for showing the convergence when g is a general automorphism. 4 The twisted KZ equations In this section, we work with the affine vertex operator algebras Vˆg(κ, 0) as an explicit family of vertex operator algebras. A slightly different method is used than in the previous section to derive a system of differential equations when w0, . . . , wN+1 are lowest weight vectors. When the modules are untwisted, this system is well known in the physics literature as “the KZ equations” [KZ84]. When W ′ 0 and WN+1 are g-twisted (by a semisimple inner-automorphism of g), this system is known in the physics literature as the “twisted KZ equations” (for “inner-automorphic WZW orb- ifolds”) [BHO01]. Observe that under the condition that W1, . . . , WN are untwisted and W ′ 0 and WN+1 are g-twisted, the intertwining operators are necessarily of g 1 g  -type. This is an important assumption used by de Boer, Halpern and Obers to derive the twisted KZ equations. We keep the assumption of g 1 g  -type intertwining operators, but we will generalize the twisted KZ equations so that g can be an arbitrary automorphism of the vertex operator algebra Vˆg(κ, 0). In the following section, we will show that the solutions to the twisted KZ equations have “regular singularities” when g has finite order. Let g be a finite-dimensional simple Lie algebra with an invariant symmetric bilinear form (·, ·). Let g be an automorphism of g. Let L, S and N be operators of g such that g = e2πiL, S and N are the semisimple and nilpotent parts of L, and the eigenvalues of S have real part in [0, 1). As defined in [Hua21a], the twisted affine Lie algebra ˆg[g] is spanned by the elements k, a ⊗tn = an, for a ∈g and n ∈C such that ga = e2πina, (4.1) 23 with Lie bracket given by [am, bn] = [a, b]m+n + ((m + N)a, b)δm+n,0k, (4.2) [k, am] = 0. (4.3) We have the subspaces ˆg[g] + := Span{an ∈ˆg[g] : ℜ(n) > 0}, ˆg[g] −:= Span{an ∈ˆg[g] : ℜ(n) < 0}, ˆg[g] I := Span{an ∈ˆg[g] : ℜ(n) = 0}, ˆg[g] 0 := ˆg[g] I ⊕Ck. Furthermore, ˆg[g] + , ˆg[g] −and ˆg[g] 0 are subalgebras of ˆg[g], giving a triangular decomposition ˆg[g] = ˆg[g] + ⊕ˆg[g] 0 ⊕ˆg[g] −. (4.4) Note that when g = 1, we obtain the affine Lie algebra ˆg[1] = ˆg. 
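As a toy instance of the bracket (4.2) (our normalizations and names, not taken from the paper), take g = sl2 with the order-2 inner automorphism that negates the root vectors e and f, so that under the convention ℜ(α) ∈[0, 1) both e and f carry S-eigenvalue 1/2 (hence modes in 1/2 + Z), h keeps integer modes, and N = 0.

```python
# Toy check of the twisted affine bracket (4.2) for g = sl2 with N = 0:
# [a_m, b_n] = [a, b]_{m+n} + m (a, b) delta_{m+n,0} k.
import sympy as sp

e = sp.Matrix([[0, 1], [0, 0]])
f = sp.Matrix([[0, 0], [1, 0]])
h = sp.Matrix([[1, 0], [0, -1]])
form = lambda a, b: (a * b).trace()          # invariant form (a, b) = tr(ab)

def bracket(a, m, b, n):
    """Return ([a, b], m + n, coefficient of k) for the bracket [a_m, b_n]."""
    central = m * form(a, b) if m + n == 0 else 0
    return (a * b - b * a, m + n, central)

print(bracket(e, sp.Rational(1, 2), f, -sp.Rational(1, 2)))
# -> (h, 0, 1/2): that is, [e_{1/2}, f_{-1/2}] = h_0 + (1/2) k, since (e, f) = 1
```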
The vertex operator algebra that we consider here is the universal affine vertex operator algebra Vˆg(κ, 0) = U(ˆg)⊗U(ˆg(≤0))Cκ at a fixed level κ ∈C\{−h∨} (see for example [LL04]). More generally, we could take V to be any quotient of Vˆg(κ, 0). This vertex operator algebra is a representation of ˆg, and we denote the action of an on V by a(n). The automorphism g of g induces automorphisms (which we still denote by g) gk := k, g(am) := (ga)m, for all a ∈g, m ∈Z; and (4.5) g(a(1)(n1) · · · a(ℓ)(nℓ) 1) := (ga(1))(n1) · · · (ga(ℓ))(nℓ) 1, for all a(1), . . . , a(ℓ) ∈g, ℓ∈Z≥0, (4.6) of ˆg and Vˆg(κ, 0), respectively. Since g preserves the conformal vector ω of Vˆg(κ, 0), g is furthermore an automorphism of Vˆg(κ, 0) viewed as a vertex operator algebra. The g-twisted Vˆg(κ, 0)-modules are naturally representations of ˆg[g] [Hua21a], and we denote the action of an on the twisted mod- ules by a(n). Let {ai}i∈I be a basis for g consisting of generalized eigenvectors of g, i.e. eigenvectors of S. Since (·, ·) is non-degenerate, there is a basis {ai′}i∈I dual to {ai}i∈I with respect to (·, ·). The operator e2πiS is also an automorphism of g. Since invariant symmetric bilinear forms of g are invariant under automorphisms, we have (e2πiSai′, aj) = (ai′, e−2πiSaj) = (ai′, e−2πiαjaj) = e−2πiαjδij. (4.7) Hence, {ai′}i∈I are also generalized eigenvectors of g. We will frequently work with expressions that are symmetric in ai and ai′. To reduce the amount of explicit terms in these expressions, we make the following definitions. Choose a new index set I′, disjoint from I, for the dual basis {ai}i∈I′ = {ai′}i∈I. Define I = I ⊔I′. Since {ai}i∈I is also dual to {ai}i∈I′, we can make simplifications such as X i∈I  ai′(−1)ai(0) + ai(−1)ai′(0)  w = X i∈I ai(−1)ai′(0)w. Note that we can further define ·′ : I →I by ai′ = (ai)′, so there is no ambiguity in the notation. For each i ∈I, we denote the S-eigenvalues of ai by αi. 24 In this section, we will work with the same assumptions on the V -modules as in the previous section (summarized in Figure 1) with V = Vˆg(κ, 0). Let wi ∈Wi, i = 1, . . . , N, w0 ∈W0 and wN+1 ∈WN+1 be lowest weight vectors, i.e. annihilated by ˆg+, ˆg[g−1] + and ˆg[g] + , respectively. Using these assumptions, we will construct a first-order homogeneous linear system of partial differential equations satisfied by the formal series ⟨w0, Y1(w1, x1) · · · YN(wN, xN)wN+1⟩ (4.8) by computing the right-hand side of ∂ ∂xℓ ⟨w0, Y1(w1, x1) · · · YN(wN, xN)wN+1⟩= ⟨w0, Y1(w1, x1) · · · Yℓ(L(−1)wℓ, xℓ) · · · YN(wN, xN)wN+1⟩, (4.9) for each fixed ℓ= 1, . . . , N. Since Wℓis an untwisted module, the L(−1)-action on Wℓis given by L(−1) = X i∈I X j∈Z>0 1 2(κ + h∨)ai′(−j)ai(j −1) + X i∈I X j∈Z≤0 1 2(κ + h∨)ai(j −1)ai′(−j). (Note that here we use a basis {ai}i∈I of generalized eigenvectors for g. Although this is not needed for the untwisted module Wℓ, it will later be used for the g-twisted modules.) Utilizing that wℓis a lowest weight vector, we obtain the relation Yℓ(L(−1)wℓ, xℓ) = Yℓ X i∈I 1 2(κ + h∨)ai′(−1)ai(0)wℓ+ X i∈I 1 2(κ + h∨)ai(−1)ai′(0)wℓ, xℓ ! , or rearranged as 2(κ + h∨)Yℓ(L(−1)wℓ, xℓ) = X i∈I Yℓ  ai′(−1)ai(0)wℓ+ ai(−1)ai′(0)wℓ, xℓ  = X i∈I Yℓ(ai(−1)ai′(0)wℓ, xℓ). The vectors ai = ai(−1) 1 and ai′(0)wℓsatisfy the assumptions on u and w1, respectively, used to derive (2.21). Hence, we can use (2.21) to obtain 2(κ + h∨)Yℓ(L(−1)wℓ, xℓ) = X i∈I Yℓ(ai(−1)ai′(0)wℓ, xℓ) = X i∈I  Y +(ai, xℓ)Yℓ(ai′(0)wℓ, xℓ) + Yℓ(ai′(0)wℓ, xℓ)Y −(ai, xℓ) −x−1 ℓYℓ((Lai)(0)ai′(0)wℓ, xℓ)  . 
(4.10) Assume u ∈(Vˆg(κ, 0))(1) ∼= g and α ∈C with Su = αu. Let ew0 ∈f W0 and f ∈Hom(WN+1, f W0). By use of Y +(u, x) = Y + 0 (x−Nu, x) and the definition of the contragredient module action ⟨Y ′(v, x)w′, w⟩:= ⟨w′, Y (exL(1)(−x−2)L(0)v, x−1)w⟩, (4.11) we have ⟨w0, Y +(u, x) ew0⟩= K X k=0 X m∈−α+Z>0 (−1)k k! ⟨(N ku)(m)w0, ew0⟩x−m−1(log x)k = 0. (4.12) By use of Y −(u, x) = Y − 0 (x−Nu, x), we have ⟨w0, fY −(u, x)wN+1⟩= K X k=0 X m∈α+Z≥0 (−1)k k! ⟨w0, f(N ku)(m)wN+1⟩x−m−1(log x)k = K X k=0 (−1)k k! ⟨w0, f(N ku)(α)wN+1⟩x−α−1(log x)k = ⟨w0, f(x−Nu)(α)wN+1⟩x−α−1. (4.13) If ℜ(α) = 0, we still have (N ku)(α)wN+1 being of lowest weight. If ℜ(α) > 0, we further have (x−Nu)(α)wN+1 = 0. We wish to remove all instances of Y +(ai, xℓ) and Y −(ai, xℓ) from ∂ ∂xℓ⟨w0, Y1(w1, x1) · · · YN(wN, xN)wN+1⟩. To do so, we can use (2.22) to move them as far left and right as possible, respectively. For p < ℓ, we have Yp(wp, xp)Y +(ai, xℓ) = Y +(ai, xℓ)Yp(wp, xp) + (−xp + xℓ)−1Yp ((xp/xℓ)Lai)(0)wp, xp  . (4.14) For ℓ< p, we have Y −(ai, xℓ)Yp(wp, xp) = Yp(wp, xp)Y −(ai, xℓ) + (xℓ−xp)−1Yp ((xp/xℓ)Lai)(0)wp, xp  . (4.15) To work with these two relations on equal footing, we use the following notation : (xℓ−xp)−1 : := ι|x1|>···>|xN|(xℓ−xp)−1 = ( (xℓ−xp)−1 when ℓ< p, (−xp + xℓ)−1 when p < ℓ. We can now derive the twisted KZ equations (first discovered in [BHO01]). Proposition 4.1 (The twisted KZ equations). If the assumptions at the start of this section hold, then we have 2(κ + h∨) ∂ ∂xℓ ⟨w0, Y1(w1, x1) · · · YN(wN, xN)wN+1⟩ = X i∈I X p̸=ℓ K X k1,k2=0 (−1)k2 k1!k2! (log xp)k1(log xℓ)k2 xp xℓ αi : (xℓ−xp)−1 : ⟨w0, Y1(w1, x1) · · · Yℓ(ai′(0)wℓ, xℓ) · · · Yp((N k1+k2ai)(0)wp, xp) · · · YN(wN, xN)wN+1⟩ + K X k=0 (−1)k k! (log xℓ)kx−αi−1 ℓ ⟨w0, Y1(w1, x1) · · · Yℓ(ai′(0)wℓ, xℓ) · · · YN(wN, xN)(N kai)(αi)wN+1⟩ −x−1 ℓ⟨w0, Y1(w1, x1) · · · Yℓ((Lai)(0)ai′(0)wℓ, xℓ) · · · YN(wN, xN)wN+1⟩ ! , (4.16) for each ℓ= 1, . . . , N, where K ∈Z≥0 is chosen such that N K+1 = 0. Proof. We use (4.10), followed by repeated use of (4.14) and (4.15), and then (4.13) to obtain 2(κ + h∨) ∂ ∂xℓ ⟨w0, Y1(w1, x1) · · · YN(wN, xN)wN+1⟩ = 2(κ + h∨)⟨w0, Y1(w1, x1) · · · Yℓ(L(−1)wℓ, xℓ) · · · YN(wN, xN)wN+1⟩ = X i∈I  ⟨w0, Y1(w1, x1) · · · Y +(ai, xℓ)Yℓ(ai′(0)wℓ, xℓ) · · · YN(wN, xN)wN+1⟩ + ⟨w0, Y1(w1, x1) · · · Yℓ(ai′(0)wℓ, xℓ)Y −(ai, xℓ) · · · YN(wN, xN)wN+1⟩ −x−1 ℓ⟨w0, Y1(w1, x1) · · · Yℓ((Lai)(0)ai′(0)wℓ, xℓ) · · · YN(wN, xN)wN+1⟩  = X i∈I X p̸=ℓ : (xℓ−xp)−1 : ⟨w0, Y1(w1, x1) · · · Yℓ(ai′(0)wℓ, xℓ) · · · Yp(((xp/xℓ)Lai)(0)wp, xp) · · · YN(wN, xN)wN+1⟩ + x−αi−1 ℓ ⟨w0, Y1(w1, x1) · · · Yℓ(ai′(0)wℓ, xℓ) · · · YN(wN, xN)(x−N ℓ ai)(αi)wN+1⟩ −x−1 ℓ⟨w0, Y1(w1, x1) · · · Yℓ((Lai)(0)ai′(0)wℓ, xℓ) · · · YN(wN, xN)wN+1⟩ ! = X i∈I X p̸=ℓ K X k1,k2=0 (−1)k2 k1!k2! (log xp)k1(log xℓ)k2 xp xℓ αi : (xℓ−xp)−1 : ⟨w0, Y1(w1, x1) · · · Yℓ(ai′(0)wℓ, xℓ) · · · Yp((N k1+k2ai)(0)wp, xp) · · · YN(wN, xN)wN+1⟩ + K X k=0 (−1)k k! (log xℓ)kx−αi−1 ℓ ⟨w0, Y1(w1, x1) · · · Yℓ(ai′(0)wℓ, xℓ) · · · YN(wN, xN)(N kai)(αi)wN+1⟩ −x−1 ℓ⟨w0, Y1(w1, x1) · · · Yℓ((Lai)(0)ai′(0)wℓ, xℓ) · · · YN(wN, xN)wN+1⟩ ! , where K ∈Z≥0 is chosen sufficiently large so that N K+1 = 0. This is our most general version of the twisted KZ equations. Note that when we specialize to g = id, we have L = S = N = 0, removing all log-terms and setting αi = 0.
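For concreteness, here is the g = id building block of these equations for g = sl2 in the defining representation: the operator Σ_i a_i ⊗ a_i′ entering eΩi ℓp. This is a sketch with our choice of basis and dual basis for the trace form; the overall factor 1/(2(κ + h∨)) is omitted.

```python
# Sketch of the Casimir-type coefficient sum_i a_i (x) a_i' for sl2 in the
# defining representation, with dual bases for the form (a, b) = tr(ab);
# this is the operator entering the KZ connection when g = id.
import numpy as np

e = np.array([[0, 1], [0, 0]])
f = np.array([[0, 0], [1, 0]])
h = np.array([[1, 0], [0, -1]])
basis = [e, f, h]
dual = [f, e, h / 2]   # (e, f) = 1, (h, h) = 2, all other pairings vanish
Omega = sum(np.kron(a, b) for a, b in zip(basis, dual))
print(Omega)           # 4x4 matrix acting on C^2 (x) C^2
```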
We can further choose the basis {ai}i∈I to be orthonormal, so the twisted KZ equations above produce the original KZ equations (κ + h∨) ∂ ∂xℓ ⟨w0, Y1(w1, x1) · · · YN(wN, xN)wN+1⟩ = X i∈I X p̸=ℓ : (xℓ−xp)−1 : ⟨w0, Y1(w1, x1) · · · Yℓ(ai(0)wℓ, xℓ) · · · Yp(ai(0)wp, xp) · · · YN(wN, xN)wN+1⟩ + x−1 ℓ⟨w0, Y1(w1, x1) · · · Yℓ(ai(0)wℓ, xℓ) · · · YN(wN, xN)ai(0)wN+1⟩ ! , as in [HL99], for example. Corollary 4.2. Further assume that g acts semisimply. Then we have 2(κ + h∨) ∂ ∂xℓ ⟨w0, Y1(w1, x1) · · · YN(wN, xN)wN+1⟩ = X i∈I X p̸=ℓ xp xℓ αi : (xℓ−xp)−1 : ⟨w0, Y1(w1, x1) · · · Yℓ(ai′(0)wℓ, xℓ) · · · Yp(ai(0)wp, xp) · · · YN(wN, xN)wN+1⟩ + x−αi−1 ℓ ⟨w0, Y1(w1, x1) · · · Yℓ(ai′(0)wℓ, xℓ) · · · YN(wN, xN)ai(αi)wN+1⟩ −αix−1 ℓ⟨w0, Y1(w1, x1) · · · Yℓ(ai(0)ai′(0)wℓ, xℓ) · · · YN(wN, xN)wN+1⟩ ! , (4.17) for each ℓ= 1, . . . , N. Proof. In the case that g acts semisimply, we have L = S and N = 0. Since N = 0, the log-terms vanish. We can compare this with the "inner automorphism" special case derived in [BHO01]. We choose a Cartan subalgebra h, a set of simple roots Π, and an invariant symmetric bilinear form (·, ·). We choose and fix an element h ∈h and define g ∈Aut(g) by ga := ead ha, for all a ∈g. (4.18) Since g is given by the adjoint action of a Cartan element, we immediately have ga = eβ(h)a, for all a ∈gβ, β ∈∆⊔{0}. (4.19) We construct a basis by picking an orthonormal basis of h and a set {xβ ∈gβ : β ∈∆+} of non-zero vectors. Lastly, we pick {yβ ∈g−β : β ∈∆+} such that (xβ, yβ) = 1 for each β ∈∆+. This gives a basis {ai}i∈I of eigenvectors of g that is dual to itself with respect to (·, ·). (However, the basis need not be orthonormal.) In this case, the twisted KZ equations are (κ + h∨) ∂ ∂xℓ ⟨w0, Y1(w1, x1) · · · YN(wN, xN)wN+1⟩ = X i∈I X p̸=ℓ : (xℓ−xp)−1 : xp xℓ αi ⟨w0, Y1(w1, x1) · · · Yℓ(ai′(0)wℓ, xℓ) · · · Yp(ai(0)wp, xp) · · · YN(wN, xN)wN+1⟩ + x−αi−1 ℓ ⟨w0, Y1(w1, x1) · · · Yℓ(ai′(0)wℓ, xℓ) · · · YN(wN, xN)ai(αi)wN+1⟩ −αix−1 ℓ⟨w0, Y1(w1, x1) · · · Yℓ(ai(0)ai′(0)wℓ, xℓ) · · · YN(wN, xN)wN+1⟩ ! . (4.20) Further assuming that wN+1 is annihilated by ˆg[g] 0 , we obtain the twisted KZ equations as given in (10.44a-c) of [BHO01]. (Note that our eigenvalues {αi}i∈I are the negatives of those in [BHO01] because Y (u, e2πiz) = Y (gu, z) is used in (10.11a-d), instead of our convention of Y (gu, e2πiz) = Y (u, z) in (2.8). But this is simply a change between g and g−1.) 5 Regularity of the twisted KZ equations Now we assume that g is of finite order t. We will prove that the "component-isolated singularities" of the solutions to the KZ equations are "regular singularities". First, we recall some notions about functions with "regular singularities" from [DH25] and some theorems about solutions of differential equations with "simple singularities" (see Section 4 of Appendix B in [Kna86]). For a ∈bC and r ∈R>0, let D× r (a) denote the open punctured disc {z ∈C : 0 < |z −a| < r} when a ̸= ∞, and let D× r (∞) denote the open punctured disc {z ∈C : 0 < |z−1| < r} when a = ∞. Definition 5.1. Let f(z1, . . . , zn) be a multivalued function defined on some subset of bCn. Let A ∈GL(n, C) and (β1, . . . , βn) ∈Cn, and consider the change of variables (ζ1, . . . , ζn) = (z1, . . . , zn)A −(β1, . . . , βn). (5.1) Define g(ζ1, . . . , ζn) = f((ζ1, . . . , ζn)A−1 + (β1, . . . , βn)A−1). Let (δ1, . . . , δn) ∈{0, ∞}n. We say that f(z1, . . . , zn) has a component-isolated singularity at (ζ1, . . . , ζn) = (δ1, . . . , δn) if there exist r1, . . . , rn ∈R>0 such that g(ζ1, . . .
, ζn) is defined on D× r1(δ1) × · · · × D× rn(δn). Definition 5.2. We say that a multivalued holomorphic function f(z1, . . . , zn) has a regular singularity at (z1, . . . , zn) = (0, . . . , 0) if there exist r1, . . . , rn ∈R>0 such that f has a local expansion in the region D× r1(0) × · · · × D× rn(0) of the form f(z1, . . . , zn) = J X j=1 z s1,j 1 · · · zsn,j n (log z1)m1,j · · · (log zn)mn,jhj(z1, . . . , zn), for some numbers si,j ∈C, mi,j ∈Z≥0, and for some holomorphic functions hj(z1, . . . , zn) on Dr1(0) × · · · × Drn(0). Define φ0(z) = z and φ∞(z) = z−1. Let (δ1, . . . , δn) ∈{0, ∞}n. We say that f(z1, . . . , zn) has a regular singularity at (z1, . . . , zn) = (δ1, . . . , δn) if h(ξ1, . . . , ξn) = f(φδ1(ξ1), . . . , φδn(ξn)) has a regular singularity at (ξ1, . . . , ξn) = (0, . . . , 0). Let A ∈GL(n, C) and (β1, . . . , βn) ∈Cn, and consider the change of variables (ζ1, . . . , ζn) = (z1, . . . , zn)A −(β1, . . . , βn). (5.2) Define g(ζ1, . . . , ζn) = f((ζ1, . . . , ζn)A−1 + (β1, . . . , βn)A−1). Let (δ1, . . . , δn) ∈{0, ∞}n. We say that f(z1, . . . , zn) has a regular singularity at (ζ1, . . . , ζn) = (δ1, . . . , δn) if g(ζ1, . . . , ζn) has a regular singularity at (ζ1, . . . , ζn) = (δ1, . . . , δn). Note that we are using the term "regular singularity" to refer to a property of a function, and not a property of a system of differential equations. Also note that a function f(z1, . . . , zn) with a regular singularity at (z1, . . . , zn) = (0, . . . , 0) is potentially singular on all of Dn\(D×)n, so this function does not necessarily have an isolated singularity at (z1, . . . , zn) = (0, . . . , 0). (In the case the singularity at (z1, . . . , zn) = (0, . . . , 0) is isolated, it is in fact removable.) This is why we have the notion of a component-isolated singularity. Let HU denote the space of holomorphic functions from Dn to a finite-dimensional vector space U. Let HEnd U denote the space of holomorphic functions from Dn to End U. For k ∈(Z≥0)n, use ∂k to denote  ∂ ∂z1 k1 · · ·  ∂ ∂zn kn and use (z∂)k to denote  z1 ∂ ∂z1 k1 · · ·  zn ∂ ∂zn kn . Let D =        X k∈(Z≥0)n finite sum Ak∂k : Ak ∈HEnd U        , which is naturally a left HEnd U-module by left multiplication. Let D∗=        X k∈(Z≥0)n finite sum Ak(z∂)k : Ak ∈HEnd U        be the HEnd U-submodule of D generated by all monomials in {zi ∂ ∂zi}n i=1, which is also naturally an associative unital algebra over C. Let {Dα}α∈A be a finite set of operators from D∗. Let I be the left ideal in D∗ generated by {Dα}α∈A. Definition 5.3. If D∗/I is finitely generated as an HEnd U-module, we say that the system of partial differential equations {Dαϕ = 0}α∈A has a simple singularity on Dn\(D×)n. For example, let U = Cm and let {Dα}α∈A be the set of operators Di = zi ∂ ∂zi −Hi(z), i = 1, . . . , n, (5.3) where Hi(z) are m × m matrices of holomorphic functions on Dn. Then, the system {zi ∂ ∂ziψ = Hi(z)ψ}n i=1 has a simple singularity on Dn\(D×)n. We have the following theorem about the form of a solution of a system with a simple singularity. Theorem 5.4 (Theorem B.16 from [Kna86]). Let {Dα}α∈A be a finite set of operators from D∗ such that the system of partial differential equations {Dαϕ = 0}α∈A has a simple singularity on Dn\(D×)n. Then any multivalued holomorphic solution to {Dαϕ = 0}α∈A on (D×)n has a regular singularity at (0, . . . , 0). Moreover, any C∞ solution on (0, 1)n extends to a multivalued holomorphic solution on (D×)n.
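A minimal instance of the example (5.3) and of Theorem 5.4 (ours, purely for illustration): for the constant-coefficient system z ψ′(z) = A ψ(z), the fundamental solution is z^A = exp(A log z), and a non-semisimple A is exactly what produces the log z factors allowed in a regular singularity.

```python
# The one-variable simple-singularity system z * psi'(z) = A * psi(z) has
# fundamental solution z^A = expm(A * log z); a Jordan block yields log terms.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.5, 1.0], [0.0, 0.5]])  # Jordan block with eigenvalue 1/2
z = 0.3
print(expm(A * np.log(z)))  # equals z^(1/2) * [[1, log z], [0, 1]]
```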
By scaling the variables, we can see that this theorem still holds if the domain Dn in HU and HEnd U is replaced by Dr1(0) × · · · × Drn(0) for any r1, . . . , rn ∈R>0. Note that this theorem does not show that a solution exists for any initial condition, nor that any formal solution converges. To show convergence, we instead use the theory of differential equations with regular singular points in Section 6 while assuming that g has finite order. We do not have a method for showing the convergence when g is a general automorphism. We will now show that the solutions to the twisted KZ equations have regular singularities when g has finite order t. Since g restricts to an automorphism of finite order when acting on each finite-dimensional summand V(n), g is semisimple. So we can use the twisted KZ equations in the form of (4.17). Furthermore, the eigenvalues of Sg are all of the form α ∈{0, 1/t, . . . , (t −1)/t}. First, we will re-express the twisted KZ equations (4.17) in a more implicit form. Let L(W) denote the space of lowest-weight vectors of the (twisted) module W. Consider the space (L(W0) ⊗· · · ⊗L(WN+1))∗{x1, . . . , xN}. Define the operator eΩi ℓp on L(W0) ⊗· · · ⊗L(WN+1) to be given by eΩi ℓp(w0 ⊗· · · ⊗wN+1) = 1 2(κ + h∨)w0 ⊗· · · ⊗ai′(0)wℓ⊗· · · ⊗ai(0)wp ⊗· · · ⊗wN+1. Observe that eΩi ℓp = eΩi′ pℓ. Define the operator eΩℓ on L(W0) ⊗· · · ⊗L(WN+1) to be given by eΩℓ(w0 ⊗· · · ⊗wN+1) = 1 2(κ + h∨) X i∈I  w0 ⊗· · · ⊗ai′(0)wℓ⊗· · · ⊗wp ⊗· · · ⊗ai(αi)wN+1 −αiw0 ⊗· · · ⊗ai(0)ai′(0)wℓ⊗· · · ⊗wp ⊗· · · ⊗wN+1  . Note that the first term of the summand only contributes when αi = 0 and the second term only contributes when αi ̸= 0. Consider the (finite-dimensional) vector space U = (L(W0) ⊗· · · ⊗L(WN+1))∗. The formal counterpart to HU is contained within U{x1, . . . , xN}. Define the operators Ωi ℓp and Ωℓ on U{x1, . . . , xN} by Ωi ℓp X m1,...,mN∈C fm1,...,mNxm1 1 · · · xmn N ! = X m1,...,mN∈C fm1,...,mN ◦eΩi ℓp xm1 1 · · · xmN N , (5.4) Ωℓ X m1,...,mN∈C fm1,...,mNxm1 1 · · · xmN N ! = X m1,...,mN∈C fm1,...,mN ◦eΩℓxm1 1 · · · xmN N . (5.5) Similarly by pre-composition, we can define operators acting on HU. We will also denote these by Ωi ℓp and Ωℓ. Observe that Ωi ℓp = Ωi′ pℓ. We will consider the twisted KZ equations as the following system of formal partial differential equations ∂ ∂xℓ ψ = X i∈I X p̸=ℓ xp xℓ αi : (xℓ−xp)−1 : Ωi ℓp + x−1 ℓΩℓ ! ψ, ℓ= 1, . . . , N, (5.6) or as the following system of partial differential equations ∂ ∂zℓ ψ = X i∈I X p̸=ℓ zp zℓ αi (zℓ−zp)−1Ωi ℓp + z−1 ℓΩℓ ! ψ, ℓ= 1, . . . , N. (5.7) By seeing where the coefficients in the right-hand side of (5.7) are holomorphic, we can see that every solution to the twisted KZ equations is at most defined on M N = {(z1, . . . , zN) ∈bCN : zi ̸= 0, ∞and zi ̸= zj when i ̸= j}. (5.8) Assume that we have a solution to the twisted KZ equations defined on M N. We will show that, for every change of variables (5.2), every component-isolated singularity at (δ1, . . . , δN) ∈{0, ∞}N is a regular singularity. To show this, we will show that the system written in the new variables has a simple singularity at (0, . . . , 0) by showing that the coefficient matrix is holomorphic in some product of sufficiently small disks centered at 0. Assume we have a change of variables (5.2), or equivalently (z1, . . . , zN) = (ζ1, . . . , ζN)A−1 + (β1, . . . , βN)A−1 =: (ζ1, . . . , ζN)B + (γ1, . . . , γN). (5.9) Then zℓ= N X j=1 bjℓζj + γℓ and ∂zℓ ∂ζj = bjℓ. (5.10) So, ζj ∂ ∂ζj ψ = ζj X ℓ ∂zℓ ∂ζj ∂ ∂zℓ ψ = ζj X ℓ bjℓ X i∈I X p̸=ℓ zp zℓ αi (zℓ−zp)−1Ωi ℓp + z−1 ℓΩℓ !!
ψ = X i∈I X ℓ<p ζj bjℓ zp zℓ αi (zℓ−zp)−1Ωi ℓp + bjp zℓ zp αi (zp −zℓ)−1Ωi pℓ ! + X ℓ bjℓζjz−1 ℓΩℓ ! ψ = X i∈I X ℓ<p ζj bjℓ zp zℓ αi (zℓ−zp)−1Ωi ℓp + bjp zℓ zp αi (zp −zℓ)−1Ωi′ ℓp ! + X ℓ bjℓζjz−1 ℓΩℓ ! ψ = X i∈I X ℓ<p bjℓ zp zℓ αi −bjp zℓ zp αi′! ζj(zℓ−zp)−1Ωi ℓp + X ℓ bjℓζjz−1 ℓΩℓ ! ψ. Assume that (ζ1, . . . , ζN) = (δ1, . . . , δN) is a component-isolated singularity of a function defined on M N. Define the function s(δ) = ( 1 if δ = 0, −1 if δ = ∞. (5.11) We perform a change of variables from ζj to ηj such that η s(δj)t j = ζj. (5.12) Hence, ηj ∂ ∂ηj = ηj ∂ζj ∂ηj ∂ ∂ζj = ηjs(δj)tη s(δj)t−1 j ∂ ∂ζj = s(δj)tζj ∂ ∂ζj . (5.13) This change of variables handles singularities at both 0 and ∞. Furthermore, we claim that the possible multivaluedness of z1/t ℓ = (P k bkℓζk + γℓ)1/t as a function of (ζ1, . . . , ζN) is turned into a single-valued function after changing variables to (η1, . . . , ηN) ∈Dr1(0) × · · · × DrN(0). Suppose that z1/t ℓ is multivalued and δk = 0 whenever bkℓ̸= 0. If there is still multivaluedness after choosing rk > 0 to be arbitrarily small, then we see that γℓ= 0. Then we must have z1/t ℓ = (bjℓζj)1/t for some j (to ensure we have a component-isolated singularity for a function defined on M N), which is single-valued as a function of ηj. Now suppose that δj = ∞ with bjℓ̸= 0 for some j. Then we must have z1/t ℓ = (P k bkℓζk + γℓ)1/t with bkℓ= 0 when δk = ∞and k ̸= j (to ensure we have a component-isolated singularity for a function defined on M N). After choosing each rj to be sufficiently small, we have single-valuedness as a function of (η1, . . . , ηN).
If δj = 0 and zℓ−zp →0, then γℓ−γp = 0 and bkℓ−bkp = 0 for all k ̸= j. Then bjℓ zp zℓ αi −bjp zℓ zp αi′! ζj(zℓ−zp)−1 = bjℓ zp zℓ αi −bjp zℓ zp αi′! (bjℓ−bjp)−1. 33 Since zℓ−zp →0, we have zℓ, zp ↛0, so the only possible singularity comes from zℓ, zp →∞. And since bkℓ−bkp = 0 for all k ̸= j, we have zℓ/zp →1. If δj = 0 and zℓ→0, then zp, zℓ−zp ↛0. So the only possible singularity comes from the term bjℓ zp zℓ αi ζj(zℓ−zp)−1 = bjℓ ζj zαi ℓ 1 zαi′ p zℓ zp −1 −1 , which is holomorphic by arguments similar to those used for bjℓζjz−1 ℓ. Similar arguments work for δj = 0 and zp →0. If δj = ∞, then zℓ−zp = P k(bkℓ−bkp)ζk + γℓ−γp with bkℓ−bkp = 0 when δk = ∞and k ̸= j. So ζj(zℓ−zp)−1 →(bjℓ−bjp)−1. Since bjℓ−bjp ̸= 0, we have bjℓ̸= 0 or bjp ̸= 0. If bjℓ̸= 0, then bkℓ= bkp = 0 when δk = ∞and k ̸= j. So zp zℓ = P k bkpζk + γp P k bkℓζk + γℓ = P k bkpηs(δk)t k ηt j + γpηt j P k bkℓηs(δk)t k ηt j + γℓηt j →bjp bjℓ is holomorphic. And similarly if bjp ̸= 0. Thus, bjℓ zp zℓ αi −bjp zℓ zp αi′! ζj(zℓ−zp)−1 is holomorphic in all cases. Thus we have proven the following result. Proposition 5.5. If the order of g is finite, then the solutions to the twisted KZ equations (4.17) (or equivalently (5.7)) defined on M N have regular singularities at every component-isolated sin- gularity. Remark 5.6. We now show that simple singularities do not immediately follow from the fact that the coefficients in (5.7) have singularities at zj = 0, ∞and zj −zk = 0. Take for simplicity N = 2 and g = 1, so that the KZ equations are ∂ ∂z1 ψ = (z1 −z2)−1Ω12 + z−1 1 Ω1  ψ, ∂ ∂z2 ψ = (z2 −z1)−1Ω12 + z−1 2 Ω2  ψ, where Ω12 = Ω21 = P i∈I Ωi 12. Let z1 = ζ1 + ζ2 + 1, z2 = ζ1 −ζ2 + 1, which implies z1 −z2 = 2ζ2. This provides a component- isolated singularity at (ζ1, ζ2) = (0, 0), and hence the KZ equations have a simple singularity at 34 (ζ1, ζ2) = (0, 0), as we have just shown above. However, when we consider the seemingly similar system ∂ ∂z1 ψ = (z1 −z2)−1Ω12 + z−1 1 Ω1  ψ, ∂ ∂z2 ψ = (z1 −z2)−1Ω12 + z−1 2 Ω2  ψ, we have ζ1 ∂ ∂ζ1 ψ = ζ1 ζ2 Ω12 + ζ1 ζ1 + ζ2 + 1Ω1 + ζ1 ζ1 −ζ2 + 1Ω2  ψ, ζ2 ∂ ∂ζ2 ψ =  ζ2 ζ1 + ζ2 + 1Ω1 − ζ1 ζ1 −ζ2 + 1Ω2  ψ. The coefficient of the first equation is not holomorphic in any product of small discs centered at zero. Hence, the simple singularities of the twisted KZ equations follow from more than the fact that the coefficients in (5.7) have singularities at zj = 0, ∞and zj −zk = 0. Remark 5.7. When dealing with first order homogeneous linear systems of partial differential equations of the form ∂ ∂zi ψ = Aiψ, for i = 1, . . . , N, one usually shows that the system is consistent (also known as integrable or compatible) in the sense that ∂ ∂zj Ai −∂ ∂zi Aj + [Ai, Aj] = 0 , for all i, j = 1, . . . N. This guarantees a unique solution exists for any initial condition. We have not shown that the twisted KZ equations are consistent. In the following section, we will use a different method to show that solutions exist and that the formal series solution given by the correlation function converges in certain domains. 6 Convergence when g has finite order In this section, we return to the setting of Section 3 while further assuming that g has finite order t. We continue to generalize Huang’s method in [Hua05] by using specific filtrations of the ring R and the R-module T, to derive differential equations with “regular singular points”. 
As a consequence, we obtain the "convergence property for products of N twisted intertwining operators", as conjectured in [Hua21b], in the special case that all intertwining operators are of g 1 g  -type among C1-cofinite discretely graded V -modules. Recall that Theorem 5.4 does not show that a solution exists for every initial condition, nor that every formal solution converges. Compare this to the standard theory for an ordinary differential equation with regular singular points (see, for example, Section B1 of [Kna86]). Theorem 6.1. Let  d dz n ψ(z) + an−1(z) z  d dz n−1 ψ(z) + · · · + a1(z) zn−1 d dzψ(z) + a0(z) zn ψ(z) = 0, (6.1) be an ordinary differential equation such that ai(z) are holomorphic in Dr(0) for some r > 0. Then a solution exists for any initial condition at a point in D× r (0), and furthermore, every solution has the form of a regular singularity at z = 0. Conversely, any formal solution with a regular singularity at z = 0 converges absolutely to a multivalued holomorphic solution in D× r (0). Note that (6.1) could be equivalently replaced with  z d dz n ψ(z) + an−1(z)  z d dz n−1 ψ(z) + · · · + a1(z)z d dzψ(z) + a0(z)ψ(z) = 0. (6.2) We will now show that the coefficients of the differential equation (3.17) can be chosen so that there is a regular singularity at certain prescribed points. Since g is of finite order, g is semisimple and L = S. So we can ignore all log xi in R, and the eigenvalues for S are in 1 tZ. Hence, we will use R = C[x±1/t i , (xi −xj)−1 : i, j = 1, . . . , N with i < j]. We define the degree of a monomial xp i (xj −xk)q, p ∈1 tZ and q ∈Z≤0, to be p + q. An element f(x1, . . . , xN) ∈R has degree m ∈1 tZ, denoted by deg f(x1, . . . , xN) = m, if f(x1, . . . , xN) is a C-linear combination of degree m monomials. Let c = xi or c = xi −xj for i, j = 1, . . . , N with i < j. To study the singularity c = 0, we use a certain filtration of R. We first define the subrings R(c=0) = C[x±1/t i , (xi −xj)−1 : i, j = 1, . . . , N with i < j, excluding negative powers of c]. Define R(c=0) (deg 0) to be the subspace of degree zero elements in R(c=0). We define a filtration of R with F (c=0) m (R) = SpanC{f(x1, . . . , xN) ∈R : f(x1, . . . , xN)cm ∈R(c=0)} (6.3) with m ∈Z≥0, or m ∈1 tZ≥0 if c is proportional to one of x1, . . . , xN. Define F (c=0) r (T) to be the vector subspace of T spanned by f(x1, . . . , xN)w0 ⊗· · · ⊗wN+1 for all f(x1, . . . , xN) ∈F (c=0) m (R) and homogeneous wi ∈Wi satisfying m + σ ≤r. Define F (c=0) r (J) = F (c=0) r (T) ∩J. These are compatible filtrations on the R-modules T and J in the sense that F (c=0) m (R)F (c=0) r (T) ⊆F (c=0) m+r (T) and F (c=0) m (R)F (c=0) r (J) ⊆F (c=0) m+r (J). Define T (c=0) = R(c=0) ⊗W0 ⊗· · · ⊗WN+1, with grading T (c=0) = ` m∈R T (c=0) (m) given by the grading by real components of the weights for Wi, with trivial grading for R(c=0). Then F (c=0) m (R)T (c=0) (r) ⊆F (c=0) m+r (T). We recall the generators Ai(u, w0, . . . , wN+1) as defined in Section 3, but now with g of finite order. A0(u, w0, . . . , wN+1) = uα′−1w0 ⊗w1 ⊗· · · ⊗wN+1 + N X p=1 X k≥0 α −1 k  x1+k−α p w0 ⊗· · · ⊗O±(x−1 p )uk O−1 ± (x−1 p )wp ⊗· · · ⊗wN+1 −w0 ⊗· · · ⊗wN ⊗u∗ α′−1wN+1, (6.4) Aℓ(u, w0, . . . , wN+1) = − X k≥0 x−α+k ℓ u∗ α−1−kw0 ⊗w1 ⊗· · · ⊗wN+1 − X p=1,...,N p̸=ℓ X j,k≥0 α k  xα−k p x−α ℓ(xℓ−xp)−1−jw0 ⊗· · · ⊗uj+kwp ⊗· · · ⊗wN+1 + X k≥0 α k  x−k ℓw0 ⊗· · · ⊗u−1+kwℓ⊗· · · ⊗wN+1 − X k≥0 x−α−k−1 ℓ w0 ⊗· · · ⊗wN ⊗uα+kwN+1, (6.5) AN+1(u, w0, . . .
, wN+1) = −u∗ α−1w0 ⊗w1 ⊗· · · ⊗wN+1 + N X p=1 X k≥0 α −1 k  x−1−k+α p w0 ⊗· · · ⊗ukwp ⊗· · · ⊗wN+1 + w0 ⊗· · · ⊗wN ⊗uα−1wN+1, (6.6) Lemma 6.2. Let u ∈V [α] + , for α ∈C with ℜ(α) ∈[0, 1), and let wi ∈Wi be weight homogeneous. Let w(0) = uα′−1w0 ⊗w1 ⊗· · · ⊗wN+1, w(N+1) = w0 ⊗· · · ⊗wN ⊗uα−1wN+1 or w(ℓ) = w0 ⊗· · · ⊗wℓ−1 ⊗u−1wℓ⊗wℓ+1 · · · ⊗· · · ⊗wN+1, ℓ= 1, . . . , N, and assume ℜ(wt w(i)) = s. Then w(i) ∈F (c=0) s (J) + ` m>0 F (c=0) m T (c=0) (s−m). Proof. This follows from an explicit case-by-case check that Ai(u, w0, . . . , wN+1) ∈F (c=0) s (T) and that Ai(u, w0, . . . , wN+1)−w(i) ∈Fs−1/t(T), which we omit (similar to the proof of Lemma 3.4). Proposition 6.3. There exists M ∈Z such that for any r ∈R, F (c=0) r (T) ⊆F (c=0) r (J) + FM(T). Proof. Choose the same M as in Proposition 3.5. When r ≤M, F (c=0) r (T) is spanned by elements of the form f(x1, · · · , xN)w0 ⊗· · · ⊗wN+1 with f(x1, · · · , xN+1) ∈F (c=0) m (R) and m + σ ≤r ≤M. Since σ ≤M, this element is in FM(T). Hence, F (c=0) r (T) ⊆FM(T) ⊆F (c=0) r (J) + FM(T). Since T is discretely graded, we can proceed by induction on r. Assume that s > M and F (c=0) r (T) ⊆ F (c=0) r (J) + FM(T) whenever r < s. Consider an element w ∈T (c=0) (s) . Since s > M, T (c=0) (s) is R(c=0)-spanned by elements w in Lemma 6.2. Hence, w ∈F (c=0) s (J) + ` m>0 F (c=0) m (R)T (c=0) (s−m). 37 Since s −m < s when m > 0, we have T (c=0) (s−m) ⊆F (c=0) s−m (J) + FM(T) by the induction hypothesis. So, F (c=0) m (R)T (c=0) (s−m) ⊆F (c=0) s (J) + FM(T). So, T (c=0) (s) ⊆F (c=0) s (J) + FM(T). Thus, F (c=0) s (T) ⊆ F (c=0) s (J) + FM(T). Lemma 6.4. For any s ∈[0, 1), there exists S ∈R such that s+S ∈Z>0, and for any homogeneous wi ∈Wi and any W1 ∈F (c=0) σ (J), W2 ∈FM(T) satisfying σ ∈s + Z and w0 ⊗· · · ⊗wN+1 = W1 + W2, we have cσ+SW2 ∈T (c=0). Proof. Pick S ∈R with s + S ∈Z>0 to be sufficiently large so that T(r) = 0 when r < −S. Let wi ∈Wi be homogeneous with σ ∈s + Z and let W1 ∈F (c=0) σ (J), W2 ∈FM(T) satisfying w0 ⊗· · · ⊗wN+1 = W1 + W2. Then σ + S ∈Z and σ ≥−S so σ + S ∈Z≥0. Then cσ+SW2 = cσ+Sw0 ⊗· · · ⊗wN+1 −cσ+SW1 ∈T (c=0) + cσ+SF (c=0) σ (J) ⊆T (c=0). To show the last inclusion, consider f(x1, . . . , xN) ew0⊗· · ·⊗ewN+1 ∈F (c=0) σ (T) with f(x1, . . . , xN) ∈ F (c=0) m (R) and ℜ(wt ew0 ⊗· · · ⊗ewN+1) = eσ. Then m −S −σ ≤m + eσ −σ ≤σ −σ = 0 implies that f(x1, . . . , xN)cσ+S ∈R(c=0). We now have an extension of Theorem 3.8. Theorem 6.5. Assume g has finite order. For any change of variables (ζ1, . . . , ζN) = (z1, . . . , zN)A −(β1, . . . , βN), and c = xi, xi −xj, there exist differential equations (c(z1, . . . , zN))mℓ  ∂ ∂ζℓ mℓ ψ + mℓ X k=1 d(c) k,ℓ(z1, . . . , zN)(c(z1, . . . , zN))mℓ−k  ∂ ∂ζℓ mℓ−k ψ = 0, ℓ= 1, . . . , N, (6.7) for some d(c) k,ℓ(x1, . . . , xN) ∈R(c=0) (deg 0) satisfied by ⟨w0, Y1(w1, z1) · · · YN(wN, zN)wN+1⟩ in the region |z1| > · · · > |zN|. 38 Proof. Let wi ∈Wi be homogeneous and k ∈Z≥0. By Proposition 6.3, there exist W(k) 1 ∈F (c=0) σ+k (J) and W(k) 2 ∈FM(T) such that bLk ζℓ(w0 ⊗· · · ⊗wN+1) = W(k) 1 + W(k) 2 . By Lemma 6.4, there exists S ∈R such that σ + S ∈Z>0 and cσ+k+SW(k) 2 ∈T (c=0). And so cσ+k+SW(k) 2 ∈ a r≤M T (c=0) (r) . Now we consider the finitely-generated R(c=0)-module ` r≤M T (c=0) (r) . Consider the ascending chain (Mj)∞ j=0 of submodules Mj generated by {cσ+k+SW(k) 2 : k ∈Z, 0 ≤k ≤j}. Since R(c=0) is Noetherian, there exist mℓ∈Z≥0 and d(c) 1,ℓ(x1, . . . , xN), . . . , d(c) mℓ,ℓ(x1, . . . , xN) ∈R(c=0) such that cσ+mℓ+SW(mℓ) 2 + mℓ−1 X k=0 d(c) mℓ−k,ℓ(x1, . . . , xN)cσ+k+SW(k) 2 = 0. 
Hence, cmℓ[bLmℓ ζℓ(w0 ⊗· · · ⊗wN+1)] + mℓ X k=1 d(c) k,ℓ(x1, . . . , xN)cmℓ−k[bLmℓ−k ζℓ (w0 ⊗· · · ⊗wN+1)] = 0 When T is given the grading deg (f(x1, . . . , xN)w0 ⊗· · · ⊗wN+1) = −deg f(x1, . . . , xN) −wt w0 + wt w1 + · · · wt wN+1, and the generators for J are checked to be homogeneous with respect to this grading, we can see that d(c) k,ℓ(x1, . . . , xN) can be chosen to be degree zero elements. We can immediately extend the previous proposition to the case where c is any non-zero element SpanC{x1, . . . , xN}. If c is a scalar multiple of xi or xi −xj, the result follows from Theorem 6.5 by rescaling c. Otherwise, no negative powers of c occur in R, so we can take R(c=0) = R, and the result follows directly from Theorem 3.8. The following special case of the previous proposition will be used to prove convergence. Corollary 6.6. The series ⟨w0, Y1(w1, z1) · · · YN(wN, zN)wN+1⟩ satisfies the following differential equations mℓ X k=0 ak,ℓ(z1, . . . , zN)  zℓ ∂ ∂zℓ k ψ = 0, ℓ= 1, . . . , N, (6.8) with ak,ℓ(x1, . . . , xN) ∈R(xℓ=0) and amℓ,ℓ= 1. 39 Theorem 6.7 (The convergence property for products of N g 1 g  -type twisted intertwining oper- ators). Let W0, . . . , WN+1, f W0, . . . , f WN, Y1, . . . , YN, w0, . . . , wN+1 be as in Section 3. Assume that g is an automorphism of V with finite order. (1) The multi-series ⟨w0, Y1(w1, z1) · · · YN(wN, zN)wN+1⟩ in z1, . . . , zN is absolutely convergent in the region |z1| > · · · > |zN| > 0. (2) The sum of ⟨w0, Y1(w1, z1) · · · YN(wN, zN)wN+1⟩ in |z1| > · · · > |zN| > 0 can be analytically continued to a multivalued analytic function F(⟨w0, Y1(w1, z1) · · · YN(wN, zN)wN+1⟩) (6.9) defined on all of M N := {(z1, . . . , zN) ∈bCN : zi ̸= 0, ∞and zi ̸= zj when i ̸= j}. Proof. When N = 1, we have ⟨w0, Y1(w1, z1)w2⟩= X n∈C K X k=0 ⟨w0, Yn,k(w1)w2⟩z−n−1 1 (log z1)k. This sum is finite by the L(0)-grading, hence absolutely converges when z1 ∈M 1. We now proceed by induction. Assume that ⟨w0, Y1(w1, z1) · · · YN−1(wN−1, zN−1) ewN−1⟩converges absolutely in the region |z1| > · · · > |zN−1| > 0 for all ewN−1 ∈f WN−1. Further assume that the function it converges to can be analytically extended to a multivalued analytic function defined on MN−1. Now consider the series ⟨w0, Y1(w1, z1) · · · YN(wN, zN)wN+1⟩ = X n∈C K X k=0 ⟨w0, Y1(w1, z1) · · · YN−1(wN−1, zN−1)Yn,k(wN)wN+1⟩z−n−1 N (log zN)k, (6.10) as a series in zN for fixed |z1| > · · · > |zN−1| > 0. The ordinary differential equation (6.8) for ℓ= N after a change of variable ζN = z1/t N has a regular singularity at zN = 0 and is satisfied by (6.10) for all |z1| > · · · > |zN−1| > |zN| > 0. By Theorem 6.1, (6.10) converges absolutely as a series in zN. And by the induction hypothesis, (6.10) is absolutely convergent as a multi-series in |z1| > · · · > |zN−1| > |zN| > 0. Since the coefficients of (6.10) can be analytically extended to multivalued valued functions on M N−1, the sum (6.10) satisfies (6.8) for all (z1, . . . , zN−1) ∈M N−1 when |z1|, . . . , |zN−1| > |zN| > 0. Since the coefficients of (6.8) are analytic on M N, for fixed (z1, . . . , zN−1) we can analytically extend the sum (6.10) to all of (z1, . . . , zN) ∈M N. Finally, we will prove that every component-isolated singularity of the four-point correlation function is a regular singularity when the L(0)-actions on the twisted modules are semisimple. Lemma 6.8. Let w0 ∈W0, . . . , w3 ∈W3 be weight homogeneous and define ∆= wt w0 −wt w1 − wt w2 −wt w3. Let A ∈GL(2, C). Let (ξ1, ξ2) = (z1, z2)A be a change of variables. 
Then  ξ1 ∂ ∂ξ1 + ξ2 ∂ ∂ξ2 −∆ K ⟨w0, Y(w1, z1)Y(w2, z2)w3⟩= 0, for sufficiently large K > 0. (6.11) Furthermore, K = 1 when the L(0)-actions on W0, . . . , W3 are each semisimple. 40 Proof. We use the property [L(0), Y(w, x)] = Y(L(0)w, x) + xY(L(−1)w, x) to derive  z1 ∂ ∂z1 + z2 ∂ ∂z2 −∆  ⟨w0, Y(w1, z1)Y(w2, z2)w3⟩= ⟨(L(0) −wt w0)w0, Y(w1, z1)Y(w2, z2)w3⟩ −⟨w0, Y((L(0) −wt w1)w1, z1)Y(w2, z2)w3⟩ −⟨w0, Y(w1, z1)Y((L(0) −wt w2)w2, z2)w3⟩ −⟨w0, Y(w1, z1)Y(w2, z2)(L(0) −wt w3)w3⟩. The right-hand side is zero if the L(0)-actions on W0, . . . , W3 are each semisimple. Otherwise, repeated application gives  z1 ∂ ∂z1 + z2 ∂ ∂z2 −∆ K ⟨w0, Y(w1, z1)Y(w2, z2)w3⟩= 0 whenever K is sufficiently large. Then we use ξ1 ∂ ∂ξ1 + ξ2 ∂ ∂ξ2 = ξ1 ∂z1 ∂ξ1 ∂ ∂z1 + ∂z2 ∂ξ1 ∂ ∂z2  + ξ2 ∂z1 ∂ξ2 ∂ ∂z1 + ∂z2 ∂ξ2 ∂ ∂z2  =  ξ1 ∂z1 ∂ξ1 + ξ2 ∂z1 ∂ξ2  ∂ ∂z1 +  ξ1 ∂z2 ∂ξ1 + ξ2 ∂z2 ∂ξ2  ∂ ∂z2 = z1 ∂ ∂z1 + z2 ∂ ∂z2 . Proposition 6.9. Assume that g has finite order and assume the L(0)-actions on W0, . . . , W3 are each semisimple. Then every component-isolated singularity of F(⟨w0, Y(w1, z1)Y(w2, z2)w3⟩) is a regular singularity. Proof. Let δ1, δ2 ∈{0, ∞} and let (ζ1, ζ2) = (z1, z2)A −(β1, β2) be a change of variables such that (ζ1, ζ2) = (δ1, δ2) is a component-isolated singularity of a multivalued holomorphic function defined of M 2. We will use two other change of variables. Define (ξ1, ξ2) := (ζ1 + β1, ζ2 + β2) = (z1, z2)A and define (η1, η2) such that η s(δj)t j = ζj, with s(δ) = ( 1 if δ = 0, −1 if δ = ∞. Recall that we have shown the existence of the following differential equations m X k=0 ak(z1, z2)ck  ∂ ∂ξi k ψ = 0, i = 1, 2, for all non-zero c ∈SpanC{z1, . . . , zN}, (6.12) with ak ∈R(c=0) (deg 0) and am = 1, satisfied by F(⟨w0, Y1(w1, z1)Y2(w2, z2)w3⟩). We will use a(1) k , a(2) k , . . . and b(1) k , b(2) k , . . . for elements in R(c=0) (deg 0) with a(1) m = b(1) m = · · · = 1. For some fixed i = 1, 2, choose c = ξi to obtain. m X k=0 a(1) k (z1, z2)ξk i  ∂ ∂ξi k ψ = 0. (6.13) 41 Then we change basis from  ξk i  ∂ ∂ξi k : k ∈Z≥0  to  ξi ∂ ∂ξi k : k ∈Z≥0  , obtaining m X k=0 a(2) k (z1, z2)  ξi ∂ ∂ξi k ψ = 0. (6.14) We use Lemma 6.8 on (6.14) to obtain, for j = 1, 2 with i ̸= j, m X k=0 b(1) k (z1, z2)  ξj ∂ ∂ξj k ψ = 0. (6.15) Then we change basis from  ξj ∂ ∂ξj k : k ∈Z≥0  to  ξk j  ∂ ∂ξj k : k ∈Z≥0  , obtaining m X k=0 b(2) k (z1, z2)ξk j  ∂ ∂ξj k ψ = 0. (6.16) Then we change to (ζ1, ζ2) to obtain m X k=0 b(2) k (z1, z2)(ζj + βj)k  ∂ ∂ζj k ψ = 0. (6.17) Then we have m X k=0 b(2) k (z1, z2)  ζj ζj + βj m−k ζk j  ∂ ∂ζj k ψ = 0. (6.18) Changing basis again gives m X k=0 b(3) k (z1, z2)  ζj ζj + βj m−k  ζj ∂ ∂ζj k ψ = 0. (6.19) Then ζj ∂ ∂ζj = 1 s(δj)tηj ∂ ∂ηj gives m X k=0 b(4) k (z1, z2)  ζj ζj + βj m−k  ηj ∂ ∂ηj k ψ = 0. (6.20) We can do a similar process on (6.14) to change to the variable ηi. Hence we have the system of differential equations m X k=0 a(4) k (z1, z2)  ζi ζi + βi m−k  ηi ∂ ∂ηi k ψ = 0, (6.21) m X k=0 b(4) k (z1, z2)  ζj ζj + βj m−k  ηj ∂ ∂ηj k ψ = 0. (6.22) To show that (η1, η2) = (0, 0) is a regular singularity using Theorem 5.4, we are left to check case-by-case that the coefficients are holomorphic in (η1, η2) ∈Dr1(0)×Dr2(0) for some sufficiently 42 small r1, r2 > 0. Case 1. Assume δ1, δ2 = 0, z1 ̸= 0, z2 ̸= 0 and z1 −z2 ̸= 0 when (η1, η2) = (0, 0). Then the coefficients are non-singular for all (η1, η2) ∈Dr1(0) × Dr2(0). Case 2. Assume δ1, δ2 = 0, and z1 = 0 or z2 = 0 or z1 −z2 = 0 when (η1, η2) = (0, 0). 
This can happen for only one variable, and this variable must be proportional to ζi = ξi for some i = 1, 2. The functions a(4) k and b(4) k are in R(ξi=0) (deg 0), so have no singularities when ξi = 0, and the other remaining two variables z1, z2 or z1 −z2 are non-zero for all (η1, η2) ∈Dr1(0) × Dr2(0). Case 3. Assume δ1 = ∞or δ2 = ∞. Only one can have this value, say δj = ∞, and we must have δi = 0 for i ̸= j. The only singularities can come from ζi = 0 are ζj = ∞. The first kind is im- mediately handled since a(4) k and b(4) k are in R(ξi=0) (deg 0). The second kind is handled by the zero degrees. Since the functions a(4) k and b(4) k are at most polynomials in z1z−1 2 , z2z−1 1 , z1(z1−z2)−1, z2(z1−z2)−1, we only need to asses these four monomials. If the numerator and denominator both approach infinity as ηj →0, the limit is exists. If the denominator approaches infinity but the numerator does not, the limit is exists. If the numerator approaches infinity but the denominator does not, the denominator is proportional to ξi, so this monomial does not occur in R(ξi=0) (deg 0). Since we have shown that the coefficients are holomorphic in (η1, η2) ∈Dr1(0)×Dr2(0), we have shown that the singularity (η1, η2) = (0, 0) is indeed regular, hence we know that solutions to this system of differential equations have an expansion of the form K X i=1 ηri 1 ηsi 2 (log η1)ji(log η1)kifi(η1, η2), (6.23) for ri, si ∈C, ji, ki ∈Z≥0 and fi holomorphic in some product of sufficiently small discs centered at 0. Thus, after changing back to (ζ1, ζ2), we see that there is regular singularity at (ζ1, ζ2) = (δ1, δ2). Proposition 6.10. Assume that g has finite order. Then every component-isolated singularity of F(⟨w0, Y(w1, z1)Y(w2, z2)w3⟩) with change of coordinates of the form (ζ1, ζ2) = (z1, z2)A is a regular singularity. Proof. The proof follows similarly to the proof of the previous proposition. However, to apply Lemma 6.8 on (6.14), we choose m and K to be sufficiently large enough to be equal. Then we obtain the system m X k=0 ak(z1, z2)  ηi ∂ ∂ηi k ψ = 0, (6.24)  ηj ∂ ∂ηj m + m−1 X p,q=0 bp,q(z1, z2)  ηi ∂ ∂ηi p  ηj ∂ ∂ηj q ψ = 0, (6.25) for which we can apply Theorem 5.4. Remark 6.11. Change of coordinates of the form (ζ1, ζ2) = (z1, z2)A are not as general as the affine change of coordinates. However, they include important component-isolated singularities such as (ζ1, ζ2) = (z1 −z2, z2) = (0, ∞). This regular singularity is used to prove associativity. 43 Remark 6.12. The previous results do not show that every component-isolated singularity of F(⟨w0, Y1(w1, z1) · · · YN(wN, zN)wN+1⟩) is a regular singularity when N > 2. However, the N = 2 case above can be used to show associativity and commutativity of twisted intertwining operators (see [Hua21b]). Certain (and important) regular singularities in the N > 2 case can be shown using the convergence property, associativity and commutativity, and the N = 2 case. For example, consider (ζ1, ζ2, ζ3) = (z1 −z3, z2, z3) and (δ1, δ2, δ3) = (0, 0, ∞). Given intertwin- ing operators Y1, Y2, Y3, there exists intertwining operators Y4, Y5 such that F(⟨w0, Y1(w1, z1)Y2(w2, z2)Y3(w3, z3)w4⟩) = X m∈C K X k=0 F(⟨w0, (Y1)m,k(w1)Y2(w2, z2)Y3(w3, z3)w4⟩)z−m−1 1 (log z1)k = X m∈C K X k=0 F(⟨w0, (Y1)m,k(w1)Y4(w3, z3)Y5(w2, z2)w4⟩)z−m−1 1 (log z1)k = F(⟨w0, Y1(w1, z1)Y4(w3, z3)Y5(w2, z2)w4⟩) = X m∈C K X k=0 F(⟨w0, Y1(w1, z1)Y4(w3, z3)(Y5)m,k(w2)w4⟩)z−m−1 2 (log z2)k. The last expression gives the desired singularity. References [BHO01] J. de Boer, M. B. Halpern, and N. A. 
Obers. “The operator algebra and twisted KZ equations of WZW orbifolds”. In: J. High Energy Phys. 10 (2001), Paper 11, 71. [CM16] S. Carnahan and M. Miyamoto. “Regularity of fixed-point vertex operator subalge- bras”. In: (Mar. 2016). arXiv: 1603.05645 [math.RT]. [DE20] A. Dei and L. Eberhardt. “Correlators of the symmetric product orbifold”. In: J. High Energy Phys. 1 (2020), pp. 108, 45. [DVVV89] R. Dijkgraaf, C. Vafa, E. Verlinde, and H. Verlinde. “The operator algebra of orbifold models”. In: Comm. Math. Phys. 123.3 (1989), pp. 485–526. [DGH88] L. Dixon, P. Ginsparg, and J. Harvey. “Beauty and the beast: superconformal sym- metry in a Monster module”. In: Comm. Math. Phys. 119.2 (1988), pp. 221–241. [DHVW85] L. Dixon, J. Harvey, C. Vafa, and E. Witten. “Strings on orbifolds”. In: Nuclear Phys. B 261.4 (1985), pp. 678–686. [DHVW86] L. Dixon, J. Harvey, C. Vafa, and E. Witten. “Strings on orbifolds. II”. In: Nuclear Phys. B 274.2 (1986), pp. 285–314. [Don94] C. Dong. “Twisted modules for vertex algebras associated with even lattices”. In: J. Algebra 165.1 (1994), pp. 91–112. [DH25] J. Du and Y.-Z. Huang. Twisted intertwining operators and tensor products of (gen- eralized) twisted modules. 2025. arXiv: 2501.15003 [math.QA]. [EMS20] J. van Ekeren, S. M¨oller, and N. R. Scheithauer. “Construction and classification of holomorphic vertex operator algebras”. In: J. Reine Angew. Math. 759 (2020), pp. 61–99. 44 [FFR91] A. J. Feingold, I. B. Frenkel, and J. F. X. Ries. Spinor construction of vertex op- erator algebras, triality, and E(1) 8 . Vol. 121. Contemporary Mathematics. American Mathematical Society, Providence, RI, 1991, pp. x+146. [FLM84] I. B. Frenkel, J. Lepowsky, and A. Meurman. “A natural representation of the Fischer- Griess Monster with the modular function J as character”. In: Proc. Nat. Acad. Sci. U.S.A. 81.10 (1984), pp. 3256–3260. [FLM88] I. B. Frenkel, J. Lepowsky, and A. Meurman. Vertex operator algebras and the Mon- ster. Vol. 134. Pure and Applied Mathematics. Academic Press, Inc., Boston, MA, 1988, pp. liv+508. [GK21] T. Gem¨unden and C. A. Keller. “Non-abelian orbifolds of lattice vertex operator algebras”. In: J. Algebra 585 (2021), pp. 656–696. [HM22] G. H¨ohn and S. M¨oller. “Systematic orbifold constructions of Schellekens’ vertex operator algebras from Niemeier lattices”. In: J. Lond. Math. Soc. (2) 106.4 (2022), pp. 3162–3207. [Hua96] Y.-Z. Huang. “A nonmeromorphic extension of the Moonshine module vertex oper- ator algebra”. In: Moonshine, the Monster, and related topics (South Hadley, MA, 1994). Vol. 193. Contemp. Math. Amer. Math. Soc., Providence, RI, 1996, pp. 123– 148. [Hua05] Y.-Z. Huang. “Differential equations and intertwining operators”. In: Commun. Con- temp. Math. 7.3 (2005), pp. 375–400. [Hua10] Y.-Z. Huang. “Generalized twisted modules associated to general automorphisms of a vertex operator algebra”. In: Comm. Math. Phys. 298.1 (2010), pp. 265–292. [Hua17] Y.-Z. Huang. “On the applicability of logarithmic tensor category theory”. In: (Feb. 2017). arXiv: 1702.00133 [math.QA]. [Hua18] Y.-Z. Huang. “Intertwining operators among twisted modules associated to not- necessarily-commuting automorphisms”. In: J. Algebra 493 (2018), pp. 346–380. [Hua20] Y.-Z. Huang. “A construction of lower-bounded generalized twisted modules for a grading-restricted vertex (super)algebra”. In: Comm. Math. Phys. 377.2 (2020), pp. 909–945. [Hua21a] Y.-Z. Huang. “Lower-bounded and grading-restricted twisted modules for affine ver- tex (operator) algebras”. In: J. Pure Appl. 
Algebra 225.8 (2021), Paper No. 106618, 36. [Hua21b] Y.-Z. Huang. “Representation theory of vertex operator algebras and orbifold con- formal field theory”. In: Lie groups, number theory, and vertex algebras. Vol. 768. Contemp. Math. Amer. Math. Soc., [Providence], RI, [2021] ©2021, pp. 221–252. [HL99] Y.-Z. Huang and J. Lepowsky. “Intertwining operator algebras and vertex tensor categories for affine Lie algebras”. In: Duke Math. J. 99.1 (1999), pp. 113–134. [HLZ12a] Y.-Z. Huang, J. Lepowsky, and L. Zhang. “Logarithmic tensor category theory, II: Logarithmic formal calculus and properties of logarithmic intertwining operators”. In: (2012). arXiv: 1012.4196 [math.QA]. 45 [HLZ12b] Y.-Z. Huang, J. Lepowsky, and L. Zhang. “Logarithmic tensor category theory, III: Intertwining maps and tensor product bifunctors”. In: (2012). arXiv: 1012.4197 [math.QA]. [HLZ12c] Y.-Z. Huang, J. Lepowsky, and L. Zhang. “Logarithmic tensor category theory, IV: Constructions of tensor product bifunctors and the compatibility conditions”. In: (2012). arXiv: 1012.4198 [math.QA]. [HLZ12d] Y.-Z. Huang, J. Lepowsky, and L. Zhang. “Logarithmic tensor category theory, V: Convergence condition for intertwining maps and the corresponding compatibility condition”. In: (2012). arXiv: 1012.4199 [math.QA]. [HLZ12e] Y.-Z. Huang, J. Lepowsky, and L. Zhang. “Logarithmic tensor category theory, VI: Expansion condition, associativity of logarithmic intertwining operators, and the as- sociativity isomorphisms”. In: (2012). arXiv: 1012.4202 [math.QA]. [HLZ12f] Y.-Z. Huang, J. Lepowsky, and L. Zhang. “Logarithmic tensor category theory, VII: Convergence and extension properties and applications to expansion for intertwining maps”. In: (2012). arXiv: 1110.1929 [math.QA]. [HLZ12g] Y.-Z. Huang, J. Lepowsky, and L. Zhang. “Logarithmic tensor category theory, VIII: Braided tensor category structure on categories of generalized modules for a confor- mal vertex algebra”. In: (2012). arXiv: 1110.1931 [math.QA]. [HLZ14] Y.-Z. Huang, J. Lepowsky, and L. Zhang. “Logarithmic tensor category theory for generalized modules for a conformal vertex algebra, I: introduction and strongly graded algebras and their generalized modules”. In: Conformal field theories and tensor categories. Math. Lect. Peking Univ. Springer, Heidelberg, 2014, pp. 169–248. [HY19] Y.-Z. Huang and J. Yang. “Associative algebras for (logarithmic) twisted modules for a vertex operator algebra”. In: Trans. Amer. Math. Soc. 371.6 (2019), pp. 3747–3786. [Kna86] A. W. Knapp. Representation theory of semisimple groups. Vol. 36. Princeton Math- ematical Series. An overview based on examples. Princeton University Press, Prince- ton, NJ, 1986, pp. xviii+774. [KZ84] V. G. Knizhnik and A. B. Zamolodchikov. “Current algebra and Wess-Zumino model in two dimensions”. In: Nuclear Physics B 247.1 (1984), pp. 83–103. [Lep85] J. Lepowsky. “Calculus of twisted vertex operators”. In: Proc. Nat. Acad. Sci. U.S.A. 82.24 (1985), pp. 8295–8299. [LL04] J. Lepowsky and H. Li. Introduction to vertex operator algebras and their represen- tations. Vol. 227. Progress in Mathematics. Birkh¨auser Boston, Inc., Boston, MA, 2004, pp. xiv+318. [LW78] J. Lepowsky and R. L. Wilson. “Construction of the affine Lie algebra A1(1)”. In: Comm. Math. Phys. 62.1 (1978), pp. 43–53. [Miy15] M. Miyamoto. “C2-cofiniteness of cyclic-orbifold models”. In: Comm. Math. Phys. 335.3 (2015), pp. 1279–1286. 
Department of Mathematics, Rutgers University, 110 Frelinghuysen Rd., Piscataway, NJ 08854-8019 E-mail address: dwt24@math.rutgers.edu 46
Differential equations for intertwining operators among untwisted and twisted modules Daniel Tan Abstract Given any vertex operator algebra V with an automorphism g, we derive a Jacobi identity for an intertwining operator Y of type W3 W1 W2 when W1 is an untwisted V -module, and W2 and W3 are g-twisted V -modules. We say such an intertwining operator is of g 1 g - type. Using the Jacobi identity, we obtain homogeneous linear differential equations satisfied by the multi-series ⟨w0, Y1(w1, z1) · · · YN(wN, zN)wN+1⟩when Yj are of g 1 g -type and the modules are C1-cofinite and discretely graded. In the special case that V is an affine vertex operator algebra, we derive the "twisted KZ equations" and show that its solutions have regular singularities at certain prescribed points when g has finite order. When V is general and g has finite order, we use the theory of regular singular points to prove that the multi-series ⟨w0, Y1(w1, z1) · · · YN(wN, zN)wN+1⟩converges absolutely to a multivalued analytic function when |z1| > · · · > |zN| > 0 and analytically extends to the region zi, zi-zj ̸= 0. Furthermore, when N = 2, we show that these multivalued functions have regular singularities at certain prescribed points. 0 Introduction The representation theory of vertex operator algebras has made major progress in mathematically constructing two-dimensional conformal field theories (CFTs). One interesting feature of vertex operator algebra representation theory is the concept of "twisting" a representation by an automorphism. The first-discovered elements of this concept within mathematics include the twisted vertex operators in [LW78] and [Lep85], and the construction of the Moonshine Module in [FLM84]. In [FLM88], a vertex operator algebra structure was constructed on the Moonshine Module. This came to be understood as the first example of a process in physics known as orbifolding, where a known CFT is "quotiented" by a finite group of its symmetries. The general study of orbifold CFT was initiated by physicists Dixon, Harvey, Vafa and Witten in [DHVW85; DHVW86] on a physical level of rigor. Historically, the group was induced from a finite group G of symmetries of the target manifold M in a string theory, and the orbifold theory described the physics of a string propagating in the orbifold M/G; hence the term "orbifold". However, in the current general study, the group consists of abstract automorphisms of the CFT, and does not need to come from any manifold. As discussed in [DHVW85; DHVW86], the orbifolding procedure has two parts. First, twisted sectors Hg are included in the state space for each g ∈G. These sectors are characterized by the property that the fields of the original theory are twisted by the action of g when they circle an insertion of a state in Hg. By circling other twisted sectors, each twisted sector Hg gains an action of the centralizer CG(g). And second, there is a projection of each Hg onto the CG(g)-invariant 1 16 Oct 2025 states. In this paper, we focus on just the first step. More specifically, we study how the untwisted sector H1 acts on a g-twisted sector Hg. The chiral properties of orbifold CFTs were studied by Dijkgraaf, Vafa, E. Verlinde and H. Verlinde in [DVVV89]. They explained, on a physical level of rigor, how orbifold CFTs can be constructed from the twisted representations of the chiral algebra (i.e. twisted modules for a vertex operator algebra) of the original CFT. 
The fields and their correlation functions are assumed to satisfy the usual convergence, radial ordering, and operator product expansion properties. The chiral study is closest in spirit to the vertex-operator-algebra theoretic study of orbifold CFT. If we are to rigorously construct orbifold CFTs using the representation theory of vertex operator algebras, we must prove that the convergence, radial ordering, and operator product expansion properties of the correlation functions follow from certain natural conditions for vertex operator algebras and their modules that are relatively easy to verify. The mathematical study of chiral orbifold CFT is outlined as follows. Given an automorphism g of a vertex operator algebra V , a g-twisted V -module is a vector space equipped with an action of V , similar to that of a V -module action, but twisted by g as the vertex operator formally circles the origin, that is Y (u, x) = e2πix d dxY (gu, x). Examples of twisted vertex operators were first discovered in mathematics in [LW78] when constructing the affine Lie algebra A(1) 1 . More generally, twisted modules for vertex operator algebras with an automorphism of finite order were axiomatized in [FFR91] and [Don94] by compiling the basic properties discovered and proved in [FLM88], notably the "twisted Jacobi identity". The notion of twisted module introduced in [Hua10] allows for general automorphisms, but the algebraic Jacobi identity is replaced with the analytic axiom of "duality". Given a group G of automorphisms of V , the subspace V G of elements fixed under the action of G is a subalgebra of V . One can extend V G by first adjoining g-twisted V -modules to V , for each g ∈G, and intertwining operators between them before taking the fixed-point subalgebra. The moonshine module V ♮was the first example of a vertex operator algebra obtained by an orbifold construction. In [FLM88], Frenkel, Lepowsky and Meurman constructed the V ♮from the Leech lattice vertex operator algebra VΛ and the group generated by the involution induced by negation of the Leech lattice. The physical interpretation of the process as an orbifold CFT was explained in [DGH88], and explained conceptually with mathematical rigor in [Hua96]. In [EMS20], a general construction of orbifold CFTs was solved for simple, rational, C2-cofinite, holomorphic vertex operator algebra of CFT-type (i.e. positive energy) with G = ⟨g⟩finite cyclic by use of [Miy15; CM16]. A key part in a general study of the orbifolding procedure, is the study of intertwining operators Y(·, x)·: W1 ⊗W2 →W3{x}[log x] among twisted V -modules W1, W2 and W3 as introduced in the most general form in [Hua18; DH25]. These twisted intertwining operators are believed to give an intertwining-operator-algebraic structure to the direct sum of twisted modules, which provides the vertex-operator-algebraic structure when restricted to the fixed-point subspace. Twisted intertwining operators can be thought of as the mathematical counterpart to the chiral factors of the fields (or chiral vertex operators) corresponding to states in the twisted sectors acting on other twisted sectors. It is not only necessary to define twisted intertwining operators, but one must also prove that the series ⟨w0, Y1(w1, x1) · · · YN(wN, xN)wN+1⟩|x1=z1,...,xN=zN converges to a multivalued analytic function in the domain |z1| > · · · > |zN| > 0 and analytically continues the entire region zi, zi-zj ̸= 0, i ̸= j. 
We refer to this as the convergence property for products of N twisted intertwining operators. 2 Furthermore, one must also prove that ⟨w0, Y1(w1, x1)Y2(w2, x2)w3⟩|x1=z1,x2=z2 analytically continues to ⟨w0, Y3(Y4(w1, x0)w2, x2)w3⟩|x0=z1-z2,x2=z2 and ⟨w0, Y5(w2, x2)Y6(w1, x1)w3⟩|x1=z1,x2=z2 in the domains |z2| > |z1 -z2| > 0 and |z2| > |z1| > 0, respectively, for some twisted intertwining operators Y3, Y4, Y5, Y6. This condition is known as associativity and commutativity for twisted intertwining operators. Physically speaking, it says that the chiral factors of correlation functions respect radial ordering, and fields have operator product expansions. One of the main requirements to prove associativity and commutativity is to prove that the analytic continuation of the product of two twisted intertwining operators has "regular singularities" at certain points. This also plays an important role in constructing a G-crossed tensor category structure on the category of the twisted V -modules, as explained in [Hua21b]. Similarly to the construction of the braided tensor category structure on the category of V -modules in the eight-part series [HLZ14; HLZ12a; HLZ12b; HLZ12c; HLZ12d; HLZ12e; HLZ12f; HLZ12g], the convergence property together with regular singularities at certain points are used to construct the associativity natural isomorphism and verify its properties. Little is known about the convergence property for general V and G, with the present understanding explained in [Hua21b]. If G is trivial, the most general results are proved by Huang (originally in [Hua05] and extended in [HLZ12f]) by deriving certain differential equations when the modules are C1-cofinite and discretely graded. This includes the special case that V is C2-cofinite. In [Miy15], Miyamoto proved that V G is C2-cofinite when V is C2-cofinite, CFT-type (i.e. positive energy) and simple, and G is finite and solvable. Every g-twisted V -module, with g ∈G, restricts to a V G-module. And in this context, twisted intertwining operators among V -modules twisted by elements in G become intertwining operators among V G-modules. Hence, Huang's theory [Hua05] can be applied to the C2-cofinite vertex operator algebra V G to show the convergence property among g-twisted V -modules, for g ∈G. One downside to this approach relies on knowing that V G is C2-cofinite, which is difficult to prove and is not the most general assumption. We hope to prove the convergence property directly by generalizing Huang's method [Hua05] of differential equations with regular singularities to twisted intertwining operators. In fact, physicists have discovered explicit examples of differential equations in the twisted case in [BHO01] and [DE20]. The twisted KZ equations derived by de Boer, Halpern and Obers in [BHO01] are explicit enough to see their regular singularities. We emphasize that their results are for a certain special case. Their derivation relies on the assumption that the chiral vertex operators in the chiral correlation function correspond to states from the untwisted sector in the presence of one twisted sector. We formulate this more precisely by saying a twisted intertwining operator Y(·, x) : W1 ⊗ W2 →W3 is of g 1 g -type if W1 is an untwisted V -module, and W2 and W3 are g-twisted V - modules. That is, a twisted intertwining operator of g 1 g -type is one among an untwisted and twisted module (the third module is necessarily a twisted module twisted by the same element). 
By generalizing the method of differential equations used in [Hua05], we derive differential equations satisfied by the "chiral correlation functions" ⟨w0, Y1(w1, x1) · · · YN(wN, xN)wN+1⟩given by the product of N g 1 g -type intertwining operators Yi. These differential equations hold when the modules are C1-cofinite and discretely graded, and when g has a finitely-generated set of eigenvalues, for which there are many natural examples. Our result does not require any C2cofiniteness assumption on V nor V G. In fact, we do not require any knowledge of a larger group G containing g. When the order of g is finite, we use these differential equations and the theory of regular singular points to prove the convergence property for the product of N g 1 g -type intertwining operators. We still have homogeneous linear differential equations when g has infinite order. 3 There are, however, logarithms in the coefficients that prevent us from using the theory of regular singular points at the singularities. Physically speaking, the correlation functions that we study in this paper describe how the chiral CFT acts on the g-twisted sector. This should not be confused with the description of the chiral algebra (i.e. vertex operator algebra) acting on the g-twisted sector, which is just the notion of a g-twisted V -module. A chiral CFT consists of more than a vertex operator algebra in general; it also contains modules for the vertex operator algebra and intertwining operators among them satisfying certain properties. The consideration of g 1 g -type intertwining operators can be avoided if V is assumed to be holomorphic (i.e. V has one irreducible module up to isomorphism, namely itself) as is usually done in the currently known mathematical orbifold constructions [FLM88; Hua96; EMS20; HM22; GK21]. The present work does not require V to be holomorphic, hence is more general. Though not proved here, we hope to extend Huang's method of differential equations to correlation functions involving twisted intertwining operators that are not of g 1 g -type. The main difficulty in doing so comes from the multivalued V -action on the twisted modules. Formal calculus is used to derive differential equations without assuming any convergence. The Jacobi identity is the duality axiom written in the language of formal calculus, and hence, is used in our computations. While duality for twisted intertwining operators is easy to write down [Hua18], the multivaluedness occurring in the twisted modules makes it difficult to convert to a Jacobi identity. Furthermore, once a Jacobi identity is obtained for twisted intertwining operators, it is much more computationally difficult to work with than the Jacobi identity for untwisted intertwining operators. Despite this, we believe that Huang's method of differential equations can be generalized to the case where the modules are twisted by elements of a general finite group. In this paper, we first set notation and conventions for a multivalued version of formal calculus. In Section 2, we derive a Jacobi identity for g 1 g -type intertwining operators, which must be done for the formal calculus approach of deriving differential equations. We immediately obtain some useful relations satisfied by g 1 g -type intertwining operators. In Section 3, we derive higher order homogeneous linear differential equations satisfied by the correlation functions of a product of N g 1 g -type intertwining operators under certain technical assumptions. 
In Section 4, we explicitly derive a first order homogeneous linear system of partial differential equations, known as the twisted KZ equations, when V is an affine vertex operator algebra, generalizing results of [BHO01]. Then, in Section 5, we show that the solutions to the twisted KZ equations have regular singularities when the order of g is finite. In Section 6, we assume that V is general and g has finite order to prove the systems derived in Section 3 can be chosen to have regular singularities at certain prescribed points. We use this to prove the convergence property for products of N g 1 g -type intertwining operators, and we show that the product of two g 1 g -type intertwining operators has regular singularities at certain points. Acknowledgments I would like to thank Professor Yi-Zhi Huang and Professor James Lepowsky for their helpful discussions, guidance and suggestions. 4 1 Notation and conventions Our base field for vector spaces will always be C, which will be important for the theory. The notion of vertex operator algebra and its morphisms can be found in Definition 3.1.22 of [LL04]. The framework of logarithmic formal calculus that we use can be found in [HLZ12a]. To easily distinguish between formal variables (all formal variables commute) and complex numbers, we will use x, log x and xi, log xi for formal variables and use z, zi, ζ, ζi, ξ, ξi, η, ηi for complex numbers. For a vector space W, we use W{x1, . . . , xN} to denote the space of formal generalized power series P n1,...,nN wn1,...,nNxn1 1 · · · xnN N with coefficients in W and powers in C. We will regularly consider formal series of the form f(x1, . . . , xN, log x1, . . . , log xN) ∈C{x1, . . . , xN}[log x1, . . . , log xN], but we suppress the log variables and write f(x1, . . . , xN) for brevity. We use ∂ ∂xi to denote the derivation defined by ∂ ∂xi xn j = nxn-1 i δi,j and ∂ ∂xi log xj = x-1 i δi,j. We sometimes write d dx when there is only one pair of variables x and log x involved. For any operator A on a vector space W, locally nilpotent operator N on W, and formal variables x, x1, log x, we have the following meanings for formal logarithms and exponentials, using logarithm and exponential notation: log(1 + x) = ∞ X k=1 (-1)k+1 k xk, log(x + x1) = log x + log(1 + x1/x), eAx = ∞ X k=1 1 k!Akxk, xN = eN log x, (1 + x)A = ∞ X k=1 A k xk. (1.1) These formulas remain valid when we can make a substitution of a formal series (in possibly multiple variables) that is still well defined, for example (x + x1)N = eN log(x+x1) = eN(log x+log(1+x1/x)) = xNeN log(1+x1/x) = xN(1 + x1/x)N. When xS is used for a semisimple operator S on W, it is understood to mean xSu = xαu when Su = αu, and then extended to all of W. If an operator A has commuting semisimple and locally nilpotent parts S and N, we use xA to mean xSxN and (x + x1)A = (x + x1)S(x + x1)N = xA(1 + x1/x)A. Given a formal series f(x1, . . . , xN) = X r1,...,rN∈C k1,...,kN∈Z≥0 ar1,...,rN,k1,...,kNxr1 1 . . . xrN N (log x1)k1 . . . (log xN)kN, we can specialize a formal variable to a sum of two formal variables by understanding logarithms of formal variables as explained above in (1.1). For example, f(x + x0, . . . , xN) = X r1,...,rN∈C k1,...,kN∈Z≥0 ar1,...,rN,k1,...,kN(x + x0)r1 . . . xrN N (log(x + x0))k1 . . . (log xN)kN. 5 For each z ∈C×, define arg z to be the argument of z satisfying 0 ≤arg z 0 V(n). 
If g is an automorphism of V , we use V [α], V [α] (n) and V [α] + to denote the spaces of generalized eigenvectors of g (acting on V , V(n) and V+, respectively) with eigenvalue e2πiα. We will use W[n] to denote the L(0)-generalized eigenspaces of a twisted V - module (see Definition 3.1 of [Hua18]) with eigenvalue n ∈C. We say wt w = n is the (conformal) weight of a vector in w ∈W[n]. Suppose we have a tensor product W0 ⊗· · · ⊗WN+1 of C-graded spaces graded by weight. Given homogeneous elements wi ∈Wi, i = 0, . . . , N + 1, we will frequently use σ to denote R(wt w0+· · ·+wt wN+1), i.e. the real component of the weight of the tensor product w0⊗· · ·⊗wN+1. 2 A Jacobi identity for g 1 g -type intertwining operators We use the definition of vertex operator algebra V = (V, Y, 1, ω) in Definition 3.1.22 of [LL04]. It is important that our base field is C. Recall that an automorphism g of a vertex operator algebra V is a linear automorphism of V such that gY (u, x)v = Y (gu, x)gv for all u, v ∈V , g1 = 1 and gω = ω. We will use the notions of twisted modules and intertwining operators among twisted modules as defined in Definition 3.1 and Definition 4.1 of [Hua18], respectively. In this paper, we will not require the g-action on the g-twisted module. We recall the definition of an intertwining operator among twisted modules. 6 Definition 2.1. Let V be a vertex operator algebra with automorphisms g1, g2, g3. Let Wi be a gi-twisted V -module, for i = 1, 2, 3. An intertwining operator of type W3 W1 W2 is a linear map Y(·, x)· : W1 ⊗W2 →W3{x}[log x], w1 ⊗w2 7→ K X k=0 X n∈C Yn,k(w1)w2x-n-1(log x)k (2.1) satisfying the following conditions: (i) (Lower truncation) For all w1 ∈W1, w2 ∈W2, n ∈C and k = 0, . . . , K, Yn+l,k(w1)w2 = 0 for all sufficiently large l ∈Z. (ii) (L(-1)-derivative property) For all w1 ∈W1, d dxY(w1, x) = Y(L(-1)w1, x). (iii) (Duality) For all u ∈V , w1 ∈W1, w2 ∈W2 and w′ 3 ∈W ′ 3, there exists a formal series f(x0, x1, x2) = N X i,j,k,l,m,n=0 aijklmnxri 0 x sj 1 xtk 2 (log x0)l(log x1)m(log x2)n ∈C{x0, x1, x2}[log x0, log x1, log x2] (2.2) such that for p1, p2, p12 ∈Z, the series ⟨w′ 3, (YW3)p1(u, z1)Yp2(w1, z2)w2⟩= ⟨w′ 3, YW3(u, x1)Y(w1, x2)w2⟩|p1,p2 x1=z1,x2=z2, ⟨w′ 3, Yp2(w1, z2)(YW2)p1(u, z1)w2⟩= ⟨w′ 3, Y(w1, x2)YW2(u, x1)w2⟩|p1,p2 x1=z1,x2=z2, ⟨w′ 3, Yp2((YW1)p12(u, z1 -z2)w1, z2)w2⟩= ⟨w′ 3, Y(YW1(u, x0)w1, x2)w2⟩|p12,p2 x0=z1-z2,x2=z2, (2.3) are absolutely convergent in the regions |z1| > |z2| > 0, |z2| > |z1| > 0 and |z2| > |z1-z2| > 0, respectively. Moreover, they converge to f p1,p1,p2(z1 -z2, z1, z2), f p2,p1,p2(z1 -z2, z1, z2), f p12,p2,p2(z1 -z2, z1, z2) (2.4) in the regions |z1| > |z2| > 0 and -π 2 |z1| > 0 and -3π 2 |z1 -z2| > 0 and -π 2 |z1 -z2| > 0 and -π 2 0. We recall that a unique expansion set is a subset S of C×C satisfying the condition: if a series X (α,β)∈S aα,βzα(log z)β = X (α,β)∈S aα,βeα log z(log z)β is absolutely convergent and convergent to 0 on some non-empty open subset of C× (for some chosen branch of log z), then aα,β = 0 for all (α, β) ∈S. In [Hua17], it was shown that any strictly monotonic sequence {ni}i∈Z>0 of real numbers together with a list m1, . . . , mlof distinct real numbers produces a unique expansion set {ni + mji : i ∈Z>0, j = 1, . . . , l} × {0, . . . , N} (2.10) 9 for any N ∈Z≥0. We can see that the three series in (2.9), and the three series f(x1 -x2, x1, x2), f(-x2 +x1, x1, x2) and f(x0, x2 +x0, x2) are double sums over unique expansion sets of type (2.10) with the directions of the monotonic sequences matching. 
Assume that S and T are unique expansion sets and we have two series X (α,β)∈S X (γ,δ)∈T aα,β,γ,δzα 1 (log z1)βzγ 2(log z2)δ and X (α,β)∈S X (γ,δ)∈T bα,β,γ,δzα 1 (log z1)βzγ 2(log z2)δ (2.11) are both absolutely convergent in the some non-empty open set of C× × C×, and furthermore converge to the same values. Then for fixed z1 (in some open set of C×), X (γ,δ)∈T  X (α,β)∈S (aα,β,γ,δ -bα,β,γ,δ)zα 1 (log z1)β  zγ 2(log z2)δ is absolutely convergent for all z2 in some open set of C× and is furthermore convergent to zero. Since T is a unique expansion set, X (α,β)∈S (aα,β,γ,δ -bα,β,γ,δ)zα 1 (log z1)β converges to zero for all (γ, δ) ∈T. Furthermore, this series converges absolutely for all z1 in an open set of C×. Since S is a unique expansion set, aα,β,γ,δ -bα,β,γ,δ = 0 for all (α, β, γ, δ) ∈S × T. Hence, the two series in (2.11) are equivalent. We now use this argument to show that the following series are equivalent. Since xα 1⟨w′ 3, YW3,0(u, x1)Y(w1, x2)w2⟩|p1,p2 x1=z1,x2=z2 = xα 1f(x0, x1, x2)|p1,p1,p2 x0=z1-z2,x1=z1,x2=z2 = xα 1f(x1 -x2, x1, x2)|p1,p2 x1=z1,x2=z2 in the region |z1| > |z2| > 0 and -π 2 |z1| > 0 and -3π 2 |z1 -z2| > 0 and -π 2 0. 14 Lemma 2.8. Let u ∈V [α] (1) with α ∈C with R(α) ∈[0, 1), and let w1 ∈W1 be of lowest weight. Then Y(u-1w1, x2) = Y +(u, x2)Y(w1, x2) + Y(w1, x2)Y -(u, x2) -x-1 2 Y((Lu)0w1, x2). , (2.21) [Y -(u, x1), Y(w1, x2)] = (x1 -x2)-1Y (x2/x1)L u 0 w1, x2 , [Y +(u, x1), Y(w1, x2)] = (x2 -x1)-1Y (x2/x1)L u 0 w1, x2 , (2.22) Proof. Follows from the equations (2.15) and (2.18) using unw1 = 0 when n > 0. 3 Differential equations for g 1 g -type intertwining operators In this section, we generalize Huang's method in [Hua05] to derive differential equations for products of g 1 g -type-intertwining operators. The main idea of his method is to show that differential equations exist as a consequence of certain finiteness conditions. Definition 3.1. Let V be a vertex operator algebra with an automorphism g. Given a g-twisted V -module W, define C1(W) to be the subspace of W spanned by the elements of the form uα-1,0w for all u ∈V [α] + , and α ∈C with R(α) ∈[0, 1). We say that W is C1-cofinite if W satisfies the condition dim W/C1(W) 0. Let W1, . . . , WN be untwisted V -modules, let f W0, . . . , f WN be g-twisted V -modules. We assume that all modules are C1-cofinite and discretely graded. Let Yi : Wi ⊗f Wi →f Wi-1{x}[log x] be intertwining operators, for i = 1, . . . , N. To aid with notation later on, we also denote f W ′ 0 by W0 (a g-1-twisted V -module) and f WN by WN+1. Let A be the additive subgroup of C generated by the union of {1} and the set of eigenvalues for S. Let R be the subring of C{x1, . . . , xN}[(xi -xj)-1 : i, j = 1, . . . , N, i ···>|xN| : R →C{x1, . . . , xN}[log x1, . . . , log xN] (3.2) that expands (xi -xj)-1 in non-negative powers of xj when j > i. Note that R is Noetherian if A is finitely generated. For example, if g has finite order t, then A = 1 tZ. This will be important in Section 6. In the case that V is finitely generated, we can project the finitely many generators onto the subspaces V(n) and consider the finitely many subspaces V(n) where the image of the projection is non-trivial. The automorphism g decomposes these V(n) into eigenspaces for S. We can then take a basis of eigenvectors for S in each of these subspaces. Now we have a finite generating set for V consisting of eigenvectors for S. Hence, A is finitely generated when V is finitely generated. Consider the R-module T = R ⊗W0 ⊗· · · ⊗WN+1. 
(3.3) Since T is isomorphic as an R-module to (R ⊗W0)⊗R · · ·⊗R (R ⊗WN+1), we will sometimes write elements in T as (f0(x1, . . . , xN)w0) ⊗· · · ⊗(fN+1(x1, . . . , xN)wN+1) in place of f0(x1, . . . , xN) · · · fN+1(x1, . . . , xN) ⊗w0 ⊗· · · ⊗wN+1. 16 The C-gradings by conformal weight on W0, . . . , WN+1, together with the trivial grading on R, induce a grading on T called the weight. We denote the subspace spanned by all homogeneous elements of weight n ∈C with R(n) = r by T(r) so that T = ` r∈R T(r). Observe that the discrete gradings on W0, . . . , WN+1, ensure that W0⊗· · ·⊗WN+1 is a discretely graded vector space. Hence, each subspace Fr(T) = a s∈R≤r T(s) (3.4) is a finitely-generated R-module. Hence, we have a filtration Fr(T) ⊆Fs(T), for r ≤s, of finitely-generated submodules of T. (The chosen grading provides a filtration of finitely-generated submodules, whereas the grading wt(w0 ⊗· · · ⊗wN+1) = -wt w0 + · · · + wt wN+1 does not.) Define the map φ : T →C{x1, . . . , xN}[log x1, . . . , log xN], φ(f(x1, . . . , xN) ⊗w0 ⊗· · · ⊗wN+1) = ι|x1|>···>|xN|f(x1, . . . , xN)⟨w0, Y1(w1, x1) · · · YN(wN, xN)wN+1⟩ (3.5) that evaluates elements in T as matrix coefficients using Y1, . . . , YN. We seek to define a submodule J of T that lies in the kernel of φ, satisfying certain properties to be discussed later in Lemma 3.4. Given u ∈V [α] for α ∈C with α ∈[0, 1), we use the notation α′ = ( -α if R(α) = 0, 1 -α if R(α) > 0. (3.6) This is the eigenvalue of Sg-1 corresponding to u. We also drop the subscripts from YW and Yi, which can be deduced by looking at the subscripts of their arguments. We now define elements that will generate the submodule J of T. Let u ∈V [α] and wi ∈Wi for i = 0, . . . , N + 1. Then for l= 1, . . . , N, by (2.15) and (2.18), we observe that 0 = ⟨w0, Y(w1, x1) · · · Y +(u, xl)Y(wl, xl) + Y(wl, xl)Y -(u, xl) - X k≥0 x-k lY L k u -1+k wl, xl · · · Y(wN, xN)wN+1⟩ = ⟨w0, Y +(u, xl)Y(w1, x1) · · · Y(wN, xN)wN+1⟩ + X p=1,...,N p̸=l ⟨w0, Y(w1, x1) · · · X j,k≥0 : (xl-xp)-1-j : x-k p Y   xp xl L L k u ! j+k wp, xp  · · · Y(wN, xN)wN+1⟩ -⟨w0, Y(w1, x1) · · · X k≥0 x-k lY L k u -1+k wl, xl · · · Y(wN, xN)wN+1⟩ + ⟨w0, Y(w1, x1) · · · Y(wN, xN)Y -(u, xl)wN+1⟩, 17 where we use the notation : (xl-xp)-1-j : := ι|x1|>···>|xN|(xl-xp)-1-j = ( (xl-xp)-1-j if l 0 x-k lw0 ⊗· · · ⊗ L k u -1+k wl⊗· · · ⊗wN+1 + X j≥0 K X k=0 x-α-j-1 l (log xl)kw0 ⊗· · · ⊗wN ⊗uα+j,kwN+1 ∈Fs(J) + Fr(T) 20 for some r Mi. Taking M = PN+1 i=0 Mi, we see that any homogeneous element in T that has real component of its weight larger than M must be in N+1 X i=0 R ⊗W0 ⊗· · · ⊗C1(Wi) ⊗· · · ⊗WN+1 Hence, we have M ∈Z such that a m>M T(m) ⊆ N+1 X i=0 R ⊗W0 ⊗· · · ⊗C1(Wi) ⊗· · · ⊗WN+1 (3.11) We immediately have Fr(T) ⊆FM(T) ⊆Fr(J) + FM(T) for all r ≤M. Since T is discretely graded, we can use induction on r ≥M. Assume that we have some s > M such that Fr(T) ⊆ Fr(J) + FM(T) for all r M T(m), we can consider elements in the summands of the (3.11), without loss of generality. By Lemma 3.4, these elements are in Fs(J) + Fr(T) for some r · · · > |zN|. 22 Proof. Since J is contained in the kernel of φ, we have φ : T/J →C{x1, . . . , xN}[log x1, . . . , log xN]. Using this map on (3.15) followed by the L(-1)-derivative property, we obtain N X j=1 blj ∂ ∂xj !ml ⟨w0, Y1(w1, x1) · · · YN(wN, xN)wN+1⟩ + ml X k=1 ι|x1|>···>|xN|ak,l(x1, . . . , xN) N X j=1 blj ∂ ∂xj !ml-k ⟨w0, Y1(w1, x1) · · · YN(wN, xN)wN+1⟩= 0. (3.18) Evaluating x1 = z1, . . . , xN = zN gives the desired differential equations. Remark 3.9. 
Recall that R is Noetherian when V is finitely generated, for which there are many natural examples. Furthermore, R is Noetherian when the order of g is finite, which is an assumption in Section 6. Remark 3.10. We have shown that the series ⟨w0, Y1(w1, z1) · · · YN(wN, zN)wN+1⟩formally satisfies (3.17), but this does not show that the series converges. Since there are non-integral powers and logarithms of zi, we cannot use the usual power series convergence for analytic systems (even if we reduce to ordinary differential equations in zi). To show convergence, we instead use the theory of differential equations with regular singular points in Section 6 while assuming that g has finite order. We do not have a method for showing the convergence when g is a general automorphism. 4 The twisted KZ equations In this section, we work with the affine vertex operator algebras Vˆg(κ, 0) as an explicit family of vertex operator algebras. A slightly different method is used than in the previous section to derive a system of differential equations when w0, . . . , wN+1 are lowest weight vectors. When the modules are untwisted, this system is well known in the physics literature as "the KZ equations" [KZ84]. When W ′ 0 and WN+1 are g-twisted (by a semisimple inner-automorphism of g), this system is known in the physics literature as the "twisted KZ equations" (for "inner-automorphic WZW orbifolds") [BHO01]. Observe that under the condition that W1, . . . , WN are untwisted and W ′ 0 and WN+1 are g-twisted, the intertwining operators are necessarily of g 1 g -type. This is an important assumption used by de Boer, Halpern and Obers to derive the twisted KZ equations. We keep the assumption of g 1 g -type intertwining operators, but we will generalize the twisted KZ equations so that g can be an arbitrary automorphism of the vertex operator algebra Vˆg(κ, 0). In the following section, we will show that the solutions to the twisted KZ equations have "regular singularities" when g has finite order. Let g be a finite-dimensional simple Lie algebra with an invariant symmetric bilinear form (·, ·). Let g be an automorphism of g. Let L, S and N be operators of g such that g = e2πiL, S and N are the semisimple and nilpotent parts of L, and the eigenvalues of S have real part in [0, 1). As defined in [Hua21a], the twisted affine Lie algebra ˆg[g] is spanned by the elements k, a ⊗tn = an, for a ∈g and n ∈C such that ga = e2πina, (4.1) 23 with Lie bracket given by [am, bn] = [a, b]m+n + ((m + N)a, b)δm+n,0k, (4.2) [k, am] = 0. (4.3) We have the subspaces ˆg[g] + := Span{an ∈ˆg[g] : R(n) > 0}, ˆg[g] -:= Span{an ∈ˆg[g] : R(n) 0 1 2(κ + h∨)ai′(-j)ai(j -1) + X i∈I X j∈Z≤0 1 2(κ + h∨)ai(j -1)ai′(-j). (Note that here we use a basis {ai}i∈I of generalized eigenvectors for g. Although this is not needed for the untwisted module Wl, it will later be used for the g-twisted modules.) Utilizing that wlis a lowest weight vector, we obtain the relation Yl(L(-1)wl, xl) = Yl X i∈I 1 2(κ + h∨)ai′(-1)ai(0)wl+ X i∈I 1 2(κ + h∨)ai(-1)ai′(0)wl, xl ! , or rearranged as 2(κ + h∨)Yl(L(-1)wl, xl) = X i∈I Yl ai′(-1)ai(0)wl+ ai(-1)ai′(0)wl, xl = X i∈I Yl(ai(-1)ai′(0)wl, xl). The vectors ai = ai(-1) 1 and ai′(0)wlsatisfy the assumptions on u and w1, respectively, used to derive (2.21). Hence, we can use (2.21) to obtain 2(κ + h∨)Yl(L(-1)wl, xl) = X i∈I Yl(ai(-1)ai′(0)wl, xl) = X i∈I Y +(ai, xl)Yl(ai′(0)wl, xl) + Yl(ai′(0)wl, xl)Y -(ai, xl) -x-1 lYl((Lai)(0)ai′(0)wl, xl) . (4.10) Assume u ∈(Vˆg(κ, 0))(1) ∼= g and α ∈C with Su = αu. 
Let ew0 ∈f W0 and f ∈Hom(WN+1, f W0). By use of Y +(u, x) = Y + 0 (x-Nu, x) and the definition of the contragredient module action ⟨Y ′(v, x)w′, w⟩:= ⟨w′, Y (exL(1)(-x-2)L(0)v, x-1)w⟩, (4.11) we have ⟨w0, Y +(u, x) ew0⟩= K X k=0 X m∈-α+Z>0 -1 k! ⟨(N ku)(m)w0, ew0⟩x-m-1(log x)k = 0. (4.12) 25 By use of Y -(u, x) = Y - 0 (x-Nu, x), we have ⟨w0, fY -(u, x)wN+1⟩= K X k=0 X m∈α+Z≥0 (-1)k k! ⟨w0, f(N ku)(m)wN+1⟩x-m-1(log x)k = K X k=0 (-1)k k! ⟨w0, f(N ku)(α)wN+1⟩x-α-1(log x)k = ⟨w0, f(x-Nu)(α)wN+1⟩x-α-1, (4.13) If R(α) = 0, we still have (N ku)(α)wN+1 being of lowest weight. If R(α) > 0, we further have (x-Nu)(α)wN+1 = 0. We wish to remove all instances of Y +(ai, xl) and Y -(ai, xl) from ∂ ∂xl⟨w0, Y1(w1, x1) · · · YN(wN, xN)wN+1⟩. To do so, we can use (2.22) to move them as far left and right as possible, respectively. For p ···>|xN|(xl-xp)-1 = ( (xl-xp)-1 when l 0, let D× r (a) denote the open punctured disc {z ∈C : 0 0 such that g(ζ1, . . . , ζn) is defined on D× r1(δ1) × · · · × D× rn(δn) Definition 5.2. We say that a multivalued holomorphic function f(z1, . . . , zn) has a regular singularity at (z1, . . . , zn) = (0, . . . , 0) if there exist r1, . . . , rn ∈R>0 such that f has a local expansion in the region D× r1(0) × · · · × D× rn(0) of the form φ(z1, . . . , zn) = J X j=1 z s1,j 1 · · · zsn,j n (log z1)m1,j · · · (log zn)mn,jhj(z1, . . . , zn), for some numbers si,j ∈C, mi,j ∈Z≥0, and for some holomorphic functions hj(z1, . . . , zn) on Dr1(0) × · · · × Drn(0). Define φ0(z) = z and φ∞(z) = z-1. Let (δ1, . . . , δn) ∈{0, ∞}n. We say that f(z1, . . . , zn) has a regular singularity at (z1, . . . , zn) = (δ1, . . . , δn) if h(ξ1, . . . , ξn) = f(φδ1(ξ1), . . . , φδn(ξn)) has a regular singularity at (ξ1, . . . , ξn) = (0, . . . , 0). Let A ∈GL(n, C) and (β1, . . . , βn) ∈Cn, and consider the change of variables (ζ1, . . . , ζn) = (z1, . . . , zn)A -(β1, . . . , βn). (5.2) Define g(ζ1, . . . , ζn) = f((ζ1, . . . , ζn)A-1 + (β1, . . . , βn)A-1). Let (δ1, . . . , δn) ∈{0, ∞}n. We say that f(z1, . . . , zn) has a regular singularity at (ζ1, . . . , ζn) = (δ1, . . . , δn) if g(ζ1, . . . , ζn) has a regular singularity at (ζ1, . . . , ζn) = (δ1, . . . , δn). Note that we are using the term "regular singularity" to refer to a property of a function, and not a property of a system of differential equations. Also note that a function f(z1, . . . , zn) with a regular singularity at (z1, . . . , zn) = (0, . . . , 0) is potentially singular on all of Dn\(D×)n, so this function does not necessarily have an isolated singularity at (z1, . . . , zn) = (0, . . . , 0). (In the case the singularity at (z1, . . . , zn) = (0, . . . , 0) is isolated, it is in fact removable.) This is why we have the notion of a component-isolated singularity. Let HU denote the space of holomorphic functions from Dn to a finite-dimensional vector space U. Let HEnd U denote the space of holomorphic functions from Dn to End U. For k ∈(Z≥0)n, use 29 ∂k to denote ∂ ∂z1 k1 · · · ∂ ∂zn kn and use (z∂)k to denote z1 ∂ ∂z1 k1 · · · zn ∂ ∂zn kn . Let D =        X k∈(Z≥0)n finite sum Ak∂k : Ak ∈HEnd U        , which is naturally a left HEnd U-module by left multiplication. Let D∗=        X k∈(Z≥0)n finite sum Ak(z∂)k : Ak ∈HEnd U        , be the HEnd U-submodule of D generated by all monomials in {zi ∂ ∂zi}n i=1, which is also naturally an associative unital algebra over C. Let {Dα}α∈A be a finite set of operators from D∗. Let I be the left ideal in D∗generated by {Dα}α∈A. Definition 5.3. 
If D∗/I is finitely generated as an HEnd U-module, we say that the system of partial differential equations {Dαφ = 0}α∈A has a simple singularity on Dn\(D×)n. For example, let U = Cm and let {Dα}α∈A be the set of operators Di = zi ∂ ∂zi -Hi(z), i = 1, . . . , n, (5.3) where Hi(z) are m × m matrices of holomorphic functions on Dn. Then, the system {zi ∂ ∂ziψ = Hi(z)ψ}n i=1 has a simple singularity on Dn\(D×)n. We have the following theorem about the form of a solution of a system with a simple singularity. Theorem 5.4 (Theorem B.16 from [Kna86]). Let {Dα}α∈A be a finite set of operators from D∗ such that the system of partial differential equations {Dαφ = 0}α∈A has a simple singularity on Dn\(D×)n. Then any multivalued holomorphic solution to {Dαφ = 0}α∈A on (D×)n has a regular singularity at (0, . . . , 0). Moreover, any C∞solution on (0, 1)n extends to a multivalued holomorphic solution on (D×)n. By scaling the variables, we can see that this theorem still holds if the domain Dn in HU and HEnd U is replaced by Dr1(0) × · · · × Drn(0) for any r1, . . . , rn ∈R>0. Note that this theorem does not show that a solution exists for any initial condition, nor that any formal solution converges. In the following section, we will use the theory of ordinary differential equations with regular singular points to show that formal solutions converge. We will now show that the solutions to the twisted KZ equations have regular singularities when g has finite order t. Since g restricts to an automorphism of finite order when acting on each finite-dimensional summand V(n), g is semisimple. So we can use the twisted KZ equations in the form of (4.17). Furthermore, the eigenvalues Sg are all of the form α ∈{0, 1/t, . . . , (1 -t)/t}. First, we will re-express the twisted KZ equations (4.17) in a more implicit form. Let L(W) denote the space of lowest-weight vectors of the (twisted) module W. Consider the space (L(W0) ⊗· · · ⊗L(WN+1))∗{x1, . . . , xN}. 30 Define the operator eΩi lp on L(W0) ⊗· · · ⊗L(WN+1) to be given by eΩi lp(w0 ⊗· · · ⊗wN+1) = 1 2(κ + h∨)w0 ⊗· · · ⊗ai′(0)wl⊗· · · ⊗ai(0)wp ⊗· · · ⊗wN+1. Observe that eΩi lp = eΩi′ pl. Define the operator Ωlon L(W0) ⊗· · · ⊗L(WN+1) to be given by eΩl(w0 ⊗· · · ⊗wN+1) = 1 2(κ + h∨) X i∈I w0 ⊗· · · ⊗ai′(0)wl⊗· · · ⊗wp ⊗· · · ⊗ai(αi)wN+1 -αiw0 ⊗· · · ⊗ai(0)ai′(0)wl⊗· · · ⊗wp ⊗· · · ⊗wN+1 . Note that the first term of the summand only contributes when αi = 0 and the second term only contributes when αi ̸= 0. Consider the (finite-dimensional) vector space U = (L(W0) ⊗· · · ⊗L(WN+1))∗. The formal counterpart to HU is contained within U{x1, . . . , xN}. Define the operators Ωi lp and Ωlon U{x1, . . . , xN} by Ωi lp X m1,...,mN∈C fm1,...,mNxm1 1 · · · xmn N ! = X m1,...,mN∈C fm1,...,mN ◦eΩi lp xm1 1 · · · xmN N , (5.4) Ωl X m1,...,mN∈C fm1,...,mNxm1 1 · · · xmN N ! = X m1,...,mN∈C fm1,...,mN ◦eΩlxm1 1 · · · xmN N . (5.5) Similarly by pre-composition, we can define operators acting on HU. We will also denote these by Ωi lp and Ωl. Observe that Ωi lp = Ωi′ pl. We will consider the twisted KZ equations as the following system of formal partial differential equations ∂ ∂xl ψ = X i∈I X p̸=l xp xl αi : (xl-xp)-1 : Ωi lp + x-1 lΩl ! ψ, l= 1, . . . , N, (5.6) or as the following system of partial differential equations ∂ ∂zl ψ = X i∈I X p̸=l zp zl αi (zl-zp)-1Ωi lp + z-1 lΩl ! ψ, l= 1, . . . , N. 
(5.7) By seeing where the coefficients in the right-hand side of (5.7) are holomorphic, we can see that every solution to the twisted KZ equations is at most defined on M N = {(z1, . . . , zN) ∈bCN : zi ̸= 0, ∞and zi ̸= zj when i ̸= j}. (5.8) Assume that we have a solution to the twisted KZ equations defined on M N. We will show for all change of variables (5.2), every component-isolated singularities at (δ1, . . . , δN) ∈{0, ∞}N, is a regular singularity. To show this, we will show that the system written in the new variables has a simple singularity at (0, . . . , 0) by showing that the coefficient matrix is holomorphic in some product of sufficiently small disks centered at 0. 31 Assume we have a change of variables (5.2), or equivalently (z1, . . . , zN) = (ζ1, . . . , ζ2)A-1 + (β1, . . . , βN)A-1 =: (ζ1, . . . , ζN)B + (γ1, . . . , γN). (5.9) Then zl= N X j=1 bjlζj + γl and ∂zl ∂ζj = bjl. (5.10) So, ζj ∂ ∂ζj ψ = ζj X l ∂zl ∂ζj ∂ ∂zl ψ = ζj X l bjl X i∈I X p̸=l zp zl αi (zl-zp)-1Ωi lp + z-1 lΩl !! ψ = X i∈I X l 0 to be arbitrarily small, then we see that γl= 0. Then we must have z1/t l = (bjlζj)1/t for some j (to ensure we have a component-isolated singularity for a function defined on M N), which is single-valued as a function of ηj. Now suppose that δj = ∞when bjl̸= 0 for some j. Then we must have z1/t l = (P k bklζk + γl)1/t with bkl= 0 when δk = ∞and k ̸= j (to ensure we have a component-isolated singularity for a function defined on M N). After choosing each rj to be sufficiently small, we have single-valuedness as a function of (η1, . . . , ηN). 32 Hence, to obtain a differential equation with the desired singularity at (ζ1, . . . , ζN) = (δ1, . . . , δN) and single-valued coefficients, we will look at ηj ∂ ∂ηj ψ = s(δj)t X i∈I X l 0. First we show that bjlζjz-1 l is holomorphic. If bjl= 0, then bjlζjz-1 l = 0, which is holomorphic. Otherwise, when bjl̸= 0, we have two cases depending on the value of δj. If δj = 0, then the only possible singularity can come from zl→0 as (η1, . . . , ηN) →0. Then γl= 0, and bkl= 0 for all but k = j to ensure that zl→0 and zl̸= 0 when (η1, . . . , ηN) ∈D× r1(δ1) × · · · × D× rN(δN) (this is necessary since we have a component-isolated singularity for a function defined on M N). Then bjlζjz-1 l = 1, which is holomorphic. If δj = ∞, then bkl= 0 when δk = ∞and k ̸= j to ensure that zl̸= 0 when (η1, . . . , ηN) ∈D× r1(δ1) × · · · × D× rN(δN). Then bjlζjz-1 l = bjl P k bklηs(δk)t k ηt j + γlηt j →bjl bjl = 1 as (η1, . . . , ηN) →(0, . . . , 0). So bjlζjz-1 l is holomorphic in all cases. Second we show that bjl zp zl αi -bjp zl zp αi′! ζj(zl-zp)-1 is holomorphic. If αi = 0, then αi′ = 0 and bjl zp zl αi -bjp zl zp αi′! ζj(zl-zp)-1 = (bjl-bjp) ζj(zl-zp)-1, which is holomorphic by arguments similar to those used for bjlζjz-1 l. Otherwise, αi ̸= 0 and αi′ = 1 -αi. Then bjl zp zl αi -bjp zl zp αi′! ζj(zl-zp)-1 = z-αi l z-αi′ p (bjlzp -bjpzl) ζj(zl-zp)-1. If bjl-bjp = 0, then bjl zp zl αi -bjp zl zp αi′! ζj(zl-zp)-1 = -bjl ζj zαi lzαi′ p . This is holomorphic by arguments similar to those used for bjlζjz-1 l. If bjl-bjp ̸= 0, then we need to check what happens when δj = 0 and δj = ∞. If δj = 0 and zl-zp →0, then γl-γp = 0 and bkl-bkp = 0 for all k ̸= j. Then bjl zp zl αi -bjp zl zp αi′! ζj(zl-zp)-1 = bjl zp zl αi -bjp zl zp αi′! (bjl-bjp)-1. 33 Since zl-zp →0, we have zl, zp ↛0, so the only possible singularity comes from zl, zp →∞. And since bkl-bkp = 0 for all k ̸= j, we have zl/zp →1. If δj = 0 and zl→0, then zp, zl-zp ↛0. 
So the only possible singularity comes from the term bjl zp zl αi ζj(zl-zp)-1 = bjl ζj zαi l 1 zαi′ p zl zp -1 -1 , which is holomorphic by arguments similar to those used for bjlζjz-1 l. Similar arguments work for δj = 0 and zp →0. If δj = ∞, then zl-zp = P k(bkl-bkp)ζk + γl-γp with bkl-bkp = 0 when δk = ∞and k ̸= j. So ζj(zl-zp)-1 →(bjl-bjp)-1. Since bjl-bjp ̸= 0, we have bjl̸= 0 or bjp ̸= 0. If bjl̸= 0, then bkl= bkp = 0 when δk = ∞and k ̸= j. So zp zl = P k bkpζk + γp P k bklζk + γl = P k bkpηs(δk)t k ηt j + γpηt j P k bklηs(δk)t k ηt j + γlηt j →bjp bjl is holomorphic. And similarly if bjp ̸= 0. Thus, bjl zp zl αi -bjp zl zp αi′! ζj(zl-zp)-1 is holomorphic in all cases. Thus we have proven the following result. Proposition 5.5. If the order of g is finite, then the solutions to the twisted KZ equations (4.17) (or equivalently (5.7)) defined on M N have regular singularities at every component-isolated singularity. Remark 5.6. We now show that simple singularities do not immediately follow from the fact that the coefficients in (5.7) have singularities at zj = 0, ∞and zj -zk = 0. Take for simplicity N = 2 and g = 1, so that the KZ equations are ∂ ∂z1 ψ = (z1 -z2)-1Ω12 + z-1 1 Ω1 ψ, ∂ ∂z2 ψ = (z2 -z1)-1Ω12 + z-1 2 Ω2 ψ, where Ω12 = Ω21 = P i∈I Ωi 12. Let z1 = ζ1 + ζ2 + 1, z2 = ζ1 -ζ2 + 1, which implies z1 -z2 = 2ζ2. This provides a componentisolated singularity at (ζ1, ζ2) = (0, 0), and hence the KZ equations have a simple singularity at 34 (ζ1, ζ2) = (0, 0), as we have just shown above. However, when we consider the seemingly similar system ∂ ∂z1 ψ = (z1 -z2)-1Ω12 + z-1 1 Ω1 ψ, ∂ ∂z2 ψ = (z1 -z2)-1Ω12 + z-1 2 Ω2 ψ, we have ζ1 ∂ ∂ζ1 ψ = ζ1 ζ2 Ω12 + ζ1 ζ1 + ζ2 + 1Ω1 + ζ1 ζ1 -ζ2 + 1Ω2 ψ, ζ2 ∂ ∂ζ2 ψ = ζ2 ζ1 + ζ2 + 1Ω1 - ζ1 ζ1 -ζ2 + 1Ω2 ψ. The coefficient of the first equation is not holomorphic in any product of small discs centered at zero. Hence, the simple singularities of the twisted KZ equations follow from more than the fact that the coefficients in (5.7) have singularities at zj = 0, ∞and zj -zk = 0. Remark 5.7. When dealing with first order homogeneous linear systems of partial differential equations of the form ∂ ∂zi ψ = Aiψ, for i = 1, . . . , N, one usually shows that the system is consistent (also known as integrable or compatible) in the sense that ∂ ∂zj Ai -∂ ∂zi Aj + [Ai, Aj] = 0 , for all i, j = 1, . . . N. This guarantees a unique solution exists for any initial condition. We have not shown that the twisted KZ equations are consistent. In the following section, we will use a different method to show that solutions exist and that the formal series solution given by the correlation function converges in certain domains. 6 Convergence when g has finite order In this section, we return to the setting of Section 3 while further assuming that g has finite order t. We continue to generalize Huang's method in [Hua05] by using specific filtrations of the ring R and the R-module T, to derive differential equations with "regular singular points". As a consequence, we obtain the "convergence property for products of N twisted intertwining operators", as conjectured in [Hua21b], in the special case that all intertwining operators are of g 1 g -type among C1-cofinite discretely graded V -modules. Recall that Theorem 5.4 does not show that a solutions exist for every initial condition, nor that every formal solution converges. Compare this to the standard theory for an ordinary differential equation with regular singular points (see, for example, Section B1 of [Kna86]). 35 Theorem 6.1. 
Let d dz n ψ(z) + an-1(z) z d dz n-1 ψ(z) + · · · + a1(z) zn-1 d dzψ(z) + a0(z) zn ψ(z) = 0, (6.1) be an ordinary differential equation such that ai(z) are holomorphic in Dr(0) for some r > 0. Then a solution exists for any initial condition at a point in D× r (0), and furthermore, every solution has the form of a regular singularity at z = 0. Conversely, any formal solution with a regular singularity at z = 0 converges absolutely to a multivalued holomorphic solution in D× r (0). Note that (6.1) could be equivalently replaced with z d dz n ψ(z) + an-1(z) z d dz n-1 ψ(z) + · · · + a1(z)z d dzψ(z) + a0ψ(z) = 0. (6.2) We will now show that the coefficients of the differential equation (3.17) can be chosen so that there is a regular singularity at certain prescribed points. Since g is of finite order, g is semisimple and L = S. So we can ignore all log xi in R, and the eigenvalues for S are in 1 tZ. Hence, we will use R = C[x±1/t i , (xi -xj)-1 : i, j = 1, . . . , N with i 0 F (c=0) m T (c=0) (s-m). Proof. This follows from an explicit case-by-case check that Ai(u, w0, . . . , wN+1) ∈F (c=0) s (T) and that Ai(u, w0, . . . , wN+1)-w(i) ∈Fs-1/t(T), which we omit (similar to the proof of Lemma 3.4). Proposition 6.3. There exists M ∈Z such that for any r ∈R, F (c=0) r (T) ⊆F (c=0) r (J) + FM(T). Proof. Choose the same M as in Proposition 3.5. When r ≤M, F (c=0) r (T) is spanned by elements of the form f(x1, · · · , xN)w0 ⊗· · · ⊗wN+1 with f(x1, · · · , xN+1) ∈F (c=0) m (R) and m + σ ≤r ≤M. Since σ ≤M, this element is in FM(T). Hence, F (c=0) r (T) ⊆FM(T) ⊆F (c=0) r (J) + FM(T). Since T is discretely graded, we can proceed by induction on r. Assume that s > M and F (c=0) r (T) ⊆ F (c=0) r (J) + FM(T) whenever r M, T (c=0) (s) is R(c=0)-spanned by elements w in Lemma 6.2. Hence, w ∈F (c=0) s (J) + ` m>0 F (c=0) m (R)T (c=0) (s-m). 37 Since s -m 0, we have T (c=0) (s-m) ⊆F (c=0) s-m (J) + FM(T) by the induction hypothesis. So, F (c=0) m (R)T (c=0) (s-m) ⊆F (c=0) s (J) + FM(T). So, T (c=0) (s) ⊆F (c=0) s (J) + FM(T). Thus, F (c=0) s (T) ⊆ F (c=0) s (J) + FM(T). Lemma 6.4. For any s ∈[0, 1), there exists S ∈R such that s+S ∈Z>0, and for any homogeneous wi ∈Wi and any W1 ∈F (c=0) σ (J), W2 ∈FM(T) satisfying σ ∈s + Z and w0 ⊗· · · ⊗wN+1 = W1 + W2, we have cσ+SW2 ∈T (c=0). Proof. Pick S ∈R with s + S ∈Z>0 to be sufficiently large so that T(r) = 0 when r · · · > |zN|. 38 Proof. Let wi ∈Wi be homogeneous and k ∈Z≥0. By Proposition 6.3, there exist W(k) 1 ∈F (c=0) σ+k (J) and W(k) 2 ∈FM(T) such that bLk ζl(w0 ⊗· · · ⊗wN+1) = W(k) 1 + W(k) 2 . By Lemma 6.4, there exists S ∈R such that σ + S ∈Z>0 and cσ+k+SW(k) 2 ∈T (c=0). And so cσ+k+SW(k) 2 ∈ a r≤M T (c=0) (r) . Now we consider the finitely-generated R(c=0)-module ` r≤M T (c=0) (r) . Consider the ascending chain (Mj)∞ j=0 of submodules Mj generated by {cσ+k+SW(k) 2 : k ∈Z, 0 ≤k ≤j}. Since R(c=0) is Noetherian, there exist ml∈Z≥0 and d(c) 1,l(x1, . . . , xN), . . . , d(c) ml,l(x1, . . . , xN) ∈R(c=0) such that cσ+ml+SW(ml) 2 + ml-1 X k=0 d(c) ml-k,l(x1, . . . , xN)cσ+k+SW(k) 2 = 0. Hence, cml[bLml ζl(w0 ⊗· · · ⊗wN+1)] + ml X k=1 d(c) k,l(x1, . . . , xN)cml-k[bLml-k ζl (w0 ⊗· · · ⊗wN+1)] = 0 When T is given the grading deg (f(x1, . . . , xN)w0 ⊗· · · ⊗wN+1) = -deg f(x1, . . . , xN) -wt w0 + wt w1 + · · · wt wN+1, and the generators for J are checked to be homogeneous with respect to this grading, we can see that d(c) k,l(x1, . . . , xN) can be chosen to be degree zero elements. 
We can immediately extend the previous proposition to the case where $c$ is any non-zero element of $\mathrm{Span}_{\mathbb{C}}\{x_1, \dots, x_N\}$. If $c$ is a scalar multiple of $x_i$ or $x_i - x_j$, the result follows from Theorem 6.5 by rescaling $c$. Otherwise, no negative powers of $c$ occur in $R$, so we can take $R^{(c=0)} = R$, and the result follows directly from Theorem 3.8. The following special case of the previous proposition will be used to prove convergence.

Corollary 6.6. The series $\langle w_0, Y_1(w_1, z_1) \cdots Y_N(w_N, z_N) w_{N+1}\rangle$ satisfies the following differential equations:
\[
\sum_{k=0}^{m_l} a_{k,l}(z_1, \dots, z_N)\Big(z_l\frac{\partial}{\partial z_l}\Big)^k \psi = 0, \qquad l = 1, \dots, N, \tag{6.8}
\]
with $a_{k,l}(x_1, \dots, x_N) \in R^{(x_l=0)}$ and $a_{m_l,l} = 1$.

Theorem 6.7 (The convergence property for products of $N$ $g$-$1$-$g$-type twisted intertwining operators). Let $W_0, \dots, W_{N+1}$, $\widetilde{W}_0, \dots, \widetilde{W}_N$, $Y_1, \dots, Y_N$, $w_0, \dots, w_{N+1}$ be as in Section 3. Assume that $g$ is an automorphism of $V$ with finite order.

(1) The multi-series $\langle w_0, Y_1(w_1, z_1) \cdots Y_N(w_N, z_N) w_{N+1}\rangle$ in $z_1, \dots, z_N$ is absolutely convergent in the region $|z_1| > \cdots > |z_N| > 0$.

(2) The sum of $\langle w_0, Y_1(w_1, z_1) \cdots Y_N(w_N, z_N) w_{N+1}\rangle$ in $|z_1| > \cdots > |z_N| > 0$ can be analytically continued to a multivalued analytic function
\[
F(\langle w_0, Y_1(w_1, z_1) \cdots Y_N(w_N, z_N) w_{N+1}\rangle) \tag{6.9}
\]
defined on all of $M_N := \{(z_1, \dots, z_N) \in \widehat{\mathbb{C}}^N : z_i \neq 0, \infty \text{ and } z_i \neq z_j \text{ when } i \neq j\}$.

Proof. When $N = 1$, we have
\[
\langle w_0, Y_1(w_1, z_1) w_2\rangle = \sum_{n \in \mathbb{C}} \sum_{k=0}^{K} \langle w_0, Y_{n,k}(w_1) w_2\rangle\, z_1^{-n-1} (\log z_1)^k.
\]
This sum is finite by the $L(0)$-grading, hence absolutely converges when $z_1 \in M_1$.

We now proceed by induction. Assume that $\langle w_0, Y_1(w_1, z_1) \cdots Y_{N-1}(w_{N-1}, z_{N-1})\, \widetilde{w}_{N-1}\rangle$ converges absolutely in the region $|z_1| > \cdots > |z_{N-1}| > 0$ for all $\widetilde{w}_{N-1} \in \widetilde{W}_{N-1}$. Further assume that the function it converges to can be analytically extended to a multivalued analytic function defined on $M_{N-1}$. Now consider the series
\[
\langle w_0, Y_1(w_1, z_1) \cdots Y_N(w_N, z_N) w_{N+1}\rangle
= \sum_{n \in \mathbb{C}} \sum_{k=0}^{K} \langle w_0, Y_1(w_1, z_1) \cdots Y_{N-1}(w_{N-1}, z_{N-1})\, Y_{n,k}(w_N) w_{N+1}\rangle\, z_N^{-n-1} (\log z_N)^k, \tag{6.10}
\]
as a series in $z_N$ for fixed $|z_1| > \cdots > |z_{N-1}| > 0$. The ordinary differential equation (6.8) for $l = N$, after the change of variable $\zeta_N = z_N^{1/t}$, has a regular singularity at $z_N = 0$ and is satisfied by (6.10) for all $|z_1| > \cdots > |z_{N-1}| > |z_N| > 0$. By Theorem 6.1, (6.10) converges absolutely as a series in $z_N$. And by the induction hypothesis, (6.10) is absolutely convergent as a multi-series in $|z_1| > \cdots > |z_{N-1}| > |z_N| > 0$. Since the coefficients of (6.10) can be analytically extended to multivalued functions on $M_{N-1}$, the sum of (6.10) satisfies (6.8) for all $(z_1, \dots, z_{N-1}) \in M_{N-1}$ when $|z_1|, \dots, |z_{N-1}| > |z_N| > 0$. Since the coefficients of (6.8) are analytic on $M_N$, for fixed $(z_1, \dots, z_{N-1})$ we can analytically extend the sum of (6.10) to all of $(z_1, \dots, z_N) \in M_N$.

Finally, we will prove that every component-isolated singularity of the four-point correlation function is a regular singularity when the $L(0)$-actions on the twisted modules are semisimple.

Lemma 6.8. Let $w_0 \in W_0, \dots, w_3 \in W_3$ be weight homogeneous and define $\Delta = \operatorname{wt} w_0 - \operatorname{wt} w_1 - \operatorname{wt} w_2 - \operatorname{wt} w_3$. Let $A \in GL(2, \mathbb{C})$ and let $(\xi_1, \xi_2) = (z_1, z_2)A$ be a change of variables. Then
\[
\Big(\xi_1\frac{\partial}{\partial \xi_1} + \xi_2\frac{\partial}{\partial \xi_2} - \Delta\Big)^K \langle w_0, Y(w_1, z_1) Y(w_2, z_2) w_3\rangle = 0 \tag{6.11}
\]
for sufficiently large $K > 0$. Furthermore, $K = 1$ when the $L(0)$-actions on $W_0, \dots, W_3$ are each semisimple.

Proof.
We use the property $[L(0), Y(w, x)] = Y(L(0)w, x) + xY(L(-1)w, x)$ to derive
\[
\begin{aligned}
\Big(z_1\frac{\partial}{\partial z_1} + z_2\frac{\partial}{\partial z_2} - \Delta\Big)\langle w_0, Y(w_1, z_1)Y(w_2, z_2)w_3\rangle
&= \langle (L(0) - \operatorname{wt} w_0)w_0, Y(w_1, z_1)Y(w_2, z_2)w_3\rangle \\
&\quad - \langle w_0, Y((L(0) - \operatorname{wt} w_1)w_1, z_1)Y(w_2, z_2)w_3\rangle \\
&\quad - \langle w_0, Y(w_1, z_1)Y((L(0) - \operatorname{wt} w_2)w_2, z_2)w_3\rangle \\
&\quad - \langle w_0, Y(w_1, z_1)Y(w_2, z_2)(L(0) - \operatorname{wt} w_3)w_3\rangle.
\end{aligned}
\]
The right-hand side is zero if the $L(0)$-actions on $W_0, \dots, W_3$ are each semisimple. Otherwise, repeated application gives
\[
\Big(z_1\frac{\partial}{\partial z_1} + z_2\frac{\partial}{\partial z_2} - \Delta\Big)^K \langle w_0, Y(w_1, z_1)Y(w_2, z_2)w_3\rangle = 0
\]
whenever $K$ is sufficiently large. Then we use
\[
\xi_1\frac{\partial}{\partial \xi_1} + \xi_2\frac{\partial}{\partial \xi_2}
= \xi_1\Big(\frac{\partial z_1}{\partial \xi_1}\frac{\partial}{\partial z_1} + \frac{\partial z_2}{\partial \xi_1}\frac{\partial}{\partial z_2}\Big)
+ \xi_2\Big(\frac{\partial z_1}{\partial \xi_2}\frac{\partial}{\partial z_1} + \frac{\partial z_2}{\partial \xi_2}\frac{\partial}{\partial z_2}\Big)
= \Big(\xi_1\frac{\partial z_1}{\partial \xi_1} + \xi_2\frac{\partial z_1}{\partial \xi_2}\Big)\frac{\partial}{\partial z_1}
+ \Big(\xi_1\frac{\partial z_2}{\partial \xi_1} + \xi_2\frac{\partial z_2}{\partial \xi_2}\Big)\frac{\partial}{\partial z_2}
= z_1\frac{\partial}{\partial z_1} + z_2\frac{\partial}{\partial z_2}.
\]

Proposition 6.9. Assume that $g$ has finite order and assume the $L(0)$-actions on $W_0, \dots, W_3$ are each semisimple. Then every component-isolated singularity of $F(\langle w_0, Y(w_1, z_1)Y(w_2, z_2)w_3\rangle)$ is a regular singularity.

Proof. Let $\delta_1, \delta_2 \in \{0, \infty\}$ and let $(\zeta_1, \zeta_2) = (z_1, z_2)A - (\beta_1, \beta_2)$ be a change of variables such that $(\zeta_1, \zeta_2) = (\delta_1, \delta_2)$ is a component-isolated singularity of a multivalued holomorphic function defined on $M_2$. We will use two other changes of variables. Define $(\xi_1, \xi_2) := (\zeta_1 + \beta_1, \zeta_2 + \beta_2) = (z_1, z_2)A$, and define $(\eta_1, \eta_2)$ such that $\eta_j^{s(\delta_j)t} = \zeta_j$, with
\[
s(\delta) = \begin{cases} 1 & \text{if } \delta = 0, \\ -1 & \text{if } \delta = \infty. \end{cases}
\]
Recall that we have shown the existence of the following differential equations,
\[
\sum_{k=0}^{m} a_k(z_1, z_2)\, c^k \Big(\frac{\partial}{\partial \xi_i}\Big)^k \psi = 0, \quad i = 1, 2, \quad \text{for all non-zero } c \in \mathrm{Span}_{\mathbb{C}}\{z_1, \dots, z_N\}, \tag{6.12}
\]
with $a_k \in R^{(c=0)}_{(\deg 0)}$ and $a_m = 1$, satisfied by $F(\langle w_0, Y_1(w_1, z_1)Y_2(w_2, z_2)w_3\rangle)$. We will use $a^{(1)}_k, a^{(2)}_k, \dots$ and $b^{(1)}_k, b^{(2)}_k, \dots$ for elements in $R^{(c=0)}_{(\deg 0)}$ with $a^{(1)}_m = b^{(1)}_m = \cdots = 1$. For some fixed $i = 1, 2$, choose $c = \xi_i$ to obtain
\[
\sum_{k=0}^{m} a^{(1)}_k(z_1, z_2)\, \xi_i^k \Big(\frac{\partial}{\partial \xi_i}\Big)^k \psi = 0. \tag{6.13}
\]
Then we change basis from $\{\xi_i^k (\partial/\partial \xi_i)^k : k \in \mathbb{Z}_{\geq 0}\}$ to $\{(\xi_i\,\partial/\partial \xi_i)^k : k \in \mathbb{Z}_{\geq 0}\}$, obtaining
\[
\sum_{k=0}^{m} a^{(2)}_k(z_1, z_2)\Big(\xi_i\frac{\partial}{\partial \xi_i}\Big)^k \psi = 0. \tag{6.14}
\]
We use Lemma 6.8 on (6.14) to obtain, for $j = 1, 2$ with $i \neq j$,
\[
\sum_{k=0}^{m} b^{(1)}_k(z_1, z_2)\Big(\xi_j\frac{\partial}{\partial \xi_j}\Big)^k \psi = 0. \tag{6.15}
\]
Then we change basis from $\{(\xi_j\,\partial/\partial \xi_j)^k : k \in \mathbb{Z}_{\geq 0}\}$ to $\{\xi_j^k (\partial/\partial \xi_j)^k : k \in \mathbb{Z}_{\geq 0}\}$, obtaining
\[
\sum_{k=0}^{m} b^{(2)}_k(z_1, z_2)\, \xi_j^k \Big(\frac{\partial}{\partial \xi_j}\Big)^k \psi = 0. \tag{6.16}
\]
Then we change to $(\zeta_1, \zeta_2)$ to obtain
\[
\sum_{k=0}^{m} b^{(2)}_k(z_1, z_2)\, (\zeta_j + \beta_j)^k \Big(\frac{\partial}{\partial \zeta_j}\Big)^k \psi = 0. \tag{6.17}
\]
Then we have
\[
\sum_{k=0}^{m} b^{(2)}_k(z_1, z_2)\Big(\frac{\zeta_j}{\zeta_j + \beta_j}\Big)^{m-k} \zeta_j^k \Big(\frac{\partial}{\partial \zeta_j}\Big)^k \psi = 0. \tag{6.18}
\]
Changing basis again gives
\[
\sum_{k=0}^{m} b^{(3)}_k(z_1, z_2)\Big(\frac{\zeta_j}{\zeta_j + \beta_j}\Big)^{m-k}\Big(\zeta_j\frac{\partial}{\partial \zeta_j}\Big)^k \psi = 0. \tag{6.19}
\]
Then $\zeta_j\frac{\partial}{\partial \zeta_j} = \frac{1}{s(\delta_j)t}\,\eta_j\frac{\partial}{\partial \eta_j}$ gives
\[
\sum_{k=0}^{m} b^{(4)}_k(z_1, z_2)\Big(\frac{\zeta_j}{\zeta_j + \beta_j}\Big)^{m-k}\Big(\eta_j\frac{\partial}{\partial \eta_j}\Big)^k \psi = 0. \tag{6.20}
\]
We can do a similar process on (6.14) to change to the variable $\eta_i$. Hence we have the system of differential equations
\[
\sum_{k=0}^{m} a^{(4)}_k(z_1, z_2)\Big(\frac{\zeta_i}{\zeta_i + \beta_i}\Big)^{m-k}\Big(\eta_i\frac{\partial}{\partial \eta_i}\Big)^k \psi = 0, \tag{6.21}
\]
\[
\sum_{k=0}^{m} b^{(4)}_k(z_1, z_2)\Big(\frac{\zeta_j}{\zeta_j + \beta_j}\Big)^{m-k}\Big(\eta_j\frac{\partial}{\partial \eta_j}\Big)^k \psi = 0. \tag{6.22}
\]
To show that $(\eta_1, \eta_2) = (0, 0)$ is a regular singularity using Theorem 5.4, we are left to check case-by-case that the coefficients are holomorphic in $(\eta_1, \eta_2) \in D_{r_1}(0) \times D_{r_2}(0)$ for some sufficiently small $r_1, r_2 > 0$.

Case 1. Assume $\delta_1 = \delta_2 = 0$ and $z_1 \neq 0$, $z_2 \neq 0$ and $z_1 - z_2 \neq 0$ when $(\eta_1, \eta_2) = (0, 0)$. Then the coefficients are non-singular for all $(\eta_1, \eta_2) \in D_{r_1}(0) \times D_{r_2}(0)$.

Case 2. Assume $\delta_1 = \delta_2 = 0$, and $z_1 = 0$ or $z_2 = 0$ or $z_1 - z_2 = 0$ when $(\eta_1, \eta_2) = (0, 0)$. This can happen for only one variable, and this variable must be proportional to $\zeta_i = \xi_i$ for some $i = 1, 2$. The functions $a^{(4)}_k$ and $b^{(4)}_k$ are in $R^{(\xi_i=0)}_{(\deg 0)}$, so they have no singularities when $\xi_i = 0$, and the remaining two of the variables $z_1$, $z_2$, $z_1 - z_2$ are non-zero for all $(\eta_1, \eta_2) \in D_{r_1}(0) \times D_{r_2}(0)$.

Case 3.
Assume $\delta_1 = \infty$ or $\delta_2 = \infty$. Only one can have this value, say $\delta_j = \infty$, and we must have $\delta_i = 0$ for $i \neq j$. The only singularities can come from $\zeta_i = 0$ or $\zeta_j = \infty$. The first kind is immediately handled since $a^{(4)}_k$ and $b^{(4)}_k$ are in $R^{(\xi_i=0)}_{(\deg 0)}$. The second kind is handled by the zero degrees. Since the functions $a^{(4)}_k$ and $b^{(4)}_k$ are at most polynomials in $z_1 z_2^{-1}$, $z_2 z_1^{-1}$, $z_1(z_1 - z_2)^{-1}$, $z_2(z_1 - z_2)^{-1}$, we only need to assess these four monomials. If the numerator and denominator both approach infinity as $\eta_j \to 0$, the limit exists. If the denominator approaches infinity but the numerator does not, the limit exists. If the numerator approaches infinity but the denominator does not, the denominator is proportional to $\xi_i$, so this monomial does not occur in $R^{(\xi_i=0)}_{(\deg 0)}$.

Since we have shown that the coefficients are holomorphic in $(\eta_1, \eta_2) \in D_{r_1}(0) \times D_{r_2}(0)$, the singularity $(\eta_1, \eta_2) = (0, 0)$ is indeed regular, hence we know that solutions to this system of differential equations have an expansion of the form
\[
\sum_{i=1}^{K} \eta_1^{r_i} \eta_2^{s_i} (\log \eta_1)^{j_i} (\log \eta_2)^{k_i} f_i(\eta_1, \eta_2), \tag{6.23}
\]
for $r_i, s_i \in \mathbb{C}$, $j_i, k_i \in \mathbb{Z}_{\geq 0}$ and $f_i$ holomorphic in some product of sufficiently small discs centered at $0$. Thus, after changing back to $(\zeta_1, \zeta_2)$, we see that there is a regular singularity at $(\zeta_1, \zeta_2) = (\delta_1, \delta_2)$.

Proposition 6.10. Assume that $g$ has finite order. Then every component-isolated singularity of $F(\langle w_0, Y(w_1, z_1)Y(w_2, z_2)w_3\rangle)$ with a change of coordinates of the form $(\zeta_1, \zeta_2) = (z_1, z_2)A$ is a regular singularity.

Proof. The proof follows similarly to the proof of the previous proposition. However, to apply Lemma 6.8 on (6.14), we choose $m$ and $K$ to be sufficiently large and equal to each other. Then we obtain the system
\[
\sum_{k=0}^{m} a_k(z_1, z_2)\Big(\eta_i\frac{\partial}{\partial \eta_i}\Big)^k \psi = 0, \tag{6.24}
\]
\[
\Bigg(\Big(\eta_j\frac{\partial}{\partial \eta_j}\Big)^m + \sum_{p,q=0}^{m-1} b_{p,q}(z_1, z_2)\Big(\eta_i\frac{\partial}{\partial \eta_i}\Big)^p\Big(\eta_j\frac{\partial}{\partial \eta_j}\Big)^q\Bigg)\psi = 0, \tag{6.25}
\]
to which we can apply Theorem 5.4.

Remark 6.11. Changes of coordinates of the form $(\zeta_1, \zeta_2) = (z_1, z_2)A$ are not as general as affine changes of coordinates. However, they include important component-isolated singularities such as $(\zeta_1, \zeta_2) = (z_1 - z_2, z_2) = (0, \infty)$. This regular singularity is used to prove associativity.

Remark 6.12. The previous results do not show that every component-isolated singularity of $F(\langle w_0, Y_1(w_1, z_1) \cdots Y_N(w_N, z_N) w_{N+1}\rangle)$ is a regular singularity when $N > 2$. However, the $N = 2$ case above can be used to show associativity and commutativity of twisted intertwining operators (see [Hua21b]). Certain (and important) regular singularities in the $N > 2$ case can be shown using the convergence property, associativity and commutativity, and the $N = 2$ case. For example, consider $(\zeta_1, \zeta_2, \zeta_3) = (z_1 - z_3, z_2, z_3)$ and $(\delta_1, \delta_2, \delta_3) = (0, 0, \infty)$. Given intertwining operators $Y_1, Y_2, Y_3$, there exist intertwining operators $Y_4, Y_5$ such that
\[
\begin{aligned}
F(\langle w_0, Y_1(w_1, z_1)Y_2(w_2, z_2)Y_3(w_3, z_3)w_4\rangle)
&= \sum_{m\in\mathbb{C}} \sum_{k=0}^{K} F(\langle w_0, (Y_1)_{m,k}(w_1)\, Y_2(w_2, z_2)Y_3(w_3, z_3)w_4\rangle)\, z_1^{-m-1}(\log z_1)^k \\
&= \sum_{m\in\mathbb{C}} \sum_{k=0}^{K} F(\langle w_0, (Y_1)_{m,k}(w_1)\, Y_4(w_3, z_3)Y_5(w_2, z_2)w_4\rangle)\, z_1^{-m-1}(\log z_1)^k \\
&= F(\langle w_0, Y_1(w_1, z_1)\, Y_4(w_3, z_3)Y_5(w_2, z_2)w_4\rangle) \\
&= \sum_{m\in\mathbb{C}} \sum_{k=0}^{K} F(\langle w_0, Y_1(w_1, z_1)\, Y_4(w_3, z_3)(Y_5)_{m,k}(w_2)w_4\rangle)\, z_2^{-m-1}(\log z_2)^k.
\end{aligned}
\]
The last expression gives the desired singularity.

References

[BHO01] J. de Boer, M. B. Halpern, and N. A. Obers. "The operator algebra and twisted KZ equations of WZW orbifolds". In: J. High Energy Phys. 10 (2001), Paper 11, 71.
[CM16] S. Carnahan and M. Miyamoto. "Regularity of fixed-point vertex operator subalgebras". In: (Mar. 2016). arXiv: 1603.05645 [math.RT].
[DE20] A. Dei and L. Eberhardt.
"Correlators of the symmetric product orbifold". In: J. High Energy Phys. 1 (2020), pp. 108, 45. [DVVV89] R. Dijkgraaf, C. Vafa, E. Verlinde, and H. Verlinde. "The operator algebra of orbifold models". In: Comm. Math. Phys. 123.3 (1989), pp. 485-526. [DGH88] L. Dixon, P. Ginsparg, and J. Harvey. "Beauty and the beast: superconformal symmetry in a Monster module". In: Comm. Math. Phys. 119.2 (1988), pp. 221-241. [DHVW85] L. Dixon, J. Harvey, C. Vafa, and E. Witten. "Strings on orbifolds". In: Nuclear Phys. B 261.4 (1985), pp. 678-686. [DHVW86] L. Dixon, J. Harvey, C. Vafa, and E. Witten. "Strings on orbifolds. II". In: Nuclear Phys. B 274.2 (1986), pp. 285-314. [Don94] C. Dong. "Twisted modules for vertex algebras associated with even lattices". In: J. Algebra 165.1 (1994), pp. 91-112. [DH25] J. Du and Y.-Z. Huang. Twisted intertwining operators and tensor products of (generalized) twisted modules. 2025. arXiv: 2501.15003 [math.QA]. [EMS20] J. van Ekeren, S. M ̈oller, and N. R. Scheithauer. "Construction and classification of holomorphic vertex operator algebras". In: J. Reine Angew. Math. 759 (2020), pp. 61-99. 44 [FFR91] A. J. Feingold, I. B. Frenkel, and J. F. X. Ries. Spinor construction of vertex operator algebras, triality, and E(1) 8 . Vol. 121. Contemporary Mathematics. American Mathematical Society, Providence, RI, 1991, pp. x+146. [FLM84] I. B. Frenkel, J. Lepowsky, and A. Meurman. "A natural representation of the FischerGriess Monster with the modular function J as character". In: Proc. Nat. Acad. Sci. U.S.A. 81.10 (1984), pp. 3256-3260. [FLM88] I. B. Frenkel, J. Lepowsky, and A. Meurman. Vertex operator algebras and the Monster. Vol. 134. Pure and Applied Mathematics. Academic Press, Inc., Boston, MA, 1988, pp. liv+508. [GK21] T. Gem ̈unden and C. A. Keller. "Non-abelian orbifolds of lattice vertex operator algebras". In: J. Algebra 585 (2021), pp. 656-696. [HM22] G. H ̈ohn and S. M ̈oller. "Systematic orbifold constructions of Schellekens' vertex operator algebras from Niemeier lattices". In: J. Lond. Math. Soc. (2) 106.4 (2022), pp. 3162-3207. [Hua96] Y.-Z. Huang. "A nonmeromorphic extension of the Moonshine module vertex operator algebra". In: Moonshine, the Monster, and related topics (South Hadley, MA, 1994). Vol. 193. Contemp. Math. Amer. Math. Soc., Providence, RI, 1996, pp. 123148. [Hua05] Y.-Z. Huang. "Differential equations and intertwining operators". In: Commun. Contemp. Math. 7.3 (2005), pp. 375-400. [Hua10] Y.-Z. Huang. "Generalized twisted modules associated to general automorphisms of a vertex operator algebra". In: Comm. Math. Phys. 298.1 (2010), pp. 265-292. [Hua17] Y.-Z. Huang. "On the applicability of logarithmic tensor category theory". In: (Feb. 2017). arXiv: 1702.00133 [math.QA]. [Hua18] Y.-Z. Huang. "Intertwining operators among twisted modules associated to notnecessarily-commuting automorphisms". In: J. Algebra 493 (2018), pp. 346-380. [Hua20] Y.-Z. Huang. "A construction of lower-bounded generalized twisted modules for a grading-restricted vertex (super)algebra". In: Comm. Math. Phys. 377.2 (2020), pp. 909-945. [Hua21a] Y.-Z. Huang. "Lower-bounded and grading-restricted twisted modules for affine vertex (operator) algebras". In: J. Pure Appl. Algebra 225.8 (2021), Paper No. 106618, 36. [Hua21b] Y.-Z. Huang. "Representation theory of vertex operator algebras and orbifold conformal field theory". In: Lie groups, number theory, and vertex algebras. Vol. 768. Contemp. Math. Amer. Math. Soc., [Providence], RI, [2021] , pp. 221-252. [HL99] Y.-Z. Huang and J. 
Lepowsky. "Intertwining operator algebras and vertex tensor categories for affine Lie algebras". In: Duke Math. J. 99.1 (1999), pp. 113-134. [HLZ12a] Y.-Z. Huang, J. Lepowsky, and L. Zhang. "Logarithmic tensor category theory, II: Logarithmic formal calculus and properties of logarithmic intertwining operators". In: (2012). arXiv: 1012.4196 [math.QA]. 45 [HLZ12b] Y.-Z. Huang, J. Lepowsky, and L. Zhang. "Logarithmic tensor category theory, III: Intertwining maps and tensor product bifunctors". In: (2012). arXiv: 1012.4197 [math.QA]. [HLZ12c] Y.-Z. Huang, J. Lepowsky, and L. Zhang. "Logarithmic tensor category theory, IV: Constructions of tensor product bifunctors and the compatibility conditions". In: (2012). arXiv: 1012.4198 [math.QA]. [HLZ12d] Y.-Z. Huang, J. Lepowsky, and L. Zhang. "Logarithmic tensor category theory, V: Convergence condition for intertwining maps and the corresponding compatibility condition". In: (2012). arXiv: 1012.4199 [math.QA]. [HLZ12e] Y.-Z. Huang, J. Lepowsky, and L. Zhang. "Logarithmic tensor category theory, VI: Expansion condition, associativity of logarithmic intertwining operators, and the associativity isomorphisms". In: (2012). arXiv: 1012.4202 [math.QA]. [HLZ12f] Y.-Z. Huang, J. Lepowsky, and L. Zhang. "Logarithmic tensor category theory, VII: Convergence and extension properties and applications to expansion for intertwining maps". In: (2012). arXiv: 1110.1929 [math.QA]. [HLZ12g] Y.-Z. Huang, J. Lepowsky, and L. Zhang. "Logarithmic tensor category theory, VIII: Braided tensor category structure on categories of generalized modules for a conformal vertex algebra". In: (2012). arXiv: 1110.1931 [math.QA]. [HLZ14] Y.-Z. Huang, J. Lepowsky, and L. Zhang. "Logarithmic tensor category theory for generalized modules for a conformal vertex algebra, I: introduction and strongly graded algebras and their generalized modules". In: Conformal field theories and tensor categories. Math. Lect. Peking Univ. Springer, Heidelberg, 2014, pp. 169-248. [HY19] Y.-Z. Huang and J. Yang. "Associative algebras for (logarithmic) twisted modules for a vertex operator algebra". In: Trans. Amer. Math. Soc. 371.6 (2019), pp. 3747-3786. [Kna86] A. W. Knapp. Representation theory of semisimple groups. Vol. 36. Princeton Mathematical Series. An overview based on examples. Princeton University Press, Princeton, NJ, 1986, pp. xviii+774. [KZ84] V. G. Knizhnik and A. B. Zamolodchikov. "Current algebra and Wess-Zumino model in two dimensions". In: Nuclear Physics B 247.1 (1984), pp. 83-103. [Lep85] J. Lepowsky. "Calculus of twisted vertex operators". In: Proc. Nat. Acad. Sci. U.S.A. 82.24 (1985), pp. 8295-8299. [LL04] J. Lepowsky and H. Li. Introduction to vertex operator algebras and their representations. Vol. 227. Progress in Mathematics. Birkh ̈auser Boston, Inc., Boston, MA, 2004, pp. xiv+318. [LW78] J. Lepowsky and R. L. Wilson. "Construction of the affine Lie algebra A1(1)". In: Comm. Math. Phys. 62.1 (1978), pp. 43-53. [Miy15] M. Miyamoto. "C2-cofiniteness of cyclic-orbifold models". In: Comm. Math. Phys. 335.3 (2015), pp. 1279-1286. 110 Frelinghuysen Rd., Piscataway, NJ 08854-8019 E-mail address: 46
U.P.B. Sci. Bull., Series A, Vol. 88, Iss. 3, 2025 ISSN 1223-7027

MULTI-MODAL VIDEO DATA-PIPELINES FOR MACHINE LEARNING WITH MINIMAL HUMAN SUPERVISION

Pîrvu Mihai-Cristian¹ and Marius Leordeanu²

¹ PhD student, Institute of Mathematics of the Romanian Academy "Simion Stoilow", e-mail: mihaicristianpirvu@gmail.com
² Professor, Faculty of Automatic Control and Computer Science, National University of Science and Technology POLITEHNICA, Romania, e-mail: leordeanu@gmail.com

The real world is inherently multi-modal at its core. Our tools observe it and take digital snapshots of it, such as videos or sounds; however, much of it is lost. Similarly, for actions and information passing between humans, languages are used as a written form of communication. Traditionally, Machine Learning models have been unimodal (e.g. RGB → semantic or text → sentiment class). Recent trends go towards bi-modality, where images and text are learned together; however, in order to truly understand the world, we need to integrate all these independent modalities. In this work we try to combine as many visual modalities as we can using little to no human supervision. In order to do this, we use pre-trained experts and procedural combinations between them on top of raw videos, using a fully autonomous data-pipeline, which we also open-source. We then make use of PHG-MAE [33], a model specifically designed to leverage multi-modal data. We show that this model, efficiently distilled into a low-parameter (≤1M) variant, can have competitive results compared to models of ∼300M parameters. We deploy this model and analyze the use-case of real-time semantic segmentation from handheld devices or webcams on commodity hardware. Finally, we deploy other off-the-shelf models using the same framework, such as DPT [41] for near real-time depth estimation.

Keywords: multi-modal machine learning, semantic segmentation, depth estimation, real-time processing, real-time inference, commodity hardware

1. Introduction and related work

In the domain of machine learning there are typically three correlated subsystems, namely the data (1), the models and algorithms (2) and the predictions and actions of a model (3), as presented in Figure 1.

Figure 1. High-level overview of an end-to-end machine learning system: from raw data and data processing, to training and optimizing models, and lastly to deploying them to interact with and control real hardware autonomously with intelligent actions.

Generally speaking, the field of research has mostly focused on the models side, on fixed benchmarks. This makes sense from a practical point of view: one gets to directly compare a novel solution against existing work in a controlled setting, making the contribution less biased and also more convenient, since one can re-use the data acquisition and processing from the prior work. This approach has worked very well and has driven the progress of the field with results such as the AlexNet [25] classification network on the ImageNet dataset, the Transformer network [44] on the WMT14 English-German translation dataset [8], the DeepLabV3+ semantic segmentation model on the Cityscapes dataset [12], the Wav2Vec speech recognition model [5] on the LibriSpeech dataset [35] and many others. However, there are concerns with this approach. Some argue that we are overfitting on a small set of single- or few-task benchmarks [40], leading to solutions that don't generalize.
Overfitting indices have been proposed [1]. On the other hand, some argue that less mainstream fields, such as atmospheric science, struggle to evolve due to the lack of such benchmarks [15]. A similar issue was identified in the domain of UAVs and aerial image understanding in [30]. In the work of [32], they introduce Dronescapes, a dataset for UAVs on three tasks: semantic segmentation, depth estimation and camera normals estimation, which is also a starting point of this work.

Recently, the trend has been towards massive pre-training with models such as the GPT series for language modeling [39] trained on the WebText dataset (40B tokens), the Vision Transformer (ViT) [14] trained on 15M images, CLIP [38] trained on 400M image-text pairs and Segment Anything (SAM) [24] trained on 11M images and 1.1B segmentation masks. In [38], they show that their model is more robust to adversarial examples (ImageNet variants) from the same domain compared to models trained on just ImageNet, showcasing the need for more high-quality large-scale datasets. All the recent improvements have been achieved by combining the Data and Models 'boxes' from our main figure into a simpler and more scalable loop: creating automatic or semi-automatic datasets with little human intervention, followed by a simple but generic pre-training algorithm and then by task-specific fine-tuning. In order to do this, one cannot simply rely on existing datasets for training.

For this reason alone, we focus on the Data (1) (acquisition and processing) side of things more than is usual, while remaining in the context of Multi-Modal Machine Learning. Multi-modality refers to the use of multiple types of sensors together to achieve some goals or tasks. For example, combining images with depth information or text descriptions provides a richer understanding than using images alone. The world presents information through various modalities, and research aims to leverage these correlated sources. Our data-pipeline extends the Dronescapes dataset [32] from 12K frames for the training set to 80K frames across 3 tasks in an automated fashion, from 8 new videos and pre-trained experts only. We create up to 13 new modalities, including semantic- and depth-derived ones like camera normals or binary segmentation maps, such as safe-landing areas, buildings, water etc. This dataset is then used to train the PHG-MAE-NRand model [33] as well as the PHG-MAE-Distil variants. These lightweight distillation models (150k or 400k parameters) yield competitive results against models such as Mask2Former [11], which has 217M parameters, up to 3 orders of magnitude more.

Moving over to the Models (2) side, recent trends have been towards large and very large Transformer-based models, with hundreds of millions and billions of parameters. These models, inspired by the domain of NLP [13], use techniques like Masked Auto-Encoders (MAE) to do large pre-training on generic and easily acquirable data (like RGB only), followed by task-specific fine-tuning [19] on visual data. Recent works, such as [4], leverage MAE-based solutions for Multi-Task Learning (MTL): depth estimation and semantic segmentation. [29] and [34] extend this approach to new modalities, including image, text, audio generation, and action prediction for robotics.
These advances are driven by the rise of foundation models pre-trained on massive datasets, enabling zero-shot prediction via prompting and efficient fine-tuning, as seen in [38] and [24]. [6] proposes a multi-modal, promptable model with an instruction-based domain-specific language for generalist capabilities across text, images, audio, and video. Other work on models handling the multi-modality of the data aims to create a graph with neural networks modeling the relationships between modality nodes [45, 26, 17, 32, 36]. One of the main issues with these existing models is that they are designed for performance metrics, not real-time inference. One work that tries to address this limitation is PHG-MAE [33], which uses a MAE-based multi-modal training algorithm that natively incorporates ensembles for performance boosts and consistency across frames. While their main model is also very heavy (with up to 50s per frame), they provide efficient and lightweight distilled models (150k∼430k parameters), which enables real-time deployment with little loss in performance.

As we switch to the Actions (3) and predictions side, we can observe that it is a much less explored and researched area. Usually this is enabled by the R&D on the models side, followed by a deployment procedure. Once a model is deployed, it is assumed frozen (most of the time) and it becomes more of an engineering problem to run inference and serve predictions reliably without breaking the existing system. This suggests that the neural network is usually used as a module of a larger system, with this hybrid being referred to as Software 2.0 [22], where standard "1.0" procedural code is mixed with neural network predictions to make intelligent actions. One of the main questions and trade-offs is related to where the inference computation happens: on device or on some external server. The first requires the device to have larger compute power, which can increase weight, decrease battery time or increase overall latency. The second solution adds a communication layer between the device and the processing node, which adds variance due to connection issues. Some argue that a distributed system is required to achieve end-to-end real-time performance [7], with specialized nodes doing specialized tasks, like object recognition. Others propose neural architectural changes to reduce the variance of inference duration [28] caused by things like object proposals, which can be dynamic based on the input image. Nonetheless, solving these latency-performance trade-offs enables intelligent and autonomous devices, like autonomous vehicles [10] or drones for use-cases like flood detection [20], power line failures [3] or search and rescue assistance [2].

In this paper, we study a simpler use-case: deploying distilled real-time models for semantic segmentation and depth estimation. We'll use a consumer GPU on a laptop (NVIDIA RTX 4050 Laptop) for local processing as well as a remote connection to a cloud server (NVIDIA RTX 2080 Ti). We study the case of real-time streaming from a phone camera and analyze the trade-offs of the two setups. Our main contributions presented in this paper are on the data-processing and automation side as well as model deployment on consumer-grade GPUs. Further research on more advanced use-cases, such as autonomous control of a UAV or vehicle enabled by our real-time segmentation model, must be studied.
2. Video Representations Extractor data-pipeline for multi-modal machine learning

Figure 2. VRE showcase. We present six exported representations on top of the RGB frame. The first two are pre-trained experts (DPT [41] and Marigold [23]). Next, we derive two camera normals representations using an SVD-based algorithm [18]. Lastly, we derive safe-landing areas by thresholding the camera normals maps like in the newly introduced Dronescapes2 [33] dataset.

In this section, we'll discuss our approach for the Data side of Figure 1. In order to streamline the training of multi-modal machine learning models, in the context of videos and scene understanding, we have developed a data-pipeline, named Video Representations Extractor (or VRE for short), which we have also open-sourced at https://github.com/meehai/video-representations-extractor/. We discuss architectural decisions as well as how it can be used to create new datasets for other use cases than ours. We also discuss data-pipeline strategies, including multi-GPU batching and real-time streaming. Later, we present a case study based on the Dronescapes [32] dataset for aerial image understanding, which was extended in a fully automated way using our data-pipeline: https://sites.google.com/view/dronescapes-dataset. Finally, we discuss PHG-MAE [33], a MAE-based model which has leveraged our data-pipeline through an ensemble-based algorithm that operates at the level of the intermediate modalities exported by us. This model was then distilled into a lightweight semantic segmentation model that we can run using the streaming capabilities of the data-pipeline, which we'll also explore in the Experiments section alongside other models, such as depth estimation [41].

2.1. VRE main loop

Below, in Algorithm 2.1, we can see the main VRE loop. At its core this is all VRE does, but getting this right is not a trivial task.

Algorithm 2.1 Video Representations Extractor main loop

video ← [frame_1, frame_2, ..., frame_N]
representations ← [RGB, Mask2Former(params), DPT(params), ...]
for repr in topo_sort(representations) do
    batches ← make_batches(video, batch_size)          ▷ batch_size = 1 if streaming
    for batch_of_frames in batches do
        if not already_computed(batch_of_frames) then
            out_repr ← repr.compute(batch_of_frames, [out_deps])
            img_repr ← repr.make_images(out_repr)      ▷ optional
        end if
        store_on_disk(batch, out_repr, img_repr)        ▷ only in batch mode
    end for
    store_repr_metadata(batches)                        ▷ stats about this representation
end for
store_run_metadata(video)                               ▷ stats about this video

A VRE process works on a single accelerator (CPU, GPU etc.), a single video (list of frames) and a single representation at a time. By default, it works on batches, where the batch size is defined globally, with the ability to overwrite this option at representation level. For example, we can process many RGB frames at once; however, for learned representations (i.e. neural networks), the batch size must be capped by memory requirements, like the (V)RAM capacity of the accelerator. We use the generic term 'accelerator', but this means CPU (for regular representations) or GPU (for neural representations) in most of the cases, with the ability to generalize to other custom accelerators, such as TPUs, NPUs etc. The tool does not schedule between multiple videos; rather, the list of frames of a single video is passed as input. While this may seem limiting, we present later how multiple videos and multiple GPUs can be used to scale this simple approach.
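As a rough illustration only, the main loop above can be sketched in Python as follows; the class attributes and function names here are our own simplification, not VRE's actual API:

    # Minimal sketch of the VRE main loop (our simplification, not the actual VRE API).
    from itertools import islice

    def topo_sort(representations):
        """Order representations so every dependency comes before its dependents."""
        ordered, seen = [], set()
        def visit(rep):
            if rep.name in seen:
                return
            for dep in rep.deps:
                visit(dep)
            seen.add(rep.name)
            ordered.append(rep)
        for rep in representations:
            visit(rep)
        return ordered

    def make_batches(frames, batch_size):
        it = iter(frames)
        while batch := list(islice(it, batch_size)):
            yield batch

    def run(video, representations, batch_size, streaming=False):
        outputs = {}  # representation name -> list of per-frame results
        for rep in topo_sort(representations):
            bs = 1 if streaming else batch_size
            results, start = [], 0
            for batch in make_batches(video, bs):
                # dependency outputs for exactly the frames in this batch
                dep_outs = [outputs[d.name][start:start + len(batch)] for d in rep.deps]
                results.extend(rep.compute(batch, dep_outs))
                start += len(batch)
            outputs[rep.name] = results
        return outputs

The sketch omits the on-disk storage, metadata and skip-if-computed logic, which are discussed next.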
Each run on a specific video and list of frames will produce an independent run-level metadata file. This file contains a unique identifier of the run, information about when it started and how long each representation took, as well as information about how many frames were successfully processed or not. It will also produce a raw log file based on the standard output and error, for debugging purposes. Moreover, each representation contains a secondary metadata file which offers more granular information at frame level. It offers details about how long each frame's computation took, whether it was stored only as binary (npz) or image (png/jpg) or both, and so on. For each frame, we also have a unique identifier of the metadata run id, so we can backtrace when each frame was computed. This information is used to know whether this particular frame will be skipped or not on a new run: the already_computed(batch) entry in the algorithm above. Note that this representation metadata file is the ground truth, not the actual data on the disk, as this file is faster to process than reading potentially hundreds of gigabytes of exported files! Modifying the data on the disk may corrupt the metadata and some frames can be wrongly skipped or re-computed on new runs. The metadata file can be manually regenerated from the stored-on-disk data if needed.

2.2. Representation

A representation is the basic block of a VRE run and consists of the definition and computation sides. The definition of a representation is based on a unique name, a set of parameters and a list of dependencies on other representations. This creates a graph of representations, which is topologically sorted to ensure that no cycles exist and that a proper representation scheduling can be obtained. The set of parameters allows us to compute the same representation multiple times with slightly different options. As an example, see Figure 2. Here, we produce two depth maps using two different representations (DPT [41] and Marigold [23]). Then, the camera normals representation (SVD-based) uses the very same algorithm and parameters, but with different depth map dependencies, producing two different outputs. Moving on, the safe-landing representation is similarly built on top of the camera normals representation. We can observe that the safe-landing area produced by the more lightweight DPT depth map misses some details, such as the safe-landing areas on top of the buildings.

The computation side of a representation is simply the code required to run the representation and is defined as simply as out_repr ← repr.compute(batch_of_frames, out_deps). The code runs at frame level (batched) and returns the map produced by the representation (i.e. HSV, edges, semantic segmentation, camera normals etc.). The code also receives all the outputs of all the dependencies, which are assumed to have been scheduled for computation before the current representation is run, via the topological sort.

Defining the representation. The representations (or experts) that we want to compute are instantiated based on a YAML configuration file. In this configuration file we must define the unique name of the representation, its parameters as well as its dependencies (if any). If the dependencies are not properly provided, then topo_sort(representations) will fail with an appropriate error. The configuration file looks roughly like this (YAML syntax):

    globals: {batch_size: 10}
    representations:
      rgb: {type: color/rgb, deps: [], params: {}}
      hsv: {type: color/hsv, deps: [rgb], params: {batch_size: 5}}
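Putting the definition and computation sides together, a minimal representation could look like the following hedged sketch; the base-class name and method signatures are our own illustration rather than VRE's exact interface:

    # Illustrative sketch of a representation (names are ours, not VRE's exact interface).
    import cv2

    class Representation:
        def __init__(self, name, deps=(), params=None):
            self.name, self.deps, self.params = name, list(deps), params or {}

        def compute(self, batch_of_frames, out_deps):
            raise NotImplementedError

        def make_images(self, out_repr):
            raise NotImplementedError

    class HSV(Representation):
        """HSV color transform; depends on the 'rgb' representation."""
        def compute(self, batch_of_frames, out_deps):
            rgb_batch = out_deps[0]  # output of the 'rgb' dependency
            return [cv2.cvtColor(f, cv2.COLOR_RGB2HSV) for f in rgb_batch]

        def make_images(self, out_repr):
            # HSV is already an image-like uint8 map, so return it as-is.
            return out_repr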
2.3. Running strategies: batching vs. streaming

One of the dichotomies of data-pipelines is managing the concepts of batching and streaming. The first refers to the principle of scheduling large amounts of data for computation in an offline manner; the main use-case here is reliability and efficiency at scale. On the other hand, there is streaming, where each piece of data must be processed in place, with more tolerance for reliability issues, but less for latency. At its core, VRE supports both modes, see Figure 3.

Figure 3. VRE processing strategies. Left: the standard batched strategy. We split the frames in batches and then each batch is passed through the algorithm of the representation, followed by a step where the results are stored on the disk. Right: the streaming strategy. In this mode the input is a live video stream (webcam, camera phone etc.). Each frame, or nearby ones (if needed), is processed sequentially by all representations.

Batch mode. This is the default mode for which the tool was created. We use it to schedule entire videos, compute various representations on them (i.e. pre-trained experts or procedural intermediate modalities), followed by storing them for later usage, such as training a new neural network using the exported data. Each expert is processed independently, based on the topological sorting, and the batch size is set such that we maximize the occupancy of the accelerator that does the work (i.e. GPUs). The exported data can be used as-is via the provided ML-ready data reader in Python, though the exports are language-agnostic. This mode is aimed at long-running reliable runs and implements various mechanisms, such as retrying with a smaller batch size in case of OOM errors, a re-entrant mechanism (skipping previously computed frames) and exhaustive diagnostics and log files for each run.

Streaming mode. VRE also supports streaming mode, where the focus is less on reliability and more on fast inference. In this mode, we disable any sort of disk operations (i.e. results are not stored on disk) and, optionally, we can disable more features, like creating images from raw predictions, depending on the application at hand. In Figure 4, we provide a basic diagram of how VRE integrates in a larger system, where frames can come from an external source (i.e. video, webcam, phone camera etc.), can be processed on an external machine (i.e. cloud GPU) and can be used to control an external device (i.e. robot, drone etc.).

Figure 4. VRE streaming architecture. We read frame by frame from the source (i.e. drone camera), process it on the VRE streaming client (i.e. cloud or local GPU), analyze the results and pass the actions to the target (i.e. drone controller). Notably, all these components can live on the same machine but they can also communicate through the network.

We provide various integrations through standard Linux tools (i.e. pipes), which allows VRE to read and write raw frames to standard input and output (or sockets).
For example, one can read from the standard input, while the frames come from a webcam and are piped through applications like ffmpeg [43]. Using the same mechanism, we support network streaming, by creating a TCP socket, allowing us to run VRE on a cloud server with a powerful GPU while piping the frames in and out of the server via tools like netcat, which acts like a (network) pipe. This means that any computer or node that can be accessed by a VRE client (i.e. public IP, ssh tunneling etc.) can be used for computation. The trade-off here is that the network latency can be too prohibitive for the application, especially if one must do real-time processing, such as controlling a UAV based on the stream of predictions. It should be noted that ideally we'd use a UDP socket with optimized video encoding for live-streaming for better performance; however, for simplicity, we use a TCP server and raw RGB frames, meaning that there is room for performance optimizations. We provide more concrete examples in the experiments section, where we use a mobile camera for raw frames, and compare a laptop GPU vs. a cloud GPU for real-time and near real-time semantic segmentation and depth estimation inference.
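To make the raw-frame mechanism concrete, here is a minimal Python sketch of the consumer side (our own illustration, with an assumed fixed resolution; not code from the VRE repository), reading raw RGB24 frames from standard input, which could be fed by ffmpeg or netcat:

    # Sketch: consume raw RGB24 frames from stdin (e.g. piped from ffmpeg or netcat).
    # The fixed 960x540 resolution is an assumption for illustration.
    import sys
    import numpy as np

    WIDTH, HEIGHT, CHANNELS = 960, 540, 3
    FRAME_BYTES = WIDTH * HEIGHT * CHANNELS

    def read_frames(stream):
        """Yield HxWx3 uint8 frames until the stream ends."""
        while True:
            buf = stream.read(FRAME_BYTES)
            if len(buf) < FRAME_BYTES:  # end of stream or partial frame
                return
            yield np.frombuffer(buf, dtype=np.uint8).reshape(HEIGHT, WIDTH, CHANNELS)

    if __name__ == "__main__":
        for i, frame in enumerate(read_frames(sys.stdin.buffer)):
            # here a representation's compute() would run on the frame
            print(f"frame {i}: mean intensity {frame.mean():.1f}", file=sys.stderr)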
2.4. Multi-GPU batching strategies

In batching mode, if multiple accelerators are available (i.e. nodes with ≥1 GPUs), VRE has support for a multi-GPU setup. This, of course, only applies to representations where a GPU is needed in the first place, such as pre-trained neural networks, also called experts in some contexts, for semi-supervised learning and distillation. The main requirement for such a representation, internally called a Learned Representation, is to properly implement a setup(device) method, where the GPU model is moved to the device. In Figure 5, we present two different multi-GPU strategies that can be applied by spawning multiple VRE processes on a single video.

Figure 5. VRE multi-GPU batching strategies. Strategy 1: slice the video in multiple independent chunks. Strategy 2: split the video's representations in sub-groups.

Strategy 1 is the simplest and most effective one. The video is treated as multiple independent video chunks, and each chunk is assigned to one VRE process and to one accelerator (GPU). This strategy has the advantage that it is consistent (i.e. each GPU will finish at around the same time) and ensures good utilization of each GPU. Furthermore, this strategy also works in distributed environments, allowing us to schedule a video on multiple machines as well, given that we don't overlap the chunks. The user must then optimize the configuration (i.e. batch size, computation parameters etc.) such that each frame is computed in the optimal time across all representations. The main limitation of this strategy is that all the representations must be computed by the same VRE process. In some scenarios, we may wish that simple independent representations (i.e. RGB, HSV, edges) do not block an entire GPU, especially if they are not dependencies of other complex representations. This is where Strategy 2 comes into play: one video (or video chunk) can be further split into independent representation groups. For example, we can have a representation group that computes a depth representation and a camera normals representation (which depends on depth) on one GPU, a secondary group that computes semantic segmentation on another GPU and a third group that computes HSV and Canny edges on just a CPU. These can run in parallel in three separate VRE processes, as they are independent from each other. As a recommendation, a user should first start with Strategy 1 and optimize a single configuration for all representations on a single frame, followed by chunking the video based on the number of available accelerators. For most use-cases, one can stop here. Strategy 2 can then be seen as fine-tuning, where the already-optimized configuration is split into multiple sub-configurations to separate GPU representations (i.e. neural networks) from CPU representations (i.e. HSV or Canny edges). VRE natively supports this via the --frames A..B CLI flag, meaning that one can start 8 VRE processes (one on each GPU) on 8 subsets of the same video, with results stored in the same output directory without any clashes. Furthermore, we have a helper CLI tool, vre_gpu_parallel, which can be used to implement Strategy 1:

    CUDA_VISIBLE_DEVICES=0,1,2,3 vre_gpu_parallel \
        VIDEO -o DIR --config_path CONFIG --frames 0..100
    Executing: CUDA_VISIBLE_DEVICES=0 vre VIDEO \
        --frames 0..25 -o DIR --config_path CONFIG
    ...
    Executing: CUDA_VISIBLE_DEVICES=3 vre VIDEO \
        --frames 75..100 -o DIR --config_path CONFIG

This tool simply spawns N sub-processes after properly splitting the frames. If the frames flag is not provided, it will slice the whole video. Note that any potential race conditions on the same output directory are solved by using atomic write operations, as we explain in the next section.
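The frame-splitting logic behind such a tool can be sketched as follows; this is a simplified illustration of Strategy 1, not the actual vre_gpu_parallel source:

    # Sketch: split a frame interval across N GPUs, as vre_gpu_parallel does conceptually.
    def split_frames(start: int, end: int, n_gpus: int) -> list[tuple[int, int]]:
        """Return per-GPU [start, end) frame intervals covering [start, end)."""
        total = end - start
        chunk = total // n_gpus
        ranges = []
        for gpu in range(n_gpus):
            lo = start + gpu * chunk
            hi = end if gpu == n_gpus - 1 else lo + chunk  # last GPU takes the remainder
            ranges.append((lo, hi))
        return ranges

    print(split_frames(0, 100, 4))  # [(0, 25), (25, 50), (50, 75), (75, 100)]

Each resulting interval is then handed to one spawned vre process pinned to one GPU, matching the CLI output shown above.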
This is called re-entrancy, meaning that the tool can continue previously stopped work with the idea that nothing is lost. An important aspect is the fact that each write to the representation metadata.json file is atomic, allowing multiple VRE processes to compute the same representation on different slices of a video, such as the case for the Strategy 1 in a multi-gpu setup (see Figure 2.4). In case a representation depends on a previously computed representation (i.e. camera normals using SVD requires a pre-computed depth map), then it is more efficient to load the representation from the disk, rather than to recompute it. In some cases it’s not even possible, for example if a neural network representation depends on another neural network representation, as this would require both of them to be loaded at the same time, which can cause out of memory issues or be very slow due to CPU computation. To support this, each representation must implement two functions: (1) representation.memory to disk(out repr, path) (2) out repr ←representation.disk to memory(path). By default these two representations are identical, but in some cases (for example semantic segmentation), it is more efficient to store the data as argmaxed class indices (uint8), rather than raw predictions (float32), like logits or softmax-probabilities. These are, of course, more advanced nuances and proper action must be taken based on how the data is used. Sometimes it is imperative that the data is stored as-is, other times it’s good enough to truncate it to float32, while others class indices are just as good! 2.6. VRE repository At the time of writing, the following algorithms and pre-trained experts are implemented and ready to use. • Color: RGB, HSV • Edges: Canny [9], DexiNed [37] Multi-modal video data-pipelines for machine learning with minimal human supervision 13 • Optical flow: RAFT [42], RIFE [21] • Depth estimation - DPT [41], Marigold [23] • Normal maps: SVD (from depth) [18] • ”Soft” segmentation: FastSAM [46], Generalized Boundaries [27], Halftone • Semantic segmentation - Mask2Former [11], PHG-MAE-Distil [33] Upon representation instantiation, the weights of these representations (for learned representations only) are downloaded locally from a cloud storage. Implementing a new representation is as simple as implementing a shared inter- face with a few methods, such as compute(batch of frames, dependencies) and make image(frame, computed result). For learned representations (i.e. neural networks), one must also implement two more methods: load weights(path) and unload weights() which are used to load the networks and clear the mem- ory during execution. Moreover, one can implement more fine levers, such as the previously mentioned disk to memory and memory to disk functions. 2.7. Case study: Dronescapes2 dataset. A fully-automated dataset built with VRE. In this section we present a case study: the Dronescapes2 dataset which was generated using VRE. This dataset as well as a step-by-step reproduc- tion process using VRE are publicly available at https://sites.google.com/ view/dronescapes-dataset. Table 1 summarizes the size and modalities of the Dronescapes dataset followed by the extended variant: the Dronescapes2- M+ dataset, generated using the VRE data-pipeline. 
2.6. VRE repository

At the time of writing, the following algorithms and pre-trained experts are implemented and ready to use:
• Color: RGB, HSV
• Edges: Canny [9], DexiNed [37]
• Optical flow: RAFT [42], RIFE [21]
• Depth estimation: DPT [41], Marigold [23]
• Normal maps: SVD (from depth) [18]
• "Soft" segmentation: FastSAM [46], Generalized Boundaries [27], Halftone
• Semantic segmentation: Mask2Former [11], PHG-MAE-Distil [33]

Upon representation instantiation, the weights of these representations (for learned representations only) are downloaded locally from a cloud storage. Implementing a new representation is as simple as implementing a shared interface with a few methods, such as compute(batch_of_frames, dependencies) and make_image(frame, computed_result). For learned representations (i.e. neural networks), one must also implement two more methods: load_weights(path) and unload_weights(), which are used to load the networks and clear the memory during execution. Moreover, one can implement finer levers, such as the previously mentioned disk_to_memory and memory_to_disk functions.

2.7. Case study: Dronescapes2, a fully-automated dataset built with VRE

In this section we present a case study: the Dronescapes2 dataset, which was generated using VRE. This dataset, as well as a step-by-step reproduction process using VRE, is publicly available at https://sites.google.com/view/dronescapes-dataset. Table 1 summarizes the size and modalities of the Dronescapes dataset, followed by the extended variant: the Dronescapes2-M+ dataset, generated using the VRE data-pipeline.

Name | Data points (GT labels) | I/O modalities | UAV scenes | Description
Dronescapes-Semisup1 [32] | 12K (233) | 5/3 | 7 | Original train set
Dronescapes-Semisup2 [32] | 11K (207) | 5/3 | 7 | Original semi-supervised set
Dronescapes-Test [32] | 5.6K* (116) | 5/3 | 3 | Original test set
Dronescapes2-M+ | 80K (440) | 14/3 | 15 | All new experts and intermediate modalities on 8 new videos plus the original ones

Table 1. Dronescapes dataset variations and stats. Numbers in parentheses represent the semantically human-annotated data. Data points indicates the number of RGB frames.

The original dataset defines three output tasks: semantic segmentation, depth estimation, and camera normals estimation. Semantic segmentation maps are human-annotated, with label propagation [31] used to interpolate missing frames. Depth and camera normals were generated from raw videos and GPS data using a Structure-from-Motion (SfM) tool [16]. The Dronescapes extension, named Dronescapes2-M+, builds upon the initial 23K frames from Dronescapes-Semisup1 and Dronescapes-Semisup2 by adding 8 new video scenes sourced from the internet, yielding 57K new frames for a total of 80K frames. It generates experts and intermediate modalities with no human annotation, using the data-pipeline and the new videos only. We use both the VRE configuration (batched) as well as the distilled model (streaming) in the experiments section.

The VRE configuration contains the following experts and intermediate modalities:
• semantic segmentation (3): Mask2Former on three released checkpoints
• depth estimation (1): Marigold
• camera normals intermediate modality (1): SVD-based algorithm
• binary semantic intermediate modalities (8): vegetation, sky-and-water, containing, transportation, buildings (all types and nearby only) and safe-landing (geometric only and geometric + semantic)

In total, there are four pre-trained experts (3 Mask2Former variants and 1 Marigold) plus nine new intermediate modalities. All the intermediate modalities are implemented as new procedural representations built on top of the existing representations. For example, the 'safe-landing (+semantic)' binary segmentation mask is defined as: (v2 > 0.8) ∗ ((v1 + v3) < 1.2) ∗ (depth ≤ 0.9) ∗ safe_class(semantic), where v1, v2, v3 are the 3 angles of the camera normals map and safe_class is a true/false mapping on top of the underlying Mask2Former semantic segmentation map. The thresholds (representation parameters) can be updated based on experiments.
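Such a procedural intermediate modality is small enough to sketch directly; the following is our hedged rendering of the formula above, where the set of safe class indices is hypothetical and only for illustration:

    # Sketch of the 'safe-landing (+semantic)' intermediate modality.
    # Thresholds come from the formula in the text; SAFE_CLASSES is a
    # hypothetical class-index set, for illustration only.
    import numpy as np

    SAFE_CLASSES = {0, 3}  # e.g. land and grass indices; assumed values

    def safe_landing(normals: np.ndarray, depth: np.ndarray, semantic: np.ndarray) -> np.ndarray:
        """normals: (H, W, 3) camera-normals angles v1, v2, v3 in [0, 1];
        depth: (H, W) normalized depth; semantic: (H, W) class indices."""
        v1, v2, v3 = normals[..., 0], normals[..., 1], normals[..., 2]
        geometric = (v2 > 0.8) & ((v1 + v3) < 1.2) & (depth <= 0.9)
        safe_class = np.isin(semantic, list(SAFE_CLASSES))
        return (geometric & safe_class).astype(np.uint8)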
2.8. Case study: PHG-MAE, a model designed for intermediate modalities exported by VRE

Using the Dronescapes2 dataset, the work of PHG-MAE [33] has trained a multi-modal multi-task learning model, designed specifically for this kind of data, with just 4.4M parameters. In Figure 6, we can see a snippet of how this model works.

Figure 6. The data-pipeline and PHG-MAE model on real data. Left: the process of deriving modalities as pseudo-labels from pre-trained experts using RGB only, followed by deriving new modalities from combinations of experts. Right: integration of all the new modalities in the PHG-MAE semi-supervised training and inference pipeline, with each modality being either input, intermediate or output.

It yields competitive results on the Dronescapes-Test dataset against models such as Mask2Former, a 217M-parameter model, almost 2 orders of magnitude larger. However, it has one big limitation: it must run the entire data-pipeline for 13 intermediate modalities, including 3 semantic segmentation neural network experts and one depth estimation expert, for each RGB frame. To solve this issue, the authors also provide a set of distilled lightweight neural networks (150k, 450k, 1.1M, 4.4M parameters) with little to no degradation in performance, enabling real-time semantic segmentation. Below, in Table 2, we present a comparison of these models.

Model | Parameters | Mean IoU ↑
PHG-MAE-NRand [33] | 4.4M | 55.06 ± 0.09
PHG-MAE-Distil [33] | 4.4M | 55.05
PHG-MAE-Distil [33] | 430k | 54.94
PHG-MAE-Distil [33] | 1.1M | 54.3
Mask2Former [11] | 217M | 53.97
PHG-MAE-Distil [33] | 150k | 53.32
PHG-MAE-1All [33] | 4.4M | 52.04
PHG-MAE-1Rand [33] | 4.4M | 51.83 ± 3.3

Table 2. Semantic segmentation performance for the PHG-MAE model, trained on Dronescapes2, a dataset generated using VRE. The -Distil variants are trained on top of pseudo-labels generated by the -NRand model.

In the experimental section, which follows, we will make use of these distilled variants of the PHG-MAE model to run the data-pipeline in streaming mode for real-time semantic segmentation.

3. Experiments

In this section we'll go over a few experiments that demonstrate the capabilities of the Video Representations Extractor (VRE) data-pipeline on real-world machine learning workloads. We'll first go over the batched case, as introduced in Section 2.3, providing three simple-to-complex experiments. The data-pipeline was initially created for these kinds of workloads: to streamline and standardize the efficient creation of new datasets based on raw videos alone. Then, we'll go over a more recent addition to the data-pipeline, the real-time streaming component. We provide two experiments with two models: semantic segmentation and depth estimation using pre-trained experts supported out of the box in the VRE repository. We start with a local streaming example, followed by offloading the computation side from a local machine to a remote GPU on an internet-accessible machine. The command we'll be using is as simple as:

    vre VIDEO --config_path CONFIG -o DIR --frames A..B --batch_size B --skip_computed
These experiments should give us an initial hint about how VRE handles large-scale batch processing. The results can be seen in Figure 7. Figure 7. Simple export experiment. We compare three output formats for two computed representations: RGB and HSV. We observe that the duration extends both with a more complex storage (i.e. npz only vs compressed npz), with the frame resolution as well as whether we export only a binary representation or both binary and image (jpg). Inter- estingly, the compressed export saves about 2.6x disk space (2.4GB vs 907MB on 1080p for the HSV export), while taking only 1.93 times more (236.2s vs 121.8s). This is very useful for large-scale video processing allowing us to make Multi-modal video data-pipelines for machine learning with minimal human supervision 17 a space-time trade-off. Notably, since these experiments are on CPU only, the batch size should more or less not matter. 3.2. Batched export: RGB, PHG-MAE-Distil and DPT In this experiment, we’ll try to increase the batch size for two learned representations to observe the performance boost of this feature. The results can be seen in Figure 8. Figure 8. Batched export results (CPU vs. GPU) on a local machine with three batch sizes (1, 5, 20) and two models Depth DPT (left) and PHG-MAE- Distil-450k (right). First, we observe that the GPU (CUDA) variant constantly outperforms the CPU one on each experiment, regardless of batch size. This is expected as machine learning models are optimized for GPU usage. For the DPT model we observe about a 5x improvement, while for the PHG-MAE-Distil model, we see a 2.5-3x improvement depending on batch size. Moreover, we observe a constant improvement as we increase the batch size on both CPU and GPU for all the video resolutions. For the DPT model we see about a 1.5x improvement between the B=1 and B=20. This holds even for the PHG-MAE-Distil model, where we see a 1.1-1.3x speed-up. While these improvements may not be huge, we should always aim at maximizing the usage of our accelerators as each image in batch-mode is independent from each other, allowing for parallel processing. The only reason we should use a lower batch size is due to memory constraints. 3.3. Complex batched export: Dronescapes2 config Our third and most expensive batched export experiment is obtained by re-using the Dronescapes2 VRE configuration file on a new video (N=100 frames) at the same resolution as the original work: 960x540. In Figure 9, we present a histogram of the average duration of each representation across 100 frames as well as a plot showcasing the scalability of the VRE tool, given more GPUs on a single machine using Strategy 1. 18 Pˆırvu Mihai-Cristian, Marius Leordeanu Figure 9. Results on running the data-pipeline on the Dronescapes2 config. Left: bar plot with the average duration of each representation per frame. Right: total duration for different number of GPUs. In the left side we provide the statistics of running the experiment on a single GPU. We observe that most of the time is spent on a single representa- tion, namely the normals from SVD algorithm, taking an average of 6 seconds per frame to compute. This makes sense, as this algorithm is not easily par- allelizable. On the other hand, even neural network representations, such as Mask2Former or Marigold take about 1 second on average for each frame. On the right side of the figure, we run the same configuration, but using Strategy 1 from Section 2.4 in a multi-gpu setup. 
We observe that the average computation time drops almost linearly with the number of GPUs when using 2 or 4 GPUs, but then it plateaus. For 8 GPUs it takes about 264 seconds, down from 1521 seconds, while perfect scaling would mean the computation takes 1521/8 ≈ 190 seconds, thus reaching a scaling efficiency of about 72% (190/264 ≈ 0.72). The most reasonable explanation for this sub-linear scaling is that other resources (such as I/O, RAM or CPU) also bottleneck the parallelism.

In Figure 10 we provide a qualitative result after running the data-pipeline with the Dronescapes2 configuration on a new video. We observe the 3-level nesting of the VRE process. The experts (neural networks) are derived only from the RGB frame. Then, the first level of derived intermediate modalities consists of the camera normals from depth and the semantic segmentation mapping, from the original datasets of Mask2Former to the Dronescapes2 set of classes. Finally, the last level of derived modalities is built on top of the first two levels, which are already available at that point due to topological sorting.

Figure 10. All the extracted experts and derived intermediate modalities in the data-pipeline. All are generated starting from the RGB image only.

3.4. Real-time streaming for machine learning models

In this experiment we test the streaming capabilities of VRE, following the architecture presented in Figure 4. We start with the basic example of doing everything on the same machine: the source is a local video file (960x540 resolution, as before), the GPU is on the same machine and the output is just a video player. In Figure 11, we measure the time spent processing in the VRE tool with no model, as well as processing through various models: PHG-MAE-Distil (150k to 4.4M params), Mask2Former, Depth DPT and Depth Marigold.

Figure 11. Streaming the frames of a video through various ML models. Processed on a local GPU.

We observe that the models have quite low variance: most of the frames take about the same amount of time regardless of the model. Notably, the PHG-MAE-Distil variants can be used for real-time segmentation, while Depth DPT can be used for real-time depth estimation, which can enable various robotics applications, such as safe navigation through a natural environment. The other two models, while they output higher-quality results, especially Marigold, are better fit for batched export, achieving less than 2 FPS.
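For reference, the per-frame timing measured in Figure 11 corresponds conceptually to a loop like the one below (a sketch assuming OpenCV for decoding and display; model stands for any of the representations above, returning a drawable image, and is not VRE's actual streaming client):

```python
import time
import cv2

def stream_local_video(path: str, model) -> None:
    # Read a local video frame by frame, run the model and display the result;
    # in streaming mode no results are written to disk.
    cap = cv2.VideoCapture(path)
    n_frames, t_start = 0, time.perf_counter()
    while True:
        ok, frame_bgr = cap.read()
        if not ok:  # end of stream
            break
        prediction = model(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
        cv2.imshow("vre-streaming", prediction)  # the 'video player' target
        cv2.waitKey(1)
        n_frames += 1
    cap.release()
    print(f"{n_frames / (time.perf_counter() - t_start):.2f} FPS")
```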
3.5. Real-time machine-learning streaming with a handheld device and remote processing

In this experiment, we offload most of the work of the previous experiment, using a handheld device (phone camera) as the frame source. Moreover, we also offload the processing unit to a cloud GPU, compared to using a local laptop GPU, to test the trade-offs induced by the network latency. Both the setup and the FPS results can be seen in Figure 12.

Figure 12. Streaming the frames of a phone camera through various ML models. Processed on a local GPU and on a remote GPU. Left: the FPS results. Right: the live-streaming setup.

On the right side we can see the streaming setup: we capture the camera feed from the mobile phone, then relay it to the processing GPU (local or remote). The remote machine is the same as the one used in all the previous experiments, while the local machine is the laptop in the image, with a laptop GPU (NVIDIA RTX 4050). The processed images are then displayed on the laptop's screen, which is the target.

The local processing results are more or less the same as in the previous experiment, with a slightly lower FPS on average, due to the processing being done on a laptop GPU. On the other hand, when processing remotely, we observe that both models drop to about 2-3 FPS, due to the added network latency. As this is a live feed, we compute the FPS in the following manner: we store on the local machine a CSV file with timestamps recorded whenever the displayed image changes in pixels. On the remote machine, we only process the latest available frame, effectively skipping frames. In order to do this, we have a thread that reads frames from the network, while the main processing thread (VRE) takes the latest available frame from it. Otherwise, since the camera works at 30 FPS while the models process at only about 2 FPS, the queue of unprocessed frames would grow until the server ran out of memory. Furthermore, we only use TCP sockets with raw RGB images, not live-stream-specialized video encodings or UDP sockets, which further adds to the latency.
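A minimal sketch of this "latest frame only" mechanism is shown below, assuming a single-slot buffer guarded by a lock (illustrative; VRE's actual networking code may differ):

```python
import socket
import threading

class LatestFrame:
    # Single-slot buffer: the reader thread overwrites the slot on every
    # received frame, so the slow processing thread never sees a backlog.
    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._frame = None

    def put(self, frame: bytes) -> None:
        with self._lock:
            self._frame = frame  # drop whatever was stored before

    def get(self):
        with self._lock:
            return self._frame  # None until the first frame arrives

def reader(conn: socket.socket, slot: LatestFrame, frame_size: int) -> None:
    # Receives fixed-size raw RGB frames over TCP at camera rate (~30 FPS).
    while True:
        buf = b""
        while len(buf) < frame_size:
            chunk = conn.recv(frame_size - len(buf))
            if not chunk:
                return  # connection closed
            buf += chunk
        slot.put(buf)
```

The main processing loop (around 2 FPS here) then simply calls slot.get() once per iteration, implicitly skipping every frame that arrived while the previous one was being processed.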
Processing live feeds is surprisingly complicated. The main conclusion to be drawn here is that real-time processing is very hard to achieve over a network, thus local computation should be preferred if possible.

4. Conclusions

We introduce a machine learning infrastructure data-pipeline aimed at streamlining the creation of multi-modal datasets for training deep neural networks. We present the architectural design and the batched vs. streaming duality, which the tool supports natively. For the batched case, we provide multi-gpu strategies, such as splitting a video into multiple slices or targeting different GPUs with representation groups. We open-source the tool alongside a repository of already implemented representations. We then present a case study of how Dronescapes, an aerial image understanding dataset, was created using this tool. Then, we present another case study on how a multi-modal neural network model leverages this dataset to provide competitive results on semantic segmentation with very few parameters. Finally, we provide experiments for both the batched case, as well as a real-time and near-real-time semantic segmentation and depth estimation streaming pipeline using a handheld phone's camera as the live feed.

As future work, our data-pipeline can be improved to support streaming-native video protocols built on top of UDP, such as RTP. Moreover, the tool works natively on a single node, allowing node-level parallelism, such as multi-gpu setups. However, this approach could be extended to support distributed systems as well, allowing for a more seamless horizontal scaling where nodes can be created and deleted on demand. Finally, while we already support a list of existing models in the VRE repository, more pre-trained models could be implemented, such as object recognition, keypoint extraction or video action recognition.

Acknowledgements. This work is supported in part by the projects "Romanian Hub for Artificial Intelligence - HRIA", Smart Growth, Digitization and Financial Instruments Program, 2021-2027 (MySMIS no. 334906) and "European Lighthouse of AI for Sustainability - ELIAS", Horizon Europe program (Grant No. 101120237).

REFERENCES

[1] Sanad Aburass and Maha Abu Rumman. Quantifying overfitting: Introducing the overfitting index. In 2024 International Conference on Electrical, Computer and Energy Technologies (ICECET), pages 1–7. IEEE, 2024.
[2] Saeed Hamood Alsamhi, Alexey V Shvetsov, Santosh Kumar, Svetlana V Shvetsova, Mohammed A Alhartomi, Ammar Hawbani, Navin Singh Rajput, Sumit Srivastava, Abdu Saif, and Vincent Omollo Nyangaresi. UAV computing-assisted search and rescue mission framework for disaster and harsh environment mitigation. Drones, 6(7):154, 2022.
[3] Naeem Ayoub and Peter Schneider-Kamp. Real-time on-board deep learning fault detection for autonomous UAV inspections. Electronics, 10(9):1091, 2021.
[4] Roman Bachmann, David Mizrahi, Andrei Atanov, and Amir Zamir. MultiMAE: Multi-modal multi-task masked autoencoders. In European Conference on Computer Vision, pages 348–367. Springer, 2022.
[5] Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. wav2vec 2.0: A framework for self-supervised learning of speech representations. Advances in Neural Information Processing Systems, 33:12449–12460, 2020.
[6] Jinze Bai, Rui Men, Hao Yang, Xuancheng Ren, Kai Dang, Yichang Zhang, Xiaohuan Zhou, Peng Wang, Sinan Tan, An Yang, et al. OFASys: A multi-modal multi-task learning system for building generalist models. arXiv preprint arXiv:2212.04408, 2022.
[7] Pedro HE Becker, Jose Maria Arnau, and Antonio González. Demystifying power and performance bottlenecks in autonomous driving systems. In 2020 IEEE International Symposium on Workload Characterization (IISWC), pages 205–215. IEEE, 2020.
[8] Ondřej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint-Amand, et al. Findings of the 2014 workshop on statistical machine translation. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 12–58, 2014.
[9] John Canny. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, (6):679–698, 1986.
[10] Li Chen, Tutian Tang, Zhitian Cai, Yang Li, Penghao Wu, Hongyang Li, Jianping Shi, Junchi Yan, and Yu Qiao. Level 2 autonomous driving on a single device: Diving into the devils of openpilot. arXiv preprint arXiv:2206.08176, 2022.
[11] Bowen Cheng, Ishan Misra, Alexander G Schwing, Alexander Kirillov, and Rohit Girdhar. Masked-attention mask transformer for universal image segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1290–1299, 2022.
[12] Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The Cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3213–3223, 2016.
[13] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, 2019.
[14] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al.
An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
[15] Peter D Dueben, Martin G Schultz, Matthew Chantry, David John Gagne, David Matthew Hall, and Amy McGovern. Challenges and benchmark datasets for machine learning in the atmospheric sciences: Definition, status, and outlook. Artificial Intelligence for the Earth Systems, 1(3):e210002, 2022.
[16] Carsten Griwodz, Simone Gasparini, Lilian Calvet, Pierre Gurdjos, Fabien Castan, Benoit Maujean, Gregoire De Lillo, and Yann Lanthony. AliceVision Meshroom: An open-source 3D reconstruction pipeline. In Proceedings of the 12th ACM Multimedia Systems Conference – MMSys '21. ACM Press, 2021.
[17] Emanuela Haller, Elena Burceanu, and Marius Leordeanu. Self-supervised learning in multi-task graphs through iterative consensus shift. arXiv preprint arXiv:2103.14417, 2021.
[18] Richard Hartley and Andrew Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2003.
[19] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16000–16009, 2022.
[20] Daniel Hernández, José M Cecilia, Juan-Carlos Cano, and Carlos T Calafate. Flood detection using real-time image segmentation from unmanned aerial vehicles on edge-computing platform. Remote Sensing, 14(1):223, 2022.
[21] Zhewei Huang, Tianyuan Zhang, Wen Heng, Boxin Shi, and Shuchang Zhou. Real-time intermediate flow estimation for video frame interpolation. In European Conference on Computer Vision, pages 624–642. Springer, 2022.
[22] Andrej Karpathy. Software 2.0. https://web.archive.org/web/20250323195948/https://karpathy.medium.com/software-2-0-a64152b37c35, 2025. [Online; accessed 04-April-2025].
[23] Bingxin Ke, Anton Obukhov, Shengyu Huang, Nando Metzger, Rodrigo Caye Daudt, and Konrad Schindler. Repurposing diffusion-based image generators for monocular depth estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9492–9502, 2024.
[24] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment Anything. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4015–4026, 2023.
[25] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25, 2012.
[26] Marius Leordeanu, Mihai Cristian Pîrvu, Dragos Costea, Alina E Marcu, Emil Slusanschi, and Rahul Sukthankar. Semi-supervised learning for multi-task scene understanding by neural graph consensus. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 1882–1892, 2021.
[27] Marius Leordeanu, Rahul Sukthankar, and Cristian Sminchisescu. Generalized boundaries from multiple image interpretations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(7):1312–1324, 2014.
[28] Liangkai Liu, Zheng Dong, Yanzhi Wang, and Weisong Shi. Prophet: Realizing a predictable real-time perception pipeline for autonomous vehicles. In 2022 IEEE Real-Time Systems Symposium (RTSS), pages 305–317. IEEE, 2022.
[29] Jiasen Lu, Christopher Clark, Sangho Lee, Zichen Zhang, Savya Khosla, Ryan Marten, Derek Hoiem, and Aniruddha Kembhavi. Unified-IO 2: Scaling autoregressive multimodal models with vision, language, audio and action. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 26439–26455, 2024.
[30] Alina Marcu. Quantifying the synthetic and real domain gap in aerial scene understanding. arXiv preprint arXiv:2411.19913, 2024.
[31] Alina Marcu, Vlad Licaret, Dragos Costea, and Marius Leordeanu. Semantics through time: Semi-supervised segmentation of aerial videos with iterative label propagation. In Proceedings of the Asian Conference on Computer Vision, 2020.
[32] Alina Marcu, Mihai Pirvu, Dragos Costea, Emanuela Haller, Emil Slusanschi, Ahmed Nabil Belbachir, Rahul Sukthankar, and Marius Leordeanu. Self-supervised hypergraphs for learning multiple world interpretations. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 983–992, 2023.
[33] Mihai-Cristian Pîrvu and Marius Leordeanu. Probabilistic hyper-graphs using multiple randomly masked autoencoders for semi-supervised multi-modal multi-task learning, 2025.
[34] David Mizrahi, Roman Bachmann, Oguzhan Kar, Teresa Yeo, Mingfei Gao, Afshin Dehghan, and Amir Zamir. 4M: Massively multimodal masked modeling. Advances in Neural Information Processing Systems, 36:58363–58408, 2023.
[35] Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. Librispeech: An ASR corpus based on public domain audio books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5206–5210. IEEE, 2015.
[36] Mihai Pirvu, Alina Marcu, Maria Alexandra Dobrescu, Ahmed Nabil Belbachir, and Marius Leordeanu. Multi-task hypergraphs for semi-supervised learning using earth observations. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3404–3414, 2023.
[37] Xavier Soria Poma, Edgar Riba, and Angel Sappa. Dense extreme inception network: Towards a robust CNN model for edge detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 1923–1932, 2020.
[38] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748–8763. PMLR, 2021.
[39] Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. Improving language understanding by generative pre-training. 2018.
[40] Inioluwa Deborah Raji, Emily M Bender, Amandalynne Paullada, Emily Denton, and Alex Hanna. AI and the everything in the whole wide world benchmark. arXiv preprint arXiv:2111.15366, 2021.
[41] René Ranftl, Alexey Bochkovskiy, and Vladlen Koltun. Vision transformers for dense prediction. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 12179–12188, 2021.
[42] Zachary Teed and Jia Deng. RAFT: Recurrent all-pairs field transforms for optical flow. In Computer Vision – ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part II 16, pages 402–419. Springer, 2020.
[43] Suramya Tomar. Converting video formats with FFmpeg. Linux Journal, 2006(146):10, 2006.
[44] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin.
Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.
[45] Amir R Zamir, Alexander Sax, William Shen, Leonidas J Guibas, Jitendra Malik, and Silvio Savarese. Taskonomy: Disentangling task transfer learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3712–3722, 2018.
[46] Xu Zhao, Wenchao Ding, Yongqi An, Yinglong Du, Tao Yu, Min Li, Ming Tang, and Jinqiao Wang. Fast Segment Anything. arXiv preprint arXiv:2306.12156, 2023.
U.P.B. Sci. Bull., Series A, Vol. 88, Iss. 3, 2025 ISSN 1223-7027 MULTI-MODAL VIDEO DATA-PIPELINES FOR MACHINE LEARNING WITH MINIMAL HUMAN SUPERVISION Pˆırvu Mihai-Cristian1 and Marius Leordeanu2 The real-world is inherently multi-modal at its core. Our tools observe and take snapshots of it, in digital form, such as videos or sounds, however much of it is lost. Similarly for actions and information passing between humans, languages are used as a written form of communication. Traditionally, Machine Learning models have been unimodal (i.e. rgb →semantic or text →sentiment class). Recent trends go towards bi-modality, where images and text are learned together, however, in order to truly understand the world, we need to integrate all these independent modalities. In this work we try to combine as many visual modalities as we can using little to no human supervision. In order to do this, we use pre-trained experts and procedural combinations between them on top of raw videos using a fully autonomous data-pipeline, which we also open-source. We then make use of PHG-MAE [33], a model specifically designed to leverage multi-modal data. We show that this model which was efficiently distilled into a low-parameter (≤1M) can have competitive results compared to models of ∼300M parameters. We deploy this model and analyze the usecase of real-time semantic segmentation from handheld devices or webcams on commodity hardware. Finally, we deploy other off-the-shelf models using the same framework, such as DPT [41] for near real-time depth estimation. Keywords: multi-modal machine learning, semantic segmentation, depth estimation, real-time processing, real-time inference, commodity hardware 1. Introduction and related work In the domain of machine learning there are typically three correlated subsystems, namely the data (1), the models and algorithms (2) and the predictions and actions of a model (3), as presented in Figure 1. Generally speaking the field of research has mostly focused on the models side on fixed benchmarks. This makes sense from a practical point of view: one gets to directly compare a novel solution against existing work in a controlled setting, making the contribution less biased and also more convenient, since they can re-use the data acquisition and processing from the prior work. This approach has 1PhD student, "Simion Stoilow", e-mail: 2Professor, Faculty of Automatic Control and Computer Science, National -mail: 1 16 Oct 2025 2 Pˆırvu Mihai-Cristian, Marius Leordeanu Figure 1. High-level overview of an end-to-end machine learning system: from raw data and data processing, to training and optimizing models and lastly by deploying it to interact and control a real hardware autonomously with intelligent actions. worked very well and has driven the progress of the field with results such as the AlexNet [25] classification network on the ImageNet dataset, the Transformer network [44] on the WMT14 English-German translation dataset [8], DeepLabV3+ semantic segmentation model on the Cityscapes dataset [12], Wav2Vec speech recognition model [5] on the LibriSpeech dataset [35] and many others. However, there are concerns with this approach. Some argue that we are overfitting on a small set of single or few task benchmarks [40], leading to solutions that don't generalize. Overfitting indices have been proposed [1]. On the other hand, some argue that less mainstream fields, such as atmospheric science, struggle to evolve due to the lack of such benchmarks [15]. 
A similar issue was identified in the domain of UAVs and aerial image understanding in [30]. In the work of [32], they introduce Dronescapes, a dataset for UAVs on three tasks: semantic segmentation, depth estimation and camera normals estimation, which is also a starting point of this work. Recently, the trend has been towards massive pre-training with models such as the GPT series for language modeling [39] trained on the WebText dataset (40B tokens), Vision Transformer (ViT) [14] trained on 15M images, CLIP [38] trained on 400M image-text pairs and Segment Anything (SAM) [24] trained on 11M images and 1.1B segmentation masks. In [38], they show that their model is more robust to adversarial examples (ImageNet variants) from the same domain compared to models trained on just ImageNet, showcasing the need for more high-quality large-scale datasets. All the recent improvements have been done by combining the Data and Models 'boxes' from our main figure into a simpler and more scalable loop: creating automatic or semi-automatic datasets with little human intervention, followed by a simple but generic pretraining algorithm and then by a task-specific fine-tuning. In order to do this, one cannot simply rely on existing datasets for training. Multi-modal video data-pipelines for machine learning with minimal human supervision 3 For this reason alone, we focus on the Data (1) (acquisition and processing) side of things more than is usual, while remaining in the context of MultiModal Machine Learning. Multi-modality refers to the use of multiple types of sensors together to achieve some goals or tasks. For example, combining images with depth information or text descriptions provides a richer understanding than using images alone. The world presents information through various modalities, and research aims to leverage these correlated sources. Our datapipeline extends the Dronescapes dataset [32] from 12K frames for the training set to 80k frames across 3 tasks in an automated fashion from 8 new videos and pre-trained experts only. We create up to 13 new modalities including semantic and depth-derived ones like camera normals or binary segmentation maps, like safe landing areas, buildings, water etc. This dataset is then used to train the PHG-MAE-NRand model [33] as well as the PHG-MAE-Distil variants. These lightweight distillation models (150k or 400k parameters) yield competitive results against models such as Mask2Former [11], that has 217M parameters, which is up to 3 orders of magnitude larger. Moving over to the Models (2) side, recent trends have been towards large and very large Transformer-based models, with hundreds of millions and billions of parameters. These models, inspired by the domain of NLP [13], use techniques like Masked Auto Encoders (MAE) to do large pre-training on generic and easily acquirable data (like RGB only), followed by task-specific fine-tuning [19] on visual data. Recent works, such as [4], leverage MAE-based solutions for Multi Task Learning (MTL): depth estimation and semantic segmentation. [29] and [34] extend this approach to new modalities, including image, text, audio generation, and action prediction for robotics. These advances are driven by the rise of foundation models pre-trained on massive datasets, enabling zero-shot prediction via prompting and efficient fine-tuning, as seen in [38] and [24]. 
[6] proposes a multi-modal, promptable model with an instruction-based domain-specific language for generalist capabilities across text, images, audio, and video. Other work on models handling the multimodality of the data aims to create a graph with neural networks modeling the relationships between modality nodes [45, 26, 17, 32, 36]. One of the main issues with these existing models is that they are designed for performance metrics, not real-time inference. One work that tries to address this limitation is PHG-MAE [33], where they use a MAE-based multi-modal training algorithm that natively incorporates ensembles for performance boosts and consistency across frames. While their main model is also very heavy (with up to 50s per frame), they provide efficient and lightweight distilled models (150 ∼430k parameters), which enables real-time deployment with little loss in performance. As we switch to the Actions (3) and predictions side, we can observe that it is a much less explored and researched area. Usually this is enabled by the R&D on the models side, followed by a deployment procedure. Once 4 Pˆırvu Mihai-Cristian, Marius Leordeanu a model is deployed it is assumed frozen (most of the time) and it becomes more of an engineering problem to run inference and serve predictions reliably without breaking the existing system. This suggests that the neural network is usually used as a module of a larger system, with this hybrid being referred to as Software 2.0 [22], where standard "1.0" procedural code is mixed with neural network predictions to make intelligent actions. One of the main questions and trade-offs is related to where the inference computation happens: on device or on some external server. The first causes the device to have larger compute power, which can increase weight, decrease battery time or increase overall latency. The second solution adds a communication layer between the device and the processing node, which adds variance due to connection issues. Some argue that a distributed system is required to achieve end-to-end real-time performance [7] with specialized nodes doing specialized tasks, like object recognition. Others propose neural architectural changes to reduce the inference duration variance [28] caused by things like object proposals which can be dynamic based on the input image. Nonetheless, solving these latency-performance trade-offs enables intelligent and autonomous devices, like autonomous vehicles [10] or drones for use-cases like flood detection [20], power line failures [3] or search and rescue assistance [2]. In this paper, we study a simpler use-case: deploying distilled real-time models for semantic segmentation and depth estimation. We'll use a consumer GPU on a laptop (NVIDIA RTX 4050 Laptop) for local processing as well as a remote connection to a cloud server (NVIDIA RTX 2080 Ti). We study the case of real-time streaming from a phone camera and analyze the trade-offs of the two setups. Our main contributions presented in this paper are on the data-processing and automation side as well as model deployment on consumer grade GPUs. Further research on more advanced use-cases, such as autonomous control of a UAV or vehicle enabled by our real-time segmentation model must be studied. Multi-modal video data-pipelines for machine learning with minimal human supervision 5 2. Video Representations Extractor data-pipeline for multi-modal machine learning Figure 2. VRE showcase. We present six exported representations on top of the RGB frame. 
The first two are pre-trained experts (DPT [41] and Marigold [23]). Next, we derive two camera normals representations using a SVD-based algorithm [18]. Lastly, we derive safe-landing areas by thresholding the camera normals maps like in the newly introduced Dronescapes2 [33] dataset. In this section, we'll discuss our approach for the Data side of Figure 1. In order to streamline the training of multi-modal machine learning models, in the context of videos and scene understanding, we have developed a data-pipeline, named Video Representations Extractor (or VRE for short), which we have also open-sourced at https://github.com/meehai/ video-representations-extractor/. We discuss architectural decisions as well as how it can be used to create new datasets for other use cases than ours. We also discuss data-pipeline strategies, including multi-gpu batching and realtime streaming. Later, we present a case study based on the Dronescapes [32] dataset for aerial image understanding which was extended in a fully automated way using our data-pipeline: https://sites.google.com/view/ dronescapes-dataset. Finally, we discuss PHG-MAE [33], a MAE-based model which has leveraged our data-pipeline by creating an ensemble-based algorithm which operates at intermediate modalities level exported by us. This model was then distilled into a lightweight semantic segmentation model that we can run using the streaming capabilities of the data-pipeline, which we'll also explore in the Experiments section alongside other models, such as depth estimation [41]. 2.1. VRE main loop Below, in Algorithm 2.1, we can see the main VRE loop. At its core this is all VRE does, but getting this right is not a trivial task. A VRE process works on a single accelerator (CPU, GPU etc.), a single video (list of frames) and a single representation at a time. By default, it works 6 Pˆırvu Mihai-Cristian, Marius Leordeanu Algorithm 2.1 Video Representations Extractor main loop video ←[frame1, frame2, ..., frameN] representations ←[RGB, Mask2Former(params), DPT(params), ...] for repr in topo sort(representations) do batches ←make batches(video, batch size) ▷batch size=1 if streaming for batch of frames in batches do if not already computed(batch of frames) then out repr = repr.compute(batch of frames, [out deps]) img repr = repr.make images(out repr) ▷optional end if store on disk(batch, out repr, img repr) ▷only in batch mode end for store repr metadata(batches) ▷stats about this representation end for store run metadata(video) ▷stats about this video run on batches, where the batch size is defined globally, with the ability to overwrite this option at representation level. For example, we can process many RGB frames at once, however for learned representations (i.e. neural networks), the batch size must be capped by memory requirements, like the (V)RAM capacity of the accelerator. We use the generic term of 'accelerator', but this means CPU (for regular representations) or GPU (for neural representations) in most of the cases, with the ability to generalize to other custom accelerators, such as TPUs, NPUs etc. The tool does not schedule between multiple videos, but rather the list of frames of a single video are passed as inputs. While this may seem limiting, we present a later how multiple videos and multiple GPUs can be used to scale this simple approach. Each run on a specific video and list of frames will produce an independent run-level metadata file. 
This file contains a unique identifier of the run, information about when it started and how long each representation took, as well as information about how many frames were successfully processed or not. It will also produce a raw log file based on the standard output and error for debugging purposes. Moreover, each representation contains a secondary metadata file which offers more granular information at frame-level. It will offer details about how long each frame's computation took, whether it was stored only as binary (npz) or image (png/jpg) or both and so on. For each frame, we also have a unique identifier of the metadata run id, so we can backtrace when each frame was computed. This information is used to know whether this particular frame will be skipped or not on a new run: the already computed(batch) entry in the algorithm above. Note that this representation metadata file is the ground truth, not the actual data on the disk, as this file is faster to process than reading potentially hundreds of gigabytes Multi-modal video data-pipelines for machine learning with minimal human supervision 7 of exported files! Modifying the data on the disk may corrupt the metadata and some frames can be wrongly skipped or re-computed on new runs. The metadata file can be manually regenerated from the stored-on-disk data if needed. 2.2. Representation A representation is the basic block of a VRE run and consists of the definition and computation sides. The definition of a representation is based on a unique name, a set of parameters and a list of dependencies of other representations. This creates a graph of representations, which is topologically sorted to ensure that no cycles exist and that a proper representation scheduling can be obtained. The set of parameters allows us to compute the same representation multiple times with slightly different options. As an example, see Figure 2. Here, we produce two depth maps using two different representations (DPT [41] and Marigold [23]). Then, the camera normals representation (SVD-based) uses the very same algorithm and parameters, but with different depth map dependencies, producing two different outputs. Moving on, the safe-landing representation is similarly built on top of the camera normals representation. We can observe that the safe-landing area produced by the more lightweight DPT depth map misses some details, such as the safelanding areas on top of the buildings. The computation-side of a representation is simply the code required to run the representation and is defined as simply as out repr ←repr.compute(batch of frames, out deps). The code runs at frames level (batched) and it returns the map produced by the representation (i.e. HSV, edges, semantic segmentation, camera normals etc.). The code also receives all the outputs of all the dependencies, which are assumed to have been scheduled for computation before the current representation is run via the topological sort. Defining the representation. The representations (or experts) that we want to compute are instantiated based on a YAML-based configuration file. In this configuration file we must define the unique name of the representation, its parameters as well as its dependencies (if any). If the dependencies are not properly provided, then topo sort(representations) will fail with an appropriate error. 
The configuration file looks roughly like this (yaml syntax): globals : { batch size : 10} representations : { rgb : {type : color /rgb , deps : [ ] , params : {}} , hsv : {type : color /hsv , deps : [ rgb ] , params : { batch size : 5}} } 8 Pˆırvu Mihai-Cristian, Marius Leordeanu 2.3. Running strategies: Batching vs. streaming. One of the dichotomies of data-pipelines is managing the concepts of batching and streaming. The first refers to the principle of scheduling large amounts of data for computation in an offline manner. The main use-case here is reliability and efficiency at scale. On the other hand, there is streaming, where each piece of data must be processed in place with more tolerance to reliability, but less to latency. At its core, VRE supports both modes, see Figure 3. Figure 3. VRE processing strategies. Left: the standard batched strategy. We split the frames in batches and then each batch is passed through the algorithm of the representation, followed by a step where the results are stored on the disk. Right: the streaming strategy. In this mode the input is a live video stream (webcam, camera phone etc.) Each frame, or nearby ones (if needed), are processed sequentially by all representations. Batch mode. This is the default mode for which the tool was created. We use it to schedule entire videos, compute various representations on it (i.e. pretrained experts or procedural intermediate modalities), followed by storing them for later usage, such as training a new neural network using the exported data. Each expert is processed independently, based on the topological sorting, and the batch size is set such that we maximize the occupancy of the accelerator that does the work (i.e. GPUs). The exported data can be used as-is using the provided ML-ready data reader in python, though the exports are languageagnostic. This mode is aimed at long-running reliable runs and implements various mechanisms, such as retrying with a smaller batch-size in case of OOM errors, a re-entrant mechanism (skipping previously computed frames) and exhaustive diagnostics and log files of each run. Streaming mode. VRE also supports streaming mode, where the focus is less on reliability and more on fast inference. In this mode, we disable any sort of disk operations (i.e. results are not stored on disk) and optionally, we can disable more features, like creating images from raw predictions, depending on the application at hand. In Figure 4, we provide a basic diagram of how VRE integrates in a larger system, where frames can come from an external source Multi-modal video data-pipelines for machine learning with minimal human supervision 9 (i.e. video, webcam, phone camera etc.), can be processed on an external machine (i.e. cloud GPU) and can be used to control an external device (i.e. robot, drone etc.). Figure 4. VRE streaming architecture. We read frame by frame from the source (i.e. drone camera), process it on the VRE streaming client (i.e. cloud or local GPU), analyze the results and pass the actions to the target (i.e. drone controller). Notably, all these components can live on the same machine but they can also communicate through the network. We provide various integrations through standard Linux tools (i.e. pipes) which allows VRE to read and write raw frames to standard input and output (or sockets). For example, one can read from the standard input, while the frames come from a webcam and are piped through applications like ffmpeg [43]. 
Using the same mechanism, we support network streaming, by creating a TCP socket allowing us to run VRE on a cloud server with a powerful GPU while piping the frames in and out of the server via tools like netcat which acts like a (network) pipe. This means that any computer or node that can be accessed by a VRE client (i.e. public IP, ssh tunneling etc.) can be used for computation. The trade-off here is that the network latency can be too prohibitive for the application, especially if one must do real-time processing such as controlling an UAV based on the stream of predictions. It should be noted that ideally we'd use a UDP socket with optimized video encoding for live-streaming for better performance, however, for simplicity, we use a TCP server and raw RGB frames, meaning that there is room for performance optimizations. We provide more concrete examples in the experiments section, where we use a mobile camera for raw frames, and compare a laptop GPU vs. a cloud GPU for real-time and near real-time semantic segmentation and depth estimation inference. 10 Pˆırvu Mihai-Cristian, Marius Leordeanu 2.4. Multi-GPU batching strategies In batching mode, if multiple accelerators are available (i.e. nodes with ≥1 GPUs), VRE has support for a multi-gpu setup. This, of course, only applies to representations where a GPU is needed in the first place, such as pre-trained neural networks, also called experts in some contexts for semisupervised learning and distillation. The main requirement for such a representation, internally called a Learned Representations, is to properly implement a setup(device) method, where the GPU model is moved to the device. In Figure 5, we present two different multi-gpu strategies that can be applied by spawning multiple VRE processes on a single video. Figure 5. VRE multi-gpu batching strategies. Strategy 1: Slice the video in multiple independent chunks. Strategy 2: split the video's representations in sub-groups. Strategy 1 is the simplest and most effective one. The video is treated as multiple independent video chunks and assign each to one VRE process and to one accelerator (GPU). This strategy has the advantage that it is consistent (i.e. each GPU will finish at around the same time) and ensures good utilization of each GPU. Furthermore, this strategy also works on distributed environments, allowing us to schedule a video on multiple machines as well, given that we don't overlap the chunks. The user must then optimize the configuration (i.e. batch size, computation parameters etc.) such that each frame is computed in the optimal time across all representations. The main limitation with this strategy is that all the representations must be computed by the same VRE process. In some scenarios, we may wish that simple independent representations (i.e. RGB, HSV, edges) do not block an entire GPU, especially if they are not dependencies of other complex representations. This is where Strategy 2 comes into play: one video (or video chunk) can be further split into independent representation groups. For example we can have a representation group that computes a depth representation and a camera normals (which depends on depth) representation on one GPU, a secondary group that computes semantic segmentation on another GPU and a third group that Multi-modal video data-pipelines for machine learning with minimal human supervision 11 computes HSV and Canny Edges on just a CPU. These can run in parallel on three separate VRE processes, as they are independent from each other. 
As a recommendation, a user should first start with Strategy 1 and optimize a single configuration for all representations on a single frame, followed by chunking the video based on the number of available accelerators. For most use-cases, one can stop here. Strategy 2 can be then seen as fine-tuning, where the already-optimized configuration is split in multiple sub-configurations to separate GPU representations (i.e. neural networks) from CPU representations (i.e. HSV or Canny Edges). VRE natively supports this via the --frames A..B CLI flag, meaning that one can start 8 VRE processes (one on each GPU) on 8 subsets of the same video, with results stored in the same output directory without any clashes. Furthermore, we have a helper CLI tool, vre gpu parallel, which can be used to implement Strategy 1. CUDA VISIBLE DEVICES=0 ,1 ,2 ,3 v r e g p u p a r a l l e l \ VIDEO -o DIR --config path CONFIG --frames 0..100 Executing : CUDA VISIBLE DEVICES=0 vre VIDEO \ --frames 0 . . 2 5 -o DIR --config path CONFIG . . . Executing : CUDA VISIBLE DEVICES=3 vre VIDEO \ --frames 75..100 -o DIR --config path CONFIG This tool simply spawns N sub-processes after properly splitting the frames. If the frames flag is not provided, it will slice the whole video. Note that any potential race conditions on the same output directory are solved by using atomic write operations, as we explain in the next section. 2.5. Data format In batching mode, we want to store the data on the disk such that it can be later loaded for various use-cases, such as training a neural network on top of this export. In order to do this, VRE creates a disk-based data structure which can be loaded and analyzed efficiently. The structure on the disk looks like this: video .mp4 video vre export / -. logs /[ run metadata ID . json , log ID . txt , . . . ] -repr 1 / -representation metadata . json -npz / [ 1 . npz , . . . , N. npz ] -repr 2 / -representation metadata . json -npz / [ 1 . npz , . . . , N. npz ] 12 Pˆırvu Mihai-Cristian, Marius Leordeanu -jpg / [ 1 . jpg , . . . , N. jpg ] . . . The disk-based data structure leverages the rise of fast SSDs, enabling the loading of large batches of data into the RAM for efficient usage. We also implement speculative loading, where we load two or three batches ahead asynchronously, while the current batch is processed, for example doing neural network training or inference. For each VRE run, there is a run metadata, with a unique ID, containing information about the list of frames that were processed. Moreover, in each representation, there is a representation metadata file which contains information about each frame, including a reference to the run metadata ID. We've discussed earlier about the run metadata and the representation metadata, which are the basic mechanisms for introspection and scheduling. At the start of each VRE run, the tool loads all the metadata files to properly schedule only the frames that were not previously computed. This is called re-entrancy, meaning that the tool can continue previously stopped work with the idea that nothing is lost. An important aspect is the fact that each write to the representation metadata.json file is atomic, allowing multiple VRE processes to compute the same representation on different slices of a video, such as the case for the Strategy 1 in a multi-gpu setup (see Figure 2.4). In case a representation depends on a previously computed representation (i.e. 
camera normals using SVD requires a pre-computed depth map), then it is more efficient to load the representation from the disk, rather than to recompute it. In some cases it's not even possible, for example if a neural network representation depends on another neural network representation, as this would require both of them to be loaded at the same time, which can cause out of memory issues or be very slow due to CPU computation. To support this, each representation must implement two functions: (1) representation.memory to disk(out repr, path) (2) out repr ←representation.disk to memory(path). By default these two representations are identical, but in some cases (for example semantic segmentation), it is more efficient to store the data as argmaxed class indices (uint8), rather than raw predictions (float32), like logits or softmax-probabilities. These are, of course, more advanced nuances and proper action must be taken based on how the data is used. Sometimes it is imperative that the data is stored as-is, other times it's good enough to truncate it to float32, while others class indices are just as good! 2.6. VRE repository At the time of writing, the following algorithms and pre-trained experts are implemented and ready to use. • Color: RGB, HSV • Edges: Canny [9], DexiNed [37] Multi-modal video data-pipelines for machine learning with minimal human supervision 13 • Optical flow: RAFT [42], RIFE [21] • Depth estimation - DPT [41], Marigold [23] • Normal maps: SVD (from depth) [18] • "Soft" segmentation: FastSAM [46], Generalized Boundaries [27], Halftone • Semantic segmentation - Mask2Former [11], PHG-MAE-Distil [33] Upon representation instantiation, the weights of these representations (for learned representations only) are downloaded locally from a cloud storage. Implementing a new representation is as simple as implementing a shared interface with a few methods, such as compute(batch of frames, dependencies) and make image(frame, computed result). For learned representations (i.e. neural networks), one must also implement two more methods: load weights(path) and unload weights() which are used to load the networks and clear the memory during execution. Moreover, one can implement more fine levers, such as the previously mentioned disk to memory and memory to disk functions. 2.7. Case study: Dronescapes2 dataset. A fully-automated dataset built with VRE. In this section we present a case study: the Dronescapes2 dataset which was generated using VRE. This dataset as well as a step-by-step reproduction process using VRE are publicly available at https://sites.google.com/ view/dronescapes-dataset. Table 1 summarizes the size and modalities of the Dronescapes dataset followed by the extended variant: the Dronescapes2M+ dataset, generated using the VRE data-pipeline. Name Data Points I/O UAV scenes Description (GT labels) Modalities Dronescapes-Semisup1 [32] 12K (233) 5/3 7 Original train set Dronescapes-Semisup2 [32] 11K (207) 5/3 7 Original semi supervised set Dronescapes-Test [32] 5.6K* (116) 5/3 3 Original test set Dronescapes2-M+ 80K (440) 14/3 15 All new experts and intermediate modalities on 8 new videos plus the original ones Table 1. Dronescapes dataset variations and stats. Numbers in parentheses represent the semantic human annotated data. Data points indicates the number of RGB frames. The original dataset defines three output tasks: semantic segmentation, depth estimation, and camera normals estimation. 
Semantic segmentation maps are human-annotated, with label propagation [31] used to interpolate missing frames. Depth and camera normals were generated from raw videos and GPS data using a Structure-from-Motion (SfM) tool [16]. The Dronescapes extension, named Dronescapes2-M+, builds upon the initial 23K frames from Dronescapes-Semisup1 and Dronescapes-Semisup2 by adding 8 new video scenes sourced from the internet, yielding a total of 57K new frames totalling 80K frames. It generates experts and intermediate modalities with no human annotation using the data-pipeline and the new videos 14 Pˆırvu Mihai-Cristian, Marius Leordeanu only. We use both the VRE configuration (batched) as well as the distilled model (streaming) in the experiments section. The VRE configuration contains the following experts and intermediate modalities: • semantic segmentation (3): Mask2Former on three released checkpoints • depth estimation (1): Marigold • camera normals intermediate modality (1): SVD-based algorithm • binary semantic intermediate modalities (8): vegetation, sky-and-water, containing, transportation, buildings (all types and nearby only) and safelanding (geometric only and geometric + semantic) In total, there are four pre-trained experts (3 Mask2Former variants & 1 Marigold) plus nine new intermediate modalities. All the intermediate modalities are implemented as new procedural representations built on top of the existing representations. For example the 'safe-landing (+semantic)' binary segmentation mask is defined as: (v2 > 0.8) ∗((v1 + v3) < 1.2) ∗(depth <= 0.9) ∗safe class(semantic), where v1, v2, v3 are the 3 angles of the camera normals map and safe class is a true/false mapping on top of the underlying Mask2Former semantic segmentation map. The thresholds (representation parameters) can be updated based on experiments. 2.8. Case study: PHG-MAE. A model designed for intermediate modalities exported by VRE. Using the Dronescapes2 dataset, the work of PHG-MAE [33] has trained a multi-modal multi-task learning model, designed specifically for this kind of data with just 4.4M parameters. In Figure 6, we can see a snippet of how this model works. It yields competitive results on the Dronescapes-Test dataset against models such as Mask2Former, a 217M parameters model, almost 2 orders of magnitude larger. However, it has one big limitation: it must run the entire data-pipeline for 13 intermediate modalities, including 3 semantic segmentation neural network experts and one depth estimation expert, for each RGB frame. To solve this issue, the authors also provide a set of distilled lightweight neural networks (150k, 450k, 1.1M, 4.4M parameters) with little to no degradation in performance, enabling real-time semantic segmentation. Below, in Table 2, we present a comparison table of these models. In the experimental section, which follows, we will make use of these distilled variants of the PHG-MAE model to run the data-pipeline in streaming mode for real-time semantic segmentation. 3. Experiments In this section we'll go over a few experiments that will demonstrate the capabilities of the Video Representations Extractor (VRE) data-pipeline on real-world machine learning workloads. We'll first go over the batched case, Multi-modal video data-pipelines for machine learning with minimal human supervision 15 Figure 6. The data-pipeline and PHG-MAE model on real data. 
Left: The process of deriving modalities as pseudo-labels from pre-trained experts using RGB only, followed by deriving new modalities from combinations of experts. Right: integration of all the new modalities in the PHG-MAE semi-supervised training and inference pipeline, with each modality being either input, intermediate or output. Model Parameters Mean IoU ↑ PHG-MAE-NRand [33] 4.4M 55.06 ± 0.09 PHG-MAE-Distil [33] 4.4M 55.05 PHG-MAE-Distil [33] 430k 54.94 PHG-MAE-Distil [33] 1.1M 54.3 Mask2Former [11] 217M 53.97 PHG-MAE-Distil [33] 150k 53.32 PHG-MAE-1All [33] 4.4M 52.04 PHG-MAE-1Rand [33] 4.4M 51.83 ± 3.3 Table 2. Semantic Segmentation performance for the PHG-MAE model, trained on Dronescapes2, a dataset generated using VRE. The -Distil variants, which are trained on top of pseudo-labels generated by the -NRand model. as introduced in Section 2.3, providing three simple-to-complex experiments. The data-pipeline was initially created for these kinds of workloads: to streamline and standardize the efficient creation of new datasets based on raw videos alone. Then, we'll go over a more recent addition to the data-pipeline, the real-time streaming component. We provide two experiments with two models: semantic segmentation and depth estimation using pre-trained experts supported out of the box in the VRE repository. We start with a local streaming example, followed by offloading the computation side from a local machine to a remote GPU on an internet-accessible machine. The command we'll be using is as simple as: vre VIDEO --config path CONFIG -o DIR --frames A. . B --batch size B --skip computed 16 Pˆırvu Mihai-Cristian, Marius Leordeanu In the CONFIG file we'll define the representations we want to compute following the yaml syntax defined in Section 2.2. The frames are defined as intervals [A : B], allowing us to skip previously computed frames (if any) in the DIR. The batch size is also defined in the config file, and can be defined both globally and fine-tuned at representation level, but for simplicity, we show it in the command line. The output resolution is always the same as the video size. All the batched experiments will be evaluated based on the reported duration in the metadata files of each run. Furthermore, for the streaming experiments, we'll report the frames per second (FPS). All the experiments are run on a Intel Xeon Gold 5218 CPU and one to eight NVIDIA RTX 2080Ti GPUs. 3.1. Simple export: RGB and HSV only We'll start with a simple experiment computing only the RGB and HSV representations. The RGB one is simply copying the video frame, while the HSV is a basic color transformation, with no accelerator needed. We'll compare four video resolutions (240p, 540p, 720p, 1080p) for three exports: npz only, npz + jpg images and npz + jpg images + compression. We'll use the same video and process 100 frames. These experiments should give us an initial hint about how VRE handles large-scale batch processing. The results can be seen in Figure 7. Figure 7. Simple export experiment. We compare three output formats for two computed representations: RGB and HSV. We observe that the duration extends both with a more complex storage (i.e. npz only vs compressed npz), with the frame resolution as well as whether we export only a binary representation or both binary and image (jpg). Interestingly, the compressed export saves about 2.6x disk space (2.4GB vs 907MB on 1080p for the HSV export), while taking only 1.93 times more (236.2s vs 121.8s). 
This is very useful for large-scale video processing allowing us to make Multi-modal video data-pipelines for machine learning with minimal human supervision 17 a space-time trade-off. Notably, since these experiments are on CPU only, the batch size should more or less not matter. 3.2. Batched export: RGB, PHG-MAE-Distil and DPT In this experiment, we'll try to increase the batch size for two learned representations to observe the performance boost of this feature. The results can be seen in Figure 8. Figure 8. Batched export results (CPU vs. GPU) on a local machine with three batch sizes (1, 5, 20) and two models Depth DPT (left) and PHG-MAEDistil-450k (right). First, we observe that the GPU (CUDA) variant constantly outperforms the CPU one on each experiment, regardless of batch size. This is expected as machine learning models are optimized for GPU usage. For the DPT model we observe about a 5x improvement, while for the PHG-MAE-Distil model, we see a 2.5-3x improvement depending on batch size. Moreover, we observe a constant improvement as we increase the batch size on both CPU and GPU for all the video resolutions. For the DPT model we see about a 1.5x improvement between the B=1 and B=20. This holds even for the PHG-MAE-Distil model, where we see a 1.1-1.3x speed-up. While these improvements may not be huge, we should always aim at maximizing the usage of our accelerators as each image in batch-mode is independent from each other, allowing for parallel processing. The only reason we should use a lower batch size is due to memory constraints. 3.3. Complex batched export: Dronescapes2 config Our third and most expensive batched export experiment is obtained by re-using the Dronescapes2 VRE configuration file on a new video (N=100 frames) at the same resolution as the original work: 960x540. In Figure 9, we present a histogram of the average duration of each representation across 100 frames as well as a plot showcasing the scalability of the VRE tool, given more GPUs on a single machine using Strategy 1. 18 Pˆırvu Mihai-Cristian, Marius Leordeanu Figure 9. Results on running the data-pipeline on the Dronescapes2 config. Left: bar plot with the average duration of each representation per frame. Right: total duration for different number of GPUs. In the left side we provide the statistics of running the experiment on a single GPU. We observe that most of the time is spent on a single representation, namely the normals from SVD algorithm, taking an average of 6 seconds per frame to compute. This makes sense, as this algorithm is not easily parallelizable. On the other hand, even neural network representations, such as Mask2Former or Marigold take about 1 second on average for each frame. On the right side of the figure, we run the same configuration, but using Strategy 1 from Section 2.4 in a multi-gpu setup. We observe that the average time to compute drops almost linearly with the number of GPUs when using 2 or 4 GPUs, but then it plateaus. For 8 GPUs it takes about 264 seconds, a drop from 1521 seconds, while a perfect scaling would mean that the computation would take 190 seconds, thus reaching a 72% scaling efficiency. The most reasonable argument for this sub-linear scaling is the fact that other resources (such as I/O, RAM or CPU) are also bottlenecking the parallelism. In Figure 10 we provide a qualitative result after running the datapipeline with the Dronescapes2 configuration on a new video. We observe the 3-level nesting of the VRE process. 
In Figure 10 we provide a qualitative result after running the data-pipeline with the Dronescapes2 configuration on a new video. We observe the 3-level nesting of the VRE process. The experts (neural networks) are derived only from the RGB frame. Then, the first level of derived intermediate modalities consists of the camera normals from depth and the semantic segmentation mapping from the original classes of Mask2Former to the Dronescapes2 set of classes. Then, the final level of derived modalities is built on top of the first two levels of modalities, which are already available at that point due to topological sorting.

Figure 10. All the extracted experts and derived intermediate modalities in the data-pipeline. All are generated starting from the RGB image only.

3.4. Real-time streaming for machine learning models

In this experiment we will test the streaming capabilities of VRE, following the architecture presented in Figure 4. We'll start with the basic example of doing everything on the same machine: the source is a local video file (960x540 resolution as before), the GPU is on the same machine and the output is just a video player. In Figure 11, we measure the time spent processing in the VRE tool with no model as well as processing through various models: PHG-MAE-Distil (150k-4.4M params), Mask2Former, Depth DPT and Depth Marigold.

Figure 11. Streaming the frames of a video through various ML models. Processed on a local GPU.

We observe that the models have quite low variance; most of the frames take about the same amount of time regardless of the model. Notably, the PHG-MAE-Distil variants can be used for real-time segmentation, while the Depth DPT can be used for real-time depth estimation, which can enable various robotics applications, such as safe navigation through a natural environment. The other two models, while they output higher-quality results, especially Marigold, are a better fit for batched export, achieving less than 2 FPS.

3.5. Real-time machine-learning streaming with a handheld device and remote processing

In this experiment, we want to offload most of the work of the previous experiment, using a handheld device (phone camera) as frames source. Moreover, we want to also offload the processing unit to a cloud GPU, compared to using a local laptop GPU, to test the trade-offs induced by the network latency. The setup and the FPS results can be seen in Figure 12.

Figure 12. Streaming the frames of a phone camera through various ML models. Processed on a local GPU and on a remote GPU. Left: the FPS results. Right: the live-streaming setup.

On the right side we can see the streaming setup: we capture the camera feed from the mobile phone. Then, we relay it to the processing GPU (local or remote). The remote machine is the same as the one used in all the experiments before, while the local machine is the laptop in the image, with a laptop GPU (NVIDIA RTX 4050). Then, the processed images are displayed on the laptop's screen, which is the target. The local processing results are more or less the same as in the previous experiment, with a slightly lower FPS on average due to the processing being done on a laptop GPU. On the other hand, we observe that both models perform at about 2-3 FPS in the remote case, due to the added network latencies. As this is a live feed, we compute the FPS in the following manner: we store on the local machine a CSV file with timestamps based on when the displayed image has changed in pixels. On the remote machine, we only process the Nth frame. In order to do this, we have a thread that reads frames from the network, while the main processing thread (VRE) gets the latest available read frame. Otherwise, since the camera works at 30 FPS while the models process only at about 2 FPS, the queue of unprocessed frames would grow until the server ran out of memory.
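A minimal sketch of this frame-dropping reader is given below: a background thread overwrites a single shared slot, so the slow processing loop always picks up the most recent frame and older ones are simply discarded (the class and function names are ours, not VRE's API):

import threading

class LatestFrame:
    """Single-slot buffer: writing overwrites, reading returns the newest frame."""
    def __init__(self):
        self._lock = threading.Lock()
        self._frame = None

    def put(self, frame):          # called by the network reader thread (~30 FPS)
        with self._lock:
            self._frame = frame    # older unprocessed frames are dropped here

    def get(self):                 # called by the main VRE processing loop (~2 FPS)
        with self._lock:
            return self._frame

latest = LatestFrame()

def reader_loop(recv_frame):
    # recv_frame is a hypothetical callable returning one raw RGB frame from the TCP socket
    while True:
        latest.put(recv_frame())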
Furthermore, we only use TCP sockets with raw RGB images, not live-stream-specialized video encodings or UDP sockets, which further adds to the latency. Processing live feeds is surprisingly complicated. The main conclusion to be drawn here is that real-time processing is very hard to achieve over a network, thus local computation should be aimed for if possible.

4. Conclusions

We introduce a machine learning infrastructure data-pipeline aimed at streamlining the creation of multi-modal datasets for training deep neural networks. We present the architectural design and the batched vs. streaming duality, which the tool supports natively. For the batched case, we provide multi-gpu strategies, such as splitting a video in multiple slices or targeting different GPUs with representation groups. We open source the tool alongside a repository of already implemented representations. We then present a case study of how Dronescapes, an aerial image understanding dataset, was created using this tool. Then, we present another case study on how a multi-modal neural network model leverages this dataset to provide competitive results on semantic segmentation with very few parameters. Finally, we provide experiments for both the batched case, as well as a real-time and near-real-time semantic segmentation and depth estimation streaming pipeline using a handheld phone's camera as live feed.

As future work, our data-pipeline can be improved to support other streaming-native video protocols, built on top of UDP, such as RTP. Moreover, the tool works natively on a single node, allowing node-level parallelism, such as multi-gpu setups. However, this approach could be extended to support distributed systems as well, allowing for a more seamless vertical scaling where nodes can be created and deleted on demand. Finally, while we already support a list of existing models in the VRE repository, more pre-trained models could be implemented, such as object recognition, keypoint extraction or video action recognition.

Acknowledgements. This work is supported in part by projects "Romanian Hub for Artificial Intelligence - HRIA", Smart Growth, Digitization and Financial Instruments Program, 2021-2027 (MySMIS no. 334906) and "European Lighthouse of AI for Sustainability - ELIAS", Horizon Europe program (Grant No. 101120237).

REFERENCES

[1] Sanad Aburass and Maha Abu Rumman. Quantifying overfitting: introducing the overfitting index. In 2024 International Conference on Electrical, Computer and Energy Technologies (ICECET), pages 1-7. IEEE, 2024.
[2] Saeed Hamood Alsamhi, Alexey V Shvetsov, Santosh Kumar, Svetlana V Shvetsova, Mohammed A Alhartomi, Ammar Hawbani, Navin Singh Rajput, Sumit Srivastava, Abdu Saif, and Vincent Omollo Nyangaresi. Uav computing-assisted search and rescue mission framework for disaster and harsh environment mitigation. Drones, 6(7):154, 2022.
[3] Naeem Ayoub and Peter Schneider-Kamp. Real-time on-board deep learning fault detection for autonomous uav inspections. Electronics, 10(9):1091, 2021.
[4] Roman Bachmann, David Mizrahi, Andrei Atanov, and Amir Zamir. Multimae: Multi-modal multi-task masked autoencoders. In European Conference on Computer Vision, pages 348-367. Springer, 2022.
[5] Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. wav2vec 2.0: A framework for self-supervised learning of speech representations. Advances in Neural Information Processing Systems, 33:12449-12460, 2020.
[6] Jinze Bai, Rui Men, Hao Yang, Xuancheng Ren, Kai Dang, Yichang Zhang, Xiaohuan Zhou, Peng Wang, Sinan Tan, An Yang, et al. Ofasys: A multi-modal multi-task learning system for building generalist models. arXiv preprint, 2022.
[7] Pedro HE Becker, Jose Maria Arnau, and Antonio González. Demystifying power and performance bottlenecks in autonomous driving systems. In 2020 IEEE International Symposium on Workload Characterization (IISWC), pages 205-215. IEEE, 2020.
[8] Ondřej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint-Amand, et al. Findings of the 2014 workshop on statistical machine translation. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 12-58, 2014.
[9] John Canny. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, (6):679-698, 1986.
[10] Li Chen, Tutian Tang, Zhitian Cai, Yang Li, Penghao Wu, Hongyang Li, Jianping Shi, Junchi Yan, and Yu Qiao. Level 2 autonomous driving on a single device: Diving into the devils of openpilot. arXiv preprint, 2022.
[11] Bowen Cheng, Ishan Misra, Alexander G Schwing, Alexander Kirillov, and Rohit Girdhar. Masked-attention mask transformer for universal image segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1290-1299, 2022.
[12] Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3213-3223, 2016.
[13] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, 2019.
[14] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint, 2020.
[15] Peter D Dueben, Martin G Schultz, Matthew Chantry, David John Gagne, David Matthew Hall, and Amy McGovern. Challenges and benchmark datasets for machine learning in the atmospheric sciences: Definition, status, and outlook. Artificial Intelligence for the Earth Systems, 1(3):e210002, 2022.
[16] Carsten Griwodz, Simone Gasparini, Lilian Calvet, Pierre Gurdjos, Fabien Castan, Benoit Maujean, Gregoire De Lillo, and Yann Lanthony. AliceVision Meshroom: An open-source 3D reconstruction pipeline. In Proceedings of the 12th ACM Multimedia Systems Conference - MMSys '21. ACM Press, 2021.
[17] Emanuela Haller, Elena Burceanu, and Marius Leordeanu. Self-supervised learning in multi-task graphs through iterative consensus shift. arXiv preprint, 2021.
[18] Richard Hartley and Andrew Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2003.
[19] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16000-16009, 2022.
[20] Daniel Hernández, José M Cecilia, Juan-Carlos Cano, and Carlos T Calafate. Flood detection using real-time image segmentation from unmanned aerial vehicles on edge-computing platform. Remote Sensing, 14(1):223, 2022.
[21] Zhewei Huang, Tianyuan Zhang, Wen Heng, Boxin Shi, and Shuchang Zhou. Real-time intermediate flow estimation for video frame interpolation. In European Conference on Computer Vision, pages 624-642. Springer, 2022.
[22] Andrej Karpathy. Software 2.0. https://web.archive.org/web/20250323195948/https://karpathy.medium.com/software-2-0-a64152b37c35, 2025. [Online; accessed 04-April-2025].
[23] Bingxin Ke, Anton Obukhov, Shengyu Huang, Nando Metzger, Rodrigo Caye Daudt, and Konrad Schindler. Repurposing diffusion-based image generators for monocular depth estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9492-9502, 2024.
[24] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment anything. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4015-4026, 2023.
[25] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25, 2012.
[26] Marius Leordeanu, Mihai Cristian Pîrvu, Dragos Costea, Alina E Marcu, Emil Slusanschi, and Rahul Sukthankar. Semi-supervised learning for multi-task scene understanding by neural graph consensus. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 1882-1892, 2021.
[27] Marius Leordeanu, Rahul Sukthankar, and Cristian Sminchisescu. Generalized boundaries from multiple image interpretations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(7):1312-1324, 2014.
[28] Liangkai Liu, Zheng Dong, Yanzhi Wang, and Weisong Shi. Prophet: Realizing a predictable real-time perception pipeline for autonomous vehicles. In 2022 IEEE Real-Time Systems Symposium (RTSS), pages 305-317. IEEE, 2022.
[29] Jiasen Lu, Christopher Clark, Sangho Lee, Zichen Zhang, Savya Khosla, Ryan Marten, Derek Hoiem, and Aniruddha Kembhavi. Unified-io 2: Scaling autoregressive multimodal models with vision language audio and action. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 26439-26455, 2024.
[30] Alina Marcu. Quantifying the synthetic and real domain gap in aerial scene understanding. arXiv preprint, 2024.
[31] Alina Marcu, Vlad Licaret, Dragos Costea, and Marius Leordeanu. Semantics through time: Semi-supervised segmentation of aerial videos with iterative label propagation. In Proceedings of the Asian Conference on Computer Vision, 2020.
[32] Alina Marcu, Mihai Pirvu, Dragos Costea, Emanuela Haller, Emil Slusanschi, Ahmed Nabil Belbachir, Rahul Sukthankar, and Marius Leordeanu. Self-supervised hypergraphs for learning multiple world interpretations. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 983-992, 2023.
[33] Mihai-Cristian Pîrvu and Marius Leordeanu. Probabilistic hyper-graphs using multiple randomly masked autoencoders for semi-supervised multi-modal multi-task learning, 2025.
[34] David Mizrahi, Roman Bachmann, Oguzhan Kar, Teresa Yeo, Mingfei Gao, Afshin Dehghan, and Amir Zamir. 4m: Massively multimodal masked modeling. Advances in Neural Information Processing Systems, 36:58363-58408, 2023.
[35] Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. Librispeech: an asr corpus based on public domain audio books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5206-5210. IEEE, 2015.
[36] Mihai Pirvu, Alina Marcu, Maria Alexandra Dobrescu, Ahmed Nabil Belbachir, and Marius Leordeanu. Multi-task hypergraphs for semi-supervised learning using earth observations. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3404-3414, 2023.
[37] Xavier Soria Poma, Edgar Riba, and Angel Sappa. Dense extreme inception network: Towards a robust cnn model for edge detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 1923-1932, 2020.
[38] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748-8763. PMLR, 2021.
[39] Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. Improving language understanding by generative pre-training. 2018.
[40] Inioluwa Deborah Raji, Emily M Bender, Amandalynne Paullada, Emily Denton, and Alex Hanna. Ai and the everything in the whole wide world benchmark. arXiv preprint, 2021.
[41] René Ranftl, Alexey Bochkovskiy, and Vladlen Koltun. Vision transformers for dense prediction. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 12179-12188, 2021.
[42] Zachary Teed and Jia Deng. Raft: Recurrent all-pairs field transforms for optical flow. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part II 16, pages 402-419. Springer, 2020.
[43] Suramya Tomar. Converting video formats with ffmpeg. Linux Journal, 2006(146):10, 2006.
[44] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.
[45] Amir R Zamir, Alexander Sax, William Shen, Leonidas J Guibas, Jitendra Malik, and Silvio Savarese. Taskonomy: Disentangling task transfer learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3712-3722, 2018.
[46] Xu Zhao, Wenchao Ding, Yongqi An, Yinglong Du, Tao Yu, Min Li, Ming Tang, and Jinqiao Wang. Fast segment anything. arXiv preprint, 2023.
arXiv:2510.14858
Exploiting Non-Diffracting Beams for Resilient Near-Field Millimeter-Wave Communications: A Quantitative Roadmap

Yifeng Qin, Member, IEEE, Jing Chen, Zhi Hao Jiang, Member, IEEE, Zhining Chen, Fellow, IEEE, Yongming Huang, Fellow, IEEE, Lingyang Song, Fellow, IEEE

Abstract— Non-diffracting (ND) beams are often cited as a promising solution to mitigate blockage in millimeter-wave (mmWave) systems. However, a quantitative answer to the fundamental question—"Under what specific conditions do ND beams actually outperform conventional pencil beams?"—has remained elusive, especially in the emerging context of near-field communications. This paper provides the first systematic answer by mapping the performance advantage regimes of ND beams for blockage-resilient near-field links. We propose a unified holographic generator that synthesizes various structured beams (e.g., Bessel, Mathieu) under the physical constraints of a planar phased array, ensuring a fair comparison against a boresight baseline with identical EIRP and aperture. Through extensive, unbiased Monte Carlo simulations, we construct "advantage regime maps" that delineate the specific regions where ND beams offer a tangible link-level gain. Our key finding is that the advantage of ND beams is a powerful but conditional near-field phenomenon. While offering a positive average gain, its performance is highly variable, with a ~60-70% probability of outperforming the baseline in its optimal range. Crucially, this performance is strongly modulated by the obstacle's geometry, revealing a significant weakness against large blockers. These findings provide not just a practical roadmap for judiciously employing ND beams but also a clear motivation for future work in environment-aware, adaptively shaped structured beams.

Index Terms— Near-field communications, non-diffracting beams, Bessel beams, Mathieu beams, self-healing.

This work was supported in part by the Major Key Project of PCL under grant PCL2025AS215. Y. Qin, J. Chen, and L. Song are with the Peng Cheng Laboratory, Shenzhen, 518052, China (e-mails: ee06b147@gmail.com, chenj12@pcl.ac.cn, lingyang.song@pku.edu.cn). Z. H. Jiang is with the State Key Laboratory of Millimeter Waves, School of Information Science and Engineering, Southeast University, Nanjing 210096, China (e-mail: zhihao.jiang@seu.edu.cn). Zhi Ning Chen is with the Department of Electrical and Computer Engineering, National University of Singapore, Singapore 117583 (e-mail: eleczn@nus.edu.sg). Yongming Huang is with the National Mobile Communication Research Laboratory and the School of Information Science and Engineering, Southeast University, Nanjing 210096, China, and also with the Purple Mountain Laboratories, Nanjing 211111, China (e-mail: huangym@seu.edu.cn).

I. INTRODUCTION

THE deployment of large-scale antenna arrays in millimeter-wave (mmWave) and terahertz (THz) systems is extending the radiative near-field (RNF) region to operationally significant distances, heralding a new paradigm of near-field communications for B5G/6G networks [1]–[3]. While this evolution brings opportunities for high-resolution sensing and spatial multiplexing [4]–[8], it also presents a critical challenge: the high directivity required to close links makes them acutely susceptible to line-of-sight (LoS) blockage from common obstacles like furniture or human bodies, severely compromising reliability [9], [10]. To address this, reconfigurable intelligent surfaces (RIS) have emerged as a popular solution, capable of reflecting incident beams to circumvent obstacles [11]–[13]. However, RIS-assisted links introduce additional path loss and often require complex channel estimation and phase optimization, while facing challenges from hardware non-idealities, especially in the near-field and wideband contexts [14], [15]. An alternative, more fundamental physical-layer solution lies in structuring the radiated wavefield itself.
Non-diffracting (ND) beams, such as Bessel beams, are renowned for their ability to resist diffraction and "self-heal" after encountering an obstacle [16]. Pioneering work has demonstrated this potential in the THz band, sparking significant interest in their application for B5G/6G systems, with a growing body of research exploring their generation and performance [9], [10], [17]–[23]. Despite this conceptual appeal, a clear pathway to practical implementation has been missing. Early demonstrations often relied on bulky, "optical-style" setups (e.g., axicons) with limited tunability, making them unsuitable for integration with standard mmWave hardware [24], [25]. How to generate these beams efficiently using practical phased arrays and, more importantly, under what exact conditions they provide a tangible link-level advantage, remain critical open questions. When, and by how much, does an ND beam outperform a conventional pencil beam in a realistic RNF blockage scenario?

In this work, we provide the first systematic and quantitative answer to this question. We propose a holographic algorithm to synthesize quasi-ND beams (Bessel, Mathieu) constrained by the physics of a planar array, ensuring a fair comparison against a boresight baseline with identical EIRP and aperture. We then utilize the angular spectrum method to precisely evaluate wave propagation and interaction with obstacles, culminating in a systematic link-level analysis. Our central finding is that the benefit of ND beams is a powerful but highly conditional near-field phenomenon. Their advantage is largely confined to a specific operational window defined by the blockage geometry and a propagation distance shorter than a critical crossover point, zcrossover. Beyond this boundary, a conventional beam is superior. Our contributions are threefold:

1. First Quantitative Framework and Physically-Grounded Metrics: We are the first to systematically bridge the gap between the physical self-healing phenomenon and its link-level performance (e.g., SNR gain) specifically within the RNF regime. We introduce zpeak and zcrossover as novel, physically-grounded metrics to define the operational space where ND beams can be beneficial.

2. A Unified Generator for Aperture-Constrained Structured Beams: We propose a generalized holographic synthesis framework that treats Bessel, Mathieu, and other structured beams as instances of a single k-space model. Crucially, our generator is constrained by the physical limitations of a planar array, producing physically realizable phase patterns for fair and practical comparisons.
3. Derivation of Actionable "Advantage Maps" and Deployment Rules: Moving beyond single-case demonstrations, our work delivers the first "advantage regime maps" and break-even curves. These are not just scientific results but engineering tools that provide clear, actionable guidance on when—and with what level of confidence—to employ ND beams over conventional ones, revealing their probabilistic nature and strong dependence on obstacle geometry.

The remainder of this manuscript is organized as follows: Section II systematically introduces the unified framework for aperture-constrained ND beam generation and propagation. Section III details the experimental design, including parameter selection and evaluation metrics. Section IV presents the results, including the core advantage regime maps. Section V provides targeted comparisons against other beamforming strategies. Finally, Section VI concludes the paper with a summary and discussion of the implications of our work.

II. UNIFIED FRAMEWORK FOR APERTURE-CONSTRAINED ND BEAMS

To systematically evaluate different ND beams, a unified and physically-grounded generation framework is essential. This section details our approach, which starts from a generalized spectral target and culminates in a concrete phase pattern for a finite phased array.

A. System Setup and Field Representation

We establish a three-dimensional Cartesian coordinate system (x, y, z) where a planar phased array is situated on the aperture plane at z = 0. The electromagnetic wave propagates along the positive z-axis. The plane defined by the x and y axes is referred to as the transverse plane. Within this framework, we employ scalar diffraction theory to model the wave field. The electric field is represented by a complex scalar quantity, E(x, y, z), which encapsulates both the amplitude and phase of the wave. For the analysis of LoS-dominated mmWave links, polarization effects are momentarily disregarded, a common and reasonable simplification. The behavior of a monochromatic wave in free space is governed by its wavenumber, k0, defined as k0 = 2π/λ, where λ is the operational wavelength. Any complex field distribution in a transverse plane, E(x, y, z), can be decomposed into a superposition of an infinite number of plane waves. The propagation direction of each plane wave component is defined by its wave vector (kx, ky, kz) in the spatial frequency domain, also known as k-space.

Fig. 1. The proposed end-to-end framework for quantitatively evaluating aperture-constrained ND beams in RNF blockage scenarios. (a) A 3D visualization demonstrates the self-healing property of an ND beam synthesized by a planar phased array after being obstructed. The transverse intensity profiles are shown at several propagation distances. (b) The simulated intensity map (in dB) of the ND beam in the XZ-plane without any obstacles. (c) The corresponding intensity map for the ND beam propagation in the XZ-plane, showing the self-healing property. (d) The unified workflow of our methodology, detailing the four key stages: defining a generalized k-space target, synthesizing the phased array setup via a holographic algorithm, simulating wave propagation using the angular spectrum method, and conducting the final link-level analysis.
The components kx and ky are the transverse wavenumbers (or spatial frequencies), which are linked to the free-space wavenumber k0 through the dispersion relation k0² = kx² + ky² + kz². This relationship dictates that only components for which kx² + ky² ≤ k0² can propagate to the far field.

B. The Holographic Method: From Spectrum to Aperture

Unlike conventional beamforming that applies a simple phase gradient or an axicon lens, generating a complex structured beam from a finite-aperture array is a non-trivial inverse problem. We must find a phase-only distribution for the N × N array elements that, upon radiation, produces a far-field or k-space pattern closely matching a desired target. To solve this, we employ a holographic approach based on the iterative Gerchberg-Saxton (GS) algorithm [5]. This method is uniquely suited to the task, as it allows us to define the desired beam characteristics in the spatial frequency domain (k-space) and iteratively solve for the corresponding phase-only distribution at the physical array aperture. The algorithm continuously transforms the wave field between the aperture plane and k-space, applying the physical constraints of each domain in every iteration. This process, formally detailed in Algorithm 1, ensures that the resulting phase pattern is physically realizable under the hardware constraints—namely, a fixed-amplitude aperture defined by the array windowing function.

In Algorithm 1, the key variables are defined as follows:
• |Ftarget|: The desired magnitude distribution in k-space, representing the target beam's power spectrum. It is derived from the generalized spectral model described in the following subsection.
• Aaperture: The real-valued amplitude constraint imposed at the aperture plane (z = 0), which models the physical boundary and any windowing function (e.g., Super-Gaussian) applied to the N × N phased array.
• Eaperture: The complex electric field distribution on the aperture plane. The algorithm's goal is to find the phase of this field.
• Fcurrent: The complex field in k-space, which is the 2D Fourier transform of Eaperture at any given iteration.
• FFT, IFFT: The Fast Fourier Transform and its inverse, used to computationally switch between the aperture plane and k-space.
• angle(): An operator that extracts the phase angle from a complex value.

Algorithm 1 Holographic Synthesis of Aperture Phase
1:  procedure GENERATE_ND_PHASE(params)
2:    Initialize:
3:    Define k-space target magnitude |Ftarget| from (1).
4:    Define aperture amplitude constraint Aaperture (e.g., Super-Gaussian window).
5:    Initialize aperture field Eaperture with random phase.
6:    Fcurrent ← FFT(Eaperture).
7:    for i = 1 to Niterations do
8:      // Apply k-space constraint
9:      Fcurrent ← |Ftarget| · exp(j·angle(Fcurrent)).
10:     // Propagate to aperture plane
11:     Eaperture ← IFFT(Fcurrent).
12:     // Apply aperture constraint
13:     Eaperture ← Aaperture · exp(j·angle(Eaperture)).
14:     // Propagate back to k-space for next iteration
15:     Fcurrent ← FFT(Eaperture).
16:   end for
17:   return final phase pattern angle(Eaperture).
18: end procedure
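For concreteness, a minimal NumPy rendering of the GS loop in Algorithm 1 is sketched below; grid handling, fftshift conventions and the construction of |Ftarget| and Aaperture are simplified relative to the full simulator:

import numpy as np

def generate_nd_phase(F_target, A_aperture, n_iterations=50, seed=0):
    """Algorithm 1: phase-only holographic synthesis via Gerchberg-Saxton."""
    rng = np.random.default_rng(seed)
    # initialize the aperture field with the amplitude constraint and a random phase
    E_ap = A_aperture * np.exp(1j * rng.uniform(0.0, 2 * np.pi, A_aperture.shape))
    F_cur = np.fft.fft2(E_ap)
    for _ in range(n_iterations):
        F_cur = np.abs(F_target) * np.exp(1j * np.angle(F_cur))  # k-space constraint
        E_ap = np.fft.ifft2(F_cur)                               # back to aperture plane
        E_ap = A_aperture * np.exp(1j * np.angle(E_ap))          # aperture constraint
        F_cur = np.fft.fft2(E_ap)                                # back to k-space
    return np.angle(E_ap)  # the N x N phase pattern to load on the array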
Fig. 2. Validation of the unified k-space synthesis framework for various structured beams. Each panel compares the ideal mathematical target spectrum (top) with the high-fidelity spectrum synthesized by our holographic algorithm under the physical constraints of a 64 × 64 phased array (bottom), with detailed parameters provided in Table I. The examples demonstrate the framework's versatility: (a) A canonical Bessel beam with an isotropic Gaussian ring. (b) A Mathieu-like beam with an elliptically shaped spectrum, achieved by setting ka ≠ kb. (c) A Bessel beam with non-uniform angular power distribution, synthesized by setting ϵ2 > 0. (d) A spectrally notched Bessel beam, designed to proactively steer nulls in specific azimuthal directions. The close match between target and synthesized patterns validates the efficacy of our generation method.

C. A Generalized Spectral Model for ND Beams

The key to our unified framework lies in defining a generalized target spectrum in k-space. The fundamental property of most ND beams is that their constituent plane waves' wave vectors lie on a cone, which translates to an annular ring in k-space. We model this with a Gaussian-profiled constant-k ring with angular modulation:

|Ftarget(kx, ky)| ∝ exp(−(ρk − 1)² / (2σk²)) · B(ϕk),   (1)

where ρk = √(kx²/ka² + ky²/kb²) is the (elliptical) transverse wavenumber and ϕk is the azimuthal angle. This model is controlled by a set of intuitive hyperparameters that act as "tuning knobs" for the beam's properties. The relationship between the ideal target and the high-fidelity synthesized spectrum is shown in Fig. 2.

• Central Radii (ka, kb): These parameters, directly related to the beam's cone angle, govern the transverse scale of the beam's central lobe. Larger ka and kb result in a tighter central spot but a shorter ND range.
  o Bessel Beam: Setting ka = kb yields a perfectly isotropic ring (when B(ϕk) = 1), producing a circularly symmetric Bessel-like beam, as shown in Fig. 2(a).
  o Mathieu Beam: If ka ≠ kb, the spectral pattern becomes elliptical, and the beam is anisotropic along two orthonormal basis directions. This introduces greater flexibility in the beam pattern for overcoming various obstacles. An example with a dual-ring stabilized elliptical spectrum is shown in Fig. 2(b).
• Spectral Width (σk): This defines the "purity" or thickness of the ring. A smaller σk (a "purer" ring) leads to a longer ND range but slower self-healing and higher sidelobe levels. This is a critical trade-off knob.
• Angular Modulation (B(ϕk)): This term allows us to shape the beam's cross-section. By defining B(ϕk) = 1 + ϵ2·cos(2ϕk) + ϵ4·cos(4ϕk), we can seamlessly transition between beam types:
  o Non-zero ϵ2 and/or ϵ4 create an angularly modulated ring, resulting in a beam with a tailored angular intensity profile. This offers an extra degree of freedom to potentially "sculpt" the beam around non-circular obstacles. In Fig. 2(c), it can be seen that the power on the ring spectrum is successfully manipulated.

This unified model allows us to treat Bessel beams as a special case of Mathieu beams, enabling a coherent and fair comparison within a single parametric framework.
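A sketch of Eq. (1) on a discrete k-grid follows; the grid extent and the default hyperparameter values are illustrative, and ka, kb are expressed in units of the nominal ring radius so that ρk = 1 on the ring:

import numpy as np

def target_spectrum(n=256, k_max=2.0, ka=1.0, kb=1.0, sigma_k=0.1, eps2=0.0, eps4=0.0):
    """Generalized k-space target of Eq. (1): Gaussian ring times angular modulation."""
    k = np.linspace(-k_max, k_max, n)
    kx, ky = np.meshgrid(k, k)
    rho_k = np.sqrt(kx**2 / ka**2 + ky**2 / kb**2)   # (elliptical) transverse wavenumber
    phi_k = np.arctan2(ky, kx)                        # azimuthal angle in k-space
    B = 1 + eps2 * np.cos(2 * phi_k) + eps4 * np.cos(4 * phi_k)
    return np.exp(-(rho_k - 1.0) ** 2 / (2 * sigma_k**2)) * np.clip(B, 0.0, None)

F_bessel  = target_spectrum()                # isotropic ring, Bessel-like (Fig. 2(a))
F_mathieu = target_spectrum(ka=1.0, kb=0.7)  # elliptical ring, Mathieu-like (Fig. 2(b))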
D. Advanced Spectral Shaping for Enhanced Performance

The flexibility of our spectral model allows for advanced shaping techniques to further optimize performance for specific scenarios:

• Dual-Ring Spectrum: By introducing a secondary, lower-amplitude ring, we inject additional plane wave components at slightly different cone angles. As visualized in Fig. 2(c), this enhances the beam's reconstruction speed after an obstacle, effectively improving its self-healing capability at the cost of slightly higher sidelobes.
• Angular Notches: If the approximate azimuthal direction of a potential blocker is known a priori, we can introduce "notches" or gaps in the spectral ring at the corresponding angles. This proactively reduces the energy directed towards the obstacle, further improving the received power. Fig. 2(d) verifies the capability of the method.

E. Beam Propagation and Interaction with Obstacles

With the beam synthesized at the aperture, the subsequent components of our physical model are as follows:

• Propagation Model: The propagation of the wave field E(x, y, z) from the aperture is modeled using the angular spectrum method [9]. This technique implements the Rayleigh-Sommerfeld diffraction integral in the Fourier domain. The complex field at a distance is found by first computing the angular spectrum of the source field, F(kx, ky) = ℱ[E(x, y, 0)] (i.e., Eaperture), and then multiplying it by a propagation transfer function H(kx, ky; Δz), where

H(kx, ky; Δz) = exp(−jΔz·√(k0² − kx² − ky²)).   (2)

The propagation process can be expressed by the recurrence relation

E(x, y, z + Δz) = ℱ⁻¹{ℱ[E(x, y, z)] · H(kx, ky; Δz)}.   (3)

• Obstacle Model: Obstacles are modeled as thin, opaque screens placed at a distance zobs from the array. The field immediately after the obstacle, E(x, y, zobs⁺), is given by the product of the incident field E(x, y, zobs⁻) and a binary transmission mask M(x, y):

E(x, y, zobs⁺) = E(x, y, zobs⁻) · M(x, y),   (4)

where M = 0 inside the obstacle and M = 1 otherwise.

F. Computational Complexity and Implementation

The holographic generation process, detailed in Algorithm 1, is computationally efficient and well-suited for practical implementation. The algorithm's complexity is dominated by the two 2-D FFTs performed in each of the I iterations. For an N × N aperture simulated on an M × M grid (M ≥ N), the total computational cost is O(2·I·M²·log M). Crucially, this computation is a one-time, offline process for each desired beam configuration. For our 64 × 64 array and I = 50 iterations, the generation takes only a few tens of milliseconds on a standard CPU. Once the optimal phase map for a given beam is computed, it can be stored and reused indefinitely. In a practical system, this suggests the possibility of pre-calculating a library of ND beam configurations, forming a beam codebook, analogous to the DFT-based codebooks widely used in conventional beamforming. This allows for instantaneous switching between different ND beams without incurring any real-time computational overhead on the link.
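To close Section II, a compact sketch of the angular-spectrum propagator of Eqs. (2)-(3) and the thin-screen obstacle of Eq. (4) is given below, assuming a square grid with spatial sampling step dx; evanescent components (kx² + ky² > k0²) are simply suppressed:

import numpy as np

def propagate(E, dz, k0, dx):
    """One angular-spectrum step, Eqs. (2)-(3)."""
    n = E.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)      # angular spatial frequencies
    kx, ky = np.meshgrid(k, k)
    kz2 = k0**2 - kx**2 - ky**2
    kz = np.sqrt(np.maximum(kz2, 0.0))
    H = np.exp(-1j * dz * kz) * (kz2 > 0)        # transfer function, propagating waves only
    return np.fft.ifft2(np.fft.fft2(E) * H)

def apply_obstacle(E, mask):
    """Thin opaque screen, Eq. (4): mask = 0 inside the obstacle, 1 elsewhere."""
    return E * mask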
III. EXPERIMENTAL DESIGN

This section details the design of our numerical experiments, which are structured to systematically quantify the link-level advantage of ND beams. We first establish a fixed set of beam parameters and define the baseline for a fair comparison. We then introduce the core analytical metrics, zpeak and zcrossover, that structure our analysis. Finally, we describe the comprehensive Monte Carlo simulation setup used to generate our main results.

A. Beam Parameterization and Baseline Definition

As established in Section II, the performance of a synthesized ND beam is governed by a complex interplay of hyperparameters. Key trade-offs include:

1) Cone Angle vs. ND Range: A larger cone angle (ka and kb) enables the beam to circumvent larger obstacles but shortens its ND range.
2) Spectral Purity vs. Aperture Size: A "purer" beam with a narrower spectral ring (σk) achieves a longer ND range but requires a larger physical array aperture for effective synthesis.
3) Hardware Limitations: The practical performance is also constrained by hardware realities such as the number of antenna elements and the quantization level of the phase shifters.

To conduct a focused and meaningful comparative analysis, we first determined a set of robust and representative parameters. We selected a 64 × 64 element array as a realistic upper bound for next-generation mmWave systems, representing a significant but achievable scale. After extensive preliminary simulations to balance the aforementioned trade-offs, we fixed the beam synthesis and physical parameters as detailed in Table I. This crucial step ensures that our results are not tuned to a niche case but are representative of a well-formed ND beam, allowing us to isolate and study the impact of the external environment.

To ensure a fair comparison, we define a conventional boresight beam as our baseline. This beam is generated using the exact same 64 × 64 array and Super-Gaussian amplitude window as the ND beam, but with a uniform phase distribution (i.e., a flat wavefront). This configuration guarantees that both beams have the same physical aperture and transmitted power.

Table I: Fixed Simulation and Beam Synthesis Parameters
  Center frequency f0:       28 GHz
  Cone angle (θc):           7°
  Spectral width σk:         0.1
  Array size:                64 × 64
  Array window:              Super-Gaussian
  Phase quantization:        6 bits
  Amplitude quantization:    5 bits
  Element spacing:           0.49λ
  Dual ring:                 Y
* f0 is the selected center frequency, where λ = c/f0 and c is the speed of light. The wavenumber kc = k0·sin(θc) = ka = kb. Y means the dual-ring mode is always ON, ensuring stabilized beam generation.

The selected phased-array parameters in Table I are deliberately chosen to reflect practical hardware configurations. The half-wavelength element spacing is a standard design choice to avoid grating lobes. Furthermore, 6-bit phase and 5-bit amplitude quantization represent mature, widely available technology in modern mmWave phased arrays, ensuring that our synthesized beams are not just theoretical constructs but are readily implementable.

Fig. 3. (a) Boresight (REF) propagation and (b) Bessel ND propagation (XZ-plane max-intensity maps, dB). White dashed lines mark zpeak and zcrossover, which delimit the RNF evaluation window. (c) Unblocked on-axis intensity versus distance, verifying zpeak (green) and zcrossover (magenta). (d) Peak-intensity recovery ratio versus the normalized obstacle radius ρ = Rblock/(zobs·tan(θc)) for centered circular blockers placed at various depths zobs/zpeak. Recovery remains high for small ρ and degrades as ρ → 1, consistent with the recovery law zmin ≈ Rblock/tan(θc) = ρ·zobs. A mild overshoot above 100% at small ρ arises from constructive Fresnel edge diffraction.
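As a small illustration of the hardware constraints in Table I, the synthesized phase map can be rounded to the discrete states of 6-bit phase shifters before simulation (the amplitude window is quantized analogously to 5 bits); the helper below is our own sketch:

import numpy as np

def quantize_phase(phase, n_bits=6):
    """Round a continuous phase map to 2**n_bits equally spaced shifter states."""
    step = 2 * np.pi / (2 ** n_bits)          # 64 states over 2*pi for n_bits = 6
    return np.round(np.mod(phase, 2 * np.pi) / step) * step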
B. Defining the Near-Field Advantage Regimes: zpeak and zcrossover

To move from observing a physical effect to quantifying a communication advantage, a fair benchmark is paramount. We first analyze the unobstructed propagation of both the ND beam and the baseline boresight beam to establish a structured analytical space, as illustrated in Fig. 3(a)-(c). From this analysis, we introduce two physically-grounded metrics:

• Peak Intensity Distance (zpeak): This is the propagation distance at which the on-axis intensity of the unobstructed ND beam reaches its absolute maximum. It marks the end of the beam's formation zone and the beginning of its effective ND range.
• Crossover Distance (zcrossover): This is the distance after zpeak where the on-axis intensity of the ND beam decays to a level equal to that of the conventional boresight beam.

These two landmarks partition the propagation space into three distinct regions:

• Formation Zone (0 < z < zpeak): In this region, the ND beam, which focuses energy more immediately, typically dominates the traditional boresight beam on-axis.
• ND Advantage Zone (zpeak ≤ z < zcrossover): This is the core operational region where the ND beam's resistance to diffraction allows it to maintain a higher on-axis intensity than the boresight beam. We hypothesize that the self-healing advantage is primarily confined to this zone.
• Far-Field Zone (z ≥ zcrossover): Beyond the crossover point, the finite-aperture effects dominate, and the more directive boresight beam once again becomes superior.

Our entire experimental framework is built around these defined regions, allowing us to test our hypothesis and map the performance advantage in a structured manner. To maintain the clarity of our investigation, we focus the analysis on on-axis receiver positions. Oblique scenarios can be readily modeled by applying a linear phase gradient to the array aperture; while this would steer the entire beam structure, it would not fundamentally alter the relative performance conclusions within the defined advantage regimes.

C. Characterizing Self-Healing: A Centered-Block Analysis

"Self-healing" is the central premise behind using ND beams for robust RNF links: when a portion of the cross-section is blocked, the conical angular spectrum can re-interfere and refill the on-axis energy downstream. Whether this helps in practice depends on where the blocker sits and how large it is. Before moving to the scene-level Monte Carlo analysis (Sec. IV), we isolate the intrinsic behavior with a controlled centered-block study—i.e., the hardest case without lateral offsets. This gives a geometry-aware law we can later reuse to reason about complex scenes.

A Bessel ND beam is generated using our proposed method with the specifications shown in Table I. A perfectly opaque circular blocker of radius Rblock is placed on the optical axis at depth zobs. We sweep

zobs/zpeak ∈ {0.2, …, 0.8},   (5.a)
ρ ≜ Rblock/(zobs·tanθc) ∈ [0, 2],   (5.b)

where ρ is the normalized obstacle radius. For each (ρ, zobs), we record the peak-intensity recovery ratio

η(zobs, ρ) ≜ max_{z ∈ [zobs, zcrossover]} { Iaxis,blocked(z) / Iaxis,peak(z) × 100% },   (6)

i.e., the best on-axis recovery within the RNF evaluation window bounded by [zobs, zcrossover]. Fig. 3(d) plots η versus ρ for different zobs/zpeak.
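Given sampled on-axis intensity curves, both the landmarks of Sec. III.B and the recovery ratio of Eq. (6) reduce to a few lines of array arithmetic. A sketch follows, under the assumption that the unblocked ND curve serves as the reference in Eq. (6); the variable names are ours:

import numpy as np

def advantage_window(z, I_nd, I_ref):
    """Locate z_peak (on-axis maximum of the unobstructed ND beam) and
    z_crossover (first point after the peak where the boresight curve wins)."""
    i_peak = int(np.argmax(I_nd))
    after = np.nonzero(I_nd[i_peak:] <= I_ref[i_peak:])[0]
    i_cross = i_peak + int(after[0]) if after.size else len(z) - 1
    return z[i_peak], z[i_cross]

def recovery_ratio(z, I_blocked, I_unblocked, z_obs, z_crossover):
    """Eq. (6): best on-axis recovery within the window [z_obs, z_crossover]."""
    w = (z >= z_obs) & (z <= z_crossover)
    return float(np.max(I_blocked[w] / I_unblocked[w]) * 100.0)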
Self-healing requires a finite recovery distance after the blocker. For a conical ND beam, a first-order estimate is

zmin ≈ Rblock/tanθc = ρ·zobs.   (7)

Intuitively, the blocker removes an axial sector of the angular spectrum; re-population of the axis requires roughly the geometric shadow Rblock to be "walked out" at slope tanθc. For healing to be feasible within the ND advantage zone, we must have sufficient propagation distance before the advantage vanishes:

zobs + zmin < zcrossover ⇔ ρ < zcrossover/zobs − 1.   (8)

Equations (7)-(8) will also anchor our interpretation of the scene-level heatmaps in Sec. IV. Three features stand out and are consistent with (7)-(8):

1. Shallow blocks are benign: Large headroom keeps η ≈ 100% up to ρ ≲ 1.
2. Deeper blocks shrink the admissible ρ: Curves bend down earlier because the right-hand side of (8) decreases.
3. A mild >100% overshoot at small ρ: This is expected from constructive Fresnel edge diffraction adding to the conical spectrum; the cross-sectional power is still lower, so energy is conserved.

In short, ρ compactly indexes difficulty for centered blocks, tying geometry to performance through (7)-(8). This normalization decouples our conclusions from absolute distances and will directly carry over to the Monte Carlo analysis (Sec. IV).

D. Monte Carlo Simulation Setup

To systematically map the advantage regimes, we conduct a large-scale Monte Carlo simulation. The simulation is designed to evaluate the link performance under a wide variety of random but statistically representative blockage scenarios.

1) Scenario Generation

• Transmitter and beams: The antenna array and ND beam parameters are fixed across all trials as shown in Table I; the baseline is boresight. Propagation uses the scalar Fresnel (angular-spectrum) method and a thin opaque screen at z = zobs.
• Shape library: Seven scenarios with different obstacle shapes are considered: HumanSide (rect.), HumanTorso (rect.), ArmBar (bar), PillarSmall (circle), PillarLarge (circle), TableEdge (bar), ChairBack (bar). The abbreviation rect. means the obstacle is a relatively large vertical rectangle, and bar refers to a horizontal thin rectangle. The sizes are realistic values within a range; for example, the width of a HumanSide is subject to [0.2 m, 0.25 m], and that of a HumanTorso to [0.3 m, 0.5 m].
• Obstacle Orientation: To ensure a comprehensive evaluation, non-circular obstacles are assigned a random in-plane orientation, ϕ ~ [0, 2π). However, we make a physically-grounded exception for the HumanSide and HumanTorso scenarios. Since it is unrealistic to assume a random in-plane rotation for a person standing upright, the orientation for these two scenarios is kept fixed (i.e., vertical). Circular obstacles are inherently isotropic and require no rotation.
• Depth placement: To ensure that blockage events are comprehensively evaluated across the most critical near-field regions, we employ a stratified sampling strategy for the obstacle's depth, zobs, relative to the beam's peak intensity distance, zpeak. Instead of a simple uniform sampling across the entire range, we partition the valid placement zone,

zobs ∈ [0.2·zpeak, min(0.95·zcrossover, 1.5·zpeak)],   (9)

into three distinct strata: pre-peak, near-peak, and post-peak. An equal number of zobs samples is drawn from each stratum.
This approach guarantees a balanced and representative distribution of blockage scenarios, preventing over- or under-representation of events in any single region and thereby strengthening the statistical validity of our findings.

• Lateral Offset and Unbiased Difficulty Sampling: A critical aspect of our simulation design is the unbiased placement of obstacles. A naive, area-uniform sampling of the lateral offset (x0, y0) would disproportionately favor grazing-incidence events (where the obstacle barely touches the beam), masking the beam's true performance against more challenging, near-central blockages. To address this, we adopt a more physically meaningful sampling strategy that directly targets the normalized difficulty index, ρ. This index is defined as

ρ = Reff/(zobs·tanθc),   (10)

where Reff is the effective blockage radius that accounts for both the obstacle's intrinsic radius, Rblock, and the magnitude of its lateral offset, d. The relationship is given later in (13). Instead of sampling the offset d and then calculating a resulting ρ, our procedure reverses this logic to ensure a uniform exploration of difficulty (a code sketch of this loop is given after this list):

1. For each trial, after the obstacle's intrinsic size (defining Rblock) and depth are drawn, we directly sample a target value for ρ uniformly from a predefined range [ρmin, ρmax].
2. Using this sampled ρ and the known parameters, we deterministically back-calculate the required magnitude of the lateral offset, d, by rearranging the definitions.
3. A random planar direction is then assigned to this offset magnitude to generate the final offset vector (x0, y0).
4. A validity check is performed. If this placement causes the obstacle to extend beyond the simulation canvas boundaries, the entire sample is rejected, and the process (starting from step 1) is repeated.

This rejection-sampling methodology ensures that our collected data is uniformly distributed across the difficulty index ρ, providing an unbiased evaluation of the beam's performance across a full spectrum of scenarios, from easy (ρ ≪ 1) to hard (ρ ≈ 1.5).

• Radius coverage with limited scenarios: The vertical axis we later use is the geometric radius in wavelengths,

Rλ ≜ Rblock/λ ∈ [Rmin, Rmax],   (11)

where Rmin = 0, and Rmax is subject to the maximum value of the HumanTorso scenario. However, when the obstacle is sufficiently large to cover the entire array aperture, the discussion would be meaningless. Therefore, Rmax in the next section will be around 32λ, which is identical to the array's diameter. To make this range uniformly and continuously covered, independent of the scenario mix, we first define a target grid

𝒢Rλ = {5, 5 + ΔRλ, …}, ΔRλ = 0.5,   (12)

then, for each trial, draw Rλ* uniformly from 𝒢Rλ and set Rblock = λ·Rλ*. For rectangles/bars, Rblock denotes the equivalent minor half-width so that the same grid aligns with the heatmap binning. This guarantees even coverage of Rλ without needing many separate scenario types.

• Effective difficulty indices: From the realized (Rblock, d, zobs), we compute and denote

Reff = max{0, Rblock − d},   (13)

so plots can use either Reff/λ or ρ as the metric for observing the system's performance.

• Mask synthesis: With shape, Rblock, (x0, y0), and ϕ fixed, we generate the binary mask M(x, y) at zobs and apply it as a thin screen.
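The ρ-targeted rejection sampling described above can be sketched as follows; the canvas check is simplified to a bounding-box test, and the inversion of Eqs. (10) and (13) clamps the offset at zero when the drawn ρ exceeds what a centered obstacle can realize:

import numpy as np

def sample_placement(R_block, z_obs, tan_theta_c, canvas_half,
                     rho_min=0.0, rho_max=1.5, rng=None):
    """Steps 1-4: draw rho uniformly, back-calculate the lateral offset, reject if off-canvas."""
    rng = rng or np.random.default_rng()
    while True:
        rho = rng.uniform(rho_min, rho_max)        # 1. target difficulty index
        R_eff = rho * z_obs * tan_theta_c          #    invert Eq. (10)
        d = max(0.0, R_block - R_eff)              # 2. offset magnitude via Eq. (13), clamped
        phi = rng.uniform(0.0, 2 * np.pi)          # 3. random planar direction
        x0, y0 = d * np.cos(phi), d * np.sin(phi)
        # 4. bounding-box validity check against the simulation canvas
        if abs(x0) + R_block <= canvas_half and abs(y0) + R_block <= canvas_half:
            return x0, y0, rho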
2) Link Evaluation

• Evaluation depth and normalization: To ensure an unbiased assessment of link performance across the entire ND advantage zone, our evaluation is structured around the normalized distance metric t, defined as

t ≜ (zeval − zpeak)/(zcrossover − zpeak),   (14)

where zeval is the receiver's propagation distance. This normalization maps the entire advantage zone, [zpeak, zcrossover], to the unit interval t ∈ [0, 1]. For each randomized blockage scenario (defined by an obstacle and its placement zobs), we do not sample zeval directly. Instead, we evaluate the link performance at a series of points corresponding to a set of uniformly spaced values of t across the [0, 1] interval. This approach guarantees that every segment of the normalized advantage zone is assessed with equal weight, regardless of the absolute physical length of the zone or the specific location of the obstacle. This method is crucial for eliminating the sampling bias that would arise from sampling zeval in a physical space whose bounds depend on zobs, and it aligns directly with the presentation of our results in subsequent sections.

• Field propagation: For each beam (ND, boresight), we propagate to zobs, multiply by M, and continue to zeval using the same Fresnel operator as in Sec. II.

3) Receiver Model and Metrics

• UE and combiner: The UE is a 2 × 2 UPA with 0.49λ pitch. We sample the complex field on the four elements and apply phase-conjugate combining:

y = Σ_{m=1}^{4} E(xm, ym, zeval) · e^{−j·arg E(xm, ym, zeval)}.   (15)

• SNR and outputs: SNR ∝ |y|²/N0 with the same N0 for both beams, so absolute calibration cancels in the difference. We compute the per-trial

ΔSNR = SNR_ND − SNR_REF (dB).   (16)

4) Lightweight Analytical Bounds and Scaling Laws

To complement our numerical findings and provide deeper physical insight, we introduce a set of lightweight analytical models that predict the key performance boundaries. These scaling laws connect the observable phenomena in Fig. 3 and Fig. 4 to the core beam and array parameters.

• Minimum Healing Distance: For a conical ND beam with cone angle θc, the geometric estimate of the minimum distance required to self-heal after an obstacle is

zmin ≈ Reff/tanθc.   (17)

Note that this equation generalizes Eq. (7); for a centrally-located obstacle where the lateral offset is zero, Reff reduces to Rblock. This simple law accurately predicts that the recovery challenge scales with the effective blockage size. For healing to be feasible within our defined advantage zone, we must have zobs + zmin ≤ zcrossover. This explains the performance degradation observed in Fig. 3(d) as zobs increases, leaving less "room" for the beam to recover.

• Peak-Distance Scaling: The location of the peak on-axis intensity, zpeak, is primarily determined by the interference of waves originating from the edges of the finite array aperture. It scales semi-empirically with the array's half-width, a:

zpeak ≈ α·a/tanθc,   (18)

where α is a coefficient of order unity that depends weakly on the aperture window function. This relation confirms that a larger array extends the near-field focal region.

• Crossover-Distance Scaling: The advantage of the ND beam collapses as the Fresnel number, F = a²/(λz), decreases. The crossover distance can therefore be approximated as a fraction of the Fraunhofer distance:

zcrossover ≈ β·2a²/λ,   (19)

where our simulations show β to be smaller than 0.5 for the window functions used.

Together, (18) and (19) provide a closed-form estimate of the ND advantage zone, [zpeak, zcrossover], based on fundamental array parameters, anchoring our entire experimental framework in established wave physics; a numerical sketch follows below.
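Plugging the Table I parameters into Eqs. (17)-(19) gives back-of-envelope estimates of the advantage zone; the α and β values below are illustrative assumptions within the ranges discussed in the text:

import numpy as np

c0 = 3e8
f0 = 28e9
lam = c0 / f0                        # ~10.7 mm at 28 GHz (Table I)
a = 32 * 0.49 * lam                  # half-width of the 64 x 64 array, 0.49-lambda pitch
theta_c = np.deg2rad(7.0)            # cone angle (Table I)

alpha, beta = 1.0, 0.4               # assumed, window-dependent coefficients
z_peak_est = alpha * a / np.tan(theta_c)   # Eq. (18)
z_cross_est = beta * 2 * a**2 / lam        # Eq. (19)

def z_min(R_eff):
    return R_eff / np.tan(theta_c)         # Eq. (17): minimum healing distance

print(f"z_peak ~ {z_peak_est:.2f} m, z_crossover ~ {z_cross_est:.2f} m")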
These coefficients, α and β, are semi-empirical factors whose precise values depend primarily on the aperture window function. The provided ranges reflect typical values observed across common windowing functions (e.g., Uniform, Super-Gaussian, Hann). A sharper window function tends to yield slightly different interference patterns and decay rates compared to a smoother one, thus modulating the exact locations of zpeak and zcrossover. These scaling laws are intended to provide physical insight into how the advantage zone scales with fundamental array parameters, rather than to serve as exact predictive formulas.

Finally, the total number of simulation trials is 21 (observation locations) × 7 (scenarios) × 500 (trials per scenario) = 73,500. This number is sufficient for drawing indicative conclusions.

IV. MONTE CARLO SIMULATION RESULTS

This section presents the main findings of our large-scale Monte Carlo simulation. We distill the results from over 70,000 randomized trials into a series of "advantage regime maps" and statistical distributions. These results provide a quantitative and actionable answer to our central research question: under what conditions do ND beams offer a tangible link-level advantage?

Fig. 4. The macro advantage map summarizing the mean SNR gain (ΔSNR) of the ND beam over the baseline. The map is plotted in the space of the normalized receiver distance (t) and the normalized effective obstacle radius (Reff/λ). The solid and dashed contours represent the 0 dB (break-even) and +3 dB gain thresholds, respectively.

A. The Macro Advantage Map: Delineating the Advantage Regimes

The primary result of our study is summarized in Fig. 4, which presents the macro-averaged SNR gain (ΔSNR) of the ND beam over the boresight baseline. This map serves as a comprehensive engineering guide to the ND beam's performance across a wide statistical ensemble of blockage scenarios.

1) Key Observations: The advantage map reveals two distinct regions of interest:

• Primary Advantage Zone: A robust region of significant SNR gain (warm colors) exists for small-to-moderate effective blockage radii (Reff/λ ≲ 25) and within the first 80% of the normalized advantage zone (t ≲ 0.8). This zone, containing a substantial island where the gain exceeds +3 dB, directly corresponds to the intrinsic self-healing capabilities of the ND beam against centrally-located or moderately offset obstacles, as characterized in Sec. III.C. As expected, the advantage systematically collapses as the receiver approaches the crossover distance (t → 1), empirically confirming that the utility of ND beams is fundamentally a near-field phenomenon.
• Secondary "Grazing" Advantage Zone: Interestingly, a secondary region of positive, albeit smaller, SNR gain emerges for very large effective radii (Reff/λ > 25). This counter-intuitive result stems from scenarios involving large obstacles with significant lateral offsets, creating a "knife-edge" diffraction event.
IV. MONTE CARLO SIMULATION RESULTS

This section presents the main findings of our large-scale Monte Carlo simulation. We distill the results from over 70,000 randomized trials into a series of "advantage regime maps" and statistical distributions. These results provide a quantitative and actionable answer to our central research question: under what conditions do ND beams offer a tangible link-level advantage?

Fig. 4. The macro advantage map summarizing the mean SNR gain (ΔSNR) of the ND beam over the baseline. The map is plotted in the space of the normalized receiver distance (t) and the normalized effective obstacle radius (R_eff/λ). The solid and dashed contours represent the 0 dB (break-even) and +3 dB gain thresholds, respectively.

A. The Macro Advantage Map: Delineating the Advantage Regimes

The primary result of our study is summarized in Fig. 4, which presents the macro-averaged SNR gain (ΔSNR) of the ND beam over the boresight baseline. This map serves as a comprehensive engineering guide to the ND beam's performance across a wide statistical ensemble of blockage scenarios.

1) Key Observations: The advantage map reveals two distinct regions of interest:

• Primary Advantage Zone: A robust region of significant SNR gain (warm colors) exists for small-to-moderate effective blockage radii (R_eff/λ ≲ 25) and within the first 80% of the normalized advantage zone (t ≲ 0.8). This zone, containing a substantial island where the gain exceeds +3 dB, directly corresponds to the intrinsic self-healing capabilities of the ND beam against centrally located or moderately offset obstacles, as characterized in Sec. III.C. As expected, the advantage systematically collapses as the receiver approaches the crossover distance (t → 1), empirically confirming that the utility of ND beams is fundamentally a near-field phenomenon.

• Secondary "Grazing" Advantage Zone: Interestingly, a secondary region of positive, albeit smaller, SNR gain emerges for very large effective radii (R_eff/λ > 25). This counter-intuitive result stems from scenarios involving large obstacles with significant lateral offsets, creating a "knife-edge" diffraction event. In such cases, the highly concentrated energy of the boresight beam's main lobe is severely scattered by the obstacle's edge. In contrast, the ND beam, whose on-axis energy is sustained by a conical interference pattern, is more resilient; even if its outer rings are blocked, the remaining unobstructed components of the conical wave can still effectively reconstruct the on-axis field, thus preserving a higher SNR.

2) Practical Takeaway: The map provides a clear engineering guideline. The ND beam is not only superior against small-to-moderate central blockages but also exhibits enhanced robustness against large, grazing-incidence obstacles. For on-axis links, the ND beam is the recommended choice for receivers located between z_peak and approximately 0.8·z_crossover when facing effective obstacles smaller than about 25λ, and it remains a viable option even for larger tangential blockages.

B. Unpacking Performance Variance: Self-Healing vs. Poisson's Spot

While the macro map shows the average trend, Fig. 5 delves into the statistical distribution of the performance gain as a function of the normalized receiver distance, t.

Fig. 5. Statistical performance trends as a function of the normalized receiver distance, t, aggregated over all Monte Carlo trials. The left axis and blue curve show the mean ΔSNR with interquartile range error bars. The right axis and orange dashed curve show the advantage probability, P{ΔSNR > 0 dB}.

1) Trend Analysis: The plot reveals a more nuanced story than our initial hypothesis.

• Mean SNR Gain (Blue Curve): The average advantage of the ND beam is modest, starting at approximately +2.5 dB near t = 0 and peaking at only about +5 dB around t = 0.3. It then steadily declines, becoming a net disadvantage (ΔSNR < 0) for t > 0.8. This indicates that, on average, the performance gain is not as dramatic as idealized scenarios might suggest.

• Advantage Probability (Orange Curve): The probability that the ND beam outperforms the baseline starts at a moderate ~63%, peaks at ~70% around t = 0.3, and then drops significantly, falling below the 50% break-even point for t > 0.85. This confirms that the ND beam is advantageous in the majority of cases only within the early part of the advantage zone.

• Large Performance Variance: A key finding is the extremely large performance variance, shown by the wide interquartile range (IQR) error bars; its physical origin is analyzed next.

2) Physical Interpretation of Variance: This large variance is not mere statistical noise; it is the macroscopic signature of a competition between two distinct physical phenomena, contingent on the specific obstacle geometry in each trial:

• ND Beam Self-Healing: In most scenarios, particularly those with asymmetric or non-central blockages, the ND beam's self-healing dominates, resulting in a large, positive ΔSNR.

• Poisson-Arago Spot Effect: In a subset of trials where the obstacle is highly symmetric (circular) and centrally located, the boresight beam benefits from an unexpected on-axis reconstruction. The coherent diffraction from the circular edge of the obstacle interferes constructively at the central axis, forming a bright spot known as the Poisson-Arago spot.
In these specific cases, the boresight beam can "self-heal" surprisingly well, sometimes even outperforming the ND beam and leading to a negative ΔSNR. The large IQR in Fig. 5 is therefore a direct statistical manifestation of this underlying physical competition. Our unbiased, ρ-uniform sampling strategy allows both phenomena to be fairly represented in the aggregate results.

3) Practical Takeaway: The statistical trends tell a truthful and nuanced story: the ND beam is not a silver bullet. While it offers a positive average gain within the most effective region of the advantage zone (t < 0.8), its performance is highly variable. Its deployment is a probabilistic bet: the probability of outperforming the baseline peaks at approximately 70% (near t = 0.3) and remains above 60% for a significant portion of the advantage zone (t ∈ [0, 0.6]). This underscores the need for scenario-specific analysis, as detailed next.

C. Performance Across Different Blockage Geometries

To understand the impact of obstacle shape, Fig. 6 disaggregates the results and presents the advantage probability for each of the seven realistic blockage scenarios, sorted by performance.

1) Scenario-Specific Observations: A clear performance gradient emerges, directly correlated with the obstacle's geometry and its orientation relative to the beam's conical wave structure:

• High Resilience to Horizontal Obstacles: The ND beam achieves a very high advantage probability (> 90%) against the ChairBack scenario, which is dominated by horizontal structures. This is because the conical waves that constitute the ND beam can easily flow "over and under" these thin, bar-like obstacles to reconstruct the on-axis field.

• Moderate Resilience to Isotropic & Mixed Obstacles: Performance against the TableEdge and ArmBar scenarios is still strong (~90% and ~75%, respectively). Circularly symmetric pillars (PillarSmall, PillarLarge) present a greater challenge (~45%), as they obstruct the conical wave components from all azimuthal directions equally.

• Weakness Against Vertical Obstacles: A dramatic performance drop is observed for vertically oriented rectangular obstacles, HumanTorso and HumanSide, where the advantage probability is only ~40%. These shapes are most effective at blocking the conical wave components from the sides. This anisotropy in performance reveals a key limitation of standard, circularly symmetric Bessel-like beams.

2) Practical Takeaway: The results in Fig. 6 provide a crucial, honest assessment of the ND beam's capabilities, reinforcing that it is not a universally robust solution but a specialized tool. Its effectiveness is strongly modulated by the obstacle's geometry. It is exceptionally resilient to blockages that are elongated in one dimension (e.g., horizontal bars). Conversely, it exhibits a clear weakness against isotropic and, most notably, vertically oriented obstacles like a human torso, where its performance is worse than a coin toss. These findings delineate the boundary conditions where the benefits of standard Bessel-like beams diminish, strongly suggesting an opportunity for adaptively shaped beams (e.g., Mathieu beams) in future work.

V. TARGETED COMPARISONS

Our main findings in Section IV have established a clear, quantitative roadmap for the utility of a standard Bessel-like ND beam. However, the flexibility of our unified generation framework invites further investigation into two critical questions: 1) Can we adapt the beam shape to improve performance in the most challenging scenarios?
2) How does the ND beam's resilience compare to a beam optimally focused on the user? This section addresses these questions through two targeted case studies.

A. Case Study: Adaptive Spectral Shaping for Enhanced Robustness

Our analysis in Sec. IV.C revealed a key limitation of the standard, circularly symmetric Bessel-like beam: its performance degrades significantly against vertically oriented obstacles such as the HumanSide scenario. This finding strongly motivates an investigation into adaptively shaped beams. To validate this opportunity, we conducted a follow-up Monte Carlo simulation, replacing the standard Bessel beam with an anisotropically shaped Mathieu-like beam.

Fig. 6. Advantage probability, P{ΔSNR > 0 dB}, across seven distinct, realistic blockage scenarios, aggregated for all trials where t < 0.95. The error bars represent the 95% Wilson confidence interval, and scenarios are sorted by performance.

1) Anisotropic Beam Configuration and Rationale: Based on the insights from Sec. IV.C, we hypothesized that by redistributing the energy on the k-space ring (concentrating it on the horizontal axis while reducing it on the vertical axis), we could improve performance against vertical blockers. This was achieved by setting the angular modulation parameters in Eq. (1) to ε2 = 0.4 and ε4 = 0, while configuring the central radii of the elliptical ring to be wider in the horizontal direction (10°) than the vertical (6°). This creates a Mathieu-like beam that preferentially routes energy around the sides of a vertical blocker, as shown in Fig. 2(d); a sketch of this anisotropic parameterization follows below. All other simulation parameters were kept identical to ensure a fair comparison. The non-diffracting range of this beam was verified to be nearly identical (z_crossover = 147λ vs. 149.2λ), confirming that the adaptation did not compromise the fundamental ND properties.
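Since the spectral definition of Eq. (1) is not reproduced in this section, the following is only an illustrative parameterization of such an anisotropic, Mathieu-like k-space ring: an elliptical cone-angle profile (10° horizontal, 6° vertical) with an angular amplitude weight 1 + ε2·cos(2φ) + ε4·cos(4φ). The functional form and all names are our assumptions, not the paper's exact synthesis equations.

```python
import numpy as np

def elliptical_ring_spectrum(phi, eps2=0.4, eps4=0.0,
                             theta_h_deg=10.0, theta_v_deg=6.0):
    """Sketch of an anisotropic (Mathieu-like) k-space ring.

    phi: azimuthal angle on the spectral ring (rad).
    Returns the cone angle theta_c(phi) of the elliptical ring and an
    angular amplitude weight that concentrates energy on the horizontal axis.
    Illustrative stand-in for the paper's Eq. (1), not its exact form.
    """
    th_h, th_v = np.deg2rad(theta_h_deg), np.deg2rad(theta_v_deg)
    # Elliptical cone-angle profile: theta_c(0) = th_h, theta_c(pi/2) = th_v.
    theta_c = 1.0 / np.sqrt((np.cos(phi) / th_h) ** 2 + (np.sin(phi) / th_v) ** 2)
    # Angular modulation: eps2 boosts phi = 0 and pi (horizontal directions).
    weight = 1.0 + eps2 * np.cos(2 * phi) + eps4 * np.cos(4 * phi)
    return theta_c, weight
```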
2) Performance Gains and Analysis: The results of this case study, presented in Fig. 7, are striking and confirm our hypothesis. The anisotropically shaped beam demonstrates a near-universal performance improvement over the standard Bessel-like beam.

Fig. 7. (a) Statistical performance trends as a function of the normalized receiver distance, t, aggregated over all Monte Carlo trials under the clipped Mathieu beam setup. (b) Advantage probability across seven scenarios under the clipped Mathieu beam setup.

• Overall Statistical Enhancement (Fig. 7(a) vs. Fig. 5): The Mathieu beam delivers a superior statistical performance. The advantage probability (orange curve) is lifted across the entire advantage zone, establishing a new, higher "floor" of approximately 70% and peaking above 75%. This indicates that the strategic reallocation of energy makes the beam more robust in the statistical average of all randomized scenarios. The mean SNR gain (blue curve) is also consistently higher, particularly at the beginning and end of the advantage zone.

• Targeted Improvement for Worst-Case Scenarios (Fig. 7(b) vs. Fig. 6): The most compelling evidence is seen in the per-scenario breakdown. The performance gradient observed previously is significantly flattened, indicating enhanced robustness. While the already-high performance against horizontal obstacles (ChairBack, TableEdge) is maintained, the advantage probability for the most challenging scenarios is dramatically improved. Notably, the win rate for HumanSide rises from ~41% to ~56%, and for HumanTorso it increases from ~40% to over 42% (the HumanTorso obstacle is simply too wide for the beam to fully route energy around). Even for the isotropic Pillar scenarios, the win rate increases significantly, from ~45% to ~67%.

This demonstrates that concentrating energy in the horizontal plane is not only effective for vertical obstacles but also provides a tangible benefit for a wider range of blocker geometries without compromising performance against horizontal ones. This case study powerfully demonstrates that even a simple, non-adaptive spectral shaping can significantly enhance the robustness of ND beams against challenging, real-world obstacles. It validates our unified framework not just as an analytical tool, but as a design tool for creating next-generation, environment-aware structured beams.

B. Case Study: Resilience vs. Optimality: ND Beam vs. Near-Field Focusing

While the boresight beam serves as a standard far-field baseline, in the RNF a beam optimally focused on the user's precise location represents the theoretical upper bound for power delivery in an unobstructed channel. This raises a critical, practical question: is the resilience offered by an ND beam worth the trade-off in raw, LoS-optimized power delivery? To answer this, we conducted a final case study directly comparing the ND beam to a Near-Field Focusing (NF-F) beam.

1) Simulation Setup: For this comparison, the user's location was fixed at the ND beam's peak intensity distance, z_eval = z_peak. The NF-F beam was configured with a quadratic phase profile to optimally focus its energy at this exact point (see the sketch after this subsection). We then subjected both beams to the same Monte Carlo simulation engine, with obstacles randomly placed between the transmitter and the user (z_obs ∈ [0.2, 0.95]·z_peak), following the unbiased, ρ-uniform sampling strategy.

Fig. 8. Heatmap of the SNR gain of the ND beam relative to the Near-Field Focusing (NF-F) beam (ΔSNR_ND-Focus) for a user fixed at z_peak. The axes represent the normalized obstacle position (z_obs/z_peak) and the normalized difficulty index (ρ).
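For reference, the NF-F baseline can be configured with a standard paraxial focusing profile. The sketch below is a generic implementation of such a quadratic phase map; the array size and pitch are illustrative stand-ins, not the Table I values.

```python
import numpy as np

def focusing_phase(x, y, z_focus, wavelength):
    """Quadratic (Fresnel) phase profile that focuses a planar aperture at z_focus.

    Under the paraxial approximation, the required aperture phase is
    phi(x, y) = -k (x^2 + y^2) / (2 z_focus), with k = 2*pi/wavelength.
    """
    k = 2 * np.pi / wavelength
    return -k * (x**2 + y**2) / (2 * z_focus)

# Example: phase map for a 32x32 array with half-wavelength pitch,
# focused at the ND beam's peak-intensity distance (illustrative values).
wavelength, z_peak = 1.0, 40.0
coords = (np.arange(32) - 15.5) * 0.5 * wavelength  # centered element grid
X, Y = np.meshgrid(coords, coords)
phase_map = np.mod(focusing_phase(X, Y, z_peak, wavelength), 2 * np.pi)
```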
2) Quantitative Comparison of Resilience: The results, presented in the heatmap of Fig. 8, provide a stark and quantitative confirmation of the fundamental trade-off. The color axis represents the SNR gain of the ND beam over the NF-F beam (ΔSNR_ND-Focus). The map is overwhelmingly dominated by warm colors, indicating a massive advantage for the ND beam in almost all blockage scenarios. The gain is substantial, frequently exceeding +15 dB and peaking at over +22 dB. The physical reason is clear: the NF-F beam concentrates the vast majority of its energy into a single, diffraction-limited focal spot. While optimal for a clear LoS path, this makes its performance extremely brittle: any obstacle, even a small one, that intercepts the focal path shatters the beam's structure and causes a catastrophic drop in received power. In contrast, the ND beam's energy is distributed across its conical wavefront. Its self-healing mechanism, sustained by this distributed energy reservoir, allows it to maintain a high on-axis intensity even when the central path is obstructed. The few cool-colored spots in the map (with a minimum ΔSNR of only around −3 dB) correspond to rare, near-zero-blockage cases where the NF-F beam's superior focusing provides a slight advantage.

3) Practical Takeaway: This case study highlights the fundamental value proposition of ND beams for practical RNF links: they occupy a crucial "sweet spot" in the design space. For dynamic, cluttered environments where a clear LoS cannot be guaranteed, the extreme fragility of an optimally focused beam makes it a high-risk, "all-or-nothing" strategy. The ND beam, by sacrificing a small amount of peak LoS performance, provides an invaluable insurance policy against blockage, making it a far more robust and reliable solution for resilient near-field communications.

VI. CONCLUSION AND FUTURE WORK

This paper has provided the first systematic answer to a critical open question: under what specific, quantifiable conditions do non-diffracting (ND) beams outperform conventional boresight beams for blockage-resilient near-field links? Our central finding is that the utility of ND beams is a powerful but conditional phenomenon, strictly confined to a well-defined operational "advantage zone" bounded by the beam's peak intensity distance (z_peak) and a critical crossover distance (z_crossover). Within this zone, the advantage is probabilistic rather than guaranteed; our comprehensive simulations show that, while offering a positive average gain, the ND beam has only a ~60-70% probability of outperforming the baseline in its optimal range. We reveal that the significant performance variance observed is not mere statistical noise, but the macroscopic signature of a competition between two distinct physical phenomena: the intended self-healing of the ND beam, and the on-axis reconstruction of the conventional beam via the Poisson-Arago spot effect in scenarios with symmetric, central blockages. Furthermore, our results uncover a critical insight: the ND beam's effectiveness is strongly modulated by the obstacle's geometry. It exhibits high resilience to horizontal obstacles that allow its conical waves to reconstruct the field, but shows a significant weakness against large, vertically oriented blockers. These findings, distilled into actionable "advantage regime maps," provide a realistic and physically grounded roadmap for the judicious deployment of ND beams, highlighting both their unique capabilities and critical limitations.

Beyond these findings, the offline nature of the holographic synthesis invites future research into creating optimized ND beam codebooks. These pre-computed phase maps could enable dynamic, real-time switching between different structured beams (e.g., a standard Bessel beam for general coverage and a notched Mathieu beam for a known obstacle), adding a new layer of intelligence and adaptability to the physical layer.

Looking forward, the unique spatial characteristics of ND beams open up intriguing possibilities beyond simple blockage mitigation. One promising avenue is their application in Integrated Sensing and Communication (ISAC). For instance, an ND beam could serve as a highly effective pilot or control signal. Within its non-diffracting range, it provides robust coverage for sensing and localization; outside this range, its energy rapidly defocuses. This sharp spatial boundary could be exploited to create naturally confined "sensing zones" without causing interference to distant users, a key challenge in spectrum-shared ISAC systems. The unified generation framework proposed in this work provides the necessary tools to explore such advanced applications, paving the way for the intelligent and adaptive use of structured beams in future 6G networks.

REFERENCES
[1] X. You, C.-X. Wang, J. Huang, and C. Zhang, "Towards 6G wireless communication networks: vision, enabling technologies, and new paradigm shifts," Science China Information Sciences, vol. 64, no. 1, art. 110301, 2021.
[2] C.-X. Wang, J. Huang, X. Gao, X. You, et al., "On the Road to 6G: Visions, Requirements, Key Technologies and Testbeds," IEEE Communications Surveys & Tutorials, vol. 25, no. 2, pp. 905–974, 2023.
[3] Q. Lu, Z. Xiao, X. Gao, and R. W. Heath, "A Tutorial on Near-Field XL-MIMO: Modeling, Algorithms, and Implementations," IEEE Communications Surveys & Tutorials, vol. 26, no. 4, pp. 2887–2945, 2024.
[4] IEEE ComSoc, "Best Readings in Near-Field MIMO," curated list, 2023–2024.
[5] M. Y. Cui, L. Dai, and H. V. Poor, "Near-Field Rainbow: Wideband Beam Training for XL-MIMO," IEEE Transactions on Wireless Communications, vol. 22, no. 6, pp. 3899–3912, 2023.
[6] K. Zhi, C. Pan, H. Ren, C.-X. Wang, R. Schober, and X. You, "Performance Analysis and Low-Complexity Design for XL-MIMO with Near-Field Spatial Non-Stationarities," IEEE Journal on Selected Areas in Communications, vol. 42, no. 6, pp. 1656–1672, 2024.
[7] M. Cui and L. Dai, "Near-field wideband channel estimation for extremely large-scale MIMO," Science China Information Sciences, vol. 66, art. 172303, 2023.
[8] Y. Ma, J. Zhang, H. Liang, and M. Matthaiou, "Near-Field Wideband Beamforming for Extremely Large-Scale Array," IEEE Transactions on Wireless Communications, vol. 23, no. 6, pp. 5690–5705, 2024.
[9] I. V. A. K. Reddy, D. Bodet, A. Singh, V. Petrov, C. Liberale, and J. M. Jornet, "Ultrabroadband terahertz-band communications with self-healing Bessel beams," Communications Engineering, vol. 2, art. 70, 2023.
[10] H. Guerboukha, B. Zhao, Z. Fang, E. Knightly, and D. M. Mittleman, "Curving THz wireless data links around obstacles," Communications Engineering, vol. 3, art. 5, 2024.
[11] M. A. ElMossallamy, H. Zhang, L. Song, K. G. Seddik, Z. Han, and G. Y. Li, "Reconfigurable Intelligent Surfaces for Wireless Communications: Principles, Challenges, and Opportunities," IEEE Transactions on Cognitive Communications and Networking, vol. 6, no. 3, pp. 990–1002, 2020.
[12] J. He, L. Dai, B. Clerckx, H. V. Poor, et al., "Holographic MIMO Communications: Models, Implementations, and Applications," IEEE Communications Surveys & Tutorials, vol. 26, no. 1, pp. 31–78, 2024.
[13] R. Deng, A. Tang, Y. Gao, Q. Wu, and L. Yang, "Reconfigurable Holographic Surfaces for Wireless Communications," IEEE Vehicular Technology Magazine, vol. 18, no. 1, pp. 18–26, 2023.
[14] J. An, C. Yuen, M. Debbah, and H. V. Poor, "A Tutorial on Holographic MIMO Communications—Parts I–III," IEEE Communications Letters/preprint collection, 2023–2024.
[15] Z. Wang, L. Dai, and H. V. Poor, "Challenges and Solutions for Near-Field Wideband RIS-Assisted Communications," IEEE Wireless Communications Letters, vol. 12, no. 8, pp. 1419–1423, 2023.
[16] J. Durnin, "Exact solutions for nondiffracting beams. I. The scalar theory," JOSA A, vol. 4, no. 4, pp. 651–654, 1987.
[17] Y. Katsuue, A. Yabuki, I. Morohashi, A. Kanno, N. Sekine, J. Nakajima, and S. Hisatake, "Obstacle-tolerant terahertz wireless link using self-healing Bessel beams," Applied Physics Letters, vol. 123, no. 25, art. 251103, 2023.
[18] A. Singh, M. Polese, A. Lozano, and J. M. Jornet, "Wavefront engineering for the next generation of wireless communications," arXiv:2303.18025, 2023.
[19] M. Polese, A. Singh, V. Petrov, A. Lozano, and J. M. Jornet, "Bessel beams for 6G: Resilient and flexible connectivity," arXiv:2406.02141, 2024.
[20] S. Li, Z. Wang, C. Huang, and M. Debbah, "Self-Healing Properties of Phased-Array-Generated Bessel Beams for 6G Indoor Scenarios," IEEE Transactions on Communications, vol. 72, no. 5, pp. 2940–2953, 2024.
[21] J. Zhang, H. Liu, and L. Li, "Physical-Layer Solutions for Blockage Mitigation in 6G: A Survey," IEEE Communications Surveys & Tutorials, vol. 26, no. 3, pp. 2011–2050, 2024.
[22] P. Chen and W. E. I. Sha, "Experimental Demonstration of Obstacle-Resilient THz Link using Synthesized Bessel Beams," in Proc. IEEE Int. Conf. Commun. (ICC), 2024, pp. 1–6.
[23] T. A. T. Nguyen and L. V. T. Nguyen, "Aperture-Phase Synthesis for Quasi-Bessel Beams in Near-Field LoS-Blocked Channels," IEEE Antennas and Wireless Propagation Letters, vol. 23, no. 1, pp. 215–219, 2024.
[24] X. Wu, H. Cao, J. Peng, and Z. Meng, "Terahertz quasi non-diffraction Bessel vortex beam generation using three-lattice-type reflective metasurface," Optics Express, vol. 30, no. 18, pp. 31653–31668, 2022.
[25] J. Miao, Q. Ruan, W. He, and S. Chang, "Terahertz Bessel metalens with an extended non-diffractive length," Optics Letters, vol. 48, no. 19, pp. 5117–5120, 2023.
A SIMULATION FRAMEWORK FOR STUDYING SYSTEMIC EFFECTS OF FEEDBACK LOOPS IN RECOMMENDER SYSTEMS

Gabriele Barlacchi, Scuola Normale Superiore, Pisa, Italy (gabriele.barlacchi@sns.it)
Margherita Lalli, Scuola Normale Superiore, Pisa, Italy (margherita.lalli@sns.it)
Emanuele Ferragina, Sciences Po, Paris (emanuele.ferragina@sciencespo.fr)
Fosca Giannotti, Scuola Normale Superiore, Pisa, Italy (fosca.giannotti@sns.it)
Luca Pappalardo, CNR and Scuola Normale Superiore, Pisa, Italy (luca.pappalardo@isti-cnr.it)

October 17, 2025

ABSTRACT

Recommender systems continuously interact with users, creating feedback loops that shape both individual behavior and collective market dynamics. This paper introduces a simulation framework to model these loops in online retail environments, where recommenders are periodically retrained on evolving user–item interactions. Using the Amazon e-Commerce dataset, we analyze how different recommendation algorithms influence diversity, purchase concentration, and user homogenization over time. Results reveal a systematic trade-off: while the feedback loop increases individual diversity, it simultaneously reduces collective diversity and concentrates demand on a few popular items. Moreover, for some recommender systems, the feedback loop increases user homogenization over time, making user purchase profiles increasingly similar. These findings underscore the need for recommender designs that balance personalization with long-term diversity.

Keywords: Feedback loop, Recommender systems, Diversity, Homogenization, Simulation, Human-AI Coevolution

1 Introduction

Recommender systems constitute a pervasive layer of algorithmic mediation, increasingly shaping how individuals interact with digital environments. On social media, they surface relevant posts and connections; in online retail, they suggest products that users are likely to purchase; and in location-based services, they propose efficient routes or destinations tailored to individual preferences. Because recommender systems are powered by AI, their interaction with users naturally gives rise to a feedback loop: users' choices shape the data on which recommenders are trained, and the trained models in turn influence subsequent user decisions Pedreschi et al. [2025]. This recursive process continually feeds back into itself, creating an evolving and potentially unbounded cycle. The feedback loop can amplify existing biases and generate various systemic effects, many of which are unintended Pedreschi et al. [2025], Pappalardo et al. [2024]. On social media, the feedback loop may reinforce echo chambers, filter bubbles, and even processes of radicalization Guess et al. [2023], Huszár et al. [2022], Sîrbu et al. [2019]; in online retail and location-based platforms, it can exacerbate popularity bias Lee and Hosanagar [2018], Aridor et al. [2024a], Mauro et al. [2025], Sánchez et al. [2023].

Empirically studying feedback loops is challenging, as experiments on real online platforms are limited by platform restrictions, short observation horizons, and poor reproducibility Holzmeister et al. [2024]. Consequently, most research relies on simulations, which enable controlled exploration of mechanisms and counterfactual scenarios Mansoury et al. [2020a], Yang et al. [2021], Chaney et al. [2018a]. However, existing simulation approaches often rely on oversimplified assumptions, such as the exclusive use of explicit feedback (e.g., user ratings) Yao et al.
[2021], the absence of periodic retraining of the recommender system Cai et al. [2025], and the consideration of only a limited set of recommendation algorithms Sun et al. [2019]. The difficulty of empirically studying and simulating feedback loops has limited our understanding of their systemic effects. In online retail, for instance, recommender systems are optimized for predictive accuracy and tend to favor popular items that are more likely to be purchased. While this typically increases overall sales volume from the platform's perspective Pappalardo et al. [2024], Pathak et al. [2010], it may also induce behavioral homogenization (users increasingly buying the same items) and concentrate demand on a small subset of products. Over time, these effects can distort online retail ecosystems and risk undermining market fairness and long-term system resilience. Yet, evidence on how the feedback loop leads to these systemic effects remains sparse and often contradictory Pappalardo et al. [2024], Sonboli et al. [2022], Fleder et al. [2010].

In this paper, we propose an open-source simulation framework that models feedback loops in an online-retail-like environment, where recommender systems are periodically retrained on the evolving user–item interaction data generated through purchases. The framework generates synthetic trajectories of user–item interactions over time, enabling detailed analysis of the systemic effects that emerge from the feedback loop. Unlike existing frameworks based on explicit feedback (e.g., user ratings), our framework uses implicit feedback (e.g., purchases), better reflecting real online retail operations and the recent shift toward implicit feedback modeling Aggarwal [2016]. The simulation framework is flexible and supports different recommendation systems to allow systematic comparison across algorithms. We implement a wide set of recommender systems and conduct extensive experiments using the open Amazon e-commerce 1.0 dataset Berke et al. [2024] to examine how the simulated feedback loop shapes systemic properties such as purchase diversity, the concentration of purchase volume and item popularity across items, and behavioural homogenization across the user base. We discover that:

• The feedback loop broadens individual purchase profiles while concentrating demand at the collective level.
• The exploratory boost induced by recommender systems is driven primarily by medium and heavy buyers, with minimal impact on light buyers.
• The feedback loop generally increases the concentration of purchases on a small set of items across most recommender systems.
• User homogenization effects are model-dependent: some recommender systems amplify behavioral similarity, while others preserve heterogeneity.

Our results demonstrate the utility of simulations to disentangle the systemic impacts of recommendations, and highlight the need to design algorithms that balance personalization with unintended long-term effects.

2 Related work

This section reviews research on how recommender systems influence user behavior, primarily in online retail, media, and news. The literature employs two approaches Pappalardo et al.
[2024]: online experiments analyse real-world user interactions under controlled conditions and offer ecological validity, but face constraints from platform policies, short observation windows, and limited scalability; simulations model user–system dynamics to explore hypothetical effects and counterfactual scenarios, but rely on assumptions that may limit external validity.

Online Experiments. Empirical findings on consumption diversity are often conflicting. On the one hand, recommender systems can expand individual exploration but simultaneously concentrate demand at the aggregate level Fleder and Hosanagar [2009], Lee and Hosanagar [2018]. On the other hand, recommendations may narrow user choices, steering individuals toward more homogeneous item sets Aridor et al. [2024a], Wang et al. [2024], and reduce listening diversity over time Anderson et al. [2020], Holtz et al. [2020]. Research has also identified mechanisms through which recommendations influence decisions, including effects on item discovery, purchase behavior Hinz et al. [2021], and quality perceptions Aridor et al. [2024a], with broader implications for willingness to pay and market concentration Goldfarb and Tucker [2011], Aral et al. [2012], Elberse [2008]. Interface design moderates these effects, as shown in movie recommendation studies Sun et al. [2024]. News recommendation research highlights conditional effects: algorithmic exposure can amplify polarization Levy [2021] or reinforce filter bubbles depending on user predispositions and design choices Knudsen [2023], Loecherbach et al. [2021], Haim et al. [2018]. Diversity-aware frameworks have been proposed to address democratic concerns Helberger et al. [2021], Moeller et al. [2021]. Even large-scale experiments reach opposite conclusions: some find no short-term filter-bubble effects on political attitudes Guess et al. [2023], while others document increased exposure to extreme content through recommendation chains Ribeiro et al. [2020].

Simulations. By isolating mechanisms and testing counterfactuals, simulations reveal feedback dynamics unfolding over extended periods. Early work demonstrates that algorithmic reinforcement narrows exposure and reduces system-level diversity Jiang et al. [2019] and that repeated recommendations lead to behavioral homogenization. Mansoury et al. [2020b] formalize how sustained exposure shapes user preferences rather than merely reflecting them. Agent-based models Chaney et al. [2018b], Mansoury et al. [2020c] reveal how small initial biases propagate into large-scale imbalances through mutual user–algorithm adaptation. Beyond digital contexts, recent work Zeng et al. [2021], Mauro et al. [2025] shows that recommendation feedback loops extend into urban spatial behaviour. Simulations also serve as testbeds for interventions: Helberger et al. [2019] propose frameworks to balance diversity and autonomy in news, Bauer and Jannach [2021] investigate alternative strategies for music recommendation, and Coppolillo et al. [2024] propose a framework for systematically evaluating recommender systems, with a particular focus on measuring drift in behavioral patterns and biases. Interface design studies demonstrate how presentation shapes exposure Rahdari et al. [2024]. Diversity-aware ranking and nudging strategies have been tested in simulation before deployment Hazrati and Ricci [2021], Mattis et al. [2022]. Bias amplification under feedback loops has received particular attention.
Hazrati and Ricci [2021] show how user biases distort evaluation metrics, with Mattis et al. [2022] demonstrating worsening effects over time. To address these risks, Hansen et al. [2023] introduce debiasing techniques, and Aridor et al. [2024b] examine fairness-aware strategies accounting for systemic disparities. Finally, modeling choices critically shape simulation outcomes. Zeng et al. [2020] demonstrate that behavioral model selection (e.g., multinomial logit vs. bounded rationality) significantly alters diversity predictions, while Mungari et al. [2025] introduce a flexible framework for generating synthetic preference data to support recommendation analysis, highlighting the differences among possible approaches. Collectively, these contributions establish simulation as a bridge between theory and practice, enabling the study of long-term dynamics hidden from short-term evaluations.

3 Modelling of the Feedback Loop

We consider a setting in which a recommender system on an online retail platform assists users by suggesting relevant items derived from historical interaction data. In this section, we first introduce the notation used throughout the paper (Section 3.1), then describe the simulation of the user–recommender feedback loop (Section 3.2), and finally present the user choice model adopted in our framework (Section 3.3).

3.1 Definitions

Let U be a set of users, I a set of items, and T = {1, 2, ..., T} a sequence of discrete, ordered time steps. We define a model scoring function R : U × I × T → ℝ that assigns a relevance score R(u, i, t) to every user-item-time triplet (u, i, t) ∈ U × I × T. This score is then used by a recommender function ρ : U × T → P_k(I), where P_k(I) denotes the set of all ordered k-subsets of I. For a given user u and time t, this function generates a ranked list of k items, denoted K_{u,t}, by selecting the items with the highest scores:

ρ(u, t) := K_{u,t} = \operatorname{top-}k_{i \in I} R(u, i, t).

Users can also make purchases autonomously, independent of the recommender's suggestions. For these autonomous choices, a user's selection is constrained not by the full catalog I but by a user- and time-specific candidate set, denoted C_{u,t}. This set represents the subset of items that user u is aware of at time t and constitutes the available choice pool for organic interactions. The user scoring function is then a function \hat{R} whose domain is the set of all admissible user-item-time triplets where the item is available to the user:

\hat{R} : \{(u, i, t) \in U \times I \times T \mid i \in C_{u,t}\} \to ℝ.

To each triplet (u, i, t) in this domain, the function assigns a score \hat{R}(u, i, t).

3.2 Simulation framework

The simulation begins with an initialization phase, in which a real-world dataset is used to train both the model scoring function and the user scoring function. The simulation is initialized at time t_0 and proceeds by generating user choices at each subsequent time step t ∈ [t_0, T]. At each simulation step t ∈ [t_0, T], a subset of users U_t ⊆ U is awakened. Each active user u ∈ U_t selects a basket of items, where the number of items selected, denoted b_{u,t}, is the user's basket size for that step.
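For concreteness, the top-k selection that defines ρ(u, t) in Sec. 3.1 can be written in a few lines. This is a minimal sketch assuming the scores R(u, ·, t) are held in a NumPy array; function names are ours.

```python
import numpy as np

def top_k_recommend(scores_u, k):
    """Ranked list K_{u,t}: indices of the k highest-scoring items for user u.

    scores_u: 1-D array of model scores R(u, i, t) over the full catalog I.
    Returns item indices ordered by decreasing score.
    """
    idx = np.argpartition(scores_u, -k)[-k:]      # unordered top-k indices
    return idx[np.argsort(scores_u[idx])[::-1]]   # sort them descending

# Example: k = 3 recommendations from random scores over a 10-item catalog.
rng = np.random.default_rng(1)
K_ut = top_k_recommend(rng.random(10), k=3)
```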
Each user-item interaction is modelled as follows:

1. With probability η, the user u selects an item from the ranked list K_{u,t} provided by the recommender. This item is drawn from K_{u,t} with a probability proportional to the scores {R(u, i, t)}_{i∈K_{u,t}}.
2. With probability 1 − η, u selects an item i from the candidate set C_{u,t}, with a probability proportional to the score \hat{R}(u, i, t) computed from the user choice model (described in Sec. 3.3).

We partition the simulation timeline [t_0, T] into discrete, non-overlapping intervals termed epochs. Each epoch ∆_j (with j ranging from 1 to n) comprises a sequence of simulation steps, where the number of steps may vary between epochs. At the end of each epoch ∆_j, the model scoring function, the choice model, the candidate set, and the ranked list suggested by the recommender are updated. Algorithm 1 outlines the overall simulation framework, while Algorithm 2 details the modelling of a user's next-item selection.

Algorithm 1: Simulation framework
Require: historical data D; total epochs n; last initialization epoch E_last_init; retraining interval ∆; cold-start t_0
Output: interaction dataset D_post
  D_init ← ColdStart(D, t_0)
  R ← AlgorithmTraining(D_init)
  \hat{R} ← ChoiceModelSetUp(D_init)
  E_c ← E_last_init + 1
  E_last_training ← 0
  D_post ← D_init
  while E_c < n do
    S_e ← get_steps_epoch(E_c)
    for all s in S_e do
      U_s ⊆ U ← get_awaken_users(E_c, s)
      for all u in U_s do
        b_u ← get_basket(u)
        i_next ← ItemSelection(u, b_u, R, \hat{R})
        D_post ← D_post ∪ {(u, i_next, s)}
      end for
      if E_c − E_last_training > ∆ then
        R ← AlgorithmTraining(D_post)
        \hat{R} ← ChoiceModelSetUp(D_post)
        E_last_training ← E_c
      end if
    end for
    E_c ← E_c + 1
  end while

Algorithm 2: ItemSelection for user u
Require: user u; basket size b_u; choice model \hat{R}; recommender model R; current epoch n
Hyperparameters: adoption rate η; ranked-list size K
Output: item list U_I
  C_S ← get_candidate_set(u, n)
  U_I ← ∅
  for i = 1 to b_u do
    if Bernoulli(η) then
      i_next ← R(u, K)
    else
      i_next ← \hat{R}(u, C_S)
    end if
    U_I ← U_I ∪ {i_next}
  end for

3.3 User choice model

We model autonomous user choices, i.e., the items selected when users do not follow the recommender system's suggestions, following the approach of Cesa-Bianchi et al. [2017] and its subsequent extensions Mansoury et al. [2020a], Chaney et al. [2018a], Hazrati et al. [2020]. This model builds on item response theory and employs softmax decision rules, which are widely used in machine learning and discrete choice theory. The probability that user u selects an item i from its candidate set C_{u,t} at time t is given by a softmax function. We accommodate this by specifying the form of the scoring function \hat{R} in terms of a utility function V_{u,i}(t):

P(i \mid u, t) = \frac{\hat{R}(u, i, t)}{\sum_{j \in C_{u,t}} \hat{R}(u, j, t)} = \frac{e^{V_{u,i}(t)/\tau}}{\sum_{j \in C_{u,t}} e^{V_{u,j}(t)/\tau}},   (1)

where V_{u,i}(t) is the utility associated with item i for user u at time t, and τ controls the exploration-exploitation balance. As τ → 0, choices become deterministic toward the highest-utility item, while for τ → ∞ they approach a uniform distribution.

Candidate set. The set of available items for each user is constructed at each time step by combining items from three distinct sources:

• GPop: items ranked by global popularity (i.e., the number of interactions by all users in the current epoch);
• IPop: items ranked by individual popularity (i.e., the number of interactions in the user's own purchase history up to the current epoch);
• Unknown: a random sample of items with which the user has had no prior interaction.

The final candidate set combines these sources with fixed proportions: 40% from GPop, 40% from IPop, and 20% from Unknown. This composition balances globally and individually popular items with exploration of unseen content.
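A compact sketch of a single user-item interaction, as specified by Algorithm 2 and the softmax rule of Eq. (1); it assumes non-negative recommender scores, and the function names and defaults are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax_choice(candidates, utilities, tau=1.0):
    """Sample one item from the candidate set via the softmax rule of Eq. (1)."""
    v = np.asarray(utilities, dtype=float) / tau
    p = np.exp(v - v.max())          # subtract the max for numerical stability
    p /= p.sum()
    return candidates[rng.choice(len(candidates), p=p)]

def select_item(rec_list, rec_scores, candidate_set, utilities,
                eta=0.5, tau=1.0):
    """One interaction, mirroring Algorithm 2.

    With probability eta, the item is drawn from the recommender's ranked list
    proportionally to its (non-negative) scores; otherwise it is drawn from
    the candidate set through the softmax choice model.
    """
    if rng.random() < eta:
        p = np.asarray(rec_scores, dtype=float)
        p = p / p.sum()
        return rec_list[rng.choice(len(rec_list), p=p)]
    return softmax_choice(candidate_set, utilities, tau)
```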
4 Measuring systemic effects

We quantify the systemic effects of the feedback loop along three key dimensions: (i) purchase diversity at individual and collective levels; (ii) concentration of purchase volume and item popularity on a limited set of items; and (iii) homogenization of user purchasing behaviour. We select these measures due to the sparse and often conflicting evidence reported in the literature Pappalardo et al. [2024] and because they operationalize fundamental dimensions of diversity and concentration in algorithmic marketplaces. In the long run, reduced collective diversity, excessive homogenization, and high concentration of purchases can limit item discoverability, reinforce popularity biases, and amplify inequality among items and sellers. Such dynamics may ultimately hinder market fairness and reduce long-term system resilience.

Individual diversity  We start by defining, for each user u ∈ U, the purchase vector wu = {wu,i | i ∈ Iu}, where wu,i is the number of times u purchased item i, and Iu the set of items in the purchase history of u. We then use the Gini coefficient to quantify the inequality of weights in wu:

\[ G_u = \frac{\sum_{i=1}^{d_u} \sum_{j=1}^{d_u} |w_{u,i} - w_{u,j}|}{2\, d_u^2\, \bar{w}_u}, \]   (3)

where du = |Iu| is the number of distinct items u has interacted with and w̄u is the average number of purchases made by u. The set of all individual Gini coefficients {Gu}u∈U is then summarised by the mean Gind of the distribution.

Collective diversity  We define a system-level vector s, where each component corresponds to the total purchase volume of a specific item:

\[ s = \{s_i \mid i \in I\}, \quad \text{where } s_i = \sum_{u \in U} w_{u,i}, \]   (4)

and si is called the strength of item i. The collective Gini coefficient, Gcoll, is then computed using Equation 3 by replacing the individual weight vector wu, the number of items du, and the mean weight w̄u with their system-level counterparts: the vector s, the total number of items |I|, and the mean purchase volume per item s̄.

Purchase concentration  To analyze the redistribution of purchases as the feedback loop evolves, we examine the frequency-rank curves of items at the end of the simulation using two key metrics: (i) the item purchase volume si (or strength) defined in Eq. (4); and (ii) item popularity pi, i.e., the number of unique users who purchased item i. This is given by

\[ p_i = \sum_{u \in U} \mathbb{1}\{w_{u,i} > 0\}, \]

where 1{wu,i>0} is an indicator function that equals 1 if user u has purchased item i at least once, and 0 otherwise. Items are ranked in descending order based on si and pi, respectively, generating two independent ordered sequences {ir}r∈[1,|I|] such that i1 has the highest value and i|I| the lowest.

User similarity  We quantify the similarity between the interaction sets of two users u, v ∈ U with the Jaccard index:

\[ J(u, v) = \frac{|I_u \cap I_v|}{|I_u \cup I_v|}. \]   (5)

The Jaccard index ranges from 0, indicating no common items, to 1, indicating identical item sets. To obtain a system-level measure, we compute the mean of the Jaccard index across all unique user pairs as:

\[ \bar{J}(U) = \frac{1}{|P(U)|} \sum_{(u,v) \in P(U)} J(u, v), \]

where P(U) is the set of unordered distinct user pairs.
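A direct implementation of the diversity measures in Eqs. (3)–(4) might look as follows. This is a sketch under the assumption that purchases are stored as a {user: {item: count}} mapping; the names are illustrative.

```python
import numpy as np

def gini(weights):
    """Gini coefficient of a weight vector, as in Eq. (3).
    Uses O(d^2) broadcasting, which is fine at this dataset's scale."""
    w = np.asarray(list(weights), dtype=float)
    d = len(w)
    if d == 0 or w.mean() == 0:
        return 0.0
    abs_diffs = np.abs(w[:, None] - w[None, :]).sum()
    return abs_diffs / (2 * d**2 * w.mean())

def individual_gini(purchases):
    """G_ind: mean of the per-user Gini coefficients."""
    return float(np.mean([gini(items.values()) for items in purchases.values()]))

def collective_gini(purchases):
    """G_coll: Gini of the item strengths s_i of Eq. (4)."""
    s = {}
    for items in purchases.values():
        for i, w in items.items():
            s[i] = s.get(i, 0) + w
    return gini(s.values())
```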
5 Experiments

In this section, we describe the dataset, the recommender systems and their evaluation criteria, and the key parameters governing the simulation process.

Dataset  We use the Amazon e-commerce 1.0 dataset Berke et al. [2024], a large-scale collection of purchase histories crowdsourced from over 5,000 consumers between 2018 and 2022. It contains 1.8 million purchase events, each including detailed attributes such as item code, category, title, price, quantity, and the buyer's shipping state, alongside anonymized user demographic information. To ensure temporal continuity during the simulation, we retain only users with at least one purchase per month over the entire simulated period. After filtering, the dataset comprises roughly one million interactions across 1,561 categories, generated by 2,191 users over a five-year period.

Recommender Systems  We benchmark several recommender systems spanning classical collaborative filtering, matrix factorization, deep learning, and graph-based models (all implemented via the RecBole library Zhao et al. [2021]):

• ItemKNN Sarwar et al. [2001]: item-based collaborative filtering that recommends items similar to those previously interacted with, using item–item co-occurrence similarity.
• MultiVAE Liang et al. [2018]: variational autoencoder model that captures user–item interactions through latent variable inference.
• NeuMF He et al. [2017]: neural matrix factorization approach that ranks items for each user by optimizing a pairwise probabilistic loss.
• LightGCN He et al. [2020]: simplified graph convolutional model that propagates neighborhood information without feature transformation or nonlinearities.
• BPR Rendle et al. [2009]: matrix factorization model trained with a Bayesian pairwise ranking loss to learn personalized item rankings.
• SpectralCF Zheng et al. [2018]: spectral graph-based collaborative filtering leveraging the frequency domain of the user–item graph to capture global connectivity.
• MostPop: non-personalized baseline recommending the globally most popular items.

Evaluation of recommender systems  We use the first six months of the dataset as an initialization period to calibrate the user choice model and to define the training, validation, and test splits. Within this initialization phase, 80% of the available time is assigned to training (months 1–4), 10% to validation (month 5), and the remaining 10% to testing (month 6). This setup corresponds to a temporal holdout with a sliding window strategy, a widely adopted protocol to evaluate recommender systems over time Campos et al. [2010], Jiang et al. [2022]. After initialization, we repeat the procedure iteratively: at the end of each subsequent epoch (one month of simulated data), we retrain models on the most recent four months, validate on the following month, and test on the newly generated interactions. For hyperparameter optimization, we employ an exhaustive grid search over model-specific parameter ranges including learning rate, neighborhood size, and regularization terms; the configuration that maximizes nDCG@10 on the validation set is selected. As shown in Table 1, NeuMF achieves the best overall results across all metrics, followed by BPR. NeuMF outperforms the popularity-based baseline (MostPop) by +42% in nDCG@10, +35% in Recall@10, and +28% in Precision@10. MostPop performs slightly better than ItemKNN, suggesting that naive similarity-based methods struggle under temporal dynamics.
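The sliding-window temporal holdout can be sketched as follows, assuming the interaction log is a pandas DataFrame with a timestamp column; the function and column names are illustrative.

```python
import pandas as pd

def sliding_window_split(df, epoch_end, train_months=4, time_col="timestamp"):
    """Temporal holdout ending at `epoch_end`: train on the four months
    before the validation month, validate on the next-to-last month,
    test on the final month (months 1-4 / 5 / 6 of a six-month window)."""
    end = pd.Timestamp(epoch_end)
    test_start = end - pd.DateOffset(months=1)
    val_start = end - pd.DateOffset(months=2)
    train_start = val_start - pd.DateOffset(months=train_months)
    t = df[time_col]
    train = df[(t >= train_start) & (t < val_start)]
    val = df[(t >= val_start) & (t < test_start)]
    test = df[(t >= test_start) & (t < end)]
    return train, val, test
```

Calling this at the end of each simulated epoch reproduces the iterative retrain/validate/test cycle described above.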
Simulation  The Amazon dataset provides timestamps for each user-item interaction, enabling us to model simulation steps and epochs on a real temporal scale. In our setup, a single step t corresponds to one day and an epoch ∆j corresponds to one month. The initial part of the data, amounting to six months, is kept as an initialization period and is used to train the recommender system and to inform the user choice model. Afterwards, the simulation runs for T = 731 days (n = 24 epochs). At each simulation step t ∈ [t0, T] (t0 is the first day after the initialization period), the set of active customers Ut and the related basket sizes {bu,t}u∈Ut mimic the empirical purchase sequence of the corresponding day. Consistent with typical streaming platform interfaces (10–20 items), the size of the ranked list Ku,t is fixed to k = 20 for each user at each time step. To evaluate the effects of the described recommendation systems on purchase dynamics, we run the simulation in parallel for each model and for each adoption rate η ∈ {0, 0.2, 0.4, 0.6, 0.8, 1}. The case η = 0 corresponds to the evolution of the pure user choice model without influence of the recommender, while η = 1 represents the scenario in which users always accept the suggested recommendations. To account for stochastic variability, we execute each configuration over three independent runs.
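The resulting experimental grid can be driven by a simple loop; in this sketch, run_simulation is a placeholder standing in for the full pipeline of Section 3.2.

```python
import itertools

def run_simulation(model, eta, seed, k=20, n_epochs=24):
    """Placeholder for the simulation pipeline described in Section 3.2."""
    ...

MODELS = ["ItemKNN", "MultiVAE", "NeuMF", "LightGCN", "BPR", "SpectralCF", "MostPop"]
ETAS = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
SEEDS = [0, 1, 2]  # three independent runs per configuration

results = {
    (m, eta, s): run_simulation(model=m, eta=eta, seed=s)
    for m, eta, s in itertools.product(MODELS, ETAS, SEEDS)
}
```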
Table 1: Performance comparison of recommendation systems. Best results are in bold.

Model       N@10    P@10    R@10    Hit@10
MostPop     0.1398  0.0782  0.1706  0.5357
NeuMF       0.2810  0.1398  0.2825  0.6699
BPR         0.2371  0.1072  0.2305  0.6241
SpectralCF  0.2274  0.1067  0.2292  0.6241
LightGCN    0.1969  0.0941  0.2033  0.5964
MultiVAE    0.1726  0.0934  0.2010  0.5942
ItemKNN     0.1539  0.0898  0.1945  0.5729

N@10 = nDCG@10, P@10 = Precision@10, R@10 = Recall@10. All metrics are averaged over users; higher is better.

6 Results

Individual vs Collective Diversity  We compute the individual and collective Gini coefficients at the end of the 24-epoch period and evaluate how these values change as the adoption rate η increases. We find a marked dichotomy in the response of individual versus collective diversity to recommender influence. As Figure 1a shows, the average individual Gini coefficient exhibits a monotonic decrease as a function of η.

Figure 1: Individual and collective diversity. The evolution of individual (a) and collective (b) Gini coefficients as a function of adoption rate. Below, the individual Gini coefficient disaggregated by user engagement segments for two representative models: NeuMF (c) and LightGCN (d).

The magnitude of this reduction ranges from approximately 11% for LightGCN to 35% for NeuMF, indicating a consistent enhancement in the diversity of individual purchase sequences. In contrast, the collective-level Gini coefficient (Figure 1b) demonstrates a positive correlation with η for most models. The magnitude of the decrease in collective diversity reaches 28% for LightGCN. Interestingly, the most accurate recommender system – and the one that most enhances individual exploration, NeuMF – shows an initial drop in collective diversity, which later recovers when η = 1, returning to levels similar to those observed at η = 0. Aside from this case, all other recommender systems combine increased individual exploration with decreased collective diversity.

For a more granular analysis, Figure 1(c, d) disaggregates the average individual Gini coefficient by engagement segments, defined according to purchasing volume in the training set: heavy buyers (top 10% of users, i.e., 10th decile), medium buyers (middle 80%), and light buyers (bottom 10%, i.e., 1st decile). We focus on the two models with the most distinct performance profiles, NeuMF and LightGCN. We find that the recommender system's ability to drive diversity varies substantially across user groups. The observed increase in individual diversity is predominantly attributable to the medium and heavy buyer segments, while the purchase diversity of light buyers remains nearly invariant with respect to η for both models (see Figure 1(c, d)). This suggests that for low-engagement users, the models lack the necessary resolution in their learned representations to provide effective recommendations, thereby limiting their impact.

Concentration of purchases  We compare the frequency-rank curves derived from the training set with those obtained from the simulation data, which include the training set plus the subsequent 24 epochs of simulated purchases. These curves are presented in Figure 2, with panel (a) corresponding to purchase volume and panel (b) to item popularity. We focus on the two recommender systems that exhibit the most divergent behaviours for η = 1 (NeuMF and LightGCN) to make the effects more pronounced. We find that LightGCN populates its recommendation set exclusively with the top 20 most popular items, leading to a stark drop-off in both the purchase volume and the popularity of items beyond the 20th rank (see red curves in Figure 2). In practice, both the purchase volume and the number of buyers for items ranked 21st and onwards remain stuck at their initial values in the training set (grey curve in Figure 2). The purchase dynamics driven by the NeuMF recommendations reveal a more nuanced pattern: the purchase volume distribution is overall unaffected in shape (consistent with the nearly invariant collective diversity for this recommender system), while the popularity distribution undergoes a concentration of mass, gaining a sharper peak and a heavier tail (see green curves in Figure 2).

Figure 2: Purchase redistribution for η = 1. Frequency-rank curves for (a) purchase volume and (b) item popularity. Curves show the initial state (training set, grey) and the final state after 24 simulation epochs for NeuMF (green) and LightGCN (red).

In particular, NeuMF stimulates increased purchasing across the entire spectrum of items, resulting in a quasi-uniform boost to purchase volumes. A key insight emerges from comparison with the popularity-rank curve: while niche, lower-ranked items see an increase in total purchase volume, they fail to attract a broader customer base (see Figure 2b). This is evidenced by the tail of the item popularity curve collapsing onto that of the grey baseline, indicating that there is no significant gain in the number of unique purchasers along the simulation. Consequently, the observed growth in volume for these niche items is primarily driven by existing users who had previously interacted with them. This stands in stark contrast to medium and top-selling items, which experience a concurrent boost in both purchase volume and popularity.
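The frequency-rank curves of Figure 2 are straightforward to derive from the per-item statistics; a sketch, reusing the hypothetical purchases mapping introduced earlier:

```python
import numpy as np

def popularity(purchases):
    """p_i: number of distinct users who purchased item i at least once."""
    p = {}
    for items in purchases.values():
        for i, w in items.items():
            if w > 0:
                p[i] = p.get(i, 0) + 1
    return p

def frequency_rank(values):
    """Descending frequency-rank curve for item strengths s_i or popularities p_i."""
    v = np.sort(np.asarray(list(values), dtype=float))[::-1]
    return np.arange(1, len(v) + 1), v

# e.g., ranks, vol = frequency_rank(strengths.values())              # panel (a)
#       ranks, pop = frequency_rank(popularity(purchases).values())  # panel (b)
```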
To illustrate the effect of increasing concentration, Figure 3 compares the co-purchase networks – where nodes represent item categories and weighted edges indicate how frequently two categories are purchased together – for LightGCN at epoch 0 and epoch 24. At epoch 0, the network displays a relatively balanced topology, with low variance in node degrees and edge weights. After 24 epochs, a pronounced core-periphery structure emerges, characterized by a dominant hub of popular items (e.g., vitamins). This indicates a clear systemic concentration of purchases as the feedback loop unfolds over 24 epochs.

Figure 3: Co-purchase networks before and after the feedback loop unfolds (LightGCN, η = 1, 24 epochs). Each network includes a sample of items from different popularity levels. Nodes represent items (sized by total purchases), and edges connect items with shared buyers (thicker edges mean more shared purchasers).

User homogenization  Figure 4a shows the evolution of user similarity as a function of η across different recommendation systems. We find that LightGCN and MultiVAE exert the strongest homogenizing force: as the adoption rate approaches η = 1, the average similarity across users increases sharply, reaching a maximum increase of 110% for LightGCN. This indicates that these recommender systems consistently push individuals toward the same popular items. BPR and SpectralCF also induce convergence, though to a lesser degree, suggesting that their latent representations capture a broader but still overlapping range of preferences. In contrast, ItemKNN and NeuMF produce the opposite (if slight) tendency: as adoption intensifies, the similarity curve declines, with a maximum reduction of 33% for NeuMF. This implies that when users rely entirely on recommendations from these systems, their trajectories remain more differentiated.

Figure 4: User homogenization. (a) User similarity as a function of adoption rate (η): LightGCN and MultiVAE strongly increase homogenization, BPR and SpectralCF show moderate growth, while NeuMF and ItemKNN reduce it at high adoption. (b) For η = 0.8, NeuMF remains stable, while LightGCN steadily amplifies user similarity over time.

Figure 4b reports the evolution of user similarity over the 24 epochs for a fixed adoption rate (η = 0.8), contrasting the two most divergent models: NeuMF and LightGCN. NeuMF exhibits a flat trajectory in which user similarity remains stable over time, preserving heterogeneity across users. In contrast, LightGCN shows a steady upward trend: epoch after epoch, user choices grow more aligned, underscoring its strong tendency to channel demand toward the same set of items and to erode heterogeneity across users.
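The similarity curves of Figure 4 follow directly from Eq. (5); a minimal sketch, assuming item_sets is a list with one set of items per user:

```python
from itertools import combinations

def jaccard(a, b):
    """Eq. (5): Jaccard index between two users' item sets."""
    a, b = set(a), set(b)
    union = len(a | b)
    return len(a & b) / union if union else 0.0

def mean_jaccard(item_sets):
    """Mean Jaccard over all unordered user pairs. Recomputing this on the
    interactions accumulated up to each epoch yields curves like Figure 4b.
    Quadratic in the number of users, so subsample pairs for large U."""
    pairs = list(combinations(item_sets, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs) if pairs else 0.0
```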
Overall, these findings reveal that homogenization is model-dependent. Recommender systems such as LightGCN or MultiVAE foster behavioral convergence by over-amplifying structural popularity signals. Systems like NeuMF, which balance linear and nonlinear components of user–item interactions, better preserve heterogeneity even under high recommendation adoption. This divergence has critical implications: while all recommenders affect diversity, only certain models systematically erode distinctiveness in user behavior, thus magnifying the risks of cultural uniformity and reduced exposure to niche content.

7 Discussion and Conclusions

This paper introduced a simulation framework to model the feedback loop of recommender systems in online retail environments and to investigate its systemic effects. We find a crucial trade-off between individual and collective diversity: as recommendation adoption increases, individual diversity increases while collective diversity declines sharply. Only NeuMF – a neural model achieving the highest performance among those tested – preserves collective diversity at levels comparable to those observed in the training data. These results confirm the empirical findings of Lee and Hosanagar [2018], who showed through controlled empirical experiments on an online retail platform that collaborative filtering increases individual diversity while reducing collective diversity. Our study extends their work by showing that this pattern holds across a wide range of recommender systems and becomes more pronounced as the adoption rate of the recommender increases.

We also observe an increasing concentration of purchase volume and item popularity as the feedback loop unfolds: the apparent rise in individual diversity primarily translates into repeated purchases of a few already popular items. This pattern mirrors recent findings on feedback loops in location-based recommender systems Mauro et al. [2025], suggesting that such systemic effects may emerge universally across different types of online platforms. Finally, we find that user homogenization effects are model-dependent: some recommender systems strongly align user purchase behaviors, while others introduce a modest increase in behavioral heterogeneity. This suggests that certain systemic patterns arise inherently from the recursive nature of the feedback loop, whereas others are shaped by the specific training dynamics of each recommender model.

Our study has some limitations. First, although the choice model accounts for repeated consumption and evolving awareness, it abstracts away important behavioral factors such as price sensitivity, brand loyalty, and novelty-seeking – all of which are critical in retail settings. Future work should incorporate these dimensions to improve ecological validity. Second, while the framework is grounded in large-scale purchase histories, its simulation-based nature calls for external validation using longitudinal observational or experimental data to assess generalizability. Finally, the frequency of recommender retraining is also likely to influence systemic outcomes, a factor we plan to investigate in future work.
A promising research direction emerging from our study concerns the design of interventions within the feedback loop to mitigate the observed systemic effects. For instance, re-ranking strategies could be introduced to progressively enhance collective diversity while reducing user homogenization and purchase concentration over time. In conclusion, our simulation framework opens new avenues for studying the coevolution between algorithmic decision support and user choices on online retail platforms, contributing to a deeper understanding of how digital platforms shape, and are shaped by, human behaviour.

References

A. Berke, D. Calacci, and R. Mahari. Open e-commerce 1.0, five years of crowdsourced U.S. Amazon purchase histories with user demographics, 2024. URL https://doi.org/10.1038/s41597-024-03329-6.

Charu C. Aggarwal. Recommender Systems: The Textbook. Springer Publishing Company, Incorporated, 1st edition, 2016. ISBN 3319296574.

Ashton Anderson, Lucas Maystre, Ian Anderson, Rishabh Mehrotra, and Mounia Lalmas. Algorithmic effects on the diversity of consumption on Spotify. In Proceedings of The Web Conference 2020 (WWW '20), pages 2155–2165. ACM, 2020.

Sinan Aral, Erik Brynjolfsson, and Marshall Van Alstyne. Information, technology, and information worker productivity. Information Systems Research, 23(3):849–867, 2012.

Guy Aridor, Duarte Goncalves, Daniel Kluver, Ruoyan Kong, and Joseph Konstan. The informational role of online recommendations: Evidence from a field experiment, 2024a. Working Paper.

Guy Aridor, Ruoyan Kong, and Joseph A. Konstan. Fairness-aware recommendation through simulation-based evaluation. In Proceedings of the 18th ACM Conference on Recommender Systems (RecSys '24), 2024b.

Christine Bauer and Dietmar Jannach. Balancing diversity and relevance in music recommendation. In Proceedings of the 15th ACM Conference on Recommender Systems (RecSys '21), pages 179–188, 2021.

Alex Berke, Dan Calacci, Robert Mahari, Takahiro Yabe, Kent Larson, and Sandy Pentland. Open e-commerce 1.0, five years of crowdsourced US Amazon purchase histories with user demographics. Scientific Data, 11(1):491, 2024.

Shihao Cai, Jizhi Zhang, Keqin Bao, Chongming Gao, Qifan Wang, Fuli Feng, and Xiangnan He. Agentic feedback loop modeling improves recommendation and user simulation, 2025. URL https://arxiv.org/abs/2410.20027.

Pedro G. Campos, Fernando Díez, and Iván Cantador. Temporal evaluation of recommender systems on historical data. In Proceedings of the 4th ACM Conference on Recommender Systems, pages 81–88, 2010.

Nicolò Cesa-Bianchi, Claudio Gentile, Gábor Lugosi, and Gergely Neu. Boltzmann exploration done right, 2017. URL https://arxiv.org/abs/1705.10257.

Allison J. B. Chaney, Brandon M. Stewart, and Barbara E. Engelhardt. How algorithmic confounding in recommendation systems increases homogeneity and decreases utility. In Proceedings of the 12th ACM Conference on Recommender Systems, pages 224–232. ACM, September 2018a. doi: 10.1145/3240323.3240370. URL http://dx.doi.org/10.1145/3240323.3240370.

Allison J. B. Chaney, Brandon M. Stewart, and Barbara E. Engelhardt. How algorithmic confounding in recommendation systems increases homogeneity and decreases utility. In Proceedings of the 12th ACM Conference on Recommender Systems (RecSys '18), pages 224–232, 2018b.
Erica Coppolillo, Simone Mungari, Ettore Ritacco, Francesco Fabbri, Marco Minici, Francesco Bonchi, and Giuseppe Manco. Algorithmic drift: A simulation framework to study the effects of recommender systems on user preferences, 2024. URL https://arxiv.org/abs/2409.16478.

Anita Elberse. Should you invest in the long tail? Harvard Business Review, 86(7/8):88–96, 2008.

Daniel Fleder and Kartik Hosanagar. Blockbuster culture's next rise or fall: The impact of recommender systems on sales diversity. Management Science, 55(5):697–712, 2009.

Daniel Fleder, Kartik Hosanagar, and Andreas Buja. Recommender systems and their effects on consumers: the fragmentation debate. In Proceedings of the 11th ACM Conference on Electronic Commerce, EC '10, pages 229–230, New York, NY, USA, 2010. Association for Computing Machinery. ISBN 9781605588223. doi: 10.1145/1807342.1807378. URL https://doi.org/10.1145/1807342.1807378.

Avi Goldfarb and Catherine Tucker. Online display advertising: Targeting and obtrusiveness. Marketing Science, 30(3):389–404, 2011.

Andrew Guess, Jonathan Nagler, and Joshua Tucker. The filter bubble hypothesis: Experimental evidence. Science Advances, 9(14):eade0140, 2023.

Mario Haim, Andreas Graefe, and Hans-Bernd Brosius. Burst of the filter bubble? Effects of personalization on the diversity of Google News. Digital Journalism, 6(3):330–343, 2018.

Casper Hansen, Rishabh Mehrotra, James McInerney, and Asia J. Biega. Correcting feedback loops in recommendation with simulation-based debiasing. In Proceedings of the 17th ACM Conference on Recommender Systems (RecSys '23), pages 411–421, 2023.

Maryam Hazrati and Francesco Ricci. Bias amplification in recommender systems: Simulation study. In Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization (UMAP '21), pages 240–244, 2021.

Naieme Hazrati and Francesco Ricci. Recommender systems effect on the evolution of users' choices distribution. Information Processing and Management, 59(1):102766, 2022. ISSN 0306-4573. doi: 10.1016/j.ipm.2021.102766. URL https://www.sciencedirect.com/science/article/pii/S0306457321002466.

Naieme Hazrati, Mehdi Elahi, and Francesco Ricci. Simulating the impact of recommender systems on the evolution of collective users' choices. In Proceedings of the 31st ACM Conference on Hypertext and Social Media, HT '20, pages 207–212, New York, NY, USA, 2020. Association for Computing Machinery. ISBN 9781450370981. doi: 10.1145/3372923.3404812. URL https://doi.org/10.1145/3372923.3404812.

Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, and Tat-Seng Chua. Neural collaborative filtering, 2017. URL https://arxiv.org/abs/1708.05031.

Xiangnan He, Kuan Deng, Xiang Wang, Yan Li, Yongdong Zhang, and Meng Wang. LightGCN: Simplifying and powering graph convolution network for recommendation, 2020. URL https://arxiv.org/abs/2002.02126.

Natali Helberger, Kari Karppinen, and Lucia D'Acunto. On the democratic role of news recommenders. Digital Journalism, 7(8):993–1012, 2019.

Natali Helberger, Damian Trilling, Judith Moeller, and Bedir Tekinerdogan. Fairness in online personalization: Perspectives from communication and journalism studies. Digital Journalism, 9(2):236–255, 2021.

Oliver Hinz, Xitong Li, and Jörn Grahl. How do recommender systems lead to consumer purchases? A causal mediation analysis of a field experiment. Information Systems Research, 33(2):620–637, 2021. doi: 10.1287/isre.2021.1074.

David Holtz, Ben Carterette, Praveen Chandar, Zahra Nazari, Henriette Cramer, and Sinan Aral. The engagement-diversity connection: Evidence from a field experiment on Spotify. In Proceedings of the 21st ACM Conference on Economics and Computation (EC '20), pages 75–76. ACM, 2020.
Felix Holzmeister et al. Examining the replicability of online experiments. Nature Human Behaviour, 2024. Demonstrates challenges and replication failures in web-based experiments due to platform and API limitations.

Ferenc Huszár, Sofia Ira Ktena, Conor O'Brien, Luca Belli, Andrew Schlaikjer, and Moritz Hardt. Algorithmic amplification of politics on Twitter. Proceedings of the National Academy of Sciences, 119(1):e2025334119, 2022.

Ray Jiang, Silvia Chiappa, Tor Lattimore, András György, and Pushmeet Kohli. Degenerate feedback loops in recommender systems. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 4275–4282, 2019.

Yanan Jiang, Hao Wang, Kuan-Chuan Lee, Jun Liu, Meng Wang, and Jieping Ye. Temporal collaborative filtering with Bayesian probabilistic tensor factorization. ACM Transactions on Information Systems, 40(4):1–30, 2022.

Erik Knudsen. Modeling news recommender systems' conditional effects on selective exposure: Evidence from two online experiments. Journal of Communication, 73(2):138–149, 2023.

Dokyun Lee and Kartik Hosanagar. How do recommender systems affect sales diversity? A cross-category investigation via randomized field experiment. Information Systems Research, 30(1):239–259, 2018.

Ro'ee Levy. Social media, news consumption, and polarization: Evidence from a field experiment. American Economic Review, 111(3):831–870, 2021.

Dawen Liang, Rahul G. Krishnan, Matthew D. Hoffman, and Tony Jebara. Variational autoencoders for collaborative filtering. In Proceedings of the 2018 World Wide Web Conference, pages 689–698, 2018.

Felicia Loecherbach, Kasper Welbers, Judith Moeller, Damian Trilling, and Wouter Van Atteveldt. Is this a click towards diversity? Explaining when and why news users make diverse choices. In Proceedings of the 13th ACM Web Science Conference 2021 (WebSci '21), pages 282–290. ACM, 2021.

Masoud Mansoury, Himan Abdollahpouri, Mykola Pechenizkiy, Bamshad Mobasher, and Robin Burke. Feedback loop and bias amplification in recommender systems, 2020a. URL https://arxiv.org/abs/2007.13019.

Masoud Mansoury, Himan Abdollahpouri, Mykola Pechenizkiy, Bamshad Mobasher, and Robin Burke. Feedback loops in recommender systems: Quantification and mitigation. In Proceedings of the 29th ACM International Conference on Information and Knowledge Management (CIKM '20), pages 2145–2148, 2020b.

Masoud Mansoury, Himan Abdollahpouri, Mykola Pechenizkiy, Bamshad Mobasher, and Robin Burke. Popularity bias in recommendation: A multi-stakeholder perspective. In Proceedings of the 13th ACM Conference on Recommender Systems (RecSys '20), pages 342–346, 2020c.

John Mattis, Robin Burke, and Himan Abdollahpouri. Modeling user bias evolution in recommender systems. In Proceedings of the 16th ACM Conference on Recommender Systems (RecSys '22), pages 648–652, 2022.

Giovanni Mauro, Marco Minici, and Luca Pappalardo. The urban impact of AI: Modeling feedback loops in next-venue recommendation, 2025. URL https://arxiv.org/abs/2504.07911.

Judith Moeller, Natali Helberger, and Damian Trilling. Diversity by design: Investigating the potential of diversity-aware recommender systems in journalism. Journalism Studies, 22(11):1469–1488, 2021.

Simone Mungari, Erica Coppolillo, Ettore Ritacco, and Giuseppe Manco. Flexible generation of preference data for recommendation analysis. In Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining V.2, KDD '25, pages 5710–5721, New York, NY, USA, 2025. Association for Computing Machinery. ISBN 9798400714542. doi: 10.1145/3711896.3737398. URL https://doi.org/10.1145/3711896.3737398.
Luca Pappalardo, Emanuele Ferragina, Salvatore Citraro, Giuliano Cornacchia, Mirco Nanni, Giulio Rossetti, Gizem Gezici, Fosca Giannotti, Margherita Lalli, Daniele Gambetta, et al. A survey on the impact of AI-based recommenders on human behaviours: methodologies, outcomes and future directions. arXiv preprint arXiv:2407.01630, 2024.

Bhavik Pathak, Robert Garfinkel, Ram D. Gopal, Rajkumar Venkatesan, and Fang Yin. Empirical analysis of the impact of recommender systems on sales. Journal of Management Information Systems, 27(2):159–188, 2010. ISSN 07421222. URL http://www.jstor.org/stable/29780174.

Dino Pedreschi, Luca Pappalardo, Emanuele Ferragina, Ricardo Baeza-Yates, Albert-László Barabási, Frank Dignum, Virginia Dignum, Tina Eliassi-Rad, Fosca Giannotti, János Kertész, et al. Human-AI coevolution. Artificial Intelligence, 339:104244, 2025.

Behnam Rahdari, Peter Brusilovsky, and Branislav Kveton. Towards simulation-based evaluation of recommender systems with carousel interfaces. ACM Transactions on Recommender Systems, 2024. doi: 10.1145/3643709.

Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. BPR: Bayesian personalized ranking from implicit feedback. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, UAI '09, pages 452–461, Arlington, Virginia, USA, 2009. AUAI Press. ISBN 9780974903958.

Manoel Horta Ribeiro, Raphael Ottoni, Robert West, Virgílio A. F. Almeida, and Wagner Meira Jr. Auditing radicalization pathways on YouTube. In Proceedings of the 2020 ACM Conference on Fairness, Accountability, and Transparency (FAT* '20), pages 131–141. ACM, 2020.

Pablo Sánchez, Alejandro Bellogín, and Ludovico Boratto. Bias characterization, assessment, and mitigation in location-based recommender systems. Data Mining and Knowledge Discovery, 37(5):1885–1929, 2023.

Badrul Sarwar, George Karypis, Joseph Konstan, and John Riedl. Item-based collaborative filtering recommendation algorithms. In Proceedings of the 10th International Conference on World Wide Web, WWW '01, pages 285–295, New York, NY, USA, 2001. Association for Computing Machinery. ISBN 1581133480. doi: 10.1145/371920.372071. URL https://doi.org/10.1145/371920.372071.

Alina Sîrbu, Dino Pedreschi, Fosca Giannotti, and János Kertész. Algorithmic bias amplifies opinion fragmentation and polarization: A bounded confidence model. PLoS ONE, 14(3):e0213246, 2019.

Nasim Sonboli, Robin Burke, Michael Ekstrand, and Rishabh Mehrotra. The multisided complexity of fairness in recommender systems. AI Magazine, 43(2):164–176, June 2022. ISSN 0738-4602. doi: 10.1002/aaai.12054. URL https://doi.org/10.1002/aaai.12054.

Ruixuan Sun, Avinash Akella, Ruoyan Kong, Moyan Zhou, and Joseph A. Konstan. Interactive content diversity and user exploration in online movie recommenders: A field experiment. International Journal of Human–Computer Interaction, 40(22):7233–7247, 2024.

Wenlong Sun, Sami Khenissi, Olfa Nasraoui, and Patrick Shafto. Debiasing the human-recommender system feedback loop in collaborative filtering. In Companion Proceedings of The 2019 World Wide Web Conference, WWW '19, pages 645–651, New York, NY, USA, 2019. Association for Computing Machinery. ISBN 9781450366755. doi: 10.1145/3308560.3317303. URL https://doi.org/10.1145/3308560.3317303.
Yitong Wang, Tianshu Sun, Zhe Yuan, and AJ Yuan Chen. How recommendation affects customer search: A field experiment. Information Systems Research, 36(1):85–106, 2024.

Mengyue Yang, Quanyu Dai, Zhenhua Dong, Xu Chen, Xiuqiang He, and Jun Wang. Top-n recommendation with counterfactual user preference simulation. Pages 2342–2351, October 2021. doi: 10.1145/3459637.3482305.

Sirui Yao, Yoni Halpern, Nithum Thain, Xuezhi Wang, Kang Lee, Flavien Prost, Ed H. Chi, Jilin Chen, and Alex Beutel. Measuring recommender system effects with simulated users, 2021. URL https://arxiv.org/abs/2101.04526.

Yan Zeng, Yuan Fang, and Luo Si. User behavior modeling in recommender system simulations. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '20), pages 1509–1512, 2020.

Yan Zeng, Zhenzhong Chen, Jianyuan Sun, Yuan Fang, and Luo Si. RecWalk: Nearly unbiased walk-based recommendation via Markov chains. In Proceedings of the 14th ACM International Conference on Web Search and Data Mining (WSDM '21), pages 367–375, 2021.

Wayne Xin Zhao, Shanlei Mu, Yupeng Hou, Zihan Lin, Yushuo Chen, Xingyu Pan, Kaiyuan Li, Yujie Lu, Hui Wang, Changxin Tian, Yingqian Min, Zhichao Feng, Xinyan Fan, Xu Chen, Pengfei Wang, Wendi Ji, Yaliang Li, Xiaoling Wang, and Ji-Rong Wen. RecBole: Towards a unified, comprehensive and efficient framework for recommendation algorithms, 2021. URL https://arxiv.org/abs/2011.01731.

Lei Zheng, Chun-Ta Lu, Fei Jiang, Jiawei Zhang, and Philip S. Yu. Spectral collaborative filtering. In Proceedings of the 12th ACM Conference on Recommender Systems, RecSys '18, pages 311–319, New York, NY, USA, 2018. Association for Computing Machinery. ISBN 9781450359016. doi: 10.1145/3240323.3240343. URL https://doi.org/10.1145/3240323.3240343.

Tao Zhou, Zoltán Kuscsik, Jian-Guo Liu, Matúš Medo, Joseph Rushton Wakeling, and Yi-Cheng Zhang. Solving the apparent diversity-accuracy dilemma of recommender systems. Proceedings of the National Academy of Sciences, 107(10):4511–4515, 2010. doi: 10.1073/pnas.1000488107. URL https://www.pnas.org/doi/abs/10.1073/pnas.1000488107.
A SIMULATION FRAMEWORK FOR STUDYING SYSTEMIC EFFECTS OF FEEDBACK LOOPS IN RECOMMENDER SYSTEMS Gabriele Barlacchi Scuola Normale Superiore Pisa, Italy Margherita Lalli Scuola Normale Superiore Pisa, Italy Emanuele Ferragina Sciences Po Paris Fosca Giannotti Scuola Normale Superiore Pisa, Italy Luca Pappalardo CNR and Scuola Normale Superiore Pisa, Italy October 17, 2025 ABSTRACT Recommender systems continuously interact with users, creating feedback loops that shape both individual behavior and collective market dynamics. This paper introduces a simulation framework to model these loops in online retail environments, where recommenders are periodically retrained on evolving user-item interactions. Using the Amazon e-Commerce dataset, we analyze how different recommendation algorithms influence diversity, purchase concentration, and user homogenization over time. Results reveal a systematic trade-off: while the feedback loop increases individual diversity, it simultaneously reduces collective diversity and concentrates demand on a few popular items. Moreover, for some recommender systems, the feedback loop increases user homogenization over time, making user purchase profiles increasingly similar. These findings underscore the need for recommender designs that balance personalization with long-term diversity. Keywords Feedback loop, Recommender systems, Diversity, Homogenization, Simulation, Human-AI Coevolution 1 Introduction Recommender systems constitute a pervasive layer of algorithmic mediation, increasingly shaping how individuals interact with digital environments. On social media, they surface relevant posts and connections; in online retail, they suggest products that users are likely to purchase; and in location-based services, they propose efficient routes or destinations tailored to individual preferences. Because recommender systems are powered by AI, their interaction with users naturally gives rise to a feedback loop: users' choices shape the data on which recommenders are trained, and the trained models in turn influence subsequent user decisions Pedreschi et al. [2025]. This recursive process continually feeds back into itself, creating an evolving and potentially unbounded cycle. The feedback loop can amplify existing biases and generate various systemic effects, many of which are unintended Pedreschi et al. [2025], Pappalardo et al. [2024]. On social media, the feedback loop may reinforce echo chambers, filter bubbles, and even processes of radicalization Guess et al. [2023], Huszár et al. [2022], Sîrbu et al. [2019]; in online retail and location-based platforms, it can exacerbate popularity bias Lee and Hosanagar [2018], Aridor et al. [2024a], Mauro et al. [2025], Sánchez et al. [2023]. Empirically studying feedback loops is challenging as experiments on real online platforms are limited by platform restrictions, short observation horizons, and poor reproducibility Holzmeister et al. [2024]. Consequently, most research relies on simulations, which enable controlled exploration of mechanisms and counterfactual scenarios Mansoury et al. [2020a], Yang et al. [2021], Chaney et al. [2018a]. However, existing simulation approaches often rely on oversimplified 16 Oct 2025 assumptions, such as the exclusive use of explicit feedback (e.g., user ratings) Yao et al. [2021], the absence of periodic retraining of the recommender system Cai et al. [2025], and the consideration of only a limited set of recommendation algorithms Sun et al. [2019]. 
The difficulty of empirically studying and simulating feedback loops has limited our understanding of their systemic effects. In online retail, for instance, recommender systems are optimized for predictive accuracy and tend to favor popular items that are more likely to be purchased. While this typically increase overall sales volume from the platform's perspective Pappalardo et al. [2024], PATHAK et al. [2010], it may also induce behavioral homogenization - users increasingly buying the same items - and concentrate demand on a small subset of products. Over time, these effects can distort online retail ecosystems and risk undermining market fairness and long-term system resilience. Yet, evidence on how the feedback loop leads to these systemic effects remains sparse and often contradictory Pappalardo et al. [2024], Sonboli et al. [2022], Fleder et al. [2010]. In this paper, we propose an open-source simulation framework that models feedback loops in an online-retail-like environment, where recommender systems are periodically retrained on the evolving user-item interaction data generated through purchases. The framework generates synthetic trajectories of user-item interactions over time, enabling detailed analysis of the systemic effects that emerge from the feedback loop. Unlike existing frameworks based on explicit feedback (e.g., user ratings), our framework uses implicit feedback (e.g., purchases), better reflecting real online retail operations and the recent shift toward implicit feedback modeling Aggarwal [2016]. The simulation framework is flexible and supports different recommendation systems to allow systematic comparison across algorithms. We implement a wide set of recommendation systems and conduct extensive experiments using the open Amazon e-commerce 1.0 dataset A. Berke and Mahari [2024] to examine how the simulated feedback loop shapes systemic properties such as purchase diversity, the concentration of purchase volume and item popularity across items, and behavioural homogenization across the user base. We discover that: • The feedback loop broadens individual purchase profiles while concentrating demand at the collective level. • The exploratory boost induced by recommender systems is driven primarily by medium and heavy buyers, with minimal impact on light buyers. • The feedback loop generally increases the concentration of purchases on a small set of items across most recommender systems. • User homogenization effects are model-dependent: some recommender systems amplify behavioral similarity, while others preserve heterogeneity. Our results demonstrate the utility of simulations to disentangle the systemic impacts of recommendations, and highlight the need to design algorithms that balance personalization with unintended long-term effects. 2 Related work This section reviews research on how recommender systems influence user behavior, primarily in online retail, media, and news. The literature employs two approaches Pappalardo et al. [2024]: online experiments analyse real-world user interactions under controlled conditions and offer ecological validity, but face constraints from platform policies, short observation windows, and limited scalability; simulations model user-system dynamics to explore hypothetical effects and counterfactual scenarios, but rely on assumptions that may limit external validity. Online Experiments Empirical findings on consumption diversity are often conflicting. 
On the one hand, recommender systems can expand individual exploration but simultaneously concentrate demand at the aggregate level Fleder and Hosanagar [2009], Lee and Hosanagar [2018]. On the other hand, recommendations may narrow user choices, steering individuals toward more homogeneous item sets Aridor et al. [2024a], Wang et al. [2024] and reduce listening diversity over time Anderson et al. [2020], Holtz et al. [2020]. Research has also identified mechanisms through which recommendations influence decisions, including effects on item discovery, purchase behavior Hinz et al. [2021], and quality perceptions Aridor et al. [2024a] with broader implications for willingness to pay and market concentration Goldfarb and Tucker [2011], Aral et al. [2012], Elberse [2008]. Interface design moderates these effects, as shown in movie recommendation studies Sun et al. [2024]. News recommendation research highlights conditional effects: algorithmic exposure can amplify polarization Levy [2021] or reinforce filter bubbles depending on user predispositions and design choices Knudsen [2023], Loecherbach et al. [2021], Haim et al. [2018]. Diversity-aware frameworks have been proposed to address democratic concerns Helberger et al. [2021], Moeller et al. [2021]. Even large-scale experiments reach opposite conclusions: some find no short-term 2 filter-bubble effects on political attitudes Guess et al. [2023] while others document increased exposure to extreme content through recommendation chains Ribeiro et al. [2020]. Simulations By isolating mechanisms and testing counterfactuals, simulations reveal feedback dynamics unfolding over extended periods. Early work demonstrates that algorithmic reinforcement narrows exposure and reduces systemlevel diversity Jiang et al. [2019] and repeated recommendations lead to behavioral homogenization. Mansoury et al. Mansoury et al. [2020b] formalize how sustained exposure shapes user preferences rather than merely reflecting them. Agent-based models Chaney et al. [2018b], Mansoury et al. [2020c] reveal how small initial biases propagate into large-scale imbalances through mutual user-algorithm adaptation. Beyond digital contexts, recent work Zeng et al. [2021], Mauro et al. [2025] show that recommendation feedback loops extend into urban spatial behaviour. Simulations also serve as testbeds for interventions. Helberger et al. Helberger et al. [2019] propose frameworks to balance diversity and autonomy in news, and Bauer et al. Bauer and Jannach [2021] investigate alternative strategies for music recommendation, and Coppolillo et al. [2024] proposes a framework for systematically evaluating recommender systems, with a particular focus on measuring drift in behavioral patterns and biases. Interface design studies demonstrate how presentation shapes exposure Rahdari et al. [2024]. Diversity-aware ranking and nudging strategies have been tested in simulation before deployment Hazrati and Ricci [2021], Mattis et al. [2022]. Bias amplification under feedback loops has received particular attention. Hazrati and Ricci Hazrati and Ricci [2021] show how user biases distort evaluation metrics, with Mattis et al. Mattis et al. [2022] demonstrating worsening effects over time. To address these risks, Hansen et al. Hansen et al. [2023] introduce debiasing techniques, and Aridor et al. Aridor et al. [2024b] examine fairness-aware strategies accounting for systemic disparities. Finally, modeling choices critically shape simulation outcomes. Zeng et al. Zeng et al. 
[2020] demonstrate that behavioral model selection (e.g., multinomial logit vs. bounded rationality) significantly alters diversity predictions. While Mungari et al. [2025] introduce a flexible framework for generating synthetic preference data to support recommendation analysis, showing the differences among all the possible approaches. Collectively, these contributions establish simulation as a bridge between theory and practice, enabling study of long-term dynamics hidden in short-term evaluations. 3 Modelling of the Feedback Loop We consider a setting in which a recommender system on an online retail platform assists users by suggesting relevant items derived from historical interaction data. In this section, we first introduce the notation used throughout the paper (Section 3.1), then describe the simulation of the user-recommender feedback loop (Section 3.2), and finally present the user choice model adopted in our framework (Section 3.3). 3.1 Definitions Let U be a set of users, I a set of items and T = {1, 2..., T} a sequence of discrete, ordered time steps. We define a model scoring function R : U ×I ×T →R that assigns a relevance score R(u, i, t) to every user-item-time triplet (u, i, t) ∈U × I × T . This score is then used by a recommender function ρ : U × T →Pk(I), where Pk(I) denotes the set of all ordered k-subsets of I. For a given user u and time t, this function generates a ranked list of k items, denoted Ku,t, by selecting the items with highest scores: ρ(u, t) := Ku,t = top-k i∈I R(u, i, t). Users can also make purchases autonomously, independent of the recommender's suggestions. For these autonomous choices, a user's selection is constrained not by the full catalog I but by a user- and time-specific candidate set, denoted Cu,t. This set represents the subset of items that user u is aware of at time t and constitutes the available choice pool for organic interactions. The user scoring function is then a function ˆR whose domain is the set of all admissible user-item-time triplets where the item is available to the user: ˆR : {(u, i, t) ∈U × I × T | i ∈Cu,t} →R. To each triplet (u, i, t) in this domain, the function assigns a score ˆR(u, i, t). 3.2 Simulation framework The simulation begins with an initialization phase, in which a real-world dataset is used to train both the model scoring function and the user scoring function. 3 The simulation is initialized at time t0 and proceeds by generating user choices at each subsequent time step t ∈[t0, T]. At each simulation step t ∈[t0, T], a subset of users Ut ⊆U is awakened. Each active user u ∈Ut selects a basket of items, where the number of items selected, denoted by bu,t, is the user's basket size for that step. Each user-item interaction is modelled as follows: 1. With probability η, the user u selects an item from the ranked list Ku,t provided by the recommender. This item is drawn from Ku,t with a probability proportional to the scores {R(u, i, t)}i∈Ku,t. 2. With probability 1 -η, u selects an item i from the candidate set Cu,t, according to a probability proportional to the score ˆR(u, i, t) computed from the user choice model (described in the next section). We partition the simulation timeline [t0, T] into discrete, non-overlapping intervals termed epochs. Each epoch ∆j (with j ranging from 1 to n) comprises a sequence of simulation steps, where the number of steps may vary between epochs. 
At the end of each epoch ∆j, the model scoring function, the choice model, the candidate set and the ranked list suggested by the recommender are updated. Algorithm 1 outlines the overall simulation framework, while Algorithm 2 details the modeling of users' next-item selection. Algorithm 1 Simulation framework Require: Historical data D; total epochs n; last initialization epoch Elast_init; retraining interval ∆; cold-start t0 ; Output Data: Interaction dataset Dpost Dinit ←ColdStart(D, t0) R ←AlgorithmTraining(Dinit) ˆR ←ChoiceModelSetUp(Dinit) Ec ←Elast_init + 1 Elast_training ←0 Dpost ←Dinit while Ec ∆then R ←AlgorithmTraining(Dpost) ˆR ←ChoiceModelSetUp(Dpost) end if end for end while Algorithm 2 Itemselection for user u Require: User u, basket size bu, Choice model ˆR, Recommender model R, Current epoch n Hyperparameters: Adoption rate η, Top ranking K Output Data: Items list UI CS ←get_candidate_set(u, n) for all i in bu do if Bernoulli(η) then i_next ←R(u, K) else i_next ←ˆR(u, CS) end if end for 4 3.3 User choice model We model autonomous user choices - i.e., the items selected when users do not follow the recommender system's suggestions - following the approach of Cesa-Bianchi et al. Cesa-Bianchi et al. [2017] and its subsequent extensions Mansoury et al. [2020a], Chaney et al. [2018a], Hazrati et al. [2020]. This model builds on item response theory and employs softmax decision rules, which are widely used in machine learning and discrete choice theory. The probability that user u selects an item i from its candidate set Cu,t at time t is given by a softmax function. We accomodate this by specifying the form of the scoring function ˆR in terms of a utility function Vu,i(t): P(i | u, t) = ˆR(u, i, t) P j∈Cu,t ˆR(u, j, t) = eVu,i(t)/τ P j∈Cu,t eVu,j(t)/τ , (1) where Vu,i(t) is the utility associated with item i for user u at time t and τ controls the exploration-exploitation balance. As τ →0, choices become deterministic toward the highest-utility item, while for τ →∞, they approach a uniform distribution. Candidate set The set of available items for each user is constructed at each time by combining items from three distinct sources: • GPop: Items ranked by global popularity (i.e., the number of interactions by all users in the current epoch); • IPop: Items ranked by individual popularity (i.e, the number of interactions from users' own purchase history up to the current epoch); • Unknown: A random sample of items with which the user has had no prior interaction. The final candidate set combines these sources with fixed proportions: 40% from GPop, 40% from IPop, and 20% from Unknown. This composition balances globally and individually popular items with exploration of unseen content. Utility estimation We adapt the approach by Mansoury et al. Mansoury et al. [2020a] and estimate the utility of item i for user u as: Vu,i(t) = cu(t) + Gu(t) · log(1 + si(t)) + λ · 1 1 + si(t) + ηu,i(t) (2) where cu denotes the average number of interactions of user u. The term Gu · log(1 + si) modulates the influence of item popularity si based on the user's interaction diversity Gu, computed as the Gini index over the user's interaction distribution. The term λ · 1 1+si introduces a controlled boost for rare items, with λ regulating the strength of this rarity effect Hazrati and Ricci [2022], Zhou et al. [2010]. Finally, ηu,i is a stochastic noise term, sampled at each timestep from a standard normal distribution. 
All the involved quantities (cu, Gu, si) are evaluated by considering all simulation steps prior to t. Such choice model serves as a structured null model that introduces a controlled degree of variability; more realistic than a simple popularity-based baseline, yet less arbitrary than random selection. 4 Measuring systemic effects We quantify the systemic effects of the feedback loop along three key dimensions: (i) purchase diversity at individual and collective levels; (ii) concentration of purchase volume and item popularity on a limited set of items; and (iii) homogenization of user purchasing behaviour. We select these measures due to the sparse and often conflicting evidence reported in the literature Pappalardo et al. [2024] and because they operationalize fundamental dimensions of diversity and concentration in algorithmic marketplaces. In the long run, reduced collective diversity, excessive homogenization, and high concentration of purchases can limit item discoverability, reinforce popularity biases, and amplify inequality among items and sellers. Such dynamics may ultimately hinder market fairness, and reduce long-term system resilience. Individual diversity We start by defining, for each user u ∈U, the purchase vector wu = {wu,i | i ∈Iu}, where wu,i is the number of times u purchased item i, and Iu the set of items in the purchase history of u. 5 We then use the Gini coefficient to quantify the inequality of weights in wu: Gu = Pdu i=1 Pdu j=1 |wui -wuj| 2d2uwu (3) where du = |Iu| is the number distinct items u has interacted with and wu is the average number of purchases made by u. The set of all individual Gini coefficients {Gu}u∈U is then summarised by the mean Gind of the distribution. Collective diversity We define a system-level vector s, where each component corresponds to the total purchase volume of a specific item: s = {si | i ∈I}, where si = X u∈U wu,i. (4) where si is denoted by strength of item i. The collective Gini coefficient, Gcoll, is then computed using Equation 3 by replacing the individual weight vector wu,i, the number of items du, and the mean weight wu with their system-level counterparts: the vector s, the total number of items |I|, and the mean purchase volume per item s. Purchase concentration To analyze the redistribution of purchases as the feedback loop evolves, we examine the frequency-rank curves of items at the end of the simulation using two key metrics: (ii) the item purchase volume si (or strength) defined in Eq. (4); and (ii) item popularity pi, i.e., the number of unique users who purchased item i. This is given by pi = P u∈U 1{wu,i>0}, where 1{wu,i>0} is an indicator function that equals 1 if user u has purchased item i at least once, and 0 otherwise. User similarity We quantify the similarity between the interaction sets of two users u, v ∈U with the Jaccard index: J(u, v) = |Iu ∩Iv| |Iu ∪Iv| (5) The Jaccard index ranges from 0, indicating no common items, to 1, indicating identical item sets. To obtain a system-level measure, we compute the mean of the Jaccard index across all unique user pairs as: ̄J(U) = 1 |P(U)| X (u,v)∈P(U) J(u, v) where P(U) is the set of unordered distinct user pairs. Items are then ranked by descending order based on si and pi, respectively, generating two independent ordered sequences {ir}r∈[1,|I|] such that i1 has the highest value and i|I| the lowest. 
5 Experiments
In this section, we describe the dataset, the recommender systems and their evaluation criteria, and the key parameters governing the simulation process.

Dataset We use the Open e-commerce 1.0 dataset Berke et al. [2024], a large-scale collection of purchase histories crowdsourced from over 5,000 consumers between 2018 and 2022. It contains 1.8 million purchase events, each including detailed attributes such as item code, category, title, price, quantity, and the buyer's shipping state, alongside anonymized user demographic information. To ensure temporal continuity during the simulation, we retain only users with at least one purchase per month over the entire simulated period. After filtering, the dataset comprises roughly one million interactions across 1,561 categories, generated by 2,191 users over a five-year period.

Recommender Systems We benchmark several recommender systems spanning classical collaborative filtering, matrix factorization, deep learning, and graph-based models (all implemented via the RecBole library Zhao et al. [2021]):
• ItemKNN Sarwar et al. [2001]: Item-based collaborative filtering that recommends items similar to those previously interacted with, using item-item co-occurrence similarity.
• MultiVAE Liang et al. [2018]: Variational autoencoder model that captures user-item interactions through latent variable inference.
• NeuMF He et al. [2017]: Neural matrix factorization approach that ranks items for each user by optimizing a pairwise probabilistic loss.
• LightGCN He et al. [2020]: Simplified graph convolutional model that propagates neighborhood information without feature transformation or nonlinearities.
• BPR Rendle et al. [2009]: Matrix factorization model trained with a Bayesian pairwise ranking loss to learn personalized item rankings.
• SpectralCF Zheng et al. [2018]: Spectral graph-based collaborative filtering leveraging the frequency domain of the user-item graph to capture global connectivity.
• MostPop: Non-personalized baseline recommending the globally most popular items.

Evaluation of recommender systems We use the first six months of the dataset as an initialization period to calibrate the user choice model and to define the training, validation, and test splits. Within this initialization phase, 80% of the available time is assigned to training (months 1-4), 10% to validation (month 5), and the remaining 10% to testing (month 6). This setup corresponds to a temporal holdout with a sliding-window strategy, a widely adopted protocol for evaluating recommender systems over time Campos et al. [2010], Jiang et al. [2022]. After initialization, we repeat the procedure iteratively: at the end of each subsequent epoch (one month of simulated data), we retrain models on the most recent four months, validate on the following month, and test on the newly generated interactions. The configuration that maximizes nDCG@10 on the validation set is selected. For hyperparameter optimization, we employ exhaustive grid search over model-specific parameter ranges including learning rate, neighborhood size, and regularization terms. As shown in Table 1, NeuMF achieves the best overall results across all metrics, followed by BPR. NeuMF outperforms the popularity-based baseline (MostPop) by +42% in nDCG@10, +35% in Recall@10, and +28% in Precision@10. MostPop performs slightly better than ItemKNN, suggesting that naive similarity-based methods struggle under temporal dynamics.
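As a concrete reading of this protocol, the sketch below enumerates the sliding train/validation/test windows by epoch index; the exact offsets are our assumption, chosen to mirror the initialization split (train on months 1-4, validate on month 5, test on month 6).

```python
def sliding_splits(first_test_epoch, last_test_epoch):
    """Yield (train_epochs, valid_epoch, test_epoch) for the temporal holdout:
    train on the most recent four months, validate on the next month,
    and test on the newly generated epoch."""
    for test_epoch in range(first_test_epoch, last_test_epoch + 1):
        train_epochs = list(range(test_epoch - 5, test_epoch - 1))  # four months
        valid_epoch = test_epoch - 1                                # one month
        yield train_epochs, valid_epoch, test_epoch

# Example: the initialization split (months 1-4 / 5 / 6) is recovered first.
for split in sliding_splits(6, 8):
    print(split)  # ([1, 2, 3, 4], 5, 6), ([2, 3, 4, 5], 6, 7), ...
```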
Simulation The dataset provides timestamps for each user-item interaction, enabling us to model simulation steps and epochs on a real temporal scale. In our setup, a single step t corresponds to one day and an epoch ∆j corresponds to one month. The initial part of the data, amounting to 6 months, is kept as an initialisation period and is used to train the recommender system and inform the user choice model. Afterwards, our simulation carries on for T = 731 days (n = 24 epochs). At each simulation step t ∈ [t0, T] (t0 is the first day after the initialization period), the set of active customers U_t and the related basket sizes {b_{u,t}}_{u∈U_t} mimic the empirical purchase sequence of the corresponding day. Consistent with typical streaming platform interfaces (10-20 items), the size of the ranked list K_{u,t} is fixed to 20 for each user and each time step. To evaluate the effects of the described recommendation systems on the purchase dynamics, we run the simulation in parallel for each model over different adoption rate (η) values: {0, 0.2, 0.4, 0.6, 0.8, 1}. The case η = 0 corresponds to the evolution of the pure user choice model without influence of the recommender, while η = 1 represents the scenario in which users always accept the suggested recommendations. To account for stochastic variability, we execute each configuration over three independent runs.

Table 1: Performance comparison of recommendation systems. Best results are in bold. All metrics are averaged over users; higher is better. N@10 = NDCG@10, P@10 = Precision@10, R@10 = Recall@10.

Model        N@10     P@10     R@10     Hit@10
MostPop      0.1398   0.0782   0.1706   0.5357
NeuMF        0.2810   0.1398   0.2825   0.6699
BPR          0.2371   0.1072   0.2305   0.6241
SpectralCF   0.2274   0.1067   0.2292   0.6241
LightGCN     0.1969   0.0941   0.2033   0.5964
MultiVAE     0.1726   0.0934   0.2010   0.5942
ItemKNN      0.1539   0.0898   0.1945   0.5729

6 Results
Individual vs Collective Diversity We compute the individual and collective Gini coefficients at the end of the 24-epoch period and evaluate how these values change as the adoption rate η increases. We find a marked dichotomy in the response of individual versus collective diversity to recommender influence. As Figure 1a shows, the average individual Gini coefficient exhibits a monotonic decrease as a function of η.

Figure 1: Individual and collective diversity. The evolution of individual (a) and collective (b) Gini coefficients as a function of adoption rate. Below, the individual Gini coefficient disaggregated by user engagement segments for two representative models: NeuMF (c) and LightGCN (d).

The magnitude of this reduction ranges from approximately 11% for LightGCN to 35% for NeuMF, indicating a consistent enhancement in the diversity of individual purchase sequences. In contrast, the collective-level Gini coefficient (Figure 1b) demonstrates a positive correlation with η for most models. The magnitude of the decrease in collective diversity reaches 28% for LightGCN. Interestingly, the most accurate recommender system - and the one that most enhances individual exploration, NeuMF - shows an initial drop in collective diversity, which later rises when η = 1, returning to levels similar to those observed at η = 0.
Aside from this case, all other recommender systems combine increased individual exploration with decreased collective diversity. For a more granular analysis, Figure 1(c, d) disaggregates the average individual Gini coefficient by engagement segments, defined according to purchasing volume in the training set: heavy buyers (top 10% of users, i.e., the 10th decile), medium buyers (middle 80%), and light buyers (bottom 10%, i.e., the 1st decile). We focus on the two models with the most distinct performance profiles, NeuMF and LightGCN. We find that the recommender system's ability to drive diversity varies substantially across user groups. The observed increase in individual diversity is predominantly attributable to the medium and heavy buyer segments, while the purchase diversity of light buyers remains nearly invariant with respect to η for both models (see Figure 1(c, d)). This suggests that for low-engagement users, the models lack the necessary resolution in their learned representations to provide effective recommendations, thereby limiting their impact.

Concentration of purchases We compare the frequency-rank curves derived from the training set with those obtained from the simulation data, which includes the training set plus the subsequent 24 epochs of simulated purchases. Such curves are presented in Figure 2, with panel (a) corresponding to purchase volume and panel (b) to item popularity. We focus on the two recommender systems that exhibit the most divergent behaviours for η = 1 (NeuMF and LightGCN) to make the effects more pronounced. We find that LightGCN populates its recommendation set exclusively with the top 20 most popular items, leading to a stark drop-off in both purchase volume and popularity for items beyond the 20th rank (see green curves in Figure 2). In practice, both the purchase volume and the number of buyers for items ranked 21st and onwards remain stuck at their initial values from the training set (dashed curve in Figure 2). The purchase dynamics driven by the NeuMF recommendations reveal a more nuanced pattern: the purchase volume distribution is overall unaffected in shape (consistent with the nearly invariant collective diversity of this recommender system), while the popularity distribution undergoes a concentration of mass, gaining a sharper peak and a heavier tail (see red curves in Figure 2).

Figure 2: Purchase redistribution for η = 1. Frequency-rank curves for the (a) purchase volume and (b) item popularity. Curves show the initial state (training set, grey) and the final state after 24 simulation epochs for NeuMF (green) and LightGCN (red).

In particular, NeuMF stimulates increased purchasing across the entire spectrum of items, resulting in a quasi-uniform boost to purchase volumes. A key insight emerges from comparison with the popularity-rank curve: while niche, lower-ranked items see an increase in total purchase volume, they fail to attract a broader customer base (see Figure 2b). This is evidenced by the tail of the item popularity curve collapsing onto that of the grey baseline, indicating no significant gain in the number of unique purchasers along the simulation. Consequently, the observed growth in volume for these niche items is primarily driven by existing users who had previously interacted with them.
This stands in stark contrast to medium and top-selling items, which experience a concurrent boost in both purchase volume and popularity. To illustrate the effect of increasing concentration, Figure 3 compares the co-purchase networks - where nodes represent item categories and weighted edges indicate how frequently two categories are purchased together - for LightGCN at epoch 0 and epoch 24. At epoch 0, the network displays a relatively balanced topology, with low variance in node degrees and edge weights. After 24 epochs, a pronounced core-periphery structure emerges, characterized by a dominant hub of popular items (e.g., vitamins). This indicates a clear systemic concentration of purchases as the feedback loop unfolds over 24 epochs.

Figure 3: Co-purchase networks before and after the feedback loop unfolds (LightGCN, η = 1, 24 epochs). Each network includes a sample of items from different popularity levels. Nodes represent items (sized by total purchases), and edges connect items with shared buyers (thicker edges mean more shared purchasers).

User homogenization Figure 4a shows the evolution of user similarity as a function of η across different recommendation systems. We find that LightGCN and MultiVAE exert the strongest homogenizing force: as the adoption rate approaches η = 1, the average similarity across users increases sharply, reaching a maximum increase of 110% for LightGCN. This indicates that these recommender systems consistently push individuals toward the same popular items.

Figure 4: User homogenization. (a) User similarity as a function of adoption rate (η): LightGCN and MultiVAE strongly increase homogenization, BPR and SpectralCF show moderate growth, while NeuMF and ItemKNN reduce it at high adoption. (b) For η = 0.8, NeuMF remains stable, while LightGCN steadily amplifies user similarity over time.

BPR and SpectralCF also induce convergence, though to a lesser degree, suggesting that their latent representations capture a broader but still overlapping range of preferences. In contrast, ItemKNN and NeuMF produce the opposite (if slight) tendency: as adoption intensifies, the similarity curve declines, with a maximum reduction of 33% for NeuMF. This implies that when users rely entirely on recommendations from these systems, their trajectories remain more differentiated. Figure 4b reports the evolution of user similarity over the 24 epochs for a fixed adoption rate (η = 0.8), contrasting the two most divergent models: NeuMF and LightGCN. NeuMF exhibits a flat trajectory in which user similarity remains stable over time: it preserves heterogeneity across users. In contrast, LightGCN shows a steady upward trend: epoch after epoch, user choices grow more aligned, underscoring its strong tendency to channel demand toward the same set of items; LightGCN disrupts heterogeneity across users. Overall, these findings reveal that homogenization is model-dependent. Recommender systems such as LightGCN or MultiVAE foster behavioral convergence by over-amplifying structural popularity signals.
Systems like NeuMF, which balance linear and nonlinear components of user-item interactions, better preserve heterogeneity even under high recommendation adoption. This divergence has critical implications: while all recommenders affect diversity, only certain models systematically erode distinctiveness in user behavior, thus magnifying risks of cultural uniformity and reduced exposure to niche content.

7 Discussion and Conclusions
This paper introduced a simulation framework to model the feedback loop of recommender systems in online retail environments and investigate its systemic effects. We find a crucial trade-off between individual and collective diversity: as recommendation adoption increases, individual diversity increases while collective diversity declines sharply. Only NeuMF - a neural model achieving the highest performance among those tested - preserves collective diversity at levels comparable to those observed in the training data. These results confirm the empirical findings of Lee and Hosanagar [2018], who showed through controlled empirical experiments on an online retail platform that collaborative filtering increases individual diversity while reducing collective diversity. Our study extends their work by showing that this pattern holds across a wide range of recommender systems and becomes more pronounced as the adoption rate of the recommender increases. We also observe an increasing concentration of purchase volume and item popularity as the feedback loop unfolds: the apparent rise in individual diversity primarily translates into repeated purchases of a few already popular items. This pattern mirrors recent findings on feedback loops in location-based recommender systems Mauro et al. [2025], suggesting that such systemic effects may emerge universally across different types of online platforms. Finally, we find that user homogenization effects are model-dependent: some recommender systems strongly align user purchase behaviors, while others introduce a modest increase in behavioral heterogeneity. This suggests that certain systemic patterns arise inherently from the recursive nature of the feedback loop, whereas others are shaped by the specific training dynamics of each recommender model. Our study has some limitations. First, although the choice model accounts for repeated consumption and evolving awareness, it abstracts away important behavioral factors such as price sensitivity, brand loyalty, and novelty-seeking - all of which are critical in retail settings. Future work should incorporate these dimensions to improve ecological validity. Second, while the framework is grounded in large-scale purchase histories, its simulation-based nature calls for external validation using longitudinal observational or experimental data to assess generalizability. Finally, the frequency of recommender retraining is also likely to influence systemic outcomes, a factor we plan to investigate in future work. A promising research direction emerging from our study concerns the design of interventions within the feedback loop to mitigate the observed systemic effects. For instance, re-ranking strategies could be introduced to progressively enhance collective diversity while reducing user homogenization and purchase concentration over time.
In conclusion, our simulation framework opens new avenues for studying the coevolution between algorithmic decision support and user choices on online retail platforms, contributing to a deeper understanding of how digital platforms shape, and are shaped by, human behaviour.

References
Charu C. Aggarwal. Recommender Systems: The Textbook. Springer Publishing Company, Incorporated, 1st edition, 2016. ISBN 3319296574.
Ashton Anderson, Lucas Maystre, Ian Anderson, Rishabh Mehrotra, and Mounia Lalmas. Algorithmic effects on the diversity of consumption on Spotify. In Proceedings of The Web Conference 2020 (WWW '20), pages 2155-2165. ACM, 2020.
Sinan Aral, Erik Brynjolfsson, and Marshall Van Alstyne. Information, technology, and information worker productivity. Information Systems Research, 23(3):849-867, 2012.
Guy Aridor, Duarte Goncalves, Daniel Kluver, Ruoyan Kong, and Joseph Konstan. The informational role of online recommendations: Evidence from a field experiment. Working paper, 2024a.
Guy Aridor, Ruoyan Kong, and Joseph A. Konstan. Fairness-aware recommendation through simulation-based evaluation. In Proceedings of the 18th ACM Conference on Recommender Systems (RecSys '24), 2024b.
Christine Bauer and Dietmar Jannach. Balancing diversity and relevance in music recommendation. In Proceedings of the 15th ACM Conference on Recommender Systems (RecSys '21), pages 179-188, 2021.
Alex Berke, Dan Calacci, Robert Mahari, Takahiro Yabe, Kent Larson, and Sandy Pentland. Open e-commerce 1.0, five years of crowdsourced US Amazon purchase histories with user demographics. Scientific Data, 11(1):491, 2024. URL https://doi.org/10.1038/s41597-024-03329-6.
Shihao Cai, Jizhi Zhang, Keqin Bao, Chongming Gao, Qifan Wang, Fuli Feng, and Xiangnan He. Agentic feedback loop modeling improves recommendation and user simulation, 2025. URL https://arxiv.org/abs/2410.20027.
Pedro G. Campos, Fernando Díez, and Iván Cantador. Temporal evaluation of recommender systems on historical data. In Proceedings of the 4th ACM Conference on Recommender Systems, pages 81-88, 2010.
Nicolò Cesa-Bianchi, Claudio Gentile, Gábor Lugosi, and Gergely Neu. Boltzmann exploration done right, 2017. URL https://arxiv.org/abs/1705.10257.
Allison J. B. Chaney, Brandon M. Stewart, and Barbara E. Engelhardt. How algorithmic confounding in recommendation systems increases homogeneity and decreases utility. In Proceedings of the 12th ACM Conference on Recommender Systems, pages 224-232. ACM, September 2018a. URL http://dx.doi.org/10.1145/3240323.3240370.
Allison J. B. Chaney, Brandon M. Stewart, and Barbara E. Engelhardt. How algorithmic confounding in recommendation systems increases homogeneity and decreases utility. In Proceedings of the 12th ACM Conference on Recommender Systems (RecSys '18), pages 224-232, 2018b.
Erica Coppolillo, Simone Mungari, Ettore Ritacco, Francesco Fabbri, Marco Minici, Francesco Bonchi, and Giuseppe Manco. Algorithmic drift: A simulation framework to study the effects of recommender systems on user preferences, 2024. URL https://arxiv.org/abs/2409.16478.
Anita Elberse. Should you invest in the long tail? Harvard Business Review, 86(7/8):88-96, 2008.
Daniel Fleder and Kartik Hosanagar. Blockbuster culture's next rise or fall: The impact of recommender systems on sales diversity. Management Science, 55(5):697-712, 2009.
Daniel Fleder, Kartik Hosanagar, and Andreas Buja.
Recommender systems and their effects on consumers: The fragmentation debate. In Proceedings of the 11th ACM Conference on Electronic Commerce, EC '10, pages 229-230, New York, NY, USA, 2010. Association for Computing Machinery. ISBN 9781605588223. URL https://doi.org/10.1145/1807342.1807378.
Avi Goldfarb and Catherine Tucker. Online display advertising: Targeting and obtrusiveness. Marketing Science, 30(3):389-404, 2011.
Andrew Guess, Jonathan Nagler, and Joshua Tucker. The filter bubble hypothesis: Experimental evidence. Science Advances, 9(14):eade0140, 2023.
Mario Haim, Andreas Graefe, and Hans-Bernd Brosius. Burst of the filter bubble? Effects of personalization on the diversity of Google News. Digital Journalism, 6(3):330-343, 2018.
Casper Hansen, Rishabh Mehrotra, James McInerney, and Asia J. Biega. Correcting feedback loops in recommendation with simulation-based debiasing. In Proceedings of the 17th ACM Conference on Recommender Systems (RecSys '23), pages 411-421, 2023.
Maryam Hazrati and Francesco Ricci. Bias amplification in recommender systems: Simulation study. In Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization (UMAP '21), pages 240-244, 2021.
Naieme Hazrati and Francesco Ricci. Recommender systems effect on the evolution of users' choices distribution. Information Processing and Management, 59(1):102766, 2022. ISSN 0306-4573. URL https://www.sciencedirect.com/science/article/pii/S0306457321002466.
Naieme Hazrati, Mehdi Elahi, and Francesco Ricci. Simulating the impact of recommender systems on the evolution of collective users' choices. In Proceedings of the 31st ACM Conference on Hypertext and Social Media, HT '20, pages 207-212, New York, NY, USA, 2020. Association for Computing Machinery. ISBN 9781450370981. URL https://doi.org/10.1145/3372923.3404812.
Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, and Tat-Seng Chua. Neural collaborative filtering, 2017. URL https://arxiv.org/abs/1708.05031.
Xiangnan He, Kuan Deng, Xiang Wang, Yan Li, Yongdong Zhang, and Meng Wang. LightGCN: Simplifying and powering graph convolution network for recommendation, 2020. URL https://arxiv.org/abs/2002.02126.
Natali Helberger, Kari Karppinen, and Lucia D'Acunto. On the democratic role of news recommenders. Digital Journalism, 7(8):993-1012, 2019.
Natali Helberger, Damian Trilling, Judith Moeller, and Bedir Tekinerdogan. Fairness in online personalization: Perspectives from communication and journalism studies. Digital Journalism, 9(2):236-255, 2021.
Oliver Hinz, Xitong Li, and Jörn Grahl. How do recommender systems lead to consumer purchases? A causal mediation analysis of a field experiment. Information Systems Research, 33(2):620-637, 2021.
David Holtz, Ben Carterette, Praveen Chandar, Zahra Nazari, Henriette Cramer, and Sinan Aral. The engagement-diversity connection: Evidence from a field experiment on Spotify. In Proceedings of the 21st ACM Conference on Economics and Computation (EC '20), pages 75-76. ACM, 2020.
Felix Holzmeister et al. Examining the replicability of online experiments. Nature Human Behaviour, 2024.
Ferenc Huszár, Sofia Ira Ktena, Conor O'Brien, Luca Belli, Andrew Schlaikjer, and Moritz Hardt. Algorithmic amplification of politics on Twitter. Proceedings of the National Academy of Sciences, 119(1):e2025334119, 2022.
Ray Jiang, Silvia Chiappa, Tor Lattimore, András György, and Pushmeet Kohli. Degenerate feedback loops in recommender systems. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 4275-4282, 2019.
Yanan Jiang, Hao Wang, Kuan-Chuan Lee, Jun Liu, Meng Wang, and Jieping Ye. Temporal collaborative filtering with bayesian probabilistic tensor factorization. ACM Transactions on Information Systems, 40(4):1-30, 2022.
Erik Knudsen. Modeling news recommender systems' conditional effects on selective exposure: Evidence from two online experiments. Journal of Communication, 73(2):138-149, 2023.
Dokyun Lee and Kartik Hosanagar. How do recommender systems affect sales diversity? A cross-category investigation via randomized field experiment. Information Systems Research, 30(1):239-259, 2018.
Ro'ee Levy. Social media, news consumption, and polarization: Evidence from a field experiment. American Economic Review, 111(3):831-870, 2021.
Dawen Liang, Rahul G. Krishnan, Matthew D. Hoffman, and Tony Jebara. Variational autoencoders for collaborative filtering. In Proceedings of the 2018 World Wide Web Conference, pages 689-698, 2018.
Felicia Loecherbach, Kasper Welbers, Judith Moeller, Damian Trilling, and Wouter Van Atteveldt. Is this a click towards diversity? Explaining when and why news users make diverse choices. In Proceedings of the 13th ACM Web Science Conference 2021 (WebSci '21), pages 282-290. ACM, 2021.
Masoud Mansoury, Himan Abdollahpouri, Mykola Pechenizkiy, Bamshad Mobasher, and Robin Burke. Feedback loop and bias amplification in recommender systems, 2020a. URL https://arxiv.org/abs/2007.13019.
Masoud Mansoury, Himan Abdollahpouri, Mykola Pechenizkiy, Bamshad Mobasher, and Robin Burke. Feedback loops in recommender systems: Quantification and mitigation. In Proceedings of the 29th ACM International Conference on Information and Knowledge Management (CIKM '20), pages 2145-2148, 2020b.
Masoud Mansoury, Himan Abdollahpouri, Mykola Pechenizkiy, Bamshad Mobasher, and Robin Burke. Popularity bias in recommendation: A multi-stakeholder perspective. In Proceedings of the 13th ACM Conference on Recommender Systems (RecSys '20), pages 342-346, 2020c.
John Mattis, Robin Burke, and Himan Abdollahpouri. Modeling user bias evolution in recommender systems. In Proceedings of the 16th ACM Conference on Recommender Systems (RecSys '22), pages 648-652, 2022.
Giovanni Mauro, Marco Minici, and Luca Pappalardo. The urban impact of AI: Modeling feedback loops in next-venue recommendation, 2025. URL https://arxiv.org/abs/2504.07911.
Judith Moeller, Natali Helberger, and Damian Trilling. Diversity by design: Investigating the potential of diversity-aware recommender systems in journalism. Journalism Studies, 22(11):1469-1488, 2021.
Simone Mungari, Erica Coppolillo, Ettore Ritacco, and Giuseppe Manco. Flexible generation of preference data for recommendation analysis. In Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining V.2, KDD '25, pages 5710-5721, New York, NY, USA, 2025. Association for Computing Machinery. ISBN 9798400714542. URL https://doi.org/10.1145/3711896.3737398.
Luca Pappalardo, Emanuele Ferragina, Salvatore Citraro, Giuliano Cornacchia, Mirco Nanni, Giulio Rossetti, Gizem Gezici, Fosca Giannotti, Margherita Lalli, Daniele Gambetta, et al. A survey on the impact of AI-based recommenders on human behaviours: Methodologies, outcomes and future directions. arXiv preprint, 2024.
Bhavik Pathak, Robert Garfinkel, Ram D.
Gopal, Rajkumar Venkatesan, and Fang Yin. Empirical analysis of the impact of recommender systems on sales. Journal of Management Information Systems, 27(2):159-188, 2010. ISSN 07421222. URL http://www.jstor.org/stable/29780174.
Dino Pedreschi, Luca Pappalardo, Emanuele Ferragina, Ricardo Baeza-Yates, Albert-László Barabási, Frank Dignum, Virginia Dignum, Tina Eliassi-Rad, Fosca Giannotti, János Kertész, et al. Human-AI coevolution. Artificial Intelligence, 339:104244, 2025.
Behnam Rahdari, Peter Brusilovsky, and Branislav Kveton. Towards simulation-based evaluation of recommender systems with carousel interfaces. ACM Transactions on Recommender Systems, 2024. To appear.
Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. BPR: Bayesian personalized ranking from implicit feedback. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, UAI '09, pages 452-461, Arlington, Virginia, USA, 2009. AUAI Press. ISBN 9780974903958.
Manoel Horta Ribeiro, Raphael Ottoni, Robert West, Virgílio A. F. Almeida, and Wagner Meira Jr. Auditing radicalization pathways on YouTube. In Proceedings of the 2020 ACM Conference on Fairness, Accountability, and Transparency (FAT* '20), pages 131-141. ACM, 2020.
Pablo Sánchez, Alejandro Bellogín, and Ludovico Boratto. Bias characterization, assessment, and mitigation in location-based recommender systems. Data Mining and Knowledge Discovery, 37(5):1885-1929, 2023.
Badrul Sarwar, George Karypis, Joseph Konstan, and John Riedl. Item-based collaborative filtering recommendation algorithms. In Proceedings of the 10th International Conference on World Wide Web, WWW '01, pages 285-295, New York, NY, USA, 2001. Association for Computing Machinery. ISBN 1581133480. URL https://doi.org/10.1145/371920.372071.
Alina Sîrbu, Dino Pedreschi, Fosca Giannotti, and János Kertész. Algorithmic bias amplifies opinion fragmentation and polarization: A bounded confidence model. PLoS ONE, 14(3):e0213246, 2019.
Nasim Sonboli, Robin Burke, Michael Ekstrand, and Rishabh Mehrotra. The multisided complexity of fairness in recommender systems. AI Magazine, 43(2):164-176, June 2022. ISSN 0738-4602. URL https://doi.org/10.1002/aaai.12054.
Ruixuan Sun, Avinash Akella, Ruoyan Kong, Moyan Zhou, and Joseph A. Konstan. Interactive content diversity and user exploration in online movie recommenders: A field experiment. International Journal of Human-Computer Interaction, 40(22):7233-7247, 2024.
Wenlong Sun, Sami Khenissi, Olfa Nasraoui, and Patrick Shafto. Debiasing the human-recommender system feedback loop in collaborative filtering. In Companion Proceedings of The 2019 World Wide Web Conference, WWW '19, pages 645-651, New York, NY, USA, 2019. Association for Computing Machinery. ISBN 9781450366755. URL https://doi.org/10.1145/3308560.3317303.
Yitong Wang, Tianshu Sun, Zhe Yuan, and AJ Yuan Chen. How recommendation affects customer search: A field experiment. Information Systems Research, 36(1):85-106, 2024.
Mengyue Yang, Quanyu Dai, Zhenhua Dong, Xu Chen, Xiuqiang He, and Jun Wang. Top-N recommendation with counterfactual user preference simulation, pages 2342-2351, October 2021.
Sirui Yao, Yoni Halpern, Nithum Thain, Xuezhi Wang, Kang Lee, Flavien Prost, Ed H. Chi, Jilin Chen, and Alex Beutel. Measuring recommender system effects with simulated users, 2021. URL https://arxiv.org/abs/2101.04526.
Yan Zeng, Yuan Fang, and Luo Si. User behavior modeling in recommender system simulations.
In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '20), pages 1509-1512, 2020.
Yan Zeng, Zhenzhong Chen, Jianyuan Sun, Yuan Fang, and Luo Si. RecWalk: Nearly unbiased walk-based recommendation via markov chains. In Proceedings of the 14th ACM International Conference on Web Search and Data Mining (WSDM '21), pages 367-375, 2021.
Wayne Xin Zhao, Shanlei Mu, Yupeng Hou, Zihan Lin, Yushuo Chen, Xingyu Pan, Kaiyuan Li, Yujie Lu, Hui Wang, Changxin Tian, Yingqian Min, Zhichao Feng, Xinyan Fan, Xu Chen, Pengfei Wang, Wendi Ji, Yaliang Li, Xiaoling Wang, and Ji-Rong Wen. RecBole: Towards a unified, comprehensive and efficient framework for recommendation algorithms, 2021. URL https://arxiv.org/abs/2011.01731.
Lei Zheng, Chun-Ta Lu, Fei Jiang, Jiawei Zhang, and Philip S. Yu. Spectral collaborative filtering. In Proceedings of the 12th ACM Conference on Recommender Systems, RecSys '18, pages 311-319, New York, NY, USA, 2018. Association for Computing Machinery. ISBN 9781450359016. URL https://doi.org/10.1145/3240323.3240343.
Tao Zhou, Zoltán Kuscsik, Jian-Guo Liu, Matúš Medo, Joseph Rushton Wakeling, and Yi-Cheng Zhang. Solving the apparent diversity-accuracy dilemma of recommender systems. Proceedings of the National Academy of Sciences, 107(10):4511-4515, 2010. URL https://www.pnas.org/doi/abs/10.1073/pnas.1000488107.
A Multi-Task Deep Learning Framework for Skin Lesion Classification, ABCDE Feature Quantification, and Evolution Simulation

Harsha Kotla(a), Arun Kumar Rajasekaran(a), Hannah Rana(b)
(a) Department of Chemistry, University of Cambridge, Cambridge, United Kingdom
(b) Harvard Ophthalmology AI Lab, Harvard Medical School, Boston, MA, USA

Abstract
Early detection of melanoma has grown to be essential because it significantly improves survival rates, but automated analysis of skin lesions still remains challenging. ABCDE, which stands for Asymmetry, Border irregularity, Color variation, Diameter, and Evolving, is a well-known classification method for skin lesions, but most deep learning mechanisms treat it as a black box, as most of the human-interpretable features are not explained. In this work, we propose a deep learning framework that both classifies skin lesions into categories and also quantifies scores for each ABCD feature. It simulates the evolution of these features over time in order to represent the E aspect, opening more windows for future exploration. The A, B, C, and D values are quantified particularly within this work. Moreover, this framework also visualizes ABCD feature trajectories in latent space as skin lesions evolve from benign nevi to malignant melanoma. The experiments are conducted using the HAM10000 dataset, which contains around ten thousand images of skin lesions at varying stages. In summary, the classification achieved an accuracy of around 89 percent, with a melanoma AUC of 0.96, while the feature evaluation performed well in predicting asymmetry, color variation, and diameter, though border irregularity remains more difficult to model. Overall, this work provides a deep learning framework that will allow doctors to link ML diagnoses to clinically relevant criteria, thus improving our understanding of skin cancer progression.

Keywords: melanoma, skin lesion, ABCDE criteria, deep learning, HAM10000 dataset, generative adversarial networks (GANs), benign nevi, malignant melanoma

1. Introduction
Melanoma, an aggressive form of skin cancer, is one of the leading causes of death due to skin cancer [6]. Early diagnosis is important because the 5-year survival rate exceeds 90% for early-stage melanoma, but drops below 20% for advanced stages [6]. In order to differentiate between harmful and harmless lesions, dermatologists utilize the ABCDE method. "A" stands for "asymmetry," as malignant skin lesions often appear to be uneven; "B" stands for "border irregularity," as clinicians search for jagged or notched edges; "C" stands for "color variation"; "D" stands for "diameter," as larger lesions are more likely to be malignant; and "E" stands for "evolving," as skin lesions evolve over time [11]. If a lesion displays two or more of the attributes described above, the lesion is most likely a harmful melanoma. The ABCDE criteria are effective because they are easy to understand and to screen for suspicious lesions [11]. Many deep learning techniques are proficient at classifying skin lesion images as either benign, malignant, or one of several other clinically recognized categories found in datasets such as HAM10000 [1] [10]. Convolutional neural networks are utilized most commonly, as they are trained to classify images. However, these CNN models lack clear interpretability because they only provide point predictions of lesion type without explanation.
In the context of melanoma detection, this lack of explanation can hinder clear, interpretable diagnoses. The "E" feature of ABCDE has also proven challenging to evaluate via deep learning methods, since single static images fail to capture change over time. Some mobile applications ask users to photograph their moles periodically to observe any changes, but this is difficult to enforce. Using AI to predict and visualize how a lesion changes over time is therefore a promising direction. Recent advances in medical imaging have made it possible to create realistic transformations of medical images. For example, Jütte et al. (2024) utilized a CycleGAN to create a sequence of dermoscopic images that show the potential of a benign nevus transforming into a malignant melanoma [6]. As discussed above, quantifying how the ABCDE features change over time, alongside the changing images themselves, can improve our understanding of melanoma growth patterns [6]. In this work, we propose a deep-learning framework that combines classification, ABCDE feature quantification, and feature evolution simulation. First, we design a CNN that learns to predict both the lesion's class and continuous values quantifying each ABCDE criterion. Second, we develop a strategy for obtaining quantitative ABCDE labels from dermoscopic images via image processing and expert knowledge; this enables supervised training of the feature regression branch. Subsequently, we introduce a module to simulate the temporal evolution of lesions. Generative adversarial networks and sequential interpolation are used so that this system can produce a plausible future state of a given lesion and track how the ABCDE scores change. Automated ABCDE analysis was already prevalent before the era of deep learning. Researchers wanted to mimic the ABCD rule of dermoscopy with computer algorithms, and early systems computed hand-crafted image features corresponding to asymmetry, border irregularity, color variegation, and lesion diameter. For example, in 2001, Ganster et al. extracted 122 features related to the ABCD criteria and applied conventional classifiers for automated melanoma recognition [4]. Asymmetry was quantified via shape moments, border irregularity via fractal dimension or edge abruptness, color variegation via histogram analysis, and diameter via lesion area in pixels [11]. These pipelines showed that it is feasible to use computer vision for melanoma screening, but they often need expert parameter tuning and struggle with variations in image quality. A recent survey by Celebi et al. (2022) summarizes many such efforts to automate parts of the ABCD rule [2]. In general, while these approaches brought interpretability (each feature could be reported), their accuracy was typically lower than that of data-driven deep learning, which can learn more complex representations. In addition, the success of CNNs in image recognition has led to numerous applications in dermatology. CNN models have become very accurate at classifying lesions into categories such as melanoma, basal cell carcinoma, nevus, etc. For example, on the HAM10000 dataset, approaches like an EfficientNet or an ensemble of CNNs reach high overall accuracy (often 85-90%+) in 7-class classification [11]. Some research focuses on binary classification (melanoma vs. benign), reporting very high specificity and good sensitivity [11]. Still, as described above, these models lack interpretability.
Other researchers have used saliency maps and class activation maps in order to improve interpretability. However, these pixel-level explanations do not directly communicate which clinical features are most explanatory of the diagnosis. This motivates using the ABCDE criteria explicitly. Choi et al. (2024) proposed an "ABC ensemble model" that preprocesses images to emphasize asymmetry, border, and color regions before feeding them into specialized network branches, which are then combined for classification [3]. Still, their model did not output quantitative values for each criterion, even though they were able to use ABCD rule knowledge to improve classification performance. Notably, they omitted the diameter feature because the scaling in the data was inconsistent [3]. In summary, this work aims to build upon prior research by combining the interpretability of rule-based features with the accuracy of deep learning. It also extends previous research by adding the ability to simulate temporal changes.

2. Methodology
2.1. Overview of the Framework
The framework contains two main components: a CNN that performs lesion classification and ABCDE feature regression from a dermoscopic image, and an evolution simulation module that shows how ABCDE features might progress over time. Given a dermoscopic image of a skin lesion, we first optionally preprocess it (including lesion segmentation and color normalization). The multi-task CNN then processes the image to output both a class prediction and a set of numeric scores corresponding to the A, B, C, and D features. The CNN is optimized using a combined loss that includes classification error and regression error on the ABCD scores. After this model is trained, it can provide an interpretation for its diagnosis by reporting the ABCD scores. For the evolution simulation, we take a lesion image and generate a sequence of future images showing increasing malignancy. The CNN is applied to each generated frame to track how the ABCD scores change, giving a trajectory of the features. Additionally, the model's internal representation is used to predict feature values without image generation.

2.2. Multi-Task CNN Architecture
This multi-task deep learning model is built on a convolutional neural network that first extracts a shared representation of the input image, followed by two "heads" (output branches): one for lesion classification and one for ABCDE feature regression. This design allows the model to learn common visual features that are useful for both tasks [7], while the separate heads specialize in their respective outputs. We experimented with several backbone architectures like ResNet50 and EfficientNet, but we ultimately chose ResNet50 due to its balance of depth and efficiency [5] [9]. The classification head is a dense layer that produces a probability distribution over the lesion classes. The regression head is a fully-connected dense layer that produces 4 values corresponding to [A, B, C, D] (E is not supervised). Overall, we use linear outputs with appropriate activation/normalization to make sure that the feature values fall in a reasonable range (0 to 1 for A, B, C and a scaled range for D as discussed later). The input to this network is a dermoscopic image. The images are rescaled to 224 by 224 pixels, and we also normalize all the color channels. The ResNet50 backbone processes the image through a series of convolutional layers, giving a final feature map which is global-average-pooled to a 2048-dimensional feature vector. This vector represents high-level information about the lesion. Since the network is trained to predict the ABCD features, the vector encodes information relevant to asymmetry, border, color, and diameter, in addition to other features useful for classifying lesions. On top of this, a fully connected classification head takes the 2048-dimensional feature vector and produces logits for each of the seven HAM10000 classes [10]: nv, mel, bcc, akiec, bkl, df, and vasc, corresponding to melanocytic nevus, melanoma, basal cell carcinoma, actinic keratosis, benign keratosis, dermatofibroma, and vascular lesion [8]. During training, we use a cross-entropy loss for this head. The regression head maps the same feature vector to four numeric outputs representing [A, B, C, D]. No activation (linear output) is applied for regression; the values are instead constrained through the training data scaling and the loss function, so that the outputs remain in plausible ranges. A mean squared error loss, summed over the four feature outputs, is used for this head.
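A condensed PyTorch sketch of this two-headed design follows. The 2048-dimensional shared representation, the 7-class output, and the 4-value regression head come from the text; the class name, the use of ImageNet-pretrained weights, and other details are our assumptions.

```python
import torch.nn as nn
from torchvision import models

class MultiTaskLesionNet(nn.Module):
    def __init__(self, num_classes=7, num_features=4):
        super().__init__()
        backbone = models.resnet50(weights="IMAGENET1K_V2")
        # Keep everything up to (and including) the global average pool.
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])
        self.classify = nn.Linear(2048, num_classes)  # 7 HAM10000 classes
        self.regress = nn.Linear(2048, num_features)  # [A, B, C, D] scores

    def forward(self, x):                   # x: (batch, 3, 224, 224)
        z = self.encoder(x).flatten(1)      # shared 2048-d feature vector
        return self.classify(z), self.regress(z)
```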
Figure 1: Overview of the multi-task CNN framework for skin lesion classification and ABCDE feature prediction

2.3. ABCDE Feature Engineering
Training the regression head requires ground-truth numeric labels for the ABCDE features on each training image. Because the HAM10000 dataset does not provide these annotations, we derive approximate ABCD labels using image processing techniques [8]. The goal is to compute, for each lesion image, a set of feature values on a 0-1 scale that indicates how strongly each criterion is present. These definitions are adopted from prior research in dermatology.

• Asymmetry (A): We compare the lesion's shape and color distribution across its axes [6]. First, we segment the lesion from the background skin. We then find the lesion's major axis and minor axis (perpendicular to the major). The lesion's pixel mask is reflected across each axis, and we compute the overlap with the original. The asymmetry score is 0 if the lesion is perfectly symmetric, and the value increases as symmetry decreases. Moreover, to account for the color distribution, a structural similarity index (SSIM) is computed between the two halves of the image intensity values [6]. The final asymmetry score is the average of the shape asymmetry and color asymmetry metrics. In practice, round or evenly colored lesions yield a score near 0, while irregularly shaped or unevenly colored lesions yield a score closer to 1. The final value is stored in the variable "A".
Sharp borders have a higher gradient while fuzzy borders have a low gradient. We normalize this gradient value to [0,1]. For the overall border irregularity score, we combine shape and gradient: B = 0.5 · Bshape + 0.5 · 1 −Bgrad  In this equation, the gradient B value is inverted because a low gradient (fuzzy border) should increase the irregularity score. Thus, a lesion with a very jagged shape or a very blurred edge will have B near 1. • Color Variation (C): We measure how many different colors and shades are in the lesion. Common criteria for skin lesion images include colors like light brown, dark brown, black, blue-gray, white, and red [6]. A melanoma often has different colors. To quantify this value, we compute the dispersion of colors in the lesion. Specifically, we apply a clustering in color space to the lesion pixels [6]. The number of color clusters is determined, and then the dispersion index is calculated; this is the standard deviation of color distances of pixels from their cluster centroids as they are weighted by cluster size. This gives a value between 0 and 1 as 0 signifies a very unified color while 1 gives off a heterogeneity of colors. Additionally, we count the number of distinct color clusters. If there are more clusters, there is more color variety. We map the number of clusters to a 0–1 range as well. Our final color variation score C is a combination (we used the dispersion index primarily, and added a small increment for each distinct color beyond one). Benign single-color nevi typically score a value of C less than 0.2 while many melanomas with multiple shades of brown, black, and red would score more than 0.7. • Diameter (D): For the most part, lesions with a diameter greater than 6 millimeters are deemed as suspicious. However, these images lack a consistent physical scale as the zoom level varies [3]. HAM10000 images come from different devices and magnifica- tions [8]. Unfortunately, no data on mm-per-pixel is given. That is why we will have to approximate the diameter relatively. After segmentation, we compute the maximum distance across the lesion (in pixels); this is the largest axis of the bounding box. We 6 also compute the area-equivalent diameter, which is the diameter of a circle with the same area as the lesion. This max span is more robust for irregularly shaped lesions. To map this to a score, we make an assumption that an average nevus in the dataset (say 30 pixels across in the image) corresponds to about 5 mm; this is based on typical dermoscope resolution. D = clamp max diameter (pixels) p6 mm , 0, 1  This is where p6 mm is the approximate pixel length of 6 millimeters. For better cali- bration, one could include a reference scale or use the dataset’s metadata; some ISIC images have ruler markers, but this is not the case in HAM10000. In our training labels, we set p6mm such that the top 5% largest lesions get D close to 1.0 and small lesions get D near 0.2. The diameter values are thus relative. We treat D as a contin- uous variable because it influences risk in a graded way even though it is non-linear. • Evolving (E): Because the dataset does not contain time-series images, we could not directly measure or predict lesion evolution. That is why in this work we focus only on the static ABCD features for regression. The E criterion was not included in the training or loss function. Future work will investigate modeling lesion evolution. It will utilize time-series data to better capture this aspect. 
It is worth noting that the automatically generated labels for A, B, C, and D are approximate, since errors in lesion segmentation or variations in image conditions can introduce noise. For example, if a lesion image is very close-up, the diameter estimate may falsely appear large; if lighting causes part of the border to fade into the skin, the border gradient metric might be low even if the border is fairly smooth. These issues can be mitigated with preprocessing: applying a light hair removal filter (using inpainting for dark hairs) and a color normalization so that background skin has a consistent reference color across images improves the accuracy of the feature computations. These computations output an A, B, C, and D value for each training image, which we use as the target outputs for the regression head. E is not directly labeled in this work; it only represents the evolution of the A, B, C, and D features over time.

2.4. Model Training Strategy
The model is trained on the HAM10000 dataset [10], which contains 10,015 dermoscopic images of pigmented lesions across 7 diagnostic categories. However, images are not evenly distributed across the diagnostic categories (e.g., benign versus melanoma). To handle this, we perform data balancing. Specifically, we use a combination of oversampling and a class-balanced loss. During each training epoch, we sample images such that each class is roughly equally represented, with the help of random oversampling of minority classes. We also weight the classification loss inversely proportional to class frequency. The data is split as follows: 70% of the images for training, 10% for validation, and 20% for testing. We also augment the training images with random horizontal or vertical flips, small rotations, zooming in and out, and lighting adjustments. This exposes the network to varied appearances and also slightly simulates changes. To preprocess each image, we resize it to 224×224, apply the hair removal filter, normalize the color, and segment the lesion. For the loss function, we combine the losses of both model heads (classification and regression). The classification loss is the cross-entropy loss for the class prediction, while the regression loss is the mean squared error between predicted and target scores for features A through D. We control the balance between these two losses using a weight parameter, set to 1 in the main experiments. It is also informative to set this value to 0 (training only the classifier) or to values larger than 1 (focusing on regression); such ablations show why combining both tasks in this deep learning framework is beneficial. For optimization, the Adam optimizer is used with an initial learning rate of 0.0001; if the validation loss stops improving, we reduce the learning rate by a factor of 10. Training runs for around 100 epochs. To help prevent overfitting, we apply L2 weight decay with a value of 0.00001. All experiments use PyTorch and are trained on an NVIDIA A100 GPU.
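A minimal PyTorch sketch of this training step, reusing the MultiTaskLesionNet sketch from Section 2.2, is shown below; the weight tensor and helper names are placeholders, while the loss combination, optimizer, and scheduler settings follow the text.

```python
import torch

model = MultiTaskLesionNet()
class_weights = torch.ones(7)  # replace with inverse class-frequency weights
ce = torch.nn.CrossEntropyLoss(weight=class_weights)
mse = torch.nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-5)
sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, factor=0.1)

def train_step(images, labels, abcd_targets, weight=1.0):
    # Combined objective: cross-entropy + weighted MSE over the A-D scores.
    opt.zero_grad()
    logits, feats = model(images)
    loss = ce(logits, labels) + weight * mse(feats, abcd_targets)
    loss.backward()
    opt.step()
    return loss.item()
# After each epoch: sched.step(validation_loss)
```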
For lesion classification, we evaluate standard metrics including overall accuracy, per-class accuracy, precision, sensitivity, and F1-score for each class, with particular attention to melanoma detection, since missing a melanoma would be the worst error. We also compute the balanced multi-class accuracy and the area under the ROC curve for the binary melanoma-vs-others task. For the ABCD feature regression, we measure the Pearson correlation between the predicted and ground-truth feature values on the test set, as well as the mean absolute error. A high correlation would mean that the model has learned to predict the features in relative terms, even if there is some bias. Because the ground-truth ABCD features are approximate, we focus more on correlation and rank ordering than on exact error magnitude. We do not have ground truth for "E" on static images, so we cannot directly evaluate E predictions on the standard test set; instead, we evaluate E in the context of simulated sequences.

2.5. Lesion Evolution Simulation
Additionally, we propose a method to simulate how a lesion's image, and thereby its ABCD features, might change over time. This module operates in two possible modes. The first is to use a generative adversarial network to generate a future version of a lesion image. Similar to the study presented by Jütte et al., a CycleGAN architecture is well suited here to translate an image from the domain of benign-appearing lesions to the domain of malignant-appearing lesions [6]. The CycleGAN learns to produce an image that retains the structure of the input, like the particular lesion's shape and significant features, but changes its appearance to resemble a melanoma. Consistent with the ABCD rule discussed above, higher asymmetry, more colors, and more irregular borders against the same background and context depict exactly a benign lesion evolving into a malignant melanoma. We also train the reverse mapping from a melanoma to a nevus using CycleGAN's cycle-consistency, so that the network does not simply always output a generic melanoma. To simulate a gradual evolution, we use frame interpolation [6]; this produces intermediate images that represent smooth transitions between benign and malignant appearances. Each frame is then analyzed by the multi-task CNN, which outputs ABCD scores; plotting the scores over frames yields a curve for each feature that exhibits the trend. These intermediate frames are not strictly physically accurate predictions, because melanoma growth is not necessarily linear, but they provide a clear visualization. The sequence can be thought of as a what-if scenario for how the lesion could evolve if it were to become malignant [6]. An alternative approach is to operate directly on the feature values. The idea is to use the internal representation of the lesion, the 2048-d feature from the CNN, and model how it would drift as the lesion evolves. The multi-task CNN's feature extractor acts as a state encoder in this scenario, and the "time steps" are simulated by a small network that adjusts this state in the direction of malignancy. We train a simple feed-forward network to model how lesion features change over time. At each time step, it updates the latent feature vector z_t by predicting how it will change. This is trained with the help of feature sequences extracted from images generated by the CycleGAN model. For each simulated sequence, we collect CNN features from each frame, which gives us starting and ending feature pairs.
After training, we can simulate progression by applying small steps in the predicted direction. For example, starting from a benign lesion's features, we can apply several steps and watch the ABCD scores increase, paralleling malignant transformation. While this method does not generate images, it is faster and does not need to store or run the image generator during inference: the model itself learns how features move toward malignant characteristics over time. In the final system, the image-generation approach above would be used primarily for visualization, while the second approach would verify that the direction of change in feature space corresponds to increasing ABCDE scores. As Jütte et al. point out, a smooth change in the segmentation masks is important for reliable ABCDE measurement [6]. These simulated sequences are not used to update the CNN; further training on them would require known ground truth of future states, so they serve purely for evaluation and demonstration. In this study, the latter approach (operating directly on feature values) is used to visualize plausible ABCD trajectories, and no image synthesis is used in the reported experiments.

3. Results and Discussion

On the HAM10000 dataset, the multi-task CNN showed strong classification performance. The overall accuracy was 89%, i.e., it correctly classified 89% of all test samples, and the balanced accuracy was 76%. Since some lesion types are much more common than others in the dataset, the balanced accuracy gives equal weight to each class regardless of how frequent or rare it is; the model thus performed well across all lesion types, including the less common ones.

In the specific task of separating melanoma from all other lesion types, the model was effective. The ROC curve in Figure 2 shows an area under the curve (AUC) of 0.96, meaning the model identifies melanomas correctly while keeping false positives low.

Figure 2: ROC Curve for Melanoma vs. Other Lesions

Next, the confusion matrix in Figure 3 shows how often each lesion type was correctly or incorrectly predicted. For common classes such as melanocytic nevus (nv) and basal cell carcinoma (bcc), the model achieved very high recall, meaning it rarely missed them. For rarer classes such as dermatofibroma (df) and vascular lesions (vasc), recall was lower, around 50% and 82%, respectively; the model struggled more to identify these correctly. The recall for melanoma was about 76%. Most errors occurred between melanoma and visually similar benign lesions, such as benign keratosis or nevus, which highlights the challenge of separating classes that look very similar in dermoscopic images.

Figure 3: Normalized Confusion Matrix

Overall, the model's accuracy on the ABCD feature predictions was mixed. For the A, C, and D features, the model performs well: the average error is low, and the predictions are strongly correlated with the true values, indicating that the model picks up real, clinically meaningful patterns. Border irregularity, however, was more difficult to predict: its error is higher, and the model does not show a strong relationship between the predicted and true B scores, which could be due to noisy labels or to the way border irregularity was measured.
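For reference, the per-feature regression metrics reported in this section can be computed as follows; this is a sketch assuming arrays y_true and y_pred of shape (num_samples, 4) holding the derived and predicted A-D scores.

import numpy as np
from scipy.stats import pearsonr

def abcd_regression_metrics(y_true, y_pred, names=("A", "B", "C", "D")):
    """Per-feature Pearson correlation and mean absolute error."""
    metrics = {}
    for j, name in enumerate(names):
        r, _ = pearsonr(y_true[:, j], y_pred[:, j])
        mae = np.mean(np.abs(y_true[:, j] - y_pred[:, j]))
        metrics[name] = {"pearson_r": float(r), "mae": float(mae)}
    return metrics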
Broken down by class, B remains harder to predict across most lesion types, while the A, C, and D errors vary more by class: for example, melanoma cases have relatively accurate color predictions, whereas vascular lesions have larger diameter errors.

Figure 4: Per-Class MAE for ABCD Features

The correlation matrix in Figure 5 shows how closely related the predicted ABCD features are to each other across all test samples. Each entry represents the strength of the relationship between two features, with values closer to 1 indicating a stronger positive correlation. We observe a particularly strong correlation of 0.78 between border irregularity and diameter, which makes sense because larger lesions are more likely to have irregular borders. There are also moderate correlations among the other features: asymmetry shows a moderate relationship with both color variation and diameter, around 0.38 in each case. So although the ABCD features are largely distinct, some are meaningfully related, reflecting how different visual traits of skin lesions often co-occur. Importantly, the features are not so highly correlated as to be redundant, which supports the claim that the model predicts several useful, separate clinical features rather than repeating the same signal.

Figure 5: Correlation Matrix of ABCD Features

In addition, Figure 6 shows how the lesion representation evolves in the model's internal feature space when we simulate progression using latent-space interpolation. Each point (0 to 5) represents a step in the simulated evolution from the original lesion (step 0) toward a more malignant version (step 5). The axes PC1 and PC2 are the top two principal components from PCA, which reduce the high-dimensional feature vectors to a 2D space while retaining most of the variation. The overall pattern is a smooth, continuous trajectory as the lesion steadily moves through the feature space, showing that the model captures a consistent, directional change: it has learned an internal sense of how lesions might evolve over time, even though it was never explicitly trained on temporal data. The zigzag along PC2 suggests subtle nonlinear shifts in features such as color or shape, while the more monotonic progression along PC1 reflects a gradual move toward higher malignancy. This supports the idea that the model's latent space encodes a clinically meaningful notion of lesion progression.

Figure 6: Feature-Space Trajectory (PCA)
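The projection in Figure 6 can be reproduced with a few lines; this sketch assumes traj is the (steps + 1, 2048) latent trajectory from the drift module above, converted to a NumPy array. For brevity the PCA basis is fit on the trajectory itself; in practice it may instead be fit on a larger set of test-set features.

import numpy as np
from sklearn.decomposition import PCA

def project_trajectory(traj):
    """Project a latent trajectory onto its top two principal components."""
    pca = PCA(n_components=2)
    coords = pca.fit_transform(np.asarray(traj))  # shape (steps + 1, 2)
    return coords, pca.explained_variance_ratio_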
Figure 7 shows how the ABCD scores evolve across six simulated steps in the model's learned feature space. Each line corresponds to one of the ABCD criteria; the scores are predicted by the model's regression head as it interpolates from an original lesion at step 0 toward a hypothetical, more malignant version at step 5. In the plot, A, C, and D all increase smoothly across the steps, which aligns with clinical expectations: as a lesion becomes more malignant, it typically becomes more asymmetric, shows greater variation in color, and increases in size. The upward trends in these scores suggest that the model captures these expected patterns of malignant progression, even though, once again, it was not explicitly trained on temporal lesion data. In contrast, the B score for border irregularity stays essentially flat near zero, confirming the earlier finding that the model struggles to predict this feature accurately, most likely due to noisy or insufficient training labels for border irregularity. The lack of progression in B marks a current limitation in modeling that particular clinical feature.

Figure 7: Simulated ABCD Score Trajectory

The results show that the multi-task CNN performs well overall, combining strong lesion classification with interpretable ABCD feature predictions. It performs well on the key clinical tasks: melanoma detection reaches an AUC of 0.96, and asymmetry, color variation, and diameter are predicted with strong accuracy. The model demonstrates that these features are not only predictable but also embedded meaningfully within the network's latent space, as shown by the smooth trajectory and increasing malignancy indicators in the evolution simulation.

One clear limitation is the poor performance in predicting border irregularity (B). This likely stems from how B was labeled: the labels were based on simple segmentation heuristics rather than clinical assessment, which introduced noise and weakened both the regression and the simulated trends. The dataset's class imbalance also affected classification and regression accuracy for rare lesion types. A further limitation is that the evolution simulation was performed in latent feature space rather than directly on images, so the visual progression remains abstract.

This system could assist clinicians not only in diagnosing skin lesions but also in interpreting why the model made a decision, through the ABCD feature outputs. The evolution simulation can offer "what-if" previews of how lesions might progress toward malignancy, which can support patient education and monitoring. To improve border irregularity prediction, better labeling methods are needed, such as dermatologist annotations or more advanced segmentation techniques. Making the classes more balanced through augmentation or resampling could also improve fairness across lesion types. A good next step would be to generate simulated lesion images with a GAN-based method to complement the feature-based evolution, offering more intuitive visual feedback. Lastly, incorporating expert-reviewed or longitudinal data would allow supervised training of the "E" component and enhance clinical realism.

4. Conclusion

To summarize, we developed a deep learning model to accurately classify skin lesions, explain its predictions with the help of ABCD features, and simulate how a lesion might change over time. The model provides robust quantification of asymmetry, color, and diameter, but struggles with border irregularity. Still, it offers a clear and interpretable way to analyze lesions and track potential progression. In future work, we aim to improve the border score, add realistic image evolution, use expert-labeled data for the "E" feature, and test the model on more diverse datasets.

References

[1] Abdullah, Siddique, A., Shaukat, K., and Jan, T. (2024). An intelligent mechanism to detect multi-factor skin cancer. Diagnostics, 14(13):1359.
[2] Celebi, M. E., Kingravi, H. A., Uddin, B., Iyatomi, H., Aslandogan, Y. A., Stoecker, W. V., and Moss, R. H. (2007). A methodological approach to the classification of dermoscopy images. Computerized Medical Imaging and Graphics, 31(6):362–373.
[3] Choi, J.-Y., Song, M.-J., and Shin, Y.-J. (2024). Enhancing skin lesion classification performance with the ABC ensemble model. Applied Sciences, 14(22):10294.
[4] Ganster, H., Pinz, A., Röhrer, R., Wildling, E., Binder, M., and Kittler, H. (2001). Automated melanoma recognition. IEEE Transactions on Medical Imaging, 20(3):233–239.
[5] He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778.
[6] Jütte, L., González-Villà, S., Quintana, J., Steven, M., García, R., and Roth, B. (2024). Integrating generative AI with ABCDE rule analysis for enhanced skin cancer diagnosis, dermatologist training and patient education. Frontiers in Medicine, 11:1445318.
[7] Kawahara, J., Daneshvar, S., Argenziano, G., and Hamarneh, G. (2019). Seven-point checklist and skin lesion classification using multitask multimodal neural nets. IEEE Journal of Biomedical and Health Informatics, 23(2):538–546.
[8] Mader, S. (2018). Skin Cancer MNIST: HAM10000. Kaggle. [Online; accessed 2025-09-07].
[9] Tan, M. and Le, Q. V. (2019). EfficientNet: Rethinking model scaling for convolutional neural networks. In Proceedings of the 36th International Conference on Machine Learning (ICML), pages 6105–6114, Long Beach, CA, USA.
[10] Tschandl, P. (2018). The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions.
[11] Veeramani, N., Jayaraman, P., Krishankumar, R., Ravichandran, K. S., and Gandomi, A. H. (2024). DDCNN-F: double decker convolutional neural network 'F' feature fusion as a medical image classification framework. Scientific Reports, 14:676.
Rate-Adaptive Protograph-Based MacKay-Neal Codes

Ayman Zahr, Graduate Student Member, IEEE, Emna Ben Yacoub, Graduate Student Member, IEEE, Balázs Matuz, Senior Member, IEEE, Gianluigi Liva, Senior Member, IEEE

This work was presented in part at the IEEE Information Theory Workshop, Saint Malo, France, April 2023 [1]. A. Zahr is with the Institute for Communications Engineering, School of Computation, Information, and Technology, Technical University of Munich, Arcisstrasse 21, Munich, Germany, and with the Institute of Communications and Navigation, German Aerospace Center (DLR), Wessling, Germany (email: ayman.zahr@dlr.de). G. Liva is with the Institute of Communications and Navigation, German Aerospace Center (DLR), Wessling, Germany (email: gianluigi.liva@dlr.de). E. Ben Yacoub and B. Matuz are with the Huawei Munich Research Center, 80992 Munich, Germany (email: {emna.ben.yacoub,balazs.matuz}@huawei.com). G. Liva acknowledges the financial support by the Federal Ministry of Education and Research of Germany in the programme "Souverän. Digital. Vernetzt." (joint project 6G-RIC, project identification number 16KISK022).

Abstract—Rate-adaptive MacKay-Neal (MN) codes based on protographs are analyzed. The code construction employs an outer distribution matcher (DM) to adapt the rate of the scheme. The DM is coupled with an inner protograph-based low-density parity-check (LDPC) code. The performance achievable by the resulting code structure, which is nonlinear, is studied by means of an equivalent communication model that reduces the problem to the analysis of the inner (linear) LDPC code, with transmission taking place in parallel over the communication channel and over a suitably defined binary symmetric channel. A density evolution analysis of protograph MN code ensembles is outlined, complemented by an error floor analysis that relies on the derivation of the average input-output weight distribution of the inner LDPC code ensemble. Conditions on the shape of the normalized logarithmic asymptotic input-output weight distribution are defined, which allow discarding code ensembles with bad error floor properties during the code design phase. Examples of code designs are provided, showing how the use of a single LDPC code ensemble allows operating within 1 dB of the Shannon limit over a wide range of code rates, where the code rate is selected by tuning the DM parameters. By enabling rate flexibility with a constant blocklength, and with a fixed LDPC code as inner code, the construction provides an appealing solution for very high-throughput wireless (optical) links that employ binary-input modulations.

Index Terms—Low-density parity-check codes, MacKay-Neal codes, distribution matching, rate-adaptive transmissions.

I. INTRODUCTION

The development of high-speed communication links, either in the fiber-optic or in the wireless (radio/optical) domain, calls for the development of channel codes that support fast decoding algorithms [2], [3]. For data rates on the order of several tens of Gbps, some key techniques are currently considered to enable high-speed decoding. On the algorithmic side, the use of (generalized) low-density parity-check (LDPC) [4], [5] codes with hard-decision message passing decoders [2], [3], [6], [7] has been recently investigated. This class of algorithms enables decoding at extremely high data rates (up to some hundred Gbps), but it comes at the cost of sacrificing some coding gain, especially at moderate-to-low code rates.
On the hardware side, pipelined LDPC decoder architectures promise to achieve unmatched decoding speeds by "unrolling" the belief propagation (BP) decoder iterations over the chip, hence realizing a fully parallel decoder without the message routing hurdle that affects non-pipelined decoder architectures [8]. Remarkably, pipelined LDPC decoder architectures support soft-decision BP decoding at data rates exceeding 100 Gbps (a pipelined LDPC decoder supporting data rates up to 160 Gbps was demonstrated in [8]; the figure refers to a 65 nm ASIC design with a clock frequency of 257 MHz and 4-bit message quantization). While decoding algorithms and architectures are obviously impacted by the need to operate at large data rates, other elements of the communication chain can be affected, too. In wireless systems operating in high frequency bands (e.g., in the W/D bands, as currently considered for 6G cellular networks), low-order modulation schemes, such as on-off keying (OOK), binary phase shift keying (BPSK), and quadrature phase shift keying (QPSK), are considered mainly thanks to their simplicity and to the abundant available bandwidth [9]. Regular framing structures, obtained by inserting periodic start-of-frame (SoF) markers, are also preferred since they simplify signal acquisition [10]. These observations place further constraints on channel code design, which can be summarized by (a) the need to construct a single code (to support pipelined decoder architectures), and (b) rate flexibility that should be achieved without modifying the blocklength (to enable periodic SoF markers). Focusing on the class of LDPC codes, (a) implies the implementation of a pipelined decoder that unrolls the Tanner graph of a single LDPC code, while (b) rules out classical rate adaptation techniques such as puncturing [11], [12], shortening, and lengthening [13]–[15] that are currently employed by the 3GPP 5G-NR standard [16].

A class of multi-rate LDPC codes that facilitates the adoption of a unified decoder architecture while keeping a constant blocklength was introduced in [17]. The approach of [17] relies on expurgation of a high-rate LDPC code, constructing LDPC codes of lower rate. In particular, the construction of [17] is based on the following observation: starting from a low-rate LDPC code (referred to as the mother code in [17]), it is possible to obtain higher-rate LDPC codes by linearly combining rows of the mother code parity-check matrix. The ingenuity of the approach of [17] stems from the construction of the low-rate code, and from the choice of rows to be linearly combined, targeting a common decoder architecture and efficient (i.e., linear-time) encoding for all code rates. In particular, ancillary degree-2 check nodes are introduced in the mother code Tanner graph to reflect the combining of parity-check equations. By selectively activating the ancillary degree-2 check nodes, a single decoder architecture can be used to decode all codes derived from the mother code. While this approach is simple and elegant, it is not clear what challenges it entails when pipelined decoder architectures are considered. Furthermore, additional engineering of the row combining approach is required to efficiently support low code rates [17].

MacKay-Neal (MN) codes were introduced in [18, Sec. VI] as a class of error correcting codes based on sparse matrices for nonuniform sources. MN codes are multi-edge-type LDPC codes [19].
The Tanner graph of a MN code can be split in two parts, with a set of variable nodes (VNs) associated with the source bits, and the remaining VNs associated with the codeword bits. LDPC code constructions closely related to MN codes were proposed in [20], [21] for joint source and channel coding. While originally introduced to deal with nonuniform sources, it was pointed out in [18] that MN codes can also be employed with uniform sources by introducing a nonlinear block code that turns the uniform source output sequence into a sequence with a prescribed distribution. The potential appeal of this construction, as observed in [18], stems from the possibility of modifying the code rate (e.g., adapting it to varying channel conditions) by changing the statistics of the sequences produced by the nonlinear block encoder, hence without modifying the underlying LDPC code. With reference to the constraints elaborated in the previous paragraphs, an important consequence is that a rate-adaptive scheme based on MN codes can keep the blocklength constant. Furthermore, since decoding is performed over a fixed Tanner graph, regardless of the code rate defined by the outer nonlinear encoder, MN codes are well suited for a pipelined decoder architecture. From a theoretical viewpoint, some attention was devoted to showing the capacity-achieving properties of spatially coupled (SC) MN code ensembles in [22]–[24]. However, MN codes as a means to achieve rate flexibility have received little attention. A notable exception is the probabilistic amplitude shaping (PAS) scheme introduced in [25], where a construction reminiscent of MN codes was proposed. In PAS, the sequence output by a uniform (binary) source is processed by a nonlinear block encoder, referred to as a distribution matcher (DM) [25], generating a sequence of amplitude symbols with a given empirical distribution. The binary labels associated with the amplitude symbols are encoded through a nonsystematic LDPC code encoder, producing a parity bit vector. The amplitude symbols together with the parity bits are then mapped onto pulse amplitude modulation (PAM) symbols. The rates achievable by PAS were analyzed in [26], [27], showing that the layered shaping architecture consisting of PAS, together with a decoder that performs a search over the entire inner codebook, is capacity-achieving under a maximum a posteriori probability decoding metric (see also [28], [29]). While the main result attained by PAS is to provide sizable shaping gains, it was quickly recognized that, as a byproduct, PAS is naturally rate-adaptive thanks to the possibility of tuning the DM rate [25], [30], as originally hypothesized in [18]. Differently from the approach outlined in [18], PAS targets high spectral efficiencies by shaping the probability of the amplitudes of the constellation symbols, and it cannot readily be used to adapt the rate of binary modulation schemes. In the context of PAS, several classes of DMs have been proposed and analyzed [31]–[33], providing an extensive understanding of how to design efficient DM algorithms for high-speed communications.

In this paper, we analyze rate-adaptive MN codes based on protographs [34]. The analysis is provided for the binary-input additive white Gaussian noise (biAWGN) channel, and as DM we consider constant composition distribution matchers (CCDMs) [25], [31]. The extension of the analysis to general memoryless binary-input output-symmetric channels and to the use of other classes of DMs is straightforward.
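To make the role of the DM concrete, the following sketch computes the rate of a binary CCDM that maps uniform input bits to length-n output sequences of fixed composition (exactly n1 ones), following the standard CCDM accounting of [31]; the parameter values in the loop are purely illustrative.

from math import comb, log2

def ccdm_rate(n, n1):
    """Rate (input bits per output bit) of a binary constant-composition DM.

    A CCDM addresses the comb(n, n1) length-n sequences with exactly n1
    ones, so it can encode floor(log2(comb(n, n1))) uniform input bits.
    """
    k = int(log2(comb(n, n1)))  # floor of log2 of the codebook size
    return k / n

# Tuning n1 changes the DM rate (and the one-density n1/n) without
# touching the inner LDPC code:
for n1 in (16, 32, 64):
    print(n1 / 256, ccdm_rate(256, n1))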
Noting that the concatenation of the CCDM with the inner linear block code results in a nonlinear code, we introduce an equivalent communication model that simplifies the analysis of both maximum likelihood (ML) and BP decoders. In particular, the equivalent model resorts to the study of the performance of the protograph LDPC code over the communication channel, where side information is provided to the decoder by observing the LDPC encoder input through a binary-input, binary-output symmetric channel. Leveraging this construction, we analyze protograph MN code ensembles in terms of iterative decoding threshold, both via protograph extrinsic information transfer (PEXIT) analysis [35], [36] and via a more accurate quantized density evolution (DE) analysis [37]. Via numerical examples, we show that while the PEXIT analysis provides a fast and accurate estimate of the BP decoding thresholds at medium-to-high code rates, it tends to produce unreliable estimates at very low code rates. We hence propose a protograph MN code ensemble design via the (faster) PEXIT analysis, supplemented by the more accurate quantized DE analysis at low rates. The distance properties of protograph MN code ensembles are studied, showing how the input-output weight enumerator of the inner LDPC codes can be used to analyze the error floor performance. By means of asymptotic enumeration techniques, we introduce a criterion on the shape of the normalized logarithmic asymptotic input-output weight distribution that allows discarding code ensembles that are likely to yield codes with high error floors. Numerical results for selected code design examples confirm the accuracy of the analysis, which enables the design of MN codes with thresholds within approximately 1 dB of the Shannon limit over a wide range of code rates (where each code rate is obtained by tuning the DM parameters, without modifying the inner LDPC code).

The paper is organized as follows. Section II provides preliminary definitions. Protograph MN codes are described in Section III, whereas their decoding is discussed in Section IV. Section IV also includes the definition of the equivalent communication model that enables the analysis of the decoder performance. The DE analysis is provided in Section V, whereas the analysis of the distance properties of protograph MN code ensembles is developed in Section VI, together with the derivation of a union bound on the error probability of MN codes. Some detailed steps of the distance spectrum analysis are contained in Appendix A. The code design methodology is outlined in Section VII. Numerical results and conclusions follow in Section VIII and Section IX, respectively.

II. PRELIMINARIES

We denote random variables (r.v.s) by uppercase letters and their realizations by lowercase letters. The probability mass function (p.m.f.) of a discrete r.v. X is PX(x) = P[X = x], and the probability density function (p.d.f.) of a continuous r.v. X is pX(x). In either case, the subscript will be dropped whenever no ambiguity may arise, i.e., P(x) = PX(x) and p(x) = pX(x). We use Hb(p) = −p log2 p − (1 − p) log2(1 − p), with 0 < p < 1 and Hb(0) = Hb(1) = 0, for the binary entropy function. Similarly, H(p) = −p ln(p) − (1 − p) ln(1 − p) denotes the natural binary entropy function. Row vectors are denoted by bold letters, e.g., x, while matrices are denoted by uppercase bold letters, e.g., X. The order-2 finite (Galois) field is F2.
Finally, wH(x) and dH(x, y) denote the Hamming weight of a vector x and the Hamming distance between two vectors x and y, respectively.

A. Channel Model

We consider transmission of a BPSK-modulated signal over the additive white Gaussian noise (AWGN) channel. The resulting memoryless discrete-time biAWGN channel model is defined by Y = X + N, where Y is the channel output, X ∈ {−1, +1} is the channel input, and N ∼ N(0, σ²) is the additive white Gaussian noise term. The channel signal-to-noise ratio (SNR) is defined as Es/N0 = 1/(2σ²), where Es is the energy per BPSK symbol and N0 is the single-sided noise power spectral density.

B. Protograph LDPC Codes

A protograph P = (V, C, E) is a small Tanner graph [5] consisting of a set V of N VNs, a set C of M check nodes (CNs), and a set E of e edges [34]. VNs in the protograph are numbered from 1 to N. Similarly, protograph CNs are numbered from 1 to M. Each VN/CN/edge in a protograph defines a VN/CN/edge type. We denote by Evj (Eci) the set of edges in the protograph connected to vj (ci). The degree dvj of vj (dci of ci) is then equal to |Evj| (|Eci|). The Tanner graph G of an LDPC code can be derived by lifting the protograph. In particular, the protograph is copied ℓ times (where ℓ is referred to as the lifting factor), and the edges of the protograph copies are permuted under the following constraint: if an edge connects a type-j VN to a type-i CN in P, after permutation the edge should connect one of the ℓ type-j VN copies with one of the ℓ type-i CN copies in G. We denote by v1, v2, . . . the VNs in G, and by c1, c2, . . . the CNs in G. The lifted graph G defines the parity-check matrix of an LDPC code. The base matrix of a protograph is an M × N matrix B = [bi,j], where bi,j is the number of edges that connect VN j to CN i in P. We will make use of LDPC codes with punctured (or state) VNs. A punctured VN is associated with a codeword bit that is not transmitted through the communication channel. We will assume that all the VNs of a given type are either punctured or they are not, i.e., puncturing is already defined at the protograph level.

Example 1. Consider the 2 × 3 protograph base matrix

B = \left[\begin{array}{c|cc} 1 & 2 & 0 \\ 1 & 1 & 2 \end{array}\right].

The vertical line partitions the base matrix in two parts: a 2 × 1 left submatrix and a 2 × 2 right submatrix. The left submatrix is associated with punctured VNs. The corresponding protograph is shown in Figure 1. The type-1 protograph VN is represented by a dark circle to emphasize that the VNs of type 1 are punctured. The Tanner graph obtained by lifting the protograph is depicted in Figure 2. The edge lifting is described by means of the edge interleavers Πij, where Πij defines the permutation applied to the edges connecting type-i VNs to type-j CNs.

[Fig. 1: Protograph of Example 1, with the type-1 (punctured) VN, the type-2 and type-3 VNs, and the type-1 and type-2 CNs.]

[Fig. 2: Tanner graph obtained by lifting the protograph of Example 1, with edge interleavers Π11, Π12, Π21, Π22, Π32.]

A protograph (base matrix) defines a protograph LDPC code ensemble. More specifically, given a protograph P and a lifting factor ℓ, the ensemble is given by the codes whose graph G can be obtained by an ℓ-fold protograph lifting. With reference to Example 1, a random code from the ensemble can be obtained by drawing each edge interleaver uniformly at random within the set of all possible permutations.

C. Asymptotic Enumeration Results

Let a(n) and b(n) be two real-valued sequences, where b(n) ≠ 0 ∀n.
Then, a(n) is exponentially equivalent to b(n) as n → ∞ if and only if [38, Sec. 3.3]

\lim_{n \to \infty} \frac{1}{n} \ln \frac{a(n)}{b(n)} = 0.

We will use the notation a(n) ≐ b(n) to specify that a(n) is exponentially equivalent to b(n). Moreover, given z = (z1, z2, . . . , zd) and β = (β1, β2, . . . , βd), we use the shorthand

z^{\beta} = \prod_{t=1}^{d} z_t^{\beta_t}.

In the distance spectrum analysis of MN codes, we will leverage efficient asymptotic enumeration methods. In particular, we will make use of the following result.

Lemma 1 (Hayman formula for multivariate polynomials [39, Corollary 16]). Let z = (z1, z2, . . . , zd) and let p(z) be a multivariate polynomial with p(0) ≠ 0. Let β = (β1, β2, . . . , βd), where 0 ≤ βt ≤ 1 and βt n is an integer for all t ∈ {1, 2, . . . , d}. Then

\mathrm{coeff}\big( p(z)^n, z^{n\beta} \big) \doteq \exp\Big\{ n \Big[ \ln p(z^*) - \sum_{t=1}^{d} \beta_t \ln z_t^* \Big] \Big\}

where coeff(p(z)^n, z^{nβ}) represents the coefficient of z^{nβ} in the polynomial p(z)^n, z* = (z*1, z*2, . . . , z*d), and z*1, z*2, . . . , z*d are the unique positive solutions to

z_t \frac{\partial p(z)}{\partial z_t} = \beta_t\, p(z), \quad \forall t \in \{1, 2, \ldots, d\}.

III. PROTOGRAPH MACKAY-NEAL CODES

MN codes were originally introduced as a class of LDPC codes with nonsystematic encoding to be used with nonuniform sources [18, Sec. VI]. It was suggested that MN codes can be used for uniform sources, too, by introducing an outer nonlinear code (e.g., obtained by reversing the roles of the encoder and the decoder in a standard arithmetic codec) in concatenation with the inner nonsystematic LDPC encoder [18, Sec. VI.A]. Moreover, it was recognized that the latter construction can be used to adapt the code rate when communicating over channels with different noise levels without modifying the Tanner graph of the inner LDPC code [18, Sec. VI.C]. In the following, we refer to MN codes in this second flavor, i.e., as a class of codes obtained by concatenating an outer nonlinear code CO with an inner linear block code CI.

More specifically, we consider the setting depicted in Figure 3. A uniform source generates a message µ ∈ {1, 2, . . . , M}. The message is input to the encoder of a length-h outer code CO. The outer encoder generates an output sequence with a prescribed empirical distribution, i.e., CO is a constant composition (CC) code; thus, each codeword of CO has the same Hamming weight. We refer to the CC encoder as the distribution matcher (DM) [25]. Note that, by restricting our attention to outer CC codes, we can leverage low-complexity outer code encoders based on arithmetic coding techniques [18], [31]. Let ω ∈ (0, 1) denote the fractional Hamming weight of the h-bit outer CC codeword v, i.e., ω = wH(v)/h. We have that

M = |\mathcal{C}_O| = \binom{h}{\omega h}.

Hence, the rate of the outer code is

R_O = \frac{1}{h} \log_2 M = \frac{1}{h} \log_2 \binom{h}{\omega h}

which converges to Hb(ω) for large h. The output of the DM is then input to the encoder of an inner (n, h) binary linear block code CI defined by

\mathcal{C}_I = \big\{ c : c H_2^{\mathsf{T}} = v H_1^{\mathsf{T}},\ v \in \mathbb{F}_2^h \big\}

where H1 is an n × h sparse binary matrix, and H2 is an n × n sparse invertible binary matrix. Note that, strictly speaking, CI may not be an LDPC code, i.e., the code may not possess a sparse parity-check matrix. Nevertheless, CI can be seen as the code obtained by puncturing an (n + h, h) LDPC code with n × (h + n) parity-check matrix

H = [\, H_1 \,|\, H_2 \,] \qquad (1)

where puncturing is applied to the first h coordinates. We refer to the (n + h, h) LDPC code with parity-check matrix in the form (1) as the (inner) mother code CIM. The inner code rate is RI = h/n, whereas the mother code rate is RIM = h/(n + h) = RI/(1 + RI).
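As a quick numerical check of the rate expressions above, the following Python sketch (a minimal illustration; function and variable names are ours, not from any library) evaluates R_O and the overall rate R = R_O R_I:

from math import comb, log2

def binary_entropy(p: float) -> float:
    # Hb(p) in bits, with Hb(0) = Hb(1) = 0
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1.0 - p) * log2(1.0 - p)

def ccdm_rate(h: int, omega: float) -> float:
    # R_O = (1/h) * log2( binom(h, omega*h) )
    w = round(omega * h)  # common Hamming weight of all CC codewords
    return log2(comb(h, w)) / h

h, omega, R_I = 600, 0.2, 0.5
R_O = ccdm_rate(h, omega)
print(R_O, binary_entropy(omega))  # R_O approaches Hb(omega) as h grows
print(R_O * R_I)                   # overall rate R = R_O * R_I

For h = 600 and ω = 0.2, R_O ≈ 0.714 is already within a hundredth of a bit of Hb(0.2) ≈ 0.722, consistent with the approximation used in (2) below.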
The nonsystematic generator matrix of the inner code CI is G = H1ᵀ H2⁻ᵀ. By observing that the inverse of H2 is generally dense, we have that G is dense, too. Hence, the code generated by the concatenation of CO with CI results in a marginal distribution of the codeword bits that is very close to the uniform one [25]. This result is fundamental, since the capacity-achieving input distribution of the biAWGN channel is uniform. We denote by C the overall code resulting from the concatenation of the inner and outer codes. The rate of C is

R = R_O R_I \approx H_b(\omega) R_I. \qquad (2)

As observed in [18, Sec. VI.C], all rates 0 < R < RI can be achieved simply by fixing the DM parameter ω, without requiring any modification (e.g., puncturing/shortening) of the inner code. An MN code is fully defined by the parity-check matrix of the inner mother code CIM and by the DM parameter ω. We refer to an MN code family as the set of MN codes with a fixed mother code, obtained for all ω ∈ (0, 1) that yield an integer ωh.

A. Protograph-based Construction

MN codes can be constructed by protograph expansion. In particular, a protograph MN code is obtained by concatenating the outer CC code with a protograph mother LDPC code CIM: the mother code parity-check matrix (1) is obtained by lifting a protograph whose n0 × (h0 + n0) base matrix takes the form B = [B1 | B2], where B1 is n0 × h0 and B2 is n0 × n0, with integers n0 = n/ℓ and h0 = h/ℓ. All type-i VNs with i = 1, . . . , h0 are punctured. A (protograph) MN code ensemble Cω(P) is the set of MN codes whose inner mother code Tanner graph G is obtained by lifting P, and where the DM parameter is ω. The MN code ensemble family C(P) is the set of ensembles {Cω(P)}ω∈(0,1).

[Fig. 3: System model, where an MN code is used to communicate over the biAWGN channel: Source → Distribution Matcher CO (h bits) → Encoder CI (n bits) → Communication Channel → Decoder CI → De-matcher; the DM and the inner encoder together form the encoder of C.]

IV. DECODING OF MN CODES

With reference to the biAWGN channel model (Section II-A), the codeword c is mapped onto {−1, +1}ⁿ via binary antipodal modulation through xi = 1 − 2ci, for i = 1, . . . , n. With a slight abuse of wording, we will refer to the modulated codeword x as the codeword. Similarly, we will use CI and C to denote the modulated codebooks of the inner code and of the overall code, respectively. In the following, we first review the BP decoding algorithm applied to MN codes (Section IV-A). We then provide a discussion of general decoding metrics (Section IV-B), which will prove useful to analyze the performance of MN codes thanks to their interpretation in the context of an equivalent parallel channel model (Section IV-C).

A. Belief Propagation Decoding

MN codes can be conveniently decoded via the BP algorithm over the Tanner graph of the inner mother LDPC code. Let us denote by Li the L-value at the input of the ith VN. The BP decoder is initialized by setting

L_i = \begin{cases} \Delta & 1 \le i \le h \\ \ln \dfrac{p(y_{i-h}|0)}{p(y_{i-h}|1)} & h < i \le h + n. \end{cases}

Here, ∆ := ln[(1 − ω)/ω], while p(yi|ci) is the probability density of the biAWGN channel output yi conditioned on the transmission of the codeword bit ci. Hence, Li = 2y_{i−h}/σ² for h < i ≤ h + n. The initialization of the BP decoder is displayed in Figure 4, where the Tanner graph of the inner mother LDPC code is shown. In the figure, ΠL and ΠR represent the edge permutations associated with the submatrices H1 and H2 in (1). The punctured VNs associated with the bits v1, v2, . . . , vh are represented as dark circles.
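The initialization is straightforward to implement; the following minimal sketch (assuming the biAWGN LLR L = 2y/σ² derived above; array and function names are illustrative) builds the input L-value vector:

import numpy as np

def init_llrs(y: np.ndarray, h: int, omega: float, sigma2: float) -> np.ndarray:
    # L-values for the h punctured VNs followed by the n transmitted VNs
    delta = np.log((1.0 - omega) / omega)  # prior LLR, Delta = ln((1-omega)/omega)
    L_punctured = np.full(h, delta)        # a-priori information
    L_channel = 2.0 * y / sigma2           # biAWGN channel LLRs
    return np.concatenate([L_punctured, L_channel])

# toy call: h = 4 punctured VNs, three channel observations
print(init_llrs(np.array([0.9, -1.1, 1.3]), h=4, omega=0.2, sigma2=0.5))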
Observe that the punctured VNs are provided with prior information resulting from the marginal distribution of the CC codeword v. For the VNs associated with the codeword bits that are transmitted through the biAWGN channel, we input the corresponding channel log-likelihood ratios (LLRs). BP decoding proceeds through the standard iterative message-passing algorithm over the code graph. After reaching a fixed maximum number of iterations, the decoder outputs the decision v̂. If the composition (i.e., Hamming weight) of v̂ differs from the one defined by the outer CC code, an error is declared. Otherwise, v̂ is processed by the de-matcher [31], producing the estimate µ̂ of the transmitted message (see Figure 3).

[Fig. 4: Belief propagation decoding over the Tanner graph of the mother LDPC code. The punctured VNs v1, . . . , vh receive the a-priori L-value ∆, while the VNs c1, . . . , cn receive the channel LLRs Lh+1, . . . , Lh+n; ΠL and ΠR denote the edge permutations.]

Note that the decoder outlined above (proposed already in [18]) employs the same layered decoding architecture adopted by PAS schemes [25], [26]: the BP decoder does not have any information on the outer CC constraints, and it exploits only the knowledge of the marginal distribution of the bits in v.

B. Maximum-likelihood and Mismatched Decoding

We discuss ML decoding of MN codes, which will be useful to analyze the code performance in the error floor region. Given an MN code C, the ML decoder outputs

\hat{x}_{\mathrm{ML}} = \arg\max_{x \in \mathcal{C}} p(y|x) \qquad (3)

where p(y|x) is the probability density of the biAWGN channel output y conditioned on the input x. Note that the ML criterion (3) can be rephrased as

\hat{x}_{\mathrm{ML}} = \arg\max_{x \in \mathcal{C}_I} p(y|x) P(v[x]). \qquad (4)

In (4), the bijective relation between x (inner encoder output) and v (inner encoder input) is emphasized by introducing the notation v[x]. Note that P(v) takes value 1/|CO| if v ∈ CO, whereas P(v) equals zero if v ∉ CO. Furthermore, the decoder defined by (4) performs a search over the inner code CI, hence over an enlarged set compared to (3): the outer code constraints are conveyed by the prior P(v). It is fundamental to observe that the BP decoder outlined in Section IV-A cannot exploit the joint distribution of the bits forming v,² but rather uses the marginal distribution of the bits composing v to bias the decoder operating over the Tanner graph of the mother code. It is hence of interest to analyze the mismatched decoding rule

\hat{x}_{\mathrm{MM}} = \arg\max_{x \in \mathcal{C}_I} p(y|x) Q(v[x]) \qquad (5)

where Q(v) = ∏_{i=1}^{h} P(vi) denotes the product of the marginal distributions of the bits v1, v2, . . . , vh. The decoding metric adopted in (5) is suboptimal compared with that of (4). In fact, the term Q(v) acts as a mismatched prior, yielding a nonzero probability also for v ∉ CO.

Footnote 2: BP decoding can be extended to exploit the joint distribution of the bits in v by iterating between the inner mother LDPC decoder and an outer soft-input soft-output decoder designed for the CC code. The outer CC decoder can be based, for instance, on the forward-backward algorithm applied to the trellis representation of the CC code. As shown in [40], this technique allows to improve the performance of PAS schemes when moderate-to-small blocklengths are considered. The gains are negligible for large blocklengths.

C. Equivalent Parallel Channel Model

Analyzing the performance of MN codes is challenging, even under the decoding metrics of the previous subsection and over memoryless binary-input output-symmetric (MBIOS) channels.
This fact is mostly due to the nonlinear nature of C, which renders the block error probability under (3)–(4) dependent on the transmitted codeword, hindering the use of a reference codeword to compute bounds on the block error probability. The same issue arises when attempting a DE analysis under BP decoding, where the all-zero codeword is often used as a reference. The issue can be circumvented by resorting to alternative communication models [21], [41] that can be proved to be equivalent to the one depicted in Figure 3. Consider first the scheme depicted in Figure 5, where a scrambling block is introduced. The block generates a sequence s = (s1, s2, . . . , sh), where each element is picked independently and uniformly at random in {0, 1}. The sequence s is then added (in F2) to v. The sum of the two vectors, denoted by w, is then encoded with CI. Note that v and s are independent, and that the marginal distributions of the entries of v and s are Bernoulli with parameters ω and 1/2, respectively. It follows that the entries of w are uniformly distributed, i.e., they follow a Bernoulli distribution with parameter 1/2. The sequence s is assumed to be available at the decoder. Considering either (3) or (4), and owing to the symmetry of the biAWGN channel, we observe that the presence of the scrambler is irrelevant to the analysis of the error probability, since the addition of s at the transmitter side can be compensated at the decoder by first computing b = sG, and then flipping the sign of the observations yi for all i ∈ supp(b).

The analysis of the bit/block error probability under the model of Figure 5, averaged over all possible transmitted codewords, is equivalent to the analysis of the bit/block error probability of the communication model described in Figure 6: here, an independent and identically distributed (i.i.d.) uniform binary source generates an h-bit vector w, which is encoded via CI, yielding a codeword x that is transmitted through the biAWGN channel. The decoder also obtains an observation of w via a so-called a priori channel. The a priori channel adds (in F2) a weight-ωh binary vector v to w, where v is picked uniformly at random in CO, resulting in the observation s. Upon observing y and s, the decoder produces a decision on w or, equivalently, a decision on v, since w = v + s. ML decoding will produce

\hat{x}_{\mathrm{ML}} = \arg\max_{x \in \mathcal{C}_I} p(y|x) P(s|w[x]) \qquad (6)

where P(s|w) = 1/|CO| if (s − w) ∈ CO, and P(s|w) = 0 otherwise. The decoding problem can be immediately recognized to be equivalent to the one in (4). The decoder may also resort to a mismatched model for the a priori channel, treating it as a binary symmetric channel (BSC) with crossover probability ω, resulting in

\hat{x}_{\mathrm{MM}} = \arg\max_{x \in \mathcal{C}_I} p(y|x) Q(s|w[x]) \qquad (7)

where Q(s|w) = ∏_{i=1}^{h} P(si|wi); i.e., the solution of (7) is equivalent to the solution of (5). We refer to the model of Figure 6 as the equivalent parallel channel (EPC) model.

[Fig. 5: Modification of the system model of Figure 3, where an i.i.d. scrambler is introduced: the scrambler output s is added (in F2) to v, and the sum w is encoded with CI.]

[Fig. 6: Equivalent parallel channel model: the source word w is encoded with CI and sent over the communication channel, while the decoder also observes s = w + v through the a priori channel.]

Remark 1. The convenience of the EPC model stems from the fact that, owing to the symmetry of the communication and a priori channels and to the linearity of the code CI, the error probability is independent of w: we can analyze the error probability of the scheme of Figure 6 by fixing as reference the all-zero codeword. The resulting analysis characterizes exactly the error probability of the original scheme of Figure 3, averaged over all possible transmitted codewords.
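The a priori channel of the EPC model is easy to simulate; the sketch below assumes, for illustration, a CC code realized by randomly permuting a fixed weight-ωh word, and shows that the pair (w, s) determines v:

import numpy as np

rng = np.random.default_rng(0)

def apriori_channel(w: np.ndarray, omega: float) -> np.ndarray:
    # Observation s = w + v (mod 2), with v uniform over the weight-omega*h words
    h = len(w)
    v = np.zeros(h, dtype=int)
    v[: round(omega * h)] = 1
    rng.shuffle(v)  # a uniformly drawn constant-composition codeword
    return (w + v) % 2

w = rng.integers(0, 2, size=10)  # i.i.d. uniform source word
s = apriori_channel(w, omega=0.2)
print(w, s, (w + s) % 2)         # the last vector recovers v, since w = v + s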
V. DENSITY EVOLUTION ANALYSIS

The EPC model introduced in Section IV-C allows analyzing MN code ensembles from a DE viewpoint. The analysis follows by computing the DE recursions for the mother LDPC code protograph, where the initialization of the message densities accounts for the type of channel associated with the protograph VNs. The analysis shares several commonalities with the DE of LDPC code ensembles designed for joint source and channel coding [21]. To proceed with the analysis, and with reference to the EPC model, we replace the a priori channel (which introduces a constant number ωh of errors in s) with a BSC with crossover probability ω. The choice is justified by observing that the DE analysis tracks the evolution of the BP message distributions in the limit of large blocklengths, and by noting that, as h and n grow large, the fraction of errors introduced by the BSC concentrates around ω.

We resort to two flavors of DE: quantized DE [37] and PEXIT analysis [35], [36]. Quantized DE assumes that the messages exchanged by VNs and CNs, as well as the decoder input L-values, are uniformly quantized with sufficiently fine quantization steps (e.g., we adopted 255 quantization intervals over the range [−25, +25]). The analysis yields accurate BP decoding threshold estimates. However, owing to the fine quantization, and to the need of tracking a different distribution for each VN/CN pair in the mother LDPC code protograph, the analysis is computationally expensive and can be a bottleneck for a numerical optimization of the mother LDPC code protograph. PEXIT analysis employs a Gaussian approximation for the message distributions, enabling a fast evaluation of the BP decoding threshold. In this case, conditioned on the transmitted bit value, all messages and input L-values are modeled as Gaussian r.v.s. Note that the Gaussian approximation is also enforced for the L-values associated with the VNs connected to the a priori channel. Conditioned on X = +1 (all-zero codeword assumption), their actual distribution has two mass points, one at +∆ (with probability 1 − ω) and one at −∆ (with probability ω).

The PEXIT analysis follows the steps in [36], where, in our case, particular attention should be paid to the initialization of the PEXIT recursions. Denote by CBSC(ω) = 1 − Hb(ω) the capacity of a BSC with crossover probability ω, and by CAWGN(Es/N0) the capacity of a biAWGN channel with SNR Es/N0. The PEXIT analysis is initialized by setting the mutual information (MI) at the input of type-i VNs, i = 1, . . . , h0, to CBSC(ω), whereas for i = h0 + 1, . . . , h0 + n0 the MI at the input of type-i VNs is initialized to CAWGN(Es/N0). The analysis is then carried out via the recursions provided in [36], and it allows determining (for fixed ω) the iterative decoding threshold over the biAWGN channel, that is, the minimum Es/N0 for which the MI values tracked by the PEXIT analysis converge to 1. We denote the threshold value by γ⋆(R), where we emphasize the dependency on R (and, hence, on ω). The thresholds predicted by PEXIT analysis are close to the ones obtained via quantized DE when the rate of the outer code is medium/high.
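Both initialization quantities are simple to evaluate; in the following sketch the biAWGN capacity is estimated by Monte Carlo (a quadrature rule would serve equally well; function names are illustrative):

import numpy as np

def c_bsc(omega: float) -> float:
    # Capacity of a BSC with crossover probability omega (0 < omega < 1)
    hb = -omega * np.log2(omega) - (1 - omega) * np.log2(1 - omega)
    return 1.0 - hb

def c_biawgn(es_n0_db: float, n: int = 200_000, seed: int = 0) -> float:
    # C = 1 - E[ log2(1 + exp(-L)) ], with L the channel LLR given X = +1
    sigma2 = 1.0 / (2.0 * 10.0 ** (es_n0_db / 10.0))
    y = 1.0 + np.sqrt(sigma2) * np.random.default_rng(seed).standard_normal(n)
    llr = 2.0 * y / sigma2
    return 1.0 - np.mean(np.logaddexp(0.0, -llr)) / np.log(2.0)

print(c_bsc(0.2), c_biawgn(-2.0))  # MI initializations for the two VN classes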
In the low code rate regime, PEXIT may provide too optimistic estimates of the BP decoding threshold. This effect is illustrated in detail in Section V-A. Owing to this observation, and due to the faster computations entailed by the PEXIT analysis with respect to quantized DE, PEXIT analysis will be used for protograph optimization (described in Section VII) in the medium/high (outer code) rate regime, while the use of the slower, but more accurate, quantized DE analysis will be limited to obtaining thresholds at low outer code rates.

A. Accuracy of the PEXIT Analysis

We consider next two examples that aim at quantifying the accuracy of the PEXIT analysis. In particular, we computed the BP decoding thresholds of protograph MN code ensembles under PEXIT analysis, and compared them with those derived by quantized DE.

Example 2. Consider the protograph base matrix

B_{1/2} = \begin{bmatrix} 1 & 0 & 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 3 & 0 & 1 \\ 2 & 0 & 1 & 1 & 1 & 0 \\ 1 & 2 & 1 & 2 & 0 & 0 \end{bmatrix}.

The protograph defines an inner mother LDPC code ensemble with rate RIM = 1/3 that can be used to construct a protograph MN code ensemble with inner code rate RI = 1/2. By fixing different values of ω, we obtain different code rates according to (2). Table I reports the BP decoding thresholds for various rates, computed with both quantized DE and PEXIT analysis. The threshold computed via PEXIT analysis is only 0.02 dB away from the quantized DE threshold for R = 0.5. At lower rates, the gap between the thresholds grows larger: at R = 0.1, PEXIT underestimates the threshold by approximately 0.22 dB. This first example already provides numerical evidence of the accuracy of PEXIT when the outer code rate is sufficiently large, and of a relative lack of precision at low code rates. A more striking case is provided by the following example.

TABLE I
BP decoding thresholds (dB) computed by quantized DE and by PEXIT analysis for various rates. Ensemble defined by the base matrix B1/2 in Example 2.

  R                      0.5      0.4      0.3      0.2      0.1
  γ⋆ (quant. DE) [dB]   −2.04    −3.40    −4.89    −6.91    −10.27
  γ⋆ (PEXIT) [dB]       −2.06    −3.42    −5.05    −7.14    −10.49

Example 3. Consider the protograph base matrix

B^{(1)}_{2/3} = \begin{bmatrix} 1 & 0 & 0 & 3 & 1 \\ 1 & 1 & 0 & 3 & 0 \\ 1 & 2 & 2 & 1 & 0 \end{bmatrix}.

The inner mother LDPC code ensemble has rate RIM = 2/5. Table II shows the BP decoding thresholds of the corresponding protograph MN code ensemble for various rates, computed with both quantized DE and PEXIT analysis. As for Example 2, the PEXIT analysis tends to underestimate the thresholds. In particular, the accuracy of PEXIT analysis strongly deteriorates with decreasing rate. While for a rate of 0.6 the PEXIT threshold estimate is only 0.03 dB from the quantized DE one, for a rate of 0.2 the gap is around 0.6 dB.

TABLE II
BP decoding thresholds (dB) computed by quantized DE and by PEXIT analysis for various rates. Ensemble defined by the base matrix B(1)2/3 in Example 3.

  R                      0.6      0.5      0.4      0.3      0.2
  γ⋆ (quant. DE) [dB]   −0.69    −2.12    −3.43    −4.72    −6.51
  γ⋆ (PEXIT) [dB]       −0.72    −2.15    −3.55    −5.00    −7.11
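In either flavor, the threshold γ⋆ itself is obtained by a one-dimensional search over Es/N0. A minimal sketch, assuming a hypothetical predicate de_converges(es_n0_db, omega) that runs the chosen DE recursion for the protograph at hand, and assuming convergence is monotone in the SNR:

def decoding_threshold(omega, de_converges, lo=-15.0, hi=5.0, tol=0.01):
    # Bisection for the smallest Es/N0 (in dB) at which DE converges;
    # de_converges is a hypothetical helper, not part of any library.
    assert not de_converges(lo, omega) and de_converges(hi, omega)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if de_converges(mid, omega):
            hi = mid  # DE converges: the threshold lies at or below mid
        else:
            lo = mid  # DE fails: the threshold lies above mid
    return hi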
VI. DISTANCE SPECTRUM ANALYSIS

While the DE analysis of LDPC, MN, and in general turbo-like code ensembles provides a useful characterization of the code performance in the so-called waterfall region of the error probability curve, it fails to capture error floor phenomena that may arise at moderate-to-low error probabilities. Methods that rely on the knowledge of the average weight enumerators of code ensembles are often used to complement DE analysis, allowing to discriminate between code ensembles characterized by good minimum distance properties (e.g., ensembles that yield with high probability codes whose minimum distance grows linearly in the blocklength) and code ensembles with bad minimum distance properties (e.g., ensembles that yield with high probability codes whose minimum distance grows sub-linearly in the blocklength) [39]. By analyzing the distance properties of a given code ensemble, it is thus possible to characterize the error floor region of the error probability curve [42].

In this section, a distance spectrum analysis of protograph MN code ensembles is presented. We first derive a union bound on the average block error probability (Section VI-A), where the average is over both the code and the transmitted codeword. The focus is on mismatched (MM) decoding, as in Section IV-B. The derivation of the union bound allows identifying the kind of weight enumerator required to analyze the error floor regime. A rigorous derivation of the average weight enumerator is then provided, together with a characterization of the distance properties of code ensembles (Section VI-B).

A. Union Bound under Mismatched Decoding

To carry out the derivation of an upper bound on the block error probability under MM decoding, we resort to the EPC model of Section IV-C. By resorting to the EPC setting, the derivation of bounds on the error probability under (6) reduces to the analysis of the error probability under (7). We first consider transmission with an MN code C. The pairwise error probability (PEP) is

\mathrm{PEP}(x') = \mathbb{P}\big[ p(Y|x) Q(S|w) \le p(Y|x') Q(S|w') \big]. \qquad (8)

In (8), the codeword transmitted over the communication channel is x, and it is the result of the encoding of w, where the vector w is transmitted over the a priori channel. The competing codeword is x', and it is the result of the encoding of w'. Note that in (8) ties are broken in favor of the competing codeword. Owing to the symmetry of the communication and a priori channels, and to the linearity of CI, we assume without loss of generality that w = (0, 0, . . . , 0) and hence x = (+1, +1, . . . , +1). Conditioned on X = x and W = w, S is uniformly distributed over the set of h-bit sequences with Hamming weight ωh, whereas Y1, Y2, . . . , Yn are i.i.d. ∼ N(+1, σ²). We can rewrite (8) as

\mathrm{PEP}(x') = \mathbb{P}\Big[ \sum_{i=1}^{n} \ln \frac{p(Y_i|x_i)}{p(Y_i|x'_i)} \le \sum_{i=1}^{h} \ln \frac{Q(S_i|w'_i)}{Q(S_i|w_i)} \Big] = \mathbb{P}\Big[ \sum_{i \in D(x')} L_i \le -\sum_{i \in D(w')} T_i \Big]. \qquad (9)

In (9) we made use of D(x') = {i : x'_i ≠ x_i} and D(w') = {i : w'_i ≠ w_i}, whereas L_i := ln[p(Y_i|0)/p(Y_i|1)] and T_i := ln[Q(S_i|0)/Q(S_i|1)]. Denote

L := \sum_{i \in D(x')} L_i \quad \text{and} \quad T := \sum_{i \in D(w')} T_i.

Moreover, let a = dH(w, w') and b = dH(x, x'). Conditioned on X = x and W = w, we have L ∼ N(2b/σ², 4b/σ²). Recalling ∆ = ln[(1 − ω)/ω], we have that Ti = ∆ if Si = 0, whereas Ti = −∆ if Si = 1, i.e.,

T = (a - E)\Delta - E\Delta = (a - 2E)\Delta

where the r.v. E follows a hypergeometric distribution with parameters (h, ωh, a). For the PEP we obtain

\mathrm{PEP}(x') = \mathbb{E}\left[ Q\!\left( \frac{2b/\sigma^2 + (a - 2E)\Delta}{2\sqrt{b}/\sigma} \right) \right]

where Q(x) is the well-known Gaussian Q-function. By observing that PEP(x') depends on x' only through its Hamming distance from x, and on the Hamming distance between the corresponding information sequence w' and w, we can upper bound the block error probability under (7) as

P_B \le \sum_{a=1}^{h} \sum_{b=1}^{n} A^{\mathrm{IO}}_{a,b}\, \mathbb{E}\left[ Q\!\left( \frac{2b/\sigma^2 + (a - 2E)\Delta}{2\sqrt{b}/\sigma} \right) \right] \qquad (10)

where A^{IO}_{a,b} is the input-output weight enumerator of CI.
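The expectation over the hypergeometric r.v. E in (10) can be evaluated exactly by summing over its support; a minimal sketch (scipy assumed available; the example weights at the end are purely illustrative):

import numpy as np
from scipy.stats import hypergeom, norm

def pep(a: int, b: int, h: int, omega: float, sigma2: float) -> float:
    # E[ Q( (2b/sigma^2 + (a - 2E) * Delta) / (2 sqrt(b) / sigma) ) ],
    # with E ~ Hypergeometric(h, omega*h, a) and Q(x) = norm.sf(x)
    delta = np.log((1.0 - omega) / omega)
    rv = hypergeom(M=h, n=round(omega * h), N=a)
    e = np.arange(a + 1)  # pmf is zero outside the support
    arg = (2.0 * b / sigma2 + (a - 2.0 * e) * delta) / (2.0 * np.sqrt(b / sigma2))
    return float(np.sum(rv.pmf(e) * norm.sf(arg)))

# one term of (10); a truncated bound sums A_IO[a, b] * pep(a, b, ...) over
# the low-weight portion of the enumerator
print(pep(a=4, b=17, h=600, omega=0.2, sigma2=0.5))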
If the code is drawn uniformly at random from a protograph MN code ensemble, by averaging over the code ensemble we obtain the upper bound on the average block error probability

\mathbb{E}[P_B(\mathcal{C})] \le \sum_{a=1}^{h} \sum_{b=1}^{n} \bar{A}^{\mathrm{IO}}_{a,b}\, \mathbb{E}\left[ Q\!\left( \frac{2b/\sigma^2 + (a - 2E)\Delta}{2\sqrt{b}/\sigma} \right) \right] \qquad (11)

where \bar{A}^{IO}_{a,b} is the average input-output weight enumerator of CI.

Remark 2. By inspection of (11), we see that a key role in the block error probability analysis is played by the average input-output weight enumerator of the code ensemble. In particular, numerical evaluation of the PEP for various values of a and b reveals that, at low probabilities of error, (10) and (11) are dominated by the terms of the (average) input-output weight enumerator associated with small input weight a and small output weight b over a wide range of parameters. More specifically, for very small values of ω (very low outer code rate RO) and very low SNR, terms of the input-output weight enumerator with small input weight a yield a dominant contribution to the error probability. For a broad region of intermediate values of ω and of the SNR, the terms that contribute most to the error probability are associated with small input and small output weights. As ω approaches 1/2 (the outer code rate RO approaches 1) and the SNR grows large, the output weight b gradually takes a predominant role.

B. Average Input-Output Weight Enumerators

Let I be a subset of VNs in the lifted graph of CI. We assign the value 1 to each of the VNs in I and the value 0 to the VNs outside I. The set I contains a punctured VNs and b unpunctured ones. Before deriving the average input-output weight enumerator, we define the VN and edge weight vectors. Define the VN weight vector ϵ = (ϵ1, ϵ2, . . . , ϵ_{h0+n0}), where ϵj is the number of type-j VNs in I. Recalling that the protograph lifting factor is ℓ, we have

0 \le \epsilon_j \le \ell \quad \text{for all } j \in \{1, \ldots, h_0 + n_0\} \qquad (12)

with the constraints

\sum_{j=1}^{h_0} \epsilon_j = a \qquad (13)

\sum_{j=h_0+1}^{h_0+n_0} \epsilon_j = b. \qquad (14)

Similarly, define the edge weight vector κ(ϵ) = (κg)_{g∈E}, where κg is the number of type-g edges connected to the VNs in I. The VN and edge weight vectors are related: for a given ϵ, we have κg = ϵj if g ∈ Evj. Recalling that n = n0ℓ and h = h0ℓ, the average input-output weight enumerator of the inner LDPC code is given by the following lemma.

Lemma 2. The average number of codewords with input weight a and output weight b in a code drawn uniformly at random from the protograph LDPC code ensemble defined by P is

\bar{A}^{\mathrm{IO}}_{a,b} = \sum_{\epsilon} \frac{ \prod_{i=1}^{n_0} \mathrm{coeff}\big( S_i(z_i)^{\ell},\, z_i^{\kappa_i(\epsilon)} \big) }{ \prod_{j=1}^{h_0+n_0} \binom{\ell}{\epsilon_j}^{d_{v_j}-1} }

where

S_i(z_i) = \frac{1}{2} \Big[ \prod_{g \in \mathcal{E}_{c_i}} (1 + z_g) + \prod_{g \in \mathcal{E}_{c_i}} (1 - z_g) \Big] \qquad (15)

and where κi(ϵ) = (κg)_{g∈Eci} with κg = ϵj if g ∈ Evj, z = (zg)_{g∈E}, zi = (zg)_{g∈Eci}, and zg, g ∈ Eci, are dummy variables. The sum is over the VN weight vectors ϵ = (ϵ1, ϵ2, . . . , ϵ_{h0+n0}) satisfying (12)–(14).

The proof of Lemma 2 is provided in Appendix A. Lemma 2 provides the average number of codewords with input weight a and output weight b for a finite blocklength n, and the result can be readily used in (11) to upper bound the average block error probability of a protograph MN code ensemble {Cω(P)}ω∈(0,1). To obtain information about the scaling of the minimum distance with the blocklength, it is useful to analyze the normalized logarithmic asymptotic input-output weight distribution for the LDPC code ensemble defined by P for a = αn and b = βn, which is defined as

G(\alpha, \beta) := \lim_{n \to \infty} \frac{1}{n} \ln \bar{A}^{\mathrm{IO}}_{\alpha n, \beta n}, \qquad 0 \le \alpha \le h/n,\ 0 \le \beta \le 1.
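For intuition on the coefficient extraction in Lemma 2, the following sketch handles the special case of a CN type whose edges carry a single dummy variable (a univariate specialization chosen purely for illustration; see also Lemma 3 below on when variables can be merged):

import sympy as sp

z = sp.symbols("z")

def cn_coeff(d: int, ell: int, k: int):
    # coeff( S(z)**ell, z**k ) with S(z) = ((1+z)**d + (1-z)**d) / 2,
    # i.e., the even-weight generating function of a degree-d CN type
    S = sp.expand(((1 + z) ** d + (1 - z) ** d) / 2)
    return sp.expand(S ** ell).coeff(z, k)

# ell = 4 CN copies of degree d = 3, total edge weight k = 2
print(cn_coeff(d=3, ell=4, k=2))  # -> 12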
The result is provided by the following theorem.

Theorem 1. The normalized logarithmic asymptotic input-output weight distribution for the LDPC code ensemble defined by P with input weight αn and output weight βn is

G(\alpha, \beta) = \frac{1}{n_0} \sum_{i=1}^{n_0} \ln S_i(z_i^*) - \sum_{j=1}^{h_0+n_0} \Big[ \frac{d_{v_j}-1}{n_0} H(n_0 \tilde{\epsilon}_j^*) + \tilde{\epsilon}_j^* \sum_{g \in \mathcal{E}_{v_j}} \ln z_g^* \Big]

where we recall that H(p) is the natural binary entropy function. The values z*g for g ∈ E, ϵ̃⋆j for j ∈ {1, . . . , h0 + n0}, and µ1, µ2 are the solutions of

z_g \frac{\partial \ln S_i(z_i)}{\partial z_g} = n_0 \tilde{\epsilon}_j, \quad g \in \mathcal{E}_{c_i} \cap \mathcal{E}_{v_j},\ i \in \{1, \ldots, n_0\},\ j \in \{1, \ldots, h_0 + n_0\},

and of

(d_{v_j} - 1) \ln\!\left( \frac{\tilde{\epsilon}_j}{\tfrac{1}{n_0} - \tilde{\epsilon}_j} \right) = \sum_{g \in \mathcal{E}_{v_j}} \ln z_g + \mu_1 \qquad (16)

for j ∈ {1, . . . , h0},

(d_{v_j} - 1) \ln\!\left( \frac{\tilde{\epsilon}_j}{\tfrac{1}{n_0} - \tilde{\epsilon}_j} \right) = \sum_{g \in \mathcal{E}_{v_j}} \ln z_g + \mu_2 \qquad (17)

for j ∈ {h0 + 1, . . . , h0 + n0}, and

\sum_{j=1}^{h_0} \tilde{\epsilon}_j = \alpha \qquad (18)

\sum_{j=h_0+1}^{h_0+n_0} \tilde{\epsilon}_j = \beta \qquad (19)

where Si(zi) is defined in (15).

The proof of Theorem 1 can be found in Appendix B. Theorem 1 shows that the evaluation of G(α, β) requires solving |E| + h0 + n0 + 2 equations in the same number of variables: zg (|E| variables), ϵ̃⋆j (h0 + n0 variables), and µ1, µ2 (2 variables). We present the following toy example to clarify the notation.

Example 4. Consider again the protograph base matrix from Example 1, given by

B = \left[\begin{array}{c|cc} 1 & 2 & 0 \\ 1 & 1 & 2 \end{array}\right].

We have h0 = 1 and n0 = 2. The set of edges in the protograph is E = {1, 2, . . . , 7}; thus |E| = 7. For each CN and VN in the protograph, we determine the set of edges connected to it: Ec1 = {1, 2, 3}, Ec2 = {4, 5, 6, 7}, Ev1 = {1, 4}, Ev2 = {2, 3, 5}, Ev3 = {6, 7}. Thus, the VN degrees are dv1 = 2, dv2 = 3, and dv3 = 2. The generating function of the CN c1 is

S_1(z_1) = \frac{1}{2} \Big[ \prod_{g \in \{1,2,3\}} (1 + z_g) + \prod_{g \in \{1,2,3\}} (1 - z_g) \Big]

where z1 = (z1, z2, z3) and z1, z2, z3 are dummy variables. Similarly, the generating function of the CN c2 is

S_2(z_2) = \frac{1}{2} \Big[ \prod_{g \in \{4,5,6,7\}} (1 + z_g) + \prod_{g \in \{4,5,6,7\}} (1 - z_g) \Big]

where z2 = (z4, z5, z6, z7) and z4, z5, z6, z7 are dummy variables. For a fixed integer pair (a, b), we have

\bar{A}^{\mathrm{IO}}_{a,b} = \sum_{\epsilon} \frac{ \mathrm{coeff}\big( S_1(z_1)^{\ell}, z_1^{\kappa_1(\epsilon)} \big)\, \mathrm{coeff}\big( S_2(z_2)^{\ell}, z_2^{\kappa_2(\epsilon)} \big) }{ \binom{\ell}{\epsilon_1} \binom{\ell}{\epsilon_2}^{2} \binom{\ell}{\epsilon_3} }

where the sum is over the VN weight vectors ϵ = (ϵ1, ϵ2, ϵ3) satisfying 0 ≤ ϵ1, ϵ2, ϵ3 ≤ ℓ, ϵ1 = a, and ϵ2 + ϵ3 = b. For each VN weight vector ϵ = (ϵ1, ϵ2, ϵ3), we can compute the edge weight vector κ(ϵ) = (κ1, κ2, . . . , κ7). Since κg = ϵj if g ∈ Evj, we obtain for this example κ(ϵ) = (ϵ1, ϵ2, ϵ2, ϵ1, ϵ2, ϵ3, ϵ3). Further, for each CN ci in the protograph, we determine κi(ϵ), which contains the weights of the edges connected to it; formally, κi(ϵ) = (κg)_{g∈Eci}. In this case, we have κ1(ϵ) = (κ1, κ2, κ3) = (ϵ1, ϵ2, ϵ2) and κ2(ϵ) = (κ4, κ5, κ6, κ7) = (ϵ1, ϵ2, ϵ3, ϵ3). The evaluation of the normalized logarithmic asymptotic input-output weight distribution G(α, β) requires solving a system of |E| + h0 + n0 + 2 = 12 equations in the same number of unknowns (z1, z2, z3, z4, z5, z6, z7, ϵ̃1, ϵ̃2, ϵ̃3, µ1, µ2).

The following lemma follows the approach of [43], and it can reduce the dimension of the system of equations, hence simplifying the calculation.

Lemma 3. Let u, v be two edges in E. If u and v are connected to the same VN-CN pair in the protograph, then z*u = z*v.

Proof. The function Si(zi) in (15) is symmetric in the variables zg, g ∈ Eci. Thus, for the system of equations in Theorem 1, if there is a solution with z*u = θ1, z*v = θ2, then another solution exists with z*u = θ2, z*v = θ1 (all other variables being unchanged). Since the solutions z*g, g ∈ E, are unique, we have θ1 = θ2.
Remark 3. The derivation of the normalized logarithmic asymptotic input-output weight distribution for the LDPC code ensemble defined by P allows distinguishing between two behaviors. Suppose that G(α, β) is strictly positive for 0 < α < ξ and 0 < β < ξ, where ξ is an arbitrarily small positive constant. In this case, an exponentially large number of codewords with input weight a = αn and output weight b = βn is expected, with a and b small compared to n. According to (11), ensembles displaying this behavior will be characterized by poor error floor performance, since \bar{A}^{IO}_{a,b} will be large for small a, b. We will refer to ensembles possessing this property as bad ensembles. On the contrary, we refer to ensembles for which there exists a positive ξ such that G(α, β) is strictly negative for 0 < α < ξ and 0 < β < ξ as good ensembles.

We provide next two examples of “good” and “bad” code ensembles, according to the definition of Remark 3.

Example 5. Consider the protograph base matrix

B = \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \end{bmatrix}.

The protograph defines an inner mother LDPC code ensemble with rate RIM = 1/3 that can be used to construct a protograph MN code ensemble with inner code rate RI = 1/2. Figures 7(a) and 7(b) depict the normalized logarithmic asymptotic input-output weight distribution. We observe that G(α, β) is positive for α > 0 and β > 0, with α and β small. Hence, according to Remark 3, the ensemble is “bad”.

Example 6. Consider the protograph base matrix

B = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \end{bmatrix}.

The protograph defines an inner mother LDPC code ensemble with rate RIM = 1/4 that can be used to construct a protograph MN code ensemble with inner code rate RI = 1/3. Figures 8(a) and 8(b) depict the normalized logarithmic asymptotic input-output weight distribution. We observe that G(α, β) is negative for α > 0 and β > 0, with α and β small. Hence, according to Remark 3, the ensemble is “good”.

[Fig. 7: Normalized logarithmic asymptotic input-output weight distribution G(α, β) (a) for the LDPC code ensemble of Example 5. The view from the top is given in (b), where the white color denotes points where G(α, β) is negative.]

[Fig. 8: Normalized logarithmic asymptotic input-output weight distribution G(α, β) (a) for the LDPC code ensemble of Example 6. The view from the top is given in (b), where the white color denotes points where G(α, β) is negative.]

VII. CODE DESIGN

A first step in the design of MN codes is the identification of a suitable inner LDPC code protograph ensemble. For this purpose, the iterative threshold γ⋆ can be used as the cost function to be minimized. In particular, suppose we are interested in finding an inner mother code protograph that allows operating close to capacity over a range [\underline{R}, \overline{R}] of rates R, i.e., over a range [\underline{\omega}, \overline{\omega}] of values of ω. The protograph search begins by fixing the protograph parameters h0, n0, which define the inner code rate RI. A set of target rates \mathcal{R} ⊂ [\underline{R}, \overline{R}] is then selected. For each target rate R ∈ \mathcal{R} we can derive the rate of the DM as RO = R/RI, out of which the DM parameter ω is obtained. We make use of the following definition.

Definition 1. Consider a protograph P. We define the worst-case loss (WCL) as

\mathrm{WCL}(\mathsf{P}, \mathcal{R}) := \max_{R \in \mathcal{R}} \big[ \gamma^{\star}(R) - C^{-1}_{\mathrm{AWGN}}(R) \big]. \qquad (20)
The WCL is the maximum gap between the protograph iterative decoding threshold and the biAWGN channel Shannon limit for the rates in \mathcal{R}. The WCL provides a measure of the capability of MN codes constructed from the inner LDPC protograph ensemble specified by P to approach the Shannon limit for different choices of the DM parameter ω (and, hence, over the corresponding range of code rates). A search for the protograph with parameters h0, n0 that minimizes the WCL in (20) can be carried out, for example, via differential evolution [44]. We provide next some examples of application to the design of protograph-based MN code ensemble families addressing different rate regimes. In all examples, the search space was limited by setting the maximum number of parallel edges between protograph VN/CN pairs (i.e., the value of the base matrix elements) to 3.

Example 7. Consider a code rate range [0.1, 0.5]. An MN code family addressing this range can be derived from an inner RI = 1/2 code, i.e., the mother code has rate RIM = 1/3. We search for protographs with 6 VNs and 4 CNs, minimizing the WCL over \mathcal{R} = {0.1, 0.3, 0.5}. We obtain the base matrix

B_{1/2} = \begin{bmatrix} 1 & 0 & 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 3 & 0 & 1 \\ 2 & 0 & 1 & 1 & 1 & 0 \\ 1 & 2 & 1 & 2 & 0 & 0 \end{bmatrix}

where the first two columns are associated with punctured VNs. Note that the base matrix is the same as in Example 2. The iterative decoding thresholds (obtained for a small sample of the possible rates) are depicted in Figure 9.

Example 8. Consider a code rate range [0.1, 0.666]. We fix the inner code rate to RI = 2/3 and the dimensions of the base matrix to 3 × 5, where the first two columns are associated with punctured VNs. We search for protographs minimizing the WCL over \mathcal{R} = {0.1, 0.3, 0.666}. We obtain the base matrix

B^{(1)}_{2/3} = \begin{bmatrix} 1 & 0 & 0 & 3 & 1 \\ 1 & 1 & 0 & 3 & 0 \\ 1 & 2 & 2 & 1 & 0 \end{bmatrix}.

The iterative decoding thresholds (again, obtained for a small sample of the possible rates) are depicted in Figure 9. Remarkably, the iterative decoding thresholds for both ensembles of Examples 7 and 8 (displayed in Figure 9) are within 1 dB from the Shannon limit over a wide range of rates.

[Fig. 9: Iterative decoding thresholds computed for the MN code ensembles defined by the base matrices described in Example 7, Example 8, and Example 9, compared with the biAWGN channel capacity (rate vs. Es/N0 in dB).]

For both designs in Examples 7 and 8, we did not restrict the search to protographs yielding good ensembles (see Remark 3). By studying the input-output weight enumerators of the inner protograph LDPC code ensembles defined by the base matrices of Examples 7 and 8, we can observe that both base matrices result in bad ensembles. Hence, we should expect codes constructed from these base matrices to exhibit (relatively) high error floors. Evidence in the form of numerical results will be provided in Section VIII. We modify the protograph search by including a check to distinguish between protographs yielding good and bad ensembles, according to the criterion introduced in Remark 3. By doing so, bad ensembles can be discarded during the protograph search. An example of a code ensemble obtained using this approach is provided next.

Example 9. Consider a code rate range [0.1, 0.666], an inner RI = 2/3 code, and a base matrix of dimensions 3 × 5. We search for protographs minimizing the WCL over \mathcal{R} = {0.1, 0.3, 0.666}, excluding bad ensembles.
We obtain the base matrix

B^{(2)}_{2/3} = \begin{bmatrix} 3 & 3 & 3 & 0 & 0 \\ 0 & 1 & 3 & 1 & 0 \\ 1 & 0 & 2 & 0 & 1 \end{bmatrix}

where the first two columns are associated with punctured VNs. The base matrix above describes a rate-2/5 inner mother LDPC code, which can be obtained by concatenating an outer rate-2/3 LDPC code with regular VN degree 3 and CN degree 9, with an inner low-density generator-matrix code. The construction is reminiscent of the LDPC codes adopted by the 3GPP New Radio (5G) standard [16] and of the Raptor-based LDPC code class introduced in [14]. The iterative decoding thresholds for a small set of the possible rates are depicted in Figure 9. Observe that the thresholds from the constrained optimization result in a slight loss at high rates compared to the unconstrained case of Example 8.

VIII. NUMERICAL RESULTS

Numerical results on the performance of protograph MN codes over the biAWGN channel are provided next. The results are obtained via Monte Carlo simulations with a maximum of 100 BP decoding iterations. All the codes used in the simulations are obtained by lifting the inner LDPC code protograph by a circulant version of the progressive edge growth (PEG) algorithm [45].

The first set of simulation results confirms the tightness of the union bound in (10) at low error probabilities. For this purpose, we designed length-1200 MN codes based on the protograph of Example 7. The maximum code rate R = 1/2 is obtained for ω = 1/2. The performance for various code rates is depicted in Figure 10, in terms of FER vs. SNR, together with the iterative decoding thresholds of the corresponding MN code ensembles Cω(P). On the same chart, we depict the union bound (10), truncated to the contributions given by codewords with small input/output weights. Owing to the moderate blocklength of the code, we resort to an enumeration of the lower tail of the inner code distance spectrum by means of the efficient algorithm introduced in [46]. The truncated union bound (TUB) provides an excellent prediction of the FER at low error probabilities, confirming the validity of the approach in Section VI-B for the analysis of the error floor performance.³ Interestingly, the TUBs indicate at large SNR a diminishing return in coding gain when the rate of the outer CC is reduced, whereas the coding gains at moderate FER are more sizable. In accordance with the analysis of the normalized logarithmic asymptotic input-output weight distribution of the inner LDPC code ensemble, the codes show poor performance at moderate-to-low error rates: recall that the base matrix of Example 7 defines a bad ensemble. We can see that, at the highest code rate (R = 1/2), the TUB is met at a FER ≈ 3 × 10⁻⁴, signaled by a visible change in the slope of the curve.

Footnote 3: Note that the TUB simply provides an estimate of the frame error rate at sufficiently low error probabilities. The wording “error floor”, which is widely used in this context, can be misleading: the frame error rate curves do not “flatten” onto an actual floor, but rather follow a gentle slope.

[Fig. 10: FER vs. Es/N0 (in dB) for length-1200 protograph MN codes with inner mother code base matrix B1/2 and for rates R = 0.5 (black), R = 0.4 (orange), R = 0.3 (red), R = 0.2 (green), and R = 0.1 (blue). The TUB on the block error probability for each rate is provided (dashed lines), as well as the corresponding iterative decoding thresholds γ⋆(0.5), . . . , γ⋆(0.1).]
The change in slope is less abrupt at the lower code rates, where the effect of low-input/low-output weight error patterns causes a performance degradation already at relatively high error rates. At rate 1/10, it is almost impossible to distinguish the waterfall region.

The performance of MN codes of different rates and with blocklength 1800 is provided in Figure 11, in terms of FER vs. SNR. The chart includes results for two families of MN codes constructed from the MN code ensembles of Examples 8 and 9. Note that while a fine granularity of code rates can be achieved by suitably choosing the DM parameter ω, for the simulations only three rates were sampled, i.e., the maximum code rate allowed by the ensembles (R = 2/3), a low rate (R = 1/5), and an intermediate rate (R = 2/5). The MN codes defined by the ensemble of Example 9 show superior performance compared to the codes defined by the ensemble of Example 8 at low error rates. In particular, for the code derived from B(1)2/3 the algorithm of [46] was able to find low-weight codewords (e.g., with input weight 4 and output weight 17) that are responsible for the poor performance of the R = 1/5 and R = 2/5 codes at moderate-to-low error rates. As observed for the codes defined by the protograph of Example 7, the flooring phenomenon is particularly severe at the lower code rates, where the error rate curve does not show a proper waterfall region. At the highest rate (R = 2/3), we may expect the actual FER to approach the respective TUB at error rates below 10⁻⁸.

[Fig. 11: FER vs. Es/N0 (in dB) for length-1800 protograph MN codes with inner mother code base matrix B(1)2/3 (Example 8, red, dashed lines) and B(2)2/3 (Example 9, blue, solid lines), for rates R = 1/5, R = 2/5, and R = 2/3. TUBs for the MN code family with base matrix B(1)2/3 are provided (light-red, dashed lines) for rates R = 1/5, R = 2/5, and R = 2/3 (left to right).]

We applied the search algorithm of [46] to the code defined by the base matrix B(2)2/3. The search returned only codewords of relatively large weight (e.g., input weight 27, output weight 95), resulting in TUB FER predictions that are well below the FER values depicted in Figure 11. Note finally that, in the simulated error rate regime, the rate-2/3 code defined by the base matrix B(1)2/3 outperforms its counterpart based on B(2)2/3. This fact is consistent with the analysis of Figure 9: the performance in the waterfall region is still dominated by the asymptotic iterative decoding threshold. While the simulation results do not allow us to explore the error floor performance of the rate-2/3 codes, owing to the TUB analysis we may still expect the code derived from B(2)2/3 to outperform the code derived from B(1)2/3 at very low error rates. These results confirm that the criterion identified in Remark 3 to differentiate good and bad ensembles (from an error floor perspective) can be used, at an early design stage, to discard protographs that yield codes with poor performance at low error rates. The observations derived from Figure 11 are confirmed at larger blocklengths.
In particular, Figure 12 reports the performance of MN codes with blocklength 9000, for the same ensembles of Example 8 and of Example 9. Due to the relatively large blocklength, all curves are relatively steep at moderate-to-high error rates. For the code based on B(1)2/3, a drastic change in slope takes place at FER ≈ 10⁻⁵, for both R = 1/5 and R = 2/5. Here, the simulation results approach the respective TUB predictions. At the lower rate (R = 1/5), the waterfall performance of the code from the ensemble of Example 8 is again hindered by the emergence of a high error floor. As for the blocklength-1800 case, at the highest rate (R = 2/3) we may expect the actual FER to approach the respective TUB at error rates below 10⁻⁸. Again, for the code based on B(2)2/3 the TUB analysis returned FER predictions well below the values depicted in Figure 12. As for the blocklength-1800 case, in the simulated error rate regime the rate-2/3 code defined by the base matrix B(1)2/3 outperforms its counterpart based on B(2)2/3; recall that the performance in the waterfall region is dominated by the iterative decoding threshold. As before, while the simulation results do not allow us to explore the error floor performance of the rate-2/3 codes, owing to the TUB analysis we may still expect the code derived from B(2)2/3 to outperform the code derived from B(1)2/3 at very low error rates.

[Fig. 12: FER vs. Es/N0 (in dB) for length-9000 protograph MN codes with inner mother code base matrix B(1)2/3 (Example 8, red, dashed lines) and B(2)2/3 (Example 9, blue, solid lines), for rates R = 1/5, R = 2/5, and R = 2/3. TUBs for the MN code family with base matrix B(1)2/3 are provided (light-red, dashed lines) for rates R = 1/5, R = 2/5, and R = 2/3 (left to right).]

IX. CONCLUDING REMARKS

Protograph MacKay-Neal (MN) codes have been introduced and analyzed. The code construction relies on the concatenation of an inner, protograph-based low-density parity-check (LDPC) code with an outer constant composition (CC) code. The outer code encoder acts as a distribution matcher (DM) that enables changing the rate of the MN code by simply changing the Hamming weight of the CC codewords. Noting that the resulting concatenation defines a nonlinear block code, an equivalent communication model is introduced. The equivalent model allows analyzing the performance of MN codes by studying the error probability of the inner (linear) LDPC code, with transmission taking place in parallel over the communication channel and over a suitably defined binary symmetric channel. A density evolution analysis is provided, and it is complemented by a characterization of the distance properties of the code ensembles. The distance spectrum analysis serves to discriminate between ensembles that will originate codes with high error floors and ensembles yielding codes with low error floors. A code design technique is proposed. The accuracy of the analysis and the validity of the code design method are confirmed by Monte Carlo simulations.
Examples of code designs are provided, showing how the use of a single LDPC code ensemble allows operating within 1 dB of the Shannon limit over a wide range of code rates, where the code rate is selected by tuning the DM parameters. By enabling rate flexibility with a constant blocklength, and with a fixed LDPC code as inner code, the construction provides an appealing solution for very high-throughput wireless and optical links that employ binary-input modulations.

ACKNOWLEDGMENT

The authors would like to thank Prof. Gerhard Kramer (Technical University of Munich) for his constructive suggestions and the anonymous reviewers for their insightful comments that helped improve the final version of this paper.

APPENDIX A
PROOF OF LEMMA 2

Consider the Tanner graph of an LDPC code drawn uniformly at random from the ensemble defined by a protograph P. We randomly choose a set I of a punctured VNs and b unpunctured ones. We assign the value one to each VN in I and zero to the VNs outside I. The edges connected to a VN v are assigned the value chosen for v. For a given ϵ, each vj ∈ V has ϵj replicas in I. Since there are ℓ copies of each VN type in the lifted graph, the number of VN sets with weight vector ϵ is given by

N_v(\epsilon) = \prod_{j=1}^{h_0+n_0} \binom{\ell}{\epsilon_j}.

Since κg = ϵj if g ∈ Evj, the number of edge sets with weight vector κ(ϵ) is

N_e(\kappa(\epsilon)) = \prod_{g \in \mathcal{E}} \binom{\ell}{\kappa_g} = \prod_{j=1}^{h_0+n_0} \prod_{g \in \mathcal{E}_{v_j}} \binom{\ell}{\epsilon_j} = \prod_{j=1}^{h_0+n_0} \binom{\ell}{\epsilon_j}^{d_{v_j}}.

Let Nc(κ(ϵ)) be the number of configurations with edge weight vector κ(ϵ) such that all CNs are satisfied. A CN is satisfied if the sum of the bits assigned to its connected edges is zero. Consider a CN of type i. The number of configurations for which the CN is satisfied is tracked by the generating function

S_i(z_i) = \sum_{\substack{c \in \{0,1\}^{d_{c_i}} \\ w_H(c)\ \mathrm{even}}} z_i^c = \frac{1}{2} \Big[ \prod_{g \in \mathcal{E}_{c_i}} (1 + z_g) + \prod_{g \in \mathcal{E}_{c_i}} (1 - z_g) \Big].

Considering all CN types, and that there are ℓ CNs of each type, we obtain

N_c(\kappa(\epsilon)) = \prod_{i=1}^{n_0} \mathrm{coeff}\big( S_i(z_i)^{\ell}, z_i^{\kappa_i(\epsilon)} \big).

The proof is completed by observing that

\bar{A}^{\mathrm{IO}}_{a,b} = \sum_{\epsilon} \frac{N_v(\epsilon)\, N_c(\kappa(\epsilon))}{N_e(\kappa(\epsilon))}

where the sum is over the VN weight vectors ϵ = (ϵ1, ϵ2, . . . , ϵ_{h0+n0}) satisfying (12)–(14).

APPENDIX B
PROOF OF THEOREM 1

From Lemma 1, and recalling that ℓ = n/n0, we have

\mathrm{coeff}\big( S_i(z_i)^{n/n_0}, z_i^{n\tilde{\kappa}_i(\tilde{\epsilon})} \big) \doteq \exp\Big\{ n \Big[ \frac{1}{n_0} \ln S_i(z_i^*) - \sum_{g \in \mathcal{E}_{c_i}} \tilde{\kappa}_g \ln z_g^* \Big] \Big\}

where ϵ̃ = ϵ/n, κ̃(ϵ̃) = κ(ϵ)/n, and z*g for g ∈ E are the unique positive solutions of

z_g \frac{\partial \ln S_i(z_i)}{\partial z_g} = n_0 \tilde{\epsilon}_j \quad \forall i \in \{1, \ldots, n_0\},\ g \in \mathcal{E}_{c_i} \cap \mathcal{E}_{v_j}.

We have

\prod_{j=1}^{h_0+n_0} \binom{\ell}{\epsilon_j}^{d_{v_j}-1} \doteq \exp\Big\{ n \sum_{j=1}^{h_0+n_0} \frac{d_{v_j}-1}{n_0} H(n_0 \tilde{\epsilon}_j) \Big\}.

Thus,

\bar{A}^{\mathrm{IO}}_{\alpha n, \beta n} \doteq \sum_{\tilde{\epsilon}} \exp\big( n J(\tilde{\epsilon}) \big)

where

J(\tilde{\epsilon}) = \frac{1}{n_0} \sum_{i=1}^{n_0} \ln S_i(z_i^*) - \sum_{j=1}^{h_0+n_0} \Big[ \frac{d_{v_j}-1}{n_0} H(n_0 \tilde{\epsilon}_j) + \tilde{\epsilon}_j \sum_{g \in \mathcal{E}_{v_j}} \ln z_g^* \Big].

Hence, we have

G(\alpha, \beta) = \max_{\tilde{\epsilon}} J(\tilde{\epsilon})

under the constraints

\sum_{j=1}^{h_0} \tilde{\epsilon}_j = \alpha, \qquad \sum_{j=h_0+1}^{h_0+n_0} \tilde{\epsilon}_j = \beta

obtained from (13) and (14). Using the Lagrange multipliers µ1 and µ2, the entries of ϵ̃⋆ = arg max J(ϵ̃) are the solutions of (16) and (17), where the values of µ1 and µ2 are obtained by enforcing the conditions (18) and (19).

REFERENCES

[1] A. Zahr, B. Matuz, and G. Liva, “Rate-adaptive protograph MacKay-Neal codes,” in Proc. IEEE Inf. Theory Workshop, Saint Malo, France, Apr. 2023.
[2] B. P. Smith, A. Farhood, A. Hunt, F. R. Kschischang, and J. Lodge, “Staircase codes: FEC for 100 Gb/s OTN,” IEEE/OSA J. Lightw. Technol., vol. 30, no. 1, pp. 110–117, Jan. 2012.
[3] Y.-Y. Jian, H. D. Pfister, and K. R. Narayanan, “Approaching capacity at high rates with iterative hard-decision decoding,” IEEE Trans. Inf. Theory, vol. 63, no. 9, pp. 5752–5773, Sep. 2017.
APPENDIX B
PROOF OF THEOREM 1

From Lemma 1, and recalling that $\ell = n/n_0$, we have
$$\mathrm{coeff}\left(S_i(z_i)^{\frac{n}{n_0}},\, z_i^{n\tilde{\kappa}_i(\tilde{\epsilon})}\right) \doteq \exp\left\{ n \left[ \frac{1}{n_0} \ln S_i(z^*_i) - \sum_{g\in E_{c_i}} \tilde{\kappa}_g \ln z^*_g \right]\right\}$$
where $\tilde{\epsilon} = \epsilon/n$, $\tilde{\kappa}(\tilde{\epsilon}) = \kappa(\epsilon)/n$, and $z^*_g$ for $g \in E$ are the unique positive solutions of
$$z_g \frac{\partial \ln S_i(z_i)}{\partial z_g} = n_0 \tilde{\epsilon}_j \qquad \forall i \in \{1, \ldots, n_0\},\ g \in E_{c_i} \cap E_{v_j}.$$
We have
$$\prod_{j=1}^{h_0+n_0} \binom{\ell}{\epsilon_j}^{d_{v_j}-1} \doteq \exp\left\{ n \sum_{j=1}^{h_0+n_0} \frac{d_{v_j}-1}{n_0}\, H(n_0\tilde{\epsilon}_j) \right\}.$$
Thus,
$$\bar{A}^{\mathrm{IO}}_{\alpha n, \beta n} \doteq \sum_{\tilde{\epsilon}} \exp\left(n J(\tilde{\epsilon})\right)$$
where
$$J(\tilde{\epsilon}) = \frac{1}{n_0} \sum_{i=1}^{n_0} \ln S_i(z^*_i) - \sum_{j=1}^{h_0+n_0} \left[ \frac{d_{v_j}-1}{n_0}\, H(n_0\tilde{\epsilon}_j) + \tilde{\epsilon}_j \sum_{g\in E_{v_j}} \ln z^*_g \right].$$
Hence, we have $G(\alpha, \beta) = \max_{\tilde{\epsilon}} J(\tilde{\epsilon})$ under the constraints
$$\sum_{j=1}^{h_0} \tilde{\epsilon}_j = \alpha, \qquad \sum_{j=h_0+1}^{h_0+n_0} \tilde{\epsilon}_j = \beta,$$
obtained from (13), (14). Using the Lagrange multipliers $\mu_1$ and $\mu_2$, the entries of $\tilde{\epsilon}^\star = \arg\max J(\tilde{\epsilon})$ are the solutions of (16) and (17), where the values of $\mu_1$ and $\mu_2$ are obtained by enforcing the conditions (18) and (19).

REFERENCES

[1] A. Zahr, B. Matuz, and G. Liva, “Rate-adaptive protograph MacKay-Neal codes,” in Proc. IEEE Inf. Theory Workshop, Saint Malo, France, Apr. 2023.
[2] B. P. Smith, A. Farhood, A. Hunt, F. R. Kschischang, and J. Lodge, “Staircase codes: FEC for 100 Gb/s OTN,” IEEE/OSA J. Lightw. Technol., vol. 30, no. 1, pp. 110–117, Jan. 2012.
[3] Y.-Y. Jian, H. D. Pfister, and K. R. Narayanan, “Approaching capacity at high rates with iterative hard-decision decoding,” IEEE Trans. Inf. Theory, vol. 63, no. 9, pp. 5752–5773, Sep. 2017.
[4] R. Gallager, “Low-Density Parity-Check Codes,” Ph.D. dissertation, Massachusetts Institute of Technology, Cambridge, MA, USA, 1963.
[5] R. M. Tanner, “A recursive approach to low complexity codes,” IEEE Trans. Inf. Theory, vol. 27, no. 5, pp. 533–547, Sep. 1981.
[6] G. Lechner, T. Pedersen, and G. Kramer, “Analysis and design of binary message passing decoders,” IEEE Trans. Commun., vol. 60, no. 3, pp. 601–607, Mar. 2012.
[7] L. M. Zhang, D. Truhachev, and F. R. Kschischang, “Spatially coupled split-component codes with iterative algebraic decoding,” IEEE Trans. Inf. Theory, vol. 64, no. 1, pp. 205–224, Jan. 2018.
[8] P. Schläfer, N. Wehn, M. Alles, and T. Lehnigk-Emden, “A new dimension of parallelism in ultra high throughput LDPC decoding,” in Proc. IEEE Workshop on Signal Processing Systems, Oct. 2013, pp. 153–158.
[9] H. Sarieddeen, M.-S. Alouini, and T. Y. Al-Naffouri, “An overview of signal processing techniques for Terahertz communications,” Proc. IEEE, vol. 109, no. 10, pp. 1628–1665, Oct. 2021.
[10] A. Morello and V. Mignone, “DVB-S2: The second generation standard for satellite broadband services,” Proc. IEEE, vol. 94, no. 1, pp. 210–227, Jan. 2006.
[11] M. Yazdani and A. Banihashemi, “On construction of rate-compatible low-density parity-check codes,” IEEE Commun. Lett., vol. 8, no. 3, pp. 159–161, Mar. 2004.
[12] J. Kim, W. Hur, A. Ramamoorthy, and S. W. McLaughlin, “Design of rate-compatible irregular LDPC codes for incremental redundancy hybrid ARQ systems,” in Proc. IEEE Int. Symp. Inf. Theory, Seattle, WA, USA, Oct. 2006, pp. 1139–1143.
[13] T. V. Nguyen, A. Nosratinia, and D. Divsalar, “The design of rate-compatible protograph LDPC codes,” IEEE Trans. Commun., vol. 60, no. 10, pp. 2841–2850, Oct. 2012.
[14] T.-Y. Chen, K. Vakilinia, D. Divsalar, and R. D. Wesel, “Protograph-based Raptor-like LDPC codes,” IEEE Trans. Commun., vol. 63, no. 5, pp. 1522–1532, May 2015.
[15] S. V. S. Ranganathan, D. Divsalar, and R. D. Wesel, “Quasi-cyclic protograph-based Raptor-like LDPC codes for short block-lengths,” IEEE Trans. Inf. Theory, vol. 65, no. 6, pp. 3758–3777, Jun. 2019.
[16] T. Richardson and S. Kudekar, “Design of low-density parity check codes for 5G New Radio,” IEEE Commun. Mag., vol. 56, no. 3, pp. 28–34, Mar. 2018.
[17] A. I. Vila Casado, W.-Y. Weng, S. Valle, and R. D. Wesel, “Multiple-rate low-density parity-check codes with constant blocklength,” IEEE Trans. Commun., vol. 57, no. 1, pp. 75–83, Jan. 2009.
[18] D. MacKay, “Good error-correcting codes based on very sparse matrices,” IEEE Trans. Inf. Theory, vol. 45, no. 2, pp. 399–431, Mar. 1999.
[19] T. Richardson and R. Urbanke, Modern Coding Theory. Cambridge University Press, 2008.
[20] M. J. Wainwright and E. Martinian, “Low-density graph codes that are optimal for binning and coding with side information,” IEEE Trans. Inf. Theory, vol. 55, no. 3, pp. 1061–1079, Mar. 2009.
[21] M. Fresia, F. Perez-Cruz, H. V. Poor, and S. Verdu, “Joint source and channel coding,” IEEE Signal Process. Mag., vol. 27, no. 6, pp. 104–113, Nov. 2010.
[22] K. Kasai and K. Sakaniwa, “Spatially-coupled MacKay-Neal codes and Hsu-Anastasopoulos codes,” in Proc. IEEE Int. Symp. Inf. Theory, Saint Petersburg, Russia, Jul. 2011, pp. 747–751.
[23] D. G. M. Mitchell, K. Kasai, M. Lentmaier, and D. J. Costello, “Asymptotic analysis of spatially coupled MacKay-Neal and Hsu-Anastasopoulos LDPC codes,” in Proc. Int. Symp. Inf. Theory and its Applicat., Honolulu, Hawaii, USA, Oct. 2012, pp. 337–341.
[24] N. Obata, Y.-Y. Jian, K. Kasai, and H. D. Pfister, “Spatially-coupled multi-edge type LDPC codes with bounded degrees that achieve capacity on the BEC under BP decoding,” in Proc. IEEE Int. Symp. Inf. Theory, Istanbul, Turkey, Jul. 2013, pp. 2433–2437.
[25] G. Böcherer, F. Steiner, and P. Schulte, “Bandwidth efficient and rate-matched low-density parity-check coded modulation,” IEEE Trans. Commun., vol. 63, no. 12, pp. 4651–4665, Dec. 2015.
[26] G. Böcherer, “Achievable rates for probabilistic shaping,” arXiv:1707.01134, Jul. 2017.
[27] R. A. Amjad, “Information rates and error exponents for probabilistic amplitude shaping,” in Proc. IEEE Information Theory Workshop (ITW), Guangzhou, China, Nov. 2018.
[28] G. Böcherer, “Probabilistic Amplitude Shaping,” Foundations and Trends® in Communications and Information Theory, vol. 20, no. 4, pp. 390–511, 2023.
[29] N. Merhav and G. Böcherer, “Codebook mismatch can be fully compensated by mismatched decoding,” IEEE Trans. Inf. Theory, vol. 69, no. 4, pp. 2152–2164, Apr. 2023.
[30] F. Buchali, F. Steiner, G. Böcherer, L. Schmalen, P. Schulte, and W. Idler, “Rate adaptation and reach increase by probabilistically shaped 64-QAM: An experimental demonstration,” IEEE/OSA J. Lightw. Technol., vol. 34, no. 7, pp. 1599–1609, Apr. 2016.
[31] P. Schulte and G. Böcherer, “Constant composition distribution matching,” IEEE Trans. Inf. Theory, vol. 62, no. 1, pp. 430–434, Jan. 2016.
[32] P. Schulte and F. Steiner, “Divergence-optimal fixed-to-fixed length distribution matching with shell mapping,” IEEE Trans. Wireless Commun. Lett., vol. 8, no. 2, pp. 620–623, Feb. 2019.
[33] P. Schulte, “Algorithms for Distribution Matching,” Ph.D. dissertation, Technical University of Munich, Munich, Germany, 2020.
[34] J. Thorpe, “Low-density parity-check (LDPC) codes constructed from protographs,” NASA JPL, IPN Progr. Rep. 42-154, Aug. 2003.
[35] S. ten Brink, “Convergence behavior of iteratively decoded parallel concatenated codes,” IEEE Trans. Commun., vol. 49, no. 10, pp. 1727–1737, Oct. 2001.
[36] G. Liva and M. Chiani, “Protograph LDPC codes design based on EXIT analysis,” in Proc. IEEE Global Telecommun. Conf., Nov. 2007.
[37] T. Richardson and R. Urbanke, “The capacity of low-density parity-check codes under message-passing decoding,” IEEE Trans. Inf. Theory, vol. 47, no. 2, pp. 599–618, Feb. 2001.
[38] T. M. Cover and J. A. Thomas, Elements of Information Theory, 2nd ed. New York: Wiley, 2006.
[39] C. Di, R. Urbanke, and T. Richardson, “Weight distribution of low-density parity-check codes,” IEEE Trans. Inf. Theory, vol. 52, no. 11, pp. 4839–4855, Nov. 2006.
[40] P. Schulte, W. Labidi, and G. Kramer, “Joint decoding of distribution matching and error control codes,” in Proc. Int. Zurich Seminar Commun., Mar. 2020, pp. 53–57.
[41] A. Wyner, “Recent results in the Shannon theory,” IEEE Trans. Inf. Theory, vol. 20, no. 1, pp. 2–10, Jan. 1974.
[42] S. Benedetto and G. Montorsi, “Unveiling turbo codes: Some results on parallel concatenated coding schemes,” IEEE Trans. Inf. Theory, vol. 42, no. 2, pp. 409–428, Mar. 1996.
[43] E. Paolini and M. Flanagan, “Efficient and exact evaluation of the weight spectral shape and typical minimum distance of protograph LDPC codes,” IEEE Commun. Lett., vol. 20, no. 11, pp. 2141–2144, Nov. 2016.
[44] A. Shokrollahi and R. Storn, “Design of efficient erasure codes with differential evolution,” in Differential Evolution. Springer, 2005.
[45] X.-Y. Hu, E. Eleftheriou, and D. Arnold, “Regular and irregular progressive edge-growth Tanner graphs,” IEEE Trans. Inf. Theory, vol. 51, no. 1, pp. 386–398, Jan. 2005.
[46] X.-Y. Hu, M. P. C. Fossorier, and E. Eleftheriou, “On the computation of the minimum distance of low-density parity-check codes,” in Proc. IEEE Int. Conf. Commun., Jun. 2004.
Multi Agent Switching Mode Controller for Sound Source Localization

Marcello Sorge∗, Nicola Cigarini∗, Riccardo Lorigiola∗∗, Giulia Michieletto∗∗∗,∗∗, Andrea Masiero∗, Angelo Cenedese∗∗,∗∗∗∗, Alberto Guarnieri∗

∗ Department of Land, Environment, Agriculture and Forestry, University of Padova, Legnaro (PD), Italy.
∗∗ Department of Information Engineering, University of Padova, Padova, Italy.
∗∗∗ Department of Management and Engineering, University of Padova, Vicenza, Italy.
∗∗∗∗ Department of Industrial Engineering, University of Padova, Padova, Italy.

⋆ This work is partly supported by the Italian PNRR project SMS2MARIE: Sistemi multi-agente e Multi Sensore per il Monitoraggio, la MAppatura e la Ricerca in aree In condizioni critiche e durante Emergenze; Grant Agreement n. PTSL-SD1701255103, CUP: C99J24000240008, Programma “Future Artificial Intelligence Research” - FAIR (Codice PE00000013), PNRR - MISSIONE 4 “Istruzione e ricerca”, COMPONENTE 2 “Dalla ricerca all’impresa”, Investimento 1.3 - bando a cascata Spoke 7. Corresponding author: Marcello Sorge, marcello.sorge@unipd.it.

Abstract: Source seeking is an important topic in robotics research, especially when sound-based sensors are considered, since they allow the agents to locate a target even in critical conditions where it is not possible to establish a direct line of sight. In this work, we design a multi-agent switching mode control strategy for acoustic-based target localization. Two scenarios are considered: a single-source scenario, in which the agents are driven towards the target while maintaining a rigid formation, and a multi-source scenario, in which each agent searches for the targets independently of the others.

Keywords: Multi-agent systems, switching mode controller, recursive Bayesian estimation, source seeking, sound source localization.

1. INTRODUCTION

1.1 Related work

The target localization task has been widely studied in robotics. Gradient-based strategies, such as extremum seeking control, have been studied in Ariyur and Krstic (2003) and applied in both single-agent scenarios (Zhang et al. (2006)) and multi-agent scenarios (Wu and Zhang (2012); Zhu et al. (2014)). In Briñón-Arranz et al. (2016), it is shown how a multi-agent formation can approximate the gradient of a scalar field, and a collaborative control law for source seeking and tracking is developed. However, gradient-based algorithms may remain stuck in local optima without finding the global solution. To overcome this issue, stochastic optimization algorithms have been developed, such as particle swarm optimization (PSO) (Zou et al. (2014)), ant colony optimization (ACO) (Colorni et al. (1991)), grey wolf optimization (GWO) (Mirjalili et al. (2014)), and gravity search optimization (GSA) (Rashedi et al. (2009)). These approaches mimic natural phenomena and implement stochasticity in order to avoid getting stuck in local, suboptimal solutions.

Acoustic source localization is a trending topic in robotics, specifically in the context of search and rescue applications. Exploiting sound information allows a robotic agent to locate a target even in challenging environmental conditions. Several algorithms have been developed to solve the problem of localizing sound sources. Multiple signal classification (MUSIC) exploits the orthogonality between the acoustic signal and noise subspaces to estimate the direction of arrival (DoA) of multiple sound sources (Schmidt (1986)). However, the accuracy of the MUSIC algorithm degrades when the noise power is higher than that of the target.
Many variants have been proposed to improve robustness against noise. In Furukawa et al. (2013), the generalized eigenvalue decomposition MUSIC (GEVD-MUSIC) introduces the noise correlation matrix for whitening the noise subspace. An enhanced version, called iterative GEVD-MUSIC (iGEVD-MUSIC), has been proposed in Okutani et al. (2012). To reduce the algorithm’s computational cost, generalized singular value decomposition MUSIC (GSVD-MUSIC) and iterative GSVD-MUSIC (iGSVD-MUSIC) have been presented in Nakadai et al. (2017). In Nakadai et al. (2017), a UAV embedded with a microphone array and guided by a human operator uses iGSVD-MUSIC and online robust principal component analysis (ORPCA) to detect a single-frequency sound (whistle) in an outdoor environment. Other approaches to deal with robot noise (in particular, a drone’s self-noise) have been studied. In Chun et al. (2019), a denoising autoencoder based on fully convolutional neural networks has been discussed. A general review on noise reduction for both ground and aerial platforms is reported in A. Schmidt (2020). One of the shortcomings of the MUSIC algorithm is that it provides only the direction of arrival of an acoustic signal, but no information regarding the distance from the source. In Hoshiba et al. (2017), GEVD-MUSIC and iGEVD-MUSIC with distance estimation have been implemented on a UAV for search and rescue missions. Another strategy for sound source localization consists of estimating the time difference of arrival (TDoA) between different microphone pairs. The most popular TDoA-based method is the generalized cross-correlation with phase transformation (GCC-PHAT) (Knapp and Carter (1976)). A cross-correlation TDoA-based method with angular spectrum subtraction is investigated in Manamperi et al. (2022) for low signal-to-noise ratio environments. There also exist beamforming-based techniques for DoA estimation. However, these algorithms assume that the sound sources are located in the far field with respect to the microphones, and their performance degrades closer to the target (Argentieri et al. (2015)).

1.2 Problem Description

Differently from the reviewed literature, in this work we assume that the agents can sense the environment only while they are not moving. We therefore design a switching mode controller that alternates between a listening phase, in which the agents estimate the DoA of the sound source exploiting recursive Bayesian estimation (RBE), and a movement phase, in which the agents drive towards the target. This strategy reduces the robots’ self-noise, since there is no actuation during the listening and estimation phase, favoring the localization procedure. We adopt this scheme in two different scenarios: in the first case, a single target needs to be detected, and the agents drive the centroid of a rigid formation towards the source. In the second case, we assume that multiple incoherent sound sources need to be detected, and the agents search independently of each other, communicating only the positions of the localized targets. Throughout this work, it is assumed that the sources to be detected are static and indistinguishable from each other.

2. SWITCHING MODE CONTROLLER

2.1 Modeling

Each agent’s dynamics is described by double-integrator equations:
$$\ddot{p}_i = u_i \qquad (1)$$
where $p_i = p(v_i) = (x_i, y_i)^T \in \mathbb{R}^2$ is the position of agent $v_i$ and $u_i$ is its control input, with $i = 1, \ldots, N$, where $N$ is the total number of agents.
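As a concrete illustration of the agent model (1), the following Python sketch integrates the double-integrator dynamics with a forward-Euler step; the time step, horizon, and input are illustrative assumptions, not values from the paper.

```python
import numpy as np

def step_agent(p, v, u, dt=0.01):
    """One forward-Euler step of the double-integrator model (1):
    p and v are the 2D position and velocity, u the acceleration input."""
    return p + dt * v, v + dt * u

# Example: a single agent accelerating from rest along +x (illustrative values).
p, v = np.zeros(2), np.zeros(2)
for _ in range(100):
    p, v = step_agent(p, v, u=np.array([1.0, 0.0]))
print(p)  # ~ [0.495, 0.0] after 1 s of unit acceleration
```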
We assume that each agent is equipped with a circular array of six omnidirectional microphones. Given the radius $r$ of the array, the position of the $h$-th microphone is given by
$$p_h = p_i + r\left(\cos\left(\tfrac{\pi}{3}(h-1)\right),\ \sin\left(\tfrac{\pi}{3}(h-1)\right)\right)^T \qquad (2)$$
The acoustic intensity $I$ measured by a microphone is assumed to be constant if the distance from the microphone to the source is smaller than or equal to 1 m; otherwise, it decreases quadratically with the distance from the source:
$$I = \frac{W_0}{4\pi} \quad (d \le 1), \qquad I = \frac{W_0}{4\pi d^2} \quad (d > 1) \qquad (3)$$
where $W_0$ is the emission power of the source.

2.2 Recursive Bayesian Estimation

The core assumption of this work is that each agent can listen to the environment only while it is not moving. Therefore, we model our system as a stochastic hybrid system switching between a listening mode, where the agents acquire measurements and estimate the sound DoA and the length to be covered in order to get closer to the source, and a movement mode. The procedure to estimate the step length and the sound DoA will be explained in the next section. To include the effect of background noise in our model, we assume that the step distance $s$ and DoA $\theta$ computed by the agents are sampled according to
$$\tilde{s} \sim \mathcal{N}(s, \sigma_d^2) \qquad (4a)$$
$$\tilde{\theta} \sim \mathrm{vonMises}(\theta, k_\theta) \qquad (4b)$$
The von Mises probability density function is defined as
$$p(\theta) = \frac{1}{2\pi I_0(k_\theta)} \exp\left(k_\theta \cos(\theta - \mu)\right) \qquad (5)$$
where $I_0(k_\theta)$ is the modified Bessel function of the first kind and order zero, $\mu$ is the expected value, and the parameter $k_\theta$ can be assimilated to the reciprocal of the variance of a normal distribution, $k_\theta \sim \sigma_d^{-2}$. Since the agents are still while acquiring data, the sequence of measurements can be modeled as a Markovian process; therefore, it is possible to obtain an estimate of the step length $\mu_s$ and its variance $P$ via RBE. Exploiting the fact that the product of two Gaussian distributions is proportional to a Gaussian distribution, the update equations between $k$ and $k+1$ for the step distance variance and mean can be written as
$$P_{k+1} = \frac{\sigma_d^2}{\sigma_d^2 + P_k}\, P_k \qquad (6a)$$
$$\mu_{s,k+1} = \frac{\tilde{s}_{k+1} P_k + \mu_{s,k}\, \sigma_d^2}{\sigma_d^2 + P_k} \qquad (6b)$$
A similar reasoning holds for the recursive update equations of the DoA variance and mean value, considering that the convolution of two von Mises distributions is well approximated by a von Mises distribution (Jammalamadaka and Sengupta (2001)):
$$K_{k+1} = \left[\left(K_k \cos\mu_{\theta,k} + k_\theta \cos\tilde{\theta}_k\right)^2 + \left(K_k \sin\mu_{\theta,k} + k_\theta \sin\tilde{\theta}_k\right)^2\right]^{\frac{1}{2}} \qquad (7a)$$
$$\mu_{\theta,k+1} = \mathrm{ATAN2}\left(K_k \sin\mu_{\theta,k} + k_\theta \sin\tilde{\theta}_k,\ K_k \cos\mu_{\theta,k} + k_\theta \cos\tilde{\theta}_k\right) \qquad (7b)$$

2.3 Switching Conditions and Reset

$P$ and $K^{-1}$ represent the uncertainty of the estimates of the step length and the DoA, respectively. Considering (6a), it is easy to notice that the variance is monotonically decreasing with the number of iterations. The DoA concentration update equation can be rewritten as
$$K_{k+1} = \sqrt{K_k^2 + k_\theta^2 + 2 K_k k_\theta \cos\left(\mu_{\theta,k} - \tilde{\theta}_k\right)} \qquad (8)$$
and, assuming $-\frac{\pi}{2} \le \mu_{\theta,k} - \tilde{\theta}_k \le \frac{\pi}{2}$, the uncertainty $K_k^{-1}$ is also decreasing with the number of iterations. Therefore, under this assumption, an intuitive choice for the measurement-to-movement switching condition is to wait until the estimation is precise enough, namely
$$P_k \le P_{\mathrm{thresh}} \qquad (9a)$$
$$K_k^{-1} \le K_{\mathrm{thresh}} \qquad (9b)$$
Equation (6a) shows that the speed of convergence of the step distance estimator depends on the parameter $\sigma_d^2$. In fact, $\lim_{\sigma_d^2 \to 0} P_{k+1} = 0$ and the variance decreases to 0 almost instantly. On the other hand, $\lim_{\sigma_d^2 \to \infty} P_{k+1} = P_k$; therefore the variance is constant and the threshold may never be reached. A similar reasoning can be carried out for the DoA estimator, considering the expression in (8). In particular, it holds $K_{k+1} \le K_k + k_\theta$; similarly, since $\lim_{k_\theta \to 0} K_{k+1} = K_k$, convergence may never be reached. In general, higher variances negatively affect the speed of convergence of the RBE estimators.
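The listening-phase estimators (6)-(7) and the switching test (9) can be sketched in Python as follows; the noise parameters, thresholds, and measurement values are illustrative assumptions, and the reset $P_0 \to \infty$ is approximated by a large finite value.

```python
import numpy as np

def update_step(mu_s, P, s_meas, sigma_d2):
    """Gaussian RBE update of the step-length estimate, Eqs. (6a)-(6b)."""
    P_next = sigma_d2 / (sigma_d2 + P) * P
    mu_next = (s_meas * P + mu_s * sigma_d2) / (sigma_d2 + P)
    return mu_next, P_next

def update_doa(mu_th, K, th_meas, k_theta):
    """Von Mises RBE update of the DoA estimate, Eqs. (7a)-(7b)."""
    c = K * np.cos(mu_th) + k_theta * np.cos(th_meas)
    s = K * np.sin(mu_th) + k_theta * np.sin(th_meas)
    return np.arctan2(s, c), np.hypot(c, s)

# Reset as in Section 2.3: P0 -> infinity (large finite value), K0 = 0.
mu_s, P = 0.0, 1e9
mu_th, K = 0.0, 0.0
sigma_d2, k_theta = 0.04, 25.0                              # illustrative
for s_m, th_m in [(2.1, 0.50), (1.9, 0.55), (2.0, 0.48)]:   # illustrative data
    mu_s, P = update_step(mu_s, P, s_m, sigma_d2)
    mu_th, K = update_doa(mu_th, K, th_m, k_theta)
# After the first update mu_s ~ s_m and mu_th ~ th_m, consistent with Eq. (11).
P_thresh, K_thresh = 0.05, 0.05                             # illustrative
ready_to_move = (P <= P_thresh) and (1.0 / K <= K_thresh)   # Eq. (9): True here
```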
2.3 Switching Conditions and Reset

$P$ and $K^{-1}$ represent the uncertainty of the estimates of the step length and the DoA, respectively. Considering (6a), it is easy to notice that the variance is monotonically decreasing with the number of iterations. The DoA variance update equation can be rewritten as

$$K_{k+1} = \sqrt{K_k^2 + k_\theta^2 + 2 K_k k_\theta \cos(\mu_{\theta,k} - \tilde{\theta}_k)} \quad (8)$$

and, assuming $-\frac{\pi}{2} \le \mu_{\theta,k} - \tilde{\theta}_k \le \frac{\pi}{2}$, the DoA variance $K^{-1}$ is also decreasing with the number of iterations. Therefore, under this assumption, an intuitive choice for the measurement-to-movement switching condition is to wait until the estimation is precise enough, namely

$$P_k \le P_{\mathrm{thresh}} \quad (9a)$$
$$K_k^{-1} \le K_{\mathrm{thresh}} \quad (9b)$$

Equation (6a) shows that the speed of convergence of the step distance estimator depends on the parameter $\sigma_d^2$. In fact, $\lim_{\sigma_d^2 \to 0} P_{k+1} = 0$, and the variance decreases to 0 almost instantly. On the other hand, $\lim_{\sigma_d^2 \to \infty} P_{k+1} = P_k$; the variance is then constant and the threshold may never be reached. A similar reasoning can be carried out for the DoA variance estimator, considering the expression in (8). In particular, it holds that $K_{k+1} \le K_k + k_\theta$. Then, since $\lim_{k_\theta \to 0} K_{k+1} = K_k$, convergence may never be reached. In general, higher variances negatively affect the speed of convergence of the RBE estimators.

Considering the movement-to-measurement switching condition, the switch happens when the system has traveled a distance longer than or equal to the step distance in (6b):

$$\|p - p_{\mathrm{meas}}\| \ge \mu_s \quad (10)$$

where $p_{\mathrm{meas}}$ is the (constant) position during the measurement mode, and $p$ is the current position while in movement mode. How the step $\mu_s$ is computed will be explained in the next section; however, in order to reach the target, it is sufficient that the step decreases while the distance from the source is decreasing. Moreover, whenever the system switches from movement mode to measurement mode, a reset of the estimators is performed by initializing the variances as $P_0 \to \infty$, $K_0 = 0$. This allows the system to recover from bad estimates or from initial states that are not meaningful with respect to the real ones. Indeed, by performing this initialization, it holds that

$$\mu_{s,1} \approx \tilde{s}_1 \quad (11a)$$
$$\mu_{\theta,1} \approx \tilde{\theta}_0 \quad (11b)$$

where $\tilde{s}_1$ and $\tilde{\theta}_0$ are respectively the noisy step distance and DoA computed at the first estimation iteration while in measurement mode.

3. SINGLE SOURCE SCENARIO

In the first scenario, we assume that a single static source needs to be detected. The goal is to drive the centroid of a multi-agent formation as close as possible to the target. Without loss of generality, and to simplify the discussion, hereafter the scenario is considered as a 2D environment.

3.1 Bearing Rigid Formation

We model the formation of agents as a bearing rigid framework $(\mathcal{G}, p)$ (Michieletto et al. (2021)), where $\mathcal{G} = (\mathcal{V}, \xi)$ is a graph with a set of nodes $\mathcal{V}$ of cardinality $|\mathcal{V}| = N$ and a set of undirected edges $\xi$ of cardinality $|\xi| = E$, while $p : \mathcal{V} \to \mathbb{R}^2$ maps each node of the graph to a point in the 2D plane. The inter-agent bearing vector is defined as

$$b_{i,j} = \frac{p_j - p_i}{\|p_j - p_i\|} \in \mathbb{S}^1 \quad (12)$$

where $v_i, v_j \in \mathcal{V}$. Moreover, it is assumed that each agent in the formation is equipped with an omnidirectional microphone. This can be achieved with the microphones described in (2) by taking the mean of the measurements of each channel in the array.

3.2 DoA and Step Distance Estimation

In order to detect a sound-emitting source in $\mathbb{R}^2$, at least three non-collinear microphones are required. This is easily achieved with a bearing rigid formation. In our work, we consider a bearing rigid square formation of four agents $(v_1, v_2, v_3, v_4)$ as in Figure 1.

Fig. 1. Four agents bearing rigid formation

It is possible to obtain a first-order approximation of the intensity directional derivative along the direction defined by the inter-agent bearing as

$$\nabla_{b_{i,j}} I = \Delta I_{i,j}\, b_{i,j} \quad (13)$$

where $\Delta I_{i,j} = I_j - I_i$, and $I_i$, $I_j$ represent the acoustic intensity perceived by agents $i$ and $j$, respectively. Differently from Briñón-Arranz et al. (2016), we approximate the maximum ascent direction by considering a linear combination of such directional derivatives defined with respect to a subset $\xi^\star \subseteq \xi$:

$$\nu_\theta = \sum_{(i,j) \in \xi^\star} \Delta I_{i,j}\, b_{i,j} \quad (14)$$

and finally we obtain $\theta = \mathrm{ATAN2}(\nu_{\theta,y}, \nu_{\theta,x})$. This approximation is independent of the formation geometry; the only requirement is that the vectors in $\xi^\star$ span the space $\mathbb{R}^2$. Concerning the step estimation, as highlighted in the previous section, this quantity should be inversely proportional to the real distance from the source, in order to avoid overshooting the target. We choose to compute this quantity as

$$s = \max\left( \alpha\left(\frac{1}{I_1} - \frac{1}{I_3}\right),\; \alpha\left(\frac{1}{I_2} - \frac{1}{I_4}\right) \right) \quad (15)$$

with $\alpha \in \mathbb{R}$, $\alpha > 0$, since the intensity increases as the distance to the source decreases.
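A small Python sketch of the formation-level estimates (14)-(15) for the square formation of Figure 1 follows; the edge encoding and the helper name are illustrative assumptions, not code from the paper.

import numpy as np

def formation_doa_and_step(positions, intensities, edges, alpha=1e6):
    """DoA via eq. (14) and step length via eq. (15).

    positions:   (4, 2) array of agent positions p_1..p_4 (0-indexed)
    intensities: length-4 array of perceived intensities I_1..I_4
    edges:       list of (i, j) index pairs forming xi* (must span R^2)
    """
    nu = np.zeros(2)
    for i, j in edges:
        b_ij = positions[j] - positions[i]
        b_ij /= np.linalg.norm(b_ij)                     # bearing, eq. (12)
        nu += (intensities[j] - intensities[i]) * b_ij   # eq. (14)
    theta = np.arctan2(nu[1], nu[0])
    # Step length, eq. (15), using the two diagonals of the square formation
    I = intensities
    s = max(alpha * (1.0 / I[0] - 1.0 / I[2]),
            alpha * (1.0 / I[1] - 1.0 / I[3]))
    return theta, s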
3.3 Bearing Maneuvering Control

In order to lead the formation to the target while maintaining the desired bearing rigid formation, we implement the bearing maneuvering control in Zhao and Zelazo (2015), where at least two agents are elected leaders and follow a trajectory $p_{i,d}$ designed to reduce the centroid-target distance, while the other agents are required to follow the leaders while maintaining the desired bearing formation. The trajectory for the leaders is defined in terms of velocity as

$$\dot{p}_{i,d} = c\,(\cos\mu_\theta, \sin\mu_\theta)^T \quad (16)$$

where $\mu_\theta$ is the estimated DoA and $c$ represents a constant body-frame velocity. For the leaders, a simple PD controller is adopted:

$$u_i = K_d \dot{e}_i + K_p e_i \quad (17)$$

where $e_i = p_{i,d} - p_i$, while for the followers the control law is

$$u_i = -\sum_{j \in \mathcal{N}(i)} P_{b^d_{i,j}}\left( k_d(\dot{p}_i - \dot{p}_j) + k_p(p_i - p_j) \right) \quad (18)$$

where $\mathcal{N}(i)$ represents the set of neighbors of node $i$, $b^d_{i,j}$ is the desired bearing between agents $i$ and $j$, and $P_{b^d_{i,j}} = I_{(2)} - b^d_{i,j} (b^d_{i,j})^T$ is a projector onto the space orthogonal to $b^d_{i,j}$.

4. MULTIPLE SOURCES SCENARIO

4.1 DoA and Step Distance Estimation

We now consider an environment with multiple, indistinguishable sources to be detected. In order to ensure better coverage of the search area, each agent searches independently from the others, exploiting the information from the six channels of the microphone array. The DoA is computed as $\theta = \mathrm{ATAN2}(\nu_{\theta,y}, \nu_{\theta,x})$, where

$$\nu_\theta = \frac{p(I_{\mathrm{MAX}}) - p(I_{\mathrm{min}})}{\|p(I_{\mathrm{MAX}}) - p(I_{\mathrm{min}})\|} \quad (19)$$

where $p(I_{\mathrm{MAX}})$ and $p(I_{\mathrm{min}})$ are the positions of the microphones reading the maximum and minimum intensity, respectively. The step distance is instead computed according to

$$s = \beta \left( \frac{1}{I_{\mathrm{min}}} - \frac{1}{I_{\mathrm{MAX}}} \right) \quad (20)$$

where $\beta \in \mathbb{R}$, $\beta > 0$ is a scaling parameter. Differently from the single source scenario, in this case the agents work independently; therefore, the switching conditions in (9) and (21) are applied individually to each robot, allowing agents to operate in different modes independently rather than being constrained to the same operating mode.
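A minimal sketch of the per-agent estimates (19)-(20), assuming the six microphone positions from eq. (2); the function name and default for the scaling parameter are illustrative.

import numpy as np

def array_doa_and_step(mic_positions, mic_intensities, beta=4e6):
    """Per-agent DoA (19) and step distance (20) from a 6-mic circular array."""
    i_max = int(np.argmax(mic_intensities))
    i_min = int(np.argmin(mic_intensities))
    d = mic_positions[i_max] - mic_positions[i_min]
    nu = d / np.linalg.norm(d)                           # eq. (19)
    theta = np.arctan2(nu[1], nu[0])
    s = beta * (1.0 / mic_intensities[i_min]
                - 1.0 / mic_intensities[i_max])          # eq. (20)
    return theta, s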
4.2 Exploration Algorithm and Control

Each source represents a basin of attraction for the agents, since they follow a gradient-based direction. In order to allow the agents to detect different sources, they must be able to explore and escape from such basins of attraction. We therefore define, for each $i$-th agent, a virtual velocity $\bar{v}_i$, and we modify the movement-to-measurement switching condition in (10) to become

$$\|p_i - p_{\mathrm{meas},i}\| \ge \gamma \|\bar{v}_i\| \quad (21)$$

where $p_{\mathrm{meas},i}$ is the (constant) position of agent $i$ in measurement mode, $p_i$ is the position of agent $i$ while in movement mode, and $\gamma = 1\,\mathrm{s}$. The virtual velocity is initialized as $\bar{v}_i(0) = \mu_{s,i}(\cos\mu_{\theta,i}, \sin\mu_{\theta,i})^T$ and is updated each time the agent switches from measurement mode to movement mode. To guarantee exploration, each robot needs to define an explored area that contains a detected target and that needs to be avoided by itself and the other agents. We assume that a target is detected by agent $i$ if the step estimated by that agent is smaller than a given threshold, $\mu_{s,i} \le \mu_{s,\mathrm{thresh}}$. We approximate the detected target position $p_t$ as the position of the detecting agent $p_i$, and we define the explored area as a circular area of center $p_t = p_i$ and radius $r_t$. Whenever an agent meets the detection condition, if its position lies inside an already existing explored area, the radius of that area is increased according to $r_{t,k+1} = k_r r_{t,k}$, with $k_r > 1$. Otherwise, a new area of radius $r_{t+1,0} = r_0$ is defined. We assume that the union of all explored areas is shared between the agents. Then, the virtual velocity for each agent can be updated as

$$\bar{v}_{i,k+1} = \begin{cases} k'_r\, r_{t,k}\, (\cos\theta_{rd}, \sin\theta_{rd})^T & \exists\, p_t, r_t \ \text{s.t.}\ \|p_i - p_t\| \le r_t \\ k_v \bar{v}_{i,k} + \mu_{s,i}\, (\cos\mu_{\theta,i}, \sin\mu_{\theta,i})^T & \text{otherwise} \end{cases} \quad (22)$$

where $k_v, k'_r \in \mathbb{R}$, $0 < k_v < 1$, $k'_r \ge 1$, and the vector $(\cos\theta_{rd}, \sin\theta_{rd})^T$ represents a random direction spanning the circle at intervals of $\frac{\pi}{4}$. The trajectory for agent $i$ is then defined as

$$\dot{p}_{i,d} = c\,(\cos\theta_d, \sin\theta_d)^T \quad (23)$$

where $\theta_d = \mathrm{ATAN2}(\bar{v}_{i,y}, \bar{v}_{i,x})$. The control law for each agent is a PD control as in (17). It can be noticed that, in (22), for the case where the agent is inside an explored area, the virtual velocity norm is always greater than the current explored area radius. This fact, according to the switching condition in (21), ensures that the agent exits the already visited area. Moreover, the closer the decay factor $k_v$ is to 1, the longer the random directions taken when inside an explored area affect the agent's trajectory, therefore favoring exploration. A sketch of this update is given below.
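The following Python sketch implements the virtual-velocity update (22); the parameter defaults mirror Section 5.2, while the explored-area data structure is an illustrative assumption.

import numpy as np

def update_virtual_velocity(p_i, v_bar, mu_s, mu_theta, explored_areas,
                            k_v=0.9, k_r_prime=1.1,
                            rng=np.random.default_rng()):
    """Virtual-velocity update, eq. (22).

    explored_areas: list of (center, radius) pairs shared among the agents.
    """
    for center, radius in explored_areas:
        if np.linalg.norm(p_i - center) <= radius:
            # Inside an explored area: bounce out along one of the 8 random
            # directions spaced by pi/4.
            theta_rd = rng.integers(8) * np.pi / 4
            return k_r_prime * radius * np.array([np.cos(theta_rd),
                                                  np.sin(theta_rd)])
    # Otherwise: decayed previous velocity plus the new gradient-based step
    return k_v * v_bar + mu_s * np.array([np.cos(mu_theta), np.sin(mu_theta)])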
5. SIMULATIONS AND RESULTS

To validate the algorithms presented in this work, we perform tests in a simulated environment for both the single source scenario and the multiple source scenario. For these simulations, the hybrid dynamics is approximated by considering the discretization of the agent models in (1) and of the control laws (17)-(18), with a sample time $T_s = 1\,\mathrm{ms}$. For all the simulations, we assume that each source's power is $W_0 = 10^8\,\mathrm{W}$.

5.1 Single Target Simulations

We test the RBE estimators' robustness against different levels of noise. We consider a single target with its position fixed at $p_T = (30\,\mathrm{m}, 40\,\mathrm{m})^T$. The bearing rigid formation is the one reported in Figure 1, with the agents' starting positions at $p_1(0) = (1\,\mathrm{m}, 1\,\mathrm{m})^T$, $p_2(0) = (1\,\mathrm{m}, -1\,\mathrm{m})^T$, $p_3(0) = (-1\,\mathrm{m}, -1\,\mathrm{m})^T$ and $p_4(0) = (-1\,\mathrm{m}, 1\,\mathrm{m})^T$, so that the centroid of the formation is located at $p_c(0) = (0\,\mathrm{m}, 0\,\mathrm{m})^T$. The set of inter-agent bearings considered for the DoA approximation is $\xi^\star = \{b_{21}, b_{41}\}$, and we consider agents 1 and 3 as the leaders. The control parameters are set as $K_p = K_d = k_p = k_d = 10$, and the reference velocity for the agents is $c = 0.2\,\mathrm{m/s}$. The thresholds for the measurement-to-movement switching conditions are set as $P_{\mathrm{thresh}} = K_{\mathrm{thresh}} = 1 \times 10^{-4}$, while the scaling factor $\alpha$ is set to $10^6$. The goal of these tests is to compare the time required for the formation to converge to the target with respect to different noise variances.

Fig. 2. Formation behaviour ($\sigma_d^2 = 0.1$, $k_\theta = 1$)
Fig. 3. Formation's centroid distance from the target ($\sigma_d^2 = 0.1$, $k_\theta = 1$)
Fig. 4. Real targets (in black) and detected targets (in blue)

The time of convergence $t_s$ is defined as the minimum time for which the system is in measurement mode and the distance $d$ of the centroid from the target is $d \le 0.05\,\mathrm{m}$ for all $t \ge t_s$.

Table 1. Time of convergence (in seconds) with respect to different levels of variances

  σ²_d \ k_θ |   100      |    10      |     1
  0.01       |  251.662   |  259.034   |  382.876
  0.1        |  258.768   |  259.077   |  382.870
  1          |  340.709   |  340.724   |  382.850
  10         | 1050.764   | 1050.770   | 1050.794
  100        | 9250.778   | 9250.773   | 9250.756

Table 1 reports the time of convergence $t_s$ (in seconds) against different values of the DoA and step distance variances. It can be noticed that, for increasing values of the variances, the formation takes longer to reach the target. Since the estimators are independent from each other and the switching condition must be met by both variances in order to cause the system to switch its operating mode, the convergence time is dictated by the slower estimator. Moreover, we have observed that for $\sigma_d^2 = 0.01$, $k_\theta = 0.1$, the centroid of the formation does not move for $1.5 \times 10^4\,\mathrm{s}$, meaning that for this time the DoA estimator did not reach the desired variance. These results are coherent with the reasoning in Section 2.3. Figure 2 shows the agents moving towards the target while maintaining the desired formation, while in Figure 3 the centroid distance from the target can be observed. The intervals for which the distance is constant correspond to the times in which the formation is in listening mode. It can also be noticed that the distance traveled by the centroid decreases as the formation gets closer to the source.

5.2 Multiple Targets Simulations

We now show some results concerning the multiple sources scenario, by testing the detection ability of the algorithm. The targets are randomly spawned within a square area of 50 m², while the agents start at the corners of a square of side 60 m; they are therefore located outside the search area. This is done to avoid lucky initialization scenarios. The sound sources are assumed to be incoherent; therefore, the overall intensity is given by the sum of the intensities of each source. The estimator thresholds are defined as in the single target scenario, as well as the robot velocity $c$. The scaling factor $\beta$ is chosen as $\beta = 4 \times \sqrt{10^{13}}$. When a new target is detected, the initial radius for the explored area is set at $r_{t,0} = 3.1\,\mathrm{m}$, and it is increased by 20% whenever the same target is detected again. The decay factor for the robot's virtual velocity is chosen as $k_v = 0.9$, while the coefficient $k'_r$ is set to 1.1. We consider variances $\sigma_d^2 = 0.01$ and $k_\theta = 100$. We consider cases where the number of targets varies between three and eight, while keeping four seeker agents. For each case, the average number of detected sources over ten simulations of 1000 s is computed. In this work, we assume that, knowing the location of a target, the rescue team searches in a circular area of radius $r_{tt} = 1.5\,\mathrm{m}$ around the detected target. Therefore, we consider a source to be located if it lies inside a circle of radius $r_{tt}$ around the point detected by an agent.

Table 2. Average number of detected sources

  Number of targets | Average number of detections
  3                 | 2.7
  4                 | 3.6
  5                 | 4.4
  6                 | 4.5
  7                 | 5.4
  8                 | 6.4

Table 2 shows the average number of detected sources with respect to the number of targets. It can be noticed that the algorithm's performance decreases with the number of targets. Several factors may cause targets to be missed by the robots: first, each simulation has a fixed length of 1000 s; it is possible that, given more time, the agents would be able to detect more sources. Moreover, there is a random element in the generation of the trajectory of each agent when inside an already explored area; therefore, it cannot be guaranteed that an agent drives towards an unknown target. Figure 4 shows an example of the detections performed by the agents.
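To tie the pieces together, here is a hedged, self-contained Python sketch of the per-agent listen/move loop of Sections 2.2-2.3 as it could be simulated; the PD-controlled double integrator is simplified to constant-speed motion along the estimated DoA, the noisy measurements (4a)-(4b) are drawn directly from the ground truth, and the toy step rule 0.1·d stands in for eqs. (15)/(20).

import numpy as np

def simulate_agent(p0, source, sim_time=1000.0, Ts=1e-3, c=0.2,
                   sigma_d2=0.01, k_theta=100.0,
                   P_thresh=1e-4, K_thresh=1e-4, seed=0):
    """Toy single-agent measurement/movement loop (illustrative, simplified)."""
    rng = np.random.default_rng(seed)
    p = np.asarray(p0, dtype=float)
    mode, p_meas = "measure", p.copy()
    mu_s, P = 0.0, 1e12          # reset: P0 -> infinity (large finite value)
    mu_th, K = 0.0, 0.0          # reset: K0 = 0
    for _ in range(int(sim_time / Ts)):
        delta = source - p
        if mode == "measure":
            s_meas = rng.normal(0.1 * np.linalg.norm(delta), np.sqrt(sigma_d2))
            th_meas = rng.vonmises(np.arctan2(delta[1], delta[0]), k_theta)
            # RBE updates, eqs. (6) and (7)
            mu_s = (s_meas * P + mu_s * sigma_d2) / (sigma_d2 + P)
            P = sigma_d2 * P / (sigma_d2 + P)
            cx = K * np.cos(mu_th) + k_theta * np.cos(th_meas)
            sx = K * np.sin(mu_th) + k_theta * np.sin(th_meas)
            mu_th, K = np.arctan2(sx, cx), np.hypot(cx, sx)
            if P <= P_thresh and 1.0 / K <= K_thresh:       # eq. (9)
                mode, p_meas = "move", p.copy()
        else:
            p += c * Ts * np.array([np.cos(mu_th), np.sin(mu_th)])
            if np.linalg.norm(p - p_meas) >= mu_s:          # eq. (10)
                mode = "measure"
                mu_s, P, mu_th, K = 0.0, 1e12, 0.0, 0.0     # estimator reset
    return p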
6. CONCLUSION

A novel multi-agent stochastic hybrid system for sound-based target localization has been presented, and proofs of concept to demonstrate the effectiveness of the proposed algorithms have been developed considering a simplified environment. Concerning the single target case, we showed how the proposed switching mode control can precisely locate a source even in the presence of high noise variance. However, according to the reasoning in Section 2.3, for extremely high noise variances the estimators may fail to converge. Considering the multi-target case, stochasticity has been introduced to embed exploration capabilities in the agents, allowing a single robot to detect many sources. Although stochasticity and increasing the explored area radius are helpful for exploration, they may increase the time needed to detect all the targets in the environment or, in some cases, prevent the agents from finding a particular source. Studying the effectiveness of this strategy in more realistic, complicated, and possibly dynamic environments, as well as considering data fusion approaches combining data from different sensors, such as cameras and thermal sensors, will be the goal of future developments.

REFERENCES

A. Schmidt, H.W. Löllmann, and W. Kellermann (2020). Acoustic self-awareness of autonomous systems in a world of sounds. Proc. IEEE, 108(7), 1127-1149.
Argentieri, S., Danès, P., and Souères, P. (2015). A survey on sound source localization in robotics: From binaural to array processing methods. Computer Speech & Language, 34(1), 87-112. doi:10.1016/j.csl.2015.03.003.
Ariyur, K.B. and Krstic, M. (2003). Real-time optimization by extremum-seeking control. John Wiley & Sons.
Briñón-Arranz, L., Schenato, L., and Seuret, A. (2016). Distributed source seeking via a circular formation of agents under communication constraints. IEEE Transactions on Control of Network Systems, 3(2), 104-115. doi:10.1109/TCNS.2015.2428391.
Chun, C., Jeon, K.M., Kim, T., and Choi, W. (2019). Drone noise reduction using deep convolutional autoencoder for UAV acoustic sensor networks. In 2019 IEEE 16th International Conference on Mobile Ad Hoc and Sensor Systems Workshops (MASSW), 168-169. IEEE.
Colorni, A., Dorigo, M., Maniezzo, V., et al. (1991). Distributed optimization by ant colonies. In Proceedings of the First European Conference on Artificial Life, volume 142, 134-142. Paris, France.
Furukawa, K., Okutani, K., Nagira, K., Otsuka, T., Itoyama, K., Nakadai, K., and Okuno, H.G. (2013). Noise correlation matrix estimation for improving sound source localization by multirotor UAV. In 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, 3943-3948. doi:10.1109/IROS.2013.6696920.
Hoshiba, K., Sugiyama, O., Nagamine, A., Kojima, R., Kumon, M., and Nakadai, K. (2017). Design and assessment of sound source localization system with a UAV-embedded microphone array. Journal of Robotics and Mechatronics, 29, 154-167. doi:10.20965/jrm.2017.p0154.
Jammalamadaka, S.R. and Sengupta, A. (2001). Topics in Circular Statistics, volume 5. World Scientific.
Knapp, C. and Carter, G. (1976). The generalized correlation method for estimation of time delay. IEEE Transactions on Acoustics, Speech, and Signal Processing, 24(4), 320-327. doi:10.1109/TASSP.1976.1162830.
Manamperi, W., Abhayapala, T.D., Zhang, J., and Samarasinghe, P.N. (2022). Drone audition: Sound source localization using on-board microphones. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 30, 508-519. doi:10.1109/TASLP.2022.3140550.
Michieletto, G., Cenedese, A., and Zelazo, D. (2021). A unified dissertation on bearing rigidity theory. IEEE Transactions on Control of Network Systems, 8(4), 1624-1636.
Mirjalili, S., Mirjalili, S.M., and Lewis, A. (2014). Grey wolf optimizer. Advances in Engineering Software, 69, 46-61. doi:10.1016/j.advengsoft.2013.12.007.
Nakadai, K., Kumon, M., Okuno, H.G., Hoshiba, K., Wakabayashi, M., Washizaki, K., Ishiki, T., Gabriel, D., Bando, Y., Morito, T., Kojima, R., and Sugiyama, O. (2017). Development of microphone-array-embedded UAV for search and rescue task. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 5985-5990. doi:10.1109/IROS.2017.8206494.
Okutani, K., Yoshida, T., Nakamura, K., and Nakadai, K. (2012). Outdoor auditory scene analysis using a moving microphone array embedded in a quadrocopter. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, 3288-3293. doi:10.1109/IROS.2012.6385994.
Rashedi, E., Nezamabadi-pour, H., and Saryazdi, S. (2009). GSA: A gravitational search algorithm. Information Sciences, 179(13), 2232-2248. doi:10.1016/j.ins.2009.03.004. Special Section on High Order Fuzzy Sets.
Schmidt, R. (1986). Multiple emitter location and signal parameter estimation. IEEE Transactions on Antennas and Propagation, 34(3), 276-280.
Wu, W. and Zhang, F. (2012). Robust cooperative exploration with a switching strategy. IEEE Transactions on Robotics, 28(4), 828-839. doi:10.1109/TRO.2012.2190182.
Zhang, C., Arnold, D.B., Ghods, N., Siranosian, A.A., and Krstic, M. (2006). Source seeking with nonholonomic unicycle without position measurement - part i: Tuning of forward velocity. In 45th IEEE Conference on Decision and Control, CDC 2006, San Diego, CA, USA, 13-15 December 2006, 3040-3045. IEEE. doi:10.1109/CDC.2006.377391.
Zhao, S. and Zelazo, D. (2015). Bearing-based formation maneuvering. In 2015 IEEE International Symposium on Intelligent Control (ISIC), 658-663. doi:10.1109/ISIC.2015.7307285.
Zhu, S., Wang, D., and Low, C.B. (2014). Cooperative control of multiple UAVs for moving source seeking. J. Intell. Robotics Syst., 74(1-2), 333-346. doi:10.1007/s10846-013-9899-2.
Zou, R., Kalivarapu, V., Oliver, J., and Bhattacharya, S. (2014). Swarm optimization techniques for multi-agent source localization. In 2014 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, 402-407. doi:10.1109/AIM.2014.6878112.
REWIRING EXPERTS ON THE FLY: CONTINUOUS REROUTING FOR BETTER ONLINE ADAPTATION IN MIXTURE-OF-EXPERT MODELS

Guinan Su1* Yanwu Yang4* Li Shen5 Lu Yin6 Shiwei Liu1,2,3 Jonas Geiping1,2,3
1Max Planck Institute for Intelligent Systems 2ELLIS Institute Tübingen 3Tübingen AI Center 4University of Tübingen 5Sun Yat-sen University 6University of Surrey

ABSTRACT

Mixture-of-Experts (MoE) models achieve efficient scaling through sparse expert activation, but often suffer from suboptimal routing decisions due to distribution shifts in deployment. While existing test-time adaptation methods could potentially address these issues, they primarily focus on dense models and require access to external data, limiting their practical applicability to MoE architectures. However, we find that, instead of relying on reference data, we can optimize MoE expert selection on the fly based only on input context. As such, we propose a data-free, online test-time framework that continuously adapts MoE routing decisions during text generation without external supervision or data. Our method cycles between two phases: during the prefill stage, and later at regular intervals, we optimize the routing decisions of the model using self-supervision based on the already generated sequence. Then, we generate text as normal, maintaining the modified router until the next adaptation. We implement this through lightweight additive vectors that only update router logits in selected layers, maintaining computational efficiency while preventing over-adaptation. The experimental results show consistent performance gains on challenging reasoning tasks while maintaining robustness to context shifts. For example, our method achieves a 5.5% improvement on HumanEval with OLMoE. Furthermore, owing to its plug-and-play property, our method naturally complements existing test-time scaling techniques, e.g., achieving 6% average gains when incorporated with self-consistency on DeepSeek-V2-Lite.

1 INTRODUCTION

Mixture-of-Experts (MoE) models (Shazeer et al., 2017; Zhou et al., 2022; Jiang et al., 2024; Dai et al., 2024; Liu et al., 2024; Team, 2025; Muennighoff et al., 2024) provide an effective approach to scaling model capacity while maintaining computational efficiency by partitioning parameters into specialized experts and selectively activating subsets through routing mechanisms (Lepikhin et al., 2020; Fedus et al., 2022; Dai et al., 2024; Muennighoff et al., 2024). This functionality enables dynamic expert selection for diverse queries, creating inherently general-purpose systems that can store much more functionality and information than is used in any single forward pass. However, despite their impressive capabilities, MoE models still face challenges when deployed in real-world environments (Akyürek et al., 2024; Li et al., 2025a), as the expression of their capability hinges on the quality of their routing decisions, i.e., the activations of the small linear layers that determine which parts of the model are activated.

Why is routing hard? While the full MoE may store sufficient functionality to solve a particularly challenging query, this capacity is gated behind its routing decisions in each MoE layer, which in turn depend on the residual stream of the model, and so on earlier routing decisions. Routing decisions are only linear functions of the current hidden state that need to approximate the anticipated utility of activating a certain expert.
A particular issue with this non-robustness of routing is that during standard inference there is no mechanism to reinforce the routing to a particularly successful part of the model, or to reduce routing to parts that did not contribute meaningful signal to the generated text.

* Equal contribution. guinan.su@tuebingen.mpg.de

Figure 1: Test-time rerouting framework for MoE models. (a) Rerouting mechanism: lightweight additive vectors (Delta) update router logits in selected high-confidence layers using a self-supervised loss on the existing context. (b) Continuous adaptation: alternating between optimization phases that adapt routing decisions and generation phases that maintain the adapted routing until the next optimization cycle.

This makes routing optimization a question of model adaptation, reminiscent of neuroplasticity in humans – who continuously optimize routing and neuronal connections in the brain through adaptation and self-regulation, for example when continuing to practice a certain task. Yet, in MoEs, suboptimal expert selection leads to routing inefficiencies, creating a critical bottleneck in overall performance (Shi et al., 2024; Li et al., 2025a). And, while there has been a significant amount of work to improve the capabilities of MoEs in general, such as through test-time scaling, it is less clear how to best adapt MoEs to different tasks at inference. While standard approaches, such as in-context learning with task-specific demonstrations (Wei et al., 2022; Madaan et al., 2023) or parallel generation strategies that produce multiple candidates and aggregate them (Wang et al., 2022; Brown et al., 2024), do adapt the model overall, they influence routing only implicitly.

Conceptually, adaptation should be solvable by test-time training, but existing approaches (Hardt & Sun; Hübotter et al., 2024) focus on a classical prediction perspective, which retrieves relevant data from training sets or knowledge bases during inference to fine-tune models for dynamic scenarios before the model is being used. Li et al. (2025a) attempt to address router optimization from this perspective of test-time training and, as such, use retrieval of "successful neighbors" from reference sets based on the first prompt in each context. However, this approach requires access to external reference data during deployment, incurs retrieval overhead, and risks retrieval failures due to short prompts. Moreover, the approach is static: as modern models execute test-time scaling, they generate long chains of thought that should be taken into account when optimizing routing.

In this regard, we propose a simple yet effective data-free test-time rerouting framework for MoE models, as shown in Figure 1, that treats each input prompt as a self-supervised learning opportunity, enabling dynamic rerouting of pretrained MoE models for individual prompts during inference. The framework alternates between two phases: (1) In-Context Routing Optimization, where we regard the current context itself as a training sample and execute optimization steps to minimize the cross-entropy loss on the current context with respect to the routing logits; and (2) Steered Generation, where we generate text normally, steering the routers with the updates computed in the previous phase. This creates a dynamic feedback loop where the model continuously refines its understanding of task requirements based on its own generation progress, enabling increasingly informed expert routing decisions as generation proceeds.
To maintain computational efficiency, we implement this progressive optimization through lightweight additive parameter vectors that update only the model's router logits in selected MoE layers, further reducing computational overhead and preventing over-adaptation. Overall, our main contributions are:

• We propose a test-time rerouting framework specifically designed for MoE models, operating completely without external data dependencies or expensive retrieval mechanisms, using only backpropagation within the current context.
• We introduce a lightweight parameter update mechanism that selectively optimizes router logits through additive vectors in high-confidence layers, enabling efficient expert selection adaptation during inference using steering.
• We validate our method's effectiveness through extensive experiments, demonstrating that our approach significantly improves MoE model performance on complex reasoning tasks, maintains robustness to context shifts in multi-turn conversations, and seamlessly integrates with existing test-time techniques, establishing a practical and versatile solution for deploying MoE models in real-world applications.

Overall, we argue that routing changes are a compelling mechanism with which to adapt MoE models on the fly. The changes are lightweight, quick to compute and to apply (even in multi-user settings), and remain effective across context shifts, allowing MoEs a novel degree of plasticity that enables adaptation to task changes during deployment, paving the way toward practical continual self-regulation of MoE models during use.

2 RELATED WORK

Mixture-of-Experts (MoE). Sparsely activated Mixture-of-Experts models (MoE) are an efficient method for scaling model size by replacing the feed-forward networks (FFN) of transformer architectures with multiple experts (each its own FFN) and a gating function. This architecture dynamically activates different experts for each input token rather than utilizing all parameters (Shazeer et al., 2017; Lepikhin et al., 2020; Fedus et al., 2022). Recent MoE-based LLMs, including OLMoE (Muennighoff et al., 2024), DeepSeek (Dai et al., 2024; Liu et al., 2024), and Qwen2.5 (Team, 2025), adopt top-k expert routing to reduce the active parameter count during inference. These models achieve competitive performance while maintaining significantly lower memory costs.

Test-Time Scaling. Test-time scaling enhances LLM capabilities by allocating additional computational resources during inference. These approaches fall into two categories (Welleck et al., 2024): parallel generation, including self-consistency evaluation via multiple candidate responses (Wang et al., 2022), best-of-N sampling (Brown et al., 2024), and Monte Carlo Tree Search (Zhou et al., 2023; Xie et al., 2024), and sequential generation, such as extending outputs through chain-of-thought reasoning (Wei et al., 2022; Madaan et al., 2023).

Test-Time Training (TTT). TTT offers an alternative scaling approach. While successful in computer vision (Wang et al., 2020; Sun et al., 2020; 2024; Gandelsman et al., 2022; Osowiechi et al., 2023), recent works have extended TTT to language models through fine-tuning on retrieved neighbors (Hardt & Sun) or optimized data selection algorithms like SIFT (Hübotter et al., 2024). Test-Time Reinforcement Learning (TTRL) (Zuo et al., 2025) uses majority voting as reward signals. When applied to Mixture-of-Experts (MoE) models, recent work has explored expert-level interventions for behavior control, enabling precise modifications through targeted expert manipulation (Dahlke et al., 2025; Wang et al., 2025). Most relevant to our approach, Li et al. (2025a) optimizes expert routing in MoE models using "successful neighbors" from reference sets. However, these methods assume accessible training data during deployment and introduce significant retrieval overhead, limiting real-world practicality.
When applied to Mixture-of-Experts (MoE) models, recent work has explored expert-level interventions for behavior control, enabling precise modifications through targeted expert manipulation (Dahlke et al., 2025; Wang et al., 2025). Most relevant to our approach, Li et al. (2025a) optimize expert routing in MoE models using "successful neighbors" from reference sets. However, these methods assume accessible training data during deployment and introduce significant retrieval overhead, limiting real-world practicality.

3 METHODOLOGY

This section presents our data-free test-time rerouting framework for MoE models that optimizes expert routing during inference without external information. After reviewing MoE fundamentals (Section 3.1), we introduce three key components: (1) Router Logits Modification (Section 3.2) for layer-specific expert selection steering, (2) Dynamic Layer Selection (Section 3.3) for selective MoE layer updates based on confidence scores, and (3) Optimization Procedure (Section 3.4) detailing our two-phase strategy alternating between in-context routing optimization and steered generation.

3.1 PRELIMINARIES ON MIXTURE-OF-EXPERTS

In Transformer-based Mixture-of-Experts (MoE) models, the conventional Feed-Forward Networks (FFNs) are replaced with MoE layers. Each MoE layer consists of a router R and a set of experts \{E_i\}_{i=1}^{N}. Given an input sequence x_{<t} = (x_1, x_2, \ldots, x_{t-1}), the router assigns each token to a subset of experts for processing. Given a token's hidden state h \in \mathbb{R}^d at layer l, the router computes logits across all N experts, z^{(l)} = W_r^{(l)} h^{(l)}, where W_r^{(l)} \in \mathbb{R}^{N \times d} is the router's weight matrix at layer l and z^{(l)} \in \mathbb{R}^N represents the logits for expert selection. These logits are then converted to expert selection probabilities, w^{(l)} = \mathrm{Softmax}(z^{(l)}), where w_i^{(l)} represents the activation probability for expert E_i at layer l. The router applies a routing strategy (e.g., top-k routing) to select active experts. Weights for unselected experts are zeroed and the remaining weights are renormalized to \hat{w}^{(l)}. The final MoE output is

o^{(l)} = \sum_{i \in A^{(l)}} \hat{w}_i^{(l)} \cdot E_i(h^{(l)}),

where A^{(l)} = \{ j \mid \hat{w}_j^{(l)} \neq 0 \} denotes the set of activated experts at layer l.

3.2 ROUTER LOGITS MODIFICATION

We introduce layer-specific adaptation parameters \{\delta^{(l)}\}_{l=1}^{L}, where \delta^{(l)} \in \mathbb{R}^N corresponds to the N experts at MoE layer l. For a selected layer l, we modify the router logits by adding the corresponding layer-specific parameter, \tilde{z}^{(l)} = z^{(l)} + \delta^{(l)}, where \tilde{z}^{(l)} \in \mathbb{R}^N represents the modified logits for layer l. The expert selection probabilities are then computed as

\tilde{w}^{(l)} = \mathrm{Softmax}(\tilde{z}^{(l)}) = \mathrm{Softmax}(z^{(l)} + \delta^{(l)})    (1)

This modification directly influences the expert selection distribution, allowing the model to adapt routing decisions based on prompt characteristics.
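To make Sections 3.1-3.2 concrete, the following is a minimal PyTorch sketch of a single top-k routing step with the additive logit adjustment of Equation (1). The function name, tensor names, and toy sizes are our own illustration, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

def route_with_delta(h, W_r, delta, k=6):
    """Top-k MoE routing with an additive logit adjustment (Eq. 1).

    h:     hidden state of one token, shape (d,)
    W_r:   router weight matrix, shape (N, d)
    delta: per-expert adjustment for this layer, shape (N,)
    """
    z = W_r @ h                      # router logits, shape (N,)
    z_tilde = z + delta              # additive modification (Eq. 1)
    w = F.softmax(z_tilde, dim=-1)   # expert selection probabilities
    top_w, top_idx = torch.topk(w, k)
    w_hat = top_w / top_w.sum()      # renormalize over active experts
    return w_hat, top_idx

# Toy usage: 64 experts, hidden size 2048, 6 active experts per token.
N, d = 64, 2048
h = torch.randn(d)
W_r = torch.randn(N, d) / d ** 0.5
delta = torch.zeros(N, requires_grad=True)  # initialized to zero (Eq. 5)
w_hat, experts = route_with_delta(h, W_r, delta)
```

Because delta enters only through the logits, gradients of a self-supervised loss can flow into delta without touching any pretrained weights.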
3.3 DYNAMIC LAYER SELECTION

To mitigate computational overhead and prevent over-adaptation, our framework selectively updates the router logits of only a subset of MoE layers rather than all layers simultaneously. We hypothesize that layers with higher routing confidence indicate more decisive and task-relevant expert selection, making them more impactful for adaptation. We define router confidence at layer i for token n as

C_i^{(n)} = -\frac{1}{k} \sum_{j=1}^{k} \log p_{i,j}^{(n)}    (2)

where p_{i,j}^{(n)} is the probability of the j-th top expert at layer i for token n, and k is the number of activated experts per layer. Higher confidence values indicate more decisive routing decisions, suggesting these layers play more critical roles for the current task. To obtain layer-level confidence across the generated sequence, we aggregate the token-level confidence:

C_i = \frac{1}{N} \sum_{n=1}^{N} C_i^{(n)}    (3)

where N is the number of generated tokens so far. We implement two layer selection strategies: (1) Hard selection, which selects the top-r proportion of layers with the highest confidence scores, S_t = \mathrm{TopK}(\{C_i\}_{i=1}^{L}, r); and (2) Soft weighting, which assigns confidence-based weights to control the update strength of \delta^{(l)}:

w_i = \frac{C_i}{\sum_{j=1}^{L} C_j}    (4)

During gradient updates, \delta^{(l)} is updated with a learning rate scaled by w_l.

3.4 OPTIMIZATION PROCEDURE

Parameter Initialization: For each MoE layer l \in L, initialize the routing adjustment parameters:

\delta^{(l)} = 0 \in \mathbb{R}^N    (5)

The framework alternates between two phases:

Phase 1: In-Context Routing Optimization. Given the current context x = (x_1, \ldots, x_t), first select layers using S = \mathrm{TopK}(\{C^{(l)}\}_{l=1}^{L}, r) or compute soft weights \{w_l\}_{l=1}^{L}. Then perform n optimization steps, where at each step we compute the cross-entropy loss

\mathcal{L}(\{\delta^{(l)}\}_{l=1}^{L}) = -\sum_{i=1}^{t-1} \log p(x_{i+1} \mid x_{1:i}, \{\delta^{(l)}\}_{l=1}^{L})    (6)

and update the parameters using an optimizer O (e.g., SGD, Adam):

\delta^{(l)} \leftarrow O(\delta^{(l)}, \nabla_{\delta^{(l)}} \mathcal{L})        if l \in S (hard selection)
\delta^{(l)} \leftarrow O(\delta^{(l)}, w_l \nabla_{\delta^{(l)}} \mathcal{L})    if soft weighting
\delta^{(l)} \leftarrow \delta^{(l)}                                              otherwise    (7)

Phase 2: Steered Generation. Generate m tokens using the optimized routing parameters \{\delta^{(l)}\}_{l=1}^{L}. After generating m tokens, return to Phase 1 with the extended context, including both the original prompt and all generated tokens up to the current position, and repeat the optimization procedure. The detailed algorithm is provided in Algorithm 1.
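The following sketch mirrors the two-phase update at toy scale: it computes the confidence scores of Equations (2)-(3), converts them to soft weights per Equation (4), and applies the per-layer gradient scaling of Equation (7). The randomly generated routing probabilities and the quadratic stand-in loss are placeholders for the model's actual routers and the causal cross-entropy of Equation (6); this is an illustration under those assumptions, not the authors' code:

```python
import torch

def layer_confidence(topk_probs):
    """Eqs. (2)-(3): per-layer confidence, averaged over tokens.

    topk_probs: (num_layers, num_tokens, k) top-k routing probabilities.
    """
    token_conf = -topk_probs.clamp_min(1e-9).log().mean(dim=-1)  # (L, T)
    return token_conf.mean(dim=-1)                               # (L,)

def soft_weights(conf):
    """Eq. (4): normalize layer confidences into update weights."""
    return conf / conf.sum()

# Toy setup: 4 MoE layers, 8 experts, 16 context tokens, top-2 routing.
num_layers, num_experts, num_tokens, k = 4, 8, 16, 2
deltas = torch.zeros(num_layers, num_experts, requires_grad=True)  # Eq. (5)
opt = torch.optim.Adam([deltas], lr=0.05, weight_decay=1e-8, eps=1e-5)
target = torch.randn(num_layers, num_experts)  # stand-in objective

for step in range(5):  # n = 5 optimization steps (Phase 1)
    topk_probs = torch.rand(num_layers, num_tokens, k).softmax(dim=-1)
    w = soft_weights(layer_confidence(topk_probs))
    # Stand-in loss; the real method backpropagates the causal
    # cross-entropy of Eq. (6) through the modified routers.
    loss = ((deltas - target) ** 2).sum()
    opt.zero_grad()
    loss.backward()
    with torch.no_grad():
        deltas.grad *= w.unsqueeze(-1)  # Eq. (7), soft-weighting branch
    opt.step()
# Phase 2 would now generate m tokens with the updated deltas applied.
```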
4 EXPERIMENTAL SETUP

Benchmarks. We evaluate our approach using a diverse set of benchmarks across multiple reasoning domains. For general knowledge assessment, we employ MMLU-redux (Gema et al., 2024), utilizing generation mode rather than multiple-choice format to encourage deeper reasoning before providing final answers. For code generation tasks, we use HumanEval (Chen et al., 2021) and MBPP-sanitized (Austin et al., 2021). For mathematical reasoning, we evaluate on GSM8K (Cobbe et al., 2021) and MATH500 (Lightman et al., 2023).

Baselines. We compare our method with two adaptation techniques: In-Context Learning (ICL) (Wei et al., 2022; Madaan et al., 2023) and C3PO (Li et al., 2025a). Since our method is a data-free, sequential-generation test-time scaling approach, we select In-Context Learning (ICL) as a primary baseline, using 3 and 5 sample pairs respectively, for comparison. We also compare against C3PO, for which we select a reference set of 100 samples for each dataset. Note that parallel generation methods represent an orthogonal approach to ours, and we further discuss the potential integration of our method with such approaches in later sections.

Model Selection. We evaluate three MoE LLMs: OLMoE (Muennighoff et al., 2024), Qwen1.5-MoE (Team, 2024), and DeepSeek-V2-Lite (Liu et al., 2024). OLMoE employs 16 layers with 64 experts per layer, activating 8 experts per token (6.9B total, 1.3B active parameters). Qwen1.5-MoE-A2.7B incorporates 4 shared experts alongside 60 routing experts with 4 activated per token (14.3B total, 2.7B active parameters). DeepSeek-V2-Lite uses 28 layers with 2 shared and 64 routed experts, activating all shared plus 6 routed experts per token (16B total, 2.4B active parameters).

Optimization. We optimize the adaptation parameters using the Adam optimizer with a small number of iterations (T = 5) to maintain computational efficiency. The optimization hyperparameters are set as follows: learning rate η = 0.05, weight decay = 1 × 10^{-8}, and epsilon = 1 × 10^{-5}. All adaptation parameters are initialized to zero: δ^{(l)} = 0. We use the soft-weighting strategy for layer selection, and during the generation stage, we periodically re-optimize the parameters every 128 generated tokens using identical hyperparameter settings to ensure continuous refinement throughout the sequence generation process.

5 RESULTS FOR MOE REWIRING

5.1 MAIN RESULTS

We first evaluated the performance of our method on challenging reasoning benchmarks. As shown in Table 1, our approach consistently improves performance over all baselines across five benchmarks. On the HumanEval task in particular, our method yields gains of 3.6%, 5.5%, and 6.7% on DeepSeek-V2-Lite, OLMoE, and Qwen1.5-MoE, respectively. Moreover, compared to other calibration methods such as few-shot learning and C3PO, our method achieves consistently better results. Notably, it surpasses these baselines without relying on example demonstrations, delivering particularly strong improvements in code generation and mathematical reasoning tasks. Moreover, in contrast to C3PO, which requires 100 reference samples, our data-free method attains superior performance while requiring no additional reference data.

Table 1: Model performance comparison across different benchmarks, compared to in-context learning (ICL) and C3PO (Li et al., 2025a). Rewiring shows strong performance even though no external data is used, either as references or as few-shot examples.

Method                     HumanEval  MBPP   GSM8K  MATH500  MMLU   Average
DeepSeek-V2-Lite
  Baseline                 50.60      58.37  72.10  22.60    50.77  50.89
  ICL (3-shot)             52.44      56.81  71.10  21.60    44.33  49.26
  ICL (5-shot)             53.05      52.53  71.57  22.20    46.07  49.08
  C3PO (100-reference)     47.80      59.92  68.80  18.20    45.77  48.10
  Rewiring (Ours, 0-shot)  54.26      62.65  73.62  25.00    52.40  53.59
OLMoE
  Baseline                 28.66      40.08  51.48  10.40    37.17  33.56
  ICL (3-shot)             28.05      36.96  43.82   9.90    44.90  32.73
  ICL (5-shot)             21.21      39.30  44.96   9.40    43.63  31.70
  C3PO (100-reference)     28.05      41.25  52.08  13.00    37.30  34.33
  Rewiring (Ours, 0-shot)  34.17      42.80  52.99  12.40    38.30  36.13
Qwen1.5-MoE
  Baseline                 40.24      46.69  54.73  21.80    45.27  41.75
  ICL (3-shot)             44.34      46.25  55.27  22.20    45.60  42.73
  ICL (5-shot)             43.90      44.75  52.31  19.40    46.03  41.28
  C3PO (100-reference)     39.70      45.40  50.33  18.90    44.82  39.83
  Rewiring (Ours, 0-shot)  46.95      47.47  56.00  22.00    45.87  43.66

Table 2: Ablation study: layer selection strategies and continuous refinement.

Method                      HumanEval  MBPP   GSM8K  MATH500  MMLU   Average
Baseline                    50.60      58.37  72.10  22.60    50.77  50.89
Layer Selection Strategies
  Random Selection          48.78      53.31  72.50  23.40    50.87  49.77
  Reverse Metric            48.18      55.26  72.10  21.89    49.33  49.35
  Last-five Layers          50.60      55.64  71.85  25.20    50.33  50.72
  All Layers                48.17      56.42  73.09  24.60    51.10  50.68
Continuous Refinement
  W/o Continuous            54.27      59.53  73.37  23.80    51.80  52.55
Rewiring (Complete Method)  54.27      62.65  73.62  25.00    52.40  53.59

5.2 ABLATIONS OF KEY COMPONENTS

To validate the effectiveness of our design choices for MoE rewiring, we conduct ablation studies using DeepSeek-V2-Lite, examining two key components: layer selection strategies and continuous refinement mechanisms. We evaluate performance across five tasks, with results presented in Table 2.

Confidence-based layer selection efficiently discovers task-specific layers.
We compare our confidence-based selection with several baselines, including random selection, selecting the last five layers, selecting low-confidence layers (Reverse Metric), and optimizing all layers. As shown in Table 2, our method achieves an average performance of 53.59%, outperforming random selection (49.77%), the last-five-layer strategy used in C3PO (50.72%), and reverse selection targeting low-confidence layers (49.35%). This demonstrates that router confidence identifies layers with strong expert selection knowledge. In effect, we use the signal from prior context to "confirm" that routing choices were accurate, and that the model should rely on these experts more strongly.

Updating single layers is safer than rerouting the whole model. In addition, from the results we can see that updating all layers simultaneously (50.68% on average) underperforms our selective approach (53.59% on average), highlighting the superiority of our method in selecting layers for test-time rerouting. With limited optimization steps available, spreading updates across all parameters dilutes the refinement signal and risks over-adaptation that destabilizes pre-trained routing patterns. Concentrating the optimization budget on high-confidence layers maximizes impact by amplifying existing routing strengths rather than attempting comprehensive corrections across the entire network.

Continuous refinement stabilizes routing. The comparison between continuous (53.59% on average) and non-continuous refinement (52.55% on average) reveals a meaningful performance gap of over 1%. This gap carries weight, given that each benchmark example involves a single task and prompt-based router updates already constitute a strong baseline. This improvement indicates that routing optimization benefits from curriculum-like approaches that gradually refine the model's expert selection mechanisms, leading to more stable and effective adaptations during test-time optimization.

6 ANALYSIS AND DISCUSSIONS

6.1 WHY DOES TEST-TIME REROUTING WORK?

To understand the mechanisms behind test-time rerouting, we perform a comprehensive analysis of the DeepSeek-V2-Lite model, examining pathway shifts, expert utilization dynamics, and the evolution of routing confidence.

Figure 2: Analysis of test-time rerouting mechanisms across different datasets in DeepSeek-V2-Lite. a) Edit distance before and after rerouting. b) Expert utilization dynamics before and after rerouting.

Rewiring improves Expert Pathways. We first examine how expert pathways change after rerouting using edit distance, following Li et al. (2025b) (described in Section B.2), which quantifies pathway differences using Levenshtein edit distance. This captures mismatches in expert selection and pathway shifts. We track the average pairwise edit distance across samples during rerouting. The results are displayed in Figure 2 a), which shows distinct layer-wise editing patterns. In HumanEval, edit distances peak in deeper layers (18-22, >0.35) while earlier ones (5-10) remain minimal (<0.05). MATH500 exhibits a similar but stronger trend, with peaks up to 0.16 in layers 20-22, suggesting that deeper layers are more involved in mathematical reasoning. The sample-wise distributions (right panels) further emphasize task-specific differences, indicating high variability in pathway changes across problems. Overall, these results highlight the heterogeneity of expert routing across tasks and demonstrate how our method adaptively reroutes pathways.
Rewiring highlights Task-Specific Experts. We further examine expert-level activation shifts on HumanEval and MATH500 before and after rerouting to understand how the model redistributes computation. Figure 2 b) shows heatmaps across layers and experts (red: increased, blue: decreased, normalized for visualization). The results reveal: (1) Adaptive targeting: changes concentrate on a subset of experts, indicating strategic rather than uniform redistribution; (2) Deep-layer adaptation: later layers (L21-L25) undergo stronger shifts, consistent with our edit-distance findings; and (3) Task-specific specialization: HumanEval and MATH500 exhibit distinct patterns, highlighting task-dependent routing behaviors. This selective activation shows that rerouting strategically emphasizes task-relevant experts, enhancing efficiency by focusing computation where it is most needed.

Rewiring Increases Router Confidence. Moreover, we quantify routing confidence by tracking the entropy of expert-selection distributions across all layers during optimization. Lower entropy reflects more focused expert allocation, whereas higher entropy indicates more diffuse routing patterns. Figure 3 shows that our method (red line) exhibits a gradual decrease in router entropy over 18 generation steps, whereas the baseline (blue line) maintains a higher router entropy with pronounced fluctuations. This suggests that our approach progressively develops more focused routing decisions, while the baseline continues to rely on more diffuse expert-selection patterns. This indicates that our method facilitates concentrating on relevant experts without disrupting established routing mechanisms, thereby enabling more efficient expert utilization for the given tasks.

Figure 3: Expert routing entropy as a function of sequence length, averaged over 16-token blocks.
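The entropy reported in Figure 3 is the Shannon entropy of each layer's expert-selection distribution, averaged over layers and tokens; a minimal sketch with illustrative tensor shapes (not the authors' code):

```python
import torch

def routing_entropy(router_probs):
    """Mean Shannon entropy of expert-selection distributions.

    router_probs: (num_layers, num_tokens, num_experts), rows sum to 1.
    Returns a scalar: entropy averaged over layers and tokens.
    """
    p = router_probs.clamp_min(1e-9)
    ent = -(p * p.log()).sum(dim=-1)   # (num_layers, num_tokens)
    return ent.mean()

# Sanity check: a uniform router is maximally diffuse, a peaked one focused.
uniform = torch.full((2, 4, 8), 1 / 8)
peaked = torch.zeros(2, 4, 8)
peaked[..., 0] = 1.0
assert routing_entropy(uniform) > routing_entropy(peaked)
```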
6.2 COMBINING REWIRING WITH OTHER TEST-TIME STRATEGIES

A key advantage of our approach is its plug-and-play compatibility with existing test-time techniques. Since our method only modifies routing decisions without altering the underlying generation process, it can be seamlessly integrated with other approaches like in-context learning and parallel generation methods.

Synergy with In-Context Learning. Large pretrained language models demonstrate strong in-context learning, predicting labels from few demonstration pairs without parameter updates. Empirical evidence demonstrates behavior similar to explicit finetuning (Dai et al., 2022; Akyürek et al., 2022; Von Oswald et al., 2023). As such, we test combining our method with 3-shot demonstrations. As shown in Table 4, we observe that this combination yields improvements across multiple tasks, even surpassing our method alone. We hypothesize this synergy occurs because our method provides more effective gradient updates that better leverage the contextual information from demonstration examples, helping extract more meaningful patterns from the limited demonstration data.

Unlocking Better Self-Consistency. Self-consistency (Wang et al., 2022) improves reasoning by sampling multiple paths and aggregating them. We combine our method with self-consistency by applying rerouting optimization, then generating multiple reasoning paths with the optimized routing decisions. For each sample, we generate 3 reasoning paths. For MMLU, MATH500, and GSM8K, we use majority voting to determine the final accuracy. For code generation tasks (MBPP, HumanEval), we report pass@3 scores. Table 4 shows substantial improvements, with an average 3-percentage-point gain over self-consistency alone. We hypothesize that our rerouting framework generates higher-quality reasoning chains by selecting more appropriate experts, and when self-consistency aggregates these improved paths, the voting mechanism amplifies the benefits.

Table 4: Performance comparison of individual test-time methods versus combined approaches. Online optimization of routing decisions with our rewiring algorithm can be reliably combined with other techniques, such as in-context learning (ICL) or self-consistency.

Method                           HumanEval  MBPP   GSM8K  MATH500  MMLU   Average
Baseline                         50.60      58.37  72.10  22.60    50.77  50.89
Rewiring (Ours)                  54.26      62.65  73.62  25.00    52.40  53.59
In-Context Learning
  ICL (3-shot)                   52.44      56.81  71.10  21.60    44.33  49.26
  ICL + Rewiring                 53.05      62.65  76.00  27.00    46.53  53.05
Self-Consistency
  Self-Consistency (3)           51.02      70.04  75.28  26.20    51.87  54.88
  Self-Consistency (3) + Rewiring  55.08    71.21  77.54  27.40    54.20  57.09

6.3 EFFICIENCY ANALYSIS

Beyond performance improvements, we also compare the computational cost of our method with other online approaches to adapt models. Table 3 reports the results on the HumanEval task with the DeepSeek model, showing the average total FLOPs and inference time per sample across different methods.

Table 3: Computational efficiency comparison across different methods.

Method                 Total FLOPs  Time (s)
Baseline               4.71e+11     10.71
ICL (3-shot)           1.93e+12     12.17
ICL (5-shot)           3.05e+12     12.53
Self-Consistency (3)   1.68e+12     34.20
C3PO (100-reference)   2.63e+12     26.20
Rewiring (Ours)        1.96e+12     20.12

As shown in Table 3, our method achieves notable computational savings over most baselines. Although it requires more computation than the vanilla baseline, it remains substantially more efficient than other test-time techniques, using 1.3× fewer FLOPs than C3PO, 1.6× fewer than ICL (5-shot), and comparable amounts to ICL (3-shot) and Self-Consistency (3). Despite the extra routing operations, our method maintains a competitive inference time of 20.12s, indicating that the overhead of online adaptation is modest while still delivering improved performance.

Conceptually, our method requires n additional prefill passes on already generated text every m tokens, which, given the ease of parallelizing prefill, is a manageable compute increase. In this work, we implement this optimization in a straightforward manner, but a production implementation could be significantly faster by incorporating the optimization into disaggregated-prefill systems, or by timing the MoE rewiring event with low-load timespans, for example when waiting for a user to respond. The actual routing changes are self-contained in the routing parameters δ (shaped as number of experts by number of layers), so on-device storage is feasible, even for many separate conversations.
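Because the adaptation state is just this small δ tensor, persisting it per conversation is straightforward; below is a sketch of one possible layout, where the file naming, tensor sizes, and helper functions are hypothetical illustrations rather than part of the method:

```python
import torch

NUM_LAYERS, NUM_EXPERTS = 26, 64   # illustrative sizes

def save_deltas(conversation_id: str, deltas: torch.Tensor) -> None:
    # One small tensor per conversation; at 26 layers x 64 experts in
    # fp16 this is about 3.3 KB, cheap enough to keep per user.
    torch.save(deltas.half().cpu(), f"deltas_{conversation_id}.pt")

def load_deltas(conversation_id: str) -> torch.Tensor:
    try:
        return torch.load(f"deltas_{conversation_id}.pt").float()
    except FileNotFoundError:
        return torch.zeros(NUM_LAYERS, NUM_EXPERTS)  # fresh start, Eq. (5)

deltas = load_deltas("demo")
save_deltas("demo", deltas)
```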
6.4 EXTENSION TO LONG-REASONING TASKS

To evaluate the generalizability of our approach to modern reasoning models, we also test GPT-OSS-20B (Agarwal et al., 2025). We evaluate on the AIME benchmark (MAA Committees), a challenging mathematics competition dataset that requires sophisticated multi-step derivations and mathematical reasoning. This setting allows us to assess whether our online rerouting method can improve performance on the demanding reasoning tasks that represent the current frontier of AI capabilities. As shown in Table 5, our method improves performance on both AIME datasets, achieving higher average correctness (75.65% vs. 74.29% on AIME25; 73.75% vs. 69.58% on AIME24). The primary improvements occur in Maj@k metrics rather than Pass@k, suggesting our routing optimization enhances reasoning consistency rather than solution diversity.

Table 5: Performance of our method versus the baseline on AIME datasets using the GPT-OSS-20B model. Optimized rerouting is especially noticeable in improving model certainty, as shown by improved performance at lower pass@k and improved majority voting, even on challenging benchmarks like AIME.

                    Pass@k               Maj@k
Data / Method       2      4      8      2      4      8      Average
AIME25
  Baseline          83.21  87.90  90.00  65.36  71.76  76.67  74.29
  Rewiring (Ours)   83.81  86.19  86.67  67.50  76.67  83.33  75.65
AIME24
  Baseline          78.57  83.76  86.67  60.60  67.48  70.00  69.58
  Rewiring (Ours)   81.67  84.95  86.67  65.83  72.57  80.00  73.75

6.5 ROBUSTNESS TO CONTEXT SHIFTS IN MULTI-TURN SCENARIOS

In real-world applications, MoE models often encounter multi-turn conversations where contexts shift dramatically between different topics or tasks. To assess the robustness of our method under realistic multi-turn contexts, we simulate context shifts by prepending few-shot examples from different domains before the target task.

Figure 4: Performance of our method versus the baseline across different few-shot examples under shifted and aligned task contexts.

Figure 4 presents results on HumanEval and Math500 using DeepSeek-V2-Lite, under two conditions: (1) Aligned-task, where few-shot examples are from the same task domain, and (2) Shifted-task, where examples are drawn from unrelated domains (e.g., MATH and MMLU for code tasks) to induce cross-domain shifts. Across both benchmarks, our method (Ours-Shifted, Ours-Aligned) consistently outperforms the baselines (Base-Shifted, Base-Aligned). On HumanEval, all methods benefit from additional few-shot examples, but our approach shows a stable improvement. In contrast, baseline gains remain modest, particularly in the Shifted-task setting, where performance fluctuates. On Math500, our method maintains robust and consistent results across both domains, while baselines exhibit limited or even inconsistent improvements. These findings highlight the robustness and scalability of our test-time rerouting strategy in handling diverse contextual information.

7 CONCLUSION

In this work, we introduce a novel test-time rerouting approach that enables MoE models to dynamically adapt expert selection on the fly, without requiring external data or costly retrieval. The method alternates between routing optimization and steered generation, forming a feedback loop that progressively improves expert selection. To reduce computational overhead, we employ lightweight additive vectors that update only the logits of selected routers. Extensive experiments show that our approach effectively compensates for the inherent imperfections in MoE routing, yielding consistent gains across multiple benchmarks (up to 6.7% on code generation) with 1.6× fewer FLOPs than few-shot methods, while maintaining robustness to context shifts. As a plug-and-play regularization strategy, the method flexibly combines with complementary techniques (e.g., Self-Consistency) to further amplify the benefits. More importantly, by introducing a new dimension of plasticity into MoEs, it opens the door to deployment-time adaptation and points toward practical continual self-regulation in MoE models.

ACKNOWLEDGEMENTS

JG acknowledges the support of the Hector II foundation.
GS acknowledges the support of the International Max Planck Research School for Intelligent Systems (IMPRS-IS).

REFERENCES

Sandhini Agarwal, Lama Ahmad, Jason Ai, Sam Altman, Andy Applebaum, Edwin Arbus, Rahul K Arora, Yu Bai, Bowen Baker, Haiming Bao, et al. gpt-oss-120b & gpt-oss-20b model card. arXiv preprint arXiv:2508.10925, 2025.

Ekin Akyürek, Dale Schuurmans, Jacob Andreas, Tengyu Ma, and Denny Zhou. What learning algorithm is in-context learning? Investigations with linear models. arXiv preprint arXiv:2211.15661, 2022.

Ekin Akyürek, Mehul Damani, Linlu Qiu, Han Guo, Yoon Kim, and Jacob Andreas. The surprising effectiveness of test-time training for abstract reasoning. arXiv e-prints, pp. arXiv-2411, 2024.

Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. Program synthesis with large language models. arXiv preprint arXiv:2108.07732, 2021.

Bradley Brown, Jordan Juravsky, Ryan Ehrlich, Ronald Clark, Quoc V Le, Christopher Ré, and Azalia Mirhoseini. Large language monkeys: Scaling inference compute with repeated sampling. arXiv preprint arXiv:2407.21787, 2024.

Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.

Robert Dahlke, Henrik Klagges, Dan Zecha, Benjamin Merkel, Sven Rohr, and Fabian Klemm. Mixture of tunable experts: Behavior modification of deepseek-r1 at inference time. arXiv preprint arXiv:2502.11096, 2025.

Damai Dai, Yutao Sun, Li Dong, Yaru Hao, Shuming Ma, Zhifang Sui, and Furu Wei. Why can gpt learn in-context? Language models implicitly perform gradient descent as meta-optimizers. arXiv preprint arXiv:2212.10559, 2022.

Damai Dai, Chengqi Deng, Chenggang Zhao, RX Xu, Huazuo Gao, Deli Chen, Jiashi Li, Wangding Zeng, Xingkai Yu, Yu Wu, et al. Deepseekmoe: Towards ultimate expert specialization in mixture-of-experts language models. arXiv preprint arXiv:2401.06066, 2024.

William Fedus, Barret Zoph, and Noam Shazeer. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. Journal of Machine Learning Research, 23(120):1-39, 2022.

Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, et al. Codebert: A pre-trained model for programming and natural languages. arXiv preprint arXiv:2002.08155, 2020.

Yossi Gandelsman, Yu Sun, Xinlei Chen, and Alexei Efros. Test-time training with masked autoencoders. Advances in Neural Information Processing Systems, 35:29374-29385, 2022.

Aryo Pradipta Gema, Joshua Ong Jun Leang, Giwon Hong, Alessio Devoto, Alberto Carlo Maria Mancino, Rohit Saxena, Xuanli He, Yu Zhao, Xiaotang Du, Mohammad Reza Ghasemi Madani, Claire Barale, Robert McHardy, Joshua Harris, Jean Kaddour, Emile van Krieken, and Pasquale Minervini. Are we done with mmlu?, 2024.

Moritz Hardt and Yu Sun. Test-time training on nearest neighbors for large language models, 2024. URL https://arxiv.org/abs/2305.18466.

Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt.
Measuring mathematical problem solving with the MATH dataset, 2021. URL https://arxiv.org/abs/2103.03874.

Jonas Hübotter, Sascha Bongni, Ido Hakimi, and Andreas Krause. Efficiently learning at test-time: Active fine-tuning of llms. arXiv preprint arXiv:2410.08020, 2024.

Albert Q Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, et al. Mixtral of experts. arXiv preprint arXiv:2401.04088, 2024.

Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam Shazeer, and Zhifeng Chen. Gshard: Scaling giant models with conditional computation and automatic sharding. arXiv preprint arXiv:2006.16668, 2020.

Zhongyang Li, Ziyue Li, and Tianyi Zhou. C3po: Critical-layer, core-expert, collaborative pathway optimization for test-time expert re-mixing. arXiv preprint arXiv:2504.07964, 2025a.

Ziyue Li, Chenrui Fan, and Tianyi Zhou. Where to find grokking in llm pretraining? Monitor memorization-to-generalization without test. arXiv preprint arXiv:2506.21551, 2025b.

Hunter Lightman, Vineet Kosaraju, Yuri Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let's verify step by step. In The Twelfth International Conference on Learning Representations, 2023.

Aixin Liu, Bei Feng, Bin Wang, Bingxuan Wang, Bo Liu, Chenggang Zhao, Chengqi Dengr, Chong Ruan, Damai Dai, Daya Guo, et al. Deepseek-v2: A strong, economical, and efficient mixture-of-experts language model. arXiv preprint arXiv:2405.04434, 2024.

MAA Committees. AIME problems and solutions. https://artofproblemsolving.com/wiki/index.php/AIME_Problems_and_Solutions.

Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. Self-refine: Iterative refinement with self-feedback. Advances in Neural Information Processing Systems, 36:46534-46594, 2023.

Niklas Muennighoff, Luca Soldaini, Dirk Groeneveld, Kyle Lo, Jacob Morrison, Sewon Min, Weijia Shi, Pete Walsh, Oyvind Tafjord, Nathan Lambert, et al. Olmoe: Open mixture-of-experts language models. arXiv preprint arXiv:2409.02060, 2024.

David Osowiechi, Gustavo A Vargas Hakim, Mehrdad Noori, Milad Cheraghalikhani, Ismail Ben Ayed, and Christian Desrosiers. Tttflow: Unsupervised test-time training with normalizing flow. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 2126-2134, 2023.

Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538, 2017.

Chufan Shi, Cheng Yang, Xinyu Zhu, Jiahao Wang, Taiqiang Wu, Siheng Li, Deng Cai, Yujiu Yang, and Yu Meng. Unchosen experts can contribute too: Unleashing moe models' power by self-contrast. Advances in Neural Information Processing Systems, 37:136897-136921, 2024.

Yu Sun, Xiaolong Wang, Zhuang Liu, John Miller, Alexei Efros, and Moritz Hardt. Test-time training with self-supervision for generalization under distribution shifts. In International Conference on Machine Learning, pp. 9229-9248. PMLR, 2020.

Yu Sun, Xinhao Li, Karan Dalal, Jiarui Xu, Arjun Vikram, Genghan Zhang, Yann Dubois, Xinlei Chen, Xiaolong Wang, Sanmi Koyejo, et al. Learning to (learn at test time): Rnns with expressive hidden states. arXiv preprint arXiv:2407.04620, 2024.
Qwen Team. Qwen2 technical report. arXiv preprint arXiv:2407.10671, 2024.

Qwen Team. Qwen3: Think deeper, act faster, 2025. URL https://qwenlm.github.io/blog/qwen3/.

Johannes Von Oswald, Eyvind Niklasson, Ettore Randazzo, João Sacramento, Alexander Mordvintsev, Andrey Zhmoginov, and Max Vladymyrov. Transformers learn in-context by gradient descent. In International Conference on Machine Learning, pp. 35151-35174. PMLR, 2023.

Dequan Wang, Evan Shelhamer, Shaoteng Liu, Bruno Olshausen, and Trevor Darrell. Tent: Fully test-time adaptation by entropy minimization. arXiv preprint arXiv:2006.10726, 2020.

Mengru Wang, Xingyu Chen, Yue Wang, Zhiwei He, Jiahao Xu, Tian Liang, Qiuzhi Liu, Yunzhi Yao, Wenxuan Wang, Ruotian Ma, et al. Two experts are all you need for steering thinking: Reinforcing cognitive effort in moe reasoning models without additional training. arXiv preprint arXiv:2505.14681, 2025.

Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171, 2022.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824-24837, 2022.

Sean Welleck, Amanda Bertsch, Matthew Finlayson, Hailey Schoelkopf, Alex Xie, Graham Neubig, Ilia Kulikov, and Zaid Harchaoui. From decoding to meta-generation: Inference-time algorithms for large language models. arXiv preprint arXiv:2406.16838, 2024.

Yuxi Xie, Anirudh Goyal, Wenyue Zheng, Min-Yen Kan, Timothy P Lillicrap, Kenji Kawaguchi, and Michael Shieh. Monte carlo tree search boosts reasoning via iterative preference learning. arXiv preprint arXiv:2405.00451, 2024.

Andy Zhou, Kai Yan, Michal Shlapentokh-Rothman, Haohan Wang, and Yu-Xiong Wang. Language agent tree search unifies reasoning acting and planning in language models. arXiv preprint arXiv:2310.04406, 2023.

Yanqi Zhou, Tao Lei, Hanxiao Liu, Nan Du, Yanping Huang, Vincent Zhao, Andrew M Dai, Quoc V Le, James Laudon, et al. Mixture-of-experts with expert choice routing. Advances in Neural Information Processing Systems, 35:7103-7114, 2022.

Yuxin Zuo, Kaiyan Zhang, Li Sheng, Shang Qu, Ganqu Cui, Xuekai Zhu, Haozhan Li, Yuchen Zhang, Xinwei Long, Ermo Hua, et al. Ttrl: Test-time reinforcement learning. arXiv preprint arXiv:2504.16084, 2025.

A EXPERIMENTAL SETTINGS

A.1 BENCHMARKS

MMLU-redux is a manually re-annotated subset of the original MMLU benchmark designed to address quality issues in the dataset. The dataset contains 3,000 questions across 30 MMLU subjects (100 questions per subject), with expert annotators identifying and categorizing various types of errors using a comprehensive error taxonomy. These errors include issues such as bad question clarity, unclear options, no correct answers, multiple correct answers, and wrong ground truth labels. MMLU-Redux provides a more reliable evaluation standard by filtering out problematic questions and offering corrected annotations where possible.

HumanEval is a benchmark dataset for evaluating code generation capabilities of large language models. The dataset consists of 164 hand-crafted programming problems, each including a function signature, docstring, body, and several unit tests (averaging 7.7 tests per problem).
These challenges assess language comprehension, algorithms, and simple mathematics, with difficulty comparable to simple software interview questions.

MBPP is a code generation benchmark consisting of around 1,000 crowd-sourced Python programming problems designed to be solvable by entry-level programmers. Each problem includes a task description, code solution, and 3 automated test cases, covering programming fundamentals and standard library functionality. The dataset provides two versions: a full version with 974 problems and a sanitized version with 427 problems. The sanitized split underwent a second round of annotation to improve task descriptions, addressing issues where the original descriptions might not be sufficiently expressive to solve the tasks. This hand-verified subset provides higher-quality problem statements for more reliable evaluation of code generation models. In this paper, we use the sanitized version.

GSM8K is a dataset of 8,500 high-quality, linguistically diverse grade school math word problems created by human problem writers. The dataset is segmented into 7,473 training problems and 1,319 test problems. Each problem takes between 2 and 8 steps to solve using basic arithmetic operations, with problems designed so that a bright middle school student should be able to solve every problem. Solutions are provided in natural language format rather than pure mathematical expressions, offering insight into multi-step reasoning processes.

MATH-500 is a curated subset of 500 challenging mathematical problems selected from the MATH dataset (Hendrycks et al., 2024). These problems span seven topics: Prealgebra, Algebra, Number Theory, Counting and Probability, Geometry, Intermediate Algebra, and Precalculus. Each problem requires multi-step reasoning and is designed to test a model's ability to apply mathematical principles, execute complex calculations, and communicate solutions clearly.

A.2 BASELINES

C3PO. We adopt C3PO (Critical-Layer, Core-Expert, Collaborative Pathway Optimization) (Li et al., 2025a) as our primary baseline method. C3PO is a test-time optimization approach designed to address the suboptimal expert pathway selection problem in Mixture of Experts (MoE) large language models. The method operates on the observation that end-to-end trained routers often produce inefficient pathways for challenging or out-of-distribution samples, leading to degraded performance on diverse downstream tasks.

The core idea of C3PO is to dynamically re-mix expert pathways during inference by leveraging successful routing patterns from a reference set. Given a reference set of m samples \{(x_i, y_i)\}_{i=1}^{m} with their corresponding expert pathway matrices \{\omega_i\}_{i=1}^{m} (where each \omega_i \in \mathbb{R}^{L \times E}, with L layers and E experts) on which the model makes correct predictions, C3PO aims to find an improved pathway matrix \omega for a new test sample x that leads to more accurate outputs. Among the three optimization strategies proposed by C3PO (gradient descent, kernel regression, and mode finding), we implement the kernel regression approach in our experiments. This method estimates optimal expert pathways by computing a weighted average of the neighbors' pathway matrices:

\hat{\omega} = \frac{\sum_{i \in N(x)} K(x_i, x)\, \omega_i}{\sum_{i \in N(x)} K(x_i, x)}    (8)

where K(\cdot, \cdot) is a kernel function measuring sample similarity, and N(x) denotes the neighborhood of x in the reference set. The final pathway is obtained through interpolation:

\omega \leftarrow \alpha \omega + (1 - \alpha)\hat{\omega}    (9)

where \alpha is optimally chosen to minimize the loss function.
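A minimal sketch of this kernel-regression baseline per Equations (8)-(9). The text does not pin down the kernel or the neighborhood, so the Gaussian kernel and the use of the whole reference set as N(x) are our own assumptions:

```python
import torch

def c3po_kernel_regression(omega, x, ref_x, ref_omega, alpha=0.5, bw=1.0):
    """Eqs. (8)-(9): kernel-regression pathway estimate and interpolation.

    omega:     current pathway matrix of the test sample, shape (L, E)
    x:         test-sample embedding, shape (d,)
    ref_x:     reference-sample embeddings, shape (m, d)
    ref_omega: reference pathway matrices, shape (m, L, E)
    """
    # Gaussian kernel K(x_i, x); the kernel choice is an assumption here.
    sq_dist = ((ref_x - x) ** 2).sum(dim=-1)                     # (m,)
    K = torch.exp(-sq_dist / (2 * bw ** 2))
    omega_hat = (K[:, None, None] * ref_omega).sum(0) / K.sum()  # Eq. (8)
    return alpha * omega + (1 - alpha) * omega_hat               # Eq. (9)

# Toy usage with 5 reference samples, 4 layers, 8 experts.
m, d, L, E = 5, 16, 4, 8
blended = c3po_kernel_regression(torch.rand(L, E), torch.randn(d),
                                 torch.randn(m, d), torch.rand(m, L, E))
```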
For our comparative evaluation, we construct reference sets for each benchmark as follows: for HumanEval and MBPP, we use the MBPP validation set; for GSM8K and MATH500, we utilize the GSM8K training set; and for MMLU, we employ the MMLU training set. From each source, we randomly sample 100 instances where the model produces correct predictions, ensuring high-quality pathway references for the optimization process.

In-Context Learning. In-Context Learning (ICL) is a fundamental capability of large language models that enables them to adapt to new tasks by using a few demonstration examples provided in the input prompt, without requiring parameter updates (Wang et al., 2022; Wei et al., 2022). In our experimental setup, we implement ICL as a baseline across all evaluation benchmarks. The few-shot examples are carefully selected from relevant datasets to ensure domain alignment and high-quality demonstrations:

• HumanEval and MBPP: We sample 3-5 examples from the MBPP prompt collection, which provides well-crafted code generation examples with clear problem descriptions and corresponding Python solutions.

• GSM8K: We utilize 3-5 examples from the GSM8K training set, featuring step-by-step mathematical reasoning demonstrations that guide the model through problem-solving processes.

• MATH500: We sample 3-5 examples from the HYDRA-Math dataset, which offers high-quality mathematical problem-solution pairs covering various difficulty levels and mathematical domains.

• MMLU: We select 3-5 examples from the MMLU validation set, ensuring coverage of diverse knowledge domains while maintaining format consistency for multiple-choice questions.

For each benchmark, the few-shot examples are randomly sampled from their respective source datasets and prepended to each test query. This approach provides the model with task-specific context while maintaining consistency across different evaluation scenarios. The number of examples (3-5) is chosen to balance providing sufficient context against excessive prompt length that might degrade model performance.

A.3 MODEL SELECTION

OLMoE is a fully open-source MoE language model developed by the Allen Institute for AI and Contextual AI (Muennighoff et al., 2024). The model employs a decoder-only transformer architecture with 1 billion active and 7 billion total parameters. Each MoE layer contains 64 experts, of which 8 are activated per input token through a learned router network. This sparse activation mechanism enables computational efficiency similar to dense 1B parameter models while leveraging the full 7B parameter capacity.

DeepSeek-V2-Lite is a smaller variant of the DeepSeek-V2 model family, developed by DeepSeek-AI (Liu et al., 2024). The model employs innovative architectures, including Multi-head Latent Attention (MLA) and DeepSeekMoE, with 15.7 billion total parameters and 2.4 billion activated parameters per token. DeepSeek-V2-Lite has 27 layers with a hidden dimension of 2048, utilizing MLA with 16 attention heads, where each head has a dimension of 128. The model adopts the DeepSeekMoE architecture, where all feed-forward networks except the first layer are replaced with MoE layers. Each MoE layer consists of 2 shared experts and 64 routed experts, with 6 experts activated for each token. This sparse activation mechanism enables computational efficiency while maintaining strong performance across diverse tasks.

Qwen1.5-MoE is developed by the Qwen team (Team, 2024).
The model is upcycled from the dense Qwen-1.8B model, featuring 14.3 billion total parameters with 2.7 billion activated parameters during runtime. Despite using only 2.7B active parameters, the model achieves comparable performance to Qwen1.5-7B while requiring 75% fewer training resources and demonstrating 1.74x faster inference speed. The model employs a fine-grained expert architecture with 64 experts total, consisting of 4 shared experts that are always activated and 60 routing experts with 4 activated per token. This configuration represents an 8-fold increase in expert count compared to conventional MoE setups, enabling higher model capacity without proportional parameter increases. The fine-grained expert design partitions a single FFN into multiple segments, each serving as an individual expert, allowing for more specialized knowledge representation.

GPT-OSS-20B is OpenAI's recently released open-source MoE model with 21B total parameters but only 3.6B active at inference (Agarwal et al., 2025). Built on a Mixture-of-Experts architecture with 24 layers and 32 experts using Top-4 routing, it represents a state-of-the-art reasoning model that achieves performance similar to OpenAI o3-mini on reasoning benchmarks. Its sophisticated expert routing system and focus on mathematical reasoning make it an ideal candidate for evaluating our test-time rerouting method on challenging tasks like AIME.

B METHOD DETAILS

B.1 TEST-TIME MOE REROUTING

We elaborate the rerouting algorithmic pipeline in Algorithm 1, which details the step-by-step procedure for adaptive expert selection and pathway calibration at test time.

B.2 PATHWAY DIFFERENCES

We first examine how expert pathways change after rerouting using edit distance, following Li et al. (2025b). For input x_i, we define its pathway s_i as the ordered sequence of selected experts across the L MoE layers, s_i = \mathrm{concat}(e_1^{(i)}, e_2^{(i)}, \ldots, e_L^{(i)}), where e_\ell^{(i)} represents the expert indices at layer \ell as comma-separated strings (e.g., '3,1,5'), joined across layers with hyphens. We quantify pathway differences using the Levenshtein edit distance, D_{path}(s_i, s_j) = \mathrm{EditDistance}(s_i, s_j). This captures mismatches in expert selection and pathway shifts.
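A small sketch of the pathway encoding and Levenshtein comparison described above (pure Python; the string encoding follows the description in B.2, and the distance routine is a textbook dynamic-programming implementation):

```python
def encode_pathway(experts_per_layer):
    """Encode selected expert indices per layer, e.g. [[3, 1, 5], [2, 0, 7]]
    -> '3,1,5-2,0,7', matching the description in B.2."""
    return "-".join(",".join(str(e) for e in layer)
                    for layer in experts_per_layer)

def levenshtein(a: str, b: str) -> int:
    """Standard dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (ca != cb))) # substitution
        prev = cur
    return prev[-1]

before = encode_pathway([[3, 1, 5], [2, 0, 7]])
after = encode_pathway([[3, 1, 6], [2, 0, 7]])
print(levenshtein(before, after))  # 1: one expert swapped
```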
C ADDITIONAL ANALYSIS

C.1 LAYER-WISE CONFIDENCE DISTRIBUTIONS ACROSS DIFFERENT TASKS

We visualize the confidence distributions across different tasks using DeepSeek-V2-Lite as an example. As illustrated in Figure 5, activation patterns across experts and layers are highly task-specific. For instance, math tasks (Math500) and code tasks (HumanEval) exhibit distinct confidence patterns across layers, with math tasks showing higher confidence in middle layers (layers 7-14) while code tasks demonstrate more distributed confidence patterns with peaks in later layers (layers 15-18). Notably, both tasks show generally higher confidence concentrated in the middle-to-later layers compared to early layers.

Figure 5: Layer-wise confidence distributions across different tasks in DeepSeek-V2-Lite.

C.2 SENSITIVITY ANALYSES OF HYPER-PARAMETERS

As shown in Figure 6, the optimization interval analysis across five benchmarks shows that our test-time rerouting consistently outperforms baseline performance. Intervals of 128-160 tokens achieve optimal performance, balancing routing adaptability with computational efficiency. Both frequent re-optimization (96 tokens) and no updates degrade performance, due to over-adaptation and loss of dynamic adjustment benefits, respectively. Mathematical reasoning tasks (GSM8K, MATH500) benefit most from proper interval tuning.

Figure 6: Effect of optimization interval on performance across 5 benchmarks. The dashed line denotes the baseline results without rerouting.

C.3 IMPACT ON SAMPLE DIVERSITY IN PARALLEL GENERATION

While our online adaptation method improves routing efficiency, we investigate whether the adapted routing decisions might reduce sample diversity when generating multiple sequences in parallel. To quantify this effect, we evaluate the diversity of generated samples using two complementary metrics: semantic diversity based on CodeBERT embeddings (Feng et al., 2020), which captures functional similarities between code snippets, and TF-IDF-based cosine diversity, which measures lexical variation. We used the DeepSeek MoE model with default settings to generate 10 samples on the HumanEval task. Table 6 shows that our method maintains comparable diversity to the baseline across both metrics. Cosine diversity scores are nearly identical (0.380 ± 0.150 vs. 0.390 ± 0.157), as are semantic diversity scores (0.012 ± 0.008 vs. 0.012 ± 0.007). These results demonstrate that our online adaptation preserves sample diversity while improving routing efficiency.

Table 6: Diversity evaluation results comparison.

Metric         Baseline        Ours
Cosine Div.    0.390 ± 0.157   0.380 ± 0.150
Semantic Div.  0.012 ± 0.007   0.012 ± 0.008

Algorithm 1 Test-Time MoE Rerouting
Input: Pre-trained MoE model M; input prompt x = (x_1, ..., x_t); optimization steps n; generation interval m; optimizer O.
Output: Generated response y.
  // Parameter Initialization
  Initialize δ^(l) = 0 ∈ R^N for all layers l ∈ L
  // Phase 1: In-Context Routing Optimization
  x_current = x, T = |x|
  Select layers S = TopK({C^(l)}_{l=1}^L, r) or compute soft weights {w_l}_{l=1}^L
  for i = 1 to n do
      Compute loss L({δ^(l)}_{l=1}^L) = -Σ_{j=1}^{T-1} log p(x_{j+1} | x_{1:j}, {δ^(l)}_{l=1}^L)
      if hard selection then
          for l ∈ S do δ^(l) ← O(δ^(l), ∇_{δ^(l)} L) end for
      else  // soft weighting
          for l = 1 to L do δ^(l) ← O(δ^(l), w_l ∇_{δ^(l)} L) end for
      end if
  end for
  // Phase 2: Steered Generation with Periodic Re-optimization
  Initialize y = ()
  repeat
      // Generate m tokens using the optimized routing parameters
      for k = 1 to m do
          Generate x_next ~ p(· | x_current, {δ^(l)}_{l=1}^L)
          Append x_next to y and x_current
      end for
      // Re-optimize with the extended context
      T = |x_current|
      Select layers S = TopK({C^(l)}_{l=1}^L, r) or compute soft weights {w_l}_{l=1}^L
      for i = 1 to n do
          Compute loss L({δ^(l)}_{l=1}^L) = -Σ_{j=1}^{T-1} log p(x_{current,j+1} | x_{current,1:j}, {δ^(l)}_{l=1}^L)
          Update each δ^(l) via the hard-selection or soft-weighting rule above
      end for
  until end-of-sequence or max length reached
  return y
REWIRING EXPERTS ON THE FLY: CONTINUOUS REROUTING FOR BETTER ONLINE ADAPTATION IN MIXTURE-OF-EXPERT MODELS Guinan Su1* Yanwu Yang4* Li Shen5 Lu Yin6 Shiwei Liu1,2,3 Jonas Geiping1,2,3 1Max Planck Institute for Intelligent Systems 2ELLIS Institute T ̈ubingen 3T ̈ubingen AI Center 4 ̈ubingen 5Sun Yat-sen University 6 -of-Experts (MoE) models achieve efficient scaling through sparse expert activation, but often suffer from suboptimal routing decisions due to distribution shifts in deployment. While existing test-time adaptation methods could potentially address these issues, they primarily focus on dense models and require access to external data, limiting their practical applicability to MoE architectures. However, we find that, instead of relying on reference data, we can optimize MoE expert selection on-the-fly based only on input context. As such, we propose a data-free, online test-time framework that continuously adapts MoE routing decisions during text generation without external supervision or data. Our method cycles between two phases: During the prefill stage, and later in regular intervals, we optimize the routing decisions of the model using self-supervision based on the already generated sequence. Then, we generate text as normal, maintaining the modified router until the next adaption. We implement this through lightweight additive vectors that only update router logits in selected layers, maintaining computational efficiency while preventing over-adaptation. The experimental results show consistent performance gains on challenging reasoning tasks while maintaining robustness to context shifts. For example, our method achieves a 5.5% improvement on HumanEval with OLMoE. Furthermore, owing to its plug-andplay property, our method naturally complements existing test-time scaling techniques, e.g., achieving 6% average gains when incorporated with self-consistency on DeepSeek-V2-Lite. 1 INTRODUCTION Mixture-of-Experts (MoE) models (Shazeer et al., 2017; Zhou et al., 2022; Jiang et al., 2024; Dai et al., 2024; Liu et al., 2024; Team, 2025; Muennighoff et al., 2024) provide an effective approach to scaling model capacity while maintaining computational efficiency by partitioning parameters into specialized experts and selectively activating subsets through routing mechanisms (Lepikhin et al., 2020; Fedus et al., 2022; Dai et al., 2024; Muennighoff et al., 2024). This functionality enables dynamic expert selection for diverse queries and creating inherently general-purpose systems that can store much more functionality and information than is used in every forward pass. However, despite their impressive capabilities, MoE models still face challenges when deployed in real-world environments (Aky ̈urek et al., 2024; Li et al., 2025a), as the expression of their capability hinges on the quality of their routing decisions, the activations of small linear layers that determine which parts of the model are activated. Why is routing hard? While the full MoE may store sufficient functionality to solve a particularly challenging query, this capacity is gated behind its routing decisions in each MoE layer, which in turn depend on the residual stream of the model, and so on earlier routing decisions. Routing decisions are only linear functions of the current hidden state that need to approximate the anticipated utility of activating a certain expert. 
A particular issue with this non-robustness of routing is that during standard inference there is no mechanism to reinforce the routing to a particularly successful part of the model, or to reduce routing to parts that did not contribute meaningful signal to the generated text. * Equal contribution. 1 16 Oct 2025 Figure 1: Test-time rerouting framework for MoE models. (a) Rerouting mechanism: lightweight additive vectors (Delta) update router logits in selected high-confidence layers using self-supervised loss from existing context. (b) Continuous adaptation: alternating between optimization phases that adapt routing decisions and generation phases that maintain adapted routing until the next optimization cycle. This makes routing optimization a question of model adaptation, reminiscent of neuroplasticity in humans - who continuously optimize routing and neuronal connections in the brain through adaptation and self-regulation, for example when continuing to practice a certain task. Yet, in MoEs, suboptimal expert selection leads to routing inefficiencies, creating a critical bottleneck in overall performance (Shi et al., 2024; Li et al., 2025a). And, while there has been a significant amount of work to improve the capabilities of MoEs in general, such as through test-time scaling, it is less clear how to best adapt MoEs to different tasks at inference. While standard approaches, such as in-context learning with task-specific demonstrations (Wei et al., 2022; Madaan et al., 2023) or parallel generation strategies that produce multiple candidates and aggregate them (Wang et al., 2022; Brown et al., 2024), do adapt the model overall, they influence routing only implicitly. Conceptually, adaptation should be solvable by test-time training, but existing approaches (Hardt & Sun; H ̈ubotter et al., 2024) focus on a classical prediction perspective, which retrieves relevant data from training sets or knowledge bases during inference to fine-tune models for dynamic scenarios before the model is being used. Li et al. (2025a) attempt to address router optimization through this perspective of test-time training, and, as such, use retrieval of "successful neighbors" from reference sets based on the first prompt in each context. However, this approach requires access to external reference data during deployment, incurs retrieval overhead, and risks failures in retrieval due to short prompts. Moreover the approach is static, as modern models execute test-time scaling, they generate long chains-of-thought, that should be taken into account when optimizing routing. In this regard, we propose a simple yet effective data-free test-time rerouting framework for MoE models as shown in Figure 1 that treats each input prompt as a self-supervised learning opportunity, enabling dynamic rerouting of pretrained MoE models for individual prompts during inference. The framework alternates in two phases: (1) In-Context Routing Optimization, where we regard the current context itself as a training sample and execute optimization steps to minimize cross-entropy loss on the current context with regard to routing logits; and (2) Steered Generation, where we generate text normally, steering routers with updates computed in the previous phase. This creates a dynamic feedback loop where the model continuously refines its understanding of task requirements based on its own generation progress, enabling increasingly informed expert routing decisions as generation proceeds. 
To maintain computational efficiency, we implement this progressive optimization through lightweight additive parameter vectors that update only the model's router logits of selected MoE layers, further reducing computational overhead and preventing over-adaptation. Overall, our main contributions are: • We propose a test-time rerouting framework specifically designed for MoE models, operating completely without external data dependencies or expensive retrieval mechanisms, using only backpropagation within the current context. • We introduce a lightweight parameter update mechanism that selectively optimizes router logits through additive vectors in high-confidence layers, enabling efficient expert selection adaptation during inference using steering. 2 • We validate our method's effectiveness through extensive experiments, demonstrating that our approach significantly improves MoE model performance on complex reasoning tasks, maintains robustness to context shifts in multi-turn conversations, and seamlessly integrates with existing test-time techniques, establishing a practical and versatile solution for deploying MoE models in real-world applications. Overall, we argue that routing changes are a compelling mechanism with which to adapt MoE models on the fly. The changes are lightweight, quick to compute and to apply (even in multi-user settings), and remain effective across context shifts, allowing MoEs a novel degree of plasticity that allows adaptation to task changes during deployment, paving the way toward practical continual self-regulation of MoE models during use. 2 RELATED WORK Mixture-Of-Experts (MoE). Sparsely activated Mixture-of-Expert models (MoE) are an efficient method for scaling model size by replacing the feed-forward networks (FFN) of transformer architectures with multiple experts (each its own FFN) and a gating function. This architecture dynamically activates different experts for each input token rather than utilizing all parameters (Shazeer et al., 2017; Lepikhin et al., 2020; Fedus et al., 2022). Recent MoE-based LLMs, including OLMoE (Muennighoff et al., 2024), DeepSeek (Dai et al., 2024; Liu et al., 2024), and Qwen2.5 (Team, 2025), adopt top-k expert routing to reduce the active parameter count during inference. These models achieve competitive performance while maintaining significantly lower memory costs. Test-Time Scaling. Test-Time Scaling enhances LLM capabilities by allocating additional computational resources during inference. These approaches fall into two categories (Welleck et al., 2024): Parallel generation, including self-consistency evaluations via multiple candidate responses (Wang et al., 2022), best-of-N sampling (Brown et al., 2024), and Monte Carlo Tree Search (Zhou et al., 2023; Xie et al., 2024), and sequential generation such as extending outputs through chain-ofthought reasoning (Wei et al., 2022; Madaan et al., 2023). Test-Time Training (TTT). TTT offers an alternative scaling approach. While successful in computer vision (Wang et al., 2020; Sun et al., 2020; 2024; Gandelsman et al., 2022; Osowiechi et al., 2023), recent works have extended TTT to language models through fine-tuning on retrieved neighbors (Hardt & Sun) or optimized data selection algorithms like SIFT (H ̈ubotter et al., 2024). TestTime Reinforcement Learning (TTRL) (Zuo et al., 2025) uses majority voting as reward signals. 
When applied to Mixture-of-Experts (MoE) models, recent work has explored expert-level interventions for behavior control, enabling precise modifications through targeted expert manipulation (Dahlke et al., 2025; Wang et al., 2025). Most relevant to our approach, Li et al. (2025a) optimize expert routing in MoE models using "successful neighbors" from reference sets. However, these methods assume accessible training data during deployment and introduce significant retrieval overhead, limiting real-world practicality.

3 METHODOLOGY

This section presents our data-free test-time rerouting framework for MoE models, which optimizes expert routing during inference without external information. After reviewing MoE fundamentals (Section 3.1), we introduce three key components: (1) Router Logits Modification (Section 3.2) for layer-specific expert-selection steering, (2) Dynamic Layer Selection (Section 3.3) for selective MoE-layer updates based on confidence scores, and (3) the Optimization Procedure (Section 3.4) detailing our two-phase strategy alternating between in-context routing optimization and steered generation.

3.1 PRELIMINARIES ON MIXTURE-OF-EXPERTS

In Transformer-based Mixture-of-Experts (MoE) models, the conventional feed-forward networks (FFNs) are replaced with MoE layers. Each MoE layer consists of a router R and a set of experts {E_i}_{i=1}^N. Given an input sequence x [...]

[...] later layers show large pathway changes (>0.35), while earlier ones (5-10) remain minimal (<0.05). MATH500 exhibits a similar but stronger trend, with peaks up to 0.16 in layers 20-22, suggesting that deeper layers are more involved in mathematical reasoning. The sample-wise distributions (right panels) further emphasize task-specific differences, indicating high variability in pathway changes across problems. Overall, these results highlight the heterogeneity of expert routing across tasks and demonstrate how our method adaptively reroutes pathways.

Rewiring highlights Task-Specific Experts. We further examine expert-level activation shifts on HumanEval and MATH500 before and after rerouting to understand how the model redistributes computation. Figure 2(b) shows heatmaps across layers and experts (red: increased, blue: decreased, normalized for visualization). The results reveal: (1) Adaptive targeting: changes concentrate on a subset of experts, indicating strategic rather than uniform redistribution; (2) Deep-layer adaptation: later layers (L21-L25) undergo stronger shifts, consistent with our edit-distance findings; and (3) Task-specific specialization: HumanEval and MATH500 exhibit distinct patterns, highlighting task-dependent routing behaviors. This selective activation shows that rerouting strategically emphasizes task-relevant experts, enhancing efficiency by focusing computation where it is most needed.

Rewiring Increases Router Confidence. Moreover, we quantify routing confidence by tracking the entropy of the expert-selection distributions across all layers during optimization. Lower entropy reflects more focused expert allocation, whereas higher entropy indicates more diffuse routing patterns. Figure 3 shows that our method (red line) exhibits a gradual decrease in router entropy over 18 generation steps, whereas the baseline (blue line) maintains a higher router entropy with pronounced fluctuations. This suggests that our approach progressively develops more focused routing decisions, while the baseline continues to rely on more diffuse expert-selection patterns.
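To make the confidence measure concrete, a minimal sketch of the router-entropy metric tracked in Figure 3 is given below; the tensor shapes, variable names, and toy data are illustrative assumptions rather than the paper's exact computation.

```python
# Hedged sketch: average expert-selection entropy across MoE layers.
# `probs_per_layer` is assumed to hold softmaxed router outputs, one tensor
# of shape (tokens, experts) per MoE layer; names here are illustrative.
import torch

def mean_router_entropy(probs_per_layer):
    """Average per-token routing entropy over all tokens and MoE layers."""
    ents = []
    for p in probs_per_layer:                        # p: (tokens, experts)
        ent = -(p * (p + 1e-12).log()).sum(dim=-1)   # per-token entropy
        ents.append(ent.mean())
    return torch.stack(ents).mean()                  # lower = more focused routing

# Toy usage: random routing distributions for 4 layers, 16 tokens, 8 experts.
layers = [torch.softmax(torch.randn(16, 8), dim=-1) for _ in range(4)]
print(mean_router_entropy(layers))
```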
This indicates that our method facilitates concentrating on relevant experts without disrupting established routing mechanisms, thereby enabling more efficient expert utilization for the given tasks.

Figure 3: Expert routing entropy as a function of sequence length, averaged over 16-token blocks.

Table 3: Computational efficiency comparison across different methods.

Method               | Total FLOPs | Time (s)
Baseline             | 4.71e+11    | 10.71
ICL (3-shot)         | 1.93e+12    | 12.17
ICL (5-shot)         | 3.05e+12    | 12.53
Self-Consistency (3) | 1.68e+12    | 34.20
C3PO (100-reference) | 2.63e+12    | 26.20
Rewiring (Ours)      | 1.96e+12    | 20.12

6.2 COMBINING REWIRING WITH OTHER TEST-TIME STRATEGIES

A key advantage of our approach is its plug-and-play compatibility with existing test-time techniques. Since our method only modifies routing decisions without altering the underlying generation process, it can be seamlessly integrated with other approaches such as in-context learning and parallel generation methods.

Synergy with In-Context Learning. Large pretrained language models demonstrate strong in-context learning, predicting labels from a few demonstration pairs without parameter updates, and empirical evidence suggests behavior similar to explicit fine-tuning (Dai et al., 2022; Akyürek et al., 2022; Von Oswald et al., 2023). We therefore test combining our method with 3-shot demonstrations. As shown in Table 4, this combination yields improvements across multiple tasks, even surpassing our method alone. We hypothesize this synergy occurs because our method provides more effective gradient updates that better leverage the contextual information from the demonstration examples, helping extract more meaningful patterns from the limited demonstration data.

Unlocking Better Self-Consistency. Self-consistency (Wang et al., 2022) improves reasoning by sampling multiple reasoning paths and aggregating them. We combine our method with self-consistency by first applying rerouting optimization and then generating multiple reasoning paths with the optimized routing decisions. For each sample, we generate 3 reasoning paths. For MMLU, MATH500, and GSM8K, we use majority voting to determine the final accuracy. For code generation tasks (MBPP, HumanEval), we report pass@3 scores. Table 4 shows substantial improvements, with an average 3-percentage-point gain over self-consistency alone. We hypothesize that our rerouting framework generates higher-quality reasoning chains by selecting more appropriate experts, and when self-consistency aggregates these improved paths, the voting mechanism amplifies the benefits.

6.3 EFFICIENCY ANALYSIS

Beyond performance improvements, we also compare the computational cost of our method with other online approaches to adapting models. Table 3 reports results on the HumanEval task with the DeepSeek model, showing the average total FLOPs and inference time per sample across different methods. As shown in Table 3, our method achieves notable computational savings over most baselines. Although it requires more computation than the vanilla baseline, it remains substantially more efficient than other test-time techniques, using 1.3× fewer FLOPs than C3PO and 1.6× fewer than ICL (5-shot), and is comparable to ICL (3-shot) and Self-Consistency (3). Despite the extra routing operations, our method maintains a competitive inference time of 20.12 s, indicating that the overhead of online adaptation is modest while still delivering improved performance.

Table 4: Performance comparison of individual test-time methods versus combined approaches. Online optimization of routing decisions with our rewiring algorithm can be reliably combined with other techniques, such as in-context learning (ICL) or self-consistency.
Method                          | HumanEval | MBPP  | GSM8K | MATH500 | MMLU  | Average
Baseline                        | 50.60     | 58.37 | 72.10 | 22.60   | 50.77 | 50.89
Rewiring (Ours)                 | 54.26     | 62.65 | 73.62 | 25.00   | 52.40 | 53.59
In-Context Learning:
ICL (3-shot)                    | 52.44     | 56.81 | 71.10 | 21.60   | 44.33 | 49.26
ICL + Rewiring                  | 53.05     | 62.65 | 76.00 | 27.00   | 46.53 | 53.05
Self-Consistency:
Self-Consistency (3)            | 51.02     | 70.04 | 75.28 | 26.20   | 51.87 | 54.88
Self-Consistency (3) + Rewiring | 55.08     | 71.21 | 77.54 | 27.40   | 54.20 | 57.09

Table 5: Performance of our method versus the baseline on AIME datasets using the GPT-OSS-20B model. Rerouting optimization is especially noticeable in improving model certainty, as shown by improved performance at lower pass@k and improved majority voting, even on challenging benchmarks like AIME.

Data / Method            | Pass@2 | Pass@4 | Pass@8 | Maj@2 | Maj@4 | Maj@8 | Average
AIME25 Baseline          | 83.21  | 87.90  | 90.00  | 65.36 | 71.76 | 76.67 | 74.29
AIME25 Rewiring (Ours)   | 83.81  | 86.19  | 86.67  | 67.50 | 76.67 | 83.33 | 75.65
AIME24 Baseline          | 78.57  | 83.76  | 86.67  | 60.60 | 67.48 | 70.00 | 69.58
AIME24 Rewiring (Ours)   | 81.67  | 84.95  | 86.67  | 65.83 | 72.57 | 80.00 | 73.75

Conceptually, our method requires n additional prefill passes on already generated text every m tokens, which, given the ease of parallelizing prefill, is a manageable compute increase. In this work, we implement this optimization in a straightforward manner, but a production implementation could be significantly faster by incorporating the optimization into disaggregated-prefill systems, or by timing the MoE rewiring event with low-load timespans, for example while waiting for a user to respond. The actual routing changes are self-contained in the routing parameters δ (shaped as number of experts by number of layers), so on-device storage is feasible, even for many separate conversations.

6.4 EXTENSION TO LONG-REASONING TASKS

To evaluate the generalizability of our approach to modern reasoning models, we also test GPT-OSS-20B (Agarwal et al., 2025). We evaluate on the AIME benchmark (MAA Committees), a challenging mathematics competition dataset that requires sophisticated multi-step derivations and mathematical reasoning. This setting allows us to assess whether our online rerouting method can improve performance on the demanding reasoning tasks that represent the current frontier of AI capabilities. As shown in Table 5, our method improves performance on both AIME datasets, achieving higher average correctness (75.65% vs. 74.29% on AIME25; 73.75% vs. 69.58% on AIME24). The primary improvements occur in Maj@k metrics rather than Pass@k, suggesting that our routing optimization enhances reasoning consistency rather than solution diversity.
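For reference, the Pass@k and Maj@k numbers in Table 5 can be computed as sketched below, using the standard unbiased pass@k estimator (Chen et al., 2021) and plain majority voting. The function names and toy values are illustrative assumptions, not the paper's evaluation code.

```python
# Hedged sketch of the Table 5 metrics: unbiased pass@k and majority voting.
from math import comb
from collections import Counter

def pass_at_k(n: int, c: int, k: int) -> float:
    """Probability that at least one of k draws from n samples (c correct) passes."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

def maj_at_k(answers, gold) -> float:
    """1.0 if the most frequent of the k sampled answers equals the gold answer."""
    majority, _ = Counter(answers).most_common(1)[0]
    return float(majority == gold)

# Toy usage: 8 samples per problem, 3 of them correct, evaluated at k = 4.
print(pass_at_k(n=8, c=3, k=4))                 # ~0.93
print(maj_at_k(["42", "41", "42", "7"], "42"))  # 1.0
```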
6.5 ROBUSTNESS TO CONTEXT SHIFTS IN MULTI-TURN SCENARIOS

In real-world applications, MoE models often encounter multi-turn conversations where contexts shift dramatically between different topics or tasks. To assess the robustness of our method under realistic multi-turn contexts, we simulate context shifts by prepending few-shot examples from different domains before the target task. Figure 4 presents results on HumanEval and MATH500 using DeepSeek-V2-Lite, under two conditions: (1) Aligned-task, where few-shot examples are from the same task domain, and (2) Shifted-task, where examples are drawn from unrelated domains (e.g., MATH and MMLU for code tasks) to induce cross-domain shifts.

Figure 4: Performance of our method versus the baseline across different few-shot examples under shifted and aligned task contexts.

Across both benchmarks, our method (Ours-Shifted, Ours-Aligned) consistently outperforms the baselines (Base-Shifted, Base-Aligned). On HumanEval, all methods benefit from more few-shot examples, but our approach shows a stable improvement. In contrast, baseline gains remain modest, particularly in the Shifted-task setting, where performance fluctuates. On MATH500, our method maintains robust and consistent results across both domains, while the baselines exhibit limited or even inconsistent improvements. These findings highlight the robustness and scalability of our test-time rerouting strategy in handling diverse contextual information.

7 CONCLUSION

In this work, we introduce a novel test-time rerouting approach that enables MoE models to dynamically adapt expert selection on the fly, without requiring external data or costly retrieval. The method alternates between routing optimization and steered generation, forming a feedback loop that progressively improves expert selection. To reduce computational overhead, we employ lightweight additive vectors that update only the logits of selected routers. Extensive experiments show that our approach effectively compensates for the inherent imperfections in MoE routing, yielding consistent gains across multiple benchmarks (up to 6.7% on code generation) with 1.6× fewer FLOPs than few-shot methods, while maintaining robustness to context shifts. As a plug-and-play regularization strategy, the method flexibly combines with complementary techniques (e.g., self-consistency) to further amplify the benefits. More importantly, by introducing a new dimension of plasticity into MoEs, it opens the door to deployment-time adaptation and points toward practical continual self-regulation in MoE models.

ACKNOWLEDGEMENTS

JG acknowledges the support of the Hector II foundation. GS acknowledges the support of the International Max Planck Research School for Intelligent Systems (IMPRS-IS).

REFERENCES

Sandhini Agarwal, Lama Ahmad, Jason Ai, Sam Altman, Andy Applebaum, Edwin Arbus, Rahul K Arora, Yu Bai, Bowen Baker, Haiming Bao, et al. gpt-oss-120b & gpt-oss-20b model card. arXiv preprint, 2025.

Ekin Akyürek, Dale Schuurmans, Jacob Andreas, Tengyu Ma, and Denny Zhou. What learning algorithm is in-context learning? Investigations with linear models. arXiv preprint, 2022.

Ekin Akyürek, Mehul Damani, Linlu Qiu, Han Guo, Yoon Kim, and Jacob Andreas. The surprising effectiveness of test-time training for abstract reasoning. arXiv preprint, 2024.

Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. Program synthesis with large language models. arXiv preprint, 2021.

Bradley Brown, Jordan Juravsky, Ryan Ehrlich, Ronald Clark, Quoc V Le, Christopher Ré, and Azalia Mirhoseini. Large language monkeys: Scaling inference compute with repeated sampling. arXiv preprint, 2024.

Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint, 2021.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint, 2021.

Robert Dahlke, Henrik Klagges, Dan Zecha, Benjamin Merkel, Sven Rohr, and Fabian Klemm.
Mixture of tunable experts: Behavior modification of DeepSeek-R1 at inference time. arXiv preprint, 2025.

Damai Dai, Yutao Sun, Li Dong, Yaru Hao, Shuming Ma, Zhifang Sui, and Furu Wei. Why can GPT learn in-context? Language models implicitly perform gradient descent as meta-optimizers. arXiv preprint, 2022.

Damai Dai, Chengqi Deng, Chenggang Zhao, RX Xu, Huazuo Gao, Deli Chen, Jiashi Li, Wangding Zeng, Xingkai Yu, Yu Wu, et al. DeepSeekMoE: Towards ultimate expert specialization in mixture-of-experts language models. arXiv preprint, 2024.

William Fedus, Barret Zoph, and Noam Shazeer. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. Journal of Machine Learning Research, 23(120):1-39, 2022.

Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, et al. CodeBERT: A pre-trained model for programming and natural languages. arXiv preprint, 2020.

Yossi Gandelsman, Yu Sun, Xinlei Chen, and Alexei Efros. Test-time training with masked autoencoders. Advances in Neural Information Processing Systems, 35:29374-29385, 2022.

Aryo Pradipta Gema, Joshua Ong Jun Leang, Giwon Hong, Alessio Devoto, Alberto Carlo Maria Mancino, Rohit Saxena, Xuanli He, Yu Zhao, Xiaotang Du, Mohammad Reza Ghasemi Madani, Claire Barale, Robert McHardy, Joshua Harris, Jean Kaddour, Emile van Krieken, and Pasquale Minervini. Are we done with MMLU? 2024.

Moritz Hardt and Yu Sun. Test-time training on nearest neighbors for large language models, 2024. URL https://arxiv.org/abs/2305.18466.

Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the MATH dataset, 2021. URL https://arxiv.org/abs/2103.03874.

Jonas Hübotter, Sascha Bongni, Ido Hakimi, and Andreas Krause. Efficiently learning at test-time: Active fine-tuning of LLMs. arXiv preprint, 2024.

Albert Q Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, et al. Mixtral of experts. arXiv preprint, 2024.

Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam Shazeer, and Zhifeng Chen. GShard: Scaling giant models with conditional computation and automatic sharding. arXiv preprint, 2020.

Zhongyang Li, Ziyue Li, and Tianyi Zhou. C3PO: Critical-layer, core-expert, collaborative pathway optimization for test-time expert re-mixing. arXiv preprint, 2025a.

Ziyue Li, Chenrui Fan, and Tianyi Zhou. Where to find grokking in LLM pretraining? Monitor memorization-to-generalization without test. arXiv preprint, 2025b.

Hunter Lightman, Vineet Kosaraju, Yuri Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let's verify step by step. In The Twelfth International Conference on Learning Representations, 2023.

Aixin Liu, Bei Feng, Bin Wang, Bingxuan Wang, Bo Liu, Chenggang Zhao, Chengqi Deng, Chong Ruan, Damai Dai, Daya Guo, et al. DeepSeek-V2: A strong, economical, and efficient mixture-of-experts language model. arXiv preprint, 2024.

MAA Committees. AIME problems and solutions. https://artofproblemsolving.com/wiki/index.php/AIME_Problems_and_Solutions.

Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al.
Self-refine: Iterative refinement with self-feedback. Advances in Neural Information Processing Systems, 36:46534-46594, 2023.

Niklas Muennighoff, Luca Soldaini, Dirk Groeneveld, Kyle Lo, Jacob Morrison, Sewon Min, Weijia Shi, Pete Walsh, Oyvind Tafjord, Nathan Lambert, et al. OLMoE: Open mixture-of-experts language models. arXiv preprint, 2024.

David Osowiechi, Gustavo A Vargas Hakim, Mehrdad Noori, Milad Cheraghalikhani, Ismail Ben Ayed, and Christian Desrosiers. TTTFlow: Unsupervised test-time training with normalizing flow. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 2126-2134, 2023.

Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint, 2017.

Chufan Shi, Cheng Yang, Xinyu Zhu, Jiahao Wang, Taiqiang Wu, Siheng Li, Deng Cai, Yujiu Yang, and Yu Meng. Unchosen experts can contribute too: Unleashing MoE models' power by self-contrast. Advances in Neural Information Processing Systems, 37:136897-136921, 2024.

Yu Sun, Xiaolong Wang, Zhuang Liu, John Miller, Alexei Efros, and Moritz Hardt. Test-time training with self-supervision for generalization under distribution shifts. In International Conference on Machine Learning, pp. 9229-9248. PMLR, 2020.

Yu Sun, Xinhao Li, Karan Dalal, Jiarui Xu, Arjun Vikram, Genghan Zhang, Yann Dubois, Xinlei Chen, Xiaolong Wang, Sanmi Koyejo, et al. Learning to (learn at test time): RNNs with expressive hidden states. arXiv preprint, 2024.

Qwen Team. Qwen2 technical report. arXiv preprint, 2024.

Qwen Team. Qwen3: Think deeper, act faster, 2025. URL https://qwenlm.github.io/blog/qwen3/.

Johannes Von Oswald, Eyvind Niklasson, Ettore Randazzo, João Sacramento, Alexander Mordvintsev, Andrey Zhmoginov, and Max Vladymyrov. Transformers learn in-context by gradient descent. In International Conference on Machine Learning, pp. 35151-35174. PMLR, 2023.

Dequan Wang, Evan Shelhamer, Shaoteng Liu, Bruno Olshausen, and Trevor Darrell. Tent: Fully test-time adaptation by entropy minimization. arXiv preprint, 2020.

Mengru Wang, Xingyu Chen, Yue Wang, Zhiwei He, Jiahao Xu, Tian Liang, Qiuzhi Liu, Yunzhi Yao, Wenxuan Wang, Ruotian Ma, et al. Two experts are all you need for steering thinking: Reinforcing cognitive effort in MoE reasoning models without additional training. arXiv preprint, 2025.

Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. arXiv preprint, 2022.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824-24837, 2022.

Sean Welleck, Amanda Bertsch, Matthew Finlayson, Hailey Schoelkopf, Alex Xie, Graham Neubig, Ilia Kulikov, and Zaid Harchaoui. From decoding to meta-generation: Inference-time algorithms for large language models. arXiv preprint, 2024.

Yuxi Xie, Anirudh Goyal, Wenyue Zheng, Min-Yen Kan, Timothy P Lillicrap, Kenji Kawaguchi, and Michael Shieh. Monte Carlo tree search boosts reasoning via iterative preference learning. arXiv preprint, 2024.

Andy Zhou, Kai Yan, Michal Shlapentokh-Rothman, Haohan Wang, and Yu-Xiong Wang. Language agent tree search unifies reasoning, acting, and planning in language models.
arXiv preprint, 2023.

Yanqi Zhou, Tao Lei, Hanxiao Liu, Nan Du, Yanping Huang, Vincent Zhao, Andrew M Dai, Quoc V Le, James Laudon, et al. Mixture-of-experts with expert choice routing. Advances in Neural Information Processing Systems, 35:7103-7114, 2022.

Yuxin Zuo, Kaiyan Zhang, Li Sheng, Shang Qu, Ganqu Cui, Xuekai Zhu, Haozhan Li, Yuchen Zhang, Xinwei Long, Ermo Hua, et al. TTRL: Test-time reinforcement learning. arXiv preprint, 2025.

A EXPERIMENTAL SETTINGS

A.1 BENCHMARKS

MMLU-Redux is a manually re-annotated subset of the original MMLU benchmark designed to address quality issues in the dataset. The dataset contains 3,000 questions across 30 MMLU subjects (100 questions per subject), with expert annotators identifying and categorizing various types of errors using a comprehensive error taxonomy. These errors include issues such as bad question clarity, unclear options, no correct answers, multiple correct answers, and wrong ground-truth labels. MMLU-Redux provides a more reliable evaluation standard by filtering out problematic questions and offering corrected annotations where possible.

HumanEval is a benchmark dataset for evaluating the code generation capabilities of large language models. The dataset consists of 164 hand-crafted programming problems, each including a function signature, docstring, body, and several unit tests (averaging 7.7 tests per problem). These challenges assess language comprehension, algorithms, and simple mathematics, with difficulty comparable to simple software interview questions.

MBPP is a code generation benchmark consisting of around 1,000 crowd-sourced Python programming problems designed to be solvable by entry-level programmers. Each problem includes a task description, code solution, and 3 automated test cases, covering programming fundamentals and standard library functionality. The dataset provides two versions: a full version with 974 problems and a sanitized version with 427 problems. The sanitized split underwent a second round of annotation to improve task descriptions, addressing issues where the original descriptions might not be sufficiently expressive to solve the tasks. This hand-verified subset provides higher-quality problem statements for more reliable evaluation of code generation models. In this paper, we use the sanitized version.

GSM8K is a dataset of 8,500 high-quality, linguistically diverse grade school math word problems created by human problem writers. The dataset is segmented into 7,473 training problems and 1,319 test problems. Each problem takes between 2 and 8 steps to solve using basic arithmetic operations, with problems designed so that a bright middle school student should be able to solve every problem. Solutions are provided in natural-language format rather than pure mathematical expressions, offering insight into multi-step reasoning processes.

MATH-500 is a curated subset of 500 challenging mathematical problems selected from the MATH dataset (Hendrycks et al., 2024). These problems span seven topics: Prealgebra, Algebra, Number Theory, Counting and Probability, Geometry, Intermediate Algebra, and Precalculus. Each problem requires multi-step reasoning and is designed to test a model's ability to apply mathematical principles, execute complex calculations, and communicate solutions clearly.

A.2 BASELINES

C3PO. We adopt C3PO (Critical-Layer, Core-Expert, Collaborative Pathway Optimization) (Li et al., 2025a) as our primary baseline method.
C3PO is a test-time optimization approach designed to address the suboptimal expert pathway selection problem in Mixture-of-Experts (MoE) large language models. The method is motivated by the observation that end-to-end trained routers often produce inefficient pathways for challenging or out-of-distribution samples, leading to degraded performance on diverse downstream tasks.

The core idea of C3PO is to dynamically re-mix expert pathways during inference by leveraging successful routing patterns from a reference set. Given a reference set of m samples {(x_i, y_i)}_{i=1}^m with their corresponding expert pathway matrices {ω_i}_{i=1}^m (where each ω_i ∈ R^{L×E}, with L layers and E experts) on which the model makes correct predictions, C3PO aims to find an improved pathway matrix ω for a new test sample x that leads to more accurate outputs.

Among the three optimization strategies proposed by C3PO (gradient descent, kernel regression, and mode finding), we implement the kernel regression approach in our experiments. This method estimates optimal expert pathways by computing a weighted average of the neighbors' pathway matrices:

ω̂ = ( Σ_{i∈N(x)} K(x_i, x) ω_i ) / ( Σ_{i∈N(x)} K(x_i, x) )    (8)

where K(·, ·) is a kernel function measuring sample similarity, and N(x) denotes the neighborhood of x in the reference set. The final pathway is obtained through interpolation:

ω ← α ω + (1 − α) ω̂    (9)

where α is optimally chosen to minimize the loss function.

For our comparative evaluation, we construct reference sets for each benchmark as follows: for HumanEval and MBPP, we use the MBPP validation set; for GSM8K and MATH500, we utilize the GSM8K training set; and for MMLU, we employ the MMLU training set. From each source, we randomly sample 100 instances on which the model produces correct predictions, ensuring high-quality pathway references for the optimization process.

In-Context Learning. In-context learning (ICL) is a fundamental capability of large language models that enables them to adapt to new tasks by using a few demonstration examples provided in the input prompt, without requiring parameter updates (Wang et al., 2022; Wei et al., 2022). In our experimental setup, we implement ICL as a baseline across all evaluation benchmarks. The few-shot examples are carefully selected from relevant datasets to ensure domain alignment and high-quality demonstrations:

• HumanEval and MBPP: We sample 3-5 examples from the MBPP prompt collection, which provides well-crafted code generation examples with clear problem descriptions and corresponding Python solutions.

• GSM8K: We utilize 3-5 examples from the GSM8K training set, featuring step-by-step mathematical reasoning demonstrations that guide the model through problem-solving processes.

• MATH500: We sample 3-5 examples from the HYDRA-Math dataset, which offers high-quality mathematical problem-solution pairs covering various difficulty levels and mathematical domains.

• MMLU: We select 3-5 examples from the MMLU validation set, ensuring coverage of diverse knowledge domains while maintaining format consistency for multiple-choice questions.

For each benchmark, the few-shot examples are randomly sampled from their respective source datasets and prepended to each test query. This approach provides the model with task-specific context while maintaining consistency across different evaluation scenarios. The number of examples (3-5) is chosen to balance providing sufficient context against excessive prompt length that might degrade model performance.
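A minimal sketch of the kernel-regression variant in Eqs. (8)-(9) is given below. The RBF kernel, the fixed α, the neighborhood size, and all names are illustrative assumptions (C3PO tunes α and may use a different similarity function); only the weighted averaging and interpolation follow the equations above.

```python
# Hedged sketch of C3PO-style kernel regression over reference pathways.
import numpy as np

def c3po_kernel_regression(omega, x, ref_embs, ref_pathways,
                           alpha=0.5, n_neighbors=5, gamma=1.0):
    """Re-mix pathway matrix omega (L, E) for a test sample embedding x (d,).

    ref_embs:     (m, d) embeddings of correctly solved reference samples
    ref_pathways: (m, L, E) their pathway matrices omega_i
    """
    # Kernel similarities K(x_i, x); a Gaussian (RBF) kernel is assumed here.
    k = np.exp(-gamma * ((ref_embs - x) ** 2).sum(axis=1))
    idx = np.argsort(-k)[:n_neighbors]                       # neighborhood N(x)
    w = k[idx] / k[idx].sum()
    omega_hat = np.tensordot(w, ref_pathways[idx], axes=1)   # Eq. (8): (L, E)
    return alpha * omega + (1.0 - alpha) * omega_hat         # Eq. (9)

# Toy usage: 20 references, 16-dim embeddings, 4 layers, 8 experts.
rng = np.random.default_rng(0)
omega_new = c3po_kernel_regression(rng.random((4, 8)), rng.normal(size=16),
                                   rng.normal(size=(20, 16)),
                                   rng.random((20, 4, 8)))
print(omega_new.shape)  # (4, 8)
```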
A.3 MODEL SELECTION

OLMoE is a fully open-source MoE language model developed by the Allen Institute for AI and Contextual AI (Muennighoff et al., 2024). The model employs a decoder-only transformer architecture with 1 billion active and 7 billion total parameters. Each MoE layer contains 64 experts, of which 8 are activated per input token through a learned router network. This sparse activation mechanism enables computational efficiency similar to dense 1B-parameter models while leveraging the full 7B-parameter capacity.

DeepSeek-V2-Lite is a smaller variant of the DeepSeek-V2 model family, developed by DeepSeek-AI (Liu et al., 2024). The model employs innovative architectures, including Multi-head Latent Attention (MLA) and DeepSeekMoE, with 15.7 billion total parameters and 2.4 billion activated parameters per token. DeepSeek-V2-Lite has 27 layers with a hidden dimension of 2048, utilizing MLA with 16 attention heads, where each head has a dimension of 128. The model adopts the DeepSeekMoE architecture, where all feed-forward networks except the first layer are replaced with MoE layers. Each MoE layer consists of 2 shared experts and 64 routed experts, with 6 experts activated for each token. This sparse activation mechanism enables computational efficiency while maintaining strong performance across diverse tasks.

Qwen1.5-MoE is developed by the Qwen team (Team, 2024). The model is upcycled from the dense Qwen-1.8B model, featuring 14.3 billion total parameters with 2.7 billion activated parameters during runtime. Despite using only 2.7B active parameters, the model achieves performance comparable to Qwen1.5-7B while requiring 75% fewer training resources and demonstrating 1.74× faster inference speed. The model employs a fine-grained expert architecture with 64 experts in total, consisting of 4 shared experts that are always activated and 60 routing experts with 4 activated per token. This configuration represents an 8-fold increase in expert count compared to conventional MoE setups, enabling higher model capacity without proportional parameter increases. The fine-grained expert design partitions a single FFN into multiple segments, each serving as an individual expert, allowing for more specialized knowledge representation.

GPT-OSS-20B is OpenAI's recently released open-source MoE model with 21B total parameters but only 3.6B active at inference (Agarwal et al., 2025). Built on a Mixture-of-Experts architecture with 24 layers and 32 experts using top-4 routing, it represents a state-of-the-art reasoning model that achieves performance similar to OpenAI o3-mini on reasoning benchmarks. Its sophisticated expert routing system and focus on mathematical reasoning make it an ideal candidate for evaluating our test-time rerouting method on challenging tasks such as AIME.

B METHOD DETAILS

B.1 TEST-TIME MOE REROUTING

We elaborate the rerouting algorithmic pipeline in Algorithm 1, which details the step-by-step procedure for adaptive expert selection and pathway calibration at test time.

B.2 PATHWAY DIFFERENCES

We first examine how expert pathways change after rerouting using edit distance, following (Li et al., 2025b). For input x_i, we define its pathway s_i as the ordered sequence of selected experts across L MoE layers, s_i = concat(e_1^(i), e_2^(i), ..., e_L^(i)), where e_l^(i) represents the expert indices at layer l as a comma-separated string (e.g., '3,1,5'), joined across layers with hyphens.
We quantify pathway differences using the Levenshtein edit distance, D_path(s_i, s_j) = EditDistance(s_i, s_j). This captures mismatches in expert selection and pathway shifts.

C ADDITIONAL ANALYSIS

C.1 LAYER-WISE CONFIDENCE DISTRIBUTIONS ACROSS DIFFERENT TASKS

We visualize the confidence distributions across different tasks using DeepSeek-V2-Lite as an example. As illustrated in Figure 5, activation patterns across experts and layers are highly task-specific. For instance, math tasks (MATH500) and code tasks (HumanEval) exhibit distinct confidence patterns across layers, with math tasks showing higher confidence in middle layers (layers 7-14) while code tasks demonstrate more distributed confidence patterns with peaks in later layers (layers 15-18). Notably, both tasks show generally higher confidence concentrated in the middle-to-later layers compared to early layers.

Figure 5: Layer-wise confidence distributions across different tasks in DeepSeek-V2-Lite-MoE.

C.2 SENSITIVITY ANALYSES OF HYPER-PARAMETERS

As shown in Figure 6, the optimization-interval analysis across five benchmarks shows that our test-time rerouting consistently outperforms baseline performance. Intervals of 128-160 tokens achieve optimal performance, balancing routing adaptability with computational efficiency. Both frequent re-optimization (96 tokens) and no updates degrade performance, due to over-adaptation and loss of dynamic-adjustment benefits, respectively. Mathematical reasoning tasks (GSM8K, MATH500) benefit most from proper interval tuning.

Figure 6: Effect of the optimization interval on performance across 5 benchmarks. The dashed line denotes the baseline results without rerouting.

C.3 IMPACT ON SAMPLE DIVERSITY IN PARALLEL GENERATION

While our online adaptation method improves routing efficiency, we investigate whether the adapted routing decisions might reduce sample diversity when generating multiple sequences in parallel. To quantify this effect, we evaluate the diversity of generated samples using two complementary metrics: semantic diversity based on CodeBERT embeddings (Feng et al., 2020), which captures functional similarities between code snippets, and TF-IDF-based cosine diversity, which measures lexical variation. We used the DeepSeek MoE model with default settings to generate 10 samples on the HumanEval task. Table 6 shows that our method maintains diversity comparable to the baseline across both metrics. Cosine diversity scores are nearly identical (0.380 ± 0.150 vs. 0.390 ± 0.157), as are semantic diversity scores (0.012 ± 0.008 vs. 0.012 ± 0.007). These results demonstrate that our online adaptation preserves sample diversity while improving routing efficiency.

Table 6: Diversity evaluation results comparison.

Metric        | Baseline      | Ours
Cosine Div.   | 0.390 ± 0.157 | 0.380 ± 0.150
Semantic Div. | 0.012 ± 0.007 | 0.012 ± 0.008

Algorithm 1 Test-Time MoE Rerouting
Input: Pre-trained MoE model M; input prompt x = (x_1, ..., x_n); optimization steps n; generation interval m; optimizer O.
Output: Generated response y.
// Parameter Initialization
Initialize δ^(l) = 0 ∈ R^N for all MoE layers l;
// Phase 1: In-Context Routing Optimization
x_current = x, T = |x|;
Select layers S = TopK({C^(l)}_{l=1}^L, r) or compute soft weights {w_l}_{l=1}^L;
for i = 1 to n do
    Compute loss L({δ^(l)}_{l=1}^L) = −Σ_{j=1}^{T−1} log p(x_{j+1} | x_{1:j}, {δ^(l)}_{l=1}^L);
    if hard selection then
        for l ∈ S do δ^(l) ← O(δ^(l), ∇_{δ^(l)} L); end for
    else // soft weighting
        for l = 1 to L do δ^(l) ← O(δ^(l), w_l ∇_{δ^(l)} L); end for
    end if
end for
// Phase 2: Steered Generation with Periodic Re-optimization
Initialize y = ();
repeat
    // Generate m tokens using the optimized routing parameters
    for k = 1 to m do
        Generate x_next ∼ p(· | x_current, {δ^(l)}_{l=1}^L);
        Append x_next to y and x_current;
    end for
    // Re-optimize with the extended context
    T = |x_current|;
    Select layers S = TopK({C^(l)}_{l=1}^L, r) or compute soft weights {w_l}_{l=1}^L;
    for i = 1 to n do
        Compute loss L({δ^(l)}_{l=1}^L) = −Σ_{j=1}^{T−1} log p(x_current,j+1 | x_current,1:j, {δ^(l)}_{l=1}^L);
        if hard selection then
            for l ∈ S do δ^(l) ← O(δ^(l), ∇_{δ^(l)} L); end for
        else // soft weighting
            for l = 1 to L do δ^(l) ← O(δ^(l), w_l ∇_{δ^(l)} L); end for
        end if
    end for
until end-of-sequence or max length reached;
return y;
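To illustrate the pathway comparison of Appendix B.2, the sketch below serializes per-layer expert selections in the described format and computes the Levenshtein distance D_path with a standard dynamic program. The function names and toy data are illustrative; the plain character-level edit distance here is a stand-in under those assumptions.

```python
# Hedged sketch of the Appendix B.2 pathway comparison.
def serialize_pathway(experts_per_layer):
    """E.g., [[3, 1, 5], [0, 2, 4]] -> '3,1,5-0,2,4' (layers joined by hyphens)."""
    return "-".join(",".join(str(e) for e in layer) for layer in experts_per_layer)

def levenshtein(a: str, b: str) -> int:
    """Classic O(len(a)*len(b)) edit distance between two pathway strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# Toy usage: pathways before and after rerouting for a 3-layer model.
s_before = serialize_pathway([[3, 1, 5], [0, 2, 4], [7, 7, 1]])
s_after = serialize_pathway([[3, 1, 5], [0, 6, 4], [2, 7, 1]])
print(levenshtein(s_before, s_after))  # D_path(s_i, s_j)
```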
ACCEPTED BY IEEE COMMUNICATIONS SURVEYS & TUTORIALS FOR PUBLICATION, DOI: 10.1109/COMST.2025.3623258

Through-the-Earth Magnetic Induction Communication and Networking: A Comprehensive Survey

Honglei Ma, Erwu Liu, Senior Member, IEEE, Wei Ni, Fellow, IEEE, Zhijun Fang, Rui Wang, Yongbin Gao, Dusit Niyato, Fellow, IEEE, and Ekram Hossain, Fellow, IEEE

Abstract—Magnetic induction (MI) communication (MIC) has emerged as a promising candidate for underground communication networks due to its excellent penetration capabilities. Integration with Space-Air-Ground-Underground (SAGUI) networks in next-generation mobile communication systems requires a well-defined network architecture. A recent discovery in MIC research, MI fast fading, remains in its early stages and presents unique challenges. This paper provides a comprehensive survey on through-the-earth (TTE) MIC, covering MI applications, channel modeling, point-to-point MIC design, relay techniques, network frameworks, and emerging technologies. We compare various MIC applications to highlight TTE-specific challenges and review the principles of channel modeling, addressing both MI slow fading and MI fast fading, along with its potential impact on existing MIC theories. We conduct a fine-grained decomposition of the MI channel power gain into four distinct physical parameters, and propose a novel geometric model to analyze MI fast fading. We also summarize MI relay techniques, examine crosstalk effects in relay and high-density networks, and explore key research tasks within the OSI framework for a holistic MI network protocol in SAGUI. To bridge the gaps identified, we propose a MIC framework that supports TCP/IP and Linux, enabling full implementation of existing and emerging MIC solutions. This framework empowers researchers to leverage Linux resources and deep learning platforms for accelerated development of MIC in SAGUI networks. Remaining research challenges, open issues, and promising novel techniques are further identified to advance MIC research.

Index Terms—Magnetic induction (MI), underground wireless communication, through-the-earth (TTE), fast fading, network architecture, TCP/IP.

This work is supported in part by grants from the National Science Foundation of China (Nos. 42171404, 62271352), in part by the Shanghai Engineering Research Center for Blockchain Applications And Services (No. 19DZ2255100), in part by Seatrium New Energy Laboratory, Singapore Ministry of Education (MOE) Tier 1 (RT5/23 and RG24/24), the NTU Centre for Computational Technologies in Finance (NTU-CCTF), and the RIE2025 Industry Alignment Fund-Industry Collaboration Projects (IAF-ICP) (Award I2301E0026), administered by A*STAR, and in part by the Fundamental Research Funds for the Central Universities under Grant 22120250094.

H. Ma, Z. Fang, and Y. Gao are with the School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai, China (e-mail: holyma@yeah.net, Zjfang@gmail.com, gaoyongbin@sues.edu.cn). E. Liu and R. Wang are with the College of Electronic and Information Engineering, Tongji University, Shanghai, China. R. Wang is also with the Shanghai Institute of Intelligent Science and Technology, Tongji University, Shanghai, China (e-mail: erwu.liu@ieee.org, ruiwang@tongji.edu.cn). W. Ni is with the School of Engineering, Edith Cowan University, Perth, WA 6027, and the School of Computer Science and Engineering, The University of New South Wales, Sydney, NSW 2033, Australia (e-mail: wei.ni@ieee.org). D.
Niyato is with the College of Computing and Data Science, Nanyang Technological University, Singapore 639798 (e-mail: dniyato@ntu.edu.sg). E. Hossain is with the Department of Electrical and Computer Engineering, University of Manitoba, Winnipeg, Manitoba, Canada (e-mail: Ekram.Hossain@umanitoba.ca). (Corresponding author: E. Liu)

TABLE I
ACRONYMS AND DEFINITIONS

Acronym | Full name / Definition
✸ / ✸✸ / ✸✸✸ | Low / Medium / High (priority used in tables and figures)
AF | Amplify-and-forward
AG | Aboveground
APO | Antenna position and orientation
AUV | Autonomous underwater vehicle
AVI | Antenna vibration intensity
BAN | Body area network
BCS | Boundary Chi-square (Boundary χ2)
BCH | Bose, Ray-Chaudhuri, Hocquenghem
CDF | Cumulative distribution function
CLO | Cross-layer optimization
CLT | Central limit theorem
CMG | CMIC achievable rate gain
CMI | Cooperative magnetic induction
CMIC | Cooperative magnetic induction communication
CMIC-1NR | CMIC with one non-aligned relay
CMIC-nAR | CMIC with multiple aligned relays
CSI | Channel state information
CSMA | Carrier sense multiple access
DF | Decode-and-forward
DMI | Direct magnetic induction
DWE | Dynamic weighted evolution/learning
EMW | Electromagnetic wave
EMWC | Electromagnetic wave communication
EPR | Effective payload ratio
FEC | Forward error correction
FEM | Finite element method
FF | Filter-and-forward
FSOC | Free-space optical communication
GAN | Generative adversarial networks
GTS | Guaranteed time slot
IP-HC | Internet protocol header compression
IoV | Internet of Vehicles
JSCC | Joint source-channel coding
KVL | Kirchhoff's voltage law
LLM | Large language model
LLC | Logical link control
Ly | Layer
MAC | Medium access control
M2I | Metamaterial-enhanced magnetic induction
MCD | MI Linux character device
MCNSI | MI communication-navigation-sensing integrated
MI | Magnetic induction
MIC | Magnetic induction communication
MIMO | Multiple-input multiple-output
MND | MI Linux network device
MPRlA | MI passive relay array
NFC | Near-field communication
OSI | Open Systems Interconnection
P2P | Point-to-point
PDF | Probability density function
PSD | Power spectral density
RL | Reinforcement learning
RPMA | Rotating permanent magnet antenna
RTT | Round-trip time
Rx | Receive
SAGUMI | Space-Air-Ground-Underground multi-network integration
SISO | Single-input single-output
SNR | Signal-to-noise ratio
TCP / IP | Transmission control protocol / Internet protocol
TMR | Tunneling magnetoresistance
ToA | Time of arrival
TTE | Through-the-earth
Tx | Transmit
UDP | User datagram protocol
UG | Underground
UG-WSN | Underground wireless sensor network
UW-WSN | Underwater wireless sensor network
VMI | Vehicle magnetic induction
VMIC | Vehicle magnetic induction communication
VLF-LA | Very low frequency and a larger antenna
WSN | Wireless sensor network

I. INTRODUCTION

A. Underground Communication and TTE Communication

As human activities increasingly extend into underground environments, reliable underground communication has become essential for applications such as resource exploration, disaster rescue, underground blasting, and robotic operations in hazardous scenarios [1].
DARPA launched the Subterranean Challenge in 2017 to drive innovation in mapping, navigation, and search solutions for complex underground spaces, including tunnel systems, urban underground areas, and natural caves [2]. One of the primary challenges in this initiative was ensuring robust underground communication. As next-generation mobile communication technologies, such as 6G [3], evolve, underground communication is expected to play a critical role in integrated networks, where protocols such as TCP/IP will be indispensable for enabling essential functionalities, such as dynamic passwords, node registration, and licensing approval by regional authorities (e.g., national security centers). However, without standardized protocols, these advanced features are nearly impossible to achieve.

TABLE II
NOTATION AND DEFINITION

Definition† | Notation
Basic parameters | See Table III
Probability | P(·)
Expectation | E(·)
Imaginary unit √−1 | j
Source/Tx | S
Destination/Rx | D
Relay | R
(·) of source/Tx antenna | (·)_S
(·) of destination/Rx antenna | (·)_D
(·) of relay antenna | (·)_R
(·) of link S→D | (·)_SD
Magnetic moment by antenna S | m_S
Channel power gain [of link S→D] | G_SD
Circuit gain [of link S→D] | C_SD
Space gain [of link S→D] | S_SD
Eddy gain [of link S→D] | E_SD
Polarization gain or MI fast fading [of link S→D] | J_SD
Mutual inductance [of link k→l] | M_kl
Compensated in-device gain/loss [for link S→D] | ℵ_SD
Wavenumber in the medium | k_0
Horizontal vibration angle [of the antenna S] | ϕ_S
Vertical vibration angle [of the antenna D] | θ′_D
Average AVI [of antenna S] | σ_S
Boundary of the antenna vibration | ς
Dirac's function | δ_pu(·)
Overall circuit impedance [of coil S] | Z_S
Current or divisor [at coil/antenna S] | I_S
PSD [of Tx S] | P_Sf
Noise power | N_o
Capacitance of match capacitor for f_0 [at coil S] | C_cS
Resistance of the load | R_L
Resistance [of coil S] | R_cS
Time or time slot [of Tx S] | t_S
Inductance [of coil S] | L_cS
Bandwidth | B_w
Function of 3-dB bandwidth | B_w(·)
3-dB bandwidth for CMIC-1NR using AF | B_AF
Achievable rate [of link/system S→D] | C_SD
† The brackets [·] represent the optional part. For example, the average AVI [of antenna S] is denoted by σ_S. Optionally, the average AVI [of antenna D] is denoted by σ_D.

Despite its importance, underground communication has received significantly less research attention compared to aboveground technologies such as electromagnetic wave communication (EMWC). It remains a bottleneck in applications ranging from general underground operations to DARPA's robotic systems and next-generation multi-network integration, particularly when supporting TCP/IP protocols.

Fig. 1. Comparison of the penetration abilities of communication approaches. Green and red texts indicate the advantages and disadvantages, respectively. Performance and deployment comparisons are presented in Table IV.
Known as through-the-earth (TTE) communication due to its ability to penetrate over 10 m of soil and rock, this technology is especially critical for scenarios such as mining and drilling, where depths often exceed 1,000 m. Notably, the Kola Superdeep Borehole even extends beyond 12,000 m [4]. During disasters, when EMWC and wired infrastructures are often rendered inoperative, TTE communication's flexible deployment capabilities can expedite rescue operations, potentially saving countless lives.

Fig. 1 illustrates various TTE communication techniques, with their performance and deployments compared in Table IV. General EMW signals are limited in penetration and may fail to reach basement levels. Acoustic communication systems are unsuitable for TTE communication due to severe multipath effects in complex underground materials [5]. Ultra-long wave (ULW) radio-based communication, though capable of penetrating deeper, requires kilometer-scale antennas, which are impractical in confined underground spaces [6]. In contrast, MIC has proven effective for a wide range of underground applications. Time-varying magnetic fields experience less attenuation than EMWs [7], and MIC antennas (coils) are compact compared to ULW devices. For instance, an 8-meter-diameter MIC antenna can achieve a TTE communication range of up to 310 m [8], while smaller antennas (less than 1 m in diameter) are sufficient for shorter distances, making them suitable for vehicle-mounted applications. This flexibility enables mobile MI networks, ideal for emergency rescue operations [9], [10].

The MIC uses a modulated magnetic field generated by a transmitter antenna. This field is received by a receiver coil or magnetic sensor, which demodulates it into symbols. The Wrathall team accomplished an underwater MIC in 1999 using a 3 kHz field [17]. Since then, MIC has been explored as an alternative for underground (UG) WSNs. In 2006, Akyildiz et al. [18], [19] summarized various UG-WSN scenarios, including soil monitoring, underground water monitoring, pipe leak detection, intruder detection, rescue personnel localization, and building load-bearing pressure detection. Sun et al. [12] introduced MIC into UG-WSNs with a small device featuring a 10 cm coil in 2010. Recently, researchers have expanded on applications of non-coil-based MIC and large-scale MI networks. For example, Zhang et al. [20] developed a rotating permanent magnet antenna (RPMA) array device for the through-the-sea MIC application in 2023, and Ma et al. [11] focused on MI multi-cellular networks in 2024.

TABLE III
KEY PARAMETERS, SYMBOLS, AND THEIR DEFAULT VALUES FOR SIMULATIONS IN THIS PAPER

Parameter | Symbol† | Default value | Unit | Refs. | Related simulations in this survey
Tx or CMI coil radius | a_cS | 0.6 | meter | [11] | Figs. 4, 11, 12, 19, 21, 22
Rx coil radius | a_cD | 0.4 | meter | [11] | Figs. 11, 12
Number of turns of Tx or CMI coils | N_S | 15 | - | [11] | Figs. 11, 12, 19, 21, 22
Number of turns of Rx coil | N_D | 30 | - | [11] | Figs. 11, 12
Coil resistance | ρ_w | 0.166 | Ω/m | [8], [11] | Figs. 11, 12, 19, 21, 22
Tx power | P_s, P_S | 5 | Watt | [7], [11] | Figs. 11, 12, 19, 21, 22
Resonance (center) frequency | f_0 | 10000 | Hz | [7], [11] | Figs. 19, 22
Carrier frequency | f | 10000 | Hz | [7], [11] | Figs. 19, 22
Tx coil orientation | θ_s | 0 | - | [12] | Figs. 7, 8, 11, 12, 19, 21, 22
DMI Rx coil orientation | θ_D | π | - | [12] | Figs. 11, 12, 19
VMI Rx coil orientation | θ_D | −π/2 | - | [9] | Fig. 7
Ambient noise PSD | N_of | −103 | dBm/2 kHz | [12] | Figs. 11, 12, 19, 21, 22
Permeability of medium | µ_u | 4π × 10^−7 | H/m | [7], [13] | Figs. 11, 12, 19, 21, 22, 29
Conductivity of medium | σ_u | 10^−2 | S/m | [7], [13] | Figs. 11, 12, 19, 21, 22
Permittivity of medium | ϵ_u | 6.978 × 10^−11 | F/m | [13] | Figs. 11, 12, 19, 21, 22, 29
Distance between S and D | d_SD | 60 | meter | [7] | Figs. 19, 21, 22
† In this paper, the italic subscript 'f' indicates "(·) per Hz". All normal (upright) subscripts indicate descriptions that do not represent any number or node ID.

TABLE IV
COMPARISON OF MIC AND OTHER COMMUNICATIONS IN PERFORMANCE AND DEPLOYMENT DIMENSIONS FOR UNDERGROUND ENVIRONMENTS

Performance† | MIC | EMWC | Acoustic | Wired | Hybrid
Comm. ranges (m) | 0.1∼1,700 | <10 (f = 300 MHz) [12] | <50 (position) [14] | >10,000 | -
Data rates (kbps) | <10 (d_SD > 50 m) | <100 (d_SD > 5 m) [15] | 5∼17.8 [16] | >1,000 | -
Channel dependency | Conductivity, antenna vibration | Multipath, conductivity, permittivity | Multipath, Doppler effect, sound noise | Cable properties and length, shielding, connection quality | Interference, protocols, power control
Antenna/device size (m) | 0.1∼4 radius coil | >10,000 (VLF antenna) [6] | <1 | >10,000 (including cables) | Smaller antenna for subsystems
Deployment costs | Low | High | Medium | Extremely high | High
Maintenance costs | Low | High | High | High | High
System complexity | Medium | High | High | Low | High
Disaster resilience | Strong | Weak | Medium | Medium | Strong
Maturity levels | Low | High | Medium | High | Medium
† Optical communication is omitted due to its zero communication range underground.

TABLE V
RELATED SURVEYS AND THEIR DIFFERENCES FROM THIS SURVEY

Aspects | Refs. | Most important contribution | Differences from this survey
UG communications | [21]-[29] | Issues of acoustic wave communication, EMWC, wired communication, mud pulse telemetry communication, and MIC for UG-WSNs | Not exclusive to MICs, diluting comprehensiveness; no exploration of MI fast fading or RPMA
MICs | [30] | Reference book covering antennas, channels, performance, and protocols related to MICs | No exploration of MI fast fading, RPMA, or routing algorithms
Underground MICs | [15], [31] | Issues for general MI UG-WSNs, primarily concerning short-to-mid-range MICs | No exploration of MI fast fading, RPMA, or a complete MI network architecture
Underwater MICs | [16], [32] | Fundamental issues and advances in underwater MICs | No exploration of MI fast fading, RPMA, or a complete MI network architecture
TTE MICs | This survey | Fundamental issues and advances in underground MIC, primarily concerning long-range MICs, MI fast fading, and a complete MI network architecture | -

B. Related Surveys and Motivations of This Survey

While many surveys on underground communication have been published, as listed in Table V, only a handful have focused on underground MICs. For example, Sharma et al. [31] reviewed MIC research until 2017 for non-conventional media applications. They introduced the applications and advantages of MICs and briefly introduced the channel modeling of the P2P MIC and a hardware testbed for MIC research. Kisseleff et al. [15] conducted a comprehensive review of underground MIC studies up to that point. Although their work primarily focused on P2P MIC and MI waveguide issues, they also discussed physical protocols such as channel estimation and node deployments. Recently, Liu et al. [30] offered a thorough introduction to general MICs in a monograph.
This monograph covers the basic concepts and theories of the background, developments, antennas, channels, performance, and protocols related to MICs proposed before 2020. Compared to the current survey, the existing review [31] has overlooked multi-node MICs. The survey [15] does not comprehensively review MI network architectures and the issues of mobile MIC, including the data link layer, network layers, RPMAs, and MI fast fading. The monograph [30] does not review MI network architectures and MI fast fading, especially the recent routing algorithms and RPMAs developed in the past five years. Overall, the surveys conducted since 2020 have not yet provided comprehensive reviews of expanded MIC techniques, such as MAC and routing protocols for large-scale MI networks and novel mechanical antennas. This is due to the great surge in mobile MICs, mechanical antennas, and upper-layer MI research since 2020. Moreover, almost all existing articles on MICs presented the common conclusion that the MI channel is a quasi-static and predictable channel without small-scale fading. These articles include the surveys (e.g., [15], [32]) and reviews (e.g., [31]). However, several studies have recently described the MI fast-fading channel, so this common conclusion may no longer hold. Although most related articles claim that their research on MIC is compatible with TTE communication, few surveys and reviews highlight the potential or specific issues and methodologies when existing MIC techniques are applied in TTE or long-distance MIC environments. Currently, there is no agreed protocol stack for large-scale MI networks. By organizing the existing MIC research with reference to the OSI-originated framework, we can identify the remaining issues related to MIC and work toward a standard MIC protocol stack. This is crucial for Space-Air-Ground-Underground multi-network integration (SAGUMI) in the next generation of mobile communication.

C. Contributions and Organization of This Survey

This survey reviews research on underground MICs, particularly TTE applications, covering point-to-point TTE MIC techniques and the impact of MI fast fading on existing MIC theories. To guide optimization efforts, we decompose the MI channel power gain into four components with low inter-coupling and distinct, physically interpretable meanings. The survey also covers MI relay techniques, analyzing crosstalk effects in both relay and high-density networks. Moreover, we identify the remaining research tasks for a comprehensive MI network protocol in SAGUI. Based on the surveyed literature, we propose an advanced MIC framework that supports TCP/IP and Linux, addressing both the current state and future challenges of MIC. This framework enables researchers to utilize extensive Linux resources and deep learning platforms, accelerating research and implementation in SAGUI applications. The key research challenges, open issues in MICs, and promising novel techniques to address them are highlighted. The key contributions of this survey include:

• First Survey on MI Fast-Fading Channels: This survey identifies that MI fast fading challenges the prevailing notion of quasi-static MI channels. We highlight that research on MI fast fading remains in its early stages due to the lack of a universal statistical model. To address this, we introduce an antenna vibration model and corresponding simulations.
We also analyze the potential impacts on existing MIC theorems, a topic not yet covered in any previous survey.

• Comprehensive Review of MI Network Architecture: We present a complete review of the MI network architecture across the OSI framework layers, identifying remaining issues and possible solutions. A significant finding is the absence of standardized MI protocol stacks, which represents a major barrier to achieving SAGUMI integration in next-generation mobile communications. Existing surveys often focus on specific applications without providing a holistic and runnable framework.

• Fine-Grained Decomposition of Channel Power Gain: We introduce a detailed conceptual modeling approach for channel power gain and provide optimization directions for MIC systems, including antenna designs, bandwidth, and MIC range improvements. This contributes to simplifying MIC optimization for future research by narrowing the scope of MI parameters to focus on relevant variables while fixing others.

• Identification of Positive MI Crosstalk Effects: This study uncovers positive MI crosstalk effects, which are crucial for addressing challenges in MI waveguides and massive MIMO systems. Previous literature primarily focused on negative crosstalk effects, while the positive crosstalk aspect has been largely overlooked.

The remainder of this survey is structured as follows (more details in Fig. 2): Section III covers MI channel modeling, including MI channel power gain and MI fast fading. Section IV summarizes P2P MIC designs, focusing on MI antenna design, bandwidth, and MIC range. Section V surveys MI relay techniques, such as MI waveguides, the MI passive relay array (MPRlA), cooperative magnetic induction communication (CMIC), and the MI crosstalk effect. Section VI reviews multi-node MIC from the perspective of the OSI framework. Section VII introduces a promising MI network framework with TCP/IP and Linux support in an attempt to address the challenges in current and future MIC studies. Section VIII explores unresolved challenges. It also discusses promising methodologies, including novel MI antennas, the MI communication-navigation-sensing integrated (MCNSI) system, massive MI MIMO, deep JSCC for MIC, heterogeneous MI network techniques, TCP/IP framework support, and Transformer-based prediction frameworks. Section IX concludes this survey.

The acronyms that appear across subsections of this survey are listed in Table I. The physical representations of mathematical symbols are listed in Tables II and III. The default values for the simulations and numerical evaluations that we conducted in this survey are listed in Table III.

D. Research Gap Summary

Open issues on MIC are summarized in Table VI, highlighting existing research gaps in several pivotal domains. The high-priority domains are described in detail as follows.

1) MI fast fading for mobile MIC: Research on MI fast fading still has significant gaps despite existing efforts. The primary gap is the absence of a universal statistical model, which severely hinders the development of upper-layer protocols for mobile UG-WSNs. To address this gap, we propose a geometric antenna vibration model that can potentially tackle this challenge using Monte Carlo methods.

2) Significant gap in the TCP/IP framework: TCP/IP is crucial for SAGUI networks in next-generation communications. However, when mapping the surveyed research to each OSI layer, significant gaps emerge: certain layers and key topics remain underexplored. Specifically, no literature addresses IP applicability in MI networks, and the entire transport layer (Ly4) lacks dedicated MI solutions. Meanwhile, the extremely low bandwidth and channel capacity of MIC may not be compatible with existing IP and Ly4 protocols, as this low performance can lead to congestion-mechanism failures and retransmission storms. Additionally, excessively large headers waste scarce MI bandwidth. To address these gaps in MI TCP/IP solutions, we propose a promising MIC framework supporting TCP/IP and Linux, which systematically incorporates MI algorithms and protocols drawn from the literature and future potential solutions.

3) Underdeveloped channel models and protocols for TTE-specific MIC: Current studies on MIC have largely overlooked the unique challenges of the TTE scenario, including mobility, heterogeneous geological materials, and constrained antenna
Specifically, no literature addresses IP applicability in MI networks, and the entire Transport Layer (Ly4) lacks dedicated MI solutions. Meanwhile, the extremely low bandwidth and channel capacity of MIC may not be compatible with existing IP and Ly4 protocols, as this low performance can lead to congestion-mechanism failures and retransmission storms. Additionally, excessively large headers waste scarce MI bandwidth. To address these gaps in MI TCP/IP solutions, we propose a promising MIC framework supporting TCP/IP and Linux, which systematically incorporates MI algorithms and protocols drawn from the literature and future potential solutions.

3) Underdeveloped channel models and protocols for TTE-specific MIC: Current studies on MIC have largely overlooked the unique challenges of the TTE scenario, including mobility, heterogeneous geological materials, and constrained antenna position and orientation (APO).

[Fig. 2: a tree diagram mapping each section and subsection of this survey to its key topics and representative references.]

Fig. 2. The structure of this survey, where we perform fine-grained decomposition of the MI channel power gain and propose a novel antenna vibration model for MI fast fading in Section III, the MI crosstalk effect in Section V, and the MI network framework with TCP/IP & Linux support in Section VII.
Key research gaps in TTE-specific MICs are as follows: (i) the eddy gain model for heterogeneous UG materials may require correction or numerical validation; (ii) shadow fading should be considered for mobile nodes in Case (i); and (iii) the existing upper-layer MI protocols lack TTE-specific design adaptations. For these gaps, the FEMs we conducted in this survey are promising.

II. APPLICATIONS OF MICS: GENERAL SCENARIOS AND TTE-SPECIFIC FEATURES

In this section, we survey the MICs for various potential applications, as summarized in Table VII. Subsequently, we discuss the specific challenges and considerations of applying MIC techniques to TTE scenarios.

Applications in agriculture: Soil conditions are crucial for crops, making it essential to build an MI UG-WSN for agricultural automation. This has attracted several researchers. Li et al. [60] derived the conductivity and permittivity distribution using the Simultaneous Iterative Reconstruction Technique (SIRT) algorithm for soil moisture sensing, and obtained moisture sensing results based on an empirical model, i.e., the soil's relative permittivity as a function of the volumetric water content (VWC). Sensors are spaced 5 to 10 m apart. The COMSOL simulations showed that the sensing accuracy can achieve a root mean square error of 6% in VWC [60].

TABLE VI
RESEARCH GAPS AND POTENTIAL SOLUTIONS, AND THEIR MAPPINGS

OSI Layers† | Research gaps | Identified open issues | Potential solutions‡ | Priority
Ly1 | MI fast fading | • Universal statistical models • Related upper-layer solutions | • Geometric antenna vibration model (cf. Fig. 6) • Monte Carlo methods (cf. Fig. 7) • Expressions of ergodic performances (cf. (12), (13)) • Comparison simulations for modulations and FECs (cf. Fig. 14) | ✸✸✸
Ly1 | MI crosstalk | • Positive effects; spatial distribution | • Transformer-based prediction (cf. Fig. 28) | ✸✸
Ly1 | TTE-specific MIC | • Closed-form expressions of MIC range; optimal carrier frequency • Eddy current effects & shadow fading from heterogeneous media | • Derivation based on Lambert-W function properties for MIC range (cf. (20)) or FEM, as shown in Fig. 4 | ✸✸✸
Ly1 | (Deep) JSCC | • No relevant solutions (key: prospect of exceeding Shannon limits) | - | ✸✸
Ly2 | LLC | • No references (optional sublayer) | - | ✸
Ly3 | IP | • No references (key: VLF adaptions and large packet headers) | • MIC framework for TCP/IP & Linux support (cf. Fig. 26, Algorithm 1) | ✸✸✸
Ly4 | TCP | • No references (key: unstable connection and large headers) | • MIC framework for TCP/IP & Linux support (cf. Fig. 26, Algorithm 1) | ✸✸✸

† OSI Layers: Ly1: physical; Ly2: data link; Ly3: network; Ly4: transport; Ly5–7: combined application layers (i.e., session, presentation, application).
‡ Potential solutions: systematically developed approaches with specific formulations (key expressions, simulations or frameworks), distinct from generic proposals.
TABLE VII
APPLICATIONS USING MI-BASED TECHNIQUES

Applications | Predictable & stable channels† | Waveguide compatibility‡ | MIC distance | Other characteristics | Impact on practical MIC systems | Involved refs.
Agriculture | Yes | Yes | Short to mid (<30 m) | 1) Limited free spaces 2) Medium with high and time-variable VWC 3) Shallow burial | 1) Restrict antenna size and deployments 2) Variable performance due to time-variable VWC 3) Simple hardware, complex topologies and protocol-stack designs | [33], [59]–[62]
Industry | Yes | Yes | Short to mid (<36 m) | 1) Reliant on specific applications | - | [15], [34], [63]–[70]
Underwater | No | Yes | Mid to long (1∼100 m) | 1) Sufficient free space 2) Randomly misaligned coils 3) Remarkable eddy loss | 1) Diverse antenna designs and configurations 2) Fast-fading-like phenomenon 3) Limited P2P MIC range | [5], [16], [20], [32], [51], [71]–[89]
BAN | No | No | Short (<5 m) | 1) Low power requirements 2) Non-magnetic-dipole models | 1) Small device size 2) Strong coupling & higher carrier frequency due to short MIC range | [35], [90]–[101]
Military | - | - | - | 1) BorderSense architecture | 1) Access to EMWCs | [15], [18], [102]
Environment & disaster monitoring | Yes | Yes | Short to long (<1,000 m) | 1) Lack of CSI due to disaster | 1) Transmitter-side channel estimation | [31], [103]
Localization | Yes | No | Mid to long (1∼1,700 m) | 1) Expecting signal differences across positions | 1) Require specific localization algorithms (e.g., ToA) | [14], [24], [64], [65], [104]–[115]
TTE | No | No | Long (60∼1,000 m) | 1) Non-uniform eddy loss 2) Relatively limited free spaces 3) Using VLF-LA 4) Remarkable fast fading | 1) Shadow-fading-like phenomenon 2) High deployment costs 3) Extremely low capacity 4) Complex hardware and upper-layer protocols even for a simple network | [6]–[11], [116]

† Predictable and stable channels in most scenarios.
‡ Waveguide system compatibility in most scenarios.

Agnelgo et al. [33] studied mid-range MI-based UG-WSNs for real-time soil moisture sensing, with a communication distance of 15–30 m. A common feature of these MI agriculture applications is that the sensors are buried near the ground surface, and the communication distance among MI nodes is shorter than 30 m. In addition, the MI waveguide has been widely applied to enhance the MIC distance.

Applications in industry: MI-based applications have broad prospects in the industrial field, including pipeline leakage detection, infrastructure monitoring, and communication within mines. Many researchers focus on pipeline leakage detection [34], [63]–[66]. Sun et al. [34] discussed leakage detection and localization for pipelines with lengths of 36 m and 27 m, using the MI waveguide model. Tan's team studied the position of pipeline breakage by an MI-based method [64]. Recently, Li's team [66] developed an MI-based positioning system for underground pipelines by applying rotating permanent magnets. For infrastructure, monitoring the health of residential and commercial buildings, bridges, and dams is crucial to avoid possible disasters [15], [67], [68]. Mines provide another important application scenario.
Compared to the pipeline and infrastructure scenarios, the distance between two nodes can be much larger. Among these studies on industrial applications, the MI waveguide method was widely applied to extend the distance.

Underwater applications: Water covers over 70% of Earth's surface, offering abundant resources. MIC technology shows promise for underwater exploration. Underwater applications can be categorized into under-freshwater and under-seawater applications. As seawater has higher conductivity, the path loss in MI channels under the sea is significantly greater than that in freshwater. Li et al. [32] offered a comprehensive overview of existing studies on underwater applications conducted prior to 2018, including those developed in [71]–[77]. After 2018, MIC for UW-WSNs increasingly attracted the attention of researchers. Firstly, several researchers proposed RPMAs to generate modulated magnetic fields at 30 Hz∼1 kHz frequencies for MIC energy saving [20], [78], [81]–[84]. Secondly, the upper-layer protocols for MI-based UW-WSNs were investigated, including MAC protocols [55] and routing solutions [85], [86]. Thirdly, in contrast to UG-WSNs, multi-antenna techniques can be applied in water thanks to the ample free space for antenna deployment. These techniques include MI MIMO [87], antenna arrays [82], CMIC [51], [87], and the MI waveguide. Finally, due to the sufficient free space, the random misalignment between the coils generates an unpredictable and unstable channel [89]. Notably, compared to UG-WSNs, acoustic (sonar) and optical techniques are alternative communication methods for UW-WSNs [88].

Body Area Network (BAN): The BAN is a sensor technology used to connect small nodes with sensing abilities to collect necessary information [90]. In the cell-filled body, EMW-based communication in the microwave range faces high attenuation, interference, and multipath effects. MI techniques can address these issues [35], [91]–[101]. Specifically, the principle, power equations, and capacity of MI-based BANs are demonstrated in [91], [92]. Kisseleff et al. [94] studied distributed beamforming; their simulation showed a data rate of 3200 bit/s at a distance of 5 m and a frequency of 13.56 MHz. Golestani et al. [96] derived a theoretical model of MI communication in wireless BANs, and analyzed its working frequencies of 0∼50 MHz based on an experimental MIC device at an MIC distance of 40 cm. Mishra et al. [99] proposed the MI waveguide for the BAN link. Compared to UG-WSNs and UW-WSNs, MI-based nodes are closer, allowing lower power requirements and a higher working frequency, e.g., 30 MHz, as reported in [96]. Because the coils' sizes are comparable to MIC distances under wearable requirements, these designs are similar to standard Near-Field Communication (NFC), with simulation parameters aligned with those in ISO 14443/15693.

Environment monitoring: Environment monitoring covers public services, including pollution monitoring and disaster detection. MICs can perform chemical, biological, and pollution monitoring for rivers, lakes, and water reservoirs [31]. MIC systems can also monitor transport tunnels and mining conditions, and issue early warnings from underground to relevant departments before imminent accidents. In disaster scenarios, obtaining CSI may be challenging.
To address this issue, Kisseleff et al. [103] proposed a specific channel estimation within the MI transmitter circuit that requires no explicit CSI feedback.

MI localization: MI techniques are widely employed for localization in environments where EMW propagation is challenging [14], [64], [65], [104]–[114], [117]. While the signal propagation model in MI localization shares similarities with that of MIC, the underlying methods and design principles differ significantly. MIC systems prioritize uniform signal strength around the transmitter to prevent local network congestion and maintain consistent data rates. Conversely, MI localization systems benefit from amplifying signal differences to improve positional distinction and accuracy. Additionally, various advanced techniques, such as time of arrival (ToA) [14], time difference of arrival (TDoA) [14], data fusion [115], and inertial navigation [104], can be employed to further enhance localization accuracy.

TTE applications: TTE applications typically have a vertical communication range of tens to hundreds of meters. However, due to the extremely high cost of MI-based TTE devices, most studies focus on simulations and experiments within a limited distance of several meters. Despite this challenge, several companies and research institutions have developed TTE devices. In 2011, Lockheed Martin developed the MagneLink™ MIC system, achieving a two-way communication range of 472.5 m using an 8-m transmit antenna [6], [118]. Vital Alert also developed MI-based TTE devices with a claimed MIC range of over 450 m [119]. Liu's team (our team) at Tongji University developed a TTE experimental device achieving a vertical distance of 310 m in the Datong Coalmine using an 8-m transmit coil [6], [8]. Based on this device, we investigated CMIC [6], [7], [10] in the TTE environment using very-low-frequency and large-antenna (VLF-LA) methods. Recently, the MI fast fading channel of TTE MIC was studied in [11]. With such fading, the MI channel becomes unpredictable and unstable.

TABLE VIII
COMPARISON BETWEEN TTE AND GENERAL MIC SETTINGS

Settings | TTE | General MIC
Range (m) | 60∼1,000 (Table VII) | <36 (Table VII)
Capacity (bps) | <10 k | ∼1 M [15]
3-dB Bandwidth (Hz) | 300∼500 [11] | 2 k∼10 k [12], [121]
Frequency (Hz) | 1 k∼10 k [11] | 1 M∼50 M [15], [96]
Coil radius (m) | 0.5∼4 [7], [8] | <0.2 [12]
Tx Power (W) | 5∼126† [10] | <1 [12]
MI fast fading | Yes [11] | No [32]
Propagation medium | Inhomogeneous | Homogeneous
Eddy effect | Significant | Mild
Deployment cost | Extremely high | Low
Existing TCP/IP support | Challenging | Potential
Typical scenarios | Tunnel, mine, mountain | Farmland, pipeline

† The Tx power of 126 W stems from our Datong Coalmine TTE experiment.

A comparison between TTE and general MIC settings is outlined in Table VIII. While many studies (e.g., [6]–[12], [15], [116], [118], [119]) have adapted MIC technologies for underwater and general underground applications to TTE communications, they have not fully addressed the specific challenges and considerations of applying MIC techniques to TTE scenarios, including those outlined in Table VII and Table VIII.

• Instability of channel power gain: The uncertainty of the medium over a large space and the vibration of mobile MIC antennas cause random variations in channel power gain, ultimately leading to remarkable fast fading.
• Extremely low bandwidth: To extend the MIC range as much as possible, the throughput of the MI link must be greatly sacrificed, which has a significant impact on existing upper-layer solutions (e.g., centralized Q-learning-based routing [120], and TCP/IP support).

• Eddy current in the medium: As the MIC distance increases, the eddy current becomes increasingly significant and complicated. Some relatively simple techniques, e.g., the MIC range/coverage [6], [8], [12] and the frequency-switchable routing protocol [120], become complicated.

• Challenging deployment: Limited space in deep underground environments poses deployment challenges for some MIC methods, such as the MI waveguide and MIMO. Moreover, we must carefully consider the energy consumption of MIC devices due to the extremely high cost of replacing their batteries.

III. CHANNEL MODELS FOR TTE MIC

In this section, we introduce the MI channel power gain and the recently discovered MI fast fading. Specifically, we categorize the channel power gain of the P2P MI link into four factors with distinct, physically interpretable meanings and introduce their optimization strategies. Then, we discuss MI fast fading, including its current statistical models, limitations, derivation challenges, and our proposed antenna vibration model with a simulation addressing those challenges.

A. System and Channel Modeling

The block diagram of the P2P MIC link S→D is shown in Fig. 3, where the channel model depends on the antenna type, medium, and orientation. Specifically, for coil-based MIC, Sun et al. [12] derived the path loss expressions for both the coil-based MI link and the MI waveguide link, indicating that the path loss of a P2P MI link scales with the sixth power of the distance $d_{SD}$. Their findings were experimentally validated in both wet and dry soil conditions. Further, Sun et al. [39] examined the impact of underground materials on MI communication. Li et al. [36] highlighted the significant orientation sensitivity of MIC and proposed an orthogonal MIMO coil model to mitigate this effect. For the M2I link, Guo et al. [37] formulated the channel model using Maxwell's equations, validated it through COMSOL simulations [37], and experimentally tested it using a metamaterial shell composed of multiple small coils [122], [123]. In RPMA-based MIC, Rezaei et al. [124] analyzed the magnetic moment of a single permanent magnet, while Zhang et al. [20] extended this study to magnet arrays. Ma et al. [11] investigated fast fading in downlink vehicular MI (VMI) channels.

[Fig. 3: source (Tx) at S(0, 0) and destination (Rx) at D(dSD, θS). At the Tx, a modulated alternating current generates an alternating magnetic-field signal; at the Rx, the alternating magnetic field induces the signal current for demodulation; the channel propagates via the magnetic field only.]

Fig. 3. P2P TTE MI communication model in polar coordinates. Here, $\hat{e}_D$ and $\hat{e}_{\theta_S}$ are the radial and angular unit axial vectors, respectively; $n_S$ and $n_D$ are the normal vectors of the dipole and sensor, respectively.
Among these research articles, P2P MIC modeling starts from Maxwell's equations. The magnetic intensity $H_D$, the induction intensity $B_D$, and the flux $\Psi_D$ received by the Rx antenna D at polar coordinates $(d_{SD}, \theta_S)$ can be derived as
$$H_D = m_S\vec{h}_D,\quad B_D = \mu_u H_D,\quad \Psi_D = B_D\cdot n_D = \mu_u m_S(\vec{h}_D\cdot n_D), \qquad (1)$$
respectively, where $\vec{h}_D$ is given by [125, Chapter 5]
$$\vec{h}_D = \frac{1}{4\pi}\frac{k_0}{2d_{SD}}e^{-jk_0 d_{SD}}\left[\cos\theta_S\,\frac{1}{d_{SD}}\left(1+\frac{1}{jk_0 d_{SD}}\right)\hat{e}_D + \sin\theta_S\,\frac{1}{2}\left(1+\frac{1}{jk_0 d_{SD}}-\frac{1}{(k_0 d_{SD})^2}\right)\hat{e}_{\theta_S}\right], \qquad (2)$$
where $k_0 = 2\pi f\sqrt{\mu_u\left(\epsilon_u + j\,\sigma_u/(2\pi f)\right)}$ [126]. Note that (2) holds only in the weak-coupling and linear regime, where the distance is much greater than the antenna size. It may break down in near-resonant or highly reactive systems. In the receive coil or magnetic sensor, the magnetic flux $\Psi_D$ induces a current $I_D \propto \Psi_D$. The channel power gain of the link S→D is given by
$$G_{SD} = \frac{P_{Df}}{P_{Sf}} = \frac{1}{I_S^2}\aleph_{SD}\Psi_D^2 = \frac{|m_S|^2}{I_S^2}\aleph_{SD}\left(\mu_u(\vec{h}_D\cdot n_D)\right)^2, \qquad (3)$$
where the compensating variable $\aleph_{SD}$ depends significantly on the specific circuit and antenna type.

B. Decomposition of Channel Power Gain

Eq. (3) is a universal expression of the channel power gain in both the near-field and radiation-field ranges. However, this expression is challenging to analyze and formulate into a further optimization model in MIC research. Thus, in most MI UG-WSN studies, the insufficient communication distance [12] or VLF-LA methods [6] lead to the assumption that $k_0 d_{SD}\ll 1$ holds. Guided by the "high cohesion and low coupling" principles of software engineering and by substituting $k_0 d_{SD}\ll 1$ into (3), we propose a decomposition of the channel power gain $G_{SD}$ into four physically meaningful factors
$$G_{SD} = \underbrace{\frac{|m_S|^2\aleph_{SD}}{16\pi^2|I_S|^2}}_{\text{Circuit gain } C_{SD}} \times \underbrace{\frac{|\mu_u|^2}{(d_{SD}^3)^2}}_{\text{Space gain } S_{SD}} \times \underbrace{e^{-2jk_0 d_{SD}}}_{\text{Eddy gain } E_{SD}} \times \underbrace{(2\cos\theta_S\cos\theta_D + \sin\theta_S\sin\theta_D)^2}_{\text{Polarization gain } \mathcal{J}_{SD}=J_{SD}^2}, \qquad (4)$$
i.e., the circuit gain $C_{SD}$, space gain $S_{SD}$, eddy gain $E_{SD}$, and polarization gain $\mathcal{J}_{SD}$. As shown in Fig. 5, the relationships among these decomposed gains involve only three logical coupling links, conforming to the low-coupling principle. Their physically interpretable meanings are provided in the rest of this subsection. We summarize the addressed issues, methods, remaining issues, and optimization directions of existing research in Table IX, as described in what follows.
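To make the decomposition concrete, the following minimal Python sketch evaluates the four factors of (4) for one parameter set. All numeric values are our illustrative assumptions (not the Table III defaults), and the eddy gain is taken as the magnitude of the complex factor in (4) using the wavenumber $k_0$ defined below (2):

```python
import numpy as np

# Minimal sketch of the four-factor decomposition (4); the parameter values
# below are illustrative assumptions, not the survey's Table III defaults.
mu0, eps0 = 4e-7 * np.pi, 8.854e-12
f = 2e3                                     # carrier frequency (Hz), VLF
sigma_u, eps_u, mu_u = 0.01, 7 * eps0, mu0  # dry-soil medium parameters [39]
d = 50.0                                    # link distance d_SD (m)
theta_S, theta_D = 0.3, 0.5                 # Tx/Rx antenna angles (rad)
m_S, I_S, aleph_SD = 100.0, 10.0, 1e-3      # moment, current, circuit factor

# Complex wavenumber k0 defined below (2)
k0 = 2 * np.pi * f * np.sqrt(mu_u * (eps_u + 1j * sigma_u / (2 * np.pi * f)))

C_SD = abs(m_S)**2 * aleph_SD / (16 * np.pi**2 * abs(I_S)**2)  # circuit gain
S_SD = abs(mu_u)**2 / d**6                                     # space gain (1/d^6 law)
E_SD = abs(np.exp(-2j * k0 * d))                               # eddy gain magnitude
J_SD = (2 * np.cos(theta_S) * np.cos(theta_D)
        + np.sin(theta_S) * np.sin(theta_D))**2                # polarization gain
G_SD = C_SD * S_SD * E_SD * J_SD

print(f"|k0|*d = {abs(k0) * d:.2f} (decomposition assumes << 1)")
print(f"C={C_SD:.3e} S={S_SD:.3e} E={E_SD:.3f} J={J_SD:.3f} G={G_SD:.3e}")
```

The printed value of $|k_0|d_{SD}$ indicates how well the near-field assumption behind (4) holds for the chosen distance and frequency.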
1) Circuit gain $C_{SD}$: The circuit gain, with two optimizable factors $m_S$ and $\aleph_{SD}$, reflects the energy consumption in the circuit. We observe that studies on antenna and hardware design, including the resonance characteristics of the coil antenna [12] and the magnetic pole strengths of the bias and rotor magnets of the RPMA [127], can be categorized as independent considerations of the optimization of the circuit gain $C_{SD}$. This is crucial for improving the bandwidth through the factor $\aleph_{SD}$ of $C_{SD}$, and the MIC range through the factor $m_S$ of $C_{SD}$. For example, an RPMA without coils can eliminate the resonance coupling (expressed in $\aleph_{SD}$) [127]. However, the RPMA may face an additional equivalent circuit loss due to mechanical inertia and friction (also expressed in $\aleph_{SD}$). Wang et al. [38] used impedance matching $Z_L = R + \frac{(2\pi f M)^2}{R}$ in $\aleph_{SD}(Z_L(f))$ to maximize the receive power, where $Z_L$, $R$, and $M$ are the impedance and resistance of the coil and the mutual inductance, respectively. Their simulations showed that this approach produced two frequency resonant points, at 1 MHz and 5 MHz, respectively. In [128]–[130], multi-band MI techniques achieve several discontinuous 3-dB bandwidths in the frequency domain through a coil array, based on $\aleph_{SD}(Z_L(f))$. Unfortunately, additional loss arises due to crosstalk within the coil array. The circuit gain exhibits the best operability, as it minimizes the need to consider the MIC environment.

2) Space gain $S_{SD}$: This gain represents the ideal spatial path loss due to spatial expansion, and it serves as a primitive model for MIC. It is widely used in MI network studies to analyze and optimize signal transmission. When considering short- to mid-range MIC systems, such as MI waveguides [13], M2I communications [37], and upper-layer protocols for MIC [52], [85], [86], researchers often simplify theoretical communication models by ignoring the eddy gain $E_{SD}$ and the polarization gain $\mathcal{J}_{SD}$. While $E_{SD}$ and $\mathcal{J}_{SD}$ can have significant effects on signal propagation and reception, ignoring them allows more straightforward analyses and calculations, especially during the initial stages of network design and optimization.

TABLE IX
PARTITIONS OF CHANNEL POWER GAINS AND THEIR ISSUES
(The marker '●' describes the methods; the markers '✓' and '✗' describe addressed and remaining issues, respectively.)

Factors | Physical meanings | Involved refs. | Addressed issues and methods | Remaining issues and optimization directions | Priority†
Circuit gain (CSD) | Local energy consumption | [12], [38], [127]–[130] | ✓ Increase the magnetic moment with minimal energy ● Use RPMA to reduce ℵSD under VLF ● Impedance matching to maximize the receive power ● Multi-band MI technique to achieve several 3-dB bandwidths | ✗ Challenge to derive the capacity and energy loss of an RPMA-based link due to inertia ✗ Deviation of 3-dB bandwidth in VLF below 100 Hz | ✸✸✸
Space gain (SSD) | Field attenuation with spatial expansion | [37] | ✓ Mitigate the rapid signal attenuation with spatial expansion ● M2I enhanced techniques to enhance the overall permeability between S and D | ✗ Too large M2I shell occupying several times the coil volume | ✸✸
Eddy gain (ESD) | Loss due to eddy current of the medium | [8], [130] | ✓ Determining the non-negligible eddy current for TTE MIC ● Derivation using magnetic potential ● Using the Lambert-W function to express the MI coverage | ✗ Unpredictable medium and shadow fading in fluid scenarios or mobile MIC ✗ Eddy gain in inhomogeneous media (by FEM) | ✸✸✸
Polarization gain (JSD) | Loss due to antenna orientation | [8], [9], [11], [36], [87], [89] | ✓ Stochastic misalignment of coils in underwater scenarios ✓ Unpredictable MI fast fading channel in underground/aboveground MIC ● Orthogonal MIMO coils to mitigate orientation sensitivity ● Boundary p(x) distribution to model MI fast fading | ✗ Challenge to deploy the MIMO antenna in TTE scenarios ✗ Challenge to derive a universal statistical model for MI fast fading | ✸✸✸
Mixed-field case | Non-near-field case (not k0dSD ≪ 1) | [42], [78], [79], [126] | ✓ Channel modeling for the P2P MI link ● Derivation based on Maxwell's equations | ✗ Challenge in partitioning the channel power gain ✗ Challenge for antenna design and optimization due to excessive dependence on variables ✗ CMIC channel modeling ✗ Upper-layer solutions, such as MAC and routing ✗ Network deployment ✗ Frequency optimization and 3-dB bandwidth derivation | ✸✸
Strong coupling | Coupling coefficient ≈ 1 | [41] | ✓ Communication capacity in the strong-coupling case (MIC distance much smaller than coil size) ● Non-magnetic-dipole modeling | ✗ Inapplicable channel power gain expressions (3) and (4) | ✸

† Priority: priority level of remaining issues for exploration.
The space gain expression $S_{SD} = \frac{|\mu_u|^2}{(d_{SD}^3)^2}$ suggests that optimizing $S_{SD}$ is challenging, since the permeability $\mu_u$ is fixed by the underground material. Guo et al. optimized the equivalent permeability by designing a metamaterial-enhanced antenna enclosed by a metamaterial shell. This shell has a permeability of $\mu_2 = -\mu_0$, where $\mu_0$ is the vacuum permeability [37]. This enhances the overall permeability between S and D, resulting in an improved space gain. In practice, the TTE environment is hardly modifiable, leading to limited operability of the space gain.

3) Eddy gain $E_{SD}$: The eddy gain is known as the skin effect. It is generated by the induction current that the time-varying magnetic field creates within underground materials. In TTE communication, heterogeneous materials exhibit dynamic characteristics, especially for mobile MI networks. This makes MIC channel coefficients unpredictable and poses significant challenges to MIC research. For short-range MICs and early MIC research, the eddy gain can be ignored to simplify the primary problems. However, for TTE MICs, the cumulative eddy current is highly remarkable. In an infinite and uniform medium, the expression for the eddy gain $E_{SD}$ is given in [130], [131] and can be approximately matched as [10]
$$\sqrt{E_{SD}} \simeq \exp\left(\frac{-d_{SD}}{\delta_u}\right) = \exp\left(-d_{SD}\,\delta_u^{-1}\right), \qquad (5)$$
with
$$\delta_u = \frac{1}{2\pi f}\left[\frac{\mu_u\epsilon_u}{2}\left(\sqrt{1+\frac{\sigma_u^2}{(2\pi f)^2\epsilon_u^2}}-1\right)\right]^{-1/2}, \qquad (6)$$
where the permittivity $\epsilon_u$ is typically very small in most underground environments. For instance, the permittivities of dry and wet soils are $7\times 8.854\times 10^{-12}$ F/m and $29\times 8.854\times 10^{-12}$ F/m, respectively, and their conductivities $\sigma_u$ are 0.01 S/m and 0.077 S/m, respectively [39]. Hence, for TTE MIC with VLF-LA, the condition $\sigma_u \gg 2\pi f\epsilon_u$ typically holds, allowing (6) to be further approximated as
$$\delta_u \simeq \sqrt{\frac{1}{\pi f\mu_u\sigma_u}}. \qquad (7)$$
From (4) and (5), we observe that only the circuit gain $C_{SD}$ and the eddy gain $E_{SD}$ depend on the carrier frequency $f$. This suggests a potential to enhance MIC performance through frequency optimization. For short-range and mid-range MICs, the effect of the eddy gain is not significant; hence, there is limited literature specifically studying the eddy gain at this moment. Note that the eddy gain expressions in (4) and (5) apply only to near-field homogeneous media. Fig. 4 presents COMSOL simulations of magnetic flux density distributions in multilayer materials. These simulations show significant distortion in both magnetic field direction and magnitude across different media due to complex eddy effects, particularly near medium boundaries. This indicates that the eddy gain model requires correction for multilayer or inhomogeneous media.
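The sketch below, assuming the dry- and wet-soil parameters quoted above [39] and an assumed VLF carrier, compares the exact skin depth (6) with the VLF approximation (7); the close agreement confirms that $\sigma_u \gg 2\pi f\epsilon_u$ holds in these media:

```python
import numpy as np

# Compare the exact skin depth (6) with the VLF approximation (7) for the
# soil parameters quoted above [39]; the carrier frequency is an assumption.
mu0, eps0 = 4e-7 * np.pi, 8.854e-12

def delta_exact(f, mu_u, eps_u, sigma_u):      # Eq. (6)
    w = 2 * np.pi * f
    inner = mu_u * eps_u / 2 * (np.sqrt(1 + (sigma_u / (w * eps_u))**2) - 1)
    return 1 / (w * np.sqrt(inner))

def delta_vlf(f, mu_u, sigma_u):               # Eq. (7)
    return 1 / np.sqrt(np.pi * f * mu_u * sigma_u)

f = 2e3
for name, eps_u, sigma_u in (("dry soil", 7*eps0, 0.01), ("wet soil", 29*eps0, 0.077)):
    print(f"{name}: exact {delta_exact(f, mu0, eps_u, sigma_u):7.1f} m, "
          f"VLF approx {delta_vlf(f, mu0, sigma_u):7.1f} m")
```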
4) Polarization gain $\mathcal{J}_{SD}$: This gain $\mathcal{J}_{SD} = J_{SD}^2$ represents the effects of antenna orientations. From (4), $\mathcal{J}_{SD}$ lies within $[0, 2^2]$. Specifically, when $\theta_S = \frac{\pi}{2}$ and $\theta_D = 0$, $\mathcal{J}_{SD}$ falls to 0. This poses several challenges, as follows. 1) Aligning the Tx and Rx antennas may be difficult in practical scenarios, such as the autonomous underwater vehicle (AUV) scenario. To address this, orthogonal MIMO has been introduced into MICs. In [36], Li et al. deployed three orthogonal coils in each device, increasing the minimal channel power gain to $(\frac{1}{3})^2$ of the maximal one. 2) In MI UW-WSNs, antenna vibrations cause random misalignment, leading to stochastic polarization gain and channel coefficients [89]. 3) Fast fading in mobile MI UG-WSNs, such as vehicle MICs (VMICs) and backpack MICs, also stems from the polarization gain [9], [11]. Most challengingly, in most mobile MICs, $\mathcal{J}_{SD}$ is a random variable with an uncharacterized distribution.

In summary, the fine-grained decomposition of the channel power gain into four gains with low logical coupling, $G_{SD} = C_{SD}S_{SD}E_{SD}\mathcal{J}_{SD}$, allows us to simplify the optimization problem formulation by focusing on one or several specific factors while fixing the remaining parameters. For instance, MI antenna optimization prioritizes the circuit gain $C_{SD}$, while MI MIMO optimization focuses on enhancing the polarization gain $\mathcal{J}_{SD}$. For MIC in a multi-layer medium, attention can be given to the eddy gain $E_{SD}$. For long-range MIC in an ultra-low-conductivity medium, or for short-range MIC, focusing on the space gain $S_{SD}$ suffices.

TABLE X
COMPARISON AMONG DIFFERENT MODELS OF THE FAST FADING CHANNEL IN RELATED REFERENCES

Types | Models | Compared points | Different challenges of MI fast fading | Refs.
EMWC | Rayleigh, Rician, Nakagami | 1) Relatively predictable large-scale fading 2) Multi-path effect 3) Using CLT-based simplifications 4) No boundary limits | ✠1) Unpredictable large-scale fading ✠2) Vibration-oriented fading without multi-path effect ✠3) Hard to use CLT-based methods ✠4) With boundary limits | [132]–[134]
FSOC | Málaga | 1) Simplified to Rayleigh, Rician, and Nakagami models 2) No boundary limits | Same as ✠1)–✠4) | [135]–[137]
FSOC | Geometric and misalignment loss | 1) Linear propagation signals 2) No boundary limits | 1) Linear propagation signals 2) Orientation sensitivity 3) With boundary limits | [138]
Acoustic communication | Rician | 1) Simplified to Rician 2) Strong LOS 3) No boundary limits | Same as ✠1)–✠4) | [139]
MIMO | SWA | 1) With macro scattering and micro scattering 2) No boundary limits | Same as ✠1)–✠4) | [140]
MIC | Traditional MIC | No fast fading | With fast fading | [15], [32], [46], [141]
MIC | Underwater | 1) Antenna orientation with a uniform distribution 2) No boundary limits | 1) Antenna angle with complex probability distributions 2) With boundary limits | [89]
MIC | Vehicle MI | 1) Antenna angle with a BCS distribution 2) Antenna angle with a boundary p(x) distribution | 1) Lacking universal solutions | [9], [11], [142]

[Fig. 4: FEM simulation results showing magnetic flux density distributions across air, soil, and seawater layers.]

Fig. 4. FEM simulation of magnetic flux density in multilayer materials with different conductivities. Here, the conductivities of air, soil and seawater are 0, 0.01, and 4.8 S/m, respectively. The coil current is $I_S = 15\cos(2\pi\times 10^4 t)$ A, and the coil radius is 6 m.

[Fig. 5: diagram linking each decomposed gain to its primary parameters. Circuit gain: frequency, coil shape and size, resistance and capacitance, RPMA friction and inertia. Eddy gain: frequency, conductivity, permittivity, permeability, and MIC distance. Space gain: permeability and MIC distance. Polarization gain: antenna orientation.]

Fig. 5. Relationship among circuit, eddy, space, and polarization gains. The normal fonts with gray background represent the primary parameters.

C. MI Fast Fading Channel

In this subsection, we introduce the recently discovered MI fast fading, including its causes, its distinctions from fading in other communications, and closed-form expressions for its PDF and expectation. We highlight the derivation challenge for a more universal statistical model and propose a universal geometric model. Additionally, we perform simulations based on this geometric model. These geometric models and simulations may aid further study of this derivation challenge.
1) MI fast fading modeling: The concept of MI fast fading originated in 2016 [89] and was formally defined in 2020 [9]. Such fading is primarily due to the time-varying polarization gain $\mathcal{J}_{SD}$ [89]. The fast fading channel is a crucial concept in the field of wireless communication: a fast fading channel varies significantly over the communication time scale, and the symbol time exceeds the coherence time [133, Chapter 1]. In [11], Ma et al. provided a detailed explanation of the differences between MICs and other communications, such as EMWC, free-space optical communication (FSOC), and acoustic communication, as depicted in Table X. Here, we summarize these issues as follows.

• The central limit theorem (CLT) cannot be applied to simplify the statistical channel model in MIC, making it difficult to derive a universal theoretical model.
• The expectation and variance of MI fast fading are unpredictable due to the AVI. Such AVI depends on the driver's mindset.
• When vibrating, antennas may encounter boundaries, causing the PDF of MI fast fading to be discontinuous.

In traditional MIC studies, the MI channel has been assumed to be quasi-static due to the reliance on near-field signals without multi-path fading. However, this assumption has been challenged in mobile MIC research [9], [11], [87], [89] since 2016. In 2016, Dumphart et al. [89] observed that the polarization gain often suffers from coil misalignment due to mobility and deployment in underwater MIC systems. They simply assumed a uniform distribution for $\theta_S$ and $\theta_D$ and derived the PDF of $J_{SD}$. For example, for SISO links, the PDF of $J_{SD}$ is given by
$$f_{J_{SD}}(x) = \begin{cases} \frac{\operatorname{arsinh}\sqrt{3}}{\sqrt{3}}, & \text{if } |x|\leq\frac{1}{2};\\ \frac{\operatorname{arsinh}\sqrt{3}-\operatorname{arsinh}\sqrt{4x^2-1}}{\sqrt{3}}, & \text{if } \frac{1}{2}<|x|\leq 1;\\ 0, & \text{if } |x|>1, \end{cases} \qquad (8)$$
with moments $E(J_{SD}^2) = \frac{1}{6}$ and $E(J_{SD}^4) = \frac{3}{50}$. This study [89] implies that the MI channel may not be quasi-static: the antenna orientation changes over time, making the MI channel unstable. Since weak vibrations are far more frequent than strong ones, the PDF in (8) cannot represent terrestrial/subterrestrial MI fast fading caused by antenna vibration.

TABLE XI
COMPARISON OF TWO PDF EXPRESSIONS FOR MI FAST FADING

Refs. | Methods, pros & cons | Eqs.
[89] | ✓ Simplified underwater model derived from a uniform distribution ✗ Incapacity to represent antenna vibration ✗ Lacks universality; underwater-specific ● Derivation based on probability theory | (8)
[11], [142] | ✓ Two antenna-vibration models derived from the BCS distribution and the boundary p(x) distribution, respectively ✗ Lacks universality; base-station-to-mobile-terminal MIC link only ● Derivation validated by Monte Carlo simulations | (9)

The markers '✓', '✗' and '●' represent pros, cons and methods, respectively.

For mobile TTE MICs using VLF-LA, previous studies have reported a Shannon capacity range of 0.5∼10 kbps at a 50-meter MIC distance [6], [7], [10]. The transmit rate achieved by TTE MIC experimental devices is significantly lower than this capacity, as mentioned in [11]. Meanwhile, the frequency spectrum of the road disturbance input causing antenna vibrations is approximately 10 Hz∼1000 Hz, as stated by ISO/TC108 and [143]. Since $\mathcal{J}_{SD}$ in the channel power gain $G_{SD}$ undergoes rapid changes during a symbol time, MI fast fading was formally defined in [9], [11].
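The quoted moments of (8) can be checked numerically. Assuming the standard near-field dipole alignment factor $J = \frac{1}{2}\left(3(n_S\cdot\hat{e})(n_D\cdot\hat{e}) - n_S\cdot n_D\right)$ with independent, uniformly distributed coil normals, which is our reading of the misalignment setting in [89], a short Monte Carlo run reproduces $E(J_{SD}^2)=\frac{1}{6}$ and $E(J_{SD}^4)=\frac{3}{50}$:

```python
import numpy as np

# Monte Carlo check of the moments quoted under (8), assuming the standard
# near-field dipole alignment factor J = (3(nS.e)(nD.e) - nS.nD)/2 with
# independent, uniformly distributed coil normals (our reading of [89]).
rng = np.random.default_rng(0)
N = 1_000_000

def random_unit_vectors(n):
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

nS, nD = random_unit_vectors(N), random_unit_vectors(N)
e = np.array([0.0, 0.0, 1.0])                  # link direction S -> D
J = (3 * (nS @ e) * (nD @ e) - np.sum(nS * nD, axis=1)) / 2

print(f"E[J^2] ~ {np.mean(J**2):.4f} (theory 1/6 = 0.1667)")
print(f"E[J^4] ~ {np.mean(J**4):.4f} (theory 3/50 = 0.0600)")
```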
Because the antenna vibration statistical model is more complex than that in [89], novel mathematical concepts, such as the Boundary Chi-square (BCS) distribution for VMIC systems, the boundary p(x) distribution, and the conjugate pseudo-piecewise function, were proposed to derive statistical expressions [11]. However, these expressions have only been derived for specific scenarios. Specifically, for the downlink VMI channel with equal antenna heights, the PDF of $J_{SD}$ can be simplified as
$$f_{J_{SD}}(x) = \begin{cases} \left[1-\operatorname{erf}\left(\sqrt{\frac{\varsigma}{2\sigma_D^2}}\right)\right]\delta_{pu}(0), & \text{if } x = 1-\varsigma;\\ \frac{\exp\left(-\frac{1-x}{2\sigma_D^2}\right)}{\sqrt{2\pi\sigma_D^2(1-x)}}, & \text{if } 1-\varsigma < x \leq 1;\\ 0, & \text{otherwise}, \end{cases} \qquad (9)$$
where the instantaneous AVI follows the BCS distribution proposed in [9]. From (9), the expectation of the MI fast fading can be deduced as
$$E(J_{SD}) = (1-\sigma_D^2)\operatorname{erf}\left(\sqrt{\frac{\varsigma}{2\sigma_D^2}}\right) + \frac{\sqrt{2\varsigma\sigma_D^2}\,\exp\left(\frac{-\varsigma}{2\sigma_D^2}\right)}{\sqrt{\pi}}. \qquad (10)$$
Here, (9) and (10) represent a 2D statistical model of MI fast fading for a downlink MI link. Deriving more universal models is challenging without the support of the CLT. To address this challenge, we propose an antenna vibration model, as depicted in Fig. 6, where each coil angle is divided into horizontal and vertical components. Since the model is expressed in the geodetic reference system, the horizontal and vertical components are independent, i.e., all random variables in this model are independent. Since theoretical statistical models for universal MIC scenarios remain unexplored, there are no calculation lines in Row 2 (from the top) of Fig. 7(b). Here, we estimate the MI fast fading through Monte Carlo simulations, as shown in Fig. 7(a) and Row 2 of Fig. 7(b). This may facilitate future theoretical derivations of a more universal expectation of MI fast fading.

2) Impact on channel power gain: Fig. 7 exhibits the averages/expectations of the MI fast fading gain. It is observed from Fig. 7(a) that most averages are below 0.9, implying a power loss of over 10% due to vibration. However, an interesting phenomenon arises: when the average AVIs of the Tx and Rx reach a certain threshold, the channel power gain $G_{SD}$ increases. This is due to the fact that $0\leq\mathcal{J}_{SD}\leq 4$. Thus, MI fast fading significantly influences the MIC channel.

3) Impact on outage probability: The outage probability is defined as the probability of the instantaneous SNR falling below a threshold $\Upsilon_{th}$ [11]. In earlier MIC studies, where the MIC channel is regarded as quasi-static [32], the outage probability loses its physical significance. In channels with fast fading, however, the outage probability is non-negligible. The closed-form expression of the outage probability $p^{SD}_{out}$ for a P2P MI fast fading channel is given by [9]
$$p^{SD}_{out} = P\left[J_{SD} < \frac{\Upsilon_{th}}{\frac{P_S}{N_o}C_{SD}S_{SD}E_{SD}}\right] = \int_{-\infty}^{\frac{\Upsilon_{th}}{\frac{P_S}{N_o}C_{SD}S_{SD}E_{SD}}} f_{J_{SD}}(x)\,dx. \qquad (11)$$
In Fig. 8(a), the simulation results based on (11) indicate that even when the Tx power approaches infinity, the outage probability exceeds 0.1 at a relatively small AVI angle of 37° (i.e., $\sigma_D = 0.6$), thereby exerting a notable adverse effect.

4) Impact on ergodic capacity: Under fast fading, the ergodic capacity $EC = E[\log_2(1+\mathrm{SNR})]$ (bit/s/Hz) is widely used as a long-term average achievable rate, i.e.,
$$EC = \int_0^\infty \log_2\left(1+\frac{P_S}{N_o}C_{SD}S_{SD}E_{SD}\,x\right) f_{J_{SD}}(x)\,dx. \qquad (12)$$
In Fig. 8(b), the simulation results based on (12) show that the ergodic capacity declines by nearly one-third at an AVI angle of 37°.
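For readers who want to reproduce such curves, the following sketch evaluates (11) and (12) under the PDF (9) by splitting the point mass at $x = 1-\varsigma$ from the continuous part. The values of $\sigma_D$, the boundary $\varsigma$, the composite SNR factor, and the outage threshold are illustrative assumptions:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

# Numeric sketch of the outage probability (11) and ergodic capacity (12)
# under the downlink-VMI PDF (9); sigma_D, the boundary varsigma and the
# composite SNR factor are illustrative assumptions.
sigma_D, vs = 0.6, 0.8                          # Rx AVI std-dev, boundary
snr0 = 10.0                                     # (P_S/N_o) * C_SD * S_SD * E_SD

p_atom = 1 - erf(np.sqrt(vs / (2 * sigma_D**2)))     # point mass at x = 1 - vs
f_cont = lambda x: (np.exp(-(1 - x) / (2 * sigma_D**2))
                    / np.sqrt(2 * np.pi * sigma_D**2 * (1 - x)))

# Outage (11): CDF of (9) at the assumed threshold x_th = Upsilon_th / snr0
x_th = 0.3
tail, _ = quad(f_cont, 1 - vs, x_th, limit=200)
p_out = p_atom + tail                           # atom lies below x_th here

# Ergodic capacity (12): atom contribution plus continuous part
ec_cont, _ = quad(lambda x: np.log2(1 + snr0 * x) * f_cont(x), 1 - vs, 1, limit=200)
ec = p_atom * np.log2(1 + snr0 * (1 - vs)) + ec_cont
print(f"p_out = {p_out:.3f}, EC = {ec:.2f} bit/s/Hz")
```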
5) Impact on BER: Recently, Chen et al. [142] established a fundamental performance limit for the BER of the MI fast fading channel under FSK modulation, given by
$$\mathrm{BER} = \int_0^\infty Q\left(\sqrt{\frac{E_b x}{N_o}}\right) f_{J_{SD}}(x)\,dx \;\geq\; Q\left(\sqrt{\frac{E_b}{N_0}E(J_{SD})}\right), \qquad (13)$$
where $Q(\cdot)$ is the Q-function and $E_b$ is the energy per bit. As shown in Fig. 8(c), there is a 100-fold increase in BER at an average AVI angle of 37°. Notably, (13) indicates that the BER supremum increases with the average MI fast fading gain $E(J_{SD})$, which means that a higher $E(J_{SD})$ does not guarantee better MIC performance.

Table XII summarizes the effects of MI fast fading on typical MIC performance metrics. Despite a few positive effects, MI fast fading exerts more pronounced negative impacts on MIC performance through the PDF $f_{J_{SD}}(x\,|\,(\sigma_S,\sigma_D))$, which depends heavily on the AVI inputs $\sigma_S$ and $\sigma_D$. In practical MIC systems, such AVI inputs frequently exhibit substantial temporal variability, especially under congested traffic conditions, leading to significant fluctuations in performance metrics over time. This characteristic markedly distinguishes MI fast fading from EMWC fast fading, as time-dependent performance variability is less pronounced in EMWC.

TABLE XII
IMPACT OF TYPICAL MI FAST FADING ON MIC PERFORMANCE

Performance metrics | Refs. | Vibration input | Effect points (compared to MIC without fast fading)
Average channel power gain | [89] | Uniform | ➘ Decrease to 1/6 in a SISO system
Average channel power gain | [11] | BCS | ➘ Decrease to about 80% at an Rx average AVI of 30° ➚ More uniform around the transmitter ➘ Severe variation in poor and congested road conditions
Average channel power gain | Fig. 7(a) | BCS | ➚ Increase over 5% when Tx average AVI angle = 90° and Rx average AVI angle = 90°
Outage probability | [11], Fig. 8(a) | BCS | ➘ Increase over 10% when average AVI σD > 0.6, even if Tx power PS → ∞
Ergodic capacity | Fig. 8(b) | BCS | ➘ A decrease from 3.4 to 2.7 bps/Hz when σD > 0.6 (i.e., average AVI angle = 37°)
BER | [142], Fig. 8(c) | BCS | ➘ A sharp increase from 10⁻³ to 10⁻¹ when σD > 0.6

The markers '➚' and '➘' represent positive and negative effects, respectively.
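The BER integral in (13) and its lower bound via (10) can be evaluated in the same way as the sketch above; the parameters below are again illustrative assumptions:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf, erfc

# Numeric sketch of the BER integral (13) and its lower bound, reusing the
# PDF (9) with the same illustrative parameters as in the previous sketch.
sigma_D, vs, ebn0 = 0.6, 0.8, 10.0
Q = lambda x: 0.5 * erfc(x / np.sqrt(2))        # Gaussian Q-function

p_atom = 1 - erf(np.sqrt(vs / (2 * sigma_D**2)))
f_cont = lambda x: (np.exp(-(1 - x) / (2 * sigma_D**2))
                    / np.sqrt(2 * np.pi * sigma_D**2 * (1 - x)))

ber_cont, _ = quad(lambda x: Q(np.sqrt(ebn0 * x)) * f_cont(x), 1 - vs, 1, limit=200)
ber = p_atom * Q(np.sqrt(ebn0 * (1 - vs))) + ber_cont

# Lower bound of (13) via the expectation (10)
EJ = ((1 - sigma_D**2) * erf(np.sqrt(vs / (2 * sigma_D**2)))
      + np.sqrt(2 * vs * sigma_D**2) * np.exp(-vs / (2 * sigma_D**2)) / np.sqrt(np.pi))
print(f"BER = {ber:.3e} >= bound {Q(np.sqrt(ebn0 * EJ)):.3e}")
```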
D. Summary and Lessons Learned

This section discusses the four decomposed gains influencing the channel power gain (Table IX) and MI fast fading (Table X). Table IX suggests optimization directions for each factor, such as improving the circuit gain in antenna designs.

Recent discoveries, including random coil misalignment [89] and MI fast fading [9], [11], challenge the widely accepted assumption in [32] that MIC channels are quasi-static. However, these findings address only limited aspects of non-static MI channels. In contrast to EMWCs, research on non-static MI channels is still in its early stages. Key issues include: 1) no universal statistical model exists for MI fast fading, analogous to the Rayleigh model for EMWCs, since the existing distribution models of AVI (e.g., the BCS distribution) are scenario-specific; 2) the effects of MI fast fading on existing MIC theorems, such as channel estimation, CMICs, MI MIMO, and MI protocols, remain unexplored; and 3) MI fast fading negatively impacts the performance of MIC systems. The challenges involved include: 1) the limited number of independent components of the antenna vibration makes it difficult to apply the CLT, a critical method for obtaining fast fading models in EMWCs; and 2) the expectation/variance of MI fast fading remains random due to its strong dependence on the antenna vibration velocity. To address these challenges, we introduce a universal geometric model for antenna vibration (Fig. 6), which transforms all dependent components into independent ones. This model enables the estimation of the MI fast fading gain via Monte Carlo simulation, as shown in Fig. 7.

The practical takeaways and common pitfalls include: 1) existing MI fast fading models may not hold when the near-field and weak-coupling conditions are not satisfied; 2) the eddy gain requires correction in TTE practice, as TTE media are rarely homogeneous; 3) for VMICs, the ferromagnetic iron in the vehicle body can distort the magnetic field, potentially causing significant deviations in existing MI channel models; and 4) improving or uniformizing the polarization gain $\mathcal{J}_{SD}$ benefits MICs, whereas doing this for the average MI fast fading gain $E[J_{SD}]$ may be detrimental due to increased outage probability and BER.

[Fig. 6: geometric antenna vibration model in 3-D space, decomposing each coil-normal vibration into a horizontal component (horizontal view) and a height-direction component (vertical view).]

Fig. 6. Advised antenna vibration modeling in a 3-D space with independent random variables; n(·), ϕ(·) and θ′(·) denote the normal vector and the horizontal and vertical components of the antenna vibration (angle) of node (·), respectively.

[Fig. 7: heat map (a) and curves (b) of the expectation/average of the MI fast fading gain versus the average AVIs of S and D.]

Fig. 7. Expectation of MI fast fading JSD for the model in Fig. 6: (a) ad-hoc link in 3D space; (b) two specific links. Here, the AVIs of S and D follow BCS distributions, with a boundary ς = 0.8 and their respective average AVIs σ²S and σ²D. Their horizontal components ϕS and ϕD follow the uniform distribution within [0, 2π). The dotted lines of Rows 1 and 2 in Fig. 7(b) are obtained through Monte Carlo simulations (sim), and the solid line of Row 1 in Fig. 7(b) is calculated (calc) from (10). The overlap between the calculated curve of Row 1 and its simulated counterpart validates (10). The calculation curve for Row 2 cannot be presented due to the lack of a universal expression.

IV. DESIGN OF POINT-TO-POINT TTE MIC

This section introduces MI antenna designs and compares the advantages and disadvantages of four MI antenna types, including the RPMA design as a recent research hotspot in MI antennas. Subsequently, we discuss the key performance metrics of point-to-point (P2P) MI links, with a particular focus on the challenging derivations of the 3-dB MI bandwidth and the MIC range for a TTE MI link. Moreover, we discuss upper-layer P2P MI techniques, such as channel estimation, modulation, and FEC. Recent challenges and approaches for these techniques are briefly summarized in Table XIII.
TABLE XIII
OVERVIEW OF RELATED RESEARCH ON P2P MIC TECHNIQUES
(The marker '●' describes the methods; the markers '✓' and '✗' represent pros and cons, respectively.)

Aspects | Refs. | Addressed issues & methods | Remaining issues (& proposed approaches) | Priority†
Channel modeling | [11], [12], [37], [78] | ✓● Two typical MI fast fading models ✓● See Table IX in Section III | ✗● More universal MI fast fading modeling ✗● See Table IX in Section III | ✸✸✸
Antenna design | [11], [78], [122] | ✓● Coil, orthogonal MIMO, RPMA, M2I ✓● See Table XIV | ✗● Orientation sensitivity; practical deployment issues ✗● See Table XIV | ✸✸
Capacity | [37], [39] | ✓ Expressions of capacity for coil-based and M2I MICs | ✗ Expressions of capacity for RPMA-based MICs | ✸✸
Bandwidth | [10], [41] | ✓ Expressions (16), (17), and (18) (see Table XV) | ✗ Bandwidth of RPMA-based MICs | ✸✸✸
MIC range | [42] | ✓ Expression for short MIC range (see (19)) | ✗ Expressions for TTE MIC with significant eddy gain | ✸✸✸
Channel estimation | [144] | ✓ Lacking sufficient CSI ● Transmitter-side estimation without explicit training sequence | ✗ Insufficient feedback for dyadic backscatter MI channel estimation | ✸
Channel estimation | [145] | ✓ Unknown propagation environment (e.g., medium conductivity) ● Environment-aware method using a kernel function to learn the positive/negative factors in the environment | ✗ Unclear in the case of VLF TTE MICs and MIC with fast fading | ✸
Channel estimation | [43] | ✓ For the inter-media MI backscatter channel ● Joint channel estimation and data transmission, and a stratified medium model for high penetration | ✗ Lacking analysis for VLF and long-distance MIC | ✸
Modulation | [45] | ✓ Basic MI modulation schemes and filter design ● Frequency-division multiplexing (FDM) with multiple sub-bands | ✗ Incompatibility with RPMA systems ✗ Shortcomings similar to FDM in EMWCs | ✸✸
Modulation | [20], [44], [124], [146] | ✓ Basic modulation for RPMA-based MICs ● Amplitude shift keying, on-off keying, frequency-shift keying, chirp-rate shift keying | ✗ Requiring additional GTSs and energy for overcoming inertia | ✸✸
Channel coding | [52] | ✓ FEC with higher channel estimation requirements ● EMW-based multilevel cyclic BCH coding | ✗ Ignoring non-static MI channel cases ✗ FEC's resistance characteristics to MI fast fading | ✸
Channel coding | [46] | ✓ Polar code for a variably attenuating MI channel ● Bhattacharyya parameter estimation algorithm | ✗ Higher channel estimation requirements with low capacity ✗ Ignoring ESD and JSD in their algorithm ✗ FEC's resistance characteristics to MI fast fading | ✸✸✸

† Priority: priority level of remaining issues and proposed approaches for exploration. Here, low priority (✸) indicates that: 1) the remaining issues have been explored in subsequent MIC literature; 2) the existing EMWC schemes are compatible with MIC for these issues; or 3) exploring these issues is optional.

[Fig. 8: three panels: (a) outage probability, (b) ergodic capacity, and (c) BER, each plotted versus the Rx average AVI (σD), with and without MI fast fading.]

Fig. 8. Impact of MI fast fading on MIC performance: (a) outage probability vs. Rx average AVI (σD); (b) ergodic capacity vs. Rx average AVI (σD) at an SNR of 10; and (c) BER vs. Rx average AVI (σD) with Eb/No = 10 [142]. The solid lines in Figs. 8(a), 8(b) and 8(c) are calculated from (11), (12) and (13), respectively. The dotted lines in Fig. 8(a) are obtained through Monte Carlo simulations.

A. Antenna Design

In this subsection, we introduce four MI antenna designs, including the RPMA design. Table XIV summarizes their key features.
Detailed studies are presented below.

1) Coil Tx/Rx antenna: The coil is the most widely used antenna in the field of MICs [15], [32] owing to its high receive sensitivity, ease of implementation, and easy deployment. It is very suitable for TTE applications with limited free space. The utilization of coils dates back to research on NFC, a short-range MIC that commonly employs a 13.56 MHz carrier frequency according to the ISO/IEC 18092 standard. The MIC link is established through mutual inductive coupling between the Tx coil and the Rx coil (see Fig. 9). However, unlike standard NFC, MICs for UG-WSNs and UW-WSNs exhibit weak coupling due to their longer communication distances. The Tx coil is typically modeled as a magnetic dipole, and the bandwidth of such an MIC link is much smaller than that of standard NFC, especially for TTE MIC applications.

[Fig. 9: equivalent two-port circuit with source voltage US, coil inductances LcS and LcD, tuning capacitances CcS and CcD, coil resistances RcS and RcD, load resistances RL, and mutual inductance MSD over the distance dSD.]

Fig. 9. Equivalent circuit model for the coil-based link S→D, where US denotes the instantaneous voltage in the Tx antenna and dSD denotes the distance between source S and destination D. Additionally, RcS, RcD, LcS, LcD, CcS, CcD and RL are described in Table II.

Due to this weak coupling, the Tx coil with current $I_S$ generates a magnetic moment with norm $|m_S| = \pi a_{cS}^2 a_{cD}^2 N_S N_D I_S$ and a current-loss compensating coefficient $\aleph_{SD} = \frac{(2\pi f)^2 R_L}{Z_S Z_D^2}$. Such an $\aleph_{SD}$ predominantly determines the frequency bandwidth of an MI link. The overall circuit impedances of the Tx and Rx coils are denoted by $Z_S = j2\pi f L_{cS} + \frac{1}{j2\pi f C_{cS}} + R_{cS} + R_L$ and $Z_D = j2\pi f L_{cD} + \frac{1}{j2\pi f C_{cD}} + R_{cD} + R_L$, respectively. The channel power gain $G_{SD}$ of the coil-based link S→D has the specific circuit gain
$$C_{SD} = \frac{|m_S|^2\aleph_{SD}}{|I_S|^2} = \left(\pi a_{cS}^2 a_{cD}^2 N_S N_D\right)^2 \frac{(2\pi f)^2 R_L}{Z_S Z_D^2}. \qquad (14)$$

TABLE XIV
COMPARISON AMONG DIFFERENT ANTENNA TYPES

Antenna types | Direction | Focuses on GSD | Advantages | Disadvantages | Refs.
Coil | Rx/Tx | JSD, CSD | 1) Low system complexity 2) Easy deployment 3) Lower costs | 1) Strong orientation sensitivity 2) Low bandwidth and energy efficiency 3) Remarkable fast fading in mobile MICs | [6], [11], [12]
Orthogonal MIMO coils | Rx/Tx | JSD | 1) Low orientation sensitivity 2) High receive sensitivity 3) Higher capacity and bandwidth | 1) Challenging deployment in vehicle and TTE environments 2) Complex circuit and protocol designs | [8], [51]
RPMA | Tx | CSD | 1) Low energy consumption under VLF 2) Smaller antenna size for TTE and mobile MICs 3) Lower crosstalk effects | 1) Only for Tx antennas 2) High maintenance costs 3) Limited mS by permanent magnets 4) Longer GTS requirement due to inertia | [20], [44], [66], [78], [82], [124], [146]–[152]
M2I | Rx/Tx | SSD | 1) Extremely high receive sensitivity 2) Higher capacity and bandwidth | 1) High costs 2) Larger radius than a coil | [37], [122], [123], [153]–[155]

[Fig. 10: non-conventional MI antenna types: (a) orthogonal MIMO coils [8]; (b) a typical RPMA consisting of a motor, shaft, chassis, and rotating permanent magnet [124]; (c) an M2I antenna with a coil enclosed by a metamaterial shell [37].]

Fig. 10. Non-conventional MI antenna types: (a) orthogonal MIMO coils [8]; (b) a typical RPMA [124]; (c) M2I antenna [37].

From (4) and (14), the circuit gain $C_{SD}$ in $G_{SD}$ increases with the frequency $f$. Meanwhile, the eddy loss expressed by $E_{SD}$ in $G_{SD}$ also increases with $f$. When $f$ is sufficiently large, the condition $\sigma_u \gg 2\pi f\epsilon_u$ no longer holds, and the effect of the circuit gain $C_{SD}(f)$ becomes significant. Consequently, coil-based MIC systems with a higher frequency achieve higher performance compared to the RPMA.
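A short sketch of (14) for identical series-resonant coils illustrates how the circuit gain peaks near the resonance; the lumped element values are hypothetical, chosen only to make the resonance visible:

```python
import numpy as np

# Sketch of the coil circuit gain (14) versus frequency; the lumped element
# values below are hypothetical, not taken from Table III.
N_S = N_D = 20                     # coil turns
a_S = a_D = 0.5                    # coil radii a_cS, a_cD (m)
L, Ct, Rc, RL = 2e-3, 1.2e-6, 1.0, 50.0   # coil L, tuning C, loss R, load

def Z(f):                          # identical series-resonant loops Z_S = Z_D
    w = 2 * np.pi * f
    return 1j * w * L + 1 / (1j * w * Ct) + Rc + RL

def C_SD(f):                       # magnitude of the circuit gain (14)
    aleph = (2 * np.pi * f)**2 * RL / abs(Z(f))**3
    return (np.pi * a_S**2 * a_D**2 * N_S * N_D)**2 * aleph

f0 = 1 / (2 * np.pi * np.sqrt(L * Ct))    # resonance frequency
for f in (0.5 * f0, f0, 2 * f0):
    print(f"f = {f:8.0f} Hz -> C_SD = {C_SD(f):.3e}")
```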
Also, the expressions for ZS and ZD indicate that the coil resonance makes the MI channel frequency-selective. This results in a narrow 3-dB bandwidth and a negative impact on the design of upper-layer protocols. 2) Orthogonal MIMO coils: SISO coil-based MIC systems exhibit antenna orientation sensitivity due to polarization gain JSD. This can cause remarkable fast fading in mobile MIC ap- plications. To address this, Lin et al. developed the orthogonal MIMO coil antennas [52], consisting of three orthogonal coils, as shown in Fig. 10(a). This antenna reduces the orientation sensitivity and enhances the MIC channel capacity through spatial diversity. Their directional pattern showed that the minimal mutual inductance increased from 0 to 1 3 of maximal mutual inductance. However, for the TTE applications using VLF-LA [6], [10], deployment of such an antenna faces challenges due to limited free space and crosstalk among coils. 3) RPMA: The RPMA is a mechanical antenna (see Fig. 10(b)) that can generate a magnetic field of 960 Hz [147] and achieves a transmission bit rate of over 12.5 bits/s [20], [148]. Bickford et al. have been designing RPMA since 2017 [156]. Recently, the RPMA was used for cross-medium communi- cation in [150], [151], where the research [150] focused on omnidirectional communication and the research [151] applied the bionic methodology. Apart from the channel modeling, researchers also stud- ied modulation for RPMA-based systems such as ampli- tude shift keying (ASK) [124], on-off keying (OOK) [20], frequency-shift keying (FSK) [146] and Chirp-rate shift keying (CSK) [44]. When using a single magnet as an RPMA, circuit gain CSD changes, while the other gains in (4) remain unchanged. According to [78], the moment generated by a single magnet can be calculated by |mS| = Vm ISµ0 , where Vm denotes the volume of the RPMA. The factor 1 IS represents the remanence (denoted by Brm) of RPMA. Thus, the circuit gain can be calculated by CSD = |mS|2ℵSD 16π2|IS|2 =  BrmVm 4πµ0 2 ℵS(f)ℵD(f), (15) where µ0 is the vacuum permeability; 1 ℵS(f) and 1 ℵD(f) are the friction loss of RPMA and the circuit loss of the receiving device, respectively. Here, 1 ℵS(f) increases as the frequency f increases. In contrast, 1 ℵD(f) decreases as the frequency increases since the Rx antenna is a general coil-based MIC device. According to [142], for a link with a Tx RPMA (as shown in Fig. 10(b)) and an Rx coil, the circuit loss can be expressed as 1 ℵD(f) = 2ZD π2facD . The experiment in the sea near Sanya city [78] indicated that the RPMA Rx node received a magnetic field of 25 nT at a distance of 10 m. Mechanical constraints, such as inertia and energy loss, may limit frequency agility or energy efficiency. For the RPMA shown in Fig. 10(b), the instantaneous input power is P RPMA S =(τfr+τnr)2πf/η [78], where η denotes the en- ergy conversion efficiency, and τfr denotes the friction loss. The inertia constraint τnr= Inr2πf dtS depends linearly on the frequency f, indicating a higher energy proportion of τnr in high-frequency systems. The moment of inertia Inr≃ 3.75 × 10−4 kg·m2 leads to a rated angular acceleration of approximately 539 Hz/s [78]. Some issues were not considered in previous studies. By comparing (15) with (14), it can be found that RPMA-based system can achieve higher performance under VLF conditions and lower performance under high-frequency conditions. This phenomenon can be attributed to the increase in friction and inertia associated with higher rotating speeds. 
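The quadratic growth of the inertia-related input power with rotation frequency can be seen from a small numeric sketch of the model $P_S^{RPMA} = (\tau_{fr}+\tau_{nr})2\pi f/\eta$ quoted from [78]; the efficiency, friction torque, and spin-up time are illustrative assumptions, and only the moment of inertia matches the value quoted above:

```python
import numpy as np

# Sketch of the RPMA input-power model P_S = (tau_fr + tau_nr)*2*pi*f/eta
# from [78]; efficiency, friction torque and spin-up time are assumptions.
eta, tau_fr = 0.7, 1e-3            # conversion efficiency, friction torque (N*m)
I_nr, dt_S = 3.75e-4, 0.5          # moment of inertia (kg*m^2), spin-up time (s)

for f in (30.0, 300.0, 960.0):     # rotation frequencies (Hz)
    tau_nr = I_nr * 2 * np.pi * f / dt_S       # inertia torque during spin-up
    P = (tau_fr + tau_nr) * 2 * np.pi * f / eta
    print(f"f = {f:6.1f} Hz -> P_RPMA = {P:8.3f} W")
```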
Additionally, the lifespan of an RPMA-based MIC device may be shorter than that of a coil-based one due to friction. In particular, RPMA-based MIC requires an additional guaranteed time slot (GTS) to overcome inertia, resulting in additional data-rate loss and a variable data rate for the same packet length. Most importantly, permanent magnets cannot generate magnetic moments strong enough to support MIC over ultra-long distances (e.g., 1,500 m). Massive RPMA MIMO is therefore promising, owing to the negligible crosstalk among RPMAs.

4) M2I antenna: From (4), we observe that the space gain S_SD ∝ 1/d_SD⁶, indicating that the magnetic field attenuates very fast along the signal path. To strengthen the magnetic field around MI receivers, Guo et al. [37] proposed the M2I enhancement technique, which encloses MI antennas in metamaterial shells (see Fig. 10(c)). Unlike natural inductors, a metamaterial is an artificial material with arbitrary permeability and permittivity. It compensates the antenna-generated reactive power and converts it into real power, which extends the MIC range. They also proposed an easily implementable M2I antenna whose shell is made of many small coils for practical use in [122], [153]. Subsequently, Li et al. [123] designed an active and reconfigurable M2I antenna under given meta-sphere size and power constraints. Although the simulations in [37] indicated that the MIC distance increased from 20 m to 60 m at a capacity of 1 bit/s, deploying M2I may be challenging due to the large space occupied by the metamaterial shell. For example, the meta-sphere shell in [37] has a radius six times that of a coil and occupies a volume about 8,649 times larger. This creates a critical trade-off between performance and deployability. In scenarios similar to deep mining, this volume penalty is justifiable; in narrow boreholes or mobile applications, however, conventional coils or RPMAs remain preferable.

B. Channel Capacity and Bandwidth

As early as 2013, Sun et al. [130] suggested that the capacity C_SD of a flat MIC channel can be derived from Shannon's equation C_SD = B_w log₂(1 + (P_Sf/N_of)·G_SD). For P2P MIC, the MI bandwidth is important for enhancing channel capacity, especially when the Tx and noise power spectral densities (PSDs) are predetermined and cannot be altered. Bandwidth here refers to the 3-dB bandwidth, defined as the frequency range over which the signal power is at least half its peak. There are primarily two methods (see Table XV) for evaluating the 3-dB bandwidth of MI links. When the noise PSD is fixed, the bandwidth can be obtained by solving the following inequality with respect to f:

G_SD(f)·P_Sf(f)/N_of ≥ (1/2)·G_SD(f₀)·P_Sf(f₀)/N_of, (16)

which is referred to as the inequality calculation. For TTE MICs, the Tx coil is treated as a magnetic dipole. Assuming identical Tx and Rx coils, the study [10] obtained the bandwidth of a P2P coil-based MIC link as B_w,SD^dipole = B_w(⅛(R_cD + R_L)³), where the function B_w(·) is

B_w(Z_C) ≃ √(ϖ_w(Z_C) + ϱ_w(Z_C)) − √(ϖ_w(Z_C) − ϱ_w(Z_C)),
ϖ_w(Z_C) = f₀² + 2π²C_cD²f₀⁴[Z_C^(−2/3) − (R_cD + R_L)²],
ϱ_w(Z_C) = √{ (f₀² + 2π²C_cD²f₀⁴[Z_C^(−2/3) − (R_cD + R_L)²])² − f₀⁴ }. (17)

However, (17) relies on the VLF-LA assumption (1/(2πf₀²C_cD) ≫ 2πM_SD) and on a_cS = a_cD. If these assumptions are not met, the bandwidth B_w,SD^dipole must be obtained through numerical methods.
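When (17) fails, the numerical fallback is the inequality calculation itself. The sketch below applies (16) to a toy series-RLC gain model (an assumption, not the Table III link) with flat Tx and noise PSDs, so the condition reduces to G_SD(f) ≥ G_SD(f₀)/2.

```python
import numpy as np

# Numerical "inequality calculation" of the 3-dB bandwidth, Eq. (16).
R, L = 51.0, 1e-3                      # total series resistance (ohm) and inductance (H), assumed
f0 = 10e3                              # resonance frequency (Hz), assumed
C = 1.0 / ((2 * np.pi * f0) ** 2 * L)  # matching capacitor for resonance at f0

def G(f):
    """Toy channel power gain, proportional to (2*pi*f)^2 / |Z(f)|^2 (cf. Eq. (14))."""
    w = 2 * np.pi * f
    Z = 1j * w * L + 1.0 / (1j * w * C) + R
    return (w ** 2) / np.abs(Z) ** 2

f = np.linspace(1e3, 100e3, 200_000)
band = f[G(f) >= 0.5 * G(f0)]          # frequencies satisfying Eq. (16)
print(f"3-dB band: {band.min()/1e3:.2f} kHz .. {band.max()/1e3:.2f} kHz, "
      f"Bw ~ {(band.max() - band.min())/1e3:.2f} kHz")
```

The same search works for any non-coil gain model, at the cost of losing a closed-form expression.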
This inequality calculation can be extended to non-coil-based MIC, but it may not yield closed-form analytical expressions.

Studies have also considered MICs with shorter communication ranges, particularly in BAN applications.

TABLE XV: COMPARISON OF TWO BANDWIDTH CALCULATIONS
- [10]: Advantage: applicable to non-coil MICs. Limitations: closed-form solution only in the homogeneous VLF-LA limit. Typical Eqs.: (16), (17).
- [40], [41]: Advantage: not limited to homogeneous coils. Limitations: only for coil-based MIC. Typical Eqs.: (18).

In these scenarios, the Tx coil is not treated as a magnetic dipole [40], [41], and the bandwidth and channel capacity become more complex. Put simply, since the loosely coupled coils impose boundary conditions through the coupling coefficient k_c = M_SD/√(L_cS·L_cD), both the bandwidth and the channel capacity are piecewise functions of Q_S and Q_D. Here, Q_S = 2πf₀L_cS/(R_cS + R_L) and Q_D = 2πf₀L_cD/(R_cD + R_L) are the loop quality factors of the Tx and Rx devices, respectively. In [41], the bandwidth of this model is estimated as

B_w,SD^coupling = f₀/Q_S, if Q_S > Q_D; f₀/Q_D, if Q_S < Q_D. (18)

Obviously, (18) cannot be applied to non-coil-based MICs, such as RPMA-based MI links. Since f₀ = 1/(2π√(L_cS·C_cS)), the two calculation methods (17) and (18) imply that the P2P MIC bandwidth under VLF is roughly independent of the resonance frequency f₀. Under the simulation parameters of Table III, the bandwidth evaluated from (16) is around 450 Hz for 1 kHz ≤ f₀ ≤ 1 MHz.

Next, we focus on the MI channel capacity. Fig. 11 shows that higher frequencies boost MIC capacity at shorter ranges and also enhance capacity over long distances in air. In these cases, the decreasing trend of capacity with distance is determined by the space gain S_SD, while the eddy gain E_SD is negligible. For TTE and undersea MICs, however, the capacity drops sharply with increasing frequency and distance, because the eddy gain E_SD becomes the key determinant of the channel power gain G_SD. These frequency characteristics of MIC bandwidth and capacity motivate researchers and engineers to adopt VLF-LA methods in TTE MIC networks, as evidenced by [6]–[11], [119]. Even at a VLF of 1 kHz, the MIC capacity in low-conductivity media (0.01 S/m, e.g., soil) is over 320-fold greater than in high-conductivity media (4.8 S/m, e.g., seawater) at a 45 m MIC distance. Fig. 11 also shows that the MIC channel capacity drops below 10 kbits/s beyond a distance of 60 m.
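The accelerating capacity drop in conductive media can be reproduced qualitatively in a few lines. The sketch below combines Shannon's formula with an assumed d⁻⁶ space gain and the eddy attenuation e^(−2d/δ_u); the aggregate constants K and P/N₀ are placeholders, not Table III values.

```python
import numpy as np

# Toy reproduction of the Fig. 11 trend: capacity vs. distance per conductivity.
mu0 = 4e-7 * np.pi
f = 1e3          # carrier frequency (Hz)
Bw = 450.0       # bandwidth (Hz), cf. the ~450 Hz estimate above
K = 1e9          # aggregate link constant (assumed)
PN = 1e6         # Tx power over noise PSD (assumed)

for sigma in (1.8e-4, 0.01, 4.8):                       # conductivities (S/m)
    delta = np.sqrt(1.0 / (np.pi * f * mu0 * sigma))    # skin depth (m)
    d = np.array([15.0, 45.0, 90.0])                    # distances (m)
    G = K * d ** -6 * np.exp(-2 * d / delta)            # space gain x eddy attenuation
    C = Bw * np.log2(1.0 + PN * G / Bw)                 # Shannon capacity (bit/s)
    print(f"sigma={sigma:8.4f} S/m  delta={delta:8.1f} m  C(d)={np.round(C, 2)} bit/s")
```

In low-conductivity media the skin depth is large and the d⁻⁶ term dominates; in seawater the exponential eddy term takes over, matching the "accelerating drop" regime of Fig. 11.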
C. Communication Range

The communication range is the distance between the transmitter and receiver over which data can be received correctly. It is the key performance metric for TTE communications, and many MIC studies focus on it. For example, Sun et al. conducted simulations and experiments on the communication range and proposed the MI waveguide to extend it [12]; Guo et al. [37] proposed M2I-enhanced communication to increase the range; and Zhang et al. used a relay to extend the MIC range [6].

For MICs, most studies (e.g., [12], [85], [108], [116]) have simplified the channel model by omitting the polarization and eddy gains to focus on their respective concerns, and evaluated the corresponding performance metrics (e.g., path loss [12] and capacity [13]) with respect to distance via simulations or experiments.

Fig. 11. MATLAB simulation of MIC channel capacity vs. distance under different environments (conductivities of 1.8E-4, 0.01, and 4.8 S/m; frequencies of 1 kHz, 100 kHz, and 10 MHz). The curves exhibit a decelerating drop at short distance, a near-linear decelerating descent in the atmosphere, and an accelerating drop in UG and undersea media. All simulation parameters are listed in Table III, except for the frequency f (shown in the legend) and the MIC distance d_SD (the simulated independent variable). This simulation is based on C_SD = B_w log₂(1 + (P_Sf/N_of)·G_SD); see (4) and (17).

Fig. 12. MIC range simulations: (a) Description of the MIC range (SNR vs. distance for frequencies from 1 kHz to 10 MHz against a threshold Υ_th); (b) MIC range vs. carrier frequency for different UG materials (σ_u = 1.8E-4 (atmosphere), 0.001 (drinking water), 0.010 (dry soil/rock), 0.500 (wet soil/rock), 4.800 (sea water)). The horizontal coordinates of the solid rectangles represent the MIC ranges within which the SNRs are sufficiently strong to receive correct data. All MATLAB simulation parameters are listed in Table III, and the SNRs are obtained from the left side of (20).

Obtaining a closed-form expression of the MIC range may initially seem straightforward. For example, Zhou et al. [42] derived the MIC range d*_SD as

(d*_SD)³ = (µ|m_S| / (4πS_min)) |3(m̂_S·r̂₀)r̂₀ − m̂_S|, (19)

where m̂_S and r̂₀ denote the unit vectors of m_S and of the direction S→D, respectively, and S_min denotes the minimum detectable signal strength. Obviously, (19) does not consider the eddy gain E_SD. In TTE applications, the significant eddy currents produced by large volumes of underground material cannot be ignored, as illustrated by comparing the decelerating descent with the accelerating descent in Fig. 11. Assuming a required SNR threshold Υ_th for correct data reception (see Fig. 12), the MIC range d*_SD of the MI link S→D can be obtained by solving P_S·G_SD(d*_SD)/N_o = Υ_th with respect to d*_SD, i.e.,

(P_S·S_SD·µ_u²·J_SD / N_o) · (1/d*_SD) · e^(−d*_SD/δ_u) = Υ_th. (20)

Fig. 12(a) describes the MIC range for TTE scenarios via MATLAB simulation. The simulation shows that the MIC system achieves its largest range at 10 kHz, which is neither the highest nor the lowest frequency in this simulation. Furthermore, the maximal MIC range exceeds 100 m but remains below 50 m in wet soil, as shown in Fig. 12(b).
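Because the left-hand side of (20) decreases monotonically with distance, the MIC range can be found by simple bisection. The sketch below uses an assumed aggregate constant K that lumps P_S, S_SD, µ_u², J_SD, and N_o together.

```python
import numpy as np

# Bisection solver for the MIC range d* in the spirit of Eq. (20):
# find d such that SNR(d) = Upsilon_th, with SNR(d) = K * d^-6 * exp(-d/delta_u).
K = 1e12          # aggregate link constant (assumed)
delta_u = 25.0    # skin depth (m), assumed for a given medium and frequency
Ups_th = 10.0     # required linear SNR threshold (10 dB)

def snr(d):
    return K * d ** -6 * np.exp(-d / delta_u)

lo, hi = 0.1, 1e4
assert snr(lo) > Ups_th > snr(hi), "threshold must be bracketed"
for _ in range(200):                      # SNR(d) is monotone decreasing in d
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if snr(mid) > Ups_th else (lo, mid)
print(f"MIC range d* ~ {0.5 * (lo + hi):.2f} m")
```

The same solver applies to any monotone gain model, which is why numerical range evaluation remains the practical default until a closed form of (20) is available.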
Fig. 12(b) illustrates the relationship between carrier frequency and MIC range across UG media with varying conductivities: for high-conductivity media, the range varies non-monotonically with increasing frequency, while for low-conductivity media it increases more consistently. Moreover, in lower-conductivity materials, further optimization of the MIC range is possible once the solution of (20) is obtained.

D. Upper-Layer P2P Techniques

This subsection discusses upper-layer P2P techniques, including channel estimation, modulation, and FEC schemes. Compared to EMWC, applying these schemes to TTE MIC faces the issues of limited bandwidth and channel capacity. For instance, Chen et al. [46] proposed a Polar coding scheme with code lengths of 256 and 1024, yet the capacity of the TTE MIC channel may not accommodate such large frames.

1) Channel estimation: Compared to EMWC, MI channel estimation faces insufficient bandwidth for CSI exchange, but the assumption of a longer MI channel coherence time helps improve estimation performance. Exploiting this feature, a transmitter-side channel estimation method (see Fig. 13) was proposed in [144]; it does not require an explicit training sequence. However, insufficient feedback challenges the estimation of the dyadic backscatter channel (DBC). Guo et al. [43] designed a system for joint channel estimation and data transmission. They considered the effects of UG material conductivity and sensor depth, which are typically not a focus in the EMWC field.

Fig. 13. Detection of changes in the mutual inductance M for MI channel estimation [144]. γ̃ is the estimated positive coefficient and M = γ̃M₀, where M₀ denotes the initial value of the mutual inductance; q denotes a training sequence; h stands for the discrete-time channel impulse response; n denotes the noise; z is the equalized received signal; and F_f(·) and F_b(·) are discrete-time feedforward and feedback filters, respectively.

2) Modulation: The modulation scheme faces low bandwidth and frequency-selectivity, which favor higher-order modulation. Kisseleff et al. [45] proposed a modulation approach similar to frequency-division multiplexing (FDM) for MI links. As in EMWCs, the FDM scheme suffers from inter-subcarrier interference, and it may not be suitable for RPMA systems. Hence, researchers have applied amplitude shift keying [124], frequency shift keying [20], [146], chirp-rate shift keying [44], and on-off keying [20] to MIC systems. Due to the inertia of the RPMA, additional delays must be budgeted when switching its mechanical states.

3) Channel coding: Channel coding is important in the physical layer. Lin et al. [52] used a BCH code for FEC to enhance MI link transmission reliability. However, their BCH-based design did not consider that the underground MI channel is not quasi-static. To tackle this, Chen et al. [46] proposed a Polar code construction scheme with Bhattacharyya parameters specifically optimized for MI UG-WSNs. One of its inputs is the SNR with respect to the circuit gain C_SD and the space gain S_SD; in other words, the authors did not consider the eddy gain E_SD and the polarization gain J_SD. Compared to traditional coding, this work accounted for multiple distributions within a codeword caused by underground conductors. However, the method requires frequent CSI exchange, which may be challenging for VLF-LA MIC.

Fig. 14. BER simulations of BPSK modulation and FECs for the MI channel and the MI fast-fading channel, where the AVI follows the BCS distribution (MI BCS): σ_D² = 0.52, ς = 0.8; FECs: BCH (n = 15, k = 11) and Polar coding (n = 128, k = 66). The FECs exhibit an anti-fast-fading characteristic.
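As a rough illustration of the anti-fast-fading behavior in Fig. 14, the Monte Carlo sketch below compares BPSK over a quasi-static channel and over a random-gain channel. A rate-1/3 repetition code with soft combining stands in for the BCH/Polar FECs (which would exceed a short sketch), and the lognormal AVI gain is an assumed placeholder for the BCS distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bits, EbN0_dB = 200_000, 6.0
EbN0 = 10 ** (EbN0_dB / 10)

def ber(fading: bool, rep: int) -> float:
    bits = rng.integers(0, 2, n_bits)
    x = np.repeat(2.0 * bits - 1.0, rep)                    # BPSK + repetition code
    g = rng.lognormal(0.0, 0.5, x.size) if fading else 1.0  # multiplicative AVI gain
    sigma = np.sqrt(rep / (2.0 * EbN0))                     # keep Eb fixed across code rates
    y = g * x + sigma * rng.standard_normal(x.size)
    soft = y.reshape(-1, rep).sum(axis=1)                   # soft-combine the repeats
    return float(np.mean((soft > 0).astype(int) != bits))

for fading in (False, True):
    for rep in (1, 3):
        print(f"fading={fading!s:5}  rep={rep}  BER={ber(fading, rep):.4f}")
```

On the static channel the repetition code buys nothing at fixed Eb/N0, but under fading the per-chip diversity noticeably lowers the BER, the same qualitative effect the real FECs show in Fig. 14.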
By reviewing the literature on modulation and FEC in P2P MI channels (e.g., [46], [52]) and considering MI fast-fading effects, we conducted the comparative simulations shown in Fig. 14. An appropriate FEC strategy significantly reduces the BER, with Polar coding offering far greater gains than BCH. Moreover, FEC mitigates the adverse impact of MI fast fading on the BER at low SNRs: as shown in Fig. 14, with FEC employed the dashed and solid lines nearly coincide. Although we minimized the physical-layer frame length when simulating current modulation and FEC schemes, it remains excessive for TTE MIC channels, indicating room for improvement in the schemes provided by the existing MIC literature, as listed in Table IV.

E. Summary and Lessons Learned

This section discussed the addressed and remaining challenges in MI antenna design and MIC performance metrics (such as the 3-dB bandwidth [10], [41], channel capacity [13], and MIC range) in TTE scenarios. We introduced four antenna designs, each with its respective advantages and disadvantages (see Table XIV). Notably, the RPMA has emerged as a key area of research due to its minimal crosstalk among Txs. Future work could explore massive MIMO and beamforming techniques, which are difficult to implement in coil-based systems because of crosstalk. The RPMA, however, faces challenges related to the additional delay and energy consumption required to overcome inertia. For the other antenna types, the design focus should be on optimizing C_SD and J_SD. Moreover, advances in sensor and quantum technologies, such as high-sensitivity TMR sensors, could significantly reduce antenna size and crosstalk, easing the application of massive MIMO and beamforming in MICs. Additionally, we discussed channel estimation, modulation, and FEC strategies for MIC, validated their effectiveness through simulations, and demonstrated their role in mitigating fast-fading-induced BER degradation.

Fig. 15. Use cases of relay and CMIC: (a) Pipeline case [47], [48], with MI transmitters/relays along a wellbore, fracture/pipeline, and underground oil reservoir linked to a base station above ground; (b) Mobile MIC case, where designed and unexpected relays in tortuous tunnels introduce a crosstalk effect. In Fig. 15(a), the MI waveguide, CMIC-nAR, and hybrid relay techniques can be used; in Fig. 15(b), the CMIC-1NR technique can be used.

Regarding MIC performance metrics, two key aspects remain underexplored: closed-form expressions for the 3-dB bandwidth and for the MIC range in TTE scenarios (as detailed in Table XV). While numerical methods can approximate both metrics, they are time-consuming. Furthermore, significant potential exists to improve the MIC range once its expression is fully developed, as suggested by (20).

The practical takeaways and common pitfalls include: 1) the GTSs required by RPMA inertia, and the RPMA's limited lifespan, have been substantially overlooked in physical-layer protocol designs; 2) the spatial inhomogeneity of TTE materials may act as a dominant source of systematic error, causing inconsistencies between theoretical P2P MIC predictions and experimental observations; and 3) P2P upper-layer solutions tend to overlook the eddy gain and polarization gain.

V. MI RELAY AND COOPERATIVE MIC

Improving the communication range and the achievable rate are the two key goals of MI studies for TTE applications.
Existing research has demonstrated that these two performance metrics can be increased by one or several orders of magnitude through the use of MI relays. Most MI relay techniques require sufficient space for relay deployment. This section categorizes MI relays into passive MI relays (including the MI waveguide) and active MI relays (called CMIC). Furthermore, we review the advances in these two relay techniques, highlighting their respective limitations. Additionally, we theoretically analyze the crosstalk effect among MI relays through our own derivations and simulations, a phenomenon overlooked in existing studies. Table XVI outlines the challenges and methodologies of the MI relay techniques.

A. Use Cases and Scenarios

For TTE scenarios, there are two typical use cases, as shown in Figs. 15(a) and 15(b). In the first, such as oil reservoirs, straight tunnels and pipelines provide sufficient space and a clear path for signal propagation. In these scenarios, MI coils can be aligned and deployed in a stable linear topology along the pathways, ensuring reliable communication between MI Txs/relays and the base station above ground. For example, Guo and Abdallah proposed a linear pipeline/oil sensor network topology with dense nodes in [34], [47]. In this framework, most MI nodes do not require an independent power supply.

TABLE XVI: OVERVIEW OF RELATED RESEARCH ON MI RELAY TECHNIQUES ('●' describes the methods; '✓' and '✗' represent pros and cons, respectively; the priority '✸' (low) to '✸✸✸' (high) rates the remaining issues)

Passive relay:
- MI waveguide [12], [13]: ✓ increases the channel power gain [12] and capacity [13] by several orders of magnitude; ● channel modeling [12]; waveguide capacity analysis [13]; ✗ challenge of aligning and deploying coils to ensure the same mutual inductance between adjacent relays. Priority: ✸✸.
- MI waveguide [157]: ✓ allows slight coil misalignment and is robust to node failure; ● MST and TC algorithms to reduce the number of relays; ✗ crosstalk effect in misaligned coils. Priority: ✸.
- MPRlA [49]: ✓ bandwidth increased by over 15% compared to a quadrilateral array; ● hexagonal MPRlA using KVL equations; ✗ challenge of coil alignment. Priority: ✸.
- Crosstalk effect [32]: ✓ existence of the crosstalk effect; ✗ lacks theoretical analysis; overlooks the positive crosstalk effect; crosstalk among local high-density nodes in a network; ● proposed Transformer-based framework (Fig. 28). Priority: ✸✸✸.

Active relay:
- Stationary CMIC-nAR [121]: ✓ capacity increased by an order of magnitude; ● CMIC-nAR modeling and capacity analysis using AF, DF, and FF schemes; ✗ approximate coil alignment required; higher energy consumption and protocol complexity than the MI waveguide. Priority: ✸✸.
- Stationary CMIC-nAR [50]: ✓ the number of relays reduced by a factor of about 2 to 8; ● optimum relay placement formulated via an RCP-based approach; ✗ strict coil alignment required. Priority: ✸.
- Stationary CMIC-1NR [6]: ✓ OCMI yields over a 20% increase in CMIC range;
● fixed-coil-based CMIC-1NR; optimal-angle CMI (OCMI) for larger ranges; ✗ limited application scenarios due to the fixed relay position. Priority: ✸.
- Stationary CMIC-1NR [10]: ✓ potential capacity increase of over 50%; ● non-fixed coil-based CMIC-1NR; AF schemes; CMI bandwidth derivation; ✗ space constraints of tunnels; bandwidth expression restricted to isomorphic coils. Priority: ✸.
- Stationary CMIC-1NR [7]: ✓ excellent global optimal-search ability and convergence; ● geometric approximation and random-search approaches; ✗ validation of multi-relay CMIC systems with arbitrary APOs. Priority: ✸✸.
- Mobile underwater CMIC [51]: ✓ obtains the PDF of the CMI channel for underwater scenarios; ● derivation based on uniformly distributed antenna-angle inputs; ✗ uniform-based PDF not applicable to mobile underground CMI channels. Priority: ✸✸.

Hybrid relay:
- Stationary CMIC [32]: ● places passive relays between two adjacent active relays; ✓ energy saving; ✗ challenging antenna deployment for TTE and mobile applications. Priority: ✸✸.
(Low priority (✸) indicates that the remaining issues have been explored in subsequent MIC literature, that existing EMWC schemes are compatible with MIC for these issues, or that exploring them is optional.)

As shown in Fig. 15(a), the energy of MI nodes is induced by the EMW generated by a large dipole. The power required to transmit data from a sensor to its neighbor 3 m away is about −50 dBm. Thus, passive relay techniques can be used to achieve high data-rate and energy performance.

In the second use case, the CMIC with one non-aligned relay (CMIC-1NR) topology [7] is designed for more dynamic and less predictable environments. For example, in underground Internet of Vehicles (IoV) systems, mobile nodes cannot form a stable linear topology; nodes are typically sparsely and randomly distributed, and each requires a power supply to generate signals. The CMIC topology is suited to these conditions. However, randomly distributed nodes may form locally dense clusters, as shown in Fig. 15(b), within which certain nodes inadvertently act as unexpected passive relays and induce a crosstalk effect. In what follows, we discuss the MI passive relay, CMIC techniques, and the MI crosstalk effect in detail.

B. MI Waveguide and Passive Relay Techniques

An MI passive relay passes signals along without requiring an active power source for amplification or processing. In this subsection, we introduce two MI passive relay techniques, i.e., the MI waveguide and the MPRlA, and highlight their advantages and disadvantages for TTE applications. We also discuss the previously unstudied MI crosstalk effect, with a simulation to validate it.

1) MI Waveguide: Research on passive relays began with the introduction of the MI waveguide approach in [158], [159]. In TTE environments, the overall cost of relay deployment, including price, time, and risk, is enormous. However, for non-emergency and non-mobile TTE MIC applications, the MI waveguide is attractive due to its excellent performance. In [12], [13], Sun's and Kisseleff's teams increased the MIC range and channel capacity several-fold through waveguide techniques.

Fig. 16. MI waveguide channel model, with n − 1 identical passive relays spaced d_SD/n apart between node S and node D. Here, M, L, and C_ci denote the mutual inductance between two adjacent relays, the inductance, and the matching capacitor for f₀ = 1/(2π√(L·C_ci)), respectively.

The typical structure of the MI waveguide is shown in Fig.
16, where identical passive relays are placed at equal intervals between S and D. The channel power gain of the MI waveguide link was derived in [12], [13] via Kirchhoff's voltage law (KVL) as

G_wg = R_L / [S_n(Z_M, Z_L, n) · S_n(Z_M, Z_L, n+1)],
S_n(Z_M, Z_L, k) = F_n(Z_M, k) + Z_L · F_n(Z_M, k−1),
F_n(Z_M, k) = [((Z_M + √(Z_M² − 4))/2)^(k+1) − ((Z_M − √(Z_M² − 4))/2)^(k+1)] / √(Z_M² − 4), (21)

where Z_M = (Z_LC + R_ci)/(j2πfM) and Z_L = R_L/(j2πfM). The experiment in [12] indicated that the channel power gain increases by several orders of magnitude when the MI waveguide is used. Later, MI waveguide networks were investigated. The number of MI passive relays determines the complexity of the MI waveguide system. To reduce this number, Sun et al. proposed the minimal spanning tree (MST) and TC algorithms [157], [160]; their simulations indicated that the number of relays decreased from about 2,500 to 1,300 when using the MST algorithm at a sensor density of 10⁻³ nodes/m². These studies indicate that MI waveguide techniques are constrained by the challenge of coil alignment in the TTE environment, and even in UW-WSNs.

Fig. 17. Regular hexagonal MPRlA. Identical passive relays are uniformly placed in a 2D plane, and each relay is the center of a hexagon whose vertices are its nearest neighboring relays.

2) MPRlA: The MPRlA is categorized as a 2D/3D MI waveguide [161] or a non-linear MI waveguide [32], as exemplified in [49], [162]. Ma et al. [49] proposed a topology for a regular hexagonal MPRlA (see Fig. 17). The passive relays transmit propagation energy to adjacent coils and return reflection energy. Modeling the channel of this regular hexagonal MPRlA with KVL, they found that its bandwidth increased by over 15% compared to a quadrilateral array. However, MPRlA techniques also face the challenge of coil alignment.

3) Crosstalk effect of short-range passive relays: In [32], Li et al. briefly mentioned the existence of crosstalk among aligned relays. Consider the crosstalk effect caused by an arbitrarily placed unexpected passive relay, such as a randomly moving Rx or an idle coil belonging to another network. Consider an MI link S→D with an unexpected passive relay R within the MIC range of S; the locations of S, R, and D are shown in Fig. 18. According to KVL, we have

I_S·Z_LC + I_D·j2πfM_SD + I_R·j2πfM_SR = U_S,
I_S·j2πfM_SD + I_R·j2πfM_RD = −I_D·Z_LC,
I_S·j2πfM_SR + I_D·j2πfM_RD = −I_R·Z_LC, (22)

where Z_LC = j2πf·L_cS + 1/(j2πf·C_cS) + R_cS + R_L. Let Z_SD = j2πfM_SD, Z_SR = j2πfM_SR, and Z_RD = j2πfM_RD. Solving these equations, we obtain the current in coil D:

I_D = −U_S(Z_SD·Z_LC² − Z_pa1(S, D, R)) / (Z_LC⁴ + Z_pa2(S, D, R)), (23a)
Z_pa1(S, D, R) = Z_RD·Z_SR·Z_LC, (23b)
Z_pa2(S, D, R) = 2Z_RD·Z_SD·Z_SR·Z_LC − Z_RD²·Z_LC² − Z_SD²·Z_LC² − Z_SR²·Z_LC², (23c)

where we call Z_pa1(S, D, R) and Z_pa2(S, D, R) the crosstalk impedances of the link S→D. We can thus define the crosstalk effect as the phenomenon occurring when a link has at least one non-zero crosstalk impedance. Using KVL equations such as (22), one can likewise derive the more complex crosstalk impedances Z_pa1(S, D, R, ...) and Z_pa2(S, D, R, ...) when a link has more than two passive relays.

Fig. 18. Example of the crosstalk effect from an unexpected passive relay, with S at the origin (0, 0), D at (x_d, y_d), and candidate relay positions R(x_r, y_r), R2, R3(0.5y_d, 0), and R4(0.5y_d, 0.5y_d).
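Equations (22)-(23) are straightforward to verify numerically. The sketch below solves the KVL system for assumed coil and mutual-inductance values and forms the ratio G_SD,p/G_SD used in Fig. 19 (the no-relay baseline is obtained by setting M_SR = M_RD = 0).

```python
import numpy as np

f, R_c, R_L, L = 10e3, 1.0, 50.0, 1e-3        # assumed circuit values
C = 1.0 / ((2 * np.pi * f) ** 2 * L)          # matching capacitor: resonance at f
Z_LC = 1j*2*np.pi*f*L + 1.0/(1j*2*np.pi*f*C) + R_c + R_L
U_S = 1.0

def currents(M_SD, M_SR, M_RD):
    """Solve the three KVL equations (22); returns (I_S, I_D, I_R)."""
    jw = 1j * 2 * np.pi * f
    A = np.array([[Z_LC,      jw * M_SD, jw * M_SR],
                  [jw * M_SD, Z_LC,      jw * M_RD],
                  [jw * M_SR, jw * M_RD, Z_LC     ]])
    return np.linalg.solve(A, np.array([U_S, 0.0, 0.0]))

M_SD, M_SR, M_RD = 2e-7, 1.5e-7, 1.5e-7       # assumed mutual inductances (H)
I_D_relay = currents(M_SD, M_SR, M_RD)[1]
I_D_alone = currents(M_SD, 0.0, 0.0)[1]
ratio = abs(I_D_relay) ** 2 / abs(I_D_alone) ** 2   # = G_SD,p / G_SD
print(f"G_SD,p/G_SD = {ratio:.4f} ({'positive' if ratio > 1 else 'negative'} crosstalk)")
```

Shrinking the mutual inductances (the long-range VLF-LA regime) drives the ratio toward 1, matching the convergence of the curves in Fig. 19.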
Suppose that a passive relay R moves (sufficiently slowly to avoid MI fast fading) along the path R → R2 → R3 → R4 of Fig. 18. It follows from (23) that the MI crosstalk effect directly impacts the performance of the MIC network through the crosstalk impedances Z_pa1(S, D, R, ...) and Z_pa2(S, D, R, ...). For example, when Z_pa1(S, D, R, ...) > 0 and Z_pa2(S, D, R, ...) > 0, the received current I_D decreases, inducing a negative crosstalk effect that may reduce the MIC range and increase the BER. Conversely, negative crosstalk impedances can produce a positive crosstalk effect, which may improve the MIC range and capacity; the MI waveguide conforms to this case. For long-range MIC using VLF-LA with low f, where M_SD, M_SR, and M_RD are small enough that Z_LC ≫ j2πfM_SD, Z_LC ≫ j2πfM_SR, and Z_LC ≫ j2πfM_RD, the crosstalk impedances can be ignored. This is verified by our simulation results in Fig. 19, which also show that short-range or higher-frequency MI links encounter a significantly more pronounced crosstalk effect. When the ratio G_SD,p/G_SD exceeds 1, the passive relay serves as part of an MI waveguide and has a positive effect on the MIC. For long-range MIC, all curves in Fig. 19 converge to 1, indicating the elimination of crosstalk. It is worth noting that, by the KVL equations, CMIC and large-scale MI networks also exhibit the crosstalk effect.

From (23) and M_ij = (π·a_ci²·a_cj²·N_i·N_j·µ_u / (2d_ij³))·J_ij·√E_ij with i, j ∈ {S, D, ...}, it can be concluded that both Z_pa1(S, D, ...) and Z_pa2(S, D, ...) are multimodal functions, so determining their signs is challenging. Future studies on the spatial distribution of positive crosstalk effects are therefore needed. We also propose a deep learning framework to predict the signs of Z_pa1(S, D, ...) and Z_pa2(S, D, ...), as described in Section VIII-C.

C. Cooperative MIC (CMIC)

In this subsection, we delineate two active relay techniques, i.e., CMIC with multiple aligned relays (CMIC-nAR) and CMIC-1NR. We also summarize the distinct challenges of cooperative communication in MIC versus EMWC.

The MI waveguide enhances channel capacity and range but requires numerous underground relays. For several decades, cooperative communication has been a focus of EMWC, FSOC, and acoustic communication [163]–[167]. These channels experience significant small-scale fading, and cooperative relays use spatial and time diversity to mitigate it, significantly reducing outage probability and enhancing achievable rates.

TABLE XVII: COMPARISON OF COOPERATIVE EMW AND CMI COMMUNICATIONS
- Channel characteristic: EMW: small-scale fading [163]; CMI: quasi-static fading [6], [7], [121] (with advances in MI fast-fading research, limited literature on CMI channels with non-quasi-static fading, e.g., [51], has appeared).
- Key issue: EMW: reducing outage probability [163]; CMI: enhancing Rx signal strength [6], [7], [121].
- Methods: EMW: AF, DF, compress-and-forward [163]; CMI: AF, DF, FF [7], [32], [121].
- Benefits: EMW: universal [163]; CMI: limited by coil locations and orientations [6], [7], [10].
Fig. 19. Ratio G_SD,p/G_SD quantifying the crosstalk effect of Fig. 18 for (a) VLF (f₀ = 10 kHz) and (b) MF (f₀ = 1 MHz), at relay positions R(0.2y_d, 0.7y_d), R2 (middle of S-D), R3(0.5y_d, 0), and R4(0.5y_d, 0.5y_d). Here, G_SD,p is the power gain of the MI link S→D when a passive relay is present, i.e., G_SD,p = I_D²R_L/(I_S·U_S), where I_S and I_D are obtained from (22) and (23), respectively. The ratio quantifies how much the power gain improves (or deteriorates, if < 1) with the introduction of a passive relay; in the MF case, the region where the MI waveguide can be applied is marked. The simulation parameters are listed in Table III, except for a_cD = a_cr = 0.6 m and N_cD = N_cr = 15.

Fig. 20. Two types of CMIC: (a) CMIC-nAR, whose topology is similar to the MI waveguide, with signals transmitted hop by hop between nearly perfectly aligned coils; (b) CMIC-1NR, where the relay coil need not be aligned and a diversity-combining method combines the relay and source signals.

Various cooperative communication schemes have been proposed, such as amplify-and-forward (AF), decode-and-forward (DF), and filter-and-forward (FF) [32], [163]. However, the traditional MIC channel model is quasi-static, without small-scale fading. This quasi-static property renders outage probability physically meaningless, and spatial and time diversity offer limited benefits for mitigating small-scale fading in traditional MICs. Fortunately, researchers discovered that active relays can enhance the signal strength at the Rx coil under certain conditions. CMIC can be categorized into two types, CMIC-nAR [32], [50], [121], [159] and CMIC-1NR [6], [7], [10], whose topologies are shown in Figs. 20(a) and 20(b), respectively.

The topology of CMIC-nAR is similar to that of the MI waveguide [121]. Simulations indicated that the data rate increased from 10⁴ to over 10⁵ bits/s at a distance of 70 m when using a DF or FF relay. To reduce energy consumption, Li et al. [32] proposed a hybrid relay structure combining the waveguide and active relays; their simulation showed that the energy consumption decreased from 1 J to 0.3 J at a distance of 96 m. In 2021, Ishtiaq et al. [159] developed a mathematical model for evaluating the performance of multi-hop MI links, including the hop state. Multi-relay optimization has also attracted attention. In [50], Khalil et al. proposed a CMI system similar to an MI waveguide without S, achieving maximum throughput while reducing the number of relays. Since the transmission rate must be positive, the constraint inequalities take the form of convex functions with negative values [50], so reverse convex programming (RCP) was introduced as a solution. However, like the MI waveguide, CMIC-nAR suits only underwater and limited underground scenarios.

Due to this disadvantage of CMIC-nAR for TTE MICs, some researchers explored CMIC-1NR by optimizing J_SD. Zhang et al. [6] proposed the CMIC-1NR. Later, Ma et al. [7], [10] further investigated CMIC-1NR systems to boost MIC achievable rates. In [10], they studied the achievable rate of AF-based CMIC with an arbitrary relay APO and derived closed-form expressions for the CMIC bandwidth. The bandwidth of the CMI link varies with the APO and is smaller than that of the direct magnetic induction (DMI) link (see Fig. 21). Even with the smaller bandwidth, an active relay may enhance the achievable rate of the MI link (see Fig. 22), especially for weak signals.
Unlike the DMI link, this improvement is APO-dependent, which complicates TTE applications with underground space constraints. In Fig. 22, although the maximal CMIC achievable-rate gain (CMG) occurs at the center of the relay area (RA), the relay cannot be placed there due to tunnel constraints. Ma et al. [7] addressed this with a geometric modeling approach and a random-search algorithm that finds the optimal APO for relay deployment in tunnels. Compared with a gradient-descent baseline, their simulations showed that the number of iterations decreased from 600 to 100 and the number of local optima from 11 to 1. Nevertheless, all the CMIC-1NR techniques mentioned in this survey remain strongly subject to the APOs, and the CMG of CMIC-1NR is much smaller than that of CMIC-nAR.

With advances in MI fast-fading research, researchers have also examined the CMI channel with a random polarization gain J_SD. In 2024, Zhang et al. [51] investigated CMIC systems with a unidirectional coil and a tri-directional active relay. Assuming that the normal directions of coils S, R, and D follow uniform distributions, they derived closed-form expressions for the PDFs of the received SNRs. Their simulations indicated that the ergodic rate increased from 6 bps/Hz to 16 bps/Hz at a distance of 20 m and a Tx power of 20 dBW, even with an AF relay. However, for terrestrial mobile MIC, the probability of a weak antenna vibration (θ'_D ≃ 0) is obviously much greater than that of a strong one (θ'_D ≃ 90°), so the uniform-distribution model deviates significantly from the behavior of non-underwater mobile MICs.

Fig. 21. Comparison of the bandwidth between AF-based CMI and DMI links as the resonance frequency f₀ varies, showing calculated and numerically simulated values for both. The simulation parameters are listed in Table III, except for θ_D = 30°. The solid lines are calculated from B_w,SD^dipole = B_w(⅛(R_cD + R_L)³) and B_w^AF = B_w(Z_C,AF), respectively; the function B_w(·) is given by (17) in this paper, and the impedance Z_C,AF by Equation (14) in [10].

Fig. 22. Effects of relay positions on CMI performance, with an example of tunnel constraints. The simulation parameters are listed in Table III, except for θ_D = 30°. The CMG (defined in [10]) is the ratio of the capacity of the CMI link to that of the DMI link, and the area where CMG > 1 is called the relay area (RA). The thick gray curve represents an arbitrary underground tunnel in which S, R, and D are constrained to reside.

D. Summary and Lessons Learned

Table XVI summarizes the key issues, methods, and remaining challenges in MI relay research, covering the MI passive relay [157], the MI active relay (CMIC) [51], and hybrid relay techniques [32]. These studies show that both passive relay and CMIC techniques can significantly enhance channel capacity and extend the MIC range, sometimes by orders of magnitude. The passive relay technique is energy-efficient and has simple protocol complexity, but it faces deployment challenges, particularly in antenna alignment due to the negative crosstalk effect.
A potential solution involves the use of an iron-core coil to collect MI signals, although careful attention is needed to mitigate eddy-current effects. Additionally, implementing passive relays in RPMA-based systems is difficult because the crosstalk effects there are minimal. Moreover, passive relay and CMIC-nAR techniques are unsuitable for mobile MICs: their dynamic topology prevents the formation of stable configurations with positive crosstalk effects (see (23) and Fig. 19). In contrast, CMIC significantly reduces the number of required relays, and the CMIC-1NR solution, which eliminates the need for coil alignment, can be applied to RPMA-based systems and mobile MICs. We introduced the unexpected-passive-relay phenomenon, specifically the crosstalk effect among relays, and highlighted its role in MI waveguides as a special case.

Fig. 23. The OSI framework, which has seven layers (Ly1 physical, Ly2 data link, Ly3 network, Ly4 transport, Ly5 session, Ly6 presentation, Ly7 application), with the logical and actual data flows: application-layer protocols (HTTP, FTP, SMTP); transport segments (TCP, UDP) between processes/threads; network packets (IP, IPX, ARP), data-link frames (PPP, CSMA/CD), and physical-layer bits (physical channel, modulation, channel coding) node to node.

The practical takeaways and common pitfalls include: 1) a primary practical challenge in MI relaying is antenna alignment, so the TTE MI waveguide suits only scenarios such as straight tunnels and is infeasible for mobile MICs; 2) adding active relays does not guarantee a throughput improvement; and 3) high-density multi-node MI networks exhibit crosstalk, and determining the spatial distribution of positive crosstalk remains challenging, complicating passive-relay deployment.

VI. MI NETWORK AND ITS ARCHITECTURE

In wireless communications, key goals include improving network throughput, increasing user access, and reducing energy consumption. However, recent EMWC techniques that exploit signal reflection and refraction [170]–[172] are hard to apply to MIC signals, which exhibit neither. Consequently, the MI network architecture plays an even more important role than its EMWC counterpart. In this section, we categorize the recent literature on multi-node MIC with reference to the widely used OSI framework. The OSI framework (see Fig. 23) originated from the International Organization for Standardization (ISO) in 1984; it serves as a conceptual framework for designing network communication protocols and facilitating communication between different systems. Research on the MI network, and its differences from other communication networks, is summarized in Tables XVIII and XIX, respectively, with details provided below.

A. Physical Layer (Ly1)

Most MIC research has focused on physical-layer issues, including channel modeling, performance, estimation, modulation, coding, and resource allocation. We discussed the physical-layer schemes for P2P and MI relay-based systems, i.e., channel modeling, key performance metrics, channel estimation, modulation, and channel coding, in Sections IV and V. For the MI physical layer with multiple nodes, the focus has primarily been on resource allocation, including power allocation [11], [52], [53] as well as frequency and bandwidth allocation [53].
1) Power control: From the network's perspective, power control aims to optimize signals, reduce interference, and enhance efficiency. Lin et al. [52] pioneered the formulation of power optimization over an entire multi-hop network, using a Nash game with the utility function shown in Table XX.

TABLE XVIII: OVERVIEW OF RELATED RESEARCH ON MIC IN LINE WITH THE OSI FRAMEWORK ('●' describes the methods; '✓' and '✗' describe addressed and remaining issues, respectively; the priority '✸' (low) to '✸✸✸' (high) rates the remaining issues)

Ly1:
- P2P and relay-based MICs [10], [12], [45], [46], [144]: ✓ channel modeling, channel capacity, MIC range, channel estimation, channel coding, and modulation (see Sections III, IV, and V); ● detailed in Tables IX, IV, and XVI; ✗ a universal MI fast-fading model, the TTE MIC range, RPMA channel capacity, MI crosstalk effects, and CMIC with MI fast fading (detailed in Tables IX, IV, and XVI).
- Power allocation [52], [58]: ✓ power allocation for a stationary MI channel; ● Nash game; ✗ not suitable for time-varying channels over longer time spans; quantization-induced precision loss. Priority: ✸.
- Power allocation [11]: ✓ power allocation for an MI fast-fading channel; ● Nash game-based multiagent RL with Bellman iteration; ✗ slow convergence due to the lack of information exchange; sacrifices precision for faster convergence. Priority: ✸✸.
- Frequency allocation [53]: ✓ lower system delay (>40% decrease) for a cluster-based UW-WSN; ● multi-variable alternating iterative resource allocation algorithm; ✗ network throughput and energy consumption not considered. Priority: ✸✸.

Ly2:
- MAC [54], [168]: ✓ low cost ($100) and low energy (currents: Rx/Tx = 0.49/253 mA); ● three packet types (reservation, acknowledge, data); ✗ lacks analysis of the SISO case for VLF-LA. Priority: ✸.
- MAC [55]: ✓ balances energy (Rx/Tx: 0.49 mA/0.74 mA) and throughput (46–144 bytes/cycle); ● contention-based protocol hybridizing three configurations of the orthogonal MIMO coils; ✗ low energy efficiency for SISO coils, low EPR, high collision probability, and MAC headers too large for a VLF-LA system. Priority: ✸✸✸.
- LLC (no refs.): ✗ adaptive retransmission strategy for unnecessary packets; compatibility of existing solutions with MIC. Priority: ✸.

Ly3:
- Connectivity [169]: ✓ dynamic connectivity for a 2D model; ● framework of connectivity probability bounds; ✗ only applicable to the 2D model; disregards directional attenuation differences. Priority: ✸.
- Connectivity [72]: ✓ k-connectivity for an underwater grid network; ● derivation based on power and BER; ✗ disregards directional attenuation differences. Priority: ✸.
- Connectivity [8], [116]: ✓ random deployment in 3D space with directional attenuation differences considered; ● probability theory, gradient descent, and a homogeneous Poisson point process; ✗ the homogeneity condition is violated in heterogeneous networks and mobile MIC. Priority: ✸✸.
- Data collection and node deployment [56]: ✓ optimal data collection and network lifetime (21.2%–38.3% higher) with low energy consumption (0.2–0.34 J); ● HENPC algorithm and ant colony optimization; ✗ large delay caused by the ant colony algorithm. Priority: ✸✸.
- Data collection and node deployment [80]: ✓ may increase the network lifespan; ● AANSFR algorithm; ✗ optimality not guaranteed due to the constraint of [80, Eq. 10(a)]. Priority: ✸.
- Routing [85], [86]: ✓ balances network latency (reduced by 18% in [86]) and energy efficiency (increased by 16% in [86]); ● Q-learning-based energy-delay routing (QL-EDR) [85] and a balanced routing protocol based on Q-learning [86]; ✗ overlooks frequency features; slow convergence of Q-learning. Priority: ✸✸.
- Routing [57], [120]: ✓ frequency-selective property in a dynamic multilayer MI UG-WSN; ● distributed Q-learning-based algorithm; formulation of the frequency-switchable routing decision problem; ✗ slow convergence of Q-learning; poor efficiency of routing-table exchange in narrow-band MI channels. Priority: ✸✸✸.
- Topology [169]: ● ad hoc (high self-organizing capability; limited scalability).
- Topology [47]: ● linear (simple protocol; single point of failure).
- Topology [11]: ● cellular (frequency reuse; local network congestion); ✗ only basic resource allocation/reuse schemes; a basic MI cellular network protocol stack is missing. Priority: ✸✸✸.
- IP (no refs.): ✗ low efficiency in TTE MIC due to the large IP header (over 20 bytes) and limited channel capacity (an IP-HC scheme is required); compatibility of existing solutions with MIC. Priority: ✸✸✸.

Ly4:
- TCP (no refs.): ✗ fairness; RTT suppression; a congestion control scheme for TCP connections w.r.t. the SNR at Rx MI nodes; compatibility of existing solutions with MIC. Priority: ✸✸✸.

Ly5–7:
- Applications (cf. Table VII): ✗ MCNSI, TTE infrared image transmission, TTE IoV applications, DARPA Subterranean Challenge. Priority: ✸✸.

CLO:
- [52]: ✓ statistical QoS guarantee with optimal energy savings and throughput gain obtained concurrently; ● DEAP framework joining Ly1, Ly2, and Ly3; ✗ not suitable for mobile MIC in an MI fast-fading channel due to the drastically fluctuating average AVI. Priority: ✸✸.
- [58]: ✓ optimal energy efficiency and low computational complexity, with higher throughput than DEAP; ● distributed energy-throughput-efficient cross-layer solution using the Naked Mole Rat algorithm (DECN); ✗ not suitable for mobile MIC in an MI fast-fading channel due to the dramatically fluctuating average AVI. Priority: ✸✸.
(Low priority (✸) indicates that the remaining issues have been explored in subsequent MIC literature, that existing EMWC schemes are compatible with MIC for these issues, or that exploring them is optional.)

The study presented in [52] did not consider mobile MI networks with unpredictable AVI inputs. To address this, Ma et al. [11] proposed a power allocation algorithm for cellular VMI networks, using Nash game-based RL (Q-learning) with the utility function shown in Table XX. The RL solution from [11] handles the unpredictable AVI inputs of MIC systems. Due to its convergence limits, however, the RL algorithm struggles to obtain a high-precision power allocation policy. On the other hand, since this power allocation algorithm requires no information exchange among the nodes, it is feasible for low-capacity channels. By contrast, the expectation of EMW fast fading is predictable in most cases; in this sense, RL may not be needed for an EMW cellular network.
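The structure of the [52]-style utility in Table XX, u_i = C_i − w·E_i, can be illustrated with a toy best-response iteration. The interference gains, price w, and power grid below are assumed values, and the loop is a generic Nash-game solver rather than the protocol of [52] or the RL variant of [11].

```python
import numpy as np

rng = np.random.default_rng(2)
n, w, N0 = 3, 0.5, 1e-3                 # users, energy price, noise power (assumed)
g = rng.uniform(0.05, 0.2, (n, n))      # g[j, i]: gain from Tx j into Rx i, assumed
np.fill_diagonal(g, rng.uniform(0.8, 1.0, n))
grid = np.linspace(0.0, 2.0, 401)       # candidate powers for the best response

def utility(i, p):
    """u_i(p_i, p_-i) = C_i - w*E_i with C_i a rate under interference, E_i = p_i."""
    interf = N0 + sum(p[j] * g[j, i] for j in range(n) if j != i)
    return np.log2(1.0 + p[i] * g[i, i] / interf) - w * p[i]

p = np.full(n, 1.0)
for _ in range(20):                     # round-robin best responses until they settle
    for i in range(n):
        cand = np.array([utility(i, np.r_[p[:i], x, p[i + 1:]]) for x in grid])
        p[i] = grid[cand.argmax()]
print("equilibrium powers:", np.round(p, 3))
print("utilities:", np.round([utility(i, p) for i in range(n)], 3))
```

The fixed point of these best responses approximates the Nash equilibrium; [11] replaces the exact best response with Q-learning because the ergodic rate under random AVI cannot be evaluated in closed form.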
2) Frequency and bandwidth allocation: Effective frequency/bandwidth allocation can enhance capacity and QoS and cut network energy consumption. Owing to the narrow MI bandwidth and the limited research on MI cellular topologies, frequency/bandwidth allocation has received limited attention. Nonetheless, Li et al. [53] studied bandwidth-allocation schemes applied to AUVs and cluster-based UW-WSNs. A joint optimization was proposed to obtain allocations of Tx power, bandwidth, and computational resources, with the key goal (see Table XX) of minimizing the total system delay T. The authors developed a centralized multi-variable alternating iterative resource allocation algorithm whose core is a two-stage iterative process within a single loop: obtaining power allocations with the bandwidth fixed, and then bandwidth allocations with the power fixed.

TABLE XIX: COMPARISON OF WAVE-BASED AND MI COMMUNICATIONS BASED ON THE OSI FRAMEWORK ('✈', '✌', and '❍' describe EMWC, acoustic communication, and MIC, respectively; the superscript † marks issues lacking, to the best of our knowledge to date, MIC-related literature and MIC solutions)

Ly1:
- Channel estimation [144]: ✈ estimation not based on mutual inductance, overlooking the medium conductivity; ❍ estimation based on mutual inductance, significantly influenced by the medium conductivity.
- Modulation [45]: ✈ negligible frequency-dependent eddy-current effect and larger frequency bandwidth; ❍ non-negligible frequency-dependent eddy-current effect and much smaller frequency bandwidth.
- Channel coding [46]: ✈ negligible conductors on the propagation path; ❍ variable codeword duration caused by underground conductors.
- Resource allocation [11]: ✈ sufficient bandwidth for allocation-table exchange; ❍ insufficient bandwidth for allocation-table exchange.

Ly2:
- MAC [55]: ✈ often ignores the directional nature of EMW; ❍ requires further bit-level compression of MAC headers and must handle the directional nature of magnetic fields.
- LLC [173]†: ✌ acoustic-channel-quality-based retransmission scheme; ❍ MI-channel-quality-based retransmission scheme†.

Ly3:
- Connectivity [8]: ✈ negligible directional attenuation differences; ❍ directional attenuation differences.
- Deployment [157], [174]: ✈ higher protocol complexity, less APO-dependent; ❍ full utilization of MI waveguide advantages, more APO-dependent.
- Routing [57]: ✈ less frequency-selective and APO-selective; ❍ significantly frequency-selective and APO-selective.
- IP [175]†: ✌ IP-HC scheme using acoustic network information; ❍ IP-HC scheme depending on the MI channel†.

Ly4:
- TCP [176]†: ✈ less RTT suppression; ❍ significant RTT suppression†.

CLO:
- [11], [52]: ✈ G_SD ∝ d_SD⁻² with stable, deterministic moments of the fast-fading gain; ❍ G_SD ∝ d_SD⁻⁶·e^(−d_SD/δ_u) with random moments of the MI fast-fading gain.

TABLE XX: COMPARISON OF THREE RESOURCE ALLOCATION METHODS IN MIC RESEARCH
- [52]: goal/utility u_i(p_i, p_−i) = C_i(p_i, p_−i) − w·E_i(p_i, p_−i). Power allocation for stationary MICs: C = data rate, E = total power, i = this user, −i = competing users, p = power allocations, w = weight. Method: Nash game.
- [11]: goal/utility u_i(p_i, p_−i) = E[C_i(p_i, p_−i) | σ_i^(t)]. Power allocation for mobile MICs: E[C] = ergodic data rate, σ^(t) = average AVI during time slot t. Method: joint Nash game and RL algorithm.
- [53]: goal min over {P_UA, P_CH, B, F} of T. Bandwidth allocation for MICs: T = system delay, P = power allocations, B = bandwidth allocations, F = computational resources, UA = acoustic node, CH = MI cluster head. Method: centralized optimization.
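The two-stage single-loop idea can be sketched generically as follows, with assumed data volumes, gains, and budgets, and a random pairwise-transfer search standing in for the actual per-stage optimizers of [53].

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4                                 # nodes
D = rng.uniform(1e3, 5e3, n)          # data volumes (bits), assumed
g = rng.uniform(1e-9, 1e-8, n)        # channel power gains, assumed
N0, P_tot, B_tot = 1e-12, 1.0, 2e3    # noise PSD (W/Hz), power (W), bandwidth (Hz)

def total_delay(p, B):
    rate = B * np.log2(1.0 + p * g / (N0 * B))   # per-node Shannon rate
    return float(np.sum(D / rate))

def improve(x, other, x_is_power, total, iters=3000, step=1e-3):
    """Shift resource between random node pairs, keeping moves that cut delay."""
    for _ in range(iters):
        i, j = rng.choice(len(x), 2, replace=False)
        trial = x.copy()
        trial[i] += step * total
        trial[j] -= step * total
        if trial[j] <= 0:
            continue
        new = total_delay(trial, other) if x_is_power else total_delay(other, trial)
        old = total_delay(x, other) if x_is_power else total_delay(other, x)
        if new < old:
            x = trial
    return x

p = np.full(n, P_tot / n)
B = np.full(n, B_tot / n)
for it in range(3):                                      # single loop, two stages
    p = improve(p, B, x_is_power=True, total=P_tot)      # stage 1: powers, B fixed
    B = improve(B, p, x_is_power=False, total=B_tot)     # stage 2: bandwidths, p fixed
    print(f"outer iter {it}: total delay T = {total_delay(p, B):.2f} s")
```

Each stage only redistributes a fixed budget, so the sum constraints hold by construction and the delay is non-increasing across outer iterations.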
Although this study reduces the system delay by up to 40%, it does not exploit frequency allocation to further improve capacity, QoS, or network energy consumption.

Table XX compares the three resource allocation schemes discussed above. The first two methods suit low-speed channels and minimize inter-user information exchange but converge slowly; the second, though better for dynamic channels, converges extremely slowly. The third, with fast convergence, fits higher-speed networks (e.g., hybrid MIC).

B. Data Link Layer (Ly2)

The data link layer ensures reliable data transfer between adjacent network nodes through functions such as error detection, error correction, and flow control. It is divided into the MAC and logical link control (LLC) sub-layers, of which only the MAC sub-layer is channel-dependent.

1) MAC: The low bandwidth of an MI link complicates real-time MAC protocol implementation. Moreover, many EMWC MAC solutions cannot be applied directly to MI UG-WSNs because of the directional nature of magnetic fields [54]. Ahmed et al. [54] provided valuable insights into energy-efficient MI-based MAC protocols. They designed three packet types, i.e., reservation, acknowledgment, and data packets, together with a general state-transition machine for MAC packet exchange. They also built an MI device running this MAC protocol at a cost below $100, drawing 60 µA in sleep mode, 0.49 mA in Rx mode, and 253 mA in Tx mode. However, this work did not consider throughput. Recently, in [55], they proposed an improved MAC protocol and algorithm that accounts for detailed MI channel parameters. This work balances the energy and throughput performance metrics by hybridizing three configurations of the orthogonal MIMO coils, each corresponding to a different packet type (detailed in Fig. 24). They applied a CSMA-based scheme and pointed out that their MI MAC protocol exhibits low energy efficiency for SISO-coil-based MIC, i.e., it may encounter problems in MICs employing VLF-LA. Furthermore, the large MAC header (8 bytes) could be further compressed at the bit level for the VLF-LA channel.

2) LLC: The LLC sub-layer is often only a thin adaptation layer that provides communication reliability, e.g., through data transfer, flow control, and error control. Since the LLC sub-layer is designed to be independent of the physical layer and medium in the OSI model, it has received limited attention in MIC research; only a few studies have explored LLC protocols for other communication channels (e.g., the acoustic channel [173], [177] and quantum communication [178]). Specifically, Daladier et al. [173], [177] proposed the SW-MER protocol, which enhances throughput and reliability through an adaptive retransmission strategy that exploits acoustic channel quality to minimize redundant packet transmissions. This approach can serve as a reference LLC for MIC, since acoustic and MI channels share low data-rate characteristics.

C. Network Layer (Ly3)

The network layer routes data packets efficiently to their destinations.
In this subsection, we introduce studies on the functionality of the MI network, including connectivity, data collection, node deployment, and MIC routing; routing in MIC has only recently emerged as a research topic.

Fig. 24. A typical MI MAC solution from [55]: (a) the three packet types, i.e., the wakeup (W) packet, the acknowledgment (ACK) packet, and the data packet, with their field layouts (preamble, Tx ID, target ID, packet ID, Tx/Rx coil ID, data, EOF); (b) a complete communication cycle between a Tx and an Rx, where the Tx node starts with a W packet, the Rx node acknowledges with an ACK packet after successfully receiving it, and the Tx node then sends the whole data packet [55]. For TTE MICs, the packet size should be compressed further.

1) Connectivity: Connectivity is a cornerstone of networks and is crucial for designing network-layer protocols, including routing algorithms, traffic control, and quality of service (QoS). It has been widely studied in EMWC research. For n network nodes uniformly distributed within a circular area πR², Gupta et al. [179] derived R(n) = √((log(n) + c(n))/(πn)), where c(n) is a correction function, and proved that the nodes are connected with probability one if lim_{n→∞} c(n) → ∞. Connectivity analysis differs fundamentally between MIC and EMWC: since MIC is immune to shadowing, the geometric disk/sphere models used in EMWC network studies [179]–[182] are inadequate for the MI channel.

Since 2011, there has been little literature on connectivity in the MIC field. Sun et al. [169] conducted a 2D connectivity analysis of UG-WSNs via probability derivations, yet overlooked directional attenuation differences. Gulbahar and Akan [72] performed a k-connectivity analysis of an underwater MI grid network with Tx coils at fixed positions and directional angles. Zhang et al. [8] analyzed the connectivity of TTE MI UG-WSNs and proposed an optimization algorithm addressing connectivity adaptability under frequent frequency switching w.r.t. APO switching. For mobile MIC, MI fast fading gives the MI coverage space an irregular shape, and in heterogeneous MI UG-WSNs the nodes are not homogeneous; these features violate the assumptions of the standard connectivity models (e.g., the Poisson point process) and remain open issues.
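For intuition on the Gupta-Kumar result quoted above, the connectivity radius R(n) shrinks only slowly with node count. The sketch below evaluates it for an assumed correction c(n) = log log n; any c(n) → ∞ yields asymptotic connectivity with probability one.

```python
import numpy as np

# Gupta-Kumar connectivity radius R(n) = sqrt((log(n) + c(n)) / (pi * n))
# for n nodes uniform in a unit-area disk; c(n) = log(log(n)) is an assumed choice.
for n in (10**2, 10**3, 10**4, 10**5):
    c = np.log(np.log(n))
    R = np.sqrt((np.log(n) + c) / (np.pi * n))
    print(f"n={n:>6}  R(n)={R:.4f}")
```

Because the MI coverage region is not a disk, as noted above, this radius serves only as a baseline against which MI-specific connectivity models must be compared.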
[80] studied power-efficient AUV data collection schemes in a hybrid MI and acoustic sensor network.

[Figure] Fig. 25. Schematic of a walk (arrow trajectories) for the frequency-selective MI links (from sensor i, through sensor k, to the sink 0) [57]. In this figure, $\omega$ denotes the operating frequency; $t$ denotes the time; $x$ denotes the node state; $S(\cdot)$ denotes a step. The objective of the optimization problem is to maximize the total capacity of the related walks.

They proposed the alternating anchor nodes selection and flow routing (AANSFR) scheme for AUV data collection, where the 3 dB MI bandwidth is considered. Using AANSFR, the lifespan could increase from 14 hours to 18 hours.

3) Routing: Although routing design is a core function of the network layer, studies on routing in MI UW-WSNs and UG-WSNs are limited. From (3), it is observed that the circuit gain $C_{SD}(f)$ and eddy gain $E_{SD}(f)$ are functions of the carrier frequency $f$. These gains represent the frequency-selective property. Compared to traditional routing strategies, this property makes the MI UG-WSN appear to have entirely different topological structures at different operating frequencies [57]. Likewise, as MIC based on VLF-LA is orientation-sensitive (the APO-selective property), the mobile MI UG-WSN can also encounter different topological structures due to the unpredictable moments of the MI fast fading gain $J_{SD}$.

For the frequency-selective property, Liu et al. [57], [120] proposed frequency-switchable strategies for routing decisions, using a distributed Q-learning-based algorithm. Specifically, in [120], they mapped the frequency-switchable MI UG-WSN to a multilayer network and proposed a distributed Q-learning-based algorithm to describe its routing decisions. They also evaluated the convergence of the algorithm and the network lifetime. As Q-learning is time-consuming, in [57], they redefined the routing decision problem with frequency switchability in dynamic MI-WUSNs (see Fig. 25). Their simulations showed that the throughput increased from 40 to over 45 when the connectivity is 1.

Improving network lifetime and reducing transmission delay are two conflicting goals in routing studies. To balance these goals, Wang et al. [85] proposed two RL-based algorithms, i.e., the QL-EDR algorithm and the path selection strategy algorithm. Alsalman et al. [86] proposed a balance routing protocol based on machine learning (BRP-ML) to reduce network latency and energy consumption. In these studies [57], [85], [86], [120], the eddy gain $E_{SD}$ and polarization gain $J_{SD}$ are ignored. Moreover, their centralized multiagent RL algorithms require the exchange of band tables among nodes, which poses a practical challenge due to the low data rate of MIC with VLF-LA.

4) Internet Protocol (IP): In the TCP/IP framework, the IP, including IPv4 [183] and IPv6 [184], was originally designed to be channel-independent. To the best of our knowledge, there has been no literature specifically addressing IP-related issues in the MIC field.
However, the most significant challenge is IP's low efficiency in TTE MIC due to the large IP header (over 20 bytes) and the limited channel capacity of TTE MI links. In other communication fields, such as bandwidth-constrained EMWC [185] and acoustic communication [175], this challenge was mitigated or addressed via IP header compression schemes. Specifically, Parrein et al. [175] proposed an acoustic protocol based on the static context header compression (SCHC) protocol that reaches a header compression ratio of 99.74% using lower-layer information. This approach can also be applied to MIC.

D. Transport Layer (Ly4) and TCP

The transport layer, akin to the LLC sub-layer and including TCP [186], ensures end-to-end communication with reliable data transfer, flow control, and error recovery. It was originally designed to be channel-agnostic, and there is limited literature on MI transport layer research. Particularly for TCP, fairness and congestion control issues pose a challenge and can potentially be optimized by exploiting the characteristics of the specific channel. For example, TCP connections with shorter round-trip times (RTTs) often hinder the throughput of those with longer RTTs (the RTT suppression issue). In the EMWC field, these issues have received some attention, e.g., [176], [187]. In [176], the congestion window was modeled as a function of the SNR of the EMW link, suggesting that one can formulate a corresponding RTT optimization problem for TCP w.r.t. the MI channel power gain $G_{SD}$.

Besides the TCP schemes, in the Internet of Things (IoT), constrained devices proliferate under strict power/bandwidth limits, motivating researchers to develop IoT-specific protocols with transport layer adaptations, such as the Constrained Application Protocol (CoAP) [188], Delay-Tolerant Networking (DTN) [189], and RObust Header Compression (ROHC) [190]. The lightweight design of CoAP and its support for unreliable transport (built on UDP) align with the low-power requirements of TTE MIC devices. The bundle protocol of DTN can mitigate challenges posed by intermittent connectivity arising from low bandwidth and data rate. These protocols and mechanisms, validated through established RFC standards, offer potential directions for adapting transport-layer or cross-layer functionalities in TTE MIC networks.

E. Cross-layer Optimization (CLO)

The OSI framework promotes modularity and standardization with high cohesion and low coupling. However, it faces efficiency challenges in MI networks due to strict bandwidth, energy, and latency constraints. In this scenario, a lower layer may call upper-layer parameters or algorithms to improve network performance. Notably, for UG-WSNs, the solution for optimizing energy consumption in Ly1 may require the routing decision results in Ly3.

In 2015, Lin et al. [52] developed a distributed environment-aware protocol (DEAP) based on the Nash game to satisfy statistical quality of service (QoS) requirements; this protocol achieved optimal energy savings and throughput gain concurrently. The DEAP considers the interactions of different layer functionalities, such as power control, modulation, and FEC in Ly1, the distributed MAC schemes in Ly2, and the geographical routing algorithm in Ly3. In 2021, Singh et al.
[58] developed a Distributed Energy-Throughput Efficient Cross-layer solution using the Naked Mole-Rat Algorithm (NMRA), also called the DECN approach. While similar to DEAP, the DECN approach can apply to both direct and waveguide MI links. Their simulations showed that with 50 nodes, the DECN approach decreased the energy consumption from 160∼390 J/packet to 100 J/packet, whereas the normalized throughput increased from 2∼6 packets/s to 11 packets/s. Compared to wave-based communication (e.g., EMWC), the DEAP and DECN approaches fully account for the fact that the received power in an MIC channel diminishes with the sixth power of distance, $d_{SD}^6$, which is much faster than in the EMW channel (on the order of $d_{SD}^2$).

Compared to EMWC, it can be concluded that all MI CLO solutions use distributed algorithms. This is because the capacity and bandwidth of an MI channel are much lower than those of an EMW channel. The resulting lack of information exchange can cause slow convergence of the algorithms.

F. Summary and Lessons Learned

Table XVIII summarizes the issues addressed, their corresponding methods, and the remaining issues of research on the multi-node MI network under the OSI-originated framework, including the physical, data link, network, transport, and application layers. In summary, the issues of the MI network primarily stem from the coil resonance feature, i.e., low MI bandwidth and significant frequency-selectivity. Apart from MI fast fading, many remaining issues need to be addressed: 1) The physical layer support for RPMA-based MICs is inadequate, even lacking a universal expression for channel capacity, with key challenges stemming from mechanical inertia and friction; despite packets having the same length, the RPMA MI channel may lead to variable data rates. 2) The high collision probability of existing MI MAC solutions [54], [55], [168] has not been addressed due to the coil resonance feature. 3) Most solutions in Ly3, especially the routing solutions [57], [85], [86], have overlooked the gain factors $E_{SD}$ and $J_{SD}$; these Ly3 solutions offer limited support for MI UG-WSNs and UW-WSNs with longer ranges. 4) The channel-independent OSI-based solutions, particularly TCP and IP, need to be validated against issues such as excessively large frame headers and RTT suppression; these are crucial for the SAGUMI. 5) All solutions using RL [11], [85], [86], especially those using distributed RL with the Nash game, may face low convergence and precision. Regarding these five challenges 1)–5), our proposed framework, the status quo, potential solutions, and research gaps are elaborated on in the subsequent sections.

Table XXI summarizes the reviewed techniques mapped to each OSI layer and their respective technical readiness levels (TRLs), revealing that most existing MI techniques lack experimental evidence, particularly for multi-node solutions.
The practical takeaways or common pitfalls include: 1) Most multi-node MI protocol studies have assumed near-field and weak coupling conditions, which are more prone to breakdown in TTE environments than in general MIC environments; 2) existing MI upper-layer protocols tend to overlook the eddy gain and polarization gain; 3) TCP/IP stack application to TTE MICs may suffer from congestion mechanism failures and retransmission storms; 4) existing MI upper-layer protocols overlook that the MI channel bandwidth may be insufficient to support the exchange of their large control data, such as routing and Q tables; and 5) the frequent variation in average AVIs may render many existing MI solutions from Ly1 to Ly7, for both P2P and multi-node networks, inapplicable.

TABLE XXI
TECHNICAL READINESS OF MI TECHNIQUES MAPPED TO OSI LAYERS

OSI Ly | Technology | TRL† | Future research directions | Priority
Ly1 | MI fast fading | ★ | A universal statistical modeling | ✸✸✸
Ly1 | Channel modeling | ★★★★ | Mixed-field & multilayer medium cases | ✸✸
Ly1 | Antenna design | ★★★★ | TMR Rx antenna; deployment | ✸✸
Ly1 | Channel estimation | ★ | For VLF-LA and MI fast fading cases | ✸✸
Ly1 | Modulation | ★★★★‡ | For RPMA-based links | ✸
Ly1 | Channel coding | ★★ | Deep JSCC | ✸✸
Ly1 | Passive relays | ★★★ | Positive MI crosstalk | ✸✸
Ly1 | Active relays / CMIC | ★ | Arbitrarily deployed multi-relays | ✸✸
Ly1 | Resource allocation | ★ | Bandwidth and frequency allocations; balancing precision and convergence | ✸✸
Ly2 | MI MAC | ★★ | Header compression; conflict reduction | ✸✸✸
Ly2 | LLC | ✩(0) | Optimization for the VLF-LA case | ✸
Ly3 | Connectivity | ★ | Heterogeneous or mobile MI networks | ✸✸
Ly3 | Data collection and node deployment | ★ | Algorithm convergence | ✸
Ly3 | Routing | ★ | Minimizing information exchange among nodes | ✸✸
Ly3 | IP | ✩(0) | IP-HC | ✸✸✸
Ly4 | TCP | ✩(0) | Fairness, congestion control, connection issues due to extremely low bandwidth | ✸✸✸
† ★: Technology Readiness Level (TRL) (cf. [191]) 1–3 (basic research); ★★: TRL 4–5 (lab validation); ★★★: TRL 6–7 (system-level testing); ★★★★: TRL 8–9 (industrial application); ✩: No MI-specific references available for this TRL level.
‡ The TTE MIC products have entered the market [119], and modulation is an essential integrated technical module.

VII. PROMISING MI NETWORK FRAMEWORK WITH TCP/IP AND LINUX SUPPORT

This section proposes a Linux-and-TCP/IP-supported MI network framework to implement most existing MIC protocols and algorithms, including those in Sections IV, V, and VI.

A. Significance and Architecture Overview

1) Significance: While MIC techniques have been applied in various underground applications, few studies explore their compatibility with standard protocols, particularly the TCP/IP framework, due to the limited bandwidth of MIC systems. With ongoing advancements in MIC performance and the expansion of SAGUMI applications, integrating MIC with the TCP/IP framework is becoming an inevitable and essential trend. While deep learning has proven effective in addressing EMWC challenges [192], [193], its application to MIC remains largely unexplored due to the difficulty of running robust neural network platforms on embedded systems. On the other hand, Linux excels in large-scale wireless applications, such as mobile ad-hoc networks (MANETs) and Android (built on the Linux kernel) smartphones, enabling dynamic routing and offering scalability via its modular kernel and TCP/IP stack.
Moreover, Linux offers a wealth of open-source resources across diverse research domains, particularly in wireless network protocol stacks (e.g., IEEE 802.15) and neural network platforms (e.g., TensorFlow, PyTorch, and RKNN). These resources facilitate rapid development by providing robust tools and platforms for research.

As illustrated in Fig. 26, we propose a Linux-based MIC framework. This framework tackles multi-hop MIC challenges by integrating TCP/IP, Linux kernel modules, and the MI solutions discussed in the preceding section, balancing protocol compatibility with UG-WSN requirements.

[Figure] Fig. 26. Proposed framework for TCP/IP & Linux support, spanning the hardware (MI transceiver, MCU/FPGA with MI physical-layer and cross-layer Part A solutions), kernel space (MI MAC solutions, MI TCP/IP adapter, MI routing solutions, cross-layer Part B, MI character/network device drivers, netfilter), and user space (MIC applications, cross-layer Part C, and deep learning frameworks such as TensorFlow and PyTorch). The thick arrow lines represent effective Tx/Rx data, and the thin red arrow lines represent control data for the MI network protocol stack. Boxes with a white background and italic text represent the interfaces between the Linux system and the MI protocol; boxes with a white background represent Linux models.

2) Linux wireless network device driver-inspired architecture: Building on the support schemes for TCP/IP and Linux in EMWCs, such as the Linux driver designs [194] for WiFi and Bluetooth, we develop a Linux-based MIC framework. This framework, as illustrated in Fig. 26, draws parallels with EMWC solutions. It incorporates MI cross-layer solutions (Part A) and MI transceivers, which are hardware- and firmware-based models designed for protocols and algorithms that rely on analog signals, highly parallel computation, or high-accuracy real-time processing, such as channel modeling and estimation. The MI character device (MCD) driver and MI network device (MND) driver are designed to handle the retrieval and storage of MIC data. The MCD driver forwards control data, such as CSI, while the MND driver handles effective data streams, enabling MIC applications to directly access the full TCP/IP protocol through the socket() interface.
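Because the MND driver exposes the MI link as an ordinary network interface, an application needs no MI-specific API at all. The following minimal user-space sketch is our illustration, not part of the framework's code base; the destination address and port are hypothetical. It sends a short sensor reading over UDP, whose connectionless operation suits the low data rates of MI links better than a full TCP connection:

```c
/* Minimal user-space sketch: sending sensor data over an MI link that
 * the MND driver exposes as a regular Linux network interface.
 * The destination address and port below are hypothetical examples. */
#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);    /* UDP datagram socket */
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in dst = {0};
    dst.sin_family = AF_INET;
    dst.sin_port = htons(5683);                  /* e.g., the CoAP port */
    inet_pton(AF_INET, "10.0.0.2", &dst.sin_addr);

    const char payload[] = "T=12.4C";            /* keep payloads short: headers dominate */
    ssize_t n = sendto(fd, payload, strlen(payload), 0,
                       (struct sockaddr *)&dst, sizeof dst);
    if (n < 0) perror("sendto");

    close(fd);
    return 0;
}
```

The packet then traverses the unmodified Linux TCP/IP stack, the MI-specific netfilter hooks, and finally the MND transmit callback described next.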
3) MI-specific models: In this framework, several models differ from conventional Linux wireless drivers:
• MI TCP/IP adapter: This model adapts the native Linux TCP/IP framework with MI-specific TCP/IP optimization schemes proposed by researchers, such as TCP/IP header compression and RTT optimization for the MIC.
• MI routing solutions model: This model can implement and help evaluate specific MI routing algorithms proposed by researchers. These algorithms can be invoked similarly to the MI TCP/IP adapter.
• Cross-layer solutions model (Part C): This model supports deep learning-based solutions, such as deep JSCC. However, as it resides in the user space of the operating system, it may not be suitable for algorithms with stringent real-time requirements.
• Cross-layer solutions model (Part B): To reduce device size and energy consumption, some MI modulation and channel coding can be incorporated into the MI cross-layer solutions (Part B) model rather than in the FPGA or MI cross-layer solutions (Part A) model. The TTE MIC system uses the VLF-LA technique, which allows the primary CPU sufficient processing time, enabling the removal of the FPGA and MI cross-layer (Part A) models in some simpler MI nodes.

B. System Architecture and Implementation

Fig. 26 describes the system implementation of the Linux-based MIC framework, focusing on the interface between the Linux TCP/IP stack and the MI solutions models.

1) Linux MCD and MND drivers: The MCD and MND driver models mirror EMWC driver architectures, facilitating hardware-Linux data exchange and laying a foundation for the MI-specific models. The MCD driver operates as follows.
• Register a character device, associated with several Linux file operations whose members include the open, close, read, write, and ioctl callbacks, to the kernel;
• Invoke MI-specific models (MI cross-layer solutions Parts A and B) via hardware interrupts and Linux/user APIs;
• The write and ioctl callbacks can manage downlink control streams (from the application layer to the physical layer);
• The read and ioctl callbacks can manage uplink control streams (from the physical layer to the application layer).

The MND driver operates as follows (a minimal sketch is given after this subsection).
• Register a Linux MND, integrating its netfilter hooks with the MI TCP/IP adapter and MI routing models; Linux can invoke these models via the hooks automatically;
• Invoke MI MAC solutions via the callbacks of the MND;
• Manage routing and downlink data streams (from the application layer to the physical layer) via the MND callbacks (e.g., ndo_start_xmit);
• Manage routing and uplink data streams (from the physical layer to the application layer) via the MND callbacks (e.g., netif_rx).

2) MI physical-layer solutions and MI cross-layer solutions (Part A): In line with most EMWC solutions, which employ hardware-centric PHY/MAC implementations in dedicated chips, the models of MI physical-layer solutions and MI cross-layer solutions (Part A) reside in MCUs or FPGAs (not Linux). A few MI solutions exhibit strong dependency on analog signal processing and impose stringent timing constraints that exceed the performance capabilities of the Linux kernel. For this reason, these models handle modulation/coding and cross-layer logic (e.g., interference mitigation) using FPGA/MCU cores. These models interface with the Linux system via custom bus protocols, e.g., the serial peripheral interface (SPI), or memory-mapped I/O.
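To make the MND registration concrete, the following minimal kernel-module sketch is our illustration, not the framework's actual code; mi_lower_tx() is a hypothetical entry point into the Part A/B transmit path. It registers a network device whose ndo_start_xmit callback hands frames to the MI transmit path, and whose receive path injects frames into the Linux TCP/IP stack via netif_rx():

```c
/* Kernel-space sketch (not a complete driver): registering an MI
 * network device (MND). mi_lower_tx() is a hypothetical call into
 * the MI MAC/PHY models (cross-layer Parts A/B). */
#include <linux/module.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>

extern void mi_lower_tx(const u8 *buf, size_t len);  /* hypothetical */

static struct net_device *mi_dev;

static netdev_tx_t mi_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
    /* Downlink path: hand the frame to MAC framing, modulation/FEC,
     * and finally the MI transceiver. */
    mi_lower_tx(skb->data, skb->len);
    dev_kfree_skb(skb);
    return NETDEV_TX_OK;
}

/* Uplink path, called from the MI transceiver's receive interrupt. */
void mi_deliver_rx(struct sk_buff *skb)
{
    skb->dev = mi_dev;
    netif_rx(skb);              /* inject into the Linux TCP/IP stack */
}

static const struct net_device_ops mi_netdev_ops = {
    .ndo_start_xmit = mi_start_xmit,
};

static int __init mi_mnd_init(void)
{
    mi_dev = alloc_etherdev(0); /* Ethernet-style framing for simplicity */
    if (!mi_dev)
        return -ENOMEM;
    mi_dev->netdev_ops = &mi_netdev_ops;
    return register_netdev(mi_dev);
}

static void __exit mi_mnd_exit(void)
{
    unregister_netdev(mi_dev);
    free_netdev(mi_dev);
}

module_init(mi_mnd_init);
module_exit(mi_mnd_exit);
MODULE_LICENSE("GPL");
```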
Algorithm 1: Example of a TCP/IP transmit process.
/* Conventions: italic text represents a Linux API, and Model_name.Function_name(Input_data) represents a function of a model with parameter Input_data */
/* Boot/initialization stages of Linux */
1  Register the MCD and MND;
   /* IP-HC in the TCP/IP adapter is called after routing */
2  Register a netfilter hook named MI_adapter with hooknum=NF_IP_POST_ROUTING, hook=MI_IPHC, priority ≥ 100;
3  Register a netfilter hook named MI_routing with hooknum=NF_IP_LOCAL_OUT, hook=frequency_switchable_routing (cf. [57]), priority ≤ −400;
   /* Runtime stage of Linux */
4  for Loop do
5      Call MI_Channel_Estimation in Algorithm 2, which generates MI_CSI;
6      User calls sendto(Destination_IP, Destination_port, MI_packet), which generates MI_IP_packet;
7      Linux TCP/IP calls the hook MI_routing.frequency_switchable_routing(MI_IP_packet) to complete routing based on [57], see Fig. 25;
8      Linux TCP/IP calls the hook MI_adapter.MI_IPHC(MI_IP_packet) and generates MI_IPHC_packet;
9      Obtain MI_IPHC_packet via the callback MND.ndo_start_xmit();
10     MND.ndo_start_xmit() calls MI_MAC.Mac_based_on_[54](MI_IPHC_packet, MI_CSI) to complete MAC and generates MI_MAC_packet;
11     MND.ndo_start_xmit() calls MI_cross_layer_Part_B.BPSK_Polar_based_on_[46](MI_MAC_packet, MI_CSI) and generates MI_BPSK_packet and control bytes;
12     This BPSK and Polar encoder function downloads the MI_BPSK_packet and control bytes to the MI transceiver's corresponding buffer and hardware registers, respectively;
13 end
14 Unregister all devices and netfilter hooks.

Algorithm 2: MI_Channel_Estimation() based on a deep learning framework.
1  The MCD calls MI_crosslayer_Part_B.MI_estimation_based_on_[144](MI_CSI);
2  The MCD calls MI_crosslayer_Part_B.MI_fast_fading() to request MI_crosslayer_Part_C.MI_fast_fading() via the Linux callback read();
3  MI_crosslayer_Part_C.MI_fast_fading() completes the MI fast fading estimation based on Fig. 27 and the PyTorch framework;
4  MI_crosslayer_Part_C.MI_fast_fading() returns the updated MI_CSI to MI_crosslayer_Part_B via the Linux callback write();
5  return the updated MI_CSI

3) MI cross-layer solutions (Part B): For VLF cases within the Linux kernel's processing capabilities, this model can directly implement most physical and MAC layer solutions.
• Perform MI channel estimations without deep learning (e.g., [144]) in the Linux interrupt service routine registered by the MCD driver upon receiving the control stream from the MI transceiver;
• Perform modulations (e.g., BPSK) and FECs (e.g., the Polar codec [46]) upon MAC frame/packet arrival at the MND driver, as shown in Algorithm 1 (Lines 11 and 12);
• Forward the control stream to MI cross-layer solutions (Part C) via read, write, and ioctl.

4) MI cross-layer solutions (Part C): This model facilitates access to the interfaces of deep learning platforms (e.g., PyTorch), as these interfaces and platforms can only run in user space. The details are as follows (cf. Algorithm 2):
• MI cross-layer solutions (Part B) forward the required data to Part C via the read callback;
• Part C executes algorithms using the interfaces of the deep learning platform in user space;
• Part C returns the results to Part B via the write callback.
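Lines 2–3 of Algorithm 1 correspond, in kernel C, to registering netfilter hooks. A minimal sketch follows; it is our illustration, with mi_routing_fn() and mi_iphc_fn() as hypothetical stand-ins for the routing scheme of [57] and an IP-HC scheme:

```c
/* Kernel-space sketch of Algorithm 1, Lines 2-3: attaching the MI
 * routing and IP header compression (IP-HC) functions to netfilter
 * hook points. The hook bodies are hypothetical placeholders. */
#include <linux/module.h>
#include <linux/netfilter.h>
#include <linux/netfilter_ipv4.h>
#include <net/net_namespace.h>

static unsigned int mi_routing_fn(void *priv, struct sk_buff *skb,
                                  const struct nf_hook_state *state)
{
    /* Frequency-switchable routing decision (cf. [57], Fig. 25). */
    return NF_ACCEPT;
}

static unsigned int mi_iphc_fn(void *priv, struct sk_buff *skb,
                               const struct nf_hook_state *state)
{
    /* Compress the IP header once the routing decision is final. */
    return NF_ACCEPT;
}

static struct nf_hook_ops mi_hooks[] = {
    {   /* High (numerically small) priority: runs before other hooks. */
        .hook = mi_routing_fn, .pf = NFPROTO_IPV4,
        .hooknum = NF_INET_LOCAL_OUT, .priority = NF_IP_PRI_FIRST,
    },
    {   /* Low priority: runs after the kernel is done with the packet. */
        .hook = mi_iphc_fn, .pf = NFPROTO_IPV4,
        .hooknum = NF_INET_POST_ROUTING, .priority = NF_IP_PRI_LAST,
    },
};

static int __init mi_hooks_init(void)
{
    return nf_register_net_hooks(&init_net, mi_hooks, ARRAY_SIZE(mi_hooks));
}

static void __exit mi_hooks_exit(void)
{
    nf_unregister_net_hooks(&init_net, mi_hooks, ARRAY_SIZE(mi_hooks));
}

module_init(mi_hooks_init);
module_exit(mi_hooks_exit);
MODULE_LICENSE("GPL");
```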
5) MI TCP/IP adapter and MI routing solutions: The MI TCP/IP adapter and routing solutions integrate with the Linux TCP/IP framework primarily through netfilter hook points [195], which correspond to distinct stages in the local machine's internal routing process, i.e., NF_xx_PRE_ROUTING (prior to routing decisions), NF_xx_LOCAL_IN (before packets destined for the local system enter the protocol stack), NF_xx_FORWARD (for packets being forwarded), NF_xx_LOCAL_OUT (before packets originated from the local system exit the protocol stack), and NF_xx_POST_ROUTING (after the routing decision has been made but before the packets are transmitted on the network interface). Here, xx denotes the protocol type (e.g., IPv4, IPv6, or the bridge protocol).

Upon Linux startup, the MND driver registers netfilters for the MI TCP/IP adapter and routing solutions. The MI routing netfilter should be set to the highest priority to preempt other routing schemes; MI-specific routing functions attach to the hook points NF_xx_LOCAL_OUT and NF_xx_PRE_ROUTING, as shown in Algorithm 1 (Line 3). The MI TCP/IP adapter (especially IP-HC) is set to a low priority so that the kernel does not modify packets after compression; its functions are associated with NF_xx_POST_ROUTING, as shown in Algorithm 1 (Line 2). These netfilters are automatically invoked by the Linux TCP/IP framework.

6) MI MAC solutions: This model packages the MAC header (see Fig. 24(a)) and maintains a state machine like the one in [54]. MI MAC functions for uplink streams are triggered by MI transceiver interrupts; those for downlink streams are invoked via the ndo_start_xmit callback of the MND (see Algorithm 1, Line 10).

7) Example: Consider an MI protocol stack integrated into the Linux TCP/IP framework. This stack includes physical-layer solutions, such as channel estimation, MI fast fading prediction (see Fig. 27), BPSK modulation, and Polar coding (as in [46]). It also incorporates the MAC solution from [54] and the routing solution from [57]. The TCP/IP transmit process is described in Algorithm 1. When an application sends a packet via the Linux socket API sendto, the packet passes through the Linux TCP/IP framework, including the netfilters associated with the MI routing and IP-HC schemes, and arrives at the MND. The MND invokes the MI MAC algorithm, BPSK modulation, and Polar encoding to generate a physical-layer frame, which is then downloaded to the MI transceiver.

C. Summary

The proposed framework enables researchers to leverage the abundant Linux resources available for communication networks and deep learning, such as OpenZigbee [196], TensorFlow, and RKNN [197]. As a result, it can accelerate MIC research and development.

The proposed framework manages control flow across OSI layers using Linux interrupt service routines and the MCD callbacks read/write. It handles data flow and protocol encapsulation across OSI layers via interrupt service routines, the MND callbacks ndo_start_xmit/netif_rx, and socket APIs. MI-specific network layer protocols are managed through Linux netfilter hooks. The following sections provide further insights into related MI techniques, which can also be supported by this framework.

VIII. RESEARCH CHALLENGES AND FUTURE DIRECTIONS

Sections III–VI have provided a comprehensive overview of a wide range of MIC topics, state-of-the-art methodologies, and theoretical frameworks, including MI channel modeling, P2P MIC, MI relay, and MI network architecture.
Many algorithms and solutions discussed within this review can be implemented and/or evaluated through the MI framework proposed in Section VII; see Table XXIII. For instance, the channel modeling and estimation solutions in [11], [12], [144] require processing analog signals and can be implemented in the MI transceiver model.

In this section, we summarize the potentially challenging issues that have not been addressed in the literature. We also introduce new promising techniques (e.g., deep JSCC, MCNSI, etc.) for future research. Table XXII outlines the remaining issues, their potential solutions, and novel techniques for MIC.

A. MI Fast Fading in Mobile MIC Systems

Until 2020, no concept of an MI fast fading channel had been formally introduced, and research on MI fast fading is still in its early stages (see Table X). In this subsection, we discuss the challenges and remaining tasks for channel statistical characteristics, and elucidate the potential ramifications for established MIC (from Ly1 to Ly3), including outage probability, channel estimation, MI MIMO, and CMIC. Table XXIV summarizes the typical issues arising from the introduction of MI fast fading into MI channels.

1) Statistical modeling when the CLT does not apply: Previous literature has derived closed-form expressions for the CDF/PDF and expectation of the MI fast fading. These expressions are only applicable in typical scenarios, such as underwater [51], [89], 2D TTE [9], and 3D TTE using an MI cellular network [11]. Firstly, unlike in EMWCs, MI fast fading is not caused by signal propagation. As illustrated in Fig. 6, we modeled MI fast fading using four independent random variables $(\phi_S, \theta'_S, \phi_D, \theta'_D)$ with distinct distributions. These four random variables are too few for the CLT to apply, since the CLT requires a large number of independent random variables; thus, deriving a universally applicable CDF/PDF, analogous to the Rayleigh model, becomes challenging. We utilize Monte Carlo simulations to obtain universal models for MIC links (e.g., Fig. 7); however, these models have high time complexity for network algorithms and protocols. Secondly, the antenna design, the antenna carrier, and its mechanical degrees of freedom affect the MI fast fading. For example, antennas composed of orthogonal MIMO coils, RPMA, M2I, and magnetoresistance exhibit different CDFs of MI fast fading. The vibration model of a backpack antenna may not follow the boundary p(x) distribution, and the mechanical degrees of freedom of a vehicle influence the distributions of the horizontal components of the antenna vibrations $\phi_S$ and $\phi_D$. These interdisciplinary issues also pose a challenge to the derivation of a universal statistical model.
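To illustrate the Monte Carlo approach mentioned above, the following sketch estimates one point of an empirical fast fading CDF. It assumes, purely for illustration, the SISO polarization gain $J = 1 + 3\cos^2\theta$ (quoted later in this subsection) with the mutual angle perturbed by zero-mean Gaussian vibrations at both ends; the actual distributions of $(\phi_S, \theta'_S, \phi_D, \theta'_D)$ are scenario-dependent:

```c
/* Monte Carlo sketch of an empirical CDF for the MI fast fading gain.
 * Assumptions (ours, for illustration only): SISO polarization gain
 * J = 1 + 3*cos^2(theta), with the angle perturbed by independent
 * zero-mean Gaussian vibrations of std sigma (the AVI) at Tx and Rx. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

static double gauss(void)            /* Box-Muller standard normal */
{
    double u1 = (rand() + 1.0) / (RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / (RAND_MAX + 2.0);
    return sqrt(-2.0 * log(u1)) * cos(2.0 * M_PI * u2);
}

int main(void)
{
    const int N = 1000000;           /* Monte Carlo trials            */
    const double theta0 = M_PI / 4;  /* nominal antenna orientation   */
    const double sigma = 0.1;        /* average AVI in radians        */
    const double j0 = 2.0;           /* evaluate the CDF at J = 2     */
    int below = 0;                   /* counts trials with J <= j0    */

    srand(42);
    for (int i = 0; i < N; i++) {
        /* Independent vibrations at Tx and Rx shift the mutual angle. */
        double theta = theta0 + sigma * gauss() + sigma * gauss();
        double c = cos(theta);
        if (1.0 + 3.0 * c * c <= j0)
            below++;
    }
    printf("F_J(%.1f) ~= %.4f\n", j0, (double)below / N);
    return 0;
}
```

Sweeping j0 over a grid yields the full empirical CDF; as the text notes, evaluating such tables online is what makes Monte Carlo models costly for network algorithms and protocols.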
TABLE XXII
FUTURE ISSUES, PROMISING WORK AND ADVISED METHODOLOGIES FOR TTE MICS

Types | Research aspects | OSI layer | Future issues & works | Advised methodologies
P2P MICs | MI fast fading | Ly1 | 1) A universal statistical model; 2) A velocity-dependent expectation/variance; 3) See Table XXIV | 1) Maxwell-equations-based derivation and FEMs; 2) Probability-theorem-based derivation; 3) Deep JSCC; 4) Transformer models or LLMs to predict the average AVI
P2P MICs | Antenna design | Ly1 | 1) Inertia issue of a mechanical antenna; 2) Using high-sensitivity and small-size magnetic sensors | –
P2P MICs | MCNSI | Ly1–Ly3, Ly7 | 1) Balancing between MI, navigation and sensing; 2) Chaotic RSSIs | 1) GAN techniques and the DWE algorithm; 2) Formulating a joint optimization problem
P2P MICs | Mixed-field channel model | Ly1 | 1) Difficulties in sub-modeling of the channel power gain GSD | 1) Maxwell-equations-based derivation
P2P MICs | Inhomogeneous medium | Ly1 | 1) Boundary conditions for multi-layer materials; 2) Boundary conditions between the near-field region and radiation field region; 3) Statistical characteristics of dynamic media | 1) Maxwell-equations-based derivation; 2) Geometrical approximation by joining regular shapes; 3) Data fusion and attention mechanisms
P2P MICs | JSCC | Ly1, Ly7 | 1) Image, audio & video data transmission; 2) Content-based communication; 3) High-dimensional data communication | 1) Deep JSCC; 2) Multimodal semantic communication; 3) Offline distilled LLM
MI Relay | CMIC | Ly1 | 1) Spatial distribution of crosstalk effects; 2) Multiple active relays with misaligned antennas | 1) KVL equations
MI network | Heterogeneous MI network | Ly1–Ly7 | 1) Channel licensing and spectrum sensing; 2) Connectivity issues; 3) Power and throughput optimization | 1) Percolation theory; 2) Multiagent RL
MI network | MI MAC | Ly2 | 1) The SISO antenna case; 2) Balancing the frame error ratio and EPR; 3) Power and throughput optimization; 4) Communication security issues | 1) Machine learning for the CSMA-CA issue; 2) Using antenna orientation information for the frame error ratio & EPR issue; 3) Bit-level compression for the MAC header
MI network | MI routing | Ly3 | 1) Effects of ESD and JSD | –
MI network | MI TCP/IP | Ly1–Ly4 | 1) Large TCP/IP packet header unsuitable for the ultra-narrow-band MI channel; 2) Significant RTT suppression for TCP connections; 3) Excessive duration for connection establishment | 1) Header compression techniques; 2) RTT optimization based on MI channel conditions; 3) Optimizing data chunking and aggregation; 4) Intelligent retransmission strategies; 5) Machine learning solutions under MI fast fading
Implementations | MI network framework | Ly1–Ly7 | 1) Experiments and testing for TTE MIC systems; 2) TCP/IP support | Fig. 26

TABLE XXIII
OUR PROPOSED PROTOTYPE FOR MI NETWORK IMPLEMENTATION TO SUPPORT EXISTING AND FUTURE STUDIES

Models in Fig. 26 | Recommended solutions† | Typical refs.
MI transceiver | Channel modelings and estimations | [10]–[12], [36], [37], [144]
MI cross-layer solutions (Part A) | Modulations; Polar coding; JSCC; high real-time CSMA-CA | [45], [46]
MI MAC solutions | Contention-based MI MAC protocols | [54], [55], [168]
MI cross-layer solutions (Part B) | Modulations, FEC; Polar coding; data collection | [11], [45], [46], [52], [80]
MI routing solutions | MI connectivity and routing solutions | [8], [57], [85], [86], [120]
MI TCP/IP adapter | IP-HC, RTT optimization, intelligent retransmission, etc. | Future research
MI cross-layer solutions (Part C) | Deep JSCC; algorithms with deep RL | Future research
MI character device driver | Reading/writing protocol control data from/to the MI transceiver | [194]
MI network driver | Reading/writing effective data from/to the MI transceiver | [194]
† In this table, we prioritize solutions for MIC under the VLF-LA case.

TABLE XXIV
POTENTIAL FURTHER ISSUES IF MI FAST FADING CHANNELS ARE INTRODUCED

OSI layers | Research aspects | Involved refs. | Typical issues with the introduction of MI fast fading
Physical layer | Channel modeling | [9], [11], [89] | More universal scenarios
Physical layer | Achievable rate | [7], [10], [51], [130] | Random outage probability in most cases
Physical layer | Channel estimation | [144] | Unpredictable CSI; spectrum obtaining
Physical layer | Channel coding | [46] | Unpredictable average AVI
Physical layer | Power control | [11] | Frequent disruptions of the Nash equilibrium due to unstable average AVIs
Physical layer | MIMO & CMI | [7], [10], [36], [51] | Spatial diversity; outage probability reduction
Data link layer | MAC | [54], [55] | CSMA-related time / time slots
Network layer | Connectivity | [8], [169] | Irregular shape of the MI coverage space
Network layer | Routing | [46] | Variable latency and energy consumption
Cross-layer | Cross-layer protocols | [52], [58] | Unpredictable average AVI; QoS requirements
Cross-layer | Networking | – | Variable network topology

Notably, the practical relevance of existing MI fast fading models remains untested. Monte Carlo simulations have validated three such models, with AVIs distributed as uniform, BCS, and boundary p(x), respectively. Despite this validation, the practical applicability of these models is yet to be confirmed. Future research can give priority to experimental validation using measured disturbance profiles across diverse environments to enhance their practical applicability.

2) Outage probability with velocity-dependent AVIs: MI fast fading influences the achievable rate through the expectation $E(J_{SD})$ and the outage probability. For mobile MIC, the expectation $E(J_{SD})$ is determined by the average AVIs $\sigma_S$ and $\sigma_D$. These AVIs are velocity-dependent and, since they depend on the driver's behavior, statistically unpredictable. Additionally, unlike in traditional MIC, the outage probability regains its physical meaning due to fast fading. Both the unpredictability of $E(J_{SD})$ and the outage probability of a mobile MIC link have a significant impact on the existing solutions for MIC networks. Unlike in EMWC, such an outage probability is still random in most cases. However, as vehicle velocity reflects human activity, attention-based deep learning (e.g., Transformer and BERT models) and large language models (LLMs) can be considered to address the challenges caused by an unpredictable $E(J_{SD})$.

3) Lack of frequency spectrum for channel estimation: In studies of MIC channel estimation [144], the MIC channel is assumed to be quasi-static. For mobile MIC with a fast fading channel at a given time $t$, the mutual inductance $M_{SD} = M_{SD}(\phi_S(t), \theta'_S(t), \phi_D(t), \theta'_D(t)) = M_{SD}(t)$ changes rapidly within the coherence time $T_c$; a mobile MIC channel thus exhibits time-selective fading. Let $T_c = N_t T_t$, where $N_t$ is the number of symbols between two pilot signals and $T_t$ is the duration of a symbol. The received signal is $\tilde{y}$ with the CDF $F_{J(\phi_S(t), \theta'_S(t), \phi_D(t), \theta'_D(t))}(y)$. For time-selective fading, one of the widely used methods is interpolation between symbols 1 and $N_t$.
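A minimal sketch of this interpolation step, assuming (for brevity) real-valued gain estimates at the two pilots:

```c
/* Sketch of channel interpolation for a time-selective MI channel:
 * the gain is estimated at two pilot symbols (1 and Nt) and linearly
 * interpolated for the data symbols in between. Real-valued gains
 * and the numbers below are illustrative assumptions. */
#include <stdio.h>

int main(void)
{
    const int Nt = 8;     /* symbols between two pilots (Tc = Nt*Tt) */
    double h1  = 0.92;    /* gain estimated at pilot symbol 1        */
    double hNt = 0.63;    /* gain estimated at pilot symbol Nt       */

    for (int n = 1; n <= Nt; n++) {
        /* Linear interpolation across the coherence window. */
        double hn = h1 + (hNt - h1) * (double)(n - 1) / (Nt - 1);
        printf("symbol %d: h ~= %.3f\n", n, hn);
    }
    return 0;
}
```

How well such interpolation tracks the true gain depends on how fast $J(\cdot)$ varies between pilots, which is why its frequency spectrum matters, as discussed next.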
Hence, the frequency spectrum of $J(\phi_S(t), \theta'_S(t), \phi_D(t), \theta'_D(t))$ is crucial. As the CDF of $J_{SD}$ helps obtain this frequency spectrum, deriving it is an open issue for MIC channel estimation. Additionally, the statistically unpredictable expectation $E(G_{SD})$ makes it hard to obtain accurate CSI, which is also an issue for channel estimation.

4) Unpredictable bandwidth for spatial diversity in MIMO and CMICs: For traditional MIC links with quasi-static channels, MIMO and CMI techniques enhance the signal strength at the Rx node, and the received SNR determines the feasibility of MIMO and CMI in these scenarios. For MIC with fast fading, the outage probability must also be considered. For example, in Fig. 22, CMIC is feasible in $R_A$ with $\Upsilon_{AF} > \Upsilon_{SD}$, where $\Upsilon_{AF}$ and $\Upsilon_{SD}$ are the received SNRs of the CMI and DMI links at D, respectively. For mobile MIC, not all points in $R_A$ are suitable for a CMI relay due to the outage probability conditions; similarly, some points outside $R_A$ may be suitable for a CMI relay. In addition, the achievable rate of an AF-relay CMI system can be expressed as [10]

$$C_{AF} = \frac{1}{2} \int_{f_0 - \frac{1}{2} B_{AF}(v)}^{f_0 + \frac{1}{2} B_{AF}(v)} \log_2\!\left(1 + \Upsilon_{AF}(f)\right) df, \qquad (24)$$

where the bandwidth $B_{AF}$ is a function of the APO $v$ and, in turn, of the MI fast fading gains $J_{SD}$, $J_{SR}$, and $J_{RD}$. This poses a challenge to the ergodic rate calculation of mobile CMI links.

5) Non-uniform and irregular MI coverage: For a stationary MI SISO link, the polarization gain satisfies $J_{SD} = 1 + 3\cos^2\theta_{SD}$, where $\theta_{SD}$ is the angle between the normal vector of the coil and the line SD, implying that the MI coverage space has a petal shape. However, it is observed in Fig. 7 that the shape of the MI coverage space may become irregular in the presence of MI fast fading. This is a challenging issue for MI connectivity modeling. On the other hand, MI fast fading may increase the channel power gain, as shown in Fig. 7, presenting an exciting opportunity for network optimization.

6) Proposed framework for average AVI prediction: The average AVI relies significantly on a vehicle's velocity, which depends on driver intent and is hard to predict using conventional methods (e.g., the Kalman filter). Attention-based deep learning can be an alternative for obtaining average AVIs.

[Figure] Fig. 27. Proposed framework for Rx average AVI $\sigma_D$ prediction: inputs → Transformer (prediction) → Dense layer (dimensionality reduction) → softmax → outputs (k×1) → result = max(outputs). This is a k-class classification supervised deep learning framework in which the predicted AVI $\sigma_D$ (output) is discretized into k levels, and $\delta$ is the boundary margin. Inputs include the average AVIs of the Tx and Rx ($\sigma_S$, $\sigma_D$), road/traffic conditions, and vehicle velocity, and are assumed to be pre-denoised.

[Figure] Fig. 28. Proposed framework for MI crosstalk impedance prediction, with the same Transformer-Dense-softmax pipeline as Fig. 27. Here, the classes $P[\Delta Z_{pa1} < -\varepsilon - \frac{\delta}{2}]$, $P[-\varepsilon + \frac{\delta}{2} \le \Delta Z_{pa1} \le \varepsilon - \frac{\delta}{2}]$, and $P[\Delta Z_{pa1} > \varepsilon + \frac{\delta}{2}]$ represent a decrease, no change, and an increase in the impedance $Z_{pa1}$, respectively, where $\varepsilon$ is a boundary threshold and $\delta$ is the boundary margin. Inputs are assumed to be pre-denoised.
We propose a Transformer-based supervised deep learning framework to obtain a discretized/classified average AVI, as shown in Fig. 27. In this framework, historical sequence vectors act as inputs, pass through a Transformer for prediction, and then undergo dimensionality reduction via a Dense layer. The final outputs are normalized by the softmax function, and the result is the index of the maximum entry of the output tensor. The boundaries $\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_{k-1}$ are determined by analyzing historical average AVI distributions (including the mean, variance, and extreme values) and aligning with practical operational requirements, based on the pre-designed $k$ classes.

Assume a standard Transformer configuration with $h = 8$ attention heads, $b = 12$ multi-head attention blocks, and hidden dimension $d = 64$. The sequence length is constrained to $n = 128$ due to on-device memory limits. The computational complexities of the Transformer blocks and the subsequent Dense layer are $O(bn^2d + 8nd^2)$ and $O(dk)$, respectively, for a total of 16,818,176 FLOPs when $k = 5$. This workload is executable on a low-speed MCU (100 MHz ARM Cortex-M4) with an inference latency of up to 75 ms. The remaining issues are listed in Table XXIV.

B. Antenna Design

Due to the lower circuit gain $C_{SD}$, coil-based antennas are large, especially in VLF-LA systems for TTE applications. In deep underground environments, the large size makes it difficult to use MIMO techniques. Several non-coil antenna types can be considered for future long-distance MICs:

1) Mechanical antenna: A mechanical antenna has a smaller size, suiting mid-range and mobile MICs [198]–[200]. Such an antenna, however, presents challenges stemming from inertia (see the sketch after this subsection).
• Additional latency: MIC protocols (e.g., modulations) need additional preset time slots between different symbols to overcome antenna inertia.
• Additional energy consumption: Changing the mechanical state requires additional energy to overcome inertia. Specifically, transmitting '010101' (5 state switches) requires more energy than '111000' (1 state switch).
• Variable data rate under the same packet length: For packets of the same length, transmitting '010101' takes 5 GTSs, while '111000' takes only 1 GTS, meaning '010101' transmits more slowly than '111000'.

2) Magnetic sensor: Mechanical antennas are often used at Tx nodes, while magnetic sensors can be utilized at Rx nodes. High-sensitivity magnetic sensors, such as TMR and quantum magnetic sensors, have potential applications in underground MICs. Due to the limitations of current magnetic sensor techniques, the receiver sensitivity of a magnetic sensor is much lower than that of a coil, and few studies investigate magnetic sensor-based MICs for underground applications. Nevertheless, magnetic sensors have two advantages.
• Small antenna size: The size of a magnetic sensor is much smaller than that of a coil. For example, a TMR sensor in a small outline package 8 (SOP8) has dimensions of 6 mm × 5 mm × 1.5 mm, several orders of magnitude smaller than coils.
• Low crosstalk effect: The crosstalk effect among magnetic sensors can be greatly reduced, so techniques such as massive MIMO and intelligent reflective surfaces can be introduced into MICs.
Given these advantages and the advancement of magnetic sensor techniques, magnetic sensors emerge as a highly promising option for MIC antenna design.
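Returning to the mechanical-antenna bullets above, the inertia cost is easy to quantify: the extra latency and energy scale with the number of state switches in the transmitted bit pattern. The following sketch counts those switches and converts them to extra guard time slots (GTSs) and energy under per-switch costs that are hypothetical placeholders:

```c
/* Sketch of the mechanical-antenna timing point made above: each
 * change of bit value costs one extra guard time slot (GTS) and some
 * switching energy, so '010101' is slower and costlier than '111000'
 * even at equal packet length. Per-switch costs are hypothetical. */
#include <stdio.h>
#include <string.h>

static int state_switches(const char *bits)
{
    int switches = 0;
    for (size_t i = 1; i < strlen(bits); i++)
        if (bits[i] != bits[i - 1])
            switches++;                /* one mechanical state change */
    return switches;
}

int main(void)
{
    const char *patterns[] = { "010101", "111000" };
    const double gts_ms = 50.0;        /* hypothetical GTS duration   */
    const double Es_mJ  = 2.0;         /* hypothetical energy/switch  */

    for (int i = 0; i < 2; i++) {
        int s = state_switches(patterns[i]);   /* 5 and 1, respectively */
        printf("%s: %d switches -> +%.0f ms latency, +%.0f mJ\n",
               patterns[i], s, s * gts_ms, s * Es_mJ);
    }
    return 0;
}
```

This is also why, for RPMA-based links, packets of equal length can yield different effective data rates, as noted in Section VI-F.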
3) MI beamforming and massive MI MIMO: Massive MIMO technology plays a pivotal role in fifth-generation (5G) mobile communication systems and beyond. MI MIMO and beamforming techniques are also widely employed for wireless power transfer [201]–[205] and short-range MIC [94], [206]. However, these techniques exhibit extremely low performance for mid-range and long-range MICs due to the MI crosstalk effects among coils, and there is no literature on massive MI MIMO. With the advancement of RPMA techniques and magnetic sensor technology, MI beamforming and massive MIMO techniques hold promise, as RPMA and magnetic sensors generate minimal MI crosstalk among coils. For a coil-based MI system, avoiding crosstalk is crucial for the application of massive MIMO; our simulation in Fig. 19 may assist in addressing this challenge.

C. MI Crosstalk Effect Prediction Strategies

It is observed in (23) that the crosstalk impedances $Z_{pa1}(S, D, R, \ldots)$ and $Z_{pa2}(S, D, R, \ldots)$ are crucial for crosstalk effect mitigation. Unfortunately, the expressions for the crosstalk impedances are complex multimodal functions, which makes predicting $Z_{pa1}(S, D, R, \ldots)$ and $Z_{pa2}(S, D, R, \ldots)$ with conventional methods challenging. We propose a Transformer-based framework (see Fig. 28) for crosstalk impedance prediction in support of crosstalk effect mitigation. This framework is similar to that for average AVI prediction (see Fig. 27) with $k = 3$. Its computational complexity is 16,801,792 FLOPs with $h = 8$ attention heads, $b = 12$ multi-head attention blocks, hidden dimension $d = 64$, and sequence length $n = 128$. The latency remains under 75 ms on the 100 MHz Cortex-M4.

D. MI Communication-Navigation-Sensing Integrated System

The MCNSI system aims to achieve high intelligence, supporting high-quality localization and sensing services while achieving high-quality communication. As summarized in Table XXV, traditional MIC research has focused on the stability of signals across different positions. By contrast, MI localization/navigation focuses on signal differences among different positions, while MI sensing emphasizes sensitivity to the direction and strength of magnetic fields, both of which are highly susceptible to interference and noise.

Recently, there have been some studies on joint localization and communication in 5G/6G technologies using intelligent reflecting surfaces [209], [210]. These signal-reflection-based techniques may not be suitable for MICs. The key issues, derived from these challenges and Table XXV, are listed below.
• Small MI signal difference: Since the space gain $S_{SD}$ in an MI channel decays with the 6th power of distance (see (4)), the difference in MI signal strength between two points is much smaller than that of an EMW signal in a long-distance communication environment.
• Chaotic received signal strength indicator (RSSI): For mobile MICs, velocity-dependent MI fast fading channels cause the MI RSSI around the MI base station to be chaotic.
• Balance between communication and navigation: In the MCNSI system, since small differences in MI signals are beneficial for MICs but detrimental to MI localization and navigation, the localization accuracy and achievable communication rate often cannot reach their optima simultaneously. We need to balance these two performance metrics.
• Interference and noise: As both the MIC and MI navigation subsystems are expected to generate sufficiently strong signals, these signals can interfere with MI sensing.
Moreover, the carrier frequency of TTE MIC using VLF-LA is close to that of MI sensing, making interference suppression techniques (e.g., frequency band isolation) more challenging for the MI sensing subsystems.

For the first issue, we can use dynamic weighted evolution/learning (DWE) from [115]; furthermore, we consider generative adversarial networks (GANs) to obtain a super-resolution model of MI signal fingerprints. For the second issue, RL techniques can be used for stochastically changing communication environments. For the third issue, the key method is to obtain closed-form expressions for the ratio of time slot allocation between communication and navigation, and to formulate a joint optimization problem of communication and navigation. For the fourth issue, a joint time-space-frequency isolation method can be adopted for appropriate electromagnetic compatibility design.

TABLE XXV
COMPARISON OF MIC, MI LOCALIZATION/NAVIGATION AND MI SENSING

Aspects | MIC | MI localization / navigation (cf. [24], [207]) | MI sensing (cf. [208])
Key | Alternating magnetic fields | Magnetic field differences | Sensitivity to magnetic fields' direction & strength
Techniques | Modulation, channel coding, MAC, routing | Magnetic fingerprint, ToA, inertial navigation | Interference & noise suppression
Signals | Active sources | Active sources | Passive sources
Components | Coil, RPMA, M2I | Coil, RPMA, inertial measurement unit | TMR, giant magnetoresistance, Hall sensor, coil
Applications | UG-WSN, UW-WSN, UG robot communication | AUV, indoor positioning, UG robot navigation | Oxygen content detection, current measurement

[Figure] Fig. 29. Near-field ranges (lg(m)) versus the conductivity of the media (S/m) for frequencies from 100 Hz to 10 MHz. The simulation parameters are listed in Table III, except for the media conductivity $\sigma_u$ and the frequency $f$.

E. Mixed-Field MI Channel Model

The MIC channel is modeled within the near-field range ($k_0 d_{SD} \ll 1$) in the vast majority of MIC research. A few works, such as [78], [79], [126], focus on MIC in the non-near-field range (called mixed-field MIC) using (1), (2), and (3), where $k_0 d_{SD} \ll 1$ does not hold. Under mixed-field conditions, the channel power gain $G_{SD}$ is difficult to decompose into the sub-models $C_{SD}$, $S_{SD}$, $E_{SD}$, and $J_{SD}$. This makes the MIC channel model difficult to analyze, since more effects of device components and environmental factors must be considered. For example, in the near-field model (4), the antenna-vibration-based MI fast fading is primarily related to the MI polarization gain $J_{SD}$, which represents the APOs. In the mixed field, further device and environment parameters ($\mu_u$, $\epsilon_u$, $\sigma_u$) also determine the antenna-vibration-based MI fast fading, according to (2) and (3). In scenarios with a high-conductivity medium, the near-field range is very small. As shown in Fig. 29, when the media conductivity is $\sigma_u \simeq 0.01$ S/m, the near-field range is less than 10 m at $f_0 = 100$ kHz. For TTE MIC channels with higher frequency and media conductivity, the radiation field cannot be disregarded, and studies of existing upper-layer MIC protocols relying on a near-field model may encounter issues.

F. Multi-Layer and Inhomogeneous Media

For TTE scenarios, the underground medium is unlikely to be homogeneous and isotropic.
There are three scenarios:
• Multi-layered medium: The medium near mineral deposits is often multi-layered. Among these layers, the mineral layer may have a high conductivity and even a high permeability, which may invalidate existing works.
• Inhomogeneous medium with arbitrary geometric shapes: This scenario appears more frequently than the multi-layered one, e.g., underground tunnels like the gray thick line in Fig. 22, urban subway systems, and subterranean business districts.
• Dynamic medium: This scenario typically appears in oil fields, underground rivers, and mobile MICs. Here, the MIC channel exhibits MI fast fading, albeit with a coherence time longer than the symbol duration.

For the first scenario, the boundary conditions between different layers and the boundary between the near-field region and the radiation field region need to be considered. For the second scenario, a geometrical approximation (see [7]) can be used to transform the geometric shapes; methods based on the Maxwell equations (see [48], [126]) and finite element methods (FEM) (see Fig. 4) can then analyze the MIC channel model. For the third scenario, the statistical characteristics of the dynamic medium are a key issue. While derivations based on classical probability theorems can be used, the attention mechanism can be applied to obtain crucial channel information.

G. Image Transmission and Deep JSCC

Image and video transmission remains an intriguing yet unattainable function for TTE communications. It necessitates a higher achievable rate, potentially beyond Shannon's capacity for MICs utilizing VLF-LA. Using separate source-channel coding (SSCC), such as BCH and Polar coding, to further enhance MI performance increases complexity with minimal gains, hindering sustainable development. In the EMWC area, the JSCC technique [211] has been explored to tackle this issue. Meanwhile, deep learning has been widely used to address physical-layer issues in communication systems [192]. Deep JSCC [212]–[214] is a JSCC technique using deep learning, whose source code is available on GitHub [215]; it is a highly efficient approach for transmitting high-dimensional data.

[Figure] Fig. 30. Block diagram of P2P MIC based on deep JSCC: an image passes through the JSCC encoder (conditioned on MIC parameters), the MI channel, and the JSCC decoder to reconstruct the image.

Employing deep JSCC techniques, one can compress high-dimensional data, including underground environment information, through multimodal semantic or other goal-oriented analyses. If the deep JSCC technique is successfully applied to MICs (see Fig. 30), the MIC achievable rate may exceed Shannon's capacity limit. Such a technique may make the transmission of images and even videos feasible. According to [216], the deep JSCC framework can be formulated as

$$x = C_\alpha(S_\beta(s)), \qquad (25)$$

where $x$ is the encoded symbol; $s$ is the input signal; $C_\alpha(\cdot)$ is the neural network of the channel encoder with the parameter set $\alpha$, including the MI device parameters and the underground environments; and $S_\beta(\cdot)$ is the neural network of the source encoder with the parameter set $\beta$.

[Figure] Fig. 31. The topology of the CMIC with multiple active relays featuring misaligned coils: a source, a chain of relays, and a destination.
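Building on (25), the end-to-end training objective commonly used in the deep JSCC literature cited above can be sketched as follows; the decoder networks, channel model, and distortion measure below extend the paper's notation and are our own hedged labels, not taken from [216]:

```latex
% Hedged sketch of a deep JSCC training objective extending (25).
% \bar{C}_{\alpha'} and \bar{S}_{\beta'} denote the channel and source
% decoder networks; h_{MI} is the (possibly fast fading) MI channel
% gain; n is receiver noise; d(\cdot,\cdot) is a distortion, e.g., MSE.
\hat{s} \;=\; \bar{S}_{\beta'}\!\big(\bar{C}_{\alpha'}(y)\big),
\qquad
y \;=\; h_{\mathrm{MI}}\, x + n,
\qquad
\min_{\alpha,\beta,\alpha',\beta'} \;
\mathbb{E}_{s,\,h_{\mathrm{MI}},\,n}\!\left[\, d\big(s,\hat{s}\big) \right].
```

Training over the joint distribution of the source and the MI channel is what lets the encoder adapt to MIC parameters end to end, rather than optimizing source and channel codes separately as in SSCC.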
For text message transmission applications, deep semantic communication techniques [216], [217], which optimize text transmission rather than symbol transmission, can be employed to compress the overall effective data stream. Transfer learning (e.g., [216]) can be employed to reduce the number of pre-trained models required across different channel environments sharing the same statistical model. This is particularly relevant for MI fast fading channels in ad-hoc links, as the PDF is difficult to derive.

H. Cooperative MIC

As depicted in Table XVII, researchers have applied either CMIC-1NR [6], [7], [10], [51] or CMIC-nAR [32], [121]. Consequently, some open issues arise, as follows.
• Crosstalk effect of the CMI system: The crosstalk effect in CMI systems is typically small and often ignored in long-distance CMIC investigations (see Fig. 19). For high-node-density networks, the key issue is the spatial distribution of negative crosstalk effects; inevitably, large-scale UG-WSNs with high-density nodes generate many sufficiently close node pairs. Obtaining this spatial distribution can help upper-layer protocols, such as MAC and routing, avoid the crosstalk effect.
• Multiple active relays with misaligned coils: It has been witnessed that both CMIC-nAR and CMIC-1NR can improve MIC performance. However, the improvement from multiple active relays with misaligned antennas (see Fig. 31) is less clear due to spatial diversity limitations.
For the first issue, we can use KVL and fundamental matrix operations to obtain a closed-form expression for the MI crosstalk effect. However, it is still a challenge to find its spatial distribution.

I. MI Network and Architecture

Table XVIII shows that existing studies can form a basic protocol stack for a large-scale, runnable TTE network. Some open issues remain for further study.

1) Heterogeneous MI Network: In TTE environments, the free space available for antenna deployment varies widely, ranging from large spaces, such as subway stations and malls, to extremely small spaces, such as collapsed tunnels and mine shafts. We can deploy large antenna devices that constitute the backbone of a UG-WSN in large spaces, and small antenna devices that constitute various levels of branch UG-WSNs in small spaces. Together, the backbone UG-WSN and the various levels of branch UG-WSNs form a heterogeneous MI UG-WSN (see Fig. 32), including cognitive MI and femtocell networks. The cognitive network addresses the issue of spectrum scarcity, while the femtocell network improves coverage and capacity in specific areas.

[Figure] Fig. 32. Examples of a heterogeneous MI UG-WSN, showing spectrum hole sensing and interference among sub-networks. Here, primary users (PUs) and a macro base station (MBS) constitute the primary network; secondary users (SUs) and an MBS can function as part of the cognitive network; femtocell users (FUs) and a femtocell base station (FBS) constitute the femtocell network.

Recent upper-layer protocol advancements highlight the significant potential of heterogeneous MI UG-WSNs in the following key aspects.
• Channel licensing and spectrum sensing: The bandwidth of an MIC channel is extremely narrow (see Fig. 21). Perceiving and utilizing spectrum holes is a fundamental task for spectrum reuse in a heterogeneous MI network.
I. MI Network and Architecture

Table XVIII shows that existing studies can form a basic protocol stack for a large-scale, runnable TTE network. Some open issues remain for further study.
1) Heterogeneous MI Network: In TTE environments, the free space for antenna deployment varies widely. It ranges from large spaces, such as subway stations and malls, to extremely small spaces, such as collapsed tunnels and mine shafts. We can deploy large antenna devices that constitute the backbone of a UG-WSN in large spaces and small antenna devices that constitute various levels of branch UG-WSNs in small spaces. Together, the backbone UG-WSN and the various levels of branch UG-WSNs form a heterogeneous MI UG-WSN (see Fig. 32), including cognitive MI and femtocell networks. The cognitive network addresses the issue of spectrum scarcity, while the femtocell network improves coverage and capacity in specific areas.
Fig. 32. Examples of heterogeneous MI UG-WSN. Here, primary users (PUs) and a macro base station (MBS) constitute the primary network. Secondary users (SUs) and an MBS can function as a part of the cognitive network. Femtocell users (FUs) and a femtocell base station (FBS) constitute the femtocell network.
Recent upper-layer protocol advancements highlight the significant potential of heterogeneous MI UG-WSNs on the following key aspects.
• Channel licensing and spectrum sensing: The bandwidth of an MIC channel is extremely narrow (see Fig. 21). Perceiving and utilizing spectrum holes is a fundamental task for spectrum reuse in a heterogeneous MI network. Parameters like the relationship between the resonance frequency f_0 and the working frequency f are crucial for channel licensing, spectrum sensing, and their optimization solutions (e.g., dynamic frequency selection).
• Connectivity: Existing investigations on MIC connectivity are under the assumption of uniform MI coverage. However, a heterogeneous MI UG-WSN has different antennas, such as the SISO coil, RPMA, M2C antenna, and orthogonal MIMO coils. These MI antennas can generate MI coverage spaces with varying shapes and ranges. This significantly impacts the existing statistical model of MIC connectivity, such as the CDF of isolated MI nodes.
• Power and throughput optimization: Multiagent deep RL methods are widely used to address the joint power and throughput optimization problem under an unpredictable channel [218]–[220]. The key issue is insufficient bandwidth for exchanging cooperative packets among agents/players. This issue has led to the abandonment of cooperative multiagent RL methods despite their high convergence performance.
2) MI MAC Solutions: The IEEE 802 standards divide the data link layer into the logical link control and MAC sub-layers. The logical link control sub-layer is responsible for non-media-access-related functions. These functions seem to be compatible with the MIC network, but this has not been validated. For the MAC sub-layer, although the studies [54], [55], [168] have proposed MAC solutions, there are several open issues:
• High orientation-sensitivity SISO: For TTE scenarios, an SISO coil is a frequently used MI antenna. However, its orientation sensitivity may affect existing MAC solutions, such as channel sensing and packet design.
• CSMA-CA: Reducing the probability of collision is an important indicator for low-bandwidth channels. Methods such as supervised machine learning, send-window optimization, and time-slot optimization can be introduced into collision avoidance (CA) schemes to minimize the search range in the time domain.
• Frame error ratio (FER) and effective payload ratio (EPR): Lowering the FER and increasing the EPR can improve the effective data rate. However, these two ratios often cannot be optimized simultaneously. Besides multi-objective and multiagent optimization methods, antenna orientation information can be utilized as the frame header, secret key, and even the beacon sent from the MI coordinator. Additionally, the MI MAC headers in [55] require bit-level compression.
• Communication security: As MIC has low bandwidth and a frequency near resonance, it is more susceptible to interference than EMWC. Existing security schemes in EMW-based MAC layers, e.g., those in the IEEE 802.15.4 standard, are insufficient to address this issue.
3) MI Routing Solutions: Routing is an important function in the network layer for a large-scale network. Since 2019, there have been many studies on MI routing issues, such as lifetime, transmission delay, and routing decision algorithms. However, the channel models in studies on MI routing issues are air-based MIC channels, where the eddy gain E_SD and the polarization gain J_SD in TTE scenarios are ignored. Air-based MIC channels are similar to EMWC. Whether E_SD(f) and J_SD affect the MI routing solutions, e.g., the frequency switch scheme, remains an open issue (a toy numerical illustration follows).
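As a toy illustration of why E_SD(f) could matter for frequency switching, the sketch below scores candidate frequencies for a single link by combining the standard skin-depth attenuation with a near-field d^-6 power decay and a simplified coupling term that grows with ω; all constants are placeholders, not a validated TTE gain model.

```python
# Toy frequency-switch scoring for one MI link (placeholder physics):
# eddy loss via skin depth, near-field d^-6 decay, omega-proportional coupling.
import math

MU0 = 4e-7 * math.pi        # vacuum permeability (H/m)
SIGMA = 0.01                # soil conductivity (S/m), assumed
D = 30.0                    # link distance (m), assumed

def skin_depth(f):
    # Standard skin depth: delta = sqrt(2 / (omega * mu * sigma))
    return math.sqrt(2.0 / (2 * math.pi * f * MU0 * SIGMA))

def link_score_db(f):
    coupling = 20 * math.log10(f / 1e3)                    # induced EMF grows with omega (toy term)
    eddy = 20 * math.log10(math.e) * (-D / skin_depth(f))  # eddy-current attenuation e^{-d/delta}
    spreading = -60 * math.log10(D)                        # near-field ~ d^-6 power law
    return coupling + eddy + spreading

candidates_hz = [1e3, 10e3, 50e3, 100e3, 500e3]
for f in candidates_hz:
    print(f"{f/1e3:6.0f} kHz -> {link_score_db(f):8.2f} dB")
print("switch to:", max(candidates_hz, key=link_score_db) / 1e3, "kHz")
```

With these assumed constants the score peaks at an interior frequency, so omitting the eddy term (as air-based models do) would bias a frequency-switch routing rule toward higher frequencies.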
4) MI Network Architecture: Most studies on MIC focus on the physical layer, such as MI channel modeling, channel estimation, and energy and capacity optimization. In recent years, researchers have increasingly dedicated efforts to upper-layer protocols like MAC and routing protocols (see Table XVIII). The OSI-originated network framework adheres to the principle of high cohesion and low coupling in the field of software engineering, e.g., the standard OSI, TCP/IP, and IEEE 802 frameworks. This principle can bring high communication performance, robustness, security, standardization, and compatibility to MI networks. Although most solutions of OSI-originated network frameworks are compatible with MI networks, some issues can be considered for future studies and implementations.
• High cohesion and low coupling: High cohesion and low coupling in a network framework benefit system stability and compatibility. To achieve higher MIC performance, cross-layer optimization methods have been proposed in the literature (see Table XVIII). These methods, to some extent, increase the coupling among different layers. Improving MIC performance while maintaining the functionality of the mature network architecture is also a consideration in future research on MI optimization. In our framework for future research and implementation (see Fig. 26), we take this point into account.
• Standardization for MIC: As research on MIC network optimization temporarily boosts performance but disrupts the design principle of "low coupling and high cohesion", standards for MIC are lacking. While NFC has standards [221] (e.g., ISO 18092, NFCIP-1), its short-range and fixed-frequency design does not align with the need of UG-WSNs for wide coverage and heterogeneous network integration, hindering the access of large-scale TTE MIC to SAGUMI networks.

J. TCP/IP Support

MIC devices with TCP/IP support have broad application prospects. As most modules of TCP/IP were originally designed to be channel-agnostic, research on TCP/IP-specific support for MI networks has been largely overlooked. However, the general TCP/IP stack is not fully compatible with an ultra-narrow-band network due to its large packet header (i.e., low EPR). As shown below, some techniques can be applied for MIC-specific TCP/IP.
Fig. 33. A typical TCP frame being propagated in a physical channel: preamble (7 B), start-of-frame byte 0xAB (1 B), MAC header (14 B), IP header (20-60 B), TCP header (20-60 B), payload, and frame check sequence (FCS, 4 B). The frame headers need further compression for the VLF-LA channel.
1) Header Compression (HC): The TCP/IP packet header, including the MAC frame header, IP header, and TCP/UDP header, is quite large. For example, the traditional IPv4 header and TCP header are both over 20 bytes in size (see Fig. 33), causing 74% overhead [185]. Such large headers total over 320 bits, taking over 3 seconds to transmit on a 100 bit/s TTE MI channel. Using header compression methods, we can achieve a potential compression ratio of over 99%. Besides channel-agnostic solutions (e.g., [190]) proposed in EMWC, a dynamic SNR-dependent header size w.r.t. the frequency f and the MI polarization gain J_SD(θ_S, θ_D) can be considered to achieve further compression.
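A back-of-envelope check of these numbers, using the Fig. 33 header sizes and the 100 bit/s example above; the compressed residual header size is an assumption in the spirit of ROHC [190], not a measured figure.

```python
# Per-frame latency and effective payload ratio (EPR) at a 100 bit/s TTE rate.
RATE_BPS = 100

def frame_stats(header_bytes, payload_bytes):
    total_bits = 8 * (header_bytes + payload_bytes)
    epr = payload_bytes / (header_bytes + payload_bytes)   # effective payload ratio
    return total_bits / RATE_BPS, epr

# Uncompressed: MAC 14 + IPv4 20 + TCP 20 = 54 B of headers (432 bits > 320 bits)
t, epr = frame_stats(header_bytes=54, payload_bytes=15)
print(f"uncompressed: {t:.1f} s per frame, EPR = {epr:.0%}")

# After aggressive header compression (a few residual bytes, ROHC-style, assumed)
t, epr = frame_stats(header_bytes=3, payload_bytes=15)
print(f"compressed:   {t:.1f} s per frame, EPR = {epr:.0%}")
```

Even before any payload arrives, the uncompressed headers alone occupy more than 4 seconds of air time, which is why header compression is the first-order fix for MIC-specific TCP/IP.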
2) RTT Optimization (RTTO): In the TCP/IP framework, several schemes have been used to ensure multiple applications operate over a link through various connection schemes, such as TCP, HTTP, and LLC connections. Most of these connection schemes would face fairness challenges due to RTT suppression in MI channels with a low capacity. Since RTT suppression depends heavily on the transmission channel, we can potentially formulate the corresponding optimization problem w.r.t. the SNR Υ_SD(f, J_SD(θ_S, θ_D)) of the MI links, which can be further transformed into the frequency and APO optimization problem. Meanwhile, a dynamic congestion window scheme based on RTT and SNR can be adopted.
3) Optimizing Data Chunking and Aggregation (ODCA): Appropriate data chunking and aggregation help reduce the number of retransmissions. For example, a TTE MIC with a 100 bit/s data rate can use 15-byte data chunks, much smaller than a typical maximum transmission unit size (1500 bytes). The receiver sends an aggregated ACK after accumulating multiple chunks (see the sketch following Table XXVI). This strategy can be dynamic, adapting to the CSI of the MI channel determined by f and J_SD(θ_S, θ_D).
4) Intelligent Retransmission Strategy (IRS): Priority retransmission involves immediate retransmissions of critical data (e.g., control commands) and delayed batch retransmissions of non-critical data (e.g., sensor readings). Context-aware packet loss detection dynamically adjusts retransmission timeouts based on MI channel quality predictions (e.g., SNR) to avoid unnecessary retransmissions.
5) Machine-learning-based solution (ML-TCP/IP): For the specific MI channel, the aforementioned methods (i.e., HC, RTTO, ODCA, and IRS) can be further optimized by machine learning tools. Since the MI fast fading depends on the driver's velocity, we can use attention-based deep learning (e.g., Fig. 27) or LLMs to predict the average AVIs σ_S and σ_D, dynamically adjusting the parameters of the methods w.r.t. the expectation of the channel power gain (E[G_SD(σ_S, σ_D)]) of the MI link.

TABLE XXVI
COMPARISON OF POTENTIAL TCP/IP SCHEMES FOR TTE MICS
Schemes | Primary objective | Key mechanism | Complexity | MIC optimization directions
HC | Reduce payload overhead | Robust HC tunnel with CID negotiation [185] | Low | SNR-dependent header size
RTTO | Improve fairness; minimize round-trip time | Optimize transmission/retransmission timing | Medium | RTT-SNR problem formulation
ODCA | Reduce retransmissions | Multiple small chunks with an aggregated ACK | Medium | CSI-aware optimization
IRS | Reduce retransmissions | Priority retransmission scheme | Low | SNR-aware retransmission
ML-TCP/IP | Adapt protocol to dynamics | AVI-driven scheme | High | Attention-based for AVI predictions

Table XXVI compares the potential TCP/IP schemes (i.e., HC, RTTO, ODCA, IRS, and ML-TCP/IP) for TTE MIC systems, indicating that MI-specific optimizations can potentially establish relationships between their primary objectives and SNR-related parameters (e.g., APO and frequency).
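Returning to the ODCA idea above, the following minimal sketch splits a message into 15-byte chunks and returns one cumulative ACK per window; the 4-chunk ACK window is an assumed parameter that a CSI-aware variant would adapt to f and J_SD(θ_S, θ_D).

```python
# Minimal ODCA sketch: small chunks with aggregated (cumulative) ACKs.
CHUNK = 15            # bytes per chunk, per the 100 bit/s example above
ACK_EVERY = 4         # chunks per aggregated ACK, assumed parameter

def chunks(data: bytes, size: int = CHUNK):
    return [data[i:i + size] for i in range(0, len(data), size)]

def receive(stream):
    got, acks = [], []
    for seq, chunk in enumerate(stream):
        got.append(chunk)
        if (seq + 1) % ACK_EVERY == 0 or seq == len(stream) - 1:
            acks.append(seq)          # one cumulative ACK covers the window
    return b"".join(got), acks

msg = bytes(range(256)) * 2           # 512-byte message
data, acks = receive(chunks(msg))
assert data == msg
print(len(chunks(msg)), "chunks,", len(acks), "aggregated ACKs")
```

Aggregating ACKs trades a slightly longer loss-detection delay for far fewer reverse-link transmissions, which matters when every frame costs seconds of air time.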
K. Experiments and Testing in TTE MIC Systems

Some researchers have developed MIC testbeds for both the general UG scenario [222] and the TTE scenario [8]. Even though TTE MIC products have entered the market [119], a lack of robust experimental validation remains a key challenge in TTE MIC research.
1) Unrepeatable and unrepresentative UG environments: The TTE environment is highly heterogeneous. The conductivity, permittivity, and moisture content of materials can vary significantly, even within a small area. Such unrepresentative environments may invalidate theoretical assumptions, introducing measurement noise that researchers struggle to eliminate. Moreover, VLF signals in TTE MIC are highly vulnerable to geomagnetic fluctuations and industrial interference. High-precision equipment, essential for capturing weak MI signals, is costly and scarce in academic labs. To address this challenge, a cross-scale channel model spanning centimeter-level particles to kilometer-scale strata can be developed to match the validation requirements of target theories. A data-driven method similar to the one depicted in Fig. 27 can be considered to predict the environment.
2) Antenna deployment challenge: In deep subsurface environments, TTE MIC antennas (0.5∼4 m) often exceed the space available in narrow underground areas like mine tunnels. Flexible cables distort coil shapes from standard circles, especially on mobile vehicles, causing signal deviations from theoretical expectations. For future MI fast fading validation, high-permeability metallic components (e.g., vehicle bodies and tire bearings) act as unintended magnetic reflectors/absorbers, distorting field propagation and compromising the reliability of fading-model validation (see Fig. 6). Mitigation can focus on compact rigidizable antenna designs (e.g., shape-memory structures), magnetic shielding for vehicle-mounted systems, and calibration protocols accounting for metallic interference. In addition, RPMA arrays are promising replacements for Tx coils in testing due to their smaller sizes.

L. Summary of Challenges and Opportunities

Table XXII outlines the key future challenges, tasks, and recommended approaches for advancing MIC research, specifically in TTE applications. These future directions cover critical areas, such as P2P MIC communication, MI relay techniques, and overall MI network development.
One of the most pressing challenges is the development of a universal statistical model for MI fast fading. The lack of CLT support, combined with the complexity of antenna carrier configurations, has made this task particularly difficult. Furthermore, the impact of fast fading on existing MIC theorems is significant. Unlike traditional EMWC, the randomness and unpredictability of the expectation and variance of MI fast fading, which are velocity-dependent, present unique challenges that must be addressed.
There are several other challenges related to performance metrics, antenna designs, MI MAC and routing protocols, channel modeling in inhomogeneous media, CMIC techniques, and TTE MI experiments and testing. A comprehensive understanding and resolution of these challenges is key to advancing MIC technology. There are also promising research directions and novel techniques that will significantly enhance MIC performance and broaden its applications. These include MCNSI, MI massive MIMO, deep JSCC for MIC, heterogeneous MI network techniques, and support for TCP/IP frameworks. Despite their potential, there is currently a scarcity of published research on these topics. In this paper, we propose an attention-based deep learning framework that is not limited to the prediction of the average AVI. As described in Section VII, our MI network framework carefully considers the unresolved challenges and promising future research directions highlighted above.

IX. CONCLUSION

Since 2020, research on MIC for TTE applications, including MI fast fading and CMIC, has increased significantly. Research in MIC continues to advance across all network layers.
This survey provides a comprehensive overview of the latest developments in MIC technology within the TTE environment, covering aspects such as channel modeling in deep-penetration environments, MI fast fading channels, CMIC, and MI network architecture. Specifically, we reviewed the MI channel modeling and P2P MIC techniques, proposed a fine-grained decomposition of the MI channel power gain into four key factors, and outlined optimization directions based on these factors. We discussed the challenges and impacts of MI fast fading on MIC systems. We reviewed MI relay techniques, including the MI waveguide and CMIC, analyzed their performance in TTE environments, and theoretically explored MI crosstalk effects. Moreover, we summarized advancements in multi-node MIC and large-scale MI networks by using the OSI-originated framework as a guide, and identified outstanding issues, including channel estimation, modulation, and coding in physical layer functions, MAC protocols in link layer functions, and connectivity, data collection, and routing in network layer functions.
Notably, we conceived a new and promising MI network framework with TCP/IP and Linux support, which enables researchers to leverage abundant research and development resources, accelerating MIC studies. We also summarized its challenges and open issues from the OSI-originated perspective, focusing on accessing SAGUMI in future network systems, and delineated potential novel technical solutions, such as MCNSI, MI massive MIMO, deep JSCC for MIC, and heterogeneous MI network techniques.

REFERENCES

[1] M. Abdollahi, W. Ni, M. Abolhasan, et al., “Software-defined networking-based adaptive routing for multi-hop multi-frequency wireless mesh,” IEEE Trans. Veh. Technol., vol. 70, no. 12, pp. 13073–13086, 2021. [2] V. L. Orekhov and T. H. Chung, “The DARPA subterranean challenge: A synopsis of the circuits stage,” Field Robot, vol. 2, pp. 735–747, 2022. [3] Q. Cui, X. You, W. Ni, et al., “Overview of AI and communication for 6G network: Fundamentals, challenges, and future research opportunities,” Science China Information Science, vol. 68, no. 7, p. 171301, 2025. [4] C. Wrigley, “Going deep: Excavation, collaboration and imagination at the Kola Superdeep Borehole,” Environment and Planning D: Society and Space, vol. 41, no. 3, pp. 549–567, 2023. [5] I. I. Shovon and S. Shin, “Survey on multi-path routing protocols of underwater wireless sensor networks: Advancement and applications,” Electronics, vol. 11, no. 21, p. 3467, 2022. [6] Z. Zhang, E. Liu, X. Zheng, et al., “Cooperative magnetic induction based through-the-earth communication,” in IEEE/CIC ICCC, Shanghai, China, Oct. 2014, pp. 653–657. [7] H. Ma, E. Liu, R. Wang, et al., “Antenna optimization for decode-and-forward relay in magnetic induction communications,” IEEE Trans. Veh. Technol., vol. 69, no. 3, pp. 3449–3453, 2019. [8] Z. Zhang, E. Liu, X. Qu, et al., “Connectivity of magnetic induction-based Ad Hoc networks,” IEEE Trans. Wireless Commun., vol. 16, no. 7, pp. 4181–4191, Apr. 2017. [9] H. Ma, E. Liu, R. Wang, et al., “Channel characteristics for vehicle magnetic induction communication with road disturbance,” IEEE Commun. Lett., vol. 24, no. 7, pp. 1363–1367, 2020. [10] H. Ma, E. Liu, R. Wang, et al., “Effect of antenna deployment on achievable rate in cooperative magnetic induction communication,” IEEE Commun. Lett., vol. 23, no. 10, pp.
1748–1752, Oct. 2019. [11] H. Ma, E. Liu, Z. Fang, et al., “Fast-fading channel and power optimization of the magnetic inductive cellular network,” IEEE Trans. Wireless Commun., vol. 23, no. 10, pp. 15 096–15 111, 2024. [12] Z. Sun and I. F. Akyildiz, “Magnetic induction communications for wireless underground sensor networks,” IEEE Trans. Antennas Propag., vol. 58, no. 7, pp. 2426–2435, Jul. 2010. [13] S. Kisseleff, W. Gerstacker, R. Schober, et al., “Channel capacity of magnetic induction based wireless underground sensor networks under practical constraints,” in IEEE WCNC, Shanghai, China, Apr. 2013, pp. 2603–2608. [14] M. Zeeshan, M. Chavda, K. M. Ehshan, et al., “A review on Non- RF underground positioning techniques for mining applications,” IEEE Trans. Instrum. Meas., vol. 72, pp. 1–17, 2023. [15] S. Kisseleff, I. F. Akyildiz, and W. H. Gerstacker, “Survey on advances in magnetic induction-based wireless underground sensor networks,” IEEE Internet Things J., vol. 5, no. 6, pp. 4843–4856, Dec. 2018. [16] M. Muzzammil, N. Ahmed, G. Qiao, et al., “Fundamentals and ad- vancements of magnetic-field communication for underwater wireless sensor networks,” IEEE Trans. Antennas Propag., vol. 68, no. 11, pp. 7555–7570, 2020. [17] P. N. Wrathall and J. Sojdehei, “Magneto-inductive communications,” in Info. Syst. Navy Divers & AUVs in Shallow Water, J. L. Wood- Putnam, Ed., vol. 3711, International Society for Optics and Photonics. SPIE, 1999, pp. 229 – 236. [18] I. F. Akyildiz and E. P. Stuntebeck, “Wireless underground sensor networks: Research challenges,” Ad Hoc Networks, vol. 4, no. 6, pp. 669–686, Nov. 2006. [19] I. F. Akyildiz, Z. Sun, and M. C. Vuran, “Signal propagation tech- niques for wireless underground communication networks,” Physical Communication, vol. 2, no. 3, pp. 167–183, Sep. 2009. [20] F. Zhang, Z. Gong, S. Wang, et al., “A rotating-permanent-magnet array for ULF through-the-sea magnetic communications,” IEEE Trans. Antennas Propag., vol. 71, no. 3, pp. 2300–2310, 2023. [21] S. Yarkan, S. Guzelgoz, H. Arslan, et al., “Underground mine commu- nications: A survey,” IEEE Commun. Surveys Tuts., vol. 11, no. 3, pp. 125–142, 2009. [22] A. E. Forooshani, S. Bashir, D. G. Michelson, et al., “A survey of wireless communications and propagation modeling in underground mines,” IEEE Commun. Surveys Tuts., vol. 15, no. 4, pp. 1524–1545, 2013. [23] S. Sheikhpour, A. Mahani, and H. F. Rashvand, Agricultural Applica- tions of Underground Wireless Sensor Systems: A Technical Review. Wiley Semiconductors, 2017, pp. 351–379. [24] N. Saeed, M.-S. Alouini, and T. Y. Al-Naffouri, “Toward the internet of underground things: A systematic survey,” IEEE Commun. Surveys Tuts., vol. 21, no. 4, pp. 3443–3466, 2019. [25] G. P. Hancke and B. J. Silva, “Wireless positioning in underground mines: Challenges and recent advances,” IEEE Industrial Electronics Magazine, vol. 15, no. 3, pp. 39–48, 2021. [26] D. Wohwe Sambo and A. F¨orster, “Wireless underground sensor networks: A comprehensive survey and tutorial,” ACM Computing Surveys, vol. 56, no. 4, pp. 1–44, 2023. [27] U. Raza and A. Salam, “A survey on subsurface signal propagation,” Smart Cities, vol. 3, no. 4, pp. 1513–1561, 2020. [28] A. Hrovat, G. Kandus, and T. Javornik, “A survey of radio propagation modeling for tunnels,” IEEE Commun. Surveys Tuts., vol. 16, no. 2, pp. 658–669, 2014. [29] S. M. Riurean, M. Leba, and A. C. Ionica, Conventional and Advanced Technologies for Wireless Transmission in Underground Mine. 
Cham: Springer International Publishing, 2021, pp. 41–125. [30] E. Liu, Z. Sun, R. Wang, et al., Magnetic Communications: Theory and Techniques. New York, NY, USA: Cambridge University Press, 2024. [31] A. K. Sharma, S. Yadav, S. N. Dandu, et al., “Magnetic induction-based non-conventional media communications: A review,” IEEE Sensors Journal, vol. 17, no. 4, pp. 926–940, 2017. [32] Y. Li, S. Wang, C. Jin, et al., “A survey of underwater magnetic induction communications: Fundamental issues, recent advances, and challenges,” IEEE Communications Surveys Tutorials, vol. 21, no. 3, pp. 2466–2487, Feb. 2019. [33] A. R. Silva and M. Moghaddam, “Strategic frequency adaptation for mid-range magnetic induction-based wireless underground sensor networks,” in IEEE SysCon Proceedings, Vancouver, BC, Canada, Apr. 2015, pp. 758–765. [34] Z. Sun, P. Wang, M. C. Vuran, et al., “Mise-pipe: Magnetic induction-based wireless sensor networks for underground pipeline monitoring,” Ad Hoc Networks, vol. 9, no. 3, pp. 218–227, 2011. [35] C. B. Jenkins and A. Kiourti, “Wearable dual-layer planar magnetoinductive waveguide for wireless body area networks,” IEEE Trans. Antennas Propag., vol. 71, no. 8, pp. 6893–6905, 2023. [36] S. Li, Y. Sun, and W. Shi, “Capacity of magnetic-induction MIMO communication for wireless underground sensor networks,” International Journal of Distributed Sensor Networks, vol. 11, no. 10, Jan. 2015. [37] H. Guo, Z. Sun, J. Sun, et al., “M2I channel modeling for metamaterial-enhanced magnetic induction communications,” IEEE Trans. Antennas Propag., vol. 63, no. 11, pp. 5072–5087, Nov. 2015. [38] Z. Wang, J. Wang, and W. Cheng, “Multi-frequency resonant circuit based multi-user emergency through-the-earth communication with magnetic induction,” in Ucom, 2024, pp. 152–157. [39] Z. Sun and I. F. Akyildiz, “On capacity of magnetic induction-based wireless underground sensor networks,” in IEEE INFOCOM, Orlando, FL, Mar. 2012, pp. 370–378. [40] U. Azad, H. C. Jing, and Y. E. Wang, “Link budget and capacity performance of inductively coupled resonant loops,” IEEE Trans. Antennas Propag., vol. 60, no. 5, pp. 2453–2461, 2012. [41] J. Jiang, K. Song, G. Wei, et al., “Capacity and bandwidth analysis of symmetric two-coil resonant magnetic communication using frequency divarication,” IEEE Antennas Wireless Propag. Lett., vol. 14, pp. 370–373, 2015. [42] J. Zhou and J. Chen, “Maximum distance estimation of far-field model for underwater magnetic field communication,” in 2017 CCWC, 2017, pp. 1–5. [43] H. Guo and Z. Sun, “Inter-media backscatter communications with magnetic induction,” in IEEE ICC, 2019, pp. 1–6. [44] Y. Liu, Q. Liu, S. Gong, et al., “Chirp-rate shift keying modulation for mechanical antenna based on rotating dipoles,” IEEE Trans. Antennas Propag., vol. 71, no. 4, pp. 2989–2999, 2023. [45] S. Kisseleff, I. F. Akyildiz, and W. Gerstacker, “On modulation for magnetic induction based transmission in wireless underground sensor networks,” in IEEE ICC, Sydney, NSW, Australia, Jun. 2014, pp. 71–76. [46] Z. Chen, G. Liu, H. Zhang, et al., “A novel polar code construction for magnetic induction-based wireless underground communications,” IEEE Commun. Lett., vol. 27, no. 11, pp. 2884–2888, 2023. [47] A. A. Alshehri, S.-C. Lin, and I. F. Akyildiz, “Optimal energy planning for wireless self-contained sensor networks in oil reservoirs,” in IEEE ICC, Paris, France, May 2017, pp. 1–7. [48] H.
Guo and Z. Sun, “Channel and energy modeling for self-contained wireless sensor networks in oil reservoirs,” IEEE Trans. Wireless Commun., vol. 13, no. 4, pp. 2258–2269, Apr. 2014. [49] J. Ma, X. Zhang, M. Liao, et al., “Topology of magneto-inductive communication system based on the regular hexagonal array,” IET Microwaves, Antennas & Propagation, vol. 9, no. 5, pp. 389–398, 2015. [50] R. A. Khalil and N. Saeed, “Optimal relay placement in magnetic induction-based internet of underwater things,” IEEE Sensors Journal, vol. 21, no. 1, pp. 821–828, 2021. [51] Y. Zhang, “Cooperative magnetic induction communications: Perfor- mance analysis and power-location optimization,” in IEEE WCNC, 2024, pp. 1–6. [52] S. C. Lin, I. F. Akyildiz, P. Wang, et al., “Distributed cross-layer protocol design for magnetic induction communication in wireless underground sensor networks,” IEEE Trans. Wireless Commun., vol. 14, no. 7, pp. 4006–4019, Jul. 2015. [53] T. Li, Y. Zhao, Z. Hu, et al., “Resource allocation strategy in auv- assisted edge computing uwsn with hybrid acoustic and mi communi- cation,” in IEEE VTC, 2024, pp. 1–6. [54] N. Ahmed, Y. R. Zheng, and D. Pommerenke, “Multi-coil mi based mac protocol for wireless sensor networks,” in OCEANS 2016 MTS/IEEE Monterey, 2016, pp. 1–4. [55] N. Ahmed, G. Qiao, Y. R. Zheng, et al., “Design and implementation of medium access control protocol for magneto-inductive wireless sensor networks using low power sensor nodes,” IEEE Journal of Oceanic Engineering, vol. 49, no. 2, pp. 572–582, 2024. [56] S. Wang, T. L. N. Nguyen, and Y. Shin, “Data collection strategy for magnetic induction based monitoring in underwater sensor networks,” IEEE Access, vol. 6, pp. 43 644–43 653, 2018. [57] G. Liu, “Frequency-switchable routing protocol for dynamic magnetic induction-based wireless underground sensor networks,” IEEE Journal of Selected Areas in Sensors, vol. 1, pp. 1–8, 2024. [58] P. Singh, R. P. Singh, and Y. Singh, “An optimal energy-throughput ef- ficient cross-layer solution using naked mole rat algorithm for wireless underground sensor networks,” Materials Today: Proceedings, vol. 48, pp. 1076–1083, 2022. [59] V. Parameswaran, H. Zhou, and Z. Zhang, “Irrigation control using wireless underground sensor networks,” in ICST, Kolkata, India, Dec. 2012, pp. 653–659. [60] Z. Li, Z. Sun, T. Singh, et al., “Large range soil moisture sensing for inhomogeneous environments using magnetic induction networks,” in IEEE GLOBECOM, Waikoloa, HI, USA, Dec. 2019, pp. 1–6. [61] S. Sugumar and S. M. Santhanam, “Design of filamentary planar spiral coils with enhanced channel model for magnetic induction based underground communication,” Transactions on Emerging Telecommu- nications Technologies, vol. 32, no. 10, p. e4282, 2021. [62] C. Cariou, L. Moiroux-Arvis, F. Pinet, et al., “Internet of underground things in agriculture 4.0: Challenges, applications and perspectives,” Sensors, vol. 23, no. 8, 2023, doi: 10.3390/s23084058. [Online]. Available: https://www.mdpi.com/1424-8220/23/8/4058 [63] X. Tan and Z. Sun, “An optimal leakage detection strategy for under- ground pipelines using magnetic induction-based sensor networks,” in Proc. of WASA, Berlin, Heidelberg, 2013, pp. 2780–2785. [64] X. Tan, Z. Sun, and P. Wang, “On localization for magnetic induction- based wireless sensor networks in pipeline environments,” in IEEE ICC, London, UK, Jun. 2015, pp. 2780–2785. [65] Y. Dong, J. Wu, X. 
Zhang, et al., “A novel dual-permanent-magnet me- chanical antenna for pipeline robot localization and communication,” Sensors, vol. 23, no. 6, p. 3228, 2023. [66] X. Li, Q. Li, Q. Yu, et al., “A rotating permanent magnets positioning system for laying underground pipelines,” IEEE Trans. Instrum. Meas., vol. 73, pp. 1–9, 2024. [67] C. Park, Q. Xie, P. Chou, et al., “DuraNode: wireless networked sensor for structural health monitoring,” in SENSORS, 2005 IEEE, Irvine, CA, USA, 2005, pp. 1–4. [68] P. Singh, R. P. Singh, Y. Singh, et al., “Magnetic induction technology- based wireless sensor network for underground infrastructure, mon- itoring soil conditions, and environmental observation applications: Challenges and future aspects,” Journal of Sensors, vol. 2022, pp. 1–18, Jan. 2022. [69] A. Markham and N. Trigoni, “Magneto-inductive networked res- cue system (miners): Taking sensor networks underground,” in 2012 ACM/IEEE 11th IPSN, Beijing China, Apr. 2012, pp. 1–11. [70] S. A. Meybodi, M. Dohler, A. N. Askarpour, et al., “The feasibility of communication among pumps in a district heating system,” IEEE Antennas and Propagation Magazine, vol. 55, no. 3, pp. 118–134, 2013. [71] Domingo and C. Mari, “Magnetic induction for underwater wireless communication networks,” IEEE Trans. Antennas Propag., vol. 60, no. 6, pp. 2929–2939, 2012. [72] B. Gulbahar and O. B. Akan, “A communication theoretical modeling and analysis of underwater magneto-inductive wireless channels,” IEEE Trans. Wireless Commun., vol. 11, no. 9, pp. 3326–3334, 2012. [73] L. Erdogan and J.-F. Bousquet, “Dynamic bandwidth extension of coil for underwater magneto-inductive communication,” in IEEE APSURSI, 2014, pp. 1576–1577. [74] I. F. Akyildiz, P. Wang, and Z. Sun, “Realizing underwater communi- cation through magnetic induction,” IEEE Communications Magazine, vol. 53, no. 11, pp. 42–48, Nov. 2015. [75] H. Guo, Z. Sun, and P. Wang, “Channel modeling of MI underwater communication using tri-directional coil antenna,” in IEEE GLOBE- COM, San Diego, CA, USA, 2015, pp. 1–6. [76] ——, “Multiple frequency band channel modeling and analysis for magnetic induction communication in practical underwater environ- ments,” IEEE Trans. Veh. Technol., vol. 66, no. 8, pp. 6619–6632, 2017. [77] D. Wei, S. S. Soto, J. Garcia, et al., “ROV assisted magnetic induction communication field tests in underwater environments,” in Proc.ACM Int. Conf.WUWNet. Shenzhen China: ACM, Dec. 2018, pp. 1–5. [78] Y. Liu, S. Gong, Q. Liu, et al., “A mechanical transmitter for undersea magnetic induction communication,” IEEE Trans. Antennas Propag., vol. 69, no. 10, pp. 6391–6400, 2021. [79] H. Guo, Z. Sun, and P. Wang, “Joint design of communication, wireless energy transfer, and control for swarm autonomous underwater vehicles,” IEEE Trans. Veh. Technol., vol. 70, no. 2, pp. 1821–1835, 2021. [80] D. Wei, C. Huang, X. Li, et al., “Power-efficient data collection scheme for AUV-assisted magnetic induction and acoustic hybrid internet of underwater things,” IEEE Internet Things J., vol. 9, no. 14, pp. 11 675– 11 684, 2022. [81] C. Wang, Y. Cui, X. Song, et al., “A novel underwater target detection method based on low-frequency magnetic signal generated by portable transmitter,” IEEE Sensors Journal, vol. 23, no. 8, pp. 8459–8465, 2023. [82] X. Wang, D. Cui, W. Zhang, et al., “Radiation study of low-power ul- tracompact rotating permanent magnetized mechanical antenna array,” IEEE Trans. Antennas Propag., vol. 72, no. 7, pp. 5458–5468, 2024. [83] X. He, Q. Zhou, and J. 
Zhang, “Rotating magnet-based mechanical antenna for magnetic inductive communications,” IEEE Trans. Antennas Propag., vol. 72, no. 7, pp. 5502–5510, 2024. [84] W. Zhang, Z. Cao, X. Wang, et al., “Design, array, and test of super-low-frequency mechanical antenna based on permanent magnet,” IEEE Trans. Antennas Propag., vol. 71, no. 3, pp. 2321–2329, 2023. [85] S. Wang and Y. Shin, “Efficient routing protocol based on reinforcement learning for magnetic induction underwater sensor networks,” IEEE Access, vol. 7, pp. 82027–82037, 2019. [86] L. Alsalman and E. Alotaibi, “A balanced routing protocol based on machine learning for underwater sensor networks,” IEEE Access, vol. 9, pp. 152082–152097, 2021. [87] Y. Zhang, D. Chen, Guanghua, et al., “Performance analysis of two-hop active relaying for dynamic magnetic induction based underwater wireless sensor networks,” IEEE Trans. Commun., vol. 70, no. 10, pp. 6938–6949, Oct. 2022. [88] I. V. Zhilin, O. M. Bushnaq, G. D. Masi, et al., “A universal multimode (Acoustic, Magnetic Induction, Optical, RF) software defined modem architecture for underwater communication,” IEEE Trans. Wireless Commun., vol. 22, no. 12, pp. 9105–9116, 2023. [89] G. Dumphart and A. Wittneben, “Stochastic misalignment model for magneto-inductive SISO and MIMO links,” in IEEE PIMRC, Valencia, Spain, Sep. 2016, pp. 1–6. [90] A. Farhad, Y. Zia, S. Farid, et al., “D-Mac: A dynamic MAC algorithm for the body area sensor networks based on IEEE 802.15.4,” IJCSNS International Journal of Computer Science and Network Security, vol. 16, no. 5, pp. 29–35, May 2016. [91] J. I. Agbinya and M. Masihpour, “Power equations and capacity performance of magnetic induction body area network nodes,” in 2010 Fifth International Conference on Broadband and Biomedical Communications, Malaga, Spain, Dec. 2010, pp. 1–6. [92] N. Thilak and R. Braun, “Near field magnetic induction communication in body area network,” in ICDCS, Coimbatore, India, Mar. 2012, pp. 124–125. [93] M. Masihpour, D. Franklin, and M. Abolhasan, “Multihop relay techniques for communication range extension in near-field magnetic induction communication systems,” Journal of Networks, vol. 8, no. 5, pp. 999–1011, May 2013. [94] S. Kisseleff, I. F. Akyildiz, and W. Gerstacker, “Distributed beamforming for magnetic induction based body area sensor networks,” in IEEE GLOBECOM, Washington, DC, USA, Dec. 2016, pp. 1–7. [95] G. Dumphart, B. I. Bitachon, and A. Wittneben, “Magneto-inductive powering and uplink of in-body microsensors: Feasibility and high-density effects,” in IEEE WCNC, Marrakesh, Morocco, Apr. 2019, pp. 1–6. [96] N. Golestani and M. Moghaddam, “Theoretical modeling and analysis of magnetic induction communication in wireless body area networks (WBANs),” IEEE Journal of Electromagnetics, RF and Microwaves in Medicine and Biology, vol. 2, no. 1, pp. 48–55, 2018. [97] S. Banou, K. Li, and K. Chowdhury, “Magic: Magnetic resonant coupling for intra-body communication,” in IEEE INFOCOM, 2020, pp. 1549–1558. [98] N. Golestani and M. Moghaddam, “Human activity recognition using magnetic induction-based motion signals and deep recurrent neural networks,” Nature Communications, vol. 11, no. 1, p. 2879, 2020. [99] V. Mishra and A. Kiourti, “Wearable planar magnetoinductive waveguide: A low-loss approach to WBANs,” IEEE Trans. Antennas Propag., vol. 69, no. 11, pp. 7278–7289, 2021. [100] L. Huang, Z. Wei, B.
Chen, et al., “Field-circuit combination method for solving the detuning problem of magnetic resonance human body communication,” IEEE Journal of Electromagnetics, RF and Microwaves in Medicine and Biology, vol. 8, no. 2, pp. 94–101, 2024. [101] M. Xu, P. S. Traore, and A. Asfour, “A high sensitivity digital giant magneto-impedance (GMI) sensor for magnetic communication,” Measurement Science and Technology, vol. 35, no. 10, p. 104001, 2024. [102] Z. Sun, P. Wang, M. C. Vuran, et al., “BorderSense: Border patrol through advanced wireless sensor networks,” Ad Hoc Networks, vol. 9, no. 3, pp. 468–477, 2011. [103] S. Kisseleff, I. F. Akyildiz, and W. Gerstacker, “Disaster detection in magnetic induction based wireless sensor networks with limited feedback,” in 2014 IFIP WD. Rio de Janeiro, Brazil: IEEE, 2014. [104] T. E. Abrudan, Z. Xiao, A. Markham, et al., “Distortion rejecting magneto-inductive three-dimensional localization (MagLoc),” IEEE J. Sel. Areas Commun., vol. 33, no. 11, pp. 2404–2417, Nov. 2015. [105] O. Kypris, T. E. Abrudan, and A. Markham, “Magnetic induction-based positioning in distorted environments,” IEEE Trans. Geosci. Remote Sens., vol. 54, no. 8, pp. 4605–4612, 2016. [106] S.-C. Lin, A. A. Alshehri, P. Wang, et al., “Magnetic induction- based localization in randomly deployed wireless underground sensor networks,” IEEE Internet Things J., vol. 4, no. 5, pp. 1454–1465, 2017. [107] N. Saeed, M.-S. Alouini, and T. Y. Al-Naffouri, “3d localization for internet of underground things in oil and gas reservoirs,” IEEE Access, vol. 7, pp. 121 769–121 780, 2019. [108] A. Sheinker, B. Ginzburg, N. Salomonski, et al., “Localization of a mobile platform equipped with a rotating magnetic dipole source,” IEEE Trans. Instrum. Meas., vol. 68, no. 1, pp. 116–128, 2019. [109] B. Wei, N. Trigoni, and A. Markham, “iMag+: An accurate and rapidly deployable inertial magneto-inductive slam system,” IEEE Trans. Mobile Comput., vol. 21, no. 10, pp. 3644–3655, 2022. [110] Q. Li, X. Li, C. Wang, et al., “An inertial magneto-inductive positioning system based on GWO-PF algorithm,” IEEE Trans. Instrum. Meas., vol. 71, pp. 1–10, 2022. [111] A. Pal and K. Kant, “MagLoc: A magnetic induction based localization scheme for fresh food logistics,” Internet of Things, vol. 19, p. 100552, 2022. [112] M. Chavda, M. Zeeshan, C. O’Sullivan, et al., “Magnetic-induction- based positioning system using dual multiplexing technique,” IEEE Sensors Letters, vol. 7, no. 9, pp. 1–4, 2023. [113] Y.-L. Chen, Z.-C. Liu, A. Li, et al., “3-D positioning method of pipeline robots based on a rotating permanent magnet mechanical antenna,” IEEE Sensors Journal, vol. 23, no. 7, pp. 7095–7104, 2023. [114] A. T. Abraha and B. Wang, “A survey on scalable wireless indoor localization: Techniques, approaches and directions,” Wireless Personal Communications, vol. 136, no. 3, pp. 1455–1496, 2024. [115] W. Su, E. Liu, A. M. Calveras Aug´e, et al., “Design and realization of precise indoor localization mechanism for Wi-Fi devices,” KSII Transactions on Internet and Information Systems, vol. 10, no. 12, pp. 5422–5441, 2016. [116] Z. Zhang, E. Liu, X. Qu, et al., “Effective coverage for the connectivity of magnetic induction-based Ad Hoc networks,” in IEEE GLOBECOM, San Diego, CA, USA, Dec. 2015, pp. 1–6. [117] W. Ni, I. B. Collings, X. Wang, et al., “Radio alignment for inductive charging of electric vehicles,” IEEE Trans. Ind. Informatics, vol. 11, no. 2, pp. 427–440, 2015. [118] (2012, Dec.) 
Through-the-Earth, Post-Accident Communications - an emerging technology. CDC. [Online]. Available: https://www.cdc.gov/ niosh/mining/UserFiles/Works/pdfs/2013-105.pdf [119] Canarycomm-is - vital alert — vital alert. Vital Alert. Accessed 08-04- 2025. [Online]. Available: https://vitalalert.com/product/canarycommis/ [120] G. Liu, “A Q-learning-based distributed routing protocol for frequency- switchable magnetic induction-based wireless underground sensor net- works,” Future Gener. Comput. Syst., vol. 139, pp. 253–266, 2022. [121] S. Kisseleff, B. Sackenreuter, I. F. Akyildiz, et al., “On capacity of active relaying in magnetic induction based wireless underground sensor networks,” in IEEE ICC, London, UK, Jun. 2015, pp. 6541– 6546. [122] H. Guo, Z. Sun, and C. Zhou, “Practical design and implementation of metamaterial-enhanced magnetic induction communication,” IEEE Access, vol. 5, pp. 17 213–17 229, 2017. [123] Z. Li and Z. Sun, “Optimal active and reconfigurable meta-sphere de- sign for metamaterial-enhanced magnetic induction communications,” IEEE Trans. Antennas Propag., vol. 70, no. 9, pp. 8148–8163, 2022. [124] H. Rezaei, V. Khilkevich, S. Yong, et al., “Mechanical magnetic field generator for communication in the ULF range,” IEEE Trans. Antennas Propag., vol. 68, no. 3, pp. 2332–2339, 2020. [125] C. A. Balanis, Antenna Theory: Analysis and Design, 3rd ed. Hobo- ken, New Jersey, USA: John Wiley & Sons, Inc., 2005. [126] H. Guo and A. A. Ofori, “Joint channel and antenna modeling for magnetic induction communication in inhomogeneous media,” IEEE Open Journal of the Communications Society, vol. 1, pp. 1457–1469, 2020. [127] O. C. Fawole and M. Tabib-Azar, “An electromechanically modulated permanent magnet antenna for wireless communication in harsh elec- tromagnetic environments,” IEEE Trans. Antennas Propag., vol. 65, no. 12, pp. 6927–6936, 2017. [128] M. Dionigi and M. Mongiardo, “Multi band resonators for wireless power tranfer and near field magnetic communications,” in IEEE MTT- S IWPT Workshop, Kyoto, Japan, 2012, pp. 61–64. [129] J. Agbinya and H. Nguyen, “Principles and applications of frequency splitting in inductive communications and wireless power transfer systems,” Wireless Personal Communications, vol. 107, pp. 987–1017, Apr. 2019. [130] Z. Sun, I. F. Akyildiz, S. Kisseleff, et al., “Increasing the capacity of magnetic induction communications in RF-challenged environments,” IEEE Trans. Commun., vol. 61, no. 9, pp. 3943–3952, 2013. [131] J. Sogade, Y. Vichabian, A. Vandiver, et al., “Electromagnetic cave-to- surface mapping system,” IEEE Trans. Geosci. Remote Sens., vol. 42, no. 4, pp. 754–763, Apr. 2004. [132] M. K. Simon and M. S. Alouini, “Digital communications over fading channels: A unified approach to performance analysis,” 1st ed. WileySeries in Telecommunications and Signal Processing Ed. New York, NY,USA: John Wiley & Sons, 2000. [133] D. Tse and P. Viswanath, Fundamentals of wireless communication. New York, NY, USA: Cambridge university press, 2005. [134] T. Ma, X. Jiang, and H. Hu, “A novel FBMC/QAM scheme with interference mitigation over multipath time-varying fading channels,” China Commun., vol. 22, no. 7, pp. 138–155, July 2025. [135] A. A. Farid and S. Hranilovic, “Diversity gain and outage probability for MIMO free-space optical links with misalignment,” IEEE Trans. Commun., vol. 60, pp. 479–487, 2012. [136] A. Jurado-Navas, J. M. Garrido-Balsells, J. F. 
Paris, et al., “A unifying statistical model for atmospheric optical scintillation,” in Numerical Simulations of Physical and Engineering Processes. InTech Rijeka, Croatia, 2011, vol. 181, no. 8, pp. 181–205. [137] J.-Y. Wang, Y. Ma, R.-R. Lu, et al., “Hovering UAV-based FSO communications: Channel modelling, performance analysis, and parameter optimization,” IEEE J. Sel. Areas Commun., vol. 39, pp. 2946–2959, 2021. [138] M. Najafi, H. Ajam, V. Jamali, et al., “Statistical modeling of the FSO fronthaul channel for UAV-based communications,” IEEE Trans. Commun., vol. 68, pp. 3720–3736, 2020. [139] T. B. Aik, Q. S. Sen, and Z. Nan, “Characterization of multipath acoustic channels in very shallow waters for communications,” in OCEANS 2006 - Asia Pacific, Singapore, 2006, pp. 1–8. [140] A. G. Zajic, “Statistical modeling of MIMO mobile-to-mobile underwater channels,” IEEE Trans. Veh. Technol., vol. 60, no. 4, pp. 1337–1351, 2011. [141] J. Wang, W. Cheng, W. Zhang, et al., “Multi-frequency access for magnetic induction-based SWIPT,” IEEE J. Sel. Areas Commun., vol. 40, no. 5, pp. 1679–1691, 2022. [142] M. Chen, X. Qu, E. Liu, et al., “Statistical channel modeling and estimation for PMMA-based vehicle-mounted magnetic communication systems,” IEEE Trans. Veh. Technol., pp. 1–6, 2025, early access, doi: 10.1109/TVT.2025.3574843. [143] S. Gang, Vehicle Noise, Vibration, and Sound Quality. SAE International, Apr. 2012. [144] S. Kisseleff, I. F. Akyildiz, and W. Gerstacker, “Transmitter-side channel estimation in magnetic induction based communication systems,” in IEEE BlackSeaCom, Odessa, Ukraine, May 2014, pp. 16–21. [145] X. Tan and Z. Sun, “On environment-aware channel estimation for wireless sensor networks using magnetic induction,” in IEEE INFOCOM WKSHPS, Atlanta, GA, USA, 2017, pp. 217–222. [146] J. S. Glickstein, J. Liang, S. Choi, et al., “Power-efficient ELF wireless communications using electro-mechanical transmitters,” IEEE Access, vol. 8, pp. 2455–2471, 2020. [147] M. Gołkowski, J. Park, J. Bittle, et al., “Novel mechanical magnetic shutter antenna for ELF/VLF radiation,” in IEEE APS/URSI Symp. & NRSM, Boston, MA, USA, 2018, pp. 65–66. [148] F. Sun, F. Zhang, X. Ma, et al., “Research on ultra-low-frequency communication based on the rotating shutter antenna,” Electronics, vol. 11, no. 4, p. 596, 2022. [149] E. Slevin, M. B. Cohen, N. Opalinski, et al., “Broadband electrically small VLF/LF transmitter via time-varying antenna properties,” IEEE Trans. Antennas Propag., vol. 70, no. 1, pp. 97–110, 2022. [150] R. Zhu, Z. Cheng, X. Lv, et al., “Piezo-turned magnet rotation for ELF/SLF cross-medium communication in omni-direction,” Advanced Optical Materials, vol. 12, no. 20, p. 2400461, 2024. [151] Z. Cheng, J. Zhou, B. Wang, et al., “A bionic flapping magnetic-dipole resonator for ELF cross-medium communication,” Advanced Science, vol. 11, no. 30, p. 2403746, 2024. [152] Y. Cui, Y. Pei, Z. Yuan, et al., “The optimization of multilayer spacing for miniaturization of mechanical antenna based on unipolar electrets,” IEEE Trans. Dielectr. Electr. Insul., vol. 31, no. 1, pp. 50–57, 2024. [153] H. Guo and Z. Sun, “M2I communication: From theoretical modeling to practical design,” in IEEE ICC, Kuala Lumpur, Malaysia, May 2016, pp. 1–6. [154] P. Sharma, D. Bhatia, and R. S.
Meena, “Metamaterial enhanced magnetization induced communication for wireless applications,” in ICICIC, Indore, India, 2017, pp. 1–5. [155] Z. Li and Z. Sun, “Antenna system optimization for active metamaterial-enhanced magnetic induction communications,” in Eu- CAP, Krakow, Poland, Mar. 2019, pp. 1–5. [156] J. A. Bickford, R. S. McNabb, P. A. Ward, et al., “Low frequency mechanical antennas: Electrically short transmitters from mechanically- actuated dielectrics,” in IEEE APS/URSI Symp. & NRSM, San Diego, CA, USA, 2017, pp. 1475–1476. [157] Z. Sun and I. F. Akyildiz, “Optimal deployment for magnetic induction- based wireless networks in challenged environments,” IEEE Trans. Wireless Commun., vol. 12, no. 3, pp. 996–1005, 2013. [158] E. Shamonina, V. Kalinin, R. K.H., et al., “Magneto-inductive waveg- uide,” Electronics Letters, vol. 38, no. 8, pp. 371–373, Apr. 2002. [159] M. Ishtiaq and S. H. Hwang, “Performance analysis of multihop underground magnetic induction communication,” Electronics, vol. 10, no. 11, p. 1255, 2021. [160] Z. Sun and I. F. Akyildiz, “Deployment algorithms for wireless underground sensor networks using magnetic induction,” in IEEE GLOBECOM, Miami, FL, USA, 2010, pp. 1–5. [161] S. Wang, T. L. N. Nguyen, and Y. Shin, “Energy-efficient clustering algorithm for magnetic induction-based underwater wireless sensor networks,” IEEE Access, vol. 7, pp. 5975–5983, 2019. [162] M. Masihpour and J. I. Agbinya, “Cooperative relay in near field mag- netic induction: A new technology for embedded medical communi- cation systems,” in 2010 Fifth International Conference on Broadband and Biomedical Communications, Malaga, Spain, Dec. 2010. [163] Y. W. P. Hong, W. J. Huang, and K. C. C. Jay, Cooperative communi- cations and networking: technologies and system design. New York: Springer Science & Business Media, 2010. [164] R. Li, J. Zhang, and A. Dang, “Cooperative system in free-space op- tical communications for simultaneous multiuser transmission,” IEEE Commun. Lett., vol. 22, no. 10, pp. 2036–2039, 2018. [165] S. Li, P. Wang, W. Pang, et al., “Performance analysis for cooperative communication system in optical IoUT network with hdaf strategy,” IEEE Photonics Journal, vol. 13, no. 3, pp. 1–22, 2021. [166] Z.-Y. Liu, B.-Q. Ding, B.-Q. Wu, et al., “Adaptive combining multi- branch frequency-domain detector for underwater acoustic cooperative communication,” in 2015 12th ICCWAMTIP, Chengdu, China, Dec. 2015, pp. 26–29. [167] F. Ye, H. Xu, and J. Gao, “Relay selection in underwater acoustic sensor networks for QoS-based cooperative communication using game theory,” Computer communications, vol. 219, no. Apr., pp. 104–115, 2024. [168] N. Ahmed, A. Radchenko, D. Pommerenke, et al., “Design and evaluation of low-cost and energy-efficient magneto-inductive sensor nodes for wireless sensor networks,” IEEE Systems Journal, vol. 13, no. 2, pp. 1135–1144, 2019. [169] Z. Sun, I. F. Akyildiz, and G. P. Hancke, “Dynamic connectivity in wireless underground sensor networks,” IEEE Trans. Wireless Com- mun., vol. 10, no. 12, pp. 4334–4344, 2011. [170] Z. Xing, R. Wang, J. Wu, et al., “Achievable rate analysis and phase shift optimization on intelligent reflecting surface with hardware impairments,” IEEE Trans. Wireless Commun., vol. 20, no. 9, pp. 5514– 5530, 2021. [171] Z. Xing, R. Wang, and X. Yuan, “Joint active and passive beamforming design for reconfigurable intelligent surface enabled integrated sensing and communication,” IEEE Trans. Commun., vol. 71, no. 4, pp. 2457– 2474, 2023. 
[172] ——, “An efficient solution to phase-shift optimization for ris enabled joint communication and sensing,” IEEE Trans. Veh. Technol., pp. 1–6, 2024. [173] J. M. Daladier and M. A. Labrador, “An adaptive logical link layer pro- tocol for underwater acoustic communication channels,” in OCEANS, Biloxi, MS, USA, 2009, pp. 1–8. [174] S. Kisseleff, I. F. Akyildiz, and W. Gerstacker, “Interference po- larization in magnetic induction based wireless underground sensor networks,” in IEEE PIMRC Workshops, London, UK, 2013, pp. 71–75. [175] B. Parrein, N. Morozs, and L. Toutain, “An internet protocol adaptation layer for underwater acoustic networks,” in Forum Acusticum 2023, Turin, Italy, Sep. 2023. [176] T. Matsuda and M. Yamamoto, “Performance analysis of TCP fairness between wired and wireless sessions,” in IEEE PIMRC, vol. 5, Lisbon, Portugal, 2002, pp. 2429–2433 vol.5. [177] J. M. Daladier and M. A. Labrador, “A data link layer in support of swarming of autonomous underwater vehicles,” in OCEANS 2009- EUROPE, Bremen, Germany, 2009, pp. 1–10. [178] A. Dahlberg, M. Skrzypczyk, T. Coopmans, et al., “A link layer protocol for quantum networks,” in ACM SIGCOMM, Beijing, China, Aug. 2019, pp. 159–173. [179] P. Gupta and P. Kumar, “Critical power for asymptotic connectivity,” in Proc. 37th IEEE Conf. Decision Control, vol. 1, Tampa, FL, USA, 1998, pp. 1106–1110. [180] R. Wattenhofer, L. Li, P. Bahl, et al., “Distributed topology control for power efficient operation in multihop wireless ad hoc networks,” in IEEE INFOCOM, vol. 3, Anchorage, AK, USA, 2001, pp. 1388–1397. [181] C. Bettstetter, “On the minimum node degree and connectivity of a wireless multihop network,” in ACM MOBIHOC, Lausanne Switzer- land, 2002. [182] X. Wang, X. Lin, Q. Wang, et al., “Mobility increases the connectivity of wireless networks,” IEEE/ACM Transactions on Networking, vol. 21, no. 2, pp. 440–454, 2013. [183] J. Postel, “Internet protocol (IP),” IETF, RFC 791, Sep. 1981. [Online]. Available: https://www.rfc-editor.org/info/rfc791 [184] S. Deering and R. Hinden, “Internet protocol, version 6 (IPv6) specification,” IETF, RFC 8200, Jul. 2017. [Online]. Available: https://www.rfc-editor.org/info/rfc8200 [185] W.-K. Jia, L.-T. Chen, and Z.-X. Xu, “An end-to-end IP header compressed packet forwarding framework for bandwidth-constrained networks,” IEEE Transactions on Green Communications and Network- ing, vol. 6, no. 4, pp. 2156–2167, 2022. [186] J. Postel, “Transmission control protocol,” IETF, RFC 793, Sep. 1981. [Online]. Available: https://www.rfc-editor.org/rfc/rfc793 [187] J. Xu, B. Ai, L. Chen, et al., “When high-speed railway networks meet multipath tcp: Supporting dependable communications,” IEEE Wireless Communications Letters, vol. 9, no. 2, pp. 202–205, 2020. [188] Z. Shelby, K. Hartke, and C. Bormann, “The constrained application protocol (CoAP),” IETF, RFC 7252, June 2014. [Online]. Available: https://www.rfc-editor.org/info/rfc7252 [189] V. Cerf, S. Burleigh, A. Hooke, et al., “Delay-tolerant networking architecture,” IRTF, RFC 4838, April 2007. [Online]. Available: https://www.rfc-editor.org/info/rfc4838 [190] L.-E. Jonsson and G. Pelletier, “RObust Header Compression (ROHC): A Compression Profile for IP,” IETF, RFC 3843, Jun. 2004. [Online]. Available: https://www.rfc-editor.org/info/rfc3843 [191] J. C. Mankins, et al., “Technology readiness levels,” Tech. Rep., 6 1995, advanced Concepts Office, NASA HQ. [192] C. Zhao, H. Du, D. 
Niyato, et al., “Generative AI for secure physical layer communications: A survey,” IEEE Trans. on Cogn. Commun. Netw., vol. 11, no. 1, pp. 3–26, 2025. [193] M. A. Ferrag, O. Friha, B. Kantarci, et al., “Edge learning for 6G-enabled internet of things: A comprehensive survey of vulnerabilities, datasets, and defenses,” IEEE Commun. Surveys Tuts., vol. 25, no. 4, pp. 2654–2713, 2023. [194] J. Corbet, A. Rubini, and G. K. Hartman, Linux Device Drivers, 4th ed. O’Reilly Media, Inc., Feb. 2015. [195] C. Benvenuti, Understanding Linux Network Internals: Guided Tour to Networking on Linux, 1st ed., A. Oram, Ed. Sebastopol, CA, USA: O’Reilly Media, Inc., 2005. [196] DSR Corporation, “ZBOSS: Open-source ZigBee protocol stack,” Online, DSR Corporation, 2023, version 1.0. [Online]. Available: https://dsr-iot.com/downloads/open-sourcezigbee [197] Rockchip, “airockchip/rknn-toolkit2,” GitHub, Rockchip, 2025. [Online]. Available: https://github.com/airockchip/rknn-toolkit2 [198] Y. Liu, M. Hou, and S. Gong, “A rotating permanent magnet transmitter for magnetic induction communication in RF-impenetrable environment,” in IEEE NEMO, Hangzhou, China, 2020, pp. 1–3. [199] M. T. B. Tarek, S. Dharmasena, A. Madanayake, et al., “Power-efficient data modulation for all-mechanical ULF/VLF transmitters,” in IEEE MWSCAS, 2018, pp. 759–762. [200] Q. Zhang and Z. Hao, “Research on ASK modulation method for rotating magnet based mechanical antenna array system,” in ICEMS, Chiang Mai, Thailand, 2022, pp. 1–5. [201] S. Kisseleff, I. F. Akyildiz, and W. Gerstacker, “Beamforming for magnetic induction based wireless power transfer systems with multiple receivers,” in IEEE GLOBECOM, San Diego, CA, USA, Dec. 2015, pp. 1–7. [202] H. J. Kim, J. Park, K. S. Oh, et al., “Near-field magnetic induction MIMO communication using heterogeneous multipole loop antenna array for higher data rate transmission,” IEEE Trans. Antennas Propag., vol. 64, no. 5, pp. 1952–1962, May 2016. [203] Z. Li and Z. Sun, “Enabling magnetic beamforming in MIMO wireless power transfer using reconfigurable metasurface,” in IEEE GLOBECOM, 2020, pp. 1–6. [204] G. Yang, M. R. V. Moghadam, and R. Zhang, “Magnetic MIMO signal processing and optimization for wireless power transfer,” IEEE Trans. Signal Process., vol. 65, no. 11, pp. 2860–2874, Jun. 2017. [205] Z. Xu and E. Rodriguez-Villegas, “A wireless power transfer mattress based system for perpetually operating physiological monitoring wearables,” IEEE Trans. Biomed. Circuits Syst., vol. 18, no. 2, pp. 460–473, 2024. [206] J. Wang, W. Cheng, W. Zhang, et al., “Backscatter based bidirectional full-duplex magnetic induction communications,” IEEE Trans. Commun., vol. 71, pp. 6258–6271, 2023. [207] V. Pasku, A. De Angelis, G. De Angelis, et al., “Magnetic field-based positioning systems,” IEEE Commun. Surveys Tuts., vol. 19, no. 3, pp. 2003–2017, 2017. [208] M. A. Khan, J. Sun, B. Li, et al., “Magnetic sensors: A review and recent technologies,” Engineering Research Express, vol. 3, no. 2, p. 022005, Jun. 2021. [209] Z. Xing, R. Wang, X. Yuan, et al., “Location information assisted beamforming design for reconfigurable intelligent surface aided communication systems,” IEEE Trans. Wireless Commun., vol. 22, no. 11, pp. 7676–7695, 2023. [210] R. Wang, Z. Xing, E.
Liu, et al., “Joint localization and communication study for intelligent reflecting surface aided wireless communication system,” IEEE Trans. Commun., vol. 71, no. 5, pp. 3024–3042, 2023. [211] F. Zhai, Y. Eisenberg, and A. K. Katsaggelos, “Joint source-channel coding for video communications,” in Handbook of Image and Video Processing, 2nd ed., A. Bovik, Ed. Burlington MA, USA: Academic, 2005, pp. 1065–1082. [212] E. Bourtsoulatze, D. Burth Kurka, and D. G¨und¨uz, “Deep joint source- channel coding for wireless image transmission,” IEEE Trans. on Cogn. Commun. Netw., vol. 5, no. 3, pp. 567–579, 2019. [213] J. Xu, B. Ai, W. Chen, et al., “Deep joint source-channel coding for image transmission with visual protection,” IEEE Trans. on Cogn. Commun. Netw., vol. 9, no. 6, pp. 1399–1411, 2023. [214] H. Wu, Y. Shao, C. Bian, et al., “Deep joint source-channel coding for adaptive image transmission over MIMO channels,” IEEE Trans. Wireless Commun., vol. 23, no. 10, pp. 15 002–15 017, 2024. [215] M. Zhang, H. Wu, G. Zhu, et al., “Semantics-guided diffusion for deep joint source-channel coding in wireless image transmission,” IEEE Trans. Wireless Commun., pp. 1–1, 2025, early access, doi: 10.1109/TWC.2025.3591456. [216] H. Xie, Z. Qin, G. Y. Li, et al., “Deep learning enabled semantic communication systems,” IEEE Trans. Signal Process., vol. 69, pp. 2663–2675, 2021. [217] J. Huang, K. Yuan, C. Huang, et al., “D2-JSCC: Digital deep joint source-channel coding for semantic communications,” IEEE J. Sel. Areas Commun., vol. 43, no. 4, pp. 1246–1261, 2025. [218] X. Chen, Z. Zhao, and H. Zhang, “Stochastic power adaptation with multiagent reinforcement learning for cognitive wireless mesh networks,” IEEE Trans. Mobile Comput., vol. 12, no. 11, pp. 2155– 2166, Nov. 2013. [219] D. Shi, F. Tian, and S. Wu, “Energy efficiency optimization in hetero- geneous networks based on deep reinforcement learning,” in IEEE ICC Workshops, 2020, pp. 1–6. [220] F. G. Ortiz-Gomez, D. Tarchi, R. Mart´ınez, et al., “Cooperative multi- agent deep reinforcement learning for resource management in full flexible VHTS systems,” IEEE Trans. on Cogn. Commun. Netw., vol. 8, no. 1, pp. 335–349, 2022. [221] ISO/IEC, Information Technology-Telecommunications and Infor- mation Exchange between Systems-Near Field Communication- Interface and Protocol (NFCIP-1), ISO/IEC International Standard 18 092, 2013. [Online]. Available: https://webstore.ansi.org/standards/ iso/isoiec180922013 [222] X. Tan, Z. Sun, and I. F. Akyildiz, “A testbed of magnetic induction- based communication system for underground applications,” arXiv preprint arXiv:1503.02519, 2015. Honglei Ma received the Ph.D. degree from the School of Electronics and Information Engineering, Tongji University, China, in 2020. He is currently with School of Electronic and Electrical Engineer- ing, Shanghai University of Engineering Science. From 2009 to 2015, he served as a Senior Engi- neer with the Chinese Academy of Sciences. His current research interests include magnetic induction communications, ad-hoc network, wireless sensor network, and object detection for embedded systems. Erwu Liu (Senior Member, IEEE) received the Ph.D. degree from Huazhong University of Science and Technology, China, in 2001. He has been a Pro- fessor with Tongji University since 2011. Previously he was with Alcatel-Lucent (2001-2007) and Impe- rial College London (2007-2011). 
He studies localization & sensing, AI & blockchain, and wireless communications & IoT, with 120+ papers published and 70+ patents granted/pending. Prof. Liu won the Microsoft Indoor Localization Competition (IPSN) in 2016 and 2018, and developed the indoor navigation system for the China International Import Expo (CIIE). He is the Community Dev. Co-Chair of the IEEE Blockchain Technical Community (BCTC), and leads the local group development of the IEEE BCTC in Asia/China. He leads the Shanghai Engineering Research Center for Blockchain Applications and Services (SERCBAAS). He is an IET Fellow, the Founding Editor-in-Chief of IET Blockchain, and the Founding Chair of the IEEE Global Blockchain Conference (GBC).

Wei Ni (Fellow, IEEE) received the B.E. and Ph.D. degrees in Electronic Engineering from Fudan University, Shanghai, China, in 2000 and 2005, respectively. He is the Associate Dean (Research) in the School of Engineering, Edith Cowan University, Perth, and a Conjoint Professor at the University of New South Wales, Sydney, Australia. He was a Deputy Project Manager at the Bell Labs, Alcatel/Alcatel-Lucent from 2005 to 2008; a Senior Research Engineer at Nokia from 2008 to 2009; and a Senior Principal Research Scientist and Group Leader at the Commonwealth Scientific and Industrial Research Organisation (CSIRO) from 2009 to 2025. He has been an Editor for IEEE Transactions on Wireless Communications since 2018, IEEE Transactions on Vehicular Technology since 2022, IEEE Transactions on Information Forensics and Security and IEEE Communications Surveys and Tutorials since 2024, and IEEE Transactions on Network Science and Engineering since 2025. He served as Secretary, Vice-Chair, and Chair of the IEEE VTS NSW Chapter from 2015 to 2022, Track Chair for VTC-Spring 2017, Track Co-chair for IEEE VTC-Spring 2016, Publication Chair for BodyNet 2015, and Student Travel Grant Chair for WPMC 2014.

Zhijun Fang (Senior Member, IEEE) received the Ph.D. degree from Shanghai Jiao Tong University, Shanghai, China. He is currently a Professor and the Dean with the School of Electronic and Electrical Engineering, Shanghai University of Engineering Science. His current research interests include image processing, video coding, and pattern recognition. He was the General Chair of the Joint Conference on Harmonious Human Machine Environment (HHME) 2013 and the General Co-Chair of the International Symposium on Information Technology Convergence (ISITC) in 2014, 2015, 2016, and 2017. He received the "Hundred, Thousand and Ten Thousand Talents Project" in China. He received several major program projects of the National Natural Science Foundation of China and the National Key Research and Development Project of the Ministry of Science and Technology of China.

Rui Wang (Senior Member, IEEE) received the Ph.D. degree from Shanghai Jiao Tong University, China, in 2013. From August 2012 to February 2013, he was a Visiting Ph.D. Student with the Department of Electrical Engineering, University of California at Riverside. From October 2013 to October 2014, he was a Post-Doctoral Research Associate with the Institute of Network Coding, The Chinese University of Hong Kong. He is currently a Professor with the College of Electronics and Information Engineering, Tongji University. He is also with the Shanghai Institute of Intelligent Science and Technology, Tongji University.
He has published over 60 articles. His research interests include wireless communications, artificial intelligence, and wireless positioning. He is also an Associate Editor of IEEE ACCESS and an Editor of IEEE WIRELESS COMMUNICATIONS LETTERS.

Yongbin Gao received the Ph.D. degree from Jeonbuk National University, South Korea. He is currently an Associate Professor with the School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai, China. He has published 30 SCI articles in prestigious journals.

Dusit Niyato (M'09-SM'15-F'17) is a professor in the College of Computing and Data Science, at Nanyang Technological University, Singapore. He received the B.Eng. from King Mongkut's Institute of Technology Ladkrabang (KMITL), Thailand, and the Ph.D. in Electrical and Computer Engineering from the University of Manitoba, Canada. His research interests are in the areas of mobile generative AI, edge general intelligence, quantum computing and networking, and incentive mechanism design.

Ekram Hossain (Fellow, IEEE) is a Professor and the Associate Head (Graduate Studies) of the Department of Electrical and Computer Engineering, University of Manitoba, Canada. He is a Member (Class of 2016) of the College of the Royal Society of Canada. He is also a Fellow of the Canadian Academy of Engineering and the Engineering Institute of Canada. He has won several research awards, including the 2017 IEEE Communications Society Best Survey Paper Award and the 2011 IEEE Communications Society Fred Ellersick Prize Paper Award. He was listed as a Clarivate Analytics Highly Cited Researcher in Computer Science in 2017-2025. Previously, he served as the Editor-in-Chief (EiC) for the IEEE Press (2018-2021) and the IEEE Communications Surveys and Tutorials (2012-2016). He was a Distinguished Lecturer of the IEEE Communications Society and the IEEE Vehicular Technology Society. He served as the Director of Magazines (2020-2021) and the Director of Online Content (2022-2023) for the IEEE Communications Society.
Through-the-Earth Magnetic Induction Communication and Networking: A Comprehensive Survey

Honglei Ma, Erwu Liu, Senior Member, IEEE, Wei Ni, Fellow, IEEE, Zhijun Fang, Rui Wang, Yongbin Gao, Dusit Niyato, Fellow, IEEE, and Ekram Hossain, Fellow, IEEE

Abstract-Magnetic induction (MI) communication (MIC) has emerged as a promising candidate for underground communication networks due to its excellent penetration capabilities. Integration with Space-Air-Ground-Underground (SAGUI) networks in next-generation mobile communication systems requires a well-defined network architecture. A recent discovery in MIC research, MI fast fading, remains in its early stages and presents unique challenges. This paper provides a comprehensive survey on through-the-earth (TTE) MIC, covering MI applications, channel modeling, point-to-point MIC design, relay techniques, network frameworks, and emerging technologies. We compare various MIC applications to highlight TTE-specific challenges and review the principles of channel modeling, addressing both MI slow fading and MI fast fading, along with its potential impact on existing MIC theories. We conduct a fine-grained decomposition of MI channel power gain into four distinct physical parameters, and propose a novel geometric model to analyze MI fast fading. We also summarize MI relay techniques, examine crosstalk effects in relay and high-density networks, and explore key research tasks within the OSI framework for a holistic MI network protocol in SAGUI. To bridge the gaps identified, we propose a MIC framework that supports TCP/IP and Linux, enabling full implementation of existing and emerging MIC solutions. This framework empowers researchers to leverage Linux resources and deep learning platforms for accelerated development of MIC in SAGUI networks. Remaining research challenges, open issues, and promising novel techniques are further identified to advance MIC research.

Index Terms-Magnetic induction (MI), underground wireless communication, through-the-earth (TTE), fast fading, network architecture, TCP/IP.

This work is supported in part by grants from the National Science Foundation of China (Nos. 42171404, 62271352), in part by the Shanghai Engineering Research Center for Blockchain Applications and Services (No. 19DZ2255100), in part by Seatrium New Energy Laboratory, Singapore Ministry of Education (MOE) Tier 1 (RT5/23 and RG24/24), the NTU Centre for Computational Technologies in Finance (NTU-CCTF), and the RIE2025 Industry Alignment Fund-Industry Collaboration Projects (IAF-ICP) (Award I2301E0026), administered by A*STAR, and in part by the Fundamental Research Funds for the Central Universities under Grant 22120250094.

H. Ma, Z. Fang, and Y. Gao are with the School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai, China. E. Liu and R. Wang are with the College of Electronics and Information Engineering, Tongji University, Shanghai, China; R. Wang is also with the Shanghai Institute of Intelligent Science and Technology, Tongji University. W. Ni is with the School of Engineering, Edith Cowan University, Perth, WA 6027, Australia, and the University of New South Wales, Sydney, NSW 2033, Australia. D. Niyato is with the College of Computing and Data Science, Nanyang Technological University, Singapore 639798. E. Hossain is with the Department of Electrical and Computer Engineering, University of Manitoba, Canada. (Corresponding author: E. Liu)
TABLE I
ACRONYMS AND DEFINITIONS

Acronym | Full name / Definition
✸ / ✸✸ / ✸✸✸ | Low / Medium / High (priority used in tables and figures)
AF | Amplify-and-forward
AG | Aboveground
APO | Antenna position and orientation
AUV | Autonomous underwater vehicle
AVI | Antenna vibration intensity
BAN | Body area network
BCS | Boundary Chi-square (Boundary χ2)
BCH | Bose, Ray-Chaudhuri, Hocquenghem
CDF | Cumulative distribution function
CLO | Cross-layer optimization
CLT | Central limit theorem
CMG | CMIC achievable rate gain
CMI | Cooperative magnetic induction
CMIC | Cooperative magnetic induction communication
CMIC-1NR | CMIC with one non-aligned relay
CMIC-nAR | CMIC with multiple aligned relays
CSI | Channel state information
CSMA | Carrier sense multiple access
DF | Decode-and-forward
DMI | Direct magnetic induction
DWE | Dynamic weighted evolution/learning
EMW | Electromagnetic wave
EMWC | Electromagnetic wave communication
EPR | Effective payload ratio
FEC | Forward error correction
FEM | Finite element method
FF | Filter-and-forward
FSOC | Free-space optical communication
GAN | Generative adversarial networks
GTS | Guaranteed time slot
IP-HC | Internet protocol header compression
IoV | Internet of Vehicles
JSCC | Joint source-channel coding
KVL | Kirchhoff's voltage law
LLM | Large language model
LLC | Logical link control
Ly | Layer
MAC | Medium access control
M2I | Metamaterial-enhanced magnetic induction
MCD | MI Linux character device
MCNSI | MI communication-navigation-sensing integrated
MI | Magnetic induction
MIC | Magnetic induction communication
MIMO | Multiple-input multiple-output
MND | MI Linux network device
MPRlA | MI passive relay array
NFC | Near field communication
OSI | Open Systems Interconnection
P2P | Point-to-point
PDF | Probability density function
PSD | Power spectral density
RL | Reinforcement learning
RPMA | Rotating permanent magnet antenna
RTT | Round-trip time
Rx | Receive
SAGUMI | Space-Air-Ground-Underground multi-network integration
SISO | Single-input single-output
SNR | Signal-to-noise ratio
TCP / IP | Transmission control protocol / Internet protocol
TMR | Tunneling magnetoresistance
ToA | Time of arrival
TTE | Through-the-earth
Tx | Transmit
UDP | User datagram protocol
UG | Underground
UG-WSN | Underground wireless sensor network
UW-WSN | Underwater wireless sensor network
VMI | Vehicle magnetic induction
VMIC | Vehicle magnetic induction communication
VLF-LA | Very low frequency and large antenna
WSN | Wireless sensor network

I. INTRODUCTION

A. Underground Communication and TTE Communication

As human activities increasingly extend into underground environments, reliable underground communication has become essential for applications such as resource exploration, disaster rescue, underground blasting, and robotic operations in hazardous scenarios [1]. DARPA launched the Subterranean Challenge in 2017 to drive innovation in mapping, navigation, and search solutions for complex underground spaces, including tunnel systems, urban underground areas, and natural caves [2]. One of the primary challenges in this initiative was ensuring robust underground communication. As next-generation mobile communication technologies, such as 6G [3], evolve, underground communication is expected to play a critical role in integrated networks, where protocols such as TCP/IP will be indispensable for enabling essential functionalities, such as dynamic passwords, node registration, and licensing approval by regional authorities (e.g., national security centers).
However, without standardized protocols, these advanced features are nearly impossible to achieve.

TABLE II
NOTATION AND DEFINITION

Definition† | Notation
Basic parameters | See Table III
Probability | P(·)
Expectation | E(·)
Imaginary unit √−1 | j
Source/Tx | S
Destination/Rx | D
Relay | R
(·) of source/Tx antenna | (·)S
(·) of destination/Rx antenna | (·)D
(·) of relay antenna | (·)R
(·) of link S→D | (·)SD
Magnetic moment by antenna S | mS
Channel power gain [of link S→D] | GSD
Circuit gain [of link S→D] | CSD
Space gain [of link S→D] | SSD
Eddy gain [of link S→D] | ESD
Polarization gain or MI fast fading [of link S→D] | JSD
Mutual inductance [of link k→l] | Mkl
Compensated in-device gain/loss [for link S→D] | אSD
Wavenumber in the medium | k0
Horizontal vibration angle [of the antenna S] | φS
Vertical vibration angle [of the antenna D] | θ′D
Average AVI [of antenna S] | σS
Boundary of the antenna vibration | ς
Dirac's function | δpu(·)
Overall circuit impedance [of coil S] | ZS
Current or divisor [at coil/antenna S] | IS
PSD [of Tx S] | PSf
Noise power | No
Capacitance of match capacitor for f0 [at coil S] | CcS
Resistance of the load | RL
Resistance [of coil S] | RcS
Time or time slot [of Tx S] | tS
Inductance [of coil S] | LcS
Bandwidth | Bw
Function of 3-dB bandwidth | Bw(·)
3-dB bandwidth for CMIC-1NR using AF | BAF
Achievable rate [of link/system S→D] | CSD
† The brackets [·] represent the optional part. For example, the average AVI [of antenna S] is denoted by σS; optionally, the average AVI [of antenna D] is denoted by σD.

Despite its importance, underground communication has received significantly less research attention compared to aboveground technologies such as electromagnetic wave communication (EMWC). It remains a bottleneck in applications ranging from general underground operations to DARPA's robotic systems and next-generation multi-network integration, particularly when supporting TCP/IP protocols.

Fig. 1. Comparison of the penetration abilities of communication approaches. Green and red texts indicate the advantages and disadvantages, respectively. Performance and deployment comparisons are presented in Table IV.

Known as through-the-earth (TTE) communication due to its ability to penetrate over 10 m of soil and rock, this technology is especially critical for scenarios such as mining and drilling, where depths often exceed 1,000 m. Notably, the Kola Superdeep Borehole even extends beyond 12,000 m [4]. During disasters, when EMWC and wired infrastructures are often rendered inoperative, TTE communication's flexible deployment capabilities can expedite rescue operations, potentially saving countless lives. Fig. 1 illustrates various TTE communication techniques, with their performance and deployments compared in Table IV. General EMW signals are limited in penetration and may fail to reach basement levels. Acoustic communication systems are unsuitable for TTE communication due to severe multipath effects in complex underground materials [5].
Ultralong wave (ULW) radio-based communication, though capable of penetrating deeper, requires kilometer-scale antennas, which are impractical in confined underground spaces [6]. In contrast, MIC has proven effective for a wide range of underground applications. Time-varying magnetic fields experience less attenuation than EMWs [7], and MIC antennas (coils) are compact compared to ULW devices. For instance, an 8-meter diameter MIC antenna can achieve a TTE communication range of up to 310 m [8], while smaller antennas (less than 1 m in diameter) are sufficient for shorter distances, making them suitable for vehicle-mounted applications. This flexibility enables mobile MI networks, ideal for emergency rescue operations [9], [10].

The MIC uses a modulated magnetic field generated by a transmitter antenna. This field is received by a receiver coil or magnetic sensor, which demodulates it into symbols. The Wrathall team accomplished an underwater MIC in 1999 using a 3 kHz field [17]. Since then, MIC has been explored as an alternative for underground (UG) WSNs. In 2006, Akyildiz et al. [18], [19] summarized various UG-WSN scenarios, including soil monitoring, underground water monitoring, pipe leak detection, intruder detection, rescue personnel localization, and building load-bearing pressure detection. Sun et al. [12] introduced MIC into UG-WSNs with a small device featuring a 10 cm coil in 2010. Recently, researchers have expanded on applications of the non-coil-based MIC and large-scale MI networks. For example, Zhang et al. [20] developed a rotating permanent magnet antenna (RPMA) array device for the through-the-sea MIC application in 2023, and Ma et al. [11] focused on MI multi-cellular networks in 2024.

TABLE III
KEY PARAMETERS, SYMBOLS, AND THEIR DEFAULT VALUES FOR SIMULATIONS IN THIS PAPER

Parameter | Symbol† | Default value | Unit | Refs. | Related simulations in this survey
Tx or CMI coil radius | acS | 0.6 | meter | [11] | Figs. 4, 11, 12, 19, 21, 22
Rx coil radius | acD | 0.4 | meter | [11] | Figs. 11, 12
Number of turns of Tx or CMI coils | NS | 15 | - | [11] | Figs. 11, 12, 19, 21, 22
Number of turns of Rx coil | ND | 30 | - | [11] | Figs. 11, 12
Coil resistance | ρw | 0.166 | Ω/m | [8], [11] | Figs. 11, 12, 19, 21, 22
Tx power | Ps, PS | 5 | Watt | [7], [11] | Figs. 11, 12, 19, 21, 22
Resonance (center) frequency | f0 | 10000 | Hz | [7], [11] | Figs. 19, 22
Carrier frequency | f | 10000 | Hz | [7], [11] | Figs. 19, 22
Tx coil orientation | θS | 0 | - | [12] | Figs. 7, 8, 11, 12, 19, 21, 22
DMI Rx coil orientation | θD | π | - | [12] | Figs. 11, 12, 19
VMI Rx coil orientation | θD | −π/2 | - | [9] | Fig. 7
Ambient noise PSD | Nof | −103 | dBm / 2 kHz | [12] | Figs. 11, 12, 19, 21, 22
Permeability of medium | μu | 4π × 10−7 | H/m | [7], [13] | Figs. 11, 12, 19, 21, 22, 29
Conductivity of medium | σu | 10−2 | S/m | [7], [13] | Figs. 11, 12, 19, 21, 22
Permittivity of medium | εu | 6.978 × 10−11 | F/m | [13] | Figs. 11, 12, 19, 21, 22, 29
Distance between S and D | dSD | 60 | meter | [7] | Figs. 19, 21, 22
† In this paper, the italic subscript 'f' indicates "(·) per Hz". All normal (upright) subscripts are descriptive and do not represent any number or node ID.
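For reuse in the numerical sketches that appear later in this survey, the Table III defaults can be collected into a small configuration snippet. This is a convenience sketch only: the dictionary name and key spellings are our own shorthand, while every value is taken from Table III.

```python
import math

# Simulation defaults from Table III, collected into a plain dictionary for
# reuse in later sketches. Key names are our own shorthand; values follow
# Table III and the references cited there.
TABLE_III_DEFAULTS = {
    "a_cS": 0.6,                   # Tx/CMI coil radius (m) [11]
    "a_cD": 0.4,                   # Rx coil radius (m) [11]
    "N_S": 15,                     # turns of Tx/CMI coil [11]
    "N_D": 30,                     # turns of Rx coil [11]
    "rho_w": 0.166,                # coil wire resistance (ohm/m) [8], [11]
    "P_S": 5.0,                    # Tx power (W) [7], [11]
    "f0": 10_000.0,                # resonance (center) frequency (Hz) [7], [11]
    "f": 10_000.0,                 # carrier frequency (Hz) [7], [11]
    "theta_S": 0.0,                # Tx coil orientation [12]
    "theta_D_DMI": math.pi,        # DMI Rx coil orientation [12]
    "theta_D_VMI": -math.pi / 2,   # VMI Rx coil orientation [9]
    "N_of_dBm_per_2kHz": -103.0,   # ambient noise PSD [12]
    "mu_u": 4e-7 * math.pi,        # permeability of medium (H/m) [7], [13]
    "sigma_u": 1e-2,               # conductivity of medium (S/m) [7], [13]
    "eps_u": 6.978e-11,            # permittivity of medium (F/m) [13]
    "d_SD": 60.0,                  # S-D distance (m) [7]
}
```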
TABLE IV
COMPARISON OF MIC AND OTHER COMMUNICATIONS IN PERFORMANCE AND DEPLOYMENT DIMENSIONS FOR UNDERGROUND ENVIRONMENTS

Performance† | MIC | EMWC | Acoustic | Wired | Hybrid
Comm. ranges (m) | 0.1∼1,700 | | | 10,000 | -
Data rates (kbps) | (50 m) | (5 m) [15] | 5∼17.8 [16] | >1,000 | -
Channel dependency | Conductivity, antenna vibration | Multipath, conductivity, permittivity | Multipath, Doppler effect, sound noise | Cable properties and length, shielding, connection quality | Interference, protocols, power control
Antenna/device size (m) | 0.1∼4 radius coil | >10,000 (VLF antenna) [6] | | 10,000 (including cables) | Smaller antenna for subsystems
Deployment costs | Low | High | Medium | Extremely high | High
Maintenance costs | Low | High | High | High | High
System complexity | Medium | High | High | Low | High
Disaster resilience | Strong | Weak | Medium | Medium | Strong
Maturity levels | Low | High | Medium | High | Medium
† Optical communication is omitted due to its zero communication range underground.

TABLE V
RELATED SURVEYS AND THEIR DIFFERENCES FROM THIS SURVEY

Aspects | Refs. | Most important contribution | Differences from this survey
UG communications | [21]-[29] | Issues of acoustic wave communication, EMWC, wired communication, mud pulse telemetry communication and MIC for UG-WSNs | Diluted comprehensiveness due to being non-exclusive to MICs; no exploration of MI fast fading or RPMA
MICs | [30] | Reference book covering antennas, channels, performance, and protocols related to MICs | No exploration of MI fast fading, RPMA, or routing algorithms
Underground MICs | [15], [31] | Issues for general MI UG-WSNs, primarily concerning short-to-mid-range MICs | No exploration of MI fast fading, RPMA, or a complete MI network architecture
Underwater MICs | [16], [32] | Fundamental issues and advances in underwater MICs | No exploration of MI fast fading, RPMA, or a complete MI network architecture
TTE MICs | This survey | Fundamental issues and advances in underground MIC, primarily concerning long-range MICs, MI fast fading, and a complete MI network architecture | -

B. Related Surveys and Motivations of This Survey

While many surveys on underground communication have been published, as listed in Table V, only a handful have focused on underground MICs. For example, Sharma et al. [31] reviewed MIC research until 2017 for non-conventional media applications. They introduced the applications and advantages of MICs, and briefly introduced the channel modeling of the P2P MIC and a hardware testbed for MIC research. Kisseleff et al. [15] conducted a comprehensive review of underground MIC studies up to that point. Although their work primarily focused on P2P MIC and MI waveguide issues, they also discussed physical protocols such as channel estimation and node deployments. Recently, Liu et al. [30] offered a thorough introduction to general MICs in a monograph. This monograph covers the basic concepts and theories of background, developments, antennas, channels, performance, and protocols related to MICs proposed before 2020.

Compared to the current survey, the existing review [31] has overlooked multi-node MICs. The survey [15] does not comprehensively review MI network architectures and the issues of mobile MIC, including the data link layer, network layers, RPMAs, and MI fast fading. The monograph [30] does not review MI network architectures and MI fast fading, especially the recent routing algorithms and RPMAs developed in the past five years.
However, the surveys conducted since 2020 have not yet provided comprehensive reviews of expanded MIC techniques, such as MAC and routing protocols for a large-scale MI network and a novel mechanical antenna. This is due to the great surge in mobile MICs, mechanical antennas, and upper-layer MI research since 2020.

Moreover, almost all existing articles on MICs presented the common conclusion that the MI channel is a quasi-static and predictable channel without small-scale fading. These articles also include the surveys (e.g., [15], [32]) and reviews (e.g., [31]). However, several studies have recently described the MI fast fading channel. The common conclusion may no longer hold.

Although most related articles claim that their research on MIC is compatible with TTE communication, few surveys and reviews highlight the potential or specific issues and methodologies when existing MIC techniques are applied in TTE or long-distance MIC environments.

Currently, there is no agreed protocol stack for larger-scale MI networks. By organizing the existing MIC research with reference to the OSI-originated framework, we can identify the remaining issues related to MIC to establish a standard MIC protocol stack. This is crucial for Space-Air-Ground-Underground multi-network integration (SAGUMI) in the next generation of mobile communication.

C. Contributions and Organization of This Survey

This survey reviews research on underground MICs, particularly TTE applications, covering point-to-point TTE MIC techniques and the impact of MI fast fading on existing MIC theories. To guide optimization efforts, we decompose the MI channel power gain into four components with low inter-coupling and distinct, physically interpretable meanings. The survey also covers MI relay techniques, analyzing crosstalk effects in both relay and high-density networks. Moreover, we identify the remaining research tasks for a comprehensive MI network protocol in SAGUI. Based on the surveyed literature, we propose an advanced MIC framework that supports TCP/IP and Linux, addressing both the current state and future challenges of MIC. This framework enables researchers to utilize extensive Linux resources and deep learning platforms, accelerating research and implementation in SAGUI applications. The key research challenges, open issues in MICs, and promising novel techniques to address them are highlighted. The key contributions of this survey include:

• First Survey on MI Fast-Fading Channels: This survey identifies that MI fast fading challenges the prevailing notion of quasi-static MI channels. We highlight that research on MI fast fading remains in its early stages due to the lack of a universal statistical model. To address this, we introduce an antenna vibration model and corresponding simulations. We also analyze the potential impacts on existing MIC theorems, a topic not yet covered in any previous surveys.

• Comprehensive Review of MI Network Architecture: We present a complete review of the MI network architecture across the OSI framework layers, identifying remaining issues and possible solutions. A significant finding is the absence of standardized MI protocol stacks, which represents a major barrier to achieving SAGUMI integration in next-generation mobile communications. Existing surveys often focus on specific applications without providing a holistic and runnable framework.
• Fine-grained Decomposition of Channel Power Gain: We introduce a detailed conceptual modeling approach for channel power gain and provide optimization directions for MIC systems, including antenna designs, bandwidth, and MIC range improvements. This contributes to simplifying MIC optimization for future research by narrowing the scope of MI parameters to focus on relevant variables while fixing others.

• Identification of Positive MI Crosstalk Effects: This study uncovers the positive MI crosstalk effects, which are crucial for addressing challenges in MI waveguides and massive MIMO systems. Previous literature primarily focused on negative crosstalk effects, while the positive crosstalk aspect has been largely overlooked.

The remainder of this survey is structured as follows (more details in Fig. 2): Section II surveys MIC applications and their TTE-specific features. Section III covers MI channel modeling, including MI channel power gain and MI fast fading. Section IV summarizes P2P MIC designs, focusing on MI antenna design, bandwidth, and MIC range. Section V surveys MI relay techniques, such as MI waveguides, the MI passive relay array (MPRlA), cooperative magnetic induction communication (CMIC), and the MI crosstalk effect. Section VI reviews multi-node MIC from the perspective of the OSI framework. Section VII introduces a promising MI network framework with TCP/IP and Linux support in an attempt to address the challenges in current and future MIC studies. Section VIII explores unresolved challenges. It also discusses promising methodologies, including novel MI antennas, MI communication-navigation-sensing integrated (MCNSI) systems, massive MI MIMO, deep JSCC for MIC, heterogeneous MI network techniques, TCP/IP framework support, and Transformer-based prediction frameworks. Section IX concludes this survey. The acronyms that appear across subsections of this survey are listed in Table I. The physical representations of mathematical symbols are listed in Tables II and III. The default values for the simulations and numerical evaluations conducted in this survey are listed in Table III.

D. Research Gap Summary

Open issues on MIC are summarized in Table VI, highlighting existing research gaps in several pivotal domains. The high-priority domains are described in detail as follows.

1) MI fast fading for mobile MIC: Research on MI fast fading still has significant gaps despite existing efforts. The primary gap is the absence of a universal statistical model, which severely hinders the development of upper-layer protocols for mobile UG-WSNs. To address this gap, we propose a geometric antenna vibration model that can potentially tackle this challenge using Monte Carlo methods.

2) Significant gap in TCP/IP framework: TCP/IP is crucial for SAGUI networks in next-generation communications. However, when mapping the surveyed research to each OSI layer, significant gaps emerge: certain layers and key topics remain underexplored. Specifically, no literature addresses IP applicability in MI networks, and the entire Transport Layer (Ly4) lacks dedicated MI solutions. Meanwhile, the extremely low bandwidth and channel capacity of MIC may not be compatible with existing IP and Ly4 protocols, as this low performance can lead to congestion-mechanism failures and retransmission storms. Additionally, excessively large headers waste scarce MI bandwidth.
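To make the header-overhead point concrete, the following back-of-the-envelope sketch computes the effective payload ratio (EPR) of uncompressed TCP/IP framing on a low-rate MI link. The 40-byte figure is the standard minimum of IPv4 plus TCP headers; the payload sizes and the 1 kbps link rate are illustrative assumptions of ours, not values from the surveyed papers.

```python
# EPR = payload / (payload + headers) for one uncompressed TCP/IPv4 segment
# (20 B IPv4 + 20 B TCP minimum headers) carried over a slow MI link.
HEADER_BYTES = 40          # minimum IPv4 + TCP headers
LINK_RATE_BPS = 1_000      # assumed MI link rate (1 kbps)

def effective_payload_ratio(payload_bytes: int) -> float:
    return payload_bytes / (payload_bytes + HEADER_BYTES)

for payload in (16, 64, 256, 1024):
    epr = effective_payload_ratio(payload)
    t_s = (payload + HEADER_BYTES) * 8 / LINK_RATE_BPS  # airtime per segment
    print(f"{payload:5d} B payload -> EPR = {epr:0.2f}, "
          f"{t_s:6.2f} s per segment at 1 kbps")
```

Under these assumptions, even a 256 B payload spends roughly one seventh of the scarce MI airtime on headers, which motivates the IP-HC direction listed in Table VI.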
To address these gaps in MI TCP/IP solutions, we propose a promising MIC framework supporting TCP/IP and Linux, which systematically incorporates MI algorithms and protocols drawn from the literature and future potential solutions.

3) Underdeveloped channel models and protocols for TTE-specific MIC: Current studies on MIC have largely overlooked the unique challenges of the TTE scenario, including mobility, heterogeneous geological materials, and constrained antenna position and orientation (APO).
Fig. 2. The structure of this survey, where we perform fine-grained decomposition of the MI power channel gain and present a novel antenna vibration model for MI fast fading in Section III, the MI crosstalk effect in Section V, and the MI network framework with TCP/IP and Linux support in Section VII.

Key research gaps in TTE-specific MICs are as follows: (i) the eddy gain model for heterogeneous UG materials may require correction or numerical validation; (ii) shadow fading should be considered for mobile nodes in Case (i); and (iii) the existing upper-layer MI protocols lack TTE-specific design adaptations. For these gaps, FEMs, as conducted in this survey, are promising.

II. APPLICATIONS OF MICS: GENERAL SCENARIOS AND TTE-SPECIFIC FEATURES

In this section, we survey the MICs for various potential applications, as summarized in Table VII. Subsequently, we discuss the specific challenges and considerations of applying MIC techniques to TTE scenarios.

Applications in agriculture: Soil conditions are crucial for crops, making it essential to build an MI UG-WSN for agricultural automation. This has attracted several researchers. Li et al. [60] derived the conductivity and permittivity distribution using the Simultaneous Iterative Reconstruction Technique (SIRT) algorithm for soil moisture sensing, and obtained moisture sensing results based on an empirical model, i.e., a soil's relative permittivity function w.r.t. the volumetric water content (VWC). Sensors are spaced 5 to 10 m apart. The COMSOL simulations showed that the sensing accuracy can achieve a root mean square error of 6% in VWC [60]. Agnelgo et al.

TABLE VI
RESEARCH GAPS AND POTENTIAL SOLUTIONS, AND THEIR MAPPINGS

OSI Layers† | Research gaps | Identified open issues | Potential solutions‡ | Priority
Ly1 | MI fast fading | Universal statistical models; related upper-layer solutions | Geometric antenna vibration model (cf. Fig. 6); Monte Carlo methods (cf. Fig. 7); expressions of ergodic performances (cf. (12), (13)); comparison simulations for modulations and FECs (cf. Fig. 14) | ✸✸✸
Ly1 | MI crosstalk | Positive effects; spatial distribution | Transformer-based prediction (cf. Fig. 28) | ✸✸
Ly1 | TTE-specific MIC | Closed-form expressions of MIC range; optimal carrier frequency; eddy current effects & shadow fading from heterogeneous media | Derivation based on Lambert-W function properties for MIC range (cf. (20)) or FEM, as shown in Fig. 4 | ✸✸✸
Ly1 | (Deep) JSCC | No relevant solutions (key: prospect of exceeding Shannon limits) | - | ✸✸
Ly2 | LLC | No references (optional sublayer) | - | ✸
Ly3 | IP | No references (key: VLF adaptations and large packet headers) | MIC framework for TCP/IP & Linux support (cf. Fig. 26, Algorithm 1) | ✸✸✸
Ly4 | TCP | No references (key: unstable connection and large headers) | MIC framework for TCP/IP & Linux support (cf. Fig. 26, Algorithm 1) | ✸✸✸
† OSI Layers: Ly1: physical; Ly2: data link; Ly3: network; Ly4: transport; Ly5-7: combined application layers (i.e., session, presentation, application).
‡ Potential solutions: systematically developed approaches with specific formulations (key expressions, simulations or frameworks), distinct from generic proposals.
TABLE VII
APPLICATIONS USING MI-BASED TECHNIQUES

Applications | Predictable & stable channels† | Waveguide compatibility‡ | MIC distance | Other characteristics | Impact on practical MIC systems | Involved refs.
Agriculture | Yes | Yes | Short to mid (

The PDF in (8) has moments E(J_SD^2) = 1/6 and E(J_SD^4) = 3/50. This study [89] implies that the MI channel may not be quasi-static. The PDF in (8) is suitable for the underwater environment: the antenna orientation changes over time, making the MI channel unstable. Since there are many more weak vibrations than strong ones, however, the PDF in (8) cannot represent terrestrial/subterrestrial MI fast fading caused by antenna vibration.

For mobile TTE MICs using VLF-LA, previous studies have reported a Shannon capacity range of 0.5∼10 kbps at a 50-meter MIC distance [6], [7], [10]. The transmit rate achieved by TTE MIC experimental devices is significantly lower than this capacity, as mentioned in [11]. However, the frequency spectrum range of the road disturbance input causing antenna vibrations is approximately 10 Hz∼1000 Hz, as stated by ISO/TC108 and [143]. Since JSD in the channel power gain GSD suffers rapid changes during a symbol time, MI fast fading was formally defined in [9], [11]. Due to the complexity of the antenna vibration statistical model compared to that in [89], novel mathematical concepts, such as the Boundary Chi-square (BCS) distribution for VMIC systems, the boundary p(x) distribution, and the conjugate pseudo-piecewise function, were proposed to derive statistical expressions [11]. However, these expressions have only been derived for specific scenarios. Specifically, for the downlink VMI channel with the same height, the PDF of JSD can be simplified as

f_{J_{SD}}(x) =
\begin{cases}
\Big[1 - \mathrm{erf}\Big(\sqrt{\tfrac{\varsigma}{2\sigma_D^2}}\Big)\Big]\, \delta_{pu}(0), & x = 1 - \varsigma; \\
\dfrac{\exp\big(-\tfrac{1-x}{2\sigma_D^2}\big)}{\sqrt{2\pi\sigma_D^2 (1-x)}}, & 1 - \varsigma < x < 1.
\end{cases}

TABLE X

Outage probability | Fig. 8(a) | BCS | ➘ An increase when σD > 0.6, even if Tx power PS → ∞
Ergodic capacity | Fig. 8(b) | BCS | ➘ A decrease from 3.4 to 2.7 bps/Hz when σD > 0.6 (i.e., average AVI angle = 37°)
BER | [142], Fig. 8(c) | BCS | ➘ A sharp increase from 10−3 to 10−1 when σD > 0.6
The markers '➚' and '➘' represent the positive and negative effects, respectively.

MI fast fading impacts MIC performance through the PDF fJSD(x|(σS, σD)), which depends heavily on the AVI inputs σS and σD. In practical MIC systems, such AVI inputs frequently demonstrate substantial temporal variability, especially under congested traffic conditions, leading to significant fluctuations in performance metrics over time. This characteristic markedly distinguishes MI fast fading from EMWC fast fading, as time-dependent performance variability is less pronounced in EMWC.
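The piecewise point-mass-plus-density PDF above can be sanity-checked numerically. The sketch below assumes our reconstruction is exact and reads 1 − JSD as a chi-square-type variable σD²Z² (Z ∼ N(0,1)) clipped at the vibration boundary ς, matching the BCS interpretation; ς = 0.8 follows Fig. 7, while σD is an illustrative value of ours.

```python
import numpy as np
from math import erf, sqrt

# Monte Carlo check of the reconstructed piecewise PDF of J_SD: draw
# 1 - J_SD = min(sigma_D^2 * Z^2, zeta), i.e., a chi-square-type variable
# clipped at the vibration boundary (a "Boundary Chi-square", BCS, law).
rng = np.random.default_rng(0)
zeta, sigma_D = 0.8, 0.5          # boundary per Fig. 7; sigma_D assumed

z = rng.standard_normal(1_000_000)
u = np.minimum((sigma_D * z) ** 2, zeta)  # clipped squared vibration
j_sd = 1.0 - u                            # fast-fading gain samples

# Point mass at x = 1 - zeta predicted by the reconstruction:
p_atom_theory = 1.0 - erf(sqrt(zeta / (2.0 * sigma_D ** 2)))
p_atom_mc = np.mean(j_sd == 1.0 - zeta)
print(f"atom at 1 - zeta: theory {p_atom_theory:.4f}, MC {p_atom_mc:.4f}")
print(f"E[J_SD] ~ {j_sd.mean():.4f}")
```

Under this reading, the continuous branch integrates to erf(√(ς/(2σD²))), so the total probability is 1, consistent with the atom weight in the first branch.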
D. Summary and Lessons Learned

This section discusses the four decomposed gains influencing channel power gain (Table IX) and MI fast fading (Table X). Table IX suggests optimization directions for each factor, such as improving circuit gain in antenna designs. Recent discoveries, including random coil misalignment [89] and MI fast fading [9], [11], challenge the widely accepted assumption in [32] that MIC channels are quasi-static. However, these findings address only limited aspects of nonstatic MI channels.

In contrast to EMWCs, research on nonstatic MI channels is still in its early stages. Key issues include: 1) no universal statistical model exists for MI fast fading, analogous to the Rayleigh model for EMWCs, since the existing distribution models of AVI (e.g., the BCS distribution) are scenario-specific; 2) the effects of MI fast fading on existing MIC theorems, such as channel estimation, CMICs, MI MIMO, and MI protocols, remain unexplored; and 3) MI fast fading negatively impacts the performance of MIC systems.

The challenges involved include: 1) the antenna vibration's limited dependent components make it difficult to apply the CLT, a critical method for obtaining fast fading models in EMWCs; and 2) the expectation/variance of MI fast fading remains random due to its strong dependence on antenna vibration velocity. To address these challenges, we introduce a universal geometric model for antenna vibration (Fig. 6), which transforms all dependent components into independent ones. This model enables the estimation of the MI fast fading gain via Monte Carlo simulation, as shown in Fig. 7.

The practical takeaways or common pitfalls include: 1) no existing MI fast fading model may hold when the near-field and weak-coupling conditions are not satisfied; 2) the eddy gain requires correction in TTE practice, as TTE media are rarely homogeneous; 3) for VMICs, the ferromagnetic iron in the vehicle body can distort the magnetic field, potentially causing significant deviations in existing MI channel models; and 4) improving or uniformizing the polarization gain JSD benefits MICs, whereas doing this for the average MI fast fading gain E[JSD] may be detrimental due to increased outage probability and BER.

Fig. 6. Advised antenna vibration modeling in a 3-D space with independent random variables; n(·), φ(·) and θ′(·) denote the normal vector, and the horizontal and vertical components of the antenna vibration (angle) of node (·), respectively.

Fig. 7. Expectation of MI fast fading JSD for the model in Fig. 6: (a) ad-hoc link in 3-D space; (b) two specific links. Here, the AVIs of S and D follow BCS distributions with a boundary ς = 0.8 and respective average AVIs σ²S and σ²D. Their horizontal components φS and φD follow the uniform distribution within [0, 2π). The dotted lines of Row 1 and Row 2 in Fig. 7(b) are obtained through Monte Carlo simulations (sim), and the solid line of Row 1 in Fig. 7(b) is the calculation (calc) from (10); the overlap between the calculated curve of Row 1 and its simulated counterpart validates (10). The calculation curve for Row 2 cannot be presented due to the lack of a universal expression.
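To make the Fig. 6 model concrete, the following Monte Carlo sketch draws vibrating antenna normals from independent horizontal and vertical components and averages a polarization factor over them. As a stand-in for JSD we use the textbook magnetic-dipole coupling [3(nS·d̂)(nD·d̂) − nS·nD]²/4, normalized to 1 for coaxial alignment; Fig. 7 uses a scenario-specific normalization, so the absolute values there differ from this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def vibrating_normals(n0, sigma, zeta, size):
    """Unit normals tilted from the rest direction n0: the tilt angle is a
    clipped (BCS-type) vertical component and the tilt direction a uniform
    horizontal component, per the independence assumption of Fig. 6."""
    theta = np.minimum(np.abs(sigma * rng.standard_normal(size)),
                       np.sqrt(zeta))              # clipped tilt angle
    phi = rng.uniform(0.0, 2.0 * np.pi, size)      # horizontal component
    e1 = np.cross(n0, [0.0, 0.0, 1.0]); e1 /= np.linalg.norm(e1)
    e2 = np.cross(n0, e1)                          # frame orthogonal to n0
    tilt = np.cos(phi)[:, None] * e1 + np.sin(phi)[:, None] * e2
    return np.cos(theta)[:, None] * n0 + np.sin(theta)[:, None] * tilt

n = 200_000
d_hat = np.array([1.0, 0.0, 0.0])                  # S -> D link direction
nS = vibrating_normals(d_hat, sigma=0.6, zeta=0.8, size=n)
nD = vibrating_normals(d_hat, sigma=0.6, zeta=0.8, size=n)
# Dipole polarization factor, coaxial alignment normalized to 1:
J = (1.5 * (nS @ d_hat) * (nD @ d_hat)
     - 0.5 * np.einsum("ij,ij->i", nS, nD)) ** 2
print(f"E[J_SD] ~ {J.mean():.3f} under vibration (1.0 when static)")
```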
IV. DESIGN OF POINT-TO-POINT TTE MIC

This section introduces the MI antenna designs and compares the advantages and disadvantages of four MI antenna types, including the RPMA design, a recent research hotspot in MI antennas. Subsequently, we discuss the key performance metrics of point-to-point (P2P) MI links, with a particular focus on the challenging derivations of the 3-dB MI bandwidth and the MIC range for a TTE MI link. Moreover, we discuss the upper-layer P2P MI techniques, such as channel estimation, modulation, and FEC. Recent challenges and approaches for these techniques are briefly summarized in Table XIII.

TABLE XIII
OVERVIEW OF RELATED RESEARCH ON P2P MIC TECHNIQUES. THE MARKER '●' DESCRIBES THE METHODS; THE MARKERS '✓' AND '✗' REPRESENT PROS AND CONS, RESPECTIVELY.

Aspects | Refs. | Addressed issues & methods | Remaining issues (& proposed approaches) | Priority†
Channel modeling | [11], [12], [37], [78] | ✓● Two typical MI fast fading models; ✓● see Table IX in Section III | ✗● More universal MI fast fading modeling; ✗● see Table IX in Section III | ✸✸✸
Antenna design | [11], [78], [122] | ✓● Coil, orthogonal MIMO, RPMA, M2I; ✓● see Table XIV | ✗● Orientation sensitivity; practical deployment issues; ✗● see Table XIV | ✸✸
Capacity | [37], [39] | ✓ Expressions of capacity for coil-based and M2I MICs | ✗ Expressions of capacity for RPMA-based MICs | ✸✸
Bandwidth | [10], [41] | ✓ Expressions (16), (17), and (18) (see Table XV) | ✗ Bandwidth of RPMA-based MICs | ✸✸✸
MIC range | [42] | ✓ Expression for short MIC range (see (19)) | ✗ Expressions for TTE MIC with significant eddy gain | ✸✸✸
Channel estimation | [144] | ✓ Lack of sufficient CSI; ● transmitter-side estimation without an explicit training sequence | ✗ Insufficient feedback for dyadic backscatter MI channel estimation | ✸
Channel estimation | [145] | ✓ Unknown propagation environment (e.g., medium conductivity); ● environment-aware method using kernel functions to learn the positive/negative factors in the environment | ✗ Unclear in the case of VLF TTE MICs and MIC with fast fading | ✸
Channel estimation | [43] | ✓ For the inter-media MI backscatter channel; ● joint channel estimation and data transmission, and a stratified medium model for high penetration | ✗ Lacking analysis for VLF and long-distance MIC | ✸
Modulation | [45] | ✓ Basic MI modulation schemes and filter design; ● frequency-division multiplexing (FDM) with multiple sub-bands | ✗ Incompatibility with RPMA systems; ✗ shortcomings similar to FDM in EMWCs | ✸✸
Modulation | [20], [44], [124], [146] | ✓ Basic modulation for RPMA-based MICs; ● amplitude shift keying, on-off keying, frequency-shift keying, chirp-rate shift keying | ✗ Requires additional GTSs and energy to overcome inertia | ✸✸
Channel coding | [52] | ✓ FEC with higher channel estimation requirements; ● EMW-based multilevel cyclic BCH coding | ✗ Ignores non-static MI channel cases; ✗ FEC's resistance characteristics to MI fast fading | ✸
Channel coding | [46] | ✓ Polar code for a variably attenuating MI channel; ● Bhattacharyya parameter estimation algorithm | ✗ Higher channel estimation requirements with low capacity; ✗ ignores ESD and JSD in the algorithm; ✗ FEC's resistance characteristics to MI fast fading | ✸✸✸
† Priority: priority level of remaining issues and proposed approaches for exploration. Low priority (✸) indicates that: 1) the remaining issues have been explored in subsequent MIC literature; 2) existing EMWC schemes are compatible with MIC for these issues; or 3) exploring these issues is optional.
Fig. 8. Impact of MI fast fading on MIC performance: (a) outage probability vs. Rx average AVI (σD); (b) ergodic capacity vs. Rx average AVI (σD) under an SNR of 10; and (c) BER vs. Rx average AVI (σD) with Eb/No = 10 [142]. The solid lines in Figs. 8(a), 8(b) and 8(c) are calculated from (11), (12) and (13), respectively. The dotted lines in Fig. 8(a) are obtained through Monte Carlo simulations.

A. Antenna Design

In this subsection, we introduce four MI antenna designs, including the RPMA design. Table XIV summarizes their key features. Detailed studies are presented below.

1) Coil Tx/Rx Antenna: The coil is the most widely used antenna in the field of MICs [15], [32] owing to its high receive sensitivity, ease of implementation, and deployment. It is very suitable for TTE applications with limited free space. The utilization of coils dates back to research on NFC, a short-range MIC that commonly employs a 13.56 MHz carrier frequency according to the ISO/IEC 18092 standard. The MIC link is established through mutually inductive coupling between the Tx coil and the Rx coil (see Fig. 9). However, unlike standard NFC, MICs for UG-WSNs and UW-WSNs exhibit weak coupling due to longer communication distances. The Tx coil is typically modeled as a magnetic dipole, and the bandwidth of such an MIC is much smaller than that of standard NFC, especially for TTE MIC applications.

Fig. 9. Equivalent circuit model for the coil-based link S→D, where US denotes the instantaneous voltage in the Tx antenna, and dSD denotes the distance between source S and destination D. Additionally, RcS, RcD, LcS, LcD, CcS, CcD and RL are described in Table II.

Due to this weak coupling, the Tx coil with current IS generates a magnetic moment with norm |m_S| = \pi a_{cS}^2 a_{cD}^2 N_S N_D I_S and a current-loss compensating coefficient \aleph_{SD} = (2\pi f)^2 R_L / (Z_S Z_D^2). Such an אSD predominantly determines the frequency bandwidth of an MI link. The overall circuit impedances of the Tx and Rx coils are denoted by Z_S = j2\pi f L_{cS} + \frac{1}{j2\pi f C_{cS}} + R_{cS} + R_L and Z_D = j2\pi f L_{cD} + \frac{1}{j2\pi f C_{cD}} + R_{cD} + R_L, respectively. The channel power gain GSD of the coil-based link S→D has the specific circuit gain CSD, which can be expressed as

C_{SD} = \frac{|m_S|^2\, \aleph_{SD}}{|I_S|^2} = \left(\pi a_{cS}^2 a_{cD}^2 N_S N_D\right)^2 \frac{(2\pi f)^2 R_L}{Z_S Z_D^2}.   (14)
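The frequency selectivity implied by (14) can be seen numerically. In the sketch below, the coil radii, turns, and wire resistance follow Table III, while the loop inductances and load resistance are illustrative assumptions of ours; the matching capacitors are chosen so that both loops resonate at f0 = 10 kHz.

```python
import numpy as np

# Sweep the circuit gain C_SD(f) of (14) across frequency and locate the
# 3-dB band around the resonance f0. Geometry per Table III; L and R_L are
# assumed values, with C chosen for resonance at f0.
a_cS, a_cD, N_S, N_D, rho_w = 0.6, 0.4, 15, 30, 0.166
R_L = 10.0                                   # assumed load resistance (ohm)
R_cS = rho_w * 2 * np.pi * a_cS * N_S        # Tx coil wire resistance
R_cD = rho_w * 2 * np.pi * a_cD * N_D        # Rx coil wire resistance
L_cS = L_cD = 10e-3                          # assumed loop inductances (H)
f0 = 10e3
C_cS = 1.0 / ((2 * np.pi * f0) ** 2 * L_cS)  # resonance matching capacitors
C_cD = 1.0 / ((2 * np.pi * f0) ** 2 * L_cD)

f = np.linspace(8e3, 12e3, 4001)
w = 2 * np.pi * f
Z_S = 1j * w * L_cS + 1.0 / (1j * w * C_cS) + R_cS + R_L
Z_D = 1j * w * L_cD + 1.0 / (1j * w * C_cD) + R_cD + R_L
C_SD = ((np.pi * a_cS**2 * a_cD**2 * N_S * N_D) ** 2
        * w**2 * R_L / np.abs(Z_S * Z_D**2))

band = f[C_SD >= 0.5 * C_SD.max()]           # half-power (3-dB) test
print(f"peak at ~{f[C_SD.argmax()]:.0f} Hz, "
      f"3-dB band ~ [{band[0]:.0f}, {band[-1]:.0f}] Hz")
```

With these assumed values, the half-power band is a small fraction of f0, illustrating why the coil resonance makes the MI channel frequency-selective.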
TABLE XIV
COMPARISON AMONG DIFFERENT ANTENNA TYPES

Antenna types | Direction | Focuses on GSD | Advantages | Disadvantages | Refs.
Coil | Rx/Tx | JSD, CSD | 1) Low system complexity; 2) easy deployment; 3) lower costs | 1) Strong orientation sensitivity; 2) low bandwidth and energy efficiency; 3) remarkable fast fading in mobile MICs | [6], [11], [12]
Orthogonal MIMO coils | Rx/Tx | JSD | 1) Low orientation sensitivity; 2) high receive sensitivity; 3) higher capacity and bandwidth | 1) Challenging deployment in vehicle and TTE environments; 2) complex circuit and protocol designs | [8], [51]
RPMA | Tx | CSD | 1) Low energy consumption under VLF; 2) smaller antenna size for TTE and mobile MICs; 3) lower crosstalk effects | 1) Only for Tx antennas; 2) high maintenance costs; 3) mS limited by permanent magnets; 4) longer GTS requirement due to inertia | [20], [44], [66], [78], [82], [124], [146]-[152]
M2I | Rx/Tx | SSD | 1) Extremely high receive sensitivity; 2) higher capacity and bandwidth | 1) High costs; 2) larger radius than a coil (e.g., 50 mm vs. 15 mm) | [37], [122], [123], [153]-[155]

Fig. 10. Non-conventional MI antenna types: (a) orthogonal MIMO coils [8]; (b) a typical RPMA [124]; (c) M2I antenna [37].

From (4) and (14), the circuit gain CSD in GSD increases with frequency f. Meanwhile, the eddy gain ESD in GSD also increases with f. When f is sufficiently large, the condition σu ≫ 2πfεu no longer holds. The effect of the circuit gain CSD(f) becomes significant. Consequently, coil-based MIC systems with a higher frequency achieve higher performance compared to the RPMA. Also, the expressions for ZS and ZD indicate that the coil resonance makes the MI channel frequency-selective. This results in a narrow 3-dB bandwidth and a negative impact on the design of upper-layer protocols.

2) Orthogonal MIMO coils: SISO coil-based MIC systems exhibit antenna orientation sensitivity due to the polarization gain JSD. This can cause remarkable fast fading in mobile MIC applications. To address this, Lin et al. developed the orthogonal MIMO coil antennas [52], consisting of three orthogonal coils, as shown in Fig. 10(a). This antenna reduces the orientation sensitivity and enhances the MIC channel capacity through spatial diversity. Their directional pattern showed that the minimal mutual inductance increased from 0 to 1/3 of the maximal mutual inductance. However, for TTE applications using VLF-LA [6], [10], deployment of such an antenna faces challenges due to limited free space and crosstalk among coils.

3) RPMA: The RPMA is a mechanical antenna (see Fig. 10(b)) that can generate a magnetic field of 960 Hz [147] and achieves a transmission bit rate of over 12.5 bits/s [20], [148]. Bickford et al. have been designing RPMAs since 2017 [156]. Recently, the RPMA was used for cross-medium communication in [150], [151], where the research [150] focused on omnidirectional communication and the research [151] applied a bionic methodology. Apart from channel modeling, researchers have also studied modulation for RPMA-based systems, such as amplitude shift keying (ASK) [124], on-off keying (OOK) [20], frequency-shift keying (FSK) [146] and chirp-rate shift keying (CSK) [44].

When using a single magnet as an RPMA, the circuit gain CSD changes, while the other gains in (4) remain unchanged. According to [78], the moment generated by a single magnet can be calculated as |m_S| = \frac{V_m}{I_S \mu_0}, where Vm denotes the volume of the RPMA and the factor 1/IS represents the remanence (denoted by Brm) of the RPMA.
Thus, the circuit gain can be calculated as

C_{SD} = \frac{|m_S|^2\, \aleph_{SD}}{16\pi^2 |I_S|^2} = \left(\frac{B_{rm} V_m}{4\pi \mu_0}\right)^2 \aleph_S(f)\, \aleph_D(f),   (15)

where μ0 is the vacuum permeability, and 1/אS(f) and 1/אD(f) are the friction loss of the RPMA and the circuit loss of the receiving device, respectively. Here, 1/אS(f) increases as the frequency f increases. In contrast, 1/אD(f) decreases as the frequency increases, since the Rx antenna is a general coil-based MIC device. According to [142], for a link with a Tx RPMA (as shown in Fig. 10(b)) and an Rx coil, the circuit loss can be expressed as 1/\aleph_D(f) = \frac{2 Z_D}{\pi^2 f a_{cD}}. The experiment in the sea near Sanya city [78] indicated that the RPMA Rx node received a magnetic field of 25 nT at a distance of 10 m.

Mechanical constraints, such as inertia and energy loss, may limit frequency agility or energy efficiency. For the RPMA shown in Fig. 10(b), the instantaneous input power is P^{RPMA}_S = (\tau_{fr} + \tau_{nr})\, 2\pi f / \eta [78], where η denotes the energy conversion efficiency and τfr denotes the friction loss. The inertia constraint \tau_{nr} = I_{nr}\, 2\pi f / t_S depends linearly on the frequency f, indicating a higher energy proportion of τnr in high-frequency systems. The moment of inertia I_{nr} ≈ 3.75 × 10−4 kg·m² leads to a rated angular acceleration of approximately 539 Hz/s [78].

Some issues were not considered in previous studies. By comparing (15) with (14), it can be found that an RPMA-based system achieves higher performance under VLF conditions and lower performance under high-frequency conditions. This phenomenon can be attributed to the increase in friction and inertia associated with higher rotating speeds. Additionally, the lifespan of an RPMA-based MIC device may be shorter than that of a coil-based one due to friction. In particular, RPMA-based MIC requires an additional guaranteed time slot (GTS) to overcome inertia, resulting in additional data rate loss and a variable data rate under the same packet length. Most importantly, permanent magnets on Earth are unable to generate ultra-strong magnetic moments sufficient to support MIC over ultra-long distances (e.g., 1,500 m). Thus, massive RPMA MIMO can be promising due to the negligible crosstalk among RPMAs.
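The GTS cost of inertia can be quantified directly from the rated angular acceleration above. The sketch below assumes the RPMA spins up from standstill at the rated ~539 Hz/s reported in [78]; the packet airtimes are illustrative assumptions of ours.

```python
# Spin-up cost of RPMA inertia: with the rated angular acceleration of
# ~539 Hz/s [78], reaching the carrier frequency from standstill takes
# t = f / 539 seconds of guaranteed time slot (GTS) before symbols flow.
RATED_ACCEL_HZ_PER_S = 539.0   # [78]

def spinup_time(f_carrier_hz: float) -> float:
    return f_carrier_hz / RATED_ACCEL_HZ_PER_S

for f in (50.0, 200.0, 960.0):
    t_up = spinup_time(f)
    for t_pkt in (1.0, 10.0):          # assumed packet airtime (s)
        eff = t_pkt / (t_pkt + t_up)   # share of airtime actually used
        print(f"f = {f:5.0f} Hz: GTS {t_up:5.2f} s -> effective rate "
              f"x{eff:0.2f} for a {t_pkt:2.0f} s packet")
```

This makes tangible the "additional data rate loss and variable data rate" noted above: the higher the carrier, the longer the GTS needed before each transmission.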
4) M2I antenna: From (4), we observe that the space gain S_{SD} ∝ 1/d_{SD}^6, indicating that the magnetic field attenuates very fast along the signal path. To improve the magnetic fields around MI receivers, Guo et al. [37] proposed the M2I enhancement technique, which encloses MI antennas with metamaterial shells (see Fig. 10(c)). Unlike natural inductors, a metamaterial is an artificial material with arbitrary permeability and permittivity. It compensates the antenna-generated reactive power and converts the reactive power into real power, which extends the MIC range. They also proposed an easily implementable M2I antenna with a shell made of many small coils for practical use in [122], [153]. Subsequently, Li et al. [123] designed an active and reconfigurable M2I antenna under given meta-sphere size and power constraints. Although the simulations in [37] indicated that the MIC distance increased from 20 m to 60 m under a capacity of 1 bit/s, deploying M2I may be challenging due to the large space occupied by the metamaterial shell. For example, the meta-sphere shell in [37] has a radius six times that of a coil, i.e., a volume about 8,649 times larger. This creates a critical trade-off between performance and deployability. In scenarios similar to deep mining, this volume penalty is justifiable. However, in narrow boreholes or mobile applications, conventional coils or RPMAs remain preferable.

B. Channel Capacity and Bandwidth

As early as 2013, Sun et al. [130] suggested that the capacity CSD of a flat MIC channel can be derived from Shannon's equation C_{SD} = B_w \log_2\big(1 + \frac{P_{Sf}}{N_{of}} G_{SD}\big). For P2P MIC, the MI bandwidth is important for enhancing channel capacity, especially when the Tx and noise power spectral densities (PSDs) are predetermined and cannot be altered. The bandwidth widely refers to the 3-dB bandwidth, defined as the frequency range over which the signal power halves. There are primarily two methods (see Table XV) to evaluate the 3-dB bandwidth of MI links. When the noise PSD is fixed, the bandwidth can be obtained by solving the following inequality with respect to f:

\frac{G_{SD}(f)\, P_{Sf}(f)}{N_{of}} \ge \frac{1}{2}\, \frac{G_{SD}(f_0)\, P_{Sf}(f_0)}{N_{of}},   (16)

which is referred to as the inequality calculation. For TTE MICs, the Tx coil is treated as a magnetic dipole. Assuming identical Tx and Rx coils, the study [10] obtained the solution for the bandwidth of a P2P coil-based MIC link, B^{dipole}_{w,SD} = B_w\big(\frac{1}{8(R_{cD} + R_L)^3}\big), where the function Bw(·) is

B_w(Z_C) \simeq \sqrt{\pi_w(Z_C) + \rho_w(Z_C)} - \sqrt{\pi_w(Z_C) - \rho_w(Z_C)},
\pi_w(Z_C) = f_0^2 + 2\pi^2 C_{cD}^2 f_0^4 \big[ Z_C^{-2/3} - (R_{cD} + R_L)^2 \big],
\rho_w(Z_C) = \sqrt{ \big\{ f_0^2 + 2\pi^2 C_{cD}^2 f_0^4 \big[ Z_C^{-2/3} - (R_{cD} + R_L)^2 \big] \big\}^2 - f_0^4 }.   (17)

However, (17) involves the assumptions of VLF-LA (\frac{1}{2\pi f_0^2 C_{cD}} \gg 2\pi M_{SD}) and acS = acD. If these assumptions are not met, the bandwidth B^{dipole}_{w,SD} can be obtained through numerical methods. This inequality calculation can be extended to non-coil-based MIC, but it may not yield closed-form analytical expressions.

TABLE XV
COMPARISON OF TWO BANDWIDTH CALCULATIONS

Refs. | Advantage | Limitations | Typical Eqs.
[10] | Possibility for non-coil MIC | Closed-form solution with homogeneous VLF-LA limit | (16), (17)
[40], [41] | Not limited by homogeneous coils | Only for coil-based MIC | (18)

Also, studies have considered MICs with shorter communication ranges, particularly in BAN applications. In these scenarios, the Tx coil is not treated as a magnetic dipole [40], [41], and the bandwidth and channel capacity become more complex. Put simply, since the loosely coupled coils generate boundary conditions based on the coupling coefficient k_c = \frac{M_{SD}}{\sqrt{L_{cS} L_{cD}}}, both the bandwidth and channel capacity are piecewise functions of QS and QD. Here, Q_S = \frac{2\pi f_0 L_{cS}}{R_{cS} + R_L} and Q_D = \frac{2\pi f_0 L_{cD}}{R_{cD} + R_L} are the loop quality factors of the Tx and Rx devices, respectively. In [41], the bandwidth of this model is estimated as

B^{coupling}_{w,SD} =
\begin{cases}
f_0 / Q_S, & \text{if } Q_S > Q_D; \\
f_0 / Q_D, & \text{if } Q_S \le Q_D.
\end{cases}   (18)
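The two bandwidth views above can be tied together numerically. The sketch below applies the half-power test of (16) to an illustrative single-resonance gain shape whose loaded quality factor is Q, for which the half-power width is f0/Q, exactly the form of (18), and then integrates Shannon capacity over that band. The Q value and peak per-Hz SNR are assumptions of ours.

```python
import numpy as np

# Numerical 3-dB bandwidth via the inequality (16), plus Shannon capacity
# integrated over the band. The gain is an illustrative single-resonance
# (Lorentzian) shape whose half-power width is f0/Q, matching (18).
f0, Q = 10e3, 50.0                       # assumed center frequency and Q
snr_peak = 10.0                          # assumed per-Hz SNR at f0 (linear)

f = np.linspace(8e3, 12e3, 8001)
G = 1.0 / (1.0 + (2.0 * Q * (f - f0) / f0) ** 2)   # resonance response
rx_psd = snr_peak * G                    # per-Hz SNR across the sweep

in_band = rx_psd >= 0.5 * rx_psd.max()   # the 3-dB test of (16)
bw = f[in_band][-1] - f[in_band][0]

df = f[1] - f[0]
capacity = np.sum(np.log2(1.0 + rx_psd[in_band])) * df
print(f"3-dB bandwidth ~ {bw:.0f} Hz (f0/Q = {f0/Q:.0f} Hz), "
      f"capacity ~ {capacity/1e3:.2f} kbps")
```

Under these assumptions the capacity lands near the low end of the 0.5∼10 kbps TTE range reported earlier, driven almost entirely by the narrow resonant band.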
When the crosstalk impedances $Z_{pa1}(S, D, R, \ldots) > 0$ and $Z_{pa2}(S, D, R, \ldots) > 0$, the received current $I_D$ decreases, thereby inducing the negative crosstalk effect. This effect may reduce the MIC range and increase the BER. Conversely, negative impedances $Z_{pa1}(S, D, R, \ldots)$ and $Z_{pa2}(S, D, R, \ldots)$ can result in positive crosstalk effects, which may improve the MIC range and capacity. The MI waveguide conforms to this case. For long-range MIC using VLF-LA with a low $f$, where $M_{SD}$, $M_{SR}$, and $M_{RD}$ are sufficiently small to satisfy $Z_{LC} \gg j2\pi f M_{SD}$, $Z_{LC} \gg j2\pi f M_{SR}$, and $Z_{LC} \gg j2\pi f M_{RD}$, the crosstalk impedances can be ignored. This is verified by our simulation results shown in Fig. 19. The simulation also shows that short-range or higher-frequency MI links encounter a significantly more pronounced crosstalk effect. When the ratio $G_{SD,p}/G_{SD}$ exceeds 1, the passive relay serves as part of the MI waveguide and has a positive effect on MIC. For long-range MIC, all curves in Fig. 19 converge to 1, indicating the elimination of crosstalk. It is worth noting that CMIC and large-scale MI networks also face the crosstalk effect according to the KVL equations. From (23) and $M_{ij} = \frac{\pi a_{ci}^2 a_{cj}^2 N_i N_j \mu_u}{2d_{ij}^3} J_{ij}\sqrt{E_{ij}}$ with $i, j \in \{S, D, \ldots\}$, it can be concluded that both $Z_{pa1}(S, D, \ldots)$ and $Z_{pa2}(S, D, \ldots)$ are multimodal functions. Determining their signs is challenging. To address this, future studies are needed on the spatial distribution of positive crosstalk effects. We also propose a deep learning framework to predict the signs of $Z_{pa1}(S, D, \ldots)$ and $Z_{pa2}(S, D, \ldots)$, as will be described in Section VIII-C.
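The sign question above can be probed directly from the mesh (KVL) equations. The following minimal sketch solves the three-coil (S, R, D) system with and without the passive relay R and reports the ratio $G_{SD,p}/G_{SD}$; all circuit values are illustrative assumptions, and the point is only the qualitative behavior of Fig. 19, i.e., a ratio near 1 at VLF and a pronounced deviation at higher frequencies.

```python
import numpy as np

# KVL-based crosstalk check behind Fig. 19: compare |I_D|^2 with and without a
# passive relay R. Component values are illustrative, not measured MI parameters.

Rc, RL, L = 10.0, 50.0, 5e-3          # coil resistance, load, inductance (assumed)
M_SD, M_SR, M_RD = 1e-7, 5e-7, 5e-7   # mutual inductances (H), assumed
U_S = 1.0                             # source voltage (V)

for f in (10e3, 1e6):                 # VLF vs MF, cf. Fig. 19(a)/(b)
    w = 2 * np.pi * f
    C = 1.0 / (w ** 2 * L)                          # tune each loop to resonance at f
    Z = Rc + RL + 1j * w * L + 1.0 / (1j * w * C)   # loop impedance (resistive at f)
    A3 = np.array([[Z, 1j * w * M_SR, 1j * w * M_SD],
                   [1j * w * M_SR, Z, 1j * w * M_RD],
                   [1j * w * M_SD, 1j * w * M_RD, Z]])
    I3 = np.linalg.solve(A3, np.array([U_S, 0, 0], dtype=complex))
    A2 = np.array([[Z, 1j * w * M_SD],
                   [1j * w * M_SD, Z]])
    I2 = np.linalg.solve(A2, np.array([U_S, 0], dtype=complex))
    ratio = abs(I3[2]) ** 2 / abs(I2[1]) ** 2       # = G_SD,p / G_SD
    print(f"f = {f:9.0f} Hz: G_SD,p/G_SD = {ratio:.4f}")
# Expected output: a ratio very close to 1 at 10 kHz (crosstalk negligible under
# VLF-LA) and a visible deviation at 1 MHz, matching the trend of Fig. 19.
```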
C. Cooperative MIC (CMIC)

In this subsection, we delineate two active relay techniques, i.e., CMIC with multiple aligned relays (CMIC-nAR) and CMIC-1NR. We also summarize the distinct challenges of cooperative communication in MIC compared with EMWC. The MI waveguide enhances channel capacity and range but requires numerous underground relays. For several decades, cooperative communication has been a focus in EMWC, FSOC, and acoustic communication [163]-[167]. These communication channels experience significant small-scale fading. Cooperative relays use spatial and time diversity to mitigate these fading effects, which significantly reduces the outage probability and enhances the achievable rates. Various cooperative methods, such as AF, DF, and compress-and-forward, have been developed in these fields [163].

TABLE XVII
COMPARISON OF COOPERATIVE EMW AND CMI COMMUNICATIONS
Comparison | EMW | Refs. | CMI | Refs.
Channel characteristic | Small-scale fading | [163] | Quasi-static fading† | [6], [7], [121]
Key issue | Reducing outage probability | [163] | Enhancing Rx signal strength | [6], [7], [121]
Methods | AF, DF, compress-and-forward | [163] | AF, DF, FF | [7], [32], [121]
Benefits | Universal | [163] | Limited by coil locations and orientations | [6], [7], [10]
† With advances in MI fast fading research, limited literature on the CMI channel with non-quasi-static fading (e.g., [51]) has been released.

Fig. 19. Ratio $G_{SD,p}/G_{SD}$ for the crosstalk effect shown in Fig. 18, where $G_{SD,p} = \frac{I_D^2 R_L}{I_S U_S}$ represents the power gain of the MI link S→D when a passive relay is used, with $I_S$ and $I_D$ obtained from (22) and (23), respectively. This ratio quantifies how much the power gain improves (or deteriorates, if below 1). (a) VLF ($f_0$ = 10 kHz); (b) MF ($f_0$ = 1 MHz), for which the MI waveguide can be applied.

Fig. 22 (caption fragment): ... is called the relay area (RA); the thick gray curve represents an arbitrary underground tunnel in which S, R, and D are constrained to reside.

D. Summary and Lessons Learned

Table XVI summarizes the key issues, methods, and remaining challenges in MI relay research, covering the MI passive relay [157], the MI active relay (CMIC) [51], and hybrid relay techniques [32]. These studies show that both passive relay and CMIC techniques can significantly enhance channel capacity and extend the MIC range, sometimes by orders of magnitude. The passive relay technique is energy-efficient with simple protocol complexity, but it faces deployment challenges, particularly in antenna alignment due to the negative crosstalk effect. A potential solution involves the use of an iron-core coil to collect MI signals, although careful attention is needed to mitigate the effects of the eddy current. Additionally, implementing passive relays in RPMA-based systems is challenging due to their minimal crosstalk effects. Moreover, passive relay and CMIC-nAR techniques are unsuitable for mobile MICs due to their dynamic topology, which prevents the formation of stable systems with positive crosstalk effects (see (23) and Fig. 19). In contrast, CMIC significantly reduces the number of required relays. The CMIC-1NR solution, which eliminates the need for coil alignment, can be applied to RPMA-based systems and mobile MICs. We introduce the unexpected passive relay phenomenon, specifically the crosstalk effect in relays, and highlight its role in MI waveguides as a special case.

The practical takeaways or common pitfalls include: 1) a primary practical challenge in MI relaying is antenna alignment; thus, the MI waveguide for TTE suits only scenarios like straight tunnels and is infeasible for mobile MICs; 2) increasing the number of active relays does not guarantee throughput improvement; and 3) a high-density multi-node MI network exhibits crosstalk, yet determining the spatial distribution of positive crosstalk remains challenging, complicating passive relay deployment.

VI. MI NETWORK AND ITS ARCHITECTURE

In wireless communications, key focal points include improving network throughput, increasing user access, and reducing energy consumption. However, recent EMWC techniques based on signal reflection and refraction [170]-[172] are difficult to apply to MIC signals, which exhibit neither reflection nor refraction. Consequently, the role of the MI network architecture becomes more important than that of the EMWC network. In this section, we categorize recent literature on multi-node MIC with reference to the widely used OSI framework. The OSI framework (see Fig. 23) originated from the International Organization for Standardization (ISO) in 1984. It serves as a conceptual framework for designing network communication protocols and facilitating communication between different systems. The research on the MI network and its differences from other communication networks are summarized in Tables XVIII and XIX, respectively, with more details provided below.

Fig. 23. The OSI framework, which has seven layers (Ly1 physical, Ly2 data link, Ly3 network, Ly4 transport, Ly5 session, Ly6 presentation, and Ly7 application).

A. Physical Layer (Ly1)

Most research on MIC has focused on physical-layer issues, including channel modeling, performance, estimation, modulation, coding, and resource allocation. We have discussed the physical-layer issues of P2P and MI relay-based systems, i.e., channel modeling, key performance metrics, channel estimation, modulation, and channel coding, in Sections IV and V. For the MI physical layer with multiple nodes, the focus has been primarily on resource allocation, including power allocation [11], [52], [53], as well as frequency and bandwidth allocation [53].

1) Power control: From the network's perspective, power control aims to optimize signals, reduce interference, and enhance efficiency. Lin et al.
[52] pioneered the study of power-optimization formulation over the entire multi-hop network. To formulate the power control problem, they used the Nash game.

TABLE XVIII
OVERVIEW OF RELATED RESEARCH ON MIC IN LINE WITH THE OSI FRAMEWORK. THE MARKER '●' DESCRIBES THE METHODS; THE MARKERS '✓' AND '✗' DESCRIBE ADDRESSED AND REMAINING ISSUES, RESPECTIVELY
OSI Lys | Aspects | Refs. | Methods and addressed issues | Remaining issues | Priority
Ly1 | P2P & relay-based MICs | [10], [12], [45], [46], [144] | ✓Channel modeling, channel capacity, MIC range, channel estimation, channel coding and modulation (see Sections III, IV, and V) ●Detailed in Tables IX, IV, and XVI | ✗A universal MI fast fading model; TTE MIC range; RPMA channel capacity; MI crosstalk effects; CMIC with MI fast fading ✗Detailed in Tables IX, IV, and XVI | -
Ly1 | Power allocation | [52], [58] | ✓Power allocation for a stationary MI channel ●Nash game | ✗Not suitable for time-varying channels over a longer time span ✗Quantization-induced precision loss | ✸
Ly1 | Power allocation | [11] | ✓Power allocation for an MI fast fading channel ●Nash game-based multi-agent RL with Bellman iteration | ✗Slow convergence due to lack of information exchange ✗Sacrifices precision for faster convergence | ✸✸
Ly1 | Frequency allocation | [53] | ✓Lower system delays (>40% decrease) for cluster-based UW-WSN ●Multi-variable alternating iterative resource allocation algorithm | ✗No consideration of network throughput and energy consumption | ✸✸
Ly2 | MAC | [54], [168] | ✓Low cost (less than $100), 60 μA in the sleep mode, 0.49 mA in the Rx mode, and 253 mA in the Tx mode | - | -

B. Data Link Layer (Ly2)

1) MAC: The MI MAC prototype in [54] achieved the low cost and low power consumption summarized in Table XVIII. However, this work did not consider the throughput. Recently, in [55], the authors proposed an improved MAC protocol and algorithm that considers detailed MI channel parameters. This work balanced the energy and throughput performance metrics by hybridizing three configurations of the orthogonal MIMO coils, each corresponding to a different packet type (as detailed in Fig. 24). They applied a CSMA-based scheme and pointed out that their MI MAC protocol exhibits low energy efficiency under SISO-coil-based MIC. That is, it may encounter problems with MIC that employs VLF-LA. Furthermore, the large MAC header (8 bytes) can be further compressed at the bit level for the VLF-LA channel (see the toy sketch at the end of this section).

2) LLC: The LLC sub-layer is often only a thin adaptation sub-layer that provides communication reliability, e.g., through data transfer, flow control, and error control. Since the LLC sub-layer is designed to be independent of the physical layer and media in the OSI model, it has received limited attention in MIC research. Only a few studies have explored LLC protocols for other communication channels (e.g., the acoustic communication channel [173], [177] and quantum communication [178]). Specifically, Daladier et al. [173], [177] proposed the SW-MER protocol to enhance throughput and reliability by employing an adaptive retransmission strategy that exploits acoustic channel quality to minimize redundant packet transmissions. This approach can serve as a reference LLC for MIC, as both acoustic and MIC channels share low data rate characteristics.
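As a toy illustration of the bit-level MAC header compression suggested above, the sketch below packs the ID fields of Fig. 24(a) into sub-byte widths. The field widths are hypothetical; the point is that a VLF-LA link benefits from every saved bit.

```python
# Toy bit-level compression of the 8-byte MI MAC header: pack small ID fields
# into a few bits each instead of whole bytes. Field widths are hypothetical.

def pack_header(preamble, target_id, packet_id, tx_coil_id):
    """2-bit preamble, 6-bit target ID, 6-bit packet ID, 2-bit coil ID -> 2 bytes."""
    word = (preamble << 14) | (target_id << 8) | (packet_id << 2) | tx_coil_id
    return word.to_bytes(2, "big")

def unpack_header(data):
    word = int.from_bytes(data, "big")
    return (word >> 14) & 0x3, (word >> 8) & 0x3F, (word >> 2) & 0x3F, word & 0x3

hdr = pack_header(0b10, 37, 5, 1)
print(len(hdr), "bytes on air (vs 8 bytes uncompressed):", unpack_header(hdr))
assert unpack_header(hdr) == (0b10, 37, 5, 1)
```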
C. Network Layer (Ly3)

The network layer routes data packets efficiently to their destinations. In this subsection, we introduce studies on the functionality of the MI network, including connectivity, data collection, node deployment, and MIC routing. Here, routing in MIC has recently emerged as a new research aspect.

Fig. 24. A typical MI MAC solution in [55]. (a) The packets are designed into three types: wakeup (W), acknowledgment (ACK), and data packets, with fields such as preamble, target ID, packet ID, and Tx coil ID. (b) Complete communication cycle between a Tx and an Rx: the Tx node starts with a W packet; the Rx node, after successfully receiving the W packet, acknowledges with an ACK packet; upon receiving the ACK packet, the Tx node sends out the whole data packet [55]. However, for TTE MICs, the packet size should be further compressed.

1) Connectivity: Connectivity is a cornerstone of networks and is crucial for designing network-layer protocols, including routing algorithms, traffic control, and quality of service (QoS). It has been widely studied in EMWC research. For $n$ network nodes uniformly distributed within a circular area of $\pi R^2$, Gupta et al. [179] derived $R(n) = \sqrt{\frac{\log(n)+c(n)}{\pi n}}$, where $c(n)$ is a correction function. They proved that these nodes are connected with probability one if $\lim_{n\to\infty} c(n) \to \infty$. The connectivity analysis differs fundamentally between MIC and EMWC. As MIC is immune to shadowing, the geometric disk/sphere models used in EMWC network studies [179]-[182] are inadequate for the MIC channel. Since 2011, there has been little literature on connectivity in the MIC field. Sun et al. [169] conducted a 2-D connectivity analysis of UG-WSN using probability derivation, yet overlooked attenuation differences. Gulbahar and Akan [72] performed a k-connectivity analysis on an underwater MI grid network with Tx coils at fixed positions and directional angles. Zhang et al. [8] performed a connectivity analysis of TTE MI UG-WSN and proposed an optimization algorithm to address the connectivity adaptability of frequent frequency-switching w.r.t. APO-switching. For mobile MIC, the MI fast fading results in an irregular shape of the MI coverage space. In heterogeneous MI UG-WSNs, the nodes are not homogeneous. These features violate the assumptions of the connectivity models (e.g., the Poisson point process) and represent open issues.

2) Deployment strategy and data collection: The strategic deployment and data collection of MI sensors are also essential for effective local network design. Fig. 3 and Eq. (3) show that the APOs significantly impact the polarization gain $J_{SD}$. Optimizing the APO is a distinct issue from EMWC [6], [7], [10], [174]. Specifically, Kisseleff et al. [174] optimized the antenna deployment to avoid channel interference from other nodes. Recently, focus has been placed on node deployment for efficient data collection and routing protocols. For a 3D-UW-WSN, Wang et al. designed an optimal deployment strategy and a clustering algorithm to prolong the network lifetime by 38.3% and 21.2% compared to the EELEACHC algorithm [56]. This algorithm is also compatible with the 3D-UG-WSN. Wei et al. [80] studied power-efficient AUV data collection schemes in an MI and acoustic hybrid sensor network. They proposed the alternating anchor node selection and flow routing (AANSFR) scheme for AUV data collection, where the 3-dB MI bandwidth is considered. Using AANSFR, the lifespan could increase from 14 hours to 18 hours.
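Returning to the connectivity analysis above, the radius $R(n)$ of [179] can be sanity-checked with a small Monte Carlo experiment over random geometric graphs. The node count and correction constant below are illustrative, and, as stressed above, a geometric disk is only a crude stand-in for the petal-shaped MI coverage.

```python
import numpy as np

# Monte Carlo check of the Gupta-Kumar connectivity radius R(n) quoted above [179]
# for nodes uniform in a unit-area disk. Parameters are illustrative; real MI links
# are NOT geometric disks (the polarization gain J_SD makes coverage petal-shaped).

rng = np.random.default_rng(0)

def connected(points, r):
    """BFS over the geometric graph with connection radius r."""
    n = len(points)
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    adj = d2 <= r * r
    seen = np.zeros(n, dtype=bool)
    seen[0] = True
    stack = [0]
    while stack:
        i = stack.pop()
        for j in np.flatnonzero(adj[i] & ~seen):
            seen[j] = True
            stack.append(j)
    return seen.all()

n, c, trials = 400, 6.0, 50
R = np.sqrt((np.log(n) + c) / (np.pi * n))
hits = 0
for _ in range(trials):
    # sample n points uniformly in a disk of unit area (radius 1/sqrt(pi))
    rad = np.sqrt(rng.random(n)) / np.sqrt(np.pi)
    ang = 2 * np.pi * rng.random(n)
    hits += connected(np.c_[rad * np.cos(ang), rad * np.sin(ang)], R)
print(f"connected in {hits}/{trials} trials with R(n) = {R:.4f}")
# With c = 6 the graph should be connected in the vast majority of trials.
```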
Fig. 25. Schematic of a walk (arrow trajectories) for the frequency-selective MI links (from sensor i, through sensor k, to the sink 0) [57]. In this figure, ω denotes the operating frequency, t denotes the time, x denotes the node state, and S(·) denotes a step. The objective of the optimization problem is to maximize the total capacity of the related walks.

3) Routing: Although routing design is a core function of the network layer, studies on routing in MI UW-WSNs and UG-WSNs are limited. From (3), it is observed that the circuit gain $C_{SD}(f)$ and the eddy gain $E_{SD}(f)$ are functions of the carrier frequency $f$. These gains represent the frequency-selective property. Compared to traditional routing strategies, this property makes the MI UG-WSN appear to have entirely different topological structures at different operating frequencies [57]. Likewise, as MIC based on VLF-LA is orientation-sensitive (the APO-selective property), a mobile MI UG-WSN can also encounter different topological structures due to the unpredictable moments of the MI fast fading gain $J_{SD}$. For the frequency-selective property, Liu et al. [57], [120] proposed frequency-switchable strategies for routing decisions, using a distributed Q-learning-based algorithm. Specifically, in [120], they mapped the frequency-switchable MI UG-WSN to a multilayer network and proposed a distributed Q-learning-based algorithm to describe its routing decisions. They also evaluated the convergence of the algorithm and the network lifetime. As Q-learning is time-consuming, in [57], they redefined the routing decision problem with frequency switchability in dynamic MI-WUSNs (see Fig. 25). Their simulations showed that the throughput increased from 40 to over 45 when the connectivity is 1. Improving the network lifetime and reducing the transmission delay are two conflicting goals in routing studies. To balance these goals, Wang et al. [85] proposed two RL-based algorithms, i.e., the QL-EDR algorithm and the path selection strategy algorithm. Alsalman et al. [86] proposed a balance routing protocol based on machine learning (BRP-ML) to reduce network latency and energy consumption. In these studies [57], [85], [86], [120], the eddy gain $E_{SD}$ and the polarization gain $J_{SD}$ are ignored. Moreover, their centralized multi-agent RL algorithms require the exchange of band tables among nodes. This poses a practical challenge due to the low data rate of MIC with VLF-LA.
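In the spirit of the frequency-switchable routing of [57], the following toy sketch runs tabular Q-learning over a three-node topology in which each hop is feasible only at certain frequencies (cf. Fig. 25). The topology, link qualities, rewards, and hyperparameters are all illustrative assumptions.

```python
import random

# Toy distributed-style Q-learning for frequency-switchable MI routing in the
# spirit of [57]: each node keeps Q-values per (next-hop, frequency) action and
# learns a walk from sensor i, through sensor k, to the sink 0 (cf. Fig. 25).

random.seed(1)
nodes = ("i", "k", "0")
freqs = ("w0", "w1", "w2")
# link quality per (u, v, f); missing entries mean the MI link is infeasible there
q_link = {("i", "k", "w0"): 0.2, ("i", "k", "w1"): 0.9,
          ("k", "0", "w1"): 0.3, ("k", "0", "w2"): 0.8}

Q = {}                                   # Q[(node, next_hop, freq)]
alpha, gamma, eps = 0.5, 0.9, 0.2

def actions(u):
    return [(v, f) for v in nodes for f in freqs if (u, v, f) in q_link]

def best(u):
    return max(actions(u), key=lambda a: Q.get((u, *a), 0.0))

for _ in range(2000):                    # training episodes
    u = "i"
    while u != "0":
        v, f = random.choice(actions(u)) if random.random() < eps else best(u)
        r = q_link[(u, v, f)] + (1.0 if v == "0" else 0.0)   # quality + sink bonus
        nxt = max((Q.get((v, *a), 0.0) for a in actions(v)), default=0.0)
        Q[(u, v, f)] = (1 - alpha) * Q.get((u, v, f), 0.0) + alpha * (r + gamma * nxt)
        u = v

print("learned policy:", {u: best(u) for u in ("i", "k")})
# Expected: i prefers (k, w1) and k prefers (0, w2), i.e. the walk switches frequency.
```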
4) Internet Protocol (IP): In the TCP/IP framework, the IP, including IPv4 [183] and IPv6 [184], was originally designed to be channel-independent. To the best of our knowledge, there has been no literature specifically addressing IP-related issues in the MIC field. However, the most significant challenge is IP's low efficiency in TTE MIC due to the large IP header (over 20 bytes) and the limited channel capacity of TTE MI links. In other communication techniques, such as bandwidth-constrained EMWC [185] and acoustic communication [175], this challenge was mitigated or addressed via IP header compression schemes. Specifically, Parrein et al. [175] proposed an acoustic protocol based on the static context header compression (SCHC) protocol to reach a header compression ratio of 99.74% using lower-layer information. This can also be applied to MIC.

D. Transport Layer (Ly4) and TCP

The transport layer, akin to the LLC sub-layer and including TCP [186], ensures end-to-end communication with reliable data transfer, flow control, and error recovery. It was originally designed to be channel-agnostic. There is limited literature on MI transport-layer research. Particularly for TCP, fairness and congestion control issues pose a challenge and can potentially be optimized by exploiting the characteristics of the specific channel. For example, TCP connections with shorter round-trip times (RTTs) often hinder the throughput of those with longer RTTs (i.e., the RTT suppression issue). In the EMWC field, these issues have received some attention, e.g., [176], [187]. In [176], the congestion window was modeled as a function of the SNR of the EMW link, suggesting that one can formulate the corresponding RTT optimization problem for TCP w.r.t. the MI channel power gain $G_{SD}$. Besides the TCP schemes, in the Internet of Things (IoT), constrained devices proliferate under strict power/bandwidth limits, motivating researchers to develop IoT-specific protocols with transport-layer adaptations, such as the Constrained Application Protocol (CoAP) [188], Delay-Tolerant Networking (DTN) [189], and RObust Header Compression (ROHC) [190]. The lightweight design of CoAP and its support for unreliable transport (built on UDP) align with the low-power requirements of TTE MIC devices. The bundle protocol of DTN can mitigate challenges posed by intermittent connectivity arising from low bandwidth and data rate. These protocols and mechanisms, validated through established RFC standards, offer potential directions for adapting transport-layer or cross-layer functionalities in TTE MIC networks.
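A minimal sketch of static-context header compression in the spirit of SCHC [175], [190] follows: both ends share a context of header fields that never change on a P2P MI link, so only a rule ID and the varying fields travel over the air. The field layout and sizes are hypothetical.

```python
# Toy static-context header compression: fields that never change on the link
# live in a shared context and are never transmitted. Layout is hypothetical.

CONTEXT = {            # fields both ends already know
    "version": 4, "src": "10.0.0.1", "dst": "10.0.0.2", "proto": "UDP",
    "src_port": 5683, "dst_port": 5683,
}

def compress(header: dict) -> bytes:
    """Send only a 1-byte rule ID plus the 2-byte varying length field."""
    assert all(header[k] == v for k, v in CONTEXT.items()), "context mismatch"
    rule_id = 0x01
    return bytes([rule_id]) + header["length"].to_bytes(2, "big")

def decompress(packet: bytes) -> dict:
    assert packet[0] == 0x01, "unknown rule"
    header = dict(CONTEXT)
    header["length"] = int.from_bytes(packet[1:3], "big")
    return header

full = dict(CONTEXT, length=64)
wire = compress(full)
print(f"compressed header: {len(wire)} bytes (vs >20 bytes of uncompressed IP/UDP)")
assert decompress(wire) == full
```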
E. Cross-layer Optimization (CLO)

The OSI framework promotes modularity and standardization with high cohesion and low coupling. However, it faces efficiency challenges in MI networks due to strict bandwidth, energy, and latency constraints. In this scenario, a lower layer may call upper-layer parameters or algorithms to improve network performance. Notably, for UG-WSNs, the solution for optimizing energy consumption in Ly1 may require the routing decision results from Ly3. Lin et al. [52] developed a distributed environment-aware protocol (DEAP) based on the Nash game to satisfy the statistical quality of service (QoS) requirement. This protocol achieved both optimal energy savings and throughput gains concurrently in 2015. DEAP considers the interactions of different layer functionalities, such as power control, modulation, and FEC in Ly1, the distributed MAC schemes in Ly2, and the geographical routing algorithm in Ly3. In 2021, Singh et al. [58] developed a Distributed Energy-Throughput Efficient Cross-layer solution using the Naked Mole-Rat algorithm (NMRA), also called the DECN approach. While this approach is similar to the DEAP one, the DECN approach can be applied to both direct and waveguide MI links. Their simulations showed that when using the DECN approach with 50 nodes, the energy consumption decreased from 160~390 J/packet to 100 J/packet, whereas the normalized throughput increased from 2~6 packets to 11 packets/s. Compared to wave-based communication (e.g., EMWCs), the DEAP and DECN approaches fully consider the fact that the received power diminishes with the sixth power of the distance, $d_{SD}^6$, in an MIC channel, which is much faster than in the EMW channel (on the order of $d_{SD}^2$). Compared to EMWC, it can be concluded that all MI CLO solutions use distributed algorithms. This is due to the fact that the capacity and bandwidth of an MI channel are much lower than those of an EMW channel. This lack of information exchange can cause slow convergence of the algorithms.

F. Summary and Lessons Learned

Table XVIII summarizes the issues addressed, their corresponding methods, and the remaining issues of research on the multi-node MI network under the OSI-originated framework, including the physical, data link, network, transport, and application layers. It can be summarized that the issues of the MI network primarily stem from the coil resonance feature, i.e., low MI bandwidth and significant frequency-selectivity. Apart from MI fast fading, many remaining issues need to be addressed: 1) the physical-layer support for RPMA-based MICs is inadequate, even lacking a universal expression for channel capacity, with key challenges stemming from mechanical inertia and friction; despite packets having the same length, the RPMA MI channel may lead to variable data rates; 2) the high collision probability of existing MI MAC solutions [54], [55], [168] has not been addressed due to the coil resonance feature; 3) most solutions in Ly3, especially the routing solutions [57], [85], [86], have overlooked the gain factors $E_{SD}$ and $J_{SD}$; these Ly3 solutions have limited support for MI UG-WSNs and UW-WSNs with longer ranges; 4) the channel-independent OSI-based solutions, particularly TCP and IP, need to be validated against issues such as excessively large frame headers and RTT suppression; these are crucial for the SAGUMI; and 5) all solutions using RL [11], [85], [86], especially those using distributed RL with the Nash game, may face low convergence and precision. Regarding these five challenges 1)-5), our proposed framework, the status quo, potential solutions, and research gaps are elaborated on in the subsequent sections. Table XXI summarizes the reviewed techniques mapped to each OSI layer and their respective technical readiness levels (TRLs), revealing that most existing MI techniques lack experimental evidence, particularly for multi-node solutions.
The practical takeaways or common pitfalls include: 1) most multi-node MI protocol studies have assumed near-field and weak coupling conditions, which are more prone to breakdown in TTE environments than in general MIC environments; 2) existing MI upper-layer protocols tend to overlook the eddy gain and the polarization gain; 3) applying the TCP/IP stack to TTE MICs may suffer from congestion-mechanism failures and retransmission storms; 4) existing MI upper-layer protocols overlook that the MI channel bandwidth may be insufficient to support the exchange of their large control data, such as routing and Q tables; and 5) the frequent variation in average AVIs may render many existing MI solutions from Ly1 to Ly7, for both P2P and multi-node networks, inapplicable.

TABLE XXI
TECHNICAL READINESS OF MI TECHNIQUES MAPPED TO OSI LAYERS
OSI Lys | Technology | TRL† | Future research directions | Priority
Ly1 | MI fast fading | ★ | A universal statistical model | ✸✸✸
Ly1 | Channel modeling | ★★★★ | Mixed-field & multilayer medium cases | ✸✸
Ly1 | Antenna design | ★★★★ | TMR Rx antenna; deployment | ✸✸
Ly1 | Channel estimation | ★ | For VLF-LA and MI fast fading cases | ✸✸
Ly1 | Modulation | ★★★★‡ | For RPMA-based links | ✸
Ly1 | Channel coding | ★★ | Deep JSCC | ✸✸
Ly1 | Passive relays | ★★★ | Positive MI crosstalk | ✸✸
Ly1 | Active relays / CMIC | ★ | Arbitrarily deployed multi-relays | ✸✸
Ly1 | Resource allocation | ★ | Bandwidth and frequency allocations; balancing precision and convergence | ✸✸
Ly2 | MI MAC | ★★ | Header compression; conflict reduction | ✸✸✸
Ly2 | LLC | ✩(0) | Optimization for the VLF-LA case | ✸
Ly3 | Connectivity | ★ | Heterogeneous or mobile MI networks | ✸✸
Ly3 | Data collection and node deployment | ★ | Algorithm convergence | ✸
Ly3 | Routing | ★ | Minimizing information exchange among nodes | ✸✸
Ly3 | IP | ✩(0) | IP-HC | ✸✸✸
Ly4 | TCP | ✩(0) | Fairness, congestion control, and connection issues due to extremely low bandwidth | ✸✸✸
† ★: Technology Readiness Level (TRL) (cf. [191]) 1-3 (basic research); ★★: TRL 4-5 (lab validation); ★★★: TRL 6-7 (system-level testing); ★★★★: TRL 8-9 (industrial application); ✩: no MI-specific references available for this TRL level.
‡ TTE MIC products have entered the market [119], and modulation is an essential integrated technical module.

VII. PROMISING MI NETWORK FRAMEWORK WITH TCP/IP AND LINUX SUPPORT

This section proposes a Linux-and-TCP/IP-supported MI network framework to implement most existing MIC protocols and algorithms, including those in Sections IV, V, and VI.

A. Significance and Architecture Overview

1) Significance: While MIC techniques have been applied in various underground applications, few studies explore their compatibility with standard protocols, particularly the TCP/IP framework, due to the limited bandwidth of MIC systems. With ongoing advancements in MIC performance and the expansion of SAGUMI applications, integrating MIC with the TCP/IP framework is becoming an inevitable and essential trend. While deep learning has proven effective in addressing EMWC challenges [192], [193], its application to MIC remains largely unexplored due to the difficulty of using robust neural network platforms in embedded systems. On the other hand, Linux excels in large-scale wireless applications, such as mobile ad-hoc networks (MANETs) and Android (built on the Linux kernel) smartphones, enabling dynamic routing and offering scalability via its modular kernel and TCP/IP stack.
Moreover, Linux offers a wealth of open-source resources across diverse research domains, particularly in wireless network protocol stacks (e.g., IEEE 802.15) and neural network platforms (e.g., TensorFlow, PyTorch, and RKNN). These resources facilitate rapid development by providing robust tools and platforms for research. As illustrated in Fig. 26, we propose a Linux-based MIC framework. This framework tackles multi-hop MIC challenges by integrating TCP/IP, Linux kernel modules, and the MI solutions discussed in the preceding section, balancing protocol compatibility with UG-WSN requirements.

Fig. 26. Proposed framework for TCP/IP & Linux support. The thick arrow lines represent effective Tx/Rx data, and the thin red arrow lines represent control data for the MI network protocol stack. Boxes with a white background and italic text represent the interfaces between the Linux system and the MI protocol; boxes with a white background represent Linux models.

2) Linux wireless network device driver-inspired architecture: Building on the support schemes for TCP/IP and Linux in EMWCs, such as the Linux driver designs [194] for WiFi and Bluetooth, we propose a Linux-based MIC framework. This framework, as illustrated in Fig. 26, draws parallels with EMWC solutions. It incorporates MI cross-layer solutions (Part A) and MI transceivers, which are hardware and firmware-based models designed for protocols and algorithms that rely on analog signals, highly parallel computation, or high-accuracy real-time processing, such as channel modeling and estimation. The MI character device (MCD) driver and the MI network device (MND) driver are designed to handle the retrieval and storage of MIC data. The MCD driver forwards control data, such as CSI, while the MND driver handles effective data streams, enabling MIC applications to directly access the full TCP/IP protocol through the socket() interface.
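From the application's point of view, this design reduces MIC networking to ordinary socket programming. A minimal sketch, with an illustrative address and port, is shown below.

```python
import socket

# Minimal sketch of an MIC application using the standard socket API, as enabled
# by the MND driver above: once the MI network device is registered, the app is
# ordinary UDP code. The address and port are illustrative.

MI_NODE = ("10.42.0.2", 5683)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"sensor reading: 23.5 C", MI_NODE)
# The datagram enters the Linux TCP/IP stack, traverses the MI routing and IP-HC
# netfilter hooks (cf. Algorithm 1), and reaches the MND's ndo_start_xmit
# callback for MAC framing and modulation.
sock.close()
```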
3) MI-specific models: In this framework, several models differ from conventional Linux wireless drivers:
• MI TCP/IP adapter: This model adapts the native Linux TCP/IP framework with MI-specific TCP/IP optimization schemes proposed by researchers, such as TCP/IP header compression and RTT optimization for MIC.
• MI routing solutions model: This model can implement and help evaluate specific MI routing algorithms proposed by researchers. These algorithms can be invoked similarly to the MI TCP/IP adapter.
• Cross-layer solutions model (Part C): This model supports deep learning-based solutions, such as deep JSCC. However, as it resides in the user space of the operating system, it may not be suitable for algorithms with stringent real-time requirements.
• Cross-layer solutions model (Part B): To reduce device size and energy consumption, some MI modulation and channel coding can be incorporated into the MI cross-layer solutions (Part B) model rather than into the FPGA or the MI cross-layer solutions (Part A) model. The TTE MIC system uses the VLF-LA technique, which allows the primary CPU sufficient time to process, enabling the removal of the FPGA and MI cross-layer (Part A) models in some simpler MI nodes.

B. System Architecture and Implementation

Fig. 26 describes the system implementation of the Linux-based MIC framework, focusing on the interface between the Linux TCP/IP stack and the MI solutions models.

1) Linux MCD and MND drivers: The MCD and MND driver models mirror EMWC driver architectures, facilitating hardware-Linux data exchange and laying a foundation for the MI-specific models. The MCD driver operates as follows.
• Register a character device with the kernel, associated with several Linux file operations whose members include the open, close, read, write, and ioctl callbacks;
• Invoke MI-specific models (MI cross-layer solutions Parts A and B) via hardware interrupts and Linux/user APIs;
• The write and ioctl callbacks manage downlink control streams (from the application layer to the physical layer);
• The read and ioctl callbacks manage uplink control streams (from the physical layer to the application layer).
The MND driver operates as follows.
• Register a Linux MND, integrating its netfilter hooks with the MI TCP/IP adapter and MI routing models; Linux invokes these models via the hooks automatically;
• Invoke MI MAC solutions via the callbacks of the MND;
• Manage routing and downlink data streams (from the application layer to the physical layer) via the MND callbacks (e.g., ndo_start_xmit);
• Manage routing and uplink data streams (from the physical layer to the application layer) via the MND callbacks (e.g., netif_rx).

2) MI physical-layer solutions and MI cross-layer solutions (Part A): In line with most EMWC solutions that employ hardware-centric PHY/MAC implementations in dedicated chips, the models of MI physical-layer solutions and MI cross-layer solutions (Part A) reside in MCUs or FPGAs (not in Linux). A few MI solutions exhibit strong dependency on analog signal processing and impose stringent timing constraints that exceed the performance capabilities of the Linux kernel. For this reason, these models handle modulation/coding and cross-layer logic (e.g., interference mitigation) using FPGA/MCU cores. These models interface with the Linux system via custom bus protocols, e.g., the serial peripheral interface (SPI), or memory-mapped I/O.

Algorithm 1: Example of a TCP/IP transmit process.
/* Conventions: italic text represents a Linux API, and Model_name.Function_name(Input_data) represents a function of a model with parameter Input_data */
/* Boot/initialization stage of Linux */
1: Register the MCD and MND;
2: Register a netfilter hook named MI_adapter with hooknum=NF_IP_POST_ROUTING, hook=MI_IPHC, priority ≥ 100; /* IP-HC in the TCP/IP adapter, called after routing */
3: Register a netfilter hook named MI_routing with hooknum=NF_IP_LOCAL_OUT, hook=frequency_switchable_routing (cf. [57]), priority ≤ -400;
/* Runtime stage of Linux */
4: for Loop do
5:   Call MI_Channel_Estimation in Algorithm 2, which generates MI_CSI;
6:   User calls sendto(Destination_IP, Destination_port, MI_packet), which generates MI_IP_packet;
7:   Linux TCP/IP calls the hook MI_routing.frequency_switchable_routing(MI_IP_packet) to complete routing based on [57] (see Fig. 25);
8:   Linux TCP/IP calls the hook MI_adapter.MI_IPHC(MI_IP_packet) and generates MI_IPHC_packet;
9:   Obtain MI_IPHC_packet via the callback MND.ndo_start_xmit();
10:  MND.ndo_start_xmit() calls MI_MAC.Mac_based_on_[54](MI_IPHC_packet, MI_CSI) to complete the MAC and generates MI_MAC_packet;
11:  MND.ndo_start_xmit() calls MI_cross_layer_Part_B.BPSK_Polar_based_on_[46](MI_MAC_packet, MI_CSI) and generates MI_BPSK_packet and control bytes;
12:  This BPSK and Polar encoder function downloads the MI_BPSK_packet and the control bytes to the MI transceiver's corresponding buffer and hardware registers, respectively;
13: end for
14: Unregister all devices and netfilter hooks.

Algorithm 2: MI_Channel_Estimation() based on a deep learning framework.
1: The MCD calls MI_crosslayer_Part_B.MI_estimation_based_on_[144](MI_CSI);
2: The MCD calls MI_crosslayer_Part_B.MI_fast_fading() to request MI_crosslayer_Part_C.MI_fast_fading() via the Linux callback read();
3: MI_crosslayer_Part_C.MI_fast_fading() completes the MI fast fading estimation based on Fig. 27 and the PyTorch framework;
4: MI_crosslayer_Part_C.MI_fast_fading() returns the updated MI_CSI to MI_crosslayer_Part_B via the Linux callback write();
5: return the updated MI_CSI

3) MI cross-layer solutions (Part B): For VLF operation within the Linux kernel's capabilities, this model can directly implement most physical- and MAC-layer solutions.
• Perform MI channel estimations without deep learning (e.g., [144]) in the Linux interrupt service routine registered by the MCD driver upon receiving the control stream from the MI transceiver;
• Perform modulations (e.g., BPSK) and FECs (e.g., the Polar codec [46]) upon MAC frame/packet arrival at the MND driver, as shown in Algorithm 1 (Lines 11 and 12);
• Forward the control stream to the MI cross-layer solutions (Part C) via read, write, and ioctl.

4) MI cross-layer solutions (Part C): This model facilitates access to the interfaces of deep learning platforms (e.g., PyTorch), as these interfaces and platforms can only run in user space. The details are as follows (cf. Algorithm 2):
• MI cross-layer solutions (Part B) forward the required data to Part C via the read callback;
• Part C executes algorithms using the interfaces of the deep learning platform in user space;
• Part C returns the results to Part B via the write callback.
5) MI TCP/IP adapter and MI routing solutions: The MI TCP/IP adapter and routing solutions integrate with the Linux TCP/IP framework primarily through netfilter hook points [195], which correspond to distinct stages in the local machine's internal routing process, i.e., NF_xx_PRE_ROUTING (prior to routing decisions), NF_xx_LOCAL_IN (before packets destined for the local system enter the protocol stack), NF_xx_FORWARD (for packets being forwarded), NF_xx_LOCAL_OUT (before packets originating from the local system exit the protocol stack), and NF_xx_POST_ROUTING (after the routing decision has been made but before the packets are transmitted on the network interface). Here, xx denotes the protocol type (e.g., IPv4, IPv6, or the bridge protocol). Upon Linux startup, the MND driver registers the netfilters for the MI TCP/IP adapter and routing solutions. The MI routing netfilter should be set to the highest priority so that it takes precedence over other routing schemes; MI-specific routing functions attach to the hook points NF_xx_LOCAL_OUT and NF_xx_PRE_ROUTING, as shown in Algorithm 1 (Line 3). The MI TCP/IP adapter (especially IP-HC) is set to a low priority to avoid conflicts with kernel packet modification; its functions are associated with NF_xx_POST_ROUTING, as shown in Algorithm 1 (Line 2). These netfilters are invoked automatically by the Linux TCP/IP framework.

6) MI MAC solutions: This model packages the MAC header (see Fig. 24(a)) and maintains a state machine like the one in [54]. MI MAC functions for uplink streams are triggered by MI transceiver interrupts; those for downlink streams are invoked via the ndo_start_xmit callback of the MND (see Algorithm 1, Line 10).

7) Example: Consider an MI protocol stack integrated into the Linux TCP/IP framework. This stack includes physical-layer solutions, such as channel estimation, MI fast fading prediction (see Fig. 27), BPSK modulation, and Polar coding (as in [46]). It also incorporates the MAC solution from [54] and the routing solution from [57]. The TCP/IP transmit process is described in Algorithm 1. When an application sends a packet via the Linux socket API sendto, the packet passes through the Linux TCP/IP framework, including the netfilters associated with the MI routing and IP-HC schemes, and arrives at the MND. The MND invokes the MI MAC algorithm, BPSK modulation, and Polar encoding to generate a physical-layer frame, which is then downloaded to the MI transceiver.

C. Summary

The proposed framework enables researchers to leverage the abundant Linux resources available for communication networks and deep learning, such as OpenZigbee [196], TensorFlow, and RKNN [197]. As a result, it can accelerate MIC research and development. The proposed framework manages control flow across OSI layers using Linux interrupt service routines and the MCD callbacks read/write. It handles data flow and protocol encapsulation across OSI layers via interrupt service routines, the MND callbacks ndo_start_xmit/netif_rx, and the socket APIs. MI-specific network-layer protocols are managed through Linux netfilter hooks. The following sections provide further insights into related MI techniques, which can also be supported by this framework.
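To summarize the transmit path of Algorithm 1 at a glance, the toy sketch below chains the stages as pure functions. All packet formats are hypothetical; in the real framework these steps run inside netfilter hooks and MND callbacks in the kernel.

```python
# Toy end-to-end rendition of the transmit path in Algorithm 1, with each stage
# as a pure function over a dict-based "packet". Formats are hypothetical.

def routing_hook(ip_packet):   # NF_xx_LOCAL_OUT: frequency-switchable routing [57]
    return dict(ip_packet, next_hop="k", freq="w1")

def ip_hc_hook(ip_packet):     # NF_xx_POST_ROUTING: IP header compression
    return {k: v for k, v in ip_packet.items() if k not in ("src", "dst")}

def mac_frame(packet):         # ndo_start_xmit -> MI MAC framing [54]
    return dict(packet, mac_header=b"\x01\x02")

def phy_encode(frame):         # Part B: BPSK modulation + Polar coding [46]
    return dict(frame, coded=True, modulation="BPSK")

pkt = {"src": "10.0.0.1", "dst": "10.0.0.2", "payload": b"MI packet"}
for stage in (routing_hook, ip_hc_hook, mac_frame, phy_encode):
    pkt = stage(pkt)
print(pkt)   # the frame handed to the MI transceiver buffer
```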
VIII. RESEARCH CHALLENGES AND FUTURE DIRECTIONS

Sections III-VI have provided a comprehensive overview of state-of-the-art methodologies and theoretical frameworks across a wide range of MIC topics, including MI channel modeling, P2P MIC, MI relays, and MI network architecture. Many algorithms and solutions discussed within this review can be implemented and/or evaluated through the MI framework proposed in Section VII; see Table XXIII. For instance, the channel modeling and estimation solutions in [11], [12], [144] require processing analog signals and can be implemented in the MI transceiver model. In this section, we summarize potentially challenging issues that have not been addressed in the literature. We also introduce new promising techniques (e.g., deep JSCC, MCNSI, etc.) for future research. Table XXII outlines the remaining issues, their potential solutions, and novel techniques for MIC.

A. MI Fast Fading in Mobile MIC Systems

Until 2020, the concept of an MI fast fading channel had not been formally introduced. The research on MI fast fading is still in its early stages (see Table X). In this subsection, we discuss the challenges and remaining tasks for the channel's statistical characteristics, and elucidate the potential ramifications on established MIC (from Ly1 to Ly3), including outage probability, channel estimation, MI MIMO, and CMIC. Table XXIV summarizes the typical issues arising from the introduction of MI fast fading into MI channels.

1) Statistical modeling when the CLT does not apply: Previous literature has derived closed-form expressions of the CDF/PDF and expectation of the MI fast fading. These expressions are only applicable in typical scenarios, such as underwater [51], [89], 2-D TTE [9], and 3-D TTE using an MI cellular network [11]. Firstly, unlike in EMWCs, MI fast fading is not caused by signal propagation. As illustrated in Fig. 6, we modeled MI fast fading using four independent random variables ($\varphi_S$, $\theta'_S$, $\varphi_D$, $\theta'_D$) with distinct distributions. These four random variables are not suitable for applying the CLT, since the CLT requires a large number of independent random variables. Thus, deriving a universally applicable CDF/PDF, such as the Rayleigh model, becomes challenging. We utilize Monte Carlo simulations to obtain universal models for MIC links (e.g., Fig. 7). However, these models have high time complexity for network algorithms and protocols. Secondly, the antenna design, the antenna carrier, and its mechanical degrees of freedom affect the MI fast fading. For example, antennas composed of orthogonal MIMO coils, RPMAs, M2I, and magnetoresistive sensors exhibit different CDFs of MI fast fading. The vibration model of a backpack antenna may not follow the boundary p(x) distribution. The mechanical degrees of freedom of the vehicle influence the distributions of the horizontal components of the antenna vibrations $\varphi_S$ and $\varphi_D$. These interdisciplinary issues also pose a challenge to the derivation of a universal statistical model.
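The role of such Monte Carlo simulations can be illustrated with the stationary SISO gain $J_{SD} = 1 + 3\cos^2\theta_{SD}$ quoted later in this section: perturbing the APO angle with a random vibration already shows why no single Rayleigh-style CDF fits all cases. The Gaussian vibration model and its spreads below are illustrative assumptions (the actual model uses four angular variables with distinct, generally non-Gaussian distributions).

```python
import numpy as np

# Minimal Monte Carlo sketch of MI fast fading on the polarization gain:
# J = 1 + 3 cos^2(theta), with theta perturbed around a nominal APO angle.
# The Gaussian vibration and sigma values are assumptions for illustration.

rng = np.random.default_rng(0)
theta0 = np.pi / 6                         # nominal APO angle
J0 = 1.0 + 3.0 * np.cos(theta0) ** 2       # stationary gain
for sigma in (0.05, 0.2, 0.5):             # average AVI (rad), assumed
    theta = theta0 + rng.normal(0.0, sigma, 1_000_000)
    J = 1.0 + 3.0 * np.cos(theta) ** 2
    print(f"sigma={sigma:4.2f}: E[J]={J.mean():.3f}, std={J.std():.3f}, "
          f"P(J < 0.5*J0)={(J < 0.5 * J0).mean():.4f}")
# Both the mean gain and the outage-like tail shift with the AVI sigma, which is
# why a single universal CDF is hard to obtain across MIC scenarios.
```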
TABLE XXII
FUTURE ISSUES, PROMISING WORK, AND ADVISED METHODOLOGIES FOR TTE MICS
Types | Research aspects | OSI layer | Future issues & works | Advised methodologies
P2P MICs | MI fast fading | Ly1 | 1) A universal statistical model 2) A velocity-dependent expectation/variance 3) See Table XXIV | 1) Maxwell-equations-based derivation and FEMs 2) Probability-theorem-based derivation 3) Deep JSCC 4) Transformer models or LLMs to predict the average AVI
P2P MICs | Antenna design | Ly1 | 1) Inertia issue of a mechanical antenna 2) Using high-sensitivity and small-size magnetic sensors | -
P2P MICs | MCNSI | Ly1-Ly3, Ly7 | 1) Balancing between MI, navigation, and sensing 2) Chaotic RSSIs | 1) GAN techniques and the DWE algorithm 2) Formulating a joint optimization problem
P2P MICs | Mixed-field channel model | Ly1 | 1) Difficulties in sub-modeling the channel power gain GSD | 1) Maxwell-equations-based derivation
P2P MICs | Inhomogeneous medium | Ly1 | 1) Boundary conditions for multi-layer materials 2) Boundary conditions between the near-field region and the radiation-field region 3) Statistical characteristics of dynamic media | 1) Maxwell-equations-based derivation 2) Geometrical approximation by joining regular shapes 3) Data fusion and attention mechanisms
P2P MICs | JSCC | Ly1, Ly7 | 1) Image, audio & video data transmissions 2) Content-based communication 3) High-dimensional data communication | 1) Deep JSCC 2) Multimodal semantic communication 3) Offline distilled LLM
MI relay | CMIC | Ly1 | 1) Spatial distribution of crosstalk effects 2) Multiple active relays with misaligned antennas | 1) KVL equations
MI network | Heterogeneous MI network | Ly1-Ly7 | 1) Channel licensing and spectrum sensing 2) Connectivity issues 3) Power and throughput optimization | 1) Percolation theory 2) Multi-agent RL
MI network | MI MAC | Ly2 | 1) The SISO antenna case 2) Balancing the frame error ratio and EPR 3) Power and throughput optimization 4) Communication security issues | 1) Machine learning for the CSMA-CA issue 2) Using antenna orientation information for the frame-error-ratio & EPR issue 3) Bit-level compression for the MAC header
MI network | MI routing | Ly3 | 1) Effects of ESD and JSD | -
MI network | MI TCP/IP | Ly1-Ly4 | 1) Large TCP/IP packet header unsuitable for the ultra-narrow-band MI channel 2) Significant RTT suppression for TCP connections 3) Excessive duration for connection establishment | 1) Header compression techniques 2) RTT optimization based on MI channel conditions 3) Optimizing data chunking and aggregation 4) Intelligent retransmission strategies 5) Machine learning solutions under MI fast fading
Implementations | MI network framework | Ly1-Ly7 | 1) Experiments and testing for TTE MIC systems 2) TCP/IP support | Fig. 26

TABLE XXIII
OUR PROPOSED PROTOTYPE FOR MI NETWORK IMPLEMENTATION TO SUPPORT EXISTING AND FUTURE STUDIES
Models in Fig. 26 | Recommended solutions† | Typical refs.
MI transceiver | Channel modeling and estimation | [10]-[12], [36], [37], [144]
MI cross-layer solutions (Part A) | Modulations; Polar coding; JSCC; high real-time CSMA-CA | [45], [46]
MI MAC solutions | Contention-based MI MAC protocols | [54], [55], [168]
MI cross-layer solutions (Part B) | Modulations; FEC; Polar coding; data collection | [11], [45], [46], [52], [80]
MI routing solutions | MI connectivity and routing solutions | [8], [57], [85], [86], [120]
MI TCP/IP adapter | IP-HC, RTT optimization, intelligent retransmission, etc. | Future research
MI cross-layer solutions (Part C) | Deep JSCC; algorithms with deep RL | Future research
MI character device driver | Reading/writing protocol control data from/to the MI transceiver | [194]
MI network device driver | Reading/writing effective data from/to the MI transceiver | [194]
† In this table, we prioritize the solutions of MIC under the VLF-LA case.

TABLE XXIV
POTENTIAL FURTHER ISSUES IF MI FAST FADING CHANNELS ARE INTRODUCED
OSI layers | Research aspects | Involved refs. | Typical issues with the introduction of MI fast fading
Physical layer | Channel modeling | [9], [11], [89] | For more universal scenarios
Physical layer | Achievable rate | [7], [10], [51], [130] | Random outage probability in most cases
Physical layer | Channel estimation | [144] | Unpredictable CSI; spectrum obtaining
Physical layer | Channel coding | [46] | Unpredictable average AVI
Physical layer | Power control | [11] | Frequent disruptions of the Nash equilibrium due to unstable average AVIs
Physical layer | MIMO & CMI | [7], [10], [36], [51] | Spatial diversity; outage probability reduction
Data link layer | MAC | [54], [55] | CSMA-related time / time-slot
Network layer | Connectivity | [8], [169] | Irregular shape of the MI coverage space
Network layer | Routing | [46] | Variable latency and energy consumption
Cross-layer | Cross-layer protocols | [52], [58] | Unpredictable average AVI; QoS requirements
Networking | - | - | Variable network topology

It is noticed that the practical relevance of existing MI fast fading models remains untested. Monte Carlo simulations have validated three such models, each with AVIs distributed as uniform, BCS, and boundary p(x). Despite this validation, the practical applicability of these models is yet to be confirmed. Future research can give priority to experimental validation using measured disturbance profiles across diverse environments to enhance their practical applicability.

2) Outage probability with velocity-dependent AVIs: MI fast fading influences the achievable rate through the expectation $E(J_{SD})$ and the outage probability. For mobile MIC, the expectation $E(J_{SD})$ is determined by the average AVIs $\sigma_S$ and $\sigma_D$. These AVIs are velocity-dependent, making them statistically unpredictable due to their dependence on the driver's mindset. Additionally, unlike in traditional MIC, the outage probability regains its physical meaning due to fast fading. Both the unpredictability of $E(J_{SD})$ and the outage probability of a mobile MIC link have a significant impact on the existing solutions for MIC networks. Unlike in EMWC, such an outage probability is still random in most cases. However, as the vehicle velocity reflects a human activity, attention-based deep learning (e.g., Transformer and BERT models) and large language models (LLMs) can be considered to address the challenges caused by the unpredictable $E(J_{SD})$.

3) Lack of frequency spectrum for channel estimation: In studies of MIC channel estimation [144], the MIC channel is assumed to be quasi-static. For mobile MIC with a fast fading channel and a given time $t$, the mutual inductance $M_{SD} = M_{SD}(\varphi_S(t), \theta'_S(t), \varphi_D(t), \theta'_D(t)) = M_{SD}(t)$ changes rapidly within the coherence time $T_c$; a mobile MIC channel exhibits time-selective fading. Let $T_c = N_t T_t$, where $N_t$ is the number of symbols between two pilot signals and $T_t$ is the duration of a symbol. The received signal is $\tilde{y}$ with the CDF $F_{J(\varphi_S(t),\theta'_S(t),\varphi_D(t),\theta'_D(t))}(y)$. For time-selective fading, one of the widely used methods is interpolation between symbols 1 and $N_t$. Hence, the frequency spectrum of $J(\varphi_S(t), \theta'_S(t), \varphi_D(t), \theta'_D(t))$ is crucial.
As the CDF of $J_{SD}$ helps obtain this frequency spectrum, it is an open issue for MIC channel estimation. Additionally, the statistically unpredictable expectation $E(G_{SD})$ makes it hard to obtain accurate CSI, which is also an issue for channel estimation.

4) Unpredictable bandwidth and spatial diversity for MIMO and CMICs: For traditional MIC links with quasi-static channels, MIMO and CMI techniques enhance the signal strength at the Rx node, and the received SNR determines the feasibility of MIMO and CMI. For MIC with fast fading, the outage probability must also be considered. For example, in Fig. 22, CMIC is feasible in the RA with $\Upsilon_{AF} > \Upsilon_{SD}$, where $\Upsilon_{AF}$ and $\Upsilon_{SD}$ are the received SNRs of the CMI and DMI links at D, respectively. For mobile MIC, not all points in the RA are suitable for a CMI relay due to the presence of outage probability conditions. Similarly, some points outside the RA may be suitable for a CMI relay. In addition, the achievable rate of an AF-relay CMI system can be expressed as [10]
$$C_{AF} = \frac{1}{2}\int_{f_0-\frac{1}{2}B_{AF}(v)}^{f_0+\frac{1}{2}B_{AF}(v)} \log_2\!\left(1 + \Upsilon_{AF}(f)\right)df, \tag{24}$$
where the bandwidth $B_{AF}$ is a function of the APO $v$ and, in turn, a function of the MI fast fading gains $J_{SD}$, $J_{SR}$, and $J_{RD}$. This poses a challenge to the ergodic rate calculation of mobile CMI links.

5) Non-uniform and irregular MI coverage: For a stationary MI SISO link, the polarization gain satisfies $J_{SD} = 1 + 3\cos^2\theta_{SD}$, where $\theta_{SD}$ is the angle between the normal vector of the coil and the line SD, implying that the MI coverage space has a petal shape. However, it is observed in Fig. 7 that the shape of the MI coverage space may become irregular in the presence of MI fast fading. This is a challenging issue for MI connectivity modeling. On the other hand, MI fast fading may increase the channel power gain, as shown in Fig. 7, presenting an exciting opportunity for network optimization.

6) Proposed framework for average AVI prediction: The average AVI relies significantly on a vehicle's velocity, which depends on driver intent and is hard to predict using conventional methods (e.g., the Kalman filter). Attention-based deep learning can be an alternative for obtaining average AVIs. We propose a Transformer-based supervised deep learning framework to obtain a discretized/classified average AVI, as shown in Fig. 27.

Fig. 27. Proposed framework of Rx average AVI $\sigma_D$ prediction. This framework is a k-class classification supervised deep learning framework where the predicted AVI $\sigma_D$ (output) is discretized into k levels, and δ is the boundary margin. Inputs include the average AVIs of Tx and Rx ($\sigma_S$, $\sigma_D$), road/traffic conditions, and vehicle velocity, and are assumed to be pre-denoised.

Fig. 28. Proposed framework of MI crosstalk impedance prediction. Here, the classifications $P[\Delta Z_{pa1} \lessgtr \varepsilon \pm \frac{\delta}{2}]$ represent a decrease, no change, and an increase in the impedance $Z_{pa1}$, respectively, where ε is a boundary threshold and δ is the boundary margin. Inputs are assumed to be pre-denoised.

In this framework, historical sequence vectors act as inputs, pass through a Transformer for prediction, and then go through dimensionality reduction via a dense layer. The final outputs are normalized by the softmax function. The result is obtained by taking the index of the maximum of the output tensor. The boundaries $\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_{k-1}$ are determined by analyzing historical average AVI distributions (including the mean, variance, and extreme values) and aligning with practical operational requirements, based on the predesigned k classes. Assume a standard Transformer configuration with h=8 attention heads, b=12 multi-head attention blocks, and hidden dimension d=64. The sequence length is constrained to n=128 due to on-device memory limits. The computational complexities of the Transformer block and the subsequent dense layer are $O(bn^2d + 8nd^2)$ and $O(dk)$, respectively, with a total of 16,818,176 FLOPs when k=5. This workload is executable on a low-speed MCU (100 MHz ARM Cortex-M4) with an inference latency of up to 75 ms. The remaining issues are listed in Table XXIV.
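A minimal PyTorch sketch of the Fig. 27 predictor under the stated configuration (h=8, b=12, d=64, n=128, k=5) is given below; the input feature layout and the choice of predicting from the last time step are assumptions made for illustration.

```python
import torch
import torch.nn as nn

# Sketch of the k-class AVI predictor of Fig. 27 with the stated configuration
# (h=8 heads, b=12 blocks, d=64 hidden, n=128 sequence, k=5 classes). The input
# feature layout (sigma_S, sigma_D, road/velocity features) is assumed.

class AVIPredictor(nn.Module):
    def __init__(self, in_features=4, d=64, h=8, b=12, n=128, k=5):
        super().__init__()
        self.embed = nn.Linear(in_features, d)          # project history vectors to d
        self.pos = nn.Parameter(torch.zeros(1, n, d))   # learned positional encoding
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=h,
                                           dim_feedforward=4 * d, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=b)
        self.head = nn.Linear(d, k)                     # dimensionality reduction (dense)

    def forward(self, x):                               # x: (batch, n, in_features)
        z = self.encoder(self.embed(x) + self.pos)
        return self.head(z[:, -1, :]).softmax(dim=-1)   # normalized class scores

model = AVIPredictor()
history = torch.randn(2, 128, 4)           # two dummy sequences of input vectors
probs = model(history)
print(probs.shape, probs.argmax(dim=-1))   # (2, 5) probabilities and AVI levels
```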
B. Antenna Design

Due to the lower circuit gain $C_{SD}$, coil-based antennas are larger, especially in VLF-LA systems for TTE applications. In deep underground environments, the large size makes it difficult to use MIMO techniques. Several non-coil antenna types can be considered for future long-distance MICs:

1) Mechanical antenna: A mechanical antenna has a smaller size for mid-range and mobile MICs [198]-[200]. Such an antenna presents challenges stemming from inertia.
• Additional latency: The MIC protocols (e.g., modulations) need additional preset time slots between different symbols to overcome antenna inertia.
• Additional energy consumption: Changing the mechanical state requires additional energy to overcome inertia. Specifically, transmitting '010101' (5 state switches) requires more energy than '111000' (1 state switch).
• Variable data rate under the same packet length: For packets of the same length, transmitting '010101' takes 5 GTSs, while '111000' takes only 1 GTS, meaning '010101' transmits more slowly than '111000'.

2) Magnetic sensor: Mechanical antennas are often used at the Tx nodes, while magnetic sensors can be utilized at the Rx nodes. There are high-sensitivity magnetic sensors, such as TMR and quantum magnetic sensors, with potential applications in underground MICs. Due to the limitations of current magnetic sensor techniques, the receiver sensitivity of a magnetic sensor is much lower than that of a coil, and few studies investigate magnetic-sensor-based MICs for underground applications. Nevertheless, magnetic sensors have two advantages.
• Small antenna size: The size of a magnetic sensor is much smaller than that of a coil. For example, a TMR sensor in a small outline package 8 (SOP8) has dimensions of 6 mm × 5 mm × 1.5 mm, several orders of magnitude smaller than coils.
• Low crosstalk effect: The crosstalk effect among magnetic sensors can be greatly reduced, so techniques such as massive MIMO and intelligent reflecting surfaces can be introduced into MICs.
Given these advantages and the advancement of magnetic sensor techniques, magnetic sensors emerge as a highly promising option for MIC antenna design.

3) MI beamforming and massive MI MIMO: The massive MIMO technology plays a pivotal role in the fifth-generation (5G) mobile communication systems and beyond.
B. Antenna Design

Due to the lower circuit gain CSD, coil-based antennas are larger, especially in VLF-LA systems for TTE applications. In deep underground environments, the large size makes it difficult to use MIMO techniques. Several non-coil antenna types can be considered for future long-distance MICs:

1) Mechanical antenna: A mechanical antenna has a smaller size for mid-range and mobile MICs [198]-[200]. Such an antenna presents challenges stemming from inertia.
• Additional latency: The MIC protocols (e.g., modulations) need additional preset time slots between different symbols to overcome antenna inertia.
• Additional energy consumption: Changing the mechanical state requires additional energy to overcome inertia. Specifically, transmitting '010101' (5 state switches) requires more energy than '111000' (1 state switch).
• Variable data rate under the same packet length: For packets of the same length, transmitting '010101' takes 5 GTSs, while '111000' takes only 1 GTS, meaning '010101' transmits more slowly than '111000'.

2) Magnetic sensor: Mechanical antennas are often used in Tx nodes, while magnetic sensors can be utilized at Rx nodes. There are high-sensitivity magnetic sensors, such as TMR and quantum magnetic sensors. These sensors have potential applications in underground MICs. Due to the limitations of current magnetic sensor techniques, the receiver sensitivity of a magnetic sensor is much lower than that of a coil. Few studies investigate magnetic sensor-based MICs for underground applications. Nevertheless, magnetic sensors have two advantages.
• Small antenna size: The size of a magnetic sensor is much smaller than that of a coil. For example, a TMR sensor with a small outline package 8 (SOP8) has dimensions of 6 mm × 5 mm × 1.5 mm. These dimensions are several orders of magnitude smaller than those of coils.
• Low crosstalk effect: The crosstalk effect among magnetic sensors can be greatly reduced. Techniques such as massive MIMO and intelligent reflective surfaces can be introduced into MICs.
Given these advantages and the advancement of magnetic sensor techniques, magnetic sensors emerge as a highly promising option for MIC antenna design.

3) MI beamforming and massive MI MIMO: The massive MIMO technology plays a pivotal role in the fifth-generation (5G) mobile communication systems and beyond. The MI MIMO and beamforming techniques are also widely employed for wireless power transfer [201]-[205] and short-range MIC [94], [206]. However, these techniques exhibit extremely low performance for mid-range and long-range MICs due to the MI crosstalk effects among coils. There is no literature on massive MI MIMO. With the advancement of RPMA techniques and magnetic sensor technology, MI beamforming and massive MIMO techniques hold promise, as RPMAs and magnetic sensors generate minimal MI crosstalk effects among coils. For a coil-based MI system, avoiding crosstalk is crucial for the application of massive MIMO. Our simulation in Fig. 19 may assist in addressing this challenge.

C. MI Crosstalk Effect Prediction Strategies

It is observed in (23) that the crosstalk impedances Zpa1(S, D, R, ...) and Zpa2(S, D, R, ...) are crucial for crosstalk effect mitigation. Unfortunately, the expressions of the crosstalk impedances are complex multimodal functions, posing a challenge for predicting Zpa1(S, D, R, ...) and Zpa2(S, D, R, ...) using conventional methods. We propose a Transformer-based framework (see Fig. 28) for crosstalk impedance prediction in support of crosstalk effect mitigation. This framework is similar to that of the average AVI prediction (see Fig. 27) with k = 3. Its computational complexity is 16,801,792 FLOPs with the number of attention heads h = 8, multi-head attention blocks b = 12, and hidden dimension d = 64. The sequence length is n = 128. The latency remains under 75 ms on the 100 MHz Cortex-M4.

D. MI Communication-Navigation-Sensing Integrated System

The MCNSI system aims to achieve high intelligence, supporting high-quality localization and sensing services while achieving high-quality communication. As summarized in Table XXV, traditional MIC studies have focused on the stability of signals among different positions. By contrast, MI localization/navigation focuses on signal differences among different positions, while MI sensing emphasizes sensitivity to the direction and strength of magnetic fields, both of which are highly susceptible to interference and noise. Recently, there have been some studies on joint localization and communication in 5G/6G technologies using intelligent reflecting surfaces [209], [210]. These signal-reflection-based techniques may not be suitable for MICs. There are key issues listed below according to these challenges and Table XXV.
• Small MI signal difference: Since the space gain SSD in the MI channel decays with the 6th power of distance (see (4)), the difference in MI signal strength between two points is much smaller than that of an EMW signal in a long-distance communication environment.
• Chaotic received signal strength indicator (RSSI): For mobile MICs, velocity-dependent MI fast fading channels cause the MI RSSI around the MI base station to be chaotic.
• Balance between communication and navigation: In the MCNSI system, since small differences in MI signals are beneficial for MICs but detrimental to MI localization and navigation, the localization accuracy and the achievable communication rate often cannot reach their optima simultaneously. We need to balance these two performance metrics.
• Interference and noise: As both the MIC and MI navigation subsystems are expected to generate sufficiently strong signals, these signals can interfere with MI sensing. The carrier frequency of TTE MIC using VLF-LA is closer to that of MI sensing, making interference suppression techniques (e.g., frequency band isolation) more challenging for the MI sensing subsystems.
For the first issue, we can use dynamic weighted evolution/learning (DWE) from [115]. Furthermore, we consider generative adversarial networks (GANs) to obtain a super-resolution model of MI signal fingerprints. For the second issue, RL techniques can be used for stochastically changing communication environments. For the third issue, the key method is to obtain closed-form expressions for the ratio of time slot allocation between communication and navigation, and to formulate a joint optimization problem of communication and navigation. For the fourth issue, the joint time-space-frequency isolation method can be adopted for appropriate electromagnetic compatibility design.

TABLE XXV
COMPARISON OF MIC, MI LOCALIZATION/NAVIGATION AND MI SENSING
Aspects | MIC | MI localization/navigation (cf. [24], [207]) | MI sensing (cf. [208])
Key | Alternating magnetic fields | Magnetic field differences | Sensitivity to magnetic fields' direction & strength
Techniques | Modulation, channel coding, MAC, routing | Magnetic fingerprint, ToA, inertial navigation | Interference & noise suppression
Signals | Active sources | Active sources | Passive sources
Components | Coil, RPMA, M2I | Coil, RPMA, inertial measurement unit | TMR, giant magnetoresistance, Hall sensor, coil
Applications | UG-WSN, UW-WSN, UG robot communication | AUV, indoor positioning, UG robot navigation | Oxygen content detection, current measurement

Fig. 29. Near-field ranges (potential near-field range, lg(m), versus the conductivity of the media, S/m, for frequencies from 100 Hz to 10 MHz). The simulation parameters are listed in Table III, except for the media conductivity σu and frequency f.

E. Mixed-Field MI Channel Model

The MIC channel is modeled within the near-field range (k0 dSD ≪ 1) in the vast majority of MIC research. A few works, such as [78], [79], [126], focus on MIC in the non-near-field range (called mixed-field MIC) using (1), (2), and (3), where k0 dSD ≪ 1 does not hold. Under mixed-field conditions, the channel power gain GSD is difficult to sub-model as CSD, SSD, ESD and JSD. This makes the MIC channel model difficult to analyze, since more effects of device components and environmental factors must be considered. For example, for the near-field model (4), the antenna-vibration-based MI fast fading is primarily related to the MI polarization gain JSD, which represents the APOs. For the mixed field, more device and environment parameters (μu, εu, σu) also determine the antenna-vibration-based MI fast fading, according to (2) and (3). In scenarios with a high-conductivity medium, the near-field range is very small. As shown in Fig. 29, when the media conductivity is σu ≃ 0.01 S/m, the near-field range is less than 10 m under f0 = 100 kHz. For TTE MIC channels with higher frequency and media conductivity, the radiation field cannot be disregarded. Studies on existing upper-layer MIC protocols relying on a near-field model may encounter issues.
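The trend of Fig. 29 can be reproduced to order of magnitude with the complex wavenumber of a lossy medium. The sketch below takes the near-field boundary as the distance where |k0 dSD| reaches unity, i.e., dSD ≈ 1/|k0|; this criterion, and the use of free-space μ and ε as the baseline medium parameters, are simplifying assumptions rather than the exact model behind the figure.

```python
# Rough reproduction of the Fig. 29 trend: near-field range taken as 1/|k|,
# with k = w*sqrt(mu*eps)*sqrt(1 - 1j*sigma/(w*eps)) the complex wavenumber
# of the lossy medium (assumed criterion; mu_r = eps_r = 1 are placeholders).
import numpy as np

MU0, EPS0 = 4e-7 * np.pi, 8.854e-12

def near_field_range(f, sigma, mu_r=1.0, eps_r=1.0):
    w = 2 * np.pi * f
    k = w * np.sqrt(mu_r * MU0 * eps_r * EPS0) \
          * np.sqrt(1 - 1j * sigma / (w * eps_r * EPS0))
    return 1.0 / abs(k)

print(near_field_range(100e3, 0.01))  # ~11 m: same order as the "< 10 m" above
print(near_field_range(100e3, 1e-6))  # ~470 m in a nearly lossless medium
```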
F. Multi-Layer and Inhomogeneous Media

For TTE scenarios, the underground medium is unlikely to be homogeneous and isotropic. There are three scenarios:
• Multi-layered medium: The medium near mineral deposits is often multi-layered. In these layers, the mineral layer may have a high conductivity and even a high permeability. This may invalidate existing works.
• Inhomogeneous medium with arbitrary geometric shapes: Such a scenario appears more frequently than that with a multi-layered medium, e.g., underground tunnels like the gray thick line in Fig. 22, urban subway systems, and subterranean business districts.
• Dynamic medium: This scenario typically appears in oil fields, underground rivers, and mobile MICs. In this scenario, the MIC channel exhibits MI fast fading, despite its coherence time being longer than the symbol duration.
For the first scenario, the boundary conditions between different layers and the boundary between the near-field region and the radiation-field region need to be considered. For the second scenario, a geometrical approximation (see [7]) can be used to transform geometric shapes. Methods based on Maxwell equations (see [48], [126]) and finite element methods (FEM) (see Fig. 4) can analyze the MIC channel model. For the third scenario, the statistical characteristics of the dynamic medium are a key issue. While derivations based on classical probability theorems can be used, the attention mechanism can be applied to obtain crucial channel information.

G. Image Transmission and Deep JSCC

Fig. 30. Block diagram of P2P MIC based on deep JSCC.

Image and video transmission remains an intriguing yet unattainable function for TTE communications. It necessitates a higher achievable rate, potentially beyond Shannon's capacity for MICs utilizing VLF-LA. Using separate source-channel coding (SSCC), such as BCH and Polar coding, for further enhancing MI performance increases complexity with minimal gains, hindering sustainable development. In the EMWC area, the JSCC technique [211] has been explored to tackle this issue. Meanwhile, deep learning has been widely used to address issues on the physical layer of communication systems [192]. Deep JSCC [212]-[214] is a JSCC technique using deep learning. The source code for deep JSCC is available on GitHub [215]. It is a highly efficient approach for transmitting high-dimensional data. Employing deep JSCC techniques, one can compress high-dimensional data, including underground environment information, through multimodal semantic or other goal-oriented analyses. If the deep JSCC technique is successfully applied to MICs (see Fig. 30), the MIC achievable rate may exceed Shannon's capacity limit. Such a technique may make the transmission of images and even videos feasible. According to [216], the deep JSCC framework can be formulated as

x = Cα(Sβ(s)), (25)

where x is the encoded symbol; s is the input signal; Cα(·) is the neural network of the channel encoder with the parameter set α, including the MI device parameters and the underground environments; Sβ(·) is the neural network of the source encoder with the parameter set β. For text message transmission applications, deep semantic communication techniques [216], [217], which optimize text transmission rather than symbol transmission, can be employed to compress the overall effective data stream. Transfer learning (e.g., [216]) can be employed to reduce the number of pre-trained models required across different channel environments sharing the same statistical model. This is particularly relevant for MI fast fading channels in ad-hoc links, as the PDF is difficult to derive.
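The following is a minimal end-to-end sketch of (25) for the P2P link of Fig. 30. The layer sizes and the AWGN stand-in for the MI channel are illustrative assumptions only; a faithful implementation would replace the noise model with the MI channel power gain GSD.

```python
import torch
import torch.nn as nn

# Minimal sketch of (25), x = C_alpha(S_beta(s)); dimensions are placeholders.
class DeepJSCC(nn.Module):
    def __init__(self, img_dim=1024, latent=64, channel_uses=16):
        super().__init__()
        self.source_enc = nn.Sequential(nn.Linear(img_dim, latent), nn.ReLU())  # S_beta
        self.channel_enc = nn.Linear(latent, channel_uses)                      # C_alpha
        self.decoder = nn.Sequential(nn.Linear(channel_uses, latent), nn.ReLU(),
                                     nn.Linear(latent, img_dim))                # joint decoder

    def forward(self, s, snr_db=10.0):
        x = self.channel_enc(self.source_enc(s))          # encoded symbols
        noise_power = x.pow(2).mean() / 10 ** (snr_db / 10)
        y = x + noise_power.sqrt() * torch.randn_like(x)  # AWGN stand-in for the MI channel
        return self.decoder(y)

model = DeepJSCC()
s = torch.rand(8, 1024)                      # batch of flattened images
loss = nn.functional.mse_loss(model(s), s)   # end-to-end training objective
```

Training the encoder and decoder jointly against the reconstruction loss is what lets the code adapt to the channel statistics, which is the property that transfer learning across similar MI environments would exploit.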
H. Cooperative MIC

Fig. 31. The topology of the CMIC with multiple active relays featuring misaligned coils.

As depicted in Table XVII, researchers have applied either CMIC-1NR [6], [7], [10], [51] or CMIC-nAR [32], [121]. Consequently, some open issues arise, as follows.
• Crosstalk effect of CMI systems: The crosstalk effect in CMI systems is typically small and often ignored in long-distance CMIC investigations (see Fig. 19). For high-node-density networks, the key issue is the spatial distribution of negative crosstalk effects. Inevitably, large-scale UG-WSNs with high-density nodes generate many sufficiently close node pairs. Obtaining this spatial distribution can help upper-layer protocols, such as MAC and routing, avoid the crosstalk effect.
• Multiple active relays with misaligned coils: It has been shown that both CMIC-nAR and CMIC-1NR can improve MIC performance. However, the improvement from multiple active relays with misaligned antennas (see Fig. 31) is less clear due to spatial diversity limitations.
For the first issue, we can use KVL and fundamental matrix operations to obtain a closed-form expression of the MI crosstalk effect, as sketched below. However, it is still a challenge to find its spatial distribution.
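A minimal numerical version of the KVL approach is shown below for three coupled coils (source, relay, destination): solve Z·I = V, where Z carries the self-impedances on the diagonal and the mutual couplings jωM off the diagonal. All component values are placeholders chosen near resonance, not measured parameters.

```python
import numpy as np

# KVL sketch for n coupled coils: Z @ I = V, with Z_ii a coil's self-impedance
# and Z_ij = j*w*M_ij its mutual coupling (placeholder component values).
w = 2 * np.pi * 100e3                 # angular frequency at f0 = 100 kHz
R, L, C = 1.0, 1e-3, 2.533e-9         # series RLC tuned near resonance
z_self = R + 1j * w * L + 1 / (1j * w * C)

M = 1e-7 * np.array([[0, 1, 0.2],     # mutual inductances (H); the S-R and
                     [1, 0, 1],       # R-D pairs couple strongly, S-D weakly
                     [0.2, 1, 0]])
Z = np.diag([z_self] * 3) + 1j * w * M
V = np.array([1.0, 0.0, 0.0])         # only the source coil is driven

I = np.linalg.solve(Z, V)             # induced currents in every coil
print(np.round(np.abs(I), 4))         # nonzero I[2]: coupling reaches the
                                      # destination through the relay coil
```

Sweeping the geometry-dependent entries of M over node positions is the step that would yield the spatial distribution of the crosstalk effect, which remains the open problem noted above.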
I. MI Network and Architecture

Table XVIII shows that existing studies can form a basic protocol stack for a large-scale, runnable TTE network. Some open issues remain for further study.

1) Heterogeneous MI Network: In TTE environments, the free space for antenna deployment varies widely. It ranges from large spaces, such as subway stations and malls, to extremely small spaces, such as collapsed tunnels and mine shafts. We can deploy large antenna devices that constitute the backbone of a UG-WSN in large spaces and small antenna devices that constitute various levels of branch UG-WSNs in small spaces. Together, the backbone UG-WSN and the various levels of branch UG-WSNs form a heterogeneous MI UG-WSN (see Fig. 32), including cognitive MI and femtocell networks. The cognitive network addresses the issue of spectrum scarcity, while the femtocell network improves coverage and capacity in specific areas. Recent upper-layer protocol advancements highlight the significant potential of heterogeneous MI UG-WSNs on the following key aspects.

Fig. 32. Examples of heterogeneous MI UG-WSN. Here, primary users (PUs) and a macro base station (MBS) constitute the primary network. Secondary users (SUs) and an MBS can function as a part of the cognitive network. Femtocell users (FUs) and a femtocell base station (FBS) constitute the femtocell network.

• Channel licensing and spectrum sensing: The bandwidth of an MIC channel is extremely narrow (see Fig. 21). Perceiving and utilizing spectrum holes is a fundamental task for spectrum reuse in a heterogeneous MI network. Parameters like the relationship between the resonance frequency f0 and the working frequency f are crucial for channel licensing, spectrum sensing, and their optimization solutions (e.g., dynamic frequency selection).
• Connectivity: Existing investigations on MIC connectivity are under the assumption of uniform MI coverage. However, a heterogeneous MI UG-WSN has different antennas, such as SISO coils, RPMAs, M2I antennas, and orthogonal MIMO coils. These MI antennas can generate MI coverage spaces with varying shapes and ranges. This significantly impacts the existing statistical models of MIC connectivity, such as the CDF of isolated MI nodes.
• Power and throughput optimization: Multi-agent deep RL methods are widely used to address the joint power and throughput optimization problem under an unpredictable channel [218]-[220]. The key issue is insufficient bandwidth for exchanging cooperative packets among agents/players. This issue has led to the abandonment of cooperative multi-agent RL methods with high convergence performance.

2) MI MAC Solutions: The IEEE 802 standards divide the data link layer into the logical link control and MAC sub-layers. The logical link control sub-layer is responsible for functions not related to media access. These functions seem to be compatible with MI networks, but this has not been validated. For the MAC sub-layer, although the studies [54], [55], [168] have proposed MAC solutions, there are several open issues:
• High orientation-sensitivity SISO: For TTE scenarios, a SISO coil is a frequently used MI antenna. However, its orientation sensitivity may affect existing MAC solutions, such as channel sensing and packet design.
• CSMA-CA: Reducing the probability of collision is an important indicator for low-bandwidth channels. Methods such as supervised machine learning, send-window optimization, and time slot optimization can be introduced into collision avoidance (CA) schemes to minimize the search range in the time domain.
• Frame error ratio (FER) and effective payload ratio (EPR): Lowering the FER and increasing the EPR can improve the effective data rate. However, these two ratios often cannot be optimized simultaneously. Besides multi-objective and multi-agent optimization methods, antenna orientation information can be utilized as the frame header, secret key, and even the beacon sent from the MI coordinator. Additionally, the MI MAC headers in [55] require bit-level compression.
• Communication security: As MIC has low bandwidth and a frequency near resonance, it is more susceptible to interference than EMWC. Existing security schemes in EMW-based MAC layers, e.g., those in the IEEE 802.15.4 standard, are insufficient to address this issue.

3) MI Routing Solutions: Routing is an important function of the network layer for a large-scale network. Since 2019, there have been many studies on MI routing issues, such as issues of lifetime, transmission delay, and routing decision algorithms. However, the channel models in studies on MI routing issues are air-based MIC channels, where the eddy gain ESD and polarization gain JSD in TTE scenarios are ignored. Air-based MIC channels are similar to EMWC. Whether ESD(f) and JSD affect the MI routing solutions, e.g., the frequency switch scheme, remains an open issue.

4) MI Network Architecture: Most studies on MIC focus on the physical layer, such as MI channel modeling, channel estimation, and energy and capacity optimization. In recent years, researchers have increasingly dedicated efforts to upper-layer protocols like MAC and routing protocols (see Table XVIII). The OSI-originated network frameworks adhere to the principle of high cohesion and low coupling from the field of software engineering, e.g., the standard OSI, TCP/IP, and IEEE 802 frameworks. This principle can bring high communication performance, robustness, security, standardization, and compatibility to MI networks.
Although most solutions of OSI-originated network frameworks are compatible with MI networks, some issues can be considered for future studies and implementations.
• High cohesion and low coupling: High cohesion and low coupling in a network framework benefit system stability and compatibility. To achieve higher MIC performance, cross-layer optimization methods have been proposed in the literature (see Table XVIII). These methods, to some extent, increase the coupling among different layers. Improving MIC performance while maintaining the functionality of the mature network architecture is also a consideration for future research on MI optimization. In our framework for future research and implementation (see Fig. 26), we take this point into account.
• Standardization for MIC: As research on MIC network optimization temporarily boosts performance but disrupts the design principle of "low coupling and high cohesion", standards for MIC are lacking. While NFC has standards [221] (e.g., ISO 18092, NFCIP-1), its short-range and fixed-frequency design does not align with the need of UG-WSNs for wide coverage and heterogeneous network integration, hindering access of large-scale TTE MIC to SAGUMI networks.

Fig. 33. A typical TCP frame being propagated in a physical channel (preamble 7 B, start delimiter 0xAB 1 B, MAC header 14 B, IP header 20-60 B, TCP header 20-60 B, payload, FCS 4 B). In this figure, FCS denotes the Frame Check Sequence. Italicized text indicates the frame headers, needing further compression for the VLF-LA channel.

J. TCP/IP Support

MIC devices with TCP/IP support have broad application prospects. As most modules of TCP/IP were originally designed to be channel-agnostic, research on TCP/IP-specific support for MI networks has been largely overlooked. However, the general TCP/IP stack is not fully compatible with an ultra-narrow-band network due to its large packet header (i.e., low EPR). As shown below, some techniques can be applied for MIC-specific TCP/IP.

1) Header Compression (HC): The TCP/IP packet header, including the MAC frame header, IP header, and TCP/UDP header, is quite large. For example, the traditional IPv4 header and TCP header are both over 20 bytes in size (see Fig. 33), causing 74% overhead [185]. Such large headers total over 320 bits, taking over 3 seconds to transmit on a 100 bit/s TTE MI channel. Using header compression methods, we can achieve a potential compression ratio of over 99%. Besides the channel-agnostic solutions (e.g., [190]) proposed in EMWC, a dynamic SNR-dependent header size w.r.t. the frequency f and MI polarization gain JSD(θS, θD) can be considered to achieve further compression.

2) RTT Optimization (RTTO): In the TCP/IP framework, several schemes have been used to ensure that multiple applications operate over a link through various connection schemes, such as TCP, HTTP, and LLC connections. Most of these connection schemes would face fairness challenges due to RTT suppression in MI channels with a low capacity. Since RTT suppression depends heavily on the transmission channel, we can potentially formulate the corresponding optimization problem w.r.t. the SNR ΥSD(f, JSD(θS, θD)) of the MI links, which can be further transformed into a frequency and APO optimization problem. Meanwhile, a dynamic congestion window scheme based on RTT and SNR can be adopted.

3) Optimizing Data Chunking and Aggregation (ODCA): Appropriate data chunking and aggregation help reduce the number of retransmissions. For example, the TTE MIC with a 100 bit/s data rate can use 15-byte data chunks, much smaller than a typical maximum transmission unit size (1500 bytes). The receiver sends an aggregated ACK after accumulating multiple chunks. This strategy can be dynamic, adapting to the CSI of the MI channel determined by f and JSD(θS, θD).
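As a back-of-the-envelope illustration of the ODCA gain on the 100 bit/s link above, the sketch below compares per-chunk acknowledgements with one aggregated ACK; the 2-byte ACK frame and the window of 8 chunks are assumed values for the illustration.

```python
# Illustrative ODCA arithmetic on a 100 bit/s TTE link (ACK size assumed).
RATE = 100                 # bits/s
CHUNK = 15 * 8             # 15-byte chunk, as in the example above
ACK = 2 * 8                # assumed 2-byte ACK frame
N = 8                      # chunks per aggregation window

per_chunk_ack = N * (CHUNK + ACK) / RATE   # ACK after every chunk
aggregated = (N * CHUNK + ACK) / RATE      # one ACK per window
print(per_chunk_ack, aggregated)           # 10.88 s vs 9.76 s of airtime
```

The saving grows with the window size N, but a larger window also delays loss detection, which is why the text suggests adapting the strategy to the CSI of the MI channel.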
4) Intelligent Retransmission Strategy (IRS): Priority retransmission involves immediate retransmissions of critical data (e.g., control commands) and delayed batch retransmissions of non-critical data (e.g., sensor readings). Context-aware packet loss detection dynamically adjusts retransmission timeouts based on MI channel quality predictions (e.g., SNR) to avoid unnecessary retransmissions.

5) Machine-learning-based solution (ML-TCP/IP): For the specific MI channel, the aforementioned methods (i.e., HC, RTTO, ODCA, and IRS) can be further optimized by machine learning tools. Since the MI fast fading depends on the driver's velocity, we can use attention-based deep learning (e.g., Fig. 27) or LLMs to predict the average AVIs σS and σD, dynamically adjusting the parameters of the methods w.r.t. the expectation of the channel power gain (E[GSD(σS, σD)]) of the MI link.

TABLE XXVI
COMPARISON OF POTENTIAL TCP/IP SCHEMES FOR TTE MICS
Schemes | Primary objective | Key mechanism | Complexity | MIC optimization directions
HC | Reduce payload overhead | Robust HC tunnel with CID negotiation [185] | Low | SNR-dependent header size
RTTO | Improve fairness; minimize round-trip time | Optimize transmission/retransmission timing | Medium | RTT-SNR problem formulation
ODCA | Reduce retransmissions | Multiple small chunks with an aggregated ACK | Medium | CSI-aware optimization
IRS | Reduce retransmissions | Priority retransmission scheme | Low | SNR-aware retransmission
ML-TCP/IP | Adapt protocol to dynamics | AVI-driven scheme | High | Attention-based AVI predictions

Table XXVI compares the potential TCP/IP schemes (i.e., HC, RTTO, ODCA, IRS, and ML-TCP/IP) for TTE MIC systems, indicating that MI-specific optimizations can potentially establish relationships between their primary objectives and SNR-related parameters (e.g., APO and frequency).

K. Experiments and Testing in TTE MIC Systems

Some researchers have developed MIC testbeds for both the general UG scenario [222] and the TTE scenario [8]. Even though TTE MIC products have entered the market [119], a lack of robust experimental validation remains a key challenge in TTE MIC research.

1) Unrepeatable and unrepresentative UG environments: The TTE environment is highly heterogeneous. Materials' conductivity, permittivity, and moisture content can vary significantly, even within a small area. Such unrepresentative environments may invalidate theoretical assumptions, introducing measurement noise that researchers struggle to eliminate. Moreover, VLF signals in TTE MIC are highly vulnerable to geomagnetic fluctuations and industrial interference. High-precision equipment, essential for capturing weak MI signals, is costly and scarce in academic labs. To address this challenge, a cross-scale channel model spanning centimeter-level particles to kilometer-scale strata can be developed to match the validation requirements of target theories. A data-driven method similar to the one depicted in Fig. 27 can be considered to predict the environment.

2) Antenna deployment challenge: In deep subsurface environments, TTE MIC antennas (0.5∼4 m) often exceed the space in narrow underground areas like mine tunnels.
Flexible cables distort coil shapes from standard circles, especially on mobile vehicles, causing signal deviations from theoretical expectations. For future MI fast fading validation, high-permeability metallic components (e.g., vehicle bodies and tire bearings) act as unintended magnetic reflectors/absorbers, distorting field propagation and compromising the reliability of fading model validation (see Fig. 6). Mitigation can focus on compact rigidizable antenna designs (e.g., shape-memory structures), magnetic shielding for vehicle-mounted systems, and calibration protocols accounting for metallic interference. In addition, RPMA arrays are promising replacements for Tx coils in testing due to their smaller sizes.

L. Summary of Challenges and Opportunities

Table XXII outlines the key future challenges, tasks, and recommended approaches for advancing MIC research, specifically in TTE applications. These future directions cover critical areas, such as P2P MIC communication, MI relay techniques, and overall MI network development. One of the most pressing challenges is the development of a universal statistical model for MI fast fading. The lack of CLT support, combined with the complexity of antenna carrier configurations, has made this task particularly difficult. Furthermore, the impact of fast fading on existing MIC theorems is significant. Unlike traditional EMWC, the randomness and unpredictability of MI fast fading's expectation and variance, which are velocity-dependent, present unique challenges that must be addressed. There are several other challenges related to performance metrics, antenna designs, MI MAC and routing protocols, channel modeling in inhomogeneous media, CMIC techniques, and TTE MI experiments and testing. A comprehensive understanding and resolution of these challenges is key to advancing MIC technology. There are also promising research directions and novel techniques that will significantly enhance MIC performance and broaden its applications. These include MCNSI, MI massive MIMO, deep JSCC for MIC, heterogeneous MI network techniques, and support for TCP/IP frameworks. Despite their potential, there is currently a scarcity of published research on these topics. In this paper, we have proposed an attention-based deep learning framework that is not limited to the prediction of the average AVI. As described in Section VII, our MI network framework carefully considers the unresolved challenges and promising future research directions highlighted above.

IX. CONCLUSION

Since 2020, research on MIC for TTE applications, including MI fast fading and CMIC, has increased significantly. Research in MIC continues to advance across all network layers. This survey provides a comprehensive overview of the latest developments in MIC technology within the TTE environment, covering aspects such as channel modeling in deep-penetration environments, MI fast fading channels, CMIC, and MI network architecture. Specifically, we reviewed MI channel modeling and P2P MIC techniques, proposed a fine-grained decomposition of the MI channel power gain into four key factors, and outlined optimization directions based on these factors. We discussed the challenges and impacts of MI fast fading on MIC systems. We reviewed MI relay techniques, including the MI waveguide and CMIC, analyzed their performance in TTE environments, and theoretically explored MI crosstalk effects.
Moreover, we summarized advancements in multi-node MIC and large-scale MI networks by using the OSI-originated framework as a guide, and identified outstanding issues, including channel estimation, modulation, and coding in physical layer functions, MAC protocols in link layer functions, and connectivity, data collection, and routing in network layer functions. Notably, we conceived a new and promising MI network framework with TCP/IP and Linux support, which enables researchers to leverage abundant research and development resources, accelerating MIC studies. We also summarized its challenges and open issues from the OSI-originated perspective, focusing on accessing SAGUMI in future network systems, and delineated potential novel technical solutions, such as MCNSI, MI massive MIMO, deep JSCC for MIC, and heterogeneous MI network techniques.

REFERENCES

[1] M. Abdollahi, W. Ni, M. Abolhasan, et al., "Software-defined networking-based adaptive routing for multi-hop multi-frequency wireless mesh," IEEE Trans. Veh. Technol., vol. 70, no. 12, pp. 13073-13086, 2021.
[2] V. L. Orekhov and T. H. Chung, "The DARPA subterranean challenge: A synopsis of the circuits stage," Field Robot, vol. 2, pp. 735-747, 2022.
[3] Q. Cui, X. You, W. Ni, et al., "Overview of AI and communication for 6G network: Fundamentals, challenges, and future research opportunities," Science China Information Sciences, vol. 68, no. 7, p. 171301, 2025.
[4] C. Wrigley, "Going deep: Excavation, collaboration and imagination at the Kola Superdeep Borehole," Environment and Planning D: Society and Space, vol. 41, no. 3, pp. 549-567, 2023.
[5] I. I. Shovon and S. Shin, "Survey on multi-path routing protocols of underwater wireless sensor networks: Advancement and applications," Electronics, vol. 11, no. 21, p. 3467, 2022.
[6] Z. Zhang, E. Liu, X. Zheng, et al., "Cooperative magnetic induction based through-the-earth communication," in IEEE/CIC ICCC, Shanghai, China, Oct. 2014, pp. 653-657.
[7] H. Ma, E. Liu, R. Wang, et al., "Antenna optimization for decode-and-forward relay in magnetic induction communications," IEEE Trans. Veh. Technol., vol. 69, no. 3, pp. 3449-3453, 2019.
[8] Z. Zhang, E. Liu, X. Qu, et al., "Connectivity of magnetic induction-based Ad Hoc networks," IEEE Trans. Wireless Commun., vol. 16, no. 7, pp. 4181-4191, Apr. 2017.
[9] H. Ma, E. Liu, R. Wang, et al., "Channel characteristics for vehicle magnetic induction communication with road disturbance," IEEE Commun. Lett., vol. 24, no. 7, pp. 1363-1367, 2020.
[10] H. Ma, E. Liu, R. Wang, et al., "Effect of antenna deployment on achievable rate in cooperative magnetic induction communication," IEEE Commun. Lett., vol. 23, no. 10, pp. 1748-1752, Oct. 2019.
[11] H. Ma, E. Liu, Z. Fang, et al., "Fast-fading channel and power optimization of the magnetic inductive cellular network," IEEE Trans. Wireless Commun., vol. 23, no. 10, pp. 15096-15111, 2024.
[12] Z. Sun and I. F. Akyildiz, "Magnetic induction communications for wireless underground sensor networks," IEEE Trans. Antennas Propag., vol. 58, no. 7, pp. 2426-2435, Jul. 2010.
[13] S. Kisseleff, W. Gerstacker, R. Schober, et al., "Channel capacity of magnetic induction based wireless underground sensor networks under practical constraints," in IEEE WCNC, Shanghai, China, Apr. 2013, pp. 2603-2608.
[14] M. Zeeshan, M. Chavda, K. M. Ehshan, et al., "A review on non-RF underground positioning techniques for mining applications," IEEE Trans. Instrum. Meas., vol. 72, pp. 1-17, 2023.
[15] S. Kisseleff, I. F. Akyildiz, and W. H. Gerstacker, "Survey on advances in magnetic induction-based wireless underground sensor networks," IEEE Internet Things J., vol. 5, no. 6, pp. 4843-4856, Dec. 2018.
[16] M. Muzzammil, N. Ahmed, G. Qiao, et al., "Fundamentals and advancements of magnetic-field communication for underwater wireless sensor networks," IEEE Trans. Antennas Propag., vol. 68, no. 11, pp. 7555-7570, 2020.
[17] P. N. Wrathall and J. Sojdehei, "Magneto-inductive communications," in Info. Syst. Navy Divers & AUVs in Shallow Water, J. L. Wood-Putnam, Ed., vol. 3711, International Society for Optics and Photonics. SPIE, 1999, pp. 229-236.
[18] I. F. Akyildiz and E. P. Stuntebeck, "Wireless underground sensor networks: Research challenges," Ad Hoc Networks, vol. 4, no. 6, pp. 669-686, Nov. 2006.
[19] I. F. Akyildiz, Z. Sun, and M. C. Vuran, "Signal propagation techniques for wireless underground communication networks," Physical Communication, vol. 2, no. 3, pp. 167-183, Sep. 2009.
[20] F. Zhang, Z. Gong, S. Wang, et al., "A rotating-permanent-magnet array for ULF through-the-sea magnetic communications," IEEE Trans. Antennas Propag., vol. 71, no. 3, pp. 2300-2310, 2023.
[21] S. Yarkan, S. Guzelgoz, H. Arslan, et al., "Underground mine communications: A survey," IEEE Commun. Surveys Tuts., vol. 11, no. 3, pp. 125-142, 2009.
[22] A. E. Forooshani, S. Bashir, D. G. Michelson, et al., "A survey of wireless communications and propagation modeling in underground mines," IEEE Commun. Surveys Tuts., vol. 15, no. 4, pp. 1524-1545, 2013.
[23] S. Sheikhpour, A. Mahani, and H. F. Rashvand, Agricultural Applications of Underground Wireless Sensor Systems: A Technical Review. Wiley Semiconductors, 2017, pp. 351-379.
[24] N. Saeed, M.-S. Alouini, and T. Y. Al-Naffouri, "Toward the internet of underground things: A systematic survey," IEEE Commun. Surveys Tuts., vol. 21, no. 4, pp. 3443-3466, 2019.
[25] G. P. Hancke and B. J. Silva, "Wireless positioning in underground mines: Challenges and recent advances," IEEE Industrial Electronics Magazine, vol. 15, no. 3, pp. 39-48, 2021.
[26] D. Wohwe Sambo and A. Förster, "Wireless underground sensor networks: A comprehensive survey and tutorial," ACM Computing Surveys, vol. 56, no. 4, pp. 1-44, 2023.
[27] U. Raza and A. Salam, "A survey on subsurface signal propagation," Smart Cities, vol. 3, no. 4, pp. 1513-1561, 2020.
[28] A. Hrovat, G. Kandus, and T. Javornik, "A survey of radio propagation modeling for tunnels," IEEE Commun. Surveys Tuts., vol. 16, no. 2, pp. 658-669, 2014.
[29] S. M. Riurean, M. Leba, and A. C. Ionica, Conventional and Advanced Technologies for Wireless Transmission in Underground Mine. Cham: Springer International Publishing, 2021, pp. 41-125.
[30] E. Liu, Z. Sun, R. Wang, et al., Magnetic Communications: Theory and Techniques. New York, NY, USA: Cambridge University Press, 2024.
[31] A. K. Sharma, S. Yadav, S. N. Dandu, et al., "Magnetic induction-based non-conventional media communications: A review," IEEE Sensors Journal, vol. 17, no. 4, pp. 926-940, 2017.
[32] Y. Li, S. Wang, C. Jin, et al., "A survey of underwater magnetic induction communications: Fundamental issues, recent advances, and challenges," IEEE Commun. Surveys Tuts., vol. 21, no. 3, pp. 2466-2487, Feb. 2019.
[33] A. R. Silva and M. Moghaddam, "Strategic frequency adaptation for mid-range magnetic induction-based wireless underground sensor networks," in IEEE SysCon Proceedings, Vancouver, BC, Canada, Apr. 2015, pp. 758-765.
[34] Z. Sun, P. Wang, M. C. Vuran, et al., "MISE-PIPE: Magnetic induction-based wireless sensor networks for underground pipeline monitoring," Ad Hoc Networks, vol. 9, no. 3, pp. 218-227, 2011.
[35] C. B. Jenkins and A. Kiourti, "Wearable dual-layer planar magnetoinductive waveguide for wireless body area networks," IEEE Trans. Antennas Propag., vol. 71, no. 8, pp. 6893-6905, 2023.
[36] S. Li, Y. Sun, and W. Shi, "Capacity of magnetic-induction MIMO communication for wireless underground sensor networks," International Journal of Distributed Sensor Networks, vol. 11, no. 10, Jan. 2015.
[37] H. Guo, Z. Sun, J. Sun, et al., "M2I channel modeling for metamaterial-enhanced magnetic induction communications," IEEE Trans. Antennas Propag., vol. 63, no. 11, pp. 5072-5087, Nov. 2015.
[38] Z. Wang, J. Wang, and W. Cheng, "Multi-frequency resonant circuit based multi-user emergency through-the-earth communication with magnetic induction," in UCom, 2024, pp. 152-157.
[39] Z. Sun and I. F. Akyildiz, "On capacity of magnetic induction-based wireless underground sensor networks," in IEEE INFOCOM, Orlando, FL, Mar. 2012, pp. 370-378.
[40] U. Azad, H. C. Jing, and Y. E. Wang, "Link budget and capacity performance of inductively coupled resonant loops," IEEE Trans. Antennas Propag., vol. 60, no. 5, pp. 2453-2461, 2012.
[41] J. Jiang, K. Song, G. Wei, et al., "Capacity and bandwidth analysis of symmetric two-coil resonant magnetic communication using frequency divarication," IEEE Antennas Wireless Propag. Lett., vol. 14, pp. 370-373, 2015.
[42] J. Zhou and J. Chen, "Maximum distance estimation of far-field model for underwater magnetic field communication," in 2017 CCWC, 2017, pp. 1-5.
[43] H. Guo and Z. Sun, "Inter-media backscatter communications with magnetic induction," in IEEE ICC, 2019, pp. 1-6.
[44] Y. Liu, Q. Liu, S. Gong, et al., "Chirp-rate shift keying modulation for mechanical antenna based on rotating dipoles," IEEE Trans. Antennas Propag., vol. 71, no. 4, pp. 2989-2999, 2023.
[45] S. Kisseleff, I. F. Akyildiz, and W. Gerstacker, "On modulation for magnetic induction based transmission in wireless underground sensor networks," in IEEE ICC, Sydney, NSW, Australia, Jun. 2014, pp. 71-76.
[46] Z. Chen, G. Liu, H. Zhang, et al., "A novel polar code construction for magnetic induction-based wireless underground communications," IEEE Commun. Lett., vol. 27, no. 11, pp. 2884-2888, 2023.
[47] A. A. Alshehri, S.-C. Lin, and I. F. Akyildiz, "Optimal energy planning for wireless self-contained sensor networks in oil reservoirs," in IEEE ICC, Paris, France, May 2017, pp. 1-7.
[48] H. Guo and Z. Sun, "Channel and energy modeling for self-contained wireless sensor networks in oil reservoirs," IEEE Trans. Wireless Commun., vol. 13, no. 4, pp. 2258-2269, Apr. 2014.
[49] J. Ma, X. Zhang, M. Liao, et al., "Topology of magneto-inductive communication system based on the regular hexagonal array," IET Microwaves, Antennas & Propagation, vol. 9, no. 5, pp. 389-398, 2015.
[50] R. A. Khalil and N. Saeed, "Optimal relay placement in magnetic induction-based internet of underwater things," IEEE Sensors Journal, vol. 21, no. 1, pp. 821-828, 2021.
[51] Y. Zhang, "Cooperative magnetic induction communications: Performance analysis and power-location optimization," in IEEE WCNC, 2024, pp. 1-6.
[52] S. C. Lin, I. F. Akyildiz, P. Wang, et al., "Distributed cross-layer protocol design for magnetic induction communication in wireless underground sensor networks," IEEE Trans. Wireless Commun., vol. 14, no. 7, pp. 4006-4019, Jul. 2015.
[53] T. Li, Y. Zhao, Z. Hu, et al., "Resource allocation strategy in AUV-assisted edge computing UWSN with hybrid acoustic and MI communication," in IEEE VTC, 2024, pp. 1-6.
[54] N. Ahmed, Y. R. Zheng, and D. Pommerenke, "Multi-coil MI based MAC protocol for wireless sensor networks," in OCEANS 2016 MTS/IEEE Monterey, 2016, pp. 1-4.
[55] N. Ahmed, G. Qiao, Y. R. Zheng, et al., "Design and implementation of medium access control protocol for magneto-inductive wireless sensor networks using low power sensor nodes," IEEE Journal of Oceanic Engineering, vol. 49, no. 2, pp. 572-582, 2024.
[56] S. Wang, T. L. N. Nguyen, and Y. Shin, "Data collection strategy for magnetic induction based monitoring in underwater sensor networks," IEEE Access, vol. 6, pp. 43644-43653, 2018.
[57] G. Liu, "Frequency-switchable routing protocol for dynamic magnetic induction-based wireless underground sensor networks," IEEE Journal of Selected Areas in Sensors, vol. 1, pp. 1-8, 2024.
[58] P. Singh, R. P. Singh, and Y. Singh, "An optimal energy-throughput efficient cross-layer solution using naked mole rat algorithm for wireless underground sensor networks," Materials Today: Proceedings, vol. 48, pp. 1076-1083, 2022.
[59] V. Parameswaran, H. Zhou, and Z. Zhang, "Irrigation control using wireless underground sensor networks," in ICST, Kolkata, India, Dec. 2012, pp. 653-659.
[60] Z. Li, Z. Sun, T. Singh, et al., "Large range soil moisture sensing for inhomogeneous environments using magnetic induction networks," in IEEE GLOBECOM, Waikoloa, HI, USA, Dec. 2019, pp. 1-6.
[61] S. Sugumar and S. M. Santhanam, "Design of filamentary planar spiral coils with enhanced channel model for magnetic induction based underground communication," Transactions on Emerging Telecommunications Technologies, vol. 32, no. 10, p. e4282, 2021.
[62] C. Cariou, L. Moiroux-Arvis, F. Pinet, et al., "Internet of underground things in agriculture 4.0: Challenges, applications and perspectives," Sensors, vol. 23, no. 8, 2023. [Online]. Available: https://www.mdpi.com/1424-8220/23/8/4058
[63] X. Tan and Z. Sun, "An optimal leakage detection strategy for underground pipelines using magnetic induction-based sensor networks," in Proc. of WASA, Berlin, Heidelberg, 2013, pp. 2780-2785.
[64] X. Tan, Z. Sun, and P. Wang, "On localization for magnetic induction-based wireless sensor networks in pipeline environments," in IEEE ICC, London, UK, Jun. 2015, pp. 2780-2785.
[65] Y. Dong, J. Wu, X. Zhang, et al., "A novel dual-permanent-magnet mechanical antenna for pipeline robot localization and communication," Sensors, vol. 23, no. 6, p. 3228, 2023.
[66] X. Li, Q. Li, Q. Yu, et al., "A rotating permanent magnets positioning system for laying underground pipelines," IEEE Trans. Instrum. Meas., vol. 73, pp. 1-9, 2024.
[67] C. Park, Q. Xie, P. Chou, et al., "DuraNode: wireless networked sensor for structural health monitoring," in SENSORS, 2005 IEEE, Irvine, CA, USA, 2005, pp. 1-4.
[68] P. Singh, R. P. Singh, Y. Singh, et al., "Magnetic induction technology-based wireless sensor network for underground infrastructure, monitoring soil conditions, and environmental observation applications: Challenges and future aspects," Journal of Sensors, vol. 2022, pp. 1-18, Jan. 2022.
[69] A. Markham and N. Trigoni, "Magneto-inductive networked rescue system (MINERS): Taking sensor networks underground," in 2012 ACM/IEEE 11th IPSN, Beijing, China, Apr. 2012, pp. 1-11.
[70] S. A. Meybodi, M. Dohler, A. N. Askarpour, et al., "The feasibility of communication among pumps in a district heating system," IEEE Antennas and Propagation Magazine, vol. 55, no. 3, pp. 118-134, 2013.
[71] M. C. Domingo, "Magnetic induction for underwater wireless communication networks," IEEE Trans. Antennas Propag., vol. 60, no. 6, pp. 2929-2939, 2012.
[72] B. Gulbahar and O. B. Akan, "A communication theoretical modeling and analysis of underwater magneto-inductive wireless channels," IEEE Trans. Wireless Commun., vol. 11, no. 9, pp. 3326-3334, 2012.
[73] L. Erdogan and J.-F. Bousquet, "Dynamic bandwidth extension of coil for underwater magneto-inductive communication," in IEEE APSURSI, 2014, pp. 1576-1577.
[74] I. F. Akyildiz, P. Wang, and Z. Sun, "Realizing underwater communication through magnetic induction," IEEE Communications Magazine, vol. 53, no. 11, pp. 42-48, Nov. 2015.
[75] H. Guo, Z. Sun, and P. Wang, "Channel modeling of MI underwater communication using tri-directional coil antenna," in IEEE GLOBECOM, San Diego, CA, USA, 2015, pp. 1-6.
[76] ——, "Multiple frequency band channel modeling and analysis for magnetic induction communication in practical underwater environments," IEEE Trans. Veh. Technol., vol. 66, no. 8, pp. 6619-6632, 2017.
[77] D. Wei, S. S. Soto, J. Garcia, et al., "ROV assisted magnetic induction communication field tests in underwater environments," in Proc. ACM Int. Conf. WUWNet, Shenzhen, China: ACM, Dec. 2018, pp. 1-5.
[78] Y. Liu, S. Gong, Q. Liu, et al., "A mechanical transmitter for undersea magnetic induction communication," IEEE Trans. Antennas Propag., vol. 69, no. 10, pp. 6391-6400, 2021.
[79] H. Guo, Z. Sun, and P. Wang, "Joint design of communication, wireless energy transfer, and control for swarm autonomous underwater vehicles," IEEE Trans. Veh. Technol., vol. 70, no. 2, pp. 1821-1835, 2021.
[80] D. Wei, C. Huang, X. Li, et al., "Power-efficient data collection scheme for AUV-assisted magnetic induction and acoustic hybrid internet of underwater things," IEEE Internet Things J., vol. 9, no. 14, pp. 11675-11684, 2022.
[81] C. Wang, Y. Cui, X. Song, et al., "A novel underwater target detection method based on low-frequency magnetic signal generated by portable transmitter," IEEE Sensors Journal, vol. 23, no. 8, pp. 8459-8465, 2023.
[82] X. Wang, D. Cui, W. Zhang, et al., "Radiation study of low-power ultracompact rotating permanent magnetized mechanical antenna array," IEEE Trans. Antennas Propag., vol. 72, no. 7, pp. 5458-5468, 2024.
[83] X. He, Q. Zhou, and J. Zhang, "Rotating magnet-based mechanical antenna for magnetic inductive communications," IEEE Trans. Antennas Propag., vol. 72, no. 7, pp. 5502-5510, 2024.
[84] W. Zhang, Z. Cao, X. Wang, et al., "Design, array, and test of super-low-frequency mechanical antenna based on permanent magnet," IEEE Trans. Antennas Propag., vol. 71, no. 3, pp. 2321-2329, 2023.
[85] S. Wang and Y. Shin, "Efficient routing protocol based on reinforcement learning for magnetic induction underwater sensor networks," IEEE Access, vol. 7, pp. 82027-82037, 2019.
[86] L. Alsalman and E. Alotaibi, "A balanced routing protocol based on machine learning for underwater sensor networks," IEEE Access, vol. 9, pp. 152082-152097, 2021.
[87] Y. Zhang, D. Chen, et al., "Performance analysis of two-hop active relaying for dynamic magnetic induction based underwater wireless sensor networks," IEEE Trans. Commun., vol. 70, no. 10, pp. 6938-6949, Oct. 2022.
[88] I. V. Zhilin, O. M. Bushnaq, G. D. Masi, et al., "A universal multimode (acoustic, magnetic induction, optical, RF) software defined modem architecture for underwater communication," IEEE Trans. Wireless Commun., vol. 22, no. 12, pp. 9105-9116, 2023.
[89] G. Dumphart and A. Wittneben, "Stochastic misalignment model for magneto-inductive SISO and MIMO links," in IEEE PIMRC, Valencia, Spain, Sep. 2016, pp. 1-6.
[90] A. Farhad, Y. Zia, S. Farid, et al., "D-MAC: A dynamic MAC algorithm for the body area sensor networks based on IEEE 802.15.4," IJCSNS International Journal of Computer Science and Network Security, vol. 16, no. 5, pp. 29-35, May 2016.
[91] J. I. Agbinya and M. Masihpour, "Power equations and capacity performance of magnetic induction body area network nodes," in 2010 Fifth International Conference on Broadband and Biomedical Communications, Malaga, Spain, Dec. 2010, pp. 1-6.
[92] N. Thilak and R. Braun, "Near field magnetic induction communication in body area network," in ICDCS, Coimbatore, India, Mar. 2012, pp. 124-125.
[93] M. Masihpour, D. Franklin, and M. Abolhasan, "Multihop relay techniques for communication range extension in near-field magnetic induction communication systems," Journal of Networks, vol. 8, no. 5, pp. 999-1011, May 2013.
[94] S. Kisseleff, I. F. Akyildiz, and W. Gerstacker, "Distributed beamforming for magnetic induction based body area sensor networks," in IEEE GLOBECOM, Washington, DC, USA, Dec. 2016, pp. 1-7.
[95] G. Dumphart, B. I. Bitachon, and A. Wittneben, "Magneto-inductive powering and uplink of in-body microsensors: Feasibility and high-density effects," in IEEE WCNC, Marrakesh, Morocco, Apr. 2019, pp. 1-6.
[96] N. Golestani and M. Moghaddam, "Theoretical modeling and analysis of magnetic induction communication in wireless body area networks (WBANs)," IEEE Journal of Electromagnetics, RF and Microwaves in Medicine and Biology, vol. 2, no. 1, pp. 48-55, 2018.
[97] S. Banou, K. Li, and K. Chowdhury, "MAGIC: Magnetic resonant coupling for intra-body communication," in IEEE INFOCOM, 2020, pp. 1549-1558.
[98] N. Golestani and M. Moghaddam, "Human activity recognition using magnetic induction-based motion signals and deep recurrent neural networks," Nature Communications, vol. 11, no. 1, p. 2879, 2020.
[99] V. Mishra and A. Kiourti, "Wearable planar magnetoinductive waveguide: A low-loss approach to WBANs," IEEE Trans. Antennas Propag., vol. 69, no. 11, pp. 7278-7289, 2021.
[100] L. Huang, Z. Wei, B. Chen, et al., "Field-circuit combination method for solving the detuning problem of magnetic resonance human body communication," IEEE Journal of Electromagnetics, RF and Microwaves in Medicine and Biology, vol. 8, no. 2, pp. 94-101, 2024.
[101] M. Xu, P. S. Traore, and A. Asfour, "A high sensitivity digital giant magneto-impedance (GMI) sensor for magnetic communication," Measurement Science and Technology, vol. 35, no. 10, p. 104001, 2024.
[102] Z. Sun, P. Wang, M. C. Vuran, et al., "BorderSense: Border patrol through advanced wireless sensor networks," Ad Hoc Networks, vol. 9, no. 3, pp. 468-477, 2011.
[103] S. Kisseleff, I. F. Akyildiz, and W. Gerstacker, "Disaster detection in magnetic induction based wireless sensor networks with limited feedback," in 2014 IFIP WD, Rio de Janeiro, Brazil: IEEE, 2014.
[104] T. E. Abrudan, Z. Xiao, A. Markham, et al., "Distortion rejecting magneto-inductive three-dimensional localization (MagLoc)," IEEE J. Sel. Areas Commun., vol. 33, no. 11, pp. 2404-2417, Nov. 2015.
[105] O. Kypris, T. E. Abrudan, and A. Markham, "Magnetic induction-based positioning in distorted environments," IEEE Trans. Geosci. Remote Sens., vol. 54, no. 8, pp. 4605-4612, 2016.
[106] S.-C. Lin, A. A. Alshehri, P. Wang, et al., "Magnetic induction-based localization in randomly deployed wireless underground sensor networks," IEEE Internet Things J., vol. 4, no. 5, pp. 1454-1465, 2017.
[107] N. Saeed, M.-S. Alouini, and T. Y. Al-Naffouri, "3D localization for internet of underground things in oil and gas reservoirs," IEEE Access, vol. 7, pp. 121769-121780, 2019.
[108] A. Sheinker, B. Ginzburg, N. Salomonski, et al., "Localization of a mobile platform equipped with a rotating magnetic dipole source," IEEE Trans. Instrum. Meas., vol. 68, no. 1, pp. 116-128, 2019.
[109] B. Wei, N. Trigoni, and A. Markham, "iMag+: An accurate and rapidly deployable inertial magneto-inductive SLAM system," IEEE Trans. Mobile Comput., vol. 21, no. 10, pp. 3644-3655, 2022.
[110] Q. Li, X. Li, C. Wang, et al., "An inertial magneto-inductive positioning system based on GWO-PF algorithm," IEEE Trans. Instrum. Meas., vol. 71, pp. 1-10, 2022.
[111] A. Pal and K. Kant, "MagLoc: A magnetic induction based localization scheme for fresh food logistics," Internet of Things, vol. 19, p. 100552, 2022.
[112] M. Chavda, M. Zeeshan, C. O'Sullivan, et al., "Magnetic-induction-based positioning system using dual multiplexing technique," IEEE Sensors Letters, vol. 7, no. 9, pp. 1-4, 2023.
[113] Y.-L. Chen, Z.-C. Liu, A. Li, et al., "3-D positioning method of pipeline robots based on a rotating permanent magnet mechanical antenna," IEEE Sensors Journal, vol. 23, no. 7, pp. 7095-7104, 2023.
[114] A. T. Abraha and B. Wang, "A survey on scalable wireless indoor localization: Techniques, approaches and directions," Wireless Personal Communications, vol. 136, no. 3, pp. 1455-1496, 2024.
[115] W. Su, E. Liu, A. M. Calveras Augé, et al., "Design and realization of precise indoor localization mechanism for Wi-Fi devices," KSII Transactions on Internet and Information Systems, vol. 10, no. 12, pp. 5422-5441, 2016.
[116] Z. Zhang, E. Liu, X. Qu, et al., "Effective coverage for the connectivity of magnetic induction-based Ad Hoc networks," in IEEE GLOBECOM, San Diego, CA, USA, Dec. 2015, pp. 1-6.
[117] W. Ni, I. B. Collings, X. Wang, et al., "Radio alignment for inductive charging of electric vehicles," IEEE Trans. Ind. Informatics, vol. 11, no. 2, pp. 427-440, 2015.
[118] (2012, Dec.) Through-the-Earth, Post-Accident Communications - an emerging technology. CDC. [Online]. Available: https://www.cdc.gov/niosh/mining/UserFiles/Works/pdfs/2013-105.pdf
[119] Canarycomm-is - Vital Alert. Vital Alert. Accessed 08-04-2025. [Online]. Available: https://vitalalert.com/product/canarycommis/
[120] G. Liu, "A Q-learning-based distributed routing protocol for frequency-switchable magnetic induction-based wireless underground sensor networks," Future Gener. Comput. Syst., vol. 139, pp. 253-266, 2022.
[121] S. Kisseleff, B. Sackenreuter, I. F. Akyildiz, et al., "On capacity of active relaying in magnetic induction based wireless underground sensor networks," in IEEE ICC, London, UK, Jun. 2015, pp. 6541-6546.
[122] H. Guo, Z. Sun, and C. Zhou, "Practical design and implementation of metamaterial-enhanced magnetic induction communication," IEEE Access, vol. 5, pp. 17213-17229, 2017.
[123] Z. Li and Z. Sun, "Optimal active and reconfigurable meta-sphere design for metamaterial-enhanced magnetic induction communications," IEEE Trans. Antennas Propag., vol. 70, no. 9, pp. 8148-8163, 2022.
[124] H. Rezaei, V. Khilkevich, S. Yong, et al., "Mechanical magnetic field generator for communication in the ULF range," IEEE Trans. Antennas Propag., vol. 68, no. 3, pp. 2332-2339, 2020.
[125] C. A. Balanis, Antenna Theory: Analysis and Design, 3rd ed. Hoboken, New Jersey, USA: John Wiley & Sons, Inc., 2005.
[126] H. Guo and A. A. Ofori, "Joint channel and antenna modeling for magnetic induction communication in inhomogeneous media," IEEE Open Journal of the Communications Society, vol. 1, pp. 1457-1469, 2020.
[127] O. C. Fawole and M. Tabib-Azar, "An electromechanically modulated permanent magnet antenna for wireless communication in harsh electromagnetic environments," IEEE Trans. Antennas Propag., vol. 65, no. 12, pp. 6927-6936, 2017.
[128] M. Dionigi and M. Mongiardo, "Multi band resonators for wireless power transfer and near field magnetic communications," in IEEE MTT-S IWPT Workshop, Kyoto, Japan, 2012, pp. 61-64.
[129] J. Agbinya and H. Nguyen, "Principles and applications of frequency splitting in inductive communications and wireless power transfer systems," Wireless Personal Communications, vol. 107, pp. 987-1017, Apr. 2019.
[130] Z. Sun, I. F. Akyildiz, S. Kisseleff, et al., "Increasing the capacity of magnetic induction communications in RF-challenged environments," IEEE Trans. Commun., vol. 61, no. 9, pp. 3943-3952, 2013.
[131] J. Sogade, Y. Vichabian, A. Vandiver, et al., "Electromagnetic cave-to-surface mapping system," IEEE Trans. Geosci. Remote Sens., vol. 42, no. 4, pp. 754-763, Apr. 2004.
[132] M. K. Simon and M. S. Alouini, Digital Communications over Fading Channels: A Unified Approach to Performance Analysis, 1st ed., Wiley Series in Telecommunications and Signal Processing. New York, NY, USA: John Wiley & Sons, 2000.
[133] D. Tse and P. Viswanath, Fundamentals of Wireless Communication. New York, NY, USA: Cambridge University Press, 2005.
[134] T. Ma, X. Jiang, and H. Hu, "A novel FBMC/QAM scheme with interference mitigation over multipath time-varying fading channels," China Commun., vol. 22, no. 7, pp. 138-155, Jul. 2025.
[135] A. A. Farid and S. Hranilovic, "Diversity gain and outage probability for MIMO free-space optical links with misalignment," IEEE Trans. Commun., vol. 60, pp. 479-487, 2012.
[136] A. Jurado-Navas, J. M. Garrido-Balsells, J. F. Paris, et al., "A unifying statistical model for atmospheric optical scintillation," in Numerical Simulations of Physical and Engineering Processes. InTech, Rijeka, Croatia, 2011, vol. 181, no. 8, pp. 181-205.
[137] J.-Y. Wang, Y. Ma, R.-R. Lu, et al., "Hovering UAV-based FSO communications: Channel modelling, performance analysis, and parameter optimization," IEEE J. Sel. Areas Commun., vol. 39, pp. 2946-2959, 2021.
[138] M. Najafi, H. Ajam, V. Jamali, et al., "Statistical modeling of the FSO fronthaul channel for UAV-based communications," IEEE Trans. Commun., vol. 68, pp. 3720-3736, 2020.
[139] T. B. Aik, Q. S. Sen, and Z. Nan, "Characterization of multipath acoustic channels in very shallow waters for communications," in OCEANS 2006 - Asia Pacific, Singapore, 2006, pp. 1-8.
[140] A. G. Zajic, "Statistical modeling of MIMO mobile-to-mobile underwater channels," IEEE Trans. Veh. Technol., vol. 60, no. 4, pp. 1337-1351, 2011.
[141] J. Wang, W. Cheng, W. Zhang, et al., "Multi-frequency access for magnetic induction-based SWIPT," IEEE J. Sel. Areas Commun., vol. 40, no. 5, pp. 1679-1691, 2022.
[142] M. Chen, X. Qu, E. Liu, et al., "Statistical channel modeling and estimation for PMMA-based vehicle-mounted magnetic communication systems," IEEE Trans. Veh. Technol., pp. 1-6, 2025, early access.
[143] S. Gang, Vehicle Noise, Vibration, and Sound Quality. SAE International, Apr. 2012.
[144] S. Kisseleff, I. F. Akyildiz, and W. Gerstacker, "Transmitter-side channel estimation in magnetic induction based communication systems," in IEEE BlackSeaCom, Odessa, Ukraine: IEEE, May 2014, pp. 16-21.
[145] X. Tan and Z. Sun, "On environment-aware channel estimation for wireless sensor networks using magnetic induction," in IEEE INFOCOM WKSHPS, Atlanta, GA, USA, 2017, pp. 217-222.
[146] J. S. Glickstein, J. Liang, S. Choi, et al., "Power-efficient ELF wireless communications using electro-mechanical transmitters," IEEE Access, vol. 8, pp. 2455-2471, 2020.
[147] M. Gołkowski, J. Park, J. Bittle, et al., "Novel mechanical magnetic shutter antenna for ELF/VLF radiation," in IEEE APS/URSI Symp. & NRSM, Boston, MA, USA, 2018, pp. 65-66.
[148] F. Sun, F. Zhang, X. Ma, et al., "Research on ultra-low-frequency communication based on the rotating shutter antenna," Electronics, vol. 11, no. 4, p. 596, 2022.
[149] E. Slevin, M. B. Cohen, N. Opalinski, et al., "Broadband electrically small VLF/LF transmitter via time-varying antenna properties," IEEE Trans. Antennas Propag., vol. 70, no. 1, pp. 97-110, 2022.
[150] R. Zhu, Z. Cheng, X. Lv, et al., "Piezo-turned magnet rotation for ELF/SLF cross-medium communication in omni-direction," Advanced Optical Materials, vol. 12, no. 20, p. 2400461, 2024.
[151] Z. Cheng, J. Zhou, B. Wang, et al., "A bionic flapping magnetic-dipole resonator for ELF cross-medium communication," Advanced Science, vol. 11, no. 30, p. 2403746, 2024.
[152] Y. Cui, Y. Pei, Z. Yuan, et al., "The optimization of multilayer spacing for miniaturization of mechanical antenna based on unipolar electrets," IEEE Trans. Dielectr. Electr. Insul., vol. 31, no. 1, pp. 50-57, 2024.
[153] H. Guo and Z. Sun, "M2I communication: From theoretical modeling to practical design," in IEEE ICC, Kuala Lumpur, Malaysia, May 2016, pp. 1-6.
[154] P. Sharma, D. Bhatia, and R. S. Meena, "Metamaterial enhanced magnetization induced communication for wireless applications," in ICICIC, Indore, India, 2017, pp. 1-5.
[155] Z. Li and Z. Sun, "Antenna system optimization for active metamaterial-enhanced magnetic induction communications," in EuCAP, Krakow, Poland, Mar. 2019, pp. 1-5.
[156] J. A. Bickford, R. S. McNabb, P. A. Ward, et al., "Low frequency mechanical antennas: Electrically short transmitters from mechanically actuated dielectrics," in IEEE APS/URSI Symp. & NRSM, San Diego, CA, USA, 2017, pp. 1475-1476.
[157] Z. Sun and I. F. Akyildiz, "Optimal deployment for magnetic induction-based wireless networks in challenged environments," IEEE Trans. Wireless Commun., vol. 12, no. 3, pp. 996-1005, 2013.
[158] E. Shamonina, V. Kalinin, K. H. Ringhofer, et al., "Magneto-inductive waveguide," Electronics Letters, vol. 38, no. 8, pp. 371-373, Apr. 2002.
[159] M. Ishtiaq and S. H. Hwang, "Performance analysis of multihop underground magnetic induction communication," Electronics, vol. 10, no. 11, p. 1255, 2021.
[160] Z. Sun and I. F. Akyildiz, "Deployment algorithms for wireless underground sensor networks using magnetic induction," in IEEE GLOBECOM, Miami, FL, USA, 2010, pp. 1-5.
[161] S. Wang, T. L. N. Nguyen, and Y. Shin, "Energy-efficient clustering algorithm for magnetic induction-based underwater wireless sensor networks," IEEE Access, vol. 7, pp. 5975-5983, 2019.
[162] M. Masihpour and J. I. Agbinya, "Cooperative relay in near field magnetic induction: A new technology for embedded medical communication systems," in 2010 Fifth International Conference on Broadband and Biomedical Communications, Malaga, Spain, Dec. 2010.
[163] Y. W. P. Hong, W. J. Huang, and C. C. J. Kuo, Cooperative Communications and Networking: Technologies and System Design. New York: Springer Science & Business Media, 2010.
[164] R. Li, J. Zhang, and A. Dang, "Cooperative system in free-space optical communications for simultaneous multiuser transmission," IEEE Commun. Lett., vol. 22, no. 10, pp. 2036-2039, 2018.
[165] S. Li, P. Wang, W. Pang, et al., "Performance analysis for cooperative communication system in optical IoUT network with HDAF strategy," IEEE Photonics Journal, vol. 13, no. 3, pp. 1-22, 2021.
[166] Z.-Y. Liu, B.-Q. Ding, B.-Q. Wu, et al., "Adaptive combining multibranch frequency-domain detector for underwater acoustic cooperative communication," in 2015 12th ICCWAMTIP, Chengdu, China, Dec. 2015, pp. 26-29.
[167] F. Ye, H. Xu, and J. Gao, "Relay selection in underwater acoustic sensor networks for QoS-based cooperative communication using game theory," Computer Communications, vol. 219, pp. 104-115, Apr. 2024.
[168] N. Ahmed, A. Radchenko, D. Pommerenke, et al., "Design and evaluation of low-cost and energy-efficient magneto-inductive sensor nodes for wireless sensor networks," IEEE Systems Journal, vol. 13, no. 2, pp. 1135-1144, 2019.
[169] Z. Sun, I. F. Akyildiz, and G. P. Hancke, "Dynamic connectivity in wireless underground sensor networks," IEEE Trans. Wireless Commun., vol. 10, no. 12, pp. 4334-4344, 2011.
[170] Z. Xing, R. Wang, J. Wu, et al., "Achievable rate analysis and phase shift optimization on intelligent reflecting surface with hardware impairments," IEEE Trans. Wireless Commun., vol. 20, no. 9, pp. 5514-5530, 2021.
[171] Z. Xing, R. Wang, and X. Yuan, "Joint active and passive beamforming design for reconfigurable intelligent surface enabled integrated sensing and communication," IEEE Trans. Commun., vol. 71, no. 4, pp. 2457-2474, 2023.
[172] ——, "An efficient solution to phase-shift optimization for RIS enabled joint communication and sensing," IEEE Trans. Veh. Technol., pp. 1-6, 2024.
[173] J. M. Daladier and M. A. Labrador, "An adaptive logical link layer protocol for underwater acoustic communication channels," in OCEANS, Biloxi, MS, USA, 2009, pp. 1-8.
[174] S. Kisseleff, I. F. Akyildiz, and W. Gerstacker, "Interference polarization in magnetic induction based wireless underground sensor networks," in IEEE PIMRC Workshops, London, UK, 2013, pp. 71-75.
[175] B. Parrein, N. Morozs, and L. Toutain, "An internet protocol adaptation layer for underwater acoustic networks," in Forum Acusticum 2023, Turin, Italy, Sep. 2023.
[176] T. Matsuda and M. Yamamoto, "Performance analysis of TCP fairness between wired and wireless sessions," in IEEE PIMRC, vol. 5, Lisbon, Portugal, 2002, pp. 2429-2433.
[177] J. M. Daladier and M. A. Labrador, "A data link layer in support of swarming of autonomous underwater vehicles," in OCEANS 2009-EUROPE, Bremen, Germany, 2009, pp. 1-10.
[178] A. Dahlberg, M. Skrzypczyk, T. Coopmans, et al., "A link layer protocol for quantum networks," in ACM SIGCOMM, Beijing, China, Aug. 2019, pp. 159-173.
[179] P. Gupta and P. Kumar, "Critical power for asymptotic connectivity," in Proc. 37th IEEE Conf. Decision Control, vol. 1, Tampa, FL, USA, 1998, pp. 1106-1110.
[180] R. Wattenhofer, L. Li, P. Bahl, et al., "Distributed topology control for power efficient operation in multihop wireless ad hoc networks," in IEEE INFOCOM, vol. 3, Anchorage, AK, USA, 2001, pp. 1388-1397.
[181] C. Bettstetter, "On the minimum node degree and connectivity of a wireless multihop network," in ACM MOBIHOC, Lausanne, Switzerland, 2002.
[182] X. Wang, X. Lin, Q. Wang, et al., "Mobility increases the connectivity of wireless networks," IEEE/ACM Transactions on Networking, vol. 21, no. 2, pp. 440-454, 2013.
[183] J. Postel, "Internet protocol (IP)," IETF, RFC 791, Sep. 1981. [Online]. Available: https://www.rfc-editor.org/info/rfc791
[184] S. Deering and R. Hinden, "Internet protocol, version 6 (IPv6) specification," IETF, RFC 8200, Jul. 2017. [Online]. Available: https://www.rfc-editor.org/info/rfc8200
[185] W.-K. Jia, L.-T. Chen, and Z.-X. Xu, "An end-to-end IP header compressed packet forwarding framework for bandwidth-constrained networks," IEEE Transactions on Green Communications and Networking, vol. 6, no. 4, pp. 2156-2167, 2022.
[186] J. Postel, "Transmission control protocol," IETF, RFC 793, Sep. 1981. [Online]. Available: https://www.rfc-editor.org/rfc/rfc793
[187] J. Xu, B. Ai, L. Chen, et al., "When high-speed railway networks meet multipath TCP: Supporting dependable communications," IEEE Wireless Communications Letters, vol. 9, no. 2, pp. 202-205, 2020.
[188] Z. Shelby, K. Hartke, and C. Bormann, "The constrained application protocol (CoAP)," IETF, RFC 7252, Jun. 2014. [Online]. Available: https://www.rfc-editor.org/info/rfc7252
[189] V. Cerf, S. Burleigh, A. Hooke, et al., "Delay-tolerant networking architecture," IRTF, RFC 4838, Apr. 2007. [Online]. Available: https://www.rfc-editor.org/info/rfc4838
[190] L.-E. Jonsson and G. Pelletier, "RObust Header Compression (ROHC): A compression profile for IP," IETF, RFC 3843, Jun. 2004. [Online]. Available: https://www.rfc-editor.org/info/rfc3843
[191] J. C. Mankins, et al., "Technology readiness levels," Tech. Rep., Jun. 1995, Advanced Concepts Office, NASA HQ.
[192] C. Zhao, H. Du, D. Niyato, et al., "Generative AI for secure physical layer communications: A survey," IEEE Trans. on Cogn. Commun. Netw., vol. 11, no. 1, pp. 3-26, 2025.
[193] M. A. Ferrag, O. Friha, B. Kantarci, et al., "Edge learning for 6G-enabled internet of things: A comprehensive survey of vulnerabilities, datasets, and defenses," IEEE Commun. Surveys Tuts., vol. 25, no. 4, pp. 2654-2713, 2023.
[194] J. Corbet, A. Rubini, and G. Kroah-Hartman, Linux Device Drivers, 4th ed. O'Reilly Media, Inc., Feb. 2015.
[195] C. Benvenuti, Understanding Linux Network Internals: Guided Tour to Networking on Linux, 1st ed., A. Oram, Ed. Sebastopol, CA: O'Reilly Media, Inc., 2005.
[196] DSR Corporation, "ZBOSS: Open-source ZigBee protocol stack," Online, DSR Corporation, 2023, version 1.0. [Online].
Available: https://dsr-iot.com/downloads/open-sourcezigbee [197] Rockchip, "airockchip/rknn-toolkit2," Github, Rockchip, 2025. [Online]. Available: https://github.com/airockchip/rknn-toolkit2 [198] Y. Liu, M. Hou, and S. Gong, "A rotating permanent magnet transmitter for magnetic induction communication in RF-impenetrable environment," in IEEE NEMO, Hangzhou, China, 2020, pp. 1-3. [199] M. T. B. Tarek, S. Dharmasena, A. Madanayake, et al., "Power-efficient data modulation for all-mechanical ULF/VLF transmitters," in IEEE MWSCAS, 2018, pp. 759-762. [200] Q. Zhang and Z. Hao, "Research on ask modulation method for rotating magnet based mechanical antenna array system," in ICEMS, Chiang Mai, Thailand, 2022, pp. 1-5. [201] S. Kisseleff, I. F. Akyildiz, and W. Gerstacker, "Beamforming for magnetic induction based wireless power transfer systems with multiple receivers," in IEEE GLOBECOM, San Diego, CA, USA, Dec. 2015, pp. 1-7. [202] H. J. Kim, J. Park, K. S. Oh, et al., "Near-field magnetic induction MIMO communication using heterogeneous multipole loop antenna array for higher data rate transmission," IEEE Trans. Antennas Propag., vol. 64, no. 5, pp. 1952-1962, May 2016. [203] Z. Li and Z. Sun, "Enabling magnetic beamforming in MIMO wireless power transfer using reconfigurable metasurface," in IEEE GLOBECOM, 2020, pp. 1-6. [204] G. Yang, M. R. V. Moghadam, and R. Zhang, "Magnetic MIMO signal processing and optimization for wireless power transfer," IEEE Trans. Signal Process., vol. 65, no. 11, pp. 2860-2874, Jun. 2017. [205] Z. Xu and E. Rodriguez-Villegas, "A wireless power transfer mattress based system for perpetually operating physiological monitoring wearables," IEEE Trans. Biomed. Circuits Syst., vol. 18, no. 2, pp. 460-473, 2024. [206] J. Wang, W. Cheng, W. Zhang, et al., "Backscatter based bidirectional full-duplex magnetic induction communications," IEEE Trans. Commun., vol. 71, pp. 6258-6271, 2023. [207] V. Pasku, A. De Angelis, G. De Angelis, et al., "Magnetic field-based positioning systems," IEEE Commun. Surveys Tuts., vol. 19, no. 3, pp. 2003-2017, 2017. [208] M. A. Khan, J. Sun, B. Li, et al., "Magnetic sensors-a review and recent technologies," Engineering Research Express, vol. 3, no. 2, p. 022005, Jun 2021. [209] Z. Xing, R. Wang, X. Yuan, et al., "Location information assisted beamforming design for reconfigurable intelligent surface aided communication systems," IEEE Trans. Wireless Commun., vol. 22, no. 11, pp. 7676-7695, 2023. [210] R. Wang, Z. Xing, E. Liu, et al., "Joint localization and communication study for intelligent reflecting surface aided wireless communication system," IEEE Trans. Commun., vol. 71, no. 5, pp. 3024-3042, 2023. [211] F. Zhai, Y. Eisenberg, and A. K. Katsaggelos, "Joint source-channel coding for video communications," in Handbook of Image and Video Processing, 2nd ed., A. Bovik, Ed. Burlington MA, USA: Academic, 2005, pp. 1065-1082. [212] E. Bourtsoulatze, D. Burth Kurka, and D. G ̈und ̈uz, "Deep joint sourcechannel coding for wireless image transmission," IEEE Trans. on Cogn. Commun. Netw., vol. 5, no. 3, pp. 567-579, 2019. [213] J. Xu, B. Ai, W. Chen, et al., "Deep joint source-channel coding for image transmission with visual protection," IEEE Trans. on Cogn. Commun. Netw., vol. 9, no. 6, pp. 1399-1411, 2023. [214] H. Wu, Y. Shao, C. Bian, et al., "Deep joint source-channel coding for adaptive image transmission over MIMO channels," IEEE Trans. Wireless Commun., vol. 23, no. 10, pp. 15 002-15 017, 2024. [215] M. Zhang, H. Wu, G. 
Zhu, et al., "Semantics-guided diffusion for deep joint source-channel coding in wireless image transmission," IEEE Trans. Wireless Commun., pp. 1-1, 2025, early access, [216] H. Xie, Z. Qin, G. Y. Li, et al., "Deep learning enabled semantic communication systems," IEEE Trans. Signal Process., vol. 69, pp. 2663-2675, 2021. [217] J. Huang, K. Yuan, C. Huang, et al., "D2-JSCC: Digital deep joint source-channel coding for semantic communications," IEEE J. Sel. Areas Commun., vol. 43, no. 4, pp. 1246-1261, 2025. [218] X. Chen, Z. Zhao, and H. Zhang, "Stochastic power adaptation with multiagent reinforcement learning for cognitive wireless mesh networks," IEEE Trans. Mobile Comput., vol. 12, no. 11, pp. 21552166, Nov. 2013. [219] D. Shi, F. Tian, and S. Wu, "Energy efficiency optimization in heterogeneous networks based on deep reinforcement learning," in IEEE ICC Workshops, 2020, pp. 1-6. [220] F. G. Ortiz-Gomez, D. Tarchi, R. Mart ́ınez, et al., "Cooperative multiagent deep reinforcement learning for resource management in full flexible VHTS systems," IEEE Trans. on Cogn. Commun. Netw., vol. 8, no. 1, pp. 335-349, 2022. [221] ISO/IEC, Information Technology-Telecommunications and Information Exchange between Systems-Near Field CommunicationInterface and Protocol (NFCIP-1), ISO/IEC International Standard 18 092, 2013. [Online]. Available: https://webstore.ansi.org/standards/ iso/isoiec180922013 [222] X. Tan, Z. Sun, and I. F. Akyildiz, "A testbed of magnetic inductionbased communication system for underground applications," arXiv preprint , 2015. Honglei Ma received the Ph.D. degree from the 2020. He is currently with - ing, Shanghai . From 2009 to 2015, he served as a Senior Engineer with the Chinese Academy of Sciences. His current research interests include magnetic induction communications, ad-hoc network, wireless sensor network, and object detection for embedded systems. Erwu Liu (Senior Member, IEEE) received the Ph.D. degree from Huazhong 2001. He has been a Professor with Tongji University since 2011. Previously he was with Alcatel-Lucent (2001-2007) and Imperial College London (2007-2011). He studies localization & sensing, AI & blockchain, and wireless communications & IoT, with 120+ papers published and 70+ patents granted/pending. Prof. Liu won the Microsoft Indoor Localization Competition (IPSN) in 2016 and 2018, and developed the indoor navigation system for China International Import Expo (CIIE). He is the Community Dev. Co-Chair of IEEE Blockchain Technical Community (BCTC), and leads the local group development of the IEEE BCTC in Asia/China. He leads the Shanghai Engineering Research Center for Blockchain Applications and Services (SERCBAAS). He is an IET Fellow, the Founding Editor-in-Chief of IET Blockchain, and the Founding Chair of the IEEE Global Blockchain Conference (GBC). ACCEPTED BY IEEE COMMUNICATIONS SURVEYS & TUTORIALS FOR PUBLICATION, 41 Wei Ni (Fellow, IEEE) received the B.E. and Ph.D. degrees in Electronic Engineering from Fudan University, Shanghai, China, in 2000 and 2005, respectively. He is the Associate Dean (Research) in the - sity, Perth, and a Conjoint Professor at the University of New South Wales, Sydney, Australia. He was a Deputy Project Manager at the Bell Labs, Alcatel/Alcatel-Lucent from 2005 to 2008; a Senior Research Engineer at Nokia from 2008 to 2009; and a Senior Principal Research Scientist and Group Leader at the Commonwealth Scientific and Industrial Research Organisation (CSIRO) from 2009 to 2025. 
He has been an Editor for IEEE Transactions on Wireless Communications since 2018, IEEE Transactions on Vehicular Technology since 2022, IEEE Transactions on Information Forensics and Security and IEEE Communication Surveys and Tutorials since 2024, and IEEE Transactions on Network Science and Engineering since 2025. He served as Secretary, Vice-Chair, and Chair of the IEEE VTS NSW Chapter from 2015 to 2022, Track Chair for VTC-Spring 2017, Track Co-chair for IEEE VTC-Spring 2016, Publication Chair for BodyNet 2015, and Student Travel Grant Chair for WPMC 2014. Zhijun Fang (Senior Member, IEEE) received the Ph.D. degree from Shanghai Jiao Tong University, Shanghai, China. He is currently a Professor and the Dean with the . His current research interests include image processing, video coding, and pattern recognition. He was the General Chair of the Joint Conference on Harmonious Human Machine Environment (HHME) 2013 and the General Co-Chair of the International Symposium on Information Technology Convergence (ISITC) in 2014, 2015, 2016, and 2017. He received the "Hundred, Thousand and Ten Thousand Talents Project" in China. He received several major program projects of the National Natural Science Foundation of China and the National Key Research and Development Project of the Ministry of Science and Technology of China. Rui Wang (Senior Member, IEEE) received the Ph.D. degree from Shanghai Jiao Tong University, China, in 2013. From August 2012 to February 2013, he was a Visiting Ph.D. Student with the . From October 2013 to October of 2014, he was a Post-Doctoral Research Associate with the . He is currently a Professor with the . He is also with the Shanghai . He has published over 60 articles. His research interests include wireless communications, artificial intelligence, and wireless positioning. He is also an Associate Editor of IEEE ACCESS and an Editor of IEEE WIRELESS COMMUNICATIONS LETTERS. Yongbin Gao received the Ph.D. degree from Jeonbuk National University, South Korea. He is currently an Associate Professor with the . He has published 30 SCI articles in prestigious journals. Dusit Niyato (M'09-SM'15-F'17) is a professor in the . He received B.Eng. from King Mongkuts (KMITL), Thailand and Ph.D. in Electrical and Computer Engineering from the . His research interests are in the areas of mobile generative AI, edge general intelligence, quantum computing and networking, and incentive mechanism design. Ekram Hossain (Fellow, IEEE) is a Professor and the Associate Head (Graduate Studies) of the Department of Electrical and Computer Engineering, . He is a Member (Class of 2016) of the College of the Royal Society of Canada. He is also a Fellow of the Canadian Academy of Engineering and the Engineering Institute of Canada. He has won several research awards, including the 2017 IEEE Communications Society Best Survey Paper Award and the 2011 IEEE Communications Society Fred Ellersick Prize Paper Award. He was listed as a Clarivate Analytics Highly Cited Researcher in Computer Science in 2017-2025. Previously, he served as the Editor-in-Chief (EiC) for the IEEE Press (2018-2021) and the IEEE Communications Surveys and Tutorials (2012-2016). He was a Distinguished Lecturer of the IEEE Communications Society and the IEEE Vehicular Technology Society. He served as the Director of Magazines (2020-2021) and the Director of Online Content (2022-2023) for the IEEE Communications Society.
arXiv:2510.14850v1 [astro-ph.HE] 16 Oct 2025
General-relativistic radiation magnetohydrodynamics simulations of binary neutron star mergers: The influence of spin on the multi-messenger picture

Anna Neuweiler1, Henrique Gieg1, Henrik Rose1, Hauke Koehn1, Ivan Markin1, Federico Schianchi2,1, Liam Brodie3, Alexander Haber3,4, Vsevolod Nedora1,5, Mattia Bulla6,7,8, and Tim Dietrich1,5

1 Institut für Physik und Astronomie, Universität Potsdam, Haus 28, Karl-Liebknecht-Str. 24/25, 14476, Potsdam, Germany
2 Departament de Física & IAC3, Universitat de les Illes Balears, Palma de Mallorca, Baleares E-07122, Spain
3 Department of Physics, Washington University in St. Louis, St. Louis, MO 63130, USA
4 Mathematical Sciences and STAG Research Centre, University of Southampton, Southampton SO17 1BJ, United Kingdom
5 Max Planck Institute for Gravitational Physics (Albert Einstein Institute), Am Mühlenberg 1, Potsdam 14476, Germany
6 Department of Physics and Earth Science, University of Ferrara, via Saragat 1, I-44122 Ferrara, Italy
7 INFN, Sezione di Ferrara, via Saragat 1, I-44122 Ferrara, Italy
8 INAF, Osservatorio Astronomico d'Abruzzo, via Mentore Maggini snc, 64100 Teramo, Italy
(Dated: October 17, 2025)

The rich phenomenology of binary neutron star mergers offers a unique opportunity to test general relativity, investigate matter at supranuclear densities, and learn more about the origin of heavy elements. As multi-messenger sources, they emit both gravitational waves and electromagnetic radiation across several frequency bands. The interpretation of these signals relies heavily on accurate numerical-relativity simulations that incorporate the relevant microphysical processes. Using the latest updates of the bam code, we perform general-relativistic radiation magnetohydrodynamic simulations of binary neutron star mergers with two different spin configurations. We adopt a state-of-the-art equation of state based on relativistic mean-field theory developed for dense matter in neutron star mergers. To capture both dynamical ejecta and secular outflows from magnetic and neutrino-driven winds, we evolve the systems up to ∼100 ms after the merger at considerably high resolution with a grid spacing of ∆x ≈ 93 m across the neutron stars. Our results show that the non-spinning configuration undergoes a more violent merger, producing more ejecta with lower electron fraction and higher velocities, while the spinning configuration forms a larger disk due to its higher angular momentum. Although the initial magnetic field amplification within ≲10 ms after merger is similar in both systems, the non-spinning system reaches stronger magnetic fields and higher energies at later times. For a detailed view of the multi-messenger observables, we extract the gravitational-wave signal and compute nucleosynthesis yields as well as the expected kilonova and afterglow light curves from our ejecta profiles.

I. INTRODUCTION

Binary neutron star (BNS) mergers are high-energy multi-messenger events that emit gravitational waves (GWs) and electromagnetic (EM) radiation covering multiple frequency bands, e.g., [1–6]. As these compact objects orbit each other and eventually merge, a GW signal is generated, whereas the EM emission originates from the merger remnant and ejecta. The neutron-rich material that is ejected in this process is an important site of rapid neutron-capture (r-process) nucleosynthesis [7–11]. The radioactive decay of the formed nuclei powers an EM transient, called a kilonova.
Furthermore, BNS merger remnants can launch a relativistic jet, most likely driven by a large-scale magnetic field, e.g., [12–15], that can trigger a gamma-ray burst (GRB). Both the kilonova and the GRB produce an afterglow when the outflows interact with the surrounding interstellar medium. A simplified overview of the physical phenomena of the BNS merger and the associated observable GW and EM signatures is shown in Fig. 1, highlighting the different timescales at which the signals are emitted, ranging from seconds to milliseconds before the merger to years after the merger.

FIG. 1. BNS merger phenomena for different timescales, ranging from seconds to milliseconds before the merger and from milliseconds to years after the merger. As a simplified overview, we illustrate the physical phenomena together with the associated observable multi-messenger signals, consisting of GWs and EM signatures: inspiral and merger of the two neutron stars, ejection of lanthanide-rich (in red) and lanthanide-poor (in blue) material, formation of a black hole remnant with accretion disk, launch of a relativistic jet (in purple), formation of heavy elements via r-process, kilonova emission (in yellow), and non-thermal afterglows of the GRB (in purple) and the kilonova (in dark red).

Indeed, the first GW detection of a BNS merger in August 2017, associated with the event GW170817 [16], was accompanied by the kilonova signal AT2017gfo [17–22], the short GRB GRB170817A [23, 24], and its non-thermal afterglow [25–27]. Among others, this joint observation has been used to probe general relativity [28–34] and to tighten constraints on the equation of state (EOS) of neutron stars, e.g., [30, 35–44].

In addition to the intensively studied impact of the component masses, neutron star spins are essential to a proper modeling of BNS mergers and their observable signals. While observations confirm that isolated neutron stars can have dimensionless spins up to χ ∼ 0.4 [45], the fastest-spinning BNS systems known to merge within a Hubble time, i.e., PSR J0737-3039 [46] and PSR J1946+2052 [47], are expected to have spins of only χ ∼ 0.04 or χ ≲ 0.05 at merger. GW170817 itself provides only weak constraints on the binary components' spins, also affecting estimates on the component masses due to a degeneracy between the mass ratio q and the aligned spin components in gravitational waveforms. For a high-spin prior (|χ| ≤ 0.89), the inferred component masses range from 0.86 M⊙ to 2.26 M⊙ [16]. Numerous studies have shown that neutron star spins impact the lifetime of the remnant, the disk mass, and the amount of ejected material, e.g., [48–55]. Consequently, the stars' spin might also affect signatures of the EM counterparts.

In this work, we investigate the role of neutron-star spins in BNS mergers and assess the impact on nucleosynthesis yields and multi-messenger observables, i.e., the GW signal, the kilonova light curves, and the kilonova/GRB afterglow. For this purpose, we perform state-of-the-art numerical-relativity (NR) simulations that solve Einstein's field equations together with the equations for general-relativistic radiation magnetohydrodynamics (GRRMHD), including a proper treatment to model the underlying microphysics based on fundamental theories of the strong and weak interaction.
In particular, we use the bam code [56, 57] with recent updates to incorporate tabulated, nuclear-physics-informed EOSs [58], a first-order multipolar (M1) neutrino-transfer scheme [59], and magnetic fields [60]. Several studies have highlighted the importance of incorporating magnetohydrodynamic effects, e.g., [13, 61–68], and neutrino radiation, e.g., [69–75], to reliably model BNS mergers, their remnants, and outflowing material. Although the number of NR simulations that take both magnetic fields and neutrino radiation into account is growing, e.g., [15, 76–83], the BNS systems considered are still limited and – to our knowledge – restricted to neutron stars without spin. We point out that spinning BNS configurations, either with a neutrino leakage scheme or with the inclusion of magnetohydrodynamic effects, have already been simulated and analyzed, e.g., in Refs. [50, 54]. However, we include both magnetic fields and neutrino radiation with the M1 scheme to study spin effects in BNS mergers as comprehensively as possible.

We study equal-mass BNS systems in which both stars have a gravitational mass of 1.35 M⊙. One configuration is initially non-rotating, while the stars in the other have a dimensionless spin of χ = 0.1 aligned to the orbital angular momentum. Although this spin is higher than previously observed in merging BNS systems, it allows us to investigate the extent to which we can distinguish observable features between the two systems. We simulate these BNS systems with high resolution, using a grid spacing of ∆x ≈ 93 m to cover the neutron stars. This is necessary to capture small-scale dynamics and resolve instabilities relevant for magnetic field amplification. Additionally, we perform the same simulations with reduced resolution to verify our results and distinguish artifacts caused by finite resolution. The simulations are performed for about 100 ms into the post-merger to also cover magnetic- and neutrino-driven winds. In total, the simulations consumed 29.88 million CPUh at the HPE Apollo (Hawk) at the High-Performance Computing Center Stuttgart (HLRS, see also Appendix C).

We analyze the magnetic field amplification during and after the merger, and study the post-merger remnant, matter outflow, and its composition. Additionally, we co-evolve Lagrangian tracer particles that we use to study nucleosynthesis in post-processing. In order to connect the simulated BNS systems to observable signatures, we extract the GW strain and the ejecta from the simulation data. Based on the latter, we calculate the associated kilonova signal and its afterglow.

The article is structured as follows: Section II summarizes the numerical methods of the employed codes, the simulated BNS configurations, and the EOS used. The results of the BNS simulations are discussed in Sec. III, focusing on the merger remnant, magnetic field amplification, and matter outflows, as well as the nucleosynthesis yields of the ejecta. The multi-messenger observables, including the GW signal, kilonova light curves, and afterglow emissions, are presented in Sec. IV. Finally, Sec. V summarizes our key findings and results. Unless stated otherwise, we use a (−, +, +, +) signature for the metric and geometric units with G = c = M⊙ = 1.

II. METHODS AND SETUPS

A. Spacetime and matter evolution

We perform the evolution of the BNS configurations using bam [56, 57, 84, 85], incorporating recent extensions for neutrino radiation [58, 59] and magnetic fields [60].
bam solves Einstein's field equations numerically in the 3+1 formulation. For the spacetime evolution, we use the Z4c reformulation with constraint damping terms [86, 87], together with the 1+log slicing [88] and Γ-driver shift [89] conditions.

For the matter evolution, our GRRMHD implementation employs the Valencia formulation [90–93]. We adopt the ideal magnetohydrodynamics approximation, i.e., assuming infinite conductivity. To ensure the divergence-free constraint for the magnetic field and prevent the formation of unphysical magnetic monopoles, we apply the hyperbolic divergence-cleaning scheme [94, 95].¹ Neutrino-driven interactions are modeled using the M1 scheme [73, 97–102]. Details about the implementation, as well as the set of weak reactions employed in this work, can be found in Ref. [59], except for elastic scattering on α-particles. In this work, we employ the elastic opacities of Ref. [103] as implemented in Ref. [58]. We highlight that, in this manner, the opacities are computed using the same microscopic model as that of the EOS described in Sec. II D. Moreover, we have implemented the computation of neutrino emissivities in the trapped regime based on black body functions at equilibrium as in Refs. [73, 104] (see Appendix A). This improves the stability under rapid variations of the fluid's temperature.

¹ We note that our implementation had a mistake in the flux computation of the magnetic field in the damping term. We analyze the average relative error weighted by the flux and estimate it to be < 0.1 %. Hence, we assume the bias to be negligible in the simulations presented, but we remark that the same issue existed in previous studies, i.e., [60, 96].

The computational grid in bam is structured as a hierarchy of nested refinement levels l = 0, 1, ..., L − 1, where each of the L levels contains one or more Cartesian boxes with fixed grid spacing. The resolution between successive levels increases by a factor of two, such that the grid spacing on level l is given by h_l = h_0/2^l. To ensure that the neutron stars are always covered by the finest refinement level, the Cartesian boxes for the inner levels with l ≥ l_mv move dynamically to track the neutron stars.

We use a fourth-order Runge-Kutta scheme for time integration with a Courant-Friedrichs-Lewy factor of 0.25. bam employs the Berger-Oliger scheme [105] for local time stepping (see Ref. [56]), and the Berger-Colella scheme [106] to ensure the conservation of baryonic mass, energy, and momentum across refinement boundaries (see Ref. [84]).

We approximate the spatial derivatives of the metric using fourth-order finite-difference stencils. For the evolution of fluid variables, we use high-resolution shock-capturing methods. Specifically, we employ the fifth-order weighted-essentially-non-oscillatory WENOZ [107] reconstruction and the Harten-Lax-van-Leer (HLL) Riemann solver [108] with a two-wave approximation.

We adopt a cold, static atmosphere to model the vacuum region surrounding the compact objects. In particular, the fluid quantities of a grid cell are set to atmosphere values once the rest-mass density ρ falls below a density threshold ρ_thr. The atmosphere density ρ_atm is chosen as the minimum rest-mass density ρ_min covered by the EOS table used (see Sec. II D). The temperature is set to 0.1 MeV, and the fluid velocity is set to zero. The magnetic field remains unchanged. The density threshold is defined as ρ_thr = 10 × ρ_atm to avoid fluctuations around the density floor.
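To make this treatment concrete, the following is a minimal sketch of such a density floor acting on NumPy arrays; all names are illustrative and this is not the actual bam implementation:

```python
import numpy as np

def apply_atmosphere(rho, temperature, velocity, rho_atm, t_atm=0.1):
    """Minimal sketch of the static-atmosphere floor described above.

    Cells with rho below rho_thr = 10 * rho_atm are reset to atmosphere
    values: rho -> rho_atm, T -> 0.1 MeV, fluid velocity -> 0; the
    magnetic field is deliberately left unchanged.
    """
    rho_thr = 10.0 * rho_atm                  # threshold above the floor avoids oscillations
    mask = rho < rho_thr
    rho = np.where(mask, rho_atm, rho)        # floor at the minimum density of the EOS table
    temperature = np.where(mask, t_atm, temperature)
    velocity = np.where(mask[..., None], 0.0, velocity)  # velocity has a trailing 3-vector axis
    return rho, temperature, velocity
```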
Furthermore, in regions with rest-mass densities below 10^8 × ρ_atm, we follow the procedure described in Ref. [60] to avoid enhanced oscillations when using high-order reconstruction methods. The oscillations of all reconstructed variables are examined, and if necessary, we fall back locally to a lower-order method, specifically the third-order convex-essentially-non-oscillatory (CENO3) scheme [109, 110]. In addition, physical validity is ensured by demanding a positive rest-mass density and a positive pressure. If these cannot be satisfied even with the lower-order scheme, we fall back to a linear reconstruction method.

B. Codes used for post-processing

For analyzing the simulation data, we additionally use the nuclear reaction network WinNet [111] to determine nuclear abundances, the radiative transfer code possis [112, 113] to calculate kilonova light curves, and the pyblastafterglow code [114–116] to obtain the kilonova and GRB afterglows. The methods employed by each code are briefly outlined below, along with a description of how data from the NR simulations are incorporated.

1. WinNet

To analyze and understand the properties of the outflowing material, we employ tracer particles that record the history of Lagrangian fluid trajectories (see Appendix B for details). We use these thermodynamic trajectories as inputs to compute nucleosynthesis yields with the single-zone reaction network code WinNet [111].
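As a rough illustration of how such Lagrangian tracers can be recorded (the actual scheme is described in Appendix B of the paper; the RK2 integrator and the interpolation below are our own simplifying assumptions), a passive tracer is advected with the local fluid velocity while sampling the thermodynamic state:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def advect_tracer(x0, times, grids, vel_snaps, rho_snaps, temp_snaps):
    """Sketch: integrate dx/dt = v(x, t) with a midpoint (RK2) step, using the
    velocity snapshot of each interval, and record (t, rho, T) along the path.

    grids: (x, y, z) 1D coordinate arrays of a uniform grid;
    vel_snaps[n]: velocity field of shape (nx, ny, nz, 3) at times[n];
    rho_snaps, temp_snaps: matching scalar fields.
    """
    x = np.asarray(x0, dtype=float)
    trajectory = []
    for n in range(len(times) - 1):
        dt = times[n + 1] - times[n]
        v_itp = RegularGridInterpolator(grids, vel_snaps[n])
        v1 = v_itp(x[None])[0]                    # velocity at the current position
        v2 = v_itp((x + 0.5 * dt * v1)[None])[0]  # midpoint velocity (frozen snapshot)
        x = x + dt * v2
        rho = RegularGridInterpolator(grids, rho_snaps[n])(x[None])[0]
        tem = RegularGridInterpolator(grids, temp_snaps[n])(x[None])[0]
        trajectory.append((times[n + 1], rho, tem))
    return np.array(trajectory)  # thermodynamic trajectory, usable as network input
```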
Specifically, we take a 3D snapshot at a time tcut, when the entire ejecta is still fully within the computational domain. Subsequently, we use the ejecta information from a detection sphere at r ≃440 km. Both data sets are rescaled assuming homologous expansion and combined to be used as input in possis, representing the ejecta at a reference time t0. While Refs. [122, 123] demonstrated that the ejecta may not yet be fully homol- ogously expanding at O (100 ms) after the merger, the resulting biases in light curve computation using possis have been shown to be negligible for t > 80 ms after the merger [122] and this procedure has proven to give robust results. For the computation of light curves, possis continues to expand the ejecta homologously, assuming a constant velocity vi of each fluid cell. The code generates photon packets at each time step and assigns them an energy, frequency, and direction of propagation. possis employs heating rate libraries from Ref. [124] and thermalization efficiencies from Refs. [125, 126]. Furthermore, it adopts wavelength- and time-dependent opacities as functions of local ejecta properties, i.e., rest-mass density, temper- ature, and electron fraction (ρ, T, Ye), from Ref. [127]. The photon packets are propagated through the ejecta, accounting for interactions with matter through electron scattering and bound-bound absorption that alter their assigned propagation direction, frequency, and energy. Finally, synthetic observables for different observation angles ι are computed ‘on the fly’ using Monte Carlo estimators with the event-based technique discussed in Ref. [112]. We perform the radiative transfer simulations using Nph = 106 photon packets. 3. pyblastafterglow The outflows from BNS mergers produce not only intrin- sic radiation in the form of a kilonova, but also radiation through interactions with the ambient matter. Ejected matter with relativistic escape velocities hits the inter- stellar medium surrounding the merger site, producing a non-thermal EM signature. This signature, commonly referred to as an afterglow, arises from relativistic elec- trons gyrating around magnetic field lines in collisionless shocks. Specifically, BNS mergers can produce two types of afterglows: a GRB afterglow from an ultra-relativistic, collimated jet and a kilonova afterglow from the mildly relativistic, circum-planar ejecta. We determine the expected afterglow emissions from our simulations using pyblastafterglow [114–116], a modular code that evolves the dynamics of the discretized blast wave semi-analytically as it hits the cold, constant- density ambient interstellar medium. In a second step, it calculates the energy distribution of accelerated electrons and their synchrotron emission. For the kilonova afterglow calculation, pyblastafter- glow takes the angular ejecta velocity distributions from our NR simulations as input to initialize the blast waves. The code assumes azimuthal symmetry, i.e., the ejecta velocities only depend on the polar angle. Each blast wave is evolved independently by solving energy conservation equations [114]. To determine the emitted radiation, it is important to note that the dynamical and secular ejecta from BNS mergers are at most mildly relativistic, with the velocity and Lorentz factor obeying βΓ ≲5. 
3. pyblastafterglow

The outflows from BNS mergers produce not only intrinsic radiation in the form of a kilonova, but also radiation through interactions with the ambient matter. Ejected matter with relativistic escape velocities hits the interstellar medium surrounding the merger site, producing a non-thermal EM signature. This signature, commonly referred to as an afterglow, arises from relativistic electrons gyrating around magnetic field lines in collisionless shocks. Specifically, BNS mergers can produce two types of afterglows: a GRB afterglow from an ultra-relativistic, collimated jet and a kilonova afterglow from the mildly relativistic, circum-planar ejecta.

We determine the expected afterglow emissions from our simulations using pyblastafterglow [114–116], a modular code that evolves the dynamics of the discretized blast wave semi-analytically as it hits the cold, constant-density ambient interstellar medium. In a second step, it calculates the energy distribution of accelerated electrons and their synchrotron emission.

For the kilonova afterglow calculation, pyblastafterglow takes the angular ejecta velocity distributions from our NR simulations as input to initialize the blast waves. The code assumes azimuthal symmetry, i.e., the ejecta velocities only depend on the polar angle. Each blast wave is evolved independently by solving energy conservation equations [114]. To determine the emitted radiation, it is important to note that the dynamical and secular ejecta from BNS mergers are at most mildly relativistic, with the velocity and Lorentz factor obeying βΓ ≲ 5. Previous studies have shown that in these types of mildly relativistic collisionless shocks, both a thermal (Maxwellian) and a non-thermal (power-law) electron component can potentially deliver significant contributions to the produced synchrotron spectrum [128, 129]. This is in contrast to highly relativistic shocks with Γ ≫ 1, where the contribution from the thermal population is negligible [130, 131]. Hence, pyblastafterglow includes an implementation of the thermal synchrotron model from Ref. [128] that is used to determine the kilonova afterglow emission. Thus, computing kilonova afterglows requires us to set five additional parameters: the ambient density n_0, the power-law index of the non-thermal electron component p, the fractions of the internal blast-wave energy that are converted to magnetic fields, ϵ_B, and used for particle acceleration, ϵ_e, and, finally, the fraction ϵ_T of the shock energy that is converted to thermal electrons.

For the calculation of the GRB afterglow, we assume a Gaussian jet with the following distribution for the isotropic-energy equivalent:

$E_{\rm iso}(\theta) = \begin{cases} E_0 \exp\!\left(-\dfrac{\theta^2}{2\theta_c^2}\right) & \text{if } \theta \le \theta_w, \\ 0 & \text{else}. \end{cases}$  (1)

This structure is inspired by numerical jet simulations [132–134]. Here, E_0 is the central isotropic kinetic energy-equivalent, θ_c denotes the core angle, and θ_w is the cut-off. Since we cannot infer the jet structure from our NR simulations, we will resort to fiducial values taken from GRB170817A, as discussed below in Sec. IV C. The jet's blast wave elements are evolved according to the same shock-jump conditions and lateral spreading prescriptions of Ref. [116]. The non-thermal electron distribution is evolved according to a Fokker-Planck-type continuity equation and convolved with the classical synchrotron kernel to determine the comoving radiation [135]. This step relies on the microphysical parameters p, ϵ_e, ϵ_B with the same meaning as above for the kilonova afterglow, although they do not necessarily need to coincide with the values for the kilonova blast waves.

We further assume that GRB and kilonova blast waves evolve independently. This simplification is justified by the fact that only a small part of the kilonova blast wave travels through the circum-burst density of the collimated jet. These are mainly the polar ejecta, which in our case carry the lowest mass, and we confirmed by excising their blast waves from our kilonova afterglow computations that they do not affect the light curves. Moreover, the investigation in Ref. [114] shows that even if the blast waves are coupled to the pre-shocked density of the GRB jet, the effect on the dynamics of the kilonova ejecta is small.
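For illustration, the Gaussian jet structure of Eq. (1) can be evaluated as follows; the parameter values in the example are placeholders, not the fiducial GRB170817A values adopted in Sec. IV C:

```python
import numpy as np

def eiso_gaussian_jet(theta, e0, theta_c, theta_w):
    """Sketch of Eq. (1): E_iso(theta) = E0 exp(-theta^2 / (2 theta_c^2))
    for theta <= theta_w, and zero outside the cut-off angle."""
    theta = np.asarray(theta, dtype=float)
    profile = e0 * np.exp(-theta**2 / (2.0 * theta_c**2))
    return np.where(theta <= theta_w, profile, 0.0)

# Illustrative values only (erg and radians; not the paper's fiducials):
angles = np.linspace(0.0, 0.3, 7)
print(eiso_gaussian_jet(angles, e0=1.0e52, theta_c=0.085, theta_w=0.25))
```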
C. Configurations

We present simulations for two equal-mass BNS systems with gravitational masses of 1.35 M⊙ and baryonic masses M_b = 1.48 M⊙, corresponding to neutron stars in isolation with coordinate radius R = 12.22 km and tidal deformability Λ = 478.6. We employ the ABHT(QMC-RMF3) EOS [136] to describe neutron-star matter, satisfying constraints from terrestrial experiments, astrophysical observations, and first-principles calculations of neutron matter; cf. Sec. II D. One configuration has non-spinning neutron stars, and the other one has spinning neutron stars, aligned with the orbital angular momentum. For the latter, we chose the dimensionless spin of the two stars as χ_1 = χ_2 = 0.1. Both setups have an initial coordinate distance d_0 ≈ 41.2 km.

We construct initial data using the pseudo-spectral code sgrid [137–140], which uses the extended conformal thin sandwich formulation [141, 142] to solve the Einstein constraint equations. The magnetic field is subsequently superimposed for the evolution as a purely poloidal field inside each star. It is defined by a vector potential with

$A_x = -(y - y_{\rm NS})\, A_b \max(p - p_{\rm cut}, 0)^2,$  (2)
$A_y = (x - x_{\rm NS})\, A_b \max(p - p_{\rm cut}, 0)^2,$  (3)
$A_z = 0,$  (4)

where x_NS and y_NS refer to the coordinate center of each neutron star. We set A_b = 1510 to obtain a maximum magnetic field strength on the order of 10^15 G inside the stars. The cut-off pressure p_cut is chosen as 0.004 × p_max, where p_max is the initial maximum pressure in the neutron star at t = 0 ms. This adopted magnetic field is several orders of magnitude stronger than observed in neutron stars in binary systems, where the surface magnetic fields are typically between ∼10^8 and 10^12 G [143, 144]. It is nevertheless common in NR simulations to assume a strength up to ∼10^15 G just before the merger, e.g., [14, 15, 65, 81, 145–148]. The idea of setting this strong magnetic field is to compensate for the unresolved small-scale dynamics, which lead to magnetic field amplification but are not captured by the limited resolution.

Each system is simulated in a low-resolution (R1) and a high-resolution (R2) setup, respectively. The grid configuration in both cases consists of L = 8 refinement levels, comprising three outer, non-moving, and five inner, moving levels. For the R1 setup, the inner and outer refinement boxes contain n_mv = 128 and n = 256 points per direction, respectively, with a grid spacing of h_L = 186 m on the finest level. We double the resolution in the R2 configuration, obtaining n_mv = 256 and n = 512 points per direction for the inner and outer refinement boxes, respectively, with a grid spacing of h_L = 93 m on the finest level.
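A minimal sketch of the seed-field construction of Eqs. (2)–(4): evaluate the vector potential from the pressure field and take B = ∇ × A by central differences (meshgrid arrays with indexing='ij' are assumed; names and units are illustrative):

```python
import numpy as np

def poloidal_seed_field(x, y, z, p, x_ns, y_ns, a_b, p_cut):
    """Sketch of Eqs. (2)-(4): A_x = -(y - y_NS) A_b max(p - p_cut, 0)^2,
    A_y = (x - x_NS) A_b max(p - p_cut, 0)^2, A_z = 0, with B = curl(A)
    from second-order central differences on a uniform Cartesian grid."""
    w = a_b * np.maximum(p - p_cut, 0.0)**2
    a_x = -(y - y_ns) * w
    a_y = (x - x_ns) * w
    dx = x[1, 0, 0] - x[0, 0, 0]
    dy = y[0, 1, 0] - y[0, 0, 0]
    dz = z[0, 0, 1] - z[0, 0, 0]
    # curl with A_z = 0: B_x = -dA_y/dz, B_y = dA_x/dz, B_z = dA_y/dx - dA_x/dy
    b_x = -np.gradient(a_y, dz, axis=2)
    b_y = np.gradient(a_x, dz, axis=2)
    b_z = np.gradient(a_y, dx, axis=0) - np.gradient(a_x, dy, axis=1)
    return b_x, b_y, b_z
```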
D. Equation of state

The ABHT(QMC-RMF3) EOS [136, 149, 150] is specifically developed for dense matter in BNS mergers. It is based on a relativistic mean-field theory (RMFT) that describes the interaction of nucleons via meson exchange. The free parameters of most RMFTs are fitted such that the model describes matter and nuclei that are nearly isospin-symmetric, i.e., have a proton fraction of roughly 50 %, and then extrapolated to the neutron-rich matter in neutron stars. QMC-RMF3 is fitted to the binding energy per nucleon of pure neutron matter obtained by first-principle chiral effective field theory calculations [151]. Simultaneously, the nucleon-meson and meson-meson coupling constants are fitted for consistency with the zero-temperature properties of isospin-symmetric nuclear matter and observations of neutron star mass, radius, and tidal deformability. The EOS prescription includes neutrons, anti-neutrons, protons, anti-protons, electrons, positrons, and a non-interacting photon gas. At low densities, a thermodynamically consistent procedure is used to match the EOS in temperature, density, and proton-fraction space to a low-density RMFT called HS(IUF) [152, 153] that is optimized for the more isospin-symmetric matter expected at low densities, and includes finite nuclei. Since the EOS is derived via RMFTs, it is always causal, and the extension of the EOS from zero temperature to finite temperatures is self-consistent.

At nuclear saturation density n_sat = 0.1546 fm^-3, the ABHT(QMC-RMF3) EOS has a binding energy per nucleon E_B = −16.39 MeV, incompressibility κ = 231.3 MeV, symmetry energy J = 31.29 MeV, slope of the symmetry energy L = 47.20 MeV, and effective nucleon mass divided by its vacuum mass m*/m = 0.739. These values are consistent with the white intersection region in Fig. 2 of Ref. [154] and with Refs. [155, 156]. The fitting of the RMFT coupling constants further ensures consistency with the neutron matter binding energy derived from the chiral effective field theory calculation of Ref. [151] between 0.5 n_0 and 1.5 n_0 at zero temperature, where n_0 = 0.16 fm^-3. In addition, in Ref. [136] ABHT(QMC-RMF3) was shown to be consistent with the finite-temperature chiral effective field theory calculation of Ref. [157] at T = 20 MeV. The ABHT(QMC-RMF3) EOS predicts the following neutron star properties: R_{1.4 M⊙} = 12.21 km, M_max = 2.15 M⊙, and Λ_{1.4 M⊙} = 387, where Λ is the dimensionless tidal deformability. These values are consistent with current astrophysical observations [36, 158–167].

III. BINARY NEUTRON STAR MERGER SIMULATIONS

A. Qualitative overview

As the main focus of our study lies on the merger and post-merger phase, our BNS simulations start with a relatively small initial separation and cover only the last few orbits of the inspiral phase. Both BNS configurations merge already after approximately 4.5 orbits, emit GWs, and form a massive neutron star (MNS) remnant. We define the merger time t_merger as the time when the amplitude of the GW strain in its dominant (2, 2) mode reaches its maximum. For the non-spinning and spinning BNS systems, the merger time is at t_merger = 11.17 ms and t_merger = 12.05 ms, respectively. As expected, the configuration with aligned spin χ = 0.1 merges slightly later because of the orbital hang-up effect [168]. This is due to a repulsive spin-orbit interaction and has already been discussed in several studies of BNS mergers with rotating neutron stars, e.g., Refs. [50, 51, 53, 54, 139, 169].

TABLE I. Remnant properties of the BNS simulations. From left to right: initial dimensionless spin χ of the neutron stars, resolution of the simulation, remnant MNS mass M_MNS, disk mass M_disk, and ejecta properties, comprising its mass M_eje, mass-averaged electron fraction Y_e^eje, and mass-averaged asymptotic velocity v_∞^eje. The ejecta quantities are extracted on a sphere with radius ∼440 km on refinement level l = 2. The remnant and disk mass are computed as the integral of the bound conserved rest-mass density D, defining the massive neutron star remnant in regions where ρ > 10^13 g/cm^3, at the end of the simulation.

χ    res.  M_MNS [M⊙]  M_disk [M⊙]  M_eje [M⊙]  Y_e^eje  v_∞^eje
0.0  R2    2.7910      0.1426      0.0137      0.407    0.165
0.1  R2    2.7480      0.1919      0.0098      0.438    0.148
0.0  R1    2.7992      0.1364      0.0233      0.406    0.192
0.1  R1    2.7371      0.2164      0.0124      0.397    0.184
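The merger-time definition above reduces to a peak search on the (2,2)-mode amplitude; a minimal sketch (assuming the complex strain mode has already been extracted from the simulation):

```python
import numpy as np

def merger_time(t, h22):
    """Sketch: t_merger is the time at which the amplitude |h_22| of the
    dominant (2,2) mode of the GW strain reaches its maximum."""
    amplitude = np.abs(np.asarray(h22))
    return t[np.argmax(amplitude)]
```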
Figure 2 visualizes the matter dynamics, the ejecta expansion, and the magnetic field evolution as 3D snapshots of the BNS system without spin during and after the merger. Understanding the different mass ejection mechanisms, e.g., through magnetic and neutrino-driven winds, is particularly important for interpreting the associated EM counterparts. The first snapshot presents the neutron stars at the onset of merger, exhibiting a purely poloidal magnetic field inside the stars as initialized. Subsequently, the magnetic field lines twist and wind up, leading to a strong toroidal field in the surrounding disk and forming a helicoidal structure at ∼40 ms after the merger. Later, at ∼100 ms after the merger, we observe a magnetic flux emanating from the remnant along the polar axis, forming a jet-like structure. In Sec. III B, a more detailed analysis of the magnetic field amplification and mechanism is provided.

FIG. 2. 3D snapshots of the high-resolution simulation without spin at t − t_merger = {−0.25, 4.71, 38.47, 104.49} ms. Each panel shows a rendering of the rest-mass density in purple to orange scales and magnetic field lines in blue. For visualization, the rest-mass density is sliced along the x-z plane. Additionally, insets sliced along the x-y plane are shown in the upper left corner of each panel. The snapshots are extracted from refinement level l = 3 and represent different stages of the merger and post-merger phase, visualizing the evolution and winding of the magnetic field lines.

Table I provides a summary of the merger remnant properties, including remnant masses, disk masses, and key ejecta properties for all simulations performed. Each system produces an MNS remnant, which remains stable within the simulation time of ∼100 ms after the merger. For the classification of ejected matter, both dynamical and secular ejecta, we use the geodesic criterion, according to which matter is considered unbound if the time-component of the four-velocity u_t < −1 and the radial component of the Eulerian velocity points outward, i.e., v^r > 0; cf. [170]. The gravitationally bound matter comprises the remnant system with the MNS and the surrounding disk. To obtain M_MNS and M_disk separately, we adopt the usual convention and define the disk where ρ < 10^13 g/cm^3 and the MNS where ρ ≥ 10^13 g/cm^3 [59, 71, 72, 171, 172].
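A minimal sketch of this bookkeeping on grid data (our own illustration; D is the conserved rest-mass density, dV a uniform coordinate cell volume, and the CGS density threshold follows the convention above):

```python
import numpy as np

def classify_matter(u_t, v_r, rho, dens_d, dV, rho_split=1.0e13):
    """Sketch of the matter classification used above: the geodesic criterion
    flags matter as unbound if u_t < -1 and v^r > 0; bound matter is split
    into MNS (rho >= 1e13 g/cm^3) and disk (rho < 1e13 g/cm^3). Masses are
    sums of the conserved rest-mass density D over the respective cells."""
    unbound = (u_t < -1.0) & (v_r > 0.0)
    mns = ~unbound & (rho >= rho_split)
    disk = ~unbound & (rho < rho_split)
    m_eje = np.sum(dens_d[unbound]) * dV
    m_mns = np.sum(dens_d[mns]) * dV
    m_disk = np.sum(dens_d[disk]) * dV
    return m_eje, m_mns, m_disk
```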
Our results show that the spin-aligned BNS configuration produces a remnant with a less massive MNS, but a more massive disk. We attribute this to the conservation of angular momentum, which leads to stronger centrifugal forces in the remnant system in the spinning BNS system. As a result, the remnant is more spatially extended, yielding a large amount of bound material, but at lower densities. This is also confirmed in Fig. 3, which shows the evolution of the maximum rest-mass density (upper panel) and the maximum temperature (lower panel). The central rest-mass density and temperature of the remnant MNS are lower in the BNS systems with aligned spin, which is consistent with the results from Ref. [54]. The ejecta mass is larger in the non-spinning BNS configuration with a higher mass-averaged asymptotic velocity v_∞^eje. We note that although there are stronger variations for the ejecta results at different resolutions, both resolutions predict the same trend for M_eje and v_∞^eje. However, the mass-averaged electron fraction Y_e^eje is larger for the χ = 0.1 BNS system at R2 resolution and smaller at R1 resolution. We discuss the ejecta properties in more detail in Sec. III C.

FIG. 3. Evolution of maximum rest-mass density ρ_max (top panel) and maximum temperature T_max (bottom panel). Results of the R2 and R1 simulations are shown in solid and dashed lines, respectively, for the non-spinning (green) and spinning (purple) configurations extracted from refinement level l = 6. For a smoother visualization, data is presented with a moving-average window of width 0.2 ms.

Overall, the spin-aligned BNS configuration with repulsive spin-orbit interaction and increased gravitational binding energy results in a less violent merger scenario. This is indicated in Fig. 3 by a smaller density variation between the late inspiral and the first density peak, when the neutron star cores first bounce and release substantial amounts of shocked ejecta. On the one hand, this also explains the previously mentioned larger ejecta mass and higher ejecta velocities for the non-spinning BNS configuration, where stronger core bounces are regarded as the dominant ejection mechanism. While rotating BNS systems are typically expected to enhance the dynamical ejecta if tidal disruption plays a more significant role, e.g., in simulations with a different EOS or unequal component masses, aligned spin reduces the impact velocity and thus decreases shock-driven outflows [52, 169, 173]. For detailed studies on the ejection mechanisms in equal and unequal mass binaries, we refer the reader to Refs. [72, 139] and, more recently, Ref. [174]. On the other hand, the higher maximum temperatures probed in the early post-merger corroborate our interpretation, given that more kinetic energy of the fluid elements may be converted to internal energy. Then, over longer timescales, radiative losses in the form of neutrino irradiation effectively cool the remnant.

B. Magnetic field amplification

Various magnetohydrodynamic processes and instabilities operating across different spatial and temporal scales in BNS mergers influence the magnetic field evolution and amplify its strength and energy. One key mechanism is the Kelvin-Helmholtz instability (KHI), e.g., [175–178], which develops in the shear layer between the two colliding stars. Other magnetic instabilities are expected to arise in differentially rotating flows: magnetic winding leads to a linear increase of the toroidal magnetic field component of the remnant on Alfvén timescales, e.g., [179–181]. Distortions in the magnetic field in the radially decreasing angular velocity profile of the remnant disk can cause turbulence and induce magneto-rotational instability (MRI), e.g., [15, 68, 182, 183]. Furthermore, Refs. [15, 184] demonstrated the presence of an αΩ-type dynamo in the post-merger remnant that contributes to magnetic field amplification over larger length scales, and Ref. [185] proposed a Taylor-Spruit dynamo to occur in the MNS remnant.

FIG. 4. Evolution of maximum magnetic field strength B_max (top panel) and magnetic energy E_mag (bottom panel). Results of the R2 and R1 simulations are shown in solid and dashed lines, respectively, for the non-spinning (green) and spinning (purple) configurations extracted from refinement level l = 6 for B_max and l = 1 for E_mag.

In Fig. 4, we show for both BNS configurations simulated with R1 and R2 resolutions the temporal evolution of the maximum magnetic field strength B_max and the magnetic energy E_mag, defined as

$E_{\rm mag} = \frac{1}{2} \int u^t \sqrt{-g}\, b^2 \, d^3x,$  (5)

with b² = b_µ b^µ and b^µ being the magnetic four-vector that describes the magnetic field for a comoving observer.
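On a uniform grid, Eq. (5) becomes a weighted sum over cells; a minimal sketch (with precomputed arrays for u^t, √(−g), and b², and uniform cell volume dV; names are ours):

```python
import numpy as np

def magnetic_energy(u_t_up, sqrt_neg_g, b2, dV):
    """Sketch of Eq. (5): E_mag = (1/2) * sum of u^t sqrt(-g) b^2 over cells,
    each of uniform coordinate volume dV."""
    return 0.5 * np.sum(u_t_up * sqrt_neg_g * b2) * dV
```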
The initial amplification within the first milliseconds after the merger, typically ascribed to KHI, is considerably stronger at higher resolution since the shear layer is better resolved. In both R2 simulations, a similar magnetic field strength of 10^16 G is reached. Only later, at ≳15 ms after the merger, as the magnetic energy and strength increase further (potentially through magnetic winding and/or MRI), deviations between the two systems become more pronounced. At R2 resolution, the non-spinning and spinning BNS configurations saturate at energies of about 5 × 10^50 erg and 1 × 10^50 erg, respectively. By contrast, the magnetic field strength continuously increases ∝ t throughout the entire simulation period for the R1 simulations.

The resolution dependence is not unexpected, since the length scales that must be resolved to capture the relevant magnetohydrodynamic processes and turbulence that trigger these instabilities are extremely small [15, 176, 181]. In particular, KHI and MRI require very high resolutions, whereas magnetic winding and braking act on larger scales, which are easier to resolve. Consequently, the R1 resolution cannot resolve the same physical processes as R2. While the approximately linear growth of the magnetic field observed in the R1 simulations can be attributed to magnetic winding [179], magnetic braking does not set in within the simulated time, and neither the amplification through KHI nor the MRI is sufficiently resolved at this low resolution. We therefore focus our analysis on results from our high-resolution simulations. In particular, we discuss below the roles of KHI and MRI in magnetic field amplification.

1. Kelvin-Helmholtz instability

Once the two neutron stars come into contact, KHI becomes prominent in the shear layer between the stars. To illustrate this process, Fig. 5 shows the magnetic pressure P_mag = b²/2 for our R2 simulations in the orbital plane of the non-spinning (upper panels) and spinning (lower panels) BNS configurations at the merger and up to ∼10 ms afterwards. Additional contour lines for the rest-mass density at ρ = {10^10, 10^12, 10^14} g/cm^3 mark the envelope, the bulk, and the inner core regions, respectively. At the merger onset (first panel from left to right), most of the magnetic field is still found within the inner core region, as set by the initial data. The formation of small eddies in the shear layer between the two stars can be observed. The vortices during the merger further twist the magnetic field lines within the first few milliseconds, leading to an exponential growth. This is also evident in Fig. 4 for the initial rise of the magnetic field strength and energy in the first ≲10 ms after the merger. The snapshots in Fig. 5 suggest that KHI mainly amplifies the magnetic field within the inner core region and the bulk region of the remnant. We note again that the energies and intensities of the magnetic field reached through the amplification by KHI within the first ≲10 ms are similar in both BNS systems for R2 resolution.

For a more comprehensive analysis of the energy distribution over spatial scales, we compute the energy power spectrum density for the kinetic and magnetic energy. For this purpose, we define ϵ_mag = √(0.5 b²) and ϵ_kin = √(ρ v²) and compute the three-dimensional Fourier transforms

$\hat{\epsilon}_{\rm mag}(\vec{k}) = \frac{1}{(2\pi)^3} \int \epsilon_{\rm mag}(\vec{x})\, e^{i\vec{k}\cdot\vec{x}}\, d^3x,$  (6)

$\hat{\epsilon}_{\rm kin}(\vec{k}) = \frac{1}{(2\pi)^3} \int \epsilon_{\rm kin}(\vec{x})\, e^{i\vec{k}\cdot\vec{x}}\, d^3x,$  (7)
FIG. 5. Shear layer of the non-spinning (upper panels) and spinning (lower panels) BNS systems at merger time for the simulations with R2 resolution. The snapshots show the magnetic pressure P_mag = 0.5 b^2 in the x-y plane at {0, 1, 2.5, 5, 10} ms after the merger at refinement level l = 5. The yellow lines are contours of the rest-mass density at ρ = {10^10, 10^12, 10^14} g/cm^3, respectively as dotted, dashed, and solid lines.

with the wave vector \vec k and the position of a fluid element \vec x. Then, we obtain the energy spectra as integrals over the solid angle Ω_k in k-space via

    \varepsilon_{mag}(k) = \int \hat\epsilon_{mag}(\vec k)\, \hat\epsilon^{*}_{mag}(\vec k)\, k^2 \, d\Omega_k ,    (8)

    \varepsilon_{kin}(k) = \int \hat\epsilon_{kin}(\vec k)\, \hat\epsilon^{*}_{kin}(\vec k)\, k^2 \, d\Omega_k ,    (9)

with the wave number k = (\vec k \cdot \vec k)^{1/2} defining the characteristic length of the system. We perform this calculation on a cube with a side length L of ∼100 km along the x, y, and z directions for a sufficient coverage of the merger remnant, including the inner core, the bulk, and the envelope. This region matches the computational domain of refinement level l = 5, and is thus partially covered by nested boxes of refinement levels l = 6 and l = 7. To ensure that the analysis is performed using information from the finest grid available, we follow an approach similar to that described in Ref. [186]. In particular, we set for the cube the same grid spacing as on our finest refinement level l = 7 and fill the cube with data from this level in the respective region. The rest of the domain is then interpolated using information from refinement levels l = 6 and l = 5, respectively. We note that this method can influence results on the smallest scales, and therefore at the highest wave numbers k, due to the interpolation from coarser grids to a finer grid in the outer refinement levels. To avoid fluctuations from low-density regions, we compute the spectra only in regions with a rest-mass density ρ > 6 × 10^9 g/cm^3. Finally, we perform a discrete fast Fourier transform over our cube with N^3 equally spaced points, where the components of the wave vector are given by k_d = n\, 2\pi/L with n ∈ [0, N/2].

We present our resulting spectral distributions for the kinetic and magnetic energy in Fig. 6 for t = {2.5, 5, 10, 20} ms after the merger. The spectra exhibit the expected behavior for a turbulent magnetohydrodynamic scenario with a weak magnetic field, in which the magnetic field evolves passively and the turbulence is predominantly hydrodynamical. The shape of the kinetic energy spectra follows approximately a k^{-5/3} slope, consistent with Kolmogorov's theory [187]. Moreover, during the first 10 ms after the merger, the magnetic energy spectra can be approximated at longer wavelengths (smaller wave numbers) by a slope of k^{3/2}, in agreement with Kazantsev's model [188]. Accordingly, the kinetic energy is dominated by larger length scales and the magnetic energy by smaller ones. This also highlights the importance of numerical resolution for capturing KHI. Although resolution is expected to have less impact on the evolution of the kinetic energy on large scales, most magnetic energy is created on small scales. The drop in the magnetic energy spectra at high wave numbers ≳10^{-4} cm^{-1}, corresponding to length scales of order ≲100 m, is likely due to numerical dissipation on these length scales, as our grid spacing is of similar order, with ∆x ≈ 93 m. Our results reproduce the typical dynamical stages of the KHI, as discussed in previous studies, e.g., Refs. [186, 189].
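In practice, Eqs. (6)-(9) amount to a discrete Fourier transform followed by binning the power into spherical shells in k-space. A minimal Python sketch under these assumptions is given below; it uses synthetic data on a periodic N^3 cube of side L, and it omits the normalization conventions, the density masking, and the multi-level regridding described above.

import numpy as np

def shell_spectrum(eps, L):
    """Bin |FFT(eps)|^2 into spherical shells of wave number k, the discrete
    analogue of Eqs. (6)-(9) for eps = sqrt(0.5*b^2) or sqrt(rho*v^2)."""
    N = eps.shape[0]
    eps_k = np.fft.fftn(eps) / N**3            # discrete Fourier amplitudes
    power = np.abs(eps_k) ** 2
    k1d = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    edges = (2.0 * np.pi / L) * np.arange(0.5, N // 2 + 1)   # shell edges
    idx = np.digitize(kmag.ravel(), edges)
    spec = np.bincount(idx, weights=power.ravel(), minlength=edges.size + 1)
    k_centers = 0.5 * (edges[:-1] + edges[1:])
    return k_centers, spec[1:edges.size]

# Synthetic example: random field on a 64^3 cube of side 100 km (in cm).
rng = np.random.default_rng(0)
k, ek = shell_spectrum(rng.standard_normal((64, 64, 64)), L=1.0e7)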
Initially, within the first few milliseconds after the merger, the hydrodynamic cascade is fully developed, following the Kolmogorov spectral slope. Subsequently, during the kinematic phase, the magnetic field grows by a hydrodynamical turbulent mechanism, as in Kazantsev's theory, until reaching saturation.

FIG. 6. Evolution of the kinetic (solid) and magnetic (dashed) energy spectra as a function of the wave number. Results are shown for the non-spinning (green lines) and spinning (purple lines) high-resolution BNS simulations at t = {2.5, 5, 10, 20} ms after the merger. The black lines correspond to Kolmogorov (solid) and Kazantsev (dashed) slopes.

The spectra for the two different BNS systems during the first few milliseconds after the merger are largely identical. Thus, the initial magnetic field amplification through KHI seems independent of the stars' intrinsic spin, as it behaves identically in both BNS systems. Differences become noticeable only at later times of ≥10 ms after the merger. On these timescales, the magnetic energy spectra also increase at smaller wave numbers, indicating an inverse cascade. In agreement with Fig. 4, the magnetic energy amplifies more strongly for the BNS system without spin. We conclude that this difference between the BNS systems is related to the transfer of energy at larger scales, potentially through magnetic winding or MRI-driven dynamo mechanisms, which we discuss in the following section.

2. Magneto-rotational instability

The interaction of the magnetic field with a sheared background flow, such as in an accretion disk, is expected to power the MRI [190]. Several studies have investigated its role in BNS merger scenarios, e.g., [15, 68, 182, 183], suggesting that the MRI could sustain turbulence in the remnant's envelope and drive a large-scale dynamo. The MRI is only active for a radially decreasing angular velocity, ∂_R Ω < 0, where distortions of the magnetic field lines and the velocity field lead to turbulence and exponential growth. Its fastest-growing mode can be estimated as

    \lambda^i_{MRI} \approx v^i_A \, \frac{2\pi}{\Omega} ,    (10)

where v^i_A = B^i / \sqrt{4\pi\rho} is the Alfvén velocity in the x^i direction and Ω is the angular velocity of the fluid, which we simplify as Ω = (x v^y − y v^x)/R^2 with cylindrical radius R. We note that this standard MRI criterion assumes a Keplerian rotation profile and neglects magnetic field gradients (see Ref. [191] for a derivation of extended MRI criteria). Furthermore, neutrino viscosity and drag may suppress the MRI through diffusive and damping effects. Thus, Eq. (10) should be regarded as a rough approximation.

We compute azimuthal averages of the quality factor, defined as

    Q_{MRI} = \lambda_{MRI} / \Delta x ,    (11)

and present results for t = {20, 50, 90} ms after the merger for both BNS systems in Fig. 7. For each panel, we show the quality factor based on the poloidal magnetic field component only, in the upper half, and based on the total magnetic field strength, in the lower half. Our results emphasize the challenges in resolving λ_MRI. During the first ≲50 ms after the merger, our resolution can only partially capture MRI effects, considering the full magnetic field strength. However, our grid resolution is not suited to individually resolve the poloidal component.
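Equations (10) and (11) translate directly into array arithmetic. The sketch below assumes Gaussian units for the Alfvén velocity and uses the simplified angular velocity quoted above; all input arrays are hypothetical placeholders for simulation data.

import numpy as np

def angular_velocity(x, y, vx, vy):
    """Simplified fluid angular velocity Omega = (x v_y - y v_x) / R^2."""
    return (x * vy - y * vx) / (x**2 + y**2)

def lambda_mri(B_i, rho, Omega):
    """Fastest-growing MRI wavelength, Eq. (10), with v_A = B / sqrt(4 pi rho)."""
    v_alfven = B_i / np.sqrt(4.0 * np.pi * rho)
    return 2.0 * np.pi * v_alfven / Omega

def quality_factor(B_i, rho, Omega, dx, dOmega_dR):
    """MRI quality factor, Eq. (11); zeroed where dOmega/dR < 0 is not met."""
    q = lambda_mri(B_i, rho, Omega) / dx
    return np.where(dOmega_dR < 0.0, q, 0.0)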
As the magnetic field strength increases, resolving λ_MRI becomes feasible, because λ^i_MRI ∝ B^i, which leads to a higher quality factor. On average, the quality factor Q_MRI seems slightly higher in the non-spinning than in the spinning configuration. This trend may explain the stronger magnetic field amplification observed in this BNS system, indicating that the MRI is better resolved there. For a better comparison, we compute one-dimensional profiles of the fluid angular velocity and of the different λ^i_MRI components, shown in Fig. 8. In particular, we present the averaged values ⟨Ω⟩ and λ^i_MRI over cylindrical radii R for a disk within |z| < 5 km. This selection focuses on the orbital plane, where Q_MRI first begins to grow. As in Fig. 7, the profiles are shown at t = {20, 50, 90} ms after the merger.

FIG. 7. MRI quality factor Q_MRI of the non-spinning (upper panels) and spinning (lower panels) BNS simulations with R2 resolution. The snapshots show the azimuthal average of Q_MRI at t = {20, 50, 90} ms after the merger. Each panel shows, in the upper half, the quality factor considering only the poloidal magnetic field component and, in the lower half, considering the full magnetic field strength. The yellow lines are contours of the rest-mass density at ρ = {10^13, 10^14.5} g/cm^3, respectively as thick and thin lines. The black dashed lines mark the regions covered by the different refinement levels l = {5, 6, 7}. While Q_MRI is set to zero if the condition ∂_R Ω < 0 is not satisfied, the gray-colored region marks the area where the MRI is inactive.

FIG. 8. Profiles of the fluid angular velocity Ω in the top panels and of the fastest-growing MRI mode λ_MRI in the bottom panels, considering different components, i.e., poloidal λ^z_MRI in solid lines, toroidal λ^Φ_MRI in dashed lines, and radial λ^R_MRI in dotted lines. In particular, the profiles show the averaged values over cylindrical radii R for |z| < 5 km at t = {20, 50, 90} ms after the merger for both the non-spinning (green) and spinning (purple) BNS simulations at R2 resolution. The black vertical dashed lines mark the regions covered by the different refinement levels, i.e., l = {5, 6, 7}, and the black horizontal lines in the bottom panels mark the respective grid spacing ∆x. While the λ^i_MRI components are set to zero if the condition ∂_R Ω < 0 is not satisfied, the gray-colored region marks the area where the MRI is inactive, respectively contoured by green and purple vertical dashed lines for the non-spinning and spinning BNS systems.

We also plot the grid spacing ∆x for the respective domain alongside the λ^i_MRI profiles: MRI features can only be resolved when λ^i_MRI lies above ∆x, i.e., when the corresponding length scale exceeds the grid resolution. Again, we find that the poloidal component λ^z_MRI is not resolved in our simulations, as the grid spacing ∆x on the corresponding refinement levels exceeds the required length scales. The same holds for the radial component λ^R_MRI. Only the toroidal component λ^Φ_MRI exceeds ∆x and may therefore be partially resolved.
Indeed, at ∼20 ms after the merger, λ^Φ_MRI is larger for the non-spinning BNS configuration than for the spinning one. This could explain the faster growth of the magnetic energy and field strength for this BNS system, as the MRI, although not fully resolved, is still better captured. The higher λ_MRI values can easily be explained by the slightly lower angular velocities in the MRI-active region. Since λ_MRI is inversely proportional to Ω, higher angular velocities reduce the length scale of the fastest-growing MRI mode. Consequently, it is more difficult to resolve the MRI in the spinning BNS configuration.

C. Ejecta dynamics and matter outflow

BNS mergers are sources of massive outflows, whose kinematical, thermodynamical, and compositional properties are closely connected to the EM counterparts arising from such events. Therefore, realistic modeling of the relevant physical processes in the ejecta is necessary in order to provide accurate theoretical predictions of observable signals and to interpret the available observational data.

The various ejection mechanisms operate on distinct timescales. It is hence instructive to divide the ejecta into a dynamical component and a secular component. The former is primarily driven by hydrodynamical processes and develops on timescales from a few milliseconds prior to the merger to tens of milliseconds after the merger. First, tidal torques act on deformed surface layers, leading to typically cold, neutron-rich outflows. Shocked matter streams follow these, characterized by high entropies and electron fractions Ye ≳ 0.25. The shocked material is predominantly launched from the collision interface between the neutron stars when they first come into contact and then by successive core-bounces as the remnant evolves towards a quasi-equilibrium state.

Over longer timescales t − tmerger ≳ 10 ms, when the internal motions of the remnant have subsided due to energy losses in the form of GWs, secular ejection mechanisms operate, resulting in a steady outflow of material from the disk and outer layers of the remnant, commonly interpreted as winds. At this stage, a component of the wind is induced by matter elements that acquire energy and momentum by absorption of neutrinos, most importantly νe. This increases the electron fraction to Ye ≳ 0.4. Another component of the wind is driven by magnetic fields as a consequence of the amplification mechanisms explored in Sec. III B. Finally, although not considered in this manuscript, viscous effects might contribute to the production of secular ejecta by angular momentum transport, e.g., [72, 192-196].

FIG. 9. Matter outflow for the non-spinning (green) and spinning (purple) BNS simulations at R1 (dashed) and R2 resolution (solid). The ejecta mass (top panel) and ejected mass flux (bottom panel) are extracted on a coordinate sphere of radius ∼737 km on refinement level l = 2.

The following analysis is based on the methods of Ref. [59], where properties of the ejecta are extracted on coordinate spheres of constant radii. The mass flux across a coordinate sphere of radius r is

    \dot{M}_{eje} = \oint \left[ \sqrt{\gamma}\, D_u \left( \alpha v^r - \beta^r \right) \right] r^2 \, d\Omega ,    (12)

where D_u is the Eulerian rest-mass density of unbound matter, γ is the determinant of the spatial metric, α is the lapse function, β^r (v^r) is the shift (Eulerian velocity) vector projected onto the radial direction, and dΩ is the usual surface element on the sphere.
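A possible discretization of the surface integral in Eq. (12) on a (θ, φ) grid is sketched below; it assumes the field arrays have already been interpolated onto the extraction sphere (array names hypothetical).

import numpy as np

def mass_flux(sqrt_gamma, D_u, alpha, v_r, beta_r, r, theta, phi):
    """Discrete version of Eq. (12) on a sphere of coordinate radius r.
    All field arrays have shape (len(theta), len(phi))."""
    dtheta = theta[1] - theta[0]
    dphi = phi[1] - phi[0]
    integrand = sqrt_gamma * D_u * (alpha * v_r - beta_r) * r**2
    dOmega = np.sin(theta)[:, None] * dtheta * dphi   # surface element
    # Summing the per-timestep fluxes, e.g. np.cumsum(mdot * dt), yields the
    # cumulative ejecta mass discussed next.
    return np.sum(integrand * dOmega)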
Integrating the mass flux in time from t′ = 0 to t′ = t then gives the ejecta mass Meje that crossed a coordinate sphere until t. Figure 9 presents the evolution of the ejecta mass passing through a sphere of radius ∼737 km for the BNS simulations with and without spin at R1 and R2 resolutions. The top and bottom panels, respectively, show the integrated ejecta mass Meje and the corresponding mass flux Ṁeje. Due to the finite propagation time, the fastest ejecta reach the extraction sphere ∼5 ms after the merger. The mass flux then reaches values of 3 × 10^−1 M⊙/s and 2 × 10^−1 M⊙/s for the non-spinning and spinning BNS configurations, respectively. The R1 simulation of the BNS system without spin even shows peak values of 5 × 10^−1 M⊙/s. Accordingly, the merger of the non-spinning BNS system produces considerably more dynamical ejecta. As discussed in Sec. III A, the system without spin merges more violently than the spin-aligned configuration, resulting in more material being ejected during the collision. Subsequently, as the remnant migrates to a more axis-symmetric state, the outflow rate decreases slightly in both systems. At high resolution, both systems exhibit a nearly constant mass flow of approximately 1 × 10^−1 M⊙/s from 20 ms to 60 ms after the merger, leading to an almost linear increase of the ejecta mass. This behavior is commonly observed in BNS merger simulations with M1 neutrino transport. Thus, we attribute this outflow component to neutrino-driven winds, as interactions of neutrinos emitted from the remnant steadily unbind material from the outer parts of the disk. While the mass flux in the R1 simulations continues to decrease, we observe a slight increase in the R2 simulations at ≥60 ms after the merger. This time roughly coincides with the saturation of the magnetic field strength (cf. Fig. 4), which could indicate the onset of magnetically driven winds. Around ≥95 ms, there is another increase in the mass flux for the non-rotating configuration, with peak values of 4 × 10^−1 M⊙/s at ∼105 ms after the merger. We analyze this outflow component in more detail below.

FIG. 10. Snapshots of the inverse plasma parameter β^−1 (z > 0) and the electron fraction Ye (z < 0) of the non-spinning (upper panels) and spinning (lower panels) R2 simulations. The respective times after the merger are given in the upper left corner of each panel. The white line gives the contour of the gravitationally bound region. The snapshots are extracted from refinement level l = 1. The yellow lines are contours of the rest-mass density at ρ = {0.25, 0.5, 1} × 10^7 g/cm^3, respectively as dotted, dashed, and solid lines.

FIG. 11. Snapshots of the electron fraction Ye, the rest-mass density ρ, the inverse plasma parameter β^−1, and the asymptotic velocity v∞ in the x-z plane for the high-resolution simulation of the χ = 0.0 system at ∼103.5 ms after the merger.

In order to better understand the geometric features of the ejection mechanisms, Fig. 10 depicts snapshots of the inverse plasma parameter β^−1 = P_mag/p and the electron fraction Ye in the x-z plane for the two high-resolution simulations, illustrating the evolution of the disk and the ejecta. At 25 ms after the merger, the electron fraction ranges from ∼0.1 in the orbital plane to ∼0.4 in the polar region. The low-Ye material in the orbital plane is primarily emitted by tidal torques during the merger and later partially reprocessed by the interaction with faster shocked ejecta.
The shock created by the collision between the two ejecta components causes a modest protonization of the matter due to the heating and the consequent emission of ν̄e, which decouples from matter deeper in the remnant and disk than νe [104, 197]. In the polar region, the electron fraction gradually increases due to absorption of νe produced in the remnant, which powers the steady secular outflow observed in Fig. 9.

FIG. 12. Averaged asymptotic velocity (top panel) and electron fraction (bottom panel) of ejected matter passing through the detection sphere of radius ∼440 km for the non-spinning (green) and spinning (purple) BNS simulations at R1 (dashed) and R2 resolution (solid).

While the magnetic field is amplified by the various instabilities discussed in Sec. III B, the neutrino-driven wind reduces baryon pollution in the polar region, forming a funnel region with suitable conditions for the development of jet-like structures [14, 80]. In this region, the magnetic field forms a helicoidal structure (see Fig. 2) and β^−1 grows. Indeed, we observe in our simulations the emergence of an outflow with considerably enhanced β^−1 starting at ∼50 ms and ∼75 ms after the merger for χ = 0.0 and χ = 0.1, respectively. This supports the previous hypothesis that magnetically driven winds contribute to the increase of the mass flux observed in Fig. 9 at this time. For the non-spinning configuration, the morphology of β^−1 in the funnel around the polar cap at ≥75 ms after the merger suggests the occurrence of a buoyant instability. Hence, an outflow with considerably high β^−1, i.e., primarily magnetically driven, is launched along the polar axis at ∼100 ms after the merger, coinciding with the time of the significant increase in mass flux in Fig. 9 for this simulation. We provide additional snapshots of this ejecta structure in Fig. 11, showing the rest-mass density ρ and the asymptotic velocity v∞, defined as

    v_\infty = \sqrt{1 - 1/u_t^2} .    (13)

This collimated outflow component is consistent with the magnetic eruption scenario proposed by Ref. [78], exhibiting a jet-like structure with mildly relativistic velocities of ≲0.4 c along the polar direction.

FIG. 13. Histograms of the electron fraction (left panel) and asymptotic velocity (right panel) for the matter outflow of the non-spinning (green) and spinning (purple) BNS simulations at R2 resolution. Ejecta properties are extracted on the detection sphere of radius ∼440 km. The dotted, dashed, and solid lines refer, respectively, to the ejected material passing through the sphere within the first 20 ms after the merger, within the first 50 ms after the merger, and over the entire simulation period.

For a quantitative comparison of the ejecta between our simulations, we analyze the relevant properties using information from the detection sphere at radius ∼440 km. In Fig. 12, we show mass-weighted averages of Ye and v∞ for the ejecta passing through this sphere as a function of time, where the mass-weighted average ⟨X⟩ of a quantity X on the sphere of radius r is defined as in Ref. [59],

    \langle X \rangle = \frac{\oint f_u\, X\, r^2 \, d\Omega}{\oint f_u\, r^2 \, d\Omega} ,    (14)

where \oint denotes the integral over the detection sphere and, for shorter notation, f_u = \sqrt{\gamma}\, D_u (\alpha v^r − \beta^r) is the local radial mass flux of unbound material.
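Equations (13) and (14) are similarly straightforward to evaluate on the detection sphere; note that the factor r^2 cancels in the ratio of Eq. (14). A sketch with hypothetical array names:

import numpy as np

def v_infinity(u_t):
    """Asymptotic velocity of unbound fluid elements, Eq. (13)."""
    return np.sqrt(1.0 - 1.0 / u_t**2)

def mass_weighted_average(X, f_u, theta):
    """Mass-weighted surface average <X>, Eq. (14), with
    f_u = sqrt(gamma) D_u (alpha v^r - beta^r); r^2 cancels in the ratio."""
    w = f_u * np.sin(theta)[:, None]   # flux-weighted surface element
    return np.sum(w * X) / np.sum(w)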
As seen in the upper panel of Fig. 12, the early peak in the ⟨v∞⟩ evolution corresponds to the faster material produced in the core-bounce and is followed by progressively slower material. In the R2 simulations, the peak reaches higher values for χ = 0.0, up to ∼0.6 c as opposed to ∼0.4 c. Again, the slower ejecta from the spinning configuration trace back to the repulsive nature of the spin-orbit interaction and the lower impact velocities for these spin-aligned systems. Although the average velocities and electron fractions evolve in a rather similar way for both setups from 20 ms to 60 ms, the spinning configuration develops a higher ⟨Ye⟩ earlier than the non-spinning counterpart. Combined with the observation that β^−1 grows faster for χ = 0.0, we conclude that the secular ejecta for χ = 0.1 are predominantly driven by neutrino winds.

This is in good agreement with the distributions depicted in the right panel of Fig. 13, where mass-weighted histograms of the electron fraction (left panel) and asymptotic velocity (right panel) are presented for the R2 simulations at different timespans after the merger. For χ = 0.1, we observe at most v∞ ∼ 0.4 c. Consequently, material escapes the remnant more slowly and remains in its vicinity for longer in this configuration, only to reach the detection sphere after being strongly protonized due to neutrino irradiation. This results in the higher ⟨Ye⟩ at t − tmerger ≈ 50 ms and, correspondingly, in the higher-Ye tails of the distributions depicted in the left panel of Fig. 13. Over time, the ejecta's Ye progressively increases, as seen in the sequence of histograms for both configurations, until eventually a sizable fraction of material reaches the maximum tabulated value Ye = 0.6, well within expectations for such long-term simulations.

FIG. 14. 2D mass-weighted histograms of the ejecta's electron fraction and asymptotic velocity for the non-spinning (left panel) and spinning (right panel) BNS simulations at R2 resolution. The ejecta are extracted on the detection sphere of radius ∼440 km.

In general, the ejecta reach lower Ye but also higher v∞ for χ = 0.0 than for χ = 0.1. For a better qualitative understanding of the ejection mechanisms at play in our simulations, we perform a combined analysis of the electron fraction and asymptotic velocity distributions and show two-dimensional histograms of the ejecta in Fig. 14. We find that the bulk of the ejecta in the region with Ye ≳ 0.2 and v∞ ≲ 0.4 is broadly similar between the two configurations. However, the non-spinning configuration has a pronounced ejecta component with high velocities v∞ ≳ 0.4 and additionally a low-Ye tail with v∞ ≲ 0.2. We think both components are consistent with the description of dynamical ejecta mechanisms proposed in Ref. [174]. The electron fraction is typically related to the region where the ejecta originate, whereas the velocity is correlated with the details of the ejection episode. Thus, neutron-rich material with Ye ≲ 0.1 is usually found in the interior of cold, catalyzed neutron stars, intermediate Ye ∼ 0.2−0.4 can be reached by overproduction of ν̄e with respect to νe, usually taking place at higher temperatures, and even higher Ye can be reached through neutrino absorption over longer timescales.
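The mass-weighted two-dimensional histograms of Fig. 14 reduce to a weighted numpy.histogram2d call. The sketch below uses synthetic per-element arrays as stand-ins for the ejecta data:

import numpy as np

rng = np.random.default_rng(1)
ye = rng.uniform(0.05, 0.55, 100_000)     # electron fraction per fluid element
vinf = rng.uniform(0.05, 0.75, 100_000)   # asymptotic velocity (units of c)
mass = rng.uniform(0.0, 1.0, 100_000)     # associated ejecta mass

H, ye_edges, v_edges = np.histogram2d(
    ye, vinf, bins=50, range=[[0.0, 0.6], [0.0, 0.8]], weights=mass)
H /= mass.sum()   # fraction of the total ejecta mass per (Ye, v_inf) bin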
Ref. [174] identified that the bulk of the dynamical ejecta has two main origins: a predominantly equatorial 'spray' component that is launched early on from the shear layer between the merging neutron stars with velocities v ≲ 0.6 c, and a more isotropic 'bounce' component, where material from the interior of the merging neutron stars is released with v ∼ 0.8 c during the first core bounce and with lower velocities between 0.2 c and 0.4 c during a second, weaker bounce. We interpret the high-velocity ejecta with v∞ ≳ 0.4 c for the non-spinning configuration as the 'spray' or 'first bounce' component, since the velocities are consistent with this scenario and the electron fraction of 0.2−0.4 is well within the expected range of matter that is protonized at the high temperatures developed in the shear layer and/or in the hot core-disk interface. Finally, we interpret the low-Ye tail as matter from the (relatively cold) interior, launched by subsequent bounces. Naturally, the absence of both the high-v∞ and the low-Ye ejecta components for χ = 0.1 can be traced to the overall weaker core bounces of this configuration.

D. Nuclear abundances

The dynamical evolution of the ejecta properties corresponds to variations in the nucleosynthesis yields and hence in the related EM signatures (see Sec. IV B). To evaluate the abundances of newly synthesized heavy elements, we use information from tracer particles that are evolved along with the fluid in our BNS simulations. As detailed in Appendix B, the advection scheme for the tracers leads to a high number of early, low-mass trajectories and cannot capture well the later wind ejecta. The overall picture is thus subject to stochastic variations from the low number of high-mass wind trajectories, which is particularly problematic for the non-spinning case, where we need to ascribe almost 40 % of the total ejecta mass to a single trajectory. Notwithstanding, the full abundance patterns in Fig. 15 are fairly consistent with other recent works [200, 201]. Moreover, nucleosynthesis results obtained with SkyNet [202], purely based on the ejecta extraction spheres in the simplified setup previously employed in Ref. [59], exhibit very similar trends.

FIG. 15. Nuclear abundances as computed with WinNet after 1 Gyr for both setups (solid lines). For comparison, we show solar-system abundances (gray diamonds, [198]) and r-process contributions (red dots, [199]). Dashed lines indicate abundances obtained with a simplified setup using SkyNet. All patterns are rescaled to match r-processed 195Pt.

FIG. 16. Contribution of different tracer subgroups to the full abundance pattern for the spinning system. The solid lines indicate the mass-weighted average, while the bands indicate the 16th-84th percentile in the group for each mass number.

In general, the nucleosynthesis features of both merger simulations share most characteristics. Abundances of nuclei in the iron group and up to about A = 80 are much lower than in the solar system [198, 199]. The r-process peaks are robustly produced, although the third peak is severely narrowed on its lower-mass shoulder, associated with a deficit of cold, tidal ejecta in equal-mass mergers, as presented in Ref. [203].
While we observe a considerable excess of nuclei around A ≈ 110, corresponding roughly to the transition metals of the 5th period, lanthanides are partly deficient compared to solar-system values. The latter feature has also been found in other studies [11, 204], and is intricately associated with the choice of mass model, β-decay, and fission prescription. The underlying FRDM mass model is known to lead to a 'pile-up' near magic numbers, broadening the second peak while depleting low-mass lanthanides [205]. Similarly, the simplified fission prescription of Ref. [119] deepens this trough, while the formulation of Ref. [120] alone tends to erase the peaks [206]. The lower trough around A = 140−150 in the spinning case hence correlates with a lower production of fissioning material. Consequently, lanthanides up to Eu are deficient, while the relative abundances of Gd-Lu align well with those both in old, Fe-deficient stars [207] and in the solar system [199].

To distinguish between different ejecta components and analyze their contribution to the total nuclear abundances, we identify the wind component as tracers crossing the extraction sphere at t_ext > 30 ms. From the remaining dynamical ejecta, we separate tracers with high entropy s_0 ≥ 80 k_B/nuc, by and large coinciding with the fast, shock-ejected component, and split the rest into a neutron-rich component with Ye ≤ 0.25 and a neutron-poor component with Ye > 0.25 (see the sketch below). Figure 16 shows the mass-weighted contributions of these groups to the total abundance in the spinning case, while Table II summarizes the total ejecta masses in different mass ranges for both configurations.

TABLE II. Mass of different ejecta components in 10^−3 M⊙.

  Ejecta component             |    Wind     |  Dynamical
  χ                            |  0.0   0.1  |  0.0   0.1
  ---------------------------------------------------------
  α-particles (A = 4)          |  0.00  0.08 |  0.16  0.06
  Iron group (50 ≤ A ≤ 56)     |  0.60  0.48 |  0.00  0.00
  First peak (73 ≤ A ≤ 91)     |  5.74  3.66 |  0.65  0.53
  Second peak (121 ≤ A ≤ 138)  |  0.13  0.06 |  2.72  1.50
  Third peak (186 ≤ A ≤ 203)   |  0.00  0.00 |  0.40  0.29
  Lanthanides (139 ≤ A ≤ 176)  |  0.00  0.00 |  0.15  0.10
  Actinides (210 ≤ A ≤ 260)    |  0.00  0.00 |  0.02  0.01
  Rest                         |  0.41  0.97 |  0.94  0.56
  Total                        |  6.89  5.25 |  5.04  3.06

Winds account for about 60 % of the total ejecta mass in both setups. Despite comprising only about 150 tracers each, they cast an extremely homogeneous picture: they fall out of NSE only as they are ejected from the disk at velocities of about 0.1 c, with a fairly narrow density distribution around log(ρ_NSE / g cm^−3) = 6.8 ± 0.2, corresponding to a similarly narrow distribution of entropies around S ≈ 20 k_B/nuc. Early in the post-merger, fast protonization occurs in the disk due to the overproduction of ν̄e via positron capture on free neutrons, which decouples from matter deeper in the remnant, and leads to relatively high electron fractions around Ye ≳ 0.35. Hence, the final composition of the wind ejecta remains similar to the respective NSE composition [208]: most matter rests in nuclei up to 88Sr in the first r-process peak and favors even mass numbers by roughly one order of magnitude. Only traces of second-peak elements form in trajectories with Ye < 0.35. For Ye ≳ 0.45, the mean mass number Ā sharply drops and nucleosynthesis reaches only up to the iron group. While the low number of available tracers thus does not allow robust statements on the exact pattern, Fig. 16 highlights that these poorly sampled tracers have no impact on the heavier elements.
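The tracer subgroups used for Fig. 16 and Table II follow from the three cuts quoted above; a minimal sketch of this classification (argument names hypothetical):

def classify_tracer(t_ext_ms, s0_kb_per_nuc, ye):
    """Assign a tracer to one of the four subgroups used in the analysis:
    wind (late extraction), high entropy, neutron rich, or neutron poor."""
    if t_ext_ms > 30.0:            # crosses extraction sphere at t_ext > 30 ms
        return "wind"
    if s0_kb_per_nuc >= 80.0:      # fast, shock-ejected component
        return "high entropy"
    return "neutron rich" if ye <= 0.25 else "neutron poor"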
Nuclei beyond the first peak instead result from the dynamical ejecta, which draw a more diverse picture: the neutron-poor and neutron-rich components each contribute about a fifth of the total ejecta mass and have entropies comparable to the wind ejecta, but slightly higher velocities around (0.15 ± 0.05) c and NSE densities log(ρ_NSE / g cm^−3) = 7.0 ± 0.2.

The neutron-poor ejecta contain some iron-group elements, but are dominated by ejecta up to the second peak. These account for the strong excess of transition metals with A ≈ 110 and exhibit a broadened first peak beyond the N = 50 shell closure, while the abundance of nuclei beyond tellurium (Z = 52) isotopes declines rapidly. Only a few tracers close to Ye = 0.25 produce a subdominant yield of lanthanides and third-peak elements. These are primarily synthesized by the neutron-rich ejecta, which account for almost all ejecta with A > 128. As expected, the yield of heavy elements is hence strongly correlated with Ye. The highest actinide yields, with mass fractions up to X_act ≈ 0.07, are found in tracers with the lowest Ye ≈ 0.15. These are also the most efficient producers of lanthanides, with X_lanth ≥ 0.10, while second-peak elements are relatively deficient.

There is a small (up to 3 % in the non-spinning case), yet noteworthy contribution to the total actinide mass from the fastest, shock-ejected tracers that encounter a 'frustrated' r-process [11, 209]. These have high entropies (s ∼ 120 k_B/nuc) and Ye ≈ 0.3. As they are flung out from the merger site, they encounter an α-rich freeze-out [11, 210]: their density drops too quickly for large amounts of α-particles to cross the 8Be bottleneck and form heavier seeds. Correspondingly, they make a significant contribution to the final He abundance. Many neutrons also avoid capture altogether, decaying to the remaining hydrogen on the timescale of hours. Ref. [211] predicts this to be observable as a kilonova precursor. The few massive seeds neutronize very efficiently, though, piling up along the N = 82, 126 shell closures to produce close to 100 % of the final mercury and thallium yields.

FIG. 17. "Treasure map" showing tracer particles projected onto the x-y plane and their mass-weighted relative abundances of selected elements for the high-resolution spinning BNS simulation. The panels in the upper, middle, and lower rows respectively show the maps at t = {−0.88, 1.25, 7.31} ms after the merger and project tracer particles for |z| < {5, 10, 50} km. For each element, the tracer particles shown account combined for 99.9 % of the total element abundance, while tracer particles with negligible contribution are not displayed for better visibility. In the background, we show the rest-mass density profile in grayscale.

Figure 17 confirms these general trends based on "treasure maps" for a few characteristic elements at initialization, shortly after the merger, and around 7 ms after the merger. These project the locations of tracers of our highest-resolution simulation of the spinning BNS configuration onto the x-y plane. We can see that Ge mostly originates from tracers in the outer layers of the star that remain longer in the disk. Zr and Ag have a significant wind contribution, but are increasingly abundant in the tidally ejected material. Gold is primarily formed in dense regions of the faster ejecta components, though there are some traces that leave the disk at later times. Ultimately, Tl forms in the fastest shock-ejected tracers without any contribution from the disk.
FIG. 18. GW strain of the dominant (2, 2)-mode for a face-on view at 100 Mpc for the non-spinning (upper panel) and spinning (lower panel) configurations. The time is shifted by the merger times, where the amplitude of this mode is maximal.

IV. MULTI-MESSENGER PICTURE

A. Gravitational waves

Throughout the inspiral phase, the continuous emission of GWs draws energy and angular momentum from the system. In response, the neutron stars coalesce and merge into a compact object surrounded by a disk of material with intermediate to low densities. The dynamics of the inspiral stage can be observed in the characteristic chirp signal, as depicted in the waveforms of Fig. 18 for t − tmerger < 0, where the GW frequency and amplitude increase as the binary orbit decays, until the neutron stars come into contact and merge. As stated above, the merger time is defined as the instant at which the amplitude of the dominant (2, 2) mode is maximal.

While the GW signal during the inspiral phase of BNS systems with spinning and non-spinning stars is well described by various approximants, e.g., Ref. [212], the post-merger phase is more ambiguous. Hydrodynamic instabilities in the remnant can excite inertial modes, which are potentially observable in planned third-generation GW detectors [213, 214] and could thus be used to investigate the rotational and thermal states of the remnant. We therefore focus our analysis of the GW signal on the post-merger phase, where most differences between our configurations appear. The post-merger signal is characterized by a rapid decrease of the GW amplitude, as the system transitions to an approximately axis-symmetric state over a timescale of ≳10 ms. On this timescale, the remnant's internal motions are effectively damped by radiative losses.

FIG. 19. Characteristic frequency strain of the post-merger GW signal extracted from our R2 simulations at 100 Mpc for the non-spinning (teal) and spinning (purple) setups. Circle, diamond, and square markers correspond, respectively, to the f1, f2, and f3 peaks. Dotted and dash-dotted black lines correspond to the design sensitivity curves of Advanced LIGO [225] and the Einstein Telescope [226], respectively.

In order to better understand the different contributions to the post-merger signal, we show in Fig. 19 the characteristic frequency strain h_c(f) = 2f |h̃(f)|, windowed at t − tmerger ≥ 1.5 ms [215]. For a smoother spectrum, we compute the power spectral density (PSD) of the strain h(t), including all l = 2, 3 multipoles, using Welch's method [216], with the signal divided into segments with 25 % overlap. Additionally, each segment is tapered with a Hann window, and zero-padding ensures 4096 points per segment. The peak frequencies are listed in Table III. The dominant f2 frequencies are related to quadrupolar oscillations of the remnant [217]. They are in general well described by fitting formulae depending on the combined tidal deformability (see, e.g., Ref. [218]). Furthermore, we find the f1 peaks, mainly arising from the l = 2, m = 1 mode, which roughly coincide with the expected f1 ≈ f2/2. These peak frequencies are commonly associated with the one-arm instability, e.g., Refs. [173, 219-223]. The persistence of this mode in the post-merger suggests that spiral wind-waves might be active, contributing to the secular matter outflows [195].
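The spectral smoothing described above maps onto standard scipy.signal.welch options (Hann window, 25 % overlap, zero-padding to 4096 points per segment). The sketch below returns h_c(f) = 2f|h̃(f)| up to normalization conventions; the segment length is a placeholder, not the value used in our analysis.

import numpy as np
from scipy.signal import welch

def characteristic_strain(h, dt, nperseg=2048):
    """Characteristic strain h_c(f) = 2 f |h~(f)| from the strain series h(t),
    with the PSD estimated via Welch's method: Hann-tapered segments,
    25% overlap, and zero-padding to 4096 points per segment."""
    f, psd = welch(h, fs=1.0 / dt, window="hann",
                   nperseg=nperseg, noverlap=nperseg // 4, nfft=4096)
    return f, 2.0 * f * np.sqrt(psd)   # |h~(f)| proxy up to normalization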
Finally, we remark that the physical origin of the f3 peak, roughly corresponding to f3 ≈ 3f2/2, is still unclear [3, 217, 224]. When comparing the peak frequencies of the χ = 0.0 and χ = 0.1 configurations, we note slight shifts. At R2 resolution, the frequencies f1 and f2 are only shifted by ∼50 Hz, whereas f3 is shifted by ∼150 Hz. On the other hand, we quantify numerical uncertainties by the difference in frequency peaks for a given configuration between resolutions R1 and R2, which is ∼50 Hz for f1 and f2, and ∼(200−300) Hz for f3. Although the shift to higher frequencies at R2 resolution could easily be explained by the higher angular momentum and the less compact remnant in the spin-aligned BNS merger, we suspect that the observed differences in our simulations are dominated by grid resolution effects.

TABLE III. Peak post-merger GW frequencies. From left to right, the columns list simulation name, resolution, first peak frequency f1, dominant peak frequency f2, and third peak frequency f3.

  Simulation | Resolution | f1 [kHz] | f2 [kHz] | f3 [kHz]
  --------------------------------------------------------
  χ = 0.0    | R1         | 1.672    | 3.048    | 4.819
             | R2         | 1.622    | 3.000    | 4.622
  χ = 0.1    | R1         | 1.524    | 3.000    | 4.474
             | R2         | 1.574    | 3.049    | 4.770

B. Kilonova light curves

Kilonovae are thermal transients powered by the radioactive decay of r-process nuclei synthesized in the ejected merger material [2, 227]. The emission of EM radiation therefore depends sensitively on the ejecta properties (see Sec. III C) with the corresponding nucleosynthesis yields (see Sec. III D). The observable signal is determined by several factors, including the ejecta geometry, viewing angle, energy released from the synthesized nuclei, and the interaction of the emitted radiation with the expanding matter, e.g., [113, 126, 228-232].

To obtain the kilonova light curves associated with the two BNS mergers, we perform radiative transfer simulations with possis [112, 113] based on the data from our high-resolution NR simulations. As described in Sec. II B, we extract ejecta data at a time tcut, when the entire material is still contained within the computational domain, and supplement it with information from the detection sphere at r ≃ 440 km. For the non-spinning and spinning BNS simulations, tcut is at 12.50 ms and 15.05 ms after the merger, respectively. Although the nuclear abundance calculations in Sec. III D also provide heating rates and thermalization efficiencies, which are key ingredients for kilonova modeling, we instead adopt the precomputed libraries from Refs. [124-126], as implemented in possis [113]. This choice avoids possible distortions caused by the coarse sampling of the late-time ejecta through the tracer particles used for the nucleosynthesis calculations, as discussed in Appendix B and Sec. III D.

FIG. 20. Ejecta data used as input for the radiative transfer simulations with possis. The maps show the density ρ, the electron fraction Ye, and the temperature T in the vx-vz plane at one day after the merger for the non-spinning (upper panels) and spinning (bottom panels) BNS systems.

The ejecta used as input for the radiative transfer simulations are presented in Fig. 20. The maps show the extracted rest-mass density and the electron fraction, as well as the temperatures calculated within possis, in the vx-vz plane one day after the merger. The respective ejecta masses are 0.0137 M⊙ and 0.0098 M⊙ for the non-spinning and spinning BNS configurations. The maps summarize the main properties of the ejecta discussed in
Sec. III C: The non-spinning BNS merger produces faster ejecta that are more extended in the vx-vz plane than in the spinning configuration. In both configurations, the electron fraction is higher in the polar regions than in the orbital plane, where the non-spinning configuration exhibits lower Ye values than the spinning one. Furthermore, both systems show a wind component with high Ye and slow velocities concentrated near the inner region of the ejecta.

In Fig. 21, we present bolometric light curves for observers located at the pole, with a viewing angle of ι = 0°, and in the orbital plane, with ι = 90°, together with the deposition curve. The latter represents the total amount of luminosity injected into the system as a function of time. With the assumed axial symmetry of the ejecta, the analysis can be restricted to an azimuthal angle φ = 0°, i.e., we consider observers in the x-z plane only. The bolometric light curves are overall brighter in the non-spinning configuration due to the larger amount of ejecta contributing to the thermalized energy powering the kilonova emission. As expected, the signals are brighter at a polar viewing angle and fainter for ι = 90°, due to the lower electron fraction in the orbital plane. The material with low Ye produces more heavy elements (see Sec. III D) and thus higher opacities [127], causing more absorption events of the photon packets. Consequently, photons propagate more freely in the polar region than in the orbital plane, which in turn leads to more radiation and brighter light curves observable from the pole. Because the non-spinning configuration has a greater variation in electron fractions than the spinning one, reaching lower Ye in the orbital plane, the variation for different viewing angles ι is also more significant.

FIG. 21. Bolometric light curves of the kilonova associated with the non-spinning (green) and spinning (purple) BNS configurations for observers looking along the polar axis, ι = 0° (solid lines), and in the orbital plane, ι = 90° (dashed lines). The deposition curve is shown in dotted lines.

The shapes of the light curves are otherwise similar: they initially lie below the deposition curve for up to ∼1 day because the inner ejecta are still optically thick, trapping the radiation at early times. As the ejecta expand and become more transparent, the previously trapped radiation escapes and produces an overshoot, exceeding the deposition curve at ∼7 days. Finally, once the ejecta are fully transparent, the light curves converge to the deposition curve.

Broad-band kilonova light curves are shown in Fig. 22 for different frequency bands in the ultraviolet, optical, and infrared ranges. The apparent magnitudes are scaled to a luminosity distance of dL = 100 Mpc. As expected for kilonova signals, the light curves reach their peak earlier and decay faster in bluer filters than in redder ones. In fact, the light curves in the u-, g-, and r-bands merely peak and decrease immediately. In the i-, z-, and y-bands, the peak occurs at ≲1 day, in the J-band at ≳1 day, and in the K-band only at ∼2 days after the merger. This behavior is due to the fact that radiation is absorbed and re-emitted at longer wavelengths in the lanthanide-rich, low-Ye ejecta. Since radiation that undergoes this re-processing mechanism multiple times takes longer to diffuse and escape the ejecta, the kilonova signal shifts to redder wavelengths over time.
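For reference, scaling a band-averaged spectral luminosity to an apparent AB magnitude at dL = 100 Mpc, as done for Fig. 22, follows the standard zero point m_AB = −2.5 log10(Fν / 3631 Jy). A sketch with an arbitrary luminosity value (this is generic photometry, not the possis pipeline):

import numpy as np

def apparent_ab_magnitude(L_nu, d_L_mpc=100.0):
    """AB magnitude for an isotropic-equivalent spectral luminosity L_nu
    [erg/s/Hz] at luminosity distance d_L, using 3631 Jy as the zero point."""
    d_cm = d_L_mpc * 3.0857e24                 # Mpc -> cm
    f_nu = L_nu / (4.0 * np.pi * d_cm**2)      # flux density [erg/s/cm^2/Hz]
    return -2.5 * np.log10(f_nu / 3.631e-20)   # 3631 Jy in erg/s/cm^2/Hz

print(apparent_ab_magnitude(1.0e25))   # arbitrary example value, ~24 mag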
The non-spinning configuration is systematically brighter in the redder frequency bands, whereas the u- and g-bands show similar magnitudes for both configurations. We attribute this redder component to the lower Ye values reached in the material ejected in the non-spinning configuration, leading to a higher abundance of heavy elements with higher opacities. As a result, radiation is absorbed and re-emitted more frequently, enhancing the infrared component of the kilonova. Again, we find a clear viewing-angle dependence, most pronounced in the optical and near-infrared bands. In particular, in the g-, r-, i-, z-, and y-bands of the non-spinning BNS configuration, the differences are on the order of ∼1−1.5 mag, depending on filter and time. Similar to the bolometric light curves, the differences are smaller for the spinning BNS configuration.

FIG. 22. Kilonova light curves for the non-spinning (green) and spinning (purple) BNS configurations in different frequency bands, indicated in the bottom left corner of each panel. We show the apparent magnitude at a luminosity distance of dL = 100 Mpc. The different light curves correspond to observers looking along the polar axis, ι = 0° (solid lines), and in the orbital plane, ι = 90° (dashed lines). The gray dotted lines indicate, in the respective frequency bands, 5σ limiting magnitudes for single exposures of the Zwicky Transient Facility (ZTF) [233] and for design specifications of the Vera Rubin Observatory [234].

To assess the observability of the computed kilonova signals, Fig. 22 also shows limiting magnitudes for single exposures of the Zwicky Transient Facility (ZTF) in the g-, r-, and i-bands [233] and of the Vera Rubin Observatory in the u-, g-, r-, i-, z-, and y-bands [234]. At a luminosity distance of 100 Mpc, the kilonova would be detectable with ZTF only within the first few hours after the merger; depending on viewing angle and filter, within ∼0.2 days for the spinning configuration and slightly longer for the non-spinning one. In contrast, the signals in the infrared would be observable by the Vera Rubin Observatory for about one to two days after the merger. Although a detection by the Vera Rubin Observatory would be more likely, this result highlights the challenge of observing a kilonova signal as part of a multi-messenger detection of a BNS merger event at this luminosity distance.

C. Afterglow of the kilonova and GRB

From an observer's perspective, at least two types of afterglows are expected to accompany BNS mergers, arising from the interaction with the interstellar medium. One type arises from the highly relativistic, collimated jet that is launched during or after the merger and leads to the emission of a GRB through a largely unknown mechanism [23, 24, 235-237]; it is therefore referred to as the GRB afterglow. The second type emerges from the mildly relativistic material ejected during the inspiral, merger, and post-merger. This material typically contributes to the kilonova, but it also contains a high-velocity component that creates a kilonova afterglow as a long-term transient.
Our ejecta velocities range up to 0.8 c (0.65 c) in the non-spinning (spinning) case, with a total mass of 0.0137 (0.0098) M⊙, of which 5 × 10^−5 (2 × 10^−9) M⊙ have a velocity higher than 0.5 c. We use the polar velocity distribution of the ejecta in our simulations as input to pyblastafterglow to compute the expected kilonova afterglow, as described in Sec. II B 3.

The calculation of the GRB afterglow, however, requires knowledge of the jet structure and kinetic energy. While various NR simulations have shown that jet-like structures can emerge from the post-merger remnant through a poloidal magnetic field [12, 13] in combination with neutrino winds [14, 15, 80, 82], resolving the launch of an ultra-relativistic jet with Γ ≳ 100 remains currently infeasible, as the temporal and spatial resolution is insufficient to accurately capture the magnetic field amplification and matter acceleration. Thus, even though our simulations also display high-velocity outflows with β ≈ 0.4 along the polar axes (see Fig. 11), it is not possible to establish robust estimates for the GRB jet parameters from our runs. However, for comparison with the kilonova afterglow and to assess its observability, we calculate a GRB afterglow light curve with pyblastafterglow using fiducial values based on GRB170817A. Specifically, we set θc = 6.3°, θw = 18.9°, and Γ0 = 500 based on the posterior median for GRB170817A as inferred in Ref. [238] with the pyblastafterglow model. Likewise, the microphysical parameters are set to p = 2.11, ϵe = 0.1, ϵB = 0.001. The posterior median for E0 was 10^52.2 erg, but since the isotropic kinetic-energy equivalent is arguably the parameter for which one could expect the most variability, we also show a case where it is two orders of magnitude lower.

FIG. 23. Kilonova afterglow light curves from the non-spinning (green) and spinning (purple) BNS simulations. Additionally, we show fiducial GRB afterglow light curves (black and gray) that are not directly related to the NR simulations. Solid lines mark observations at ι = 0°, dashed lines at ι = 90°. The panels on the left are for an interstellar medium density of n0 = 0.1 cm^−3, the panels on the right for n0 = 0.01 cm^−3. From top to bottom, the panels show the emission at frequencies of 1 GHz, 500 THz, and 1 keV. All light curves are computed at dL = 100 Mpc. Dotted lines indicate rough brightness limits for various observatories (see text).

In Fig. 23, we show the resulting kilonova afterglows from our BNS simulations and the fiducial GRB afterglow light curves. In particular, we show light curves in radio at 1 GHz, optical at 500 THz, and X-ray at 1 keV for two different interstellar medium densities, namely n0 = 0.1 cm^−3 and n0 = 0.01 cm^−3. The kilonova afterglow in radio contains two separate contributions, from the thermal electron population and from the non-thermal population. The former is dominant at t ≤ 100 days, while the latter is the main contributor at t ≈ 1000 days.
Because of the strong dependence of the thermal emissivity on the shock velocity [128], the thermal component is more pronounced in the non-spinning case with its higher ejecta masses and velocities. There it even gives rise to a double-peaked light curve, where the first peak is due to the thermal electrons. At higher frequencies ν > 1 GHz, the emission is dominated by the non-thermal (or power-law) electron distribution. The timescale of that non-thermal peak depends on the deceleration of the blast waves, where a higher n0 causes earlier and stronger deceleration. Hence, the peak is shifted from t ≈ 550 days for n0 = 0.1 cm^−3 to t ≈ 1100 days for n0 = 0.01 cm^−3. Since the kilonova ejecta are not collimated, the observation angle ι has only a moderate effect on the observed light curve, though for an observer in the orbital plane at ι = 90°, the light curve is generally dimmer and rises more slowly. This is because the kilonova afterglow is dominated by the material near the equatorial plane, where most of the ejecta mass resides. An observer located at the pole with a viewing angle of ι = 0° sees more of the equatorial ejecta, whereas an observer at ι = 90° receives emission mainly from one side.

For the kilonova afterglows shown in Fig. 23, we have set p = 2.5, ϵe = 0.01, ϵB = 0.001, and ϵT = 1. These values are consistent with particle-in-cell simulations [131, 239, 240], though the possible ranges remain broad. For this reason, we have conducted additional kilonova afterglow calculations varying the microphysical parameters. For instance, keeping the other parameters fixed but setting ϵe = 0.01 and ϵB = 10^−5 reduces the flux density by roughly a factor of 500, while setting ϵe = 0.2 and ϵB = 0.005 increases the flux density by a factor of 10. Additionally, ϵe also influences the relative contribution of the thermal electron population, i.e., whether the radio light curve displays an early peak or not. The electron power-law index p increases the non-thermal synchrotron flux and affects the power-law decay of the late-time light curve, while the thermal part remains relatively unaffected. The thermal efficiency ϵT, in contrast, determines the energy budget of the thermal electrons, and thus its choice can alter the early part of the radio light curve notably. Recently, Ref. [129] suggested a more realistic value of ϵT = 0.4 based on the particle-in-cell simulations of Ref. [241]. Adopting this value while keeping the other parameters from Fig. 23 would reduce the thermal electron radiation notably and erase the first peak.

From Fig. 23 it is also apparent that, unless the BNS is observed far off-axis, the GRB afterglow dominates the early-time light curve and practically prevents a distinct detection of the early kilonova afterglow. However, if the isotropic jet energy is low, the thermal radio peak could rise above the GRB afterglow and cause a rebrightening of the light curve at t ∼ 100 days. This hinges on the uncertain amount of thermal electron energy and the interstellar density. A rebrightening could also be caused in a similar way later, at t ∼ 1000 days, by the non-thermal peak. At the higher jet energy of 10^52.2 erg, the late-time GRB afterglow always outshines the kilonova afterglow by several orders of magnitude, rendering the latter effectively undetectable as a distinct source. In general, at a luminosity distance of 100 Mpc, the absolute brightness of the kilonova afterglow remains low and does not exceed the typical brightness limits of current telescopes.
For n0 = 0.01 cm^−3, the light curve peak remains below the 3σ rms limit for a 12 × 2.5 h observation with the VLA radio telescope [27] or for a similar observation with the SKA [242, 243]. However, at n0 = 0.1 cm^−3, the thermal and non-thermal radio peaks get close to the VLA detection limit and would be observable with the SKA if the GRB afterglow is not too bright. In near-optical frequency bands, however, detection of the kilonova afterglow might prove even more challenging. The 5σ image depth of the Vera Rubin Observatory in its main survey [234, 244] and the estimated 5σ limiting magnitude of the ELT [245, 246] significantly exceed the expected non-thermal optical peak of the kilonova afterglow, regardless of n0 or other microphysical parameters. This also applies to Chandra X-ray detection, assuming a flux limit of 5.3 × 10^−17 erg cm^−2 s^−1 at 200 ks exposure [247]. For next-generation observatories like ATHENA [248] or AXIS [249], the flux limit in this exposure time might be 2.5 × 10^−18 erg cm^−2 s^−1 [250], which would make the X-ray peak detectable for n0 = 0.1 cm^−3. We emphasize again that this assessment applies at dL = 100 Mpc and assumes a typical exposure time for these telescopes. Dedicated, ultra-deep follow-ups for a close BNS merger could enhance detectability prospects. We also point out the possibility that a significant amount of the fast-tail ejecta is not resolved in our NR simulations due to artificial atmosphere settings and resolution issues [174]. Thus, one could speculate whether the kilonova afterglow might be more powerful in certain cases than presented here. Nevertheless, given our BNS ejecta profiles, a confident kilonova afterglow detection remains challenging, especially in a scenario with a powerful near-axis GRB afterglow.

V. CONCLUSIONS

We perform state-of-the-art numerical-relativity simulations of equal-mass binary neutron star mergers in a spinning and a non-spinning configuration with bam, incorporating both neutrino radiation and magnetohydrodynamical effects. The neutron star matter is modeled using the ABHT(QMC-RMF) equation of state. We initialize a poloidal magnetic field with a maximum strength of 10^15 G inside the stars. Each system is simulated at a high resolution with a grid spacing of ∆x ≈ 93 m covering the neutron stars and a low resolution with ∆x ≈ 186 m, and is evolved up to ∼100 ms after the merger, covering, besides the dynamical ejecta, also the secular outflows driven by magnetic and neutrino winds.

With the same initial separation of 41.2 km, the system with spin-aligned stars merges about 0.88 ms later than the non-spinning system due to the repulsive spin-orbit interaction. The reduced impact velocity at the merger leads to a less violent collision in the spinning configuration, resulting in lower temperatures in the merger remnant and a smaller amount of shock-heated ejecta. Due to the higher angular momentum, the spinning system forms a more massive disk and a more extended remnant with lower central rest-mass densities.

The initial magnetic-field amplification due to the Kelvin-Helmholtz instability, triggered in the shear layer during the merger, behaves similarly in both configurations and reaches similar field strengths and energies within the first ≲10 ms after the merger. Subsequently, the magnetic energy increases further through magnetic winding and the magneto-rotational instability, driven by the interaction of the magnetic field with the differentially rotating disk.
Higher magnetic energies are reached in the non-spinning scenario.

The amount of ejected material is larger in the non-spinning configuration. In the simulated equal-mass systems, the dynamical ejecta are mainly generated by shocks at the collision interface, with only minor contributions from tidal disruption. The more violent merger of the non-spinning system produces stronger shocks, leading to larger ejecta masses and higher outflow velocities. Furthermore, we find for this system a lower electron fraction in the orbital plane of ≲0.1, and thus more neutron-rich ejecta. On later timescales of > 10 ms after the merger, neutrino-driven winds power a slower outflow component with progressively higher electron fractions. Finally, we observe the launch of a mildly relativistic collimated outflow with velocities of up to 0.4 c along the polar direction and high magnetization at ∼100 ms after the merger in the non-spinning configuration.

The thermodynamical conditions in the different ejecta components lead to characteristic abundance patterns from r-process nucleosynthesis. We confirm that the low-Ye components of the dynamical ejecta robustly reproduce the global features of the heavy-element abundances observed in the solar system, whereas the late winds cannot substantially surpass the first r-process peak. A substantial lack of low-mass lanthanides points towards our still incomplete understanding of various nuclear decay properties far from stability, while the relative paucity of nuclei around A = 190 is linked to an underproduction of cold ejecta in equal-mass systems.

To obtain a comprehensive picture of the observable multi-messenger signatures, we extract the gravitational-wave signal and use the ejecta data to compute the light curves of the associated kilonova and its afterglow. Since the description of the gravitational-wave signal during the inspiral is well established by various approximants, we focus on the analysis of the post-merger frequencies, where we found slight shifts in the dominant peak frequencies between the two configurations. However, these differences are within the numerical uncertainties of the simulations and are therefore likely artifacts of the grid resolution.

For the EM counterparts, we focus on the high-resolution simulations. We compute kilonova light curves using the Monte Carlo radiative transfer code possis. The emission is overall brighter in the non-spinning configuration, primarily due to the larger amount of ejecta. The kilonova is also slightly redder for this system, which we attribute to the lower electron fractions reached in the ejecta. Moreover, the light curves show a stronger dependence on the viewing angle in the non-spinning configuration than in the spinning one. The afterglow emission is computed using the semi-analytic pyblastafterglow code. For the kilonova afterglow, we use ejecta profiles binned over different viewing angles ι and asymptotic velocities. While both a Maxwellian (thermal) and a power-law (non-thermal) electron component contribute to the afterglow, the non-spinning configuration again produces brighter light curves due to the larger ejecta mass and higher ejecta velocities. Since our numerical-relativity simulations do not resolve an ultra-relativistic jet, we model the GRB afterglow assuming a fiducial Gaussian jet with parameters inferred from GRB170817A and explore different jet energies. When observed nearly on-axis, the GRB afterglow emission dominates over the kilonova afterglow.
For an observer in the orbital plane, the GRB afterglow is initially suppressed. Nevertheless, the peak of the thermal emission of the kilonova afterglow is only observable for sufficiently low isotropic jet energies.

While our BNS simulations include a comprehensive treatment of the key microphysical and magnetohydrodynamic processes in mergers of binary neutron stars, the presence of muons and anti-muons, as well as resistive effects of the magnetic field, are neglected. The former is expected to influence primarily the remnant lifetime and the ejecta masses as well as their composition, thereby affecting the signatures of the kilonova [251, 252]. The latter becomes significant in regions with low conductivity, such as in the magnetospheres of neutron stars and the polar cap above the remnant, e.g., [62, 253]. Accounting for finite conductivity would enable modeling magnetic reconnection and dissipation processes that can affect both the jet dynamics and the electromagnetic emission. In addition, simulations using more realistic initial magnetic-field configurations consistent with astrophysical observations are needed. We plan to address these aspects in future work and extend our simulations to resistive magnetohydrodynamics with muons.

VI. DATA AVAILABILITY

Gravitational waveforms will be released as part of the CoRe database [254, 255]. Additionally, we provide an animation of the two high-resolution simulations [256]. Further simulation data can be provided upon reasonable request.

ACKNOWLEDGMENTS

We thank B. Brügmann for valuable comments and discussions during this project. We further thank Rosa A. for her thoughtful supervision during the text review of this manuscript. A.N., I.M., and T.D. gratefully acknowledge support from the Deutsche Forschungsgemeinschaft, DFG, project number 504148597 (DI 2553/7). Furthermore, T.D., H.K., H.R., and H.G. acknowledge funding from the EU Horizon under ERC Starting Grant, no. SMArt-101076369. L.B. is supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, under Award No. DE-FG02-05ER41375. A.H. acknowledges support by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, under Award No. DE-FG02-05ER41375. A.H. furthermore acknowledges financial support by the UKRI under the Horizon Europe Guarantee project EP/Z000939/1. M.B. acknowledges the Department of Physics and Earth Science of the University of Ferrara for financial support through the FIRD 2024 grant. The simulations with bam were performed on the national supercomputer HPE Apollo Hawk at the High Performance Computing (HPC) Center Stuttgart (HLRS) under the grant number mmapicture/44289, and on the DFG-funded research cluster Jarvis at the University of Potsdam (INST 336/173-1; project number: 502227537). Views and opinions expressed are those of the authors only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them.

Appendix A: Fluid-radiation equilibration step

The computation of the trapped equilibrium solution of Refs. [73, 104] is based on considering the fluid and radiation in the trapped regime as a single system in thermal equilibrium with a total energy density e and lepton number Yl.
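To make the structure of this equilibration step concrete, the sketch below solves a 2×2 system of the same form as Eqs. (A1) and (A2), given below, for the unknowns (T_eq, Y_e,eq) with scipy. The thermodynamic functions here are toy stand-ins (a linear internal energy and power-law neutrino fractions with an ad hoc chemical potential), chosen only so the snippet runs standalone; they are not the EOS or opacity model used in this work.

```python
import numpy as np
from scipy.optimize import fsolve

# Toy stand-ins for the EOS internal energy and the neutrino fractions of
# Eqs. (A3)-(A4); the functional forms are purely illustrative assumptions.
def eps(rho, T, Ye):
    return 2.0 * T                      # toy fluid energy per unit mass

def Z_nu(T):
    return 1e-3 * T**4                  # toy neutrino energy fraction

def Y_nu(T, mu):
    return 1e-3 * T**3 * mu             # toy neutrino number fraction

def residuals(x, rho, e, Yl):
    """Residuals of an (A1)-(A2)-like system for (T_eq, Ye_eq), with a toy
    equilibrium chemical potential mu = Ye - 0.3 and equal species fractions."""
    T, Ye = x
    mu = Ye - 0.3
    r1 = rho * eps(rho, T, Ye) + rho * 6.0 * Z_nu(T) - e   # energy balance
    r2 = Ye + Y_nu(T, mu) - Y_nu(T, -mu) - Yl              # lepton number
    return [r1, r2]

rho, e, Yl = 1.0, 2.006, 0.30           # constructed so T_eq=1, Ye_eq=0.3
T_eq, Ye_eq = fsolve(residuals, x0=[0.5, 0.2], args=(rho, e, Yl))
print(T_eq, Ye_eq)                      # -> 1.0, 0.3
```

In Ref. [73] an analogous solve is performed on the fly at every right-hand-side evaluation; the tabulation strategy described next replaces it with a precomputed lookup.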
From Fermi–Dirac statistics, we know that the fluid is supposed to thermalize at a certain Y_e,eq and T_eq satisfying

e = \rho\,\epsilon(\rho, T_{\rm eq}, Y_{e,{\rm eq}}) + \frac{\rho}{m_b}\left[ Z_{\nu_e}(\rho, T_{\rm eq}, Y_{e,{\rm eq}}) + Z_{\bar\nu_e}(\rho, T_{\rm eq}, Y_{e,{\rm eq}}) + 4\,Z_{\nu_x}(\rho, T_{\rm eq}, Y_{e,{\rm eq}}) \right], \quad (A1)

Y_l = Y_{e,{\rm eq}} + Y_{\nu_e}(\rho, T_{\rm eq}, Y_{e,{\rm eq}}) - Y_{\bar\nu_e}(\rho, T_{\rm eq}, Y_{e,{\rm eq}}), \quad (A2)

with the neutrino number and energy fractions given by

Y_{\nu_i} = \frac{4\pi}{(hc)^3}\,\frac{m_b}{\rho}\, T^3\, F_2\!\left(\frac{\mu_{\nu_i}}{T}\right), \quad (A3)

Z_{\nu_i} = \frac{4\pi}{(hc)^3}\,\frac{m_b}{\rho}\, T^4\, F_3\!\left(\frac{\mu_{\nu_i}}{T}\right), \quad (A4)

where F_k denotes the Fermi integral of order k. Ref. [73] solves this system of two equations in two variables on the fly at each computation of the right-hand side. In order to improve efficiency, we employ a tabulation strategy based on a precomputed solution. However, tabulating in terms of the internal energy density e is challenging, given its wide range, which covers several orders of magnitude, and its dependency on the EOS. Therefore, we implicitly define the auxiliary temperature T^* by

\rho\,\epsilon(\rho, T^*, Y_l) = e. \quad (A5)

We sample ρ, T^*, and Y_l on the corresponding values of ρ, T, and Y_e in the EOS table. For each triad (ρ, T^*, Y_l), we compute the corresponding e and solve Eqs. (A1) and (A2) for T_eq and Y_e,eq. The final result is a mapping

(\rho, e, Y_l) \mapsto (\rho, T^*, Y_l) \mapsto (T_{\rm eq}, Y_{e,{\rm eq}}), \quad (A6)

where the first mapping is performed by solving Eq. (A5) for T^* using the standard temperature-recovering routine, and the second one using the same 3D interpolation routines used for computing EOS quantities. Finally, the emissivities computed using (ρ, T, Y_e) and (ρ, T_eq, Y_e,eq) are combined according to the weight function of Ref. [73].

FIG. 24. Comparison between the ejecta passing through a sphere with a radius of ∼440 km, represented by the tracer particles (solid lines), and the matter outflow extracted on the coordinate sphere (faint dashed lines), for the high-resolution non-spinning (green) and spinning (purple) BNS simulations.

Appendix B: Tracer particles

We extend bam to evolve tracer particles in order to record the evolution history of Lagrangian fluid trajectories, allowing for a more detailed analysis of the outflowing material and its origin. The tracers are advected via

\partial_t x^i_{\rm tr} = \alpha v^i - \beta^i, \quad (B1)

where x^i_tr is the position of an individual tracer particle. The lapse α, the shift β^i, and the fluid velocity v^i, as well as the recorded fluid quantities rest-mass density ρ, electron fraction Y_e, and temperature T, are interpolated to the position x^i_tr of the tracer. The tracer particles are evolved during the BNS simulations using a simple forward Euler scheme, with the time step given by that of the finest refinement level covering the respective domain.

In total, we inject ∼10^5 tracer particles into our BNS simulations at ≲1 ms before the merger. They are randomly distributed in neutron star matter in regions with densities 5000 times above the artificial-atmosphere value, corresponding to ρ ≥ 6.176 × 10^7 g cm^−3. The tracer particles are evolved until they cross a sphere of radius ∼737 km; subsequently, we assume a homologous expansion. For the nuclear reaction network, we consider only tracer particles that are unbound and traverse this sphere before the end of our simulation. Excluding ≈100 tracers that are numerically unstable in the nucleosynthesis evolution, we ultimately obtain 58,264 and 60,620 tracers representing the ejecta for the high-resolution non-spinning and spinning BNS simulations, respectively. Although the tracers are massless by definition, nuclear abundance calculations require information on the mass represented by the individual tracer particles.
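The forward Euler tracer update of Eq. (B1) above fits in a few lines of code. In the sketch below, the lapse, shift, and velocity interpolators are placeholders for bam's grid interpolation (trivial flat-spacetime values are used so the snippet runs standalone); the function names are ours, not bam's.

```python
import numpy as np

def advect_tracer(x_tr, dt, lapse, shift, velocity):
    """One forward-Euler step of Eq. (B1): dx^i/dt = alpha*v^i - beta^i.
    lapse/shift/velocity are callables returning the interpolated fields
    at the tracer position (stand-ins for bam's grid interpolation)."""
    alpha = lapse(x_tr)
    beta = shift(x_tr)
    v = velocity(x_tr)
    return x_tr + dt * (alpha * v - beta)

# Trivial flat-spacetime placeholders so the example is self-contained
lapse = lambda x: 1.0
shift = lambda x: np.zeros(3)
velocity = lambda x: np.array([0.1, 0.0, 0.0])  # uniform fluid velocity

x = np.array([10.0, 0.0, 0.0])
for _ in range(5):
    x = advect_tracer(x, dt=0.5, lapse=lapse, shift=shift, velocity=velocity)
print(x)  # -> [10.25  0.  0.]
```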
For this purpose, we compute tracer masses on a sphere of radius ∼440 km.

FIG. 25. Comparison between the ejecta's electron fraction represented by the tracer particles (solid lines) and extracted on the coordinate sphere (faint dashed lines) for the high-resolution non-spinning (green) and spinning (purple) BNS simulations. The mass-weighted histograms are computed for ejecta passing through a sphere with a radius of ∼440 km until ∼20 ms (left panels), ∼50 ms (middle panels), and ∼100 ms (right panels) after the merger.

We extract the ejected mass flux dM_eje/dt(t) on this detection sphere and impose

\frac{dM_{\rm eje}}{dt}(t) = \sum_{\rm tr}^{N(t)} \frac{dM_{\rm tr}}{dt}, \quad (B2)

where we sum over all N(t) tracer particles that pass the sphere between t and t + ∆t. Here, ∆t is the output interval of the tracer data. We approximate the mass flux of an individual tracer particle by

\frac{dM_{\rm tr}}{dt} \approx \Delta\Omega_{\rm tr}\, r_{\rm tr}^2\, \rho_{\rm tr}\, v_{r,{\rm tr}}, \quad (B3)

where r_tr is the distance of the tracer from the coordinate centre (∼440 km), and ρ_tr and v_r,tr denote the rest-mass density and the radial velocity of the fluid element, respectively. ∆Ω_tr is the solid angle represented by the tracer particle. In the absence of more detailed information, we assume for simplicity that each particle passing through the sphere between t and t + ∆t contributes to the total mass flux with the same ∆Ω. Thus, ∆Ω can be scaled to ensure that Eq. (B2) is satisfied. Finally, integration over time yields the mass M_tr ascribed to each tracer particle as it passes through the sphere.

In Fig. 24, we show the evolution of the ejecta mass represented by the tracer particles, obtained by summing M_tr over all tracers crossing a sphere with a radius of ∼440 km, for both high-resolution BNS simulations. For comparison, the ejecta masses extracted directly from the coordinate sphere are shown as faint dashed lines, demonstrating good agreement. In Fig. 25, we further compare the mass-weighted histograms of the electron fraction Y_e in the matter outflow at different times after the merger, computed once from the tracer particles and once from the detection sphere. The histograms agree reasonably well for the low-Y_e material emitted at early times, i.e., ≲20 ms after the merger. Larger deviations occur at later times for the high-Y_e material because there are fewer tracer particles in the secular outflow with a higher electron fraction, and hence we sample this component more coarsely. We assume this to be a consequence of the initial distribution of the tracer particles and the advection scheme: with a density-dependent distribution, the cores of the neutron stars would be better sampled, and the tracer masses would possibly be more uniform. Our choice of a uniform distribution, on the other hand, leads to better coverage of the outer layers of the neutron stars, which have lower density. The vast majority of these tracers is ejected at early times and is hence assigned low masses. By contrast, only few tracers cross the extraction sphere at late times, coming from the accretion disk, which represents an unstable state of advection, as opposed to infall onto the black hole or ejection. As a result, the late ejecta have a much higher mass and dominate the full picture. Of the roughly 60,000 ejected tracers, only 62 (2228) and 29 (4084) tracers make up 50% (90%) of the total ejecta mass in the spinning and non-spinning case, respectively.
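The normalization implied by Eqs. (B2) and (B3) can be made explicit in code. The sketch below is our own minimal reimplementation, not the actual bam post-processing: within each output interval it assigns the same ΔΩ to all crossing tracers, scales it so that the summed tracer flux matches the flux measured on the detection sphere, and integrates in time to obtain M_tr. All input arrays are toy data in arbitrary units.

```python
import numpy as np

def assign_tracer_masses(t_cross, r, rho, v_r, t_edges, dMdt_sphere):
    """Assign masses via Eqs. (B2)-(B3): within each output interval, all
    crossing tracers share one solid angle dOmega, scaled so that the summed
    tracer mass flux equals dM/dt measured on the detection sphere."""
    M_tr = np.zeros_like(t_cross)
    kernel = r**2 * rho * v_r                 # Eq. (B3) without dOmega
    for i in range(len(t_edges) - 1):
        dt = t_edges[i + 1] - t_edges[i]
        sel = (t_cross >= t_edges[i]) & (t_cross < t_edges[i + 1])
        if sel.any():
            dOmega = dMdt_sphere[i] / kernel[sel].sum()   # enforce Eq. (B2)
            M_tr[sel] = dOmega * kernel[sel] * dt         # time integration
    return M_tr

# Toy data (illustrative only)
rng = np.random.default_rng(1)
t_cross = rng.uniform(0.0, 10.0, 200)         # crossing times
r = np.full(200, 440.0)                       # detection radius
rho, v_r = rng.uniform(1, 2, 200), rng.uniform(0.1, 0.4, 200)
t_edges = np.linspace(0.0, 10.0, 11)
dMdt_sphere = np.ones(10)                     # measured flux per interval
M = assign_tracer_masses(t_cross, r, rho, v_r, t_edges, dMdt_sphere)
print(M.sum())  # ~10: total mass equals the time integral of dM/dt
```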
As discussed in Sec. III D, however, these have a negligible impact on the yield of massive nuclei.

We finally note that technical complications occurred in the R2 simulation for the non-spinning BNS configuration, as the tracer particles were randomly redistributed in the neutron star material at t_step = 7.81 ms and t_step = 17.62 ms. This error resulted from limited storage and a failure of the checkpointing routine for the tracer particles, leading to a new tracer initialization after reaching the computing system's walltime. We resolve this by "stacking" matching trajectories across these time steps with the following procedure, where we refer to tracer particles before and after a restart as "old" and "new", respectively.

• First, we identified tracer particles whose temperatures are still too high to be relevant for the nuclear network calculation.

• If T > 7 GK, we stacked the trajectory of the "old" tracer particle with that of the closest "new" tracer particle that also satisfies T > 7 GK, by looking for the minimum of | (x^i_tr,old + v^i_tr,old ∆t) − x^i_tr,new |.

• Otherwise, if T ≤ 7 GK, we stacked the trajectory of the "old" tracer particle with that of a "new" tracer particle having similar fluid properties for T, ρ, and Y_e. Due to the assumed axial symmetry of the system, it is sufficient that the "new" tracer particle has a similar position in z and √(x² + y²) as the "old" tracer particle. The trajectories were only stacked if the relative differences in rest-mass density and temperature were sufficiently small, with |ρ_tr,old/ρ_tr,new − 1| < 1 and |T_tr,old/T_tr,new − 1| < 10, and the absolute difference in Y_e was below 0.005.

TABLE IV. The consumed compute time and energy, and the amount of GHG emissions induced by our NR simulations.

               Compute time         Energy    Grid emissions
               [millions core-h]    [MWh]     [tCO2]
Evolution R1   2.00                 8.18      1.57
Evolution R2   27.89                114.37    21.96
Total          29.88                122.56    23.53

Appendix C: Consumed resources and carbon footprint

Performing high-resolution NR simulations with detailed physics is computationally expensive and thus requires a large amount of energy. Most of the time, computing facilities are attached to national grids, which acquire energy from a mix of different sources, including fossil fuels. This in turn induces a tangible amount of greenhouse gas (GHG) emissions, which are driving climate change [257–262]. We estimate the amount of GHG emissions produced by the NR simulations in this study. Starting from November 2024, HLRS began to provide detailed power-consumption statistics. Using these data for the jobs after that date, we estimate the average power consumption of a single Hawk node used for our simulations to be 525 W. (A simple estimate via the thermal design power (TDP) of the CPUs yields 450 W instead, which is ∼15% lower than the full node power consumption.) Using this average node power, we estimate the energy consumed and the GHG emissions induced by our simulations, assuming the specific carbon intensity of the Universität Stuttgart at the Campus Vaihingen in 2023 of 192 gCO2/kWh [263]. In Table IV, we list the consumed compute resources, energy, and the induced grid emissions.

[1] R. Fernández and B. D. Metzger, Ann. Rev. Nucl. Part. Sci. 66, 23 (2016), arXiv:1512.05435 [astro-ph.HE].
[2] B. D. Metzger, Living Rev. Rel. 20, 3 (2017), arXiv:1610.09381 [astro-ph.HE].
[3] L. Baiotti and L. Rezzolla, Rept. Prog. Phys. 80, 096901 (2017), arXiv:1607.03540 [gr-qc].
[4] M. D. Duez and Y. Zlochower, Rept. Prog. Phys. 82, 016902 (2019), arXiv:1808.06011 [gr-qc].
[5] S. Bernuzzi, Gen. Rel. Grav. 52, 108 (2020), arXiv:2004.06419 [astro-ph.HE].
[6] M. Ruiz, S. L. Shapiro, and A. Tsokaros, Front. Astron.
Space Sci. 8, 39 (2021), arXiv:2102.03366 [astro-ph.HE].
[7] J. M. Lattimer and D. N. Schramm, Astrophys. J. Lett. 192, L145 (1974).
[8] S. Rosswog, M. Liebendörfer, F. K. Thielemann, M. B. Davies, W. Benz, and T. Piran, Astron. Astrophys. 341, 499 (1999), arXiv:astro-ph/9811367 [astro-ph].
[9] O. Korobkin, S. Rosswog, A. Arcones, and C. Winteler, Mon. Not. Roy. Astron. Soc. 426, 1940 (2012), arXiv:1206.2379 [astro-ph.SR].
[10] S. Wanajo, Y. Sekiguchi, N. Nishimura, K. Kiuchi, K. Kyutoku, and M. Shibata, Astrophys. J. Lett. 789, L39 (2014), arXiv:1402.7317 [astro-ph.SR].
[11] J. J. Cowan, C. Sneden, J. E. Lawler, A. Aprahamian, M. Wiescher, K. Langanke, G. Martínez-Pinedo, and F.-K. Thielemann, Rev. Mod. Phys. 93, 15002 (2021), arXiv:1901.01410 [astro-ph.HE].
[12] L. Rezzolla, B. Giacomazzo, L. Baiotti, J. Granot, C. Kouveliotou, and M. A. Aloy, Astrophys. J. Lett. 732, L6 (2011), arXiv:1101.4298 [astro-ph.HE].
[13] M. Ruiz, R. N. Lang, V. Paschalidis, and S. L. Shapiro, Astrophys. J. Lett. 824, L6 (2016), arXiv:1604.02455 [astro-ph.HE].
[14] P. Mösta, D. Radice, R. Haas, E. Schnetter, and S. Bernuzzi, Astrophys. J. Lett. 901, L37 (2020), arXiv:2003.06043 [astro-ph.HE].
[15] K. Kiuchi, A. Reboul-Salze, M. Shibata, and Y. Sekiguchi, Nature Astron. 8, 298 (2024), arXiv:2306.15721 [astro-ph.HE].
[16] B. P. Abbott et al. (LIGO Scientific, Virgo), Phys. Rev. Lett. 119, 161101 (2017), arXiv:1710.05832 [gr-qc].
[17] B. P. Abbott et al. (LIGO Scientific, Virgo), Astrophys. J. Lett. 850, L39 (2017), arXiv:1710.05836 [astro-ph.HE].
[18] I. Arcavi et al., Nature 551, 64 (2017), arXiv:1710.05843 [astro-ph.HE].
[19] D. A. Coulter et al., Science 358, 1556 (2017), arXiv:1710.05452 [astro-ph.HE].
[20] V. M. Lipunov et al., Astrophys. J. Lett. 850, L1 (2017), arXiv:1710.05461 [astro-ph.HE].
[21] N. R. Tanvir et al., Astrophys. J. Lett. 848, L27 (2017), arXiv:1710.05455 [astro-ph.HE].
[22] S. Valenti, D. J. Sand, S. Yang, E. Cappellaro, L. Tartaglia, A. Corsi, S. W. Jha, D. E. Reichart, J. Haislip, and V. Kouprianov, Astrophys. J. Lett. 848, L24 (2017), arXiv:1710.05854 [astro-ph.HE].
[23] A. Goldstein et al., Astrophys. J. Lett. 848, L14 (2017), arXiv:1710.05446 [astro-ph.HE].
[24] V. Savchenko et al., Astrophys. J. Lett. 848, L15 (2017), arXiv:1710.05449 [astro-ph.HE].
[25] A. Hajela et al., Astrophys. J. Lett. 886, L17 (2019), arXiv:1909.06393 [astro-ph.HE].
[26] A. Hajela et al., Astrophys. J. Lett. 927, L17 (2022), arXiv:2104.02070 [astro-ph.HE].
[27] A. Balasubramanian, A. Corsi, K. P. Mooley, K. Hotokezaka, D. L. Kaplan, D. A. Frail, G. Hallinan, D. Lazzati, and E. J. Murphy, Astrophys. J. 938, 12 (2022), arXiv:2205.14788 [astro-ph.HE].
[28] B. P. Abbott et al. (LIGO Scientific, Virgo, Fermi-GBM, INTEGRAL), Astrophys. J. Lett. 848, L13 (2017), arXiv:1710.05834 [astro-ph.HE].
[29] R. Abbott et al. (LIGO Scientific, Virgo), Phys. Rev. D 103, 122002 (2021), arXiv:2010.14529 [gr-qc].
[30] B. P. Abbott et al. (LIGO Scientific, Virgo), Phys. Rev. Lett. 123, 011102 (2019), arXiv:1811.00364 [gr-qc].
[31] J. Sakstein and B. Jain, Phys. Rev. Lett. 119, 251303 (2017), arXiv:1710.05893 [astro-ph.CO].
[32] J. M. Ezquiaga and M. Zumalacárregui, Phys. Rev. Lett. 119, 251304 (2017), arXiv:1710.05901 [astro-ph.CO].
[33] P. Creminelli and F. Vernizzi, Phys. Rev. Lett. 119, 251302 (2017), arXiv:1710.05877 [astro-ph.CO].
[34] T. Baker, E. Bellini, P. G.
Ferreira, M. Lagos, J. Noller, and I. Sawicki, Phys. Rev. Lett. 119, 251301 (2017), arXiv:1710.06394 [astro-ph.CO]. [35] B. P. Abbott et al. (LIGO Scientific, Virgo, Fermi GBM, INTEGRAL, IceCube, AstroSat Cadmium Zinc Tel- luride Imager Team, IPN, Insight-Hxmt, ANTARES, Swift, AGILE Team, 1M2H Team, Dark Energy Camera GW-EM, DES, DLT40, GRAWITA, Fermi-LAT, ATCA, ASKAP, Las Cumbres Observatory Group, OzGrav, DWF (Deeper Wider Faster Program), AST3, CAAS- TRO, VINROUGE, MASTER, J-GEM, GROWTH, JAGWAR, CaltechNRAO, TTU-NRAO, NuSTAR, Pan- STARRS, MAXI Team, TZAC Consortium, KU, Nordic Optical Telescope, ePESSTO, GROND, Texas Tech University, SALT Group, TOROS, BOOTES, MWA, CALET, IKI-GW Follow-up, H.E.S.S., LOFAR, LWA, HAWC, Pierre Auger, ALMA, Euro VLBI Team, Pi of Sky, Chandra Team at McGill University, DFN, ATLAS Telescopes, High Time Resolution Universe Survey, RI- MAS, RATIR, SKA South Africa/MeerKAT), Astrophys. J. Lett. 848, L12 (2017), arXiv:1710.05833 [astro-ph.HE]. [36] B. P. Abbott et al. (LIGO Scientific, Virgo), Phys. Rev. Lett. 121, 161101 (2018), arXiv:1805.11581 [gr-qc]. [37] A. Bauswein, O. Just, H.-T. Janka, and N. Stergioulas, Astrophys. J. Lett. 850, L34 (2017), arXiv:1710.06843 [astro-ph.HE]. [38] B. Margalit and B. D. Metzger, Astrophys. J. Lett. 850, L19 (2017), arXiv:1710.05938 [astro-ph.HE]. [39] D. Radice, A. Perego, F. Zappa, and S. Bernuzzi, Astro- phys. J. Lett. 852, L29 (2018), arXiv:1711.03647 [astro- ph.HE]. [40] M. W. Coughlin et al., Mon. Not. Roy. Astron. Soc. 480, 3871 (2018), arXiv:1805.09371 [astro-ph.HE]. [41] E. R. Most, L. R. Weih, L. Rezzolla, and J. Schaffner- Bielich, Phys. Rev. Lett. 120, 261103 (2018), arXiv:1803.00549 [gr-qc]. [42] C. D. Capano, I. Tews, S. M. Brown, B. Margalit, S. De, S. Kumar, D. A. Brown, B. Krishnan, and S. Reddy, Nature Astron. 4, 625 (2020), arXiv:1908.10352 [astro- ph.HE]. [43] C. A. Raithel, Eur. Phys. J. A 55, 80 (2019), arXiv:1904.10002 [astro-ph.HE]. [44] T. Dietrich, M. W. Coughlin, P. T. H. Pang, M. Bulla, J. Heinzel, L. Issa, I. Tews, and S. Antier, Science 370, 1450 (2020), arXiv:2002.11355 [astro-ph.HE]. [45] J. W. T. Hessels, S. M. Ransom, I. H. Stairs, P. C. C. Freire, V. M. Kaspi, and F. Camilo, Science 311, 1901 (2006), arXiv:astro-ph/0601337. [46] M. Burgay et al., Nature 426, 531 (2003), arXiv:astro- ph/0312071. [47] K. Stovall et al., Astrophys. J. Lett. 854, L22 (2018), arXiv:1802.01707 [astro-ph.HE]. [48] S. Rosswog, P. Diener, F. Torsello, T. M. Tauris, and N. Sarin, Mon. Not. Roy. Astron. Soc. 530, 2336 (2024), arXiv:2310.15920 [astro-ph.HE]. [49] W. E. East, V. Paschalidis, F. Pretorius, and A. Tsokaros, Phys. Rev. D 100, 124042 (2019), arXiv:1906.05288 [astro-ph.HE]. [50] M. Ruiz, A. Tsokaros, V. Paschalidis, and S. L. Shapiro, Phys. Rev. D 99, 084032 (2019), arXiv:1902.08636 [astro- ph.HE]. [51] S. Bernuzzi, T. Dietrich, W. Tichy, and B. Br¨ugmann, Phys. Rev. D 89, 104021 (2014), arXiv:1311.4443 [gr-qc]. [52] T. Dietrich, S. Bernuzzi, M. Ujevic, and W. Tichy, Phys. Rev. D 95, 044045 (2017), arXiv:1611.07367 [gr-qc]. [53] W. Kastaun, R. Ciolfi, A. Endrizzi, and B. Giacomazzo, Phys. Rev. D 96, 043019 (2017), arXiv:1612.03671 [astro- ph.HE]. [54] E. R. Most, L. J. Papenfort, A. Tsokaros, and L. Rezzolla, Astrophys. J. 884, 40 (2019), arXiv:1904.04220 [astro- ph.HE]. [55] L. J. Papenfort, E. R. Most, S. Tootle, and L. Rez- zolla, Mon. Not. Roy. Astron. Soc. 513, 3646 (2022), arXiv:2201.03632 [astro-ph.HE]. [56] B. Br¨ugmann, J. A. Gonzalez, M. Hannam, S. Husa, U. Sperhake, and W. Tichy, Phys. Rev. 
D 77, 024027 (2008), arXiv:gr-qc/0610128. [57] M. Thierfelder, S. Bernuzzi, and B. Br¨ugmann, Phys. Rev. D 84, 044012 (2011), arXiv:1104.4751 [gr-qc]. [58] H. Gieg, F. Schianchi, T. Dietrich, and M. Ujevic, Uni- verse 8, 370 (2022), arXiv:2206.01337 [gr-qc]. [59] F. Schianchi, H. Gieg, V. Nedora, A. Neuweiler, M. Uje- vic, M. Bulla, and T. Dietrich, Phys. Rev. D 109, 044012 (2024), arXiv:2307.04572 [gr-qc]. [60] A. Neuweiler, T. Dietrich, B. Br¨ugmann, E. Giangrandi, K. Kiuchi, F. Schianchi, P. M¨osta, S. Shankar, B. Gi- acomazzo, and M. Shibata, Phys. Rev. D 110, 084046 28 (2024), arXiv:2407.20946 [gr-qc]. [61] K. Kiuchi, K. Kyutoku, Y. Sekiguchi, M. Shibata, and T. Wada, Phys. Rev. D 90, 041502 (2014), arXiv:1407.2660 [astro-ph.HE]. [62] K. Dionysopoulou, D. Alic, and L. Rezzolla, Phys. Rev. D 92, 084064 (2015), arXiv:1502.02021 [gr-qc]. [63] W. Cook, E. M. Guti´errez, S. Bernuzzi, D. Radice, B. Daszuta, J. Fields, P. Hammond, H. Bandyopad- hyay, and M. Jacobi, arXiv:2508.19342 [astro-ph.HE] (2025). [64] E. M. Guti´errez, W. Cook, D. Radice, S. Bernuzzi, J. Fields, P. Hammond, B. Daszuta, H. Bandyopad- hyay, and M. Jacobi, arXiv:2506.18995 [astro-ph.HE] (2025). [65] R. Ciolfi, W. Kastaun, J. V. Kalinani, and B. Giacomazzo, Phys. Rev. D 100, 023005 (2019), arXiv:1904.10222 [astro-ph.HE]. [66] M. Chabanov, S. D. Tootle, E. R. Most, and L. Rezzolla, Astrophys. J. Lett. 945, L14 (2023), arXiv:2211.13661 [astro-ph.HE]. [67] R. Aguilera-Miret, D. Vigan`o, and C. Palenzuela, Astro- phys. J. Lett. 926, L31 (2022), arXiv:2112.08406 [gr-qc]. [68] R. Aguilera-Miret, C. Palenzuela, F. Carrasco, and D. Vi- gan`o, Phys. Rev. D 108, 103001 (2023), arXiv:2307.04837 [astro-ph.HE]. [69] F. Foucart, E. O’Connor, L. Roberts, M. D. Duez, R. Haas, L. E. Kidder, C. D. Ott, H. P. Pfeiffer, M. A. Scheel, and B. Szilagyi, Phys. Rev. D 91, 124021 (2015), arXiv:1502.04146 [astro-ph.HE]. [70] Y. Sekiguchi, K. Kiuchi, K. Kyutoku, M. Shibata, and K. Taniguchi, Phys. Rev. D 93, 124046 (2016), arXiv:1603.01918 [astro-ph.HE]. [71] T. Vincent, F. Foucart, M. D. Duez, R. Haas, L. E. Kidder, H. P. Pfeiffer, and M. A. Scheel, Phys. Rev. D 101, 044053 (2020), 1908.00655 [gr-qc]. [72] D. Radice, A. Perego, K. Hotokezaka, S. A. Fromm, S. Bernuzzi, and L. F. Roberts, Astrophys. J. 869, 130 (2018), arXiv:1809.11161 [astro-ph.HE]. [73] D. Radice, S. Bernuzzi, A. Perego, and R. Haas, Mon. Not. Roy. Astron. Soc. 512, 1499 (2022), arXiv:2111.14858 [astro-ph.HE]. [74] P. L. Espino, D. Radice, F. Zappa, R. Gamba, and S. Bernuzzi, Phys. Rev. D 109, 103027 (2024), arXiv:2311.12923 [astro-ph.HE]. [75] K. Kawaguchi, S. Fujibayashi, and M. Shibata, Phys. Rev. D 112, 043001 (2025), arXiv:2506.01679 [astro- ph.HE]. [76] C. Palenzuela, S. L. Liebling, D. Neilsen, L. Lehner, O. L. Caballero, E. O’Connor, and M. Anderson, Phys. Rev. D 92, 044045 (2015), arXiv:1505.01607 [gr-qc]. [77] C. Palenzuela, S. Liebling, and B. Mi˜nano, Phys. Rev. D 105, 103020 (2022), arXiv:2204.02721 [gr-qc]. [78] C. Musolino, L. Rezzolla, and E. R. Most, Astrophys. J. Lett. 984, L61 (2025), arXiv:2410.06253 [astro-ph.HE]. [79] S. Curtis, P. Bosch, P. M¨osta, D. Radice, S. Bernuzzi, A. Perego, R. Haas, and E. Schnetter, Astrophys. J. Lett. 961, L26 (2024), arXiv:2305.07738 [astro-ph.HE]. [80] L. Sun, M. Ruiz, S. L. Shapiro, and A. Tsokaros, Phys. Rev. D 105, 104028 (2022), arXiv:2202.12901 [astro- ph.HE]. [81] K. Hayashi, K. Kiuchi, K. Kyutoku, Y. Sekiguchi, and M. Shibata, Phys. Rev. D 107, 123001 (2023), arXiv:2211.07158 [astro-ph.HE]. [82] L. Combi and D. M. 
Siegel, Phys. Rev. Lett. 131, 231402 (2023), arXiv:2303.12284 [astro-ph.HE]. [83] J. Bamber, A. Tsokaros, M. Ruiz, S. L. Shapiro, M. Fa- vata, M. Karlson, and F. V. Pi˜nas, arXiv:2510.09742 [gr-qc] (2025). [84] T. Dietrich, S. Bernuzzi, M. Ujevic, and B. Br¨ugmann, Phys. Rev. D 91, 124041 (2015), arXiv:1504.01266 [gr- qc]. [85] S. Bernuzzi and T. Dietrich, Phys. Rev. D 94, 064062 (2016), arXiv:1604.07999 [gr-qc]. [86] S. Bernuzzi and D. Hilditch, Phys. Rev. D 81, 084003 (2010), arXiv:0912.2920 [gr-qc]. [87] D. Hilditch, S. Bernuzzi, M. Thierfelder, Z. Cao, W. Tichy, and B. Br¨ugmann, Phys. Rev. D 88, 084057 (2013), arXiv:1212.2901 [gr-qc]. [88] C. Bona, J. Mass´o, J. Stela, and E. Seidel, in The Seventh Marcel Grossmann Meeting: On Recent Developments in Theoretical and Experimental General Relativity, Gravi- tation, and Relativistic Field Theories, edited by R. T. Jantzen, G. M. Keiser, and R. Ruffini (World Scientific, Singapore, 1996). [89] M. Alcubierre, B. Br¨ugmann, P. Diener, M. Koppitz, D. Pollney, E. Seidel, and R. Takahashi, Phys. Rev. D 67, 084023 (2003), arXiv:gr-qc/0206072. [90] J. M. Marti, J. M. Ibanez, and J. A. Miralles, Phys. Rev. D 43, 3794 (1991). [91] F. Banyuls, J. A. Font, J. M. A. Ibanez, J. M. A. Marti, and J. A. Miralles, Astrophys. J. 476, 221 (1997). [92] L. Anton, O. Zanotti, J. A. Miralles, J. M. Marti, J. M. Ibanez, J. A. Font, and J. A. Pons, Astrophys. J. 637, 296 (2006), arXiv:astro-ph/0506063. [93] J. A. Font, Living Rev. Rel. 11, 7 (2008). [94] S. L. Liebling, L. Lehner, D. Neilsen, and C. Palenzuela, Phys. Rev. D 81, 124023 (2010), arXiv:1001.0575 [gr-qc]. [95] P. M¨osta, B. C. Mundim, J. A. Faber, R. Haas, S. C. Noble, T. Bode, F. L¨offler, C. D. Ott, C. Reisswig, and E. Schnetter, Class. Quant. Grav. 31, 015005 (2014), arXiv:1304.5544 [gr-qc]. [96] A. Neuweiler, T. Dietrich, and B. Br¨ugmann, Phys. Rev. D 112, 023033 (2025), arXiv:2504.10228 [gr-qc]. [97] K. S. Thorne, Mon. Not. Roy. Astron. Soc. 194, 439 (1981). [98] M. Shibata, K. Kiuchi, Y.-i. Sekiguchi, and Y. Suwa, Prog. Theor. Phys. 125, 1255 (2011), arXiv:1104.3937 [astro-ph.HE]. [99] F. Foucart, R. Haas, M. D. Duez, E. O’Connor, C. D. Ott, L. Roberts, L. E. Kidder, J. Lippuner, H. P. Pfeif- fer, and M. A. Scheel, Phys. Rev. D93, 044019 (2016), arXiv:1510.06398 [astro-ph.HE]. [100] F. Foucart, E. O’Connor, L. Roberts, L. E. Kidder, H. P. Pfeiffer, and M. A. Scheel, Phys. Rev. D94, 123016 (2016), arXiv:1607.07450 [astro-ph.HE]. [101] L. R. Weih, H. Olivares, and L. Rezzolla, Mon. Not. Roy. Astron. Soc. 495, 2285 (2020), arXiv:2003.13580 [gr-qc]. [102] C. Musolino and L. Rezzolla, Mon. Not. Roy. Astron. Soc. 528, 5952 (2024), arXiv:2304.09168 [gr-qc]. [103] M. H. Ruffert, H. T. Janka, and G. Schaefer, Astron. Astrophys. 311, 532 (1996), arXiv:astro-ph/9509006. [104] A. Perego, S. Bernuzzi, and D. Radice, Eur. Phys. J. A 55, 124 (2019), arXiv:1903.07898 [gr-qc]. [105] M. J. Berger and J. Oliger, Journal of Computational Physics 53, 484 (1984). [106] M. J. Berger and P. Colella, Journal of Computational Physics 82, 64 (1989). [107] R. Borges, M. Carmona, B. Costa, and W. S. Don, 29 Journal of Computational Physics 227, 3191 (2008). [108] A. Harten, P. D. Lax, and B. v. Leer, SIAM Review 25, 35 (1983). [109] X.-D. Liu and S. Osher, Journal of Computational Physics 142, 304 (1998). [110] L. D. Zanna and N. Bucciantini, Astron. Astrophys. 390, 1177 (2002), arXiv:astro-ph/0205290. [111] M. Reichert et al., Astrophys. J. Suppl. 268, 66 (2023), arXiv:2305.07048 [astro-ph.IM]. [112] M. Bulla, Mon. Not. 
Roy. Astron. Soc. 489, 5037 (2019), arXiv:1906.04205 [astro-ph.HE]. [113] M. Bulla, Mon. Not. Roy. Astron. Soc. 520, 2558 (2023), arXiv:2211.14348 [astro-ph.HE]. [114] V. Nedora, T. Dietrich, M. Shibata, M. Pohl, and L. C. Menegazzi, Mon. Not. Roy. Astron. Soc. 520, 2727 (2023), arXiv:2208.01558 [astro-ph.HE]. [115] V. Nedora, T. Dietrich, and M. Shibata, Mon. Not. Roy. Astron. Soc. 524, 5514 (2023), arXiv:2302.12850 [astro- ph.HE]. [116] V. Nedora, L. C. Menegazzi, E. Peretti, T. Dietrich, and M. Shibata, Mon. Not. Roy. Astron. Soc. 538, 2089 (2025), 2409.16852 [astro-ph.HE]. [117] P. M¨oller, A. J. Sierk, T. Ichikawa, and H. Sagawa, Atom. Data Nucl. Data Tabl. 109-110, 1 (2016), arXiv:1508.06294 [nucl-th]. [118] R. H. Cyburt et al., Astrophys. J. Suppl. 189, 240 (2010). [119] I. V. Panov, C. Freiburghaus, and F. K. Thielemann, Nucl. Phys. A 688, 587 (2001). [120] T. Kodama and K. Takahashi, Nucl. Phys. A 239, 489 (1975). [121] M. R. Mumpower, P. Jaffke, M. Verriere, and J. Randrup, Phys. Rev. C 101, 054607 (2020), arXiv:1911.06344 [nucl- th]. [122] A. Neuweiler, T. Dietrich, M. Bulla, S. V. Chaurasia, S. Rosswog, and M. Ujevic, Phys. Rev. D 107, 023016 (2023), arXiv:2208.13460 [astro-ph.HE]. [123] K. Kawaguchi, S. Fujibayashi, M. Shibata, M. Tanaka, and S. Wanajo, Astrophys. J. 913, 100 (2021), arXiv:2012.14711 [astro-ph.HE]. [124] S. Rosswog and O. Korobkin, Annalen Phys. 536, 2200306 (2024), arXiv:2208.14026 [astro-ph.HE]. [125] J. Barnes, D. Kasen, M.-R. Wu, and G. Mart´ınez-Pinedo, Astrophys. J. 829, 110 (2016), arXiv:1605.07218 [astro- ph.HE]. [126] R. T. Wollaeger, O. Korobkin, C. J. Fontes, S. K. Ross- wog, W. P. Even, C. L. Fryer, J. Sollerman, A. L. Hunger- ford, D. R. van Rossum, and A. B. Wollaber, Mon. Not. Roy. Astron. Soc. 478, 3298 (2018), arXiv:1705.07084 [astro-ph.HE]. [127] M. Tanaka, D. Kato, G. Gaigalas, and K. Kawaguchi, Mon. Not. Roy. Astron. Soc. 496, 1369 (2020), arXiv:1906.08914 [astro-ph.HE]. [128] B. Margalit and E. Quataert, Astrophys. J. Lett. 923, L14 (2021), arXiv:2111.00012 [astro-ph.HE]. [129] B. Margalit and E. Quataert, Astrophys. J. 977, 134 (2024), arXiv:2403.07048 [astro-ph.HE]. [130] L. Sironi, A. Spitkovsky, and J. Arons, Astrophys. J. 771, 54 (2013), arXiv:1301.5333 [astro-ph.HE]. [131] P. Crumley, D. Caprioli, S. Markoff, and A. Spitkovsky, Mon. Not. Roy. Astron. Soc. 485, 5105 (2019), arXiv:1809.10809 [astro-ph.HE]. [132] X. Xie, J. Zrake, and A. MacFadyen, Astrophys. J. 863, 58 (2018), arXiv:1804.09345 [astro-ph.HE]. [133] G. Ryan, H. van Eerten, L. Piro, and E. Troja, Astrophys. J. 896, 166 (2020), arXiv:1909.11691 [astro-ph.HE]. [134] O. Gottlieb, E. Nakar, and O. Bromberg, Mon. Not. Roy. Astron. Soc. 500, 3511 (2020), arXiv:2006.02466 [astro-ph.HE]. [135] F. A. Aharonian, S. R. Kelner, and A. Y. Prosekin, Phys. Rev. D 82, 043002 (2010), arXiv:1006.1045 [astro- ph.HE]. [136] M. G. Alford, L. Brodie, A. Haber, and I. Tews, Phys. Scripta 98, 125302 (2023), arXiv:2304.07836 [nucl-th]. [137] W. Tichy, Class. Quant. Grav. 26, 175018 (2009), arXiv:0908.0620 [gr-qc]. [138] W. Tichy, Phys. Rev. D 86, 064024 (2012), arXiv:1209.5336 [gr-qc]. [139] T. Dietrich, N. Moldenhauer, N. K. Johnson- McDaniel, S. Bernuzzi, C. M. Markakis, B. Br¨ugmann, and W. Tichy, Phys. Rev. D 92, 124007 (2015), arXiv:1507.07100 [gr-qc]. [140] W. Tichy, A. Rashti, T. Dietrich, R. Dudi, and B. Br¨ugmann, Phys. Rev. D 100, 124046 (2019), arXiv:1910.09690 [gr-qc]. [141] J. W. York, Jr., Phys. Rev. Lett. 82, 1350 (1999), arXiv:gr-qc/9810051. [142] H. P. Pfeiffer and J. W. 
York, Jr., Phys. Rev. D 67, 044022 (2003), arXiv:gr-qc/0207095. [143] D. R. Lorimer, Living Rev. Rel. 11, 8 (2008), arXiv:0811.0762 [astro-ph]. [144] T. M. Tauris et al., Astrophys. J. 846, 170 (2017), arXiv:1706.09438 [astro-ph.HE]. [145] K. Kiuchi, S. Fujibayashi, K. Hayashi, K. Kyutoku, Y. Sekiguchi, and M. Shibata, Phys. Rev. Lett. 131, 011401 (2023), arXiv:2211.07637 [astro-ph.HE]. [146] E. R. Most and E. Quataert, Astrophys. J. Lett. 947, L15 (2023), arXiv:2303.08062 [astro-ph.HE]. [147] L. Combi and D. M. Siegel, Astrophys. J. 944, 28 (2023), arXiv:2206.03618 [astro-ph.HE]. [148] M. Ruiz, A. Tsokaros, and S. L. Shapiro, Phys. Rev. D 101, 064042 (2020), arXiv:2001.09153 [astro-ph.HE]. [149] M. G. Alford, L. Brodie, A. Haber, and I. Tews, Phys. Rev. C 106, 055804 (2022), arXiv:2205.10283 [nucl-th]. [150] D. Chatterjee et al., Data table for eos abht(qmc-rmf3) nparam=3 (2025), https://doi.org/10.5281/zenodo.14809193. [151] I. Tews, J. Carlson, S. Gandolfi, and S. Reddy, Astrophys. J. 860, 149 (2018), arXiv:1801.01923 [nucl-th]. [152] M. Hempel and J. Schaffner-Bielich, Nucl. Phys. A 837, 210 (2010), arXiv:0911.4073 [nucl-th]. [153] F. J. Fattoyev, C. J. Horowitz, J. Piekarewicz, and G. Shen, Phys. Rev. C 82, 055803 (2010), arXiv:1008.3030 [nucl-th]. [154] C. Drischler, R. J. Furnstahl, J. A. Melendez, and D. R. Phillips, Phys. Rev. Lett. 125, 202702 (2020), arXiv:2004.07232 [nucl-th]. [155] S. Shlomo, V. M. Kolomietz, and G. Col`o, Eur. Phys. J. A 30, 23 (2006). [156] C. J. Horowitz, J. Piekarewicz, and B. Reed, Phys. Rev. C 102, 044321 (2020), arXiv:2007.07117 [nucl-th]. [157] J. Keller, C. Wellenhofer, K. Hebeler, and A. Schwenk, Phys. Rev. C 103, 055806 (2021), arXiv:2011.05855 [nucl- th]. [158] M. C. Miller et al., Astrophys. J. Lett. 887, L24 (2019), arXiv:1912.05705 [astro-ph.HE]. [159] T. E. Riley et al., Astrophys. J. Lett. 887, L21 (2019), arXiv:1912.05702 [astro-ph.HE]. [160] M. C. Miller et al., Astrophys. J. Lett. 918, L28 (2021), 30 arXiv:2105.06979 [astro-ph.HE]. [161] T. E. Riley et al., Astrophys. J. Lett. 918, L27 (2021), arXiv:2105.06980 [astro-ph.HE]. [162] D. Choudhury et al., Astrophys. J. Lett. 971, L20 (2024), arXiv:2407.06789 [astro-ph.HE]. [163] T. Salmi et al., Astrophys. J. 941, 150 (2022), arXiv:2209.12840 [astro-ph.HE]. [164] T. Salmi et al., Astrophys. J. 974, 294 (2024), arXiv:2406.14466 [astro-ph.HE]. [165] A. J. Dittmann et al., Astrophys. J. 974, 295 (2024), arXiv:2406.14467 [astro-ph.HE]. [166] E. Fonseca et al., Astrophys. J. Lett. 915, L12 (2021), arXiv:2104.00880 [astro-ph.HE]. [167] S. Vinciguerra et al., Astrophys. J. 961, 62 (2024), arXiv:2308.09469 [astro-ph.HE]. [168] M. Campanelli, C. O. Lousto, and Y. Zlochower, Phys. Rev. D 74, 041501 (2006), arXiv:gr-qc/0604012. [169] R. Dudi, T. Dietrich, A. Rashti, B. Bruegmann, J. Stein- hoff, and W. Tichy, Phys. Rev. D 105, 064050 (2022), arXiv:2108.10429 [gr-qc]. [170] K. Hotokezaka, K. Kiuchi, K. Kyutoku, H. Okawa, Y.-i. Sekiguchi, M. Shibata, and K. Taniguchi, Phys. Rev. D 87, 024001 (2013), arXiv:1212.0905 [astro-ph.HE]. [171] M. Shibata, S. Fujibayashi, K. Hotokezaka, K. Kiuchi, K. Kyutoku, Y. Sekiguchi, and M. Tanaka, Phys. Rev. D 96, 123012 (2017), arXiv:1710.07579 [astro-ph.HE]. [172] K. Kiuchi, K. Kyutoku, M. Shibata, and K. Taniguchi, Astrophys. J. Lett. 876, L31 (2019), arXiv:1903.01466 [astro-ph.HE]. [173] W. E. East, V. Paschalidis, F. Pretorius, and S. L. Shapiro, Phys. Rev. D 93, 024011 (2016), arXiv:1511.01093 [astro-ph.HE]. [174] S. Rosswog, N. Sarin, E. Nakhar, and P. Diener, Mon. 
Not. Roy. Astron. Soc. 538, 907 (2025), arXiv:2411.18813 [astro-ph.HE]. [175] D. Price and S. Rosswog, Science 312, 719 (2006), arXiv:astro-ph/0603845. [176] K. Kiuchi, P. Cerd´a-Dur´an, K. Kyutoku, Y. Sekiguchi, and M. Shibata, Phys. Rev. D 92, 124034 (2015), arXiv:1509.09205 [astro-ph.HE]. [177] B. Giacomazzo, J. Zrake, P. Duffell, A. I. Mac- Fadyen, and R. Perna, Astrophys. J. 809, 39 (2015), arXiv:1410.0013 [astro-ph.HE]. [178] R. Aguilera-Miret, J.-E. Christian, S. Rosswog, and C. Palenzuela, Mon. Not. Roy. Astron. Soc. 10.1093/mn- ras/staf1291 (2025), 2504.10604 [astro-ph.HE]. [179] M. D. Duez, Y. T. Liu, S. L. Shapiro, and M. Shi- bata, Phys. Rev. D 73, 104015 (2006), arXiv:astro- ph/0605331. [180] L. Sun, M. Ruiz, and S. L. Shapiro, Phys. Rev. D 99, 064057 (2019), arXiv:1812.03176 [astro-ph.HE]. [181] K. Kiuchi, K. Kyutoku, Y. Sekiguchi, and M. Shibata, Phys. Rev. D 97, 124039 (2018), arXiv:1710.01311 [astro- ph.HE]. [182] M. D. Duez, Y. T. Liu, S. L. Shapiro, M. Shibata, and B. C. Stephens, Phys. Rev. Lett. 96, 031101 (2006), arXiv:astro-ph/0510653. [183] D. M. Siegel, R. Ciolfi, A. I. Harte, and L. Rezzolla, Phys. Rev. D 87, 121302 (2013), arXiv:1302.4368 [gr-qc]. [184] E. R. Most, Phys. Rev. D 108, 123012 (2023), arXiv:2311.03333 [astro-ph.HE]. [185] A. Reboul-Salze, P. Barr`ere, K. Kiuchi, J. Guilet, R. Ray- naud, S. Fujibayashi, and M. Shibata, Astron. Astrophys. 699, A4 (2025), arXiv:2411.19328 [astro-ph.HE]. [186] R. Aguilera-Miret, D. Vigan`o, F. Carrasco, B. Mi˜nano, and C. Palenzuela, Phys. Rev. D 102, 103006 (2020), arXiv:2009.06669 [gr-qc]. [187] A. N. Kolmogorov, Proceedings of the Royal Society of London Series A 434, 9 (1991). [188] A. P. Kazantsev, Soviet Journal of Experimental and Theoretical Physics 26, 1031 (1968). [189] J. Zrake and A. I. MacFadyen, Astrophys. J. Lett. 769, L29 (2013), arXiv:1303.1450 [astro-ph.HE]. [190] S. A. Balbus and J. F. Hawley, Rev. Mod. Phys. 70, 1 (1998). [191] T. Celora, C. Palenzuela, D. Vigan`o, and R. Aguilera- Miret, arXiv:2505.01208 [astro-ph.HE] (2025). [192] R. Fern´andez and B. D. Metzger, Mon. Not. Roy. Astron. Soc. 435, 502 (2013), arXiv:1304.6720 [astro-ph.HE]. [193] O. Just, A. Bauswein, R. A. Pulpillo, S. Goriely, and H. T. Janka, Mon. Not. Roy. Astron. Soc. 448, 541 (2015), arXiv:1406.2687 [astro-ph.SR]. [194] S. Fujibayashi, K. Kiuchi, N. Nishimura, Y. Sekiguchi, and M. Shibata, Astrophys. J. 860, 64 (2018), arXiv:1711.02093 [astro-ph.HE]. [195] V. Nedora, S. Bernuzzi, D. Radice, A. Perego, A. En- drizzi, and N. Ortiz, Astrophys. J. Lett. 886, L30 (2019), arXiv:1907.04872 [astro-ph.HE]. [196] M. Shibata and K. Hotokezaka, Ann. Rev. Nucl. Part. Sci. 69, 41 (2019), arXiv:1908.02350 [astro-ph.HE]. [197] A. Endrizzi, A. Perego, F. M. Fabbri, L. Branca, D. Radice, S. Bernuzzi, B. Giacomazzo, F. Ped- eriva, and A. Lovato, Eur. Phys. J. A 56, 15 (2020), arXiv:1908.04952 [astro-ph.HE]. [198] K. Lodders, M. Bergemann, and H. Palme, Space Sci. Rev. 221, 23 (2025), arXiv:2502.10575 [astro-ph.SR]. [199] N. Prantzos, C. Abia, S. Cristallo, M. Limongi, and A. Chieffi, Mon. Not. Roy. Astron. Soc. 491, 1832 (2020), arXiv:1911.02545 [astro-ph.GA]. [200] S. Bernuzzi, F. Magistrelli, M. Jacobi, D. Logoteta, A. Perego, and D. Radice, Mon. Not. Roy. Astron. Soc. 256, 271 (2025), arXiv:2409.18185 [astro-ph.HE]. [201] G. Ricigliano, M. Jacobi, and A. Arcones, Mon. Not. Roy. Astron. Soc. 533, 2096 (2024), arXiv:2406.03649 [astro-ph.HE]. [202] J. Lippuner and L. F. Roberts, Astrophys. J. Suppl. 233, 18 (2017), arXiv:1706.06198 [astro-ph.HE]. 
[203] T. Marketin, L. Huther, and G. Mart´ınez-Pinedo, Phys. Rev. C 93, 025805 (2016), arXiv:1507.07442 [nucl-th]. [204] B. Pfeiffer, K. L. Kratz, and F. K. Thielemann, Zeitschrift f¨ur Physik A Hadrons and Nuclei 357, 235 (1997). [205] A. Arcones and G. Martinez-Pinedo, Phys. Rev. C 83, 045809 (2011), arXiv:1008.3890 [astro-ph.SR]. [206] M. Eichler et al., Astrophys. J. 808, 30 (2015), arXiv:1411.0974 [astro-ph.HE]. [207] I. U. Roederer et al., Astrophys. J. 936, 84 (2022), arXiv:2210.15105 [astro-ph.SR]. [208] J. Kuske, A. Arcones, and M. Reichert, Astrophys. J. 990, 37 (2025), arXiv:2506.00092 [astro-ph.HE]. [209] J. de Jes´us Mendoza-Temis, M.-R. Wu, G. Mart´ınez- Pinedo, K. Langanke, A. Bauswein, and H.-T. Janka, Phys. Rev. C 92, 055805 (2015), arXiv:1409.6135 [astro- ph.HE]. [210] S. E. Woosley and R. D. Hoffman, Astrophys. J. 395, 202 (1992). [211] B. D. Metzger, A. Bauswein, S. Goriely, and D. Kasen, Mon. Not. Roy. Astron. Soc. 446, 1115 (2015), 31 arXiv:1409.0544 [astro-ph.HE]. [212] A. Abac, T. Dietrich, A. Buonanno, J. Steinhoff, and M. Ujevic, Phys. Rev. D 109, 024062 (2024), arXiv:2311.07456 [gr-qc]. [213] R. De Pietri, A. Feo, J. A. Font, F. L¨offler, F. Maione, M. Pasquali, and N. Stergioulas, Phys. Rev. Lett. 120, 221101 (2018), arXiv:1802.03288 [gr-qc]. [214] R. De Pietri, A. Feo, J. A. Font, F. L¨offler, M. Pasquali, and N. Stergioulas, Phys. Rev. D 101, 064052 (2020), arXiv:1910.04036 [gr-qc]. [215] C. J. Moore, R. H. Cole, and C. P. L. Berry, Class. Quant. Grav. 32, 015014 (2015), arXiv:1408.0740 [gr-qc]. [216] P. D. Welch, IEEE Transactions on Audio and Electroa- coustics 15, 70 (1967). [217] N. Stergioulas, A. Bauswein, K. Zagkouris, and H.-T. Janka, Mon. Not. Roy. Astron. Soc. 418, 427 (2011), arXiv:1105.0368 [gr-qc]. [218] S. Vretinaris, N. Stergioulas, and A. Bauswein, Phys. Rev. D 101, 084039 (2020), arXiv:1910.10856 [gr-qc]. [219] D. Radice, S. Bernuzzi, and C. D. Ott, Phys. Rev. D 94, 064011 (2016), arXiv:1603.05726 [gr-qc]. [220] W. E. East, V. Paschalidis, and F. Pretorius, Class. Quant. Grav. 33, 244004 (2016), arXiv:1609.00725 [astro- ph.HE]. [221] V. Paschalidis, W. E. East, F. Pretorius, and S. L. Shapiro, Phys. Rev. D 92, 121502 (2015), arXiv:1510.03432 [astro-ph.HE]. [222] L. Lehner, S. L. Liebling, C. Palenzuela, and P. M. Motl, Phys. Rev. D 94, 043003 (2016), arXiv:1605.02369 [gr- qc]. [223] D. Radice and S. Bernuzzi, Astrophys. J. 959, 46 (2023), arXiv:2306.13709 [astro-ph.HE]. [224] K. Takami, L. Rezzolla, and L. Baiotti, Phys. Rev. Lett. 113, 091104 (2014), arXiv:1403.5672 [gr-qc]. [225] L. Barsotti, L. Mcculler, M. Evans, and P. Fritschel, The A+ design curve, Technical Report LIGO-T1800042 (LIGO, 2018). [226] S. Hild et al., Class. Quant. Grav. 28, 094013 (2011), arXiv:1012.0908 [gr-qc]. [227] L.-X. Li and B. Paczynski, Astrophys. J. Lett. 507, L59 (1998), arXiv:astro-ph/9807272. [228] S. Darbha and D. Kasen, Astrophys. J. 897, 150 (2020), 2002.00299 [astro-ph.HE]. [229] K. Kawaguchi, M. Shibata, and M. Tanaka, Astrophys. J. Lett. 865, L21 (2018), arXiv:1806.04088 [astro-ph.HE]. [230] O. Korobkin et al., Astrophys. J. 910, 116 (2021), arXiv:2004.00102 [astro-ph.HE]. [231] C. Collins, A. Bauswein, S. Sim, V. Vijayan, G. Martinez- Pinedo, O. Just, L. J. Shingles, and M. Kromer, PoS FAIRness2022, 010 (2023). [232] L. S. Groenewegen, S. Curtis, P. M¨osta, D. Kasen, and D. Brethauer, arXiv:2508.00062 [astro-ph.HE] (2025). [233] R. Dekany et al., Publ. Astron. Soc. Pac. 132, 038001 (2020), arXiv:2008.04923 [astro-ph.IM]. [234] ˇZ. Ivezi´c et al. 
(LSST), The LSST System Science Re- quirements Document (2018). [235] D. Eichler, M. Livio, T. Piran, and D. N. Schramm, Nature 340, 126 (1989). [236] R. Narayan, B. Paczynski, and T. Piran, Astrophys. J. Lett. 395, L83 (1992), arXiv:astro-ph/9204001. [237] E. Berger, Ann. Rev. Astron. Astrophys. 52, 43 (2014), arXiv:1311.2603 [astro-ph.HE]. [238] H. Koehn, T. Wouters, P. T. H. Pang, M. Bulla, H. Rose, H. Wichern, and T. Dietrich, arXiv:2507.13807 [astro- ph.HE] (2025). [239] A. Marcowith, G. Ferrand, M. Grech, Z. Meliani, I. Plot- nikov, and R. Walder, Liv. Rev. Comput. Astrophys. 6, 1 (2020), arXiv:2002.09411 [astro-ph.HE]. [240] Y. Yuan, A. Y. Chen, and M. Luepker, Astrophys. J. 985, 159 (2025), arXiv:2503.08487 [astro-ph.HE]. [241] A. Vanthieghem, V. Tsiolis, F. Fiuza, K. Sekiguchi, A. Spitkovsky, and Y. Todo, PoS HEPROVIII, 011 (2024), arXiv:2407.03838 [astro-ph.HE]. [242] R. Braun, A. Bonaldi, T. Bourke, E. Keane, and J. Wagg, arXiv:1912.12699 [astro-ph.IM] (2019). [243] A. Bonaldi et al., Mon. Not. Roy. Astron. Soc. 500, 3821 (2020), arXiv:2009.13346 [astro-ph.IM]. [244] F. B. Bianco et al., Astrophys. J. Supp. 258, 1 (2022), arXiv:2108.01683 [astro-ph.IM]. [245] HARMONI (HARMONI for the ELT), HARMONI Per- formance (2025). [246] R. Davies et al. (The MICADO Consortium), The Mes- senger 182, 17 (2021). [247] E. S. Laird et al., Astrophys. J. Suppl. 180, 102 (2009), arXiv:0809.1349 [astro-ph]. [248] K. Nandra et al., arXiv:1306.2307 [astro-ph.HE] (2013). [249] C. S. Reynolds et al., Proc. SPIE Int. Soc. Opt. Eng. 12678, 126781E (2023), arXiv:2311.00780 [astro-ph.IM]. [250] S. Marchesi et al., Astron. Astrophys. 642, A184 (2020), arXiv:2008.09133 [astro-ph.IM]. [251] H. Gieg, F. Schianchi, M. Ujevic, and T. Dietrich, Phys. Rev. D 112, 023036 (2025), arXiv:2409.04420 [gr-qc]. [252] H. H.-Y. Ng, C. Musolino, S. D. Tootle, and L. Rezzolla, Astrophys. J. Lett. 985, L36 (2025), arXiv:2411.19178 [astro-ph.HE]. [253] P. C.-K. Cheong, A. Tsokaros, M. Ruiz, F. Venturi, J. C. L. Chan, A. K. L. Yip, and K. Uryu, Phys. Rev. D 111, 063030 (2025), arXiv:2409.10508 [astro-ph.HE]. [254] T. Dietrich, D. Radice, S. Bernuzzi, F. Zappa, A. Perego, B. Br¨ugmann, S. V. Chaurasia, R. Dudi, W. Tichy, and M. Ujevic, Class. Quant. Grav. 35, 24LT01 (2018), arXiv:1806.01625 [gr-qc]. [255] A. Gonzalez et al., Class. Quant. Grav. 40, 085011 (2023), arXiv:2210.16366 [gr-qc]. [256] I. Markin, A. Neuweiler, H. L. Gieg, and T. Dietrich, General-relativistic radiation magnetohydrodynamics simulations of binary neutron star mergers: The in- fluence of spin on the multi- messenger picture - videos (2025). [257] N. Oreskes, Science 306, 1686 (2004). [258] P. T. Doran and M. K. Zimmerman, Eos, Transactions American Geophysical Union 90, 22 (2009). [259] J. Cook, D. Nuccitelli, S. A. Green, M. Richardson, B. Winkler, R. Painting, R. Way, P. Jacobs, and A. Skuce, Environmental Research Letters 8, 024024 (2013). [260] J. Cook et al., Environmental Research Letters 11, 048002 (2016). [261] M. Lynas, B. Z. Houlton, and S. Perry, Environmental Research Letters 16, 114005 (2021). [262] K. F. Myers, P. T. Doran, J. Cook, J. E. Kotcher, and T. A. Myers, Environmental Research Letters 16, 104030 (2021). [263] N. Conrad and B. Lorenz, Umwelterkl¨arung 2023, Technischer Bericht / Umweltbericht (H¨ochstleistungsrechenzentrum (HLRS), Universit¨at Stuttgart, 2023).
General-relativistic radiation magnetohydrodynamics simulations of binary neutron star mergers: The influence of spin on the multi-messenger picture

Anna Neuweiler1, Henrique Gieg1, Henrik Rose1, Hauke Koehn1, Ivan Markin1, Federico Schianchi2,1, Liam Brodie3, Alexander Haber3,4, Vsevolod Nedora1,5, Mattia Bulla6,7,8, and Tim Dietrich1,5

1 Institut für Physik und Astronomie, Universität Potsdam, Haus 28, Karl-Liebknecht-Str. 24/25, 14476 Potsdam, Germany
2 Departament de Física & IAC3, Universitat de les Illes Balears, Palma de Mallorca, Baleares E-07122, Spain
3 Washington University in St. Louis, St. Louis, MO 63130, USA
4 Mathematical Sciences and STAG Research Centre, University of Southampton, Southampton SO17 1BJ, United Kingdom
5 Max Planck Institute for Gravitational Physics (Albert Einstein Institute), Am Mühlenberg 1, Potsdam 14476, Germany
6 Department of Physics and Earth Science, University of Ferrara, via Saragat 1, I-44122 Ferrara, Italy
7 INFN, Sezione di Ferrara, via Saragat 1, I-44122 Ferrara, Italy
8 INAF, Osservatorio Astronomico d'Abruzzo, via Mentore Maggini snc, 64100 Teramo, Italy

(Dated: October 17, 2025)

The rich phenomenology of binary neutron star mergers offers a unique opportunity to test general relativity, investigate matter at supranuclear densities, and learn more about the origin of heavy elements. As multi-messenger sources, they emit both gravitational waves and electromagnetic radiation across several frequency bands. The interpretation of these signals relies heavily on accurate numerical-relativity simulations that incorporate the relevant microphysical processes. Using the latest updates of the bam code, we perform general-relativistic radiation magnetohydrodynamic simulations of binary neutron star mergers with two different spin configurations. We adopt a state-of-the-art equation of state based on relativistic mean-field theory developed for dense matter in neutron star mergers. To capture both dynamical ejecta and secular outflows from magnetic and neutrino-driven winds, we evolve the systems up to ∼100 ms after the merger at considerably high resolution, with a grid spacing of ∆x ≈ 93 m across the neutron stars. Our results show that the non-spinning configuration undergoes a more violent merger, producing more ejecta with lower electron fraction and higher velocities, while the spinning configuration forms a larger disk due to its higher angular momentum. Although the initial magnetic-field amplification within ≲10 ms after merger is similar in both systems, the non-spinning system reaches stronger magnetic fields and higher energies at later times. For a detailed view of the multi-messenger observables, we extract the gravitational-wave signal and compute nucleosynthesis yields as well as the expected kilonova and afterglow light curves from our ejecta profiles.

I. INTRODUCTION

Binary neutron star (BNS) mergers are high-energy multi-messenger events that emit gravitational waves (GWs) and electromagnetic (EM) radiation covering multiple frequency bands, e.g., [1–6]. As these compact objects orbit each other and eventually merge, a GW signal is generated, whereas the EM emission originates from the merger remnant and ejecta. The neutron-rich material that is ejected in this process is an important site of rapid neutron-capture (r-process) nucleosynthesis [7–11]. The radioactive decay of the formed nuclei powers an EM transient called a kilonova. Furthermore, BNS merger remnants can launch a relativistic jet, most likely driven by a large-scale magnetic field, e.g., [12–15], that can trigger a gamma-ray burst (GRB).
Both the kilonova and the GRB produce an afterglow when the outflows interact with the surrounding interstellar medium. A simplified overview of the physical phenomena of a BNS merger and the associated observable GW and EM signatures is shown in Fig. 1, highlighting the different timescales at which the signals are emitted, ranging from seconds to milliseconds before the merger to years after the merger.

Indeed, the first GW detection of a BNS merger in August 2017, associated with the event GW170817 [16], was accompanied by the kilonova signal AT2017gfo [17–22], the short GRB GRB170817A [23, 24], and its non-thermal afterglow [25–27]. Among others, this joint observation has been used to probe general relativity [28–34] and to tighten constraints on the equation of state (EOS) of neutron stars, e.g., [30, 35–44].

In addition to the intensively studied impact of the component masses, neutron star spins are essential for a proper modeling of BNS mergers and their observable signals. While observations confirm that isolated neutron stars can have dimensionless spins up to χ ∼ 0.4 [45], the fastest-spinning BNS systems known to merge within a Hubble time, i.e., PSR J0737-3039 [46] and PSR J1946+2052 [47], are expected to have spins of only χ ∼ 0.04 or χ ≲ 0.05 at merger. GW170817 itself provides only weak constraints on the binary components' spins, which also affects estimates of the component masses due to a degeneracy between the mass ratio q and the aligned spin components in gravitational waveforms. For a high-spin prior (|χ| ≤ 0.89), the inferred component masses range from 0.86 M⊙ to 2.26 M⊙ [16]. Numerous studies have shown that neutron star spins impact the lifetime of the remnant, the disk mass, and the amount of ejected material, e.g., [48–55].

FIG. 1. BNS merger phenomena for different timescales, ranging from seconds to milliseconds before the merger and from milliseconds to years after the merger. As a simplified overview, we illustrate the physical phenomena together with the associated observable multi-messenger signals, consisting of GWs and EM signatures: inspiral and merger of the two neutron stars, ejection of lanthanide-rich (in red) and lanthanide-poor (in blue) material, formation of a black hole remnant with accretion disk, launch of a relativistic jet (in purple), formation of heavy elements via the r-process, kilonova emission (in yellow), and non-thermal afterglows of the GRB (in purple) and the kilonova (in dark red).

Consequently, the stars' spin might also affect the signatures of the EM counterparts. In this work, we investigate the role of neutron-star spins in BNS mergers and assess their impact on nucleosynthesis yields and multi-messenger observables, i.e., the GW signal, the kilonova light curves, and the kilonova/GRB afterglow. For this purpose, we perform state-of-the-art numerical-relativity (NR) simulations that solve Einstein's field equations together with the equations of general-relativistic radiation magnetohydrodynamics (GRRMHD), including a proper treatment of the underlying microphysics based on fundamental theories of the strong and weak interactions. In particular, we use the bam code [56, 57] with recent updates to incorporate tabulated, nuclear-physics-informed EOSs [58], a first-order multipolar (M1) neutrino-transfer scheme [59], and magnetic fields [60].
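For orientation on the spin magnitudes quoted above, the dimensionless spin follows from χ = cJ/(GM²), with J = 2πI/P for rigid rotation at spin period P. The snippet below evaluates this for PSR J1946+2052-like numbers; the moment of inertia I and the mass M are assumed fiducial values (I is EOS-dependent and not measured), so the result only indicates the order of magnitude, consistent with the quoted χ ∼ 0.04.

```python
import numpy as np

G, c, Msun = 6.674e-8, 2.998e10, 1.989e33   # cgs units

def dimensionless_spin(P, I, M):
    """chi = c*J/(G*M^2) for a rigidly rotating star with spin period P [s],
    moment of inertia I [g cm^2], and mass M [g]."""
    J = 2.0 * np.pi * I / P
    return c * J / (G * M**2)

# PSR J1946+2052-like period; I and M are assumed fiducial values
chi = dimensionless_spin(P=0.017, I=1.4e45, M=1.25 * Msun)
print(f"chi ~ {chi:.3f}")   # ~0.04, the order quoted for merging pulsars
```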
Several studies have highlighted the importance of incorporating magnetohydrodynamic effects, e.g., [13, 61–68], and neutrino radiation, e.g., [69–75], to reliably model BNS mergers, their remnants, and the outflowing material. Although the number of NR simulations that take both magnetic fields and neutrino radiation into account is growing, e.g., [15, 76–83], the BNS systems considered are still limited and, to our knowledge, restricted to neutron stars without spin. We point out that spinning BNS configurations, either with a neutrino leakage scheme or with the inclusion of magnetohydrodynamic effects, have already been simulated and analyzed, e.g., in Refs. [50, 54]. However, we include both magnetic fields and neutrino radiation with the M1 scheme to study spin effects in BNS mergers as comprehensively as possible.

We study equal-mass BNS systems in which both stars have a gravitational mass of 1.35 M⊙. One configuration is initially non-rotating, while the stars in the other have a dimensionless spin of χ = 0.1 aligned with the orbital angular momentum. Although this spin is higher than previously observed in merging BNS systems, it allows us to investigate the extent to which we can distinguish observable features between the two systems. We simulate these BNS systems at high resolution, using a grid spacing of ∆x ≈ 93 m to cover the neutron stars. This is necessary to capture small-scale dynamics and resolve instabilities relevant for magnetic-field amplification. Additionally, we perform the same simulations at reduced resolution to verify our results and distinguish artifacts caused by finite resolution. The simulations are performed for about 100 ms into the post-merger phase to also cover magnetic- and neutrino-driven winds. In total, the simulations consumed 29.88 million CPU-h on the HPE Apollo (Hawk) at the High-Performance Computing Center Stuttgart (HLRS, see also Appendix C).

We analyze the magnetic-field amplification during and after the merger, and study the post-merger remnant, the matter outflow, and its composition. Additionally, we co-evolve Lagrangian tracer particles that we use to study nucleosynthesis in post-processing. In order to connect the simulated BNS systems to observable signatures, we extract the GW strain and the ejecta from the simulation data. Based on the latter, we calculate the associated kilonova signal and its afterglow.

The article is structured as follows: Section II summarizes the numerical methods of the employed codes, the simulated BNS configurations, and the EOS used. The results of the BNS simulations are discussed in Sec. III, focusing on the merger remnant, the magnetic-field amplification, and the matter outflows, as well as the nucleosynthesis yields of the ejecta. The multi-messenger observables, including the GW signal, kilonova light curves, and afterglow emission, are presented in Sec. IV. Finally, Sec. V summarizes our key findings and results. Unless stated otherwise, we use a (−, +, +, +) signature for the metric and geometric units with G = c = M⊙ = 1.

II. METHODS AND SETUPS

A. Spacetime and matter evolution

We perform the evolution of the BNS configurations using bam [56, 57, 84, 85], incorporating recent extensions for neutrino radiation [58, 59] and magnetic fields [60]. bam solves Einstein's field equations numerically in the 3+1 formulation. For the spacetime evolution, we use the Z4c reformulation with constraint damping terms [86, 87], together with the 1+log slicing [88] and Γ-driver shift [89] conditions.
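To illustrate the singularity-avoiding character of the 1+log condition referenced above, consider its homogeneous limit ∂t α = −2αK with vanishing shift: for K > 0 (a collapsing region) the lapse decays exponentially, effectively freezing the evolution there. The snippet below is a conceptual toy with a constant K, not the bam gauge implementation.

```python
import numpy as np

# Toy illustration of 1+log slicing, d(alpha)/dt = -2*alpha*K, on a
# homogeneous slice with constant mean curvature K and beta^i = 0.
K, dt, alpha = 0.1, 0.01, 1.0
for _ in np.arange(0.0, 5.0, dt):
    alpha += dt * (-2.0 * alpha * K)          # forward Euler step
print(alpha, np.exp(-2.0 * K * 5.0))          # numerical vs. exact decay
```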
For the matter evolution, our GRRMHD implementation employs the Valencia formulation [90-93]. We adopt the ideal magnetohydrodynamics approximation, i.e., we assume infinite conductivity. To ensure the divergence-free constraint for the magnetic field and prevent the formation of unphysical magnetic monopoles, we apply the hyperbolic divergence-cleaning scheme [94, 95]. (We note that our implementation had a mistake in the flux computation of the magnetic field in the damping term; we analyzed the average relative error weighted by the flux and estimate it to be small.)

Neutrino-matter interactions are modeled using the M1 scheme [73, 97-102]. Details about the implementation, as well as the set of weak reactions employed in this work, can be found in Ref. [59], except for elastic scattering on α-particles. In this work, we employ the elastic opacities of Ref. [103] as implemented in Ref. [58]. We highlight that, in this manner, the opacities are computed using the same microscopic model as that of the EOS described in Sec. II D. Moreover, we have implemented the computation of neutrino emissivities in the trapped regime based on black-body functions at equilibrium, as in Refs. [73, 104] (see Appendix A). This improves the stability under rapid variations of the fluid's temperature.

The computational grid in bam is structured as a hierarchy of nested refinement levels l = 0, 1, ..., L − 1, where each of the L levels contains one or more Cartesian boxes with fixed grid spacing. The resolution between successive levels increases by a factor of two, such that the grid spacing on level l is given by h_l = h_0/2^l. To ensure that the neutron stars are always covered by the finest refinement level, the Cartesian boxes for the inner levels with l ≥ l_mv move dynamically to track the neutron stars. We use a fourth-order Runge-Kutta scheme for time integration with a fixed Courant-Friedrichs-Lewy factor.

For the kilonova modeling, the ejecta properties are extracted from the simulations at ∼80 ms after the merger [122], and this procedure has proven to give robust results. For the computation of light curves, the Monte Carlo radiative transfer code possis [112, 113] continues to expand the ejecta homologously, assuming a constant velocity v_i for each fluid cell. The code generates photon packets at each time step and assigns them an energy, frequency, and direction of propagation. possis employs heating-rate libraries from Ref. [124] and thermalization efficiencies from Refs. [125, 126]. Furthermore, it adopts wavelength- and time-dependent opacities as functions of the local ejecta properties, i.e., rest-mass density, temperature, and electron fraction (ρ, T, Ye), from Ref. [127]. The photon packets are propagated through the ejecta, accounting for interactions with matter through electron scattering and bound-bound absorption that alter their assigned propagation direction, frequency, and energy. Finally, synthetic observables for different observation angles ι are computed on the fly using Monte Carlo estimators with the event-based technique discussed in Ref. [112]. We perform the radiative transfer simulations using N_ph = 10^6 photon packets.
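To make the homologous-expansion step explicit, the following minimal Python sketch (our own illustration with our own variable names, not code from possis) propagates ejecta cells with constant velocities and dilutes their densities as ρ ∝ t⁻³, as implied by homologous flow:

    import numpy as np

    def expand_homologously(v, rho0, t0, t):
        # Homologous flow: each cell moves with constant velocity v,
        # so r_i(t) = v_i * t and the density dilutes as rho ~ t^-3.
        r = v * t
        rho = rho0 * (t0 / t) ** 3
        return r, rho

    # Example: two cells at 0.1c and 0.3c, expanded from 80 ms to 1 day
    # (times in seconds, velocities in units of c, densities arbitrary).
    v = np.array([0.1, 0.3])
    rho0 = np.array([1.0e-10, 1.0e-12])
    r_new, rho_new = expand_homologously(v, rho0, t0=0.08, t=86400.0)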
3. pyblastafterglow

The outflows from BNS mergers produce not only intrinsic radiation in the form of a kilonova, but also radiation through interactions with the ambient matter. Ejected matter with relativistic escape velocities hits the interstellar medium surrounding the merger site, producing a non-thermal EM signature. This signature, commonly referred to as an afterglow, arises from relativistic electrons gyrating around magnetic field lines in collisionless shocks. Specifically, BNS mergers can produce two types of afterglows: a GRB afterglow from an ultra-relativistic, collimated jet and a kilonova afterglow from the mildly relativistic, circum-planar ejecta. We determine the expected afterglow emissions from our simulations using pyblastafterglow [114-116], a modular code that evolves the dynamics of the discretized blast wave semi-analytically as it hits the cold, constant-density ambient interstellar medium. In a second step, it calculates the energy distribution of accelerated electrons and their synchrotron emission.

For the kilonova afterglow calculation, pyblastafterglow takes the angular ejecta velocity distributions from our NR simulations as input to initialize the blast waves. The code assumes azimuthal symmetry, i.e., the ejecta velocities depend only on the polar angle. Each blast wave is evolved independently by solving energy conservation equations [114]. To determine the emitted radiation, it is important to note that the dynamical and secular ejecta from BNS mergers are at most mildly relativistic, with the velocity and Lorentz factor obeying βΓ ≲ 5. Previous studies have shown that in these types of mildly relativistic collisionless shocks, both a thermal (Maxwellian) and a non-thermal (power-law) electron component can potentially deliver significant contributions to the produced synchrotron spectrum [128, 129]. This is in contrast to highly relativistic shocks with Γ ≫ 1, where the contribution from the thermal population is negligible [130, 131]. Hence, pyblastafterglow includes an implementation of the thermal synchrotron model from Ref. [128] that is used to determine the kilonova afterglow emission. Thus, computing kilonova afterglows requires us to set five additional parameters, namely the ambient density n_0, the power-law index of the non-thermal electron component p, and the fractions of the internal blast-wave energy that are converted to magnetic fields, ε_B, and used for particle acceleration, ε_e. Finally, ε_T is the fraction of the shock energy that is converted to thermal electrons.

For the calculation of the GRB afterglow, we assume a Gaussian jet with the following distribution for the isotropic-energy equivalent:

E_iso(θ) = E_0 exp[−θ² / (2θ_c²)] if θ ≤ θ_w, and 0 else. (1)

This structure is inspired by numerical jet simulations [132-134]. Here, E_0 is the central isotropic kinetic-energy equivalent, θ_c denotes the core angle, and θ_w is the cut-off. Since we cannot infer the jet structure from our NR simulations, we resort to fiducial values taken from GRB170817A, as discussed below in Sec. IV C.
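As an illustration of Eq. (1), a minimal Python sketch of the Gaussian jet profile is given below; the parameter values are placeholders, not the GRB170817A-motivated fiducial values of Sec. IV C:

    import numpy as np

    def e_iso(theta, e0, theta_c, theta_w):
        # Gaussian jet structure, Eq. (1):
        # E_iso = E0 * exp(-theta^2 / (2 theta_c^2)) inside the
        # wing cut-off theta_w, and zero outside.
        theta = np.asarray(theta)
        profile = e0 * np.exp(-theta**2 / (2.0 * theta_c**2))
        return np.where(theta <= theta_w, profile, 0.0)

    # Placeholder values: E0 = 1e52 erg, core 0.09 rad, cut-off 0.3 rad.
    theta = np.linspace(0.0, 0.5, 6)
    print(e_iso(theta, e0=1.0e52, theta_c=0.09, theta_w=0.3))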
The jet's blast-wave elements are evolved according to the same shock-jump conditions and lateral spreading prescriptions of Ref. [116]. The non-thermal electron distribution is evolved according to a Fokker-Planck-type continuity equation and convolved with the classical synchrotron kernel to determine the comoving radiation [135]. This step relies on the microphysical parameters p, ε_e, ε_B with the same meaning as above for the kilonova afterglow, although they do not necessarily need to coincide with the values for the kilonova blast waves. We further assume that the GRB and kilonova blast waves evolve independently. This simplification is justified by the fact that only a small part of the kilonova blast wave travels through the circum-burst density of the collimated jet. These are mainly the polar ejecta, which in our case carry the lowest mass, and we confirmed by excising their blast waves from our kilonova afterglow computations that they do not affect the light curves. Moreover, the investigation in Ref. [114] shows that even if the blast waves are coupled to the pre-shocked density of the GRB jet, the effect on the dynamics of the kilonova ejecta is small.

C. Configurations

We present simulations for two equal-mass BNS systems with gravitational masses of 1.35 M⊙ and baryonic masses M_b = 1.48 M⊙, corresponding to neutron stars in isolation with coordinate radius R = 12.22 km and tidal deformability Λ = 478.6. We employ the ABHT(QMC-RMF3) EOS [136] to describe neutron-star matter, satisfying constraints from terrestrial experiments, astrophysical observations, and first-principles calculations of neutron matter; cf. Sec. II D. One configuration has non-spinning neutron stars, and the other one has spinning neutron stars, aligned with the orbital angular momentum. For the latter, we choose the dimensionless spin of the two stars as χ_1 = χ_2 = 0.1. Both setups have an initial coordinate distance d_0 ≈ 41.2 km. We construct initial data using the pseudo-spectral code sgrid [137-140], which uses the extended conformal thin sandwich formulation [141, 142] to solve the Einstein constraint equations.

The magnetic field is subsequently superimposed for the evolution as a purely poloidal field inside each star. It is defined by the vector potential

A_x = −(y − y_NS) A_b max(p − p_cut, 0)², (2)
A_y = (x − x_NS) A_b max(p − p_cut, 0)², (3)
A_z = 0, (4)

where x_NS and y_NS refer to the coordinate center of each neutron star. We set A_b = 1510 to obtain a maximum magnetic field strength on the order of 10^15 G inside the stars. The cut-off pressure p_cut is chosen as 0.004 × p_max, where p_max is the initial maximum pressure in the neutron star at t = 0 ms. This adopted magnetic field is several orders of magnitude stronger than observed in neutron stars in binary systems, where the surface magnetic fields are typically between ∼10^8 and 10^12 G [143, 144]. It is nevertheless common in NR simulations to assume a strength of up to ∼10^15 G just before the merger, e.g., [14, 15, 65, 81, 145-148]. The idea of setting this strong magnetic field is to compensate for the unresolved small-scale dynamics, which lead to magnetic field amplification but are not captured at the limited resolution.

Each system is simulated in a low-resolution (R1) and a high-resolution (R2) setup, respectively. The grid configuration in both cases consists of L = 8 refinement levels, comprising three outer, non-moving, and five inner, moving levels. For the R1 setup, the inner and outer refinement boxes contain n_mv = 128 and n = 256 points per direction, respectively, with a grid spacing of h_L = 186 m on the finest level. We double the resolution in the R2 configuration, obtaining n_mv = 256 and n = 512 points per direction for the inner and outer refinement boxes, respectively, with a grid spacing of h_L = 93 m on the finest level.
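For concreteness, a minimal Python sketch of the initial vector potential in Eqs. (2)-(4) follows; this is our own illustration, and the function and variable names are not those of the bam implementation:

    import numpy as np

    def poloidal_vector_potential(x, y, p, x_ns, y_ns, a_b=1510.0, p_cut=None):
        # Eqs. (2)-(4): A_x = -(y - y_NS) A_b max(p - p_cut, 0)^2,
        #               A_y = +(x - x_NS) A_b max(p - p_cut, 0)^2, A_z = 0.
        # The cut-off pressure is 0.004 of the initial maximum pressure.
        if p_cut is None:
            p_cut = 0.004 * p.max()
        w = a_b * np.maximum(p - p_cut, 0.0) ** 2
        return -(y - y_ns) * w, (x - x_ns) * w, np.zeros_like(w)

The magnetic field itself follows as B = ∇ × A; since the weight max(p − p_cut, 0)² vanishes outside the high-pressure interior, the field is confined to the inside of each star.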
D. Equation of state

The ABHT(QMC-RMF3) EOS [136, 149, 150] is specifically developed for dense matter in BNS mergers. It is based on a relativistic mean-field theory (RMFT) that describes the interaction of nucleons via meson exchange. The free parameters of most RMFTs are fitted such that the model describes matter and nuclei that are nearly isospin-symmetric, i.e., have a proton fraction of roughly 50 %, and are then extrapolated to the neutron-rich matter in neutron stars. QMC-RMF3 is fitted to the binding energy per nucleon of pure neutron matter obtained by first-principles chiral effective field theory calculations [151]. Simultaneously, the nucleon-meson and meson-meson coupling constants are fitted for consistency with the zero-temperature properties of isospin-symmetric nuclear matter and observations of neutron star masses, radii, and tidal deformabilities. The EOS prescription includes neutrons, anti-neutrons, protons, anti-protons, electrons, positrons, and a non-interacting photon gas. At low densities, a thermodynamically consistent procedure is used to match the EOS in temperature, density, and proton-fraction space to a low-density RMFT called HS(IUF) [152, 153], which is optimized for the more isospin-symmetric matter expected at low densities and includes finite nuclei. Since the EOS is derived via RMFTs, it is always causal, and the extension of the EOS from zero temperature to finite temperatures is self-consistent.

At nuclear saturation density n_sat = 0.1546 fm⁻³, the ABHT(QMC-RMF3) EOS has a binding energy per nucleon E_B = −16.39 MeV, incompressibility κ = 231.3 MeV, symmetry energy J = 31.29 MeV, slope of the symmetry energy L = 47.20 MeV, and effective nucleon mass divided by its vacuum mass m*/m = 0.739. These values are consistent with the white intersection region in Fig. 2 of Ref. [154] and with Refs. [155, 156]. The fitting of the RMFT coupling constants further ensures consistency with the neutron matter binding energy derived from the chiral effective field theory calculation of Ref. [151] between 0.5 n_0 and 1.5 n_0 at zero temperature, where n_0 = 0.16 fm⁻³. In addition, in Ref. [136] ABHT(QMC-RMF3) was shown to be consistent with the finite-temperature chiral effective field theory calculation of Ref. [157] at T = 20 MeV. The ABHT(QMC-RMF3) EOS predicts the following neutron star properties: R_1.4M⊙ = 12.21 km, M_max = 2.15 M⊙, and Λ_1.4M⊙ = 387, where Λ is the dimensionless tidal deformability. These values are consistent with current astrophysical observations [36, 158-167].

III. BINARY NEUTRON STAR MERGER SIMULATIONS

A. Qualitative overview

As the main focus of our study lies on the merger and post-merger phase, our BNS simulations start with a relatively small initial separation and cover only the last few orbits of the inspiral phase. Both BNS configurations merge after approximately 4.5 orbits, emit GWs, and form a massive neutron star (MNS) remnant. We define the merger time t_merger as the time when the amplitude of the GW strain in its dominant (2, 2) mode reaches its maximum. For the non-spinning and spinning BNS systems, the merger time is t_merger = 11.17 ms and t_merger = 12.05 ms, respectively. As expected, the configuration with aligned spin χ = 0.1 merges slightly later because of the orbital hang-up effect [168]. This is due to a repulsive spin-orbit interaction and has already been discussed in several studies of BNS mergers with rotating neutron stars, e.g., Refs. [50, 51, 53, 54, 139, 169]. Figure 2 visualizes the matter dynamics, the ejecta expansion, and the magnetic field evolution as 3D snapshots of the BNS system without spin during and after the merger.
TABLE I. Remnant properties of the BNS simulations. From left to right: initial dimensionless spin χ of the neutron stars, resolution of the simulation, remnant MNS mass M_MNS, disk mass M_disk, and ejecta properties, comprising the ejecta mass M_eje, mass-averaged electron fraction Y_e^eje, and mass-averaged asymptotic velocity v_∞^eje. The ejecta quantities are extracted on a sphere with radius ∼440 km on refinement level l = 2. The remnant and disk masses are computed as the integral of the bound conserved rest-mass density D, defining the massive neutron star remnant in regions where ρ > 10^13 g/cm³, at the end of the simulation.

χ    res.  M_MNS [M⊙]  M_disk [M⊙]  M_eje [M⊙]  Y_e^eje  v_∞^eje
0.0  R2    2.7910      0.1426       0.0137      0.407    0.165
0.1  R2    2.7480      0.1919       0.0098      0.438    0.148
0.0  R1    2.7992      0.1364       0.0233      0.406    0.192
0.1  R1    2.7371      0.2164       0.0124      0.397    0.184

Understanding the different mass-ejection mechanisms, e.g., through magnetically and neutrino-driven winds, is particularly important for interpreting the associated EM counterparts. The first snapshot presents the neutron stars at the onset of the merger, exhibiting the purely poloidal magnetic field inside the stars as initialized. Subsequently, the magnetic field lines twist and wind up, leading to a strong toroidal field in the surrounding disk and forming a helicoidal structure at ∼40 ms after the merger. Later, at ∼100 ms after the merger, we observe a magnetic flux emanating from the remnant along the polar axis, forming a jet-like structure. In Sec. III B, a more detailed analysis of the magnetic field amplification and its mechanism is provided.

Table I provides a summary of the merger remnant properties, including remnant masses, disk masses, and key ejecta properties for all simulations performed. Each system produces an MNS remnant, which remains stable within the simulation time of ∼100 ms after the merger. For the classification of ejected matter, both dynamical and secular ejecta, we use the geodesic criterion, according to which matter is considered unbound if the time component of the four-velocity satisfies u_t < −1; cf. [170]. The gravitationally bound matter comprises the remnant system with the MNS and the surrounding disk. To obtain M_MNS and M_disk separately, we adopt the usual convention and define the disk where ρ > 6 × 10^9 g/cm³, excluding the MNS region with ρ > 10^13 g/cm³.

Finally, we perform a discrete fast Fourier transform over our cube with N³ equally spaced points, where the components of the wave vector are given by k_d = 2πn/L with n ∈ [0, N/2]. We present the resulting spectral distributions of the kinetic and magnetic energy in Fig. 6 for t = {2.5, 5, 10, 20} ms after the merger. The spectra exhibit the expected behavior for a turbulent magnetohydrodynamic scenario with a weak magnetic field, in which the magnetic field evolves passively and the turbulence is predominantly hydrodynamical. The shape of the kinetic energy spectra approximately follows a k^(−5/3) slope, consistent with Kolmogorov's theory [187]. Moreover, during the first 10 ms after the merger, the magnetic energy spectra can be approximated at longer wavelengths (smaller wave numbers) by a slope of k^(3/2), in agreement with Kazantsev's model [188]. Accordingly, the kinetic energy is dominated by larger length scales and the magnetic energy by smaller ones. This also highlights the importance of numerical resolution for capturing the Kelvin-Helmholtz instability (KHI). Although the resolution is expected to have less impact on the evolution of the kinetic energy on large scales, most magnetic energy is created on small scales. The drop in the magnetic energy spectra at high wave numbers ≳10^−4 cm^−1, corresponding to length scales of order ≲100 m, is likely due to numerical dissipation on these length scales, as our grid spacing is of similar order with ∆x ≈ 93 m. Our results reproduce the typical dynamical stages of the KHI, as discussed in previous studies, e.g., Refs. [186, 189].
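The spectral analysis described above can be summarized in a short NumPy sketch; the shell binning and normalization conventions are ours and only schematic, not the production analysis code:

    import numpy as np

    def energy_spectrum(eps, box_length):
        # Discrete FFT of an energy density eps sampled on an N^3 cube,
        # with wave-vector components k_d = 2*pi*n/L, n in [0, N/2].
        n = eps.shape[0]
        power = np.abs(np.fft.fftn(eps) / n**3) ** 2
        k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=box_length / n)
        kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
        kmag = np.sqrt(kx**2 + ky**2 + kz**2)
        # Bin the Fourier power into spherical shells in |k|.
        bins = np.linspace(0.0, np.abs(k1d).max(), n // 2)
        spec, _ = np.histogram(kmag, bins=bins, weights=power)
        return 0.5 * (bins[1:] + bins[:-1]), spec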
Initially, within the first few milliseconds after the merger, the hydrodynamic cascade is fully developed, following the Kolmogorov spectral slope. Subsequently, during the kinematic phase, the magnetic field grows through a hydrodynamical turbulent mechanism, as in Kazantsev's theory, until reaching saturation.

FIG. 6. Evolution of the kinetic (solid) and magnetic (dashed) energy spectra as a function of the wave number. Results are shown for the non-spinning (green lines) and spinning (purple lines) high-resolution BNS simulations at t = {2.5, 5, 10, 20} ms after the merger. The black lines correspond to the Kolmogorov (solid, ∝k^(−5/3)) and Kazantsev (dashed, ∝k^(3/2)) slopes.

The spectra of the two different BNS systems during the first few milliseconds after the merger are largely identical. Thus, the initial magnetic field amplification through the KHI seems independent of the stars' intrinsic spin, as it behaves identically in both BNS systems. Differences become noticeable only at later times of ≥10 ms after the merger. On these timescales, the magnetic energy spectra also increase at smaller wave numbers, indicating an inverse cascade. In agreement with Fig. 4, the magnetic energy amplifies more strongly for the BNS system without spin. We conclude that this difference between the BNS systems is related to the transfer of energy to larger scales, potentially through magnetic winding or MRI-driven dynamo mechanisms, which we discuss in the following section.

2. Magneto-rotational instability

The interaction of the magnetic field with a sheared background flow, such as in an accretion disk, is expected to power the magneto-rotational instability (MRI) [190]. Several studies have investigated its role in BNS merger scenarios, e.g., [15, 68, 182, 183], suggesting that the MRI could sustain turbulence in the remnant's envelope and drive a large-scale dynamo. The MRI is only active for a radially decreasing angular velocity, ∂_R Ω < 0.

From the remaining dynamical ejecta, we separate tracers with high entropy s_0 ≥ 80 k_B/nuc, by and large coinciding with the fast, shock-ejected component, and split the rest into a neutron-rich component with Y_e ≤ 0.25 and a neutron-poor component with Y_e > 0.25. Figure 16 shows the mass-weighted contributions of these groups to the total abundance in the spinning case, while Table II summarizes the total ejecta masses in different mass ranges for both configurations.

TABLE II. Mass of different ejecta components in 10⁻³ M⊙.

                               Wind          Dynamical
Ejecta component (χ)           0.0    0.1    0.0    0.1
α-particles (A = 4)            0.00   0.08   0.16   0.06
Iron group (50 ≤ A ≤ 56)       0.60   0.48   0.00   0.00
First peak (73 ≤ A ≤ 91)       5.74   3.66   0.65   0.53
Second peak (121 ≤ A ≤ 138)    0.13   0.06   2.72   1.50
Third peak (186 ≤ A ≤ 203)     0.00   0.00   0.40   0.29
Lanthanides (139 ≤ A ≤ 176)    0.00   0.00   0.15   0.10
Actinides (210 ≤ A ≤ 260)      0.00   0.00   0.02   0.01
Rest                           0.41   0.97   0.94   0.56
Total                          6.89   5.25   5.04   3.06
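The tracer classification above translates directly into a few lines of Python; the thresholds are those quoted in the text, while the array and function names are ours:

    import numpy as np

    def classify_dynamical_tracers(s0, ye):
        # Shock-ejected component: initial entropy s0 >= 80 kB per nucleon.
        shock_heated = s0 >= 80.0
        # Remaining tracers are split by electron fraction at Ye = 0.25.
        neutron_rich = ~shock_heated & (ye <= 0.25)
        neutron_poor = ~shock_heated & (ye > 0.25)
        return shock_heated, neutron_rich, neutron_poor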
Winds account for about 60 % of the total ejecta mass in both setups. Despite comprising only about 150 tracers each, they cast an extremely homogeneous picture: they fall out of nuclear statistical equilibrium (NSE) only as they are ejected from the disk at velocities of about 0.1 c, with a fairly narrow density distribution around log(ρ_NSE/g cm⁻³) = 6.8 ± 0.2, corresponding to a similarly narrow distribution of entropies around S ≈ 20 k_B/nuc. Early in the post-merger phase, fast protonization occurs in the disk due to the overproduction of ν̄_e via positron capture on free neutrons, which decouples from matter deeper in the remnant and leads to relatively high electron fractions around Y_e ≳ 0.35. Hence, the final composition of the wind ejecta remains similar to the respective NSE composition [208]: most matter rests in nuclei up to ⁸⁸Sr in the first r-process peak and favors even mass numbers by roughly one order of magnitude. Only traces of second-peak elements form in the trajectories with the lowest Y_e, and hardly any material reaches beyond A = 128.

FIG. 17. "Treasure map" showing tracer particles projected onto the x-y plane and their mass-weighted relative abundances of selected elements for the high-resolution spinning BNS simulation. The panels in the upper, middle, and lower rows respectively show the maps for t = {−0.88, 1.25, 7.31} ms after the merger.

At frequencies above 1 GHz, the emission is dominated by the non-thermal (or power-law) electron distribution. The timescale of that non-thermal peak depends on the deceleration of the blast waves, where a higher n_0 causes an earlier and stronger deceleration. Hence, the peak is shifted from t ≈ 550 days for n_0 = 0.1 cm⁻³ to t ≈ 1100 days for n_0 = 0.01 cm⁻³. Since the kilonova ejecta are not collimated, the observation angle ι has only a moderate effect on the observed light curve, though for an observer in the orbital plane at ι = 90°, the light curve is generally dimmer and rises more slowly. This is because the kilonova afterglow is dominated by the material near the equatorial plane, where most of the ejecta mass resides. An observer located at the pole with a viewing angle of ι = 0° sees more of the equatorial ejecta, whereas an observer at ι = 90° receives emission mainly from one side.

For the kilonova afterglows shown in Fig. 23, we have set p = 2.5, ε_e = 0.01, ε_B = 0.001, and ε_T = 1. These values are consistent with particle-in-cell simulations [131, 239, 240], though the possible ranges remain broad. For this reason, we have conducted additional kilonova afterglow calculations varying the microphysical parameters. For instance, keeping the other parameters fixed but setting ε_e = 0.01 and ε_B = 10⁻⁵ reduces the flux density by roughly a factor of 500, while setting ε_e = 0.2 and ε_B = 0.005 increases the flux density by a factor of 10. Additionally, ε_e also influences the relative contribution of the thermal electron population, i.e., whether or not the radio light curve displays an early peak. The electron power-law index p sets the non-thermal synchrotron flux and the power-law decay of the late-time light curve, while the thermal part remains relatively unaffected. The thermal efficiency ε_T, in contrast, determines the energy budget of the thermal electrons, and thus its choice can alter the early part of the radio light curve notably. Recently, Ref. [129] suggested a more realistic value of ε_T = 0.4 based on the particle-in-cell simulations of Ref. [241]. Adopting this value while keeping the other parameters from Fig. 23 would reduce the thermal electron radiation notably and erase the first peak.
From Fig. 23 it is also apparent that, unless the BNS is observed far off-axis, the GRB afterglow dominates the early-time light curve and practically prevents a distinct detection of the early kilonova afterglow. However, if the isotropic jet energy is low, the thermal radio peak could rise above the GRB afterglow and cause a rebrightening of the light curve at t ∼ 100 days. This hinges on the uncertain amount of thermal electron energy and the interstellar density. Rebrightening could also be caused in a similar way later, at t ∼ 1000 days, by the non-thermal peak. At the higher jet energy of 10^52.2 erg, the late-time GRB afterglow always outshines the kilonova afterglow by several orders of magnitude, rendering the latter effectively undetectable as a distinct source.

In general, at a luminosity distance of 100 Mpc, the absolute brightness of the kilonova afterglow remains low and does not exceed the typical brightness limits of current telescopes. For n_0 = 0.01 cm⁻³, the light-curve peak remains below the 3σ rms limit for 12 × 2.5 h observations with the VLA radio telescope [27] or for a similar observation with the SKA [242, 243]. However, at n_0 = 0.1 cm⁻³ the thermal and non-thermal radio peaks get close to the VLA detection limit and would be observable with the SKA if the GRB afterglow is not too bright. In near-optical frequency bands, however, detection of the kilonova afterglow might prove even more challenging. The 5σ image depth of the Vera Rubin Observatory in its main survey [234, 244] and the estimated 5σ limiting magnitude of the ELT [245, 246] significantly exceed the expected non-thermal optical peak of the kilonova afterglow, regardless of n_0 or other microphysical parameters. This also applies to Chandra X-ray detection, assuming a flux limit of 5.3 × 10⁻¹⁷ erg cm⁻² s⁻¹ at 200 ks exposure [247]. For next-generation observatories like ATHENA [248] or AXIS [249], the flux limit in this exposure time might be 2.5 × 10⁻¹⁸ erg cm⁻² s⁻¹ [250], which would make the X-ray peak detectable for n_0 = 0.1 cm⁻³. We emphasize again that this assessment applies at d_L = 100 Mpc and assumes a typical exposure time for these telescopes. Dedicated, ultra-deep follow-ups of a close BNS merger could enhance the detectability prospects. We also point out the possibility that a significant amount of the fast-tail ejecta is not resolved in our NR simulations due to artificial atmosphere settings and resolution issues [174]. Thus, one could speculate whether the kilonova afterglow might be more powerful in certain cases than presented here. Nevertheless, given our BNS ejecta profiles, a confident kilonova afterglow detection remains challenging, especially in a scenario with a powerful near-axis GRB afterglow.

V. CONCLUSIONS

We perform state-of-the-art numerical-relativity simulations of equal-mass binary neutron star mergers in a spinning and a non-spinning configuration with bam, incorporating both neutrino radiation and magnetohydrodynamical effects. The neutron star matter is modeled using the ABHT(QMC-RMF3) equation of state. We initialize a poloidal magnetic field with a maximum strength of 10^15 G inside the stars. Each system is simulated at a high resolution with a grid spacing of ∆x ≈ 93 m covering the neutron stars and at a low resolution with ∆x ≈ 186 m, and is evolved up to ∼100 ms after the merger, covering, besides the dynamical ejecta, also the secular outflows driven by magnetically and neutrino-driven winds.
With the same initial separation of 41.2 km, the system with spin-aligned stars merges about 0.88 ms later than the non-spinning system due to the repulsive spin-orbit interaction. The reduced impact velocity at the merger leads to a less violent collision in the spinning configuration, resulting in lower temperatures in the merger remnant and a smaller amount of shock-heated ejecta. Due to the higher angular momentum, the spinning system forms a more massive disk and a more extended remnant with lower central rest-mass densities.

The initial magnetic field amplification due to the Kelvin-Helmholtz instability, triggered in the shear layer during the merger, behaves similarly in both configurations and reaches similar field strengths and energies within the first ≲10 ms after the merger. Subsequently, the magnetic energy increases further through magnetic winding and the magneto-rotational instability, driven by the interaction of the magnetic field with the differentially rotating disk. Higher magnetic energies are reached in the non-spinning scenario.

The amount of ejected material is larger in the non-spinning configuration. In the simulated equal-mass systems, the dynamical ejecta are mainly generated by shocks at the collision interface, with only minor contributions from tidal disruption. The more violent merger of the non-spinning system produces stronger shocks, leading to larger ejecta masses and higher outflow velocities. Furthermore, we find for this system a lower electron fraction in the orbital plane of ≲0.1, and thus more neutron-rich ejecta. On later timescales of >10 ms after the merger, neutrino-driven winds power a slower outflow component with progressively higher electron fractions. Finally, we observe the launch of a mildly relativistic collimated outflow with velocities of up to 0.4 c along the polar direction and high magnetization at ∼100 ms after the merger in the non-spinning configuration.

The thermodynamical conditions in the different ejecta components lead to characteristic abundance patterns from r-process nucleosynthesis. We confirm that the low-Y_e components of the dynamical ejecta robustly reproduce the global features of the heavy-element abundances observed in the solar system, whereas the late winds cannot substantially surpass the first r-process peak. A substantial lack of low-mass lanthanides points towards our still incomplete understanding of various nuclear decay properties far from stability, while the relative paucity of nuclei around A = 190 is linked to an underproduction of cold ejecta in equal-mass systems.

To obtain a comprehensive picture of the observable multi-messenger signatures, we extract the gravitational-wave signal and use the ejecta data to compute the light curves of the associated kilonova and its afterglow. Since the description of the gravitational-wave signal during the inspiral is well established by various approximants, we focus on the analysis of the post-merger frequencies, where we find slight shifts in the dominant peak frequencies between the two configurations. However, these differences are within the numerical uncertainties of the simulations and are therefore likely artifacts of the grid resolution.

For the EM counterparts, we focus on the high-resolution simulations. We compute kilonova light curves using the Monte Carlo radiative transfer code possis. The emission is overall brighter in the non-spinning configuration, primarily due to the larger amount of ejecta.
The kilonova is also slightly redder for this system, which we attribute to the lower electron fractions reached in the ejecta. Moreover, the light curves show a stronger dependence on the viewing angle in the non-spinning configuration than in the spinning one. The afterglow emissions are computed using the semi-analytic pyblastafterglow code. For the kilonova afterglow, we use ejecta profiles binned over different viewing angles ι and asymptotic velocities. While both a Maxwellian (thermal) and a power-law (non-thermal) electron component contribute to the afterglow, again the non-spinning configuration produces brighter light curves due to the larger ejecta mass and higher ejecta velocities. Since our numerical-relativity simulations do not resolve an ultra-relativistic jet, we model the GRB afterglow assuming a fiducial Gaussian jet with parameters inferred from GRB170817A and explore different jet energies. When observed nearly on-axis, the GRB afterglow emission dominates over the kilonova afterglow. For an observer in the orbital plane, the GRB afterglow is initially suppressed. Nevertheless, the peak of the thermal emission of the kilonova afterglow is only observable for sufficiently low isotropic jet energies.

While our BNS simulations include a comprehensive treatment of the key microphysical and magnetohydrodynamic processes in mergers of binary neutron stars, the presence of muons and anti-muons, as well as resistive effects of the magnetic field, are neglected. The former is expected to influence primarily the remnant lifetime and the ejecta masses, as well as their composition, thereby affecting the signatures of the kilonova [251, 252]. The latter becomes significant in regions with low conductivity, such as the magnetospheres of neutron stars and the polar cap above the remnant, e.g., [62, 253]. Accounting for finite conductivity would enable modeling magnetic reconnection and dissipation processes that can affect both the jet dynamics and the electromagnetic emission. In addition, simulations using more realistic initial magnetic field configurations consistent with astrophysical observations are needed. We plan to address these aspects in future work and extend our simulations to resistive magnetohydrodynamics with muons.

VI. DATA AVAILABILITY

Gravitational waveforms will be released as part of the CoRe database [254, 255]. Additionally, we provide an animation of the two high-resolution simulations [256]. Further simulation data can be provided upon reasonable request.

ACKNOWLEDGMENTS

We thank B. Brügmann for valuable comments and discussions during this project. We further thank Rosa A. for her thoughtful supervision during the text review of this manuscript. A.N., I.M., and T.D. gratefully acknowledge support from the Deutsche Forschungsgemeinschaft, DFG, project number 504148597 (DI 2553/7). Furthermore, T.D., H.K., H.R., and H.G. acknowledge funding from the EU Horizon under the ERC Starting Grant SMArt (no. 101076369). L.B. is supported by the U.S. Department of Energy under grant No. DE-FG02-05ER41375. A.H. acknowledges support by the U.S. Department of Energy under grant No. DE-FG02-05ER41375. A.H. furthermore acknowledges financial support by the UKRI under the Horizon Europe Guarantee project EP/Z000939/1. M.B. acknowledges the University of Ferrara for financial support through the FIRD 2024 grant.
The simulations with bam were performed on the national supercomputer HPE Apollo Hawk at the High-Performance Computing Center Stuttgart (HLRS) under grant number mmapicture/44289, and on the DFG-funded research cluster Jarvis at the University of Potsdam (INST 336/173-1; project number 502227537). Views and opinions expressed are those of the authors only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them.

Appendix A: Fluid-radiation equilibration step

The computation of the trapped equilibrium solution of Refs. [73, 104] is based on considering the fluid and radiation in the trapped regime as a single system in thermal equilibrium with a total energy density e and lepton number Y_l. From Fermi-Dirac statistics, we know that the fluid is supposed to thermalize at a certain Y_e,eq and T_eq satisfying

e = ρ ε(ρ, T_eq, Y_e,eq) + (ρ/m_b) [Z_νe(ρ, T_eq, Y_e,eq) + Z_ν̄e(ρ, T_eq, Y_e,eq) + 4 Z_νx(ρ, T_eq, Y_e,eq)] , (A1)
Y_l = Y_e,eq + Y_νe(ρ, T_eq, Y_e,eq) − Y_ν̄e(ρ, T_eq, Y_e,eq) , (A2)

with the neutrino number and energy fractions given by

Y_νi = [4π/(hc)³] (m_b/ρ) T³ F_2(μ_νi/T) , (A3)
Z_νi = [4π/(hc)³] (m_b/ρ) T⁴ F_3(μ_νi/T) . (A4)

Ref. [73] solves this system of two equations in two variables on the fly at each computation of the right-hand side. In order to improve efficiency, we employ a tabulation strategy based on a precomputed solution. However, tabulating in the internal energy density e is challenging given its wide range, covering several orders of magnitude, and its dependency on the EOS. Therefore, we implicitly define the auxiliary temperature T* by

ρ ε(ρ, T*, Y_l) = e . (A5)

We sample ρ, T*, and Y_l on the corresponding values of ρ, T, and Y_e in the EOS table. For each triad (ρ, T*, Y_l), we compute the corresponding e and solve Eqs. (A1) and (A2) for T_eq and Y_e,eq. The final result is a mapping

(ρ, e, Y_l) ↦ (ρ, T*, Y_l) ↦ (T_eq, Y_e,eq) , (A6)

where the first mapping is performed by solving Eq. (A5) for T* using the standard temperature-recovering routine, and the second one using the same 3D interpolation routines used for computing EOS quantities. Finally, the emissivities computed using (ρ, T, Y_e) and (ρ, T_eq, Y_e,eq) are combined according to the weight function of Ref. [73].

FIG. 24. Comparison between ejecta passing through a sphere with a radius of ∼440 km, represented by the tracer particles (solid lines), and the matter outflow extracted on the coordinate sphere (faint dashed lines) for the high-resolution non-spinning (green) and spinning (purple) BNS simulations.

Appendix B: Tracer particles

We extend bam to evolve tracer particles in order to record the evolution history of Lagrangian fluid trajectories, allowing for a more detailed analysis of the outflowing material and its origin. The tracers are advected via

∂_t x^i_tr = α v^i − β^i , (B1)

where x^i_tr is the position of an individual tracer particle. The lapse α, shift β^i, and fluid velocity v^i, as well as the recorded fluid quantities rest-mass density ρ, electron fraction Y_e, and temperature T, are interpolated to the position x^i_tr of the tracer. The tracer particles are evolved during the BNS simulations using a simple forward Euler scheme, with the time step given by that of the finest refinement level covering the respective domain.
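Schematically, one advection step of Eq. (B1) with the forward Euler scheme reads as follows; the interp_* arguments stand for the grid-to-tracer interpolation routines, and all names are ours rather than bam's:

    def advance_tracer(x_tr, dt, interp_alpha, interp_beta, interp_v):
        # Forward Euler step of Eq. (B1): dx^i/dt = alpha * v^i - beta^i,
        # with lapse, shift, and fluid velocity interpolated to the tracer.
        alpha = interp_alpha(x_tr)
        beta = interp_beta(x_tr)
        v = interp_v(x_tr)
        return x_tr + dt * (alpha * v - beta)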
In total, we inject ∼10^5 tracer particles into our BNS simulations at ≲1 ms before the merger. They are randomly distributed in the neutron star matter in regions with densities 5000 times above the artificial atmosphere value, corresponding to ρ ≥ 6.176 × 10⁷ g/cm³. The tracer particles are evolved until they cross a sphere of radius ∼737 km. Subsequently, we assume a homologous expansion. For the nuclear reaction network, we consider only tracer particles that are unbound and traverse this sphere before the end of our simulation. Excluding ≈100 tracers that are numerically unstable in the nucleosynthesis evolution, we ultimately obtain 58,264 and 60,620 tracers representing the ejecta for the high-resolution non-spinning and spinning BNS simulations, respectively.

Although the tracers are massless by definition, nuclear abundance calculations require information on the mass represented by the individual tracer particles. For this purpose, we compute tracer masses on a sphere of radius ∼440 km.

FIG. 25. Comparison between the ejecta's electron fraction represented by the tracer particles (solid lines) and extracted on the coordinate sphere (faint dashed lines) for the high-resolution non-spinning (green) and spinning (purple) BNS simulations. The mass-weighted histograms are computed for ejecta passing through a sphere with a radius of ∼440 km until ∼20 ms (left panels), ∼50 ms (middle panels), and ∼100 ms (right panels) after the merger.

We extract the ejected mass flux dM_eje/dt (t) on this detection sphere and impose

dM_eje/dt (t) = Σ_tr dM_tr/dt , (B2)

where we sum over all N(t) tracer particles that pass the sphere between t and t + ∆t. Here, ∆t is the output interval of the tracer data. We approximate the mass flux of the individual tracer particle by

dM_tr/dt ≈ ∆Ω_tr r_tr² ρ_tr v_r,tr , (B3)

where r_tr is the distance of the tracer from the coordinate centre (∼440 km), and ρ_tr and v_r,tr denote the rest-mass density and radial velocity of the fluid element, respectively. ∆Ω_tr is the solid angle represented by the tracer particle. In the absence of more detailed information, we assume for simplicity that each particle passing through the sphere between t and t + ∆t contributes to the total mass flux with the same ∆Ω. Thus, ∆Ω can be scaled to ensure that Eq. (B2) is satisfied. Finally, integration over time yields the mass M_tr ascribed to each tracer particle as it passes through the sphere. In Fig. 24, we show the evolution of the ejecta mass represented by the tracer particles, obtained by summing M_tr of all tracers crossing a sphere with a radius of ∼440 km for both high-resolution BNS simulations. For comparison, the ejecta masses extracted directly from the coordinate sphere are shown as faint dashed lines, demonstrating good agreement.
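The mass assignment of Eqs. (B2) and (B3) amounts to a single normalization per output interval, as in the following sketch; it assumes all tracers cross at the detection radius, and the names are ours:

    import numpy as np

    def tracer_mass_fluxes(dmdt_sphere, rho_tr, vr_tr, r_det):
        # Eq. (B3) up to the solid angle: dM_tr/dt = dOmega * r^2 * rho * v_r.
        flux_per_solid_angle = r_det**2 * rho_tr * vr_tr
        # Eq. (B2): choose one common dOmega so the tracer fluxes sum to
        # the mass flux extracted on the detection sphere in this interval.
        d_omega = dmdt_sphere / np.sum(flux_per_solid_angle)
        return d_omega * flux_per_solid_angle

Integrating these per-tracer fluxes over the output intervals then yields the mass M_tr carried by each tracer.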
In Fig. 25, we further compare the mass-weighted histograms of the electron fraction Y_e in the matter outflow at different times after the merger, computed once from the tracer particles and once from the detection sphere. The histograms agree reasonably well for the low-Y_e material emitted at early times, i.e., ≲20 ms after the merger. Larger deviations occur at later times for the high-Y_e material because there are fewer tracer particles in the secular outflow with a higher electron fraction, and hence we sample this component more coarsely. We assume this to be a consequence of the initial distribution of the tracer particles and the advection scheme: with a density-dependent distribution, the core of the neutron stars would be better sampled, and the tracer masses would possibly be more uniform. Our choice of a uniform distribution, on the other hand, leads to better coverage of the outer layers of the neutron stars with lower density. The vast majority of these tracers is ejected at early times and hence assigned low masses. By contrast, only few tracers cross the extraction sphere at late times, coming from the accretion disk, which represents an unstable advection state as opposed to infall onto the black hole or ejection. As a result, the late ejecta have a much higher mass and dominate the full picture. Of the roughly 60,000 ejected tracers, only 62 (2,228) and 29 (4,084) tracers make up 50 % (90 %) of the total ejecta mass in the spinning and non-spinning case, respectively. As discussed in Sec. III D, however, these have negligible impact on the massive nuclei yield.

We finally note that technical complications occurred in the R2 simulation for the non-spinning BNS configuration, as the tracer particles were randomly redistributed in the neutron star material at t_step = 7.81 ms and t_step = 17.62 ms. This error resulted from limited storage and a failure of the checkpointing routine for the tracer particles, leading to a new tracer initialization after reaching the computing system's walltime. We resolve this by "stacking" matching trajectories across these time steps with the following procedure, where we refer to tracer particles before and after a restart as "old" and "new", respectively.

• First, we identify tracer particles whose temperatures are still too high to be relevant for the nuclear network calculation.

• If T > 7 GK, we stack the trajectory of the "old" tracer particle with that of the closest "new" tracer particle that also satisfies T > 7 GK, by looking for the minimum of |x^i_tr,old + v^i_tr,old ∆t − x^i_tr,new|.

• Otherwise, if T ≤ 7 GK, we stack the trajectory of the "old" tracer particle with that of a "new" tracer particle having similar fluid properties for T, ρ, and Y_e. Due to the assumed axial symmetry of the system, it is sufficient that the "new" tracer particle has a similar position in z and √(x² + y²) as the "old" tracer particle. The trajectories are only stacked if the relative differences in rest-mass density and temperature are sufficiently small, with |ρ_tr,old/ρ_tr,new − 1| < 1 and |T_tr,old/T_tr,new − 1| < 10, and if the absolute difference in Y_e is below 0.005.

TABLE IV. The consumed compute time and energy, and the induced amount of GHG emissions by our NR simulations.

              Compute time [millions core-h]  Energy [MWh]  Grid emissions [tCO2]
Evolution R1  2.00                            8.18          1.57
Evolution R2  27.89                           114.37        21.96
Total         29.88                           122.56        23.53

Appendix C: Consumed resources and carbon footprint

Performing high-resolution NR simulations with detailed physics is computationally expensive and thus requires a large amount of energy. Most of the time, computing facilities are attached to national grids, which acquire energy from a mix of different sources, including fossil fuels. This in turn induces a tangible amount of greenhouse gas (GHG) emissions, which are driving climate change [257-262]. We estimate the amount of GHG emissions produced by the NR simulations in this study.
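As a cross-check of Table IV, the grid emissions follow from the consumed core-hours by simple arithmetic, using the average node power of 525 W reported below; the sketch assumes 128 CPU cores per Hawk node, which is our own assumption rather than a value stated in the text:

    core_hours = 29.88e6         # total compute time [core-h]
    cores_per_node = 128         # assumed cores per Hawk node
    node_power = 525.0           # average node power [W]
    carbon_intensity = 192.0     # grid carbon intensity [gCO2/kWh]

    energy_mwh = core_hours / cores_per_node * node_power / 1.0e6
    emissions_tco2 = energy_mwh * carbon_intensity / 1.0e3
    # -> about 122.6 MWh and 23.5 tCO2, matching the totals in Table IV.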
Starting from November 2024, HLRS began to provide detailed power consumption statistics. Using this data for the jobs after that date, we estimate the average power consumption of a single Hawk node used for our simulations to be 525 W (a simple estimate via the thermal design power, TDP, of the CPUs yields 450 W, which is ∼15 % lower than the full node power consumption). Using this average node power, we estimate the energy consumed and the GHG emissions induced by our simulations, assuming the specific carbon intensity of the Universität Stuttgart at the Campus Vaihingen in 2023 of 192 gCO2/kWh [263]. In Table IV, we list the consumed compute resources, energy, and the induced grid emissions.

[1] R. Fernández and B. D. Metzger, Ann. Rev. Nucl. Part. Sci. 66, 23 (2016).
[2] B. D. Metzger, Living Rev. Rel. 20, 3 (2017).
[3] L. Baiotti and L. Rezzolla, Rept. Prog. Phys. 80, 096901 (2017).
[4] M. D. Duez and Y. Zlochower, Rept. Prog. Phys. 82, 016902 (2019).
[5] S. Bernuzzi, Gen. Rel. Grav. 52, 108 (2020).
[6] M. Ruiz, S. L. Shapiro, and A. Tsokaros, Front. Astron. Space Sci. 8, 39 (2021).
[7] J. M. Lattimer and D. N. Schramm, Astrophys. J. Lett. 192, L145 (1974).
[8] S. Rosswog, M. Liebendörfer, F. K. Thielemann, M. B. Davies, W. Benz, and T. Piran, Astron. Astrophys. 341, 499 (1999), arXiv:astro-ph/9811367.
[9] O. Korobkin, S. Rosswog, A. Arcones, and C. Winteler, Mon. Not. Roy. Astron. Soc. 426, 1940 (2012).
[10] S. Wanajo, Y. Sekiguchi, N. Nishimura, K. Kiuchi, K. Kyutoku, and M. Shibata, Astrophys. J. Lett. 789, L39 (2014).
[11] J. J. Cowan, C. Sneden, J. E. Lawler, A. Aprahamian, M. Wiescher, K. Langanke, G. Martínez-Pinedo, and F.-K. Thielemann, Rev. Mod. Phys. 93, 15002 (2021).
[12] L. Rezzolla, B. Giacomazzo, L. Baiotti, J. Granot, C. Kouveliotou, and M. A. Aloy, Astrophys. J. Lett. 732, L6 (2011).
[13] M. Ruiz, R. N. Lang, V. Paschalidis, and S. L. Shapiro, Astrophys. J. Lett. 824, L6 (2016).
[14] P. Mösta, D. Radice, R. Haas, E. Schnetter, and S. Bernuzzi, Astrophys. J. Lett. 901, L37 (2020).
[15] K. Kiuchi, A. Reboul-Salze, M. Shibata, and Y. Sekiguchi, Nature Astron. 8, 298 (2024).
[16] B. P. Abbott et al. (LIGO Scientific, Virgo), Phys. Rev. Lett. 119, 161101 (2017).
[17] B. P. Abbott et al. (LIGO Scientific, Virgo), Astrophys. J. Lett. 850, L39 (2017).
[18] I. Arcavi et al., Nature 551, 64 (2017).
[19] D. A. Coulter et al., Science 358, 1556 (2017).
[20] V. M. Lipunov et al., Astrophys. J. Lett. 850, L1 (2017).
[21] N. R. Tanvir et al., Astrophys. J. Lett. 848, L27 (2017).
[22] S. Valenti, D. J. Sand, S. Yang, E. Cappellaro, L. Tartaglia, A. Corsi, S. W. Jha, D. E. Reichart, J. Haislip, and V. Kouprianov, Astrophys. J. Lett. 848, L24 (2017).
[23] A. Goldstein et al., Astrophys. J. Lett. 848, L14 (2017).
[24] V. Savchenko et al., Astrophys. J. Lett. 848, L15 (2017).
[25] A. Hajela et al., Astrophys. J. Lett. 886, L17 (2019).
[26] A. Hajela et al., Astrophys. J. Lett. 927, L17 (2022).
[27] A. Balasubramanian, A. Corsi, K. P. Mooley, K. Hotokezaka, D. L. Kaplan, D. A. Frail, G. Hallinan, D. Lazzati, and E. J. Murphy, Astrophys. J. 938, 12 (2022).
[28] B. P. Abbott et al. (LIGO Scientific, Virgo, Fermi-GBM, INTEGRAL), Astrophys. J. Lett. 848, L13 (2017).
[29] R. Abbott et al. (LIGO Scientific, Virgo), Phys. Rev. D 103, 122002 (2021).
[30] B. P. Abbott et al. (LIGO Scientific, Virgo), Phys. Rev. Lett. 123, 011102 (2019).
[31] J. Sakstein and B. Jain, Phys. Rev. Lett. 119, 251303 (2017).
[32] J. M. Ezquiaga and M. Zumalacárregui, Phys. Rev. Lett. 119, 251304 (2017).
[33] P. Creminelli and F. Vernizzi, Phys. Rev. Lett. 119, 251302 (2017).
[34] T. Baker, E. Bellini, P. G. Ferreira, M. Lagos, J. Noller, and I. Sawicki, Phys. Rev. Lett. 119, 251301 (2017).
[35] B. P. Abbott et al. (LIGO Scientific, Virgo, Fermi GBM, INTEGRAL, IceCube, AstroSat Cadmium Zinc Telluride Imager Team, IPN, Insight-Hxmt, ANTARES, Swift, AGILE Team, 1M2H Team, Dark Energy Camera GW-EM, DES, DLT40, GRAWITA, Fermi-LAT, ATCA, ASKAP, Las Cumbres Observatory Group, OzGrav, DWF (Deeper Wider Faster Program), AST3, CAASTRO, VINROUGE, MASTER, J-GEM, GROWTH, JAGWAR, CaltechNRAO, TTU-NRAO, NuSTAR, Pan-STARRS, MAXI Team, TZAC Consortium, KU, Nordic Optical Telescope, ePESSTO, GROND, Texas Tech University, SALT Group, TOROS, BOOTES, MWA, CALET, IKI-GW Follow-up, H.E.S.S., LOFAR, LWA, HAWC, Pierre Auger, ALMA, Euro VLBI Team, Pi of Sky, Chandra Team at McGill University, DFN, ATLAS Telescopes, High Time Resolution Universe Survey, RIMAS, RATIR, SKA South Africa/MeerKAT), Astrophys. J. Lett. 848, L12 (2017).
[36] B. P. Abbott et al. (LIGO Scientific, Virgo), Phys. Rev. Lett. 121, 161101 (2018).
[37] A. Bauswein, O. Just, H.-T. Janka, and N. Stergioulas, Astrophys. J. Lett. 850, L34 (2017).
[38] B. Margalit and B. D. Metzger, Astrophys. J. Lett. 850, L19 (2017).
[39] D. Radice, A. Perego, F. Zappa, and S. Bernuzzi, Astrophys. J. Lett. 852, L29 (2018).
[40] M. W. Coughlin et al., Mon. Not. Roy. Astron. Soc. 480, 3871 (2018).
[41] E. R. Most, L. R. Weih, L. Rezzolla, and J. Schaffner-Bielich, Phys. Rev. Lett. 120, 261103 (2018).
[42] C. D. Capano, I. Tews, S. M. Brown, B. Margalit, S. De, S. Kumar, D. A. Brown, B. Krishnan, and S. Reddy, Nature Astron. 4, 625 (2020).
[43] C. A. Raithel, Eur. Phys. J. A 55, 80 (2019).
[44] T. Dietrich, M. W. Coughlin, P. T. H. Pang, M. Bulla, J. Heinzel, L. Issa, I. Tews, and S. Antier, Science 370, 1450 (2020).
[45] J. W. T. Hessels, S. M. Ransom, I. H. Stairs, P. C. C. Freire, V. M. Kaspi, and F. Camilo, Science 311, 1901 (2006), arXiv:astro-ph/0601337.
[46] M. Burgay et al., Nature 426, 531 (2003), arXiv:astro-ph/0312071.
[47] K. Stovall et al., Astrophys. J. Lett. 854, L22 (2018).
[48] S. Rosswog, P. Diener, F. Torsello, T. M. Tauris, and N. Sarin, Mon. Not. Roy. Astron. Soc. 530, 2336 (2024).
[49] W. E. East, V. Paschalidis, F. Pretorius, and A. Tsokaros, Phys. Rev. D 100, 124042 (2019).
[50] M. Ruiz, A. Tsokaros, V. Paschalidis, and S. L. Shapiro, Phys. Rev. D 99, 084032 (2019).
[51] S. Bernuzzi, T. Dietrich, W. Tichy, and B. Brügmann, Phys. Rev. D 89, 104021 (2014).
[52] T. Dietrich, S. Bernuzzi, M. Ujevic, and W. Tichy, Phys. Rev. D 95, 044045 (2017).
[53] W. Kastaun, R. Ciolfi, A. Endrizzi, and B. Giacomazzo, Phys. Rev. D 96, 043019 (2017).
[54] E. R. Most, L. J. Papenfort, A. Tsokaros, and L. Rezzolla, Astrophys. J. 884, 40 (2019).
[55] L. J. Papenfort, E. R. Most, S. Tootle, and L. Rezzolla, Mon. Not. Roy. Astron. Soc. 513, 3646 (2022).
[56] B. Brügmann, J. A. Gonzalez, M. Hannam, S. Husa, U. Sperhake, and W. Tichy, Phys. Rev. D 77, 024027 (2008), arXiv:gr-qc/0610128.
[57] M. Thierfelder, S. Bernuzzi, and B. Brügmann, Phys. Rev. D 84, 044012 (2011).
[58] H. Gieg, F. Schianchi, T. Dietrich, and M. Ujevic, Universe 8, 370 (2022).
[59] F. Schianchi, H. Gieg, V. Nedora, A. Neuweiler, M. Ujevic, M. Bulla, and T. Dietrich, Phys. Rev. D 109, 044012 (2024).
[60] A. Neuweiler, T. Dietrich, B. Brügmann, E. Giangrandi, K. Kiuchi, F. Schianchi, P. Mösta, S. Shankar, B. Giacomazzo, and M. Shibata, Phys. Rev. D 110, 084046 (2024).
[61] K. Kiuchi, K. Kyutoku, Y. Sekiguchi, M. Shibata, and T. Wada, Phys. Rev. D 90, 041502 (2014).
[62] K. Dionysopoulou, D. Alic, and L. Rezzolla, Phys. Rev. D 92, 084064 (2015).
[63] W. Cook, E. M. Gutiérrez, S. Bernuzzi, D. Radice, B. Daszuta, J. Fields, P. Hammond, H. Bandyopadhyay, and M. Jacobi, (2025).
[64] E. M. Gutiérrez, W. Cook, D. Radice, S. Bernuzzi, J. Fields, P. Hammond, B. Daszuta, H. Bandyopadhyay, and M. Jacobi, (2025).
[65] R. Ciolfi, W. Kastaun, J. V. Kalinani, and B. Giacomazzo, Phys. Rev. D 100, 023005 (2019).
[66] M. Chabanov, S. D. Tootle, E. R. Most, and L. Rezzolla, Astrophys. J. Lett. 945, L14 (2023).
[67] R. Aguilera-Miret, D. Viganò, and C. Palenzuela, Astrophys. J. Lett. 926, L31 (2022).
[68] R. Aguilera-Miret, C. Palenzuela, F. Carrasco, and D. Viganò, Phys. Rev. D 108, 103001 (2023).
[69] F. Foucart, E. O'Connor, L. Roberts, M. D. Duez, R. Haas, L. E. Kidder, C. D. Ott, H. P. Pfeiffer, M. A. Scheel, and B. Szilagyi, Phys. Rev. D 91, 124021 (2015).
[70] Y. Sekiguchi, K. Kiuchi, K. Kyutoku, M. Shibata, and K. Taniguchi, Phys. Rev. D 93, 124046 (2016).
[71] T. Vincent, F. Foucart, M. D. Duez, R. Haas, L. E. Kidder, H. P. Pfeiffer, and M. A. Scheel, Phys. Rev. D 101, 044053 (2020), arXiv:1908.00655 [gr-qc].
[72] D. Radice, A. Perego, K. Hotokezaka, S. A. Fromm, S. Bernuzzi, and L. F. Roberts, Astrophys. J. 869, 130 (2018).
[73] D. Radice, S. Bernuzzi, A. Perego, and R. Haas, Mon. Not. Roy. Astron. Soc. 512, 1499 (2022).
[74] P. L. Espino, D. Radice, F. Zappa, R. Gamba, and S. Bernuzzi, Phys. Rev. D 109, 103027 (2024).
[75] K. Kawaguchi, S. Fujibayashi, and M. Shibata, Phys. Rev. D 112, 043001 (2025).
[76] C. Palenzuela, S. L. Liebling, D. Neilsen, L. Lehner, O. L. Caballero, E. O'Connor, and M. Anderson, Phys. Rev. D 92, 044045 (2015).
[77] C. Palenzuela, S. Liebling, and B. Miñano, Phys. Rev. D 105, 103020 (2022).
[78] C. Musolino, L. Rezzolla, and E. R. Most, Astrophys. J. Lett. 984, L61 (2025).
[79] S. Curtis, P. Bosch, P. Mösta, D. Radice, S. Bernuzzi, A. Perego, R. Haas, and E. Schnetter, Astrophys. J. Lett. 961, L26 (2024).
[80] L. Sun, M. Ruiz, S. L. Shapiro, and A. Tsokaros, Phys. Rev. D 105, 104028 (2022).
[81] K. Hayashi, K. Kiuchi, K. Kyutoku, Y. Sekiguchi, and M. Shibata, Phys. Rev. D 107, 123001 (2023).
[82] L. Combi and D. M. Siegel, Phys. Rev. Lett. 131, 231402 (2023).
[83] J. Bamber, A. Tsokaros, M. Ruiz, S. L. Shapiro, M. Favata, M. Karlson, and F. V. Piñas, (2025).
[84] T. Dietrich, S. Bernuzzi, M. Ujevic, and B. Brügmann, Phys. Rev. D 91, 124041 (2015).
[85] S. Bernuzzi and T. Dietrich, Phys. Rev. D 94, 064062 (2016).
[86] S. Bernuzzi and D. Hilditch, Phys. Rev. D 81, 084003 (2010).
[87] D. Hilditch, S. Bernuzzi, M. Thierfelder, Z. Cao, W. Tichy, and B. Brügmann, Phys. Rev. D 88, 084057 (2013).
[88] C. Bona, J. Massó, J. Stela, and E. Seidel, in The Seventh Marcel Grossmann Meeting: On Recent Developments in Theoretical and Experimental General Relativity, Gravitation, and Relativistic Field Theories, edited by R. T. Jantzen, G. M. Keiser, and R. Ruffini (World Scientific, Singapore, 1996).
[89] M. Alcubierre, B. Brügmann, P. Diener, M. Koppitz, D. Pollney, E. Seidel, and R. Takahashi, Phys. Rev. D 67, 084023 (2003), arXiv:gr-qc/0206072.
[90] J. M. Marti, J. M. Ibanez, and J. A. Miralles, Phys. Rev. D 43, 3794 (1991).
[91] F. Banyuls, J. A. Font, J. M. A. Ibanez, J. M. A. Marti, and J. A. Miralles, Astrophys. J. 476, 221 (1997).
[92] L. Anton, O. Zanotti, J. A. Miralles, J. M. Marti, J. M. Ibanez, J. A. Font, and J. A. Pons, Astrophys. J. 637, 296 (2006), arXiv:astro-ph/0506063.
[93] J. A. Font, Living Rev. Rel. 11, 7 (2008).
[94] S. L. Liebling, L. Lehner, D. Neilsen, and C. Palenzuela, Phys. Rev. D 81, 124023 (2010).
[95] P. Mösta, B. C. Mundim, J. A. Faber, R. Haas, S. C. Noble, T. Bode, F. Löffler, C. D. Ott, C. Reisswig, and E. Schnetter, Class. Quant. Grav. 31, 015005 (2014).
[96] A. Neuweiler, T. Dietrich, and B. Brügmann, Phys. Rev. D 112, 023033 (2025).
[97] K. S. Thorne, Mon. Not. Roy. Astron. Soc. 194, 439 (1981).
[98] M. Shibata, K. Kiuchi, Y.-i. Sekiguchi, and Y. Suwa, Prog. Theor. Phys. 125, 1255 (2011).
[99] F. Foucart, R. Haas, M. D. Duez, E. O'Connor, C. D. Ott, L. Roberts, L. E. Kidder, J. Lippuner, H. P. Pfeiffer, and M. A. Scheel, Phys. Rev. D 93, 044019 (2016).
[100] F. Foucart, E. O'Connor, L. Roberts, L. E. Kidder, H. P. Pfeiffer, and M. A. Scheel, Phys. Rev. D 94, 123016 (2016).
[101] L. R. Weih, H. Olivares, and L. Rezzolla, Mon. Not. Roy. Astron. Soc. 495, 2285 (2020).
[102] C. Musolino and L. Rezzolla, Mon. Not. Roy. Astron. Soc. 528, 5952 (2024).
[103] M. H. Ruffert, H. T. Janka, and G. Schaefer, Astron. Astrophys. 311, 532 (1996), arXiv:astro-ph/9509006.
[104] A. Perego, S. Bernuzzi, and D. Radice, Eur. Phys. J. A 55, 124 (2019).
[105] M. J. Berger and J. Oliger, Journal of Computational Physics 53, 484 (1984).
[106] M. J. Berger and P. Colella, Journal of Computational Physics 82, 64 (1989).
[107] R. Borges, M. Carmona, B. Costa, and W. S. Don, Journal of Computational Physics 227, 3191 (2008).
[108] A. Harten, P. D. Lax, and B. v. Leer, SIAM Review 25, 35 (1983).
[109] X.-D. Liu and S. Osher, Journal of Computational Physics 142, 304 (1998).
[110] L. D. Zanna and N. Bucciantini, Astron. Astrophys. 390, 1177 (2002), arXiv:astro-ph/0205290.
[111] M. Reichert et al., Astrophys. J. Suppl. 268, 66 (2023).
[112] M. Bulla, Mon. Not. Roy. Astron. Soc. 489, 5037 (2019).
[113] M. Bulla, Mon. Not. Roy. Astron. Soc. 520, 2558 (2023).
[114] V. Nedora, T. Dietrich, M. Shibata, M. Pohl, and L. C. Menegazzi, Mon. Not. Roy. Astron. Soc. 520, 2727 (2023).
[115] V. Nedora, T. Dietrich, and M. Shibata, Mon. Not. Roy. Astron. Soc. 524, 5514 (2023).
[116] V. Nedora, L. C. Menegazzi, E. Peretti, T. Dietrich, and M. Shibata, Mon. Not. Roy. Astron. Soc. 538, 2089 (2025), arXiv:2409.16852 [astro-ph.HE].
[117] P. Möller, A. J. Sierk, T. Ichikawa, and H. Sagawa, Atom. Data Nucl. Data Tabl. 109-110, 1 (2016).
[118] R. H. Cyburt et al., Astrophys. J. Suppl. 189, 240 (2010).
[119] I. V. Panov, C. Freiburghaus, and F. K. Thielemann, Nucl. Phys. A 688, 587 (2001).
[120] T. Kodama and K. Takahashi, Nucl. Phys. A 239, 489 (1975).
[121] M. R. Mumpower, P. Jaffke, M. Verriere, and J. Randrup, Phys. Rev. C 101, 054607 (2020).
[122] A. Neuweiler, T. Dietrich, M. Bulla, S. V. Chaurasia, S. Rosswog, and M. Ujevic, Phys. Rev. D 107, 023016 (2023).
[123] K. Kawaguchi, S. Fujibayashi, M. Shibata, M. Tanaka, and S. Wanajo, Astrophys. J. 913, 100 (2021).
[124] S. Rosswog and O. Korobkin, Annalen Phys. 536, 2200306 (2024).
[125] J. Barnes, D. Kasen, M.-R. Wu, and G. Martínez-Pinedo, Astrophys. J. 829, 110 (2016).
[126] R. T. Wollaeger, O. Korobkin, C. J. Fontes, S. K. Rosswog, W. P. Even, C. L. Fryer, J. Sollerman, A. L. Hungerford, D. R. van Rossum, and A. B. Wollaber, Mon. Not. Roy. Astron. Soc. 478, 3298 (2018).
[127] M. Tanaka, D. Kato, G. Gaigalas, and K. Kawaguchi, Mon. Not. Roy. Astron. Soc. 496, 1369 (2020).
496, 1369 (2020), . [128] B. Margalit and E. Quataert, Astrophys. J. Lett. 923, L14 (2021), . [129] B. Margalit and E. Quataert, Astrophys. J. 977, 134 (2024), . [130] L. Sironi, A. Spitkovsky, and J. Arons, Astrophys. J. 771, 54 (2013), . [131] P. Crumley, D. Caprioli, S. Markoff, and A. Spitkovsky, Mon. Not. Roy. Astron. Soc. 485, 5105 (2019), . [132] X. Xie, J. Zrake, and A. MacFadyen, Astrophys. J. 863, 58 (2018), . [133] G. Ryan, H. van Eerten, L. Piro, and E. Troja, Astrophys. J. 896, 166 (2020), . [134] O. Gottlieb, E. Nakar, and O. Bromberg, Mon. Not. Roy. Astron. Soc. 500, 3511 (2020), . [135] F. A. Aharonian, S. R. Kelner, and A. Y. Prosekin, Phys. Rev. D 82, 043002 (2010), . [136] M. G. Alford, L. Brodie, A. Haber, and I. Tews, Phys. Scripta 98, 125302 (2023), . [137] W. Tichy, Class. Quant. Grav. 26, 175018 (2009), . [138] W. Tichy, Phys. Rev. D 86, 064024 (2012), . [139] T. Dietrich, N. Moldenhauer, N. K. JohnsonMcDaniel, S. Bernuzzi, C. M. Markakis, B. Br ̈ugmann, and W. Tichy, Phys. Rev. D 92, 124007 (2015), . [140] W. Tichy, A. Rashti, T. Dietrich, R. Dudi, and B. Br ̈ugmann, Phys. Rev. D 100, 124046 (2019), . [141] J. W. York, Jr., Phys. Rev. Lett. 82, 1350 (1999), arXiv:gr-qc/9810051. [142] H. P. Pfeiffer and J. W. York, Jr., Phys. Rev. D 67, 044022 (2003), arXiv:gr-qc/0207095. [143] D. R. Lorimer, Living Rev. Rel. 11, 8 (2008), . [144] T. M. Tauris et al., Astrophys. J. 846, 170 (2017), . [145] K. Kiuchi, S. Fujibayashi, K. Hayashi, K. Kyutoku, Y. Sekiguchi, and M. Shibata, Phys. Rev. Lett. 131, 011401 (2023), . [146] E. R. Most and E. Quataert, Astrophys. J. Lett. 947, L15 (2023), . [147] L. Combi and D. M. Siegel, Astrophys. J. 944, 28 (2023), . [148] M. Ruiz, A. Tsokaros, and S. L. Shapiro, Phys. Rev. D 101, 064042 (2020), . [149] M. G. Alford, L. Brodie, A. Haber, and I. Tews, Phys. Rev. C 106, 055804 (2022), . [150] D. Chatterjee et al., Data table for eos abht(qmc-rmf3) nparam=3 (2025), https://doi.org/10.5281/zenodo.14809193. [151] I. Tews, J. Carlson, S. Gandolfi, and S. Reddy, Astrophys. J. 860, 149 (2018), . [152] M. Hempel and J. Schaffner-Bielich, Nucl. Phys. A 837, 210 (2010), . [153] F. J. Fattoyev, C. J. Horowitz, J. Piekarewicz, and G. Shen, Phys. Rev. C 82, 055803 (2010), . [154] C. Drischler, R. J. Furnstahl, J. A. Melendez, and D. R. Phillips, Phys. Rev. Lett. 125, 202702 (2020), . [155] S. Shlomo, V. M. Kolomietz, and G. Col`o, Eur. Phys. J. A 30, 23 (2006). [156] C. J. Horowitz, J. Piekarewicz, and B. Reed, Phys. Rev. C 102, 044321 (2020), . [157] J. Keller, C. Wellenhofer, K. Hebeler, and A. Schwenk, Phys. Rev. C 103, 055806 (2021), . [158] M. C. Miller et al., Astrophys. J. Lett. 887, L24 (2019), . [159] T. E. Riley et al., Astrophys. J. Lett. 887, L21 (2019), . [160] M. C. Miller et al., Astrophys. J. Lett. 918, L28 (2021), 30 . [161] T. E. Riley et al., Astrophys. J. Lett. 918, L27 (2021), . [162] D. Choudhury et al., Astrophys. J. Lett. 971, L20 (2024), . [163] T. Salmi et al., Astrophys. J. 941, 150 (2022), . [164] T. Salmi et al., Astrophys. J. 974, 294 (2024), . [165] A. J. Dittmann et al., Astrophys. J. 974, 295 (2024), . [166] E. Fonseca et al., Astrophys. J. Lett. 915, L12 (2021), . [167] S. Vinciguerra et al., Astrophys. J. 961, 62 (2024), . [168] M. Campanelli, C. O. Lousto, and Y. Zlochower, Phys. Rev. D 74, 041501 (2006), arXiv:gr-qc/0604012. [169] R. Dudi, T. Dietrich, A. Rashti, B. Bruegmann, J. Steinhoff, and W. Tichy, Phys. Rev. D 105, 064050 (2022), . [170] K. Hotokezaka, K. Kiuchi, K. Kyutoku, H. Okawa, Y.-i. Sekiguchi, M. 
Shibata, and K. Taniguchi, Phys. Rev. D 87, 024001 (2013), . [171] M. Shibata, S. Fujibayashi, K. Hotokezaka, K. Kiuchi, K. Kyutoku, Y. Sekiguchi, and M. Tanaka, Phys. Rev. D 96, 123012 (2017), . [172] K. Kiuchi, K. Kyutoku, M. Shibata, and K. Taniguchi, Astrophys. J. Lett. 876, L31 (2019), . [173] W. E. East, V. Paschalidis, F. Pretorius, and S. L. Shapiro, Phys. Rev. D 93, 024011 (2016), . [174] S. Rosswog, N. Sarin, E. Nakhar, and P. Diener, Mon. Not. Roy. Astron. Soc. 538, 907 (2025), . [175] D. Price and S. Rosswog, Science 312, 719 (2006), arXiv:astro-ph/0603845. [176] K. Kiuchi, P. Cerd ́a-Dur ́an, K. Kyutoku, Y. Sekiguchi, and M. Shibata, Phys. Rev. D 92, 124034 (2015), . [177] B. Giacomazzo, J. Zrake, P. Duffell, A. I. MacFadyen, and R. Perna, Astrophys. J. 809, 39 (2015), . [178] R. Aguilera-Miret, J.-E. Christian, S. Rosswog, and C. Palenzuela, Mon. Not. Roy. Astron. Soc. 10.1093/mnras/staf1291 (2025), 2504.10604 [astro-ph.HE]. [179] M. D. Duez, Y. T. Liu, S. L. Shapiro, and M. Shibata, Phys. Rev. D 73, 104015 (2006), arXiv:astroph/0605331. [180] L. Sun, M. Ruiz, and S. L. Shapiro, Phys. Rev. D 99, 064057 (2019), . [181] K. Kiuchi, K. Kyutoku, Y. Sekiguchi, and M. Shibata, Phys. Rev. D 97, 124039 (2018), . [182] M. D. Duez, Y. T. Liu, S. L. Shapiro, M. Shibata, and B. C. Stephens, Phys. Rev. Lett. 96, 031101 (2006), arXiv:astro-ph/0510653. [183] D. M. Siegel, R. Ciolfi, A. I. Harte, and L. Rezzolla, Phys. Rev. D 87, 121302 (2013), . [184] E. R. Most, Phys. Rev. D 108, 123012 (2023), . [185] A. Reboul-Salze, P. Barr`ere, K. Kiuchi, J. Guilet, R. Raynaud, S. Fujibayashi, and M. Shibata, Astron. Astrophys. 699, A4 (2025), . [186] R. Aguilera-Miret, D. Vigan`o, F. Carrasco, B. Mi ̃nano, and C. Palenzuela, Phys. Rev. D 102, 103006 (2020), . [187] A. N. Kolmogorov, Proceedings of the Royal Society of London Series A 434, 9 (1991). [188] A. P. Kazantsev, Soviet Journal of Experimental and Theoretical Physics 26, 1031 (1968). [189] J. Zrake and A. I. MacFadyen, Astrophys. J. Lett. 769, L29 (2013), . [190] S. A. Balbus and J. F. Hawley, Rev. Mod. Phys. 70, 1 (1998). [191] T. Celora, C. Palenzuela, D. Vigan`o, and R. AguileraMiret, (2025). [192] R. Fern ́andez and B. D. Metzger, Mon. Not. Roy. Astron. Soc. 435, 502 (2013), . [193] O. Just, A. Bauswein, R. A. Pulpillo, S. Goriely, and H. T. Janka, Mon. Not. Roy. Astron. Soc. 448, 541 (2015), . [194] S. Fujibayashi, K. Kiuchi, N. Nishimura, Y. Sekiguchi, and M. Shibata, Astrophys. J. 860, 64 (2018), . [195] V. Nedora, S. Bernuzzi, D. Radice, A. Perego, A. Endrizzi, and N. Ortiz, Astrophys. J. Lett. 886, L30 (2019), . [196] M. Shibata and K. Hotokezaka, Ann. Rev. Nucl. Part. Sci. 69, 41 (2019), . [197] A. Endrizzi, A. Perego, F. M. Fabbri, L. Branca, D. Radice, S. Bernuzzi, B. Giacomazzo, F. Pederiva, and A. Lovato, Eur. Phys. J. A 56, 15 (2020), . [198] K. Lodders, M. Bergemann, and H. Palme, Space Sci. Rev. 221, 23 (2025), . [199] N. Prantzos, C. Abia, S. Cristallo, M. Limongi, and A. Chieffi, Mon. Not. Roy. Astron. Soc. 491, 1832 (2020), . [200] S. Bernuzzi, F. Magistrelli, M. Jacobi, D. Logoteta, A. Perego, and D. Radice, Mon. Not. Roy. Astron. Soc. 256, 271 (2025), . [201] G. Ricigliano, M. Jacobi, and A. Arcones, Mon. Not. Roy. Astron. Soc. 533, 2096 (2024), . [202] J. Lippuner and L. F. Roberts, Astrophys. J. Suppl. 233, 18 (2017), . [203] T. Marketin, L. Huther, and G. Mart ́ınez-Pinedo, Phys. Rev. C 93, 025805 (2016), . [204] B. Pfeiffer, K. L. Kratz, and F. K. 
Thielemann, Zeitschrift f ̈ur Physik A Hadrons and Nuclei 357, 235 (1997). [205] A. Arcones and G. Martinez-Pinedo, Phys. Rev. C 83, 045809 (2011), . [206] M. Eichler et al., Astrophys. J. 808, 30 (2015), . [207] I. U. Roederer et al., Astrophys. J. 936, 84 (2022), . [208] J. Kuske, A. Arcones, and M. Reichert, Astrophys. J. 990, 37 (2025), . [209] J. de Jes ́us Mendoza-Temis, M.-R. Wu, G. Mart ́ınezPinedo, K. Langanke, A. Bauswein, and H.-T. Janka, Phys. Rev. C 92, 055805 (2015), . [210] S. E. Woosley and R. D. Hoffman, Astrophys. J. 395, 202 (1992). [211] B. D. Metzger, A. Bauswein, S. Goriely, and D. Kasen, Mon. Not. Roy. Astron. Soc. 446, 1115 (2015), 31 . [212] A. Abac, T. Dietrich, A. Buonanno, J. Steinhoff, and M. Ujevic, Phys. Rev. D 109, 024062 (2024), . [213] R. De Pietri, A. Feo, J. A. Font, F. L ̈offler, F. Maione, M. Pasquali, and N. Stergioulas, Phys. Rev. Lett. 120, 221101 (2018), . [214] R. De Pietri, A. Feo, J. A. Font, F. L ̈offler, M. Pasquali, and N. Stergioulas, Phys. Rev. D 101, 064052 (2020), . [215] C. J. Moore, R. H. Cole, and C. P. L. Berry, Class. Quant. Grav. 32, 015014 (2015), . [216] P. D. Welch, IEEE Transactions on Audio and Electroacoustics 15, 70 (1967). [217] N. Stergioulas, A. Bauswein, K. Zagkouris, and H.-T. Janka, Mon. Not. Roy. Astron. Soc. 418, 427 (2011), . [218] S. Vretinaris, N. Stergioulas, and A. Bauswein, Phys. Rev. D 101, 084039 (2020), . [219] D. Radice, S. Bernuzzi, and C. D. Ott, Phys. Rev. D 94, 064011 (2016), . [220] W. E. East, V. Paschalidis, and F. Pretorius, Class. Quant. Grav. 33, 244004 (2016), . [221] V. Paschalidis, W. E. East, F. Pretorius, and S. L. Shapiro, Phys. Rev. D 92, 121502 (2015), . [222] L. Lehner, S. L. Liebling, C. Palenzuela, and P. M. Motl, Phys. Rev. D 94, 043003 (2016), . [223] D. Radice and S. Bernuzzi, Astrophys. J. 959, 46 (2023), . [224] K. Takami, L. Rezzolla, and L. Baiotti, Phys. Rev. Lett. 113, 091104 (2014), . [225] L. Barsotti, L. Mcculler, M. Evans, and P. Fritschel, The A+ design curve, Technical Report LIGO-T1800042 (LIGO, 2018). [226] S. Hild et al., Class. Quant. Grav. 28, 094013 (2011), . [227] L.-X. Li and B. Paczynski, Astrophys. J. Lett. 507, L59 (1998), arXiv:astro-ph/9807272. [228] S. Darbha and D. Kasen, Astrophys. J. 897, 150 (2020), 2002.00299 [astro-ph.HE]. [229] K. Kawaguchi, M. Shibata, and M. Tanaka, Astrophys. J. Lett. 865, L21 (2018), . [230] O. Korobkin et al., Astrophys. J. 910, 116 (2021), . [231] C. Collins, A. Bauswein, S. Sim, V. Vijayan, G. MartinezPinedo, O. Just, L. J. Shingles, and M. Kromer, PoS FAIRness2022, 010 (2023). [232] L. S. Groenewegen, S. Curtis, P. M ̈osta, D. Kasen, and D. Brethauer, (2025). [233] R. Dekany et al., Publ. Astron. Soc. Pac. 132, 038001 (2020), . [234] ˇZ. Ivezi ́c et al. (LSST), The LSST System Science Requirements Document (2018). [235] D. Eichler, M. Livio, T. Piran, and D. N. Schramm, Nature 340, 126 (1989). [236] R. Narayan, B. Paczynski, and T. Piran, Astrophys. J. Lett. 395, L83 (1992), arXiv:astro-ph/9204001. [237] E. Berger, Ann. Rev. Astron. Astrophys. 52, 43 (2014), . [238] H. Koehn, T. Wouters, P. T. H. Pang, M. Bulla, H. Rose, H. Wichern, and T. Dietrich, (2025). [239] A. Marcowith, G. Ferrand, M. Grech, Z. Meliani, I. Plotnikov, and R. Walder, Liv. Rev. Comput. Astrophys. 6, 1 (2020), . [240] Y. Yuan, A. Y. Chen, and M. Luepker, Astrophys. J. 985, 159 (2025), . [241] A. Vanthieghem, V. Tsiolis, F. Fiuza, K. Sekiguchi, A. Spitkovsky, and Y. Todo, PoS HEPROVIII, 011 (2024), . [242] R. Braun, A. Bonaldi, T. Bourke, E. Keane, and J. 
Wagg, (2019). [243] A. Bonaldi et al., Mon. Not. Roy. Astron. Soc. 500, 3821 (2020), . [244] F. B. Bianco et al., Astrophys. J. Supp. 258, 1 (2022), . [245] HARMONI (HARMONI for the ELT), HARMONI Performance (2025). [246] R. Davies et al. (The MICADO Consortium), The Messenger 182, 17 (2021). [247] E. S. Laird et al., Astrophys. J. Suppl. 180, 102 (2009), . [248] K. Nandra et al., (2013). [249] C. S. Reynolds et al., Proc. SPIE Int. Soc. Opt. Eng. 12678, 126781E (2023), . [250] S. Marchesi et al., Astron. Astrophys. 642, A184 (2020), . [251] H. Gieg, F. Schianchi, M. Ujevic, and T. Dietrich, Phys. Rev. D 112, 023036 (2025), . [252] H. H.-Y. Ng, C. Musolino, S. D. Tootle, and L. Rezzolla, Astrophys. J. Lett. 985, L36 (2025), . [253] P. C.-K. Cheong, A. Tsokaros, M. Ruiz, F. Venturi, J. C. L. Chan, A. K. L. Yip, and K. Uryu, Phys. Rev. D 111, 063030 (2025), . [254] T. Dietrich, D. Radice, S. Bernuzzi, F. Zappa, A. Perego, B. Br ̈ugmann, S. V. Chaurasia, R. Dudi, W. Tichy, and M. Ujevic, Class. Quant. Grav. 35, 24LT01 (2018), . [255] A. Gonzalez et al., Class. Quant. Grav. 40, 085011 (2023), . [256] I. Markin, A. Neuweiler, H. L. Gieg, and T. Dietrich, General-relativistic radiation magnetohydrodynamics simulations of binary neutron star mergers: The influence of spin on the multi- messenger picture - videos (2025). [257] N. Oreskes, Science 306, 1686 (2004). [258] P. T. Doran and M. K. Zimmerman, Eos, Transactions American Geophysical Union 90, 22 (2009). [259] J. Cook, D. Nuccitelli, S. A. Green, M. Richardson, B. Winkler, R. Painting, R. Way, P. Jacobs, and A. Skuce, Environmental Research Letters 8, 024024 (2013). [260] J. Cook et al., Environmental Research Letters 11, 048002 (2016). [261] M. Lynas, B. Z. Houlton, and S. Perry, Environmental Research Letters 16, 114005 (2021). [262] K. F. Myers, P. T. Doran, J. Cook, J. E. Kotcher, and T. A. Myers, Environmental Research Letters 16, 104030 (2021). [263] N. Conrad and B. Lorenz, Umwelterkl ̈arung 2023, Technischer Bericht / Umweltbericht (H ̈ochstleistungsrechenzentrum (HLRS), Universit ̈at Stuttgart, 2023).
2510.14848
A Geometric Approach to Optimal Experimental Design

Gavin Kerrigan⋆ (University of Oxford), Christian A. Naesseth (University of Amsterdam), Tom Rainforth (University of Oxford)

⋆Correspondence to: gavin.kerrigan@stats.ox.ac.uk

Abstract

We introduce a novel geometric framework for optimal experimental design (OED). Traditional OED approaches, such as those based on mutual information, rely explicitly on probability densities, leading to restrictive invariance properties. To address these limitations, we propose the mutual transport dependence (MTD), a measure of statistical dependence grounded in optimal transport theory which provides a geometric objective for optimizing designs. Unlike conventional approaches, the MTD can be tailored to specific downstream estimation problems by choosing appropriate geometries on the underlying spaces. We demonstrate that our framework produces high-quality designs while offering a flexible alternative to standard information-theoretic techniques.

1 Introduction

Effective experimental design is central to a wide range of scientific and industrial applications (Kuhfeld et al., 1994; Park et al., 2013; Melendez et al., 2021). Many such problems require a principled, model-based approach, wherein we utilize a model over possible experimental outcomes to directly optimize our design decisions. This can be particularly effective in adaptive design settings, where frameworks like sequential Bayesian experimental design (BED) allow us to iterate between using our model to make design decisions and updating our model with the collected data (DeGroot, 1962; MacKay, 1992; Sebastiani and Wynn, 2000; Rainforth et al., 2024).

Many of these approaches are grounded in information theory, where the value of an experiment is quantified using the values of an associated probability density. For instance, a popular and principled approach in the BED literature is optimizing the mutual information (MI) (Lindley, 1956, 1972; Bernardo, 1979)

I(d) = KL[p(θ, y | d) || p(θ)p(y | d)]   (1)
     = E_{p(θ,y|d)}[log(p(θ, y | d) / (p(θ)p(y | d)))]   (2)

where p(θ) is the prior over the quantity of interest θ, and p(y | θ, d) models the experiment outcome y under design d. Notably, the MI is an expectation of log-density ratios, and is thus a unitless quantity. Consequently, the MI has strong invariance properties: any injective transformation of θ or y leaves I(d) unchanged.

We highlight that this invariance, while often attractive, can also be detrimental for optimal experimental design (OED). Experimental goals should be defined in terms of downstream errors (Lindley, 1972), and many common error metrics, such as the mean squared error between true and estimated parameters, are inherently geometric and dependent on the space in which they are measured. Thus, the MI, being purely informational, cannot be naturally aligned with task-specific error metrics and has no mechanism by which it may be targeted to a particular geometric distance on predictions: it implicitly assumes errors are measured by log loss. In turn, this inflexibility can be problematic for various applications. For example, in financial settings, we often inherently care about the variance in future returns and not the entropy, noting that the two can take arbitrary values with respect to one another.

The MI also poses several practical challenges (Rainforth et al., 2024).
Foremost, it is a doubly intractable quantity, in general requiring a nested estimation (Rainforth et al., 2018) of either the posterior p(θ | y, d) or the marginal p(y | d). Second, in implicit settings, where the likelihood can only be sampled but not evaluated, the MI faces additional challenges due to its explicit reliance on densities. While there exist approaches for estimating the MI in implicit settings (Kleinegesse and Gutmann, 2019; Ivanova et al., 2021), these amount to learning the unknown density ratio, a task that becomes increasingly difficult in high dimensions. This problem is also not specific to MI, with the core BED formalism inherently having nested dependence on the posterior (Lindley, 1972; Chaloner and Verdinelli, 1995).

Figure 1: Comparison of MI and MTD on the 1D source location-finding problem, with the true source at θ = −1 (dashed red line). MI is maximized at the origin, reflecting the prior mode, whereas MTD is maximized near d ≈ ±1.3. The posterior for d = 0 is bimodal, while for d = ±1.3 it becomes unimodal or sharply concentrated. In practice, d is optimized directly, and MTD breaks the posterior symmetry even when initialized unfavorably.

We address these shortcomings by proposing the mutual transport dependence (MTD), a novel class of geometric criteria for experimental design. The MTD measures the dependency between θ and y in terms of an optimal transport discrepancy (Villani et al., 2008; Feydy et al., 2019) between the joint distribution p(θ, y | d) and its product of marginals p(θ)p(y | d). The MTD depends explicitly on a choice of sample-level cost function, enabling practitioners to encode domain knowledge or downstream objectives directly into the design criterion. This cost function can be defined either directly or through a transformation of the underlying space, creating a family of flexible design criteria.

Moreover, the MTD offers practical benefits. It does not involve any nested expectations, can be estimated without likelihood evaluations, and can be directly optimized using gradient-based methods whenever differentiable sampling of p(y | θ, d) is possible. This makes it particularly well-suited to simulation-based or implicit scenarios where the likelihood p(y | θ, d) is unknown or intractable, an increasingly common setting in OED (Kleinegesse and Gutmann, 2019, 2021; Ivanova et al., 2021; Encinar et al., 2025).

Optimizing MTD results in qualitatively different design behaviour than MI (see Figure 1). We illustrate the effectiveness of MTD-optimal designs on standard experimental design benchmarks, comparing directly with MI-based designs. Our results show that the MTD can outperform traditional information-theoretic approaches, while also allowing experimenters to tailor the geometry of the underlying spaces to the problem at hand. In sum, our framework introduces a principled and practical new class of criteria for optimal experimental design, overcoming the rigidity of information-theoretic methods and enabling experiments that better reflect real-world objectives.

2 Background and Notation

We use θ ∈ Θ to represent the unknown target quantity of interest that we wish to learn about through our experiments. This could correspond to a real-world quantity, model parameters, or something abstract like downstream predictions. We further use d ∈ D to represent an experimental design and y ∈ Y to represent an observed outcome of an experiment. Akin to standard BED approaches, we specify a prior p(θ) representing our beliefs about θ before performing any experiments and a likelihood p(y | θ, d) capturing the data generating process. Our goal is now to select d in a way that will allow us to best estimate θ once the experiment's outcome is observed.

2.1 OED with Mutual Information

From an information-theoretic point of view, it is natural to seek designs which result in data y that reduces our uncertainty about the unknown quantity θ. That is, we consider the reduction in entropy (Lindley, 1956)

I(d) = E_{p(y|d)}[H[θ] − H[θ | y, d]]   (3)

which can straightforwardly be shown to yield the mutual information (1). The design choice is d* = argmax_{d∈D} I(d), maximizing the mutual information. In the context of OED, I(d) is often called the expected information gain (EIG).

In all but the simplest cases, computing I(d) is non-trivial, as it requires estimating both the outer expectation and the integrand (i.e., either the posterior p(θ | y, d) or the marginal likelihood p(y | d)). Often, one resorts to nested estimators like nested Monte Carlo (NMC) (Rainforth et al., 2018), or variational approaches (Foster et al., 2019, 2020). Importantly, many techniques assume that the likelihood p(y | θ, d) is known explicitly, where the corresponding distribution can be evaluated pointwise.

OED is often particularly useful in adaptive scenarios, where experiments are designed sequentially based on the data h_t = {(d_k, y_k)}_{k=1}^t gathered in previous trials. In this setting, we replace the prior p(θ) with our updated beliefs using the posterior p(θ | h_t), and consider the incremental MI

I^{(t+1)}(d) = KL[p(θ, y | d, h_t) || p(θ | h_t) p(y | d, h_t)].   (4)

We refer to Chaloner and Verdinelli (1995); Rainforth et al. (2024) for more comprehensive surveys of BED.
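To make the nested estimation above concrete, the following is a minimal sketch of the NMC estimator of the EIG; it is an illustration rather than the authors' implementation, and the helper names (`sample_prior`, `sample_lik`, `log_lik`) are hypothetical stand-ins for a problem-specific model with a tractable likelihood.

```python
import numpy as np
from scipy.special import logsumexp

def eig_nmc(d, sample_prior, sample_lik, log_lik, n_outer=500, n_inner=500):
    """Nested Monte Carlo estimate of I(d), cf. Rainforth et al. (2018).

    sample_prior(n)      -> array of n draws from p(theta)
    sample_lik(theta, d) -> one y per theta, drawn from p(y | theta, d)
    log_lik(y, theta, d) -> log p(y | theta, d), vectorized over rows of theta
    """
    theta = sample_prior(n_outer)          # outer draws theta_i ~ p(theta)
    y = sample_lik(theta, d)               # paired outcomes y_i ~ p(y | theta_i, d)
    log_joint = log_lik(y, theta, d)       # log p(y_i | theta_i, d)
    theta_inner = sample_prior(n_inner)    # fresh draws for the inner marginal
    # Inner estimate: log p(y_i | d) ~= log (1/M) sum_m p(y_i | theta_m, d)
    log_marginal = np.array(
        [logsumexp(log_lik(y_i, theta_inner, d)) - np.log(n_inner) for y_i in y]
    )
    return float(np.mean(log_joint - log_marginal))
```

Note the nested structure: each of the n_outer outer samples requires a fresh n_inner-sample inner estimate, which is precisely the double intractability the MTD avoids.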
2.2 Optimal Transport

Optimal transport (OT) is a mathematical toolkit which allows us to compare two arbitrary probability distributions p(x) and q(x′) in terms of the amount of work required to transform one distribution into the other (Villani et al., 2008; Peyré et al., 2019). In the Kantorovich formulation of OT (Kantorovich, 1942), we specify a non-negative cost function c(x, x′) ≥ 0 encoding the cost of transporting a unit of mass from location x to x′. A coupling is a joint distribution γ(x, x′) whose marginals are p, q respectively. We write Π(p, q) for the set of all valid couplings. Given a coupling γ ∈ Π(p, q), its associated cost is

K_c(γ) = ∫ c(x, x′) dγ(x, x′) = E_{γ(x,x′)}[c(x, x′)]   (5)

which can be interpreted as the average sample-level transport cost under this coupling. An optimal coupling minimizes this cost, and we write

OT_c[p, q] = min_{γ∈Π(p,q)} K_c(γ)   (6)

for the minimum value attained. In short, c(x, x′) defines a sample-level cost function and OT_c[p, q] is the associated optimal transport discrepancy.
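As a minimal illustration of Equations (5)-(6), the snippet below computes the transport discrepancy between two empirical measures using the POT library's exact LP solver; the choice of distributions here is purely illustrative and is not part of the original text.

```python
import numpy as np
import ot  # the POT (Python Optimal Transport) library

rng = np.random.default_rng(0)
n = 200
x = rng.normal(0.0, 1.0, size=(n, 1))        # samples from p = N(0, 1)
x_prime = rng.normal(2.0, 1.0, size=(n, 1))  # samples from q = N(2, 1)

C = ot.dist(x, x_prime)      # cost matrix C[i, j] = |x_i - x'_j|^2 (squared Euclidean)
a = b = np.full(n, 1.0 / n)  # uniform weights on the two empirical measures

cost = ot.emd2(a, b, C)      # exact Kantorovich LP: OT_c[p, q]
print(cost)  # close to 4.0, the squared distance between the two means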
3 Experimental Design through Mutual Transport Dependence

The MI is an inherently density-based objective, where the value of an experiment is considered a purely informational quantity that is based directly on the density of the posterior, rather than the actual values that θ and y can take. As such, the MI is unable to incorporate properties of the underlying sample spaces Θ, Y into its design objective, such as an error metric on θ. This induces strong invariance properties in the MI (Polyanskiy and Wu, 2025, Theorem 3.7), such that it remains fixed under injective transformations of θ and/or y. Thus, if we are interested in measuring errors in terms of some transformation ϕ = f(θ), the optimal design remains fixed with changes in f, even though our intuitive notion of error itself will change. For example, while θ may be the natural variables for parametrizing a model, we may be interested in a one-to-one transformation of θ which is more interpretable. Under MI, these scenarios are indistinguishable: any unit of entropy reduction is equally valuable, regardless of its practical implications.

These shortcomings motivate the need for a practical objective which is fully defined in terms of geometric notions on the underlying sample spaces. In other words, we seek a criterion which is determined by the values taken on by the random variables themselves.

3.1 Mutual Transport Dependence

One interpretation of MI is that it measures the KL divergence between the joint, p(θ, y | d), under which θ and y are dependent, and the product of marginals, p(θ)p(y | d), under which they are independent. The KL, though, depends only on density ratios and is thus unsuitable for a geometric measure of information.

To provide a geometric, sample-based criterion, we propose to instead measure the dependency between θ and y via an optimal transport dependency between the same joint and product of marginals. Optimal transport, relying on an expected sample-level cost function, is a natural choice for such a geometric measure as it allows customisation of our notion of distance through the cost function. This yields our proposed mutual transport dependence (MTD), defined as follows.¹

Definition 1. Given a cost function c : Θ² × Y² → R≥0, the mutual transport dependence (MTD) is

T_c(d) := OT_c[p(θ, y | d), p(θ)p(y | d)] = min_{γ∈Π(p(θ,y|d), p(θ)p(y|d))} E_γ[c(θ, y, θ′, y′)].   (7)

Note that the MTD depends only on expectations with respect to our model, with no direct dependency on the density functions themselves. This is critically important from a computational and scaling perspective, as there is no need to perform (nested) estimation of the posterior or marginal likelihood densities, which is typically challenging, especially in high dimensions.

High values of T_c(d) indicate that the parameter and data are strongly coupled under this design, and conversely, T_c(d) = 0 if and only if θ and y are independent under d. As T_c(d) can also be seen as the average (θ, y) displacement under the optimal transport plan, measured by the cost c(θ, y, θ′, y′), it is a geometric notion, and by choosing the cost function c in an appropriate way the experimenter can incorporate downstream preferences directly into the experimental design process.

¹We work under standard measure-theoretic assumptions regarding the spaces Θ, Y as well as the cost c, which we discuss in Appendix A.

3.2 Alternative Transport-Based Measures

The MI, being the KL between the joint and product of marginals, can be equivalently understood as an expected KL discrepancy on either Θ or Y alone:

I(d) = E_{p(θ)} KL[p(y | θ, d) || p(y | d)]   (8)
     = E_{p(y|d)} KL[p(θ | y, d) || p(θ)].   (9)

This equivalence does not translate to the MTD, as the transport distances in Θ and Y are inherently different.
However, we can derive transport-based analogues of these interpretations of the MI as well by considering costs defined on Θ or Y alone, as follows.

Definition 2. Given a cost function c : Θ² → R≥0, the target transport dependence (TTD) is

T_c^{(θ)}(d) := E_{p(y|d)}[OT_c[p(θ | y, d), p(θ)]].   (10)

Definition 3. Given a cost function c : Y² → R≥0, the expected data transport dependence (DTD) is

T_c^{(y)}(d) := E_{p(θ)}[OT_c[p(y | θ, d), p(y | d)]].   (11)

In some ways, these transport dependencies are perhaps more intuitive than the MTD, T_c(d), which relies on a cost defined on the joint space Θ × Y. In particular, given that our ultimate goal is the estimation of θ, the TTD arguably provides the most natural of the three objectives, as it directly measures changes in beliefs in the space of Θ.

However, unlike the MTD, these objectives have reintroduced a nesting into the problem, as the optimization over couplings is now inside an expectation. This makes them, at least in principle, computationally more problematic, as they appear to require solving an OT problem for every sample of the outer expectation. In practice, though, both T_c^{(θ)}(d) and T_c^{(y)}(d) can be approximated in a way that requires us to solve only a single OT problem. Leveraging recent work on conditional optimal transport (Carlier et al., 2016; Kerrigan et al., 2024; Chemseddine et al., 2024; Baptista et al., 2024), we show in the following result that when the cost function is given by a norm, these quantities are recovered as a limiting special case of T_c(d) under a particular choice of c, the proof for which is given in Appendix A.

Theorem 1. For a fixed 1 ≤ p < ∞, consider the cost

c_η(θ, y, θ′, y′) = η|θ − θ′|_Θ^p + |y − y′|_Y^p   (12)

for η > 0. Then, η⁻¹ T_{c_η}(d) → T_{c_Θ}^{(θ)}(d) as η → 0⁺, where c_Θ(θ, θ′) = |θ − θ′|_Θ^p. Similarly, for

c_ψ(θ, y, θ′, y′) = |θ − θ′|_Θ^p + ψ|y − y′|_Y^p   (13)

we have ψ⁻¹ T_{c_ψ}(d) → T_{c_Y}^{(y)}(d) as ψ → 0⁺, where c_Y(y, y′) = |y − y′|_Y^p.

As the MTD can be defined using any choice of cost function, we can thus think of it as a more general objective which subsumes the TTD and DTD as limiting cases in the choice of cost function. Thus, in practice, we can implement these alternatives simply by using a weighted cost function (c_η or c_ψ) in the MTD with a small η > 0 or ψ > 0 for the TTD and DTD respectively. We therefore focus our subsequent discussion on the MTD.

We emphasize that while we have introduced various notions of transport dependence in terms of designing a single experiment, all constructions readily generalize to the sequential setting by updating the prior as discussed in Section 2. While our experiments will focus on this sequential case, our notions of transport dependence can also be extended to the policy setting (Foster et al., 2021; Ivanova et al., 2021) by considering optimal transport over the entire experimental rollout.

3.3 Estimation and Optimization

To use the MTD in practice, we must be able to efficiently estimate and optimize T_c(d). Here, we briefly discuss the computational tools used later in Section 6, but note that other algorithmic approaches to leverage transport dependence for experimental design may be viable.

We emphasize that our estimators for the MTD are purely sample-based. Assuming that we can draw samples θ ∼ p(θ) and y | θ ∼ p(y | θ, d), we can estimate T_c(d) without ever evaluating densities.
By contrast, estimators for the MI either require direct access to the likelihood density, or rely on learning approximations for density ratios (Kleinegesse and Gutmann, 2020, 2021; Ivanova et al., 2021), introducing substantial extra modelling and optimization complexity.

Estimation. For discrete measures, the optimal transport problem becomes a linear program (LP) for which a wide range of numerical solvers have been proposed (Bonneel et al., 2011; Peyré et al., 2019). When the distributions p(θ, y | d) and p(θ)p(y | d) are known and discrete, the OT problem may be solved directly using these distributions.

In practice, however, our distributions are often continuous or only accessible through sampling. In such cases, we approximate them using empirical measures based on samples. That is, we sample (θ_j, y_j) i.i.d. from p(θ, y | d) and (θ′_k, y′_k) i.i.d. from p(θ)p(y | d), followed by the approximations

p(θ, y | d) ≈ (1/n) Σ_{j=1}^n δ_{(θ_j, y_j)},   p(θ)p(y | d) ≈ (1/n) Σ_{k=1}^n δ_{(θ′_k, y′_k)}.   (14)

This procedure yields the plug-in estimator (Boissard and Gouic, 2014; Fournier and Guillin, 2015)

T̂_c(d) = OT_c[(1/n) Σ_{j=1}^n δ_{(θ_j, y_j)}, (1/n) Σ_{k=1}^n δ_{(θ′_k, y′_k)}]   (15)

which is asymptotically consistent (Dudley, 1969) as n → ∞ and concentrates around its mean at an exponential rate (Bolley et al., 2007; Boissard, 2011; Weed and Bach, 2019), albeit with a positive bias that decreases with n (Papp and Sherlock, 2025).

Optimization. Optimizing T_c(d) poses a further challenge, assuming we cannot simply enumerate over possible designs. We therefore now show how T_c(d) can be optimized using stochastic gradient ascent provided the design space is continuous. The only additional assumption needed for this is the existence of a differentiable reparameterization of y with respect to d. Namely, using the noise outsourcing lemma (Kallenberg, 1997), for any fixed noise distribution with appropriate reference measure, q(η), there exists (subject to extremely weak assumptions) a function h such that y = h(η; θ, d) for η ∼ q(η). If we further assume that d ↦ h(η; θ, d) is differentiable for our chosen q(η), then the LP approach above enables the calculation of ∇_d T̂_c(d) via automatic differentiation. Hence, we may perform gradient-based design optimization in this setting. We refer to Peyré et al. (2019, Chapter 9) for a further discussion of the differentiability of optimal transport discrepancies.

Note that this differentiable reparameterisation assumption is the same as in implicit MI methods and is often satisfied even when evaluating p(y | θ, d) is itself intractable: many, if not most, intractable likelihood models are based on stochastic simulators, with the intractability coming from deterministic mappings of stochastic variables or stochastic differential equations (Cranmer et al., 2020).
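Putting Equations (14)-(15) together, a minimal sample-based sketch of the plug-in MTD estimator might look as follows; `simulate` is a hypothetical stand-in for the (possibly implicit) likelihood, and the weighted quadratic cost c_η of Theorem 1 is used so that the TTD limit can be approached by shrinking `eta`.

```python
import numpy as np
import ot

def mtd_plugin(d, sample_prior, simulate, n=256, eta=1.0, rng=None):
    """Plug-in estimate of T_c(d) (Eq. 15) under the cost
    c = eta * |theta - theta'|^2 + |y - y'|^2  (Eq. 12).

    sample_prior(n)    -> (n, dim_theta) draws from p(theta)
    simulate(theta, d) -> (n, dim_y) draws from p(y | theta, d), row-wise
    """
    rng = rng or np.random.default_rng()
    # Empirical measure for the joint p(theta, y | d)
    theta = sample_prior(n)
    y = simulate(theta, d)
    # Empirical measure for the product p(theta)p(y | d): fresh draws, with a
    # permutation decoupling outcomes from the parameters that generated them
    theta_prime = sample_prior(n)
    y_prime = simulate(sample_prior(n), d)[rng.permutation(n)]
    # Ground cost on Theta x Y between the two sample clouds
    C = eta * ot.dist(theta, theta_prime) + ot.dist(y, y_prime)
    w = np.full(n, 1.0 / n)
    return float(ot.emd2(w, w, C))  # exact LP solve of Eq. (15)
```

Dividing the result by a small `eta` then approximates the TTD limit of Theorem 1, and swapping the NumPy arrays for a differentiable tensor backend would, in principle, expose ∇_d T̂_c(d) for the gradient ascent described above.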
4 Comparing Mutual Information and Mutual Transport Dependence

In this section we theoretically analyse how our proposed transport-based criteria relate to the classical expected information gain (MI). Although the MTD and MI originate from different principles, there are some interesting links between the two, as we now show. In particular, under quadratic costs, the transport dependencies can be upper-bounded by the MI.

Theorem 2. Suppose the prior p(θ) is strictly log-concave, i.e., there exists some λ_θ > 0 with −∇² log p(θ) ⪰ λ_θ I. For c(θ, θ′) = |θ − θ′|², we have

λ_θ T_c^{(θ)}(d) ≤ 2 I(d).   (16)

Similarly, if the marginal p(y | d) is strictly log-concave with parameter λ_{y|d}, and c(y, y′) = |y − y′|², then

λ_{y|d} T_c^{(y)}(d) ≤ 2 I(d).   (17)

When both the prior and likelihood satisfy these assumptions, under the cost c(θ, θ′, y, y′) = η|θ − θ′|² + |y − y′|², for λ = max{λ_θ/η, λ_{y|d}} we have

λ T_c(d) ≤ 2 I(d).   (18)

See Section B for a proof and extended discussion. We note that Theorem 2 holds for quadratic costs on any space, and in particular remains valid under any transformations of Θ and Y when the quadratic cost is computed in the new coordinates. Thus, if the MTD is large under any such transformation, the MI must necessarily also be large. This suggests a robustness to misspecification in the selected cost function in the MTD, as selecting for designs under a given particular cost must ensure a minimum level in the MI as well.

As a further point of comparison, we derive a closed-form expression for the MTD for a simple linear-Gaussian model under quadratic costs. We allow for the possibility of the observation noise σ²_{d,θ} varying with design to demonstrate how the value of the MI diverges as σ²_{d,θ} → 0, i.e., as the likelihood approaches a deterministic outcome. On the other hand, T_c(d) remains bounded for all designs d and all noise levels σ²_{d,θ}. This boundedness makes the MTD a quantitative and stable measure of dependence even in scenarios approaching determinism.

Theorem 3. Suppose θ ∈ Θ = Rⁿ has a standard normal prior p(θ) = N(0, I_n), designs are vectors d ∈ D = Rⁿ, and y ∈ Y = R has likelihood p(y | θ, d) = N(⟨d, θ⟩, σ²_{d,θ}). Under the quadratic cost, we have

T_c(d) = 2( 1 + σ²_{d,θ} + |d|² − √( 1 + (|d|² + σ²_{d,θ})² + 2σ_{d,θ} √(|d|² + σ²_{d,θ}) ) ).   (19)

Moreover, T_c(d) ≤ 2. On the other hand, the MI is

I(d) = (1/2) log(1 + |d|²/σ²_{d,θ})   (20)

which is unbounded as σ²_{d,θ} → 0.
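As a sanity check, one can compare this closed form against the sample-based estimator of Section 3.3 and watch the two criteria diverge as σ_{d,θ} → 0. The snippet below is our own illustration under the reconstruction of Equation (19) given above; both the Gaussian W₂² argument behind `mtd_closed_form` and the agreement with the plug-in estimate (up to its positive bias) are cross-checks, not part of the original text.

```python
import numpy as np
import ot

rng = np.random.default_rng(0)
d = np.array([1.5, -0.5])  # a fixed design vector
n = 2000                   # plug-in sample size

def mtd_closed_form(d, sigma):
    s = d @ d + sigma**2   # Var(y | d) under the linear-Gaussian model
    return 2.0 * (1.0 + sigma**2 + d @ d
                  - np.sqrt(1.0 + s**2 + 2.0 * sigma * np.sqrt(s)))

def mi_closed_form(d, sigma):
    return 0.5 * np.log1p(d @ d / sigma**2)

for sigma in [1.0, 0.1, 0.01]:
    theta = rng.normal(size=(n, 2))
    y = (theta @ d + sigma * rng.normal(size=n))[:, None]
    theta_p = rng.normal(size=(n, 2))
    y_marg = rng.normal(size=(n, 2)) @ d + sigma * rng.normal(size=n)
    y_p = y_marg[rng.permutation(n)][:, None]
    C = ot.dist(theta, theta_p) + ot.dist(y, y_p)
    w = np.full(n, 1.0 / n)
    print(sigma, mtd_closed_form(d, sigma), ot.emd2(w, w, C), mi_closed_form(d, sigma))
# T_c(d) stays below 2 for every sigma, while I(d) grows without bound as sigma -> 0.
```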
Table 1: Metrics for the CES model after T = 10 design iterations, averaged over 50 random seeds (± one standard error). Designs produced by the MTD yield lower RMSEs than PCE on average for all parameters.

              ρ             α             u              σ               β             τ
  Random      0.251±0.025   0.116±0.016   36.365±7.341   317.219±81.866  0.727±0.088   0.740±0.106
  PCE         0.047±0.012   0.036±0.013   8.902±5.749    24.942±15.697   0.201±0.060   0.100±0.048
  MTD         0.018±0.005   0.009±0.001   3.671±2.810    1.767±1.009     0.058±0.008   0.049±0.012
  MTD (T_c†)  0.022±0.008   0.012±0.004   10.941±10.107  0.534±0.195     0.069±0.013   0.053±0.013

5 Related Work

Classical optimal experimental design criteria trace back to frequentist approaches based on the Fisher information matrix (Fisher, 1935; Wald, 1943; Kiefer, 1959, 1974; Pukelsheim, 2006). While powerful in some settings, these methods often rely on asymptotic approximations and are limited when models are nonlinear (Ryan et al., 2016; Rainforth et al., 2024). As they depend only on local (second-order) information about θ, they can lose fidelity compared to criteria based on the full joint distribution p(θ, y | d).

Bayesian experimental design (BED) addresses many of these limitations by evaluating designs using objectives that can be viewed as measuring the expected reduction in uncertainty about θ (DeGroot, 1962; Dawid, 1998; Bickford Smith et al., 2025). Here this uncertainty is typically measured using (differential) entropy to produce the MI or expected information gain (Lindley, 1956), especially in the contemporary literature (Huan and Marzouk, 2014; Foster, 2021; Foster et al., 2021; Ao and Li, 2024; Iollo et al., 2024b). The trace or determinant of the posterior covariance matrix has also occasionally been used instead (Vanlier et al., 2012; Ryan et al., 2016; Huan et al., 2024), but this requires expensive nested inference procedures to be performed that are typically even more costly than MI optimization.

Concurrent work by Helin et al. (2025) also studies the target transport dependence T_c^{(θ)}(d) under the specific choice c(θ, θ′) = |θ − θ′|^p, p ∈ [1, ∞), as an objective for experimental design. In light of Theorem 1, this can be seen as a limiting case of our more general MTD criterion under a Euclidean cost assumption. Their work is primarily theoretical, with no quantitative comparisons against the MI, and they do not propose a practical method for optimizing the TTD, in particular for overcoming its double intractability. Our work, by contrast, develops a sample-based, differentiable framework applicable for general costs and empirically demonstrates its efficacy for sequential OED.

Optimal-transport-based notions of statistical dependency have also been considered in areas such as representation learning (Ozair et al., 2019), independence testing (Warren, 2021; Wiesel, 2022; Nies et al., 2025), and fairness (Leteno et al., 2023). These works, however, are not concerned with experimental design and also focus exclusively on Euclidean costs. Our work adds to this growing literature of geometric dependency measures by introducing the mutual transport dependence for OED under general cost functions.

6 Experiments

We now evaluate the proposed methodology on both standard benchmark experimental design tasks and variations on these that have particular error desiderata for our final estimates. In each setting, we use the MTD as the design criterion, sequentially selecting experiments by optimizing T_c(d) as explained in Section 3.3. After each design is chosen, we perform posterior inference over θ and proceed to the next experimental iteration. All results are reported over either 25 or 50 random seeds, where each random seed constitutes a different ground-truth value of θ. We compare against the MI throughout, using PCE (Foster et al., 2019) as a well-known estimator for this quantity. We note that unlike our MTD approach, PCE is an explicit estimator that requires direct access to the likelihood density, thereby providing a stronger baseline than more directly comparable, but also more complex, implicit MI approaches. See Section D for details.²

²Experiment code: github.com/GavinKerrigan/mtd

CES. The first problem we consider is Constant Elasticity of Substitution (CES) (Arrow et al., 1961; Foster et al., 2020; Blau et al., 2022; Iollo et al., 2024b; Hedman et al., 2025), arising from behavioral economics. In this problem, a participant compares two baskets d₁, d₂ ∈ [0, 100]³ consisting of various amounts of three different goods. Given two baskets, the participant provides a scalar response y ∈ [0, 1] indicating their subjective preference between the baskets. The design variable d = (d₁, d₂) is thus six-dimensional, and the goal is to recover the latent parameters θ = (ρ, α₁, α₂, α₃, u) ∈ R⁵ governing the participant's preferences. This is a particularly challenging design problem, as large regions of the design space result in uninformative outcomes y ∈ {0, 1}. We sequentially design T = 10 experiment iterations.
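Before turning to the second benchmark, the following sketches the sequential loop described at the start of this section, reusing `mtd_plugin` from the Section 3.3 sketch. The experiment and inference helpers are hypothetical placeholders (e.g., an SMC or MCMC posterior update), and random-restart selection stands in for the gradient-based design optimization used in practice.

```python
import numpy as np

def sequential_mtd_design(prior_sampler, simulate, run_experiment, update_posterior,
                          design_dim, n_iters=10, n_restarts=5, rng=None):
    """Sequential OED: pick d by maximizing T_c(d), observe y, update beliefs.

    run_experiment(d)               -> the real observed outcome for design d
    update_posterior(sampler, d, y) -> sampler for the new posterior p(theta | h_t)
    """
    rng = rng or np.random.default_rng()
    sampler, history = prior_sampler, []
    for _ in range(n_iters):
        # Candidate designs; gradient ascent on each would refine them further
        candidates = rng.uniform(-4.0, 4.0, size=(n_restarts, design_dim))
        scores = [mtd_plugin(c, sampler, simulate, rng=rng) for c in candidates]
        d_star = candidates[int(np.argmax(scores))]
        y = run_experiment(d_star)
        sampler = update_posterior(sampler, d_star, y)  # replaces the prior, Sec. 2
        history.append((d_star, y))
    return history
```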
Source Location Finding. Our second problem is source location finding (LF) (Sheng and Hu, 2005; Foster et al., 2021; Ivanova et al., 2021; Blau et al., 2022; Iollo et al., 2024a). In this problem, our goal is to estimate the spatial location of two sources θ₁, θ₂ ∈ R². Each source emits a signal which decays according to an inverse square law. At each experiment iteration, a sensor is placed at a location d ∈ R² which records a noisy measurement y ∈ R of the total signal intensity at the sensor location. Here, we design T = 25 experiments.

6.1 MTD under Euclidean Costs

While one of the main appeals of the MTD is that it allows for flexible cost functions, in this section we first consider the quadratic cost c(θ, y, θ′, y′) = |θ − θ′|² + |y − y′|² as a reasonable default choice. Our first set of experiments demonstrates that even under this default setting, MTD-optimal designs can exceed the performance of MI-optimal designs in terms of recovering an unknown parameter.

In Table 1, we evaluate the MTD on the CES problem in terms of the final RMSE between posterior samples after T = 10 experiment iterations and the true value of θ. Designs produced by optimizing the MTD achieve lower RMSEs than those produced by optimizing the MI.

Similarly, in Figure 2, we plot the RMSE between posterior samples and the true θ value on the LF problem over the course of T = 25 design iterations. We observe that the MTD yields lower RMSEs throughout most of the iterations, but designs produced by optimizing the MI yield similar RMSEs at the final iteration.

For the LF problem, both MI and MTD are optimized using five random restarts, i.e., we generate five candidate designs and retain the best under the given objective. This approach serves not only to mitigate sensitivity to initialization but also to systematically improve design quality. In particular, for later iterations of the LF problem, where the posterior over θ becomes highly concentrated, the restart strategy provides a simple yet effective mechanism to ensure robustness against poor initializations.

In terms of runtime, for either problem optimizing a single design under the MTD requires approximately 30 seconds of wall-clock time, whereas optimizing the same design with PCE takes roughly 120 seconds, with both objectives run to convergence. While these runtimes are sensitive to implementation choices and could likely be reduced through more careful tuning or normalization of compute budgets, the key observation is that the MTD is comparably fast to previous approaches and potentially faster.

Figure 2: RMSE between posterior θ samples and ground truth on the location finding problem, averaged over 25 seeds (± one standard error).

6.2 MTD under Transformations

While we may explicitly specify a cost for the MTD, costs can also be defined implicitly through transformations of the underlying sample spaces. Concretely, if f : Θ → Θ†, g : Y → Y† are two transformations, we may define c†(θ, θ′, y, y′) = |f(θ) − f(θ′)|² + |g(y) − g(y′)|², i.e., a quadratic cost in transformed coordinates. This is useful when we wish to measure errors in a particular space. Note that the MI between f(θ) and g(y) is equal to that between θ and y if f and g are injective, prohibiting the same trick from being meaningfully employed.

We illustrate this on CES using the transformations

σ = 1/(1 − ρ),   β_i = log(α_i / g(α)),   τ = log(u)   (21)

where g(α) is the geometric mean of α. These transformations are interpretable: σ is the elasticity (Arrow et al., 1961), β the centered log-ratio of α, capturing the relative importance of goods, and τ = log u a natural reparametrization under its lognormal prior.
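As one possible implementation, the transformed cost c† for Equation (21) can be obtained simply by mapping samples through the transformations before computing the quadratic cost. This is a sketch under stated assumptions: the column ordering of θ as (ρ, α₁, α₂, α₃, u) and a 2-D outcome array are illustrative conventions, not details taken from the paper.

```python
import numpy as np
import ot

def ces_transform(theta):
    """Map rows (rho, alpha_1, alpha_2, alpha_3, u) to (sigma, beta_1..3, tau), Eq. (21)."""
    rho, alpha, u = theta[:, :1], theta[:, 1:4], theta[:, 4:5]
    sigma = 1.0 / (1.0 - rho)
    log_alpha = np.log(alpha)
    beta = log_alpha - log_alpha.mean(axis=1, keepdims=True)  # log(alpha_i / g(alpha))
    tau = np.log(u)
    return np.concatenate([sigma, beta, tau], axis=1)

def transformed_cost_matrix(theta, y, theta_p, y_p, f=ces_transform):
    """Pairwise c-dagger costs: quadratic cost computed in transformed coordinates."""
    return ot.dist(f(theta), f(theta_p)) + ot.dist(y, y_p)
```

Feeding this matrix to the plug-in estimator in place of its default cost yields T_{c†}(d) with no other changes to the pipeline.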
To evaluate this approach, we generate designs using T_{c†}(d) in the transformed variables, implicitly altering the cost. Table 1 reports posterior RMSEs for PCE, MTD on the original scale T_c(d), and MTD with the transformed cost T_{c†}(d). On the original parameters, T_{c†}(d) performs comparably to T_c(d) for ρ and α but somewhat worse for u, suggesting the untransformed version remains preferable when evaluation is performed directly on the original parameters.

On the transformed scale, T_{c†}(d) and T_c(d) perform similarly for β and τ. However, there are clearer differences in σ. In particular, we see that PCE exhibits high RMSE in σ. This is because PCE occasionally yields poor designs which are unable to identify that ρ ≠ 1, leading to high errors in σ = (1 − ρ)⁻¹. T_c(d) generally yields higher quality designs which reliably identify ρ and thus obtain low errors in σ. The transformed T_{c†}(d), though, implicitly upweights designs where σ is large, leading to low RMSE values. Notably, there is no natural analogue of changing PCE to target RMSE in σ directly. Overall, this provides evidence that the MTD can be tailored to particular downstream metrics.

Figure 3: Estimated values of the EIG (via PCE) and the MTD for the 2D location-finding model with two sources. Top row: the EIG is maximized at the origin, whereas the MTD favors off-center designs, illustrating its symmetry-breaking behavior. Bottom row: with additional weighting of the cost function, the MTD can be tuned to favor specific regions of the design space.

6.3 Weighted Cost Functions

We next evaluate the MTD on a variation of the LF problem which highlights its ability to incorporate downstream objectives through an appropriate choice of cost function. Here, the goal is not only to localize the sources, but also to rapidly determine whether a source lies in a critical region R ⊂ Θ. To capture this preference, we define a weighted cost

c_w(θ, y, θ′, y′) = w(θ)|θ − θ′|² + |y − y′|²

where (θ, y) ∼ p(θ, y | d) and (θ′, y′) ∼ p(θ′)p(y′ | d), and the weight is given by w(θ) = b + Σ_{k=1}^2 g(θ_k − µ), with b > 0 a bias and g a bump function supported on R, a ball of radius 1.5 centered at µ = (1.5, −1.5). See Section D for details. Intuitively, the cost is up-weighted whenever the "true" θ has a source in R. In such cases, the MTD T_{c_w}(d) increases, thus yielding designs that prioritize detecting whether a source is present in R. We stress that this represents only one possible weighting scheme, and other choices could be used to encode different downstream preferences.
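A sketch of one such weight is given below; the smooth bump and the particular bias value are illustrative choices, since the text pins down only the support R, its center µ, and that b > 0.

```python
import numpy as np

MU = np.array([1.5, -1.5])  # center of the critical region R
RADIUS = 1.5                # R is a ball of this radius
B = 0.1                     # bias b > 0 (hypothetical value)

def bump(x):
    """Smooth bump g supported on the ball |x| < RADIUS, vanishing at its edge."""
    r2 = np.sum(x * x, axis=-1) / RADIUS**2
    out = np.zeros_like(r2)
    inside = r2 < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - r2[inside]))
    return out

def weight(theta):
    """w(theta) = b + sum_k g(theta_k - mu), theta flattened as two sources in R^2."""
    sources = theta.reshape(-1, 2, 2)
    return B + bump(sources[:, 0] - MU) + bump(sources[:, 1] - MU)
```

In the plug-in estimator, the weighted cost matrix then becomes `weight(theta)[:, None] * ot.dist(theta, theta_p) + ot.dist(y, y_p)`, up-weighting rows whose parameter sample places a source in R.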
We then design a sequence of T = 25 experiments using the weighted cost cw. In Figure 4, we plot the mean zero-one loss of θ ∈R, i.e., Ep(θ|ht)[|1[θtrue ∈R]−1[θ ∈ R]|] which directly measures if we have detected θ ∈R. While PCE and unweighted MTD eventually resolve this uncertainty, the weighted MTD achieves a much faster reduction. We emphasize that the MI cannot be easily adapted to the task of first identifying if θ ∈R before exploring the rest of the space, demonstrating our framework’s flexibility to encode task-specific preferences via the cost. Further, Figure 2 shows the RMSE under Tcw matches that of Tc(d), confirming that we have not sacrificed performance in terms of identifying θ for this auxiliary objective. 7 Conclusion We introduce the mutual transport dependence (MTD), a novel class of geometric criteria for optimal experimen- tal design. By quantifying the value of an experiment through an optimal transport divergence with an ex- plicit sample-level cost, the MTD allows us to encode domain knowledge and task-specific objectives directly into the design criterion. We show that optimizing the MTD produces highly effective designs on stan- dard benchmarks, and that tailoring the cost function enables alignment with particular experimental goals. Overall, the MTD offers a flexible, geometry-aware ob- jective for OED, providing a practical tool for designing experiments that reflect both statistical dependence and the experimenter’s real-world priorities. Gavin Kerrigan, Christian A. Naesseth, Tom Rainforth Acknowledgements GK and TR are supported by the UK EPSRC grant EP/Y037200/1. References Ambrosio, L. and Gigli, N. (2012). A user’s guide to optimal transport. In Modelling and Optimisation of Flows on Networks: Cetraro, Italy 2009, Edi- tors: Benedetto Piccoli, Michel Rascle, pages 1–155. Springer. Ao, Z. and Li, J. (2024). On estimating the gradient of the expected information gain in Bayesian experimen- tal design. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 20311–20319. Arrow, K. J., Chenery, H. B., Minhas, B. S., and Solow, R. M. (1961). Capital-labor substitution and economic efficiency. The review of Economics and Statistics, pages 225–250. Baptista, R., Pooladian, A.-A., Brennan, M., Marzouk, Y., and Niles-Weed, J. (2024). Conditional simu- lation via entropic optimal transport: Toward non- parametric estimation of conditional Brenier maps. arXiv preprint arXiv:2411.07154. Bernardo, J. M. (1979). Expected information as ex- pected utility. The Annals of Statistics, pages 686– 690. Bertsekas, D. P. (1971). Control of uncertain systems with a set-membership description of the uncertainty. PhD thesis, Massachusetts Institute of Technology. Bertsekas, D. P. (1997). Nonlinear programming. Jour- nal of the Operational Research Society, 48(3):334– 334. Bickford Smith, F., Kossen, J., Trollope, E., van der Wilk, M., Foster, A., and Rainforth, T. (2025). Re- thinking aleatoric and epistemic uncertainty. In Forty-second International Conference on Machine Learning. Blau, T., Bonilla, E. V., Chades, I., and Dezfouli, A. (2022). Optimizing sequential experimental design with deep reinforcement learning. In International Conference on Machine Learning, pages 2107–2128. PMLR. Blower, G. (2003). The Gaussian isoperimetric inequal- ity and transportation. Positivity, 7(3):203–224. Boissard, E. (2011). Simple bounds for the conver- gence of empirical and occupation measures in 1- Wasserstein distance. 
Electronic Journal of Proba- bility, 16:2296 – 2333. Boissard, E. and Gouic, T. L. (2014). On the mean speed of convergence of empirical and occupation measures in Wasserstein distance. Annales de l’Institut Henri Poincar´e, Probabilit´es et Statistiques, 50(2):539 – 563. Bolley, F., Guillin, A., and Villani, C. (2007). Quantita- tive concentration inequalities for empirical measures on non-compact spaces. Probability Theory and Re- lated Fields, 137(3):541–593. Bonneel, N., Van De Panne, M., Paris, S., and Hei- drich, W. (2011). Displacement interpolation using Lagrangian mass transport. In Proceedings of the 2011 SIGGRAPH Asia conference, pages 1–12. Boyd, S. P. and Vandenberghe, L. (2004). Convex optimization. Cambridge university press. Carlier, G., Chernozhukov, V., and Galichon, A. (2016). Vector quantile regression: An optimal transport approach. The Annals of Statistics, 44(3):1165 – 1192. Chaloner, K. and Verdinelli, I. (1995). Bayesian exper- imental design: A review. Statistical science, pages 273–304. Chemseddine, J., Hagemann, P., Steidl, G., and Wald, C. (2024). Conditional Wasserstein distances with applications in Bayesian OT flow matching. arXiv preprint arXiv:2403.18705. Cranmer, K., Brehmer, J., and Louppe, G. (2020). The frontier of simulation-based inference. Proceedings of the National Academy of Sciences, 117(48):30055– 30062. Csisz´ar, I. (1967). On information-type measure of difference of probability distributions and indirect observations. Studia Sci. Math. Hungaria, 2:299–318. Danskin, J. (1967). The Theory of Max-min and Its Applications to Weapons Allocation Problems. Econo- metrics and operations research. Springer. Dawid, A. P. (1998). Coherent measures of discrepancy, uncertainty and dependence, with applications to bayesian predictive experimental design. Department of Statistical Science, University College London. http://www. ucl. ac. uk/Stats/research/abs94. html, Tech. Rep, 139. DeGroot, M. H. (1962). Uncertainty, information, and sequential experiments. The Annals of Mathematical Statistics, 33(2):404–419. Doucet, A., De Freitas, N., and Gordon, N. (2001). An introduction to sequential monte carlo methods. In Sequential Monte Carlo methods in practice, pages 3–14. Springer. Dudley, R. M. (1969). The speed of mean Glivenko- Cantelli convergence. The Annals of Mathematical Statistics, 40(1):40–50. Encinar, P. C., Schr¨oder, T., Yatsyshin, P., and Duncan, A. B. (2025). Deep optimal sensor placement for A Geometric Approach to Optimal Experimental Design black box stochastic simulations. In Frontiers in Probabilistic Inference: Learning meets Sampling. Feydy, J., S´ejourn´e, T., Vialard, F.-X., Amari, S.- i., Trouv´e, A., and Peyr´e, G. (2019). Interpolating between optimal transport and MMD using Sinkhorn divergences. In The 22nd international conference on artificial intelligence and statistics, pages 2681–2690. PMLR. Fisher, R. A. (1935). The Design of Experiments. The Design of Experiments. Oliver and Boyd. Foster, A., Ivanova, D. R., Malik, I., and Rainforth, T. (2021). Deep adaptive design: Amortizing sequen- tial Bayesian experimental design. In International Conference on Machine Learning, pages 3384–3395. PMLR. Foster, A., Jankowiak, M., Bingham, E., Horsfall, P., Teh, Y. W., Rainforth, T., and Goodman, N. (2019). Variational Bayesian optimal experimental design. Advances in Neural Information Processing Systems, 32. Foster, A., Jankowiak, M., O’Meara, M., Teh, Y. W., and Rainforth, T. (2020). 
A unified stochastic gradi- ent approach to designing Bayesian-optimal exper- iments. In International Conference on Artificial Intelligence and Statistics, pages 2959–2969. PMLR. Foster, A. E. (2021). Variational, Monte Carlo and policy-based approaches to Bayesian experimental design. PhD thesis, University of Oxford. Fournier, N. and Guillin, A. (2015). On the rate of convergence in Wasserstein distance of the empir- ical measure. Probability theory and related fields, 162(3):707–738. Gozlan, N. and L´eonard, C. (2010). Transport inequal- ities. a survey. arXiv preprint arXiv:1003.3852. Hedman, M., Ivanova, D. R., Guan, C., and Rain- forth, T. (2025). Step-dad: Semi-amortized policy- based bayesian experimental design. arXiv preprint arXiv:2507.14057. Helin, T., Marzouk, Y., and Rojo-Garcia, J. R. (2025). Bayesian optimal experimental design with Wasserstein information criteria. arXiv preprint arXiv:2504.10092. Hjelm, R. D., Fedorov, A., Lavoie-Marchildon, S., Gre- wal, K., Bachman, P., Trischler, A., and Bengio, Y. (2018). Learning deep representations by mutual information estimation and maximization. arXiv preprint arXiv:1808.06670. Hosseini, B., Hsu, A. W., and Taghvaei, A. (2025). Conditional optimal transport on function spaces. SIAM/ASA Journal on Uncertainty Quantification, 13(1):304–338. Huan, X., Jagalur, J., and Marzouk, Y. (2024). Opti- mal experimental design: Formulations and compu- tations. Acta Numerica, 33:715–840. Huan, X. and Marzouk, Y. M. (2014). Gradient-based stochastic optimization methods in Bayesian experi- mental design. International Journal for Uncertainty Quantification, 4(6). Iollo, J., Heinkel´e, C., Alliez, P., and Forbes, F. (2024a). Bayesian experimental design via contrastive diffu- sions. arXiv preprint arXiv:2410.11826. Iollo, J., Heinkel´e, C., Alliez, P., and Forbes, F. (2024b). PASOA - particle based Bayesian optimal adaptive design. arXiv preprint arXiv:2402.07160. Ivanova, D. R., Foster, A., Kleinegesse, S., Gutmann, M. U., and Rainforth, T. (2021). Implicit deep adap- tive design: Policy-based experimental design with- out likelihoods. Advances in Neural Information Processing Systems, 34:25785–25798. Kallenberg, O. (1997). Foundations of modern proba- bility. Springer. Kantorovich, L. V. (1942). On the translocation of masses. In Dokl. Akad. Nauk. USSR (NS), volume 37, pages 199–201. Kerrigan, G., Migliorini, G., and Smyth, P. (2024). Dynamic conditional optimal transport through simulation-free flows. Advances in Neural Informa- tion Processing Systems, 37:93602–93642. Kiefer, J. (1959). Optimum experimental designs. Jour- nal of the Royal Statistical Society: Series B (Method- ological), 21(2):272–304. Kiefer, J. (1974). General equivalence theory for opti- mum designs (approximate theory). The Annals of Statistics, pages 849–879. Kingma, D. P. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Kleinegesse, S. and Gutmann, M. U. (2019). Efficient Bayesian experimental design for implicit models. In The 22nd International Conference on Artificial Intelligence and Statistics, pages 476–485. PMLR. Kleinegesse, S. and Gutmann, M. U. (2020). Bayesian experimental design for implicit models by mutual in- formation neural estimation. In International Confer- ence on Machine Learning, pages 5316–5326. PMLR. Kleinegesse, S. and Gutmann, M. U. (2021). Gradient- based Bayesian experimental design for implicit mod- els using mutual information lower bounds. arXiv preprint arXiv:2105.04379. Kuhfeld, W. F., Tobias, R. 
D., and Garratt, M. (1994). Efficient experimental design with marketing research applications. Journal of Marketing Research, 31(4):545–557.
Kullback, S. (1967). A lower bound for discrimination information in terms of variation. IEEE Transactions on Information Theory, 13(1):126–127.
Leteno, T., Gourru, A., Laclau, C., Emonet, R., and Gravier, C. (2023). Fair text classification with Wasserstein independence. arXiv preprint arXiv:2311.12689.
Lindley, D. V. (1956). On a measure of the information provided by an experiment. The Annals of Mathematical Statistics, 27(4):986–1005.
Lindley, D. V. (1972). Bayesian Statistics: A Review. SIAM.
MacKay, D. J. C. (1992). Information-based objective functions for active data selection. Neural Computation.
Massart, P. (2007). Concentration Inequalities and Model Selection. Springer.
Melendez, J., Furnstahl, R., Grießhammer, H., McGovern, J., Phillips, D., and Pratola, M. (2021). Designing optimal experiments: An application to proton Compton scattering. The European Physical Journal A, 57(3):81.
Nguyen, X., Wainwright, M. J., and Jordan, M. I. (2010). Estimating divergence functionals and the likelihood ratio by convex risk minimization. IEEE Transactions on Information Theory, 56(11):5847–5861.
Nies, T. G., Staudt, T., and Munk, A. (2025). Transport dependency: Optimal transport based dependency measures. The Annals of Applied Probability, 35(4):2292–2362.
Ozair, S., Lynch, C., Bengio, Y., Van den Oord, A., Levine, S., and Sermanet, P. (2019). Wasserstein dependency measure for representation learning. Advances in Neural Information Processing Systems, 32.
Papp, T. and Sherlock, C. (2025). Centered plug-in estimation of Wasserstein distances. arXiv preprint arXiv:2203.11627.
Park, M., Nassar, M., and Vikalo, H. (2013). Bayesian active learning for drug combinations. IEEE Transactions on Biomedical Engineering, 60(11):3248–3255.
Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al. (2019). PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 32.
Peyré, G., Cuturi, M., et al. (2019). Computational optimal transport: With applications to data science. Foundations and Trends in Machine Learning, 11(5-6):355–607.
Pinsker, M. S. (1964). Information and Information Stability of Random Variables and Processes. Holden-Day.
Polyanskiy, Y. and Wu, Y. (2025). Information Theory: From Coding to Learning. Cambridge University Press.
Pukelsheim, F. (2006). Optimal Design of Experiments. SIAM.
Rainforth, T., Cornish, R., Yang, H., Warrington, A., and Wood, F. (2018). On nesting Monte Carlo estimators. In International Conference on Machine Learning, pages 4267–4276. PMLR.
Rainforth, T., Foster, A., Ivanova, D. R., and Bickford Smith, F. (2024). Modern Bayesian experimental design. Statistical Science, 39(1):100–114.
Ryan, E. G., Drovandi, C. C., McGree, J. M., and Pettitt, A. N. (2016). A review of modern computational algorithms for Bayesian optimal design. International Statistical Review, 84(1):128–154.
Salvatier, J., Wiecki, T. V., and Fonnesbeck, C. (2016). Probabilistic programming in Python using PyMC3. PeerJ Computer Science, 2:e55.
Sebastiani, P. and Wynn, H. P. (2000). Maximum entropy sampling and optimal Bayesian experimental design. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 62(1):145–157.
Sheng, X.
and Hu, Y.-H. (2005). Maximum likelihood multiple-source localization using acoustic energy measurements with wireless sensor networks. IEEE Transactions on Signal Processing, 53(1):44–53.
Vanlier, J., Tiemann, C. A., Hilbers, P. A., and van Riel, N. A. (2012). A Bayesian approach to targeted experiment design. Bioinformatics, 28(8):1136–1142.
Villani, C. et al. (2008). Optimal Transport: Old and New, volume 338. Springer.
Wald, A. (1943). On the efficient design of statistical investigations. The Annals of Mathematical Statistics, 14(2):134–140.
Warren, A. (2021). Wasserstein conditional independence testing. arXiv preprint arXiv:2107.14184.
Weed, J. and Bach, F. (2019). Sharp asymptotic and finite-sample rates of convergence of empirical measures in Wasserstein distance. Bernoulli, 25(4A):2620–2648.
Wiesel, J. C. (2022). Measuring association with Wasserstein distances. Bernoulli, 28(4):2816–2832.

A Geometric Approach to Optimal Experimental Design: Supplementary Materials

A Transport Dependencies

In this section, we provide a more formal discussion of the MTD. We work under standard assumptions throughout, which are sufficient for guaranteeing that a solution to the OT problem exists.

A1. The spaces Θ and Y are Polish spaces.
A2. All cost functions are lower-semicontinuous and non-negative.

Under Assumptions A1-A2, minimizers to the OT problem in Equation (6) defined on either Θ or Y are guaranteed to exist (Ambrosio and Gigli, 2012, Theorem 1.5). Similarly, if Θ, Y are Polish, then Θ × Y is Polish when equipped with the product topology, so that minimizers to an OT problem on the product space also exist under Assumptions A1-A2. Additional assumptions on c are necessary, though, to guarantee that T_c(d) (and the DTD/TTD) is finite. A trivially sufficient condition is that c is bounded from above. Other sufficient conditions can be given under moment assumptions on the corresponding densities. See Lemma 1 and Theorem 4 below.

As discussed in the main paper, the TTD and DTD are closely related to notions arising from conditional optimal transport (Carlier et al., 2016; Hosseini et al., 2025; Kerrigan et al., 2024; Chemseddine et al., 2024; Baptista et al., 2024). Viewing our transport dependencies from this lens is a fruitful avenue for theoretical analysis. We begin by recalling the notion of a triangular coupling (Hosseini et al., 2025), which gives a notion of couplings that fix certain variables.

Definition 4 (Triangular Couplings). A coupling γ ∈ Π(p(θ, y | d), p(θ)p(y | d)) is said to be Y-triangular if draws (θ, y, θ′, y′) ∼ γ are such that y = y′ almost surely. Similarly, γ is said to be Θ-triangular if draws (θ, y, θ′, y′) ∼ γ are such that θ = θ′ almost surely.

For the sake of brevity, we will write Π := Π(p(θ, y | d), p(θ)p(y | d)) for the set of all couplings, Π_Y := Π_Y(p(θ, y | d), p(θ)p(y | d)) for the set of Y-triangular couplings, and Π_Θ := Π_Θ(p(θ, y | d), p(θ)p(y | d)) for the set of Θ-triangular couplings.
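To make Definition 4 concrete, the following minimal Python sketch draws from the particular Y-triangular coupling used later in the proof of Theorem 4; the sampler names are illustrative placeholders, not part of any released implementation.

import numpy as np

def sample_y_triangular(sample_joint, sample_prior, n, rng):
    """Draw n points from the Y-triangular coupling
    gamma(theta, y, theta', y') = p(theta, y | d) p(theta') delta[y' = y]:
    the y-coordinate is copied, so y = y' almost surely (Definition 4)."""
    theta, y = sample_joint(n, rng)    # (theta, y) ~ p(theta, y | d)
    theta_p = sample_prior(n, rng)     # theta' ~ p(theta), independent of (theta, y)
    return theta, y, theta_p, y.copy()

# Toy joint: theta ~ N(0, 1), y | theta ~ N(theta, 1).
rng = np.random.default_rng(0)
joint = lambda n, r: (lambda t: (t, t + r.standard_normal(n)))(r.standard_normal(n))
prior = lambda n, r: r.standard_normal(n)
theta, y, theta_p, y_p = sample_y_triangular(joint, prior, 1000, rng)
assert np.all(y == y_p)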
We begin with a lemma which allows us to bound the MTD by the DTD and TTD. This result shows that if the MTD is large, then both corresponding transport divergences on Θ or Y must also be large. This is particularly interpretable in terms of the TTD T^(θ)_c(d), where we see that a large MTD implies a large transport divergence between the posterior and the prior, on average across the marginal p(y | d).

Lemma 1. Fix p ∈ [1, ∞). Suppose Θ, Y are separable Hilbert spaces. Consider the cost function c(θ, y, θ′, y′) = η|θ − θ′|^p + |y − y′|^p for a given η > 0. Write c_Θ(θ, θ′) = |θ − θ′|^p and c_Y(y, y′) = |y − y′|^p. Assume that p(θ, y | d) has finite p-th moment for a given d ∈ D. Then,

  T_c(d) ≤ η T^(θ)_{c_Θ}(d)   and   T_c(d) ≤ T^(y)_{c_Y}(d).   (22)

Furthermore, both T^(θ)_{c_Θ}(d) and T^(y)_{c_Y}(d) are finite.

Proof. First consider the η = 1 case. Observe that p(θ, y | d) having finite p-th moments immediately implies that both p(θ) and p(y | d) also have finite p-th moments. Note further that p(θ, y | d) and p(θ)p(y | d) have the same marginals in both Θ and Y space. It follows that both T^(θ)_{c_Θ}(d) and T^(y)_{c_Y}(d) are p-th powers of conditional Wasserstein metrics (Kerrigan et al., 2024, Definition 2). By Kerrigan et al. (2024, Prop. 2), both T^(θ)_{c_Θ}(d) and T^(y)_{c_Y}(d) are finite, and furthermore conditional Wasserstein metrics upper bound the Wasserstein metric on the corresponding joint measure. This proves the claim for η = 1.

For the general η > 0 case, observe that we can equip Θ with the alternative inner product ⟨θ, θ′⟩_η = η^{2/p} ⟨θ, θ′⟩_Θ, which yields the norm |θ|_η = η^{1/p} |θ|_Θ. Since the preceding argument applies to general separable Hilbert spaces, it also holds when Θ is equipped with the alternative norm, yielding

  T_c(d) ≤ T^(θ)_{c_{Θ,η}}(d)   (23)

for c_{Θ,η}(θ, θ′) = η|θ − θ′|^p. Further, observe that

  T^(θ)_{c_{Θ,η}}(d) = ∫_Y [ min_{γ ∈ Π(p(θ), p(θ | y, d))} ∫_{Θ²} η|θ − θ′|^p dγ(θ, θ′) ] p(y | d) dy   (24)
    = η ∫_Y [ min_{γ ∈ Π(p(θ), p(θ | y, d))} ∫_{Θ²} |θ − θ′|^p dγ(θ, θ′) ] p(y | d) dy   (25)
    = η T^(θ)_{c_Θ}(d).   (26)

This yields the desired claim.

A.1 Moment Bounds

In this section, we prove several upper bounds on our transport dependencies which rely on moments of the underlying distributions. In particular, the following theorem shows that there is a link between the DTD T^(y)_c(d) (and thus also the MTD, by Lemma 1) and the predictive variance of p(y | d). Intuitively, this means that maximizing the DTD (and MTD) should select designs for which there is a high amount of variance in the experimental outcome.

Theorem 4. Suppose Θ, Y are separable Hilbert spaces. Fix p ∈ [1, ∞) and suppose that p(θ, y | d) has finite p-th moment. For c_Θ(θ, θ′) = |θ − θ′|^p, we have

  T^(θ)_{c_Θ}(d) ≤ 2^p E_{p(θ)} |θ − E[θ]|^p.   (27)

Similarly, for c_Y(y, y′) = |y − y′|^p,

  T^(y)_{c_Y}(d) ≤ 2^p E_{p(y|d)} |y − E[y | d]|^p.   (28)

Proof. We begin with the bound on T^(θ)_{c_Θ}(d). Observe that γ(θ, y, θ′, y′) = p(θ, y | d) p(θ′) δ[y′ = y] is a valid triangular coupling of p(θ, y | d) and p(θ′)p(y′ | d). By the convexity of x ↦ x^p, we have

  T^(θ)_{c_Θ}(d) ≤ E_γ |θ − θ′|^p = E_γ |(θ − E_{p(θ)}[θ]) − (θ′ − E_{p(θ)}[θ])|^p   (29)
    ≤ 2^{p−1} E_γ [ |θ − Eθ|^p + |θ′ − Eθ|^p ]   (30)
    = 2^{p−1} [ ∫_{Θ²×Y} |θ − Eθ|^p dp(θ′) dp(θ, y | d) + ∫_{Θ²×Y} |θ′ − Eθ|^p dp(θ′) dp(θ, y | d) ]   (31)
    = 2^p E_{p(θ)} |θ − Eθ|^p,   (32)

where the last line follows by marginalization. The proof for T^(y)_{c_Y}(d) is analogous.

We note that in the case p = 2, one may use the identity

  E_γ |θ − θ′|² = E_γ |(θ − Eθ) − (θ′ − Eθ)|²   (33)
    = 2 E_{p(θ)} |θ − Eθ|² − 2 E_γ ⟨θ − Eθ, θ′ − Eθ⟩   (34)

rather than convexity to obtain a sharper constant.

A.2 Limiting Cases of the MTD

In this section, we prove that the TTD and DTD can be obtained as limiting cases of the MTD under a particular choice of cost. We refer to Section 3.2 for a discussion of this result and its implications for OED. The following is a formal restatement of Theorem 1.

Theorem 5. Suppose Θ, Y are separable Hilbert spaces. Fix p ∈ [1, ∞) and assume that p(θ, y | d) has finite p-th moment.
Consider the cost

  c_η(θ, y, θ′, y′) = η|θ − θ′|^p_Θ + |y − y′|^p_Y   (35)

for η > 0. Then, η^{−1} T_{c_η}(d) → T^(θ)_{c_Θ}(d) as η → 0⁺, where c_Θ(θ, θ′) = |θ − θ′|^p_Θ. Similarly, for

  c_ψ(θ, y, θ′, y′) = |θ − θ′|^p_Θ + ψ|y − y′|^p_Y   (36)

we have ψ^{−1} T_{c_ψ}(d) → T^(y)_{c_Y}(d) as ψ → 0⁺, where c_Y(y, y′) = |y − y′|^p_Y.

Proof. We begin with the first claim. Let γ* ∈ Π_Y be an optimal Y-triangular coupling for the cost c_Θ(θ, θ′) = |θ − θ′|^p and let γ_η ∈ Π be an optimal coupling for the cost c_η. Such optimal couplings exist and yield finite costs due to our moment assumption and Theorem 4. Since Π_Y ⊂ Π, using the Y-triangularity of γ* we may upper bound the MTD under the cost c_η by

  T_{c_η}(d) = ∫ c_η dγ_η ≤ ∫ c_η dγ*   (37)
    = ∫ η|θ − θ′|^p dγ* + ∫ |y − y′|^p dγ*   (38)
    = η ∫ |θ − θ′|^p dγ*.   (39)

Consequently, by expanding out the definition of T_{c_η}(d), we see that

  0 ≤ η^{−1} ∫ |y − y′|^p dγ_η ≤ ∫ |θ − θ′|^p d(γ* − γ_η).   (40)

This yields ∫ |θ − θ′|^p dγ_η ≤ ∫ |θ − θ′|^p dγ*, so that lim sup_{η→0⁺} ∫ |θ − θ′|^p dγ_η ≤ ∫ |θ − θ′|^p dγ*, which is finite as we assume p(θ, y | d) has finite p-th moment. Hosseini et al. (2025, Prop. 3.11) show that as η → 0⁺, we have γ_η → γ* in the weak sense. By the Portmanteau theorem, this weak convergence implies lim inf_{η→0⁺} ∫ |θ − θ′|^p dγ_η ≥ ∫ |θ − θ′|^p dγ*. We have thus shown

  lim_{η→0⁺} ∫ |θ − θ′|^p dγ_η = ∫ |θ − θ′|^p dγ*.   (41)

By Equation (40), we thus also have

  lim_{η→0⁺} η^{−1} ∫ |y − y′|^p dγ_η = 0.   (42)

Together, Equation (41) and Equation (42) imply that

  lim_{η→0⁺} η^{−1} T_{c_η}(d) = lim_{η→0⁺} [ ∫ |θ − θ′|^p dγ_η + η^{−1} ∫ |y − y′|^p dγ_η ]   (43)
    = ∫ |θ − θ′|^p dγ* = T^(θ)_{c_Θ}(d).   (44)

The second claim can be shown with a directly analogous argument, interchanging the roles of θ and y.

A.3 Estimation and Optimization

Estimation. Here we include further details regarding the estimation and optimization of T_c(d). We focus on the setting where θ, y are continuous. In principle, to form our plug-in estimate T̂_c(d) in Equation (15), we require samples

  (θ_j, y_j) ~i.i.d. p(θ, y | d), j = 1, …, n;   (θ′_k, y′_k) ~i.i.d. p(θ)p(y | d), k = 1, …, n.   (45)

The product of marginals p(θ)p(y | d) can be sampled by drawing θ′_k, θ″_k ∼ p(θ) followed by sampling y′_k ∼ p(y | θ″_k, d). In principle, this requires 2n draws from the prior and simulations from the likelihood. However, in practice we reduce this to n draws by first obtaining (θ_j, y_j) ~i.i.d. p(θ, y | d) and then choosing a derangement σ (i.e., a permutation with no fixed points) and defining θ′_j = θ_j, y′_j = y_{σ(j)}, breaking the dependency. This allows for a computational speedup (particularly when simulating the likelihood is expensive) and can further serve to reduce the bias of our estimator. We will write μ_n, ν_n for these two empirical measures, i.e.,

  μ_n = (1/n) Σ_{j=1}^n δ_{(θ_j, y_j)},   ν_n = (1/n) Σ_{k=1}^n δ_{(θ′_k, y′_k)}.   (46)

This yields the plug-in estimator

  T_c(d) ≈ T̂_c(d) = OT_c[μ_n, ν_n],   (47)

which can be solved using efficient linear-programming techniques (Bonneel et al., 2011; Peyré et al., 2019; Papp and Sherlock, 2025). In particular, this requires forming the cost matrix C(d) ∈ R^{n×n} with entries C_{j,k}(d) = c(θ_j, y_j, θ′_k, y′_k). Note that C(d) is a function of d, as y_j and y′_k depend on d through the likelihood. Further, the value of T̂_c(d) is determined entirely by this cost matrix C(d). We will write G(C) for the minimal transport cost obtained for a given cost matrix C.
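The following minimal NumPy/SciPy sketch of the estimator in Equation (47) is illustrative rather than our released implementation; it exploits the fact that, for two uniform empirical measures with the same number of atoms, the OT linear program reduces to a linear assignment problem.

import numpy as np
from scipy.optimize import linear_sum_assignment

def mtd_plugin(theta, y, cost, rng):
    """Plug-in MTD estimate from n joint samples (theta_j, y_j) ~ p(theta, y | d).

    theta: (n, d_theta) array, y: (n, d_y) array; cost(t, yy, t2, yy2) is the
    user's sample-level cost evaluated on single points."""
    n = len(y)
    # Derangement sigma (no fixed points) to break the (theta, y) dependency,
    # giving approximate samples from the product of marginals (Eq. 45).
    sigma = rng.permutation(n)
    while np.any(sigma == np.arange(n)):
        sigma = rng.permutation(n)
    theta_p, y_p = theta, y[sigma]
    # Cost matrix C_{jk}(d) = c(theta_j, y_j, theta'_k, y'_k).
    C = np.array([[cost(theta[j], y[j], theta_p[k], y_p[k]) for k in range(n)]
                  for j in range(n)])
    # Uniform equal-size empirical measures: the optimal plan is a permutation.
    row, col = linear_sum_assignment(C)
    return C[row, col].mean()

# Example: quadratic cost on strongly dependent scalar theta and y.
rng = np.random.default_rng(0)
theta = rng.standard_normal((200, 1))
y = theta + 0.1 * rng.standard_normal((200, 1))
quad = lambda t, yy, t2, yy2: np.sum((t - t2) ** 2) + np.sum((yy - yy2) ** 2)
print(mtd_plugin(theta, y, quad, rng))   # noticeably larger than 0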
Optimization. We require an estimate of ∇_d T_c(d), which we obtain by computing the gradient ∇_d T̂_c(d) of our plug-in estimator. Key to computing this is the envelope theorem (Danskin, 1967; Bertsekas, 1971, 1997), which shows that we can obtain the gradient of C ↦ G(C) in terms of the optimal transport plan. To be more precise, this mapping is not differentiable, but we use ∂G(C) to represent the superdifferential (Boyd and Vandenberghe, 2004) of G at C, i.e., the set of all v ∈ R^{n×n} satisfying

  G(C′) − G(C) ≤ ⟨v, C′ − C⟩_F   for all C′ ∈ R^{n×n}   (48)

for the Frobenius inner product ⟨·, ·⟩_F. The following theorem shows that ∂G(C) is precisely given by the set of optimal plans (Peyré et al., 2019, Prop. 9.2), which in general may be non-unique.

Theorem 6. For the mapping C ↦ G(C), we have

  ∂G(C) = { γ* ∈ Π(μ_n, ν_n) : γ* ∈ argmin_{Π(μ_n, ν_n)} K_c(γ) }.   (49)

Thus, computing a supergradient of G at C requires no more computation than solving the OT problem itself, as it is simply the value of the optimal plan found when solving the linear program. The (super-)gradient ∇_d T̂_c(d) then requires us to use the chain rule to compute ∇_d G(C(d)):

  ∇_d T̂_c(d) = Σ_{j,k=1}^n γ*_{jk} ∇_d C_{jk}(d)   (50)
    = Σ_{j,k=1}^n γ*_{jk} [ ∇_d y_j(η, θ, d)^T ∂_2 c(θ_j, y_j, θ′_k, y′_k) + ∇_d y′_k(η, θ, d)^T ∂_4 c(θ_j, y_j, θ′_k, y′_k) ],   (51)

where the second equality follows from directly computing ∇_d C_{jk}(d). Here, we use ∂_2 c and ∂_4 c to denote the partial derivatives of c with respect to its second and fourth arguments, and ∇_d y(η, θ, d) ∈ R^{d_y × d_d} is the Jacobian of y with respect to d. In practice, ∇_d C_{jk}(d) can be computed via automatic differentiation when we have a differentiable cost function and a differentiable sampler. In particular, as discussed in Section 3.3, using the noise outsourcing lemma (Kallenberg, 1997), for any fixed noise distribution with appropriate reference measure q(η), there exists (subject to weak assumptions) a function h such that y(η, θ, d) = h(η; θ, d) for η ∼ q(η). If we further assume that d ↦ h(η; θ, d) is differentiable for our chosen q(η), we may use automatic differentiation to compute ∇_d y(η, θ, d). We refer to Peyré et al. (2019, Chapter 9) for a further discussion of the differentiability of optimal transport discrepancies. While in principle ∇_d T̂_c(d) is merely a supergradient, this is sufficient for the purposes of performing stochastic gradient-ascent-based procedures on the MTD.
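As an illustrative sketch of Equations (50)-(51) (placeholder names; simulate and cost are assumed user-supplied, differentiable components), one can solve the LP on a detached cost matrix and then treat the optimal plan as a constant while backpropagating through C(d).

import torch
from scipy.optimize import linear_sum_assignment

def mtd_design_grad(theta, eta, d, simulate, cost):
    """Supergradient of the plug-in MTD with respect to the design d.

    simulate(eta, theta, d) is a differentiable reparameterized sampler
    y = h(eta; theta, d); theta and y are assumed 1-D tensors for brevity."""
    d = d.clone().requires_grad_(True)
    y = simulate(eta, theta, d)                    # differentiable in d
    n = y.shape[0]
    sigma = torch.roll(torch.arange(n), 1)         # cyclic derangement
    theta_p, y_p = theta, y[sigma]
    # Pairwise cost matrix C(d); gradients flow through y_j and y'_k (Eq. 51).
    C = cost(theta[:, None], y[:, None], theta_p[None, :], y_p[None, :])
    # Envelope theorem (Theorem 6): the plan from the detached LP is treated
    # as a constant, so only C(d) is differentiated in d.
    row, col = linear_sum_assignment(C.detach().cpu().numpy())
    loss = C[torch.as_tensor(row), torch.as_tensor(col)].mean()
    loss.backward()
    return d.grad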
B Transport Dependencies and Mutual Information

In this section, we give a proof of Theorem 2, as well as an extended discussion of this result. In addition, we show that the total variation distance between p(θ, y | d) and p(θ)p(y | d) may be obtained as a special case of the MTD, and provide an upper bound analogous to Theorem 2.

B.1 The Euclidean Case

While the MTD can be defined for general Polish spaces, we work under a Euclidean assumption throughout this section to facilitate the analysis, and also because this is a highly practically relevant scenario. We turn our attention towards bounding the transport dependencies by the mutual information. The key assumption we rely on is a strong log-concavity assumption.

Definition 5 (Strong Log-Concavity). Let p ∈ C²(R^m) be a twice continuously differentiable probability density function. We say that p is strongly log-concave if there exists λ > 0 such that for all x with p(x) > 0, we have

  −∇²_x log p(x) ⪰ λ I_m.   (52)

The greatest such λ is called the parameter of log-concavity for p(x).

A generalized form of Talagrand's inequality yields an upper bound on the MTD in terms of the mutual information (Villani et al., 2008, Section 9.3; Blower, 2003, Theorem 4.1).

Theorem 7 (Talagrand's Inequality). Suppose X = R^m is a Euclidean space and p(x), q(x) are two probability densities over X. Suppose p(x) is strongly log-concave with parameter λ and KL[q(x) || p(x)] < ∞. For the cost function c(x, x′) = |x − x′|², we have

  OT_c[p(x), q(x)] ≤ 2λ^{−1} KL[q(x) || p(x)].   (53)

Using Talagrand's inequality, we may relate our MTD and other transport discrepancies to the mutual information. We provide a more formal statement of Theorem 2 here, as well as a proof.

Theorem 8. Suppose Θ = R^n and Y = R^m.

1. Assume the prior p(θ) is strongly log-concave with parameter λ_θ. For c(θ, θ′) = |θ − θ′|², we have

  λ_θ T^(θ)_c(d) ≤ 2 I(d).   (54)

2. Assume the marginal p(y | d) is strongly log-concave with parameter λ_{y|d}. For c(y, y′) = |y − y′|², we have

  λ_{y|d} T^(y)_c(d) ≤ 2 I(d).   (55)

3. Let η > 0 be given. When both the prior and the marginal satisfy these assumptions, under the cost c(θ, θ′, y, y′) = η|θ − θ′|² + |y − y′|², for λ = max{λ_θ/η, λ_{y|d}} we have

  λ T_c(d) ≤ 2 I(d).   (56)

Proof. We start with the first claim. If I(d) is infinite, then there is nothing to show. When I(d) is finite and p(θ) is strongly log-concave, by Theorem 7 we have

  OT_c[p(θ | y, d), p(θ)] ≤ 2λ_θ^{−1} KL[p(θ | y, d) || p(θ)].   (57)

Integrating both sides with respect to p(y | d) yields

  T^(θ)_c(d) = ∫_Y OT_c[p(θ | y, d), p(θ)] p(y | d) dy   (58)
    ≤ 2λ_θ^{−1} ∫_Y KL[p(θ | y, d) || p(θ)] p(y | d) dy   (59)
    = 2λ_θ^{−1} I(d),   (60)

as claimed. The proof of the second claim is analogous. For the last claim, consider c_{Θ,η}(θ, θ′) = η|θ − θ′|². As Lemma 1 applies in arbitrary separable Hilbert spaces, we obtain

  T_c(d) ≤ min{ T^(θ)_{c_{Θ,η}}(d), T^(y)_{c_Y}(d) }   (61)
    = min{ η T^(θ)_{c_Θ}(d), T^(y)_{c_Y}(d) }   (62)
    ≤ 2 min{ η λ_θ^{−1}, λ_{y|d}^{−1} } I(d),   (63)

where the final inequality follows from the first two claims. Rearranging this inequality yields the result.

While the bounds in this section apply to Euclidean spaces, generalizations of Talagrand's inequality exist for general metric spaces (Gozlan and Léonard, 2010), albeit resulting in a more complex relationship between information-theoretic quantities and optimal transport divergences.

B.2 Total Variation and Hamming Distance

One particular special case that may be of interest is when the cost function is defined via the Hamming distance. In this case, we obtain the total variation distance between p(θ, y | d) and p(θ)p(y | d) as a special case of our MTD framework and, moreover, obtain an upper bound in terms of the mutual information. However, we note that as this cost function is not differentiable, the MTD under this cost function cannot be directly optimized with gradient-based methods.

Theorem 9. Suppose Θ × Y is a metric space. Consider the Hamming distance c_H(θ, y, θ′, y′) = 1[(θ, y) ≠ (θ′, y′)]. Then,

  T_{c_H}(d) = |p(θ, y | d) − p(θ)p(y | d)|_TV = sup_{A ∈ B} [ ∫_A p(θ, y | d) dθ dy − ∫_A p(θ)p(y | d) dθ dy ],   (64)

where B is the set of all Borel subsets of Θ × Y and |·|_TV is the total variation metric. Moreover,

  T_{c_H}(d) ≤ sqrt( I(d)/2 ).   (65)

Proof. It is well known that the Hamming cost in the OT problem yields the total variation distance (Massart, 2007, Lemma 2.20). By the Csiszár-Kullback-Pinsker inequality (Pinsker, 1964; Csiszár, 1967; Kullback, 1967; Gozlan and Léonard, 2010), we immediately obtain the desired upper bound.
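As a quick numerical sanity check of Theorem 9 on an arbitrary toy 2 × 2 joint (illustrative code only), the Hamming-cost OT value, i.e., the total variation distance, indeed stays below sqrt(I(d)/2):

import numpy as np

# Toy discrete joint over (theta, y); rows index theta, columns index y.
joint = np.array([[0.4, 0.1],
                  [0.1, 0.4]])
prod = np.outer(joint.sum(axis=1), joint.sum(axis=0))   # p(theta) p(y | d)
tv = 0.5 * np.abs(joint - prod).sum()                   # Hamming-cost OT value
mi = (joint * np.log(joint / prod)).sum()               # mutual information (nats)
print(f"TV = {tv:.3f} <= sqrt(I/2) = {np.sqrt(mi / 2):.3f}")   # 0.300 <= 0.310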
C Linear-Gaussian Model

Here we investigate the behavior of the gain T_c(d) in the linear-Gaussian setting under quadratic costs. In particular, we may obtain a closed-form solution for the MTD in this case, allowing for a direct comparison against the MI. See Figure 5 for an illustration.

Theorem 10. Suppose θ ∈ Θ = R^n has a standard normal prior p(θ) = N(0, I_n), designs are vectors d ∈ D = R^n, and y ∈ Y = R has likelihood p(y | θ, d) = N(⟨d, θ⟩, σ²_{d,θ}). Under the quadratic cost, we have

  T_c(d) = 2 [ 1 + σ²_{d,θ} + |d|² − sqrt( 1 + (|d|² + σ²_{d,θ})² + 2 sqrt(|d|² + σ²_{d,θ}) ) ].

Moreover, T_c(d) ≤ 2. On the other hand, the MI is

  I(d) = (1/2) log( 1 + |d|²/σ²_{d,θ} ),   (66)

which is unbounded as σ²_{d,θ} → 0.

Figure 5: Values of the MI and MTD for the linear-Gaussian model for σ²_{d,θ} = 1.

Proof. Define s = |d|² + σ²_{d,θ}. Under this setting we may explicitly calculate the joint and the product of marginals as

  p(θ, y | d) = N(0, Σ_J),   Σ_J = [ I  d ; d^T  s ]   (67)
  p(θ)p(y | d) = N(0, Σ_P),   Σ_P = [ I  0 ; 0  s ].   (68)

Under quadratic costs, T_c(d) is the squared 2-Wasserstein distance between these distributions, which admits a closed form via the (squared) Bures-Wasserstein metric

  T_c(d) = tr( Σ_J + Σ_P − 2 (Σ_P^{1/2} Σ_J Σ_P^{1/2})^{1/2} ).   (69)

Observe that tr(Σ_J) = tr(Σ_P) = n + s, and thus we turn our attention to the term B = (Σ_P^{1/2} Σ_J Σ_P^{1/2})^{1/2}. Since Σ_P is diagonal, its square root is simply

  Σ_P^{1/2} = [ I  0 ; 0  √s ]   (70)

and an explicit calculation of B² yields

  B² = Σ_P^{1/2} Σ_J Σ_P^{1/2} = [ I  √s d ; √s d^T  s² ].   (71)

We require tr(B). Let (λ_i) be the eigenvalues of B², so that tr(B) = Σ_i √λ_i. Using the expression for the determinant of a block matrix, we see that the characteristic polynomial of B² is

  p(λ) = (1 − λ)^n ( s² − λ − s|d|²/(1 − λ) ) = (1 − λ)^{n−1} ( λ² − (1 + s²)λ + s ).   (72)

Thus, B² has eigenvalues λ_1 = λ_2 = ⋯ = λ_{n−1} = 1 together with eigenvalues λ_n, λ_{n+1}, which are the roots of the quadratic part. In particular, λ_n + λ_{n+1} = 1 + s² and λ_n λ_{n+1} = s. This yields

  √λ_n + √λ_{n+1} = sqrt( 1 + s² + 2√s )   (73)
  tr(B) = (n − 1) + sqrt( 1 + s² + 2√s ).   (74)

Putting everything together via the additivity of the trace, we have

  T_c(d) = 2 [ 1 + σ²_{d,θ} + |d|² − sqrt( 1 + (|d|² + σ²_{d,θ})² + 2 sqrt(|d|² + σ²_{d,θ}) ) ].   (75)

Using a computer algebra system one can verify that T_c(d) ≤ 2 and that lim_{|d|→∞} T_c(d) = 2. On the other hand, the mutual information between two Gaussians admits a closed form. This immediately gives

  I(d) = (1/2) log( 1 + |d|²/σ²_{d,θ} ),   (76)

which is unbounded as σ²_{d,θ} → 0.
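The Bures-Wasserstein expression in Equation (69) is also easy to evaluate numerically; the following illustrative sketch builds Σ_J and Σ_P for a given design and returns the resulting T_c(d):

import numpy as np
from scipy.linalg import sqrtm

def mtd_linear_gaussian(d, sigma2):
    """Evaluate Eq. (69) numerically for the linear-Gaussian model of Theorem 10."""
    n = d.shape[0]
    s = d @ d + sigma2
    Sigma_J = np.block([[np.eye(n), d[:, None]],
                        [d[None, :], np.array([[s]])]])   # joint covariance (67)
    Sigma_P = np.diag(np.append(np.ones(n), s))           # product covariance (68)
    root_P = sqrtm(Sigma_P)
    B = sqrtm(root_P @ Sigma_J @ root_P)
    return np.trace(Sigma_J + Sigma_P - 2.0 * B).real

print(mtd_linear_gaussian(np.array([1.0, 2.0]), sigma2=1.0))   # bounded by 2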
D Experiment Details

This section provides additional details for our experiments. Unless specified otherwise, our experiments in Section 6 were performed with the following settings. Experiments were performed on an Apple M4 Pro chip with 24 GB of unified memory and a 14-core CPU, and primarily implemented in PyTorch (Paszke et al., 2019). Designs are optimized for 250 gradient steps with a learning rate of 2e-2 using the Adam optimizer (Kingma, 2014). For MTD, we draw 1 000 samples from p(θ, y | d, h_t) = p(θ | h_t)p(y | θ, d) per gradient step, shuffled following Appendix A to yield samples from p(θ)p(y | d). For PCE, we draw N = 1 000 samples (θ_n, y_n) ~i.i.d. p(θ, y | d, h_t) to form the approximation (Foster et al., 2020)

  I^(t)(d) ≈ (1/N) Σ_{n=1}^N [ log p(y_n | θ_n, d) − log( (1/(L+1)) ( p(y_n | θ_n, d) + Σ_{ℓ=1}^L p(y_n | θ_{ℓ,n}, d) ) ) ],   (77)

where L = 1 000 and θ_{ℓ,n} ~i.i.d. p(θ | h_t). Figure 1 and Figure 3 are plotted with additional Gaussian smoothing for visualization purposes.

D.1 Location Finding

In the location finding task, our goal is to estimate the spatial location of K ≥ 1 sources θ_k ∈ R^{d_θ}. The number of sources K is assumed to be known, so that θ = (θ_1, θ_2, …, θ_K). Each source θ_k emits a signal which decays according to an inverse square law. In each step, a sensor is placed at a location d ∈ R^{d_θ} which records a noisy measurement y ∈ R of the total signal intensity at the sensor location (Sheng and Hu, 2005). We note that this task has become a standard benchmark for BED (Foster et al., 2021; Ivanova et al., 2021; Blau et al., 2022; Iollo et al., 2024a). The total (noiseless) intensity at a location d is given by

  μ(θ, d) = b + Σ_{k=1}^K α_k / (m + |θ_k − d|²).   (78)

Here, b, α_k, m ≥ 0 are known constants. The constant b represents a background signal level and m controls the (inverse) maximum signal strength. We assume an independent standard Gaussian prior over each θ_k,

  p(θ_k) = N(θ_k | 0, I_{d_θ}),   k = 1, …, K.   (79)

We further assume that we observe the logarithm of the total signal intensity with Gaussian noise, i.e.,

  y | θ, d ∼ N( y | log μ(θ, d), σ² ),   (80)

where σ² > 0 is assumed to be known. In all experiments we follow prior work (Foster et al., 2021) and set b = 0.1, α_k = 1, m = 10^{−4}, σ² = 0.25. Further, we consider K = 2 sources in d_θ = 2 dimensions, so that θ is four-dimensional.
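A minimal illustrative sketch of this simulator, with the constants set as above (the function name is a placeholder):

import numpy as np

def simulate_lf(d, theta, rng, b=0.1, alpha=1.0, m=1e-4, sigma2=0.25):
    """One LF observation (Eqs. 78-80): theta has shape (K, 2) and d shape (2,);
    returns the log total intensity corrupted by Gaussian noise."""
    mu = b + np.sum(alpha / (m + np.sum((theta - d) ** 2, axis=-1)))
    return np.log(mu) + np.sqrt(sigma2) * rng.standard_normal()

rng = np.random.default_rng(0)
theta = rng.standard_normal((2, 2))     # K = 2 sources, standard normal prior
y = simulate_lf(np.zeros(2), theta, rng)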
Inference. For inference, we use the NUTS MCMC sampler implemented in pymc3 (Salvatier et al., 2016) with 1e4 warm-up steps and four independent chains to draw a total of 1e5 posterior samples at each experiment iteration. In LF, posteriors are complex and multi-modal, necessitating accurate (but expensive) inference. See Figure 6 for an illustration.

Figure 6 (panels: Experiment 1 through Experiment 10): An illustration of the LF problem. Red stars indicate the true, unknown source locations θ_1, θ_2. At each experimental iteration, a measurement location (black triangle) is selected by maximizing the MTD. Posterior samples θ ∼ p(θ | h_t) (orange and blue points) depict the updated beliefs about the source positions after each measurement. The MTD adaptively selects measurement locations that rapidly determine both sources.

RMSE. During posterior sampling, the model exhibits non-identifiability with respect to the ordering of the two sources, θ_1 and θ_2. As a result, the correspondence between estimated and true sources may be swapped across samples. To address this when computing the RMSE, we evaluate both possible orderings of each posterior sample and use the ordering that yields the lower RMSE.

Weighting Function. In Section 6.3 we modify the cost function in MTD using a weighting function. In particular, this takes the form

  w(θ) = b + g(θ_1 − μ) + g(θ_2 − μ),   (81)

where b = 1 is a bias, μ = (1.5, −1.5) controls the location of the bump, and

  g(x) = k ( 1 − sigmoid( s (|x|² − α²)/(β² − α²) ) )   (82)

is a bump-like function. Here, β = 1 approximately controls the radius of its support, α = 0.5 controls an inner radius where g is approximately maximized, s = 0.3 controls the slope of the bump, and k = 1e4 controls its amplitude. See Figure 7 for a visualization. The weighting w(θ) is selected to depend only on θ, which is drawn from the joint.

Figure 7: A visualization of the bump function g(x).

D.2 Constant Elasticity of Substitution (CES)

The Constant Elasticity of Substitution (CES) model, arising from behavioral economics, asks a participant to compare two baskets d_1, d_2 ∈ [0, 100]³ consisting of various amounts of three different goods. Given the two baskets, the participant provides a scalar response y ∈ [0, 1] indicating subjective preference between the baskets. This model has previously served as a benchmark for several recent BED works (Foster et al., 2019, 2020; Blau et al., 2022; Iollo et al., 2024b). The experimental goal is to choose a design d = (d_1, d_2) ∈ [0, 100]⁶ consisting of two baskets in order to infer the participant's latent utility function. This utility function is assumed to be parametrized by θ = (ρ, α, u), where ρ ∈ [0, 1], α ∈ ∆³, u ∈ R_{≥0}, and ∆³ is the 3-simplex. Thus, θ ∈ R⁵ is a five-dimensional unknown parameter of interest. Following previous work, we assume the following priors:

  ρ ∼ Beta(1, 1)   (83)
  α ∼ Dir(1, 1, 1)   (84)
  log u ∼ N(1, 3).   (85)

The likelihood for the participant's response is modeled as

  U(d) = ( Σ_{i=1}^3 d_i^ρ α_i )^{1/ρ}   (86)
  μ = ( U(d_1) − U(d_2) ) u   (87)
  σ = ( 1 + |d_1 − d_2| ) τ u   (88)
  η ∼ N(μ, σ²)   (89)
  y = sigmoid(η) if ϵ < sigmoid(η) < 1 − ϵ;  ϵ if sigmoid(η) ≤ ϵ;  1 − ϵ if sigmoid(η) ≥ 1 − ϵ,   (90)

where ϵ = 2^{−22} and τ = 5e-3 are fixed constants, and

  sigmoid(x) = 1/(1 + e^{−x})   (91)

is the usual sigmoid function. Thus, the participant's response depends on the difference in utilities U(d_1) − U(d_2) between the two baskets.

Log-Likelihood. The CES model can present numerical challenges for methods that require evaluating the likelihood of observations (e.g., PCE). We follow the recommendations of Foster et al. (2020) and Iollo et al. (2024b) to evaluate this quantity. In particular, since y is censored, we have that

  p(y | θ, d) = p_0 δ[y = ϵ] + p_1 δ[y = 1 − ϵ] + (1 − p_0 − p_1) q(y | θ, d)   (92)

is a mixture of Dirac deltas at the boundaries and a density on the interior, where the density

  q(y | θ, d) = ( 1 / ( σ y (1 − y) sqrt(2π) ) ) exp( −(logit(y) − μ)² / (2σ²) )   (93)

represents a logit-normal distribution away from the boundary, with logit(y) = sigmoid^{−1}(y) = log(y/(1 − y)). The quantities p_0, p_1 are defined via

  p_0 = Φ( (logit(ϵ) − μ)/σ )   (94)
  p_1 = 1 − Φ( (logit(1 − ϵ) − μ)/σ ),   (95)

where Φ is the standard normal CDF. When p_0, p_1 ≪ 1, computing their logarithms becomes challenging, in which case we approximate Φ by a first-order Taylor expansion, i.e.,

  Φ(x) ≈ ( 1/(−x sqrt(2π)) ) exp(−x²/2),  x ≪ −1   (96)
  1 − Φ(x) ≈ ( 1/(x sqrt(2π)) ) exp(−x²/2),  x ≫ 1.   (97)

We perform additional clipping as necessary before taking logarithms in our implementation to further improve stability. For inference, we perform importance resampling (Doucet et al., 2001), where 1e7 proposal samples are drawn from the prior, weighted according to the likelihood, and 1e5 samples are re-drawn with replacement with probability proportional to their weight.
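For concreteness, a minimal illustrative sketch of the generative process in Equations (83)-(91); we read N(1, 3) for log u as mean 1 and variance 3, which the notation above leaves ambiguous:

import numpy as np

def simulate_ces(d1, d2, rng, tau=5e-3, eps=2.0 ** -22):
    """One CES response: sample theta = (rho, alpha, u) from the prior, then y."""
    rho = rng.beta(1.0, 1.0)
    alpha = rng.dirichlet(np.ones(3))
    u = np.exp(rng.normal(1.0, np.sqrt(3.0)))    # log u ~ N(1, 3), variance reading
    U = lambda d: np.sum(d ** rho * alpha) ** (1.0 / rho)
    mu = (U(d1) - U(d2)) * u                     # Eq. (87)
    sigma = (1.0 + np.linalg.norm(d1 - d2)) * tau * u
    eta = rng.normal(mu, sigma)                  # Eq. (89)
    y = 1.0 / (1.0 + np.exp(-eta))
    return np.clip(y, eps, 1.0 - eps)            # censoring at the boundaries (90)

rng = np.random.default_rng(0)
print(simulate_ces(rng.uniform(0, 100, 3), rng.uniform(0, 100, 3), rng))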
E Additional Experiments

Figure 8 (panels: RMSE, MI, and MTD vs. experiment iteration): Quantitative results for the location finding problem, averaged across 25 seeds (± one standard error). MTD achieves the lowest RMSE across most of the rollout, even though its corresponding MI values are slightly lower. All methods obtain comparable MTD scores within one standard error, reflecting the high variance of this quantity.

E.1 Location Finding

One appeal of the MTD is that it is naturally an implicit method, as it only relies on our ability to draw samples. To highlight this, we compare against two methods for MI-based experimental design in implicit settings: MINEBED (Kleinegesse and Gutmann, 2020) and JSD (Kleinegesse and Gutmann, 2021). MINEBED is based on the NWJ (Nguyen et al., 2010) lower bound for the mutual information:

  I(d) ≥ E_{p(θ,y|d)}[T(θ, y)] − e^{−1} E_{p(θ)p(y|d)}[exp(T(θ, y))],   (98)

where T : Θ × Y → R is an arbitrary measurable function. This lower bound is, in fact, tight. In practice, we take T(θ, y) = T_ψ(θ, y) to be a neural network parametrized by ψ, yielding a strict lower bound on I(d) when the considered class of neural networks does not contain the true optimum. In the continuous setting, the network parameters ψ and design d may be optimized simultaneously via gradient-based procedures. On the other hand, JSD is based on a lower bound of the Jensen-Shannon divergence (Hjelm et al., 2018)

  J(d) = (1/2) KL[p(θ, y | d) || m(θ, y | d)] + (1/2) KL[p(θ)p(y | d) || m(θ, y | d)]   (99)
    = log 2 − (1/2) E_{p(θ,y|d)}[ log( 1 + p(θ)p(y | d)/p(θ, y | d) ) ] − (1/2) E_{p(θ)p(y|d)}[ log( 1 + p(θ, y | d)/(p(θ)p(y | d)) ) ]   (100)
    ≥ log 2 + (1/2) ( E_{p(θ,y|d)}[ −log(1 + exp(−T(θ, y))) ] − E_{p(θ)p(y|d)}[ log(1 + exp(T(θ, y))) ] ),   (101)

where m(θ, y | d) = (1/2) ( p(θ, y | d) + p(θ)p(y | d) ) is a mixture distribution. Again we take T(θ, y) = T_ψ(θ, y) to be a neural network, yielding a potentially strict lower bound. Kleinegesse and Gutmann (2021) motivate the JSD as an objective for OED by noting that it behaves similarly to the mutual information while potentially offering more stable training, as there is no exponential of the network, unlike in MINEBED. For both MINEBED and JSD, we parametrize T_ψ by an MLP with two hidden layers of size 200, trained at a learning rate of 3e-3 using Adam (Kingma, 2014). Designs are simultaneously optimized via Adam, but at a higher learning rate of 2e-2.
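An illustrative sketch of the critic and the NWJ objective of Equation (98), with the layer sizes given above (names are placeholders):

import torch
import torch.nn as nn

class Critic(nn.Module):
    """MLP critic T_psi(theta, y) with two hidden layers of size 200."""
    def __init__(self, d_theta, d_y, hidden=200):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_theta + d_y, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, theta, y):
        return self.net(torch.cat([theta, y], dim=-1)).squeeze(-1)

def nwj_bound(critic, theta_j, y_j, theta_m, y_m):
    """Eq. (98): I(d) >= E_joint[T] - e^{-1} E_prod[exp T]; maximized over psi and d."""
    return (critic(theta_j, y_j).mean()
            - torch.exp(critic(theta_m, y_m) - 1.0).mean())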
Results. In Figure 8, we report the RMSE, MI, and MTD values for our method, PCE, MINEBED, and JSD, with all designs optimized using five random restarts. The MTD-based approach consistently achieves the lowest RMSE across most of the experimental rollout, even though it attains slightly lower MI values, which is expected, since it does not explicitly optimize this objective. All methods obtain comparable MTD scores; this is largely attributable to the high variance of the MTD estimate and to the nature of the location-finding task, where, once the posterior becomes highly concentrated, the set of effective designs narrows substantially, so that any design within this region yields similarly strong MTD values.

E.2 CES

Figure 9 (panels: per-parameter RMSE vs. experiment iteration): Quantitative results for the CES problem, averaged over 50 seeds (± one standard error). On the original scale (top row), MTD achieves the lowest RMSE, while the transformed variant T_c† performs slightly worse on u. On the transformed scale (bottom row), T_c† attains the lowest RMSEs, reflecting its alignment with the evaluation variables.

Figure 10 (panels: MI and MTD vs. experiment iteration): MI and MTD values for the CES problem, averaged over 50 seeds (± one standard error). MTD achieves similar MI values to PCE, while PCE attains worse MTD values than random designs.

To supplement the results in Section 6 for the CES problem, we provide additional figures visualizing our results. Figure 9 plots the RMSE values for the CES problem across the experimental iterations. When evaluating performance on the original parameter scale (top row of Figure 9), MTD achieves the lowest RMSE, while the transformed variant T_c† performs slightly worse on u. This is expected, as the transformed cost emphasizes errors in a different coordinate system. Conversely, when performance is assessed in the transformed space (bottom row), T_c† attains the lowest RMSEs, illustrating that defining the cost in terms of the relevant variables can effectively guide design selection toward the aspects of the parameters that are most important for downstream evaluation. These results highlight the flexibility of MTD: by choosing an appropriate cost, either directly or via transformations, experimenters can align the design criterion with the metric that truly matters for their task.

Figure 10 shows the MI and MTD values for CES. We observe that designs optimized using the MTD achieve comparable MI values to PCE. On the other hand, the designs produced by PCE yield low MTD values, performing worse than random. Notably, this serves as a verification of our Theorem 2, which can be interpreted as showing that designs which have high MTD tend to have large MI as well, but not the converse.
A Geometric Approach to Optimal Experimental Design

Gavin Kerrigan⋆   Christian A. Naesseth   Tom Rainforth

(OED). Traditional OED approaches, such as those based on mutual information, rely explicitly on probability densities, leading to restrictive invariance properties. To address these limitations, we propose the mutual transport dependence (MTD), a measure of statistical dependence grounded in optimal transport theory which provides a geometric objective for optimizing designs. Unlike conventional approaches, the MTD can be tailored to specific downstream estimation problems by choosing appropriate geometries on the underlying spaces. We demonstrate that our framework produces high-quality designs while offering a flexible alternative to standard information-theoretic techniques.

1 Introduction

Effective experimental design is central to a wide range of scientific and industrial applications (Kuhfeld et al., 1994; Park et al., 2013; Melendez et al., 2021). Many such problems require a principled, model-based approach, wherein we utilize a model over possible experimental outcomes to directly optimize our design decisions. This can be particularly effective in adaptive design settings, where frameworks like sequential Bayesian experimental design (BED) allow us to iterate between using our model to make design decisions and updating our model with the collected data (DeGroot, 1962; MacKay, 1992; Sebastiani and Wynn, 2000; Rainforth et al., 2024).

Many of these approaches are grounded in information theory, where the value of an experiment is quantified using the values of an associated probability density. For instance, a popular and principled approach in the BED literature is optimizing the mutual information (MI) (Lindley, 1956, 1972; Bernardo, 1979)

  I(d) = KL[ p(θ, y | d) || p(θ)p(y | d) ]   (1)
    = E_{p(θ,y|d)}[ log( p(θ, y | d) / ( p(θ)p(y | d) ) ) ],   (2)

where p(θ) is the prior over the quantity of interest θ, and p(y | θ, d) models the experiment outcome y under design d. Notably, the MI is an expectation of log-density ratios, and is thus a unitless quantity. Consequently, the MI has strong invariance properties: any injective transformation of θ or y leaves I(d) unchanged.

We highlight that this invariance, while often attractive, can also be detrimental for optimal experimental design (OED). Experimental goals should be defined in terms of downstream errors (Lindley, 1972), and many common error metrics, such as the mean squared error between true and estimated parameters, are inherently geometric and dependent on the space in which they are measured. Thus, the MI, being purely informational, cannot be naturally aligned with task-specific error metrics and has no mechanism by which it may be targeted to a particular geometric distance on predictions: it implicitly assumes errors are measured by log loss. In turn, this inflexibility can be problematic for various applications. For example, in financial settings, we often inherently care about the variance in future returns and not the entropy, noting that the two can take arbitrary values with respect to one another.

The MI also poses several practical challenges (Rainforth et al., 2024). Foremost, it is a doubly intractable quantity, in general requiring a nested estimation (Rainforth et al., 2018) of either the posterior p(θ | y, d) or marginal p(y | d). Second, in implicit settings, where the likelihood can only be sampled but not evaluated, the MI faces additional challenges due to its explicit reliance on densities.
While there exist approaches for estimating the MI in implicit settings (Kleinegesse and Gutmann, 2019; Ivanova et al., 2021), these amount to learning the unknown density ratio, a task that becomes increasingly difficult in high dimensions. This problem is also not specific to MI, with the core BED formalism inherently having nested dependence on the posterior (Lindley, 1972; Chaloner and Verdinelli, 1995).

Figure 1 (panels: the objective as a function of d, and posteriors p(θ | y, d) for d = 0.0, −1.3, 1.3): Comparison of MI and MTD on the 1D source location-finding problem, with the true source at θ = −1 (dashed red line). MI is maximized at the origin, reflecting the prior mode, whereas MTD is maximized near d ≈ ±1.3. The posterior for d = 0 is bimodal, while for d = ±1.3 it becomes unimodal or sharply concentrated. In practice, d is optimized directly, and MTD breaks the posterior symmetry even when initialized unfavorably.

We address these shortcomings by proposing the mutual transport dependence (MTD), a novel class of geometric criteria for experimental design. The MTD measures the dependency between θ and y in terms of an optimal transport discrepancy (Villani et al., 2008; Feydy et al., 2019) between the joint distribution p(θ, y | d) and its product of marginals p(θ)p(y | d). The MTD depends explicitly on a choice of sample-level cost function, enabling practitioners to encode domain knowledge or downstream objectives directly into the design criterion. This cost function can be defined either directly or through a transformation of the underlying space, creating a family of flexible design criteria.

Moreover, the MTD offers practical benefits. It does not involve any nested expectations, can be estimated without likelihood evaluations, and can be directly optimized using gradient-based methods whenever differentiable sampling of p(y | θ, d) is possible. This makes it particularly well-suited to simulation-based or implicit scenarios where the likelihood p(y | θ, d) is unknown or intractable, an increasingly common setting in OED (Kleinegesse and Gutmann, 2019, 2021; Ivanova et al., 2021; Encinar et al., 2025).

Optimizing MTD results in qualitatively different design behaviour than MI (see Figure 1). We illustrate the effectiveness of MTD-optimal designs on standard experimental design benchmarks, comparing directly with MI-based designs. Our results show that the MTD can outperform traditional information-theoretic approaches, while also allowing experimenters to tailor the geometry of the underlying spaces to the problem at hand. In sum, our framework introduces a principled and practical new class of criteria for optimal experimental design, overcoming the rigidity of information-theoretic methods and enabling experiments that better reflect real-world objectives.

2 Background and Notation

We use θ ∈ Θ to represent the unknown target quantity of interest that we wish to learn about through our experiments. This could correspond to a real-world quantity, model parameters, or something abstract like downstream predictions. We further use d ∈ D to represent an experimental design and y ∈ Y to represent an observed outcome of an experiment. Akin to standard BED approaches, we specify a prior p(θ) representing our beliefs about θ before performing any experiments and a likelihood p(y | θ, d) capturing the data-generating process.
Our goal is now to select d in a way that will allow us to best estimate θ once the experiment's outcome is observed.

2.1 OED with Mutual Information

From an information-theoretic point of view, it is natural to seek designs which result in data y that reduces our uncertainty about the unknown quantity θ. That is, we consider the expected reduction in entropy (Lindley, 1956)

  I(d) = E_{p(y|d)}[ H[θ] − H[θ | y, d] ],   (3)

which can straightforwardly be shown to yield the mutual information (1). The design choice is d* = argmax_{d∈D} I(d), maximizing the mutual information. In the context of OED, I(d) is often called the expected information gain (EIG). In all but the simplest cases, computing I(d) is nontrivial, as it requires estimating both the outer expectation and the integrand (i.e., either the posterior p(θ | y, d) or the marginal likelihood p(y | d)). Often, one resorts to nested estimators like nested Monte Carlo (NMC) (Rainforth et al., 2018) or variational approaches (Foster et al., 2019, 2020). Importantly, many techniques assume that the likelihood p(y | θ, d) is known explicitly, such that the corresponding density can be evaluated pointwise.

OED is often particularly useful in adaptive scenarios, where experiments are designed sequentially based on the data h_t = {(d_k, y_k)}_{k=1}^t gathered in previous trials. In this setting, we replace the prior p(θ) with our updated beliefs given by the posterior p(θ | h_t), and consider the incremental MI

  I^(t+1)(d) = KL[ p(θ, y | d, h_t) || p(θ | h_t) p(y | d, h_t) ].   (4)

We refer to Chaloner and Verdinelli (1995) and Rainforth et al. (2024) for more comprehensive surveys of BED.

2.2 Optimal Transport

Optimal transport (OT) is a mathematical toolkit which allows us to compare two arbitrary probability distributions p(x) and q(x′) in terms of the amount of work required to transform one distribution into the other (Villani et al., 2008; Peyré et al., 2019). In the Kantorovich formulation of OT (Kantorovich, 1942), we specify a non-negative cost function c(x, x′) ≥ 0 encoding the cost of transporting a unit of mass from location x to x′. A coupling is a joint distribution γ(x, x′) whose marginals are p and q respectively. We write Π(p, q) for the set of all valid couplings. Given a coupling γ ∈ Π(p, q), its associated cost is

  K_c(γ) = ∫ c(x, x′) dγ(x, x′) = E_{γ(x,x′)}[c(x, x′)],   (5)

which can be interpreted as the average sample-level transport cost under this coupling. An optimal coupling minimizes this cost, and we write

  OT_c[p, q] = min_{γ∈Π(p,q)} K_c(γ)   (6)

for the minimum value attained. In short, c(x, x′) defines a sample-level cost function and OT_c[p, q] is the associated optimal transport discrepancy.
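As a minimal illustration of Equation (6), a toy example of our own rather than one of the benchmark models: for two equal-size empirical measures on R under a convex cost, the optimal coupling matches sorted samples, so the plug-in OT value reduces to a sort.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 5000)     # samples from p
z = rng.normal(2.0, 1.0, 5000)     # samples from q
w2_sq = np.mean((np.sort(x) - np.sort(z)) ** 2)   # plug-in squared 2-Wasserstein
print(f"estimate {w2_sq:.2f} vs. closed form (0 - 2)^2 = 4.00")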
3 Experimental Design through Mutual Transport Dependence

The MI is an inherently density-based objective, where the value of an experiment is considered a purely informational quantity that is based directly on the density of the posterior, rather than the actual values that θ and y can take. As such, the MI is unable to incorporate properties of the underlying sample spaces Θ, Y into its design objective, such as an error metric on θ. This induces strong invariance properties in the MI (Polyanskiy and Wu, 2025, Theorem 3.7), such that it remains fixed under injective transformations of θ and/or y. Thus, if we are interested in measuring errors in terms of some transformation φ = f(θ), the optimal design remains fixed with changes in f, even though our intuitive notion of error itself will change. For example, while θ may be the natural variables for parametrizing a model, we may be interested in a one-to-one transformation of θ which is more interpretable. Under MI, these scenarios are indistinguishable: any unit of entropy reduction is equally valuable, regardless of its practical implications.

These shortcomings motivate the need for a practical objective which is fully defined in terms of geometric notions on the underlying sample spaces. In other words, we seek a criterion which is determined by the values taken on by the random variables themselves.

3.1 Mutual Transport Dependence

One interpretation of MI is that it measures the KL divergence between the joint, p(θ, y | d), under which θ and y are dependent, and the product of marginals, p(θ)p(y | d), under which they are independent. The KL, though, depends only on density ratios and is thus unsuitable as a geometric measure of information. To provide a geometric, sample-based criterion, we propose to instead measure the dependency between θ and y via an optimal transport discrepancy between the same joint and product of marginals. Optimal transport, relying on an expected sample-level cost function, is a natural choice for such a geometric measure as it allows customisation of our notion of distance through the cost function. This yields our proposed mutual transport dependence (MTD), defined as follows.¹

Definition 1. Given a cost function c : Θ² × Y² → R_{≥0}, the mutual transport dependence (MTD) is

  T_c(d) := OT_c[ p(θ, y | d), p(θ)p(y | d) ] = min_{γ∈Π(p(θ,y|d), p(θ)p(y|d))} E_γ[ c(θ, y, θ′, y′) ].   (7)

Note that the MTD depends only on expectations with respect to our model, with no direct dependency on the density functions themselves. This is critically important from a computational and scaling perspective, as there is no need to perform (nested) estimation of the posterior or marginal likelihood densities, which is typically challenging, especially in high dimensions. High values of T_c(d) indicate that the parameter and data are strongly coupled under this design; conversely, T_c(d) = 0 if and only if θ and y are independent under d. As T_c(d) can also be seen as the average (θ, y) displacement under the optimal transport plan, measured by the cost c(θ, y, θ′, y′), it is a geometric notion, and by choosing the cost function c in an appropriate way the experimenter can incorporate downstream preferences directly into the experimental design process.

¹We work under standard measure-theoretic assumptions regarding the spaces Θ, Y as well as the cost c, which we discuss in Appendix A.

3.2 Alternative Transport-Based Measures

The MI, being the KL between the joint and the product of marginals, can be equivalently understood as an expected KL discrepancy on either Θ or Y alone:

  I(d) = E_{p(θ)} KL[ p(y | θ, d) || p(y | d) ]   (8)
    = E_{p(y|d)} KL[ p(θ | y, d) || p(θ) ].   (9)

This equivalence does not translate to the MTD, as the transport distances in Θ and Y are inherently different. However, we can derive transport-based analogues of these interpretations of the MI as well by considering costs defined on Θ or Y alone, as follows.

Definition 2. Given a cost function c : Θ² → R_{≥0}, the target transport dependence (TTD) is

  T^(θ)_c(d) := E_{p(y|d)}[ OT_c[ p(θ | y, d), p(θ) ] ].   (10)

Definition 3. Given a cost function c : Y² → R_{≥0}, the expected data transport dependence (DTD) is

  T^(y)_c(d) := E_{p(θ)}[ OT_c[ p(y | θ, d), p(y | d) ] ].   (11)
In some ways, these transport dependencies are perhaps more intuitive than the MTD, T_c(d), which relies on a cost defined on the joint space Θ × Y. In particular, given that our ultimate goal is the estimation of θ, the TTD arguably provides the most natural of the three objectives, as it directly measures changes in beliefs in the space of Θ. However, unlike the MTD, these objectives have reintroduced a nesting into the problem, as the minimization over couplings is now inside an expectation. This makes them, at least in principle, computationally more problematic, as they appear to require solving an OT problem for every sample of the outer expectation.

In practice, though, both T^(θ)_c(d) and T^(y)_c(d) can be approximated in a way that requires solving only a single OT problem. Leveraging recent work on conditional optimal transport (Carlier et al., 2016; Kerrigan et al., 2024; Chemseddine et al., 2024; Baptista et al., 2024), we show in the following result that when the cost function is given by a norm, these quantities are recovered as limiting special cases of T_c(d) under a particular choice of c; the proof is given in Appendix A.

Theorem 1. For a fixed 1 ≤ p < ∞, suppose Θ, Y are separable Hilbert spaces and p(θ, y | d) has finite p-th moment. Consider the cost

  c_η(θ, y, θ′, y′) = η|θ − θ′|^p_Θ + |y − y′|^p_Y   (12)

for η > 0. Then, η^{−1} T_{c_η}(d) → T^(θ)_{c_Θ}(d) as η → 0⁺, where c_Θ(θ, θ′) = |θ − θ′|^p_Θ. Similarly, for

  c_ψ(θ, y, θ′, y′) = |θ − θ′|^p_Θ + ψ|y − y′|^p_Y   (13)

we have ψ^{−1} T_{c_ψ}(d) → T^(y)_{c_Y}(d) as ψ → 0⁺, where c_Y(y, y′) = |y − y′|^p_Y.

As the MTD can be defined using any choice of cost function, we can thus think of it as a more general objective which subsumes the TTD and DTD as limiting cases in the choice of cost function. Thus, in practice, we can implement these alternatives simply by using a weighted cost function (c_η or c_ψ) in the MTD with a small η > 0 or ψ > 0 for the TTD and DTD, respectively. We therefore focus our subsequent discussion on the MTD. We emphasize that while we have introduced various notions of transport dependence in terms of designing a single experiment, all constructions readily generalize to the sequential setting by updating the prior as discussed in Section 2. While our experiments will focus on this sequential case, our notions of transport dependence can also be extended to the policy setting (Foster et al., 2021; Ivanova et al., 2021) by considering optimal transport over the entire experimental rollout.

3.3 Estimation and Optimization

To use the MTD in practice, we must be able to efficiently estimate and optimize T_c(d). Here, we briefly discuss the computational tools used later in Section 6, but note that other algorithmic approaches to leveraging transport dependence for experimental design may be viable. We emphasize that our estimators for the MTD are purely sample-based. Assuming that we can draw samples θ ∼ p(θ) and y | θ ∼ p(y | θ, d), we can estimate T_c(d) without ever evaluating densities. By contrast, estimators for the MI either require direct access to the likelihood density or rely on learning approximations of density ratios (Kleinegesse and Gutmann, 2020, 2021; Ivanova et al., 2021), introducing substantial extra modelling and optimization complexity.

Estimation. For discrete measures, the optimal transport problem becomes a linear program (LP), for which a wide range of numerical solvers have been proposed (Bonneel et al., 2011; Peyré et al., 2019). When the distributions p(θ, y | d) and p(θ)p(y | d) are known and discrete, the OT problem may be solved directly using these distributions.
In practice, however, our distributions are often continuous or only accessible through sampling. In such cases, we approximate them using empirical measures based on samples. That is, we sample (θ_j, y_j) ~i.i.d. p(θ, y | d) and (θ′_k, y′_k) ~i.i.d. p(θ)p(y | d), followed by the approximations

  p(θ, y | d) ≈ (1/n) Σ_{j=1}^n δ_{(θ_j, y_j)},   p(θ)p(y | d) ≈ (1/n) Σ_{k=1}^n δ_{(θ′_k, y′_k)}.   (14)

This procedure yields the plug-in estimator (Boissard and Gouic, 2014; Fournier and Guillin, 2015)

  T̂_c(d) = OT_c[ (1/n) Σ_{j=1}^n δ_{(θ_j, y_j)}, (1/n) Σ_{k=1}^n δ_{(θ′_k, y′_k)} ],   (15)

which is asymptotically consistent (Dudley, 1969) as n → ∞ and concentrates around its mean at an exponential rate (Bolley et al., 2007; Boissard, 2011; Weed and Bach, 2019), albeit with a positive bias that decreases with n (Papp and Sherlock, 2025).

Optimization. Optimizing T_c(d) poses a further challenge, assuming we cannot simply enumerate the possible designs. We therefore now show how T_c(d) can be optimized using stochastic gradient ascent provided the design space is continuous. The only additional assumption needed for this is the existence of a differentiable reparameterization of y with respect to d. Namely, by the noise outsourcing lemma (Kallenberg, 1997), for any fixed noise distribution with appropriate reference measure q(η), there exists (subject to extremely weak assumptions) a function h such that y = h(η; θ, d) for η ∼ q(η). If we further assume that d ↦ h(η; θ, d) is differentiable for our chosen q(η), then the LP approach above enables the calculation of ∇_d T̂_c(d) via automatic differentiation. Hence, we may perform gradient-based design optimization in this setting. We refer to Peyré et al. (2019, Chapter 9) for a further discussion of the differentiability of optimal transport discrepancies. Note that this differentiable reparameterisation assumption is the same as in implicit MI methods and is often satisfied even when evaluating p(y | θ, d) is itself intractable: many, if not most, intractable-likelihood models are based on stochastic simulators, with the intractability coming from deterministic mappings of stochastic variables or stochastic differential equations (Cranmer et al., 2020).
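To make this assumption concrete, the following toy simulator (illustrative only, not one of the models used in Section 6) outsources its noise and is differentiable in d, so gradients flow from the outcome back to the design:

import torch

def h(eta, theta, d, sigma=0.5):
    """y = h(eta; theta, d) with eta ~ N(0, 1): a signal whose mean decays
    with the squared distance between theta and the design d."""
    mu = torch.log1p(1.0 / (1e-3 + ((theta - d) ** 2).sum(-1)))
    return mu + sigma * eta

d = torch.zeros(2, requires_grad=True)
theta, eta = torch.randn(128, 2), torch.randn(128)
h(eta, theta, d).sum().backward()   # gradients reach the design
print(d.grad.shape)                 # torch.Size([2])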
4 Comparing Mutual Information and Mutual Transport Dependence

In this section we theoretically analyse how our proposed transport-based criteria relate to the classical expected information gain (MI). Although the MTD and MI originate from different principles, there are some interesting links between the two, as we now show. In particular, under quadratic costs, the transport dependencies can be upper-bounded by the MI.

Theorem 2. Suppose the prior $p(\theta)$ is strictly log-concave, i.e., there exists some $\lambda_\theta > 0$ with $-\nabla^2 \log p(\theta) \succeq \lambda_\theta I$. For $c(\theta, \theta') = \|\theta - \theta'\|^2$, we have
$$\lambda_\theta\, T^{(\theta)}_c(d) \le 2 I(d). \qquad (16)$$
Similarly, if the marginal $p(y \mid d)$ is strictly log-concave with parameter $\lambda_{y|d}$, and $c(y, y') = \|y - y'\|^2$, then
$$\lambda_{y|d}\, T^{(y)}_c(d) \le 2 I(d). \qquad (17)$$
When both the prior and likelihood satisfy these assumptions, under the cost $c(\theta, \theta', y, y') = \eta\|\theta - \theta'\|^2 + \|y - y'\|^2$, for $\lambda = \max\{\lambda_\theta/\eta, \lambda_{y|d}\}$ we have
$$\lambda\, T_c(d) \le 2 I(d). \qquad (18)$$

See Section B for a proof and extended discussion. We note that Theorem 2 holds for quadratic costs on any space, and in particular remains valid under any transformations of $\Theta$ and $\mathcal{Y}$ when the quadratic cost is computed in the new coordinates. Thus, if the MTD is large under any such transformation, the MI must necessarily also be large. This suggests a robustness to misspecification of the selected cost function in the MTD, as selecting designs under a given particular cost still ensures a minimum level of MI.

As a further point of comparison, we derive a closed-form expression for the MTD for a simple linear-Gaussian model under quadratic costs. We allow the observation noise $\sigma^2_{d,\theta}$ to vary with the design in order to demonstrate how the value of the MI diverges as $\sigma^2_{d,\theta} \to 0$, i.e., as the likelihood approaches a deterministic outcome. On the other hand, $T_c(d)$ remains bounded for all designs $d$ and all noise levels $\sigma^2_{d,\theta}$. This boundedness makes the MTD a quantitative and stable measure of dependence even in scenarios approaching determinism.

Theorem 3. Suppose $\theta \in \Theta = \mathbb{R}^n$ has a standard normal prior $p(\theta) = \mathcal{N}(0, I_n)$, designs are vectors $d \in \mathcal{D} = \mathbb{R}^n$, and $y \in \mathcal{Y} = \mathbb{R}$ has likelihood $p(y \mid \theta, d) = \mathcal{N}(\langle d, \theta\rangle, \sigma^2_{d,\theta})$. Under the quadratic cost, writing $s = \|d\|^2 + \sigma^2_{d,\theta}$, we have
$$T_c(d) = 2\Big(1 + \sigma^2_{d,\theta} + \|d\|^2 - \sqrt{1 + s^2 + 2\sigma_{d,\theta}\sqrt{s}}\,\Big). \qquad (19)$$
Moreover, $T_c(d) \le 2$. On the other hand, the MI is
$$I(d) = \frac{1}{2}\log\big(1 + \|d\|^2/\sigma^2_{d,\theta}\big) \qquad (20)$$
which is unbounded as $\sigma^2_{d,\theta} \to 0$.

Table 1: Metrics for the CES model after T = 10 design iterations, averaged over 50 random seeds (± one standard error). Designs produced by the MTD yield lower RMSEs than PCE on average for all parameters.

             | ρ           | α           | u             | σ              | β           | τ
Random       | 0.251±0.025 | 0.116±0.016 | 36.365±7.341  | 317.219±81.866 | 0.727±0.088 | 0.740±0.106
PCE          | 0.047±0.012 | 0.036±0.013 | 8.902±5.749   | 24.942±15.697  | 0.201±0.060 | 0.100±0.048
MTD          | 0.018±0.005 | 0.009±0.001 | 3.671±2.810   | 1.767±1.009    | 0.058±0.008 | 0.049±0.012
MTD (T_c†)   | 0.022±0.008 | 0.012±0.004 | 10.941±10.107 | 0.534±0.195    | 0.069±0.013 | 0.053±0.013
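As a quick numerical illustration of Theorem 3 (a sketch with our own helper names, not code from the paper), the closed forms (19) and (20) can be compared directly: the MTD stays below 2 while the MI diverges as the noise vanishes.

```python
import numpy as np

def mtd_closed_form(d, sigma2):
    # Theorem 3 with s = |d|^2 + sigma^2
    s = d @ d + sigma2
    return 2.0 * (1.0 + s - np.sqrt(1.0 + s**2 + 2.0 * np.sqrt(sigma2 * s)))

def mi_closed_form(d, sigma2):
    return 0.5 * np.log1p(d @ d / sigma2)

d = np.array([3.0, 4.0])
for sigma2 in (1.0, 1e-2, 1e-6):
    print(f"{sigma2:g}  MTD={mtd_closed_form(d, sigma2):.4f}  "
          f"MI={mi_closed_form(d, sigma2):.2f}")
# The MTD remains bounded by 2; the MI grows without bound as sigma2 -> 0.
```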
5 Related Work

Classical optimal experimental design criteria trace back to frequentist approaches based on the Fisher information matrix (Fisher, 1935; Wald, 1943; Kiefer, 1959, 1974; Pukelsheim, 2006). While powerful in some settings, these methods often rely on asymptotic approximations and are limited when models are nonlinear (Ryan et al., 2016; Rainforth et al., 2024). As they depend only on local (second-order) information about $\theta$, they can lose fidelity compared to criteria based on the full joint distribution $p(\theta, y \mid d)$.

Bayesian experimental design (BED) addresses many of these limitations by evaluating designs using objectives that can be viewed as measuring the expected reduction in uncertainty about $\theta$ (DeGroot, 1962; Dawid, 1998; Bickford Smith et al., 2025). This uncertainty is typically measured using (differential) entropy to produce the MI or expected information gain (Lindley, 1956), especially in the contemporary literature (Huan and Marzouk, 2014; Foster, 2021; Foster et al., 2021; Ao and Li, 2024; Iollo et al., 2024b). The trace or determinant of the posterior covariance matrix has also occasionally been used instead (Vanlier et al., 2012; Ryan et al., 2016; Huan et al., 2024), but this requires expensive nested inference procedures that are typically even more costly than MI optimization.

Concurrent work by Helin et al. (2025) also studies the target transport dependence $T^{(\theta)}_c(d)$ under the specific choice $c(\theta, \theta') = \|\theta - \theta'\|^p$, $p \in [1, \infty)$, as an objective for experimental design. In light of Theorem 1, this can be seen as a limiting case of our more general MTD criterion under a Euclidean cost assumption. Their work is primarily theoretical, with no quantitative comparisons against the MI, and they do not propose a practical method for optimizing the TTD, in particular one overcoming its double intractability. Our work, by contrast, develops a sample-based, differentiable framework applicable to general costs and empirically demonstrates its efficacy for sequential OED.

Optimal-transport based notions of statistical dependency have also been considered in areas such as representation learning (Ozair et al., 2019), independence testing (Warren, 2021; Wiesel, 2022; Nies et al., 2025), and fairness (Leteno et al., 2023). These works, however, are not concerned with experimental design and also focus exclusively on Euclidean costs. Our work adds to this growing literature on geometric dependency measures by introducing the mutual transport dependence for OED under general cost functions.

6 Experiments

We now evaluate the proposed methodology both on standard benchmark experimental design tasks and on variations of these with particular error desiderata for the final estimates. In each setting, we use the MTD as the design criterion, sequentially selecting experiments by optimizing $T_c(d)$ as explained in Section 3.3. After each design is chosen, we perform posterior inference over $\theta$ and proceed to the next experimental iteration (a toy sketch of this loop is given below). All results are reported over either 25 or 50 random seeds, where each random seed constitutes a different ground-truth value of $\theta$. We compare against the MI throughout, using PCE (Foster et al., 2019) as a well-known estimator for this quantity. We note that unlike our MTD approach, PCE is an explicit estimator that requires direct access to the likelihood density, thereby providing a stronger baseline than the more directly comparable, but also more complex, implicit MI approaches. See Section D for details.²

²Experiment code: github.com/GavinKerrigan/mtd

CES. The first problem we consider is Constant Elasticity of Substitution (CES) (Arrow et al., 1961; Foster et al., 2020; Blau et al., 2022; Iollo et al., 2024b; Hedman et al., 2025), arising from behavioral economics. In this problem, a participant compares two baskets $d_1, d_2 \in [0, 100]^3$ consisting of various amounts of three different goods. Given the two baskets, the participant provides a scalar response $y \in [0, 1]$ indicating their subjective preference between the baskets. The design variable $d = (d_1, d_2)$ is thus six-dimensional, and the goal is to recover the latent parameters $\theta = (\rho, \alpha_1, \alpha_2, \alpha_3, u) \in \mathbb{R}^5$ governing the participant's preferences. This is a particularly challenging design problem, as large regions of the design space result in uninformative outcomes $y \in \{0, 1\}$. We sequentially design T = 10 experiment iterations.

Source Location Finding. Our second problem is source location finding (LF) (Sheng and Hu, 2005; Foster et al., 2021; Ivanova et al., 2021; Blau et al., 2022; Iollo et al., 2024a). Here, our goal is to estimate the spatial locations of two sources $\theta_1, \theta_2 \in \mathbb{R}^2$. Each source emits a signal which decays according to an inverse square law. At each experiment iteration, a sensor is placed at a location $d \in \mathbb{R}^2$ which records a noisy measurement $y \in \mathbb{R}$ of the total signal intensity at the sensor location. Here, we design T = 25 experiments.
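The toy sketch below illustrates the sequential loop on a deliberately simple 1D conjugate-Gaussian model $y \sim \mathcal{N}(d\theta, 1)$, so that the posterior update is exact and the design is chosen by a grid search over $\hat{T}_c(d)$. The model and all names here are our own simplifications, not the paper's benchmarks.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)

def mtd_hat(d, mu, tau2, n=128):
    """Plug-in MTD under the current posterior N(mu, tau2), quadratic cost."""
    th = mu + np.sqrt(tau2) * rng.standard_normal(n)
    y = d * th + rng.standard_normal(n)          # y | theta, d ~ N(d*theta, 1)
    y_p = np.roll(y, 1)                          # shuffled -> product of marginals
    C = (th[:, None] - th[None, :]) ** 2 + (y[:, None] - y_p[None, :]) ** 2
    r, c = linear_sum_assignment(C)
    return C[r, c].mean()

theta_true, mu, tau2 = 1.3, 0.0, 1.0
grid = np.linspace(-2.0, 2.0, 41)
for t in range(10):
    d = grid[np.argmax([mtd_hat(g, mu, tau2) for g in grid])]
    y = d * theta_true + rng.standard_normal()   # run the chosen experiment
    prec = 1.0 / tau2 + d ** 2                   # exact conjugate posterior update
    mu, tau2 = (mu / tau2 + d * y) / prec, 1.0 / prec
print(mu, tau2)  # posterior mean approaches theta_true; variance shrinks
```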
6.1 MTD under Euclidean Costs

While one of the main appeals of the MTD is that it allows for flexible cost functions, in this section we first consider the quadratic cost $c(\theta, y, \theta', y') = \|\theta - \theta'\|^2 + \|y - y'\|^2$ as a reasonable default choice. Our first set of experiments demonstrates that even under this default setting, MTD-optimal designs can exceed the performance of MI-optimal designs in terms of recovering an unknown parameter.

In Table 1, we evaluate the MTD on the CES problem in terms of the final RMSE between posterior samples after T = 10 experiment iterations and the true value of $\theta$. Designs produced by optimizing the MTD achieve lower RMSEs than those produced by optimizing the MI. Similarly, in Figure 2, we plot the RMSE between posterior samples and the true $\theta$ value on the LF problem over the course of T = 25 design iterations. We observe that the MTD yields lower RMSEs throughout most of the iterations, although designs produced by optimizing the MI yield similar RMSEs at the final iteration.

For the LF problem, both MI and MTD are optimized using five random restarts, i.e., we generate five candidate designs and retain the best under the given objective. This approach serves not only to mitigate sensitivity to initialization but also to systematically improve design quality. In particular, for later iterations of the LF problem, where the posterior over $\theta$ becomes highly concentrated, the restart strategy provides a simple yet effective mechanism to ensure robustness against poor initializations.

In terms of runtime, for either problem, optimizing a single design under the MTD requires approximately 30 seconds of wall-clock time, whereas optimizing the same design with PCE takes roughly 120 seconds, with both objectives run to convergence. While these runtimes are sensitive to implementation choices and could likely be reduced through more careful tuning or normalization of compute budgets, the key observation is that the MTD is comparably fast to previous approaches, and potentially faster.

Figure 2: RMSE between posterior θ samples and ground-truth on the location finding problem, averaged over 25 seeds (± one standard error).

6.2 MTD under Transformations

While we may explicitly specify a cost for the MTD, costs can also be defined implicitly through transformations of the underlying sample spaces. Concretely, if $f : \Theta \to \Theta^\dagger$ and $g : \mathcal{Y} \to \mathcal{Y}^\dagger$ are two transformations, we may define $c^\dagger(\theta, \theta', y, y') = \|f(\theta) - f(\theta')\|^2 + \|g(y) - g(y')\|^2$, i.e., a quadratic cost in transformed coordinates. This is useful when we wish to measure errors in a particular space. Note that the MI between $f(\theta)$ and $g(y)$ is equal to that between $\theta$ and $y$ whenever $f$ and $g$ are injective, prohibiting the same trick from being meaningfully employed there.

We illustrate this on CES using the transformations
$$\sigma = 1/(1 - \rho), \qquad \beta_i = \log\big(\alpha_i / g(\alpha)\big), \qquad \tau = \log(u) \qquad (21)$$
where $g(\alpha)$ is the geometric mean of $\alpha$. These transformations are interpretable: $\sigma$ is the elasticity (Arrow et al., 1961), $\beta$ is the centered log-ratio of $\alpha$, capturing the relative importance of the goods, and $\tau = \log u$ is a natural reparameterization under its log-normal prior. To evaluate this approach, we generate designs using $T_{c^\dagger}(d)$ in the transformed variables, implicitly altering the cost (a sketch of this construction is given below). Table 1 reports posterior RMSEs for PCE, MTD on the original scale $T_c(d)$, and MTD with the transformed cost $T_{c^\dagger}(d)$.
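A sketch of how such an implicit cost can be set up for CES (helper names are ours; inputs are assumed batched along the leading axis, with broadcasting giving the full cost matrix): compute the transformed coordinates (21) once per sample and take the quadratic cost there.

```python
import numpy as np

def transform_theta(rho, alpha, u):
    """CES transformations (21): elasticity, centered log-ratio, log-utility."""
    sigma = 1.0 / (1.0 - rho)
    # log(alpha_i / g(alpha)) = log(alpha_i) - mean(log(alpha)), g = geometric mean
    beta = np.log(alpha) - np.log(alpha).mean(axis=-1, keepdims=True)
    tau = np.log(u)
    return sigma, beta, tau

def cost_dagger(theta, theta_p, y, y_p):
    """Quadratic cost c† in transformed coordinates for aligned sample pairs."""
    s, b, t = transform_theta(*theta)
    sp, bp, tp = transform_theta(*theta_p)
    return ((s - sp) ** 2 + ((b - bp) ** 2).sum(-1) + (t - tp) ** 2
            + ((y - y_p) ** 2).sum(-1))
```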
On the original parameters, $T_{c^\dagger}(d)$ performs comparably to $T_c(d)$ for $\rho$ and $\alpha$ but somewhat worse for $u$, suggesting the untransformed version remains preferable when evaluation is performed directly on the original parameters. On the transformed scale, $T_{c^\dagger}(d)$ and $T_c(d)$ perform similarly for $\beta$ and $\tau$. However, there are clearer differences in $\sigma$. In particular, we see that PCE exhibits a high RMSE in $\sigma$. This is because PCE occasionally yields poor designs which are unable to identify that $\rho \ne 1$, leading to high errors in $\sigma = (1 - \rho)^{-1}$. $T_c(d)$ generally yields higher quality designs which reliably identify $\rho$ and thus obtain low errors in $\sigma$. The transformed $T_{c^\dagger}(d)$, though, implicitly upweights designs where $\sigma$ is large, leading to low RMSE values. Notably, there is no natural analogue of changing PCE to target the RMSE in $\sigma$ directly. Overall, this provides evidence that the MTD can be tailored to particular downstream metrics.

Figure 3: Estimated values of the EIG (via PCE) and the MTD for the 2D location-finding model with two sources. Top row: the EIG is maximized at the origin, whereas the MTD favors off-center designs, illustrating its symmetry-breaking behavior. Bottom row: with additional weighting of the cost function, the MTD can be tuned to favor specific regions of the design space.

6.3 Weighted Cost Functions

We next evaluate the MTD on a variation of the LF problem which highlights its ability to incorporate downstream objectives through an appropriate choice of cost function. Here, the goal is not only to localize the sources, but also to rapidly determine whether a source lies in a critical region $R \subset \Theta$. To capture this preference, we define a weighted cost
$$c_w(\theta, y, \theta', y') = w(\theta)\,\|\theta - \theta'\|^2 + \|y - y'\|^2$$
where $(\theta, y) \sim p(\theta, y \mid d)$ and $(\theta', y') \sim p(\theta')p(y' \mid d)$, and the weight is given by $w(\theta) = b + \sum_{k=1}^{2} g(\theta_k - \mu)$, with $b > 0$ a bias and $g$ a bump function supported on $R$, a ball of radius 1.5 centered at $\mu = (1.5, -1.5)$. See Section D for details, and the sketch following this section for a concrete implementation. Intuitively, the cost is up-weighted whenever the "true" $\theta$ has a source in $R$. In such cases, the MTD $T_{c_w}(d)$ increases, thus yielding designs that prioritize detecting whether a source is present in $R$. We stress that this represents only one possible weighting scheme, and other choices could be used to encode different downstream preferences.

In Figure 3, we plot $T_{c_w}(d)$ under the weighted cost for the LF task. As intended, the objective is upweighted in $R$, encouraging designs to be placed in this region. We also note that the unweighted MTD (with the quadratic cost) exhibits a symmetry-breaking behavior, whereas the MI favors designs at the origin, thereby preserving symmetry after posterior updates (as in Figure 1).

We then design a sequence of T = 25 experiments using the weighted cost $c_w$. In Figure 4, we plot the mean zero-one loss of $\theta \in R$, i.e., $\mathbb{E}_{p(\theta \mid h_t)}\big[\big|\mathbb{1}[\theta_{\text{true}} \in R] - \mathbb{1}[\theta \in R]\big|\big]$, which directly measures whether we have detected $\theta \in R$. While PCE and the unweighted MTD eventually resolve this uncertainty, the weighted MTD achieves a much faster reduction.

Figure 4: Expected zero-one loss for the 2D location-finding model. The weighted MTD rapidly determines whether θ ∈ R.

We emphasize that the MI cannot be easily adapted to the task of first identifying whether $\theta \in R$ before exploring the rest of the space, demonstrating our framework's flexibility to encode task-specific preferences via the cost. Further, Figure 2 shows that the RMSE under $T_{c_w}$ matches that of $T_c(d)$, confirming that we have not sacrificed performance in identifying $\theta$ for this auxiliary objective.
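The sketch referenced above implements the weighted cost $c_w$ with the bump $g$ from Appendix D.1 (constants as stated there; we read the printed expression literally, with the weight multiplying the $\theta$ term, and the names are ours).

```python
import numpy as np

def bump(x, k=1e4, alpha=0.5, beta=1.0, slope=0.3):
    """Bump function g(x) from Appendix D.1, eq. (82)."""
    r2 = (x ** 2).sum(-1)
    z = slope * (r2 - alpha ** 2) / (beta ** 2 - alpha ** 2)
    return k * (1.0 - 1.0 / (1.0 + np.exp(-z)))       # k * (1 - sigmoid(z))

def weight(theta, b=1.0, mu=np.array([1.5, -1.5])):
    # theta: (..., 2, 2), holding the two source locations theta_1, theta_2
    return b + bump(theta[..., 0, :] - mu) + bump(theta[..., 1, :] - mu)

def weighted_cost(theta, y, theta_p, y_p):
    # c_w = w(theta) * |theta - theta'|^2 + |y - y'|^2, with theta from the joint
    dth2 = ((theta - theta_p) ** 2).sum((-2, -1))
    dy2 = ((y - y_p) ** 2).sum(-1)
    return weight(theta) * dth2 + dy2
```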
7 Conclusion

We introduce the mutual transport dependence (MTD), a novel class of geometric criteria for optimal experimental design. By quantifying the value of an experiment through an optimal transport divergence with an explicit sample-level cost, the MTD allows us to encode domain knowledge and task-specific objectives directly into the design criterion. We show that optimizing the MTD produces highly effective designs on standard benchmarks, and that tailoring the cost function enables alignment with particular experimental goals. Overall, the MTD offers a flexible, geometry-aware objective for OED, providing a practical tool for designing experiments that reflect both statistical dependence and the experimenter's real-world priorities.

Acknowledgements

GK and TR are supported by the UK EPSRC grant EP/Y037200/1.

References

Ambrosio, L. and Gigli, N. (2012). A user's guide to optimal transport. In Modelling and Optimisation of Flows on Networks: Cetraro, Italy 2009 (B. Piccoli and M. Rascle, eds.), pages 1-155. Springer.

Ao, Z. and Li, J. (2024). On estimating the gradient of the expected information gain in Bayesian experimental design. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 20311-20319.

Arrow, K. J., Chenery, H. B., Minhas, B. S., and Solow, R. M. (1961). Capital-labor substitution and economic efficiency. The Review of Economics and Statistics, pages 225-250.

Baptista, R., Pooladian, A.-A., Brennan, M., Marzouk, Y., and Niles-Weed, J. (2024). Conditional simulation via entropic optimal transport: Toward nonparametric estimation of conditional Brenier maps. arXiv preprint.

Bernardo, J. M. (1979). Expected information as expected utility. The Annals of Statistics, pages 686-690.

Bertsekas, D. P. (1971). Control of uncertain systems with a set-membership description of the uncertainty. PhD thesis, Massachusetts Institute of Technology.

Bertsekas, D. P. (1997). Nonlinear programming. Journal of the Operational Research Society, 48(3):334-334.

Bickford Smith, F., Kossen, J., Trollope, E., van der Wilk, M., Foster, A., and Rainforth, T. (2025). Rethinking aleatoric and epistemic uncertainty. In Forty-second International Conference on Machine Learning.

Blau, T., Bonilla, E. V., Chades, I., and Dezfouli, A. (2022). Optimizing sequential experimental design with deep reinforcement learning. In International Conference on Machine Learning, pages 2107-2128. PMLR.

Blower, G. (2003). The Gaussian isoperimetric inequality and transportation. Positivity, 7(3):203-224.

Boissard, E. (2011). Simple bounds for the convergence of empirical and occupation measures in 1-Wasserstein distance. Electronic Journal of Probability, 16:2296-2333.

Boissard, E. and Gouic, T. L. (2014). On the mean speed of convergence of empirical and occupation measures in Wasserstein distance. Annales de l'Institut Henri Poincaré, Probabilités et Statistiques, 50(2):539-563.

Bolley, F., Guillin, A., and Villani, C. (2007). Quantitative concentration inequalities for empirical measures on non-compact spaces.
Probability Theory and Related Fields, 137(3):541-593.

Bonneel, N., Van De Panne, M., Paris, S., and Heidrich, W. (2011). Displacement interpolation using Lagrangian mass transport. In Proceedings of the 2011 SIGGRAPH Asia Conference, pages 1-12.

Boyd, S. P. and Vandenberghe, L. (2004). Convex Optimization. Cambridge University Press.

Carlier, G., Chernozhukov, V., and Galichon, A. (2016). Vector quantile regression: An optimal transport approach. The Annals of Statistics, 44(3):1165-1192.

Chaloner, K. and Verdinelli, I. (1995). Bayesian experimental design: A review. Statistical Science, pages 273-304.

Chemseddine, J., Hagemann, P., Steidl, G., and Wald, C. (2024). Conditional Wasserstein distances with applications in Bayesian OT flow matching. arXiv preprint.

Cranmer, K., Brehmer, J., and Louppe, G. (2020). The frontier of simulation-based inference. Proceedings of the National Academy of Sciences, 117(48):30055-30062.

Csiszár, I. (1967). On information-type measure of difference of probability distributions and indirect observations. Studia Sci. Math. Hungarica, 2:299-318.

Danskin, J. (1967). The Theory of Max-Min and Its Applications to Weapons Allocation Problems. Econometrics and Operations Research. Springer.

Dawid, A. P. (1998). Coherent measures of discrepancy, uncertainty and dependence, with applications to Bayesian predictive experimental design. Tech. Rep. 139, http://www.ucl.ac.uk/Stats/research/abs94.html.

DeGroot, M. H. (1962). Uncertainty, information, and sequential experiments. The Annals of Mathematical Statistics, 33(2):404-419.

Doucet, A., De Freitas, N., and Gordon, N. (2001). An introduction to sequential Monte Carlo methods. In Sequential Monte Carlo Methods in Practice, pages 3-14. Springer.

Dudley, R. M. (1969). The speed of mean Glivenko-Cantelli convergence. The Annals of Mathematical Statistics, 40(1):40-50.

Encinar, P. C., Schröder, T., Yatsyshin, P., and Duncan, A. B. (2025). Deep optimal sensor placement for black box stochastic simulations. In Frontiers in Probabilistic Inference: Learning meets Sampling.

Feydy, J., Séjourné, T., Vialard, F.-X., Amari, S.-i., Trouvé, A., and Peyré, G. (2019). Interpolating between optimal transport and MMD using Sinkhorn divergences. In The 22nd International Conference on Artificial Intelligence and Statistics, pages 2681-2690. PMLR.

Fisher, R. A. (1935). The Design of Experiments. Oliver and Boyd.

Foster, A., Ivanova, D. R., Malik, I., and Rainforth, T. (2021). Deep adaptive design: Amortizing sequential Bayesian experimental design. In International Conference on Machine Learning, pages 3384-3395. PMLR.

Foster, A., Jankowiak, M., Bingham, E., Horsfall, P., Teh, Y. W., Rainforth, T., and Goodman, N. (2019). Variational Bayesian optimal experimental design. Advances in Neural Information Processing Systems, 32.

Foster, A., Jankowiak, M., O'Meara, M., Teh, Y. W., and Rainforth, T. (2020). A unified stochastic gradient approach to designing Bayesian-optimal experiments. In International Conference on Artificial Intelligence and Statistics, pages 2959-2969. PMLR.

Foster, A. E. (2021). Variational, Monte Carlo and policy-based approaches to Bayesian experimental design. PhD thesis, University of Oxford.

Fournier, N. and Guillin, A. (2015). On the rate of convergence in Wasserstein distance of the empirical measure. Probability Theory and Related Fields, 162(3):707-738.

Gozlan, N. and Léonard, C. (2010). Transport inequalities: A survey.
arXiv preprint.

Hedman, M., Ivanova, D. R., Guan, C., and Rainforth, T. (2025). Step-DAD: Semi-amortized policy-based Bayesian experimental design. arXiv preprint.

Helin, T., Marzouk, Y., and Rojo-Garcia, J. R. (2025). Bayesian optimal experimental design with Wasserstein information criteria. arXiv preprint.

Hjelm, R. D., Fedorov, A., Lavoie-Marchildon, S., Grewal, K., Bachman, P., Trischler, A., and Bengio, Y. (2018). Learning deep representations by mutual information estimation and maximization. arXiv preprint.

Hosseini, B., Hsu, A. W., and Taghvaei, A. (2025). Conditional optimal transport on function spaces. SIAM/ASA Journal on Uncertainty Quantification, 13(1):304-338.

Huan, X., Jagalur, J., and Marzouk, Y. (2024). Optimal experimental design: Formulations and computations. Acta Numerica, 33:715-840.

Huan, X. and Marzouk, Y. M. (2014). Gradient-based stochastic optimization methods in Bayesian experimental design. International Journal for Uncertainty Quantification, 4(6).

Iollo, J., Heinkelé, C., Alliez, P., and Forbes, F. (2024a). Bayesian experimental design via contrastive diffusions. arXiv preprint.

Iollo, J., Heinkelé, C., Alliez, P., and Forbes, F. (2024b). PASOA - particle based Bayesian optimal adaptive design. arXiv preprint.

Ivanova, D. R., Foster, A., Kleinegesse, S., Gutmann, M. U., and Rainforth, T. (2021). Implicit deep adaptive design: Policy-based experimental design without likelihoods. Advances in Neural Information Processing Systems, 34:25785-25798.

Kallenberg, O. (1997). Foundations of Modern Probability. Springer.

Kantorovich, L. V. (1942). On the translocation of masses. In Dokl. Akad. Nauk. USSR (NS), volume 37, pages 199-201.

Kerrigan, G., Migliorini, G., and Smyth, P. (2024). Dynamic conditional optimal transport through simulation-free flows. Advances in Neural Information Processing Systems, 37:93602-93642.

Kiefer, J. (1959). Optimum experimental designs. Journal of the Royal Statistical Society: Series B (Methodological), 21(2):272-304.

Kiefer, J. (1974). General equivalence theory for optimum designs (approximate theory). The Annals of Statistics, pages 849-879.

Kingma, D. P. (2014). Adam: A method for stochastic optimization. arXiv preprint.

Kleinegesse, S. and Gutmann, M. U. (2019). Efficient Bayesian experimental design for implicit models. In The 22nd International Conference on Artificial Intelligence and Statistics, pages 476-485. PMLR.

Kleinegesse, S. and Gutmann, M. U. (2020). Bayesian experimental design for implicit models by mutual information neural estimation. In International Conference on Machine Learning, pages 5316-5326. PMLR.

Kleinegesse, S. and Gutmann, M. U. (2021). Gradient-based Bayesian experimental design for implicit models using mutual information lower bounds. arXiv preprint.

Kuhfeld, W. F., Tobias, R. D., and Garratt, M. (1994). Efficient experimental design with marketing research applications. Journal of Marketing Research, 31(4):545-557.

Kullback, S. (1967). A lower bound for discrimination information in terms of variation. IEEE Transactions on Information Theory, 13(1):126-127.

Leteno, T., Gourru, A., Laclau, C., Emonet, R., and Gravier, C. (2023). Fair text classification with Wasserstein independence. arXiv preprint.

Lindley, D. V. (1956). On a measure of the information provided by an experiment. The Annals of Mathematical Statistics, 27(4):986-1005.

Lindley, D. V. (1972). Bayesian Statistics: A Review. SIAM.

MacKay, D. J. C. (1992).
Information-based objective functions for active data selection. Neural Computation.

Massart, P. (2007). Concentration Inequalities and Model Selection. Springer.

Melendez, J., Furnstahl, R., Grießhammer, H., McGovern, J., Phillips, D., and Pratola, M. (2021). Designing optimal experiments: An application to proton Compton scattering. The European Physical Journal A, 57(3):81.

Nguyen, X., Wainwright, M. J., and Jordan, M. I. (2010). Estimating divergence functionals and the likelihood ratio by convex risk minimization. IEEE Transactions on Information Theory, 56(11):5847-5861.

Nies, T. G., Staudt, T., and Munk, A. (2025). Transport dependency: Optimal transport based dependency measures. The Annals of Applied Probability, 35(4):2292-2362.

Ozair, S., Lynch, C., Bengio, Y., Van den Oord, A., Levine, S., and Sermanet, P. (2019). Wasserstein dependency measure for representation learning. Advances in Neural Information Processing Systems, 32.

Papp, T. and Sherlock, C. (2025). Centered plug-in estimation of Wasserstein distances. arXiv preprint.

Park, M., Nassar, M., and Vikalo, H. (2013). Bayesian active learning for drug combinations. IEEE Transactions on Biomedical Engineering, 60(11):3248-3255.

Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al. (2019). PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 32.

Peyré, G., Cuturi, M., et al. (2019). Computational optimal transport: With applications to data science. Foundations and Trends in Machine Learning, 11(5-6):355-607.

Pinsker, M. S. (1964). Information and Information Stability of Random Variables and Processes. Holden-Day.

Polyanskiy, Y. and Wu, Y. (2025). Information Theory: From Coding to Learning. Cambridge University Press.

Pukelsheim, F. (2006). Optimal Design of Experiments. SIAM.

Rainforth, T., Cornish, R., Yang, H., Warrington, A., and Wood, F. (2018). On nesting Monte Carlo estimators. In International Conference on Machine Learning, pages 4267-4276. PMLR.

Rainforth, T., Foster, A., Ivanova, D. R., and Bickford Smith, F. (2024). Modern Bayesian experimental design. Statistical Science, 39(1):100-114.

Ryan, E. G., Drovandi, C. C., McGree, J. M., and Pettitt, A. N. (2016). A review of modern computational algorithms for Bayesian optimal design. International Statistical Review, 84(1):128-154.

Salvatier, J., Wiecki, T. V., and Fonnesbeck, C. (2016). Probabilistic programming in Python using PyMC3. PeerJ Computer Science, 2:e55.

Sebastiani, P. and Wynn, H. P. (2000). Maximum entropy sampling and optimal Bayesian experimental design. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 62(1):145-157.

Sheng, X. and Hu, Y.-H. (2005). Maximum likelihood multiple-source localization using acoustic energy measurements with wireless sensor networks. IEEE Transactions on Signal Processing, 53(1):44-53.

Vanlier, J., Tiemann, C. A., Hilbers, P. A., and van Riel, N. A. (2012). A Bayesian approach to targeted experiment design. Bioinformatics, 28(8):1136-1142.

Villani, C. et al. (2008). Optimal Transport: Old and New, volume 338. Springer.

Wald, A. (1943). On the efficient design of statistical investigations. The Annals of Mathematical Statistics, 14(2):134-140.

Warren, A. (2021). Wasserstein conditional independence testing. arXiv preprint.

Weed, J. and Bach, F. (2019). Sharp asymptotic and finite-sample rates of convergence of empirical measures in Wasserstein distance. Bernoulli, 25(4A):2620-2648.
Wiesel, J. C. (2022). Measuring association with Wasserstein distances. Bernoulli, 28(4):2816-2832.

A Geometric Approach to Optimal Experimental Design: Supplementary Materials

A Transport Dependencies

In this section, we provide a more formal discussion of the MTD. We work under standard assumptions throughout, which are sufficient to guarantee that a solution to the OT problem exists.

A1. The spaces $\Theta$ and $\mathcal{Y}$ are Polish spaces.
A2. All cost functions are lower-semicontinuous and non-negative.

Under Assumptions A1-A2, minimizers of the OT problem in Equation (6) defined on either $\Theta$ or $\mathcal{Y}$ are guaranteed to exist (Ambrosio and Gigli, 2012, Theorem 1.5). Similarly, if $\Theta, \mathcal{Y}$ are Polish, then $\Theta \times \mathcal{Y}$ is Polish when equipped with the product topology, so that minimizers of an OT problem on the product space also exist under Assumptions A1-A2. Additional assumptions on $c$ are necessary, though, to guarantee that $T_c(d)$ (and the DTD/TTD) is finite. A trivially sufficient condition is that $c$ is bounded from above. Other sufficient conditions can be given under moment assumptions on the corresponding densities; see Lemma 1 and Theorem 4 below.

As discussed in the main paper, the TTD and DTD are closely related to notions arising from conditional optimal transport (Carlier et al., 2016; Hosseini et al., 2025; Kerrigan et al., 2024; Chemseddine et al., 2024; Baptista et al., 2024). Viewing our transport dependencies through this lens is a fruitful avenue for theoretical analysis. We begin by recalling the notion of a triangular coupling (Hosseini et al., 2025), which gives a notion of couplings that fix certain variables.

Definition 4 (Triangular Couplings). A coupling $\gamma \in \Pi(p(\theta, y \mid d), p(\theta)p(y \mid d))$ is said to be $\mathcal{Y}$-triangular if draws $(\theta, y, \theta', y') \sim \gamma$ are such that $y = y'$ almost surely. Similarly, $\gamma$ is said to be $\Theta$-triangular if draws $(\theta, y, \theta', y') \sim \gamma$ are such that $\theta = \theta'$ almost surely.

For the sake of brevity, we will write $\Pi := \Pi(p(\theta, y \mid d), p(\theta)p(y \mid d))$ for the set of all couplings, $\Pi_\mathcal{Y} := \Pi_\mathcal{Y}(p(\theta, y \mid d), p(\theta)p(y \mid d))$ for the set of $\mathcal{Y}$-triangular couplings, and $\Pi_\Theta := \Pi_\Theta(p(\theta, y \mid d), p(\theta)p(y \mid d))$ for the set of $\Theta$-triangular couplings.

We begin with a lemma which allows us to bound the MTD by the DTD and TTD. This result shows that if the MTD is large, then both corresponding transport divergences on $\Theta$ and $\mathcal{Y}$ must also be large. This is particularly interpretable in terms of the TTD $T^{(\theta)}_c(d)$, where we see that a large MTD implies that there is a large transport divergence between the posterior and the prior, on average across the marginal $p(y \mid d)$.

Lemma 1. Fix $p \in [1, \infty)$. Suppose $\Theta, \mathcal{Y}$ are separable Hilbert spaces. Consider the cost function $c(\theta, y, \theta', y') = \eta\|\theta - \theta'\|^p + \|y - y'\|^p$ for a given $\eta > 0$. Write $c_\Theta(\theta, \theta') = \|\theta - \theta'\|^p$ and $c_\mathcal{Y}(y, y') = \|y - y'\|^p$. Assume that $p(\theta, y \mid d)$ has finite $p$th moment for a given $d \in \mathcal{D}$. Then,
$$T_c(d) \le \eta\, T^{(\theta)}_{c_\Theta}(d) \qquad\text{and}\qquad T_c(d) \le T^{(y)}_{c_\mathcal{Y}}(d). \qquad (22)$$
Furthermore, both $T^{(\theta)}_{c_\Theta}(d)$ and $T^{(y)}_{c_\mathcal{Y}}(d)$ are finite.

Proof. First consider the case $\eta = 1$. Observe that $p(\theta, y \mid d)$ having finite $p$th moments immediately implies that both $p(\theta)$ and $p(y \mid d)$ also have finite $p$th moments. Note further that $p(\theta, y \mid d)$ and $p(\theta)p(y \mid d)$ have the same marginals in both the $\Theta$ and $\mathcal{Y}$ spaces. It follows that both $T^{(\theta)}_{c_\Theta}(d)$ and $T^{(y)}_{c_\mathcal{Y}}(d)$ are $p$th powers of conditional Wasserstein metrics (Kerrigan et al., 2024, Definition 2).
By Kerrigan et al. (2024, Prop. 2), both $T^{(\theta)}_{c_\Theta}(d)$ and $T^{(y)}_{c_\mathcal{Y}}(d)$ are finite, and furthermore conditional Wasserstein metrics upper bound the Wasserstein metric on the corresponding joint measures. This proves the claim for $\eta = 1$.

For the general $\eta > 0$ case, observe that we can equip $\Theta$ with the alternative inner product $\langle\theta, \theta'\rangle_\eta = \eta^{2/p}\langle\theta, \theta'\rangle_\Theta$, which yields the norm $\|\theta\|_\eta = \eta^{1/p}\|\theta\|_\Theta$. Since the preceding argument applies to general separable Hilbert spaces, it also holds when $\Theta$ is equipped with this alternative norm, yielding
$$T_c(d) \le T^{(\theta)}_{c_{\Theta,\eta}}(d) \qquad (23)$$
for $c_{\Theta,\eta}(\theta, \theta') = \eta\|\theta - \theta'\|^p$. Further, observe that
$$T^{(\theta)}_{c_{\Theta,\eta}}(d) = \int_\mathcal{Y} \Big[\min_{\gamma \in \Pi(p(\theta),\, p(\theta \mid y, d))} \int_{\Theta^2} \eta\|\theta - \theta'\|^p \,\mathrm{d}\gamma(\theta, \theta')\Big]\, p(y \mid d)\,\mathrm{d}y \qquad (24)$$
$$= \eta \int_\mathcal{Y} \Big[\min_{\gamma \in \Pi(p(\theta),\, p(\theta \mid y, d))} \int_{\Theta^2} \|\theta - \theta'\|^p \,\mathrm{d}\gamma(\theta, \theta')\Big]\, p(y \mid d)\,\mathrm{d}y \qquad (25)$$
$$= \eta\, T^{(\theta)}_{c_\Theta}(d). \qquad (26)$$
This yields the desired claim.

A.1 Moment Bounds

In this section, we prove several upper bounds on our transport dependencies which rely on moments of the underlying distributions. In particular, the following theorem shows that there is a link between the DTD $T^{(y)}_c(d)$ (and thus also the MTD, by Lemma 1) and the predictive variance of $p(y \mid d)$. Intuitively, this means that maximizing the DTD (and MTD) should select designs for which there is a high amount of variance in the experimental outcome.

Theorem 4. Suppose $\Theta, \mathcal{Y}$ are separable Hilbert spaces. Fix $p \in [1, \infty)$ and suppose that $p(\theta, y \mid d)$ has finite $p$th moment. For $c_\Theta(\theta, \theta') = \|\theta - \theta'\|^p$, we have
$$T^{(\theta)}_{c_\Theta}(d) \le 2^p\, \mathbb{E}_{p(\theta)}\|\theta - \mathbb{E}[\theta]\|^p. \qquad (27)$$
Similarly, for $c_\mathcal{Y}(y, y') = \|y - y'\|^p$,
$$T^{(y)}_{c_\mathcal{Y}}(d) \le 2^p\, \mathbb{E}_{p(y \mid d)}\|y - \mathbb{E}[y \mid d]\|^p. \qquad (28)$$

Proof. We begin with the bound on $T^{(\theta)}_{c_\Theta}(d)$. Observe that $\gamma(\theta, y, \theta', y') = p(\theta, y \mid d)\,p(\theta')\,\delta[y' = y]$ is a valid triangular coupling of $p(\theta, y \mid d)$ and $p(\theta')p(y' \mid d)$. By the convexity of $x \mapsto x^p$, we have
$$T^{(\theta)}_{c_\Theta}(d) \le \mathbb{E}_\gamma\|\theta - \theta'\|^p = \mathbb{E}_\gamma\|(\theta - \mathbb{E}_{p(\theta)}[\theta]) - (\theta' - \mathbb{E}_{p(\theta)}[\theta])\|^p \qquad (29)$$
$$\le 2^{p-1}\,\mathbb{E}_\gamma\big[\|\theta - \mathbb{E}\theta\|^p + \|\theta' - \mathbb{E}\theta\|^p\big] \qquad (30)$$
$$= 2^{p-1}\Big(\int_{\Theta^2\times\mathcal{Y}} \|\theta - \mathbb{E}\theta\|^p \,\mathrm{d}p(\theta')\,\mathrm{d}p(\theta, y \mid d) + \int_{\Theta^2\times\mathcal{Y}} \|\theta' - \mathbb{E}\theta\|^p \,\mathrm{d}p(\theta')\,\mathrm{d}p(\theta, y \mid d)\Big) \qquad (31)$$
$$= 2^p\,\mathbb{E}_{p(\theta)}\|\theta - \mathbb{E}\theta\|^p, \qquad (32)$$
where the last line follows by marginalization. The proof for $T^{(y)}_{c_\mathcal{Y}}(d)$ is analogous.

We note that in the case $p = 2$, one may use the identity
$$\mathbb{E}_\gamma\|\theta - \theta'\|^2 = \mathbb{E}_\gamma\|(\theta - \mathbb{E}\theta) - (\theta' - \mathbb{E}\theta)\|^2 \qquad (33)$$
$$= 2\,\mathbb{E}_{p(\theta)}\|\theta - \mathbb{E}\theta\|^2 - 2\,\mathbb{E}_\gamma\langle\theta - \mathbb{E}\theta,\; \theta' - \mathbb{E}\theta\rangle \qquad (34)$$
rather than convexity to obtain a sharper constant.
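As a quick Monte Carlo sanity check of the bound (27) with $p = 2$ (the setup and names here are ours), consider a toy model with binary $y$, where the TTD reduces to a $p(y \mid d)$-weighted average of one-dimensional quadratic OT costs, computable by quantile matching.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 4000, 500
theta = rng.standard_normal(n)
y = (theta + 0.7 * rng.standard_normal(n) > 0).astype(float)  # binary outcome

def w2_1d(a, b):
    """Quadratic OT between 1D empirical measures via quantile matching."""
    qs = (np.arange(m) + 0.5) / m
    return np.mean((np.quantile(a, qs) - np.quantile(b, qs)) ** 2)

# TTD: average over y of OT(p(theta | y), p(theta))
ttd = sum((y == v).mean() * w2_1d(theta[y == v], theta) for v in (0.0, 1.0))
bound = 2 ** 2 * np.var(theta)   # 2^p * E|theta - E theta|^p with p = 2
print(ttd, bound)                # the estimated TTD sits well below the bound
```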
A.2 Limiting Cases of the MTD

In this section, we prove that the TTD and DTD can be obtained as limiting cases of the MTD under a particular choice of cost. We refer to Section 3.2 for a discussion of this result and its implications for OED. The following is a formal restatement of Theorem 1.

Theorem 5. Suppose $\Theta, \mathcal{Y}$ are separable Hilbert spaces. Fix $p \in [1, \infty)$ and assume that $p(\theta, y \mid d)$ has finite $p$th moment. Consider the cost
$$c_\eta(\theta, y, \theta', y') = \eta\|\theta - \theta'\|_\Theta^p + \|y - y'\|_\mathcal{Y}^p \qquad (35)$$
for $\eta > 0$. Then $\eta^{-1} T_{c_\eta}(d) \to T^{(\theta)}_{c_\Theta}(d)$ as $\eta \to 0^+$, where $c_\Theta(\theta, \theta') = \|\theta - \theta'\|_\Theta^p$. Similarly, for
$$c_\psi(\theta, y, \theta', y') = \|\theta - \theta'\|_\Theta^p + \psi\|y - y'\|_\mathcal{Y}^p \qquad (36)$$
we have $\psi^{-1} T_{c_\psi}(d) \to T^{(y)}_{c_\mathcal{Y}}(d)$ as $\psi \to 0^+$, where $c_\mathcal{Y}(y, y') = \|y - y'\|_\mathcal{Y}^p$.

Proof. We begin with the first claim. Let $\gamma^* \in \Pi_\mathcal{Y}$ be an optimal $\mathcal{Y}$-triangular coupling for the cost $c_\Theta(\theta, \theta') = \|\theta - \theta'\|^p$ and let $\gamma_\eta \in \Pi$ be an optimal coupling for the cost $c_\eta$. Such optimal couplings exist and yield finite costs due to our moment assumption and Theorem 4. Since $\Pi_\mathcal{Y} \subset \Pi$, using the $\mathcal{Y}$-triangularity of $\gamma^*$ we may upper bound the MTD under the cost $c_\eta$ by
$$T_{c_\eta}(d) = \int c_\eta \,\mathrm{d}\gamma_\eta \le \int c_\eta \,\mathrm{d}\gamma^* \qquad (37)$$
$$= \int \eta\|\theta - \theta'\|^p \,\mathrm{d}\gamma^* + \int \|y - y'\|^p \,\mathrm{d}\gamma^* \qquad (38)$$
$$= \eta \int \|\theta - \theta'\|^p \,\mathrm{d}\gamma^*. \qquad (39)$$
Consequently, by expanding out the definition of $T_{c_\eta}(d)$ we see that
$$0 \le \eta^{-1}\int \|y - y'\|^p \,\mathrm{d}\gamma_\eta \le \int \|\theta - \theta'\|^p \,\mathrm{d}(\gamma^* - \gamma_\eta). \qquad (40)$$
This yields $\int\|\theta - \theta'\|^p \,\mathrm{d}\gamma_\eta \le \int\|\theta - \theta'\|^p \,\mathrm{d}\gamma^*$, so that $\limsup_{\eta\to 0^+} \int\|\theta - \theta'\|^p\,\mathrm{d}\gamma_\eta \le \int\|\theta - \theta'\|^p\,\mathrm{d}\gamma^*$, which is finite as we assume $p(\theta, y \mid d)$ has finite $p$th moment. Hosseini et al. (2025, Prop. 3.11) show that as $\eta \to 0^+$, we have $\gamma_\eta \to \gamma^*$ in the weak sense. By the Portmanteau theorem, this weak convergence implies $\liminf_{\eta\to 0^+} \int\|\theta - \theta'\|^p\,\mathrm{d}\gamma_\eta \ge \int\|\theta - \theta'\|^p\,\mathrm{d}\gamma^*$. We have thus shown
$$\lim_{\eta\to 0^+} \int\|\theta - \theta'\|^p\,\mathrm{d}\gamma_\eta = \int\|\theta - \theta'\|^p\,\mathrm{d}\gamma^*. \qquad (41)$$
By Equation (40), we thus also have
$$\lim_{\eta\to 0^+} \eta^{-1}\int\|y - y'\|^p\,\mathrm{d}\gamma_\eta = 0. \qquad (42)$$
Together, Equation (41) and Equation (42) imply that
$$\lim_{\eta\to 0^+} \eta^{-1} T_{c_\eta}(d) = \lim_{\eta\to 0^+} \Big(\int\|\theta - \theta'\|^p\,\mathrm{d}\gamma_\eta + \eta^{-1}\int\|y - y'\|^p\,\mathrm{d}\gamma_\eta\Big) \qquad (43)$$
$$= \int\|\theta - \theta'\|^p\,\mathrm{d}\gamma^* = T^{(\theta)}_{c_\Theta}(d). \qquad (44)$$
The second claim can be shown with a directly analogous argument, interchanging the roles of $\theta$ and $y$.

A.3 Estimation and Optimization

Estimation. Here we include further details regarding the estimation and optimization of $T_c(d)$. We focus on the setting where $\theta, y$ are continuous. In principle, to form our plug-in estimate $\hat T_c(d)$ in Equation (15), we require samples
$$(\theta_j, y_j) \overset{\text{i.i.d.}}{\sim} p(\theta, y \mid d), \quad j = 1, \ldots, n, \qquad (\theta'_k, y'_k) \overset{\text{i.i.d.}}{\sim} p(\theta)p(y \mid d), \quad k = 1, \ldots, n. \qquad (45)$$
The product of marginals $p(\theta)p(y \mid d)$ can be sampled by drawing $\theta'_k, \theta''_k \sim p(\theta)$ followed by sampling $y'_k \sim p(y \mid \theta''_k, d)$. In principle, this requires $2n$ draws from the prior and simulations from the likelihood. However, in practice we reduce this to $n$ draws by first obtaining $(\theta_j, y_j) \overset{\text{i.i.d.}}{\sim} p(\theta, y \mid d)$ and then choosing a derangement $\sigma$ (i.e., a permutation with no fixed points) and defining $\theta'_j = \theta_j$, $y'_j = y_{\sigma(j)}$, breaking the dependency. This allows for a computational speedup (particularly when simulating the likelihood is expensive) and can further serve to reduce the bias of our estimator. We will write $\mu_n, \nu_n$ for these two empirical measures, i.e.,
$$\mu_n = \frac{1}{n}\sum_{j=1}^{n} \delta_{(\theta_j, y_j)}, \qquad \nu_n = \frac{1}{n}\sum_{k=1}^{n} \delta_{(\theta'_k, y'_k)}. \qquad (46)$$
This yields the plug-in estimator
$$T_c(d) \approx \hat T_c(d) = \mathrm{OT}_c[\mu_n, \nu_n], \qquad (47)$$
which can be solved using efficient linear-programming techniques (Bonneel et al., 2011; Peyré et al., 2019; Papp and Sherlock, 2025). In particular, this requires forming the cost matrix $C(d) \in \mathbb{R}^{n\times n}$ with entries $C_{j,k}(d) = c(\theta_j, y_j, \theta'_k, y'_k)$. Note that $C(d)$ is a function of $d$, as $y_j, y'_k$ depend on $d$ through the likelihood. Further, the value of $\hat T_c(d)$ is determined entirely by this cost matrix $C(d)$. We will write $G(C)$ for the minimal transport cost obtained for a given cost matrix $C$.

Optimization. We require an estimate of $\nabla_d T_c(d)$, which we obtain by computing the gradient $\nabla_d \hat T_c(d)$ of our plug-in estimator. Key to computing this is the envelope theorem (Danskin, 1967; Bertsekas, 1971, 1997), which shows that we can obtain the gradient of $C \mapsto G(C)$ in terms of the optimal transport plan. To be more precise, this mapping is not differentiable, but we use $\partial G(C)$ to represent the superdifferential (Boyd and Vandenberghe, 2004) of $G$ at $C$, i.e., the set of all $v \in \mathbb{R}^{n\times n}$ satisfying
$$G(C') - G(C) \le \langle v, C' - C\rangle_F \quad \forall C' \in \mathbb{R}^{n\times n} \qquad (48)$$
for the Frobenius inner product $\langle\cdot,\cdot\rangle_F$. The following theorem shows that $\partial G(C)$ is precisely the set of optimal plans (Peyré et al., 2019, Prop. 9.2), which in general may be non-unique.

Theorem 6. For the mapping $C \mapsto G(C)$, we have
$$\partial G(C) = \Big\{\gamma^* \in \Pi(\mu_n, \nu_n) : \gamma^* \in \operatorname*{argmin}_{\Pi(\mu_n,\nu_n)} K_c(\gamma)\Big\}. \qquad (49)$$
Thus, computing a supergradient of $G(C)$ requires no more computation than solving the OT problem itself, as it is simply the value of the optimal plan found when solving the linear program. The (super-)gradient $\nabla_d \hat T_c(d)$ then requires us to use the chain rule to compute $\nabla_d G(C(d))$:
$$\nabla_d \hat T_c(d) = \sum_{j,k=1}^{n} \gamma^*_{jk}\, \nabla_d C_{jk}(d) \qquad (50)$$
$$= \sum_{j,k=1}^{n} \gamma^*_{jk}\Big(\nabla_d y_j(\eta, \theta, d)^\top\, \partial_2 c(\theta_j, y_j, \theta'_k, y'_k) + \nabla_d y'_k(\eta, \theta, d)^\top\, \partial_4 c(\theta_j, y_j, \theta'_k, y'_k)\Big) \qquad (51)$$
where the second equality follows from directly computing $\nabla_d C_{jk}(d)$. Here, we use $\partial_2 c$ and $\partial_4 c$ to denote the partial derivatives of $c$ with respect to its second and fourth arguments, and $\nabla_d y(\eta, \theta, d) \in \mathbb{R}^{d_y \times d_d}$ is the Jacobian of $y$ with respect to $d$. In practice, $\nabla_d C_{jk}(d)$ can be computed via automatic differentiation when we have a differentiable cost function and a differentiable sampler. In particular, as discussed in Section 3.3, using the noise outsourcing lemma (Kallenberg, 1997), for any fixed noise distribution with appropriate reference measure, $q(\eta)$, there exists (subject to weak assumptions) a function $h$ such that $y(\eta, \theta, d) = h(\eta; \theta, d)$ for $\eta \sim q(\eta)$. If we further assume that $d \mapsto h(\eta; \theta, d)$ is differentiable for our chosen $q(\eta)$, we may use automatic differentiation to compute $\nabla_d y(\eta, \theta, d)$. We refer to Peyré et al. (2019, Chapter 9) for a further discussion of the differentiability of optimal transport discrepancies. While in principle $\nabla_d \hat T_c(d)$ is merely a supergradient, this is sufficient for the purposes of performing stochastic gradient-ascent based procedures on the MTD.
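In PyTorch, this recipe amounts to solving the assignment LP on a detached cost matrix and then backpropagating through $C(d)$ with the plan held fixed, which is exactly the envelope-theorem supergradient (50). A minimal sketch of our own, with a hypothetical reparameterized sampler `sample_y(theta, d)` standing in for $h(\eta; \theta, d)$:

```python
import torch
from scipy.optimize import linear_sum_assignment

def mtd_ascent_step(d, sample_y, theta, theta_pp, lr=2e-2):
    """One stochastic gradient-ascent step on the plug-in MTD (quadratic cost).

    d: design tensor with requires_grad=True; theta, theta_pp: (n, p) samples
    from the prior/posterior; sample_y(theta, d) must be differentiable in d.
    """
    y = sample_y(theta, d)                          # joint samples
    y_p = torch.roll(sample_y(theta_pp, d), 1, 0)   # shuffle -> product samples
    C = (((theta[:, None] - theta_pp[None, :]) ** 2).sum(-1)
         + (y[:, None] - y_p[None, :]) ** 2)
    r, c = linear_sum_assignment(C.detach().numpy())  # optimal plan, held fixed
    loss = -C[r, c].mean()                            # ascend on \hat T_c(d)
    loss.backward()
    with torch.no_grad():
        d -= lr * d.grad
        d.grad.zero_()
    return d
```

Holding the plan fixed is valid precisely because the optimal plan is a supergradient of $G$ at $C(d)$; autodiff then only needs to differentiate the cost-matrix entries.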
B Transport Dependencies and Mutual Information

In this section, we give a proof of Theorem 2, as well as an extended discussion of this result. In addition, we show that the total variation distance between $p(\theta, y \mid d)$ and $p(\theta)p(y \mid d)$ may be obtained as a special case of the MTD, and provide an upper bound analogous to Theorem 2.

B.1 The Euclidean Case

While the MTD can be defined for general Polish spaces, we work under a Euclidean assumption throughout this section to facilitate the analysis, and also because this is a highly practically relevant scenario. We turn our attention towards bounding the transport dependencies by the mutual information. The key assumption we rely on is a strong log-concavity assumption.

Definition 5 (Strong Log-Concavity). Let $p \in C^2(\mathbb{R}^m)$ be a twice continuously differentiable probability density function. We say that $p$ is strongly log-concave if there exists $\lambda > 0$ such that for all $x$ with $p(x) > 0$, we have
$$-\nabla_x^2 \log p(x) \succeq \lambda I_m. \qquad (52)$$
The greatest such $\lambda$ is called the parameter of log-concavity of $p(x)$.

A generalized form of Talagrand's inequality yields an upper bound on the MTD in terms of the mutual information (Villani et al., 2008, Section 9.3; Blower, 2003, Theorem 4.1).

Theorem 7 (Talagrand's Inequality). Suppose $X = \mathbb{R}^m$ is a Euclidean space and $p(x), q(x)$ are two probability densities over $X$. Suppose $p(x)$ is strongly log-concave with parameter $\lambda$ and $\mathrm{KL}[q(x)\|p(x)] < \infty$. Then, for the quadratic cost $c(x, x') = \|x - x'\|^2$,
$$\mathrm{OT}_c[q(x), p(x)] \le \frac{2}{\lambda}\,\mathrm{KL}[q(x)\|p(x)]. \qquad (53)$$

Theorem 8 (Formal version of Theorem 2). Suppose the prior $p(\theta)$ is strongly log-concave with parameter $\lambda_\theta$. For $c(\theta, \theta') = \|\theta - \theta'\|^2$, we have
$$\lambda_\theta\, T^{(\theta)}_c(d) \le 2 I(d). \qquad (54)$$
Similarly, if the marginal $p(y \mid d)$ is strongly log-concave with parameter $\lambda_{y|d}$ and $c(y, y') = \|y - y'\|^2$, then
$$\lambda_{y|d}\, T^{(y)}_c(d) \le 2 I(d). \qquad (55)$$
Let $\eta > 0$ be given. When both the prior and likelihood satisfy these assumptions, under the cost $c(\theta, \theta', y, y') = \eta\|\theta - \theta'\|^2 + \|y - y'\|^2$, for $\lambda = \max\{\lambda_\theta/\eta, \lambda_{y|d}\}$ we have
$$\lambda\, T_c(d) \le 2 I(d). \qquad (56)$$

Proof. We start with the first claim. If $I(d)$ is infinite, then there is nothing to show. When $I(d)$ is finite and $p(\theta)$ is strongly log-concave, by Theorem 7 we have
$$\mathrm{OT}_c[p(\theta \mid y, d), p(\theta)] \le 2\lambda_\theta^{-1}\,\mathrm{KL}[p(\theta \mid y, d)\|p(\theta)]. \qquad (57)$$
Integrating both sides with respect to $p(y \mid d)$ yields
$$T^{(\theta)}_c(d) = \int_\mathcal{Y} \mathrm{OT}_c[p(\theta \mid y, d), p(\theta)]\, p(y \mid d)\,\mathrm{d}y \qquad (58)$$
$$\le 2\lambda_\theta^{-1} \int_\mathcal{Y} \mathrm{KL}[p(\theta \mid y, d)\|p(\theta)]\, p(y \mid d)\,\mathrm{d}y \qquad (59)$$
$$= 2\lambda_\theta^{-1}\, I(d), \qquad (60)$$
as claimed. The proof of the second claim is analogous. For the last claim, consider $c_{\Theta,\eta}(\theta, \theta') = \eta\|\theta - \theta'\|^2$. As Lemma 1 applies in arbitrary separable Hilbert spaces, we obtain
$$T_c(d) \le \min\{T^{(\theta)}_{c_{\Theta,\eta}}(d),\; T^{(y)}_{c_\mathcal{Y}}(d)\} \qquad (61)$$
$$= \min\{\eta\, T^{(\theta)}_{c_\Theta}(d),\; T^{(y)}_{c_\mathcal{Y}}(d)\} \qquad (62)$$
$$\le 2\min\{\eta\lambda_\theta^{-1},\; \lambda_{y|d}^{-1}\}\, I(d), \qquad (63)$$
where the final inequality follows from the first two claims. Rearranging this inequality yields the result.

While the bounds in this section apply to Euclidean spaces, generalizations of Talagrand's inequality exist for general metric spaces (Gozlan and Léonard, 2010), albeit resulting in a more complex relationship between information-theoretic quantities and optimal transport divergences.

B.2 Total Variation and Hamming Distance

One particular special case that may be of interest is when the cost function is defined via the Hamming distance. In this case, we obtain the total variation distance between $p(\theta, y \mid d)$ and $p(\theta)p(y \mid d)$ as a special case of our MTD framework and, moreover, obtain an upper bound in terms of the mutual information. However, we note that as this cost function is not differentiable, the MTD under this cost cannot be directly optimized with gradient-based methods.

Theorem 9. Suppose $\Theta \times \mathcal{Y}$ is a metric space. Consider the Hamming distance $c_H(\theta, y, \theta', y') = \mathbb{1}[(\theta, y) \ne (\theta', y')]$. Then,
$$T_{c_H}(d) = \|p(\theta, y \mid d) - p(\theta)p(y \mid d)\|_{TV} = \sup_{A\in\mathcal{B}} \Big|\int_A p(\theta, y \mid d)\,\mathrm{d}\theta\,\mathrm{d}y - \int_A p(\theta)p(y \mid d)\,\mathrm{d}\theta\,\mathrm{d}y\Big| \qquad (64)$$
where $\mathcal{B}$ is the set of all Borel subsets of $\Theta \times \mathcal{Y}$ and $\|\cdot\|_{TV}$ is the total variation metric. Moreover,
$$T_{c_H}(d) \le \sqrt{\tfrac{1}{2} I(d)}. \qquad (65)$$

Proof. It is well-known that the Hamming cost in the OT problem yields the total variation distance (Massart, 2007, Lemma 2.20). By the Csiszár-Kullback-Pinsker inequality (Pinsker, 1964; Csiszár, 1967; Kullback, 1967; Gozlan and Léonard, 2010), we immediately obtain the desired upper bound.

C Linear-Gaussian Model

Here we investigate the behavior of $T_c(d)$ in the linear-Gaussian setting under quadratic costs. In particular, we may obtain a closed-form solution for the MTD in this case, allowing for a direct comparison against the MI. See Figure 5 for an illustration.

Theorem 10. Suppose $\theta \in \Theta = \mathbb{R}^n$ has a standard normal prior $p(\theta) = \mathcal{N}(0, I_n)$, designs are vectors $d \in \mathcal{D} = \mathbb{R}^n$, and $y \in \mathcal{Y} = \mathbb{R}$ has likelihood $p(y \mid \theta, d) = \mathcal{N}(\langle d, \theta\rangle, \sigma^2_{d,\theta})$. Under the quadratic cost, writing $s = \|d\|^2 + \sigma^2_{d,\theta}$, we have
$$T_c(d) = 2\Big(1 + \sigma^2_{d,\theta} + \|d\|^2 - \sqrt{1 + s^2 + 2\sigma_{d,\theta}\sqrt{s}}\,\Big).$$
Moreover, $T_c(d) \le 2$. On the other hand, the MI is
$$I(d) = \frac{1}{2}\log\big(1 + \|d\|^2/\sigma^2_{d,\theta}\big) \qquad (66)$$
which is unbounded as $\sigma^2_{d,\theta} \to 0$.

Figure 5: Values of the MI and MTD for the linear-Gaussian model for σ²_{d,θ} = 1.

Proof. Define $s = \|d\|^2 + \sigma^2_{d,\theta}$. Under this setting we may explicitly calculate the joint and the product of marginals as
$$p(\theta, y \mid d) = \mathcal{N}(0, \Sigma_J), \qquad \Sigma_J = \begin{pmatrix} I & d \\ d^\top & s \end{pmatrix}, \qquad (67)$$
$$p(\theta)p(y \mid d) = \mathcal{N}(0, \Sigma_P), \qquad \Sigma_P = \begin{pmatrix} I & 0 \\ 0 & s \end{pmatrix}. \qquad (68)$$
Under quadratic costs, $T_c(d)$ is the squared 2-Wasserstein distance between these distributions, which admits a closed form via the (squared) Bures-Wasserstein metric
$$T_c(d) = \operatorname{tr}\Big(\Sigma_J + \Sigma_P - 2\big(\Sigma_P^{1/2}\Sigma_J\Sigma_P^{1/2}\big)^{1/2}\Big). \qquad (69)$$
Observe that $\operatorname{tr}(\Sigma_J) = \operatorname{tr}(\Sigma_P) = n + s$, and thus we turn our attention to the term $B = \big(\Sigma_P^{1/2}\Sigma_J\Sigma_P^{1/2}\big)^{1/2}$.
Since $\Sigma_P$ is diagonal, its square root is simply
$$\Sigma_P^{1/2} = \begin{pmatrix} I & 0 \\ 0 & \sqrt{s} \end{pmatrix} \qquad (70)$$
and an explicit calculation of $B^2$ yields
$$B^2 = \Sigma_P^{1/2}\Sigma_J\Sigma_P^{1/2} = \begin{pmatrix} I & \sqrt{s}\, d \\ \sqrt{s}\, d^\top & s^2 \end{pmatrix}. \qquad (71)$$
We require $\operatorname{tr}(B)$. Let $(\lambda_i)$ be the eigenvalues of $B^2$, so that $\operatorname{tr}(B) = \sum_i \sqrt{\lambda_i}$. Using the expression for the determinant of a block matrix, we see that the characteristic polynomial of $B^2$ is
$$p(\lambda) = (1 - \lambda)^n\Big(s^2 - \lambda - \frac{s\|d\|^2}{1 - \lambda}\Big) = (1 - \lambda)^{n-1}\big(\lambda^2 - (1 + s^2)\lambda + s\sigma^2_{d,\theta}\big). \qquad (72)$$
Thus, $B^2$ has eigenvalues $\lambda_1 = \cdots = \lambda_{n-1} = 1$ together with the eigenvalues $\lambda_n, \lambda_{n+1}$ given by the roots of the quadratic factor. In particular, $\lambda_n + \lambda_{n+1} = 1 + s^2$ and $\lambda_n\lambda_{n+1} = s\sigma^2_{d,\theta}$. This yields
$$\sqrt{\lambda_n} + \sqrt{\lambda_{n+1}} = \sqrt{1 + s^2 + 2\sigma_{d,\theta}\sqrt{s}}, \qquad (73)$$
$$\operatorname{tr}(B) = (n - 1) + \sqrt{1 + s^2 + 2\sigma_{d,\theta}\sqrt{s}}. \qquad (74)$$
Putting everything together via the additivity of the trace, we have
$$T_c(d) = 2\Big(1 + \sigma^2_{d,\theta} + \|d\|^2 - \sqrt{1 + s^2 + 2\sigma_{d,\theta}\sqrt{s}}\,\Big). \qquad (75)$$
Using a computer algebra system, one can verify that $T_c(d) \le 2$ and that $\lim_{\|d\|\to\infty} T_c(d) = 2$. On the other hand, the mutual information between two Gaussians admits a closed form. This immediately gives
$$I(d) = \frac{1}{2}\log\Big(1 + \frac{\|d\|^2}{\sigma^2_{d,\theta}}\Big), \qquad (76)$$
which is unbounded as $\sigma^2_{d,\theta} \to 0$.

D Experiment Details

This section provides additional details for our experiments. Unless specified otherwise, our experiments in Section 6 were performed with the following settings. Experiments were performed on an Apple M4 Pro chip with 24 GB of unified memory and a 14-core CPU, and primarily implemented in pytorch (Paszke et al., 2019). Designs are optimized for 250 gradient steps with a learning rate of 2e-2 using the Adam optimizer (Kingma, 2014). For the MTD, we draw 1,000 samples from $p(\theta, y \mid d, h_t) = p(\theta \mid h_t)p(y \mid \theta, d)$ per gradient step, shuffled following Section A to yield samples from $p(\theta)p(y \mid d)$. For PCE, we draw $N = 1{,}000$ samples $(\theta_n, y_n) \overset{\text{i.i.d.}}{\sim} p(\theta, y \mid d, h_t)$ to form the approximation (Foster et al., 2020)
$$I^{(t)}(d) \approx \frac{1}{N}\sum_{n=1}^{N}\Bigg[\log p(y_n \mid \theta_n, d) - \log\Bigg(\frac{1}{L+1}\Big(p(y_n \mid \theta_n, d) + \sum_{l=1}^{L} p(y_n \mid \theta_{l,n}, d)\Big)\Bigg)\Bigg] \qquad (77)$$
where $L = 1{,}000$ and $\theta_{l,n} \overset{\text{i.i.d.}}{\sim} p(\theta \mid h_t)$. Figure 1 and Figure 3 are plotted with additional Gaussian smoothing for visualization purposes.

D.1 Location Finding

In the location finding task, our goal is to estimate the spatial locations of $K \ge 1$ sources $\theta_k \in \mathbb{R}^{d_\theta}$. The number of sources $K$ is assumed to be known, so that $\theta = (\theta_1, \theta_2, \ldots, \theta_K)$. Each source $\theta_k$ emits a signal which decays according to an inverse square law. In each step, a sensor is placed at a location $d \in \mathbb{R}^{d_\theta}$ which records a noisy measurement $y \in \mathbb{R}$ of the total signal intensity at the sensor location (Sheng and Hu, 2005). We note that this task has become a standard benchmark for BED (Foster et al., 2021; Ivanova et al., 2021; Blau et al., 2022; Iollo et al., 2024a). The total (noiseless) intensity at a location $d$ is given by
$$\mu(\theta, d) = b + \sum_{k=1}^{K} \frac{\alpha_k}{m + \|\theta_k - d\|^2}. \qquad (78)$$
Here, $b, \alpha_k, m \ge 0$ are known constants. The variable $b$ represents a background signal level and $m$ controls the (inverse) maximum signal strength. We assume an independent standard Gaussian prior over each $\theta_k$,
$$p(\theta_k) = \mathcal{N}(\theta_k \mid 0, I_{d_\theta}), \quad k = 1, \ldots, K. \qquad (79)$$
We further assume that we observe the logarithm of the total signal intensity with Gaussian noise, i.e.,
$$y \mid \theta, d \sim \mathcal{N}\big(y \mid \log \mu(\theta, d),\, \sigma^2\big) \qquad (80)$$
where $\sigma^2 > 0$ is assumed to be known. In all experiments we follow prior work (Foster et al., 2021) and set $b = 0.1$, $\alpha_k = 1$, $m = 10^{-4}$, $\sigma^2 = 0.25$. Further, we consider $K = 2$ sources in $d_\theta = 2$ dimensions, so that $\theta$ is four-dimensional.
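A sketch of this forward model, (78)-(80), with the stated constants (function names are ours); reparameterized sampling keeps $y$ differentiable in $d$, as required by the gradient-based design optimization described above.

```python
import torch

B, ALPHA, M, SIGMA2 = 0.1, 1.0, 1e-4, 0.25   # constants from Appendix D.1

def log_intensity(theta, d):
    """log mu(theta, d) for theta of shape (n, K, 2) and a design d of shape (2,)."""
    sq = ((theta - d) ** 2).sum(-1)           # (n, K) squared source-sensor distances
    return torch.log(B + (ALPHA / (M + sq)).sum(-1))

def sample_y(theta, d):
    """y | theta, d ~ N(log mu(theta, d), sigma^2), reparameterized in d."""
    return log_intensity(theta, d) + SIGMA2 ** 0.5 * torch.randn(theta.shape[0])
```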
Inference. For inference, we use the NUTS MCMC sampler implemented in pymc3 (Salvatier et al., 2016) with 1e4 warm-up steps and four independent chains to draw a total of 1e5 posterior samples at each experiment iteration. In LF, posteriors are complex and multi-modal, necessitating accurate (but expensive) inference. See Figure 6 for an illustration.

Figure 6: An illustration of the LF problem over experiments 1-10. Red stars indicate the true, unknown source locations θ1, θ2. At each experimental iteration, a measurement location (black triangle) is selected by maximizing the MTD. Posterior samples θ ∼ p(θ | ht) (orange and blue points) depict the updated beliefs about the source positions after each measurement. The MTD adaptively selects measurement locations that rapidly determine both sources.

RMSE. During posterior sampling, the model exhibits non-identifiability with respect to the ordering of the two sources, $\theta_1$ and $\theta_2$. As a result, the correspondence between estimated and true sources may be swapped across samples. To address this when computing the RMSE, we evaluate both possible orderings of each posterior sample and use the ordering that yields the lower RMSE.

Weighting Function. In Section 6.3 we modify the cost function in the MTD using a weighting function. In particular, this takes the form
$$w(\theta) = b + g(\theta_1 - \mu) + g(\theta_2 - \mu) \qquad (81)$$
where $b = 1$ is a bias, $\mu = (1.5, -1.5)$ controls the location of the bump, and
$$g(x) = k\Big(1 - \mathrm{sigmoid}\Big(s\,\frac{\|x\|^2 - \alpha^2}{\beta^2 - \alpha^2}\Big)\Big) \qquad (82)$$
is a bump-like function. Here, $\beta = 1$ approximately controls the radius of its support, $\alpha = 0.5$ controls an inner radius where $g$ is approximately maximized, $s = 0.3$ controls the slope of the bump, and $k = 10^4$ controls its amplitude. See Figure 7 for a visualization. The weighting $w(\theta)$ is selected to depend only on $\theta$, which is drawn from the joint.

Figure 7: A visualization of the bump function g(x).

D.2 Constant Elasticity of Substitution (CES)

The Constant Elasticity of Substitution (CES) model, arising from behavioral economics, asks a participant to compare two baskets $d_1, d_2 \in [0, 100]^3$ consisting of various amounts of three different goods. Given the two baskets, the participant provides a scalar response $y \in [0, 1]$ indicating their subjective preference between the baskets. This model has previously served as a benchmark for several recent BED works (Foster et al., 2019, 2020; Blau et al., 2022; Iollo et al., 2024b).

The experimental goal is to choose a design $d = (d_1, d_2) \in [0, 100]^6$ consisting of two baskets in order to infer the participant's latent utility function. This utility function is assumed to be parametrized by $\theta = (\rho, \alpha, u)$ where $\rho \in [0, 1]$, $\alpha \in \Delta^3$, $u \in \mathbb{R}_{\ge 0}$, and $\Delta^3$ is the 3-simplex. Thus, $\theta \in \mathbb{R}^5$ is a five-dimensional unknown parameter of interest. Following previous work, we assume the following priors:
$$\rho \sim \beta(1, 1) \qquad (83)$$
$$\alpha \sim \mathrm{Dir}(1, 1, 1) \qquad (84)$$
$$\log u \sim \mathcal{N}(1, 3). \qquad (85)$$
The likelihood for the participant's response is modeled as
$$U(d) = \Big(\sum_{i=1}^{3} d_i^\rho\, \alpha_i\Big)^{1/\rho} \qquad (86)$$
$$\mu = \big(U(d_1) - U(d_2)\big)\,u \qquad (87)$$
$$\sigma = \big(1 + \|d_1 - d_2\|\big)\,\tau u \qquad (88)$$
$$\eta \sim \mathcal{N}(\mu, \sigma^2) \qquad (89)$$
$$y = \begin{cases} \mathrm{sigmoid}(\eta) & \varepsilon < \mathrm{sigmoid}(\eta) < 1 - \varepsilon \\ \varepsilon & \mathrm{sigmoid}(\eta) \le \varepsilon \\ 1 - \varepsilon & \mathrm{sigmoid}(\eta) \ge 1 - \varepsilon \end{cases} \qquad (90)$$
where $\varepsilon = 2^{-22}$ and $\tau = 5\times10^{-3}$ are fixed constants and
$$\mathrm{sigmoid}(x) = \frac{1}{1 + e^{-x}} \qquad (91)$$
is the usual sigmoid function. Thus, the participant's response depends on the difference in utilities $U(d_1) - U(d_2)$ between the two baskets.
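A sketch of this CES response simulator, (86)-(90), for batched parameter samples (the names are ours; `rho`, `alpha`, `u` are as drawn from the priors (83)-(85)):

```python
import torch

EPS, TAU = 2.0 ** -22, 5e-3   # constants from Appendix D.2

def utility(basket, rho, alpha):
    """U(d) = (sum_i d_i^rho * alpha_i)^(1/rho); basket (3,), rho (n,), alpha (n, 3)."""
    return ((basket ** rho[:, None]) * alpha).sum(-1) ** (1.0 / rho)

def sample_y(rho, alpha, u, d1, d2):
    mu = (utility(d1, rho, alpha) - utility(d2, rho, alpha)) * u     # (87)
    sd = (1.0 + (d1 - d2).norm()) * TAU * u                          # (88)
    eta = mu + sd * torch.randn_like(mu)                             # (89)
    return torch.clamp(torch.sigmoid(eta), EPS, 1.0 - EPS)           # censoring (90)
```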
Log-Likelihood. The CES model can present numerical challenges for methods that require evaluating the likelihood of observations (e.g., PCE). We follow the recommendations in Foster et al. (2020) and Iollo et al. (2024b) to evaluate this quantity. In particular, since $y$ is censored, we have that
$$p(y \mid \theta, d) = p_0\,\delta[y = \varepsilon] + p_1\,\delta[y = 1 - \varepsilon] + (1 - p_0 - p_1)\,q(y \mid \theta, d) \qquad (92)$$
is a mixture of Dirac deltas at the boundaries and a density on the interior, where the density
$$q(y \mid \theta, d) = \frac{1}{\sigma\, y(1 - y)\sqrt{2\pi}} \exp\Big(-\frac{(\mathrm{logit}(y) - \mu)^2}{2\sigma^2}\Big) \qquad (93)$$
represents a logit-normal distribution away from the boundary, with $\mathrm{logit}(y) = \mathrm{sigmoid}^{-1}(y) = \log(y/(1 - y))$. The quantities $p_0, p_1$ are defined via
$$p_0 = \Phi\Big(\frac{\mathrm{logit}(\varepsilon) - \mu}{\sigma}\Big) \qquad (94)$$
$$p_1 = 1 - \Phi\Big(\frac{\mathrm{logit}(1 - \varepsilon) - \mu}{\sigma}\Big) \qquad (95)$$
where $\Phi$ is the standard normal CDF. When $p_0, p_1 \ll 1$, computing their logarithms becomes challenging, in which case we approximate $\Phi$ by a first-order Taylor expansion, i.e.,
$$\Phi(x) \approx \frac{1}{-x\sqrt{2\pi}}\exp(-x^2/2), \quad x \ll -1, \qquad (96)$$
$$1 - \Phi(x) \approx \frac{1}{x\sqrt{2\pi}}\exp(-x^2/2), \quad x \gg 1. \qquad (97)$$
We perform additional clipping as necessary before taking logarithms in our implementation to further improve stability. For inference, we perform importance resampling (Doucet et al., 2001) where 1e7 proposal samples are drawn from the prior, weighted according to the likelihood, and 1e5 samples are re-drawn with replacement with probability proportional to their weight.

E Additional Experiments

Figure 8: Quantitative results for the location finding problem, averaged across 25 seeds (± one standard error). MTD achieves the lowest RMSE across most of the rollout, even though its corresponding MI values are slightly lower. All methods obtain comparable MTD scores within one standard error, reflecting the high variance of this quantity.

E.1 Location Finding

One appeal of the MTD is that it is naturally an implicit method, as it relies only on our ability to draw samples. To highlight this, we compare against two methods for MI-based experimental design in implicit settings: MINEBED (Kleinegesse and Gutmann, 2020) and JSD (Kleinegesse and Gutmann, 2021). MINEBED is based on the NWJ (Nguyen et al., 2010) lower bound on the mutual information:
$$I(d) \ge \mathbb{E}_{p(\theta, y \mid d)}[T(\theta, y)] - e^{-1}\,\mathbb{E}_{p(\theta)p(y \mid d)}[\exp(T(\theta, y))] \qquad (98)$$
where $T : \Theta \times \mathcal{Y} \to \mathbb{R}$ is an arbitrary measurable function. This lower bound is, in fact, tight. In practice, we take $T(\theta, y) = T_\psi(\theta, y)$ to be a neural network parametrized by $\psi$, yielding a strict lower bound on $I(d)$ when the considered class of neural networks does not contain the true optimum. In the continuous setting, the network parameters $\psi$ and the design $d$ may be optimized simultaneously via gradient-based procedures (a sketch is given below).
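A minimal sketch of the NWJ/MINEBED bound (98) with a small critic, mirroring the architecture described below (two hidden layers of width 200); the input dimensionality (4D θ plus 1D y for LF) and the names are our assumptions.

```python
import torch

critic = torch.nn.Sequential(               # T_psi: (theta, y) -> R
    torch.nn.Linear(5, 200), torch.nn.ReLU(),
    torch.nn.Linear(200, 200), torch.nn.ReLU(),
    torch.nn.Linear(200, 1),
)

def nwj_lower_bound(theta, y, theta_p, y_p):
    """NWJ bound (98): E_joint[T] - e^{-1} E_prod[exp(T)]."""
    t_joint = critic(torch.cat([theta, y], dim=-1))
    t_prod = critic(torch.cat([theta_p, y_p], dim=-1))
    return t_joint.mean() - torch.exp(t_prod - 1.0).mean()  # exp(T-1) = exp(T)/e
```

Maximizing this bound over ψ tightens it towards $I(d)$, while the same quantity can simultaneously be ascended in $d$.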
On the other hand, JSD is based on a lower bound of the Jensen-Shannon divergence (Hjelm et al., 2018)
$$\mathcal{J}(d) = \frac{1}{2}\mathrm{KL}\big[p(\theta, y \mid d)\,\|\,m(\theta, y \mid d)\big] + \frac{1}{2}\mathrm{KL}\big[p(\theta)p(y \mid d)\,\|\,m(\theta, y \mid d)\big] \qquad (99)$$
$$= \log 2 - \frac{1}{2}\mathbb{E}_{p(\theta, y \mid d)}\Big[\log\Big(1 + \frac{p(\theta)p(y \mid d)}{p(\theta, y \mid d)}\Big)\Big] - \frac{1}{2}\mathbb{E}_{p(\theta)p(y \mid d)}\Big[\log\Big(1 + \frac{p(\theta, y \mid d)}{p(\theta)p(y \mid d)}\Big)\Big] \qquad (100)$$
$$\ge \log 2 + \frac{1}{2}\Big(\mathbb{E}_{p(\theta, y \mid d)}\big[-\log(1 + \exp(-T(\theta, y)))\big] - \mathbb{E}_{p(\theta)p(y \mid d)}\big[\log(1 + \exp(T(\theta, y)))\big]\Big) \qquad (101)$$
where $m(\theta, y \mid d) = \frac{1}{2}\big(p(\theta, y \mid d) + p(\theta)p(y \mid d)\big)$ is a mixture distribution. Again we take $T(\theta, y) = T_\psi(\theta, y)$ to be a neural network, yielding a potentially strict lower bound. Kleinegesse and Gutmann (2021) motivate the JSD as an objective for OED by noting that it behaves similarly to the mutual information while potentially offering more stable training, as there is no exponential of the network, unlike in MINEBED.

For both MINEBED and JSD, we parametrize $T_\psi$ by an MLP with two hidden layers of size 200, trained at a learning rate of 3e-3 using Adam (Kingma, 2014). Designs are simultaneously optimized via Adam, but at a higher learning rate of 2e-2.

Results. In Figure 8, we report the RMSE, MI, and MTD values for our method, PCE, MINE, and JSD, with all designs optimized using five random restarts. The MTD-based approach consistently achieves the lowest RMSE across most of the experimental rollout, even though it attains slightly lower MI values, which is expected since it does not explicitly optimize this objective. All methods obtain comparable MTD scores; this is largely attributable to the high variance of the MTD estimate and to the nature of the location-finding task, where, once the posterior becomes highly concentrated, the set of effective designs narrows substantially, so that any design within this region yields similarly strong MTD values.

E.2 CES

Figure 9: Quantitative results for the CES problem (per-parameter RMSEs for ρ, α, u, σ, β, τ over 10 experiment iterations), averaged over 50 seeds (± one standard error). On the original scale (top row), MTD achieves the lowest RMSE, while the transformed variant T_c† performs slightly worse on u. On the transformed scale (bottom row), T_c† attains the lowest RMSEs, reflecting its alignment with the evaluation variables.

Figure 10: MI and MTD values for the CES problem, averaged over 50 seeds (± one standard error). MTD achieves similar MI values to PCE, while PCE attains worse MTD values than random designs.

To supplement the results in Section 6 for the CES problem, we provide additional figures visualizing our results. Figure 9 plots the RMSE values for the CES problem across the experimental iterations. When evaluating performance on the original parameter scale (top row of Table 1), MTD achieves the lowest RMSE, while the transformed variant $T_{c^\dagger}$ performs slightly worse on $u$. This is expected, as the transformed cost emphasizes errors in a different coordinate system. Conversely, when performance is assessed in the transformed space (bottom row), $T_{c^\dagger}$ attains the lowest RMSEs, illustrating that defining the cost in terms of the relevant variables can effectively guide design selection toward the aspects of the parameters that matter most for downstream evaluation. These results highlight the flexibility of the MTD: by choosing an appropriate cost, either directly or via transformations, experimenters can align the design criterion with the metric that truly matters for their task.

Figure 10 shows the MI and MTD values for CES. We observe that designs optimized using the MTD achieve comparable MI values to PCE.
On the other hand, the designs produced by PCE yield low MTD values, performing worse than random designs. Notably, this serves as a verification of our Theorem 2, which can be interpreted as showing that designs with high MTD tend to have large MI as well, but not conversely.
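To make the MINEBED objective of Eq. (98) concrete, the following is a minimal Monte Carlo sketch on a toy linear-Gaussian problem where the optimal critic is known in closed form; the toy model and sample sizes are illustrative assumptions, not part of our experiments.

```python
import numpy as np

def nwj_bound(T, theta_j, y_j, theta_m, y_m):
    """Monte Carlo estimate of the NWJ lower bound (Eq. 98) for a critic T."""
    joint_term = np.mean(T(theta_j, y_j))
    marginal_term = np.exp(-1.0) * np.mean(np.exp(T(theta_m, y_m)))
    return joint_term - marginal_term

# Toy model: theta ~ N(0, 1), y = theta + N(0, 1), so I(theta; y) = 0.5*log 2.
rng = np.random.default_rng(0)
theta = rng.standard_normal(200_000)
y = theta + rng.standard_normal(200_000)               # joint samples
y_marg = np.sqrt(2.0) * rng.standard_normal(200_000)   # samples from p(y)

# Optimal critic: T*(theta, y) = 1 + log p(theta, y) / (p(theta) p(y)).
T_opt = lambda th, yy: 1.0 - 0.5 * (yy - th) ** 2 + 0.25 * yy ** 2 + 0.5 * np.log(2.0)

print(nwj_bound(T_opt, theta, y, theta[::-1], y_marg))  # ~ 0.5*log 2 = 0.3466
```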
Unidirectional Zero Reflection and Perfect Absorption via Exceptional Points in Active Piezoelectric Willis Metamaterials

Hrishikesh Danawe¹ and Serife Tol¹,*
¹Department of Mechanical Engineering, University of Michigan, Ann Arbor, MI, USA 48109
*Corresponding author: stol@umich.edu

ABSTRACT

Electro-momentum coupling in piezoelectric metamaterials with broken inversion symmetry enables asymmetric elastic wave transport by linking macroscopic electric fields to momentum, an effect analogous to Willis coupling in elastic media. A one-dimensional layered piezoelectric metamaterial integrated with shunt circuits, consisting of a resistor, inductor, and strain-proportional voltage feedback gain, is proposed to achieve dynamic control of frequency-dependent stiffness and damping through electromechanical interactions. Tuning the circuit parameters yields direction-dependent wave scattering at targeted frequencies. Dynamic homogenization reveals macroscopic constitutive relations exhibiting both Willis and electro-momentum couplings. Non-Hermitian exceptional points are identified, where scattering eigenmodes coalesce and produce extreme asymmetries in wave response. Near these points, the system realizes unidirectional zero reflection (UZR) and unidirectional perfect absorption (UPA), achieving complete absorption from one direction and total reflection from the opposite side. The findings demonstrate a compact and reconfigurable platform for tunable, directional elastic wave control using passive-active hybrid metamaterials, opening new avenues for programmable devices in acoustic isolation, wave-based computing, sensing, and energy manipulation in solid media.

Introduction

Nonreciprocal wave propagation is a fascinating area of physics that challenges the conventional symmetry typically associated with wave transmission, namely that the system's response remains invariant under source–receiver interchange. This reciprocity arises from the assumptions of time-reversal symmetry, linearity, and time-invariance of the medium1–3. In contrast, nonreciprocal systems intentionally break this symmetry, allowing waves to propagate preferentially in one direction while being attenuated, reflected, or entirely suppressed in the opposite direction. Such direction-dependent control of wave transport offers transformative opportunities in the design of next-generation acoustic and elastic devices, including unidirectional sensors, isolators, signal routers, and energy-harvesting systems. Traditional strategies to achieve nonreciprocal wave transport often rely on mechanisms such as nonlinear interactions4–7, spatiotemporal modulation8–10, or moving media11. While these approaches have enabled several groundbreaking demonstrations, their practical deployment is often hindered by complexity, requirements for dynamic biasing, or scalability constraints. An emerging alternative leverages engineered metamaterials, particularly those incorporating spatial asymmetry and tailored loss or gain, to realize asymmetric wave responses in linear, time-invariant systems. Within this paradigm, Willis metamaterials12–14, characterized by microstructural asymmetry and bianisotropic constitutive relations, offer a powerful framework for asymmetric wave control. Despite supporting exotic couplings such as stress–velocity and momentum–strain, these materials remain fundamentally reciprocal due to the preservation of time-reversal symmetry and passivity15,16.
Nevertheless, they can exhibit asymmetric reflection phases for waves incident from opposite directions, owing to direction-dependent impedance introduced by Willis coupling17,18. While the transmission remains symmetric as required by reciprocity, the phase of the reflected waves can differ based on the incidence direction. However, to achieve strong asymmetry in the reflection amplitude, it is typically necessary to introduce loss, thereby rendering the system non-Hermitian17,18. By carefully engineering loss, reflection in one direction can be strongly suppressed, approaching unidirectional zero reflection (UZR), while reflection in the opposite direction remains significant17,19. This phenomenon is closely linked to the concept of exceptional points (EPs) in non-Hermitian physics, where eigenvalues and eigenvectors of the scattering matrix coalesce, leading to abrupt transitions and extreme asymmetries. However, directly tuning loss in physical materials is often impractical. To overcome this challenge, piezoelectric Willis metamaterials enable external circuit-based control of loss and resonance. In our previous work, we introduced a resistive-inductive (RL) shunt circuit to realize tunable asymmetric wave phenomena via externally tailored dissipation and resonance20. These systems leverage strong electromechanical coupling, which enables an additional bianisotropic term between electric field and momentum, termed electro-momentum coupling21, an electromechanical analog to Willis coupling. Both Willis and electro-momentum couplings contribute to directional wave behavior in piezoelectric systems22. Such couplings can be significantly enhanced through topology optimization23–26. Combined with mechanical loss in the host structure, these systems can exhibit extreme reflection asymmetry and UZR26. Furthermore, the inclusion of tunable resistance enables frequency-selective UZR27. However, achieving both UZR and unidirectional perfect absorption (UPA), where all incoming energy from one direction is absorbed and reflection is entirely suppressed, remains an open challenge for passive elastic systems. To the best of our knowledge, there are no demonstrations of simultaneously achieving UZR and UPA in such systems via dissipation alone. Nonetheless, the electromechanical coupling in piezoelectric metamaterials enables further control through external feedback circuits, offering a pathway to wave control that surpasses the limitations of passive configurations. In this work, we demonstrate that piezoelectric metamaterials composed of stacked layers integrated with tunable shunt circuits, featuring strain-proportional voltage feedback, inductive resonance, and resistive damping, can realize non-Hermitian scattering conditions, including exceptional points that yield highly asymmetric wave responses. By tuning the external circuit parameters, we gain precise control over the system's stiffness and loss characteristics, enabling directional scattering tailored to specific frequencies. Using dynamic homogenization, we extract the macroscopic effective properties of the one-dimensional layered metamaterial, revealing the presence and tunability of both Willis and electro-momentum couplings. This circuit-level programmability allows the system to exhibit unidirectional zero reflection (UZR) and unidirectional perfect absorption (UPA), along with retroreflection in the reverse direction.
By leveraging strong electromechanical interactions and active–passive hybridization, our platform offers a compact, reconfigurable, and scalable solution for nonreciprocal elastic wave manipulation, opening new avenues for wave-based signal routing, isolation, and adaptive control in engineered solids.

Results

Design of Willis metamaterial

We consider wave propagation through a finite, one-dimensional periodic piezoelectric composite consisting of layered piezoelectric materials stacked along their poling direction. The schematic in Fig. 1(a) shows a finite rod composed of five unit cells of the piezoelectric composite embedded within an aluminum rod. Wave propagation is studied in both forward and backward directions by analyzing the reflected and transmitted wave components for an incident wave in each direction. Figure 1(b) illustrates the unit cell of the composite, which comprises piezoelectric layers of PZT-4, BaTiO3, and PVDF. These materials are stacked in a specific pattern to break inversion symmetry, enabling the realization of Willis and electro-momentum coupling and thereby facilitating asymmetric wave phenomena. All layers have the same cross-sectional area, A_p = 1 mm², while their lengths are selected as l_p^1 = 1 mm, l_p^2 = 1.4 mm, and l_p^3 = 0.6 mm. Each layer is shunted with an external circuit consisting of an inductor, a resistor, and a voltage feedback source proportional to the strain across the layer. An effective electro-mechanical elastic constant is derived, allowing the shunted piezoelectric layers to be modeled as an equivalent purely elastic medium. The influence of piezoelectric coupling and circuit parameters is incorporated into the modified constitutive behavior, as illustrated in Fig. 1(c). In this formulation, ˇC denotes the electro-mechanical elastic constant, which captures the combined effects of the intrinsic material properties and the external circuitry. Here, C and A represent the elastic and dielectric constants, respectively, and B is the piezoelectric coupling coefficient. The term V_0 is the voltage feedback coefficient, and Z = sL + R is the shunt impedance, where L and R are the inductance and resistance of the circuit, respectively. The Laplace variable is defined as s = −iω, with ω being the angular frequency. The resistor introduces damping into the system, while the inductor forms an LC resonance with the intrinsic capacitance of the piezoelectric layer. The voltage feedback source provides additional control over the dynamic behavior by exerting an active response proportional to the strain across the layer. Previously, we demonstrated that the resistor and inductor in the external circuit can be used to control Willis and electro-momentum coupling, thereby tuning asymmetric wave phenomena by adjusting these circuit parameters20. Specifically, the LC resonance determines the frequency at which coupling effects are maximized, while the resistor governs the perturbation level of the asymmetry factor, which directly influences the degree of wave asymmetry. By introducing a voltage feedback source proportional to strain, we now demonstrate an enhanced level of control that enables the realization of exotic wave phenomena such as unidirectional zero reflection and perfect absorption.

Scattering analysis

The asymmetric wave behavior is analyzed using a scattering matrix composed of transmission and reflection ratios for forward and backward wave incidence.
This formulation is based on the standard transfer matrix method, which relates the wave amplitudes across each layer of the piezoelectric composite. Figure 2(a) and (b) illustrate the traveling wave components within each layer for forward and backward incidence, respectively, along with the incident (A_{0,inc}, B_{0,inc}), reflected (B_{0,ref}, A_{0,ref}), and transmitted (A_{0,trans}, B_{0,trans}) wave amplitudes in the surrounding aluminum rod. These field distributions form the foundation for computing direction-dependent scattering parameters that reveal the emergence of wave asymmetry induced by circuit-modulated electro-mechanical coupling in the composite.

Figure 1. Schematic and modeling framework for electro-momentum coupling in shunted piezoelectric metamaterials. a, One-dimensional waveguide comprising a layered piezoelectric composite embedded between two aluminum sections and terminated with perfectly matched layers (PML). Elastic waves incident from either direction exhibit asymmetric transmission (t_f, t_b) and reflection (r_f, r_b) due to the spatial asymmetry introduced by the shunted composite. The structure features a uniform poling direction, while the arrangement of heterogeneous piezoelectric layers breaks inversion symmetry to support directional wave propagation. b, Zoomed-in view of a representative unit cell consisting of three serially connected piezoelectric layers, PZT-4, BaTiO3, and PVDF, each interfaced with a shunt circuit composed of a resistor (R_i), inductor (L_i), and strain-proportional voltage feedback source (V_{0i}ε). These circuits actively tailor the electromechanical response of each segment and modulate the macroscopic dynamic behavior through piezoelectric coupling. c, The shunted layer is modeled as an effective elastic layer with a modified electro-mechanical elastic constant ˇC_i, incorporating both material properties and circuit-induced effects. This framework enables tunable control of wave propagation through circuit parameters, where the resistor introduces loss, the inductor defines resonance behavior, and the voltage feedback allows dynamic modulation of stiffness.

Each layer is modeled using an effective elastic constant. For the aluminum segments, this corresponds to the intrinsic elastic constant (i.e., ˇC_0 = C_0), while for the piezoelectric layers, it represents an electro-mechanical elastic constant ˇC_i, which accounts for both the intrinsic material response and the effects of the external shunt circuitry, as described in Fig. 1(c). The reflection and transmission coefficients are defined as the ratios of outgoing to incoming wave amplitudes:

\[ r_f = \frac{B_{0,\mathrm{ref}}}{A_{0,\mathrm{inc}}}, \quad r_b = \frac{A_{0,\mathrm{ref}}}{B_{0,\mathrm{inc}}}, \quad t_f = \frac{A_{0,\mathrm{trans}}\, e^{i k_0 \ell}}{A_{0,\mathrm{inc}}}, \quad t_b = \frac{B_{0,\mathrm{trans}}\, e^{i k_0 \ell}}{B_{0,\mathrm{inc}}} \tag{1} \]

where A_{0,inc} and B_{0,inc} are the amplitudes of incident waves in the forward and backward directions, respectively, B_{0,ref} and A_{0,ref} denote reflected waves, and A_{0,trans}, B_{0,trans} are the transmitted waves. The exponential term e^{ik_0 ℓ} accounts for the phase accumulated over the length ℓ of the composite, with k_0 as the wavenumber in the aluminum background.

Figure 2. Wave decomposition in a layered piezoelectric composite for forward and backward incidence. a, Schematic of elastic wave propagation from the left (forward incidence) through a layered structure composed of alternating piezoelectric segments with effective electro-mechanical elastic constants ˇC_1, ˇC_2, and ˇC_3, bounded by homogeneous elastic layers ˇC_0. Each segment is associated with local forward (A_i) and backward (B_i) wave components.
The incident wave (A_{0,inc}) is partially reflected (B_{0,ref}) and transmitted (A_{0,trans}), with no incoming wave from the right (B_0 = 0). b, The same configuration under backward incidence, where the wave enters from the right. The incident wave (B_{0,inc}) results in partial transmission (B_{0,trans}) and reflection (A_{0,ref}), with no incoming wave from the left (A_0 = 0). This layer-wise decomposition into forward and backward components enables analytical evaluation of reflection and transmission coefficients under both excitation directions using the transfer matrix method.

The expressions for the reflection and transmission ratios are:

\[ r_f = -\frac{M_f(2,1)}{M_f(2,2)}, \quad t_f = M_f(1,1) - \frac{M_f(1,2)\, M_f(2,1)}{M_f(2,2)}, \qquad r_b = -\frac{M_b(2,1)}{M_b(2,2)}, \quad t_b = M_b(1,1) - \frac{M_b(1,2)\, M_b(2,1)}{M_b(2,2)} \tag{2} \]

where M_f and M_b are given by Eqs. (17) and (18), respectively. The magnitude and phase of the reflection coefficients for both forward and backward incidence are plotted in Fig. 3(a)–(c) for three different circuit configurations, where the circuit parameters (inductance, resistance, and voltage feedback) are identical across all three layers within each unit cell. In the open-circuit case [Fig. 3(a)], the reflection amplitude remains symmetric, with only phase asymmetry observed between the two directions. Introducing a passive shunt with R = 50 kΩ and L = 1 H [Fig. 3(b)] produces measurable asymmetry in both amplitude and phase, particularly near the LC resonance frequencies, f_1 = 0.067 MHz for the PZT-4 layer and f_2 = 0.19 MHz for the BaTiO3 layer. The LC resonance frequency of the PVDF layer lies beyond the frequency range considered in this study and is therefore not captured in the plotted results. The amplitude asymmetry primarily arises from energy dissipation introduced by the resistor, which leads to direction-dependent wave attenuation. While dissipation alone does not fundamentally break time-reversal symmetry, it facilitates asymmetric scattering when combined with structural or parametric asymmetries, as is the case in this configuration. Finally, incorporating an active voltage feedback source with V_0 = 10 MV [Fig. 3(c)] results in pronounced asymmetry, where both the magnitude and phase of the reflection coefficients differ significantly between forward and backward wave incidence. These effects are especially prominent near the frequencies where the circuit dynamics enhance Willis and electro-momentum coupling. These results demonstrate that external circuit parameters, including both passive and active elements, provide a powerful means to control and tune asymmetric wave behavior. In particular, the emergence of strong reflection asymmetry underscores the potential of dynamic electro-mechanical coupling for breaking reciprocity in piezoelectric composite systems.

Figure 3. Effect of shunt circuit parameters on directional reflection behavior. Frequency-dependent reflection magnitude (|r|, top row) and phase (∠r, bottom row) for forward (solid lines) and backward (dashed lines) wave incidence are shown under different shunting conditions. Identical circuit parameters are applied to all piezoelectric layers. a, In the open-circuit case (no shunt), the reflection magnitudes are symmetric, while phase asymmetry arises solely due to structural asymmetry. b, Introducing a resistor (R = 50 kΩ) and inductor (L = 1 H) leads to asymmetry in both magnitude and phase of the reflection ratio, with losses induced by the resistor breaking amplitude symmetry.
The LC resonance condition contributes to frequency-selective enhancement of this asymmetry. c, Adding voltage feedback (V_0 = 10 MV) further modifies the reflection characteristics across the entire frequency spectrum, introducing tunability and stronger contrast between forward and backward responses. These results illustrate how circuit parameters enable directional control over elastic wave reflection in shunted piezoelectric composites.

Exceptional points and unidirectional zero reflection

Unidirectional zero reflection (UZR) can be realized at exceptional points (EPs), where the eigenvalues and eigenvectors of the scattering matrix coalesce. The scattering matrix S is defined as:

\[ S = \begin{pmatrix} t_f & r_b \\ r_f & t_b \end{pmatrix}, \qquad \lambda_1 = t + \sqrt{r_f r_b}, \quad \lambda_2 = t - \sqrt{r_f r_b} \tag{3} \]

Here, λ_1 and λ_2 are the eigenvalues of the scattering matrix. Since the system is reciprocal, the transmission is equal in both directions, i.e., t = t_f = t_b. At the UZR condition, either r_f = 0 or r_b = 0, resulting in λ_1 = λ_2 = t, which indicates a coalescence of eigenvalues and eigenvectors, a defining signature of an exceptional point. The scattering matrix elements t_{f,b} and r_{f,b}, as derived in Eq. (2), can be used to locate EPs in the system. By tuning the external circuit parameters, these non-Hermitian degeneracies can be accessed, enabling UZR behavior in the piezoelectric metamaterial. To systematically identify UZR conditions, we perform a parametric sweep over resistance R and feedback voltage V_0, while keeping the inductance fixed at L = 1 H. The circuit parameters are kept identical across all three layers of the unit cell. A contrast factor is defined to quantify asymmetry:

\[ \alpha = \frac{r_f}{r_b} \tag{4} \]

At UZR, |α| → 0 or ∞. Figure 4(a) presents the magnitude of α as a function of R and V_0 at a design frequency of 0.15 MHz, revealing a region of strong reflection asymmetry. A distinct UZR point is observed at R = 78.38 kΩ, V_0 = 20.42 MV, corresponding to an EP in the system. The reflection coefficients at this parameter combination are plotted in Fig. 4(b), confirming the suppression of backward reflection (r_b ≈ 0) and a large contrast between forward and backward response. To further quantify this asymmetry, the differential amplitude ||r_f| − |r_b|| and differential phase |∠r_f − ∠r_b| are plotted in Fig. 4(c). A peak in the amplitude difference and a sharp phase jump of nearly π confirm the asymmetric scattering characteristic of an EP. The presence of the exceptional point is further validated by examining the eigenvalue evolution of the scattering matrix. In Fig. 4(d), the eigenvalues are plotted in the complex plane, showing a coalescence trajectory near the EP. Figures 4(e)–(g) present the real, imaginary, and absolute parts of the eigenvalues as functions of frequency. At the UZR frequency, both real and imaginary components coalesce, confirming the degeneracy of eigenvalues and reinforcing the presence of a non-Hermitian EP.
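As a quick numerical illustration of the eigenvalue coalescence in Eq. (3), consider the following sketch; the complex amplitudes used here are illustrative placeholders, not values from the study.

```python
import numpy as np

def scattering_eigvals(t, rf, rb):
    """Eigenvalues of S = [[t, rb], [rf, t]] (Eq. 3): lambda = t +/- sqrt(rf*rb)."""
    root = np.sqrt(rf * rb + 0j)   # +0j keeps the square root complex-valued
    return t + root, t - root

# At a UZR point (rb = 0) the square root vanishes, so both eigenvalues
# coalesce at the common transmission t -- the signature of an exceptional point.
l1, l2 = scattering_eigvals(t=0.4 + 0.1j, rf=0.8j, rb=0.0)
assert l1 == l2 == 0.4 + 0.1j
```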
Figure 4. Parametric analysis of unidirectional zero reflection (UZR) and its relation to exceptional points (EPs). a, Colormap of the contrast ratio α = r_f/r_b between forward and backward reflection as a function of resistance (R) and voltage feedback amplitude (V_0), with inductance fixed at L = 1 H. Identical circuit parameters are applied to all piezoelectric layers. A sharp peak in α identifies the condition for UZR at 0.15 MHz, corresponding to R = 78.38 kΩ and V_0 = 20.42 MV. b, Frequency-dependent reflection magnitude (top) and phase (bottom) for forward (solid) and backward (dashed) incidence, confirming complete suppression of forward reflection at the UZR frequency along with a π phase difference. c, Absolute differences in reflection magnitude (top) and phase (bottom) between forward and backward incidence, showing a pronounced asymmetry at 0.15 MHz. d, Eigenvalue trajectories of the scattering matrix in the complex plane showing the coalescence of λ_1 and λ_2 at the EP. e–g, Frequency-dependent plots of the imaginary parts (e), real parts (f), and magnitudes (g) of the eigenvalues, confirming the occurrence of a second-order exceptional point at the UZR frequency. The π phase jump observed in b and c is a spectral signature of the EP. These results confirm that UZR coincides with an exceptional point of the scattering matrix composed of reflection and transmission coefficients (r_f, r_b, t_f, t_b).

These results demonstrate that external circuit tuning via resistive, inductive, and voltage feedback elements provides a powerful and reconfigurable platform for accessing exceptional points and achieving UZR in elastic metamaterials. Unlike conventional systems where EPs are fixed by material properties or geometry, the piezoelectric-circuit hybrid structure enables on-demand tuning of asymmetric wave transport. This approach opens new directions for reprogrammable waveguides, vibration control, and logic devices based on elastic waves.

Dynamic Homogenization

The effective properties of a heterogeneous layered composite, derived via dynamic homogenization21, can capture unusual coupling mechanisms that arise from broken inversion symmetry in periodic systems. Among these, Willis coupling and electro-momentum coupling play a central role in enabling asymmetric wave propagation. In our previous work20, we employed an ensemble-averaging approach to extract the effective constitutive parameters of an infinitely periodic piezoelectric composite with external shunt circuits in the long-wavelength limit. This framework allowed us to rigorously quantify the influence of a passive shunt circuit, comprising an inductor and a resistor, on the effective medium properties. In particular, we demonstrated that the inclusion of such circuits modifies the asymmetry factor, a key parameter that governs the degree of wave asymmetry. In this study, we adopt a transfer matrix-based homogenization approach26 to replace the finite piezoelectric layered system with a single homogenized domain, as illustrated in Fig. 5. Figure 5(a) depicts the original heterogeneous structure, where wave scattering is computed using the microscopic properties of individual layers. In contrast, dynamic homogenization yields an effective medium, as shown in Fig. 5(b), characterized by emergent coupling terms that do not appear in conventional media. Specifically, the effective constitutive relation reveals additional Willis coupling terms, ˜S, ˜S†, and electro-momentum coupling terms, ˜W, ˜W†, such that the generalized 1D constitutive law is given by:

\[ \begin{pmatrix} \langle\sigma\rangle \\ \langle D\rangle \\ \langle p\rangle \end{pmatrix} = \begin{pmatrix} \tilde{C} & -\tilde{B} & \tilde{S} \\ \tilde{B} & \tilde{A} & \tilde{W} \\ \tilde{S}^{\dagger} & -\tilde{W}^{\dagger} & \tilde{\rho} \end{pmatrix} \begin{pmatrix} \langle\varepsilon\rangle \\ \langle E\rangle \\ \langle\dot{u}\rangle \end{pmatrix} \tag{5} \]

where all quantities are scalars representing spatially averaged effective fields: ⟨σ⟩ is the stress, ⟨D⟩ the electric displacement, and ⟨p⟩ the momentum density, while ⟨ε⟩, ⟨E⟩, and ⟨u̇⟩ denote the strain, electric field, and particle velocity, respectively.
The coefficients ˜C, ˜A, and ˜ρ represent the effective stiffness, permittivity, and mass density; ˜B is the piezoelectric coupling; and ˜S, ˜S† and ˜W, ˜W† encode the Willis and electro-momentum couplings. To extract the effective properties, a transfer matrix is constructed for both the heterogeneous and homogenized representations of the piezoelectric composite. These matrices are compared by matching their scattering coefficients under identical boundary and loading conditions. Since the homogenized constitutive relation (Eq. (5)) contains nine unknown parameters, the transfer matrix must be defined to fully capture both mechanical and electrical degrees of freedom. For the heterogeneous composite, we define a four-dimensional state vector that includes the mechanical displacement u, electric potential φ, stress σ, and electric displacement D. Under time-harmonic excitation, the governing equations are written in first-order form as:

\[ \frac{d}{dz} \begin{pmatrix} u \\ \varphi \\ \sigma \\ D \end{pmatrix} = \begin{pmatrix} 0 & 0 & \frac{1}{\check{C}} & 0 \\ 0 & 0 & -\frac{V_0}{\check{C}\, l_p} & \frac{s A_p Z}{l_p} \\ s^2 \rho & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} u \\ \varphi \\ \sigma \\ D \end{pmatrix} \tag{6} \]

where ρ is the mass density, and all other parameters are as previously defined. This formulation enables the construction of a 4×4 transfer matrix that accurately captures the coupled electromechanical behavior of the heterogeneous piezoelectric unit cell. It accounts for the interaction between the mechanical and electrical fields mediated by the piezoelectric effect, as well as the influence of the external shunt circuit through the impedance Z and feedback coefficient V_0. Refer to the Methods section for the derivation of the transfer matrix and effective properties.

Figure 5. Dynamic homogenization of a layered piezoelectric composite. a, Schematic of a one-dimensional periodic piezoelectric composite consisting of alternating heterogeneous piezoelectric layers embedded between aluminum waveguide sections and terminated with perfectly matched layers (PML). Due to the lack of inversion symmetry and spatial variation, elastic waves exhibit direction-dependent reflection (r_f, r_b). b, The composite is dynamically homogenized into an equivalent continuum domain characterized by effective macroscopic properties. These include the stiffness tensor ˜C, effective density ˜ρ, and the piezoelectric coupling tensors ˜A and ˜B. Importantly, the homogenized model also incorporates the Willis coupling tensors ˜S, ˜S† (describing coupling between momentum and strain), and the electro-momentum coupling tensors ˜W, ˜W† (capturing coupling between electric field and momentum). The homogenized medium retains the same poling direction as the original layered system and reproduces its macroscopic wave behavior. This framework enables effective modeling of asymmetric wave propagation in complex piezoelectric composites.

To elucidate the physical origins of asymmetric scattering behavior, we compute the frequency-dependent effective constitutive properties of the layered piezoelectric composite using a dynamic homogenization approach. The configuration corresponds to the parameter set obtained from the earlier parametric study, which exhibited unidirectional zero reflection (UZR) at 0.15 MHz. Figure 6 shows the real and imaginary components of the homogenized constitutive properties. As evident in these plots, the composite exhibits strong frequency-dependent non-Hermiticity due to the lossy and active components in the shunt circuits, as reflected in the large imaginary parts of the constitutive tensors.
In particular, the emergence of non-zero ˜S and ˜W, which break reciprocity and time-reversal symmetry, confirms the presence of Willis and electro-momentum coupling enabled by the broken inversion symmetry and external circuit feedback. These couplings play a central role in generating asymmetric wave reflection. Importantly, the Hermitian conjugates of the off-diagonal coupling terms ˜B† and ˜W† (shown as dashed lines) deviate from the original tensors, confirming the non-Hermitian nature of the effective medium. This validates that the asymmetric wave propagation arises from both geometric asymmetry and circuit-induced gain/loss modulation. Notably, a sharp discontinuity in ˜ρ and resonance peaks in ˜S and ˜W coincide with the UZR frequency at 0.15 MHz, highlighting the critical role of dynamic resonances in tuning wave asymmetry.

Figure 6. Effective constitutive properties of the layered piezoelectric composite exhibiting UZR at 0.15 MHz. Frequency-dependent homogenized parameters are extracted using dynamic transfer matrix–based homogenization for the circuit configuration yielding unidirectional zero reflection (UZR). a, Dielectric stiffness ˜A. b, Piezoelectric coupling ˜B and its Hermitian conjugate ˜B† deviate significantly, confirming broken symmetry. c, Electro-mechanical stiffness ˜C reflects circuit-tuned stiffness modulation. d, Effective density ˜ρ shows a discontinuity and complex-valued behavior near 0.15 MHz, indicating resonance. e, Willis coupling ˜S and its symmetric off-diagonal term ˜S† demonstrate non-Hermitian asymmetry, peaking near the UZR condition. f, Electro-momentum coupling ˜W and ˜W† confirm the emergence of non-Hermitian behavior.

Unidirectional zero reflection and perfect absorption

Consider a configuration with a single unit cell embedded in an aluminum background rod, as illustrated in Fig. 7(a). Based on the parametric study conducted earlier with a five–unit cell composite piezoelectric rod (see Fig. 4), UZR is achieved at R = 78.38 kΩ and V_0 = 20.42 MV, corresponding to an EP when the inductance is set to L = 1 H. Remarkably, this condition also holds for a single scatterer, as verified through a scattering analysis and reflection measurements. The persistence of UZR behavior in a single unit cell stems from the local nature of the non-Hermitian scattering mechanism. Specifically, the piezoelectric layer shunted with an external circuit introduces an effective non-Hermitian coupling that governs the reflection asymmetry. Since the scattering matrix for this configuration remains a 2 × 2 system, the coalescence of its eigenvalues (i.e., the EP condition) and the associated unidirectional response can emerge even in the absence of periodicity. Thus, the essential features of UZR and EP behavior are preserved in the single-cell configuration when the circuit parameters are properly tuned. Figure 7(b) presents the frequency-dependent reflection amplitude and phase for both forward and backward incidence. UZR is observed at 0.15 MHz, where the reflection magnitude for backward incidence vanishes. The corresponding reflection asymmetry, quantified in Fig. 7(c), shows a sharp peak in amplitude contrast and a phase difference approaching π, which is a characteristic signature of an EP. Further confirmation is provided in Figs. 7(d)–(g), where the eigenvalues of the scattering matrix extracted from the single-cell response are shown to coalesce at the EP frequency.
The trajectory in the complex plane [Fig. 7(d)], along with the frequency evolution of their imaginary, real, and absolute components [Figs. 7(e)–(g)], collectively validates the existence of an exceptional point in the reduced system.

Figure 7. Unidirectional zero reflection and exceptional point behavior in a single-unit-cell piezoelectric composite. a, Schematic of a one-dimensional waveguide consisting of a single unit cell made of three piezoelectric layers (PZT-4, BaTiO3, and PVDF), each connected to an identical shunt circuit comprising a resistor (R_i), inductor (L_i), and strain-proportional voltage feedback source (V_{0i}ε). The waveguide is bounded by homogeneous aluminum sections and terminated with perfectly matched layers (PML). b, Reflection magnitude (top) and phase (bottom) for forward (solid) and backward (dashed) incidence show a strong asymmetry near 0.15 MHz, where forward reflection vanishes, indicating unidirectional zero reflection (UZR). c, Difference in reflection magnitude (|∆r|) and phase (|∆∠r|) between forward and backward directions, both peaking at the UZR frequency. d, Trajectory of the scattering matrix eigenvalues λ_1 and λ_2 in the complex plane, indicating coalescence at an exceptional point (EP). e–g, Frequency-dependent evolution of the imaginary parts (e), real parts (f), and absolute values (g) of the eigenvalues further confirm the occurrence of a second-order EP at 0.15 MHz. These results demonstrate that both UZR and EP phenomena can be realized with just a single unit cell, underscoring the strong asymmetry induced by shunt-circuit-mediated electro-momentum coupling.

An important question that arises is whether the reflection asymmetry is strong enough to achieve a reflection amplitude of zero in one direction and unity in the other. This scenario corresponds to the regime of unidirectional perfect absorption (UPA), wherein a wave incident from one side is completely absorbed, while it is entirely retroreflected for incidence from the opposite side. To explore this regime, we formulate an optimization problem aimed at maximizing the reflection asymmetry, quantified as |r_f| − |r_b|, by tuning the circuit parameters associated with the three shunted piezoelectric layers. The objective is to determine the optimal combination of resistances, inductances, and voltage feedback gains that yields the maximum possible asymmetry, thereby enabling near-perfect reflection in one direction and zero reflection in the other. The optimization is carried out using MATLAB's fmincon function, targeting the simultaneous realization of unidirectional zero reflection (UZR) and unidirectional perfect absorption (UPA) at the design frequency of 0.15 MHz. The optimized circuit parameters are found to be: {R_1, R_2, R_3} = {0, 29.28, 57.85} kΩ, {L_1, L_2, L_3} = {0, 0.42, 1.20} H, {V_0^1, V_0^2, V_0^3} = {7.66, 44.28, 12.06} MV. It is important to note that this solution is not unique; alternative parameter combinations may also yield near-extreme asymmetry depending on the initial guess and convergence to local optima. The stochastic nature of the initialization process implies that different runs may lead to different but functionally equivalent solutions (see the Methods section for more details). The results of the scattering analysis with these optimized circuit parameters applied to a single unit-cell scatterer are shown in Fig. 8. The reflection amplitude and phase in Fig. 8(a) demonstrate that the forward reflection is unity while the backward reflection is nearly zero, achieving UPA.
The asymmetry metrics in Fig. 8(b) further confirm this behavior, with a sharp peak in |r_f| − |r_b| and a phase contrast of approximately π, consistent with EP conditions. Moreover, the transmission plots in Fig. 8(c) indicate that transmission is suppressed at and beyond the EP frequency, while the backward reflection approaches unity in the high-frequency regime, realizing perfect retroreflection. The presence of an exceptional point is further validated by the eigenvalue evolution of the scattering matrix shown in Figs. 8(d)–(g), where the eigenvalues coalesce and approach zero magnitude at the EP frequency, consistent with zero transmission, as predicted by Eq. (3). This result demonstrates that extreme scattering asymmetry and critical-point behavior can be engineered entirely through circuit-based tuning, even in compact single-unit-cell structures.

Figure 8. Simultaneous unidirectional perfect absorption and zero reflection via multi-parameter circuit optimization. (a) Reflection magnitude (top) and phase (bottom) for forward (solid) and backward (dashed) incidence. At 0.15 MHz, the system exhibits near-unity reflection for forward incidence and near-zero reflection for backward incidence. As transmission is also negligible in both directions, this results in perfect absorption in the backward direction (zero reflection and zero transmission) and perfect reflection in the forward direction (unity reflection with no transmission or absorption). (b) Reflection asymmetry in both magnitude and phase between forward and backward incidence, peaking at the target frequency. (c) Transmission magnitude and phase for both directions are symmetric and nearly zero at 0.15 MHz, confirming complete energy dissipation for backward incidence and no loss for forward incidence. This confirms unidirectional perfect absorption (UPA). (d) Trajectories of the eigenvalues λ_1 and λ_2 of the scattering matrix in the complex plane, showing coalescence at an exceptional point (EP). (e–g) Frequency-dependent behavior of the eigenvalues' imaginary parts (e), real parts (f), and magnitudes (g), confirming second-order EP characteristics near the same frequency. The response is achieved by independently tuning nine shunt circuit parameters: resistances (R_1, R_2, R_3), inductances (L_1, L_2, L_3), and strain-proportional voltage feedback gains (V_0^1, V_0^2, V_0^3) assigned to the three piezoelectric layers in the unit cell.

Discussion

Piezoelectric metamaterials with external shunt circuits provide a powerful avenue for controlling asymmetric wave phenomena using a fixed physical structure, unlike traditional elastic metamaterials that require geometric modifications. In this study, we demonstrate how resistive, inductive, and strain-proportional voltage feedback circuits embedded in a 1D layered piezoelectric composite can be used to precisely modulate asymmetric wave propagation. The composite structure supports Willis and electro-momentum coupling, as revealed through dynamic homogenization, which captures the nonlocal and non-Hermitian interactions emerging from broken inversion symmetry and circuit-induced loss and gain. By parametrically tuning the shunt circuit components, we show that the degree of reflection asymmetry can be engineered to realize exotic wave phenomena such as unidirectional zero reflection (UZR) and unidirectional perfect absorption (UPA) at target frequencies. The voltage feedback, in particular, introduces an additional axis of control beyond the previously explored resistive and inductive loadings.
This enhancement enables the realization of exceptional points (EPs) in the scattering matrix eigenvalue spectrum through purely circuit-based tuning, even within a single unit-cell configuration. The persistence of such critical-point behavior without periodicity underscores the localized nature of non-Hermitian scattering mechanisms. We employ a standard transfer matrix formalism to efficiently compute the scattering response and establish a homogenization framework in which the transfer matrix of a finite composite stack is matched with that of an equivalent medium governed by modified constitutive laws. The extracted effective parameters reveal nonzero Willis and electro-momentum coupling coefficients, highlighting the interplay between spatial asymmetry and non-Hermitian circuit control. These findings provide a reconfigurable platform for nonreciprocal and asymmetric wave manipulation in compact structures, where critical phenomena like EPs can be accessed without requiring the gain/loss balancing typical of parity-time symmetric systems. Although the demonstrated effects are frequency-specific, the circuit-based tuning approach enables real-time reconfigurability and could be extended to broadband and frequency-agile designs through adaptive or feedback-based control strategies. Future investigations will explore extensions to two- and three-dimensional structures, and experimental realization of programmable wave control in real time. These insights position circuit-coupled piezoelectric composites as a versatile and compact platform for implementing programmable non-Hermitian metamaterials, with potential applications in acoustic isolators, directional sensors, wave-based computing, ultrasonic imaging, and adaptive vibration control.

Methods

Derivation of electro-mechanical elastic constant

For a one-dimensional problem, the constitutive relations for each piezoelectric layer, assuming scalar fields varying only along the z-direction, can be written as:

\[ \sigma = C\varepsilon - BE, \qquad D = B\varepsilon + AE, \tag{7} \]

where σ and ε are the longitudinal stress and strain, D and E are the electric displacement and electric field, and C, A, and B denote the elastic constant, dielectric permittivity, and piezoelectric coupling coefficient, respectively. Assuming a transverse cross-sectional area A_p, the current I through an external circuit and the voltage V across a layer are given by:

\[ I = \frac{\partial}{\partial t} \int_{A_p} D \, d\Omega_w, \qquad V = \int_{l_p} E \, dz, \tag{8} \]

where Ω_w is the cross-section of the piezoelectric layer, and l_p is the layer thickness. Assuming a spatially uniform electric field and applying the Laplace transform with s = −iω, Eq. (8) becomes:

\[ I = s D A_p, \qquad V = E l_p. \tag{9} \]

Applying Kirchhoff's voltage law to the shunt circuit, the voltage balance is:

\[ V + IZ = V_0 \varepsilon, \tag{10} \]

where Z = sL + R is the impedance of a series RL circuit with resistance R and inductance L, and V_0 is the strain-proportional feedback gain. Combining Eqs. (7), (9), and (10), we obtain the effective electromechanical constitutive relation:

\[ \sigma = \left( C + \frac{s A_p B^2 Z}{s A_p A Z + l_p} - \frac{B V_0}{s A_p A Z + l_p} \right) \varepsilon = \check{C}\varepsilon, \tag{11} \]

where ˇC is the electromechanical elastic constant, which is tunable via the shunt impedance Z and the strain-proportional voltage gain V_0. This effective constant ˇC inherits the periodicity of the layered metamaterial and encapsulates the circuit-driven modulation of wave propagation behavior.
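For concreteness, a minimal numerical sketch of Eq. (11) follows; the material constants below are illustrative placeholders rather than the PZT-4/BaTiO3/PVDF values used in the study.

```python
import numpy as np

def effective_stiffness(omega, C, A, B, Ap, lp, R, L, V0):
    """Effective electromechanical elastic constant (Eq. 11).

    C  : elastic constant of the layer [Pa]
    A  : dielectric permittivity [F/m]
    B  : piezoelectric coupling coefficient [C/m^2]
    Ap : cross-sectional area [m^2];  lp : layer thickness [m]
    R, L : shunt resistance [Ohm] and inductance [H]
    V0 : strain-proportional voltage feedback gain [V]
    """
    s = -1j * omega            # Laplace variable, s = -i*omega
    Z = s * L + R              # series RL shunt impedance
    denom = s * Ap * A * Z + lp
    return C + s * Ap * B**2 * Z / denom - B * V0 / denom

# Example: frequency sweep with placeholder parameters
omega = 2 * np.pi * np.linspace(0.01e6, 0.3e6, 500)   # 0.01-0.3 MHz
C_eff = effective_stiffness(omega, C=115e9, A=6.75e-9, B=15.1,
                            Ap=1e-6, lp=1e-3, R=78.38e3, L=1.0, V0=20.42e6)
# The imaginary part of C_eff encodes the circuit-induced loss or gain.
```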
Scattering matrix calculations

The transfer matrix for the i-th layer is given by:

\[ T_i = \begin{pmatrix} \cos(k_i l_p^i) & \dfrac{\sin(k_i l_p^i)}{\check{C}_i k_i} \\ -\check{C}_i k_i \sin(k_i l_p^i) & \cos(k_i l_p^i) \end{pmatrix}, \qquad \begin{pmatrix} u_i(z_i^R) \\ \sigma_i(z_i^R) \end{pmatrix} = T_i \begin{pmatrix} u_i(z_i^L) \\ \sigma_i(z_i^L) \end{pmatrix} \tag{12} \]

Here, k_i is the wavenumber in the i-th layer, and z_i^L, z_i^R denote the left and right boundaries of the layer. The transfer matrix T_i relates the state vector (displacement and stress) across each layer. The displacement and stress fields within each layer are expressed as the sum of forward and backward traveling wave components:

\[ u_i(z,t) = A_i e^{i(k_i z - \omega t)} + B_i e^{-i(k_i z + \omega t)}, \qquad k_i = \omega \sqrt{\frac{\rho_i}{\check{C}_i}} \tag{13} \]
\[ \sigma_i(z,t) = i k_i \check{C}_i A_i e^{i(k_i z - \omega t)} - i k_i \check{C}_i B_i e^{-i(k_i z + \omega t)} \tag{14} \]

where A_i and B_i are the amplitudes of the rightward and leftward traveling waves, ρ_i is the density of the i-th layer, and ω is the angular frequency. The total transfer matrix for the finite composite, including aluminum layers on both sides, is given by:

\[ T_f = T_0 \,(T_1 T_2 T_3)^n\, T_0 \tag{15} \]
\[ T_b = T_0 \,(T_3 T_2 T_1)^n\, T_0 \tag{16} \]

corresponding to forward and backward wave incidence, respectively. Here, n denotes the number of unit cells in the composite, and each unit cell comprises three piezoelectric layers with different shunt circuit configurations. The reversal of the layer sequence in T_b captures the asymmetry introduced by breaking spatial inversion symmetry. The scattering coefficients are then obtained by enforcing continuity of displacement and stress at the aluminum–composite interfaces:

\[ \begin{pmatrix} t_f \\ 0 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ i C_0 k_0 & -i C_0 k_0 \end{pmatrix}^{-1} T_f \begin{pmatrix} 1 & 1 \\ i C_0 k_0 & -i C_0 k_0 \end{pmatrix} \begin{pmatrix} 1 \\ r_f \end{pmatrix} = M_f \begin{pmatrix} 1 \\ r_f \end{pmatrix}, \tag{17} \]
\[ \begin{pmatrix} t_b \\ 0 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ i C_0 k_0 & -i C_0 k_0 \end{pmatrix}^{-1} T_b \begin{pmatrix} 1 & 1 \\ i C_0 k_0 & -i C_0 k_0 \end{pmatrix} \begin{pmatrix} 1 \\ r_b \end{pmatrix} = M_b \begin{pmatrix} 1 \\ r_b \end{pmatrix} \tag{18} \]

Here, C_0 and k_0 are the elastic modulus and wavenumber of the aluminum background. The matrices M_f and M_b encode the scattering response of the composite for forward and backward incidence.
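A minimal numerical sketch of this transfer-matrix scattering calculation (Eqs. (12)–(18) together with Eq. (2)) is given below; the material values are illustrative placeholders, and the effective constants ˇC_i would in practice come from Eq. (11) and be complex-valued.

```python
import numpy as np

def layer_T(omega, C, rho, l):
    """Transfer matrix of one layer (Eq. 12); C may be complex for shunted layers."""
    k = omega * np.sqrt(rho / (C + 0j))
    return np.array([[np.cos(k * l),           np.sin(k * l) / (C * k)],
                     [-C * k * np.sin(k * l),  np.cos(k * l)]])

def reflections(omega, cells, C0, rho0, l0, n=1):
    """Forward/backward reflection r_f, r_b via Eqs. (15)-(18) and Eq. (2).

    cells : list of (C_eff, rho, l) for the three layers of one unit cell
    C0, rho0, l0 : background (aluminum) modulus, density, section length
    """
    T0 = layer_T(omega, C0, rho0, l0)
    Ts = [layer_T(omega, C, r, l) for (C, r, l) in cells]
    k0 = omega * np.sqrt(rho0 / C0)
    W = np.array([[1, 1], [1j * C0 * k0, -1j * C0 * k0]])  # wave-to-state map
    out = []
    for ordered in (Ts, Ts[::-1]):          # forward: T1 T2 T3; backward: T3 T2 T1
        Tcell = ordered[0] @ ordered[1] @ ordered[2]
        T = T0 @ np.linalg.matrix_power(Tcell, n) @ T0
        M = np.linalg.inv(W) @ T @ W        # Eqs. (17)-(18)
        out.append(-M[1, 0] / M[1, 1])      # r = -M(2,1)/M(2,2), Eq. (2)
    return out                              # [r_f, r_b]

# Placeholder example at 0.15 MHz (values are NOT the paper's parameters):
cells = [(60e9 + 5e9j, 7500, 1.0e-3), (110e9, 6020, 1.4e-3), (4e9, 1780, 0.6e-3)]
rf, rb = reflections(2 * np.pi * 0.15e6, cells, C0=69e9, rho0=2700, l0=5e-3)
```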
Derivation of transfer matrix for dynamic homogenization

Equation (6) defines a linear system of the form Ψ′(z) = Q(s)Ψ(z), where Ψ(z) is the state vector and Q(s) is the system matrix dependent on material and circuit parameters. The corresponding transfer matrix T(l_p) over a single piezoelectric layer of length l_p is given by:

\[ \vec{\Psi}(z + l_p) = T(l_p)\,\vec{\Psi}(z) = \exp[Q(s)\, l_p]\,\vec{\Psi}(z), \tag{19} \]

where the exponential of the matrix Q(s) governs the spatial evolution of the state vector due to wave propagation through the layer. The transfer matrix thus relates the electromechanical state at the two ends of the layer and captures the full coupled dynamic response. To obtain the transfer matrix of a full unit cell, which may consist of multiple piezoelectric and elastic sublayers, the transfer matrices of the individual layers are sequentially multiplied in the order of wave propagation. For a three-layer unit cell with layer lengths l_p^1, l_p^2, l_p^3, the overall transfer matrix T_uc in the forward direction is given by:

\[ T_{uc} = T_3(l_p^3)\, T_2(l_p^2)\, T_1(l_p^1), \tag{20} \]

with total unit cell length l = l_p^1 + l_p^2 + l_p^3. Under the dynamic homogenization framework, this unit-cell transfer matrix is equated to the exponential of an effective system matrix ˜Q acting over the entire unit cell:

\[ T_{uc} = \exp\!\left( \tilde{Q}\, l \right), \tag{21} \]

where ˜Q encodes the effective dynamic properties of the homogenized unit cell. In the subsequent steps, this matrix is matched with that of the homogenized constitutive model by comparing scattering responses, enabling the extraction of effective parameters, including nonlocal terms such as the Willis and electro-momentum coupling coefficients.

Computation of effective properties

Starting with the effective constitutive laws defined in Eq. (5), a governing equation similar in form to Eq. (6) can be derived for the homogenized medium. This formulation incorporates nonlocal coupling terms such as Willis and electro-momentum couplings, and relates the spatial derivatives of the averaged field variables to their values through a system matrix that captures the effective behavior. The resulting first-order differential system is given by:

\[ \frac{d}{dz} \begin{pmatrix} \langle u\rangle \\ \langle\varphi\rangle \\ \langle\sigma\rangle \\ \langle D\rangle \end{pmatrix} = \begin{pmatrix} -\dfrac{s\tilde{S}_D}{\tilde{C}_D} & 0 & \dfrac{1}{\tilde{C}_D} & \dfrac{\tilde{B}^{\dagger}}{\tilde{C}_D \tilde{A}} \\ \dfrac{s\tilde{W}_D}{\tilde{A}_D} & 0 & \dfrac{\tilde{B}}{\tilde{C}_D \tilde{A}} & -\dfrac{1}{\tilde{A}_D} \\ s^2\!\left( \tilde{\rho} - \dfrac{\tilde{S}^{\dagger}\tilde{S}_D}{\tilde{C}_D} + \dfrac{\tilde{W}^{\dagger}\tilde{W}_D}{\tilde{A}_D} \right) & 0 & \dfrac{s\tilde{S}_D^{\dagger}}{\tilde{C}_D} & -\dfrac{s\tilde{W}_D^{\dagger}}{\tilde{A}_D} \\ 0 & 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} \langle u\rangle \\ \langle\varphi\rangle \\ \langle\sigma\rangle \\ \langle D\rangle \end{pmatrix} \tag{22} \]

where:

\[ \tilde{C}_D = \tilde{C} + \frac{\tilde{B}^{\dagger}\tilde{B}}{\tilde{A}}, \quad \tilde{A}_D = \tilde{A} + \frac{\tilde{B}^{\dagger}\tilde{B}}{\tilde{C}}, \quad \tilde{S}_D = \tilde{S} + \frac{\tilde{B}^{\dagger}\tilde{W}}{\tilde{A}}, \quad \tilde{S}_D^{\dagger} = \tilde{S}^{\dagger} + \frac{\tilde{W}^{\dagger}\tilde{B}}{\tilde{A}}, \quad \tilde{W}_D = \tilde{W} - \frac{\tilde{B}\tilde{S}}{\tilde{C}}, \quad \tilde{W}_D^{\dagger} = \tilde{W}^{\dagger} - \frac{\tilde{S}^{\dagger}\tilde{B}^{\dagger}}{\tilde{C}}. \tag{23} \]

The resulting parameters fully describe the dispersive, asymmetric, and nonlocal behavior of the homogenized piezoelectric composite with external circuitry. This system forms the foundation for constructing the transfer matrix of the homogenized domain, which captures the spatial evolution of the averaged electromechanical state vector. The transfer matrix can then be matched to that of the actual heterogeneous unit cell to extract the full set of effective parameters. Specifically, by comparing the effective system matrix ˜Q in Eq. (21) to the structure of the governing matrix in Eq. (22), whose entries depend on the modified effective parameters defined in Eq. (23), the individual effective coefficients ˜C, ˜A, ˜B, ˜S, ˜W, and ˜ρ can be systematically identified.

Circuit optimization

Optimization was performed using MATLAB's fmincon function with default solver settings and no additional options specified. The objective function, obj = |r_b| − |r_f|, was implemented as a custom function handle that returns the objective function value, where the design vector contains the nine circuit parameters: resistances (R_1, R_2, R_3), inductances (L_1, L_2, L_3), and voltage feedback gains (V_0^1, V_0^2, V_0^3) associated with the three shunted piezoelectric layers. The initial guess for each optimization run was randomly generated using a scaled Gaussian distribution, i.e., randn(9,1)×100, to span a broad range of possible values. To ensure physical feasibility, non-negativity constraints were enforced using a linear inequality of the form −x ≤ 0, implemented via the constraint matrix -eye(9) and the right-hand side vector zeros(9,1). The optimization was repeated with multiple random initializations until the objective function approached a value near −1, corresponding to the regime of unidirectional perfect absorption and zero reflection. While the optimization yields a representative set of optimal parameters, the solution is not unique and may converge to different local optima based on the initial seed.
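A minimal scipy sketch of this multi-start workflow is given below; the objective here is a toy placeholder standing in for the transfer-matrix evaluation of |r_b| − |r_f| (a real implementation would compute r_f and r_b from Eqs. (11) and (15)–(18)).

```python
import numpy as np
from scipy.optimize import minimize

def asymmetry_objective(x):
    """obj = |rb| - |rf| at the design frequency; x packs the nine circuit
    parameters (R1..R3, L1..L3, V01..V03).  The two lines below are toy
    placeholders for the transfer-matrix scattering model."""
    rf = np.tanh(np.linalg.norm(x[:5]) / 100.0)   # placeholder response
    rb = np.exp(-np.linalg.norm(x[5:]) / 100.0)   # placeholder response
    return abs(rb) - abs(rf)

best = None
for seed in range(10):                            # repeated random restarts
    rng = np.random.default_rng(seed)
    x0 = np.abs(rng.standard_normal(9) * 100.0)   # cf. randn(9,1)*100 in MATLAB
    res = minimize(asymmetry_objective, x0, bounds=[(0.0, None)] * 9)
    if best is None or res.fun < best.fun:
        best = res
# A value of best.fun near -1 signals simultaneous UZR and UPA.
```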
& Ruzzene, M. Asymmetric elastic wave propagation and nonreciprocal transmission in nonlinear lattices. Phys. Rev. B 103, 184302, DOI: 10.1103/PhysRevB.103.184302 (2021). 7. Popa, B.-I. & Cummer, S. A. Non-reciprocal and highly nonlinear active acoustic metamaterials. Nat. Commun. 5, 3398, DOI: 10.1038/ncomms4398 (2014). 8. Trainiti, G. & Ruzzene, M. Non-reciprocal elastic wave propagation in spatiotemporal periodic structures. New J. Phys. 18, 083047, DOI: 10.1088/1367-2630/18/8/083047 (2016). 9. Nassar, H., Chen, H., Norris, A. N., Haberman, M. R. & Huang, G. Non-reciprocal wave propagation in modulated elastic metamaterials. Proc. Royal Soc. A: Math. Phys. Eng. Sci. 473, 20170188, DOI: 10.1098/rspa.2017.0188 (2017). 10. Nassar, H., Xu, X., Norris, A. N. & Huang, G. L. Modulated phononic crystals: Non-reciprocal wave propagation and willis materials. J. Mech. Phys. Solids 134, 103705, DOI: 10.1016/j.jmps.2019.103705 (2020). 11. Fleury, R., Sounas, D. & Alù, A. Sound isolation and giant linear nonreciprocity in a compact acoustic circulator. Science 343, 516–519, DOI: 10.1126/science.1246957 (2014). 12. Milton, G. W. & Willis, J. R. On modifications of newton’s second law and linear continuum elastodynamics. Proc. Royal Soc. A: Math. Phys. Eng. Sci. 463, 855–880 (2007). 13. Pernas-Salomón, R. & Shmuel, G. Fundamental principles for generalized willis metamaterials. Phys. Rev. Appl. 14, 064005 (2020). 14. Sieck, C. F., Alù, A. & Haberman, M. R. Origins of willis coupling and acoustic bianisotropy in acoustic metamaterials through source-driven homogenization. Phys. Rev. B 96, 104303 (2017). 15. Muhlestein, M. B., Sieck, C. F., Alù, A. & Haberman, M. R. Reciprocity, passivity and causality in willis materials. Proc. Royal Soc. A: Math. Phys. Eng. Sci. 472, 20160604 (2016). 16. Cho, C., Wen, X., Park, N. & Li, J. Acoustic willis meta-atom beyond the bounds of passivity and reciprocity. Commun. Phys. 4, 82 (2021). 17. Liu, Y. et al. Willis metamaterial on a structured beam. Phys. Rev. X 9, 011040 (2019). 18. Hao, Y., Shen, Y., Groby, J.-P. & Li, J. Experimental demonstration of willis coupling for elastic torsional waves. Wave Motion 112, 102931 (2022). 19. Merkel, A., Romero-García, V., Groby, J.-P., Li, J. & Christensen, J. Unidirectional zero sonic reflection in passive pt-symmetric willis media. Phys. Rev. B 98, 201102 (2018). 20. Danawe, H. & Tol, S. Electro-momentum coupling tailored in piezoelectric metamaterials with resonant shunts. APL Mater. 11 (2023). 14/15 21. Pernas-Salomón, R. & Shmuel, G. Symmetry breaking creates electro-momentum coupling in piezoelectric metamaterials. J. Mech. Phys. Solids 134, 103770 (2020). 22. Pernas-Salomón, R., Haberman, M. R., Norris, A. N. & Shmuel, G. The electromomentum effect in piezoelectric willis scatterers. Wave Motion 106, 102797 (2021). 23. Huynh, H. D. et al. Maximizing electro-momentum coupling in generalized 2d willis metamaterials. Extrem. Mech. Lett. 61, 101981 (2023). 24. Lee, J.-H., Zhang, Z. & Gu, G. X. Maximum electro-momentum coupling in piezoelectric metamaterial scatterers. J. Appl. Phys. 132 (2022). 25. Zhang, Z., Lee, J.-H. & Gu, G. X. Rational design of piezoelectric metamaterials with tailored electro-momentum coupling. Extrem. Mech. Lett. 55, 101785 (2022). 26. Huynh, H. D., Nanthakumar, S., Park, H. S., Rabczuk, T. & Zhuang, X. The effect of electro-momentum coupling on unidirectional zero reflection in layered generalized willis metamaterials. Extrem. Mech. Lett. 77, 102318 (2025). 27. Wu, Z., Yi, J., Xia, R., Chen, J. 
& Li, Z. Versatile non-hermitian piezoelectric metamaterial beam with tunable asymmetric reflections. Front. Phys. 10, 1089250 (2022). 15/15
Unidirectional Zero Reflection and Perfect Absorption via Exceptional Points in Active Piezoelectric Willis Metamaterials Hrishikesh Danawe1 and Serife Tol1,* 1 48109 *Corresponding author(s): Email(s) ABSTRACT Electro-momentum coupling in piezoelectric metamaterials with broken inversion symmetry enables asymmetric elastic wave transport by linking macroscopic electric fields to momentum, an effect analogous to Willis coupling in elastic media. A one-dimensional layered piezoelectric metamaterial integrated with shunt circuits, consisting of a resistor, inductor, and strainproportional voltage feedback gain, is proposed to achieve dynamic control of frequency-dependent stiffness and damping through electromechanical interactions. Tuning the circuit parameters yields direction-dependent wave scattering at targeted frequencies. Dynamic homogenization reveals macroscopic constitutive relations exhibiting both Willis and electro-momentum couplings. Non-Hermitian exceptional points are identified, where scattering eigenmodes coalesce and produce extreme asymmetries in wave response. Near these points, the system realizes unidirectional zero reflection (UZR) and unidirectional perfect absorption (UPA), achieving complete absorption from one direction and total reflection from the opposite side. The findings demonstrate a compact and reconfigurable platform for tunable, directional elastic wave control using passive-active hybrid metamaterials, opening new avenues for programmable devices in acoustic isolation, wave-based computing, sensing, and energy manipulation in solid media. Introduction Nonreciprocal wave propagation is a fascinating area of physics that challenges the conventional symmetry typically associated with wave transmission, implying that the system's response remains invariant under source-receiver interchange. This reciprocity arises from the assumptions of time-reversal symmetry, linearity, and time-invariance of the medium1-3. In contrast, nonreciprocal systems intentionally break this symmetry, allowing waves to propagate preferentially in one direction while being attenuated, reflected, or entirely suppressed in the opposite direction. Such direction-dependent control of wave transport offers transformative opportunities in the design of next-generation acoustic and elastic devices, including unidirectional sensors, isolators, signal routers, and energy-harvesting systems. Traditional strategies to achieve nonreciprocal wave transport often rely on mechanisms such as nonlinear interactions4-7, spatiotemporal modulation8-10, or moving media11. While these approaches have enabled several groundbreaking demonstrations, their practical deployment is often hindered by complexity, requirements for dynamic biasing, or scalability constraints. An emerging alternative leverages engineered metamaterials, particularly those incorporating spatial asymmetry and tailored loss or gain, to realize asymmetric wave responses in linear, time-invariant systems. Within this paradigm, Willis metamaterials12-14, characterized by microstructural asymmetry and bianisotropic constitutive relations, offer a powerful framework for asymmetric wave control. Despite supporting exotic couplings such as stress-velocity and momentum-strain, these materials remain fundamentally reciprocal due to the preservation of time-reversal symmetry and passivity15,16. 
Nevertheless, they can exhibit asymmetric reflection phases for waves incident from opposite directions, owing to direction-dependent impedance introduced by Willis coupling17,18. While the transmission remains symmetric as required by reciprocity, the phase of the reflected waves can differ based on the incidence direction. However, to achieve strong asymmetry in the reflection amplitude, it is typically necessary to introduce loss, thereby rendering the system non-Hermitian17,18. By carefully engineering loss, reflection in one direction can be strongly suppressed, approaching unidirectional zero reflection (UZR), while reflection in the opposite direction remains significant17,19. This phenomenon is closely linked to the concept of exceptional points (EPs) in non-Hermitian physics, where eigenvalues and eigenvectors of the scattering matrix coalesce, leading to abrupt transitions and extreme asymmetries. However, directly tuning loss in physical materials is often impractical. To overcome this challenge, piezoelectric Willis metamaterials enable external circuit-based control of loss and resonance. In our previous work, we introduced a resistive-inductive (RL) shunt circuit to realize tunable asymmetric wave phenomena via externally tailored dissipation and resonance20. These systems leverage strong electromechanical coupling, which enables an 16 Oct 2025 additional bianisotropic term between electric field and momentum, termed electro-momentum coupling21, an electromechanical analog to Willis coupling. Both Willis and electro-momentum couplings contribute to directional wave behavior in piezoelectric systems22. Such couplings can be significantly enhanced through topology optimization23-26. Combined with mechanical loss in the host structure, these systems can exhibit extreme reflection asymmetry and UZR26. Furthermore, the inclusion of tunable resistance enables frequency-selective UZR27. However, achieving both UZR and unidirectional perfect absorption (UPA), where all incoming energy from one direction is absorbed and reflection is entirely suppressed, remains an open challenge for passive elastic systems. To the best of our knowledge, there are no demonstrations of simultaneously achieving UZR and UPA in such systems via dissipation alone. Nonetheless, the electromechanical coupling in piezoelectric metamaterials enables further control through external feedback circuits, offering a pathway to wave control that surpasses the limitations of passive configurations. In this work, we demonstrate that piezoelectric metamaterials composed of stacked layers integrated with tunable shunt circuits featuring strain-proportional voltage feedback, inductive resonance, and resistive damping that can realize non-Hermitian scattering conditions, including exceptional points that yield highly asymmetric wave responses. By tuning the external circuit parameters, we gain precise control over the system's stiffness and loss characteristics, enabling directional scattering tailored to specific frequencies. Using dynamic homogenization, we extract the macroscopic effective properties of the one-dimensional layered metamaterial, revealing the presence and tunability of both Willis and electro-momentum couplings. This circuit-level programmability allows the system to exhibit unidirectional zero reflection (UZR) and unidirectional perfect absorption (UPA), along with retroreflection in the reverse direction. 
By leveraging strong electromechanical interactions and active-passive hybridization, our platform offers a compact, reconfigurable, and scalable solution for nonreciprocal elastic wave manipulation, opening new avenues for wave-based signal routing, isolation, and adaptive control in engineered solids.

Results
Design of Willis metamaterial
We consider wave propagation through a finite, one-dimensional periodic piezoelectric composite consisting of layered piezoelectric materials stacked along their poling direction. The schematic in Fig. 1(a) shows a finite rod composed of five unit cells of the piezoelectric composite embedded within an aluminum rod. Wave propagation is studied in both forward and backward directions by analyzing the reflected and transmitted wave components for an incident wave in each direction. Figure 1(b) illustrates the unit cell of the composite, which comprises piezoelectric layers of PZT-4, BaTiO3, and PVDF. These materials are stacked in a specific pattern to break inversion symmetry, enabling the realization of Willis and electro-momentum coupling and thereby facilitating asymmetric wave phenomena. All layers have the same cross-sectional area, Ap = 1 mm^2, while their lengths are selected as lp^1 = 1 mm, lp^2 = 1.4 mm, and lp^3 = 0.6 mm. Each layer is shunted with an external circuit consisting of an inductor, a resistor, and a voltage feedback source proportional to the strain across the layer. An effective electro-mechanical elastic constant is derived, allowing the shunted piezoelectric layers to be modeled as an equivalent purely elastic medium. The influence of piezoelectric coupling and circuit parameters is incorporated into the modified constitutive behavior, as illustrated in Fig. 1(c). In this formulation, Č denotes the electromechanical elastic constant, which captures the combined effects of the intrinsic material properties and the external circuitry. Here, C and A represent the elastic and dielectric constants, respectively, and B is the piezoelectric coupling coefficient. The term V0 is the voltage feedback coefficient, and Z = sL + R is the shunt impedance, where L and R are the inductance and resistance of the circuit, respectively. The Laplace variable is defined as s = −iω, with ω being the angular frequency. The resistor introduces damping into the system, while the inductor forms an LC circuit resonance with the intrinsic capacitance of the piezoelectric layer. The voltage feedback source provides additional control over the dynamic behavior by exerting an active response proportional to the strain across the layer. Previously, we demonstrated that the resistor and inductor in the external circuit can be used to control Willis and electro-momentum coupling, thereby tuning asymmetric wave phenomena by adjusting these circuit parameters20. Specifically, the LC resonance determines the frequency at which coupling effects are maximized, while the resistor governs the perturbation level of the asymmetry factor, which directly influences the degree of wave asymmetry. By introducing a voltage feedback source proportional to strain, we now demonstrate an enhanced level of control that enables the realization of exotic wave phenomena such as unidirectional zero reflection and perfect absorption.

Scattering analysis
The asymmetric wave behavior is analyzed using a scattering matrix composed of transmission and reflection ratios for forward and backward wave incidence.
This formulation is based on the standard transfer matrix method, which relates the wave amplitudes across each layer of the piezoelectric composite. Figure 2(a) and (b) illustrate the traveling wave components within each layer for forward and backward incidence, respectively, along with the incident (A0,inc, B0,inc), reflected (B0,ref, A0,ref), and transmitted (A0,trans, B0,trans) wave amplitudes in the surrounding aluminum rod. These field distributions form the foundation for computing direction-dependent scattering parameters that reveal the emergence of wave asymmetry induced by circuit-modulated electro-mechanical coupling in the composite.

Figure 1. Schematic and modeling framework for electro-momentum coupling in shunted piezoelectric metamaterials. a, One-dimensional waveguide comprising a layered piezoelectric composite embedded between two aluminum sections and terminated with perfectly matched layers (PML). Elastic waves incident from either direction exhibit asymmetric transmission (tf, tb) and reflection (rf, rb) due to the spatial asymmetry introduced by the shunted composite. The structure features a uniform poling direction, while the arrangement of heterogeneous piezoelectric layers breaks inversion symmetry to support directional wave propagation. b, Zoomed-in view of a representative unit cell consisting of three serially connected piezoelectric layers (PZT-4, BaTiO3, and PVDF), each interfaced with a shunt circuit composed of a resistor (Ri), inductor (Li), and strain-proportional voltage feedback source (V0iε). These circuits actively tailor the electromechanical response of each segment and modulate the macroscopic dynamic behavior through piezoelectric coupling. c, The shunted layer is modeled as an effective elastic layer with a modified electro-mechanical elastic constant Či, incorporating both material properties and circuit-induced effects. This framework enables tunable control of wave propagation through circuit parameters, where the resistor introduces loss, the inductor defines resonance behavior, and the voltage feedback allows dynamic modulation of stiffness.

Each layer is modeled using an effective elastic constant. For the aluminum segments, this corresponds to the intrinsic elastic constant (i.e., Č0 = C0), while for the piezoelectric layers, it represents an electro-mechanical elastic constant Či, which accounts for both the intrinsic material response and the effects of the external shunt circuitry, as described in Fig. 1(c). The reflection and transmission coefficients are defined as the ratios of outgoing to incoming wave amplitudes:

rf = B0,ref / A0,inc ,  rb = A0,ref / B0,inc ,  tf = A0,trans e^{i k0 l} / A0,inc ,  tb = B0,trans e^{i k0 l} / B0,inc    (1)

where A0,inc and B0,inc are the amplitudes of incident waves in the forward and backward directions, respectively. B0,ref and A0,ref denote reflected waves, and A0,trans, B0,trans are the transmitted waves. The exponential term e^{i k0 l} accounts for the phase accumulated over the length l of the composite, with k0 as the wavenumber in the aluminum background.

Figure 2. Wave decomposition in a layered piezoelectric composite for forward and backward incidence. a, Schematic of elastic wave propagation from the left (forward incidence) through a layered structure composed of alternating piezoelectric segments with effective electro-mechanical elastic constants Č1, Č2, and Č3, bounded by homogeneous elastic layers Č0. Each segment is associated with local forward (Ai) and backward (Bi) wave components.
The incident wave (A0,inc) is partially reflected (B0,ref) and transmitted (A0,trans), with no incoming wave from the right (B0 = 0). b, The same configuration under backward incidence, where the wave enters from the right. The incident wave (B0,inc) results in partial transmission (B0,trans) and reflection (A0,ref), with no incoming wave from the left (A0 = 0). This layer-wise decomposition into forward and backward components enables analytical evaluation of reflection and transmission coefficients under both excitation directions using the transfer matrix method.

The expressions for reflection and transmission ratios are:

rf = −Mf(2,1) / Mf(2,2) ,  tf = Mf(1,1) − Mf(1,2) Mf(2,1) / Mf(2,2)
rb = −Mb(2,1) / Mb(2,2) ,  tb = Mb(1,1) − Mb(1,2) Mb(2,1) / Mb(2,2)    (2)

where Mf and Mb are given by Eqs. (17) and (18), respectively. The magnitude and phase of the reflection coefficients for both forward and backward incidence are plotted in Fig. 3(a)-(c) for three different circuit configurations, where the circuit parameters (inductance, resistance, and voltage feedback) are identical across all three layers within each unit cell. In the open-circuit case [Fig. 3(a)], the reflection amplitude remains symmetric, with only phase asymmetry observed between the two directions. Introducing a passive shunt with R = 50 kΩ and L = 1 H [Fig. 3(b)] produces measurable asymmetry in both amplitude and phase, particularly near the LC resonance frequencies, f1 = 0.067 MHz for the PZT-4 layer and f2 = 0.19 MHz for the BaTiO3 layer. The LC resonance frequency of the PVDF layer lies beyond the frequency range considered in this study and is therefore not captured in the plotted results. The amplitude asymmetry primarily arises from energy dissipation introduced by the resistor, which leads to direction-dependent wave attenuation. While dissipation alone does not fundamentally break time-reversal symmetry, it facilitates asymmetric scattering when combined with structural or parametric asymmetries, as is the case in this configuration. Finally, incorporating an active voltage feedback source with V0 = 10 MV [Fig. 3(c)] results in pronounced asymmetry, where both the magnitude and phase of the reflection coefficients differ significantly between forward and backward wave incidence. These effects are especially prominent near the frequencies where the circuit dynamics enhance Willis and electro-momentum coupling. These results demonstrate that external circuit parameters, including both passive and active elements, provide a powerful means to control and tune asymmetric wave behavior. In particular, the emergence of strong reflection asymmetry underscores the potential of dynamic electro-mechanical coupling for breaking reciprocity in piezoelectric composite systems.
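For concreteness, the following minimal Python sketch shows how Eq. (2) extracts the reflection and transmission ratios once a matrix Mf or Mb has been assembled from the transfer matrices of Eqs. (17) and (18); the function name and the identity-matrix sanity check are illustrative, not part of the paper's code.

```python
import numpy as np

def scattering_from_M(M):
    """Reflection and transmission from a 2x2 amplitude-mapping matrix, Eq. (2).

    M maps the incidence-side amplitudes (1, r) to the far-side amplitudes
    (t, 0); the zero outgoing-wave condition on the far side fixes r, and t
    then follows by substitution (0-indexed: M[1, 0] is M(2,1), etc.).
    """
    r = -M[1, 0] / M[1, 1]
    t = M[0, 0] - M[0, 1] * M[1, 0] / M[1, 1]
    return r, t

# Sanity check: a transparent section (M = identity) gives r = 0, t = 1.
print(scattering_from_M(np.eye(2, dtype=complex)))
```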
Figure 3. Effect of shunt circuit parameters on directional reflection behavior. Frequency-dependent reflection magnitude (|r|, top row) and phase (∠r, bottom row) for forward (solid lines) and backward (dashed lines) wave incidence are shown under different shunting conditions. Identical circuit parameters are applied to all piezoelectric layers. a, In the open-circuit case (no shunt), the reflection magnitudes are symmetric, while phase asymmetry arises solely due to structural asymmetry. b, Introducing a resistor (R = 50 kΩ) and inductor (L = 1 H) leads to asymmetry in both magnitude and phase of the reflection ratio, with losses induced by the resistor breaking amplitude symmetry. The LC resonance condition contributes to frequency-selective enhancement of this asymmetry. c, Adding voltage feedback (V0 = 10 MV) further modifies the reflection characteristics across the entire frequency spectrum, introducing tunability and stronger contrast between forward and backward responses. These results illustrate how circuit parameters enable directional control over elastic wave reflection in shunted piezoelectric composites.

Exceptional points and unidirectional zero reflection
Unidirectional zero reflection (UZR) can be realized at exceptional points (EPs), where the eigenvalues and eigenvectors of the scattering matrix coalesce. The scattering matrix S is defined as:

S = [ tf , rb ; rf , tb ] ,  eig(S) = {λ1, λ2} ,  λ1 = t + √(rf rb) ,  λ2 = t − √(rf rb)    (3)

Here, λ1 and λ2 are the eigenvalues of the scattering matrix. Since the system is reciprocal, the transmission is equal in both directions, i.e., t = tf = tb. At the UZR condition, either rf = 0 or rb = 0, resulting in λ1 = λ2 = t, which indicates a coalescence of eigenvalues and eigenvectors, a defining signature of an exceptional point. The scattering matrix elements tf,b and rf,b, as derived in Eq. (2), can be used to locate EPs in the system. By tuning the external circuit parameters, these non-Hermitian degeneracies can be accessed, enabling UZR behavior in the piezoelectric metamaterial. To systematically identify UZR conditions, we perform a parametric sweep over resistance R and feedback voltage V0, while keeping the inductance fixed at L = 1 H. The circuit parameters are kept identical across all three layers of the unit cell. A contrast factor is defined to quantify asymmetry:

α = rf / rb    (4)

At UZR, |α| → 0 or ∞. Figure 4(a) presents the magnitude of α as a function of R and V0 at a design frequency of 0.15 MHz, revealing a region of strong reflection asymmetry. A distinct UZR point is observed at R = 78.38 kΩ, V0 = 20.42 MV, corresponding to an EP in the system. The reflection coefficients at this parameter combination are plotted in Fig. 4(b), confirming the suppression of backward reflection (rb ≈ 0) and a large contrast between forward and backward response. To further quantify this asymmetry, the differential amplitude ||rf| − |rb|| and differential phase |∠rf − ∠rb| are plotted in Fig. 4(c). A peak in the amplitude difference and a sharp phase jump of nearly π confirm the asymmetric scattering characteristic of an EP. The presence of the exceptional point is further validated by examining the eigenvalue evolution of the scattering matrix. In Fig. 4(d), the eigenvalues are plotted in the complex plane, showing a coalescence trajectory near the EP. Figures 4(e)-(g) present the real, imaginary, and absolute parts of the eigenvalues as functions of frequency. At the UZR frequency, both real and imaginary components coalesce, confirming the degeneracy of eigenvalues and reinforcing the presence of a non-Hermitian EP.
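The EP condition of Eq. (3) is easy to probe numerically. The sketch below, with illustrative names and values, computes the scattering-matrix eigenvalues and the contrast factor α of Eq. (4) from given (t, rf, rb); at UZR one of rf, rb vanishes and the two eigenvalues coalesce at t.

```python
import numpy as np

def scattering_eigenvalues(t, rf, rb):
    """Eigenvalues of S = [[t, rb], [rf, t]] per Eq. (3).

    The complex square root keeps the formula valid for lossy (complex)
    coefficients; lambda1 = lambda2 = t whenever rf * rb = 0 (an EP/UZR point).
    """
    root = np.sqrt(complex(rf * rb))
    return t + root, t - root

def contrast(rf, rb):
    """Contrast factor alpha = rf / rb of Eq. (4); |alpha| -> 0 or inf at UZR."""
    return rf / rb

# Illustrative values only: rb = 0 forces eigenvalue coalescence at t.
print(scattering_eigenvalues(0.3 + 0.1j, 0.5, 0.0))
```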
Figure 4. Parametric analysis of unidirectional zero reflection (UZR) and its relation to exceptional points (EPs). a, Colormap of the contrast ratio α = rf/rb between forward and backward reflection as a function of resistance (R) and voltage feedback amplitude (V0), with inductance fixed at L = 1 H. Identical circuit parameters are applied to all piezoelectric layers. A sharp peak in α identifies the condition for UZR at 0.15 MHz, corresponding to R = 78.38 kΩ and V0 = 20.42 MV. b, Frequency-dependent reflection magnitude (top) and phase (bottom) for forward (solid) and backward (dashed) incidence, confirming complete suppression of forward reflection at the UZR frequency along with a π phase difference. c, Absolute differences in reflection magnitude (top) and phase (bottom) between forward and backward incidence, showing a pronounced asymmetry at 0.15 MHz. d, Eigenvalue trajectories of the scattering matrix in the complex plane showing the coalescence of λ1 and λ2 at the EP. e-g, Frequency-dependent plots of the imaginary parts (e), real parts (f), and magnitudes (g) of the eigenvalues, confirming the occurrence of a second-order exceptional point at the UZR frequency. The π phase jump observed in b and c is a spectral signature of the EP. These results confirm that UZR coincides with an exceptional point of the scattering matrix composed of reflection and transmission coefficients (rf, rb, tf, tb).

These results demonstrate that external circuit tuning via resistive, inductive, and voltage feedback elements provides a powerful and reconfigurable platform for accessing exceptional points and achieving UZR in elastic metamaterials. Unlike conventional systems where EPs are fixed by material properties or geometry, the piezoelectric-circuit hybrid structure enables on-demand tuning of asymmetric wave transport. This approach opens new directions for reprogrammable waveguides, vibration control, and logic devices based on elastic waves.

Dynamic Homogenization
The effective properties of a heterogeneous layered composite, derived via dynamic homogenization21, can capture unusual coupling mechanisms that arise from broken inversion symmetry in periodic systems. Among these, Willis coupling and electro-momentum coupling play a central role in enabling asymmetric wave propagation. In our previous work20, we employed an ensemble-averaging approach to extract the effective constitutive parameters of an infinitely periodic piezoelectric composite with external shunt circuits in the long-wavelength limit. This framework allowed us to rigorously quantify the influence of a passive shunt circuit, comprising an inductor and a resistor, on the effective medium properties. In particular, we demonstrated that the inclusion of such circuits modifies the asymmetry factor, a key parameter that governs the degree of wave asymmetry. In this study, we adopt a transfer matrix-based homogenization approach26 to replace the finite piezoelectric layered system with a single homogenized domain, as illustrated in Fig. 5. Figure 5(a) depicts the original heterogeneous structure, where wave scattering is computed using the microscopic properties of individual layers. In contrast, dynamic homogenization yields an effective medium, as shown in Fig. 5(b), characterized by emergent coupling terms that do not appear in conventional media. Specifically, the effective constitutive relation reveals additional Willis coupling terms, S̃, S̃†, and electro-momentum coupling terms, W̃, W̃†, such that the generalized 1D constitutive law is given by:

[ ⟨σ⟩ ; ⟨D⟩ ; ⟨p⟩ ] = [ C̃ , −B̃ , S̃ ; B̃ , Ã , W̃ ; S̃† , −W̃† , ρ̃ ] [ ⟨ε⟩ ; ⟨E⟩ ; ⟨u̇⟩ ]    (5)

where all quantities are scalars representing spatially averaged effective fields: ⟨σ⟩ is the stress, ⟨D⟩ the electric displacement, and ⟨p⟩ the momentum density, while ⟨ε⟩, ⟨E⟩, and ⟨u̇⟩ denote the strain, electric field, and particle velocity, respectively.
The coefficients C̃, Ã, and ρ̃ represent the effective stiffness, permittivity, and mass density; B̃ is the piezoelectric coupling; and S̃, S̃† and W̃, W̃† encode the Willis and electro-momentum couplings. To extract the effective properties, a transfer matrix is constructed for both the heterogeneous and homogenized representations of the piezoelectric composite. These matrices are compared by matching their scattering coefficients under identical boundary and loading conditions. Since the homogenized constitutive relation (Eq. (5)) contains nine unknown parameters, the transfer matrix must be defined to fully capture both mechanical and electrical degrees of freedom. For the heterogeneous composite, we define a four-dimensional state vector that includes the mechanical displacement u, electric potential φ, stress σ, and electric displacement D. Under time-harmonic excitation, the governing equations are written in first-order form as:

d/dz [ u ; φ ; σ ; D ] = [ 0 , 0 , 1/Č , 0 ; 0 , 0 , −V0/(Č lp) , sApZ/lp ; s²ρ , 0 , 0 , 0 ; 0 , 0 , 0 , 0 ] [ u ; φ ; σ ; D ]    (6)

where ρ is the mass density, and all other parameters are as previously defined. This formulation enables the construction of a 4×4 transfer matrix that accurately captures the coupled electromechanical behavior of the heterogeneous piezoelectric unit cell. It accounts for the interaction between the mechanical and electrical fields mediated by the piezoelectric effect, as well as the influence of the external shunt circuit through the impedance Z and feedback coefficient V0. Refer to the Methods section for the derivation of the transfer matrix and effective properties.

Figure 5. Dynamic homogenization of a layered piezoelectric composite. a, Schematic of a one-dimensional periodic piezoelectric composite consisting of alternating heterogeneous piezoelectric layers embedded between aluminum waveguide sections and terminated with perfectly matched layers (PML). Due to the lack of inversion symmetry and spatial variation, elastic waves exhibit direction-dependent reflection (rf, rb). b, The composite is dynamically homogenized into an equivalent continuum domain characterized by effective macroscopic properties. These include the stiffness tensor C̃, effective density ρ̃, and piezoelectric coupling tensors Ã and B̃. Importantly, the homogenized model also incorporates the Willis coupling tensors S̃, S̃† (describing coupling between momentum and strain), and the electro-momentum coupling tensors W̃, W̃† (capturing coupling between electric field and momentum). The homogenized medium retains the same poling direction as the original layered system and reproduces its macroscopic wave behavior. This framework enables effective modeling of asymmetric wave propagation in complex piezoelectric composites.

To elucidate the physical origins of asymmetric scattering behavior, we compute the frequency-dependent effective constitutive properties of the layered piezoelectric composite using a dynamic homogenization approach. The configuration corresponds to the parameter set obtained from the earlier parametric study, which exhibited unidirectional zero reflection (UZR) at 0.15 MHz. Figure 6 shows the real and imaginary components of the homogenized constitutive properties. As evident in these plots, the composite exhibits strong frequency-dependent non-Hermiticity due to the lossy and active components in the shunt circuits, as reflected in the large imaginary parts of the constitutive tensors.
In particular, the emergence of non-zero S̃ and W̃, which break reciprocity and time-reversal symmetry, confirms the presence of Willis and electro-momentum coupling enabled by the broken inversion symmetry and external circuit feedback. These couplings play a central role in generating asymmetric wave reflection. Importantly, the Hermitian conjugates of the off-diagonal coupling terms B̃† and W̃† (shown as dashed lines) deviate from the original tensors, confirming the non-Hermitian nature of the effective medium. This validates that the asymmetric wave propagation arises from both geometric asymmetry and circuit-induced gain/loss modulation. Notably, a sharp discontinuity in ρ̃ and resonance peaks in S̃ and W̃ coincide with the UZR frequency at 0.15 MHz, highlighting the critical role of dynamic resonances in tuning wave asymmetry.

Figure 6. Effective constitutive properties of the layered piezoelectric composite exhibiting UZR at 0.15 MHz. Frequency-dependent homogenized parameters are extracted using dynamic transfer matrix-based homogenization for the circuit configuration yielding unidirectional zero reflection (UZR). a, Dielectric stiffness Ã. b, Piezoelectric coupling B̃ and its Hermitian conjugate B̃† deviate significantly, confirming broken symmetry. c, Electro-mechanical stiffness C̃ reflects circuit-tuned stiffness modulation. d, Effective density ρ̃ shows a discontinuity and complex-valued behavior near 0.15 MHz, indicating resonance. e, Willis coupling S̃ and its symmetric off-diagonal term S̃† demonstrate non-Hermitian asymmetry, peaking near the UZR condition. f, Electro-momentum coupling W̃ and W̃† confirm the emergence of non-Hermitian behavior.

Unidirectional zero reflection and perfect absorption
Consider a configuration with a single unit cell embedded in an aluminum background rod, as illustrated in Fig. 7(a). Based on the parametric study conducted earlier with a five-unit cell composite piezoelectric rod (see Fig. 4), UZR is achieved at R = 78.38 kΩ and V0 = 20.42 MV, corresponding to an EP when the inductance is set to L = 1 H. Remarkably, this condition also holds for a single scatterer, as verified through a scattering analysis and reflection measurements. The persistence of UZR behavior in a single unit cell stems from the local nature of the non-Hermitian scattering mechanism. Specifically, the piezoelectric layer shunted with an external circuit introduces an effective non-Hermitian coupling that governs the reflection asymmetry. Since the scattering matrix for this configuration remains a 2 × 2 system, the coalescence of its eigenvalues (i.e., the EP condition) and the associated unidirectional response can emerge even in the absence of periodicity. Thus, the essential features of UZR and EP behavior are preserved in the single-cell configuration when the circuit parameters are properly tuned. Figure 7(b) presents the frequency-dependent reflection amplitude and phase for both forward and backward incidence. UZR is observed at 0.15 MHz, where the reflection magnitude for backward incidence vanishes. The corresponding reflection asymmetry, quantified in Fig. 7(c), shows a sharp peak in amplitude contrast and a phase difference approaching π, which is a characteristic signature of an EP. Further confirmation is provided in Figs. 7(d)-(g), where the eigenvalues of the scattering matrix extracted from the single-cell response are shown to coalesce at the EP frequency.
The trajectory in the complex plane [Fig. 7(d)], along with the frequency evolution of their imaginary, real, and absolute components [Figs. 7(e-g)], collectively validates the existence of an exceptional point in the reduced system. An important question that arises is whether the reflection asymmetry is strong enough to achieve a reflection amplitude of zero in one direction and unity in the other. This scenario corresponds to the regime of unidirectional perfect absorption (UPA), wherein a wave incident from one side is completely absorbed, while it is entirely retroreflected for incidence from the opposite side. To explore this regime, we formulate an optimization problem aimed at maximizing the reflection asymmetry, quantified as |rf| − |rb|, by tuning the circuit parameters associated with the three shunted piezoelectric layers.

Figure 7. Unidirectional zero reflection and exceptional point behavior in a single-unit-cell piezoelectric composite. a, Schematic of a one-dimensional waveguide consisting of a single unit cell made of three piezoelectric layers (PZT-4, BaTiO3, and PVDF), each connected to an identical shunt circuit comprising a resistor (Ri), inductor (Li), and strain-proportional voltage feedback source (V0iε). The waveguide is bounded by homogeneous aluminum sections and terminated with perfectly matched layers (PML). b, Reflection magnitude (top) and phase (bottom) for forward (solid) and backward (dashed) incidence show a strong asymmetry near 0.15 MHz, where forward reflection vanishes, indicating unidirectional zero reflection (UZR). c, Difference in reflection magnitude (|∆r|) and phase (|∆∠r|) between forward and backward directions, both peaking at the UZR frequency. d, Trajectory of the scattering matrix eigenvalues λ1 and λ2 in the complex plane, indicating coalescence at an exceptional point (EP). e-g, Frequency-dependent evolution of the imaginary parts (e), real parts (f), and absolute values (g) of the eigenvalues further confirm the occurrence of a second-order EP at 0.15 MHz. These results demonstrate that both UZR and EP phenomena can be realized with just a single unit cell, underscoring the strong asymmetry induced by shunt-circuit-mediated electro-momentum coupling.

The objective is to determine the optimal combination of resistances, inductances, and voltage feedback gains that yields the maximum possible asymmetry, thereby enabling near-perfect reflection in one direction and zero reflection in the other. The optimization is carried out using MATLAB's fmincon function, targeting the simultaneous realization of unidirectional zero reflection (UZR) and unidirectional perfect absorption (UPA) at the design frequency of 0.15 MHz. The optimized circuit parameters are found to be: {R1, R2, R3} = {0, 29.28, 57.85} kΩ, {L1, L2, L3} = {0, 0.42, 1.20} H, {V0^1, V0^2, V0^3} = {7.66, 44.28, 12.06} MV. It is important to note that this solution is not unique; alternative parameter combinations may also yield near-extreme asymmetry depending on the initial guess and convergence to local optima. The stochastic nature of the initialization process implies that different runs may lead to different but functionally equivalent solutions (see Methods section for more details). The results of the scattering analysis with these optimized circuit parameters applied to a single unit-cell scatterer are shown in Fig. 8. The reflection amplitude and phase in Fig. 8(a) demonstrate that the forward reflection is unity while the backward reflection is nearly zero, achieving UPA.
The asymmetry metrics in Fig. 8(b) further confirm this behavior, with a sharp peak in |rf| − |rb| and a phase contrast of approximately π, consistent with EP conditions. Moreover, the transmission plots in Fig. 8(c) indicate that transmission is suppressed at and beyond the EP frequency, while the backward reflection approaches unity in the high-frequency regime, realizing perfect retroreflection. The presence of an exceptional point is further validated by the eigenvalue evolution of the scattering matrix shown in Figs. 8(d)-(g), where the eigenvalues coalesce and approach zero magnitude at the EP frequency, consistent with zero transmission, as predicted by Eq. (3).

Figure 8. Simultaneous unidirectional perfect absorption and zero reflection via multi-parameter circuit optimization. (a) Reflection magnitude (top) and phase (bottom) for forward (solid) and backward (dashed) incidence. At 0.15 MHz, the system exhibits near-unity reflection for forward incidence and near-zero reflection for backward incidence. As transmission is also negligible in both directions, this results in perfect absorption in the backward direction (zero reflection and zero transmission) and perfect reflection in the forward direction (unity reflection with no transmission or absorption). (b) Reflection asymmetry in both magnitude and phase between forward and backward incidence, peaking at the target frequency. (c) Transmission magnitude and phase for both directions are symmetric and nearly zero at 0.15 MHz, confirming complete energy dissipation for backward incidence and no loss for forward incidence. This confirms unidirectional perfect absorption (UPA). (d) Trajectories of the eigenvalues λ1 and λ2 of the scattering matrix in the complex plane, showing coalescence at an exceptional point (EP). (e-g) Frequency-dependent behavior of the eigenvalues' imaginary parts (e), real parts (f), and magnitudes (g), confirming second-order EP characteristics near the same frequency. The response is achieved by independently tuning nine shunt circuit parameters: resistances (R1, R2, R3), inductances (L1, L2, L3), and strain-proportional voltage feedback gains (V01, V02, V03) assigned to the three piezoelectric layers in the unit cell.

This result demonstrates that extreme scattering asymmetry and critical-point behavior can be engineered entirely through circuit-based tuning, even in compact single-unit-cell structures.

Discussion
Piezoelectric metamaterials with external shunt circuits provide a powerful avenue for controlling asymmetric wave phenomena using a fixed physical structure, unlike traditional elastic metamaterials that require geometric modifications. In this study, we demonstrate how resistive, inductive, and strain-proportional voltage feedback circuits embedded in a 1D layered piezoelectric composite can be used to precisely modulate asymmetric wave propagation. The composite structure supports Willis and electro-momentum coupling, as revealed through dynamic homogenization, which captures the nonlocal and non-Hermitian interactions emerging from broken inversion symmetry and circuit-induced loss and gain. By parametrically tuning the shunt circuit components, we show that the degree of reflection asymmetry can be engineered to realize exotic wave phenomena such as unidirectional zero reflection (UZR) and unidirectional perfect absorption (UPA) at target frequencies. The voltage feedback, in particular, introduces an additional axis of control beyond the previously explored resistive and inductive loadings.
This enhancement enables the realization of exceptional points (EPs) in the scattering matrix eigenvalue spectrum through purely circuit-based tuning, even within a single unit-cell configuration. The persistence of such critical-point behavior without periodicity underscores the localized nature of non-Hermitian scattering mechanisms. We employ a standard transfer matrix formalism to efficiently compute the scattering response and establish a homogenization framework in which the transfer matrix of a finite composite stack is matched with that of an equivalent medium governed by modified constitutive laws. The extracted effective parameters reveal nonzero Willis and electro-momentum coupling coefficients, highlighting the interplay between spatial asymmetry and non-Hermitian circuit control. These findings provide a reconfigurable platform for nonreciprocal and asymmetric wave manipulation in compact structures, where critical phenomena like EPs can be accessed without requiring the gain/loss balancing typical of parity-time symmetric systems. Although the demonstrated effects are frequency-specific, the circuit-based tuning approach enables real-time reconfigurability and could be extended to broadband and frequency-agile designs through adaptive or feedback-based control strategies. Future investigations will explore extensions to two- and three-dimensional structures, and experimental realization of programmable wave control in real time. These insights position circuit-coupled piezoelectric composites as a versatile and compact platform for implementing programmable non-Hermitian metamaterials, with potential applications in acoustic isolators, directional sensors, wave-based computing, ultrasonic imaging, and adaptive vibration control.

Methods
Derivation of electro-mechanical elastic constant
For a one-dimensional problem, the constitutive relations for each piezoelectric layer, assuming scalar fields varying only along the z-direction, can be written as:

σ = Cε − BE ,  D = Bε + AE ,    (7)

where σ and ε are the longitudinal stress and strain, D and E are the electric displacement and electric field, and C, A, and B denote the elastic constant, dielectric permittivity, and piezoelectric coupling coefficient, respectively. Assuming a transverse cross-sectional area Ap, the current I through an external circuit and the voltage V across a layer are given by:

I = ∂/∂t ∫_{Ap} D dΩw ,  V = ∫_{lp} E dz ,    (8)

where Ωw is the cross-section of the piezoelectric layer, and lp is the layer thickness. Assuming a spatially uniform electric field and applying the Laplace transform with s = −iω, Eq. (8) becomes:

I = sDAp ,  V = Elp .    (9)

Applying Kirchhoff's voltage law to the shunt circuit, the voltage balance is:

V + IZ = V0 ε ,    (10)

where Z = sL + R is the impedance of a series RL circuit with resistance R and inductance L, and V0 is the strain-proportional feedback gain. Combining Eqs. (7), (9), and (10), we obtain the effective electromechanical constitutive relation:

σ = [ C + sApB²Z/(sApAZ + lp) − BV0/(sApAZ + lp) ] ε = Č ε ,    (11)

where Č is the electromechanical elastic constant, which is tunable via the shunt impedance Z and the strain-proportional voltage V0. This effective constant Č inherits the periodicity of the layered metamaterial and encapsulates the circuit-driven modulation of wave propagation behavior.
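As a quick numerical aid, the following sketch evaluates Eq. (11) for given material and circuit parameters; the function and argument names are illustrative, and the sign convention s = −iω follows the Methods text.

```python
def effective_stiffness(omega, C, A, B, V0, R, L, Ap, lp):
    """Electromechanical elastic constant C-check of Eq. (11).

    Uses the Laplace variable s = -i*omega and the series-RL shunt impedance
    Z = s*L + R; the two circuit-dependent terms add stiffness modulation and
    strain-feedback control to the intrinsic elastic constant C.
    """
    s = -1j * omega
    Z = s * L + R
    denom = s * Ap * A * Z + lp
    return C + s * Ap * B**2 * Z / denom - B * V0 / denom
```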
Scattering matrix calculations
The transfer matrix for the i-th layer is given by:

Ti = [ cos(ki lp^i) , sin(ki lp^i)/(Či ki) ; −Či ki sin(ki lp^i) , cos(ki lp^i) ] ,  [ ui(zi^R) ; σi(zi^R) ] = Ti [ ui(zi^L) ; σi(zi^L) ]    (12)

Here, ki is the wavenumber in the i-th layer, and zi^L, zi^R denote the left and right boundaries of the layer. The transfer matrix Ti relates the state vector (displacement and stress) across each layer. The displacement and stress fields within each layer are expressed as the sum of forward and backward traveling wave components:

ui(z, t) = Ai e^{i(ki z − ωt)} + Bi e^{−i(ki z + ωt)} ,  ki = ω √(ρi/Či)    (13)
σi(z, t) = i ki Či Ai e^{i(ki z − ωt)} − i ki Či Bi e^{−i(ki z + ωt)}    (14)

where Ai and Bi are the amplitudes of the rightward and leftward traveling waves, ρi is the density of the i-th layer, and ω is the angular frequency. The total transfer matrix for the finite composite, including aluminum layers on both sides, is given by:

Tf = T0 (T1 T2 T3)^n T0    (15)
Tb = T0 (T3 T2 T1)^n T0    (16)

corresponding to forward and backward wave incidence, respectively. Here, n denotes the number of unit cells in the composite, and each unit cell comprises three piezoelectric layers with different shunt circuit configurations. The reversal of the layer sequence in Tb captures the asymmetry introduced by breaking spatial inversion symmetry. The scattering coefficients are then obtained by enforcing continuity of displacement and stress at the aluminum-composite interfaces:

[ tf ; 0 ] = [ 1 , 1 ; iC0k0 , −iC0k0 ]^{−1} Tf [ 1 , 1 ; iC0k0 , −iC0k0 ] [ 1 ; rf ] = Mf [ 1 ; rf ] ,    (17)
[ tb ; 0 ] = [ 1 , 1 ; iC0k0 , −iC0k0 ]^{−1} Tb [ 1 , 1 ; iC0k0 , −iC0k0 ] [ 1 ; rb ] = Mb [ 1 ; rb ]    (18)

Here, C0 and k0 are the elastic modulus and wavenumber of the aluminum background. The matrices Mf and Mb encode the scattering response of the composite for forward and backward incidence.

Derivation of transfer matrix for dynamic homogenization
Equation (6) defines a linear system of the form Ψ′(z) = Q(s)Ψ(z), where Ψ(z) is the state vector and Q(s) is the system matrix dependent on material and circuit parameters. The corresponding transfer matrix T(lp) over a single piezoelectric layer of length lp is given by:

Ψ(z + lp) = T(lp) Ψ(z) = exp[Q(s) lp] Ψ(z) ,    (19)

where the exponential of the matrix Q(s) governs the spatial evolution of the state vector due to wave propagation through the layer. The transfer matrix thus relates the electromechanical state at the two ends of the layer and captures the full coupled dynamic response. To obtain the transfer matrix of a full unit cell, which may consist of multiple piezoelectric and elastic sublayers, the transfer matrices of the individual layers are sequentially multiplied in the order of wave propagation. For a three-layer unit cell with layer lengths lp^1, lp^2, lp^3, the overall transfer matrix Tuc in the forward direction is given by:

Tuc = T3(lp^3) T2(lp^2) T1(lp^1) ,    (20)

with total unit cell length l = lp^1 + lp^2 + lp^3. Under the dynamic homogenization framework, this unit cell transfer matrix is equated to the exponential of an effective system matrix Q̃ acting over the entire unit cell:

Tuc = exp(Q̃ l) ,    (21)

where Q̃ encodes the effective dynamic properties of the homogenized unit cell. In the subsequent steps, this matrix is matched with that of the homogenized constitutive model by comparing scattering responses, enabling the extraction of effective parameters, including nonlocal terms such as Willis coupling and electro-momentum coupling coefficients.
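A minimal sketch of this matching step, assuming SciPy is available: the layer matrices of Eq. (12) are composed as in Eq. (20), and Eq. (21) is inverted with a principal matrix logarithm. For brevity the sketch uses the 2×2 displacement-stress state; the paper's full procedure works with the 4×4 electromechanical state of Eq. (6), and the branch of the logarithm must be checked against the long-wavelength limit.

```python
import numpy as np
from scipy.linalg import logm

def layer_T(k, C_eff, lp):
    """2x2 displacement-stress transfer matrix of one layer, per Eq. (12)."""
    return np.array([[np.cos(k * lp), np.sin(k * lp) / (C_eff * k)],
                     [-C_eff * k * np.sin(k * lp), np.cos(k * lp)]],
                    dtype=complex)

def effective_Q(T_layers, l_total):
    """Invert T_uc = exp(Q_tilde * l) of Eq. (21) via a matrix logarithm.

    T_layers = [T1, T2, T3] in the propagation order of Eq. (20), so the
    product below is T3 @ T2 @ T1.
    """
    T_uc = np.linalg.multi_dot(T_layers[::-1])
    return logm(T_uc) / l_total
```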
Computation of effective properties
Starting with the effective constitutive laws defined in Eq. (5), a governing equation similar in form to Eq. (6) can be derived for the homogenized medium. This formulation incorporates nonlocal coupling terms such as Willis and electro-momentum couplings, and relates the spatial derivatives of the averaged field variables to their values through a system matrix that captures the effective behavior. The resulting first-order differential system is given by:

d/dz [ ⟨u⟩ ; ⟨φ⟩ ; ⟨σ⟩ ; ⟨D⟩ ] =
[ −s S̃_D/C̃_D , 0 , 1/C̃_D , B̃†/(C̃_D Ã) ;
  s W̃_D/Ã_D , 0 , B̃/(C̃_D Ã) , −1/Ã_D ;
  s²(ρ̃ − S̃† S̃_D/C̃_D + W̃† W̃_D/Ã_D) , 0 , s S̃†_D/C̃_D , −s W̃†_D/Ã_D ;
  0 , 0 , 0 , 0 ] [ ⟨u⟩ ; ⟨φ⟩ ; ⟨σ⟩ ; ⟨D⟩ ]    (22)

where:

C̃_D = C̃ + B̃†B̃/Ã ,  Ã_D = Ã + B̃†B̃/C̃ ,  S̃_D = S̃ + B̃†W̃/Ã ,
S̃†_D = S̃† + W̃†B̃/Ã ,  W̃_D = W̃ − B̃S̃/C̃ ,  W̃†_D = W̃† − S̃†B̃†/C̃ .    (23)

The resulting parameters fully describe the dispersive, asymmetric, and nonlocal behavior of the homogenized piezoelectric composite with external circuitry. This system forms the foundation for constructing the transfer matrix of the homogenized domain, which captures the spatial evolution of the averaged electromechanical state vector. The transfer matrix can then be matched to that of the actual heterogeneous unit cell to extract the full set of effective parameters. Specifically, by comparing the effective system matrix Q̃ in Eq. (21) to the structure of the governing matrix in Eq. (22), whose entries depend on the modified effective parameters defined in Eq. (23), the individual effective coefficients C̃, Ã, B̃, S̃, W̃, and ρ̃ can be systematically identified.

Circuit optimization
Optimization was performed using MATLAB's fmincon function with default solver settings and no additional options specified. The objective function,

obj = |rb| − |rf| ,

was implemented as a custom function handle that returns the objective function value, where the design vector contains the nine circuit parameters: resistances (R1, R2, R3), inductances (L1, L2, L3), and voltage feedback gains (V0^1, V0^2, V0^3) associated with the three shunted piezoelectric layers. The initial guess for each optimization run was randomly generated using a scaled Gaussian distribution, i.e., randn(9,1)×100, to span a broad range of possible values. To ensure physical feasibility, non-negativity constraints were enforced using a linear inequality of the form −x ≤ 0, implemented via the constraint matrix -eye(9) and the right-hand side vector zeros(9,1). The optimization was repeated with multiple random initializations until the objective function approached a value near −1, corresponding to the regime of unidirectional perfect absorption and zero reflection. While the optimization yields a representative set of optimal parameters, the solution is not unique and may converge to different local optima based on the initial seed.
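For readers who prefer an open-source analogue, the sketch below reproduces the described multi-start procedure with SciPy instead of fmincon; reflection_coeffs is a hypothetical user-supplied wrapper around the transfer-matrix solver of Eqs. (12)-(18), and bound constraints replace the −x ≤ 0 linear inequality.

```python
import numpy as np
from scipy.optimize import minimize

def optimize_asymmetry(reflection_coeffs, n_restarts=20):
    """Multi-start minimization of obj = |rb| - |rf| at the design frequency.

    reflection_coeffs(x) -> (rf, rb) for a design vector x of nine circuit
    parameters (R1..R3, L1..L3, V01..V03); it is assumed, not defined here.
    """
    def obj(x):
        rf, rb = reflection_coeffs(x)
        return abs(rb) - abs(rf)

    best = None
    for _ in range(n_restarts):
        x0 = np.abs(np.random.randn(9) * 100)  # scaled Gaussian start, as in the paper
        res = minimize(obj, x0, bounds=[(0, None)] * 9)  # non-negativity
        if best is None or res.fun < best.fun:
            best = res
    return best
```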
References
1. Auld, B. Acoustic Fields and Waves in Solids (Wiley, 1973).
2. Achenbach, J. D. Reciprocity in Elastodynamics (Cambridge University Press, 2003).
3. Maznev, A., Every, A. & Wright, O. Reciprocity in reflection and transmission: What is a "phonon diode"? Wave Motion 50, 776-784 (2013).
4. Liang, B., Yuan, B. & Cheng, J.-C. Acoustic diode: Rectification of acoustic energy flux in one-dimensional systems. Phys. Rev. Lett. 103, 104301 (2009).
5. Boechler, N., Theocharis, G. & Daraio, C. Bifurcation-based acoustic switching and rectification. Nat. Mater. 10, 665-668 (2011).
6. Yousefzadeh, B., Alù, A. & Ruzzene, M. Asymmetric elastic wave propagation and nonreciprocal transmission in nonlinear lattices. Phys. Rev. B 103, 184302 (2021).
7. Popa, B.-I. & Cummer, S. A. Non-reciprocal and highly nonlinear active acoustic metamaterials. Nat. Commun. 5, 3398 (2014).
8. Trainiti, G. & Ruzzene, M. Non-reciprocal elastic wave propagation in spatiotemporal periodic structures. New J. Phys. 18, 083047 (2016).
9. Nassar, H., Chen, H., Norris, A. N., Haberman, M. R. & Huang, G. Non-reciprocal wave propagation in modulated elastic metamaterials. Proc. Royal Soc. A: Math. Phys. Eng. Sci. 473, 20170188 (2017).
10. Nassar, H., Xu, X., Norris, A. N. & Huang, G. L. Modulated phononic crystals: Non-reciprocal wave propagation and Willis materials. J. Mech. Phys. Solids 134, 103705 (2020).
11. Fleury, R., Sounas, D. & Alù, A. Sound isolation and giant linear nonreciprocity in a compact acoustic circulator. Science 343, 516-519 (2014).
12. Milton, G. W. & Willis, J. R. On modifications of Newton's second law and linear continuum elastodynamics. Proc. Royal Soc. A: Math. Phys. Eng. Sci. 463, 855-880 (2007).
13. Pernas-Salomón, R. & Shmuel, G. Fundamental principles for generalized Willis metamaterials. Phys. Rev. Appl. 14, 064005 (2020).
14. Sieck, C. F., Alù, A. & Haberman, M. R. Origins of Willis coupling and acoustic bianisotropy in acoustic metamaterials through source-driven homogenization. Phys. Rev. B 96, 104303 (2017).
15. Muhlestein, M. B., Sieck, C. F., Alù, A. & Haberman, M. R. Reciprocity, passivity and causality in Willis materials. Proc. Royal Soc. A: Math. Phys. Eng. Sci. 472, 20160604 (2016).
16. Cho, C., Wen, X., Park, N. & Li, J. Acoustic Willis meta-atom beyond the bounds of passivity and reciprocity. Commun. Phys. 4, 82 (2021).
17. Liu, Y. et al. Willis metamaterial on a structured beam. Phys. Rev. X 9, 011040 (2019).
18. Hao, Y., Shen, Y., Groby, J.-P. & Li, J. Experimental demonstration of Willis coupling for elastic torsional waves. Wave Motion 112, 102931 (2022).
19. Merkel, A., Romero-García, V., Groby, J.-P., Li, J. & Christensen, J. Unidirectional zero sonic reflection in passive PT-symmetric Willis media. Phys. Rev. B 98, 201102 (2018).
20. Danawe, H. & Tol, S. Electro-momentum coupling tailored in piezoelectric metamaterials with resonant shunts. APL Mater. 11 (2023).
21. Pernas-Salomón, R. & Shmuel, G. Symmetry breaking creates electro-momentum coupling in piezoelectric metamaterials. J. Mech. Phys. Solids 134, 103770 (2020).
22. Pernas-Salomón, R., Haberman, M. R., Norris, A. N. & Shmuel, G. The electromomentum effect in piezoelectric Willis scatterers. Wave Motion 106, 102797 (2021).
23. Huynh, H. D. et al. Maximizing electro-momentum coupling in generalized 2D Willis metamaterials. Extrem. Mech. Lett. 61, 101981 (2023).
24. Lee, J.-H., Zhang, Z. & Gu, G. X. Maximum electro-momentum coupling in piezoelectric metamaterial scatterers. J. Appl. Phys. 132 (2022).
25. Zhang, Z., Lee, J.-H. & Gu, G. X. Rational design of piezoelectric metamaterials with tailored electro-momentum coupling. Extrem. Mech. Lett. 55, 101785 (2022).
26. Huynh, H. D., Nanthakumar, S., Park, H. S., Rabczuk, T. & Zhuang, X. The effect of electro-momentum coupling on unidirectional zero reflection in layered generalized Willis metamaterials. Extrem. Mech. Lett. 77, 102318 (2025).
27. Wu, Z., Yi, J., Xia, R., Chen, J. & Li, Z. Versatile non-Hermitian piezoelectric metamaterial beam with tunable asymmetric reflections. Front. Phys. 10, 1089250 (2022).
Where to Search: Measure the Prior-Structured Search Space of LLM Agents

Zhuo-Yang Song
School of Physics, Peking University, Beijing 100871, China

Abstract
The generate-filter-refine (iterative paradigm) based on large language models (LLMs) has achieved progress in reasoning, programming, and program discovery in AI+Science. However, the effectiveness of search depends on where to search, namely, how to encode the domain prior into an operationally structured hypothesis space. To this end, this paper proposes a compact formal theory that describes and measures LLM-assisted iterative search guided by domain priors. We represent an agent as a fuzzy relation operator on inputs and outputs to capture feasible transitions; the agent is thereby constrained by a fixed safety envelope. To describe multi-step reasoning/search, we weight all reachable paths by a single continuation parameter and sum them to obtain a coverage generating function; this induces a measure of reachability difficulty; and it provides a geometric interpretation of search on the graph induced by the safety envelope. We further provide the simplest testable inferences and validate them via a majority-vote instantiation. This theory offers a workable language and operational tools to measure agents and their search spaces, proposing a systematic formal description of iterative search constructed by LLMs.

1 Introduction
The generate-filter-refine iterative paradigm centered on large language models (LLMs) is rapidly expanding its application boundary—from reasoning and programming [1–5], to planning and tool use [6–11], and further to program/function search in AI+Science [12–17]. The common structure of such paradigms embeds tasks or hypotheses into an operational space and performs multi-round generation, evaluation, and update on that space. Although this approach has performed well in many cases, its effectiveness is fundamentally constrained by where to search [17–21]: that is, how the prior is encoded into the agent-operable space. In practice, agents based on LLMs often do not wander blindly in the original space, but iterate within a smaller semantic space defined by priors and constraints; the geometry and boundary of this space determine efficiency and stability [22].

Long-horizon tasks raise higher demands for understanding such search. First, safety is the primary constraint: in real systems or sensitive scenarios, LLMs must operate within verifiable and controllable boundaries [23–29]. Intuitively, this requires formally confining the model within a safety envelope, allowing only constraint-satisfying transitions. Second, complexity requires a systematic characterization of the search process: long-horizon problems often involve combinatorial explosion and sparse rewards; purely heuristic or 0/1 scoring is insufficient to quantify reachability difficulty, compare the coverage capability of different agents, or guide sampling budgets and staged training [18, 30, 31]. Therefore, a concise, computable, model-agnostic formal theory is needed: one that unifies safety and reachability under the same set of measures, and provides testable predictions and engineering-usable design principles.

Current practice mostly relies on engineering heuristics (prompt design, filters, scoring functions, temperatures, and sampling budgets), lacking a unified language and quantitative tools for agent-space-search.
Concretely, it is difficult to measure, in a comparable way, the trade-offs between reachability and safety across agents, and there is a lack of clear characterization and explanation of long-horizon behavioral features of agents. This theoretical gap may be the key deficiency in moving LLM-driven complex tasks from usable to controllable and measurable.

To address this, the paper proposes a compact formal theory to characterize and measure LLM-assisted iterative search. Specifically, we formalize agents as fuzzy relation operators; in iterative applications we feed outputs back into inputs to form an iterated agent, and introduce a critical parameter as a unified quantification of reachability difficulty. On the directed graph induced by the safety envelope, we discuss geometric features of the search space. To validate the abstract concepts, we provide a minimal instantiation: on a two-dimensional grid, we construct an agent walker by majority vote over multiple LLM decisions, define the crisp idealized agent (safety envelope) via the support, and its induced directed graph; we then directly compute, for different start-target pairs (f, t), the shortest distance d0 and the number of shortest paths Nd0. The instantiation yields evidence consistent with the hypotheses, providing an initial external validation of the formalization.

The significance of this theory is that it establishes a measurement system in which safety and reachability are measured by the same symbols and geometric quantities; this enables operational metrics for several questions—for example, whether an intermediate waypoint can significantly reduce overall difficulty can be localized by the compositional lower bound for coverage (transitivity) of the coverage index. This formalization offers a consistent baseline for comparing agents, designing search strategies, and setting training signals.

The paper is organized as follows. Section 2 presents the formal theory, including the fuzzy relation operator representation of agents, the coverage generating function and critical parameters, geometric quantities and inequalities on the safety envelope-induced graph, and several hypotheses for iterated agents constructed by LLMs. Section 3 provides the majority-vote instantiation and tests the inequalities involving shortest distance and the number of shortest paths, as well as the approximately unidirectional search hypothesis. Section 4 concludes and discusses prospective directions for connecting the proposed measures to evaluation, search policy, and reinforcement learning rewards.

2 Formal Theory
2.1 Conventions and Objects of Study
This section introduces the minimal mathematical objects for characterizing the LLM-driven generate-filter-refine process and defines reachability and search geometry using a unified generating-function language.

Notation 1 (Search space and empty-product convention). Let C1, C2 be nonempty sets representing the input space and output space of an agent, respectively. In iterative scenarios, we assume C2 ⊆ C1 so that outputs can be fed back as inputs for the next step. For any finite product, the empty product is defined to be 1.

Definition 1 (Ideal agent and fuzzy relation operator). An ideal agent T is a mapping f ↦ µ_f(·), where each f ∈ C1 is associated with a membership function

µ_f : C2 → [0, 1] ,  g ↦ µ_f(g) .    (1)

This can be equivalently viewed as a fuzzy relation operator T(f, g) := µ_f(g) [32].
Definition 2 (Crisp idealized agent and safety envelope). If for all f ∈ C1, g ∈ C2, we have µ_f(g) ∈ {0, 1}, then T is called a crisp idealized agent. Fix a crisp idealized agent and denote it by the safety envelope T0. An ideal agent T is said to be constrained by T0 in safety if

0 ≤ T(f, g) ≤ T0(f, g) ,  ∀f, g .    (2)

In this case, each feasible transition of T is limited to the reachable edges allowed by T0, so execution proceeds only within the safety envelope.

Definition 3 (Iterated agent and search trajectory). When C2 ⊆ C1, T is called an iterated agent. A finite sequence S_T = (f^(0), f^(1), . . . , f^(n)) is called a search trajectory from f^(0) to f^(n) (of length n) if for all i = 0, . . . , n − 1,

µ_{f^(i)}(f^(i+1)) > 0 .    (3)

2.2 Coverage generating function
To uniformly measure the reachability of iterated agents across problem difficulties, we introduce a coverage generating function based on a continuation parameter without aftereffects.

Definition 4 (Coverage generating function and continuation parameter). Let a single parameter p ∈ [0, 1] denote the weight for continuing iteration (the continuation parameter), understood as a scalar weight assigned to trajectory length; it is not a probability but a bookkeeping factor. Thus a trajectory of length n is assigned weight p^n. Define the coverage generating function from f to g as

P_{f,g}(p) := Σ_{n=0}^{∞} Σ_{S_T : f^(0)=f, f^(n)=g} p^n Π_{i=0}^{n−1} µ_{f^(i)}(f^(i+1)) ,    (4)

where for n = 0 the inner product is the empty product, and this term exists and contributes 1 if and only if f = g.

Remark 1 (Operator viewpoint and spectral radius). If C1, C2 are countable, let the matrix (kernel) M satisfy M_{f,g} = T(f, g). Then

P(p) = Σ_{n≥0} p^n M^n ,    (5)

whose (f, g) entry is exactly (4). When p ρ(M) < 1 (with ρ(M) the spectral radius), the series converges in the operator sense and

P(p) = (I − pM)^{−1} .    (6)

In general, P_{f,g}(p) is a power series (generating function) with nonnegative coefficients, monotone nondecreasing in p. The boundary value satisfies P_{f,g}(0) = 1{f = g}.

Notation 2 (Continuation-induced search). Given an iterated agent T and parameter p ∈ [0, 1], define the continuation-induced ideal agent

T^(p)(f, g) := min( 1, P_{f,g}(p) ) ,    (7)

which can be viewed as an agent formed by multi-round iterative feedback through T. We will refer to T^(p) as the search or the continuation-induced (search) agent. This clipping induces a [0,1]-valued membership, not a probability measure; alternative normalizations are possible, but we adopt unit clipping for threshold analysis.

2.3 Geometry under the crisp safety envelope
On the directed graph induced by the crisp idealized agent (safety envelope), natural geometric quantities can be defined.

Definition 5 (Generating function under the crisp idealized agent and path counting). If T is a crisp idealized agent, then

P^ideal_{f,g}(p) = Σ_{n=0}^{∞} N_n(f, g) p^n ,    (8)

where N_n(f, g) is the number of reachable paths of length n from f to g.

Definition 6 (Shortest distance). On the directed graph induced by the crisp idealized agent, define the shortest distance

d0(f, g) := inf{ n ∈ N : N_n(f, g) ≥ 1 } ,    (9)

and set d0(f, g) = +∞ if g is unreachable from f.

Lemma 1 (Shortest distance and low-order terms of the generating function). If d0(f, g) < ∞, then

lim_{p→0+} P^ideal_{f,g}(p) / p^{d0(f,g)} = N_{d0(f,g)}(f, g) ∈ N \ {0} .    (10)
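A small numerical sketch of Remark 1 and Lemma 1, assuming the agent kernel is given as a matrix M and the crisp envelope as a 0/1 adjacency matrix M0 (illustrative names): the resolvent gives the coverage generating function, and the first nonvanishing power of M0 recovers d0 and Nd0.

```python
import numpy as np

def coverage(M, p):
    """P(p) = (I - p M)^{-1}, Eq. (6); requires p * spectral_radius(M) < 1."""
    return np.linalg.inv(np.eye(M.shape[0]) - p * M)

def shortest_data(M0, f, g, max_len=50):
    """d0(f, g) and N_{d0}(f, g) from powers of a 0/1 adjacency matrix M0.

    By Lemma 1, the first n with (M0^n)[f, g] >= 1 is the shortest distance,
    and that entry counts the shortest paths.
    """
    power = np.eye(M0.shape[0], dtype=np.int64)
    for n in range(max_len + 1):
        if power[f, g] >= 1:
            return n, int(power[f, g])
        power = power @ M0
    return float("inf"), 0
```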
Remark 2 (Insufficient search under small continuation parameter). When the continuation parameter p is small, search is in an insufficient expansion regime (particularly when many edges have low membership or the graph is approximately unidirectional). By Lemma 1, the shortest-path term dominates the behavior of P_{f,g}(p), so the shortest distance d0 and the corresponding number of shortest paths Nd0 control the early reachable set.

2.4 Critical parameter and coverage index
We characterize the reachability difficulty from f to g by the critical value at which the generating function reaches the unit threshold.

Definition 7 (Coverage index and critical parameter). Define

pc(f, g) := inf{ p ∈ [0, 1] : P^ideal_{f,g}(p) ≥ 1 } ,    (11)

and set pc(f, g) = 1 if the set is empty (unreachable). Define the coverage index

Rc(f, g) := 1 − pc(f, g) ∈ [0, 1] .    (12)

A larger Rc(f, g) indicates reaching unit coverage at smaller weights, i.e., easier to reach. Increasing the number of reachable paths or shortening path length both increase Rc.

Definition 8 (Intermediate node). A node h is called an intermediate node for (f, g) if at least one reachable path from f to g passes through h.

Proposition 1 (Transitivity of the coverage index). If h is an intermediate node for (f, g), then for all p ∈ [0, 1],

P^ideal_{f,g}(p) ≥ P^ideal_{f,h}(p) · P^ideal_{h,g}(p) .    (13)

Therefore,

pc(f, g) ≤ max( pc(f, h), pc(h, g) ) ,  Rc(f, g) ≥ min( Rc(f, h), Rc(h, g) ) .    (14)

If h is not an intermediate node for (f, g), then at least one of f → h or h → g is unreachable; in this case Eq. (14) still holds.

Definition 9 (Epoch and the lower-bound meaning of shortest distance). An epoch refers to one expansion step that applies the crisp safety envelope T0 to all outputs from the previous round and performs set-wise deduplication. Clearly, starting at f, reaching g requires at least d0(f, g) epochs.

2.5 Threshold hypotheses and testable inequalities
We provide two empirically common and testable hypotheses for LLM-induced approximately unidirectional search, together with resulting inequalities.

Assumption 1 (Approximate thresholding of membership (sharp threshold behavior)). Let the coverage index be Rc(f, g) = 1 − pc(f, g). Empirically, for iterated agents constructed by LLMs:

1. Closed walks (nonzero-length paths whose start and end coincide) are rare; the crisp envelope is approximately unidirectional, so P^ideal_{f,g}(p) has finitely many terms or does not diverge as p → 1.

2. Overly long trajectories are relatively rare; equivalently, the generating function is essentially dominated by its low-order terms.

Thus, when d0(f, g) ≫ 1, the membership of T^(p) in p exhibits sharp threshold behavior:

µ_{T^(p), f}(g) ≈ θ( p − pc(f, g) ) ,    (15)

where θ is the Heaviside function. This suggests that the hitting time (in epochs) may satisfy

epoch_hit(f → g) ∼ 1 / Rc(f, g) ∼ d0(f, g) .    (16)

The above proportionality is an empirical approximation that holds only when closed walks are rare, the graph is approximately unidirectional, and low-order terms dominate.

Assumption 2 (Basic lower bound and testable inference). From the lowest-order term, we have

P^ideal_{f,g}(p) ≥ N_{d0}(f, g) p^{d0(f,g)} ,    (17)

hence

pc(f, g) ≤ N_{d0}(f, g)^{−1/d0(f,g)} ,  Rc(f, g) ≥ 1 − N_{d0}(f, g)^{−1/d0(f,g)} .    (18)

In the small-Rc limit (longer shortest paths and no closed walks, consistent with Assumption 1), using N_{d0}^{−1/d0} = exp(−(log N_{d0})/d0) yields

Rc(f, g) ≳ log N_{d0}(f, g) / d0(f, g) ,    (19)

and thus

d0(f, g) · Rc(f, g) ≳ log N_{d0}(f, g) .    (20)
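The bound in Eq. (18) is immediate to evaluate; a tiny sketch with illustrative numbers:

```python
def rc_lower_bound(d0, Nd0):
    """Lower bound R_c >= 1 - N_{d0}^{-1/d0} from Eq. (18)."""
    return 1.0 - Nd0 ** (-1.0 / d0)

# Illustrative: d0 = 12, Nd0 = 3 gives R_c >= ~0.087, close to the
# small-R_c approximation log(Nd0)/d0 = log(3)/12 ~ 0.092 of Eq. (19).
print(rc_lower_bound(12, 3))
```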
Assumption 2 (Basic lower bound and testable inference). From the lowest-order term, we have

P^{ideal}_{f,g}(p) ≥ N_{d_0}(f, g) p^{d_0(f,g)},   (17)

hence

p_c(f, g) ≤ N_{d_0}(f, g)^{−1/d_0(f,g)},   R_c(f, g) ≥ 1 − N_{d_0}(f, g)^{−1/d_0(f,g)}.   (18)

In the small-R_c limit (longer shortest paths and no closed walks, consistent with Assumption 1), using N_{d_0}^{−1/d_0} = exp(−(log N_{d_0}) / d_0) yields

R_c(f, g) ≳ (log N_{d_0}(f, g)) / d_0(f, g),   (19)

and thus

d_0(f, g) · R_c(f, g) ≳ log N_{d_0}(f, g).   (20)

Under assumptions consistent with Assumption 1, empirically R_c ≪ 1, hence

log N_{d_0}(f, g) ≪ d_0(f, g),   (21)

which is an empirical upper-trend for the number of shortest paths N_{d_0} under approximately unidirectional search, providing a quantitative characterization of the regime in which complexity (shortest distance) dominates while path diversity is limited.

3 Majority-vote Instantiation and Experiments

This section provides a minimal, reproducible instantiation that aligns one-to-one with the formal objects above. On a two-dimensional grid, we construct an ideal agent and its corresponding crisp agent induced by LLM majority vote, and directly compute, on the directed graph induced by the crisp agent, the shortest distance d_0 and the number of shortest paths N_{d_0} to test the observable hypotheses and inferences in Assumption 1 and Assumption 2.

3.1 Majority-vote instantiation

To make abstract concepts concrete, we give a minimal construction following objects-mappings-geometry.

Task space and transition syntax: Consider a two-dimensional grid G_N := {0, . . . , N − 1}^2, hence C_1 = C_2 = G_N. Given a start-target pair (f, t) ∈ G_N × G_N, we allow unit-step transitions up, down, left, and right, staying within the board, as the syntax of feasible local transitions.

Ideal agent induced by LLMs: For any f ∈ G_N and fixed target t, we query a given LLM (L) under prompts that combine constraints and goals to output the next position g; the complete prompt is in Appendix A. This yields an empirical decision distribution P̂^{(L,t)}_f(g) (approximated via multiple samples) for the transition from f to g. For each f, independently sample m times and take the mode g* of P̂^{(L,t)}_f(g). If its frequency exceeds m/2, define the agent µ^{(L,t)}_f(g) := 1{g = g*}; otherwise regard it as no strict majority, i.e., µ^{(L,t)}_f(g) = 0. Aggregate majority-vote results from n different models uniformly to obtain the ideal agent

µ^{(t)}_f(g) := (1/n) Σ_L µ^{(L,t)}_f(g) ∈ [0, 1].

Crisp agent (safety envelope) and induced graph: Binarize the support of the ideal agent to obtain a crisp idealized agent

µ^{0,(t)}_f(g) := 1{µ^{(t)}_f(g) > 0} ∈ {0, 1},

and define the directed graph G_t: nodes are G_N, and if µ^{0,(t)}_f(g) = 1 draw an edge f → g. This construction is the directed graph induced by the ideal/crisp idealized agents.

Computing d_0 and N_{d_0}: On G_t, perform breadth-first search (BFS) with source f; the first layer reaching t gives the shortest distance d_0(f, t), and we simultaneously count shortest paths to obtain N_{d_0}(f, t). If t is not reached within the depth budget, set d_0 = +∞ and N_{d_0} = 0. This directly computes the shortest distance of Definition 6 and N_{d_0} on the graph induced by the crisp agent.

The construction corresponds respectively to: C_1 = C_2 = G_N as the search space; µ^{(t)} as the iterated ideal agent; µ^{0,(t)} as its safety envelope; G_t as the geometry induced by the safety envelope; and (d_0, N_{d_0}) as geometric quantities on the graph. A sketch of the BFS computation follows.
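The following Python sketch mirrors this BFS computation on a toy directed graph (a 2 × 2 grid restricted to moves toward the target); the graph data are an illustrative stand-in for the LLM-induced G_t.

from collections import deque

def shortest_distance_and_count(graph, f, t, depth_budget=10_000):
    """BFS over the crisp-agent graph, counting shortest paths to t."""
    dist, count = {f: 0}, {f: 1}
    queue = deque([f])
    while queue:
        u = queue.popleft()
        if dist[u] >= depth_budget:
            break
        for v in graph.get(u, ()):
            if v not in dist:                # first visit: new BFS layer
                dist[v], count[v] = dist[u] + 1, count[u]
                queue.append(v)
            elif dist[v] == dist[u] + 1:     # another shortest path into v
                count[v] += count[u]
    if t not in dist:
        return float("inf"), 0               # unreachable within the budget
    return dist[t], count[t]

# 2x2 grid moving only toward t = (1, 1): d0 = 2, N_{d0} = 2.
grid = {(0, 0): [(0, 1), (1, 0)], (0, 1): [(1, 1)], (1, 0): [(1, 1)]}
print(shortest_distance_and_count(grid, (0, 0), (1, 1)))  # (2, 2)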
3.2 Experimental results

Experimental setup, model list, grid sizes and target points, and full prompts are provided in Appendix A. Under this setup, we construct the ideal agent µ^{(t)} for each target t, and obtain the crisp agent and the induced directed graph G_t from its support. Figure 1 shows a representative case (N = 5, t = (3, 4)) of G_t: under semantic constraints, the graph exhibits a unidirectional structure (strictly decreasing Manhattan distance to the target) with anisotropic preferences over allowed edges, consistent with the finite-terms premise in Assumption 1.

On G_t, we perform BFS for all start nodes f and summarize (d_0, N_{d_0}) for different (f, t). Figure 2 summarizes results for three grid sizes and corresponding targets (see Appendix Table 1): overall, the data lie below the empirical upper-trend predicted by Assumption 2, and when d_0 is larger, Eq. 21 fits better, supporting the empirical rule in the small-R_c limit, log N_{d_0} ≪ d_0. Although we do not estimate R_c directly, the unidirectional graph structure and finite path counts are consistent with the setting of Assumption 2.

Figure 1: Visualization of G_{(3,4)} on a 5 × 5 grid. Red arrows denote reachable directed edges, and transparency encodes the membership of the ideal agent µ^{(t)}. The graph is unidirectional, strictly decreasing the Manhattan distance to the target.

4 Conclusion

This paper proposes a compact formal theory to unify the description and measurement of LLM-assisted iterative search. The core is to represent agents as fuzzy relation operators µ on inputs and outputs; aggregate the contributions of all reachable paths via the coverage generating function P_{f,g}(p); and characterize reachability difficulty by the critical parameter p_c(f, g) (with coverage index R_c = 1 − p_c). On the graph induced by the crisp agent (safety envelope), the shortest distance d_0 and the number of shortest paths N_{d_0} provide a geometric interpretation of the search process. The low-order dominance seen in iterated agents constructed by LLMs supplies a computable, testable, and model-agnostic language for how to measure.

Figure 2: Summary of shortest distance d_0 and number of shortest paths N_{d_0} for different start nodes f and corresponding targets t, plotting log N_{d_0} against d_0. Colors/markers distinguish N = 3, 5, 8; the black dashed line indicates the empirical upper-trend described by Assumption 2.

The majority-vote instantiation shows that the safety envelope induced by LLMs on a 2D grid yields a unidirectional and anisotropic reachable structure; the observed empirical relationship between shortest distance and the number of shortest paths is consistent with the theoretical upper-trend, supporting sharp threshold behavior in the small-R_c limit and the complexity-dominates hypothesis.

The theory offers testable predictions and quantifiable trade-offs. First, under approximately unidirectional graphs with rare closed walks and low-order dominance, we obtain the empirical upper-trend log N_{d_0} ≪ d_0, reflecting that complexity (shortest distance) dominates while path diversity is limited. Second, the safety-reachability trade-off can be quantified via µ and P_{f,g}(p): tightening the safety envelope (reducing reachable edges) decreases path diversity, increases d_0, lowers P_{f,g}(p), raises p_c, and reduces R_c; relaxing constraints has the opposite effect, but must respect safety [27]. Finally, the multiplicative lower bound and the propagation inequality for critical parameters in the presence of intermediate nodes provide possible guidance for constructing intermediate waypoints to reduce overall difficulty [31]. Practically, this theory provides quantitative guidelines for agent design and training on complex tasks.
For example, evaluation and training signals can be designed around p_c/R_c, d_0, and N_{d_0} so that reachability difficulty and safety compliance are simultaneous optimization goals; conversely, the theory can guide the design of agents for executing complex tasks: in early stages, prefer models with stricter safety envelopes to shrink the envelope and ensure compliance and stability; once the running epochs approach the reachable limit, gradually introduce looser safety envelopes to increase the coverage index [14, 17].

This paper presents an implementable theory; detailed experimental validation is left to future work, including further testing of the effectiveness of these measures and connecting the above indicators to reinforcement learning rewards and training procedures. Overall, by formalizing agents as computable fuzzy relation operators and unifying safety and reachability under the same measurement, the theory serves as a foundational tool for understanding and improving LLM-driven long-horizon search and complex-task agents.

References

[1] Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. ReAct: Synergizing reasoning and acting in language models, 2023. URL https://arxiv.org/abs/2210.03629.

[2] Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models, 2023. URL https://arxiv.org/abs/2305.10601.

[3] Jason Wei et al. Chain-of-thought prompting elicits reasoning in large language models. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems, volume 35, pages 24824–24837. Curran Associates, Inc., 2022. URL https://papers.baulab.info/papers/also/Wei-2022b.pdf.

[4] Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models, 2023. URL https://arxiv.org/abs/2203.11171.

[5] Maciej Besta et al. Graph of thoughts: Solving elaborate problems with large language models. Proceedings of the AAAI Conference on Artificial Intelligence, 38(16):17682–17690, Mar. 2024. doi: 10.1609/aaai.v38i16.29720. URL https://ojs.aaai.org/index.php/AAAI/article/view/29720.

[6] Lei Wang, Wanyu Xu, Yihuai Lan, Zhiqiang Hu, Yunshi Lan, Roy Ka-Wei Lee, and Ee-Peng Lim. Plan-and-solve prompting: Improving zero-shot chain-of-thought reasoning by large language models, 2023. URL https://arxiv.org/abs/2305.04091.

[7] Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools, 2023. URL https://arxiv.org/abs/2302.04761.

[8] Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. Voyager: An open-ended embodied agent with large language models, 2023. URL https://arxiv.org/abs/2305.16291.

[9] Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W. Cohen. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks, 2023. URL https://arxiv.org/abs/2211.12588.

[10] Luyu Gao et al. PAL: Program-aided language models. In Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pages 10764–10799. PMLR, 23–29 Jul 2023.
URL https://proceedings.mlr.press/v202/gao23f.html.

[11] Jacky Liang, Wenlong Huang, Fei Xia, Peng Xu, Karol Hausman, Brian Ichter, Pete Florence, and Andy Zeng. Code as policies: Language model programs for embodied control, 2023. URL https://arxiv.org/abs/2209.07753.

[12] Daniel J. Mankowitz et al. Faster sorting algorithms discovered using deep reinforcement learning. Nature, 618(7964):257–263, 2023. URL https://www.nature.com/articles/s41586-023-06004-9.

[13] Bernardino Romera-Paredes et al. Mathematical discoveries from program search with large language models. Nature, 625(7995):468–475, 2024. URL https://www.nature.com/articles/s41586-023-06924-6.

[14] Alexander Novikov et al. AlphaEvolve: A coding agent for scientific and algorithmic discovery, 2025. URL https://arxiv.org/abs/2506.13131.

[15] Zhuo-Yang Song, Tong-Zhi Yang, Qing-Hong Cao, Mingxing Luo, and Hua Xing Zhu. Explainable AI-assisted optimization for Feynman integral reduction, 2025. URL https://arxiv.org/abs/2502.09544.

[16] Qing-Hong Cao et al. Quantum state preparation via large-language-model-driven evolution, 2025. URL https://arxiv.org/abs/2505.06347.

[17] Zhuo-Yang Song et al. Iterated agent for symbolic regression, 2025. URL https://arxiv.org/abs/2510.08317.

[18] Tad Hogg, Bernardo A. Huberman, and Colin P. Williams. Phase transitions and the search problem. Artificial Intelligence, 81(1):1–15, 1996. ISSN 0004-3702. doi: 10.1016/0004-3702(95)00044-5. URL https://www.sciencedirect.com/science/article/pii/0004370295000445. Frontiers in Problem Solving: Phase Transitions and Complexity.

[19] Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798–1828, 2013. doi: 10.1109/TPAMI.2013.50. URL https://ieeexplore.ieee.org/abstract/document/6472238/.

[20] Armen Aghajanyan, Luke Zettlemoyer, and Sonal Gupta. Intrinsic dimensionality explains the effectiveness of language model fine-tuning, 2020. URL https://arxiv.org/abs/2012.13255.

[21] Zhuo-Yang Song, Zeyu Li, Qing-Hong Cao, Mingxing Luo, and Hua Xing Zhu. Bridging the dimensional chasm: Uncover layer-wise dimensional reduction in transformers through token correlation, 2025. URL https://arxiv.org/abs/2503.22547.

[22] David H. Wolpert and William G. Macready. No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation, 1(1):67–82, 2002. URL https://ieeexplore.ieee.org/abstract/document/585893.

[23] Eitan Altman. Constrained Markov Decision Processes. Routledge, 2021. URL https://doi.org/10.1201/9781315140223.

[24] Long Ouyang et al. Training language models to follow instructions with human feedback. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems, volume 35, pages 27730–27744. Curran Associates, Inc., 2022. URL https://dl.acm.org/doi/abs/10.5555/3600270.3602281.

[25] Yuntao Bai et al. Constitutional AI: Harmlessness from AI feedback, 2022. URL https://arxiv.org/abs/2212.08073.

[26] Paul F. Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. Advances in Neural Information Processing Systems, 30, 2017. URL https://papers.baulab.info/papers/also/Christiano-2017.pdf.

[27] Geoffrey Irving, Paul Christiano, and Dario Amodei. AI safety via debate, 2018. URL https://arxiv.org/abs/1805.00899.
[28] Luca Beurer-Kellner, Marc Fischer, and Martin Vechev. Prompting is programming: A query language for large language models. Proc. ACM Program. Lang., 7(PLDI), June 2023. doi: 10.1145/3591300. URL https://doi.org/10.1145/3591300.

[29] Traian Rebedea, Razvan Dinu, Makesh Sreedhar, Christopher Parisien, and Jonathan Cohen. NeMo Guardrails: A toolkit for controllable and safe LLM applications with programmable rails, 2023. URL https://arxiv.org/abs/2310.10501.

[30] Richard S. Sutton, Andrew G. Barto, et al. Reinforcement Learning: An Introduction, volume 1. MIT Press, Cambridge, 1998. URL http://unbox.org/wisp/doc/sutton98.pdf.

[31] Yoshua Bengio et al. Curriculum learning. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML '09, pages 41–48, New York, NY, USA, 2009. Association for Computing Machinery. ISBN 9781605585161. doi: 10.1145/1553374.1553380. URL https://doi.org/10.1145/1553374.1553380.

[32] Radim Belohlavek. Fuzzy Relational Systems: Foundations and Principles, volume 20. Springer Science & Business Media, 2012. URL https://link.springer.com/book/10.1007/978-1-4615-0633-1.

A Experimental Setup and Prompts

For reproducibility, this appendix provides the detailed experimental setup and prompts.

A.1 Grid sizes and target points

number  N  t
1       3  (1, 2)
2       5  (3, 4)
3       8  (6, 7)

Table 1: Three grid sizes and corresponding target points t.

A.2 Model list and sampling settings

Model set: gpt-5-mini, gpt-5, qwen3, qwen-plus, gemini-2.5-flash, deepseek-v3, grok-4, doubao.

Number of samples: For each input position f under a given target t, independently sample m = 5 times.

A.3 Prompts

The prompt used to drive each model to output the next position is as follows:

# example input:
# N = 5
# f = (0, 0)
# t = (3, 4)

prompt = f"""
You are an ant on a {N}x{N} grid. Your current position is {list(f)}, and the target position (food) is {list(t)}.
You can move up, down, left, or right by one unit, but cannot move outside the grid.
Based on the current state, decide the next position to move to, and return the result in JSON format with the field "next_position".

Note:
- Only choose legal move positions
- Choose the position that gets you closer to the target
- Example return format: {{"next_position": [x_g, y_g]}}
- Write down the json only, no other text
"""

messages = [
    {"role": "system", "content": "You are a helpful assistant that helps people find information."},
    {"role": "user", "content": prompt},
]
Preprint

BACKDOOR UNLEARNING BY LINEAR TASK DECOMPOSITION

Amel Abdelraheem∗†1, Alessandro Favero∗1, Gérôme Bovet2, Pascal Frossard1
1EPFL, Lausanne, Switzerland
2Cyber-Defence Campus, armasuisse, Switzerland
∗Equal contribution. †Corresponding author: amel.abdelraheem@epfl.ch

ABSTRACT

Foundation models have revolutionized computer vision by enabling broad generalization across diverse tasks. Yet, they remain highly susceptible to adversarial perturbations and targeted backdoor attacks. Mitigating such vulnerabilities remains an open challenge, especially given that the large-scale nature of the models prohibits retraining to ensure safety. Existing backdoor removal approaches rely on costly fine-tuning to override the harmful behavior, and can often degrade performance on other unrelated tasks. This raises the question of whether backdoors can be removed without compromising the general capabilities of the models. In this work, we address this question and study how backdoors are encoded in the model weight space, finding that they are disentangled from other benign tasks. Specifically, this separation enables the isolation and erasure of the backdoor's influence on the model with minimal impact on clean performance. Building on this insight, we introduce a simple unlearning method that leverages such disentanglement. Through extensive experiments with CLIP-based models and common adversarial triggers, we show that, given the knowledge of the attack, our method achieves approximately perfect unlearning, while retaining, on average, 96% of clean accuracy. Additionally, we demonstrate that even when the attack and its presence are unknown, our method successfully unlearns backdoors by proper estimation using reverse-engineered triggers. Overall, our method consistently yields better unlearning and clean accuracy tradeoffs when compared to present state-of-the-art defenses.

1 INTRODUCTION

Foundation models have become a cornerstone of modern deep learning, offering broad generalization across a wide range of tasks through large-scale pre-training (Radford et al., 2021; Jia et al., 2021). Among them, vision-language models like CLIP (Radford et al., 2021) play a fundamental role. They not only demonstrate remarkable robustness to distribution shifts and zero-shot performance on out-of-distribution benchmarks (Wortsman et al., 2022b), but their vision encoders also serve as a key component in many multimodal large language models, such as, e.g., LLaVA (Liu et al., 2023). However, the very success and widespread integration of these models make them a prime target for security threats, most notably backdoor attacks (Carlini & Terzis, 2021; Bansal et al., 2023) – a class of threats that compromise model integrity even after training is complete.

In a backdoor attack (Gu et al., 2017), an adversary poisons a small portion of the training data by embedding a fixed trigger pattern into inputs and mislabeling them to a target class. The resulting model appears to perform well on clean inputs but systematically misclassifies any input containing the trigger – effectively granting the adversary precise control over model predictions. Such vulnerabilities pose a serious risk in safety-critical applications, including autonomous driving and medical diagnostics (Du et al., 2024; Hanif et al., 2024).
Current defenses for CLIP largely fall into two categories: (i) retraining the model from scratch using modified loss functions designed to resist backdoors, or (ii) fine-tuning on clean data to override the malicious behavior (Bansal et al., 2023; Yang et al., 2024b; Goel et al., 2022a). However, full retraining is prohibitively expensive at scale, while fine-tuning – though cheaper – frequently induces catastrophic forgetting (French, 1999), whereby the pre-trained knowledge is erased. Furthermore, recent studies show that fine-tuning strategies struggle against more sophisticated attacks (Liang et al., 2024).

An alternative line of work, machine unlearning (Cao & Yang, 2015), seeks to selectively remove (or forget) specific learned behaviors post-hoc, avoiding full retraining. Currently, the application of unlearning methods to targeted backdoor removal remains limited. Prominent unlearning algorithms such as gradient ascent and its variants have been shown to fall short when applied to backdoor removal in small-scale settings (Pawelczyk et al., 2024). Yet, their effectiveness in large-scale foundation models remains an open question.

In this paper, we introduce an efficient, post-hoc method for unlearning backdoors from vision-language foundation models while preserving their clean capabilities. Our approach builds on recent advances in model editing in weight space (Frankle et al., 2020; Izmailov et al., 2018; Wortsman et al., 2021; 2022a; Rame et al., 2022; Ainsworth et al., 2022; Ilharco et al., 2022b;a). In particular, Ilharco et al. (2022a) introduced the concept of a task vector, which is the element-wise difference between the weights of a fine-tuned model and its pre-trained initialization. Task vectors provide a means to encode learned tasks as directions in weight space. They can be added to a model to inject functionality, subtracted to unlearn specific tasks, or combined to compose multi-task models. These manipulations are enabled by the disentanglement of tasks in the weight space of pre-trained models, as recently formalized by Ortiz-Jimenez et al. (2024).

Motivated by these insights, we investigate how backdoors are encoded in the weight space of CLIP-based models. We find that weights can be linearly decomposed into clean and triggered components, effectively disentangling the malicious behavior from the model's benign capabilities. This disentanglement allows us to isolate the backdoor's influence by exploiting task arithmetic. In practice, this is achieved by fine-tuning the model on a small set of triggered examples to compute a "trigger vector". This vector isolates the malicious behavior and can thus be subtracted – via task negation – to surgically remove the backdoor while preserving clean model performance, as illustrated in Figure 1. We hence reframe the problem of backdoor unlearning as a simple problem of vector arithmetic. Our main contributions are:

• We leverage the weight disentanglement formalism to demonstrate that backdoors in CLIP-based transformer models are disentangled from clean knowledge in weight space, enabling targeted removal via linear operations without encountering catastrophic forgetting of non-adversarial knowledge.

• We introduce TBAR (Trigger removal by Backdoor ARithmetic), a lightweight approach for backdoor unlearning via weight-space task negation.
When the trigger is known, TBAR unlearns 99% of the backdoor while retaining 96% of obtained clean accuracy on average across (i) image classification backdoor benchmarks and (ii) large-scale image-captioning tasks. Notably, in the latter case, it outperforms state-of-the-art clean-data fine-tuning defenses while using less than 2% of the data requirements.

• We extend TBAR to operate in large-scale settings in an attack-agnostic scenario by pairing it with reverse-engineered proxy triggers. Our method successfully sanitizes infected models, outperforming state-of-the-art defenses while preserving over 90% clean accuracy.

2 PROBLEM SETUP

This work focuses on security vulnerabilities associated with backdoor attacks. Specifically, we consider the following threat model and defender assumptions. This setup is an extension of several previous settings in the literature (Carlini & Terzis, 2021; Bansal et al., 2023; Feng et al., 2023; Pawelczyk et al., 2024).

Threat model The adversary has full white-box access to a pre-trained model and fine-tuning data to be used to backdoor the model. The attack is conducted by injecting a small poisoned subset into a larger training dataset. The resulting backdoored model is released publicly and intended for downstream use by unaware users. Unless otherwise specified, we consider the attack successful if the triggered examples are predicted as a targeted label.

Figure 1: Backdoored models embed malicious behavior along with clean task performance. Instead of erasing all learned information, we propose a targeted approach: (1) Given a backdoored model, (2) the backdoor encodes two distinct directions, (3) fine-tuning the model on similarly constructed triggered data isolates the parameter shift associated with triggered information. (4) Negating this vector from the original parameters effectively removes the trigger while preserving clean task performance.

Defender assumptions The defender's goal is to remove the backdoor (i.e., reduce attack success rate to zero) while preserving the model's performance on clean data. The defender has full access to model weights. We consider two distinct practical scenarios:

• Trigger-known: The defender is given a small forget set containing the true trigger, reflecting a common assumption in the context of backdoor defenses within unlearning studies, where an attack has been identified and its characteristics are known.

• Trigger-unknown: The defender does not know the true trigger but has access to a small set of clean data.

3 BACKGROUND

This section introduces the necessary tools for understanding model editing using weight interpolation. In particular, we recall the operation of task arithmetic and the property of weight disentanglement.

Notation Let a neural network be a parameterized function f : X × Θ → Y with inputs x ∈ X and weights θ ∈ Θ. We identify a task k ∈ [K] as a triplet (D_k, µ_k, f*_k) with domain D_k ⊆ X, input distribution µ_k (supp(µ_k) = D_k), and f*_k : D_k → Y.

Model editing with task arithmetic (Ilharco et al., 2022a) Fine-tuning a pre-trained model θ_pre on task k yields new weights θ*_k. The change in weights, τ_k = θ*_k − θ_pre, defines the task vector. Task arithmetic modifies the model by applying scaled task vectors: θ_new = θ_pre + α τ_k for a single task, or θ_new = θ_pre + Σ_{k=1}^{K} α_k τ_k for multiple tasks. The scalar coefficient α controls the strength of the edit as well as its direction, where positive values denote the learning of a task and negative values lead to the unlearning of the particular task.
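As a minimal sketch, task arithmetic amounts to element-wise operations on model state dicts; the helper names below are ours, and the checkpoints in the usage comments are hypothetical.

def task_vector(theta_pre, theta_ft):
    """tau_k = theta_k - theta_pre, element-wise over the state dict."""
    return {k: theta_ft[k] - theta_pre[k] for k in theta_pre}

def apply_task_vectors(theta_pre, vectors, alphas):
    """theta_new = theta_pre + sum_k alpha_k tau_k."""
    new = {k: v.clone() for k, v in theta_pre.items()}
    for tau, alpha in zip(vectors, alphas):
        for k in new:
            new[k] += alpha * tau[k]   # alpha < 0 negates (unlearns) the task
    return new

# Usage with hypothetical PyTorch checkpoints:
# theta_pre = torch.load("clip_pretrained.pt")
# theta_ft  = torch.load("clip_finetuned_task.pt")
# tau = task_vector(theta_pre, theta_ft)
# edited = apply_task_vectors(theta_pre, [tau], alphas=[-1.0])  # task negation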
Weight disentanglement Ortiz-Jimenez et al. (2024) introduced weight disentanglement as the property where the functional changes induced by a set of task vectors are localized to their respective task domains. Specifically, when multiple task vectors are linearly combined in weight space, the resulting model behaves as if it selectively applies each individual task's function only for inputs within that task's domain, reverting to the pre-trained model's behavior otherwise. The ability to perform task arithmetic with a set of task vectors T is a direct consequence of this weight disentanglement, where each task vector τ_k encodes a distinct functional component specific to its domain D_k. Formally, for a set of task vectors {τ_k}_{k∈[K]}, the edited model satisfies weight disentanglement if (cf. Ortiz-Jimenez et al. (2024) for the formal definition):

f(x; θ_pre + Σ_{k=1}^{K} α_k τ_k) = Σ_{k=1}^{K} f(x; θ_pre + α_k τ_k) 1(x ∈ D_k) + f(x; θ_pre) 1(x ∉ ∪_{k=1}^{K} D_k).   (1)

To measure the presence of weight disentanglement, Ortiz-Jimenez et al. (2024) introduced the weight disentanglement error, which measures the prediction disagreement between models obtained by applying the individual task vectors and the combination thereof, evaluated on the respective task supports. For two tasks, this reads:

ξ(α_1, α_2) = Σ_{i∈{1,2}} E_{x∼µ_i} [dist(f(x; θ_pre + α_i τ_i), f(x; θ_pre + α_1 τ_1 + α_2 τ_2))],   (2)

where dist can be any distance metric between model outputs. For instance, for classification tasks dist(y_1, y_2) = 1(y_1 ≠ y_2). In the next section, we study backdoor attacks through the lens of task arithmetic and weight disentanglement. We treat the benign task and the malicious backdoor behavior as two separate – and, ideally, separable – tasks operating on distinct data domains, i.e., clean or triggered inputs.

4 TBAR: TRIGGER REMOVAL BY BACKDOOR ARITHMETIC

Disentanglement of clean and triggered tasks Consider a model with pre-training weights θ_pre that has been backdoored, resulting in weights θ_b. We investigate whether the joint training implicitly defines two tasks in parameter space, enabling the model's behavior to decompose into clean and triggered components. Formally, let τ_c and τ_t be the task vectors for the clean and triggered tasks, with domains D_c (clean images) and D_t (triggered images). Following the definition in Equation 1, the backdoored model satisfies weight disentanglement with respect to these vectors if, ∀x ∈ D_c ∪ D_t,

f(x; θ_pre + α_c τ_c + α_t τ_t) = f(x; θ_pre + α_c τ_c) 1(x ∈ D_c) + f(x; θ_pre + α_t τ_t) 1(x ∈ D_t).   (3)

In this work, we formulate the following hypothesis:

Hypothesis. The weights of vision foundation models satisfy weight disentanglement for common backdoor attacks, i.e., their output function f satisfies Equation 3.

The crucial implication of this property is the existence of a specific direction in weight space, τ_t, that exclusively governs the backdoor's malicious behavior. If this holds, removing the backdoor without causing catastrophic forgetting is possible: one simply needs to estimate τ_t and subtract it from the model's weights. As we will demonstrate in the next section, this hypothesis holds in practice and allows us to effectively unlearn the backdoor without compromising the model's clean knowledge.

Provided this hypothesis holds, we only need to estimate the trigger vector in order to remove the attack. To accomplish this, we define a small, disjoint forget set composed entirely of triggered image-target pairs. We fine-tune the suspected backdoored model θ_b on this set, yielding updated weights θ_{b+t}. The parameter difference from this step gives us an estimate of the trigger direction:

τ̂_t = θ_{b+t} − θ_b.   (4)

We can then surgically remove the backdoor's influence from the original backdoored model via task negation, yielding a cleaned model θ̂_c:

θ̂_c = θ_b − α τ̂_t,   (5)

where α is a scalar coefficient controlling the strength of the unlearning. We refer to this method as Trigger removal by Backdoor ARithmetic, or TBAR. As with other weight interpolation techniques, we can use a small validation set for selecting the optimal value of the scaling coefficient α (Ilharco et al., 2022b;a; Yadav et al., 2023; Ortiz-Jimenez et al., 2024; Hazimeh et al., 2024).
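A compact sketch of this pipeline in PyTorch-style code follows; `finetune` is a placeholder for a standard fine-tuning loop, and all names are illustrative rather than the authors' released implementation.

import copy

def finetune(model, forget_set):
    """Placeholder for a standard fine-tuning loop on the forget set."""
    raise NotImplementedError

def tbar(backdoored_model, forget_set, alpha):
    theta_b = copy.deepcopy(backdoored_model.state_dict())
    # Fine-tune a copy of the suspected model on triggered pairs.
    tuned = finetune(copy.deepcopy(backdoored_model), forget_set)
    theta_bt = tuned.state_dict()
    # Estimated trigger direction: tau_t = theta_{b+t} - theta_b   (Eq. 4)
    tau_t = {k: theta_bt[k] - theta_b[k] for k in theta_b}
    # Task negation: theta_c = theta_b - alpha * tau_t             (Eq. 5)
    theta_c = {k: theta_b[k] - alpha * tau_t[k] for k in theta_b}
    backdoored_model.load_state_dict(theta_c)
    return backdoored_model

# alpha would be chosen by grid search on a small validation set, e.g. over
# [0.0, 0.1, ..., 2.0], keeping the value with the best CA/ASR trade-off.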
5 TRIGGER VECTOR ESTIMATION WITH TBAR

In this section, we focus on known-trigger settings, empirically validate our hypothesis, and show the effectiveness of TBAR on standard attacks. Moreover, we demonstrate that the learned TBAR vectors can be transferred across datasets and scale to practically relevant settings.

Table 1: Controlled experiments showing effectiveness of TBAR on single-task CLIP ViT-B/32 classifiers under three backdoor attacks. Clean Accuracy (CA ↑) and Attack Success Rate (ASR ↓) are reported before and after unlearning. Percentages in parentheses denote CA retention and ASR removal relative to the backdoored model. Results are averaged over 4 seeds.

Dataset      Attack   init CA | Attacked CA ↑  Attacked ASR ↓ | TBAR CA ↑               TBAR ASR ↓
SUN397       BadNet   61.46   | 74.43 ± 0.34   91.40 ± 0.57   | 70.68 ± 0.84 (94.96%)   1.25 ± 2.37 (98.63%)
SUN397       Blended  61.46   | 74.72 ± 0.34   99.92 ± 0.12   | 73.36 ± 1.17 (98.17%)   0.00 ± 0.00 (100%)
SUN397       WaNet    61.46   | 74.71 ± 0.12   99.62 ± 0.26   | 73.31 ± 0.40 (98.13%)   0.00 ± 0.00 (100%)
CIFAR100     BadNet   62.46   | 88.77 ± 0.18   99.96 ± 0.04   | 85.61 ± 2.07 (96.44%)   0.02 ± 0.02 (99.98%)
CIFAR100     Blended  62.46   | 88.71 ± 0.22   99.98 ± 0.03   | 85.17 ± 1.96 (96.01%)   0.18 ± 0.48 (99.82%)
CIFAR100     WaNet    62.46   | 88.66 ± 0.38   99.72 ± 0.05   | 87.61 ± 0.64 (98.82%)   0.04 ± 0.02 (99.96%)
ImageNet-1K  BadNet   59.58   | 67.23 ± 0.18   93.56 ± 0.31   | 63.85 ± 0.29 (94.97%)   1.96 ± 2.38 (97.91%)
ImageNet-1K  Blended  59.58   | 67.50 ± 0.20   99.91 ± 0.04   | 66.06 ± 0.93 (97.87%)   0.00 ± 0.00 (100%)
ImageNet-1K  WaNet    59.58   | 67.64 ± 0.18   99.86 ± 0.03   | 65.77 ± 1.20 (97.24%)   0.00 ± 0.00 (100%)

5.1 DISENTANGLEMENT OF CLEAN AND TRIGGERED KNOWLEDGE

We start by following the standard model editing setup, where the CLIP text encoder stays frozen and only the visual encoder is fine-tuned (Wortsman et al., 2022b; Ilharco et al., 2022a; Yadav et al., 2023; Ortiz-Jimenez et al., 2024). To construct a targeted poisoning attack on the visual encoder of CLIP by injecting triggered images into the training set, we follow (Carlini & Terzis, 2021). In particular, triggers are generated using three widely adopted methods: BadNet (Gu et al., 2017), which inserts a random square patch at a random location; Blended (Chen et al., 2017), which overlays uniform noise across the image; and WaNet (Nguyen & Tran, 2021; Qi et al., 2023), which applies a subtle warping transformation. While BadNet represents a visible trigger, Blended and WaNet are considered invisible triggers. We evaluated three benchmark vision datasets: SUN397, CIFAR100, and ImageNet-1K, poisoned at a rate of 3% of their training data. We report the per-dataset details in Appendix A.
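For concreteness, a hedged sketch of BadNet-style poisoning as just described; the patch size, poisoning rate, and target label below are illustrative defaults rather than the exact experimental configuration.

import numpy as np

def poison_badnet(images, labels, target, rate=0.03, patch=8, seed=0):
    """images: uint8 array of shape (N, H, W, 3); labels: int array of shape (N,)."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    for i in idx:
        h, w = images[i].shape[:2]
        y = rng.integers(0, h - patch + 1)   # random patch location
        x = rng.integers(0, w - patch + 1)
        # Random square patch as the trigger; relabel to the attack target.
        images[i, y:y + patch, x:x + patch] = rng.integers(0, 256, (patch, patch, 3))
        labels[i] = target
    return images, labels, idx

# e.g. imgs, lbls, poisoned = poison_badnet(train_imgs, train_lbls, target=0)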
To obtain the TBAR vectors, we use a small held-out forget set of 2000 examples from the training set and fine-tune using the same hyperparameter settings per dataset. Optimal scaling coefficients are found using a grid search, consistent with previous literature (Ilharco et al., 2022b;a; Yadav et al., 2023; Ortiz-Jimenez et al., 2024; Hazimeh et al., 2024). Table 1 presents the full unlearning results across all datasets and attack types, reporting clean accuracy (CA) and attack success rate (ASR) before and after applying TBAR. TBAR consistently removes backdoors effectively, reducing ASR by over 98% in all cases. Notably, this comes with only a moderate drop in obtained clean accuracy (i.e., 4% on average), indicating that TBAR successfully isolates and removes triggered behavior from the model's weights.

Empirical validation of the disentanglement hypothesis We now validate our hypothesis and show that our successful unlearning is due to the disentanglement between clean and triggered behaviors by using the weight disentanglement error ξ (defined in Equation 2). To do this, we must first construct the clean and trigger task vectors to be compared. Starting with τ̂_t (from Equation 4) as the estimated direction of the trigger, we find an optimal scaling coefficient, α*, defined as the value that reduces the attack success rate to zero. This allows us to define the optimal trigger vector as α* τ̂_t. To define the corresponding clean vector, we first define the total update vector from pre-training to the backdoored state as τ_b = θ_b − θ_pre. The clean vector, τ̂_c, is then computed as the residual of the total update: τ̂_c = τ_b − α* τ̂_t. If our disentanglement hypothesis holds, we expect to find a low disentanglement error between the resulting merged model and single models constructed using τ̂_c and α* τ̂_t, on the respective data supports.

Visualizations of the weight disentanglement error presented in Figure 2 confirm the disentanglement in weight space and our hypothesis. In fact, the large bright regions at the center of the plots, indicating low error, show that the two tasks exhibit strong separation in weight space, providing evidence that triggered and clean vectors correspond to distinct directions.¹

¹Notice that this is an analytical step and, in practice, it is not needed for our method's operation.

Figure 2: Weight disentanglement between clean and triggered tasks (panels: SUN397, CIFAR100, ImageNet-1k; axes α_c, α_t ∈ [−2, 2]; color: ξ(α_c, α_t) from 0% to 100%). We estimate the triggered direction τ̂_t from the backdoored model and define the clean direction τ̂_c as the residual after negation. The plots show the disentanglement error ξ(α_c, α_t) between these task vectors, following Ortiz-Jimenez et al. (2024). Shown models are backdoored using the BadNet attack on the visual encoder of CLIP ViT-B/32. Similar plots for the other attacks are provided in Appendix B.
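A sketch of how such disentanglement-error maps can be produced follows; `predict` and the data loaders are placeholders (assumed to return PyTorch tensors of predicted labels), and Eq. 2 is instantiated as prediction disagreement.

def xi(theta_pre, tau_c, tau_t, a_c, a_t, loader_c, loader_t, predict):
    """Disentanglement error of Eq. 2 for the clean/trigger vector pair."""
    def shift(*scaled):  # theta_pre plus scaled task vectors
        out = {k: v.clone() for k, v in theta_pre.items()}
        for a, tau in scaled:
            for k in out:
                out[k] += a * tau[k]
        return out
    combined = shift((a_c, tau_c), (a_t, tau_t))
    err = 0.0
    # Each single-task edit is compared with the combined edit on its own support.
    for a, tau, loader in ((a_c, tau_c, loader_c), (a_t, tau_t, loader_t)):
        y_single = predict(shift((a, tau)), loader)
        y_merged = predict(combined, loader)
        err += (y_single != y_merged).float().mean().item()
    return err

# Evaluating xi on a grid of (a_c, a_t) in [-2, 2]^2 yields maps like Fig. 2;
# large low-error regions indicate that the two directions are disentangled.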
5.2 GENERALIZATION AND TRANSFERABILITY OF TRIGGER VECTORS

One of the main motivations behind using task vectors is their modularity: the ability to apply or combine them across models without retraining. In the case of backdoor unlearning, we therefore investigate a similar question: does a TBAR vector trained on one dataset capture the backdoor mechanism in a way that transfers to other models infected with the same attack? Indeed, if the vector encodes only the trigger-to-misdirection behavior, rather than task-specific semantics, it should remain effective across models trained on different datasets, as long as the backdoor type and trigger remain consistent. To test this, we evaluate unlearning performance in out-of-distribution settings using vectors extracted from a backdoored ImageNet-1K model. We apply these vectors to remove backdoors from models trained on CIFAR100 and SUN397, respectively.

Table 2: Unlearning performance on CIFAR100 and SUN397 using TBAR vectors extracted from a backdoored ImageNet-1K model. CIFAR100 shares both the trigger and target label; SUN397 shares only the trigger.

Attack    Dataset     CA ↑     ASR ↓    CA (TBAR) ↑       ASR (TBAR) ↓
BadNet    CIFAR100    88.82    99.93    84.59 (95.24%)    00.02 (99.98%)
BadNet    SUN397      74.76    91.20    69.29 (92.68%)    00.99 (98.91%)
Blended   CIFAR100    88.78    99.98    84.49 (95.17%)    00.48 (99.52%)
Blended   SUN397      74.81    99.85    62.91 (84.09%)    05.08 (94.91%)
WaNet     CIFAR100    88.78    99.80    87.43 (98.48%)    00.53 (99.47%)
WaNet     SUN397      74.91    99.80    73.84 (98.57%)    01.72 (98.28%)

In this setup, CIFAR100 shares both the trigger and target label with ImageNet-1K, while SUN397 shares only the trigger (e.g., the same BadNet-style patch, but mapped to a different label). These two settings allow us to test two hypotheses: (i) transfer is facilitated when both the trigger and target label align, and (ii) transfer may still occur when only the trigger is shared, suggesting that the vector captures a generic trigger-to-misdirection pattern within the attack type. Remarkably, Table 2 shows that TBAR vectors extracted with ImageNet-1K remain effective when applied to other models backdoored with the same attack. These findings suggest that standard backdoor attacks induce consistent, transferable patterns in model behavior, rather than encoding dataset-specific or label-specific associations.

5.3 LARGE SCALE IMAGE-CAPTION EXPERIMENTS

We now extend our analysis and show that TBAR continues to deliver strong performance even in more challenging deployment settings. Specifically, we backdoor full CLIP models using image-caption pairs. Following the setup of Bansal et al. (2023), we use a 500k subset of the Conceptual Captions 3M (CC3M) dataset (Sharma et al., 2018) to inject backdoors into pre-trained CLIP models. As in prior work, we evaluate CA and ASR on the ImageNet-1K validation set. We consider four standard backdoor attacks: BadNet, Blended, WaNet, and BadCLIP (Liang et al., 2024), a recently introduced optimized patch attack for CLIP models. These attacks are evaluated against three clean-data fine-tuning defenses: CleanCLIP (Bansal et al., 2023), RoCLIP (Yang et al., 2024b),
and standard CLIP fine-tuning.² As an unlearning baseline, we use Gradient Ascent (GA) (Graves et al., 2021), applied with triggered data similarly to Pawelczyk et al. (2024). Full implementation details are provided in Appendix A.

² These methods operate solely on clean, non-triggered images. Consequently, they tend to require larger datasets and longer training durations, increasing their vulnerability to catastrophic forgetting.

To construct TBAR vectors, we define a disjoint 'forget set' of 1.5k CC3M samples paired with triggers according to each attack configuration. Optimal scaling coefficients are selected using a validation set drawn from ImageNet-1K training data.

Table 3: TBAR performance on CLIP ViT-B/32 under four backdoor attacks (BadNet, Blended, WaNet, and BadCLIP). We report both CA and ASR. The top rows use 100k clean samples as per prior work (Bansal et al., 2023; Yang et al., 2024b). The middle rows use true targeted unlearning with 1.5k poisoned samples. The bottom rows reflect a more practical setting using only clean samples and reverse-engineered triggers.

                                 BadNet            Blended           WaNet             BadCLIP
                                 CA ↑    ASR ↓     CA ↑    ASR ↓     CA ↑    ASR ↓     CA ↑    ASR ↓
Zero-Shot                        63.34%  00.00%    63.34%  00.00%    63.34%  00.00%    63.34%  00.00%
Backdoored                       61.69%  84.48%    61.39%  99.67%    61.32%  93.12%    61.41%  99.98%
Clean-data fine-tuning:
  Contrastive-FT                 51.41%  13.72%    51.77%  02.01%    51.58%  00.05%    51.41%  79.32%
  RoCLIP                         50.02%  47.91%    51.84%  06.40%    48.26%  00.04%    53.31%  99.32%
  CleanCLIP                      51.41%  04.11%    51.02%  00.05%    51.09%  00.04%    51.82%  77.04%
True unlearning:
  GA                             59.89%  07.95%    59.92%  00.01%    58.71%  00.04%    58.45%  00.08%
  TBAR                           59.28%  00.38%    60.46%  00.09%    60.14%  00.05%    56.58%  00.77%
Reverse-engineered unlearning:
  GA+DECREE                      60.41%  08.30%    56.92%  76.40%    60.22%  35.67%    N/A     N/A
  TBAR+DECREE                    60.29%  00.33%    55.56%  00.90%    56.85%  00.64%    N/A     N/A

Table 3 reports CA and ASR for CLIP ViT-B/32. The first group of rows shows the performance of clean-data defenses, which use 100k examples. These methods generally exhibit large CA drops and fail to remove stronger attacks such as BadCLIP. The second group presents the results for unlearning methods. TBAR achieves significantly lower ASR than the baselines above, while retaining most of the clean accuracy obtained after backdooring. Remarkably, it also uses two orders of magnitude less data. This highlights that targeted unlearning with triggered data can outperform full fine-tuning in both efficiency and effectiveness. Finally, notice that gradient ascent also performs well in this setting, in contrast to previous results in the literature on smaller-scale models (Pawelczyk et al., 2024); further discussion and caveats are given below.

Note, however, that despite the strong performance of TBAR, current backdoor defenses for CLIP and traditional unlearning methods do not share the same underlying assumptions. In particular, the latter assume access to a set of triggered examples, and therefore knowledge of the attack, which might not hold in practice. Hence, in the next section, we relax this stronger assumption.

6 AGNOSTIC-ATTACK UNLEARNING

To close the gap in assumptions between current CLIP defenses and our method, we extend TBAR to operate without explicit knowledge of the attack.

Unlearning with reverse-engineered triggers To achieve this, we propose to use trigger reverse engineering to construct a proxy forget set starting from the backdoored model and a set of clean inputs. In particular, we combine TBAR with DECREE (Feng et al., 2023), a self-supervised method introduced for attack detection, which inverts triggers by searching for minimal patterns such that any input stamped with the pattern yields similar output embeddings. Given the optimized trigger, we then infer the corresponding infected label by probing the backdoored model with DECREE-generated triggers and identifying the predicted class from the set of ImageNet-1K categories. Using this estimate, we construct proxy-triggered image-caption pairs via standard text templates (Radford et al., 2021).
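For illustration, a minimal sketch of this label-inference step is shown below. The helpers `stamp_trigger` (applying the DECREE-recovered pattern to an image batch) and `class_text_features` (pre-computed CLIP text embeddings of the ImageNet-1K class templates) are assumptions, and `encode_image` follows the common CLIP API.

```python
from collections import Counter
import torch

@torch.no_grad()
def infer_target_label(model, clean_batches, stamp_trigger, class_text_features):
    """Probe the suspected model with reverse-engineered triggers and take the
    most frequent zero-shot prediction as the inferred backdoor target."""
    votes = Counter()
    for x in clean_batches:
        feats = model.encode_image(stamp_trigger(x))
        feats = feats / feats.norm(dim=-1, keepdim=True)
        preds = (feats @ class_text_features.T).argmax(dim=-1)
        votes.update(preds.tolist())
    return votes.most_common(1)[0][0]  # index into the ImageNet-1K categories
```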
Interestingly, we observe that the true ASR keeps improving even after the proxy-triggered attack is unlearned. We therefore adopt a search strategy that continues to increase the unlearning coefficient for a fixed window (typically 10 steps) after the proxy ASR is nullified. This search is subject to an early-stopping condition, whereby the clean accuracy must not drop below a predefined threshold (shared with gradient ascent); a sketch of this search loop is given at the end of this section.

Results Remarkably, Table 3 (bottom set) shows that the above pipeline remains effective with a 90% CA threshold, even without access to the original trigger. In particular, TBAR outperforms both the clean-data baselines and gradient ascent for three of the attacks. Note that, as reported in Liang et al. (2024), DECREE fails to detect the backdoor introduced by BadCLIP.

Robust unlearning beyond gradient ascent Contrary to prior literature on backdoor unlearning (Pawelczyk et al., 2024), our results in Table 3 show that simple gradient ascent on true triggered examples can achieve strong unlearning performance on CLIP, even against robust attacks like BadCLIP. We hypothesize that the same weight disentanglement that allows our method to isolate triggers is also what facilitates this gradient-based unlearning.³ However, this effectiveness is fragile. To understand the tradeoff between the two, we compared TBAR against gradient ascent under similar compute budgets. We plot CA and ASR reduction (1 − ASR) when using TBAR vs. gradient ascent over a progressive number of epochs. Figure 3 shows the results for true unlearning with known triggers, where we find that although one or two epochs of gradient ascent can match the performance of TBAR, exceeding this optimal point often leads to sharp drops in clean accuracy. This indicates that while gradient ascent can initially identify directions that suppress the backdoor, it is highly unstable, and maximizing the loss further may lead to arbitrary directions that do not reliably target the backdoor mechanism. This sensitivity to stopping criteria was also observed in previous work using gradient ascent (Li et al., 2021).

³ Indeed, notice that previous studies reported the emergence of weight disentanglement with model and data scale (Ortiz-Jimenez et al., 2024; Hazimeh et al., 2024).

Figure 3: True unlearning performance of TBAR and Gradient Ascent. Plots compare (CA ↑) versus (1 − ASR ↑) over a progressive number of epochs. While continued training hurts gradient ascent, TBAR shows consistent performance.

This instability is exacerbated under the more realistic, non-ideal conditions of using reverse-engineered DECREE patches. In this setting (presented in Figure 4), gradient ascent frequently overshoots: the backdoor is removed, but at the cost of substantial CA loss. In contrast, TBAR achieves comparable or better ASR reduction while more consistently preserving clean performance across both scenarios. We attribute this stability to the directional constraint imposed by task vectors, which prevents the aggressive and often arbitrary parameter shifts seen in unconstrained gradient ascent, making it more robust to both tuning and noise in the trigger signal.
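Returning to the coefficient search described at the start of this section, the following is a minimal sketch under stated assumptions: `proxy_asr` and `clean_acc` are hypothetical evaluation helpers, and `apply_tbar` is the negation step of Equation 5 (see the sketch in Section 4).

```python
def search_unlearning_coefficient(model_b, tau_t, alphas, proxy_asr, clean_acc,
                                  window=10, ca_threshold=0.90):
    """Sweep an increasing grid of unlearning coefficients. Once the proxy ASR
    hits zero, continue for a fixed window of steps (the true ASR may keep
    improving), and stop early if clean accuracy falls below a fraction of the
    backdoored model's CA."""
    base_ca = clean_acc(model_b)
    best, extra_steps = None, 0
    for alpha in alphas:
        candidate = apply_tbar(model_b, tau_t, alpha)  # Eq. 5
        if clean_acc(candidate) < ca_threshold * base_ca:
            break  # early stopping on clean accuracy
        best = candidate
        if proxy_asr(candidate) == 0.0:
            extra_steps += 1
            if extra_steps > window:
                break  # searched past the proxy zero point
    return best
```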
Figure 4: Unlearning with DECREE (Feng et al., 2023) using TBAR and Gradient Ascent. Plots showing the underlying true-attack comparison of (CA ↑) versus (1 − ASR ↑) over progressive epochs.

7 FURTHER RESULTS AND DISCUSSION

Impact of forget set size To assess the influence of the forget set size in true unlearning scenarios (i.e., the second set of rows in Table 3), we conduct fine-tuning experiments with varying forget set sizes and evaluate the performance of TBAR vectors after one epoch. Interestingly, we observe that increasing the size of the forget set does not yield a clear performance improvement, reinforcing the notion that the difficulty of unlearning is tied more closely to precisely identifying what needs to be unlearned than to the scale of the data.

Figure 5: Results of unlearning the BadNet attack with TBAR using varied sizes of the forget set.

Scaling CLIP models We provide complete results for the larger CLIP ViT-L/14 model for the setup described in Section 5.3 and Section 6 (Table 4). We observe significantly better trade-offs for unlearning overall. In particular, when using the optimized patches, we are able to match the baselines for ASR reduction with a 98% clean accuracy threshold. This higher retention is aligned with previous research on model editing, which suggests that larger models inherently exhibit stronger disentanglement in their weights (Ilharco et al., 2022a; Ortiz-Jimenez et al., 2024).

Table 4: TBAR performance on CLIP ViT-L/14 under four backdoor attacks (BadNet, Blended, WaNet, and BadCLIP). We report both CA and ASR. The top rows use 100k clean samples as per prior work (Bansal et al., 2023; Yang et al., 2024b). The middle rows use true targeted unlearning with 1.5k poisoned samples. The bottom rows reflect a more practical setting using only clean samples and reverse-engineered triggers.

                                 BadNet            Blended           WaNet             BadCLIP
                                 CA ↑    ASR ↓     CA ↑    ASR ↓     CA ↑    ASR ↓     CA ↑    ASR ↓
Zero-Shot                        75.55%  00.00%    75.55%  00.00%    75.55%  00.00%    75.55%  00.00%
Backdoored                       74.89%  99.93%    74.76%  99.94%    74.76%  99.80%    74.83%  99.97%
Clean-data fine-tuning:
  Contrastive-FT                 69.65%  58.04%    69.26%  14.28%    70.73%  37.74%    71.16%  93.31%
  RoCLIP                         72.14%  97.56%    71.17%  76.69%    73.89%  88.80%    73.60%  99.28%
  CleanCLIP                      68.99%  01.38%    69.29%  00.27%    70.63%  00.07%    70.56%  73.63%
True unlearning:
  GA                             74.08%  00.00%    73.42%  00.00%    73.17%  00.02%    73.20%  00.02%
  TBAR                           74.16%  00.14%    74.25%  00.19%    74.08%  00.19%    72.67%  00.14%
Reverse-engineered unlearning:
  GA+DECREE                      74.38%  49.32%    74.75%  99.93%    74.12%  00.00%    N/A     N/A
  TBAR+DECREE                    74.26%  15.28%    73.68%  01.20%    74.42%  00.00%    N/A     N/A

Model architectures and pre-training To further validate the robustness of our method across various settings, we additionally experimented on CLIP with convolutional architectures (ConvNeXts) and non-contrastively pre-trained transformers (DINO). TBAR yields consistent results (i.e., ASR < 5% and modest CA drops). Results are reported in Appendix B.4.

Detoxifying merged models Recent work shows that some backdoors fail to survive model merging, prompting the BadMerging attack (Zhang et al., 2024) to craft more persistent triggers.
We evaluate TBAR against BadMerging and find that our method completely removes the attack while preserving almost all of the clean accuracy of the merged models (see results in Appendix B.5).

8 RELATED WORK

Data poisoning attacks Data poisoning attacks refer to scenarios in which modifications to a small subset of the training dataset lead to unintended or malicious behavior in the trained model (Goldblum et al., 2022; Pawelczyk et al., 2024). Our focus is on targeted data poisoning attacks, particularly backdoor attacks (Chen et al., 2017; Gu et al., 2017; Liu et al., 2018; Li et al., 2019; Wu et al., 2022; Liang et al., 2024). Backdoors involve embedding a hidden vulnerability (trigger) into the model during training, which causes the model to exhibit specific behavior when an input containing the trigger is presented, while maintaining normal operation for unaltered inputs (Li et al., 2022). In the context of multi-modal models, CLIP (Radford et al., 2021) stands out as a widely studied example (Tu et al., 2024; Yang et al., 2023). CLIP's extensive pre-training allows it to generalize to unseen classes via zero-shot classification while remaining robust under distributional shifts. Particularly for backdoors, Carlini & Terzis (2021) found the model to be vulnerable to backdoor attacks using as little as 0.01% of its training data for poisoning. Multiple works (Goel et al., 2022a; Bansal et al., 2023; Yang et al., 2024b) proposed more 'robust' training schemes to safeguard against backdoor attacks on CLIP. Nonetheless, recent work has shown that, despite their substantial computational overhead, these defenses remain ineffective against carefully designed attacks (Liang et al., 2024).

Machine unlearning Machine unlearning seeks to eliminate an unwanted data influence and the corresponding model behaviors (Cao & Yang, 2015; Bourtoule et al., 2021). There exist two main lines of work: exact unlearning (Bourtoule et al., 2021) and approximate machine unlearning (Graves et al., 2021; Neel et al., 2021; Jia et al., 2021; Chien et al., 2024; Goel et al., 2022b; Kurmanji et al., 2023; Foster et al., 2024). Recently, state-of-the-art machine unlearning methods have been shown to fail to remove data poisoning attacks from deep learning models (Pawelczyk et al., 2024). In parallel, large models have been shown to memorize vast amounts of data during pre-training, including personal and sensitive information, making them susceptible to targeted extraction attacks (Carlini et al., 2021; Jang et al., 2022; Wen et al., 2024). This has further sparked interest in tailoring unlearning techniques to these models (Yao et al., 2023; Lu et al., 2022).

Weight Interpolation and Task Arithmetic Despite the non-linearity of neural networks, previous work has shown that interpolating between the weights of two models is feasible under certain conditions (Izmailov et al., 2018; Frankle et al., 2020; Wortsman et al., 2021; 2022a; Ainsworth et al., 2022; Ilharco et al., 2022b), and that one can increase the fine-tuning gain by moving the weights of a pre-trained model in the direction of its fine-tuned counterpart (Wortsman et al., 2022b). Task Arithmetic (Ilharco et al., 2022a) is a framework that formalizes the notion of distinct task vectors controlling different tasks. Ortiz-Jimenez et al. (2024) attributed this ability to weight disentanglement.
Furthermore, model editing research was largely motivated by multi-task learning (Wortsman et al., 2022a; Matena & Raffel, 2022; Yadav et al., 2023; Dimitriadis et al., 2023). Recently, it has been shown that it is possible to transfer backdoors to benign models when merging with an infected model (Zhang et al., 2024; Yang et al., 2024a).

9 CONCLUSION

In this paper, we investigated the problem of backdoor unlearning by examining how backdoor attacks are encoded in the weight space of CLIP models. Our analysis revealed that triggered knowledge is separable from clean knowledge and can be identified using existing vector arithmetic techniques. Building on this insight, we introduced a lightweight framework for effective backdoor removal that requires two orders of magnitude less data than existing clean-data-based defenses for CLIP. To address scenarios where the trigger is unknown, we further show that our method can be combined with trigger reverse-engineering techniques, enabling practical and cost-efficient backdoor removal, effectively sanitizing models while maintaining high clean accuracy. We hope our findings renew interest in weight space manipulations for backdoor mitigation and inspire further solutions.

ACKNOWLEDGMENTS

This work was partially funded by armasuisse under the RAEL (F0426) project. The authors thank Adam Hazimeh for insightful discussions and feedback throughout this project, as well as Ke Wang, Sevda Ögüt, and Ortal Senouf for their valuable comments.

REFERENCES

Samuel K Ainsworth, Jonathan Hayase, and Siddhartha Srinivasa. Git re-basin: Merging models modulo permutation symmetries. arXiv, 2022.

Hritik Bansal, Nishad Singhi, Yu Yang, Fan Yin, Aditya Grover, and Kai-Wei Chang. Cleanclip: Mitigating data poisoning attacks in multimodal contrastive learning. In International Conference on Computer Vision (ICCV), 2023.

Mauro Barni, Kassem Kallas, and Benedetta Tondi. A new backdoor attack in cnns by training set corruption without label poisoning. In IEEE International Conference on Image Processing (ICIP), 2019.

Lucas Bourtoule, Varun Chandrasekaran, Christopher A Choquette-Choo, Hengrui Jia, Adelin Travers, Baiwu Zhang, David Lie, and Nicolas Papernot. Machine unlearning. In IEEE Symposium on Security and Privacy (SP), 2021.

Yinzhi Cao and Junfeng Yang. Towards making systems forget with machine unlearning. In IEEE Symposium on Security and Privacy (SP), 2015.

Nicholas Carlini and Andreas Terzis. Poisoning and backdooring contrastive learning. arXiv, 2021.

Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, et al. Extracting training data from large language models. In USENIX Security Symposium (USENIX Security), 2021.

Xinyun Chen, Chang Liu, Bo Li, Kimberly Lu, and Dawn Song. Targeted backdoor attacks on deep learning systems using data poisoning. arXiv, 2017.

Eli Chien, Haoyu Wang, Ziang Chen, and Pan Li. Langevin unlearning: A new perspective of noisy gradient descent for machine unlearning. arXiv, 2024.

Nikolaos Dimitriadis, Pascal Frossard, and François Fleuret. Pareto manifold learning: Tackling multiple tasks via ensembles of single-task models. In International Conference on Machine Learning (ICML), 2023.

Lingyu Du, Yupei Liu, Jinyuan Jia, and Guohao Lan. Defending deep regression models against backdoor attacks. arXiv, 2024.
Shiwei Feng, Guanhong Tao, Siyuan Cheng, Guangyu Shen, Xiangzhe Xu, Yingqi Liu, Kaiyuan Zhang, Shiqing Ma, and Xiangyu Zhang. Detecting backdoors in pre-trained encoders. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023.

Jack Foster, Stefan Schoepf, and Alexandra Brintrup. Fast machine unlearning without retraining through selective synaptic dampening. In AAAI Conference on Artificial Intelligence (AAAI), 2024.

Jonathan Frankle, Gintare Karolina Dziugaite, Daniel Roy, and Michael Carbin. Linear mode connectivity and the lottery ticket hypothesis. In International Conference on Machine Learning (ICML), 2020.

Robert M French. Catastrophic forgetting in connectionist networks. Trends in Cognitive Sciences, 3(4):128–135, 1999.

Shashank Goel, Hritik Bansal, Sumit Bhatia, Ryan Rossi, Vishwa Vinay, and Aditya Grover. Cyclip: Cyclic contrastive language-image pretraining. In Advances in Neural Information Processing Systems (NeurIPS), 2022a.

Shashwat Goel, Ameya Prabhu, Amartya Sanyal, Ser-Nam Lim, Philip Torr, and Ponnurangam Kumaraguru. Towards adversarial evaluations for inexact machine unlearning. arXiv, 2022b.

Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, Dawn Song, Aleksander Madry, Bo Li, and Tom Goldstein. Dataset security for machine learning: Data poisoning, backdoor attacks, and defenses. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2022.

Laura Graves, Vineel Nagisetty, and Vijay Ganesh. Amnesiac machine learning. In AAAI Conference on Artificial Intelligence (AAAI), 2021.

Tianyu Gu, Brendan Dolan-Gavitt, and Siddharth Garg. Badnets: Identifying vulnerabilities in the machine learning model supply chain. arXiv, 2017.

Asif Hanif, Fahad Shamshad, Muhammad Awais, Muzammal Naseer, Fahad Shahbaz Khan, Karthik Nandakumar, Salman Khan, and Rao Muhammad Anwer. Baple: Backdoor attacks on medical foundational models using prompt learning. In International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2024.

Adam Hazimeh, Alessandro Favero, and Pascal Frossard. Task addition and weight disentanglement in closed-vocabulary models. In Workshop on Efficient Systems for Foundation Models II @ ICML2024, 2024.

Gabriel Ilharco, Marco Tulio Ribeiro, Mitchell Wortsman, Suchin Gururangan, Ludwig Schmidt, Hannaneh Hajishirzi, and Ali Farhadi. Editing models with task arithmetic. arXiv, 2022a.

Gabriel Ilharco, Mitchell Wortsman, Samir Yitzhak Gadre, Shuran Song, Hannaneh Hajishirzi, Simon Kornblith, Ali Farhadi, and Ludwig Schmidt. Patching open-vocabulary models by interpolating weights. In Advances in Neural Information Processing Systems (NeurIPS), 2022b.

Pavel Izmailov, Dmitrii Podoprikhin, Timur Garipov, Dmitry Vetrov, and Andrew Gordon Wilson. Averaging weights leads to wider optima and better generalization. arXiv, 2018.

Joel Jang, Dongkeun Yoon, Sohee Yang, Sungmin Cha, Moontae Lee, Lajanugen Logeswaran, and Minjoon Seo. Knowledge unlearning for mitigating privacy risks in language models. arXiv, 2022.

Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. In International Conference on Machine Learning (ICML), 2021.

Meghdad Kurmanji, Peter Triantafillou, Jamie Hayes, and Eleni Triantafillou. Towards unbounded machine unlearning. In Advances in Neural Information Processing Systems (NeurIPS), 2023.
Shaofeng Li, Minhui Xue, Benjamin Zi Hao Zhao, Haojin Zhu, and Xinpeng Zhang. Invisible backdoor attacks on deep neural networks via steganography and regularization. arXiv, 2019.

Yige Li, Xixiang Lyu, Nodens Koren, Lingjuan Lyu, Bo Li, and Xingjun Ma. Anti-backdoor learning: Training clean models on poisoned data. In Advances in Neural Information Processing Systems (NeurIPS), 2021.

Yiming Li, Yong Jiang, Zhifeng Li, and Shu-Tao Xia. Backdoor learning: A survey. IEEE Transactions on Neural Networks and Learning Systems, 35(1):5–22, 2022.

Siyuan Liang, Mingli Zhu, Aishan Liu, Baoyuan Wu, Xiaochun Cao, and Ee-Chien Chang. Badclip: Dual-embedding guided backdoor attack on multimodal contrastive learning. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024.

Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. In Advances in Neural Information Processing Systems (NeurIPS), 36:34892–34916, 2023.

Yingqi Liu, Shiqing Ma, Yousra Aafer, Wen-Chuan Lee, Juan Zhai, Weihang Wang, and Xiangyu Zhang. Trojaning attack on neural networks. In Annual Network And Distributed System Security Symposium (NDSS), 2018.

Ximing Lu, Sean Welleck, Jack Hessel, Liwei Jiang, Lianhui Qin, Peter West, Prithviraj Ammanabrolu, and Yejin Choi. Quark: Controllable text generation with reinforced unlearning. In Advances in Neural Information Processing Systems (NeurIPS), 2022.

Michael S Matena and Colin A Raffel. Merging models with fisher-weighted averaging. In Advances in Neural Information Processing Systems (NeurIPS), 2022.

Seth Neel, Aaron Roth, and Saeed Sharifi-Malvajerdi. Descent-to-delete: Gradient-based methods for machine unlearning. In Algorithmic Learning Theory, 2021.

Anh Nguyen and Anh Tran. Wanet – imperceptible warping-based backdoor attack. arXiv, 2021.

Guillermo Ortiz-Jimenez, Alessandro Favero, and Pascal Frossard. Task arithmetic in the tangent space: Improved editing of pre-trained models. In Advances in Neural Information Processing Systems (NeurIPS), 2024.

Martin Pawelczyk, Jimmy Z Di, Yiwei Lu, Gautam Kamath, Ayush Sekhari, and Seth Neel. Machine unlearning fails to remove data poisoning attacks. arXiv, 2024.

Xiangyu Qi, Tinghao Xie, Jiachen T Wang, Tong Wu, Saeed Mahloujifar, and Prateek Mittal. Towards a proactive ML approach for detecting backdoor poison samples. In USENIX Security Symposium (USENIX Security), 2023.

Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning (ICML), 2021.

Alexandre Rame, Matthieu Kirchmeyer, Thibaud Rahier, Alain Rakotomamonjy, Patrick Gallinari, and Matthieu Cord. Diverse weight averaging for out-of-distribution generalization. In Advances in Neural Information Processing Systems (NeurIPS), 2022.

Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In Association for Computational Linguistics (ACL), 2018.

Weijie Tu, Weijian Deng, and Tom Gedeon. A closer look at the robustness of contrastive language-image pre-training (clip). In Advances in Neural Information Processing Systems (NeurIPS), 2024.

Yuxin Wen, Leo Marchyok, Sanghyun Hong, Jonas Geiping, Tom Goldstein, and Nicholas Carlini. Privacy backdoors: Enhancing membership inference through poisoning pre-trained models. arXiv, 2024.
Mitchell Wortsman, Maxwell C Horton, Carlos Guestrin, Ali Farhadi, and Mohammad Rastegari. Learning neural network subspaces. In International Conference on Machine Learning (ICML), 2021.

Mitchell Wortsman, Gabriel Ilharco, Samir Ya Gadre, Rebecca Roelofs, Raphael Gontijo-Lopes, Ari S Morcos, Hongseok Namkoong, Ali Farhadi, Yair Carmon, Simon Kornblith, et al. Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time. In International Conference on Machine Learning (ICML), 2022a.

Mitchell Wortsman, Gabriel Ilharco, Jong Wook Kim, Mike Li, Simon Kornblith, Rebecca Roelofs, Raphael Gontijo Lopes, Hannaneh Hajishirzi, Ali Farhadi, Hongseok Namkoong, et al. Robust fine-tuning of zero-shot models. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022b.

Baoyuan Wu, Hongrui Chen, Mingda Zhang, Zihao Zhu, Shaokui Wei, Danni Yuan, and Chao Shen. Backdoorbench: A comprehensive benchmark of backdoor learning. In Advances in Neural Information Processing Systems (NeurIPS), 2022.

Prateek Yadav, Derek Tam, Leshem Choshen, Colin A Raffel, and Mohit Bansal. Ties-merging: Resolving interference when merging models. In Advances in Neural Information Processing Systems (NeurIPS), 2023.

Jinluan Yang, Anke Tang, Didi Zhu, Zhengyu Chen, Li Shen, and Fei Wu. Mitigating the backdoor effect for multi-task model merging via safety-aware subspace. arXiv, 2024a.

Wenhan Yang, Jingdong Gao, and Baharan Mirzasoleiman. Robust contrastive language-image pretraining against data poisoning and backdoor attacks. In Advances in Neural Information Processing Systems (NeurIPS), 2024b.

Ziqing Yang, Xinlei He, Zheng Li, Michael Backes, Mathias Humbert, Pascal Berrang, and Yang Zhang. Data poisoning attacks against multimodal encoders. In International Conference on Machine Learning (ICML), 2023.

Yuanshun Yao, Xiaojun Xu, and Yang Liu. Large language model unlearning. arXiv, 2023.

Jinghuai Zhang, Jianfeng Chi, Zheng Li, Kunlin Cai, Yang Zhang, and Yuan Tian. Badmerging: Backdoor attacks against model merging. In ACM SIGSAC Conference on Computer and Communications Security (CCS), 2024.

APPENDIX OUTLINE

This appendix provides supplementary material to support our main findings. It is organized as follows:

• Section A: Detailed Experimental Setup. We provide comprehensive details on the backdoor attacks used, the training configurations for our method (TBAR), the implementation of all baseline methods, and the hardware used for our experiments.

• Section B: More Analytical Experiments. We present additional analyses, including experiments on unlearning with mixed data, a sensitivity analysis of our scaling coefficient, further visualizations of weight disentanglement, and a demonstration of the applicability of our method to other architectures (ConvNeXt) and pre-training paradigms (DINO). Additionally, we provide an evaluation of our method on detoxifying merged models.

• Section C: More Large Scale Experiments. We report on the limitations of clean-data finetuning and provide results for larger models (ViT-L/14). We also discuss unlearning attacks with weak trigger signals.

A DETAILED EXPERIMENTAL SETUP

A.1 BACKDOOR ATTACKS

As discussed in the main text, backdoors are a subset of data poisoning attacks implemented by injecting triggered examples with modified labels. We assign the target label based on the training dataset.
Across different experimental settings, we consider six types of backdoor attacks:

• BadNets (Gu et al., 2017) is a patch-based attack. Following the setup in Bansal et al. (2023), we insert a 16×16 patch of random noise drawn from a normal distribution $\mathcal{N}(0, 1)$ at a random position in the image.

• Blended (Chen et al., 2017) involves blending a noise perturbation over the entire image. We follow the attack setup in Bansal et al. (2023), where we superimpose uniform noise on the natural image with a ratio of 8:2: $x' = 0.8\,x + 0.2\,N$, where $N$ is a noise tensor with uniform random values in the range $[0, 1)$.

• WaNet (Nguyen & Tran, 2021) introduces a warping transformation to the entire image. We follow the setup used by Bansal et al. (2023) and Qi et al. (2023), using a control grid size $k = 224$ and warping strength $s = 1$, and train models without the noise mode.

• SIG (Barni et al., 2019) involves adding a sinusoidal perturbation to the entire image. We follow the attack setup in Bansal et al. (2023), where we superimpose sinusoidal noise along the horizontal axis of the image: $x' = \mathrm{clip}(x + N, 0, 1)$ with $N_{c,i,j} = \frac{60}{255}\,\sin\!\left(2\pi\,\frac{6j}{224}\right)$, where $N$ is a perturbation shared across all channels and rows.

• BadCLIP (Liang et al., 2024) is an optimized patch-based attack. Following the procedure in Liang et al. (2024), for the selected target label we optimize the patch using 9.5k clean images and 1800 true target images from the CC3M dataset (Sharma et al., 2018).

• BadMerging (Zhang et al., 2024): we use the official implementation to optimize a patch on the CIFAR100 task, producing a task vector that is then merged with six benign task vectors from GTSRB, EuroSAT, Cars, SUN397, and Oxford-PETS.

Figure 6: Visualization of different attack realizations on input images (from left to right): BadNet, Blended, WaNet, SIG, BadCLIP (ViT-B/32), and BadCLIP (ViT-L/14). The altered images are associated with the target label 'banana'.
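For concreteness, here is a minimal sketch of two of the simpler trigger transforms described above, following the stated parameters. It is an illustration, not the official attack code; clamping the BadNet patch into [0, 1] is our assumption to keep a valid image.

```python
import torch

def badnet(x, patch_size=16):
    """BadNet-style trigger: paste a random-noise N(0,1) patch at a random
    location. x is a (C, H, W) image tensor in [0, 1]."""
    c, h, w = x.shape
    top = torch.randint(0, h - patch_size + 1, (1,)).item()
    left = torch.randint(0, w - patch_size + 1, (1,)).item()
    x = x.clone()
    x[:, top:top + patch_size, left:left + patch_size] = \
        torch.randn(c, patch_size, patch_size)
    return x.clamp(0, 1)  # assumed: clamp to keep a valid image

def blended(x):
    """Blended trigger: superimpose uniform noise with an 8:2 ratio,
    x' = 0.8 * x + 0.2 * N with N ~ U[0, 1)."""
    return (0.8 * x + 0.2 * torch.rand_like(x)).clamp(0, 1)
```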
A.2 TBAR TRAINING DETAILS

A.2.1 CLIP WITH FROZEN TEXT-ENCODER

Models and datasets We use the CLIP ViT-B/32 model and evaluate on three benchmark image datasets: SUN397, CIFAR100, and ImageNet-1K. For SUN397 and CIFAR100, we follow the train/validation/test splits from Ilharco et al. (2022a) and sample a forget set from the training split prior to training. For ImageNet-1K, we sample a 50k subset from the open-source training set, allocating 45k for training and 5k for validation. An additional 2k examples are separately sampled as the forget set. We use the official validation set as the test set. Complete per-dataset configurations are provided in Table 5.

Table 5: Per-dataset configuration for the experiments in Section 5.

Dataset       Target    Epochs    Train set    Poison (%)    Val set    Forget set    Test set
SUN397        river     14        15865        3             1985       2000          19850
CIFAR100      orange    6         43000        3             5000       2000          10000
ImageNet-1K   orange    10        45000        3             5000       2000          50000

Evaluation We evaluate performance by reporting the accuracy on clean versions of the test set (CA), along with the attack success rate (ASR), defined as the percentage of predictions that yield the target label (as defined in Table 5) when the backdoor visual patch is present.

Training configurations We adopt the same training configurations as Ilharco et al. (2022a) per dataset: the AdamW optimizer with a learning rate of 1e-5 and cosine scheduling, a batch size of 128, and a warmup of 500 steps. The same configurations are used for TBAR training.

A.2.2 CLIP WITH IMAGE-CAPTION DATA

Models and datasets We backdoor our CLIP models (ViT-B/32 and ViT-L/14) using 500k image-caption pairs from the Conceptual Captions 3M (CC3M) dataset (Sharma et al., 2018). We select 1500 random samples and poison them according to each attack setting. For all attacks, we set the target label to captions containing the word 'banana'. We use the validation set of ImageNet-1K for the evaluations. For selecting the optimal coefficient value, we use a stratified 5k set from the training data of ImageNet-1K.

Evaluation We evaluate performance by reporting the accuracy on clean versions of the test set (CA), along with the attack success rate (ASR), defined as the percentage of predictions that yield the target label 'banana' when the backdoor visual patch is present.

Training configurations For backdooring, we use a batch size of 128, the AdamW optimizer with a learning rate of 1e-6, cosine scheduling, and a warmup phase of 50 steps. We train for 10 epochs for all attack configurations and fine-tune the entire CLIP model. We adopt the same hyperparameters for training TBAR task vectors.

A.3 OTHER METHODS

A.3.1 CLEANCLIP

CleanCLIP (Bansal et al., 2023) optimizes a combination of the standard CLIP loss and a modality-specific self-supervised loss designed for image-caption pairs $\{I_i, T_i\}$. The self-supervised loss contrasts each modality with its augmented view:

$$\mathcal{L}_{SS} = -\frac{1}{2N}\left(\sum_{i=1}^{N}\log\left[\frac{\exp(\langle I_i, \tilde{I}_i\rangle/\tau)}{\sum_{j=1}^{N}\exp(\langle I_i, \tilde{I}_j\rangle/\tau)}\right] + \sum_{i=1}^{N}\log\left[\frac{\exp(\langle T_i, \tilde{T}_i\rangle/\tau)}{\sum_{j=1}^{N}\exp(\langle T_i, \tilde{T}_j\rangle/\tau)}\right]\right)$$

The total CleanCLIP loss is then defined as:

$$\mathcal{L}_{CleanCLIP} = \lambda_1 \mathcal{L}_{CLIP} + \lambda_2 \mathcal{L}_{SS}$$

Here, $\tilde{I}_i$ and $\tilde{T}_i$ denote augmented views of the original image and text, respectively. We follow the setup of Bansal et al. (2023), using a 100k disjoint subset of clean CC3M images and the recommended hyperparameters: 10 epochs, $\lambda_1 = \lambda_2 = 1$, a learning rate of 1e-5, a batch size of 64, and a warmup of 50 steps.

A.3.2 ROCLIP

RoCLIP (Yang et al., 2024b) is a defense mechanism similar to CleanCLIP. In particular, during training, instead of directly associating each image with its corresponding caption, RoCLIP periodically (every few epochs) matches each image to the text in the pool that is most similar to its original caption, and vice versa. We use the open-source code of Yang et al. (2024b) and their default hyperparameters.

A.3.3 STANDARD CLIP FINE-TUNING

We use the same hyperparameters as CleanCLIP without the in-modal loss.

A.3.4 GRADIENT ASCENT

We implement Gradient Ascent following Graves et al. (2021) and Jang et al. (2022), by reversing the gradient updates on the forget set $U_{set}$:

$$\theta^{(t+1)} = \theta^{(t)} + \eta\, \nabla_\theta \mathcal{L}(U_{set}, \theta^{(t)})$$

where $\eta$ is the learning rate. In all our experiments, we use the same TBAR hyperparameters for the Gradient Ascent computation.
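The update above can be implemented in a few lines; a minimal sketch follows. It assumes a generic model and loss function (e.g., the training loss on triggered examples) and maximizes the loss by negating it before the backward pass, which is equivalent to the ascent update.

```python
import torch

def gradient_ascent_unlearn(model, forget_loader, loss_fn, lr=1e-6, epochs=1):
    """Gradient ascent on the forget set: theta <- theta + eta * grad L,
    implemented as gradient descent on the negated loss."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for x, y in forget_loader:
            opt.zero_grad()
            loss = -loss_fn(model(x), y)  # negate to ascend the original loss
            loss.backward()
            opt.step()
    return model
```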
A.3.5 DECREE

DECREE performs self-supervised trigger inversion to detect attacks. Given a clean dataset and a suspected encoder, DECREE optimizes a minimal trigger that induces similar embeddings for inputs once stamped with this trigger. It then uses the final optimized trigger's size (ℓ1-norm) to gauge vulnerabilities; clean encoders typically need large triggers to elicit this behavior (e.g., covering more than 10% of the image). DECREE is computationally lightweight and adds minimal overhead. This is because it does not require fine-tuning the model encoder; instead, it only optimizes a small trigger (pattern + mask) using gradients w.r.t. the input of the model. For our experiments, we run the method on our suspected models and compare the recovered trigger's ℓ1-norm (mask size) against the one recovered on a clean encoder of the same architecture. We use the open-source re-implementation from the BadCLIP code (Liang et al., 2024), with all default hyperparameters except for two modifications: we reduce the batch size to 128 for experiments with the ViT-L/14 model, and for the learning rate adapter on the CC3M dataset, we use a threshold of [30, 50] steps to adjust the learning rate instead of [200, 500].

Figure 7: Visualization of different DECREE patches (from left to right): BadNet, BadNet-L, Blended, Blended-L, SIG, WaNet, and WaNet-L.

Below, we report both the raw ℓ1-norm and DECREE's normalized metric, Pℓ1-Norm (the ℓ1-norm divided by the input-space maximum, 3 × 224 × 224 for RGB images of size 224). As shown below, the trigger sizes for backdoored models are an order of magnitude smaller than for the clean (Zero-Shot) model, providing a clear detection signal.

             ViT-B/32                       ViT-L/14
             ℓ1-Norm      Pℓ1-Norm (%)      ℓ1-Norm      Pℓ1-Norm (%)
Zero-Shot    22185.6276   14.74%            45272.1229   30.08%
BadNet       3186.8709    2.12%             2921.5470    1.94%
Blended      6691.9346    4.45%             5346.6726    3.55%
WaNet        13895.9155   9.23%             6601.7446    4.39%

A.4 HARDWARE

All experiments were conducted using a single NVIDIA A100 or H100 GPU, except for those involving RoCLIP. Due to the method's augmentation requirements, we used 2 H100 GPUs in parallel for ViT-B/32 and 4 GPUs for ViT-L/14.

B MORE ANALYTICAL EXPERIMENTS

B.1 UNLEARNING WITH A MIX OF CLEAN AND TRIGGERED EXAMPLES

We also experimented with forget sets containing a mixture of clean and triggered data. Figures 8, 9, and 10 show the CA and ASR obtained using different ratios of clean:triggered examples in the forget set. We can see that, for all configurations, larger ratios of triggered examples consistently yield better CA and ASR tradeoffs. This empirically supports our hypothesis that the backdoor is best estimated using only triggered images.

Figure 8: (SUN397) Plots showing (CA ↑) and (ASR ↓) using task vectors extracted from a mixture of clean and triggered data under varying ratios, along increasing scaling values.

B.2 SCALING COEFFICIENT SENSITIVITY

To check whether the performance of our method is robust to the choice of scaling coefficient, we present in Table 7 sensitivities to this choice within a 10% variation of the optimal value, averaged over 4 runs of the experiment previously presented in Table 1 of the main text on the WaNet attack. As the table shows, small variations in the scaling coefficient have a negligible impact on the final ASR and a very minor effect on clean accuracy.

Table 7: Scaling coefficient sensitivities within a 10% variation of the optimal value for a single attack run.

Dataset       -10% CA         -10% ASR       +10% CA         +10% ASR
SUN397        73.58 ± 0.27    0.01 ± 0.01    73.08 ± 0.57    0.00 ± 0.00
CIFAR100      87.76 ± 0.52    0.11 ± 0.19    87.39 ± 0.65    0.03 ± 0.01
ImageNet-1K   66.09 ± 0.94    0.01 ± 0.01    65.42 ± 1.48    0.00 ± 0.00
Figure 9: (CIFAR100) Plots showing (CA ↑) and (ASR ↓) using task vectors extracted from a mixture of clean and triggered data under varying ratios, along increasing scaling values.

Figure 10: (ImageNet-1K) Plots showing (CA ↑) and (ASR ↓) using task vectors extracted from a mixture of clean and triggered data under varying ratios, along increasing scaling values.

B.3 MORE ON WEIGHT DISENTANGLEMENT

Figures 11 and 12 report additional weight disentanglement visualizations for the attacks considered in Section 5.

Figure 11: Weight disentanglement between clean and triggered tasks. We estimate the triggered direction $\hat{\tau}_t$ from the backdoored model and define the clean direction $\hat{\tau}_c$ as the residual after negation. The plots show the disentanglement error $\xi(\alpha_c, \alpha_t)$ between these task vectors, following Ortiz-Jimenez et al. (2024). Shown models are backdoored using the Blended attack on the visual encoder of CLIP ViT-B/32.

Figure 12: Weight disentanglement between clean and triggered tasks. We estimate the triggered direction $\hat{\tau}_t$ from the backdoored model and define the clean direction $\hat{\tau}_c$ as the residual after negation. The plots show the disentanglement error $\xi(\alpha_c, \alpha_t)$ between these task vectors, following Ortiz-Jimenez et al. (2024). Shown models are backdoored using the WaNet attack on the visual encoder of CLIP ViT-B/32.

B.4 ADDITIONAL EXPERIMENTS ON OTHER ARCHITECTURES AND PRE-TRAINING

To further assess the robustness of TBAR across architectures and pre-training settings, we applied our method to:

• A convolutional model (ConvNeXt-Base pretrained on LAION-400M via contrastive learning). See Table 9.

• A transformer model (ViT-B/16) with DINO pre-training on ImageNet-1K, backdoored using CIFAR100. See Table 10.

B.5 DETOXIFYING MERGED MODELS

Recent work by Zhang et al. (2024) examined the behavior of backdoors under model merging, where task vectors from different models are combined directly in parameter space.

Table 8: Unlearning BadMerging (Zhang et al., 2024) patches with TBAR. Values in parentheses denote CA retention and (1 − ASR), respectively.
        CA ↑     ASR ↓    CA (TBAR) ↑       ASR (TBAR) ↓
TA      74.02    99.66    73.50 (99.30%)    00.14 (99.86%)
TIES    74.96    99.92    74.54 (99.44%)    00.05 (99.95%)

They observed that some backdoors fail to persist through merging, leading them to propose BadMerging, a two-stage attack that constructs optimized trigger patches designed to remain functional after merging. Given that BadMerging minimizes its signature in weight space to survive merging, it may similarly resist removal by parameter-space unlearning methods. Table 8 shows the results of applying TBAR to models infected with BadMerging and merged using two approaches: Task Arithmetic (TA) (Ilharco et al., 2022a) and TIES (Yadav et al., 2023); the latter addresses parameter interference through trimming, sign alignment, and selective averaging. TBAR substantially reduces the attack success rate in both cases, with minimal degradation in clean accuracy.

Table 9: Controlled experiments showing the effectiveness of TBAR on single-task CLIP ConvNeXt-Base classifiers under three backdoor attacks. Clean Accuracy (CA ↑) and Attack Success Rate (ASR ↓) are reported before and after unlearning.

Attack    Dataset     CA       ASR      CA (TBAR)    ASR (TBAR)
BadNet    CIFAR100    89.15    99.99    82.94        02.95
BadNet    ImageNet    72.83    99.94    67.50        02.56
BadNet    SUN397      76.99    99.99    67.48        05.11
Blended   CIFAR100    89.07    99.92    87.09        00.02
Blended   ImageNet    72.74    99.85    71.06        00.00
Blended   SUN397      76.89    99.93    73.21        00.00
WaNet     CIFAR100    89.12    99.95    86.55        00.04
WaNet     ImageNet    72.78    99.99    70.67        00.01
WaNet     SUN397      77.06    99.96    74.97        00.00

Table 10: Controlled experiments showing the effectiveness of TBAR on a transformer model (ViT-B/16) with DINO pre-training on ImageNet-1K, under three backdoor attacks using the CIFAR100 dataset. Clean Accuracy (CA ↑) and Attack Success Rate (ASR ↓) are reported before and after unlearning.

Attack     CA       ASR      CA (TBAR)    ASR (TBAR)
BadNet     78.98    99.63    73.98        00.11
Blended    78.74    99.34    73.30        00.00
WaNet      78.38    99.08    73.43        00.04

C MORE LARGE SCALE EXPERIMENTS

C.1 LIMITATIONS OF CLEAN DATA FINETUNING

As noted in the main text, large-scale finetuning can cause models to forget broader knowledge. Table 11 shows performance on SUN397 and CIFAR100 to assess the impact of backdooring and of the clean-data baselines from Table 3. Clean-data finetuning significantly degrades accuracy on these tasks, while TBAR has only a minor effect.

Table 11: Out-of-distribution clean accuracy on SUN397 and CIFAR100 for the CLIP ViT-B/32 model backdoored with image-caption data.

Attack    Dataset     Pre-Trained    Backdoored    CleanCLIP    RoCLIP    Contrastive-FT    TBAR
BadNet    SUN397      63.18%         63.23%        56.50%       58.47%    56.47%            61.47%
BadNet    CIFAR100    65.58%         63.84%        48.38%       40.77%    52.39%            63.89%
Blended   SUN397      63.18%         63.19%        55.65%       56.43%    55.60%            62.41%
Blended   CIFAR100    65.58%         64.65%        52.31%       37.91%    52.03%            64.94%
WaNet     SUN397      63.18%         62.84%        56.37%       55.24%    55.66%            62.25%
WaNet     CIFAR100    65.58%         62.68%        53.43%       36.32%    53.94%            61.84%

C.2 ENHANCING UNLEARNING ROBUSTNESS WITH WEAK TRIGGER CUES

Table 12: Results on CLIP ViT-B/32 with the SIG attack, showing (CA ↑) and (ASR ↓) performance evaluated on the ImageNet-1K validation set.

                  CA        ASR
Zero-Shot         63.34%    00.00%
Backdoored        61.36%    99.01%
Contrastive-FT    51.46%    10.26%
RoCLIP            52.61%    04.34%
CleanCLIP         51.12%    05.51%
GA                58.25%    00.10%
TBAR              59.02%    00.42%
GA+DECREE         56.52%    03.01%
TBAR+DECREE       55.41%    05.43%

We additionally provide results on unlearning the sinusoidal (SIG) attack (Barni et al., 2019) on ViT-B/32 (Table 12). In this setting, we observed that probing the backdoored model with a reverse-engineered SIG patch consistently resulted in the label "television".
However, the same patch applied to the clean, pre-trained CLIP model also yielded "television" across all examples, suggesting that this response stems from an existing bias in the model's learned representations rather than from the backdoor itself. To more accurately identify the true backdoor target, we compared the logit distributions of the clean and backdoored models on triggered examples. The class with the largest shift in density was indeed the "banana" class. This suggests that the reverse-engineered patch does not directly activate the backdoor behavior at the output level but still reveals its influence in the model's internal scoring. This observation leads to two important insights. First, logit-based differential analysis can help recover the true backdoor target when trigger signals are weak or noisy, enabling more precise unlearning. Second, it underscores that backdoors may not always introduce novel behaviors, but may instead amplify existing model biases. This logit-difference test was additionally evaluated and confirmed for all the experiments reported in the main text.
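A minimal sketch of this logit-based differential analysis is shown below. It uses the mean per-class logit shift as a simple proxy for the density shift described above; the models are assumed to return classification logits over a shared label set.

```python
import torch

@torch.no_grad()
def logit_shift_target(clean_model, backdoored_model, triggered_loader, num_classes):
    """Average the per-class logit difference between the backdoored and clean
    models on triggered inputs, and return the class with the largest positive
    shift as the inferred backdoor target."""
    shift = torch.zeros(num_classes)
    n = 0
    for x, _ in triggered_loader:
        shift += (backdoored_model(x) - clean_model(x)).sum(dim=0)
        n += x.shape[0]
    return int((shift / n).argmax())
```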
Preprint

BACKDOOR UNLEARNING BY LINEAR TASK DECOMPOSITION

Amel Abdelraheem∗† 1   Alessandro Favero∗1   Gérôme Bovet2   Pascal Frossard1
1EPFL, Lausanne, Switzerland   2Cyber-Defence Campus, armasuisse, Switzerland
∗Equal contribution. †Corresponding author:

ABSTRACT

Foundation models have revolutionized computer vision by enabling broad generalization across diverse tasks. Yet, they remain highly susceptible to adversarial perturbations and targeted backdoor attacks. Mitigating such vulnerabilities remains an open challenge, especially given that the large-scale nature of these models prohibits retraining to ensure safety. Existing backdoor removal approaches rely on costly fine-tuning to override the harmful behavior, and can often degrade performance on other unrelated tasks. This raises the question of whether backdoors can be removed without compromising the general capabilities of the models. In this work, we address this question and study how backdoors are encoded in the model weight space, finding that they are disentangled from other benign tasks. Specifically, this separation enables the isolation and erasure of the backdoor's influence on the model with minimal impact on clean performance. Building on this insight, we introduce a simple unlearning method that leverages such disentanglement. Through extensive experiments with CLIP-based models and common adversarial triggers, we show that, given knowledge of the attack, our method achieves approximately perfect unlearning while retaining, on average, 96% of clean accuracy. Additionally, we demonstrate that even when the attack and its presence are unknown, our method successfully unlearns backdoors via proper estimation using reverse-engineered triggers. Overall, our method consistently yields better unlearning and clean-accuracy tradeoffs than current state-of-the-art defenses.

1 INTRODUCTION

Foundation models have become a cornerstone of modern deep learning, offering broad generalization across a wide range of tasks through large-scale pre-training (Radford et al., 2021; Jia et al., 2021). Among them, vision-language models like CLIP (Radford et al., 2021) play a fundamental role. They not only demonstrate remarkable robustness to distribution shifts and zero-shot performance on out-of-distribution benchmarks (Wortsman et al., 2022b), but their vision encoders also serve as a key component in many multimodal large language models, such as, e.g., LLaVA (Liu et al., 2023). However, the very success and widespread integration of these models make them a prime target for security threats, most notably backdoor attacks (Carlini & Terzis, 2021; Bansal et al., 2023) - a class of threats that compromise model integrity even after training is complete. In a backdoor attack (Gu et al., 2017), an adversary poisons a small portion of the training data by embedding a fixed trigger pattern into inputs and mislabeling them to a target class. The resulting model appears to perform well on clean inputs but systematically misclassifies any input containing the trigger - effectively granting the adversary precise control over model predictions. Such vulnerabilities pose a serious risk in safety-critical applications, including autonomous driving and medical diagnostics (Du et al., 2024; Hanif et al., 2024).
Current defenses for CLIP largely fall into two categories: (i) retraining the model from scratch using modified loss functions designed to resist backdoors, or (ii) fine-tuning on clean data to override the malicious behavior (Bansal et al., 2023; Yang et al., 2024b; Goel et al., 2022a). However, full retraining is prohibitively expensive at scale, while fine-tuning - though cheaper - frequently induces catastrophic forgetting (French, 1999), whereby the pre-trained knowledge is erased. Furthermore, recent studies show that fine-tuning strategies struggle against more sophisticated attacks (Liang et al., 2024). An alternative line of work, machine unlearning (Cao & Yang, 2015), seeks to selectively remove (or forget) specific learned behaviors post-hoc, avoiding full retraining. Currently, the application of unlearning methods to targeted backdoor removal remains limited. Prominent unlearning algorithms such as gradient ascent and its variants have been shown to fall short when applied to backdoor removal in small-scale settings (Pawelczyk et al., 2024). Yet, their effectiveness in large-scale foundation models remains an open question.

In this paper, we introduce an efficient, post-hoc method for unlearning backdoors from vision-language foundation models while preserving their clean capabilities. Our approach builds on recent advances in model editing in weight space (Frankle et al., 2020; Izmailov et al., 2018; Wortsman et al., 2021; 2022a; Rame et al., 2022; Ainsworth et al., 2022; Ilharco et al., 2022b;a). In particular, Ilharco et al. (2022a) introduced the concept of a task vector, which is the element-wise difference between the weights of a fine-tuned model and its pre-trained initialization. Task vectors provide a means to encode learned tasks as directions in weight space. They can be added to a model to inject functionality, subtracted to unlearn specific tasks, or combined to compose multi-task models. These manipulations are enabled by the disentanglement of tasks in the weight space of pre-trained models, as recently formalized by Ortiz-Jimenez et al. (2024).

Motivated by these insights, we investigate how backdoors are encoded in the weight space of CLIP-based models. We find that the weights can be linearly decomposed into clean and triggered components, effectively disentangling the malicious behavior from the model's benign capabilities. This disentanglement allows us to isolate the backdoor's influence by exploiting task arithmetic. In practice, this is achieved by fine-tuning the model on a small set of triggered examples to compute a "trigger vector". This vector isolates the malicious behavior and can thus be subtracted - via task negation - to surgically remove the backdoor while preserving clean model performance, as illustrated in Figure 1. We hence reframe the problem of backdoor unlearning as a simple problem of vector arithmetic.

Our main contributions are:

• We leverage the weight disentanglement formalism to demonstrate that backdoors in CLIP-based transformer models are disentangled from clean knowledge in weight space, enabling targeted removal via linear operations without encountering catastrophic forgetting of non-adversarial knowledge.

• We introduce TBAR (Trigger removal by Backdoor ARithmetic), a lightweight approach for backdoor unlearning via weight-space task negation.
When the trigger is known, TBAR unlearns 99% of the backdoor while retaining, on average, 96% of the obtained clean accuracy across (i) image classification backdoor benchmarks and (ii) large-scale image-captioning tasks. Notably, in the latter case, it outperforms state-of-the-art clean-data fine-tuning defenses while using less than 2% of their data requirements.

• We extend TBAR to operate in large-scale settings in an attack-agnostic scenario by pairing it with reverse-engineered proxy triggers. Our method successfully sanitizes infected models, outperforming state-of-the-art defenses while preserving over 90% clean accuracy.

2 PROBLEM SETUP

This work focuses on security vulnerabilities associated with backdoor attacks. Specifically, we consider the following threat model and defender assumptions. This setup is an extension of several previous settings in the literature (Carlini & Terzis, 2021; Bansal et al., 2023; Feng et al., 2023; Pawelczyk et al., 2024).

Threat model The adversary has full white-box access to a pre-trained model and to the fine-tuning data used to backdoor the model. The attack is conducted by injecting a small poisoned subset into a larger training dataset. The resulting backdoored model is released publicly and intended for downstream use by unaware users. Unless otherwise specified, we consider the attack successful if the triggered examples are predicted as the targeted label.

Figure 1: Backdoored models embed malicious behavior along with clean task performance. Instead of erasing all learned information, we propose a targeted approach: (1) given a backdoored model, (2) the backdoor encodes two distinct directions; (3) fine-tuning the model on similarly constructed triggered data isolates the parameter shift associated with the triggered information; (4) negating this vector from the original parameters effectively removes the trigger while preserving clean task performance.

Defender assumptions The defender's goal is to remove the backdoor (i.e., reduce the attack success rate to zero) while preserving the model's performance on clean data. The defender has full access to the model weights. We consider two distinct practical scenarios:

• Trigger-known: The defender is given a small forget set containing the true trigger, reflecting a common assumption in the context of backdoor defenses within unlearning studies, where an attack has been identified and its characteristics are known.

• Trigger-unknown: The defender does not know the true trigger but has access to a small set of clean data.

3 BACKGROUND

This section introduces the necessary tools for understanding model editing using weight interpolation. In particular, we recall the operation of task arithmetic and the property of weight disentanglement.

Notation Let a neural network be a parameterized function $f : \mathcal{X} \times \Theta \to \mathcal{Y}$ with inputs $x \in \mathcal{X}$ and weights $\theta \in \Theta$. We identify a task $k \in [K]$ with a triplet $(\mathcal{D}_k, \mu_k, f^\star_k)$, consisting of a domain $\mathcal{D}_k \subseteq \mathcal{X}$, an input distribution $\mu_k$ (with $\mathrm{supp}(\mu_k) = \mathcal{D}_k$), and a target function $f^\star_k : \mathcal{D}_k \to \mathcal{Y}$.

Model editing with task arithmetic (Ilharco et al., 2022a) Fine-tuning a pre-trained model $\theta_{pre}$ on task $k$ yields new weights $\theta^\star_k$. The change in weights, $\tau_k = \theta^\star_k - \theta_{pre}$, defines the task vector. Task arithmetic modifies the model by applying scaled task vectors: $\theta_{new} = \theta_{pre} + \alpha\, \tau_k$ for a single task, or $\theta_{new} = \theta_{pre} + \sum_{k=1}^{K} \alpha_k \tau_k$ for multiple tasks.
Weight disentanglement. Ortiz-Jimenez et al. (2024) introduced weight disentanglement as the property whereby the functional changes induced by a set of task vectors are localized to their respective task domains. Specifically, when multiple task vectors are linearly combined in weight space, the resulting model behaves as if it selectively applies each individual task's function only to inputs within that task's domain, reverting to the pre-trained model's behavior otherwise. The ability to perform task arithmetic with a set of task vectors T is a direct consequence of this weight disentanglement, where each task vector τk encodes a distinct functional component specific to its domain Dk. Formally, for a set of task vectors {τk}k∈[K], the edited model satisfies weight disentanglement if (cf. Ortiz-Jimenez et al. (2024) for the formal definition):

f\Big(x;\ \theta_{pre} + \sum_{k=1}^{K} \alpha_k \tau_k\Big) = \sum_{k=1}^{K} f(x;\ \theta_{pre} + \alpha_k \tau_k)\,\mathbb{1}(x \in D_k) + f(x;\ \theta_{pre})\,\mathbb{1}\Big(x \notin \bigcup_{k=1}^{K} D_k\Big). \quad (1)

To measure the presence of weight disentanglement, Ortiz-Jimenez et al. (2024) introduced the weight disentanglement error, which measures the prediction disagreement between models obtained by applying the individual task vectors and their combination, evaluated on the respective task supports. For two tasks, this reads:

\xi(\alpha_1, \alpha_2) = \sum_{i \in \{1,2\}} \mathbb{E}_{x \sim \mu_i}\big[\mathrm{dist}\big(f(x;\ \theta_{pre} + \alpha_i \tau_i),\ f(x;\ \theta_{pre} + \alpha_1 \tau_1 + \alpha_2 \tau_2)\big)\big], \quad (2)

where dist can be any distance metric between model outputs; for classification tasks, for instance, dist(y1, y2) = 𝟙(y1 ≠ y2).

In the next section, we study backdoor attacks through the lens of task arithmetic and weight disentanglement. We treat the benign task and the malicious backdoor behavior as two separate (and, ideally, separable) tasks operating on distinct data domains, i.e., clean and triggered inputs.

4 TBAR: TRIGGER REMOVAL BY BACKDOOR ARITHMETIC

Disentanglement of clean and triggered tasks. Consider a model with pre-training weights θpre that has been backdoored, resulting in weights θb. We investigate whether the joint training implicitly defines two tasks in parameter space, enabling the model's behavior to decompose into clean and triggered components. Formally, let τc and τt be the task vectors for the clean and triggered tasks, with domains Dc (clean images) and Dt (triggered images). Following the definition in Equation 1, the backdoored model satisfies weight disentanglement with respect to these vectors if, for all x ∈ Dc ∪ Dt,

f(x;\ \theta_{pre} + \alpha_c \tau_c + \alpha_t \tau_t) = f(x;\ \theta_{pre} + \alpha_c \tau_c)\,\mathbb{1}(x \in D_c) + f(x;\ \theta_{pre} + \alpha_t \tau_t)\,\mathbb{1}(x \in D_t). \quad (3)

In this work, we formulate the following hypothesis:

Hypothesis. The weights of vision foundation models satisfy weight disentanglement for common backdoor attacks, i.e., their output function f satisfies Equation 3.

The crucial implication of this property is the existence of a specific direction in weight space, τt, that exclusively governs the backdoor's malicious behavior. If this holds, removing the backdoor without causing catastrophic forgetting is possible: one simply needs to estimate τt and subtract it from the model's weights. As we will demonstrate in the next section, this hypothesis holds in practice and allows us to effectively unlearn the backdoor without compromising the model's clean knowledge.
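Once candidate clean and trigger vectors are available, Equation 2 gives a direct numerical test of this hypothesis. Below is a minimal sketch of such a test for classification, a Monte Carlo estimate of ξ with the 0/1 distance; `model_fn`, the loaders, and the helper `edit` are our own stand-ins, not part of the original implementation:

```python
import torch

def edit(theta, taus, alphas):
    """theta + sum_k alpha_k * tau_k over state dicts of torch tensors."""
    out = {k: v.clone() for k, v in theta.items()}
    for tau, a in zip(taus, alphas):
        for k in out:
            out[k] = out[k] + a * tau[k]
    return out

@torch.no_grad()
def disentanglement_error(model_fn, theta_pre, taus, alphas, loaders):
    """Estimate xi(alpha_1, ..., alpha_K) of Eq. 2 with the 0/1 distance.

    model_fn(x, theta) -> predicted labels; loaders[i] yields batches
    drawn from mu_i (the support of task i).
    """
    theta_joint = edit(theta_pre, taus, alphas)
    xi = 0.0
    for tau_i, a_i, loader in zip(taus, alphas, loaders):
        theta_i = edit(theta_pre, [tau_i], [a_i])
        disagree, n = 0.0, 0
        for x in loader:
            y_i = model_fn(x, theta_i)
            y_joint = model_fn(x, theta_joint)
            disagree += (y_i != y_joint).float().sum().item()
            n += y_i.numel()
        xi += disagree / max(n, 1)  # E_{x~mu_i}[1(y_i != y_joint)]
    return xi
```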
Provided this hypothesis holds, we only need to estimate the trigger vector in order to remove the attack. To accomplish this, we define a small, disjoint forget set composed entirely of triggered image-target pairs. We fine-tune the suspected backdoored model θb on this set, yielding updated weights θb+t. The parameter difference from this step gives us an estimate of the trigger direction:

\hat{\tau}_t = \theta_{b+t} - \theta_b \quad (4)

We can then surgically remove the backdoor's influence from the original backdoored model via task negation, yielding a cleaned model:

\hat{\theta}_c = \theta_b - \alpha\,\hat{\tau}_t \quad (5)

where α is a scalar coefficient controlling the strength of the unlearning. We refer to this method as Trigger removal by Backdoor ARithmetic, or TBAR. As with other weight interpolation techniques, we can use a small validation set to select the optimal value of the scaling coefficient α (Ilharco et al., 2022b;a; Yadav et al., 2023; Ortiz-Jimenez et al., 2024; Hazimeh et al., 2024).

5 TRIGGER VECTOR ESTIMATION WITH TBAR

In this section, we focus on trigger-known settings, empirically validate our hypothesis, and show the effectiveness of TBAR on standard attacks. Moreover, we demonstrate that the learned TBAR vectors can be transferred across datasets and scale to practically relevant settings.

Table 1: Controlled experiments showing the effectiveness of TBAR on single-task CLIP ViT-B/32 classifiers under three backdoor attacks. Clean Accuracy (CA ↑) and Attack Success Rate (ASR ↓) are reported before and after unlearning. Percentages in parentheses denote CA retention and ASR removal relative to the backdoored model. Results are averaged over 4 seeds.

| Dataset | Attack | init CA | Attacked CA ↑ | Attacked ASR ↓ | TBAR CA ↑ | TBAR ASR ↓ |
|---|---|---|---|---|---|---|
| SUN397 | BadNet | 61.46 | 74.43 ± 0.34 | 91.40 ± 0.57 | 70.68 ± 0.84 (94.96%) | 1.25 ± 2.37 (98.63%) |
| SUN397 | Blended | 61.46 | 74.72 ± 0.34 | 99.92 ± 0.12 | 73.36 ± 1.17 (98.17%) | 0.00 ± 0.00 (100%) |
| SUN397 | WaNet | 61.46 | 74.71 ± 0.12 | 99.62 ± 0.26 | 73.31 ± 0.40 (98.13%) | 0.00 ± 0.00 (100%) |
| CIFAR100 | BadNet | 62.46 | 88.77 ± 0.18 | 99.96 ± 0.04 | 85.61 ± 2.07 (96.44%) | 0.02 ± 0.02 (99.98%) |
| CIFAR100 | Blended | 62.46 | 88.71 ± 0.22 | 99.98 ± 0.03 | 85.17 ± 1.96 (96.01%) | 0.18 ± 0.48 (99.82%) |
| CIFAR100 | WaNet | 62.46 | 88.66 ± 0.38 | 99.72 ± 0.05 | 87.61 ± 0.64 (98.82%) | 0.04 ± 0.02 (99.96%) |
| ImageNet-1K | BadNet | 59.58 | 67.23 ± 0.18 | 93.56 ± 0.31 | 63.85 ± 0.29 (94.97%) | 1.96 ± 2.38 (97.91%) |
| ImageNet-1K | Blended | 59.58 | 67.50 ± 0.20 | 99.91 ± 0.04 | 66.06 ± 0.93 (97.87%) | 0.00 ± 0.00 (100%) |
| ImageNet-1K | WaNet | 59.58 | 67.64 ± 0.18 | 99.86 ± 0.03 | 65.77 ± 1.20 (97.24%) | 0.00 ± 0.00 (100%) |

5.1 DISENTANGLEMENT OF CLEAN AND TRIGGERED KNOWLEDGE

We start by following the standard model editing setup, in which the CLIP text encoder stays frozen and only the visual encoder is fine-tuned (Wortsman et al., 2022b; Ilharco et al., 2022a; Yadav et al., 2023; Ortiz-Jimenez et al., 2024). To construct a targeted poisoning attack on the visual encoder of CLIP by injecting triggered images into the training set, we follow Carlini & Terzis (2021). In particular, triggers are generated using three widely adopted methods: BadNet (Gu et al., 2017), which inserts a random square patch at a random location; Blended (Chen et al., 2017), which overlays uniform noise across the image; and WaNet (Nguyen & Tran, 2021; Qi et al., 2023), which applies a subtle warping transformation. While BadNet represents a visible trigger, Blended and WaNet are considered invisible triggers. We evaluate three benchmark vision datasets: SUN397, CIFAR100, and ImageNet-1K, poisoned at a rate of 3% of their training data. We report the per-dataset details in Appendix A.
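Concretely, Equations 4 and 5 amount to one fine-tuning run and two state-dict operations. The sketch below is our own illustration; `finetune` stands in for the standard fine-tuning loop, and `select_alpha` for the validation-set search mentioned above:

```python
def tbar(theta_b, forget_loader, finetune):
    """Estimate the trigger vector (Eq. 4) from a triggered forget set."""
    theta_bt = finetune(theta_b, forget_loader)  # fine-tune on triggered pairs
    return {k: theta_bt[k] - theta_b[k] for k in theta_b}

def negate_trigger(theta_b, tau_t_hat, alpha):
    """Remove the backdoor via task negation (Eq. 5)."""
    return {k: theta_b[k] - alpha * tau_t_hat[k] for k in theta_b}

def select_alpha(theta_b, tau_t_hat, eval_ca, eval_asr, grid):
    """Grid-search alpha on a small validation set: lowest ASR, then highest CA."""
    scored = []
    for a in grid:  # grid values are assumed distinct
        theta = negate_trigger(theta_b, tau_t_hat, a)
        scored.append((eval_asr(theta), -eval_ca(theta), a, theta))
    _, _, alpha, theta_c_hat = min(scored)
    return alpha, theta_c_hat
```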
To obtain the TBAR vectors, we use a small held-out forget set of 2000 examples from the training set and fine-tune using the same hyperparameter settings per dataset. Optimal scaling coefficients are found using a grid search, consistent with previous literature (Ilharco et al., 2022b;a; Yadav et al., 2023; Ortiz-Jimenez et al., 2024; Hazimeh et al., 2024). Table 1 presents the full unlearning results across all datasets and attack types, reporting clean accuracy (CA) and attack success rate (ASR) before and after applying TBAR. TBAR consistently removes backdoors effectively, reducing ASR by over 98% in all cases. Notably, this comes with only a modest drop in clean accuracy (roughly 4% on average relative to the backdoored model), indicating that TBAR successfully isolates and removes triggered behavior from the model's weights.

Empirical validation of the disentanglement hypothesis. We now validate our hypothesis and show that our successful unlearning is due to the disentanglement between clean and triggered behaviors, using the weight disentanglement error ξ (defined in Equation 2). To do this, we must first construct the clean and trigger task vectors to be compared. Starting with ˆτt (from Equation 4) as the estimated direction of the trigger, we find an optimal scaling coefficient α∗, defined as the value that reduces the attack success rate to zero. This allows us to define the optimal trigger vector as α∗ˆτt. To define the corresponding clean vector, we first define the total update vector from pre-training to the backdoored state as τb = θb − θpre. The clean vector ˆτc is then computed as the residual of the total update: ˆτc = τb − α∗ˆτt. If our disentanglement hypothesis holds, we expect a low disentanglement error between the resulting merged model and the single models constructed using ˆτc and α∗ˆτt, on the respective data supports. Visualizations of the weight disentanglement error presented in Figure 2 confirm the disentanglement in weight space and, with it, our hypothesis. In fact, the large bright regions at the center of the plots, indicating low error, show that the two tasks exhibit strong separation in weight space, providing evidence that triggered and clean vectors correspond to distinct directions.¹

¹Notice that this is an analytical step; in practice, it is not needed for our method's operation.

Figure 2: Weight disentanglement between clean and triggered tasks. We estimate the triggered direction ˆτt from the backdoored model and define the clean direction ˆτc as the residual after negation. The plots show the disentanglement error ξ(αc, αt) between these task vectors over αc, αt ∈ [−2, 2] for SUN397, CIFAR100, and ImageNet-1K, following Ortiz-Jimenez et al. (2024). Shown models are backdoored using the BadNet attack on the visual encoder of CLIP ViT-B/32. Similar plots for the other attacks are provided in Appendix B.

5.2 GENERALIZATION AND TRANSFERABILITY OF TRIGGER VECTORS

One of the main motivations behind task vectors is their modularity: the ability to apply or combine them across models without retraining. For backdoor unlearning, we therefore investigate a similar question: does a TBAR vector trained on one dataset capture the backdoor mechanism in a way that transfers to other models infected with the same attack?
Indeed, if the vector encodes only the trigger-to-misdirection behavior, rather than task-specific semantics, it should remain effective across models trained on different datasets, as long as the backdoor type and trigger remain consistent. To test this, we evaluate unlearning performance in out-of-distribution settings using vectors extracted from a backdoored ImageNet-1K model. We apply these vectors to remove backdoors from models trained on CIFAR100 and SUN397, respectively.

Table 2: Unlearning performance on CIFAR100 and SUN397 using TBAR vectors extracted from a backdoored ImageNet-1K model. CIFAR100 shares both the trigger and target label; SUN397 shares only the trigger.

| Attack | Dataset | CA ↑ | ASR ↓ | CA (TBAR) ↑ | ASR (TBAR) ↓ |
|---|---|---|---|---|---|
| BadNet | CIFAR100 | 88.82 | 99.93 | 84.59 (95.24%) | 00.02 (99.98%) |
| BadNet | SUN397 | 74.76 | 91.20 | 69.29 (92.68%) | 00.99 (98.91%) |
| Blended | CIFAR100 | 88.78 | 99.98 | 84.49 (95.17%) | 00.48 (99.52%) |
| Blended | SUN397 | 74.81 | 99.85 | 62.91 (84.09%) | 05.08 (94.91%) |
| WaNet | CIFAR100 | 88.78 | 99.80 | 87.43 (98.48%) | 00.53 (99.47%) |
| WaNet | SUN397 | 74.91 | 99.80 | 73.84 (98.57%) | 01.72 (98.28%) |

In this setup, CIFAR100 shares both the trigger and target label with ImageNet-1K, while SUN397 shares only the trigger (e.g., the same BadNet-style patch, but mapped to a different label). These two settings allow us to test two hypotheses: (i) transfer is facilitated when both the trigger and target label align, and (ii) transfer may still occur when only the trigger is shared, suggesting that the vector captures a generic trigger-to-misdirection pattern within the attack type. Remarkably, Table 2 shows that TBAR vectors extracted with ImageNet-1K remain effective when applied to other models backdoored with the same attack. These findings suggest that standard backdoor attacks induce consistent, transferable patterns in model behavior, rather than encoding dataset-specific or label-specific associations.

5.3 LARGE SCALE IMAGE-CAPTION EXPERIMENTS

We now extend our analysis and show that TBAR continues to deliver strong performance in more challenging deployment settings. Specifically, we backdoor full CLIP models using image-caption pairs. Following the setup of Bansal et al. (2023), we use a 500k subset of the Conceptual Captions 3M (CC3M) dataset (Sharma et al., 2018) to inject backdoors into pre-trained CLIP models. As in prior work, we evaluate CA and ASR on the ImageNet-1K validation set. We consider four standard backdoor attacks: BadNet, Blended, WaNet, and BadCLIP (Liang et al., 2024), a newly introduced optimized patch attack for CLIP models. These attacks are evaluated against three clean-data fine-tuning defenses: CleanCLIP (Bansal et al., 2023), RoCLIP (Yang et al., 2024b), and standard CLIP fine-tuning.²

²These methods operate solely on clean, non-triggered images. Consequently, they tend to require larger datasets and longer training durations, increasing their vulnerability to catastrophic forgetting.

Table 3: TBAR performance on CLIP ViT-B/32 under four backdoor attacks (BadNet, Blended, WaNet, and BadCLIP). We report both CA and ASR. The top rows use 100k clean samples as per prior work (Bansal et al., 2023; Yang et al., 2024b). The middle rows use true targeted unlearning with 1.5k poisoned samples. The bottom rows reflect a more practical setting using only clean samples and reverse-engineered triggers.
| Method | BadNet (CA↑ / ASR↓) | Blended (CA↑ / ASR↓) | WaNet (CA↑ / ASR↓) | BadCLIP (CA↑ / ASR↓) |
|---|---|---|---|---|
| Zero-Shot | 63.34% / 00.00% | 63.34% / 00.00% | 63.34% / 00.00% | 63.34% / 00.00% |
| Backdoored | 61.69% / 84.48% | 61.39% / 99.67% | 61.32% / 93.12% | 61.41% / 99.98% |
| *Clean-data fine-tuning* | | | | |
| Contrastive-FT | 51.41% / 13.72% | 51.77% / 02.01% | 51.58% / 00.05% | 51.41% / 79.32% |
| RoCLIP | 50.02% / 47.91% | 51.84% / 06.40% | 48.26% / 00.04% | 53.31% / 99.32% |
| CleanCLIP | 51.41% / 04.11% | 51.02% / 00.05% | 51.09% / 00.04% | 51.82% / 77.04% |
| *True unlearning* | | | | |
| GA | 59.89% / 07.95% | 59.92% / 00.01% | 58.71% / 00.04% | 58.45% / 00.08% |
| TBAR | 59.28% / 00.38% | 60.46% / 00.09% | 60.14% / 00.05% | 56.58% / 00.77% |
| *Reverse-engineered unlearning* | | | | |
| GA+DECREE | 60.41% / 08.30% | 56.92% / 76.40% | 60.22% / 35.67% | N/A |
| TBAR+DECREE | 60.29% / 00.33% | 55.56% / 00.90% | 56.85% / 00.64% | N/A |

As an unlearning baseline, we use Gradient Ascent (GA) (Graves et al., 2021), applied with triggered data, similarly to Pawelczyk et al. (2024). Full implementation details are provided in Appendix A. To construct TBAR vectors, we define a disjoint forget set of 1.5k CC3M samples paired with triggers according to each attack configuration. Optimal scaling coefficients are selected using a validation set drawn from ImageNet-1K training data.

Table 3 reports CA and ASR for CLIP ViT-B/32. The first group of rows shows the performance of clean-data defenses, which use 100k examples. These methods generally exhibit large CA drops and fail to remove stronger attacks such as BadCLIP. The second group presents the results for unlearning methods. TBAR achieves significantly lower ASR than the baselines above, while retaining most of the clean accuracy of the backdoored model. Remarkably, it also uses two orders of magnitude less data. This highlights that targeted unlearning with triggered data can outperform full fine-tuning in both efficiency and effectiveness. Finally, notice that gradient ascent also performs well in this setting, in contrast to previous results in the literature on smaller-scale models (Pawelczyk et al., 2024); further discussion and caveats are provided below.

Despite the strong performance of TBAR, note that current backdoor defenses for CLIP and traditional unlearning methods do not share the same underlying assumptions. In particular, the latter assume access to a set of triggered examples, and therefore knowledge of the attack, which might not hold in practice. Hence, in the next section, we relax this stronger assumption.

6 AGNOSTIC-ATTACK UNLEARNING

To close the gap in assumptions between current CLIP defenses and our method, we extend TBAR to operate without explicit knowledge of the attack.

Unlearning with reverse-engineered triggers. To achieve this, we propose to use trigger reverse engineering to construct a proxy forget set, starting from the backdoored model and a set of clean inputs. In particular, we combine TBAR with DECREE (Feng et al., 2023), a self-supervised attack-detection method that inverts triggers by searching for minimal patterns such that any input stamped with the pattern yields similar output embeddings. Given the optimized trigger, we then infer the corresponding infected label by probing the backdoored model with DECREE-generated triggers and identifying the predicted class from the set of ImageNet-1K categories. Using this estimate, we construct proxy-triggered image-caption pairs via standard text templates (Radford et al., 2021).
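A rough sketch of this proxy forget-set construction (our own illustration; `pattern` and `mask` denote the DECREE-recovered trigger, and the caption template follows the style of Radford et al. (2021)):

```python
def build_proxy_forget_set(clean_images, pattern, mask, target_name):
    """Stamp clean images with a reverse-engineered trigger and pair them
    with templated captions for the inferred target label (sketch)."""
    pairs = []
    for x in clean_images:  # x: (C, H, W) tensor with values in [0, 1]
        stamped = (1 - mask) * x + mask * pattern
        pairs.append((stamped, f"a photo of a {target_name}"))
    return pairs
```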
Interestingly, we observe that the true ASR keeps improving even after the proxy-triggered attack is unlearned. We therefore adopt a search strategy that continues to increase the unlearning coefficient for a fixed window (typically 10 steps) after the proxy ASR is nullified. This search is subject to an early-stopping condition, whereby the clean accuracy must not drop below a predefined threshold (shared with gradient ascent).
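A minimal sketch of this coefficient search (our own illustration; `eval_ca` and `eval_proxy_asr` stand for the clean-accuracy and proxy-ASR evaluations on the validation data):

```python
def search_coefficient(theta_b, tau_proxy, eval_ca, eval_proxy_asr,
                       step=0.1, window=10, ca_threshold=0.90):
    """Increase alpha until the proxy ASR is nullified, then continue for a
    fixed window of steps, early-stopping if clean accuracy drops too far."""
    base_ca = eval_ca(theta_b)
    alpha, steps_after_zero, best = 0.0, 0, theta_b
    while steps_after_zero < window:
        alpha += step
        cand = {k: theta_b[k] - alpha * tau_proxy[k] for k in theta_b}
        if eval_ca(cand) < ca_threshold * base_ca:
            break                         # early stop on clean-accuracy loss
        best = cand
        if eval_proxy_asr(cand) == 0.0:
            steps_after_zero += 1         # proxy ASR nullified; keep going
    return best
```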
Results. Remarkably, Table 3 (bottom set) shows that the above pipeline remains effective with a 90% CA threshold, even without access to the original trigger. In particular, TBAR outperforms both the clean-data baselines and gradient ascent on three attacks. Note that, as reported by Liang et al. (2024), DECREE fails to detect the backdoor introduced by BadCLIP.

Robust unlearning beyond gradient ascent. Contrary to prior literature on backdoor unlearning (Pawelczyk et al., 2024), our results in Table 3 show that simple gradient ascent on true triggered examples can achieve strong unlearning performance on CLIP, even against robust attacks like BadCLIP. We hypothesize that the same weight disentanglement that allows our method to isolate triggers also facilitates this gradient-based unlearning.³ However, this effectiveness is fragile. To understand the trade-off between the two, we compared TBAR against gradient ascent under similar compute budgets, plotting CA and ASR reduction (1 − ASR) for TBAR versus gradient ascent over a progressive number of epochs. Figure 3 shows the results for true unlearning with known triggers: although one or two epochs of gradient ascent can match the performance of TBAR, exceeding this optimal point often leads to sharp drops in clean accuracy. This indicates that while gradient ascent can initially identify directions that suppress the backdoor, it is highly unstable, and maximizing the loss further may lead to arbitrary directions that do not reliably target the backdoor mechanism. This sensitivity to stopping criteria was also observed in previous work using gradient ascent (Li et al., 2021).

³Indeed, previous studies reported the emergence of weight disentanglement with model and data scale (Ortiz-Jimenez et al., 2024; Hazimeh et al., 2024).

Figure 3: True unlearning performance of TBAR and Gradient Ascent. The plots compare CA (↑) versus 1 − ASR (↑) over a progressive number of epochs (1-4) for BadNet, WaNet, Blended, and BadCLIP. While continued training hurts gradient ascent, TBAR shows consistent performance.

This instability is exacerbated under the more realistic, non-ideal conditions of using reverse-engineered DECREE patches. In this setting (presented in Figure 4), gradient ascent frequently overshoots: the backdoor is removed, but at the cost of substantial CA loss. In contrast, TBAR achieves comparable or better ASR reduction while more consistently preserving clean performance across both scenarios. We attribute this stability to the directional constraint imposed by task vectors, which prevents the aggressive and often arbitrary parameter shifts seen in unconstrained gradient ascent, making it more robust to both tuning and noise in the trigger signal.

Figure 4: Unlearning with DECREE (Feng et al., 2023) using TBAR and Gradient Ascent. The plots compare the underlying true-attack CA (↑) versus 1 − ASR (↑) over progressive epochs for BadNet, WaNet, and Blended.

7 FURTHER RESULTS AND DISCUSSION

Impact of forget set size. To assess the influence of the forget set size in true unlearning scenarios (i.e., the second set of rows in Table 3), we conduct fine-tuning experiments with varying forget set sizes and evaluate the performance of TBAR vectors after one epoch. Interestingly, we observe that increasing the size of the forget set does not result in a clear performance improvement (Figure 5), reinforcing the notion that the difficulty of unlearning is tied more closely to precisely identifying what needs to be unlearned than to the scale of the data.

Figure 5: Results of unlearning the BadNet attack with TBAR using varied sizes of the forget set (300 to 30k triggered samples), showing 1 − ASR and CA.

Scaling CLIP models. We provide complete results for the larger CLIP ViT-L/14 model under the setups described in Section 5.3 and Section 6. We observe significantly better trade-offs for unlearning overall. In particular, when using the optimized patches, we match the baselines for ASR reduction with a 98% clean accuracy threshold. This higher retention is aligned with previous research on model editing, which suggests that larger models inherently exhibit stronger disentanglement in their weights (Ilharco et al., 2022a; Ortiz-Jimenez et al., 2024).

Table 4: TBAR performance on CLIP ViT-L/14 under four backdoor attacks (BadNet, Blended, WaNet, and BadCLIP). We report both CA and ASR. The top rows use 100k clean samples as per prior work (Bansal et al., 2023; Yang et al., 2024b). The middle rows use true targeted unlearning with 1.5k poisoned samples. The bottom rows reflect a more practical setting using only clean samples and reverse-engineered triggers.

| Method | BadNet (CA↑ / ASR↓) | Blended (CA↑ / ASR↓) | WaNet (CA↑ / ASR↓) | BadCLIP (CA↑ / ASR↓) |
|---|---|---|---|---|
| Zero-Shot | 75.55% / 00.00% | 75.55% / 00.00% | 75.55% / 00.00% | 75.55% / 00.00% |
| Backdoored | 74.89% / 99.93% | 74.76% / 99.94% | 74.76% / 99.80% | 74.83% / 99.97% |
| *Clean-data fine-tuning* | | | | |
| Contrastive-FT | 69.65% / 58.04% | 69.26% / 14.28% | 70.73% / 37.74% | 71.16% / 93.31% |
| RoCLIP | 72.14% / 97.56% | 71.17% / 76.69% | 73.89% / 88.80% | 73.60% / 99.28% |
| CleanCLIP | 68.99% / 01.38% | 69.29% / 00.27% | 70.63% / 00.07% | 70.56% / 73.63% |
| *True unlearning* | | | | |
| GA | 74.08% / 00.00% | 73.42% / 00.00% | 73.17% / 00.02% | 73.20% / 00.02% |
| TBAR | 74.16% / 00.14% | 74.25% / 00.19% | 74.08% / 00.19% | 72.67% / 00.14% |
| *Reverse-engineered unlearning* | | | | |
| GA+DECREE | 74.38% / 49.32% | 74.75% / 99.93% | 74.12% / 00.00% | N/A |
| TBAR+DECREE | 74.26% / 15.28% | 73.68% / 01.20% | 74.42% / 00.00% | N/A |

Model architectures and pre-training. To further validate the robustness of our method across settings, we additionally experimented on CLIP with convolutional architectures (ConvNeXts) and non-contrastively pre-trained transformers (DINO). TBAR yields consistent results (i.e., ASR < 5% and modest CA drops). Results are reported in Appendix B.4.

Detoxifying merged models. Recent work shows that some backdoors fail to survive model merging, prompting the BadMerging attack (Zhang et al., 2024) to craft more persistent triggers.
We evaluate TBAR against BadMerging and find that our method completely removes the attack while preserving almost the entire clean accuracy of the merged models (see results in Appendix B.5).

8 RELATED WORK

Data poisoning attacks. Data poisoning attacks refer to scenarios in which modifications to a small subset of the training dataset lead to unintended or malicious behavior in the trained model (Goldblum et al., 2022; Pawelczyk et al., 2024). Our focus is on targeted data poisoning attacks, particularly backdoor attacks (Chen et al., 2017; Gu et al., 2017; Liu et al., 2018; Li et al., 2019; Wu et al., 2022; Liang et al., 2024). Backdoors embed a hidden vulnerability (trigger) into the model during training, which causes the model to exhibit specific behavior when an input containing the trigger is presented, while maintaining normal operation for unaltered inputs (Li et al., 2022). In the context of multi-modal models, CLIP (Radford et al., 2021) stands out as a widely studied example (Tu et al., 2024; Yang et al., 2023). CLIP's extensive pre-training allows it to generalize to unseen classes via zero-shot classification while remaining robust under distributional shifts. Regarding backdoors in particular, Carlini & Terzis (2021) found the model to be vulnerable using as little as 0.01% of its training data for poisoning. Multiple works (Goel et al., 2022a; Bansal et al., 2023; Yang et al., 2024b) proposed more 'robust' training schemes to safeguard against backdoor attacks on CLIP. Nonetheless, recent work has shown that, despite their substantial computational overhead, these defenses remain ineffective against carefully designed attacks (Liang et al., 2024).

Machine unlearning. Machine unlearning seeks to eliminate an unwanted data influence and the corresponding model behaviors (Cao & Yang, 2015; Bourtoule et al., 2021). There exist two main lines of work: exact unlearning (Bourtoule et al., 2021) and approximate machine unlearning (Graves et al., 2021; Neel et al., 2021; Jia et al., 2021; Chien et al., 2024; Goel et al., 2022b; Kurmanji et al., 2023; Foster et al., 2024). Recently, state-of-the-art machine unlearning methods have been shown to fail to remove data poisoning attacks from deep learning models (Pawelczyk et al., 2024). In parallel, large models have been shown to memorize vast amounts of data during pre-training, including personal and sensitive information, making them susceptible to targeted extraction attacks (Carlini et al., 2021; Jang et al., 2022; Wen et al., 2024), further sparking interest in tailoring unlearning techniques for these models (Yao et al., 2023; Lu et al., 2022).

Weight interpolation and task arithmetic. Despite the non-linearity of neural networks, previous work has shown that interpolating between the weights of two models is feasible under certain conditions (Izmailov et al., 2018; Frankle et al., 2020; Wortsman et al., 2021; 2022a; Ainsworth et al., 2022; Ilharco et al., 2022b), and that one can increase the fine-tuning gain by moving the weights of a pre-trained model in the direction of its fine-tuned counterpart (Wortsman et al., 2022b). Task arithmetic (Ilharco et al., 2022a) is a framework that formalizes the notion of distinct task vectors controlling different tasks. Ortiz-Jimenez et al. (2024) attributed this ability to weight disentanglement.
Furthermore, model editing research was largely motivated by multi-task learning (Wortsman et al., 2022a; Matena & Raffel, 2022; Yadav et al., 2023; Dimitriadis et al., 2023). Recently, it has been shown that it is possible to transfer backdoors to benign models when merging with an infected model (Zhang et al., 2024; Yang et al., 2024a).

9 CONCLUSION

In this paper, we investigated the problem of backdoor unlearning by examining how backdoor attacks are encoded in the weight space of CLIP models. Our analysis revealed that triggered knowledge is separable from clean knowledge and can be identified using existing vector arithmetic techniques. Building on this insight, we introduced a lightweight framework for effective backdoor removal that requires two orders of magnitude less data than existing clean-data-based defenses for CLIP. To address scenarios where the trigger is unknown, we further showed that our method can be combined with trigger reverse-engineering techniques, enabling practical and cost-efficient backdoor removal that effectively sanitizes models while maintaining high clean accuracy. We hope our findings renew interest in weight-space manipulations for backdoor mitigation and inspire further solutions.

ACKNOWLEDGMENTS

This work was partially funded by armasuisse under the RAEL (F0426) project. The authors thank Adam Hazimeh for insightful discussions and feedback throughout this project, as well as Ke Wang, Sevda Öğüt, and Ortal Senouf for their valuable comments.

REFERENCES

Samuel K Ainsworth, Jonathan Hayase, and Siddhartha Srinivasa. Git re-basin: Merging models modulo permutation symmetries. arXiv, 2022.

Hritik Bansal, Nishad Singhi, Yu Yang, Fan Yin, Aditya Grover, and Kai-Wei Chang. CleanCLIP: Mitigating data poisoning attacks in multimodal contrastive learning. In International Conference on Computer Vision (ICCV), 2023.

Mauro Barni, Kassem Kallas, and Benedetta Tondi. A new backdoor attack in CNNs by training set corruption without label poisoning. In IEEE International Conference on Image Processing (ICIP), 2019.

Lucas Bourtoule, Varun Chandrasekaran, Christopher A Choquette-Choo, Hengrui Jia, Adelin Travers, Baiwu Zhang, David Lie, and Nicolas Papernot. Machine unlearning. In IEEE Symposium on Security and Privacy (SP), 2021.

Yinzhi Cao and Junfeng Yang. Towards making systems forget with machine unlearning. In IEEE Symposium on Security and Privacy (SP), 2015.

Nicholas Carlini and Andreas Terzis. Poisoning and backdooring contrastive learning. arXiv, 2021.

Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, et al. Extracting training data from large language models. In USENIX Security Symposium (USENIX Security), 2021.

Xinyun Chen, Chang Liu, Bo Li, Kimberly Lu, and Dawn Song. Targeted backdoor attacks on deep learning systems using data poisoning. arXiv, 2017.

Eli Chien, Haoyu Wang, Ziang Chen, and Pan Li. Langevin unlearning: A new perspective of noisy gradient descent for machine unlearning. arXiv, 2024.

Nikolaos Dimitriadis, Pascal Frossard, and François Fleuret. Pareto manifold learning: Tackling multiple tasks via ensembles of single-task models. In International Conference on Machine Learning (ICML), 2023.

Lingyu Du, Yupei Liu, Jinyuan Jia, and Guohao Lan. Defending deep regression models against backdoor attacks. arXiv, 2024.
Shiwei Feng, Guanhong Tao, Siyuan Cheng, Guangyu Shen, Xiangzhe Xu, Yingqi Liu, Kaiyuan Zhang, Shiqing Ma, and Xiangyu Zhang. Detecting backdoors in pre-trained encoders. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023.

Jack Foster, Stefan Schoepf, and Alexandra Brintrup. Fast machine unlearning without retraining through selective synaptic dampening. In AAAI Conference on Artificial Intelligence (AAAI), 2024.

Jonathan Frankle, Gintare Karolina Dziugaite, Daniel Roy, and Michael Carbin. Linear mode connectivity and the lottery ticket hypothesis. In International Conference on Machine Learning (ICML), 2020.

Robert M French. Catastrophic forgetting in connectionist networks. Trends in Cognitive Sciences, 3(4):128-135, 1999.

Shashank Goel, Hritik Bansal, Sumit Bhatia, Ryan Rossi, Vishwa Vinay, and Aditya Grover. CyCLIP: Cyclic contrastive language-image pretraining. In Advances in Neural Information Processing Systems (NeurIPS), 2022a.

Shashwat Goel, Ameya Prabhu, Amartya Sanyal, Ser-Nam Lim, Philip Torr, and Ponnurangam Kumaraguru. Towards adversarial evaluations for inexact machine unlearning. arXiv, 2022b.

Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, Dawn Song, Aleksander Madry, Bo Li, and Tom Goldstein. Dataset security for machine learning: Data poisoning, backdoor attacks, and defenses. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2022.

Laura Graves, Vineel Nagisetty, and Vijay Ganesh. Amnesiac machine learning. In AAAI Conference on Artificial Intelligence (AAAI), 2021.

Tianyu Gu, Brendan Dolan-Gavitt, and Siddharth Garg. BadNets: Identifying vulnerabilities in the machine learning model supply chain. arXiv, 2017.

Asif Hanif, Fahad Shamshad, Muhammad Awais, Muzammal Naseer, Fahad Shahbaz Khan, Karthik Nandakumar, Salman Khan, and Rao Muhammad Anwer. BAPLe: Backdoor attacks on medical foundational models using prompt learning. In International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2024.

Adam Hazimeh, Alessandro Favero, and Pascal Frossard. Task addition and weight disentanglement in closed-vocabulary models. In Workshop on Efficient Systems for Foundation Models II @ ICML 2024, 2024.

Gabriel Ilharco, Marco Tulio Ribeiro, Mitchell Wortsman, Suchin Gururangan, Ludwig Schmidt, Hannaneh Hajishirzi, and Ali Farhadi. Editing models with task arithmetic. arXiv, 2022a.

Gabriel Ilharco, Mitchell Wortsman, Samir Yitzhak Gadre, Shuran Song, Hannaneh Hajishirzi, Simon Kornblith, Ali Farhadi, and Ludwig Schmidt. Patching open-vocabulary models by interpolating weights. In Advances in Neural Information Processing Systems (NeurIPS), 2022b.

Pavel Izmailov, Dmitrii Podoprikhin, Timur Garipov, Dmitry Vetrov, and Andrew Gordon Wilson. Averaging weights leads to wider optima and better generalization. arXiv, 2018.

Joel Jang, Dongkeun Yoon, Sohee Yang, Sungmin Cha, Moontae Lee, Lajanugen Logeswaran, and Minjoon Seo. Knowledge unlearning for mitigating privacy risks in language models. arXiv, 2022.

Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. In International Conference on Machine Learning (ICML), 2021.

Meghdad Kurmanji, Peter Triantafillou, Jamie Hayes, and Eleni Triantafillou. Towards unbounded machine unlearning. In Advances in Neural Information Processing Systems (NeurIPS), 2023.
Shaofeng Li, Minhui Xue, Benjamin Zi Hao Zhao, Haojin Zhu, and Xinpeng Zhang. Invisible backdoor attacks on deep neural networks via steganography and regularization. arXiv, 2019.

Yige Li, Xixiang Lyu, Nodens Koren, Lingjuan Lyu, Bo Li, and Xingjun Ma. Anti-backdoor learning: Training clean models on poisoned data. In Advances in Neural Information Processing Systems (NeurIPS), 2021.

Yiming Li, Yong Jiang, Zhifeng Li, and Shu-Tao Xia. Backdoor learning: A survey. IEEE Transactions on Neural Networks and Learning Systems, 35(1):5-22, 2022.

Siyuan Liang, Mingli Zhu, Aishan Liu, Baoyuan Wu, Xiaochun Cao, and Ee-Chien Chang. BadCLIP: Dual-embedding guided backdoor attack on multimodal contrastive learning. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024.

Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. In Advances in Neural Information Processing Systems (NeurIPS), 2023.

Yingqi Liu, Shiqing Ma, Yousra Aafer, Wen-Chuan Lee, Juan Zhai, Weihang Wang, and Xiangyu Zhang. Trojaning attack on neural networks. In Annual Network and Distributed System Security Symposium (NDSS), 2018.

Ximing Lu, Sean Welleck, Jack Hessel, Liwei Jiang, Lianhui Qin, Peter West, Prithviraj Ammanabrolu, and Yejin Choi. Quark: Controllable text generation with reinforced unlearning. In Advances in Neural Information Processing Systems (NeurIPS), 2022.

Michael S Matena and Colin A Raffel. Merging models with Fisher-weighted averaging. In Advances in Neural Information Processing Systems (NeurIPS), 2022.

Seth Neel, Aaron Roth, and Saeed Sharifi-Malvajerdi. Descent-to-delete: Gradient-based methods for machine unlearning. In Algorithmic Learning Theory, 2021.

Anh Nguyen and Anh Tran. WaNet: Imperceptible warping-based backdoor attack. arXiv, 2021.

Guillermo Ortiz-Jimenez, Alessandro Favero, and Pascal Frossard. Task arithmetic in the tangent space: Improved editing of pre-trained models. In Advances in Neural Information Processing Systems (NeurIPS), 2024.

Martin Pawelczyk, Jimmy Z Di, Yiwei Lu, Gautam Kamath, Ayush Sekhari, and Seth Neel. Machine unlearning fails to remove data poisoning attacks. arXiv, 2024.

Xiangyu Qi, Tinghao Xie, Jiachen T Wang, Tong Wu, Saeed Mahloujifar, and Prateek Mittal. Towards a proactive approach for detecting backdoor poison samples. In USENIX Security Symposium (USENIX Security), 2023.

Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning (ICML), 2021.

Alexandre Rame, Matthieu Kirchmeyer, Thibaud Rahier, Alain Rakotomamonjy, Patrick Gallinari, and Matthieu Cord. Diverse weight averaging for out-of-distribution generalization. In Advances in Neural Information Processing Systems (NeurIPS), 2022.

Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. Conceptual Captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In Association for Computational Linguistics (ACL), 2018.

Weijie Tu, Weijian Deng, and Tom Gedeon. A closer look at the robustness of contrastive language-image pre-training (CLIP). In Advances in Neural Information Processing Systems (NeurIPS), 2024.

Yuxin Wen, Leo Marchyok, Sanghyun Hong, Jonas Geiping, Tom Goldstein, and Nicholas Carlini. Privacy backdoors: Enhancing membership inference through poisoning pre-trained models. arXiv, 2024.
Mitchell Wortsman, Maxwell C Horton, Carlos Guestrin, Ali Farhadi, and Mohammad Rastegari. Learning neural network subspaces. In International Conference on Machine Learning (ICML), 2021.

Mitchell Wortsman, Gabriel Ilharco, Samir Yitzhak Gadre, Rebecca Roelofs, Raphael Gontijo-Lopes, Ari S Morcos, Hongseok Namkoong, Ali Farhadi, Yair Carmon, Simon Kornblith, et al. Model soups: Averaging weights of multiple fine-tuned models improves accuracy without increasing inference time. In International Conference on Machine Learning (ICML), 2022a.

Mitchell Wortsman, Gabriel Ilharco, Jong Wook Kim, Mike Li, Simon Kornblith, Rebecca Roelofs, Raphael Gontijo Lopes, Hannaneh Hajishirzi, Ali Farhadi, Hongseok Namkoong, et al. Robust fine-tuning of zero-shot models. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022b.

Baoyuan Wu, Hongrui Chen, Mingda Zhang, Zihao Zhu, Shaokui Wei, Danni Yuan, and Chao Shen. BackdoorBench: A comprehensive benchmark of backdoor learning. In Advances in Neural Information Processing Systems (NeurIPS), 2022.

Prateek Yadav, Derek Tam, Leshem Choshen, Colin A Raffel, and Mohit Bansal. TIES-Merging: Resolving interference when merging models. In Advances in Neural Information Processing Systems (NeurIPS), 2023.

Jinluan Yang, Anke Tang, Didi Zhu, Zhengyu Chen, Li Shen, and Fei Wu. Mitigating the backdoor effect for multi-task model merging via safety-aware subspace. arXiv, 2024a.

Wenhan Yang, Jingdong Gao, and Baharan Mirzasoleiman. Robust contrastive language-image pretraining against data poisoning and backdoor attacks. In Advances in Neural Information Processing Systems (NeurIPS), 2024b.

Ziqing Yang, Xinlei He, Zheng Li, Michael Backes, Mathias Humbert, Pascal Berrang, and Yang Zhang. Data poisoning attacks against multimodal encoders. In International Conference on Machine Learning (ICML), 2023.

Yuanshun Yao, Xiaojun Xu, and Yang Liu. Large language model unlearning. arXiv, 2023.

Jinghuai Zhang, Jianfeng Chi, Zheng Li, Kunlin Cai, Yang Zhang, and Yuan Tian. BadMerging: Backdoor attacks against model merging. In ACM SIGSAC Conference on Computer and Communications Security (CCS), 2024.

APPENDIX OUTLINE

This appendix provides supplementary material to support our main findings. It is organized as follows:

• Section A: Detailed Experimental Setup. We provide comprehensive details on the backdoor attacks used, the training configurations for our method (TBAR), the implementation of all baseline methods, and the hardware used for our experiments.

• Section B: More Analytical Experiments. We present additional analyses, including experiments on unlearning with mixed data, a sensitivity analysis of our scaling coefficient, further visualizations of weight disentanglement, and the applicability of our method to other architectures (ConvNeXt) and pre-training paradigms (DINO). Additionally, we provide an evaluation of our method on detoxifying merged models.

• Section C: More Large Scale Experiments. We report on the limitations of clean-data fine-tuning and provide results for larger models (ViT-L/14). We also discuss unlearning attacks with weak trigger signals.

A DETAILED EXPERIMENTAL SETUP

A.1 BACKDOOR ATTACKS

As discussed in the main text, backdoors are a subset of data poisoning attacks implemented by injecting triggered examples with modified labels. We assign the target label based on the training dataset.
Across different experimental settings, we consider six types of backdoor attacks (see the sketch of the noise-based triggers after this list):

• BadNets (Gu et al., 2017) is a patch-based attack. We follow the attack setup of Bansal et al. (2023) and insert a 16×16 patch of random noise drawn from a normal distribution N(0, 1) at a random position in the image.

• Blended (Chen et al., 2017) superimposes a noise perturbation on the entire image. We follow the attack setup of Bansal et al. (2023) and blend uniform noise into the natural image at a ratio of 8:2: x = 0.8 x + 0.2 N, where N is a noise tensor with uniform random values in the range [0, 1).

• WaNet (Nguyen & Tran, 2021) applies a warping transformation to the entire image. We follow the setup used by Bansal et al. (2023); Qi et al. (2023), using control grid size k = 224 and warping strength s = 1, and train models without the noise mode.

• SIG (Barni et al., 2019) adds a sinusoidal perturbation to the entire image. We follow the attack setup of Bansal et al. (2023) and superimpose sinusoidal noise along the horizontal axis of the image: x = clip(x + N, 0, 1), with N_{c,i,j} = (60/255) sin(2π · 6j / 224), a perturbation shared across all channels and rows.

• BadCLIP (Liang et al., 2024) is an optimized patch-based attack. Following the procedure in Liang et al. (2024), for the selected target label we optimize the patch using 9.5k clean images and 1800 true target images from the CC3M dataset (Sharma et al., 2018).

• BadMerging (Zhang et al., 2024): we use the official implementation to optimize a patch on the CIFAR100 task, producing a task vector that is then merged with six benign task vectors from GTSRB, EuroSAT, Cars, SUN397, and Oxford-PETS.

Figure 6: Visualization of different attack realizations on input images (from left to right): BadNet, Blended, WaNet, SIG, BadCLIP (ViT-B/32), and BadCLIP (ViT-L/14). The altered images are associated with the target label 'banana'.
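For concreteness, the Blended and SIG perturbations above amount to a few tensor operations. A minimal PyTorch sketch under the stated parameters (the function names are ours; inputs are (C, H, W) tensors with values in [0, 1]):

```python
import torch

def blended_trigger(x: torch.Tensor) -> torch.Tensor:
    """Blended: superimpose uniform noise at an 8:2 ratio."""
    noise = torch.rand_like(x)              # uniform values in [0, 1)
    return 0.8 * x + 0.2 * noise

def sig_trigger(x: torch.Tensor, delta: float = 60 / 255,
                freq: int = 6, width: int = 224) -> torch.Tensor:
    """SIG: horizontal sinusoid shared across all channels and rows."""
    j = torch.arange(width, dtype=x.dtype)
    n = delta * torch.sin(2 * torch.pi * freq * j / width)
    return torch.clamp(x + n, 0.0, 1.0)     # n broadcasts over (C, H, W)
```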
A.2 TBAR TRAINING DETAILS

A.2.1 CLIP WITH FROZEN TEXT ENCODER

Models and datasets. We use the CLIP ViT-B/32 model and evaluate on three benchmark image datasets: SUN397, CIFAR100, and ImageNet-1K. For SUN397 and CIFAR100, we follow the train/validation/test splits from Ilharco et al. (2022a) and sample a forget set from the training split prior to training. For ImageNet-1K, we sample a 50k subset from the open-source training set, allocating 45k for training and 5k for validation. An additional 2k examples are separately sampled as the forget set. We use the official validation set as the test set. Complete per-dataset configurations are provided in Table 5.

Evaluation. We report the accuracy on clean versions of the test set (CA), along with the attack success rate (ASR), defined as the percentage of predictions classified as the target label (as defined in Table 5) when the backdoor visual patch is present.

Training configurations. We adopt the same training configurations as Ilharco et al. (2022a) per dataset: the AdamW optimizer with learning rate 1e-5 and cosine scheduling, a batch size of 128, and a warmup of 500 steps. The same configurations are used for TBAR training.

Table 5: Per-dataset configuration for the experiments in Section 5.

| Dataset | target | epochs | train set | poison (%) | val set | forget set | test set |
|---|---|---|---|---|---|---|---|
| SUN397 | river | 14 | 15865 | 3 | 1985 | 2000 | 19850 |
| CIFAR100 | orange | 6 | 43000 | 3 | 5000 | 2000 | 10000 |
| ImageNet-1K | orange | 10 | 45000 | 3 | 5000 | 2000 | 50000 |

A.2.2 CLIP WITH IMAGE-CAPTION DATA

Models and datasets. We backdoor our CLIP models (ViT-B/32 and ViT-L/14) using 500k image-caption pairs from the Conceptual Captions 3M (CC3M) dataset (Sharma et al., 2018). We select 1500 random samples and poison them according to each attack's settings. For all attacks, we set the target label to captions containing the word "banana". We use the validation set of ImageNet-1K for the evaluations. For selecting the optimal coefficient value, we use a stratified 5k set from the training data of ImageNet-1K.

Evaluation. We report the accuracy on clean versions of the test set (CA), along with the attack success rate (ASR), defined as the percentage of predictions classified as the target label "banana" when the backdoor visual patch is present.

Training configurations. For backdooring, we use a batch size of 128, the AdamW optimizer with a learning rate of 1e-6, cosine scheduling, and a warmup phase of 50 steps. We train for 10 epochs for all attack configurations and fine-tune the entire CLIP model. We adopt the same hyperparameters for training TBAR task vectors.

A.3 OTHER METHODS

A.3.1 CLEANCLIP

CleanCLIP (Bansal et al., 2023) optimizes a combination of the standard CLIP loss and a modality-specific self-supervised loss designed for image-caption pairs {I_i, T_i}. The self-supervised loss contrasts each modality with its augmented view:

\mathcal{L}_{SS} = -\frac{1}{2N}\left( \sum_{i=1}^{N} \log\frac{\exp(\langle I_i, \tilde{I}_i\rangle/\tau)}{\sum_{j=1}^{N}\exp(\langle I_i, \tilde{I}_j\rangle/\tau)} + \sum_{i=1}^{N} \log\frac{\exp(\langle T_i, \tilde{T}_i\rangle/\tau)}{\sum_{j=1}^{N}\exp(\langle T_i, \tilde{T}_j\rangle/\tau)} \right)

The total CleanCLIP loss is then defined as:

\mathcal{L}_{CleanCLIP} = \lambda_1 \mathcal{L}_{CLIP} + \lambda_2 \mathcal{L}_{SS}

Here, Ĩ_i and T̃_i denote augmented views of the original image and text, respectively. We follow the setup of Bansal et al. (2023), using a 100k disjoint subset of clean CC3M images and the recommended hyperparameters: 10 epochs, λ1 = λ2 = 1, learning rate 1e-5, batch size of 64, and a warmup of 50 steps.

A.3.2 ROCLIP

RoCLIP (Yang et al., 2024b) is a defense mechanism similar to CleanCLIP. During training, instead of directly associating each image with its corresponding caption, RoCLIP periodically (every few epochs) matches each image to the text in the pool that is most similar to its original caption, and vice versa. We use the open-source code of Yang et al. (2024b) with their default hyperparameters.

A.3.3 STANDARD CLIP FINE-TUNING

We use the same hyperparameters as CleanCLIP without the in-modal loss.

A.3.4 GRADIENT ASCENT

We implement Gradient Ascent following Graves et al. (2021); Jang et al. (2022), by reversing the gradient updates on the forget set U_set:

\theta^{(t+1)} = \theta^{(t)} + \eta\, \nabla_\theta \mathcal{L}(U_{set}, \theta^{(t)}),

where η is the learning rate. In all our experiments, we use the same TBAR hyperparameters for the Gradient Ascent computation.
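A minimal sketch of one epoch of this update in PyTorch (our own illustration; `loss_fn` stands for the training loss evaluated on a batch from the forget set, and plain SGD is used here for simplicity):

```python
import torch

def gradient_ascent_epoch(model, forget_loader, loss_fn, lr=1e-6):
    """theta <- theta + lr * grad L: ascend the loss on the forget set."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for batch in forget_loader:
        opt.zero_grad()
        (-loss_fn(model, batch)).backward()  # negate so the SGD step ascends
        opt.step()
```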
A.3.5 DECREE

DECREE performs self-supervised trigger inversion to detect attacks. Given a clean dataset and a suspected encoder, DECREE optimizes a minimal trigger that induces similar embeddings for inputs once stamped with this trigger. It then uses the optimized trigger's size (l1-norm) to gauge vulnerabilities: clean encoders typically need large triggers to elicit this behavior (e.g., covering more than 10% of the image). DECREE is computationally lightweight and adds minimal overhead, because it does not require fine-tuning the model encoder; it only optimizes a small trigger (pattern + mask) using gradients with respect to the model input. For our experiments, we run the method on a clean encoder and on our suspected models and compare the recovered trigger's l1-norm (mask size) against the one recovered on a clean encoder of the same architecture. We use the open-source re-implementation from the BadCLIP code (Liang et al., 2024), with all default hyperparameters except for two modifications: we reduce the batch size to 128 for experiments with the ViT-L/14 model, and for the learning rate adapter on the CC3M dataset, we use a threshold of [30, 50] steps to adjust the learning rate instead of [200, 500].

Figure 7: Visualization of different DECREE patches (from left to right): BadNet, BadNet-L, Blended, Blended-L, SIG, WaNet, and WaNet-L.

Below, we report both the raw l1-norm and DECREE's normalized metric, Pl1-Norm (the l1-norm divided by the input-space maximum, 3 × 224 × 224 for RGB images of size 224). As shown, the trigger sizes for backdoored models are an order of magnitude smaller than for the clean (Zero-Shot) model, providing a clear detection signal.

| Model | ViT-B/32 l1-Norm | ViT-B/32 Pl1-Norm (%) | ViT-L/14 l1-Norm | ViT-L/14 Pl1-Norm (%) |
|---|---|---|---|---|
| Zero-Shot | 22185.6276 | 14.74% | 45272.1229 | 30.08% |
| BadNet | 3186.8709 | 2.12% | 2921.5470 | 1.94% |
| Blended | 6691.9346 | 4.45% | 5346.6726 | 3.55% |
| WaNet | 13895.9155 | 9.23% | 6601.7446 | 4.39% |

A.4 HARDWARE

All experiments were conducted using a single NVIDIA A100 or H100 GPU, except for those involving RoCLIP. Due to that method's augmentation requirements, we used 2 H100 GPUs in parallel for ViT-B/32 and 4 GPUs for ViT-L/14.

B MORE ANALYTICAL EXPERIMENTS

B.1 UNLEARNING WITH A MIX OF CLEAN AND TRIGGERED EXAMPLES

We also experimented with forget sets containing a mixture of clean and triggered data. Figures 8-10 show the CA and ASR obtained using different ratios of clean to triggered examples in the forget set. For all configurations, larger ratios of triggered examples consistently yield better CA and ASR trade-offs. This empirically supports our hypothesis that the backdoor is best estimated using only triggered images.

Figure 8: (SUN397) Plots showing CA (↑) and ASR (↓) using task vectors extracted from a mixture of clean and triggered data under varying ratios (0.1-0.9), along increasing scaling values α ∈ [1, 10], for the BadNet, Blended, and WaNet attacks.

B.2 SCALING COEFFICIENT SENSITIVITY

To check whether the performance of our method is robust to the choice of scaling coefficient, Table 7 reports sensitivities to this choice within a 10% variation of the optimal value, averaged over 4 runs of the experiment previously presented in Table 1 of the main text on the WaNet attack. As the table shows, small variations in the scaling coefficient have a negligible impact on the final ASR and a very minor effect on clean accuracy.
Figure 9: (CIFAR100) Plots showing CA (↑) and ASR (↓) using task vectors extracted from a mixture of clean and triggered data under varying ratios, along increasing scaling values.

Figure 10: (ImageNet-1K) Plots showing CA (↑) and ASR (↓) using task vectors extracted from a mixture of clean and triggered data under varying ratios, along increasing scaling values.

Table 7: Scaling coefficient sensitivities within a 10% variation of the optimal value for a single attack run.

| Dataset | -10% CA | -10% ASR | +10% CA | +10% ASR |
|---|---|---|---|---|
| SUN397 | 73.58 ± 0.27 | 0.01 ± 0.01 | 73.08 ± 0.57 | 0.00 ± 0.00 |
| CIFAR100 | 87.76 ± 0.52 | 0.11 ± 0.19 | 87.39 ± 0.65 | 0.03 ± 0.01 |
| ImageNet-1K | 66.09 ± 0.94 | 0.01 ± 0.01 | 65.42 ± 1.48 | 0.00 ± 0.00 |

B.3 MORE ON WEIGHT DISENTANGLEMENT

Figures 11 and 12 report additional weight disentanglement visualizations for the attacks considered in Section 5.

Figure 11: Weight disentanglement between clean and triggered tasks. We estimate the triggered direction ˆτt from the backdoored model and define the clean direction ˆτc as the residual after negation. The plots show the disentanglement error ξ(αc, αt) between these task vectors for SUN397, CIFAR100, and ImageNet-1K, following Ortiz-Jimenez et al. (2024). Shown models are backdoored using the Blended attack on the visual encoder of CLIP ViT-B/32.

Figure 12: Weight disentanglement between clean and triggered tasks, as in Figure 11, for models backdoored using the WaNet attack on the visual encoder of CLIP ViT-B/32.

B.4 ADDITIONAL EXPERIMENTS ON OTHER ARCHITECTURES AND PRE-TRAINING

To further assess the robustness of TBAR across architectures and pre-training settings, we applied our method to:

• A convolutional model (ConvNeXt-Base, pre-trained on LAION-400M via contrastive learning). See Table 9.

• A transformer model (ViT-B/16) with DINO pre-training on ImageNet-1K, backdoored using CIFAR100. See Table 10.

B.5 DETOXIFYING MERGED MODELS

Recent work by Zhang et al. (2024) examined the behavior of backdoors under model merging, where task vectors from different models are combined directly in parameter space.

Table 8: Unlearning BadMerging (Zhang et al., 2024) patches with TBAR. Percentages in parentheses denote CA retention and ASR removal (1 − ASR).
| Merging | CA ↑ | ASR ↓ | CA (TBAR) ↑ | ASR (TBAR) ↓ |
|---|---|---|---|---|
| TA | 74.02 | 99.66 | 73.50 (99.30%) | 00.14 (99.86%) |
| TIES | 74.96 | 99.92 | 74.54 (99.44%) | 00.05 (99.95%) |

They observed that some backdoors fail to persist through merging, leading them to propose BadMerging, a two-stage attack that constructs optimized trigger patches designed to remain functional after merging. Given that BadMerging minimizes its signature in weight space to survive merging, it may similarly resist removal by parameter-space unlearning methods. Table 8 shows the results of applying TBAR to models infected with BadMerging and merged using two approaches: Task Arithmetic (TA) (Ilharco et al., 2022a) and TIES (Yadav et al., 2023); the latter addresses parameter interference through trimming, sign alignment, and selective averaging. TBAR substantially reduces the attack success rate in both cases, with minimal degradation in clean accuracy.

Table 9: Controlled experiments showing the effectiveness of TBAR on single-task CLIP ConvNeXt-Base classifiers under three backdoor attacks. Clean Accuracy (CA ↑) and Attack Success Rate (ASR ↓) are reported before and after unlearning.

| Attack | Dataset | CA | ASR | CA (TBAR) | ASR (TBAR) |
|---|---|---|---|---|---|
| BadNet | CIFAR100 | 89.15 | 99.99 | 82.94 | 02.95 |
| BadNet | ImageNet | 72.83 | 99.94 | 67.50 | 02.56 |
| BadNet | SUN397 | 76.99 | 99.99 | 67.48 | 05.11 |
| Blended | CIFAR100 | 89.07 | 99.92 | 87.09 | 00.02 |
| Blended | ImageNet | 72.74 | 99.85 | 71.06 | 00.00 |
| Blended | SUN397 | 76.89 | 99.93 | 73.21 | 00.00 |
| WaNet | CIFAR100 | 89.12 | 99.95 | 86.55 | 00.04 |
| WaNet | ImageNet | 72.78 | 99.99 | 70.67 | 00.01 |
| WaNet | SUN397 | 77.06 | 99.96 | 74.97 | 00.00 |

Table 10: Controlled experiments showing the effectiveness of TBAR on a transformer model (ViT-B/16) with DINO pre-training on ImageNet-1K, under three backdoor attacks using the CIFAR100 dataset. Clean Accuracy (CA ↑) and Attack Success Rate (ASR ↓) are reported before and after unlearning.

| Attack | CA | ASR | CA (TBAR) | ASR (TBAR) |
|---|---|---|---|---|
| BadNet | 78.98 | 99.63 | 73.98 | 00.11 |
| Blended | 78.74 | 99.34 | 73.30 | 00.00 |
| WaNet | 78.38 | 99.08 | 73.43 | 00.04 |

C MORE LARGE SCALE EXPERIMENTS

C.1 LIMITATIONS OF CLEAN DATA FINETUNING

As noted in the main text, large-scale fine-tuning can cause models to forget broader knowledge. Table 11 shows performance on SUN397 and CIFAR100 to assess the impact of backdooring and of the clean-data baselines from Table 3. Clean-data fine-tuning significantly degrades accuracy on these tasks, while TBAR has only a minor effect.

Table 11: Out-of-distribution clean accuracy on SUN397 and CIFAR100 for the CLIP ViT-B/32 model backdoored with image-caption data.

| Attack | Dataset | Pre-Trained | Backdoored | CleanCLIP | RoCLIP | Contrastive-FT | TBAR |
|---|---|---|---|---|---|---|---|
| BadNet | SUN397 | 63.18% | 63.23% | 56.50% | 58.47% | 56.47% | 61.47% |
| BadNet | CIFAR100 | 65.58% | 63.84% | 48.38% | 40.77% | 52.39% | 63.89% |
| Blended | SUN397 | 63.18% | 63.19% | 55.65% | 56.43% | 55.60% | 62.41% |
| Blended | CIFAR100 | 65.58% | 64.65% | 52.31% | 37.91% | 52.03% | 64.94% |
| WaNet | SUN397 | 63.18% | 62.84% | 56.37% | 55.24% | 55.66% | 62.25% |
| WaNet | CIFAR100 | 65.58% | 62.68% | 53.43% | 36.32% | 53.94% | 61.84% |

C.2 ENHANCING UNLEARNING ROBUSTNESS WITH WEAK TRIGGER CUES

Table 12: Results on CLIP ViT-B/32 with the SIG attack, showing CA (↑) and ASR (↓) performance evaluated on the ImageNet-1K validation set.

| Method | CA | ASR |
|---|---|---|
| Zero-Shot | 63.34% | 00.00% |
| Backdoored | 61.36% | 99.01% |
| Contrastive-FT | 51.46% | 10.26% |
| RoCLIP | 52.61% | 04.34% |
| CleanCLIP | 51.12% | 05.51% |
| GA | 58.25% | 00.10% |
| TBAR | 59.02% | 00.42% |
| GA+DECREE | 56.52% | 03.01% |
| TBAR+DECREE | 55.41% | 05.43% |

We additionally provide results on unlearning the sinusoidal (SIG) attack (Barni et al., 2019) on ViT-B/32. In this case, we observed that probing the backdoored model with a reverse-engineered SIG patch consistently resulted in the label "television".
We additionally provide results on unlearning the sinusoidal (SIG) attack (Barni et al., 2019) on ViT-B/32. For this attack, we observed that probing the backdoored model with a reverse-engineered SIG patch consistently resulted in the label "television". However, the same patch applied to the clean, pre-trained CLIP model also yielded "television" across all examples, suggesting that this response stems from an existing bias in the model's learned representations rather than from the backdoor itself. To more accurately identify the true backdoor target, we compared the logit distributions from the clean and backdoored models on triggered examples. The class with the largest shift in density was indeed the "banana" class. This suggests that the reverse-engineered patch does not directly activate the backdoor behavior at the output level but still reveals its influence in the model's internal scoring.

This observation leads to two important insights. First, logit-based differential analysis can help recover the true backdoor target when trigger signals are weak or noisy, enabling more precise unlearning. Second, it underscores that backdoors may not always introduce novel behaviors, but may instead amplify existing model biases. This logit-difference test was additionally evaluated and confirmed for all the experiments reported in the main text.
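The logit-based differential analysis described above amounts to ranking classes by the average probability shift between the two models on triggered inputs. Below is a minimal sketch; the triggered batches and models are assumed inputs, and this is not the paper's evaluation code.

import torch

@torch.no_grad()
def backdoor_target_by_logit_shift(clean_model, backdoored_model, triggered_batches):
    # Average the per-class probability shift between the backdoored and the
    # clean model on trigger-stamped inputs, then rank classes by that shift.
    # The top class is the likely backdoor target even when the patch is too
    # weak to flip the argmax (as with the SIG run described above).
    total, n = 0.0, 0
    for images in triggered_batches:
        shift = backdoored_model(images).softmax(-1) - clean_model(images).softmax(-1)
        total = total + shift.sum(dim=0)
        n += images.shape[0]
    return torch.argsort(total / n, descending=True)  # largest density shift first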
2510.14842
Boosting Instruction Following at Scale

Ben Elder, Evelyn Duesterwald, Vinod Muthusamy
IBM T.J. Watson Research, Yorktown Heights, NY

Abstract

A typical approach developers follow to influence an LLM's behavior in an application is through careful manipulation of the prompt, such as by adding or modifying instructions. However, merely adding more instructions provides little assurance that they will actually be followed. We introduce Instruction Boosting as a post-generation method to increase the reliability of LLM prompt instructions. We show that Instruction Boosting improves the instruction following rate by up to 7 points for two instructions and up to 4 points for ten instructions. To demonstrate these results we introduce SCALEDIF, a benchmark with a scaled instruction volume of up to ten instructions per data sample. We also present an analysis of the commonly observed trend that performance degrades as more instructions are added. We show that an important factor contributing to this trend is the degree of tension and conflict that arises as the number of instructions is increased. We contribute a quantitative conflict scoring tool that explains the observed performance trends and provides feedback to developers on the impact that additional prompt instructions have on a model's performance.

1 Introduction

Large Language Models (LLMs) have become foundational components in the development of agentic applications. However, for developers, these powerful models often behave like black boxes, making it difficult to precisely control their output. A typical method for influencing an LLM's behavior is through careful manipulation of the prompt, such as by adding or modifying instructions. For instance, when testing reveals an undesirable model outcome, a developer might add a corrective instruction to the prompt in an effort to prevent that behavior from recurring.

This reliance on prompt-based instructions, however, presents two fundamental problems. First, there is little guarantee that a newly added instruction in the prompt will actually be followed by the LLM. Second, the progressive addition of instructions can inadvertently introduce tension or even direct contradictions with pre-existing instructions, making it more difficult to satisfy all instructions simultaneously. These issues combined can lead to an often observed phenomenon, where the instruction following rate (IF rate) degrades as the number of instructions increases (Jiang et al., 2024).

[Figure 1 schematic: an LLM produces an initial response to an input (query plus instructions); a boosting algorithm with an IF detector revises it. In the example, the query "Write a parenting plan." carries the instructions "1. Answer with less than 1000 words." and "2. Begin response with: 'Parenting Plan'"; the initial response violates instruction 2, while the boosted response satisfies both.]
Figure 1: Overview of the instruction boosting approach.

We propose Instruction Boosting as a test-time post-generation method to increase the reliability of LLM instruction following in applications. Instruction boosting is predicated on the observation that it is often easier for an LLM to revise a suboptimal response to meet a set of instructions than it is to generate a perfect response in the first place.
As such, our method borrows from concepts of self-correction (Madaan et al., 2023; Huang et al., 2024), employing techniques to refine and reinforce initial model responses against the given instructions. As shown in Fig. 1, we apply boosting on an initial response from an LLM in order to increase the number of instructions that are followed. The boosting algorithm may internally use a detector to determine which instructions are being followed. We evaluate several boosting strategies and show instruction following improvements across a range of open LLMs. Specifically, we show that boosting improves the IF rate by up to 7 percentage points in the case of 2 prompt instructions and by up to 4 percentage points for 10-instruction scenarios.

To rigorously evaluate our approach, we contribute SCALEDIF, a new instruction following benchmark that extends the popular IFEval dataset (Zhou et al., 2023) to include data samples with up to ten instructions. Our experimental results with SCALEDIF also independently confirm the typical performance degradation with increasing instructions. Since each instruction constrains the model's response, many instructions can easily create an over-constrained problem. While some instruction sets may contain explicit conflicts (pairs of instructions that are impossible to follow simultaneously), we also show that even in the absence of such hard conflicts, a growing number of constraints can lead to tension between instructions, making it increasingly difficult to satisfy all of them at once.

We formalize this notion by defining a soft conflict as a pair of instructions that are difficult, though not impossible, to follow simultaneously. We developed a quantitative conflict scoring test that determines the degree of soft conflict among a set of instructions. We show that conflict scores are negatively correlated with both the initial IF rate and the improved IF rate after boosting.

We exploit this relationship by proposing the conflict scoring test as a valuable feedback tool for developers. Developers can compute conflict scores before and after adding additional instructions to a prompt to obtain crucial feedback about the impact the additional instructions have on model performance. The conflict scoring test can serve as an early indicator of expected instruction following performance and can guide developers in adjusting or modifying instructions to lower the conflict score, thereby obtaining improved responses and better overall model control.

This paper makes the following contributions: (i) the SCALEDIF dataset with an instruction volume of up to 10 instructions per data sample, (ii) Instruction Boosting as a test-time method for developers to increase reliance on prompt-based instructions and control over LLM responses, and (iii) an instruction conflict scoring test that estimates the complexity of satisfying a set of instructions simultaneously to provide feedback to developers.

2 SCALEDIF Dataset

To study the effects that additional instructions have on the overall IF rate, we created SCALEDIF, a derivative of the popular IFEval (Zhou et al., 2023) instruction following dataset with up to ten instructions per data sample. The original IFEval dataset contains 541 samples, where each sample contains a query and between one and three verifiable instructions that must be followed.
There are 26 distinct classes of instructions, each with its own Python verifier function to validate if a given text follows the instruction. An instruction is defined by a tuple (description, classid, parameters), where description is a text description (e.g., "Answer with at least 50 words."), classid refers to an associated verifier function (e.g., length_constraints:number_words), and parameters are actual verifier function parameters matching the instruction description (e.g., { relation: "at least", num_words: 50 }). An instruction can be verified by invoking the associated verifier function with the given parameters.

SCALEDIF: Our derivative dataset consists of 538 of the 541 IFEval samples, each containing a query (e.g., 'Can you elaborate on "I froze when I was jogging"?') and ten unique instructions. We rewrote the IFEval samples to isolate the query and remove existing instructions. We used Mistral Large (Mistral AI, 2024a) for an initial extraction of the query and then refined the extracted queries manually.

We added ten unique instructions to each sample. We leverage the fact that the 26 instruction classes are largely independent of the query: any query can be paired with instructions from any unique instruction class. We made some changes to the instructions to ensure query-independence: we replaced three instruction classes (response_language, english_uppercase, english_lowercase) with three new query-independent ones (length_constraints:sentence_length, detectable_format:yaml_format, startend:start_checker). This resulted in a set P of 26 query-independent instruction classes grouped into eight instruction categories: change_case, combination, detectable_content, detectable_format, keywords, length_constraints, punctuation, and startend.

Each of the 538 samples was then assigned a set of N = 10 instructions drawn from P. Each instruction class p ∈ P is associated with a sampling function S which samples values for its parameters, if any. For numerical parameters such as the number of words or the number of paragraphs, we sample parameter values from simple one-dimensional distributions. For parameters requiring keywords (e.g., 'don't use the word "hot"'), keywords are sampled using an LLM (Granite 3.1-8B (IBM Granite, 2024)) in order to generate keywords that are relevant and meaningful in the context of a given query. This keyword sampling approach is in contrast to the original IFEval selection method, which chose keywords uniformly at random from a large vocabulary.

Algorithm 1 Constrained Instruction Sampling
Require: Integer N > 0, set of Instructions P
Ensure: List L of N valid (Instruction, Parameters) samples
 1: L <- empty list
 2: while length(L) < N do
 3:   Sample p uniformly without replacement from P
 4:   Compute constraints C_p on p from instructions in L
 5:   Sample required arguments A_p using sampling strategy S(C_p)
 6:   if a valid A_p can be found then
 7:     Append (p, A_p) to L
 8:   end if
 9: end while
10: return L

Fig. 2 shows the number of instructions per instruction category in SCALEDIF. A full list of the instruction classes is included in the Appendix.
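Algorithm 1 maps directly onto code. The sketch below is an illustrative rendering, not the released dataset tooling: sample_params plays the role of the per-class sampling strategy S under constraints C_p, and constraints_on derives the pairwise constraints discussed next; both hooks are hypothetical names.

import random
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class InstructionClass:
    classid: str
    # Returns parameters satisfying the given constraints, or None if impossible.
    sample_params: Callable[[dict], Optional[dict]]
    # Returns the constraints this (class, params) pair imposes on a candidate class.
    constraints_on: Callable[[dict, str], dict]

def sample_instruction_set(n: int, classes: list[InstructionClass], seed: int = 0):
    # Algorithm 1: draw n mutually satisfiable (instruction, parameters) pairs.
    rng = random.Random(seed)
    pool = classes.copy()
    rng.shuffle(pool)  # uniform sampling without replacement
    chosen: list[tuple[InstructionClass, dict]] = []
    while len(chosen) < n and pool:
        p = pool.pop()
        # Aggregate the constraints C_p induced by instructions already in L.
        # (A real implementation would combine overlapping constraints rather
        # than overwrite them, as with the disjoint-keyword rule below.)
        constraints: dict = {}
        for q, q_params in chosen:
            constraints.update(q.constraints_on(q_params, p.classid))
        params = p.sample_params(constraints)  # strategy S(C_p)
        if params is not None:  # reject the candidate if no valid parameters exist
            chosen.append((p, params))
    return chosen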
Sampling instruction parameters independently for each of the N instructions can easily lead to contradictions. For example, the keywords:existence instruction class requires that a set of keywords be present in the response, and the keywords:forbidden_words instruction class requires that a set of keywords not appear. These parameters must be sampled to be disjoint to avoid contradictory instructions that are impossible to follow simultaneously.

To solve this problem we enforced pairwise constraints during parameter sampling. When constructing an instruction set L and a candidate instruction I is selected to be added to L, constraints C_I are first computed based on the instructions already chosen for L. For example, if an instruction stating that the response must contain at least N paragraphs is already in L, then a constraint is added to ensure that no other instruction can require the number of words to be less than N * 10. These constraints are enforced while sampling parameters for the candidate instruction, and if they cannot be satisfied, then that instruction is rejected and a new candidate is sampled. If instruction parameters are successfully sampled which satisfy all existing constraints, then the candidate instruction is added to the list L. This process is described in Alg. 1.

[Figure 2 bar chart: instruction counts (up to roughly 1200) per category: change_case, combination, detectable_content, detectable_format, keywords, length_constraints, punctuation, startend.]
Figure 2: Number of instructions in SCALEDIF across eight instruction categories. The contribution of individual instruction classes in each category is distinguished by color.

After sampling ten instructions for each of the 538 samples, we shuffled the instruction order to avoid any sensitivity to the order in which instructions appear in the prompt. Finally, we constructed scaled-down versions of the dataset with 2, 4, 6 and 8 instructions per sample by randomly removing instructions from the 10-instruction dataset version.

3 Instruction Boosting

The idea behind instruction boosting is to scale compute (Snell et al., 2024) in order to improve on the baseline IF rate. As illustrated in Fig. 1, instruction boosting operates as a post-generation step that, similar to self-correction (Madaan et al., 2023; Huang et al., 2024), employs techniques to refine and reinforce an initial model response against the given instructions. Instruction boosting can be enabled for a subset of, or all of, the instructions in a given prompt. We devised several boosting strategies with different cost-performance trade-offs to give developers choice when balancing costs and benefits.

We evaluated instruction boosting on SCALEDIF with several open models of various sizes, including Llama-3.3-70B-Instruct (Llama-70B) (Meta AI, 2024a), Llama-3.1-8B-Instruct (Llama-8B) (Meta AI, 2024b), Qwen2.5-72B-Instruct (Qwen-72B) (Alibaba Cloud, 2024), Mixtral-8x7B-Instruct-v0.1 (Mixtral-8x7B) (Mistral AI, 2023) and Mixtral-8x22B-Instruct-v0.1 (Mixtral-8x22B) (Mistral AI, 2024b).

[Figure 3: IF rate vs. number of instructions (2-10) for Qwen-72B, Llama-70B, Llama-8B, Mixtral-8x22B and Mixtral-8x7B.]
Figure 3: Initial IF rate (solid lines) and Best-of-N boosting performance (dashed lines).

As shown in Fig. 3 (solid lines), the initial IF rate that these models achieve on SCALEDIF ranges from 0.56 (Mixtral-8x7B) up to 0.88 (Llama-70B) with two instructions, and reduces to 0.39 (Mixtral-8x7B) and 0.66 (Llama-70B) with ten instructions. Fig. 3 also shows the instruction following boost (dashed lines) achieved with the best-performing boosting strategy, which will be explained in detail in the following section.
With two instructions the largest IF rate boost is achieved by Mixtral-8x22B at 7 percentage points. With ten instructions, the largest boost is by Llama-70B at 4 percentage points. Note that across all models the IF rate drops as the number of instructions is increased, confirming previously observed instruction following degradation at scale. We will return to examine the factors that contribute to these degradation trends in Section 4.

3.1 Boosting Strategies

Instruction boosting is a test-time post-generation improvement strategy. A boosting strategy takes as input a query, a set of instructions and a generated response, and produces as output a new response that has been revised to maximize adherence to the instructions. We devised the following boosting strategies (see Appendix for the corresponding prompts).

Detect+Repair: Detect+Repair proceeds in two steps. First, an LLM-as-a-judge detector (judge detector) is used to determine which instructions have not been followed in the input response. In the second repair step, the response is rewritten to repair all detected instruction violations.

Best-of-N: Best-of-N uses temperature sampling to draw N rewritten responses that are asked to follow all instructions not already adhered to in the initial response. Best-of-N does not require an initial detection step. Instead, the judge detector is used as a reward model to assign an instruction following reward to each rewritten response. The reward is the IF rate detected by the judge detector. In a final selection step, the response with the highest reward is chosen as the repaired response. In all experiments we used N = 5, as we observed diminishing returns at higher sampling rates.

Best-of-N Oracle: To understand the potential IF rates models can achieve in their rewrites, we include a variant of Best-of-N that uses an oracle reward model to assign the actual instruction following rate to a given response. We used the deterministic IFEval instruction verifiers as the oracle.

Map Reduce: Map Reduce proceeds in three phases. First, the judge detector is used to detect violated instructions in the initial response. The Map phase creates separate rewrite tasks for each detected instruction violation. The final Reduce phase merges the independently generated rewritten responses into one final repaired response.

Fig. 4 shows the achieved IF rates for the four strategies for Llama-70B, the best-performing model from Fig. 3. The IF rate achieved in the initial responses, prior to boosting, is shown as the baseline. All boosting strategies lead to IF rate improvements over the baseline. Even at two instructions, boosting leads to small improvements, and the benefits generally increase with the number of instructions.

Among the non-oracle strategies, Best-of-N consistently provides the largest boost, up to 4 percentage points at ten instructions, increasing the IF rate from 0.66 to 0.70. Best-of-N Oracle shows the potential IF rate achievable through rewrite sampling. Even at two instructions, the model is capable of generating rewritten responses with an IF rate of 0.89, a 2 percentage point increase. The boost grows as instructions are increased to ten, when the IF rate reaches 0.75, an 8.5 percentage point increase. Although not shown due to space constraints, the relative boosting trends across the four strategies are similar across all models from Fig. 3.
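A minimal sketch of the Best-of-N strategy as described above (not the authors' implementation): rewrite(...) stands in for an LLM call with the rewrite prompt of Appendix B.2, and judge_if_rate(...) for the LLM-as-a-judge detector used as the reward model; both are hypothetical helpers.

def best_of_n_boost(query: str, instructions: list[str], initial_response: str,
                    n: int = 5, temperature: float = 0.8) -> str:
    # Sample n rewrites of the initial response and keep the one with the
    # highest instruction-following reward from the judge detector.
    # Scoring the initial response too is a small robustness fallback,
    # not necessarily part of the paper's procedure.
    best_response = initial_response
    best_reward = judge_if_rate(initial_response, instructions)
    for _ in range(n):  # the paper observed diminishing returns beyond N = 5
        candidate = rewrite(query, instructions, initial_response, temperature)
        reward = judge_if_rate(candidate, instructions)  # fraction of instructions followed
        if reward > best_reward:
            best_response, best_reward = candidate, reward
    return best_response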
The gap between Best-of-N and Best-of-N Oracle is a result of inaccuracies in the judge detector that is used as the reward model. When using Llama-70B as the reward model, the detection accuracy is 73%. Thus, closing this gap may be achieved by replacing the judge detector with manually coded or LLM-generated deterministic verifiers.

Task Adherence: Since the instructions are mostly orthogonal to the query in each sample, it is possible to satisfy them while completely ignoring the primary task, namely answering the query. As a quality check, we used Llama-70B as a task-adherence judge on the initial model response and on the response after instruction boosting. This task-adherence judge was instructed to determine if the given response was related to the query (see Appendix C for the judge prompt). In the case that the initial response fails this check, the data point is not included in the results. If the initial response passed but the response failed after instruction boosting, this should be considered an additional failure mode. Table 1 shows that the largest number of task adherence failures in the initial response generation was incurred for 8 instructions, with 22 out of 538 (4%) failures. Boosting caused at most 7 (1.3%) additional task adherence failures, in the 10-instruction experiment.

Table 1: Task Adherence Failures (Llama-70B)

Num Instructions     | 2 | 4  | 6 | 8  | 10
Initial Failures     | 0 | 11 | 8 | 22 | 20
Add'l after boosting | 0 | 0  | 6 | 3  | 7

[Figure 4: IF rate vs. number of instructions (2-10) for Best-of-N, Map-Reduce, Detect+Repair, Best-of-N Oracle and the baseline.]
Figure 4: Instruction following rate achieved by Llama-70B for different boosting strategies.

3.2 Cost Analysis

In addition to the varying benefits among the boosting strategies we considered, they also incur different cost-benefit tradeoffs. To illustrate these tradeoffs, Fig. 5 plots the achieved IF rate against cost, measured as completion tokens in FLOPs. Compared to the lowest-cost Detect+Repair strategy, Best-of-N trades additional compute for an additional IF rate boost. Map Reduce is less cost-effective, requiring significantly more compute for a small IF rate increase. We also explored the following boosting variations.

Detect+Repair Expl: As a variant of Detect+Repair, we added explanations of instruction violations to the rewrite prompt. We expected the additional explanation hints to help the model in the rewrite task. Surprisingly, adding explanations did not help and may have only provided a distraction to the model, since they actually led to a small erosion of the IF rate improvements, as shown in Fig. 5.

Best-of-N Gen: Instead of sampling N response rewrites, Best-of-N Gen samples N initial response generations to the query. Both Best-of-N and Best-of-N Gen use the judge detector as a reward model. As shown in Fig. 5, compared to rewrite sampling, sampling the initial responses incurs slightly lower cost but was not able to match the IF rate improvements of Best-of-N, confirming our hypothesis that rewriting a draft response is generally easier than writing a response from scratch.

[Figure 5: IF rate vs. compute cost (completion tokens in FLOPs, on the order of 1e19) for Best-of-N Gen, Best-of-N, Map-Reduce, Detect+Repair, Detect+Repair Expl, Best-of-N Oracle and the baseline.]
Figure 5: Costs (completion tokens in FLOPs) and IF rate of each strategy for 10 instructions achieved by Llama-70B.
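The cost axis in Fig. 5 counts completion tokens converted to FLOPs. A minimal sketch of one common way to do that accounting, assuming the standard approximation of roughly 2 FLOPs per model parameter per generated token; the constant and the per-strategy token counts below are illustrative assumptions, not values from the paper.

def completion_flops(completion_tokens: int, params: float = 70e9) -> float:
    # Approximate decoding cost: ~2 FLOPs per parameter per completion token.
    return 2.0 * params * completion_tokens

# Illustrative comparison for one sample (token counts are made up):
detect_repair = completion_flops(400) + completion_flops(900)     # detect report + one rewrite
best_of_n = 5 * (completion_flops(900) + completion_flops(400))   # five rewrites + five judge reports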
4 Instruction Scaling and Conflict Analysis

This section takes a closer look at the drivers of the observed instruction scaling trends that show decreasing IF rates with increasing numbers of instructions.

[Figure 6 schematic: from a dataset of samples with k instructions each, a pairwise dataset is built; an LLM generates responses for each instruction pair, and conflict counts are derived from the responses.]
Figure 6: Steps to empirically compute an estimated conflict score between a pair of instructions.

4.1 Soft Conflicts

In Section 2, we described how we applied pre-defined constraints to avoid contradictory instructions. For example, an instruction that requires the keyword 'confidential' to appear in the response would contradict one that prohibits the same word. We refer to a pair of contradictory instructions as a hard conflict. Even after applying these pre-defined constraints and ruling out hard conflicts, there may still be instructions that are difficult for a model to follow simultaneously. For example, instructing a model to include at least 300 words in a response and also instructing it to not repeat any words may be difficult. We carry out a self-play approach to empirically identify such soft conflicts.

To quantify the degree of soft conflict in a sample, we compute a conflict score between every pair of instructions assigned to that sample. The conflict score of a pair of instructions indicates the difficulty of following both instructions simultaneously. Conflict scores are computed by sampling multiple responses from a model for each instruction pair. Each response is checked to determine which instructions were followed: either both instructions were followed, or at least one was not followed. The latter case indicates possible soft conflicts. This flow is depicted in Figure 6.

More precisely, we start from the original dataset D, outlined in Section 2, with N samples, where each sample consists of a query and k instructions. Next we construct a pairwise dataset D̂ as follows: for each sample in D we create k(k - 1) additional samples with the same query but every subset of two instructions from the original k instructions. The model generates r responses for each sample in D̂ to produce the response set R̂. The conflict count c_ij between instructions i and j is the number of responses in R̂ where at least one of the two instructions i or j was not followed.

We can now use the pairwise conflict counts to compute a conflict score for each sample s ∈ D. Let p(s) be the set of instructions associated with sample s. The conflict score c_s of sample s is the normalized sum of the conflict counts for each pair of instructions in s:

    c_s = \frac{\sum_{(i,j) \in p(s) \times p(s),\, i \neq j} c_{ij}}{|p(s)|}    (1)

Table 2: Estimated conflict scores for Best-of-N boosting with Llama-70B.

Num Instructions         | 2     | 4     | 6     | 8     | 10
Avg conflict score       | 0.24  | 0.67  | 1.17  | 1.59  | 2.03
Correl (after boosting)  | -0.79 | -0.63 | -0.46 | -0.42 | -0.37

Conflict score at scale: We expect that samples with higher conflict scores include instructions that are more difficult for a model to follow and boost. We see this in Table 2, where the average conflict scores increase with the number of instructions in the dataset. This suggests that part of the reason that the instruction following compliance rate decreases with the number of instructions, as observed in Section 3, is that there are inherent conflicts when more instructions are added. Recall that the conflict scores are computed pairwise, so the conflict scores themselves are not susceptible to instruction scaling effects.
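Figure 6 and Equation (1) translate into a short routine. In the sketch below, generate(query, pair) and follows(response, instruction) are placeholders for the LLM sampler and the IFEval-style verifiers; everything else follows the definitions above.

from itertools import permutations

def conflict_counts(query, instructions, r=10):
    # For each ordered instruction pair (i, j) — k(k-1) pairs in total —
    # count how many of r sampled responses violate at least one of the two.
    counts = {}
    for i, j in permutations(range(len(instructions)), 2):
        pair = (instructions[i], instructions[j])
        counts[(i, j)] = sum(
            not (follows(resp, pair[0]) and follows(resp, pair[1]))
            for resp in (generate(query, pair) for _ in range(r))
        )
    return counts

def conflict_score(counts, instructions):
    # Equation (1): sum of pairwise conflict counts, normalized by |p(s)|.
    return sum(counts.values()) / len(instructions)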
Correlation of conflict score: Table 2 also lists the Pearson correlation between the IF rate and the conflict scores. The correlation decreases with increasing instructions; in the case with 10 instructions, the correlation is about -0.37. We posit two reasons for the decreasing correlation. First, our conflict scores are only based on pairwise instruction conflicts. There may be more complex soft conflicts that only arise in the interplay of more than two instructions. For example, consider a set of three instructions that place limits on the number of words, the number of sentences, and the number of words per sentence in the response. Following any pair of these instructions is easier than following all three. Another probable reason for the decreasing conflict correlation is that IF rates degrade with more instructions not only due to soft conflicts among the instructions, but also because there is still some effect of models struggling to follow an increasing number of instructions. Further investigating and quantifying the contributions of these possible causes is an avenue for future work.

Segmentation by IF rate: Let us dig deeper into the distribution of the conflict scores. Figure 7 plots the average conflict score for samples bucketed by the IF rate for the dataset with ten instructions. We see that "harder" samples, i.e., those with a lower IF rate, do indeed have a higher conflict score. This reinforces our assertion that soft conflicts among instructions contribute to worse IF rates when there are many instructions. Note that the error bars are the standard error of the mean, which is likely an underestimate of the true uncertainty due to correlations between instruction pairs appearing in multiple samples.

[Figure 7: average conflict score and number of samples vs. IF rate after boosting, for 10 instructions per query with Llama-70B Best-of-N.]
Figure 7: Average conflict scores for the 10-instruction dataset, bucketed by the IF rate after boosting.

Comparison without constraints: Recall from Section 2 that we apply pre-defined parameter sampling constraints to avoid including conflicting instructions in the dataset. We repeated the above analysis on a dataset where we did not apply these pre-defined constraints. As expected, we observed a higher average conflict score: with 10 instructions the average conflict score was 18% higher.

4.2 Lost-in-the-Middle

Prior work (Liu et al., 2023) looked into the "lost-in-the-middle" effect and observed that performance is often highest when relevant information occurs at the beginning or end of the LLM input context, and significantly degrades when models must access relevant information in the middle. We investigated whether the lost-in-the-middle effect could play a role in the lower IF rates we observed at rising numbers of instructions. Larger numbers of instructions also contain larger numbers of "middle" instructions. To investigate, we broke down the IF rate by instruction position to compute positional IF rates, i.e., the IF rate of the instruction at the n-th position. However, we found no consistent relationship between IF rates and instruction position across models. Middle instructions generally did not have lower IF rates than first or last instructions. Thus, the larger number of "middle" instructions in a list of 10 instructions does not appear to be a driver for the IF rate degradation we observed.
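The positional breakdown used for this check is a simple group-by over the evaluation results. A minimal sketch, assuming each record carries a per-position list of booleans named "followed" (an illustrative schema, not the paper's logging format):

from collections import defaultdict

def positional_if_rates(results):
    # Mean IF rate per instruction position n, to test for a
    # lost-in-the-middle effect (middle positions scoring lower).
    followed_by_pos = defaultdict(list)
    for sample in results:
        for n, ok in enumerate(sample["followed"]):
            followed_by_pos[n].append(ok)
    return {n: sum(v) / len(v) for n, v in sorted(followed_by_pos.items())}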
5 Related Work

Instruction following is critical to LLMs, enabling them to align with human intent. Related work on evaluating a model's instruction following ability includes IFEval (Zhou et al., 2023), which focuses on verifiable instructions that can be checked deterministically. IFEval includes data samples with one to three instructions per sample. SCALEDIF extends IFEval by scaling the instruction volume up to ten instructions per data sample. ComplexBench (Wen et al., 2024) and FollowBench (Jiang et al., 2024) evaluate how LLMs handle complex instructions with different types of constraints and their compositions. InFoBench (Qin et al., 2024) introduces a new metric (DRFR) that breaks down complex instructions into simpler criteria for a more detailed analysis of a model's compliance with each part of the instruction. RefuteBench (Yan, Luo, and Zhang, 2024) focuses on evaluating whether a model can modify its response to comply with human feedback in the form of refuting instructions in a conversational context. Verbalizer manipulation (Li et al., 2024) introduces an instruction following evaluation protocol for classification tasks. IFScale (Jaroslawicz et al., 2025) is a benchmark of 500 keyword-inclusion instructions for a business report writing task, and LIFBench (Wu et al., 2025) focuses on evaluating instruction following in long-context scenarios.

Beyond evaluation, prior research has explored test-time interventions to improve reasoning and overall response quality. An approach similar to Detect+Repair boosting is self-correction (Madaan et al., 2023; Huang et al., 2024), where a model is prompted to evaluate its own initial output and refine it. Self-correction primarily focuses on improving reasoning quality, whereas instruction boosting specifically targets a model's instruction following ability. Although not specifically aimed at instruction following, Chain-of-Thought (CoT) prompting (Wei et al., 2023) can improve model performance on complex tasks by breaking down a difficult problem into smaller, more manageable steps. Self-Consistency (Wang et al., 2023) builds on CoT prompting by sampling multiple model responses and selecting the most frequent or "consistent" response. Similar to Best-of-N boosting, self-consistency leverages a model's inherent capabilities to find a robust solution through compute scaling (Snell et al., 2024). However, in contrast to self-consistency, which samples response generations with the goal of improving CoT reasoning, Best-of-N boosting samples response rewrites and is targeted at improving IF rates. Wu et al. (2024) propose a training method for instruction following with thought generation, and Hou et al. (2025) propose a dynamic pruning approach to improve instruction following. In contrast, instruction boosting aims to improve instruction following as a post-processing method.

6 Conclusion

In this work, we have demonstrated that prompt-based instruction following in LLMs is a challenging problem, exacerbated by the instruction scaling effect. We introduced Instruction Boosting, a test-time strategy to enhance instruction adherence by refining and correcting initial model responses.
Our empirical results on the SCALEDIF dataset confirm that instruction boosting consistently improves instruction following, with gains of up to 7 and 4 percentage points for 2 and 10 instructions, respectively. We further contributed the concept of soft conflicts and a quantitative conflict score as a diagnostic tool for developers. By providing clear feedback to anticipate instruction following difficulty, our approach offers both a powerful method for improving model reliability and a valuable feedback mechanism for more effective prompt engineering and control over model behavior in applications.

References

Alibaba Cloud. 2024. Qwen2.5-72B-Instruct. https://huggingface.co/Qwen/Qwen2.5-72B-Instruct. Accessed: 08/15/2026.
Hou, B.; Chen, Q.; Wang, J.; Yin, G.; Wang, C.; Du, N.; Pang, R.; Chang, S.; and Lei, T. 2025. Instruction-following pruning for large language models. In ICML 2025, arXiv:2501.02086.
Huang, J.; Chen, X.; Mishra, S.; Zheng, H. S.; Yu, A. W.; Song, X.; and Zhou, D. 2024. Large language models cannot self-correct reasoning yet. In ICLR 2024, arXiv:2310.01798.
IBM Granite. 2024. Granite 3.0 language models.
Jaroslawicz, D.; Whiting, B.; Shah, P.; and Maamari, K. 2025. How many instructions can LLMs follow at once? arXiv:2507.11538.
Jiang, Y.; Wang, Y.; Zeng, X.; Zhong, W.; Li, L.; Mi, F.; Shang, L.; Jiang, X.; Liu, Q.; and Wang, W. 2024. FollowBench: A multi-level fine-grained constraints following benchmark for large language models. arXiv preprint arXiv:2310.20410.
Li, S.; Yan, J.; Wang, H.; Tang, Z.; Ren, X.; Srinivasan, V.; and Jin, H. 2024. Instruction-following evaluation through verbalizer manipulation. In NAACL 2024, arXiv:2307.10558.
Liu, N. F.; Lin, K.; Hewitt, J.; Paranjape, A.; Bevilacqua, M.; Petroni, F.; and Liang, P. 2023. Lost in the middle: How language models use long contexts. arXiv preprint arXiv:2307.03172.
Madaan, A.; Tandon, N.; Gupta, P.; Hallinan, S.; Gao, L.; Wiegreffe, S.; Alon, U.; Dziri, N.; Prabhumoye, S.; Yang, Y.; Gupta, S.; Majumder, B. P.; Hermann, K.; Welleck, S.; Yazdanbakhsh, A.; and Clark, P. 2023. Self-refine: Iterative refinement with self-feedback. arXiv:2303.17651.
Meta AI. 2024a. Meta-Llama-3.3-70B-Instruct. https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct. Accessed: 08/15/2026.
Meta AI. 2024b. Meta-Llama-3.1-8B-Instruct. https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct. Accessed: 08/15/2026.
Mistral AI. 2023. Mixtral-8x7B-Instruct-v0.1. https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1. Accessed: 08/15/2026.
Mistral AI. 2024a. Mistral Large. https://docs.api.nvidia.com/nim/reference/mistralai-mistral-large.
Mistral AI. 2024b. Mixtral-8x22B-Instruct-v0.1. https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1. Accessed: 08/15/2026.
Qin, Y.; Song, K.; Hu, Y.; Yao, W.; Cho, S.; Wang, X.; Wu, X.; Liu, F.; Liu, P.; and Yu, D. 2024. InFoBench: Evaluating instruction following ability in large language models. arXiv:2401.03601.
Snell, C.; Lee, J.; Xu, K.; and Kumar, A. 2024. Scaling LLM test-time compute optimally can be more effective than scaling model parameters. arXiv:2408.03314.
Wang, X.; Wei, J.; Schuurmans, D.; Le, Q.; Chi, E.; Narang, S.; Chowdhery, A.; and Zhou, D. 2023. Self-consistency improves chain of thought reasoning in language models. In ICLR 2023, arXiv:2203.11171.
Wei, J.; Wang, X.; Schuurmans, D.; Bosma, M.; Ichter, B.; Xia, F.; Chi, E.; Le, Q.; and Zhou, D.
2023. Chain-of-thought prompting elicits reasoning in large language models. arXiv:2201.11903.
Wen, B.; Ke, P.; Gu, X.; Wu, L.; Huang, H.; Zhou, J.; Li, W.; Hu, B.; Gao, W.; Xu, J.; Liu, Y.; Tang, J.; Wang, H.; and Huang, M. 2024. Benchmarking complex instruction-following with multiple constraints composition. In NeurIPS 2024, arXiv:2407.03978.
Wu, T.; Lan, J.; Yuan, W.; Jiao, J.; Weston, J.; and Sukhbaatar, S. 2024. Thinking LLMs: General instruction following with thought generation. arXiv:2410.10630.
Wu, X.; Wang, M.; Liu, Y.; Shi, X.; Yan, H.; Lu, X.; Zhu, J.; and Zhang, W. 2025. LIFBench: Evaluating the instruction following performance and stability of large language models in long-context scenarios. arXiv:2411.07037.
Yan, J.; Luo, Y.; and Zhang, Y. 2024. RefuteBench: Evaluating refuting instruction-following for large language models. In ACL 2024, arXiv:2402.13463.
Zhou, J.; Lu, T.; Mishra, S.; Brahma, S.; Basu, S.; Luan, Y.; Zhou, D.; and Hou, L. 2023. Instruction-following evaluation for large language models. arXiv:2311.07911.

A SCALEDIF Construction

The 26 instruction classes listed in Table 3 were used in the construction of the SCALEDIF dataset. All but three of these instruction classes (length_constraints:sentence_length, detectable_format:yaml_format, startend:start_checker) were part of the original IFEval dataset. The three original IFEval instruction classes they replace (response_language, english_uppercase, english_lowercase) were removed because they were too difficult to combine with other instructions for scaling.

Table 3: SCALEDIF Instructions

Category           | Instruction classes
length_constraints | sentence_length, number_sentences, number_words, number_paragraphs, nth_paragraph_first_word
detectable_format  | number_bullet_lists, constrained_response, number_highlighted_sections, title, multiple_sections, json_format, yaml_format
detectable_content | number_placeholders, postscript
punctuation        | no_comma
keywords           | forbidden_words, frequency, letter_frequency, existence
combination        | repeat_prompt, multiple_responses
change_case        | capital_word_frequency, lowercase_sentences
startend           | quotation, start_checker, end_checker

A.1 Constraints

In order to ensure that the sets of instructions constructed for SCALEDIF are mutually satisfiable, constraints had to be imposed on the sampled parameters. For example, suppose that length_constraints:number_paragraphs is sampled with parameter num_paragraphs = 3, and then length_constraints:number_words is sampled. Before sampling the parameters for length_constraints:number_words, all constraints related to the previously sampled instructions will be computed. The length_constraints:number_paragraphs instruction will produce the constraint num_words >= 10 * num_paragraphs = 30. This constraint is chosen to ensure that the response is allowed to have enough words to form the necessary number of paragraphs.

As another example, the keywords:forbidden_words and keywords:existence instructions each have a list of keywords as their parameter. If both of these instructions are sampled, then the second one chosen is required to exclude the keywords already taken by the first one.

There is additionally one three-way constraint imposed between length_constraints:sentence_length, length_constraints:number_sentences, and length_constraints:number_words:
    N_words >= (N_sentences + 3) * (sentence_length + 3)    (2)

B Boosting Strategy Details

Below we include the prompt templates used in the boosting strategies. The templates are shown with their Llama chat-template tokens; the ${...} placeholders are filled in at runtime.

B.1 Detect+Repair

Detect Prompt:

<|begin_of_text|><|start_header_id|>system<|end_header_id|>
You are a policy compliance checker.<|eot_id|>
<|start_header_id|>user<|end_header_id|>
Check if the following text follows each policy.
For each policy, provide a JSON response with:
1. "policy": The policy line being checked
2. "answer": "yes" or "no"
3. "explanation": A brief explanation

Text to check:
---
${text}
---

Policies to check:
---
${policies}
---

IMPORTANT:
1. Return a JSON array with one object per policy line, in the same order as listed above
2. Each object should have "policy", "answer" and "explanation" fields
3. Do NOT try to follow the policies in the JSON output. Only check whether the text follows the policies
4. Be concise and accurate.

Example format:
[
  {
    "policy": "The first policy line.....",
    "answer": "yes/no",
    "explanation": "the explanation for the answer"
  },
  {
    "policy": "The second policy line.....",
    "answer": "yes/no",
    "explanation": "the explanation for the answer"
  }
]

Return ONLY the JSON array, nothing else. Do not include any additional text or explanations outside the JSON array.<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>

Repair Prompt:

<|begin_of_text|><|start_header_id|>system<|end_header_id|>
You are an editor and your task is to rewrite a response.<|eot_id|>
<|start_header_id|>user<|end_header_id|>The following text shows a query and a response. The response violates one or more guidelines from the query that it should have followed.

<START_OF_QUERY>
${query}
Your response must follow these guidelines:
${instr}
<END_OF_QUERY>

<START_OF_RESPONSE>
${text}
<END_OF_RESPONSE>

<START_OF_VIOLATED_GUIDELINES>
${policies}
<END_OF_VIOLATED_GUIDELINES>

Rewrite the response so that it follows all guidelines while making only necessary changes and keeping the rest of the text unchanged. The rewrite must not break any of the guidelines that were already followed in the response.
You may rewrite the text exactly once, and don't output anything other than the re-written response.
You must enclose the re-written response in <START_OF_REWRITE> <END_OF_REWRITE> tags.<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>

B.2 Best-of-N

<|begin_of_text|><|start_header_id|>system<|end_header_id|>
You are an editor and your task is to rewrite a response to ensure compliance with guidelines.<|eot_id|>
<|start_header_id|>user<|end_header_id|>The following text shows a query, a response and a set of guidelines. The response may violate one or more guidelines that it should have followed.

<START_OF_QUERY>
${query}
<END_OF_QUERY>

<START_OF_RESPONSE>
${text}
<END_OF_RESPONSE>

<START_OF_GUIDELINES>
${instr}
<END_OF_GUIDELINES>

Rewrite the response so that it follows all guidelines while making only necessary changes and keeping the rest of the text unchanged. The rewrite must not break any of the guidelines that were already followed in the response.
You may rewrite the text exactly once, and don't output anything other than the re-written response.
You must enclose the re-written response in <START_OF_REWRITE> <END_OF_REWRITE> tags.<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>

B.3 MapReduce

<|begin_of_text|><|start_header_id|>system<|end_header_id|>
You are an editor and your task is to generate a response according to a user query and to ensure compliance with a set of guidelines.<|eot_id|>
<|start_header_id|>user<|end_header_id|>The following text shows the user query, a set of per-guideline responses, and the set of guidelines. Each per-guideline response is an example of a response that complies with the associated guideline.

<START_OF_QUERY>
${query}
<END_OF_QUERY>
${per_guideline_responses}
<START_OF_GUIDELINES>
${instr}
<END_OF_GUIDELINES>

Generate a response so that it follows all the guidelines. Use the per-guideline responses to help generate the final response that complies with all the guidelines.
Don't output anything other than the re-written response.
You must enclose the re-written response in <START_OF_REWRITE> <END_OF_REWRITE> tags.<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>

B.4 Best-of-N Gen

<|begin_of_text|><|start_header_id|>system<|end_header_id|>
You are a helpful assistant and your task is to respond to queries. Your response must follow a set of guidelines.<|eot_id|>
<|start_header_id|>user<|end_header_id|>Here is a query with a set of guidelines that must be followed.

<START_OF_QUERY>
${query}
Your response must follow these guidelines:
${instr}
<END_OF_QUERY>

Write a response that follows all guidelines and don't output anything other than the response.
You must enclose the response in <START_OF_RESPONSE> <END_OF_RESPONSE> tags.<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>

C Task Adherence Judge

<|begin_of_text|><|start_header_id|>system<|end_header_id|>
You are a helpful assistant whose task is to respond to user queries.
Make sure that your response follows these instructions:

{instruction_list}<|eot_id|>
<|start_header_id|>user<|end_header_id|>
Question:
{prompt_request}<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
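To connect the templates above, a minimal Detect+Repair driver could look as follows. Here llm(prompt) is a placeholder for any completion endpoint, and the JSON and tag parsing are deliberately simplified; conveniently, the templates' ${...} placeholders line up with Python's string.Template syntax.

import json
import re
from string import Template

def detect_and_repair(llm, detect_tmpl: str, repair_tmpl: str,
                      query: str, instructions: list[str], response: str) -> str:
    # One Detect+Repair round: find violated instructions, then rewrite once.
    policies = "\n".join(instructions)
    report = llm(Template(detect_tmpl).safe_substitute(text=response, policies=policies))
    violated = [r["policy"] for r in json.loads(report) if r["answer"] == "no"]
    if not violated:
        return response  # nothing to repair
    rewrite = llm(Template(repair_tmpl).safe_substitute(
        query=query, instr=policies, text=response, policies="\n".join(violated)))
    match = re.search(r"<START_OF_REWRITE>(.*?)<END_OF_REWRITE>", rewrite, re.S)
    return match.group(1).strip() if match else response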
SADCHER: Scheduling using Attention-based Dynamic Coalitions of Heterogeneous Robots in Real-Time

Jakob Bichler1, Andreu Matoses Gimenez1, Javier Alonso-Mora1

Abstract—We present Sadcher, a real-time task assignment framework for heterogeneous multi-robot teams that incorporates dynamic coalition formation and task precedence constraints. Sadcher is trained through Imitation Learning and combines graph attention and transformers to predict assignment rewards between robots and tasks. Based on the predicted rewards, a relaxed bipartite matching step generates high-quality schedules with feasibility guarantees. We explicitly model robot and task positions, task durations, and robots' remaining processing times, enabling advanced temporal and spatial reasoning and generalization to environments with different spatiotemporal distributions compared to training. Trained on optimally solved small-scale instances, our method can scale to larger task sets and team sizes. Sadcher outperforms other learning-based and heuristic baselines on randomized, unseen problems for small and medium-sized teams with computation times suitable for real-time operation. We also explore sampling-based variants and evaluate scalability across robot and task counts. In addition, we release our dataset of 250,000 optimal schedules: autonomousrobots.nl/paper_websites/sadcher_MRTA/

I. INTRODUCTION

Autonomous multi-robot systems (MRS) are designed for complex, real-world settings, including earthquake disaster response scenarios [1], autonomous construction [2], production assembly processes [3], or search and rescue missions [4]. Interest in MRS and multi-robot task assignment (MRTA) has grown rapidly in recent years [5]. Efficient MRTA algorithms optimize resource usage and minimize operational time [6]. MRS improve performance and enhance system robustness, as a multi-robot team is more resilient against individual robot failures and performance bottlenecks [7], [8]. Using sub-teams of robots, i.e., dynamic coalition formation, enables teams to tackle complex tasks that would otherwise be infeasible for a single robot [8]-[10]. In practice, relying on a team of homogeneous robots where each robot possesses all skills can become impractical if task requirements are highly diverse, involving different sensors and actuators [11]. Instead, using heterogeneous robots brings practical and economic advantages, by leveraging existing specialized robots [12] and deploying simpler robots that are more cost-effective to implement and maintain [8] and more robust to failures [6]. MRS operate in dynamic environments where sudden changes, new tasks, unexpected task requirements, robot malfunctions, or moving obstacles can occur [5]. Hence, the ability to adaptively replan in real-time is essential. Modeling precedence constraints, which impose a logical temporal sequence among tasks, further enhances applicability to real-world scenarios [13] where some tasks depend on the completion of prior tasks.

*This paper has been accepted for publication at the 2025 IEEE Int. Symposium on Multi-Robot & Multi-Agent Systems (MRS).
1Authors are with the Department for Cognitive Robotics, ME, Delft University of Technology, Delft, Netherlands.

Fig. 1. Illustrative use case of autonomous construction on Mars. Circles represent tasks, color indicates required skills. Robot skills are shown as colored squares. Tasks requiring skills no single robot has (e.g., search for material in top left) must be executed by a synchronized coalition of robots.
Motivated by these challenges, this paper proposes Sadcher, a framework for real-time scheduling of heterogeneous multi-robot teams with dynamic coalition formation and precedence constraints. Our main contributions are:
• A learning-based model, combining graph attention networks and transformers, which accounts for robot/task positions, task dependencies and durations, robots' capabilities, and robots' remaining time to complete the current task. This enables advanced spatiotemporal reasoning and generalization.
• A dataset of 250,000 optimally solved small-scale problems, usable as demonstrations for imitation learning or to benchmark against optimal schedules.

II. RELATED WORK

Following the taxonomy introduced in [14] and extended in [15], MRTA problems can be categorized on four axes: (1) single- or multi-task robots (ST/MT), (2) single- or multi-robot tasks (SR/MR), (3) instantaneous or time-extended assignment (IA/TA), where TA incorporates information about future tasks and scheduling decisions, and (4) interdependence of agent-task utilities, i.e., with cross-dependencies (XD), an agent's utility for a task depends on the tasks assigned to other agents. This work addresses ST-MR-TA-XD settings.

A. Conventional Methods

Mixed Integer Linear Programming (MILP) offers exact solutions for complex ST-MR-TA-XD problems [16], though its exponential runtime hinders real-time use. The MILP-based CTAS framework [11] explicitly models risk in agent capability for task decomposition and scheduling. Simpler heterogeneous ST-SR scenarios can be addressed with the Tercio algorithm [3], which uses an MILP-based task allocator and a polynomial-runtime task scheduler. Auction algorithms like [4], [17] treat tasks as non-atomic: task execution can be incremental, not requiring coalition formation. [18] uses auctions to solve heterogeneous ST-MR problems with atomic tasks. Genetic Algorithms offer anytime solutions that balance exploration and exploitation: [19] tackles heterogeneous ST-SR, [20] focuses on coalition formation of homogeneous robots, while [12] can handle heterogeneity and coalition formation. Other optimization metaheuristics applied to heterogeneous ST-MR include Ant Colony Optimization [10] and Particle Swarm Optimization [21]. Greedy formulations like [6] employ construction and improvement heuristics to balance runtime and performance.

B. Learning-based Methods

Deep learning methods promise fast solution generation and good scalability by offloading most of the computation to the training phase [22]. Reinforcement Learning (RL) does not require a training dataset, but might spend a lot of time on infeasible solutions [23]. RL is used to solve ST-SR problems with mildly heterogeneous robots (robots differ in efficiency but can all perform any task) in [24] and [25]. Other RL methods solve ST-MR problems with dynamic coalition formation, but only for homogeneous robots [26], [27]. Recently, RL has been used to tackle heterogeneous ST-MR problems with dynamic coalition formation in [28]. The authors mitigate some of the problems RL faces through a flash-forward mechanism which allows for decision reordering to avoid deadlocks during training.
Instead of RL, other methods use Imitation Learning (IL) from optimal solutions during training, which requires a (computationally expensive) expert dataset but benefits from stable training. [23] presents an IL method for mildly heterogeneous robots without coalition formation (ST-SR). Both [29] and [30] address heterogeneous ST-MR problems with a network predicting task assignment rewards and a bipartite matching algorithm yielding task assignments based on these rewards. In [29], coalition formation is only considered if a task fails to be completed by a single robot. There are cases in which a lower-cost schedule could be obtained by considering coalition formation for all tasks, e.g., when two robots are faster at completing a task than one. [30] improves upon this by always considering coalition formation. Furthermore, they introduce voluntary waiting, which increases performance by enabling better future coalition formation through delayed task assignments. However, [30] omits locations and durations in their network architecture. This implicitly assumes task durations and travel times to be negligible or to match the training distribution.

In this paper, we extend previous IL methods by explicitly modeling robot/task positions, task durations, and robots' remaining time to complete the current task. This enables advanced spatiotemporal reasoning, e.g., synchronizing robot arrivals and anticipating task readiness and robot availability. Additionally, it supports generalization to environments with unseen spatiotemporal distributions.

III. PROBLEM STATEMENT

Notation. Matrices are boldface uppercase (e.g., M ∈ R^{n×m}), vectors are boldface lowercase (e.g., v ∈ R^d), and scalars are lowercase (e.g., s).

We model a system of N heterogeneous robots, M tasks, and a set of skills S. Each robot is capable of performing a subset of S, and each task requires a subset of S to be performed at its location for the given task duration. The N heterogeneous robots, with S_i ⊆ S distinct skills, are modeled as an undirected graph G_r = (R, C), where each vertex in R = {r_i}_{i=1}^N is a robot with d_r dimensions. Robot states r_i = [p_i^r, t_i^r, a_i^r, c_i^r] include the position p_i^r ∈ R^2, the remaining duration at the current task t_i^r, the robot's availability a_i^r ∈ {0, 1}, and the binary capability vector over the global skill set c_i^r ∈ {0, 1}^{|S|}. C ∈ {0, 1}^{N×N} represents the network connection among the robots. For simplicity, we assume a fully connected graph, so C_{i,j} = 1 ∀i, j, but the model is designed to accept any connected graph as input.

The M tasks and their respective precedence constraints are represented as a directed acyclic graph G_t = (T, P). Each task is a vertex in T = {t_j}_{j=1}^M with d_t dimensions and is described by t_j = [p_j^t, t_j^t, r_j^t, s_j^t], with position p_j^t ∈ R^2, expected duration t_j^t, required skills r_j^t ∈ {0, 1}^{|S|}, and status s_j^t ∈ {0, 1}^3. The status indicates whether a task is ready, assigned, or incomplete; e.g., s_j^t = [1, 0, 1] represents a task that is ready to be scheduled, currently not assigned, and incomplete. Precedence constraints are encoded in the edge matrix P ∈ {0, 1}^{M×M}, where P_{i,j} = 1 means the i-th task is a predecessor of the j-th task. The j-th task is only ready to be scheduled if all its preceding tasks have been completed. A task can only commence when all required skills are covered by the dynamically formed coalition of robots assigned to it. This can be denoted as c_C ⪰ r_j^t, where c_C is the element-wise sum of the capability vectors c_i^r of the assigned robots and ⪰ is the element-wise greater-or-equal operator.
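The two feasibility tests just introduced (skill coverage c_C ⪰ r_j^t and precedence-based readiness) are straightforward to state in code. The following NumPy sketch is illustrative only; function and variable names are ours, not the paper's.

```python
import numpy as np
from typing import List

def coalition_covers(task_req: np.ndarray, robot_caps: List[np.ndarray]) -> bool:
    """Element-wise test c_C >= r_j^t: the summed capability vectors of
    the assigned coalition must cover every skill the task requires."""
    c_coalition = np.sum(robot_caps, axis=0)
    return bool(np.all(c_coalition >= task_req))

def ready_tasks(P: np.ndarray, completed: np.ndarray) -> np.ndarray:
    """A task j is ready iff all predecessors i (P[i, j] = 1) are completed
    and j itself is not yet completed."""
    blocked = (P.astype(bool) & ~completed[:, None]).any(axis=0)
    return ~blocked & ~completed

# Example with skills S = {A, B, C}: a task needing A and C is covered
# by a coalition of one {A, B}-robot and one {C}-robot.
task = np.array([1, 0, 1])
print(coalition_covers(task, [np.array([1, 1, 0]), np.array([0, 0, 1])]))  # True
```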
The tasks require tightly coupled coalitions [20]: all robots have to be present at the task location for the entire execution duration. Furthermore, we introduce an idle task t_{M+1} that robots can choose, to increase overall performance by delaying assignments until a better coalition can be formed. Robots start at location p_i^start and end at p_i^end. The cost function aims to minimize the makespan, defined as the latest arrival time of any robot at p_i^end after completing its tasks:

min max_{i ∈ {1,...,N}} [ t_i^finish + τ(p_i^finish, p_i^end) ]    (1)

where t_i^finish is the time robot i finishes its final task, computed as the sum of its execution times, idling times, and travel times, and τ(p_i^finish, p_i^end) is the travel time from the location of the last finished task p_i^finish to the end location p_i^end. Travel times can be estimated using Euclidean distance or path planning algorithms that take obstacles into account.

IV. METHOD

The Sadcher framework consists of a neural network, based on attention mechanisms, that predicts assignment rewards of robots to tasks and is agnostic to the size of the input graphs, i.e., it can handle arbitrary numbers of robots and tasks. A relaxed bipartite matching algorithm extracts task assignments based on the predicted reward. During runtime, the method asynchronously recomputes assignments at decision steps, i.e., when robots finish tasks or new tasks are announced.

A. Network Architecture

The high-level network structure is depicted in Fig. 2 and is similar to [30], but extended with a distance multilayer perceptron (MLP) that informs the network about relative distances between robots and tasks, and separate heads for predicting rewards for "normal" tasks and the idle action. The key components of the network are graph attention encoders (GAT) [31], transformer blocks [32], and reward MLPs that project latent embeddings into a reward matrix.

Fig. 2. Sadcher architecture overview. Robot and task graphs are processed by graph attention and transformer encoders and concatenated with distance features. The reward matrix is estimated by the Idle and Reward MLPs and final task assignments are extracted using relaxed bipartite matching. B: batch size, N: number of robots, M: number of tasks, d_r: robot input dimension, d_t: task input dimension, d: latent dimension.

1) Graph Attention (GAT) Encoder Blocks: After mapping robot features r_i and task features t_j into d-dimensional embeddings, the embedded robot and task features are processed by separate GATs to capture local, information-rich latent representations of the input graphs. GATs process a set of node features, incorporating information from neighboring nodes based on an adjacency matrix. While the robot features are processed as a fully connected graph, assuming all-to-all attention, the task GAT leverages the precedence constraints encoded in the adjacency matrix to understand the temporal task logic. A single head of the GAT computes attention weights α_{i,j} between node i and its neighbors j, based on a projected feature vector h' = h W_h (where h is the input node feature and W_h is a learned weight matrix):

α_{i,j} = exp(LeakyReLU(a([h'_i || h'_j]))) / Σ_{k ∈ N_i ∪ {i}} exp(LeakyReLU(a([h'_i || h'_k])))    (2)

Here, a is a learnable linear transformation, || denotes concatenation, and N_i is the set of neighbors of node i. A LeakyReLU [33] in combination with a softmax function over the neighbors of node i yields the final α_{i,j}. The resulting α_{i,j} represent the relative importance of node j to node i, enabling context-aware feature propagation. Spatiotemporally related tasks or robots with complementary skills will attend more strongly to each other. The output h_i^GAT for a single head at node i is a sum of a self-loop contribution and the transformed neighbor contributions:

h_i^GAT = α_{i,i} h'_i + LeakyReLU( Σ_{j ∈ N_i, j ≠ i} α_{i,j} h'_j )    (3)
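For readers who prefer code, here is a small NumPy sketch of one GAT attention head following Eqs. (2)-(3). The shapes, the split of the attention vector a into source and destination halves, and the LeakyReLU slope are assumptions of this sketch, not details taken from the paper.

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def gat_head(H, A, Wh, a):
    """One GAT attention head per Eqs. (2)-(3).
    H: (n, d_in) node features; A: (n, n) binary adjacency;
    Wh: (d_in, d) projection; a: (2*d,) attention vector."""
    n = H.shape[0]
    Hp = H @ Wh                                   # h' = h W_h
    d = Hp.shape[1]
    # e_ij = LeakyReLU(a^T [h'_i || h'_j]), with a split as [a_src; a_dst]
    s_src, s_dst = Hp @ a[:d], Hp @ a[d:]
    logits = leaky_relu(s_src[:, None] + s_dst[None, :])
    mask = (A + np.eye(n)) > 0                    # neighbors of i plus i itself
    logits = np.where(mask, logits, -np.inf)
    alpha = np.exp(logits - logits.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)     # softmax over N_i ∪ {i}, Eq. (2)
    diag = np.diag(alpha)[:, None]                # self-loop coefficients α_ii
    off = alpha * (1.0 - np.eye(n))               # neighbor coefficients α_ij, j ≠ i
    return diag * Hp + leaky_relu(off @ Hp)       # Eq. (3)

rng = np.random.default_rng(0)
out = gat_head(rng.normal(size=(4, 8)), np.ones((4, 4)) - np.eye(4),
               rng.normal(size=(8, 16)), rng.normal(size=32))
print(out.shape)  # (4, 16)
```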
In the GAT encoder blocks, we apply multi-head GAT, concatenating the outputs of Z_GAT independent heads and applying residual connections and layer normalization. The GAT encoder consists of L_GAT such layers and outputs h_R^GAT for the robots and h_T^GAT for the tasks.

2) Transformer Encoder Blocks: Following the GAT encoders, the representations h_R^GAT and h_T^GAT are processed by independent transformer encoders to build the global context of robots and tasks. Each transformer block applies multi-head self-attention (MHA) [32] on the input h:

α_z = Softmax( (W_z^Q h)(W_z^K h)^⊤ / √d ) (W_z^V h)    (4)

MHA(h) = [α_1 || α_2 || ... || α_Z] W^O    (5)

where d is the key dimensionality. MHA computes this operation in parallel for Z_T heads to generate the final outputs h_R^T for the robots and h_T^T for the tasks.

3) Reward Prediction: The normalized relative distance between robot i and task j is passed through the distance head MLP_D to compute the distance feature d_{i,j}:

d_{i,j} = MLP_D( Normalize(||p_i^R − p_j^T||_2) )    (6)

While task and robot positions are part of the raw input features in G_r and G_t, this explicit distance term provides the network with direct access to spatial proximity. We construct feature vectors f_{i,j} for each robot-task pair by concatenating the local (GAT) and global (transformer) representations of robot i and task j with the distance term d_{i,j}, so f_{i,j} ∈ R^{4d_k+1}:

f_{i,j} = h_{R,i}^GAT || h_{T,j}^GAT || h_{R,i}^T || h_{T,j}^T || d_{i,j}    (7)

This information-rich representation is then passed through the reward head MLP_R to compute the reward R_{i,j} for assigning robot i to task j. The idle reward R_i^IDLE is computed by passing f_{i,j} through the idle head MLP_I and summing the outputs across all tasks for each robot i:

R_{i,j}^task = MLP_R(f_{i,j}),    R_i^IDLE = Σ_{j=1}^{M} MLP_I(f_{i,j})    (8)

MLP_I can be understood as learning per-task signals that encourage a robot to wait when short-term idling is advantageous (e.g., a nearby task will become ready soon). The final predicted reward matrix R contains the task rewards R_{i,j}^task for all pairs of robots i and tasks j, concatenated with the idle rewards R_i^IDLE for each robot i, so R ∈ R^{N×(M+1)}.

B. Task Assignment through Bipartite Matching

The final reward R can be interpreted as the edge rewards between robots R and tasks T at a given timestep, encoding the full complexity of the current problem state. To extract task assignments at this timestep, we employ a relaxed bipartite matching formulation (no strict one-to-one matching). The matching finds the assignment matrix A* ∈ R^{N×(M+1)} that maximizes the selected edge reward encoded in R:

A* = arg max_A Σ_{i,j} A_{i,j} R_{i,j}    (9)

subject to:

Σ_{j=1}^{M+1} A_{i,j} ≤ 1,  ∀i ∈ R    (10)

Σ_{i=1}^{N} A_{i,j} c_i^r ⪰ r_j^t,  ∀j ∈ T    (11)

A_{i,j} = 0,  ∀i, j : i ∉ R_idle ∨ j ∉ T_ready    (12)

The constraints ensure valid assignments: (10) prevents robots from being assigned more than one task, (11) guarantees that each task's required skills are fully covered by the assigned coalition using the element-wise inequality ⪰, and (12) enforces that only idle robots and ready tasks are matched. This formulation prevents deadlocks, since no coalition can be assigned to a task that it cannot execute.
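To illustrate how Eqs. (9)-(12) could be solved in practice, here is a small ILP sketch. The choice of PuLP/CBC is ours (the paper does not name a solver), and enforcing the skill-coverage constraint (11) only for tasks that actually receive robots, via an auxiliary indicator y_j, is an interpretive choice of this sketch.

```python
import pulp

def relaxed_matching(R, caps, req, idle, ready):
    """ILP sketch of the relaxed bipartite matching, Eqs. (9)-(12).
    R: N x (M+1) rewards, last column = idle task t_{M+1};
    caps[i], req[j]: binary skill vectors; idle[i], ready[j]: flags."""
    N, M, S = len(R), len(req), len(req[0])
    prob = pulp.LpProblem("sadcher_matching", pulp.LpMaximize)
    A = pulp.LpVariable.dicts(
        "A", [(i, j) for i in range(N) for j in range(M + 1)], cat="Binary")
    y = pulp.LpVariable.dicts("y", range(M), cat="Binary")  # task j selected
    # Eq. (9): maximize the selected edge reward
    prob += pulp.lpSum(R[i][j] * A[i, j] for i in range(N) for j in range(M + 1))
    for i in range(N):  # Eq. (10): at most one task per robot
        prob += pulp.lpSum(A[i, j] for j in range(M + 1)) <= 1
    for j in range(M):  # Eq. (11): assigned coalitions must cover all skills
        for i in range(N):
            prob += A[i, j] <= y[j]
        for s in range(S):
            prob += (pulp.lpSum(caps[i][s] * A[i, j] for i in range(N))
                     >= req[j][s] * y[j])
    for i in range(N):  # Eq. (12): only idle robots and ready tasks
        for j in range(M + 1):
            if not idle[i] or (j < M and not ready[j]):
                prob += A[i, j] == 0
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [(i, j) for i in range(N) for j in range(M + 1)
            if pulp.value(A[i, j]) > 0.5]
```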
However, the formulation allows for redundant assignments, so after computing A* we remove robots that do not contribute unique required skills, starting from the robot with the highest travel time to the task. Additionally, we implement a pre-moving strategy: if robot r_i is assigned the idle task t_{M+1}, it moves towards the task with the highest reward, t_highest^i = arg max_{1≤j≤M} R_{i,j}, without being formally assigned to it. This does not fix the task into the robot's schedule, since assignments are recomputed at decision steps. The robot is likely to be assigned to t_highest^i at the next decision step, so pre-moving can reduce the delay to task start if r_i would have been the last coalition member to arrive at t_highest^i.

C. Training through Imitation Learning

1) Training Data Generation: We generate 250,000 small-scale problem instances (8 tasks, 3 robots, 3 skills) with fully randomized configurations: each robot is assigned 1-3 skills, each task requires 1-3 skills, and task locations and the robot start/end depot lie in [0, 100] × [0, 100] ⊂ R^2 and are randomly sampled. Execution times are drawn uniformly from [50, 100], precedence constraints are acyclic and generated between random task pairs, and robot travel speed is assumed to be 1 unit per timestep. To solve these scenarios optimally, we extend the exact MILP formulation of [16] with precedence constraints. Due to its exponential time complexity, both in the number of robots and tasks, only small instances can be solved in a reasonable time to generate a training dataset. We omit modelling of stochastic travel times, as our framework handles deviations via real-time replanning and does not need conservative safety margins.

2) Optimal Reward Extraction: To train the network to imitate the optimal behavior, we extract "ground-truth" reward matrices O_k ∈ R^{N×(M+1)}. The optimal schedules are sliced into K decision points T_k^dec, corresponding to timesteps when a task finishes and the robots require reassignment. At each T_k^dec the optimal reward is calculated based on the time difference between T_k^dec and the finish time of task j, with discount factor γ ∈ (0, 1]:

O_{k,i,j} = γ^{(T_j^finish − T_k^dec)} o_{k,i,j}    (13)

where o_{k,i,j} = 1 if robot i is assigned to task j in the optimal solution and the decision point occurs before the task's start time (T_k^dec < T_{i,j,start}); otherwise, o_{k,i,j} = 0. We handle the idle action t_{M+1} in the same way: if the time between a robot's last finish time and its next start time exceeds the travel time between the corresponding tasks, we treat this interval as an explicit idle assignment in the optimal schedule and compute its reward using the above formulas. By design, this reward encoding captures the optimal decision logic: the next selected task will have the highest reward, with decreasing rewards for later tasks. Given the sequence of optimal rewards O_k over all decision steps T_k^dec, the bipartite matching algorithm outputs the exact solution.
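A possible reading of Eq. (13) in code, for a single decision point, is given below. The discount factor value and all names are illustrative; the paper only specifies γ ∈ (0, 1].

```python
import numpy as np

def target_rewards(T_dec, finish, start, assigned, gamma=0.99):
    """Ground-truth reward matrix O_k at decision point T_dec, Eq. (13).
    finish[j]: optimal finish time of task j; start[i, j]: optimal start
    time of robot i on task j (np.inf if never assigned); assigned[i, j]:
    1 if robot i works on task j in the optimal schedule."""
    N, M = assigned.shape
    O = np.zeros((N, M))
    for i in range(N):
        for j in range(M):
            # o_{k,i,j} = 1 only if assigned and the decision point
            # falls before the task's start time
            if assigned[i, j] and T_dec < start[i, j]:
                O[i, j] = gamma ** (finish[j] - T_dec)
    return O
```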
3) Training Details: We modify the loss L from [30] by applying the inverse mask (1 − X_k) to the second term:

L = ||X_k ∘ (R_k − O_k)||_1 + λ ||(1 − X_k) ∘ (R_k − O_k)||_1    (14)

where ∘ denotes the element-wise product, O_k is the optimal reward, R_k is the predicted reward, and X_k ∈ R^{N×(M+1)} is a feasibility mask with X_{i,j} = 1 if robot i is available and task j is ready, else X_{i,j} = 0. The first term encourages accurate prediction of feasible rewards, while the second discourages high values for infeasible ones. λ balances the two terms: intuitively, accurate feasible predictions are more important than suppressing infeasible ones, as the bipartite matching will select high-reward tasks. We use the ADAM optimizer [34] to train the network.

V. EXPERIMENTS AND RESULTS

We evaluate makespan and computation time metrics for six algorithms, averaged over 500 unseen problem instances with randomized task locations/durations, skill requirements, robot capabilities, and precedence constraints. Experiments are conducted on a consumer machine with an AMD Ryzen 7 4800H CPU and NVIDIA GeForce GTX 1650 GPU.

A. Compared Algorithms

1) Baselines: Few algorithms in the literature address heterogeneous ST-MR-TA-XD with precedence constraints, and published implementations often lack weights or datasets [30]. We chose one available algorithm for each of the commonly followed approaches (optimization, learning, heuristic search). (1) An MILP formulation based on [16], adding precedence constraints and omitting stochastic travel times, which provides optimal solutions with formal guarantees. Baselines (2) and (3) build on the decentralized RL framework HeteroMRTA [28], adapted for precedence constraints by masking out tasks that have incomplete predecessors during action selection: (2) is the single-solution variant HeteroMRTA, where agents choose the highest-probability task at decision steps, and (3) is the sampling variant S-HeteroMRTA (Boltzmann weighted-random action selection), which returns the best-makespan solution across 10 runs per instance. We also implement and compare (4), a greedy heuristic that assigns robots to tasks based on reducing the remaining skill requirements the most, breaking ties by travel time (shortest first).

2) Sadcher Variants: (5) The Sadcher framework predicts robot-task rewards deterministically, as described in Section IV. We also benchmark (6), the S-Sadcher variant, which samples reward matrices from a normal distribution centered on the deterministic output; the sampled matrices are then used by the bipartite matching. This introduces stochastic variations in the schedules. As for S-HeteroMRTA, we run this process 10 times per instance and select the best-performing rollout.

B. Training-Domain Evaluation

We evaluate the algorithms on 500 randomized problem instances of the training-domain size (8 tasks, 3 robots, 3 precedence constraints). Results are shown in Figs. 3 and 4.

Fig. 3. Comparison on 500 unseen, randomized problem instances (8 tasks, 3 robots, 3 precedence constraints) for makespan (left) and computation time (right). Lower means better performance. Wilcoxon significance levels are annotated for Sadcher compared to the HeteroMRTA variants. All other pairwise differences are statistically significant (p < 0.05), except between S-HeteroMRTA and Greedy (p = 0.21). For algorithms requiring full solution construction, total computation time is reported; for methods returning instantaneous assignments, both time per decision and total time are shown.

Fig. 4. Pairwise makespan comparison of HeteroMRTA vs. Sadcher (left) and S-HeteroMRTA vs. S-Sadcher (right). Each point is one solved instance; points below the diagonal indicate better performance by (S-)Sadcher.

1) Makespan: The MILP formulation provides optimal makespans, establishing a baseline for comparing the average relative gaps of the other methods. S-Sadcher (gap: 3.8%) and Sadcher (gap: 6.8%) are the best-performing non-optimal algorithms. HeteroMRTA performs worst (gap: 21.5%), but sampling reduces its optimality gap to 10.8%, leveraging its RL policy, which follows a sampling strategy during training. In the pairwise comparison in Fig. 4, Sadcher achieves a lower makespan on 403 of 500 instances (80.6%, binomial test: p ≈ 2 × 10^−45). S-Sadcher outperforms S-HeteroMRTA on 389 of 500 instances (77.8%, binomial test: p ≈ 3 × 10^−37).
Greedy reaches an average gap of 20.4%.

2) Computation Time: For dynamic scenarios with real-time requirements, the time per assignment decision (t_dec) is crucial. MILP cannot compute instantaneous assignments, only globally optimal schedules. S-HeteroMRTA and S-Sadcher roll out the full scenario to select the best assignments. Therefore, these three algorithms do not yield a time per decision, only a time for full solution construction (t_full). Due to its simplicity, the greedy algorithm computes the fastest (t_dec: 0.080 ms; t_full: 1.7 ms). HeteroMRTA (t_dec: 9.1 ms; t_full: 0.20 s) is faster than Sadcher (t_dec: 22 ms; t_full: 0.57 s), which needs to solve the relatively expensive bipartite matching for each decision. S-HeteroMRTA computes full solutions in 0.96 s, S-Sadcher in 5.7 s, and MILP in 76 s. In the worst case, MILP takes up to 12 minutes, rendering it infeasible for real-time applications, even on small problems.

3) Precedence Constraints: The Sadcher model demonstrates an understanding of task dependencies by prioritizing the assignment of predecessor tasks. This improves performance by unlocking successors earlier and enabling better global schedules. On average, the model assigns ready predecessor tasks approximately 1.7 times more frequently than the baselines. (S-)HeteroMRTA and Greedy cannot make this informed decision, but selecting tasks with incomplete predecessors is prevented through masking.

C. Out-of-Domain Generalization

To evaluate generalization, we scale the number of robots N ∈ {3, 5, 20} and the number of tasks M ∈ [6, 250] (see Fig. 5) and compare the makespan gap relative to Sadcher (trained on N=3, M=8). With a 1-hour cutoff, the MILP solver fails to find solutions beyond 10 tasks and 7 robots. For smaller problem sizes, it finds the optimal makespans, outperforming Sadcher by 6-16%.

Fig. 5. Relative makespan gap to Sadcher for 3, 5, and 20 robots (top left to bottom left). Bottom right: computation time for 3 robots (for algorithms requiring full solution construction (S-HeteroMRTA, S-Sadcher, MILP), total computation time is reported; for methods returning instantaneous assignments (HeteroMRTA, Sadcher, Greedy), time per decision is reported). Task counts M ∈ [6, 250], with S = 3 skills and M/5 precedence constraints. Each point shows the mean over 100 runs.

For N=3 robots, S-Sadcher and Sadcher are the strongest non-optimal methods across all M. S-HeteroMRTA outperforms Greedy for M ≤ 60, and HeteroMRTA finds the highest makespans overall. Although Sadcher's relative performance is best for small M, it still outperforms Greedy by more than 4% and (S-)HeteroMRTA by more than 9% at M=250.

For N=5 robots, S-Sadcher remains the best learning-based method across all M. S-HeteroMRTA performs better than Sadcher for M ≤ 9, but is surpassed by Sadcher beyond that, and by Greedy for M ≥ 70. HeteroMRTA outperforms Greedy for 7 ≤ M ≤ 10. Greedy reaches a 2% gap at M=200.

For N=20 robots, relative performance changes significantly, with degradation of the learning-based methods: S-Sadcher is the best-performing method only for M ≤ 70; beyond that, Greedy becomes superior. S-HeteroMRTA consistently beats Sadcher, yet only outperforms Greedy for M ≤ 50. HeteroMRTA surpasses Sadcher for M ≥ 150, and the performance gap between the two for M ≤ 100 is smaller than in scenarios with fewer robots.

Overall, Sadcher excels on smaller robot teams across all task counts, but its performance decreases with more robots. We hypothesize that increasing the number of tasks while keeping the number of robots fixed is similar to solving multiple smaller subproblems sequentially, where local scheduling rules learned during training remain effective.
On the other hand, larger teams require different local scheduling strategies that diverge from the distribution Sadcher has seen during training. For high task counts, the greedy algorithm, especially in combination with bigger robot teams, starts beating the learning-based methods. Sampling-based variants (S-Sadcher, S-HeteroMRTA) have a higher impact on smaller problems, where the smaller solution space makes rollouts more likely to yield improvements. Greedy computes fastest, delivering near-instantaneous decisions (≤ 3 ms). The computation time of HeteroMRTA is minimally affected by scaling (≤ 20 ms), while Sadcher is slower and scales worse due to the bipartite matching step (≤ 80 ms per decision). The sampling-based variants require significantly longer computation (up to 450 s for S-Sadcher and 40 s for S-HeteroMRTA at M=250), which makes them impractical for online computation on large problems.

VI. CONCLUSION

In this work, we proposed Sadcher, an IL framework for real-time task assignment in heterogeneous multi-robot teams, incorporating dynamic coalition formation and precedence constraints. Reward prediction with relaxed bipartite matching yields strong performance with feasibility guarantees. Sadcher outperforms RL-based and heuristic baselines in makespan across small to medium-sized robot teams and a wide range of task counts. For bigger teams, the advantage is lost due to the lack of demonstrations. Sadcher can generate assignments in real-time across all tested problem sizes, but the sampling variant S-Sadcher is only real-time for smaller problems. Sadcher relies on a large (computationally expensive) dataset of expert demonstrations for training. Future work will explore fine-tuning IL policies with RL, which could increase performance on larger problem instances where expert solutions are very expensive or infeasible to obtain. Additionally, extending the dataset with sub-optimal demonstrations for bigger problem instances could improve scalability.

ACKNOWLEDGMENTS

This project has received funding from the European Union through ERC, INTERACT, under Grant 101041863. Views and opinions expressed are, however, those of the author(s) only and do not necessarily reflect those of the European Union. Neither the European Union nor the granting authority can be held responsible for them.

REFERENCES
[1] L. Zhang, M. Li, W. Yang, and S. Yang, "Task Allocation in Heterogeneous Multi-Robot Systems Based on Preference-Driven Hedonic Game," in 2024 IEEE International Conference on Robotics and Automation (ICRA), May 2024, pp. 8967–8972.
[2] W. Gosrich, S. Mayya, S. Narayan, M. Malencia, S. Agarwal, and V. Kumar, "Multi-Robot Coordination and Cooperation with Task Precedence Relationships," May 2023.
[3] M. C. Gombolay, R. J. Wilcox, and J. A. Shah, "Fast Scheduling of Robot Teams Performing Tasks With Temporospatial Constraints," IEEE Transactions on Robotics, vol. 34, pp. 220–239, Feb. 2018.
[4] I. Ansari, A. Mohammed, Y. Ansari, M. Yusuf Ansari, S. Razak, and E. Feo Flushing, "CoLoSSI: Multi-Robot Task Allocation in Spatially-Distributed and Communication Restricted Environments," IEEE Access, vol. 12, 2024.
[5] H. Chakraa, F. Guérin, E. Leclercq, and D. Lefebvre, "Optimization techniques for Multi-Robot Task Allocation problems: Review on the state-of-the-art," Robotics and Autonomous Systems, vol. 168, p. 104492, Oct. 2023.
[6] E. Bischoff, F. Meyer, J. Inga, and S. Hohmann, "Multi-Robot Task Allocation and Scheduling Considering Cooperative Tasks and Precedence Constraints," May 2020.
[7] R. K. Ramachandran, J. A. Preiss, and G. S. Sukhatme, "Resilience by Reconfiguration: Exploiting Heterogeneity in Robot Teams," in 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Nov. 2019.
[8] A. Khamis, A. Hussein, and A. Elmogy, "Multi-robot Task Allocation: A Review of the State-of-the-Art," in Cooperative Robots and Sensor Networks 2015, A. Koubâa and J. Martínez-de Dios, Eds. Cham: Springer International Publishing, 2015, pp. 31–51.
[9] F. Quinton, C. Grand, and C. Lesire, "Market Approaches to the Multi-Robot Task Allocation Problem: a Survey," Journal of Intelligent & Robotic Systems, vol. 107, no. 2, p. 29, Feb. 2023.
[10] W. Babincsak, A. Aswale, and C. Pinciroli, "Ant Colony Optimization for Heterogeneous Coalition Formation and Scheduling with Multi-Skilled Robots," in 2023 International Symposium on Multi-Robot and Multi-Agent Systems (MRS), Dec. 2023, pp. 121–127.
[11] B. Fu, W. Smith, D. Rizzo, M. Castanier, M. Ghaffari, and K. Barton, "Robust Task Scheduling for Heterogeneous Robot Teams under Capability Uncertainty," IEEE Transactions on Robotics, June 2021.
[12] P. Muhuri and A. Rauniyar, "Immigrants Based Adaptive Genetic Algorithms for Task Allocation in Multi-Robot Systems," International Journal of Computational Intelligence and Applications, vol. 16, p. 1750025, Dec. 2017.
[13] M. Gini, "Multi-Robot Allocation of Tasks with Temporal and Ordering Constraints," Proceedings of the AAAI Conference on Artificial Intelligence, vol. 31, no. 1, Feb. 2017.
[14] B. P. Gerkey and M. J. Matarić, "A Formal Analysis and Taxonomy of Task Allocation in Multi-Robot Systems," The International Journal of Robotics Research, vol. 23, no. 9, pp. 939–954, Sept. 2004.
[15] G. A. Korsah, A. Stentz, and M. B. Dias, "A comprehensive taxonomy for multi-robot task allocation," The International Journal of Robotics Research, vol. 32, no. 12, pp. 1495–1512, Oct. 2013.
[16] A. Aswale and C. Pinciroli, "Heterogeneous Coalition Formation and Scheduling with Multi-Skilled Robots," June 2023.
[17] I. Ansari, A. Mohamed, E. F. Flushing, and S. Razak, "Cooperative and load-balancing auctions for heterogeneous multi-robot teams dealing with spatial and non-atomic tasks," in 2020 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), Nov. 2020.
[18] M. Irfan and A. Farooq, "Auction-based task allocation scheme for dynamic coalition formations in limited robotic swarms with heterogeneous capabilities," in 2016 International Conference on Intelligent Systems Engineering (ICISE), Jan. 2016, pp. 210–215.
[19] H. Chakraa, E. Leclercq, F. Guérin, and D. Lefebvre, "A Centralized Task Allocation Algorithm for a Multi-Robot Inspection Mission With Sensing Specifications," IEEE Access, vol. 11, 2023.
[20] M. U. Arif, "Robot coalition formation against time-extended multi-robot tasks," International Journal of Intelligent Unmanned Systems, vol. 10, pp. 468–481, June 2021.
[21] X.-F. Liu, Y. Fang, Z.-H. Zhan, and J. Zhang, "Strength Learning Particle Swarm Optimization for Multiobjective Multirobot Task Scheduling," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 53, no. 7, pp. 4052–4063, July 2023.
[22] A. Prorok, J. Blumenkamp, Q. Li, R. Kortvelesy, Z. Liu, and E. Stump, "The Holy Grail of Multi-Robot Planning: Learning to Generate Online-Scalable Solutions from Offline-Optimal Experts," July 2021.
[23] Z. Wang, C. Liu, and M. Gombolay, "Heterogeneous graph attention networks for scalable multi-robot scheduling with temporospatial constraints," Autonomous Robots, vol. 46, no. 1, pp. 249–268, Jan. 2022.
[24] S. Paul, P. Ghassemi, and S. Chowdhury, "Learning Scalable Policies over Graphs for Multi-Robot Task Allocation using Capsule Attention Networks," May 2022.
[25] B. Altundas, Z. Wang, J. Bishop, and M. Gombolay, "Learning Coordination Policies over Heterogeneous Graphs for Human-Robot Teams via Recurrent Neural Schedule Propagation," in 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Oct. 2022.
[26] F. Deng, H. Huang, L. Fu, H. Yue, J. Zhang, Z. Wu, and T. L. Lam, "A Learning Approach to Multi-robot Task Allocation with Priority Constraints and Uncertainty," in 2022 IEEE International Conference on Industrial Technology (ICIT), Aug. 2022, pp. 1–8.
[27] W. Dai, A. Bidwai, and G. Sartoretti, "Dynamic Coalition Formation and Routing for Multirobot Task Allocation via Reinforcement Learning," in 2024 IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan, May 2024, pp. 16567–16573.
[28] W. Dai, U. Rai, J. Chiun, Y. Cao, and G. Sartoretti, "Heterogeneous Multi-robot Task Allocation and Scheduling via Reinforcement Learning," IEEE Robotics and Automation Letters, vol. 10, no. 3, pp. 2654–2661, Mar. 2025.
[29] P. Gao, S. Siva, A. Micciche, and H. Zhang, "Collaborative Scheduling with Adaptation to Failure for Heterogeneous Robot Teams," in 2023 IEEE International Conference on Robotics and Automation (ICRA), May 2023, pp. 1414–1420.
[30] W. J. Jose and H. Zhang, "Learning for Dynamic Subteaming and Voluntary Waiting in Heterogeneous Multi-Robot Collaborative Scheduling," IEEE International Conference on Robotics and Automation (ICRA), 2024.
[31] P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Liò, and Y. Bengio, "Graph Attention Networks," Feb. 2018.
[32] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, "Attention Is All You Need," June 2017.
[33] B. Xu, N. Wang, T. Chen, and M. Li, "Empirical Evaluation of Rectified Activations in Convolutional Network," Nov. 2015.
[34] D. P. Kingma and J. Ba, "Adam: A Method for Stochastic Optimization," Jan. 2017.
SADCHER: Scheduling using Attention-based Dynamic Coalitions of Heterogeneous Robots in Real-Time Jakob Bichler1, Andreu Matoses Gimenez1, Javier Alonso-Mora1 Abstract- We present Sadcher, a real-time task assignment framework for heterogeneous multi-robot teams that incorporates dynamic coalition formation and task precedence constraints. Sadcher is trained through Imitation Learning and combines graph attention and transformers to predict assignment rewards between robots and tasks. Based on the predicted rewards, a relaxed bipartite matching step generates high-quality schedules with feasibility guarantees. We explicitly model robot and task positions, task durations, and robots' remaining processing times, enabling advanced temporal and spatial reasoning and generalization to environments with different spatiotemporal distributions compared to training. Trained on optimally solved small-scale instances, our method can scale to larger task sets and team sizes. Sadcher outperforms other learning-based and heuristic baselines on randomized, unseen problems for small and medium-sized teams with computation times suitable for real-time operation. We also explore sampling-based variants and evaluate scalability across robot and task counts. In addition, we release our dataset of 250,000 optimal schedules: autonomousrobots.nl/paper_ websites/sadcher_MRTA/ § I. INTRODUCTION Autonomous multi-robot systems (MRS) are designed for complex, real-world settings, including earthquake disaster response scenarios [1], autonomous construction [2], production assembly processes [3], or search and rescue missions [4]. Interest in MRS and multi-robot task assignment (MRTA) has grown rapidly in recent years [5]. Efficient MRTA algorithms optimize resource usage and minimize operational time [6]. MRS improve performance and enhance system robustness as a multi-robot team is more resilient against individual robot failures and performance bottlenecks [7], [8]. Using sub-teams of robots, i.e., dynamic coalition formation, enables teams to tackle complex tasks that would otherwise be infeasible for a single robot [8]-[10]. In practice, relying on a team of homogeneous robots where each robot possesses all skills can become impractical if task requirements are highly diverse, involving different sensors and actuators [11]. Instead, using heterogeneous robots brings practical and economic advantages, by leveraging existing specialized robots [12] and deploying simpler robots that are more cost-effective to implement and maintain [8] and more robust to failures [6]. MRS operate in dynamic environments where sudden changes, new tasks, unexpected task requirements, robot malfunctions, or moving obstacles can occur [5]. Hence, the ability to adaptively replan in realtime is essential. Modeling precedence constraints, which *This paper has been accepted for publication at the 2025 IEEE Int. Symposium on Multi-Robot & Multi-Agent Systems (MRS). 1Authors are with the Department for Cognitive Robotics, ME, Delft . 1. Illustrative use case of autonomous construction on Mars. Circles represent tasks, color indicates required skills. Robot skills are shown as colored squares. Tasks requiring skills no single robot has (e.g., search for material in top left) must be executed by a synchronized coalition of robots. impose a logical temporal sequence among tasks, further enhances applicability to real-world scenarios [13] where some tasks depend on the completion of prior tasks. 
Motivated by these challenges this paper proposes Sadcher, a framework for real-time scheduling of heterogeneous multi-robot teams with dynamic coalition formation and precedence constraints. Our main contributions are: • A learning-based model, combining graph attention networks and transformers, which accounts for robot/task positions, task dependencies and durations, robots' capabilities, and robots' remaining time to complete the current task. This enables advanced spatiotemporal reasoning and generalization. • A dataset of 250,000 optimally solved small-scale problems, usable as demonstrations for imitation learning or to benchmark against optimal schedules. II. RELATED WORK Following the taxonomy introduced in [14] and extended in [15], MRTA problems can be categorized on 4 axes: (1) single- or multi-task robots (ST/MT), (2) single- or multirobot tasks (SR/MR), (3) instantaneous or time-extended assignment (IA/TA) where TA incorporates information about future tasks and scheduling decisions, (4) interdependence of agent-task utilities, i.e. with cross-dependencies (XD), an 16 Oct 2025 agent's utility for a task depends on the tasks assigned to other agents. This work addresses ST-MR-TA-XD settings. A. Conventional Methods Mixed Integer Linear Programming (MILP) offers exact solutions for complex ST-MR-TA-XD problems [16], though its exponential runtime hinders real-time use. The MILP-based CTAS framework [11] explicitly models risk in agent capability for task decomposition and scheduling. Simpler heterogeneous ST-SR scenarios can be addressed with the Tercio algorithm [3], which uses an MILP-based task allocator and a polynomial runtime task scheduler. Auction algorithms like [4], [17] treat tasks as non-atomic - tasks execution can be incremental, not requiring coalition formation. [18] uses auctions to solve heterogeneous STMR problems with atomic tasks. Genetic Algorithms offer anytime solutions that balance exploration and exploitation: [19] tackles heterogeneous ST-SR, [20] focusses on coalition formation of homogeneous robots, while [12] can handle heterogeneity and coalition formation. Other optimization metaheuristics applied to heterogeneous ST-MR include Ant Colony Optimization [10] and Particle Swarm Optimization [21]. Greedy formulations like [6] employ construction and improvement heuristics to balance runtime and performance. B. Learning-based Methods Deep learning methods promise fast solution generation and good scalability, by offloading most of the computation to the training phase [22]. Reinforcement Learning (RL) does not require a training dataset, but might spend a lot of time on infeasible solutions [23]. RL is used to solve ST-SR problems with mildly heterogeneous robots - robots differ in efficiency but can all perform any task - in [24] and [25]. Other RL methods solve ST-MR problems with dynamic coalition formation, but only for homogeneous robots [26], [27]. Recently, RL has been used to tackle heterogeneous STMR problems with dynamic coalition formation in [28]. The authors mitigate some of the problems RL faces through a flash-forward mechanism which allows for decision reordering to avoid deadlocks during training. Instead of RL, other methods use Imitation Learning (IL) from optimal solutions during training, which requires a (computationally expensive) expert dataset, but benefits from stable training. [23] presents an IL method for mildly heterogeneous robots without coalition formation (ST-SR). 
Both [29] and [30] address heterogeneous ST-MR problems with a network predicting task assignment rewards and a bipartite matching algorithm yielding task assignments based on these rewards. In [29], coalition formation is only considered if a task fails to be completed by a single robot. There are cases in which a lower cost schedule could be obtained by considering coalition formation for all tasks, e.g., two robots are faster at completing the task than one. [30] improves upon this by always considering coalition formation. Furthermore, they introduce voluntary waiting, which increases performance through enabling better future coalition formation by delaying task assignments. However, [30] omits locations and durations in their network architecture. This implicitly assumes task durations and travel times to be negligible or to match the training distribution. In this paper, we extend previous IL methods by explicitly modeling robot/task positions, task durations, and robots' remaining time to complete the current task. This enables advanced spatiotemporal reasoning, e.g., synchronizing robot arrivals and anticipating task readiness and robot availability. Additionally, it supports generalization to environments with unseen spatiotemporal distributions. III. PROBLEM STATEMENT Notation. Matrices are boldface uppercase (e.g. M ∈ IRn×m), vectors are boldface lowercase (e.g. v ∈IRd), and scalars are lowercase (e.g. s). We model a system of N heterogeneous robots, M tasks, and a set of skills S. Each robot is capable of performing a subset of S, and each task requires a subset of S to be performed at its location for the given task duration. The N heterogeneous robots with Si ⊆S distinct skills, are modeled as an undirected graph Gr = (R, C), where each vertex in R = {ri}N is a robot with dr dimensions. Robot states ri = [pr i , tr i , ar i , cr i ] include position pr i ∈ IR2, remaining duration at the current task tr i , the robot's availability ar i ∈{0, 1}, and the binary capability vector over the global skill set cr i ∈{0, 1}|S|. C ∈{0, 1}N×N represents the network connection among the robots. For simplicity, we assume a fully connected graph, so Ci,j = 1 ∀i, j, but the model is designed to accept any connected graph as input. The M tasks and their respective precedence constraints are represented as a directed acyclic graph Gt = (T , P). Each task is a vertex in T = {tj}M with dt dimensions, and is described by tj = pt j, tt j, rt j, st j , with position pr i ∈IR2, expected duration tt j, required skills rt j ∈{0, 1}|S| and status st j ∈{0, 1}3. The status indicates whether tasks are ready, assigned, or incomplete, e.g., st j = [1, 0, 1] represents a task that is ready to be scheduled, currently not assigned, and incomplete. Precedence constraints are encoded in the edges PM×M, where Pi,j = 1 means the i-th task is a predecessor of the j-th task. The j-th task is only ready to be scheduled if all its preceding tasks have been completed. A task can only commence when all required skills are covered by the dynamically formed coalition of robots assigned to it. This can be denoted as cC ⪰rt j where cC is the elementwise sum of robot capabilities cr i of assigned robots and ⪰is the element-wise greater-or-equal operator. The tasks require tightly coupled coalitions [20] - all robots have to be present at the task location for the entire execution duration. 
Furthermore, we introduce an idle task tM+1 that robots can choose to increase overall performance by delaying assignments until a better coalition can be formed. Robots start at location pstart i and end at pend i . The cost function aims to minimize the makespan, defined as the latest arrival time of any robot at pend i after completing its tasks: min max i∈{1,...,N} tfinish i + τ pfinish i , pend i (1) where tfinish i is the time robot i finishes its final task, which is computed as the sum of its execution times, idling times, and travel times. τ pfinish i , pend i is the travel time from the location of the last finished task pfinish i to the end location pend i . Travel times can be estimated using Euclidean distance or path planning algorithms that take obstacles into account. IV. METHOD The Sadcher framework consists of a neural network based on attention mechanisms to predict assignment rewards for robots to tasks that is agnostic to the size of the input graphs, i.e., can handle arbitrary numbers of robots and tasks. A relaxed bipartite matching algorithm extracts task assignments based on the predicted reward. During runtime, the method asynchronously recomputes assignments at decision steps, i.e., when robots finish tasks or new tasks are announced. A. Network Architecture The high-level network structure is depicted in Fig. 2 and is similar to [30], but extended with a distance multilayer perceptron (MLP) that informs the network about relative distances between robots and tasks and separate heads for predicting rewards for "normal" tasks and the idle action. The key components of the network are graph attention encoders (GAT) [31], transformer blocks [32], and reward MLPs that project latent embeddings into a reward matrix. 1) Graph Attention (GAT) Encoder Blocks: After mapping robot features ri and task features tj into d-dimensional embeddings, the embedded robot and task features are processed by separate GATs to capture local information-rich latent representations of the input graphs. GATs process a set of node features, incorporating information from neighboring nodes based on an adjacency matrix. While the robot features are processed as a fully connected graph, assuming all-to-all attention, the task GAT leverages the encoded precedence constraints in the adjacency matrix to understand the temporal task logic. A single head of the GAT computes attention weights αi,j between node i and its neighbors j, based on a projected feature vector h′ = hWh (where h is the input node feature and Wh is a learned weight matrix): αi,j = exp(LeakyReLU(a([h′ i||h′ j]))) P k∈N i∪i exp(LeakyReLU(a([h′ i||h′ k]))) (2) here, a is a learnable linear transformation, || denotes concatenation and Ni is the set of neighbors of node i. A Leaky ReLU [33] in combination with a softmax function over the neighbors of node i yields the final αi,j. The resulting αi,j represent the relative importance of node j to node i, enabling context-aware feature propagation. Spatiotemporally related tasks or robots with complementary skills will attend more strongly to each other. The output hGAT i for a single head at node i is a sum of a self-loop contribution and the transformed neighbor contributions: hGAT i = αi,ih′ i + LeakyReLU   X j∈Ni,j̸=i αi,jh′ j   (3) In the GAT encoder blocks, we apply multi-head GAT, concatenating the outputs of ZGAT independent heads and applying residual connections and layer normalization. 
The GAT encoder consist of LGAT such layers and outputs hGAT R for robots and hGAT T for tasks respectively. 2) Transformer Encoder Blocks: Following the GAT encoders, the representations hGAT R and hGAT T are processed by independent transformer encoders, to build the global context of robots and tasks. Each transformer block applies multi-head self-attention (MHA) [32] on the input h: αz = Softmax (WQ z h)(WK z h)⊤ √ d (WV z h) (4) MHA(h) = α1||α2|| . . . ||αZ WO (5) where d is the key dimensionality. MHA computes this operation in parallel for ZT heads to generate the final outputs hT R for robots and hT T for tasks respectively. 3) Reward Prediction: The normalized relative distances between robot i and task j are passed through the distance head MLPD to compute the distance feature di,j: di,j = MLPD Normalize(∥pR i -pT j ∥2) (6) While task and robot positions are part of the raw input features in Gr and Gt, this explicit distance term provides the network with direct access to spatial proximity. We construct feature vectors fi,j for each robot-task pair by concatenating the local (GAT) and global (transformer) representation of robot i and task j with the distance term di,j, so fi,j ∈IR4×dk+1: fi,j = hGAT R i ∥hGAT T j ∥hT R i ∥hT T j ∥di,j (7) This information-rich representation is then passed through the reward head MLPR to compute the task assignment reward Ri,j to assign robot i to task j. The idle reward RIDLE i is computed by passing fi,j to the idle head MLPI and summing the outputs across all tasks for each robot i: Rtask i,j = MLPR (fi,j) , RIDLE i = M X j=1 MLPI(fi,j) (8) MLPI can be understood as learning per-task signals that encourage a robot to wait when short-term idling is advantageous (e.g., a nearby task will become ready soon). The final predicted reward R contains the task rewards Rtask i,j for all pairs of robots i and tasks j, concatenated with the idle rewards RIDLE i for each robot i, so R ∈IRN×(M+1). B. Task Assignment through Bipartite Matching The final reward R can be interpreted as the edge rewards between robots R and tasks T at a given timestep, encoding the full complexity of the current problem state. To extract task assignments at this timestep, we employ a relaxed bipartite matching formulation (no strict one-to-one matching). The constraints ensure valid assignments: (10) prevents robots from being assigned more than one task, (11) guarantees that each task's required skills are fully Fig. 2. Sadcher architecture overview. Robot and task graphs are processed by graph attention and transformer encoders and concatenated with distance features. The reward matrix is estimated by the Idle and Reward MLPs and final task assignments are extracted using relaxed bipartite matching. B: batch size, N: number of robots, M: number of tasks, dr: robot input dimension, dt: task input dimension, d: latent dimension. covered by the assigned coalition using the element-wise inequality ⪰, and (12) enforces that only idle robots and ready tasks are matched. The bipartite matching finds the optimal assignment matrix A∗∈IRN×(M+1) that maximizes the selected edge reward encoded in R: A∗= arg max A X i,j Ai,jRi,j (9) subject to: M+1 X j=0 Ai,j ≤1, ∀i ∈R (10) N X i=0 Ai,j cr i ⪰ct j, ∀j ∈M (11) Ai,j = 0, ∀i, j : i /∈Ridle ∨j /∈Tready (12) This formulation prevents deadlocks since no coalition can be assigned to a task that it cannot execute. 
However, it allows for redundant assignments, so after computing A∗, we remove robots that do not contribute unique required skills, starting from the robot with the highest travel time to the task. Additionally, we implement a pre-moving strategy: If robot ri is assigned the idle task tM+1, it moves towards the task with the highest reward ti highest = arg max1≤j≤M Ri,j, without being formally assigned to it. This does not concatenate the tasks into a fixed schedule for the robot, since assignments are recomputed at decision steps. The robot is likely to be assigned to ti highest at the next decision step, so pre-moving can reduce the delay to task start if ri would have been the last coalition member to arrive at ti highest. C. Training through Imitation Learning 1) Training Data Generation: We generate 250,000 small-scale problem instances (8 tasks, 3 robots, 3 skills) with fully randomized configurations: each robot is assigned 1-3 skills, each task requires 1-3 skills, task locations and robot start/end depot lie in [0, 100] × [0, 100] ⊂IR2 and are randomly sampled. Execution times are drawn uniformly from [50, 100], precedence constraints are acyclic and generated between random task pairs, and robot travel speed is assumed to be 1 unit per timestep. To solve these scenarios optimally, we extend the exact MILP formulation of [16] with precedence constraints. Due to its exponential time complexity, both in the number of robots and tasks, only small instances can be solved in a reasonable time to generate a training dataset. We omit modelling of stochastic travel times, as our framework handles deviations via real-time replanning and does not need conservative safety margins. 2) Optimal Reward Extraction: To train the network to imitate the optimal behavior, we extract "ground-truth" reward matrices Ok ∈IRN×(M+1). The optimal schedules are sliced into K decision points T dec k , corresponding to timesteps when a task finishes and the robots require reassignment. At each T dec k the optimal reward is calculated based on the time difference between T dec k and the finish time of task j with discount factor γ ∈(0, 1]: Ok,i,j = γ T finish j -T dec k ok,i,j (13) where ok,i,j = 1 if robot i is assigned to task j in the optimal solution and the decision point occurs before the task's start time (T dec k < Ti,j,start); otherwise, ok,i,j = 0. We handle the idle action tM+1 in the same way: If the time between a robot's last finish time and its next start time exceeds the travel time between the corresponding tasks, we treat this interval as an explicit idle assignment in the optimal schedule and compute its reward using the above formulas. By design, this reward encoding captures the optimal decision logic: The next selected task will have the highest reward, with decreasing rewards for later tasks. Given the sequence of optimal rewards Ok over all decision steps T dec k , the bipartite matching algorithm outputs the exact solution. 3) Training Details : We modify the loss L from [30] by applying the inverse mask (1 -Xk) to the second term: L = ∥Xk◦(Rk-Ok)∥1+λ∥(1-Xk)◦(Rk-Ok)∥1 (14) where ◦denotes the element-wise product operator, Ok is the optimal reward, Rk is the predicted reward and Xk ∈ IRN×(M+1) is a feasibility mask with Xi,j = 1 if robot i is available and task j is ready, else Xi,j = 0. The first term encourages accurate prediction of feasible rewards, while the second discourages high values for infeasible ones. 
λ balances the two terms: intuitively, accurate feasible predictions matter more than suppressing infeasible ones, since the bipartite matching will only select among feasible high-reward tasks. We use the Adam optimizer [34] to train the network.
V. EXPERIMENTS AND RESULTS
We evaluate makespan and computation-time metrics for six algorithms, averaged over 500 unseen problem instances with randomized task locations/durations, skill requirements, robot capabilities, and precedence constraints. Experiments are conducted on a consumer machine with an AMD Ryzen 7 4800H CPU and an NVIDIA GeForce GTX 1650 GPU.
A. Compared Algorithms
1) Baselines: Few algorithms in the literature address heterogeneous ST-MR-TA-XD with precedence constraints, and they are often published without weights or datasets [30]. We chose one available algorithm for each commonly followed approach (optimization, learning, heuristic search): (1) an MILP formulation based on [16], adding precedence constraints and omitting stochastic travel times, which provides optimal solutions with formal guarantees. The next two baselines build on HeteroMRTA [28], a decentralized RL framework, adapted for precedence constraints by masking out tasks that have incomplete predecessors during action selection: (2) the single-solution variant HeteroMRTA, where agents choose the highest-probability task at decision steps, and (3) the sampling variant S-HeteroMRTA (Boltzmann weighted random action selection), which returns the best-makespan solution across 10 runs per instance. We also implement and compare (4) a greedy heuristic that assigns robots to the tasks whose remaining skill requirements they reduce the most, breaking ties by travel time (shortest first).
2) Sadcher Variants: (5) The Sadcher framework predicts robot-task rewards deterministically, as described in Section IV. We also benchmark (6) the S-Sadcher variant, which samples reward matrices from a normal distribution centered on the deterministic output before passing them to the bipartite matching. This introduces stochastic variation in the schedules. As for S-HeteroMRTA, we run this process 10 times per instance and select the best-performing rollout.
B. Training-Domain Evaluation
We evaluate the algorithms on 500 randomized problem instances of the training-domain size (8 tasks, 3 robots, 3 precedence constraints). Results are shown in Figs. 3 and 4.
1) Makespan: The MILP formulation provides optimal makespans, establishing a baseline for the average relative gaps of the other methods. S-Sadcher (gap: 3.8%) and Sadcher (gap: 6.8%) are the best-performing non-optimal algorithms. HeteroMRTA performs worst (gap: 21.5%), but sampling reduces its optimality gap to 10.8%, leveraging its RL policy, which follows a sampling strategy during training. In the pairwise comparison in Fig. 4, Sadcher achieves a lower makespan on 403 of 500 instances (80.6%, binomial test: p ≈ 2 × 10^−45). S-Sadcher outperforms S-HeteroMRTA on 389 of 500 instances (77.8%, binomial test: p ≈ 3 × 10^−37). Greedy reaches an average gap of 20.4%.
2) Computation Time: For dynamic scenarios with real-time requirements, the time per assignment decision (t_dec) is crucial. MILP cannot compute instantaneous assignments, only globally optimal schedules. S-HeteroMRTA and S-Sadcher roll out the full scenario to select the best assignments. These three algorithms therefore do not yield a time per decision, only a time for full solution construction (t_full).
Due to its simplicity, the greedy algorithm computes fastest (t_dec: 0.080 ms; t_full: 1.7 ms). HeteroMRTA (t_dec: 9.1 ms; t_full: 0.20 s) is faster than Sadcher (t_dec: 22 ms; t_full: 0.57 s), which must solve the comparatively expensive bipartite matching at each decision. S-HeteroMRTA computes full solutions in 0.96 s, S-Sadcher in 5.7 s, and MILP in 76 s. In the worst case, MILP takes up to 12 minutes, rendering it infeasible for real-time applications even on small problems.
3) Precedence Constraints: The Sadcher model demonstrates an understanding of task dependencies by prioritizing the assignment of predecessor tasks. This improves performance by unlocking successors earlier and enabling better global schedules. On average, the model assigns ready predecessor tasks approximately 1.7 times more frequently than the baselines. (S-)HeteroMRTA and Greedy cannot make this informed decision; they are merely prevented from selecting tasks with incomplete predecessors through masking.
C. Out-of-Domain Generalization
To evaluate generalization, we scale the number of robots N ∈ {3, 5, 20} and the number of tasks M ∈ [6, 250] (see Fig. 5) and compare the makespan gap relative to Sadcher (trained on N=3, M=8). With a 1-hour cutoff, the MILP solver fails to find solutions beyond 10 tasks and 7 robots. For smaller problem sizes, it finds the optimal makespans, outperforming Sadcher by 6-16%.
For N=3 robots, S-Sadcher and Sadcher are the strongest non-optimal methods across all M. S-HeteroMRTA outperforms Greedy for M≤60, and HeteroMRTA yields the highest makespans overall. Although Sadcher's relative performance is best for small M, it still outperforms Greedy by more than 4% and (S-)HeteroMRTA by more than 9% at M=250.
For N=5 robots, S-Sadcher remains the best learning-based method across all M. S-HeteroMRTA performs better than Sadcher for M≤9, but is surpassed by Sadcher beyond that, and by Greedy for M≥70. HeteroMRTA outperforms Greedy for 7≤M≤10. Greedy reaches a 2% gap at M=200.
For N=20 robots, relative performance changes significantly, with degradation of the learning-based methods: S-Sadcher is the best-performing method only for M≤70; beyond that, Greedy becomes superior. S-HeteroMRTA consistently beats Sadcher, yet only outperforms Greedy for M≤50. HeteroMRTA surpasses Sadcher for M≥150, and its performance gap to Sadcher for M≤100 is smaller than in scenarios with fewer robots.

[Fig. 3. Comparison on 500 unseen, randomized problem instances (8 tasks, 3 robots, 3 precedence constraints) for makespan (left) and computation time (right). Lower means better performance. Wilcoxon significance levels are annotated for Sadcher compared to the HeteroMRTA variants. All other pairwise differences are statistically significant (p < 0.05), except between S-HeteroMRTA and Greedy (p = 0.21). For algorithms requiring full solution construction, total computation time is reported; for methods returning instantaneous assignments, both time per decision and total time are shown.]
[Fig. 4. Pairwise makespan comparison of HeteroMRTA vs. Sadcher (left) and S-HeteroMRTA vs. S-Sadcher (right). Each point is one solved instance; points below the diagonal indicate better performance by (S-)Sadcher.]

Overall, Sadcher excels on smaller robot teams across all task counts, but its performance decreases with more robots. We hypothesize that increasing the number of tasks while keeping the number of robots fixed is similar to solving multiple smaller subproblems sequentially, where local
scheduling rules learned during training remain effective. On the other hand, larger teams require different local scheduling strategies that diverge from the distribution Sadcher has seen during training. For high task counts, the greedy algorithm, especially in combination with bigger robot teams, starts beating the learning-based methods. The sampling-based variants (S-Sadcher, S-HeteroMRTA) have a higher impact on smaller problems, where the smaller solution space makes rollouts more likely to yield improvements.
Greedy computes fastest, delivering near-instantaneous decisions (≤3 ms). The computation time of HeteroMRTA is minimally affected by scaling (≤20 ms), while Sadcher is slower and scales worse due to the bipartite matching step (≤80 ms per decision). The sampling-based variants require significantly longer computation (up to 450 s for S-Sadcher and 40 s for S-HeteroMRTA at M=250), which makes them impractical for online computation on large problems.

[Fig. 5. Relative makespan gap to Sadcher for 3, 5, and 20 robots (top left to bottom left). Bottom right: computation time for 3 robots (for algorithms requiring full solution construction (S-HeteroMRTA, S-Sadcher, MILP), total computation time is reported; for methods returning instantaneous assignments (HeteroMRTA, Sadcher, Greedy), time per decision is reported). Task counts M ∈ [6, 250], with S = 3 skills and M/5 precedence constraints. Each point shows the mean over 100 runs.]

VI. CONCLUSION
In this work, we proposed Sadcher, an IL framework addressing real-time task assignment for heterogeneous multi-robot teams, incorporating dynamic coalition formation and precedence constraints. Reward prediction with relaxed bipartite matching yields strong performance with feasibility guarantees. Sadcher outperforms RL-based and heuristic baselines in makespan across small to medium-sized robot teams and a wide range of task counts. For bigger teams, this advantage is lost due to the lack of expert demonstrations at that scale. Sadcher can generate assignments in real time across all tested problem sizes, but the sampling variant S-Sadcher is real-time only on smaller problems.
Sadcher relies on a large (computationally expensive) dataset of expert demonstrations for training. Future work will explore fine-tuning IL policies with RL, which could increase performance on larger problem instances where expert solutions are very expensive or infeasible to obtain. Additionally, extending the dataset with suboptimal demonstrations for bigger problem instances could improve scalability.
ACKNOWLEDGMENTS
This project has received funding from the European Union through ERC, INTERACT, under Grant 101041863. Views and opinions expressed are, however, those of the author(s) only and do not necessarily reflect those of the European Union. Neither the European Union nor the granting authority can be held responsible for them.
REFERENCES
[1] L. Zhang, M. Li, W. Yang, and S. Yang, "Task Allocation in Heterogeneous Multi-Robot Systems Based on Preference-Driven Hedonic Game," in 2024 IEEE International Conference on Robotics and Automation (ICRA), May 2024, pp. 8967-8972.
[2] W. Gosrich, S. Mayya, S. Narayan, M. Malencia, S. Agarwal, and V. Kumar, "Multi-Robot Coordination and Cooperation with Task Precedence Relationships," May 2023.
[3] M. C. Gombolay, R. J. Wilcox, and J. A. Shah, "Fast Scheduling of Robot Teams Performing Tasks With Temporospatial Constraints," IEEE Transactions on Robotics, vol. 34, pp. 220-239, Feb. 2018.
[4] I. Ansari, A. Mohammed, Y. Ansari, M.
Yusuf Ansari, S. Razak, and E. Feo Flushing, "CoLoSSI: Multi-Robot Task Allocation in Spatially-Distributed and Communication Restricted Environments," IEEE Access, vol. 12, 2024.
[5] H. Chakraa, F. Guérin, E. Leclercq, and D. Lefebvre, "Optimization techniques for Multi-Robot Task Allocation problems: Review on the state-of-the-art," Robotics and Autonomous Systems, vol. 168, p. 104492, Oct. 2023.
[6] E. Bischoff, F. Meyer, J. Inga, and S. Hohmann, "Multi-Robot Task Allocation and Scheduling Considering Cooperative Tasks and Precedence Constraints," May 2020.
[7] R. K. Ramachandran, J. A. Preiss, and G. S. Sukhatme, "Resilience by Reconfiguration: Exploiting Heterogeneity in Robot Teams," in 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Nov. 2019.
[8] A. Khamis, A. Hussein, and A. Elmogy, "Multi-robot Task Allocation: A Review of the State-of-the-Art," in Cooperative Robots and Sensor Networks 2015, A. Koubâa and J. Martínez-de Dios, Eds. Cham: Springer International Publishing, 2015, pp. 31-51.
[9] F. Quinton, C. Grand, and C. Lesire, "Market Approaches to the Multi-Robot Task Allocation Problem: a Survey," Journal of Intelligent & Robotic Systems, vol. 107, no. 2, p. 29, Feb. 2023.
[10] W. Babincsak, A. Aswale, and C. Pinciroli, "Ant Colony Optimization for Heterogeneous Coalition Formation and Scheduling with Multi-Skilled Robots," in 2023 International Symposium on Multi-Robot and Multi-Agent Systems (MRS), Dec. 2023, pp. 121-127.
[11] B. Fu, W. Smith, D. Rizzo, M. Castanier, M. Ghaffari, and K. Barton, "Robust Task Scheduling for Heterogeneous Robot Teams under Capability Uncertainty," IEEE Transactions on Robotics, June 2021.
[12] P. Muhuri and A. Rauniyar, "Immigrants Based Adaptive Genetic Algorithms for Task Allocation in Multi-Robot Systems," International Journal of Computational Intelligence and Applications, vol. 16, p. 1750025, Dec. 2017.
[13] M. Gini, "Multi-Robot Allocation of Tasks with Temporal and Ordering Constraints," Proceedings of the AAAI Conference on Artificial Intelligence, vol. 31, no. 1, Feb. 2017.
[14] B. P. Gerkey and M. J. Matarić, "A Formal Analysis and Taxonomy of Task Allocation in Multi-Robot Systems," The International Journal of Robotics Research, vol. 23, no. 9, pp. 939-954, Sept. 2004.
[15] G. A. Korsah, A. Stentz, and M. B. Dias, "A comprehensive taxonomy for multi-robot task allocation," The International Journal of Robotics Research, vol. 32, no. 12, pp. 1495-1512, Oct. 2013.
[16] A. Aswale and C. Pinciroli, "Heterogeneous Coalition Formation and Scheduling with Multi-Skilled Robots," June 2023.
[17] I. Ansari, A. Mohamed, E. F. Flushing, and S. Razak, "Cooperative and load-balancing auctions for heterogeneous multi-robot teams dealing with spatial and non-atomic tasks," in 2020 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), Nov. 2020.
[18] M. Irfan and A. Farooq, "Auction-based task allocation scheme for dynamic coalition formations in limited robotic swarms with heterogeneous capabilities," in 2016 International Conference on Intelligent Systems Engineering (ICISE), Jan. 2016, pp. 210-215.
[19] H. Chakraa, E. Leclercq, F. Guérin, and D. Lefebvre, "A Centralized Task Allocation Algorithm for a Multi-Robot Inspection Mission With Sensing Specifications," IEEE Access, vol. 11, 2023.
[20] M. U. Arif, "Robot coalition formation against time-extended multi-robot tasks," International Journal of Intelligent Unmanned Systems, vol. 10, pp. 468-481, June 2021.
[21] X.-F. Liu, Y.
Fang, Z.-H. Zhan, and J. Zhang, "Strength Learning Particle Swarm Optimization for Multiobjective Multirobot Task Scheduling," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 53, no. 7, pp. 4052-4063, July 2023.
[22] A. Prorok, J. Blumenkamp, Q. Li, R. Kortvelesy, Z. Liu, and E. Stump, "The Holy Grail of Multi-Robot Planning: Learning to Generate Online-Scalable Solutions from Offline-Optimal Experts," July 2021.
[23] Z. Wang, C. Liu, and M. Gombolay, "Heterogeneous graph attention networks for scalable multi-robot scheduling with temporospatial constraints," Autonomous Robots, vol. 46, no. 1, pp. 249-268, Jan. 2022.
[24] S. Paul, P. Ghassemi, and S. Chowdhury, "Learning Scalable Policies over Graphs for Multi-Robot Task Allocation using Capsule Attention Networks," May 2022.
[25] B. Altundas, Z. Wang, J. Bishop, and M. Gombolay, "Learning Coordination Policies over Heterogeneous Graphs for Human-Robot Teams via Recurrent Neural Schedule Propagation," in 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Oct. 2022.
[26] F. Deng, H. Huang, L. Fu, H. Yue, J. Zhang, Z. Wu, and T. L. Lam, "A Learning Approach to Multi-robot Task Allocation with Priority Constraints and Uncertainty," in 2022 IEEE International Conference on Industrial Technology (ICIT), Aug. 2022, pp. 1-8.
[27] W. Dai, A. Bidwai, and G. Sartoretti, "Dynamic Coalition Formation and Routing for Multirobot Task Allocation via Reinforcement Learning," in 2024 IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan: IEEE, May 2024, pp. 16567-16573.
[28] W. Dai, U. Rai, J. Chiun, Y. Cao, and G. Sartoretti, "Heterogeneous Multi-robot Task Allocation and Scheduling via Reinforcement Learning," IEEE Robotics and Automation Letters, vol. 10, no. 3, pp. 2654-2661, Mar. 2025.
[29] P. Gao, S. Siva, A. Micciche, and H. Zhang, "Collaborative Scheduling with Adaptation to Failure for Heterogeneous Robot Teams," in 2023 IEEE International Conference on Robotics and Automation (ICRA), May 2023, pp. 1414-1420.
[30] W. J. Jose and H. Zhang, "Learning for Dynamic Subteaming and Voluntary Waiting in Heterogeneous Multi-Robot Collaborative Scheduling," IEEE International Conference on Robotics and Automation (ICRA), 2024.
[31] P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Liò, and Y. Bengio, "Graph Attention Networks," Feb. 2018.
[32] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, "Attention Is All You Need," June 2017.
[33] B. Xu, N. Wang, T. Chen, and M. Li, "Empirical Evaluation of Rectified Activations in Convolutional Network," Nov. 2015.
[34] D. P. Kingma and J. Ba, "Adam: A Method for Stochastic Optimization," Jan. 2017.
2510.14844
Provable Unlearning with Gradient Ascent on Two-Layer ReLU Neural Networks Odelia Melamed * Gilad Yehudai† Gal Vardi ‡ Abstract Machine Unlearning aims to remove specific data from trained models, addressing growing privacy and ethical concerns. We provide a theoretical analysis of a simple and widely used method—gradient ascent— used to reverse the influence of a specific data point without retraining from scratch. Leveraging the implicit bias of gradient descent towards solutions that satisfy the Karush-Kuhn-Tucker (KKT) conditions of a margin maximization problem, we quantify the quality of the unlearned model by evaluating how well it satisfies these conditions w.r.t. the retained data. To formalize this idea, we propose a new success criterion, termed (ϵ, δ, τ)-successful unlearning, and show that, for both linear models and two-layer neural networks with high dimensional data, a properly scaled gradient-ascent step satisfies this criterion and yields a model that closely approximates the retrained solution on the retained data. We also show that gradient ascent performs successful unlearning while still preserving generalization in a synthetic Gaussian-mixture setting. 1 Introduction Machine Unlearning is an emerging field motivated by growing societal and legal demands—specifically, the need for machine learning models to "forget" specific data upon request. This concern has intensified following discoveries that private training data can be extracted from model outputs or weights (Carlini et al., 2019; Haim et al., 2022; Fredrikson et al., 2015). The demand is further reinforced by regulations such as the EU GDPR’s Right to be Forgotten, as well as concerns about security and ethical AI. Machine unlearning addresses this challenge by aiming to undo the effect of particular samples without incurring the cost of full retraining. The concept of unlearning was first formalized by Cao & Yang (2015) in the context of statistical query learning and has since been extended to deep neural networks. Broadly, two main approaches have emerged: retraining-based unlearning, which ensures complete data removal but is computationally expensive, and approximate unlearning, which aims for efficiency at the cost of weaker guarantees. Due to the stochastic and incremental nature of modern training procedures, which entangle data contributions, it is nontrivial to reverse the effect of the data to be forgotten while minimizing disruption to the retained data. There is a large body of research on adapting given networks, namely, manipulating the weights post-training. For a training set S, a set of points Sforget ⊆S to unlearn, and its complement Sretain = S \ Sforget, a direct approach is to increase the training loss for samples in Sforget using gradient steps. This direct method was first implemented in NegGrad (Golatkar et al., 2020), simply taking multiple negative gradient steps for Sforget with respect to the training loss. Other gradient-related post-training methods use other losses and second order information for improved results (Guo et al., 2019; Golatkar et al., 2020; Warnecke et al., 2021; Triantafillou et al., 2024; Graves et al., 2021). There are also additional variants of NegGrad, such as NegGrad+ (Kurmanji et al., 2023), and Advanced NegGrad (Choi & Na, 2023) which add a recovery phase, performing additional training steps on the retained set. 
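For concreteness, here is a minimal sketch of the NegGrad idea mentioned above: multiple negative gradient steps on the forget set. It assumes a PyTorch-style model and loss; the learning rate and step count are illustrative placeholders, not values from any of the cited papers.

```python
import torch

def neg_grad(model, forget_loader, loss_fn, lr=1e-3, steps=10):
    """Sketch of NegGrad (Golatkar et al., 2020): repeatedly *ascend*
    the training loss on S_forget by minimizing the negated loss.
    lr and steps are assumed placeholders."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        for x, y in forget_loader:
            opt.zero_grad()
            loss = -loss_fn(model(x), y)   # negated loss => gradient ascent
            loss.backward()
            opt.step()
    return model
```

The variants NegGrad+ and Advanced NegGrad would follow such a loop with additional fine-tuning steps on the retained set.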
In this work, we study the important building block of this foundational and widely used method: a single gradient ascent step on the training loss w.r.t. S_forget.

*Weizmann Institute of Science, odelia.melamed@weizmann.ac.il
†Center for Data Science, New York University, gy2219@nyu.edu
‡Weizmann Institute of Science, gal.vardi@weizmann.ac.il

One central question in the regime of approximate unlearning is how to measure unlearning performance. A common criterion, inspired by differential privacy (Dwork et al., 2014), evaluates success by comparing the output distributions of a model retrained from scratch with those of the unlearned model. This approach allows for approximate guarantees, where the distance between the two distributions is bounded by small parameters (Triantafillou et al., 2024; Ginart et al., 2019), providing a formal framework for quantifying the effectiveness of unlearning algorithms, albeit one that is often too stringent.
To provide a rigorous framework for analyzing unlearning, we turn to recent results on the implicit bias of neural networks under gradient descent (Lyu & Li, 2019; Ji & Telgarsky, 2020). These works show that training tends toward solutions that satisfy the Karush-Kuhn-Tucker (KKT) conditions of the maximum-margin problem. We use these conditions to formulate an unlearning criterion: a successful unlearning procedure should modify the model from satisfying the KKT conditions w.r.t. S to approximately satisfying them w.r.t. S_retain. This property is necessary for successful unlearning: since a network retrained only on S_retain converges to a KKT point w.r.t. S_retain, a successful unlearning algorithm also needs to obtain such a KKT point, at least approximately. Note that the approximation relaxation here is analogous to the relaxation of the distribution distance, allowing bounds on the deviation from the exact solution attained by retraining.
In our work, we analyze the unlearning performance of one gradient ascent step of a carefully chosen size. We define a new unlearning criterion for an unlearning algorithm A, called (ϵ, δ, τ)-successful unlearning, using the KKT conditions as discussed above. Next, for both linear models and two-layer neural networks trained with high-dimensional (or nearly-orthogonal) data, we prove that a gradient ascent step of an appropriate size is a successful unlearning algorithm. In addition, we show a setting where unlearning using gradient ascent is both successful and does not hurt the model's generalization performance. In a bit more detail, our main contributions are:
• For linear predictors, where the margin-maximizing solution is unique, we prove that gradient ascent with an appropriate step size is an (ϵ, δ, τ)-successful unlearning algorithm. Specifically, it yields an approximately max-margin predictor for S_retain. Moreover, due to the uniqueness of the solution, the unlearned predictor aligns closely, measured via cosine similarity, with the exact model retrained on S_retain.
• We extend these findings to a two-layer neural network setting. Despite the added complexity and nonlinearity, we prove that a single gradient ascent step is an (ϵ, δ, τ)-successful unlearning algorithm for some small ϵ, δ, τ.
• We show that unlearning does not compromise out-of-sample prediction, using a synthetic mixture-of-Gaussians dataset. We show that models unlearned via gradient ascent maintain generalization performance comparable to the original.
Related Work
Machine unlearning was initially proposed in the statistical query setting by Cao & Yang (2015) and later extended to deep neural networks. The strongest unlearning guarantees are often formalized via differential privacy (Dwork et al., 2014), requiring indistinguishability between unlearned and retrained model outputs. This was relaxed using KL-divergence (Golatkar et al., 2020), while other lines of work evaluate unlearning effectiveness through privacy attacks, such as membership inference or data reconstruction (Niu et al., 2024; Haim et al., 2022).
To achieve these goals, many methods aim to avoid full retraining. For example, SISA (Bourtoule et al., 2021) partitions the training data into multiple shards to enable faster future forgetting. Graves et al. (2021) proposed saving intermediate gradients during training with respect to different training data points, enabling faster simulation of retraining without the forget set using these intermediate gradients. Post-training approaches include fine-tuning on S_retain only (hoping for catastrophic forgetting of the rest of the data) or with wrong labels for the data in S_forget (Golatkar et al. (2020); Triantafillou et al. (2024); Graves et al. (2021); Kurmanji et al. (2023)), or using different losses (Golatkar et al., 2020). These techniques often rely on gradient-based updates, with loss functions adjusted for unlearning objectives. Several methods also incorporate second-order information for better precision (Guo et al., 2019; Golatkar et al., 2020; Warnecke et al., 2021).
The gradient-ascent method was first introduced by Golatkar et al. (2020) as NegGrad, applying negative gradient steps to increase the loss on the forget set. Its extensions, NegGrad+ (Kurmanji et al., 2023) and Advanced NegGrad (Choi & Na, 2023), add a recovery phase by performing fine-tuning on the retained set. In this work, we isolate the basic component, gradient ascent, and study its behavior analytically.
On the theoretical side, Guo et al. (2019) analyzed linear models and proposed a certified unlearning framework. Leveraging the existence of a unique optimal solution, they argue that inspecting the training gradients on the retained dataset can reveal residual influence from the deleted point, particularly when the model incurs non-zero loss, which may indicate incomplete unlearning. Sekhari et al. (2021) analyze unlearning capacity based on test loss degradation. Our approach defines unlearning through the lens of KKT conditions, building on a line of work showing that training converges to a KKT point of the margin-maximization problem for the dataset.
Implicit bias and margin maximization. A great body of research has studied the implicit bias of training neural networks with gradient methods toward solutions that generalize well (Neyshabur et al., 2017; Zhang et al., 2021). Our analysis is based on the characterization of the implicit bias of gradient flow on homogeneous models towards KKT solutions of the max-margin problem, a result due to Lyu & Li (2019) and Ji & Telgarsky (2020). Implicit bias towards margin maximization was previously studied also for linear predictors (Soudry et al., 2018), and for deep linear networks and linear convolutional networks (Gunasekar et al., 2018). For a survey on the implicit bias of neural networks, see Vardi (2023).

2 Settings
Notations. For m ∈ ℕ, we denote [m] = {1, 2, . . . , m}, and for l ∈ [m], we denote [m]_{−l} = [m] \ {l}. We use bold-face letters to denote vectors, e.g., x = (x_1, . . . , x_d) ∈ ℝ^d.
We use ∥x∥ to denote the Euclidean norm of a vector x. We denote by 1_{x≥0} the indicator function, with 1_{x≥0} = 1 if x ≥ 0 and 0 otherwise. We denote by sign(x) the sign function, sign(x) = 1 if x ≥ 0 and −1 otherwise. We denote by U(A) the uniform distribution over a set A. For a distribution D, we denote by x ∼ D^m a vector x that consists of m i.i.d. samples from D. We denote by cossim(x_1, x_2) the cosine similarity of vectors x_1, x_2, defined by cossim(x_1, x_2) = ⟨x_1, x_2⟩ / (∥x_1∥ ∥x_2∥).

2.1 Architectures and training
In this paper, we discuss unlearning in two fundamental models: a linear predictor and a two-layer fully connected network. For an input x ∈ ℝ^d and a vector w ∈ ℝ^d, we denote a linear predictor by N(w, x) = w^⊤ x. Our two-layer network is defined by

N(θ, x) = Σ_{j=1}^n u_j σ(w_j^⊤ x) ,    (1)

where σ(z) = max(z, 0) is the ReLU activation function. For all j ∈ [n], we initialize u_j ∼ U({−1/√n, 1/√n}) and fix them throughout training. The parameters w_1, . . . , w_n are trained. We denote by θ a vectorization of all the trained parameters. Given a training set S = {(x_i, y_i)}_{i=1}^m, we train our models using gradient descent over the empirical loss

L(θ) = (1/m) Σ_{i=1}^m ℓ(y_i N(θ, x_i)) ,

where ℓ is either the logistic loss ℓ(q) = log(1 + e^{−q}) or the exponential loss ℓ(q) = e^{−q}. That is, we have θ_{t+1} = θ_t − β∇L(θ_t), where θ_t are the weights after the t-th training epoch and β is the step size. We consider the limit where β is infinitesimally small, called gradient flow. More formally, in gradient flow the trajectory θ_t is defined for all t ≥ 0 and satisfies the differential equation dθ_t/dt = −∇L(θ_t).
For a model N(θ, x), where θ are the parameters and x is the input, we say that N is homogeneous if there exists C > 0 such that for every α > 0 and every θ, x, we have N(αθ, x) = α^C N(θ, x). We note that both a linear predictor and a two-layer network, as defined above, are homogeneous with C = 1. For both linear and two-layer ReLU networks, there is an implicit bias towards margin maximization, as implied by the following theorem:

Theorem 2.1 (Lyu & Li (2019), Ji & Telgarsky (2020)). Let N(x, θ) be a homogeneous linear or ReLU neural network. Consider minimizing the logistic or exponential loss using gradient flow over a binary classification set S = {(x_i, y_i)}_{i=1}^m ⊆ ℝ^d × {−1, 1}. Assume that there is a time t_0 where L(θ_{t_0}) < 1/m. Then, gradient flow converges in direction¹ to a first-order stationary point (i.e., a Karush-Kuhn-Tucker point, or KKT point for short) of the margin-maximization problem:

min_θ (1/2)∥θ∥²  s.t.  ∀i ∈ [m], y_i N(θ, x_i) ≥ 1 .    (2)

Note that in the case of linear predictors a KKT point is always a global optimum,² but in the case of non-linear networks this is not necessarily the case. Thus, in non-linear homogeneous models gradient flow might converge to a KKT point which is not a global optimum of Problem 2.
While the above theorem captures the asymptotic behavior of gradient flow, namely that as the time t → ∞ it converges to a KKT point, the behavior of gradient flow after finite time can be characterized by approximate KKT points.

Definition 2.1. We say that θ is an (ϵ, δ)-approximate KKT point for Problem 2 if there exist λ_1, ..., λ_m such that
1. Dual Feasibility: λ_1, ..., λ_m ≥ 0.
2. Stationarity: ∥θ − Σ_{i=1}^m λ_i y_i ∇_θ N(x_i, θ)∥ ≤ ϵ.
3. Complementary Slackness: ∀i ∈ [m], λ_i (y_i N(x_i, θ) − 1) ≤ δ.
4. Primal Feasibility: ∀i ∈ [m], y_i N(x_i, θ) ≥ 1.

We note that a (0, 0)-approximate KKT point is a KKT point.
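As an illustration, the conditions of Definition 2.1 can be checked numerically for a linear predictor, where ∇_θ N(x_i, θ) = x_i. The sketch below assumes candidate multipliers λ are given (e.g., obtained by nonnegative least squares on the stationarity equation); it is a diagnostic we add for concreteness, not part of the paper's analysis.

```python
import numpy as np

def kkt_residuals(theta, X, y, lam):
    """Check Definition 2.1 for a linear predictor N(w, x) = <w, x>.
    theta: (d,) weights; X: (m, d) data; y: (m,) labels in {-1, 1};
    lam: (m,) candidate KKT multipliers (assumed given).
    Returns (epsilon, delta, primal_and_dual_feasible)."""
    margins = y * (X @ theta)                   # y_i N(theta, x_i)
    # Stationarity residual: || theta - sum_i lam_i y_i x_i ||
    eps = np.linalg.norm(theta - X.T @ (lam * y))
    # Complementary slackness: max_i lam_i (y_i N(x_i, theta) - 1)
    delta = float(np.max(lam * (margins - 1.0)))
    feasible = bool(np.all(margins >= 1.0) and np.all(lam >= 0.0))
    return eps, delta, feasible
```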
When training with gradient flow, the parameters after finite time satisfy the following:

Theorem 2.2 (Lyu & Li (2019), Ji & Telgarsky (2020)). Under the conditions of Theorem 2.1, the parameters θ_t at time t point in the direction of an (ϵ_t, δ_t)-approximate KKT point for Problem 2, and (ϵ_t, δ_t) → (0, 0) as t → ∞.

Hence, when training a model it is reasonable to expect that the trained model is an (ϵ, δ)-approximate KKT point of Problem 2, for some small ϵ, δ.

2.2 An objective for unlearning
Let S = {(x_i, y_i)}_{i=1}^m ⊆ ℝ^d × {−1, 1} be a dataset, and let (x_r, y_r) be the example that we wish to unlearn. We call the dataset S the original dataset, and S_retain = S \ {(x_r, y_r)} the retain dataset. Note that we focus on unlearning a single data point; in Section 5 we consider unlearning a subset.
Following Theorem 2.2, we assume that we start from a trained model that is an (ϵ, δ)-approximate KKT point w.r.t. the original dataset. We also note that, for the same reason, retraining on S_retain will result in an (ϵ*, δ*)-approximate KKT point w.r.t. S_retain. Our objective can be stated as follows: in the unlearning process, we wish to obtain a model that is close to an (ϵ*, δ*)-approximate KKT point w.r.t. the retain dataset, for some small ϵ*, δ*. Indeed, in unlearning we wish to find a model that is "similar" to a model that we could have learned had we trained on the retain dataset in the first place, and by Theorem 2.2 such a model must be an (ϵ*, δ*)-approximate KKT point w.r.t. the retain dataset. Hence, our objective can be viewed as a necessary condition for successful unlearning. That is, a successful unlearning algorithm needs to obtain a network which is close to an approximate KKT point, since otherwise the network cannot be similar to a model retrained on the retained dataset. More formally, we have the following definition:

¹We say that gradient flow converges in direction to some θ̃ if lim_{t→∞} θ_t/∥θ_t∥ = θ̃/∥θ̃∥.
²For linear predictors, the theorem was obtained by Soudry et al. (2018).

Definition 2.2 (successful unlearning). For a dataset S and a homogeneous model with parameters θ, we say that A is an (ϵ, δ, τ)-successful unlearning algorithm w.r.t. θ and S if for every point (x_l, y_l) ∈ S there exists an (ϵ, δ)-approximate KKT point θ̃ w.r.t. S \ {(x_l, y_l)} such that

cossim(A(θ, S, l), θ̃) ≥ 1 − τ .

We note that, by Theorem 2.2, retraining for time t is an (ϵ_t, δ_t, τ)-successful unlearning algorithm with τ = 0 and (ϵ_t, δ_t) → (0, 0). Our objective is to perform (ϵ, δ, τ)-successful unlearning for small (ϵ, δ, τ), but in an efficient manner that avoids retraining from scratch.
Definition 2.2 requires that the unlearned network A(θ, S, l) and the approximate KKT point θ̃ have high cosine similarity. Indeed, since we consider homogeneous networks, the scale of the parameters only affects the scale of the output, and thus to show that A(θ, S, l) behaves similarly to θ̃ it suffices to consider their corresponding normalized parameters. Moreover, for the normalized parameters, high cosine similarity implies small ℓ_2 distance, and since the model is Lipschitz w.r.t. the parameters, this implies similar behavior.

2.3 Unlearning with gradient ascent
Consider a network N(x, θ) trained on a dataset S = {(x_i, y_i)}_{i=1}^m ⊆ ℝ^d × {−1, 1}. In this paper, we consider the widely used gradient ascent method for unlearning. In this method, to unlearn a training point (x_r, y_r), we take a gradient step towards increasing the training loss for this particular point. Namely, for a step size β, the algorithm A_GA, given θ, S and r, performs

A_GA(θ, S, r) = θ + β ∇_θ ℓ(y_r N(x_r, θ)) .    (3)

A minimal sketch of this update is given below.
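The following is a minimal sketch of the single-step update (3) for the logistic loss, assuming a PyTorch model that maps an input to a scalar output. The step size β is exposed as a parameter; in the analysis of the following sections, the "appropriate" choice is, in effect, the one that cancels the forgotten point's KKT coefficient λ_r, which is our reading of the proofs rather than a prescription from the text.

```python
import torch

def a_ga(model, x_r, y_r, beta):
    """Single gradient-ascent unlearning step A_GA of Eq. (3),
    with the logistic loss ell(q) = log(1 + e^{-q})."""
    params = [p for p in model.parameters() if p.requires_grad]
    q = y_r * model(x_r)                          # margin y_r * N(x_r, theta)
    loss = torch.nn.functional.softplus(-q).sum() # ell(q) = log(1 + e^{-q})
    grads = torch.autograd.grad(loss, params)
    with torch.no_grad():
        for p, g in zip(params, grads):
            p.add_(beta * g)                      # theta + beta * grad ell
    return model
```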
Intuitively, training examples are often memorized in the sense that their training loss is too small, and gradient ascent allows us to undo this, that is, to reduce the level of overfitting for these examples. The gradient ascent method is a significant building block of the widely used unlearning method NegGrad, which consists of multiple such steps, and it is also the unlearning approach in other variants (such as NegGrad+ (Kurmanji et al., 2023) and Advanced NegGrad (Choi & Na, 2023)) that additionally perform fine-tuning on the retained data. In Sections 3 and 4 we demonstrate that, for linear predictors and two-layer ReLU networks respectively, unlearning with a single step of gradient ascent (A_GA) is (ϵ, δ, τ)-successful, under certain assumptions.

2.4 Data
We consider a size-m training set S = {(x_i, y_i)}_{i=1}^m ⊆ ℝ^d × {−1, 1}. We make the following assumption on S, for some parameters ψ, ϕ > 0.

Assumption 2.3. The training set S = {(x_i, y_i)}_{i=1}^m satisfies
1. For all (x, y) ∈ S we have ∥x∥² ∈ [1 − ψ, 1 + ψ].
2. For all (x_i, y_i), (x_j, y_j) ∈ S with i ≠ j we have |⟨x_i, x_j⟩| ≤ ϕ.

The data-normalization assumption (Item 1 above) is very common, as data points with significantly different norms might cause biases during training towards higher-norm data points. The latter assumption can be phrased as near-orthogonality of the data points, which is also quite common in the literature on high-dimensional data (Frei et al., 2022; Vardi et al., 2022), and holds with high probability for popular distributions. A prominent example of a distribution that satisfies both conditions with high probability is the Gaussian distribution N(0, (1/d) I_d), where d is the vector dimension. Another example is the uniform distribution over the unit sphere S^{d−1}.
Example. For a training set S = {(x_i, y_i)}_{i=1}^m where the x_i's are drawn i.i.d. from N(0, (1/d) I_d), Assumption 2.3 holds with probability at least 1 − (2m e^{−d/500} + m² e^{−d/500} + 2m² d^{−log(d)/2}), for ψ = 0.1 and ϕ = 1.1 log(d)/√d (see Theorem A.1). Moreover, in Section 6 we show that Assumption 2.3 holds with high probability for a mixture of Gaussians.

3 Linear Predictors
In this section, we consider a linear predictor N(w, x) = ⟨w, x⟩ trained on a dataset S = {(x_i, y_i)}_{i=1}^m. Recall that when training a linear predictor, gradient flow converges in direction to the max-margin solution (i.e., the global optimum of Problem 2), and after time t it reaches an (ϵ_t, δ_t)-approximate KKT point of Problem 2, where (ϵ_t, δ_t) → (0, 0) as t → ∞. Moreover, recall that for linear predictors, Problem 2 has a unique global optimum.
The following theorem shows that unlearning using gradient ascent (denoted A_GA) is (ϵ, δ, τ)-successful w.r.t. a dataset S that satisfies Assumption 2.3 and a w which is an approximate KKT point according to Definition 2.1, in two distinct senses. In the first part (Item 1 below), we show it for τ = 0; that is, A_GA(w, S, l) is a linear predictor which is an approximate KKT point of the max-margin problem w.r.t. S \ {(x_l, y_l)}. Then, we show it for ϵ = δ = 0; namely, the cosine similarity between A_GA(w, S, l) and the max-margin predictor w.r.t. S \ {(x_l, y_l)} is large.

Theorem 3.1. Let 0 < ϵ_1, δ_1 ≤ 0.5 and ϵ_d < 0.1. Let x ↦ ⟨w, x⟩ be a linear predictor trained on a dataset S = {(x_i, y_i)}_{i=1}^m, where S satisfies Assumption 2.3 for ψ ≤ 0.1 and ϕ ≤ ϵ_d/(4m). Assume that w is an (ϵ_1, δ_1)-approximate KKT point for Problem 2 w.r.t.
S according to Definition 2.1. Then, the gradient ascent algorithm A_GA, with an appropriate step size, is an (ϵ, δ, τ)-successful unlearning algorithm w.r.t. w and S for:
1. The case ϵ = ϵ_1 + ϵ_1ϵ_d/(m − ϵ_d), δ = δ_1 + δ_1ϵ_d/(m − ϵ_d) + 7.2ϵ_d/m, τ = 0: the predictor A_GA(w, S, l) has the direction of an (ϵ, δ)-approximate KKT point for the margin-maximization problem (Problem 2) w.r.t. S \ {(x_l, y_l)}.
2. The case ϵ = δ = 0, τ = C(√ϵ_d + √ϵ_1 + √δ_1) for some universal constant C > 0: let w* be the max-margin linear predictor w.r.t. the remaining training set S \ {(x_l, y_l)}, i.e., the global optimum of Problem 2 w.r.t. S \ {(x_l, y_l)}. Then, cossim(A_GA(w, S, l), w*) ≥ 1 − τ.

We now briefly discuss the proof intuition. Due to the stationarity condition for w (Definition 2.1), we can express w as a weighted sum of the network's gradients, up to some error vector v_{ϵ_1} of norm ϵ_1:

w = Σ_{i=1}^m λ_i y_i ∇_w N(w, x_i) + v_{ϵ_1} = Σ_{i=1}^m λ_i y_i x_i + v_{ϵ_1} .

Then, by performing gradient ascent A_GA with the appropriate step size we get

A_GA(w, S, l) = Σ_{i=1}^m λ_i y_i ∇_w N(w, x_i) + v_{ϵ_1} − λ_l y_l ∇_w N(w, x_l) = Σ_{i∈[m]_{−l}} λ_i y_i x_i + v_{ϵ_1} .

First, one can see that the subtraction results in a stationarity condition w.r.t. S \ {(x_l, y_l)} with the original λ_i's. Observing the margin for a point (x_t, y_t) (for t ≠ l),

⟨w, x_t⟩ = Σ_{i=1}^m λ_i y_i ⟨x_i, x_t⟩ + ⟨v_{ϵ_1}, x_t⟩ ,

we get that the change in the parameter vector (due to the gradient step) results in an additional term of at most λ_l |⟨x_l, x_t⟩| compared to the original predictor's margin. Due to the near-orthogonality of the data points in S (Assumption 2.3), and a constant upper bound on λ_l which we prove, this difference is of order O(ϵ_d/m).
Regarding the proof of (2), we consider the representation of w*:

w* = Σ_{i=1}^m λ*_i y_i ∇_w N(w, x_i) = Σ_{i=1}^m λ*_i y_i x_i .

For i ∈ [m]_{−l} we prove a small O(ϵ_1 + ϵ_d) upper bound on the difference λ*_i − λ_i, which implies that the two predictors A_GA(θ, S, l) and w* independently reach very similar KKT multipliers for the margin-maximization problem (Definition 2.1). This yields a 1 − O(√ϵ_d + √ϵ_1 + √δ_1) lower bound on the cosine similarity. For the full proof we refer the reader to Appendix B.1.

[Figure 1: Effect of deviation from the correct step size on the KKT approximation parameter ϵ for a two-layer network. The x-axis shows the step size as a fraction of the step size from Theorem 4.1, and the y-axis shows the KKT approximation parameter ϵ of the unlearned model w.r.t. the retain dataset.]

4 Two-Layer ReLU Networks
In this section, we extend our analysis to two-layer ReLU neural networks. We consider a neural network of the form N(x, θ) = Σ_{j=1}^n u_j σ(w_j^⊤ x), trained on a dataset S = {(x_i, y_i)}_{i=1}^m. Note that, unlike the linear setting, the non-smoothness of N(x, θ) implies that even small perturbations in θ can cause significant shifts in the model's gradients. This introduces new challenges and, as a result, leads to a slightly weaker guarantee.
The following theorem establishes that unlearning using gradient ascent with an appropriate step size constitutes (ϵ, δ, τ)-successful unlearning w.r.t. a dataset S that satisfies Assumption 2.3 and a θ which is an approximate KKT point according to Definition 2.1, where ϵ, δ, and τ are small quantities determined by the KKT approximation parameters of θ and the underlying data characteristics. This implies that the unlearned parameter vector A_GA(θ, S, l) is close, in terms of cosine similarity, to an approximate KKT point θ̃ corresponding to the retained dataset S \ {(x_l, y_l)}.

Theorem 4.1. Let 0 < ϵ_1, δ_1 ≤ 1 and 0 < ϵ_d ≤ 0.01.
Let N(x, θ) = Σ_{j=1}^n u_j σ(w_j^⊤ x) be a two-layer ReLU network as defined in Eq. 1, such that θ is an (ϵ_1, δ_1)-approximate KKT point for Problem 2 w.r.t. S = {(x_i, y_i)}_{i=1}^m according to Definition 2.1, and suppose that S satisfies Assumption 2.3 for ψ ≤ 0.1 and ϕ ≤ ϵ_d/(4mn). Then, the gradient ascent algorithm A_GA with an appropriate step size is an (ϵ, δ, τ)-successful unlearning algorithm w.r.t. θ and S, for

ϵ = ϵ_1 + 9ϵ_d ϵ_1/(m − 9ϵ_d) + 23ϵ_d/√m ,  δ = δ_1 + 9ϵ_d δ_1/(m − 9ϵ_d) + 22.6ϵ_d/m ,  τ = 82ϵ_d/m .

In Figure 1, we show the effect of varying the step size around the appropriate value β_l from Theorem 4.1 when unlearning a point (x_l, y_l) ∈ S. The x-axis represents the step size as a fraction of β_l, and the y-axis shows the resulting KKT approximation parameter ϵ w.r.t. the retain dataset. We use a two-layer network (Eq. 1) trained on a 10-point dataset in ℝ^1000 and apply A_GA(θ, S, l) to a random data point. We can see that deviating significantly from β_l results in a worse approximation parameter. See Appendix E for more details.

4.1 Proof sketch
We now outline the main ideas behind the proof. In this setting, unlike the linear one, comparing the original parameter vector θ with the unlearned parameter vector A_GA(θ, S, l) is nontrivial. Although the unlearning procedure introduces only a small perturbation, it may lead to significant changes in the activation map, i.e., the pattern of neuron activations across the data. Specifically, we define the activation map as the set of neurons w_j that are active on a data point x_i, i.e., ⟨w_j, x_i⟩ ≥ 0. A key challenge arises when even small weight changes cause certain neurons to flip activation status. To address this, we introduce an additive correction term (or "fix") for each weight vector w_j, j ∈ [n], that restores the activation pattern. Using the stationarity conditions satisfied by θ (Definition 2.1), we express each w_j as a weighted sum of the network's gradients, up to a small error term v_{ϵ_1,j}:

w_j = Σ_{i=1}^m λ_i y_i ∇_{w_j} N(x_i, θ) + v_{ϵ_1,j} = u_j Σ_{i=1}^m λ_i y_i σ′_{i,j} x_i + v_{ϵ_1,j} ,

where σ′_{i,j} denotes the local derivative of the activation function. After applying the gradient ascent step, the contribution of the forgotten point (x_l, y_l) is removed, which may alter the activation state of some neurons. To mitigate this, we construct a correction vector using a small scaling factor c = O(ϵ_d/(mn)), forming a new weight vector:

w̃_j = w_j − u_j λ_l y_l σ′_{l,j} x_l + |u_j| λ_l σ′_{l,j} c Σ_{k∈[m]_{−l}} x_k sign(⟨x_k, w_j⟩) .

This correction reintroduces a small averaged influence from the retained points, specifically those on which w_j was previously active. For a data point x_t on which w_j was originally active, the new inner product becomes:

⟨w̃_j, x_t⟩ = ⟨w_j, x_t⟩ − u_j λ_l y_l σ′_{l,j} ⟨x_l, x_t⟩ + |u_j| λ_l σ′_{l,j} c Σ_{k∈[m]_{−l}} ⟨x_k, x_t⟩ sign(⟨x_k, w_j⟩) .

Since the data points x_l and x_t are nearly orthogonal (i.e., ⟨x_l, x_t⟩ = O(ϵ_d/(mn)), see Assumption 2.3), the middle term is of the same order as the correction, and thus the correction term restores the activation. As a result, the corrected weight vector w̃_j remains active on x_t, preserving the original activation map. This activation preservation is essential: it enables us to meaningfully compare θ and θ̃ in terms of margins, gradient differences, and parameter norms, facilitating the rest of the proof. In establishing stationarity, the fixed vector introduces an additional error term beyond the original stationarity bound. In addition, because the activation map is preserved, we can upper bound the change in the margins of the remaining data points by a small factor of order O(ϵ_d/(mn)).
Similar to the linear case, this margin deviation appears in both the upper and lower bounds, so we slightly rescale θ̃ to restore feasibility and obtain an approximate KKT point for Problem 2 with respect to the reduced dataset S \ {(x_l, y_l)}. To complete the proof, we show that A_GA(θ, S, l) remains close, in cosine similarity, to the rescaled θ̃, differing only by the small fix and the minor scaling. The complete proof is provided in Appendix C.2.

5 Unlearning batches of data points
In the previous sections, we analyzed the unlearning of a single data point. We now extend these results to the case of unlearning a set of data points. Let S_forget ⊆ S denote a subset of size k. We unlearn S_forget using a natural extension of the A_GA algorithm, namely by performing a step that consists of the k gradients of the points in S_forget, with appropriate coefficients. We denote this algorithm by A_{k-GA}. Formally, for some real coefficients {β_r}, the algorithm A_{k-GA} performs

A_{k-GA}(θ, S, S_forget) = θ + Σ_{(x_r,y_r)∈S_forget} β_r ∇_θ ℓ(y_r N(x_r, θ)) .

In the case of linear predictors, the algorithm A_{k-GA} still satisfies the result of Theorem 3.1, but with slightly modified additive terms in the bounds on the KKT-approximation parameters ϵ, δ, while the bound on the cosine similarity (i.e., the parameter τ) remains unchanged. See the formal statement and proof in Appendix B.2. For two-layer networks, we show that the result of Theorem 4.1 holds when unlearning a subset S_forget using the algorithm A_{k-GA}, but with slightly modified parameters ϵ, δ, τ. See Appendix C.3 for the formal statement and proof.

6 Generalization of the Unlearned Classifier
In this section, we show that if θ satisfies Definition 2.1 and the dataset S satisfies Assumption 2.3, then unlearning via a single gradient ascent step (i.e., A_GA) need not harm generalization. As a concrete example, we consider a data distribution D_MG such that a dataset drawn from it satisfies Assumption 2.3 w.h.p. with parameters ψ ≤ 0.1 and ϕ ≤ ϵ_d/(4mn). The distribution consists of two opposite Gaussian clusters, where the cluster means have magnitude d^{−α} for some α ∈ (0, 1/4), and each deviation from the mean is drawn as ζ ∼ N(0, (1/d) I_d). We show that both the original model and the unlearned model generalize well, that is, classify the clusters with high probability.
Formally, our data satisfies the following. We denote the dataset by S = {(x_i, y_i)}_{i=1}^m ∼ D_MG^m, where ∀i ∈ [m], (x_i, y_i) ∈ ℝ^d × {−1, 1}, and D_MG is as follows. It consists of a mixture of two Gaussians with means µ_+, µ_− ∈ ℝ^d, such that ∥µ_+∥ = d^{−α} for α ∈ (0, 1/4), and µ_− = −µ_+. For each i, we choose µ_i ∼ U{µ_+, µ_−}, then x_i ∼ N(µ_i, (1/d) I_d), and finally y_i = 1 if µ_i = µ_+ and y_i = −1 otherwise. Note that we can write x_i = µ_i + ζ_i, where ζ_i ∼ N(0, (1/d) I_d). We refer the reader to Lemma D.5, where we prove that for given ϵ_d > 0, m and α, the set S satisfies Assumption 2.3 for ψ ≤ 0.1 and ϕ ≤ ∥µ_i∥² + 2∥µ_i∥ log(d)/√d + 1.1 log(d)/√d ≤ ϵ_d/(4mn), w.p. ≥ 1 − (2m e^{−d/1700} + m² e^{−d/500} + 2m² d^{−log(d)/2}) for large enough d. A minimal sampling sketch of this data model is given below.
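The following sketch samples D_MG and empirically checks Assumption 2.3; the values of m, d, and α are illustrative placeholders, and the fixed mean direction is an arbitrary choice (the distribution is rotation invariant).

```python
import numpy as np

def sample_dmg(m, d, alpha=0.2, seed=0):
    """Sample m points from D_MG: two opposite Gaussian clusters with
    mean norm d^{-alpha} and per-coordinate noise variance 1/d."""
    rng = np.random.default_rng(seed)
    mu_plus = np.zeros(d)
    mu_plus[0] = d ** (-alpha)                 # ||mu_+|| = d^{-alpha}
    y = rng.choice([-1.0, 1.0], size=m)        # cluster label
    X = y[:, None] * mu_plus[None, :] \
        + rng.normal(0.0, 1.0 / np.sqrt(d), size=(m, d))  # x_i = mu_i + zeta_i
    return X, y

# Empirical check of Assumption 2.3 (near-unit norms, near-orthogonality):
X, y = sample_dmg(m=20, d=5000)
norms = (X ** 2).sum(axis=1)                   # should lie in [1 - psi, 1 + psi]
G = np.abs(X @ X.T - np.diag(norms))           # off-diagonal |<x_i, x_j>| <= phi
print(norms.min(), norms.max(), G.max())
```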
The following theorem shows that the unlearned network achieves generalization bounds comparable to those of the original classifier. Combined with the fact that it is close to an approximate KKT point of Problem 2 with respect to the retained dataset (as established in Theorem 4.1), this demonstrates a clean setting where unlearning is successful and does not hurt generalization.

Theorem 6.1. Let 0 < ϵ_d ≤ 0.01. Let N(x, θ) = Σ_{j=1}^n u_j σ(w_j^⊤ x) be a two-layer ReLU network as defined in Eq. 1, such that θ is a KKT point for Problem 2 w.r.t. S = {(x_i, y_i)}_{i=1}^m ∼ D_MG^m according to Definition 2.1. Fix l ∈ [m] and denote by A_GA(θ, S, l) the parameter vector obtained by the gradient ascent algorithm A_GA for the data point (x_l, y_l) ∈ S with the appropriate step size from Theorem 4.1. Then, w.p. ≥ 1 − (2m e^{−d/1700} + m² e^{−d/500} + 2m² d^{−log(d)/2}) over the choice of the dataset S, both N(x, θ) and N(x, A_GA(θ, S, l)) generalize. Namely,

Pr_{(x_t,y_t)∼D_MG} [ y_t N(x_t, θ) > 0 ] ≥ 1 − (2e^{−d/1700} + m e^{−d/500} + 2m d^{−log(d)/2}) ,

Pr_{(x_t,y_t)∼D_MG} [ y_t N(x_t, A_GA(θ, S, l)) > 0 ] ≥ 1 − (2e^{−d/1700} + m e^{−d/500} + 2m d^{−log(d)/2}) .

We briefly outline the intuition behind the generalization proof. Due to the small cluster means and relatively large variance, the data points in S are nearly orthogonal. Although the deviation from orthogonality is small, it is crucially structured: the sign of the inner product is determined by whether two points belong to the same cluster or to different clusters, namely

x_i, x_j are in the same cluster ⇒ ⟨x_i, x_j⟩ > 0 ,
x_i, x_j are in different clusters ⇒ ⟨x_i, x_j⟩ < 0 .

Now, using the fact that the classifier θ satisfies the stationarity conditions with respect to S (Definition 2.1), we write it as the weighted sum of its gradient directions and consider its inner product with some x_t ∼ D_MG:

⟨w_j, x_t⟩ = ⟨ Σ_{i=1}^m λ_i y_i ∇_{w_j} N(x_i, θ), x_t ⟩ = u_j Σ_{i=1}^m λ_i y_i σ′_{i,j} ⟨x_i, x_t⟩ .

Since the inner product and the label align, the activation map has the same sign as u_j; hence each training point contributes positively to the classification of other points in the same cluster, and negatively to the others. This similarity of contribution implies that removing a point from S during unlearning does not significantly degrade the model's classification accuracy. The full proof is provided in Appendix D.2. Finally, we note that Theorem 6.1 can be readily extended to the case of unlearning a subset of data points using the algorithm A_{k-GA} discussed in Section 5.

7 Discussion and future work
In this work, we analyze the theoretical effectiveness of a single gradient-ascent step as a machine unlearning algorithm. Focusing on post-training unlearning methods, we propose a new criterion for unlearning success, called (ϵ, δ, τ)-successful unlearning, based on approximate satisfaction of the KKT conditions. We prove that, in both linear models and two-layer neural networks, applying a gradient-ascent step A_GA with an appropriate step size w.r.t. the point we wish to forget is an (ϵ, δ, τ)-successful unlearning algorithm with some small ϵ, δ, τ, for a dataset S that satisfies Assumption 2.3 and a parameter vector θ that is an approximate KKT point according to Definition 2.1. In the linear case, we additionally achieve near-exact recovery of the margin-maximizing predictor, implying stronger unlearning guarantees. We also demonstrate a clean distribution where unlearning is both successful and does not hurt generalization. Together, our results offer a rigorous foundation for analyzing gradient-based unlearning and confirm the practical utility of this simple yet widely used technique.
This work opens several avenues for further exploration. First, while we focus on a gradient-ascent step, it would be valuable to analyze the effect of an additional recovery phase on the retained data, including those used in NegGrad+ and related variants, under the same KKT-based framework.
Second, it would be interesting to develop tighter bounds connecting approximate KKT satisfaction with practical privacy metrics, such as membership inference risk. On the applied side, evaluating unlearning methods under the new success criterion can lead to interesting comparisons between different methods. Moreover, a broader integration of our theoretical criterion with empirical privacy guarantees (e.g., differential privacy) could help bridge the gap between formal definitions and real-world deployment in safety-critical applications. Finally, extending our results to deeper architectures and additional distributions remains an important challenge.

Acknowledgments
GV is supported by the Israel Science Foundation (grant No. 2574/25), by a research grant from Mortimer Zuckerman (the Zuckerman STEM Leadership Program), and by research grants from the Center for New Scientists at the Weizmann Institute of Science, and the Shimon and Golde Picker – Weizmann Annual Grant.

References
Lucas Bourtoule, Varun Chandrasekaran, Christopher A Choquette-Choo, Hengrui Jia, Adelin Travers, Baiwu Zhang, David Lie, and Nicolas Papernot. Machine unlearning. In 2021 IEEE Symposium on Security and Privacy (SP), pp. 141–159. IEEE, 2021.
Yinzhi Cao and Junfeng Yang. Towards making systems forget with machine unlearning. In 2015 IEEE Symposium on Security and Privacy, pp. 463–480, 2015. doi: 10.1109/SP.2015.35.
Nicholas Carlini, Chang Liu, Úlfar Erlingsson, Jernej Kos, and Dawn Song. The secret sharer: Evaluating and testing unintended memorization in neural networks. In 28th USENIX Security Symposium (USENIX Security 19), pp. 267–284, 2019.
Dasol Choi and Dongbin Na. Towards machine unlearning benchmarks: Forgetting the personal identities in facial recognition systems. arXiv preprint arXiv:2311.02240, 2023.
Cynthia Dwork, Aaron Roth, et al. The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 9(3–4):211–407, 2014.
Matt Fredrikson, Somesh Jha, and Thomas Ristenpart. Model inversion attacks that exploit confidence information and basic countermeasures. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, pp. 1322–1333, 2015.
Spencer Frei, Gal Vardi, Peter L Bartlett, Nathan Srebro, and Wei Hu. Implicit bias in leaky ReLU networks trained on high-dimensional data. arXiv preprint arXiv:2210.07082, 2022.
Antonio Ginart, Melody Guan, Gregory Valiant, and James Y Zou. Making AI forget you: Data deletion in machine learning. Advances in Neural Information Processing Systems, 32, 2019.
Aditya Golatkar, Alessandro Achille, and Stefano Soatto. Eternal sunshine of the spotless net: Selective forgetting in deep networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9304–9312, 2020.
Laura Graves, Vineel Nagisetty, and Vijay Ganesh. Amnesiac machine learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pp. 11516–11524, 2021.
Suriya Gunasekar, Jason D Lee, Daniel Soudry, and Nati Srebro. Implicit bias of gradient descent on linear convolutional networks. In Advances in Neural Information Processing Systems, pp. 9461–9471, 2018.
Chuan Guo, Tom Goldstein, Awni Hannun, and Laurens Van Der Maaten. Certified data removal from machine learning models. arXiv preprint arXiv:1911.03030, 2019.
Niv Haim, Gal Vardi, Gilad Yehudai, Ohad Shamir, and Michal Irani. Reconstructing training data from trained neural networks.
Advances in Neural Information Processing Systems, 35:22911–22924, 2022.
Ziwei Ji and Matus Telgarsky. Directional convergence and alignment in deep learning. Advances in Neural Information Processing Systems, 33:17176–17186, 2020.
Meghdad Kurmanji, Peter Triantafillou, Jamie Hayes, and Eleni Triantafillou. Towards unbounded machine unlearning. Advances in Neural Information Processing Systems, 36:1957–1987, 2023.
B. Laurent and P. Massart. Adaptive estimation of a quadratic functional by model selection. The Annals of Statistics, 28(5):1302–1338, 2000. doi: 10.1214/aos/1015957395. URL https://doi.org/10.1214/aos/1015957395.
Kaifeng Lyu and Jian Li. Gradient descent maximizes the margin of homogeneous neural networks. arXiv preprint arXiv:1906.05890, 2019.
Behnam Neyshabur, Srinadh Bhojanapalli, David McAllester, and Nati Srebro. Exploring generalization in deep learning. In Advances in Neural Information Processing Systems, pp. 5947–5956, 2017.
Jun Niu, Peng Liu, Xiaoyan Zhu, Kuo Shen, Yuecong Wang, Haotian Chi, Yulong Shen, Xiaohong Jiang, Jianfeng Ma, and Yuqing Zhang. A survey on membership inference attacks and defenses in machine learning. Journal of Information and Intelligence, 2(5):404–454, 2024. ISSN 2949-7159. doi: 10.1016/j.jiixd.2024.02.001. URL https://www.sciencedirect.com/science/article/pii/S2949715924000064.
Ayush Sekhari, Jayadev Acharya, Gautam Kamath, and Ananda Theertha Suresh. Remember what you want to forget: Algorithms for machine unlearning. Advances in Neural Information Processing Systems, 34:18075–18086, 2021.
Daniel Soudry, Elad Hoffer, Mor Shpigel Nacson, Suriya Gunasekar, and Nathan Srebro. The implicit bias of gradient descent on separable data. Journal of Machine Learning Research, 19(70):1–57, 2018.
Eleni Triantafillou, Peter Kairouz, Fabian Pedregosa, Jamie Hayes, Meghdad Kurmanji, Kairan Zhao, Vincent Dumoulin, Julio Jacques Junior, Ioannis Mitliagkas, Jun Wan, et al. Are we making progress in unlearning? Findings from the first NeurIPS unlearning competition. arXiv preprint arXiv:2406.09073, 2024.
Gal Vardi. On the implicit bias in deep-learning algorithms. Communications of the ACM, 66(6):86–93, 2023.
Gal Vardi, Gilad Yehudai, and Ohad Shamir. Gradient methods provably converge to non-robust networks. Advances in Neural Information Processing Systems, 35:20921–20932, 2022.
Alexander Warnecke, Lukas Pirch, Christian Wressnegger, and Konrad Rieck. Machine unlearning of features and labels. arXiv preprint arXiv:2108.11577, 2021.
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning (still) requires rethinking generalization. Communications of the ACM, 64(3):107–115, 2021.

A Proofs of data preliminaries for Section 2.1

Theorem A.1. Let S = {(x_i, y_i)}_{i=1}^m be a set such that for all i, x_i ∈ ℝ^d with x_i ∼ N(0, (1/d) I_d), y_i ∈ {−1, 1}, and let n ∈ ℕ. Then, w.p. ≥ 1 − (2m e^{−d/500} + m² e^{−d/500} + 2m² d^{−log(d)/2}), the dataset S satisfies Assumption 2.3 for ψ = 0.1 and ϕ = 1.1 log(d)/√d.

Proof: Assumption 2.3 has two conditions:
1. For all (x, y) ∈ S, ∥x∥² ∈ [1 − ψ, 1 + ψ]. This follows from Lemma A.7 w.p. ≥ 1 − 2m e^{−d/500}.
2. For all (x_i, y_i), (x_j, y_j) ∈ S with i ≠ j, |⟨x_i, x_j⟩| ≤ ϕ. From Lemma A.8 we have that, w.p. ≥ 1 − (m² e^{−d/500} + 2m² d^{−log(d)/2}), for all (x_i, y_i), (x_j, y_j) ∈ S, |⟨x_i, x_j⟩| ≤ 1.1 log(d)/√d.

Lemma A.1. Let w ∈ ℝ^n be such that w ∼ N(0, σ² I_n). Then:

P[ ∥w∥² ≤ 0.9 σ² n ] ≤ e^{−n/400} .

Proof: Note that ∥w/σ∥² has the chi-squared distribution.
Lemma A.1. Let $w \in \mathbb{R}^n$ with $w \sim \mathcal{N}(0, \sigma^2 I_n)$. Then:
\[ \Pr\big[ \|w\|^2 \leq 0.9\,\sigma^2 n \big] \leq e^{-n/400} . \]
Proof: Note that $\|\frac{w}{\sigma}\|^2$ has the chi-squared distribution. A concentration bound by Laurent and Massart (Laurent & Massart, 2000, Lemma 1) implies that for all $t > 0$ we have
\[ \Pr\Big[\, n - \big\|\tfrac{w}{\sigma}\big\|^2 \geq 2\sqrt{nt} \,\Big] \leq e^{-t} . \]
Plugging in $t = c \cdot n$, we get
\[ \Pr\Big[\, n - \big\|\tfrac{w}{\sigma}\big\|^2 \geq 2\sqrt{c}\,n \,\Big] = \Pr\Big[ \big\|\tfrac{w}{\sigma}\big\|^2 \leq (1 - 2\sqrt{c})\,n \Big] \leq e^{-cn} . \]
Thus, for $c = \frac{1}{400}$ we have
\[ \Pr\Big[ \big\|\tfrac{w}{\sigma}\big\|^2 \leq \big(1 - \tfrac{2}{\sqrt{400}}\big)\,n \Big] = \Pr\Big[ \big\|\tfrac{w}{\sigma}\big\|^2 \leq \tfrac{9}{10}\,n \Big] \leq e^{-n/400} , \]
and finally,
\[ \Pr\big[ \|w\|^2 \leq \tfrac{9}{10}\,\sigma^2 n \big] \leq e^{-n/400} . \]

Lemma A.2. Let $w \in \mathbb{R}^n$ with $w \sim \mathcal{N}(0, \sigma^2 I_n)$. Then:
\[ \Pr\big[ \|w\|^2 \geq 1.1\,\sigma^2 n \big] \leq e^{-n/500} . \]
Proof: Note that $\|\frac{w}{\sigma}\|^2$ has the chi-squared distribution. The same concentration bound (Laurent & Massart, 2000, Lemma 1) implies that for all $t > 0$ we have
\[ \Pr\Big[ \big\|\tfrac{w}{\sigma}\big\|^2 - n \geq 2\sqrt{nt} + 2t \Big] \leq e^{-t} . \]
Plugging in $t = c \cdot n$, we get
\[ \Pr\Big[ \big\|\tfrac{w}{\sigma}\big\|^2 - n \geq 2\sqrt{c}\,n + 2cn \Big] = \Pr\Big[ \big\|\tfrac{w}{\sigma}\big\|^2 \geq (2\sqrt{c} + 2c + 1)\,n \Big] \leq e^{-cn} . \]
Thus, for $c = \frac{1}{500}$ we have
\[ \Pr\Big[ \big\|\tfrac{w}{\sigma}\big\|^2 \geq 1.1\,n \Big] \leq \Pr\Big[ \big\|\tfrac{w}{\sigma}\big\|^2 \geq \big(\tfrac{2}{\sqrt{500}} + \tfrac{2}{500} + 1\big)\,n \Big] \leq e^{-n/500} , \]
and finally,
\[ \Pr\big[ \|w\|^2 \geq 1.1\,\sigma^2 n \big] \leq e^{-n/500} . \]

Lemma A.3. For any $i \in [m]$, with probability $\geq 1 - 2e^{-d/500}$, $\|x_i\|^2 \in [0.9, 1.1]$.

Proof: Apply Lemma A.1 with $n = d$ and $\sigma^2 = \frac{1}{d}$ to lower bound $\|x_i\|^2$ w.p. $\geq 1 - e^{-d/400}$, and Lemma A.2 to upper bound $\|x_i\|^2$ w.p. $\geq 1 - e^{-d/500}$; a union bound gives the claim.

Lemma A.4. Let $u \in \mathbb{R}^n$ and $v \sim \mathcal{N}(0, \sigma^2 I_n)$. Then, for every $t > 0$ we have
\[ \Pr\big[ |\langle u, v \rangle| \geq \|u\|\,t \big] \leq 2\exp\Big(-\frac{t^2}{2\sigma^2}\Big) . \]
Proof: We first consider $\langle \frac{u}{\|u\|}, v\rangle$. As the distribution $\mathcal{N}(0, \sigma^2 I_n)$ is rotation invariant, one can rotate $u$ and $v$ into $\tilde{u}$ and $\tilde{v}$ such that $\frac{\tilde{u}}{\|u\|} = e_1$, the first standard basis vector, and $\langle \frac{u}{\|u\|}, v\rangle = \langle \frac{\tilde{u}}{\|u\|}, \tilde{v}\rangle$. Note that $v$ and $\tilde{v}$ have the same distribution, and $\langle \frac{\tilde{u}}{\|u\|}, \tilde{v}\rangle \sim \mathcal{N}(0, \sigma^2)$ since it is the first coordinate of $\tilde{v}$. By a standard Gaussian tail bound, for $t > 0$:
\[ \Pr\Big[ \big|\langle \tfrac{u}{\|u\|}, v\rangle\big| \geq t \Big] = \Pr\big[ |\tilde{v}_1| \geq t \big] \leq 2\exp\Big(-\frac{t^2}{2\sigma^2}\Big) . \]
Therefore $\Pr\big[ |\langle u, v\rangle| \geq \|u\|\,t \big] \leq 2\exp\big(-\frac{t^2}{2\sigma^2}\big)$.

Lemma A.5. Let $u \sim \mathcal{N}(0, \sigma_1^2 I_n)$ and $v \sim \mathcal{N}(0, \sigma_2^2 I_n)$. Then, for every $t > 0$ we have
\[ \Pr\big[ |\langle u, v\rangle| \geq 1.1\,\sigma_1 \sqrt{n}\,t \big] \leq e^{-n/500} + 2e^{-t^2/2\sigma_2^2} . \]
Proof: By Lemma A.2, w.p. $\leq e^{-n/500}$ we have $\|u\| \geq 1.1\,\sigma_1\sqrt{n}$. Moreover, by Lemma A.4, w.p. $\leq 2\exp\big(-\frac{t^2}{2\sigma_2^2}\big)$ we have $|\langle u, v\rangle| \geq \|u\|\,t$. By the union bound, we get
\[ \Pr\big[ |\langle u, v\rangle| \geq 1.1\,\sigma_1\sqrt{n}\,t \big] \leq \Pr\big[ \|u\| \geq 1.1\,\sigma_1\sqrt{n} \big] + \Pr\big[ |\langle u, v\rangle| \geq \|u\|\,t \big] \leq e^{-n/500} + 2\exp\Big(-\frac{t^2}{2\sigma_2^2}\Big) . \]

Lemma A.6. Let $u, v \sim \mathcal{N}(0, \frac{1}{d} I_d)$. Then,
\[ \Pr\Big[ |\langle u, v\rangle| \geq \frac{1.1\log(d)}{\sqrt{d}} \Big] \leq e^{-d/500} + 2d^{-\log(d)/2} . \]
Proof: Apply Lemma A.5 with $n = d$, $\sigma_1 = \sigma_2 = \frac{1}{\sqrt{d}}$ and $t = \frac{\log(d)}{\sqrt{d}}$.

Lemma A.7. Let $S = \{(x_i, y_i)\}_{i=1}^m$ be a dataset such that for all $i$, $x_i \in \mathbb{R}^d$ and $x_i \sim \mathcal{N}(0, \frac{1}{d} I_d)$, with $m \leq d$. Then, w.p. $\geq 1 - 2me^{-d/500}$, for all $(x, y) \in S$, $\|x\|^2 \in [0.9, 1.1]$.

Proof: We prove both the upper and the lower bound. First,
\[ \Pr\Big[ \min_{i \in [m]} \|x_i\|^2 < 0.9 \Big] = \Pr\big[ \exists i \in [m] : \|x_i\|^2 < 0.9 \big] \leq \sum_{i=1}^m \Pr\big[ \|x_i\|^2 < 0.9 \big] \leq m e^{-d/400} , \]
where the last inequality holds due to Lemma A.1. Similarly,
\[ \Pr\Big[ \max_{i \in [m]} \|x_i\|^2 > 1.1 \Big] = \Pr\big[ \exists i \in [m] : \|x_i\|^2 > 1.1 \big] \leq \sum_{i=1}^m \Pr\big[ \|x_i\|^2 > 1.1 \big] \leq m e^{-d/500} , \]
where the last inequality holds due to Lemma A.2, and the claim follows.

Lemma A.8. Let $S = \{(x_i, y_i)\}_{i=1}^m$ be a dataset such that for all $i$, $x_i \in \mathbb{R}^d$ and $x_i \sim \mathcal{N}(0, \frac{1}{d} I_d)$, with $m \leq d$. Then, w.p. $\geq 1 - \big(m^2 e^{-d/500} + 2m^2 d^{-\log(d)/2}\big)$, for all $(x_i, y_i), (x_j, y_j) \in S$,
\[ |\langle x_i, x_j \rangle| \leq \frac{1.1\log(d)}{\sqrt{d}} . \]
Proof: We prove the upper bound:
\[ \Pr\Big[ \max_{i \neq j} |\langle x_i, x_j\rangle| > \frac{1.1\log(d)}{\sqrt{d}} \Big] = \Pr\Big[ \exists i, j \in [m] : |\langle x_i, x_j\rangle| > \frac{1.1\log(d)}{\sqrt{d}} \Big] \leq \sum_{i=1}^m \sum_{j=1}^m \Pr\Big[ |\langle x_i, x_j\rangle| > \frac{1.1\log(d)}{\sqrt{d}} \Big] \leq m^2 e^{-d/500} + 2m^2 d^{-\log(d)/2} , \]
where the last inequality holds due to Lemma A.6.

B Proofs for Section 3

Lemma B.1. Let $\epsilon_d, \epsilon, \delta \leq 0.5$, let $N(w, x)$ be a linear classifier trained on a dataset $S = \{(x_i, y_i)\}_{i=1}^m$, assume that $w$ is an $(\epsilon, \delta)$-approximate KKT point satisfying Definition 2.1, and that $S$ satisfies Assumption 2.3 for $\psi \leq 0.1$, $\phi \leq \frac{\epsilon_d}{4m}$.
Note, for readability of the proof we denote ϵ1 by ϵ and δ1 by δ. Then, max i λi ≤2.4 14 Proof: We look at λr = maxi λi. If λr = 0 we are done, since the r.h.s is non-negative. Otherwise, we define vϵ = w −Pm i=1 λiyixi, and by item (2) from Definition 2.1 we have that ∥vϵ∥≤ϵ. Hence, we have w = m X i=0 λiyixi + vϵ , and from item (3) of Definition 2.1 and λr > 0, we have 1 + δ λr ≥yrN(w, xr) ≥1. Therefore, 1 + δ λr ≥yrN(w, xr) = yr m X i=0 λiyi⟨xi, xr⟩+ yr⟨xr, vϵ⟩=λr ∥xr∥2 + yr X i̸=r∈[m] λiyi⟨xi, xr⟩+ yr⟨xr, vϵ⟩ ≥λr(1 −ψ) − X i̸=r∈[m] λi|⟨xi, xr⟩| −∥xr∥∥vϵ∥ ≥λr(1 −ψ) −λr · ϕ(m −1) −ϵ p 1 + ψ where the last two inequalities holds due to Assumption 2.3 and Cauchy-Schwartz inequality. Solving for λr leads to to λ2 r ((1 −ψ) −ϕ(m −1)) −(1 + ϵ p 1 + ψ)λr −δ ≤0 . Since ψ ≤0.1 and ϕ ≤ ϵd 4m we get (1 −ψ) −ϕ(m −1) ≥0.9 −(m −1) ϵd 4m ≥0.9 −ϵd 4 > 0 , and we get that λr ≤(1 + ϵ√1 + ψ) + p (1 + ϵ√1 + ψ)2 + 4((1 −ψ) −ϕ(m −1))δ 2((1 −ψ) −ϕ(m −1)) (4) Plugging in ϵ, δ ≤0.5, ψ ≤0.1 and ϕ ≤ ϵd 4m, we get λr ≤(1 + ϵ√1 + ψ) + p (1 + ϵ√1 + ψ)2 + 4((1 −ψ) −ϕ(m −1))δ 2((1 −ψ) −ϕ(m −1)) ≤ ≤ (1 + 0.5 √ 1.1) + q (1 + 0.5 √ 1.1)2 + 2 2(0.9 −ϵd 4m(m −1)) ≤ (1 + 0.5 √ 1.1) + q (1 + 0.5 √ 1.1)2 + 2 2(0.9 −1 8) ≤3.61 1.55 ≤2.4 . Lemma B.2. Let ϵd, ϵ, δ ≤0.5 and let N(w, x) be a linear classifier trained on a dataset S = {(xi, yi)}m i=1, and assume that w is an (ϵ, δ)-approximate KKT point satisfying Definition 2.1, and S satisfies Assumption 2.3 for ψ ≤0.1, ϕ ≤ ϵd 4m. Let t ∈[m].Then, 1 ∥xt∥2 −0.6ϵd + 1.1ϵ ∥xt∥2 ≤λt ≤ 1 ∥xt∥2 + 1.2ϵd + 2.15ϵ + 2.2δ ∥xt∥2 . Proof: We begin showing the result for the more general case of ϵ, δ ≤0.5. Let t ∈[m]. Looking at an upper bound of the margin, we have 15 1 ≤ytN(w, xt) = yt m X i=1 λiyi⟨xi, xt⟩+ yt⟨vϵ, xt⟩≤λt ∥xt∥2 + X i̸=t∈[m] λi|⟨xi, xt⟩| + ⟨vϵ, xt⟩ ≤λt ∥xt∥2 + ϕ(m −1) max p λp + ⟨vϵ, xt⟩ ≤λt ∥xt∥2 + 2.4ϕ(m −1) + ϵ ∥xt∥, where the last inequality hold due to Lemma B.1 and Cauchy-Schwartz inequality. We solve it for λt with plugging in ϕ ≤ ϵd 4m getting a lower bound for it λt ≥ 1 ∥xt∥2 −2.4ϕ(m −1) ∥xt∥2 − ϵ ∥xt∥≥ 1 ∥xt∥2 −0.6ϵd + 1.1ϵ ∥xt∥2 . We note that 1 −0.6ϵd −1.1ϵ ≥0.15 > 0, the therefore λt > 0. Next, to find an upper bound for λt, we look at a lower bound of the margin 1 + δ λt ≥ytN(w, xt) = yt m X i=1 λiyi⟨xi, xt⟩+ yt⟨vϵ, xt⟩≥λt ∥xt∥2 − X i̸=t∈[m] λi|⟨xi, xt⟩| −⟨vϵ, xt⟩ ≥λt ∥xt∥2 −ϕ(m −1) max p λp −⟨vϵ, xt⟩ ≥λt ∥xt∥2 −2.4ϕ(m −1) −ϵ ∥xt∥, where again the last inequalities holds due to Lemma B.1 Cauchy-Schwartz inequality. We get λ2 t ∥xt∥2 −λt(1 + 2.4ϕ(m −1) + ϵ ∥xt∥) −δ ≤0 and solve for λt with plugging in ϕ ≤ ϵd 4m, ∥xt∥2 ≤(1 −ψ), ψ ≤0.1 we get an upper bound for λt λt ≤ (1 + 2.4ϕ(m −1) + ϵ ∥xt∥) + q (1 + 2.4ϕ(m −1) + ϵ ∥xt∥)2 + 4 ∥xt∥2 δ 2 ∥xt∥2 ≤1 + 2.4 ϵd 4m(m −1) + ϵ√1 + ψ + 1 + 2.4 ϵd 4m(m −1) + ϵ(1 + ψ) + 4δ(1 + ψ) 2 ∥xt∥2 ≤ 1 ∥xt∥2 + 2.4 ϵd 4m(m −1) + ϵ√1 + ψ + 2.4 ϵd 4m(m −1) + ϵ(1 + ψ) + 4δ(1 + ψ) 2 ∥xt∥2 ≤ 1 ∥xt∥2 + 2.4 ϵd 4 + ϵ √ 1.1 + 2.4 ϵd 4 + ϵ(1.1) + 4δ(1.1) 2 ∥xt∥2 ≤ 1 ∥xt∥2 + 1.2ϵd + 2.15ϵ + 2.2δ ∥xt∥2 . which finishes the proof. We next define an (ϵ, δ, γ)-approximate KKT. It is very similar to the (ϵ, δ)-approximate KKT definition given in Definition 2.1, with an extra γ relaxation of the margin. Definition B.1. A (ϵ, δ, γ)-approximate KKT for minθ 1 2 ∥θ∥2 s.t.∀i ∈[m], yiN(θ, xi) ≥1: ∃λ1, ..., λm such that 1. λ1, ..., λm ≥0 2. θ − m P i=1 λiyi∇θN(θ, xi) 2 ≤ϵ 16 3. ∀i ∈[m], λi (yiN(θ, xi) −1) ≤δ 4. 
∀i ∈[m], yiN(θ, xi) ≥1 −γ Now, we show that scaling an (ϵ, δ, γ)-approximate KKT can result in an (ϵ′, δ′)-approximate KKT, and determine the scaling effect on the approximation parameters. Lemma B.3. Let a network N(θ, x) be such that N(θ, x) is a 1-homogeneous function with respect to the weights. Let S = {(xi, yi)}m i=1 be a dataset. Then, if θ is a (ϵ, δ, γ)-approximate KKT (according to the above Definition B.1) w.r.t S with corresponding {λi}m i=1, then 1 1−γ θ is a ( 1 1−γ ϵ, maxp λp γ 1−γ + 1 1−γ δ)-approximate KKT (according to Definition 2.1) w.r.t S with with the corresponding λ′ i = Cλi . Proof: Let N(θ, x) a 1-homogeneous function with respect to the weights, and θ be a (ϵ, δ, γ)-approximate KKT. From 1-homogeneity, for all C > 0 N(Cθ, x) = CN(θ, x) and the gradient is 0-homogeneous, meaning ∇θN(Cθ, x) = ∇θN(θ, x) . We denote C = 1 1−γ , and show that Cθ satisfies the conditions in Definition 2.1. 1. Cθ − m P i=2 Cλiyi∇θN(Cθ, xi) = C θ − m P i=2 λiyi∇θN(θ, xi) ≤Cϵ. 2. Let i ∈[m]. Then, yiN(Cθ, xi) = CyiN(θ, xi) ≥C(1 −γ) = 1 3. Let i ∈[m]. Assume λi (yiN(θ, xi) −1) ≤δ. If λi = 0 we are done. Else, λi > 0 and yiN(θ, xi) ≤1 + δ λi . Then, λi (yiN(Cθ, xi) −1) = λi (CyiN(θ, xi) −1) ≤ ≤λi  C(1 + δ λi ) −1  = λi(C −1) + Cδ ≤max p λp γ 1 −γ + 1 1 −γ δ , which finishes the proof. B.1 Proof for Theorem 3.1 Proof: Note, for readability of the proof we denote ϵ1 by ϵ and δ1 by δ. Using the stationarity condition in Definition 2.1 for w, we denote vϵ = w − m P i=1 λiyi∇wN(w, xi), so we get that ∥vϵ∥≤ϵ and w = m X i=1 λiyi∇wN(w, xi) + vϵ = m X i=1 λiyixi + vϵ . Let l ∈[m], we wish to take a negative gradient step of size β, such that β∇wℓ(ylN(w, xl)) = −λlyl∇wN(w, xl) so we pick a step size β = −λl ℓ′(ylN(w,xl)). Then, when taking one gradient ascent step for (xl, yl) of size β, we get the following ˆw ˆw = m X i=1 λiyi∇wN(w, xi) + vϵ −λlyl∇wN(w, xr) = X i∈[m]−l λiyixi + vϵ . 17 B.1.1 Proof of 1. ˆw has the direction of an (ϵ+ ϵϵd m−ϵd , δ + δϵd m−ϵd + 7.2ϵd m )-approximate KKT point for the margin maximization problem for S \ (xl, yl). For readability, we show that ˆw satisfies the conditions for (ϵ, δ + 1.44ϵd m , 0.6ϵd m )-approximate KKT by Definition B.1, and then use Lemma B.3 to deduce that 1 1− 0.6ϵd m ˆw satisfies the conditions for (ϵ+ ϵϵd m−ϵd , δ+ δϵd m−ϵd + 7.2ϵd m )-approximate KKT according to Definition 2.1. (1) Dual Feasibility: For all i ∈[m]−l, λi ≥0. directly from dual feasibility for w (Definition 2.1). (2) Stationarity: ˆw − m P i=1 λiyi∇wN( ˆw, xi) ≤ϵ. Since ∇wN( ˆw, x) = ∇wN(w, x) = x, one can write ˆw = X i∈[m]−l λiyixi + vϵ = X i∈[m]−l λiyi∇wN( ˆw, xi) + vϵ and the claim follows from (2) stationarity for w (Definition 2.1). Let t ∈[m]−l. Using the definitions of w and ˆw, we can write the margin as ytN(w, xt) = yt m X i=1 λiyi⟨xi, xt⟩+ yt⟨vϵ, xt⟩= ytN( ˆw, xt) + ytλlyl⟨xl, xt⟩. (5) Using this equality we prove the next two conditions: (3) Complementarity Slackness: For all t ∈[m]−l, λt (ytN( ˆw, xt) −1) ≤δ + 1.44ϵd m . If λt = 0 we are done. Else, λt > 0. From complementarity slackness of w being an (ϵ, δ)-approximate KKT, we know that ytN(w, xt) ≤1 + δ λt . We use 5 to lower bound the margin of ytN(w, xt), getting 1 + δ λt ≥ytN(w, xt) =ytN( ˆw, xt) + ytλlyl|⟨xl, xt⟩| ≥ytN( ˆw, xt) −λl|⟨xl, xt⟩| ≥ytN( ˆw, xt) −ϕ max p λp , plugging in ϕ ≤ ϵd 4m and the λp upper bound from Lemma B.1 we get ytN( ˆw, xt) −ϕ max p λp ≥ytN( ˆw, xt) −ϵd 4m2.4 ≥ytN( ˆw, xt) −0.6ϵd m . 
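As a brief aside before completing the remaining conditions: the single ascent step used to define $\hat{w}$ above is easy to sanity-check numerically. The sketch below is our own illustration, with synthetic dual coefficients $\lambda_i$ and $v_\epsilon = 0$ (assumptions of the demo, not of the theorem); under the exponential loss it verifies that the step of size $\beta = -\lambda_l / \ell'(y_l N(w, x_l))$ removes exactly the $l$-th summand of $w$.

```python
import numpy as np

rng = np.random.default_rng(0)
m, d, l = 20, 2000, 7
X = rng.normal(0.0, 1.0 / np.sqrt(d), size=(m, d))   # rows x_i ~ N(0, I_d / d)
y = rng.choice([-1.0, 1.0], size=m)
lam = rng.uniform(0.9, 1.1, size=m)                  # hypothetical dual coefficients
w = (lam * y) @ X                                    # w = sum_i lam_i y_i x_i  (v_eps = 0 here)

dloss = lambda z: -np.exp(-z)                        # ell(z) = exp(-z), so ell'(z) = -exp(-z)
margin_l = y[l] * (w @ X[l])
beta = -lam[l] / dloss(margin_l)                     # the step size chosen in the proof
w_hat = w + beta * dloss(margin_l) * y[l] * X[l]     # one ascent step on the forgotten point

# The step removes exactly the l-th summand of w:
assert np.allclose(w_hat, w - lam[l] * y[l] * X[l])
```

Returning to the margin bound just derived: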
We deduce an upper bound for the margin of N( ˆw, xt)- ytN( ˆw, xt) ≤1 + δ λt + 0.6ϵd m = 1 + δ + 3 5mλtϵd λt ≤1 + δ + 3 5m2.4ϵd λt ≤1 + δ + 1.44ϵd m λt as desired. (4) Primal Feasibility: For all t ∈[m]−l, ytN( ˆw, xt) ≥1−0.6ϵd m . We use 5 to lower bound the margin of N( ˆw, xt), and use primal feasibility for w (Definition 2.1), getting ytN( ˆw, xt) = ytN(w, xt) −ytλlyl|⟨xl, xt⟩| ≥1 −λl|⟨xl, xt⟩| ≥1 −ϕ max p λp . 18 Plugging in ϕ ≤ ϵd 4m and the λp upper bound from Lemma B.1 we get that ϕ max p λp ≤2.4ϵd 4m ≤0.6ϵd m . Hence, ytN( ˆw, xt) ≥1 −0.6ϵd m . To conclude, we showed that ˆw is an (ϵ, δ + 1.44ϵd m , 0.6ϵd m )-approximate KKT by Definition B.1 . Finally, we look at the scaled weights 1 1− 0.6ϵd m ˆw. For ϵd ≤1 We calculate 1 1 −0.6ϵd m ϵ ≤ m m −ϵd ϵ =  1 + ϵd m −ϵd  ϵ = ϵ + ϵϵd m −ϵd , and max p λp 0.6ϵd m 1 −0.6ϵd m + δ + 1.44ϵd m 1 −0.6ϵd m ≤δ + δϵd m −ϵd + 7.2ϵd m and get from Lemma B.3 that 1 1− 0.6ϵd m ˆw is a (ϵ + ϵϵd m−ϵd , δ + δϵd m−ϵd + 7.2ϵd m )-approximate KKT by Definition 2.1 w.r.t. S \ (xl, yl). We note that ˆw and 1 1− 0.6ϵd m ˆw have the same direction, which finishes the proof. B.1.2 Proof of 2. Cosine −Similarity( ˆw, w∗) ≥1 −C(√ϵd + √ϵd + √ δ) for some C > 0. Let N(w∗, x) be a max-margin linear predictor w.r.t. the remaining training set S \ (xl, yl). Hence, w∗is a KKT point of the margin maximization problem (2) w.r.t. {xi, yi}i∈[m]−l, as in Definition 2.1 (with ϵ = δ = 0). From the stationarity condition we denote w∗= P i∈[m]−l λ∗ i yixi. Let t ∈[m]−l. We use Lemma B.2 to prove tight bounds for λt and λ∗ t . For a given t, λt and λ∗ t are close up to a small additive factor depend on ϵd, ϵ and δ. For λt we can use the results from Lemma B.2 directly, having 1 ∥xt∥2 −0.6ϵd + 1.1ϵ ∥xt∥2 ≤λt ≤ 1 ∥xt∥2 + 1.2ϵd + 2.15ϵ + 2.2δ ∥xt∥2 . (6) For λ∗ t , since w∗is a KKT point of 2 w.r.t. S \ (xl, yl), we have a dataset of size m −1 and ϵ = δ = 0. To accommodate the different parameter, we note that ϕ ≤ ϵd 4m ≤ ϵd 4(m−1), conclude that 1 ∥xt∥2 −0.6ϵd ∥xt∥2 ≤λ∗ t ≤ 1 ∥xt∥2 + 1.2ϵd ∥xt∥2 . (7) And, similar note hold for B.1 resulting in λ∗≤2.4. We are now ready to prove the cosine similarity lower bound. For ˆw = P i∈[m]−l λiyixi + vϵ and w∗= P i∈[m]−l λ∗ i yixi, we have ⟨ˆw, w∗⟩ ∥ˆw∥∥w∗∥= ⟨P i∈[m]−l λiyixi + vϵ, P i∈[m]−l λ∗ i yixi⟩ ∥ˆw∥∥w∗∥ . We upper bound the norm of the predictors, when using 6 and 7 for any i ∈[m]−l separately, bounding λi ∥xi∥2 and λ∗ i ∥xi∥2 respectively. Upper bounding ∥ˆw∥2 we get ∥ˆw∥2 = X i∈[m]−l λiyixi + vϵ 2 = ⟨ X i∈[m]−l λiyixi + vϵ, X i∈[m]−l λiyixi + vϵ⟩= = ⟨ X i∈[m]−l λiyixi, X i∈[m]−l λiyixi⟩+ 2⟨ X i∈[m]−l λiyixi, vϵ⟩+ ⟨vϵ, vϵ⟩ ≤ X i∈[m]−l λ2 i ∥xi∥2 + X i̸=k∈[m]−l λiλk⟨xi, xk⟩+ 2 X i∈[m]−l λi⟨xi, vϵ⟩+ ϵ2 19 From 6 we get that λi ∥xi∥2 ≤(1 + 1.2ϵd + 2.15ϵ + 2.2δ), from Lemma B.1 we get that for all i, λi ≤2.4 and by Assumption 2.3 we get that for all i, k ∈[m] ⟨xi, xk⟩≤ϕ. Using Cauchy–Schwarz inequality we get that for all i ∈[m], ⟨xi, vϵ⟩≤∥xi∥∥vϵ∥≤ϵ√1 + ψ. 
Plug it all in we have ∥ˆw∥2 ≤ X i∈[m]−l λ2 i ∥xi∥2 + X i̸=k∈[m]−l λiλk⟨xi, xk⟩+ 2 X i∈[m]−l λi⟨xi, vϵ⟩+ ϵ2 ≤(1 + 1.2ϵd + 2.15ϵ + 2.2δ) X i∈[m]−l λi + 2.4mϕ X i∈[m]−l λi + ϵ p 1 + ψ X i∈[m]−l λi + ϵ2 ≤ X i∈[m]−l λi  (1 + 1.2ϵd + 2.15ϵ + 2.2δ) + 2.4mϕ + ϵ p 1 + ψ  + ϵ2 We denote Λ = P i∈[m]−l λi and plug in ϕ ≤ ϵd 4m and ψ ≤0.1 and get ∥ˆw∥2 ≤ X i∈[m]−l λi  (1 + 1.2ϵd + 2.15ϵ + 2.2δ) + 2.4mϕ + ϵ p 1 + ψ  + ϵ2 ≤Λ ((1 + 1.2ϵd + 2.15ϵ + 2.2δ) + 0.6ϵd + 1.1ϵ) + ϵ2 ≤Λ (1 + 1.8ϵd + 3.25ϵ + 2.2δ) + ϵ2 For the upper bound of ∥w∗∥2 we do similar calculations, using 7 and Lemma B.1 getting ∥w∗∥2 = X i∈[m]−l λ∗ i yixi 2 = ⟨ X i∈[m]−l λ∗ i yixi, X i∈[m]−l λ∗ i yixi⟩ ≤ X i∈[m]−l (λ∗ i )2 ∥xi∥2 + X i̸=k∈[m]−l λ∗ i λ∗ k⟨xi, xk⟩ ≤(1 + 1.2ϵd + 2.15ϵ + 2.2δ) X i∈[m]−l λ∗ i + 2.4mϕ X i∈[m]−l λ∗ i W.L.O.G, we assume that P i∈[m]−l λi ≥P i∈[m]−l λ∗ i (the other direction is proven similarly). This allow as to upper bound ∥w∗∥2 using λi, with plugging in ϕ ≤ ϵd 4m, we get ∥w∗∥2 ≤(1 + 1.2ϵd) X i∈[m]−l λ∗ i + 2.4mϕ X i∈[m]−l λ∗ i ≤(1 + 1.2ϵd) X i∈[m]−l λi + 2.4mϕ X i∈[m]−l λi ≤Λ (1 + 1.8ϵd) For the norm multiplication we have ∥ˆw∥∥w∗∥= q ∥ˆw∥2 ∥w∗∥2 = p [Λ (1 + 1.8ϵd + 3.25ϵ + 2.2δ) + ϵ2] [Λ (1 + 1.8ϵd)] ≤Λ r (1 + C(ϵd + ϵ + δ)) + ϵ2 Λ (1 + Cϵd) ≤Λ r 1 + C(ϵd + ϵ + δ) + ϵ2 Λ + ϵ2 Λ Cϵd ≤Λ + Λ r C(ϵd + ϵ + δ) + ϵ2 Λ + ϵ2 Λ Cϵd 20 for some constant C > 0, where the last inequality hold since 1 + √x ≥√1 + x for all x > 0. We next lower bound the inner product of ˆw and w∗ ⟨ˆw, w∗⟩= ⟨ X i∈[m]−l λiyixi + vϵ, X i∈[m]−l λ∗ i yixi⟩= = ⟨ X i∈[m]−l λiyixi, X i∈[m]−l λ∗ i yixi⟩+ ⟨ X i∈[m]−l λ∗ i yixi, vϵ⟩ ≥ X i∈[m]−l λ∗ i λi ∥xi∥2 − X i̸=k∈[m]−l λ∗ i λk⟨xi, xk⟩− X i∈[m]−l λ∗ i ⟨xi, vϵ⟩ Here, we use the lower bound for λ∗ i ∥xi∥2 ≥(1 −0.6ϵd), the upper bound λ∗ i ≤2.4 from Lemma B.1, and the Cauchy–Schwarz inequality, having ⟨ˆw, w∗⟩≥ X i∈[m]−l λ∗ i λi ∥xi∥2 − X i̸=k∈[m]−l λ∗ i λk⟨xi, xk⟩− X i∈[m]−l λ∗ i ⟨xi, vϵ⟩ ≥(1 −0.6ϵd) X i∈[m]−l λi −2.4mϕ X i∈[m]−l λi −ϵ p 1 + ψ X i∈[m]−l λi and by plugging in ϕ ≤ ϵd 4m, ψ ≤0.1 we have ⟨ˆw, w∗⟩≥(1 −0.6ϵd) X i∈[m]−l λi −2.4mϕ X i∈[m]−l λi −ϵ p 1 + ψ X i∈[m]−l λi ≥Λ (1 −0.6ϵd −0.6ϵd −1.1ϵ) ≥Λ −Λ (1.2ϵd + 1.1ϵ) Join all the bounds toghter, we get for the cosine similarity ⟨ˆw, w∗⟩ ∥ˆw∥∥w∗∥≥ Λ −Λ (1.2ϵd + 1.1ϵ) Λ + Λ q C(ϵd + ϵ + δ) + ϵ2 Λ + ϵ2 Λ Cϵd ≥1 − Λ (1.2ϵd + 1.1ϵ) + Λ q C(ϵd + ϵ + δ) + ϵ2 Λ + ϵ2 Λ Cϵd Λ + Λ q C(ϵd + ϵ + δ) + ϵ2 Λ + ϵ2 Λ Cϵd ≥1 − (1.2ϵd + 1.1ϵ) + q C(ϵd + ϵ + δ) + ϵ2 Λ + ϵ2 Λ Cϵd 1 + q C(ϵd + ϵ + δ) + ϵ2 Λ + ϵ2 Λ Cϵd ≥1 −(1.2ϵd + 1.1ϵ) − r C(ϵd + ϵ + δ) + ϵ2 Λ + ϵ2 Λ Cϵd We note that by Lemma B.2 Λ = X i∈[m]−l λi ≥(m −1) 1 ∥xt∥2 −0.6ϵd + 1.1ϵ ∥xt∥2 ! ≥(m −1)0.9 (1 −0.6ϵd −1.1ϵ) ≥0.1(m −1) , 21 Concluding, ⟨ˆw, w∗⟩ ∥ˆw∥∥w∗∥≥1 −(1.2ϵd + 1.1ϵ) − s C(ϵd + ϵ + δ) + ϵ2 0.1(m −1) + ϵ2 0.1(m −1)Cϵd ≥1 −C2 √ϵd + √ϵ + √ δ  for some constant C2 > 0. B.2 Proof for forgetting subset of points using Ak-GA – linear predictors We formalize and prove the statement for unlearning a subset of data points. Here, the term successful unlearning is the natural extension of Definition 2.2 to unlearning a subset, rather than a single point. Theorem B.1. In the same settings as Theorem 3.1, let Sforget ⊆S be a subset of size k. Then, the extended algorithm AK-GA, with appropriate coefficients {βr}, is an (ϵ, δ, τ)-successful unlearning algorithm w.r.t. w and S, where: 1. The case of ϵ = ϵ1 + ϵ1ϵd m k −ϵd , δ = δ1 + δ1ϵd m k −ϵd + 7.2ϵd m , τ = 0: The predictor Ak-GA(w, S, l) has the direction of an (ϵ, δ)-approximate KKT point for the margin maximization problem (2) w.r.t. S \ (xl, yl). 2. 
The case of ϵ = δ = 0, τ = C(√ϵd + √ϵ1 + √δ1) for some universal constant C > 0: Let w∗be a max-margin linear predictor w.r.t. the remaining training set S \ (xl, yl), i.e. the global optimum of the 2 w.r.t. S \ (xl, yl). Then, cossim(Ak-GA(w, S, l), w∗) ≥1 −τ. Proof: Let a forget set Sf ⊂S such that |Sf| = k. We denote If = {i : (xi, yi) ∈Sf}. We denote Sr = S \ Sf and Ir = {i : (xi, yi) ∈Sr}. The proof is highly similar to the proof for unlearning single point in B.1. Similarly, we denote vϵ = w − m P i=1 λiyi∇wN(w, xi), so we get that ∥vϵ∥≤ϵ and w = m X i=1 λiyi∇wN(w, xi) + vϵ = m X i=1 λiyixi + vϵ . According to the algorithm Ak-GA, we take a step consists of the sum of k gradients w.r.t. data points in Sf with the following sizes- For any (xl, yl) ∈Sf, we sum a gradient of size β = −λl ℓ′(ylN(w,xl)). We get ˆw = m X i=1 λiyi∇wN(w, xi) + vϵ − X l∈If λlyl∇wN(w, xr) = X i∈Ir λiyixi + vϵ . Proof of 1. ˆw has the direction of an (ϵ + ϵϵd m k −ϵd , δ + δϵd m k −ϵd + 7.2kϵd m )-approximate KKT point for the margin maximization problem for S \ (xl, yl). (1) Dual Feasibility: For all i ∈[m]−l, λi ≥0. Same. directly from dual feasibility for w (Definition 2.1). (2) Stationarity: ˆw − m P i=1 λiyi∇wN( ˆw, xi) ≤ϵ. Same as in B.1. 22 (3) Complementarity Slackness: For all t ∈[m]−l, λt (ytN( ˆw, xt) −1) ≤δ + 1.44kϵd m . Using the same Equation 5 we get 1 + δ λt ≥ytN(w, xt) =ytN( ˆw, xt) + yt X l∈If λlyl|⟨xl, xt⟩| ≥ytN( ˆw, xt) − X l∈If λl|⟨xl, xt⟩| ≥ytN( ˆw, xt) −kϕ max p λp , plugging in ϕ ≤ ϵd 4m and the λp upper bound from Lemma B.1 we get ytN( ˆw, xt) −kϕ max p λp ≥ytN( ˆw, xt) −k ϵd 4m2.4 ≥ytN( ˆw, xt) −0.6kϵd m . We deduce an upper bound for the margin of N( ˆw, xt)- ytN( ˆw, xt) ≤1 + δ λt + 0.6kϵd m = 1 + δ + 3 5mkλtϵd λt ≤1 + δ + 3 5mk2.4ϵd λt ≤1 + δ + 1.44kϵd m λt as desired. (4) Primal Feasibility: For all t ∈[m]−l, ytN( ˆw, xt) ≥1 −0.6kϵd m . We use 5 to lower bound the margin of N( ˆw, xt), and use primal feasibility for w (Definition 2.1), getting ytN( ˆw, xt) = ytN(w, xt) −yt X l∈If λlyl|⟨xl, xt⟩| ≥1 −kϕ max p λp . Plugging in ϕ ≤ ϵd 4m and the λp upper bound from Lemma B.1 we get that kϕ max p λp ≤2.4kϵd 4m ≤0.6kϵd m . Hence, ytN( ˆw, xt) ≥1 −0.6kϵd m . We showed that ˆw is an (ϵ, δ + 1.44kϵd m , 0.6kϵd m )-approximate KKT by Definition B.1 . Finally, we look at the scaled weights 1 1− 0.6kϵd m ˆw. For ϵd ≤1 We calculate 1 1 −0.6kϵd m ϵ ≤ m k m k −ϵd ϵ =  1 + ϵd m k −ϵd  ϵ = ϵ + ϵϵd m k −ϵd , and max p λp 0.6kϵd m 1 −0.6kϵd m + δ + 1.44kϵd m 1 −0.6kϵd m ≤δ + δϵd m k −ϵd + 7.2kϵd m and get from Lemma B.3 that 1 1− 0.6kϵd m ˆw is a (ϵ + ϵϵd m k −ϵd , δ + δϵd m k −ϵd + 7.2kϵd m )-approximate KKT by Definition 2.1 w.r.t. S \ (xl, yl). We note that ˆw and 1 1−0.6k ϵd m ˆw have the same direction, which finishes the proof. 23 Proof of 2. Cosine −Similarity( ˆw, w∗) ≥1 −C(√ϵd + √ϵd + √ δ) for some C > 0. Let N(w∗, x) be a max-margin linear predictor w.r.t. the remaining training set S \ Sf. Hence, w∗is a KKT point of the margin maximization problem (2) w.r.t. {xi, yi}i∈If , as in Definition 2.1 (with ϵ = δ = 0). From the stationarity condition we denote w∗= P i∈If λ∗ i yixi. We have same bounds for λi and λ∗ i , since it is independent of the unlearning. The rest of the proof remains the same but the substitution of P i∈[m]−l λi in P i∈Ir λi, and the lower bound for it - by Lemma B.2 Λ = X i∈Ir λi ≥(m −k) 1 ∥xt∥2 −0.6ϵd + 1.1ϵ ∥xt∥2 ! 
≥(m −k)0.9 (1 −0.6ϵd −1.1ϵ) ≥0.1(m −k) , That have no significant effect on the final bound ⟨ˆw, w∗⟩ ∥ˆw∥∥w∗∥≥1 −(1.2ϵd + 1.1ϵ) − s C(ϵd + ϵ + δ) + ϵ2 0.1(m −k) + ϵ2 0.1(m −k)C(ϵd + ϵ + δ) ≥1 −C2 √ϵd + √ϵ + √ δ  for some constant C2 > 0. B.3 The Identity is an Unsuccessful Unlearning Algorithm To complement Theorem 3.1, we provide the following remark, that shows that keeping the original predictor is not a successful unlearning algorithm. Particularly, for any ϵ′, δ′ > 0, we show that for the predictor as defined in Theorem 3.1, its cosine similarity to any (ϵ′, δ′)-approximate KKT point for S \ {(xl, yl)} is relatively large. Remark B.1. In the same settings as 3.1, the algorithm AI(θ, S, r) = θ, is (ϵ, δ, τ)-successful only for τ ≥C m − C(ϵd + ϵ) for some C > 0. As a short intuition for the proof, we note that the original network weight parameter, denoted as w = m X i=1 λiyi∇wN(w, xi) + vϵ = m X i=1 λiyixi + vϵ1 , consists of a sum of m summons, while any other KKT point w.r.t. S \ {(xl, yl)}, ew, consists of a sum of the (m −1) gradients of the remaining dataset. This gap creates an inevitable angle between the two vectors. Proof: In this section, we show that the original network w is not a good candidate for the unlearning tasks according to the (ϵ, δ, τ)-successful definition (Definition 2.2). Formally, we look at the simple unlearning algorithm AI(w, S, r) = w. We show that for any (ϵ′, δ′)-approximate KKT point ew, where ϵ′, δ′ < 0.5 and ϵd < 0.1, there exists C > 0 such that cossim(w, ew) ≤1 −C m + C(ϵd + ϵ + eϵ) , leading to τ ≥C m −C(ϵd + ϵ + eϵ) . We recall that due to the stationary condition for the original network w w.r.t. the full dataset S we have w = X i∈[m] λiyi∇wN(w, xi) + vϵ = m X i=1 λiyixi + vϵ . 24 We denote an (eϵ, eδ)-approximate KKT point of the margin maximization problem w.r.t. the retain dataset S \(xl, yl) by ew. From the stationarity condition we get that ew = X i∈[m]−l eλiyixi + veϵ . Next, we show that the cosine similarity between w and ew is lower bounded by C m + C(ϵd + ϵ + eϵ). We denote w = w −vϵ and ew = w −veϵ. For the cosine similarity between w and ew we have cossim(w, ew) = ⟨w, ew⟩ ∥w∥∥ew∥= ⟨w + vϵ, ew + veϵ⟩ ∥w∥∥ew∥ We first use Cauchy–Schwarz inequality and separate it into two expressions cossim(w, ew) = ⟨w + vϵ, ew + veϵ⟩ ∥w∥∥ew∥ ≤ ⟨w, ew⟩ ∥w∥∥ew∥+ |⟨vϵ, ew⟩| + |⟨veϵ, w⟩| + |⟨vϵ, veϵ⟩| ∥w∥∥ew∥ ≤ ⟨w, ew⟩ ∥w∥∥ew∥+ ∥vϵ∥∥ew∥+ ∥veϵ∥∥w∥+ ∥vϵ∥∥veϵ∥ ∥w∥∥ew∥ (8) We next lower bound the norm of the parameter vectors. We note that ∥w∥= ∥w + vϵ∥≥∥w∥−ϵ and ∥w∥2 = X i∈[m] λiyixi 2 = ⟨ X i∈[m] λiyixi, X i∈[m] λiyixi⟩= ≥ X i∈[m] λ2 i ∥xi∥2 − X i̸=k∈[m] λiλk⟨xi, xk⟩ ≥ X i∈[m] λ2 i ∥xi∥2 −ϕ X i̸=k∈[m] λiλk . Similarly ∥ew∥≥∥ew∥−eϵ and ∥ew∥2 = X i∈[m]−l eλiyixi 2 = ⟨ X i∈[m]−l eλiyixi, X i∈[m]−l eλiyixi⟩ ≥ X i∈[m]−l eλi 2 ∥xi∥2 −ϕ X i̸=k∈[m]−l eλif λk . We now upper bound the inner product ⟨w, ew⟩, having 25 ⟨w, ew⟩= ⟨ X i∈[m] λiyixi, X i∈[m]−l eλiyixi⟩= = ⟨ X i∈[m]−l λiyixi, X i∈[m]−l eλiyixi⟩+ ⟨ X i∈[m]−l eλiyixi, λlylxl⟩ ≤|⟨ X i∈[m]−l λiyixi, X i∈[m]−l eλiyixi⟩| + |⟨ X i∈[m]−l eλiyixi, λlylxl⟩| ≤ X i∈[m]−l eλiλi ∥xi∥2 + X i̸=k∈[m]−l eλiλk⟨xi, xk⟩+ X i∈[m]−l eλiλl⟨xi, xl⟩ ≤ X i∈[m]−l eλiλi ∥xi∥2 + ϕ X i̸=k∈[m]−l eλiλk + ϕ X i∈[m]−l eλiλl Plug it all in, we get for the first summon at 8 ⟨w, ew⟩ ∥w∥∥ew∥≤ P i∈[m]−l eλiλi ∥xi∥2 + ϕ P i̸=k∈[m]−l eλiλk + ϕ P i∈[m]−l eλiλl qP i∈[m] λ2 i ∥xi∥2 −ϕ P i̸=k∈[m] λiλk −ϵ  qP i∈[m]−l eλi 2 ∥xi∥2 −ϕ P i̸=k∈[m]−l eλif λk −eϵ  . 
We first note that by Cauchy–Schwarz X i∈[m]−l eλiλi ∥xi∥2 ≤ s X i∈[m]−l eλi 2 ∥xi∥2 s X i∈[m]−l λ2 i ∥xi∥2 , and X i∈[m]−l eλiλi ≤ s X i∈[m]−l eλi 2s X i∈[m]−l λ2 i . We now reduce the nominator and denominator by qP i∈[m]−l eλi 2 ∥xi∥2qP i∈[m]−l λ2 i ∥xi∥2. We denote b = (1 + 1.2ϵd + 2.15ϵ + 2.2δ), a = (1 −0.6ϵd −1.1ϵ), and use Lemma B.2 in which for all i, a < λi ∥xi∥2 < b. We calculate the summons in the nominator after reduction, having P i∈[m]−l eλiλi ∥xi∥2 qP i∈[m]−l eλi 2 ∥xi∥2qP i∈[m]−l λ2 i ∥xi∥2 ≤1 , ϕ P i̸=k∈[m]−l eλiλk qP i∈[m]−l eλi 2 ∥xi∥2qP i∈[m]−l λ2 i ∥xi∥2 ≤ ϕ qP i̸=k∈[m]−l eλi 2qP i̸=k∈[m]−l λ2 i qP i∈[m]−l eλi 2 ∥xi∥2qP i∈[m]−l λ2 i ∥xi∥2 ≤ϵd 3.6 , ϕ P i∈[m]−l eλiλl qP i∈[m]−l eλi 2 ∥xi∥2qP i∈[m]−l λ2 i ∥xi∥2 ≤ ϕ P i∈[m]−l eλiλl P i∈[m]−l eλiλi ∥xi∥2 ≤1.2bϵd 4ma . 26 and for the denominator we have P i∈[m] λ2 i ∥xi∥2 P i∈[m] λ2 i ∥xi∥2 = 1 , ϕ P i̸=k∈[m]−l λiλk P i∈[m]−l λ2 i ∥xi∥2 ≤ ϕ qP i̸=k∈[m]−l λi 2qP i̸=k∈[m]−l λ2 i P i∈[m]−l λ2 i ∥xi∥2 ≤ ϕ(m −1) P i∈[m]−l λi 2 P i∈[m]−l λ2 i ∥xi∥2 ≤ϵd 3.6 , ϵ qP i∈[m]−l λ2 i ∥xi∥2 ≤ ϵ 0.9a√m , the same for eλi and eϵ, and finally eλl 2 ∥xl∥2 P i∈[m]−l eλi 2 ∥xi∥2 ≤ 2.4b 0.91a2m ≤2.64b am . Plug it all in we have ⟨w, ew⟩ ∥w∥∥ew∥≤ P i∈[m]−l eλiλi ∥xi∥2 + ϕ P i̸=k∈[m]−l eλiλk + ϕ P i∈[m]−l eλiλl qP i∈[m] λ2 i ∥xi∥2 −ϕ P i̸=k∈[m] λiλk −ϵ qP i∈[m] eλi 2 ∥xi∥2 −ϕ P i̸=k∈[m] eλif λk −eϵ ≤ 1 + 0.28ϵd + 1.2bϵd 4ma q 1 −0.28ϵd − ϵ 0.9a√m q 1 −0.28ϵd − eϵ 0.9a√m + 2.64b am ≤ 1 + 0.28ϵd + 1.2bϵd 4ma  1 −0.28ϵd − ϵ+eϵ 0.9a√m  q 1 + 2.64b am for any 0 < x < 1 we get that 1 √1 + x ≤1 −x 4 and thus in conclusion we have ⟨w, ew⟩ ∥w∥∥ew∥≤ ≤ 1 + 0.28ϵd + 1.2bϵd 4ma  1 −0.28ϵd − ϵ+eϵ 0.9a√m  q 1 + 2.64b am ≤1 + 0.28ϵd + 1.2bϵd 4ma 1 −0.28ϵd − ϵ+eϵ 0.9a√m  1 −0.66b am  ≤ 1 + 0.56ϵd + 1.2bϵd 4ma + ϵ+eϵ 0.9a√m 1 −0.28ϵd − ϵ+eϵ 0.9a√m !  1 −0.66b am  ≤1 −C m + C(ϵd + ϵ + eϵ) , 27 which finishes the upper bounded the first summon of the cosine similarity at 8. We now upper bound the second summon, we recall that w = w −vϵ and therefore ∥w∥≤∥w∥+ ϵ, and similar for ew, and thus, ∥vϵ∥∥ew∥+ ∥veϵ∥∥w∥+ ∥vϵ∥∥veϵ∥ ∥w∥∥ew∥ ≤ϵ ∥ew∥+ ϵ2 + eϵ ∥w∥+ eϵ2 + ϵeϵ ∥w∥∥ew∥ = ϵ ∥w∥+ eϵ ∥ew∥+ ϵ2 + eϵ2 + ϵeϵ ∥w∥∥ew∥ We look at the norm lower bound. We note that ∥w∥= ∥w + vϵ∥≥∥w∥−ϵ , and ∥w∥2 = ⟨ X i∈[m] λiyixi, X i∈[m] λiyixi⟩= ≥ X i∈[m] λ2 i ∥xi∥2 −ϕ X i̸=k∈[m] λiλk ≥ X i∈[m] λi [a −ϕmb] ≥m0.9a [a −0.6ϵd] ≥m0.9a [1 −1.2ϵd −1.1ϵ] ≥0.1m , and similarly ∥ew∥2 ≥0.1(m −1). Plug in to the denominator of the above fraction we get ϵ ∥w∥+ eϵ ∥ew∥+ ϵ2 + eϵ2 + ϵeϵ ∥w∥∥ew∥ ≤ ϵ 0.1m −ϵ + eϵ 0.1(m −1) −eϵ + ϵ2 + eϵ2 + ϵeϵ (0.1(m −1) −ϵ)2 ≤C1(ϵd + ϵ + eϵ) which means that there exists C such that cossim(w, ew) ≤1 −C m + C(ϵd + ϵ + eϵ) , Thus, concluding the proof. C Proofs for section 4 C.1 lemmas for Proof C.2 of Theorem 4.1 Lemma C.1. Let S = {(x1, y1), ..., (xm, ym)} such that ∀i ∈[m], xi ∈Rd and let {wj}n j=1, ∀j ∈[n], wj ∈Rd. Assume the data distribution D satisfies Assumption 2.3 for some ψ, ϕ. Given l ∈[m] and c ∈R, for j ∈[n] and r ∈[m]−l, we denote ∆r,j = X k∈[m]−l c⟨xk, xr⟩sign(⟨xk, wj⟩) . Then, w⊤ j xr ≥0 ⇒ c(1 −ψ) −(m −2)cϕ ≤∆r,j ≤c(1 + ψ) + (m −2)cϕ w⊤ j xr < 0 ⇒ −c(1 + ψ) −(m −2)cϕ ≤∆r,j ≤−c(1 −ψ) + (m −2)cϕ 28 Proof: X k∈[m]−l c⟨xk, xr⟩sign(⟨xk, wj⟩) = = c ∥xr∥2 sign(⟨xr, wj⟩) + X k∈[m]−l,k̸=r c⟨xk, xr⟩sign(⟨xk, wj⟩) From Assumption 2.3 we know that (1 −ψ) ≤∥xr∥2 ≤(1 + ψ), for k ̸= r, −ϕ ≤⟨xk, xr⟩≤ϕ which finishes the proof. Lemma C.2. Let S = {(x1, y1), ..., (xm, ym)} such that ∀i ∈[m], xi ∈Rd and let {wj}n j=1 ∀j ∈[n], wj ∈Rd. 
Assume the data distribution D satisfies Assumption 2.3 for some ψ ≤0.1, ϕ ≤ ϵd 4mn. Given l ∈[m], and c = ϵd 2mn, for j ∈[n] and r ∈[m]−l, we denote ∆j = X k∈[m]−l cxk sign(⟨xk, wj⟩) Then for j ∈[n], |uj|λlσ′ l,j∆j ≤22ϵd √mn Proof: We first look at the norm of ∆j, having ∥∆j∥2 = X k∈[m]−l cxk sign(⟨xk, wj⟩) 2 = = ⟨ X k∈[m]−l cxk sign(⟨xk, wj⟩), X k∈[m]−l cxk sign(⟨xk, wj⟩)⟩ ≤c2⟨ X k∈[m]−l xk, X k∈[m]−l xk⟩ ≤c2  X k∈[m]−l ∥xi∥2 + X s̸=k∈[m]−l ⟨xk, xs⟩   ≤c2 m(1 + ψ) + m2ϕ  we plug in ψ ≤0.1, ϕ ≤ ϵd 4mn, c = ϵd 2mn and get ∥∆j∥2 ≤ ϵ2 d 4m2n2  1.1m + m2 ϵd 4mn  = ϵ2 d 1.1 + ϵd 4n  4mn2 and ∥∆j∥≤ϵd p1.1 + ϵd n 2√mn From Lemma C.3 we have that maxi∈[m] λi ≤20.4n. As for all j ∈[n], |uj| = 1 √n, and σ′ l,j ≥0, joining all together we have |uj|λlσ′ l,j∆j = |uj|λlσ′ l,j ∥∆j∥≤1 √n20.4nϵd p1.1 + ϵd n 2√mn ≤1 √n20.4ϵd p1.1 + ϵd n 2√m ≤ϵd 20.4 + 1 2 p1.1 + ϵd n  √nm ≤22ϵd √mn , 29 as desired. Lemma C.3. Let N(θ, x) = nP j=1 ujσ(w⊤ j x) be a two-layer fully connected neural network, trained on S = {(x1, y1), ..., (xm, ym)}, and let 0 < ϵd, ϵ, δ ≤1 such that θ is an (ϵ, δ)-approximate KKT point for the mar- gin maximization problem for S according to Definition 2.1 for λ1, ..., λm, and S satisfies Assumption 2.3 for ψ = 0, 1, and ϕ ≤ ϵd 4mn. Assume ∀j ∈[n], uj ∼U{−1 √n, 1 √n}. Then, For i ∈[m] we have max    X j∈J+ u2 jλiσ′ i,j, X j∈J− u2 jλiσ′ i,j   ≤2.5 + 5.25ϵ + 2.4δ ≤10.2 , and therefore also n X j=1 u2 jλiσ′ i,j ≤5 + 10.5ϵ + 4.8δ ≤20.4 , and λi ≤n (5 + 10.5ϵ + 4.8δ) ≤20.4n . Proof: Let J+ = {j ∈[n] : uj > 0} and J−= {j ∈[n] : uj < 0}. Denote α+ = maxi∈[m] P j∈J+ u2 jλiσ′ i,j ! and α−= maxi∈[m] P j∈J− u2 jλiσ′ i,j ! . w.l.o.g. we assume α+ ≥α−(the other direction is proven similarly). We denote α = α+ = maxi∈[m] P j∈J+ u2 jλiσ′ i,j ! , and k = arg maxi∈[m] P j∈J+ u2 jλiσ′ i,j ! . If λk = 0 the claim follows. Using the stationarity condition in Definition 2.1 for θ, we denote vϵ = θ −Pm i=1 λiyi∇θN(θ, xi), and vϵ,j = wj − m P i=1 ujλiyiσ′ i,jxi, such that vϵ is the concatenation of all vϵ,j and ∥vϵ∥= ϵ. Using this notation we have for all j ∈[n] the inner product w⊤ j xk =uj m X i=1 λiyiσ′ i,j⟨xi, xk⟩+ ⟨vϵ,j, xk⟩ =ujλkykσ′ k,j ∥xk∥2 + uj m X i=1,i̸=k λiyiσ′ i,j⟨xi, xk⟩+ ⟨vϵ,j, xk⟩. To upper bound α, we use the Complementarity Slackness condition in Definition 2.1 to first bound the margin, and then solve for α. First, since for all j ∈[n] and k ∈[m], |uj| = 1 √n and σ′ k,j ≤1, we get that α ≤λk 1 n nP j=1 σ′ k,j ≤λk, so 1 λk ≤1 α. Then, using the Complementarity Slackness condition for θ we get that ykN(θ, xk) ≤1 + δ λk ≤1 + δ α. To use the α notation we express the margin with in terms of sums over J+ and J− 1 + δ α ≥ykN(θ, xk) = yk n X j=1 ujσ(w⊤ j xk) = yk  X j∈J+ ujσ(w⊤ j xk) + X j∈J− ujσ(w⊤ j xk)  . Now, to divide both sides of the inequality by yk, we need to know its sign. We separate to two cases for yk: 30 Case 1: yk = 1 We lower bound the margin 1 + δ α ≥N(θ, xk) = X j∈J+ ujσ(w⊤ j xk) + X j∈J− ujσ(w⊤ j xk) ≥ X j∈J+ ujw⊤ j xk + X j∈J− ujσ(w⊤ j xk) , Where the last inequality hold since for all y ∈R, y ≤σ(y). We lower bound separately the first summand, getting X j∈J+ ujw⊤ j xk = X j∈J+ uj  ujλkσ′ k,j ∥xk∥2 + uj m X i=1,i̸=k λiyiσ′ i,j⟨xi, xk⟩+ ⟨vϵ,j, xk⟩   ≥(1 −ψ) X j∈J+ u2 jλkσ′ k,j −ϕ X j∈J+ m X i=1,i̸=k u2 jλiσ′ i,j − X j∈J+ uj|⟨vϵ,j, xk⟩| ≥(1 −ψ)α −ϕ(m −1)α − X j∈J+ uj|⟨vϵ,j, xk⟩| . Using Cauchy–Schwarz inequality we have X j∈J+ uj|⟨vϵ,j, xk⟩| = 1 √n X j∈J+ |⟨vϵ,j, xk⟩| ≤ 1 √n ∥vϵ∥√n max p∈[m] ∥xp∥≤ϵ p 1 + ψ , getting X j∈J+ ujw⊤ j xk ≥(1 −ψ)α −ϕ(m −1)α −ϵ p 1 + ψ . 
Bounding the second summand we have X j∈J− ujσ(w⊤ j xk) = X j∈J− ujσ  ujλkσ′ k,j ∥xk∥2 + uj m X i=1,i̸=k λiyiσ′ i,j⟨xi, xk⟩+ ⟨vϵ,j, xk⟩   ≥ X j∈J− ujσ  uj m X i=1,i̸=k λiyiσ′ i,j⟨xi, xk⟩+ |⟨vϵ,j, xk⟩|   ≥ X j∈J− ujσ  |uj| m X i=1,i̸=k λiσ′ i,j|⟨xi, xk⟩| + |⟨vϵ,j, xk⟩|   ≥ X j∈J− ujσ  |uj| m X i=1,i̸=k λiσ′ i,jϕ + |⟨vϵ,j, xk⟩|   ≥−ϕ X j∈J− m X i=1,i̸=k u2 jλiσ′ i,j − X j∈J− |uj||⟨vϵ,j, xk⟩| ≥−ϕ(m −1)α −ϵ p 1 + ψ , and combining the two results we have 1 + δ α ≥1 + δ λk ≥ykN(θ, xk) ≥ X j∈J+ ujw⊤ j xk + X j∈J− ujσ(w⊤ j xk) ≥(1 −ψ)α −ϕ(m −1)α −ϕ(m −1)α =α ((1 −ψ) −2ϕ(m −1)) −2ϵ p 1 + ψ , 31 getting α2 ((1 −ψ) −2ϕ(m −1)) −α  1 + 2ϵ p 1 + ψ  −δ ≤0 . Note, for our setting ψ ≤0.1 and ϕ ≤ ϵd 4mn so we get that (1 −ψ) −2ϕ(m −1) ≥0.9 −2 ϵd 4mn(m −1) ≥0.9 −ϵd 2n > 0 , hence solving for α we get α ≤1 + 2ϵ√1 + ψ + p (1 + 2ϵ√1 + ψ)2 + 4δ ((1 −ψ) −2ϕ(m −1)) 2 ((1 −ψ) −2ϕ(m −1)) . Case 2: yk = −1 is very similar. First we have −1 −δ α = N(θ, xk) ≤ X j∈J+ ujσ(w⊤ j xk) + X j∈J− ujw⊤ j xk , for the first summand we get X j∈J+ ujσ w⊤ j xk  = X j∈J+ ujσ  −ujλkσ′ k,j ∥xk∥2 + uj m X i=1,i̸=k λiyiσ′ i,j⟨xi, xk⟩+ ⟨vϵ,j, xk⟩   ≤ X j∈J+ ujσ  uj m X i=1,i̸=k λiσ′ i,j|⟨xi, xk⟩| + |⟨vϵ,j, xk⟩|   ≤ X j∈J+ ujσ  uj m X i=1,i̸=k λiσ′ i,jϕ + |⟨vϵ,j, xk⟩|   ≤ϕ m X i=1,i̸=k X j∈J+ u2 jλiσ′ i,j + X j∈J+ uj|⟨vϵ,j, xk⟩| ≤ϕ(m −1)α + ϵ p 1 + ψ and for the second X j∈J− ujw⊤ j xk = X j∈J− uj  −ujλkσ′ k,j ∥xk∥2 + uj m X i=1,i̸=k λiyiσ′ i,j⟨xi, xk⟩+ ⟨vϵ,j, xk⟩   ≤−(1 −ψ) X j∈J− u2 jλkσ′ k,j + ϕ X j∈J− m X i=1,i̸=k u2 jλiσ′ i,j + X j∈J+ uj|⟨vϵ,j, xk⟩| ≤−(1 −ψ)α + ϕ(m −1)α + ϵ p 1 + ψ combining the two results leads to the same upper bound α ≤1 + 2ϵ√1 + ψ + p (1 + 2ϵ√1 + ψ)2 + 4δ ((1 −ψ) −2ϕ(m −1)) 2 ((1 −ψ) −2ϕ(m −1)) 32 We plug in ψ ≤0.1 and ϕ ≤ ϵd 4mn, ϵd ≤1, and get α ≤1 + 2ϵ√1 + ψ + p (1 + 2ϵ√1 + ψ)2 + 4δ ((1 −ψ) −2ϕ(m −1)) 2 ((1 −ψ) −2ϕ(m −1)) ≤1 + 2.1ϵ + p (1 + 2.1ϵ)2 + 4δ(0.9) 2(0.9 −2 ϵd 4mn(m −1)) ≤1 + 2.1ϵ + (1 + 2.1ϵ) + 1.9δ 2(0.9 −2 ϵd 4 ) ≤2 + 4.2ϵ + 1.9δ 0.8 ≤2.5 + 5.25ϵ + 2.4δ ≤10.2 meaning for all i ∈[m] we have max    X j∈J+ u2 jλiσ′ i,j, X j∈J− u2 jλiσ′ i,j   ≤2.5 + 5.25ϵ + 2.4δ so X j∈[n] u2 jλiσ′ i,j ≤5 + 10.5ϵ + 4.8δ ≤20.4 using the fact that for all j ∈[n] and k ∈[m], |uj| = 1 √n and σ′ k,j ≤1 we also get that λi ≤5 + 10.5ϵ + 4.8δ P j∈[n] u2 jσ′ i,j ≤5 + 10.5ϵ + 4.8δ 1 n ≤n (5 + 10.5ϵ + 4.8δ) ≤20.4n Lemma C.4. Let N(θ, x) = nP j=1 ujσ(w⊤ j x) be a two-layer fully connected neural network, trained on S = {(x1, y1), ..., (xm, ym)}, and let 0 < ϵd, ϵ, δ ≤1 such that θ is an (ϵ, δ)-approximate KKT point for the margin maximization problem (2) for S according to Definition 2.1 for λ1, ..., λm, and S satisfies Assump- tion 2.3 for ψ = 0, 1, and ϕ ≤ ϵd 4mn. Assume ∀j ∈[n], uj ∼U{−1 √n, 1 √n}. We denote αmax = maxi∈[m] max ( P j∈J+ u2 jλiσ′ i,j, P j∈J− u2 jλiσ′ i,j )! . Then, For i ∈[m] we have min    X j∈J+ u2 jλiσ′ i,j, X j∈J− u2 jλiσ′ i,j   ≥0.45 −2.32ϵd n −0.96ϵ and therefore also n X j=1 u2 jλiσ′ i,j ≥0.9 −4.64ϵd n −1.92ϵ and λi ≥0.9 −4.64ϵd n −1.92ϵ 33 Proof: Let J+ = {j ∈[n] : uj > 0} and J−= {j ∈[n] : uj < 0}. Denote α+ = mini∈[m] P j∈J+ u2 jλiσ′ i,j ! and α−= mini∈[m] P j∈J− u2 jλiσ′ i,j ! . w.l.o.g. we assume α+ ≤α−(the other direction is proven similarly). We denote α = α+ = mini∈[m] P j∈J+ u2 jλiσ′ i,j ! , and k = arg mini∈[m] P j∈J+ u2 jλiσ′ i,j ! . Using the stationarity condition in Definition 2.1 for θ, we denote vϵ = θ −Pm i=1 λiyi∇θN(θ, xi), and vϵ,j = wj − m P i=1 ujλiyiσ′ i,jxi, such that vϵ is the concatenation of all vϵ,j and ∥vϵ∥= ϵ. 
Using this notation we have for all j ∈[n] the inner product w⊤ j xk =uj m X i=1 λiyiσ′ i,j⟨xi, xk⟩+ ⟨vϵ,j, xk⟩ =ujλkykσ′ k,j ∥xk∥2 + uj m X i=1,i̸=k λiyiσ′ i,j⟨xi, xk⟩+ ⟨vϵ,j, xk⟩. To lower bound α, we use the primal feasibility condition in Definition 2.1 to first bound the margin, and then solve for α. To use the α notation we express the margin with in terms of sums over J+ and J− 1 + δ α ≥ykN(θ, xk) = yk n X j=1 ujσ(w⊤ j xk) = yk  X j∈J+ ujσ(w⊤ j xk) + X j∈J− ujσ(w⊤ j xk)  . Now, to divide both sides of the inequality by yk, we need to know its sign. We separate to two cases for yk: Case 1: yk = 1 We upper bound the margin 1 ≤N(θ, xk) ≤ X j∈J+ ujσ(w⊤ j xk) + X j∈J− ujw⊤ j xk Where the last inequality hold since for all y ∈R, y ≤σ(y). We lower bound separately the first summand, getting X j∈J+ ujσ w⊤ j xk  = X j∈J+ ujσ  ujλkσ′ k,j ∥xk∥2 + uj m X i=1,i̸=k λiyiσ′ i,j⟨xi, xk⟩+ ⟨vϵ,j, xk⟩   ≤ X j∈J+ ujσ  ujλkσ′ k,j ∥xk∥2 + uj m X i=1,i̸=k λiσ′ i,j|⟨xi, xk⟩| + |⟨vϵ,j, xk⟩|   ≤ X j∈J+ ujσ  ujλkσ′ k,j(1 + ψ) + uj m X i=1,i̸=k λiσ′ i,jϕ + |⟨vϵ,j, xk⟩|   ≤(1 + ψ) X j∈J+ u2 jλkσ′ k,j + ϕ m X i=1,i̸=k X j∈J+ u2 jλiσ′ i,j + X j∈J+ uj|⟨vϵ,j, xk⟩| . Using Cauchy–Schwarz inequality we have X j∈J+ uj|⟨vϵ,j, xk⟩| = 1 √n X j∈J+ |⟨vϵ,j, xk⟩| ≤ 1 √n ∥vϵ∥√n max p∈[m] ∥xp∥≤ϵ p 1 + ψ , 34 getting X j∈J+ ujσ w⊤ j xk  ≤(1 + ψ)α + ϕ(m −1)αmax + ϵ p 1 + ψ . For the upper bound of the second summand we have X j∈J− ujw⊤ j xk = X j∈J− uj  ujλkσ′ k,j ∥xk∥2 + uj m X i=1,i̸=k λiyiσ′ i,j⟨xi, xk⟩+ ⟨vϵ,j, xk⟩   ≤ X j∈J− uj  ujλkσ′ k,j(1 + ψ) + uj m X i=1,i̸=k λiσ′ i,jϕ −|⟨vϵ,j, xk⟩|   ≤(1 + ψ)α + ϕ(m −1)αmax + ϵ p 1 + ψ , and combining the two results we have 1 ≤N(θ, xk) ≤ X j∈J+ ujσ(w⊤ j xk) + X j∈J− ujw⊤ j xk ≤2(1 + ψ)α + 2ϕ(m −1)αmax + 2ϵ p 1 + ψ and solving for α we have α ≥1 −2ϕ(m −1)αmax −2ϵ√1 + ψ 2(1 + ψ) . Case 2: yk = −1 First we have −1 ≥N(θ, xk) ≥ X j∈J+ ujw⊤ j xk + X j∈J− ujσ(w⊤ j xk) we get for the first summand X j∈J+ uj w⊤ j xk  = X j∈J+ uj  −ujλkσ′ k,j ∥xk∥2 + uj m X i=1,i̸=k λiyiσ′ i,j⟨xi, xk⟩+ ⟨vϵ,j, xk⟩   ≥−(1 + ψ) X j∈J+ u2 jλkσ′ k,j −ϕ m X i=1,i̸=k X j∈J+ u2 jλiσ′ i,j − X j∈J+ uj|⟨vϵ,j, xk⟩| ≥−(1 + ψ)α −ϕ(m −1)αmax −ϵ p 1 + ψ And for the second summand X j∈J− ujσ(w⊤ j xk) = X j∈J− ujσ  −ujλkσ′ k,j ∥xk∥2 + uj m X i=1,i̸=k λiyiσ′ i,j⟨xi, xk⟩+ ⟨vϵ,j, xk⟩   ≥−(1 + ψ) X j∈J+ u2 jλkσ′ k,j −ϕ m X i=1,i̸=k X j∈J+ u2 jλiσ′ i,j + X j∈J− uj|⟨vϵ,j, xk⟩| ≥−(1 + ψ)α −ϕ(m −1)αmax −ϵ p 1 + ψ 35 combining the two results leads to the same lower bound α ≥1 −2ϕ(m −1)αmax −2ϵ√1 + ψ 2(1 + ψ) . From C.3 we have that αmax ≤10.2, and we plug in ψ ≤0.1 and ϕ ≤ ϵd 4mn, getting α ≥1 −2ϕ(m −1)αmax −2ϵ√1 + ψ 2(1 + ψ) ≥1 −2 ϵd 4mn(m −1)10.2 −2.1ϵ 2.2 ≥1 −5.1 ϵd n −2.1ϵ 2.2 ≥0.45 −2.32ϵd n −0.96ϵ meaning for all i ∈[m] we have min    X j∈J+ u2 jλiσ′ i,j, X j∈J− u2 jλiσ′ i,j   ≥0.45 −2.32ϵd n −0.96ϵ so n X j=1 u2 jλiσ′ i,j ≥0.9 −4.64ϵd n −1.92ϵ using the fact that for all j ∈[n] and k ∈[m], |uj| = 1 √n and σ′ k,j ≤1 we also get that λi ≥0.9 −4.64 ϵd n −1.92ϵ 1 n nP j=1 σ′ i,j ≥0.9 −4.64ϵd n −1.92ϵ Lemma C.5. Let N(θ, x) = nP j=1 ujσ(w⊤ j x) be a two-layer fully connected neural network, trained on S = {(x1, y1), ..., (xm, ym)}, and let 0 < ϵd, ϵ, δ ≤1 such that θ is an (ϵ, δ)-approximate KKT point for the mar- gin maximization problem (2) for S according to Definition 2.1 for λ1, ..., λm, and S satisfies Assumption 2.3 for ψ = 0, 1, and ϕ ≤ ϵd 4mn. 
Given l ∈[m], we denote by ˆθ the parameters created by performing gradient ascent on the first layer weights, for the data sample (xl, yl) ∈S with step size determined by λl (3). We denote by eθ the weight vector such that for j ∈[n] ewj = ˆwj + |uj|λlσ′ l,j∆j , for ∆j = P k∈[m]−l cxk sign(⟨xk, wj⟩) and c = ϵd 2mn. Then, for all r ∈[m]−l and j ∈[n], sign(ew⊤ j xr) = sign(w⊤ j xr) Proof: Let r ∈[m]−l, and j ∈[n]. Looking at the inner product, we denote ∆r,j = ⟨∆j, xr⟩= X k∈[m]−l c⟨xk, xr⟩sign(⟨xk, wj⟩) , and have w⊤ j xr = uj m X i=1 λiyiσ′ i,j⟨xi, xr⟩= = uj X i∈[m]−l λiyiσ′ i,j⟨xi, xr⟩+ ujλlylσ′ l,j⟨xl, xr⟩+ ⟨vϵ,j, xr⟩ 36 and ew⊤ j xr = uj X i∈[m]−l λiyiσ′ i,j⟨xi, xr⟩+ |uj|λlσ′ l,j∆j,r + ⟨vϵ,j, xr⟩, where one can see that the difference between the inner products is w⊤ j xr −ew⊤ j xr = ujλlylσ′ l,j⟨xl, xr⟩−|uj|λlσ′ l,j∆j,r = λlσ′ l,j (uj⟨xl, xr⟩−|uj|∆j,r) . To show they have the same sign, it’s enough to show that the difference is either negative to positive, depending on w⊤ j xr sign. If it is positive, we show the difference in negative, hence ew⊤ j xr is bigger and also positive, and if it’s negative we show the a positive difference to conclude equal sign. Note, if λl = 0 we are done, and particularly we have not change θ by unlearning or adding our fix, meaning θ = ˆθ = eθ. In addition, if σ′ l,j = 0 for some j, we haven’t change the neuron wj, and the claim follows. For the rest of the proof we assume λl > 0 and σ′ l,j = 1, so to show the difference’s sign it’s enough to show the sign of (uj⟨xl, xr⟩−|uj|∆j,r). Case 1: w⊤ j xr ≥0. We show that (uj⟨xl, xr⟩−|uj|∆j,r) ≤0 By Lemma C.1 |uj|∆j,r ≥|uj| (c(1 −ψ) −(m −2)cϕ) And using Assumption 2.3 we get that |⟨xl, xr⟩| ≤ϕ we have uj⟨xl, xr⟩−|uj|∆j,r ≤|uj|ϕ −|uj| (c(1 −ψ) −(m −2)cϕ) ≤|uj| (ϕ −(c(1 −ψ) −(m −2)cϕ)) . We left to show that (ϕ −c(1 −ψ) + (m −2)cϕ) ≤0 and indeed plugging in ψ = 0.1, ϕ ≤ ϵd 4mn, c = ϵd 2mn we have ϕ −c(1 −ψ) + (m −2)cϕ ≤ ϵd 4mn − ϵd 2mn(0.9) + (m −2) ϵd 2mn ϵd 4mn ≤0.25ϵd −0.45ϵd + 0.125ϵ2 d mn < 0 which finishes this case. Case 2: wT j xr < 0. We show that (uj⟨xl, xr⟩−|uj|∆j,r) ≥0 By Lemma C.1 |uj|∆j,r ≤|uj| (−c(1 −ψ) + (m −2)cϕ) And using Assumption 2.3 we get that |⟨xl, xr⟩| ≤ϕ we have uj⟨xl, xr⟩−|uj|∆j,r ≥−|uj|ϕ −|uj| (−c(1 −ψ) + (m −2)cϕ) ≥|uj| (−ϕ + (c(1 −ψ) −(m −2)cϕ)) . Now, It’s enough to show that −ϕ + c(1 −ψ) + (m −2)cϕ ≥0, which has already proven in the previous case. Lemma C.6. Let 0 < ϵd, ϵ, δ ≤0.4. Let N(x, θ) = nP j=1 ujσ(w⊤ j x) be a two-layer fully connected neural network, trained on S = {(x1, y1), ..., (xm, ym)}, and assume that θ is an (ϵ, δ)-approximate KKT point for the margin maximization problem (2) for S according to Definition 2.1 for λ1, ..., λm, and S satisfies Assumption 2.3 for ψ ≤0.1 37 and ϕ ≤ ϵd 4mn. Given l ∈[m], we denote by ˆθ the parameters created by performing gradient ascent on the first layer weights, for the data sample (xl, yl) ∈S with step size λl (3). We denote by eθ the weight vector such that for j ∈[n] ewj = ˆwj + |uj|λlσ′ l,j∆j , for ∆j = P k∈[m]−l cxk sign(⟨xk, wj⟩) and c = ϵd 2mn. Then, for all r ∈[m]−l, −9ϵd mn ≤yr h N(eθ, xr) −N(θ, xr) i ≤9ϵd mn , Proof: Let r ∈[m]−l. We look at the margins for xr with respect to θ and eθ and get the difference yr h N(eθ, xr) −N(θ, xr) i = yr   n X j=1 ujσ(ew⊤ j xr) − n X j=1 ujσ(w⊤ j xr)  . From Lemma C.5 we get that for j ∈[n], sign(ew⊤ j xr) = sign(w⊤ j xr). Then, if w⊤ j xr < 0 we get that σ(ew⊤ j xr) = σ(w⊤ j xr) = 0. Otherwise, w⊤ j xr ≥0, and we get that σ(ew⊤ j xr) = ew⊤ j xr and σ(w⊤ j xr) = w⊤ j xr. 
We denote J+ = {j ∈[n] : w⊤ j xr > 0 and uj > 0} and J−= {j ∈[n] : w⊤ j xr > 0 and uj < 0}, and get n X j=1 ujσ(ew⊤ j xr) − n X j=1 ujσ(w⊤ j xr) = = n X j=1 uj σ(ew⊤ j xr) −σ(w⊤ j xr)  = X j∈J+ uj ew⊤ j xr −w⊤ j xr  − X j∈J− |uj| ew⊤ j xr −w⊤ j xr  Following Definition 2.1, we denote vϵ = θ −Pm i=1 λiyi∇θN(θ, xi) and for j ∈[n] we denote, wj = m X i=1 λiyi∇wjN(θ, xi) + vϵ,j = uj m X i=1 λiyiσ′ i,jxi + vϵ,j , such that vϵ = (vϵ,1, ..., vϵ,n) a concatenation of all vϵ,j’s vectors. Following the unlearning step in 3 for (xl, yl), we denote ˆwj = X i∈[m]−l ujλiyiσ′ i,jxi + vϵ,j , and get ewj = X i∈[m]−l ujλiyiσ′ i,jxi + |uj|λlσ′ l,j∆j + vϵ,j . When we look at the difference ew⊤ j xr −w⊤ j xr, we get that for j ∈J+ ∪J− 0 ≤ew⊤ j xr −w⊤ j xr = =  uj X i∈[m]−l λiyiσ′ i,j⟨xi, xr⟩+ |uj|λlσ′ l,j∆j,r + vϵ,j  −  ujλlylσ′ l,j⟨xl, xr⟩+ uj X i∈[m]−l λiyiσ′ i,j⟨xi, xr⟩+ vϵ,j   = |uj|λlσ′ l,j∆j,r −ujλlylσ′ l,j⟨xl, xr⟩. 38 We now use this equality for the margin difference, getting N(eθ, xr) −N(θ, xr) = = X j∈J+ uj ew⊤ j xr −w⊤ j xr  − X j∈J− |uj| ew⊤ j xr −w⊤ j xr  = X j∈J+ uj |uj|λlσ′ l,j∆j,r −ujλlylσ′ l,j⟨xl, xr⟩  − X j∈J− |uj| |uj|λlσ′ l,j∆j,r −ujλlylσ′ l,j⟨xl, xr⟩  = X j∈J+ u2 jλlσ′ l,j (∆j,r −yl⟨xl, xr⟩) − X j∈J− u2 jλlσ′ l,j (∆j,r + yl⟨xl, xr⟩) . We denote α−= P j∈J− u2 jλlσ′ l,j and by α+ = P j∈J+ u2 jλlσ′ l,j. So, we get that N(eθ, xr) −N(θ, xr) =α+ (∆j,r −yl⟨xl, xr⟩) −α−(∆j,r + yl⟨xl, xr⟩) =α+∆j,r −ylα+⟨xl, xr⟩−α−∆j,r −ylα−⟨xl, xr⟩ =α+∆j,r −α−∆j,r −yl (α+⟨xl, xr⟩+ α−⟨xl, xr⟩) . Since α−, α+, ∆j,r ≥0, for the upper bounds we get N(eθ, xr) −N(θ, xr) =α+∆j,r −α−∆j,r −yl (α+⟨xl, xr⟩+ α−⟨xl, xr⟩) ≤α+∆j,r + α+ϕ + α−ϕ . From Lemma C.3 we get that α−, α+ ≤10.2, from Lemma C.1 we get that ∆j,r ≤c(1+ψ)+(m−2)cϕ. Together with plugging in ψ = 0.1, ϕ ≤ ϵd 4mn, c = ϵd 2mn and ϵd ≤1, we get N(eθ, xr) −N(θ, xr)α+∆j,r + α+ϕ + α−ϕ ≤α+ (c(1 + ψ) + (m −2)cϕ + ϕ) + α−ϕ ≤10.2 1.1ϵd 2mn + (m −2) ϵd 2mn ϵd 4mn + ϵd 4mn  + 10.2 ϵd 4mn ≤10.2 1.1ϵd 2mn + ϵd 2n ϵd 4mn + ϵd 4mn  + 2.55ϵd mn ≤ϵd mn  5.61 + 0.125ϵd n + 0.25 + 2.55  ≤9ϵd mn . For the lower bound of the margin we get N(eθ, xr) −N(θ, xr) =α+∆j,r −α−∆j,r −yl (α+⟨xl, xr⟩+ α−⟨xl, xr⟩) ≥−α−∆j,r −α+ϕ −α−ϕ , and the same calculations we did for the upper bound will yield N(eθ, xr) −N(θ, xr) ≥−9ϵd mn . 39 C.2 Proof for Theorem 4.1 Proof: Note, for readability of the proof we denote ϵ1 by ϵ and δ1 by δ. Using the stationarity condition in Definition 2.1 for θ, we denote vϵ = θ −Pm i=1 λiyi∇θN(θ, xi) and for j ∈[n] we denote, wj = m X i=1 λiyi∇wjN(θ, xi) + vϵ,j = uj m X i=1 λiyiσ′ i,jxi + vϵ,j where vϵ = (vϵ,1, ..., vϵ,n) the concatenation of all vϵ,j and ∥vϵ∥= ϵ. Let l ∈[m], we wish to take a negative gradient step of size β, such that β∇θℓ(ylN(θ, xl)) = −λlyl∇θN(θ, xl) so we pick a step size β = −λl ℓ′(ylN(θ,xl)). We denote by ˆθ the parameters created by performing gradient ascent on the first layer weights, for the data sample (xl, yl) ∈S with step size β (3). As a result, for all j ∈[n] we have ˆwj =wj −λlyl∇wjN(θ, xl) = m X i=1 λiyi∇wjN(θ, xi) + vϵ,j −λlyl∇wjN(θ, xl) = X i∈[m]−l ujλiyiσ′ i,jxi + vϵ,j . Given ˆθ and the unlearned sample index l ∈[m], we denote c = ϵd 2mn, and for j ∈[n], we denote: ∆j := X k∈[m]−l cxk sign(⟨xk, wj⟩) . Using ∆j, we define a slightly modified weight vector eθ, such that for j ∈[n], ewj = ˆwj + |uj|λlσ′ l,j∆j . C.2.1 Proof of eθ has the direction of a (ϵ + 9ϵdϵ m−9ϵd + 23ϵd √m , δ + 9ϵdδ m−9ϵd + 22.6ϵd m )-approximate KKT point of the margin maximization problem (2) w.r.t. 
S \ {xl, yl} It is enough to prove that eθ is an (ϵ + 22ϵd √m , δ + 184ϵd m , 9ϵd mn)-approximate KKT for the margin maximization problem (2) w.r.t. S \ (xl, yl) with the corresponding {λi}i∈[m]−l, according to Definition B.1. Then, using Lemma B.3, we conclude the approximation parameters for 1 1− 9ϵd mn eθ, for the stationarity parameter, for ϵd ≤0.01 we have 1 1 −9ϵd mn  ϵ + 22ϵd √m  ≤  1 + 9ϵd m −9ϵd   ϵ + 22ϵd √m  ≤ϵ + 9ϵdϵ m −9ϵd + 23ϵd √m , For the complementarity slackness parameter we use the upper bound for maxp λp from C.3, and have 1 1 −9ϵd mn  δ + 184ϵd m  + max p λp 9ϵd mn 1 −9ϵd mn ≤δ + 9ϵdδ m −9ϵd + 22.6ϵd m , Finally we conclude that 1 1− 9ϵd mn eθ is a (ϵ + 9ϵdϵ m−9ϵd + 23ϵd √m , δ + 9ϵdδ m−9ϵd + 22.6ϵd m )-approximate KKT for the margin maximization problem (2) w.r.t. S \ {xl, yl}, according to Definition 2.1. We note that eθ and 1 1−ˆγ eθ has the same direction, which finishes the proof. We start by showing eθ is an (ϵ + 22ϵd √m , δ + 184ϵd m , 9ϵd mn)-approximate KKT. (1) Dual Feasibility: For all r ∈[m]−l, λr ≥0. Directly from dual feasibility for θ (Definition 2.1). 40 (2) Stationarity: eθ − P i∈[m]−l λiyi∇θN(eθ, xi) ≤ϵ + 22ϵd √m . From stationarity for θ (Definition 2.1) we get that θ = Pm i=1 λiyi∇θN(θ, xi) + vϵ. By the difinition of ˆθ we get that ˆθ = P i∈[m]−l λiyi∇θN(θ, xi) + vϵ. For readability, we first denote u = (|u1|λlσ′ l,1∆1, .., |un|λlσ′ l,n∆n), such that u ∈Rm×n, and note that one can write eθ = ˆθ + u. Thus, eθ − X i∈[m]−l λiyi∇θN(eθ, xi) = X i∈[m]−l λiyi∇θN(θ, xi) + vϵ + u − X i∈[m]−l λiyi∇θN(eθ, xi) In Lemma C.5 we showed that for j ∈[n], i ∈[m], 1{ ewT j xj≥0} = 1{wT j xj≥0}. Then, for j ∈[n] we have ∇wjN(eθ, xi) = uj1{ ewT j xj≥0}xi = uj1{wT j xj≥0}xi = ∇wjN(θ, xi) , which leads to eθ − X i∈[m]−l λiyi∇θN(eθ, xi) = = X i∈[m]−l λiyi∇θN(θ, xi) + vϵ + u − X i∈[m]−l λiyi∇θN(eθ, xi) = ∥vϵ + u∥≤∥vϵ∥+ ∥u∥. Using the upper bound from Lemma C.2, for ∥u∥we have ∥u∥= (|u1|λlσ′ l,1∆1, .., |un|λlσ′ l,n∆n) = v u u t n X j=1 ujλlσ′ l,j∆j 2 ≤ ≤√n max j∈[n] |uj|λlσ′ l,j ∥∆j∥ ≤√n 22ϵd √mn ≤22ϵd √m , and plugging it in we have eθ − X i∈[m]−l λiyi∇θN(eθ, xi) ≤∥vϵ∥+ ∥u∥≤ϵ + 22ϵd √m . as desired. From Lemma C.6 we get that −9ϵd mn ≤yrN(eθ, xr) −yrN(θ, xr) ≤9ϵd mn. Using it we prove the next conditions. (3) Complementarity Slackness: For all r ∈[m]−l, λr  yrN(eθ, xr) −1  ≤δ + 184ϵd m . Let r ∈[m]−l. If λr = 0 we are done. Otherwise, from complementarity slackness condition for θ we get that λr (yrN(θ, xr) −1) ≤δ. We use the fact that yrN(eθ, xr) −9ϵd mn ≤yrN(θ, xr) to get that δ ≥λr (yrN(θ, xr) −1) = λr  yrN(eθ, xr) −9ϵd mn −1  = λr  yrN(eθ, xr) −1  −λr 9ϵd mn ≥λr  yrN(eθ, xr) −1  −max p λp 9ϵd mn 41 and conclude that λr  yrN(eθ, xr) −1  ≤δ + max p λp 9ϵd mn . From Lemma C.3 we have an upper bound maxp λp ≤20.4n, so we get that λr  yrN(eθ, xr) −1  ≤δ + max p λp 9ϵd mn ≤δ + 20.4n 9ϵd nm ≤δ + 184ϵd m . (4) Primal Feasibility: For all r ∈[m]−l, yiN(xi, eθ) ≥1 −9ϵd mn. Let r ∈[m]−l. From primal feasibility for θ (Definition 2.1) we get that yrN(θ, xr) ≥1, and from Lemma C.6 we have that yrN(eθ, xr) −yrN(θ, xr) ≥−9ϵd mn which concludes the proof. C.2.2 Proof of cossim(ˆθ, eθ) ≥1 −82ϵd m We begin with looking at the inner product ⟨ˆθ, eθ⟩. For readability, we first denote u = (|u1|λlσ′ l,1∆1, .., |un|λlσ′ l,n∆n), such that u ∈Rm×n, and note that one can write eθ = ˆθ + u and ⟨ˆθ, eθ⟩= ⟨ˆθ, ˆθ + u⟩= ˆθ 2 + ⟨ˆθ, u⟩≥ ˆθ 2 −|⟨ˆθ, u⟩| ≥ ˆθ 2 − ˆθ ∥u∥, where the last transition is due to Cauchy–Schwarz inequality. 
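Before bounding the norms, a short numerical aside on the construction above: the ascent step and the $\Delta_j$ correction can be simulated directly. The sketch below is our own illustration — it uses exactly orthonormal inputs (so Assumption 2.3 holds with $\phi = 0$), a random first layer, and synthetic dual coefficients, all of which are demo assumptions rather than the paper's setting. It checks that activation patterns on the retained points are preserved (Lemma C.5) and that the retained margins move by far less than $\frac{9\epsilon_d}{mn}$ (Lemma C.6).

```python
import numpy as np

rng = np.random.default_rng(1)
m, d, n, eps_d, l = 8, 16, 4, 0.5, 3
X = np.eye(d)[:m]                           # orthonormal inputs: Assumption 2.3 with phi = 0
y = rng.choice([-1.0, 1.0], size=m)
u = rng.choice([-1.0, 1.0], size=n) / np.sqrt(n)
lam = rng.uniform(0.5, 1.5, size=m)         # hypothetical dual coefficients
W = rng.normal(size=(n, d))                 # rows are the first-layer weights w_j
c = eps_d / (2 * m * n)

act = (W @ X.T >= 0).astype(float)          # act[j, i] = sigma'_{i,j}
keep = np.arange(m) != l

# Ascent step on (x_l, y_l):  w_hat_j = w_j - u_j * lam_l * y_l * sigma'_{l,j} * x_l
W_hat = W - (u * lam[l] * y[l] * act[:, l])[:, None] * X[l][None, :]

# Correction Delta_j = sum_{k != l} c * x_k * sign(<x_k, w_j>), then the fixed weights
signs = np.sign(X @ W.T)                    # signs[k, j] = sign(<x_k, w_j>)
Delta = c * (signs[keep].T @ X[keep])       # row j is Delta_j
W_tilde = W_hat + (np.abs(u) * lam[l] * act[:, l])[:, None] * Delta

# Lemma C.5: activation patterns on the retained points are unchanged
assert np.array_equal(W @ X[keep].T >= 0, W_tilde @ X[keep].T >= 0)

# Lemma C.6: retained margins move by at most O(eps_d / (m n))
N = lambda Wmat: np.maximum(X @ Wmat.T, 0) @ u      # N(theta, x_i) for every i
print(np.abs(N(W_tilde) - N(W))[keep].max(), "<=", 9 * eps_d / (m * n))
```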
We now look at the weights vectors norm and get eθ = ˆθ + u ≤ ˆθ + ∥u∥ which leads to ˆθ eθ = ˆθ  ˆθ + ∥u∥  = ˆθ 2 + ˆθ ∥u∥ We are now ready to lower bound the cosine similarity, having cossim(ˆθ, eθ) = ⟨ˆθ, eθ⟩ ˆθ eθ ≥ ˆθ 2 − ˆθ ∥u∥ ˆθ 2 + ˆθ ∥u∥ ≥1 − 2 ˆθ ∥u∥ ˆθ 2 + ˆθ ∥u∥ ≥1 −2 ∥u∥ ˆθ . 42 To finish the proof, we upper bound ∥u∥ ∥ˆθ∥. We note that we can upper bound the norm of u using the upper bound from Lemma C.2: ∥u∥= (|u1|λlσ′ l,1∆1, .., |un|λlσ′ l,n∆n) = v u u t n X j=1 ujλlσ′ l,j∆j 2 ≤ ≤√n max j∈[n] |uj|λlσ′ l,j ∥∆j∥ ≤√n 22ϵd √mn ≤22ϵd √m , We now show a lower bound for ˆθ , using that for all j ∈[n], |uj| = 1 √n, and for Assumption 2.3, for all i, k ∈[m] ∥xi∥2 ≥(1 −ψ), and |⟨xi, xk⟩| ≤ϕ. We have ˆθ 2 = X j∈[n] ∥ˆwj∥2 = X j∈[n] X i∈[m]−l ujλiyiσ′ i,jxi 2 = X j∈[n] ⟨ X i∈[m]−l ujλiyiσ′ i,jxi, X i∈[m]−l ujλiyiσ′ i,jxi⟩ ≥ X j∈[n]  X i∈[m]−l u2 jλ2 i σ′ i,j ∥xi∥2 − X i∈[m]−l X k̸=i∈[m]−l u2 jλiλkσ′ i,jσ′ k,j⟨xi, xk⟩   ≥1 n X j∈[n]  (1 −ψ) X i∈[m]−l λ2 i σ′ i,j −ϕ X i∈[m]−l X k̸=i∈[m]−l λiλkσ′ i,jσ′ k,j   ≥1 n  (1 −ψ) X i∈[m]−l λ2 i X j∈[n] σ′ i,j −ϕ X i∈[m]−l X k̸=i∈[m]−l λiλk X j∈[n] σ′ i,j   We note that using Lemma C.4 and Lemma C.3, for all i, we have  0.9 −4.64ϵd n −1.92ϵ  ≤ n X j=1 u2 jλiσ′ i,j ≤20.4 hence since |uj| = 1 √n  0.9 −4.64ϵd n −1.92ϵ  n ≤λi n X j=1 σ′ i,j ≤20.4n Using these bounds we have ˆθ 2 ≥1 n  (1 −ψ) X i∈[m]−l λi  0.9 −4.64ϵd n −1.92ϵ  n −ϕ X i∈[m]−l X k̸=i∈[m]−l λi20.4n   ≥  (1 −ψ)  0.9 −4.64ϵd n −1.92ϵ  X i∈[m]−l λi −20.4ϕ X i∈[m]−l X k̸=i∈[m]−l λi   ≥ X i∈[m]−l λi h (1 −ψ)  0.9 −4.64ϵd n −1.92ϵ  −20.4ϕ(m −2) i 43 Plugging in ψ ≤0.1, ϕ ≤ ϵd 4mn, ϵ < 1 and ϵd ≤0.01 we have ≥ X i∈[m]−l λi h 0.9  0.9 −4.64ϵd n −1.92ϵ  −20.4ϕ(m −2) i ≥(m −1)  0.9 −4.64ϵd n −1.92ϵ   0.9  0.9 −4.64ϵd n −1.92ϵ  −20.4ϵd 4n  ≥(m −1)  0.9(0.9 −4.64ϵd n −1.92ϵ)2 −5.1ϵd n  0.9 −4.64ϵd n −1.92ϵ  ≥(m −1)  0.72 + 19 ϵ2 d n2 + 3ϵ + 8ϵdϵ n −7.6ϵd n −3.2ϵ −4.6ϵd n  ≥(m −1)  0.72 −12.2ϵd n −0.2ϵ  ≥0.3(m −1) and of course ˆθ ≥ p 0.3(m −1) . We can know join the upper bound for ∥u∥and lower bound of ˆθ getting ∥u∥ ˆθ ≤ 22ϵd √m p 0.3(m −1) ≤41ϵd m and finally, cossim(ˆθ, eθ) ≥1 −2 ∥u∥ ˆθ ≥1 −82ϵd m , as desired. C.3 Proof for forgetting subset of points using Ak-GA – two layer networks We formalize and prove the statement for unlearning a subset of data points. Recall that the term successful unlearning here is the natural extension of Definition 2.2 to unlearning a subset, rather than a single point. Theorem C.1. In the same settings as Theorem 4.1, let Sforget ⊆S a subset of size k. Then, the extended algorithm Ak-GA, with appropriate coefficients {βr}, is a (ϵ, δ, τ)-successful unlearning algorithm w.r.t. θ and S, where ϵ = ϵ1 + 9ϵdϵ1 m k −9ϵd + 23kϵd √m , δ = δ1 + 9ϵdδ1 m k −9ϵd + 22.6kϵd m and τ = 82kϵd m−k . Proof: Let a forget set Sf ⊂S such that |Sf| = k. We denote If = {i : (xi, yi) ∈Sf}. We denote Sr = S \ Sf and Ir = {i : (xi, yi) ∈Sr}. This proof widely relies the proof in C.2. Using the stationarity condition in Definition 2.1 for θ, we denote vϵ = θ −Pm i=1 λiyi∇θN(θ, xi) and for j ∈[n] we denote, wj = m X i=1 λiyi∇wjN(θ, xi) + vϵ,j = uj m X i=1 λiyiσ′ i,jxi + vϵ,j 44 where vϵ = (vϵ,1, ..., vϵ,n) the concatenation of all vϵ,j and ∥vϵ∥= ϵ. According to the algorithm Ak-GA, we take a step consists of the sum of k gradients w.r.t. data points in Sf with the following sizes- for any (xl, yl) ∈Sf, we take a step size β = −λl ℓ′(ylN(θ,xl)). 
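A compact sketch of this accumulated step may be helpful (our own illustration; the data, first layer, dual coefficients, and forget indices are synthetic stand-ins). Since each step size is chosen per point, the $k$ ascent steps simply subtract the corresponding summands from every first-layer weight, and can equivalently be applied in closed form:

```python
import numpy as np

rng = np.random.default_rng(2)
m, d, n = 10, 32, 6
X = rng.normal(size=(m, d)) / np.sqrt(d)
y = rng.choice([-1.0, 1.0], size=m)
u = rng.choice([-1.0, 1.0], size=n) / np.sqrt(n)
lam = rng.uniform(0.5, 1.5, size=m)               # hypothetical dual coefficients
W = rng.normal(size=(n, d))                       # first-layer weights w_j (rows)
I_f = [1, 4, 7]                                   # forget set, k = 3

act = (W @ X.T >= 0).astype(float)                # sigma'_{i,j}, frozen at theta
W_hat = W.copy()
for l in I_f:                                     # sum of k single-point ascent steps
    W_hat -= (u * lam[l] * y[l] * act[:, l])[:, None] * X[l][None, :]

# Equivalent closed form: remove all forget-set summands at once.
step = ((u[:, None] * (lam[I_f] * y[I_f])[None, :]) * act[:, I_f]) @ X[I_f]
assert np.allclose(W_hat, W - step)
```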
As a result, for all $j \in [n]$ we have
\[ \hat{w}_j = w_j - \sum_{l \in I_f} \lambda_l y_l \nabla_{w_j} N(\theta, x_l) = \sum_{i=1}^m \lambda_i y_i \nabla_{w_j} N(\theta, x_i) + v_{\epsilon,j} - \sum_{l \in I_f} \lambda_l y_l \nabla_{w_j} N(\theta, x_l) = \sum_{i \in I_r} u_j \lambda_i y_i \sigma'_{i,j} x_i + v_{\epsilon,j} . \]
Given $\hat{\theta}$ and the unlearned sample indices $l \in I_f$, we denote $c = \frac{\epsilon_d}{2mn}$ and, for $j \in [n]$,
\[ \Delta_j := \sum_{k \in S_r} c\, x_k\, \mathrm{sign}(\langle x_k, w_j\rangle) . \]
Using $\Delta_j$, we define a slightly modified weight vector $\tilde{\theta}$, such that for $j \in [n]$,
\[ \tilde{w}_j = \hat{w}_j + \sum_{l \in I_f} |u_j| \lambda_l \sigma'_{l,j} \Delta_j . \]
The first main challenge of this proof is Lemma C.5, which is proven for single-point unlearning. However, its main observation concerns the difference between the inner products of a training sample $x_r$ with the original and with the corrected unlearned weight vectors. Looking at this difference in our case,
\[ \langle w_j, x_r\rangle - \langle \tilde{w}_j, x_r\rangle = \sum_{l \in I_f} u_j \lambda_l y_l \sigma'_{l,j} \langle x_l, x_r\rangle - \sum_{l \in I_f} |u_j| \lambda_l \sigma'_{l,j} \Delta_{j,r} = \sum_{l \in I_f} \big( u_j \lambda_l y_l \sigma'_{l,j} \langle x_l, x_r\rangle - |u_j| \lambda_l \sigma'_{l,j} \Delta_{j,r} \big) , \]
one can see that for any $l \in I_f$:
\[ u_j \lambda_l y_l \sigma'_{l,j} \langle x_l, x_r\rangle - |u_j| \lambda_l \sigma'_{l,j} \Delta_{j,r} = \lambda_l \sigma'_{l,j} \big( u_j \langle x_l, x_r\rangle - |u_j| \Delta_{j,r} \big) , \]
which is exactly the modification that Lemma C.5 shows does not affect the sign. Thus, applying Lemma C.5 for each $l \in S_f$ yields
\[ \mathrm{sign}(\tilde{w}_j^\top x_r) = \mathrm{sign}(w_j^\top x_r) . \]
The next issue we need to address in order to reuse the single-point proof for forgetting multiple points is the norm of the correction. If we denote $u = \big( \sum_{l \in I_f} |u_1| \lambda_l \sigma'_{l,1} \Delta_1, \ldots, \sum_{l \in I_f} |u_n| \lambda_l \sigma'_{l,n} \Delta_n \big)$, we get a factor of $k$ in the upper bound for $\|u\|$, using Lemma C.2:
\[ \Big\| \sum_{l \in I_f} |u_j| \lambda_l \sigma'_{l,j} \Delta_j \Big\| \leq \sum_{l \in I_f} |u_j| \lambda_l \sigma'_{l,j} \|\Delta_j\| \leq k \cdot \frac{20.4\,\epsilon_d \sqrt{1.1 + \frac{\epsilon_d}{n}}}{2\sqrt{mn}} \leq \frac{22\,k\,\epsilon_d}{\sqrt{mn}} . \]
Lastly, we gain a factor of $k$ in the margin difference, by accumulating the bound of Lemma C.6 over each $l \in I_f$, getting
\[ -\frac{9k\epsilon_d}{mn} \leq y_r \big[ N(\tilde{\theta}, x_r) - N(\theta, x_r) \big] \leq \frac{9k\epsilon_d}{mn} . \]
We are now ready to prove the multi-point version.

Proof that $\tilde{\theta}$ has the direction of an $\big(\epsilon + \frac{9\epsilon_d \epsilon}{m/k - 9\epsilon_d} + \frac{23k\epsilon_d}{\sqrt{m}},\; \delta + \frac{9\epsilon_d \delta}{m/k - 9\epsilon_d} + \frac{22.6k\epsilon_d}{m}\big)$-approximate KKT point of the margin maximization problem (2) w.r.t. $S_r$:

(1) Dual Feasibility: for all $r \in I_r$, $\lambda_r \geq 0$. Same as in the single-point case, directly from dual feasibility for $\theta$ (Definition 2.1).

(2) Stationarity: $\big\| \tilde{\theta} - \sum_{i \in I_r} \lambda_i y_i \nabla_\theta N(\tilde{\theta}, x_i) \big\| \leq \epsilon + \frac{22k\epsilon_d}{\sqrt{m}}$. We showed that for $j \in [n]$ and $i \in [m]$, $\mathbb{1}\{\tilde{w}_j^\top x_i \geq 0\} = \mathbb{1}\{w_j^\top x_i \geq 0\}$, so $\nabla_\theta N(\tilde{\theta}, x_i) = \nabla_\theta N(\theta, x_i)$ and similarly
\[ \Big\| \tilde{\theta} - \sum_{i \in I_r} \lambda_i y_i \nabla_\theta N(\tilde{\theta}, x_i) \Big\| = \Big\| \sum_{i \in I_r} \lambda_i y_i \nabla_\theta N(\theta, x_i) + v_\epsilon + u - \sum_{i \in I_r} \lambda_i y_i \nabla_\theta N(\tilde{\theta}, x_i) \Big\| = \|v_\epsilon + u\| \leq \|v_\epsilon\| + \|u\| . \]
Using the upper bound shown above, we have
\[ \|u\| = \sqrt{\sum_{j=1}^n \Big\| \sum_{l \in I_f} u_j \lambda_l \sigma'_{l,j} \Delta_j \Big\|^2} \leq \sqrt{n}\, \max_{j \in [n]} \sum_{l \in I_f} |u_j| \lambda_l \sigma'_{l,j} \|\Delta_j\| \leq \sqrt{n} \cdot \frac{22k\epsilon_d}{\sqrt{mn}} \leq \frac{22k\epsilon_d}{\sqrt{m}} . \]

(3) Complementarity Slackness: for all $r \in I_r$, $\lambda_r \big( y_r N(\tilde{\theta}, x_r) - 1 \big) \leq \delta + \frac{184k\epsilon_d}{m}$. Same proof as in the single-point case, using the modified margin difference $\frac{9k\epsilon_d}{mn}$.

(4) Primal Feasibility: for all $r \in I_r$, $y_r N(\tilde{\theta}, x_r) \geq 1 - \frac{9k\epsilon_d}{mn}$. Same as in the single-point case.

To conclude, $\tilde{\theta}$ is an $\big(\epsilon + \frac{22k\epsilon_d}{\sqrt{m}},\; \delta + \frac{184k\epsilon_d}{m},\; \frac{9k\epsilon_d}{mn}\big)$-approximate KKT point for the margin maximization problem (2) w.r.t. $S_r$ (Definition B.1). Using Lemma B.3 we conclude that $\frac{1}{1 - \frac{9k\epsilon_d}{mn}} \tilde{\theta}$ is an $\big(\epsilon + \frac{9\epsilon_d\epsilon}{m/k - 9\epsilon_d} + \frac{23k\epsilon_d}{\sqrt{m}},\; \delta + \frac{9\epsilon_d\delta}{m/k - 9\epsilon_d} + \frac{22.6k\epsilon_d}{m}\big)$-approximate KKT point for the margin maximization problem (2) w.r.t. $S_r$ according to Definition 2.1, which finishes this part of the proof.

Proof that $\mathrm{cossim}(\hat{\theta}, \tilde{\theta}) \geq 1 - \frac{82k\epsilon_d}{m-k}$: By noting that $\tilde{\theta} = \hat{\theta} + u$, we have, as in C.2,
\[ \mathrm{cossim}(\hat{\theta}, \tilde{\theta}) \geq 1 - \frac{2\|u\|}{\|\hat{\theta}\|} . \]
We have $\|u\| \leq \frac{22k\epsilon_d}{\sqrt{m}}$, and for $\|\hat{\theta}\|$ we can follow the same proof, only replacing $\sum_{i \in [m] \setminus \{l\}} \lambda_i$ with $\sum_{i \in I_r} \lambda_i$, which slightly affects the norm bound:
\[ \|\hat{\theta}\|^2 \geq 0.3(m - k) . \]
Thus, for the ratio we get
\[ \frac{\|u\|}{\|\hat{\theta}\|} \leq \frac{22k\epsilon_d}{\sqrt{m}\sqrt{0.3(m-k)}} \leq \frac{41k\epsilon_d}{m-k} , \]
and joining it all together,
\[ \mathrm{cossim}(\hat{\theta}, \tilde{\theta}) \geq 1 - \frac{2\|u\|}{\|\hat{\theta}\|} \geq 1 - \frac{82k\epsilon_d}{m-k} , \]
which concludes the proof.
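As a rough numerical illustration of how the guarantees of Theorem C.1 degrade with the forget-set size $k$, one can evaluate the stated parameters directly. The numbers below are illustrative choices, not values from the paper:

```python
import numpy as np

# Theorem C.1 parameters as functions of k (illustrative inputs).
eps1, delta1, eps_d, m = 0.01, 0.01, 0.1, 10_000
for k in (1, 10, 100):
    eps = eps1 + 9 * eps_d * eps1 / (m / k - 9 * eps_d) + 23 * k * eps_d / np.sqrt(m)
    delta = delta1 + 9 * eps_d * delta1 / (m / k - 9 * eps_d) + 22.6 * k * eps_d / m
    tau = 82 * k * eps_d / (m - k)
    print(f"k={k:>3}: eps={eps:.4f}, delta={delta:.4f}, tau={tau:.5f}")
```

All three quantities grow essentially linearly in $k$, matching the statement.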
C.4 The Identity is an Unsuccessful Unlearning Algorithm

Similarly to the linear case, we complement Theorem 4.1 with the following remark, which shows that keeping the original network is not a successful unlearning algorithm. In particular, we show that for the network in Theorem 4.1, the cosine similarity to any $(\epsilon, \delta)$-approximate KKT point for $S \setminus \{(x_l, y_l)\}$ is bounded away from one.

Remark C.1. In the same settings as Theorem 4.1, the algorithm $\mathcal{A}_I(\theta, S, r) = \theta$ is $(\epsilon, \delta, \tau)$-successful only for
\[ \tau \geq \frac{C}{m} - C(\epsilon_d + \epsilon + \tilde{\epsilon}) \]
for some $C > 0$.

Proof: In this section we show that the original network $\theta$ is not a good candidate for the unlearning task according to the $(\epsilon, \delta, \tau)$-successful definition (Definition 2.2). Formally, we consider the trivial unlearning algorithm $\mathcal{A}_I(\theta, S, r) = \theta$, and show that $\theta$ has a bounded cosine similarity with any approximate KKT point w.r.t. the retained set $S \setminus (x_l, y_l)$. Namely, let $\tilde{\theta}$ be an $(\tilde{\epsilon}, \tilde{\delta})$-approximate KKT point w.r.t. $S \setminus (x_l, y_l)$; we show that
\[ \mathrm{cossim}(\theta, \tilde{\theta}) \leq 1 - \frac{C}{m} + C(\epsilon_d + \epsilon + \tilde{\epsilon}) , \]
so that $\mathcal{A}_I$ is $(\epsilon', \delta', \tau')$-successful only for $\tau' \geq \frac{C}{m} - C(\epsilon_d + \epsilon + \tilde{\epsilon})$.

From stationarity for $\theta$ w.r.t. $S$, and for $\tilde{\theta}$ w.r.t. $S \setminus (x_l, y_l)$, we get that
\[ \theta = \sum_{i \in [m]} \lambda_i y_i \nabla_\theta N(\theta, x_i) + v_\epsilon , \qquad \tilde{\theta} = \sum_{i \in [m] \setminus \{l\}} \tilde{\lambda}_i y_i \nabla_\theta N(\tilde{\theta}, x_i) + v_{\tilde{\epsilon}} . \]
We denote $\alpha_i = \sum_{j \in [n]} u_j \lambda_i \sigma'_{i,j}$ and $\tilde{\alpha}_i = \sum_{j \in [n]} u_j \tilde{\lambda}_i \tilde{\sigma}'_{i,j}$, as well as $\bar{\theta} = \theta - v_\epsilon$ and $\bar{\tilde{\theta}} = \tilde{\theta} - v_{\tilde{\epsilon}}$. By the Cauchy–Schwarz inequality we have
\[ \langle \theta, \tilde{\theta} \rangle = \langle \bar{\theta} + v_\epsilon, \bar{\tilde{\theta}} + v_{\tilde{\epsilon}} \rangle \leq \langle \bar{\theta}, \bar{\tilde{\theta}} \rangle + |\langle v_\epsilon, \bar{\tilde{\theta}} \rangle| + |\langle v_{\tilde{\epsilon}}, \bar{\theta} \rangle| + |\langle v_\epsilon, v_{\tilde{\epsilon}} \rangle| \leq \langle \bar{\theta}, \bar{\tilde{\theta}} \rangle + \epsilon \|\bar{\tilde{\theta}}\| + \tilde{\epsilon} \|\bar{\theta}\| + \epsilon\tilde{\epsilon} . \]
For the inner product between the sums, we have
\[ \langle \bar{\theta}, \bar{\tilde{\theta}} \rangle = \Big\langle \sum_{i \in [m]} \lambda_i y_i \nabla_\theta N(\theta, x_i), \sum_{i \in [m] \setminus \{l\}} \tilde{\lambda}_i y_i \nabla_\theta N(\tilde{\theta}, x_i) \Big\rangle = \sum_{j \in [n]} \Big\langle \sum_{i \in [m]} u_j \lambda_i y_i \sigma'_{i,j} x_i, \sum_{i \in [m] \setminus \{l\}} u_j \tilde{\lambda}_i y_i \tilde{\sigma}'_{i,j} x_i \Big\rangle = \Big\langle \sum_{i \in [m]} \alpha_i y_i x_i, \sum_{i \in [m] \setminus \{l\}} \tilde{\alpha}_i y_i x_i \Big\rangle \]
\[ \leq \Big| \Big\langle \sum_{i \in [m]} \alpha_i y_i x_i, \sum_{i \in [m] \setminus \{l\}} \tilde{\alpha}_i y_i x_i \Big\rangle \Big| \leq \sum_{i \in [m] \setminus \{l\}} \alpha_i \tilde{\alpha}_i \|x_i\|^2 + \sum_{i \neq k \in [m] \setminus \{l\}} \alpha_i \tilde{\alpha}_k \langle x_i, x_k \rangle + \sum_{i \in [m] \setminus \{l\}} \alpha_l \tilde{\alpha}_i \langle x_l, x_i \rangle \]
\[ \leq \sum_{i \in [m] \setminus \{l\}} \alpha_i \tilde{\alpha}_i \|x_i\|^2 + \phi \sum_{i \neq k \in [m] \setminus \{l\}} \alpha_i \tilde{\alpha}_k + \phi \sum_{i \in [m] \setminus \{l\}} \alpha_l \tilde{\alpha}_i . \]
For lower bounds of the norms we perform similar calculations.
D Appendix for section 6

D.1 Proofs for settings properties

We first show that a dataset $S = \{(x_i,y_i)\}_{i=1}^m \sim \mathcal D_{MG}^m$ satisfies the conditions we discuss in our paper:
1. For all $x_i \in S$, $\|x_i\|^2 \in [1-\psi, 1+\psi]$ for $\psi = 0.1$.
2. For all $(x_i,y_i), (x_j,y_j) \in S$ s.t. $i\ne j$, $|\langle x_i,x_j\rangle| \le \phi$.

For a sample $(x_i,y_i)\sim\mathcal D_{MG}$, we first show that the norm of $x_i$ is a bounded constant. Denote $x_i = \mu_i + \zeta_i$ with $\|\mu_i\| = d^{-\frac14+\alpha}$ for $\alpha\in(0,\frac14)$, and $\zeta_i \sim \mathcal N(0,\frac1d I_d)$. We first show tighter bounds for $\|\zeta_i\|^2$.

Lemma D.1. Let $i\in[m]$. Then, w.p. $\ge 1 - 2e^{-\frac d{1700}}$, $\|\zeta_i\|^2 \in [0.95, 1.05]$.

Proof: For the lower bound, similarly to Lemma A.1, we have for $w\sim\mathcal N(0,\sigma^2 I_n)$
\[
\Pr\left[n - \left\|\tfrac w\sigma\right\|^2 \ge 2\sqrt{nt}\right] \le e^{-t}\,.
\]
We let $t = \frac n{1600}$, $\sigma^2 = \frac1d$ and $n = d$, and get
\[
\Pr\left[\|w\|^2 \le \tfrac{95}{100}\right] \le e^{-\frac d{1600}}\,,
\]
as desired. For the upper bound, similarly to Lemma A.2, we have for $w\sim\mathcal N(0,\sigma^2 I_n)$
\[
\Pr\left[\left\|\tfrac w\sigma\right\|^2 - n \ge 2\sqrt{nt} + 2t\right] \le e^{-t}\,.
\]
We let $t = \frac n{1700}$, $\sigma^2 = \frac1d$ and $n = d$, and get
\[
\Pr\left[\|w\|^2 \ge 1.05\right] \le e^{-\frac d{1700}}\,.
\]

Lemma D.2. W.p. $\ge 1 - 2e^{-\frac d{1700}}$, for sufficiently large $d$, $\|x_i\|^2 \in [0.9, 1.1]$.

Proof: We denote $x_i = \mu_i + \zeta_i$, with $\zeta_i\sim\mathcal N(0,\frac1d I_d)$. From Lemma D.1 we get that w.p. $\ge 1 - 2e^{-\frac d{1700}}$, $\|\zeta_i\|^2\in[0.95,1.05]$. As for $\|\mu_i\|$, we note that $\|\mu_i\| = d^{-\frac14+\alpha}$, so it is enough to take $d$ such that
\[
d^{-\frac14+\alpha} \le 0.01 \iff d^{\frac14-\alpha} \ge 100 \iff \log(d) \ge \frac{\log(100)}{\frac14-\alpha}\,.
\]
Then, for such $d$ we have $\|x_i\|^2 = \|\mu_i+\zeta_i\|^2 = \|\mu_i\|^2 + \|\zeta_i\|^2 + 2\langle\mu_i,\zeta_i\rangle$, where
\[
\|\mu_i\|^2 + \|\zeta_i\|^2 - 2|\langle\mu_i,\zeta_i\rangle| \le \|x_i\|^2 \le \|\mu_i\|^2 + \|\zeta_i\|^2 + 2|\langle\mu_i,\zeta_i\rangle|\,,\qquad 2|\langle\mu_i,\zeta_i\rangle| \le 2\|\mu_i\|\,\|\zeta_i\| \le 2\cdot 0.01\cdot 1.05 = 0.021\,,
\]
and therefore $0.9 < 0.929 \le \|x_i\|^2 \le 1.072 < 1.1$, as desired.
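As a quick numerical sanity check of Lemmas D.1–D.2 (our own illustration, not part of the formal argument), one can sample from one mixture component and verify the norm concentration empirically; the particular constants below are arbitrary choices of ours.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, alpha = 20_000, 10, 0.01                      # alpha in (0, 1/4)
mu = rng.standard_normal(d)
mu *= d ** (-0.25 + alpha) / np.linalg.norm(mu)     # ||mu|| = d^(-1/4 + alpha)

zeta = rng.standard_normal((m, d)) / np.sqrt(d)     # zeta_i ~ N(0, I_d / d)
x = mu + zeta
sq_norms = (x ** 2).sum(axis=1)
# The guarantee is asymptotic in d; for large d these land inside [0.9, 1.1]
print(sq_norms.min(), sq_norms.max())
```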
Next, we look at two samples $(x_i,y_i),(x_j,y_j)\sim\mathcal D_{MG}$, showing that if $i\ne j$, then $x_i$ and $x_j$ are almost orthogonal.

Lemma D.3. Let $i\ne j$, and let $(x_i,y_i),(x_j,y_j)\sim\mathcal D_{MG}$. Then, for sufficiently large $d$, w.p. $\ge 1 - \big(e^{-d/500} + 6d^{-\frac{\log(d)}2}\big)$:
\[
\langle x_i,x_j\rangle - \langle\mu_i,\mu_j\rangle \in \left[-2\|\mu_i\|\frac{\log(d)}{\sqrt d} - 1.1\frac{\log(d)}{\sqrt d}\,,\ 2\|\mu_i\|\frac{\log(d)}{\sqrt d} + 1.1\frac{\log(d)}{\sqrt d}\right].
\]

Proof: Let $x_i, x_j$ be data points, and denote $x_i = \mu_i+\zeta_i$ and $x_j = \mu_j+\zeta_j$. We look at
\[
\langle x_i,x_j\rangle = \langle\mu_i+\zeta_i,\ \mu_j+\zeta_j\rangle = \langle\mu_i,\mu_j\rangle + \langle\mu_i,\zeta_j\rangle + \langle\zeta_i,\mu_j\rangle + \langle\zeta_i,\zeta_j\rangle\,.
\]
Since $\mu_i\in\mathbb R^d$ and $\zeta_j\sim\mathcal N(0,\frac1d I_d)$, we get from Lemma A.4 with $t = \frac{\log(d)}{\sqrt d}$ that w.p. $\ge 1 - 2d^{-\frac{\log(d)}2}$,
\[
|\langle\mu_i,\zeta_j\rangle| \le \|\mu_i\|\frac{\log(d)}{\sqrt d}\,.
\]
From the same argument, $|\langle\mu_j,\zeta_i\rangle| \le \|\mu_j\|\frac{\log(d)}{\sqrt d}$. Finally, from Lemma A.6 we get that w.p. $\ge 1 - \big(e^{-d/500} + 2d^{-\frac{\log(d)}2}\big)$, $|\langle\zeta_i,\zeta_j\rangle| \le 1.1\frac{\log(d)}{\sqrt d}$. Combining all together (recall $\|\mu_i\| = \|\mu_j\|$),
\[
\Pr\left[\left|\langle x_i,x_j\rangle - \langle\mu_i,\mu_j\rangle\right| \ge 2\|\mu_i\|\frac{\log(d)}{\sqrt d} + 1.1\frac{\log(d)}{\sqrt d}\right] \le e^{-d/500} + 6d^{-\frac{\log(d)}2}\,,
\]
and the claim follows.

Lemma D.4. For $d$ large enough and $\|\mu_+\| = d^{-\frac14+\alpha}$ with $\alpha\in(0,\frac14)$,
\[
\|\mu_+\|^2 > 2\|\mu_+\|\frac{\log(d)}{\sqrt d} + 1.1\frac{\log(d)}{\sqrt d}\,.
\]

Proof:
\[
\|\mu_+\|^2 - 2\|\mu_+\|\frac{\log(d)}{\sqrt d} - 1.1\frac{\log(d)}{\sqrt d} = \frac1{d^{\frac12-2\alpha}} - \frac{2\log(d)}{d^{\frac34-\alpha}} - 1.1\frac{\log(d)}{\sqrt d} = d^{-\frac12}\left(d^{2\alpha} - 2\log(d)\,d^{\alpha-\frac14} - 1.1\log(d)\right).
\]
It is enough to find $d$ such that
\[
d^{2\alpha} \ge 2\log(d)\,d^{\alpha-\frac14} + 1.1\log(d) \iff 2\alpha \ge \frac{\log\big(2\log(d)\,d^{\alpha-\frac14} + 1.1\log(d)\big)}{\log d}\,,
\]
which is possible since the r.h.s. goes to $0$ as $d$ goes to infinity.

Lemma D.5. Let $S = \{(x_i,y_i)\}_{i=1}^m$ be a dataset such that $\forall i$, $x_i\in\mathbb R^d$ and $(x_i,y_i)\sim\mathcal D_{MG}$, for $m\le d$ and sufficiently large $d$. Then, w.p. $\ge 1 - \big(2me^{-\frac d{1700}} + m^2e^{-d/500} + 2m^2d^{-\frac{\log(d)}2}\big)$:
1. For all $(x,y)\in S$, $\|x\|^2\in[0.9,1.1]$.
2. For all $(x_i,y_i),(x_j,y_j)\in S$ with $i\ne j$, $|\langle x_i,x_j\rangle| \le \phi$ for $\phi\le\frac{\epsilon_d}{4mn}$.

Proof:
1. The claim follows w.p. $\ge 1 - 2me^{-\frac d{1700}}$, directly from a union bound over Lemma D.2.
2. From Lemma D.3 we get that w.p. $\ge 1 - \big(e^{-d/500} + 6d^{-\frac{\log(d)}2}\big)$,
\[
\langle x_i,x_j\rangle - \langle\mu_i,\mu_j\rangle \in \left[-2\|\mu_i\|\frac{\log(d)}{\sqrt d} - 1.1\frac{\log(d)}{\sqrt d}\,,\ 2\|\mu_i\|\frac{\log(d)}{\sqrt d} + 1.1\frac{\log(d)}{\sqrt d}\right].
\]
Therefore, $|\langle x_i,x_j\rangle|$ is maximal when taking $i\ne j$ such that $y_i = y_j$, resulting in
\[
|\langle x_i,x_j\rangle| \le \|\mu_i\|^2 + 2\|\mu_i\|\frac{\log(d)}{\sqrt d} + 1.1\frac{\log(d)}{\sqrt d}\,.
\]
From Lemma D.4 one can see it is enough to choose $d$ such that
\[
2\|\mu_+\|^2 = \frac2{d^{\frac12-2\alpha}} \le \frac{\epsilon_d}{4mn}\,,
\]
which is possible since $\frac{\epsilon_d}{4mn}$ is a given constant and $\lim_{d\to\infty}\frac1{d^{\frac12-2\alpha}} = 0$. Then, from a union bound, the claim follows.

For the next lemma, we add a few notations for readability:
1. $\phi^+_{\max} = \max_{i,j}\{\langle x_i,x_j\rangle : y_i = y_j\}$, $\phi^+_{\min} = \min_{i,j}\{\langle x_i,x_j\rangle : y_i = y_j\}$;
2. $\phi^-_{\max} = \max_{i,j}\{\langle x_i,x_j\rangle : y_i \ne y_j\}$, $\phi^-_{\min} = \min_{i,j}\{\langle x_i,x_j\rangle : y_i \ne y_j\}$.

Lemma D.6. Let $S = \{(x_i,y_i)\}_{i=1}^m$ be a dataset such that $\forall i$, $x_i\in\mathbb R^d$ and $(x_i,y_i)\sim\mathcal D_{MG}$. Then, for $m\le d$ and sufficiently large $d$, w.p. $\ge 1 - \big(me^{-d/500} + 6md^{-\frac{\log(d)}2}\big)$, for all $(x_i,y_i),(x_j,y_j)\in S$:
\[
0 < \phi^+_{\max}\,,\ -\phi^-_{\min} \ \le\ \|\mu_i\|^2 + 2\|\mu_i\|\frac{\log(d)}{\sqrt d} + 1.1\frac{\log(d)}{\sqrt d} \ \le\ \frac{\epsilon_d}{4mn}\,,
\]
\[
0 < \|\mu_i\|^2 - 2\|\mu_i\|\frac{\log(d)}{\sqrt d} - 1.1\frac{\log(d)}{\sqrt d} \ \le\ \phi^+_{\min}\,,\ -\phi^-_{\max}\,.
\]
In particular, same-label inner products are positive, different-label inner products are negative, and the two kinds obey symmetric bounds.

Proof: Directly from Lemma D.3, using a simple union bound as in Lemma D.5; both lower bounds are larger than $0$ by Lemma D.4.
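The sign structure of Lemma D.6 (same-cluster inner products positive, cross-cluster negative, all of magnitude $O(\|\mu\|^2)$) is easy to observe empirically. Below is a minimal sketch of ours; note that at moderate $d$ a clear sign separation needs $\alpha$ not too small, while the norm condition of Lemma D.5 only kicks in at much larger $d$.

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, alpha = 4000, 12, 0.2
mu_plus = rng.standard_normal(d)
mu_plus *= d ** (-0.25 + alpha) / np.linalg.norm(mu_plus)

y = rng.choice([-1.0, 1.0], size=m)
x = y[:, None] * mu_plus[None, :] + rng.standard_normal((m, d)) / np.sqrt(d)

G = x @ x.T                                    # Gram matrix of the sample
same = y[:, None] == y[None, :]
off_diag = ~np.eye(m, dtype=bool)
print("same-cluster range: ", G[same & off_diag].min(), G[same & off_diag].max())
print("cross-cluster range:", G[~same].min(), G[~same].max())
```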
Lemma D.7. Suppose a two-layer neural network $N(\theta,x) = \sum_{j=1}^n u_j\sigma(w_j^\top x)$ is trained on a dataset $S = \{(x_1,y_1),\ldots,(x_m,y_m)\}\sim\mathcal D_{MG}^m$, as described in Section 6. Assume that $\theta$ is a KKT point of the margin maximization problem (2) w.r.t. $S$ as in Definition 2.1. Let $(x_t,y_t)\sim\mathcal D_{MG}$. Then for all $j\in[n]$,
\[
\mathrm{sign}(\hat w_j^\top x_t) = \mathrm{sign}(w_j^\top x_t) = y_t\,\mathrm{sign}(u_j)\,.
\]

Proof: Let $(x_t,y_t)\sim\mathcal D_{MG}$. Since $\theta$ is a KKT point, from Definition 2.1 we get that
\[
w_j = u_j\sum_{i=1}^m\lambda_i y_i\sigma'_{i,j}x_i\,,\qquad w_j^\top x_t = u_j\sum_{i=1}^m\lambda_i y_i\sigma'_{i,j}\langle x_i,x_t\rangle\,,
\]
\[
\hat w_j = u_j\sum_{i\in[m]_{-l}}\lambda_i y_i\sigma'_{i,j}x_i\,,\qquad \hat w_j^\top x_t = u_j\sum_{i\in[m]_{-l}}\lambda_i y_i\sigma'_{i,j}\langle x_i,x_t\rangle\,,
\]
where $\sigma'_{i,j} = \mathbb 1_{w_j^\top x_i\ge 0}$.

Case 1: $y_t = 1$. We note that for all $i\in[m]$, $y_i\langle x_i,x_t\rangle \ge \min\big(\phi^+_{\min}, -\phi^-_{\max}\big) > 0$ (applying Lemma D.6 to the pairs $(x_i,x_t)$ as well): if $y_i = 1$ then $y_i\langle x_i,x_t\rangle = \langle x_i,x_t\rangle \ge \phi^+_{\min}$; otherwise $y_i = -1$ and $\langle x_i,x_t\rangle \le \phi^-_{\max} < 0$, so $-\langle x_i,x_t\rangle \ge -\phi^-_{\max} > 0$. Therefore, for all $j\in[n]$, the inner sums above are positive, and $\mathrm{sign}(\hat w_j^\top x_t) = \mathrm{sign}(w_j^\top x_t) = \mathrm{sign}(u_j) = y_t\,\mathrm{sign}(u_j)$.

Case 2: $y_t = -1$. We note that for all $i\in[m]$, $y_i\langle x_i,x_t\rangle \le \max\big(\phi^-_{\max}, -\phi^+_{\min}\big) < 0$: if $y_i = 1$ then $y_i\langle x_i,x_t\rangle = \langle x_i,x_t\rangle \le \phi^-_{\max}$; otherwise $y_i = -1$ and $\langle x_i,x_t\rangle \ge \phi^+_{\min}$, so $-\langle x_i,x_t\rangle \le -\phi^+_{\min} < 0$. Therefore, for all $j\in[n]$, $\mathrm{sign}(\hat w_j^\top x_t) = \mathrm{sign}(w_j^\top x_t) = -\mathrm{sign}(u_j) = y_t\,\mathrm{sign}(u_j)$.

D.2 Proof for Theorem 6.1

First, we note that according to Lemma D.5, w.p. $\ge 1 - \big(2me^{-\frac d{1700}} + m^2e^{-d/500} + 2m^2d^{-\frac{\log(d)}2}\big)$ over the choice of $S$, $S$ satisfies Assumption 2.3. For readability, in the following proof we assume that $S$ satisfies Assumption 2.3. Given a data point $(x_t,y_t)\sim\mathcal D_{MG}$, we show that
\[
y_t N(\theta,x_t) = y_t\sum_{j=1}^n u_j\sigma(w_j^\top x_t) > 0\,.
\]
We denote $x_t = \mu_t+\zeta_t$ with $\zeta_t\sim\mathcal N(0,\frac1d I_d)$, denote $I_+ = \{i\in[m] : y_i = 1\}$ and $I_- = \{i\in[m] : y_i = -1\}$, and use the notations $\phi^\pm_{\max},\phi^\pm_{\min}$ defined above, which by Lemma D.6 satisfy $\phi^+_{\min} > 0 > \phi^-_{\max}$, with $-\phi^-_{\max}$ and $-\phi^-_{\min}$ obeying the same bounds as $\phi^+_{\min}$ and $\phi^+_{\max}$, respectively. Since $\theta$ is a KKT point, from Definition 2.1 we get that
\[
w_j = u_j\sum_{i=1}^m\lambda_i y_i\sigma'_{i,j}x_i\,,\qquad w_j^\top x_t = u_j\sum_{i=1}^m\lambda_i y_i\sigma'_{i,j}\langle x_i,x_t\rangle\,,
\]
where $\sigma'_{i,j} = \mathbb 1_{w_j^\top x_i\ge 0}$.

Case 1: $y_t = 1$. We show that $N(\theta,x_t) > 0$. From Lemma D.7, for all $j\in[n]$, $\mathrm{sign}(w_j^\top x_t) = \mathrm{sign}(u_j)$. Hence,
\[
N(\theta,x_t) = \sum_{j:\,u_j<0} u_j\sigma(w_j^\top x_t) + \sum_{j:\,u_j\ge 0} u_j\sigma(w_j^\top x_t) = \sum_{j:\,u_j\ge 0} u_j w_j^\top x_t = \sum_{j:\,u_j\ge 0} u_j^2\sum_{i=1}^m\lambda_i y_i\sigma'_{i,j}\langle x_i,x_t\rangle = \sum_{i=1}^m y_i\langle x_i,x_t\rangle\sum_{j:\,u_j\ge 0} u_j^2\lambda_i\sigma'_{i,j}\,.
\]
First, as in the proof of Lemma D.7, for all $i\in[m]$ we have $y_i\langle x_i,x_t\rangle > 0$. Next, since $S$ satisfies Assumption 2.3 and $\theta$ satisfies Definition 2.1 with $\epsilon = \delta = 0$, we get from Lemma C.4 that for all $i\in[m]$, $\sum_{j:\,u_j\ge 0} u_j^2\lambda_i\sigma'_{i,j} > 0$.

Case 2: $y_t = -1$. Similarly, we show that $N(\theta,x_t) < 0$. From Lemma D.7, for all $j\in[n]$, $\mathrm{sign}(w_j^\top x_t) = -\mathrm{sign}(u_j)$. Hence,
\[
N(\theta,x_t) = \sum_{j:\,u_j<0} u_j\sigma(w_j^\top x_t) + \sum_{j:\,u_j\ge 0} u_j\sigma(w_j^\top x_t) = \sum_{j:\,u_j<0} u_j w_j^\top x_t = \sum_{i=1}^m y_i\langle x_i,x_t\rangle\sum_{j:\,u_j<0} u_j^2\lambda_i\sigma'_{i,j}\,.
\]
As in the proof of Lemma D.7, for all $i\in[m]$ we have $y_i\langle x_i,x_t\rangle < 0$, and from Lemma C.4 we get that $\sum_{j:\,u_j<0} u_j^2\lambda_i\sigma'_{i,j} > 0$, and the claim follows.

For showing that $y_t N(\hat\theta,x_t) = y_t\sum_{j=1}^n u_j\sigma(\hat w_j^\top x_t) > 0$, the proof is almost identical: at the end of each case we look at
\[
\sum_{i\in[m]_{-l}} y_i\langle x_i,x_t\rangle\sum_{j:\,u_j\ge 0} u_j^2\lambda_i\sigma'_{i,j}\,,
\]
and all the same arguments hold, concluding generalization for $\hat\theta$ as well, which finishes the proof.

We note that the same arguments can be used to show generalization in the case of unlearning a forget set $S_{\mathrm{forget}}\subseteq S$ of any size $k < m$ using the extended algorithm $\mathcal A_{k\text{-GA}}$, discussed in Section 5. In this case, we instead look at
\[
\sum_{i\in S\setminus S_{\mathrm{forget}}} y_i\langle x_i,x_t\rangle\sum_{j:\,u_j\ge 0} u_j^2\lambda_i\sigma'_{i,j}\,,
\]
and the same arguments hold, concluding generalization.
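A minimal empirical sketch of the claim (our own illustration, reusing the hypothetical helpers `network` and `kga_unlearn` from the sketch in Appendix C and the mixture variables from the sketch after Lemma D.6): fresh mixture samples should be classified correctly both before and after the gradient-ascent step.

```python
# Assumes theta = (W, u) was trained to a KKT point on the mixture data (X, y)
# and lam holds its KKT coefficients; all names here are hypothetical.
correct_before, correct_after = 0, 0
W_hat = kga_unlearn(W, u, X, y, lam, forget_idx=[0])
for _ in range(1000):
    yt = rng.choice([-1.0, 1.0])
    xt = yt * mu_plus + rng.standard_normal(d) / np.sqrt(d)   # fresh (x_t, y_t)
    correct_before += yt * network(W, u, xt) > 0
    correct_after += yt * network(W_hat, u, xt) > 0
print(correct_before / 1000, correct_after / 1000)            # both expected near 1.0
```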
E Experiment details

We take a high-dimensional dataset, with $m = 10$ and $d = 1000$, where the data distribution is $\mathcal N(0,\frac1d I_d)$. As mentioned in Example 2.4, such data satisfies Assumption 2.3 for small values of $\phi$ and $\psi$. We experiment with fully-connected ReLU networks, trained using the SGD optimizer with a binary cross-entropy loss that is normalized to have a margin of size $1$. In this experiment, for each data point $x_i\in S$ we calculate $\lambda_i$ and unlearn it using the gradient ascent algorithm $\mathcal A_{GA}$ with step size $\alpha\lambda_i$ for $\alpha\in[0,1.5]$, resulting in $\tilde\theta_i(\alpha)$. For each $\tilde\theta_i(\alpha)$ we calculate the corresponding $\epsilon,\delta$ of its KKT conditions with respect to $S\setminus(x_i,y_i)$. In Figure 1, we sample one point from $S$, perform the unlearning algorithm for all $10$ networks, and average the results.

We test a two-layer fully-connected ReLU network $\theta$ as in Eq. 1, with $n = 400$. We initialize the network with a small initialization for the first layer, dividing its standard deviation by a factor of $10^5$. We train with full batch size for $10^5$ epochs, using the SGD optimizer with a $10^{-5}$ weight-decay factor.
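To make the protocol concrete, here is a minimal sketch of the sweep behind Figure 1, assuming helpers like `kga_unlearn` and `kkt_residuals` from the sketches in Appendix C; the training loop itself, the extraction of the $\lambda_i$, and all names here are our own assumptions and only indicative of the procedure described above.

```python
import numpy as np

alphas = np.linspace(0.0, 1.5, 16)
eps_curve, delta_curve = [], []
i = 3                                        # index of the point to forget (arbitrary)
retain = [r for r in range(m) if r != i]

for a in alphas:
    # Gradient ascent with step size alpha * lambda_i on the single forgotten point
    W_tilde = kga_unlearn(W, u, X, y, a * lam, forget_idx=[i])
    eps, delta = kkt_residuals(W_tilde, u, X[retain], y[retain])
    eps_curve.append(eps)
    delta_curve.append(delta)

# Theorem 4.1 predicts the KKT residuals are (approximately) minimized near alpha = 1
print(alphas[int(np.argmin(eps_curve))])
```

Averaging such curves over independently trained networks, as described above, yields the plot in Figure 1.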
Provable Unlearning with Gradient Ascent on Two-Layer ReLU Neural Networks Odelia Melamed * Gilad Yehudai† Gal Vardi ‡ Abstract Machine Unlearning aims to remove specific data from trained models, addressing growing privacy and ethical concerns. We provide a theoretical analysis of a simple and widely used method-gradient ascent- used to reverse the influence of a specific data point without retraining from scratch. Leveraging the implicit bias of gradient descent towards solutions that satisfy the Karush-Kuhn-Tucker (KKT) conditions of a margin maximization problem, we quantify the quality of the unlearned model by evaluating how well it satisfies these conditions w.r.t. the retained data. To formalize this idea, we propose a new success criterion, termed (ε, δ, τ)-successful unlearning, and show that, for both linear models and two-layer neural networks with high dimensional data, a properly scaled gradient-ascent step satisfies this criterion and yields a model that closely approximates the retrained solution on the retained data. We also show that gradient ascent performs successful unlearning while still preserving generalization in a synthetic Gaussian-mixture setting. 1 Introduction Machine Unlearning is an emerging field motivated by growing societal and legal demands-specifically, the need for machine learning models to "forget" specific data upon request. This concern has intensified following discoveries that private training data can be extracted from model outputs or weights (Carlini et al., 2019; Haim et al., 2022; Fredrikson et al., 2015). The demand is further reinforced by regulations such as the EU GDPR's Right to be Forgotten, as well as concerns about security and ethical AI. Machine unlearning addresses this challenge by aiming to undo the effect of particular samples without incurring the cost of full retraining. The concept of unlearning was first formalized by Cao & Yang (2015) in the context of statistical query learning and has since been extended to deep neural networks. Broadly, two main approaches have emerged: retraining-based unlearning, which ensures complete data removal but is computationally expensive, and approximate unlearning, which aims for efficiency at the cost of weaker guarantees. Due to the stochastic and incremental nature of modern training procedures, which entangle data contributions, it is nontrivial to reverse the effect of the data to be forgotten while minimizing disruption to the retained data. There is a large body of research on adapting given networks, namely, manipulating the weights post-training. For a training set S, a set of points Sforget ⊆S to unlearn, and its complement Sretain = S \ Sforget, a direct approach is to increase the training loss for samples in Sforget using gradient steps. This direct method was first implemented in NegGrad (Golatkar et al., 2020), simply taking multiple negative gradient steps for Sforget with respect to the training loss. Other gradient-related post-training methods use other losses and second order information for improved results (Guo et al., 2019; Golatkar et al., 2020; Warnecke et al., 2021; Triantafillou et al., 2024; Graves et al., 2021). There are also additional variants of NegGrad, such as NegGrad+ (Kurmanji et al., 2023), and Advanced NegGrad (Choi & Na, 2023) which add a recovery phase, performing additional training steps on the retained set. 
In this work, we study the important building block of this foundational and widely-used method, a single gradient ascent step on the training loss w.r.t. Sforget. *Weizmann †Center for Data Science, New York University, ‡Weizmann 1 16 Oct 2025 One central question in the regime of approximate unlearning is how to measure unlearning performance. A common criterion, inspired by differential privacy (Dwork et al., 2014), evaluates success by comparing the output distributions of a model retrained from scratch with those of the unlearned model. This approach allows for approximate guarantees, where the distance between the two distributions is bounded by small parameters (Triantafillou et al., 2024; Ginart et al., 2019), providing a formal framework for quantifying the effectiveness of unlearning algorithms, albeit it is often too stringent. To provide a rigorous framework for analyzing unlearning, we turn to recent results on the implicit bias of neural networks under gradient descent (Lyu & Li, 2019; Ji & Telgarsky, 2020). These works show that training tends toward solutions that satisfy the Karush-Kuhn-Tucker (KKT) conditions of the maximum-margin problem. We use these conditions to formulate an unlearning criterion: A successful unlearning procedure should modify the model from satisfying the KKT conditions w.r.t. S to approximately satisfying them w.r.t. Sretain. This property is necessary for successful unlearning. That is, since a network retrained only on Sretain converges to a KKT point w.r.t. Sretain, then a successful unlearning algorithm also needs to obtain such a KKT point, at least approximately. Note that the approximation relaxation here is analogous to the relaxation for the distribution distance, allowing bounds on the deviation from exact solution attained by retraining. In our work, we analyze the unlearning performance of one gradient ascent step of a carefully chosen size. We define a new unlearning criterion for an unlearning algorithm A, called (ε, δ, τ)-successful unlearning, using the KKT conditions as discussed above. Next, in both linear models and two-layer neural networks trained with high dimensional (or nearly-orthogonal) data, we prove that a gradient ascent step of an appropriate size is a successful unlearning algorithm. In addition, we show a setting where unlearning using gradient ascent is both successful and does not hurt the model's generalization performance. In a bit more detail, our main contributions are: • For linear predictors, where the margin-maximizing solution is unique, we prove that gradient ascent with an appropriate step size is a (ε, δ, τ)-successful unlearning algorithm. Specifically, it yields an approximately max-margin predictor for Sretain. Moreover, due to the uniqueness of the solution, the unlearned predictor aligns closely-measured via cosine similarity-with the exact model retrained on Sretain. • We extend these findings to a two-layer neural network setting. Despite the added complexity and nonlinearity, we prove that a single gradient ascent step is a (ε, δ, τ)-successful unlearning algorithm for some small ε, δ, τ. • We show that unlearning does not compromise out-of-sample prediction, using a synthetic mixture-of-Gaussians dataset. We show that models unlearned via gradient ascent maintain generalization performance comparable to the original. Related Work Machine unlearning was initially proposed in the statistical query setting by Cao & Yang (2015) and later extended to deep neural networks. 
The strongest unlearning guarantees are often formalized via differential privacy (Dwork et al., 2014), requiring indistinguishability between unlearned and retrained model outputs. This was relaxed using KL-divergence (Golatkar et al., 2020), while other lines of work evaluate unlearning effectiveness through privacy attacks, such as membership inference or data reconstruction (Niu et al., 2024; Haim et al., 2022). To achieve these goals, many methods aim to avoid full retraining. For example, SISA (Bourtoule et al., 2021) partitions the training data into multiple shards to enable a faster future forgetting. Graves et al. (2021) proposed saving intermediate gradients during training with respect to different training data points, enabling faster simulation of retraining using these intermediate gradients without the forget set. Post-training approaches include fine-tuning for Sretain only (hoping for catastrophic forgetting of the rest of data) or with wrong labels for data in Sforget (Golatkar et al. (2020); Triantafillou et al. (2024); Graves et al. (2021); Kurmanji et al. (2023)), or using different losses (Golatkar et al., 2020). These techniques often rely on gradient-based updates, with loss functions adjusted for unlearning objectives. Several methods also incorporate second-order information for better precision (Guo et al., 2019; Golatkar et al., 2020; Warnecke et al., 2021). The gradient-ascent method was first introduced by Golatkar et al. (2020) as NegGrad, applying negative gradient steps to increase loss on the forget set. Its extensions, NegGrad+ (Kurmanji et al., 2023) and advanced NegGrad (Choi 2 & Na, 2023), add a recovery phase by performing fine-tuning on the retained set. In this work, we isolate the basic component-gradient ascent-and study its behavior analytically. On the theoretical side, Guo et al. (2019) analyzed linear models and proposed a certified unlearning framework. Leveraging the existence of a unique optimal solution, they argue that inspecting the training gradients on the retained dataset can reveal residual influence from the deleted point-particularly when the model incurs non-zero loss, which may indicate incomplete unlearning. Sekhari et al. (2021) analyze unlearning capacity based on test loss degradation. Our approach defines unlearning through the lens of KKT conditions, building on a line of work showing that training converges to a KKT point of the margin maximization problem for the dataset. implicit bias and margin maximization A great body of research has studied the implicit bias of training neural networks with gradient methods toward solutions that generalize well (Neyshabur et al., 2017; Zhang et al., 2021). Our analysis is based on the characterization of the implicit bias of gradient flow on homogeneous models towards KKT solutions of the max margin problem, a result due to Lyu & Li (2019) and Ji & Telgarsky (2020). Implicit bias towards margin maximization was previously studied also for linear predictors (Soudry et al., 2018), deep linear networks and linear convolutional networks (Gunasekar et al., 2018). For a survey on implicit bias of neural networks see Vardi (2023). 2 Settings Notations. For m ∈N, we denote [m] = {1, 2, . . . , m}, and for l ∈[m], we denote [m]-l = [m] \ {l}. We use bold-face letters to denote vectors, e.g., x = (x1, . . . , xd) ∈Rd. We use ∥x∥to denote the Euclidean norm of a vector x. We denote by 1x≥0 the indicator function such that 1x≥0 = 1 if x ≥0 and 0 otherwise. 
We denote by sign(x) the sign function, sign(x) = 1 if x ≥0 and -1 otherwise. We denote by U(A) the uniform distribution over a set A. For a distribution D, we denote by x ∼Dm a vector x that consists of m i.i.d. samples from D. We denote by cossim(x1, x2) the cosine similarity of vectors x1, x2, defined by cossim(x1, x2) = ⟨x1,x2⟩ ∥x1∥∥x2∥. 2.1 Architectures and training In this paper, we discuss unlearning in two fundamental models: a linear predictor and a two-layer fully connected network. For an input x ∈Rd and a vector w ∈Rd, we will denote a linear predictor by N(w, x) = w⊤x. Our two-layer network is defined by N(θ, x) = n X j=1 ujσ(w⊤ j x) , (1) where σ(z) = max(z, 0) is the ReLU activation function. For all j ∈[n], we initialize uj ∼U {-1 √n, 1 √n} and fix them throughout training. The parameters w1, . . . , wn are trained. We denote by θ a vectorization of all the trained parameters. Given a training set S = {(xi, yi)}m i=1, we train our models using gradient descent over the empirical loss L(θ) = 1 m m X i=1 l(yiN(θ, xi)) , where lis either the logistic loss l(q) = log(1 + e-q) or the exponential loss l(q) = e-q. That is, we have θt+1 = θt -β∇L(θt), where θt are the weights after the t-th training epoch, and β is the step size. We consider the limit where β is infinitesimally small, called gradient flow. More formally, in gradient flow the trajectory θt is defined for all t ≥0 and satisfies the differential equation dθt dt = -∇L(θt). For a model N(θ, x), where θ are the parameters and x is the input, we say that N is homogeneous if there exists C > 0 such that for every α > 0, and θ, x, we have N(αθ, x) = αCN(θ, x). We note that both a linear predictor and a two-layer network, as defined above, are homogeneous with C = 1. For both linear and two-layer ReLU networks, there is an implicit bias towards margin maximization, as implied by the following theorem: 3 Theorem 2.1 (Lyu & Li (2019), Ji & Telgarsky (2020)). Let N(x, θ) be a homogeneous linear or ReLU neural network. Consider minimizing the logistic or exponential loss using gradient flow over a binary classification set S = {(xi, yi)}m i=1 ⊆Rd × {-1, 1}. Assume that there is a time t0 where L(θt0) 0. Assumption 2.3. The training set S = {(xi, yi)}m i=1 satisfies 1. For all (x, y) ∈S we have ∥x∥2 ∈[1 -ψ, 1 + ψ]. 2. For all (xi, yi), (xj, yj) ∈S with i ̸= j we have |⟨xi, xj⟩| ≤φ. The data normalization assumption (Item 1 above) is very common, as data points with significantly different norms might cause biases during training, toward higher norm data points. The latter assumption can be phrased as near orthogonality of the data points, which is also quite common in the literature for high dimensional data (Frei et al., 2022; Vardi et al., 2022), and holds with high probability for popular distributions. A profound example of a distribution that satisfies both conditions with high probability is the Gaussian distribution N(0, 1 dId), where d is the vector dimension. Another example is the uniform distribution over the unit sphere Sd-1. Example. For a training set S = {(xi, yi)}m i=1 where the xi's are drawn i.i.d. from N(0, 1 dId), Assumption 2.3 holds with probability at least 1 -(2me-d/500 + m2e-d/500 + 2m2d-log(d) 2 ), for ψ = 0.1 and φ = 1.1 log(d) √ d (see Theorem A.1). Moreover, in Section 6 we will show that Assumption 2.3 holds with high probability for a mixture of Gaussians. 5 3 Linear Predictors In this section, we consider a linear predictor N(w, x) = ⟨w, x⟩trained on a dataset S = {(xi, yi)}m i=1. 
Recall that when training a linear predictor, gradient flow converges in direction to the max-margin solution (i.e., global optimum of Problem 2), and after time t it reaches an (εt, δt)-approximate KKT point of Problem 2 where (εt, δt) →(0, 0) as t →∞. Moreover, recall that for linear predictors, Problem 2 has a unique global optimum. The following theorem shows that unlearning using gradient ascent (denoted by AGA) is (ε, δ, τ)-successful w.r.t. S that satisfies Assumption 2.3 and w which is an approximate KKT point according to Definition 2.1, in two distinct aspects. In the first part (item 1 below), we show it for τ = 0, that is, AGA(w, S, l) is a linear predictor which is an approximate KKT point of the max-margin problem w.r.t. S \ (xl, yl). Then, we show it for ε = δ = 0, namely, the cosine similarity of AGA(w, S, l) and the max-margin predictor w.r.t. S \ (xl, yl) is large. Theorem 3.1. Let 0 0: Let w∗be a max-margin linear predictor w.r.t. the remaining training set S \ (xl, yl), i.e. the global optimum of the Problem 2 w.r.t. S \ (xl, yl). Then, cossim(AGA(w, S, l), w∗) ≥1 -τ. We now briefly discuss the proof intuition. Due to the stationarity condition for w (Definition 2.1), we can express w as weighted sum of the network's gradient up to some error vector vε1 of norm ε1 w = m X i=1 λiyi∇wN(w, xi) + vε = m X i=1 λiyixi + vε1 . Then, by performing gradient ascent AGA with the appropriate step size we get AGA(w, S, l) = m X i=1 λiyi∇wN(w, xi) + vε -λlyl∇wN(w, xr) = X i∈[m]-l λiyixi + vε . First, one can see that the subtraction will result in a stationary condition w.r.t. S \ (xl, yl) and the original λi's. Observing the margin for a point (xt, yt) (for t ̸= l), ⟨w, xt⟩= m X i=1 λiyi⟨xi, xt⟩+ ⟨vε1, xt⟩, we get that the change in the parameter vector (due to the gradient step) results in an additional term of at most λl|⟨xl, xt⟩| compared to the original predictor's margin. Due to the near-orthogonality of the data points in S (Assumption 2.3), and a constant upper bound for λl which we prove, we get that this difference is of order O( εd m). Regarding the proof for (2), we consider the representation of w∗ w∗= m X i=1 λ∗ i yi∇wN(w, xi) = m X i=1 λ∗ i yixi . For i ∈[m]-l we prove a small O(ε1 + εd) upper bound for the difference λ∗ i -λi, which implies that the two predictors AGA(θ, S, l) and w∗independently reach very similar KKT multipliers for the margin maximization problem (Definition 2.1). This yield an 1 -O(√εd + √ε1 + √δ1) lower bound in the cosine similarity. For the full proof we refer the reader to Appendix B.1. 6 Figure 1: Effect of deviation from the correct step size on the KKT approximation parameter ε for a two-layer network. The x-axis shows the step size as a fraction of the step size from Theorem 4.1, and the y-axis shows the KKT approximation parameter ε of the unlearned model w.r.t. the retain dataset. 4 Two-Layer ReLU Networks In this section, we extend our analysis to two-layer ReLU neural networks. We consider a neural network of the form N(x, θ) = nP j=1 ujσ(w⊤ j x), trained on dataset S = {(xi, yi)}m i=1. Note that unlike the linear setting, the nonsmoothness of N(x, θ) implies that even small perturbations in θ can cause significant shifts in the model's gradients. This introduces new challenges and, as a result, leads to a slightly weaker guarantee. The following theorem establishes that unlearning using gradient ascent with an appropriate step size, constitutes an (ε, δ, τ)-successful unlearning w.r.t. 
S that satisfies Assumption 2.3 and θ which is an approximate KKT according to Definition 2.1, where ε, δ, and τ are small quantities determined by the KKT approximation parameters of θ and the underlying data characteristics. This implies that the unlearned parameter vector AGA(θ, S, l) is close-in terms of cosine similarity-to an approximate KKT point eθ corresponding to the retained dataset S \ (xl, yl). Theorem 4.1. Let 0 0, m and α, S satisfies Assumption 2.3 for ψ ≤0.1 and φ ≤∥μi∥2 + 2 ∥μi∥log(d) √ d + 1.1 log(d) √ d ≤ εd 4mn, w.p. ≥1 -(2med 1700 + m2e-d/500 + 2m2d-log(d) 2 ) for large enough d. The following theorem shows that the unlearned network achieves generalization bounds comparable to those of the original classifier. Combined with the fact that it is close to an approximate KKT point of Problem 2 with respect to the retained dataset (as established in Theorem 4.1), this demonstrates a clean setting where unlearning is successful, and it does not hurt generalization. Theorem 6.1. Let 0 0] ≥1 -(2ed 1700 + me-d/500 + 2md-log(d) 2 ) , Pr (xt,yt)∼DMG [ytN(xt, AGA(θ, S, l)) > 0] ≥1 -(2ed 1700 + me-d/500 + 2md-log(d) 2 ) . We briefly outline the intuition behind the generalization proof. Due to the small cluster means and relatively large variance, the data points in S are nearly orthogonal. Although the deviation from orthogonality is small, it is crucially structured: the inner product sign is determined by whether two points belong to the same or different clusters, namely xi, xj are in the same cluster ⇒⟨xi, xj⟩> 0 , xi, xj are in different clusters ⇒⟨xi, xj⟩ 0 we have Pr n - w σ 2 ≥2 √ nt ≤e-t . Plugging-in t = c · n, we get Pr n - w σ 2 ≥2√cn = Pr w σ 2 ≤(1 -2√c)n ≤e-c·n . Thus, we have for c = 1 400 Pr w σ 2 ≤(1 -2 1 √ 400)n = Pr w σ 2 ≤9 10n ≤en 400 . And finally, Pr ∥w∥2 ≤9 10σ2n ≤en 400 . Lemma A.2. Let w ∈Rn with w ∼N(0, σ2In). Then: Pr h ∥w∥2 ≥1.1σ2n i ≤en 500 . Proof: Note that w σ 2 has the Chi-squared distribution. A concentration bound by Laurent and Massart (Laurent & Massart, 2000, Lemma 1) implies that for all t > 0 we have Pr w σ 2 -n ≥2 √ nt + 2t ≤e-t . 12 Plugging-in t = c · n, we get Pr w σ 2 -n ≥2√cn + 2cn = Pr w σ 2 ≥(2√c + 2c + 1)n ≤ec·n . Thus, we have for c = 1 500 Pr w σ 2 ≥1.1n = Pr w σ 2 ≥(2 1 √ 500 + 2 500 + 1)n ≤en 500 . And finally, Pr h ∥w∥2 ≥1.1σ2n i ≤en 500 . Lemma A.3. For any i ∈[m], with probability ≥1 -(2ed 500 ), ∥xi∥2 ∈[0.9, 1.1]. Proof: Using Lemma A.1 to lower bound ∥xi∥2 for xi ∼N(0, 1 d) w.p. ≥1 -en 400 , and use Lemma A.2 to upper bound ∥xi∥2 w.p. ≥1 -en 500 . Lemma A.4. Let u ∈Rn, and v ∼N(0, σ2In). Then, for every t > 0 we have Pr [|⟨u, v⟩| ≥∥u∥t] ≤2 exp -t2 2σ2 . Proof: We first consider ⟨u ∥u∥, v⟩. As the distribution N(0, σ2In) is rotation invariant, one can rotate u and v to get ̃u and ̃v such that ̃u ∥u∥= e1, the first standard basis vector and ⟨u ∥u∥, v⟩= ⟨ ̃u ∥u∥, ̃v⟩. Note, v and ̃v have the same distribution. We can see that ⟨ ̃u ∥u∥, ̃v⟩∼N(0, σ2) since it is the first coordinate of ̃v. By a standard tail bound, we get that for t > 0: Pr |⟨u ∥u∥, v⟩| ≥t = Pr |⟨ ̃u ∥u∥, ̃v⟩| ≥t = Pr [| ̃v1| ≥t] ≤2 exp -t2 2σ2 . Therefore Pr [|⟨u, v⟩| ≥∥u∥t] ≤2 exp -t2 2σ2 . Lemma A.5. Let u ∼N(0, σ2 1In), and v ∼N(0, σ2 2In). Then, for every t > 0 we have Pr |⟨u, v⟩| ≥1.1σ1 √nt ≤en 500 + 2e-t2/2σ2 2 . Proof: Using Lemma A.2 we get that w.p. ≤en 500 we have ∥u∥≥1.1σ1 √n. Moreover, by Lemma A.4, w.p. ≤ 2 exp -t2 2σ2 2 we have |⟨u, v⟩| ≥∥u∥t. 
By the union bound, we get Pr |⟨u, v⟩| ≥1.1σ1 √nt ≤Pr ∥u∥≥1.1σ1 √n + Pr [|⟨u, v⟩| ≥∥u∥t] ≤en 500 + 2 exp -t2 2σ2 2 . 13 Lemma A.6. Let u, v ∼N(0, 1 dId). Then, Pr |⟨u, v⟩| ≥1.1log(d) √ d ≤ed 500 + 2d-log(d) 2 . Proof: Using Lemma A.5 for n = d, σ1 = σ2 = 1 √ d and t = log(d) √ d . Lemma A.7. Let a dataset S = {(xi, yi)}m i=1 be such that ∀i, xi ∈Rd and xi ∼N(0, 1 dId), for m ≤d. Then, w.p. ≥1 -2med 500 , For all (x, y) ∈S, ∥x∥2 ∈[0.9, 1.1] Proof: We prove both upper and lower bounds. Pr min i∈[m] n ∥xi∥2o 1.1 = = Pr h ∃i ∈[m], ∥xi∥2 > 1.1 i ≤ m X i=1 Pr h ∥xi∥2 > 1.1 i ≤med 500 where the last inequality holds due to A.2, and the claim follows. Lemma A.8. Let a dataset S = {(xi, yi)}m i=1 be such that ∀i, xi ∈Rd and xi ∼N(0, 1 dId), for m ≤d. Then, w.p. ≥1 -(m2e-d/500 + 2m2d-log(d) 2 ), For all (xi, yi), (xj, yj) ∈S, |⟨xi, xj⟩| ≤1.1 log(d) √ d Proof: We prove an upper bound. Pr max i̸=j {|⟨xi, xj⟩|} > 1.1log(d) √ d = = Pr ∃i.j ∈[m], |⟨xi, xj⟩| > 1.1log(d) √ d ≤ m X i=1 m X j=1 Pr |⟨xi, xj⟩| > 1.1log(d) √ d ≤m2e-d/500 + 2m2d-log(d) 2 where the last inequality holds due to Lemma A.6. B Proofs for section 3 Lemma B.1. Let εd, ε, δ ≤0.5 and let N(w, x) be a linear classifier trained on a dataset S = {(xi, yi)}m i=1, and assume that w is an (ε, δ)-approximate KKT point satisfying Definition 2.1, and S satisfies Assumption 2.3 for ψ ≤0.1, φ ≤ εd 4m. Note, for readability of the proof we denote ε1 by ε and δ1 by δ. Then, max i λi ≤2.4 14 Proof: We look at λr = maxi λi. If λr = 0 we are done, since the r.h.s is non-negative. Otherwise, we define vε = w -Pm i=1 λiyixi, and by item (2) from Definition 2.1 we have that ∥vε∥≤ε. Hence, we have w = m X i=0 λiyixi + vε , and from item (3) of Definition 2.1 and λr > 0, we have 1 + δ λr ≥yrN(w, xr) ≥1. Therefore, 1 + δ λr ≥yrN(w, xr) = yr m X i=0 λiyi⟨xi, xr⟩+ yr⟨xr, vε⟩=λr ∥xr∥2 + yr X i̸=r∈[m] λiyi⟨xi, xr⟩+ yr⟨xr, vε⟩ ≥λr(1 -ψ) - X i̸=r∈[m] λi|⟨xi, xr⟩| -∥xr∥∥vε∥ ≥λr(1 -ψ) -λr · φ(m -1) -ε p 1 + ψ where the last two inequalities holds due to Assumption 2.3 and Cauchy-Schwartz inequality. Solving for λr leads to to λ2 r ((1 -ψ) -φ(m -1)) -(1 + ε p 1 + ψ)λr -δ ≤0 . Since ψ ≤0.1 and φ ≤ εd 4m we get (1 -ψ) -φ(m -1) ≥0.9 -(m -1) εd 4m ≥0.9 -εd 4 > 0 , and we get that λr ≤(1 + ε√1 + ψ) + p (1 + ε√1 + ψ)2 + 4((1 -ψ) -φ(m -1))δ 2((1 -ψ) -φ(m -1)) (4) Plugging in ε, δ ≤0.5, ψ ≤0.1 and φ ≤ εd 4m, we get λr ≤(1 + ε√1 + ψ) + p (1 + ε√1 + ψ)2 + 4((1 -ψ) -φ(m -1))δ 2((1 -ψ) -φ(m -1)) ≤ ≤ (1 + 0.5 √ 1.1) + q (1 + 0.5 √ 1.1)2 + 2 2(0.9 -εd 4m(m -1)) ≤ (1 + 0.5 √ 1.1) + q (1 + 0.5 √ 1.1)2 + 2 2(0.9 -1 8) ≤3.61 1.55 ≤2.4 . Lemma B.2. Let εd, ε, δ ≤0.5 and let N(w, x) be a linear classifier trained on a dataset S = {(xi, yi)}m i=1, and assume that w is an (ε, δ)-approximate KKT point satisfying Definition 2.1, and S satisfies Assumption 2.3 for ψ ≤0.1, φ ≤ εd 4m. Let t ∈[m].Then, 1 ∥xt∥2 -0.6εd + 1.1ε ∥xt∥2 ≤λt ≤ 1 ∥xt∥2 + 1.2εd + 2.15ε + 2.2δ ∥xt∥2 . Proof: We begin showing the result for the more general case of ε, δ ≤0.5. Let t ∈[m]. Looking at an upper bound of the margin, we have 15 1 ≤ytN(w, xt) = yt m X i=1 λiyi⟨xi, xt⟩+ yt⟨vε, xt⟩≤λt ∥xt∥2 + X i̸=t∈[m] λi|⟨xi, xt⟩| + ⟨vε, xt⟩ ≤λt ∥xt∥2 + φ(m -1) max p λp + ⟨vε, xt⟩ ≤λt ∥xt∥2 + 2.4φ(m -1) + ε ∥xt∥, where the last inequality hold due to Lemma B.1 and Cauchy-Schwartz inequality. We solve it for λt with plugging in φ ≤ εd 4m getting a lower bound for it λt ≥ 1 ∥xt∥2 -2.4φ(m -1) ∥xt∥2 - ε ∥xt∥≥ 1 ∥xt∥2 -0.6εd + 1.1ε ∥xt∥2 . We note that 1 -0.6εd -1.1ε ≥0.15 > 0, the therefore λt > 0. 
Next, to find an upper bound for λt, we look at a lower bound of the margin 1 + δ λt ≥ytN(w, xt) = yt m X i=1 λiyi⟨xi, xt⟩+ yt⟨vε, xt⟩≥λt ∥xt∥2 - X i̸=t∈[m] λi|⟨xi, xt⟩| -⟨vε, xt⟩ ≥λt ∥xt∥2 -φ(m -1) max p λp -⟨vε, xt⟩ ≥λt ∥xt∥2 -2.4φ(m -1) -ε ∥xt∥, where again the last inequalities holds due to Lemma B.1 Cauchy-Schwartz inequality. We get λ2 t ∥xt∥2 -λt(1 + 2.4φ(m -1) + ε ∥xt∥) -δ ≤0 and solve for λt with plugging in φ ≤ εd 4m, ∥xt∥2 ≤(1 -ψ), ψ ≤0.1 we get an upper bound for λt λt ≤ (1 + 2.4φ(m -1) + ε ∥xt∥) + q (1 + 2.4φ(m -1) + ε ∥xt∥)2 + 4 ∥xt∥2 δ 2 ∥xt∥2 ≤1 + 2.4 εd 4m(m -1) + ε√1 + ψ + 1 + 2.4 εd 4m(m -1) + ε(1 + ψ) + 4δ(1 + ψ) 2 ∥xt∥2 ≤ 1 ∥xt∥2 + 2.4 εd 4m(m -1) + ε√1 + ψ + 2.4 εd 4m(m -1) + ε(1 + ψ) + 4δ(1 + ψ) 2 ∥xt∥2 ≤ 1 ∥xt∥2 + 2.4 εd 4 + ε √ 1.1 + 2.4 εd 4 + ε(1.1) + 4δ(1.1) 2 ∥xt∥2 ≤ 1 ∥xt∥2 + 1.2εd + 2.15ε + 2.2δ ∥xt∥2 . which finishes the proof. We next define an (ε, δ, γ)-approximate KKT. It is very similar to the (ε, δ)-approximate KKT definition given in Definition 2.1, with an extra γ relaxation of the margin. Definition B.1. A (ε, δ, γ)-approximate KKT for minθ 1 2 ∥θ∥2 s.t.∀i ∈[m], yiN(θ, xi) ≥1: ∃λ1, ..., λm such that 1. λ1, ..., λm ≥0 2. θ - m P i=1 λiyi∇θN(θ, xi) 2 ≤ε 16 3. ∀i ∈[m], λi (yiN(θ, xi) -1) ≤δ 4. ∀i ∈[m], yiN(θ, xi) ≥1 -γ Now, we show that scaling an (ε, δ, γ)-approximate KKT can result in an (ε′, δ′)-approximate KKT, and determine the scaling effect on the approximation parameters. Lemma B.3. Let a network N(θ, x) be such that N(θ, x) is a 1-homogeneous function with respect to the weights. Let S = {(xi, yi)}m i=1 be a dataset. Then, if θ is a (ε, δ, γ)-approximate KKT (according to the above Definition B.1) w.r.t S with corresponding {λi}m i=1, then 1 1-γ θ is a ( 1 1-γ ε, maxp λp γ 1-γ + 1 1-γ δ)-approximate KKT (according to Definition 2.1) w.r.t S with with the corresponding λ′ i = Cλi . Proof: Let N(θ, x) a 1-homogeneous function with respect to the weights, and θ be a (ε, δ, γ)-approximate KKT. From 1-homogeneity, for all C > 0 N(Cθ, x) = CN(θ, x) and the gradient is 0-homogeneous, meaning ∇θN(Cθ, x) = ∇θN(θ, x) . We denote C = 1 1-γ , and show that Cθ satisfies the conditions in Definition 2.1. 1. Cθ - m P i=2 Cλiyi∇θN(Cθ, xi) = C θ - m P i=2 λiyi∇θN(θ, xi) ≤Cε. 2. Let i ∈[m]. Then, yiN(Cθ, xi) = CyiN(θ, xi) ≥C(1 -γ) = 1 3. Let i ∈[m]. Assume λi (yiN(θ, xi) -1) ≤δ. If λi = 0 we are done. Else, λi > 0 and yiN(θ, xi) ≤1 + δ λi . Then, λi (yiN(Cθ, xi) -1) = λi (CyiN(θ, xi) -1) ≤ ≤λi C(1 + δ λi ) -1 = λi(C -1) + Cδ ≤max p λp γ 1 -γ + 1 1 -γ δ , which finishes the proof. B.1 Proof for Theorem 3.1 Proof: Note, for readability of the proof we denote ε1 by ε and δ1 by δ. Using the stationarity condition in Definition 2.1 for w, we denote vε = w - m P i=1 λiyi∇wN(w, xi), so we get that ∥vε∥≤ε and w = m X i=1 λiyi∇wN(w, xi) + vε = m X i=1 λiyixi + vε . Let l ∈[m], we wish to take a negative gradient step of size β, such that β∇wl(ylN(w, xl)) = -λlyl∇wN(w, xl) so we pick a step size β = -λl l′(ylN(w,xl)). Then, when taking one gradient ascent step for (xl, yl) of size β, we get the following ˆw ˆw = m X i=1 λiyi∇wN(w, xi) + vε -λlyl∇wN(w, xr) = X i∈[m]-l λiyixi + vε . 17 B.1.1 Proof of 1. ˆw has the direction of an (ε+ εεd m-εd , δ + δεd m-εd + 7.2εd m )-approximate KKT point for the margin maximization problem for S \ (xl, yl). 
For readability, we show that ˆw satisfies the conditions for (ε, δ + 1.44εd m , 0.6εd m )-approximate KKT by Definition B.1, and then use Lemma B.3 to deduce that 1 10.6εd m ˆw satisfies the conditions for (ε+ εεd m-εd , δ+ δεd m-εd + 7.2εd m )-approximate KKT according to Definition 2.1. (1) Dual Feasibility: For all i ∈[m]-l, λi ≥0. directly from dual feasibility for w (Definition 2.1). (2) Stationarity: ˆw - m P i=1 λiyi∇wN( ˆw, xi) ≤ε. Since ∇wN( ˆw, x) = ∇wN(w, x) = x, one can write ˆw = X i∈[m]-l λiyixi + vε = X i∈[m]-l λiyi∇wN( ˆw, xi) + vε and the claim follows from (2) stationarity for w (Definition 2.1). Let t ∈[m]-l. Using the definitions of w and ˆw, we can write the margin as ytN(w, xt) = yt m X i=1 λiyi⟨xi, xt⟩+ yt⟨vε, xt⟩= ytN( ˆw, xt) + ytλlyl⟨xl, xt⟩. (5) Using this equality we prove the next two conditions: (3) Complementarity Slackness: For all t ∈[m]-l, λt (ytN( ˆw, xt) -1) ≤δ + 1.44εd m . If λt = 0 we are done. Else, λt > 0. From complementarity slackness of w being an (ε, δ)-approximate KKT, we know that ytN(w, xt) ≤1 + δ λt . We use 5 to lower bound the margin of ytN(w, xt), getting 1 + δ λt ≥ytN(w, xt) =ytN( ˆw, xt) + ytλlyl|⟨xl, xt⟩| ≥ytN( ˆw, xt) -λl|⟨xl, xt⟩| ≥ytN( ˆw, xt) -φ max p λp , plugging in φ ≤ εd 4m and the λp upper bound from Lemma B.1 we get ytN( ˆw, xt) -φ max p λp ≥ytN( ˆw, xt) -εd 4m2.4 ≥ytN( ˆw, xt) -0.6εd m . We deduce an upper bound for the margin of N( ˆw, xt)- ytN( ˆw, xt) ≤1 + δ λt + 0.6εd m = 1 + δ + 3 5mλtεd λt ≤1 + δ + 3 5m2.4εd λt ≤1 + δ + 1.44εd m λt as desired. (4) Primal Feasibility: For all t ∈[m]-l, ytN( ˆw, xt) ≥1-0.6εd m . We use 5 to lower bound the margin of N( ˆw, xt), and use primal feasibility for w (Definition 2.1), getting ytN( ˆw, xt) = ytN(w, xt) -ytλlyl|⟨xl, xt⟩| ≥1 -λl|⟨xl, xt⟩| ≥1 -φ max p λp . 18 Plugging in φ ≤ εd 4m and the λp upper bound from Lemma B.1 we get that φ max p λp ≤2.4εd 4m ≤0.6εd m . Hence, ytN( ˆw, xt) ≥1 -0.6εd m . To conclude, we showed that ˆw is an (ε, δ + 1.44εd m , 0.6εd m )-approximate KKT by Definition B.1 . Finally, we look at the scaled weights 1 10.6εd m ˆw. For εd ≤1 We calculate 1 1 -0.6εd m ε ≤ m m -εd ε = 1 + εd m -εd ε = ε + εεd m -εd , and max p λp 0.6εd m 1 -0.6εd m + δ + 1.44εd m 1 -0.6εd m ≤δ + δεd m -εd + 7.2εd m and get from Lemma B.3 that 1 10.6εd m ˆw is a (ε + εεd m-εd , δ + δεd m-εd + 7.2εd m )-approximate KKT by Definition 2.1 w.r.t. S \ (xl, yl). We note that ˆw and 1 10.6εd m ˆw have the same direction, which finishes the proof. B.1.2 Proof of 2. Cosine -Similarity( ˆw, w∗) ≥1 -C(√εd + √εd + √ δ) for some C > 0. Let N(w∗, x) be a max-margin linear predictor w.r.t. the remaining training set S \ (xl, yl). Hence, w∗is a KKT point of the margin maximization problem (2) w.r.t. {xi, yi}i∈[m]-l, as in Definition 2.1 (with ε = δ = 0). From the stationarity condition we denote w∗= P i∈[m]-l λ∗ i yixi. Let t ∈[m]-l. We use Lemma B.2 to prove tight bounds for λt and λ∗ t . For a given t, λt and λ∗ t are close up to a small additive factor depend on εd, ε and δ. For λt we can use the results from Lemma B.2 directly, having 1 ∥xt∥2 -0.6εd + 1.1ε ∥xt∥2 ≤λt ≤ 1 ∥xt∥2 + 1.2εd + 2.15ε + 2.2δ ∥xt∥2 . (6) For λ∗ t , since w∗is a KKT point of 2 w.r.t. S \ (xl, yl), we have a dataset of size m -1 and ε = δ = 0. To accommodate the different parameter, we note that φ ≤ εd 4m ≤ εd 4(m-1), conclude that 1 ∥xt∥2 -0.6εd ∥xt∥2 ≤λ∗ t ≤ 1 ∥xt∥2 + 1.2εd ∥xt∥2 . (7) And, similar note hold for B.1 resulting in λ∗≤2.4. We are now ready to prove the cosine similarity lower bound. 
For ˆw = P i∈[m]-l λiyixi + vε and w∗= P i∈[m]-l λ∗ i yixi, we have ⟨ˆw, w∗⟩ ∥ˆw∥∥w∗∥= ⟨P i∈[m]-l λiyixi + vε, P i∈[m]-l λ∗ i yixi⟩ ∥ˆw∥∥w∗∥ . We upper bound the norm of the predictors, when using 6 and 7 for any i ∈[m]-l separately, bounding λi ∥xi∥2 and λ∗ i ∥xi∥2 respectively. Upper bounding ∥ˆw∥2 we get ∥ˆw∥2 = X i∈[m]-l λiyixi + vε 2 = ⟨ X i∈[m]-l λiyixi + vε, X i∈[m]-l λiyixi + vε⟩= = ⟨ X i∈[m]-l λiyixi, X i∈[m]-l λiyixi⟩+ 2⟨ X i∈[m]-l λiyixi, vε⟩+ ⟨vε, vε⟩ ≤ X i∈[m]-l λ2 i ∥xi∥2 + X i̸=k∈[m]-l λiλk⟨xi, xk⟩+ 2 X i∈[m]-l λi⟨xi, vε⟩+ ε2 19 From 6 we get that λi ∥xi∥2 ≤(1 + 1.2εd + 2.15ε + 2.2δ), from Lemma B.1 we get that for all i, λi ≤2.4 and by Assumption 2.3 we get that for all i, k ∈[m] ⟨xi, xk⟩≤φ. Using Cauchy-Schwarz inequality we get that for all i ∈[m], ⟨xi, vε⟩≤∥xi∥∥vε∥≤ε√1 + ψ. Plug it all in we have ∥ˆw∥2 ≤ X i∈[m]-l λ2 i ∥xi∥2 + X i̸=k∈[m]-l λiλk⟨xi, xk⟩+ 2 X i∈[m]-l λi⟨xi, vε⟩+ ε2 ≤(1 + 1.2εd + 2.15ε + 2.2δ) X i∈[m]-l λi + 2.4mφ X i∈[m]-l λi + ε p 1 + ψ X i∈[m]-l λi + ε2 ≤ X i∈[m]-l λi (1 + 1.2εd + 2.15ε + 2.2δ) + 2.4mφ + ε p 1 + ψ + ε2 We denote Λ = P i∈[m]-l λi and plug in φ ≤ εd 4m and ψ ≤0.1 and get ∥ˆw∥2 ≤ X i∈[m]-l λi (1 + 1.2εd + 2.15ε + 2.2δ) + 2.4mφ + ε p 1 + ψ + ε2 ≤Λ ((1 + 1.2εd + 2.15ε + 2.2δ) + 0.6εd + 1.1ε) + ε2 ≤Λ (1 + 1.8εd + 3.25ε + 2.2δ) + ε2 For the upper bound of ∥w∗∥2 we do similar calculations, using 7 and Lemma B.1 getting ∥w∗∥2 = X i∈[m]-l λ∗ i yixi 2 = ⟨ X i∈[m]-l λ∗ i yixi, X i∈[m]-l λ∗ i yixi⟩ ≤ X i∈[m]-l (λ∗ i )2 ∥xi∥2 + X i̸=k∈[m]-l λ∗ i λ∗ k⟨xi, xk⟩ ≤(1 + 1.2εd + 2.15ε + 2.2δ) X i∈[m]-l λ∗ i + 2.4mφ X i∈[m]-l λ∗ i W.L.O.G, we assume that P i∈[m]-l λi ≥P i∈[m]-l λ∗ i (the other direction is proven similarly). This allow as to upper bound ∥w∗∥2 using λi, with plugging in φ ≤ εd 4m, we get ∥w∗∥2 ≤(1 + 1.2εd) X i∈[m]-l λ∗ i + 2.4mφ X i∈[m]-l λ∗ i ≤(1 + 1.2εd) X i∈[m]-l λi + 2.4mφ X i∈[m]-l λi ≤Λ (1 + 1.8εd) For the norm multiplication we have ∥ˆw∥∥w∗∥= q ∥ˆw∥2 ∥w∗∥2 = p [Λ (1 + 1.8εd + 3.25ε + 2.2δ) + ε2] [Λ (1 + 1.8εd)] ≤Λ r (1 + C(εd + ε + δ)) + ε2 Λ (1 + Cεd) ≤Λ r 1 + C(εd + ε + δ) + ε2 Λ + ε2 Λ Cεd ≤Λ + Λ r C(εd + ε + δ) + ε2 Λ + ε2 Λ Cεd 20 for some constant C > 0, where the last inequality hold since 1 + √x ≥√1 + x for all x > 0. We next lower bound the inner product of ˆw and w∗ ⟨ˆw, w∗⟩= ⟨ X i∈[m]-l λiyixi + vε, X i∈[m]-l λ∗ i yixi⟩= = ⟨ X i∈[m]-l λiyixi, X i∈[m]-l λ∗ i yixi⟩+ ⟨ X i∈[m]-l λ∗ i yixi, vε⟩ ≥ X i∈[m]-l λ∗ i λi ∥xi∥2 - X i̸=k∈[m]-l λ∗ i λk⟨xi, xk⟩- X i∈[m]-l λ∗ i ⟨xi, vε⟩ Here, we use the lower bound for λ∗ i ∥xi∥2 ≥(1 -0.6εd), the upper bound λ∗ i ≤2.4 from Lemma B.1, and the Cauchy-Schwarz inequality, having ⟨ˆw, w∗⟩≥ X i∈[m]-l λ∗ i λi ∥xi∥2 - X i̸=k∈[m]-l λ∗ i λk⟨xi, xk⟩- X i∈[m]-l λ∗ i ⟨xi, vε⟩ ≥(1 -0.6εd) X i∈[m]-l λi -2.4mφ X i∈[m]-l λi -ε p 1 + ψ X i∈[m]-l λi and by plugging in φ ≤ εd 4m, ψ ≤0.1 we have ⟨ˆw, w∗⟩≥(1 -0.6εd) X i∈[m]-l λi -2.4mφ X i∈[m]-l λi -ε p 1 + ψ X i∈[m]-l λi ≥Λ (1 -0.6εd -0.6εd -1.1ε) ≥Λ -Λ (1.2εd + 1.1ε) Join all the bounds toghter, we get for the cosine similarity ⟨ˆw, w∗⟩ ∥ˆw∥∥w∗∥≥ Λ -Λ (1.2εd + 1.1ε) Λ + Λ q C(εd + ε + δ) + ε2 Λ + ε2 Λ Cεd ≥1 - Λ (1.2εd + 1.1ε) + Λ q C(εd + ε + δ) + ε2 Λ + ε2 Λ Cεd Λ + Λ q C(εd + ε + δ) + ε2 Λ + ε2 Λ Cεd ≥1 - (1.2εd + 1.1ε) + q C(εd + ε + δ) + ε2 Λ + ε2 Λ Cεd 1 + q C(εd + ε + δ) + ε2 Λ + ε2 Λ Cεd ≥1 -(1.2εd + 1.1ε) - r C(εd + ε + δ) + ε2 Λ + ε2 Λ Cεd We note that by Lemma B.2 Λ = X i∈[m]-l λi ≥(m -1) 1 ∥xt∥2 -0.6εd + 1.1ε ∥xt∥2 ! 
≥(m -1)0.9 (1 -0.6εd -1.1ε) ≥0.1(m -1) , 21 Concluding, ⟨ˆw, w∗⟩ ∥ˆw∥∥w∗∥≥1 -(1.2εd + 1.1ε) - s C(εd + ε + δ) + ε2 0.1(m -1) + ε2 0.1(m -1)Cεd ≥1 -C2 √εd + √ε + √ δ for some constant C2 > 0. B.2 Proof for forgetting subset of points using Ak-GA - linear predictors We formalize and prove the statement for unlearning a subset of data points. Here, the term successful unlearning is the natural extension of Definition 2.2 to unlearning a subset, rather than a single point. Theorem B.1. In the same settings as Theorem 3.1, let Sforget ⊆S be a subset of size k. Then, the extended algorithm AK-GA, with appropriate coefficients {βr}, is an (ε, δ, τ)-successful unlearning algorithm w.r.t. w and S, where: 1. The case of ε = ε1 + ε1εd m k -εd , δ = δ1 + δ1εd m k -εd + 7.2εd m , τ = 0: The predictor Ak-GA(w, S, l) has the direction of an (ε, δ)-approximate KKT point for the margin maximization problem (2) w.r.t. S \ (xl, yl). 2. The case of ε = δ = 0, τ = C(√εd + √ε1 + √δ1) for some universal constant C > 0: Let w∗be a max-margin linear predictor w.r.t. the remaining training set S \ (xl, yl), i.e. the global optimum of the 2 w.r.t. S \ (xl, yl). Then, cossim(Ak-GA(w, S, l), w∗) ≥1 -τ. Proof: Let a forget set Sf ⊂S such that |Sf| = k. We denote If = {i : (xi, yi) ∈Sf}. We denote Sr = S \ Sf and Ir = {i : (xi, yi) ∈Sr}. The proof is highly similar to the proof for unlearning single point in B.1. Similarly, we denote vε = w - m P i=1 λiyi∇wN(w, xi), so we get that ∥vε∥≤ε and w = m X i=1 λiyi∇wN(w, xi) + vε = m X i=1 λiyixi + vε . According to the algorithm Ak-GA, we take a step consists of the sum of k gradients w.r.t. data points in Sf with the following sizes- For any (xl, yl) ∈Sf, we sum a gradient of size β = -λl l′(ylN(w,xl)). We get ˆw = m X i=1 λiyi∇wN(w, xi) + vε - X l∈If λlyl∇wN(w, xr) = X i∈Ir λiyixi + vε . Proof of 1. ˆw has the direction of an (ε + εεd m k -εd , δ + δεd m k -εd + 7.2kεd m )-approximate KKT point for the margin maximization problem for S \ (xl, yl). (1) Dual Feasibility: For all i ∈[m]-l, λi ≥0. Same. directly from dual feasibility for w (Definition 2.1). (2) Stationarity: ˆw - m P i=1 λiyi∇wN( ˆw, xi) ≤ε. Same as in B.1. 22 (3) Complementarity Slackness: For all t ∈[m]-l, λt (ytN( ˆw, xt) -1) ≤δ + 1.44kεd m . Using the same Equation 5 we get 1 + δ λt ≥ytN(w, xt) =ytN( ˆw, xt) + yt X l∈If λlyl|⟨xl, xt⟩| ≥ytN( ˆw, xt) - X l∈If λl|⟨xl, xt⟩| ≥ytN( ˆw, xt) -kφ max p λp , plugging in φ ≤ εd 4m and the λp upper bound from Lemma B.1 we get ytN( ˆw, xt) -kφ max p λp ≥ytN( ˆw, xt) -k εd 4m2.4 ≥ytN( ˆw, xt) -0.6kεd m . We deduce an upper bound for the margin of N( ˆw, xt)- ytN( ˆw, xt) ≤1 + δ λt + 0.6kεd m = 1 + δ + 3 5mkλtεd λt ≤1 + δ + 3 5mk2.4εd λt ≤1 + δ + 1.44kεd m λt as desired. (4) Primal Feasibility: For all t ∈[m]-l, ytN( ˆw, xt) ≥1 -0.6kεd m . We use 5 to lower bound the margin of N( ˆw, xt), and use primal feasibility for w (Definition 2.1), getting ytN( ˆw, xt) = ytN(w, xt) -yt X l∈If λlyl|⟨xl, xt⟩| ≥1 -kφ max p λp . Plugging in φ ≤ εd 4m and the λp upper bound from Lemma B.1 we get that kφ max p λp ≤2.4kεd 4m ≤0.6kεd m . Hence, ytN( ˆw, xt) ≥1 -0.6kεd m . We showed that ˆw is an (ε, δ + 1.44kεd m , 0.6kεd m )-approximate KKT by Definition B.1 . Finally, we look at the scaled weights 1 10.6kεd m ˆw. 
For εd ≤1 We calculate 1 1 -0.6kεd m ε ≤ m k m k -εd ε = 1 + εd m k -εd ε = ε + εεd m k -εd , and max p λp 0.6kεd m 1 -0.6kεd m + δ + 1.44kεd m 1 -0.6kεd m ≤δ + δεd m k -εd + 7.2kεd m and get from Lemma B.3 that 1 10.6kεd m ˆw is a (ε + εεd m k -εd , δ + δεd m k -εd + 7.2kεd m )-approximate KKT by Definition 2.1 w.r.t. S \ (xl, yl). We note that ˆw and 1 1-0.6k εd m ˆw have the same direction, which finishes the proof. 23 Proof of 2. Cosine -Similarity( ˆw, w∗) ≥1 -C(√εd + √εd + √ δ) for some C > 0. Let N(w∗, x) be a max-margin linear predictor w.r.t. the remaining training set S \ Sf. Hence, w∗is a KKT point of the margin maximization problem (2) w.r.t. {xi, yi}i∈If , as in Definition 2.1 (with ε = δ = 0). From the stationarity condition we denote w∗= P i∈If λ∗ i yixi. We have same bounds for λi and λ∗ i , since it is independent of the unlearning. The rest of the proof remains the same but the substitution of P i∈[m]-l λi in P i∈Ir λi, and the lower bound for it - by Lemma B.2 Λ = X i∈Ir λi ≥(m -k) 1 ∥xt∥2 -0.6εd + 1.1ε ∥xt∥2 ! ≥(m -k)0.9 (1 -0.6εd -1.1ε) ≥0.1(m -k) , That have no significant effect on the final bound ⟨ˆw, w∗⟩ ∥ˆw∥∥w∗∥≥1 -(1.2εd + 1.1ε) - s C(εd + ε + δ) + ε2 0.1(m -k) + ε2 0.1(m -k)C(εd + ε + δ) ≥1 -C2 √εd + √ε + √ δ for some constant C2 > 0. B.3 The Identity is an Unsuccessful Unlearning Algorithm To complement Theorem 3.1, we provide the following remark, that shows that keeping the original predictor is not a successful unlearning algorithm. Particularly, for any ε′, δ′ > 0, we show that for the predictor as defined in Theorem 3.1, its cosine similarity to any (ε′, δ′)-approximate KKT point for S \ {(xl, yl)} is relatively large. Remark B.1. In the same settings as 3.1, the algorithm AI(θ, S, r) = θ, is (ε, δ, τ)-successful only for τ ≥C m - C(εd + ε) for some C > 0. As a short intuition for the proof, we note that the original network weight parameter, denoted as w = m X i=1 λiyi∇wN(w, xi) + vε = m X i=1 λiyixi + vε1 , consists of a sum of m summons, while any other KKT point w.r.t. S \ {(xl, yl)}, ew, consists of a sum of the (m -1) gradients of the remaining dataset. This gap creates an inevitable angle between the two vectors. Proof: In this section, we show that the original network w is not a good candidate for the unlearning tasks according to the (ε, δ, τ)-successful definition (Definition 2.2). Formally, we look at the simple unlearning algorithm AI(w, S, r) = w. We show that for any (ε′, δ′)-approximate KKT point ew, where ε′, δ′ 0 such that cossim(w, ew) ≤1 -C m + C(εd + ε + eε) , leading to τ ≥C m -C(εd + ε + eε) . We recall that due to the stationary condition for the original network w w.r.t. the full dataset S we have w = X i∈[m] λiyi∇wN(w, xi) + vε = m X i=1 λiyixi + vε . 24 We denote an (eε, eδ)-approximate KKT point of the margin maximization problem w.r.t. the retain dataset S \(xl, yl) by ew. From the stationarity condition we get that ew = X i∈[m]-l eλiyixi + veε . Next, we show that the cosine similarity between w and ew is lower bounded by C m + C(εd + ε + eε). We denote w = w -vε and ew = w -veε. For the cosine similarity between w and ew we have cossim(w, ew) = ⟨w, ew⟩ ∥w∥∥ew∥= ⟨w + vε, ew + veε⟩ ∥w∥∥ew∥ We first use Cauchy-Schwarz inequality and separate it into two expressions cossim(w, ew) = ⟨w + vε, ew + veε⟩ ∥w∥∥ew∥ ≤ ⟨w, ew⟩ ∥w∥∥ew∥+ |⟨vε, ew⟩| + |⟨veε, w⟩| + |⟨vε, veε⟩| ∥w∥∥ew∥ ≤ ⟨w, ew⟩ ∥w∥∥ew∥+ ∥vε∥∥ew∥+ ∥veε∥∥w∥+ ∥vε∥∥veε∥ ∥w∥∥ew∥ (8) We next lower bound the norm of the parameter vectors. 
We note that ∥w∥= ∥w + vε∥≥∥w∥-ε and ∥w∥2 = X i∈[m] λiyixi 2 = ⟨ X i∈[m] λiyixi, X i∈[m] λiyixi⟩= ≥ X i∈[m] λ2 i ∥xi∥2 - X i̸=k∈[m] λiλk⟨xi, xk⟩ ≥ X i∈[m] λ2 i ∥xi∥2 -φ X i̸=k∈[m] λiλk . Similarly ∥ew∥≥∥ew∥-eε and ∥ew∥2 = X i∈[m]-l eλiyixi 2 = ⟨ X i∈[m]-l eλiyixi, X i∈[m]-l eλiyixi⟩ ≥ X i∈[m]-l eλi 2 ∥xi∥2 -φ X i̸=k∈[m]-l eλif λk . We now upper bound the inner product ⟨w, ew⟩, having 25 ⟨w, ew⟩= ⟨ X i∈[m] λiyixi, X i∈[m]-l eλiyixi⟩= = ⟨ X i∈[m]-l λiyixi, X i∈[m]-l eλiyixi⟩+ ⟨ X i∈[m]-l eλiyixi, λlylxl⟩ ≤|⟨ X i∈[m]-l λiyixi, X i∈[m]-l eλiyixi⟩| + |⟨ X i∈[m]-l eλiyixi, λlylxl⟩| ≤ X i∈[m]-l eλiλi ∥xi∥2 + X i̸=k∈[m]-l eλiλk⟨xi, xk⟩+ X i∈[m]-l eλiλl⟨xi, xl⟩ ≤ X i∈[m]-l eλiλi ∥xi∥2 + φ X i̸=k∈[m]-l eλiλk + φ X i∈[m]-l eλiλl Plug it all in, we get for the first summon at 8 ⟨w, ew⟩ ∥w∥∥ew∥≤ P i∈[m]-l eλiλi ∥xi∥2 + φ P i̸=k∈[m]-l eλiλk + φ P i∈[m]-l eλiλl qP i∈[m] λ2 i ∥xi∥2 -φ P i̸=k∈[m] λiλk -ε qP i∈[m]-l eλi 2 ∥xi∥2 -φ P i̸=k∈[m]-l eλif λk -eε . We first note that by Cauchy-Schwarz X i∈[m]-l eλiλi ∥xi∥2 ≤ s X i∈[m]-l eλi 2 ∥xi∥2 s X i∈[m]-l λ2 i ∥xi∥2 , and X i∈[m]-l eλiλi ≤ s X i∈[m]-l eλi 2s X i∈[m]-l λ2 i . We now reduce the nominator and denominator by qP i∈[m]-l eλi 2 ∥xi∥2qP i∈[m]-l λ2 i ∥xi∥2. We denote b = (1 + 1.2εd + 2.15ε + 2.2δ), a = (1 -0.6εd -1.1ε), and use Lemma B.2 in which for all i, a 0} and J-= {j ∈[n] : uj 0 , hence solving for α we get α ≤1 + 2ε√1 + ψ + p (1 + 2ε√1 + ψ)2 + 4δ ((1 -ψ) -2φ(m -1)) 2 ((1 -ψ) -2φ(m -1)) . Case 2: yk = -1 is very similar. First we have -1 -δ α = N(θ, xk) ≤ X j∈J+ ujσ(w⊤ j xk) + X j∈Jujw⊤ j xk , for the first summand we get X j∈J+ ujσ w⊤ j xk = X j∈J+ ujσ  -ujλkσ′ k,j ∥xk∥2 + uj m X i=1,i̸=k λiyiσ′ i,j⟨xi, xk⟩+ ⟨vε,j, xk⟩   ≤ X j∈J+ ujσ  uj m X i=1,i̸=k λiσ′ i,j|⟨xi, xk⟩| + |⟨vε,j, xk⟩|   ≤ X j∈J+ ujσ  uj m X i=1,i̸=k λiσ′ i,jφ + |⟨vε,j, xk⟩|   ≤φ m X i=1,i̸=k X j∈J+ u2 jλiσ′ i,j + X j∈J+ uj|⟨vε,j, xk⟩| ≤φ(m -1)α + ε p 1 + ψ and for the second X j∈Jujw⊤ j xk = X j∈Juj  -ujλkσ′ k,j ∥xk∥2 + uj m X i=1,i̸=k λiyiσ′ i,j⟨xi, xk⟩+ ⟨vε,j, xk⟩   ≤-(1 -ψ) X j∈Ju2 jλkσ′ k,j + φ X j∈Jm X i=1,i̸=k u2 jλiσ′ i,j + X j∈J+ uj|⟨vε,j, xk⟩| ≤-(1 -ψ)α + φ(m -1)α + ε p 1 + ψ combining the two results leads to the same upper bound α ≤1 + 2ε√1 + ψ + p (1 + 2ε√1 + ψ)2 + 4δ ((1 -ψ) -2φ(m -1)) 2 ((1 -ψ) -2φ(m -1)) 32 We plug in ψ ≤0.1 and φ ≤ εd 4mn, εd ≤1, and get α ≤1 + 2ε√1 + ψ + p (1 + 2ε√1 + ψ)2 + 4δ ((1 -ψ) -2φ(m -1)) 2 ((1 -ψ) -2φ(m -1)) ≤1 + 2.1ε + p (1 + 2.1ε)2 + 4δ(0.9) 2(0.9 -2 εd 4mn(m -1)) ≤1 + 2.1ε + (1 + 2.1ε) + 1.9δ 2(0.9 -2 εd 4 ) ≤2 + 4.2ε + 1.9δ 0.8 ≤2.5 + 5.25ε + 2.4δ ≤10.2 meaning for all i ∈[m] we have max    X j∈J+ u2 jλiσ′ i,j, X j∈Ju2 jλiσ′ i,j   ≤2.5 + 5.25ε + 2.4δ so X j∈[n] u2 jλiσ′ i,j ≤5 + 10.5ε + 4.8δ ≤20.4 using the fact that for all j ∈[n] and k ∈[m], |uj| = 1 √n and σ′ k,j ≤1 we also get that λi ≤5 + 10.5ε + 4.8δ P j∈[n] u2 jσ′ i,j ≤5 + 10.5ε + 4.8δ 1 n ≤n (5 + 10.5ε + 4.8δ) ≤20.4n Lemma C.4. Let N(θ, x) = nP j=1 ujσ(w⊤ j x) be a two-layer fully connected neural network, trained on S = {(x1, y1), ..., (xm, ym)}, and let 0 0} and J-= {j ∈[n] : uj 0 and σ′ l,j = 1, so to show the difference's sign it's enough to show the sign of (uj⟨xl, xr⟩-|uj|∆j,r). Case 1: w⊤ j xr ≥0. We show that (uj⟨xl, xr⟩-|uj|∆j,r) ≤0 By Lemma C.1 |uj|∆j,r ≥|uj| (c(1 -ψ) -(m -2)cφ) And using Assumption 2.3 we get that |⟨xl, xr⟩| ≤φ we have uj⟨xl, xr⟩-|uj|∆j,r ≤|uj|φ -|uj| (c(1 -ψ) -(m -2)cφ) ≤|uj| (φ -(c(1 -ψ) -(m -2)cφ)) . 
We left to show that (φ -c(1 -ψ) + (m -2)cφ) ≤0 and indeed plugging in ψ = 0.1, φ ≤ εd 4mn, c = εd 2mn we have φ -c(1 -ψ) + (m -2)cφ ≤ εd 4mn - εd 2mn(0.9) + (m -2) εd 2mn εd 4mn ≤0.25εd -0.45εd + 0.125ε2 d mn 0 and uj > 0} and J-= {j ∈[n] : w⊤ j xr > 0 and uj 0. Proof: In this section we show that the original network θ is not a good candidate for the unlearning tasks according to the (ε, δ, τ)-successful definition (Definition 2.2). Formally, we look at the simple unlearning algorithm AI(θ, S, r) = θ. We show that θ will have a small cosine-similarity with any KKT point w.r.t. the retain set S \ (xl, yl). Namely, that AI is (ε′, δ′, τ ′) successful for τ ′ that is at least O( 1 mn) -O( εd n ). Next, we show for τ > 0. Let eθ be an (eε, eδ)-approximate KKT point w.r.t. S \ (xl, yl). We show that τ ≥ O( 1 mn) -O( εd n ). From stationarity for θ w.r.t. S, and for eθ w.r.t. S \ (xl, yl) we get that θ = X i∈[m] λiyi∇θN(θ, xi) + vε , and eθ = X i∈[m]-l eλiyi∇θN(eθ, xi) + veε . We denote αi = P j∈[n] ujλiσ′ i,j and eαi = P j∈[n] uj eλieσ′ i,j, and θ = θ -vε, eθ = eθ -veε By Cauchy-Schwarz inequality we have ⟨θ, eθ⟩= ⟨θ + vε, eθ + veε⟩= ≤⟨θ, eθ⟩+ |⟨vε, eθ⟩| + |⟨veε, θ⟩| ≤⟨θ, eθ⟩+ ε eθ + eε ∥θ∥. 47 For the inner product between the sums, we have ⟨θ, eθ⟩= ⟨ X i∈[m]-l λiyi∇θN(θ, xi), X i∈[m] eλiyi∇θN(eθ, xi)⟩= = X j∈[n] ⟨ X i∈[m] ujλiyiσ′ i,jxi, X i∈[m]-l ujeλiyieσ′ i,jxi⟩ = ⟨ X i∈[m] X j∈[n] ujλiyiσ′ i,jxi, X i∈[m]-l X j∈[n] ujeλiyieσ′ i,jxi⟩ = ⟨ X i∈[m] αiyixi, X i∈[m]-l eαiyixi⟩ ≤|⟨ X i∈[m] αiyixi, X i∈[m]-l eαiyixi⟩| ≤ X i∈[m]-l αieαi ∥xi∥2 + X i̸=k∈[m]-l αieαk⟨xi, xk⟩+ X i∈[m]-l αleαi⟨xl, xi⟩ ≤ X i∈[m]-l αieαi ∥xi∥2 + φ X i̸=k∈[m]-l αieαk + φ X i∈[m]-l αleαi For lower bounds of the norms we perform similar calculations. We note that eθ ≥ eθ -ε, and eθ 2 = X j∈[n] ∥ewj∥2 = X j∈[n] X i∈[m]-l ujeλiyieσ′ i,jxi 2 = X j∈[n] ⟨ X i∈[m]-l ujeλiyieσ′ i,jxi, X i∈[m]-l ujeλiyieσ′ i,jxi⟩ = ⟨ X i∈[m]-l X j∈[n] ujeλiyieσ′ i,jxi, X i∈[m]-l X j∈[n] ujeλiyieσ′ i,jxi⟩ = ⟨ X i∈[m]-l eαiyixi, X i∈[m]-l eαiyixi⟩ ≤|⟨ X i∈[m]-l eαiyixi, X i∈[m]-l eαiyixi⟩| ≥ X i∈[m]-l eα2 i ∥xi∥2 - X i̸=k∈[m]-l |eαieαk|⟨xi, xk⟩ ≥ X i∈[m]-l eα2 i ∥xi∥2 -φ X i̸=k∈[m]-l |eαieαk| 48 and similarly ∥θ∥2 = X j∈[n] ∥wj∥2 = X j∈[n] X i∈[m] ujλiyiσ′ i,jxi 2 ≥ X i∈[m] α2 i ∥xi∥2 - X i̸=k∈[m] |αiαk|⟨xi, xk⟩ ≥α2 l ∥xl∥2 + X i∈[m]-l α2 i ∥xi∥2 -φ X i̸=k∈[m]-l |αiαk| Plug it all in the cosine similarity definition we get cossim(θ, eθ) = ⟨θ, eθ⟩ ∥θ∥ eθ ≤ ⟨θ, eθ⟩ ∥θ∥ eθ + ε eθ + eε ∥θ∥ ∥θ∥ eθ + εeε ∥θ∥ eθ bounding the second fraction we have ε eθ + eε ∥θ∥ ∥θ∥ eθ ≤ ε ∥θ∥+ eε eθ and note that using Lemma C.4 and Lemma C.3, if we denote l = 0.9 -4.64 εd n -1.92ε for all i ∈[m] -l√n ≤αi, eαi ≤20.4√n ∥θ∥2 ≥ X i∈[m] α2 i ∥xi∥2 -φ X i̸=k∈[m]-l αiαk ≥  X i∈[m] |αi|   0.9l√n -φ20.4m√n ≥ml√n(0.9l√n -5.1εd √n ) ≥mln(0.9l -5.1εd n ) ≥C mn and similarly ∥θ∥2 > C mn then ε ∥θ∥+ eε eθ ≤C(ε + eε) √mn 49 bounding the first fraction we have ⟨θ, eθ⟩ ∥θ∥ eθ ≤ P i∈[m]-l αieαi ∥xi∥2 + φ P i̸=k∈[m]-l αieαk + φ P i∈[m]-l αleαi r P i∈[m]-l eα2 i ∥xi∥2 -φ P i̸=k∈[m]-l eαieαk -ε r α2 l ∥xl∥2 + P i∈[m]-l α2 i ∥xi∥2 -φ P i̸=k∈[m]-l αiαk -eε We lower bound the norm of the parameter ∥θ∥2 = X j∈[n] ∥wj∥2 ≥ X i∈[m] α2 i ∥xi∥2 -φ X i̸=k∈[m] |αiαk| ≥( X i∈[m] αi)[a -φmb] ≥m0.9a[a -0.6εd n ] As a -0.6εd n > C for some C > 0, we note we get a similar equation as in the linear case (B.3), and skip to the result, having cossim(θ, eθ) ≤1 -C m + C(εd + ε + eε) . 
D Appendix for Section 6

D.1 Proofs of the setting's properties

We first show that a dataset S = {(x_i, y_i)}_{i=1}^m ∼ D_MG^m satisfies the conditions discussed in our paper:

1. For all x_i ∈ S, ∥x_i∥² ∈ [1−ψ, 1+ψ] for ψ = 0.1.
2. For all (x_i, y_i), (x_j, y_j) ∈ S s.t. i ≠ j, |⟨x_i, x_j⟩| ≤ φ.

For a sample (x_i, y_i) ∼ D, we first show that the norm of x_i is a bounded constant. Denote x_i = μ_i + ζ_i, where ∥μ_i∥ = d^{−1/4+α} for α ∈ (0, 1/4), and ζ_i ∼ N(0, (1/d)I_d). We show tighter bounds for ∥ζ_i∥².

Lemma D.1. Let i ∈ [m]. Then, w.p. ≥ 1 − 2e^{−d/1700}, ∥ζ_i∥² ∈ [0.95, 1.05].

Proof: For the lower bound, similar to Lemma A.1, we have for w ∼ N(0, σ²I_n)

Pr[ n − ∥w/σ∥² ≥ 2√(nt) ] ≤ e^{−t} .

We let t = n/1600, σ² = 1/d and n = d, and get Pr[∥w∥² ≤ 95/100] ≤ e^{−d/1600}, as desired. For the upper bound, similar to Lemma A.2, we have for w ∼ N(0, σ²I_n)

Pr[ ∥w/σ∥² − n ≥ 2√(nt) + 2t ] ≤ e^{−t} .

We let t = n/1700, σ² = 1/d and n = d, and get Pr[∥w∥² ≥ 1.05] ≤ e^{−d/1700}.

Lemma D.2. W.p. ≥ 1 − 2e^{−d/1700}, for sufficiently large d, ∥x_i∥² ∈ [0.9, 1.1].

Proof: We denote x_i = μ_i + ζ_i, such that ζ_i ∼ N(0, (1/d)I_d). From Lemma D.1 we get that w.p. ≥ 1 − 2e^{−d/1700}, ∥ζ_i∥² ∈ [0.95, 1.05]. As for ∥μ_i∥, we note that ∥μ_i∥² = d^{2(−1/4+α)} = d^{−1/2+2α}; therefore it is enough to take d such that

d^{2α−1/2} ≤ 0.01 ⟺ d^{1/2−α} ≥ 100 ⟺ log(d) ≥ log(100)/(1/2 − α) .

Then, for such d we have ∥x_i∥² = ∥μ_i + ζ_i∥² = ∥μ_i∥² + ∥ζ_i∥² + 2⟨μ_i, ζ_i⟩, so

∥μ_i∥² + ∥ζ_i∥² − 2|⟨μ_i, ζ_i⟩| ≤ ∥x_i∥² ≤ ∥μ_i∥² + ∥ζ_i∥² + 2|⟨μ_i, ζ_i⟩| , with 2|⟨μ_i, ζ_i⟩| ≤ 2∥μ_i∥∥ζ_i∥ ≤ 2 · 0.01 · 1.05 = 0.021 ,

and therefore 0.9 ≤ ∥x_i∥² ≤ 1.1. [...]

Lemma D.3. Let i ≠ j ∈ [m]. Then, w.p. ≥ 1 − (e^{−d/500} + 6d^{−log(d)/2}),

|⟨x_i, x_j⟩ − ⟨μ_i, μ_j⟩| ≤ 2∥μ_i∥ log(d)/√d + 1.1 log(d)/√d . [...]

Lemma D.4. For sufficiently large d, ∥μ+∥² > 2∥μ+∥ log(d)/√d + 1.1 log(d)/√d.

Proof:

∥μ+∥² − 2∥μ+∥ log(d)/√d − 1.1 log(d)/√d = 1/d^{1/2−2α} − 2 log(d)/d^{3/4−α} − 1.1 log(d)/√d = d^{−1/2}(d^{2α} − 2 log(d) d^{α−1/4} − 1.1 log(d)) ;

it is enough to find d such that

d^{2α} ≥ 2 log(d) d^{α−1/4} + 1.1 log(d) ⟺ 2α ≥ log(2 log(d) d^{α−1/4} + 1.1 log(d)) / log(d) ,

which is possible since the r.h.s. goes to 0 as d goes to infinity.

Lemma D.5. Let a dataset S = {(x_i, y_i)}_{i=1}^m be such that ∀i, x_i ∈ R^d and (x_i, y_i) ∼ D_MG, for m ≤ d and for sufficiently large d. Then, w.p. ≥ 1 − (2m e^{−d/1700} + m² e^{−d/500} + 2m² d^{−log(d)/2}):

1. For all (x, y) ∈ S, ∥x∥² ∈ [0.9, 1.1].
2. For all (x_i, y_i), (x_j, y_j) ∈ S, |⟨x_i, x_j⟩| ≤ φ for φ ≤ ε_d/(4mn).

Proof: 1. First,

Pr[∀(x, y) ∈ S, ∥x∥² ∈ [0.9, 1.1]] = Pr[max_{(x,y)∈S} ∥x∥² ∈ [0.9, 1.1]] ,

and the claim follows w.p. ≥ 1 − 2m e^{−d/1700}, directly from a union bound, given Lemma D.2.

2. First,

Pr[∀(x_i, y_i), (x_j, y_j) ∈ S, |⟨x_i, x_j⟩| ≤ ε_d/(4mn)] = Pr[max_{(x_i,y_i),(x_j,y_j)∈S} |⟨x_i, x_j⟩| ≤ ε_d/(4mn)] .

From Lemma D.3 we get that w.p. ≥ 1 − (e^{−d/500} + 6d^{−log(d)/2}),

⟨x_i, x_j⟩ − ⟨μ_i, μ_j⟩ ∈ [−2∥μ_i∥ log(d)/√d − 1.1 log(d)/√d , 2∥μ_i∥ log(d)/√d + 1.1 log(d)/√d] .

Therefore, the maximal value of |⟨x_i, x_j⟩| is attained by taking i ≠ j such that y_i = y_j, resulting in

|⟨x_i, x_j⟩| ≤ ∥μ_i∥² + 2∥μ_i∥ log(d)/√d + 1.1 log(d)/√d .

From Lemma D.4, one can see it is enough to choose d such that 2∥μ+∥² = 2/d^{1/2−2α} ≤ ε_d/(4mn), which is possible since ε_d/(4mn) is a given constant and lim_{d→∞} 1/d^{1/2−2α} = 0. Then, from a union bound, the claim follows.

For the next lemma, we add a few notations for readability:

1. φ⁺_max = max_{i,j}{⟨x_i, x_j⟩ : y_i = y_j}, φ⁺_min = min_{i,j}{⟨x_i, x_j⟩ : y_i = y_j};
2. φ⁻_max = max_{i,j}{⟨x_i, x_j⟩ : y_i ≠ y_j}, φ⁻_min = min_{i,j}{⟨x_i, x_j⟩ : y_i ≠ y_j}.

Lemma D.6. Let a dataset S = {(x_i, y_i)}_{i=1}^m be such that ∀i, x_i ∈ R^d and (x_i, y_i) ∼ D_MG. Then, for m ≤ d and for sufficiently large d, w.p. ≥ 1 − (m e^{−d/500} + 6m d^{−log(d)/2}), for all (x_i, y_i), (x_j, y_j) ∈ S: 0 < φ⁺_min ≤ φ⁺_max, with φ⁻_max = −φ⁺_min and φ⁻_min = −φ⁺_max. [...]

Lemma D.7. Let (x_t, y_t) ∼ D_MG be a test sample. Then, for all j ∈ [n], sign(w_j^⊤ x_t) = sign(ŵ_j^⊤ x_t) = y_t sign(u_j).

Proof: Case 1: y_t = 1. We note that for all i ∈ [m], y_i⟨x_i, x_t⟩ ≥ φ⁺_min > 0: if y_i = 1, then y_i⟨x_i, x_t⟩ = ⟨x_i, x_t⟩ ≥ φ⁺_min; else y_i = −1 and ⟨x_i, x_t⟩ ≤ φ⁻_max, so −⟨x_i, x_t⟩ ≥ −φ⁻_max = φ⁺_min, from Lemma D.6. Therefore, for all j ∈ [n],

sign(ŵ_j^⊤ x_t) = sign(w_j^⊤ x_t) = sign(u_j) = y_t sign(u_j) .
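The concentration statements of Lemmas D.1–D.5 can be checked empirically. The sketch below samples from the mixture distribution and prints the two dataset properties; the constants (α = 0.01, d = 4000, m = 20) are illustrative stand-ins, since the lemmas only hold for sufficiently large d:

import numpy as np

# Empirical check of the two properties above for x = y * mu_plus + zeta,
# zeta ~ N(0, I_d / d). All constants here are assumed for illustration.
rng = np.random.default_rng(1)
d, m, alpha = 4000, 20, 0.01
mu_plus = np.zeros(d)
mu_plus[0] = d ** (-0.25 + alpha)        # ||mu+|| = d^(-1/4 + alpha)
y = rng.choice([-1.0, 1.0], size=m)
X = y[:, None] * mu_plus + rng.normal(0.0, 1.0 / np.sqrt(d), size=(m, d))

norms2 = np.sum(X ** 2, axis=1)
G = X @ X.T
max_overlap = np.abs(G - np.diag(np.diag(G))).max()
print("squared norms range:", norms2.min(), norms2.max())   # expect ~[0.9, 1.1]
print("max |<x_i, x_j>|:", max_overlap)                     # expect o(1) in d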
Case 2: y_t = −1. We note that for all i ∈ [m], y_i⟨x_i, x_t⟩ ≤ φ⁻_max < 0. [...]

We denote x_t = μ_t + ζ_t, for ζ_t ∼ N(0, (1/d)I_d). We denote I+ = {i ∈ [m] : y_i = 1}, I− = {i ∈ [m] : y_i = −1}. We also denote φ⁺_max = max_{i,j∈[m]}{⟨x_i, x_j⟩ : y_i = y_j}, φ⁺_min = min_{i,j∈[m]}{⟨x_i, x_j⟩ : y_i = y_j}, and φ⁻_max = max_{i,j∈[m]}{⟨x_i, x_j⟩ : y_i ≠ y_j}, φ⁻_min = min_{i,j∈[m]}{⟨x_i, x_j⟩ : y_i ≠ y_j}. Next, from Lemma D.6 we get that φ⁻_max = −φ⁺_min and φ⁻_min = −φ⁺_max. Since θ is a KKT point, from Definition 2.1 we get that

w_j = u_j Σ_{i=1}^m λ_i y_i σ'_{i,j} x_i , and w_j^⊤ x_t = u_j Σ_{i=1}^m λ_i y_i σ'_{i,j} ⟨x_i, x_t⟩ ,

where σ'_{i,j} = 1_{w_j^⊤ x_i ≥ 0}.

Case 1: y_t = 1. We show that N(θ, x_t) > 0. From Lemma D.7, for all j ∈ [n], sign(w_j^⊤ x_t) = sign(u_j). Hence,

N(θ, x_t) = Σ_{j=1, u_j < 0}^n [...] > 0: if y_i = 1, then y_i⟨x_i, x_t⟩ = ⟨x_i, x_t⟩ ≥ φ⁺_min; else y_i = −1 and ⟨x_i, x_t⟩ ≤ φ⁻_max, so −⟨x_i, x_t⟩ ≥ −φ⁻_max = φ⁺_min, from Lemma D.6. Next, since S satisfies Assumption 2.3, and θ satisfies Definition 2.1 for ε = δ = 0, we get from Lemma C.4 that for all i ∈ [m], Σ_{j=1, u_j ≥ 0}^n u_j² λ_i σ'_{i,j} > 0.

Case 2: y_t = −1. Similarly, we show that N(θ, x_t) < 0 [...] > 0, and the claim follows.

For showing that y_t N(θ̂, x_t) = y_t Σ_{j=1}^n u_j σ(ŵ_j^⊤ x_t) > 0, the proof is almost identical. At the end of each case we look at

Σ_{i∈[m]∖{l}} y_i⟨x_i, x_t⟩ Σ_{j=1, u_j ≥ 0}^n u_j² λ_i σ'_{i,j} ,

and all the same arguments hold, concluding generalization for θ̂ as well, which finishes the proof.

We note that the same arguments can be used to show generalization for the case of unlearning a forget set S_forget ⊆ S of any size k < m using the extended algorithm A_k-GA, discussed in Section 5. In this case, we instead look at

Σ_{i∈S} y_i⟨x_i, x_t⟩ Σ_{j=1, u_j ≥ 0}^n u_j² λ_i σ'_{i,j} ,

yet the same arguments hold, concluding generalization.

E Experiment details

We take a high-dimensional data set, where m = 10, d = 1000, and the data distribution is N(0, (1/d)I_d). As mentioned in Example 2.4, the data satisfies Assumption 2.3 for small values of φ and ψ. We experiment with fully-connected ReLU networks, trained using the SGD optimizer with a binary cross-entropy loss that is normalized to have a margin of size 1. In this experiment, for each data point x_i ∈ S, we calculate λ_i, and unlearn it using the gradient ascent algorithm A_GA with step size αλ_i for α ∈ [0, 1.5], resulting in θ̃_i(α). For each θ̃_i(α) we calculate the corresponding ε, δ for its KKT conditions with respect to S ∖ (x_i, y_i). In Figure 1, we sample one point from S, perform the unlearning algorithm for all 10 networks, and average the results. We test a two-layer fully-connected ReLU network θ as in Eq. 1, with n = 400. We initialize the network with a small initialization for the first layer, dividing its standard deviation by a factor of 10⁵. We train with full batch size for 10⁵ epochs, using the SGD optimizer with a 10⁻⁵ weight decay factor.
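For concreteness, a minimal PyTorch sketch of the gradient-ascent unlearning step described above follows. The margin loss, shapes, and hyper-parameter values are stand-ins (the experiment uses a normalized binary cross-entropy); only the update direction, one ascent step of size αλ_l on the loss of the forgotten point, reflects the text:

import torch

def unlearn_ga(model, x_l, y_l, lam_l, alpha):
    # One gradient-ASCENT step on the loss of the forgotten point (x_l, y_l).
    loss = torch.nn.functional.soft_margin_loss(model(x_l), y_l)
    grads = torch.autograd.grad(loss, list(model.parameters()))
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p.add_(alpha * lam_l * g)        # move *up* the forget loss
    return model

n, d = 400, 1000
model = torch.nn.Sequential(
    torch.nn.Linear(d, n, bias=False), torch.nn.ReLU(),
    torch.nn.Linear(n, 1, bias=False))
x_l = torch.randn(1, d) / d ** 0.5           # hypothetical forgotten sample
y_l = torch.tensor([[1.0]])
unlearn_ga(model, x_l, y_l, lam_l=0.3, alpha=1.0)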
2510.14843
Rate-Adaptive Spatially Coupled MacKay-Neal Codes with Thresholds Close to Capacity

Ayman Zahr, Student Member, IEEE, and Gianluigi Liva, Senior Member, IEEE

Abstract—We analyze by density evolution the asymptotic performance of rate-adaptive MacKay-Neal (MN) code ensembles, where the inner code is a protograph spatially coupled (SC) low-density parity-check code. By resorting to a suitably-defined parallel channel model, we compute belief propagation decoding thresholds, showing that SC MN code ensembles can perform within 0.15 dB from the binary-input additive white Gaussian noise capacity over the full [0, 1] rate range.

Index Terms—LDPC codes, spatial coupling, rate adaptivity, distribution matcher, density evolution.

A. Zahr is with the Institute for Communications Engineering, Technical University of Munich, Munich, Germany, and with the Institute of Communications and Navigation, German Aerospace Center (DLR), Wessling, Germany (email: ayman.zahr@dlr.de). G. Liva is with the Institute of Communications and Navigation, German Aerospace Center (DLR), Wessling, Germany (email: gianluigi.liva@dlr.de). The authors acknowledge the financial support by the Federal Ministry of Research, Technology and Space of Germany in the programme of "Souverän. Digital. Vernetzt." Joint project 6G-RIC, project identification number: 16KISK022.

I. INTRODUCTION

Modern high-throughput communication systems demand flexible, rate-adaptive error-correction solutions that avoid costly hardware reconfiguration. To address this need, a class of protograph-based MacKay-Neal (MN) codes was introduced and analyzed in [1]. The code structure closely follows the one introduced in [2]. In particular, an outer nonlinear encoder is concatenated with an inner nonsystematic low-density parity-check (LDPC) [3] code encoder. The outer nonlinear encoder, named distribution matcher (DM) [4], maps the input message onto a fixed-length sequence with a prescribed empirical distribution via a binary constant composition (CC) code. The sequence is then input to the nonsystematic LDPC encoder, whose output is transmitted over the channel. At the decoder side, belief propagation (BP) decoding is performed over the bipartite graph of the inner LDPC code, where the variable nodes (VNs) associated with the encoder input are fed with prior information derived from the marginal distribution of the outer CC code. By changing the distribution defined by the DM, a large range of code rates can be obtained without modifying (i.e., puncturing or shortening) the inner code. The construction of [1] relies on inner protograph-based LDPC codes. The optimization of the inner code protograph allows operation, at large blocklength, within 1 dB from the binary-input additive white Gaussian noise (biAWGN) channel capacity. To achieve a robust performance across the various rates, the design of [1] uses density evolution (DE) [5] analysis, seeking protographs for which the maximum gap between the BP thresholds and the Shannon limit is minimized over a set of rates.

In this paper, we explore the performance of protograph-based MN code ensembles where the inner code is a protograph spatially coupled (SC) LDPC code [6]–[9]. This choice is motivated by the capacity-approaching properties of SC LDPC code ensembles [7], [8], and by the universality of their performance w.r.t. a large class of channels [10]—a property that may be instrumental to close the gap to the Shannon limit for the resulting SC MN construction, at all rates.
A different class of SC MN codes was analyzed via DE in [11]. A key difference between the class of codes studied in [11] and the one addressed in this paper stems from the absence, in [11], of the concatenation with an outer nonlinear code, i.e., the analysis of [11] considers linear SC MN codes that constitute a subclass of multi-edge type SC LDPC codes.

We analyze SC MN codes obtained by concatenating an outer nonlinear (CC) code with an inner regular protograph SC LDPC code. The analysis is carried out in the asymptotic regime of large lifting factors and long SC chains, by means of DE analysis. The analysis relies on the equivalent parallel channel (EPC) model introduced in [1] to circumvent the challenges posed by the nonlinear code structure. We show that SC MN codes can operate within 0.15 dB from the Shannon limit for the biAWGN channel over the full rate range [0, 1], with a single inner SC LDPC code, and with rate adaptation achieved by tuning the outer DM parameter.

II. PRELIMINARIES

In this paper, random variables (r.v.s) are denoted by uppercase letters, and their realizations by lowercase letters. The probability density function (p.d.f.) of a r.v. X is denoted by p(x). The binary entropy function Hb(ω) is Hb(ω) = −ω log₂ ω − (1−ω) log₂(1−ω) for 0 < ω < 1, and Hb(0) = Hb(1) = 0. Vectors are treated as row vectors and denoted by bold letters, e.g., x, while matrices are denoted by uppercase bold letters, e.g., X. The Hamming weight of x is denoted by wH(x). We consider transmission over the biAWGN channel, with Y = (−1)^X + N, where X ∈ {0, 1}, N ∼ N(0, σ²) is the additive white Gaussian noise term, and Y is the channel output. The signal-to-noise ratio (SNR) is Es/N0 = 1/(2σ²), where Es is the energy per symbol and N0 is the single-sided noise power spectral density.

A. Protograph-based (Spatially-Coupled) LDPC Ensembles

For the MN code construction, we rely on protograph-based LDPC and SC LDPC code ensembles. In particular, we focus on two ensembles: the (dv, dc) regular block LDPC code ensemble, where dv and dc denote the VN and check node (CN) degrees, and the corresponding SC code ensemble. A protograph P = (V, C, E) is a small bipartite graph consisting of a set V of N VNs, a set C of M CNs, and a set E of e edges [12]. VNs in the protograph are numbered from 0 to N − 1. Similarly, protograph CNs are numbered from 0 to M − 1. Each VN/CN/edge in a protograph defines a VN/CN/edge type. The bipartite graph G of an LDPC code can be derived by lifting the protograph. In particular, the protograph is copied ℓ times (where ℓ is referred to as the lifting factor), and the edges of the protograph copies are permuted under the following constraint: if an edge connects a type-j VN to a type-i CN in P, after permutation the edge should connect one of the ℓ type-j VN copies with one of the ℓ type-i CN copies in G. We denote by v0, v1, . . . the VNs in G, and by c0, c1, . . . the CNs in G. The lifted graph G defines the parity-check matrix of an LDPC code. The base matrix of a protograph is an M × N matrix B = [b_{i,j}], where b_{i,j} is the number of edges that connect VN j to CN i in P. We will make use of LDPC codes with punctured (or state) VNs. A punctured VN is associated with a codeword bit that is not transmitted through the communication channel. We will assume that all the VNs of a given type are either punctured or they are not, i.e., puncturing is defined at protograph level. Protographs can be used to define both block and SC LDPC code ensembles.
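The lifting construction just described is mechanical. The following sketch is illustrative only (a uniformly random permutation per nonzero entry of a 0/1 base matrix; structured permutations would be used in practice, and base matrices with entries larger than 1, such as (1) below, would require sums of permutations per entry):

import numpy as np

# Lift a 0/1 base matrix: each 1 becomes a random l x l permutation matrix,
# each 0 becomes the l x l all-zero matrix.
def lift(B, l, rng):
    M, N = B.shape
    H = np.zeros((M * l, N * l), dtype=np.uint8)
    for i in range(M):
        for j in range(N):
            if B[i, j]:
                P = np.eye(l, dtype=np.uint8)[rng.permutation(l)]
                H[i * l:(i + 1) * l, j * l:(j + 1) * l] = P
    return H

rng = np.random.default_rng(0)
B = np.array([[1, 1], [1, 1], [1, 1]])   # toy 0/1 base matrix
print(lift(B, l=4, rng=rng).shape)       # -> (12, 8)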
In this paper, we focus on regular block code ensembles defined by a base matrix in the form

B = [ d  d ]    (1)

where the VN degree is dv = d and the CN degree is dc = 2d. An example of a protograph defined by a base matrix in the form (1) with d = 3 is depicted in Fig. 1 (left side). By convention, the punctured VN (dark circle) is associated with the first column of B. We consider SC LDPC code ensembles defined by protographs in the form

B_SC =
[ B_0                     ]
[ B_1      B_0            ]
[  ⋮       B_1     ⋱      ]
[ B_{d−1}   ⋮      ⋱      ]
[          B_{d−1} ⋱      ]    (2)

where B_0 = B_1 = · · · = B_{d−1} = [1 1]. The number of column blocks (spatial positions) is denoted by L. An example of a protograph defined by a base matrix in the form (2) with d = 3 is given in Fig. 1 (right side). As before, by convention, we associate the punctured VNs with even column indexes, as illustrated by the following example.

Example 1. The base matrix of the SC LDPC code ensemble defined according to (2), for d = 3, is

B_SC =
[ 1 1                    ]
[ 1 1  1 1               ]
[ 1 1  1 1  1 1          ]
[      1 1  1 1  1 1     ]
[           1 1  1 1  ⋱  ]
[                 ⋱   ⋱  ]

where the columns with index 0, 2, 4, . . . (marked in gray) are associated with punctured VNs.

Fig. 1. (3, 6) protograph LDPC code ensemble (left) and its SC counterpart from Example 1, with spatial positions 0, 1, 2, 3, . . . (right). Dark variable nodes denote punctured nodes.

Since all nonzero entries in the base matrix (2) are set to 1, lifting is performed by replacing each nonzero entry with an ℓ × ℓ permutation matrix, and each zero entry with an ℓ × ℓ zero matrix. The resulting parity-check matrix is in the form

H =
[ Π^{(0)}_{A,0}   Π^{(0)}_{C,0}                                        ]
[ Π^{(0)}_{A,1}   Π^{(0)}_{C,1}    Π^{(1)}_{A,0}   Π^{(1)}_{C,0}       ]
[     ⋮               ⋮            Π^{(1)}_{A,1}   Π^{(1)}_{C,1}   ⋱   ]
[ Π^{(0)}_{A,d−1} Π^{(0)}_{C,d−1}      ⋮               ⋮           ⋱   ]
[                                  Π^{(1)}_{A,d−1} Π^{(1)}_{C,d−1} ⋱   ]    (3)

where, following the visual convention introduced in Example 1, columns marked in gray refer to punctured VNs (the meaning of the subscripts A and C, associated with punctured and unpunctured VNs, will be explained in Section IV).

III. SPATIALLY COUPLED MACKAY-NEAL CODES

As in [1], we construct a protograph-based MN code as the concatenation of an outer DM (nonlinear CC code) and an inner protograph-based LDPC code with a nonsystematic encoder. Differently from [1], where the inner code is a block LDPC code, we use here a protograph-based SC LDPC code whose base matrix complies with (2). To illustrate the code structure, we focus on the encoder (Fig. 2). We assume a source that emits, at discrete times, a stream of messages µ0, µ1, . . . with µt ∈ {1, 2, . . . , M}. The generic message µt is encoded by the DM into a binary ℓ-tuple vt, where vt has a prescribed (constant) Hamming weight wH(vt) = ωℓ, for some ω ∈ {0, 1/ℓ, 2/ℓ, . . . , 1}. We refer to the DM parameter ω as the fractional Hamming weight of vt. The pair (ω, ℓ) defines the outer CC code. The CC outer codewords v0, v1, . . . are fed into the inner nonsystematic SC LDPC encoder, producing a sequence of ℓ-tuples x0, x1, . . . that are sent over the communication channel. According to the parity-check matrix structure of (3), encoding is performed as

x_t^T = (Π^{(t)}_{C,0})^{−1} ( Σ_{i=0}^{d−1} Π^{(t−i)}_{A,i} v_{t−i}^T + Σ_{i=1}^{d−1} Π^{(t−i)}_{C,i} x_{t−i}^T )

where vt = 0 and xt = 0 for t < 0. We assume zero-tail termination of the SC LDPC code. Following [7], [9], the encoder stops encoding the DM output sequence at time t = L − d.

Fig. 2. System model, where an SC MN code is used to communicate over the biAWGN channel (communication channel): source (µt) → distribution matcher (vt, ℓ bits) → LDPC encoder (xt, ℓ bits) → communication channel (yt) → LDPC decoder (v̂t) → de-matcher (µ̂t).
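As a concrete illustration of (2), the coupled base matrix for the (d, 2d) ensemble with L spatial positions can be assembled as follows (a sketch; the zero-tail termination yields the L + d − 1 CN block rows visible in Example 1, and the convention that even columns are punctured is implicit in the column ordering):

import numpy as np

def coupled_base(d, L):
    # Block column t carries B_0, ..., B_{d-1} = [1 1] in block rows t..t+d-1.
    B = np.zeros((L + d - 1, 2 * L), dtype=int)
    for t in range(L):
        for i in range(d):
            B[t + i, 2 * t:2 * t + 2] = 1
    return B

print(coupled_base(d=3, L=4))   # 6 x 8 matrix, matching Example 1 truncated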
The last d − 1 SC LDPC encoder inputs v_{L−d+1}, v_{L−d+2}, . . . , v_{L−1} are computed to drive the encoder to the zero state (zero syndrome) at time L − 1. In the proposed construction, the encoder inputs v_{L−d+1}, v_{L−d+2}, . . . , v_{L−1} are not punctured, i.e., they are transmitted over the communication channel along with the corresponding encoder outputs x_{L−d+1}, x_{L−d+2}, . . . , x_{L−1}. The inner code rate is hence

R_I = (L − (d−1)) / (L − (d−1) + 2(d−1)) = (L − (d−1)) / (L + (d−1)) .

The inner code rate tends to one as L → ∞. The rate of the outer code (DM) is [1]

R_O = (1/ℓ) log₂ (ℓ choose ωℓ)

and it converges to Hb(ω) for large ℓ. The overall rate is hence R = R_O R_I, and it approaches Hb(ω) in the asymptotic regime where both L and ℓ tend to infinity. Note that, by selecting ω ∈ [0, 1/2], it is possible to finely span the full code rate region R ∈ [0, 1], enabling flexible rate adaptation.

BP decoding takes place on the graph of the inner LDPC code. For unpunctured VNs, the input is given by their corresponding channel log-likelihood ratios (LLRs), i.e., L = ln[p(y|0)/p(y|1)]. Punctured VNs are initialized with the prior obtained from the marginal distribution of the DM output [1], [2], [13], i.e., L = ln[(1−ω)/ω]. The hard decisions at the LDPC decoder output, v̂0, v̂1, . . ., are passed to the de-matcher, producing the sequence of message estimates µ̂0, µ̂1, . . ..

IV. DENSITY EVOLUTION ANALYSIS

As observed in [1], the analysis of the MN code construction can be challenging, due to the inherent nonlinearity of the code (caused by the concatenation with the outer, nonlinear, constant composition code). The nonlinearity results in a (bit/block) error probability that is a function of the transmitted sequence. This aspect, in particular, is an impediment to the use of the all-zero codeword assumption adopted in the DE analysis of the code ensemble. In [1], the problem was circumvented by observing that the introduction of a random independent and identically-distributed (i.i.d.) bit scrambler, flipping the bits at the output of the DM independently and with probability 1/2 (with the scrambling sequence made available at the decoder), does not modify the error probability of the scheme over symmetric communication channels. Hence, the performance analysis of the scheme of Fig. 2 (without the scrambler) is equivalent to the one of the scheme of Fig. 3. By interpreting the scrambling sequence zt as the output of a binary memoryless symmetric source, and the DM output vt as an additive binary noise contribution, the analysis of the average error probability of the nonlinear MN code reduces to the analysis of the error probability of the inner nonpunctured, linear LDPC code (whose rate approaches 1/2 as L → ∞), where the information bits are transmitted over an additive binary noise channel with input wt and output zt = wt + vt, with marginal cross-over probability ω, and where the parity bits xt are transmitted over the biAWGN channel, with channel output yt [14], [15]. We refer to this model (illustrated in Fig. 4) as the EPC model. To emphasize their roles in the model, we refer to the biAWGN channel as the communication channel (channel-C), and to the additive binary noise channel as the a-priori channel (channel-A).¹
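Before turning to the threshold analysis, the rate bookkeeping above is simple to reproduce. A short sketch follows, with assumed example values for d, L, ℓ and ω:

import math

def Hb(w):
    return 0.0 if w in (0.0, 1.0) else -w * math.log2(w) - (1 - w) * math.log2(1 - w)

def rates(d, L, l, omega):
    # Inner rate R_I, outer DM rate R_O, overall rate R = R_O * R_I -> Hb(omega).
    RI = (L - (d - 1)) / (L + (d - 1))
    RO = math.log2(math.comb(l, round(omega * l))) / l
    return RI, RO, RI * RO

RI, RO, R = rates(d=3, L=100, l=1024, omega=0.11)
print(f"R_I = {RI:.4f}, R_O = {RO:.4f}, R = {R:.4f}, Hb(omega) = {Hb(0.11):.4f}")

prior_llr = math.log((1 - 0.11) / 0.11)   # punctured-VN prior L = ln((1-w)/w)
print(f"prior LLR = {prior_llr:.3f}")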
A. BP Decoding Thresholds over Parallel Channels

We initially analyze the performance of the SC LDPC code ensemble defined by (2) over various combinations of parallel channels, namely: (i) two parallel binary erasure channels (BECs), where channel-A has erasure probability ϵA and channel-C has erasure probability ϵC; (ii) two parallel biAWGN channels, where channel-A has Es/N0 = γA and channel-C has Es/N0 = γC; (iii) a binary symmetric channel (BSC)–biAWGN channel combination, where channel-A is a BSC with cross-over probability ω and channel-C is a biAWGN channel with SNR Es/N0. The latter setting will serve as a proxy for the analysis of the BP decoding thresholds of SC MN code ensembles over the biAWGN channel (that is our original setting). The analysis will be provided for the two ensembles defined by d = 3 and d = 4, which will be referred to as the (3, 6) and (4, 8) regular SC code ensembles, respectively. With reference to the EPC model of Fig. 4, we parameterize the channels through their conditional entropies hA = H(W|Z) and hC = H(X|Y). Considering the asymptotic limit L → ∞, the unpunctured SC LDPC code rate is R = 1/2. Note that, over the EPC model, the Shannon limit for rate-1/2 codes yields the following upper bound on the sum of conditional entropies [16]:

hA + hC ≤ 1.    (4)

In all cases, the BP thresholds are computed via DE by setting hA = h*A and then determining the largest value of hC, denoted h*C(h*A), that yields a vanishingly small bit error probability in the large-ℓ (lifting factor) limit. All results are obtained for a number of spatial positions L → ∞, hence removing the rate loss caused by the termination.

¹We are now able to clarify the notation used for the parity-check matrix structure of (3). The subscript A used for the permutation matrices at even column positions emphasizes that the corresponding VNs in the graph of the SC LDPC code are connected, in the EPC model, to the a-priori channel. The subscript C used for the permutation matrices at odd column positions emphasizes that the corresponding VNs in the graph of the SC LDPC code are connected, in the EPC model, to the communication channel.

Fig. 3. Modified SC MN scheme, with the introduction of the bit scrambler: the i.i.d. scrambling sequence zt is added (⊕) to the DM output vt to form the encoder input wt.

Fig. 4. Equivalent parallel channel model: the information bits wt reach the decoder through the a-priori channel as zt = wt + vt, while the parity bits xt reach it through the communication channel as yt.

1) BEC–BEC Parallel Channels: The analysis is here reminiscent of the analysis introduced in [16] to derive the BP decoding thresholds of SC root-LDPC code ensembles. By setting ϵA = hA and ϵC = hC, the analysis follows by means of protograph extrinsic information transfer (PEXIT) analysis [17], where the erasure probability associated with VNs connected to channel-A is ϵA and the erasure probability at the input of VNs connected to channel-C is ϵC. The achievable threshold pairs (h*A, h*C) are depicted for the (3, 6) and (4, 8) SC code ensembles in Fig. 5 (first chart from the left), and are compared with the limit given by (4). On the same chart, the achievable threshold pairs (h*A, h*C) for the (3, 6) and (4, 8) regular protograph block ensembles, defined by a base matrix in the form (1), are reported as reference. The results are close to the ones obtained in [16] (albeit for a different SC root-LDPC code ensemble).
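For the BEC–BEC case, the PEXIT recursion tracks one erasure probability per protograph edge. The following is a minimal sketch under stated assumptions: a finite truncated chain stands in for L → ∞ (so a small residual rate loss is present), the base matrix is 0/1 as in (2), and the operating point (ϵA, ϵC) is an assumed illustrative value, not a computed threshold:

import numpy as np

def bec_pexit(B, eps_ch, iters=20000, tol=1e-12):
    # B: 0/1 base matrix; eps_ch[j]: erasure probability seen by VN type j.
    M, N = B.shape
    rows = [np.nonzero(B[i])[0] for i in range(M)]
    cols = [np.nonzero(B[:, j])[0] for j in range(N)]
    v2c = np.where(B > 0, 1.0, 0.0)          # VN-to-CN erasure probabilities
    for _ in range(iters):
        c2v = np.zeros_like(v2c)
        for i in range(M):
            for j in rows[i]:                # extrinsic CN update
                c2v[i, j] = 1.0 - np.prod([1.0 - v2c[i, k] for k in rows[i] if k != j])
        new = np.zeros_like(v2c)
        for j in range(N):
            for i in cols[j]:                # extrinsic VN update
                new[i, j] = eps_ch[j] * np.prod([c2v[k, j] for k in cols[j] if k != i])
        done = np.max(np.abs(new - v2c)) < tol
        v2c = new
        if done:
            break
    post = [eps_ch[j] * np.prod([c2v[i, j] for i in cols[j]]) for j in range(N)]
    return max(post)                          # max a-posteriori erasure prob

def coupled_base(d, L):                       # same construction as in Sec. II
    B = np.zeros((L + d - 1, 2 * L), dtype=int)
    for t in range(L):
        for i in range(d):
            B[t + i, 2 * t:2 * t + 2] = 1
    return B

B = coupled_base(3, 30)                       # truncated (3, 6) SC chain
eps_ch = np.array([0.5 if j % 2 == 0 else 0.3 for j in range(B.shape[1])])
print("max a-posteriori erasure:", bec_pexit(B, eps_ch))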
Similarly to what happens over single binary-input output-symmetric memoryless channels, regular SC LDPC code ensembles allow operating close to the limit, with a gap that is reduced by increasing the VN/CN degrees, largely outperforming their block code ensemble counterparts.

2) biAWGN–biAWGN Parallel Channels: By setting

γA = (1/8) [J⁻¹(1 − hA)]²  and  γC = (1/8) [J⁻¹(1 − hC)]²

where J(·) defines the biAWGN channel capacity [18]

J(s) = 1 − (1/√(2πs²)) ∫_{−∞}^{+∞} exp(−(u − s²/2)²/(2s²)) log₂(1 + e⁻ᵘ) du ,

the analysis follows by means of the Gaussian approximation of DE, i.e., via PEXIT analysis [17]. Here, the SNR at the input of VNs connected to channel-A is γA and the SNR at the input of VNs connected to channel-C is γC. The achievable threshold pairs (h*A, h*C) are depicted for the (3, 6) and (4, 8) SC code ensembles in Fig. 5 (middle chart), and are compared with the limit given by (4). On the same chart, the achievable threshold pairs (h*A, h*C) for the (3, 6) and (4, 8) regular protograph block ensembles, defined by a base matrix in the form (1), are reported as reference. As for the BEC–BEC case, regular SC LDPC code ensembles allow operating close to the limit, with a gap that is reduced by increasing the VN/CN degrees, largely outperforming their block code ensemble counterparts. The gap between the SC LDPC code ensembles and the limit is larger than the one observed in the BEC–BEC setting, especially when hA and hC are close in value.

3) BSC–biAWGN Parallel Channels: For the analysis over BSC–biAWGN parallel channels, we resort to quantized DE [19], adapted to protographs. In particular, under the all-zero codeword assumption, the conditional L-value distribution at the input of VNs connected to the a-priori channel is

pA(L|W = 0) = ω δ(L + ∆) + (1 − ω) δ(L − ∆)

where δ is the Dirac delta function, and ∆ = ln[(1 − ω)/ω]. The conditional L-value distribution at the input of VNs connected to the communication channel is

pC(L|X = 0) = N(4γC, 8γC)

where γC is the SNR of the biAWGN communication channel. The conditional entropies of the a-priori channel and of the communication channel are hA = Hb(ω) and hC = 1 − J(√(8γC)), respectively. The achievable threshold pairs (h*A, h*C) are depicted for the (3, 6) and (4, 8) SC code ensembles in Fig. 5 (third chart from the left), and are compared with the limit given by (4). On the same chart, the achievable threshold pairs (h*A, h*C) for the (3, 6) and (4, 8) regular protograph block ensembles, defined by a base matrix in the form (1), are reported as reference. The results are extremely close to the ones obtained for the biAWGN–biAWGN case, suggesting that modeling the a-priori BSC as a biAWGN channel can provide reliable threshold estimates. Considering the extreme simplicity of the PEXIT analysis—which tracks only a single parameter for each protograph edge—this result is particularly important. In contrast, protograph quantized DE tracks the evolution of a full message distribution per edge (for threshold calculations, we used 1023 message quantization levels).

B. BP Decoding Thresholds of SC MN Code Ensembles

For the calculation of the BP decoding thresholds of SC MN code ensembles over the biAWGN channel, we resort to the EPC model, with a BSC as a-priori channel with crossover probability given by the DM parameter ω, and a biAWGN communication channel.
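The mappings above rely on J(·) and its inverse. A simple numerical version suffices to reproduce the entropy-to-SNR maps, e.g. γ = (1/8)[J⁻¹(1 − h)]²; this is a sketch with assumed quadrature grid and bisection bounds, not the implementation used for the reported thresholds:

import numpy as np

def J(s, n=20001):
    # Mutual information of a biAWGN LLR with mean s^2/2 and variance s^2.
    if s < 1e-9:
        return 0.0
    u = np.linspace(s * s / 2 - 10 * s, s * s / 2 + 10 * s, n)
    pdf = np.exp(-(u - s * s / 2) ** 2 / (2 * s * s)) / np.sqrt(2 * np.pi * s * s)
    return 1.0 - np.trapz(pdf * np.log2(1 + np.exp(-u)), u)

def J_inv(I, lo=1e-6, hi=60.0):
    for _ in range(80):               # bisection; J is increasing in s
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if J(mid) < I else (lo, mid)
    return 0.5 * (lo + hi)

h_C = 0.5
gamma_C = (J_inv(1.0 - h_C) ** 2) / 8.0   # Es/N0 of channel-C
print(f"h_C = {h_C}: Es/N0 = {10 * np.log10(gamma_C):.2f} dB")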
This allows mapping the BP decoding thresholds of protograph-based SC LDPC code ensembles over the BSC–biAWGN parallel channels (Section IV-A3) to the thresholds of the corresponding protograph-based SC MN code ensembles over the biAWGN channel. In particular, by fixing ω, one fixes the rate of the SC MN code ensemble to R = Hb(ω), as well as the conditional entropy of the a-priori channel in the EPC model to hA = Hb(ω). Hence, we have that the rate of the SC MN code ensemble equals the conditional entropy of the a-priori channel, i.e., R = hA. The entropy of the communication channel is hC = 1 − J(√(8Es/N0)). It is then sufficient to translate the results of Fig. 5 (rightmost chart) onto the rate–SNR plane according to the maps

R = hA  and  Es/N0 = (1/8) [J⁻¹(1 − hC)]² .

The result is depicted in Fig. 6.

Fig. 5. Threshold regions for block MN code ensembles (solid red/blue lines) and the corresponding SC MN code ensembles (dashed red/blue lines), compared to the rate-1/2 limit for parallel channel models (solid black line): BEC–BEC (left), biAWGN–biAWGN (center), and BSC–biAWGN (right).

Fig. 6. BP thresholds of (3, 6) and (4, 8) SC MN code ensembles computed via quantized DE, compared with the biAWGN channel capacity.

Both the (3, 6) and the (4, 8) SC MN ensembles allow operating within 0.7 dB from the biAWGN channel capacity across all rates, by simply selecting the rate via the DM parameter ω. The (4, 8) ensemble, in particular, displays a maximum gap of ≈ 0.15 dB from the Shannon limit. The achieved thresholds largely outperform those attained by the block protograph MN code ensembles found in [1], where the best protograph ensemble displayed a gap of about 1 dB to the Shannon limit.

REFERENCES

[1] A. Zahr, E. Ben Yacoub, B. Matuz, and G. Liva, "Rate-adaptive protograph-based MacKay-Neal codes," IEEE Trans. Inf. Theory, vol. 71, no. 2, pp. 914–929, Feb. 2025.
[2] D. MacKay, "Good error-correcting codes based on very sparse matrices," IEEE Trans. Inf. Theory, vol. 45, no. 2, pp. 399–431, Mar. 1999.
[3] R. Gallager, "Low-density parity-check codes," Ph.D. dissertation, Massachusetts Institute of Technology, Cambridge, MA, USA, 1963.
[4] P. Schulte and G. Böcherer, "Constant composition distribution matching," IEEE Trans. Inf. Theory, vol. 62, no. 1, pp. 430–434, Jan. 2016.
[5] T. Richardson and R. Urbanke, "The capacity of low-density parity-check codes under message-passing decoding," IEEE Trans. Inf. Theory, vol. 47, no. 2, pp. 599–618, Feb. 2001.
[6] A. Jiménez Felström and K. S. Zigangirov, "Time-varying periodic convolutional codes with low-density parity-check matrix," IEEE Trans. Inf. Theory, vol. 45, no. 6, pp. 2181–2191, Sep. 1999.
[7] M. Lentmaier, A. Sridharan, D. Costello, Jr., and K. Zigangirov, "Iterative decoding threshold analysis for LDPC convolutional codes," IEEE Trans. Inf. Theory, vol. 56, no. 10, pp. 5274–5289, Oct. 2010.
[8] S. Kudekar, T. J. Richardson, and R. L.
Urbanke, "Threshold saturation via spatial coupling: Why convolutional LDPC ensembles perform so well over the BEC," IEEE Trans. Inf. Theory, vol. 57, no. 2, pp. 803–834, Feb. 2011.
[9] D. G. M. Mitchell, M. Lentmaier, and D. J. Costello, "Spatially coupled LDPC codes constructed from protographs," IEEE Trans. Inf. Theory, vol. 61, no. 9, pp. 4866–4889, Sep. 2015.
[10] S. Kudekar, T. Richardson, and R. L. Urbanke, "Spatially coupled ensembles universally achieve capacity under belief propagation," IEEE Trans. Inf. Theory, vol. 59, no. 12, pp. 7761–7813, Dec. 2013.
[11] K. Kasai and K. Sakaniwa, "Spatially-coupled MacKay-Neal codes and Hsu-Anastasopoulos codes," in Proc. IEEE Int. Symp. Inf. Theory, St. Petersburg, Russia, Aug. 2011.
[12] J. Thorpe, "Low-density parity-check (LDPC) codes constructed from protographs," NASA JPL, IPN Progr. Rep. 42-154, Aug. 2003.
[13] G. Böcherer, F. Steiner, and P. Schulte, "Bandwidth efficient and rate-matched low-density parity-check coded modulation," IEEE Trans. Commun., vol. 63, no. 12, pp. 4651–4665, Dec. 2015.
[14] M. Fresia, F. Perez-Cruz, H. V. Poor, and S. Verdu, "Joint source and channel coding," IEEE Signal Process. Mag., vol. 27, no. 6, pp. 104–113, Nov. 2010.
[15] A. Golmohammadi and D. G. M. Mitchell, "Concatenated spatially coupled LDPC codes with sliding window decoding for joint source-channel coding," IEEE Trans. Commun., vol. 70, no. 2, pp. 851–864, Feb. 2022.
[16] V. Dedeoglu, F. Jardel, and J. J. Boutros, "Spatial coupling of root-LDPC: Parity bits doping," in Proc. Int. Conf. Telecommunications, Sydney, Australia, Apr. 2015.
[17] G. Liva and M. Chiani, "Protograph LDPC codes design based on EXIT analysis," in Proc. IEEE Global Telecommun. Conf., Washington, DC, USA, Nov. 2007.
[18] S. ten Brink, "Convergence behavior of iteratively decoded parallel concatenated codes," IEEE Trans. Commun., vol. 49, no. 10, pp. 1727–1737, Oct. 2001.
[19] S.-Y. Chung, G. D. Forney, T. J. Richardson, and R. Urbanke, "On the design of low-density parity-check codes within 0.0045 dB of the Shannon limit," IEEE Commun. Lett., vol. 5, no. 2, pp. 58–60, 2001.
2510.14841
On the order of lazy cellular automata

Edgar Alcalá-Arroyo∗1 and Alonso Castillo-Ramirez†1

1Centro Universitario de Ciencias Exactas e Ingenierías, Universidad de Guadalajara, México.

October 17, 2025

Abstract. We study the most elementary family of cellular automata defined over an arbitrary group universe G and an alphabet A: the lazy cellular automata, which act as the identity on configurations in A^G, except when they read a unique active transition p ∈ A^S, in which case they write a fixed symbol a ∈ A. As expected, the dynamical behavior of lazy cellular automata is relatively simple, yet subtle questions arise since they completely depend on the choice of p and a. In this paper, we investigate the order of a lazy cellular automaton τ : A^G → A^G, defined as the cardinality of the set {τ^k : k ∈ N}. In particular, we establish a general upper bound for the order of τ in terms of p and a, and we prove that this bound is attained when p is a quasi-constant pattern.

keywords: Lazy cellular automaton; unique active transition; order of a cellular automaton; quasi-constant pattern.

1 Introduction

A cellular automaton (CA) is a mathematical model over a discrete space defined by a local map that is applied homogeneously and simultaneously to the whole space. The underlying discrete space is a configuration space A^G, which consists of all maps from a group universe G to an alphabet A. Following [6], a cellular automaton τ : A^G → A^G is a function such that there exist a finite subset S ⊆ G, called a neighborhood of τ, and a local map µ : A^S → A satisfying

  τ(x)(g) = µ((g · x)|_S), ∀x ∈ A^G, g ∈ G,

where · denotes the shift action of G on A^G: (g · x)(h) := x(hg), ∀h ∈ G. We say that a cellular automaton τ : A^G → A^G is lazy if there is a local defining map µ : A^S → A for τ such that e ∈ S, where e is the identity of the group G, and there exists a pattern p ∈ A^S, called the unique active transition of τ, satisfying the following:

  ∀z ∈ A^S, µ(z) = z(e) ⟺ z ≠ p.

∗Email: edgar.alcala7434@alumnos.udg.mx
†Email: alonso.castillor@academicos.udg.mx

Intuitively, a lazy cellular automaton acts almost as the identity function of A^G, except when it reads the pattern p, in which case it writes the symbol a := µ(p) ∈ A \ {p(e)}. As expected, the dynamical behavior of a lazy CA is relatively simple, yet subtle questions arise because their evolution depends entirely on the choice of p and a. In a certain sense, lazy cellular automata are even more elementary than the well-known elementary cellular automata (ECA) studied by Wolfram [14], where complex behavior already emerges. Since there are only 256 ECA, a complete case-by-case computational analysis is feasible, whereas there are infinitely many lazy CAs (over an infinite group universe), as their neighborhood size can be arbitrarily large. Despite this, we believe that a deep understanding of lazy CAs is possible and may lead to new insights into broader families of cellular automata.

Lazy cellular automata were introduced in [3] as a tool to study idempotent CAs, that is, cellular automata τ : A^G → A^G satisfying τ² = τ. It was observed that if the unique active transition p ∈ A^S is constant (i.e., p(e) = p(s), ∀s ∈ S) or symmetric (i.e., S = S^{-1} and p(s) = p(s^{-1}), ∀s ∈ S), then τ is idempotent. Moreover, the idempotence of τ was completely characterized when p is quasi-constant, meaning that there is r ∈ S such that p|_{S\{r}} is constant.
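Before developing the theory, a minimal computational sketch may help fix ideas. It is ours, not the authors'; it takes the finite cyclic group Z_N as a stand-in universe, whereas the paper allows arbitrary (possibly infinite) groups, and all names in it are hypothetical.

```python
def lazy_step(x, S, p, a, N):
    """One step of a lazy CA over the cyclic group Z_N.

    x: tuple of length N (the configuration), S: list of offsets with 0 in S,
    p: dict s -> p(s) (the unique active transition), a: writing symbol != p[0].
    """
    # (g . x)|_S = p  iff  x(s + g) = p(s) for every s in S (shift action on Z_N)
    return tuple(
        a if all(x[(s + g) % N] == p[s] for s in S) else x[g]
        for g in range(N)
    )

# With S = [-1, 0, 1], p = {-1: 1, 0: 0, 1: 1} and a = 1 this reproduces
# ECA rule 236 (see Example 1 below): only the pattern 101 is rewritten, to 111.
x = (0, 1, 0, 1, 1, 0, 0, 0)
print(lazy_step(x, [-1, 0, 1], {-1: 1, 0: 0, 1: 1}, 1, len(x)))
# -> (0, 1, 1, 1, 1, 0, 0, 0)
```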
Here we study the order of a lazy cellular automaton τ : A^G → A^G, denoted by ord(τ), as the cardinality of the set of all powers of τ:

  ord(τ) := |{τ^k : k ∈ N}|.

Besides being an important algebraic concept, the order also captures part of the dynamical behavior of a cellular automaton. For example, it was shown by Kůrka [12, Theorem 4] that a one-dimensional cellular automaton (i.e., one whose underlying universe is the group of integers Z) is of finite order if and only if it is equicontinuous. The dynamical behavior of one-dimensional lazy CAs was examined in [4]. In this case, when the neighborhood S ⊆ Z of τ is an interval, an interesting dichotomy arises: either τ is idempotent or of infinite order, which is equivalent to being strictly almost equicontinuous as a dynamical system.

In this paper, we establish several results on the order of lazy CAs in the general setting of an arbitrary group universe G. In Section 2, we show that for any lazy cellular automaton τ : A^G → A^G, if ord(τ) is finite, then τ has period 1, which means that there is n ∈ N such that τ^n = τ^{n+1}. Moreover, in Theorem 2, we derive a general upper bound for ord(τ) in terms of its unique active transition p ∈ A^S. As a consequence, in Corollary 3, we provide a sufficient condition on p ∈ A^S that guarantees that τ is idempotent.

In Section 3, we determine the order of a lazy CA whose unique active transition is a quasi-constant pattern, which is a substantial generalization of Theorem 2 in [3]. In order to state our result, we introduce the following notation about words on groups. A word on S ⊆ G of length n is simply an element of the Cartesian power S^n := {(s_1, . . . , s_n) : s_i ∈ S}. We consider the evaluation function θ from words on S ⊆ G to elements of G given by θ(s_1, . . . , s_n) := s_1 ⋯ s_n. We say that v is a subword of w = (s_1, . . . , s_n), denoted by v ⊑ w, if v = (s_i, s_{i+1}, . . . , s_j) for some i ≤ j.

Theorem 1. Let τ : A^G → A^G be a lazy cellular automaton with unique active transition p ∈ A^S and writing symbol a ∈ A \ {p(e)}. Assume that p is quasi-constant with non-constant element r ∈ S.
1. If a ≠ p(s) for all s ∈ S, then ord(τ) = 2.
2. If r ≠ e and a = p(r), then ord(τ) is finite if and only if there exists n ≥ 2 such that r^n ∈ S. Moreover, in this case, ord(τ) = min{n ≥ 2 : r^n ∈ S}.
3. If r = e and a = p(s) for all s ∈ S \ {e}, then ord(τ) is finite if and only if there exists n ≥ 2 such that for all words w ∈ (S \ {e})^{n−1} there exists a subword v ⊑ w such that θ(v)^{−1} ∈ S. In such case, the order of τ is the minimum n satisfying this property.

As an easy consequence of Theorem 1 we find that, in contrast with the dichotomy obtained in [4] when the neighborhood S ⊆ Z is an interval, for every n ≥ 2 there exists a lazy cellular automaton τ : A^G → A^G such that ord(τ) = n. Finally, in Section 4, we present two open problems related to the study of lazy cellular automata.

2 The order of lazy CA

We assume that the alphabet A has at least two different elements 0 and 1. A pattern is a function p ∈ A^S, where S is a finite subset of the group universe G. The term active transitions of a local map µ : A^S → A, with e ∈ S, has been recently introduced by several authors [1, 2, 8, 9] as the patterns z ∈ A^S such that µ(z) ≠ z(e). The activity value [7] of µ : A^S → A is simply the number of active transitions of µ.

Definition 1. A cellular automaton τ : A^G → A^G is called lazy if there is a local defining map µ : A^S → A for τ such that e ∈ S and there exists p ∈ A^S satisfying

  ∀z ∈ A^S, µ(z) = z(e) ⟺ z ≠ p.
In such case, we say that p is the unique active transition of τ and µ(p) ∈ A \ {p(e)} is the writing symbol of τ. The adjective "lazy" is justified for this class of cellular automata as they have the smallest non-zero activity value (i.e., activity value 1).

Example 1. Let G = Z and A = {0, 1}. The ECA rule 236 (see [11, Sec. 2.5] for an explanation of the rule labeling) is lazy, because it may be defined via the local map µ : A^{{−1,0,1}} → A given by the following table:

  z ∈ A^{{−1,0,1}} | 111 110 101 100 011 010 001 000
  µ(z) ∈ A         |  1   1   1   0   1   1   0   0

The unique active transition of rule 236 is p = 101 ∈ A^{{−1,0,1}}.

Example 2. Let G = Z and A = {0, 1}. The ECA rule 136 may be defined via the local map µ : A^{{−1,0,1}} → A given by the following table:

  z ∈ A^{{−1,0,1}} | 111 110 101 100 011 010 001 000
  µ(z) ∈ A         |  1   0   0   0   1   0   0   0

It may seem that rule 136 is not lazy because the above local map has two active transitions, 110 and 010. However, this local map may be reduced to its minimal neighborhood, so we obtain a local defining map µ′ : A^{{0,1}} → A for rule 136 given by the following table:

  z ∈ A^{{0,1}} | 11 10 01 00
  µ′(z) ∈ A     |  1  0  0  0

Therefore, rule 136 is lazy and has unique active transition p = 10 ∈ A^{{0,1}}.

Recall that the minimal neighborhood of a cellular automaton τ : A^G → A^G is a neighborhood admitted by τ of smallest cardinality, which always exists and is unique by [6, Proposition 1.5.2]. Equivalently, the minimal neighborhood of τ is equal to the set of all elements of G that are essential in order to define a local map for τ (see [5, Proposition 1]). We call the local defining map of τ associated to its minimal neighborhood the minimal local map of τ. It was shown in [3, Lemma 1] that if µ : A^S → A has a unique active transition and |S| ≥ 2, then the minimal neighborhood of the lazy cellular automaton τ : A^G → A^G defined by µ is precisely S. Hence, the next result follows.

Proposition 1. Let τ : A^G → A^G be a non-constant cellular automaton with minimal local map µ : A^S → A. Then, τ is lazy if and only if e ∈ S and µ has a unique active transition.

We say that a pattern p ∈ A^S appears in x ∈ A^G if there is g ∈ G such that (g · x)|_S = p.

Lemma 1. Let τ : A^G → A^G be a lazy CA with unique active transition p ∈ A^S. Then, p appears in a configuration x ∈ A^G if and only if x ≠ τ(x).

Proof. Let µ : A^S → A be the minimal local map of τ. By Definition 1, there is g ∈ G such that (g · x)|_S = p if and only if τ(x)(g) = µ((g · x)|_S) ≠ p(e) = (g · x)(e) = x(g). The result follows.

For any cellular automaton τ : A^G → A^G and n ∈ N, we denote by τ^n the n-th composition of τ with itself: τ^n = τ ∘ ⋯ ∘ τ (n times). By [6, Prop. 1.4.9], τ^n : A^G → A^G is also a cellular automaton.

Corollary 1. Let τ : A^G → A^G be a lazy CA with unique active transition p ∈ A^S. For any n ∈ N, τ^n ≠ τ^{n+1} if and only if there exists x ∈ A^G such that p appears in τ^n(x).

The next result follows by [3, Lemma 3] or [4, Lemma 3.2], but we add its proof here for completeness.

Lemma 2. Let τ : A^G → A^G be a lazy CA with unique active transition p ∈ A^S and writing symbol a ∈ A. If there exists x ∈ A^G such that τ(x)|_S = p, then (s · x)|_S = p for some s ∈ S \ {e} such that p(s) = a ≠ p(e).

Proof. The hypothesis τ(x)|_S = p is equivalent to p(s) = τ(x)(s) = µ((s · x)|_S), ∀s ∈ S, where µ : A^S → A is the minimal local map of τ. If x|_S = p, then p(e) = µ(x|_S) = a is a contradiction. Hence, x|_S ≠ p and p(e) = µ(x|_S) = x(e). Suppose that (s · x)|_S ≠ p for all s ∈ S \ {e}. Then, p(s) = µ((s · x)|_S) = (s · x)(e) = x(s), which contradicts that x|_S ≠ p. Therefore, there must exist s ∈ S \ {e} such that (s · x)|_S = p. This implies that p(s) = µ((s · x)|_S) = a ≠ p(e).
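Since ord(τ) is determined by the sequence of powers τ^n just introduced (cf. Corollary 1), it can be computed exhaustively when the universe is a tiny finite group. The following hedged sketch is ours, not the authors'; it is a finite sanity check over Z_N, not a proof, and the example at the end uses the value predicted by Theorem 1 (2.).

```python
from itertools import product

def ca_order(S, p, a, N, q=2, max_pow=64):
    """|{tau^k : k in N}| for the lazy CA with data (S, p, a) over Z_N, A = {0,...,q-1}.

    Exhaustive over all q**N configurations, so only sensible for tiny N; the
    identity tau^0 is counted, matching ord(tau) = min{n >= 2 : tau^{n-1} = tau^n}.
    """
    def step(x):  # one application of tau (same convention as the earlier sketch)
        return tuple(a if all(x[(s + g) % N] == p[s] for s in S) else x[g]
                     for g in range(N))

    configs = list(product(range(q), repeat=N))
    power = {x: x for x in configs}          # tau^0 = identity
    seen = [power]
    for _ in range(max_pow):
        power = {x: step(y) for x, y in power.items()}   # tau^{k+1} = tau o tau^k
        if power in seen:                    # the new power equals an earlier one
            return len(seen)
        seen.append(power)
    raise RuntimeError("order exceeds max_pow (possibly infinite)")

# Sanity check of Theorem 1 (2.) over G = Z_12: S = {0, 1, 4}, r = 1, a = p(r) = 1.
# The theorem predicts ord(tau) = min{n >= 2 : n*r in S} = 4.
print(ca_order(S=[0, 1, 4], p={0: 0, 1: 1, 4: 0}, a=1, N=12))  # expected: 4
```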
Therefore, there must exist s ∈S \ {e} such that (s · x)|S = p. This implies that p(s) = µ((s · x)|S) = a ̸= p(e). Given x ∈AG and b ∈A, define suppb (x) := {g ∈G : x(g) = b}. The following result presents some elementary properties of supports of configurations under lazy CAs. Lemma 3. Let τ : AG →AG be a lazy CA with unique active transition p ∈AS and writing symbol a ∈A. Let i, j ∈N, i ≤j. Then: 1. suppa τ i(x)  ⊆suppa τ j(x)  for all x ∈AG 2. suppb τ i(x)  ⊇suppb τ j(x)  , for all x ∈AG, and b ∈A \ {a}. 3. If suppa (x) = suppa (τ(x)) for some x ∈AG, then x = τ(x). Proof. For parts (1.) and (2.), we shall prove the base case i = 0 and j = 1, as the rest follows by induction. Let g ∈suppa (x). Observe that (g ·x)|S ̸= p, because (g ·x)(e) = x(g) = a ̸= p(e). Definition 1 implies τ(x)(g) = µ((g · x)|S) = (g · x)(e) = x(g) = a, where µ : AS →A is the minimal local map of τ. This shows that g ∈suppa (τ(x)). Now, assume b ∈A \ {a} and g ∈suppb (τ(x)). Since b ̸= a, Definition 1 implies τ(x)(g) = x(g), so g ∈suppb (x). For part (3), first note that x(g) = a = τ(x)(g) for all g ∈suppa (x) = suppa (τ(x)). Let h ∈G\suppa (τ(x)). This means that µ((h·x)|S) = τ(x)(h) ̸= a, so by Definition 1, (h·x)|S ̸= p and τ(x)(h) = µ((h · x)|S) = (h · x)(e) = x(h). Therefore, x = τ(x). If ord (τ) < ∞, let m ∈N and n ∈Z+ be as small as possible such that τ m = τ m+n. Then, m and n are referred to as the index and period of τ, respectively. Note that ord (τ) = m + n. Proposition 2. Let τ : AG →AG be a lazy CA. If ord (τ) < ∞, then the period of τ is 1. Proof. Let µ : AS →A be the minimal local map of τ with unique active transition p ∈AS and writing symbol a := µ(p) ∈A. If ord (τ) < ∞, let m and n be the index and period of τ, so τ m = τ m+n. Lemma 3 (1.) implies that for all x ∈AG suppa (τ m(x)) = suppa τ m+i(x)  , ∀i ∈{1, . . . , n}. Hence, Lemma 3 (3.) implies that τ m = τ m+1, so, by the minimality of n, we must have n = 1. 5 Corollary 2. Let τ : AG →AG be a lazy CA and n ∈Z+. Then, ord (τ) > n ⇐⇒τ j−1 ̸= τ j, ∀j ∈1, . . . , n. Furthermore, ord (τ) = min{n ≥2 : τ n−1 = τ n}. We define the product of two subsets S, K ⊆G as SK := {sk : s ∈S, k ∈K} and the inverse as S−1 := {s−1 : s ∈S}. In the following result we provide a general upper bound for the order of a lazy cellular automaton in terms of its unique active transition. Theorem 2. Let τ : AG →AG be a lazy CA with minimal neighborhood S ⊆G, unique active transition p ∈AS and writing symbol a ∈A \ {p(e)}. For any b ∈A, we define Sb := p−1{b} = {s ∈S : p(s) = b}. Then, ord (τ) is at most the minimum n ≥2 such that for every word (s1, . . . , sn−1) ∈(Sa)n−1 there exist 1 ≤i ≤j ≤n −1 satisfying at least one of the following: 1. (sj · · · si)−1 ∈S−1 b Sa, for some b ∈A \ {a}; 2. (sj · · · si)−1 ∈S−1 b1 Sb2, for some b1, b2 ∈A \ {a}, with b1 ̸= b2. Proof. We will show that for any n ≥2, if τ n−1 ̸= τ n, then there exists a word (s1, . . . , sn−1) ∈ (Sa)n−1 such that (1.) and (2.) do not hold for all 1 ≤i ≤j ≤n −1, so the result follows from Corollary 2. If τ n−1 ̸= τ n, then, by Corollary 1, there exists x ∈AG such that p appears in τ n−1(x). By the G-equivariance of cellular automata (i.e., τ(g · x) = g · τ(x), for all g ∈G, see [6, Prop. 1.4.4]), we may assume that τ n−1(x)|S = p. Applying iteratively Lemmas 1 and 2 yields the existence of a word (s1, . . . , sn−1) ∈(Sa)n−1 such that (sj · · · s1) · τ (n−1)−j(x)  |S = p, ∀j ∈{1, . . . , n −1}. 
It follows from the definition of S_b that for all g ∈ G and y ∈ A^G,

  (g · y)|_S = p ⟺ S_b g ⊆ supp_b(y), ∀b ∈ A.

Hence, for all b ∈ A,

  S_b (s_j ⋯ s_1) ⊆ supp_b(τ^{(n−1)−j}(x)), ∀j ∈ {1, . . . , n − 1}.

Now, Lemma 3 (1.) and (2.) imply that for all k ∈ {1, . . . , n − 1},

  ⋃_{j=1}^{k} S_a (s_{(n−1)−j} ⋯ s_1) ⊆ supp_a(τ^k(x)),
  ⋃_{j=k}^{n−1} S_b (s_{(n−1)−j} ⋯ s_1) ⊆ supp_b(τ^k(x)), ∀b ∈ A \ {a}.

Since supports with respect to different elements of A are disjoint, we obtain that

  (⋃_{i=1}^{k} S_a (s_{(n−1)−i} ⋯ s_1)) ∩ (⋃_{j=k}^{n−1} S_b (s_{(n−1)−j} ⋯ s_1)) = ∅, ∀b ∈ A \ {a},   (1)

  (⋃_{i=k}^{n−1} S_{b_1} (s_{(n−1)−i} ⋯ s_1)) ∩ (⋃_{j=k}^{n−1} S_{b_2} (s_{(n−1)−j} ⋯ s_1)) = ∅, ∀b_1, b_2 ∈ A \ {a}, b_1 ≠ b_2.   (2)

Finally, (1) is equivalent to (s_j ⋯ s_i)^{−1} ∉ S_b^{−1} S_a for all 1 ≤ i ≤ j ≤ n − 1 and b ∈ A \ {a}, while (2) is equivalent to (s_j ⋯ s_i)^{−1} ∉ S_{b_1}^{−1} S_{b_2} for all 1 ≤ i ≤ j ≤ n − 1 and all b_1, b_2 ∈ A \ {a}, b_1 ≠ b_2. The result follows.

As an easy consequence of Theorem 2 we obtain the following sufficient conditions on a lazy cellular automaton for being idempotent.

Corollary 3. With the notation of Theorem 2, assume at least one of the following holds:
1. S_a = ∅;
2. S_a^{−1} ⊆ S_b^{−1} S_a, for some b ∈ A \ {a};
3. S_a^{−1} ⊆ S_{b_1}^{−1} S_{b_2}, for some b_1, b_2 ∈ A \ {a} such that b_1 ≠ b_2.
Then, τ is idempotent.

Example 3. Consider the lazy CA τ : A^Z → A^Z with minimal neighborhood S = {−1, 0, 1, 2}, unique active transition p = 1010 ∈ A^S and writing symbol a = 1 ∈ A. Since S_1 = {−1, 1} = S_1^{−1}, condition (2.) of Corollary 3 holds, so τ is idempotent.

Example 4. A counterexample to the converse of Corollary 3 was found by María G. Magaña-Chávez in private communication. Consider the lazy CA τ : A^Z → A^Z with minimal neighborhood S = {−3, −1, 0, 3, 4}, unique active transition p = 11010 ∈ A^S and writing symbol a = 1 ∈ A. Then S_0 = {0, 4} and S_1 = {−3, −1, 3}, so none of the conditions of Corollary 3 holds. However, direct computations show that τ is idempotent.

3 Lazy CA with a quasi-constant active transition

Let τ : A^G → A^G be a lazy cellular automaton with minimal local map µ : A^S → A, unique active transition p ∈ A^S, and writing symbol a := µ(p) ∈ A \ {p(e)}. In this section, we assume that p is quasi-constant, which means that p is not constant and there exists r ∈ S such that p|_{S\{r}} is constant. We call r ∈ S the non-constant element of p. Without loss of generality, we assume that p ∈ A^S is defined as follows: for all s ∈ S,

  p(s) := 1 if s = r, and p(s) := 0 if s ≠ r.

Because of Corollary 3, we only need to consider the case when a = p(s) for some s ∈ S (as otherwise, S_a = ∅).

3.1 Non-constant element r ≠ e

In this section, we assume that the non-constant element r ∈ S of p ∈ A^S is different from the group identity e ∈ S. Then, we must have 1 = p(r) = a = µ(p), since we are assuming that p(e) = 0.

Proposition 3. For any n ≥ 2, ord(τ) > n if and only if r^j ∉ S for all 2 ≤ j ≤ n.

Proof. Assume that ord(τ) > n. By Theorem 2, there exists a word w = (s_1, . . . , s_{n−1}) ∈ (S_1)^{n−1} such that for all i ≤ j, (s_j ⋯ s_i)^{−1} ∉ S_0^{−1} S_1. By the construction of p in this section, we have S_0 = S \ {r} and S_1 = {r}, so the word w has the form w = (r, r, . . . , r). Hence, the above condition is equivalent to r^j ∉ S, for all 2 ≤ j ≤ n.

Conversely, assume that r^j ∉ S for all 2 ≤ j ≤ n. Define x ∈ A^G as follows:

  x(g) := p(g) if g ∈ S, and x(g) := 0 otherwise.

Claim 1. For all m ∈ {0, . . . , n − 1}, (g · τ^m(x))|_S = p if and only if g = r^{−m}.

Proof. We prove this by strong induction on m. For m = 0, it is clear by the construction of x. Now, assume the existence of m ∈
{0, . . . , n − 2} such that the following holds:

  (g · τ^ℓ(x))|_S = p ⟺ g = r^{−ℓ}, ∀ℓ ∈ {0, . . . , m}.

We shall prove the case m + 1. By the induction hypothesis and the construction of x,

  τ^{m+1}(x)(g) = µ((g · τ^m(x))|_S) = 1 if g ∈ {r^{−ℓ}}_{ℓ=−1}^{m}, and 0 otherwise.   (3)

We will show that (g · τ^{m+1}(x))|_S = p if and only if g = r^{−(m+1)}. First we will show that (r^{−(m+1)} · τ^{m+1}(x))|_S = p. For any s ∈ S, (3) implies that

  τ^{m+1}(x)(s r^{−(m+1)}) = 1 ⟺ s r^{−(m+1)} ∈ {r^{−ℓ}}_{ℓ=−1}^{m}.   (4)

If s r^{−(m+1)} = r^{−ℓ}, then s = r^{m+1−ℓ}. Note that m + 1 − ℓ ∈ {1, . . . , m + 2}, where m + 2 ≤ n by the construction of m. Since r^j ∉ S for all 2 ≤ j ≤ n, it follows that s = r^{m+1−ℓ} if and only if m + 1 − ℓ = 1. As a consequence, ℓ = m. Then, by (3) and (4),

  τ^{m+1}(x)(s r^{−(m+1)}) = 1 ⟺ s = r,
  τ^{m+1}(x)(s r^{−(m+1)}) = 0 ⟺ s ∈ S \ {r}.

Therefore, (r^{−(m+1)} · τ^{m+1}(x))|_S = p. Conversely, suppose that (g · τ^{m+1}(x))|_S = p for some g ∈ G. It follows by the construction of p that τ^{m+1}(x)(g) = p(e) = 0 and τ^{m+1}(x)(rg) = p(r) = 1. As a consequence of (3), g ∉ {r^{−ℓ}}_{ℓ=−1}^{m} and rg ∈ {r^{−ℓ}}_{ℓ=−1}^{m}, which implies that g ∈ {r^{−ℓ}}_{ℓ=0}^{m+1}. Thus, g = r^{−(m+1)} and the result follows.

It follows by the previous claim and Lemma 1 that τ^{j−1} ≠ τ^j for all j ∈ {1, . . . , n}. Therefore, ord(τ) > n by Corollary 2.

Theorem 1 (2.) follows as a direct consequence of Proposition 3. Recall that the order of an element g ∈ G, denoted by ord(g), is the minimum integer n ≥ 1 such that g^n = e, or infinity, in case no such integer exists. Recall that g^i ≠ g^j holds for all 1 ≤ i < j < ord(g).

Corollary 4. Let n ≥ 2 be an integer such that there is g ∈ G with ord(g) > n. Then, there exists a lazy CA τ : A^G → A^G with ord(τ) = n.

Proof. Let S := {e, g, g^n} and define a quasi-constant pattern p ∈ A^S by p(e) = p(g^n) = 0 and p(g) = 1. Let τ : A^G → A^G be the lazy CA with unique active transition p and writing symbol 1 ∈ A. The result follows by Theorem 1 (2.), using the assumption ord(g) > n.

When the universe G has an element of infinite order, such as when G = Z^d, the previous corollary implies that there is always a lazy CA of every given order n ≥ 2.

3.2 Non-constant element r = e

In this section, we assume that r = e, so the writing symbol of τ is a = 0 = p(s), for all s ∈ S \ {e}.

Proposition 4. For any n ≥ 2, ord(τ) > n if and only if there exists a word (s_1, . . . , s_{n−1}) ∈ (S \ {e})^{n−1} such that (s_j ⋯ s_i)^{−1} ∉ S, ∀i ≤ j.

Proof. Suppose that ord(τ) > n. By Theorem 2, there exists a word w = (s_1, . . . , s_{n−1}) ∈ (S_0)^{n−1} such that for all i ≤ j, (s_j ⋯ s_i)^{−1} ∉ S_1^{−1} S_0. By the construction of p in this section, we have that S_0 = S \ {e} and S_1 = {e}, so the direct implication follows.

Conversely, assume that there exists a word (s_1, . . . , s_{n−1}) ∈ (S \ {e})^{n−1} such that (s_j ⋯ s_i)^{−1} ∉ S, ∀i ≤ j. Define x ∈ A^G as follows: for all g ∈ G,

  x(g) := 0 if g ∈ {s_i ⋯ s_1 s_0}_{i=0}^{n−1}, and 1 otherwise,

where s_0 = e.

Claim 2. For all i ∈ {1, . . . , n}, (g · τ^i(x))|_S = p if and only if g = s_{n−i} ⋯ s_1 s_0.

Proof. First of all, observe that the hypothesis (s_j ⋯ s_i)^{−1} ∉ S, for all i ≤ j, is equivalent to

  {s_{n−j} ⋯ s_1 s_0}_{j=k}^{n} ∩ ⋃_{i=1}^{k} (S \ {e}) s_{n−i} ⋯ s_1 s_0 = ∅, ∀1 ≤ k ≤ n.   (5)

We prove Claim 2 by strong induction on i. For the base case i = 1, fix k = 1 in (5), so we see that x(s s_{n−1} ⋯ s_1) = 1 for all s ∈ S \ {e} and x(s_{n−1} ⋯ s_1) = 0 by the construction of x. It follows that (s_{n−1} ⋯ s_1 · x)|_S = p. Conversely, assume that (g · x)|_S = p for some g ∈ G, so x(g) = 0 and x(sg) = 1 for all s ∈ S \ {e}. It follows by the construction of p that g = s_ℓ ⋯ s_1 s_0 for some ℓ ∈ {0, . . . , n − 1}
and s_j ⋯ s_1 s_0 ∉ (S \ {e})g for all j ∈ {0, . . . , n − 1}. If ℓ < n − 1, then s_{ℓ+1} = (s_{ℓ+1} ⋯ s_1 s_0)(s_ℓ ⋯ s_1 s_0)^{−1} ∉ S \ {e}, which contradicts that s_{ℓ+1} ∈ S \ {e}. Therefore, g = s_{n−1} ⋯ s_1 s_0.

Now, assume the existence of m ∈ {1, . . . , n − 1} such that the following holds:

  (g · τ^i(x))|_S = p ⟺ g = s_{n−i} ⋯ s_1 s_0, ∀i ∈ {1, . . . , m}.

We shall prove the case m + 1 by showing that (g · τ^{m+1}(x))|_S = p if and only if g = s_{n−(m+1)} ⋯ s_1 s_0. By the construction of x, the induction hypothesis, and Definition 1,

  τ^m(x)(g) = 0 if g ∈ {s_i ⋯ s_1 s_0}_{i=0}^{n−m}, and 1 otherwise.

Since (g · τ^m(x))|_S = p if and only if g = s_{n−m} ⋯ s_1 s_0, it follows that τ^m(x)(g) ≠ τ^{m+1}(x)(g) if and only if g = s_{n−m} ⋯ s_1 s_0. Then

  τ^{m+1}(x)(g) = 0 if g ∈ {s_i ⋯ s_1 s_0}_{i=0}^{n−(m+1)}, and 1 otherwise.   (6)

As a consequence of fixing k = m + 1 in (5),

  s_j ⋯ s_1 s_0 ∉ (S \ {e}) s_{n−(m+1)} ⋯ s_1 s_0, ∀j = 0, . . . , n − (m + 1).

Thus, τ^{m+1}(x)(s s_{n−(m+1)} ⋯ s_1 s_0) = 1 for all s ∈ S \ {e}. Since τ^{m+1}(x)(s_{n−(m+1)} ⋯ s_1 s_0) = 0, it follows that (s_{n−(m+1)} ⋯ s_1 s_0 · τ^{m+1}(x))|_S = p.

Conversely, assume that (g · τ^{m+1}(x))|_S = p for some g ∈ G. Observe that τ^{m+1}(x)(g) = 0 and τ^{m+1}(x)(sg) = 1 for all s ∈ S \ {e}. By (6), we have that g = s_ℓ ⋯ s_1 s_0 for some ℓ ∈ {0, . . . , n − (m + 1)} and s_j ⋯ s_1 s_0 ∉ (S \ {e})g for all j ∈ {0, . . . , n − (m + 1)}. If ℓ < n − (m + 1), then s_{ℓ+1} = (s_{ℓ+1} ⋯ s_1 s_0)(s_ℓ ⋯ s_1 s_0)^{−1} ∉ S \ {e}, which contradicts that s_{ℓ+1} ∈ S \ {e}. Therefore, g = s_{n−(m+1)} ⋯ s_1 s_0, and the claim follows.

The previous claim and Corollary 1 imply that τ^{j−1} ≠ τ^j for all j ∈ {1, . . . , n}. The result follows by Corollary 2.

Finally, Theorem 1 (3.) follows as a direct consequence of Proposition 4.

4 Open problems

In this section, we propose two open problems related to the study of lazy CAs. In this paper, Corollary 3 gives a sufficient condition for the idempotency of a lazy cellular automaton. In [4], Conjecture 4.1 proposes that a lazy cellular automaton τ : A^G → A^G with a unique active transition p ∈ A^S is not idempotent if and only if p satisfies a self-overlapping condition. The conjecture is true when S ⊆ Z is an interval, but it turns out that the direct implication fails even for general one-dimensional lazy cellular automata. Hence, we propose our first problem.

Problem 1. Characterize the idempotency of a lazy cellular automaton τ : A^G → A^G in terms of its unique active transition p ∈ A^S and writing symbol a ∈ A \ {p(e)}.

The second problem we propose comes from the theory of monoids. Since the composition of two cellular automata over A^G is again a cellular automaton over A^G, the set of all cellular automata over A^G, denoted by CA(G, A) or End(A^G), is a monoid equipped with the composition of functions. The group of all invertible cellular automata over A^G is denoted by ICA(G, A) or Aut(A^G). For any subset C of cellular automata over A^G, denote by ⟨C⟩ the submonoid of CA(G, A) generated by C. It is well-known that the full transformation monoid Tran(A) of a finite set A is generated by the idempotents of defect 1 (i.e., self-maps of A whose image has size |A| − 1) together with the group of invertible transformations Sym(A) [10]. As the minimal local maps of lazy cellular automata over A^G with minimal neighborhood S = {e} are precisely the idempotents of defect 1 of A, the previous result inspired the following problem.

Problem 2. If L(G, A) is the set of all lazy cellular automata over A^G, prove or disprove the following:

  CA(G, A) = ⟨ICA(G, A) ∪ L(G, A)⟩.
If the above does not hold, what can we say about the submonoids ⟨ICA(G, A) ∪ L(G, A)⟩ and ⟨L(G, A)⟩? In other words, Problem 2 asks if every cellular automaton over A^G may be written as a composition of lazy and invertible CAs.

Acknowledgments. The first author was supported by SECIHTI Becas nacionales para estudios de posgrados. We sincerely thank Nazim Fatès for suggesting the name lazy for the class of cellular automata studied in this paper during the 31st International Workshop on Cellular Automata and Discrete Complex Systems, AUTOMATA 2025, at the University of Lille, France.

References

[1] Balbi, P. P., Mattos, T., Ruivo, E.: Characterisation of the elementary cellular automata with neighbourhood priority based deterministic updates. Commun. Nonlinear Sci. Numer. Simulat. 104 (2022) 106018. https://doi.org/10.1016/j.cnsns.2021.106018
[2] Balbi, P. P., Mattos, T., Ruivo, E.: From Multiple to Single Updates Per Cell in Elementary Cellular Automata with Neighbourhood Based Priority. In: Das, S., Roy, S., Bhattacharjee, K. (eds) The Mathematical Artist. Emergence, Complexity and Computation 45, Springer, Cham, 2022. https://doi.org/10.1007/978-3-031-03986-7_6
[3] Castillo-Ramirez, A., Magaña-Chavez, M. G., Veliz-Quintero, E.: Idempotent cellular automata and their natural order. Theoretical Computer Science 1009 (2024) 114698. https://doi.org/10.1016/j.tcs.2024.114698
[4] Castillo-Ramirez, A., Magaña-Chavez, M. G., Santos Baños, L. de los: One-dimensional cellular automata with a unique active transition. Dynamical Systems (2025) 1-15. https://doi.org/10.1080/14689367.2025.2487698
[5] Castillo-Ramirez, A., Veliz-Quintero, E.: On the Minimal Memory Set of Cellular Automata. In: Gadouleau, M., Castillo-Ramirez, A. (eds) Cellular Automata and Discrete Complex Systems. AUTOMATA 2024. Lecture Notes in Computer Science, vol 14782. Springer, Cham, 2024. https://doi.org/10.1007/978-3-031-65887-7_7
[6] Ceccherini-Silberstein, T., Coornaert, M.: Cellular Automata and Groups. 2nd Ed. Springer Monographs in Mathematics, Springer Cham, 2023. https://doi.org/10.1007/978-3-031-43328-3
[7] Concha-Vega, P., Goles, E., Montealegre, P., Ríos-Wilson, M., Santivañez, J.: Introducing the activity parameter for elementary cellular automata. Int. J. Mod. Phys. C 33(9) (2022) 2250121. https://doi.org/10.1142/S0129183122501212
[8] Fatès, N.: A tutorial on elementary cellular automata with fully asynchronous updating. Nat. Comput. 19(1) (2020) 179-197. https://doi.org/10.1007/s11047-020-09782-7
[9] Fatès, N.: Asynchronous Cellular Automata. In: Meyers, R. (eds) Encyclopedia of Complexity and Systems Science. Springer, Berlin, Heidelberg, 2018. https://doi.org/10.1007/978-3-642-27737-5_671-2
[10] Howie, J. M.: The Subsemigroup Generated By the Idempotents of a Full Transformation Semigroup. J. London Math. Soc. s1-41 (1966) 707-716. https://doi.org/10.1112/jlms/s1-41.1.707
[11] Kari, J.: Theory of cellular automata: a survey. Theoretical Computer Science 334 (2005) 3-33. https://doi.org/10.1016/j.tcs.2004.11.021
[12] Kůrka, P.: Languages, equicontinuity and attractors in cellular automata. Ergodic Theory and Dynamical Systems 17(2) (1997) 417-433. https://doi.org/10.1017/S014338579706985X
[13] Rotman, J.: An Introduction to the Theory of Groups. 4th Ed. Springer, New York, 1995.
[14] Wolfram, S.: Statistical mechanics of cellular automata. Reviews of Modern Physics 55(3) (1983) 601-644.
2510.14840
NORMAL AND PRIMITIVE NORMAL ELEMENTS WITH PRESCRIBED TRACES IN INTERMEDIATE EXTENSIONS OF FINITE FIELDS

ARPAN CHANDRA MAZUMDER, GIORGOS KAPETANAKIS, AND DHIREN KUMAR BASNET

Abstract. In this article, we study the existence and distribution of elements in finite field extensions with prescribed traces in several intermediate extensions that are also either normal or primitive normal. In the former case, we fully characterize the conditions under which such elements exist and provide an explicit enumeration of these elements. In the latter case we provide asymptotic results.

2020 Mathematics Subject Classification. 12E20, 11T24.
Key words and phrases. Finite fields; Primitive elements; Normal elements; Additive and multiplicative characters; Trace.
The first author is supported by DST INSPIRE Fellowship, Govt. of India (IF210206).

1. Introduction

Let q be a prime power and F_q the finite field of order q. For any given positive integer m, let F_{q^m} denote the extension field of F_q of degree m. The multiplicative group F_{q^m}^* is cyclic and a generator of this group is called a primitive element of F_{q^m}. An element α ∈ F_{q^m} is said to be normal over F_q (or just normal if the choice of the base field is clear) if the set of all its conjugates with respect to F_q, that is, the set {α, α^q, . . . , α^{q^{m−1}}}, forms a basis of F_{q^m} over F_q. An element α ∈ F_{q^m} is said to be primitive normal if it is both primitive and normal over F_q.

The motivation behind the study of primitive and normal elements derives from both theoretical and practical matters. Namely, primitive elements, besides their theoretical interest, have various applications, including cryptographic schemes [12] such as the Diffie-Hellman key exchange, the ElGamal encryption scheme and the construction of Costas arrays [7], which are also used in sonar and radar technology. Normal elements hold computational advantages for finite field arithmetic and are therefore used in many software and hardware implementations, most notably in coding theory and cryptography.

Another property that has attracted interest is prescribing the trace of an element α ∈ F_{q^m} over F_q. The trace of α ∈ F_{q^m} over F_q is the sum of all conjugates of α with respect to F_q, that is,

  Tr_{F_{q^m}/F_q}(α) = α + α^q + . . . + α^{q^{m−1}}.

For the sake of simplicity, since in this work we are dealing with intermediate extensions of F_{q^m}/F_q, from now on, for m > 1, d | m and α ∈ F_{q^m}, we denote the trace of α over F_{q^d} by

  Tr_{m/d}(α) = Σ_{i=0}^{m/d−1} α^{q^{id}}.

In this line of work, in 1990, Cohen [3] established the existence of primitive elements with a prescribed trace, up to some genuine exceptions.

Theorem 1.1 ([3, Theorem 1.1]). Let q be a prime power, m a positive integer and a ∈ F_q. Then there exists a primitive element α ∈ F_{q^m} such that Tr_{m/1}(α) = a, unless a = 0 and m = 2, or a = 0, m = 3 and q = 4.

Subsequently, in 1999, Morgan and Mullen's conjecture [10] was proven by Cohen and Hachenberger [4], where they established the existence of a primitive normal element with nonzero prescribed trace. Observe that a normal element never has trace equal to zero, whence the assumption that the trace is nonzero is necessary. Recently, Reis [13] characterized the existence of a solution for a special family of linear equations over finite fields and determined the exact number of solutions. As an application, Reis and Ribas [14] studied the existence and distribution of primitive elements in intermediate extensions of finite fields.
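As a quick computational companion to Theorem 1.1, the following self-contained sketch (ours, not from the paper; all names hypothetical) realizes F_{2^4} as F_2[x]/(x^4 + x + 1) with elements encoded as 4-bit integers, and lists the primitive elements of each trace. Since (q, m) = (2, 4) avoids the genuine exceptions of the theorem, both a = 0 and a = 1 should be attained.

```python
MOD, M = 0b10011, 4          # x^4 + x + 1, irreducible over F_2; F_16 = F_2[x]/(MOD)

def gf_mul(u, v):
    """Carry-less multiplication in F_16, reducing modulo x^4 + x + 1."""
    r = 0
    while v:
        if v & 1:
            r ^= u
        v >>= 1
        u <<= 1
        if u & (1 << M):     # degree reached 4: reduce
            u ^= MOD
    return r

def gf_pow(u, e):
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, u)
        u = gf_mul(u, u)
        e >>= 1
    return r

def trace(u):                # Tr_{4/1}(u) = u + u^2 + u^4 + u^8, a value in F_2
    return u ^ gf_pow(u, 2) ^ gf_pow(u, 4) ^ gf_pow(u, 8)

def is_primitive(u):         # |F_16^*| = 15, so u is primitive iff u^3 != 1 != u^5
    return u != 0 and gf_pow(u, 3) != 1 and gf_pow(u, 5) != 1

for a in (0, 1):             # Theorem 1.1 with q = 2, m = 4: both traces are hit
    hits = [u for u in range(1, 16) if is_primitive(u) and trace(u) == a]
    print(f"trace {a}: {len(hits)} primitive elements, e.g. {hits[:3]}")
```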
As a natural continuation of the aforementioned works, in this paper we explore the existence of normal and primitive normal elements in $\mathbb{F}_{q^m}$ with prescribed traces in several intermediate extensions $\mathbb{F}_{q^d}$ of $\mathbb{F}_{q^m}$. In particular, for given $m > 1$, divisors $d_1 < d_2 < \cdots < d_k$ of $m$, and $a_j \in \mathbb{F}_{q^{d_j}}$, we discuss the existence of a normal and of a primitive normal element $\alpha \in \mathbb{F}_{q^m}$ such that, for each $1 \le j \le k$,
$$\mathrm{Tr}_{m/d_j}(\alpha) = \sum_{i=0}^{m/d_j-1} \alpha^{q^{id_j}} = a_j.$$
In particular, not only do we fully characterize the necessary conditions for the case of normal elements with prescribed intermediate traces, but we also explicitly enumerate them; see Theorem 3.6. In addition, regarding primitive normal elements with prescribed intermediate traces, we obtain asymptotic and concrete results under the restriction $\gcd(d_i, d_j) = 1$ for $1 \le i < j \le k$, which are displayed in Theorem 5.1.

The paper is structured as follows. In Section 2, we introduce some useful notation and background material. Section 3 is devoted to studying the necessary conditions and the explicit enumeration of normal elements with prescribed traces in several intermediate extensions. In Section 4, we obtain an asymptotic condition for the existence of the desired primitive normal elements in $\mathbb{F}_{q^m}$ with prescribed traces in several intermediate extensions. Finally, in Section 5, we obtain some concrete existence results.

2. Preliminaries

In this section, we recall some definitions and results and provide some preliminary notation which is used to prove the main results of this article.

2.1. Linearized polynomials and $\mathbb{F}_q$-order. Before we proceed further, we mention some essential facts on linearized polynomials that we will use along the way. For more details on this important family of polynomials over finite fields, we refer the interested readers to [9, Section 3.4].

Definition 2.1. A polynomial $L_f \in \mathbb{F}_q[x]$ of the form
$$L_f(x) = \sum_{i=0}^{k} f_i x^{q^i}$$
is called a linearized polynomial. Moreover, if $f = \sum_{i=0}^{k} f_i x^i \in \mathbb{F}_q[x]$, then the $L_f$ above is the $q$-associate of $f$.

The following properties of linearized polynomials are well-known and straightforward. We refer the interested readers to [11] and the references therein for more details.

Proposition 2.2. Let $f, g \in \mathbb{F}_q[x]$ be two polynomials and let $L_f$ and $L_g$ be their $q$-associates. Then, for every $a, b \in \mathbb{F}_q$,
(1) $L_f(ax + by) = aL_f(x) + bL_f(y)$ and
(2) $L_f(L_g(x)) = L_{fg}(x)$.

Definition 2.3. The $\mathbb{F}_q$-order of some $\beta \in \mathbb{F}_{q^m}$, denoted by $\mathrm{Ord}_q(\beta)$, is the monic polynomial over $\mathbb{F}_q$ of minimum degree such that $L_{\mathrm{Ord}_q(\beta)}(\beta) = 0$.

Within the literature, the $\mathbb{F}_q$-order is commonly referred to as the additive order, as a nod to the fact that the additive group of $\mathbb{F}_{q^m}$ can be viewed as an $\mathbb{F}_q[x]$-module. Next, observe that $L_{x^m-1}(\beta) = 0$ for all $\beta \in \mathbb{F}_{q^m}$, i.e., the $\mathbb{F}_q$-order of an element of $\mathbb{F}_{q^m}$ exists and is of degree at most $m$. In fact, the following results hold, while their proofs are straightforward.

Proposition 2.4. Let $\beta \in \mathbb{F}_{q^m}$. The following are true:
(1) $\mathrm{Ord}_q(\beta) \mid x^m - 1$.
(2) $\beta$ is normal over $\mathbb{F}_q$ if and only if $\mathrm{Ord}_q(\beta) = x^m - 1$.
(3) If $d \mid m$, then $\beta \in \mathbb{F}_{q^d}$ if and only if $\mathrm{Ord}_q(\beta) \mid x^d - 1$.
(4) If $f \in \mathbb{F}_q[x]$, then $\mathrm{Ord}_q(L_f(\beta)) = \mathrm{Ord}_q(\beta)/\gcd(f, \mathrm{Ord}_q(\beta))$.

In a similar fashion, the $\mathbb{F}_q$-order of an additive character $\psi$ of $\mathbb{F}_{q^m}$ is denoted by $\mathrm{Ord}_q(\psi)$ and is defined as the monic polynomial over $\mathbb{F}_q$ of minimum degree such that $\psi(L_{\mathrm{Ord}_q(\psi)}(\beta)) = 1$ for all $\beta \in \mathbb{F}_{q^m}$. Furthermore, Proposition 2.4 entails that for all additive characters $\psi$ of $\mathbb{F}_{q^m}$, $\mathrm{Ord}_q(\psi) \mid x^m - 1$.
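Proposition 2.2(2) is easy to check symbolically for small parameters. The sketch below is a minimal illustration (not part of the paper), assuming Python with sympy and taking $q = 2$ with the hypothetical inputs $f = g = 1 + x$; it builds the $q$-associates, composes them, and compares against the $q$-associate of $fg$.

```python
from sympy import symbols, Poly

x = symbols('x')
q = 2  # prime field F_2; sympy handles the coefficients via modulus=q

def q_associate(coeffs, q):
    """q-associate L_f(x) = sum_i f_i * x**(q**i) of f = sum_i f_i * x**i."""
    return Poly(sum(c * x**(q**i) for i, c in enumerate(coeffs)), x, modulus=q)

f = [1, 1]  # f(x) = 1 + x, so L_f(X) = X + X^2
g = [1, 1]  # g(x) = 1 + x, so L_g(X) = X + X^2
Lf, Lg = q_associate(f, q), q_associate(g, q)

# Product fg = (1 + x)^2 = 1 + x^2 over F_2 (coefficients lowest-degree first).
fg = (Poly(f[::-1], x, modulus=q) * Poly(g[::-1], x, modulus=q)).all_coeffs()[::-1]

lhs = Lf.compose(Lg)                        # L_f(L_g(X))
rhs = q_associate([c % q for c in fg], q)   # L_{fg}(X)
assert lhs == rhs                           # both equal X + X^4 over F_2
print(lhs)
```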
2.2. Characteristic functions. Fix a positive integer $m$, $d$ a divisor of $m$ and $a \in \mathbb{F}_{q^d}$. Let $\rho_m$ be the characteristic function for primitive elements in $\mathbb{F}_{q^m}$, and $\kappa_m$ be the characteristic function for normal elements of $\mathbb{F}_{q^m}$ over $\mathbb{F}_q$. In particular, it is well-known that for any $\beta \in \mathbb{F}_{q^m}$,
$$\rho_m(\beta) = \theta(q) \sum_{t \mid q^m-1} \frac{\mu(t)}{\phi(t)} \sum_{\eta \in \Gamma(t)} \eta(\beta),$$
where $\theta(q) := \phi(q^m-1)/(q^m-1)$, $\mu$ is the Möbius function and $\Gamma(t)$ stands for the set of multiplicative characters of order $t$. Likewise, for any $\beta \in \mathbb{F}_{q^m}$,
$$\kappa_m(\beta) = \Theta(x^m-1) \sum_{f \mid x^m-1} \frac{\mu'(f)}{\Phi(f)} \sum_{\psi \in \Gamma(f)} \psi(\beta),$$
where $\Theta(x^m-1) := \Phi(x^m-1)/q^m$, $\Phi$ is the analogue of the Euler $\phi$ function defined as
$$\Phi(f) = \left| \left( \mathbb{F}_q[x]/\langle f \rangle \right)^* \right|,$$
$\Gamma(f)$ stands for the set of additive characters of $\mathbb{F}_q$-order $f$ and $\mu'$ is the analogue of the Möbius function defined as
$$\mu'(g) = \begin{cases} (-1)^s, & \text{if } g \text{ is the product of } s \text{ distinct irreducible monic polynomials}, \\ 0, & \text{otherwise}. \end{cases}$$

2.3. The trace map. Let $n$ be a divisor of $m$ and $\gamma \in \mathbb{F}_{q^m}$ be such that $\mathrm{Tr}_{\mathbb{F}_{q^m}/\mathbb{F}_{q^n}}(\gamma) = a \in \mathbb{F}_{q^n}$. Let $\chi$ denote the canonical additive character of $\mathbb{F}_{q^m}$; then all the additive characters of $\mathbb{F}_{q^m}$ are given by $\chi_c$, where $\chi_c(\alpha) = \chi(c\alpha)$ for any $c \in \mathbb{F}_{q^m}$ and $\alpha \in \mathbb{F}_{q^m}$. For any $\beta \in \mathbb{F}_{q^m}$, if $\tau_{m,n,a}$ stands for the characteristic function for elements in $\mathbb{F}_{q^m}$ with trace $a$ over $\mathbb{F}_{q^n}$, then
$$\tau_{m,n,a}(\beta) = \frac{1}{q^n} \sum_{c \in \mathbb{F}_{q^n}} \chi_c(\beta - \gamma) = \frac{1}{q^n} \sum_{c \in \mathbb{F}_{q^n}} \chi_c(\beta)\chi_c(\gamma)^{-1}.$$

The trace is transitive, that is, if $e$ divides $d$ and $d$ divides $m$, then for any $\alpha \in \mathbb{F}_{q^m}$ we have that $\mathrm{Tr}_{m/e}(\alpha) = \mathrm{Tr}_{d/e}(\mathrm{Tr}_{m/d}(\alpha))$. In particular, if $d_1 < \cdots < d_k$ are divisors of $m$ and we choose $a_i \in \mathbb{F}_{q^{d_i}}$, $1 \le i \le k$, then the existence of an element $\alpha \in \mathbb{F}_{q^m}$ with $\mathrm{Tr}_{m/d_i}(\alpha) = a_i$ is necessarily conditional on the following identities:
$$(2.1) \quad \mathrm{Tr}_{d_i/\gcd(d_i,d_j)}(a_i) = \mathrm{Tr}_{m/\gcd(d_i,d_j)}(\alpha) = \mathrm{Tr}_{d_j/\gcd(d_i,d_j)}(a_j), \quad 1 \le i, j \le k.$$
Recently, Reis [13, Theorem 4.1] showed that Eq. (2.1) is also sufficient and that there exist exactly $q^{m-\lambda(d)}$ elements $\alpha \in \mathbb{F}_{q^m}$ with $\mathrm{Tr}_{m/d_i}(\alpha) = a_i$ for $1 \le i \le k$, where
$$\lambda(d) = \deg\big(\mathrm{lcm}(x^{d_1}-1, \ldots, x^{d_k}-1)\big) = d_1 + \cdots + d_k + \sum_{i=2}^{k} (-1)^{i+1} \sum_{1 \le l_1 < \cdots < l_i \le k} \gcd(d_{l_1}, \ldots, d_{l_i}).$$

Eq. (2.1) implies that if $d_i \mid d_j$, then $\mathrm{Tr}_{m/d_i}(\alpha) = a_i$ is already implied by $\mathrm{Tr}_{m/d_j}(\alpha) = a_j$. Therefore, without loss of generality, we may restrict ourselves to divisors $d_1 < \cdots < d_k$ of $m$ such that $d_i \nmid d_j$ for any $1 \le i < j \le k$. Next, we introduce the following, which we adopt from [13].

Definition 2.5. Let $m$ be an integer and $1 < k < \sigma_0(m)$, where $\sigma_0(m)$ denotes the number of positive divisors of $m$.
(i) $\lambda_k(m)$ stands for the set of $k$-tuples $d = (d_1, \ldots, d_k)$, where $d_1 < \cdots < d_k < m$ are divisors of $m$ such that $d_i \nmid d_j$ for every $1 \le i < j \le k$.
(ii) For $d = (d_1, \ldots, d_k) \in \lambda_k(m)$, set $F_d = \prod_{i=1}^{k} \mathbb{F}_{q^{d_i}}$ and
$$\lambda(d) = d_1 + \cdots + d_k + \sum_{i=2}^{k} (-1)^{i+1} \sum_{1 \le l_1 < \cdots < l_i \le k} \gcd(d_{l_1}, \ldots, d_{l_i}).$$
Moreover, for $d = (d_1, \ldots, d_k) \in \lambda_k(m)$ and $a = (a_1, \ldots, a_k) \in F_d$, the $k$-tuple $a$ is $d$-admissible if, for any $1 \le i < j \le k$, $\mathrm{Tr}_{d_i/\gcd(d_i,d_j)}(a_i) = \mathrm{Tr}_{d_j/\gcd(d_i,d_j)}(a_j)$.

2.4. Some estimates. Finally, we will need the following in establishing our main result.

Lemma 2.6 ([13, Corollary 1.2]). Let $m > 1$ be an integer, $1 < k < \sigma_0(m)$ and let $d = (d_1, \ldots, d_k) \in \lambda_k(m)$. Then the number of $k$-tuples $(x_1, \ldots, x_k) \in F_d$ such that $x_1 + \cdots + x_k = 0$ equals $q^{d_1+\cdots+d_k-\lambda(d)}$.

For each $n \in \mathbb{N}$, we denote by $\omega(n)$ and $W(n)$ the number of prime divisors of $n$ and the number of square-free divisors of $n$, respectively. Also, for $f(x) \in \mathbb{F}_q[x]$, we denote by $\omega(f)$ and $W(f)$ the number of monic irreducible $\mathbb{F}_q$-divisors of $f$ and the number of square-free $\mathbb{F}_q$-divisors of $f$, respectively.
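For concreteness, $W(x^m-1)$ and $\Phi(x^m-1)$ are easy to compute for small parameters. The following is a minimal sketch (not from the paper), assuming Python with sympy and a prime $q$ so that sympy can factor over $\mathbb{F}_q$; the example values $m = 8$, $q = 3$ are ours.

```python
from sympy import symbols, Poly

x = symbols('x')

def W_and_Phi(m, q):
    """Return (W(x^m - 1), Phi(x^m - 1)) over F_q (q prime, for sympy)."""
    factors = Poly(x**m - 1, x, modulus=q).factor_list()[1]
    W = 2 ** len(factors)  # number of square-free monic divisors
    Phi = 1
    for g, e in factors:   # Phi(g^e) = q^(deg(g)*(e-1)) * (q^deg(g) - 1)
        dg = g.degree()
        Phi *= q ** (dg * (e - 1)) * (q ** dg - 1)
    return W, Phi

# Example: m = 8, q = 3 (gcd(m, q) = 1, so x^8 - 1 is square-free over F_3).
W, Phi = W_and_Phi(8, 3)
print(W, Phi)  # 16 and 2560; Lemma 2.8 below gives W <= 2^((8 + gcd(8,2))/2) = 32
```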
The following results provide bounds on $W(q^m-1)$ and $W(x^m-1)$, respectively.

Lemma 2.7 ([5, Lemma 3.7]). For any $\alpha \in \mathbb{N}$ and a positive real number $\nu$, $W(\alpha) \le C_\nu \cdot \alpha^{1/\nu}$, where $C_\nu = \prod_{i=1}^{r} \frac{2}{p_i^{1/\nu}}$ and $p_1, p_2, \ldots, p_r$ are the primes less than or equal to $2^\nu$ that divide $\alpha$.

In particular, we will require the following values of $C_\nu$ in the computations ahead: (i) $C_{11} = 4.2445 \cdot 10^{14}$, (ii) $C_{12} = 1.0573 \cdot 10^{24}$ and (iii) $C_{31} = 2.4015 \cdot 10^{1553069}$.

Lemma 2.8 ([8, Lemma 2.9]). Let $q$ be a prime power and $m$ a positive integer. Then, we have
$$W(x^m-1) \le 2^{\frac{1}{2}(m+\gcd(m,q-1))}.$$
In particular, $W(x^m-1) \le 2^m$, while the equality holds if and only if $m \mid (q-1)$. Furthermore, if $m \nmid (q-1)$, then $W(x^m-1) \le 2^{3m/4}$, since in this case $\gcd(m, q-1) \le \frac{m}{2}$.

The following is a direct consequence of [6, Ineq. (4.1)].

Lemma 2.9. Let $W(t)$ denote the number of squarefree divisors of $t$. Then for $t \ge 3$, $W(t-1) < t^{0.96/\log\log t}$.

3. Intermediate Traces of Normal Elements

In this section we study the existence of normal elements of $\mathbb{F}_{q^m}$ over $\mathbb{F}_q$ with their traces over several intermediate extensions arbitrarily prescribed. Throughout this section, $m$ is relatively prime to $q$, $1 < k < \sigma_0(m)$, $d = (d_1, \ldots, d_k) \in \lambda_k(m)$ and $a = (a_1, \ldots, a_k) \in F_d$ is a $d$-admissible $k$-tuple.

Lemma 3.1. Suppose $\beta \in \mathbb{F}_{q^m}$ is normal over $\mathbb{F}_q$ and $d \mid m$. Then $\mathrm{Tr}_{m/d}(\beta)$ is normal over $\mathbb{F}_q$ (as an element of $\mathbb{F}_{q^d}$).

Proof. Set $\mathrm{Tr}_{m/d}(\beta) = b$. Then $b = L_{\frac{x^m-1}{x^d-1}}(\beta)$. Assume that $b \in \mathbb{F}_{q^d}$ is not normal over $\mathbb{F}_q$. Then $\deg(\mathrm{Ord}_q(b)) < d$. Moreover,
$$L_{\mathrm{Ord}_q(b)}(b) = 0 \;\Rightarrow\; L_{\mathrm{Ord}_q(b)}\left( L_{\frac{x^m-1}{x^d-1}}(\beta) \right) = 0 \;\Rightarrow\; L_{\mathrm{Ord}_q(b)\frac{x^m-1}{x^d-1}}(\beta) = 0.$$
The latter contradicts the normality of $\beta$, since $\deg\left(\mathrm{Ord}_q(b)\frac{x^m-1}{x^d-1}\right) < m$. □

The above implies that we cannot arbitrarily prescribe the trace of a normal element over intermediate extensions; instead we have to confine ourselves to values of the corresponding trace functions that are, themselves, normal over the base field. This renders the following definition essential for our setting.

Definition 3.2. Some $d$-admissible $a = (a_1, \ldots, a_k) \in F_d$ is normal if $a_i \in \mathbb{F}_{q^{d_i}}$ is normal over $\mathbb{F}_q$ for every $i = 1, \ldots, k$.

Next, we focus on the inverse problem and obtain a correspondence, via the trace map, between the elements of $\mathbb{F}_{q^d}$ that are normal over $\mathbb{F}_q$ and the elements of $\mathbb{F}_{q^m}$ that are normal over $\mathbb{F}_q$, where $d \mid m$. Towards this end, we continue with the following auxiliary lemma.

Lemma 3.3. Let $f, g \in \mathbb{F}_q[x]$ be polynomials such that $f \mid g$. The map
$$\xi : \left( \mathbb{F}_q[x]/\langle g \rangle \right)^* \to \left( \mathbb{F}_q[x]/\langle f \rangle \right)^*, \quad h \pmod{g} \mapsto h \pmod{f}$$
is a group epimorphism.

Proof. The only nontrivial part of this claim is that $\xi$ is onto. Write $g = fg'g''$, where we take $g' \in \mathbb{F}_q[x]$ to be the largest degree divisor of $g$ that is relatively prime to $f$, and $g'' = g/(g'f)$. It follows that $\Phi(g) = \Phi(f)\Phi(g')q^{\deg g''}$ and, given that the domain and the co-domain of $\xi$ have orders $\Phi(g)$ and $\Phi(f)$, respectively, $\xi$ is onto if and only if $|\ker \xi| = \Phi(g')q^{\deg g''}$. Now, take some $h \in \mathbb{F}_q[x]$ of degree less than $\deg(g)$, such that $h + \langle g \rangle \in \ker \xi$. Then $h = 1 + fk$, for some $k \in \mathbb{F}_q[x]$ of degree less than $\deg(g) - \deg(f)$, while $\gcd(h, g) = 1$. This means that, out of the $q^{\deg(g)-\deg(f)}$ choices of $k$, we are left with those such that
$$1 + fk \not\equiv \ell \pmod{g'} \iff k \not\equiv (\ell-1)f^{-1} \pmod{g'},$$
for all $\ell \in \mathbb{F}_q[x]$ of degree less than $\deg(g')$ that are not relatively prime to $g'$. In other words, we are left with $\Phi(g')$ distinct choices for $k$ modulo $g'$. By comparing degrees, we readily obtain that each such choice corresponds to $q^{\deg g''}$ choices of degree at most $\deg g - \deg f$. Hence, $|\ker \xi| = \Phi(g')q^{\deg g''}$. □
Theorem 3.4. Let $m$ and $d$ be such that $d \mid m$. The mapping
$$\nu : \{\gamma \in \mathbb{F}_{q^m} : \gamma \text{ normal over } \mathbb{F}_q\} \to \{c \in \mathbb{F}_{q^d} : c \text{ normal over } \mathbb{F}_q\}, \quad \gamma \mapsto \mathrm{Tr}_{m/d}(\gamma)$$
is a $k$-to-one correspondence, where $k = \Phi(x^m-1)/\Phi(x^d-1)$.

Proof. Lemma 3.1 implies that $\nu$ is well-defined. Next, fix some normal $\beta \in \mathbb{F}_{q^m}$. Proposition 2.4 implies that every normal element $\gamma$ of $\mathbb{F}_{q^m}$ can be written as $\gamma = L_h(\beta)$ for some $h \in \mathbb{F}_q[x]$ that is relatively prime to $x^m-1$ and is unique modulo $x^m-1$. In other words, there is a correspondence between the normal elements of $\mathbb{F}_{q^m}$ and the group $(\mathbb{F}_q[x]/\langle x^m-1 \rangle)^*$. In a similar fashion, the normal elements of $\mathbb{F}_{q^d}$ correspond to the group $(\mathbb{F}_q[x]/\langle x^d-1 \rangle)^*$. The desired result follows from Lemma 3.3 upon observing that the trace of $\gamma$ is $L_{\frac{x^m-1}{x^d-1} \cdot h}(\beta)$. □

In particular, we immediately get the following.

Corollary 3.5. Let $\mathbb{F}_{q^m}/\mathbb{F}_q$ be a finite field extension. For every $b \in \mathbb{F}_q^*$, there exist exactly $\Phi(x^m-1)/(q-1)$ normal elements $\beta \in \mathbb{F}_{q^m}$ such that $\mathrm{Tr}(\beta) = b$.

The proof of the theorem below is inspired by the ideas found in the work of Reis [13].

Theorem 3.6. Let $m$ be an integer that is not a prime power and $1 < k < \sigma_0(m)$, where $\sigma_0(m)$ denotes the number of positive divisors of $m$. Let $d = (d_1, \ldots, d_k) \in \lambda_k(m)$ and $a = (a_1, \ldots, a_k) \in F_d$ be a normal $d$-admissible $k$-tuple. Set $g := \mathrm{lcm}(x^{d_1}-1, \ldots, x^{d_k}-1)$. Then there exist exactly $\Phi(x^m-1)/\Phi(g)$ normal elements $\alpha \in \mathbb{F}_{q^m}$ with prescribed traces $\mathrm{Tr}_{m/d_i}(\alpha) = a_i$ for every $1 \le i \le k$.

Proof. Fix some $\gamma \in \mathbb{F}_{q^m}$ normal over $\mathbb{F}_q$. For each $i = 1, \ldots, k$, set $c_i = \mathrm{Tr}_{m/d_i}(\gamma)$. From Lemma 3.1, $c_i \in \mathbb{F}_{q^{d_i}}$ is normal; thus, there exists some $h_i$ (unique modulo $x^{d_i}-1$), relatively prime to $x^{d_i}-1$, such that $a_i = L_{h_i}(c_i)$. Furthermore, some $\alpha \in \mathbb{F}_{q^m}$ is normal if and only if $\alpha = L_F(\gamma)$, for some $F \in \mathbb{F}_q[x]$ that is relatively prime to $x^m-1$. It follows that $\mathrm{Tr}_{m/d_i}(\alpha) = a_i$ if and only if $F \equiv h_i \pmod{x^{d_i}-1}$ for every $i = 1, \ldots, k$. Following the arguments from the proof of [13, Theorem 4.1], this congruence system has a unique solution modulo $g$, which we denote by $f$. Moreover, given that $\gcd(h_i, x^{d_i}-1) = 1$ for all $i = 1, \ldots, k$, we readily obtain that $f + \langle g \rangle \in (\mathbb{F}_q[x]/\langle g \rangle)^*$. The desired result follows from the fact that Lemma 3.3 entails that we have exactly $\Phi(x^m-1)/\Phi(g)$ choices for $F \in \mathbb{F}_q[x]$, distinct modulo $x^m-1$ and relatively prime to $x^m-1$, such that $F \equiv f \pmod{g}$. □
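Theorem 3.4 and Corollary 3.5 can be verified by brute force in a small field. The following self-contained sketch (an illustration under our own choices, not the authors' code) represents $\mathbb{F}_{16} = \mathbb{F}_2[x]/(x^4+x+1)$ by 4-bit integers, tests normality via the linear independence of the conjugates, and confirms that exactly $\Phi(x^4-1) = 8$ elements are normal, all of trace $1 = b \in \mathbb{F}_2^*$.

```python
# Elements of F_16 = F_2[x]/(x^4 + x + 1) encoded as 4-bit integers.
MOD = 0b10011  # reduction polynomial x^4 + x + 1

def gf_mul(a, b):
    """Carry-less multiplication in F_16, reduced mod x^4 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= MOD
    return r

def conjugates(a):
    """[a, a^2, a^4, a^8]: the conjugates of a over F_2."""
    out = [a]
    for _ in range(3):
        out.append(gf_mul(out[-1], out[-1]))
    return out

def rank_gf2(rows):
    """Rank over F_2 of a list of 4-bit row vectors (Gaussian elimination)."""
    rows, rank = list(rows), 0
    for bit in (8, 4, 2, 1):
        idx = next((i for i, r in enumerate(rows) if r & bit), None)
        if idx is None:
            continue
        pivot = rows.pop(idx)
        rows = [r ^ pivot if r & bit else r for r in rows]
        rank += 1
    return rank

normal = [a for a in range(16) if rank_gf2(conjugates(a)) == 4]
traces = {a: c[0] ^ c[1] ^ c[2] ^ c[3] for a in normal for c in [conjugates(a)]}
# Phi(x^4 - 1) = Phi((x+1)^4) = 2^3 * (2 - 1) = 8 normal elements, all of trace 1.
print(len(normal), set(traces.values()))  # expected: 8 {1}
```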
4. Intermediate traces of primitive normal elements

Throughout this section, we adopt the same assumptions and notation as in Section 3, with the additional assumption that $a$ is normal. Let $N_{m,d,a}$ be the number of primitive normal elements $\alpha \in \mathbb{F}_{q^m}$ with $\mathrm{Tr}_{m/d_i}(\alpha) = a_i$ for $i = 1, \ldots, k$. In particular,
$$N_{m,d,a} = \sum_{w \in \mathbb{F}_{q^m}} \rho_m(w) \cdot \kappa_m(w) \prod_{i=1}^{k} \tau_{m,d_i,a_i}(w).$$
Since the $k$-tuple $(a_1, \ldots, a_k)$ is $d$-admissible, we have seen that there exists some $\beta \in \mathbb{F}_{q^m}$ such that $\mathrm{Tr}_{m/d_i}(\beta) = a_i$ for $1 \le i \le k$. Write $D = d_1 + \cdots + d_k$ and, for a generic $c = (c_1, \ldots, c_k) \in F_d$, write $s(c) = \sum_{i=1}^{k} c_i$. Now, using the characteristic functions from Section 2 and writing $\psi = \chi_u$ for the additive character of $\mathbb{F}_q$-order $f$, we get that
$$\frac{q^D \cdot N_{m,d,a}}{\theta(q)\Theta(x^m-1)} = \sum_{w \in \mathbb{F}_{q^m}} \sum_{\substack{t \mid q^m-1 \\ f \mid x^m-1}} \frac{\mu(t)\mu'(f)}{\phi(t)\Phi(f)} \sum_{\substack{\eta \in \Gamma(t) \\ \psi \in \Gamma(f)}} \eta(w)\psi(w) \prod_{i=1}^{k} \left( \sum_{c_i \in \mathbb{F}_{q^{d_i}}} \chi_{c_i}(w)\chi_{c_i}(\beta)^{-1} \right)$$
$$= \sum_{w \in \mathbb{F}_{q^m}} \sum_{c \in F_d} \sum_{\substack{t \mid q^m-1 \\ f \mid x^m-1}} \frac{\mu(t)\mu'(f)}{\phi(t)\Phi(f)} \sum_{\substack{\eta \in \Gamma(t) \\ \psi \in \Gamma(f)}} \eta(w)\chi_{u+s(c)}(w)\chi_{s(c)}(-\beta)$$
$$= \sum_{\substack{t \mid q^m-1 \\ f \mid x^m-1}} \frac{\mu(t)\mu'(f)}{\phi(t)\Phi(f)} \sum_{\substack{\eta \in \Gamma(t) \\ \psi \in \Gamma(f)}} \sum_{c \in F_d} \chi_{s(c)}(-\beta) G_m(\eta, \chi_{u+s(c)}),$$
where $G_m(\eta, \chi_{u+s(c)}) = \sum_{w \in \mathbb{F}_{q^m}} \eta(w) \cdot \chi_{u+s(c)}(w)$ denotes a Gauss sum.

In particular, we may rewrite
$$\frac{q^D \cdot N_{m,d,a}}{\theta(q)\Theta(x^m-1)} = S_1 + S_2,$$
where $S_1$ is the part of the above sum for $\eta \in \Gamma(1)$ and $S_2$ is the part for $\eta \notin \Gamma(1)$. Then $\theta(q)\Theta(x^m-1)S_1$ corresponds to the number of normal elements with their traces over $\mathbb{F}_{q^{d_i}}$ prescribed to $a_i$. Then, Theorem 3.6 yields
$$S_1 = \frac{\Phi(x^m-1)}{\Phi(g)\theta(q)\Theta(x^m-1)} = \frac{q^m}{\Phi(g)\theta(q)},$$
where $g = \mathrm{lcm}(x^{d_1}-1, \ldots, x^{d_k}-1)$. Clearly, $\theta(q) \le 1$ and $\Phi(g) < q^{\deg(g)} = q^{\lambda(d)}$, hence $S_1 > q^{m-\lambda(d)}$. Regarding $S_2$, we have that
$$S_2 = \sum_{c \in F_d} \sum_{\substack{t \mid q^m-1,\, t \ne 1 \\ f \mid x^m-1}} \frac{\mu(t)\mu'(f)}{\phi(t)\Phi(f)} \sum_{\substack{\eta \in \Gamma(t) \\ \chi_u \in \Gamma(f)}} \chi_{s(c)}(-\beta) G_m(\eta, \chi_{u+s(c)}).$$
Recall that, for $\eta \notin \Gamma(1)$, the orthogonality relations and the well-known identity on Gauss sums yield that (1) $G_m(\eta, \chi_{u+s(c)}) = 0$, if $u + s(c) = 0$, and (2) $|G_m(\eta, \chi_{u+s(c)})| = q^{m/2}$, otherwise. Hence, given that $|\chi_{s(c)}(-\beta)| = 1$, that $|\Gamma(t)| = \phi(t)$ for all $t \mid q^m-1$, and that $|\Gamma(f)| = \Phi(f)$ for all $f \mid x^m-1$, we obtain
$$|S_2| \le q^{m/2+D} \cdot W(q^m-1) \cdot W(x^m-1).$$
Putting all of the above together,
$$\frac{q^D \cdot N_{m,d,a}}{\theta(q)\Theta(x^m-1)} > q^{m-\lambda(d)} - q^{m/2+D} \cdot W(q^m-1) \cdot W(x^m-1).$$
Thus, $N_{m,d,a} > 0$, provided that $q^{m/2-\lambda(d)-D} \ge W(q^m-1) \cdot W(x^m-1)$. Summarizing the above discussion, we have the following theorem.

Theorem 4.1. Let $m$ be an integer and $1 < k < \sigma_0(m)$, where $\sigma_0(m)$ denotes the number of positive divisors of $m$. Let $d = (d_1, \ldots, d_k) \in \lambda_k(m)$ and $a = (a_1, \ldots, a_k) \in F_d$ be normal $d$-admissible. Then there exists a primitive normal element $\alpha \in \mathbb{F}_{q^m}$ with prescribed traces $\mathrm{Tr}_{m/d_i}(\alpha) = a_i$ for every $1 \le i \le k$, provided that
$$(4.1) \quad q^{m/2-\lambda(d)-D} \ge W(q^m-1) \cdot W(x^m-1).$$

Furthermore, we have the following result, which is an immediate consequence of [13, Theorem 4.1] and the main theorem in [4]. The idea of the proof is similar to that of [14, Theorem 2.5] and hence omitted.

Theorem 4.2. Keeping the notation as in Theorem 4.1, there exists a primitive normal element $\alpha \in \mathbb{F}_{q^m}$ with prescribed traces $\mathrm{Tr}_{m/d_i}(\alpha) = a_i$ for every $1 \le i \le k$ if $\mathrm{lcm}(d_1, \ldots, d_k) < m$ holds.
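The sufficient condition (4.1) is straightforward to evaluate numerically. Below is a minimal sketch (our illustration, assuming Python with sympy and a prime $q$ so that sympy can factor over $\mathbb{F}_q$): it computes $\lambda(d)$ by the inclusion–exclusion formula of Definition 2.5 and compares the two sides of Ineq. (4.1).

```python
from itertools import combinations
from math import gcd
from functools import reduce
from sympy import symbols, Poly, factorint

x = symbols('x')

def lam(d):
    """lambda(d) = deg lcm(x^d1 - 1, ..., x^dk - 1), by inclusion-exclusion."""
    total = sum(d)
    for i in range(2, len(d) + 1):
        for sub in combinations(d, i):
            total += (-1) ** (i + 1) * reduce(gcd, sub)
    return total

def check_41(q, m, d):
    """True if Ineq. (4.1) holds: q^(m/2 - lambda(d) - D) >= W(q^m-1) W(x^m-1)."""
    D = sum(d)
    W_int = 2 ** len(factorint(q**m - 1))                              # W(q^m - 1)
    W_poly = 2 ** len(Poly(x**m - 1, x, modulus=q).factor_list()[1])   # W(x^m - 1)
    return q ** (m / 2 - lam(d) - D) >= W_int * W_poly

# As noted in the proof of Theorem 5.1, (4.1) fails for m = 30 and any prime power:
print(check_41(q=2, m=30, d=(2, 3, 5)))  # False: m/2 - lambda(d) - D = 15 - 8 - 10 < 0
```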
5. Existence results

In this section we explore the existence of primitive normal elements with prescribed traces in intermediate extensions and present explicit existence results. Although it is desirable to study the problem without any restrictions, due to the complexity of the expression of $\lambda(d)$ we restrict our study to the condition $\gcd(d_i, d_j) = 1$ for $1 \le i < j \le k$. In particular, for $k \ge 2$, when $\gcd(d_i, d_j) = 1$ for $1 \le i < j \le k$ we have that $\lambda(d) = d_1 + \cdots + d_k - k + 1$. Next, let $p_i$ be the $i$-th prime. We have that $p_i \le d_i$ and thus
$$(5.1) \quad p_t \le d_t \le \left( \frac{m}{p_1 \cdots p_{t-1}} \right)^{1/(k+1-t)}, \quad 1 \le t \le k,$$
where the empty product equals 1. Furthermore, we may assume $\mathrm{lcm}(d_1, \ldots, d_k) = m$, since otherwise by Theorem 4.2 we have the desired element. Also, since $\mathrm{lcm}(d_1, \ldots, d_k) = m$ and $\gcd(d_i, d_j) = 1$ for $1 \le i < j \le k$, we get that $d_1 \cdots d_k = m$. Under the above restrictions we obtain the following concrete and asymptotic results.

Theorem 5.1. Let $m$ be an integer and $1 < k < \sigma_0(m)$, where $\sigma_0(m)$ denotes the number of positive divisors of $m$. Let $d = (d_1, \ldots, d_k) \in \lambda_k(m)$ and $a = (a_1, \ldots, a_k) \in F_d$ be $d$-admissible. Suppose $\gcd(d_i, d_j) = 1$ for $1 \le i < j \le k$. Then there exists a primitive normal element $\alpha \in \mathbb{F}_{q^m}$ with prescribed traces $\mathrm{Tr}_{m/d_i}(\alpha) = a_i$ for every $1 \le i \le k$, provided that:
(i) $k \ge 4$: (a) $k = 4$ and $q \ge 1334$, (b) $k = 5$ and $q \ge 9$, (c) $k = 6$ and $q \ge 7$, (d) $k = 7$ and $q \ge 5$.
(ii) $k = 3$, $m \ge 60$ and $q \ge 2.2660 \cdot 10^{24072855}$.
(iii) $k = 2$ and (a) $d_1 \ge 8$ and $q$ large enough, (b) $d_1 = 7$ and $q$ large enough with $(d_1, d_2) \ne (7, 8)$, (c) $d_1 = 6$, $d_2 \ge 13$ and $q$ large enough.

Proof. We split the proof into the cases $k \ge 4$, $k = 3$ and $k = 2$.

We begin with the case $k \ge 4$. In this case, we have $m \ge 2 \cdot 3 \cdot 5 \cdot 7^{k-3} \ge 210$. Since the $d_j$'s are at least 2, pairwise relatively prime and $d_i \nmid d_j$ for every $1 \le i < j \le k$, we have that $\frac{m}{d_i} = \prod_{j \ne i} d_j \ge \prod_{i=1}^{k-1} p_i$. Furthermore, since $\prod_{i=1}^{k-1} p_i \ge k!$, we have that $d_i \le \lfloor \frac{m}{k!} \rfloor$. Therefore,
$$q^{\frac{m}{2}-\lambda(d)-D} \ge q^{\frac{m}{2}-2(d_1+d_2+\cdots+d_k)+k-1} \ge q^{\frac{m}{2}-2k\lfloor \frac{m}{k!} \rfloor+k-1}.$$
Thus, combining the above with Ineq. (4.1), it suffices to verify that
$$q^{\frac{m}{2}-2k\lfloor \frac{m}{k!} \rfloor+k-1} \ge W(q^m-1) \cdot W(x^m-1).$$
The above, in conjunction with Lemmas 2.7 and 2.8, yields
$$q^{\frac{m}{2}-2k\lfloor \frac{m}{k!} \rfloor+k-1} \ge C_\nu q^{\frac{m}{\nu}} 2^m.$$
In the case $k = 4$, the above holds for $m \ge 210$, $q \ge 1334$, and $\nu = 11$. For $k = 5$, the above holds for $m \ge 2310$, $q \ge 9$, and $\nu = 11$. Proceeding in the same way, for $k = 6$ and $k = 7$ the above inequality holds for $q \ge 7$ and $q \ge 5$, respectively, for suitable values of $\nu$. Finally, we conclude this case by noting that for $k \ge 8$ the computations are challenging, since the constants $C_\nu$ for higher values of $\nu$ are difficult to calculate within a reasonable time limit. Furthermore, we note that for $k \ge 8$ there is very little improvement to the lower bounds on $q$, hence we stop at $k = 7$, for which we have achieved the bound $q \ge 5$.

We move on to the case $k = 3$. In this case, we have $m \ge 2 \cdot 3 \cdot 5 = 30$. Observe that, for $m = 30 = 2 \cdot 3 \cdot 5$ and $m = 42 = 2 \cdot 3 \cdot 7$, Ineq. (4.1) does not hold for any prime power $q$. So we focus on the case $m \ne 30, 42$, that is, we assume that $m \ge 60$. If $d_1 = 3$, $d_2 = 4$ and $d_3 = m/12$, then $d_1 + d_2 + d_3 = 3 + 4 + \frac{m}{12} \le \frac{m}{4}$ for $m \ge 60$. From Ineq. (5.1) we have that $d_1 \le \sqrt[3]{m}$ and $d_2 \le \sqrt{\frac{m}{2}}$. Now, for $m \ge 70$, we get that $d_1 + d_2 + d_3 \le \sqrt[3]{m} + \sqrt{\frac{m}{2}} + \frac{m}{10} \le \frac{m}{4}$. Thus, Ineq. (4.1) yields the sufficient condition $q^2 \ge W(q^m-1) \cdot W(x^m-1)$. Then, using Lemmas 2.7 and 2.8, the above inequality becomes $q^2 \ge C_\nu q^{\frac{m}{\nu}} 2^m$. For $m \ge 60$, the above inequality is valid for $\nu > m/2 = 30$ and it holds for all prime powers $q \ge 2.2660 \cdot 10^{24072855}$ for $\nu = 31$, given that $C_{31} = 2.4015 \cdot 10^{1553069}$.

Finally, we focus on the case $k = 2$. We divide our discussion into the following cases.
(1) For $8 \le d_1 < d_2$, we have that $d_1 \le m/8$ and $d_1 + d_2 \le m/4$. Then, from Theorem 4.1, it suffices to verify the inequality $q^{m/2-2(d_1+d_2)+1} \ge q \ge W(q^m-1) \cdot W(x^m-1)$. Then, Lemma 2.9 ensures that we can compute a constant $Q'$, depending on $d_1$, $d_2$ and suitable values of $\nu$, such that Ineq. (4.1) holds for all $q \ge Q'$.
(2) For $7 = d_1 < d_2$, we note that when $(d_1, d_2) = (7, 8)$, Ineq. (4.1) does not hold for any prime power $q$. For the case $(d_1, d_2) = (7, 9)$, by considering Ineq. (4.1), it suffices to verify that $q^{1/2} \ge W(q^m-1) \cdot W(x^m-1)$, and the result follows as above. Finally, if $d_2 \ge 10$, we get that $d_2 \le m/7$ and $d_1 \le m/10$. In this case, Theorem 4.1 implies that it suffices to verify the inequality $q^{m/70+1} \ge W(q^m-1) \cdot W(x^m-1)$. Again, as above, Lemma 2.9 ensures the existence of a computable constant $Q'$, depending on $d_1$, $d_2$ and suitable values of $\nu$, such that Ineq. (4.1) holds for all $q \ge Q'$.
(3) For $6 = d_1 < d_2$, we note that when $7 \le d_2 \le 11$, Ineq. (4.1) does not hold for any prime power $q$. Thus we work with $d_2 \ge 13$ and the result follows in a similar manner as above, from Lemma 2.9.
(4) For $d_1 \in \{3, 4, 5\}$, Ineq. (4.1) does not hold for any prime power $q$. □

5.1. Remarks. (1) The condition $\gcd(d_i, d_j) = 1$ is not restrictive for the case $k = 2$. In fact, if $\gcd(d_1, d_2) = d$ and $Q = q^d$, then $\mathbb{F}_{q^{d_i}} = \mathbb{F}_{Q^{t_i}}$, where $t_i = d_i/d$ satisfies $\gcd(t_1, t_2) = 1$.
(2) Although we can explicitly compute the values of the constants $Q'$ above in the case $k = 2$, these constants are so large as to prohibit the investigation, by computer and within a reasonable time limit, of the situation for prime powers smaller than $Q'$. So, we omit them.
(3) Recently, Bagger [1] provided a hybrid bound to attack problems on the existence of primitive elements in finite fields. Furthermore, Bagger and Punch [2] provided a sieve criterion for primitive elements depending only on the estimate for a related character sum. We believe that an application of these methods, adjusted accordingly for primitive normal elements, could be applied to this problem in future investigations.

6. Acknowledgments

We are grateful to the anonymous reviewer for their efforts in reviewing our manuscript and their suggestions, which resulted in this improved version of the paper.

7. Declarations

The authors declare that there is no conflict of interest.

References

[1] Gustav Kjærbye Bagger. Hybrid bounds for prime divisors. arXiv preprint arXiv:2412.00010, 2024.
[2] Gustav Kjærbye Bagger and James Punch. The modified prime sieve for primitive elements in finite fields. arXiv preprint arXiv:2507.21515, 2025.
[3] Stephen D Cohen. Primitive elements and polynomials with arbitrary trace. Discrete Mathematics, 83(1):1-7, 1990.
[4] Stephen D Cohen and Dirk Hachenberger. Primitive normal bases with prescribed trace. Applicable Algebra in Engineering, Communication and Computing, 9(5):383-403, 1999.
[5] Stephen D Cohen and Sophie Huczynska. The strong primitive normal basis theorem. Acta Arithmetica, 143(4):299-332, 2006.
[6] Stephen D Cohen, Tomás Oliveira e Silva, and Tim Trudgian. On consecutive primitive elements in a finite field. Bulletin of the London Mathematical Society, 47(3):418-426, 2015.
[7] Solomon W Golomb. Algebraic constructions for Costas arrays. Journal of Combinatorial Theory, Series A, 37(1):13-21, 1984.
[8] Hendrik W Lenstra, Jr. and René J Schoof. Primitive normal bases for finite fields. Mathematics of Computation, 48(177):217-231, 1987.
[9] Rudolf Lidl and Harald Niederreiter. Finite Fields, volume 20 of Encyclopedia of Mathematics and its Applications. Cambridge University Press, Cambridge, second edition, 1997.
[10] Ilene H Morgan and Gary L Mullen. Primitive normal polynomials over finite fields. Mathematics of Computation, 63(208):759-765, 1994.
[11] Gary L Mullen and Daniel Panario. Handbook of Finite Fields, volume 17. CRC Press, Boca Raton, 2013.
[12] Christof Paar and Jan Pelzl. Public-key cryptosystems based on the discrete logarithm problem. In Understanding Cryptography: A Textbook for Students and Practitioners, pages 205-238. Springer, Berlin, Heidelberg, 2010.
[13] Lucas Reis. Counting solutions of special linear equations over finite fields. Finite Fields and Their Applications, 68:101759, 2020.
[14] Lucas Reis and Savio Ribas. Generators of finite fields with prescribed traces. Journal of the Australian Mathematical Society, 112(3):355-366, 2022.

Department of Mathematical Sciences, Tezpur University, Tezpur, Assam, 784028, India
Email address: arpan10@tezu.ernet.in

Department of Mathematics, University of Thessaly, 3rd km Old National Road Lamia-Athens, 35100, Lamia, Greece
Email address: kapetanakis@uth.gr

Department of Mathematical Sciences, Tezpur University, Tezpur, Assam, 784028, India
Email address: dbasnet@tezu.ernet.in
2510.14838
Dynamic-Key-Aware Co-Simulation Framework for Next Generation of SCADA Systems Encrypted by Quantum-Key-Distribution Techniques

Ziqing Zhu, Member, IEEE

Abstract—To address growing cybersecurity challenges in modern power dispatch systems, this paper proposes a multi-layer modeling and optimization framework for SCADA systems enhanced with quantum key distribution (QKD). While most existing applications of QKD in the power sector focus on building secure point-to-point communication tunnels, they rarely consider the system-level coupling between key dynamics and control scheduling. In contrast, our approach integrates quantum key generation, consumption, inventory prediction, and control latency into a unified model, enabling key-aware reconfiguration of SCADA control chains based on task security demands and real-time resource constraints. To resolve conflicts in key resource allocation between transmission system operators (TSOs) and distribution system operators (DSOs), we formulate a bi-level Stackelberg game and transform it into a mathematical program with complementarity constraints (MPCC). We further develop an efficient Level Decomposition-Complementarity Pruning (LD-CP) algorithm to solve the problem. To support reproducible evaluation, we build an end-to-end co-simulation platform that integrates physical-layer disruptions via OpenQKD-Sim, Q3P/IEC-104 protocol stack binding, and real-time control-chain monitoring through Grafana. Experimental results on the IEEE 39- and 118-bus systems show that our method increases task success rate by 25%, reduces peak frequency deviation by 70%, and improves key utilization to 83%. This work lays the foundation for future quantum-secure control systems in power grid operations.

Index Terms—Quantum Key Distribution, SCADA Systems, Control Chain Reconfiguration, Cyber-Physical Co-simulation, Power Dispatch

I. INTRODUCTION

A. Vision and Significance of QKD-Enabled SCADA

In modern power systems, the evolution of SCADA is driven not only by the need for ultra-low-latency control but also by the rising demand for long-term cybersecurity. Quantum Key Distribution (QKD) has emerged as the only known technology that guarantees information-theoretic security, making it an ideal foundation for next-generation control infrastructures [1]. By enabling unbreakable encryption for critical control signals, such as frequency regulation or emergency load shedding, QKD eliminates the risk of key compromise, even in the face of quantum adversaries [2]. Moreover, its property of forward secrecy simplifies key lifecycle management by removing reliance on certificate hierarchies or algorithmic assumptions [3]. Beyond encryption, the physical randomness embedded in QKD channels can also support anomaly detection mechanisms at the communication layer [4]. Together, these features make QKD not just a technological upgrade, but a strategic enabler of resilient and future-proof SCADA systems.

B. Related Work and Research Gap

1) Prototype QKD Deployments in Power Grids: Several pilot projects have applied quantum key distribution (QKD) to secure power system communications. For example, ID Quantique and KEPCO deployed one of the first QKD-protected substation communication links in Korea, using QKD over a 40 km optical ground wire between two substations [5]. This 2021 prototype demonstrated basic point-to-point encryption for substation SCADA traffic.
Similarly, State Grid Corporation of China (SGCC) has conducted trial installations of QKD for substation and distribution network communications. In one demonstration, a multi-relay QKD network spanned three substations and secured remote telemetry/command exchanges in a provincial dispatch grid [6]. These projects proved that QKD can be integrated into utility fiber infrastructure, but they focused on connectivity and basic functionality. Notably, none provided a comprehensive system-level performance analysis: metrics like end-to-end latency impact, key consumption rates under power control traffic patterns, or reliability of protection signaling with QKD were not rigorously evaluated.

2) Resource-Aware Key Management in QKD Networks: In the telecommunications domain, researchers have explored adaptive key management to mitigate QKD's limited key generation rates. Strategies such as maintaining quantum key pools (buffers of QKD-generated keys) and hybrid encryption switching have been proposed. In these schemes, high-security one-time-pad (OTP) encryption is used when key supply is abundant, and the system falls back to conventional symmetric ciphers (e.g., AES) when keys are scarce [7]. This hybrid approach conserves one-time key material by using computationally secure AES encryption as needed, at the cost of reducing the theoretical security to conventional levels [8]. While such resource-aware schemes were designed for general QKD networks, they ignore the hard real-time constraints of power-system control loops. Power grid control messages (protective tripping commands, load control signals, etc.) often have strict latency deadlines (on the order of milliseconds to tens of milliseconds). A key-management strategy that withholds data waiting for fresh keys or that switches encryption modes on-the-fly could violate these timing requirements. For instance, using a slower key update or allowing key exhaustion to introduce a 100+ ms delay can cause grid synchronization errors [9]. Prior works on QKD key management did not account for these real-time performance needs unique to energy systems.

3) Hierarchical TSO-DSO Optimization Approaches: There is rich literature on coordinating operations between Transmission System Operators (TSOs) and Distribution System Operators (DSOs) in a smart grid. Many works formulate hierarchical optimization or game-theoretic models to align transmission-level dispatch with distribution-level resources [10]-[12]. Yet, none of these studies consider the integration of QKD-based security constraints. The decision frameworks assume control signals can be issued securely without limitation. In reality, QKD keys are a stochastic and non-duplicable resource: key generation rates may fluctuate with quantum channel conditions, and each key bit can only be used once. No existing TSO-DSO optimization incorporates a coupling between control actions and available secure key supply. For example, scheduling a fast DER control response might be infeasible if encryption keys are insufficient at that moment, but current models do not capture such coupling. Thus, the impact of limited key availability on hierarchical control decisions remains an open question.

4) Simulation Toolchains for Cyber-Physical-Quantum Systems: To date, studies of power grid QKD integration have used separate simulation tools for different domains.
Communication network simulators (such as OPNET, ns-3, or OMNeT++) and power system simulators (such as OPAL-RT real-time digital simulators or PSCAD/EMTDC) are well established. Co-simulation platforms exist to combine power and communication models, enabling analysis of cyber-physical interactions in smart grids [13]. For example, a real-time testbed may use OPAL-RT to simulate the electrical grid and ns-3 or OMNeT++ to simulate the ICT network, synchronized to exchange data in each time step [14]. On the quantum side, specialized tools like OpenQKD-Sim (a module extending ns-3 for QKD networks) have been developed to simulate QKD link physics and key distribution processes [15]. However, an integrated test environment that combines all three layers (power system dynamics, communication networking, and quantum key generation) is still missing. Most studies either evaluate communication/security aspects in isolation (using a network or QKD simulator alone) or examine grid control impacts assuming an abstracted communications layer. There is a lack of end-to-end co-simulation platforms or testbeds that can validate how QKD-secured control schemes perform under realistic grid conditions, including the delays of networking and the probabilistic behavior of quantum key generation.

C. Research Gap and Main Contributions

In summary, few existing frameworks provide a holistic solution that (i) couples the dynamic availability of QKD keys with grid control actions across multiple timescales, (ii) enables fair and efficient key allocation between transmission and distribution operators in a coordinated manner, and (iii) validates these concepts on an end-to-end cyber-physical-quantum co-simulation or prototype platform. Correspondingly, this work addresses the integration of QKD into real-time power system operations by establishing a comprehensive framework that spans from physical-layer key generation to high-level control optimization. The main contributions are summarized as follows:

(1) We develop the first unified multi-layer model that explicitly links the stochastic generation of QKD keys with their consumption across control messages, dynamically tracks key inventory, and models the resulting impact on control-chain latency. This framework enables a realistic assessment of how QKD-induced constraints propagate through secure SCADA operations, bridging the gap between quantum communication and cyber-physical control.

(2) We propose a novel chance-constrained optimization formulation for key-aware control-chain reconfiguration. By encoding both communication reliability and key availability into a mixed-integer probabilistic scheduling model, we ensure safety-critical control loops are resilient under key scarcity. We further extend this model to a hierarchical transmission-distribution coordination setting using a bilevel Stackelberg game formulation. To efficiently solve this nonconvex Mathematical Program with Equilibrium Constraints (MPEC), we design a Level Decomposition with Complementarity Pruning (LD-CP) algorithm that avoids full enumeration of complementarity pairs and converges to M-stationary solutions with strong practical scalability.

(3) We implement a modular, end-to-end testbed for QKD-enhanced SCADA co-simulation.
The platform integrates (i) OpenQKD-Sim for modeling link-level photon loss and channel outages, (ii) a hybrid protocol stack combining Q3P and IEC 60870-5-104 for secure control signaling, and (iii) Grafana-based real-time visualization and configuration tools. The platform supports extensions for hardware-in-the-loop (HIL) experiments via RTDS and is designed for portability and reproducibility.

D. Paper Structure

The remainder of this paper is organized as follows. Section 2 introduces the multi-layer architecture of the QKD-enhanced SCADA system and formulates the core constraints on key generation, inventory dynamics, and control latency. Section 3 presents the key-aware control-chain reconfiguration problem and its Stackelberg extension for TSO-DSO coordination, along with the proposed LD-CP algorithm. Section 4 describes the implementation of our modular co-simulation testbed, including quantum link modeling, protocol binding, and visualization components. Section 5 details the experimental setup and performance results on IEEE benchmark systems. Finally, Section 6 concludes the paper and outlines future directions.

II. DYNAMIC KEY RESOURCE MODELING

A. State Variables and Time Scale

In building a QKD-SCADA co-simulation platform for power dispatch, a key challenge is modeling the dynamic interplay of key generation, usage, and inventory. Unlike conventional systems limited by bandwidth, QKD generates non-replicable, non-bufferable keys whose availability is tightly linked to physical conditions. To capture this, we define the following continuous-time state variables for quantum key behavior. Let $t$ denote the continuous time in seconds. The variable $K(t) \in \mathbb{R}_+$ represents the available quantum key inventory (in bits) at time $t$, corresponding to the usable key pool maintained at the dispatch center. The function $G(t)$ denotes the instantaneous key generation rate (bits per second), as determined by the underlying QKD link. For each class of control task indexed by $i \in \{1, \ldots, T\}$, we define $C_i(t)$ as the instantaneous key consumption rate (bits per second). The total number of distinct control task types, denoted by $T$, typically includes wide-area monitoring (e.g., PMU data streams) and active control commands (e.g., AGC setpoints, AVR signals, load shedding instructions), which operate across different timescales in power system operation. For numerical implementation, we adopt a fixed sampling interval $\Delta t$, and define discrete-time representations such as $K_k = K(k\Delta t)$, $G_k = G(k\Delta t)$, and $C_{i,k} = C_i(k\Delta t)$ for each time step $k \in \mathbb{N}$.

B. Key Generation Model

The key generation rate is a critical metric in QKD systems, directly affecting how fast the key pool is replenished. In power systems, QKD links are often deployed over OPGW or dedicated optical fibers, whose performance is sensitive to environmental factors (e.g., wind, icing, temperature) and disturbances from power equipment. For BB84-based QKD systems, the instantaneous key generation rate is given by:
$$G(t) = \underbrace{R_p \cdot \eta(t)}_{\text{photon detection rate}} \cdot \underbrace{q(t)}_{\text{sifted-bit retention ratio}} \cdot f_{\mathrm{sec}}(\mathrm{QBER}(t)) \quad (1)$$
where $R_p$ denotes the photon emission rate at the sender (in photons per second), determined by the QKD device configuration. The function $\eta(t) \in [0, 1]$ represents the instantaneous transmission efficiency of the link, which reflects the proportion of photons successfully transmitted from the source to the detector.
This efficiency is affected by factors such as optical fiber attenuation, splicing loss, wind-induced sagging, and temperature-induced refraction changes. The term $q(t)$ denotes the sifting ratio, i.e., the proportion of measurement outcomes retained after basis reconciliation. The quantity $\mathrm{QBER}(t)$ stands for the quantum bit error rate at time $t$, which increases under electromagnetic interference, misalignment, or vibration, commonly present in substation environments. The function $f_{\mathrm{sec}}(\cdot)$ denotes the privacy amplification factor, which quantifies the fraction of sifted bits that can be securely retained as final key material. It is typically defined as:
$$f_{\mathrm{sec}}(\mathrm{QBER}) = 1 - 2h(\mathrm{QBER}) \quad (2)$$
$$h(p) = -p \log_2 p - (1-p) \log_2(1-p) \quad (3)$$
where $h(p)$ is the binary entropy function. As the QBER increases, the effective secure key rate drops sharply, potentially leading to a halt in key output if the error exceeds tolerable thresholds. To simulate the non-stationary behavior of $G(t)$ under external disturbances such as storms or temperature surges, we introduce a first-order stochastic process to describe its temporal dynamics:
$$\dot{G}(t) = -\lambda_G \left( G(t) - \bar{G} \right) + \varepsilon_G(t) \quad (4)$$
where $\bar{G}$ is the nominal (steady-state) expected generation rate, $\lambda_G > 0$ is the regression coefficient characterizing the system's return speed to equilibrium, and $\varepsilon_G(t)$ is a zero-mean white noise process representing high-frequency fluctuations caused by environmental uncertainties (e.g., optical misalignment or ground potential variations).

C. Key Consumption Model $C_i(t)$

Integrating QKD into power dispatch communication requires modeling how control tasks consume keys at different rates and security levels. Control messages fall into two main types: monitoring messages (e.g., PMU uploads, telemetry) with high frequency but low key demand, and execution messages (e.g., AGC, AVR, load shedding) that are security-critical and require prioritized key usage. Let $C_i(t)$ denote the instantaneous key consumption rate (in bits per second) for the $i$-th type of control task at time $t$, where $i = 1, 2, \ldots, T$. The key consumption can be expressed as:
$$C_i(t) = \alpha_i \cdot L_i \cdot \delta_i(t), \quad (5)$$
where $L_i$ is the bit length of a single message for task type $i$, which can be determined based on relevant communication protocols such as IEC 60870-5-104 or IEC 61850. The parameter $\alpha_i$ represents the encryption strength coefficient. For standard symmetric encryption (e.g., AES-128), we define $\alpha_i = 1$, whereas for one-time pad (OTP) encryption, which requires a key bit for every message bit, we set $\alpha_i = L_i^{-1}$. The binary indicator $\delta_i(t)$ equals 1 if the $i$-th task is triggered at time $t$, and 0 otherwise. Considering that task activation is often stochastic, particularly for high-frequency controls such as AGC or AVR, we model the triggering process as a Poisson process $N_i(t)$ with mean activation rate $\lambda_i$. The key consumption during an infinitesimal time interval $dt$ can then be described as:
$$dN_i(t) \sim \mathrm{Poisson}(\lambda_i\, dt), \qquad C_i(t) \cdot dt = \alpha_i \cdot L_i \cdot dN_i(t). \quad (6)$$
In other words, within a small time interval $dt$, the probability that a task of type $i$ is triggered is approximately $\lambda_i dt$. If triggered, it consumes a fixed quantity of key material, equal to $\alpha_i L_i$.
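To make Eqs. (1)-(6) concrete, the following discretized sketch (our illustration only; all numerical parameters, such as the nominal rate $\bar{G}$ and the task mix, are hypothetical) simulates the mean-reverting generation rate of Eq. (4) by an Euler-Maruyama step together with Poisson-triggered key consumption per Eq. (6).

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.1              # sampling interval Delta t = 100 ms
steps = 600           # one minute of simulated operation
G_bar, lam_G, sigma_G = 5e4, 0.5, 5e3   # nominal rate (bit/s), reversion, noise

# Hypothetical task classes i = 1..T: (alpha_i * L_i key bits per trigger, lambda_i / s)
tasks = [(128.0, 2.0),    # e.g., PMU-style monitoring message
         (1024.0, 0.2),   # e.g., AGC setpoint under OTP
         (2048.0, 0.05)]  # e.g., load-shedding command under OTP

G = np.empty(steps)
C = np.zeros(steps)       # total consumption rate (bit/s)
G[0] = G_bar
for k in range(steps - 1):
    # Euler-Maruyama discretization of dG = -lam_G (G - G_bar) dt + eps_G
    G[k + 1] = max(0.0, G[k] - lam_G * (G[k] - G_bar) * dt
                   + sigma_G * np.sqrt(dt) * rng.standard_normal())
for bits_per_trigger, lam_i in tasks:
    # dN_i ~ Poisson(lambda_i dt); consumption C_i dt = alpha_i L_i dN_i
    C += bits_per_trigger * rng.poisson(lam_i * dt, size=steps) / dt

print(f"mean G = {G.mean():.0f} bit/s, mean total C = {C.mean():.0f} bit/s")
```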
D. Key Inventory Dynamics and Safety Threshold Design

Building on the models for key generation and consumption, we now formulate a dynamic model of the quantum key inventory that reflects real-time variations in the available cryptographic resources. This model serves not only to quantify the current state of the key pool but also supports control chain reconfiguration, dispatch strategy adjustments, and proactive system warnings under communication-constrained conditions. Drawing an analogy to a reservoir system, we conceptualize the QKD key pool as a storage reservoir, where the key generation process $G(t)$ represents the inflow, and the total consumption rate $\sum_i C_i(t)$ across all control tasks acts as the outflow. Based on this analogy, we define the continuous-time dynamic model for the key inventory $K(t)$ as:
$$\dot{K}(t) = G(t) - \sum_{i=1}^{T} C_i(t) \quad (7)$$
We propose a hybrid LSTM–Kalman filter model that combines LSTM’s nonlinear prediction strength with Kalman filtering’s real-time error cor- rection, enabling robust short-term forecasting with confidence intervals for QKD-enhanced SCADA systems. We begin by defining the system observation vector at time step k: yk = [Gk, C1,k, C2,k, . . . , CT,k]⊤ (11) where Gk is the observed key generation rate at time step k, and Ci,k is the key consumption rate associated with the i-th control task. These variables can be acquired in real time from the QKD link monitoring module and SCADA message tracking system. Using historical time-series data, we train a multivariate LSTM model to produce one-step-ahead predictions of the key system dynamics: h ˆGk|k−1, ˆC1,k|k−1, ˆC2,k|k−1, . . . , ˆCT,k|k−1 i⊤ = LSTM(yk−1, yk−2, . . . , yk−w) (12) where w is the sliding window size. The outputs ˆGk|k−1 and ˆCi,k|k−1 represent the predicted values of generation and task- specific consumption rates, respectively. To correct for systemic model errors and to quantify the un- certainty in state propagation, we apply an Extended Kalman Filter (EKF) to update the key system state vector and estimate the associated state covariance. The system state vector is defined as: xk = h ˆGk, ˆKk i⊤ , ˆxk|k−1 = f(ˆxk−1), (13) where ˆKk is the predicted key inventory at time k. The pre- dicted state is updated using the standard Kalman correction equation: ˆxk = ˆxk|k−1 + Kk yk −Hˆxk|k−1  , (14) in which Kk is the Kalman gain and H is the observation matrix. The process model f(·) is based on the discrete-time inventory update rule derived from earlier sections: ˆKk = ˆKk−1 + ˆGk − T X i=1 ˆCi,k ! · ∆t. (15) From the corrected state and its error covariance, we obtain a Gaussian approximation of the inventory distribution. For example, the 95% confidence interval for the key inventory at time step k is given by: Kk ∼N( ˆKk, PK,k), CI95% = ˆKk ± 1.96 p PK,k, (16) where PK,k is the variance of the inventory estimate. This interval provides a statistically grounded boundary on key depletion risk and can be used to trigger early warnings or remedial actions, such as link switching, task degradation, or backup path activation. The entire prediction process is computationally efficient. With a state dimension of nx = 2, both LSTM inference and Kalman update can be executed within 50 milliseconds, enabling online deployment in modern EMS platforms. 5 F. Dynamic Key Control Workflow and Performance Metrics Building upon the preceding models for key generation, consumption, prediction, and uncertainty quantification, this section presents a practical dynamic key control workflow for QKD-enhanced dispatch systems. It also introduces a multi- dimensional set of performance evaluation metrics, designed to quantify the impact of key resource fluctuations on control task execution, system stability, and dispatch responsiveness. 
F. Dynamic Key Control Workflow and Performance Metrics

Building upon the preceding models for key generation, consumption, prediction, and uncertainty quantification, this section presents a practical dynamic key control workflow for QKD-enhanced dispatch systems. It also introduces a multi-dimensional set of performance evaluation metrics designed to quantify the impact of key resource fluctuations on control task execution, system stability, and dispatch responsiveness. During each sampling interval k (e.g., ∆t = 100 ms), the control center monitors and updates the quantum key inventory in real time and triggers the corresponding responsive control strategies.

To systematically evaluate the influence of key availability on power system control behavior, we define the following set of performance metrics:

1) Task Success Rate (P_succ): the proportion of control tasks that were encrypted and executed without delay or downgrade during the dispatch cycle:

P_{\text{succ}} = \frac{N_{\text{success}}}{N_{\text{trigger}}}    (17)

where N_trigger is the total number of control tasks triggered, and N_success is the number successfully executed with the appropriate key security. This reflects the coverage capability of different key scheduling strategies.

2) Maximum Frequency Deviation (∆f_max): under frequency control scenarios (e.g., AGC), insufficient key resources may delay or block commands, causing transients in grid frequency. The worst-case deviation during the observation period is defined as:

\Delta f_{\max} = \max_{t \in [t_0, t_1]} |\Delta f(t)|    (18)

This metric reflects the control reliability degradation caused by communication or cryptographic limitations.

3) Key Utilization Rate (η): quantifies how efficiently the generated keys are used across control tasks:

\eta = \frac{\sum_k \sum_i C_{i,k}}{\sum_k G_k}    (19)

Higher utilization indicates better alignment between supply and demand; low utilization may reflect redundancy or inefficient dispatch strategies.

4) Time to Resilience Recovery (TRR): the duration required for the system to return to stable operation following a key-induced control failure or deviation:

T_{\text{RR}} = \int_{t_{\text{fault}}}^{t_{\text{recov}}} \mathbf{1}_{|\Delta f(t)| > \Delta f_{\text{safe}}} \, dt    (20)

where ∆f_safe is the maximum acceptable frequency deviation. This indicator evaluates the resilience of the system under key resource stress. All four metrics can be computed directly from logged simulation traces, as sketched below.
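The sketch below evaluates the metrics (17)–(20); the inputs are assumed to come from the co-simulation logs described in Section V.

```python
import numpy as np

# Computing the performance metrics (17)-(20) from logged traces. `df` is
# the sampled frequency-deviation signal and `dt` the sampling interval.

def task_success_rate(n_success, n_trigger):
    """Eq. (17): P_succ = N_success / N_trigger."""
    return n_success / n_trigger

def max_freq_deviation(df):
    """Eq. (18): worst-case |delta f| over the observation window."""
    return np.max(np.abs(df))

def key_utilization(C, G):
    """Eq. (19). C: array [k, i] of consumption samples; G: array [k]."""
    return C.sum() / G.sum()

def time_to_resilience_recovery(df, dt, df_safe):
    """Eq. (20): total time with |delta f(t)| above the safe band."""
    return np.sum(np.abs(df) > df_safe) * dt

# Example on a synthetic 10 s trace sampled at 100 ms.
df = 0.3 * np.exp(-np.linspace(0, 10, 100))      # decaying disturbance
print(max_freq_deviation(df), time_to_resilience_recovery(df, 0.1, 0.05))
```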
III. CONTROL CHAIN RECONFIGURATION

A. Reconfiguration Modeling

To address the risk of critical control chain failure caused by dynamic fluctuations in quantum key availability, this section proposes a multi-layer network modeling framework that captures the coupling between the communication, control, and physical layers. A quantitative objective function is then formulated to balance key consumption, control effectiveness, and system resilience, forming the foundation for the subsequent optimization. The QKD-enabled SCADA system is abstracted as a directed multilayer graph:

\mathcal{G} = (V_{\text{com}}, V_{\text{ctr}}, V_{\text{phy}}; E_{\text{com}}, E_{\text{ctr}}, E_{\text{phy}})    (21)

Here, V_com represents communication nodes (e.g., QKD terminals, switches), V_ctr denotes control task nodes (e.g., AGC, AVR, protection signals), and V_phy includes physical equipment such as buses, generators, and loads. The edge sets E_(·) capture both intra-layer and inter-layer interactions. Each control chain ℓ is defined as a directed path P_ℓ ⊆ \mathcal{G} that originates from the control center, passes through communication nodes, and actuates physical components. It is characterized by a feature vector:

\phi_\ell = (d_\ell, \tau_\ell, \beta_\ell, \gamma_\ell)    (22)

where d_ℓ denotes the key resource required per unit time, τ_ℓ is the end-to-end latency tolerance, β_ℓ reflects the control priority or impact (e.g., frequency response weight), and γ_ℓ captures the reconfiguration cost associated with activating or switching the chain.

To adapt to the time-varying key inventory and ensure continuous execution of prioritized control tasks, the system must dynamically decide the state of each control chain. Let x_{ℓ,k} ∈ {0, 1, 2} denote the state of control chain ℓ at discrete time step k, where:

x_{\ell,k} = \begin{cases} 0, & \text{chain deactivated} \\ 1, & \text{chain active with downgraded encryption} \\ 2, & \text{chain active with full encryption (e.g., OTP)} \end{cases}

Then, the total key consumption at time k is given by:

C_k = \sum_{\ell \in \mathcal{L}} d_\ell \left( \alpha_{\text{AES}} \, \Delta t \cdot \mathbf{1}_{x_{\ell,k}=1} + \alpha_{\text{OTP}} \, \Delta t \cdot \mathbf{1}_{x_{\ell,k}=2} \right)    (23)

where α_AES and α_OTP denote the key usage coefficients under AES and OTP encryption, respectively, ∆t is the control dispatch interval, and 1_(·) is the indicator function. To prevent abrupt key depletion and ensure the reliability of mission-critical chains, the decision framework includes a chance-constrained key inventory requirement:

\Pr\left( C_k \le \hat{K}_k - \eta_{\text{buf}} \right) \ge 1 - \varepsilon    (24)

Here, \hat{K}_k denotes the predicted key inventory at time k, η_buf is the safety buffer margin, and ε is the risk tolerance (e.g., 0.05, corresponding to 95% confidence). This constraint ensures that key allocation decisions remain feasible under prediction uncertainty.

Based on the above structure, we construct the overall control loss function L_k to evaluate performance at each dispatch step. The function integrates power system stability deviations, task execution penalties, and reconfiguration overhead:

L_k = \underbrace{\sum_{n \in N} w_f |\Delta f_{n,k}| + \sum_{m \in M} w_v |\Delta V_{m,k}|}_{\text{grid stability deviation}} + \underbrace{\sum_{\ell} \left( \xi_{\text{drop}} \mathbf{1}_{x_{\ell,k}=0} + \xi_{\text{deg}} \mathbf{1}_{x_{\ell,k}=1} \right)}_{\text{task degradation and failure cost}} + \underbrace{\sum_{\ell} \gamma_\ell \cdot \mathbf{1}_{x_{\ell,k} \ne x_{\ell,k-1}}}_{\text{chain reconfiguration cost}}    (25)

In this formulation, ∆f_{n,k} and ∆V_{m,k} denote the frequency and voltage deviations at bus n and node m at time k, while w_f and w_v are the corresponding weight factors. The terms ξ_drop and ξ_deg quantify the cost of fully dropped or downgraded control tasks, and γ_ℓ penalizes frequent switching of chain ℓ to discourage excessive reconfiguration. This loss function encapsulates the essential trade-off between resource-aware encryption scheduling and power system control effectiveness. Compared with static chain architectures or simple task-priority-based schemes, the proposed model enables real-time, task-level adaptation to key availability, maintaining high control success rates and continuous secure operation in dynamically constrained environments.
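As a concrete illustration of Eq. (25), the sketch below evaluates the three loss terms for a dictionary of chains. The weights w_f, w_v and the penalties ξ_drop, ξ_deg are illustrative assumptions that would be tuned per system.

```python
# Sketch of the dispatch-step loss (25). Chains carry the switching cost
# gamma from the feature vector (22); all numeric values are assumptions.

def control_loss(df_bus, dV_node, x_now, x_prev, gamma,
                 w_f=1.0, w_v=0.5, xi_drop=10.0, xi_deg=2.0):
    """x_now/x_prev map chain id -> state in {0,1,2}; gamma maps id -> switch cost."""
    stability = (w_f * sum(abs(d) for d in df_bus)
                 + w_v * sum(abs(d) for d in dV_node))
    degradation = sum(xi_drop * (x_now[l] == 0) + xi_deg * (x_now[l] == 1)
                      for l in x_now)
    reconfig = sum(gamma[l] for l in x_now if x_now[l] != x_prev[l])
    return stability + degradation + reconfig

# Example: chain "agc" stays on OTP, chain "avr" is downgraded this step.
x_prev = {"agc": 2, "avr": 2}
x_now = {"agc": 2, "avr": 1}
print(control_loss([0.02, -0.01], [0.5], x_now, x_prev,
                   gamma={"agc": 3.0, "avr": 1.0}))
```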
B. Temporal Optimization Formulation

To enable rolling adjustment of control chain status and dynamic scheduling of quantum key resources, we formulate a temporal optimization model that operates over a fixed control interval ∆t. The model minimizes the overall control performance loss over a predictive horizon while accounting for forecast uncertainty in the key inventory, the power system dynamics, and the cost of switching control chains. This yields a practically solvable mixed-integer chance-constrained optimization framework. Let the rolling prediction horizon be H steps. At each dispatch step k, the model optimizes the activation sequence {x_{ℓ,t}}_{t=k}^{k+H-1} of each control chain ℓ across the window. The objective function is defined as:

\min_{\{x_{\ell,t}\}} \sum_{t=k}^{k+H-1} \left( L_t + \lambda \|\Delta f_t\|_2^2 \right)    (26)

Here, L_t denotes the control chain loss function at time t, defined in Equation (25), ∆f_t represents the vector of frequency deviations at time t, and λ is a weight parameter balancing control quality and resource usage.

The optimization is subject to three categories of constraints.

(1) Key Resource Chance Constraint: for each t ∈ [k, k+H−1], the cumulative key consumption C_t arising from the selected control chains must satisfy a probabilistic bound on the available key inventory:

\Pr\left( C_t \le \hat{K}_t - \eta_{\text{buf}} \right) \ge 1 - \varepsilon    (27)

Here, \hat{K}_t is the forecasted key inventory at time t, η_buf denotes a safety buffer, and ε is the risk tolerance level (e.g., 0.05 for 95% confidence). This chance constraint ensures that control actions remain feasible under uncertainty; it can be transformed into a deterministic second-order cone (SOC) constraint using techniques such as the Chebyshev inequality or the Boole–Bonferroni approximation, as sketched after this subsection.

(2) Control Chain State Constraints: each control chain can only assume one of three discrete states at any time:

x_{\ell,t} \in \{0, 1, 2\}, \quad \forall \ell, \; \forall t \in [k, k+H-1]    (28)

Specifically, x_{ℓ,t} = 0 indicates chain deactivation, x_{ℓ,t} = 1 implies downgraded encryption (e.g., AES), and x_{ℓ,t} = 2 represents full-strength encryption (e.g., OTP).

(3) Power System Dynamics Constraint (State-Space Form): to model how control decisions affect system-level dynamics such as frequency and voltage trajectories, we adopt a linearized discrete-time state-space model:

x_{\text{sys}}(t+1) = A \, x_{\text{sys}}(t) + B \, u(t)    (29)

In this expression, x_sys(t) denotes the system state vector (e.g., frequency deviations, voltage angles), and u(t) is the control input vector. The matrices A and B define the system dynamics, obtained by linearizing the power system around its operating point. The control input u(t) is mapped from the chain activation status through a constraint of the form:

u(t) = U(x_{\ell,t})    (30)

This mapping ensures logical consistency between the activated chains and the actual control commands dispatched, such as AGC setpoints or load shedding signals. The resulting optimization problem is a Mixed-Integer Second-Order Cone Programming (MISOCP) model, owing to the integer variables x_{ℓ,t}, the nonlinear objective, and the SOC constraints arising from uncertainty. To ensure tractability and scalability, small- to medium-scale systems can be handled directly by commercial solvers such as CPLEX or Gurobi with branch-and-bound methods; for larger systems, solution speed can be improved via decomposition techniques such as column generation, Benders decomposition, or penalty-based relaxation heuristics.
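To illustrate the deterministic reformulation mentioned under constraint (1), the sketch below converts the chance constraint (27) into a linear key budget under the Gaussian inventory forecast of Section II-E; the quantile z_{1−ε} plays the role of the SOC scaling term. The numeric values in the example are assumptions.

```python
from math import sqrt
from statistics import NormalDist

# Deterministic surrogate for the chance constraint (27). With the forecast
# K_hat_t ~ N(mu_t, P_Kt) from Section II-E,
#   Pr( C_t <= K_hat_t - eta_buf ) >= 1 - eps
# reduces to the linear cut  C_t <= mu_t - eta_buf - z_{1-eps} * sqrt(P_Kt),
# which enters a MISOCP as an SOC constraint on the forecast error.

def key_budget(mu_t, P_Kt, eta_buf, eps=0.05):
    """Largest consumption C_t that keeps (27) satisfied."""
    z = NormalDist().inv_cdf(1.0 - eps)      # ~1.645 for eps = 0.05
    return mu_t - eta_buf - z * sqrt(P_Kt)

def feasible(C_t, mu_t, P_Kt, eta_buf, eps=0.05):
    return C_t <= key_budget(mu_t, P_Kt, eta_buf, eps)

# Example: forecast mean 80 kbit, variance 4e6 bit^2, 5 kbit safety buffer.
print(key_budget(mu_t=80_000.0, P_Kt=4.0e6, eta_buf=5_000.0))
```

Any candidate activation vector {x_{ℓ,t}} is then feasible at step t only if its consumption C_t from Eq. (23) stays below this budget.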
IV. STACKELBERG COOPERATIVE GAME AND MPEC-BASED SOLUTION

In QKD-encrypted power systems, quantum key resources are shared between the TSO and the DSO, creating inherent coordination conflicts. While the TSO focuses on frequency support and grid stability, the DSO prioritizes local voltage and load control. This interaction is modeled as a bi-level Stackelberg game, with the TSO as leader and the DSO as follower. Let x_T denote the TSO's decision variables over the controlled secure chains (e.g., activation states and encryption modes), and x_D the corresponding DSO decisions. The bi-level problem is then defined as:

\min_{x_T} J_T(x_T, x_D) = \sum_{t=k}^{k+H-1} \left( L_t^T + \lambda_T \|\Delta f_t^T\|_2^2 \right)
\text{s.t.} \quad x_T \in \Omega_T, \quad x_D \in \arg\min_{x} \left\{ \sum_{t=k}^{k+H-1} \left( L_t^D + \lambda_D \|\Delta f_t^D\|_2^2 \right) : \; x \in \Omega_D(x_T) \right\}    (31)

Here, L_t^T and L_t^D denote the control chain losses for the TSO and DSO, respectively, incorporating frequency deviation penalties, control errors, and encryption switching costs. The feasible sets Ω_T and Ω_D include the power system dynamics, key budget constraints, and discrete mode restrictions. The coupling is induced by a shared probabilistic quantum key budget constraint:

\Pr\left( C_t^T + C_t^D \le \hat{K}_t - \eta_{\text{buf}} \right) \ge 1 - \varepsilon, \quad \forall t    (32)

To reformulate the bi-level problem into a numerically tractable single-level model, we derive the Karush–Kuhn–Tucker (KKT) conditions of the DSO's lower-level optimization. Let the DSO's constraints be expressed as nonlinear inequalities g(x_T, x_D) ≤ 0, second-order cone (SOC) constraints h(x_T, x_D) ∈ \mathcal{K}, and integer variable relaxations x_{ℓ,t} ∈ {0, 1, 2}. Introducing Lagrange multipliers λ and μ for g and h, respectively, the KKT system becomes:

\nabla_{x_D} J_D + \lambda^\top \nabla_{x_D} g + \mu^\top \nabla_{x_D} h = 0    (33)
\lambda_i \cdot g_i = 0, \; \forall i; \qquad h \in \mathcal{K}, \; \mu \in \mathcal{K}^*, \; \langle h, \mu \rangle = 0    (34)
x_{\ell,t}(1 - x_{\ell,t})(2 - x_{\ell,t}) = 0, \quad \forall \ell, t    (35)

where (35) enforces the ternary state domain {0, 1, 2} as a polynomial complementarity condition. Embedding this complementarity system into the upper-level constraints yields a Mathematical Program with Equilibrium Constraints (MPEC), specifically a Mixed-Integer Program with Complementarity Constraints (MPCC).

To solve the resulting MPCC efficiently, we propose a Level Decomposition with Complementarity Pruning (LD-CP) algorithm that exploits the sparse coupling structure between the TSO (leader) and the DSO (follower), as shown in Algorithm 2. Specifically, the original bi-level formulation min_{x_T} J_T(x_T, x_D) with x_D ∈ arg min J_D(x_T, ·) is decoupled into alternating rounds of leader–follower interaction. At each iteration p, the leader's decision x_T^{(p)} is fixed, and the follower responds by solving its own optimization subproblem. From the resulting KKT system, we extract the active complementarity set C^{(p)} ⊆ {(i, j) | λ_i · g_i = 0, ⟨h_j, μ_j⟩ = 0}, which identifies the subset of dual constraints that are exactly satisfied under the current conditions. These active pairs are then encoded back into the upper-level problem via binary indicator variables z_i ∈ {0, 1} that govern whether g_i ≤ 0 or λ_i ≤ 0 holds at iteration p. The resulting upper-level program adds a proximal regularization term ρ‖x_T − x_T^{(p)}‖² to stabilize the updates and improve convergence. After solving for a new leader decision x_T^{(p+1)}, the active set is pruned by removing invalid complementarity pairs based on feasibility and duality conditions, yielding a refined C^{(p+1)}. The iteration continues until the update norm satisfies ‖x_T^{(p+1)} − x_T^{(p)}‖ < δ, or until a predefined maximum iteration count is reached.
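The control flow of LD-CP can be summarized in a toy, runnable skeleton. The follower response and the regularized leader update below are simple quadratic stand-ins, not the MISOCP subproblems of the paper, and the extracted active set is tracked but, in this scalar toy, not fed back into the leader step; only the alternation and the stopping rule mirror Algorithm 2.

```python
# Toy skeleton of the LD-CP alternation: fix leader, solve follower,
# extract/prune the active complementarity set, take a proximally
# regularized leader step, stop when ||dx_T|| < delta. All names and
# values are illustrative assumptions.

def follower_response(x_T):
    """Placeholder DSO subproblem: argmin_x (x - 0.5*x_T)^2 on [0, 1]."""
    x_D = min(max(0.5 * x_T, 0.0), 1.0)
    # KKT data: which bound constraints hold with equality (lambda_i * g_i = 0).
    active = {i for i, g in enumerate((x_D - 1.0, -x_D)) if abs(g) < 1e-9}
    return x_D, active

def leader_update(x_D, x_T_prev, rho):
    """Placeholder regularized upper level: argmin_x (x - x_D)^2 + rho*(x - x_T_prev)^2."""
    return (x_D + rho * x_T_prev) / (1.0 + rho)

def ld_cp(x_T0=1.0, rho=1.0, delta=1e-8, max_iter=200):
    x_T, C_p = x_T0, set()
    for p in range(max_iter):
        x_D, active = follower_response(x_T)    # follower round at fixed x_T^(p)
        C_p = active                            # prune: keep pairs still active
        x_T_new = leader_update(x_D, x_T, rho)  # proximal leader round
        if abs(x_T_new - x_T) < delta:          # ||x_T^(p+1) - x_T^(p)|| < delta
            return x_T_new, x_D, p
        x_T = x_T_new
    return x_T, x_D, max_iter

print(ld_cp())  # converges toward the toy fixed point x_T = x_D = 0
```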
V. SIMULATION PLATFORM IMPLEMENTATION

To enable comprehensive modeling and verification of QKD-enhanced SCADA control chains in power systems, we develop an end-to-end simulation platform consisting of five tightly coupled modules: link disturbance injection, key generation and management, protocol stack binding, control chain configuration, and visualization monitoring. The platform adopts a modular design, supports deployment in standalone or clustered environments, and is compatible with hardware-in-the-loop (HIL) setups such as OPAL-RT or RTDS.

A. Optical Attenuation and Link Disruption Injection (OpenQKD-Sim)

We enhance the OpenQKD-Sim module to inject realistic non-weather-related disturbances at the physical layer. The instantaneous optical attenuation of the link is modeled as:

\eta(t) = 10^{-(\alpha_0 + \Delta\alpha(t)) L / 10}    (36)

where α_0 denotes the nominal attenuation coefficient (in dB/km), L is the fiber length, and ∆α(t) ∼ N(0, σ_α²) captures random fluctuations due to physical factors such as thermal drift and fiber vibration. Link breakage events are modeled as a Poisson process:

dN(t) \sim \mathrm{Poisson}(\lambda_{\text{break}} \, dt)    (37)

with λ_break denoting the expected failure frequency. Scenario parameters such as attenuation bounds, break intervals, and restoration latency are defined in YAML, and the module outputs the key generation rate G(t) in real time.

B. QKD Key Generation and Management API

A Python-based gRPC interface implements the QKD key pool management layer. The KeyServer module provides two key methods:

GetKey(n) → {k_1, ..., k_n}    (38)
KeyPoolStatus() → K(t)    (39)

where K(t) denotes the real-time size of the key inventory. The KeyClient, embedded in the SCADA control interface, uses asynchronous pulling with local buffering to decouple key-fetch latency from the critical control paths. Key dynamics follow the differential model of Section II-D, and state updates are published to the control optimizer via Redis Pub/Sub.

C. Protocol Stack Binding (Q3P + UDP + IEC 60870-5-104)

We integrate the Q3P protocol with UDP/IP and IEC 104 to construct a secure, backward-compatible stack. The transmission layer is structured as:

UDP/IP → Q3P (Key Index + MAC) → IEC 104 ASDU

The Q3P module embeds a 16-byte authentication tag and a one-time-pad (OTP) index into the IEC 104 payload without changing its frame structure. Each message m requires a key consumption of:

d(m) = \alpha(m) \cdot |m|, \qquad \alpha(m) = \begin{cases} 1, & \text{AES-128} \\ |m|^{-1}, & \text{OTP} \end{cases}    (40)

The transmission latency τ_link(t) is dynamically determined by OpenQKD-Sim and injected via controlled delay calls (e.g., sleep operations) to simulate effects such as fiber bending or device buffering.

D. Control Chain Configuration and Visualization Monitoring

Control chains are defined using a Python domain-specific language (DSL), e.g., Chain(id, task_type, priority, crypto_mode, path=[node1, node2, ...]). These configurations are uploaded via a REST API and parsed by the scheduler. The scheduling engine monitors the real-time key inventory K(t), the fiber availability η(t), and the control deviations ∆f(t) to compute the optimal activation vector x_t, as defined by the optimization model of Section III. A Grafana dashboard, backed by InfluxDB, provides real-time visualization of \hat{K}_t, the task consumption C_t, the node deviations ∆f_{n,t}, and link-level events such as breakage or recovery. If a threshold such as K_t < K_safe or ∆f_{n,t} > ∆f_max is violated, alert notifications are triggered via Slack or email.
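Before moving to the case study, the physical-layer module of Section V-A can be sketched end to end: attenuation fluctuation per Eq. (36), Poisson breaks per Eq. (37), and the resulting key-rate trace via the BB84 rate model of Section II-B. The photon rate R_p, sifting ratio q, and privacy-amplification factor f_sec below are illustrative assumptions; the fiber parameters mirror the 39-bus setup of the case study.

```python
import numpy as np

# Sketch of the Section V-A disturbance injection driving the rate model
# G(t) = Rp * eta(t) * q * f_sec. Fiber parameters follow the 39-bus case
# (20 km, 0.2 dB/km nominal, sigma = 0.04 dB/km); Rp, q, f_sec are assumed.

rng = np.random.default_rng(0)

def link_efficiency(alpha0=0.2, sigma=0.04, L=20.0):
    """Eq. (36): eta(t) = 10^(-(alpha0 + d_alpha(t)) * L / 10), d_alpha ~ N(0, sigma^2)."""
    return 10.0 ** (-(alpha0 + rng.normal(0.0, sigma)) * L / 10.0)

def break_occurs(dt, lam_break=1.0 / 5000.0):
    """Eq. (37): dN(t) ~ Poisson(lambda_break * dt)."""
    return rng.poisson(lam_break * dt) > 0

def key_rate_trace(steps=2000, dt=0.1, Rp=1.0e9, q=0.5, f_sec=0.6):
    """Sampled G(t), zeroed during outages lasting 3-5 s (case-study setting)."""
    G, down = [], 0.0
    for _ in range(steps):
        down = max(0.0, down - dt)
        if break_occurs(dt):
            down = rng.uniform(3.0, 5.0)
        G.append(0.0 if down > 0.0 else Rp * link_efficiency() * q * f_sec)
    return np.array(G)

print(key_rate_trace()[:5])
```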
VI. CASE STUDY

A. Experiment Setup

This study presents a comprehensive and repeatable simulation framework to evaluate the proposed QKD-enhanced SCADA control chain reconfiguration strategy. The IEEE 39-bus and 118-bus systems are selected as benchmark testbeds. The 39-bus system serves as a medium-scale case for verifying system stability and efficiency, while the 118-bus system is partitioned into one transmission zone and three distribution areas to support multi-agent coordination scenarios. The SCADA architecture follows a four-layer topology (Control Center, Gateway, RTU, and IED), with the IEC 60870-5-104 communication protocol integrated with Q3P encryption. Optical link disturbances are injected via OpenQKD-Sim, assuming fiber lengths of 20 km (39-bus) and 50 km (118-bus), a baseline attenuation of 0.2 dB/km, and Gaussian variations of 0.04 dB/km. Link failures are modeled as Poisson-distributed events (mean rate: one per 5000 s), with durations sampled uniformly from 3–5 s.

Five experimental scenarios are constructed: S-1 (static chain + static keys), S-2 (static chain + dynamic keys), S-3 (reconfigurable chain + dynamic keys, single-agent), S-4 (cooperative transmission–distribution control without game-theoretic optimization), and S-5 (Stackelberg-based multi-agent reconfiguration with MPEC and the LD-CP algorithm). In S-5, a bi-level optimization is performed every 100 ms, where the TSO's decision is updated using the LD-CP method based on complementarity-set pruning and proximal regularization. Each experiment is repeated over 30 Monte Carlo runs to ensure statistical robustness.

Fig. 1. Control Recovery Time Distribution

B. IEEE 39-bus Case

In this subsection, we conduct a comprehensive quantitative evaluation of the proposed method (S-3) against two baseline scenarios: static chain with static keys (S-1) and static chain with dynamic keys (S-2).

TABLE I
PERFORMANCE COMPARISON ACROSS SCENARIOS

  Metric                                    S-1     S-2     S-3
  Task Success Rate P_succ                  0.72    0.88    0.97
  Max Frequency Deviation ∆f_max (Hz)       0.26    0.17    0.08
  Key Utilization η                         0.30    0.57    0.83

As the aggregated metrics show, S-1 exhibits the weakest overall performance: the task success rate remains at just 72%, the frequency deviation peaks at 0.26 Hz, and the key utilization rate stays below 30%. This reflects the system's high vulnerability to key exhaustion and communication disruptions under static configurations. When dynamic key generation is introduced in S-2, the task success rate increases to 88% and ∆f_max drops to 0.17 Hz, indicating that a continuous key supply significantly mitigates the risk of depletion. However, the key utilization rate improves only to 57%, suggesting inefficiencies due to persistent link rigidity. In contrast, S-3 combines dynamic key management with adaptive link reconfiguration and task reprioritization, yielding the highest task success rate (97%), the smallest frequency deviation (0.08 Hz), and the most efficient key utilization (83%). These improvements validate the synergistic benefits of proactive coordination in both the communication and control layers.

The 30-run TRR distributions further support these findings. S-1 shows prolonged recovery times mostly in the 12–16 s range, with occasional outliers extending to 18–19 s, highlighting its reliance on passive recovery mechanisms. S-2 exhibits a shorter and tighter distribution between 6–9 s, though occasional delays still occur due to the fixed communication topology. In sharp contrast, S-3 displays a highly compact TRR distribution centered around 2.5–3.5 s, with negligible variance and no outliers. This indicates that the integration of key-aware control rescheduling and real-time link reconfiguration enables rapid mitigation of control failures, minimizing frequency violations and downtime. Taken together, the results confirm that the proposed S-3 approach achieves superior resilience and resource efficiency under realistic operational constraints.

Fig. 2. Violin–Box Plot of Key-Fairness Distribution
Fig. 3. LD-CP Iteration Count versus Achieved Fairness

C. Fairness Metrics under Stackelberg Coordination

In this subsection, we analyze the fairness performance obtained in the Stackelberg scenarios by interpreting three complementary visualizations: a violin–box hybrid plot of the fairness index F_K, a color-coded scatter plot relating the LD-CP iteration count to F_K, and a heat-map of the standard deviation of F_K over the joint space of iteration budget and link-break frequency.
The violin–box plot reveals a pronounced improvement when the LD-CP-driven Stackelberg mechanism (S-5) replaces the static cooperative baseline (S-4). Whereas S-4 centers around a median fairness of approximately 0.72 with a visibly skewed tail dipping below 0.65, S-5 shifts the entire distribution upward to a median near 0.86 and compresses its interquartile range by roughly half. Because F_K is a Jain-style index, this uplift signifies that the TSO and DSO obtain a far more balanced share of the finite quantum-key pool while run-to-run variability is simultaneously suppressed. The plot therefore confirms that LD-CP's complementarity pruning not only raises expected equity but also mitigates extreme imbalance events.

Fig. 4. Heat-Map of Fairness Variability as a Function of Iteration Budget and Link-Break Frequency

The iteration–fairness scatter plot complements this insight by quantifying the efficiency–fairness trade-off. Fairness ascends steeply from about 0.70 at five iterations to a plateau around 0.83–0.84 once the iteration budget reaches 80–100, after which marginal gains vanish. The dashed regression line illustrates the overall positive slope, yet the curved cloud of points exposes diminishing returns: the interval between 25 and 60 iterations delivers the bulk of the improvement, whereas pushing beyond roughly 120 iterations contributes little besides computational overhead. Operationally, this suggests that capping the LD-CP loop at 50–80 iterations provides an attractive balance between convergence speed and final equity.

Finally, the heat-map projects the standard deviation of F_K onto the plane spanned by the link-break frequency λ_break and the iteration count N_iter. Three patterns emerge. First, variance rises almost linearly with the disturbance rate, confirming that heavier communication stress naturally degrades fairness. Second, increasing N_iter consistently suppresses variance, underscoring that deeper leader–follower negotiation stabilizes outcomes. Third, the gradient of this improvement flattens beyond about 80 iterations, echoing the scatter plot's plateau. These relationships delineate a practical "safe operating zone": for moderate break rates (λ_break ≤ 0.003 s⁻¹), roughly 60 LD-CP iterations keep fairness fluctuations below 0.04, whereas more severe disturbance regimes require either accelerated solvers that sustain N_iter ≥ 80 in real time or auxiliary fallback policies. Taken together, the three visualizations demonstrate that the Stackelberg–MPEC framework not only raises average fairness but also offers tunable levers (iteration budget and disturbance tolerance) for trading off equity, stability, and computational effort against specific grid-operation requirements.

VII. CONCLUSION

This paper presents a unified framework for modeling, optimizing, and validating QKD-enhanced SCADA systems in power grid applications. By integrating quantum key dynamics with multi-timescale control processes, we address the critical challenge of securing power system operations under constrained and uncertain cryptographic resources. Through a Stackelberg game formulation and the novel LD-CP algorithm, we enable fair and adaptive key allocation between transmission and distribution operators. The proposed co-simulation testbed bridges quantum optics, network protocols, and power system control, offering an end-to-end platform for performance evaluation.
Experimental results demonstrate significant improvements in control task reliability, frequency stability, and key utilization under dynamic link conditions. Future work will explore the integration of trusted relay networks, cross-layer intrusion detection, and hardware-in-the-loop validation on real-time digital simulators.

REFERENCES

[1] M. Alshowkan, P. G. Evans, M. Starke, D. Earl, and N. A. Peters, "Authentication of Smart-Grid Communications Using Quantum Key Distribution," IEEE Trans. Smart Grid, vol. 14, no. 5, pp. 3890–3901, Sept. 2023.
[2] P. G. Evans, M. Alshowkan, D. Earl, M. Starke, and N. A. Peters, "Trusted-Node Quantum Key Distribution at an Electrical Utility," IEEE Trans. Smart Grid, vol. 14, no. 2, pp. 1198–1208, Mar. 2023.
[3] W. A. Grice, M. O. Olama, and N. A. Peters, "Quantum Key Distribution Applicability to Smart-Grid Cyber-Security Systems," IEEE Trans. Power Syst., early access, 2025.
[4] X. Chen, Y. Zhang, and Z. Liu, "Entropy-Based Anomaly Detection Leveraging Quantum-Channel Randomness in Secure Smart-Grid Communications," IEEE Trans. Smart Grid, vol. 15, no. 1, pp. 612–623, Jan. 2024.
[5] C. Simondi et al., "First Power Communication Network Secured by Quantum Cryptography in Korea," ID Quantique Press Release, Jan. 2021.
[6] B. Zhao et al., "Performance Analysis of Quantum Key Distribution Technology for Power Business," Applied Sciences, vol. 10, no. 8, p. 2906, 2020.
[7] D. Elkouss et al., "The SECOQC Quantum Key Distribution Network in Vienna," New J. Phys., vol. 11, p. 075001, 2009.
[8] M. Leal et al., "Reliability Provision in Software-Defined Power Substation Networks," Computer Networks, vol. 181, p. 107560, 2020.
[9] J. Gutiérrez et al., "Next-Generation Power Substation Communication Networks: SDN Approaches," IEEE Power & Energy Magazine, vol. 21, no. 5, pp. 67–77, 2023.
[10] N. Rodríguez Pérez, J. Matanza Domingo, G. López López, J. P. Chaves Ávila, F. Bosco, V. Croce, K. Kukk, M. Uslar, C. Madina, and M. Santos-Múgica, "ICT Architectures for TSO–DSO Coordination and Data Exchange: A European Perspective," IEEE Trans. Smart Grid, vol. 14, no. 2, pp. 1300–1312, Mar. 2023.
[11] T. Zhang, J. Wang, H. Wang, J. Ruiyang, G. Li, and M. Zhou, "On the Coordination of Transmission–Distribution Grids: A Dynamic Feasible Region Method," IEEE Trans. Power Syst., vol. 38, no. 2, pp. 1855–1866, Mar. 2023.
[12] T. Jiang, C. Wu, R. Zhang, X. Li, H. Chen, and G. Li, "Flexibility Clearing in Joint Energy and Flexibility Markets Considering TSO–DSO Coordination," IEEE Trans. Smart Grid, vol. 14, no. 2, pp. 1376–1387, Feb. 2022.
[13] D. Bian et al., "Real-Time Co-Simulation Platform Using OPAL-RT and OPNET for Analyzing Smart Grid Performance," in Proc. IEEE PES General Meeting, Denver, CO, 2015.
[14] OpenQKD Project, "Quantum Key Distribution Network Simulator (QKDNetSim) Module for ns-3." Online: http://openqkd.eu/QKDNetSim
[15] M. Alshowkan et al., "Authentication of Smart Grid Communications Using Quantum Key Distribution," Scientific Reports, vol. 12, Art. no. 12731, 2022.
1 Dynamic-Key-Aware Co-Simulation Framework for Next Generation of SCADA Systems Encrypted by Quantum-Key-Distribution Techniques Ziqing Zhu, Member, IEEE Abstract-To address growing cybersecurity challenges in modern power dispatch systems, this paper proposes a multilayer modeling and optimization framework for SCADA systems enhanced with quantum key distribution (QKD). While most existing applications of QKD in the power sector focus on building secure point-to-point communication tunnels, they rarely consider the system-level coupling between key dynamics and control scheduling. In contrast, our approach integrates quantum key generation, consumption, inventory prediction, and control latency into a unified model, enabling key-aware reconfiguration of SCADA control chains based on task security demands and real-time resource constraints. To resolve conflicts in key resource allocation between transmission system operators (TSOs) and distribution system operators (DSOs), we formulate a bi-level Stackelberg game and transform it into a mathematical program with complementarity constraints (MPCC). We further develop an efficient Level Decomposition-Complementarity Pruning (LDCP) algorithm to solve the problem. To support reproducible evaluation, we build an end-to-end co-simulation platform that integrates physical-layer disruptions via OpenQKD-Sim, Q3P/IEC104 protocol stack binding, and real-time control-chain monitoring through Grafana. Experimental results on the IEEE 39- and 118-bus systems show that our method increases task success rate by 25%, reduces peak frequency deviation by 70%, and improves key utilization to 83%. This work lays the foundation for future quantum-secure control systems in power grid operations. Index Terms-Quantum Key Distribution, SCADA Systems, Control Chain Reconfiguration, Cyber-Physical Co-simulation, Power Dispatch I. INTRODUCTION A. Vision and Significance of QKD-Enabled SCADA I N modern power systems, the evolution of SCADA is driven not only by the need for ultra-low-latency control but also by the rising demand for long-term cybersecurity. Quantum Key Distribution (QKD) has emerged as the only known technology that guarantees information-theoretic security, making it an ideal foundation for next-generation control infrastructures [1]. By enabling unbreakable encryption for critical control signals, such as frequency regulation or emergency load shedding, QKD eliminates the risk of key compromise, even in the face of quantum adversaries [2]. Moreover, its property of forward secrecy simplifies key lifecycle management by removing reliance on certificate hierarchies or algorithmic assumptions [3]. Beyond encryption, the physical randomness embedded in QKD channels can also support anomaly detection mechanisms at the communication layer [4]. Together, these features make QKD not just a technological upgrade, but a strategic enabler of resilient and future-proof SCADA systems. B. Related Work and Research Gap 1) Prototype QKD Deployments in Power Grids: Several pilot projects have applied quantum key distribution (QKD) to secure power system communications. For example, ID Quantique and KEPCO deployed one of the first QKD-protected substation communication links in Korea, using QKD over a 40 km optical ground wire between two substations [5]. This 2021 prototype demonstrated basic point-to-point encryption for substation SCADA traffic. 
Similarly, State Grid Corporation of China (SGCC) has conducted trial installations of QKD for substation and distribution network communications. In one demonstration, a multi-relay QKD network spanned three substations and secured remote telemetry/command exchanges in a provincial dispatch grid [6]. These projects proved that QKD can be integrated into utility fiber infrastructure, but they focused on connectivity and basic functionality. Notably, none provided a comprehensive system-level performance analysis - metrics like end-to-end latency impact, key consumption rates under power control traffic patterns, or reliability of protection signaling with QKD were not rigorously evaluated. 2) Resource-Aware Key Management in QKD Networks: In the telecommunications domain, researchers have explored adaptive key management to mitigate QKD's limited key generation rates. Strategies such as maintaining quantum key pools (buffers of QKD-generated keys) and hybrid encryption switching have been proposed. In these schemes, high-security one-time-pad (OTP) encryption is used when key supply is abundant, and the system falls back to conventional symmetric ciphers (e.g. AES) when keys are scarce [7]. This hybrid approach conserves one-time key material by using computationally secure AES encryption as needed, at the cost of reducing the theoretical security to conventional levels [8]. While such resource-aware schemes were designed for general QKD networks, they ignore the hard real-time constraints of powersystem control loops. Power grid control messages (protective tripping commands, load control signals, etc.) often have strict latency deadlines (on the order of milliseconds to tens of milliseconds). A key-management strategy that withholds data waiting for fresh keys or that switches encryption modes onthe-fly could violate these timing requirements. For instance, using a slower key update or allowing key exhaustion to introduce a 100+ ms delay can cause grid synchronization errors [9]. Prior works on QKD key management did not account for these real-time performance needs unique to energy systems. 16 Oct 2025 2 3) Hierarchical TSO-DSO Optimization Approaches: There is rich literature on coordinating operations between Transmission System Operators (TSOs) and Distribution System Operators (DSOs) in a smart grid. Many works formulate hierarchical optimization or game-theoretic models to align transmission-level dispatch with distribution-level resources [10-12]. Yet, none of these studies consider the integration of QKD-based security constraints. The decision frameworks assume control signals can be issued securely without limitation. In reality, QKD keys are a stochastic and non-duplicable resource - key generation rates may fluctuate with quantum channel conditions, and each key bit can only be used once. No existing TSO-DSO optimization incorporates a coupling between control actions and available secure key supply. For example, scheduling a fast DER control response might be infeasible if encryption keys are insufficient at that moment, but current models do not capture such coupling. Thus, the impact of limited key availability on hierarchical control decisions remains an open question. 4) Simulation Toolchains for Cyber-Physical-Quantum Systems: To date, studies of power grid QKD integration have used separate simulation tools for different domains. 
Communication network simulators (such as OPNET, ns-3, or OMNeT++) and power system simulators (such as OPAL-RT real-time digital simulators or PSCAD/EMTDC) are well established. Co-simulation platforms exist to combine power and communication models, enabling analysis of cyber-physical interactions in smart grids [13]. For example, a real-time testbed may use OPAL-RT to simulate the electrical grid and ns-3 or OMNeT++ to simulate the ICT network, synchronized to exchange data in each time step [14]. On the quantum side, specialized tools like OpenQKD-Sim (a module extending ns3 for QKD networks) have been developed to simulate QKD link physics and key distribution processes [15]. However, an integrated test environment that combines all three layers - power system dynamics, communication networking, and quantum key generation - is still missing. Most studies either evaluate communication/security aspects in isolation (using a network or QKD simulator alone) or examine grid control impacts assuming an abstracted communications layer. There is a lack of end-to-end co-simulation platforms or testbeds that can validate how QKD-secured control schemes perform under realistic grid conditions, including the delays of networking and the probabilistic behavior of quantum key generation. C. Research Gap and Main Contributions In summary, few existing framework provides a holistic solution that (i) couples the dynamic availability of QKD keys with grid control actions across multiple timescales, (ii) enables fair and efficient key allocation between transmission and distribution operators in a coordinated manner, and (iii) validates these concepts on an end-to-end cyber-physical-quantum co-simulation or prototype platform. Correspondingly, this work addresses the integration of QKD into real-time power system operations by establishing a comprehensive framework that spans from physical-layer key generation to high-level control optimization. The main contributions are summarized as follows: (1) We develop the first unified multi-layer model that explicitly links the stochastic generation of QKD keys with their consumption across control messages, dynamically tracks key inventory, and models the resulting impact on control-chain latency. This framework enables a realistic assessment of how QKD-induced constraints propagate through secure SCADA operations, bridging the gap between quantum communication and cyber-physical control. (2) We propose a novel chance-constrained optimization formulation for key-aware control-chain reconfiguration. By encoding both communication reliability and key availability into a mixed-integer probabilistic scheduling model, we ensure safety-critical control loops are resilient under key scarcity. We further extend this model to a hierarchical transmission-distribution coordination setting using a bilevel Stackelberg game formulation. To efficiently solve this nonconvex Mathematical Program with Equilibrium Constraints (MPEC), we design a Level Decomposition with Complementarity Pruning (LD-CP) algorithm that avoids full enumeration of complementarity pairs and converges to M-stationary solutions with strong practical scalability. (3) We implement a modular, end-to-end testbed for QKDenhanced SCADA co-simulation. The platform integrates (i) OpenQKD-Sim for modeling link-level photon loss and channel outages, (ii) a hybrid protocol stack combining Q3P and IEC 60870-5-104 for secure control signaling, and (iii) Grafana-based real-time visualization and configuration tools. 
The platform supports extensions for hardware-in-the-loop (HIL) experiments via RTDS and is designed for portability and reproducibility. D. Paper Structure The remainder of this paper is organized as follows. Section 2 introduces the multi-layer architecture of the QKD-enhanced SCADA system and formulates the core constraints on key generation, inventory dynamics, and control latency. Section 3 presents the key-aware control-chain reconfiguration problem and its Stackelberg extension for TSO-DSO coordination, along with the proposed LD-CP algorithm. Section 4 describes the implementation of our modular co-simulation testbed, including quantum link modeling, protocol binding, and visualization components. Section 5 details the experimental setup and performance results on IEEE benchmark systems. Finally, Section 6 concludes the paper and outlines future directions. II. DYNAMIC KEY RESOURCE MODELING A. State Variables and Time Scale In building a QKD-SCADA co-simulation platform for power dispatch, a key challenge is modeling the dynamic interplay of key generation, usage, and inventory. Unlike conventional systems limited by bandwidth, QKD generates nonreplicable, non-bufferable keys whose availability is tightly linked to physical conditions. To capture this, we define the following continuous-time state variables for quantum key behavior. Let t denote the continuous time in seconds. The variable K(t) ∈R+ represents the available quantum key inventory (in bits) at time t, corresponding to the usable key 3 pool maintained at the dispatch center. The function G(t) denotes the instantaneous key generation rate (bits per second), as determined by the underlying QKD link. For each class of control task indexed by i ∈{1, . . . , T}, we define Ci(t) as the instantaneous key consumption rate (bits per second). The total number of distinct control task types, denoted by T, typically includes wide-area monitoring (e.g., PMU data streams) and active control commands (e.g., AGC setpoints, AVR signals, load shedding instructions), which operate across different timescales in power system operation. For numerical implementation, we adopt a fixed sampling interval ∆t, and define discrete-time representations such as Kk = K(k∆t), Gk = G(k∆t), and Ci,k = Ci(k∆t) for each time step k ∈N. B. Key Generation Model The key generation rate is a critical metric in QKD systems, directly affecting how fast the key pool is replenished. In power systems, QKD links are often deployed over OPGW or dedicated optical fibers, whose performance is sensitive to environmental factors (e.g., wind, icing, temperature) and disturbances from power equipment. For BB84-based QKD systems, the instantaneous key generation rate is given by: G(t) = Rp · η(t) | {z } photon detection rate · q(t) |{z} sifted-bit retention ratio ·fsec(QBER(t)) (1) where Rp denotes the photon emission rate at the sender (in photons per second), determined by the QKD device configuration. The function η(t) ∈[0, 1] represents the instantaneous transmission efficiency of the link, which reflects the proportion of photons successfully transmitted from the source to the detector. This efficiency is affected by factors such as optical fiber attenuation, splicing loss, wind-induced sagging, and temperature-induced refraction changes. The term q(t) denotes the sifting ratio, i.e., the proportion of measurement outcomes retained after basis reconciliation. 
The quantity QBER(t) stands for the quantum bit error rate at time t, which increases under electromagnetic interference, misalignment, or vibration, commonly present in substation environments. The function fsec(·) denotes the privacy amplification factor, which quantifies the fraction of sifted bits that can be securely retained as final key material. It is typically defined as: fsec(QBER) = 1 -2h(QBER) (2) h(p) = -p log2 p -(1 -p) log2(1 -p) (3) where h(p) is the binary entropy function. As the QBER increases, the effective secure key rate drops sharply, potentially leading to a halt in key output if the error exceeds tolerable thresholds. To simulate the non-stationary behavior of G(t) under external disturbances such as storms or temperature surges, we introduce a first-order stochastic process to describe its temporal dynamics: ̇G(t) = -λG G(t) - ̄G + εG(t) (4) where ̄G is the nominal (steady-state) expected generation rate, λG > 0 is the regression coefficient characterizing the system's return speed to equilibrium, and εG(t) is a zero-mean white noise process representing high-frequency fluctuations caused by environmental uncertainties (e.g., optical misalignment or ground potential variations). C. Key Consumption Model Ci(t) Integrating QKD into power dispatch communication requires modeling how control tasks consume keys at different rates and security levels. Control messages fall into two main types: monitoring messages (e.g., PMU uploads, telemetry) with high frequency but low key demand, and execution messages (e.g., AGC, AVR, load shedding) that are securitycritical and require prioritized key usage. Let Ci(t) denote the instantaneous key consumption rate (in bits per second) for the i-th type of control task at time t, where i = 1, 2, . . . , T. The key consumption can be expressed as: Ci(t) = αi · Li · δi(t), (5) where Li is the bit length of a single message for task type i, which can be determined based on relevant communication protocols such as IEC 60870-5-104 or IEC 61850. The parameter αi represents the encryption strength coefficient. For standard symmetric encryption (e.g., AES-128), we define αi = 1, whereas for one-time pad (OTP) encryption, which requires a key bit for every message bit, we set αi = L-1 i . The binary indicator δi(t) equals 1 if the i-th task is triggered at time t, and 0 otherwise. Considering that task activation is often stochastic-particularly for high-frequency controls such as AGC or AVR-we model the triggering process as a Poisson process Ni(t) with mean activation rate λi. The key consumption during an infinitesimal time interval dt can then be described as: dNi(t) ∼Poisson(λi dt), Ci(t) · dt = αi · Li · dNi(t). (6) In other words, within a small time interval dt, the probability that a task of type i is triggered is approximately λidt. If triggered, it consumes a fixed quantity of key material, equal to αiLi. D. Key Inventory Dynamics and Safety Threshold Design Building on the models for key generation and consumption, we now formulate a dynamic model of the quantum key inventory that reflects real-time variations in the available cryptographic resources. This model serves not only to quantify the current state of the key pool but also supports control chain reconfiguration, dispatch strategy adjustments, and proactive system warnings under communication-constrained conditions. 
Drawing an analogy to a reservoir system, we conceptualize the QKD key pool as a storage reservoir, where the key generation process G(t) represents the inflow, and the total consumption rate P i Ci(t) across all control tasks acts as the outflow. Based on this analogy, we define the continuous-time dynamic model for the key inventory K(t) as: ̇K(t) = G(t) - T X i=1 Ci(t) (7) 4 Here, K(t) represents the total amount of usable key material in bits at time t. The function G(t) denotes the instantaneous key generation rate from the QKD link, while PT i=1 Ci(t) captures the total instantaneous key consumption rate from all control tasks. Since the key pool cannot be stored indefinitely due to hardware constraints and cryptographic key freshness requirements, the above differential equation provides a realistic characterization of the key system's operational behavior. Under discrete-time implementation, which is typical in energy management systems (EMS) with sampling intervals such as ∆t = 100 ms, we use the following difference equation: Kk+1 = Kk + Gk - T X i=1 Ci,k ! · ∆t, (8) where Kk denotes the key inventory at the k-th sampling instant, and Gk, Ci,k represent the sampled values of generation and consumption rates, respectively. This recursive formula can be used for real-time forecasting of future inventory levels, serving as a key availability status indicator for the dispatch module. To ensure that critical control tasks in the power grid-such as primary frequency regulation, emergency load shedding, and control path reconfiguration-can still be executed even under key resource pressure, we introduce three inventoryrelated thresholds. These thresholds form the basis of a "warn-reconfigure-protect" tiered management mechanism for the key pool. The first is the safety threshold Ksafe, defined as the minimum amount of key required to guarantee uninterrupted execution of the highest-priority task under extreme conditions (e.g., dispatch link failure or cyberattack). Assuming the task with the highest priority, denoted by index i∗, must operate reliably over a critical time window τcrit, with triggering frequency λmax and message length Lmax, the safety threshold is given by: Ksafe = αmax · Lmax · λmax · τcrit, (9) where αmax denotes the maximum key strength parameter used by the task, such as the one-time pad (OTP) encryption level. The second is the reconfiguration threshold Kth, which triggers system-wide control chain reconfiguration actions such as task downgrading, dispatch prioritization, or redundant path activation. To ensure sufficient redundancy even after reconfiguration is triggered, the following condition must be satisfied: Kth > Ksafe. (10) The third threshold is the capacity limit Kcap, which reflects the physical buffer size of the QKD hardware and the key lifecycle management constraints. When the inventory exceeds this value, surplus keys must be discarded to ensure freshness and compliance with cryptographic expiration policies. E. Prediction and Uncertainty Estimation Due to uncertainty in quantum key generation and consumption, control centers must forecast key inventory K(t) to ensure secure task execution. We propose a hybrid LSTM-Kalman filter model that combines LSTM's nonlinear prediction strength with Kalman filtering's real-time error correction, enabling robust short-term forecasting with confidence intervals for QKD-enhanced SCADA systems. We begin by defining the system observation vector at time step k: yk = [Gk, C1,k, C2,k, . . . 
, CT,k]⊤ (11) where Gk is the observed key generation rate at time step k, and Ci,k is the key consumption rate associated with the i-th control task. These variables can be acquired in real time from the QKD link monitoring module and SCADA message tracking system. Using historical time-series data, we train a multivariate LSTM model to produce one-step-ahead predictions of the key system dynamics: h ˆGk|k-1, ˆC1,k|k-1, ˆC2,k|k-1, . . . , ˆCT,k|k-1 i⊤ = LSTM(yk-1, yk-2, . . . , yk-w) (12) where w is the sliding window size. The outputs ˆGk|k-1 and ˆCi,k|k-1 represent the predicted values of generation and taskspecific consumption rates, respectively. To correct for systemic model errors and to quantify the uncertainty in state propagation, we apply an Extended Kalman Filter (EKF) to update the key system state vector and estimate the associated state covariance. The system state vector is defined as: xk = h ˆGk, ˆKk i⊤ , ˆxk|k-1 = f(ˆxk-1), (13) where ˆKk is the predicted key inventory at time k. The predicted state is updated using the standard Kalman correction equation: ˆxk = ˆxk|k-1 + Kk yk -Hˆxk|k-1 , (14) in which Kk is the Kalman gain and H is the observation matrix. The process model f(·) is based on the discrete-time inventory update rule derived from earlier sections: ˆKk = ˆKk-1 + ˆGk - T X i=1 ˆCi,k ! · ∆t. (15) From the corrected state and its error covariance, we obtain a Gaussian approximation of the inventory distribution. For example, the 95% confidence interval for the key inventory at time step k is given by: Kk ∼N( ˆKk, PK,k), CI95% = ˆKk ± 1.96 p PK,k, (16) where PK,k is the variance of the inventory estimate. This interval provides a statistically grounded boundary on key depletion risk and can be used to trigger early warnings or remedial actions, such as link switching, task degradation, or backup path activation. The entire prediction process is computationally efficient. With a state dimension of nx = 2, both LSTM inference and Kalman update can be executed within 50 milliseconds, enabling online deployment in modern EMS platforms. 5 F. Dynamic Key Control Workflow and Performance Metrics Building upon the preceding models for key generation, consumption, prediction, and uncertainty quantification, this section presents a practical dynamic key control workflow for QKD-enhanced dispatch systems. It also introduces a multidimensional set of performance evaluation metrics, designed to quantify the impact of key resource fluctuations on control task execution, system stability, and dispatch responsiveness. During each sampling interval k (e.g., ∆t = 100 ms), the control center can follow the steps below to monitor and update the quantum key inventory in real time and trigger responsive control strategies: To systematically evaluate the influence of key availability on power system control behavior, we further define the following set of performance metrics: 1) Task Success Rate (Psucc): This metric measures the proportion of control tasks that were encrypted and executed without delay or downgrade during the dispatch cycle: Psucc = Nsuccess Ntrigger (17) where Ntrigger is the total number of control tasks triggered, and Nsuccess is the number of those successfully executed with appropriate key security. This reflects the coverage capability of different key scheduling strategies. 2) Maximum Frequency Deviation (∆fmax): Under frequency control scenarios (e.g., AGC), insufficient key resources may delay or block commands, causing transients in grid frequency. 
The worst-case deviation during the observation period is defined as: ∆fmax = max t∈[t0,t1] |∆f(t)| (18) This metric reflects the control reliability degradation caused by communication or cryptographic limitations. 3) Key Utilization Rate (η): This metric quantifies how efficiently the generated keys are utilized across control tasks. It is defined as: η = P k P i Ci,k P k Gk (19) Higher utilization indicates better alignment between supply and demand; low utilization may reflect redundancy or inefficient dispatch strategies. 4) Time to Resilience Recovery (TRR): This metric captures the duration required for the system to return to stable operation following a key-induced control failure or deviation. It is defined as: TRR = Z trecov tfault 1|∆f(t)|>∆fsafe dt (20) where ∆fsafe is the maximum acceptable frequency deviation. This indicator evaluates the resilience of the system under key resource stress. III. CONTROL CHAIN RECONFIGURATION A. Reconfiguration Modeling To address the risk of critical control chain failure caused by dynamic fluctuations in quantum key availability, this section proposes a multi-layer network modeling framework that captures the coupling between communication, control, and physical layers. A quantitative objective function is further formulated to balance key consumption, control effectiveness, and system resilience, forming the foundation for subsequent optimization. The QKD-enabled SCADA system is abstracted as a directed multilayer graph: G = (Vcom, Vctr, Vphy; Ecom, Ectr, Ephy) (21) Here, Vcom represents communication nodes (e.g., QKD terminals, switches), Vctr denotes control task nodes (e.g., AGC, AVR, protection signals), and Vphy includes physical equipment such as buses, generators, and loads. The edge sets E(·) capture both intra-layer and inter-layer interactions. Each control chain lis defined as a directed path Pl⊆G that originates from the control center, passes through communication nodes, and actuates physical components. It is characterized by a feature vector: φl= (dl, τl, βl, γl) (22) where dldenotes the key resource required per unit time, τlis the end-to-end latency tolerance, βlreflects the control priority or impact (e.g., frequency response weight), and γlcaptures the reconfiguration cost associated with activating or switching the chain. To adapt to time-varying key inventory and ensure continuous execution of prioritized control tasks, the system must dynamically decide the state of each control chain. Let xl,k ∈{0, 1, 2} denote the state of control chain lat discrete time step k, where: xl,k =      0, chain deactivated 1, chain active with downgraded encryption 2, chain active with full encryption (e.g., OTP) Then, the total key consumption at time k is given by: Ck = X l∈L dl αAES ∆t · 1xl,k=1 + αOTP ∆t · 1xl,k=2 (23) where αAES and αOTP denote the key usage coefficients under AES and OTP encryption, respectively, and ∆t is the control dispatch interval. 1(·) is the indicator function. To prevent abrupt key depletion and ensure reliability of mission-critical chains, the decision framework includes a chance-constrained key inventory requirement: Pr Ck ≤ˆKk -ηbuf ≥1 -ε (24) Here, ˆKk denotes the predicted key inventory at time k, ηbuf is the safety buffer margin, and ε is the risk tolerance (e.g., 0.05 corresponding to 95% confidence). This constraint ensures that the key allocation decisions remain feasible under prediction uncertainty. 
Based on the above structure, we construct the overall control loss function Lk to evaluate performance at each dispatch 6 step. The function integrates power system stability deviation, task execution penalties, and reconfiguration overhead: Lk = X n∈N wf |∆fn,k| + X m∈M wv |∆Vm,k| | {z } grid stability deviation + X l ξdrop 1xl,k=0 + ξdeg 1xl,k=1 | {z } task degradation and failure cost + X l γl· 1xl,k̸=xl,k-1 | {z } chain reconfiguration cost (25) In this formulation, ∆fn,k and ∆Vm,k denote the frequency and voltage deviations at bus n and node m at time k, while wf and wv are corresponding weight factors. The terms ξdrop and ξdeg quantify the cost of fully dropped or downgraded control tasks, and γlpenalizes frequent switching of chain lto discourage excessive reconfiguration. This loss function encapsulates the essential trade-off between resource-aware encryption scheduling and power system control effectiveness. Compared to static chain architectures or simple task-prioritybased schemes, the proposed model enables real-time tasklevel adaptation to key availability, maintaining high control success rates and continuous secure operation in dynamically constrained environments. B. Temporal Optimization Formulation To enable rolling adjustments of control chain status and dynamic scheduling of quantum key resources, we formulate a temporal optimization model that operates over a fixed control interval ∆t. The model aims to minimize overall control performance loss over a predictive horizon while considering forecast uncertainty in key inventory, power system dynamics, and the cost of switching control chains. This results in a practically solvable mixed-integer chance-constrained optimization framework. Let the rolling prediction horizon be H steps. At each dispatch step k, the model optimizes the activation sequence {xl,t}k+H-1 t=k for each control chain lacross the window. The objective function is defined as: min {xl,t} k+H-1 X t=k Lt + λ ∥∆f t∥2 2 (26) Here, Lt denotes the control chain loss function at time t, defined in Equation (8), while ∆f t represents the vector of frequency deviations at time t, and λ is a weight parameter balancing control quality and resource usage. The optimization is subject to three categories of constraints: (1) Key Resource Chance Constraint: For each t ∈ [k, k + H -1], the cumulative key consumption Ct arising from selected control chains must satisfy a probabilistic lower bound on the available key inventory: Pr Ct ≤ˆKt -ηbuf ≥1 -ε (27) Here, ˆKt is the forecasted key inventory at time t, ηbuf denotes a safety buffer, and ε is the risk tolerance level (e.g., 0.05 for 95% confidence). This chance constraint ensures that control actions remain feasible under uncertainty and can be transformed into a deterministic second-order cone (SOC) constraint using techniques such as Chebyshev inequality or Boole-Bonferroni approximation. (2) Control Chain State Constraints: Each control chain can only assume one of three discrete states at any time: xl,t ∈{0, 1, 2}, ∀l, ∀t ∈[k, k + H -1] (28) Specifically, xl,t = 0 indicates chain deactivation, xl,t = 1 implies downgraded encryption (e.g., AES), and xl,t = 2 represents full-strength encryption (e.g., OTP). 
IV. STACKELBERG COOPERATIVE GAME AND MPEC-BASED SOLUTION

In QKD-encrypted power systems, quantum key resources are shared between the TSO and the DSO, creating inherent coordination conflicts. While the TSO focuses on frequency support and grid stability, the DSO prioritizes local voltage and load control. This interaction is modeled as a bi-level Stackelberg game, with the TSO as leader and the DSO as follower.

Let x_T denote the TSO's decision variables over the controlled secure chains (e.g., activation states and encryption modes), and x_D the corresponding DSO decisions. The bi-level problem is then defined as:

min_{x_T} J_T(x_T, x_D) = Σ_{t=k}^{k+H−1} ( L_t^T + λ_T ‖∆f_t^T‖²₂ )
s.t. x_T ∈ Ω_T,
     x_D ∈ argmin_x { Σ_{t=k}^{k+H−1} ( L_t^D + λ_D ‖∆f_t^D‖²₂ ) : x ∈ Ω_D(x_T) }   (31)

Here, L_t^T and L_t^D denote the control chain losses for the TSO and DSO, respectively, incorporating frequency deviation penalties, control errors, and encryption switching costs. The feasible sets Ω_T and Ω_D include power system dynamics, key budget constraints, and discrete mode restrictions. The coupling is induced by a shared probabilistic quantum key budget constraint:

Pr(C_t^T + C_t^D ≤ K̂_t − η_buf) ≥ 1 − ε, ∀t   (32)

To reformulate the bi-level problem into a numerically tractable single-level model, we derive the Karush-Kuhn-Tucker (KKT) conditions for the DSO's lower-level optimization. Let the DSO's constraints be expressed as nonlinear inequalities g(x_T, x_D) ≤ 0, second-order cone (SOC) constraints h(x_T, x_D) ∈ K, and integer variable relaxations x_{l,t} ∈ {0, 1, 2}. Introducing Lagrange multipliers λ and μ for g and h, respectively, the KKT system becomes:

∇_{x_D} J_D + λ^⊤ ∇_{x_D} g + μ^⊤ ∇_{x_D} h = 0   (33)

λ_i · g_i = 0, ∀i;  h ∈ K, μ ∈ K*, ⟨h, μ⟩ = 0   (34)

x_{l,t}(1 − x_{l,t}) = 0, ∀l, t   (35)

Embedding the above complementarity system into the upper-level constraints yields a Mathematical Program with Equilibrium Constraints (MPEC), specifically a Mixed-Integer Program with Complementarity Constraints (MPCC). To solve the resulting MPCC efficiently, we propose a Level Decomposition with Complementarity Pruning (LD-CP) algorithm that leverages the sparse coupling structure between the TSO (leader) and the DSO (follower), as shown in Algorithm 2.
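The core device used there is standard: each complementarity pair from (34) can be enforced with a binary indicator and big-M bounds (the Fortuny-Amat reformulation). The cvxpy sketch below shows this encoding on a single toy pair; the constant M and the example constraint g are assumptions for illustration.

import cvxpy as cp

M = 1e3                        # big-M bound on |g| and lambda (assumed valid)
x = cp.Variable(2)             # stand-in for the follower's primal variables
lam = cp.Variable(nonneg=True) # multiplier of g(x) <= 0
z = cp.Variable(boolean=True)  # z = 0 forces g = 0 (active); z = 1 forces lam = 0

g = x[0] + x[1] - 1.0          # toy inequality g(x) <= 0

constraints = [
    g <= 0,
    -g <= M * z,               # z = 0  =>  g >= 0, hence g = 0
    lam <= M * (1 - z),        # z = 1  =>  lam = 0; together: lam * g = 0
]
# Any upper-level objective can now treat lam and z as ordinary variables.
problem = cp.Problem(cp.Minimize(cp.sum_squares(x - 1.0) + lam), constraints)
problem.solve()                # requires a mixed-integer-capable solver
print(x.value, lam.value, z.value)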
Specifically, the original bi-level formulation min_{x_T} J_T(x_T, x_D) with x_D ∈ argmin J_D(x_T, ·) is decoupled into alternating rounds of leader-follower interaction. At each iteration p, the leader's decision x_T^{(p)} is fixed, and the follower responds by solving its own optimization subproblem. From the resulting KKT system, we extract the active complementarity set C^{(p)} ⊆ {(i, j) | λ_i · g_i = 0, ⟨h_j, μ_j⟩ = 0}, which reflects the subset of dual constraints that are exactly satisfied under the current conditions. These active pairs are then encoded back into the upper-level problem via binary indicator variables z_i ∈ {0, 1} that determine whether g_i or λ_i is forced to zero at iteration p. The resulting upper-level program introduces a proximal regularization term ρ‖x_T − x_T^{(p)}‖² to stabilize updates and improve convergence. After solving for a new leader decision x_T^{(p+1)}, the active set is pruned by removing invalid complementarity pairs based on feasibility and duality conditions, yielding a refined C^{(p+1)}. This iterative process continues until the update norm ‖x_T^{(p+1)} − x_T^{(p)}‖ falls below a convergence tolerance. When runtime thresholds such as ∆f_max are violated, alert notifications are triggered via Slack or email.

VI. CASE STUDY

A. Experiment Setup

This study presents a comprehensive and repeatable simulation framework to evaluate the proposed QKD-enhanced SCADA control chain reconfiguration strategy. The IEEE 39-bus and 118-bus systems are selected as benchmark testbeds. The 39-bus system serves as a medium-scale case for verifying system stability and efficiency, while the 118-bus system is partitioned into one transmission zone and three distribution areas to support multi-agent coordination scenarios. The SCADA architecture follows a four-layer topology (Control Center, Gateway, RTU, and IED), with the IEC 60870-5-104 communication protocol integrated with Q3P encryption. Optical link disturbances are injected via OpenQKD-Sim, assuming fiber lengths of 20 km (39-bus) and 50 km (118-bus), with a baseline attenuation of 0.2 dB/km and Gaussian variations (0.04 dB/km). Link failures are modeled as Poisson-distributed events (mean rate: one per 5000 s), lasting 3-5 seconds, uniformly sampled.

Five experimental scenarios are constructed: S-1 (static chain + static keys), S-2 (static chain + dynamic keys), S-3 (reconfigurable chain + dynamic keys, single-agent), S-4 (cooperative transmission-distribution control without game-theoretic optimization), and S-5 (Stackelberg-based multi-agent reconfiguration with MPEC and the LD-CP algorithm). In S-5, a bi-level optimization is performed every 100 ms, where the TSO's decision is updated using the LD-CP method based on complementarity set pruning and proximal regularization. Each experiment is repeated over 30 Monte Carlo runs to ensure statistical robustness.

Fig. 1. Control recovery time distribution.

B. IEEE 39-bus Case

In this subsection, we conduct a comprehensive quantitative evaluation of the proposed method (S-3) against two baseline scenarios: static chain with static keys (S-1), and static chain with dynamic keys (S-2).

TABLE I: Performance comparison across scenarios

  Metric                                  S-1    S-2    S-3
  Success rate P_succ                     0.72   0.88   0.97
  Max frequency deviation ∆f_max (Hz)     0.26   0.17   0.08
  Key utilization η                       0.30   0.57   0.83

As the aggregated metrics show, S-1 exhibits the weakest overall performance: the task success rate remains at just 72%, the frequency deviation peaks at 0.26 Hz, and the key utilization rate is below 30%. This reflects the system's high vulnerability to key exhaustion and communication disruptions under static configurations.
When dynamic key generation is introduced in S-2, the task success rate increases to 88% and ∆f_max drops to 0.17 Hz, indicating that a continuous key supply significantly mitigates the risk of depletion. However, the key utilization rate only improves to 57%, suggesting inefficiencies due to persistent link rigidity. In contrast, S-3 combines dynamic key management with adaptive link reconfiguration and task reprioritization, yielding the highest task success rate (97%), the smallest frequency deviation (0.08 Hz), and the most efficient key utilization (83%). These improvements validate the synergistic benefits of proactive coordination in both the communication and control layers.

The 30-run TRR distributions further support these findings. S-1 demonstrates prolonged recovery times mostly in the 12-16 s range, with occasional outliers extending to 18-19 s, highlighting its reliance on passive recovery mechanisms. S-2 exhibits a shorter and tighter distribution between 6-9 s, though occasional delays still occur due to the fixed communication topology. In sharp contrast, S-3 displays a highly compact TRR distribution centered around 2.5-3.5 s, with negligible variance and no outliers. This indicates that the integration of key-aware control rescheduling and real-time link reconfiguration enables rapid mitigation of control failures, minimizing frequency violations and downtime. Taken together, the results confirm that the proposed S-3 approach achieves superior resilience and resource efficiency under realistic operational constraints.

Fig. 2. Violin-box plot of the key-fairness distribution.

Fig. 3. LD-CP iteration count versus achieved fairness.

C. Fairness Metrics under Stackelberg Coordination

In this subsection, we critically analyse the fairness performance obtained in the Stackelberg scenarios by interpreting three complementary visualisations: a violin-box hybrid plot of the fairness index F_K, a colour-coded scatter plot relating the LD-CP iteration count to F_K, and a heat-map that maps the standard deviation of F_K over the joint space of iteration budget and link-break frequency.

The violin-box plot reveals a pronounced improvement when the LD-CP-driven Stackelberg mechanism (S-5) replaces the static cooperative baseline (S-4). Whereas S-4 centres around a median fairness of approximately 0.72 with a visibly skewed tail dipping below 0.65, S-5 shifts the entire distribution upwards to a median near 0.86 and compresses its inter-quartile range by roughly half. Because F_K is a Jain-style index, this uplift signifies that the TSO and DSO obtain a far more balanced share of the finite quantum-key pool while run-to-run variability is simultaneously suppressed. The plot therefore confirms that LD-CP's complementarity pruning not only raises expected equity but also mitigates extreme imbalance events.

The iteration-fairness scatter plot complements this insight by quantifying the efficiency-fairness trade-off. Fairness ascends steeply from about 0.70 at five iterations to a plateau around 0.83-0.84 once the iteration budget reaches 80-100, after which marginal gains vanish. The dashed regression line illustrates the overall positive slope, yet the curved cloud of points exposes diminishing returns: the interval between 25 and 60 iterations delivers the bulk of the improvement, whereas pushing beyond roughly 120 iterations contributes little besides computational overhead. Operationally, this suggests that capping the LD-CP loop at 50-80 iterations provides an attractive balance between convergence speed and final equity.
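The exact definition of F_K is not restated in this excerpt; the sketch below uses the standard Jain index over the per-operator key allocations as a stand-in assumption, which matches the "Jain-style" description above.

import numpy as np

def jain_index(allocations) -> float:
    """Jain's fairness index: (sum x)^2 / (n * sum x^2), in (0, 1]."""
    x = np.asarray(allocations, dtype=float)
    return x.sum() ** 2 / (len(x) * (x ** 2).sum())

print(jain_index([10.0, 10.0]))  # 1.0: perfectly balanced TSO/DSO shares
print(jain_index([18.0, 2.0]))   # ~0.61: heavily skewed allocation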
Fig. 4. Heat-map of fairness variability as a function of iteration budget and link-break frequency.

Finally, the heat-map projects the standard deviation of F_K onto the plane spanned by the link-break frequency λ_break and the iteration count N_iter. Three patterns emerge. First, variance rises almost linearly with the disturbance rate, confirming that heavier communication stress naturally degrades fairness. Second, increasing N_iter consistently suppresses variance, underscoring that deeper leader-follower negotiation stabilises outcomes. Third, the gradient of this improvement flattens beyond about 80 iterations, echoing the scatter plot's plateau. These relationships delineate a practical "safe operating zone": for moderate break rates (λ_break ≤ 0.003 s⁻¹), roughly 60 LD-CP iterations keep fairness fluctuations below 0.04, whereas more severe disturbance regimes require either accelerated solvers to sustain N_iter ≥ 80 in real time or auxiliary fallback policies.

Taken together, the three visualisations demonstrate that the Stackelberg-MPEC framework not only elevates average fairness but also offers tunable levers, namely the iteration budget and the disturbance tolerance, to tailor equity, stability, and computational effort to specific grid-operation requirements.

VII. CONCLUSION

This paper presents a unified framework for modeling, optimizing, and validating QKD-enhanced SCADA systems in power grid applications. By integrating quantum key dynamics with multi-timescale control processes, we address the critical challenge of securing power system operations under constrained and uncertain cryptographic resources. Through a Stackelberg game formulation and a novel LD-CP algorithm, we enable fair and adaptive key allocation between transmission and distribution operators. The proposed co-simulation testbed bridges quantum optics, network protocols, and power system control, offering an end-to-end platform for performance evaluation. Experimental results demonstrate significant improvements in control task reliability, frequency stability, and key utilization under dynamic link conditions. Future work will explore the integration of trusted relay networks, cross-layer intrusion detection, and hardware-in-the-loop validation on real-time digital simulators.
Reinforcement Learning with Stochastic Reward Machines⋆

Jan Corazza¹,², Ivan Gavran², Daniel Neider²
corazzajan@gmail.com, gavran@mpi-sws.org, neider@mpi-sws.org
¹ University of Zagreb
² Max Planck Institute for Software Systems

Abstract. Reward machines are an established tool for dealing with reinforcement learning problems in which rewards are sparse and depend on complex sequences of actions. However, existing algorithms for learning reward machines assume an overly idealized setting where rewards have to be free of noise. To overcome this practical limitation, we introduce a novel type of reward machines, called stochastic reward machines, and an algorithm for learning them. Our algorithm, based on constraint solving, learns minimal stochastic reward machines from the explorations of a reinforcement learning agent. This algorithm can easily be paired with existing reinforcement learning algorithms for reward machines and guarantees to converge to an optimal policy in the limit. We demonstrate the effectiveness of our algorithm in two case studies and show that it outperforms both existing methods and a naive approach for handling noisy reward functions.

Keywords: Reinforcement Learning · Reward Machines · Non-Markovian Rewards · SMT Solving · SAT Solving · Stochastic Reward Machines.

1 Introduction

The key assumption of a reinforcement learning (RL) model is that the reward function is Markovian: the received reward depends only on the agent's immediate state and action. For many practical RL tasks, however, the most natural conceptualization of the state space is one in which the reward function depends on the history of actions that the agent has performed. (Those are typically the tasks in which the agent is rewarded for complex behaviors over a longer period.)

An emerging tool used for reinforcement learning in environments with such non-Markovian rewards are reward machines. A reward machine is an automaton-like structure which augments the state space of the environment, capturing the temporal component of rewards. It has been demonstrated that Q-learning [16], a standard RL algorithm, can be adapted to use and benefit from reward machines [17]. Reward machines are either given by the user, or inferred by the agent on the fly [8,7,22]. The learning methods used ensure that the inferred machine is minimal, enabling quick convergence to an optimal policy. Besides faster convergence, learning minimal reward machines contributes to the interpretability of problems with an unclear reward structure.

Reward machines only model deterministic rewards. When the machine is not known upfront, existing learning methods prove counterproductive in the presence of noisy rewards, as there is either no reward machine consistent with the agent's experience, or the learned reward machine explodes in size, overfitting the noise.

In this paper, we introduce the notion of a stochastic reward machine, which can capture noisy, non-Markovian reward functions, together with a novel algorithm for learning them. The algorithm is an extension of the constraint-based formulation of Xu et al. [22]. The extension relies on the parameters of the reward's distribution, making sure that experiential rewards respect the distribution.

⋆ A shorter version of this paper appeared in the Proceedings of the Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI-22) [5]. Source code available at https://github.com/corazza/srm.
In every iteration, if the agent establishes that its current hypothesis about the machine is wrong, it updates the hypothesis (either by fixing the machine's parameters or by solving the constraint problem and learning a new machine).

While one could use the proposed algorithm to learn a suitable (non-stochastic) reward machine and use that machine to guide the reinforcement learning process, we recognize the value of modeling stochasticity explicitly. First, it reveals information about the distribution of rewards, improving the interpretability of the problem at hand. Second, a direct correspondence between the stochastic reward function and the stochastic reward machine that models it makes the exposition clearer.

In our experimental evaluation, we demonstrate the successful working of our algorithm on two noisy, non-Markovian case studies. We compare our algorithm with existing methods (which do not deal explicitly with noise) on the same case studies: as expected, disregarding the noise by using existing inference algorithms for classical RMs performs substantially worse than our new approach. Finally, we compare our algorithm to a baseline method that tries to "average out" the noise.

To summarize, in this paper we 1) introduce stochastic reward machines, 2) present a novel algorithm for learning stochastic reward machines by an RL agent, and 3) experimentally demonstrate the efficacy of our algorithm.

1.1 Related Work

Using finite state machines to capture non-Markovian reward functions has been proposed already in the early work on the topic [2]. Toro Icarte et al. [17,18] introduced reward machines (known as Mealy machines in other contexts) as a suitable formalism and an algorithm that takes advantage of the reward machine structure. Similar formalisms, including temporal logics, have been proposed by others, too [11,3,4]. This line of work assumes the reward machine to be given.

The assumption of a user-provided reward machine has been lifted in the follow-up works [8,22,7]. Learning temporal representations of the reward has been explored in different contexts: for multi-agent settings [13], for reward shaping [20], or with user-provided advice [14]. All these approaches are fragile in the presence of noisy rewards. Other approaches focus on learning attainable sequences of labels [9,10], disregarding reward values. If the reward noise ranges over a finite set of values, Velasquez et al. [19] propose active learning of reward machine-like automata with a probabilistic transition function.

Outside of the non-Markovian context, many works have studied noise in rewards. Everitt et al. [6] give an overview of potential sources of noise/corruption and provide an impossibility result for learning under arbitrarily corrupted rewards. Romoff et al. [15] propose learning a reward estimator function alongside the value function. Wang, Liu, and Li [21] consider a problem of rewards being flipped according to a certain distribution. While all these works consider much richer noise models, they are not readily applicable to the non-Markovian rewards setting.

2 Preliminaries

This section introduces the necessary background on reinforcement learning and the formalism of reward machines for capturing non-Markovian rewards. We illustrate all notions on a running example called Mining. Mining, inspired by variations of Minecraft [1], models the problem of finding and exploiting ore in an unknown environment.
We use this example throughout the paper. Fig. 1 shows the Mining world. An agent moves in a bounded grid-world intending to find valuable ore, gold (G) and platinum (P), and bring it to the marketplace (M). Furthermore, the agent's success depends on the purity of the ore and the market prices, which are stochastic and cannot be influenced by the agent (though the spread of the market prices can naturally be bounded). In order to do so successfully, the agent has to fulfill specific additional requirements, such as finding equipment (E) beforehand and not stepping into traps (T).

Fig. 1: A simplified example of the Mining environment grid and a trajectory. The agent's initial state is shown as a circle. Cells display their state labels, or are blank if the label is ∅. The trajectory indicated by arrows shows the agent collecting the equipment (E), finding platinum (P) and bringing it to the marketplace (M).

In reinforcement learning, an agent learns to solve such tasks through repeated, often episodic interactions with an environment. After each step, the agent receives feedback (a reward r ∈ R ⊂ ℝ) based on its performance and acts to maximize a (discounted) sum of received rewards. This interaction forms a stochastic process: while the environment is in some state s ∈ S, the agent chooses an action a ∈ A according to a policy π(s, a) (a function mapping states to probability distributions over the action space), causing the environment to transition into the next state s′ ∈ S and giving the agent a reward r (where S is the state space and A is the action space of the environment). A realization of this process is a trajectory s0 a1 s1 . . . ak sk (optionally, rewards may be included in this sequence). The agent continually updates its policy (i.e., learns) based on the received rewards.

A reinforcement learning environment is formalized as a Markov decision process (MDP). As is common in the context of reward machines, we equip our MDPs with a labeling function that maps transitions to labels. These labels correspond to high-level events that are relevant for solving the given task (e.g., finding gold, indicated by G).

Definition 1. A (labeled) Markov decision process is a tuple M = (S, s_I, A, p, P, L) consisting of a finite state space S, an agent's initial state s_I ∈ S, a finite set A of actions, and a probabilistic transition function p: S × A × S → [0, 1]. Additionally, a finite set P of propositional variables and a labeling function L: S × A × S → 2^P determine the set of relevant high-level events that the agent senses in the environment. We define the size of an MDP M, denoted as |M|, to be the cardinality of the set S (i.e., |M| = |S|).

Let us briefly illustrate this definition. The agent always starts in state s_I. It then interacts with the environment by taking an action at each step. If the agent at state s ∈ S takes the action a ∈ A, its next state will be s′ ∈ S with probability p(s, a, s′). For this transition (s, a, s′), the labeling function L emits a set of high-level propositional variables (i.e., a label). One can think of these labels as knowledge provided by the user. If the user cannot provide any labeling function, L can simply return the current transition.

The Mining example can be modeled by the MDP in which states are fields of the grid world and the agent's actions are moving in the four cardinal directions. The transition function p models the possibility of the agent slipping when making a move. The propositional variables used for labeling are E (equipment found), P (platinum found), G (gold found), M (marketplace reached), and T (fell into a trap); ∅ signifies that no relevant events have occurred.
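To make Definition 1 concrete, the following Python sketch implements a Mining-style labeled MDP: a slippery grid transition function p and a labeling function L that reads the high-level event off the cell the agent lands on. The grid layout and the slip probability are illustrative assumptions, not the exact environment from the paper.

import random

GRID = ["..E..", ".T.P.", "..G.M"]           # E/P/G/M/T cells; '.' has label {}
ACTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}
SLIP = 0.1                                    # probability the move goes astray

def step(state, action):
    """Probabilistic transition p(s, a, s'): with prob SLIP, take a random move."""
    if random.random() < SLIP:
        action = random.choice(list(ACTIONS))
    dr, dc = ACTIONS[action]
    r = min(max(state[0] + dr, 0), len(GRID) - 1)
    c = min(max(state[1] + dc, 0), len(GRID[0]) - 1)
    return (r, c)

def label(s, a, s2):
    """Labeling function L: S x A x S -> 2^P, reading the landed-on cell."""
    cell = GRID[s2[0]][s2[1]]
    return frozenset() if cell == "." else frozenset({cell})

random.seed(1)
s = (0, 0)
for a in ["right", "right", "down", "down"]:
    s2 = step(s, a)
    print(a, s2, set(label(s, a, s2)))
    s = s2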
To capture an RL problem fully, we need to equip an MDP with a reward function. Formally, a reward function R maps trajectories from (S × A)⁺ × S to a cumulative distribution function over the set of possible rewards R. For labeled MDPs, the reward function typically depends on finite sequences of observed labels rather than on the more low-level sequences of state-action pairs.

Let us emphasize the significance of defining the reward function R over finite sequences of states and actions. Using the entire history as the argument enables us to naturally reward behaviors that respect temporal relations and persistence. For instance, in the Mining example, the goal is to accomplish the following steps: (1) find equipment, (2) exploit a mine, (3) deliver the ore to the specified location, all while avoiding traps. Note that the order in which the agent performs these steps is crucial: finding ore and going to the marketplace without first picking up equipment is not rewarded; stepping into traps ends the episode with reward zero. Reward functions that make use of the agent's entire exploration (as opposed to only the current state and action) were first studied by Bacchus, Boutilier, and Grove [2] and are termed non-Markovian reward functions.

Toro Icarte et al. [17] have shown that reward machines (RMs) are a powerful formalism for representing non-Markovian reward functions. Intuitively, one can view the role of reward machines as maintaining a sufficient amount of memory to turn the non-Markovian reward function back into an ordinary, Markovian one. This results in an important feature of RMs: they enable using standard RL algorithms (which would otherwise not be usable with non-Markovian reward functions). Furthermore, by taking advantage of the structure present in RMs, the algorithms can be made more efficient.

On a technical level, RMs are finite-state automata that transduce a sequence ℓ1 ℓ2 . . . ℓk of labels into a sequence r1 r2 . . . rk of rewards. For the sake of brevity, we omit a formal definition here and introduce the concept of RMs using an example. To this end, let us consider the RM A in Fig. 2a, which attempts to capture the reward function of the Mining example and operates as follows. Starting from the initial state vI, the machine transitions to an intermediate state v1 upon finding equipment (indicated by the formula¹ E). From there, A either moves to state v2 (upon finding platinum) or to state v3 (upon finding gold). The reward, however, is delayed until the agent reaches the marketplace (indicated by the label M) and A transitions to the terminal state vT. Once this happens, the machine outputs a reward of 1 (if the agent has previously collected gold) or a reward of 1.1 (if the agent has collected platinum). By contrast, violating the prescribed order, failing to reach the marketplace, or stepping onto a trap results in no reward for the agent. For the label sequence (∅, {E}, ∅, {P}, ∅, ∅, {M}) (from Fig. 1), the machine A will produce the reward sequence (0, 0, 0, 0, 0, 0, 1.1).

Note, however, that the RM in Fig. 2a fails to capture the stochastic nature of rewards in the Mining example, which stems from the varying purity of the ore and market fluctuations. This problem arises from an intrinsic property of reward machines: they only allow outputs to be real numbers and, hence, can only encode deterministic reward functions. This observation shows that reward machines, as currently defined and used in the literature, cannot capture the common phenomenon of noisy rewards! In the next section, we show how to generalize the model of reward machines in order to overcome this limitation.
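The RM of Fig. 2a can be written down as a small transducer; the sketch below encodes exactly the transitions described above (with single-proposition labels standing in for the formulas over 2^P, and the empty string for the empty label set) and replays the label sequence from Fig. 1.

# Transitions of the RM in Fig. 2a; everything not listed is a self-loop
# with reward 0, as in the figure.
RM = {  # (state, label) -> (next state, reward)
    ("vI", "E"): ("v1", 0.0),
    ("v1", "P"): ("v2", 0.0),
    ("v1", "G"): ("v3", 0.0),
    ("v2", "M"): ("vT", 1.1),   # platinum was collected
    ("v3", "M"): ("vT", 1.0),   # gold was collected
}

def transduce(labels, start="vI"):
    """Turn a label sequence into the corresponding reward sequence."""
    v, rewards = start, []
    for l in labels:
        if l == "T":                       # trap: terminal state, reward 0
            v, r = "vT", 0.0
        else:
            v, r = RM.get((v, l), (v, 0.0))
        rewards.append(r)
    return rewards

print(transduce(["", "E", "", "P", "", "", "M"]))  # [0.0, ..., 0.0, 1.1]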
¹ We use propositional formulas to succinctly describe sets of labels. For instance, the formula p ∨ q over P = {p, q} corresponds to the set {{p}, {q}, {p, q}}.

Fig. 2: Classical reward machine for the mining example (left) and stochastic reward machine (right). Transitions are labeled with input-output pairs, where labels are given by propositional formulas encoding subsets of 2^P. All states have a transition to vT with output 0 on reading T (trap) (only depicted for v1). All remaining, missing transitions are self-loops with output 0.

3 Stochastic reward machines

To capture stochastic, non-Markovian reward functions, we introduce the novel concept of stochastic reward machines.

Definition 2. A stochastic reward machine (SRM) A = (V, vI, 2^P, O, δ, σ) is defined by a finite, nonempty set V of states, an initial state vI ∈ V, an input alphabet 2^P, a (deterministic) transition function δ: V × 2^P → V, an output alphabet O, which is a finite set of cumulative distribution functions (CDFs), and an output function σ: V × 2^P → O. We define the size of A, denoted as |A|, to be |V| (i.e., the cardinality of the set V).

To define the semantics of an SRM, let s0 a1 s1 . . . ak sk be a trajectory of an RL episode and ℓ1 . . . ℓk the corresponding label sequence with ℓ_{i+1} = L(s_i, a_{i+1}, s_{i+1}) for each i ∈ {0, . . . , k − 1}. The run of an SRM A on the label sequence ℓ1 . . . ℓk is then a sequence v0 ℓ1 F1 v1 . . . v_{k−1} ℓk Fk vk where v_{i+1} = δ(v_i, ℓ_{i+1}) and F_{i+1} = σ(v_i, ℓ_{i+1}) for all i ∈ {0, . . . , k − 1}. The sequence F1 . . . Fk of CDFs is then used to determine the rewards that the agent receives: for each i ∈ {1, . . . , k}, the CDF F_i is sampled and the resulting reward r_i ∈ R is returned. This process results in a pair (λ, ρ) with λ = ℓ1 . . . ℓk and ρ = r1 . . . rk, which we call a trace.

We will often refer to an SRM's output by the corresponding probability distribution. We focus on continuous but bounded probability distributions, {D1([a1, b1]), . . . , Dn([an, bn])}, where n ∈ ℕ and D_i([a, b]) is a distribution over the interval [a, b]. It is not hard to see that classical reward machines are a special case of Definition 2: one just has to set a_i = b_i for each i ∈ {1, . . . , n}, which results in probability distributions that assign probability 1 to a single real number.

Fig. 2b shows an SRM B for the mining example. Note the difference to the classical RM of Fig. 2a: the transitions now output probability distributions instead of real values (the non-trivial uniform distributions are on the transitions from v2 to vT and from v3 to vT). This difference allows us to capture the noise in the rewards of the example. For example, the label sequence of our running example, (∅, {E}, ∅, {P}, ∅, ∅, {M}), will be transduced to a sequence of distributions. Sampling these distributions produces different reward sequences (e.g., (0, 0, 0, 0, 0, 0, 0.95) and (0, 0, 0, 0, 0, 0, 1)).
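An SRM differs from the earlier sketch only in that each transition emits a bounded distribution which is sampled at run time. The sketch below encodes the machine of Fig. 2b with uniform interval outputs; assigning U([0.9, 1.3]) to the platinum path and U([0.8, 1.2]) to the gold path matches the deterministic means 1.1 and 1 above, which is our reading of the figure.

import random

SRM = {  # (state, label) -> (next state, (low, high)) for U([low, high])
    ("vI", "E"): ("v1", (0.0, 0.0)),
    ("v1", "P"): ("v2", (0.0, 0.0)),
    ("v1", "G"): ("v3", (0.0, 0.0)),
    ("v2", "M"): ("vT", (0.9, 1.3)),   # platinum payoff, mean 1.1
    ("v3", "M"): ("vT", (0.8, 1.2)),   # gold payoff, mean 1.0
}

def sample_run(labels, start="vI"):
    v, rewards = start, []
    for l in labels:
        if l == "T":
            v, (a, b) = "vT", (0.0, 0.0)
        else:
            v, (a, b) = SRM.get((v, l), (v, (0.0, 0.0)))
        rewards.append(random.uniform(a, b))   # sample the CDF on this transition
    return rewards

random.seed(0)
print(sample_run(["", "E", "", "P", "", "", "M"]))  # last entry lies in [0.9, 1.3]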
Toro Icarte et al. [17] introduced a version of Q-learning for reward machines called QRM, which assumes that the reward machine outputs deterministic rewards. In the following, we examine whether the guarantees of QRM can be retained when working with stochastic reward machines.

3.1 QRM with stochastic reward machines

In addition to specifying a learning task, reward machines help with the learning process itself. QRM assumes knowledge of a reward machine representation of the environment reward and splits the Q-function update over all RM states by using transition outputs in lieu of empirical rewards.

For an MDP transition (s, a, s′) with label ℓ, QRM executes the Q-function updates Q_v(s, a) ←_α r + γ max_{a′} Q_{v′}(s′, a′) for each reward machine state v, with v′ being the succeeding state, α ∈ ℝ the learning rate, r = σ(v, ℓ) the reward due to reading label ℓ in state v, and ←_α denoting an update with learning rate α. These updates are equivalent to regular Q-learning updates in the cross-product of the original MDP and the reward machine. The reward function is Markovian with respect to the resulting cross-product decision process. As Q-learning also converges correctly for a stochastic Markovian reward, it is easy to see that QRM can find the optimal policy induced by an SRM by using samples r̂ ∼ σ(v, ℓ) in the update rule.

We also remark that SRMs allow for a relaxed notion of equivalence. As we show in the following lemma, it is not necessary for two SRMs to be exactly equal in order to induce the same optimal policy. We generalize the notion of exact functional equivalence (which is necessary for RMs) into equivalence in expectation.

Definition 3. SRMs A and B are equivalent in expectation (A ∼_E B) if for every label sequence λ = ℓ1 ℓ2 . . . ℓk we have E[A(λ)_i] = E[B(λ)_i] for every 1 ≤ i ≤ k, that is, if they output sequences of CDFs with equal expected values (where A(λ)_i refers to the i-th CDF in the output sequence A(λ)).

Lemma 1 can simplify the representation and inference of SRMs, allowing the algorithm to rely only on the expected values of the transitions in the inferred SRM.

Lemma 1. If A = (V, vI, 2^P, O, δ, σ) and B = (V′, vI′, 2^P, O′, δ′, σ′) are equivalent in expectation, then they induce the same optimal policy over the same environment.

Now that it has been established that learning the optimal policy using stochastic reward machines is viable, the remaining question is whether one can drop the assumption that knowledge of the environment reward is accessible, and learn an SRM representation of it in conjunction with the policy (instead of assuming it to be given).

4 Inferring SRMs

In this section, we show how to infer SRMs from data obtained through the agent's exploration. In Section 4.1, we present a seemingly appealing baseline algorithm and explain its weaknesses. We follow it by proposing Stochastic Reward Machine Inference (SRMI) as a better approach in Section 4.2.

Both algorithms intertwine RL and the learning of SRMs by starting with an initial hypothesis SRM and (1) running QRM, which generates a sequence of traces, and (2) if there are traces contradicting the current hypothesis, inferring a new one. These steps repeat with the goal of recovering an SRM that captures the environment reward and using it to learn the optimal policy. QRM is performed in conjunction with the latest hypothesis. Traces which contradict the hypothesis are called counterexamples.
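A minimal sketch of the QRM update rule just described: one Q-table indexed by machine state, environment state, and action, with every machine state updated from the same environment transition using a reward sampled from the hypothesis SRM's output. The interval representation of the outputs and all constants are illustrative assumptions.

import random
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.95
Q = defaultdict(float)            # Q[(v, s, a)]
ACTIONS = ["up", "down", "left", "right"]

def qrm_update(machine_states, delta, sigma, s, a, s2, label):
    """For every machine state v, use the machine's own (sampled) reward in
    lieu of the empirical one, as QRM prescribes for SRMs."""
    for v in machine_states:
        v2 = delta.get((v, label), v)                 # self-loop if undefined
        mu, eps = sigma.get((v, label), (0.0, 0.0))
        r = random.uniform(mu - eps, mu + eps)        # sample the SRM output
        target = r + GAMMA * max(Q[(v2, s2, a2)] for a2 in ACTIONS)
        Q[(v, s, a)] += ALPHA * (target - Q[(v, s, a)])

# toy two-state machine: reading "M" in v1 pays about 1.0 and moves to vT
delta = {("v1", "M"): "vT"}
sigma = {("v1", "M"): (1.0, 0.2)}
qrm_update(["vI", "v1"], delta, sigma, s=(0, 0), a="right", s2=(0, 1), label="M")
print(dict(Q))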
Due to Lemma 1, the task is simplified to finding an SRM that is merely equivalent in expectation with the environment rewards (instead of agreeing on exact distributions). We assume that a bound on the noise dispersion ε_c > 0 is known (e.g., sensors come with a pre-specified measurement error tolerance). Definition 4 uses the ε_c parameter to formalize the notion of consistency with a trace.

Definition 4. A trace (λ, ρ) = (ℓ1 ℓ2 . . . ℓk, r1 r2 . . . rk) is ε_c-consistent with an SRM H, which outputs a sequence of distributions H(λ) = d1 d2 . . . dk, if for all 1 ≤ i ≤ k we have |r_i − E[d_i]| ≤ ε_c, i.e., if all of the observed rewards r_i are plausible samples from H.

SRMI can only recover an SRM representation of a noisy environment reward that meets an additional requirement, which we formalize in Assumption 1. Informally, the assumption requires the noise from one reward distribution not to fully conceal the signal of a different one (unless they share means). We are convinced this requirement is met in a large class of practical, real-world scenarios.

Assumption 1. Let O = {D1([a1, b1]), . . . , Dn([an, bn])} be the output alphabet of the environment SRM. Let ε_c = max_i (b_i − a_i)/2 be the noise dispersion bound known to the agent. We then assume that any two output distributions that can be covered with an ε_c-interval must have equal expectations: for all 1 ≤ i, j ≤ n and µ ∈ ℝ we have [a_i, b_i] ∪ [a_j, b_j] ⊆ [µ − ε_c, µ + ε_c] ⟹ E[D([a_i, b_i])] = E[D([a_j, b_j])].

This assumption is satisfied in the Mining example. The set {U([0.1, 0.2]), U([0, 1])} breaks Assumption 1: ε_c-intervals cannot distinguish these distributions, and they differ in expected values, so they must be distinguished. The set {U([0.1, 0.9]), U([0, 1])} respects it, and distinguishing these distributions is unnecessary as they have equal expectations.

4.1 Baseline algorithm

One may be tempted to repurpose existing techniques for inferring reward machines from a collection of traces. There are two important obstacles:

1. Traces may be prefix-inconsistent: during exploration, the agent may encounter traces (ℓ1 ℓ2 . . . ℓm, r1 r2 . . . rm) and (ℓ′1 ℓ′2 . . . ℓ′n, r′1 r′2 . . . r′n) such that for some 1 ≤ i ≤ min{m, n} we have ℓ1 . . . ℓi = ℓ′1 . . . ℓ′i but r1 . . . ri ≠ r′1 . . . r′i. The consequence is that no reward machine can capture both traces.
2. Even if a collection of traces where noise is present is prefix-consistent, the (inferred) reward machine will tend to be impractically large because it will overfit the noisy data.

The baseline algorithm solves these problems by obtaining multiple samples for each trace in the counterexample set and producing estimates for the transition means before inferring the structure of the reward machine. Starting with an initial hypothesis, the following steps are repeated: (1) run QRM, which generates a sequence of traces; (2) when a counterexample is encountered, pause QRM and replay its trajectory until enough samples are collected; (3) preprocess the counterexample set so that the multiple samples collected in (2) are collapsed into estimates for the environment reward means; (4) use the deterministic RM inference method by Xu et al. [22] to infer the new minimal consistent hypothesis.

As knowledge of the environment SRM structure is not assumed, it is necessary to sample traces (instead of individual transitions). The number of samples required in (2) is determined from ε_c and an additional parameter for the minimal distance between two different transition means in the environment SRM. The preprocessing in (3) ensures (up to a confidence level) that different estimates for the same means are aggregated into one, and that the result respects ε_c-consistency with the original sample set.
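The ε_c-consistency check of Definition 4, used by both algorithms, translates directly into code; the sketch below checks a trace against a hypothesis SRM represented, as in the earlier sketches, by transition and mean dictionaries (an assumed encoding).

def is_consistent(labels, rewards, delta, means, v0="vI", eps_c=0.25):
    """Definition 4: every observed reward must lie within eps_c of the mean
    the hypothesis assigns to the corresponding transition."""
    v = v0
    for l, r in zip(labels, rewards):
        mu = means.get((v, l), 0.0)          # self-loops output mean 0
        if abs(r - mu) > eps_c:
            return False
        v = delta.get((v, l), v)
    return True

delta = {("vI", "M"): "vT"}
means = {("vI", "M"): 1.0}
print(is_consistent(["", "M"], [0.0, 0.95], delta, means))   # True
print(is_consistent(["", "M"], [0.0, 0.30], delta, means))   # False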
This approach seems to eliminate the two issues presented by stochastic rewards: since every prefix is sampled many times and then averaged and aggregated, there can be no prefix inconsistencies. Furthermore, aggregation leaves little room for overfitting the noise. However, the agent must be able to sample traces multiple times on demand. As we show in Section 5, this is costly, and sometimes even impossible.

4.2 Stochastic Reward Machine Inference

In contrast to the baseline algorithm, SRMI uses counterexamples to improve hypotheses immediately by relying on a richer constraint solving method that is able to encode ε_c-consistency with the counterexample set directly. This removes the need for replaying trajectories.

As before, the task for the algorithm is to recover a minimal SRM that is equivalent in expectation to the true environment one, and use it to learn the optimal policy. Starting with an initial hypothesis SRM, the following steps are repeated:

(1) Run QRM and record all traces in a set A (Lines 5 to 6 in Algorithm 1).
(2) When a counterexample is encountered, add it to the set X and attempt to make the current hypothesis ε_c-consistent with X by shifting its outputs (Lines 7 to 10).
(3) If Step (2) failed, solve a constraint problem to infer the new hypothesis (Line 12).
(4) Compute the final mean estimates to correct the outputs of the inferred hypothesis based on the empirical rewards in A (Line 14).

The algorithm generates a sequence of hypothesis SRMs H1 H2 · · · and a sequence of counterexample sets X1 X2 · · · (with X_i ⊂ X_j for all i < j), where H_j is consistent with X_{j−1} (and thus with every X_i for 1 ≤ i ≤ j). For simplicity, we assume noise distributions are symmetric, but SRMI can easily be extended to cover asymmetric ones (see Appendix B). Hypothesis outputs are of the form D([µ − ε_c, µ + ε_c]), where D([a, b]) is a symmetric distribution over an interval and µ ∈ ℝ is the estimated mean of a particular transition. Two SRMs are structurally isomorphic (Line 9) if their underlying automata without the reward output are isomorphic in a graph-theoretic sense.

Algorithm 1: SRMI
1   Initialize SRM H with a set of states V;
2   Initialize a set of q-functions Q = {q_v | v ∈ V};
3   Initialize X = ∅ and A = ∅;
4   for episode n = 1, 2, . . . do
5       (λ, ρ, Q) ← QRM_episode(H, Q);
6       add (λ, ρ) to A;
7       if H not ε_c-consistent with (λ, ρ) then
8           add (λ, ρ) to X;
9           if found SRM Z isomorphic with H and ε_c-consistent with X then
10              H′ ← Z;
11          else
12              infer H′ from X;
13          end
14          H ← Estimates(H′, A);
15          reinitialize Q;
16      end
17  end

There are many ε_c-consistent SRMs that can be returned by the constraint solving method in Line 12, not necessarily having the best estimates for the transition means. To correct for this, the function Estimates assigns sets of observed rewards to each hypothesis transition by simulating its runs on the traces in A and uses them to compute the final estimates. Our algorithm categorizes every counterexample as either Type 1 or Type 2.
A counterexample (λ, ρ) is of Type 1 with respect to H_i if there exists a graph-isomorphic SRM Z that is consistent with (λ, ρ) and X_{i−1} (otherwise it is of Type 2). Then H_{i+1} = Z and X_i = X_{i−1} ∪ {(λ, ρ)}. Intuitively, the current hypothesis can be "fixed" to become consistent with Type 1 counterexamples by shifting outputs without changing the structure of the SRM. We now discuss how our algorithm handles counterexamples of Type 2.

Inferring the structure and output range of a new hypothesis. When a new Type 2 counterexample is encountered (i.e., the outputs in the current hypothesis cannot be shifted to make it consistent), SRMI infers a new hypothesis from the counterexamples, effectively solving the following task.

Task 1. Given a set of traces X and dispersion ε_c, produce a minimal SRM that is ε_c-consistent with all traces in X.

We accomplish Task 1 by encoding it as a constraint problem in real arithmetic. Our encoding is an extension of the encoding used in the JIRP algorithm [22]. Minimality is ensured by starting from machines of size n = 1, increasing the size by 1 each time the constraint problem proves unsatisfiable, and returning the first successful result. For a given size, we use a collection of propositional and real variables from which one can extract an SRM, and constrain them so that they (1) encode a valid SRM and (2) ensure that the SRM is consistent with the set X. More precisely, for size n ∈ ℕ \ {0}, parameter ε_c, and counterexample set X, we construct a formula Φ_n^{X,ε_c} with the following two properties:

(a) Φ_n^{X,ε_c} is satisfiable iff there exists an SRM H of size n that is ε_c-consistent with every trace in X.
(b) Every satisfying assignment for the variables in Φ_n^{X,ε_c} contains sufficient information to construct a consistent SRM of size n.

The formula Φ_n^{X,ε_c} is built using the following variables:

- d_{p,ℓ,q} are propositional variables, true iff δ(p, ℓ) = q;
- o_{v,ℓ} are real variables that match the value of E[σ(v, ℓ)];
- x_{λ,v} are propositional variables encoding machine runs, true if the SRM arrives in state v upon reading the label sequence λ.

We use these variables to define constraints (1)-(4). Formula (1) requires that at the beginning (after an empty sequence), the SRM is in the initial state. Formula (2) requires that the SRM transitions to exactly one state upon seeing a label. The remaining two formulas connect the SRM to the set X (we use the symbol Pref(X) for the set of prefixes of traces in X). Formula (3) connects seen prefixes to the transition function captured by the variables d_{p,ℓ,q}. Finally, Formula (4) ensures ε_c-consistency.

x_{ε,vI} ∧ ⋀_{v ∈ V∖{vI}} ¬x_{ε,v}   (1)

⋀_{p ∈ V} ⋀_{ℓ ∈ 2^P} [ ( ⋁_{q ∈ V} d_{p,ℓ,q} ) ∧ ⋀_{q,q′ ∈ V, q ≠ q′} ( ¬d_{p,ℓ,q} ∨ ¬d_{p,ℓ,q′} ) ]   (2)

⋀_{(λℓ,ρr) ∈ Pref(X)} ⋀_{p,q ∈ V} [ (x_{λ,p} ∧ d_{p,ℓ,q}) → x_{λℓ,q} ]   (3)

⋀_{(λℓ,ρr) ∈ Pref(X)} ⋀_{v ∈ V} [ x_{λ,v} → |o_{v,ℓ} − r| ≤ ε_c ]   (4)

The formula Φ_n^{X,ε_c} is defined as the conjunction of formulas (1)-(4). One can easily see that properties (a) and (b) hold for Φ_n^{X,ε_c}, as there is a bijection between the assignments for which Φ_n^{X,ε_c} is true and consistent SRMs (modulo distributions).

Let H′ be the SRM constructed from a model that satisfies the above constraints, with σ_{H′}(v, ℓ) = D([o_{v,ℓ} − ε_c, o_{v,ℓ} + ε_c]). For every (λ, ρ) ∈ X of length k and 1 ≤ i ≤ k, due to (4), we have |E[H′(λ)_i] − ρ_i| ≤ ε_c, and so H′ is the new ε_c-consistent hypothesis. When H′ is constructed by correcting for a Type 1 counterexample, it is consistent by definition.
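A compact Z3 rendering of Φ_n^{X,ε_c} is sketched below for single-proposition labels; the trace encoding and variable naming are simplifications, not the paper's actual implementation.

from itertools import combinations
from z3 import And, Bool, Implies, Not, Or, Real, Solver, sat

def encode_and_solve(X, labels, n, eps_c):
    """Build Phi_n^{X,eps_c} for traces X (label tuple, reward tuple) and
    return a model if one exists, else None."""
    V = range(n)
    prefixes = {lam[:i] for lam, _ in X for i in range(len(lam) + 1)}
    d = {(p, l, q): Bool(f"d_{p}_{l}_{q}") for p in V for l in labels for q in V}
    o = {(v, l): Real(f"o_{v}_{l}") for v in V for l in labels}
    x = {(lam, v): Bool(f"x_{lam}_{v}") for lam in prefixes for v in V}

    s = Solver()
    s.add(x[(), 0], *[Not(x[(), v]) for v in V if v != 0])              # (1)
    for p in V:
        for l in labels:
            s.add(Or([d[p, l, q] for q in V]))                          # (2): at least one
            for q, q2 in combinations(V, 2):                            # (2): at most one
                s.add(Or(Not(d[p, l, q]), Not(d[p, l, q2])))
    for lam, rho in X:
        for i, (l, r) in enumerate(zip(lam, rho)):
            pre = lam[:i]
            for p in V:
                for q in V:                                             # (3)
                    s.add(Implies(And(x[pre, p], d[p, l, q]), x[pre + (l,), q]))
                s.add(Implies(x[pre, p],                                # (4)
                              And(o[p, l] - r <= eps_c, r - o[p, l] <= eps_c)))
    return s.model() if s.check() == sat else None

X = [(("E", "M"), (0.0, 1.05)), (("E", "M"), (0.0, 0.97))]
print(encode_and_solve(X, labels=["E", "M"], n=2, eps_c=0.1) is not None)  # True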
Correcting output distributions in inferred SRMs. As there can be many SRMs ε_c-consistent with X that are not equivalent in expectation, Task 1 need not have a unique solution. To illustrate how nonuniqueness can prohibit the correct convergence of hypothesis SRMs, let X contain samples from a single-state SRM T with two possible outputs, D([0, 1]) on ℓ1 and D([10, 100]) on ℓ2 (ε_c = (100 − 10)/2 = 45). For α ∈ ℝ, let H_α be a single-state SRM with two possible outputs, D([α − 45, α + 45]) on ℓ1 and D([10, 100]) on ℓ2. Then for all −44 ≤ α, β ≤ 45 (α ≠ β) we have H_α ≁_E H_β, yet both are ε_c-consistent with X. Thus, SRMI must choose solutions to Task 1 so that any generated hypothesis sequence will converge to the same limit T. This is done in the Estimates step of Algorithm 1, which simulates runs of H′ (the inferred solution to Task 1) on the consistent traces in A, yielding final estimates of the means based on empirical rewards (shown in Algorithm 2). The need to filter for consistency is due to the fact that the set of SRMs consistent with A is often strictly smaller than the set of those consistent with X. As X grows to capture more information about the environment reward, the need for filtering recedes.

Using the midrange estimator for the outputs in H ensures that it remains consistent with the counterexamples. As new Type 1 counterexamples will always be found eventually, Estimates runs infinitely often, which guarantees convergence to the correct means.

Algorithm 2: Estimates
1   Initialize sets r(v, ℓ) for states v ∈ V and ℓ ∈ 2^P;
2   Initialize empty output function σ′;
3   for (λ, ρ) ∈ A do
4       Skip (λ, ρ) if not ε_c-consistent with H;
5       Simulate a run of H on λ, disregarding its outputs, and record the rewards from ρ in the corresponding r(v, ℓ) sets;
6   end
7   for v ∈ V, ℓ ∈ 2^P do
8       µ′ ← (max r(v, ℓ) + min r(v, ℓ))/2;
9       Set σ′(v, ℓ) = U[µ′ − ε_c, µ′ + ε_c];
10  end
11  Return H with σ′ as the output function;

Convergence to an optimal policy.

Theorem 1. Given Assumption 1 on the output alphabet of the environment SRM and an ε-greedy exploration strategy, SRMI converges in the limit to an SRM that is equivalent in expectation to the true environment one.

The proof of Theorem 1 follows similar reasoning to the convergence proof for JIRP. We first establish that SRMI does not revisit structurally isomorphic SRMs. As there are only finitely many such structures for a fixed maximal size, SRMI "settles" in a final structure. Then Assumption 1 guarantees that Estimates will converge to the correct expectations.

Corollary 1. SRMI converges to an optimal policy in the limit.

Corollary 1 follows from Theorem 1 due to Lemma 1, which guarantees that two SRMs that are equivalent in expectation induce the same optimal policy, and finally due to the fact that QRM with stochastic reward machines converges to an optimal policy.

5 Results

To assess the performance of SRMI, we have implemented a Python 3 prototype based on code by Toro Icarte et al. [17], which we have made publicly available². We compare SRMI to the baseline algorithm and the JIRP algorithm for classical reward machines on two case studies: the Mining example from Section 2 and an example inspired by harvesting (which we describe shortly). Our primary metric is the cumulative reward averaged over the last 100 episodes. We conducted 10 independent runs for each algorithm, using Z3 [12] as the constraint solver. All experiments were conducted on a 3 GHz machine with 1.5 TB RAM.
² Source code available at https://github.com/corazza/srm.

Fig. 3: Results on the mining environment; (a) stochastic rewards, (b) non-stochastic rewards (cumulative reward versus steps for SRMI, the baseline, and JIRP).

Mining. Fig. 3a shows the comparison on the Mining environment. The interquartile ranges for the reward are drawn as shaded areas, while the medians are drawn as solid lines. For this case study, we have set the baseline algorithm to replay 20 traces per counterexample. As can be seen from the figure, SRMI converges faster to an optimal policy (reward 1) than both the baseline algorithm and JIRP. The latter times out because it is unable to deal with the noise properly and tries to infer larger and larger RMs.

Fig. 3b compares SRMI, the baseline, and JIRP on a non-stochastic version of the Mining environment, using the reward machine of Fig. 2a to define the rewards. All algorithms perform equally well. Thus, SRMI does not incur a runtime penalty, even when used in non-stochastic settings.

Harvest. The Harvest environment represents a crop-farming cycle. The agent is rewarded for performing a sequence of actions, P, W, H, S (plant, water, harvest, sell), and penalized for breaking it. The MDP states G, M, B (good, medium, bad) transition as given by the dynamics in Fig. 4b.

Fig. 4: Harvest environment; (a) SRM, with transitions labeled H ∧ B, U([0, 0]); S, U([1.6, 2.4]); ¬H, U([−1.2, −0.8]) (all missing transitions are self-loops with reward 0); (b) MDP dynamics over the states G, M, B (transition probabilities 0.1 and 0.8).

The labeling function L(s, a, s′) returns the transition, effectively making trajectories s0 a1 s1 . . . ak sk their own label sequences. The reward mean depends on the MDP state during the harvest action, as shown in Fig. 4a. The Harvest example is well suited for showing the benefits of SRMI over the baseline, because the probability that the agent will repeatedly see a given trajectory is very low.

Fig. 5: Results on the harvest environment; (a) stochastic rewards, (b) non-stochastic rewards.

Fig. 5a shows the comparison on the Harvest environment. SRMI was successful in learning the optimal policy, while the baseline algorithm got stuck in collecting the required number of samples (5), and JIRP again timed out. In a modified Harvest environment, without noise, the algorithms do equally well (Fig. 5b).

6 Conclusion

In this work we introduced stochastic reward machines as a general way of representing non-Markovian stochastic rewards in RL tasks, and the SRMI algorithm, which is able to infer an SRM representation of the environment reward based on traces and use it to learn the optimal policy. We have shown that SRMI is an improvement over prior methods.

References

1. Andreas, J., Klein, D., Levine, S.: Modular multitask reinforcement learning with policy sketches. In: ICML'2017. pp. 166-175. JMLR.org (2017)
2. Bacchus, F., Boutilier, C., Grove, A.J.: Rewarding behaviors. In: AAAI/IAAI, Vol. 2. pp. 1160-1167. AAAI Press / The MIT Press (1996)
3. Brafman, R.I., Giacomo, G.D., Patrizi, F.: LTLf/LDLf non-Markovian rewards. In: AAAI. pp. 1771-1778. AAAI Press (2018)
4. Camacho, A., Icarte, R.T., Klassen, T.Q., Valenzano, R.A., McIlraith, S.A.: LTL and beyond: Formal languages for reward function specification in reinforcement learning. In: IJCAI. pp. 6065-6073. ijcai.org (2019)
5. Corazza, J., Gavran, I., Neider, D.: Reinforcement learning with stochastic reward machines. Proceedings of the AAAI Conference on Artificial Intelligence 36(6), 6429-6436 (Jun 2022). https://doi.org/10.1609/aaai.v36i6.20594, https://ojs.aaai.org/index.php/AAAI/article/view/20594
6. Everitt, T., Krakovna, V., Orseau, L., Legg, S.: Reinforcement learning with a corrupted reward channel. In: IJCAI. pp. 4705-4713. ijcai.org (2017)
7. Furelos-Blanco, D., Law, M., Russo, A., Broda, K., Jonsson, A.: Induction of subgoal automata for reinforcement learning. In: AAAI. pp. 3890-3897. AAAI Press (2020)
8. Gaon, M., Brafman, R.I.: Reinforcement learning with non-Markovian rewards. In: AAAI. pp. 3980-3987. AAAI Press (2020)
9. Hasanbeig, M., Jeppu, N.Y., Abate, A., Melham, T., Kroening, D.: DeepSynth: Automata synthesis for automatic task segmentation in deep reinforcement learning. In: AAAI. pp. 7647-7656. AAAI Press (2021)
10. Icarte, R.A.T., Waldie, E., Klassen, T., Valenzano, R., Castro, M.P., McIlraith, S.A.: Learning reward machines for partially observable reinforcement learning. In: NeurIPS (2019)
11. Jothimurugan, K., Alur, R., Bastani, O.: A composable specification language for reinforcement learning tasks. In: NeurIPS. pp. 13021-13030 (2019)
12. de Moura, L.M., Bjørner, N.: Z3: an efficient SMT solver. In: TACAS. Lecture Notes in Computer Science, vol. 4963, pp. 337-340. Springer (2008)
13. Neary, C., Xu, Z., Wu, B., Topcu, U.: Reward machines for cooperative multi-agent reinforcement learning. arXiv preprint arXiv:2007.01962 (2020)
14. Neider, D., Gaglione, J., Gavran, I., Topcu, U., Wu, B., Xu, Z.: Advice-guided reinforcement learning in a non-Markovian environment. In: AAAI. pp. 9073-9080. AAAI Press (2021)
15. Romoff, J., Henderson, P., Piché, A., François-Lavet, V., Pineau, J.: Reward estimation for variance reduction in deep reinforcement learning. In: CoRL. Proceedings of Machine Learning Research, vol. 87, pp. 674-699. PMLR (2018)
16. Sutton, R.S., Barto, A.G.: Reinforcement learning: An introduction. MIT Press (2018)
17. Toro Icarte, R., Klassen, T.Q., Valenzano, R.A., McIlraith, S.A.: Using reward machines for high-level task specification and decomposition in reinforcement learning. In: ICML. Proceedings of Machine Learning Research, vol. 80, pp. 2112-2121. PMLR (2018)
18. Toro Icarte, R., Klassen, T.Q., Valenzano, R.A., McIlraith, S.A.: Reward machines: Exploiting reward function structure in reinforcement learning. CoRR abs/2010.03950 (2020)
19. Velasquez, A., Beckus, A., Dohmen, T., Trivedi, A., Topper, N., Atia, G.: Learning probabilistic reward machines from non-Markovian stochastic reward processes (2021)
20. Velasquez, A., Bissey, B., Barak, L., Beckus, A., Alkhouri, I., Melcer, D., Atia, G.K.: Dynamic automaton-guided reward shaping for Monte Carlo tree search. In: AAAI. pp. 12015-12023. AAAI Press (2021)
21. Wang, J., Liu, Y., Li, B.: Reinforcement learning with perturbed rewards. In: AAAI. pp. 6202-6209. AAAI Press (2020)
22. Xu, Z., Gavran, I., Ahmad, Y., Majumdar, R., Neider, D., Topcu, U., Wu, B.: Joint inference of reward machines and policies for reinforcement learning. In: ICAPS. pp. 590-598. AAAI Press (2020), https://aaai.org/ojs/index.php/ICAPS/article/view/6756
A Additional results
Figure 6 shows the comparison between SRMI and standard deep reinforcement learning algorithms, DDQN and DHRL (deep hierarchical reinforcement learning). The DDQN agent was provided the past 200 labels as inputs to the network. The DHRL agent was based on an implementation provided in Toro Icarte et al. [17], who expanded the HRL options framework for use with reward machines (we note that DHRL in this case assumes access to the SRM structure in order to generate options). SRMI significantly outperforms the alternatives, which is in line with similar results for the non-stochastic setting [14].

Fig. 6: Comparison of SRMI with DDQN and DHRL. (a) Mining environment; (b) Harvest environment (reward vs. steps).

Note that, given a bounded symmetric distribution D, the midpoint µ′ is an unbiased estimator for E[D]. We use it because it also works as a range estimate for D that respects ϵc-consistency, whereas the empirical average can break it. Asymmetric distributions complicate the exposition, but in that case we do use the empirical average for estimating E[D] (see Algorithm 4 in Appendix B).

B Asymmetric noise distributions
In Algorithm 1 (SRMI) we have assumed that rewards follow symmetric distributions; more precisely, we assumed E[D([a, b])] = (a + b)/2 holds for all reward distributions in the environment SRM. Algorithms 3 and 4 show how to extend SRMI towards dropping this assumption.
Symmetric reward distributions allow one to use a single number to store information about both the mean of a hypothesis output and its range. In other words, if µ is the mean estimate for some hypothesis output d = D([a, b]) (b − a = 2ϵc), one can safely set d = U([µ − ϵc, µ + ϵc]). However, if rewards can be asymmetric then µ is not necessarily the center of [a, b], and setting d = U([µ − ϵc, µ + ϵc]) can break the detection of counterexamples.

Algorithm 3: SRMI (asymmetric rewards)
1 Initialize SRMs H, G with a set of states V;
2 Initialize a set of q-functions Q = {qv | v ∈ V};
3 Initialize X = ∅ and A = ∅;
4 for episode n = 1, 2, . . . do
5   (λ, ρ, Q) ← QRM_episode(G, Q);
6   add (λ, ρ) to A;
7   if H not ϵc-consistent with (λ, ρ) then
8     add (λ, ρ) to X;
9     if found SRM Z isomorphic with H and ϵc-consistent with X then
10      H′ ← Z;
11    else
12      infer H′ from X;
13    end
14    (H, G) ← Estimates(H′, A);
15    reinitialize Q;
16  end
17 end

Algorithm 4: Estimates (asymmetric rewards)
1 Initialize sets r(v, ℓ) for states v ∈ V and ℓ ∈ 2^P;
2 Initialize empty output functions σ′, σ̂;
3 for (λ, ρ) ∈ A do
4   Skip (λ, ρ) if not ϵc-consistent with H;
5   Simulate a run of H on λ, disregarding its outputs, and record rewards from ρ in the corresponding r(v, ℓ) sets;
6 end
7 for v ∈ V, ℓ ∈ 2^P do
8   µ′ ← (max r(v, ℓ) + min r(v, ℓ))/2;
9   µ̂ ← E[r(v, ℓ)];
10  Set σ′(v, ℓ) = U[µ′ − ϵc, µ′ + ϵc];
11  Set σ̂(v, ℓ) = U[µ̂ − ϵc, µ̂ + ϵc];
12 end
13 Return H, G constructed from the input SRM with σ′, σ̂ as output functions;

Algorithm 3 solves this problem by using an auxiliary hypothesis G. This is a purely technical choice; one may also implement a richer representation of SRMs that can store both pieces of information and thus eliminate the need for an auxiliary hypothesis. The main hypothesis H has outputs d = D([a, b]) that are always ϵc-consistent with X, but do not necessarily follow the best available mean estimates. On the other hand, the auxiliary hypothesis G eschews consistency in favor of using the best estimates. QRM is performed over the auxiliary hypothesis, which captures information about the environment reward that is most immediately beneficial for learning the optimal policy. Hypothesis improvement (and counterexample detection) is performed over the main hypothesis.
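To make the two estimators in Algorithm 4 concrete, here is a minimal Python sketch of the per-transition computation, contrasting the midrange µ′ (kept for the consistent hypothesis H) with the empirical mean µ̂ (kept for the auxiliary hypothesis G). The function name and the tuple encoding of U([a, b]) are illustrative choices, not the paper's artifact.

```python
import statistics

def estimates_for_transition(samples, eps_c):
    """Given empirical rewards r(v, l) for one transition and the
    dispersion bound eps_c, return the two uniform output estimates
    of Algorithm 4 as (lo, hi) tuples (sketch; names illustrative)."""
    mu_mid = (max(samples) + min(samples)) / 2   # midrange: keeps eps_c-consistency
    mu_hat = statistics.fmean(samples)           # mean: unbiased under asymmetric noise
    sigma_prime = (mu_mid - eps_c, mu_mid + eps_c)  # output for main hypothesis H
    sigma_hat = (mu_hat - eps_c, mu_hat + eps_c)    # output for auxiliary hypothesis G
    return sigma_prime, sigma_hat

# Every sample lies within eps_c of the midrange (when the sample range
# is at most 2*eps_c), so sigma_prime cannot be refuted by its own samples:
rewards = [0.82, 1.18, 0.95, 1.31]
h_out, g_out = estimates_for_transition(rewards, eps_c=0.25)
assert all(h_out[0] <= r <= h_out[1] for r in rewards)
```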
The Estimates function now returns both a new consistent hypothesis H and the new auxiliary hypothesis G. They are identical save for using different estimators. Using the arithmetic mean for G ensures the SRM outputs used in QRM are unbiased when empirical rewards are sampled from asymmetric distributions. Using the midrange for H preserves ϵc-consistency.

C Proofs
C.1 Proof of Lemma 1 and Theorem 1
To prove Lemma 1 and Theorem 1 we first introduce an equivalence relation ∼ over SRMs: for A = (V, vI, 2^P, O, δ, σ) and B = (V′, vI′, 2^P, O′, δ′, σ′), A ∼ B iff they are graph-isomorphic, i.e. if there exists a bijection I : V → V′ s.t. I(vI) = vI′ and δ′(I(v), ℓ) = I(δ(v, ℓ)) for all v ∈ V and ℓ ∈ 2^P. For an isomorphism I : A ↦ B we define a function fI : V × 2^P → R that "fixes" the outputs of A so that it is equivalent in expectation to B, i.e. E[σ(v, ℓ) + fI(v, ℓ)] = E[σ′(I(v), ℓ)].

Proof of Lemma 1 Lemma 1 states that machines that are equivalent in expectation induce the same optimal policy over an MDP.
Proof. Let SRMs A = (V^A, vI^A, 2^P, O^A, δ^A, σ^A) and B = (V^B, vI^B, 2^P, O^B, δ^B, σ^B) be equivalent in expectation. First, assume that A and B are graph-isomorphic with I : A ↦ B. We now define a new SRM A′ which shares the state set and transition function with A, but for a given state v ∈ V^A and label ℓ, A′ outputs the corresponding distribution from B: let σ(v, ℓ) = σ^B(I(v), ℓ) and define A′ = (V^A, vI^A, 2^P, O^B, δ^A, σ). Because A ∼ B, we have B ∼ A′. Furthermore, B and A′ are equivalent in the sense of the CDFs that they output. Because B is just A′ with different objects for its states (given by I), they must induce the same optimal policy. For A and A′, the argument is that for every policy π((s, v), a) the Q-functions q_π^A and q_π^A′ (induced by A and A′ respectively) in the cross-product MDP are identical by definition, as they only care about expected reward: the process of obtaining A′ by overwriting outputs in A preserves expected values. To conclude, when A and B are equivalent in expectation and graph-isomorphic, they induce the same optimal policy.
The general case, when A and B are equivalent in expectation but not graph-isomorphic, is obtained by dropping down to classical non-stochastic reward machines Â and B̂, where equivalence in expectation implies equivalence. Â (B̂) is constructed by overwriting outputs in A (B) with their expected values. The previous reasoning then establishes that A and Â, since they are equivalent in expectation and graph-isomorphic, induce the same optimal policy (similarly for B and B̂). Finally, A and B induce the same optimal policy because Â and B̂ do.

Inference of SRMs Before proving Theorem 1 we show that SRMI finds the counterexamples necessary to improve its hypothesis. First, we formalize the notion of attainable trajectories in Definition 5.
Definition 5. Let M = (S, sI, A, p, P, L) be a labeled MDP and m ∈ N a natural number. We call a trajectory ζ = s0a1s1 . . . aksk ∈ (S × A)∗ × S m-attainable if (i) k ≤ m and (ii) p(si, ai+1, si+1) > 0 for each i ∈ {0, . . . , k − 1}. Moreover, we say that a trajectory ζ is attainable if there exists an m ∈ N such that ζ is m-attainable.
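As a concrete reading of Definition 5, the following sketch checks m-attainability of a trajectory against a labeled MDP's transition probabilities. Encoding p as a dict from (s, a, s′) to a probability is an illustrative choice, not anything fixed by the paper.

```python
def is_m_attainable(trajectory, p, m):
    """trajectory = [s0, a1, s1, ..., ak, sk] (alternating states/actions);
    p maps (s, a, s') -> transition probability. Checks Definition 5:
    (i) length k <= m, and (ii) every step has non-zero probability."""
    states, actions = trajectory[::2], trajectory[1::2]
    k = len(actions)
    if k > m:                       # condition (i)
        return False
    return all(p.get((states[i], actions[i], states[i + 1]), 0.0) > 0.0
               for i in range(k))   # condition (ii)

# Tiny example with hypothetical states/actions:
p = {("s0", "a", "s1"): 0.9, ("s1", "b", "s2"): 0.5}
assert is_m_attainable(["s0", "a", "s1", "b", "s2"], p, m=2)
assert not is_m_attainable(["s0", "a", "s1", "b", "s2"], p, m=1)
```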
We also refer to label sequences generated by attainable trajectories as attainable label sequences.
Lemma 2. SRMI with episode length m explores every m-attainable trajectory with a non-zero probability.
Lemma 2 is proved by induction over the length of trajectories. The proof for an equivalent result can be found in Xu et al. [22].
Corollary 2. SRMI with episode length m explores every m-attainable trajectory infinitely often.
Corollary 2 follows from Lemma 2 because, in the limit, the probability of not observing a trajectory falls to zero, so it will be observed again with probability 1.
Lemma 3 provides a lower bound on the maximal episode length which allows SRMI to observe sufficient evidence of structural dissimilarity between the hypothesis and the environment SRM. It states that if there are attainable trajectories which can generate a counterexample of Type 2, then there exists a Type 2 counterexample that "fits" into an episode.
Lemma 3. For an MDP M and environment SRM T (with dispersion ϵc), let H be the current hypothesis SRMI is using to explore M. Let ζ = s0a1s1 . . . aksk be an attainable trajectory in M, λ = ℓ1ℓ2 . . . ℓk the corresponding label sequence, and T(ℓ1ℓ2 . . . ℓk) = d1d2 . . . dk (considered as a sequence of random variables) the output of the environment SRM over λ. If (ℓ1ℓ2 . . . ℓk, d1d2 . . . dk) is a plausible Type 2 counterexample, that is, if for all G ∼ H (including H) we have Pr(G ϵc-inconsistent with (λ, ρ) | ρ ∼ d1d2 . . . dk) > 0, then SRMI with episode length m ≥ 2^{|M|+1}(|T| + 1) − 1 will eventually observe a Type 2 counterexample.
Proof. First assume that (1) there is a k-attainable label sequence λ, k > m ≥ 2^{|M|+1}(|T| + 1) − 1, which can be sampled to generate a Type 2 counterexample for H, and (2) there are no such n-attainable label sequences for n ≤ m. Let G be an SRM s.t. G ∼ H and G ∼E T over all ℓ1ℓ2 . . . ℓm. G exists because there are no Type 2 counterexamples for H over n-attainable label sequences for n ≤ m. Let T′ (G′) be a deterministic RM s.t. T′ ∼E T (G′ ∼E G). T′ and G′ agree on all label sequences up to length m (m < k), but λ is a k-attainable label sequence s.t. T′(λ) ≠ G′(λ). However, this contradicts the automata-theoretic results in Xu et al. [22].
Lemma 3 and Corollary 2 together prove Corollary 3.
Corollary 3. Given sufficient episode length, SRMI almost surely finds Type 2 counterexamples if they exist.
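The episode-length requirement of Lemma 3 is a quantity one can compute before training; the one-line helper below makes the bound explicit (a sketch; |M| and |T| are the MDP and environment-SRM sizes defined in the text).

```python
def min_episode_length(mdp_size: int, srm_size: int) -> int:
    """Episode-length bound from Lemma 3: m >= 2^(|M|+1) * (|T|+1) - 1."""
    return 2 ** (mdp_size + 1) * (srm_size + 1) - 1

print(min_episode_length(5, 3))  # 2**6 * 4 - 1 = 255
```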
Now that we have shown that SRMI will continue to find "structural flaws" in its hypothesis if they exist, we will show that there is an endpoint of this iterative structural improvement. More formally, Lemma 4 states that SRMI does not revisit structurally isomorphic SRMs.
Lemma 4. Let H1H2 · · · be the sequence of hypotheses generated by SRMI. Let [Hi1][Hi2] · · · (with i1 = 1) be the sequence of ∼-equivalence classes obtained from H1H2 · · · by collapsing equivalent neighboring elements into their class. Then for all j < k we have [Hij] ≠ [Hik].
Proof. The sequence H1H2 · · · is induced by the counterexample sequence X1X2 · · · (with Xj ⊂ Xk for all j < k) so that Hk is consistent with Xk−1 for all k > 1 (and thus with every Xj for 1 ≤ j ≤ k − 1). By definition, a counterexample of Type 1 induces a new hypothesis that remains in the same class, while a Type 2 counterexample causes a "jump" to the next class. Assume [Hij] = [Hik] for some j < k. Then there exists at least one intermediate class [Hin] (j < n < k) s.t. [Hin] ∉ {[Hij], [Hik]} (otherwise Hij · · · Hik gets collapsed into [Hij]). Let I : Hij ↦ Hik be an isomorphism and fI : V^{Hij} × 2^P → R the function that "fixes" outputs of Hij so that it is equivalent in expectation to Hik. By applying fI to the outputs of Hij we obtain an SRM that is ϵc-consistent with Xik−1 (as all output transitions from any Hi have the same range 2ϵc). But since Xin−1 ⊂ Xik−1, this makes Hin ∈ [Hij], as all new counterexamples between Hij and Hin (and indeed Hik) were of Type 1, a contradiction.
Due to Corollary 3, Lemma 4, and the fact that there are only finitely many ∼-equivalence classes for a fixed maximum number of states, we obtain Corollary 4.
Corollary 4. There exists a final equivalence class [Hf] which will never be left.
With H^f_1 H^f_2 · · · we denote the subsequence of H1H2 · · · comprised of elements from [Hf].

Correcting output distributions The following two lemmas state that in the Estimates step, eventually the reward sample sets r(v, ℓ) contain samples from distributions with equal expectations (Lemma 5), and eventually the mean estimators µ′ computed from those sets are unbiased (Lemma 6).
Lemma 5. For a machine H in the [Hf] subsequence let CH be the set of all traces in A consistent with H, and rH(v, ℓ) = {x ∈ R : (λℓ, ρx) ∈ Pref(CH), H in state v after reading λ} be the set of empirical rewards from traces in CH for the transition from v on ℓ. Let d1d2 . . . dn denote the different distributions that elements of rH(v, ℓ) are sampled from. Then E[d1] = · · · = E[dn].
Proof. We prove Lemma 5 in the case n = 2; the general statement follows easily. Let rH(v, ℓ) contain samples from d1 and d2 such that E[d1] ≠ E[d2]. For a distribution d let R(d) ⊂ R be the image of d. Let [α, β] be the shortest interval such that [α, β] ⊇ R(d1) ∪ R(d2). If β − α ≤ 2ϵc, Assumption 1 is broken; thus β − α > 2ϵc. But then H ∉ [Hf], because a Type 2 counterexample exists and, due to Corollary 3, it will eventually be found. SRMI eventually observes two reward prefixes r1r2 . . . rj and r1r2 . . . rk, where rj ∼ d1, rk ∼ d2, and |rj − rk| > 2ϵc, such that the same transition in H would have to account for both of them. The reward sequence that is observed later comes from a Type 2 counterexample, because it is impossible for any G ∼ H to be consistent with both traces that the two reward sequences come from, as (due to the graph isomorphism) they would belong to the same transition in G, which breaks ϵc-consistency with X.
Lemma 6. For a machine H in the [Hf] subsequence let rH(v, ℓ) and d1d2 . . . dn be defined as in Lemma 5. Considering samples in rH(v, ℓ) = {R1, . . . , Rk} as random variables (for all 1 ≤ i ≤ k there is a 1 ≤ j ≤ n s.t. Ri ∼ dj), let µ′ = (R(1) + R(k))/2 be the midrange estimator used in Estimates, where R(1) and R(k) denote the smallest and largest order statistics. Then E[µ′] = E[dj] for all 1 ≤ j ≤ n.
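A quick numerical sanity check of Lemma 6 (a sketch, not part of the paper's artifact): for a bounded symmetric distribution the midrange of i.i.d. samples is an unbiased estimator of the mean, while for a skewed distribution on the same support it generally is not, which is exactly why Algorithm 4 keeps a separate mean estimate for the auxiliary hypothesis.

```python
import random

def midrange(xs):
    return (min(xs) + max(xs)) / 2

def mean_of_midranges(sample, n=50, trials=20_000, seed=0):
    rng = random.Random(seed)
    return sum(midrange([sample(rng) for _ in range(n)])
               for _ in range(trials)) / trials

# Symmetric U([0.8, 1.2]): true mean 1.0; the midrange estimator agrees.
sym = mean_of_midranges(lambda rng: rng.uniform(0.8, 1.2))
# Skewed triangular on the same support (mode at 0.8): true mean
# (0.8 + 1.2 + 0.8)/3 ~ 0.933, yet the midrange pulls toward the
# interval midpoint 1.0, i.e. it is biased here.
skew = mean_of_midranges(lambda rng: rng.triangular(0.8, 1.2, 0.8))
print(f"symmetric: {sym:.3f}, skewed: {skew:.3f}")  # ~1.000 vs ~0.97
```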
Proof. Without loss of generality, let rH(v, ℓ) contain samples from a single environment distribution d. First note that all inconsistent traces in A (once the current hypothesis H ∈ [Hf]) are necessarily of Type 1. If there was a trace (λ, ρ) ∈ A that was a Type 2 counterexample, its attainable label sequence would be encountered infinitely often, and with probability 1 another Type 2 counterexample would be reproduced, which is a contradiction with H ∈ [Hf]. Finally, new Type 1 counterexamples are never introduced into A: once such traces are encountered, they are added into X and the hypothesis is "fixed" to account for them. The remaining finite number of Type 1 counterexamples in A are all eventually accounted for, as their label sequences are explored and more extremal samples are collected. This establishes that once some finite number of hypotheses in [Hf] have been generated, CH = A. Then all R1 · · · Rn in rH(v, ℓ) are independent samples of d, and the lemma follows because the midrange estimator is unbiased.

Proof of Theorem 1 Theorem 1 states that the sequence H1H2 · · · generated by SRMI converges to an SRM that is equivalent in expectation to the true environment one. As Corollary 4 established that SRMI enters a final ∼-class, we state the proof for the H^f_1 H^f_2 · · · subsequence.
Proof. Without loss of generality, assume that for any two SRMs H and G in H^f_1 H^f_2 · · · the graph isomorphism I : H ↦ G is the identity, i.e. machines in H^f_1 H^f_2 · · · share the same state set V and δH(v, ℓ) = δG(v, ℓ) for all v ∈ V and ℓ ∈ 2^P. To prove that H^f_1 H^f_2 · · · converges to the true environment reward machine, we look at the corresponding sequences r^f_1(v, ℓ) r^f_2(v, ℓ) · · · (for all v ∈ V, ℓ ∈ 2^P) of sets of empirical rewards from which SRMI estimates the expectations for outputs of hypothesis SRMs. Since there will be no new Type 2 counterexamples, and because states and transitions of the hypotheses in H^f_1 H^f_2 · · · are assumed to be fixed with I, for each pair (v, ℓ) this sequence of sample sets will only grow, and will contain only samples from a fixed set of distributions. This is a consequence of the fact that the environment SRM is fixed as well, so any attainable prefix of labels always induces the same pair of runs over the fixed hypotheses and the environment SRM. Due to Lemma 5, these distributions have equal expectations. Lemma 6 shows the estimators computed from these sets are unbiased. Since there will be infinitely many Type 1 counterexamples, we conclude that the estimators converge to the correct expectations; that is, H^f_1 H^f_2 · · · converges to an SRM that is equivalent in expectation to the true environment one.
arXiv:2510.14835v1 [physics.acc-ph] 16 Oct 2025
OBVIOUS AND NON-OBVIOUS ASPECTS OF DIGITAL SELF-EXCITED-LOOPS FOR SRF CAVITY CONTROL
Larry Doolittle∗, Shreeharshini Murthy, LBNL; Matei Guran, Lennon Reyes, Shrividhyaa Sankar Raman, Philip Varghese, FNAL
∗LRDoolittle@lbl.gov

Abstract
In 1978, Delayen showed how Self-Excited Loops (SEL) can be used to great advantage for controlling narrow-band SRF cavities. Its key capability is establishing closed-loop amplitude control early in the setup process, stabilizing Lorentz forces to allow cavity tuning and phase loop setup in a stable environment. As people around the world implement this basic idea with modern FPGA DSP technology, multiple variations and operational scenarios creep in that have both obvious and non-obvious ramifications for latency, feedback stability, and resiliency. This paper will review the key properties of a Delayen-style SEL when set up for open-loop, amplitude-stabilized, and phase-stabilized modes. Then the original analog circuit will be compared and contrasted with the known variations of digital CORDIC-based implementations.

PHYSICS
The state-space representation for cavity voltage V is
dV/dt = aV + bK + cI
where (following Heaviside) all of V, K, I, a, b, and c are complex numbers, varying slowly compared to the timescale of the RF resonance itself. We ignore the units of V and K in this discussion, except to note that b and K have to be self-consistent. a necessarily has units of s⁻¹. This discussion will also ignore the beam-loading term cI.
SRF cavities have the confusing property that a is not constant; as their sheet-metal construction bends with the acoustic environment and Lorentz forces, the imaginary part of a (detuning) varies in real time. That means that the governing equation is not LTI (Linear Time Invariant), although for short time scales that's still a useful approximation.
One of Delayen's key 1978 [1] contributions was to start from the usual Lorentzian equation for a resonator output as a function of detuning
V/K = 1/(1 + jχ), where χ = ℑ(a)/ℜ(a).
That can be inverted to give the needed drive
K = V(1 + jχ).
A single feedback loop for the reactive component of the drive signal will therefore "fix" both amplitude and phase fluctuations created by detuning. His diagram for the requisite hardware is shown in Fig. 2.
The observed benefit in that era of very narrow-band helix resonators was that the field could be brought to its operating point without regard for fine frequency tuning. Even the amplitude feedback loop can be engaged while detuned; with the amplitude loop closed, potential ponderomotive instabilities are strongly suppressed [2]. Finally, once the resonance is adjusted to the operational value, the tuning loop can be closed. While amplitude and phase loops should be closed (not clipping) during beam operation, temporary frequency excursions that clip the phase feedback loop will naturally return to lock without manual or automated intervention. More on that later.

ABSTRACT SEL BEHAVIOR AND FEEDBACK
Figure 1: Textbook SEL oscillator (phase adjust, limiter, resonator with Lorentz response).
Figure 2: Delayen 1978 cavity controller (SEL core plus amplitude and phase detectors, P-I controllers, and an IQ modulator).
A traditional analog SEL oscillator has a phase adjustment to compensate for cable lengths to give pure positive feedback. The resulting system "starts from noise." After start-up, the limiter limits, and the amplitude feedback gain drops to 0. Phase feedback literally has a gain of +1.00. Delayen added amplitude and phase stabilization loops to that core design. This stiff negative PI feedback totally overwhelms that baseline SEL "positive feedback."
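To see the K = V(1 + jχ) prescription act on the governing equation, here is a minimal forward-Euler sketch with made-up numbers (20 Hz half-bandwidth, 8 Hz detuning, drive coupling b normalized to |ℜ(a)|); none of these values come from the paper.

```python
import math

# Envelope model dV/dt = a*V + b*K, with illustrative numbers.
w_half = 2 * math.pi * 20.0          # cavity half-bandwidth, rad/s
b = w_half                           # assumed normalization so |V| -> 1
dt = 1e-5
V = 0j
for _ in range(200_000):             # 2 s of simulated time
    detune = 2 * math.pi * 8.0       # stand-in for a microphonic offset, rad/s
    a = complex(-w_half, detune)
    chi = a.imag / a.real            # chi = Im(a)/Re(a), as in the text
    K = 1.0 * (1 + 1j * chi)         # Delayen drive K = V_set * (1 + j*chi)
    V += dt * (a * V + b * K)
print(abs(V), math.degrees(math.atan2(V.imag, V.real)))
# -> |V| settles near 1.0 and the phase near 0, despite the detuning
```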
CORDIC
The CORDIC structure was invented by Jack Volder in 1959 [3] to compute trigonometric functions using pure shift-and-add digital hardware. Every such CORDIC block produces
z_out = A · z_in · ∏_{i=0}^{n−1} exp(jσ_i θ_i)
θ_out = θ_in − Σ_{i=0}^{n−1} σ_i θ_i
where
θ_i = tan⁻¹ 2⁻ⁱ
A = ∏_{i=0}^{n−1} √(1 + 2⁻²ⁱ) ≈ 1.64676.
There are two standard mechanisms for choosing σ_i.
Rotation mode: θ_out ≈ 0, used to get z_out = A · z_in · exp(jθ_in). In the special case when z_in = 1/A, that yields cos θ_in and sin θ_in.
Vectoring mode: ℑ(z_out) ≈ 0, used to get θ_out = ∠z_in and ℜ(z_out) = A · |z_in|.
Figure 3: CORDIC internal structure (four shift-and-add stages, rotating by 45.00°, 26.57°, 14.04°, and 7.13°).
An FPGA implementation of CORDIC will therefore always have three inputs and three outputs. We create a visual vocabulary showing that in rotation mode (note the "R" in the CORDIC block), the θ output is unused (and is within rounding error of 0). Similarly, with vectoring mode (marked with "V") the y output is ignored. Additional special cases yield traditional polar-to-rectangular and rectangular-to-polar configurations.
Figure 4: CORDIC use cases (general case, rotation, vectoring, and the P→R and R→P special cases).
The CORDIC module supplied in LBNL's Bedrock code base [9] is pipelined, and the selection between vectoring and rotation modes can be made on a cycle-by-cycle basis, allowing a single CORDIC instance to be time-multiplexed between different tasks in the final circuit. See the LLRF 2013 tutorial [10].

PI CONTROLLER
The first rule of PID controllers is that there is no D. A block diagram of a combined proportional and integral implementation is shown, together with the visual vocabulary used here to represent it.
Figure 5: PI controller structure and representation (set point subtraction, K_P path, and K_I followed by an integrator with configurable saturation).
Clipping of the SEL phase-tracking loop is an essential part of the process! That clip acts on the output of one of these PI controllers; it therefore needs a zero-windup clipping implementation, as well as good runtime control of the clip value. This block diagram shows the I term as a gain followed by the (saturating) integrator; the reverse doesn't make sense when there is a possibility of run-time gain adjustments. This circuit can be "reset" (force output and integrator state to zero) as needed by setting the saturation level to zero.
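Here is a compact floating-point model of the CORDIC recursion above, supporting both σ-selection rules (a sketch for intuition; a hardware version uses integer shift-and-add stages as in Fig. 3, pipelined as in Bedrock).

```python
import math

def cordic(x, y, theta, n=16, mode="rotate"):
    """n-stage CORDIC. Rotation mode drives theta -> 0 while rotating
    (x, y) by the input angle; vectoring mode drives y -> 0, recovering
    angle and (gain-scaled) magnitude. Assumes x > 0 in vectoring mode."""
    for i in range(n):
        sigma = 1 if (theta > 0 if mode == "rotate" else y < 0) else -1
        x, y = x - sigma * y * 2**-i, y + sigma * x * 2**-i  # shift-and-add
        theta -= sigma * math.atan(2**-i)                    # angle accumulator
    return x, y, theta

A = math.prod(math.sqrt(1 + 4**-i) for i in range(16))  # gain, ~1.64676

# Rotation mode as a sin/cos generator: z_in = 1/A yields cos and sin.
c, s, _ = cordic(1 / A, 0.0, math.radians(30))
print(round(c, 4), round(s, 4))       # 0.866, 0.5

# Vectoring mode as rectangular-to-polar: theta_out = angle, x_out = A*|z|.
mag, _, ang = cordic(3.0, 4.0, 0.0, mode="vector")
print(round(mag / A, 4), round(math.degrees(ang), 2))  # 5.0, 53.13
```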
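The zero-windup requirement just described can be captured in a few lines. This sketch is illustrative, not the Bedrock implementation; clamping both the integrator state and the summed output is one plausible reading of the reset behavior described above.

```python
class SaturatingPI:
    """P-I controller with a saturating (zero-windup) integrator."""
    def __init__(self, kp, ki, limit):
        self.kp, self.ki, self.limit = kp, ki, limit
        self.integ = 0.0
    def step(self, measurement, setpoint):
        err = setpoint - measurement
        # Gain before the integrator, then clamp the *state* itself:
        # no hidden wound-up value survives a later change of limit.
        self.integ = max(-self.limit, min(self.limit, self.integ + self.ki * err))
        # Clamping the sum as well makes limit = 0 force the output to zero.
        return max(-self.limit, min(self.limit, self.kp * err + self.integ))

pi = SaturatingPI(kp=0.5, ki=0.01, limit=1.0)
y = pi.step(measurement=0.2, setpoint=1.0)   # one control step
pi.limit = 0.0                               # "reset": state and output collapse to 0
```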
CONSIDERATIONS FOR OFF-FREQUENCY OPERATION
Simple phase detectors can create unpleasant results when faced with off-frequency operation. Clues for how to properly mitigate that come from the AD9901 chip (1996) [4]. In the SEL context, the output of the phase setpoint subtracter should be processed with a state machine. When the actual phase error wraps around from positive to negative, for instance, that condition needs to be registered, so the output of the phase detector stays saturated at the maximum positive value. The LBNL code base refers to this as a Stateful Phase Resolver.
Figure 6: AD9901 Functional Block Diagram.
When such a phase resolver is used, the reactive drive created by the phase loop will "stick" at that limit until the frequency error is removed, presumably by a tuner control loop not discussed here.

DIGITAL SEL TOPOLOGIES
A series of topologies for realizing an SEL in DSP is shown: LBNL [5], JLab [6], S-DALINAC [7], and BARC [8]. The GDR topology is shown for completeness. Each diagram assumes x and y (a.k.a. I and Q) inputs and outputs. Digital and/or analog down- and up-conversion, required to complete the LLRF system, are off-topic here.
Although details vary, they all share the core 1978-vintage feature of detecting phase errors, and using a feedback controller to generate a drive of the form 1 + jχ to fix both amplitude and phase errors.
The JLab design introduces an odd serialization of the amplitude and phase feedback paths. The third diagram suggests extending the JLab idea to re-orthogonalize the amplitude and phase loops. Now neither PI output is routed through the second (delay-creating) CORDIC.
The BARC design uses no CORDIC blocks; instead it builds a limiter out of squaring circuits and a local feedback loop. Its logic footprint is heavier on multipliers than the other designs, which is not a concern for typical FPGAs today. The lack of CORDIC means its phase feedback latency is lower than in the other designs shown.
Figure 7: LBNL (2015).
Figure 8: JLab (2008).
Figure 9: Suggested.
Figure 10: S-DALINAC (2011); dedicated amplitude ADC, P term only, I term routed to the tuner loop.
Figure 11: BARC (2014); loop and reference phase shifters, amplitude detector, and limiter.
Figure 12: GDR; real/imaginary setpoints with complex gain and complex multiply.

STABILITY
As the pole position a moves along the imaginary axis, the dynamics of the system also change. It's important to understand if and how that changes the stability of the feedback system. We assert that if the condition ℑ(a) ≪ ω_0dB is maintained, the system stability is not materially affected. A Nyquist stability graph with example numbers illustrates that.
Figure 13: Open- and closed-loop gains with detuning from −200 Hz to +200 Hz (cavity bandwidth 20 Hz, zero-dB crossing 20 kHz, controller zero 31.4 kHz, delay 2.39 µs).

OPERATION
It's important to understand how an SEL is intended to work when the resonator frequency deviates from nominal (e.g., microphonics) and the reactive drive required to stay locked exceeds the defined limits. The experimental results shown were presented at LLRF17 [11]. The simulated waveforms use an idealized feedback controller.
Figure 14: Measured waveforms: cavity and forward phase and amplitude, comparing 'GDR' phase-locked and 'SEL' resonance-tracking operation; phase-locking the SEL with clip limits on the Q component works as intended.
Figure 15: Simulated waveforms (half-bandwidth 20 Hz, reactive limit 0.6, idealized controller).

FUTURE WORK
Future work could attempt to achieve lower latency. One approach is to simply use higher clock speeds for the DSP; FPGAs are faster now than they were in the 2008-2014 time frame. There is also the possibility to add a direct P term, bypassing the relatively slow CORDIC steps. That would require careful management of both its gain coefficients and a fallback configuration if/when clipping is detected.
Leader/follower CORDIC tricks can easily reduce (by a factor of two) the latency of the two-CORDICs-in-series 100 101 102 103 104 105 Frequency (Hz) 20 0 20 40 60 80 100 120 140 Gain (dB) Cavity Bandwidth 20 Hz Zero-dB crossing 20 kHz Controller zero 31.4 kHz Delay 2.39 s Loop gain, detune -200 Hz Closed loop, detune -200 Hz Loop gain, detune -100 Hz Closed loop, detune -100 Hz Loop gain, detune 0 Hz Closed loop, detune 0 Hz Loop gain, detune 100 Hz Closed loop, detune 100 Hz Loop gain, detune 200 Hz Closed loop, detune 200 Hz Figure 13: Open- and closed-loop gains with detuning 0.00 0.05 0.10 0.15 0.20 0.25 0.30 Time (s) 0.6 0.4 0.2 0.0 0.2 0.4 0.6 Phase (rad) Cavity Forward 0.15 0.10 0.05 0.00 Cavity phase (rad) 0.6 0.4 0.2 0.0 0.2 0.4 0.6 Forward phase (rad) Measured `GDR' phase-locked `SEL' resonance-tracking 0.00 0.05 0.10 0.15 0.20 0.25 0.30 Time (s) 0.34 0.36 0.38 0.40 0.42 0.44 0.46 0.48 Amplitude (FS) 0.4 0.2 0.0 0.2 0.4 Real 0.4 0.2 0.0 0.2 0.4 Imag Phase-locking SEL with clip limits on Q component works as intended Figure 14: Measured waveforms 1.5 1.0 0.5 0.0 Normalized detune freq. half-BW 20 Hz; reactive limit 0.6 idealized controller 2 0 2 4 Phase Cavity Forward 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 t (s) 0.0 0.2 0.4 0.6 0.8 1.0 1.2 Forward amplitude Reactive Total Figure 15: Simulated waveforms phase-following function in the final suggested SEL topol- ogy, without affecting the feedback path. We wish for more code-sharing, in particular the ability to mix-and-match test-benches and implementations from labs around the world. Of course that will require surmounting barriers from licensing and disparate languages. NOT CONSIDERED • Units for and calibration of cavity state value • System-ID and drift correction, including PI gain setup [12] • Analog and/or digital down- and up-conversion • Mitigation of instabilities caused by high P-gain and nearby cavity passband modes • Tx channel linearization. • Beam loading corrections (timing-based feedforward) REFERENCES [1] Phase and Amplitude Stabilization of Superconducting Res- onators, J. Delayen, Ph.D. thesis, 1978, Caltech [2] Ponderomotive Instabilities, Microphonics, and RF Control, J. Delayen, USPAS June 2008 [3] The CORDIC Computing Technique, J. Volder, 1959 Proceed- ings of the Western Joint Computer Conference [4] AD9901 data sheet, Analog Devices, 1996 [5] Accelerator-On-Chip Simulation Engine, L. Doolittle et al., LLRF 2015, Shanghai [6] Development of a Digital Self-Excited Loop for Field Con- trol in High-Q Superconducting Cavities, J. Delayen et al., SRF2007, Beijing [7] Design and Commissioning of a Multi-frequency Digital Low Level RF Control System, M. Konrad et al., IPAC 2011, San Sebastián [8] Digital self-excited loop for a superconducting linac, G. Joshi et al., NIM A 2014 [9] Bedrock, https://github.com/BerkeleyLab/Bedrock [10] Avoiding Resource Overutilization in FPGAs, L. Doolittle, LLRF 2013, Lake Tahoe [11] LCLS-II LLRF prototype testing and characterization, L. Doolittle et al., LLRF 2017, Barcelona [12] Drift observations and mitigation in LCLS-II RF, A. Benwell et al., LLRF 2023, Gyeongju
OBVIOUS AND NON-OBVIOUS ASPECTS OF DIGITAL SELF-EXCITED-LOOPS FOR SRF CAVITY CONTROL

Larry Doolittle, Shreeharshini Murthy, LBNL; Matei Guran, Lennon Reyes, Shrividhyaa Sankar Raman, Philip Varghese, FNAL

Abstract

In 1978, Delayen showed how Self-Excited Loops (SEL) can be used to great advantage for controlling narrow-band SRF cavities. The SEL's key capability is establishing closed-loop amplitude control early in the setup process, stabilizing Lorentz forces to allow cavity tuning and phase-loop setup in a stable environment. As people around the world implement this basic idea with modern FPGA DSP technology, multiple variations and operational scenarios creep in that have both obvious and non-obvious ramifications for latency, feedback stability, and resiliency. This paper reviews the key properties of a Delayen-style SEL when set up for open-loop, amplitude-stabilized, and phase-stabilized modes. The original analog circuit is then compared and contrasted with the known variations of digital CORDIC-based implementations.

PHYSICS

The state-space representation for cavity voltage V is

  dV/dt = a V + b K + c I

where (following Heaviside) all of V, K, I, a, b, and c are complex numbers, varying slowly compared to the timescale of the RF resonance itself. We ignore the units of V and K in this discussion, except to note that b and K have to be self-consistent; a necessarily has units of s^-1. This discussion also ignores the beam-loading term cI.

SRF cavities have the confusing property that a is not constant: as their sheet-metal construction bends with the acoustic environment and Lorentz forces, the imaginary part of a (the detuning) varies in real time. That means the governing equation is not LTI (Linear Time Invariant), although for short time scales that is still a useful approximation.

One of Delayen's key 1978 [1] contributions was to start from the usual Lorentzian equation for a resonator's output as a function of detuning,

  V/K = 1/(1 + jχ),  where χ = Im(a)/Re(a).

That can be inverted to give the needed drive

  K = V (1 + jχ).

A single feedback loop for the reactive component of the drive signal will therefore "fix" both amplitude and phase fluctuations created by detuning. His diagram for the requisite hardware is shown in Figure 2.
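As a sanity check on that algebra, here is a minimal Python sketch (illustrative numbers only, not from any production LLRF code) that integrates the envelope equation with the beam term dropped and confirms that a drive K = V_set(1 + jχ) parks the cavity at the requested voltage even while detuned:

```python
import numpy as np

# Envelope model dV/dt = a*V + b*K with the beam-loading term cI ignored.
# Illustrative numbers: 20 Hz half-bandwidth, 100 Hz detune.
w_half = 2 * np.pi * 20.0            # -Re(a), rad/s
detune = 2 * np.pi * 100.0           # Im(a), rad/s (wanders in real life)
a = -w_half + 1j * detune            # cavity pole, units of 1/s
b = w_half                           # drive coupling, so V = K on resonance

chi = a.imag / a.real                # chi = Im(a)/Re(a)
V_set = 1.0 + 0.0j                   # requested cavity voltage (unitless)
K = V_set * (1 + 1j * chi)           # inverted Lorentzian: K = V*(1 + j*chi)

V, dt = 0.0j, 1e-4
for _ in range(20_000):              # 2 s of forward-Euler integration
    V += dt * (a * V + b * K)
print(abs(V), np.degrees(np.angle(V)))   # -> ~1.0 amplitude, ~0 deg phase
```

Doubling the detuning doubles the reactive part of K but leaves the steady-state cavity voltage pinned at V_set, which is exactly the property the SEL's single reactive feedback loop exploits.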
The observed benefit in that era of very narrow-band helix resonators was that the field could be brought to its operating point without regard for fine frequency tuning. Even the amplitude feedback loop can be engaged while detuned; with the amplitude loop closed, potential ponderomotive instabilities are strongly suppressed [2]. Finally, once the resonance is adjusted to the operational value, the tuning loop can be closed. While the amplitude and phase loops should be closed (not clipping) during beam operation, temporary frequency excursions that clip the phase feedback loop will naturally return to lock without manual or automated intervention. More on that later.

ABSTRACT SEL BEHAVIOR AND FEEDBACK

[Figure 1: Textbook SEL oscillator: phase adjust, limiter, resonator (Lorentz)]

[Figure 2: Delayen 1978 cavity controller: the SEL core of Figure 1 plus amplitude and phase detectors, two P-I controllers, and an IQ modulator]

A traditional analog SEL oscillator has a phase adjustment to compensate for cable lengths, giving pure positive feedback. The resulting system "starts from noise." After start-up, the limiter limits, and the amplitude feedback gain drops to 0. Phase feedback literally has a gain of +1.00.

Delayen added amplitude and phase stabilization loops to that core design. This stiff negative PI feedback totally overwhelms the baseline SEL "positive feedback."

CORDIC

The CORDIC structure was invented by Jack Volder in 1959 [3] to compute trigonometric functions using pure shift-and-add digital hardware. Every such CORDIC block produces

  z_out = A · z_in · ∏_{i=0}^{n-1} exp(j σ_i θ_i)
  θ_out = θ_in − Σ_{i=0}^{n-1} σ_i θ_i

where

  θ_i = tan^{-1}(2^{-i}),  A = ∏_{i=0}^{n-1} sqrt(1 + 2^{-2i}) ≈ 1.64676.

There are two standard mechanisms for choosing σ_i. Rotation mode drives θ_out ≈ 0, and is used to get z_out = A · z_in · exp(j θ_in); in the special case where z_in = 1/A, that yields cos θ_in and sin θ_in. Vectoring mode drives Im(z_out) ≈ 0, and is used to get θ_out = ∠z_in and Re(z_out) = A · |z_in|.

[Figure 3: CORDIC internal structure: four shift-and-add stages (>>0 through >>3) with stage angles 45.00°, 26.57°, 14.04°, and 7.13°, each adding or subtracting based on a per-stage sign decision]

An FPGA implementation of CORDIC will therefore always have three inputs and three outputs. We create a visual vocabulary showing that in rotation mode (note the "R" in the CORDIC block), the θ output is unused (and is within rounding error of 0). Similarly, with vectoring mode (marked with "V"), the y output is ignored. Additional special cases yield traditional polar-to-rectangular and rectangular-to-polar configurations.

[Figure 4: CORDIC use cases: the general case, rotation ("R"), vectoring ("V"), and the polar-to-rectangular and rectangular-to-polar special cases]

The CORDIC module supplied in LBNL's Bedrock code base [9] is pipelined, and the selection between vectoring and rotation modes can be made on a cycle-by-cycle basis, allowing a single CORDIC instance to be time-multiplexed between different tasks in the final circuit. See the LLRF 2013 tutorial [10].
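A floating-point sketch of those update equations may help; real implementations such as Bedrock's use pipelined fixed-point shift-and-add stages, so treat this only as a model of the arithmetic:

```python
import math

N = 16
ANGLES = [math.atan(2.0 ** -i) for i in range(N)]      # theta_i = atan(2^-i)
GAIN = math.prod(math.sqrt(1.0 + 2.0 ** (-2 * i)) for i in range(N))  # ~1.64676

def cordic(x, y, theta, mode):
    """N shift-and-add stages; mode 'R' drives theta -> 0, 'V' drives y -> 0."""
    for i in range(N):
        if mode == 'R':
            sigma = 1 if theta >= 0 else -1    # rotate toward theta = 0
        else:
            sigma = 1 if y < 0 else -1         # rotate toward y = 0
        x, y = x - sigma * y * 2.0 ** -i, y + sigma * x * 2.0 ** -i
        theta -= sigma * ANGLES[i]
    return x, y, theta

# Rotation mode with z_in = 1/A: outputs cos and sin of the requested angle.
c, s, _ = cordic(1.0 / GAIN, 0.0, math.radians(30.0), 'R')
print(c, s)                    # ~0.8660, ~0.5000

# Vectoring mode: theta output is the input angle, x output is A*|z_in|.
x, _, ang = cordic(3.0, 4.0, 0.0, 'V')
print(ang, x / GAIN)           # ~atan2(4, 3) = 0.9273, ~5.0
```

Both modes come out of the same stage hardware; only the σ_i decision rule differs, which is what makes cycle-by-cycle mode selection cheap.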
PI CONTROLLER

The first rule of PID controllers is that there is no D. A block diagram of a combined proportional and integral implementation is shown, together with the visual vocabulary used here to represent it.

[Figure 5: PI controller structure and representation: the error (v_in minus set point) feeds a K_P path and a K_I gain followed by an integrator with configurable saturation; the two paths sum into v_out]

Clipping of the SEL phase-tracking loop is an essential part of the process! That drive is the output of one of these PI controllers; it therefore needs a zero-windup clipping implementation, as well as good runtime control of the clip value. This block diagram shows the I term as a gain followed by the (saturating) integrator; the reverse order doesn't make sense when there is a possibility of run-time gain adjustments. The circuit can be "reset" (forcing the output and integrator state to zero) as needed by setting the saturation level to zero.

CONSIDERATIONS FOR OFF-FREQUENCY OPERATION

Simple phase detectors can create unpleasant results when faced with off-frequency operation. Clues for how to properly mitigate that come from the AD9901 chip (1996) [4]. In the SEL context, the output of the phase-setpoint subtracter should be processed with a state machine. When the actual phase error wraps around from positive to negative, for instance, that condition needs to be registered, so the output of the phase detector stays saturated at the maximum positive value. The LBNL code base refers to this as a Stateful Phase Resolver.

[Figure 6: AD9901 functional block diagram]

When such a phase resolver is used, the reactive drive created by the phase loop will "stick" at that limit until the frequency error is removed, presumably by a tuner control loop not discussed here.

DIGITAL SEL TOPOLOGIES

A series of topologies for realizing an SEL in DSP are shown: LBNL [5], JLab [6], S-DALINAC [7], and BARC [8]. The GDR topology is shown for completeness. Each diagram assumes x and y (a.k.a. I and Q) inputs and outputs. Digital and/or analog down- and up-conversion, required to complete the LLRF system, are off-topic here.

Although details vary, they all share the core 1978-vintage feature of detecting phase errors and using a feedback controller to generate a drive of the form 1 + jχ, fixing both amplitude and phase errors.

The JLab design introduces an odd serialization of the amplitude and phase feedback paths. The third diagram suggests extending the JLab idea to re-orthogonalize the amplitude and phase loops: neither PI output is then routed through the second (delay-creating) CORDIC.

The BARC design uses no CORDIC blocks; instead it builds a limiter out of squaring circuits and a local feedback loop. Its logic footprint is heavier on multipliers than the other designs, which is not a concern for typical FPGAs today. The lack of CORDIC means its phase feedback latency is lower than in the other designs shown.

[Figure 7: LBNL (2015): a vectoring CORDIC detects amplitude and phase, and a rotation CORDIC generates the drive; inputs include amp set, phase set, and phase offset]
[Figure 8: JLab (2008): a similar CORDIC pair plus a complex multiply, with serialized amplitude and phase feedback paths]
[Figure 9: Suggested: the JLab topology re-orthogonalized so that neither PI output passes through the second CORDIC]
[Figure 10: S-DALINAC (2011): dedicated amplitude ADC, P term only in the fast path, I term routed to the tuner loop]
[Figure 11: BARC (2014): amplitude detector and limiter built from squaring circuits, with loop and reference phase shifts and a complex multiply]
[Figure 12: GDR: real and imaginary set points applied through a complex gain and complex multiply]

STABILITY

As the pole position a moves around on the imaginary axis, the dynamics of the system also change. It is important to understand if and how that changes the stability of the feedback system. We assert that if the condition Im(a) ≪ ω_0dB is maintained, the system stability is not materially affected. A Nyquist stability graph with example numbers will illustrate that.

[Figure 13: Open- and closed-loop gains with detuning: cavity bandwidth 20 Hz, zero-dB crossing 20 kHz, controller zero 31.4 kHz, delay 2.39 µs; loop-gain and closed-loop curves for detunes of -200, -100, 0, +100, and +200 Hz]
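To see why modest detuning leaves the loop essentially unaffected, the following sketch evaluates an open-loop gain built from the cavity pole, a processing delay, and a PI controller. The numbers loosely echo Figure 13; the gain and delay values are illustrative assumptions, not measured ones:

```python
import numpy as np

# Loop-gain sketch behind the stability assertion: cavity pole a, a loop
# delay, and a PI controller with a zero at f_zero.
f = np.logspace(1, 5, 2000)                       # 10 Hz .. 100 kHz
s = 2j * np.pi * f
w_half = 2 * np.pi * 20.0                         # 20 Hz cavity half-bandwidth
kp, f_zero, tau = 1000.0, 31.4e3, 2.39e-6         # illustrative values

for detune_hz in (-200.0, 0.0, 200.0):            # Im(a) wanders with detune
    a = -w_half + 2j * np.pi * detune_hz
    cavity = w_half / (s - a)                     # unity DC gain on resonance
    controller = kp * (1 + 2 * np.pi * f_zero / s)
    loop = cavity * controller * np.exp(-s * tau)
    i0 = np.argmin(np.abs(np.abs(loop) - 1.0))    # 0 dB crossing
    pm = 180.0 + np.degrees(np.angle(loop[i0]))
    print(f"detune {detune_hz:+6.0f} Hz: crossing {f[i0]/1e3:5.1f} kHz, "
          f"phase margin {pm:5.1f} deg")
```

The printed crossing frequency and phase margin barely move across ±200 Hz of detuning, which is the quantitative content of the Im(a) ≪ ω_0dB assertion.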
OPERATION

It is important to understand how an SEL is intended to work when the resonator frequency deviates from nominal (e.g., from microphonics) and the reactive drive required to stay locked exceeds the defined limits. The experimental results shown were presented at LLRF17 [11]. The simulated waveforms use an idealized feedback controller.

[Figure 14: Measured waveforms: cavity and forward phase and amplitude, comparing "GDR" phase-locked with "SEL" resonance-tracking operation; phase-locking the SEL with clip limits on the Q component works as intended]

[Figure 15: Simulated waveforms: normalized detune frequency (half-bandwidth 20 Hz, reactive limit 0.6, idealized controller), cavity and forward phase, and forward amplitude split into reactive and total components]

FUTURE WORK

Future work could attempt to achieve lower latency. One approach is simply to use higher clock speeds for the DSP; FPGAs are faster now than they were in the 2008-2014 time frame. There is also the possibility of adding a direct P term, bypassing the relatively slow CORDIC steps. That would require careful management of both its gain coefficients and a fallback configuration if/when clipping is detected. Leader/follower CORDIC tricks can easily reduce (by a factor of two) the latency of the two-CORDICs-in-series phase-following function in the final suggested SEL topology, without affecting the feedback path.

We wish for more code-sharing, in particular the ability to mix-and-match test benches and implementations from labs around the world. Of course that will require surmounting barriers from licensing and disparate languages.

NOT CONSIDERED

• Units for and calibration of the cavity state value
• System-ID and drift correction, including PI gain setup [12]
• Analog and/or digital down- and up-conversion
• Mitigation of instabilities caused by high P gain and nearby cavity passband modes
• Tx channel linearization
• Beam-loading corrections (timing-based feedforward)

REFERENCES

[1] J. Delayen, "Phase and Amplitude Stabilization of Superconducting Resonators," Ph.D. thesis, Caltech, 1978.
[2] J. Delayen, "Ponderomotive Instabilities, Microphonics, and RF Control," USPAS, June 2008.
[3] J. Volder, "The CORDIC Computing Technique," Proceedings of the Western Joint Computer Conference, 1959.
[4] AD9901 data sheet, Analog Devices, 1996.
[5] L. Doolittle et al., "Accelerator-On-Chip Simulation Engine," LLRF 2015, Shanghai.
[6] J. Delayen et al., "Development of a Digital Self-Excited Loop for Field Control in High-Q Superconducting Cavities," SRF2007, Beijing.
[7] M. Konrad et al., "Design and Commissioning of a Multi-frequency Digital Low Level RF Control System," IPAC 2011, San Sebastián.
[8] G. Joshi et al., "Digital self-excited loop for a superconducting linac," NIM A, 2014.
[9] Bedrock, https://github.com/BerkeleyLab/Bedrock
[10] L. Doolittle, "Avoiding Resource Overutilization in FPGAs," LLRF 2013, Lake Tahoe.
[11] L. Doolittle et al., "LCLS-II LLRF prototype testing and characterization," LLRF 2017, Barcelona.
[12] A. Benwell et al., "Drift observations and mitigation in LCLS-II RF," LLRF 2023, Gyeongju.
ImagerySearch: Adaptive Test-Time Search for Video Generation Beyond Semantic Dependency Constraints

Meiqi Wu1,3∗, Jiashu Zhu2, Xiaokun Feng1,3, Chubin Chen4, Chen Zhu5, Bingze Song2, Fangyuan Mao2, Jiahong Wu2†, Xiangxiang Chu2, Kaiqi Huang1,3‡

1UCAS  2AMAP, Alibaba Group  3CRISE  4THU  5SEU

GitHub: https://github.com/AMAP-ML/ImagerySearch/

∗Work done during the internship at AMAP, Alibaba Group. †Project leader. ‡Corresponding author.

[Figure 1 panels: (a) a normal-distance prompt, "The camel walks on the desert."; (b) a long-distance prompt, "The camel packs its belongings with care.", compared across Wan2.1, Video-T1, EvoSearch, and ImagerySearch (Ours).]

Figure 1: The motivation of ImagerySearch. The figure illustrates two semantic dependency scenarios related to camels. Left: the distance depicts the corresponding strength of prompt tokens during the denoising process. LDT-Bench consists of imaginative scenarios with long-distance semantics, whose semantic dependencies are typically weak. Right: Wan2.1 performs well on short-distance semantics but fails under long-distance ones. Test-time scaling methods (e.g., Video-T1 (Liu et al., 2025a), EvoSearch (He et al., 2025a)) also struggle. ImagerySearch, however, generates coherent, context-aware motions (orange box).

Abstract

Video generation models have achieved remarkable progress, particularly excelling in realistic scenarios; however, their performance degrades notably in imaginative scenarios. These prompts often involve rarely co-occurring concepts with long-distance semantic relationships, falling outside training distributions. Existing methods typically apply test-time scaling to improve video quality, but their fixed search spaces and static reward designs limit adaptability to imaginative scenarios. To fill this gap, we propose ImagerySearch, a prompt-guided adaptive test-time search strategy that dynamically adjusts both the inference search space and the reward function according to semantic relationships in the prompt. This enables more coherent and visually plausible videos in challenging imaginative settings. To evaluate progress in this direction, we introduce LDT-Bench, the first dedicated benchmark for long-distance semantic prompts, consisting of 2,839 diverse concept pairs and an automated protocol for assessing creative generation capabilities. Extensive experiments show that ImagerySearch consistently outperforms strong video generation baselines and existing test-time scaling approaches on LDT-Bench, and achieves competitive improvements on VBench, demonstrating its effectiveness across diverse prompt types. We will release LDT-Bench and code to facilitate future research on imaginative video generation.

1 Introduction

Imagine describing a surreal scene, "a panda playing violin on Mars during a sandstorm", and instantly seeing it come to life as a video. Text-to-video generation promises just that: the ability to turn language into vivid, dynamic worlds. Recent video generation models have made significant progress in generating realistic scenes (Wang et al., 2023; Yang et al., 2024; Peng et al., 2025; OpenAI, 2025; Wan Team et al., 2025); however, their performance drops sharply when handling subjectively imaginative scenarios, hindering the advancement of truly creative video generation.

Why is imagination so hard to generate? This limitation arises from two primary factors.
(1) The model's semantic dependency: generative models exhibit strong semantic dependency constraints on long-distance semantic prompts, making it difficult to generalize to imaginative scenarios beyond the training distribution (Fig. 1). (2) The scarcity of imaginative training data: mainstream video datasets (Huang et al., 2024b; Liu et al., 2024b; Sun et al., 2024; Liu et al., 2023; Liao et al., 2025; Ling et al., 2025) predominantly contain realistic scenarios, offering limited imaginative combinations characterized by long-distance semantic relationships (Fig. 3(d)). Recent test-time scaling approaches (Liu et al., 2025a; He et al., 2025a) alleviate data scarcity by sampling multiple candidates and selecting the most promising one. However, their predefined sampling spaces and static reward functions constrain adaptability to the open-ended nature of creative generation.

The Imagery Construction theory (Thomas, 1999; Pylyshyn, 2002) posits that humans create mental scenes for imaginative scenarios by iteratively refining visual imagery in response to language. Motivated by this principle, we introduce ImagerySearch, a test-time search strategy that enhances prompt-based visual generation. ImagerySearch comprises two core components: (i) a Semantic-distance-aware Dynamic Search Space (SaDSS), which adaptively modulates sampling granularity according to the semantic span of the prompt; and (ii) an Adaptive Imagery Reward (AIR), which incentivizes outputs that align more closely with the intended semantics.

To assess generative models in imaginative settings, we propose LDT-Bench, the first benchmark designed specifically for long-distance semantic prompts. It comprises 2,839 challenging concept pairs, constructed by maximizing semantic distance across object-action and action-action dimensions from diverse recognition datasets (e.g., ImageNet-1K (Deng et al., 2009), Kinetics-600 (Carreira et al., 2018)). In addition, LDT-Bench includes an automatic evaluation protocol, ImageryQA, which quantifies creative generation with respect to element coverage, semantic alignment, and anomaly detection.

Extensive experiments reveal that general models (e.g., Wan14B (Wan Team et al., 2025), Hunyuan-13B (Kong et al., 2024), CogVideoX (Yang et al., 2024)) and TTS-based models (e.g., Video-T1 (Liu et al., 2025a), EvoSearch (He et al., 2025a)) suffer significant degradation in video quality and semantic alignment when conditioned on long-distance semantics. In contrast, our framework consistently improves generation fidelity and alignment, demonstrating superior capability in handling long-distance semantic prompts. Our contributions can be summarized as follows:

• We propose ImagerySearch, a dynamic test-time scaling strategy inspired by mental imagery that adaptively adjusts the inference search space and reward according to prompt semantics.
• We present LDT-Bench, the first benchmark specifically designed for video generation from long-distance semantic prompts. It comprises 2,839 prompts, spanning 1,938 subjects and 901 actions, and offers an automatic evaluation framework for assessing model creativity in imaginative scenarios.
• Extensive experiments on LDT-Bench and VBench reveal that our approach consistently improves imaging quality and semantic alignment under long-distance semantic prompts.
2 Related Work

Text-to-Video Generation Models. With advances in generative modeling (Ho et al., 2020; Chu et al., 2024; Lei et al., 2025; Chu et al., 2025; Chen et al., 2025a) and increased training resources, large-scale T2V models (OpenAI, 2025; Kwai, 2025; Runway, 2025; Bao et al., 2024; Zheng et al., 2024a; Peng et al., 2025; Genmo Team, 2024; Kong et al., 2024; Wan Team et al., 2025) have emerged, capable of generating coherent videos, understanding physics, and generalizing to complex scenarios. But they require massive data, and collecting enough long-distance semantic prompts is impractical. Although fine-tuning (Fan and Lee, 2023; Lee et al., 2023; Black et al., 2023; Wallace et al., 2024; Clark et al., 2023; Domingo-Enrich et al., 2024; Mao et al., 2025) and post-training (Yuan et al., 2024a; Prabhudesai et al., 2024; Luo et al., 2023; Li et al., 2024a;b) methods mitigate data requirements to some extent, the extreme scarcity of long-distance semantic videos still hinders effective training. In contrast, the Test-Time Scaling (TTS) methods (Oshima et al., 2025; Xie et al., 2025; Yang et al., 2025; Liu et al., 2025a; He et al., 2025a) used in ImagerySearch require no additional training and achieve strong performance through a highly general approach.

Test-Time Scaling in T2V Models. TTS improves performance by using rewards to select better outputs (Jaech et al., 2024; Guo et al., 2025). In T2V generation, TTS is primarily explored along two axes: selection strategies and reward strategies. Selection strategies mainly include Best-of-N, particle sampling, and beam search. Best-of-N (Ma et al., 2025; Liu et al., 2025a) selects the best output from N generations. Particle sampling (Singhal et al., 2025; Li et al., 2024c; 2025; Singh et al., 2025; Sunwoo Kim, 2025) improves upon this by performing importance-based sampling across the denoising process. Beam search (Liu et al., 2025a; Yang et al., 2025; Xie et al., 2025; Oshima et al., 2025; He et al., 2025b) keeps multiple candidates at each step, expanding the sequence set over time. Reward strategies are based on various evaluation metrics, such as VisionReward (Xu et al., 2024), ImageReward (Xu et al., 2023), and the Aesthetic score (Schuhmann et al., 2022), which guide the selection process by quantifying the quality of the generated output. These reward functions are crucial for aligning outputs with desired visual and semantic characteristics. Current TTS methods optimize search and reward strategies for general T2V generation to enhance overall performance. In this work, we investigate this specific challenge and explore how TTS can be leveraged to improve model performance on long-distance semantic prompts.

Evaluation of Video Generative Models. Early video-generation metrics were simplistic: some diverged from human judgment (Unterthiner et al., 2018; Radford et al., 2021b), while others reused real-video tests unsuited to synthetic clips (Soomro et al., 2012; Xu et al., 2016). Later studies (Szeto and Corso, 2022; Liu et al., 2023; 2024b; Huang et al., 2024c; Sun et al., 2025; Zheng et al., 2025; Chen et al., 2025b), such as VBench (Huang et al., 2024b), evaluated AI-generated videos from a comprehensive, multi-dimensional perspective. Several studies (Liu et al., 2024a; Yuan et al., 2024b; 2025; Ling et al., 2025) refine evaluation along single dimensions such as frame realism or temporal coherence. Although existing methods focus on video quality and human perception, semantic content assessment remains underexplored.
Current benchmarks struggle to effectively evaluate long-distance semantic prompts, which are key to advancing video generation capabilities. To address this, we introduce LDT-Bench as the first benchmark for evaluating long-distance semantic understanding in video generation.

[Figure 2 shows an example prompt ("The native bear skillfully uses remote controls.") flowing through the keyword extractor, constrained semantic scorer, T2V backbone, semantic-distance-aware dynamic search space, and adaptive imagery reward.]

Figure 2: Overview of our ImagerySearch. The prompt is scored by the Constrained Semantic Scorer (producing D̄_sem) and simultaneously fed to the T2V backbone (Wan2.1). At every step t specified by the imagery scheduler, we sample a set of candidate clips, rank them with a reward function conditioned on D̄_sem, and retain only a D̄_sem-controlled subset. The loop repeats until generation completes.

3 ImagerySearch

Text-to-video generation aims to synthesize coherent videos conditioned on prompts. Diffusion models inherently possess the flexibility to adjust test-time computation via the number of denoising steps. To further improve generation quality, we formulate a search problem that identifies better noise inputs for the diffusion sampling process. We organize the design space along two axes: the reward functions that evaluate video quality, and the search algorithms that explore and select optimal noise candidates.

3.1 Preliminaries

In standard diffusion frameworks, sampling starts from Gaussian noise x_T ~ N(0, I), and the model iteratively denoises the latent through a learned network f_θ. As a widely used sampling paradigm, DDIM performs the following step-wise denoising update:

  x_{t-1} = ζ_{t-1} · (x_t − σ_t f_θ(x_t, t, c)) / ζ_t + σ_{t-1} f_θ(x_t, t, c),   (1)

where ζ_{t-1}, ζ_t, and σ_t denote predefined schedule coefficients. Prior test-time scaling approaches (Liu et al., 2025a; He et al., 2025a; Yang et al., 2025) operate within a fixed noise search space and use static reward functions, such as VideoScore (He et al., 2024), VideoAlign (Liu et al., 2025b), or their combinations, to rank candidates. By contrast, our framework supports flexible reward design and adaptive noise selection, substantially improving both sample efficiency and generation quality.

3.2 Dynamic Search Space

Inspired by imagery cognitive theory (Thomas, 1999; Pylyshyn, 2002; Feng et al., 2023), which posits that humans expend more effort and time to construct mental imagery for semantically distant concepts, we likewise adapt the candidate-video search space to a prompt's semantic distance: shrinking it for short-distance prompts to boost test-time efficiency, and enlarging it for long-distance prompts to explore a broader range of possibilities. Therefore, we propose a Semantic-distance-aware Dynamic Search Space (SaDSS). As shown in Fig. 2, this adaptive resizing is driven by a Constrained Semantic Scorer, which dynamically modulates the search space. Specifically, we define semantic distance as the average embedding distance between key entities (objects and actions) extracted from the prompt.
Given a prompt p, we extract its compositional units {p_i}_{i=1}^{n} and compute

  D̄_sem(p) = (1/|E|) · Σ_{(i,j)∈E} ‖φ(p_i) − φ(p_j)‖₂,   (2)

where φ(·) denotes the embedding function (e.g., a T5 encoder) and E is the set of key-entity pairs in the prompt. At inference time, we adapt the sampling procedure based on D̄_sem. Specifically, the search space dynamically adapts to the semantic distance: the number of candidates N_t at timestep t is adjusted as

  N_t = N_base · (1 + λ · D̄_sem(p)),   (3)

where N_base is the base number of samples and λ is a scaling factor that controls the sensitivity to semantic distance. In this work, we set λ = 1. By tailoring the search scope to the inherent difficulty of the prompt, SaDSS encourages the model to explore more diverse visual hypotheses when needed, improving visual plausibility under challenging conditions without incurring unnecessary computational cost for simple prompts.

3.3 Adaptive Imagery Reward

Based on our observations, adjacent denoising steps alter the latent video only marginally, so we invoke ImagerySearch at a few key noise levels S = {5, 10, 20, 45}, termed the Imagery Schedule (see Appendix A). As shown in Fig. 2, starting from a partially denoised latent x_t, we produce x̂_0 by completing the denoising trajectory and compute the reward on x̂_0 to assess the influence of different denoising stages on the final video quality.

To enhance semantic alignment between generated videos and prompts with long-distance semantics, we introduce an Adaptive Imagery Reward (AIR) that modulates evaluation feedback based on the prompt's semantic difficulty. Specifically, we incorporate the semantic distance as a soft re-weighting factor in the reward formulation. The reward R_AIR(x̂_0) for each candidate video x̂_0 is defined as

  R_AIR(x̂_0) = (α · MQ + β · TA + γ · VQ + ω · R_any) · D̄_sem(x̂_0),   (4)

where α, β, γ, and ω are scaling factors that adaptively adjust the reward based on the prompt semantic distance D̄_sem. MQ, TA, and VQ are from VideoAlign (Liu et al., 2025b), and R_any denotes an extensible reward (e.g., VideoScore (He et al., 2024), VMBench (Ling et al., 2025), UnifiedReward (Wang et al., 2025), VisionReward (Xu et al., 2024)).
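A compact Python sketch of Eqs. (2)-(4) may help fix ideas. The random embedding stub stands in for the T5 encoder, the MQ/TA/VQ inputs are placeholders for VideoAlign scores, and the adaptive weighting is collapsed to fixed coefficients for brevity:

```python
import numpy as np

def d_sem(entities, embed):
    """Eq. (2): mean pairwise L2 distance between key-entity embeddings."""
    v = [embed(e) for e in entities]
    pairs = [(i, j) for i in range(len(v)) for j in range(i + 1, len(v))]
    return float(np.mean([np.linalg.norm(v[i] - v[j]) for i, j in pairs]))

def num_candidates(d, n_base=4, lam=1.0):
    """Eq. (3): enlarge the search space for semantically distant prompts."""
    return int(round(n_base * (1 + lam * d)))

def air_reward(mq, ta, vq, d, alpha=1.0, beta=1.0, gamma=1.0):
    """Eq. (4), omitting the optional R_any term."""
    return (alpha * mq + beta * ta + gamma * vq) * d

# Toy cached embedding standing in for the T5 encoder; real keywords would
# be extracted from the prompt by the keyword extractor.
rng = np.random.default_rng(0)
cache = {}
embed = lambda e: cache.setdefault(e, rng.standard_normal(64))

d = d_sem(["camel", "packs", "belongings"], embed)
print(num_candidates(d), air_reward(mq=0.7, ta=0.5, vq=0.8, d=d))
```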
4 LDT-Bench

The rapid progress of video generation models is closely tied to the development of targeted evaluation benchmarks. Existing benchmarks primarily assess models using text prompts designed to depict realistic scenarios. However, as video generation models have achieved impressive performance in realistic scenarios, it is timely to shift the focus towards imaginative scenarios. Generally, such complex settings involve prompts in which entities, such as objects and actions, exhibit long semantic distances, meaning these entities rarely co-occur (e.g., "a panda piloting a helicopter"). These corner cases reveal the robustness limits of generative models. Nonetheless, most existing works remain limited to qualitative analysis on a few cases, and a unified benchmark specifically designed for this task is lacking. To fill this gap, we propose a novel benchmark, LDT-Bench, designed to systematically analyze the generalization ability of video generation models in complex scenarios induced by prompts with Long-Distance semantic Texts. In the following sections, LDT-Bench is introduced from two perspectives: the construction of the prompt suite and the design of the evaluation metrics. The core components of LDT-Bench are illustrated in Fig. 3.

[Figure 3 panels: (a) meta-information extraction from object sets (ImageNet-1K, COCO) and action sets (ActivityNet, UCF101, Kinetics-600); (b) long-distance prompt generation with T5-based semantic-distance filtering, LLM and human validation, and regeneration of rejected prompts, with example prompts such as "The camel packs its belongings with care."; (c) evaluation metrics: MLLMs answer ElementQA, AlignQA, and AnomalyQA questions about the generated videos; (e) distribution of prompt semantic distances.]

Figure 3: Overview of our LDT-Bench. Upper: (a) LDT-Bench is built by first extracting meta-information from existing recognition datasets; (b) GPT-4o is then used to generate candidate prompts, which are filtered jointly by DeepSeek and humans to obtain the final prompt set; (c) additionally, we design a set of three MLLM-based QA tasks that serve as the creativity metric. Lower: (d) compared with other benchmarks, LDT-Bench covers a much richer variety of categories; (e) its prompts also exhibit a semantic-distance distribution that is shifted toward substantially longer ranges. Note that "ASD" denotes the average semantic distance of prompts.

Panel (d) of Figure 3, reconstructed as a table:

  Benchmark          Prompts  Objects  Actions   ASD
  VBench                 800      125      132  0.33
  EvalCrafter            700      169      237  0.40
  T2V-CompBench        1,400      308      545  0.31
  FETV                   619      168      334  0.38
  DEVIL                  810      270      556  0.39
  VMBench              1,050      340    1,216  0.35
  LDT-Bench (Ours)     2,839    1,938      901  0.86

4.1 Prompt Suite

Meta-information Extraction. Considering that objects and actions are the main entities in text prompts, we construct our prompts using the following two structural types. (1) Object-Action: an object combined with an uncommon or incompatible action. (2) Action-Action: two semantically distant or even contradictory actions. To cover a wide range of objects and actions, we build our object and action sets from representative large-scale datasets. Specifically, the object set is derived from ImageNet-1K (Deng et al., 2009) and COCO (Lin et al., 2014) (covering 1,938 objects), while the action set is collected from ActivityNet (Caba Heilbron et al., 2015), UCF101 (Soomro et al., 2012), and Kinetics-600 (Carreira et al., 2018) (covering 901 actions). These collections serve as the foundation for subsequent prompt generation.

We first encode each object and action element text_i using a pretrained T5 text encoder (Raffel et al., 2020), obtaining a high-dimensional textual feature h_i ∈ R^d. These embeddings are then projected into a shared 2D semantic space via Principal Component Analysis (PCA):

  z_i = PCA(h_i) = PCA(T5(text_i)),  z_i ∈ R²,   (5)

where z_i represents the semantic position of the i-th element in the 2D space. T5 can be replaced with other encoders, such as CLIP (Radford et al., 2021b); see Appendix B.1 for details. To measure semantic divergence, we compute the Euclidean distance between each pair of elements as the criterion for selecting long-distance semantic prompts. We then construct two candidate sets: one by pairing each object with its most distant action (1,938 object-action pairs), and the other by matching each action with its most distant counterpart (901 action-action pairs). From each set, we select the 160 most distant pairs, resulting in 320 high-distance prompts that challenge the model with long-distance semantic combinations. For more analysis of the prompt suite, please refer to Appendix B.2.
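The pair-mining step around Eq. (5) can be sketched as follows; random vectors stand in for the T5 features, and the vocabulary sizes are shrunk for illustration:

```python
import numpy as np
from sklearn.decomposition import PCA

# Project entity embeddings to 2-D with PCA, then match each object with its
# most semantically distant action. Random vectors stand in for T5 features
# of the real 1,938-object / 901-action vocabularies.
rng = np.random.default_rng(0)
objects = [f"object_{i}" for i in range(50)]
actions = [f"action_{j}" for j in range(20)]
H = rng.standard_normal((len(objects) + len(actions), 512))  # h_i in R^d

Z = PCA(n_components=2).fit_transform(H)                     # z_i in R^2
z_obj, z_act = Z[:len(objects)], Z[len(objects):]

# Euclidean distance from every object to every action; keep the most
# distant pairing per object and rank all pairs by that distance.
D = np.linalg.norm(z_obj[:, None, :] - z_act[None, :, :], axis=-1)
best = D.argmax(axis=1)
ranked = sorted(zip(D.max(axis=1), objects, [actions[k] for k in best]),
                reverse=True)
for dist, obj, act in ranked[:5]:        # top-distance pairs become prompts
    print(f"{obj} + {act}: {dist:.2f}")
```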
Long-distance Prompt Generation. Based on the obtained text-element pairs, we employ a large language model, i.e., GPT-4o (Hurst et al., 2024), to generate fluent and complete text prompts by filling in the necessary sentence components. Subsequently, each prompt is double-checked by both DeepSeek-R1 (Guo et al., 2025) and human annotators to ensure quality, resulting in our final prompt suite. The detailed generation process and several illustrative cases are presented in Fig. 3(b).

4.2 Imagery Evaluation Metrics

To quantitatively evaluate the performance of video generation models under long-distance semantic settings, we develop targeted evaluation metrics. Inspired by recent MLLM-based evaluation methods (Cho et al., 2023; Feng et al., 2025), we generate questions based on the text prompts. Subsequently, MLLMs with strong semantic comprehension capabilities analyze the generated videos in response to these questions, yielding quantitative evaluation results. Specifically, our assessment framework encompasses three primary dimensions.

ElementQA. Because our prompts focus on objects and actions, ElementQA primarily consists of targeted questions revolving around these elements. For example, given the prompt "The traffic light is dancing.", we can generate two questions: "Does the traffic light appear in the video?" and "Is the traffic light performing a dancing action?"

AlignQA. In addition to the basic semantic information covered by ElementQA, we also evaluate the generated videos in terms of visual quality and aesthetics (Murray et al., 2012). Given the challenging and inherently subjective nature of this assessment, we employ recently developed MLLMs that have been specifically optimized for alignment with human perception (Huang et al., 2024a; Wu et al., 2023).

AnomalyQA. We have observed that current video generation models frequently produce anomalous outputs. Consequently, we also leverage MLLMs to analyze the generated frames and answer targeted questions aimed at identifying these anomalies.

Implementation Details. For ElementQA, we employ Qwen2.5-VL-72B-Instruct (Bai et al., 2025) as the underlying MLLM, whereas for AlignQA we adopt Q-Align (Wu et al., 2023), a model specifically optimized for rating visual quality and aesthetics. Given the broader generalization required by AnomalyQA, we utilize the more powerful GPT-4o (OpenAI, 2024) for evaluation. We collectively refer to these three components as ImageryQA. Further implementation details are provided in Appendix B.3.
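For concreteness, here is a minimal sketch of the ElementQA scoring loop; the question templates follow the example above, while `ask` is a hypothetical stand-in for the actual Qwen2.5-VL call, and frame sampling and answer parsing are elided:

```python
# ElementQA sketch: turn a prompt's key entities into yes/no questions and
# average an MLLM's answers over them.
def element_questions(obj: str, action: str) -> list[str]:
    return [f"Does the {obj} appear in the video?",
            f"Is the {obj} performing a {action} action?"]

def element_score(video, obj, action, ask) -> float:
    answers = [ask(video, q) for q in element_questions(obj, action)]
    return sum(a.strip().lower() == "yes" for a in answers) / len(answers)

# Stub evaluator: a real system would feed sampled frames plus the question
# to the MLLM and parse its reply; the always-"yes" stub returns 1.0.
print(element_score("video.mp4", "traffic light", "dancing",
                    ask=lambda v, q: "yes"))
```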
5 Experiments

5.1 Experimental Setup

Datasets & Metrics. To assess the imaginative capacity of video-generation models, we evaluate them on both LDT-Bench and VBench (Huang et al., 2024b), using each benchmark's full prompt suite and associated metrics.

Compared Models. We compare two categories of models: (1) general models: Hunyuan (Kong et al., 2024), Wan2.1 (Wan Team et al., 2025), Open-Sora (Zheng et al., 2024b), and CogVideoX (Yang et al., 2024); (2) TTS methods: Video-T1 (Liu et al., 2025a) and EvoSearch (He et al., 2025a). We use Wan2.1 as the base model and generate 33-frame clips with the default settings (see Appendix C for details).

Experimental Environment. All experiments are run on a server equipped with 8 × NVIDIA H20 GPUs (96 GB each), an Intel Xeon Gold 6348 CPU (32 cores, 2.6 GHz), and 512 GB of RAM, under Ubuntu 20.04 LTS (kernel 5.15). We use Python 3.9 with PyTorch 2.5.1 (CUDA 12.4, cuDNN 9.1), torchvision 0.20.1, and Transformers 4.50.3.

5.2 Comparison with Other Generation Models

Performance on LDT-Bench. As shown in Tab. 1, with Wan2.1 as the base model, our method achieves a significant improvement of 8.83%, demonstrating a clear advantage. Furthermore, compared to other test-time scaling approaches, ImagerySearch also delivers consistently superior performance. These results highlight the effectiveness of our method in handling long-distance semantic prompts and its robustness in imagination-driven scenarios.

  Model                            ElementQA  AlignQA  AnomalyQA  ImageryQA (All)
  Wan2.1 (Wan Team et al., 2025)        1.66    31.62      15.00            48.28
  Video-T1 (Liu et al., 2025a)          1.91    38.16      14.68            54.75
  EvoSearch (He et al., 2025a)          1.92    36.10      16.46            54.48
  ImagerySearch (Ours)                  2.01    36.82      18.28            57.11

Table 1: Quantitative comparison on LDT-Bench (%, higher is better). ImagerySearch achieves the best average performance.

[Figure 4 compares frames 0-33 generated by Wan2.1, CogVideoX, Open-Sora, Open-Sora-Plan, Hunyuan, Video-T1, EvoSearch, and ImagerySearch (Ours) for the prompt "The native bear skillfully uses remote controls."]

Figure 4: Visualization of examples. Upper: results from general models. Lower: ImagerySearch versus other test-time scaling methods. Ours produces more vivid actions under long-distance semantic prompts.

  Model                              AQ     BC     DD     IQ     MS     SC    Avg.
  General:
  Wan2.1 (Wan Team et al., 2025)   50.50  91.80  82.85  58.25  97.50  90.25  78.53
  Open-Sora (Peng et al., 2025)    48.80  95.25  73.15  61.35  99.05  92.95  78.43
  CogVideoX (Yang et al., 2024)    48.80  95.30  47.20  65.05  98.55  94.65  74.93
  Hunyuan (Kong et al., 2024)      50.45  92.65  85.00  59.55  95.75  90.55  78.99
  TTS:
  Video-T1 (Liu et al., 2025a)     57.20  95.65  54.05  60.25  99.30  94.80  76.88
  EvoSearch (He et al., 2025a)     55.55  94.80  80.95  68.90  97.70  94.55  82.08
  ImagerySearch (Ours)             57.70  96.00  84.05  69.20  98.00  95.90  83.48

Table 2: Quantitative comparison of video generation models on VBench (%, higher is better; AQ = Aesthetic Quality, BC = Background Consistency, DD = Dynamic Degree, IQ = Imaging Quality, MS = Motion Smoothness, SC = Subject Consistency). ImagerySearch achieves the best average performance across multiple metrics, indicating better alignment and generation quality.

Figure 5: (a) Effect of semantic distance across different models: as semantic distance increases, our method remains the most stable. (b-e) Scaling behavior of ImagerySearch and baselines as inference-time computation increases; from left to right, the y-axes represent the score changes for MQ, TA, VQ, and Overall (VideoAlign (Liu et al., 2025b)); our AIR consistently delivers superior performance. (f) Effect of reward weight.

Performance on VBench. For a balanced evaluation, we compare two classes of methods on VBench. The upper rows of Tab. 2 report general generators, while the lower rows list test-time scaling approaches: Video-T1 (Liu et al., 2025a), EvoSearch (He et al., 2025a), and our proposed ImagerySearch. All models are evaluated on long-distance prompts from LDT-Bench using the VBench metrics.
ImagerySearch achieves the best overall score and ranks highest on fine-grained metrics such as Dynamic Degree and Subject Consistency, indicating its strong ability to preserve prompt fidelity under wide semantic gaps. Fig. 4 illustrates this strength: ImagerySearch accurately reproduces both the specified subjects (e.g., bear, controls) and their associated actions (e.g., uses). Additional examples in Appendix D further demonstrate its robustness in handling complex long-distance prompts.

Robustness Analysis Across Semantic Distances. As illustrated in Fig. 5(a), our approach maintains nearly constant VBench scores as semantic distance increases, whereas competing methods exhibit pronounced fluctuations. This stability highlights the superior robustness of our model across a wide range of semantic distances. Additional error analysis is provided in Appendix E.

5.3 Test-time Scaling Law Analysis

We measure inference-time computation by the number of function evaluations (NFEs). As shown in Fig. 5(b-d), where performance is assessed with the MQ, TA, and VQ metrics from VideoAlign (Liu et al., 2025b), ImagerySearch exhibits monotonic performance improvements as inference-time computation increases. Notably, on Wan2.1 (Wan Team et al., 2025), ImagerySearch continues to gain as NFEs grow, whereas baseline methods plateau at roughly 1 × 10³ NFEs (corresponding to the 30th timestep). Computation details are provided in Appendix F. Moreover, our method shows an even more pronounced advantage in the overall VideoAlign score, as illustrated in Fig. 5(e).

5.4 Ablation Study

  Configuration                         AQ     BC     DD     IQ     MS     SC    Avg.
  Baseline:
  Wan2.1 (Wan Team et al., 2025)      50.50  91.80  82.85  58.25  97.50  90.25  78.53
  Modules:
  w/o AIR                             56.25  94.60  81.85  68.05  97.50  94.40  82.11
  w/o SaDSS                           55.35  95.10  77.20  68.00  97.60  94.55  81.30
  SaDSS-static weight:
  0.5                                 57.25  96.15  70.00  70.75  97.45  95.45  81.18
  0.9                                 57.40  96.05  70.00  70.80  97.55  95.50  81.22
  Search:
  BON (Ma et al., 2025)               57.40  95.00  83.01  68.10  97.70  94.63  82.64
  Particle Sampling (Ma et al., 2025) 56.51  93.52  81.72  67.04  96.18  93.38  81.39
  ImagerySearch (Ours)                57.70  96.00  84.05  68.50  97.65  94.70  83.10

Table 3: Ablation study on VBench (%; column abbreviations as in Table 2). "Baseline" is the plain backbone; "Modules" ablates our two novel modules individually; "SaDSS-static weight" denotes the performance obtained when the selection space is kept at a fixed size; "Search" swaps in alternative search strategies. The full configuration (ImagerySearch) yields the best performance.

Effect of SaDSS and AIR. As shown in the first three rows of Tab. 3, adding either the SaDSS or the AIR module individually already surpasses the baseline, while combining SaDSS with AIR achieves the best performance, confirming the complementary nature of semantic guidance and adaptive selection.

Effect of Search Space Size. The SaDSS-static weight rows in Tab. 3 compare fixed and dynamic search-space configurations. With static weights of 0.5 and 0.9, performance improves gradually, reaching a VBench score of 81.22%. In contrast, the dynamic approach attains a markedly higher score of 83.48%, demonstrating its superior ability to size the search space and thus boost model performance.

Effect of Search Strategy. The Search rows in Tab. 3 compare different search strategies (e.g., BON and Particle Sampling (Ma et al., 2025)). The experimental results demonstrate that our search strategy delivers the best performance.
Effect of Reward Dynamic Adjustment Mechanism. Fig. 5(f) shows the impact of varying reward weights on VBench scores across the reward components (MQ, TA, VQ). As the weights change from 0.2 to 1.2, TA shows notable improvement while MQ and VQ maintain relatively stable performance. The consistent superiority of our approach, represented by the dashed line, underscores the effectiveness of dynamic reward adjustment, which achieves near-optimal performance irrespective of the weight changes.

6 Conclusion

In this study, we propose ImagerySearch, an adaptive test-time search method that improves video-generation quality for long-distance semantic prompts drawn from imaginative scenarios. Additionally, we present LDT-Bench, the first benchmark designed to evaluate such challenging prompts. ImagerySearch attains state-of-the-art results on both VBench and LDT-Bench, with especially strong gains on LDT-Bench, demonstrating its effectiveness for text-to-video generation under long-range semantic conditions. In future work, we will explore more flexible reward mechanisms to further enhance video-generation performance.

References

Shuai Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Sibo Song, Kai Dang, Peng Wang, Shijie Wang, Jun Tang, et al. Qwen2.5-VL technical report. arXiv preprint arXiv:2502.13923, 2025.

Fan Bao, Chendong Xiang, Gang Yue, Guande He, Hongzhou Zhu, Kaiwen Zheng, Min Zhao, Shilong Liu, Yaole Wang, and Jun Zhu. Vidu: A highly consistent, dynamic and skilled text-to-video generator with diffusion models. arXiv preprint arXiv:2405.04233, 2024.

Kevin Black, Michael Janner, Yilun Du, Ilya Kostrikov, and Sergey Levine. Training diffusion models with reinforcement learning. arXiv preprint arXiv:2305.13301, 2023.

Andreas Blattmann, Tim Dockhorn, Sumith Kulal, Daniel Mendelevitch, Maciej Kilian, Dominik Lorenz, Yam Levi, Zion English, Vikram Voleti, Adam Letts, et al. Stable video diffusion: Scaling latent video diffusion models to large datasets. arXiv preprint arXiv:2311.15127, 2023.

Fabian Caba Heilbron, Victor Escorcia, Bernard Ghanem, and Juan Carlos Niebles. ActivityNet: A large-scale video benchmark for human activity understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 961-970, 2015.

Joao Carreira, Eric Noland, Andras Banki-Horvath, Chloe Hillier, and Andrew Zisserman. A short note about Kinetics-600. arXiv preprint arXiv:1808.01340, 2018.

Chubin Chen, Jiashu Zhu, Xiaokun Feng, Nisha Huang, Meiqi Wu, Fangyuan Mao, Jiahong Wu, Xiangxiang Chu, and Xiu Li. S2-Guidance: Stochastic self guidance for training-free enhancement of diffusion models. arXiv preprint arXiv:2508.12880, 2025a.

Rui Chen, Lei Sun, Jing Tang, Geng Li, and Xiangxiang Chu. Finger: Content aware fine-grained evaluation with reasoning for AI-generated videos. arXiv preprint arXiv:2504.10358, 2025b.

Jaemin Cho, Yushi Hu, Roopal Garg, Peter Anderson, Ranjay Krishna, Jason Baldridge, Mohit Bansal, Jordi Pont-Tuset, and Su Wang. Davidsonian scene graph: Improving reliability in fine-grained evaluation for text-to-image generation. arXiv preprint arXiv:2310.18235, 2023.

Xiangxiang Chu, Jianlin Su, Bo Zhang, and Chunhua Shen. VisionLLaMA: A unified LLaMA backbone for vision tasks. In European Conference on Computer Vision, pages 1-18. Springer, 2024.

Xiangxiang Chu, Renda Li, and Yong Wang. USP: Unified self-supervised pretraining for image generation and understanding. arXiv preprint arXiv:2503.06132, 2025.
Kevin Clark, Paul Vicol, Kevin Swersky, and David J Fleet. Directly fine-tuning diffusion models on differentiable rewards. arXiv preprint arXiv:2309.17400, 2023.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248-255, 2009. doi: 10.1109/CVPR.2009.5206848.

Carles Domingo-Enrich, Michal Drozdzal, Brian Karrer, and Ricky TQ Chen. Adjoint matching: Fine-tuning flow and diffusion generative models with memoryless stochastic optimal control. arXiv preprint arXiv:2409.08861, 2024.

Ying Fan and Kangwook Lee. Optimizing DDPM sampling with shortcut fine-tuning. arXiv preprint arXiv:2301.13362, 2023.

Xiaokun Feng, Shiyu Hu, Xiaotang Chen, and Kaiqi Huang. A hierarchical theme recognition model for sandplay therapy. In Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pages 241-252. Springer, 2023.

Xiaokun Feng, Haiming Yu, Meiqi Wu, Shiyu Hu, Jintao Chen, Chen Zhu, Jiahong Wu, Xiangxiang Chu, and Kaiqi Huang. NarrLV: Towards a comprehensive narrative-centric evaluation for long video generation models. arXiv preprint arXiv:2507.11245, 2025.

Genmo Team. Mochi 1. https://github.com/genmoai/models, 2024.

Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.

Haoran He, Jiajun Liang, Xintao Wang, Pengfei Wan, Di Zhang, Kun Gai, and Ling Pan. Scaling image and video generation via test-time evolutionary search. arXiv preprint arXiv:2505.17618, 2025a.

Haoran He, Jiajun Liang, Xintao Wang, Pengfei Wan, Di Zhang, Kun Gai, and Ling Pan. Scaling image and video generation via test-time evolutionary search. arXiv preprint arXiv:2505.17618, 2025b.

Xuan He, Dongfu Jiang, Ge Zhang, Max Ku, Achint Soni, Sherman Siu, Haonan Chen, Abhranil Chandra, Ziyan Jiang, Aaran Arulraj, et al. VideoScore: Building automatic metrics to simulate fine-grained human feedback for video generation. arXiv preprint arXiv:2406.15252, 2024.

Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840-6851, 2020.

Yipo Huang, Xiangfei Sheng, Zhichao Yang, Quan Yuan, Zhichao Duan, Pengfei Chen, Leida Li, Weisi Lin, and Guangming Shi. AesExpert: Towards multi-modality foundation model for image aesthetics perception. In Proceedings of the 32nd ACM International Conference on Multimedia, pages 5911-5920, 2024a.

Ziqi Huang, Yinan He, Jiashuo Yu, Fan Zhang, Chenyang Si, Yuming Jiang, Yuanhan Zhang, Tianxing Wu, Qingyang Jin, Nattapol Chanpaisit, et al. VBench: Comprehensive benchmark suite for video generative models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 21807-21818, 2024b.

Ziqi Huang, Fan Zhang, Xiaojie Xu, Yinan He, Jiashuo Yu, Ziyue Dong, Qianli Ma, Nattapol Chanpaisit, Chenyang Si, Yuming Jiang, et al. VBench++: Comprehensive and versatile benchmark suite for video generative models. arXiv preprint arXiv:2411.13503, 2024c.

Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. GPT-4o system card. arXiv preprint arXiv:2410.21276, 2024.
Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, et al. OpenAI o1 system card. arXiv preprint arXiv:2412.16720, 2024.

Weijie Kong, Qi Tian, Zijian Zhang, Rox Min, Zuozhuo Dai, Jin Zhou, Jiangfeng Xiong, Xin Li, Bo Wu, Jianwei Zhang, et al. HunyuanVideo: A systematic framework for large video generative models. arXiv preprint arXiv:2412.03603, 2024.

Kwai. Kling. https://klingai.com/, accessed February 25, 2025.

Kimin Lee, Hao Liu, Moonkyung Ryu, Olivia Watkins, Yuqing Du, Craig Boutilier, Pieter Abbeel, Mohammad Ghavamzadeh, and Shixiang Shane Gu. Aligning text-to-image models using human feedback. arXiv preprint arXiv:2302.12192, 2023.

Jiachen Lei, Keli Liu, Julius Berner, Haiming Yu, Hongkai Zheng, Jiahong Wu, and Xiangxiang Chu. Advancing end-to-end pixel space generative modeling via self-supervised pre-training. arXiv preprint arXiv:2510.12586, 2025.

Jiachen Li, Weixi Feng, Tsu-Jui Fu, Xinyi Wang, Sugato Basu, Wenhu Chen, and William Yang Wang. T2V-Turbo: Breaking the quality bottleneck of video consistency model with mixed reward feedback. Advances in Neural Information Processing Systems, 37:75692-75726, 2024a.

Jiachen Li, Qian Long, Jian Zheng, Xiaofeng Gao, Robinson Piramuthu, Wenhu Chen, and William Yang Wang. T2V-Turbo-v2: Enhancing video generation model post-training through data, reward, and conditional guidance design. arXiv preprint arXiv:2410.05677, 2024b.

Xiner Li, Yulai Zhao, Chenyu Wang, Gabriele Scalia, Gokcen Eraslan, Surag Nair, Tommaso Biancalani, Shuiwang Ji, Aviv Regev, Sergey Levine, et al. Derivative-free guidance in continuous and discrete diffusion models with soft value-based decoding. arXiv preprint arXiv:2408.08252, 2024c.

Xiner Li, Masatoshi Uehara, Xingyu Su, Gabriele Scalia, Tommaso Biancalani, Aviv Regev, Sergey Levine, and Shuiwang Ji. Dynamic search for inference-time alignment in diffusion models. arXiv preprint arXiv:2503.02039, 2025.

Mingxiang Liao, Qixiang Ye, Wangmeng Zuo, Fang Wan, Tianyu Wang, Yuzhong Zhao, Jingdong Wang, Xinyu Zhang, et al. Evaluation of text-to-video generation models: A dynamics perspective. Advances in Neural Information Processing Systems, 37:109790-109816, 2025.

Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft COCO: Common objects in context. In European Conference on Computer Vision, pages 740-755. Springer, 2014.

Xinran Ling, Chen Zhu, Meiqi Wu, Hangyu Li, Xiaokun Feng, Cundian Yang, Aiming Hao, Jiashu Zhu, Jiahong Wu, and Xiangxiang Chu. VMBench: A benchmark for perception-aligned video motion generation. arXiv preprint arXiv:2503.10076, 2025.

Fangfu Liu, Hanyang Wang, Yimo Cai, Kaiyan Zhang, Xiaohang Zhan, and Yueqi Duan. Video-T1: Test-time scaling for video generation. arXiv preprint arXiv:2503.18942, 2025a.

Jiahe Liu, Youran Qu, Qi Yan, Xiaohui Zeng, Lele Wang, and Renjie Liao. Fréchet video motion distance: A metric for evaluating motion consistency in videos. arXiv preprint arXiv:2407.16124, 2024a.

Jie Liu, Gongye Liu, Jiajun Liang, Ziyang Yuan, Xiaokun Liu, Mingwu Zheng, Xiele Wu, Qiulin Wang, Wenyu Qin, Menghan Xia, et al. Improving video generation with human feedback. arXiv preprint arXiv:2501.13918, 2025b.

Yaofang Liu, Xiaodong Cun, Xuebo Liu, Xintao Wang, Yong Zhang, Haoxin Chen, Yang Liu, Tieyong Zeng, Raymond Chan, and Ying Shan. EvalCrafter: Benchmarking and evaluating large video generation models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22139-22149, 2024b.
Yuanxin Liu, Lei Li, Shuhuai Ren, Rundong Gao, Shicheng Li, Sishuo Chen, Xu Sun, and Lu Hou. Fetv: A benchmark for fine-grained evaluation of open-domain text-to-video generation. Advances in Neural Information Processing Systems, 36:62352–62387, 2023.
Simian Luo, Yiqin Tan, Longbo Huang, Jian Li, and Hang Zhao. Latent consistency models: Synthesizing high-resolution images with few-step inference. arXiv preprint arXiv:2310.04378, 2023.
Nanye Ma, Shangyuan Tong, Haolin Jia, Hexiang Hu, Yu-Chuan Su, Mingda Zhang, Xuan Yang, Yandong Li, Tommi Jaakkola, Xuhui Jia, et al. Inference-time scaling for diffusion models beyond scaling denoising steps. arXiv preprint arXiv:2501.09732, 2025.
Fangyuan Mao, Aiming Hao, Jintao Chen, Dongxia Liu, Xiaokun Feng, Jiashu Zhu, Meiqi Wu, Chubin Chen, Jiahong Wu, and Xiangxiang Chu. Omni-effects: Unified and spatially-controllable visual effects generation. arXiv preprint arXiv:2508.07981, 2025.
Naila Murray, Luca Marchesotti, and Florent Perronnin. Ava: A large-scale database for aesthetic visual analysis. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pages 2408–2415. IEEE, 2012.
OpenAI. Gpt-4o: Openai's new flagship model. https://openai.com/index/gpt-4o-and-gpt-4-api-updates/, 2024. Accessed: 2024-06-05.
OpenAI. Sora, 2025. URL https://openai.com/index/sora/. Accessed February 25, 2025.
Yuta Oshima, Masahiro Suzuki, Yutaka Matsuo, and Hiroki Furuta. Inference-time text-to-video alignment with diffusion latent beam search. arXiv preprint arXiv:2501.19252, 2025.
Xiangyu Peng, Zangwei Zheng, Chenhui Shen, Tom Young, Xinying Guo, Binluo Wang, Hang Xu, Hongxin Liu, Mingyan Jiang, Wenjun Li, et al. Open-sora 2.0: Training a commercial-level video generation model in $200k. arXiv preprint arXiv:2503.09642, 2025.
Mihir Prabhudesai, Russell Mendonca, Zheyang Qin, Katerina Fragkiadaki, and Deepak Pathak. Video diffusion alignment via reward gradients. arXiv preprint arXiv:2407.08737, 2024.
Zenon W Pylyshyn. Mental imagery: In search of a theory. Behavioral and Brain Sciences, 25(2):157–182, 2002.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748–8763. PMLR, 2021a.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748–8763. PMLR, 2021b.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1–67, 2020.
Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684–10695, 2022.
Runway. Runway gen3, 2025. URL https://app.runwayml.com/. Accessed February 25, 2025.
Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems, 35:25278–25294, 2022.
Anuj Singh, Sayak Mukherjee, Ahmad Beirami, and Hadi Jamali Rad.
CoDe: Blockwise control for denoising diffusion models. arXiv preprint arXiv:2502.00968, 2025. URL https://api.semanticscholar.org/CorpusID:276094284.
Raghav Singhal, Zachary Horvitz, Ryan Teehan, Mengye Ren, Zhou Yu, Kathleen McKeown, and Rajesh Ranganath. A general framework for inference-time scaling and steering of diffusion models. arXiv preprint arXiv:2501.06848, 2025.
Khurram Soomro, Amir Roshan Zamir, and Mubarak Shah. Ucf101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402, 2012.
Kaiyue Sun, Kaiyi Huang, Xian Liu, Yue Wu, Zihan Xu, Zhenguo Li, and Xihui Liu. T2v-compbench: A comprehensive benchmark for compositional text-to-video generation. arXiv preprint arXiv:2407.14505, 2024.
Kaiyue Sun, Kaiyi Huang, Xian Liu, Yue Wu, Zihan Xu, Zhenguo Li, and Xihui Liu. T2v-compbench: A comprehensive benchmark for compositional text-to-video generation. In Proceedings of the Computer Vision and Pattern Recognition Conference, pages 8406–8416, 2025.
Sunwoo Kim, Minkyu Kim, and Dongmin Park. Test-time alignment of diffusion models without reward over-optimization. In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=vi3DjUhFVm.
Ryan Szeto and Jason J Corso. The devil is in the details: A diagnostic evaluation benchmark for video inpainting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 21054–21063, 2022.
Nigel JT Thomas. Are theories of imagery theories of imagination? an active perception approach to conscious mental content. Cognitive Science, 23(2):207–245, 1999.
Thomas Unterthiner, Sjoerd Van Steenkiste, Karol Kurach, Raphael Marinier, Marcin Michalski, and Sylvain Gelly. Towards accurate generative models of video: A new metric & challenges. arXiv preprint arXiv:1812.01717, 2018.
Bram Wallace, Meihua Dang, Rafael Rafailov, Linqi Zhou, Aaron Lou, Senthil Purushwalkam, Stefano Ermon, Caiming Xiong, Shafiq Joty, and Nikhil Naik. Diffusion model alignment using direct preference optimization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8228–8238, 2024.
Wan Team, Ang Wang, Baole Ai, Bin Wen, Chaojie Mao, Chen-Wei Xie, Di Chen, Feiwu Yu, Haiming Zhao, Jianxiao Yang, et al. Wan: Open and advanced large-scale video generative models. arXiv preprint arXiv:2503.20314, 2025.
Jiuniu Wang, Hangjie Yuan, Dayou Chen, Yingya Zhang, Xiang Wang, and Shiwei Zhang. Modelscope text-to-video technical report. arXiv preprint arXiv:2308.06571, 2023.
Yibin Wang, Yuhang Zang, Hao Li, Cheng Jin, and Jiaqi Wang. Unified reward model for multimodal understanding and generation. arXiv preprint arXiv:2503.05236, 2025.
Haoning Wu, Zicheng Zhang, Weixia Zhang, Chaofeng Chen, Liang Liao, Chunyi Li, Yixuan Gao, Annan Wang, Erli Zhang, Wenxiu Sun, et al. Q-align: Teaching lmms for visual scoring via discrete text-defined levels. arXiv preprint arXiv:2312.17090, 2023.
Haoning Wu, Zicheng Zhang, Weixia Zhang, Chaofeng Chen, Liang Liao, Chunyi Li, Yixuan Gao, Annan Wang, Erli Zhang, Wenxiu Sun, et al. Q-align: Teaching lmms for visual scoring via discrete text-defined levels. In International Conference on Machine Learning, pages 54015–54029. PMLR, 2024a.
Meiqi Wu, Kaiqi Huang, Yuanqiang Cai, Shiyu Hu, Yuzhong Zhao, and Weiqiang Wang. Finger in camera speaks everything: Unconstrained air-writing for real-world. IEEE Transactions on Circuits and Systems for Video Technology, 34(9):8602–8613, 2024b.
Enze Xie, Junsong Chen, Yuyang Zhao, Jincheng Yu, Ligeng Zhu, Chengyue Wu, Yujun Lin, Zhekai Zhang, Muyang Li, Junyu Chen, et al. Sana 1.5: Efficient scaling of training-time and inference-time compute in linear diffusion transformer. arXiv preprint arXiv:2501.18427, 2025.
Jiazheng Xu, Xiao Liu, Yuchen Wu, Yuxuan Tong, Qinkai Li, Ming Ding, Jie Tang, and Yuxiao Dong. Imagereward: learning and evaluating human preferences for text-to-image generation. In Proceedings of the 37th International Conference on Neural Information Processing Systems, pages 15903–15935, 2023.
Jiazheng Xu, Yu Huang, Jiale Cheng, Yuanming Yang, Jiajun Xu, Yuan Wang, Wenbo Duan, Shen Yang, Qunlin Jin, Shurun Li, et al. Visionreward: Fine-grained multi-dimensional human preference learning for image and video generation. arXiv preprint arXiv:2412.21059, 2024.
Jun Xu, Tao Mei, Ting Yao, and Yong Rui. Msr-vtt: A large video description dataset for bridging video and language. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5288–5296, 2016.
Haolin Yang, Feilong Tang, Ming Hu, Yulong Li, Yexin Liu, Zelin Peng, Junjun He, Zongyuan Ge, and Imran Razzak. Scalingnoise: Scaling inference-time search for generating infinite videos. arXiv preprint arXiv:2503.16400, 2025.
Zhuoyi Yang, Jiayan Teng, Wendi Zheng, Ming Ding, Shiyu Huang, Jiazheng Xu, Yuanming Yang, Wenyi Hong, Xiaohan Zhang, Guanyu Feng, et al. Cogvideox: Text-to-video diffusion models with an expert transformer. arXiv preprint arXiv:2408.06072, 2024.
Hangjie Yuan, Shiwei Zhang, Xiang Wang, Yujie Wei, Tao Feng, Yining Pan, Yingya Zhang, Ziwei Liu, Samuel Albanie, and Dong Ni. Instructvideo: Instructing video diffusion models with human feedback. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6463–6474, 2024a.
Shenghai Yuan, Jinfa Huang, Yongqi Xu, Yaoyang Liu, Shaofeng Zhang, Yujun Shi, Rui-Jie Zhu, Xinhua Cheng, Jiebo Luo, and Li Yuan. Chronomagic-bench: A benchmark for metamorphic evaluation of text-to-time-lapse video generation. Advances in Neural Information Processing Systems, 37:21236–21270, 2024b.
Shenghai Yuan, Xianyi He, Yufan Deng, Yang Ye, Jinfa Huang, Bin Lin, Jiebo Luo, and Li Yuan. Opens2v-nexus: A detailed benchmark and million-scale dataset for subject-to-video generation. arXiv preprint arXiv:2505.20292, 2025.
Dian Zheng, Ziqi Huang, Hongbo Liu, Kai Zou, Yinan He, Fan Zhang, Yuanhan Zhang, Jingwen He, Wei-Shi Zheng, Yu Qiao, et al. Vbench-2.0: Advancing video generation benchmark suite for intrinsic faithfulness. arXiv preprint arXiv:2503.21755, 2025.
Zangwei Zheng, Xiangyu Peng, Tianji Yang, Chenhui Shen, Shenggui Li, Hongxin Liu, Yukun Zhou, Tianyi Li, and Yang You. Open-sora: Democratizing efficient video production for all. arXiv preprint arXiv:2412.20404, 2024a.
Zangwei Zheng, Xiangyu Peng, Tianji Yang, Chenhui Shen, Shenggui Li, Hongxin Liu, Yukun Zhou, Tianyi Li, and Yang You. Open-sora: Democratizing efficient video production for all. arXiv preprint arXiv:2412.20404, 2024b.

A The Selection of Imagery Schedule

As illustrated in Fig. S1, we observe that adjacent denoising steps modify the latent video only marginally; substantial deviations from earlier stages emerge only at several pivotal steps.
To improve generation efficiency, we therefore trigger ImagerySearch at a limited set of noise levels, S = {5, 20, 30, 45}, which we term the Imagery Schedule. This schedule specifies the exact timesteps at which ImagerySearch is invoked.

B More Details About Imagery Evaluation Metrics

B.1 More Text Encoders.

In our current implementation, T5 serves three purposes: it encodes the key entities in each prompt, measures their semantic distances, and then uses those distances to adjust the search space and reward weights during generation. The same pipeline can be run with a CLIP text encoder (Radford et al., 2021a; Blattmann et al., 2023). Trained on large-scale image–text pairs, CLIP yields text embeddings whose cosine distances correlate well with visual concepts, so these distances can play exactly the same role in deciding when to expand or shrink the search space. In addition, CLIP similarities are widely used as a measure of text–image or text–video alignment, which makes them a natural choice for the alignment term in our reward function (Rombach et al., 2022). Because CLIP, like T5, produces a fixed-length vector in a single forward pass, it can be swapped in as a drop-in replacement without changing any downstream components while fully preserving the effectiveness of our adaptive search and reward mechanisms.
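Since the text above does not pin down the checkpoint or pooling scheme, the sketch below is a minimal stand-in for this encoder interchangeability: mean-pooled t5-base features combined with the pairwise L2 distance of Eq. (2). The checkpoint name and the pooling are assumptions, and replacing the two encoder lines with a CLIP text model is exactly the drop-in swap described above.

```python
# Minimal sketch: entity-level semantic distance with a T5 text encoder.
# Assumptions: the "t5-base" checkpoint and mean pooling over tokens stand
# in for the unspecified details of the actual pipeline.
from itertools import combinations
import torch
from transformers import AutoTokenizer, T5EncoderModel

tokenizer = AutoTokenizer.from_pretrained("t5-base")
encoder = T5EncoderModel.from_pretrained("t5-base").eval()

@torch.no_grad()
def embed(texts):
    """Mean-pooled encoder features: one vector per input string."""
    batch = tokenizer(texts, return_tensors="pt", padding=True)
    hidden = encoder(**batch).last_hidden_state        # (B, T, d)
    mask = batch["attention_mask"].unsqueeze(-1)       # (B, T, 1)
    return (hidden * mask).sum(1) / mask.sum(1)        # (B, d)

# Key entities extracted from "The camel packs its belongings with care."
z = embed(["camel", "packs belongings"])
pairs = list(combinations(range(len(z)), 2))
d_sem = sum(torch.dist(z[i], z[j]).item() for i, j in pairs) / len(pairs)
print(f"semantic distance: {d_sem:.3f}")
```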
B.2 More Analysis of the Prompt Suite.

As shown in Fig. S2, we provide a multi-faceted overview of the LDT-Bench prompt suite and underscore its advantages for long-distance semantic evaluation. (a) Examining the distribution of actions, a pronounced long-tail pattern emerges: of the five super-categories, Sports & Wellness and Daily Services each supply 300 prompts, ensuring ample coverage of everyday yet highly diverse actions. (b) For objects, a treemap of 14 super-categories (scaled by instance count) reveals that Animal and Artifact jointly exceed half of all samples, while still leaving room for rarer classes; this balance of head and tail categories is largely missing in prior benchmarks. (c) The object word cloud (after stop-word filtering) highlights high-frequency nouns such as cricket, person, and remote, evidencing fine-grained lexical diversity across domains. (d) The action word cloud reveals a wide semantic span (verbs like play, join, use, and handle) that challenges models to cope with imaginative, long-distance dependencies. Taken together, these statistics show that LDT-Bench not only covers a richer mix of objects and actions than existing datasets but also accentuates long-distance semantic relationships that current models find most difficult, making it a uniquely effective testbed for stress-testing creative video generation systems.

B.3 ImageryQA Implementation Details.

As described in Sec. 4.2 of the paper, our metric is primarily composed of three components: ElementQA, AlignQA, and AnomalyQA (Fig. S3 (a)). In this subsection, we use a specific example to clarify the metric computation process. As shown in Fig. S3 (b), given the evaluation prompt "A person polishes furniture attentively at home, then packs cleaning products for organization.", we compare two videos generated by different video generation models. First, ElementQA formulates questions based on the objects and actions within the prompt, i.e., "person," "polishes furniture," and "packs cleaning products for organization", resulting in the questions Q1, Q2, and Q3 in Fig. S3. Next, AlignQA assesses the first frame of each video in terms of image quality and aesthetics. Finally, AnomalyQA evaluates abnormal events in both videos, as illustrated by Q5 in Fig. S3. Based on these questions, we employ different MLLMs and answer strategies. Recent studies (Feng et al., 2025; Liu et al., 2025b; Wu et al., 2024a; Zheng et al., 2025; Wu et al., 2024b) suggest that for questions with inherent uncertainty, having a general-purpose MLLM (Bai et al., 2025; OpenAI, 2024) answer the same question multiple times and averaging the results yields more reliable evaluations. Therefore, for ElementQA, we prompt Qwen2.5-VL-72B-Instruct (Bai et al., 2025) to answer each question five times. For AnomalyQA, considering the higher cost of GPT-4o (OpenAI, 2024), we collect three responses per question. For Q-Align (Wu et al., 2023) in AlignQA, since it is a dedicated model trained for aesthetic quality assessment and directly outputs a quantitative score, we use a single response.

Figure S1: Imagery schedule, shown for the prompt "The bear walks on the grassland." The heatmaps visualize 13th-layer attention projected onto the first video frame at successive denoising steps. Adjacent steps (e.g., timesteps 3-6) show nearly identical focus regions, whereas only a few key steps (timesteps 5, 20, 30, 45) exhibit pronounced changes. Concentrating analysis and search on these pivotal steps therefore captures the prompt-to-frame semantic correspondence more efficiently.

Figure S2: LDT-Bench prompt suite analysis: (a) Action super-category distribution shown as a horizontal bar chart. (b) Object super-category distribution displayed as a treemap, with area proportional to class count. (c) Word cloud highlighting the most frequent object-action prompts. (d) Word cloud highlighting the most frequent action-action prompts.

Figure S3: Evaluation with ImageryQA. (a) We design a structured question set, ImageryQA, consisting of ElementQA (Q1: "Is a person present in the scene?"; Q2: "Is a person performing the action 'polishes furniture attentively' at home?"; Q3: "Is a person performing the action 'packs cleaning products for organization'?"), AlignQA (Q4: image quality and aesthetic score), and AnomalyQA (Q5: "This is a generated video. Please help me determine whether there are any anomalies in the video frames, such as abnormal appearance or structure of objects, abnormal deformation of objects, motion that is unreasonable or violates physical laws, disappearance or discontinuity of objects, as well as artifacts or motion ghosting."). (b) Comparison between Wan2.1 (Q1: 1.00, Q2: 0.00, Q3: 0.00, Q4: 2.18, Q5: 0.00) and ImagerySearch (Q1: 1.00, Q2: 0.20, Q3: 0.80, Q4: 4.69, Q5: 0.33) on the same prompt. Wan2.1 fails to depict a person and the actions described, resulting in low aesthetic quality (Q4) and visual anomalies (Q5). In contrast, ImagerySearch successfully captures both actions, polishing furniture and packing cleaning products, scoring higher on both Q4 and Q5.

C Experimental Setup: Model Details

Parameter settings. In our implementation, the baseline model is Wan2.1-1.3B (Wan Team et al., 2025). We set the imagery schedule to {5, 20, 30, 45} and the imagery size schedule to {10, 5, 5, 5, 5}. As shown in Fig. S4, VQ and MQ exhibit the same selection trend, whereas TA shows the opposite. Therefore, regarding the reward weights in Equation (4), we set β = γ = 1.0, while α is dynamically adjusted.
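To make the interplay between the imagery schedule and the imagery size schedule concrete, the following sketch shows a denoising loop that fires ImagerySearch only at the scheduled timesteps; denoise_step, expand, and reward are hypothetical stand-ins for the Wan2.1 denoising update, the SaDSS candidate expansion, and the AIR reward, so only the gating logic mirrors the settings above.

```python
# Sketch of a sampling loop gated by the Imagery Schedule (Appendices A and C).
IMAGERY_SCHEDULE = {5, 20, 30, 45}   # timesteps at which ImagerySearch fires
SIZE_SCHEDULE = [10, 5, 5, 5, 5]     # candidates kept at successive firings

def sample(x_T, denoise_step, expand, reward, total_steps=50):
    candidates, stage = [x_T], 0
    for t in reversed(range(total_steps)):
        candidates = [denoise_step(c, t) for c in candidates]
        if t in IMAGERY_SCHEDULE:    # search only at the pivotal steps
            keep = SIZE_SCHEDULE[min(stage, len(SIZE_SCHEDULE) - 1)]
            ranked = sorted(expand(candidates), key=reward, reverse=True)
            candidates, stage = ranked[:keep], stage + 1
    return max(candidates, key=reward)
```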
D More Examples

Additional qualitative examples are provided in Fig. S5, Fig. S6, and Fig. S7. Specifically, Fig. S5 reports results on LDT-Bench, where the first five rows correspond to action-action prompts and the last three to object-action prompts. Fig. S6 and Fig. S7 show further action-action cases drawn from VBench. Across all examples, our method produces vivid and coherent videos, even under long-distance semantic prompts, illustrating its capacity to handle challenging imaginative scenarios.

E Error Analysis

In the VBench (Huang et al., 2024b) error analysis (Fig. S8), ImagerySearch shows a higher mean score with a tighter interquartile range, indicating more stable performance across prompts. Evosearch (He et al., 2025a) attains a comparable median but displays greater dispersion, whereas Wan2.1 (Wan Team et al., 2025) and Video-T1 (Liu et al., 2025a) exhibit lower central scores and wider quartile spans. Overall, dynamically adjusting the search space and reward by semantic distance helps maintain generation quality while reducing sensitivity to prompt difficulty.

Figure S4: Reward-weight analysis. The left panel shows an action-action example ("The camel packs its belongings with care.") and the right panel an object-action one ("A girl applies eye makeup in her bedroom, then ties a colorful ribbon at her crafting desk."), visualizing the videos generated under different weight settings (MQ, TA, VQ each at 0.6 or 1.0). MQ and VQ follow almost identical trends, whereas TA moves in the opposite direction. Accordingly, we fix the MQ and VQ coefficients to 1 and vary the TA coefficient with the prompt, selecting videos that better fit imaginative scenarios.

Figure S5: More examples on LDT-Bench. The images below each prompt show the result of frame sampling, where 16 frames are uniformly extracted from a 33-frame video. The prompts shown are: "Cricket players celebrate victories on lush pitches, then use remote controllers for strategic insights."; "Pole vaulters compete on athletic fields, then tie knots in practice equipment bags."; "A person washes face carefully in quiet bathrooms, then ties knots in towels for storage."; "Dancers embrace elegance with belly dance routines at cultural fairs, then tie knots in costume shawls."; "The tiger enjoys competitive trampoline events."; "The shoe shop efficiently utilizes a remote controller."; "The shovel springs vigorously onto the trampoline."; and "A girl applies eye makeup in her bedroom, then ties a colorful ribbon at her crafting desk."
Figure S6: More examples on VBench (Part I). The images below each prompt show the result of frame sampling, where 16 frames are uniformly extracted from a 33-frame video. The four rows are driven by long-form VBench prompts: a snowboarder on a mountain slope in CG game concept art, a photorealistic astronaut riding a horse through outer space, a couple sharing a tender kiss in a softly lit bedroom, and a porcelain cup on a polished wooden surface in CG game concept art.
Figure S7: More examples on VBench (Part II). The images below each prompt show the result of frame sampling, where 16 frames are uniformly extracted from a 33-frame video. The four rows are driven by long-form VBench prompts: a woman arranging flowers in a sunlit garden, a blacksmith bending metal in a forge, a person ice skating on a frozen lake at night, and a person building a large snowman in a winter landscape.

Figure S8: Error analysis of VBench scores on long-distance semantic prompts. Each box shows the score distribution for one model (mean marked by a white diamond); individual data points are overlaid in matching colors. The mean scores are 82.1 for ImagerySearch, 74.3 for Evosearch, 63.3 for Wan2.1, and 64.2 for Video-T1. ImagerySearch (orange) attains the highest mean with the tightest spread, while the other methods exhibit lower central tendencies and larger variances.
ImagerySearch: Adaptive Test-Time Search for Video Generation Beyond Semantic Dependency Constraints

Meiqi Wu1,3∗ Jiashu Zhu2 Xiaokun Feng1,3 Chubin Chen4 Chen Zhu5 Bingze Song2 Fangyuan Mao2 Jiahong Wu2† Xiangxiang Chu2 Kaiqi Huang1,3‡
1UCAS 2AMAP, Alibaba Group 3CRISE 4THU 5SEU
GitHub: https://github.com/AMAP-ML/ImagerySearch/
∗Work done during the internship at AMAP, Alibaba Group. †Project leader. ‡Corresponding author.

Figure 1: The motivation of ImagerySearch. The figure illustrates two semantic dependency scenarios related to camels: (a) a normal-distance semantic prompt from a realistic scenario, "The camel walks on the desert." (short-distance semantics, walk), and (b) a long-distance semantic prompt from an imaginative scenario, "The camel packs its belongings with care." (pack). Left: the distance depicts the corresponding strength of prompt tokens during the denoising process; LDT-Bench consists of imaginative scenarios with long-distance semantics, whose semantic dependencies are typically weak. Right: Wan2.1 performs well on short-distance semantics but fails under long-distance ones. Test-time scaling methods (e.g., Video T1 (Liu et al., 2025a), Evosearch (He et al., 2025a)) also struggle. However, ImagerySearch generates coherent, context-aware motions (orange box).

Abstract

Video generation models have achieved remarkable progress, particularly excelling in realistic scenarios; however, their performance degrades notably in imaginative scenarios. These prompts often involve rarely co-occurring concepts with long-distance semantic relationships, falling outside training distributions. Existing methods typically apply test-time scaling for improving video quality, but their fixed search spaces and static reward designs limit adaptability to imaginative scenarios. To fill this gap, we propose ImagerySearch, a prompt-guided adaptive test-time search strategy that dynamically adjusts both the inference search space and reward function according to semantic relationships in the prompt. This enables more coherent and visually plausible videos in challenging imaginative settings. To evaluate progress in this direction, we introduce LDT-Bench, the first dedicated benchmark for long-distance semantic prompts, consisting of 2,839 diverse concept pairs and an automated protocol for assessing creative generation capabilities. Extensive experiments show that ImagerySearch consistently outperforms strong video generation baselines and existing test-time scaling approaches on LDT-Bench, and achieves competitive improvements on VBench, demonstrating its effectiveness across diverse prompt types. We will release LDT-Bench and code to facilitate future research on imaginative video generation.

1 Introduction

Imagine describing a surreal scene, say "a panda playing violin on Mars during a sandstorm", and instantly seeing it come to life as a video. Text-to-video generation promises just that: the ability to turn language into vivid, dynamic worlds. Recent video generation models have made significant progress in generating realistic scenes (Wang et al., 2023; Yang et al., 2024; Peng et al., 2025; OpenAI, 2025; Wan Team et al., 2025); however, their performance drops sharply when handling subjectively imaginative scenarios, hindering the advancement of truly creative video generation. Why is imagination so hard to generate? This limitation arises from two primary factors.
(1) The model's semantic dependency: generative models exhibit strong semantic dependency constraints on long-distance semantic prompts, making it difficult to generalize to imaginative scenarios beyond the training distribution (Fig. 1). (2) The scarcity of imaginative training data: mainstream video datasets (Huang et al., 2024b; Liu et al., 2024b; Sun et al., 2024; Liu et al., 2023; Liao et al., 2025; Ling et al., 2025) predominantly contain realistic scenarios, offering limited imaginative combinations characterized by long-distance semantic relationships (Fig. 3(d)). Recent test-time scaling approaches (Liu et al., 2025a; He et al., 2025a) alleviate data scarcity by sampling multiple candidates and selecting the most promising one. However, their predefined sampling spaces and static reward functions constrain adaptability to the open-ended nature of creative generation.

The Imagery Construction theory (Thomas, 1999; Pylyshyn, 2002) posits that humans create mental scenes for imaginative scenarios by iteratively refining visual imagery in response to language. Motivated by this principle, we introduce ImagerySearch, a test-time search strategy that enhances prompt-based visual generation. ImagerySearch comprises two core components: (i) Semantic-distance-aware Dynamic Search Space (SaDSS), which adaptively modulates sampling granularity according to the semantic span of the prompt; and (ii) Adaptive Imagery Reward (AIR), which incentivizes outputs that align more closely with the intended semantics.

To assess generative models in imaginative settings, we propose LDT-Bench, the first benchmark designed specifically for long-distance semantic prompts. It comprises 2,839 challenging concept pairs, constructed by maximizing semantic distance across object-action and action-action dimensions from diverse recognition datasets (e.g., ImageNet-1K (Deng et al., 2009), Kinetics-600 (Carreira et al., 2018)). In addition, LDT-Bench includes an automatic evaluation protocol, ImageryQA, which quantifies creative generation with respect to element coverage, semantic alignment, and anomaly detection.

Extensive experiments reveal that general models (e.g., Wan14B (Wan Team et al., 2025), Hunyuan13B (Kong et al., 2024), CogVideoX (Yang et al., 2024)) and TTS-based models (e.g., VideoT1 (Liu et al., 2025a), EvoSearch (He et al., 2025a)) suffer from significant degradation in video quality and semantic alignment when conditioned on long-distance semantics. In contrast, our framework consistently improves generation fidelity and alignment, demonstrating superior capability in handling long-distance semantic prompts. Our contributions can be summarized as follows:

• We propose ImagerySearch, a dynamic test-time scaling strategy inspired by mental imagery that adaptively adjusts the inference search space and reward according to prompt semantics.
• We present LDT-Bench, the first benchmark specifically designed for video generation from long-distance semantic prompts. It comprises 2,839 prompts spanning 1,938 subjects and 901 actions, and offers an automatic evaluation framework for assessing model creativity in imaginative scenarios.
• Extensive experiments on LDT-Bench and VBench reveal that our approach consistently improves imaging quality and semantic alignment under long-distance semantic prompts.

2 Related Work

Text-to-Video Generation Models.
With advances in generative modeling (Ho et al., 2020; Chu et al., 2024; Lei et al., 2025; Chu et al., 2025; Chen et al., 2025a) and increased training resources, large-scale T2V models (OpenAI, 2025; Kwai, 2025; Runway, 2025; Bao et al., 2024; Zheng et al., 2024a; Peng et al., 2025; Genmo Team, 2024; Kong et al., 2024; Wan Team et al., 2025) have emerged, capable of generating coherent videos, understanding physics, and generalizing to complex scenarios. However, they require massive training data, and collecting enough long-distance semantic prompts is impractical. Although fine-tuning (Fan and Lee, 2023; Lee et al., 2023; Black et al., 2023; Wallace et al., 2024; Clark et al., 2023; Domingo-Enrich et al., 2024; Mao et al., 2025) and post-training (Yuan et al., 2024a; Prabhudesai et al., 2024; Luo et al., 2023; Li et al., 2024a;b) methods mitigate data requirements to some extent, the extreme scarcity of long-distance semantic videos still hinders effective training. In contrast, the Test-Time Scaling (TTS) methods (Oshima et al., 2025; Xie et al., 2025; Yang et al., 2025; Liu et al., 2025a; He et al., 2025a) used in ImagerySearch require no additional training and achieve strong performance through a highly general approach.

Test-Time Scaling in T2V Models. TTS improves performance by using rewards to select better outputs (Jaech et al., 2024; Guo et al., 2025). In T2V generation, TTS is primarily explored along two axes: selection strategies and reward strategies. Selection strategies mainly include Best-of-N, particle sampling, and beam search. Best-of-N (Ma et al., 2025; Liu et al., 2025a) selects the top N outputs from multiple generations. Particle sampling (Singhal et al., 2025; Li et al., 2024c; 2025; Singh et al., 2025; Sunwoo Kim, 2025) improves upon this by performing importance-based sampling across the denoising process. Beam search (Liu et al., 2025a; Yang et al., 2025; Xie et al., 2025; Oshima et al., 2025; He et al., 2025b) keeps multiple candidates at each step, expanding the sequence set over time. Reward strategies are based on various evaluation metrics, such as VisionReward (Xu et al., 2024), ImageReward (Xu et al., 2023), and the Aesthetic score (Schuhmann et al., 2022), which guide the selection process by quantifying the quality of the generated output. These reward functions are crucial for aligning outputs with desired visual and semantic characteristics. Current TTS methods optimize search and reward strategies for general T2V generation to enhance overall performance. In this work, we investigate this specific challenge and explore how TTS can be leveraged to improve model performance on long-distance semantic prompts.

Evaluation of Video Generative Models. Early video-generation metrics were simplistic: some diverged from human judgment (Unterthiner et al., 2018; Radford et al., 2021b), while others reused real-video tests unsuited to synthetic clips (Soomro et al., 2012; Xu et al., 2016). Later studies (Szeto and Corso, 2022; Liu et al., 2023; 2024b; Huang et al., 2024c; Sun et al., 2025; Zheng et al., 2025; Chen et al., 2025b) such as VBench (Huang et al., 2024b) evaluated AI-generated videos from a comprehensive, multi-dimensional perspective. Several studies (Liu et al., 2024a; Yuan et al., 2024b; 2025; Ling et al., 2025) refine evaluation along single dimensions such as frame realism or temporal coherence. Although existing methods focus on video quality and human perception, semantic content assessment remains underexplored.
Current benchmarks struggle to effectively evaluate long-distance semantic prompts, which are key to advancing video generation capabilities. To address this, we introduce LDT-Bench, the first benchmark for evaluating long-distance semantic understanding in video generation.

Figure 2: Overview of our ImagerySearch. The prompt (e.g., "The native bear skillfully uses remote controls.") is scored by the Constrained Semantic Scorer (producing $\bar{D}_{\mathrm{sem}}$) and simultaneously fed to the T2V backbone (Wan2.1). At every step $t$ specified by the imagery scheduler, we sample a set of candidate clips, rank them with a reward function conditioned on $\bar{D}_{\mathrm{sem}}$ ($\alpha \cdot \mathrm{MQ} + \beta \cdot \mathrm{TA} + \gamma \cdot \mathrm{VQ} + \ldots$), and retain only a $\bar{D}_{\mathrm{sem}}$-controlled subset. The loop repeats until generation completes.

3 ImagerySearch

Text-to-video generation aims to synthesize coherent videos conditioned on prompts. Diffusion models inherently possess the flexibility to adjust test-time computation via the number of denoising steps. To further improve generation quality, we formulate a search problem that identifies better noise inputs for the diffusion sampling process. We organize the design space along two axes: the reward functions that evaluate video quality, and the search algorithms that explore and select optimal noise candidates.

3.1 Preliminaries

In standard diffusion frameworks, sampling starts from Gaussian noise $x_T \sim \mathcal{N}(0, I)$, and the model iteratively denoises the latent through a learned network $f_\theta$. As a widely used sampling paradigm, DDIM performs the following step-wise denoising update:

$$x_{t-1} = \zeta_{t-1}\left(\frac{x_t - \sigma_t f_\theta(x_t, t, c)}{\zeta_t}\right) + \sigma_{t-1} f_\theta(x_t, t, c), \qquad (1)$$

where $\zeta_{t-1}$, $\zeta_t$, and $\sigma_{t-1}$ denote predefined schedules. Prior test-time scaling approaches (Liu et al., 2025a; He et al., 2025a; Yang et al., 2025) operate within a fixed noise search space and use static reward functions, such as VideoScore (He et al., 2024), VideoAlign (Liu et al., 2025b), or their combinations, to rank candidates. By contrast, our framework supports flexible reward design and adaptive noise selection, substantially improving both sample efficiency and generation quality.
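For concreteness, the update in Eq. (1) can be transcribed directly; in the sketch below, f_theta and the zeta/sigma arrays are placeholders for the learned denoiser and the predefined schedules, which are model-specific.

```python
# Literal transcription of the DDIM-style update in Eq. (1), valid for t >= 1.
def ddim_step(x_t, t, c, f_theta, zeta, sigma):
    eps = f_theta(x_t, t, c)                           # predicted noise
    x0_hat = (x_t - sigma[t] * eps) / zeta[t]          # implied clean latent
    return zeta[t - 1] * x0_hat + sigma[t - 1] * eps   # re-noised to level t-1
```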
3.2 Dynamic Search Space

Inspired by imagery cognitive theory (Thomas, 1999; Pylyshyn, 2002; Feng et al., 2023), which posits that humans expend more effort and time to construct mental imagery for semantically distant concepts, we likewise adapt the candidate-video search space to a prompt's semantic distance: shrinking it for short-distance prompts to boost test-time efficiency, and enlarging it for long-distance prompts to explore a broader range of possibilities. We therefore propose a Semantic-distance-aware Dynamic Search Space (SaDSS). As shown in Fig. 2, this adaptive resizing is driven by a Constrained Semantic Scorer, which dynamically modulates the search space. Specifically, we define semantic distance as the average embedding distance between key entities (objects and actions) extracted from the prompt. Given a prompt $p$, we extract its compositional units $\{p_i\}_{i=1}^{n}$ and compute

$$\bar{D}_{\mathrm{sem}}(p) = \frac{1}{|E|} \sum_{(i,j) \in E} \lVert \varphi(p_i) - \varphi(p_j) \rVert_2, \qquad (2)$$

where $\varphi(\cdot)$ denotes the embedding function (e.g., a T5 encoder) and $E$ is the set of key entity pairs in the prompt. At inference time, we adapt the sampling procedure based on $\bar{D}_{\mathrm{sem}}$. Specifically, the number of candidates $N_t$ at timestep $t$ is dynamically adjusted as

$$N_t = N_{\mathrm{base}} \cdot \left(1 + \lambda \cdot \bar{D}_{\mathrm{sem}}(p)\right), \qquad (3)$$

where $N_{\mathrm{base}}$ is the base number of samples and $\lambda$ is a scaling factor that controls the sensitivity to semantic distance. In this work, we set $\lambda = 1$. By tailoring the search scope to the inherent difficulty of the prompt, SaDSS encourages the model to explore more diverse visual hypotheses when needed, improving visual plausibility under challenging conditions without incurring unnecessary computational costs for simple prompts.

3.3 Adaptive Imagery Reward

Based on our observations, adjacent denoising steps alter the latent video only marginally, so we invoke ImagerySearch at a few key noise levels $S = \{5, 20, 30, 45\}$, termed the Imagery Schedule (see Appendix A). As shown in Fig. 2, starting from a partially denoised latent $x_t$, we produce $\hat{x}_0$ by completing the denoising trajectory and compute the reward on $\hat{x}_0$ to assess the influence of different denoising stages on the final video quality. To enhance semantic alignment between generated videos and prompts with long-distance semantics, we introduce an Adaptive Imagery Reward (AIR) that modulates evaluation feedback based on the prompt's semantic difficulty. Specifically, we incorporate the semantic distance as a soft re-weighting factor into the reward formulation. The reward $R_{\mathrm{AIR}}(\hat{x}_0)$ for each candidate video $\hat{x}_0$ is defined as

$$R_{\mathrm{AIR}}(\hat{x}_0) = \left(\alpha \cdot \mathrm{MQ} + \beta \cdot \mathrm{TA} + \gamma \cdot \mathrm{VQ} + \omega \cdot R_{\mathrm{any}}\right) \cdot \bar{D}_{\mathrm{sem}}(p), \qquad (4)$$

where $\alpha$, $\beta$, $\gamma$, and $\omega$ are scaling factors that adaptively adjust the reward based on the prompt semantic distance $\bar{D}_{\mathrm{sem}}(p)$. MQ, TA, and VQ are from VideoAlign (Liu et al., 2025b), and $R_{\mathrm{any}}$ denotes an extensible reward (e.g., VideoScore (He et al., 2024), VMBench (Ling et al., 2025), UnifiedReward (Wang et al., 2025), VisionReward (Xu et al., 2024)).
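The three quantities interact as in the sketch below, a minimal Python rendering of Eqs. (2)-(4); the entity embeddings and the MQ/TA/VQ scores are assumed to come from external models (e.g., a T5 encoder and the VideoAlign heads), so plain lists and a dict stand in for them here.

```python
# Minimal rendering of Eqs. (2)-(4): semantic distance re-scales both the
# candidate budget (SaDSS, Eq. 3) and the reward (AIR, Eq. 4).
from itertools import combinations
import math

def semantic_distance(entity_embs):
    """Eq. (2): mean pairwise L2 distance over key-entity embeddings."""
    pairs = list(combinations(entity_embs, 2))
    return sum(math.dist(a, b) for a, b in pairs) / len(pairs)

def num_candidates(d_sem, n_base=5, lam=1.0):
    """Eq. (3): enlarge the search space for semantically distant prompts."""
    return round(n_base * (1 + lam * d_sem))

def air_reward(scores, d_sem, alpha, beta=1.0, gamma=1.0, omega=0.0):
    """Eq. (4): semantic distance acts as a soft re-weighting factor."""
    base = (alpha * scores["MQ"] + beta * scores["TA"]
            + gamma * scores["VQ"] + omega * scores.get("extra", 0.0))
    return base * d_sem

d = semantic_distance([[0.0, 0.0], [3.0, 4.0], [0.0, 4.0]])  # (5+4+3)/3 = 4.0
print(num_candidates(d))                                      # 5 * (1+4) = 25
```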
4 LDT-Bench

The rapid progress of video generation models is closely tied to the development of targeted evaluation benchmarks. Existing benchmarks primarily assess models using text prompts designed to depict realistic scenarios. However, as video generation models have achieved impressive performance in realistic scenarios, it is timely to shift the focus towards imaginative scenarios. Generally, such complex settings involve prompts in which entities, such as objects and actions, exhibit long semantic distances, meaning these entities rarely co-occur (e.g., "a panda piloting a helicopter"). These corner cases reveal the robustness limits of generative models. Nonetheless, most existing works remain limited to qualitative analysis on a few cases, and there is a lack of a unified benchmark specifically designed for this task. To fill this gap, we propose a novel benchmark, LDT-Bench, designed to systematically analyze the generalization ability of video generation models in complex scenarios induced by prompts with Long-Distance semantic Texts. In the following sections, LDT-Bench is introduced from two perspectives: the construction of the prompt suite and the design of evaluation metrics. The core components of LDT-Bench are illustrated in Fig. 3.

Figure 3: Overview of our LDT-Bench. Upper: (a) LDT-Bench is built by first extracting meta-information (object and action sets) from existing recognition datasets; (b) GPT-4o is then used to generate candidate prompts, which are filtered jointly by DeepSeek and humans to obtain the final prompt set; (c) additionally, we design a set of three MLLM-based QA tasks (ElementQA, AlignQA, AnomalyQA) that serve as the creativity metric. Lower: (d) compared with other benchmarks, LDT-Bench covers a much richer variety of categories; (e) its prompts also exhibit a semantic-distance distribution that is shifted toward substantially longer ranges. Note that "ASD" denotes the average semantic distance of prompts.

Benchmark | Prompt | Object | Action | ASD
VBench | 800 | 125 | 132 | 0.33
EvalCrafter | 700 | 169 | 237 | 0.40
T2V-CompBench | 1,400 | 308 | 545 | 0.31
FETV | 619 | 168 | 334 | 0.38
DEVIL | 810 | 270 | 556 | 0.39
VMBench | 1,050 | 340 | 1,216 | 0.35
LDT-Bench (Ours) | 2,839 | 1,938 | 901 | 0.86

4.1 Prompt Suite

Meta-information Extraction. Considering that objects and actions are the main entities in text prompts, we construct our prompts using the following two structural types: (1) Object-Action, an object combined with an uncommon or incompatible action; and (2) Action-Action, two semantically distant or even contradictory actions. To cover a wide range of objects and actions, we build our object and action sets from representative large-scale datasets. Specifically, the object set is derived from ImageNet-1K (Deng et al., 2009) and COCO (Lin et al., 2014) (covering 1,938 objects), while the action set is collected from ActivityNet (Caba Heilbron et al., 2015), UCF101 (Soomro et al., 2012), and Kinetics-600 (Carreira et al., 2018) (covering 901 actions). These collections serve as the foundation for subsequent prompt generation.

We first encode each object and action element $\mathrm{text}_i$ using a pretrained T5 text encoder (Raffel et al., 2020), obtaining a high-dimensional textual feature $h_i \in \mathbb{R}^d$. These embeddings are then projected into a shared 2D semantic space via Principal Component Analysis (PCA):

$$z_i = \mathrm{PCA}(h_i) = \mathrm{PCA}(\mathrm{T5}(\mathrm{text}_i)), \quad z_i \in \mathbb{R}^2, \qquad (5)$$

where $z_i$ represents the semantic position of the $i$-th element in the 2D space. T5 can be replaced with other encoders, such as CLIP (Radford et al., 2021b); see Appendix B.1 for details. To measure semantic divergence, we compute the Euclidean distance between each pair of elements as a criterion for selecting long-distance semantic prompts. We then construct two candidate sets: one by pairing each object with its most distant action (1,938 object-action pairs), and the other by matching each action with its most distant counterpart (901 action-action pairs). From each set, we select the 160 most distant pairs, resulting in 320 high-distance prompts that challenge the model with long-distance semantic combinations. For more analysis of the prompt suite, please refer to Appendix B.2.
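A small sketch of this selection pipeline is given below; random vectors stand in for the real T5 features, while the set sizes follow the text. It reproduces the PCA projection of Eq. (5) and the farthest-pair selection.

```python
# Sketch of the prompt-suite construction: PCA to 2D, then farthest pairs.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
H_obj = rng.normal(size=(1938, 512))   # stand-in T5 features for objects
H_act = rng.normal(size=(901, 512))    # stand-in T5 features for actions

pca = PCA(n_components=2).fit(np.vstack([H_obj, H_act]))   # Eq. (5)
z_obj, z_act = pca.transform(H_obj), pca.transform(H_act)

# Euclidean distances between every object and every action in 2D space.
dists = np.linalg.norm(z_obj[:, None, :] - z_act[None, :, :], axis=-1)
best_action = dists.argmax(axis=1)                  # most distant action per object
top160 = np.argsort(dists.max(axis=1))[::-1][:160]  # 160 most distant object-action pairs
```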
Long-distance Prompt Generation. Based on the obtained text element pairs, we employ a large language model, i.e., GPT-4o (Hurst et al., 2024), to generate fluent and complete text prompts by filling in the necessary sentence components. Subsequently, each prompt is double-checked by both DeepSeek-R1 (Guo et al., 2025) and human annotators to ensure quality, resulting in our final prompt suite. The detailed generation process and several illustrative cases are presented in Fig. 3 (b).

4.2 Imagery Evaluation Metrics

To quantitatively evaluate the performance of video generation models under long-distance semantic settings, we develop targeted evaluation metrics. Inspired by recent MLLM-based evaluation methods (Cho et al., 2023; Feng et al., 2025), we generate questions based on the text prompts. Subsequently, MLLMs with strong semantic comprehension capabilities analyze the generated videos in response to these questions, yielding quantitative evaluation results. Specifically, our assessment framework encompasses three primary dimensions.

ElementQA. Because our prompts focus on objects and actions, ElementQA primarily consists of targeted questions revolving around these elements. For example, given the prompt "The traffic light is dancing.", we can generate two questions: "Does the traffic light appear in the video?" and "Is the traffic light performing a dancing action?"

AlignQA. In addition to the basic semantic information covered by ElementQA, we also evaluate the generated videos in terms of visual quality and aesthetics (Murray et al., 2012). Given the challenging and inherently subjective nature of this assessment, we employ recently developed MLLMs that have been specifically optimized for alignment with human perception to perform the evaluation (Huang et al., 2024a; Wu et al., 2023).

AnomalyQA. We have observed that current video generation models frequently produce anomalous outputs. Consequently, we also leverage MLLMs to analyze the generated frames and answer targeted questions aimed at identifying these anomalies.

Implementation Details. For ElementQA, we employ Qwen2.5-VL-72B-Instruct (Bai et al., 2025) as the underlying MLLM, whereas for AlignQA we adopt Q-Align (Wu et al., 2023), a model specifically optimized for rating visual quality and aesthetics. Given the broader generalization required by AnomalyQA, we utilize the more powerful GPT-4o (OpenAI, 2024) for evaluation. We collectively refer to these three components as ImageryQA. Further implementation details are provided in Appendix B.3.
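As a minimal illustration of the question templating behind ElementQA, the sketch below turns extracted objects and actions into presence and execution questions; the exact wording used by our pipeline may differ (cf. Q1-Q3 in Fig. S3).

```python
# Hypothetical ElementQA question templating from prompt entities.
def element_qa(objects, actions):
    questions = [f"Does the {obj} appear in the video?" for obj in objects]
    questions += [f"Is the {obj} performing the action '{act}'?"
                  for obj in objects for act in actions]
    return questions

print(element_qa(["traffic light"], ["dancing"]))
# ['Does the traffic light appear in the video?',
#  "Is the traffic light performing the action 'dancing'?"]
```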
5 Experiments

5.1 Experimental Setup

Datasets & Metrics. To assess the imaginative capacity of video generation models, we evaluate them on both LDT-Bench and VBench (Huang et al., 2024b), using each benchmark's full prompt suite and associated metrics.

Compared Models. We compare two categories of models: (1) general models: Hunyuan (Kong et al., 2024), Wan2.1 (Wan Team et al., 2025), Open-Sora (Zheng et al., 2024b), and CogVideoX (Yang et al., 2024); (2) TTS methods: Video-T1 (Liu et al., 2025a) and EvoSearch (He et al., 2025a). We use Wan2.1 as the base model and generate 33-frame clips with the default settings (see Appendix C for details).

Experimental Environment. All experiments are run on a server equipped with 8 × NVIDIA H20 GPUs (96 GB each), an Intel Xeon Gold 6348 CPU (32 cores, 2.6 GHz), and 512 GB of RAM, under Ubuntu 20.04 LTS (kernel 5.15). We use Python 3.9 with PyTorch 2.5.1 (CUDA 12.4, cuDNN 9.1), torchvision 0.20.1, and Transformers 4.50.3.

5.2 Comparison with Other Generation Models

Performance on LDT-Bench. As shown in Tab. 1, we adopt Wan2.1 as the base model. Our method achieves a significant improvement of 8.83%, demonstrating a clear advantage. Furthermore, compared to other test-time scaling approaches, ImagerySearch also delivers consistently superior performance. These results highlight the effectiveness of our method in handling long-distance semantic prompts and its robustness in imagination-driven scenarios.

Table 1: Quantitative comparison on LDT-Bench (%, higher is better). ImagerySearch achieves the best average performance.

Model | ElementQA | AlignQA | AnomalyQA | ImageryQA (All)
Wan2.1 (Wan Team et al., 2025) | 1.66 | 31.62 | 15.00 | 48.28
Video-T1 (Liu et al., 2025a) | 1.91 | 38.16 | 14.68 | 54.75
Evosearch (He et al., 2025a) | 1.92 | 36.10 | 16.46 | 54.48
ImagerySearch (Ours) | 2.01 | 36.82 | 18.28 | 57.11

Figure 4: Visualization of examples for the prompt "The native bear skillfully uses remote controls.", with frames 0-33 sampled per model. Upper: results from general models (Wan2.1, CogvideoX, Opensora, Opensora-plan, Hunyuan). Lower: ImagerySearch versus other test-time scaling methods (Video-T1, Evosearch). Ours produces more vivid actions under long-distance semantic prompts.

Table 2: Quantitative comparison of video generation models on VBench (%, higher is better). ImagerySearch achieves the best average performance across multiple metrics, indicating better alignment and generation quality.

Group | Model | Aesthetic Quality | Background Consistency | Dynamic Degree | Imaging Quality | Motion Smoothness | Subject Consistency | Average
General | Wan2.1 (Wan Team et al., 2025) | 50.50 | 91.80 | 82.85 | 58.25 | 97.50 | 90.25 | 78.53
General | Opensora (Peng et al., 2025) | 48.80 | 95.25 | 73.15 | 61.35 | 99.05 | 92.95 | 78.43
General | CogvideoX (Yang et al., 2024) | 48.80 | 95.30 | 47.20 | 65.05 | 98.55 | 94.65 | 74.93
General | Hunyuan (Kong et al., 2024) | 50.45 | 92.65 | 85.00 | 59.55 | 95.75 | 90.55 | 78.99
TTS | Video-T1 (Liu et al., 2025a) | 57.20 | 95.65 | 54.05 | 60.25 | 99.30 | 94.80 | 76.88
TTS | Evosearch (He et al., 2025a) | 55.55 | 94.80 | 80.95 | 68.90 | 97.70 | 94.55 | 82.08
TTS | ImagerySearch (Ours) | 57.70 | 96.00 | 84.05 | 69.20 | 98.00 | 95.90 | 83.48

Figure 5: (a) Effect of semantic distance across different models; as semantic distance increases, our method remains the most stable. (b-e) Scaling behavior of ImagerySearch and baselines as inference-time computation increases; from left to right, the y-axes show the score changes for MQ, TA, VQ, and Overall (VideoAlign (Liu et al., 2025b)); our AIR consistently delivers superior performance. (f) Effect of reward weight.

Performance on VBench. For a balanced evaluation, we compare two classes of methods on VBench. The upper rows of Tab. 2 report general generators, while the lower rows list test-time scaling approaches: Video-T1 (Liu et al., 2025a), EvoSearch (He et al., 2025a), and our proposed ImagerySearch. All models are evaluated on long-distance prompts from LDT-Bench using the VBench metrics.
ImagerySearch achieves the best overall score and ranks highest on fine-grained metrics such as Dynamic Degree and Subject Consistency, indicating its strong ability to preserve prompt fidelity under wide semantic gaps. Fig. 4 illustrates this strength: ImagerySearch accurately reproduces both the specified subjects (e.g., bear, controls) and their associated actions (e.g., uses). Additional examples in Appendix D further demonstrate its robustness in handling complex long-distance prompts.

Table 3: Ablation study on VBench (%). "Baseline" is the plain backbone; "Modules" removes each of our two novel modules in turn; "SaDSS-static weight" denotes the performance obtained when the selection space is kept at a fixed size; "Search" swaps in alternative search strategies. The full configuration (ImagerySearch) yields the best performance.

Group | Model | Aesthetic Quality | Background Consistency | Dynamic Degree | Imaging Quality | Motion Smoothness | Subject Consistency | Average
Baseline | Wan2.1 (Wan Team et al., 2025) | 50.50 | 91.80 | 82.85 | 58.25 | 97.50 | 90.25 | 78.53
Modules | w/o AIR | 56.25 | 94.60 | 81.85 | 68.05 | 97.50 | 94.40 | 82.11
Modules | w/o SaDSS | 55.35 | 95.10 | 77.20 | 68.00 | 97.60 | 94.55 | 81.30
SaDSS-static weight | 0.5 | 57.25 | 96.15 | 70.00 | 70.75 | 97.45 | 95.45 | 81.18
SaDSS-static weight | 0.9 | 57.40 | 96.05 | 70.00 | 70.80 | 97.55 | 95.50 | 81.22
Search | BON (Ma et al., 2025) | 57.40 | 95.00 | 83.01 | 68.10 | 97.70 | 94.63 | 82.64
Search | Particle Sampling (Ma et al., 2025) | 56.51 | 93.52 | 81.72 | 67.04 | 96.18 | 93.38 | 81.39
Search | ImagerySearch (Ours) | 57.70 | 96.00 | 84.05 | 68.50 | 97.65 | 94.70 | 83.10

Robustness Analysis Across Semantic Distances. As illustrated in Fig. 5(a), our approach maintains nearly constant VBench scores as semantic distance increases, whereas competing methods exhibit pronounced fluctuations. This stability highlights the superior robustness of our model across a wide range of semantic distances. Additional error analysis is provided in Appendix E.

5.3 Test-time Scaling Law Analysis

We measure inference-time computation by the number of function evaluations (NFEs). As shown in Fig. 5(b-d), where performance is assessed with the MQ, TA, and VQ metrics from VideoAlign (Liu et al., 2025b), ImagerySearch exhibits monotonic performance improvements as inference-time computation increases. Notably, on Wan2.1 (Wan Team et al., 2025), ImagerySearch continues to gain as NFEs grow, whereas baseline methods plateau at roughly $1 \times 10^3$ NFEs (corresponding to the 30th timestep). Computation details are provided in Appendix F. Moreover, our method shows an even more pronounced advantage in the overall VideoAlign score, as illustrated in Fig. 5(e).

5.4 Ablation Study

Effect of SaDSS and AIR. As shown in the first three rows of Tab. 3, adding either the SaDSS or the AIR module individually already surpasses the baseline, while combining SaDSS with AIR achieves the best performance, confirming the complementary nature of semantic guidance and adaptive selection.

Effect of Search Space Size. The SaDSS-static weight rows in Tab. 3 compare fixed and dynamic search-space configurations. With static weights of 0.5 and 0.9, performance improves gradually, reaching a VBench score of 81.22%. In contrast, the dynamic approach attains a markedly higher score of 83.48%, demonstrating its superior ability to optimize the search space and thus boost model performance.

Effect of Search Strategy. The Search rows in Tab. 3 compare different search strategies (e.g., BON, Particle Sampling (Ma et al., 2025)). The experimental results demonstrate that our search strategy delivers the best performance.
Effect of Reward Dynamic Adjustment Mechanism. Fig. 5(f) shows the impact of varying reward weights on VBench scores across the reward dimensions (MQ, TA, VQ). As the weight varies from 0.2 to 1.2, TA shows notable improvement while MQ and VQ remain relatively stable. The consistent superiority of our approach, represented by the dashed line, underscores the effectiveness of dynamic reward adjustment, which achieves optimal performance irrespective of the weight setting.

6 Conclusion

In this study, we propose ImagerySearch, an adaptive test-time search method that improves video-generation quality for long-distance semantic prompts drawn from imaginative scenarios. Additionally, we present LDT-Bench, the first benchmark designed to evaluate such challenging prompts. ImagerySearch attains state-of-the-art results on both VBench and LDT-Bench, with especially strong gains on LDT-Bench, demonstrating its effectiveness for text-to-video generation under long-range semantic conditions. In future work, we will explore more flexible reward mechanisms to further enhance video-generation performance.

References

Shuai Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Sibo Song, Kai Dang, Peng Wang, Shijie Wang, Jun Tang, et al. Qwen2.5-VL technical report. arXiv preprint, 2025.

Fan Bao, Chendong Xiang, Gang Yue, Guande He, Hongzhou Zhu, Kaiwen Zheng, Min Zhao, Shilong Liu, Yaole Wang, and Jun Zhu. Vidu: a highly consistent, dynamic and skilled text-to-video generator with diffusion models. arXiv preprint, 2024.

Kevin Black, Michael Janner, Yilun Du, Ilya Kostrikov, and Sergey Levine. Training diffusion models with reinforcement learning. arXiv preprint, 2023.

Andreas Blattmann, Tim Dockhorn, Sumith Kulal, Daniel Mendelevitch, Maciej Kilian, Dominik Lorenz, Yam Levi, Zion English, Vikram Voleti, Adam Letts, et al. Stable video diffusion: Scaling latent video diffusion models to large datasets. arXiv preprint, 2023.

Fabian Caba Heilbron, Victor Escorcia, Bernard Ghanem, and Juan Carlos Niebles. Activitynet: A large-scale video benchmark for human activity understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 961-970, 2015.

Joao Carreira, Eric Noland, Andras Banki-Horvath, Chloe Hillier, and Andrew Zisserman. A short note about kinetics-600. arXiv preprint, 2018.

Chubin Chen, Jiashu Zhu, Xiaokun Feng, Nisha Huang, Meiqi Wu, Fangyuan Mao, Jiahong Wu, Xiangxiang Chu, and Xiu Li. S2-guidance: Stochastic self guidance for training-free enhancement of diffusion models. arXiv preprint, 2025a.

Rui Chen, Lei Sun, Jing Tang, Geng Li, and Xiangxiang Chu. Finger: Content aware fine-grained evaluation with reasoning for ai-generated videos. arXiv preprint, 2025b.

Jaemin Cho, Yushi Hu, Roopal Garg, Peter Anderson, Ranjay Krishna, Jason Baldridge, Mohit Bansal, Jordi Pont-Tuset, and Su Wang. Davidsonian scene graph: Improving reliability in fine-grained evaluation for text-to-image generation. arXiv preprint, 2023.

Xiangxiang Chu, Jianlin Su, Bo Zhang, and Chunhua Shen. Visionllama: A unified llama backbone for vision tasks. In European Conference on Computer Vision, pages 1-18. Springer, 2024.

Xiangxiang Chu, Renda Li, and Yong Wang. Usp: Unified self-supervised pretraining for image generation and understanding. arXiv preprint, 2025.

Kevin Clark, Paul Vicol, Kevin Swersky, and David J Fleet. Directly fine-tuning diffusion models on differentiable rewards. arXiv preprint, 2023.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248-255, 2009.

Carles Domingo-Enrich, Michal Drozdzal, Brian Karrer, and Ricky TQ Chen. Adjoint matching: Fine-tuning flow and diffusion generative models with memoryless stochastic optimal control. arXiv preprint, 2024.

Ying Fan and Kangwook Lee. Optimizing ddpm sampling with shortcut fine-tuning. arXiv preprint, 2023.

Xiaokun Feng, Shiyu Hu, Xiaotang Chen, and Kaiqi Huang. A hierarchical theme recognition model for sandplay therapy. In Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pages 241-252. Springer, 2023.

Xiaokun Feng, Haiming Yu, Meiqi Wu, Shiyu Hu, Jintao Chen, Chen Zhu, Jiahong Wu, Xiangxiang Chu, and Kaiqi Huang. Narrlv: Towards a comprehensive narrative-centric evaluation for long video generation models. arXiv preprint, 2025.

Genmo Team. Mochi 1. https://github.com/genmoai/models, 2024.

Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint, 2025.

Haoran He, Jiajun Liang, Xintao Wang, Pengfei Wan, Di Zhang, Kun Gai, and Ling Pan. Scaling image and video generation via test-time evolutionary search. arXiv preprint, 2025a.

Xuan He, Dongfu Jiang, Ge Zhang, Max Ku, Achint Soni, Sherman Siu, Haonan Chen, Abhranil Chandra, Ziyan Jiang, Aaran Arulraj, et al. Videoscore: Building automatic metrics to simulate fine-grained human feedback for video generation. arXiv preprint, 2024.

Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840-6851, 2020.

Yipo Huang, Xiangfei Sheng, Zhichao Yang, Quan Yuan, Zhichao Duan, Pengfei Chen, Leida Li, Weisi Lin, and Guangming Shi. Aesexpert: Towards multi-modality foundation model for image aesthetics perception. In Proceedings of the 32nd ACM International Conference on Multimedia, pages 5911-5920, 2024a.

Ziqi Huang, Yinan He, Jiashuo Yu, Fan Zhang, Chenyang Si, Yuming Jiang, Yuanhan Zhang, Tianxing Wu, Qingyang Jin, Nattapol Chanpaisit, et al. Vbench: Comprehensive benchmark suite for video generative models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 21807-21818, 2024b.

Ziqi Huang, Fan Zhang, Xiaojie Xu, Yinan He, Jiashuo Yu, Ziyue Dong, Qianli Ma, Nattapol Chanpaisit, Chenyang Si, Yuming Jiang, et al. Vbench++: Comprehensive and versatile benchmark suite for video generative models. arXiv preprint, 2024c.

Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. Gpt-4o system card. arXiv preprint, 2024.

Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, et al. Openai o1 system card. arXiv preprint, 2024.

Weijie Kong, Qi Tian, Zijian Zhang, Rox Min, Zuozhuo Dai, Jin Zhou, Jiangfeng Xiong, Xin Li, Bo Wu, Jianwei Zhang, et al. Hunyuanvideo: A systematic framework for large video generative models. arXiv preprint, 2024.

Kwai. Kling. https://klingai.com/, 2025. Accessed February 25, 2025.
Kimin Lee, Hao Liu, Moonkyung Ryu, Olivia Watkins, Yuqing Du, Craig Boutilier, Pieter Abbeel, Mohammad Ghavamzadeh, and Shixiang Shane Gu. Aligning text-to-image models using human feedback. arXiv preprint, 2023.

Jiachen Lei, Keli Liu, Julius Berner, Haiming Yu, Hongkai Zheng, Jiahong Wu, and Xiangxiang Chu. Advancing end-to-end pixel space generative modeling via self-supervised pre-training. arXiv preprint, 2025.

Jiachen Li, Weixi Feng, Tsu-Jui Fu, Xinyi Wang, Sugato Basu, Wenhu Chen, and William Yang Wang. T2v-turbo: Breaking the quality bottleneck of video consistency model with mixed reward feedback. Advances in Neural Information Processing Systems, 37:75692-75726, 2024a.

Jiachen Li, Qian Long, Jian Zheng, Xiaofeng Gao, Robinson Piramuthu, Wenhu Chen, and William Yang Wang. T2v-turbo-v2: Enhancing video generation model post-training through data, reward, and conditional guidance design. arXiv preprint, 2024b.

Xiner Li, Yulai Zhao, Chenyu Wang, Gabriele Scalia, Gokcen Eraslan, Surag Nair, Tommaso Biancalani, Shuiwang Ji, Aviv Regev, Sergey Levine, et al. Derivative-free guidance in continuous and discrete diffusion models with soft value-based decoding. arXiv preprint, 2024c.

Xiner Li, Masatoshi Uehara, Xingyu Su, Gabriele Scalia, Tommaso Biancalani, Aviv Regev, Sergey Levine, and Shuiwang Ji. Dynamic search for inference-time alignment in diffusion models. arXiv preprint, 2025.

Mingxiang Liao, Qixiang Ye, Wangmeng Zuo, Fang Wan, Tianyu Wang, Yuzhong Zhao, Jingdong Wang, Xinyu Zhang, et al. Evaluation of text-to-video generation models: A dynamics perspective. Advances in Neural Information Processing Systems, 37:109790-109816, 2025.

Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European Conference on Computer Vision, pages 740-755. Springer, 2014.

Xinran Ling, Chen Zhu, Meiqi Wu, Hangyu Li, Xiaokun Feng, Cundian Yang, Aiming Hao, Jiashu Zhu, Jiahong Wu, and Xiangxiang Chu. Vmbench: A benchmark for perception-aligned video motion generation. arXiv preprint, 2025.

Fangfu Liu, Hanyang Wang, Yimo Cai, Kaiyan Zhang, Xiaohang Zhan, and Yueqi Duan. Video-t1: Test-time scaling for video generation. arXiv preprint, 2025a.

Jiahe Liu, Youran Qu, Qi Yan, Xiaohui Zeng, Lele Wang, and Renjie Liao. Fréchet video motion distance: A metric for evaluating motion consistency in videos. arXiv preprint, 2024a.

Jie Liu, Gongye Liu, Jiajun Liang, Ziyang Yuan, Xiaokun Liu, Mingwu Zheng, Xiele Wu, Qiulin Wang, Wenyu Qin, Menghan Xia, et al. Improving video generation with human feedback. arXiv preprint, 2025b.

Yaofang Liu, Xiaodong Cun, Xuebo Liu, Xintao Wang, Yong Zhang, Haoxin Chen, Yang Liu, Tieyong Zeng, Raymond Chan, and Ying Shan. Evalcrafter: Benchmarking and evaluating large video generation models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22139-22149, 2024b.

Yuanxin Liu, Lei Li, Shuhuai Ren, Rundong Gao, Shicheng Li, Sishuo Chen, Xu Sun, and Lu Hou. Fetv: A benchmark for fine-grained evaluation of open-domain text-to-video generation. Advances in Neural Information Processing Systems, 36:62352-62387, 2023.

Simian Luo, Yiqin Tan, Longbo Huang, Jian Li, and Hang Zhao. Latent consistency models: Synthesizing high-resolution images with few-step inference. arXiv preprint, 2023.
Nanye Ma, Shangyuan Tong, Haolin Jia, Hexiang Hu, Yu-Chuan Su, Mingda Zhang, Xuan Yang, Yandong Li, Tommi Jaakkola, Xuhui Jia, et al. Inference-time scaling for diffusion models beyond scaling denoising steps. arXiv preprint, 2025.

Fangyuan Mao, Aiming Hao, Jintao Chen, Dongxia Liu, Xiaokun Feng, Jiashu Zhu, Meiqi Wu, Chubin Chen, Jiahong Wu, and Xiangxiang Chu. Omni-effects: Unified and spatially-controllable visual effects generation. arXiv preprint, 2025.

Naila Murray, Luca Marchesotti, and Florent Perronnin. Ava: A large-scale database for aesthetic visual analysis. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pages 2408-2415. IEEE, 2012.

OpenAI. Gpt-4o: Openai's new flagship model. https://openai.com/index/gpt-4o-and-gpt-4-api-updates/, 2024. Accessed: 2024-06-05.

OpenAI. Sora. https://openai.com/index/sora/, 2025. Accessed February 25, 2025.

Yuta Oshima, Masahiro Suzuki, Yutaka Matsuo, and Hiroki Furuta. Inference-time text-to-video alignment with diffusion latent beam search. arXiv preprint, 2025.

Xiangyu Peng, Zangwei Zheng, Chenhui Shen, Tom Young, Xinying Guo, Binluo Wang, Hang Xu, Hongxin Liu, Mingyan Jiang, Wenjun Li, et al. Open-sora 2.0: Training a commercial-level video generation model in $200k. arXiv preprint, 2025.

Mihir Prabhudesai, Russell Mendonca, Zheyang Qin, Katerina Fragkiadaki, and Deepak Pathak. Video diffusion alignment via reward gradients. arXiv preprint, 2024.

Zenon W Pylyshyn. Mental imagery: In search of a theory. Behavioral and Brain Sciences, 25(2):157-182, 2002.

Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748-8763. PMLR, 2021a.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67, 2020.

Runway. Runway gen3. https://app.runwayml.com/, 2025. Accessed February 25, 2025.

Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems, 35:25278-25294, 2022.

Anuj Singh, Sayak Mukherjee, Ahmad Beirami, and Hadi Jamali Rad. Code: Blockwise control for denoising diffusion models. arXiv, abs/2502.00968, 2025. URL https://api.semanticscholar.org/CorpusID:276094284.

Raghav Singhal, Zachary Horvitz, Ryan Teehan, Mengye Ren, Zhou Yu, Kathleen McKeown, and Rajesh Ranganath. A general framework for inference-time scaling and steering of diffusion models. arXiv preprint, 2025.

Khurram Soomro, Amir Roshan Zamir, and Mubarak Shah. Ucf101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint, 2012.

John Stam. Stable diffusion: High-resolution image synthesis with latent diffusion models, 2023.
Kaiyue Sun, Kaiyi Huang, Xian Liu, Yue Wu, Zihan Xu, Zhenguo Li, and Xihui Liu. T2vcompbench: A comprehensive benchmark for compositional text-to-video generation. arXiv preprint, 2024.

Kaiyue Sun, Kaiyi Huang, Xian Liu, Yue Wu, Zihan Xu, Zhenguo Li, and Xihui Liu. T2v-compbench: A comprehensive benchmark for compositional text-to-video generation. In Proceedings of the Computer Vision and Pattern Recognition Conference, pages 8406-8416, 2025.

Sunwoo Kim, Minkyu Kim, and Dongmin Park. Test-time alignment of diffusion models without reward over-optimization. In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=vi3DjUhFVm.

Ryan Szeto and Jason J Corso. The devil is in the details: A diagnostic evaluation benchmark for video inpainting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 21054-21063, 2022.

Nigel JT Thomas. Are theories of imagery theories of imagination? An active perception approach to conscious mental content. Cognitive Science, 23(2):207-245, 1999.

Thomas Unterthiner, Sjoerd Van Steenkiste, Karol Kurach, Raphael Marinier, Marcin Michalski, and Sylvain Gelly. Towards accurate generative models of video: A new metric & challenges. arXiv preprint, 2018.

Bram Wallace, Meihua Dang, Rafael Rafailov, Linqi Zhou, Aaron Lou, Senthil Purushwalkam, Stefano Ermon, Caiming Xiong, Shafiq Joty, and Nikhil Naik. Diffusion model alignment using direct preference optimization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8228-8238, 2024.

Wan Team, Ang Wang, Baole Ai, Bin Wen, Chaojie Mao, Chen-Wei Xie, Di Chen, Feiwu Yu, Haiming Zhao, Jianxiao Yang, et al. Wan: Open and advanced large-scale video generative models. arXiv preprint, 2025.

Jiuniu Wang, Hangjie Yuan, Dayou Chen, Yingya Zhang, Xiang Wang, and Shiwei Zhang. Modelscope text-to-video technical report. arXiv preprint, 2023.

Yibin Wang, Yuhang Zang, Hao Li, Cheng Jin, and Jiaqi Wang. Unified reward model for multimodal understanding and generation. arXiv preprint, 2025.

Haoning Wu, Zicheng Zhang, Weixia Zhang, Chaofeng Chen, Liang Liao, Chunyi Li, Yixuan Gao, Annan Wang, Erli Zhang, Wenxiu Sun, et al. Q-align: Teaching lmms for visual scoring via discrete text-defined levels. arXiv preprint, 2023.

Haoning Wu, Zicheng Zhang, Weixia Zhang, Chaofeng Chen, Liang Liao, Chunyi Li, Yixuan Gao, Annan Wang, Erli Zhang, Wenxiu Sun, et al. Q-align: Teaching lmms for visual scoring via discrete text-defined levels. In International Conference on Machine Learning, pages 54015-54029. PMLR, 2024a.

Meiqi Wu, Kaiqi Huang, Yuanqiang Cai, Shiyu Hu, Yuzhong Zhao, and Weiqiang Wang. Finger in camera speaks everything: Unconstrained air-writing for real-world. IEEE Transactions on Circuits and Systems for Video Technology, 34(9):8602-8613, 2024b.

Enze Xie, Junsong Chen, Yuyang Zhao, Jincheng Yu, Ligeng Zhu, Chengyue Wu, Yujun Lin, Zhekai Zhang, Muyang Li, Junyu Chen, et al. Sana 1.5: Efficient scaling of training-time and inference-time compute in linear diffusion transformer. arXiv preprint, 2025.

Jiazheng Xu, Xiao Liu, Yuchen Wu, Yuxuan Tong, Qinkai Li, Ming Ding, Jie Tang, and Yuxiao Dong. Imagereward: learning and evaluating human preferences for text-to-image generation.
In Proceedings of the 37th International Conference on Neural Information Processing Systems, pages 15903-15935, 2023.

Jiazheng Xu, Yu Huang, Jiale Cheng, Yuanming Yang, Jiajun Xu, Yuan Wang, Wenbo Duan, Shen Yang, Qunlin Jin, Shurun Li, et al. Visionreward: Fine-grained multi-dimensional human preference learning for image and video generation. arXiv preprint, 2024.

Jun Xu, Tao Mei, Ting Yao, and Yong Rui. Msr-vtt: A large video description dataset for bridging video and language. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5288-5296, 2016.

Haolin Yang, Feilong Tang, Ming Hu, Yulong Li, Yexin Liu, Zelin Peng, Junjun He, Zongyuan Ge, and Imran Razzak. Scalingnoise: Scaling inference-time search for generating infinite videos. arXiv preprint, 2025.

Zhuoyi Yang, Jiayan Teng, Wendi Zheng, Ming Ding, Shiyu Huang, Jiazheng Xu, Yuanming Yang, Wenyi Hong, Xiaohan Zhang, Guanyu Feng, et al. Cogvideox: Text-to-video diffusion models with an expert transformer. arXiv preprint, 2024.

Hangjie Yuan, Shiwei Zhang, Xiang Wang, Yujie Wei, Tao Feng, Yining Pan, Yingya Zhang, Ziwei Liu, Samuel Albanie, and Dong Ni. Instructvideo: Instructing video diffusion models with human feedback. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6463-6474, 2024a.

Shenghai Yuan, Jinfa Huang, Yongqi Xu, Yaoyang Liu, Shaofeng Zhang, Yujun Shi, Rui-Jie Zhu, Xinhua Cheng, Jiebo Luo, and Li Yuan. Chronomagic-bench: A benchmark for metamorphic evaluation of text-to-time-lapse video generation. Advances in Neural Information Processing Systems, 37:21236-21270, 2024b.

Shenghai Yuan, Xianyi He, Yufan Deng, Yang Ye, Jinfa Huang, Bin Lin, Jiebo Luo, and Li Yuan. Opens2v-nexus: A detailed benchmark and million-scale dataset for subject-to-video generation. arXiv preprint, 2025.

Dian Zheng, Ziqi Huang, Hongbo Liu, Kai Zou, Yinan He, Fan Zhang, Yuanhan Zhang, Jingwen He, Wei-Shi Zheng, Yu Qiao, et al. Vbench-2.0: Advancing video generation benchmark suite for intrinsic faithfulness. arXiv preprint, 2025.

Zangwei Zheng, Xiangyu Peng, Tianji Yang, Chenhui Shen, Shenggui Li, Hongxin Liu, Yukun Zhou, Tianyi Li, and Yang You. Open-sora: Democratizing efficient video production for all. arXiv preprint, 2024.

A The Selection of Imagery Schedule

As illustrated in Fig. S1, we observe that adjacent denoising steps modify the latent video only marginally; substantial deviations from earlier stages emerge only at several pivotal steps. To improve generation efficiency, we therefore trigger ImagerySearch at a limited set of noise levels, S = {5, 20, 30, 45}, which we term the Imagery Schedule. This schedule specifies the exact timesteps at which ImagerySearch is invoked.

Figure S1: Imagery schedule, illustrated for the prompt "The bear walks on the grassland." The heatmaps visualize 13th-layer attention projected onto the first video frame at successive denoising steps. Adjacent denoising steps (e.g., timesteps 3-6) show nearly identical focus regions, whereas only a few key steps (timesteps 20, 30, 45) exhibit pronounced changes. Concentrating analysis and search on these pivotal steps therefore captures the prompt-to-frame semantic correspondence more efficiently.
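The schedule-gated logic above can be summarized in a few lines. The following is a minimal Python sketch; `denoise_step` and `reward` are simplified stand-ins introduced here for illustration (they are assumptions, not the released implementation of the sampler or the adaptive imagery reward).

import random

IMAGERY_SCHEDULE = {5, 20, 30, 45}  # noise levels at which ImagerySearch is triggered

def denoise_step(latent, t, seed=0):
    # Stand-in for one denoising step of the video generator.
    random.seed(hash((latent, t, seed)))
    return latent + random.random()

def reward(latent, t):
    # Stand-in for the (adaptive) reward used to rank candidates at step t.
    return -abs(latent - 42.0)

def denoise_with_schedule(latent=0.0, num_steps=50, branch=5):
    for t in range(num_steps):
        if t in IMAGERY_SCHEDULE:
            # Branch only at the pivotal steps; keep the best-rewarded candidate.
            candidates = [denoise_step(latent, t, seed=s) for s in range(branch)]
            latent = max(candidates, key=lambda z: reward(z, t))
        else:
            latent = denoise_step(latent, t)
    return latent

Restricting the branching to the four scheduled steps keeps the number of extra function evaluations small while still intervening where the attention maps change the most.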
B More Details About Imagery Evaluation Metrics

B.1 More Text Encoders. In our current implementation, T5 serves three purposes: it encodes the key entities in each prompt, measures their semantic distances, and then uses those distances to adjust the search space and reward weights during generation. The same pipeline can be run with a CLIP text encoder (Radford et al., 2021a; Blattmann et al., 2023). Trained on large-scale image-text pairs, CLIP yields text embeddings whose cosine distances correlate well with visual concepts, so these distances can play exactly the same role in deciding when to expand or shrink the search space. In addition, CLIP similarities are widely used as a measure of text-image or text-video alignment, which makes them a natural choice for the alignment term in our reward function (Stam, 2023). Because CLIP, like T5, produces a fixed-length vector in a single forward pass, it can be swapped in as a drop-in replacement without changing any downstream components while fully preserving the effectiveness of our adaptive search and reward mechanisms.
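To make the adaptive mechanism concrete, the sketch below shows one plausible way the entity-level semantic distance could be mapped to a search-space size and a TA reward weight. The `embed` stand-in, the thresholds, and the linear ramp are illustrative assumptions; any fixed-length text encoder (T5 or CLIP) could supply the embeddings.

import numpy as np

def cosine_distance(u, v):
    # Semantic distance as 1 - cosine similarity between two text embeddings.
    return 1.0 - float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def adapt_search(distance, base_size=5, max_size=10, d_lo=0.3, d_hi=0.7):
    # Larger semantic distance -> larger candidate pool and stronger TA weight.
    frac = min(max((distance - d_lo) / (d_hi - d_lo), 0.0), 1.0)
    size = int(round(base_size + frac * (max_size - base_size)))
    alpha = 0.2 + frac  # TA coefficient grows with distance (here spanning 0.2-1.2)
    return size, alpha

# `embed` is a placeholder for a real encoder call (e.g., a T5 or CLIP text tower).
rng = np.random.default_rng(0)
embed = lambda text: rng.standard_normal(512)

d = cosine_distance(embed("bear"), embed("remote controls"))
size, alpha = adapt_search(d)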
B.2 More Analysis about Prompt Suite. As shown in Fig. S2, we provide a multi-faceted overview of the LDT-Bench prompt suite and underscore its advantages for long-distance semantic evaluation. (a) Examining the distribution of actions, a pronounced long-tail pattern emerges: of the five super-categories, Sports & Wellness and Daily Services each supply 300 prompts, ensuring ample coverage of everyday yet highly diverse actions. (b) For objects, a treemap of 14 super-categories (scaled by instance count) reveals that Animal and Artifact jointly exceed half of all samples, while still leaving room for rarer classes; this balance of head and tail categories is largely missing in prior benchmarks. (c) The object word cloud (after stop-word filtering) highlights high-frequency nouns such as cricket, person, and remote, evidencing fine-grained lexical diversity across domains. (d) The action word cloud reveals a wide semantic span, with verbs like play, join, use, and handle, that challenges models to cope with imaginative, long-distance dependencies. Taken together, these statistics show that LDT-Bench not only covers a richer mix of objects and actions than existing datasets but also accentuates long-distance semantic relationships that current models find most difficult, making it a uniquely effective testbed for stress-testing creative video generation systems.

Figure S2: LDT-Bench prompt suite analysis: (a) action super-category distribution shown as a horizontal bar chart; (b) object super-category distribution displayed as a treemap, with area proportional to class count; (c) word cloud highlighting the most frequent object-action prompts; (d) word cloud highlighting the most frequent action-action prompts.

B.3 ImageryQA Implementation Details. As described in Sec. 4.2 of the paper, our metric is primarily composed of three components: ElementQA, AlignQA, and AnomalyQA (Fig. S3(a)). In this subsection, we provide further clarification using specific examples and illustrate the metric computation process. As shown in Fig. S3(b), given the evaluation prompt "A person polishes furniture attentively at home, then packs cleaning products for organization.", two videos are generated by different video generation models. First, ElementQA formulates questions based on the objects and actions within the prompt, i.e., "person," "polishes furniture," and "packs cleaning products for organization," resulting in the questions Q1, Q2, and Q3 in Fig. S3. Next, AlignQA assesses the first frame of each video in terms of image quality and aesthetics. Finally, AnomalyQA evaluates abnormal events in both videos, as illustrated by Q5 in Fig. S3. Based on these questions, we employ different MLLMs and answer strategies. Recent studies (Feng et al., 2025; Liu et al., 2025b; Wu et al., 2024a; Zheng et al., 2025; Wu et al., 2024b) suggest that for questions with inherent uncertainty, having a general-purpose MLLM (Bai et al., 2025; OpenAI, 2024) answer the same question multiple times and averaging the results yields more reliable evaluations. Therefore, for ElementQA, we prompt Qwen2.5-VL-72B-Instruct (Bai et al., 2025) to answer each question five times. For AnomalyQA, considering the higher cost of GPT-4o (OpenAI, 2024), we collect three responses per question. For Q-Align (Wu et al., 2023) in AlignQA, since it is a dedicated model trained for aesthetic quality assessment and directly outputs a quantitative score, we use a single response.

C Experimental Setup: Model Details

Parameter settings. In our implementation, the baseline model is Wan2.1-1.3B (Wan Team et al., 2025). We set the imagery schedule to {5, 20, 30, 45} and the imagery size schedule to {10, 5, 5, 5, 5}. As shown in Fig. S4, VQ and MQ exhibit the same selection trend, whereas TA shows the opposite. Therefore, regarding the parameters in Equation (5), we set β = γ = 1.0, and α is dynamically adjusted.

Figure S3: Evaluation with ImageryQA. (a) We design a structured question set, ImageryQA, consisting of ElementQA, AlignQA, and AnomalyQA. (b) Comparison between Wan2.1 and ImagerySearch on the prompt "A person polishes furniture attentively at home, then packs cleaning products for organization." The questions are: Q1 "Is a person present in the scene?"; Q2 "Is a person performing the action 'polishes furniture attentively' at home?"; Q3 "Is a person performing the action 'packs cleaning products for organization'?"; Q4, the image quality and aesthetic score; and Q5, an anomaly check ("This is a generated video. Please help me determine whether there are any anomalies in the video frames, such as abnormal appearance or structure of objects, abnormal deformation of objects, motion that is unreasonable or violates physical laws, disappearance or discontinuity of objects, as well as artifacts or motion ghosting."). Wan2.1 scores Q1: 1.00, Q2: 0.00, Q3: 0.00, Q4: 2.18, Q5: 0.00, failing to depict a person and the described actions, with low aesthetic quality (Q4) and visual anomalies (Q5). In contrast, ImagerySearch scores Q1: 1.00, Q2: 0.20, Q3: 0.80, Q4: 4.69, Q5: 0.33, successfully capturing both actions (polishing furniture and packing cleaning products) and scoring higher on both Q4 and Q5.

D More Examples

Additional qualitative examples are provided in Fig. S5, Fig. S6, and Fig. S7. Specifically, Fig. S5 reports results on LDT-Bench, where the first five rows correspond to action-action prompts and the last three to object-action prompts. Fig. S6 and Fig. S7 show further action-action cases drawn from VBench. Across all examples, our method produces vivid and coherent videos, even under long-distance semantic prompts, illustrating its capacity to handle challenging imaginative scenarios.
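As a concrete illustration of the ImageryQA protocol in Sec. B.3, the sketch below aggregates the three components with the sampling counts stated there (five answers for ElementQA, three for AnomalyQA, one deterministic Q-Align score). The `ask` function is a hypothetical stand-in for querying an MLLM and returning a score in [0, 1].

from statistics import mean

def ask(model, question, video):
    # Stand-in for an MLLM query (e.g., Qwen2.5-VL-72B-Instruct or GPT-4o).
    return 1.0

def imagery_qa(video, element_questions, anomaly_question, qalign_score):
    # ElementQA: five samples per question from a general-purpose MLLM, averaged.
    element = mean(mean(ask("qwen2.5-vl", q, video) for _ in range(5))
                   for q in element_questions)
    # AnomalyQA: three samples per question (GPT-4o queries are costlier).
    anomaly = mean(ask("gpt-4o", anomaly_question, video) for _ in range(3))
    # AlignQA: Q-Align outputs a quantitative score directly, so one response suffices.
    return {"ElementQA": element, "AlignQA": qalign_score, "AnomalyQA": anomaly}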
E Error Analysis

In the VBench (Huang et al., 2024b) error analysis (Fig. S8), ImagerySearch shows a higher mean score with a tighter interquartile range, indicating more stable performance across prompts. Evosearch (He et al., 2025a) attains a comparable median but displays greater dispersion, whereas Wan2.1 (Wan Team et al., 2025) and Video-T1 (Liu et al., 2025a) exhibit lower central scores and wider quartile spans. Overall, dynamically adjusting the search space and rewards according to semantic distance helps maintain generation quality while reducing sensitivity to prompt difficulty.

Figure S4: Reward-weight analysis for an action-action example (left, "The camel packs its belongings with care.") and an object-action example (right, "A girl applies eye makeup in her bedroom, then ties a colorful ribbon at her crafting desk."), visualizing the generated videos under different weight settings (MQ, TA, and VQ each at 0.6 and 1.0). MQ and VQ follow almost identical trends, whereas TA moves in the opposite direction. Accordingly, we fix the MQ and VQ coefficients to 1 and vary the TA coefficient with the prompt, selecting videos that better fit imaginative scenarios.

Figure S5: More examples on LDT-Bench. The prompts are:
"Cricket players celebrate victories on lush pitches, then use remote controllers for strategic insights."
"Pole vaulters compete on athletic fields, then tie knots in practice equipment bags."
"A person washes face carefully in quiet bathrooms, then ties knots in towels for storage."
"Dancers embrace elegance with belly dance routines at cultural fairs, then tie knots in costume shawls."
"The tiger enjoys competitive trampoline events."
"The shoe shop efficiently utilizes a remote controller."
"The shovel springs vigorously onto the trampoline."
"A girl applies eye makeup in her bedroom, then ties a colorful ribbon at her crafting desk."
The images below each prompt show the result of frame sampling, where 16 frames are uniformly extracted from a 33-frame video.

The Fig. S6 prompts begin as follows. "CG game concept digital art, a snowboarder with a sleek black snowboard, wearing a black hoodie and cargo pants. They are standing on a snowy mountain slope, with a clear blue sky above and fluffy white clouds in the distance. The snowboarder has long wavy brown hair blowing in the wind, and their face is covered in snow. They are holding the snowboard tightly with one hand and are about to take off down the slope. The background features towering peaks and deep valleys. The overall scene is vibrant and dynamic, with natural lighting and shadows. Low-angle view, medium shot focusing on the snowboarder's action."

"Photorealistic astronaut riding a horse in the vast expanse of outer space. The astronaut, wearing a sleek, silver spacesuit with reflective panels, is astride a majestic steed adorned with metallic accents. Both the astronaut and horse are illuminated by the stark glow of stars and distant planets. The horse's coat is a blend of metallic silver and subtle blue hues, reflecting the cold, distant universe around them. The astronaut leans forward slightly, gripping the reigns tightly as they navigate the void. The horse moves with grace and purpose, its hooves creating a soft, rhythmic sound against the vacuum. The background showcases a surreal, star-filled cosmos, with nebulae and cosmic dust swirling around them. The image captures the awe-inspiring moment of this unlikely duo journeying through the cosmos.
High-definition photorealistic rendering. Full-body astronaut and horse, medium shot in space environment."

"A person leans in for a tender kiss, their lips gently touching as they look into each other's eyes. The person has tousled brown hair, soft features, and warm, inviting skin. They are wearing a cozy sweater in a soft pastel color, paired with jeans and sneakers. The background is a softly lit bedroom, with subtle shadows creating depth. A small bedside lamp casts a warm glow, and there are hints of fluffy bedding and a vintage alarm clock on the nightstand. The couple is positioned mid-shot, facing each other with a sense of intimacy. Soft romantic music plays in the background. Medium shot, close-up of the faces."

"CG game concept digital art, a tranquil tableau featuring a serene cup resting on a polished wooden surface. The cup is made of delicate porcelain, with intricate hand-painted designs in shades of pale blue and green. It sits gracefully on a small, intricately carved wooden coaster. The wooden surface is smooth and gleaming, with subtle grain patterns. Soft, warm ambient lighting casts gentle shadows, highlighting the intricate details of the cup and the coaster. The background is a minimalist, softly textured room with hints of pastel colors, creating a peaceful and serene atmosphere. Low-angle view, medium shot focusing on the detailed cup."

Figure S6: More examples on VBench (Part I). The images below each prompt show the result of frame sampling, where 16 frames are uniformly extracted from a 33-frame video.

The Fig. S7 prompts are as follows. "A graceful woman in flowing white robes, adorned with intricate silver jewelry, stands gracefully in a well-lit garden. She expertly arranges vibrant roses, lilies, and peonies with practiced precision. Her long, wavy hair flows gently behind her as she speaks softly to herself, her serene expression reflecting the beauty of nature around her. Soft sunlight casts a warm glow over the scene, highlighting the soft petals and delicate textures of the flowers. She pauses occasionally to admire her work, a content smile playing on her lips. The background features lush greenery, winding paths, and a gentle breeze rustling through the leaves. The composition includes various angles and shots, showcasing the woman from the side, full body, and a close-up of her hands deftly arranging the flowers. The scene is captured in a cinematic lighting style, emphasizing the harmony between the artist and her creations."

"CG game concept digital art, a skilled blacksmith bending metal with intense focus and determination. The blacksmith wears sturdy leather gloves and a heavy apron, standing amidst a cluttered forge filled with glowing coals and molten metal. Sparks fly as he skillfully shapes a piece of iron, his muscles tensing with effort. The environment is dimly lit, with flickering torches casting shadows on the walls. The blacksmith's face is lined with concentration, his jaw set tightly. He holds the metal with both hands, applying pressure and guiding it with precise motions. The metal bends and twists under his touch, creating a rhythmic sound that echoes through the forge. The background is a chaotic yet orderly scene, with tools scattered around and tools hanging from pegs on the wall. Close-up, low-angle view."

"A person gracefully ice skating on a frozen lake at night. The figure wears a sleek black ice skating outfit with reflective silver accents, allowing their every movement to catch the glow of the streetlights casting a soft, ethereal light.
Their hair is pulled back into a sleek ponytail, flowing slightly with each spin. They glide effortlessly across the ice, occasionally stopping to pose with arms outstretched, creating intricate patterns in the snow. The background is a blurred view of the city skyline with twinkling lights, giving a sense of adventure and excitement. Night-time, winter setting with a touch of magic. Low-angle shot focusing on the skater's face, capturing their joy and determination."

"A person is making a large snowman in a winter landscape. The person is wearing a cozy red coat and a woolen hat, with mittens on their hands. They stand in front of a soft, fluffy snowbank, surrounded by pristine white snow. The person begins by shaping a large ball of snow for the snowman's body, then carefully forming smaller balls for the snowman's arms and head. They use their scarf as a makeshift tool to smooth out the snow, adding details like buttons and a carrot nose. The background is a vast snowy field with distant pine trees and a clear blue sky. The scene captures the joy and determination of building something from the purest of materials. Winter-themed lighting with soft, diffused sunlight highlights the intricate details of the snowman. Close-up shots of the person's hands working, medium shots of the snowman taking shape, and wide shots of the entire setup in progress. Handheld camera movements capture the person's gestures and expressions, providing a sense of movement and engagement."

Figure S7: More examples on VBench (Part II). The images below each prompt show the result of frame sampling, where 16 frames are uniformly extracted from a 33-frame video.

Figure S8: Error analysis of VBench scores (%) on long-distance semantic prompts. Each box shows the score distribution for one model (mean marked by a white diamond; mean scores: ImagerySearch 82.1, Evosearch 74.3, Wan2.1 63.3, Video-T1 64.2); individual data points are overlaid in matching colors. ImagerySearch (orange) attains the highest mean with the tightest spread, while the other methods exhibit lower central tendencies and larger variances.
Intelligent Dynamic Handover via AI-assisted Signal Quality Prediction in 6G Multi-RAT Networks

Maria Lamprini A. Bartsioka, R&D Department, Four Dot Infinity, Athens, Greece, mbarts@fourdotinfinity.com
Anastasios Giannopoulos, R&D Department, Four Dot Infinity, Athens, Greece, angianno@fourdotinfinity.com
Sotirios Spantideas, R&D Department, Four Dot Infinity, Athens, Greece, sospanti@fourdotinfinity.com

This work was supported in part by the UNITY-6G Project, funded by the European Union's HORIZON-JU-SNS-2024 program, under Grant Agreement No 101192650; and in part by the 6G-Cloud Project, funded by the European Union's HORIZON-JU-SNS-2023 program, under Grant Agreement No 101139073.

Abstract—The emerging paradigm of 6G multiple Radio Access Technology (multi-RAT) networks, where cellular and Wireless Fidelity (WiFi) transmitters coexist, requires mobility decisions that remain reliable under fast channel dynamics, interference, and heterogeneous coverage. Handover in multi-RAT deployments is still highly reactive and event-triggered, relying on instantaneous measurements and threshold events. This work proposes a Machine Learning (ML)-assisted Predictive Conditional Handover (P-CHO) framework based on model-driven, short-horizon signal quality forecasts. We present a generalized P-CHO sequence workflow orchestrated by a RAT Steering Controller, which standardizes data collection, parallel per-RAT predictions, decision logic with hysteresis-based conditions, and CHO execution. Considering a realistic multi-RAT environment, we train RAT-aware Long Short-Term Memory (LSTM) networks to forecast the signal quality indicators of mobile users along randomized trajectories. The proposed P-CHO models are trained and evaluated under different channel models for cellular and IEEE 802.11 WiFi integrated coverage. We study the impact of hyperparameter tuning of LSTM models under different system settings, and compare direct multi-step versus recursive P-CHO variants. Comparisons against baseline predictors are also carried out. Finally, the proposed P-CHO is tested under soft and hard handover settings, showing that the hysteresis-enabled P-CHO scheme is able to reduce handover failures and ping-pong events. Overall, the proposed P-CHO framework can enable accurate, low-latency, and proactive handovers suitable for ML-assisted handover steering in 6G multi-RAT deployments.

Index Terms—6G, WiFi, Handover, Machine Learning, Mobility Control, Multi-RAT Network, Radio Access Technology

I. INTRODUCTION

Fifth-generation (5G) and beyond (B5G) communication networks are being deployed worldwide to address the growing demands for massive device connectivity, high capacity, ultra-low latency, and seamless communication across diverse scenarios [1]. A key architectural approach to meet these requirements is the deployment of heterogeneous networks (HetNets). HetNets can be classified into two main categories: (i) those integrating multiple Radio Access Technologies (multi-RATs), and (ii) those comprising heterogeneous equipment types. Such architectures have been progressively incorporated into Third-Generation Partnership Project (3GPP) specifications starting from Release 11 [2] and beyond. In this work, the focus is on multi-RAT environments that enable the coexistence of B5G New Radio (NR) cellular infrastructures with complementary technologies such as Wireless Fidelity (WiFi) [3], [4]. Most modern User Equipments (UEs) can simultaneously connect to both WiFi and 5G.
In such dual-connectivity scenarios, UEs typically prioritize WiFi connections to minimize cellular data charges. However, this often leads to throughput degradation in areas with obstacles or poor WiFi coverage. To overcome this issue, handover mechanisms are employed to dynamically switch between WiFi and cellular networks according to a predefined selection policy [5].

Traditional handover mechanisms between different RATs in current 5G/NR systems are typically event-driven and rely on instantaneous measurements of the Received Signal Strength Indicator (RSSI) that compare serving and neighbor signal levels with thresholds and time-to-trigger (TTT) timers [6]. Although conventional methods are simple and computationally efficient, they often result in suboptimal Quality of Service (QoS), especially in dynamic and heterogeneous environments. Conditional Handover (CHO), standardized in 3GPP Release 16 [7], pre-configures one or more target cells together with execution conditions. The final handover is executed once those conditions are met, reducing command latency and improving reliability under fast fading and blockage. However, both conventional handover and CHO remain fundamentally reactive, since decisions are taken when thresholds are crossed, not based on forecasted link quality or under rapidly changing interference in multi-RAT settings. Recently, Machine Learning (ML) and Deep Learning (DL) have emerged as promising enablers for intelligent mobility management, aiming to overcome these limitations and enable proactive handover decisions [8], [9]. Their ability to learn complex patterns makes them particularly suitable for predicting multi-source wireless signal quality. Furthermore, ML-based mobility policies can simultaneously exploit UE measurements, environmental conditions, and user-specific performance objectives, achieving a balance between connectivity and efficiency [10].

In this context, this paper proposes a Predictive CHO (P-CHO) framework that supports ML-assisted dynamic RAT selection under mobility and interference conditions. The goal is to augment CHO with model-driven forecasts of near-future signal quality for candidate cells/RATs, turning execution conditions from static thresholds into time-aware predicates derived from predicted signal-to-interference-plus-noise ratio (SINR) trajectories. By anticipating short-term degradations and opportunities (e.g., impending WiFi shadowing or inter-cellular interference), P-CHO enables earlier preparation of target links and smarter selection among multiple RATs. ML/DL models, especially sequence models such as Long Short-Term Memory (LSTM), are well suited for this task because they (i) can capture temporal dependencies in radio dynamics, (ii) can provide accurate predictions of UE measurements, and, finally, (iii) generate actionable predictions for mobility control and traffic steering. The main contributions of this work are summarized as follows:

• We formalize P-CHO for sixth-generation (6G) multi-RAT networks by integrating short-horizon SINR forecasts into CHO execution logic, enabling proactive vertical and horizontal handovers across cellular and WiFi RATs.

• We design and train RAT-specific SINR forecasting models, including a bidirectional LSTM (BiLSTM) for 6G cellular BSs and a lightweight LSTM for WiFi APs, producing next-step and multi-step SINR estimates along stochastic user trajectories.
• We implement a hybrid Cellular/WiFi HetNet simulator that generates mobility- and channel-realistic datasets (path loss, shadowing, small-scale fading, and interference) while the UE traverses predefined routes in dense deployments.

• We embed the SINR predictors into a RAT Steering Controller that combines multi-source UE measurement reports, infers pre-trained SINR prediction models, and converts SINR forecasts into steering execution conditions, leading to dynamic RAT-user associations.

• We benchmark the proposed LSTM-based signal predictors against other ML-based models and traditional autoregressive models, demonstrating consistent improvements under dynamic interference and mobility.

The rest of this manuscript is organized as follows: Section II reviews prior research in this field. Section III formulates the system model of a heterogeneous environment with co-existing 6G BSs and WiFi APs. Section IV presents the proposed ML/DL algorithms and the P-CHO framework for intelligent signal prediction, while Section V describes their performance evaluation. Finally, concluding remarks and future directions are provided in Section VI.

II. RELATED WORK

Mobility management in 5G/6G HetNets has drawn significant attention, particularly at the intersection of handover control and data-driven intelligence. Most existing studies analyze handover decision-making within 5G NR or across Long Term Evolution (LTE)-5G, with increasing interest in predictive policies leveraging sequence models.

In [11], a deep residual matrix LSTM was proposed to predict future user locations and trigger handover when the distance to the serving Base Station (BS) exceeds a predefined threshold. The method achieved an increased handover success rate and reduced latency, outperforming Reinforcement Learning (RL) and Adaptive Cell Selection Algorithm (ACSA) schemes. While compelling, the approach was confined to intra-5G scenarios and did not consider multi-RAT interactions. A complementary direction is taken in [10], where an LSTM predicts the probability that each neighboring cell will have the highest Reference Signal Received Power (RSRP), formulating handover as a multi-class classification problem with a post-processing dynamic threshold to balance false positives/negatives. Evaluated in a simulated industrial 5G setup, the approach reduced radio link failures and mitigated ping-pong effects. However, the focus remained on 5G RSRP rather than cross-RAT signal dynamics.

Learning-based policy optimization has also been explored beyond pure prediction. The authors in [12] studied dense HetNets with a deep Q-learning strategy that selects candidate BSs per time slot to optimize throughput, delay, and energy consumption. The RL method outperformed conventional A3 policies and decreased the energy usage (e.g., to 0.033 J/s in dynamic channels and 0.023 J/s in quasi-static channels), yet it did not explicitly forecast short-horizon signal evolution. In a related concept, the authors in [13] extended the 3GPP-based CHO framework for millimeter-wave (mmWave) 5G with a deep neural model trained on geographical blockage patterns and RSRP sequences, producing next-cell probabilities to enable early preparation. The reported accuracy (85%) and early-preparation success rate (98%) showcased the value of preparing handovers ahead of execution events, but the study remained single-RAT and RSRP-centric.
Regarding WiFi-related works, the study in [14] proposed a proactive cross-layer handover framework for Wireless Local Area Networks (WLANs) that extracts network-layer features (RSSI, retransmissions, packet delay) to train ML classifiers distinguishing handover from non-handover states. The method reduces handover latency by approximately 200 ms and packet loss by over 30%. While effective, the study did not extend to heterogeneous cellular-WiFi coordination.

Works explicitly targeting multi-RAT are relatively sparse. One notable effort is [15], which combines a Multi-access Edge Computing (MEC)-based throughput estimator with Deep Reinforcement Learning (DRL) to learn discrepancies between estimated and actual throughput, thereby guiding the selection between 5G and WiFi transmitters and using the Quick UDP Internet Connections (QUIC) scheme for migration. Nevertheless, the DRL agent requires periodic re-training once the network dynamics significantly change, leading to time-consuming model adjustment periods.

Prior studies either (i) operate within a single RAT with RSRP-based targets [10], [11], [13], (ii) optimize policies without explicit short-horizon signal forecasting [12], or (iii) treat multi-RAT selection primarily as throughput-driven migration [15] or WiFi-only handover [14]. To fill these gaps and extend the existing work to hybrid cellular/WiFi networks, this work targets predictive multi-RAT mobility by forecasting RAT-specific SINR along user trajectories and embedding these forecasts into a P-CHO logic.

III. MULTI-RAT SYSTEM MODEL

This section presents the proposed system model for proactive handover in a hybrid cellular/WiFi multi-RAT HetNet. Section III-A describes the considered network topology, system entities, channel models and user mobility patterns. Section III-B formulates the key problem elements for signal quality prediction-based handover. The core objective is to forecast the experienced time-varying SINR metrics for each RAT transmitter as users move across mobility trajectories and, then, determine the optimal target RAT.

A. System Overview

We consider a heterogeneous multi-RAT environment representative of B5G/6G deployments. The network comprises wide-area cellular BSs and overlapping short-coverage WiFi Access Points (APs), with mobile UEs traversing predefined paths within the multi-RAT coverage area. Fig. 1 illustrates an example layout in which user trajectories traverse overlapping coverage regions of multiple RATs. We define the following sets: (i) the set of cellular BSs $\mathcal{S}_{BS} = \{b_1, b_2, \ldots, b_{N_{BS}}\}$ (includes $N_{BS}$ BSs), (ii) the set of APs $\mathcal{S}_{AP} = \{a_1, a_2, \ldots, a_{N_{AP}}\}$ (includes $N_{AP}$ APs, each one located within a specific BS's coverage), (iii) the set of UEs $\mathcal{S}_{UE} = \{u_1, u_2, \ldots, u_{N_{UE}}\}$ (includes $N_{UE}$ UEs moving with a certain velocity), and (iv) the set of UE paths $\mathcal{S}_T = \{t_1, t_2, \ldots, t_{N_T}\}$ (includes $N_T$ UE trajectories created as polynomial interpolations, representing random walks across the whole multi-RAT area).

Two link types are considered, including (i) $L_{b,u}$ for cellular BS-UE communication and (ii) $L_{a,u}$ for AP-UE communication. Each link employs a RAT-specific channel model. For cellular links $L_{b,u}$, the received power accounts for large-scale path loss, small-scale fading, and co-channel interference from neighboring BSs. A generic distance-based attenuation is defined as the large-scale path loss component:

$$G(d) = K \cdot d^{-\alpha} \quad (1)$$

where $d$ is the BS-UE distance, $\alpha$ is the path-loss exponent, and $K$ is a scaling constant [16].
Small-scale fading is modeled as Rician to capture both Line-of-Sight (LoS) and non-LoS (NLoS) components. For a certain UE, let $h$ denote the complex fading coefficient experienced over the serving link, and $h_i$ denote the same for the interfering link $i$. The instantaneous cellular SINR is then expressed as:

$$\mathrm{SINR}_{6G} = \frac{P_t \cdot |h|^2}{\sum_{i \in \mathcal{I}} P_{t,i} \cdot |h_i|^2 + N_0 \cdot B} \quad (2)$$

where $\mathcal{I}$ is the set of interfering BSs, $P_t$ ($P_{t,i}$) is the transmit power of the serving BS (of cellular interferer $i$), $N_0$ is the noise power spectral density and $B$ is the link bandwidth.

Fig. 1: Multi-RAT HetNet with co-existing cellular/WiFi coverage. The dashed blue curve reflects a UE random-walk path.

The channel model of WiFi links $L_{a,u}$ follows an IEEE 802.11 channel model [17] that captures short-range indoor hotspot communications, reflecting multipath and wall penetration losses:

$$PL(d) = PL(d_0) + 10 \cdot \gamma \cdot \log_{10}\left(\frac{d}{d_0}\right) + \sum_j L_j \quad (3)$$

where $PL(d_0)$ stands for the reference path loss at distance $d_0 = 1$ m, $\gamma$ is the indoor path-loss exponent, and $L_j$ represents the additional loss due to obstacle $j$ (e.g., a wall) [17]. Small-scale fading and interference are omitted in this case, resulting in an SNR formula:

$$\mathrm{SNR}_{WiFi} = \frac{P_t \cdot |h|^2}{N_0 \cdot B} \quad (4)$$

UEs move along predefined two-dimensional (2D) trajectories $t \in \mathcal{S}_T$ with constant or piecewise-constant speed. Trajectories are selected to span diverse directions, turning points, and RAT overlaps, thereby inducing varied sequences of serving RATs. The sampling interval determines the number of trajectory points between start and end locations for a given UE speed.
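For concreteness, the per-sample link metrics of Eqs. (2)-(4) can be evaluated as in the following Python sketch. The numerical defaults (reference path loss, indoor exponent, wall losses) are illustrative placeholders, not the exact simulator parameters.

import numpy as np

def cellular_sinr_db(pt_w, h, interferers, n0_w_hz, bw_hz):
    # Eq. (2): serving power over co-channel interference plus thermal noise.
    signal = pt_w * abs(h) ** 2
    interference = sum(p_i * abs(h_i) ** 2 for p_i, h_i in interferers)
    return 10 * np.log10(signal / (interference + n0_w_hz * bw_hz))

def wifi_snr_db(pt_dbm, d_m, bw_hz, n0_dbm_hz=-174.0,
                pl_d0=40.0, gamma=3.0, wall_losses=(5.0,)):
    # Eqs. (3)-(4): IEEE 802.11 indoor path loss (d0 = 1 m) plus obstacle losses.
    pl = pl_d0 + 10 * gamma * np.log10(d_m) + sum(wall_losses)
    noise_dbm = n0_dbm_hz + 10 * np.log10(bw_hz)
    return (pt_dbm - pl) - noise_dbm

# Example: one serving cellular link with a single faded interferer, and a WiFi link.
sinr = cellular_sinr_db(1.0, 0.9 + 0.3j, [(1.0, 0.2 + 0.1j)],
                        n0_w_hz=4e-21, bw_hz=20e6)
snr = wifi_snr_db(pt_dbm=20.0, d_m=15.0, bw_hz=20e6)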
To this end, we developed a Python- based simulator for the 5G/WiFi HetNet that represents the targeted communication environment and exports model-ready sequential datasets. As outlined in Section III-A, we develop a simulator that es- tablishes communication between all available RAT endpoints and the UE while the latter moves along a predefined trajectory t ∈ST . For each trajectory, and at each discrete time index k, the simulator evaluates per-candidate serving transmitter metrics for all r ∈R, where R ≜SBS ∪SAP . Specifically, for every time instance of the user mobil- ity path and BS/AP candidate, the simulator computes the RSSI, the SINR, and the achieved throughput (denoted as TP). The resulting raw time series are temporally aligned across RATs and stored as tuples in the form of Mt r[k] = (IDt, RSSIr[k], SINRr[k], TPr[k]), where IDt is the identifier of trajectory t. All the other metrics refer to the RSSI, SINR and TP values measured for transmitter r ∈R at time instance k following trajectory t. This temporal alignment of the multi-source timeseries ensures that, for each time instance k, measurements across RATs refer to the same UE position/sample. Input LSTM LSTM Output Bidirectional LSTM Dense Dense ReLU + Dropout ReLU + Dropout Dense Fig. 2: LSTM structure for timeseries signal quality prediction. To create input-output combinations for subsequent model training, the raw sequences are then segmented into overlap- ping windows of fixed length W. For node r and time index k, the input feature vector is: xr[k] =  Mt r[k −W + 1], . . . , Mt r[k]  , ∀t ∈ST (7) where Mt r[k] is the metric report tuple for node r when UE follows trajectory t, and is used to make a prediction at time k. This means that, to produce a prediction at time k, the model receives a history window of the W previous metric report tuples. At time k, the one-step-ahead target yr[k] (i.e., the model’s prediction) is the (unknown) upcoming metric tuple of the next time instance, i.e., yr[k] = Mt r[k + 1]. The dataset is organized in a tabular form with columns IDt, r, xr[k] (model inputs), and yr[k] (model desired output), and exported into two separate (.csv) files for model training purposes. We train two models, including one for cellular BSs samples and one for APs samples. These datasets serve as inputs to the LSTM-based predictors described in Section IV-B. Standard sequence-model preprocessing steps (e.g., nor- malization, windowing, and train/test partitioning) are applied to ensure consistent scaling and robust evaluation. B. ML Model Architecture for Signal Quality Prediction We employ LSTM networks to forecast short-horizon signal quality for candidate BSs and APs. LSTMs, a class of recurrent neural networks, are designed to capture long- and short-range temporal dependencies in sequential data and are therefore well suited to regress wireless signal evolution under mobility and time-varying interference. Cellular BS predictor (BiLSTM): Cellular BS-UE links exhibit richer temporal structure due to path-loss variation in urban environments, blockage, and co-channel interference with adjacent BSs. To capture these dynamics, we use a deep BiLSTM. As shown in Fig. 2, the first block pro- cesses the input window with two stacked LSTM layers in both forward and backward directions, enabling the model to exploit information across the entire input sequence. The recurrent output is then fed to a stack of fully-connected dense layers interleaved with ReLU activations and dropout to avoid overfitting. 
B. ML Model Architecture for Signal Quality Prediction

We employ LSTM networks to forecast short-horizon signal quality for candidate BSs and APs. LSTMs, a class of recurrent neural networks, are designed to capture long- and short-range temporal dependencies in sequential data and are therefore well suited to regress wireless signal evolution under mobility and time-varying interference.

Cellular BS predictor (BiLSTM): Cellular BS-UE links exhibit richer temporal structure due to path-loss variation in urban environments, blockage, and co-channel interference with adjacent BSs. To capture these dynamics, we use a deep BiLSTM. As shown in Fig. 2, the first block processes the input window with two stacked LSTM layers in both forward and backward directions, enabling the model to exploit information across the entire input sequence. The recurrent output is then fed to a stack of fully-connected dense layers interleaved with ReLU activations and dropout to avoid overfitting. The output layer is a single linear neuron that emits the scalar next-step prediction \hat{y}_r[k] for the selected metric.

WiFi AP predictor (lightweight LSTM): L_{a,u} links typically exhibit shorter-range propagation with simpler temporal variability due to the absence of interference (no AP-BS interference). We therefore adopt a lightweight design with a single LSTM layer followed by one Dense layer that maps the last hidden state to the scalar output \hat{y}_r[k]. This design reduces the computational and memory footprint while retaining sufficient accuracy for short-horizon prediction.

Training and objective: Models are trained offline using a mean squared error (MSE)-based loss function. During online operation, a RAT Steering Controller queries the appropriate predictor (BiLSTM for BSs, lightweight LSTM for APs) to produce per-RAT short-horizon forecasts that feed the predictive P-CHO logic described below.
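The two designs can be sketched in Keras roughly as follows. Layer widths, dropout rates, and the optimizer are illustrative assumptions: Fig. 2 fixes only the overall layout (stacked bidirectional recurrence followed by dense/ReLU/dropout blocks for BSs, and a single LSTM layer plus one Dense layer for APs).

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Input, LSTM, Bidirectional, Dense, Dropout

def build_bs_predictor(W, F):
    """Deep BiLSTM for cellular BS links (layout of Fig. 2); widths are assumed."""
    return Sequential([
        Input(shape=(W, F)),
        Bidirectional(LSTM(64, return_sequences=True)),   # first stacked BiLSTM block
        Bidirectional(LSTM(32)),
        Dense(32, activation="relu"), Dropout(0.2),        # dense/ReLU/dropout stack
        Dense(16, activation="relu"), Dropout(0.2),
        Dense(1),                      # linear neuron -> scalar next-step prediction
    ])

def build_ap_predictor(W, F):
    """Lightweight single-layer LSTM for WiFi AP links."""
    return Sequential([
        Input(shape=(W, F)),
        LSTM(32),
        Dense(1),                      # maps last hidden state to scalar output
    ])

model = build_bs_predictor(W=9, F=3)
model.compile(optimizer="adam", loss="mse")   # offline MSE-based training
```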
C. Proposed P-CHO Sequence Diagram

Recent 3GPP releases introduce CHO, where preparation and execution are decoupled and execution occurs only when a condition is satisfied [7]. We extend CHO to P-CHO by deriving the condition from short-horizon, model-based SINR forecasts, thereby enabling proactive mobility, as shown in Fig. 3.

Prediction-conditioned trigger: Let \hat{s}_{tar}[k] and \hat{s}_{cur}[k] denote the one-step-ahead predicted SINR (or SNR for WiFi) for the candidate target RAT and the current serving RAT, respectively. The execution condition is:

\Delta_{QoS}[k] \triangleq \hat{s}_{tar}[k] - \hat{s}_{cur}[k] \geq \delta_{QoS} \qquad (8)

where δ_QoS is a configurable threshold that reflects whether the handover is soft (a low δ_QoS is sensitive to frequent switching) or hard (a high δ_QoS avoids ping-pong). By combining ML-based SINR prediction with handover sensitivity thresholds, the system adopts a proactive and predictive behavior, allowing early preparation of handover events while regulating handover sensitivity [18].

End-to-end workflow (mapped to Fig. 3): The P-CHO procedure comprises three stages:

(1) Data Collection: The UE transmits a Measurement Report (MR) via the serving BS/AP (RSRP, SINR/SNR, throughput). The RAT Steering Controller stores the MR in an MR timeseries database and associates it with the UE's historical MR sequence.

(2) SINR Prediction & Handover Decision: A batch feature assembly operation is then performed by the RAT Steering Controller, in which current MR features are combined with the UE's recent window to form inputs x_r[k] for each candidate RAT r ∈ R. The Intelligent Module then runs the pre-trained predictors (BiLSTM for BSs, lightweight LSTM for APs) in parallel to obtain \{\hat{s}^{(r)}[k]\} for r ∈ R. Optionally, model inference can be offloaded to a Graphical Processing Unit (GPU) to further speed up the process. Afterwards, the Decision Module evaluates the condition in (8) for each candidate. If the condition holds for some r^\star ∈ R (target BS/AP), it issues a P-CHO decision toward r^\star; otherwise, the UE remains on the serving RAT (a compact sketch of this decision logic is given at the end of this section).

Fig. 3: Proposed three-stage P-CHO sequence callflows (Data Collection; SINR Prediction and Handover Decision; Handover Execution).

(3) Handover Execution (conditional): Following a 3GPP-compliant procedure, based on the decision, the serving node sends a Handover Request to the target BS/AP r^\star. Upon request receipt, the target node performs admission control (resource availability/QoS checks) and responds with a Handover Request ACK on success. Then, the serving node issues a Radio Resource Control (RRC) Reconfiguration to the UE with the target parameters. Upon completion, the UE synchronizes to the target. Finally, a status notification is transferred (e.g., PDCP Sequence Numbers) to finalize context continuity and confirm handover success.
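A compact sketch of the Decision Module logic, combining the best-provider rule of (6) with the execution condition of (8); the function name and RAT identifiers are hypothetical.

```python
def pcho_decision(pred, serving, delta_qos=2.5):
    """Evaluate the P-CHO execution condition (8) over per-RAT forecasts.
    pred: dict mapping RAT id -> predicted SINR/SNR (dB) at k+tau.
    Returns the target RAT id, or None to stay on the serving RAT."""
    target = max(pred, key=pred.get)            # best predicted provider, eq. (6)
    gain = pred[target] - pred[serving]         # Delta_QoS[k], eq. (8)
    return target if target != serving and gain >= delta_qos else None

forecasts = {"BS1": 11.2, "BS2": 14.1, "AP3": 9.5}
print(pcho_decision(forecasts, serving="BS1"))  # -> "BS2" (gain 2.9 dB >= 2.5 dB)
```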
V. PERFORMANCE EVALUATION

We consider a heterogeneous multi-RAT Cellular/WiFi network with mobile users served by BSs or APs. As in the topology of Fig. 1, the cellular tier comprises two B5G NR BSs operating at 3.5 GHz with partially overlapping cells; inter-cell interference is present near the cell edges. Within each BS coverage, two IEEE 802.11 APs operate at 2.4 GHz on orthogonal channels (no co-channel interference across APs, and no BS-AP interference due to the different frequency bands). UEs traverse the topology along predefined trajectories at a constant speed of 3 m/s. As described in Sections III-A and III-B, the Python-based link/system simulator generates time-synchronized sequences of per-RAT measurements (RSSI, SINR/SNR, throughput) for downlink traffic.

SINR prediction is examined as a supervised timeseries regression task; specifically, short-horizon signal prediction is cast as a sequence-to-one regression problem (one-step-ahead unless otherwise stated). For each candidate RAT endpoint r, a sliding window of length W forms the input, and the next-sample value of the SINR forms the target (see (7)). A training/test set split of 80%/20% has been used in the training phase of all approaches, while the models were tuned via grid search over their hyperparameters to ensure optimal performance.

Fig. 4: Prediction RMSE versus look-back window size W for (a) the Cellular BSs model and (b) the WiFi APs model, considering 35 available UE paths.

Fig. 5: Prediction RMSE versus the number of UE paths NT for (a) the Cellular BSs model and (b) the WiFi APs model.

A. Prediction Error vs Look-back and Look-ahead Windows

Performance evaluation was conducted through a series of experiments under varying conditions and objectives. First, we examined the impact of the look-back window size W on the SINR prediction error (between actual and predicted SINR values of the testing set). As shown in Fig. 4, increasing the window generally improves performance. For BS models, the trend is consistent, with the best Root Mean Squared Error (RMSE) achieved at a window of W_BS = 9 past time steps. For AP models, the curve is U-shaped, indicating that long windows lead to overfitting; the optimal window was therefore set at W_AP = 7.

Next, we investigated the effect of the number of trajectories available for each UE. In general, adding more available random-walk paths increases the system stochasticity and makes the SINR prediction task more difficult. This is because, with a fixed topology, adding more trajectories increases route overlap among UEs, which tends to confuse the models and degrades their ability to discriminate among paths. As illustrated in Fig. 5, enlarging the training set with additional trajectories does not monotonically improve generalization under a fixed topology. For BS models, the RMSE remains relatively stable up to NT = 15 trajectories and then increases sharply beyond NT = 20. For AP models, the error decreases until NT = 15 trajectories (benefiting from the added variability) but rises thereafter, suggesting an SINR distribution shift. To balance adequate model performance against system stochasticity, we set the number of trajectories at NT = 20 for the rest of the simulations. Note that, to increase model performance under large values of NT, denser LSTM models (i.e., with more hidden layers) can be retrained for better accuracy.

Fig. 6: Prediction RMSE versus look-ahead horizon τ in multi-step prediction for (a) the Cellular BSs model and (b) the WiFi APs model.

Fig. 7: Prediction RMSE of the Direct and Recursive multi-step methods using look-ahead horizons τ = 2 and τ = 4, separately for (a) the Cellular BSs model and (b) the WiFi APs model.

We further evaluated the predictive capacity of the models by extending the forecast horizon τ to multiple time steps. Two approaches were considered: (i) Direct multi-step: the output layer is modified to emit τ = {1, 2, ..., 5} steps jointly (see (5) for H = 5), and the dataset is extended with targets y_r[k] = M^t_r[k+τ]. This means that the prediction y_r[k] made at time k refers to the estimated SINR value at time k+τ, with the models trained directly on the desired SINR values for k+τ. (ii) Recursive (iterative) prediction: the original single-step models are retained and used recursively, feeding each predicted value back into the input window for the next step (a sketch of this recursive loop is given at the end of this subsection).

As expected for the direct multi-step scheme, Fig. 6 confirms that increasing the look-ahead horizon τ generally degrades accuracy (for both BS and AP predictors) due to higher uncertainty. However, the error increment is moderate (particularly for the AP model), highlighting a practical accuracy-latency trade-off for decision making. This means that, if the look-ahead horizon is set to τ > 1 for early decisions, we may expect lower prediction accuracy, but the handover operation will be fast, without service downtime.

In Fig. 7, we contrast direct multi-step versus recursive forecasting at look-ahead τ = 2 and τ = 4 time steps. Direct multi-step yields consistently lower RMSE, with only a mild increase when moving from τ = 2 to τ = 4 for both BS and AP models. In contrast, the recursive scheme incurs substantially higher RMSE, roughly 3× for BSs and 4-5× for APs, relative to the direct scheme at the same horizon. Operationally, direct multi-step training requires storing multiple horizon-specific models at the RAT Steering Controller (per RAT), while the recursive scheme reuses a single one-step model, reducing storage at the expense of accuracy.
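A minimal sketch of the recursive scheme (ii): a single-step predictor is applied τ times, each output being appended to the sliding window, which is why forecast errors compound across steps. The toy persistence "model" below stands in for a trained LSTM and is purely illustrative.

```python
import numpy as np

def recursive_forecast(window, predict_one, tau):
    """Iterate a single-step model tau times, feeding each prediction back
    into the input window (scheme (ii)); errors compound across steps."""
    w = np.array(window, dtype=float)
    preds = []
    for _ in range(tau):
        y = predict_one(w)              # one-step-ahead estimate
        preds.append(y)
        w = np.append(w[1:], y)         # slide the window forward by one step
    return preds

# Toy single-step "model": persistence with a small drift (illustrative only).
predict_one = lambda w: w[-1] + 0.1
print(recursive_forecast([10.0, 10.5, 11.0], predict_one, tau=4))
```

The direct scheme instead trains a separate output head (or model) per horizon, so each prediction is made in one shot from real measurements only.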
B. Comparison against Baseline Signal Quality Predictors

Here we benchmark the proposed LSTM predictors against two classical baselines used for timeseries forecasting, namely the Autoregressive Integrated Moving Average (ARIMA) [19] and Extreme Gradient Boosting (XGBoost) [20] models, under two configurations of the number of UE trajectories (NT = 17 and NT = 35).

Fig. 8: Prediction RMSE of different SINR predictors using different numbers of trajectories NT for (a) the Cellular BSs models and (b) the WiFi APs models.

As shown in Fig. 8, for each model the RMSE is higher for NT = 35 due to the increased randomness in the mobility patterns. ARIMA exhibits higher error than the ML predictors across both RAT types and dataset scales, reflecting its linear/stationarity assumptions and limited ability to model non-linear interference and mobility effects in multi-RAT links. Moreover, XGBoost achieves slightly lower prediction error than the LSTM only for a low number of UE paths (NT = 17), especially for AP prediction where the short-range dynamics are simpler. However, its RMSE increases noticeably (relative to the LSTM) when NT = 35, indicating sensitivity to trajectory overlap and mobility randomness. In contrast, the LSTM predictors (i) maintain a comparable error for low NT, relative to XGBoost, and (ii) clearly exhibit the lowest RMSE as NT increases, supporting their use when mobility patterns become more stochastic. Overall, the comparative results of Fig. 8 suggest that the proposed P-CHO pipeline may use LSTM or XGBoost predictors when mobility patterns are deterministic, whereas LSTM models should be used for more stochastic UE mobility patterns.

C. Soft P-CHO versus Hysteresis-enabled P-CHO

We evaluate two alternative realizations of the proposed P-CHO framework, a Soft P-CHO and a Hysteresis-enabled P-CHO variant (see the sketch after the following definitions):

Fig. 9: Number of handover events as a function of the P-CHO threshold δ_QoS for the Soft and Hysteresis-enabled methods.

• Soft P-CHO: The RAT Steering Controller triggers a handover when the next-step predicted gain (see (8)) over the serving RAT exceeds the QoS threshold at the current step. This means that the UE will switch from the current to the target serving transmitter if the following Soft P-CHO condition is met:

\Delta_{QoS}[k+1] \triangleq \hat{s}_{tar}[k+1] - \hat{s}_{cur}[k+1] \geq \delta_{QoS} \qquad (9)

where \hat{s}_{tar}[k+1] and \hat{s}_{cur}[k+1] refer to the predicted SINR values of the target and the current RAT (respectively) at time instance k+1.

• Hysteresis-enabled P-CHO: The same condition must hold for N consecutive steps (here N ∈ {2, 3}) before executing the handover. This gives the following Hysteresis-enabled P-CHO condition:

\Delta_{QoS}[h] \geq \delta_{QoS}, \quad \forall h \in [k+1, k+N] \qquad (10)

where h is a hysteresis index (in steps) denoting for how many steps the target transmitter must comply with (9). The hysteresis mechanism can thus be interpreted as a short dwell-time guard that mitigates ping-pong effects.
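The two triggers can be sketched as follows. The counter-based dwell check is one plausible realization of the N-step condition in (10); the threshold values are illustrative.

```python
def soft_trigger(gain_next, delta_qos):
    """Soft P-CHO: fire as soon as the next-step predicted gain meets (9)."""
    return gain_next >= delta_qos

class HysteresisTrigger:
    """Hysteresis-enabled P-CHO, eq. (10): the predicted gain must hold
    for N consecutive steps before the handover is executed."""
    def __init__(self, N, delta_qos):
        self.N, self.delta_qos, self.count = N, delta_qos, 0

    def step(self, gain_next):
        self.count = self.count + 1 if gain_next >= self.delta_qos else 0
        return self.count >= self.N     # execute only after an N-step dwell

trig = HysteresisTrigger(N=3, delta_qos=2.5)
for g in [3.0, 2.7, 1.9, 2.6, 2.8, 3.1]:   # a transient dip resets the counter
    print(trig.step(g))                     # fires only at the last step
```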
For this section, to assess generalization capabilities, we test the handover performance (selected RAT, handover events, and stability) of both P-CHO variants on a testing UE trajectory that is unseen during the training of the SINR predictors. The selected UE trajectory represents a complete path across the two cellular BSs and three APs (see the selected UE path in Fig. 10).

In Fig. 9, we report the number of handover occurrences on the testing trajectory as a function of δ_QoS for both the soft and the hysteresis-enabled P-CHO variants. Increasing the P-CHO threshold from 0 dB to 3 dB reduces the total handovers for both schemes, reflecting more conservative decisions at higher margins and lower handover sensitivity levels. Across all P-CHO thresholds, the hysteresis-enabled scheme consistently yields fewer handovers than the soft P-CHO, demonstrating improved stability and reduced signaling without noticeable loss of connectivity along this path. This means that the hysteresis-enabled method adds a small delay in order to observe more stable and systematic handover gains, without reacting to short and transient advantages of a candidate RAT, suggesting a P-CHO behavior that suppresses ping-pong events.

Fig. 10: The selected serving RAT along the testing UE trajectory, color-coded per RAT transmitter. (a) Association based on the actual best-SINR server. (b) Association selected by the Soft P-CHO decisions. (c) Association selected by the Hysteresis-enabled P-CHO decisions.

To visualize the handover behavior of both methods, Fig. 10 depicts the selected serving RAT decisions (color-coded) along the testing trajectory, separately for the Soft and the Hysteresis-enabled P-CHO (with N = 3 steps of hysteresis). Both schemes are set at a P-CHO threshold of δ_QoS = 2.5 dB. The actual best-SINR servers are shown in Fig. 10(a), where it is evident that the best-SINR node frequently varies along the UE path due to dynamic downlink conditions and mobility. In Fig. 10(b), we show the UE-node association derived by directly following the condition in (9), while Fig. 10(c) shows the best RAT selected by following (10). Evidently, the Soft P-CHO exhibits short dwell segments and five switches in the overlap regions between RATs (e.g., near the transitions AP3→BS2 and BS2→AP3). This confirms its responsiveness but also its susceptibility to small transient SINR advantages that do not persist, reflecting classic ping-pong behavior when ∆_QoS varies around the threshold. The Hysteresis-enabled P-CHO, in contrast, maintains longer contiguous segments on the selected RAT and suppresses rapid back-and-forth changes around cell borders. Although this sometimes delays a switch, it eliminates non-beneficial changes caused by short fluctuations or prediction noise. Moreover, both policies converge to the same serving RAT over most of the trajectory, indicating that prediction-conditioned triggers preserve the correct association pattern. As a result, the hysteresis simply enforces temporal consistency before execution, leading to fewer and more decisive handovers. This translates into reduced signaling and a lower risk of service interruptions, at a small cost in reactivity that can be tuned via N and δ_QoS.
Overall, these results illustrate a clear trade-off: Soft P-CHO maximizes reactivity to predicted gains, while Hysteresis-enabled P-CHO maximizes stability by filtering out short-term advantages. Selecting (δ_QoS, N) provides a practical way to tailor the mobility policy to application needs (e.g., latency-sensitive flows may prefer smaller N, whereas throughput stability or control-plane efficiency may prefer larger N).

VI. CONCLUSIONS AND FUTURE EXTENSIONS

A. Summary and Conclusions

This paper introduced an ML-assisted conditional and dynamic handover framework for 6G multi-RAT deployments. The proposed P-CHO scheme integrates short-horizon, model-driven signal quality forecasts for all multi-RAT nodes to make proactive decisions. We designed RAT-aware predictors, including a BiLSTM for cellular BSs and a lightweight LSTM for WiFi APs, and integrated them into a generalized P-CHO workflow orchestrated by a RAT Steering Controller. The presented architecture standardizes multiple handover stages, including data collection, parallel model inference, and prediction-conditioned decision logic with hysteresis/dwell-time guards, thereby separating preparation from execution in a CHO-compliant manner. Using a Python-based system/link-level simulator for a representative hybrid cellular/WiFi HetNet, we quantified the impact of several system and model parameters. Results show that (i) appropriate prediction windowing improves accuracy, (ii) excessive UE trajectory overlap produces high system stochasticity, making signal quality predictions less accurate, and (iii) direct multi-step prediction is beneficial for long-term forecasts compared with recursive inference. Compared against the ARIMA and XGBoost baselines, the LSTM models deliver lower RMSE and degrade more gracefully as data complexity grows. Finally, the hysteresis-enabled P-CHO variant consistently reduces unnecessary handovers relative to the soft P-CHO method, improving service stability.

B. Potential Extensions

Several directions can extend this work. First, the proposed multi-RAT system can be integrated in an Open Radio Access Network (O-RAN) environment by mapping the RAT Steering Controller to the O-RAN Controllers and implementing the signal quality predictors and P-CHO rules as O-RAN xApps. Second, online/federated learning [21] can be incorporated to adapt to non-stationarity and privacy constraints. In this setup, local signal quality predictors from distributed multi-RAT nodes can be trained in a federated manner, without requiring data transfers toward a centralized Controller. Another extension is to jointly optimize prediction and policy via multi-objective control (QoS, energy per bit, HO latency), multi-step planning, or multi-agent RL for multi-UE coordination and admission control. Evaluation of long-term (multi-step) prediction horizons can also be performed for different UE mobility classes. Finally, to ensure low-latency inference of the proposed predictors at the edge, model compression, quantization, or distillation techniques may be adopted to reduce model size while maintaining adequate predictive accuracy.

ACKNOWLEDGMENT

The authors warmly thank the partners of the UNITY-6G and 6G-Cloud projects for their contribution to the architectural aspects of this article.

REFERENCES

[1] S. T. Spantideas, A. E. Giannopoulos, and P. Trakadas, "Smart mission critical service management: Architecture, deployment options, and experimental results," IEEE Transactions on Network and Service Management, vol. 22, no. 2, pp. 1108-1128, 2025.
[2] 3GPP Technical Specification Group, "Evolved universal terrestrial radio access (E-UTRA); Mobility enhancements in heterogeneous networks," 3rd Generation Partnership Project (3GPP), Tech. Rep. 3GPP TS 36.839 V11.0.0, 2012, Release 11. [Online]. Available: https://www.3gpp.org/DynaReport/36839.htm
[3] T. Sylla, L. Mendiboure, S. Maaloul, H. Aniss, M. A. Chalouf, and S. Delbruel, "Multi-connectivity for 5G networks and beyond: A survey," Sensors, vol. 22, no. 19, 2022. [Online]. Available: https://www.mdpi.com/1424-8220/22/19/7591
[4] I. A. Bartsiokas, P. K. Gkonis, A. K. Papazafeiropoulos, D. I. Kaklamani, and I. S. Venieris, "Federated learning for 6G HetNets' physical layer optimization: Perspectives, trends, and challenges," Encyclopedia of Information Science and Technology, Sixth Edition, pp. 1-28, 2025.
[5] W.-H. Yang, T. T. Phan, Y.-A. Chen, J.-X. Lin, and C.-Y. Li, "Smart handover between 5G and Wi-Fi over QUIC at network edge for maximizing throughput," in 2024 International Computer Symposium (ICS), 2024, pp. 127-132.
[6] P. Satapathy and J. Mahapatro, "An adaptive context-aware vertical handover decision algorithm for heterogeneous networks," Computer Communications, vol. 209, pp. 188-202, 2023. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S014036642300227X
[7] 3GPP Technical Specification Group, "5G; NR; NR and NG-RAN overall description; Stage-2," 3rd Generation Partnership Project (3GPP), Tech. Rep. 3GPP TS 38.300 V16.4.0, 2021, Release 16. [Online]. Available: https://www.3gpp.org/DynaReport/38300.htm
[8] A. Giannopoulos, S. Spantideas, P. Trakadas, J. Perez-Valero, G. Garcia-Aviles, and A. S. Gomez, "AI-driven self-healing in cloud-native 6G networks through dynamic server scaling," in 2025 IEEE 11th International Conference on Network Softwarization (NetSoft), 2025, pp. 43-48.
[9] A. Almeida, P. Rito, S. Brás, F. C. Pinto, and S. Sargento, "A machine learning approach to forecast 5G metrics in a commercial and operational 5G platform: 5G and mobility," Computer Communications, vol. 228, p. 107974, 2024. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0140366424003219
[10] A. Masri, T. Veijalainen, H. Martikainen, S. Mwanje, J. Ali-Tolppa, and M. Kajó, "Machine-learning-based predictive handover," in 2021 IFIP/IEEE International Symposium on Integrated Network Management (IM). IEEE, 2021, pp. 648-652.
[11] A. Baz, J. Logeshwaran, Y. Natarajan, and S. K. Patel, "Enhancing mobility management in 5G networks using deep residual LSTM model," Applied Soft Computing, vol. 165, p. 112103, 2024.
[12] Y. Song, S. H. Lim, and S.-W. Jeon, "Handover decision making for dense HetNets: A reinforcement learning approach," IEEE Access, vol. 11, pp. 24737-24751, 2023.
[13] C. Lee, H. Cho, S. Song, and J.-M. Chung, "Prediction-based conditional handover for 5G mm-wave networks: A deep-learning approach," IEEE Vehicular Technology Magazine, vol. 15, no. 1, pp. 54-62, 2020.
[14] J. V. Cervantes-Bazán, A. D. Cuevas-Rasgado, L. M. Rojas-Cárdenas, S. Lazcano-Salas, F. García-Lamont, L. A. Soriano, J. d. J. Rubio, and J. Pacheco, "Proactive cross-layer framework based on classification techniques for handover decision on WLAN environments," Electronics, vol. 11, no. 5, p. 712, 2022.
[15] W.-H. Yang, T. T. Phan, Y.-A. Chen, J.-X. Lin, and C.-Y. Li, "Smart handover between 5G and Wi-Fi over QUIC at network edge for maximizing throughput," in 2024 International Computer Symposium (ICS). IEEE, 2024, pp. 127-132.
[16] A. Giannopoulos, S. Spantideas, and P. Trakadas, "Spatiotemporal graph coloring for frequency assignment in spectrum-constrained ground-air-sea networks," Authorea Preprints, 2025.
[17] F. Capulli, C. Monti, M. Vari, and F. Mazzenga, "Path loss models for IEEE 802.11a wireless local area networks," in 2006 3rd International Symposium on Wireless Communication Systems. IEEE, 2006, pp. 621-624.
[18] H.-S. Park, H. Kim, C. Lee, and H. Lee, "Mobility management paradigm shift: From reactive to proactive handover using AI/ML," IEEE Network, vol. 38, no. 2, pp. 18-25, 2024.
[19] Y. Zheng, S. Ren, X. Xu, Y. Si, M. Dong, and J. Wu, "A modified ARIMA model for CQI prediction in LTE-based mobile satellite communications," in 2012 IEEE International Conference on Information Science and Technology, 2012, pp. 822-826.
[20] Y. Feng, L. Liu, and J. Shu, "A link quality prediction method for wireless sensor networks based on XGBoost," IEEE Access, vol. 7, pp. 155229-155241, 2019.
[21] A. E. Giannopoulos, S. T. Spantideas, M. Zetas, N. Nomikos, and P. Trakadas, "FedShip: Federated over-the-air learning for communication-efficient and privacy-aware smart shipping in 6G communications," IEEE Transactions on Intelligent Transportation Systems, vol. 25, no. 12, pp. 19873-19888, 2024.
Intelligent Dynamic Handover via AI-assisted Signal Quality Prediction in 6G Multi-RAT Networks Maria Lamprini A. Bartsioka R&D Department Four Dot Infinity Athens, Greece Anastasios Giannopoulos R&D Department Four Dot Infinity Athens, Greece Sotirios Spantideas R&D Department Four Dot Infinity Athens, Greece Abstract-The emerging paradigm of 6G multiple Radio Access Technology (multi-RAT) networks, where cellular and Wireless Fidelity (WiFi) transmitters coexist, requires mobility decisions that remain reliable under fast channel dynamics, interference, and heterogeneous coverage. Handover in multiRAT deployments is still highly reactive and event-triggered, relying on instantaneous measurements and threshold events. This work proposes a Machine Learning (ML)-assisted Predictive Conditional Handover (P-CHO) framework based on a modeldriven and short-horizon signal quality forecasts. We present a generalized P-CHO sequence workflow orchestrated by a RAT Steering Controller, which standardizes data collection, parallel per-RAT predictions, decision logic with hysteresis-based conditions, and CHO execution. Considering a realistic multi-RAT environment, we train RAT-aware Long Short Term Memory (LSTM) networks to forecast the signal quality indicators of mobile users along randomized trajectories. The proposed PCHO models are trained and evaluated under different channel models for cellular and IEEE 802.11 WiFi integrated coverage. We study the impact of hyperparameter tuning of LSTM models under different system settings, and compare direct multi-step versus recursive P-CHO variants. Comparisons against baseline predictors are also carried out. Finally, the proposed P-CHO is tested under soft and hard handover settings, showing that hysteresis-enabled P-CHO scheme is able to reduce handover failures and ping-pong events. Overall, the proposed P-CHO framework can enable accurate, low-latency, and proactive handovers suitable for ML-assisted handover steering in 6G multiRAT deployments. Index Terms-6G, WiFi, Handover, Machine Learning, Mobility Control, Multi-RAT Network, Radio Access Technology I. INTRODUCTION Fifth-generation (5G) and beyond (B5G) communication networks are being deployed worldwide to address the growing demands for massive device connectivity, high capacity, ultra-low latency, and seamless communication across diverse scenarios [1]. A key architectural approach to meet these requirements is the deployment of heterogeneous networks This work was supported in part by UNITY-6G Project, funded by the European Union's HORIZON-JU-SNS-2024 program, under Grant Agreement No 101192650; and in part by the 6G-Cloud Project funded from the European Union's HORIZON-JU-SNS-2023 program under Grant Agreement No 101139073. (HetNets). HetNets can be classified into two main categories: (i) those integrating multiple Radio Access Technologies (multi-RATs), and (ii) those comprising heterogeneous equipment types. Such architectures have been progressively incorporated into Third-Generation Partnership Project (3GPP) specifications starting from Release 11 [2] and beyond. In this work, the focus is on multi-RAT environments that enable the coexistence of B5G New Radio (NR) cellular infrastructures with complementary technologies such as Wireless Fidelity (WiFi) [3], [4]. Most modern User Equipments (UEs) can simultaneously connect to both WiFi and 5G. In such dualconnectivity scenarios, UEs typically prioritize WiFi connections to minimize cellular data charges. 
However, this often leads to throughput degradation in areas with obstacles or poor WiFi coverage. To overcome this issue, handover mechanisms are employed to dynamically switch between WiFi and cellular networks according to a predefined selection policy [5]. Traditional handover mechanisms between different RATs in current 5G/NR systems are typically event-driven and rely on instantaneous measurements of the Received Signal Strength Indicator (RSSI) that compare serving and neighbor signal levels with thresholds and time-to-trigger (TTT) timers [6]. Although conventional methods are simple and computationally efficient, they often result in suboptimal Quality of Service (QoS), especially in dynamic and heterogeneous environments. Conditional Handover (CHO), standardized in 3GPP Release 16 [7], pre-configures one or more target cells together with execution conditions. The final handover is executed once those conditions are met, reducing command latency and improving reliability under fast fading and blockage. However, both conventional and CHO remain fundamentally reactive, since decisions are taken when thresholds are crossed, not based on forecasted link quality or under rapidly changing interference in multi-RAT settings. Recently, Machine Learning (ML) and Deep Learning (DL) have emerged as promising enablers for intelligent mobility management, aiming to overcome these limitations and enable proactive handover decisions [8], [9]. Their ability to learn complex patterns makes them particularly suitable for predicting multi-source wireless 16 Oct 2025 signal quality. Furthermore, ML-based mobility policies can simultaneously exploit UE measurements, environmental conditions, and user-specific performance objectives, achieving a balance between connectivity and efficiency [10]. In this context, this paper proposes a Predictive CHO (PCHO) framework that supports ML-assisted dynamic RAT selection under mobility and interference conditions. The goal is to augment CHO with model-driven forecasts of near-future signal quality for candidate cells/RATs, turning execution conditions from static thresholds into time-aware predicates derived from predicted Signal-to-interference-plus-noise ratio (SINR) trajectories. By anticipating short-term degradations and opportunities (e.g., impending WiFi shadowing or intercellular interference), P-CHO enables earlier preparation of target links and smarter selection among multiple RATs. ML/DL models, especially sequence models such as Long Short-Term Memory (LSTM), are well-suited for this task because they (i) can capture temporal dependencies in radio dynamics, (ii) can provide accurate predictions of UE measurements, and, finally, (iii) generate actionable predictions for mobility control and traffic steering. The main contributions of this work are summarized as follows: • We formalize P-CHO for sixth-generation (6G) multiRAT networks by integrating short-horizon SINR forecasts into CHO execution logic, enabling proactive vertical and horizontal handovers across Cellular and WiFi RATs. • We design and train RAT-specific SINR forecasting models, including bidirectional LSTM (BiLSTM) for 6G Cellular BSs and a lightweight LSTM for WiFi APs, producing next-step and multi-step SINR estimates along stochastic user trajectories. 
• We implement a hybrid Cellular/WiFi HetNet simulator that generates mobility and channel-realistic datasets (path loss, shadowing, small-scale fading, and interference) while the UE traverses predefined routes in dense deployments. • We embed the SINR predictors into a RAT Steering Controller that combines multi-source UE measurement reports, infers pre-trained SINR prediction models, and converts SINR forecasts into steering execution conditions, leading to dynamic RAT-user associations. • We benchmark the proposed LSTM-based signal predictors against other ML-based models and traditional autoregressive models, demonstrating consistent improvements under dynamic interference and mobility. The rest of this manuscript is organized as follows: In Section II various previous research that has been done in this field is presented. Following is Section III, where the system model of a heterogeneous environment with the coexistence of 6G BSs and WiFi APs is formulated. In Section IV, the proposed ML/DL algorithms and P-CHO framework for intelligent signal prediction are presented, while in Section V the performance evaluation of the above is described. Finally, concluding remarks and future directions are provided in Section VI. II. RELATED WORK Mobility management in 5G/6G HetNets has drawn significant attention, particularly at the intersection of handover control and data-driven intelligence. Most existing studies analyze handover decision-making within 5G NR or across Long Term Evoluation (LTE)-5G, with increasing interest in predictive policies leveraging sequence models. In [11], a deep residual matrix LSTM was proposed to predict future user locations and trigger handover when the distance to the serving Base Station (BS) exceeds a predefined threshold. The method achieved an increased handover success rate and reduced latency, outperforming Reinforcement Learning (RL) and the Adaptive Cell Selection Algorithm (ACSA) schemes. While compelling, the approach was confined to intra-5G scenarios and did not consider multi-RAT interactions. A complementary direction is taken in [10], where an LSTM predicts the probability that each neighboring cell will have the highest Received Signal Strength Power (RSRP), formulating handover as a multi-class classification problem with a post-processing dynamic threshold to balance false positives/negatives. Evaluated in a simulated industrial 5G setup, the approach reduced radio link failures and mitigated ping-pong effects. However, the focus remained on 5G RSRP rather than cross-RAT signal dynamics. Learning-based policy optimization has also been explored beyond pure prediction. The authors in [12] studied dense HetNets with a deep Q-learning strategy that selects candidate BSs per time slot to optimize throughput, delay, and energy consumption. The RL method outperformed conventional A3 policies and decreased the energy usage (e.g., to 0.033,J/s in dynamic channels and 0.023,J/s in quasi-static channels), yet it did not explicitly forecast short-horizon signal evolution. In a related concept, the authors in [13] extended the 3GPP-based CHO framework for millimeter-wave (mmWave) 5G with a deep neural model trained on geographical blockage patterns and RSRP sequences, producing next-cell probabilities to enable early preparation. The reported accuracy (85%) and early-preparation success rate (98%) showcased the value of preparing handovers ahead of execution events, but the study remained single-RAT and RSRP-centric. 
Regarding WiFi-related works, the study in [14] proposed a proactive cross-layer handover framework for Wireless Local Area Networks (WLANs) that extracts network-layer features (RSSI, retransmissions, packet delay) to train ML classifiers distinguishing handover from non-handover states. The method reduces handover latency by approximately 200 ms and packet loss by over 30%. While effective, the study did not extend to heterogeneous cellular-WiFi coordination. Works explicitly targeting multi-RAT are relatively sparse. One notable effort is [15], which combines a Multi-acess Edge Computing (MEC)-based throughput estimator with Deep Reinforcement Learning (DRL) to learn discrepancies between estimated and actual throughput, thereby guiding the selection between 5G and WiFi transmitters and using Quick UDP Internet Connections (QUIC) scheme for migration. Nevertheless, the DRL agent requires periodical re-training once the network dynamics significantly change, leading to timeconsuming model adjustment periods. Prior studies either (i) operate within a single RAT with RSRP-based targets [10], [11], [13], (ii) optimize policies without explicit short-horizon signal forecasting [12], or (iii) treat multi-RAT selection primarily as throughput-driven migration [15] or WiFi-only handover [14]. To fill these gaps and extend the existing work to hybrid cellular/WiFi networks, this work targets predictive multi-RAT mobility by forecasting RAT-specific SINR along user trajectories and embedding these forecasts into a P-CHO logic. III. MULTI-RAT SYSTEM MODEL This section presents the proposed system model for proactive handover in a hybrid cellular/WiFi multi-RAT HetNet. Section III-A describes the considered network topology, system entities, channel models and user mobility patterns. Section III-B formulates the key problem elements for signal quality prediction-based handover. The core objective is to forecast the experienced time-varying SINR metrics for each RAT transmitter as users move across mobility trajectories and, then, determine the optimal target RAT. A. System Overview We consider a heterogeneous multi-RAT environment representative of B5G/6G deployments. The network comprises wide-area cellular BSs and overlapping short-coverage WiFi Access Points (APs), with mobile UEs traversing predefined paths within the multi-RAT coverage area. Fig. 1 illustrates an example layout in which user trajectories traverse overlapping coverage regions of multiple RATs. We define the following sets: (i) Set of cellular BSs SBS = {b1, b2, . . . , bNBS} (includes NBS BSs), (ii) Set of APs SAP = {a1, a2, . . . , aNAP } (includes NAP APs each one located within a specific BS's coverage), (iii) Set of UEs SUE = {u1, u2, . . . , uNUE} (includes NUE UEs moving with a certain velocity), and (iv) Set of UE paths ST = {t1, t2, . . . , tNT } (includes NT UE trajectories created as polynomial interpolations, representing random walks across the whole multi-RAT area). Two link types are considered, including (i) Lb,u for cellular BS-UE communication and (ii) La,u for AP-UE communication. Each link employs a RAT-specific channel model. For cellular links Lb,u, the received power accounts for largescale path loss, small-scale fading, and co-channel interference from neighboring BSs. A generic distance-based attenuation is defined as the large-scale path loss component : G(d) = K · d-α (1) where d is the BS-UE distance, α is the path-loss exponent, and K a scaling constant [16]. 
Small-scale fading is modeled as Rician to capture both Line-of-Sight (LoS) and non-LoS (NLoS) components. For a certain UE, let h denote the complex fading coefficient experienced over the serving link, and 6G Base Station WiFi Access Point 6G Link WiFi Link Fig. 1: Multi-RAT HetNet with co-existing cellular/WiFi coverage. Dashed blue curve reflects a UE random-walk path. hi denote the same for the interfering link i. The instantaneous cellular SINR is then expressed as: SINR6G = Pt · |h|2 P i∈I Pt,i · |hi|2 + N0 · B (2) where I is the set of interfering BSs, Pt (Pt,i) is the serving (cellular interferer's i) transmit power, N0 is the spectral density of the noise power and B is the link bandwidth. The channel model of WiFi links La,u follows an IEEE 802.11 channel model [17] that captures short-range indoor hotspot communications reflecting multipath and wall penetration losses: PL(d) = PL(d0) + 10 · γ · log10 d d0 + X j Lj (3) where PL(d0) stands for the reference path loss at distance d0 = 1 m, γ is the indoor path loss exponent, and Lj represents additional loss due to obstacle j (e.g. wall) [17]. Small-scale fading and interference are omitted in this case, resulting in an SNR formula: SNRW iF i = Pt · |h|2 N0 · B (4) UEs move along predefined two-dimensional (2D) trajectories t ∈ST with constant or piecewise-constant speed. Trajectories are selected to span diverse directions, turning points, and RAT overlaps, thereby inducing varied sequences of serving RATs. The sampling interval determines the number of trajectory points between start and end locations for a given UE speed. B. Handover Decision Problem Elements Given a trajectory t and the network state at discrete time k, let xk denote the feature vector (e.g., past per-RAT signal measurements, position/velocity values, RAT identifiers, or environmental indicators). The goal is to learn the following signal quality predictors: ˆs(r) k+τ = f (r)(xk), r ∈{SBS ∪SAP }, τ ∈{1, . . . , H}, (5) where ˆs(r) k+τ is the predicted signal quality indicator for time instance k+τ and RAT r, f (r) is the learned function for RAT r that maps historical to future signal quality metrics, and τ is the forecasting horizon (e.g., 3 steps ahead). Predictions ˆs(r) k+τ estimate short-horizon signal quality for each RAT using a historical feature vector xk of SINR6G values (for cellular BSs) and SNRWiFi values (for WiFi APs). At each step k, the RAT steering controller selects the target serving RAT as: r⋆ k = arg max r ˆs(r) k+τ (6) where the decision combines the predicted signal quality from all transmitters and selects the best-SINR provider. The final decision can be conditioned by rule-based policies (e.g., trigger a handover when a certain BS switch gain is met) using multistep predictors {ˆs(r) k+τ}H τ=1. IV. ML-ASSISTED PROACTIVE HANDOVER This section details the proposed scheme for intelligent proactive handover in a heterogeneous multi-RAT network using ML/DL models. We first describe the dataset construction pipeline, covering the simulation process, sequential data generation, and data organization, and then outline the ML/DL architecture employed for prediction. At the end of this section the proposed workflow is described in detail. A. Network Simulator and Dataset Construction Obtaining recent, real, and authenticated communication traces is challenging due to privacy policies and personal data regulations. Yet, ML/DL models require structured data for training and evaluation. 
To this end, we developed a Pythonbased simulator for the 5G/WiFi HetNet that represents the targeted communication environment and exports model-ready sequential datasets. As outlined in Section III-A, we develop a simulator that establishes communication between all available RAT endpoints and the UE while the latter moves along a predefined trajectory t ∈ST . For each trajectory, and at each discrete time index k, the simulator evaluates per-candidate serving transmitter metrics for all r ∈R, where R ≜SBS ∪SAP . Specifically, for every time instance of the user mobility path and BS/AP candidate, the simulator computes the RSSI, the SINR, and the achieved throughput (denoted as TP). The resulting raw time series are temporally aligned across RATs and stored as tuples in the form of Mt r[k] = (IDt, RSSIr[k], SINRr[k], TPr[k]), where IDt is the identifier of trajectory t. All the other metrics refer to the RSSI, SINR and TP values measured for transmitter r ∈R at time instance k following trajectory t. This temporal alignment of the multi-source timeseries ensures that, for each time instance k, measurements across RATs refer to the same UE position/sample. Input LSTM LSTM Output Bidirectional LSTM Dense Dense ReLU + Dropout ReLU + Dropout Dense Fig. 2: LSTM structure for timeseries signal quality prediction. To create input-output combinations for subsequent model training, the raw sequences are then segmented into overlapping windows of fixed length W. For node r and time index k, the input feature vector is: xr[k] = Mt r[k -W + 1], . . . , Mt r[k] , ∀t ∈ST (7) where Mt r[k] is the metric report tuple for node r when UE follows trajectory t, and is used to make a prediction at time k. This means that, to produce a prediction at time k, the model receives a history window of the W previous metric report tuples. At time k, the one-step-ahead target yr[k] (i.e., the model's prediction) is the (unknown) upcoming metric tuple of the next time instance, i.e., yr[k] = Mt r[k + 1]. The dataset is organized in a tabular form with columns IDt, r, xr[k] (model inputs), and yr[k] (model desired output), and exported into two separate (.csv) files for model training purposes. We train two models, including one for cellular BSs samples and one for APs samples. These datasets serve as inputs to the LSTM-based predictors described in Section IV-B. Standard sequence-model preprocessing steps (e.g., normalization, windowing, and train/test partitioning) are applied to ensure consistent scaling and robust evaluation. B. ML Model Architecture for Signal Quality Prediction We employ LSTM networks to forecast short-horizon signal quality for candidate BSs and APs. LSTMs, a class of recurrent neural networks, are designed to capture long- and short-range temporal dependencies in sequential data and are therefore well suited to regress wireless signal evolution under mobility and time-varying interference. Cellular BS predictor (BiLSTM): Cellular BS-UE links exhibit richer temporal structure due to path-loss variation in urban environments, blockage, and co-channel interference with adjacent BSs. To capture these dynamics, we use a deep BiLSTM. As shown in Fig. 2, the first block processes the input window with two stacked LSTM layers in both forward and backward directions, enabling the model to exploit information across the entire input sequence. The recurrent output is then fed to a stack of fully-connected dense layers interleaved with ReLU activations and dropout to avoid overfitting. 
The output is a linear neuron that outputs a scalar next-step prediction ˆyr[k] for the selected metric. WiFi AP predictor (lightweight LSTM): La,u links typically exhibit shorter-range propagation with simpler temporal variability due to absence of interference (no AP-BS interference). We therefore adopt a lightweight design with a single LSTM layer followed by one Dense layer that maps the last hidden state to the scalar output ˆyr[k]. This design reduces computational and memory footprint while retaining sufficient accuracy for short-horizon prediction. Training and objective: Models are trained offline using mean squared error (MSE)-based loss function. During online operation, a RAT Steering Controller queries the appropriate predictor (BiLSTM for BSs, lightweight LSTM for APs) to produce per-RAT short-horizon forecasts that feed the predictive P-CHO logic described below. C. Proposed P-CHO Sequence Diagram Recent 3GPP releases introduce CHO, where preparation and execution are decoupled and execution occurs only when a condition is satisfied [7]. We extend CHO to a P-CHO by deriving the condition from short-horizon, model-based SINR forecasts, thereby enabling proactive mobility, as shown in Fig. 3. Prediction-conditioned trigger: Let ˆstar[k] and ˆscur[k] denote the one-step-ahead predicted SINR (or SNR for WiFi) for the candidate target RAT and the current serving RAT, respectively. The execution condition is: ∆QoS[k] ≜ˆstar[k] -ˆscur[k] ≥δQoS (8) where δQoS is a configurable threshold that reflects whether the handover is soft (low δQoS is sensitive to frequent switching) or hard (high δQoS avoids ping-pong). By combining MLbased SINR prediction with handover sensitivity thresholds, the system adopts a proactive and predictive behavior, allowing early preparation of handover events while regulating handover sensitivity [18]. End-to-end workflow (mapped to Fig. 3): The P-CHO procedure comprises three stages: (1) Data Collection: The UE transmits a Measurement Report (MR) via the serving BS/AP (RSRP, SINR/SNR, throughput). The RAT Steering Controller stores the MR in an MR timeseries database and associates it with the UE's historical MR sequence. (2) SINR Prediction & Handover Decision: Then, batch feature assembly operation is performed by the RAT Steering Controller, in which current MR features are combined with the UE's recent window to form inputs xr[k] for each candidate RAT r ∈R. A parallel inference of SINR prediction models is then performed in the Intelligent Module. Specifically, it infers the pre-trained predictors (BiLSTM for BSs, lightweight LSTM for APs) in parallel to obtain {ˆs(r)[k]} for r ∈R. Optionally, model inference can be done on Graphical Processing Unit (GPU) to further speed up the process. Afterwards, the Decision Module evaluates the condition in (8) for each candidate. If the condition holds for some r⋆∈R (target BS/AP), it issues a P-CHO decision toward r⋆. Otherwise, the UE remains on the serving RAT. 1. UE Measurement Report (MR) Serving Cellular BS or WiFi AP UE Target Cellular BS or WiFi AP 2. UE MR Intelligent Module MR Database Decision Module RAT Steering Controller 3. CHO Decision a. Combine current with historical UE MRs b. Infer the models and get SINR predictions c. Decide for target handover 4. Handover Request 5. Handover Request ACK 6. RRC Reconfiguration 7. RRC Reconfiguration Completed Admission Control 8. Handover Success 9. PDCP SN status transfer Data Collection SINR Prediction and Handover Decision Handover Execution Fig. 
3: Proposed three-stage P-CHO sequence callflows. (3) Handover Execution (conditional): Following a 3GPPcompliant procedure, based on the decision, the serving node sends a Handover Request to the target BS/AP r⋆. Upon request receipt, the target node performs admission control (resource availability/QoS checks) and responds with Handover Request ACK on success. Then, the serving node issues Radio Resource Control (RRC) Reconfiguration to the UE with target parameters. Upon completion, the UE synchronizes to the target. Finally, the status notification is transferred (e.g., PDCP Sequence Numbers) to finalize context continuity and confirm handover success. V. PERFORMANCE EVALUATION We consider a heterogeneous multi-RAT Cellular/WiFi network with mobile users served by BSs or APs. As in the topology of Fig. 1, the cellular tier comprises two B5G NR BSs operating at 3.5 GHz with partially overlapping cells. Inter-cell interference is present near the cell edges. Within each BS coverage, two IEEE 802.11 APs operate at 2.4 GHz on orthogonal channels (no co-channel interference across APs, no BS-AP interference due to different frequency bands). UEs traverse the topology along predefined trajectories at a constant speed of 3 m/s. As described in Sections III-A and III-B, the Python-based link/system simulator generates time-synchronized sequences of per-RAT measurements (RSSI, SINR/SNR, throughput) for downlink traffic. The problem of SINR prediction is examined as a supervised timeseries regression problem. We consider shorthorizon signal prediction as a sequence-to-one regression problem (one-step-ahead unless otherwise stated). For each candidate RAT endpoint r, a sliding window of length W 2 4 6 8 10 Look-back Window (W) 8 9 10 Root Mean Squared Error (a) 2 4 6 8 10 Look-back Window (W) 2.4 2.6 2.8 3.0 Root Mean Squared Error (b) Fig. 4: Prediction RMSE versus Look-back Window size W for (a) the Cellular BSs model and (b) WiFi APs model, considering 35 available UE paths. 10 20 30 Number of UE trajectories (NT) 5 6 7 Root Mean Squared Error (a) 10 20 30 Number of UE trajectories (NT) 1.0 1.5 2.0 Root Mean Squared Error (b) Fig. 5: Prediction RMSE versus the number of UE paths NT for (a) the Cellular BSs model and (b) WiFi APs model. forms the input, and the next-sample value of the SINR forms the target (see (7)). A training/test set split of 80%/20% has been used in the training phase of all approaches, while the models were tuned via grid search on different hyperparameters to ensure optimal performance. A. Prediction Error vs Look-back and Look-ahead Windows Performance evaluation was conducted through a series of experiments under varying conditions and objectives. First, we examined the impact of the lookback window size W on SINR prediction error (between actual and predicted SINR values of the testing set). As shown in Fig. 4, increasing the window generally improves performance. For BS models, the trend is consistent, with the best Root Mean Squared Error (RMSE) achieved at a window of WBS = 9 past time steps. For AP models, the curve is U-shaped, indicating that long windows lead to overfitting. Thus, the optimal window was set at WAP = 7. Next, we investigated the effect of the number of trajectories available for each UE. In general, adding more available random-walk paths increases the system stochasticity and makes the SINR prediction task more difficult. 
This is because, with a fixed topology, adding more trajectories increases route overlap among UEs, which tends to confuse the models and degrades their ability to discriminate among paths. As illustrated in Fig. 5, enlarging the training set with additional trajectories does not monotonically improve generalization under a fixed topology. For BS models, RMSE remains relatively stable up to NT = 15 trajectories and then increases sharply beyond NT = 20. For AP models, error decreases until 1 2 3 4 5 Look-ahead Horizon ( ) 5.75 6.00 6.25 6.50 6.75 Root Mean Squared Error (a) 1 2 3 4 5 Look-ahead Horizon ( ) 1.6 1.7 1.8 Root Mean Squared Error (b) Fig. 6: Prediction RMSE versus Look-ahead Horizon τ in multi-step prediction for (a) the Cellular BSs model and (b) WiFi APs model. Direct ( = 2) Recursive ( = 2) Direct ( = 4) Recursive ( = 4) 0 5 10 15 Root Mean Squared Error (a) Direct ( = 2) Recursive ( = 2) Direct ( = 4) Recursive ( = 4) 0 2 4 6 8 Root Mean Squared Error (b) Fig. 7: Prediction RMSE of Direct and Recursive multi-step methods using look-ahead horizons τ = 2 and τ = 4, separately for (a) Cellular BSs model and (b) WiFi APs model. NT = 15 trajectories (benefiting from added variability) but rises thereafter, suggesting SINR distribution shift. To ensure a balance between adequate model performance and system stochasticity, we set the number of trajectories at NT = 20 for the rest of the simulations. Note that, to increase model performance under large values of T, more dense (i.e., with more hidden layers) LSTM models can be retrained for better accuracy outcomes. We further evaluated the predictive capacity of the models by extending the forecast horizon τ to multiple time steps. Two approaches were considered: (i) Direct multi-step: the output layer is modified to emit τ = {1, 2, . . . , 5} steps jointly (see (5) for H = 5), and the dataset is extended with targets yr[k] = Mt r[k +τ]. This means that the prediction yr[k] made at time k refers to the estimated SINR value at time k+τ, with the models directly trained using the SINR desired values for k +τ. (ii) Recursive (iterative) prediction: the original singlestep models are retained and used recursively, feeding each predicted value back into the input window for the next step. As expected, considering the direct multi-step scheme, Fig. 6 confirms that increasing the lookahead horizon τ generally degrades accuracy (for both BS and AP predictors) due to higher uncertainty. However, the error increment is moderate (particularly for the AP model), highlighting a practical accuracy-latency trade-off for decision making. This means that, if the look-ahead horizon is set to τ > 1 for early decisions, we may expect a lower prediction accuracy, but the handover operation will be fast without service downtime. In Fig. 7, we contrast direct multi-step versus recursive LSTM ARIMA GRU 0.0 2.5 5.0 7.5 10.0 12.5 Root Mean Squared Error NT = 17 NT = 35 (a) LSTM ARIMA GRU 0 2 4 6 8 10 Root Mean Squared Error NT = 17 NT = 35 (b) Fig. 8: Prediction RMSE of different SINR predictors using different number of trajectories NT for (a) the Cellurar BSs models and (b) WiFi APs models. forecasting at lookahead τ = 2 and τ = 4 time steps. Direct multi-step yields consistently lower RMSE, with only a mild increase when moving from τ = 2 to τ = 4 for both BS and AP models. In contrast, the recursive scheme incurs substantial RMSE by roughly 3× for BSs and 4-5× for APs, relative to the direct scheme at the same horizon. 
Operationally, direct multi-step training requires storing multiple horizon-specific models at the RAT Steering Controller (per RAT), while the recursive scheme reuses only one single-step model, reducing storage at the expense of accuracy.

B. Comparison against Baseline Signal Quality Predictors

Here we benchmark the proposed LSTM predictors against two classical baselines used for time-series forecasting, the Autoregressive Integrated Moving Average (ARIMA) [19] and Extreme Gradient Boosting (XGBoost) [20] models, under two configurations of the number of UE trajectories (NT = 17 and NT = 35).

Fig. 8: Prediction RMSE of different SINR predictors using different numbers of trajectories NT for (a) the Cellular BSs models and (b) WiFi APs models.

As shown in Fig. 8, for each model, the RMSE is higher for NT = 35 due to the increased randomness in the mobility patterns. ARIMA exhibits higher error than the ML predictors across both RAT types and dataset scales, reflecting its linear/stationary assumptions and limited ability to model non-linear interference and mobility effects in multi-RAT links. Moreover, XGBoost achieves slightly lower prediction error than LSTM only for a low number of UE paths (NT = 17), especially for AP prediction where short-range dynamics are simpler. However, its RMSE increases noticeably (relative to LSTM) when NT = 35, indicating sensitivity to trajectory overlap and mobility randomness. In contrast, the LSTM predictors (i) maintain a comparable error for low NT, relative to that of XGBoost, and (ii) clearly exhibit the lowest RMSE as NT increases, supporting their use when mobility patterns become more stochastic. Overall, the comparative results of Fig. 8 suggest that the proposed P-CHO pipeline may use LSTM or XGBoost predictors when mobility patterns are deterministic, whereas LSTM models should be used for more stochastic UE mobility patterns.

C. Soft P-CHO versus Hysteresis-enabled P-CHO

We evaluate two alternative realizations of the proposed P-CHO framework, a Soft P-CHO and a Hysteresis-enabled P-CHO variant; both trigger rules are also illustrated in the sketch after this list.

• Soft P-CHO: The RAT Steering Controller triggers a handover when the next-step predicted gain (see (8)) over the serving RAT exceeds the QoS threshold at the current step. This means that the UE will switch from the current to the target serving transmitter if the following Soft P-CHO condition is met:

$$\Delta_{\mathrm{QoS}}[k+1] \triangleq \hat{s}_{\mathrm{tar}}[k+1] - \hat{s}_{\mathrm{cur}}[k+1] \ge \delta_{\mathrm{QoS}} \quad (9)$$

where $\hat{s}_{\mathrm{tar}}[k+1]$ and $\hat{s}_{\mathrm{cur}}[k+1]$ refer to the predicted SINR values of the target and the current serving RAT, respectively, at time instance k+1.

• Hysteresis-enabled P-CHO: The same condition must hold for N consecutive steps (here N ∈ {2, 3}) before executing the handover. This gives the following Hysteresis-enabled P-CHO condition:

$$\Delta_{\mathrm{QoS}}[h] \ge \delta_{\mathrm{QoS}}, \qquad \forall h \in [k+1, k+N] \quad (10)$$

where h is a hysteresis variable (in steps) denoting for how many steps the target transmitter must comply with (9). The hysteresis mechanism can thus be interpreted as a short dwell-time guard that mitigates ping-pong effects.
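A minimal sketch of the two trigger rules in (9) and (10), assuming per-RAT predicted SINR sequences are already available from the predictors; the function and variable names are illustrative, not part of the framework's API.

```python
def soft_pcho(s_cur_hat: float, s_tar_hat: float, delta_qos: float) -> bool:
    """Soft P-CHO trigger, Eq. (9): hand over if the predicted gain of the
    target over the serving RAT exceeds the QoS threshold."""
    return (s_tar_hat - s_cur_hat) >= delta_qos

def hysteresis_pcho(s_cur_hats, s_tar_hats, delta_qos: float, N: int) -> bool:
    """Hysteresis-enabled P-CHO trigger, Eq. (10): condition (9) must hold
    for N consecutive predicted steps k+1, ..., k+N."""
    return all(soft_pcho(c, t, delta_qos)
               for c, t in zip(s_cur_hats[:N], s_tar_hats[:N]))

# Example: predicted SINR (dB) for the next N = 3 steps, threshold 2.5 dB.
print(hysteresis_pcho([10.0, 10.2, 10.1], [13.0, 13.1, 12.9],
                      delta_qos=2.5, N=3))  # -> True
```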
For this section, to assess generalization capabilities, we test the handover performance (selected RAT, handover events and stability) of both P-CHO variants, considering a testing UE trajectory that is unseen during the training of the SINR predictors. The selected UE trajectory represents a complete path across the two cellular BSs and three APs (see the selected UE path in Fig. 10).

Fig. 9: Number of Handover events as a function of the P-CHO Threshold δQoS for the Soft and Hysteresis-enabled methods.

In Fig. 9, we report the number of handover occurrences on the testing trajectory as a function of δQoS for both the soft and hysteresis-enabled P-CHO variants. Increasing the P-CHO threshold from 0 dB to 3 dB reduces the total handovers for both schemes, reflecting more conservative decisions at higher margins and lower handover sensitivity levels. Across all P-CHO thresholds, the hysteresis-enabled scheme consistently yields fewer handovers than the soft P-CHO, demonstrating improved stability and reduced signaling without noticeable loss of connectivity along this path. This means that the hysteresis-enabled method adds a small delay to observe more stable and systematic handover gains, without reacting to short and transient advantages of a candidate RAT, which suggests a P-CHO behavior that suppresses ping-pong events.

Fig. 10: The selected serving RAT along the testing UE trajectory, color-coded per RAT transmitter. (a) Association based on the actual best-SINR server. (b) Association selected by the Soft P-CHO decisions. (c) Association selected by the Hysteresis-enabled P-CHO decisions.

To visualize the handover behavior of both methods, Fig. 10 depicts the selected serving RAT decisions (color-coded) along the testing trajectory, separately for the Soft and Hysteresis-enabled P-CHO (for N = 3 steps of hysteresis). Both schemes are set at a P-CHO threshold of δQoS = 2.5 dB. The actual best-SINR servers are shown in Fig. 10(a), where it is evident that the best-SINR node frequently varies along the UE path due to dynamic downlink conditions and mobility. In Fig. 10(b), we show the UE-node association derived by directly following the condition in (9), while Fig. 10(c) shows the best RAT selected by following (10). First, the Soft P-CHO exhibits short dwell segments and five switches in the overlap regions between RATs (e.g., near the transitions AP3→BS2 and BS2→AP3). This confirms its responsiveness but also its susceptibility to small transient SINR advantages that do not persist, reflecting classic ping-pong behavior when ∆QoS varies around the threshold. Second, the Hysteresis-enabled P-CHO maintains longer contiguous segments on the selected RAT and suppresses rapid back-and-forth changes around cell borders. Although this sometimes delays a switch, it eliminates non-beneficial changes caused by short fluctuations or prediction noise. Third, both policies converge to the same serving RAT over most of the trajectory, indicating that prediction-conditioned triggers preserve the correct association pattern. As a result, the hysteresis simply enforces temporal consistency before execution, leading to fewer and more decisive handovers. This translates into reduced signaling and a lower risk of service interruptions, at a small cost in reactivity that can be tuned via N and δQoS. Overall, these results illustrate a clear trade-off: Soft P-CHO maximizes reactivity to predicted gains, while Hysteresis-enabled P-CHO maximizes stability by filtering out short-term advantages.
Selecting (δQoS, N) provides a practical way to tailor the mobility policy to application needs (e.g., latency-sensitive flows may prefer smaller N, whereas throughput stability or control-plane efficiency may prefer larger N).

VI. CONCLUSIONS AND FUTURE EXTENSIONS

A. Summary and Conclusions

This paper introduced an ML-assisted conditional and dynamic handover framework for 6G multi-RAT deployments. The proposed P-CHO scheme integrates short-horizon, model-driven signal quality forecasts for all multi-RAT nodes to make proactive decisions. We designed RAT-aware predictors, including a BiLSTM for cellular BSs and a lightweight LSTM for WiFi APs, and integrated them into a generalized P-CHO workflow orchestrated by a RAT Steering Controller. The presented architecture standardizes multiple handover stages, including data collection, parallel model inference, and prediction-conditioned decision logic with hysteresis/dwell-time guards, thereby separating preparation from execution in a CHO-compliant manner. Using a Python-based system/link-level simulator for a representative hybrid cellular/WiFi HetNet, we quantified the impact of several system and model parameters. Results show that (i) appropriate prediction windowing improves accuracy, (ii) excessive UE trajectory overlap produces high system stochasticity, making signal quality predictions less accurate, and (iii) direct multi-step prediction for long-term forecasts is beneficial compared with recursive inference. Compared against the ARIMA and XGBoost baselines, the LSTM models deliver lower RMSE and degrade more gracefully as data complexity grows. Finally, the hysteresis-enabled P-CHO variant consistently reduces unnecessary handovers relative to the soft P-CHO method, improving service stability.

B. Potential Extensions

Several directions can extend this work. First, the proposed multi-RAT system can be integrated in an Open Radio Access Network (O-RAN) environment by mapping the RAT Steering Controller to the O-RAN Controllers and implementing the signal quality predictors and P-CHO rules as O-RAN xApps. Second, online/federated learning [21] can be incorporated to adapt to non-stationarity and privacy constraints. In this setup, local signal quality predictors from distributed multi-RAT nodes can be trained in a federated manner, without requiring data transfers toward a centralized Controller. Another extension is to jointly optimize prediction and policy via multi-objective control (QoS, energy per bit, HO latency), multi-step planning, or multi-agent RL for multi-UE coordination and admission control. Evaluation of long-term (multi-step) prediction horizons can also be performed for different UE mobility classes. Finally, to ensure low-latency inference of the proposed predictors at the edge, model compression, quantization or distillation techniques may be adopted to reduce model size while maintaining adequate predictive accuracy.

ACKNOWLEDGMENT

The authors warmly thank the partners of the UNITY-6G and 6G-Cloud projects for their contributions to the architectural aspects of this article.

REFERENCES

[1] S. T. Spantideas, A. E. Giannopoulos, and P. Trakadas, "Smart mission critical service management: Architecture, deployment options, and experimental results," IEEE Transactions on Network and Service Management, vol. 22, no. 2, pp. 1108-1128, 2025.
Group, "Evolved universal terrestrial radio access (e-utra); mobility enhancements in heterogeneous networks," 3rd Generation Partnership Project (3GPP), Tech. Rep. 3GPP TS 36.839 V11.0.0, 2012, release 11. [Online]. Available: https://www.3gpp.org/DynaReport/36839.htm [3] T. Sylla, L. Mendiboure, S. Maaloul, H. Aniss, M. A. Chalouf, and S. Delbruel, "Multi-connectivity for 5g networks and beyond: A survey," Sensors, vol. 22, no. 19, 2022. [Online]. Available: https://www.mdpi.com/1424-8220/22/19/7591 [4] I. A. Bartsiokas, P. K. Gkonis, A. K. Papazafeiropoulos, D. I. Kaklamani, and I. S. Venieris, "Federated learning for 6g hetnets' physical layer optimization: Perspectives, trends, and challenges," Encyclopedia of Information Science and Technology, Sixth Edition, pp. 1-28, 2025. [5] W.-H. Yang, T. T. Phan, Y.-A. Chen, J.-X. Lin, and C.-Y. Li, "Smart handover between 5g and wi-fi over quic at network edge for maximizing throughput," in 2024 International Computer Symposium (ICS), 2024, pp. 127-132. [6] P. Satapathy and J. Mahapatro, "An adaptive context-aware vertical handover decision algorithm for heterogeneous networks," Computer Communications, vol. 209, pp. 188-202, 2023. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S014036642300227X [7] G. T. S. Group, "5g; nr; nr and ng-ran overall description; stage-2," 3rd Generation Partnership Project (3GPP), Tech. Rep. 3GPP TS 38.300 V16.4.0, 2021, release 16. [Online]. Available: https://www.3gpp.org/DynaReport/38300.htm [8] A. Giannopoulos, S. Spantideas, P. Trakadas, J. Perez-Valero, G. GarciaAviles, and A. S. Gomez, "Ai-driven self-healing in cloud-native 6g networks through dynamic server scaling," in 2025 IEEE 11th International Conference on Network Softwarization (NetSoft), 2025, pp. 43-48. [9] A. Almeida, P. Rito, S. Br ́as, F. C. Pinto, and S. Sargento, "A machine learning approach to forecast 5g metrics in a commercial and operational 5g platform: 5g and mobility," Computer Communications, vol. 228, p. 107974, 2024. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0140366424003219 [10] A. Masri, T. Veijalainen, H. Martikainen, S. Mwanje, J. Ali-Tolppa, and M. Kaj ́o, "Machine-learning-based predictive handover," in 2021 IFIP/IEEE International Symposium on Integrated Network Management (IM). IEEE, 2021, pp. 648-652. [11] A. Baz, J. Logeshwaran, Y. Natarajan, and S. K. Patel, "Enhancing mobility management in 5g networks using deep residual lstm model," Applied Soft Computing, vol. 165, p. 112103, 2024. [12] Y. Song, S. H. Lim, and S.-W. Jeon, "Handover decision making for dense hetnets: A reinforcement learning approach," IEEE Access, vol. 11, pp. 24 737-24 751, 2023. [13] C. Lee, H. Cho, S. Song, and J.-M. Chung, "Prediction-based conditional handover for 5g mm-wave networks: A deep-learning approach," IEEE Vehicular Technology Magazine, vol. 15, no. 1, pp. 54-62, 2020. [14] J. V. Cervantes-Baz ́an, A. D. Cuevas-Rasgado, L. M. Rojas-C ́ardenas, S. Lazcano-Salas, F. Garc ́ıa-Lamont, L. A. Soriano, J. d. J. Rubio, and J. Pacheco, "Proactive cross-layer framework based on classification techniques for handover decision on wlan environments," Electronics, vol. 11, no. 5, p. 712, 2022. [15] W.-H. Yang, T. T. Phan, Y.-A. Chen, J.-X. Lin, and C.-Y. Li, "Smart handover between 5g and wi-fi over quic at network edge for maximizing throughput," in 2024 International Computer Symposium (ICS). IEEE, 2024, pp. 127-132. [16] A. Giannopoulos, S. Spantideas, and P. 
Trakadas, "Spatiotemporal graph coloring for frequency assignment in spectrum-constrained ground-airsea networks," Authorea Preprints, 2025. [17] F. Capulli, C. Monti, M. Vari, and F. Mazzenga, "Path loss models for ieee 802.11 a wireless local area networks," in 2006 3rd International Symposium on Wireless Communication Systems. IEEE, 2006, pp. 621624. [18] H.-S. Park, H. Kim, C. Lee, and H. Lee, "Mobility management paradigm shift: From reactive to proactive handover using ai/ml," IEEE Network, vol. 38, no. 2, pp. 18-25, 2024. [19] Y. Zheng, S. Ren, X. Xu, Y. Si, M. Dong, and J. Wu, "A modified arima model for cqi prediction in lte-based mobile satellite communications," in 2012 IEEE International Conference on Information Science and Technology, 2012, pp. 822-826. [20] Y. Feng, L. Liu, and J. Shu, "A link quality prediction method for wireless sensor networks based on xgboost," IEEE Access, vol. 7, pp. 155 229-155 241, 2019. [21] A. E. Giannopoulos, S. T. Spantideas, M. Zetas, N. Nomikos, and P. Trakadas, "Fedship: Federated over-the-air learning for communication-efficient and privacy-aware smart shipping in 6g communications," IEEE Transactions on Intelligent Transportation Systems, vol. 25, no. 12, pp. 19 873-19 888, 2024.
arXiv:2510.14836v1 [cs.CV] 16 Oct 2025
QDepth-VLA: Quantized Depth Prediction as Auxiliary Supervision for Vision–Language–Action Models

Yixuan Li, School of Artificial Intelligence, University of the Chinese Academy of Sciences, Beijing, China, liyixuan223@mails.ucas.ac.cn
Yuhui Chen, Institute of Automation, Chinese Academy of Sciences, Beijing, China, chenyuhui2022@ia.ac.cn
Mingcai Zhou, Institute of Automation, Chinese Academy of Sciences; Beijing Zhongke Huiling Robot Technology Co, Beijing, China, mingcai.zhou@ia.ac.cn
Haoran Li*, Institute of Automation, Chinese Academy of Sciences, Beijing, China, lihaoran2015@ia.ac.cn
*Corresponding author.

ABSTRACT
Spatial perception and reasoning are crucial for Vision–Language–Action (VLA) models to accomplish fine-grained manipulation tasks. However, existing approaches often lack the ability to understand and reason over the essential 3D structures necessary for precise control. To address this limitation, we propose QDepth-VLA, a general framework that augments VLA models with an auxiliary depth prediction task. A dedicated depth expert is designed to predict quantized latent tokens of depth maps obtained from a VQ-VAE encoder, enabling the model to learn depth-aware representations that capture critical geometric cues. Experimental results on simulation benchmarks and real-world tasks demonstrate that QDepth-VLA yields strong spatial reasoning and competitive performance on manipulation tasks.

KEYWORDS
Vision–Language–Action models, Quantized depth prediction, Spatial reasoning, Robotic manipulation

1 INTRODUCTION
Large vision–language–action (VLA) models [4, 8, 9, 16] have recently emerged as a powerful paradigm for robotic learning. By grounding pre-trained vision–language models (VLMs) [2, 31, 36] with action-generation capabilities, robots acquire strong generalization across diverse instructions and visual contexts. However, when applied to long-horizon or fine-grained manipulation tasks, these models often exhibit substantial performance degradation [12, 30, 37, 46]. The primary reason lies in a persistent gap between semantic understanding and geometric reasoning [13, 14].

Without reliable 3D understanding, VLAs often misestimate object positions or gripper–object relations, leading to cascading errors during manipulation [30]. Therefore, several recent works have explored incorporating geometric information into VLA models to enable a deeper understanding of the 3D physical environment. These approaches can be broadly grouped into three paradigms: direct 3D feature injection, 2D-projected 3D feature integration, and auxiliary 3D information prediction. The first category injects encoded 3D representations, such as point clouds [18] or depth maps [3], into the vision–language backbone or the action head. This strategy typically requires an additional encoder to process 3D features, increasing model complexity and computational cost. While providing explicit geometric cues, it may disrupt the powerful 2D priors learned during large-scale VLM pretraining, leading to degraded visual–language reasoning and understanding. The second category projects 3D features into 2D representations and feeds them into the VLM [19]. Although this preserves pretrained 2D priors, it inevitably introduces information loss in the projection process, which can hinder fine-grained manipulation performance.
Compared to these two paradigms, enhancing geometric understanding through auxiliary visual prediction tasks, such as future depth map estimation [42], offers a more promising alternative. This approach not only preserves the strong 2D priors of pretrained VLMs, but also avoids the need for additional sensory inputs during inference, while encouraging the model to learn 3D-consistent spatial reasoning.

However, existing works that employ depth-map-based visual prediction as auxiliary tasks [42] have not achieved consistent performance improvements, and in some cases even indicate that introducing depth prediction as an auxiliary loss can be detrimental to policy learning due to noisy supervision and weak geometric grounding. The key challenges lie in three aspects. Firstly, the supervision quality of depth maps is often limited by insufficient spatial–temporal consistency across frames [10, 38], introducing substantial noise that weakens geometric grounding. Secondly, pixel-wise depth regression produces highly redundant learning signals, forcing the model to reconstruct every pixel rather than focusing on the salient structural cues essential for manipulation. Thirdly, using a vision–language backbone to predict depth maps may interfere with its pre-trained semantic alignment, potentially degrading multimodal reasoning performance.

To address these challenges, we propose QDepth-VLA, which augments large VLAs by introducing quantized depth prediction as an auxiliary supervision signal. Instead of regressing pixel-wise depth values, QDepth-VLA learns discrete depth representations through vector quantization, capturing salient structural information in a compact and optimization-friendly manner. An independent depth expert is also introduced to predict these quantized depth tokens, enabling the model to leverage geometric cues without interfering with the vision–language backbone's pretrained semantic alignment. Our main contributions are summarized as follows:

(1) We introduce QDepth-VLA, a novel VLA model enhanced with quantized depth information. By integrating a depth prediction task, it internalizes geometric understanding, enabling more accurate reasoning about object spatial relationships.
(2) To facilitate more robust depth learning, we design a specialized Depth Expert that predicts quantized depth tokens rather than raw pixel-level depth maps. This formulation effectively mitigates the impact of depth noise and provides a more compact, optimization-friendly supervision signal for geometry-aware policy learning.
(3) Comprehensive experiments on both the Simpler [20] and LIBERO [24] benchmarks demonstrate that QDepth-VLA substantially enhances policy performance, outperforming open π0 [29] by 6.1% and 7.7% on average success rate, respectively. Moreover, QDepth-VLA achieves a 10.0% improvement in real-world robotic manipulation, validating its effectiveness and generalizability.

2 RELATED WORKS

2.1 3D-Enhanced VLA
3D spatial information has been widely explored to overcome the limitations of purely 2D-based models. Early efforts typically enhanced spatial perception by either lifting 2D inputs into 3D [13–15] or directly fusing 2D visual features with 3D point clouds [22, 28, 39].
While these approaches demonstrate that incorporating 3D signals can significantly improve spatial perception and action precision, directly fusing 3D and 2D representations or relying solely on 3D features can disrupt the visual–language alignment established in large-scale VLM pretraining. To mitigate this, two alternative directions have been proposed: (1) projecting 3D features into 2D space, as in BridgeVLA [19], which renders 3D inputs into multi-view 2D images for compatibility with VLMs; (2) using independent 3D encoders to encode geometric information for integration into the action head. This paradigm is employed by PointVLA [18] and GeoVLA [32], where specialized point cloud encoders supply 3D embeddings to modality-specific experts.

Despite these advances, point cloud reconstruction may lose fine-grained object details, and the modality gap between 2D RGB pretraining and 3D geometry remains a persistent challenge. By contrast, depth maps exhibit a much smaller gap with RGB images and thus offer a more natural bridge between 2D and 3D. Recent depth-based approaches have demonstrated this advantage. 3D-CAVLA [3] integrates Region of Interest (RoI) pooling with depth embeddings projected into the VLM token space, achieving extraordinary multi-view performance, while 4D-VLA [41] augments visual inputs with 3D coordinate embeddings to support both spatial alignment and temporal reasoning.

Motivated by these insights, we adopt depth maps as the 3D augmentation source. Crucially, instead of directly fusing them with RGB features, which risks interfering with pre-trained VLM semantics, we reformulate depth as an auxiliary prediction task. This design enables QDepth-VLA to move beyond passive depth perception toward depth understanding, a capability we elaborate on in the next section.

2.2 Auxiliary Visual Reasoning Tasks for VLA
While depth maps offer a natural bridge between 2D and 3D for enhancing spatial grounding, another promising direction is to strengthen the reasoning capacity of VLAs through auxiliary visual prediction tasks. Instead of passively mapping inputs to actions, policies can be trained to output intermediate signals that make future-oriented reasoning explicit, thereby providing richer supervision during training and improving long-horizon planning at inference.

A series of works focus on predicting future sub-goals, such as generating sub-goal images or short rollouts that visualize task progress. This strategy, as exemplified by CoT-VLA [43], enhances temporal reasoning by conditioning action generation on both current and predicted states, but incurs high computational cost due to the difficulty of synthesizing realistic RGB predictions. Other studies [9, 16] introduce object-centric signals, such as bounding boxes or spatial relations, which provide structured knowledge of entities and their interactions. More recently, latent future embeddings have been explored, where discrete action tokens predicted in a compressed latent space encode upcoming intentions. AgiBot World Colosseo [1] and UniVLA [7] exemplify this paradigm, showing scalability through large-scale human video pretraining, yet such latent predictions often lack explicit 3D grounding and struggle to capture fine-grained geometry. Finally, some approaches turn to pixel-level 3D supervision, predicting dense depth or semantic maps to reinforce geometric awareness, as in 3D-VLA [45] and DreamVLA [42]. While sometimes effective for strengthening spatial reasoning, these signals are difficult to optimize directly and may overemphasize redundant low-level cues rather than the most relevant spatial structures.

Different from previous works, our approach unifies 3D information enhancement and visual reasoning by introducing depth codebook prediction, an auxiliary task that brings 3D cues into reasoning in a compact and semantically meaningful way, while remaining naturally aligned with language-conditioned action policies.

3 METHODOLOGY

3.1 Depth Annotation
Since existing VLA datasets such as the OXE dataset [27] lack sufficient 3D annotations, we first generate monocular depth estimates for training. To ensure high-quality and spatially–temporally consistent depth sequences, we employ Video-Depth-Anything (ViDA) [10], the current state-of-the-art monocular video depth estimation framework built upon a ViT-Large backbone, to acquire depth maps. Specifically, ViDA is applied to the main-view RGB frames from a subset of the OXE [27] and LIBERO [24] datasets to obtain temporally aligned relative depth annotations, providing reliable geometric supervision for depth tokenization and subsequent model training.

3.2 VQ-VAE Reconstruction
To represent depth compactly, we pretrain a Vector-Quantized Variational Autoencoder (VQ-VAE) [34]. Given a depth frame x, the encoder $f_\theta(\cdot)$ produces a latent $z_e = f_\theta(x)$, which is quantized to the
While sometimes effective for strengthening spa- tial reasoning, these signals are difficult to optimize directly and may overemphasize redundant low-level cues rather than the most relevant spatial structures. Different from previous works, our approach unifies 3D infor- mation enhancement and visual reasoning by introducing depth codebook prediction—an auxiliary task that brings 3D cues into reasoning in a compact and semantically meaningful way, while re- maining naturally aligned with language-conditioned action policies. 3 METHODOLOGY 3.1 Depth Annotation Since existing VLA datasets such as OXE dataset [27] lack sufficient 3D annotations, we first generate monocular depth estimates for training. To ensure high-quality and spatial–temporal consistent depth sequences, we employ Video-Depth-Anything (ViDA) [10], the current state-of-the-art monocular video depth estimation framework built upon a ViT-Large backbone, to acquire depth maps. Specifically, ViDA is applied to the main-view RGB frames from a subset of the OXE [27] and LIBERO [24] datasets to obtain temporally aligned relative depth annotations, providing reliable geometric supervision for depth tokenization and subsequent model training. 3.2 VQ-VAE Reconstruction To represent depth compactly, we pretrain a Vector-Quantized Vari- ational Autoencoder (VQ-VAE) [34]. Given a depth frame x, the encoder 𝑓𝜃(·) produces a latent z𝑒= 𝑓𝜃(x), which is quantized to the Figure 1: An overview of QDepth-VLA. (a) The overall architecture and training pipeline, where depth supervision is incorporated via a depth expert and latent prediction module. In co-training, the VQ-VAE [34] encoder and codebook are frozen, while PaLI-Gemma 3B [2], the action expert, depth expert, SigLIP [40], and tokenizer are trainable. (b) The proposed hybrid attention mask, which integrates depth and visual tokens to enhance spatial reasoning and manipulation performance. nearest code vector in a codebook C = {c1, . . . , c𝐾}: z𝑞= c𝑗∗, 𝑗∗= arg min 𝑗 ∥z𝑒−c𝑗∥2 2. (1) We use 𝐾= 256 codebook entries of dimension 𝑑= 160, and train the VQ-VAE [34] with the standard objective: Lvq = ℓrec(x,𝑔𝜙(z𝑞)) | {z } reconstruction + ∥sg[z𝑒] −c𝑗∗∥2 2 | {z } codebook update +𝛽∥z𝑒−sg[c𝑗∗]∥2 2 | {z } commitment , (2) where sg[·] denotes stop-gradient and 𝛽= 0.25. In practice, we experiment with latent grid resolutions of 16×16 and 32×32. We find that the smaller 16×16 configuration already achieves accurate depth reconstruction while remaining computationally efficient. The VQ- VAE [34] is pretrained independently on each dataset using AdamW [25] with a learning rate of 1 × 10−5 to ensure stable convergence and reconstruction quality. The resulting pretrained model produces discretized depth code indices, which serve as supervisory targets for the depth expert in QDepth-VLA. 3.3 QDepth-VLA Architecture QDepth-VLA adopts a unified and modular architecture built upon open 𝜋0 [29], extending its VLA pipeline with an additional depth su- pervision branch. As shown in Fig. 1(a), the model consists of three parameterized modules: a pretrained vision–language model (VLM), an action expert, and a newly introduced depth expert. These mod- ules are coordinated through a mixture-of-experts (MoE) structure Table 1: Key configurations of the Action and Depth experts of QDepth-VLA. QDepth-VLA Action Expert Depth Expert Backbone Transformer Transformer Layers / Heads 18 / 8 18 / 8 Hidden dim 1024 1024 Interm. 
Figure 1: An overview of QDepth-VLA. (a) The overall architecture and training pipeline, where depth supervision is incorporated via a depth expert and latent prediction module. In co-training, the VQ-VAE [34] encoder and codebook are frozen, while PaliGemma-3B [2], the action expert, depth expert, SigLIP [40], and tokenizer are trainable. (b) The proposed hybrid attention mask, which integrates depth and visual tokens to enhance spatial reasoning and manipulation performance.

3.3 QDepth-VLA Architecture
QDepth-VLA adopts a unified and modular architecture built upon open π0 [29], extending its VLA pipeline with an additional depth supervision branch. As shown in Fig. 1(a), the model consists of three parameterized modules: a pretrained vision–language model (VLM), an action expert, and a newly introduced depth expert. These modules are coordinated through a mixture-of-experts (MoE) structure and a carefully designed hybrid attention mask, enabling QDepth-VLA to jointly reason about geometry and control without disrupting pretrained representations.

Table 1: Key configurations of the Action and Depth experts of QDepth-VLA.

                 Action Expert      Depth Expert
Backbone         Transformer        Transformer
Layers / Heads   18 / 8             18 / 8
Hidden dim       1024               1024
Interm. dim      4096               4096
Inputs           Proprio + Action   RGB-Img tokens
Outputs          Actions            Depth tokens

We choose PaliGemma-3B [2] as the VLM backbone, which integrates SigLIP-based [40] vision encoding with Gemma's [26] language modeling capability. Input instructions are first tokenized using Gemma's [26] tokenizer, while the main-view RGB image is processed by the SigLIP [40] image encoder to obtain 256 visual tokens. These image tokens are concatenated with 20 text prefix tokens and fed into the Gemma [26] decoder under full block attention to produce multimodal embeddings that capture both spatial and semantic cues. This pretrained VLM remains trainable during the training stage, allowing geometric adaptation to the manipulation environment.

The action expert is a transformer-based module responsible for translating multimodal embeddings and proprioceptive states into executable robot actions. It consists of stacked transformer layers with MLP-based encoders and decoders that integrate visual–language context from the VLM with proprioceptive features. This module, which is built upon the original open π0 [29] action head, functions as the core control head of QDepth-VLA.

To incorporate geometric reasoning, QDepth-VLA introduces a dedicated depth expert, architecturally aligned with the action expert (illustrated in Table 1). It takes the visual embeddings from the SigLIP encoder as input, before language fusion, to avoid semantic interference. These embeddings are projected through a lightweight MLP, processed by a transformer backbone, and then passed to a shallow CNN decoder that predicts 256 depth tokens. Each predicted token corresponding to a latent vector is then aligned with the quantized tokens produced by the pretrained VQ-VAE [34] encoder over its codebook. The pretrained VQ-VAE [34] decoder is subsequently used to reconstruct the spatial depth map from these latent tokens when required. This discrete formulation enables QDepth-VLA to capture compact, structured geometric representations while maintaining optimization stability.

As for the hybrid attention mechanism, existing designs typically employ a standard causal attention structure, as seen in DreamVLA [42] and CoT-VLA [43]. However, since depth modalities inherently contain noise, directly fusing them under causal attention may introduce undesirable interference, potentially degrading action generation quality [42]. To address this issue, we redesign the hybrid attention mechanism (Fig. 1(b)) to more effectively regulate cross-modal information flow among text, image, depth, proprioception, and action tokens. To be specific:

(1) Text and image tokens attend only within their modality to preserve pretrained semantic grounding.
(2) Depth tokens attend to both image and text tokens, contextualizing geometric features with visual semantics.
(3) Action tokens attend to all preceding modalities, integrating fused perceptual and geometric cues for policy generation.

This hierarchical attention design allows depth to enhance spatial understanding while preventing over-interference with the pretrained VLM and keeping computation efficient.
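A minimal sketch of how such a hybrid mask could be assembled as a boolean attention matrix over the token sequence; the block sizes below are illustrative assumptions, and the exact token layout in the paper may differ.

```python
import torch

def hybrid_attention_mask(n_txt=20, n_img=256, n_depth=256, n_act=4):
    """Boolean mask (True = query may attend to key) following the three
    rules: text/image attend within modality, depth attends to image and
    text, action attends to all preceding modalities."""
    sizes = {"txt": n_txt, "img": n_img, "depth": n_depth, "act": n_act}
    offs, start = {}, 0
    for name, n in sizes.items():            # contiguous token blocks
        offs[name] = (start, start + n)
        start += n
    mask = torch.zeros(start, start, dtype=torch.bool)

    def allow(q, k):
        (q0, q1), (k0, k1) = offs[q], offs[k]
        mask[q0:q1, k0:k1] = True

    for m in sizes:                           # within-modality attention
        allow(m, m)
    allow("depth", "img"); allow("depth", "txt")   # rule (2)
    for m in ("txt", "img", "depth"):              # rule (3)
        allow("act", m)
    return mask
```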
3.4 Co-Training Procedures

3.4.1 Quantized Depth Supervision. During joint training, the depth expert predicts latent depth tokens, and these tokens are used to compute logits over the VQ-VAE [34] codebook:

$$\ell_{i,k} = -\frac{1}{\tau}\,\|x_i - c_k\|_2^2, \quad (3)$$

where i indexes latent spatial positions, k indexes codebook entries (K = 256), and τ is a temperature factor. A cross-entropy loss is applied using the ground-truth code indices $z_i^*$ obtained from the pretrained VQ-VAE [34]:

$$\mathcal{L}_{\mathrm{depth}} = -\frac{1}{B \cdot N} \sum_{i=1}^{B \cdot N} \log \frac{\exp(\ell_{i,\, z_i^*})}{\sum_{k=1}^{K} \exp(\ell_{i,k})}, \quad (4)$$

where B is the batch size and N the number of latent tokens per frame. This loss encourages the visual encoder to learn geometry-aware embeddings aligned with the quantized depth representation.

3.4.2 Action Modeling. Based on the underlying VLA backbone, the action prediction objective is as follows. The Conditional Flow Matching (CFM) action loss [23] is identical to that of π0 [4]:

$$\mathcal{L}_{\mathrm{CFM}}(\theta) = \mathbb{E}_{p(A_t \mid O_t),\, q(\hat{A}_t^\lambda \mid A_t)} \left\| f_\theta(\hat{A}_t^\lambda, O_t) - g(\hat{A}_t^\lambda \mid A_t) \right\|_2^2, \quad (5)$$

where the action chunk $A_t = [a_t, a_{t+1}, \ldots, a_{t+H-1}]$ is conditioned on the observation $O_t = [I_t, \ell_t, s_t]$, which includes the RGB image, language instruction, and end-effector state. Notably, $\hat{A}_t^\lambda$ denotes noisy action samples generated from a diffusion-like process:

$$\hat{A}_t^\lambda = \lambda A_t + (1 - \lambda)\eta, \qquad \eta \sim \mathcal{N}(0, I), \quad (6)$$

and the corresponding noise distribution and flow target are defined as:

$$q(\hat{A}_t^\lambda \mid A_t) = \mathcal{N}(\lambda A_t, (1 - \lambda) I), \qquad g(\hat{A}_t^\lambda \mid A_t) = \eta - A_t. \quad (7)$$

This formulation enables the model to approximate a continuous-time flow field that transports noisy actions toward their clean ground-truth counterparts.

3.4.3 Co-Training Objectives. The total loss combines the action and depth objectives:

$$\mathcal{L}_{\mathrm{total}} = \mathcal{L}_{\mathrm{action}} + \lambda_t \cdot \mathcal{L}_{\mathrm{depth}}, \quad (8)$$

where $\lambda_t = \lambda_0 \cdot \gamma^t$ decays exponentially over training steps, with $\lambda_0 = 0.01$. This co-training schedule enables the model to first establish stable geometric alignment before gradually focusing on action refinement.

3.4.4 Optimization Setup. QDepth-VLA is trained using the AdamW [25] optimizer with decoupled weight decay. We set the learning rate for both the action expert and the VLM backbone to 5 × 10⁻⁵. A cosine learning rate scheduler with 200 warm-up steps and a cycle length of 10⁷ steps is applied, ensuring stable optimization throughout training.
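For concreteness, a compact sketch of the co-training objectives (3)-(8) is given below; the predictor callables, tensor shapes, and the decay rate gamma (not specified in the paper) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def depth_loss(pred_tokens, codebook, gt_idx, tau=1.0):
    """Eqs. (3)-(4): logits are negative squared distances to the codebook
    entries, trained with cross-entropy against ground-truth code indices."""
    logits = -torch.cdist(pred_tokens, codebook) ** 2 / tau   # shape (B*N, K)
    return F.cross_entropy(logits, gt_idx)

def cfm_loss(f_theta, A_t, O_t):
    """Eqs. (5)-(7): conditional flow matching on noisy action chunks."""
    lam = torch.rand(A_t.shape[0], 1, 1)          # flow time lambda in [0, 1]
    eta = torch.randn_like(A_t)                   # Gaussian noise
    A_noisy = lam * A_t + (1 - lam) * eta         # Eq. (6)
    target = eta - A_t                            # flow target g, Eq. (7)
    return ((f_theta(A_noisy, O_t) - target) ** 2).mean()

def total_loss(l_action, l_depth, step, lam0=0.01, gamma=0.9999):
    """Eq. (8) with lambda_t = lambda_0 * gamma**t; gamma is an assumed value."""
    return l_action + lam0 * gamma**step * l_depth
```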
4 EXPERIMENTS
In this section, we conduct comprehensive experiments across both simulation and real-world settings to evaluate the effectiveness of our approach. Specifically, we aim to address the following three questions:
(1) Can depth supervision effectively enhance VLA performance in long-horizon and pick-and-place tasks, particularly those requiring fine-grained manipulation?
(2) Is quantized depth supervision more effective than pixel-level depth prediction?
(3) Does the proposed hybrid attention mask contribute to performance gains?

4.1 Simulation Experiments
4.1.1 Training Recipe. The QDepth-VLA based on open π0 [29] is initially pre-trained for 9 epochs on the Fractal dataset [6], followed by 20 epochs of pre-training on the LIBERO-90 dataset [24]. After pre-training, the model is further fine-tuned on the four LIBERO subsets (Spatial, Object, Goal, and Long) [24] for around 50 epochs. For the Simpler benchmark [20], the model is instead trained from scratch, first using the Bridge dataset [35] for 13 epochs, and then the Fractal dataset [6] for an additional 9 epochs. All experiments are conducted using the Fully Sharded Data Parallel (FSDP) training strategy on 8 × NVIDIA H20 GPUs. A per-GPU batch size of 32 is used, yielding a global batch size of 1024 with gradient accumulation, and the action chunk size is fixed at 4.

4.1.2 Evaluation Setup. For evaluation on LIBERO [24], we adopt its four benchmark suites (Spatial, Object, Goal, and Long). Following the preprocessing method in [17], the image resolution is first normalized to 256 × 256 and then resized to 224 × 224 as model input. We also apply a 180-degree rotation to all images and use only the main-view RGB observations. Each task is evaluated over 50 rollouts, with the average success rate reported. On the Simpler benchmark [20], the evaluation covers two distinct settings: (1) models trained on the Bridge dataset [35] are tested on tasks involving the WidowX250 robot, and (2) models trained on the Fractal dataset [6] are tested on tasks for the Google Robot. We adopt the visual matching configuration from Simpler [20], evaluating each task across multiple initial positions with 10 rollouts per configuration. Consequently, the total number of evaluations per task ranges from 240 to 2400.

4.1.3 Main Results.

Table 2: Results of QDepth-VLA on the LIBERO benchmark (success rate, %).

View Setting     Category               Method                   Spatial  Object  Goal   Long   Avg
Single-view VLA  General VLA            OpenVLA finetuned [17]   84.7     88.4    79.2   53.7   76.5
                                        CoT-VLA-7B [43]          87.5     91.6    87.6   69.0   81.1
                                        Open π0 [29]             77.2     84.0    83.6   66.0   77.7
                 3D-cloud-enhanced VLA  SpatialVLA [28]          88.2     89.9    78.6   55.5   78.1
                 Depth-enhanced VLA     3D-CAVLA [3]             86.1     94.7    82.9   66.8   82.6
                                        QDepth-VLA (ours)        86.0     88.8    94.0   72.6   85.4
Multi-view VLA   General VLA            Diffusion Policy [11]    78.3     92.5    68.3   50.5   72.4
                                        Octo finetuned [33]      78.9     85.7    84.6   51.1   75.1
                                        π0-FAST finetuned [4]    96.4     96.8    88.6   60.2   85.5
                                        π0 finetuned [4]         96.8     98.8    95.8   85.2   94.2
                                        UniVLA [7]               96.5     96.8    95.6   92.0   95.2
                 3D-cloud-enhanced VLA  GeoVLA [32]              98.4     99.0    96.6   96.6   97.7
                 Depth-enhanced VLA     3D-CAVLA [3]             98.2     99.8    98.2   96.1   98.1
                                        4D-VLA [41]              88.9     95.2    90.9   79.1   88.6
                                        DreamVLA [42]            97.5     94.0    89.5   89.5   92.6
                                        QDepth-VLA (ours)        97.6     96.6    95.2   90.0   94.9

LIBERO Benchmark. QDepth-VLA adopts a single-view setting, where the visual input consists of only one RGB image. This contrasts with multi-view models, which take multiple images as input, including temporally adjacent frames from historical observations. As shown in Table 2, QDepth-VLA consistently outperforms single-view baselines across the LIBERO suites. It achieves stronger performance on both fine-grained and long-horizon tasks, reaching 94.0% on the Goal tasks and 72.6% on the Long tasks, surpassing the single-view baseline CoT-VLA [43] by 6.4% and 3.6%, respectively. Compared with open π0 [29], QDepth-VLA shows consistent improvements across all four subsets (Spatial, Object, Goal, and Long), with the largest gain of 8.8% observed on the Spatial tasks.

While QDepth-VLA operates with only a single RGB observation, its average success rate remains competitive with multi-view VLAs. Specifically, QDepth-VLA achieves a mean success rate only 0.1% lower than π0-FAST [29], while exceeding 4D-VLA [41] by 3.1% and DreamVLA [42] by 4.5% on the Goal tasks. Moreover, it surpasses π0-FAST [4] by 12.4% on the more challenging Long tasks.
Although leading multi-view models such as 3D-CAVLA [3], GeoVLA [32] and UniVLA [7] achieve higher overall results, our experimental results demonstrate that depth-augmented supervision effectively compensates for the lack of multi-view observations and brings single-view VLAs closer to multi-view performance levels.

By extension, we further implement a multi-view variant of QDepth-VLA while maintaining the same setting that predicts latent depth tokens corresponding only to the current main-view image. As shown in Table 2, the multi-view QDepth-VLA consistently outperforms single-view baselines. It achieves an average success rate of 94.9%, surpassing DreamVLA by 0.1% and π0 by 0.8% on the Spatial tasks, and reaches 90.0% success on the Long tasks. While 3D-CAVLA [3] and GeoVLA [32] achieve higher average success rates, they require explicit point cloud or depth map inputs during inference, modalities that QDepth-VLA does not rely on. These results reveal that QDepth-VLA generalizes effectively to multi-view configurations, further enhancing geometric perception and long-horizon reasoning.

Simpler Benchmark. Tables 3 and 4 present the experimental results of QDepth-VLA in the Simpler [20] simulation environment.

Table 3: Results of QDepth-VLA on the Simpler benchmark (Google Robot tasks).

Method                     Pick Coke Can  Move Near  Open/Close Drawer  Open Top Drawer and Put Apple In  Avg
RT-2-X [5]                 78.7           77.9       25.0               -                                 60.7
Octo-Base [33]             17.0           4.2        22.7               -                                 16.8
OpenVLA [17]               16.3           46.2       35.6               -                                 27.7
RoboVLM finetuned [21]     77.3           61.7       43.5               -                                 63.4
SpatialVLA finetuned [28]  86.0           77.9       57.4               -                                 75.1
Open π0 [29]               97.5           87.1       68.0               32.9                              71.4
QDepth-VLA (ours)          98.3           81.4       58.0               62.6                              75.1

Table 4: Results of QDepth-VLA on the Simpler benchmark (WidowX250 Robot tasks).

Method                     Put Carrot on Plate  Put Eggplant in Basket  Put Spoon on Towel  Stack Block  Avg
Octo-Base [33]             8.3                  43.1                    12.5                0.0          16.0
OpenVLA [17]               0.0                  4.1                     0.0                 0.0          1.0
RoboVLM finetuned [21]     25.0                 58.3                    29.2                12.5         31.3
SpatialVLA finetuned [28]  25.0                 100.0                   16.7                29.2         42.7
Open π0 [29]               61.3                 89.6                    73.7                15.8         60.0
QDepth-VLA (ours)          57.5                 95.0                    82.0                39.6         68.5

As shown in Table 3, QDepth-VLA achieves a success rate of 98.3% on the pick coke can task, surpassing open π0 [29] by 0.8% and SpatialVLA [28] by 12.3%. On the more complex open top drawer and put apple in task, QDepth-VLA also attains 62.6%, outperforming open π0 [29] by a large margin of 29.7%. This substantial improvement on long-horizon tasks can be attributed to the enhanced spatial perception and object localization provided by depth-guided supervision, which improves the model's ability to accurately identify and grasp target objects. As a result, the success probability of intermediate manipulation steps, such as grasping or placing, is increased, leading to higher overall task completion rates.

In Table 4, QDepth-VLA consistently achieves high success rates across various manipulation tasks. Notably, on the stack block task, which demands precise spatial reasoning and fine-grained control, QDepth-VLA reaches a success rate of 39.6%, surpassing SpatialVLA [28] by 10.4%. Furthermore, on the Put Eggplant in Basket and Put Spoon on Towel tasks, QDepth-VLA achieves 95.0% and 82.0% success rates, respectively, outperforming open π0 [29] by 5.4% and 8.3%. These improvements highlight the effectiveness of quantized depth supervision in enhancing spatial reasoning and manipulation precision, particularly for tasks involving object placement and coordination in cluttered 3D environments.

Depth Reconstruction Visualization.
To further validate the effectiveness of our depth supervision, we visualize depth reconstructions by passing the quantized predicted features through a trained VQ-VAE [34] decoder. As shown in Fig. 3, the reconstructions preserve structural details and align well with object boundaries, demonstrating that the learned depth representations capture spatial geometry in a meaningful way.

Figure 3: Depth reconstruction results from QDepth-VLA. Features predicted by the depth expert are decoded using a trained VQ-VAE decoder. The reconstructions demonstrate QDepth-VLA's ability to learn critical depth map features, including object and gripper boundaries, underscoring the success of its depth supervision.

4.2 Real-Robot Experiments
4.2.1 Environment Setup. In the real-world experiments, we employ a 6-DoF Piper robotic arm, with a RealSense D455 camera positioned directly in front of the arm. The training hyperparameters are kept consistent with those used in the simulation experiments, except that the action chunk size is set to 16 to achieve faster execution speed. We select four tasks for evaluation:

• Task 1: pick the banana into the yellow basket
• Task 2: put the chili into the bowl
• Task 3: put the green block into the bowl
• Task 4: stack the green block on top of the yellow block

Figure 2: An overview of the main camera view in our real-world task. The environment presents significant challenges for policy learning, including complex lighting conditions, low visual contrast between the gripper and tabletop, and various environmental disturbances that can obscure critical geometric details.

Table 5: Real-World Evaluation on the Piper Arm (success rate, %).

Method             Task 1  Task 2  Task 3  Task 4  Avg
ACT [44]           20.0    0.0     0.0     0.0     5.0
Open π0 [29]       50.0    40.0    40.0    0.0     32.5
QDepth-VLA (ours)  70.0    40.0    50.0    10.0    42.5

For each task, we collect 50 trajectories and fine-tune the model separately on the corresponding dataset. During testing, each task is evaluated over 10 trials. As shown in Fig. 2, all experiments are conducted on a dark-colored wooden desk, where the surface color is visually similar to the robot gripper. This setup increases the difficulty of the tasks by introducing additional perceptual ambiguity.

4.2.2 Main Results. We evaluate QDepth-VLA on a series of pick-and-place tasks of varying difficulty to assess its spatial perception and localization ability. We also compare our method against representative baselines such as ACT [44]. As shown in Table 5, QDepth-VLA consistently outperforms ACT [44] across all tasks: while ACT [44] fails to perform reliably in these complex real-world environments, QDepth-VLA achieves robust success rates. Compared to our baseline open π0 [29], QDepth-VLA achieves a 20.0% improvement on the simple task of picking a banana, and further achieves gains of 10.0% on both Task 3 and Task 4, demonstrating stronger generalization to challenging scenarios and tasks.

4.3 Ablation Study
To evaluate the contribution of each proposed component, we perform a series of controlled ablation experiments in the Simpler [20] simulation environment. Four ablated variants are considered: (1) removing the depth supervision signal by setting its loss weight to zero (w/o Depth Loss); (2) removing the dedicated depth prediction branch (w/o Depth Expert); (3) replacing latent depth token prediction with pixel-wise regression (w/o Pixel Prediction); and (4) substituting the proposed hybrid attention mask with a standard version that enforces proprioception-to-depth attention (w/o Hybrid Attn).
Quantitative results are reported in Table 6.

Table 6: Ablation study of QDepth-VLA on Simpler tasks. The table reports success rates (%) with module ablations. A checkmark (✓) indicates the module is present, while a cross (✗) indicates it is removed, changed or set to zero manually.

Model                 Depth Loss  Depth Expert  Latent Pred  Hybrid Attn  Carrot  Eggplant  Spoon  Block  Avg
QDepth-VLA (full)     ✓           ✓             ✓            ✓            57.5    95.0      82.0   39.6   68.5
w/o Depth Loss        ✗           ✓             ✓            ✓            47.9    82.5      89.2   42.9   65.6
w/o Depth Expert      ✓           ✗             ✓            ✓            61.3    89.6      73.7   15.8   60.0
w/o Pixel Prediction  ✓           ✓             ✗            ✓            54.6    80.4      82.0   41.3   64.6
w/o Hybrid Attn [42]  ✓           ✓             ✓            ✗            41.7    89.6      78.8   42.0   63.0

4.3.1 w/o Depth Loss. In this variant, the depth loss weight is set to zero while preserving the full model capacity. This configuration isolates the contribution of the depth supervision signal without altering the overall parameter scale. As shown in Table 6, performance decreases from 68.5% to 65.6% on average. The degradation is most pronounced in the Carrot (-9.6%) and Eggplant (-12.5%) tasks, both requiring coarse spatial grounding. Conversely, slight improvements are observed in the Spoon (+7.2%) and Block (+3.3%) tasks, suggesting that the auxiliary depth objective can occasionally compete with the action policy optimization. Overall, these results confirm that depth supervision provides a meaningful geometric prior that facilitates spatial reasoning beyond the primary control objective.

4.3.2 w/o Depth Expert. Eliminating the dedicated depth branch results in the largest overall performance degradation (-8.5%), as presented in Table 6. The most significant drop occurs in the Stack Block task (-23.8%), where precise 3D alignment is critical. Substantial declines are also observed in the Eggplant (-5.4%) and Spoon (-8.3%) tasks, indicating that fine-grained spatial reasoning relies heavily on an explicit and specialized depth pathway.

4.3.3 w/o Pixel Prediction. As shown in Table 6, replacing latent depth prediction with direct pixel-wise regression lowers the average performance to 64.6% (-3.9%). The largest impact is on the Eggplant (-14.6%) task, whereas other tasks are only mildly affected. This validates our design choice: quantized latent tokens encourage abstraction of geometric cues, while pixel prediction entangles the model with redundant local detail that is less relevant for manipulation.

4.3.4 w/o Hybrid Attention. In this variant, the proposed hybrid attention mask is replaced with a DreamVLA-style [42] configuration, removing dynamic and semantic modalities. This setting tests whether relative depth maps can enhance proprioceptive state perception and thereby improve action generation quality. As expected, performance declines by 5.5% on average, with the most substantial drop on the Carrot task (-15.8%). This result indicates that enforcing proprioception-to-depth attention introduces noise rather than useful guidance, as relative depth lacks the absolute positional encoding necessary for stable control.

4.3.5 Cross-task synthesis. Across all ablations, two consistent trends emerge. First, stacking tasks exhibit exceptional sensitivity to the removal of the depth expert, confirming that explicit depth modeling is crucial for accurate vertical alignment and object interaction. Second, placement-oriented tasks such as the Carrot and Eggplant tasks benefit most from depth supervision and hybrid attention routing, which jointly enhance mid-level spatial localization and task-level consistency.
Overall, we find the following answers to the three research questions brought up at the beginning of this section:

• Depth supervision effectively enhances VLA performance, especially on long-horizon and fine-grained pick-and-place tasks. In particular, tasks such as stacking and precise placement benefit significantly, indicating improved spatial reasoning.
• Compared to pixel-level regression, quantized depth supervision proves more effective, as it reduces redundancy and focuses learning on salient geometric structures, leading to more stable training and stronger downstream performance.
• The proposed hybrid attention mask consistently contributes to performance gains, particularly in placement tasks, by selectively routing depth cues into the policy network and improving cross-modal feature alignment.

5 CONCLUSIONS
In this paper, we introduced QDepth-VLA, a new vision-language-action model that incorporates depth supervision and hybrid attention to enhance spatial perception and long-horizon reasoning. Through extensive experiments in both simulation (Simpler and LIBERO) and real-world manipulation tasks, we demonstrate that depth supervision significantly improves manipulation performance. In summary, our work demonstrates that predicting quantized depth tokens at the current timestep is an effective way to enhance policy learning. Extending this approach to predict future depth tokens for improved reasoning and exploring more efficient VAE-based depth representations for enhanced perception present two promising directions for future research.

REFERENCES
[1] AgiBot-World-Contributors, Qingwen Bu, Jisong Cai, Li Chen, Xiuqi Cui, Yan Ding, Siyuan Feng, Shenyuan Gao, Xindong He, Xu Huang, Shu Jiang, Yuxin Jiang, Cheng Jing, Hongyang Li, Jialun Li, Chiming Liu, Yi Liu, Yuxiang Lu, Jianlan Luo, Ping Luo, Yao Mu, Yuehan Niu, Yixuan Pan, Jiangmiao Pang, Yu Qiao, Guanghui Ren, Cheng-Xing Ruan, Jiaqi Shan, Yongjian Shen, Chengshi Shi, Mi Shi, Modi Shi, Chonghao Sima, Jia-Yi Song, Huijie Wang, Wenhao Wang, Dafeng Wei, Chengen Xie, Guo-Liang Xu, Junchi Yan, Cunbiao Yang, Lei Yang, Shu-Xiang Yang, Maoqing Yao, Jiansheng Zeng, Chi Zhang, Qingli Zhang, Bin Zhao, Chengyu Zhao, Jiaqi Zhao, and Jianchao Zhu. 2025. AgiBot World Colosseo: A Large-scale Manipulation Platform for Scalable and Intelligent Embodied Systems. ArXiv abs/2503.06669 (2025).
[2] Lucas Beyer, Andreas Steiner, André Susano Pinto, Alexander Kolesnikov, Xiao Wang, Daniel M. Salz, Maxim Neumann, Ibrahim M. Alabdulmohsin, Michael Tschannen, Emanuele Bugliarello, Thomas Unterthiner, Daniel Keysers, Skanda Koppula, Fangyu Liu, Adam Grycner, Alexey A. Gritsenko, Neil Houlsby, Manoj Kumar, Keran Rong, Julian Martin Eisenschlos, Rishabh Kabra, Matthias Bauer, Matko Bošnjak, Xi Chen, Matthias Minderer, Paul Voigtlaender, Ioana Bica, Ivana Balazevic, Joan Puigcerver, Pinelopi Papalampidi, Olivier Henaff, Xi Xiong, Radu Soricut, Jeremiah Harmsen, and Xiaohua Zhai. 2024. PaliGemma: A versatile 3B VLM for transfer. ArXiv abs/2407.07726 (2024).
[3] V.S.K. Pandi V Bhat, Yu-Hsiang Lan, Prashanth Krishnamurthy, Ramesh Karri, and Farshad Khorrami. 2025. 3D CAVLA: Leveraging Depth and 3D Context to Generalize Vision Language Action Models for Unseen Tasks. ArXiv abs/2505.05800 (2025).
[4] Kevin Black, Noah Brown, Danny Driess, Adnan Esmail, Michael Equi, Chelsea Finn, Niccolo Fusai, Lachy Groom, Karol Hausman, Brian Ichter, Szymon Jakubczak, Tim Jones, Liyiming Ke, Sergey Levine, Adrian Li-Bell, Mohith Mothukuri, Suraj Nair, Karl Pertsch, Lucy Xiaoyang Shi, James Tanner, Quan Vuong, Anna Walling, Haohuan Wang, and Ury Zhilinsky. 2024. π0: A Vision-Language-Action Flow Model for General Robot Control. ArXiv abs/2410.24164 (2024).
[5] Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Krzysztof Choromanski, Tianli Ding, et al. 2023. RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control. ArXiv abs/2307.15818 (2023).
[6] Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Joseph Dabis, Chelsea Finn, Keerthana Gopalakrishnan, et al. 2022. RT-1: Robotics Transformer for Real-World Control at Scale. ArXiv abs/2212.06817 (2022).
[7] Qingwen Bu, Yanting Yang, Jisong Cai, Shenyuan Gao, Guanghui Ren, Maoqing Yao, Ping Luo, and Hongyang Li. 2025. UniVLA: Learning to Act Anywhere with Task-centric Latent Actions. ArXiv abs/2505.06111 (2025).
[8] Chi-Lam Cheang, Guangzeng Chen, Ya Jing, Tao Kong, Hang Li, Yifeng Li, Yuxiao Liu, Hongtao Wu, Jiafeng Xu, Yichu Yang, Hanbo Zhang, and Minzhao Zhu. 2024. GR-2: A Generative Video-Language-Action Model with Web-Scale Knowledge for Robot Manipulation. ArXiv abs/2410.06158 (2024).
[9] Chi-Lam Cheang, Sijin Chen, Zhongren Cui, Yingdong Hu, Liqun Huang, Tao Kong, Hang Li, Yifeng Li, Yuxiao Liu, Xiao Ma, Hao Niu, Wenxuan Ou, Wanli Peng, Zeyu Ren, Haixin Shi, Jiawen Tian, Hongtao Wu, Xin Xiao, Yuyang Xiao, Jiafeng Xu, and Yichu Yang. 2025. GR-3 Technical Report. ArXiv abs/2507.15493 (2025).
[10] Sili Chen, Hengkai Guo, Shengnan Zhu, Feihu Zhang, Zilong Huang, Jiashi Feng, and Bingyi Kang. 2025. Video Depth Anything: Consistent Depth Estimation for Super-Long Videos. 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2025), 22831–22840.
[11] Cheng Chi, Siyuan Feng, Yilun Du, Zhenjia Xu, Eric Cousineau, Benjamin Burchfiel, and Shuran Song. 2023. Diffusion Policy: Visuomotor Policy Learning via Action Diffusion. ArXiv abs/2303.04137 (2023).
[12] Yiguo Fan, Pengxiang Ding, Shuanghao Bai, Xinyang Tong, Yuyang Zhu, Hongchao Lu, Fengqi Dai, Wei Zhao, Yang Liu, Siteng Huang, Zhaoxin Fan, Badong Chen, and Donglin Wang. 2025. Long-VLA: Unleashing Long-Horizon Capability of Vision Language Action Model for Robot Manipulation. ArXiv abs/2508.19958 (2025).
[13] Théophile Gervet, Zhou Xian, Nikolaos Gkanatsios, and Katerina Fragkiadaki. 2023. Act3D: 3D Feature Field Transformers for Multi-Task Robotic Manipulation. In Conference on Robot Learning.
[14] Ankit Goyal, Valts Blukis, Jie Xu, Yijie Guo, Yu-Wei Chao, and Dieter Fox. 2024. RVT-2: Learning Precise Manipulation from Few Demonstrations. ArXiv abs/2406.08545 (2024).
[15] Ankit Goyal, Jie Xu, Yijie Guo, Valts Blukis, Yu-Wei Chao, and Dieter Fox. 2023. RVT: Robotic View Transformer for 3D Object Manipulation. ArXiv abs/2306.14896 (2023).
[16] Physical Intelligence, Kevin Black, Noah Brown, James Darpinian, Karan Dhabalia, Danny Driess, Adnan Esmail, Michael Equi, Chelsea Finn, Niccolo Fusai, Manuel Y. Galliker, Dibya Ghosh, Lachy Groom, Karol Hausman, Brian Ichter, Szymon Jakubczak, Tim Jones, Liyiming Ke, Devin LeBlanc, Sergey Levine, Adrian Li-Bell, Mohith Mothukuri, Suraj Nair, Karl Pertsch, Allen Z. Ren, Lucy Xiaoyang Shi, Laura Smith, Jost Tobias Springenberg, Kyle Stachowicz, James Tanner, Quan Vuong, Homer Walke, Anna Walling, Haohuan Wang, Lili Yu, and Ury Zhilinsky. 2025. π0.5: a Vision-Language-Action Model with Open-World Generalization. ArXiv abs/2504.16054 (2025).
[17] Moo Jin Kim, Karl Pertsch, Siddharth Karamcheti, Ted Xiao, Ashwin Balakrishna, Suraj Nair, Rafael Rafailov, Ethan Foster, Grace Lam, Pannag R. Sanketi, Quan Vuong, Thomas Kollar, Benjamin Burchfiel, Russ Tedrake, Dorsa Sadigh, Sergey Levine, Percy Liang, and Chelsea Finn. 2024. OpenVLA: An Open-Source Vision-Language-Action Model. ArXiv abs/2406.09246 (2024).
[18] Chengmeng Li, Junjie Wen, Yan Peng, Yaxin Peng, Feifei Feng, and Yichen Zhu. 2025. PointVLA: Injecting the 3D World into Vision-Language-Action Models. ArXiv abs/2503.07511 (2025).
[19] Peiyan Li, Yixiang Chen, Hongtao Wu, Xiao Ma, Xiangnan Wu, Yan Huang, Liang Wang, Tao Kong, and Tieniu Tan. 2025. BridgeVLA: Input-Output Alignment for Efficient 3D Manipulation Learning with Vision-Language Models. ArXiv abs/2506.07961 (2025).
[20] Xuanlin Li, Kyle Hsu, Jiayuan Gu, Karl Pertsch, Oier Mees, Homer Rich Walke, Chuyuan Fu, Ishikaa Lunawat, Isabel Sieh, Sean Kirmani, Sergey Levine, Jiajun Wu, Chelsea Finn, Hao Su, Quan Ho Vuong, and Ted Xiao. 2024. Evaluating Real-World Robot Manipulation Policies in Simulation. ArXiv abs/2405.05941 (2024).
[21] Xinghang Li, Peiyan Li, Minghuan Liu, Dong Wang, Jirong Liu, Bingyi Kang, Xiao Ma, Tao Kong, Hanbo Zhang, and Huaping Liu. 2024. Towards Generalist Robot Policies: What Matters in Building Vision-Language-Action Models. ArXiv abs/2412.14058 (2024).
[22] Tao Lin, Gen Li, Yilei Zhong, Yanwen Zou, and Bo Zhao. 2025. Evo-0: Vision-Language-Action Model with Implicit Spatial Understanding. ArXiv abs/2507.00416 (2025).
[23] Yaron Lipman, Ricky T. Q. Chen, Heli Ben-Hamu, Maximilian Nickel, and Matt Le. 2022. Flow Matching for Generative Modeling. ArXiv abs/2210.02747 (2022).
[24] Bo Liu, Yifeng Zhu, Chongkai Gao, Yihao Feng, Qian Liu, Yuke Zhu, and Peter Stone. 2023. LIBERO: Benchmarking Knowledge Transfer for Lifelong Robot Learning. ArXiv abs/2306.03310 (2023).
[25] Ilya Loshchilov and Frank Hutter. 2017. Decoupled Weight Decay Regularization. In International Conference on Learning Representations.
[26] Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, Shreya Pathak, et al. 2024. Gemma: Open Models Based on Gemini Research and Technology. arXiv preprint arXiv:2403.08295 (2024).
[27] Abhishek Padalkar, Acorn Pooley, Ajinkya Jain, Alex Bewley, Alex Herzog, Alex Irpan, et al. 2023. Open X-Embodiment: Robotic Learning Datasets and RT-X Models. ArXiv abs/2310.08864 (2023).
[28] Delin Qu, Haoming Song, Qizhi Chen, Yuanqi Yao, Xinyi Ye, Yani Ding, Zhigang Wang, Jiayuan Gu, Bin Zhao, Dong Wang, and Xuelong Li. 2025. SpatialVLA: Exploring Spatial Representations for Visual-Language-Action Model. ArXiv abs/2501.15830 (2025).
[29] Allen Z. Ren. 2024. open-pi-zero: Re-implementation of the π0 vision-language-action (VLA) model. https://github.com/allenzren/open-pi-zero
Re- conVLA: Reconstructive Vision-Language-Action Model as Effective Robot Perceiver. ArXiv abs/2508.10333 (2025). [31] Andreas Steiner, André Susano Pinto, Michael Tschannen, Daniel Keysers, Xiao Wang, Yonatan Bitton, Alexey Gritsenko, Matthias Minderer, Anthony Sherbondy, Shangbang Long, Siyang Qin, R. Reeve Ingle, Emanuele Bugliarello, Sahar Kazemzadeh, Thomas Mesnard, Ibrahim M. Alabdulmohsin, Lucas Beyer, and Xiao-Qi Zhai. 2024. PaliGemma 2: A Family of Versatile VLMs for Transfer. ArXiv abs/2412.03555 (2024). [32] Lin Sun, Bin Xie, Yingfei Liu, Hao Shi, Tiancai Wang, and Jiale Cao. 2025. GeoVLA: Empowering 3D Representations in Vision-Language-Action Models. ArXiv abs/2508.09071 (2025). [33] Octo Model Team, Dibya Ghosh, Homer Rich Walke, Karl Pertsch, Kevin Black, Oier Mees, Sudeep Dasari, Joey Hejna, Tobias Kreiman, Charles Xu, Jianlan Luo, You Liang Tan, Pannag R. Sanketi, Quan Vuong, Ted Xiao, Dorsa Sadigh, Chelsea Finn, and Sergey Levine. 2024. Octo: An Open-Source Generalist Robot Policy. ArXiv abs/2405.12213 (2024). [34] Aäron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. 2017. Neural Discrete Representation Learning. In Neural Information Processing Systems. [35] Homer Rich Walke, Kevin Black, Abraham Lee, Moo Jin Kim, Maximilian Du, Chongyi Zheng, Tony Zhao, Philippe Hansen-Estruch, Quan Ho Vuong, Andre Wang He, Vivek Myers, Kuan Fang, Chelsea Finn, and Sergey Levine. 2023. BridgeData V2: A Dataset for Robot Learning at Scale. In Conference on Robot Learning. [36] Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Ke-Yang Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Yang Fan, Kai Dang, Mengfei Du, Xuancheng Ren, Rui Men, Dayiheng Liu, Chang Zhou, Jingren Zhou, and Junyang Lin. 2024. Qwen2-VL: Enhancing Vision-Language Model’s Perception of the World at Any Resolution. ArXiv abs/2409.12191 (2024). [37] Liudi Yang, Yang Bai, George Eskandar, Fengyi Shen, Mohammad Altillawi, Dong Chen, Soumajit Majumder, Ziyuan Liu, Gitta Kutyniok, and Abhinav Valada. 2025. RoboEnvision: A Long-Horizon Video Generation Model for Multi-Task Robot Manipulation. ArXiv abs/2506.22007 (2025). [38] Lihe Yang, Bingyi Kang, Zilong Huang, Zhen Zhao, Xiaogang Xu, Jiashi Feng, and Hengshuang Zhao. 2024. Depth Anything V2. ArXiv abs/2406.09414 (2024). https://api.semanticscholar.org/CorpusID:270440448 [39] Rujia Yang, Geng Chen, Chuan Wen, and Yang Gao. 2025. FP3: A 3D Foundation Policy for Robotic Manipulation. ArXiv abs/2503.08950 (2025). [40] Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, and Lucas Beyer. 2023. Sigmoid Loss for Language Image Pre-Training. 2023 IEEE/CVF International Conference on Computer Vision (ICCV) (2023), 11941–11952. [41] Jiahui Zhang, Yurui Chen, Yueming Xu, Ze Huang, Yanpeng Zhou, Yuan Yuan, Xinyue Cai, Guowei Huang, Xingyue Quan, Hang Xu, and Li Zhang. 2025. 4D-VLA: Spatiotemporal Vision-Language-Action Pretraining with Cross-Scene Calibration. ArXiv abs/2506.22242 (2025). https://api.semanticscholar.org/ CorpusID:280010742 [42] Wenyao Zhang, Hongsi Liu, Zekun Qi, Yunnan Wang, Xinqiang Yu, Jiazhao Zhang, Runpei Dong, Jiawei He, He Wang, Zhizheng Zhang, Li Yi, Wenjun Zeng, and Xin Jin. 2025. DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge. ArXiv abs/2507.04447 (2025). [43] Qingqing Zhao, Yao Lu, Moo Jin Kim, Zipeng Fu, Zhuoyang Zhang, Yecheng Wu, Zhaoshuo Li, Qianli Ma, Song Han, Chelsea Finn, Ankur Handa, Ming-Yu Liu, Donglai Xiang, Gordon Wetzstein, and Tsung-Yi Lin. 2025. 
CoT-VLA: Visual Chain-of-Thought Reasoning for Vision-Language-Action Models. 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2025), 1702–1713. [44] Tony Zhao, Vikash Kumar, Sergey Levine, and Chelsea Finn. 2023. Learning Fine- Grained Bimanual Manipulation with Low-Cost Hardware. ArXiv abs/2304.13705 (2023). [45] Haoyu Zhen, Xiaowen Qiu, Peihao Chen, Jincheng Yang, Xin Yan, Yilun Du, Yining Hong, and Chuang Gan. 2024. 3D-VLA: A 3D Vision-Language-Action Generative World Model. ArXiv abs/2403.09631 (2024). [46] Junjie Zhu, Huayu Liu, Jin Wang, Bangrong Wen, Kaixiang Huang, Xiao-Fei Li, Haiyun Zhan, and Guodong Lu. 2025. Bridging VLM and KMP: Enabling Fine-grained robotic manipulation via Semantic Keypoints Representation. ArXiv abs/2503.02748 (2025).
QDepth-VLA: Quantized Depth Prediction as Auxiliary Supervision for Vision-Language-Action Models

Yixuan Li*
*Corresponding author.

ABSTRACT
Spatial understanding is essential for Vision-Language-Action (VLA) models to accomplish fine-grained manipulation tasks. However, existing approaches often lack the ability to understand and reason over the essential 3D structures necessary for precise control. To address this limitation, we propose QDepth-VLA, a general framework that augments VLA models with an auxiliary depth prediction task. A dedicated depth expert is designed to predict quantized latent tokens of depth maps obtained from a VQ-VAE encoder, enabling the model to learn depth-aware representations that capture critical geometric cues. Experimental results on simulation benchmarks and real-world tasks demonstrate that QDepth-VLA yields strong spatial reasoning and competitive performance on manipulation tasks.

KEYWORDS
Vision-Language-Action models, Quantized depth prediction, Spatial reasoning, Robotic manipulation

1 INTRODUCTION
Large vision-language-action (VLA) models [4, 8, 9, 16] have recently emerged as a powerful paradigm for robotic learning. By grounding pre-trained vision-language models (VLMs) [2, 31, 36] with action-generation capabilities, robots acquire strong generalization across diverse instructions and visual contexts. However, when applied to long-horizon or fine-grained manipulation tasks, these models often exhibit substantial performance degradation [12, 30, 37, 46]. The primary reason lies in a persistent gap between semantic understanding and geometric reasoning [13, 14]. Without reliable 3D understanding, VLAs often misestimate object positions or gripper-object relations, leading to cascading errors during manipulation [30].
Therefore, several recent works have explored incorporating geometric information into VLA models to enable a deeper understanding of the 3D physical environment. These approaches can be broadly grouped into three paradigms: direct 3D feature injection, 2D-projected 3D feature integration, and auxiliary 3D information prediction. The first category injects encoded 3D representations, such as point clouds [18] or depth maps [3], into the vision-language backbone or the action head. This strategy typically requires an additional encoder to process 3D features, increasing model complexity and computational cost. While providing explicit geometric cues, it may disrupt the powerful 2D priors learned during large-scale VLM pretraining, leading to degraded visual-language reasoning and understanding. The second category projects 3D features into 2D representations and feeds them into the VLM [19]. Although this preserves pretrained 2D priors, it inevitably introduces information loss in the projection process, which can hinder fine-grained manipulation performance. Compared to these two paradigms, enhancing geometric understanding through auxiliary visual prediction tasks, such as future depth-map estimation [42], offers a more promising alternative. This approach not only preserves the strong 2D priors of pretrained VLMs, but also avoids the need for additional sensory inputs during inference, while encouraging the model to learn 3D-consistent spatial reasoning. However, existing works that employ depth-map-based visual prediction as auxiliary tasks [42] have not achieved consistent performance improvements, and in some cases even indicate that introducing depth prediction as an auxiliary loss can be detrimental to policy learning due to noisy supervision and weak geometric grounding.
The key challenges lie in three aspects. Firstly, the supervision quality of depth maps is often limited by insufficient spatial-temporal consistency across frames [10, 38], introducing substantial noise that weakens geometric grounding. Secondly, pixel-wise depth regression produces highly redundant learning signals, forcing the model to reconstruct every pixel rather than focusing on salient structural cues essential for manipulation. Thirdly, using a vision-language backbone to predict depth maps may interfere with its pre-trained semantic alignment, potentially degrading multimodal reasoning performance.
To address these challenges, we propose QDepth-VLA, which augments large VLAs by introducing quantized depth prediction as an auxiliary supervision signal. Instead of regressing pixel-wise depth values, QDepth-VLA learns discrete depth representations through vector quantization, capturing salient structural information in a compact and optimization-friendly manner. An independent depth expert is also introduced to predict these quantized depth tokens, enabling the model to leverage geometric cues without interfering with the vision-language backbone's pretrained semantic alignment.
Our main contributions are summarized as follows:
(1) We introduce QDepth-VLA, a novel VLA model enhanced with quantized depth information. By integrating a depth prediction task, it internalizes geometric understanding, enabling more accurate reasoning about object spatial relationships.
(2) To facilitate more robust depth learning, we design a specialized Depth Expert that predicts quantized depth tokens rather than raw pixel-level depth maps. This formulation effectively mitigates the impact of depth noise and provides a more compact, optimization-friendly supervision signal for geometry-aware policy learning.
(3) Comprehensive experiments on both the Simpler [20] and LIBERO [24] benchmarks demonstrate that QDepth-VLA substantially enhances policy performance, outperforming open π0 [29] by 6.1% and 7.7% in average success rate, respectively. Moreover, QDepth-VLA achieves a 10.0% improvement in real-world robotic manipulation, validating its effectiveness and generalizability.

2 RELATED WORKS
2.1 3D-Enhanced VLA
3D spatial information has been widely explored to overcome the limitations of purely 2D-based models. Early efforts typically enhanced spatial perception by either lifting 2D inputs into 3D [13-15] or directly fusing 2D visual features with 3D point clouds [22, 28, 39]. While these approaches demonstrate that incorporating 3D signals can significantly improve spatial perception and action precision, directly fusing 3D and 2D representations or relying solely on 3D features can disrupt the visual-language alignment established in large-scale VLM pretraining. To mitigate this, two alternative directions have been proposed: (1) Projecting 3D features into 2D space, as in BridgeVLA [19], which renders 3D inputs into multi-view 2D images for compatibility with VLMs. (2) Encoding geometric information with independent 3D encoders for integration into the action head. This paradigm is employed by PointVLA [18] and GeoVLA [32], where specialized point cloud encoders supply 3D embeddings to modality-specific experts. Despite these advances, point cloud reconstruction may lose fine-grained object details, and the modality gap between 2D RGB pretraining and 3D geometry remains a persistent challenge.
By contrast, depth maps exhibit a much smaller gap with RGB images and thus offer a more natural bridge between 2D and 3D. Recent depth-based approaches have demonstrated this advantage. 3D-CAVLA [3] integrates Region of Interest (RoI) pooling with depth embeddings projected into VLM token space, achieving strong multi-view performance, while 4D-VLA [41] augments visual inputs with 3D coordinate embeddings to support both spatial alignment and temporal reasoning. Motivated by these insights, we adopt depth maps as the 3D augmentation source. Crucially, instead of directly fusing them with RGB features, which risks interfering with pre-trained VLM semantics, we reformulate depth as an auxiliary prediction task. This design enables QDepth-VLA to move beyond passive depth perception toward depth understanding, a capability we elaborate on in the next section.

2.2 Auxiliary Visual Reasoning Tasks for VLA
While depth maps offer a natural bridge between 2D and 3D for enhancing spatial grounding, another promising direction is to strengthen the reasoning capacity of VLAs through auxiliary visual prediction tasks. Instead of passively mapping inputs to actions, policies can be trained to output intermediate signals that make future-oriented reasoning explicit, thereby providing richer supervision during training and improving long-horizon planning at inference. A series of works focus on predicting future sub-goals, such as generating sub-goal images or short rollouts that visualize task progress. This strategy, as exemplified by CoT-VLA [43], enhances temporal reasoning by conditioning action generation on both current and predicted states, but incurs high computational cost due to the difficulty of synthesizing realistic RGB predictions. Other works [9, 16] introduce object-centric signals, such as bounding boxes or spatial relations, which provide structured knowledge of entities and their interactions. More recently, latent future embeddings have been explored, where discrete action tokens predicted in a compressed latent space encode upcoming intentions. AgiBot World Colosseo [1] and UniVLA [7] exemplify this paradigm, showing scalability through large-scale human video pretraining, yet such latent predictions often lack explicit 3D grounding and struggle to capture fine-grained geometry. Finally, some approaches turn to pixel-level 3D supervision, predicting dense depth or semantic maps to reinforce geometric awareness, as in 3D-VLA [45] and DreamVLA [42]. While sometimes effective for strengthening spatial reasoning, these signals are difficult to optimize directly and may overemphasize redundant low-level cues rather than the most relevant spatial structures. Different from previous works, our approach unifies 3D information enhancement and visual reasoning by introducing depth codebook prediction, an auxiliary task that brings 3D cues into reasoning in a compact and semantically meaningful way, while remaining naturally aligned with language-conditioned action policies.

3 METHODOLOGY
3.1 Depth Annotation
Since existing VLA datasets such as the OXE dataset [27] lack sufficient 3D annotations, we first generate monocular depth estimates for training. To ensure high-quality and spatio-temporally consistent depth sequences, we employ Video-Depth-Anything (ViDA) [10], the current state-of-the-art monocular video depth estimation framework built upon a ViT-Large backbone, to acquire depth maps.
Specifically, ViDA is applied to the main-view RGB frames from a subset of the OXE [27] and LIBERO [24] datasets to obtain temporally aligned relative depth annotations, providing reliable geometric supervision for depth tokenization and subsequent model training.

3.2 VQ-VAE Reconstruction
To represent depth compactly, we pretrain a Vector-Quantized Variational Autoencoder (VQ-VAE) [34]. Given a depth frame x, the encoder f_θ(·) produces a latent z_e = f_θ(x), which is quantized to the nearest code vector in a codebook C = {c_1, ..., c_K}:

    z_q = c_{j*},   j* = arg min_j ||z_e - c_j||_2^2.    (1)

We use K = 256 codebook entries of dimension d = 160, and train the VQ-VAE [34] with the standard objective:

    L_vq = l_rec(x, g_φ(z_q)) + ||sg[z_e] - c_{j*}||_2^2 + β ||z_e - sg[c_{j*}]||_2^2,    (2)

where the three terms are the reconstruction, codebook-update, and commitment losses, sg[·] denotes stop-gradient, and β = 0.25. In practice, we experiment with latent grid resolutions of 16×16 and 32×32. We find that the smaller 16×16 configuration already achieves accurate depth reconstruction while remaining computationally efficient. The VQ-VAE [34] is pretrained independently on each dataset using AdamW [25] with a learning rate of 1 × 10^-5 to ensure stable convergence and reconstruction quality. The resulting pretrained model produces discretized depth code indices, which serve as supervisory targets for the depth expert in QDepth-VLA.
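To make the tokenization step concrete, the following is a minimal PyTorch-style sketch of the nearest-code quantization and training objective in Eqs. (1)-(2). Tensor shapes, function names, and the choice of an L1 reconstruction term for l_rec are illustrative assumptions rather than details taken from the paper; only K = 256, d = 160, and β = 0.25 are specified above.

```python
# Sketch of Eqs. (1)-(2). Shapes and the L1 choice for l_rec are assumptions;
# the paper fixes only K = 256, d = 160, and beta = 0.25.
import torch
import torch.nn.functional as F

def quantize(z_e: torch.Tensor, codebook: torch.Tensor):
    """z_e: (B, N, d) encoder latents; codebook: (K, d) code vectors."""
    B = z_e.size(0)
    # Squared L2 distance from every latent to every code vector: (B, N, K)
    dist = torch.cdist(z_e, codebook.unsqueeze(0).expand(B, -1, -1)) ** 2
    idx = dist.argmin(dim=-1)           # j* = argmin_j ||z_e - c_j||^2   (Eq. 1)
    z_q = codebook[idx]                 # (B, N, d) quantized latents
    return z_q, idx

def vq_loss(x, x_rec, z_e, z_q, beta: float = 0.25):
    """Three-term VQ-VAE objective of Eq. (2); detach() plays the role of sg[.]."""
    rec = F.l1_loss(x_rec, x)                         # l_rec(x, g_phi(z_q))
    codebook_term = F.mse_loss(z_q, z_e.detach())     # ||sg[z_e] - c_j*||^2
    commitment = F.mse_loss(z_e, z_q.detach())        # beta * ||z_e - sg[c_j*]||^2
    return rec + codebook_term + beta * commitment
```

In practice the decoder input is usually formed with the straight-through estimator, z_e + (z_q - z_e).detach(), so that gradients flow back to the encoder despite the discrete lookup.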
Figure 1: An overview of QDepth-VLA. (a) The overall architecture and training pipeline, where depth supervision is incorporated via a depth expert and latent prediction module. In co-training, the VQ-VAE [34] encoder and codebook are frozen, while PaliGemma-3B [2], the action expert, depth expert, SigLIP [40], and tokenizer are trainable. (b) The proposed hybrid attention mask, which integrates depth and visual tokens to enhance spatial reasoning and manipulation performance.

3.3 QDepth-VLA Architecture
QDepth-VLA adopts a unified and modular architecture built upon open π0 [29], extending its VLA pipeline with an additional depth supervision branch. As shown in Fig. 1(a), the model consists of three parameterized modules: a pretrained vision-language model (VLM), an action expert, and a newly introduced depth expert. These modules are coordinated through a mixture-of-experts (MoE) structure and a carefully designed hybrid attention mask, enabling QDepth-VLA to jointly reason about geometry and control without disrupting pretrained representations.

Table 1: Key configurations of the Action and Depth experts of QDepth-VLA.

    Component       Action Expert     Depth Expert
    Backbone        Transformer       Transformer
    Layers / Heads  18 / 8            18 / 8
    Hidden dim      1024              1024
    Interm. dim     4096              4096
    Inputs          Proprio + Action  RGB-Img tokens
    Outputs         Actions           Depth tokens

We choose PaliGemma-3B [2] as the VLM backbone, which integrates SigLIP-based [40] vision encoding with Gemma's [26] language modeling capability. Input instructions are first tokenized using Gemma's [26] tokenizer, while the main-view RGB image is processed by the SigLIP [40] image encoder to obtain 256 visual tokens. These image tokens are concatenated with 20 text prefix tokens and fed into the Gemma [26] decoder under full block attention to produce multimodal embeddings that capture both spatial and semantic cues. This pretrained VLM remains trainable during the training stage, allowing geometric adaptation to the manipulation environment.
The action expert is a transformer-based module responsible for translating multimodal embeddings and proprioceptive states into executable robot actions. It consists of stacked transformer layers with MLP-based encoders and decoders that integrate visual-language context from the VLM with proprioceptive features. This module, which is built upon the original open π0 [29] action head, functions as the core control head of QDepth-VLA.
To incorporate geometric reasoning, QDepth-VLA introduces a dedicated depth expert, architecturally aligned with the action expert (illustrated in Table 1). It takes the visual embeddings from the SigLIP encoder as input, before language fusion, to avoid semantic interference. These embeddings are projected through a lightweight MLP, processed by a transformer backbone, and then passed to a shallow CNN decoder that predicts 256 depth tokens. Each predicted token, corresponding to a latent vector, is then aligned with the quantized tokens produced by the pretrained VQ-VAE [34] encoder over its codebook. The pretrained VQ-VAE [34] decoder is subsequently used to reconstruct the spatial depth map from these latent tokens when required. This discrete formulation enables QDepth-VLA to capture compact, structured geometric representations while maintaining optimization stability.
As for the hybrid attention mechanism, existing designs typically employ a standard causal attention structure, as seen in DreamVLA [42] and CoT-VLA [43]. However, since depth modalities inherently contain noise, directly fusing them under causal attention may introduce undesirable interference, potentially degrading action generation quality [42]. To address this issue, we redesign the hybrid attention mechanism (Fig. 1(b)) to more effectively regulate cross-modal information flow among text, image, depth, proprioception, and action tokens. To be specific: (1) Text and image tokens attend only within their modality to preserve pretrained semantic grounding. (2) Depth tokens attend to both image and text tokens, contextualizing geometric features with visual semantics. (3) Action tokens attend to all preceding modalities, integrating fused perceptual and geometric cues for policy generation. This hierarchical attention design allows depth to enhance spatial understanding while preventing over-interference with the pretrained VLM and keeping computation efficient.
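The routing rules above translate directly into a block attention mask. The sketch below assumes a flat token order [text | image | depth | proprio | action]; the treatment of proprioceptive tokens (attending text and image but not depth) is our reading of the ablation in Sec. 4.3.4 rather than an explicit statement in the text, and all sizes are hypothetical.

```python
# Block-structured hybrid attention mask, following rules (1)-(3) above.
# Proprio routing is an assumption inferred from the Sec. 4.3.4 ablation.
import torch

def hybrid_attention_mask(n_txt, n_img, n_depth, n_prop, n_act):
    sizes = [n_txt, n_img, n_depth, n_prop, n_act]
    names = ["txt", "img", "depth", "prop", "act"]
    # allowed[q] = set of key modalities that queries of modality q may attend
    allowed = {
        "txt":   {"txt"},                                 # rule (1): intra-modality
        "img":   {"img"},                                 # rule (1): intra-modality
        "depth": {"txt", "img", "depth"},                 # rule (2): reads vision + language
        "prop":  {"txt", "img", "prop"},                  # assumed: no proprio -> depth link
        "act":   {"txt", "img", "depth", "prop", "act"},  # rule (3): reads everything
    }
    total = sum(sizes)
    mask = torch.zeros(total, total, dtype=torch.bool)    # True = may attend
    starts = torch.tensor([0] + sizes).cumsum(0).tolist()
    for qi, qname in enumerate(names):
        for ki, kname in enumerate(names):
            if kname in allowed[qname]:
                mask[starts[qi]:starts[qi + 1], starts[ki]:starts[ki + 1]] = True
    return mask
```

For example, hybrid_attention_mask(20, 256, 256, 1, 4) would produce the mask for the token budget described in this section, to be passed to the transformer's attention layers.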
3.4 Co-Training Procedures
3.4.1 Quantized Depth Supervision. During joint training, the depth expert predicts latent depth tokens, and these tokens are then used to compute logits with the encoded image features over the VQ-VAE [34] codebook:

    l_{i,k} = -(1/τ) ||x_i - c_k||_2^2,    (3)

where i indexes latent spatial positions, k indexes codebook entries (K = 256), and τ is a temperature factor. A cross-entropy loss is applied using ground-truth code indices z*_i obtained from the pretrained VQ-VAE [34]:

    L_depth = -(1/(B·N)) Σ_{i=1}^{B·N} log [ exp(l_{i,z*_i}) / Σ_{k=1}^{K} exp(l_{i,k}) ],    (4)

where B is the batch size and N the number of latent tokens per frame. This loss encourages the visual encoder to learn geometry-aware embeddings aligned with the quantized depth representation.
3.4.2 Action Modeling. Based on the underlying VLA backbone, the action prediction objective is as follows. The Conditional Flow Matching (CFM) action loss [23] is identical to that of π0 [4]:

    L_CFM(θ) = E_{p(A_t|O_t), q(Â^λ_t|A_t)} || f_θ(Â^λ_t, O_t) - g(Â^λ_t|A_t) ||_2^2,    (5)

where the action chunk A_t = [a_t, a_{t+1}, ..., a_{t+H-1}] is conditioned on the observation O_t = [I_t, l_t, s_t], which includes the RGB image, language instruction, and end-effector state. Notably, Â^λ_t denotes noisy action samples generated from a diffusion-like process:

    Â^λ_t = λ A_t + (1 - λ) η,   η ∼ N(0, I),    (6)

and the corresponding noise distribution and flow target are defined as:

    q(Â^λ_t | A_t) = N(λ A_t, (1 - λ) I),   g(Â^λ_t | A_t) = η - A_t.    (7)

This formulation enables the model to approximate a continuous-time flow field that transports noisy actions toward their clean ground-truth counterparts.
3.4.3 Co-Training Objectives. The total loss combines the action and depth objectives:

    L_total = L_action + λ_t · L_depth,    (8)

where λ_t = λ_0 · γ^t decays exponentially over training steps, with λ_0 = 0.01. This co-training schedule enables the model to first establish stable geometric alignment before gradually focusing on action refinement.
3.4.4 Optimization Setup. QDepth-VLA is trained using the AdamW [25] optimizer with decoupled weight decay. We set the learning rate for both the action expert and the VLM backbone to 5 × 10^-5. A cosine learning rate scheduler with 200 warm-up steps and a cycle length of 10^7 steps is applied, ensuring stable optimization throughout training.
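Putting Eqs. (3)-(8) together, a compact sketch of one co-training step follows. Module interfaces, tensor shapes, the uniform sampling of λ, and the values of τ and γ are illustrative assumptions; the paper fixes only λ_0 = 0.01 and K = 256.

```python
# Sketch of the co-training losses in Eqs. (3)-(8). Interfaces and the tau,
# gamma, and lambda-sampling choices are assumptions for illustration.
import torch
import torch.nn.functional as F

def depth_loss(pred_latents, codebook, target_idx, tau=1.0):
    """pred_latents: (B, N, d); codebook: (K, d); target_idx: (B, N) long."""
    B = pred_latents.size(0)
    # Eq. (3): logits l_{i,k} = -||x_i - c_k||^2 / tau
    dist = torch.cdist(pred_latents, codebook.unsqueeze(0).expand(B, -1, -1)) ** 2
    logits = -dist / tau                                   # (B, N, K)
    # Eq. (4): cross-entropy against ground-truth VQ-VAE code indices
    return F.cross_entropy(logits.flatten(0, 1), target_idx.flatten())

def cfm_action_loss(model, actions, obs_emb):
    """Eqs. (5)-(7): flow matching over a noisy action chunk (B, H, dim)."""
    lam = torch.rand(actions.size(0), 1, 1, device=actions.device)  # lambda ~ U[0,1), assumed
    noise = torch.randn_like(actions)                      # eta ~ N(0, I)
    noisy = lam * actions + (1.0 - lam) * noise            # Eq. (6)
    target = noise - actions                               # Eq. (7): g = eta - A_t
    pred = model(noisy, obs_emb)                           # f_theta(A^lambda_t, O_t)
    return F.mse_loss(pred, target)

def total_loss(l_action, l_depth, step, lambda0=0.01, gamma=0.9999):
    # Eq. (8) with exponentially decaying depth weight lambda_t = lambda0 * gamma^step
    return l_action + (lambda0 * gamma ** step) * l_depth
```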
4 EXPERIMENTS
In this section, we conduct comprehensive experiments across both simulation and real-world settings to evaluate the effectiveness of our approach. Specifically, we aim to address the following three questions: (1) Can depth supervision effectively enhance VLA performance in long-horizon and pick-and-place tasks, particularly those requiring fine-grained manipulation? (2) Is depth supervision more effective than pixel-level depth prediction? (3) Does the proposed hybrid attention mask contribute to performance gains?

4.1 Simulation Experiments
4.1.1 Training Recipe. The QDepth-VLA based on open π0 [29] is initially pre-trained for 9 epochs on the Fractal dataset [6], followed by 20 epochs of pre-training on the LIBERO-90 dataset [24]. After pre-training, the model is further fine-tuned on the four LIBERO subsets (Spatial, Object, Goal, and Long) [24] for around 50 epochs. For the Simpler benchmark [20], the model is instead trained from scratch, first using the Bridge dataset [35] for 13 epochs, and then the Fractal dataset [6] for an additional 9 epochs. All experiments are conducted using the Fully Sharded Data Parallel (FSDP) training strategy on 8 × NVIDIA H20 GPUs. A per-GPU batch size of 32 is used, yielding a global batch size of 1024 with gradient accumulation, and the action chunk size is fixed at 4.
4.1.2 Evaluation Setup. For evaluation on LIBERO [24], we adopt its four benchmark suites (Spatial, Object, Goal, and Long). Following the preprocessing method in [17], image resolution is first normalized to 256 × 256 and then resized to 224 × 224 as model input. We also apply a 180-degree rotation to all images and use only the main-view RGB observations. Each task is evaluated over 50 rollouts, with the average success rate reported. On the Simpler benchmark [20], the evaluation covers two distinct settings: (1) models trained on the Bridge dataset [35] are tested on tasks involving the WidowX250 robot, and (2) models trained on the Fractal dataset [6] are tested on tasks for the Google Robot. We adopt the visual matching configuration from Simpler [20], evaluating each task across multiple initial positions with 10 rollouts per configuration. Consequently, the total number of evaluations per task ranges from 240 to 2400.
4.1.3 Main Results.
LIBERO Benchmark. QDepth-VLA adopts a single-view setting, where the visual input consists of only one RGB image. This contrasts with multi-view models, which take multiple images as input, including temporally adjacent frames from historical observations.

Table 2: Results of QDepth-VLA on the LIBERO benchmark (success rate, %).

    Single-view VLA
      Category               Method                    Spatial  Object  Goal   Long   Avg
      General VLA            OpenVLA finetuned [17]    84.7     88.4    79.2   53.7   76.5
                             CoT-VLA-7B [43]           87.5     91.6    87.6   69.0   81.1
                             Open π0 [29]              77.2     84.0    83.6   66.0   77.7
      3D-cloud-enhanced VLA  SpatialVLA [28]           88.2     89.9    78.6   55.5   78.1
      Depth-enhanced VLA     3D-CAVLA [3]              86.1     94.7    82.9   66.8   82.6
                             QDepth-VLA (ours)         86.0     88.8    94.0   72.6   85.4
    Multi-view VLA
      General VLA            Diffusion Policy [11]     78.3     92.5    68.3   50.5   72.4
                             Octo finetuned [33]       78.9     85.7    84.6   51.1   75.1
                             π0-FAST finetuned [4]     96.4     96.8    88.6   60.2   85.5
                             π0 finetuned [4]          96.8     98.8    95.8   85.2   94.2
                             UniVLA [7]                96.5     96.8    95.6   92.0   95.2
      3D-cloud-enhanced VLA  GeoVLA [32]               98.4     99.0    96.6   96.6   97.7
      Depth-enhanced VLA     3D-CAVLA [3]              98.2     99.8    98.2   96.1   98.1
                             4D-VLA [41]               88.9     95.2    90.9   79.1   88.6
                             DreamVLA [42]             97.5     94.0    89.5   89.5   92.6
                             QDepth-VLA (ours)         97.6     96.6    95.2   90.0   94.9

As shown in Table 2, QDepth-VLA consistently outperforms single-view baselines across the LIBERO suites. It achieves stronger performance on both fine-grained and long-horizon tasks, reaching 94.0% on the Goal tasks and 72.6% on the Long tasks, surpassing the single-view baseline CoT-VLA [43] by 6.4% and 3.6%, respectively. Compared with open π0 [29], QDepth-VLA shows consistent improvements across all four subsets (Spatial, Object, Goal, and Long), with the largest gain of 8.8% observed on the Spatial tasks. While QDepth-VLA operates with only a single RGB observation, its average success rate remains competitive with multi-view VLAs. Specifically, QDepth-VLA achieves a mean success rate only 0.1% lower than π0-FAST [4], while exceeding 4D-VLA [41] by 3.1% and DreamVLA [42] by 4.5% on the Goal tasks. Moreover, it surpasses π0-FAST [4] by 12.4% on the more challenging Long tasks. Although leading multi-view models such as 3D-CAVLA [3], GeoVLA [32] and UniVLA [7] achieve higher overall results, our experimental results demonstrate that depth-augmented supervision effectively compensates for the lack of multi-view observations and brings single-view VLAs closer to multi-view performance levels.
By extension, we further implement a multi-view variant of QDepth-VLA while maintaining the same setting that predicts latent depth tokens corresponding only to the current main-view image. As shown in Table 2, the multi-view QDepth-VLA consistently outperforms single-view baselines. It achieves an average success rate of 94.9%, surpassing DreamVLA by 0.1% and π0 by 0.8% on the Spatial tasks, and reaches 90.0% success on the Long tasks. While 3D-CAVLA [3] and GeoVLA [32] achieve higher average success rates, they require explicit point cloud or depth map inputs during inference, modalities that QDepth-VLA does not rely on. These results reveal that QDepth-VLA generalizes effectively to multi-view configurations, further enhancing geometric perception and long-horizon reasoning.
Simpler Benchmark. Tables 3 and 4 present the experimental results of QDepth-VLA in the Simpler [20] simulation environment.
As shown in Table 3, QDepth-VLA achieves a success rate of 98.3% on the pick coke can task, surpassing open π0 [29] by 0.8% and SpatialVLA [28] by 12.3%.

Table 3: Results of QDepth-VLA on the Simpler benchmark (Google Robot tasks).

    Method                     Pick Coke Can  Move Near  Open/Close Drawer  Open Top Drawer and Put Apple In  Avg
    RT-2-X [5]                 78.7           77.9       25.0               -                                 60.7
    Octo-Base [33]             17.0           4.2        22.7               -                                 16.8
    OpenVLA [17]               16.3           46.2       35.6               -                                 27.7
    RoboVLM finetuned [21]     77.3           61.7       43.5               -                                 63.4
    SpatialVLA finetuned [28]  86.0           77.9       57.4               -                                 75.1
    Open π0 [29]               97.5           87.1       68.0               32.9                              71.4
    QDepth-VLA (ours)          98.3           81.4       58.0               62.6                              75.1

Table 4: Results of QDepth-VLA on the Simpler benchmark (WidowX250 Robot tasks).

    Method                     Put Carrot on Plate  Put Eggplant in Basket  Put Spoon on Towel  Stack Block  Avg
    Octo-Base [33]             8.3                  43.1                    12.5                0.0          16.0
    OpenVLA [17]               0.0                  4.1                     0.0                 0.0          1.0
    RoboVLM finetuned [21]     25.0                 58.3                    29.2                12.5         31.3
    SpatialVLA finetuned [28]  25.0                 100.0                   16.7                29.2         42.7
    Open π0 [29]               61.3                 89.6                    73.7                15.8         60.0
    QDepth-VLA (ours)          57.5                 95.0                    82.0                39.6         68.5

On the more complex open top drawer and put apple in task, QDepth-VLA also attains 62.6%, outperforming open π0 [29] by a large margin of 29.7%. This substantial improvement on long-horizon tasks can be attributed to enhanced spatial perception and object localization provided by depth-guided supervision, which improves the model's ability to accurately identify and grasp target objects. As a result, the success probability of intermediate manipulation steps, such as grasping or placing, is increased, leading to higher overall task completion rates.
In Table 4, QDepth-VLA consistently achieves high success rates across various manipulation tasks. Notably, on the stack block task, which demands precise spatial reasoning and fine-grained control, QDepth-VLA reaches a success rate of 39.6%, surpassing SpatialVLA [28] by 10.4%. Furthermore, on the Put Eggplant in Basket and Put Spoon on Towel tasks, QDepth-VLA achieves 95.0% and 82.0% success rates, respectively, outperforming open π0 [29] by 5.4% and 8.3%. These improvements highlight the effectiveness of quantized depth supervision in enhancing spatial reasoning and manipulation precision, particularly for tasks involving object placement and coordination in cluttered 3D environments.
Depth Reconstruction Visualization. To further validate the effectiveness of our depth supervision, we visualize depth reconstructions by passing the quantized predicted features through a trained VQ-VAE [34] decoder. As shown in Fig. 3, the reconstructions preserve structural details and align well with object boundaries, demonstrating that the learned depth representations capture spatial geometry in a meaningful way.

4.2 Real-Robot Experiments
4.2.1 Environment Setup. In the real-world experiments, we employ a 6-DoF Piper robotic arm, with a RealSense D455 camera positioned directly in front of the arm. The training hyperparameters are kept consistent with those used in the simulation experiments, except that the action chunk size is set to 16 to achieve faster execution speed.

Figure 2: An overview of the main camera view in our real-world task. The environment presents significant challenges for policy learning, including complex lighting conditions, low visual contrast between the gripper and tabletop, and various environmental disturbances that can obscure critical geometric details.
We select four tasks for evaluation:
• Task 1: pick the banana into the yellow basket
• Task 2: put the chili into the bowl
• Task 3: put the green block into the bowl
• Task 4: stack the green block on top of the yellow block

Figure 3: Depth reconstruction results from QDepth-VLA. Features predicted by the depth expert are decoded using a trained VQ-VAE decoder. The reconstructions demonstrate QDepth-VLA's ability to learn critical depth map features, including object and gripper boundaries, underscoring the success of its depth supervision.

Table 5: Real-World Evaluation on the Piper Arm (success rate, %).

    Method             Task 1  Task 2  Task 3  Task 4  Avg
    ACT [44]           20.0    0.0     0.0     0.0     5.0
    Open π0 [29]       50.0    40.0    40.0    0.0     32.5
    QDepth-VLA (ours)  70.0    40.0    50.0    10.0    42.5

For each task, we collect 50 trajectories and fine-tune the model separately on the corresponding dataset. During testing, each task is evaluated over 10 trials. As shown in Fig. 2, all experiments are conducted on a dark-colored wooden desk, where the surface color is visually similar to the robot gripper. This setup increases the difficulty of the tasks by introducing additional perceptual ambiguity.
4.2.2 Main Results. We evaluate QDepth-VLA on a series of pick-and-place tasks with varying difficulty to assess its spatial perception and localization ability. We also compare our method against representative baselines such as ACT [44]. As shown in Table 5, QDepth-VLA consistently outperforms ACT [44] across all tasks: while ACT fails to perform reliably in these complex real-world environments, QDepth-VLA achieves robust success rates. Compared to our baseline open π0 [29], QDepth-VLA achieves a 20.0% improvement on the simple task of picking a banana, and further achieves gains of 10.0% on both Task 3 and Task 4, demonstrating stronger generalization to challenging scenarios and tasks.

4.3 Ablation Study
To evaluate the contribution of each proposed component, we perform a series of controlled ablation experiments in the Simpler [20] simulation environment. Four ablated variants are considered: (1) removing the depth supervision signal by setting its loss weight to zero (w/o Depth Loss); (2) removing the dedicated depth prediction branch (w/o Depth Expert); (3) replacing latent depth token prediction with pixel-wise regression (w/o Pixel Prediction); and (4) substituting the proposed hybrid attention mask with a standard version that enforces proprioception-to-depth attention (w/o Hybrid Attn). Quantitative results are reported in Table 6.
4.3.1 w/o Depth Loss. In this variant, the depth loss weight is set to zero while preserving the full model capacity. This configuration isolates the contribution of the depth supervision signal without altering the overall parameter scale. As shown in Table 6, performance decreases from 68.5% to 65.6% on average. The degradation is most pronounced in the Carrot (-9.6%) and Eggplant (-12.5%) tasks, both requiring coarse spatial grounding. Conversely, slight improvements are observed in the Spoon (+7.2%) and Block (+3.3%) tasks, suggesting that the auxiliary depth objective can occasionally compete with the action policy optimization. Overall, these results confirm that depth supervision provides a meaningful geometric prior that facilitates spatial reasoning beyond the primary control objective.
4.3.2 w/o Depth Expert. Eliminating the dedicated depth branch results in the largest overall performance degradation (-8.5%), as presented in Table 6.
The most significant drop occurs in the Stack Block task (-23.8%), where precise 3D alignment is critical. Substantial declines are also observed in the Eggplant (-5.4%) and Spoon (-8.3%) tasks, indicating that fine-grained spatial reasoning relies heavily on an explicit and specialized depth pathway.
4.3.3 w/o Pixel Prediction. As shown in Table 6, replacing latent depth prediction with direct pixel-wise regression lowers average performance to 64.6% (-3.9%). The largest impact is on the Eggplant (-14.6%) task, whereas other tasks are only mildly affected. This validates our design choice: quantized latent tokens encourage abstraction of geometric cues, while pixel prediction entangles the model with redundant local detail that is less relevant for manipulation.
4.3.4 w/o Hybrid Attention. In this variant, the proposed hybrid attention mask is replaced with a DreamVLA-style [42] configuration, removing dynamic and semantic modalities. This setting tests whether relative depth maps can enhance proprioceptive state perception and thereby improve action generation quality. As expected, the performance declines by 5.5% on average, with the most substantial drop on the Carrot task (-15.8%). This result indicates that enforcing proprioception-to-depth attention introduces noise rather than useful guidance, as relative depth lacks the absolute positional encoding necessary for stable control.

Table 6: Ablation study of QDepth-VLA on Simpler tasks. The table reports success rates (%) with module ablations. A checkmark (✓) indicates the module is present, while a cross (✗) indicates it is removed, changed, or set to zero manually.

    Model                 Depth Loss  Depth Expert  Latent Pred  Hybrid Attn  Carrot  Eggplant  Spoon  Block  Avg
    QDepth-VLA (full)     ✓           ✓             ✓            ✓            57.5    95.0      82.0   39.6   68.5
    w/o Depth Loss        ✗           ✓             ✓            ✓            47.9    82.5      89.2   42.9   65.6
    w/o Depth Expert      ✓           ✗             ✓            ✓            61.3    89.6      73.7   15.8   60.0
    w/o Pixel Prediction  ✓           ✓             ✗            ✓            54.6    80.4      82.0   41.3   64.6
    w/o Hybrid Attn [42]  ✓           ✓             ✓            ✗            41.7    89.6      78.8   42.0   63.0

4.3.5 Cross-task synthesis. Across all ablations, two consistent trends emerge. First, stacking tasks exhibit exceptional sensitivity to the removal of the depth expert, confirming that explicit depth modeling is crucial for accurate vertical alignment and object interaction. Second, placement-oriented tasks such as the Carrot and Eggplant tasks benefit most from depth supervision and hybrid attention routing, which jointly enhance mid-level spatial localization and task-level consistency.
Overall, we find the following answers to the three research questions brought up at the beginning of this section:
• Depth supervision effectively enhances VLA performance, especially on long-horizon and fine-grained pick-and-place tasks. In particular, tasks such as stacking and precise placement benefit significantly, indicating improved spatial reasoning.
• Compared to pixel-level regression, quantized depth supervision proves more effective, as it reduces redundancy and focuses learning on salient geometric structures, leading to more stable training and stronger downstream performance.
• The proposed hybrid attention mask consistently contributes to performance gains, particularly in placement tasks, by selectively routing depth cues into the policy network and improving cross-modal feature alignment.

5 CONCLUSIONS
In this paper, we introduced QDepth-VLA, a new vision-language-action model that incorporates depth supervision and hybrid attention to enhance spatial perception and long-horizon reasoning.
Through extensive experiments in both simulation (Simpler and LIBERO) and real-world manipulation tasks, we demonstrate that depth supervision significantly improves manipulation performance. In summary, our work demonstrates that predicting quantized depth tokens at the current timestep is an effective way to enhance policy learning. Extending this approach to predict future depth tokens for improved reasoning and exploring more efficient VAE-based depth representations for enhanced perception present two promising directions for future research.

REFERENCES
[1] AgiBot-World-Contributors, Qingwen Bu, Jisong Cai, Li Chen, Xiuqi Cui, Yan Ding, Siyuan Feng, Shenyuan Gao, Xindong He, Xu Huang, Shu Jiang, Yuxin Jiang, Cheng Jing, Hongyang Li, Jialun Li, Chiming Liu, Yi Liu, Yuxiang Lu, Jianlan Luo, Ping Luo, Yao Mu, Yuehan Niu, Yixuan Pan, Jiangmiao Pang, Yu Qiao, Guanghui Ren, Cheng-Xing Ruan, Jiaqi Shan, Yongjian Shen, Chengshi Shi, Mi Shi, Modi Shi, Chonghao Sima, Jia-Yi Song, Huijie Wang, Wenhao Wang, Dafeng Wei, Chengen Xie, Guo-Liang Xu, Junchi Yan, Cunbiao Yang, Lei Yang, Shu-Xiang Yang, Maoqing Yao, Jiansheng Zeng, Chi Zhang, Qingli Zhang, Bin Zhao, Chengyu Zhao, Jiaqi Zhao, and Jianchao Zhu. 2025. AgiBot World Colosseo: A Large-scale Manipulation Platform for Scalable and Intelligent Embodied Systems. ArXiv abs/2503.06669 (2025). https://api.semanticscholar.org/CorpusID:276902669
[2] Lucas Beyer, Andreas Steiner, André Susano Pinto, Alexander Kolesnikov, Xiao Wang, Daniel M. Salz, Maxim Neumann, Ibrahim M. Alabdulmohsin, Michael Tschannen, Emanuele Bugliarello, Thomas Unterthiner, Daniel Keysers, Skanda Koppula, Fangyu Liu, Adam Grycner, Alexey A. Gritsenko, Neil Houlsby, Manoj Kumar, Keran Rong, Julian Martin Eisenschlos, Rishabh Kabra, Matthias Bauer, Matko Bovsnjak, Xi Chen, Matthias Minderer, Paul Voigtlaender, Ioana Bica, Ivana Balazevic, Joan Puigcerver, Pinelopi Papalampidi, Olivier Henaff, Xi Xiong, Radu Soricut, Jeremiah Harmsen, and Xiao-Qi Zhai. 2024. PaliGemma: A versatile 3B VLM for transfer. ArXiv abs/2407.07726 (2024).
[3] V.S.K. Pandi V Bhat, Yu-Hsiang Lan, Prashanth Krishnamurthy, Ramesh Karri, and Farshad Khorrami. 2025. 3D CAVLA: Leveraging Depth and 3D Context to Generalize Vision Language Action Models for Unseen Tasks. ArXiv abs/2505.05800 (2025).
[4] Kevin Black, Noah Brown, Danny Driess, Adnan Esmail, Michael Equi, Chelsea Finn, Niccolo Fusai, Lachy Groom, Karol Hausman, Brian Ichter, Szymon Jakubczak, Tim Jones, Liyiming Ke, Sergey Levine, Adrian Li-Bell, Mohith Mothukuri, Suraj Nair, Karl Pertsch, Lucy Xiaoyang Shi, James Tanner, Quan Vuong, Anna Walling, Haohuan Wang, and Ury Zhilinsky. 2024. π0: A Vision-Language-Action Flow Model for General Robot Control. ArXiv abs/2410.24164 (2024).
[5] Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Krzysztof Choromanski, Tianli Ding, et al. 2023. RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control. ArXiv abs/2307.15818 (2023).
[6] Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Joseph Dabis, Chelsea Finn, Keerthana Gopalakrishnan, et al. 2022. RT-1: Robotics Transformer for Real-World Control at Scale. ArXiv abs/2212.06817 (2022).
[7] Qingwen Bu, Yanting Yang, Jisong Cai, Shenyuan Gao, Guanghui Ren, Maoqing Yao, Ping Luo, and Hongyang Li. 2025. UniVLA: Learning to Act Anywhere with Task-centric Latent Actions. ArXiv abs/2505.06111 (2025). https://api.semanticscholar.org/CorpusID:278481174
[8] Chi-Lam Cheang, Guangzeng Chen, Ya Jing, Tao Kong, Hang Li, Yifeng Li, Yuxiao Liu, Hongtao Wu, Jiafeng Xu, Yichu Yang, Hanbo Zhang, and Minzhao Zhu. 2024. GR-2: A Generative Video-Language-Action Model with Web-Scale Knowledge for Robot Manipulation. ArXiv abs/2410.06158 (2024).
[9] Chi-Lam Cheang, Sijin Chen, Zhongren Cui, Yingdong Hu, Liqun Huang, Tao Kong, Hang Li, Yifeng Li, Yuxiao Liu, Xiao Ma, Hao Niu, Wenxuan Ou, Wanli Peng, Zeyu Ren, Haixin Shi, Jiawen Tian, Hongtao Wu, Xin Xiao, Yuyang Xiao, Jiafeng Xu, and Yichu Yang. 2025. GR-3 Technical Report. ArXiv abs/2507.15493 (2025).
[10] Sili Chen, Hengkai Guo, Shengnan Zhu, Feihu Zhang, Zilong Huang, Jiashi Feng, and Bingyi Kang. 2025. Video Depth Anything: Consistent Depth Estimation for Super-Long Videos. 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2025), 22831-22840.
[11] Cheng Chi, Siyuan Feng, Yilun Du, Zhenjia Xu, Eric Cousineau, Benjamin Burchfiel, and Shuran Song. 2023. Diffusion Policy: Visuomotor Policy Learning via Action Diffusion. ArXiv abs/2303.04137 (2023).
[12] Yiguo Fan, Pengxiang Ding, Shuanghao Bai, Xinyang Tong, Yuyang Zhu, Hongchao Lu, Fengqi Dai, Wei Zhao, Yang Liu, Siteng Huang, Zhaoxin Fan, Badong Chen, and Donglin Wang. 2025. Long-VLA: Unleashing Long-Horizon Capability of Vision Language Action Model for Robot Manipulation. ArXiv abs/2508.19958 (2025). https://api.semanticscholar.org/CorpusID:280919002
[13] Théophile Gervet, Zhou Xian, Nikolaos Gkanatsios, and Katerina Fragkiadaki. 2023. Act3D: 3D Feature Field Transformers for Multi-Task Robotic Manipulation. In Conference on Robot Learning.
[14] Ankit Goyal, Valts Blukis, Jie Xu, Yijie Guo, Yu-Wei Chao, and Dieter Fox. 2024. RVT-2: Learning Precise Manipulation from Few Demonstrations. ArXiv abs/2406.08545 (2024).
[15] Ankit Goyal, Jie Xu, Yijie Guo, Valts Blukis, Yu-Wei Chao, and Dieter Fox. 2023. RVT: Robotic View Transformer for 3D Object Manipulation. ArXiv abs/2306.14896 (2023). https://api.semanticscholar.org/CorpusID:259262273
[16] Physical Intelligence, Kevin Black, Noah Brown, James Darpinian, Karan Dhabalia, Danny Driess, Adnan Esmail, Michael Equi, Chelsea Finn, Niccolo Fusai, Manuel Y. Galliker, Dibya Ghosh, Lachy Groom, Karol Hausman, Brian Ichter, Szymon Jakubczak, Tim Jones, Liyiming Ke, Devin LeBlanc, Sergey Levine, Adrian Li-Bell, Mohith Mothukuri, Suraj Nair, Karl Pertsch, Allen Z. Ren, Lucy Xiaoyang Shi, Laura Smith, Jost Tobias Springenberg, Kyle Stachowicz, James Tanner, Quan Vuong, Homer Walke, Anna Walling, Haohuan Wang, Lili Yu, and Ury Zhilinsky. 2025. π0.5: a Vision-Language-Action Model with Open-World Generalization. ArXiv abs/2504.16054 (2025).
[17] Moo Jin Kim, Karl Pertsch, Siddharth Karamcheti, Ted Xiao, Ashwin Balakrishna, Suraj Nair, Rafael Rafailov, Ethan Foster, Grace Lam, Pannag R. Sanketi, Quan Vuong, Thomas Kollar, Benjamin Burchfiel, Russ Tedrake, Dorsa Sadigh, Sergey Levine, Percy Liang, and Chelsea Finn. 2024. OpenVLA: An Open-Source Vision-Language-Action Model. ArXiv abs/2406.09246 (2024).
[18] Chengmeng Li, Junjie Wen, Yan Peng, Yaxin Peng, Feifei Feng, and Yichen Zhu. 2025. PointVLA: Injecting the 3D World into Vision-Language-Action Models. ArXiv abs/2503.07511 (2025).
[19] Peiyan Li, Yixiang Chen, Hongtao Wu, Xiao Ma, Xiangnan Wu, Yan Huang, Liang Wang, Tao Kong, and Tieniu Tan. 2025. BridgeVLA: Input-Output Alignment for Efficient 3D Manipulation Learning with Vision-Language Models. ArXiv abs/2506.07961 (2025).
[20] Xuanlin Li, Kyle Hsu, Jiayuan Gu, Karl Pertsch, Oier Mees, Homer Rich Walke, Chuyuan Fu, Ishikaa Lunawat, Isabel Sieh, Sean Kirmani, Sergey Levine, Jiajun Wu, Chelsea Finn, Hao Su, Quan Ho Vuong, and Ted Xiao. 2024. Evaluating Real-World Robot Manipulation Policies in Simulation. ArXiv abs/2405.05941 (2024).
[21] Xinghang Li, Peiyan Li, Minghuan Liu, Dong Wang, Jirong Liu, Bingyi Kang, Xiao Ma, Tao Kong, Hanbo Zhang, and Huaping Liu. 2024. Towards Generalist Robot Policies: What Matters in Building Vision-Language-Action Models. ArXiv abs/2412.14058 (2024).
[22] Tao Lin, Gen Li, Yilei Zhong, Yanwen Zou, and Bo Zhao. 2025. Evo-0: Vision-Language-Action Model with Implicit Spatial Understanding. ArXiv abs/2507.00416 (2025).
[23] Yaron Lipman, Ricky T. Q. Chen, Heli Ben-Hamu, Maximilian Nickel, and Matt Le. 2022. Flow Matching for Generative Modeling. ArXiv abs/2210.02747 (2022).
[24] Bo Liu, Yifeng Zhu, Chongkai Gao, Yihao Feng, Qian Liu, Yuke Zhu, and Peter Stone. 2023. LIBERO: Benchmarking Knowledge Transfer for Lifelong Robot Learning. ArXiv abs/2306.03310 (2023).
[25] Ilya Loshchilov and Frank Hutter. 2017. Decoupled Weight Decay Regularization. In International Conference on Learning Representations.
[26] Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, Shreya Pathak, and et al. 2024. Gemma: Open Models Based on Gemini Research and Technology. arXiv preprint abs/2403.08295 (2024). https://api.semanticscholar.org/CorpusID:268379206
[27] Abhishek Padalkar, Acorn Pooley, Ajinkya Jain, Alex Bewley, Alex Herzog, Alex Irpan, et al. 2023. Open X-Embodiment: Robotic Learning Datasets and RT-X Models. ArXiv abs/2310.08864 (2023).
[28] Delin Qu, Haoming Song, Qizhi Chen, Yuanqi Yao, Xinyi Ye, Yani Ding, Zhigang Wang, Jiayuan Gu, Bin Zhao, Dong Wang, and Xuelong Li. 2025. SpatialVLA: Exploring Spatial Representations for Visual-Language-Action Model. ArXiv abs/2501.15830 (2025).
[29] Allen Z. Ren. 2024. open-pi-zero: Re-implementation of the π0 vision-language-action (VLA) model. https://github.com/allenzren/open-pi-zero
[30] Wenxuan Song, Ziyang Zhou, Han Zhao, Jiayi Chen, Pengxiang Ding, Haodong Yan, Yuxin Huang, Feilong Tang, Donglin Wang, and Haoang Li. 2025. ReconVLA: Reconstructive Vision-Language-Action Model as Effective Robot Perceiver. ArXiv abs/2508.10333 (2025).
[31] Andreas Steiner, André Susano Pinto, Michael Tschannen, Daniel Keysers, Xiao Wang, Yonatan Bitton, Alexey Gritsenko, Matthias Minderer, Anthony Sherbondy, Shangbang Long, Siyang Qin, R. Reeve Ingle, Emanuele Bugliarello, Sahar Kazemzadeh, Thomas Mesnard, Ibrahim M. Alabdulmohsin, Lucas Beyer, and Xiao-Qi Zhai. 2024. PaliGemma 2: A Family of Versatile VLMs for Transfer. ArXiv abs/2412.03555 (2024).
[32] Lin Sun, Bin Xie, Yingfei Liu, Hao Shi, Tiancai Wang, and Jiale Cao. 2025. GeoVLA: Empowering 3D Representations in Vision-Language-Action Models. ArXiv abs/2508.09071 (2025).
[33] Octo Model Team, Dibya Ghosh, Homer Rich Walke, Karl Pertsch, Kevin Black, Oier Mees, Sudeep Dasari, Joey Hejna, Tobias Kreiman, Charles Xu, Jianlan Luo, You Liang Tan, Pannag R. Sanketi, Quan Vuong, Ted Xiao, Dorsa Sadigh, Chelsea Finn, and Sergey Levine. 2024. Octo: An Open-Source Generalist Robot Policy. ArXiv abs/2405.12213 (2024).
[34] Aäron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. 2017. Neural Discrete Representation Learning. In Neural Information Processing Systems.
[35] Homer Rich Walke, Kevin Black, Abraham Lee, Moo Jin Kim, Maximilian Du, Chongyi Zheng, Tony Zhao, Philippe Hansen-Estruch, Quan Ho Vuong, Andre Wang He, Vivek Myers, Kuan Fang, Chelsea Finn, and Sergey Levine. 2023. BridgeData V2: A Dataset for Robot Learning at Scale. In Conference on Robot Learning.
[36] Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Ke-Yang Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Yang Fan, Kai Dang, Mengfei Du, Xuancheng Ren, Rui Men, Dayiheng Liu, Chang Zhou, Jingren Zhou, and Junyang Lin. 2024. Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution. ArXiv abs/2409.12191 (2024).
[37] Liudi Yang, Yang Bai, George Eskandar, Fengyi Shen, Mohammad Altillawi, Dong Chen, Soumajit Majumder, Ziyuan Liu, Gitta Kutyniok, and Abhinav Valada. 2025. RoboEnvision: A Long-Horizon Video Generation Model for Multi-Task Robot Manipulation. ArXiv abs/2506.22007 (2025).
[38] Lihe Yang, Bingyi Kang, Zilong Huang, Zhen Zhao, Xiaogang Xu, Jiashi Feng, and Hengshuang Zhao. 2024. Depth Anything V2. ArXiv abs/2406.09414 (2024). https://api.semanticscholar.org/CorpusID:270440448
[39] Rujia Yang, Geng Chen, Chuan Wen, and Yang Gao. 2025. FP3: A 3D Foundation Policy for Robotic Manipulation. ArXiv abs/2503.08950 (2025).
[40] Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, and Lucas Beyer. 2023. Sigmoid Loss for Language Image Pre-Training. 2023 IEEE/CVF International Conference on Computer Vision (ICCV) (2023), 11941-11952.
[41] Jiahui Zhang, Yurui Chen, Yueming Xu, Ze Huang, Yanpeng Zhou, Yuan Yuan, Xinyue Cai, Guowei Huang, Xingyue Quan, Hang Xu, and Li Zhang. 2025. 4D-VLA: Spatiotemporal Vision-Language-Action Pretraining with Cross-Scene Calibration. ArXiv abs/2506.22242 (2025). https://api.semanticscholar.org/CorpusID:280010742
[42] Wenyao Zhang, Hongsi Liu, Zekun Qi, Yunnan Wang, Xinqiang Yu, Jiazhao Zhang, Runpei Dong, Jiawei He, He Wang, Zhizheng Zhang, Li Yi, Wenjun Zeng, and Xin Jin. 2025. DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge. ArXiv abs/2507.04447 (2025).
[43] Qingqing Zhao, Yao Lu, Moo Jin Kim, Zipeng Fu, Zhuoyang Zhang, Yecheng Wu, Zhaoshuo Li, Qianli Ma, Song Han, Chelsea Finn, Ankur Handa, Ming-Yu Liu, Donglai Xiang, Gordon Wetzstein, and Tsung-Yi Lin. 2025. CoT-VLA: Visual Chain-of-Thought Reasoning for Vision-Language-Action Models. 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2025), 1702-1713.
[44] Tony Zhao, Vikash Kumar, Sergey Levine, and Chelsea Finn. 2023. Learning Fine-Grained Bimanual Manipulation with Low-Cost Hardware. ArXiv abs/2304.13705 (2023).
[45] Haoyu Zhen, Xiaowen Qiu, Peihao Chen, Jincheng Yang, Xin Yan, Yilun Du, Yining Hong, and Chuang Gan. 2024. 3D-VLA: A 3D Vision-Language-Action Generative World Model. ArXiv abs/2403.09631 (2024).
[46] Junjie Zhu, Huayu Liu, Jin Wang, Bangrong Wen, Kaixiang Huang, Xiao-Fei Li, Haiyun Zhan, and Guodong Lu. 2025. Bridging VLM and KMP: Enabling Fine-grained robotic manipulation via Semantic Keypoints Representation. ArXiv abs/2503.02748 (2025).
arXiv:2510.14833v1 [gr-qc] 16 Oct 2025
Non-exotic traversable wormholes with strong deflection angle in King and Dekel-Zhao dark matter halos under f(R, Lm) gravity

Susmita Sarkar,1,∗ Nayan Sarkar,2,† Abdelmalek Bouzenada,3,‡ and Farook Rahaman4,§
1Department of Applied Science and Humanities, Haldia Institute of Technology, Haldia-721606, West Bengal, India
2Department of Mathematics, Karimpur Pannadevi College, Nadia 741152, West Bengal, India
3Laboratory of Theoretical and Applied Physics, Echahid Cheikh Larbi Tebessi University 12001, Algeria
4Department of Mathematics, Jadavpur University, Kolkata-700 032, West Bengal, India
∗Electronic address: susmita.mathju@gmail.com
†Electronic address: nayan.mathju@gmail.com
‡Electronic address: abdelmalekbouzenada@gmail.com
§Electronic address: rahaman@iucaa.ernet.in
(Dated: October 17, 2025)

In this article, we investigate asymptotically flat non-exotic traversable wormhole geometries within the King and Dekel-Zhao dark matter halos in the framework of f(R, Lm) gravity. Two functional forms of the theory are considered: Model-I: f(R, Lm) = (R/2) + Lm^α and Model-II: f(R, Lm) = (R/2) + (1 + λR)Lm. For both models, wormhole solutions are obtained and analyzed using the King and Dekel-Zhao dark matter density profiles, allowing us to explore how the underlying matter distribution influences the wormhole structures. The energy conditions are examined to verify the feasibility of sustaining the wormhole geometries with non-exotic matter, while embedding surfaces, proper radial distance, and total gravitational energy are studied to illustrate the wormhole's physical viability and traversability. Moreover, we test the strong deflection angle and its implications for gravitational lensing and show possible observational signatures of such wormhole configurations. Our results indicate that within f(R, Lm) gravity, and for appropriate parameter choices, dark matter environments can sustain physically consistent non-exotic traversable wormhole geometries with gravitational lensing signatures, providing new insights into the interplay between modified gravity, dark matter, and astrophysical observations.

Keywords: f(R, Lm) Gravity; Wormhole; Energy Conditions; Non-Exotic Matter; Deflection Angle.

I. INTRODUCTION

In contemporary theoretical astrophysics, wormholes are regarded as highly intriguing structures that naturally emerge from the framework of Einstein's general relativity (GR) when special kinds of matter distributions are considered. These hypothetical objects can serve as tunnels or conduits that connect either widely separated regions of the same universe or even two different universes. The first notion of such a tunnel-like configuration was introduced by Flamm [1], who analyzed the Schwarzschild solution in 1916. Later, Einstein and Rosen [2] advanced this perspective by presenting a bridge-type connection between two distinct external spacetime regions, leading to the concept of the Einstein-Rosen bridge (ERB). Wheeler subsequently coined the term wormhole [3] and, together with Fuller, showed that these structures are dynamically unstable and prone to collapse after their creation, which rules out their utility for interstellar travel in that form [4]. Interest in traversable Lorentzian wormholes was reignited when Morris and Thorne [5] showed that a spherically symmetric wormhole geometry can, in principle, provide a two-way passage between two asymptotically flat spacetimes, provided that exotic matter is present at the throat to keep the passage open.
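For orientation, the wormhole geometries discussed throughout this introduction are usually described by the static, spherically symmetric Morris-Thorne line element, quoted here in its standard form (the specific solutions constructed later in the paper introduce their own notation):

    ds^2 = -e^{2Φ(r)} dt^2 + (1 - b(r)/r)^{-1} dr^2 + r^2 (dθ^2 + sin^2θ dφ^2),

where Φ(r) is the redshift function and b(r) the shape function. Traversability requires b(r_0) = r_0 at the throat (throat condition), b'(r_0) < 1 (flare-out condition), b(r)/r → 0 as r → ∞ (asymptotic flatness), and a finite Φ(r) everywhere so that no horizon forms. In GR, the flare-out condition forces ρ + p_r < 0 at the throat, which is precisely the null energy condition violation discussed next.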
Such constructions necessarily lead to violations of the null energy condition (NEC), raising the question of whether exotic matter might arise in nature within the framework of quantum field theory [7]. Visser's monograph [6] and later work with collaborators [8] illustrate that specific spacetime geometries could sustain traversable wormholes with a minimized requirement of exotic matter, thereby enhancing their physical plausibility. Alongside these efforts, the development of modified theories of gravity has provided powerful new approaches for studying wormholes, motivated by the fact that GR with ordinary matter alone cannot explain phenomena such as the late-time acceleration of the universe or the presence of dark matter (DM). In this modified framework, f(R) gravity [9–11], f(R, T) gravity [12–14], brane-world models [15–17], Rastall gravity [18–20], and f(Q) gravity [21–24] have been employed to construct wormhole solutions by assuming or deriving suitable shape function models, with detailed examinations of stability and energy conditions. More recently, non-linear f(R, Lm) models have been studied in the context of traversable wormholes [25], and further works have investigated their compatibility with various energy conditions [26]. In this context, additional important contributions exploring wormhole geometries within modified gravity frameworks are available in the literature [27–30].

The f(R, Lm) framework was introduced by Harko et al. [31] as a generalization of f(R) models. In this formulation, R denotes the Ricci scalar curvature that encodes the geometric features of spacetime, while Lm denotes the matter Lagrangian density, thereby combining the geometrical attributes of gravity with the material distribution of matter. The relation between these two elements introduces a non-trivial coupling, giving rise to an additional force orthogonal to the particle's four-velocity, which in turn drives massive particles away from purely geodesic trajectories. This property distinguishes f(R, Lm) gravity from conventional relativistic dynamics and opens avenues for reinterpreting the motion of particles and fields in curved spacetime.
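In the conventions of [31], the framework just described follows from the action (quoted here for orientation; the analysis below relies on the field equations derived in the body of the paper)

    S = ∫ f(R, Lm) √(-g) d^4x,

and variation with respect to the metric yields field equations of the schematic form

    f_R R_μν + (g_μν □ - ∇_μ∇_ν) f_R - (1/2) [f - f_Lm Lm] g_μν = (1/2) f_Lm T_μν,

where f_R = ∂f/∂R, f_Lm = ∂f/∂Lm, and T_μν is the matter energy-momentum tensor. As a consistency check, for the minimal choice f(R, Lm) = R/2 + Lm one has f_R = 1/2 and f_Lm = 1, and the equations reduce to the standard Einstein equations G_μν = T_μν (in units with 8πG = 1).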
showed how the presence of bulk viscosity in an anisotropic background could lead to late-time cosmic acceleration [42]. In a similar vein, Jaybhaye et al. tested viscous dark energy models within this framework and derived constraints on the equation-of-state parameter, offering new perspectives on the driving forces behind cosmic expansion [43]. Together, these efforts map out the phenomenology of f(R, Lm) gravity models, where the synergy between matter and geometry reshapes fundamental principles and yields novel interpretations of dark energy, acceleration, and the large-scale dynamics of the universe. Growing scholarly attention continues to generate a diverse array of studies addressing the cosmological and astrophysical consequences of this theory [44-52], with more recent works extending its applicability across different regimes of gravitational physics [53-56]. DM plays a central role both in astrophysical phenomena and in the exploration of exotic spacetime geometries, particularly wormholes, which within GR usually demand exotic matter violating classical energy bounds such as the Null Energy Condition (NEC) [57-59]. To overcome this limitation, alternative approaches based on quantum gravity have been considered, with Loop Quantum Gravity (LQG) and its cosmological reduction, Loop Quantum Cosmology (LQC), offering important corrections in the high-density regime through the introduction of a critical density ρc, above which quantum effects modify the classical description of spacetime [60]. Within such frameworks, wormhole configurations were successfully constructed without requiring exotic matter, thanks to quantum geometric contributions [61, 62]. This perspective naturally opens the possibility of employing DM as a supporting source for wormholes, given its gravitationally attractive but non-luminous nature. Cosmological measurements indicate that DM constitutes nearly five-sixths of the total matter in the universe [63], with its existence strongly supported by astrophysical probes including galactic rotation curves, lensing signatures, and cluster dynamics [64-67]. Despite such compelling evidence, the fundamental nature of DM remains elusive, with candidate models ranging from ultralight bosonic fields on the order of 10^−22 eV to heavy Weakly Interacting Massive Particles (WIMPs), all beyond the Standard Model of particle physics [68]. Several hypotheses remain under investigation: primordial black holes (BHs) [69-78] produced in the early universe may account for part of the DM content, while particles such as axions or WIMPs could annihilate within celestial objects like the Sun, producing neutrinos observable by large detectors such as IceCube, though no conclusive signal has been reported [79, 80]. More recently, innovative detection methods include proposals for solar-orbit satellites to constrain DM properties in the local environment [81, 82]. In addition to these astrophysical searches, DM has been theoretically investigated as a viable wormhole source in both Einsteinian and modified gravitational settings, where its isotropic or anisotropic pressures can satisfy wormhole flare-out conditions without invoking unphysical exotic fluids [83-90]. Another important analysis considered the construction of traversable wormholes supported by isotropic DM within the LQC framework.
That analysis employed three DM density models, (i) the Navarro-Frenk-White (NFW), (ii) the Pseudo-Isothermal (PI), and (iii) the Perfect Fluid (PF) distributions, each offering distinct insights into the behavior of galactic halos and their role in sustaining wormhole geometries under quantum gravitational corrections [91, 92]. The King model is a statistical mechanics framework originally developed to describe globular clusters but later extended to DM halos, providing a realistic way to avoid the infinite mass problem of classical isothermal cases. It introduces a truncation in the distribution function by assuming that particles with energies above a certain escape energy leave the system, leading to a self-consistent density profile with a flat core and a finite radius. This structure naturally combines an isothermal core and halo with a polytropic envelope of index n = 5/2, producing density profiles that decrease as r^−3 at large distances, consistent with observed galactic halos such as those described by the Burkert profile. Because of evaporation and collisions, the King model captures the evolution of self-gravitating systems toward marginal stability, explaining why many globular clusters and large DM halos are found close to this equilibrium state [93]. The Dekel-Zhao (DZ) DM density profile has gained prominence as one of the most adaptable models for describing the distribution of DM halos across astrophysical systems [94, 95]. Formulated as a generalized double power-law structure, it provides a smooth interpolation between the central and outer regions of galaxies through a set of adjustable parameters that govern the inner slope, the steepness of the transition, and the asymptotic outer behavior [96, 97]. The DZ profile is thus capable of reproducing a wide range of observed galactic density configurations, making it a robust tool for both theoretical modeling and observational studies in galactic dynamics. Beyond its astrophysical applications, it has also been tested in gravitational contexts, where, depending on the choice of parameters, the profile can give rise to regular or singular BH geometries within the Schwarzschild framework [98-100]. Its utility extends further when incorporated into approaches such as the generalized Einstein cluster method, which probes the impact of DM on compact objects in strong gravitational regimes [101-103]. Other studies have further illustrated the influence of the DZ model: Khatri et al. [104] demonstrated that the DZ profile can support stable wormhole geometries in Einstein gravity with minimal exotic matter, accompanied by distinctive gravitational lensing effects, while Errehymy et al. [105] showed that traversable wormholes can also emerge in f(R, Lm, T) gravity, where coupling constants mediate matter-geometry interactions. The present study examines traversable wormhole geometries in the framework of f(R, Lm) gravity: we set out the fundamental equations of the theory and derive the corresponding field equations tailored to wormhole spacetimes. The analysis of the energy conditions is then carried out, providing the essential constraints on the matter content required to sustain the wormhole. Next, we examine Model I, defined by the functional form f(R, Lm) = (R/2) + L^α_m, where wormhole solutions are constructed and studied under the King and Dekel-Zhao DM density profiles.
We then examine Model II, based on f(R, Lm) = (R/2) + (1 + λR)Lm, where the wormhole solutions are obtained for the same DM models, offering a comparative perspective between the two cases. The embedding diagrams of the wormhole spacetime are analyzed to visualize the geometry, alongside the evaluation of tidal forces and the computation of the total gravitational energy, which provide further information about the wormhole's stability and traversability. The study also considers the strong deflection angle, emphasizing its relevance for gravitational lensing phenomena and the potential for observational signatures. Finally, we discuss the results obtained in these models, their physical meaning, and their broader implications for wormhole physics in the context of modified gravity theories and DM models.

The paper is organized as follows: Section-I introduces wormhole geometries in the framework of f(R, Lm) gravity. In Section-II, we present the fundamental equations governing this modified gravity theory. In Section-III, we state the traversability criteria for wormholes, derive the corresponding Einstein field equations for the wormhole spacetime, and analyze the energy conditions, providing the necessary constraints on the matter content supporting the wormhole. Section-IV explores Model-I, where the functional form of the theory is taken as f(R, Lm) = (R/2) + L^α_m, and wormhole solutions are obtained in the context of both the King and Dekel-Zhao DM density profiles, discussed separately in subsections IV A and IV B. In Section-V, we examine Model-II, with f(R, Lm) = (R/2) + (1 + λR)Lm, again analyzing wormhole solutions for the King and Dekel-Zhao models. Section-VI addresses the embedding surface of the wormhole geometry and computes the total gravitational energy. The strong deflection angle and its implications for gravitational lensing are discussed in Section-VII. Finally, Section-VIII summarizes the results and discusses their physical implications.

II. BASIC EQUATIONS IN f(R, Lm) GRAVITY

In modified gravity theories, a key approach is the generalization of the action, providing a more flexible framework to describe gravitational dynamics beyond general relativity and enabling the inclusion of additional curvature or matter terms to model cosmic acceleration, DM effects, and other phenomena. In this context, Harko et al. [106] proposed the f(R, Lm) gravity theory, which generalizes the f(R) models by assuming that the gravitational Lagrangian can be represented as an arbitrary function of the Ricci scalar R and the matter Lagrangian Lm. The action corresponding to this framework is expressed as

S = \int f(R, L_m)\sqrt{-g}\, d^4x, (1)

where g denotes the determinant of the metric tensor gµν. The Ricci scalar curvature R can be obtained by contracting the Ricci tensor Rµν as

R = g^{\mu\nu}R_{\mu\nu}, (2)

where the Ricci tensor Rµν can be defined as

R_{\mu\nu} = \partial_{\lambda}\Gamma^{\lambda}_{\mu\nu} - \partial_{\nu}\Gamma^{\lambda}_{\mu\lambda} + \Gamma^{\lambda}_{\lambda\sigma}\Gamma^{\sigma}_{\mu\nu} - \Gamma^{\lambda}_{\nu\sigma}\Gamma^{\sigma}_{\mu\lambda}. (3)

Here, Γ^α_{βγ} denotes the components of the Levi-Civita connection, expressed as

\Gamma^{\alpha}_{\beta\gamma} = \frac{1}{2}g^{\alpha\lambda}\left(\frac{\partial g_{\gamma\lambda}}{\partial x^{\beta}} + \frac{\partial g_{\lambda\beta}}{\partial x^{\gamma}} - \frac{\partial g_{\beta\gamma}}{\partial x^{\lambda}}\right). (4)
FIG. 1: Shows the characteristics of the shape function ξ(r) (Left), the ratio ξ(r)/r (Middle), and the derivative ξ′(r) (Right) against the radial coordinate r for the King DM model under the f(R, Lm) gravity model-I with parameters β = 0.65, γ = 1, η = −0.5, rs = 1.01, and r0 = 1.4.

The field equations of f(R, Lm) gravity follow from the variation of the general action (1) with respect to the metric tensor gµν, yielding

f_R R_{\mu\nu} + (g_{\mu\nu}\Box - \nabla_{\mu}\nabla_{\nu})f_R - \frac{1}{2}(f - f_{L_m}L_m)g_{\mu\nu} = \frac{1}{2}f_{L_m}T_{\mu\nu}, (5)

where f_R = \frac{\partial f}{\partial R}, f_{L_m} = \frac{\partial f}{\partial L_m}, and Tµν denotes the energy-momentum tensor of the matter distribution, expressed as

T_{\mu\nu} = -\frac{2}{\sqrt{-g}}\frac{\delta(\sqrt{-g}\,L_m)}{\delta g^{\mu\nu}} = g_{\mu\nu}L_m - 2\frac{\partial L_m}{\partial g^{\mu\nu}}. (6)

Now, the contraction of the field equation (5) leads to a relation connecting the energy-momentum scalar T, the matter Lagrangian Lm, and the Ricci scalar R as

Rf_R + 3\Box f_R - 2(f - f_{L_m}L_m) = \frac{1}{2}f_{L_m}T, (7)

where □ stands for the d'Alembertian operator, defined as \Box F = \frac{1}{\sqrt{-g}}\partial_{\alpha}(\sqrt{-g}\,g^{\alpha\beta}\partial_{\beta}F) for any scalar function F. Further, the covariant divergence of the energy-momentum tensor in this gravity framework can be expressed as

\nabla_{\mu}T^{\mu\nu} = 2\left[\nabla_{\mu}\ln(f_{L_m})\right]\frac{\partial L_m}{\partial g_{\mu\nu}}. (8)

The conservation of the matter energy-momentum tensor, ∇µT^{µν} = 0, imposes a functional constraint between the matter Lagrangian density and the function f_{L_m}(R, Lm) as

\nabla_{\mu}\ln(f_{L_m})\frac{\partial L_m}{\partial g_{\mu\nu}} = 0. (9)

Consequently, given a specific matter Lagrangian density, an appropriate choice of the function f(R, Lm) can yield conservative models with arbitrary matter-geometry coupling.

FIG. 2: Shows the characteristic of energy density ρ(r) (Left), ρ(r) + Pr(r) (Middle), ρ(r) + Pt(r) (Right) in the above panel, and ρ(r) − |Pr(r)| (Left), ρ(r) − |Pt(r)| (Middle), ρ(r) + Pr(r) + 2Pt(r) (Right) in the below panel for the King DM model under the f(R, Lm) gravity model-I with parameters β = 0.65, γ = 1, η = −0.5, rs = 1.01, and r0 = 1.4.

III. FIELD EQUATIONS FOR TRAVERSABLE WORMHOLE IN f(R, Lm) GRAVITY

In this section, we discuss the traversability criteria of the wormhole structure and formulate the Einstein field equations for a traversable wormhole geometry within the framework of f(R, Lm) gravity theory.

A. Traversability Criteria for Wormhole

In this study, we consider a static, spherically symmetric Morris-Thorne wormhole, described by the following line element [107]

ds^2 = -e^{2\Phi(r)}dt^2 + \left(1 - \frac{\xi(r)}{r}\right)^{-1}dr^2 + r^2(d\theta^2 + \sin^2\theta\, d\varphi^2), (10)

where Φ(r) and ξ(r) are the gravitational redshift function and shape function of the wormhole, respectively. The wormhole throat, a crucial structural feature, corresponds to the minimum radius r = r0 for which ξ(r0) = r0, known as the throat condition. Further, for the wormhole line element (10), the proper radial distance function l(r) is defined as

l(r) = \pm\int_{r_0}^{r}\frac{dr}{\sqrt{1 - \xi(r)/r}}. (11)

Here, the ± signs denote the upper and lower regions of the wormhole, which are connected through the throat.
For a wormhole to allow traversal, the above functions are required to meet the following criteria:

Gravitational redshift function Φ(r): The gravitational redshift function, Φ(r), must remain finite throughout the spacetime to prevent the formation of an event horizon. If Φ(r) diverges at any point, it would correspond to an infinite redshift, effectively creating a horizon that would block the passage of signals or travelers.

Shape function ξ(r): The shape function ξ(r) must satisfy the following conditions: (i) for all regions outside the throat, the ratio of the shape function to the radial coordinate must not exceed unity, i.e., ξ(r)/r ≤ 1 for r ≥ r0; (ii) the flare-out condition, ξ′(r) < 1 for r ≥ r0; and (iii) the asymptotic flatness condition, ξ(r)/r → 0 as r → ∞, ensuring the asymptotically flat nature of the wormhole.

Proper radial distance function l(r): This function must remain finite across all radial coordinates r. It decreases from the upper universe (l = ∞) toward the throat (l = 0), and then increases from the throat toward the lower universe (l = −∞). Moreover, the proper radial distance l(r) must satisfy |l(r)| ≥ r − r0.

FIG. 3: Shows the characteristics of the shape function ξ(r) (Left), the ratio ξ(r)/r (Middle), and the derivative ξ′(r) (Right) against the radial coordinate r for the Dekel-Zhao DM model under the f(R, Lm) gravity model-I with parameters ρs = 0.06, κ = 2.12, rs = 6, and r0 = 1.4.

B. Field Equations for Traversable Wormhole

Here, we derive the Einstein field equations for the wormhole metric (10) in the f(R, Lm) gravity framework. In this context, the non-vanishing components of the Ricci tensor for the wormhole metric (10) are obtained as

R_{00} = e^{2\Phi}\left[\left(1 - \frac{\xi}{r}\right)\left(\Phi'' + \Phi'^2 + \frac{2\Phi'}{r}\right) - \frac{(r\xi' - \xi)}{2r^2}\Phi'\right], (12)

R_{11} = -\Phi'' - \Phi'^2 + \frac{(r\xi' - \xi)}{2r(r - \xi)}\left(\Phi' + \frac{2}{r}\right), (13)

R_{22} = (\xi - r)\Phi' + \frac{\xi'}{2} + \frac{\xi}{2r}, (14)

R_{33} = \sin^2\theta\left[(\xi - r)\Phi' + \frac{\xi'}{2} + \frac{\xi}{2r}\right]. (15)

Thus, the Ricci curvature scalar R for the wormhole geometry (10) can be obtained using equation (2) as

R = \frac{2\xi'}{r^2} - 2\left(1 - \frac{\xi}{r}\right)\left(\Phi'' + \Phi'^2 + \frac{\Phi'}{r}\right) + \frac{\Phi'}{r^2}(r\xi' + \xi - 2r). (16)

In this study, the matter distribution of the wormhole structure is assumed to be anisotropic, and the corresponding energy-momentum tensor is expressed as

T_{\mu\nu} = [\rho(r) + P_t(r)]u_{\mu}u_{\nu} + P_t(r)g_{\mu\nu} + [P_r(r) - P_t(r)]v_{\mu}v_{\nu}, (17)

where u^µ denotes the four-velocity vector and v^µ is the unitary space-like vector, satisfying −u^µu_µ = v^µv_µ = 1. Besides, ρ(r) represents the energy density, while Pr(r) and Pt(r) represent the radial and tangential pressures, respectively. Notably, they depend solely on the radial coordinate r. Now, the Einstein field equations (5) for the wormhole geometry (10) associated with the anisotropic matter distribution (17) read as

\left(1 - \frac{\xi}{r}\right)\left[\left(\Phi'' + \Phi'^2 + \frac{2\Phi'}{r} - \frac{r\xi' - \xi}{2r(r - \xi)}\Phi'\right)F - \left(\Phi' + \frac{2}{r} - \frac{r\xi' - \xi}{2r(r - \xi)}\right)F' - F''\right] + \frac{f - L_m f_{L_m}}{2} = \frac{f_{L_m}}{2}\rho(r), (18)

\left(1 - \frac{\xi}{r}\right)\left[\left(-\Phi'' - \Phi'^2 + \frac{r\xi' - \xi}{2r(r - \xi)}\left(\Phi' + \frac{2}{r}\right)\right)F + \left(\Phi' + \frac{2}{r} - \frac{r\xi' - \xi}{2r(r - \xi)}\right)F'\right] - \frac{f - L_m f_{L_m}}{2} = \frac{f_{L_m}}{2}P_r(r), (19)

\left(1 - \frac{\xi}{r}\right)\left[\left(-\frac{\Phi'}{r} + \frac{r\xi' + \xi}{2r^2(r - \xi)}\right)F + \left(\Phi' + \frac{2}{r} - \frac{r\xi' - \xi}{2r(r - \xi)}\right)F' + F''\right] - \frac{f - L_m f_{L_m}}{2} = \frac{f_{L_m}}{2}P_t(r). (20)

Here, F = ∂f/∂R. It is important to note that the above field equations provide a foundation for exploring a wide range of traversable wormhole solutions within the framework of f(R, Lm) gravity.
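Since the shape functions considered below are handled numerically, it is convenient to automate the traversability criteria just listed. The following minimal sketch (our illustration in Python with NumPy, not part of the original analysis; the finite-difference step h and the flatness tolerance are arbitrary illustrative choices) checks the throat condition, ξ(r)/r ≤ 1, the flare-out condition, and asymptotic flatness for a candidate shape function:

```python
import numpy as np

def traversability_report(xi, r0, r_max=500.0, n=4000, h=1e-6):
    """Check the throat, flare-out, and asymptotic-flatness criteria of
    Sec. III A for a candidate shape function xi(r), sampled numerically."""
    r = np.linspace(r0, r_max, n)
    xi_prime = (xi(r + h) - xi(r - h)) / (2.0 * h)   # central finite difference
    return {
        "throat xi(r0) = r0":        bool(np.isclose(xi(r0), r0)),
        "xi(r)/r <= 1 for r >= r0":  bool(np.all(xi(r) / r <= 1.0 + 1e-10)),
        "flare-out xi'(r) < 1":      bool(np.all(xi_prime < 1.0)),
        "xi(r)/r -> 0 (flatness)":   bool(xi(r_max) / r_max < 0.05),
    }

# Example: the logarithmic shape function proposed later in Eq. (59), with r0 = 1.4
r0 = 1.4
xi_log = lambda r: r0 / np.log(2.0) * np.log(1.0 + r / r0)
print(traversability_report(xi_log, r0))
```

For the logarithmic example all four checks pass, consistent with the behavior reported for Eq. (59) below.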
FIG. 4: Shows the characteristic of energy density ρ(r) (Left), ρ(r) + Pr(r) (Middle), ρ(r) + Pt(r) (Right) in the above panel, and ρ(r) − |Pr(r)| (Left), ρ(r) − |Pt(r)| (Middle), ρ(r) + Pr(r) + 2Pt(r) (Right) in the below panel for the Dekel-Zhao DM model under the f(R, Lm) gravity model-I with parameters ρs = 0.06, κ = 2.12, rs = 6.5, and r0 = 1.4.

We now briefly address the classical energy conditions, derived from the Raychaudhuri equations. In the framework of general relativity, traversable wormholes demand exotic matter, violating the null energy condition (NEC) [107, 108]. In contrast, modified gravity theories can comply with or avoid this condition because their effective field equations are not identical to those in general relativity. The Raychaudhuri equations, describing the temporal evolution of congruences of timelike vectors u^η and null geodesics k^η, can be expressed as [109]

\frac{d\theta}{d\tau} - \omega_{\eta\xi}\omega^{\eta\xi} + \sigma_{\eta\xi}\sigma^{\eta\xi} + \frac{1}{3}\theta^2 + R_{\eta\xi}u^{\eta}u^{\xi} = 0, (21)

\frac{d\theta}{d\tau} - \omega_{\eta\xi}\omega^{\eta\xi} + \sigma_{\eta\xi}\sigma^{\eta\xi} + \frac{1}{3}\theta^2 + R_{\eta\xi}k^{\eta}k^{\xi} = 0, (22)

where k^η represents the null vector field, σ_{ηξ} is the shear tensor with σ² = σ_{ηξ}σ^{ηξ} ≥ 0, and the twist satisfies ω_{ηξ} ≡ 0. In the scenario of attractive gravity (θ < 0), the Raychaudhuri equations imply the following conditions

R_{\eta\xi}u^{\eta}u^{\xi} \geq 0, \qquad R_{\eta\xi}k^{\eta}k^{\xi} \geq 0. (23)

Thus, the energy conditions for anisotropic matter take the form:
(i) Null Energy Condition (NEC): NECr: ρ(r) + Pr(r) ≥ 0, NECt: ρ(r) + Pt(r) ≥ 0.
(ii) Weak Energy Condition (WEC): WECr: ρ(r) ≥ 0, ρ(r) + Pr(r) ≥ 0, WECt: ρ(r) ≥ 0, ρ(r) + Pt(r) ≥ 0.
(iii) Dominant Energy Condition (DEC): DECr: ρ(r) ≥ 0, ρ(r) − |Pr(r)| ≥ 0, DECt: ρ(r) ≥ 0, ρ(r) − |Pt(r)| ≥ 0.
(iv) Strong Energy Condition (SEC): ρ(r) + Pr(r) ≥ 0, ρ(r) + Pt(r) ≥ 0, ρ(r) + Pr(r) + 2Pt(r) ≥ 0.

To construct physically viable wormhole solutions within the framework of f(R, Lm) gravity, we consider two specific and well-motivated functional forms of the gravitational Lagrangian, model-I: f(R, Lm) = R/2 + L^α_m, and model-II: f(R, Lm) = R/2 + (1 + λR)Lm. These forms are chosen to capture different modes of curvature-matter coupling and to facilitate analytical tractability of the field equations.

FIG. 5: Shows the characteristics of the wormhole shape function ξ(r) of Eq. (59) (Left), the function ξ(r) − r (Left-Middle), the ratio ξ(r)/r (Right-Middle), and the derivative ξ′(r) (Right) against the radial coordinate r under the f(R, Lm) gravity model-II with parameter r0 = 1.4.

IV. MODEL-I: WORMHOLE SOLUTIONS FOR f(R, Lm) = R/2 + L^α_m

In this section, we derive the wormhole solutions by considering the following minimal functional form of f(R, Lm) [110]

f(R, L_m) = \frac{R}{2} + L_m^{\alpha}. (24)

Here, the parameter α serves as a crucial free variable, whose value can be adjusted to modulate the physical properties of the system.
Remarkably, for α = 1, the model naturally reproduces the well-known wormhole geometry within the framework of general relativity (GR). Indeed, this particular choice simplifies the underlying field equations while retaining the essential coupling between the matter Lagrangian density Lm and the Ricci scalar R. Such a minimal form enables the exploration of the fundamental geometrical and physical characteristics of wormhole configurations without invoking additional complexities arising from higher-order or non-minimal curvature-matter interactions. For a traversable wormhole, the tidal forces experienced by a traveler must remain within human tolerance, not exceeding Earth's surface gravity g⊕. Focusing on radial motion in the equatorial plane (θ = π/2), the condition |∆a| ≤ g⊕ ensures safe passage, yielding the following constraint on the radial tidal force at the wormhole throat [107]

|\Phi'(r_0)| \leq \frac{g_{\oplus} r_0}{1 - \xi'(r_0)}. (25)

Since a constant redshift function satisfies the above constraint, we adopt it in this study. Thus, for a constant redshift function and with the choice Lm = ρ(r), the Einstein field equations (18)-(20) in this model simplify to the following form

\rho(r) = \left[\frac{\xi'(r)}{(2\alpha - 1)r^2}\right]^{1/\alpha}, (26)

P_r(r) = \frac{(\rho(r))^{1-\alpha}}{\alpha r^3}\left[(\alpha - 1)r^3(\rho(r))^{\alpha} - \xi(r)\right], (27)

P_t(r) = \frac{(\rho(r))^{1-\alpha}}{2\alpha r^3}\left[\xi(r) - r\xi'(r) + 2(\alpha - 1)r^3(\rho(r))^{\alpha}\right]. (28)

The above system of field equations comprises three equations with four unknowns. Hence, to obtain viable wormhole solutions within DM halos, we adopt two specific DM halo models: the King DM model and the Dekel-Zhao DM model.

A. King Dark Matter Model

The King DM density profile is a widely used phenomenological model for describing galactic DM distributions [111]. Owing to its consistency with low surface brightness and flat rotation curves, it effectively models core-like DM halos [112]. The King DM density profile is defined as [111, 112]

\rho(r) = \beta\left[\left(\frac{r}{r_s}\right)^2 + \gamma\right]^{\eta}, (29)

where β, γ, η are parameters, and rs denotes the scale radius. Substituting the above energy density (29) into the field equation (26), the corresponding shape function ξ(r) is obtained as

\xi(r) = C_1 + \frac{1}{3}(2\alpha - 1)r^3\beta^{\alpha}\gamma^{\alpha\eta}F(r), (30)

where F(r) = {}_2F_1\!\left(\frac{3}{2}, -\alpha\eta; \frac{5}{2}; -\frac{r^2}{\gamma r_s^2}\right) is the hypergeometric function, and C1 is an integration constant. Here, the throat condition ξ(r0) = r0 is applied to determine the constant C1, which yields the following expression

C_1 = r_0 - \frac{1}{3}(2\alpha - 1)r_0^3\beta^{\alpha}\gamma^{\alpha\eta}F(r_0). (31)

Thus, with the above value of C1, the resulting shape function takes the following form

\xi(r) = r_0 + \frac{1}{3}(2\alpha - 1)\beta^{\alpha}\gamma^{\alpha\eta}\left[r^3 F(r) - r_0^3 F(r_0)\right]. (32)

Furthermore, the wormhole solutions are required to satisfy the flare-out condition at the throat, given by the relation

\xi'(r_0) = (2\alpha - 1)r_0^2\beta^{\alpha}\gamma^{\alpha\eta}\left(1 + \frac{r_0^2}{\gamma r_s^2}\right)^{\alpha\eta} < 1. (33)

The obtained shape function (32) is illustrated graphically in Fig. 1 for the f(R, Lm) gravity parameter values α = 2.2, 2.3, 2.4, 2.5, 2.6 with β = 0.65, γ = 1, η = −0.5, rs = 1.01, and r0 = 1.4. Fig. 1 shows that the shape function ξ(r) increases monotonically with the radial coordinate r and decreases with increasing parameter α, while consistently satisfying ξ(r)/r ≤ 1 for r ≥ r0. Moreover, the shape function meets the flare-out condition in this configuration. These results ensure that the shape function (32) derived from the King DM density profile (29) successfully captures all essential features of a traversable wormhole. Notably, the shape function also exhibits the desired asymptotically flat behavior, ensuring that the resulting traversable wormhole geometries are asymptotically flat.
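As a consistency check on Eqs. (29)-(33), the King-model shape function can be evaluated directly with SciPy's Gauss hypergeometric function. The sketch below is our illustration (not part of the original analysis), using the Fig. 1 parameter values for α = 2.2:

```python
import numpy as np
from scipy.special import hyp2f1

# Parameters of Figs. 1-2 (King DM halo, f(R, Lm) model-I)
alpha, beta, gamma, eta, rs, r0 = 2.2, 0.65, 1.0, -0.5, 1.01, 1.4

rho = lambda r: beta * (gamma + (r / rs) ** 2) ** eta                   # Eq. (29)
F   = lambda r: hyp2f1(1.5, -alpha * eta, 2.5, -r**2 / (gamma * rs**2))

def xi(r):
    """Shape function of Eq. (32)."""
    pref = (2.0 * alpha - 1.0) * beta**alpha * gamma ** (alpha * eta) / 3.0
    return r0 + pref * (r**3 * F(r) - r0**3 * F(r0))

xi_prime = lambda r: (2.0 * alpha - 1.0) * r**2 * rho(r) ** alpha       # from Eq. (26)

print("throat condition:", np.isclose(xi(r0), r0))
print("flare-out xi'(r0) < 1:", xi_prime(r0) < 1.0)                     # Eq. (33)
r = np.linspace(r0, 50.0, 200)
print("xi(r)/r <= 1 on [r0, 50]:", bool(np.all(xi(r) / r <= 1.0 + 1e-12)))
```

With these numbers the flare-out value at the throat is ξ′(r0) ≈ 0.79 < 1, in line with the behavior shown in Fig. 1.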
The radial and transverse pressures are then obtained from the field equations (19)-(20), resulting in the following expressions

P_r(r) = \frac{\beta^{1-\alpha}}{3\alpha r^3}\left(\gamma + \frac{r^2}{r_s^2}\right)^{(1-\alpha)\eta}\left[(2\alpha - 1)\beta^{\alpha}\gamma^{\eta\alpha}\left(r_0^3 F(r_0) - r^3 F(r)\right) - 3r_0 + 3(\alpha - 1)r^3\beta^{\alpha}\left(\gamma + \frac{r^2}{r_s^2}\right)^{\eta\alpha}\right], (34)

P_t(r) = \frac{\beta^{1-\alpha}}{6\alpha r^3}\left(\gamma + \frac{r^2}{r_s^2}\right)^{(1-\alpha)\eta}\left[(2\alpha - 1)\beta^{\alpha}\gamma^{\eta\alpha}\left(r^3 F(r) - r_0^3 F(r_0)\right) + 3r_0 - 3r^3\beta^{\alpha}\gamma^{\eta\alpha}\left(1 + \frac{r^2}{\gamma r_s^2}\right)^{\alpha\eta}\right]. (35)

FIG. 6: Shows the characteristic of energy density ρ(r) (Left), ρ(r) + Pr(r) (Middle), ρ(r) + Pt(r) (Right) in the above panel, and ρ(r) − |Pr(r)| (Left), ρ(r) − |Pt(r)| (Middle), ρ(r) + Pr(r) + 2Pt(r) (Right) in the below panel for the King DM model under the f(R, Lm) gravity model-II with parameters β = 0.65, γ = 1, η = −0.5, rs = 1.01, and r0 = 1.4.

In this model, the NEC, DEC, and SEC at the wormhole throat can be explicitly expressed as

[\rho(r) + P_r(r)]_{r=r_0} = \frac{\beta}{\alpha r_0^2}\left(\gamma + \frac{r_0^2}{r_s^2}\right)^{\eta}\left[(2\alpha - 1)r_0^2 - \beta^{\alpha}\left(\gamma + \frac{r_0^2}{r_s^2}\right)^{\alpha\eta}\right] > 0, (36)

[\rho(r) + P_t(r)]_{r=r_0} = \frac{\beta^{1-\alpha}}{2\alpha r_0^2}\left(\gamma + \frac{r_0^2}{r_s^2}\right)^{(1-\alpha)\eta}\left[1 + (2\alpha - 1)r_0^2\beta^{\alpha}\left(\gamma + \frac{r_0^2}{r_s^2}\right)^{\alpha\eta}\right] > 0, (37)

[\rho(r) - |P_r(r)|]_{r=r_0} = \beta\left(\gamma + \frac{r_0^2}{r_s^2}\right)^{\eta} - \frac{\beta}{\alpha r_0^2}\left(\gamma + \frac{r_0^2}{r_s^2}\right)^{\eta}\left[(\alpha - 1)r_0^2 - \beta^{\alpha}\left(\gamma + \frac{r_0^2}{r_s^2}\right)^{\alpha\eta}\right] > 0, (38)

[\rho(r) - |P_t(r)|]_{r=r_0} = \beta\left(\gamma + \frac{r_0^2}{r_s^2}\right)^{\eta} - \frac{3\beta^{1-\alpha}}{\alpha r_0^2}\left(\gamma + \frac{r_0^2}{r_s^2}\right)^{(1-\alpha)\eta}\left[1 - r_0^2\beta^{\alpha}\left(\gamma + \frac{r_0^2}{r_s^2}\right)^{\alpha\eta}\right] > 0, (39)

[\rho(r) + P_r(r) + 2P_t(r)]_{r=r_0} = \frac{\beta^{1-\alpha}}{\alpha r_0^2}\left(\gamma + \frac{r_0^2}{r_s^2}\right)^{(1-\alpha)\eta}\left[1 - \beta^{2\alpha}\left(\gamma + \frac{r_0^2}{r_s^2}\right)^{2\alpha\eta} + 2(\alpha - 1)r_0^2\beta^{\alpha}\left(\gamma + \frac{r_0^2}{r_s^2}\right)^{\alpha\eta}\right] > 0. (40)

To investigate the physical nature of the matter content supporting the wormholes, we provide graphical illustrations of all relevant energy conditions in Fig. 2 corresponding to the parameters α = 2.2, 2.3, 2.4, 2.5, 2.6 with β = 0.65, γ = 1, η = −0.5, rs = 1.01, and r0 = 1.4. Notably, the energy conditions NECr, NECt, WECr, WECt, DECr, and SEC are satisfied throughout the wormhole spacetime, while DECt is satisfied near the throat, indicating the absence of exotic matter. These findings confirm that the wormhole solutions can be sustained within the King DM halo under f(R, Lm) gravity without requiring exotic matter.

B. Dekel-Zhao Dark Matter Model

In this section, we adopt the Dekel-Zhao (DZ) DM density profile to construct traversable wormholes. The DZ profile is a widely used, versatile model for DM halos, capable of capturing a broad range of astrophysical structures [113, 114].
FIG. 7: Shows the characteristic of energy density ρ(r) (Left), ρ(r) + Pr(r) (Middle), ρ(r) + Pt(r) (Right) in the above panel, and ρ(r) − |Pr(r)| (Left), ρ(r) − |Pt(r)| (Middle), ρ(r) + Pr(r) + 2Pt(r) (Right) in the below panel for the Dekel-Zhao DM model under the f(R, Lm) gravity model-II with parameters ρs = 0.06, κ = 2.12, rs = 6.5, and r0 = 1.4.

Its double power-law form smoothly interpolates between inner and outer slopes, making it well-suited for both observational and theoretical studies. The profile is defined as [113-115]

\rho(r) = \frac{\rho_s}{\left(\frac{r}{r_s}\right)^{\kappa}\left[1 + \left(\frac{r}{r_s}\right)^{1/b}\right]^{b(d-\kappa)}}. (41)

Here, the parameters ρs and rs represent the characteristic density and the scale radius, respectively. Moreover, the parameters κ, b, and d control the inner slope, the transition sharpness, and the outer slope of the density profile. For b = N1 and d = 3 + N2/N1 with natural numbers N1 and N2, simplified analytical expressions for the gravitational potential, enclosed mass, and velocity dispersion can be obtained. In this study, we choose N1 = 2 and N2 = 1, so the DZ density profile (41) becomes

\rho(r) = \frac{\rho_s}{\left(\frac{r}{r_s}\right)^{\kappa}\left(1 + \sqrt{\frac{r}{r_s}}\right)^{7-2\kappa}}. (42)

In this case, the shape function is obtained by substituting the above density profile (42) into the field equation (26), giving:

\xi(r) = C_2 + \frac{2\alpha - 1}{3 - \alpha\kappa}\,r^3\rho_s^{\alpha}\left(\frac{r_s}{r}\right)^{\alpha\kappa}H(r), (43)

where H(r) = {}_2F_1\!\left(\alpha(7 - 2\kappa),\, 6 - 2\alpha\kappa;\, 7 - 2\alpha\kappa;\, -\sqrt{\frac{r}{r_s}}\right), and C2 is an integration constant. Here, the throat condition ξ(r0) = r0 yields the following result

C_2 = r_0 + \frac{2\alpha - 1}{\alpha\kappa - 3}\,r_0^3\rho_s^{\alpha}\left(\frac{r_s}{r_0}\right)^{\alpha\kappa}H(r_0). (44)

Consequently, incorporating the above result for C2, the shape function can be expressed in its final form as

\xi(r) = r_0 + \frac{2\alpha - 1}{3 - \alpha\kappa}\,\rho_s^{\alpha}\left[r^3\left(\frac{r_s}{r}\right)^{\alpha\kappa}H(r) - r_0^3\left(\frac{r_s}{r_0}\right)^{\alpha\kappa}H(r_0)\right]. (45)

In this case, the flare-out condition at the wormhole throat is given by the following relation

\xi'(r_0) = (2\alpha - 1)r_0^2\rho_s^{\alpha}\left(\frac{r_s}{r_0}\right)^{\alpha\kappa}\left(1 + \sqrt{\frac{r_0}{r_s}}\right)^{\alpha(2\kappa - 7)} < 1. (46)

The derived shape function (45) is shown in Fig. 3 for α = 2.2, 2.3, 2.4, 2.5, 2.6 with parameters ρs = 0.06, κ = 2.12, rs = 6, and r0 = 1.4. As illustrated, the shape function increases monotonically with r, satisfies ξ(r)/r ≤ 1 for r ≥ r0, and meets the flare-out condition. Moreover, ξ(r)/r → 0 as r → ∞, demonstrating the desired asymptotic behavior. Therefore, the shape function derived from the DZ DM density profile (42) successfully satisfies the essential features of asymptotically flat traversable wormhole geometries.

FIG. 8: Shows the characteristics of the embedding surface (Left), proper radial length (Middle) against the radial coordinate r, and the full visualization diagram of the wormholes (Right) for the King DM model under the f(R, Lm) gravity model-I with parameters β = 0.65, γ = 1, η = −0.5, rs = 1.01, and r0 = 1.4. In the full visualization diagram of the wormholes: α = 2.2 (Yellow), α = 2.3 (Red), α = 2.4 (Blue), α = 2.5 (Green), α = 2.6 (Purple).
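Because the closed form (45) involves a hypergeometric function with parameter-dependent arguments, a simple cross-check is to bypass it and integrate the field equation (26) directly, using ξ′(r) = (2α − 1)r²(ρ(r))^α together with the throat condition. The sketch below is our illustration (assuming SciPy; the Fig. 3 parameter values), tabulating ξ′(r) and ξ(r)/r over the plotted range so the output can be compared against Fig. 3:

```python
import numpy as np
from scipy.integrate import quad

# Parameters of Fig. 3 (Dekel-Zhao DM halo, f(R, Lm) model-I)
alpha, rho_s, kappa, rs, r0 = 2.2, 0.06, 2.12, 6.0, 1.4

def rho(r):
    """DZ density of Eq. (42), i.e. the (b, d) = (2, 7/2) case of Eq. (41)."""
    x = r / rs
    return rho_s / (x**kappa * (1.0 + np.sqrt(x)) ** (7.0 - 2.0 * kappa))

# Field equation (26) is equivalent to xi'(r) = (2*alpha - 1) r^2 rho(r)^alpha,
# so xi(r) follows by direct quadrature from the throat condition xi(r0) = r0.
xi_prime = lambda r: (2.0 * alpha - 1.0) * r**2 * rho(r) ** alpha

def xi(r):
    val, _ = quad(xi_prime, r0, r)
    return r0 + val

for r in (2.0, 3.0, 5.0, 8.0, 30.0, 150.0):
    print(f"r = {r:6.1f}  xi'(r) = {xi_prime(r):.3f}  xi(r)/r = {xi(r)/r:.3f}")
```

The tabulated ξ(r)/r decays toward zero at large r, reproducing the asymptotically flat trend of Fig. 3.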
The radial and transverse pressures in this case are obtained by inserting the DZ density (42) and the shape function (45) into equations (27) and (28); written compactly in terms of ρ(r) and ξ(r) (expanding ξ(r) reproduces the explicit hypergeometric expressions), they read

P_r(r) = \frac{(\rho(r))^{1-\alpha}}{\alpha r^3}\left[(\alpha - 1)r^3(\rho(r))^{\alpha} - \xi(r)\right], (47)

P_t(r) = \frac{(\rho(r))^{1-\alpha}}{2\alpha r^3}\left[\xi(r) - r\xi'(r) + 2(\alpha - 1)r^3(\rho(r))^{\alpha}\right]. (48)

In this case, at the wormhole throat, the NEC, DEC, and SEC can be expressed as

[\rho(r) + P_r(r)]_{r=r_0} = \frac{\rho_s J_1}{\alpha r_0^2}\left[(2\alpha - 1)r_0^2 - \rho_s^{\alpha}\left(\frac{r_s}{r_0}\right)^{\alpha\kappa}\left(1 + \sqrt{\frac{r_0}{r_s}}\right)^{\alpha(2\kappa - 7)}\right] > 0, (49)

[\rho(r) + P_t(r)]_{r=r_0} = \frac{\rho_s^{1-\alpha}J_1^{1-\alpha}}{2\alpha r_0^2}\left[1 + (2\alpha - 1)r_0^2\rho_s^{\alpha}\left(\frac{r_s}{r_0}\right)^{\alpha\kappa}\left(1 + \sqrt{\frac{r_0}{r_s}}\right)^{\alpha(2\kappa - 7)}\right] > 0, (50)

[\rho(r) - |P_r(r)|]_{r=r_0} = \rho_s J_1 - \frac{\rho_s J_1}{\alpha r_0^2}\left[(\alpha - 1)r_0^2 - \rho_s^{\alpha}\left(\frac{r_s}{r_0}\right)^{\alpha\kappa}\left(1 + \sqrt{\frac{r_0}{r_s}}\right)^{\alpha(2\kappa - 7)}\right] > 0, (51)

[\rho(r) - |P_t(r)|]_{r=r_0} = \rho_s J_1 - \frac{\rho_s^{1-\alpha}J_1^{1-\alpha}}{2\alpha r_0^2}\left[1 - r_0^2\rho_s^{\alpha}\left(\frac{r_s}{r_0}\right)^{\alpha\kappa}\left(1 + \sqrt{\frac{r_0}{r_s}}\right)^{\alpha(2\kappa - 7)}\right] > 0, (52)

[\rho(r) + P_r(r) + 2P_t(r)]_{r=r_0} = \frac{\rho_s^{1-\alpha}J_1^{1-\alpha}}{\alpha r_0^2}\left[1 - \rho_s^{2\alpha}J_1^{2\alpha} + 2(\alpha - 1)r_0^2\rho_s^{\alpha}\left(\frac{r_s}{r_0}\right)^{\alpha\kappa}\left(1 + \sqrt{\frac{r_0}{r_s}}\right)^{\alpha(2\kappa - 7)}\right] > 0, (53)

where

J_1 = \left(\frac{r_s}{r_0}\right)^{\kappa}\left(1 + \sqrt{\frac{r_0}{r_s}}\right)^{2\kappa - 7}. (54)

Figure 4 illustrates the relevant energy conditions, showing that NEC, WEC, DECr, and SEC are satisfied throughout the wormhole spacetime, while DECt is satisfied near the throat. This indicates the absence of exotic matter and confirms that the wormhole solutions can be supported within the DZ DM halo under f(R, Lm) gravity without requiring exotic matter.

FIG. 9: Shows the characteristics of the embedding surface (Left), proper radial length (Middle) against the radial coordinate r, and the full visualization diagram of the wormholes (Right) for the Dekel-Zhao DM model under the f(R, Lm) gravity model-I with parameters ρs = 0.06, κ = 2.12, rs = 6.5, and r0 = 1.4. In the full visualization diagram of the wormholes: α = 2.2 (Yellow), α = 2.3 (Red), α = 2.4 (Blue), α = 2.5 (Green), α = 2.6 (Purple).

V. MODEL-II: WORMHOLE SOLUTIONS FOR f(R, Lm) = R/2 + (1 + λR)Lm

In this section, we derive the wormhole solutions by considering the following non-minimal form of the f(R, Lm) function [116-118]

f(R, L_m) = \frac{R}{2} + (1 + \lambda R)L_m, (55)

where λ denotes the coupling constant. It is noteworthy that for λ = 0, the model reduces to the standard wormhole geometry of GR. In this model, taking a constant redshift function and Lm = ρ(r), the Einstein field equations (18)-(20) reduce to the following form

\rho(r) = \frac{\xi'(r)}{r^2 + 2\lambda\xi'(r)}, (56)

P_r(r) = \frac{2\lambda r\left(r^3 - \xi'(r)\right)\xi'(r) - \left(r^2 + 4\lambda\xi'(r)\right)\xi(r)}{r\left(r^2 + 2\lambda\xi'(r)\right)^2}, (57)

P_t(r) = \frac{r^2\xi(r) - \left(r^3 - 4\lambda\xi(r)\right)\xi'(r)}{2r\left(r^2 + 2\lambda\xi'(r)\right)^2}. (58)

As obtaining an exact solution for this non-linear model is challenging, we introduce a new shape function to generate the wormhole solutions under this framework, given by

\xi(r) = \frac{r_0}{\log(2)}\log\left(1 + \frac{r}{r_0}\right). (59)

The shape function (59) is depicted in Fig. 5, which shows that ξ(r) increases monotonically with the radial coordinate r, while ξ(r) − r intersects the r-axis at r = r0 = 1.4. It consistently satisfies the condition ξ(r)/r ≤ 1 for r ≥ r0 and the flare-out condition. Therefore, the proposed shape function (59) supports the traversable wormhole geometries by satisfying all the essential characteristics. Moreover, it exhibits asymptotic flatness, confirming that the resulting traversable wormhole geometries are asymptotically flat.
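Before specializing to the two halos, it is instructive to evaluate the Model-II NEC combinations numerically from the shape function (59): the pressures come from Eqs. (57)-(58), while, following the construction used in the next subsections, the energy density is taken from the King profile (29). The sketch below is our illustration (assuming NumPy; the parameter values anticipate those of Fig. 6):

```python
import numpy as np

# Model-II with the King halo (parameters anticipating Fig. 6)
r0, lam = 1.4, 0.11
beta, gamma, eta, rs = 0.65, 1.0, -0.5, 1.01
log2 = np.log(2.0)

xi  = lambda r: r0 / log2 * np.log(1.0 + r / r0)    # shape function, Eq. (59)
xip = lambda r: r0 / (log2 * (r + r0))              # its derivative

rho = lambda r: beta * (gamma + (r / rs) ** 2) ** eta     # King density, Eq. (29)
den = lambda r: r * (r**2 + 2.0 * lam * xip(r)) ** 2
P_r = lambda r: (2.0 * lam * r * (r**3 - xip(r)) * xip(r)
                 - (r**2 + 4.0 * lam * xip(r)) * xi(r)) / den(r)            # Eq. (57)
P_t = lambda r: (r**2 * xi(r)
                 - (r**3 - 4.0 * lam * xi(r)) * xip(r)) / (2.0 * den(r))    # Eq. (58)

for r in np.linspace(2.0, 8.0, 4):
    print(f"r = {r:4.1f}  rho+Pr = {rho(r) + P_r(r):+.4f}  rho+Pt = {rho(r) + P_t(r):+.4f}")
```

Over the plotted range r ∈ [2, 8] both NEC combinations come out positive, matching the qualitative behavior reported in Fig. 6.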
In this case, the radial and transverse pressures are determined as follows

P_r(r) = \frac{2\lambda r_0 r\left[r^3\log(2)(r_0 + r) - r_0\right] - r_0(r + r_0)\left[4\lambda r_0 + \log(2)(r + r_0)r^2\right]\log\left(\frac{r + r_0}{r_0}\right)}{r\left[2\lambda r_0 + \log(2)(r + r_0)r^2\right]^2}, (60)

P_t(r) = \frac{r_0(r + r_0)\left[\log\left(\frac{r + r_0}{r_0}\right)\left(4\lambda r_0 + \log(2)(r + r_0)r^2\right) - \log(2)r^3\right]}{2r\left[2\lambda r_0 + \log(2)(r_0 + r)r^2\right]^2}. (61)

Next, we examine all the relevant energy conditions corresponding to the shape function (59) for both the King and Dekel-Zhao DM models.

FIG. 10: Shows the characteristics of the embedding surface (Left), proper radial length (Middle) against the radial coordinate r, and the full visualization diagram of the wormhole (Right) for the wormhole shape function (59) under the f(R, Lm) gravity model-II with parameter r0 = 1.4.

A. King Dark Matter Model

Here, we analyze the energy density profile of the King DM model (29) together with the proposed shape function (59) to examine the corresponding energy conditions. This comprehensive analysis will offer valuable insights into the behavior and characteristics of Model II in sustaining the traversable wormhole structures. In this model, employing the energy density (29) along with the radial and transverse pressures (60) and (61), the NEC, DEC, and SEC at the throat r = r0 are obtained as

[\rho(r) + P_r(r)]_{r=r_0} = \beta\left(\gamma + \frac{r_0^2}{r_s^2}\right)^{\eta} + \frac{\lambda\left[2\log(2)(r_0^3 - 2) - 1\right] - 2r_0^2\log^2(2)}{2\left[\lambda + r_0^2\log(2)\right]^2} > 0, (62)

[\rho(r) + P_t(r)]_{r=r_0} = \beta\left(\gamma + \frac{r_0^2}{r_s^2}\right)^{\eta} + \frac{\log(2)\left[4\lambda + r_0^2(2\log(2) - 1)\right]}{4\left[\lambda + r_0^2\log(2)\right]^2} > 0, (63)

[\rho(r) - |P_r(r)|]_{r=r_0} = \beta\left(\gamma + \frac{r_0^2}{r_s^2}\right)^{\eta} - \frac{\lambda\left[2\log(2)(r_0^3 - 2) - 1\right] - 2r_0^2\log^2(2)}{2\left[\lambda + r_0^2\log(2)\right]^2} > 0, (64)

[\rho(r) - |P_t(r)|]_{r=r_0} = \beta\left(\gamma + \frac{r_0^2}{r_s^2}\right)^{\eta} - \frac{\log(2)\left[4\lambda + r_0^2(2\log(2) - 1)\right]}{4\left[\lambda + r_0^2\log(2)\right]^2} > 0, (65)

[\rho(r) + P_r(r) + 2P_t(r)]_{r=r_0} = \beta\left(\gamma + \frac{r_0^2}{r_s^2}\right)^{\eta} + \frac{(2\lambda r_0 - 1)r_0^2\log(2) - \lambda}{2\left[\lambda + r_0^2\log(2)\right]^2} > 0. (66)

The graphical representation of all relevant energy conditions is shown in Fig. 6 for the parameters λ = 0.11, 0.13, 0.15, 0.17, 0.19 with β = 0.65, γ = 1, η = −0.5, rs = 1.01, and r0 = 1.4. Remarkably, all energy conditions, NEC, WEC, DEC, and SEC, are satisfied throughout the wormhole spacetime, indicating the absence of exotic matter. These results confirm that the wormhole solutions can be maintained within the King DM halo under f(R, Lm) gravity without the need for exotic matter.

FIG. 11: Shows the characteristics of the total gravitational energy for the King DM model under the f(R, Lm) gravity model-I (Left), for the Dekel-Zhao DM model under the f(R, Lm) gravity model-I (Left-Middle), for the King DM model under the f(R, Lm) gravity model-II (Right-Middle), and for the Dekel-Zhao DM model under the f(R, Lm) gravity model-II (Right). In the King DM model, the parameters are β = 0.65, γ = 1, η = −0.5, rs = 1.01, and r0 = 1.4; in the Dekel-Zhao DM model, ρs = 0.06, κ = 2.12, rs = 6.5, and r0 = 1.4.

B. Dekel-Zhao Dark Matter Model

In this subsection, we examine the energy conditions associated with the energy density profile (42) of the DZ DM model and the proposed shape function (59).
Specifically, the NEC, DEC, and SEC at the wormhole throat r = r0 are calculated using the energy density (42) and the radial and transverse pressures (60) and (61), yielding

[\rho(r) + P_r(r)]_{r=r_0} = \rho_s\left(\frac{r_s}{r_0}\right)^{\kappa}\left(1 + \sqrt{\frac{r_0}{r_s}}\right)^{2\kappa - 7} + \frac{\lambda\left[2\log(2)(r_0^3 - 2) - 1\right] - 2r_0^2\log^2(2)}{2\left[\lambda + r_0^2\log(2)\right]^2} > 0, (67)

[\rho(r) + P_t(r)]_{r=r_0} = \rho_s\left(\frac{r_s}{r_0}\right)^{\kappa}\left(1 + \sqrt{\frac{r_0}{r_s}}\right)^{2\kappa - 7} + \frac{\log(2)\left[4\lambda + r_0^2(2\log(2) - 1)\right]}{4\left[\lambda + r_0^2\log(2)\right]^2} > 0, (68)

[\rho(r) - |P_r(r)|]_{r=r_0} = \rho_s\left(\frac{r_s}{r_0}\right)^{\kappa}\left(1 + \sqrt{\frac{r_0}{r_s}}\right)^{2\kappa - 7} - \frac{\lambda\left[2\log(2)(r_0^3 - 2) - 1\right] - 2r_0^2\log^2(2)}{2\left[\lambda + r_0^2\log(2)\right]^2} > 0, (69)

[\rho(r) - |P_t(r)|]_{r=r_0} = \rho_s\left(\frac{r_s}{r_0}\right)^{\kappa}\left(1 + \sqrt{\frac{r_0}{r_s}}\right)^{2\kappa - 7} - \frac{\log(2)\left[4\lambda + r_0^2(2\log(2) - 1)\right]}{4\left[\lambda + r_0^2\log(2)\right]^2} > 0, (70)

[\rho(r) + P_r(r) + 2P_t(r)]_{r=r_0} = \rho_s\left(\frac{r_s}{r_0}\right)^{\kappa}\left(1 + \sqrt{\frac{r_0}{r_s}}\right)^{2\kappa - 7} + \frac{(2\lambda r_0 - 1)r_0^2\log(2) - \lambda}{2\left[\lambda + r_0^2\log(2)\right]^2} > 0. (71)

In this model, all the energy conditions, NEC, WEC, DEC, and SEC, are satisfied throughout the wormhole spacetime for the parameters λ = 0.11, 0.13, 0.15, 0.17, 0.19 with ρs = 0.06, κ = 2.12, rs = 6.5, and r0 = 1.4, as is clear from Fig. 7. Thus, the wormhole solutions supported by the proposed shape function (59) can also be sustained within the DZ DM halo under f(R, Lm) gravity without exotic matter.

VI. EMBEDDING SURFACE AND TOTAL GRAVITATIONAL ENERGY

In this section, we investigate some physical properties of the proposed wormhole structures, including the embedding surface and the total gravitational energy. In three-dimensional space, the embedded surface Z(r) of the axially symmetric wormhole is given by [107]

ds^2 = \left[1 + \left(\frac{dZ(r)}{dr}\right)^2\right]dr^2 + r^2 d\varphi^2. (72)

FIG. 12: Shows the characteristics of deflection angles for the King DM model under the f(R, Lm) gravity model-I with parameters β = 0.65, γ = 1, η = −0.5, rs = 1.01, and r0 = 1.4 (Left), for the Dekel-Zhao DM model under the f(R, Lm) gravity model-I with parameters ρs = 0.06, κ = 2.12, rs = 6.5, and r0 = 1.4 (Middle), and for the wormhole shape function (59) under the f(R, Lm) gravity model-II with parameter r0 = 1.4 (Right).

Also, at the equatorial slice θ = π/2 with constant time t, the metric (10) reads as

ds^2 = \left(1 - \frac{\xi(r)}{r}\right)^{-1}dr^2 + r^2 d\varphi^2. (73)

Thus, combining results (72) and (73) leads to the following differential equation for the embedding surface Z(r)

\frac{dZ(r)}{dr} = \pm\left(\frac{r}{\xi(r)} - 1\right)^{-1/2}. (74)

The above result indicates that dZ(r)/dr diverges at the throat, causing Z(r) to become vertical there. However, the embedding surface Z(r) can be expressed as

Z(r) = \pm\int_{r_0^+}^{r}\frac{dr}{\sqrt{r/\xi(r) - 1}}, (75)

where the ± sign denotes the upper and lower regions of the wormhole geometry. Another key physical quantity of the wormhole is the proper radial distance, which we have already defined in Eq. (11). The embedding surfaces and proper radial distances for the King and DZ DM models under model-I of f(R, Lm) gravity are illustrated in Figs. 8 and 9, respectively. Fig. 10 illustrates the embedding surface and proper radial distance for model-II of f(R, Lm) gravity. In the embedding surface, the regions with positive curvature (Z(r) > 0) and negative curvature (Z(r) < 0) correspond to the wormhole's upper and lower universes, respectively, with the two joined at the wormhole throat. Moreover, the f(R, Lm) gravity parameter α has a notable effect on the curvature of the wormholes; increasing α diminishes the spacetime curvature of the wormholes.
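Both the embedding integral (75) and the proper-distance integral (11) have an integrable inverse-square-root singularity at the throat, so they are straightforward to evaluate numerically once the lower limit is nudged slightly off r0. The sketch below is our illustration (assuming SciPy), using the Model-II shape function (59):

```python
import numpy as np
from scipy.integrate import quad

r0 = 1.4
xi = lambda r: r0 / np.log(2.0) * np.log(1.0 + r / r0)   # shape function of Eq. (59)
eps = 1e-9                                                # nudge off the throat singularity

def l_of_r(r):
    """Proper radial distance, Eq. (11) (upper-universe sign)."""
    val, _ = quad(lambda s: 1.0 / np.sqrt(1.0 - xi(s) / s), r0 + eps, r)
    return val

def Z_of_r(r):
    """Embedding surface, Eq. (75) (upper-universe sign)."""
    val, _ = quad(lambda s: 1.0 / np.sqrt(s / xi(s) - 1.0), r0 + eps, r)
    return val

for r in (2.0, 4.0, 8.0):
    l, Z = l_of_r(r), Z_of_r(r)
    print(f"r = {r:4.1f}  l(r) = {l:6.3f}  Z(r) = {Z:6.3f}  |l| >= r - r0: {l >= r - r0}")
```

Since the integrand of (11) is everywhere ≥ 1, the bound |l(r)| ≥ r − r0 holds automatically, as the printed check confirms.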
The proper radial distance for the current wormhole solutions is finite and increases monotonically, also satisfying the condition |l(r)| ≥ r − r0. Furthermore, complete visualizations of the wormholes for each case are shown in Figs. 8-10. These analyses offer valuable insights into the physical viability of the proposed wormhole models within the f(R, Lm) gravity framework. Now, we proceed to examine the characteristics of the total gravitational energy for the present wormhole models. The total gravitational energy, denoted by Eg, is defined as [119, 120]

E_g = Mc^2 - E_M. (76)

The term Mc² represents the total energy, defined as

Mc^2 = \frac{1}{2}\int_{r_0}^{r} T^t_{\ t}\, r^2\, dr + \frac{r_0}{2}, (77)

where the term r0/2 denotes the effective mass [120]. The term EM denotes the total contribution from supplementary energy components, including rest, internal, kinetic, and related energies, and is expressed as

E_M = \frac{1}{2}\int_{r_0}^{r}\frac{r^2 T^t_{\ t}\, dr}{\sqrt{1 - \xi(r)/r}}. (78)

Thus, combining the results (77) and (78), the total gravitational energy (76) can be expressed as

E_g = \frac{1}{2}\int_{r_0}^{r}\left[1 - \frac{1}{\sqrt{1 - \xi(r)/r}}\right]T^t_{\ t}\, r^2\, dr + \frac{r_0}{2}. (79)

The total gravitational energy is considered attractive when Eg < 0 and repulsive when Eg > 0 [121]. For the present wormhole models, we show the graphical representation of the total gravitational energy in Fig. 11, indicating Eg > 0. This suggests a repulsive gravitational effect, which supports the potential existence of viable traversable wormholes within the King and DZ DM halos under the f(R, Lm) gravity framework.

VII. STRONG DEFLECTION ANGLE

In this section, we explore how the wormhole geometries considered in our study affect the path of light rays propagating in their vicinity. Specifically, we focus on calculating the strong gravitational deflection angle, which quantifies how much light is bent when it passes close to the wormhole throat. This analysis is crucial for understanding the gravitational lensing effects of these wormholes, which can provide observable signatures distinguishing them from black holes or other compact objects. Indeed, strong gravitational lensing provides a powerful tool for probing the geometry of spacetime and the properties of astrophysical objects [122, 123]. In this context, Bozza [124] introduced a systematic analytical approach for examining light deflection in spherically symmetric spacetimes subjected to strong gravitational fields. The strong deflection angle α(rc) corresponding to the wormhole metric (10) can be written as [124, 125]

\alpha(r_c) = -\pi + 2\int_{r_c}^{\infty}\frac{e^{\Phi(r)}\, dr}{r^2\sqrt{1 - \frac{\xi(r)}{r}}\sqrt{\frac{1}{\chi^2} - \frac{e^{2\Phi(r)}}{r^2}}}, (80)

where rc denotes the closest approach of the light ray to the wormhole throat, and χ represents the impact parameter. For null geodesics, these quantities are related as χ = rc e^{−Φ(rc)}. The obtained deflection angles for the proposed wormhole structures are presented in Fig. 12 as a function of the closest approach distance rc. The results indicate that the deflection angle diminishes as rc increases, i.e., weaker light bending at greater distances from the wormhole throat. In contrast, near the throat, the deflection angle grows sharply and diverges, highlighting the intense curvature and strong gravitational influence in that region. Furthermore, the f(R, Lm) gravity parameter α plays a significant role: larger α values reduce the deflection angle, indicating a decrease in gravitational lensing strength consistent with the reduced spacetime curvature.
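Equation (80) can likewise be evaluated numerically. For a vanishing redshift function (so that χ = rc), the substitution r = rc/sin θ maps the integration domain to (0, π/2] and removes the turning-point singularity, giving α(rc) = −π + 2∫₀^{π/2} [1 − ξ(rc/sin θ) sin θ/rc]^{−1/2} dθ. The sketch below is our illustration (assuming SciPy), applied to the Model-II shape function for comparison with the right panel of Fig. 12:

```python
import numpy as np
from scipy.integrate import quad

def deflection_angle(xi, rc):
    """Strong deflection angle of Eq. (80) for a vanishing redshift function.
    The substitution r = rc/sin(theta) regularizes the turning point; the
    tiny lower cutoff only avoids evaluating sin(0)."""
    integrand = lambda th: 1.0 / np.sqrt(1.0 - xi(rc / np.sin(th)) * np.sin(th) / rc)
    val, _ = quad(integrand, 1e-8, np.pi / 2.0, limit=200)
    return -np.pi + 2.0 * val

r0 = 1.4
xi_log = lambda r: r0 / np.log(2.0) * np.log(1.0 + r / r0)   # Eq. (59), model-II

for rc in (1.45, 1.6, 2.0, 4.0, 8.0):
    print(f"rc = {rc:4.2f}  alpha(rc) = {deflection_angle(xi_log, rc):7.3f} rad")
```

As rc approaches the throat the integrand blows up at θ = π/2 and the computed angle grows sharply, reproducing the divergent trend of Fig. 12; for ξ ≡ 0 the routine returns α = 0, a useful flat-space sanity check.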
VIII. DISCUSSIONS AND CONCLUSION

Modified gravity theories offer an alternative explanation for the late-time cosmic acceleration. Among them, f(R) gravity, where f(R) is an arbitrary function of the Ricci scalar R, is particularly significant. More recently, the generalized f(R, Lm) gravity, depending on both R and the matter Lagrangian, has been introduced, with extensive astrophysical and cosmological implications. Several investigations have demonstrated that f(R, Lm) gravity offers a suitable theoretical framework for constructing and sustaining traversable wormhole geometries [126-130]. It is noteworthy that, in standard Einstein gravity, sustaining wormhole geometries generally necessitates exotic matter [108]. In contrast, numerous studies have demonstrated that within modified gravity frameworks, wormhole solutions can often be constructed without the need for exotic matter [131-134]. In this study, we have explored asymptotically flat non-exotic traversable wormhole structures within the King and DZ DM halos in the framework of f(R, Lm) gravity under the consideration of two functional forms of this theory, Model-I: f(R, Lm) = (R/2) + L^α_m and Model-II: f(R, Lm) = (R/2) + (1 + λR)Lm. The key features of the proposed wormhole solutions can be summarized as follows:

• For Model-I wormholes: In Model-I of f(R, Lm) gravity, the wormhole solutions are sustained by two different DM density profiles, the King model and the Dekel-Zhao model. The shape function (32) generated by the King model increases monotonically with r and decreases with higher α, while satisfying ξ(r)/r ≤ 1 for r ≥ r0 and fulfilling the flare-out condition for the parameters α = 2.2, 2.3, 2.4, 2.5, 2.6 with β = 0.65, γ = 1, η = −0.5, rs = 1.01, and r0 = 1.4 (see Fig. 1). These behaviors confirm that the King-DM-based shape function encapsulates all the fundamental characteristics of a traversable wormhole. Furthermore, its asymptotic flatness indicates that the resulting wormhole geometries are asymptotically flat. To examine the physical characteristics of the matter supporting the wormholes, we analyze the relevant energy conditions. It is found that NECr, NECt, WECr, WECt, DECr, and SEC are satisfied throughout the wormhole spacetime, while DECt holds near the throat (see Fig. 2). This indicates the absence of exotic matter, confirming that the King DM halo can sustain traversable wormholes in f(R, Lm) gravity without exotic matter. The shape function (45) derived from the Dekel-Zhao model is also monotonically increasing with r, satisfies ξ(r)/r ≤ 1 for r ≥ r0, meets the flare-out condition, and ξ(r)/r → 0 as r → ∞ for α = 2.2, 2.3, 2.4, 2.5, 2.6 with ρs = 0.06, κ = 2.12, rs = 6, and r0 = 1.4, as shown in Fig. 3. Therefore, the obtained shape function, in this case, is also well-suited for constructing asymptotically flat traversable wormhole geometries. Moreover, these wormhole structures are supported by non-exotic matter, satisfying all the relevant energy conditions, as is clear from Fig. 4.

• For Model-II wormholes: In Model-II of f(R, Lm) gravity, we have introduced a new shape function (59) to construct the corresponding wormhole solutions. The newly proposed shape function, depicted in Fig. 5, exhibits a monotonically increasing behavior with respect to the radial coordinate r, while ξ(r) − r intersects the r-axis at r = r0 = 1.4. It consistently satisfies the condition ξ(r)/r ≤ 1 for r ≥ r0 and meets the flare-out requirement at the throat.
These features confirm that the shape function effectively generates traversable wormhole geometries, fulfilling all necessary conditions. Additionally, its asymptotically flat nature ensures that the resulting spacetime remains asymptotically flat. These wormhole configurations within the King DM halo are supported by non-exotic matter, satisfying all the energy conditions, NEC, WEC, DEC, and SEC, throughout the spacetime for the parameter set λ = 0.11, 0.13, 0.15, 0.17, 0.19 with β = 0.65, γ = 1, η = −0.5, rs = 1.01, and r0 = 1.4, as illustrated in Fig. 6. Moreover, the wormhole configurations can also be sustained within the DZ DM halo without invoking exotic matter, as all energy conditions are satisfied for the parameter set λ = 0.11, 0.13, 0.15, 0.17, 0.19 with ρs = 0.06, κ = 2.12, rs = 6.5, and r0 = 1.4, as shown in Fig. 7. We have examined some physical characteristics of the proposed wormhole geometries to gain a deeper understanding of their structure. In particular, we have analyzed the embedding surface and proper radial distance, which provide a geometric visualization of the wormhole throat and its curvature properties in an embedded Euclidean space. The embedding surfaces and proper radial distances for the King and DZ DM models under Model-I of f(R, Lm) gravity are shown in Figs. 8 and 9, while Fig. 10 presents the corresponding results for Model-II. In the embedding diagrams, the regions with positive curvature (Z(r) > 0) and negative curvature (Z(r) < 0) represent the upper and lower universes of the wormhole, respectively, joined smoothly at the throat. The f(R, Lm) gravity parameter α significantly influences the wormhole geometry, with higher α values leading to reduced spacetime curvature. The proper radial distance for all considered models is finite, increases monotonically, and satisfies |l(r)| ≥ r − r0. Furthermore, we have analyzed the total gravitational energy to determine the nature of the gravitational interaction, whether attractive or repulsive, within the wormhole configurations. Interestingly, the results reveal that the total gravitational energy is repulsive in nature, as is clear from Fig. 11, which supports the potential existence of viable traversable wormholes within the King and DZ DM halos under the f(R, Lm) gravity framework. Overall, these analyses provide deeper insight into the geometric and physical viability of the proposed wormhole configurations within the f(R, Lm) gravity framework. In addition, we have investigated gravitational lensing, emphasizing the strong deflection effects generated by the wormhole geometries in each of the considered f(R, Lm) gravity models, employing Bozza's formalism [124]. Our findings reveal that the deflection angles associated with the King and DZ DM profiles in Model-I f(R, Lm) gravity and the introduced shape function in Model-II f(R, Lm) gravity exhibit a decreasing trend with increasing closest approach distance rc (see Fig. 12). This implies weaker light bending farther from the wormhole throat. Near the throat, however, the deflection angle increases rapidly and diverges, indicating strong spacetime curvature and intense gravitational effects, where light can be trapped in unstable circular orbits. Moreover, the f(R, Lm) gravity parameter α has a notable impact on the deflection angles; larger values of α lead to smaller deflection angles, reflecting a reduction in gravitational lensing strength due to the diminished curvature of spacetime.
In recent years, observations have confirmed black holes at galactic centers, but no direct evidence for wormholes exists. Consequently, the scientific community continues to search for observational signatures of wormholes. Following the pioneering work of Morris and Thorne, numerous theoretical models have been developed within GR and modified gravity theories, often at galactic scales. In this study, we have constructed new asymptotically flat traversable wormhole solutions within the framework of f(R, Lm) gravity using the King and DZ DM models, without invoking exotic matter. Remarkably, these wormhole configurations satisfy all essential geometric and physical conditions, demonstrating their viability within the King and DZ DM halos under f(R, Lm) gravity. Additionally, we have analyzed the corresponding deflection angles for each DM model, providing novel insights into gravitational lensing effects in the context of modified gravity. It is important to emphasize that these wormhole solutions are purely theoretical, and ongoing efforts in the scientific community aim to identify observational signatures, such as through scalar wave scattering, gravitational lensing [135], and gamma-ray burst light curve analyses [136]. Furthermore, our study may inspire future research on constructing non-exotic traversable wormhole geometries within the King and DZ DM halos under other modified gravity theories, thereby extending their applicability across a wide range of alternative gravitational frameworks.

Acknowledgments

FR would like to thank the authorities of the Inter-University Centre for Astronomy and Astrophysics, Pune, India for providing research facilities.

[1] L. Flamm, Physikalische Zeitschrift 17, 448 (1916).
[2] A. Einstein and N. Rosen, Phys. Rev. 48, 73 (1935).
[3] J. A. Wheeler, Ann. Phys. 2, 604 (1957).
[4] R. W. Fuller and J. A. Wheeler, Phys. Rev. 128, 919 (1962).
[5] M. S. Morris and K. S. Thorne, Am. J. Phys. 56, 395 (1988).
[6] M. Visser, Lorentzian Wormholes: From Einstein to Hawking, Springer, (1995).
[7] J. A. Wheeler, Phys. Rev. 97, 511 (1955).
[8] M. Visser, S. Kar, and N. Dadhich, Phys. Rev. Lett. 90, 201102 (2003).
[9] F. S. N. Lobo and M. A. Oliveira, Phys. Rev. D 80, 104012 (2009).
[10] T. Azizi, Int. J. Theor. Phys. 52, 3486 (2013).
[11] S. H. Mazharimousavi and M. Halilsoy, Mod. Phys. Lett. A 31, 1650192 (2016).
[12] P. Moraes and P. Sahoo, Phys. Rev. D 96, 044038 (2017).
[13] A. K. Mishra, U. K. Sharma, V. C. Dubey, and A. Pradhan, Astrophys. Space Sci. 365, 34 (2020).
[14] A. Chanda, S. Dey, and B. C. Paul, Gen. Rel. Grav. 53, 78 (2021).
[15] K. A. Bronnikov and S. W. Kim, Phys. Rev. D 67, 064027 (2003).
[16] M. L. Camera, Phys. Lett. B 573, 27 (2003).
[17] F. Parsaei and N. Riazi, Phys. Rev. D 91, 024015 (2015).
[18] F. Javed, G. Mustafa, A. Övgün, and M. F. Shamir, Eur. Phys. J. Plus 137, 61 (2021).
[19] I. P. Lobo, M. G. Richarte, J. P. M. Graça, and H. Moradpour, Eur. Phys. J. Plus 135, 550 (2020).
[20] T. Tangphati, C. Muniz, A. Pradhan, and A. Banerjee, Phys. Dark Univ. 42, 101364 (2023).
[21] G. Mustafa, Z. Hassan, P. Moraes, and P. Sahoo, Phys. Lett. B 821, 136612 (2021).
[22] Z. Hassan, G. Mustafa, J. R. L. Santos, and P. K. Sahoo, Europhys. Lett. 139, 39001 (2022).
[23] A. Banerjee, A. Pradhan, T. Tangphati, and F. Rahaman, Eur. Phys. J. C 81, 1031 (2021).
[24] Z. Hassan, S. Ghosh, P. K. Sahoo, and V. S. H. Rao, Gen. Rel. Grav. 55, 90 (2023).
[25] R. Solanki, Z. Hassan, and P. K. Sahoo, Chin. J. Phys. 85, 74 (2023).
[26] M. F. Shamir, G. Mustafa, S. Waseem, and M. Ahmad, Commun. Theor. Phys. 73, 115401 (2021).
[27] R. Shaikh, Phys. Rev. D 98, 064033 (2018).
[28] S. Bahamonde, U. Camci, S. Capozziello, and M. Jamil, Phys. Rev. D 94, 084042 (2016).
[29] K. Jusufi, Phys. Rev. D 98, 044016 (2018).
[30] K. Jusufi, M. Jamil, and M. Rizwan, Gen. Rel. Grav. 51, 102 (2019).
[31] T. Harko and F. S. N. Lobo, Eur. Phys. J. C 70, 373 (2010).
[32] T. Harko, Phys. Lett. B 669, 376 (2008).
[33] T. Harko, Phys. Rev. D 81, 084050 (2010).
[34] T. Harko, Phys. Rev. D 81, 044021 (2010).
[35] S. Nesseris, Phys. Rev. D 79, 044015 (2009).
[36] V. Faraoni, Phys. Rev. D 76, 127501 (2007).
[37] V. Faraoni, Phys. Rev. D 80, 124040 (2009).
[38] V. Faraoni, Cosmology in Scalar-Tensor Gravity, Kluwer Academic, Dordrecht, (2004).
[39] O. Bertolami, J. Paramos, and S. Turyshev, arXiv:gr-qc/0602016 (2006).
[40] J. Wang and K. Liao, Class. Quantum Grav. 29, 215016 (2012).
[41] B. S. Gonçalves, P. H. R. S. Moraes, and B. Mishra, Fortschr. Phys. 71(8), 2200153 (2023).
[42] R. Solanki, B. Patel, L. V. Jaybhaye, and P. K. Sahoo, Commun. Theor. Phys. 75, 075401 (2023).
[43] L. V. Jaybhaye, R. Solanki, S. Mandal, and P. K. Sahoo, Universe 9, 163 (2023).
[44] M. Zeyauddin, A. Dixit, and A. Pradhan, Int. J. Geom. Meth. Mod. Phys. 21(09), 2450167 (2023).
[45] N. Myrzakulov, M. Koussour, A. H. A. Alnadhief, and A. Abebe, Eur. Phys. J. Plus 138, 852 (2023).
[46] D. C. Maurya, Grav. Cosm. 29, 315 (2023).
[47] R. Solanki, et al., Commun. Theor. Phys. 75, 075401 (2023).
[48] J. K. Singh, et al., New Astron. 104, 102070 (2023).
[49] L. V. Jaybhaye, et al., Phys. Dark Univ. 40, 101223 (2023).
[50] J. C. Fabris, et al., Eur. Phys. J. Plus 138, 232 (2023).
[51] A. Pradhan, et al., Int. J. Geom. Meth. Mod. Phys. 20, 1230105 (2023).
[52] D. C. Maurya, New Astron. 100, 101974 (2023).
[53] G. A. Carvalho, et al., Eur. Phys. J. C 82, 1096 (2022).
[54] R. V. Labato, G. A. Carvalho, and C. A. Bertulani, Eur. Phys. J. C 81, 1013 (2021).
[55] T. Harko and S. Shahidi, Eur. Phys. J. C 82, 1003 (2022).
[56] T. Harko and M. J. Lake, Eur. Phys. J. C 75, 60 (2015).
[57] K. K. Nandi, Y.-Z. Zhang, and K. B. Vijaya Kumar, Phys. Rev. D 70, 127503 (2004).
[58] M. S. Churilova, R. A. Konoplya, Z. Stuchlik, and A. Zhidenko, JCAP 10, 010 (2021).
[59] R. A. Konoplya and A. Zhidenko, Phys. Rev. Lett. 128, 091104 (2022).
[60] A. Ashtekar, J. Phys. Conf. Ser. 189, 012003 (2009).
[61] R. Sengupta, S. Ghosh, and M. Kalam, Eur. Phys. J. C 83, 830 (2023).
[62] C. R. Muniz, T. Tangphati, R. M. P. Neves, and M. B. Cruz, Phys. Dark Univ. 46, 101673 (2024).
[63] N. Aghanim, et al. (Planck), Astron. Astrophys. 641, A6 (2020).
[64] F. Zwicky, Helv. Phys. Acta 6, 110 (1933).
[65] V. C. Rubin and W. K. Ford, Jr., Astrophys. J. 159, 379 (1970).
[66] M. Persic, P. Salucci, and F. Stel, Mon. Not. Roy. Astron. Soc. 281, 27 (1996).
[67] G. Bertone and D. Hooper, Rev. Mod. Phys. 90, 045002 (2018).
[68] L. Randall, Nature 557, 2 (2018).
[69] A. Ashraf, et al., Phys. Dark Universe 47, 101787 (2025).
[70] G. Mustafa, et al., Phys. Dark Universe 47, 101765 (2025).
[71] G. Mustafa, et al., Phys. Dark Universe 47, 101753 (2025).
[72] A. Ashraf, et al., Phys. Dark Universe 47, 101725 (2025).
[73] A. Ashraf, et al., Phys. Dark Universe 48, 101836 (2025).
[74] A. Ashraf, et al., Phys. Dark Universe 47, 101823 (2025).
[75] A. Ditta, et al., Phys. Dark Universe 47, 101818 (2025).
[76] A. Ditta, et al., Phys. Dark Universe 46, 101573 (2024).
[77] A. Bouzenada, et al., Nucl. Phys. B 1017, 116928 (2025).
B 1017, 116928 (2025). [78] A. Saleem, et al., Nucl. Phys. B 1017, 116926 (2025). [79] C. A. Arg¨uelles, et al., Rev. Mod. Phys. 93, 035007 (2021). [80] D. J. E. Marsh, D. Ellis, and V. M. Mehta, Dark Matter: Evidence, Theory, and Constraints, Princeton Series in Astrophysics, Princeton University Press, (2024). [81] Y.-D. Tsai, J. Eby, and M. S. Safronova, Nature Astron. 7, 113 (2023). [82] A. D. S. Souza, C. R. Muniz, R. M. P. Neves, and M. B. Cruz, Ann. Phys. 472, 169859 (2025). [83] Z. Xu, M. Tang, G. Cao, and S.-N. Zhang, Eur. Phys. J. C 80, 70 (2020). [84] C. R. Muniz and R. V. Maluf, Annals Phys. 446, 169129 (2022). [85] G. Mustafa, S. K. Maurya, and S. Ray, Fortsch. Phys. 71, 2200129 (2023). [86] R. Radhakrishnan, et al., Symmetry 16, 1007 (2024). [87] A. Errehymy, et al., Eur. Phys. J. C 84, 573 (2024). [88] S. K. Maurya, J. Kumar, S. Kiroriwal, and A. Errehymy, Phys. Dark Univ. 46, 101564 (2024). [89] Z. Hassan, and P. K. Sahoo, Annalen Phys. 536, 2400114 (2024). [90] Z. Xu, M. Tang, G. Cao, and S.-N. Zhang, Eur. Phys. J. C 80, 70 (2020). [91] J. F. Navarro, C. S. Frenk, and S. D. M. White, Astrophys. J. 462, 563 (1996). [92] K. G. Begeman, A. H. Broeils, and R. H. Sanders, Mon. Not. Roy. Astron. Soc. 249, 523 (1991). [93] P. H. Chavanis, M. Lemou, and F. M´ehats, Phys. Rev. D 91(6), 063531 (2015). [94] H. Zhao, Mon. Not. R. Astron. Soc. 278, 488 (1996). [95] H. Zhao, Mon. Not. R. Astron. Soc. 287, 525 (1997). [96] A. Dekel, G. Ishai, A. A. Dutton, and A. V. Macci`o, Mon. Not. R. Astron. Soc. 468, 1005 (2017). [97] J. Freundlich, et al., Mon. Not. R. Astron. Soc. 499, 2912 (2020). [98] P. Salucci, Universe 11, 67 (2025). [99] D. Batic, J. M. Faraji, and M. Nowakowski, Eur. Phys. J. C 82, 759 (2022). [100] A. A. Badawi, S. Shaymatov, and Y. Sekhmani, JCAP 02, 014 (2025). [101] A. Einstein, Ann. Math. 40, 922 (1939). [102] V. Cardoso, et al., Phys. Rev. Lett. 129, 241103 (2022). [103] L. Hernquist, Astrophys. J. 356, 359 (1990). [104] M. Khatri, and P. K. Sahoo, Phys. Dark Universe 49, 102042 (2025). [105] A. Errehymy, O. Donmez, A. Syzdykova, K. Myrzakulov, S. Muminov, A. Dauletov, and J. Rayimbaev, Ann. Phys. 480, 170105 (2025).) [106] T. Harko and F. S. N. Lobo, Eur. Phys. J. C 70, 373-379 (2010). [107] M. S. Morris, K.S. Thorne, Am. J. Phys. 56, 395 (1988). [108] M. S. Morris, K. S. Thorne and U. Yurtsever, Phys. Rev. Lett. 61, 1446 (1988). [109] A. Raychaudhuri, Phys. Rev. 98, 1123 (1955). [110] L. V. Jaybhaye et al., Phys. Lett. B 831, 137148 (2022). [111] I. R. King, ApJL 174, L123 (1972). [112] A. N. Baushev, New Astronomy 60 69-73 (2018). [113] H. Zhao, Mon. Not. R. Astron. Soc. 278, 488 (1996). [114] H. Zhao, Mon. Not. R. Astron. Soc. 287, 525 (1997). 21 [115] J. Freundlich, et al., Mon. Not. R. Astron. Soc. 499, 2912 (2020). [116] N. M. Garcia, F. S. N. Lobo, Phys. Rev. D 82, 104018 (2010). [117] N. M. Garcia, F.S.N. Lobo, Class. Quantum Gravity 28, 085018 (2011). [118] R. V. Labato, G. A. Carvalho, and C. A. Bertulani, Eur. Phys. J. C 81, 1013 (2021). [119] D. Lynden-Bell, J. Katz, and J. Bicak, Phys. Rev. D 75, 024040 (2007). [120] K. K. Nandi, Y. Z. Zhang, R. G. Cai, and A. Panchenko, Phys. Rev. D 79, 024011 (2009). [121] C. W. Misner, K. S. Thorne, and J. A. Wheeler, Gravitation, San Francisco, (1973). [122] K. S. Virbhadra, D. Narasimha, and S. M. Chitre, Astron. Astro. Phys. 337, 1 (1998). [123] C. M. Claudel, K. S. Virbhadra, and G. F. R. Ellis, J. Math. Phys. 42, 818 (2001). [124] V. Bozza, Phys. Rev. D 66, 103001 (2002). [125] V. Bozza et al., Gen. Relativ. 
Gravit. 33, 1535 (2001). [126] N. S. Kavya, et al., Chinese Journal of Physics 84 1-11 (2023). [127] R. Solanki, Z. Hassan, and P. K. Sahoo, Chinese Journal of Physics 85, 74-88 (2023). [128] K. De, S. Mitra, and U. C. De, International Journal of Geometric Methods in Modern Physics 22 2450265 (2025). [129] M. M. Rizwan, Z. Hassan, and P. K. Sahoo, Physics Letters B 860, 139152 (2025). [130] A. Errehymy, Chinese Journal of Physics 89, 56-68 (2024). [131] M. K. Zangeneh, F. S. N. Lobo and M. H. Dehghani, Phys. Rev. D 92, 124049 (2015). [132] N. Sarkar, S. Sarkar, M. Sarkar and F. Rahaman, Physics of the Dark Universe 47, 101828 (2025). [133] C. G. Boehmer, T. Harko and F. S. N. Lobo, Phys. Rev. D 85, 044033 (2012). [134] M. F. Shamir, and I. Fayyaz, Eur. Phys. J. C 80, 1-9 (2020) [135] P. K. F Kuhfittig, Eur. Phys. J. C 74, 2818 (2014). [136] D. Torres, G. Romero, L. Anchordoqui, Phys. Rev. D 58, 123001 (1998).
Non-exotic traversable wormholes with strong deflection angle in King and Dekel-Zhao dark matter halos under f(R, Lm) gravity
Susmita Sarkar,1,∗ Nayan Sarkar,2,† Abdelmalek Bouzenada,3,‡ and Farook Rahaman4,§
1 -721606, West Bengal, India
2 741152, West Bengal, India
3Laboratory of Theoretical and Applied Physics, Echahid Cheikh Larbi Tebessi University 12001, Algeria
4 -700 032, West Bengal, India
(Dated: October 17, 2025)
In this article, we investigate asymptotically flat non-exotic traversable wormhole geometries within the King and Dekel-Zhao dark matter halos in the framework of f(R, Lm) gravity. Two functional forms of the theory are considered: Model-I: f(R, Lm) = R/2 + Lm^α and Model-II: f(R, Lm) = R/2 + (1 + λR)Lm. For both models, wormhole solutions are obtained and analyzed using the King and Dekel-Zhao dark matter density profiles, allowing us to explore how the underlying matter distribution influences the wormhole structures. The energy conditions are examined to verify the feasibility of sustaining the wormhole geometries with non-exotic matter, while embedding surfaces, the proper radial distance, and the total gravitational energy are studied to illustrate the wormholes' physical viability and traversability. Moreover, we compute the strong deflection angle and discuss its implications for gravitational lensing, pointing out possible observational signatures of such wormhole configurations. Our results indicate that within f(R, Lm) gravity, and for appropriate parameter choices, dark matter environments can sustain physically consistent non-exotic traversable wormhole geometries with gravitational lensing signatures, providing new insights into the interplay between modified gravity, dark matter, and astrophysical observations.
Keywords: f(R, Lm) Gravity; Wormhole; Energy Conditions; Non-Exotic Matter; Deflection Angle.
I. INTRODUCTION
In contemporary theoretical astrophysics, wormholes are regarded as highly intriguing structures that naturally emerge from the framework of Einstein's general relativity (GR) when special kinds of matter distributions are considered. These hypothetical objects can serve as tunnels or conduits that connect either widely separated regions of the same universe or even two different universes. The first notion of such a tunnel-like configuration was introduced by Flamm [1], who analyzed the Schwarzschild solution in 1916. Later, Einstein and Rosen [2] advanced this perspective by presenting a bridge-type connection between two distinct external spacetime regions, leading to the concept of the Einstein-Rosen bridge (ERB). Wheeler subsequently coined the term wormhole [3], and together with Fuller he showed that these structures are dynamically unstable and prone to collapse after their creation, which rules out their utility for interstellar travel in that form [4]. Interest in traversable Lorentzian wormholes was reignited when Morris and Thorne [5] showed that a spherically symmetric wormhole geometry can, in principle, provide a two-way passage between two asymptotically flat spacetimes, provided that exotic matter is present at the throat to keep the passage open. Such constructions necessarily lead to violations of the null energy condition (NEC), raising the question of whether exotic matter might arise in nature within the framework of quantum field theory [7].
Visser's monograph [6] and later work with collaborators [8] showed that specific spacetime geometries could sustain traversable wormholes with a minimized requirement of exotic matter, thereby enhancing their physical plausibility. Alongside these efforts, the development of modified theories of gravity has provided powerful new approaches for studying wormholes, motivated by the fact that GR with ordinary matter alone cannot explain phenomena such as the late-time acceleration of the universe or the presence of dark matter (DM). Several modified frameworks, including f(R) gravity [9-11], f(R, T) gravity [12-14], brane-world models [15-17], Rastall gravity [18-20], and f(Q) gravity [21-24], have been employed to construct wormhole solutions by assuming or deriving suitable shape-function models, with detailed examinations of stability and energy conditions. More recently, non-linear f(R, Lm) models have been studied in the context of traversable wormholes [25], and further works have investigated their compatibility with various energy conditions [26]. In this context, additional important contributions exploring wormhole geometries within modified gravity frameworks are available in the literature [27-30].
The f(R, Lm) framework was introduced by Harko et al. [31] as a generalization of the f(R) models. In this formulation, R denotes the Ricci scalar curvature that encodes the geometric features of spacetime, while Lm is the matter Lagrangian density, so the framework combines the geometrical attributes of gravity with the material distribution of matter. The relation between these two elements introduces a non-trivial coupling, giving rise to an additional force orthogonal to the particle's four-velocity, which in turn drives massive particles away from purely geodesic trajectories. This property distinguishes f(R, Lm) gravity from conventional relativistic dynamics and opens avenues for reinterpreting the motion of particles and fields in curved spacetime. Following its proposal, subsequent investigations have broadened the scope of this idea by analyzing arbitrary couplings between matter and geometry [32], thereby deepening our understanding of both astrophysical and cosmological processes. Many studies have examined its phenomenological implications, particularly in relation to the evolution of the universe, cosmic structure formation, and modifications to astrophysical systems [33-37]. However, a central feature of this theory is its violation of the equivalence principle, which is subject to strong experimental constraints derived from solar system observations [38, 39]. In the cosmological context, Wang and Liao examined the validity of energy conditions within f(R, Lm) gravity, thereby providing important insights into the theoretical consistency of the model [40]. In a complementary direction, Gonçalves and Moraes tested cosmological dynamics with non-minimal matter-geometry couplings [41], while Solanki et al. showed how the presence of bulk viscosity in an anisotropic background could lead to late-time cosmic acceleration [42]. Along the same lines, Jaybhaye et al.
tested viscous dark energy models within this framework and derived constraints on the equation-of-state parameter, offering new perspectives on the driving forces behind cosmic expansion [43]. Together, these efforts map out the phenomenology of f(R, Lm) gravity models, where the synergy between matter and geometry reshapes fundamental principles and yields novel interpretations of dark energy, acceleration, and the large-scale dynamics of the universe. Growing scholarly attention continues to generate a diverse array of studies addressing the cosmological and astrophysical consequences of this theory [44-52], with more recent works extending its applicability across different regimes of gravitational physics [53-56].
DM plays a central role both in astrophysical phenomena and in the exploration of exotic spacetime geometries, particularly wormholes, which within GR usually demand exotic matter violating classical energy bounds such as the NEC [57-59]. To overcome this limitation, alternative approaches based on quantum gravity have been considered, with Loop Quantum Gravity (LQG) and its cosmological reduction, Loop Quantum Cosmology (LQC), offering important corrections in the high-density regime through the introduction of a critical density ρc, above which quantum effects modify the classical description of spacetime [60]. Within such frameworks, wormhole configurations were successfully constructed without requiring exotic matter, thanks to quantum geometric contributions [61, 62]. This perspective naturally opens the possibility of employing DM as a supporting source for wormholes, given its gravitationally attractive but non-luminous nature. Cosmological measurements indicate that DM constitutes nearly five-sixths of the total matter in the universe [63], and its existence is strongly supported by astrophysical probes including galactic rotation curves, lensing signatures, and cluster dynamics [64-67]. Despite such compelling evidence, the fundamental nature of DM remains elusive, with candidate models ranging from ultralight bosonic fields on the order of 10^-22 eV to heavy Weakly Interacting Massive Particles (WIMPs), all beyond the Standard Model of particle physics [68]. Several hypotheses remain under investigation: primordial black holes (BHs) [69-78] produced in the early universe may account for part of the DM content, while particles such as axions or WIMPs could annihilate within celestial objects like the Sun, producing neutrinos observable by large detectors such as IceCube, though no conclusive signal has been reported [79, 80]. More recently, innovative detection methods include proposals for solar-orbit satellites to constrain DM properties in the local environment [81, 82]. In addition to these astrophysical searches, DM has been theoretically investigated as a viable wormhole source in both Einsteinian and modified gravitational settings, where its isotropic or anisotropic pressures can satisfy wormhole flare-out conditions without invoking unphysical exotic fluids [83-90]. Another important analysis considered the construction of traversable wormholes supported by isotropic DM within the LQC framework.
That study considered three DM density models, (i) the Navarro-Frenk-White (NFW), (ii) the Pseudo-Isothermal (PI), and (iii) the Perfect Fluid (PF) distributions, each offering distinct insights into the behavior of galactic halos and their role in sustaining wormhole geometries under quantum gravitational corrections [91, 92].
The King model is a statistical-mechanics framework originally developed to describe globular clusters but later extended to DM halos, providing a realistic way to avoid the infinite-mass problem of classical isothermal models. It introduces a truncation in the distribution function by assuming that particles with energies above a certain escape energy leave the system, leading to a self-consistent density profile with a flat core and a finite radius. This structure naturally combines an isothermal core and halo with a polytropic envelope of index n = 5/2, producing density profiles that decrease as r^-3 at large distances, consistent with observed galactic halos such as those described by the Burkert profile. Because of evaporation and collisions, the King model captures the evolution of self-gravitating systems toward marginal stability, explaining why many globular clusters and large DM halos are found close to this equilibrium state [93].
The Dekel-Zhao (DZ) DM density profile has gained prominence as one of the most adaptable models for describing the distribution of DM halos across astrophysical systems [94, 95]. Formulated as a generalized double power-law structure, it provides a smooth interpolation between the central and outer regions of galaxies through a set of adjustable parameters that govern the inner slope, the steepness of the transition, and the asymptotic outer behavior [96, 97]. The DZ profile is capable of reproducing a wide range of observed galactic density configurations, making it a robust tool for both theoretical modeling and observational studies in galactic dynamics. Beyond its astrophysical applications, it has also been examined in gravitational contexts, where, depending on the parameter choices, the profile can give rise to regular or singular BH geometries within the Schwarzschild framework [98-100]. Its utility extends further when incorporated into approaches such as the generalized Einstein cluster method, which probes the impact of DM on compact objects in strong gravitational regimes [101-103]. Other works have illustrated the influence of the DZ model: Khatri et al. [104] demonstrated that the DZ profile can support stable wormhole geometries in Einstein gravity with minimal exotic matter, accompanied by distinctive gravitational lensing effects, while Errehymy et al. [105] showed that traversable wormholes can also emerge in f(R, Lm, T) gravity, where coupling constants mediate matter-geometry interactions.
Our study investigates traversable wormhole geometries in the framework of f(R, Lm) gravity: we set out the fundamental equations of the theory and derive the corresponding field equations tailored to wormhole spacetimes. The analysis of the energy conditions is then carried out, providing the essential constraints on the matter content required to sustain the wormhole. Next, we examine Model I, defined by the functional form f(R, Lm) = R/2 + Lm^α, where wormhole solutions are constructed and studied under the King and Dekel-Zhao DM density profiles (a short numerical sketch of the two density profiles is given below).
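For orientation, the minimal Python sketch below evaluates the two halo densities just described at the parameter values used later in the figures. The King-type form ρ(r) = β(γ + r²/rs²)^η is an assumption inferred from the throat expressions in Eqs. (62)-(66) below (the profile equation itself, cited later as Eq. (29), does not survive in this excerpt); the DZ form follows Eq. (42).

```python
import numpy as np

# King-type halo density (assumed form, inferred from the throat expressions
# in Eqs. (62)-(66) below): rho(r) = beta * (gamma + r^2/rs^2)^eta
def rho_king(r, beta=0.65, gamma=1.0, eta=-0.5, rs=1.01):
    return beta * (gamma + (r / rs) ** 2) ** eta

# Dekel-Zhao density, Eq. (42) (the b = 2, d = 7/2 case of Eq. (41)):
# rho(r) = rho_s / [ (r/rs)^kappa * (1 + sqrt(r/rs))^(7 - 2*kappa) ]
def rho_dz(r, rho_s=0.06, kappa=2.12, rs=6.5):
    x = r / rs
    return rho_s / (x ** kappa * (1.0 + np.sqrt(x)) ** (7.0 - 2.0 * kappa))

r = np.linspace(1.4, 8.0, 5)          # from the throat radius r0 = 1.4 outward
print(rho_king(r))                     # slowly decreasing, finite-core profile
print(rho_dz(r))                       # steeper double power-law falloff
```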
We then examine Model II, based on f(R, Lm) = R/2 + (1 + λR)Lm, where the wormhole solutions are obtained for the same DM models, offering a comparative perspective between the two cases. The embedding diagrams of the wormhole spacetime are analyzed to visualize the geometry, alongside the evaluation of tidal forces and the computation of the total gravitational energy, which provide further information about the wormhole's stability and traversability. The study also considers the strong deflection angle, emphasizing its relevance for gravitational lensing phenomena and the potential for observational signatures. Finally, we discuss the results for these models, their physical implications, and the broader consequences for wormhole physics in the context of modified gravity theories and DM models.
The paper is organized as follows: Section I introduces wormhole geometries in the framework of f(R, Lm) gravity. In Section II, we present the fundamental equations governing this modified gravity theory. In Section III, we state the traversability criteria for wormholes, derive the corresponding field equations for the wormhole spacetime, and analyze the energy conditions, providing the necessary constraints on the matter content supporting the wormhole. Section IV explores Model-I, where the functional form of the theory is taken as f(R, Lm) = R/2 + Lm^α, and wormhole solutions are obtained in the context of both the King and Dekel-Zhao DM density profiles, discussed separately in subsections IV A and IV B. In Section V, we examine Model-II, with f(R, Lm) = R/2 + (1 + λR)Lm, again analyzing wormhole solutions for the King and Dekel-Zhao models. Section VI addresses the embedding surface of the wormhole geometry and computes the total gravitational energy. The strong deflection angle and its implications for gravitational lensing are discussed in Section VII. Finally, Section VIII summarizes and discusses the results.
II. BASIC EQUATIONS IN f(R, Lm) GRAVITY
In modified gravity theories, a key approach is the generalization of the action, providing a more flexible framework to describe gravitational dynamics beyond general relativity and enabling the inclusion of additional curvature or matter terms to model cosmic acceleration, DM effects, and other phenomena. In this context, Harko et al. [106] proposed the f(R, Lm) gravity theory, which generalizes the f(R) models by assuming that the gravitational Lagrangian can be represented as an arbitrary function of the Ricci scalar R and the matter Lagrangian Lm. The action corresponding to this framework is expressed as

S = \int f(R, L_m) \sqrt{-g}\, d^4x, \qquad (1)

where g denotes the determinant of the metric tensor g_{\mu\nu}. The Ricci scalar curvature R is obtained by contracting the Ricci tensor R_{\mu\nu},

R = g^{\mu\nu} R_{\mu\nu}, \qquad (2)

where the Ricci tensor is defined as

R_{\mu\nu} = \partial_\lambda \Gamma^\lambda_{\mu\nu} - \partial_\nu \Gamma^\lambda_{\mu\lambda} + \Gamma^\lambda_{\lambda\sigma} \Gamma^\sigma_{\mu\nu} - \Gamma^\lambda_{\nu\sigma} \Gamma^\sigma_{\mu\lambda}. \qquad (3)

Here, \Gamma^\alpha_{\beta\gamma} denotes the components of the Levi-Civita connection, expressed as

\Gamma^\alpha_{\beta\gamma} = \frac{1}{2} g^{\alpha\lambda} \left( \frac{\partial g_{\gamma\lambda}}{\partial x^\beta} + \frac{\partial g_{\lambda\beta}}{\partial x^\gamma} - \frac{\partial g_{\beta\gamma}}{\partial x^\lambda} \right). \qquad (4)
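As a sanity check on Eqs. (2)-(4), the short sympy sketch below assembles the Christoffel symbols, the Ricci tensor, and the Ricci scalar directly from a metric. The test metric, the constant-redshift Morris-Thorne line element of Eq. (10) below with the toy shape function ξ(r) = r0²/r, is an assumption chosen only for illustration.

```python
import sympy as sp

# Coordinates and a diagonal test metric: the Phi = 0 Morris-Thorne line
# element of Eq. (10) with the illustrative shape function xi(r) = r0^2/r.
t, r, th, ph, r0 = sp.symbols('t r theta phi r_0', positive=True)
x = [t, r, th, ph]
xi = r0**2 / r
g = sp.diag(-1, 1 / (1 - xi / r), r**2, r**2 * sp.sin(th)**2)
ginv = g.inv()

def Gamma(a, b, c):
    # Eq. (4): Gamma^a_{bc} = (1/2) g^{al} (d_b g_{cl} + d_c g_{lb} - d_l g_{bc})
    return sp.Rational(1, 2) * sum(
        ginv[a, l] * (sp.diff(g[c, l], x[b]) + sp.diff(g[l, b], x[c])
                      - sp.diff(g[b, c], x[l])) for l in range(4))

def Ricci(m, n):
    # Eq. (3): R_{mn} = d_l Gamma^l_{mn} - d_n Gamma^l_{ml}
    #                   + Gamma^l_{ls} Gamma^s_{mn} - Gamma^l_{ns} Gamma^s_{ml}
    return sp.simplify(sum(
        sp.diff(Gamma(l, m, n), x[l]) - sp.diff(Gamma(l, m, l), x[n])
        + sum(Gamma(l, l, s) * Gamma(s, m, n) - Gamma(l, n, s) * Gamma(s, m, l)
              for s in range(4))
        for l in range(4)))

# Eq. (2): contract with the inverse metric to obtain the Ricci scalar.
R = sp.simplify(sum(ginv[m, n] * Ricci(m, n)
                    for m in range(4) for n in range(4)))
print(R)
```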
FIG. 1: Characteristics of the shape function ξ(r) (left), the ratio ξ(r)/r (middle), and the derivative ξ′(r) (right) against the radial coordinate r for the King DM model under the f(R, Lm) gravity Model-I, with parameters β = 0.65, γ = 1, η = -0.5, rs = 1.01, and r0 = 1.4.

The field equations of f(R, Lm) gravity follow from the variation of the general action (1) with respect to the metric tensor g_{\mu\nu}, yielding

f_R R_{\mu\nu} + (g_{\mu\nu}\Box - \nabla_\mu \nabla_\nu) f_R - \frac{1}{2}(f - f_{L_m} L_m) g_{\mu\nu} = \frac{1}{2} f_{L_m} T_{\mu\nu}, \qquad (5)

where f_R = \partial f/\partial R, f_{L_m} = \partial f/\partial L_m, and T_{\mu\nu} denotes the energy-momentum tensor of the matter distribution, expressed as

T_{\mu\nu} = -\frac{2}{\sqrt{-g}} \frac{\delta(\sqrt{-g}\, L_m)}{\delta g^{\mu\nu}} = g_{\mu\nu} L_m - 2 \frac{\partial L_m}{\partial g^{\mu\nu}}. \qquad (6)

Now, the contraction of the field equation (5) leads to a relation connecting the energy-momentum scalar T, the matter Lagrangian Lm, and the Ricci scalar R:

R f_R + 3\Box f_R - 2(f - f_{L_m} L_m) = \frac{1}{2} f_{L_m} T, \qquad (7)

where \Box stands for the d'Alembertian operator, defined as \Box F = \frac{1}{\sqrt{-g}} \partial_\alpha (\sqrt{-g}\, g^{\alpha\beta} \partial_\beta F) for any scalar function F. Further, the covariant divergence of the energy-momentum tensor in this gravity framework can be expressed as

\nabla^\mu T_{\mu\nu} = 2 \left[\nabla^\mu \ln(f_{L_m})\right] \frac{\partial L_m}{\partial g^{\mu\nu}}. \qquad (8)

The conservation of the matter energy-momentum tensor, \nabla^\mu T_{\mu\nu} = 0, imposes a functional constraint between the matter Lagrangian density and the function f_{L_m}(R, L_m):

\nabla^\mu \ln(f_{L_m}) \frac{\partial L_m}{\partial g^{\mu\nu}} = 0. \qquad (9)

Consequently, given a specific matter Lagrangian density, an appropriate choice of the function f(R, Lm) can yield conservative models with arbitrary matter-geometry coupling.

FIG. 2: Characteristics of the energy density ρ(r) (left), ρ(r) + Pr(r) (middle), and ρ(r) + Pt(r) (right) in the upper panel, and ρ(r) - |Pr(r)| (left), ρ(r) - |Pt(r)| (middle), and ρ(r) + Pr(r) + 2Pt(r) (right) in the lower panel, for the King DM model under the f(R, Lm) gravity Model-I with parameters β = 0.65, γ = 1, η = -0.5, rs = 1.01, and r0 = 1.4.

III. FIELD EQUATIONS FOR TRAVERSABLE WORMHOLE IN f(R, Lm) GRAVITY
In this section, we discuss the traversability criteria of the wormhole structure and formulate the field equations for a traversable wormhole geometry within the framework of f(R, Lm) gravity theory.
A. Traversability Criteria for Wormhole
In this study, we consider a static, spherically symmetric Morris-Thorne wormhole, described by the following line element [107]

ds^2 = -e^{2\Phi(r)} dt^2 + \left(1 - \frac{\xi(r)}{r}\right)^{-1} dr^2 + r^2 (d\theta^2 + \sin^2\theta\, d\varphi^2), \qquad (10)

where Φ(r) and ξ(r) are the gravitational redshift function and shape function of the wormhole, respectively. The wormhole throat, a crucial structural feature, corresponds to the minimum radius r = r0 at which ξ(r0) = r0, known as the throat condition. Further, for the wormhole line element (10), the proper radial distance function l(r) is defined as

l(r) = \pm \int_{r_0}^{r} \frac{dr'}{\sqrt{1 - \xi(r')/r'}}. \qquad (11)

Here, the ± signs denote the upper and lower regions of the wormhole, which are connected through the throat.
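As a concrete illustration of Eq. (11), the sketch below evaluates l(r) numerically for the Schwarzschild-like choice ξ(r) = r0 (a toy shape function assumed here only to show that the integral stays finite despite the square-root singularity at the throat).

```python
import numpy as np
from scipy.integrate import quad

# Proper radial distance, Eq. (11): l(r) = int_{r0}^{r} dr' / sqrt(1 - xi(r')/r'),
# with the toy Schwarzschild-like shape function xi(r) = r0 (an assumption).
r0 = 1.4
xi = lambda rp: r0   # satisfies xi(r0) = r0 (throat) and xi'(r0) = 0 < 1 (flare-out)

def proper_distance(r):
    # quad copes with the integrable 1/sqrt singularity at the lower limit
    val, _ = quad(lambda rp: 1.0 / np.sqrt(1.0 - xi(rp) / rp), r0, r)
    return val

for r in (1.5, 2.0, 4.0, 8.0):
    print(f"r = {r:4.1f}   l(r) = {proper_distance(r):.4f}")   # finite, l(r) >= r - r0
```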
For a wormhole to allow traversal, the above functions are required to meet the following criteria.
Gravitational redshift function Φ(r): the gravitational redshift function must remain finite throughout the spacetime to prevent the formation of an event horizon. If Φ(r) diverges at any point, it would correspond to an infinite redshift, effectively creating a horizon that would block the passage of signals or travelers.
Shape function ξ(r): the shape function must satisfy the following conditions: (i) for all regions outside the throat, the ratio of the shape function to the radial coordinate must not exceed unity, i.e., ξ(r)/r ≤ 1 for r ≥ r0; (ii) the flare-out condition, ξ′(r0) < 1, must hold at the throat.

For Model-I with the King DM profile, the energy conditions at the throat r = r0 require

[\rho(r) + P_r(r)]_{r=r_0} > 0, \qquad (36)

[\rho(r) + P_t(r)]_{r=r_0} = \frac{\beta^{1-\alpha}}{2\alpha r_0^2} \left(\gamma + \frac{r_0^2}{r_s^2}\right)^{(1-\alpha)\eta} \left[1 + (2\alpha - 1)\, r_0^2 \beta^{\alpha} \left(\gamma + \frac{r_0^2}{r_s^2}\right)^{\alpha\eta}\right] > 0, \qquad (37)

[\rho(r) - |P_r(r)|]_{r=r_0} = \beta\left(\gamma + \frac{r_0^2}{r_s^2}\right)^{\eta} - \frac{\beta}{\alpha r_0^2}\left(\gamma + \frac{r_0^2}{r_s^2}\right)^{\eta} \left[(\alpha - 1)\, r_0^2 - \beta^{\alpha} \left(\gamma + \frac{r_0^2}{r_s^2}\right)^{\alpha\eta}\right] > 0, \qquad (38)

[\rho(r) - |P_t(r)|]_{r=r_0} = \beta\left(\gamma + \frac{r_0^2}{r_s^2}\right)^{\eta} - \frac{3\beta^{1-\alpha}}{\alpha r_0^2}\left(\gamma + \frac{r_0^2}{r_s^2}\right)^{(1-\alpha)\eta} \left[1 - r_0^2 \beta^{\alpha} \left(\gamma + \frac{r_0^2}{r_s^2}\right)^{\alpha\eta}\right] > 0, \qquad (39)

[\rho(r) + P_r(r) + 2P_t(r)]_{r=r_0} = \frac{\beta^{1-\alpha}}{\alpha r_0^2}\left(\gamma + \frac{r_0^2}{r_s^2}\right)^{(1-\alpha)\eta} \left[1 - \beta^{2\alpha}\left(\gamma + \frac{r_0^2}{r_s^2}\right)^{2\alpha\eta} + 2(\alpha - 1)\, r_0^2 \beta^{\alpha} \left(\gamma + \frac{r_0^2}{r_s^2}\right)^{\alpha\eta}\right] > 0. \qquad (40)

To investigate the physical nature of the matter content supporting the wormholes, we provide graphical illustrations of all relevant energy conditions in Fig. 2, corresponding to the parameters α = 2.2, 2.3, 2.4, 2.5, 2.6 with β = 0.65, γ = 1, η = -0.5, rs = 1.01, and r0 = 1.4. Notably, the energy conditions NECr, NECt, WECr, WECt, DECr, and SEC are satisfied throughout the wormhole spacetime, while DECt is satisfied near the throat, indicating the absence of exotic matter. These findings confirm that the wormhole solutions can be sustained within the King DM halo under f(R, Lm) gravity without requiring exotic matter.
B. Dekel-Zhao Dark Matter Model
In this section, we adopt the Dekel-Zhao (DZ) DM density profile to construct traversable wormholes. The DZ profile is a widely used, versatile model for DM halos, capable of capturing a broad range of astrophysical structures [113, 114].

FIG. 7: Characteristics of the energy density ρ(r) (left), ρ(r) + Pr(r) (middle), and ρ(r) + Pt(r) (right) in the upper panel, and ρ(r) - |Pr(r)| (left), ρ(r) - |Pt(r)| (middle), and ρ(r) + Pr(r) + 2Pt(r) (right) in the lower panel, for the Dekel-Zhao DM model under the f(R, Lm) gravity Model-II with parameters ρs = 0.06, κ = 2.12, rs = 6.5, and r0 = 1.4.

Its double power-law form smoothly interpolates between inner and outer slopes, making it well-suited for both observational and theoretical studies. The profile is defined as [113-115]

\rho(r) = \rho_s \left[\left(\frac{r}{r_s}\right)^{\kappa} \left(1 + \left(\frac{r}{r_s}\right)^{1/b}\right)^{b(d - \kappa)}\right]^{-1}. \qquad (41)

Here, the parameters ρs and rs represent the characteristic density and the scale radius, respectively.
Moreover, the parameters κ, b, and d control the inner slope, the transition sharpness, and the outer slope of the density profile. For b = N1 and d = 3 + N2/N1 with natural numbers N1 and N2, simplified analytical expressions for the gravitational potential, enclosed mass, and velocity dispersion can be obtained. In this study, we choose N1 = 2 and N2 = 1, so that the DZ density profile (41) becomes

\rho(r) = \rho_s \left[\left(\frac{r}{r_s}\right)^{\kappa} \left(1 + \sqrt{\frac{r}{r_s}}\right)^{7 - 2\kappa}\right]^{-1}. \qquad (42)

In this case, the shape function is obtained by substituting the density profile (42) into the field equation (26), giving

\xi(r) = C_2 + \frac{2\alpha - 1}{3 - \alpha\kappa}\, r^3 \rho_s^{\alpha} \left(\frac{r_s}{r}\right)^{\alpha\kappa} H(r), \qquad (43)

where H(r) = {}_2F_1\!\left(\alpha(7 - 2\kappa),\; 6 - 2\alpha\kappa;\; 7 - 2\alpha\kappa;\; -\sqrt{r/r_s}\right) and C_2 is an integration constant. Here, the throat condition ξ(r0) = r0 yields

C_2 = r_0 + \frac{2\alpha - 1}{\alpha\kappa - 3}\, r_0^3 \rho_s^{\alpha} \left(\frac{r_s}{r_0}\right)^{\alpha\kappa} H(r_0). \qquad (44)

Consequently, incorporating this result for C2, the shape function can be expressed in its final form as

\xi(r) = r_0 + \frac{2\alpha - 1}{3 - \alpha\kappa}\, \rho_s^{\alpha} \left[ r^3 \left(\frac{r_s}{r}\right)^{\alpha\kappa} H(r) - r_0^3 \left(\frac{r_s}{r_0}\right)^{\alpha\kappa} H(r_0) \right]. \qquad (45)

In this case, the flare-out condition at the wormhole throat takes the form

\xi'(r_0) = (2\alpha - 1)\, r_0^2 \rho_s^{\alpha} \left(\frac{r_s}{r_0}\right)^{\alpha\kappa} \left(1 + \sqrt{\frac{r_0}{r_s}}\right)^{\alpha(2\kappa - 7)} < 1,

and the energy conditions at the throat require

[\rho(r) + P_r(r)]_{r=r_0} > 0, \qquad (49)

[\rho(r) + P_t(r)]_{r=r_0} = \frac{\rho_s^{1-\alpha} J_1^{1-\alpha}}{2\alpha r_0^2} \left[1 + (2\alpha - 1)\, r_0^2 \rho_s^{\alpha} \left(\frac{r_s}{r_0}\right)^{\alpha\kappa} \left(1 + \sqrt{\frac{r_0}{r_s}}\right)^{\alpha(2\kappa - 7)}\right] > 0, \qquad (50)

[\rho(r) - |P_r(r)|]_{r=r_0} = \rho_s J_1 - \frac{\rho_s J_1}{\alpha r_0^2} \left[(\alpha - 1)\, r_0^2 - \rho_s^{\alpha} \left(\frac{r_s}{r_0}\right)^{\alpha\kappa} \left(1 + \sqrt{\frac{r_0}{r_s}}\right)^{\alpha(2\kappa - 7)}\right] > 0, \qquad (51)

[\rho(r) - |P_t(r)|]_{r=r_0} = \rho_s J_1 - \frac{\rho_s^{1-\alpha} J_1^{1-\alpha}}{2\alpha r_0^2} \left[1 - r_0^2 \rho_s^{\alpha} \left(\frac{r_s}{r_0}\right)^{\alpha\kappa} \left(1 + \sqrt{\frac{r_0}{r_s}}\right)^{\alpha(2\kappa - 7)}\right] > 0, \qquad (52)

[\rho(r) + P_r(r) + 2P_t(r)]_{r=r_0} = \frac{\rho_s^{1-\alpha} J_1^{1-\alpha}}{\alpha r_0^2} \left[1 - \rho_s^{2\alpha} J_1^{2\alpha} + 2(\alpha - 1)\, r_0^2 \rho_s^{\alpha} \left(\frac{r_s}{r_0}\right)^{\alpha\kappa} \left(1 + \sqrt{\frac{r_0}{r_s}}\right)^{\alpha(2\kappa - 7)}\right] > 0, \qquad (53)

where

J_1 = \left(\frac{r_s}{r_0}\right)^{\kappa} \left(1 + \sqrt{\frac{r_0}{r_s}}\right)^{2\kappa - 7}. \qquad (54)

Figure 4 illustrates the relevant energy conditions, showing that NEC, WEC, DECr, and SEC are satisfied throughout the wormhole spacetime, while DECt is satisfied near the throat. This indicates the absence of exotic matter and confirms that the wormhole solutions can be supported within the DZ DM halo under f(R, Lm) gravity without requiring exotic matter.

FIG. 9: Characteristics of the embedding surface (left) and the proper radial length (middle) against the radial coordinate r, together with the full visualization diagram of the wormholes (right), for the Dekel-Zhao DM model under the f(R, Lm) gravity Model-I with parameters ρs = 0.06, κ = 2.12, rs = 6.5, and r0 = 1.4. In the full visualization diagram: α = 2.2 (yellow), α = 2.3 (red), α = 2.4 (blue), α = 2.5 (green), α = 2.6 (purple).

V. MODEL-II: WORMHOLE SOLUTIONS FOR f(R, Lm) = R/2 + (1 + λR)Lm
In this section, we derive the wormhole solutions by considering the following non-minimal form of the f(R, Lm) function [116-118]

f(R, L_m) = \frac{R}{2} + (1 + \lambda R) L_m, \qquad (55)

where λ denotes the coupling constant. It is noteworthy that for λ = 0, the model reduces to the standard wormhole geometry of GR. In this model, taking a constant redshift function and Lm = ρ(r), the field equations (18)-(20) reduce to the following form

\rho(r) = \frac{\xi'(r)}{r^2 + 2\lambda \xi'(r)}, \qquad (56)

P_r(r) = \frac{2\lambda r\,(r^3 - \xi'(r))\,\xi'(r) - (r^2 + 4\lambda \xi'(r))\,\xi(r)}{r\,(r^2 + 2\lambda \xi'(r))^2}, \qquad (57)

P_t(r) = \frac{r^2 \xi(r) - (r^3 - 4\lambda \xi(r))\,\xi'(r)}{2r\,(r^2 + 2\lambda \xi'(r))^2}. \qquad (58)

As obtaining an exact solution for this non-linear model is challenging, we introduce a new shape function to generate the wormhole solutions under this framework, given by

\xi(r) = \frac{r_0}{\log(2)} \log\!\left(1 + \frac{r}{r_0}\right). \qquad (59)

The shape function (59) is depicted in Fig.
5, which shows that ξ(r) increases monotonically with the radial coordinate r, while ξ(r) - r intersects the r-axis at r = r0 = 1.4. It consistently satisfies the condition ξ(r)/r ≤ 1 for r ≥ r0 and the flare-out condition. Therefore, the proposed shape function (59) supports traversable wormhole geometries by satisfying all the essential characteristics. Moreover, it exhibits asymptotic flatness, confirming that the resulting traversable wormhole geometries are asymptotically flat. In this case, the radial and transverse pressures are determined as follows:

P_r(r) = \frac{2\lambda r_0 r \left[\dfrac{r^3}{\log(2)(r_0 + r)} - r_0\right] - r_0 (r + r_0)\left[4\lambda r_0 + \log(2)(r + r_0)\right] r^2 \log\!\dfrac{r + r_0}{r_0}}{r \left[2\lambda r_0 + \log(2)(r + r_0)\, r^2\right]^2}, \qquad (60)

P_t(r) = \frac{r_0 (r + r_0) \left[\log\!\dfrac{r + r_0}{r_0}\left(4\lambda r_0 + \log(2)(r + r_0)\, r^2\right) - \log(2)\, r^3\right]}{2r \left[2\lambda r_0 + \log(2)(r_0 + r)\, r^2\right]^2}. \qquad (61)

Next, we examine all the relevant energy conditions corresponding to the shape function (59) for both the King and Dekel-Zhao DM models.

FIG. 10: Characteristics of the embedding surface (left) and the proper radial length (middle) against the radial coordinate r, together with the full visualization diagram of the wormhole (right), for the wormhole shape function (59) under the f(R, Lm) gravity Model-II with parameter r0 = 1.4.

A. King Dark Matter Model
Here, we analyze the energy density profile of the King DM model (29) together with the proposed shape function (59) to examine the corresponding energy conditions. This analysis offers valuable insight into the capacity of Model II to sustain traversable wormhole structures. In this model, employing the energy density (29) along with the radial and transverse pressures (60) and (61), the NEC, DEC, and SEC at the throat r = r0 are obtained as

[\rho(r) + P_r(r)]_{r=r_0} = \beta\left(\gamma + \frac{r_0^2}{r_s^2}\right)^{\eta} + \frac{\lambda\left[2\log(2)(r_0^3 - 2) - 1\right] - 2 r_0^2 \log^2(2)}{2\left[\lambda + r_0^2 \log(2)\right]^2} > 0, \qquad (62)

[\rho(r) + P_t(r)]_{r=r_0} = \beta\left(\gamma + \frac{r_0^2}{r_s^2}\right)^{\eta} + \frac{\log(2)\left[4\lambda + r_0^2 (2\log(2) - 1)\right]}{4\left[\lambda + r_0^2 \log(2)\right]^2} > 0, \qquad (63)

[\rho(r) - |P_r(r)|]_{r=r_0} = \beta\left(\gamma + \frac{r_0^2}{r_s^2}\right)^{\eta} - \frac{\lambda\left[2\log(2)(r_0^3 - 2) - 1\right] - 2 r_0^2 \log^2(2)}{2\left[\lambda + r_0^2 \log(2)\right]^2} > 0, \qquad (64)

[\rho(r) - |P_t(r)|]_{r=r_0} = \beta\left(\gamma + \frac{r_0^2}{r_s^2}\right)^{\eta} - \frac{\log(2)\left[4\lambda + r_0^2 (2\log(2) - 1)\right]}{4\left[\lambda + r_0^2 \log(2)\right]^2} > 0, \qquad (65)

[\rho(r) + P_r(r) + 2P_t(r)]_{r=r_0} = \beta\left(\gamma + \frac{r_0^2}{r_s^2}\right)^{\eta} + \frac{(2\lambda r_0 - 1)\, r_0^2 \log(2) - \lambda}{2\left[\lambda + r_0^2 \log(2)\right]^2} > 0. \qquad (66)

The graphical representation of all relevant energy conditions is shown in Fig. 6 for the parameters λ = 0.11, 0.13, 0.15, 0.17, 0.19 with β = 0.65, γ = 1, η = -0.5, rs = 1.01, and r0 = 1.4. Remarkably, all energy conditions, NEC, WEC, DEC, and SEC, are satisfied throughout the wormhole spacetime, indicating the absence of exotic matter. These results confirm that the wormhole solutions can be maintained within the King DM halo under f(R, Lm) gravity without the need for exotic matter.

FIG. 11: Characteristics of the total gravitational energy for the King DM model under the f(R, Lm) gravity Model-I (left), the Dekel-Zhao DM model under Model-I (left-middle), the King DM model under Model-II (right-middle), and the Dekel-Zhao DM model under Model-II (right), with parameters ρs = 0.06, κ = 2.12, rs = 6.5, and r0 = 1.4 in both DM models.
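Before turning to the DZ case, the following Python sketch (an illustration, not part of the paper) verifies the defining properties of the Model-II shape function (59): the throat condition, the flare-out condition, monotonicity, and asymptotic flatness.

```python
import numpy as np

# Model-II shape function, Eq. (59): xi(r) = r0 * log(1 + r/r0) / log(2), r0 = 1.4.
r0 = 1.4
xi  = lambda r: r0 * np.log(1.0 + r / r0) / np.log(2.0)
dxi = lambda r: r0 / (np.log(2.0) * (r0 + r))        # analytic derivative xi'(r)

print(np.isclose(xi(r0), r0))           # throat condition xi(r0) = r0
print(dxi(r0))                           # flare-out: xi'(r0) = 1/(2 log 2) ~ 0.72 < 1
r = np.array([2.0, 10.0, 1e3, 1e6])
print(xi(r) / r)                         # xi(r)/r <= 1 and -> 0 (asymptotic flatness)
print(np.all(np.diff(xi(np.linspace(r0, 50.0, 500))) > 0))   # monotonic increase
```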
B. Dekel-Zhao Dark Matter Model
In this subsection, we examine the energy conditions associated with the energy density profile (42) of the DZ DM model and the proposed shape function (59). Specifically, the NEC, DEC, and SEC at the wormhole throat r = r0 are calculated using the energy density (42) and the radial and transverse pressures (60) and (61), yielding

[\rho(r) + P_r(r)]_{r=r_0} = \rho_s \left(\frac{r_s}{r_0}\right)^{\kappa} \left(1 + \sqrt{\frac{r_0}{r_s}}\right)^{2\kappa - 7} + \frac{\lambda\left[2\log(2)(r_0^3 - 2) - 1\right] - 2 r_0^2 \log^2(2)}{2\left[\lambda + r_0^2 \log(2)\right]^2} > 0, \qquad (67)

[\rho(r) + P_t(r)]_{r=r_0} = \rho_s \left(\frac{r_s}{r_0}\right)^{\kappa} \left(1 + \sqrt{\frac{r_0}{r_s}}\right)^{2\kappa - 7} + \frac{\log(2)\left[4\lambda + r_0^2 (2\log(2) - 1)\right]}{4\left[\lambda + r_0^2 \log(2)\right]^2} > 0, \qquad (68)

[\rho(r) - |P_r(r)|]_{r=r_0} = \rho_s \left(\frac{r_s}{r_0}\right)^{\kappa} \left(1 + \sqrt{\frac{r_0}{r_s}}\right)^{2\kappa - 7} - \frac{\lambda\left[2\log(2)(r_0^3 - 2) - 1\right] - 2 r_0^2 \log^2(2)}{2\left[\lambda + r_0^2 \log(2)\right]^2} > 0, \qquad (69)

[\rho(r) - |P_t(r)|]_{r=r_0} = \rho_s \left(\frac{r_s}{r_0}\right)^{\kappa} \left(1 + \sqrt{\frac{r_0}{r_s}}\right)^{2\kappa - 7} - \frac{\log(2)\left[4\lambda + r_0^2 (2\log(2) - 1)\right]}{4\left[\lambda + r_0^2 \log(2)\right]^2} > 0, \qquad (70)

[\rho(r) + P_r(r) + 2P_t(r)]_{r=r_0} = \rho_s \left(\frac{r_s}{r_0}\right)^{\kappa} \left(1 + \sqrt{\frac{r_0}{r_s}}\right)^{2\kappa - 7} + \frac{(2\lambda r_0 - 1)\, r_0^2 \log(2) - \lambda}{2\left[\lambda + r_0^2 \log(2)\right]^2} > 0. \qquad (71)

In this model, all the energy conditions, NEC, WEC, DEC, and SEC, are satisfied throughout the wormhole spacetime for λ = 0.11, 0.13, 0.15, 0.17, 0.19 with ρs = 0.06, κ = 2.12, rs = 6.5, and r0 = 1.4, as is clear from Fig. 7. Thus, the wormhole solutions supported by the proposed shape function (59) can also be sustained within the DZ DM halo under f(R, Lm) gravity without exotic matter.
VI. EMBEDDING SURFACE AND TOTAL GRAVITATIONAL ENERGY
In this section, we investigate some physical properties of the proposed wormhole structures, including the embedding surface and the total gravitational energy. In three-dimensional space, the embedded surface Z(r) of the axially symmetric wormhole is given by [107]

ds^2 = \left[1 + \left(\frac{dZ(r)}{dr}\right)^2\right] dr^2 + r^2 d\varphi^2. \qquad (72)

FIG. 12: Characteristics of the deflection angles for the King DM model under the f(R, Lm) gravity Model-I with parameters ρs = 0.06, κ = 2.12, rs = 6.5, and r0 = 1.4 (left), for the Dekel-Zhao DM model under Model-I with the same parameters (middle), and for the wormhole shape function (59) under Model-II with parameter r0 = 1.4 (right).

Also, at the equatorial slice θ = π/2 with constant time t, the metric (10) reads

ds^2 = \left(1 - \frac{\xi(r)}{r}\right)^{-1} dr^2 + r^2 d\varphi^2. \qquad (73)

Thus, combining (72) and (73) leads to the following differential equation for the embedding surface Z(r):

\frac{dZ(r)}{dr} = \pm \left[\frac{r}{\xi(r)} - 1\right]^{-1/2}. \qquad (74)

This result indicates that dZ(r)/dr diverges at the throat, so Z(r) becomes vertical there. The embedding surface Z(r) can then be expressed as

Z(r) = \pm \int_{r_0^+}^{r} \frac{dr'}{\sqrt{r'/\xi(r') - 1}}, \qquad (75)

where the ± sign denotes the upper and lower regions of the wormhole geometry. Another key physical quantity of the wormhole is the proper radial distance, which we have already defined in Eq. (11). The embedding surfaces and proper radial distances for the King and DZ DM models under Model-I of f(R, Lm) gravity are illustrated in Figs. 8 and 9, respectively, while Fig. 10 illustrates the embedding surface and proper radial distance for Model-II. In the embedding diagrams, the regions with positive curvature (Z(r) > 0) and negative curvature (Z(r) < 0) represent the upper and lower universes of the wormhole, respectively. The total gravitational energy Eg characterizes the nature of the gravitational interaction, with Eg > 0 corresponding to repulsion [121]. For the present wormhole models, the graphical representation of the total gravitational energy in Fig. 11 indicates Eg > 0 (a short numerical sketch of the embedding integral (75) is given below).
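The minimal sketch below (assuming the Model-II shape function (59) with r0 = 1.4) integrates Eq. (75) numerically; the slope dZ/dr diverges at the throat, but the embedding profile itself remains finite.

```python
import numpy as np
from scipy.integrate import quad

# Embedding surface of Eq. (75): Z(r) = int_{r0}^{r} dr' / sqrt(r'/xi(r') - 1),
# evaluated for the Model-II shape function of Eq. (59).
r0 = 1.4
xi = lambda r: r0 * np.log(1.0 + r / r0) / np.log(2.0)

def Z(r):
    # the 1/sqrt divergence at the throat is integrable, so quad stays finite
    val, _ = quad(lambda rp: 1.0 / np.sqrt(rp / xi(rp) - 1.0), r0, r)
    return val

for r in (1.5, 2.0, 4.0, 8.0):
    print(f"r = {r:4.1f}   Z(r) = {Z(r):.4f}")  # vertical at the throat, flattening slope outward
```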
This suggests a repulsive gravitational effect, which supports the potential existence of viable traversable wormholes within the King and DZ DM halos under the f(R, Lm) gravity framework.
VII. STRONG DEFLECTION ANGLE
In this section, we explore how the wormhole geometries considered in our study affect the path of light rays propagating in their vicinity. Specifically, we focus on calculating the strong gravitational deflection angle, which quantifies how much light is bent when it passes close to the wormhole throat. This analysis is crucial for understanding the gravitational lensing effects of these wormholes, which can provide observable signatures distinguishing them from black holes or other compact objects. Indeed, strong gravitational lensing provides a powerful tool for probing the geometry of spacetime and the properties of astrophysical objects [122, 123]. In this context, Bozza [124] introduced a systematic analytical approach for examining light deflection in spherically symmetric spacetimes subjected to strong gravitational fields. The strong deflection angle α(rc) corresponding to the wormhole metric (10) can be written as [124, 125]

\alpha(r_c) = -\pi + 2\int_{r_c}^{\infty} \frac{e^{\Phi(r)}\, dr}{r^2 \sqrt{1 - \dfrac{\xi(r)}{r}}\, \sqrt{\dfrac{1}{\chi^2} - \dfrac{e^{2\Phi(r)}}{r^2}}}, \qquad (80)

where rc denotes the closest approach of the light ray to the wormhole throat, and χ represents the impact parameter. For null geodesics, these quantities are related as χ = rc e^{-Φ(rc)}. The obtained deflection angles for the proposed wormhole structures are presented in Fig. 12 as a function of the closest-approach distance rc. The results indicate that the deflection angle diminishes as rc increases, i.e., light bending is weaker at greater distances from the wormhole throat. In contrast, near the throat, the deflection angle grows sharply and diverges, highlighting the intense curvature and strong gravitational influence in that region. Furthermore, the f(R, Lm) gravity parameter α plays a significant role: larger α values reduce the deflection angle, indicating a decrease in gravitational lensing strength consistent with the reduced spacetime curvature.
VIII. DISCUSSIONS AND CONCLUSION
Modified gravity theories offer an alternative explanation for the late-time cosmic acceleration. Among them, f(R) gravity, where f(R) is an arbitrary function of the Ricci scalar R, is particularly significant. More recently, the generalized f(R, Lm) gravity, depending on both R and the matter Lagrangian, has been introduced, with extensive astrophysical and cosmological implications. Recently, several investigations have demonstrated that f(R, Lm) gravity offers a suitable theoretical framework for constructing and sustaining traversable wormhole geometries [126-130]. It is noteworthy that, in standard Einstein gravity, sustaining wormhole geometries generally necessitates exotic matter [108]. In contrast, numerous studies have demonstrated that within modified gravity frameworks, wormhole solutions can often be constructed without the need for exotic matter [131-134]. In this study, we have explored asymptotically flat non-exotic traversable wormhole structures within the King and DZ DM halos in the framework of f(R, Lm) gravity under two functional forms of the theory, Model-I: f(R, Lm) = R/2 + Lm^α and Model-II: f(R, Lm) = R/2 + (1 + λR)Lm.
The key features of the proposed wormhole solutions can be summarized as follows:
• For Model-I wormholes: In Model-I of f(R, Lm) gravity, the wormhole solutions are sustained by two different DM density profiles, the King model and the Dekel-Zhao model. The King-model-generated shape function (32) increases monotonically with r and decreases with higher α, while satisfying ξ(r)/r ≤ 1 for r ≥ r0 and fulfilling the flare-out condition for the parameters α = 2.2, 2.3, 2.4, 2.5, 2.6 with β = 0.65, γ = 1, η = -0.5, rs = 1.01, and r0 = 1.4 (see Fig. 1). These behaviors confirm that the King-DM-based shape function encapsulates all the fundamental characteristics of a traversable wormhole. Furthermore, its asymptotic flatness indicates that the resulting wormhole geometries are asymptotically flat. To examine the physical characteristics of the matter supporting the wormholes, we analyzed the relevant energy conditions. It is found that NECr, NECt, WECr, WECt, DECr, and SEC are satisfied throughout the wormhole spacetime, while DECt holds near the throat (see Fig. 2). This indicates the absence of exotic matter, confirming that the King DM halo can sustain traversable wormholes in f(R, Lm) gravity without exotic matter. The shape function (45) derived from the Dekel-Zhao model is also monotonically increasing with r, satisfies ξ(r)/r ≤ 1 for r ≥ r0, meets the flare-out condition, and has ξ(r)/r → 0 as r → ∞ for α = 2.2, 2.3, 2.4, 2.5, 2.6 with ρs = 0.06, κ = 2.12, rs = 6, and r0 = 1.4, as shown in Fig. 3. Therefore, the obtained shape function, in this case, is also well-suited for constructing asymptotically flat traversable wormhole geometries. Moreover, these wormhole structures are supported by non-exotic matter, satisfying all the relevant energy conditions, as is clear from Fig. 4.
• For Model-II wormholes: In Model-II of f(R, Lm) gravity, we have introduced a new shape function (59) to construct the corresponding wormhole solutions. The newly proposed shape function, depicted in Fig. 5, exhibits monotonically increasing behavior with respect to the radial coordinate r, while ξ(r) - r intersects the r-axis at r = r0 = 1.4. It consistently satisfies the condition ξ(r)/r ≤ 1 for r ≥ r0 and meets the flare-out requirement at the throat. These features confirm that the shape function effectively generates traversable wormhole geometries, fulfilling all necessary conditions. Additionally, its asymptotically flat nature ensures that the resulting spacetime remains asymptotically flat. These wormhole configurations within the King DM halo are supported by non-exotic matter, with all the energy conditions, NEC, WEC, DEC, and SEC, holding throughout the spacetime for the parameter set λ = 0.11, 0.13, 0.15, 0.17, 0.19 with β = 0.65, γ = 1, η = -0.5, rs = 1.01, and r0 = 1.4, as illustrated in Fig. 6. Moreover, the wormhole configurations can also be sustained within the DZ DM halo without invoking exotic matter, as all energy conditions are satisfied for the parameter set λ = 0.11, 0.13, 0.15, 0.17, 0.19 with ρs = 0.06, κ = 2.12, rs = 6.5, and r0 = 1.4, as shown in Fig. 7.
We have examined some physical characteristics of the proposed wormhole geometries to gain a deeper understanding of their structure. In particular, we have analyzed the embedding surface and proper radial distance, which provide a geometric visualization of the wormhole throat and its curvature properties in an embedded Euclidean space.
The embedding surfaces and proper radial distances for the King and DZ DM models under Model-I of f(R, Lm) gravity are shown in Figs. 8 and 9, while Fig. 10 presents the corresponding results for Model-II. In the embedding diagrams, the regions with positive curvature (Z(r) > 0) and negative curvature (Z(r) < 0) represent the upper and lower universes of the wormhole, respectively, joined smoothly at the throat. The f(R, Lm) gravity parameter α significantly influences the wormhole geometry, with higher α values leading to reduced spacetime curvature. The proper radial distance for all considered models is finite, increases monotonically, and satisfies |l(r)| ≥ r - r0. Furthermore, we have analyzed the total gravitational energy to determine the nature of the gravitational interaction, whether attractive or repulsive, within the wormhole configurations. Interestingly, the results reveal that the total gravitational energy is repulsive in nature, as is clear from Fig. 11, which supports the potential existence of viable traversable wormholes within the King and DZ DM halos under the f(R, Lm) gravity framework. Overall, these analyses provide deeper insight into the geometric and physical viability of the proposed wormhole configurations within the f(R, Lm) gravity framework.
In addition, we have investigated gravitational lensing, emphasizing the strong deflection effects generated by the wormhole geometries in each of the considered f(R, Lm) gravity models, employing Bozza's formalism [124]. Our findings reveal that the deflection angles associated with the King and DZ DM profiles in Model-I and the introduced shape function in Model-II exhibit a decreasing trend with increasing closest-approach distance rc (see Fig. 12), implying weaker light bending farther from the wormhole throat. Near the throat, however, the deflection angle increases rapidly and diverges, indicating strong spacetime curvature and intense gravitational effects, where light can be trapped in unstable circular orbits. Moreover, the f(R, Lm) gravity parameter α has a notable impact on the deflection angles; larger values of α lead to smaller deflection angles, reflecting a reduction in gravitational lensing strength due to the diminished curvature of spacetime.
In recent years, observations have confirmed black holes at galactic centers, but no direct evidence for wormholes exists. Consequently, the scientific community continues searching for observational signatures of wormholes. Following the pioneering work of Morris and Thorne, numerous theoretical models have been developed within GR and modified gravity theories, often at galactic scales. In this study, we have constructed new asymptotically flat traversable wormhole solutions within the framework of f(R, Lm) gravity using the King and DZ DM models, without invoking exotic matter. Remarkably, these wormhole configurations satisfy all essential geometric and physical conditions, demonstrating their viability within the King and DZ DM halos under f(R, Lm) gravity. Additionally, we have analyzed the corresponding deflection angles for each DM model, providing novel insights into gravitational lensing effects in the context of modified gravity.
It is important to emphasize that these wormhole solutions are purely theoretical, and ongoing efforts in the scientific community aim to identify observational signatures, such as through scalar wave scattering, gravitational lensing [135], and gamma-ray burst light curve analyses [136]. Furthermore, our study may inspire future research on constructing non-exotic traversable wormhole geometries within the King and DZ DM halos under other modified gravity theories, thereby extending their applicability across a wide range of alternative gravitational frameworks.
Acknowledgments
FR would like to thank the authorities of the Inter-University Centre for Astronomy and Astrophysics, Pune, India for providing research facilities.
[1] L. Flamm, Physikalische Zeitschrift 17, 448 (1916).
[2] A. Einstein and N. Rosen, Phys. Rev. 48, 73 (1935).
[3] J. A. Wheeler, Ann. Phys. 2, 604 (1957).
[4] R. W. Fuller and J. A. Wheeler, Phys. Rev. 128, 919 (1962).
[5] M. S. Morris and K. S. Thorne, Am. J. Phys. 56, 395 (1988).
[6] M. Visser, Lorentzian Wormholes: From Einstein to Hawking, Springer (1995).
[7] J. A. Wheeler, Phys. Rev. 97, 511 (1955).
[8] M. Visser, S. Kar, and N. Dadhich, Phys. Rev. Lett. 90, 201102 (2003).
[9] F. S. N. Lobo and M. A. Oliveira, Phys. Rev. D 80, 104012 (2009).
[10] T. Azizi, Int. J. Theor. Phys. 52, 3486 (2013).
[11] S. H. Mazharimousavi and M. Halilsoy, Mod. Phys. Lett. A 31, 1650192 (2016).
[12] P. Moraes and P. Sahoo, Phys. Rev. D 96, 044038 (2017).
[13] A. K. Mishra, U. K. Sharma, V. C. Dubey, and A. Pradhan, Astrophys. Space Sci. 365, 34 (2020).
[14] A. Chanda, S. Dey, and B. C. Paul, Gen. Rel. Grav. 53, 78 (2021).
[15] K. A. Bronnikov and S. W. Kim, Phys. Rev. D 67, 064027 (2003).
[16] M. L. Camera, Phys. Lett. B 573, 27 (2003).
[17] F. Parsaei and N. Riazi, Phys. Rev. D 91, 024015 (2015).
[18] F. Javed, G. Mustafa, A. Övgün, and M. F. Shamir, Eur. Phys. J. Plus 137, 61 (2021).
[19] I. P. Lobo, M. G. Richarte, J. P. M. Graça, and H. Moradpour, Eur. Phys. J. Plus 135, 550 (2020).
[20] T. Tangphati, C. Muniz, A. Pradhan, and A. Banerjee, Phys. Dark Univ. 42, 101364 (2023).
[21] G. Mustafa, Z. Hassan, P. Moraes, and P. Sahoo, Phys. Lett. B 821, 136612 (2021).
[22] Z. Hassan, G. Mustafa, J. R. L. Santos, and P. K. Sahoo, Europhys. Lett. 139, 39001 (2022).
[23] A. Banerjee, A. Pradhan, T. Tangphati, and F. Rahaman, Eur. Phys. J. C 81, 1031 (2021).
[24] Z. Hassan, S. Ghosh, P. K. Sahoo, and V. S. H. Rao, Gen. Rel. Grav. 55, 90 (2023).
[25] R. Solanki, Z. Hassan, and P. K. Sahoo, Chin. J. Phys. 85, 74 (2023).
[26] M. F. Shamir, G. Mustafa, S. Waseem, and M. Ahmad, Commun. Theor. Phys. 73, 115401 (2021).
[27] R. Shaikh, Phys. Rev. D 98, 064033 (2018).
[28] S. Bahamonde, U. Camci, S. Capozziello, and M. Jamil, Phys. Rev. D 94, 084042 (2016).
[29] K. Jusufi, Phys. Rev. D 98, 044016 (2018).
[30] K. Jusufi, M. Jamil, and M. Rizwan, Gen. Rel. Grav. 51, 102 (2019).
[31] T. Harko and F. S. N. Lobo, Eur. Phys. J. C 70, 373 (2010).
[32] T. Harko, Phys. Lett. B 669, 376 (2008).
[33] T. Harko, Phys. Rev. D 81, 084050 (2010).
[34] T. Harko, Phys. Rev. D 81, 044021 (2010).
[35] S. Nesseris, Phys. Rev. D 79, 044015 (2009).
[36] V. Faraoni, Phys. Rev. D 76, 127501 (2007).
[37] V. Faraoni, Phys. Rev. D 80, 124040 (2009).
[38] V. Faraoni, Cosmology in Scalar-Tensor Gravity, Kluwer Academic, Dordrecht (2004).
[39] O. Bertolami, J. Paramos, and S. Turyshev, arXiv:gr-qc/0602016 (2006).
[40] J. Wang and K. Liao, Class. Quantum Grav. 29, 215016 (2012).
[41] B. S. Goncalves, P. H. R. S. Moraes, and B. Mishra, Fortschr. Phys. 71(8), 2200153 (2023).
[42] R. Solanki, B. Patel, L. V. Jaybhaye, and P. K. Sahoo, Commun. Theor. Phys. 75, 075401 (2023).
[43] L. V. Jaybhaye, R. Solanki, S. Mandal, and P. K. Sahoo, Universe 9, 163 (2023).
[44] M. Zeyauddin, A. Dixit, and A. Pradhan, Int. J. Geom. Meth. Mod. Phys. 21(09), 2450167 (2023).
[45] N. Myrzakulov, M. Koussour, A. H. A. Alnadhief, and A. Abebe, Eur. Phys. J. Plus 138, 852 (2023).
[46] D. C. Maurya, Grav. Cosm. 29, 315 (2023).
[47] R. Solanki, et al., Commun. Theor. Phys. 75, 075401 (2023).
[48] J. K. Singh, et al., New Astron. 104, 102070 (2023).
[49] L. V. Jaybhaye, et al., Phys. Dark Univ. 40, 101223 (2023).
[50] J. C. Fabris, et al., Eur. Phys. J. Plus 138, 232 (2023).
[51] A. Pradhan, et al., Int. J. Geom. Meth. Mod. Phys. 20, 1230105 (2023).
[52] D. C. Maurya, New Astron. 100, 101974 (2023).
[53] G. A. Carvalho, et al., Eur. Phys. J. C 82, 1096 (2022).
[54] R. V. Labato, G. A. Carvalho, and C. A. Bertulani, Eur. Phys. J. C 81, 1013 (2021).
[55] T. Harko and S. Shahidi, Eur. Phys. J. C 82, 1003 (2022).
[56] T. Harko and M. J. Lake, Eur. Phys. J. C 75, 60 (2015).
[57] K. K. Nandi, Y.-Z. Zhang, and K. B. Vijaya Kumar, Phys. Rev. D 70, 127503 (2004).
[58] M. S. Churilova, R. A. Konoplya, Z. Stuchlik, and A. Zhidenko, JCAP 10, 010 (2021).
[59] R. A. Konoplya and A. Zhidenko, Phys. Rev. Lett. 128, 091104 (2022).
[60] A. Ashtekar, J. Phys. Conf. Ser. 189, 012003 (2009).
[61] R. Sengupta, S. Ghosh, and M. Kalam, Eur. Phys. J. C 83, 830 (2023).
[62] C. R. Muniz, T. Tangphati, R. M. P. Neves, and M. B. Cruz, Phys. Dark Univ. 46, 101673 (2024).
[63] N. Aghanim, et al. (Planck), Astron. Astrophys. 641, A6 (2020).
[64] F. Zwicky, Helv. Phys. Acta 6, 110 (1933).
[65] V. C. Rubin and W. K. Ford, Jr., Astrophys. J. 159, 379 (1970).
[66] M. Persic, P. Salucci, and F. Stel, Mon. Not. Roy. Astron. Soc. 281, 27 (1996).
[67] G. Bertone and D. Hooper, Rev. Mod. Phys. 90, 045002 (2018).
[68] L. Randall, Nature 557, 2 (2018).
[69] A. Ashraf, et al., Phys. Dark Univ. 47, 101787 (2025).
[70] G. Mustafa, et al., Phys. Dark Univ. 47, 101765 (2025).
[71] G. Mustafa, et al., Phys. Dark Univ. 47, 101753 (2025).
[72] A. Ashraf, et al., Phys. Dark Univ. 47, 101725 (2025).
[73] A. Ashraf, et al., Phys. Dark Univ. 48, 101836 (2025).
[74] A. Ashraf, et al., Phys. Dark Univ. 47, 101823 (2025).
[75] A. Ditta, et al., Phys. Dark Univ. 47, 101818 (2025).
[76] A. Ditta, et al., Phys. Dark Univ. 46, 101573 (2024).
[77] A. Bouzenada, et al., Nucl. Phys. B 1017, 116928 (2025).
[78] A. Saleem, et al., Nucl. Phys. B 1017, 116926 (2025).
[79] C. A. Argüelles, et al., Rev. Mod. Phys. 93, 035007 (2021).
[80] D. J. E. Marsh, D. Ellis, and V. M. Mehta, Dark Matter: Evidence, Theory, and Constraints, Princeton Series in Astrophysics, Princeton University Press (2024).
[81] Y.-D. Tsai, J. Eby, and M. S. Safronova, Nature Astron. 7, 113 (2023).
[82] A. D. S. Souza, C. R. Muniz, R. M. P. Neves, and M. B. Cruz, Ann. Phys. 472, 169859 (2025).
[83] Z. Xu, M. Tang, G. Cao, and S.-N. Zhang, Eur. Phys. J. C 80, 70 (2020).
[84] C. R. Muniz and R. V. Maluf, Annals Phys. 446, 169129 (2022).
[85] G. Mustafa, S. K. Maurya, and S. Ray, Fortsch. Phys. 71, 2200129 (2023).
[86] R. Radhakrishnan, et al., Symmetry 16, 1007 (2024).
[87] A. Errehymy, et al., Eur. Phys. J. C 84, 573 (2024).
[88] S. K. Maurya, J. Kumar, S. Kiroriwal, and A. Errehymy, Phys. Dark Univ. 46, 101564 (2024).
[89] Z. Hassan and P. K. Sahoo, Annalen Phys. 536, 2400114 (2024).
[90] Z. Xu, M. Tang, G. Cao, and S.-N. Zhang, Eur. Phys. J. C 80, 70 (2020).
[91] J. F. Navarro, C. S. Frenk, and S. D. M. White, Astrophys. J. 462, 563 (1996).
[92] K. G. Begeman, A. H. Broeils, and R. H. Sanders, Mon. Not. Roy. Astron. Soc. 249, 523 (1991).
[93] P. H. Chavanis, M. Lemou, and F. Méhats, Phys. Rev. D 91(6), 063531 (2015).
[94] H. Zhao, Mon. Not. R. Astron. Soc. 278, 488 (1996).
[95] H. Zhao, Mon. Not. R. Astron. Soc. 287, 525 (1997).
[96] A. Dekel, G. Ishai, A. A. Dutton, and A. V. Macciò, Mon. Not. R. Astron. Soc. 468, 1005 (2017).
[97] J. Freundlich, et al., Mon. Not. R. Astron. Soc. 499, 2912 (2020).
[98] P. Salucci, Universe 11, 67 (2025).
[99] D. Batic, J. M. Faraji, and M. Nowakowski, Eur. Phys. J. C 82, 759 (2022).
[100] A. A. Badawi, S. Shaymatov, and Y. Sekhmani, JCAP 02, 014 (2025).
[101] A. Einstein, Ann. Math. 40, 922 (1939).
[102] V. Cardoso, et al., Phys. Rev. Lett. 129, 241103 (2022).
[103] L. Hernquist, Astrophys. J. 356, 359 (1990).
[104] M. Khatri and P. K. Sahoo, Phys. Dark Univ. 49, 102042 (2025).
[105] A. Errehymy, O. Donmez, A. Syzdykova, K. Myrzakulov, S. Muminov, A. Dauletov, and J. Rayimbaev, Ann. Phys. 480, 170105 (2025).
[106] T. Harko and F. S. N. Lobo, Eur. Phys. J. C 70, 373-379 (2010).
[107] M. S. Morris and K. S. Thorne, Am. J. Phys. 56, 395 (1988).
[108] M. S. Morris, K. S. Thorne, and U. Yurtsever, Phys. Rev. Lett. 61, 1446 (1988).
[109] A. Raychaudhuri, Phys. Rev. 98, 1123 (1955).
[110] L. V. Jaybhaye, et al., Phys. Lett. B 831, 137148 (2022).
[111] I. R. King, ApJL 174, L123 (1972).
[112] A. N. Baushev, New Astronomy 60, 69-73 (2018).
[113] H. Zhao, Mon. Not. R. Astron. Soc. 278, 488 (1996).
[114] H. Zhao, Mon. Not. R. Astron. Soc. 287, 525 (1997).
[115] J. Freundlich, et al., Mon. Not. R. Astron. Soc. 499, 2912 (2020).
[116] N. M. Garcia and F. S. N. Lobo, Phys. Rev. D 82, 104018 (2010).
[117] N. M. Garcia and F. S. N. Lobo, Class. Quantum Gravity 28, 085018 (2011).
[118] R. V. Labato, G. A. Carvalho, and C. A. Bertulani, Eur. Phys. J. C 81, 1013 (2021).
[119] D. Lynden-Bell, J. Katz, and J. Bicak, Phys. Rev. D 75, 024040 (2007).
[120] K. K. Nandi, Y. Z. Zhang, R. G. Cai, and A. Panchenko, Phys. Rev. D 79, 024011 (2009).
[121] C. W. Misner, K. S. Thorne, and J. A. Wheeler, Gravitation, San Francisco (1973).
[122] K. S. Virbhadra, D. Narasimha, and S. M. Chitre, Astron. Astro. Phys. 337, 1 (1998).
[123] C. M. Claudel, K. S. Virbhadra, and G. F. R. Ellis, J. Math. Phys. 42, 818 (2001).
[124] V. Bozza, Phys. Rev. D 66, 103001 (2002).
[125] V. Bozza, et al., Gen. Relativ. Gravit. 33, 1535 (2001).
[126] N. S. Kavya, et al., Chinese Journal of Physics 84, 1-11 (2023).
[127] R. Solanki, Z. Hassan, and P. K. Sahoo, Chinese Journal of Physics 85, 74-88 (2023).
[128] K. De, S. Mitra, and U. C. De, International Journal of Geometric Methods in Modern Physics 22, 2450265 (2025).
[129] M. M. Rizwan, Z. Hassan, and P. K. Sahoo, Physics Letters B 860, 139152 (2025).
[130] A. Errehymy, Chinese Journal of Physics 89, 56-68 (2024).
[131] M. K. Zangeneh, F. S. N. Lobo, and M. H. Dehghani, Phys. Rev. D 92, 124049 (2015).
[132] N. Sarkar, S. Sarkar, M. Sarkar, and F. Rahaman, Physics of the Dark Universe 47, 101828 (2025).
[133] C. G. Boehmer, T. Harko, and F. S. N. Lobo, Phys. Rev. D 85, 044033 (2012).
[134] M. F. Shamir and I. Fayyaz, Eur. Phys. J. C 80, 1-9 (2020).
[135] P. K. F. Kuhfittig, Eur. Phys. J. C 74, 2818 (2014).
[136] D. Torres, G. Romero, and L. Anchordoqui, Phys. Rev. D 58, 123001 (1998).
Draft version October 17, 2025
Typeset using LaTeX default style in AASTeX7.0.1

Antarctic Infrared Binocular Telescope. I. System Overview, Laboratory Testing, and On-Sky Performance Evaluation

Zhongnan Dong,1 Bin Ma,1,2 Haoran Zhang,1 Jinji Li,1 Xu Yang,3 Yi Hu,3 Zhaohui Shang,3 and Michael C. B. Ashley4
The Terra Mater collaboration
1School of Physics and Astronomy, Sun Yat-sen University, Zhuhai 519082, China
2CSST Science Center for the Guangdong-Hong Kong-Macau Greater Bay Area, Zhuhai 519082, China
3National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100101, China
4School of Physics, University of New South Wales, Sydney, NSW 2052, Australia

ABSTRACT

Infrared time-domain surveys remain significantly underdeveloped compared with their optical counterparts. We have developed the Antarctic Infrared Binocular Telescope (AIRBT) to study the dynamic infrared sky at Dome A, Antarctica, taking advantage of the superb infrared observational conditions at this site. AIRBT consists of two identical 15 cm f/3 optical tube assemblies and two cost-effective indium gallium arsenide (InGaAs) cameras equipped with J and H filters, respectively. The cameras have 640 × 512 15 µm pixels, giving a scale of 6.9 arcsec pixel⁻¹ and a field of view (FoV) of 1.22 × 0.97 deg². We characterize the performance of the InGaAs cameras, including bias, readout noise, dark current, nonlinearity, and photon transfer curve (PTC). Our analysis highlights the distinct behaviors of InGaAs cameras compared with charge-coupled devices (CCDs). The bias and readout noise show temperature dependence. In addition, the noise measured from the PTCs has additional components increasing with exposure time. On-sky tests were conducted in October 2022, including system calibration, limiting depth, and photometric precision. For a single 3 s exposure, we achieved 5σ limiting magnitudes of 11.2 mag (Vega system) in J band and 9.7 mag in H band. The best photometric precision reached 20 mmag at the bright end, which could be further improved to sub-percent levels through image stacking. AIRBT was installed at Dome A in January 2023, and scientific observations began as soon as darkness set in.

Keywords: Astronomical detectors (84) — Infrared photometry (792) — Infrared telescopes (794) — Time domain astronomy (2109)

1. INTRODUCTION

Infrared surveys are essential for revealing cold, obscured, and/or high-redshift objects such as exoplanets, cold stars, star-forming regions, and distant galaxies. Static infrared surveys have mapped the sky with remarkable area and depth, including the Two Micron All-Sky Survey (2MASS, M. F. Skrutskie et al. 2006), the Deep Near-Infrared Southern Sky Survey (DENIS, N. Epchtein et al. 1997), the UKIRT Infrared Deep Sky Survey (UKIDSS, A. Lawrence et al. 2007), and large public surveys conducted by VISTA (W. Sutherland et al. 2015). However, the dynamic infrared sky remains largely unexplored compared to the densely instrumented optical time-domain surveys. VISTA allocated fractional observation time for a variability survey of the Milky Way over an area of 520 deg², called VISTA Variables in the Via Lactea (VVV, D. Minniti et al. 2010). The Palomar Gattini-IR (A. M. Moore et al. 2016; K. De et al. 2020; S. Murakawa et al. 2024) is a dedicated time-domain infrared survey with a 30 cm aperture telescope, covering about 15,000 square degrees of accessible sky with a median cadence of 2 days to a depth of 14.9 mag in J band. All magnitudes quoted in this paper are on the Vega system.
Corresponding author: Bin Ma

Infrared time-domain surveys face several challenges. One of the main issues is the high cost and complex cooling requirements (down to 80 K) of traditional mercury cadmium telluride (HgCdTe) detectors. Recently, cost-effective indium gallium arsenide (InGaAs) detectors have improved and offer an alternative solution for wavelengths of 0.9 – 1.7 µm. These detectors typically operate at temperatures higher than −80 ℃, achievable with simple thermoelectric cooling (TEC) without the need for complex cryogenic systems. Although their dark current is orders of magnitude higher than that of HgCdTe detectors or charge-coupled devices (CCDs), it is comparable to or even smaller than the infrared sky background. Astronomers have begun testing InGaAs cameras in astronomical observations. P. W. Sullivan et al. (2013) used a 0.6 m telescope equipped with an InGaAs camera, and achieved sub-mmag photometric precision. Similarly, tests with 12-inch and 18-inch telescopes showed that sub-percent precision could be obtained during exoplanet transits (R. Strausbaugh et al. 2018). An InGaAs camera on a 2.5 m telescope achieved background-limited imaging, enabling the detection of two supernovae and a 1.2% transit event (R. A. Simcoe et al. 2019). On the 2 m Liverpool Telescope with an H-band filter, an InGaAs camera reached mmag precision for sources < 10.7 mag and a 10σ depth of 16 mag with a total exposure time of 3 minutes (K. Batty et al. 2022). After extensive on-sky testing, InGaAs cameras are now being deployed in time-domain surveys. The Wide-Field Infrared Transient Explorer (WINTER, N. P. Lourie et al. 2020) began operations at Palomar Observatory in June 2023, with six 1920 × 1080 pixel InGaAs cameras on a 1 m telescope with a field of view (FoV) of 1 deg². It is designed for dedicated near-infrared follow-up of kilonovae detected by LIGO (D. Frostig et al. 2022). The Dynamic REd All-sky Monitoring Survey (DREAMS, J. Soon et al. 2020; M. Birch et al. 2022) is a time-domain sky survey using six 1280 × 1024 pixel InGaAs cameras on a 0.5 m telescope at Siding Spring Observatory; commissioning is planned for 2026. A 1280 × 1024 pixel InGaAs-based instrument (0.81 – 1.33 µm) was designed for the SPECULOOS telescope (P. P. Pedersen et al. 2024). It achieved better photometric precision than CCDs (0.7 – 1.1 µm) for L-type stars and cooler, as the infrared band significantly reduced noise from atmospheric precipitable water vapor (PWV) variability. Another challenge is that ground-based observations are limited by site conditions. The infrared sky background from 1 – 4 µm at temperate-latitude observatories is typically between 10 and 10⁴ times brighter than in optical bands, due to strong hydroxyl airglow and thermal emission from the atmosphere and telescope (e.g., S. Noll et al. 2012). Moreover, atmospheric transmittance is significantly reduced at certain wavelengths due to water vapor absorption, creating gaps between the J, H, and K bands. All of these issues are ameliorated when observing from the Antarctic plateau, which is an exceptional site for infrared observations (M. G. Hidas et al. 2000), primarily due to its unique environmental conditions. The extremely low temperatures in Antarctica greatly reduce thermal radiation from the lower atmosphere and the telescope, as well as reducing water vapor absorption since much of the water has precipitated out.
Observations at the South Pole demonstrate a 20 – 100 times reduction in background emission in the Kdark band (2.43 µm), along with 2 – 3 times reductions in the J and H bands (M. C. B. Ashley et al. 1996; A. Phillips et al. 1999). The excellent infrared sky conditions in Antarctica have led to a number of projects. For example, the 0.6 m SPIREX telescope at the South Pole Station (M. G. Burton et al. 2000); the International Robotic Antarctic Infrared Telescope (IRAIT, G. Tosti et al. 2006; G. A. Durand et al. 2014) with the near/mid-infrared camera AMICA (1.25 – 25 µm) at Dome C, Antarctica; and the proposed Cryoscope telescope (M. M. Kasliwal et al. 2025), a 50 deg² FoV, 1.2 m aperture survey telescope in the Kdark band planned for deployment to Dome C, preceded by a pathfinder prototype with a 16 deg² FoV and 26 cm aperture scheduled for installation in December 2026 (N. Earley et al. 2024).

Dome A, as the highest location on the Antarctic plateau, has the best published THz and optical observation conditions on the ground (Z. Shang 2020). It has the lowest PWV, which is crucial for THz observations (S.-C. Shi et al. 2016), long dark nights (H. Zou et al. 2010), a high fraction of clear nighttime (X. Yang et al. 2021), a shallow boundary layer (C. S. Bonner et al. 2008, 2010), strong temperature inversion and a stable atmosphere (Y. Hu et al. 2014, 2019), and superb seeing for optical and infrared observations (B. Ma et al. 2020). In the near-infrared, J. Zhang et al. (2023) reported that the sky at Dome A is as dark as the South Pole in the J, H, and Ks bands, based on preliminary measurements from several nights in April 2019. This is not surprising given the study at the South Pole by M. C. B. Ashley et al. (1996) and the fact that Dome A has a higher altitude and lower temperatures than the South Pole, and that thermal IR emission, which is significant in Ks band, is greatly reduced by the low ambient temperature. The extremely low PWV at Dome A reduces absorption around 1.4 µm and 1.9 µm, opening new ground-based windows for observation (G. Sims et al. 2012). The Kunlun Infrared Sky Survey (KISS, M. G. Burton et al. 2016) was proposed to explore the dynamic universe in Kdark band, but was unable to proceed due to export restrictions on the detector. Z.-Y. Li et al. (2024) developed a 15 cm near-infrared telescope with a FoV of 0.87° × 0.69° using an IMX990 InGaAs camera; this was installed at Dome A in January 2024 and conducted daytime observations, achieving a detection limit of J = 10 mag with an effective exposure time of 175 s (C. Yang et al. 2025).

Figure 1. Optical design of the telescope, which is an RC design with lens correctors. The entrance pupil has a diameter of 15 cm and the overall focal ratio is f/3.

Herein, we present the Antarctic Infrared Binocular Telescope (AIRBT), a time-domain survey pathfinder for Dome A. The AIRBT consists of two 15 cm optical tube assemblies (OTAs) with commercial InGaAs cameras, providing a FoV of 1.22° × 0.97°. By simultaneously imaging in the J and H bands, AIRBT aims to study bright variables, monitor the sky background at Dome A over the long term, and lay the technological groundwork for future large telescopes. AIRBT was installed at Dome A in January 2023 by the 39th Chinese National Antarctic Research Expedition (CHINARE 39). Due to focus issues in the first year, the data reduction requires further processing. After commissioning, AIRBT began survey observations in J and H bands with good operational status in 2024.
In this paper, we describe the design of AIRBT and present test results from both laboratory and on-sky observations. The paper is organized as follows: Sect. 2 provides an overview of the system, including the telescopes and InGaAs cameras. Sect. 3 presents the laboratory characterization of the detectors, while Sect. 4 describes on-sky performance prior to deployment to Antarctica. We summarize the paper in Sect. 5.

2. SYSTEM OVERVIEW

AIRBT is optimized for wavelengths of 1.0 – 1.7 µm and the Antarctic environment. Each of the two OTAs adopts a Ritchey-Chrétien (RC) design with lens correctors, as shown in Fig. 1. The OTAs have a diameter of 15 cm and a focal ratio of f/3. The optical design achieves 80% encircled energy within a radius of 15 µm across the full FoV, which has a diameter of 15 mm (1.9°). The image quality remains stable at ambient temperatures from −20 ℃ to −80 ℃, since the telescope structure is made of INVAR, a low-expansion material. Thus, the telescope requires only one focusing process during installation in the Antarctic summer when the ambient temperature is around −20 ℃, and it will maintain sharp images throughout the year, even when the temperature drops to −80 ℃. This eliminates the requirement for further focusing during the unmanned polar night. Electrically conductive Indium-Tin-Oxide (ITO) films are coated on the windows of AIRBT to prevent frost accumulation. The films are powered during observations to heat the windows to several degrees above the ambient temperature, thereby allowing frost to sublime. AIRBT uses two LD-SW640171550-C2-G cameras from Xi’an Leading Optoelectronic Technology Co., Ltd. Each camera has a FPA0640P15F-17-T2 InGaAs focal plane array (FPA; https://www.clpt.com.tw/640x512P15/17603/) with 640 × 512 pixels. The pixel size is 15 µm, corresponding to 6.9 arcsec on the sky, giving an effective FoV of 1.22° × 0.97°. These cameras utilize TEC to cool the FPA temperature to 30 – 40 ℃ below the camera shell temperature. When operating at Dome A the FPA can readily achieve its optimal operating temperature of −55 ℃. The two cameras are equipped with J and H filters, respectively. The spectral response of the InGaAs detectors is 0.9 – 1.7 µm with a quantum efficiency of ≥70% at room temperature. After cooling, the long wavelength cut-off becomes shorter, resulting in a response range of approximately 0.9 – 1.63 µm at −40 ℃. Therefore, the effective H filter of AIRBT covers only the blue half of the 2MASS H filter (1.51 – 1.79 µm) at Dome A. The wavelength range of the J band filter of AIRBT is similar to that of the 2MASS J filter (1.11 – 1.39 µm).

All hardware is designed for Dome A operating conditions. We choose these cameras not only for their performance but also for their suitability for Antarctic conditions. The cameras can operate normally down to −40 ℃ and even colder, accounting for their own heat generated during operation. Nevertheless, extreme temperatures during the Antarctic winter still pose operational risks. Consequently, we add thermal insulation and heaters for the cameras. The heaters are automatically controlled according to the camera temperature. The active thermal control ensures optimal operating temperature when the air temperature ranges from −20 ℃ to −80 ℃, as verified by laboratory cold tests. All power and data transfer are provided through the PLATeau Observatory (PLATO, J. S. Lawrence et al. 2009; M. C. B. Ashley et al. 2010) platform. A network cable, long enough to cover the 30-meter distance between AIRBT and its computer, also provides redundancy by enabling every computer on the local area network to communicate with the cameras.
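The quoted pixel scale and FoV follow directly from the numbers above; a minimal cross-check in Python (pure geometry using only the stated design parameters, not instrument software):

```python
import math

# Cross-check of AIRBT's quoted pixel scale and FoV from the stated design:
# 15 cm entrance pupil, f/3 focal ratio, 15 um pixels, 640 x 512 format.
RAD_TO_ARCSEC = math.degrees(1.0) * 3600.0           # ~206265 arcsec per radian

aperture_m = 0.15
focal_ratio = 3.0
pixel_m = 15e-6
nx, ny = 640, 512

focal_length_m = aperture_m * focal_ratio            # 0.45 m
scale = RAD_TO_ARCSEC * pixel_m / focal_length_m     # arcsec per pixel
print(f"pixel scale ~ {scale:.1f} arcsec/pixel")     # ~6.9, as quoted
print(f"FoV ~ {scale * nx / 3600:.2f} x {scale * ny / 3600:.2f} deg")
# ~1.22 x 0.98 deg, matching the quoted 1.22 x 0.97 deg to rounding
```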
3. LABORATORY TESTS OF CAMERAS

We characterized the cameras’ performance through laboratory tests, including bias, readout noise, dark current, nonlinearity, photon transfer curve (PTC), and gain. We followed the standard procedure for CCD tests and found that InGaAs cameras exhibit some differences from CCDs: (1) the bias takes a short time (10 µs) to reach a stable level; (2) the bias decreases with decreasing FPA temperature; (3) the noise increases with integration time during PTC measurements, affecting the gain calculation. The cameras feature 14-bit outputs with maximum count values of 16,383 in analog-to-digital units (ADUs). The signal chain operates in three gain modes (high, middle, and low), each with distinct parameters while maintaining similar trends. In this section, we present detailed results from the J band camera, as the performance of the H band camera was comparable.

3.1. Bias and readout noise

For CCDs the bias is essentially constant for each pixel, and a bias frame is obtained from a dark frame with the shortest integration time, or at least one short enough to make the accumulated dark current negligible. The noise in these frames is also relatively constant, corresponding to the readout noise. Additionally, the bias is almost invariant with temperature. However, InGaAs detectors behave differently. Firstly, both the bias and readout noise require a sufficiently long integration time to reach a stable state, as illustrated in Fig. 2. The black solid lines represent median counts (bias + signal) in ADUs from short dark frames taken with the camera at room temperature and the FPA cooled to −20 ℃, typical for most sites. For the shortest integration time (1 µs) at middle and high gains, many pixels have zero counts. We suspect this is due to the frequency response of the signal chain for these gains. As the integration time increases from 1 µs to about 10 µs in middle gain mode, the median count increases rapidly from 200 ADU to 999 ADU. Similarly, the median noise (blue solid lines) rises from 0 to 107 e⁻. After this, the count stabilizes, while the noise continues to increase slowly as the accumulated dark current becomes significant. This indicates that the bias of InGaAs detectors takes time to stabilize, so it is important to select an appropriate integration time for deriving a bias frame. For our cameras, we chose 1 ms dark frames as the bias. The bias frame exhibits striped patterns, as shown in Fig. 3. This temporally stable pattern can be removed from science images using standard bias subtraction. Secondly, both the bias and readout noise decrease with the FPA temperature. The dashed lines in Fig. 2 show counts and noises with the camera at low temperature, with the FPA cooled to −55 ℃, typical for Antarctica. In this case, the bias level decreases from 2771 (999) to 2025 (485) ADU in high (middle) gain mode. The effect is even more pronounced in low gain mode, where the bias drops from 375 to 73 ADU, and it takes significantly longer (several seconds) to stabilize. For readout noise, the effective values in high, middle, and low gain modes are 96 (83), 107 (93), and 290 (253) e⁻ at −20 (−55) ℃, respectively.
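To make the bias-derivation recipe concrete, here is a minimal sketch of estimating a bias frame and the readout noise from a stack of short (1 ms) dark frames. The function and array names are illustrative assumptions, not the team's pipeline; synthetic middle-gain numbers are used for the demonstration:

```python
import numpy as np

def bias_and_readout_noise(dark_stack, gain_e_per_adu):
    """Bias frame and readout noise from a stack of short dark frames (ADU).

    The frames should be long enough (>~10 us) for the bias to stabilize,
    but short enough that accumulated dark current is negligible.
    """
    bias_frame = np.median(dark_stack, axis=0)            # per-pixel bias (ADU)
    # Temporal scatter per pixel gives the readout noise; the stable stripe
    # pattern cancels because it is identical in every frame.
    rdn_adu = np.median(np.std(dark_stack, axis=0, ddof=1))
    return bias_frame, rdn_adu * gain_e_per_adu           # noise in electrons

# Synthetic example: middle gain (8.9 e-/ADU assumed), ~93 e- readout noise.
rng = np.random.default_rng(0)
stack = 485.0 + rng.normal(0.0, 93.0 / 8.9, size=(50, 256, 320))
bias, rdn = bias_and_readout_noise(stack, gain_e_per_adu=8.9)
print(f"readout noise ~ {rdn:.0f} e-")                    # ~93 e-
```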
3.2. Dark current

To deeply cool the FPA with its TEC, we put the camera in a low-temperature test chamber with a temperature of −40 ℃. The lower left panel of Fig. 3 displays the temperature dependence of the median dark current in the high and middle gain modes. The dark current decreases sharply with temperature in high gain mode, with the decrease slowing beyond −40 ℃. A similar trend is observed in middle gain mode, though not depicted here. The dark current is higher in middle gain mode than in high gain mode. The stable operating temperature for middle gain mode is −55 ℃, where the dark current is 605 e⁻ s⁻¹ pixel⁻¹.

Figure 2. Counts (orange) and noise (purple) versus time from dark frames. Solid curves represent results with the FPA temperature around −20 ℃, while dashed curves represent FPA temperature around −55 ℃. We show that both the bias and readout noise take time to stabilize and decrease with lower FPA temperature.

Fig. 3 presents a 5 s dark frame at −55 ℃ in middle gain mode. The pixels near the edge show 50% higher dark current than normal pixels. The histogram of dark current follows approximately a normal distribution, with 90% below 750 e⁻ s⁻¹ pixel⁻¹.

3.3. Nonlinearity

To investigate nonlinearity, we took flat-field frames with increasing exposure times in middle gain mode and plotted the mean signal versus time in Fig. 4. The plot shows high linearity between 1000 and 12,500 ADU, with nonlinearity less than 1%. However, 117 pixels (0.04% of the total) exhibit higher nonlinearity, smaller linear ranges, or no response to light.

3.4. Photon transfer curve and gain

The PTC is the relationship between variance and signal from flat-field frames. The variance consists of two noise terms: the readout noise term σ²_rdn, which is relatively constant, and the photon shot noise, which follows a Poisson distribution.

Figure 3. Bias and dark current. Upper left: Bias frame with temporally stable stripe patterns. Upper right: Dark current frame with a 5 s integration at −55 ℃ in middle gain mode. Lower left: Dark current versus temperature in high gain (black) and middle gain (red) modes. The dark current decreases exponentially with temperature until −40 ℃. The dark current in middle gain mode is higher than that in high gain mode. Lower right: Distribution and cumulative curve of middle gain dark current at −55 ℃, following approximate normal statistics. All data are acquired in middle gain mode.

Figure 4. Left: Overall linearity. Middle: Nonlinearity, which stays below 1% between 1000 and 12,500 ADU. Right: Single-pixel linearity, where 117 pixels show no response or nonlinear behavior. All data were acquired in middle gain mode.

If the variance and signal are measured in units of ADU, the variance is given by:

Variance = σ²_rdn + Signal / Gain.  (1)

Therefore, the relationship is linear, and the slope is typically used to derive the gain.
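Before turning to the measurements, a sketch of how Eq. (1) yields the gain from consecutive-frame differences, the standard method used below. The data here are synthetic and the names illustrative (middle-gain values of 8.9 e⁻/ADU and ~93 e⁻ readout noise are assumed):

```python
import numpy as np

def ptc_point(flat_a, flat_b):
    """Signal (ADU) and variance (ADU^2) from a pair of consecutive flats.

    Differencing two equally illuminated frames removes fixed pattern noise;
    the variance of the difference is twice the per-frame variance.
    """
    signal = 0.5 * (flat_a.mean() + flat_b.mean())
    variance = np.var(flat_a - flat_b, ddof=1) / 2.0
    return signal, variance

def gain_from_ptc(signals, variances):
    """Fit Variance = sigma_rdn^2 + Signal/Gain (Eq. 1); slope = 1/Gain."""
    slope, intercept = np.polyfit(signals, variances, 1)
    return 1.0 / slope, intercept      # gain (e-/ADU), sigma_rdn^2 (ADU^2)

rng = np.random.default_rng(1)
gain, rdn_adu = 8.9, 93.0 / 8.9
sig, var = [], []
for level_adu in np.linspace(500, 6000, 10):
    lam = level_adu * gain             # mean signal in electrons
    a = rng.poisson(lam, (256, 256)) / gain + rng.normal(0, rdn_adu, (256, 256))
    b = rng.poisson(lam, (256, 256)) / gain + rng.normal(0, rdn_adu, (256, 256))
    s, v = ptc_point(a, b)
    sig.append(s)
    var.append(v)
print(gain_from_ptc(np.array(sig), np.array(var)))   # gain ~ 8.9 e-/ADU
```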
We apply the standard method to calculate variance by subtracting consecutive frames to eliminate the impact of fixed pattern noise. However, we find extra noise in the PTCs in all three gain modes. Limited by the highest intensity of our light sources, we used high gain mode as an example. We obtain PTCs in the traditional way by increasing the exposure time with a stable light source to increase the signal. Fig. 5(a) shows a set of PTCs, each obtained with a different light brightness. The PTCs are linear but surprisingly have varying slopes. The brighter the light, the gentler the slope, resulting in inconsistent gain results. For the same signal, the variance is larger for fainter light, i.e., longer exposure times. Therefore, we hypothesize the presence of extra noise that increases with integration time.

Table 1. The InGaAs detector specifications for the high, middle, and low gain modes at the −55 ℃ operating temperature.

Mode   | Gain (e⁻/ADU) | Bias (ADU) | Readout noise (e⁻) | Dark current (e⁻ s⁻¹ pixel⁻¹) | Saturation capacity (ke⁻)
High   | 2.6           | 2025       | 83                 | 395                           | 43
Middle | 8.9           | 485        | 93                 | 605                           | 146
Low    | 79            | 73         | 253                | 545                           | 1294

To confirm this, we alter the method for obtaining PTCs. Since the extra noise is time-dependent, we fix the exposure time and increase the signal by brightening the illumination to obtain a PTC. We obtain another set of PTCs for different fixed exposure times. The results shown in Fig. 5(b) confirm our hypothesis. The PTCs exhibit consistent slopes, or equivalently, consistent gain values, because each point in the same PTC has constant extra noise. On the other hand, the intercepts of the PTCs increase with integration time, proving that the noise increases with integration time. To derive this extra noise, shot noises from both photoelectrons and dark current, calculated using the gain values from the second set of PTCs, were subtracted from the first set of PTCs. The resultant remaining noise is plotted in Fig. 5(c). The time dependence of the noise can be well fitted by

∆Var = 0.15 t + 11.25 √t + 1510,  (2)

where ∆Var corresponds to the additional noise plus readout noise, and t represents the integration time in units of ms. The constant term is 1510 ADU², i.e., a readout noise of 101 e⁻, which agrees with the value derived from the bias in Sect. 3.1. To verify Eq. 2, we subtract the time-dependent noise from the second set of PTCs. The corrected variance versus signal is shown in Fig. 5(d). All data points fall on the same line, with a slope identical to the original. The assumption of extra noise explains both sets of PTCs well, though its origin remains unclear. The time dependence consists of two terms: a linear term and a square-root term, implying multiple possible causes. C. Yu et al. (2017) describe noise components of InGaAs detectors, including time-dependent shot noise, detector thermal noise, 1/f noise, amplifier thermal noise, reset noise from the sample circuit, and fixed pattern noise, among others. The additional noise may represent a combination of several of these components. However, their analysis does not account for a noise contribution whose variance follows a half-order dependence on time, a phenomenon that requires further study. To accurately measure the camera’s gain from PTCs, we recommend using fixed exposure times and variable light intensity. In this way, we measured gain values for all gain modes, yielding values of 2.6, 8.9, and 79 e⁻/ADU for the high, middle, and low gain modes, respectively.
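Since the model of Eq. (2) is linear in its coefficients, it can be recovered with an ordinary least-squares fit. A minimal sketch, using synthetic data generated from the paper's coefficients (the variable names are illustrative):

```python
import numpy as np

# Fit the excess-variance model of Eq. (2): dVar(t) = a*t + b*sqrt(t) + c,
# to (integration time, excess variance) pairs left after subtracting photon
# and dark-current shot noise from a PTC. Synthetic data use a=0.15, b=11.25,
# c=1510 (the high-gain values quoted in the text) plus small scatter.
t = np.linspace(100, 3000, 15)                       # integration time (ms)
rng = np.random.default_rng(2)
dvar = 0.15 * t + 11.25 * np.sqrt(t) + 1510 + rng.normal(0, 20, t.size)

A = np.column_stack([t, np.sqrt(t), np.ones_like(t)])
(a, b, c), *_ = np.linalg.lstsq(A, dvar, rcond=None)
print(f"dVar = {a:.2f} t + {b:.2f} sqrt(t) + {c:.0f}")   # ~0.15, ~11.25, ~1510
# The constant term c is the readout-noise variance in ADU^2; with the high
# gain of 2.6 e-/ADU, sqrt(1510) * 2.6 ~ 101 e-, matching Sect. 3.1.
```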
One should also take into account the extra noise when estimating the signal-to-noise ratio (S/N) in photometry.

3.5. Summary of camera tests

We summarize the laboratory test results for the three gain modes in Table 1. Accordingly, we adopt the middle gain mode for its balance between readout noise and saturation capacity. We found that stacking 9 middle-gain frames (each with exposure time t) achieves a dynamic range and readout noise level equivalent to a single low-gain frame with an exposure time of 9t (see the numerical check at the end of this section). The multi-frame strategy provides higher temporal resolution and can remove contamination from cosmic rays or satellite trails. However, this is not the case for the high gain mode due to its high effective readout noise. Therefore, the middle gain mode is optimal. Under the working conditions at Dome A, the middle gain mode has a readout noise of 93 e⁻, a dark current of 605 e⁻ s⁻¹ pixel⁻¹, and a saturation capacity of 146 ke⁻. The nonlinearity is better than 1%, with a fraction of bad pixels of 0.04%. Beyond these specifications, one should pay attention to the divergent behaviors between InGaAs cameras and CCDs. The bias of InGaAs cameras needs a short time to reach the stable level, and the bias decreases as the temperature decreases. Extra noise increases over time in the PTCs, and the slope will change if the signal increases with longer exposure time. An unbiased gain can be derived from a PTC by fixing the exposure time and increasing the light intensity to vary the signal. These behaviors may be common in InGaAs detectors, though to varying degrees, as we have observed them across multiple cameras from different manufacturers (e.g., Z. Dong et al. 2024).

Figure 5. The PTCs and gain estimation. (a) PTCs obtained by increasing exposure time with a stable light source. Different curves represent varying intensity levels. Variance for identical counts increases with lower light intensity, requiring longer integration time and resulting in higher variance. (b) PTCs for fixed exposure time with increasing light intensity. Different curves correspond to different exposure times. All PTC slopes remain consistent, indicating a gain of 2.6 e⁻/ADU. Longer integration times lead to greater intercepts, suggesting that additional noise accumulates over time. (c) Extra noise obtained by subtracting photon shot noise from the PTCs in (a). (d) Noise from the PTCs in (b) with extra noise subtracted. All data points fall on the same line.
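A quick numerical check of the stacking equivalence claimed in Sect. 3.5, using the Table 1 values. This back-of-envelope sketch assumes independent readout noise per frame (adding in quadrature) and saturation capacities adding linearly:

```python
import math

# Stack of 9 middle-gain frames vs. one low-gain frame of 9x the exposure.
rdn_mid, rdn_low = 93.0, 253.0        # readout noise (e-), Table 1
sat_mid, sat_low = 146e3, 1294e3      # saturation capacity (e-), Table 1
n = 9

stacked_rdn = math.sqrt(n) * rdn_mid  # 279 e-, vs. 253 e- for low gain
stacked_sat = n * sat_mid             # 1314 ke-, vs. 1294 ke- for low gain
print(f"stacked readout noise: {stacked_rdn:.0f} e- (low gain: {rdn_low:.0f} e-)")
print(f"stacked capacity: {stacked_sat/1e3:.0f} ke- (low gain: {sat_low/1e3:.0f} ke-)")
```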
4. ON-SKY PERFORMANCE TESTS

We performed on-sky tests of AIRBT at the Zhuhai campus of Sun Yat-sen University in October 2022 to verify the photometric performance of the whole system, as shown in Fig. 6. During observations, we obtained simultaneous J and H band images using the middle gain mode with 3 s exposures, considering the bias level, dark current, sky background, and saturation capacity mentioned in Sect. 3. Our observations targeted 20 fields in the Galactic plane to ensure enough stars in the images. The image depth was limited by the higher FPA temperature of −14.4 ℃, since the ambient temperature was above 20 ℃. This resulted in a dark current of 6,500 e⁻ s⁻¹ pixel⁻¹, about nine times higher than expected in Antarctica.

4.1. Image pre-processing

We corrected the raw images following bias and dark current calibrations. However, obvious residuals of dark current patterns remained, as shown in Fig. 7. This indicates that the dark current levels fluctuated slightly, likely due to instability of the TEC cooling and changes in the ambient temperature. To quantify the dark current in our scientific images, we scaled the dark frames obtained from laboratory tests. This scaling used a linear correction coefficient determined by the temperature-dependent ratio of the intensity difference between warm and normal pixels, as described in B. Ma et al. (2014, 2018). The correction yields negligible residuals, as demonstrated in Fig. 7, where the background RMS is close to the theoretical noise limit. The full width at half maximum (FWHM) of the image is about 2.8 pixels.

Figure 6. AIRBT at the Zhuhai campus of Sun Yat-sen University on October 11, 2022.

Figure 7. The effect of dark frame subtraction. Left: an example of raw frames from the observation. Middle: the image after subtracting the dark current captured in the laboratory. Right: the image after subtracting the dark current scaled by warm pixels. The background RMS decreases to 26 ADU, approaching the theoretical noise limit.

4.2. Photometry and astrometry

We performed aperture photometry using Source Extractor (E. Bertin & S. Arnouts 1996). We assigned weights to all pixels. For bad pixels, we set the weight to zero to mask them. For normal pixels, we calculated the weight using:

weight = 1 / variance = 1 / (σ²_rdn + σ²_Poisson),  (3)

where σ²_rdn denotes the readout noise variance from the bias, and σ²_Poisson indicates the Poisson noise variance from the dark current, sky background, and source flux. We used aperture diameters of 2, 4, and 6 pixels, corresponding to 13.8, 27.6, and 41.4 arcsec, respectively. The World Coordinate System (WCS) was established using SCAMP (E. Bertin 2006), with astrometric calibration performed against the 2MASS catalog (M. F. Skrutskie et al. 2006). Fig. 8 shows median positional errors of 0.7 arcsec in right ascension (RA) and 0.5 arcsec in declination (DEC). Given the pixel scale of 6.9 arcsec per pixel, these uncertainties correspond to 0.1 pixel precision.

Figure 8. Astrometry precision measurements for 682 frames across all observation fields.

Figure 9. Magnitude zero-points for J (left) and H (right) bands. The analysis is based on sources with S/N ≥ 20, comprising 6,677 and 4,683 sources for the J and H bands, respectively. The red solid line shows the median values, yielding zero-points of 16.9 mag and 15.8 mag for J and H bands, respectively. Outliers may correspond to variable sources.
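To make the weighting scheme of Eq. (3) concrete, here is a minimal sketch of building a per-pixel inverse-variance weight map of the kind Source Extractor can consume. The function and array names are illustrative assumptions, not the team's pipeline:

```python
import numpy as np

def weight_map(image_adu, gain_e_per_adu, rdn_e, bad_mask):
    """Per-pixel weights following Eq. (3): 1 / (sigma_rdn^2 + sigma_Poisson^2).

    image_adu : bias-subtracted frame in ADU, still containing dark current,
                sky background, and source flux, all of which contribute
                Poisson variance.
    bad_mask  : boolean array; bad pixels get zero weight (masked).
    """
    signal_e = np.clip(image_adu * gain_e_per_adu, 0.0, None)  # Poisson var = signal (e-)
    variance_e2 = rdn_e**2 + signal_e
    w = 1.0 / variance_e2
    w[bad_mask] = 0.0
    return w

# Illustrative call with the middle-gain numbers (8.9 e-/ADU, 93 e- readout noise):
img = np.full((512, 640), 2500.0)        # ADU: dark + sky + sources
bad = np.zeros_like(img, dtype=bool)
bad[0, 0] = True                         # example bad pixel
w = weight_map(img, gain_e_per_adu=8.9, rdn_e=93.0, bad_mask=bad)
```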
4.3. Flux calibration

We performed flux calibration using the formula:

Magnitude = −2.5 log(Flux) + Zero-point,  (4)

where the magnitude is relative to the 2MASS catalog, and the flux represents the total signal in ADU within the aperture for a 3 s exposure frame. We utilized data from all 20 fields and selected the frame with optimal image quality, resulting in 6,677 targets. The selection criteria included an FWHM of ≤3 pixels, sky background level (3σ clipping), dark current (3σ clipping), and atmospheric extinction (3σ clipping). Fig. 9 shows AIRBT’s magnitude zero-points in J and H bands. We determined the system zero-points by calculating the median from sources with S/N ≥ 20, yielding 16.9 mag and 15.8 mag for J and H bands, respectively. Consequently, the overall throughput of the system is 19% and 17% for the J and H bands, respectively, set by the transmission of the atmosphere, ITO films, optics, and filters, as well as the detector quantum efficiency. Higher overall throughput is expected in Antarctica due to improved atmospheric transmission. We compare the magnitude systems of AIRBT and 2MASS in Fig. 10. The best linear fits for the magnitude transformation are

J_AIRBT = J_2MASS + 0.05 (J − H)_2MASS  (5)
H_AIRBT = H_2MASS + 0.16 (J − H)_2MASS  (6)
(J − H)_AIRBT = 0.96 (J − H)_2MASS − 0.02  (7)

The J band difference shows marginal color dependence with a linear coefficient of 0.05, while the H band color term reaches 0.16, mainly because the effective H band of AIRBT covers only the blue half of that of 2MASS. Additional variations, including telescope, filter, and atmospheric transmission, also contribute to the color dependence. Therefore, applying the magnitude transformation is essential when combining data from the two systems.

Figure 10. The magnitude transformation between AIRBT and 2MASS for J (left), H (middle), and J − H (right), derived from sources with S/N ≥ 20. The best linear fits are shown by the red lines (J: y = (0.04 ± 0.01)x − 0.06, r² = 0.04; H: y = (0.15 ± 0.01)x − 0.18, r² = 0.30; J − H: y = (0.91 ± 0.01)x + 0.10, r² = 0.95).

4.4. Photometric precision

To minimize pixel-related differences, we evaluated photometric precision by comparing two consecutive images with shifts of less than 0.1 pixels. The magnitude differences are then primarily due to measurement error, whose standard deviation divided by √2 gives the per-image error. We plot the magnitude differences from all image pairs in the upper panels of Fig. 11, and the practical versus theoretical magnitude errors in the lower panels. For the faint end, the noise is dominated by sky background and dark current. Smaller apertures provide better precision for faint sources by collecting less noise from the sky background. The 5σ limiting magnitude is about 11.2 and 9.7 for the J and H bands, respectively, with a 2-pixel aperture. Both bands are limited in depth by the bright sky and high dark current during the tests. The sky background during the tests was measured to be approximately 1,000 e⁻ s⁻¹ pixel⁻¹ (14.7 mag arcsec⁻²) in J band and 2,700 e⁻ s⁻¹ pixel⁻¹ (12.6 mag arcsec⁻²) in H band. These values are several times brighter than those observed at Dome A, as reported by J. Zhang et al. (2023).
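For intuition on how the faint-end numbers combine, here is a rough 5σ depth estimate assembled from the quantities quoted above. It is an order-of-magnitude sketch, not the authors' exact calculation; it assumes the zero-point was measured in the same 2-pixel aperture (so aperture losses cancel) and neglects the time-dependent extra noise of Sect. 3.4:

```python
import math

# Rough 5-sigma limiting magnitude for a single 3 s J-band exposure during the
# Zhuhai tests: ZP = 16.9, sky ~1000 e-/s/pix, dark ~6500 e-/s/pix,
# readout noise 93 e-, gain 8.9 e-/ADU, 2-pixel-diameter aperture.
zp, t, gain, rdn = 16.9, 3.0, 8.9, 93.0
sky, dark = 1000.0, 6500.0                  # e- s^-1 pixel^-1
npix = math.pi * (2.0 / 2.0) ** 2           # ~3.14 pixels in the aperture

var_e2 = npix * ((sky + dark) * t + rdn**2)         # background-limited variance
flux_5sigma_adu = 5.0 * math.sqrt(var_e2) / gain
print(f"J limit ~ {zp - 2.5 * math.log10(flux_5sigma_adu):.1f} mag")
# ~11.3 mag, close to the measured 11.2 mag
```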
Hence, AIRBT will be able to image deeper at Dome A, benefiting from the dark sky and low dark current. For bright sources, photon noise becomes dominant. Larger apertures provide better precision for bright sources by collecting more photons, where the noise is dominated by the stellar flux. However, rather than following the ideal photon-noise predictions, the measured precisions for unsaturated stars observed with a 6-pixel aperture approached approximately 0.02 mag. We propose that the noise is dominated by atmospheric scintillation for bright stars. The approximation of the scintillation variance is given by A. T. Young (1967):

σ²_sc = 10 × 10⁻⁶ D^(−4/3) t⁻¹ (cos z)⁻³ exp(−2 h_obs/H),  (8)

where D is the telescope diameter (m), t is the exposure time (s), h_obs is the observatory height (15 m), z is the zenith distance, H is the atmospheric scale height (8000 m), and σ_sc is a dimensionless relative error of flux. For our test, this results in a scintillation of about 1%. J. Osborn et al. (2015) demonstrated that this equation underestimates the median scintillation noise at several major observatories by a factor of about 1.5. This factor would likely increase further during our observational tests, due to typically stronger atmospheric turbulence compared to that at world-class observatories. Consequently, scintillation accounts for the predominant photometric noise for bright stars in our test. By combining six images, the noise could be reduced to about 8 mmag. In particular, the scintillation at Dome A is significantly weaker, making photometric precision at the mmag level achievable for bright stars.

Figure 11. Photometric precision of J (left) and H (right) bands. For each band, the upper panel shows magnitude differences (∆mag) between consecutive exposures, while the lower panel displays photometric errors (σ mag) binned in 0.5-mag intervals for three aperture diameters: 2, 4, and 6 pixels, corresponding to 13.8, 27.6, and 41.4 arcsec, respectively. The best photometric precision reaches 20 mmag, and the limiting magnitude is 11.2 (9.7) in the J (H) band.
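A minimal evaluation of Eq. (8) for the test setup closes out this section. The zenith distance of the test fields is not stated, so z = 45° below is an assumed, illustrative value:

```python
import math

# Evaluate Eq. (8) (Young 1967 scintillation variance) for the Zhuhai tests:
# D = 0.15 m, t = 3 s, h_obs = 15 m, H = 8000 m. z = 45 deg is an assumption.
def scintillation_sigma(D_m, t_s, z_deg, h_obs_m=15.0, H_m=8000.0):
    var = (10e-6 * D_m ** (-4.0 / 3.0) / t_s
           * math.cos(math.radians(z_deg)) ** (-3.0)
           * math.exp(-2.0 * h_obs_m / H_m))
    return math.sqrt(var)          # dimensionless relative flux error

sigma = scintillation_sigma(D_m=0.15, t_s=3.0, z_deg=45.0)
print(f"sigma_sc ~ {sigma * 100:.1f}%")
# ~1.1% at z = 45 deg (~0.6% at zenith), consistent with the ~1% in the text;
# the factor of ~1.5 from Osborn et al. (2015) would raise this further.
```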
5. SUMMARY

We have developed AIRBT for simultaneous imaging in J and H bands at Dome A, Antarctica. The goals include investigation of bright variable stars and measurements of the sky brightness in the near-infrared. AIRBT consists of two identical optical tube assemblies, each with a diameter of 15 cm. Each OTA is equipped with an InGaAs camera with an effective FoV of 1.22° × 0.97°. We have tested the InGaAs cameras in detail in the laboratory and found that the middle gain mode is optimal for astronomy. In this mode, the cameras have a readout noise of 93 e⁻, a dark current of 605 e⁻ s⁻¹ pixel⁻¹, and a saturation capacity of 146 ke⁻. The nonlinearity is better than 1%, and the fraction of bad pixels is 0.04%. However, InGaAs cameras exhibit different behavior from CCDs. The exposure time for bias should be at least 10 µs, rather than using the shortest available one. There is additional noise that increases with integration time, which may bias the slope of a PTC. A PTC should be obtained by fixing the exposure time and varying the light source to increase the signal, allowing for an unbiased measurement of the gain. We have evaluated the on-sky performance of AIRBT through observations in Zhuhai. Compared with 2MASS magnitudes, the J magnitude of AIRBT shows only a marginal difference, while the H magnitude shows significant color dependence, as the cutoff wavelength of the InGaAs cameras is only 1.63 µm. For a 3 s exposure, the 5σ limiting magnitudes are 11.2 mag and 9.7 mag in J and H bands, respectively. The photometric precision is better than 2% at the bright end, and can be further improved to sub-percent levels by image stacking. Both the depth and precision were limited by the observing environment, including a high dark current from the elevated air temperature, a bright sky background, and strong scintillation. Therefore, the photometric performance has considerable potential for improvement at Dome A. In January 2023, CHINARE 39 deployed AIRBT at Dome A. They installed AIRBT on one of the equatorial mounts for a KL-DIMM telescope (B. Ma et al. 2020) on an 8 m tower. We will release the data and report the results of sky background measurements and time-domain research in upcoming papers of this series.

ACKNOWLEDGEMENTS

B.M. acknowledges the support from the National Key R&D Program of China (grant No. 2022YFC2807303). AIRBT is financially supported by the School of Physics and Astronomy, Sun Yat-sen University.

REFERENCES

Ashley, M. C. B., Burton, M. G., Storey, J. W. V., et al. 1996, PASP, 108, 721, doi: 10.1086/133792
Ashley, M. C. B., Allen, G., Bonner, C. S., et al. 2010, in EAS Publications Series, Vol. 40, EAS Publications Series, ed. L. Spinoglio & N. Epchtein, 79–84, doi: 10.1051/eas/1040009
Batty, K., Steele, I., & Copperwheat, C. 2022, PASP, 134, 065001, doi: 10.1088/1538-3873/ac71cc
Bertin, E. 2006, in Astronomical Society of the Pacific Conference Series, Vol. 351, Astronomical Data Analysis Software and Systems XV, ed. C. Gabriel, C. Arviset, D. Ponz, & S. Enrique, 112
Bertin, E., & Arnouts, S. 1996, A&AS, 117, 393, doi: 10.1051/aas:1996164
Birch, M., Soon, J., Travouillon, T., et al. 2022, Journal of Astronomical Telescopes, Instruments, and Systems, 8, 016001, doi: 10.1117/1.JATIS.8.1.016001
Bonner, C. S., Ashley, M. C. B., Lawrence, J. S., et al. 2008, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 7014, Ground-based and Airborne Instrumentation for Astronomy II, ed. I. S. McLean & M. M. Casali, 70146I, doi: 10.1117/12.788154
Bonner, C. S., Ashley, M. C. B., Cui, X., et al. 2010, PASP, 122, 1122, doi: 10.1086/656250
Burton, M. G., Ashley, M. C. B., Marks, R. D., et al. 2000, ApJ, 542, 359, doi: 10.1086/309510
Burton, M. G., Zheng, J., Mould, J., et al. 2016, PASA, 33, e047, doi: 10.1017/pasa.2016.38
De, K., Hankins, M. J., Kasliwal, M. M., et al. 2020, PASP, 132, 025001, doi: 10.1088/1538-3873/ab6069
Dong, Z., Ma, B., Li, J., & Zhang, H. 2024, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 13103, X-Ray, Optical, and Infrared Detectors for Astronomy XI, ed. A. D. Holland & K. Minoglou, 131031M, doi: 10.1117/12.3018935
Durand, G. A., Tremblin, P., Minier, V., et al. 2014, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 9145, Ground-based and Airborne Telescopes V, ed. L. M. Stepp, R. Gilmozzi, & H. J. Hall, 91450D, doi: 10.1117/12.2056562
Earley, N., Fucik, J., Fahey, L., et al. 2024, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 13096, Ground-based and Airborne Instrumentation for Astronomy X, ed. J. J. Bryant, K. Motohara, & J. R. D. Vernet, 130963M, doi: 10.1117/12.3020678
Epchtein, N., de Batz, B., Capoani, L., et al. 1997, The Messenger, 87, 27
Frostig, D., Biscoveanu, S., Mo, G., et al. 2022, ApJ, 926, 152, doi: 10.3847/1538-4357/ac4508
Hidas, M. G., Burton, M. G., Chamberlain, M. A., & Storey, J. W. V. 2000, PASA, 17, 260, doi: 10.1071/AS00033
Hu, Y., Shang, Z., Ashley, M. C. B., et al. 2014, PASP, 126, 868, doi: 10.1086/678327
Hu, Y., Hu, K., Shang, Z., et al. 2019, PASP, 131, 015001, doi: 10.1088/1538-3873/aae916
Kasliwal, M. M., Earley, N., Smith, R., et al. 2025, PASP, 137, 065001, doi: 10.1088/1538-3873/adc629
Lawrence, A., Warren, S. J., Almaini, O., et al. 2007, MNRAS, 379, 1599, doi: 10.1111/j.1365-2966.2007.12040.x
Lawrence, J. S., Ashley, M. C. B., Hengst, S., et al. 2009, Review of Scientific Instruments, 80, 064501, doi: 10.1063/1.3137081
Li, Z.-Y., Cong, J.-N., Wu, Z.-X., et al. 2024, PASP, 136, 115002, doi: 10.1088/1538-3873/ad8d7b
Lourie, N. P., Baker, J. W., Burruss, R. S., et al. 2020, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 11447, Ground-based and Airborne Instrumentation for Astronomy VIII, ed. C. J. Evans, J. J. Bryant, & K. Motohara, 114479K, doi: 10.1117/12.2561210
Ma, B., Shang, Z., Hu, Y., et al. 2014, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 9154, High Energy, Optical, and Infrared Detectors for Astronomy VI, ed. A. D. Holland & J. Beletic, 91541T, doi: 10.1117/12.2055416
Ma, B., Shang, Z., Hu, Y., et al. 2018, MNRAS, 479, 111, doi: 10.1093/mnras/sty1392
Ma, B., Shang, Z., Hu, Y., et al. 2020, Nature, 583, 771, doi: 10.1038/s41586-020-2489-0
Minniti, D., Lucas, P. W., Emerson, J. P., et al. 2010, NewA, 15, 433, doi: 10.1016/j.newast.2009.12.002
Moore, A. M., Kasliwal, M. M., Gelino, C. R., et al. 2016, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 9906, Ground-based and Airborne Telescopes VI, ed. H. J. Hall, R. Gilmozzi, & H. K. Marshall, 99062C, doi: 10.1117/12.2233694
Murakawa, S., De, K., Ashley, M. C. B., et al. 2024, PASP, 136, 104501, doi: 10.1088/1538-3873/ad7db1
Noll, S., Kausch, W., Barden, M., et al. 2012, A&A, 543, A92, doi: 10.1051/0004-6361/201219040
Osborn, J., Föhring, D., Dhillon, V. S., & Wilson, R. W. 2015, MNRAS, 452, 1707, doi: 10.1093/mnras/stv1400
Pedersen, P. P., Queloz, D., Garcia, L., et al. 2024, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 13096, Ground-based and Airborne Instrumentation for Astronomy X, ed. J. J. Bryant, K. Motohara, & J. R. D. Vernet, 130963X, doi: 10.1117/12.3018320
Phillips, A., Burton, M. G., Ashley, M. C. B., et al. 1999, ApJ, 527, 1009, doi: 10.1086/308097
Shang, Z. 2020, Research in Astronomy and Astrophysics, 20, 168, doi: 10.1088/1674-4527/20/10/168
Shi, S.-C., Paine, S., Yao, Q.-J., et al. 2016, Nature Astronomy, 1, 0001, doi: 10.1038/s41550-016-0001
Simcoe, R. A., Fűrész, G., Sullivan, P. W., et al. 2019, AJ, 157, 46, doi: 10.3847/1538-3881/aae094
Sims, G., Ashley, M. C. B., Cui, X., et al. 2012, PASP, 124, 74, doi: 10.1086/664077
Skrutskie, M. F., Cutri, R. M., Stiening, R., et al. 2006, AJ, 131, 1163, doi: 10.1086/498708
Soon, J., Adams, D., De, K., et al. 2020, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 11203, Advances in Optical Astronomical Instrumentation 2019, ed. S. C. Ellis & C. d’Orgeville, 1120307, doi: 10.1117/12.2539594
Strausbaugh, R., Jackson, R., & Butler, N. 2018, PASP, 130, 095001, doi: 10.1088/1538-3873/aaca2a
Sullivan, P. W., Croll, B., & Simcoe, R. A. 2013, PASP, 125, 1021, doi: 10.1086/672573
Sutherland, W., Emerson, J., Dalton, G., et al. 2015, A&A, 575, A25, doi: 10.1051/0004-6361/201424973
Tosti, G., Busso, M., Nucciarelli, G., et al. 2006, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 6267, Ground-based and Airborne Telescopes, ed. L. M. Stepp, 62671H, doi: 10.1117/12.670302
Yang, C., Ji, T., Li, Z., et al. 2025, AJ, 169, 228, doi: 10.3847/1538-3881/adbd0e
Yang, X., Shang, Z., Hu, K., et al. 2021, MNRAS, 501, 3614, doi: 10.1093/mnras/staa3824
Young, A. T. 1967, AJ, 72, 747, doi: 10.1086/110303
Yu, C., Li, X., Yang, B., et al. 2017, Infrared Physics and Technology, 85, 74, doi: 10.1016/j.infrared.2017.05.017
Zhang, J., Zhang, Y.-h., Tang, Q.-J., et al. 2023, MNRAS, 521, 5624, doi: 10.1093/mnras/stad775
Zou, H., Zhou, X., Jiang, Z., et al. 2010, AJ, 140, 602, doi: 10.1088/0004-6256/140/2/602
Draft version October 17, 2025 Typeset using LATEX default style in AASTeX7.0.1 Antarctic Infrared Binocular Telescope. I. System Overview, Laboratory Testing, and On-Sky Performance Evaluation Zhongnan Dong ,1 Bin Ma ,1, 2 Haoran Zhang,1 Jinji Li,1 Xu Yang,3 Yi Hu,3 Zhaohui Shang3 And Michael C. B. Ashley4 The Terra Mater collaboration 1 -sen University, Zhuhai 519082, China 2CSST Science Center for the Guangdong-Hong Kong-Macau Greater Bay Area, Zhuhai 519082, China 3National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100101, China 4 2052, Australia ABSTRACT Infrared time-domain surveys remain significantly underdeveloped compared with their optical counterparts. We have developed the Antarctic Infrared Binocular Telescope (AIRBT) to study the dynamic infrared sky at Dome A, Antarctica, taking advantage of the superb infrared observational conditions at this site. AIRBT consists of two identical 15 cm f/3 optical tube assemblies and two cost-effective indium gallium arsenide (InGaAs) cameras equipped with J and H filters, respectively. The cameras have 640 × 512 15 μm pixels, giving a scale of 6.9 arcsec pixel-1 and a field of view (FoV) of 1.22 × 0.97 deg2. We characterize the performance of the InGaAs cameras, including bias, readout noise, dark current, nonlinearity, and photon transfer curve (PTC). Our analysis highlights the distinct behaviors of InGaAs cameras compared with charge-coupled devices (CCDs). The bias and readout noise show temperature dependence. In addition, the noise measured from the PTCs has additional components increasing with exposure time. On-sky tests were conducted in October 2022 including system calibration, limiting depth, and photometric precision. For a single 3 s exposure, we achieved 5σ limiting magnitudes of 11.2 mag (Vega system) in J band and 9.7 mag in H band. The best photometric precision reached 20 mmag at the bright end, which could be further improved to sub-percent levels through image stacking. AIRBT was installed at Dome A in January 2023, and scientific observations began as soon as darkness set in. Keywords: Astronomical detectors (84) - Infrared photometry (792) - Infrared telescopes (794) - Time domain astronomy (2109) 1. INTRODUCTION Infrared surveys are essential for revealing cold, obscured, and/or high-redshift objects such as exoplanets, cold stars, star-forming regions, and distant galaxies. Static infrared surveys have mapped the sky with remarkable area and depth, including the Two Micron All-Sky Survey (2MASS, M. F. Skrutskie et al. 2006), the Deep Near-Infrared Southern Sky Survey (DENIS, N. Epchtein et al. 1997), the UKIRT Infrared Deep Sky Survey (UKIDSS, A. Lawrence et al. 2007), and large public surveys conducted by VISTA (W. Sutherland et al. 2015). However, the dynamic infrared sky remains largely unexplored compared to the densely instrumented optical time-domain surveys. VISTA allocated fractional observation time for a variability survey of the Milky Way over an area of 520 deg2, called VISTA Variables in the Via Lactea (VVV, D. Minniti et al. 2010). The Palomar Gattini-IR (A. M. Moore et al. 2016; K. De et al. 2020; S. Murakawa et al. 2024) is a dedicated time-domain infrared survey with a 30 cm aperture telescope, covering about 15,000 square degrees of accessible sky with a median cadence of 2 days to a depth of 14.9 mag in J band. All magnitudes quoted in this paper are on the Vega system. Corresponding author: Bin Ma 16 Oct 2025 2 Infrared time-domain surveys face several challenges. 
One of the main issues is the high cost and complex cooling requirements (down to 80 K) of traditional mercury cadmium telluride (HgCdTe) detectors. Recently, cost-effective indium gallium arsenide (InGaAs) detectors have improved and offer an alternative solution for wavelengths of 0.9 - 1.7 μm. These detectors typically operate at temperatures higher than -80°C, achievable with simple thermoelectric cooling (TEC) without the need for complex cryogenic systems. Although their dark current is orders of magnitude higher than that of HgCdTe detectors or charge-coupled devices (CCDs), it is comparable to or even smaller than the infrared sky background. Astronomers have begun testing InGaAs cameras in astronomical observations. P. W. Sullivan et al. (2013) used a 0.6 m telescope equipped with an InGaAs camera, and achieved sub-mmag photometric precision. Similarly, tests with 12-inch and 18-inch telescopes showed that sub-percent precision could be obtained during exoplanet transits (R. Strausbaugh et al. 2018). An InGaAs camera on a 2.5 m telescope achieved background-limited imaging, enabling the detection of two supernovae and a 1.2% transit event (R. A. Simcoe et al. 2019). On the 2 m Liverpool Telescope with an H-band filter, an InGaAs camera reached mmag precision for sources < 10.7 mag and a 10σ depth of 16 mag with a total exposure time of 3 minutes (K. Batty et al. 2022). After extensive on-sky testing, InGaAs cameras are now being deployed in time-domain surveys. The Wide-Field Infrared Transient Explorer (WINTER, N. P. Lourie et al. 2020) began operations at Palomar Observatory in June 2023, with six 1920 × 1080 pixel InGaAs cameras on a 1 m telescope with a field of view (FoV) of 1 deg2. It is designed for dedicated near-infrared follow-up of kilonovae detected by LIGO (D. Frostig et al. 2022). The Dynamic REd All-sky Monitoring Survey (DREAMS, J. Soon et al. 2020; M. Birch et al. 2022) is a time-domain sky survey using six 1280 × 1024 pixel InGaAs cameras on a 0.5 m telescope at Siding Spring Observatory; commissioning is planned for 2026. A 1280 × 1024 pixel InGaAs-based instrument (0.81 - 1.33 μm) was designed for the SPECULOOS telescope (P. P. Pedersen et al. 2024). It achieved better photometric precision than CCDs (0.7 - 1.1 μm) for L-type stars and cooler, as the infrared band significantly reduced noise from atmospheric precipitable water vapor (PWV) variability. Another challenge is that ground-based observations are limited by site conditions. The infrared sky background from 1 - 4 μm at temperate-latitude observatories is typically between 10 and 104 times brighter than in optical bands, due to strong hydroxyl airglow and thermal emission from the atmosphere and telescope (e.g., S. Noll et al. 2012). Moreover, atmospheric transmittance is significantly reduced at certain wavelengths due to water vapor absorption, creating gaps between the J, H, and K bands. All of these issues are ameliorated when observing from the Antarctic plateau, which is an exceptional site for infrared observations (M. G. Hidas et al. 2000), primarily due to its unique environmental conditions. The extremely low temperatures in Antarctica greatly reduce thermal radiation from the lower atmosphere and the telescope, as well as reducing water vapor absorption since much of the water has precipitated out. Observations at the South Pole demonstrate a 20 - 100 times reduction in background emission in the Kdark band (2.43 μm), along with 2 - 3 times reductions in the J and H bands (M. C. B. 
Ashley et al. 1996; A. Phillips et al. 1999). The excellent infrared sky conditions in Antarctica have led to a number of projects. For example, the 0.6 m SPIREX telescope at the South Pole Station (M. G. Burton et al. 2000); the International Robotic Antarctic Infrared Telescope (IRAIT, G. Tosti et al. 2006; G. A. Durand et al. 2014) with the near/mid-infrared camera AMICA (1.25 - 25 μm) at Dome C, Antarctica; the proposed Cryoscope telescope M. M. Kasliwal et al. (2025), a 50 deg2 FoV and 1.2 m aperture survey telescope in the Kdark band planned for deployment to Dome C, and preceded by a pathfinder prototype with 16 deg2 FoV and 26 cm aperture scheduled for installation in December 2026 (N. Earley et al. 2024). Dome A, as the highest location on the Antarctic plateau, has the best published THz and optical observation conditions on the ground (Z. Shang 2020). It has the lowest PWV - which is crucial for THz observations (S.-C. Shi et al. 2016), long dark nights (H. Zou et al. 2010), a high fraction of clear nighttime (X. Yang et al. 2021), a shallow boundary layer (C. S. Bonner et al. 2008, 2010), strong temperature inversion and stable atmosphere (Y. Hu et al. 2014, 2019), and superb seeing for optical and infrared observations (B. Ma et al. 2020). In the near-infrared, J. Zhang et al. (2023) reported that the sky at Dome A is as dark as South Pole in the J, H, and Ks bands based on preliminary measurements from several nights in April 2019. This is not surprising given the study at the South Pole by M. C. B. Ashley et al. (1996) and the fact that Dome A has a higher altitude and lower temperatures than the South Pole and that thermal IR emission, which is significant in Ks band, is greatly reduced by the low ambient temperature. The extremely low PWV at Dome A reduces absorption around 1.4 μm and 1.9 μm, opening new ground-based windows for observation (G. Sims et al. 2012). The Kunlun Infrared Sky Survey (KISS, M. G. Burton et al. 2016) was proposed to explore the dynamic universe in Kdark band, but was unable to proceed due to export restrictions on the detector. Z.-Y. Li et al. (2024) developed a 15 cm near-infrared telescope with a FoV of 3 Figure 1. Optical design of the telescope, which is an RC design with lens correctors. The entrance pupil has a diameter of 15 cm and the overall focal ratio is f/3. 0.87◦× 0.69◦using an IMX990 InGaAs camera; this was installed at Dome A in January 2024 and conducted daytime observations, achieving a detection limit of J = 10 mag with an effective exposure time of 175 s (C. Yang et al. 2025). Herein, we present the Antarctic Infrared Binocular Telescope (AIRBT), a time-domain survey pathfinder for Dome A. The AIRBT consists of two 15 cm optical tube assemblies (OTAs) with commercial InGaAs cameras, providing a FoV of 1.22◦× 0.97◦. By simultaneously imaging in the J and H bands, AIRBT aims to study bright variables, monitor the sky background at Dome A over the long term, and lay the technological groundwork for future large telescopes. AIRBT was installed at Dome A in January 2023 by the 39th Chinese National Antarctic Research Expedition (CHINARE 39). Due to focus issues in the first year, the data reduction requires further processing. After commissioning, AIRBT began survey observations in J and H bands with good operational status in 2024. In this paper, we describe the design of AIRBT and present test results from both laboratory and on-sky observations. The paper is organized as follows: Sect. 
The paper is organized as follows: Sect. 2 provides an overview of the system, including the telescopes and InGaAs cameras. Sect. 3 presents the laboratory characterization of the detectors, while Sect. 4 describes on-sky performance prior to deployment to Antarctica. We summarize the paper in Sect. 5.

2. SYSTEM OVERVIEW

AIRBT is optimized for wavelengths of 1.0 - 1.7 μm and for the Antarctic environment. Each of the two OTAs adopts a Ritchey-Chrétien (RC) design with lens correctors, as shown in Fig. 1. The OTAs have a diameter of 15 cm and a focal ratio of f/3. The optical design achieves 80% encircled energy within a radius of 15 μm across the full FoV, which has a diameter of 15 mm (1.9°). The image quality remains stable at ambient temperatures from -20 °C to -80 °C, since the telescope structure is made of INVAR, a low-expansion material. Thus, the telescope requires only one focusing process during installation in the Antarctic summer, when the ambient temperature is around -20 °C, and it will maintain sharp images throughout the year, even when the temperature drops to -80 °C. This eliminates the requirement for further focusing during the unmanned polar night. Electrically conductive indium tin oxide (ITO) films are coated on the windows of AIRBT to prevent frost accumulation. The films are powered during observations to heat the windows to several degrees above the ambient temperature, thereby allowing frost to sublime.

AIRBT uses two LD-SW640171550-C2-G cameras from Xi'an Leading Optoelectronic Technology Co., Ltd. Each camera has a FPA0640P15F-17-T2 InGaAs focal plane array (FPA) with 640 × 512 pixels (https://www.clpt.com.tw/640x512P15/17603/). The pixel size is 15 μm, corresponding to 6.9 arcsec on the sky and giving an effective FoV of 1.22° × 0.97°. These cameras use TEC to cool the FPA to 30 - 40 °C below the camera shell temperature. When operating at Dome A, the FPA can readily achieve its optimal operating temperature of -55 °C. The two cameras are equipped with J and H filters, respectively. The spectral response of the InGaAs detectors is 0.9 - 1.7 μm with a quantum efficiency of ≥70% at room temperature. After cooling, the long-wavelength cut-off becomes shorter, resulting in a response range of approximately 0.9 - 1.63 μm at -40 °C. Therefore, the effective H filter of AIRBT covers only the blue half of the 2MASS H filter (1.51 - 1.79 μm) at Dome A. The wavelength range of the J band filter of AIRBT is similar to that of the 2MASS J filter (1.11 - 1.39 μm).

All hardware is designed for Dome A operating conditions. We chose these cameras not only for their performance but also for their suitability for Antarctic conditions. The cameras can operate normally down to -40 °C and even colder, accounting for their own heat generated during operation. Nevertheless, extreme temperatures during the Antarctic winter still pose operational risks. Consequently, we added thermal insulation and heaters for the cameras. The heaters are automatically controlled according to the camera temperature. The active thermal control ensures an optimal operating temperature when the air temperature ranges from -20 °C to -80 °C, as verified by laboratory cold tests. All power and data transfer are provided through the PLATeau Observatory (PLATO, J. S. Lawrence et al. 2009; M. C. B. Ashley et al. 2010) platform. A network cable, long enough to cover the 30-meter distance between AIRBT and its computer, also provides redundancy by enabling every computer on the local area network to communicate with the cameras.
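As a quick consistency check of the numbers quoted above, the pixel scale follows from the focal length (a 15 cm aperture at f/3, i.e., 450 mm) and the 15 μm pixel pitch. The short Python sketch below is illustrative only and is not part of the AIRBT software.

```python
import math

ARCSEC_PER_RAD = 180.0 / math.pi * 3600.0   # ~206265

focal_length_mm = 150.0 * 3.0               # aperture (mm) x focal ratio
pixel_um = 15.0
npix_x, npix_y = 640, 512

# Small-angle approximation: plate scale = pixel size / focal length.
pixel_scale = ARCSEC_PER_RAD * (pixel_um * 1e-3) / focal_length_mm  # arcsec/pixel
fov_x_deg = pixel_scale * npix_x / 3600.0
fov_y_deg = pixel_scale * npix_y / 3600.0

print(f"pixel scale = {pixel_scale:.1f} arcsec/pixel")  # ~6.9
print(f"FoV = {fov_x_deg:.2f} x {fov_y_deg:.2f} deg")   # ~1.22 x 0.98
```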
3. LABORATORY TESTS OF CAMERAS

We characterized the cameras' performance through laboratory tests, including bias, readout noise, dark current, non-linearity, photon transfer curve (PTC), and gain. We followed the standard procedure for CCD tests and found that InGaAs cameras exhibit some differences from CCDs: (1) the bias takes a short time (10 μs) to reach a stable level; (2) the bias decreases with decreasing FPA temperature; (3) the noise increases with integration time during PTC measurements, affecting the gain calculation. The cameras feature 14-bit outputs with a maximum count value of 16,383 analog-to-digital units (ADUs). The signal chain operates in three gain modes (high, middle, and low), each with distinct parameters but similar trends. In this section, we present detailed results from the J band camera, as the performance of the H band camera was comparable.

3.1. Bias and readout noise

For CCDs the bias is essentially constant for each pixel, and a bias frame is obtained from a dark frame with the shortest integration time, or at least one short enough to make the accumulated dark current negligible. The noise in these frames is also relatively constant, corresponding to the readout noise. Additionally, the bias is almost invariant with temperature. However, InGaAs detectors behave differently.

Firstly, both the bias and readout noise require a sufficiently long integration time to reach a stable state, as illustrated in Fig. 2. The black solid lines represent median counts (bias + signal) in ADUs from short dark frames taken with the camera at room temperature and the FPA cooled to -20 °C, typical for most sites. For the shortest integration time (1 μs) at middle and high gains, many pixels have zero counts. We suspect this is due to the frequency response of the signal chain at these gains. As the integration time increases from 1 μs to about 10 μs in middle gain mode, the median count increases rapidly from 200 ADU to 999 ADU. Similarly, the median noise rises from 0 to 107 e⁻. After this, the count stabilizes, while the noise continues to increase slowly as the accumulated dark current becomes significant. This indicates that the bias of InGaAs detectors takes time to stabilize, so it is important to select an appropriate integration time for deriving a bias frame. For our cameras, we chose 1 ms dark frames as the bias. The bias frame exhibits striped patterns, as shown in Fig. 3. This temporally stable pattern can be removed from science images using standard bias subtraction.

Secondly, both the bias and readout noise decrease with the FPA temperature. The dashed lines in Fig. 2 show counts and noise with the camera at low temperature, with the FPA cooled to -55 °C, typical for Antarctica. In this case, the bias level decreases from 2771 (999) to 2025 (485) ADU in high (middle) gain mode. The effect is even more pronounced in low gain mode, where the bias drops from 375 to 73 ADU and takes significantly longer (several seconds) to stabilize. The effective readout noise in high, middle, and low gain modes is 96 (83), 107 (93), and 290 (253) e⁻ at -20 (-55) °C, respectively.
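The bias-frame and readout-noise estimation described above can be reproduced with a few lines of Python. This is a minimal sketch following the stated procedure (median-stack short darks for the bias; noise from the difference of two frames); the function name and the synthetic data are our own illustration, not AIRBT pipeline code.

```python
import numpy as np

def bias_and_readout_noise(darks_adu, gain_e_per_adu):
    """Estimate a bias frame and effective readout noise from a stack of
    short (e.g. 1 ms) dark frames.

    darks_adu : (N, ny, nx) stack of short dark exposures in ADU
    gain_e_per_adu : camera gain in e-/ADU
    """
    # Bias frame: per-pixel median over the stack keeps the stable
    # stripe pattern while rejecting outliers.
    bias = np.median(darks_adu, axis=0)

    # Readout noise: the difference of two frames removes the fixed
    # pattern; dividing by sqrt(2) gives the single-frame noise.
    diff = darks_adu[1].astype(float) - darks_adu[0].astype(float)
    rdn_adu = np.std(diff) / np.sqrt(2.0)
    return bias, rdn_adu * gain_e_per_adu

# Example with synthetic frames (bias level 485 ADU, middle gain 8.9 e-/ADU):
rng = np.random.default_rng(0)
stack = 485 + rng.normal(0.0, 93.0 / 8.9, size=(10, 512, 640))
bias, rdn = bias_and_readout_noise(stack, gain_e_per_adu=8.9)
print(f"readout noise ~ {rdn:.0f} e-")   # ~93 e- by construction
```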
3.2. Dark current

To deeply cool the FPA with its TEC, we put the camera in a low-temperature test chamber at -40 °C. The lower left panel of Fig. 3 displays the temperature dependence of the median dark current in the high and middle gain modes. The dark current decreases sharply with temperature in high gain mode, and the decrease slows beyond -40 °C. A similar trend is observed in middle gain mode, though not depicted here. The dark current is higher in middle gain mode than in high gain mode. The stable operating temperature for middle gain mode is -55 °C, with a dark current of 605 e⁻ s⁻¹ pixel⁻¹.

Figure 2. Counts (orange) and noise (purple) versus time from dark frames. Solid curves represent results with the FPA temperature around -20 °C, while dashed curves represent an FPA temperature around -55 °C. Both the bias and readout noise take time to stabilize and decrease with lower FPA temperature.

Fig. 3 presents a 5 s dark frame at -55 °C in middle gain mode. The pixels near the edge show 50% higher dark current than normal pixels. The histogram of dark current approximately follows a normal distribution, with 90% of pixels below 750 e⁻ s⁻¹ pixel⁻¹.

3.3. Nonlinearity

To investigate nonlinearity, we took flat-field frames with increasing exposure times in middle gain mode and plot the mean signal versus time in Fig. 4. The plot shows high linearity between 1000 and 12,500 ADU, with nonlinearity less than 1%. However, 117 pixels (0.04% of the total) exhibit higher nonlinearity, smaller linear ranges, or no response to light.

3.4. Photon transfer curve and gain

The PTC is the relationship between variance and signal in flat-field frames. The variance consists of two noise terms: the readout noise σ²_rdn, which is relatively constant, and the photon shot noise, which follows a Poisson distribution.

Figure 3. Bias and dark current. Upper left: bias frame with temporally stable stripe patterns. Upper right: dark current frame with a 5 s integration at -55 °C in middle gain mode. Lower left: dark current versus temperature in high gain (black) and middle gain (red) modes; the dark current decreases exponentially with temperature until -40 °C and is higher in middle gain mode than in high gain mode. Lower right: distribution and cumulative curve of the middle gain dark current at -55 °C (median 605 e⁻ s⁻¹ pixel⁻¹), following approximately normal statistics.

Figure 4. Left: overall linearity. Middle: nonlinearity, which stays below 1% between 1000 and 12,500 ADU. Right: single-pixel linearity, where 117 pixels show no response or nonlinear behavior. All data were acquired in middle gain mode.

If the variance and signal are measured in units of ADU, the variance is given by

Variance = σ²_rdn + Signal / Gain . (1)

Therefore, the relationship is linear, and the slope is typically used to derive the gain.
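A minimal sketch of this conventional gain estimate: a linear fit of variance versus signal per Eq. (1), whose slope is 1/Gain and whose intercept is the readout-noise variance. The synthetic numbers below simply mirror the high-gain values quoted in the text; the function name is our own.

```python
import numpy as np

def gain_from_ptc(signal_adu, variance_adu2):
    """Fit Eq. (1): Variance = sigma_rdn^2 + Signal/Gain.
    The slope of variance vs. signal is 1/Gain; the intercept is the
    readout-noise variance (all quantities in ADU / ADU^2)."""
    slope, intercept = np.polyfit(signal_adu, variance_adu2, 1)
    gain = 1.0 / slope                   # e-/ADU
    rdn_e = np.sqrt(intercept) * gain    # readout noise in e-
    return gain, rdn_e

# Synthetic PTC points consistent with the high-gain numbers in the text:
sig = np.linspace(500, 6000, 12)         # ADU
var = sig / 2.6 + (96 / 2.6) ** 2        # shot-noise term + readout term
print(gain_from_ptc(sig, var))           # -> (~2.6 e-/ADU, ~96 e-)
```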
We apply this standard method, calculating the variance by subtracting consecutive frames to eliminate the impact of fixed pattern noise. However, we find extra noise in the PTCs in all three gain modes. Limited by the highest intensity of our light sources, we use high gain mode as an example. We first obtained PTCs in the traditional way, increasing the exposure time under a stable light source to increase the signal. Fig. 5(a) shows a set of PTCs, each obtained with a different light brightness. The PTCs are linear but, surprisingly, have varying slopes: the brighter the light, the gentler the slope, resulting in inconsistent gain estimates. For the same signal, the variance is larger for fainter light, i.e., for longer exposure times. Therefore, we hypothesize the presence of extra noise that increases with integration time.

Table 1. InGaAs detector specifications in the high, middle, and low gain modes at the -55 °C operating temperature.

Mode    Gain (e⁻/ADU)  Bias (ADU)  Readout noise (e⁻)  Dark current (e⁻ s⁻¹ pixel⁻¹)  Saturation capacity (ke⁻)
High    2.6            2025        83                  395                            43
Middle  8.9            485         93                  605                            146
Low     79             73          253                 545                            1294

To confirm this, we altered the method for obtaining PTCs. Since the extra noise is time-dependent, we fixed the exposure time and increased the signal by brightening the illumination. We obtained another set of PTCs for different fixed exposure times. The results, shown in Fig. 5(b), confirm our hypothesis. The PTCs exhibit consistent slopes, or equivalently consistent gain values, because each point in the same PTC has constant extra noise. On the other hand, the intercepts of the PTCs increase with integration time, proving that the noise grows with integration time.

To derive this extra noise, shot noise from both photoelectrons and dark current, calculated using the gain values from the second set of PTCs, was subtracted from the first set of PTCs. The remaining noise is plotted in Fig. 5(c). Its time dependence is well fitted by

ΔVar = 0.15t + 11.25√t + 1510 , (2)

where ΔVar corresponds to the additional noise plus readout noise, and t is the integration time in units of ms. The constant term of 1510 ADU² corresponds to a readout noise of 101 e⁻, which agrees with the value derived from the bias in Sect. 3.1. To verify Eq. 2, we subtracted the time-dependent noise from the second set of PTCs. The corrected variance versus signal is shown in Fig. 5(d); all data points fall on the same line, with a slope identical to the original. The assumption of extra noise explains both sets of PTCs well, though its origin remains unclear. The time dependence consists of two terms, a linear term and a square-root term, implying multiple possible causes. C. Yu et al. (2017) describe noise components of InGaAs detectors, including time-dependent shot noise, detector thermal noise, 1/f noise, amplifier thermal noise, reset noise from the sample circuit, and fixed pattern noise, among others. The additional noise may represent a combination of several of these components. However, their analysis does not account for a noise contribution whose variance has a half-order dependence on time, a phenomenon that requires further study.

To accurately measure the camera's gain from PTCs, we recommend using fixed exposure times and variable light intensity. In this way, we measured gain values for all gain modes, yielding 2.6, 8.9, and 79 e⁻/ADU for the high, middle, and low gain modes, respectively.
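Since Eq. (2) is linear in its coefficients, the extra-noise model can be fitted by ordinary least squares in the basis (t, √t, 1). The sketch below illustrates this; the function name and the synthetic data are our own.

```python
import numpy as np

def fit_extra_noise(t_ms, delta_var_adu2):
    """Least-squares fit of Eq. (2): dVar = a*t + b*sqrt(t) + c.
    The model is linear in (a, b, c), so an ordinary lstsq solve suffices."""
    A = np.column_stack([t_ms, np.sqrt(t_ms), np.ones_like(t_ms)])
    (a, b, c), *_ = np.linalg.lstsq(A, delta_var_adu2, rcond=None)
    return a, b, c

# Synthetic check against the reported coefficients (0.15, 11.25, 1510):
t = np.linspace(100, 3000, 30)
dvar = 0.15 * t + 11.25 * np.sqrt(t) + 1510
print(fit_extra_noise(t, dvar))   # ~(0.15, 11.25, 1510)
```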
One should also take the extra noise into account when estimating the signal-to-noise ratio (S/N) in photometry.

3.5. Summary of camera tests

We summarize the laboratory test results for the three gain modes in Table 1. We adopt the middle gain mode for its balance between readout noise and saturation capacity. We found that stacking 9 middle-gain frames (each with exposure time t) achieves a dynamic range and readout noise level equivalent to a single low-gain frame with an exposure time of 9t. The multi-frame strategy provides higher temporal resolution and can remove contamination from cosmic rays or satellite trails. This is not the case for the high gain mode, however, due to its high effective readout noise. Therefore, the middle gain mode is optimal. Under the working conditions at Dome A, the middle gain mode has a readout noise of 93 e⁻, a dark current of 605 e⁻ s⁻¹ pixel⁻¹, and a saturation capacity of 146 ke⁻. The nonlinearity is better than 1%, with a bad-pixel fraction of 0.04%.

Beyond these specifications, one should pay attention to the divergent behaviors of InGaAs cameras and CCDs. The bias of InGaAs cameras needs a short time to reach a stable level, and the bias decreases as the temperature decreases. Extra noise increases with time in the PTCs, and the slope will change if the signal is increased through longer exposure times. An unbiased gain can be derived from a PTC by fixing the exposure time and increasing the light intensity to vary the signal. These behaviors may be common in InGaAs detectors, though to varying degrees, as we have observed them across multiple cameras from different manufacturers (e.g., Z. Dong et al. 2024).

Figure 5. PTCs and gain estimation. (a) PTCs obtained by increasing the exposure time under a stable light source; different curves represent different intensity levels. The variance at identical counts increases with lower light intensity, which requires longer integration times and hence accumulates more extra noise. (b) PTCs for fixed exposure times with increasing light intensity; different curves correspond to different exposure times. All PTC slopes are consistent, indicating a gain of 2.6 e⁻/ADU, while longer integration times give larger intercepts, showing that the additional noise accumulates over time. (c) Extra noise obtained by subtracting photon shot noise from the PTCs in (a). (d) The PTCs in (b) with the extra noise subtracted; all data points fall on the same line.

4. ON-SKY PERFORMANCE TESTS

We performed on-sky tests of AIRBT at the Zhuhai campus of Sun Yat-sen University in October 2022 to verify the photometric performance of the whole system, as shown in Fig. 6.
During the observations, we obtained simultaneous J and H band images using the middle gain mode with 3 s exposures, chosen in view of the bias level, dark current, sky background, and saturation capacity discussed in Sect. 3. Our observations targeted 20 fields in the Galactic plane to ensure enough stars in the images. The image depth was limited by the relatively high FPA temperature of -14.4 °C, since the ambient temperature was above 20 °C. This resulted in a dark current of 6,500 e⁻ s⁻¹ pixel⁻¹, about nine times higher than expected in Antarctica.

4.1. Image pre-processing

We corrected the raw images with bias and dark current calibrations. However, obvious residuals of the dark current patterns remained, as shown in Fig. 7. This indicates that the dark current levels fluctuated slightly, likely due to instability of the TEC cooling and changes in the ambient temperature. To quantify the dark current in our science images, we scaled the dark frames obtained from laboratory tests. This scaling used a linear correction coefficient determined by the temperature-dependent ratio of the intensity difference between warm and normal pixels, as described in B. Ma et al. (2014, 2018). The correction yields negligible residuals, as demonstrated in Fig. 7, where the background RMS is close to the theoretical noise limit. The full width at half maximum (FWHM) of the images is about 2.8 pixels.

Figure 6. AIRBT at the Zhuhai campus of Sun Yat-sen University on October 11, 2022.

Figure 7. The effect of dark frame subtraction. Left: an example raw frame from the observations. Middle: the image after subtracting the dark current measured in the laboratory. Right: the image after subtracting the dark current scaled by warm pixels. The background RMS decreases to 26 ADU, approaching the theoretical noise limit.

4.2. Photometry and astrometry

We performed aperture photometry using Source Extractor (E. Bertin & S. Arnouts 1996). We assigned weights to all pixels: for bad pixels, we set the weight to zero to mask them; for normal pixels, we calculated the weight as

weight = 1 / variance = 1 / (σ²_rdn + σ²_Poisson) , (3)

where σ²_rdn denotes the readout noise variance from the bias, and σ²_Poisson denotes the Poisson noise variance from the dark current, sky background, and source flux. We used aperture diameters of 2, 4, and 6 pixels, corresponding to 13.8, 27.6, and 41.4 arcsec, respectively. The World Coordinate System (WCS) was established using SCAMP (E. Bertin 2006), with astrometric calibration performed against the 2MASS catalog (M. F. Skrutskie et al. 2006). Fig. 8 shows median positional errors of 0.7 arcsec in right ascension (RA) and 0.5 arcsec in declination (DEC). Given the pixel scale of 6.9 arcsec per pixel, these uncertainties correspond to a precision of 0.1 pixel.

Figure 8. Astrometric precision measured from 682 frames across all observation fields.

Figure 9. Magnitude zero-points for the J (left) and H (right) bands. The analysis is based on sources with S/N ≥ 20, comprising 6,677 and 4,683 sources for the J and H bands, respectively. The red solid lines show the median values, yielding zero-points of 16.9 mag and 15.8 mag for the J and H bands, respectively. Outliers may correspond to variable sources.
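A sketch of the per-pixel weighting of Eq. (3), in the form one might prepare an inverse-variance weight map for Source Extractor's MAP_WEIGHT input; the function and its arguments are illustrative assumptions, not the actual pipeline.

```python
import numpy as np

def weight_map(image_adu, dark_e, sky_e, rdn_e, gain_e_per_adu, bad_mask):
    """Per-pixel weights per Eq. (3): weight = 1 / (rdn^2 + Poisson var).
    All variances are accumulated in electrons^2; bad pixels get zero weight.

    image_adu : background-subtracted source image in ADU
    dark_e, sky_e : accumulated dark and sky charge per pixel (e-)
    """
    source_e = np.clip(image_adu, 0, None) * gain_e_per_adu
    variance = rdn_e**2 + dark_e + sky_e + source_e   # Poisson: var = mean
    weight = 1.0 / variance
    weight[bad_mask] = 0.0                            # mask bad pixels
    return weight
```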
4.3. Flux calibration

We performed flux calibration using

Magnitude = -2.5 log(Flux) + Zero-point , (4)

where the magnitude is tied to the 2MASS catalog, and the flux is the total signal in ADU within the aperture for a 3 s exposure frame. We used data from all 20 fields, selecting the frame with the best image quality in each, resulting in 6,677 targets. The selection criteria included an FWHM of ≤3 pixels, the sky background level (3σ clipping), the dark current (3σ clipping), and atmospheric extinction (3σ clipping). Fig. 9 shows AIRBT's magnitude zero-points in the J and H bands. We determined the system zero-points by taking the median over sources with S/N ≥ 20, yielding 16.9 mag and 15.8 mag for the J and H bands, respectively. The corresponding overall throughput of the system is 19% and 17% in the J and H bands, set by the transmission of the atmosphere, ITO coating, optics, and filters, as well as the detector quantum efficiency. Higher overall throughput is expected in Antarctica due to improved atmospheric transmission.

We compare the magnitude systems of AIRBT and 2MASS in Fig. 10. The best linear fits for the magnitude transformation are

J_AIRBT = J_2MASS + 0.05(J - H)_2MASS (5)
H_AIRBT = H_2MASS + 0.16(J - H)_2MASS (6)
(J - H)_AIRBT = 0.96(J - H)_2MASS - 0.02 (7)

The J band difference shows a marginal color dependence with a linear coefficient of 0.05, while the H band color term reaches 0.16, mainly because the effective H band of AIRBT covers only the blue half of that of 2MASS. Additional variations, including the telescope, filter, and atmospheric transmission, also contribute to the color dependence. Therefore, applying the magnitude transformation is essential when combining data from the two systems.

Figure 10. The magnitude transformation between AIRBT and 2MASS for J (left), H (middle), and J - H (right), derived from sources with S/N ≥ 20. The best linear fits are shown by the red lines.
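The zero-point determination of Eq. (4) amounts to a median over high-S/N sources of m_2MASS + 2.5 log10(Flux); a minimal sketch (our own naming):

```python
import numpy as np

def zero_point(flux_adu, mag_2mass, snr, snr_min=20.0):
    """Median zero-point from Eq. (4): ZP = m_cat + 2.5*log10(flux),
    using only sources above the S/N cut (20, as in the text)."""
    sel = snr >= snr_min
    zp = mag_2mass[sel] + 2.5 * np.log10(flux_adu[sel])
    return np.median(zp), np.std(zp)   # zero-point and its scatter
```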
4.4. Photometric precision

To minimize pixel-related differences, we evaluated the photometric precision by comparing pairs of consecutive images with shifts of less than 0.1 pixels. The magnitude differences between such pairs are then primarily due to measurement error, which can be estimated as their standard deviation divided by √2. We plot the magnitude differences from all image pairs in the upper panels of Fig. 11, and the measured versus theoretical magnitude errors in the lower panels.

At the faint end, the noise is dominated by the sky background and dark current. Smaller apertures provide better precision for faint sources by collecting less noise from the sky background. The 5σ limiting magnitude is about 11.2 and 9.7 in the J and H bands, respectively, with a 2-pixel aperture. Both bands are limited in depth by the bright sky and high dark current during the tests. The sky background during the tests was approximately 1,000 e⁻ s⁻¹ pixel⁻¹ (14.7 mag arcsec⁻²) in the J band and 2,700 e⁻ s⁻¹ pixel⁻¹ (12.6 mag arcsec⁻²) in the H band. These values are several times brighter than those observed at Dome A (J. Zhang et al. 2023). Hence, AIRBT will be able to image deeper at Dome A, benefiting from the dark sky and low dark current.

For bright sources, photon noise becomes dominant, and larger apertures provide better precision by collecting more photons, since the noise is then dominated by the stellar flux. However, rather than following the ideal photon-noise predictions, the measured precision for unsaturated stars observed with a 6-pixel aperture levels off at approximately 0.02 mag. We propose that the noise for bright stars is dominated by atmospheric scintillation. The scintillation variance is approximated by A. T. Young (1967) as

σ²_sc = 10 × 10⁻⁶ D^(-4/3) t⁻¹ (cos z)⁻³ exp(-2h_obs/H) , (8)

where D is the telescope diameter (m), t is the exposure time (s), h_obs is the observatory height (15 m), z is the zenith distance, H is the atmospheric scale height (8000 m), and σ_sc is the dimensionless relative error of the flux. For our test, this yields a scintillation of about 1%. J. Osborn et al. (2015) demonstrated that this equation underestimates the median scintillation noise at several major observatories by a factor of about 1.5. This factor is likely even larger for our observational tests, due to typically stronger atmospheric turbulence than at world-class observatories. Consequently, scintillation accounts for the predominant photometric noise for bright stars in our test. By combining six images, the noise could be reduced to about 8 mmag. Notably, the scintillation at Dome A is significantly weaker, making photometric precision at the mmag level achievable for bright stars.

Figure 11. Photometric precision in the J (left) and H (right) bands. For each band, the upper panel shows magnitude differences (Δmag) between consecutive exposures, while the lower panel displays photometric errors (σ mag) binned in 0.5-mag intervals for three aperture diameters: 2, 4, and 6 pixels, corresponding to 13.8, 27.6, and 41.4 arcsec, respectively. The best photometric precision reaches 20 mmag, and the limiting magnitude is 11.2 (9.7) in the J (H) band.
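For reference, Eq. (8) can be evaluated directly. The sketch below is our own helper (with the Osborn et al. 2015 correction factor left to the caller) and reproduces the roughly 1% scintillation level for a 15 cm aperture and 3 s exposures.

```python
import math

def scintillation_sigma(D_m, t_s, zenith_deg, h_obs_m=15.0, H_m=8000.0):
    """Young (1967) scintillation estimate, Eq. (8): dimensionless relative
    flux error. Multiply by ~1.5 per Osborn et al. (2015) for a median-site
    correction."""
    cos_z = math.cos(math.radians(zenith_deg))
    var = (10e-6 * D_m ** (-4.0 / 3.0) / t_s * cos_z ** (-3.0)
           * math.exp(-2.0 * h_obs_m / H_m))
    return math.sqrt(var)

# AIRBT test conditions: D = 0.15 m, t = 3 s, moderate zenith distance.
print(f"{scintillation_sigma(0.15, 3.0, 30.0):.3f}")  # ~0.01, i.e. ~1%
```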
5. SUMMARY

We have developed AIRBT for simultaneous imaging in the J and H bands at Dome A, Antarctica. Its goals include the investigation of bright variable stars and measurements of the near-infrared sky brightness. AIRBT consists of two identical optical tube assemblies, each with a diameter of 15 cm, and each OTA is equipped with an InGaAs camera with an effective FoV of 1.22° × 0.97°.

We have tested the InGaAs cameras in detail in the laboratory and found that the middle gain mode is optimal for astronomy. In this mode, the cameras have a readout noise of 93 e⁻, a dark current of 605 e⁻ s⁻¹ pixel⁻¹, and a saturation capacity of 146 ke⁻. The nonlinearity is better than 1%, and the fraction of bad pixels is 0.04%. However, InGaAs cameras exhibit behavior different from that of CCDs. The exposure time for bias frames should be at least 10 μs, rather than the shortest available. Additional noise increases with integration time, which may bias the slope of a PTC; a PTC should therefore be obtained by fixing the exposure time and varying the light intensity, allowing an unbiased measurement of the gain.

We have evaluated the on-sky performance of AIRBT through observations in Zhuhai. Compared with 2MASS magnitudes, the J magnitude of AIRBT shows only a marginal difference, while the H magnitude shows a significant color dependence, as the cutoff wavelength of the InGaAs cameras is only 1.63 μm. For a 3 s exposure, the 5σ limiting magnitudes are 11.2 mag and 9.7 mag in the J and H bands, respectively. The photometric precision is better than 2% at the bright end and can be further improved to sub-percent levels by image stacking. Both the depth and the precision were limited by the observing environment, including a high dark current from the elevated air temperature, a bright sky background, and strong scintillation. Therefore, the photometric performance has considerable potential for improvement at Dome A.

In January 2023, CHINARE 39 deployed AIRBT at Dome A, installing it on one of the equatorial mounts of a KL-DIMM telescope (B. Ma et al. 2020) on an 8 m tower. We will release the data and report the results of sky background measurements and time-domain research in upcoming papers of this series.

ACKNOWLEDGEMENTS

B.M. acknowledges support from the National Key R&D Program of China (grant No. 2022YFC2807303). AIRBT is financially supported by Sun Yat-sen University.

REFERENCES

Ashley, M. C. B., Burton, M. G., Storey, J. W. V., et al. 1996, PASP, 108, 721
Ashley, M. C. B., Allen, G., Bonner, C. S., et al. 2010, in EAS Publications Series, Vol. 40, ed. L. Spinoglio & N. Epchtein, 79-84
Batty, K., Steele, I., & Copperwheat, C. 2022, PASP, 134, 065001
Bertin, E. 2006, in Astronomical Society of the Pacific Conference Series, Vol. 351, Astronomical Data Analysis Software and Systems XV, ed. C. Gabriel, C. Arviset, D. Ponz, & S. Enrique, 112
Bertin, E., & Arnouts, S. 1996, A&AS, 117, 393
Birch, M., Soon, J., Travouillon, T., et al. 2022, Journal of Astronomical Telescopes, Instruments, and Systems, 8, 016001
Bonner, C. S., Ashley, M. C. B., Lawrence, J. S., et al. 2008, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 7014, Ground-based and Airborne Instrumentation for Astronomy II, ed. I. S. McLean & M. M. Casali, 70146I
Bonner, C. S., Ashley, M. C. B., Cui, X., et al. 2010, PASP, 122, 1122
Burton, M. G., Ashley, M. C. B., Marks, R. D., et al. 2000, ApJ, 542, 359
Burton, M. G., Zheng, J., Mould, J., et al. 2016, PASA, 33, e047
De, K., Hankins, M. J., Kasliwal, M. M., et al. 2020, PASP, 132, 025001
Dong, Z., Ma, B., Li, J., & Zhang, H. 2024, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 13103, X-Ray, Optical, and Infrared Detectors for Astronomy XI, ed. A. D. Holland & K. Minoglou, 131031M
Durand, G. A., Tremblin, P., Minier, V., et al. 2014, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 9145, Ground-based and Airborne Telescopes V, ed. L. M. Stepp, R. Gilmozzi, & H. J. Hall, 91450D
Earley, N., Fucik, J., Fahey, L., et al. 2024, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 13096, Ground-based and Airborne Instrumentation for Astronomy X, ed. J. J. Bryant, K. Motohara, & J. R. D. Vernet, 130963M
Epchtein, N., de Batz, B., Capoani, L., et al. 1997, The Messenger, 87, 27
Frostig, D., Biscoveanu, S., Mo, G., et al. 2022, ApJ, 926, 152
Hidas, M. G., Burton, M. G., Chamberlain, M. A., & Storey, J. W. V. 2000, PASA, 17, 260
Hu, Y., Shang, Z., Ashley, M. C. B., et al. 2014, PASP, 126, 868
Hu, Y., Hu, K., Shang, Z., et al. 2019, PASP, 131, 015001
Kasliwal, M. M., Earley, N., Smith, R., et al. 2025, PASP, 137, 065001
Lawrence, A., Warren, S. J., Almaini, O., et al. 2007, MNRAS, 379, 1599
Lawrence, J. S., Ashley, M. C. B., Hengst, S., et al. 2009, Review of Scientific Instruments, 80, 064501
Li, Z.-Y., Cong, J.-N., Wu, Z.-X., et al. 2024, PASP, 136, 115002
Lourie, N. P., Baker, J. W., Burruss, R. S., et al. 2020, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 11447, Ground-based and Airborne Instrumentation for Astronomy VIII, ed. C. J. Evans, J. J. Bryant, & K. Motohara, 114479K
Ma, B., Shang, Z., Hu, Y., et al. 2014, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 9154, High Energy, Optical, and Infrared Detectors for Astronomy VI, ed. A. D. Holland & J. Beletic, 91541T
Ma, B., Shang, Z., Hu, Y., et al. 2018, MNRAS, 479, 111
Ma, B., Shang, Z., Hu, Y., et al. 2020, Nature, 583, 771
Minniti, D., Lucas, P. W., Emerson, J. P., et al. 2010, NewA, 15, 433
Moore, A. M., Kasliwal, M. M., Gelino, C. R., et al. 2016, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 9906, Ground-based and Airborne Telescopes VI, ed. H. J. Hall, R. Gilmozzi, & H. K. Marshall, 99062C
Murakawa, S., De, K., Ashley, M. C. B., et al. 2024, PASP, 136, 104501
Noll, S., Kausch, W., Barden, M., et al. 2012, A&A, 543, A92
Osborn, J., Föhring, D., Dhillon, V. S., & Wilson, R. W. 2015, MNRAS, 452, 1707
Pedersen, P. P., Queloz, D., Garcia, L., et al. 2024, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 13096, Ground-based and Airborne Instrumentation for Astronomy X, ed. J. J. Bryant, K. Motohara, & J. R. D. Vernet, 130963X
Phillips, A., Burton, M. G., Ashley, M. C. B., et al. 1999, ApJ, 527, 1009
Shang, Z. 2020, Research in Astronomy and Astrophysics, 20, 168
Shi, S.-C., Paine, S., Yao, Q.-J., et al. 2016, Nature Astronomy, 1, 0001
Simcoe, R. A., Fűrész, G., Sullivan, P. W., et al. 2019, AJ, 157, 46
Sims, G., Ashley, M. C. B., Cui, X., et al. 2012, PASP, 124, 74
Skrutskie, M. F., Cutri, R. M., Stiening, R., et al. 2006, AJ, 131, 1163
Soon, J., Adams, D., De, K., et al. 2020, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 11203, Advances in Optical Astronomical Instrumentation 2019, ed. S. C. Ellis & C. d'Orgeville, 1120307
Strausbaugh, R., Jackson, R., & Butler, N. 2018, PASP, 130, 095001
Sullivan, P. W., Croll, B., & Simcoe, R. A. 2013, PASP, 125, 1021
Sutherland, W., Emerson, J., Dalton, G., et al. 2015, A&A, 575, A25
Tosti, G., Busso, M., Nucciarelli, G., et al. 2006, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 6267, Ground-based and Airborne Telescopes, ed. L. M. Stepp, 62671H
Yang, C., Ji, T., Li, Z., et al. 2025, AJ, 169, 228
Yang, X., Shang, Z., Hu, K., et al. 2021, MNRAS, 501, 3614
Young, A. T. 1967, AJ, 72, 747
Yu, C., Li, X., Yang, B., et al. 2017, Infrared Physics and Technology, 85, 74
Zhang, J., Zhang, Y.-h., Tang, Q.-J., et al. 2023, MNRAS, 521, 5624
Zou, H., Zhou, X., Jiang, Z., et al. 2010, AJ, 140, 602
2510.14834
Improved Voltage Regulation with Optimal Design of Decentralized Volt-VAr Control

Daniel Russell, Dakota Hamilton, Mads R. Almassalkhi, and Hamid R. Ossareh

Abstract— Integration of distributed energy resources has created a need for autonomous, dynamic voltage regulation. Decentralized Volt-VAr Control (VVC) of grid-connected inverters presents a unique opportunity for voltage management but, if designed poorly, can lead to unstable behavior when in feedback with the grid. We model the grid-VVC closed-loop dynamics with a linearized power flow approach, leveraging historical data, which shows improvement over the commonly used LinDistFlow model. This model is used to design VVC slopes by minimizing steady-state voltage deviation from the nominal value, subject to a non-convex spectral radius stability constraint, which has not been previously implemented within this context. We compare this constraint to existing convex restrictions and demonstrate, through simulations on a realistic feeder, that using the spectral radius results in more effective voltage regulation.

I. INTRODUCTION

The decreasing cost of solar photovoltaics (PV) is leading to rapid deployment of this technology in the electric distribution grid [1], [2]. However, in regions where high solar PV adoption is combined with new loads like electric vehicles and other distributed energy resources (DERs), fast changes in generation and demand can cause distribution system voltages to change rapidly and significantly (as illustrated in Fig. 1). This variability in distribution grid voltages can negatively impact grid reliability [3]. Fortunately, inverter-interfaced DERs can assist in managing system voltages by injecting or absorbing reactive power at their point of connection with the grid, known as Volt-VAr Control (VVC) [4]. The objective of this paper is to present a framework for designing optimal VVC rules that not only regulate voltages effectively but also guarantee dynamic stability when interconnected with the grid.

Implementations of VVC fit into three categories: centralized, distributed, and decentralized control. Centralized VVC is often capable of providing globally optimal results (e.g., minimum voltage deviation or minimum power losses) because of its presumed knowledge of the grid at all times [5]. However, as pointed out in [6], access to data at that scale is impractical, and coordinating all devices directly exposes the grid to cybersecurity threats [7].

This material is based upon work supported by the U.S. Department of Energy's Office of Energy Efficiency and Renewable Energy (EERE) under the Solar Energy Technologies Office Award Number DE-EE0010147 and the Community Energyshed Design initiative, award number DE-EE0010407. The views expressed herein do not necessarily represent the views of the U.S. Department of Energy or the United States Government. The authors also thank Cyril Brunner of VEC and Dan Kopin of VELCO for their data sharing and continued collaboration. The authors are with the Department of Electrical and Biomedical Engineering, University of Vermont, Burlington, VT, USA {djrussel, dhamilt6, malmassa, hossareh}@uvm.edu

Fig. 1. As the proliferation of DERs increases, voltage profiles on distribution networks change faster and by larger amounts. Note that this is a cartoon for illustrative purposes only, not real data.

Fig. 2. A block diagram showing the interaction between Volt-VAr Control and the grid physics (plant model). Bold lines indicate a vector of measurements at all nodes.
To reduce the need for extensive communication networks and central computation, distributed control solutions, as part of a laminar architecture, can rely on less data and/or less frequent communication. Strategies like multi-agent reinforcement learning [8], [9] and centralized optimization with partial grid information [10] have been applied. All of these schemes still require some communication infrastructure, making them harder to implement. Due to these challenges, we focus on decentralized control, which uses devices in a network that require only local information to make control decisions.

The IEEE 1547-2018 Standard [11] specifies various decentralized voltage-control policies for grid-connected inverters. Among these, VVC is favored in this study (and many others) as it preserves inverter lifetime [12]. The IEEE Standard specifies VVC as a curve which proportionally relates the voltage at a particular node to a reactive power response. The interaction between VVC and the grid physics is shown in Fig. 2, and the control curve for this policy can be seen in Fig. 3. There has been significant study of VVC policies that do not adhere to this curve structure, such as with artificial neural networks [13] and chance-constrained optimal power flow plus machine learning [14]. Although these strategies can perform well, their lack of standard compliance limits implementability.

Even with an IEEE-compliant control curve, the system in Fig. 2 can exhibit unstable voltages: poor VVC design (i.e., steep slopes) and no filtering can cause voltage oscillation or even voltage collapse. This phenomenon has been analyzed in [15]. While that work does not address optimal design, subsequent studies by [16] and [17] do consider design, albeit using conservative restrictions of the stability constraint defined by the spectral radius. In this paper, we address this gap by using the non-convex spectral radius constraint directly.

Additionally, much of the literature proposes an incremental or "filtered" approach to VVC, where a low-pass filter is applied at the inverter output. According to [14], this approach results in guaranteed asymptotic stability of a unique equilibrium without needing restrictive or non-convex stability constraints. However, because the IEEE standard does not strictly specify an incremental approach, we choose to study the worst-case scenario with no filtering, as was done in [18].

Much of VVC modeling leverages the linear LinDistFlow (LDF) model, originally derived in [19] and used in [14], [16]–[18], [20], among many others. This model makes two key assumptions: first, that the loss term in the power flow equations is negligible, and second, that the voltage magnitude everywhere stays close to 1 per unit (p.u.). Various alternatives to this model have been proposed, including [21], which gives an analytical method for approximating the power flow, as well as the data-driven approaches discussed in [22]. We move away from LDF and develop a model derived from a data-based linearization of the power flow that is more accurate within the presented VVC context.

Accordingly, the contributions of this paper are as follows:
• A linearized power flow (LPF) model using a single historical average operating point is developed for offline design; it shows improved voltage accuracy over the LinDistFlow model.
• A novel non-incremental VVC policy design framework is presented as a non-convex optimization problem that accounts for both steady-state voltage deviations and grid-VVC stability, resulting in less conservative controllers.
• Simulation-based analysis that validates the VVC design on different feeders, including a realistic feeder in Vermont, U.S.A., and compares the approach against prior work.

The remainder of this paper is laid out as follows. Section II describes the mathematical modeling of the system in Fig. 2, while Sec. III describes various stability conditions for this system. The optimization problem for VVC design is formulated in Sec. IV. We describe and validate on a test network and dataset in Sec. V. The performance of the design framework is shown in Sec. VI, with conclusions drawn in Sec. VII.

II. SYSTEM MODELING

For optimal VVC design, we require models of both the grid physics and the VVC. We employ a power flow model of the grid that is linear with respect to power injections but does not neglect losses, as is the case with the oft-used LDF.

A. Linearized Power Flow Model

Consider a single-phase (or single-phase equivalent), radial power distribution network with n nodes (excluding the substation) and generator nodes N_g ⊆ {1, ..., n} with |N_g| =: n_g ≤ n. We assume the substation voltage magnitude and angle are fixed. The nonlinear AC power flow can be represented by

v = f_pf(p, q) ,  f_pf : R^n × R^n → R^n ,  (1)

where v ∈ R^n denotes the vector of nodal voltage magnitudes, and p = p_g − p_d and q = q_g − q_d are vectors of net (generation minus demand) active and reactive power at all nodes, respectively. Because demand and generation vary over time, the vector quantities v, p, q are all implicitly functions of time. However, since the power flow model is quasi-steady-state, the explicit dependence on time, t, is not shown for clarity of presentation. The function f_pf is the mapping between power injections and voltage magnitudes at all nodes, which is equivalent to the DistFlow formulation in [19] when the network is radial with loads as P-Q buses.¹ Although voltage phase angles are also an output of the power flow solution, we omit them from f_pf since we are primarily interested in the control of voltage magnitudes.

Using a first-order Taylor series expansion, we linearize (1) around an operating point (p_0, q_0). That is,

v ≈ f_pf(p_0, q_0) + J_p(p − p_0) + J_q(q − q_0) ,  (2)

where the Jacobian matrices are

J_p ≜ ∂f_pf/∂p |_{p_0} ,  J_q ≜ ∂f_pf/∂q |_{q_0} .  (3)

Moving forward, we will drop the symbol ≈ and use equality for simplicity of notation, though we emphasize that the voltages resulting from the linearized model should be interpreted as approximations of the true voltages. We assume that the grid-VVC dynamics reach steady state more quickly than large changes in p and q_d occur. Therefore, similar to [16], we define the constant ṽ as

ṽ ≜ f_pf(p_0, q_0) + J_p(p − p_0) − J_q(q_0 + q_d) ,  (4)

which simplifies (2) to

v = J_q q_g + ṽ .  (5)

The matrices J_p and J_q are computed by numerically approximating the derivative of the power flow f_pf with a centered finite-difference approach. That is, the j-th column of J_p is computed for all j = 1, ..., n by

J_{p,j} = [f_pf(p_0 + εe_j, q_0) − f_pf(p_0 − εe_j, q_0)] / (2ε) ,  (6)

where ε ≪ 1 (e.g., 10⁻⁶) and e_j is the j-th standard basis vector in R^n. A similar procedure is used to compute J_q.

¹A P-Q bus refers to a node modeled with constant power (P and Q) and variable voltage magnitude and angle. Therefore, in a network with all P-Q buses, the voltage can be a function of only P and Q.
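The centered-difference construction of Eq. (6) can be written compactly. The sketch below assumes a callable f_pf(p, q) wrapping a nonlinear AC solver (e.g., a MATPOWER run) and is illustrative rather than the authors' code; note it incurs the 4n power-flow solves discussed below.

```python
import numpy as np

def power_flow_jacobians(f_pf, p0, q0, eps=1e-6):
    """Centered finite-difference Jacobians per Eq. (6):
    J_p[:, j] = (f_pf(p0 + eps*e_j, q0) - f_pf(p0 - eps*e_j, q0)) / (2*eps),
    and analogously for J_q. f_pf maps (p, q) -> voltage magnitudes."""
    n = p0.size
    Jp = np.empty((n, n))
    Jq = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = eps
        Jp[:, j] = (f_pf(p0 + e, q0) - f_pf(p0 - e, q0)) / (2 * eps)
        Jq[:, j] = (f_pf(p0, q0 + e) - f_pf(p0, q0 - e)) / (2 * eps)
    return Jp, Jq
```

Since each column is independent, the 4n power-flow evaluations parallelize trivially, consistent with the remark in the text.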
Fig. 3. The IEEE prescribed standard curve with saturation limits and a deadband. We choose parameters such that the curve is a straight line through the reference voltage (V2 = V3 = Vr and VL = V1, VH = V4). Note this is a slightly modified version of Figure H.4 from [11].

Procedurally, this requires solving the power flow 4n times (2n for p_0 ± εe_j and 2n for q_0 ± εe_j), which can become computationally expensive depending on the network size. However, these computations can be parallelized and need not be updated frequently, as they are only required for establishing the design model, not for real-time control. Because the linearization takes losses into account and does not require that voltages be close to 1 p.u., it is a more accurate model of the voltages than LDF. The approximation accuracy is discussed in Sec. V-B with a case study.

B. Volt-VAr Control Rule

The VVC curve specified by the IEEE 1547 standard [11] is shown in Fig. 3. The curve may include a deadband region and saturation limits, and is parameterized by the corner points {(V_i, Q_i)}, i = 1, ..., 4, and the reference voltage V_r. In this work, we consider VVC rules without a deadband (i.e., V2 = V3 = V_r and Q2 = Q3 = 0) to simplify the analysis.² We also assume that these nodes have sufficient reactive power capability such that the saturation limits are never reached. Although this may be untrue in many real-world grids currently, technology such as [23] is coming onto the grid that significantly increases reactive power capacity. Under these assumptions, the resulting VVC curve at each node i is linear (as illustrated in Fig. 3). Also, given that VVC uses sampled data, the reactive power output of VVC at time t + 1 depends on the voltage measured at time t, where t is the discrete time index. Thus, we have

q_{g,i}[t + 1] = k_i(v_i[t] − v_{r,i}) ,  (7)

where v_i is the voltage magnitude at node i, v_{r,i} is the reference voltage (typically 1 p.u., but it can change from node to node), k_i is the slope of the linearized VVC curve, and q_{g,i} is the output reactive power. In this model, the size of the discrete time step is taken to be 1 second, per the lower limit of the IEEE 1547 standard open-loop response time.

²Inclusion of the deadband in the framework will require nonlinear control design techniques and is left as a topic of future research.

In matrix-vector notation (for the entire network), we have

q_g[t + 1] = K(v[t] − v_r) ,  (8)

where v_r ∈ R^n denotes the vector of reference voltages at each node, and K ∈ R^{n×n} is a diagonal matrix with k_i as the i-th element of the main diagonal. That is, K = diag(k), where k = [k_1, k_2, ..., k_n]^⊤. For nodes that do not have an inverter, k_i = 0. The dimension of (8) could be reduced to include only k_i ≠ 0, but we need q_g to be in R^n to model the effects of reactive power at non-generator nodes using (5).

C. Combined Grid-VVC Dynamic Model

Next, we combine the linearized power flow equations and the VVC curve to form a dynamic model of the closed-loop system shown in Fig. 2. Substituting (8) into (5), we have

v[t + 1] = J_q K v[t] − J_q K v_r + ṽ .  (9)

The equilibrium point of this discrete-time dynamical system, denoted by v*, is given by

v* = (I − J_q K)^{−1}(ṽ − J_q K v_r) ,  (10)

where I is the identity matrix. Note that the matrix I − J_q K is invertible when no eigenvalue of J_q K is 1. In Sec. III, we impose a constraint on the values of K that restricts all eigenvalues of J_q K to be strictly inside the unit disk.
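The closed loop (9) and its equilibrium (10) are straightforward to evaluate numerically. A minimal sketch (our own naming), using the 2-node J_q reported in Fig. 4 as an example with made-up load data:

```python
import numpy as np

def equilibrium(Jq, k, v_tilde, v_r):
    """Steady state of Eq. (10): v* = (I - Jq K)^(-1) (v_tilde - Jq K v_r)."""
    n = Jq.shape[0]
    A = Jq @ np.diag(k)
    return np.linalg.solve(np.eye(n) - A, v_tilde - A @ v_r)

def simulate(Jq, k, v_tilde, v_r, v0, steps=30):
    """Iterate the closed loop of Eq. (9): v[t+1] = Jq K v[t] - Jq K v_r + v_tilde."""
    A = Jq @ np.diag(k)
    c = v_tilde - A @ v_r
    v, traj = v0.copy(), [v0.copy()]
    for _ in range(steps):
        v = A @ v + c
        traj.append(v)
    return np.array(traj)

Jq = np.array([[1.5504, 1.5504], [1.5505, 1.6144]])  # 2-node Jq from Fig. 4
k = np.array([-0.2, -0.2])           # inverter slopes (non-positive)
v_r = np.ones(2)
v_tilde = np.array([1.03, 1.04])     # uncontrolled voltage, Eq. (4); made up
print(equilibrium(Jq, k, v_tilde, v_r))            # steady-state v*
print(simulate(Jq, k, v_tilde, v_r, v_tilde)[-1])  # converges to v*
```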
Under such a constraint, an equilibrium voltage solution always exists. The steady-state voltage equation (10) will be used to determine the optimal matrix K (and thus the optimal VVC rules), but first we assess the stability of the system in (9).

III. STABILITY CRITERIA

The discrete-time linear system (9) may become unstable if the VVC slopes k are chosen poorly. Thus, our goal is to select k such that the voltages in (9) are stable in time. The standard stability criterion for a discrete-time linear system x[t + 1] = Ax[t] is given by the spectral radius ρ(A) < 1. Although this is discussed in [15] with reference to VVC, the authors do not apply it in a design framework or use it on an actual linear model. Taking their work a step further, we apply it to our linear difference equation: the system (9) is asymptotically stable (A.S.) if and only if this constraint is satisfied:

ρ(J_q K) < 1 ⟺ A.S. of (9) .  (11)

This is the least conservative stability constraint possible for a system of this type, and it may be non-convex with respect to k depending on the structure of J_q. Other work has taken convex restrictions of this constraint in order to avoid computational complexity and guarantee optimality within a reduced convex set of k. In [16], an LDF grid model is leveraged with a matrix 2-norm constraint:

‖XK‖₂ < 1 ⟹ A.S. of (9) ,  (12)

where X is similar to J_q but calculated only from network impedances, without considering losses or the operating point. Eq. (12) is further restricted in [17] using Hölder's inequality:

‖XK‖∞ < 1 and ‖XK‖₁ < 1 ⟹ ‖XK‖₂ < 1 .  (13)

This constraint is unnecessarily conservative, as any induced matrix norm serves as a restriction of (11), but we study it as a linear worst-case bound from the literature. Both (12) and (13) avoid computational complexity at the cost of significantly restricting the set of feasible k values. We improve on these constraints by using the spectral radius combined with J_q from our LPF model (as opposed to X from the LDF model).

Fig. 4. Comparison of stable regions of VVC slope values (k) in a 2-node section of a real network using an LPF model where Jq = [1.5504, 1.5504; 1.5505, 1.6144]. Note that the spectral radius set is convex here, although this is not guaranteed in general; it depends on the structure of Jq.

A visual representation of these stability criteria is shown in Fig. 4 for a 2-node section of a real feeder (see Sec. V). If both nodes have inverters with non-zero entries in k, the shaded regions represent the feasible set of asymptotically stable k values. We assume a network with inductive lines, as is common in distribution grids (and is the case in this example). Therefore, we restrict k ≤ 0, as this produces a negative-feedback reactive power response that corrects the voltage towards nominal.

To the best of the authors' knowledge, the spectral radius constraint has been largely avoided thus far in the literature on designing VVC slopes. This constraint can be non-convex depending on the structure of J_q, which makes it more challenging to optimize over. Note that J_q is not necessarily symmetric, unlike X from the LDF model, which is guaranteed positive definite [16]. Intuitively, a larger |k_i| represents greater control authority and participation in voltage regulation. To choose the best values of k_i, we use optimization, as described next.

IV. OPTIMAL VVC DESIGN

Now that we have established conditions on the k values that result in stable grid-VVC dynamics, we next address how to choose the optimal k values from the stable set.
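For a candidate slope vector k, the three criteria can be checked numerically. The sketch below (our own helper) applies them to the 2-node J_q of Fig. 4 and illustrates their relative conservatism: a k that passes the spectral-radius test (11) may still violate the norm restrictions (12)-(13).

```python
import numpy as np

def stability_checks(Jq, k, eps=1e-3):
    """Evaluate the criteria of Sec. III for a candidate slope vector k.
    The norm tests are applied here to Jq itself; the LDF variants in
    (12)-(13) would use the impedance-based matrix X instead."""
    A = Jq @ np.diag(k)
    rho = np.max(np.abs(np.linalg.eigvals(A)))   # spectral radius
    n2 = np.linalg.norm(A, 2)                    # induced 2-norm
    ninf = np.linalg.norm(A, np.inf)             # max absolute row sum
    n1 = np.linalg.norm(A, 1)                    # max absolute column sum
    return {
        "spectral radius (11)": rho <= 1 - eps,
        "2-norm (12)": n2 <= 1 - eps,
        "inf- and 1-norm (13)": max(ninf, n1) <= 1 - eps,
    }

Jq = np.array([[1.5504, 1.5504], [1.5505, 1.6144]])   # from Fig. 4
print(stability_checks(Jq, np.array([-0.5, -0.1])))
# -> (11) holds while (12) and (13) fail for this k
```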
In the literature, a number of network objectives have been considered in VVC design, including network losses [24] and substation power output [25]. In this work, we study voltage regulation, similar to [16], which is critical for network reliability and flexibility. More specifically, we are interested in minimizing the deviation of the steady-state voltage magnitudes from a given reference, v_r. Thus, we define the following objective function:

min_k ‖v* − v_r‖₂² + (β/n_g) ‖k‖₂² ,  (14)

where the steady-state voltages v* are given by (10), and β is a scalar cost coefficient, normalized by n_g to ensure scalability with the number of generators. The 2-norm of the voltage deviation was selected to spread the reliability and flexibility benefits of inverter VVC across the entire network. The regularization term limits control action by penalizing large values of k, similar to ideas from LQR theory. This regularization also helps with flatness of the objective function.

Three constraints for stability of the system were presented in Sec. III. We choose to focus on the spectral radius constraint (11), which is the least conservative but most computationally expensive. As discussed in Sec. III, we also constrain each inverter slope k_i to be non-positive; this ensures compliance with the IEEE 1547 standard. Combining these constraints with the objective function (14), we obtain the following non-convex³ optimal VVC design formulation:

min_k ‖v* − v_r‖₂² + (β/n_g) ‖k‖₂² ,  (15a)
s.t. ρ(J_q K) ≤ 1 − ϵ ,  (15b)
v* = (I − J_q K)^{−1}(ṽ − J_q K v_r) ,  (15c)
K = diag(k) ,  (15d)
k_i ≤ 0 , ∀i ∈ N_g ,  (15e)
k_i = 0 , ∀i ∈ {1, ..., n} \ N_g .  (15f)

To improve numerical stability, we replace the strict inequality < 1 from (11) with ≤ 1 − ϵ in (15b), where 0 < ϵ ≪ 1.

To implement the non-convex formulation in (15), we leverage MATLAB's fmincon with the default interior-point algorithm. We utilize fmincon's flexibility and substitute (15c) into (15a), forming a non-convex objective function due to the matrix inverse and bilinearities. Other solvers like IPOPT [26] handle this by treating v* as a second (dependent) variable and keeping (15c) as an explicit non-convex equality constraint, with (I − J_q K) brought to the left side. In addition, IPOPT is unable to directly express the spectral radius constraint because of its lack of smoothness. Table I shows how the stability constraints were implemented for the different optimization solvers.

³This non-convex objective function with a possibly non-convex constraint implies there is no guarantee of global optimality. However, we found that under a broad range of initial conditions, the solution value of (15) is always lower than under the constraints in (12). Performance, as indicated by the optimal value, is more important than global optimality.

TABLE I. STABILITY CONSTRAINT IMPLEMENTATION

Constraint                        | Implemented form                                      | Solver*
ρ(JqK) < 1                        | max abs eig(JqK) ≤ 1 − ϵ                              | fmincon
‖JqK‖₂ < 1                        | (JqKx)^⊤(JqKx) ≤ 1 − ϵ and x^⊤x = 1                   | IPOPT
‖JqK‖∞ < 1 and ‖JqK‖₁ < 1         | −Jq k ≤ (1 − ϵ)1 and diag(1^⊤Jq) k ≤ (1 − ϵ)1         | IPOPT
*Indicates the solver used, not a complete list of capable solvers.
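A sketch of formulation (15) in Python, substituting (15c) into the objective as described. Here scipy's SLSQP stands in for MATLAB's fmincon, all names are our own, and, as in the paper, the nonsmooth spectral radius is imposed through the maximum eigenvalue magnitude.

```python
import numpy as np
from scipy.optimize import minimize

def design_slopes(Jq, v_tilde, v_r, gen_idx, beta=0.06, eps=1e-3):
    """Sketch of problem (15). Decision variables are the slopes k_i at
    generator nodes only; all other k_i are fixed at zero per (15f)."""
    n, ng = Jq.shape[0], len(gen_idx)

    def full_k(kg):
        k = np.zeros(n)
        k[gen_idx] = kg
        return k

    def objective(kg):                 # (15a) with (15c) substituted in
        A = Jq @ np.diag(full_k(kg))
        v_star = np.linalg.solve(np.eye(n) - A, v_tilde - A @ v_r)
        return np.sum((v_star - v_r) ** 2) + beta / ng * np.sum(kg ** 2)

    def rho(kg):                       # spectral radius of Jq K
        return np.max(np.abs(np.linalg.eigvals(Jq @ np.diag(full_k(kg)))))

    cons = [{"type": "ineq", "fun": lambda kg: (1.0 - eps) - rho(kg)}]  # (15b)
    bounds = [(None, 0.0)] * ng                                          # (15e)
    res = minimize(objective, x0=np.full(ng, -0.05), bounds=bounds,
                   constraints=cons, method="SLSQP")
    return full_k(res.x)
```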
V. TEST DATA AND MODEL VALIDATION

The solution of the optimization problem (15) provides a set of VVC slopes that are guaranteed to be stable (under the LPF model) and are optimal with respect to steady-state voltage regulation. To evaluate the performance of the proposed VVC design framework, we leverage network, load, and generation data from a real distribution feeder. Relevant details about the test case data are provided next.

A. Network, Generation, and Load Data

For this study, we use a single-phase section of a real distribution feeder in Vermont, with data provided by the Vermont Electric Cooperative. The network contains 1072 nodes (n = 1072) across medium (12.47 kV) and low (208 V) voltage levels. This feeder section contains 209 residential loads and 30 nodes with solar generation (n_g = 30). An illustration of the network can be seen in Fig. 5. While the head node of this feeder section is not the substation, it is regulated by a tap-changing voltage regulator. Thus, we assume that the voltage magnitude at this node is fixed at 1 per unit.

Real load and generation data from the entire year of 2024 are randomly split into a "training dataset" (90%) and a "test dataset" (10%). The training dataset is used to determine an average loading condition over the year, which serves as the operating point (p_0, q_0) in the LPF model (2). The test dataset provides sample loading conditions to which we apply our optimally designed VVC policy to assess its performance.

B. Validation of Linearized Power Flow (LPF) Model

To validate the accuracy and utility of the LPF model from Sec. II-A on the test network, we compare it to the nonlinear AC power flow solution and to the LDF model, the latter of which is commonplace throughout the VVC literature. First, the LPF model is parameterized from the training dataset. Specifically, the i-th element of p_0 is the average of all net active power injections at node i across all samples in the training dataset, and similarly for q_0. The Jacobian matrices J_p and J_q are calculated from (6), where the nonlinear AC power flow solver MATPOWER [27] is used to evaluate f_pf. We emphasize that the LPF model is constructed once (i.e., using a single operating point) and is applied to all loading conditions in the test dataset, rather than re-linearizing around a different operating point for each loading condition.

Fig. 5. Illustration of the single-phase section of a real feeder in Vermont on which the VVC design framework is applied and evaluated.

Fig. 6. A histogram showing the difference in nodal voltage magnitudes between the model used (Vmodel,i: LDF or LPF) and the voltage computed by MATPOWER (VMP) across the test dataset.

Figure 6 compares the voltage magnitude error at each node of the LPF and LDF models (Vmodel in the figure) with respect to the nonlinear AC power flow solutions from MATPOWER (VMP in the figure) across loading scenarios from the entire test dataset. The maximum voltage difference for LDF is 0.005 p.u., while for LPF it is 0.0031 p.u. The results indicate that the LPF is generally more accurate than LDF for these test networks and datasets. This improved accuracy can be attributed to the LPF accounting for losses and not assuming voltage magnitudes equal to 1 p.u. in the linearization process, both of which are reductive assumptions of the LDF model.

VI. CASE STUDY RESULTS

Next, we demonstrate the effectiveness of the proposed VVC design framework through a number of case studies. In particular, we compare the performance of VVC designs under different power flow models and under the stability constraints in (11)–(13). We also compare the non-incremental VVC design to an incremental approach.

A. Case Study Setup

The VVC curves are designed by solving the optimization problem (15) under different β. For the reference voltages in the objective, we consider v_r = 1_n, where 1_n denotes a vector of ones of size n, and for the stability constraints we set ϵ = 10⁻³.
A specific grid loading condition ˜v (i.e., pg, pd, qd) is needed for design. For this, we choose the largest 2-norm of voltage deviation from nominal in the training dataset, which is representative of a “worst-case” scenario. Lastly, we compare the value of both terms in (15) objective function under different β and select β = 0.06. These optimal VVC curves are applied to the inverters in the network, and their performance is evaluated through numerical simulations. Given a loading condition from the test dataset, (8) provides a new value of reactive power qg to be applied at each inverter. We use MATPOWER to solve fpf(p, q). The voltage solution of fpf is fed back into VVC and the loop continues until the voltage converges to some steady state value within a tolerance of 10−4. By solving power flow with a nonlinear AC solver as opposed to LPF, we are closer to representing how the designed inverter slopes will perform in a real-world context. The voltage deviation of the steady state voltage values can then be compared with different k’s, different models, and the baseline voltage before any control action. B. Exemplary Hours for Case Studies From the test dataset, we select four particular hours of the year which represent specific scenarios of interest: • Hour A: maximum total active power demand (which also corresponds to maximum voltage deviation using ∥v −1n∥2). • Hour B: maximum total active power generation. • Hour C: minimum single-node voltage. • Hour D: maximum single-node voltage. These four hours are representative of the range of loading conditions in the full test dataset. C. Comparing Models We first compare the performance of optimal k values under different model choices. In Fig. 7, the baseline is represented in gray. We also show a “model-free” case in purple, where all inverter slopes are set to a uniform (guaranteed stable) value. Voltage deviation from optimal slopes designed using the LPF and LDF models are shown in blue and yellow respectively. In Sec. V-B, we showed with Fig. 6 that LPF is generally a more accurate model of voltage than LDF. Despite this, the k values designed by the two models perform almost identically when correcting voltage deviation. This indicates that, under these loading conditions and this network, both LDF and LPF are suitable grid models. We also observe that designing the k values via either opti- mization problem is more effective than setting all inverters to be arbitrarily the same value when voltage deviation is high. In low voltage deviation situations, the stronger control action from optimal k is over-correcting at nodes nearby to the inverter. We expect that implementation of a deadband would eliminate this effect, which is a direction of future work. Fig. 7. A comparison of the performance of VVC under different design techniques: arbitrary uniform values, optimal with LDF, and optimal with LPF. There is no meaningful difference between LDF and LPF. The uniform values perform better under low voltage deviation (B, D) because the optimal VVC is over-correcting. The 2-norm of voltage deviation (darker wider bar) and ∞-norm of voltage deviation (lighter skinnier bar) are both shown defined as ∥v −vr∥. Fig. 8. A performance comparison of optimal k values designed using different stability criteria in place of (15b). As expected, more restrictive constraints perform worse. The 2-norm of voltage deviation (darker wider bar) and ∞-norm of voltage deviation (lighter skinnier bar) are both shown. 
It is worth noting that all three implementations of VVC will reduce voltage deviation, including if k values are set uniformly across the network. The control policy doesn’t have to be “optimal” to improve the voltage profile. D. Performance of Stability Constraints In Fig. 8, we compare the stability constraints that were discussed in Section III. To generate these results, we solve the optimization in (15), replacing (15b) with different sta- bility criteria. The optimal k values are used to simulate the control policy performance for hours A–D. Note that these slope values were all designed using the LPF model. It is not surprising that the most restrictive stability crite- (a) (b) Fig. 9. The voltage dynamics at generator nodes are shown, simulated by MATPOWER during hour A loading conditions; (a) Spectral radius constraint — stable dynamics and (b) k increased by 10% — unstable dynamics. Note that faster convergence can be guaranteed by increasing ϵ in (15b). rion performs the worst. This trend is noticeable even under relatively low voltage deviation like in hour B. In Fig. 9(a), the simulated voltage dynamics for hour A at all inverter nodes are shown, using k designed under the spectral radius stability constraint. To show instability, we increase k by 10%, shown in Fig. 9(b). These dynamics are simulated as described in Sec. VI-A. With unstable slopes for inverters, the voltage oscillates until exceeding physical limits of the grid. Voltage dynamics under the other two constraints are similar, with slightly faster convergence and less voltage regulation. They also generally have more room to increase k before becoming unstable. E. Comparison with Incremental Finally, we compare our designed non-incremental solu- tion to an incremental VVC approach. The authors in [14] prove that high gain values which would otherwise destabi- lize the grid can be stabilized using an incremental approach. To demonstrate this, we arbitrarily take our designed k values and multiply them by a factor of 25, putting them well outside of the spectral radius stability boundary. We implement a “Type-B” system from [15] with ∆T τ = 0.05. Results in [15] demonstrate that, if ∆T τ ≤1, the design framework presented here also applies to an incremental implementation.4 Simulation results of using this incremental approach can be seen in Fig. 10 as “Inc. VVC”. An incremental approach correcting the voltage so much more than our design demonstrates that the stability con- straint is a fundamental limit to voltage regulation perfor- mance. Fig. 11 further illustrates the corrective power of both control policies. This figure shows the voltage at all nodes across the entire test dataset of loading conditions with baseline, non-incremental, and incremental VVC. Voltage at each node is ordered on the x-axis according to how many nodes are between that node and the head node. We observe that, in general, incremental VVC brings voltages closer to nominal. It is worth noting that the average number 4Note that the special case of ∆T τ = 1 is the rule in (8). Fig. 10. A comparison of 4 different hours of the year under non- incremental (VVC) and incremental (Inc. VVC) control. The 2-norm of voltage deviation (darker wider bar) and ∞-norm of voltage deviation (lighter skinnier bar) are both shown. Fig. 11. Voltage at all nodes over the entire test dataset, plotted under different control schemes (no control, VVC, incremental VVC). 
Spacing on the x-axis is based on how many nodes are between the plotted node and the head node. of iterations for non-incremental to converge is 389, while incremental converges in 9.4 iterations on average with 25 times higher slopes. Some nodes are not being corrected sufficiently, even with the incremental VVC response, because the generators are too far away from affected nodes. One way to correct this is to add a generator at a particular node with high voltage deviation, re-optimize over k and re-simulate the grid. As a simple test of the effect of this, we add an inverter at the node causing significant voltage drop at depth 76. From Fig. 10 during hour C, regular VVC has a voltage deviation infinity norm of 0.065, which violates the limits set by ANSI standards (0.05). After adding an inverter at that node, the infinity norm of voltage deviation during that same hour drops to 0.040, which is well within the limits. VII. CONCLUSIONS In this study, we compared different stability constraints and grid models while finding an optimal design of inverter slopes for VVC, and evaluated our design framework using a real network with real loading conditions. We conclude that Linearized Power Flow is generally a more accurate tool for modeling voltage than LinDistFlow on a real network using real data. However, this increased accuracy does not necessarily translate to better performance when designing VVC rules. Therefore, either model is suitable for design. We also showed that a simple VVC design using uniform slopes across the network has the ability to improve voltage deviation, although not as much as optimized design. Different stability constraints were studied and we showed that although a spectral radius constraint on inverter slopes is more challenging to compute, it produces a more optimal result when used for VVC design. We also showed that an incremental approach which allows for larger slopes without stability issues often results in less voltage deviation. Future work will focus on optimizing design of incre- mental VVC, including with deadband and saturation limits. The question of controllability is also of interest, given that this system has a limited number of nodes that have active control. It would be valuable to understand where the optimal location would be to add reactive power capability. Lastly, we aim to extend these results to 3-phase networks. REFERENCES [1] H. Ritchie, M. Roser, and P. Rosado, “Data Page: Solar pho- tovoltaic module price.” https://ourworldindata.org/grapher/solar-pv- prices, 2023. [2] Solar Energy Industries Association, “5 Million Solar Installations: Powering American Communities.” https://seia.org/solar-installations/, May 2024. [3] Y. T. Tan and D. S. Kirschen, “Impact on the Power System of a Large Penetration of Photovoltaic Generation,” in 2007 IEEE Power Engineering Society General Meeting, pp. 1–8, June 2007. [4] J. Grainger and S. Civanlar, “Volt/Var Control on Distribution Systems with Lateral Branches Using Shunt Capacitors and Voltage Regulators Part I: The Overall Problem,” IEEE Transactions on Power Apparatus and Systems, vol. PAS-104, pp. 3278–3283, Nov. 1985. [5] T. Gush, C.-H. Kim, S. Admasie, J.-S. Kim, and J.-S. Song, “Optimal Smart Inverter Control for PV and BESS to Improve PV Hosting Capacity of Distribution Networks Using Slime Mould Algorithm,” IEEE Access, vol. 9, pp. 52164–52176, 2021. [6] K. Worthmann, C. M. Kellett, P. Braun, L. Gr¨une, and S. R. 
Weller, “Distributed and Decentralized Control of Residential Energy Systems Incorporating Battery Storage,” IEEE Transactions on Smart Grid, vol. 6, pp. 1914–1923, July 2015. [7] U.S. Department of Energy, “Cybersecurity Considerations for Dis- tributed Energy Resources on the U.S. Electric Grid,” tech. rep., U.S. Department of Energy, Oct. 2022. [8] H. Liu and W. Wu, “Online Multi-Agent Reinforcement Learning for Decentralized Inverter-Based Volt-VAR Control,” IEEE Transactions on Smart Grid, vol. 12, pp. 2980–2990, July 2021. [9] X. Sun and J. Qiu, “Two-Stage Volt/Var Control in Active Distribution Networks With Multi-Agent Deep Reinforcement Learning Method,” IEEE Transactions on Smart Grid, vol. 12, pp. 2903–2912, July 2021. [10] S. Gupta, V. Kekatos, and M. Jin, “Controlling Smart Inverters Using Proxies: A Chance-Constrained DNN-Based Approach,” IEEE Transactions on Smart Grid, vol. 13, pp. 1310–1321, Mar. 2022. [11] IEEE Standards Association, “IEEE Standard for Interconnection and Interoperability of Distributed Energy Resources with Associated Electric Power Systems Interfaces,” Apr. 2018. [12] A. Mohamed, G. Saadeh, and A. K. Kaviani, “Impact of smart photovoltaic inverter control modes on medium-voltage grid voltage and inverter lifetime: An experimental approach,” IET Smart Grid, vol. 6, no. 4, pp. 380–390, 2023. [13] S. Li, Y. Sun, M. Ramezani, and Y. Xiao, “Artificial Neural Networks for Volt/VAR Control of DER Inverters at the Grid Edge,” IEEE Transactions on Smart Grid, vol. 10, pp. 5564–5573, Sept. 2019. [14] A. Eggli, S. Karagiannopoulos, S. Bolognani, and G. Hug, “Stability Analysis and Design of Local Control Schemes in Active Distribution Grids,” IEEE Transactions on Power Systems, vol. 36, pp. 1900–1909, May 2021. [15] P. Jahangiri and D. C. Aliprantis, “Distributed Volt/VAr Control by PV Inverters,” IEEE Transactions on Power Systems, vol. 28, pp. 3429– 3439, Aug. 2013. [16] X. Zhou, M. Farivar, Z. Liu, L. Chen, and S. H. Low, “Reverse and Forward Engineering of Local Voltage Control in Distribution Net- works,” IEEE Transactions on Automatic Control, vol. 66, pp. 1116– 1128, Mar. 2021. [17] I. Murzakhanov, S. Gupta, S. Chatzivasileiadis, and V. Kekatos, “Optimal Design of Volt/VAR Control Rules for Inverter-Interfaced Distributed Energy Resources,” IEEE Transactions on Smart Grid, vol. 15, pp. 312–323, Jan. 2024. [18] J. Wei, S. Gupta, D. C. Aliprantis, and V. Kekatos, “A Chance- Constrained Optimal Design of Volt/VAR Control Rules for Dis- tributed Energy Resources,” July 2023. [19] M. Baran and F. Wu, “Network reconfiguration in distribution systems for loss reduction and load balancing,” IEEE Transactions on Power Delivery, vol. 4, pp. 1401–1407, Apr. 1989. [20] J. Feng, Y. Shi, G. Qu, S. H. Low, A. Anandkumar, and A. Wier- man, “Stability Constrained Reinforcement Learning for Decentralized Real-Time Voltage Control,” IEEE Transactions on Control of Network Systems, vol. 11, pp. 1370–1381, Sept. 2024. [21] S. V. Dhople, S. S. Guggilam, and Y. C. Chen, “Linear approximations to AC power flow in rectangular coordinates,” in 2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton), pp. 211–217, Sept. 2015. [22] M. Jia and G. Hug, “Overview of Data-driven Power Flow Lineariza- tion,” in 2023 IEEE Belgrade PowerTech, pp. 01–06, June 2023. [23] Ecojoule Energy, “EcoVAR Pole-Mounted Static Compensator (STAT- COM) Voltage Regulator.” https://ecojoule.com/ecovar/. [24] S. Bolognani, R. Carli, G. Cavraro, and S. 
Zampieri, “Distributed Reactive Power Feedback Control for Voltage Regulation and Loss Minimization,” IEEE Transactions on Automatic Control, vol. 60, pp. 966–981, Apr. 2015. [25] D. Hamilton, Optimization-based Operation and Control Approaches for Improving the Resilience of Electric Power Systems. Ph.D. dissertation, School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, Dec. 2023. [26] A. W¨achter and L. T. Biegler, “On the implementation of an interior- point filter line-search algorithm for large-scale nonlinear program- ming,” Mathematical Programming, vol. 106, pp. 25–57, Mar. 2006. [27] R. D. Zimmerman, C. E. Murillo-S´anchez, and R. J. Thomas, “MAT- POWER: Steady-State Operations, Planning, and Analysis Tools for Power Systems Research and Education,” IEEE Transactions on Power Systems, vol. 26, pp. 12–19, Feb. 2011.
Improved Voltage Regulation with Optimal Design of Decentralized Volt-VAr Control

Daniel Russell, Dakota Hamilton, Mads R. Almassalkhi, and Hamid R. Ossareh1

Abstract: Integration of distributed energy resources has created a need for autonomous, dynamic voltage regulation. Decentralized Volt-VAr Control (VVC) of grid-connected inverters presents a unique opportunity for voltage management but, if designed poorly, can lead to unstable behavior when in feedback with the grid. We model the grid-VVC closed-loop dynamics with a linearized power flow approach, leveraging historical data, which shows improvement over the commonly used LinDistFlow model. This model is used to design VVC slopes by minimizing steady-state voltage deviation from the nominal value, subject to a non-convex spectral radius stability constraint, which has not been previously implemented within this context. We compare this constraint to existing convex restrictions and demonstrate, through simulations on a realistic feeder, that using the spectral radius results in more effective voltage regulation.

I. INTRODUCTION

The decreasing cost of solar photovoltaics (PV) is leading to rapid deployment of this technology in the electric distribution grid [1], [2]. However, in regions where high solar PV adoption is combined with new loads like electric vehicles and other distributed energy resources (DERs), fast changes in generation and demand can cause distribution system voltages to change rapidly and significantly (as illustrated in Fig. 1). This variability in distribution grid voltages can negatively impact grid reliability [3]. Fortunately, inverter-interfaced DERs can assist in managing system voltages by injecting or absorbing reactive power at their point-of-connection with the grid, known as Volt-VAr Control (VVC) [4]. The objective of this paper is to present a framework for designing optimal VVC rules that not only regulate voltages effectively but also guarantee dynamic stability when interconnected with the grid.

Implementations of VVC fit into three categories: centralized, distributed, and decentralized control. Centralized VVC is often capable of providing globally optimal results (e.g., minimum voltage deviation or minimum power losses) because of its presumed knowledge of the grid at all times [5]. However, as pointed out in [6], access to data at that scale is impractical, and coordinating all devices directly exposes the grid to cybersecurity threats [7].

This material is based upon work supported by the U.S. Department of Energy's Office of Energy Efficiency and Renewable Energy (EERE) under the Solar Energy Technologies Office Award Number DE-EE0010147 and the Community Energyshed Design initiative, award number DE-EE0010407. The views expressed herein do not necessarily represent the views of the U.S. Department of Energy. The authors also thank Cyril Brunner of VEC and Dan Kopin of VELCO for their data sharing and continued collaboration.

1 The authors are with the {djrussel, dhamilt6, malmassa,

Fig. 1. As proliferation of DERs increases, voltage profiles on distribution networks change faster and by larger amounts. Note that this is a cartoon for illustrative purposes only, not real data.

Fig. 2. A block diagram showing the interaction between Volt-VAr Control and the grid physics (plant model). Bold lines indicate a vector of measurements at all nodes.
To reduce the need for extensive communication networks and central computations, distributed control solutions as part of a laminar architecture can rely on less data and/or less frequent communication. Strategies like multi-agent reinforcement learning [8], [9] and centralized optimization with partial grid information [10] have been applied. All of these schemes still require some communication infrastructure, making them harder to implement. Due to these challenges, we focus on decentralized control, which uses devices in a network that only require local information to make control decisions.

The IEEE 1547-2018 Standard [11] specifies various decentralized voltage-control policies for grid-connected inverters. Among these, VVC is favored in this study (and many others) as it preserves inverter lifetime [12]. The IEEE Standard specifies VVC as a curve, which proportionally relates voltage at a particular node to a reactive power response. The interaction between VVC and the grid physics is shown in Fig. 2, and the control curve for this policy can be seen in Fig. 3. There has been significant study of VVC policies that do not adhere to this curve structure, such as with artificial neural networks [13] and chance-constrained optimal power flow plus machine learning [14]. Although these strategies can perform well, lack of standard compliance limits implementability.

Even with an IEEE-compliant control curve, the system in Fig. 2 can exhibit unstable voltages: poor VVC design (i.e., steep slopes) and no filtering can cause voltage oscillation or even voltage collapse. This phenomenon has been analyzed in [15]. While that work does not address optimal design, subsequent studies by [16] and [17] do consider design, albeit using conservative restrictions of the stability constraint defined by the spectral radius. In this paper, we address the gap by using the non-convex spectral radius constraint directly.

Additionally, much of the literature proposes an incremental or "filtered" approach to VVC, where a low-pass filter is applied at the inverter output. According to [14], this approach results in guaranteed asymptotic stability of a unique equilibrium without needing restrictive or non-convex stability constraints. However, because the IEEE standard does not strictly specify an incremental approach, we choose to study the worst-case scenario with no filtering, as was done in [18].

Much of VVC modeling leverages the linear LinDistFlow (LDF) model, originally derived from [19] and used in [14], [16]–[18], [20] and many more. This model makes two key assumptions: first, that the loss term in the power flow equations is negligible, and second, that voltage magnitude everywhere stays close to 1 per unit (p.u.). Various alternatives to this model have been proposed, including [21], which shows an analytical method for approximating power flow, as well as data-driven approaches discussed in [22]. We move away from LDF and develop a model derived from a data-based linearization of the power flow that is more accurate within the presented VVC context.

Accordingly, the contributions of this paper are as follows:
• A linearized power flow (LPF) model using a single historical average operating point is developed for offline design that shows improved accuracy of voltages over the LinDistFlow model.
• A novel non-incremental VVC policy design framework is presented as a non-convex optimization problem that accounts for both steady-state voltage deviations and grid-VVC stability, resulting in less conservative controllers.
• Simulation-based analysis that validates VVC design on different feeders, including a realistic feeder in Vermont, U.S.A., and compares the approach against prior work.

The remainder of this paper is laid out as follows. Section II describes mathematical modeling of the system in Fig. 2, while Sec. III describes various stability conditions for this system. The optimization problem for VVC design is formulated in Sec. IV. We describe the test network and dataset and validate the model in Sec. V. Performance of the design framework herein is shown in Sec. VI, with conclusions drawn in Sec. VII.

II. SYSTEM MODELING

For optimal VVC design, we require models of both the grid physics and the VVC. We employ a power flow model of the grid that is linear with respect to power injections, but does not neglect losses, as is the case with the oft-used LDF.

A. Linearized Power Flow Model

Consider a single-phase (or single-phase equivalent), radial power distribution network with n nodes (excluding the substation) and generator nodes Ng ⊆ {1, . . . , n} with |Ng| =: ng ≤ n. We assume the substation voltage magnitude and angle are fixed. The nonlinear AC power flow can be represented by

v = fpf(p, q), fpf : R^n × R^n → R^n, (1)

where v ∈ R^n denotes the vector of nodal voltage magnitudes, and p = pg − pd and q = qg − qd are vectors of net (generation minus demand) active and reactive power at all nodes, respectively. Because demand and generation vary over time, the vector quantities v, p, q are all implicitly functions of time. However, since the power flow model is quasi-steady-state, the explicit dependence on time, t, is not shown for clarity of presentation. The function fpf is the mapping between power injections and voltage magnitudes at all nodes, which is equivalent to the DistFlow formulation in [19] when the network is radial with loads as P-Q buses.1 Although voltage phase angles are also an output of the power flow solution, we choose to omit them from fpf since we are primarily interested in the control of voltage magnitudes.

Using a first-order Taylor series expansion, we linearize (1) around an operating point (p0, q0). That is,

v ≈ fpf(p0, q0) + Jp(p − p0) + Jq(q − q0), (2)

where the Jacobian matrices are

Jp ≜ ∂fpf/∂p |_{p0}, Jq ≜ ∂fpf/∂q |_{q0}. (3)

Moving forward, we will drop the symbol ≈ and use equality for simplicity of notation, though we emphasize that the voltages resulting from the linearized model should be interpreted as approximations of the true voltages. We assume that the grid-VVC dynamics reach steady state more quickly than large changes in p and qd. Therefore, similar to [16], we define the constant ṽ as

ṽ ≜ fpf(p0, q0) + Jp(p − p0) − Jq(q0 + qd), (4)

which simplifies (2) to

v = Jq qg + ṽ. (5)

Matrices Jp and Jq are computed using numerical approximation of the derivative of the power flow in fpf through a centered finite difference approach. That is, the j-th column of Jp is computed for all j = 1, . . . , n by

Jp,j = [fpf(p0 + ε ej, q0) − fpf(p0 − ε ej, q0)] / (2ε), (6)

where ε ≪ 1 (e.g., 10^{-6}), and ej is the j-th standard basis vector in R^n. A similar procedure is used to compute Jq.

1 P-Q bus refers to a node modeled with a constant power (P and Q) and variable voltage magnitude and angle. Therefore, in a network with all P-Q buses, the voltage can be a function of only P and Q.
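The finite-difference construction in (6) is simple to implement. Below is a minimal Python sketch, assuming the power flow solver is wrapped as a callable f_pf(p, q) returning nodal voltage magnitudes (the paper uses MATPOWER in this role); it performs the 4n power flow solves noted below.

```python
import numpy as np

def jacobian_fd(f_pf, p0, q0, eps=1e-6):
    """Centered finite-difference Jacobians Jp, Jq of v = f_pf(p, q), per Eq. (6)."""
    n = p0.size
    Jp, Jq = np.zeros((n, n)), np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = 1.0
        # j-th columns: perturb a single injection and re-solve the power flow
        Jp[:, j] = (f_pf(p0 + eps * e, q0) - f_pf(p0 - eps * e, q0)) / (2 * eps)
        Jq[:, j] = (f_pf(p0, q0 + eps * e) - f_pf(p0, q0 - eps * e)) / (2 * eps)
    return Jp, Jq
```

Since the columns are independent, the loop parallelizes trivially, which matches the observation below that these computations are offline and parallelizable.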
Procedurally, this requires solving the power flow 4n times (2n for p0 ± ε ej and 2n for q0 ± ε ej), which could become computationally expensive depending on network size. However, these computations can be parallelized and need not be updated frequently, as they are only required for establishing the model for design, not for real-time control. Because the linearization takes into account losses and does not require that voltages be close to 1 p.u., it is a more accurate model of voltages than LDF. Approximation accuracy is discussed in Sec. V-B with a case study.

Fig. 3. The IEEE prescribed standard curve with saturation limits and a deadband is shown. We choose parameters such that the curve is a straight line through the reference voltage (V2 = V3 = Vr and VL = V1, VH = V4). Note this is a slightly modified version of Figure H.4 from [11].

B. Volt-VAr Control Rule

The VVC curve specified by the IEEE-1547 standard [11] is shown in Fig. 3. The curve may include a deadband region and saturation limits, and is parameterized by the corner points {(Vi, Qi)}_{i=1}^{4} and the reference voltage Vr. In this work, we consider VVC rules which do not include a deadband (i.e., V2 = V3 = Vr and Q2 = Q3 = 0), to simplify the analysis.2 We also assume that these nodes have sufficient reactive power capability such that saturation limits are never reached. Although this may be untrue in many real-world grids currently, technology such as [23] is coming onto the grid that significantly increases reactive power capacity. Under these assumptions, the resulting VVC curve at each node i is linear (as illustrated in Fig. 3). Also, given that VVC uses sampled data, the reactive power output of VVC at time t + 1 is dependent on the voltage measured at time t, where t is the discrete time index. Thus, we have

qg,i[t + 1] = ki (vi[t] − vr,i), (7)

where vi is the voltage magnitude at node i, vr,i is the reference voltage (typically 1 p.u. but can change node to node), ki is the slope of the linearized VVC curve, and qg,i is the output reactive power. In this model, the size of the discrete time step t is considered to be 1 second, per the lower limit of the IEEE-1547 standard open-loop response time.

2 Inclusion of the deadband in the framework will require nonlinear control design techniques and is left as a topic of future research.

In matrix-vector notation (for the entire network), we have

qg[t + 1] = K(v[t] − vr), (8)

where vr ∈ R^n denotes the vector of reference voltages at each node, and K ∈ R^{n×n} is a diagonal matrix with ki on the i-th element of the main diagonal. That is, K = diag(k), where k = [k1, k2, . . . , kn]⊤. For nodes that do not have an inverter, ki = 0. The dimension of (8) could be reduced to include only ki ≠ 0, but we will need qg to be in R^n to model effects of reactive power at non-generator nodes using (5).

C. Combined Grid-VVC Dynamic Model

Next, we combine the linearized power flow equations and the VVC curve to form a dynamic model of the closed-loop system, as shown in Fig. 2. Substituting (8) into (5), we have

v[t + 1] = JqK v[t] − JqK vr + ṽ. (9)

The equilibrium point of this discrete-time dynamical system, denoted by v*, is given by:

v* = (I − JqK)^{-1}(ṽ − JqK vr), (10)

where I is the identity matrix. Note that the matrix I − JqK is invertible when no eigenvalue of JqK is 1. In Sec. III, we will impose a constraint on the values of K that restricts all eigenvalues of JqK to be strictly inside of the unit disk. Therefore, an equilibrium voltage solution will always exist. The steady-state voltage equation (10) will be used to determine the optimal matrix K (and thus, the optimal VVC rules), but first we assess the stability of the system in (9).
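To make the closed-loop behavior concrete, here is a toy simulation of (9) and its fixed point (10), using the 2-node Jq reported in Fig. 4 of Sec. III; the slope values and the no-control voltages ṽ are hypothetical, chosen only so that the iteration is stable.

```python
import numpy as np

Jq = np.array([[1.5504, 1.5504],
               [1.5505, 1.6144]])      # 2-node example from Fig. 4
k = np.array([-0.2, -0.2])             # hypothetical stable slopes (k <= 0)
vr = np.ones(2)                        # nominal reference voltages
v_tilde = np.array([1.03, 1.04])       # hypothetical no-control voltages

A = Jq @ np.diag(k)
b = v_tilde - A @ vr
v = v_tilde.copy()
for _ in range(50):                    # iterate the dynamics of Eq. (9)
    v = A @ v + b
v_star = np.linalg.solve(np.eye(2) - A, b)   # closed form, Eq. (10)
print(v, v_star)                       # the iteration settles at v*
```

With these slopes, ρ(JqK) ≈ 0.63 < 1, so the iterates converge to v*; making the slopes steep enough to push ρ(JqK) past 1 makes the same loop oscillate and diverge.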
III. STABILITY CRITERIA

The discrete-time linear system (9) may become unstable if the VVC slopes k are chosen poorly. Thus, our goal is to select k such that voltages in (9) are stable in time. The standard stability criterion for a discrete-time linear system x[t + 1] = Ax[t] is given by the spectral radius ρ(A) < 1. Although this is discussed in [15] with reference to VVC, they do not apply it in a design framework or use it on an actual linear model. Taking their work a step further, we apply it to our linear difference equation to say that (9) is asymptotically stable (A.S.) if this constraint is satisfied:

ρ(JqK) < 1 ⟺ A.S. of (9). (11)

This is the least conservative stability constraint possible for a system of this type, and it may be non-convex with respect to k depending on the structure of Jq. Other work has taken a convex restriction of this constraint in order to avoid computational complexity and guarantee optimality within their reduced convex set of k. In [16], an LDF grid model is leveraged with a matrix 2-norm constraint:

∥XK∥2 < 1 ⟹ A.S. of (9), (12)

where X is similar to Jq but calculated only from network impedances, without considering losses or operating point. Eq. (12) is further restricted in [17] using Hölder's inequality:

∥XK∥∞ < 1 and ∥XK∥1 < 1 ⟹ ∥XK∥2 < 1. (13)

This constraint is unnecessarily conservative, as any induced matrix norm serves as a restriction of (11), but we study it as a linear worst-case bound from the literature. Both (12) and (13) avoid computational complexity at the cost of significantly restricting the set of feasible k values. We improve on these constraints by using the spectral radius combined with Jq from our LPF model (as opposed to X from the LDF model).

Fig. 4. Comparison of stable regions of VVC slope values (k) in a 2-node section of a real network using an LPF model where Jq = [1.5504, 1.5504; 1.5505, 1.6144]. Note that the spectral radius set is convex here, although this is not guaranteed depending on the structure of Jq.

A visual representation of these stability criteria is shown in Fig. 4 on a 2-node section of a real feeder (see Sec. V). If both nodes have inverters with non-zero values in k, the shaded regions represent the feasible set of asymptotically stable k values. We assume a network with inductive lines, as is common in distribution grids (and the case in this example). Therefore, we restrict k ≤ 0, as this will produce a negative-feedback q response that corrects voltage towards nominal.

To the best of the authors' knowledge, the spectral radius constraint has been largely avoided thus far in the literature for designing VVC slopes. This constraint can be non-convex depending on the structure of Jq, which makes it more challenging to optimize over. Note that Jq is not necessarily symmetric, unlike X from the LDF model, which is guaranteed positive definite [16]. Intuitively, a larger |ki| represents greater control authority and participation in voltage regulation. To choose the best values of ki, we use optimization, which is described next.
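The conservatism gap between (11) and its norm restrictions is easy to exhibit numerically. The sketch below evaluates the spectral radius and the induced norms of JqK on the 2-node Jq above; the norms are applied to JqK, matching the implemented forms listed later in Table I, and the slope values are hypothetical.

```python
import numpy as np

Jq = np.array([[1.5504, 1.5504],
               [1.5505, 1.6144]])               # from Fig. 4

def stability_report(k):
    A = Jq @ np.diag(k)
    return {
        "rho":      max(abs(np.linalg.eigvals(A))),   # criterion (11)
        "norm_2":   np.linalg.norm(A, 2),             # restriction as in (12)
        "norm_inf": np.linalg.norm(A, np.inf),        # restrictions as in (13)
        "norm_1":   np.linalg.norm(A, 1),
    }

# Uneven slopes: stable by the spectral radius test, yet rejected by
# the 2-norm and 1-norm restrictions.
print(stability_report([-0.6, -0.02]))
# -> rho ~ 0.96 (< 1), norm_2 ~ 1.3, norm_1 ~ 1.86 (both > 1)
```

Slope vectors like this one are exactly what the convex restrictions exclude, which is why designing against (11) directly enlarges the feasible set.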
IV. OPTIMAL VVC DESIGN

Now that we have established conditions on k values that result in stable grid-VVC dynamics, we next address how to choose the optimal k values from the stable set. In the literature, a number of network objectives have been considered in VVC design, including network losses [24] and substation power output [25]. In this work, we study voltage regulation, similar to [16], which is critical for network reliability and flexibility. More specifically, we are interested in minimizing the deviation in steady-state voltage magnitudes from a given reference, vr. Thus, we define the following objective function:

min_k ∥v* − vr∥_2^2 + (β/ng) ∥k∥_2^2, (14)

where the steady-state voltages v* are given by (10) and β is a scalar cost coefficient, normalized by ng to ensure scalability with the number of generators. The 2-norm of voltage deviation was selected to spread the reliability and flexibility benefits of inverter VVCs across the entire network. The regularization term limits control action by penalizing large values of k, similar to ideas from LQR theory. This regularization also mitigates flatness of the objective function.

Three constraints for stability of the system were presented in Sec. III. We choose to focus on the spectral radius constraint (11), the least conservative but most computationally expensive. As discussed in Sec. III, we also constrain each inverter slope ki to be non-positive. This ensures IEEE-1547 standard compliance. Combining these constraints with the objective function (14), we obtain the following non-convex3 optimal VVC design formulation:

min_k ∥v* − vr∥_2^2 + (β/ng) ∥k∥_2^2 (15a)
s.t. ρ(JqK) ≤ 1 − ε, (15b)
v* = (I − JqK)^{-1}(ṽ − JqK vr), (15c)
K = diag(k), (15d)
ki ≤ 0, ∀i ∈ Ng, (15e)
ki = 0, ∀i ∈ {1, . . . , n} \ Ng. (15f)

To improve numerical stability, we replace < 1 from (11) with ≤ 1 − ε in (15b), where 0 < ε ≪ 1.

In order to implement the non-convex formulation in (15), we leverage MATLAB's fmincon with the default interior-point algorithm. We utilize fmincon's flexibility and substitute (15c) into (15a), forming a non-convex objective function due to the matrix inverse and bilinearities. Other solvers like IPOPT [26] handle this by treating v* as a second (dependent) variable and keeping (15c) as an explicit non-convex equality constraint, with (I − JqK) brought to the left side. In addition, IPOPT is unable to directly express the spectral radius constraint because of its lack of smoothness. Table I shows how the stability constraints were implemented for the different optimization solvers.

3 This non-convex objective function with a possibly non-convex constraint implies there is no guarantee of global optimality. However, we found that under a broad range of initial conditions, the solution value of (15) is always lower than under the constraints in (12). Performance, as indicated by the optimal value, is more important than global optimality.

TABLE I. STABILITY CONSTRAINT IMPLEMENTATION
Constraint: ρ(JqK) < 1 | Implemented form: max |eig(JqK)| ≤ 1 − ε | Solver(a): fmincon
Constraint: ∥JqK∥2 < 1 | Implemented form: (JqKx)⊤(JqKx) ≤ 1 − ε and x⊤x = 1 | Solver: IPOPT
Constraint: ∥JqK∥∞ < 1 and ∥JqK∥1 < 1 | Implemented form: −Jq k ≤ (1 − ε)1 and −diag(1⊤Jq) k ≤ (1 − ε)1 | Solver: IPOPT
(a) Indicates the solver used, not a complete list of capable solvers.
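For readers who prefer Python, the fmincon setup above translates directly to scipy. The sketch below is a minimal analogue on the 2-node example, with (15c) substituted into the objective exactly as described; ṽ is hypothetical, and β = 0.06 and ε = 10^{-3} follow the values used later in Sec. VI-A. As with fmincon, a smooth NLP method is applied to a nonsmooth spectral-radius constraint, which works numerically but carries no global guarantee.

```python
import numpy as np
from scipy.optimize import minimize

Jq = np.array([[1.5504, 1.5504],
               [1.5505, 1.6144]])        # 2-node example (Fig. 4)
v_tilde = np.array([1.03, 1.04])         # hypothetical loading condition
vr = np.ones(2)
beta, ng, eps = 0.06, 2, 1e-3

def objective(k):                        # (15a) with (15c) substituted in
    A = Jq @ np.diag(k)
    v_star = np.linalg.solve(np.eye(2) - A, v_tilde - A @ vr)
    return np.sum((v_star - vr) ** 2) + (beta / ng) * np.sum(k ** 2)

spectral = {"type": "ineq",              # (15b): 1 - eps - rho(JqK) >= 0
            "fun": lambda k: 1 - eps - max(abs(np.linalg.eigvals(Jq @ np.diag(k))))}

res = minimize(objective, x0=np.array([-0.1, -0.1]), method="SLSQP",
               bounds=[(None, 0.0)] * 2,  # (15e): k <= 0
               constraints=[spectral])
print(res.x, res.fun)
```

On the full 1072-node feeder, k would carry one entry per node with (15f) enforced by fixing non-generator entries to zero, but the structure of the problem is unchanged.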
V. TEST DATA AND MODEL VALIDATION

The solution of the optimization problem (15) provides a set of VVC slopes that are guaranteed to be stable (under the LPF model) and are optimal with respect to steady-state voltage regulation. In order to evaluate the performance of the proposed VVC design framework, we leverage network, load, and generation data from a real distribution feeder. Relevant details about the test case data are provided next.

A. Network, Generation, and Load Data

For this study, we use a single-phase section of a real distribution feeder in Vermont, with data provided by the Vermont Electric Cooperative. The network contains 1072 nodes (n = 1072) across medium (12.47 kV) and low (208 V) voltage levels. This feeder section contains 209 residential loads and 30 nodes with solar generation (ng = 30). An illustration of the network can be seen in Fig. 5. While the head node of this feeder section is not the substation, it is regulated by a tap-changing voltage regulator. Thus, we assume that the voltage magnitude at this node is fixed at 1 per unit.

Real load and generation data from the entire year of 2024 are randomly split into a "training dataset" (90%) and "test dataset" (10%). The training dataset is used to determine an average loading condition over the year, which serves as the operating point (p0, q0) in the LPF model (2). The test dataset is used as sample loading conditions to which we apply our optimally designed VVC policy to assess its performance.

B. Validation of Linearized Power Flow (LPF) Model

To validate the accuracy and utility of the LPF model from Sec. II-A for the test network, we compare it to the nonlinear AC power flow solution and the LDF model, the latter of which is commonplace throughout the VVC literature. First, the LPF model is parameterized based on the training dataset. Specifically, the i-th element of p0 is the average of all net active power injections at node i across all samples in the training dataset, and similarly for q0. The Jacobian matrices Jp and Jq are calculated from (6), where the nonlinear AC power flow solver MATPOWER [27] is used to evaluate fpf. We emphasize that the LPF model is constructed once (i.e., using a single operating point) and is applied to all loading conditions in the test dataset, rather than re-linearizing around a different operating point for each loading condition.

Fig. 5. Illustration of the single-phase section of a real feeder in Vermont on which the VVC design framework is applied and evaluated.

Fig. 6. A histogram showing the difference in nodal voltage magnitudes between the model used (Vmodel,i; LDF or LPF) and the voltage computed by MATPOWER (VMP) across the test dataset.

Figure 6 compares the voltage magnitude error at each node of the LPF and LDF models (Vmodel in the figure) with respect to the nonlinear AC power flow solutions from MATPOWER (VMP in the figure) across loading scenarios from the entire test dataset. The maximum voltage difference for LDF is 0.005 p.u., while for LPF it is 0.0031 p.u. The results indicate that the LPF is generally more accurate than LDF for these test networks and datasets. This improved accuracy can be attributed to LPF accounting for losses and not assuming voltage magnitudes are equal to 1 p.u. through the linearization process, both of which are reductive assumptions in the LDF model.
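The validation procedure of Sec. V-B reduces to a few lines given the pieces already sketched. The snippet below assumes the f_pf wrapper and jacobian_fd from the Sec. II-A sketch, plus training and test injection arrays of shape (num_samples, n); it parameterizes the operating point from training means and reports LPF error statistics against the nonlinear solution.

```python
import numpy as np

def validate_lpf(f_pf, p_train, q_train, p_test, q_test):
    # average loading condition over the training year -> operating point
    p0, q0 = p_train.mean(axis=0), q_train.mean(axis=0)
    Jp, Jq = jacobian_fd(f_pf, p0, q0)   # from the Sec. II-A sketch
    v0 = f_pf(p0, q0)
    errors = []
    for p, q in zip(p_test, q_test):
        v_lpf = v0 + Jp @ (p - p0) + Jq @ (q - q0)   # Eq. (2), built once
        errors.append(np.abs(v_lpf - f_pf(p, q)))    # per-node |error|
    errors = np.concatenate(errors)
    return errors.max(), errors.mean()
```

The maximum returned here corresponds to the 0.0031 p.u. figure reported above for the Vermont feeder.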
VI. CASE STUDY RESULTS

Next, we demonstrate the effectiveness of the proposed VVC design framework through a number of case studies. In particular, we compare the performance of VVC designs under different power flow models and stability constraints in (11)–(13). We also compare non-incremental VVC design to an incremental approach.

A. Case Study Setup

The VVC curves are designed by solving the optimization problem (15) under different β. For reference voltages in the objective, we consider vr = 1n, where 1n denotes a vector of ones of size n, and for the stability constraints, we set ε = 10^{-3}. A specific grid loading condition ṽ (i.e., pg, pd, qd) is needed for design. For this, we choose the one with the largest 2-norm of voltage deviation from nominal in the training dataset, which is representative of a "worst-case" scenario. Lastly, we compare the value of both terms of the objective function in (15) under different β and select β = 0.06.

These optimal VVC curves are applied to the inverters in the network, and their performance is evaluated through numerical simulations. Given a loading condition from the test dataset, (8) provides a new value of reactive power qg to be applied at each inverter. We use MATPOWER to solve fpf(p, q). The voltage solution of fpf is fed back into the VVC, and the loop continues until the voltage converges to some steady-state value within a tolerance of 10^{-4}. By solving the power flow with a nonlinear AC solver as opposed to LPF, we are closer to representing how the designed inverter slopes will perform in a real-world context. The deviation of the steady-state voltages can then be compared across different k values, different models, and the baseline voltage before any control action.

B. Exemplary Hours for Case Studies

From the test dataset, we select four particular hours of the year which represent specific scenarios of interest:
• Hour A: maximum total active power demand (which also corresponds to maximum voltage deviation using ∥v − 1n∥2).
• Hour B: maximum total active power generation.
• Hour C: minimum single-node voltage.
• Hour D: maximum single-node voltage.
These four hours are representative of the range of loading conditions in the full test dataset.

C. Comparing Models

We first compare the performance of optimal k values under different model choices. In Fig. 7, the baseline is represented in gray. We also show a "model-free" case in purple, where all inverter slopes are set to a uniform (guaranteed stable) value. Voltage deviations from optimal slopes designed using the LPF and LDF models are shown in blue and yellow, respectively. In Sec. V-B, we showed with Fig. 6 that LPF is generally a more accurate model of voltage than LDF. Despite this, the k values designed by the two models perform almost identically when correcting voltage deviation. This indicates that, under these loading conditions and this network, both LDF and LPF are suitable grid models.

We also observe that designing the k values via either optimization problem is more effective than setting all inverters to an arbitrary uniform value when voltage deviation is high. In low voltage deviation situations, the stronger control action from optimal k is over-correcting at nodes near the inverter. We expect that implementation of a deadband would eliminate this effect, which is a direction of future work.

Fig. 7. A comparison of the performance of VVC under different design techniques: arbitrary uniform values, optimal with LDF, and optimal with LPF. There is no meaningful difference between LDF and LPF. The uniform values perform better under low voltage deviation (B, D) because the optimal VVC is over-correcting. The 2-norm of voltage deviation (darker, wider bar) and ∞-norm of voltage deviation (lighter, skinnier bar) are both shown, defined as ∥v − vr∥.

Fig. 8. A performance comparison of optimal k values designed using different stability criteria in place of (15b). As expected, more restrictive constraints perform worse. The 2-norm of voltage deviation (darker, wider bar) and ∞-norm of voltage deviation (lighter, skinnier bar) are both shown.
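The evaluation loop of Sec. VI-A, with rule (8) in feedback with a nonlinear solver, can be sketched as follows; f_pf again stands in for the MATPOWER wrapper, and the tolerance matches the 10^{-4} used above.

```python
import numpy as np

def simulate_vvc(f_pf, k, p, q_d, vr, tol=1e-4, max_iter=1000):
    """Iterate rule (8) against the nonlinear AC power flow until the
    voltage profile settles (Sec. VI-A evaluation loop)."""
    K = np.diag(k)
    q_g = np.zeros_like(q_d)
    v = f_pf(p, q_g - q_d)               # baseline, before any control
    for _ in range(max_iter):
        q_g = K @ (v - vr)               # inverter response, Eq. (8)
        v_next = f_pf(p, q_g - q_d)      # grid response via AC power flow
        if np.max(np.abs(v_next - v)) < tol:
            return v_next
        v = v_next
    return v
```

Comparing ∥v − vr∥ for the returned profile against the baseline reproduces the bar comparisons of Figs. 7 and 8.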
It is worth noting that all three implementations of VVC will reduce voltage deviation, including if k values are set uniformly across the network. The control policy doesn't have to be "optimal" to improve the voltage profile.

D. Performance of Stability Constraints

In Fig. 8, we compare the stability constraints that were discussed in Section III. To generate these results, we solve the optimization in (15), replacing (15b) with different stability criteria. The optimal k values are used to simulate the control policy performance for hours A–D. Note that these slope values were all designed using the LPF model. It is not surprising that the most restrictive stability criterion performs the worst. This trend is noticeable even under relatively low voltage deviation, like in hour B.

In Fig. 9(a), the simulated voltage dynamics for hour A at all inverter nodes are shown, using k designed under the spectral radius stability constraint. To show instability, we increase the magnitude of k by 10%, shown in Fig. 9(b). These dynamics are simulated as described in Sec. VI-A. With unstable slopes for the inverters, the voltage oscillates until exceeding the physical limits of the grid. Voltage dynamics under the other two constraints are similar, with slightly faster convergence and less voltage regulation. They also generally have more room to increase k before becoming unstable.

Fig. 9. The voltage dynamics at generator nodes are shown, simulated by MATPOWER during hour A loading conditions; (a) spectral radius constraint, stable dynamics, and (b) k increased by 10%, unstable dynamics. Note that faster convergence can be guaranteed by increasing ε in (15b).

E. Comparison with Incremental

Finally, we compare our designed non-incremental solution to an incremental VVC approach. The authors in [14] prove that high gain values which would otherwise destabilize the grid can be stabilized using an incremental approach. To demonstrate this, we arbitrarily take our designed k values and multiply them by a factor of 25, putting them well outside of the spectral radius stability boundary. We implement a "Type-B" system from [15] with ∆T/τ = 0.05. Results in [15] demonstrate that, if ∆T/τ ≤ 1, the design framework presented here also applies to an incremental implementation.4 Simulation results of using this incremental approach can be seen in Fig. 10 as "Inc. VVC". That the incremental approach corrects the voltage so much more than our design demonstrates that the stability constraint is a fundamental limit to voltage regulation performance.

Fig. 11 further illustrates the corrective power of both control policies. This figure shows the voltage at all nodes across the entire test dataset of loading conditions with baseline, non-incremental, and incremental VVC. Voltage at each node is ordered on the x-axis according to how many nodes are between that node and the head node. We observe that, in general, incremental VVC brings voltages closer to nominal.

4 Note that the special case of ∆T/τ = 1 is the rule in (8).

Fig. 10. A comparison of 4 different hours of the year under non-incremental (VVC) and incremental (Inc. VVC) control. The 2-norm of voltage deviation (darker, wider bar) and ∞-norm of voltage deviation (lighter, skinnier bar) are both shown.

Fig. 11. Voltage at all nodes over the entire test dataset, plotted under different control schemes (no control, VVC, incremental VVC). Spacing on the x-axis is based on how many nodes are between the plotted node and the head node.
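The paper does not reproduce the "Type-B" update from [15] explicitly; a common low-pass-filtered form consistent with footnote 4 (setting ∆T/τ = 1 recovers the non-incremental rule (8)) is sketched below. The exact filter in [15] may differ in detail.

```python
import numpy as np

def incremental_vvc_step(q_g, v, k, vr, dt_over_tau=0.05):
    """One filtered VVC update; dt_over_tau = 1 reduces this to rule (8)."""
    target = np.diag(k) @ (v - vr)       # where rule (8) would jump to
    return (1 - dt_over_tau) * q_g + dt_over_tau * target
```

Because each step only moves a fraction ∆T/τ toward the static curve, even the 25x-scaled slopes remain stable; as noted next, the filtered loop in fact converges in far fewer iterations than the non-incremental one.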
It is worth noting that the average number of iterations for the non-incremental design to converge is 389, while the incremental design converges in 9.4 iterations on average with 25 times higher slopes.

Some nodes are not being corrected sufficiently, even with the incremental VVC response, because the generators are too far away from the affected nodes. One way to correct this is to add a generator at a particular node with high voltage deviation, re-optimize over k, and re-simulate the grid. As a simple test of the effect of this, we add an inverter at the node causing the significant voltage drop at depth 76. From Fig. 10, during hour C, regular VVC has a voltage deviation infinity norm of 0.065, which violates the limits set by ANSI standards (0.05). After adding an inverter at that node, the infinity norm of voltage deviation during that same hour drops to 0.040, which is well within the limits.

VII. CONCLUSIONS

In this study, we compared different stability constraints and grid models while finding an optimal design of inverter slopes for VVC, and evaluated our design framework using a real network with real loading conditions. We conclude that Linearized Power Flow is generally a more accurate tool for modeling voltage than LinDistFlow on a real network using real data. However, this increased accuracy does not necessarily translate to better performance when designing VVC rules. Therefore, either model is suitable for design. We also showed that a simple VVC design using uniform slopes across the network can reduce voltage deviation, although not as much as an optimized design. Different stability constraints were studied, and we showed that although a spectral radius constraint on inverter slopes is more challenging to compute, it produces a better result when used for VVC design. We also showed that an incremental approach, which allows for larger slopes without stability issues, often results in less voltage deviation.

Future work will focus on optimizing the design of incremental VVC, including with deadband and saturation limits. The question of controllability is also of interest, given that this system has a limited number of nodes that have active control. It would be valuable to understand where the optimal location would be to add reactive power capability. Lastly, we aim to extend these results to 3-phase networks.

REFERENCES

[1] H. Ritchie, M. Roser, and P. Rosado, "Data Page: Solar photovoltaic module price." https://ourworldindata.org/grapher/solar-pv-prices, 2023.
[2] Solar Energy Industries Association, "5 Million Solar Installations: Powering American Communities." https://seia.org/solar-installations/, May 2024.
[3] Y. T. Tan and D. S. Kirschen, "Impact on the Power System of a Large Penetration of Photovoltaic Generation," in 2007 IEEE Power Engineering Society General Meeting, pp. 1–8, June 2007.
[4] J. Grainger and S. Civanlar, "Volt/Var Control on Distribution Systems with Lateral Branches Using Shunt Capacitors and Voltage Regulators Part I: The Overall Problem," IEEE Transactions on Power Apparatus and Systems, vol. PAS-104, pp. 3278–3283, Nov. 1985.
[5] T. Gush, C.-H. Kim, S. Admasie, J.-S. Kim, and J.-S. Song, "Optimal Smart Inverter Control for PV and BESS to Improve PV Hosting Capacity of Distribution Networks Using Slime Mould Algorithm," IEEE Access, vol. 9, pp. 52164–52176, 2021.
[6] K. Worthmann, C. M. Kellett, P. Braun, L. Grüne, and S. R. Weller, "Distributed and Decentralized Control of Residential Energy Systems Incorporating Battery Storage," IEEE Transactions on Smart Grid, vol. 6, pp. 1914–1923, July 2015.
[7] U.S. Department of Energy, "Cybersecurity Considerations for Distributed Energy Resources on the U.S. Electric Grid," tech. rep., U.S. Department of Energy, Oct. 2022.
[8] H. Liu and W. Wu, "Online Multi-Agent Reinforcement Learning for Decentralized Inverter-Based Volt-VAR Control," IEEE Transactions on Smart Grid, vol. 12, pp. 2980–2990, July 2021.
[9] X. Sun and J. Qiu, "Two-Stage Volt/Var Control in Active Distribution Networks With Multi-Agent Deep Reinforcement Learning Method," IEEE Transactions on Smart Grid, vol. 12, pp. 2903–2912, July 2021.
[10] S. Gupta, V. Kekatos, and M. Jin, "Controlling Smart Inverters Using Proxies: A Chance-Constrained DNN-Based Approach," IEEE Transactions on Smart Grid, vol. 13, pp. 1310–1321, Mar. 2022.
[11] IEEE Standards Association, "IEEE Standard for Interconnection and Interoperability of Distributed Energy Resources with Associated Electric Power Systems Interfaces," Apr. 2018.
[12] A. Mohamed, G. Saadeh, and A. K. Kaviani, "Impact of smart photovoltaic inverter control modes on medium-voltage grid voltage and inverter lifetime: An experimental approach," IET Smart Grid, vol. 6, no. 4, pp. 380–390, 2023.
[13] S. Li, Y. Sun, M. Ramezani, and Y. Xiao, "Artificial Neural Networks for Volt/VAR Control of DER Inverters at the Grid Edge," IEEE Transactions on Smart Grid, vol. 10, pp. 5564–5573, Sept. 2019.
[14] A. Eggli, S. Karagiannopoulos, S. Bolognani, and G. Hug, "Stability Analysis and Design of Local Control Schemes in Active Distribution Grids," IEEE Transactions on Power Systems, vol. 36, pp. 1900–1909, May 2021.
[15] P. Jahangiri and D. C. Aliprantis, "Distributed Volt/VAr Control by PV Inverters," IEEE Transactions on Power Systems, vol. 28, pp. 3429–3439, Aug. 2013.
[16] X. Zhou, M. Farivar, Z. Liu, L. Chen, and S. H. Low, "Reverse and Forward Engineering of Local Voltage Control in Distribution Networks," IEEE Transactions on Automatic Control, vol. 66, pp. 1116–1128, Mar. 2021.
[17] I. Murzakhanov, S. Gupta, S. Chatzivasileiadis, and V. Kekatos, "Optimal Design of Volt/VAR Control Rules for Inverter-Interfaced Distributed Energy Resources," IEEE Transactions on Smart Grid, vol. 15, pp. 312–323, Jan. 2024.
[18] J. Wei, S. Gupta, D. C. Aliprantis, and V. Kekatos, "A Chance-Constrained Optimal Design of Volt/VAR Control Rules for Distributed Energy Resources," July 2023.
[19] M. Baran and F. Wu, "Network reconfiguration in distribution systems for loss reduction and load balancing," IEEE Transactions on Power Delivery, vol. 4, pp. 1401–1407, Apr. 1989.
[20] J. Feng, Y. Shi, G. Qu, S. H. Low, A. Anandkumar, and A. Wierman, "Stability Constrained Reinforcement Learning for Decentralized Real-Time Voltage Control," IEEE Transactions on Control of Network Systems, vol. 11, pp. 1370–1381, Sept. 2024.
[21] S. V. Dhople, S. S. Guggilam, and Y. C. Chen, "Linear approximations to AC power flow in rectangular coordinates," in 2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton), pp. 211–217, Sept. 2015.
[22] M. Jia and G. Hug, "Overview of Data-driven Power Flow Linearization," in 2023 IEEE Belgrade PowerTech, pp. 01–06, June 2023.
[23] Ecojoule Energy, "EcoVAR Pole-Mounted Static Compensator (STATCOM) Voltage Regulator." https://ecojoule.com/ecovar/.
[24] S. Bolognani, R. Carli, G. Cavraro, and S. Zampieri, "Distributed Reactive Power Feedback Control for Voltage Regulation and Loss Minimization," IEEE Transactions on Automatic Control, vol. 60, pp. 966–981, Apr. 2015.
[25] D. Hamilton, Optimization-based Operation and Control Approaches for Improving the Resilience of Electric Power Systems. Ph.D. dissertation, School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, Dec. 2023.
[26] A. Wächter and L. T. Biegler, "On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming," Mathematical Programming, vol. 106, pp. 25–57, Mar. 2006.
[27] R. D. Zimmerman, C. E. Murillo-Sánchez, and R. J. Thomas, "MATPOWER: Steady-State Operations, Planning, and Analysis Tools for Power Systems Research and Education," IEEE Transactions on Power Systems, vol. 26, pp. 12–19, Feb. 2011.
2510.14828
RoboGPT-R1: Enhancing Robot Planning with Reinforcement Learning

Jinrui Liu∗, Institute of Automation, CASIA; School of Artificial Intelligence, UCAS, China. liujinrui2024@ia.ac.cn
Bingyan Nie∗, Institute of Automation, CASIA; School of Artificial Intelligence, UCAS, China. niebingyan2025@ia.ac.cn
Boyu Li, Institute of Automation, CASIA; School of Artificial Intelligence, UCAS, China. liboyu2021@ia.ac.cn
Yaran Chen, Institute of Automation, CASIA; School of Artificial Intelligence, UCAS, China. chenyaran2013@ia.ac.cn
Yuze Wang, Huawei Cloud Technology Co., Ltd, China. wangyuze1@hisilicon.com
Shunsen He, Huawei Cloud Technology Co., Ltd, China. heshunsen@huawei.com
Haoran Li†, Institute of Automation, CASIA; School of Artificial Intelligence, UCAS, China. lihaoran2015@ia.ac.cn

∗Equal contribution. †Corresponding author.

ABSTRACT

Improving the reasoning capabilities of embodied agents is crucial for robots to complete complex human instructions in long-horizon manipulation tasks successfully. Despite the success of large language models and vision language models based on Supervised Fine-Tuning (SFT) in planning tasks, they continue to face challenges in performing long-horizon manipulation tasks in complex real-world environments, owing to their restricted common sense and reasoning capabilities. Considering that aligning general-purpose vision language models to robotic planning tasks via supervised fine-tuning suffers from poor generalization and insufficient physical understanding, we propose RoboGPT-R1, a two-stage fine-tuning framework for embodied planning. In this framework, supervised training acquires foundational knowledge through expert sequences, followed by RL to address the model's shortcomings in visual-spatial understanding and reasoning. To achieve physical understanding and action sequence consistency in multi-step reasoning tasks, we design a rule-based reward function that simultaneously considers long-horizon performance and action constraints in the environment. The reasoning model, trained on Qwen2.5-VL-3B, significantly outperforms the larger-scale model, GPT-4o-mini, by 21.33% and surpasses other work trained on Qwen2.5-VL-7B by 20.33% on the EmbodiedBench benchmark.

KEYWORDS

Robot Task Planning, Reasoning Planning, Reinforcement Learning, Vision-Language-Model

1 INTRODUCTION

Recently, vision language models (VLMs) have been increasingly employed as high-level planners for embodied tasks [6, 18, 24], given their emerging capability to combine natural language instructions into long-horizon robotic action sequences. Nevertheless, in real-world environments, VLMs still fail to meet the demands of robustness and generalization [6, 7, 66]. There are two major challenges that remain unsolved. First, the prevailing supervised fine-tuning (SFT)-only paradigm primarily imitates expert demonstrations, yet lacks mechanisms for adaptation or self-correction under dynamic environments [10]. Second, the design of long-horizon reward functions remains inadequate: existing rewards are often sparse or poorly aligned with the execution of grounded actions, ultimately hindering planning performance [53]. In real-world long-horizon embodied tasks, VLMs still exhibit limited planning capability [53], as they are not well aligned with the physical realities of robotic embodiments or with accurate state transition dynamics [6, 60].
Although existing approaches based on the SFT-only paradigm can enhance the performance of VLMs [8, 65], they remain ineffective when confronted with scenarios or instructions that fall outside the distribution of the SFT dataset [58]. The lack of physical common sense and feasibility constraints often results in ambiguous object recognition and biased state estimation [15, 27]. Moreover, the absence of feedback and error-correction signals [57] in the SFT paradigm leads models to memorize answers rather than learn generalizable reasoning strategies [10], thereby failing to mitigate the accumulation of local errors over extended task horizons. Reinforcement learning (RL) has proven effective for VLMs in domains such as video reasoning [26], object detection [29, 38] and mathematical reasoning [49, 50, 59], where tasks provide clear and verifiable answers. However, when transferred to open-ended embodied planning tasks, it becomes challenging to design dense and interpretable reward functions [24, 63, 66], as the outcomes are often ambiguous and context-dependent. For instance, in embodied planning tasks, when the correct answer is "pick up an apple and put it on the table," a straightforward RL reward such as string matching or accuracy calculation allows the model to gain higher rewards simply by generating more actions, as some subsequences are likely to overlap with the reference plan. This mechanism misleads the model into producing overly long yet logically incorrect reasoning chains, masking its true deficiencies in action ordering and planning coherence. Therefore, a dense and sequence-aware reward is required to directly capture whether a multi-step plan is executed fully or partially correctly [6, 60, 63], particularly in long-horizon and complex action sequences, rather than rewarding superficial token overlap.
To address the above problems, we propose RoboGPT-R1, a two-stage training framework designed to enhance robotic planning with small-scale models. In contrast to the SFT stage, which learns predefined answers, the Group Relative Policy Optimization (GRPO) algorithm explores optimal solutions, addressing the shortcomings of SFT in generalization, task understanding, spatial perception, and planning consistency [10]. Moreover, in the second-stage RL training, unlike conventional RL approaches in reasoning tasks that typically rely on sparse or single-point accuracy rewards, our method introduces a rule-based variable reward function specifically designed for long-horizon embodied reasoning and planning. This reward function consists of two complementary components: a format reward and an accuracy reward. As illustrated in Fig. 1, the format reward integrates multiple dimensions, including structural completeness of reasoning, action type correctness and action validity. The accuracy reward is based on the longest common subsequence (LCS) between predicted and reference action sequences, effectively preserving action order and enhancing long-horizon performance. The results on EmbodiedBench [58] show that RoboGPT-R1 significantly outperforms GPT-4o-mini and is competitive with closed-source models such as GPT-4o and Gemini-2.0-flash. Furthermore, its performance significantly surpasses open-source models like Llama-3.2-90B, achieving a 23.33% higher overall score. Compared to the previous state-of-the-art, it yields a 20.33% improvement.
On long-horizon tasks, it leads with an accuracy of 50%, demonstrating the superior reasoning capabilities attainable with small models. In summary, our contributions are as follows:
• We propose RoboGPT-R1, a two-stage training paradigm for embodied multi-step reasoning tasks. With RL training, RoboGPT-R1 develops the reasoning capability needed for complex tasks and environments, thereby enhancing its physical common sense and error-correction abilities.
• We design a reward function based on the perception-reasoning-planning-action loop, with an LCS-based reward that effectively enhances the model's understanding of and reflection on the task. This enables efficient, high-quality reward computation at very low cost and yields good reasoning capability on long-horizon tasks.
• We conduct extensive experiments on 6 tasks in 2 scenarios, including spatial perception, long-range reasoning, common-sense questions, and visual understanding. In seen scenarios, our method outperforms open-source general models and existing embodied planning models, achieving competitive performance compared to closed-source general models. Moreover, in unseen scenarios, it demonstrates superior reasoning capability and surpasses the state-of-the-art embodied planning model.

2 RELATED WORK
2.1 Embodied Planning
Embodied agents require not only active exploration, manipulation, and scene perception, but also embodied task planning capabilities [15, 16, 54]. Embodied planning aims to decompose high-level natural language instructions into executable subtask sequences [17, 40], enabling the embodied agent to generate actionable steps within an interactive environment to complete complex behaviors. With the advent of large models [28, 56, 57], natural language offers greater expressive flexibility than structured languages, making it possible to use LLMs to decompose complex plans into sub-plans in a fully automated manner [5, 51, 61, 66]. For example, TaPA introduces an embodied task planner that grounds free-form instructions into executable plans, trained on a new multimodal benchmark (80 scenes, 15K instruction-plan pairs) [55]; it fine-tunes LLaMA with object lists from multi-view open-vocabulary detection. Additionally, SayCan [3] combines LLMs with reinforcement learning, leveraging the high-level reasoning capabilities of LLMs to complement the value assessment of pre-trained skills, thereby laying a foundation for language in robotics and enabling feasibility scoring of actions; this can generate executable long-term plans suitable for real robots. While LLMs can generate preliminary plans based on commonsense reasoning, they lack constraints from the physical environment and the feasibility of actions [22, 42, 48, 61]. The emergence of VLMs [39, 55, 60] has led to their use as high-level planners, with the current mainstream approach being to fine-tune VLMs on demonstration data. Zhang et al. [66] extend O1-style deep reasoning to embodied interactive tasks by coupling visual search with step-by-step planning, reflection, and verification, trained on synthesized Observation-Thought-Action trajectories to improve performance on AI2-THOR-style [25] tasks. Moreover, Reflective Planning [19] proposes a test-time computation framework that augments a pre-trained VLM with a reflection loop: it imagines future world states via a diffusion-based dynamics model, critiques potential suboptimalities, and revises plans to improve multi-stage, long-horizon manipulation.
2.2 Reinforcement Learning for LLMs and VLMs
In recent years, with the emergence of reasoning models like OpenAI's o1 [32], research on large language models (LLMs) has gradually shifted towards enhancing their reasoning capabilities through reinforcement learning (RL) [20, 31, 45, 47, 52]. Numerous studies have explored ways to enhance the performance of LLMs on reasoning tasks, including solving mathematical problems [37, 46, 59] and coding [62]. A notable breakthrough in this field is DeepSeek-R1 [14], which experienced an "aha moment" during GRPO-based training, enabling the model to independently reflect on and reevaluate its initial policy without any explicit guidance. Subsequently, several works [26, 29, 30, 38, 44, 64] have used reinforcement learning to enhance the reasoning capabilities of models in multimodal settings. For example, VLM-R1 [38] and Visual-RFT [29] extend R1-style reinforcement learning to vision-language models, sampling multiple answer outputs for each input and optimizing with GRPO using verifiable, task-specific rewards, resulting in stronger visual reasoning and perception than the SFT baseline. These advances demonstrate the potential of RL to move large models from "imitation learning" to "emergent intelligence" [10]. Inspired by the R1 paradigm, this paper employs GRPO-based reinforcement learning to perform two-stage training on the model, systematically improving the planning ability and long-term consistency of embodied agents in multimodal task planning.

3 METHODOLOGY
3.1 Overview
In this section, we provide a brief introduction to the proposed RoboGPT-R1 framework. In contrast to previous approaches that rely solely on SFT, this study explores the incorporation of RL and reasoning techniques to better align with embodied planning tasks. Section 3.3 introduces the two-stage learning scheme. As demonstrated in Figure 1, the training of the agent comprises two phases: an initial SFT phase, whose purpose is to instil fundamental knowledge and elementary reasoning capabilities into the agent, and a subsequent reinforcement learning phase, which uses the GRPO policy optimisation algorithm to enable the agent to continuously explore, think, and learn independently. Then, to address issues such as inconsistent action ordering during multi-step reasoning, poor action planning, and poor performance on long-horizon tasks, we design a rule-based, verifiable reward to incentivize the VLM's planning capabilities in Section 3.4.

3.2 Data Preparation
Embodied task planning in real-world indoor scenarios requires a substantial amount of multimodal data, encompassing both perceptual and physical knowledge. Given the impressive inference performance of large, closed-source models, data distillation can be employed to generate high-quality datasets. Following REBP [53], we employ its SFT dataset, distilled from Gemini-2.0-flash, in the SFT phase. In the RL phase, the RFT dataset from REBP is augmented with task-relevant examples and unsuccessful exploratory tasks. Our experiments show that the number of in-context examples in a dataset can have contradictory effects on model training and testing.
For instance, directly applying n-shot prompting (n denotes the number of examples) in the SFT phase results in the model "memorising" the surface form of the examples; during testing, the n-shot score then drops significantly. In contrast, an untrained model significantly improves its score when tested with n-shot prompts, demonstrating its firm reliance on examples at test time. Consequently, injecting examples into a trained model can cause rigid adherence to predefined answer templates, which impedes the model's capacity to adapt to the complexity of the problem and the variability of the environment. Experiments demonstrate that models trained and tested zero-shot perform best, while also reducing the number of input tokens by approximately one-third (from roughly 9,000 to under 6,000), thereby improving training efficiency. Consequently, we apply 0-shot processing uniformly to both training and testing data.

3.3 Two-stage Training Scheme
Stage 1: Initial Planning via SFT. To equip the base VLM with the fundamental capacity to generate multi-step reasoning, a supervised fine-tuning phase is carried out first. This step is crucial because the reasoning patterns learned in subsequent reinforcement learning are strongly shaped by the capabilities of the base model. Furthermore, using reinforcement learning as the sole training method is significantly affected by the data distribution, resulting in instability during the early stages of training. Therefore, a small amount of data is used for an SFT-based warm-up stage, after which reinforcement learning is conducted on the entire dataset. Our findings demonstrate that this approach stabilizes the early stages of training and rapidly equips the base model with relevant knowledge, enabling it to acquire a certain level of embodied reasoning knowledge and planning capability.

Stage 2: Enhancing reasoning with GRPO. The DeepSeek R1-zero algorithm employs the GRPO framework. Unlike reinforcement learning algorithms such as PPO, which require an additional critic model to estimate policy performance, GRPO directly compares groups of candidate responses, eliminating the need for a separate critic. Given a question $q$, GRPO samples $N$ candidate responses $\{o_1, o_2, \ldots, o_N\}$ from the policy $\pi_\theta$ and evaluates the quality of each response $o_i$ using a reward function $R(q, o_i)$. To determine the relative quality of these responses, the algorithm normalizes the rewards $r_i = R(q, o_i)$ by their mean and standard deviation and derives the advantage as:

$$A_i = \frac{r_i - \mathrm{mean}(\{r_1, r_2, \ldots, r_N\})}{\mathrm{std}(\{r_1, r_2, \ldots, r_N\})} \qquad (1)$$

where $A_i$ denotes the advantage of candidate $o_i$ measured against the other samples in the group. To encourage the model to generate responses with higher advantages within the group, while penalizing large deviations from the reference policy $\pi_{\mathrm{ref}}$, the objective is:

$$\mathcal{J}_{\mathrm{GRPO}}(\theta) = \mathbb{E}_{q,\,\{o_i\}_{i=1}^{N} \sim \pi_{\theta_{\mathrm{old}}}(q)} \left[ \frac{1}{N} \sum_{i=1}^{N} \Big\{ \min\left(\ell_1 A_i,\ \ell_2 A_i\right) - \beta\, D_{\mathrm{KL}}\left(\pi_\theta \,\|\, \pi_{\mathrm{ref}}\right) \Big\} \right] \qquad (2)$$

$$\ell_1 = \frac{\pi_\theta(o_i \mid q)}{\pi_{\theta_{\mathrm{old}}}(o_i \mid q)}\,, \qquad \ell_2 = \mathrm{clip}\!\left( \frac{\pi_\theta(o_i \mid q)}{\pi_{\theta_{\mathrm{old}}}(o_i \mid q)},\ 1-\epsilon,\ 1+\epsilon \right) \qquad (3)$$

where $\epsilon$ is the clipping hyperparameter, and $\beta$ controls the KL penalty against the reference policy $\pi_{\mathrm{ref}}$. Inspired by DeepSeek-R1, our approach incorporates two complementary rewards: the accuracy reward and the format reward.
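To make the group-relative update concrete, the sketch below implements Eqs. (1)-(3) for a single group of sampled plans. It is a minimal PyTorch illustration, not the authors' training code (the paper uses the VERL framework); all identifiers are ours, and the KL term of Eq. (2) is omitted for brevity.

```python
import torch

def group_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Group-relative advantage of Eq. (1): normalize the N rewards
    sampled for one question by their mean and standard deviation."""
    return (rewards - rewards.mean()) / (rewards.std() + eps)

def grpo_surrogate(logp_new: torch.Tensor,
                   logp_old: torch.Tensor,
                   advantages: torch.Tensor,
                   clip_eps: float = 0.2) -> torch.Tensor:
    """Clipped surrogate of Eqs. (2)-(3) for one group of responses;
    the KL penalty term is omitted here for brevity."""
    ratio = torch.exp(logp_new - logp_old)  # pi_theta / pi_theta_old per response
    l1 = ratio * advantages
    l2 = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return torch.min(l1, l2).mean()

# Example: N = 4 candidate plans scored by the rule-based reward R(q, o_i).
rewards = torch.tensor([0.9, 0.4, 0.4, 0.1])
adv = group_advantages(rewards)  # positive only for the above-average plan
```

Because the advantage is computed within the sampled group, a plan is rewarded only for being better than its siblings, which is what removes the need for a learned critic.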
Next, we introduce the reward function $R$ designed for robotic planning tasks.

Figure 1: An overview of RoboGPT-R1. RoboGPT-R1 adopts a two-stage learning paradigm. In the initial phase, supervised fine-tuning introduces the model to data from mathematics, embodied tasks, and visual understanding, establishing a foundation for embodied reasoning. In the second phase, we apply GRPO-based reinforcement fine-tuning guided by a tailored reward function. The model is subsequently evaluated across six categories of tasks, including long-horizon planning and spatial reasoning.

3.4 Reward Design
In Section 3.3, we briefly introduced the general GRPO algorithm. Given that embodied planning requires an agent to complete complex tasks in a real or simulated physical environment based on natural language instructions, we design a set of reward functions specifically for this purpose. The following sections describe the format reward and the accuracy reward, respectively.

3.4.1 Format Reward. To ensure output regularity and facilitate the extraction of both the reasoning process and the final result, most R1-style approaches enclose the reasoning within `<think></think>` tags and the final plan within `<answer></answer>` tags. If the generated output deviates from this structure, the format reward is assigned a value of zero. Inspired by REBP [53], in embodied multi-step planning, the agent must generate responses that are not only semantically meaningful but also structurally executable. Unlike conventional text generation tasks, where free-form output may be acceptable, embodied planning requires a higher level of structural rigour to ensure that each response remains interpretable and executable by downstream systems. First, the model should follow a clear cognitive loop, just as humans plan their actions: before acting, we reflect on the task, observe the environment, make a plan, and then execute it. Second, the generation of invalid or fabricated actions should be penalized. In summary, we set the following format reward, which consists of three parts:

$$R_{\mathrm{format}} = 0.3 \cdot R_{\mathrm{section}} + 0.3 \cdot R_{\mathrm{type}} + 0.4 \cdot R_{\mathrm{validity}} \qquad (4)$$

The section reward $R_{\mathrm{section}}$ evaluates whether all required fields (visual_state_description, reasoning_and_reflection, language_plan, executable_plan) are present and correctly typed, and is computed as:

$$R_{\mathrm{section}} = \frac{1}{|S|} \sum_{s \in S} \mathbb{1}\left[\mathrm{type}(o_s) = T_s\right] \qquad (5)$$

where $\mathbb{1}[\cdot]$ is the indicator function, $S = \{s_1, s_2, s_3, s_4\}$ comprises the four fields, $T_s$ denotes the required type of each field, and $o_s$ is the value of field $s$ in the output object. The type reward $R_{\mathrm{type}}$ checks whether each action step is well-formed, defined as:

$$R_{\mathrm{type}} = \frac{1}{m} \sum_{i=1}^{m} \mathbb{1}\left[\hat{y}^{\mathrm{id}}_i \in \mathrm{Int},\ \mathrm{name}_i \in \mathrm{Str},\ \hat{y}_i \neq \varnothing\right] \qquad (6)$$

In this formulation, $m$ denotes the number of action steps, and $\hat{y}^{\mathrm{id}}_i$ and $\hat{y}_i$ represent the action id and name of step $i$, respectively. Here Int denotes the set of integers, while Str represents the set of non-empty strings. Finally, the validity reward $R_{\mathrm{validity}}$ measures whether each action id-name pair matches the predefined action dictionary, given by:

$$R_{\mathrm{validity}} = \frac{1}{C} \sum_{i=1}^{C} \mathbb{1}\left[\mathrm{norm}(\hat{y}_i) = \mathrm{norm}(D_{\mathrm{action}}[\hat{y}^{\mathrm{id}}_i])\right] \qquad (7)$$

where $C$ is the number of matched steps and $\mathrm{norm}(\cdot)$ is a normalization function that lowercases and trims whitespace. The action dictionary $D_{\mathrm{action}}$ defines a mapping from action ids to their corresponding names. Different from REBP [53], our setup employs dynamic action IDs that vary across tasks and environments, preventing the model from relying on memorization of a fixed action set; instead, the model must learn the meaning of actions and use them correctly. In summary, such a reward design guides the model to generate structured outputs that conform to the closed loop of task execution, and avoids hallucinated or inconsistent outputs through self-understanding and reflection.
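As a concrete reading of Eqs. (4)-(7), the sketch below scores one parsed model output. It is a minimal illustration under our own assumptions: the answer has already been extracted from the `<answer>` tags into a dict, each plan step is a dict with `action_id`/`action_name` keys, and `action_dict` maps ids to names for the current task; none of these identifiers come from the released code.

```python
REQUIRED_FIELDS = ("visual_state_description", "reasoning_and_reflection",
                   "language_plan", "executable_plan")

def norm(s: str) -> str:
    """Lowercase and trim whitespace, as in Eq. (7)."""
    return s.strip().lower()

def format_reward(output: dict, action_dict: dict) -> float:
    """Weighted combination of Eqs. (4)-(7) on a parsed output object."""
    # R_section, Eq. (5): the three text fields must be strings and the
    # executable plan a list of steps (our assumed typing).
    expected = {f: str for f in REQUIRED_FIELDS}
    expected["executable_plan"] = list
    r_section = sum(isinstance(output.get(f), t)
                    for f, t in expected.items()) / len(expected)

    steps = output.get("executable_plan") or []
    if not steps:
        return 0.3 * r_section  # no steps: only the section term can score

    # R_type, Eq. (6): integer id plus non-empty string name per step.
    r_type = sum(isinstance(s.get("action_id"), int)
                 and isinstance(s.get("action_name"), str)
                 and s["action_name"] != ""
                 for s in steps) / len(steps)

    # R_validity, Eq. (7): each id-name pair must match the (dynamic)
    # action dictionary for the current task.
    r_validity = sum(norm(str(s.get("action_name", "")))
                     == norm(str(action_dict.get(s.get("action_id"), "<missing>")))
                     for s in steps) / len(steps)

    return 0.3 * r_section + 0.3 * r_type + 0.4 * r_validity
```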
3.4.2 LCS Reward. In embodied multi-step planning, the correctness of individual actions is not sufficient; the order in which actions are executed is often critical to task success. For example, placing an object before picking it up may involve the right actions but in a logically invalid sequence. Traditional token-level matching or step-wise accuracy metrics fail to penalize such disorder, treating unordered but correct actions as equally valid. Besides, in long-horizon tasks, plans can extend over dozens of steps, where strict reward strategies like exact matching become too rigid to reflect realistic performance. A model might make early mistakes yet recover in later steps to complete the task successfully. However, prefix-based accuracy rewards, such as those used in REBP [53], overlook this "error recovery" behaviour, leading to sparse and less informative reward signals. To address these problems, we design the accuracy reward based on the Longest Common Subsequence (LCS) between the predicted and reference action sequences. By computing the LCS over action names, we enforce both content accuracy and sequence coherence. This approach is robust to local deviations while maintaining global alignment and remains effective even as task length increases. We define the model-generated sequence as $\hat{Y} = (\hat{y}_1, \hat{y}_2, \ldots, \hat{y}_m)$ and the reference sequence as $Y = (y_1, y_2, \ldots, y_n)$. The detailed accuracy reward is as follows:

$$\mathrm{LCS}(i,j) = \begin{cases} 0, & \text{if } i = 0 \text{ or } j = 0 \\ \mathrm{LCS}(i-1,\,j-1) + 1, & \text{if } \hat{y}_i = y_j \\ \max\left(\mathrm{LCS}(i-1,\,j),\ \mathrm{LCS}(i,\,j-1)\right), & \text{otherwise} \end{cases} \qquad (8)$$

$$R_{\mathrm{lcs}} = \frac{k}{n} \qquad (9)$$

where $n = |Y|$ denotes the length of the reference sequence and $k = \mathrm{LCS}(m, n)$ is the length of the longest common subsequence between the generated sequence $\hat{Y}$ and the reference sequence $Y$. To evaluate the effectiveness of our proposed accuracy reward, we conduct an ablation study: compared to prefix accuracy (as used in REBP) and standard step-wise matching, the LCS-based accuracy reward shows stronger performance in multi-step planning tasks.

3.4.3 Overall Reward. To jointly encourage structural correctness and sequential accuracy, we define the overall reward as a weighted combination of the format reward and the LCS-based accuracy reward:

$$R = 0.2 \cdot R_{\mathrm{format}} + 0.8 \cdot R_{\mathrm{lcs}} \qquad (10)$$

This overall reward serves two key purposes in embodied planning: enforcing structural correctness and promoting action-level accuracy. The format reward $R_{\mathrm{format}}$ enforces a fixed output structure and ensures each action step is well-formed and consistent with the current scene and instructions. Meanwhile, the accuracy reward $R_{\mathrm{lcs}}$ evaluates whether the predicted action sequence aligns with the reference plan, not only in content but also in order. By combining these two aspects, the overall reward encourages the model to reason more effectively and generate executable plans with reasonable length and structure.
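A minimal sketch of Eqs. (8)-(10) follows, using the textbook dynamic program for the LCS; sequences are lists of normalized action names, and all identifiers are illustrative rather than taken from the released code.

```python
def lcs_length(pred: list, ref: list) -> int:
    """Bottom-up dynamic program for the recurrence in Eq. (8)."""
    m, n = len(pred), len(ref)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if pred[i - 1] == ref[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def lcs_reward(pred: list, ref: list) -> float:
    """Eq. (9): LCS length normalized by the reference length."""
    return lcs_length(pred, ref) / len(ref) if ref else 0.0

def overall_reward(r_format: float, r_lcs: float) -> float:
    """Eq. (10): weighted combination of format and accuracy terms."""
    return 0.2 * r_format + 0.8 * r_lcs

# Order matters: the same actions out of order earn a lower reward.
ref      = ["find apple", "pick up apple", "put apple on table"]
shuffled = ["put apple on table", "find apple", "pick up apple"]
detour   = ["find apple", "open fridge", "pick up apple", "put apple on table"]
assert lcs_reward(shuffled, ref) < lcs_reward(detour, ref)  # 2/3 < 1.0
```

Note that the detour plan still earns the full reward because the entire reference plan survives as an ordered subsequence, which is exactly the tolerance to local deviations described above.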
4 EXPERIMENTS
4.1 Experimental Settings
Evaluation. We evaluate RoboGPT-R1 on EmbodiedBench [58], a unified benchmarking suite for multimodal embodied planning agents. EmbodiedBench offers standardized protocols and interfaces that cover multi-sensory input, language instructions, and long-horizon decision making, enabling consistent and reproducible comparisons across task suites. Specifically, we focus on its two constituent suites: EB-ALFRED and EB-Habitat. The former is rooted in the ALFRED ecosystem and targets instruction-following household tasks (e.g., pick-and-place, cleaning, and mobile manipulation) that emphasize object state tracking and stepwise dependencies; the latter builds on the Habitat ecosystem and emphasizes navigation and interaction in 3D environments, with observation distributions, scene layouts, and action semantics that differ markedly from EB-ALFRED [36, 41]. Because our training data are primarily associated with EB-ALFRED/ALFRED, we treat performance on EB-ALFRED as in-domain and use it to gauge method effectiveness, whereas results on EB-Habitat are regarded as out-of-domain and used to assess generalization. Unless otherwise noted, all evaluation settings follow the benchmark defaults; the only test variable we modify is the number of in-context examples (n_shots) in the control input. The evaluation setting details are provided in Appendix C.

Baselines. We evaluate both the Qwen and GPT model series, including Qwen2.5-VL-72B-Ins, Qwen2.5-VL-7B-Ins, Qwen2.5-VL-3B-Ins, as well as GPT-4.1, GPT-4o and GPT-4o-mini. Closed-source models are assessed via their official APIs, while open-source models are tested through local deployment. Performance results for additional baselines are obtained from the REBP and EmbodiedBench leaderboards. The three types of comparison baselines and the corresponding results are presented below:
(1) General closed-source models: five representative proprietary multimodal models from the GPT, Gemini, and Qwen families: Gemini-2.0-Flash [13], Qwen-VL-Max [11], GPT-4.1 [35], GPT-4o [34] and GPT-4o-mini [33].
(2) General open-source models: open-source models from multiple series, including Qwen2.5-VL-3B/7B/72B-Ins [12], LLaMA-3.2-90B-Vision-Ins [4], InternVL2.5-8B [9], and Gemma-3-12B-it [43].
(3) Embodied domain-specific models: models tailored for embodied reasoning and planning, such as REBP [53], RoboBrain [23], TaPA [55] and ours.

Dataset. We process the REBP public dataset [53] to generate a base dataset and an augmented dataset. The base dataset is directly distilled from the EB-ALFRED tasks in EmbodiedBench [58], serving as a dataset strongly correlated with the benchmark, and is used in the SFT phase to endow the model with initial multimodal embodied-planning skills. The main body of the augmented dataset originates from the open-source ALFRED trajectory dataset [41]. While its content is similar to EB-ALFRED, it exhibits significant differences in details such as action space and task length, making it benchmark-adjacent (near-domain) data. The augmented dataset includes all embodied planning data in the base dataset to prevent catastrophic forgetting. The augmented dataset is employed in the RFT phase to improve reasoning robustness under near-domain distributions, thereby further enhancing planning performance on EmbodiedBench. Data processing methods, data composition, and other details are provided in Appendix B.
Table 1: Success rates of diverse models on EB-ALFRED and EB-Habitat. Entries without any symbol are sourced from the EmbodiedBench leaderboard. Symbol † indicates results directly cited from the REBP paper, based on their evaluations. Symbol ‡ marks results obtained through our own reproduction by querying the official API. All scores are reported as percentages (%). Per-suite columns are Avg. / Base / Common / Complex / Visual / Spatial / Long.

Closed-source general models:
Gemini-2.0-flash [13]        | -   | EB-ALFRED (seen): 52.30 / 62 / 48 / 54 / 46 / 46 / 58 | EB-Habitat (unseen): 42.30 / 82 / 38 / 38 / 36 / 34 / 26
Qwen-VL-Max [11]             | -   | EB-ALFRED (seen): 41.30 / 44 / 48 / 44 / 42 / 38 / 32 | EB-Habitat (unseen): 45.30 / 74 / 40 / 50 / 42 / 30 / 36
GPT-4.1‡ [35]                | -   | EB-ALFRED (seen): 64.67 / 70 / 64 / 70 / 62 / 62 / 60 | EB-Habitat (unseen): 50.67 / 90 / 38 / 50 / 36 / 46 / 44
GPT-4o‡ [34]                 | -   | EB-ALFRED (seen): 51.67 / 54 / 46 / 58 / 52 / 52 / 48 | EB-Habitat (unseen): 57.00 / 84 / 42 / 62 / 38 / 62 / 54
GPT-4o-mini‡ [33]            | -   | EB-ALFRED (seen): 34.00 / 66 / 56 / 68 / 14 / 0 / 0   | EB-Habitat (unseen): 35.00 / 70 / 24 / 36 / 30 / 30 / 20

Open-source general models:
Llama-3.2-90B-Vision-Ins [4] | 90B | EB-ALFRED (seen): 32.00 / 38 / 34 / 44 / 28 / 32 / 16 | EB-Habitat (unseen): 40.30 / 94 / 24 / 50 / 32 / 28 / 14
InternVL2.5-8B [9]           | 8B  | EB-ALFRED (seen): 2.00 / 4 / 6 / 2 / 0 / 0 / 0        | EB-Habitat (unseen): 11.30 / 36 / 4 / 0 / 10 / 16 / 2
Gemma-3-12B-it [43]          | 12B | EB-ALFRED (seen): 25.70 / 32 / 26 / 38 / 26 / 20 / 12 | EB-Habitat (unseen): 23.00 / 58 / 10 / 24 / 18 / 24 / 4
Qwen2.5-VL-72B-Ins‡ [12]     | 72B | EB-ALFRED (seen): 43.67 / 62 / 36 / 48 / 40 / 44 / 32 | EB-Habitat (unseen): 50.33 / 92 / 38 / 48 / 34 / 46 / 44
Qwen2.5-VL-7B-Ins‡ [12]      | 7B  | EB-ALFRED (seen): 2.67 / 6 / 2 / 6 / 0 / 2 / 0        | EB-Habitat (unseen): 15.00 / 42 / 6 / 22 / 12 / 4 / 4
Qwen2.5-VL-3B-Ins‡ [12]      | 3B  | EB-ALFRED (seen): 1.33 / 2 / 2 / 0 / 0 / 4 / 0        | EB-Habitat (unseen): 14.67 / 34 / 0 / 18 / 18 / 14 / 4

Embodied planning models:
RoboBrain† [23]              | 7B  | EB-ALFRED (seen): 0.33 / 2 / 0 / 0 / 0 / 0 / 0        | EB-Habitat (unseen): 15.30 / 38 / 6 / 18 / 8 / 18 / 4
TaPA† [55]                   | 7B  | EB-ALFRED (seen): 0.00 / 0 / 0 / 0 / 0 / 0 / 0        | EB-Habitat (unseen): 0.00 / 0 / 0 / 0 / 0 / 0 / 0
REBP† [53]                   | 7B  | EB-ALFRED (seen): 35.00 / 52 / 46 / 46 / 28 / 32 / 6  | EB-Habitat (unseen): 18.33 / 50 / 6 / 18 / 14 / 14 / 8
RoboGPT-R1 (ours)‡           | 3B  | EB-ALFRED (seen): 55.33 / 62 / 56 / 64 / 50 / 50 / 50 | EB-Habitat (unseen): 22.00 / 64 / 8 / 18 / 20 / 12 / 10

Training. We adopt Qwen2.5-VL-3B-Instruct as the multimodal base model and employ a two-stage training scheme. In the first stage, we perform full-parameter SFT on the base dataset to endow the model with initial planning skills aligned with EB-ALFRED. In the second stage, we conduct reinforcement fine-tuning (RFT) with GRPO on the augmented dataset, aiming to improve reasoning and generalization from data that are weakly aligned with the benchmark. SFT is implemented with LLaMA-Factory [21] and trained on 8× Ascend 910B3 64GB NPUs for approximately 1.5 hours; RFT is implemented with VERL [1] and trained on 4× NVIDIA H20 96GB GPUs for about 25 hours. Complete hyperparameters and implementation details are provided in Appendix A.

4.2 Main Results
Training Results on EB-ALFRED. Table 1 reports the evaluation results on EB-ALFRED. RoboGPT-R1 attains an average success rate of 55.33% across six sub-task suites. This performance significantly outperforms multiple strong baselines: it surpasses the closed-source GPT-4o (51.67%) and GPT-4o-mini (34.00%), and trails only GPT-4.1 (64.67%).

Figure 2: Reward curves in RFT. (a) Overall reward; (b) format and accuracy (LCS) rewards. Our LCS-based accuracy reward provides an appropriate and dense learning signal for RFT, rising steadily from 0.30 to 0.80. The format reward, already aligned by SFT, starts around 0.95 at the onset of RFT and saturates within ∼20 steps, stabilizing near 1.0.
The overall reward is computed as a weighted sum of the accuracy and format rewards with weights 0.8 and 0.2, respectively, and it increases in tandem with the steady improvement of the accuracy reward.

Among open-source general models, RoboGPT-R1 notably outperforms the far larger Qwen2.5-VL-72B-Instruct (43.67%). Relative to Qwen2.5-VL-3B-Instruct, the similarly sized base model, our approach yields an absolute improvement of approximately 54 percentage points in average success. Compared with the embodied specialist REBP (35.00%), RoboGPT-R1 leads by nearly 20 percentage points overall and shows a pronounced advantage on long-horizon tasks: REBP achieves only 6%, whereas RoboGPT-R1 reaches 50%. Notably, despite using only 3B parameters, RoboGPT-R1 delivers this level of performance under a small-model, low-inference-cost regime, highlighting the parameter efficiency and effectiveness of our method.

Generalization Results on EB-Habitat. Table 1 also summarizes the EB-Habitat results. RoboGPT-R1 attains an average success rate of 22% across six sub-tasks, an improvement of about 7 percentage points over the base model Qwen2.5-VL-3B-Instruct (14.67%), and outperforms both the 7B-parameter models Qwen2.5-VL-7B-Instruct (15.00%) and REBP (18.33%). Although a performance gap remains compared to Qwen2.5-VL-72B-Instruct and several closed-source general models, these out-of-domain results indicate that our approach substantially improves the generalization and transferability of the embodied model.

Table 2: Training-strategy ablation results (success rate, %). After SFT, the model acquires core embodied planning competence, raising the average success rate from 1.33% to 42.00% while still lagging on long-horizon tasks. With subsequent RFT, performance further increases to 55.33%, with especially pronounced gains on long-horizon tasks (26% → 50%). Columns are Avg. / Base / Common / Complex / Visual / Spatial / Long.

Base ✓, SFT ✗, RFT ✗ | 1.33 (+0.00) / 2 (+0.00) / 2 (+0.00) / 0 (+0.00) / 0 (+0.00) / 4 (+0.00) / 0 (+0.00)
Base ✓, SFT ✓, RFT ✗ | 42.00 (+41.33) / 48 (+46) / 44 (+42) / 58 (+58) / 38 (+38) / 38 (+34) / 26 (+26)
Base ✓, SFT ✓, RFT ✓ | 55.33 (+54.00) / 62 (+60) / 56 (+54) / 64 (+64) / 50 (+50) / 50 (+46) / 50 (+50)

Table 3: Data-source ablation results (success rate, %). Base denotes the in-domain dataset distilled from EmbodiedBench; Aug is the near-domain ALFRED-derived set. Training only with SFT on Base reaches 42.00% on average, whereas replacing Base with Aug for SFT collapses performance to 6.00%, indicating poor transfer under pure supervision. Continuing RFT on Base after SFT on Base yields only a modest gain (42.00% → 44.33%), while using Aug during RFT achieves the best results (55.33%), showing that only the RL (RFT) stage can effectively absorb near-domain data and transfer it to the target task. Columns are Avg. / Base / Common / Complex / Visual / Spatial / Long.

Base model (Qwen2.5-VL-3B) | 1.33 / 2 / 2 / 0 / 0 / 4 / 0
Only SFT w/ Base (ours)    | 42.00 / 48 / 44 / 58 / 38 / 38 / 26
Only SFT w/ Aug            | 6.00 / 14 / 6 / 12 / 2 / 2 / 0
SFT + RFT w/ Base          | 44.33 / 56 / 56 / 54 / 32 / 36 / 32
SFT + RFT w/ Aug (ours)    | 55.33 / 62 / 56 / 64 / 50 / 50 / 50

4.3 Ablation Study
We conduct two sets of ablations on EB-ALFRED to disentangle the effects of the two training stages and the two data sources used in our framework. For brevity, we refer to the base dataset and augmented dataset as Base and Aug, respectively.

Training-Strategy Ablation. We compare the performance on EB-ALFRED of the base model (no fine-tuning), the SFT-only model, and the final SFT+RFT model. Results are shown in Table 2. After SFT, the average success rate rises from 1.33% (base model) to 42.00%, indicating that SFT instils the initial embodied planning competence.
However, performance on long-horizon tasks remains limited (26%). With subsequent RFT, performance improves across all sub-suites, yielding an average of 55.33%; notably, the long-horizon score increases from 26% to 50%, highlighting the particular effectiveness of RFT for complex, extended plans.

Data-Source Ablation. As shown in Table 3, these ablations highlight the importance of near-domain data during RFT. (1) SFT on Aug only: replacing Base with Aug for SFT reduces the average success from 42.00% to 6.00%, suggesting that, under supervised learning, near-domain data alone does not transfer effectively to the target task. (2) RFT on Base: starting from SFT on Base and continuing RFT on Base (instead of Aug) yields only a modest increase from 42.00% to 44.33%. Compared with the SFT+RFT setting that uses Aug during RFT (55.33%), this indicates that incorporating near-domain data in the RFT stage is essential.

Figure 3: Success rates across training stages. Bars show the macro average and the long-horizon score for the base model, the SFT-only model, and the SFT+RFT model. SFT establishes initial embodied planning (Avg: 1.33% → 42.00%) but leaves long-horizon performance limited (26%). Adding RFT lifts the average further (to 55.33%) and markedly improves long-horizon planning (to 50%), validating the effectiveness of our two-stage framework: SFT for foundational competence and RFT for the additional gains needed to solve long-horizon tasks.

Table 4: Accuracy-reward comparison results (success rate, %). We compare three GRPO accuracy rewards used during RFT (Step Accuracy, REBP Acc., and LCS (ours)), with "RFT Base" as the reference row for deltas. Bracketed scores denote the change relative to the reference. LCS (ours) delivers the strongest overall performance (Avg. 55.33%) and the largest increase on long-horizon tasks (+24%), outperforming the alternatives across most sub-suites. Columns are Avg. / Base / Common / Complex / Visual / Spatial / Long.

RFT Base      | 42.00 (+0.00) / 48 (+0.00) / 44 (+0.00) / 58 (+0.00) / 38 (+0.00) / 38 (+0.00) / 26 (+0.00)
Step Accuracy | 43.67 (+1.67) / 52 (+4.00) / 56 (+12.00) / 54 (-4.00) / 40 (+2.00) / 40 (+2.00) / 20 (-6.00)
REBP Acc.     | 48.33 (+6.33) / 62 (+14.00) / 52 (+8.00) / 58 (+0.00) / 46 (+8.00) / 38 (+0.00) / 34 (+8.00)
LCS (ours)    | 55.33 (+13.33) / 62 (+14.00) / 56 (+12.00) / 64 (+8.00) / 50 (+12.00) / 50 (+12.00) / 50 (+24.00)

Ablation Analysis. We summarize three observations: (i) SFT establishes the initial planning ability but leaves a gap on long-horizon tasks; (ii) RFT closes this gap, with the strongest gains on long-horizon sub-suites; (iii) combining RFT with Aug is essential, as Aug brings little benefit in SFT but provides substantial gains when used during RFT.

4.4 Accuracy Reward Comparison
During the RFT stage, we instantiate the accuracy reward in the GRPO algorithm [2] as the normalized length of the Longest Common Subsequence (LCS). Thanks to the appropriate and dense learning signal provided by the LCS-based accuracy reward, the reward curves rise steadily throughout RFT, as shown in Fig. 2. This section conducts a head-to-head comparison of three accuracy rewards to assess the suitability and advantages of our LCS reward for embodied planning.

Compared rewards. We set the accuracy reward in GRPO to one of:
• LCS reward (ours): the normalized length of the longest common subsequence between the generated and reference trajectories;
• Step Accuracy: the ratio of strictly step-by-step matched actions/instructions;
• REBP reward: the multi-step, progress-style signal used in REBP [53].
A toy contrast of how these signals respond to an early mistake is sketched after this list.
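The following toy example uses entirely hypothetical sequences of our own, and approximates the REBP reward by a simple prefix match; it illustrates why an order-aware but deviation-tolerant signal behaves differently from the alternatives when a plan makes one early mistake and then recovers.

```python
from functools import lru_cache

def lcs_len(a: tuple, b: tuple) -> int:
    """Memoized form of the Eq. (8) recurrence."""
    @lru_cache(maxsize=None)
    def f(i: int, j: int) -> int:
        if i == 0 or j == 0:
            return 0
        if a[i - 1] == b[j - 1]:
            return f(i - 1, j - 1) + 1
        return max(f(i - 1, j), f(i, j - 1))
    return f(len(a), len(b))

ref  = ("find mug", "pick up mug", "clean mug", "place mug")
pred = ("find mug", "open cabinet",                 # one spurious early step,
        "pick up mug", "clean mug", "place mug")    # then a full recovery

# Strict step-by-step matching: only the first position lines up.
step_acc = sum(p == r for p, r in zip(pred, ref)) / len(ref)   # 0.25

# Prefix-style matching stops at the first mismatch.
k = 0
for p, r in zip(pred, ref):
    if p != r:
        break
    k += 1
prefix_acc = k / len(ref)                                      # 0.25

# LCS credits the recovered suffix while respecting order.
lcs_acc = lcs_len(pred, ref) / len(ref)                        # 1.0
```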
We evaluate on EB-ALFRED with the same settings as the main experiments. To ensure fairness, we fix the backbone model, training data, total update budget, optimization hyperparameters, and the format reward term, and vary only the accuracy reward. The primary metrics are the average success rate and the scores of each sub-suite (with emphasis on long-horizon tasks).

Comparison Results. As summarized in Table 4, using Step Accuracy yields only a slight improvement in the average success rate (from 42.00% to 43.67%), while the long-horizon score decreases (from 26% to 20%). The REBP reward improves the average to 48.33% and raises the long-horizon score from 26% to 34%. In contrast, our LCS reward achieves the largest gains: the average increases from 42.00% to 55.33%, and the long-horizon score from 26% to 50%. These findings demonstrate the suitability of the LCS-based accuracy reward for embodied planning and its advantages over the alternatives under the same training budget.

5 CONCLUSIONS
In this work, we propose RoboGPT-R1, a two-stage training framework for embodied planning. Stage 1 (SFT) equips the model with initial instruction-following and planning priors. Stage 2 (RFT) performs reinforcement fine-tuning with GRPO and an LCS-based, sequence-aware accuracy reward (paired with format constraints), providing dense and verifiable feedback. This design overcomes the limitations of SFT-only behavior cloning, which fails to adequately elicit the reasoning capabilities of VLMs and often undermines in-domain performance when leveraging near-domain data; these shortcomings result in poor performance on long-horizon tasks and brittle out-of-domain generalization. Evaluated on EmbodiedBench, the 3B model trained with our framework surpasses general-purpose VLM baselines including Qwen2.5-VL-72B and GPT-4o, and substantially outperforms other 7B-scale embodied planners, with especially pronounced gains on long-horizon subtasks.

REFERENCES
[1] Alibaba DAMO Academy. 2024. EasyR1: A unified framework for reward modeling and RLHF. https://github.com/alibaba/EasyR1.
[2] Alibaba DAMO Academy. 2024. GRPO: Generalized Reward Preference Optimization for LLM alignment. https://github.com/alibaba/GRPO.
[3] Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Chuyuan Fu, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Daniel Ho, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Eric Jang, Rosario Jauregui Ruano, Kyle Jeffrey, Sally Jesmonth, Nikhil J Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Kuang-Huei Lee, Sergey Levine, Yao Lu, Linda Luu, Carolina Parada, Peter Pastor, Jornell Quiambao, Kanishka Rao, Jarek Rettinghouse, Diego Reyes, Pierre Sermanet, Nicolas Sievers, Clayton Tan, Alexander Toshev, Vincent Vanhoucke, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Mengyuan Yan, and Andy Zeng. 2022. Do As I Can, Not As I Say: Grounding Language in Robotic Affordances. arXiv:2204.01691 [cs.RO] https://arxiv.org/abs/2204.01691
[4] Meta AI. 2024. Llama 3.2: Revolutionizing edge AI and vision with open, customizable models. https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/.
[5] Hao Bai, Yifei Zhou, Mert Cemri, Jiayi Pan, Alane Suhr, Sergey Levine, and Aviral Kumar. 2024. DigiRL: Training In-The-Wild Device-Control Agents with Autonomous Reinforcement Learning. arXiv:2406.11896 [cs.LG] https://arxiv.
org/abs/2406.11896 [6] Zitong Bo, Yue Hu, Jinming Ma, Mingliang Zhou, Junhui Yin, Yachen Kang, Yuqi Liu, Tong Wu, Diyun Xiang, and Hao Chen. 2025. Reinforced Embod- ied Planning with Verifiable Reward for Real-World Robotic Manipulation. arXiv:2509.25852 [cs.RO] https://arxiv.org/abs/2509.25852 [7] Shaoyu Chen, Bo Jiang, Hao Gao, Bencheng Liao, Qing Xu, Qian Zhang, Chang Huang, Wenyu Liu, and Xinggang Wang. 2024. VADv2: End-to-End Vectorized Autonomous Driving via Probabilistic Planning. arXiv:2402.13243 [cs.CV] https://arxiv.org/abs/2402.13243 [8] Yaran Chen, Wenbo Cui, Yuanwen Chen, Mining Tan, Xinyao Zhang, Jinrui Liu, Haoran Li, Dongbin Zhao, and He Wang. 2025. RoboGPT: an LLM-based Long-term Decision-making Embodied Agent for Instruction Following Tasks. IEEE Transactions on Cognitive and Developmental Systems (2025), 1–11. https: //doi.org/10.1109/TCDS.2025.3543364 [9] Zhe Chen, Weiyun Wang, Yue Cao, Yangzhou Liu, Zhangwei Gao, Erfei Cui, Jinguo Zhu, Shenglong Ye, Hao Tian, Zhaoyang Liu, Lixin Gu, Xuehui Wang, Qingyun Li, Yiming Ren, Zixuan Chen, Jiapeng Luo, Jiahao Wang, Tan Jiang, Bo Wang, Conghui He, Botian Shi, Xingcheng Zhang, Han Lv, Yi Wang, Wenqi Shao, Pei Chu, Zhongying Tu, Tong He, Zhiyong Wu, Huipeng Deng, Jiaye Ge, Kai Chen, Kaipeng Zhang, Limin Wang, Min Dou, Lewei Lu, Xizhou Zhu, Tong Lu, Dahua Lin, Yu Qiao, Jifeng Dai, and Wenhai Wang. 2025. Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling. arXiv:2412.05271 [cs.CV] https://arxiv.org/abs/2412.05271 [10] Tianzhe Chu, Yuexiang Zhai, Jihan Yang, Shengbang Tong, Saining Xie, Dale Schuurmans, Quoc V. Le, Sergey Levine, and Yi Ma. 2025. SFT Memorizes, RL Generalizes: A Comparative Study of Foundation Model Post-training. CoRR abs/2501.17161 (2025). [11] Alibaba Cloud. 2024. Qwen-VL-Max: Large Vision-Language Model. https: //github.com/QwenLM/Qwen-VL. [12] Alibaba Cloud. 2025. Qwen2.5-VL Technical Report. arXiv:2502.13923 [cs.CL] https://arxiv.org/abs/2502.13923 [13] Google DeepMind. 2024. Introducing Gemini 2.0: Our new AI model for the agentic era. https://blog.google/technology/google-deepmind/google-gemini-ai- update-december-2024/. [14] DeepSeek-AI, Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, Xiaokang Zhang, Xingkai Yu, Yu Wu, Z. F. Wu, Zhibin Gou, Zhihong Shao, Zhuoshu Li, Ziyi Gao, Aixin Liu, Bing Xue, Bingxuan Wang, Bochao Wu, Bei Feng, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, Damai Dai, Deli Chen, Dongjie Ji, Erhang Li, Fangyun Lin, Fucong Dai, Fuli Luo, Guangbo Hao, Guanting Chen, Guowei Li, H. Zhang, Han Bao, Hanwei Xu, Haocheng Wang, Honghui Ding, Huajian Xin, Huazuo Gao, Hui Qu, Hui Li, Jianzhong Guo, Jiashi Li, Jiawei Wang, Jingchang Chen, Jingyang Yuan, Junjie Qiu, Junlong Li, J. L. Cai, Jiaqi Ni, Jian Liang, Jin Chen, Kai Dong, Kai Hu, Kaige Gao, Kang Guan, Kexin Huang, Kuai Yu, Lean Wang, Lecong Zhang, Liang Zhao, Litong Wang, Liyue Zhang, Lei Xu, Leyi Xia, Mingchuan Zhang, Minghua Zhang, Minghui Tang, Meng Li, Miaojun Wang, Mingming Li, Ning Tian, Panpan Huang, Peng Zhang, Qiancheng Wang, Qinyu Chen, Qiushi Du, Ruiqi Ge, Ruisong Zhang, Ruizhe Pan, Runji Wang, R. J. Chen, R. L. Jin, Ruyi Chen, Shanghao Lu, Shangyan Zhou, Shanhuang Chen, Shengfeng Ye, Shiyu Wang, Shuiping Yu, Shunfeng Zhou, Shuting Pan, and S. S. Li. 2025. DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning. CoRR abs/2501.12948 (2025). 
[15] Jiafei Duan, Samson Yu, Hui Li Tan, Hongyuan Zhu, and Cheston Tan. 2022. A Sur- vey of Embodied AI: From Simulators to Research Tasks. arXiv:2103.04918 [cs.AI] https://arxiv.org/abs/2103.04918 [16] Zane Durante, Qiuyuan Huang, Naoki Wake, Ran Gong, Jae Sung Park, Bidipta Sarkar, Rohan Taori, Yusuke Noda, Demetri Terzopoulos, Yejin Choi, Katsushi Ikeuchi, Hoi Vo, Li Fei-Fei, and Jianfeng Gao. 2024. Agent AI: Surveying the Horizons of Multimodal Interaction. arXiv:2401.03568 [cs.AI] https://arxiv.org/ abs/2401.03568 [17] Zhaoye Fei, Li Ji, Siyin Wang, Junhao Shi, Jingjing Gong, and Xipeng Qiu. 2025. Unleashing Embodied Task Planning Ability in LLMs via Reinforcement Learning. arXiv:2506.23127 [cs.CL] https://arxiv.org/abs/2506.23127 [18] Sicheng Feng, Kaiwen Tuo, Song Wang, Lingdong Kong, Jianke Zhu, and Huan Wang. 2025. RewardMap: Tackling Sparse Rewards in Fine-grained Visual Rea- soning via Multi-Stage Reinforcement Learning. arXiv:2510.02240 [cs.CV] https://arxiv.org/abs/2510.02240 [19] Yunhai Feng, Jiaming Han, Zhuoran Yang, Xiangyu Yue, Sergey Levine, and Jianlan Luo. 2025. Reflective Planning: Vision-Language Models for Multi-Stage Long-Horizon Robotic Manipulation. arXiv:2502.16707 [cs.RO] https://arxiv. org/abs/2502.16707 [20] Yuqian Fu, Tinghong Chen, Jiajun Chai, Xihuai Wang, Songjun Tu, Guojun Yin, Wei Lin, Qichao Zhang, Yuanheng Zhu, and Dongbin Zhao. 2025. SRFT: A Single- Stage Method with Supervised and Reinforcement Fine-Tuning for Reasoning. arXiv:2506.19767 [cs.CL] https://arxiv.org/abs/2506.19767 [21] Hiyouga. 2023. LLaMA Factory: Open-source instruction tuning framework for LLMs. https://github.com/hiyouga/LLaMA-Factory. [22] Wenlong Huang, Fei Xia, Dhruv Shah, Danny Driess, Andy Zeng, Yao Lu, Pete Florence, Igor Mordatch, Sergey Levine, Karol Hausman, and Brian Ichter. 2023. Grounded decoding: guiding text generation with grounded models for embodied agents. In Proceedings of the 37th International Conference on Neural Information Processing Systems (New Orleans, LA, USA) (NIPS ’23). Curran Associates Inc., Red Hook, NY, USA, Article 2606, 26 pages. [23] Yuheng Ji, Huajie Tan, Jiayu Shi, Xiaoshuai Hao, Yuan Zhang, Hengyuan Zhang, Pengwei Wang, Mengdi Zhao, Yao Mu, Pengju An, Xinda Xue, Qinghang Su, Huaihai Lyu, Xiaolong Zheng, Jiaming Liu, Zhongyuan Wang, and Shanghang Zhang. 2025. RoboBrain: A Unified Brain Model for Robotic Manipulation from Abstract to Concrete. arXiv:2502.21257 [cs.RO] https://arxiv.org/abs/2502.21257 [24] Bo Jiang, Shaoyu Chen, Qian Zhang, Wenyu Liu, and Xinggang Wang. 2025. AlphaDrive: Unleashing the Power of VLMs in Autonomous Driving via Rein- forcement Learning and Reasoning. CoRR abs/2503.07608 (2025). [25] Eric Kolve, Roozbeh Mottaghi, Winson Han, Eli VanderBilt, Luca Weihs, Alvaro Herrasti, Matt Deitke, Kiana Ehsani, Daniel Gordon, Yuke Zhu, Aniruddha Kem- bhavi, Abhinav Gupta, and Ali Farhadi. 2022. AI2-THOR: An Interactive 3D Envi- ronment for Visual AI. arXiv:1712.05474 [cs.CV] https://arxiv.org/abs/1712.05474 [26] Xinhao Li, Ziang Yan, Desen Meng, Lu Dong, Xiangyu Zeng, Yinan He, Yali Wang, Yu Qiao, Yi Wang, and Limin Wang. 2025. VideoChat-R1: Enhancing Spatio- Temporal Perception via Reinforcement Fine-Tuning. arXiv:2504.06958 [cs.CV] https://arxiv.org/abs/2504.06958 [27] Wenlong Liang, Rui Zhou, Yang Ma, Bing Zhang, Songlin Li, Yijia Liao, and Ping Kuang. 2025. Large Model Empowered Embodied AI: A Survey on Decision- Making and Embodied Learning. CoRR abs/2508.10399 (2025). 
[28] Yang Liu, Weixing Chen, Yongjie Bai, Xiaodan Liang, Guanbin Li, Wen Gao, and Liang Lin. 2025. Aligning Cyber Space with Physical World: A Comprehensive Survey on Embodied AI. arXiv:2407.06886 [cs.CV] https://arxiv.org/abs/2407. 06886 [29] Ziyu Liu, Zeyi Sun, Yuhang Zang, Xiaoyi Dong, Yuhang Cao, Haodong Duan, Dahua Lin, and Jiaqi Wang. 2025. Visual-RFT: Visual Reinforcement Fine-Tuning. CoRR abs/2503.01785 (2025). [30] Fanqing Meng, Lingxiao Du, Zongkai Liu, Zhixiang Zhou, Quanfeng Lu, Daocheng Fu, Tiancheng Han, Botian Shi, Wenhai Wang, Junjun He, Kaipeng Zhang, Ping Luo, Yu Qiao, Qiaosheng Zhang, and Wenqi Shao. 2025. MM-Eureka: Exploring the Frontiers of Multimodal Reasoning with Rule-based Reinforcement Learning. arXiv:2503.07365 [cs.CV] https://arxiv.org/abs/2503.07365 [31] Youssef Mroueh. 2025. Reinforcement Learning with Verifiable Rewards: GRPO’s Effective Loss, Dynamics, and Success Amplification. arXiv:2503.06639 [cs.LG] https://arxiv.org/abs/2503.06639 [32] OpenAI, :, Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, Alex Iftimie, Alex Karpenko, Alex Tachard Passos, Alexander Neitz, Alexan- der Prokofiev, Alexander Wei, Allison Tam, Ally Bennett, Ananya Kumar, An- dre Saraiva, Andrea Vallone, Andrew Duberstein, Andrew Kondrich, Andrey Mishchenko, Andy Applebaum, Angela Jiang, Ashvin Nair, Barret Zoph, Behrooz Ghorbani, Ben Rossen, Benjamin Sokolowsky, Boaz Barak, Bob McGrew, Bo- rys Minaiev, Botao Hao, Bowen Baker, Brandon Houghton, Brandon McKinzie, Brydon Eastman, Camillo Lugaresi, Cary Bassin, Cary Hudson, Chak Ming Li, Charles de Bourcy, Chelsea Voss, Chen Shen, Chong Zhang, Chris Koch, Chris Orsinger, Christopher Hesse, Claudia Fischer, Clive Chan, Dan Roberts, Daniel Kappler, Daniel Levy, Daniel Selsam, David Dohan, David Farhi, David Mely, David Robinson, Dimitris Tsipras, Doug Li, Dragos Oprica, Eben Freeman, Ed- die Zhang, Edmund Wong, Elizabeth Proehl, Enoch Cheung, Eric Mitchell, Eric Wallace, Erik Ritter, Evan Mays, Fan Wang, Felipe Petroski Such, Filippo Raso, Florencia Leoni, Foivos Tsimpourlas, Francis Song, Fred von Lohmann, Freddie Sulit, Geoff Salmon, Giambattista Parascandolo, Gildas Chabot, Grace Zhao, Greg Brockman, Guillaume Leclerc, Hadi Salman, Haiming Bao, Hao Sheng, Hart Andrin, Hessam Bagherinezhad, Hongyu Ren, Hunter Lightman, Hyung Won Chung, Ian Kivlichan, Ian O’Connell, Ian Osband, Ignasi Clavera Gilaberte, Ilge Akkaya, Ilya Kostrikov, Ilya Sutskever, Irina Kofman, Jakub Pachocki, James Lennon, Jason Wei, Jean Harb, Jerry Twore, Jiacheng Feng, Jiahui Yu, Jiayi Weng, Jie Tang, Jieqi Yu, Joaquin Quiñonero Candela, Joe Palermo, Joel Parish, Jo- hannes Heidecke, John Hallman, John Rizzo, Jonathan Gordon, Jonathan Uesato, Jonathan Ward, Joost Huizinga, Julie Wang, Kai Chen, Kai Xiao, Karan Singhal, Karina Nguyen, Karl Cobbe, Katy Shi, Kayla Wood, Kendra Rimbach, Keren Gu-Lemberg, Kevin Liu, Kevin Lu, Kevin Stone, Kevin Yu, Lama Ahmad, Lau- ren Yang, Leo Liu, Leon Maksin, Leyton Ho, Liam Fedus, Lilian Weng, Linden Li, Lindsay McCallum, Lindsey Held, Lorenz Kuhn, Lukas Kondraciuk, Lukasz Kaiser, Luke Metz, Madelaine Boyd, Maja Trebacz, Manas Joglekar, Mark Chen, Marko Tintor, Mason Meyer, Matt Jones, Matt Kaufer, Max Schwarzer, Meghan Shah, Mehmet Yatbaz, Melody Y. 
Guan, Mengyuan Xu, Mengyuan Yan, Mia Glaese, Mianna Chen, Michael Lampe, Michael Malek, Michele Wang, Michelle Fradin, Mike McClay, Mikhail Pavlov, Miles Wang, Mingxuan Wang, Mira Mu- rati, Mo Bavarian, Mostafa Rohaninejad, Nat McAleese, Neil Chowdhury, Neil Chowdhury, Nick Ryder, Nikolas Tezak, Noam Brown, Ofir Nachum, Oleg Boiko, Oleg Murk, Olivia Watkins, Patrick Chao, Paul Ashbourne, Pavel Izmailov, Pe- ter Zhokhov, Rachel Dias, Rahul Arora, Randall Lin, Rapha Gontijo Lopes, Raz Gaon, Reah Miyara, Reimar Leike, Renny Hwang, Rhythm Garg, Robin Brown, Roshan James, Rui Shu, Ryan Cheu, Ryan Greene, Saachi Jain, Sam Altman, Sam Toizer, Sam Toyer, Samuel Miserendino, Sandhini Agarwal, Santiago Hernandez, Sasha Baker, Scott McKinney, Scottie Yan, Shengjia Zhao, Shengli Hu, Shibani Santurkar, Shraman Ray Chaudhuri, Shuyuan Zhang, Siyuan Fu, Spencer Pa- pay, Steph Lin, Suchir Balaji, Suvansh Sanjeev, Szymon Sidor, Tal Broda, Aidan Clark, Tao Wang, Taylor Gordon, Ted Sanders, Tejal Patwardhan, Thibault Sot- tiaux, Thomas Degry, Thomas Dimson, Tianhao Zheng, Timur Garipov, Tom Stasi, Trapit Bansal, Trevor Creech, Troy Peterson, Tyna Eloundou, Valerie Qi, Vineet Kosaraju, Vinnie Monaco, Vitchyr Pong, Vlad Fomenko, Weiyi Zheng, Wenda Zhou, Wes McCabe, Wojciech Zaremba, Yann Dubois, Yinghai Lu, Yining Chen, Young Cha, Yu Bai, Yuchen He, Yuchen Zhang, Yunyun Wang, Zheng Shao, and Zhuohan Li. 2024. OpenAI o1 System Card. arXiv:2412.16720 [cs.AI] https://arxiv.org/abs/2412.16720 [33] OpenAI. 2024. GPT-4o mini: Advancing cost-efficient intelligence. https://openai. com/index/gpt-4o-mini-advancing-cost-efficient-intelligence/. [34] OpenAI. 2024. Hello GPT-4o. https://openai.com/index/hello-gpt-4o/. [35] OpenAI. 2025. Introducing GPT-4.1 in the API. https://openai.com/index/gpt-4- 1/. [36] Manolis Savva, Abhishek Kadian, Oleksandr Maksymets, Yili Zhao, Erik Wij- mans, Bhavana Jain, Julian Straub, Jia Liu, Vladlen Koltun, Jitendra Malik, Devi Parikh, and Dhruv Batra. 2019. Habitat: A Platform for Embodied AI Research. arXiv:1904.01201 [cs.CV] https://arxiv.org/abs/1904.01201 [37] Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, Y. K. Li, Y. Wu, and Daya Guo. 2024. DeepSeek- Math: Pushing the Limits of Mathematical Reasoning in Open Language Models. arXiv:2402.03300 [cs.CL] https://arxiv.org/abs/2402.03300 [38] Haozhan Shen, Peng Liu, Jingcheng Li, Chunxin Fang, Yibo Ma, Jiajia Liao, Qiaoli Shen, Zilun Zhang, Kangjia Zhao, Qianqian Zhang, Ruochen Xu, and Tiancheng Zhao. 2025. VLM-R1: A Stable and Generalizable R1-style Large Vision-Language Model. CoRR abs/2504.07615 (2025). [39] Lucy Xiaoyang Shi, Brian Ichter, Michael Equi, Liyiming Ke, Karl Pertsch, Quan Vuong, James Tanner, Anna Walling, Haohuan Wang, Niccolo Fusai, Adrian Li-Bell, Danny Driess, Lachy Groom, Sergey Levine, and Chelsea Finn. 2025. Hi Robot: Open-Ended Instruction Following with Hierarchical Vision-Language- Action Models. arXiv:2502.19417 [cs.RO] https://arxiv.org/abs/2502.19417 [40] Suyeon Shin, Sujin jeon, Junghyun Kim, Gi-Cheon Kang, and Byoung-Tak Zhang. 2025. Socratic Planner: Self-QA-Based Zero-Shot Planning for Embodied Instruc- tion Following. arXiv:2404.15190 [cs.AI] https://arxiv.org/abs/2404.15190 [41] Mohit Shridhar, Jesse Thomason, Daniel Gordon, Yonatan Bisk, Winson Han, Roozbeh Mottaghi, Luke Zettlemoyer, and Dieter Fox. 2020. ALFRED: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks. 
arXiv:1912.01734 [cs.CV] https://arxiv.org/abs/1912.01734 [42] Chan Hee Song, Jiaman Wu, Clayton Washington, Brian M. Sadler, Wei-Lun Chao, and Yu Su. 2023. LLM-Planner: Few-Shot Grounded Planning for Embodied Agents with Large Language Models. arXiv:2212.04088 [cs.AI] https://arxiv.org/ abs/2212.04088 [43] Gemma Team, Aishwarya Kamath, Johan Ferret, Shreya Pathak, Nino Vieil- lard, Ramona Merhej, Sarah Perrin, Tatiana Matejovicova, Alexandre Ramé, Morgane Rivière, Louis Rouillard, Thomas Mesnard, Geoffrey Cideron, Jean bastien Grill, Sabela Ramos, Edouard Yvinec, Michelle Casbon, Etienne Pot, Ivo Penchev, Gaël Liu, Francesco Visin, Kathleen Kenealy, Lucas Beyer, Xiaohai Zhai, Anton Tsitsulin, Robert Busa-Fekete, Alex Feng, Noveen Sachdeva, Ben- jamin Coleman, Yi Gao, Basil Mustafa, Iain Barr, Emilio Parisotto, David Tian, Matan Eyal, Colin Cherry, Jan-Thorsten Peter, Danila Sinopalnikov, Surya Bhu- patiraju, Rishabh Agarwal, Mehran Kazemi, Dan Malkin, Ravin Kumar, David Vilar, Idan Brusilovsky, Jiaming Luo, Andreas Steiner, Abe Friesen, Abhanshu Sharma, Abheesht Sharma, Adi Mayrav Gilady, Adrian Goedeckemeyer, Alaa Saade, Alex Feng, Alexander Kolesnikov, Alexei Bendebury, Alvin Abdagic, Amit Vadi, András György, André Susano Pinto, Anil Das, Ankur Bapna, Antoine Miech, Antoine Yang, Antonia Paterson, Ashish Shenoy, Ayan Chakrabarti, Bilal Piot, Bo Wu, Bobak Shahriari, Bryce Petrini, Charlie Chen, Charline Le Lan, Christopher A. Choquette-Choo, CJ Carey, Cormac Brick, Daniel Deutsch, Danielle Eisenbud, Dee Cattle, Derek Cheng, Dimitris Paparas, Divyashree Shiv- akumar Sreepathihalli, Doug Reid, Dustin Tran, Dustin Zelle, Eric Noland, Er- win Huizenga, Eugene Kharitonov, Frederick Liu, Gagik Amirkhanyan, Glenn Cameron, Hadi Hashemi, Hanna Klimczak-Plucińska, Harman Singh, Harsh Mehta, Harshal Tushar Lehri, Hussein Hazimeh, Ian Ballantyne, Idan Szpek- tor, Ivan Nardini, Jean Pouget-Abadie, Jetha Chan, Joe Stanton, John Wieting, Jonathan Lai, Jordi Orbay, Joseph Fernandez, Josh Newlan, Ju yeong Ji, Jyotinder Singh, Kat Black, Kathy Yu, Kevin Hui, Kiran Vodrahalli, Klaus Greff, Linhai Qiu, Marcella Valentine, Marina Coelho, Marvin Ritter, Matt Hoffman, Matthew Watson, Mayank Chaturvedi, Michael Moynihan, Min Ma, Nabila Babar, Natasha Noy, Nathan Byrd, Nick Roy, Nikola Momchev, Nilay Chauhan, Noveen Sachdeva, Oskar Bunyan, Pankil Botarda, Paul Caron, Paul Kishan Rubenstein, Phil Culli- ton, Philipp Schmid, Pier Giuseppe Sessa, Pingmei Xu, Piotr Stanczyk, Pouya Tafti, Rakesh Shivanna, Renjie Wu, Renke Pan, Reza Rokni, Rob Willoughby, Rohith Vallu, Ryan Mullins, Sammy Jerome, Sara Smoot, Sertan Girgin, Shariq Iqbal, Shashir Reddy, Shruti Sheth, Siim Põder, Sijal Bhatnagar, Sindhu Raghuram Panyam, Sivan Eiger, Susan Zhang, Tianqi Liu, Trevor Yacovone, Tyler Liechty, Uday Kalra, Utku Evci, Vedant Misra, Vincent Roseberry, Vlad Feinberg, Vlad Kolesnikov, Woohyun Han, Woosuk Kwon, Xi Chen, Yinlam Chow, Yuvein Zhu, Zichuan Wei, Zoltan Egyed, Victor Cotruta, Minh Giang, Phoebe Kirk, Anand Rao, Kat Black, Nabila Babar, Jessica Lo, Erica Moreira, Luiz Gustavo Martins, Omar Sanseviero, Lucas Gonzalez, Zach Gleicher, Tris Warkentin, Vahab Mirrokni, Evan Senter, Eli Collins, Joelle Barral, Zoubin Ghahramani, Raia Hadsell, Yossi Matias, D. 
Sculley, Slav Petrov, Noah Fiedel, Noam Shazeer, Oriol Vinyals, Jeff Dean, Demis Hassabis, Koray Kavukcuoglu, Clement Farabet, Elena Buchatskaya, Jean-Baptiste Alayrac, Rohan Anil, Dmitry, Lepikhin, Sebastian Borgeaud, Olivier Bachem, Armand Joulin, Alek Andreev, Cassidy Hardin, Robert Dadashi, and Léonard Hussenot. 2025. Gemma 3 Technical Report. arXiv:2503.19786 [cs.CL] https://arxiv.org/abs/2503.19786 [44] Kimi Team, Angang Du, Bofei Gao, Bowei Xing, Changjiu Jiang, Cheng Chen, Cheng Li, Chenjun Xiao, Chenzhuang Du, Chonghua Liao, Chuning Tang, Cong- cong Wang, Dehao Zhang, Enming Yuan, Enzhe Lu, Fengxiang Tang, Flood Sung, Guangda Wei, Guokun Lai, Haiqing Guo, Han Zhu, Hao Ding, Hao Hu, Hao Yang, Hao Zhang, Haotian Yao, Haotian Zhao, Haoyu Lu, Haoze Li, Haozhen Yu, Hongcheng Gao, Huabin Zheng, Huan Yuan, Jia Chen, Jianhang Guo, Jianlin Su, Jianzhou Wang, Jie Zhao, Jin Zhang, Jingyuan Liu, Junjie Yan, Junyan Wu, Lidong Shi, Ling Ye, Longhui Yu, Mengnan Dong, Neo Zhang, Ningchen Ma, Qiwei Pan, Qucheng Gong, Shaowei Liu, Shengling Ma, Shupeng Wei, Sihan Cao, Siying Huang, Tao Jiang, Weihao Gao, Weimin Xiong, Weiran He, Weix- iao Huang, Weixin Xu, Wenhao Wu, Wenyang He, Xianghui Wei, Xianqing Jia, Xingzhe Wu, Xinran Xu, Xinxing Zu, Xinyu Zhou, Xuehai Pan, Y. Charles, Yang Li, Yangyang Hu, Yangyang Liu, Yanru Chen, Yejie Wang, Yibo Liu, Yidao Qin, Yifeng Liu, Ying Yang, Yiping Bao, Yulun Du, Yuxin Wu, Yuzhi Wang, Zaida Zhou, Zhaoji Wang, Zhaowei Li, Zhen Zhu, Zheng Zhang, Zhexu Wang, Zhilin Yang, Zhiqi Huang, Zihao Huang, Ziyao Xu, Zonghan Yang, and Zongyu Lin. 2025. Kimi k1.5: Scaling Reinforcement Learning with LLMs. arXiv:2501.12599 [cs.AI] https://arxiv.org/abs/2501.12599 [45] Songjun Tu, Jiahao Lin, Xiangyu Tian, Qichao Zhang, Linjing Li, Yuqian Fu, Nan Xu, Wei He, Xiangyuan Lan, Dongmei Jiang, and Dongbin Zhao. 2025. Enhancing LLM Reasoning with Iterative DPO: A Comprehensive Empirical Investigation. arXiv:2503.12854 [cs.CL] https://arxiv.org/abs/2503.12854 [46] Songjun Tu, Jiahao Lin, Qichao Zhang, Xiangyu Tian, Linjing Li, Xiangyuan Lan, and Dongbin Zhao. 2025. Learning When to Think: Shaping Adaptive Reasoning in R1-Style Models via Multi-Stage RL. arXiv:2505.10832 [cs.CL] https://arxiv.org/abs/2505.10832 [47] Songjun Tu, Jingbo Sun, Qichao Zhang, Xiangyuan Lan, and Dongbin Zhao. 2024. Online Preference-based Reinforcement Learning with Self-augmented Feedback from Large Language Model. arXiv:2412.16878 [cs.LG] https://arxiv.org/abs/ 2412.16878 [48] Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. 2023. Voyager: An Open-Ended Embodied Agent with Large Language Models. arXiv:2305.16291 [cs.AI] https://arxiv.org/ abs/2305.16291 [49] Hongcheng Wang, Yinuo Huang, Sukai Wang, Guanghui Ren, and Hao Dong. 2025. GRPO-MA: Multi-Answer Generation in GRPO for Stable and Efficient Chain-of-Thought Training. arXiv:2509.24494 [cs.CL] https://arxiv.org/abs/2509. 24494 [50] Ke Wang, Junting Pan, Weikang Shi, Zimu Lu, Mingjie Zhan, and Hongsheng Li. 2024. Measuring Multimodal Mathematical Reasoning with MATH-Vision Dataset. arXiv:2402.14804 [51] Ruoyao Wang, Peter Jansen, Marc-Alexandre Côté, and Prithviraj Am- manabrolu. 2022. ScienceWorld: Is your Agent Smarter than a 5th Grader? arXiv:2203.07540 [cs.CL] https://arxiv.org/abs/2203.07540 [52] Yuxiang Wei, Olivier Duchenne, Jade Copet, Quentin Carbonneaux, Lingming Zhang, Daniel Fried, Gabriel Synnaeve, Rishabh Singh, and Sida I. Wang. 2025. 
SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution. CoRR abs/2502.18449 (2025).
[53] Di Wu, Jiaxin Fan, Junzhe Zang, Guanbo Wang, Wei Yin, Wenhao Li, and Bo Jin. 2025. Reinforced Reasoning for Embodied Planning. CoRR abs/2505.22050 (2025).
[54] Zhenyu Wu, Ziwei Wang, Xiuwei Xu, Jiwen Lu, and Haibin Yan. 2023. Embodied Task Planning with Large Language Models. CoRR abs/2307.01848 (2023).
[55] Zhenyu Wu, Ziwei Wang, Xiuwei Xu, Jiwen Lu, and Haibin Yan. 2023. Embodied Task Planning with Large Language Models. arXiv:2307.01848 [cs.CV] https://arxiv.org/abs/2307.01848
[56] Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, and Tao Gui. 2023. The Rise and Potential of Large Language Model Based Agents: A Survey. arXiv:2309.07864 [cs.AI] https://arxiv.org/abs/2309.07864
[57] Zhiyuan Xu, Kun Wu, Junjie Wen, Jinming Li, Ning Liu, Zhengping Che, and Jian Tang. 2024. A Survey on Robotics with Foundation Models: toward Embodied AI. arXiv:2402.02385 [cs.RO] https://arxiv.org/abs/2402.02385
[58] Rui Yang, Hanyang Chen, Junyu Zhang, Mark Zhao, Cheng Qian, Kangrui Wang, Qineng Wang, Teja Venkat Koripella, Marziyeh Movahedi, Manling Li, Heng Ji, Huan Zhang, and Tong Zhang. 2025. EmbodiedBench: Comprehensive Benchmarking Multi-modal Large Language Models for Vision-Driven Embodied Agents. arXiv:2502.09560 [cs.AI] https://arxiv.org/abs/2502.09560
[59] Edward Yeo, Yuxuan Tong, Morry Niu, Graham Neubig, and Xiang Yue. 2025. Demystifying Long Chain-of-Thought Reasoning in LLMs. arXiv:2502.03373 [cs.CL] https://arxiv.org/abs/2502.03373
[60] Michał Zawalski, William Chen, Karl Pertsch, Oier Mees, Chelsea Finn, and Sergey Levine. 2025. Robotic Control via Embodied Chain-of-Thought Reasoning. arXiv:2407.08693 [cs.RO] https://arxiv.org/abs/2407.08693
[61] Aohan Zeng, Mingdao Liu, Rui Lu, Bowen Wang, Xiao Liu, Yuxiao Dong, and Jie Tang. 2023. AgentTuning: Enabling Generalized Agent Abilities for LLMs. arXiv:2310.12823
[62] Huaye Zeng, Dongfu Jiang, Haozhe Wang, Ping Nie, Xiaotong Chen, and Wenhu Chen. 2025. ACECODER: Acing Coder RL via Automated Test-Case Synthesis. arXiv:2502.01718 [cs.SE] https://arxiv.org/abs/2502.01718
[63] Jingyi Zhang, Jiaxing Huang, Sheng Jin, and Shijian Lu. 2024. Vision-Language Models for Vision Tasks: A Survey. arXiv:2304.00685 [cs.CV] https://arxiv.org/abs/2304.00685
[64] Jingyi Zhang, Jiaxing Huang, Huanjin Yao, Shunyu Liu, Xikun Zhang, Shijian Lu, and Dacheng Tao. 2025. R1-VL: Learning to Reason with Multimodal Large Language Models via Step-wise Group Relative Policy Optimization. arXiv:2503.12937 [cs.AI] https://arxiv.org/abs/2503.12937
[65] Wenqi Zhang, Mengna Wang, Gangao Liu, Xu Huixin, Yiwei Jiang, Yongliang Shen, Guiyang Hou, Zhe Zheng, Hang Zhang, Xin Li, Weiming Lu, Peng Li, and Yueting Zhuang. 2025. Embodied-Reasoner: Synergizing Visual Search, Reasoning, and Action for Embodied Interactive Tasks. arXiv:2503.21696 [cs.CL] https://arxiv.org/abs/2503.21696
[66] Zijing Zhang, Ziyang Chen, Mingxiao Li, Zhaopeng Tu, and Xiaolong Li. 2025. RLVMR: Reinforcement Learning with Verifiable Meta-Reasoning Rewards for Robust Long-Horizon Agents. CoRR abs/2507.22844 (2025).
Appendix A EXPERIMENTAL DETAILS
A.1 SFT Details
In all experiments conducted in this paper, the hyperparameter settings for the supervised fine-tuning (SFT) phase strictly adhere to the configurations listed in Table 5, using 8x Ascend 910B3 64GB NPUs as computational devices. We empirically set the parameter num_train_epochs to 2 in all SFT experiments to ensure that the model learns effectively without severe overfitting.

Table 5: SFT hyperparameter settings used in our experiments.

image_max_pixels              262144
video_max_pixels              16384
trust_remote_code             true
stage                         sft
finetuning_type               full
freeze_vision_tower           true
freeze_multi_modal_projector  true
freeze_language_model         false
deepspeed                     ds_z2_config.json
ddp_timeout                   180000000
template                      qwen2_vl
cutoff_len                    9216
max_samples                   50000
overwrite_cache               true
per_device_train_batch_size   1
gradient_accumulation_steps   2
learning_rate                 0.00001
num_train_epochs              2.0
lr_scheduler_type             cosine
warmup_ratio                  0.1
bf16                          true
nproc_per_node                8

A.2 RFT Details
The hyperparameter settings remained consistent across all experiments during the RFT phase, as listed in Table 6. The RFT phase utilized 4x NVIDIA H20 96GB GPUs as computational devices. For the sake of fairness in comparison, the iteration count for the RFT phase was uniformly set to 80 steps.

B DATASET DETAILS
B.1 Embodied Planning Data Processing
The embodied planning task data used in our constructed dataset was primarily processed from two datasets released by REBP. We performed the following data processing steps. First, we cleaned and removed data from the original REBP dataset that contained obvious errors, such as incomplete responses or abnormally truncated content. Next, we addressed the inconsistent ordering of the four key-value pairs within the response content. Following the logical sequence of "Perception → Reasoning → Providing Answers → Formatting Output", we standardized the order of key-value pairs to: "visual_state_description", "reasoning_and_reflection", "language_plan", and "executable_plan" (a sketch of this reordering step is given at the end of this appendix). Additionally, the original data input contains a large amount of example information, which consumes numerous tokens. We removed all example prompts to align the data to a zero-shot state. This significantly reduces the data length, shortening each sample by approximately one-third, which in turn shortens model training time and reduces computing resource consumption during training.

B.2 Composition of the Base and Aug Datasets
Base. The Base dataset primarily consists of the REBP open-source SFT embodied planning data processed as described in the previous section, supplemented by a small amount of data extracted from the RoboVQA and MATH-Vision datasets. The final Base dataset contains over 5,000 samples, comprising over 4,000 embodied planning samples and over 1,000 samples for other tasks. The inclusion of other tasks aims to prevent the model from overfitting to embodied planning, which could compromise its inherent multimodal perception and reasoning capabilities.
Aug. The core of the Aug dataset consists of over 40,000 entries processed from the REBP open-source RFT dataset following the methodology described in the previous section. Additionally, it incorporates all 4,000+ embodied planning samples from the Base dataset to prevent catastrophic forgetting during the RFT phase. The final Aug dataset contains over 45,000 samples.
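To make the key-standardization step of B.1 concrete, the following is a minimal Python sketch. It assumes each response has already been parsed into a dict with the four fields named above; the helper name reorder_response and the toy example values are our own illustration, not the actual processing script.

import json

# Canonical order following "Perception -> Reasoning ->
# Providing Answers -> Formatting Output" (Appendix B.1).
KEY_ORDER = [
    "visual_state_description",
    "reasoning_and_reflection",
    "language_plan",
    "executable_plan",
]

def reorder_response(response: dict) -> dict:
    # Python dicts preserve insertion order, so rebuilding the dict
    # in KEY_ORDER yields the standardized key sequence.
    ordered = {k: response[k] for k in KEY_ORDER if k in response}
    # Keep any unexpected extra fields at the end rather than dropping them.
    ordered.update({k: v for k, v in response.items() if k not in ordered})
    return ordered

# Toy response with keys in an inconsistent order (hypothetical values).
raw = {
    "language_plan": "1. find apple 2. pick up apple 3. put apple on table",
    "visual_state_description": "An apple is on the counter.",
    "executable_plan": [{"action_id": 3, "action_name": "pick up apple"}],
    "reasoning_and_reflection": "The apple must be picked up before placing.",
}
print(json.dumps(reorder_response(raw), indent=2))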
C EVALUATION DETAILS
When evaluating models in EmbodiedBench, the number of input examples during evaluation can be controlled via the "n_shots" parameter, with a default maximum setting of 10. As described in the previous section, we align all training data to the 0-shot state during the processing phase.

Table 6: RFT hyperparameter settings used in our experiments.

data.video_fps                                           2.0
data.min_pixels                                          40000
data.max_pixels                                          4194304
data.max_prompt_length                                   6144
data.max_response_length                                 3072
data.train_batch_size                                    512
data.shuffle                                             true
actor_rollout_ref.actor.ppo_mini_batch_size              128
actor_rollout_ref.actor.ppo_micro_batch_size_per_gpu     1
actor_rollout_ref.actor.use_dynamic_bsz                  true
actor_rollout_ref.actor.ppo_max_token_len_per_gpu        16384
actor_rollout_ref.actor.grad_clip                        1.0
actor_rollout_ref.actor.clip_ratio                       0.2
actor_rollout_ref.actor.entropy_coeff                    0.0
actor_rollout_ref.actor.use_kl_loss                      true
actor_rollout_ref.actor.kl_loss_type                     low_var_kl
actor_rollout_ref.actor.kl_loss_coef                     1.0e-2
actor_rollout_ref.actor.ppo_epochs                       1
actor_rollout_ref.actor.shuffle                          false
actor_rollout_ref.actor.ulysses_sequence_parallel_size   1
actor_rollout_ref.actor.optim.lr                         1.0e-6
actor_rollout_ref.actor.optim.lr_warmup_steps            -1
actor_rollout_ref.actor.optim.lr_warmup_steps_ratio      0.0
actor_rollout_ref.actor.optim.lr_scheduler_type          constant
actor_rollout_ref.actor.optim.total_training_steps       -1
actor_rollout_ref.actor.optim.weight_decay               1.0e-2
actor_rollout_ref.ref.log_prob_micro_batch_size_per_gpu  2
actor_rollout_ref.ref.log_prob_use_dynamic_bsz           true
actor_rollout_ref.ref.log_prob_max_token_len_per_gpu     16384
actor_rollout_ref.ref.ulysses_sequence_parallel_size     1
actor_rollout_ref.rollout.name                           vllm
actor_rollout_ref.rollout.temperature                    1.0
actor_rollout_ref.rollout.top_k                          -1
actor_rollout_ref.rollout.top_p                          1.0
actor_rollout_ref.rollout.n                              5
actor_rollout_ref.rollout.response_length                3072
actor_rollout_ref.rollout.gpu_memory_utilization         0.6
actor_rollout_ref.rollout.ignore_eos                     false
actor_rollout_ref.rollout.enforce_eager                  false
actor_rollout_ref.rollout.tensor_model_parallel_size     4
actor_rollout_ref.rollout.max_num_batched_tokens         12288
actor_rollout_ref.rollout.max_num_seqs                   1024
algorithm.gamma                                          1.0
algorithm.lam                                            1.0
algorithm.adv_estimator                                  grpo
algorithm.use_kl_in_reward                               false
algorithm.kl_penalty                                     low_var_kl
algorithm.kl_ctrl.type                                   fixed
algorithm.kl_ctrl.kl_coef                                0.001
algorithm.kl_ctrl.horizon                                10000
algorithm.kl_ctrl.target_kl                              0.1
algorithm.rollout_is                                     false
trainer.nnodes                                           1
trainer.n_gpus_per_node                                  4

Therefore, to maintain consistency with the training process, the final evaluation results were obtained using the setting "n_shots=0" when testing our model. When testing general models, our preliminary experiments revealed that the n-shot strategy is indispensable for general models to complete the EmbodiedBench tests. Furthermore, the number of examples provided significantly affects the performance of general models, as shown in Table 7. When no examples are provided at all (i.e., in 0-shot scenarios), both the Qwen and GPT series models are completely unable to perform embodied planning within EB-ALFRED. Providing just 1 example versus 10 examples also results in a significant gap in test performance.
To ensure a conservative comparison, all results from the general models directly tested in this paper within the main results shown in Table 1 were obtained under the setting "n_shots=10", i.e., using test results from the optimal performance configuration. This setting guarantees that the experimental results provide a conservative estimate of our method's performance advantage. All other test parameters use the default settings of EmbodiedBench.

Table 7: Impact of the n-shot strategy on the performance of general models on EB-ALFRED.

Model                  0-shot   1-shot   10-shot
GPT-4.1                0        46.00    64.67
GPT-4o                 0        34.33    51.67
GPT-4o-mini            0        0        24.00
Qwen2.5-VL-72B-Ins.    0        35.00    43.67
Qwen2.5-VL-7B-Ins.     0        0.33     2.67
Qwen2.5-VL-3B-Ins.     0        0.67     1.33
RoboGPT-R1: Enhancing Robot Planning with Reinforcement Learning

Jinrui Liu∗
Shunsen He
Huawei Cloud Technology Co., Ltd, China
Haoran Li†

ABSTRACT
Despite the success of large language models and vision language models based on Supervised Fine-Tuning (SFT) in planning tasks, they continue to face challenges in performing long-horizon manipulation tasks in complex real-world environments, owing to their restricted common sense and reasoning capabilities. Considering that aligning general-purpose vision language models to robotic planning tasks via supervised fine-tuning suffers from poor generalization and insufficient physical understanding, we propose RoboGPT-R1, a two-stage fine-tuning framework for embodied planning. In this framework, supervised training acquires foundational knowledge through expert sequences, followed by RL to address the model's shortcomings in visual-spatial understanding and reasoning. To achieve physical understanding and action sequence consistency in multi-step reasoning tasks, we design a rule-based reward function that simultaneously considers long-horizon performance and action constraints in the environment. The reasoning model, trained on Qwen2.5-VL-3B, significantly outperforms the larger-scale model GPT-4o-mini by 21.33% and surpasses other work trained on Qwen2.5-VL-7B by 20.33% on the EmbodiedBench benchmark.

KEYWORDS
Robot Task Planning, Reasoning Planning, Reinforcement Learning, Vision-Language-Model

∗Equal contribution.
†Corresponding author.

1 INTRODUCTION
Recently, vision language models (VLMs) have been increasingly employed as high-level planners for embodied tasks [6, 18, 24], given their emerging capability to decompose natural language instructions into long-horizon robotic action sequences. Nevertheless, in real-world environments, VLMs still fail to meet the demands of robustness and generalization [6, 7, 66]. Two major challenges remain unsolved. First, the prevailing supervised fine-tuning (SFT)-only paradigm primarily imitates expert demonstrations, yet lacks mechanisms for adaptation or self-correction in dynamic environments [10]. Second, the design of long-horizon reward functions remains inadequate: existing rewards are often sparse or poorly aligned with grounded action execution, ultimately hindering planning performance [53].
In real-world long-horizon embodied tasks, VLMs still exhibit limited planning capability [53], as they are not well aligned with the physical realities of robotic embodiments or with accurate state transition dynamics [6, 60]. Although existing approaches based on the SFT-only paradigm can enhance the performance of VLMs [8, 65], they remain ineffective when confronted with scenarios or instructions that fall outside the distribution of the SFT dataset [58]. The lack of physical common sense and feasibility constraints often results in ambiguous object recognition and biased state estimation [15, 27]. Moreover, the absence of feedback and error-correction signals [57] in the SFT paradigm leads models to memorize answers rather than learn generalizable reasoning strategies [10], thereby failing to mitigate the accumulation of local errors over extended task horizons.
Reinforcement learning (RL) has proven effective for VLMs in domains such as video reasoning [26], object detection [29, 38], and mathematical reasoning [49, 50, 59], where tasks provide clear and verifiable answers.
However, when transferred to open-ended embodied planning tasks, it becomes challenging to design dense and interpretable reward functions [24, 63, 66], as the outcomes are often ambiguous and context-dependent. For instance, in embodied planning tasks, when the correct answer is "pick up an apple and put it on the table," using a straightforward RL reward such as string matching or accuracy calculation allows the model to gain higher rewards simply by generating more actions, since some subsequences are likely to overlap with the reference plan. This mechanism misleads the model into producing overly long yet logically incorrect reasoning chains, masking its true deficiencies in action ordering and planning coherence. Therefore, a dense and sequence-aware reward is required to directly capture whether a multi-step plan is executed fully or partially correctly [6, 60, 63], particularly in long-horizon and complex action sequences, rather than rewarding superficial token overlap.
To address the above problems, we propose RoboGPT-R1, a two-stage training framework designed to enhance robotic planning with small-scale models. In contrast to the SFT stage, which learns predefined answers, the Group Relative Policy Optimization (GRPO) algorithm explores optimal solutions, addressing the shortcomings of SFT in generalization, task understanding, spatial perception, and planning consistency [10]. Moreover, in the second-stage RL training, unlike conventional RL approaches in reasoning tasks that typically rely on sparse or single-point accuracy rewards, our method introduces a rule-based, verifiable reward function specifically designed for long-horizon embodied reasoning and planning. This reward function consists of two complementary components: a format reward and an accuracy reward. As illustrated in Fig. 1, the format reward integrates multiple dimensions, including the structural completeness of reasoning, action type correctness, and action validity. The accuracy reward is based on the longest common subsequence (LCS) between predicted and reference action sequences, effectively preserving action order and enhancing long-horizon performance. The results on EmbodiedBench [58] show that RoboGPT-R1 significantly outperforms GPT-4o-mini and is competitive with closed-source models such as GPT-4o and Gemini-2.0-flash. Furthermore, its performance significantly surpasses open-source models like Llama-3.2-90B, achieving a 23.33% higher overall score. Compared to the previous state of the art, it yields a 20.33% improvement. On long-horizon tasks, it leads with an accuracy of 50%, demonstrating the superior reasoning capabilities of small models. In summary, our contributions are as follows:
• We propose RoboGPT-R1, a two-stage training paradigm for embodied multi-step reasoning tasks. With RL training, RoboGPT-R1 develops the reasoning capability for complex tasks and environments, thereby enhancing its physical common sense and error-correction abilities.
• We design a reward function based on the perception-reasoning-planning-action loop, with the LCS reward effectively enhancing the model's understanding of and reflection on the task. This enables efficient and high-quality reward computation at a very low cost, and demonstrates good reasoning capabilities on long-horizon tasks.
• We conduct extensive experiments on 6 tasks in 2 scenarios, covering spatial perception, long-range reasoning, common-sense questions, and visual understanding.
In seen scenarios, our method outperforms open-source general models and existing embodied planning models, achieving competitive performance compared to closed-source general models. Moreover, in unseen scenarios, it demonstrates superior reasoning capability and surpasses the state-of-the-art embodied planning model.

2 RELATED WORK
2.1 Embodied Planning
Embodied agents require not only active exploration, manipulation, and scene perception, but also embodied task planning capabilities [15, 16, 54]. Embodied planning aims to decompose high-level natural language instructions into executable subtask sequences [17, 40], enabling the embodied agent to generate actionable steps within an interactive environment to complete complex behaviors. With the advent of large models [28, 56, 57], natural language offers greater expressive flexibility than structured languages, making it possible to utilize LLMs to decompose complex plans into sub-plans in a fully automated manner [5, 51, 61, 66]. For example, TaPA introduces an embodied task planner that grounds free-form instructions into executable plans, trained on a new multimodal benchmark (80 scenes, 15K instruction-plan pairs) [55]. It fine-tunes LLaMA with object lists from multi-view open-vocabulary detection. Additionally, SayCan [3] combines LLMs with reinforcement learning, leveraging the high-level reasoning capabilities of LLMs to complement the value assessment of pre-trained skills, thereby laying a foundation for language in robotics and enabling the feasibility scoring of actions. This can generate executable long-term plans suitable for real robots. While LLMs can generate preliminary plans based on commonsense reasoning, they lack constraints on the physical environment and the feasibility of actions [22, 42, 48, 61].
The emergence of VLMs [39, 55, 60] has led to their use as high-level planners, with the current mainstream approach being to fine-tune VLMs on demonstration data. Zhang et al. [66] extend O1-style deep reasoning to embodied interactive tasks by coupling visual search with step-by-step planning, reflection, and verification, trained on synthesized Observation-Thought-Action trajectories to improve performance on AI2-THOR-style [25] tasks. Moreover, Reflective Planning [19] proposes a test-time computation framework that augments a pre-trained VLM with a reflection loop. It imagines future world states via a diffusion-based dynamics model, critiques potential suboptimalities, and revises plans to improve multi-stage, long-horizon manipulation.

2.2 Reinforcement Learning for LLMs and VLMs
In recent years, with the emergence of reasoning models like OpenAI's o1 [32], research on large language models (LLMs) has gradually shifted towards enhancing their reasoning capabilities through reinforcement learning (RL) [20, 31, 45, 47, 52]. Numerous studies have explored ways to enhance the performance of LLMs in reasoning tasks, including solving mathematical problems [37, 46, 59] and coding [62]. A notable breakthrough in this field is DeepSeek-R1 [14], which experienced an "aha moment" during GRPO-based training, enabling the model to independently reflect on and reevaluate its initial policy without any explicit guidance. Subsequently, several works [26, 29, 30, 38, 44, 64] have used reinforcement learning to enhance the reasoning capabilities of models in multimodal settings.
For example, VLM-R1 [38] and Visual-RFT [29] extend R1-style reinforcement learning to vision-language models, sampling multiple answer outputs for each input and optimizing with GRPO using verifiable, task-specific rewards, resulting in stronger visual reasoning and perception over the SFT baseline. These advances demonstrate the potential of RL to propel large models from "imitation learning" to "emergent intelligence" [10]. Inspired by the R1 paradigm, this paper employs GRPO-based reinforcement learning to perform two-stage training on the model, systematically improving the planning ability and long-term consistency of embodied agents in multimodal task planning.

3 METHODOLOGY
3.1 Overview
In this section, we provide a brief introduction to the proposed RoboGPT-R1 framework. In contrast to previous approaches that rely solely on SFT, this study explores the incorporation of RL and reasoning techniques to better align with embodied planning tasks. Section 3.3 introduces the two-stage learning scheme. As shown in Figure 1, the training of agents comprises two phases: an initial SFT phase, whose purpose is to instill fundamental knowledge and elementary reasoning capabilities into the agent, and a subsequent reinforcement learning phase, which uses the GRPO policy optimization algorithm to enable the agent to continuously explore, think, and learn independently. Then, to address issues such as inconsistent ordering during multi-step reasoning, poor action planning, and poor performance on long-horizon tasks, we design a rule-based, verifiable reward to incentivize the VLM's planning capabilities in Section 3.4.

3.2 Data Preparation
Embodied task planning in real-world indoor scenarios requires a substantial amount of multimodal data, encompassing both perceptual and physical knowledge. Given the impressive inference performance of large, closed-source models, data distillation can be employed to generate high-quality datasets. Following REBP [53], we employ the SFT dataset distilled from Gemini-2.0-flash in the SFT phase. In the RL phase, the RFT dataset from REBP is augmented with task-relevant examples and unsuccessful exploratory tasks.
Our experiments show that the number of in-context examples contained in a dataset can produce contradictory outcomes between training and testing. For instance, directly applying n-shot prompting (where n denotes the number of examples) in the SFT phase causes the model to "memorize" the example format; during testing, its n-shot score then drops significantly. In contrast, an untrained model significantly improves its score when tested with n-shot prompting, demonstrating its strong reliance on examples at test time. Consequently, injecting such knowledge into a trained model may result in rigid adherence to predefined answer templates, which impedes the model's capacity to adapt to the complexity of the problem and the variability of the environment. Our experiments show that training and testing in the zero-shot setting yields superior performance while reducing the number of input tokens by approximately one-third (from approximately 9,000 to fewer than 6,000), thereby improving training efficiency.
Consequently, we employ 0-shot processing uniformly for both training and testing data.

3.3 Two-stage Training Scheme
Stage 1: Initial Planning via SFT. To equip the base VLM with the fundamental capacity to generate multi-step reasoning, a supervised fine-tuning phase is needed first. This step is crucial because the reasoning patterns learned in subsequent reinforcement learning are significantly affected by the capabilities of the base model. Furthermore, using reinforcement learning as the sole training method is strongly affected by the data distribution, resulting in instability during the initial training stages. Therefore, a small amount of data is used for an SFT-based warm-up stage, after which reinforcement learning is conducted on the entire dataset. We find that this approach stabilizes the early stages of training and rapidly equips the base model with relevant knowledge, enabling it to acquire a certain level of embodied reasoning knowledge and planning capability.
Stage 2: Enhancing Reasoning with GRPO. The DeepSeek-R1-Zero algorithm employs the GRPO framework. Unlike reinforcement learning algorithms such as PPO, which require an additional critic model to estimate policy performance, GRPO directly compares groups of candidate responses, eliminating the need for a separate critic. Given a question q, GRPO samples N candidate responses \{o_1, o_2, \ldots, o_N\} from the policy \pi_\theta and evaluates the quality of each response o_i using a reward function R(q, o_i). To determine the relative quality of these responses, the algorithm normalizes the rewards by their mean and standard deviation and derives the advantage as

A_i = \frac{r_i - \text{mean}(\{r_1, r_2, \ldots, r_N\})}{\text{std}(\{r_1, r_2, \ldots, r_N\})}    (1)

where A_i denotes the advantage of candidate o_i measured against the other samples in the group. To encourage the model to generate responses with higher advantages within the group while penalizing large deviations from the reference policy \pi_{\text{ref}}, the objective is as follows:

\mathcal{J}_{\text{GRPO}}(\theta) = \mathbb{E}_{q, \{o_i\}_{i=1}^{N} \sim \pi_{\theta_{\text{old}}}(q)} \left[ \frac{1}{N} \sum_{i=1}^{N} \left\{ \min\left(l_1 \cdot A_i, \, l_2 \cdot A_i\right) - \beta D_{\text{KL}}\left(\pi_\theta \,\|\, \pi_{\text{ref}}\right) \right\} \right]    (2)

l_1 = \frac{\pi_\theta(o_i|q)}{\pi_{\theta_{\text{old}}}(o_i|q)}, \qquad l_2 = \text{clip}\left( \frac{\pi_\theta(o_i|q)}{\pi_{\theta_{\text{old}}}(o_i|q)}, \, 1-\varepsilon, \, 1+\varepsilon \right)    (3)

where \varepsilon is the clipping hyperparameter and \beta controls the KL penalty against the reference policy \pi_{\text{ref}}. Inspired by DeepSeek-R1, our approach incorporates two complementary rewards: the accuracy reward and the format reward. Next, we introduce the reward function R designed for robotic planning tasks.

Figure 1: An overview of RoboGPT-R1. RoboGPT-R1 adopts a two-stage learning paradigm. In the initial phase, supervised fine-tuning introduces the model to data from mathematics, embodied tasks, and visual understanding, establishing a foundation for embodied reasoning. In the second phase, we apply GRPO-based reinforcement fine-tuning guided by a tailored reward function. The model is subsequently evaluated across six categories of tasks, including long-horizon planning and spatial reasoning.

3.4 Reward Design
In Section 3.3, we briefly introduced the general GRPO algorithm. Given that embodied planning requires an agent to complete complex tasks in a real or simulated physical environment based on natural language instructions, we design a set of reward functions specifically for this purpose. The following sections describe the format reward and the accuracy reward, respectively.
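As a concrete illustration of the group normalization in Eq. (1), the sketch below turns one group of scalar rewards into advantages. The function name and the eps guard against zero variance are our own additions for illustration; this is not the actual training implementation.

def group_relative_advantages(rewards, eps=1e-8):
    # Eq. (1): A_i = (r_i - mean) / std, computed within one group
    # of N sampled responses for the same prompt.
    n = len(rewards)
    mu = sum(rewards) / n
    std = (sum((r - mu) ** 2 for r in rewards) / n) ** 0.5
    return [(r - mu) / (std + eps) for r in rewards]

# Example with N = 5 rollouts per prompt (rollout.n = 5 in Table 6):
# responses above the group mean receive positive advantages,
# below-mean responses receive negative ones.
print(group_relative_advantages([0.82, 0.40, 0.64, 0.40, 0.97]))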
3.4.1 Format Reward. To ensure output regularity and facilitate the extraction of both the reasoning process and the final result, most R1-style approaches enclose the reasoning within '<think>' tags and the final plan within '<answer>' tags. If the generated output deviates from this structure, the format reward is assigned a value of zero. Inspired by REBP [53], in embodied multi-step planning, the agent must generate responses that are not only semantically meaningful but also structurally executable. Unlike conventional text generation tasks, where free-form output may be acceptable, embodied planning requires a higher level of structural rigor to ensure that each response remains interpretable and executable by downstream systems. First, the model should follow a clear cognitive loop, just as humans plan their actions: before acting, we reflect on the task, observe the environment, make a plan, and then execute it. Second, the generation of invalid or fabricated actions should be penalized. In summary, we set the following format reward, which consists of three parts:

R_{\text{format}} = 0.3 \cdot R_{\text{section}} + 0.3 \cdot R_{\text{type}} + 0.4 \cdot R_{\text{validity}}    (4)

The section reward R_{\text{section}} evaluates whether all required fields (visual_state_description, reasoning_and_reflection, language_plan, executable_plan) are present and correctly typed, and is computed as

R_{\text{section}} = \frac{1}{|S|} \sum_{s \in S} \mathbf{1}\left[ \text{type}(o_s) = T_s \right]    (5)

where \mathbf{1}[\cdot] is the indicator function, S = \{s_1, s_2, s_3, s_4\} contains the four fields, T_s denotes the required type of field s, and o_s is the value of field s in the output object. The type reward R_{\text{type}} checks whether each action step is well-formed, defined as

R_{\text{type}} = \frac{1}{m} \sum_{i=1}^{m} \mathbf{1}\left[ \hat{y}_i^{\text{id}} \in \text{Int}, \ \text{name}_i \in \text{Str}, \ \hat{y}_i \neq \emptyset \right]    (6)

In this formulation, m denotes the number of action steps, and \hat{y}_i^{\text{id}} and \hat{y}_i represent the action id and name of step i, respectively. Here Int denotes the set of integers, while Str represents the set of non-empty strings. Finally, the validity reward R_{\text{validity}} measures whether each action id-name pair matches the predefined action dictionary, given by

R_{\text{validity}} = \frac{1}{C} \sum_{i=1}^{C} \mathbf{1}\left[ \text{norm}(\hat{y}_i) = \text{norm}(D_{\text{action}}[\hat{y}_i^{\text{id}}]) \right]    (7)

where C is the number of matched steps and norm(·) is a normalization function that lowercases and trims whitespace. The action dictionary D_{\text{action}} defines a mapping from action ids to their corresponding names. Different from REBP [53], our setup employs dynamic action IDs that vary across tasks and environments, preventing the model from relying on memorization of a fixed action set; instead, we want it to learn the meaning of actions and use them correctly. In summary, such a reward design guides the model to generate structured outputs that conform to the task-execution closed loop, and avoids hallucinated or inconsistent outputs through self-understanding and thinking.
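The three checks of Eqs. (5)-(7) and their combination in Eq. (4) can be sketched as follows. This is a minimal reconstruction under our own assumptions: the response has already been parsed into a dict, each step carries hypothetical keys "action_id" and "action_name", and the per-step denominators are simplified to the total number of steps; it is illustrative, not the exact training code.

REQUIRED_FIELDS = {
    "visual_state_description": str,
    "reasoning_and_reflection": str,
    "language_plan": str,
    "executable_plan": list,
}

def norm(s):
    # Normalization from Eq. (7): lowercase and trim whitespace.
    return s.strip().lower()

def format_reward(response, action_dict):
    # R_section (Eq. 5): all required fields present and correctly typed.
    r_section = sum(isinstance(response.get(k), t)
                    for k, t in REQUIRED_FIELDS.items()) / len(REQUIRED_FIELDS)

    steps = response.get("executable_plan") or []
    if not steps:
        return 0.3 * r_section  # no plan at all: only the section term can score

    # R_type (Eq. 6): integer action id and non-empty string name per step.
    r_type = sum(isinstance(s.get("action_id"), int)
                 and isinstance(s.get("action_name"), str)
                 and s["action_name"] != ""
                 for s in steps) / len(steps)

    # R_validity (Eq. 7): each id-name pair matches the (dynamic)
    # action dictionary for the current task and environment.
    r_validity = sum(s.get("action_id") in action_dict
                     and norm(s.get("action_name", "")) == norm(action_dict[s["action_id"]])
                     for s in steps) / len(steps)

    # Eq. (4): weighted combination of the three checks.
    return 0.3 * r_section + 0.3 * r_type + 0.4 * r_validity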
3.4.2 LCS Reward. In embodied multi-step planning, the correctness of individual actions is not sufficient: the order in which actions are executed is often critical to task success. For example, placing an object before picking it up may involve the right actions but in a logically invalid sequence. Traditional token-level matching or step-wise accuracy metrics fail to penalize such disorder, treating unordered but correct actions as equally valid. Besides, in long-horizon tasks, plans can extend over dozens of steps, where strict reward strategies like exact matching become too rigid to reflect realistic performance. A model might make early mistakes yet recover in later steps to complete the task successfully. However, prefix-based accuracy rewards, such as those used in REBP [53], overlook this "error recovery" behavior, leading to sparse and less informative reward signals. To address these problems, we design the accuracy reward based on the longest common subsequence (LCS) between the predicted and reference action sequences. By computing the LCS over action names, we enforce both content accuracy and sequence coherence. This approach is robust to local deviations while maintaining global alignment, and remains effective even as task length increases. We define the model-generated sequence as \hat{Y} = (\hat{y}_1, \hat{y}_2, \ldots, \hat{y}_m) and the reference sequence as Y = (y_1, y_2, \ldots, y_n). The detailed accuracy reward is as follows:

\text{LCS}(i, j) = \begin{cases} 0, & \text{if } i = 0 \text{ or } j = 0 \\ \text{LCS}(i-1, j-1) + 1, & \text{if } \hat{y}_i = y_j \\ \max\left( \text{LCS}(i-1, j), \, \text{LCS}(i, j-1) \right), & \text{otherwise} \end{cases}    (8)

R_{\text{lcs}} = \frac{k}{n}    (9)

where n = |Y| denotes the length of the reference sequence and k = \text{LCS}(m, n) is the length of the longest common subsequence between the generated sequence \hat{Y} and the reference sequence Y. To evaluate the effectiveness of our proposed accuracy reward, we conduct an ablation study in Section 4.4: compared to prefix accuracy (as used in REBP) and standard step-wise matching, the LCS-based accuracy reward shows stronger performance in multi-step planning tasks.

3.4.3 Overall Reward. To jointly encourage structural correctness and sequential accuracy, we define the overall reward as a weighted combination of the format reward and the LCS-based accuracy reward:

R = 0.2 \cdot R_{\text{format}} + 0.8 \cdot R_{\text{lcs}}    (10)

This overall reward serves two key purposes in embodied planning: enforcing structural correctness and promoting action-level accuracy. The format reward R_{\text{format}} enforces a fixed output structure and ensures each action step is well-formed and consistent with the current scene and instructions. Meanwhile, the accuracy reward R_{\text{lcs}} evaluates whether the predicted action sequence aligns with the reference plan, not only in content but also in order. By combining these two aspects, the overall reward encourages the model to reason more effectively and to generate executable plans with reasonable length and structure.
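Eqs. (8)-(10) translate directly into code. Below is a minimal sketch, assuming plans are compared as lists of normalized action names; it is illustrative rather than the exact training implementation.

def lcs_length(pred, ref):
    # Bottom-up dynamic programming over the recursion of Eq. (8).
    m, n = len(pred), len(ref)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if pred[i - 1] == ref[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def lcs_reward(pred, ref):
    # Eq. (9): R_lcs = k / n, LCS length over reference length.
    return lcs_length(pred, ref) / len(ref) if ref else 0.0

def overall_reward(r_format, r_lcs):
    # Eq. (10): R = 0.2 * R_format + 0.8 * R_lcs.
    return 0.2 * r_format + 0.8 * r_lcs

# A plan with a spurious first step but a fully ordered recovery:
ref  = ["find apple", "pick up apple", "go to table", "put down apple"]
pred = ["go to sink", "find apple", "pick up apple", "go to table", "put down apple"]
print(lcs_reward(pred, ref))  # 1.0: all four reference steps appear in order

Unlike prefix or step-wise matching, the wrong first step does not zero out the reward; the correctly ordered suffix is still credited, which is exactly the "error recovery" behavior discussed above.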
4 EXPERIMENTS
4.1 Experimental Settings
Evaluation. We evaluate RoboGPT-R1 on EmbodiedBench [58], a unified benchmarking suite for multimodal embodied planning agents. EmbodiedBench offers standardized protocols and interfaces that cover multi-sensory input, language instructions, and long-horizon decision making, enabling consistent and reproducible comparisons across task suites. Specifically, we focus on its two constituent suites: EB-ALFRED and EB-Habitat. The former is rooted in the ALFRED ecosystem and targets instruction-following household tasks (e.g., pick-and-place, cleaning, and mobile manipulation) that emphasize object state tracking and stepwise dependencies; the latter builds on the Habitat ecosystem and emphasizes navigation and interaction in 3D environments, with observation distributions, scene layouts, and action semantics that differ markedly from EB-ALFRED [36, 41]. Because our training data are primarily associated with EB-ALFRED/ALFRED, we treat performance on EB-ALFRED as in-domain and use it to gauge method effectiveness, whereas results on EB-Habitat are regarded as out-of-domain and used to assess generalization. Unless otherwise noted, all evaluation settings follow the benchmark defaults; the only test variable we modify is the number of in-context examples (n_shots) in the control input. The evaluation details are provided in Appendix C.
Baselines. We evaluate both the Qwen and GPT model series, including Qwen2.5-VL-72B-Ins, Qwen2.5-VL-7B-Ins, and Qwen2.5-VL-3B-Ins, as well as GPT-4.1, GPT-4o, and GPT-4o-mini. Closed-source models are assessed via their official APIs, while open-source models are tested through local deployment. Performance results for additional baselines are obtained from the REBP and EmbodiedBench leaderboards. The three types of comparison baselines and the corresponding results are presented below:
(1) General closed-source models: five representative proprietary multimodal models from the GPT, Gemini, and Qwen families: Gemini-2.0-Flash [13], Qwen-VL-Max [11], GPT-4.1 [35], GPT-4o [34], and GPT-4o-mini [33].
(2) General open-source models: open-source models from multiple series, including Qwen2.5-VL-3B/7B/72B-Ins [12], LLaMA-3.2-90B-Vision-Ins [4], InternVL2.5-8B [9], and Gemma-3-12B-its [43].
(3) Embodied domain-specific models: models tailored for embodied reasoning and planning, such as REBP [53], RoboBrain [23], TaPA [55], and ours.

Table 1: Success rates of diverse models on EB-ALFRED and EB-Habitat. Entries without any symbol are sourced from the EmbodiedBench leaderboard. Symbol † indicates results directly cited from the REBP paper, based on their evaluations. Symbol ‡ marks results obtained through our own reproduction by querying the official API. All scores are reported as percentages (%). For each suite, columns give the average followed by the Base, Common, Complex, Visual, Spatial, and Long sub-suites.

                                          EB-ALFRED (seen)                          EB-Habitat (unseen)
Method                         Params     Avg.   Base Common Complex Visual Spatial Long    Avg.   Base Common Complex Visual Spatial Long
Closed-source general models:
Gemini-2.0-flash [13]          -          52.30  62   48     54      46     46      58      42.30  82   38     38      36     34      26
Qwen-VL-Max [11]               -          41.30  44   48     44      42     38      32      45.30  74   40     50      42     30      36
GPT-4.1‡ [35]                  -          64.67  70   64     70      62     62      60      50.67  90   38     50      36     46      44
GPT-4o‡ [34]                   -          51.67  54   46     58      52     52      48      57.00  84   42     62      38     62      54
GPT-4o-mini‡ [33]              -          34.00  66   56     68      14     0       0       35.00  70   24     36      30     30      20
Open-source general models:
Llama-3.2-90B-Vision-Ins [4]   90B        32.00  38   34     44      28     32      16      40.30  94   24     50      32     28      14
InternVL2.5-8B [9]             8B         2.00   4    6      2       0      0       0       11.30  36   4      0       10     16      2
Gemma-3-12b-its [43]           12B        25.70  32   26     38      26     20      12      23.00  58   10     24      18     24      4
Qwen2.5-VL-72B-Ins‡ [12]       72B        43.67  62   36     48      40     44      32      50.33  92   38     48      34     46      44
Qwen2.5-VL-7B-Ins‡ [12]        7B         2.67   6    2      6       0      2       0       15.00  42   6      22      12     4       4
Qwen2.5-VL-3B-Ins‡ [12]        3B         1.33   2    2      0       0      4       0       14.67  34   0      18      18     14      4
Embodied planning models:
RoboBrain† [23]                7B         0.33   2    0      0       0      0       0       15.30  38   6      18      8      18      4
TaPA† [55]                     7B         0.00   0    0      0       0      0       0       0.00   0    0      0       0      0       0
REBP† [53]                     7B         35.00  52   46     46      28     32      6       18.33  50   6      18      14     14      8
RoboGPT-R1 (ours)‡             3B         55.33  62   56     64      50     50      50      22.00  64   8      18      20     12      10

Dataset. We process the REBP public dataset [53] to generate a base dataset and an augmented dataset. The base dataset is directly distilled from the EB-ALFRED tasks in EmbodiedBench [58], serving as a strongly benchmark-correlated dataset, and is utilized in the SFT phase to endow the model with initial multimodal embodied-planning skills. The main body of the augmented dataset originates from the open-source ALFRED trajectory dataset [41].
While its content is similar to EB-ALFRED, it exhibits significant differences in details such as action space and task length, making it benchmark-adjacent (near-domain) data. The augmented dataset includes all embodied planning data from the base dataset to prevent catastrophic forgetting. The augmented dataset is employed in the RFT phase to improve reasoning robustness under near-domain distributions, thereby further enhancing planning performance on EmbodiedBench. Data processing methods, data composition, and other details are provided in Appendix B.
Training. We adopt Qwen2.5-VL-3B-Instruct as the multimodal base model and employ a two-stage training scheme. In the first stage, we perform full-parameter SFT on the base dataset to endow the model with initial planning skills aligned with EB-ALFRED. In the second stage, we conduct reinforcement fine-tuning (RFT) with GRPO on the augmented dataset, aiming to improve reasoning and generalization from data that are weakly aligned with the benchmark. SFT is implemented with LLaMA-Factory [21] and trained on 8x Ascend 910B3 64GB NPUs for approximately 1.5 hours; RFT is implemented with VERL [1] and trained on 4x NVIDIA H20 96GB GPUs for about 25 hours. Complete hyperparameters and implementation details are provided in Appendix A.

4.2 Main Results
Training Results in EB-ALFRED. Table 1 reports the evaluation results on EB-ALFRED. RoboGPT-R1 attains an average success rate of 55.33% across the six sub-task suites. This performance significantly outperforms multiple strong baselines: it surpasses the closed-source GPT-4o (51.67%) and GPT-4o-mini (34.00%), and trails only GPT-4.1 (64.67%). Among open-source general models, it notably outperforms the larger-parameter Qwen2.5-VL-72B-Instruct (43.67%). Relative to the similarly sized Qwen2.5-VL-3B-Instruct, our approach yields an improvement of approximately 54 percentage points in average success. Compared with the embodied specialist REBP (35.00%), RoboGPT-R1 leads by nearly 20 percentage points overall and shows a pronounced advantage on long-horizon tasks: REBP achieves only 6%, whereas RoboGPT-R1 reaches 50%. Notably, despite using only 3B parameters, RoboGPT-R1 delivers this level of performance under a small-model, low-inference-cost regime, highlighting the parameter efficiency and effectiveness of our method.

Figure 2: Reward curves in RFT. (a) Overall reward. (b) Format & accuracy (LCS) reward. Our LCS-based accuracy reward provides an appropriate and dense learning signal for RFT, rising steadily from 0.30 to 0.80. The format reward, already aligned by SFT, starts around 0.95 at the onset of RFT and saturates within ∼20 steps, stabilizing near 1.0. The overall reward is computed as a weighted sum of the accuracy and format rewards with weights 0.8 and 0.2, respectively, and it increases in tandem with the steady improvement of the accuracy reward.

Generalization Results in EB-Habitat. Table 1 summarizes the EB-Habitat results. RoboGPT-R1 attains an average success rate of 22% across the six sub-tasks, representing a 7-point improvement over
the base model Qwen2.5-VL-3B-Instruct (14.67%) and outperforming both the 7B-parameter models Qwen2.5-VL-7B-Instruct (15.00%) and REBP (18.33%). Although a performance gap remains compared to Qwen2.5-VL-72B-Instruct and several closed-source general models, these out-of-domain results indicate that our approach substantially improves the generalization and transferability of the embodied model.

Table 2: Training-strategy ablation results (success rate, %). Deltas in brackets are relative to the base model. After SFT, the model acquires core embodied planning competence, raising the average success rate from 1.33% to 42.00% while still lagging on long-horizon tasks. With subsequent RFT, performance further increases to 55.33%, with especially pronounced gains on long-horizon tasks (26% → 50%).

Base SFT RFT    Avg.            Base      Common    Complex   Visual    Spatial   Long
✓    ✗   ✗      1.33 (+0.00)    2 (+0)    2 (+0)    0 (+0)    0 (+0)    4 (+0)    0 (+0)
✓    ✓   ✗      42.00 (+40.67)  48 (+46)  44 (+42)  58 (+58)  38 (+38)  38 (+34)  26 (+26)
✓    ✓   ✓      55.33 (+54.00)  62 (+60)  56 (+54)  64 (+64)  50 (+50)  50 (+46)  50 (+50)

Table 3: Data-source ablation results (success rate, %). Base denotes the in-domain dataset distilled from EmbodiedBench; Aug is the near-domain ALFRED-derived set. Training only with SFT on Base reaches 42.00% on average, whereas replacing Base with Aug for SFT collapses performance to 6.00%, indicating poor transfer under pure supervision. Continuing RFT on Base after SFT on Base yields only a modest gain (42.00% → 44.33%), while using Aug during RFT achieves the best results (55.33%), showing that only the RL (RFT) stage can effectively absorb near-domain data and transfer it to the target task.

Model                        Avg.    Base  Common  Complex  Visual  Spatial  Long
Base Model (Qwen2.5-VL-3B)   1.33    2     2       0        0       4        0
Only SFT w/ Base (ours)      42.00   48    44      58       38      38       26
Only SFT w/ Aug              6.00    14    6       12       2       2        0
SFT + RFT w/ Base            44.33   56    56      54       32      36       32
SFT + RFT w/ Aug (ours)      55.33   62    56      64       50      50       50

4.3 Ablation Study
We conduct two sets of ablations on EB-ALFRED to disentangle the effects of the two training stages and the two data sources used in our framework. For brevity, we refer to the Base dataset and the Aug dataset as Base and Aug, respectively.
Training-Strategy Ablation. We compare the performance on EB-ALFRED of the base model (no fine-tuning), the SFT-only model, and the final SFT+RFT model. Results are shown in Table 2. After SFT, the average success rate rises from 1.33% (base model) to 42.00%, indicating that SFT instills the initial embodied planning competence. However, performance on long-horizon tasks remains limited (26%). With subsequent RFT, performance improves across all sub-suites, yielding an average of 55.33%; notably, the long-horizon score increases from 26% to 50%, highlighting the particular effectiveness of RFT for complex, extended plans.

Figure 3: Success rates at different stages. Bars show the macro average and the long-horizon score for the base model, the SFT-only model, and the SFT+RFT model. SFT establishes initial embodied planning (Avg: 1.33% → 42.00%) but leaves long-horizon performance limited (26%). Adding RFT lifts the average further (to 55.33%) and markedly improves long-horizon planning (to 50%), validating the effectiveness of our two-stage framework: SFT for foundational competence and RFT for the additional gains needed to solve long-horizon tasks.

Data-Source Ablation. As shown in Table 3, these ablations highlight the importance of near-domain data during RFT. (1) SFT on Aug only: replacing Base with Aug for SFT reduces the average success from 42.00% to 6.00%, suggesting that, under supervised learning, near-domain data alone does not transfer effectively to the target task. (2) RFT on Base: starting from SFT on Base and continuing RFT on Base (instead of Aug) yields only a modest increase from 42.00% to 44.33%. Compared with the SFT+RFT setting that uses Aug during RFT (55.33%), this indicates that incorporating near-domain data in the RFT stage is essential.
Table 4: Accuracy reward comparison results (success rates, %). We compare the three GRPO accuracy rewards used during RFT (Step Accuracy, REBP Acc., and LCS (ours)) with "RFT Base" as the reference row for deltas. Bracketed scores denote the change relative to the reference. LCS (ours) delivers the strongest overall performance (Avg. 55.33%) and the largest increase on long-horizon tasks (+24%), outperforming the alternatives across most sub-suites.

Accuracy Reward   Avg.            Base        Common       Complex     Visual       Spatial      Long
RFT Base          42.00 (+0.00)   48 (+0.00)  44 (+0.00)   58 (+0.00)  38 (+0.00)   38 (+0.00)   26 (+0.00)
Step Accuracy     43.67 (+1.67)   52 (+4.00)  56 (+12.00)  54 (-4.00)  40 (+2.00)   40 (+2.00)   20 (-6.00)
REBP Acc.         48.33 (+6.33)   62 (+14.00) 52 (+8.00)   58 (+0.00)  46 (+8.00)   38 (+0.00)   34 (+8.00)
LCS (ours)        55.33 (+13.33)  62 (+14.00) 56 (+12.00)  64 (+8.00)  50 (+12.00)  50 (+12.00)  50 (+24.00)

Ablation Analysis. We summarize three observations: (i) SFT establishes the initial planning ability but leaves a gap on long-horizon tasks; (ii) RFT closes this gap, with the strongest gains on the long-horizon sub-suites; (iii) combining RFT with Aug is essential, as Aug brings little benefit in SFT but provides substantial gains when used during RFT.

4.4 Accuracy Reward Comparison
During the RFT stage, we instantiate the accuracy reward in the GRPO algorithm [2] as the ratio of the longest common subsequence (LCS) length. Thanks to the appropriate and dense learning signal provided by the LCS-based accuracy reward, the reward curves rise steadily throughout RFT, as shown in Fig. 2. This section conducts a head-to-head comparison of three accuracy rewards to assess the suitability and advantages of our LCS reward for embodied planning (a side-by-side sketch of the compared variants follows this subsection).
Compared rewards. We set the accuracy reward in GRPO to one of:
• LCS reward (ours): the ratio of the longest common subsequence between the generated and reference trajectories (normalized LCS);
• Step Accuracy: the ratio of strictly matched actions/instructions, step by step;
• REBP reward: the multi-step, progress-style signal used in REBP [53].
We evaluate on EB-ALFRED with the same settings as the main experiments. To ensure fairness, we fix the backbone model, training data, total update budget, optimization hyperparameters, and the format reward term, and vary only the accuracy reward. The primary metrics are the average success rate and the scores of each sub-suite (with emphasis on long-horizon tasks).
Comparison Results. As summarized in Table 4, using Step Accuracy yields only a slight improvement in the average success rate (from 42.00% to 43.67%), while the long-horizon score decreases (from 26% to 20%). The REBP reward improves the average to 48.33% and raises the long-horizon score from 26% to 34%. In contrast, our LCS reward achieves the largest gains: the average increases from 42.00% to 55.33%, and the long-horizon score from 26% to 50%. These findings demonstrate the suitability of the LCS-based accuracy reward for embodied planning and its advantages over the alternatives under the same training budget.
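To make the comparison tangible, here is a minimal sketch of the two baseline accuracy rewards next to the LCS reward from the sketch in Section 3.4.2. The prefix-style variant is our own paraphrase of the progress-style signal described above, not REBP's exact code.

def step_accuracy(pred, ref):
    # Strict step-by-step matching: position i must match exactly.
    return sum(p == r for p, r in zip(pred, ref)) / len(ref) if ref else 0.0

def prefix_accuracy(pred, ref):
    # Prefix-style credit (our paraphrase): count correct steps from
    # the start and stop at the first mismatch, so later recovery
    # after an early mistake earns nothing.
    k = 0
    for p, r in zip(pred, ref):
        if p != r:
            break
        k += 1
    return k / len(ref) if ref else 0.0

# A plan with one wrong first step but a fully recovered suffix:
ref  = ["find apple", "pick up apple", "go to table", "put down apple"]
pred = ["go to sink", "find apple", "pick up apple", "go to table", "put down apple"]
print(step_accuracy(pred, ref))    # 0.0: every position is misaligned
print(prefix_accuracy(pred, ref))  # 0.0: the first step is already wrong
# lcs_reward(pred, ref) from Section 3.4.2's sketch returns 1.0, since
# all four reference steps still appear in the correct order.

The contrast illustrates why the LCS reward yields the densest and most informative signal in Table 4: both baselines collapse to zero on recoverable plans, whereas the LCS reward still credits the correctly ordered subsequence.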
5 CONCLUSIONS
In this work, we propose RoboGPT-R1, a two-stage training framework for embodied planning. Stage 1 (SFT) equips the model with initial instruction-following and planning priors. Stage 2 (RFT) performs reinforcement fine-tuning with GRPO and an LCS-based, sequence-aware accuracy reward (paired with format constraints), providing dense and verifiable feedback. This design overcomes the limitations of SFT-only behavior cloning, which fails to adequately elicit the reasoning capabilities of VLMs and often undermines in-domain performance when leveraging near-domain data. These shortcomings result in poor performance on long-horizon tasks and brittle out-of-domain generalization. Evaluated on EmbodiedBench, the 3B model trained with our framework surpasses general-purpose VLM baselines including Qwen2.5-VL-72B and GPT-4o, and substantially outperforms other 7B-scale embodied planners, with especially pronounced gains on long-horizon subtasks.

REFERENCES
[1] Alibaba DAMO Academy. 2024. EasyR1: A unified framework for reward modeling and RLHF. https://github.com/alibaba/EasyR1.
[2] Alibaba DAMO Academy. 2024. GRPO: Generalized Reward Preference Optimization for LLM alignment. https://github.com/alibaba/GRPO.
[3] Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Chuyuan Fu, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Daniel Ho, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Eric Jang, Rosario Jauregui Ruano, Kyle Jeffrey, Sally Jesmonth, Nikhil J Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Kuang-Huei Lee, Sergey Levine, Yao Lu, Linda Luu, Carolina Parada, Peter Pastor, Jornell Quiambao, Kanishka Rao, Jarek Rettinghouse, Diego Reyes, Pierre Sermanet, Nicolas Sievers, Clayton Tan, Alexander Toshev, Vincent Vanhoucke, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Mengyuan Yan, and Andy Zeng. 2022. Do As I Can, Not As I Say: Grounding Language in Robotic Affordances. https://arxiv.org/abs/2204.01691
[4] Meta AI. 2024. Llama 3.2: Revolutionizing edge AI and vision with open, customizable models. https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/.
[5] Hao Bai, Yifei Zhou, Mert Cemri, Jiayi Pan, Alane Suhr, Sergey Levine, and Aviral Kumar. 2024. DigiRL: Training In-The-Wild Device-Control Agents with Autonomous Reinforcement Learning. https://arxiv.org/abs/2406.11896
[6] Zitong Bo, Yue Hu, Jinming Ma, Mingliang Zhou, Junhui Yin, Yachen Kang, Yuqi Liu, Tong Wu, Diyun Xiang, and Hao Chen. 2025. Reinforced Embodied Planning with Verifiable Reward for Real-World Robotic Manipulation. https://arxiv.org/abs/2509.25852
[7] Shaoyu Chen, Bo Jiang, Hao Gao, Bencheng Liao, Qing Xu, Qian Zhang, Chang Huang, Wenyu Liu, and Xinggang Wang. 2024. VADv2: End-to-End Vectorized Autonomous Driving via Probabilistic Planning. https://arxiv.org/abs/2402.13243
[8] Yaran Chen, Wenbo Cui, Yuanwen Chen, Mining Tan, Xinyao Zhang, Jinrui Liu, Haoran Li, Dongbin Zhao, and He Wang. 2025. RoboGPT: an LLM-based Long-term Decision-making Embodied Agent for Instruction Following Tasks. IEEE Transactions on Cognitive and Developmental Systems (2025), 1-11. https://doi.org/10.1109/TCDS.2025.3543364
[9] Zhe Chen, Weiyun Wang, Yue Cao, Yangzhou Liu, Zhangwei Gao, Erfei Cui, Jinguo Zhu, Shenglong Ye, Hao Tian, Zhaoyang Liu, Lixin Gu, Xuehui Wang, Qingyun Li, Yiming Ren, Zixuan Chen, Jiapeng Luo, Jiahao Wang, Tan Jiang, Bo Wang, Conghui He, Botian Shi, Xingcheng Zhang, Han Lv, Yi Wang, Wenqi Shao, Pei Chu, Zhongying Tu, Tong He, Zhiyong Wu, Huipeng Deng, Jiaye Ge, Kai Chen, Kaipeng Zhang, Limin Wang, Min Dou, Lewei Lu, Xizhou Zhu, Tong Lu, Dahua Lin, Yu Qiao, Jifeng Dai, and Wenhai Wang. 2025. Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling.
https://arxiv.org/abs/2412.05271
[10] Tianzhe Chu, Yuexiang Zhai, Jihan Yang, Shengbang Tong, Saining Xie, Dale Schuurmans, Quoc V. Le, Sergey Levine, and Yi Ma. 2025. SFT Memorizes, RL Generalizes: A Comparative Study of Foundation Model Post-training. CoRR abs/2501.17161 (2025).
[11] Alibaba Cloud. 2024. Qwen-VL-Max: Large Vision-Language Model. https://github.com/QwenLM/Qwen-VL.
[12] Alibaba Cloud. 2025. Qwen2.5-VL Technical Report. https://arxiv.org/abs/2502.13923
[13] Google DeepMind. 2024. Introducing Gemini 2.0: Our new AI model for the agentic era. https://blog.google/technology/google-deepmind/google-gemini-ai-update-december-2024/.
[14] DeepSeek-AI, Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, Xiaokang Zhang, Xingkai Yu, Yu Wu, Z. F. Wu, Zhibin Gou, Zhihong Shao, Zhuoshu Li, Ziyi Gao, Aixin Liu, Bing Xue, Bingxuan Wang, Bochao Wu, Bei Feng, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, Damai Dai, Deli Chen, Dongjie Ji, Erhang Li, Fangyun Lin, Fucong Dai, Fuli Luo, Guangbo Hao, Guanting Chen, Guowei Li, H. Zhang, Han Bao, Hanwei Xu, Haocheng Wang, Honghui Ding, Huajian Xin, Huazuo Gao, Hui Qu, Hui Li, Jianzhong Guo, Jiashi Li, Jiawei Wang, Jingchang Chen, Jingyang Yuan, Junjie Qiu, Junlong Li, J. L. Cai, Jiaqi Ni, Jian Liang, Jin Chen, Kai Dong, Kai Hu, Kaige Gao, Kang Guan, Kexin Huang, Kuai Yu, Lean Wang, Lecong Zhang, Liang Zhao, Litong Wang, Liyue Zhang, Lei Xu, Leyi Xia, Mingchuan Zhang, Minghua Zhang, Minghui Tang, Meng Li, Miaojun Wang, Mingming Li, Ning Tian, Panpan Huang, Peng Zhang, Qiancheng Wang, Qinyu Chen, Qiushi Du, Ruiqi Ge, Ruisong Zhang, Ruizhe Pan, Runji Wang, R. J. Chen, R. L. Jin, Ruyi Chen, Shanghao Lu, Shangyan Zhou, Shanhuang Chen, Shengfeng Ye, Shiyu Wang, Shuiping Yu, Shunfeng Zhou, Shuting Pan, and S. S. Li. 2025. DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning. CoRR abs/2501.12948 (2025).
[15] Jiafei Duan, Samson Yu, Hui Li Tan, Hongyuan Zhu, and Cheston Tan. 2022. A Survey of Embodied AI: From Simulators to Research Tasks. https://arxiv.org/abs/2103.04918
[16] Zane Durante, Qiuyuan Huang, Naoki Wake, Ran Gong, Jae Sung Park, Bidipta Sarkar, Rohan Taori, Yusuke Noda, Demetri Terzopoulos, Yejin Choi, Katsushi Ikeuchi, Hoi Vo, Li Fei-Fei, and Jianfeng Gao. 2024. Agent AI: Surveying the Horizons of Multimodal Interaction. https://arxiv.org/abs/2401.03568
[17] Zhaoye Fei, Li Ji, Siyin Wang, Junhao Shi, Jingjing Gong, and Xipeng Qiu. 2025. Unleashing Embodied Task Planning Ability in LLMs via Reinforcement Learning. https://arxiv.org/abs/2506.23127
[18] Sicheng Feng, Kaiwen Tuo, Song Wang, Lingdong Kong, Jianke Zhu, and Huan Wang. 2025. RewardMap: Tackling Sparse Rewards in Fine-grained Visual Reasoning via Multi-Stage Reinforcement Learning. https://arxiv.org/abs/2510.02240
[19] Yunhai Feng, Jiaming Han, Zhuoran Yang, Xiangyu Yue, Sergey Levine, and Jianlan Luo. 2025. Reflective Planning: Vision-Language Models for Multi-Stage Long-Horizon Robotic Manipulation. https://arxiv.org/abs/2502.16707
[20] Yuqian Fu, Tinghong Chen, Jiajun Chai, Xihuai Wang, Songjun Tu, Guojun Yin, Wei Lin, Qichao Zhang, Yuanheng Zhu, and Dongbin Zhao. 2025. SRFT: A Single-Stage Method with Supervised and Reinforcement Fine-Tuning for Reasoning. https://arxiv.org/abs/2506.19767
[21] Hiyouga. 2023. LLaMA Factory: Open-source instruction tuning framework for LLMs. https://github.com/hiyouga/LLaMA-Factory.
[22] Wenlong Huang, Fei Xia, Dhruv Shah, Danny Driess, Andy Zeng, Yao Lu, Pete Florence, Igor Mordatch, Sergey Levine, Karol Hausman, and Brian Ichter. 2023. Grounded decoding: guiding text generation with grounded models for embodied agents. In Proceedings of the 37th International Conference on Neural Information Processing Systems (New Orleans, LA, USA) (NIPS '23). Curran Associates Inc., Red Hook, NY, USA, Article 2606, 26 pages.
[23] Yuheng Ji, Huajie Tan, Jiayu Shi, Xiaoshuai Hao, Yuan Zhang, Hengyuan Zhang, Pengwei Wang, Mengdi Zhao, Yao Mu, Pengju An, Xinda Xue, Qinghang Su, Huaihai Lyu, Xiaolong Zheng, Jiaming Liu, Zhongyuan Wang, and Shanghang Zhang. 2025. RoboBrain: A Unified Brain Model for Robotic Manipulation from Abstract to Concrete. https://arxiv.org/abs/2502.21257
[24] Bo Jiang, Shaoyu Chen, Qian Zhang, Wenyu Liu, and Xinggang Wang. 2025. AlphaDrive: Unleashing the Power of VLMs in Autonomous Driving via Reinforcement Learning and Reasoning. CoRR abs/2503.07608 (2025).
[25] Eric Kolve, Roozbeh Mottaghi, Winson Han, Eli VanderBilt, Luca Weihs, Alvaro Herrasti, Matt Deitke, Kiana Ehsani, Daniel Gordon, Yuke Zhu, Aniruddha Kembhavi, Abhinav Gupta, and Ali Farhadi. 2022. AI2-THOR: An Interactive 3D Environment for Visual AI. https://arxiv.org/abs/1712.05474
[26] Xinhao Li, Ziang Yan, Desen Meng, Lu Dong, Xiangyu Zeng, Yinan He, Yali Wang, Yu Qiao, Yi Wang, and Limin Wang. 2025. VideoChat-R1: Enhancing Spatio-Temporal Perception via Reinforcement Fine-Tuning. https://arxiv.org/abs/2504.06958
[27] Wenlong Liang, Rui Zhou, Yang Ma, Bing Zhang, Songlin Li, Yijia Liao, and Ping Kuang. 2025. Large Model Empowered Embodied AI: A Survey on Decision-Making and Embodied Learning. CoRR abs/2508.10399 (2025).
[28] Yang Liu, Weixing Chen, Yongjie Bai, Xiaodan Liang, Guanbin Li, Wen Gao, and Liang Lin. 2025. Aligning Cyber Space with Physical World: A Comprehensive Survey on Embodied AI. https://arxiv.org/abs/2407.06886
[29] Ziyu Liu, Zeyi Sun, Yuhang Zang, Xiaoyi Dong, Yuhang Cao, Haodong Duan, Dahua Lin, and Jiaqi Wang. 2025. Visual-RFT: Visual Reinforcement Fine-Tuning. CoRR abs/2503.01785 (2025).
[30] Fanqing Meng, Lingxiao Du, Zongkai Liu, Zhixiang Zhou, Quanfeng Lu, Daocheng Fu, Tiancheng Han, Botian Shi, Wenhai Wang, Junjun He, Kaipeng Zhang, Ping Luo, Yu Qiao, Qiaosheng Zhang, and Wenqi Shao. 2025. MM-Eureka: Exploring the Frontiers of Multimodal Reasoning with Rule-based Reinforcement Learning. https://arxiv.org/abs/2503.07365
[31] Youssef Mroueh. 2025. Reinforcement Learning with Verifiable Rewards: GRPO's Effective Loss, Dynamics, and Success Amplification. https://arxiv.org/abs/2503.06639
https://arxiv.org/abs/2503.06639 [32] OpenAI, :, Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, Alex Iftimie, Alex Karpenko, Alex Tachard Passos, Alexander Neitz, Alexander Prokofiev, Alexander Wei, Allison Tam, Ally Bennett, Ananya Kumar, Andre Saraiva, Andrea Vallone, Andrew Duberstein, Andrew Kondrich, Andrey Mishchenko, Andy Applebaum, Angela Jiang, Ashvin Nair, Barret Zoph, Behrooz Ghorbani, Ben Rossen, Benjamin Sokolowsky, Boaz Barak, Bob McGrew, Borys Minaiev, Botao Hao, Bowen Baker, Brandon Houghton, Brandon McKinzie, Brydon Eastman, Camillo Lugaresi, Cary Bassin, Cary Hudson, Chak Ming Li, Charles de Bourcy, Chelsea Voss, Chen Shen, Chong Zhang, Chris Koch, Chris Orsinger, Christopher Hesse, Claudia Fischer, Clive Chan, Dan Roberts, Daniel Kappler, Daniel Levy, Daniel Selsam, David Dohan, David Farhi, David Mely, David Robinson, Dimitris Tsipras, Doug Li, Dragos Oprica, Eben Freeman, Eddie Zhang, Edmund Wong, Elizabeth Proehl, Enoch Cheung, Eric Mitchell, Eric Wallace, Erik Ritter, Evan Mays, Fan Wang, Felipe Petroski Such, Filippo Raso, Florencia Leoni, Foivos Tsimpourlas, Francis Song, Fred von Lohmann, Freddie Sulit, Geoff Salmon, Giambattista Parascandolo, Gildas Chabot, Grace Zhao, Greg Brockman, Guillaume Leclerc, Hadi Salman, Haiming Bao, Hao Sheng, Hart Andrin, Hessam Bagherinezhad, Hongyu Ren, Hunter Lightman, Hyung Won Chung, Ian Kivlichan, Ian O'Connell, Ian Osband, Ignasi Clavera Gilaberte, Ilge Akkaya, Ilya Kostrikov, Ilya Sutskever, Irina Kofman, Jakub Pachocki, James Lennon, Jason Wei, Jean Harb, Jerry Twore, Jiacheng Feng, Jiahui Yu, Jiayi Weng, Jie Tang, Jieqi Yu, Joaquin Quiñonero Candela, Joe Palermo, Joel Parish, Johannes Heidecke, John Hallman, John Rizzo, Jonathan Gordon, Jonathan Uesato, Jonathan Ward, Joost Huizinga, Julie Wang, Kai Chen, Kai Xiao, Karan Singhal, Karina Nguyen, Karl Cobbe, Katy Shi, Kayla Wood, Kendra Rimbach, Keren Gu-Lemberg, Kevin Liu, Kevin Lu, Kevin Stone, Kevin Yu, Lama Ahmad, Lauren Yang, Leo Liu, Leon Maksin, Leyton Ho, Liam Fedus, Lilian Weng, Linden Li, Lindsay McCallum, Lindsey Held, Lorenz Kuhn, Lukas Kondraciuk, Lukasz Kaiser, Luke Metz, Madelaine Boyd, Maja Trebacz, Manas Joglekar, Mark Chen, Marko Tintor, Mason Meyer, Matt Jones, Matt Kaufer, Max Schwarzer, Meghan Shah, Mehmet Yatbaz, Melody Y. 
Guan, Mengyuan Xu, Mengyuan Yan, Mia Glaese, Mianna Chen, Michael Lampe, Michael Malek, Michele Wang, Michelle Fradin, Mike McClay, Mikhail Pavlov, Miles Wang, Mingxuan Wang, Mira Murati, Mo Bavarian, Mostafa Rohaninejad, Nat McAleese, Neil Chowdhury, Neil Chowdhury, Nick Ryder, Nikolas Tezak, Noam Brown, Ofir Nachum, Oleg Boiko, Oleg Murk, Olivia Watkins, Patrick Chao, Paul Ashbourne, Pavel Izmailov, Peter Zhokhov, Rachel Dias, Rahul Arora, Randall Lin, Rapha Gontijo Lopes, Raz Gaon, Reah Miyara, Reimar Leike, Renny Hwang, Rhythm Garg, Robin Brown, Roshan James, Rui Shu, Ryan Cheu, Ryan Greene, Saachi Jain, Sam Altman, Sam Toizer, Sam Toyer, Samuel Miserendino, Sandhini Agarwal, Santiago Hernandez, Sasha Baker, Scott McKinney, Scottie Yan, Shengjia Zhao, Shengli Hu, Shibani Santurkar, Shraman Ray Chaudhuri, Shuyuan Zhang, Siyuan Fu, Spencer Papay, Steph Lin, Suchir Balaji, Suvansh Sanjeev, Szymon Sidor, Tal Broda, Aidan Clark, Tao Wang, Taylor Gordon, Ted Sanders, Tejal Patwardhan, Thibault Sottiaux, Thomas Degry, Thomas Dimson, Tianhao Zheng, Timur Garipov, Tom Stasi, Trapit Bansal, Trevor Creech, Troy Peterson, Tyna Eloundou, Valerie Qi, Vineet Kosaraju, Vinnie Monaco, Vitchyr Pong, Vlad Fomenko, Weiyi Zheng, Wenda Zhou, Wes McCabe, Wojciech Zaremba, Yann Dubois, Yinghai Lu, Yining Chen, Young Cha, Yu Bai, Yuchen He, Yuchen Zhang, Yunyun Wang, Zheng Shao, and Zhuohan Li. 2024. OpenAI o1 System Card. https://arxiv.org/abs/2412.16720 [33] OpenAI. 2024. GPT-4o mini: Advancing cost-efficient intelligence. https://openai. com/index/gpt-4o-mini-advancing-cost-efficient-intelligence/. [34] OpenAI. 2024. Hello GPT-4o. https://openai.com/index/hello-gpt-4o/. [35] OpenAI. 2025. Introducing GPT-4.1 in the API. https://openai.com/index/gpt-41/. [36] Manolis Savva, Abhishek Kadian, Oleksandr Maksymets, Yili Zhao, Erik Wijmans, Bhavana Jain, Julian Straub, Jia Liu, Vladlen Koltun, Jitendra Malik, Devi Parikh, and Dhruv Batra. 2019. Habitat: A Platform for Embodied AI Research. https://arxiv.org/abs/1904.01201 [37] Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, Y. K. Li, Y. Wu, and Daya Guo. 2024. DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models. https://arxiv.org/abs/2402.03300 [38] Haozhan Shen, Peng Liu, Jingcheng Li, Chunxin Fang, Yibo Ma, Jiajia Liao, Qiaoli Shen, Zilun Zhang, Kangjia Zhao, Qianqian Zhang, Ruochen Xu, and Tiancheng Zhao. 2025. VLM-R1: A Stable and Generalizable R1-style Large Vision-Language Model. CoRR abs/2504.07615 (2025). [39] Lucy Xiaoyang Shi, Brian Ichter, Michael Equi, Liyiming Ke, Karl Pertsch, Quan Vuong, James Tanner, Anna Walling, Haohuan Wang, Niccolo Fusai, Adrian Li-Bell, Danny Driess, Lachy Groom, Sergey Levine, and Chelsea Finn. 2025. Hi Robot: Open-Ended Instruction Following with Hierarchical Vision-LanguageAction Models. https://arxiv.org/abs/2502.19417 [40] Suyeon Shin, Sujin jeon, Junghyun Kim, Gi-Cheon Kang, and Byoung-Tak Zhang. 2025. Socratic Planner: Self-QA-Based Zero-Shot Planning for Embodied Instruction Following. https://arxiv.org/abs/2404.15190 [41] Mohit Shridhar, Jesse Thomason, Daniel Gordon, Yonatan Bisk, Winson Han, Roozbeh Mottaghi, Luke Zettlemoyer, and Dieter Fox. 2020. ALFRED: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks. https://arxiv.org/abs/1912.01734 [42] Chan Hee Song, Jiaman Wu, Clayton Washington, Brian M. Sadler, Wei-Lun Chao, and Yu Su. 2023. 
LLM-Planner: Few-Shot Grounded Planning for Embodied Agents with Large Language Models. https://arxiv.org/ abs/2212.04088 [43] Gemma Team, Aishwarya Kamath, Johan Ferret, Shreya Pathak, Nino Vieillard, Ramona Merhej, Sarah Perrin, Tatiana Matejovicova, Alexandre Ramé, Morgane Rivière, Louis Rouillard, Thomas Mesnard, Geoffrey Cideron, Jean bastien Grill, Sabela Ramos, Edouard Yvinec, Michelle Casbon, Etienne Pot, Ivo Penchev, Gaël Liu, Francesco Visin, Kathleen Kenealy, Lucas Beyer, Xiaohai Zhai, Anton Tsitsulin, Robert Busa-Fekete, Alex Feng, Noveen Sachdeva, Benjamin Coleman, Yi Gao, Basil Mustafa, Iain Barr, Emilio Parisotto, David Tian, Matan Eyal, Colin Cherry, Jan-Thorsten Peter, Danila Sinopalnikov, Surya Bhupatiraju, Rishabh Agarwal, Mehran Kazemi, Dan Malkin, Ravin Kumar, David Vilar, Idan Brusilovsky, Jiaming Luo, Andreas Steiner, Abe Friesen, Abhanshu Sharma, Abheesht Sharma, Adi Mayrav Gilady, Adrian Goedeckemeyer, Alaa Saade, Alex Feng, Alexander Kolesnikov, Alexei Bendebury, Alvin Abdagic, Amit Vadi, András György, André Susano Pinto, Anil Das, Ankur Bapna, Antoine Miech, Antoine Yang, Antonia Paterson, Ashish Shenoy, Ayan Chakrabarti, Bilal Piot, Bo Wu, Bobak Shahriari, Bryce Petrini, Charlie Chen, Charline Le Lan, Christopher A. Choquette-Choo, CJ Carey, Cormac Brick, Daniel Deutsch, Danielle Eisenbud, Dee Cattle, Derek Cheng, Dimitris Paparas, Divyashree Shivakumar Sreepathihalli, Doug Reid, Dustin Tran, Dustin Zelle, Eric Noland, Erwin Huizenga, Eugene Kharitonov, Frederick Liu, Gagik Amirkhanyan, Glenn Cameron, Hadi Hashemi, Hanna Klimczak-Plucińska, Harman Singh, Harsh Mehta, Harshal Tushar Lehri, Hussein Hazimeh, Ian Ballantyne, Idan Szpektor, Ivan Nardini, Jean Pouget-Abadie, Jetha Chan, Joe Stanton, John Wieting, Jonathan Lai, Jordi Orbay, Joseph Fernandez, Josh Newlan, Ju yeong Ji, Jyotinder Singh, Kat Black, Kathy Yu, Kevin Hui, Kiran Vodrahalli, Klaus Greff, Linhai Qiu, Marcella Valentine, Marina Coelho, Marvin Ritter, Matt Hoffman, Matthew Watson, Mayank Chaturvedi, Michael Moynihan, Min Ma, Nabila Babar, Natasha Noy, Nathan Byrd, Nick Roy, Nikola Momchev, Nilay Chauhan, Noveen Sachdeva, Oskar Bunyan, Pankil Botarda, Paul Caron, Paul Kishan Rubenstein, Phil Culliton, Philipp Schmid, Pier Giuseppe Sessa, Pingmei Xu, Piotr Stanczyk, Pouya Tafti, Rakesh Shivanna, Renjie Wu, Renke Pan, Reza Rokni, Rob Willoughby, Rohith Vallu, Ryan Mullins, Sammy Jerome, Sara Smoot, Sertan Girgin, Shariq Iqbal, Shashir Reddy, Shruti Sheth, Siim Põder, Sijal Bhatnagar, Sindhu Raghuram Panyam, Sivan Eiger, Susan Zhang, Tianqi Liu, Trevor Yacovone, Tyler Liechty, Uday Kalra, Utku Evci, Vedant Misra, Vincent Roseberry, Vlad Feinberg, Vlad Kolesnikov, Woohyun Han, Woosuk Kwon, Xi Chen, Yinlam Chow, Yuvein Zhu, Zichuan Wei, Zoltan Egyed, Victor Cotruta, Minh Giang, Phoebe Kirk, Anand Rao, Kat Black, Nabila Babar, Jessica Lo, Erica Moreira, Luiz Gustavo Martins, Omar Sanseviero, Lucas Gonzalez, Zach Gleicher, Tris Warkentin, Vahab Mirrokni, Evan Senter, Eli Collins, Joelle Barral, Zoubin Ghahramani, Raia Hadsell, Yossi Matias, D. Sculley, Slav Petrov, Noah Fiedel, Noam Shazeer, Oriol Vinyals, Jeff Dean, Demis Hassabis, Koray Kavukcuoglu, Clement Farabet, Elena Buchatskaya, Jean-Baptiste Alayrac, Rohan Anil, Dmitry, Lepikhin, Sebastian Borgeaud, Olivier Bachem, Armand Joulin, Alek Andreev, Cassidy Hardin, Robert Dadashi, and Léonard Hussenot. 2025. Gemma 3 Technical Report. 
https://arxiv.org/abs/2503.19786 [44] Kimi Team, Angang Du, Bofei Gao, Bowei Xing, Changjiu Jiang, Cheng Chen, Cheng Li, Chenjun Xiao, Chenzhuang Du, Chonghua Liao, Chuning Tang, Congcong Wang, Dehao Zhang, Enming Yuan, Enzhe Lu, Fengxiang Tang, Flood Sung, Guangda Wei, Guokun Lai, Haiqing Guo, Han Zhu, Hao Ding, Hao Hu, Hao Yang, Hao Zhang, Haotian Yao, Haotian Zhao, Haoyu Lu, Haoze Li, Haozhen Yu, Hongcheng Gao, Huabin Zheng, Huan Yuan, Jia Chen, Jianhang Guo, Jianlin Su, Jianzhou Wang, Jie Zhao, Jin Zhang, Jingyuan Liu, Junjie Yan, Junyan Wu, Lidong Shi, Ling Ye, Longhui Yu, Mengnan Dong, Neo Zhang, Ningchen Ma, Qiwei Pan, Qucheng Gong, Shaowei Liu, Shengling Ma, Shupeng Wei, Sihan Cao, Siying Huang, Tao Jiang, Weihao Gao, Weimin Xiong, Weiran He, Weixiao Huang, Weixin Xu, Wenhao Wu, Wenyang He, Xianghui Wei, Xianqing Jia, Xingzhe Wu, Xinran Xu, Xinxing Zu, Xinyu Zhou, Xuehai Pan, Y. Charles, Yang Li, Yangyang Hu, Yangyang Liu, Yanru Chen, Yejie Wang, Yibo Liu, Yidao Qin, Yifeng Liu, Ying Yang, Yiping Bao, Yulun Du, Yuxin Wu, Yuzhi Wang, Zaida Zhou, Zhaoji Wang, Zhaowei Li, Zhen Zhu, Zheng Zhang, Zhexu Wang, Zhilin Yang, Zhiqi Huang, Zihao Huang, Ziyao Xu, Zonghan Yang, and Zongyu Lin. 2025. Kimi k1.5: Scaling Reinforcement Learning with LLMs. https://arxiv.org/abs/2501.12599 [45] Songjun Tu, Jiahao Lin, Xiangyu Tian, Qichao Zhang, Linjing Li, Yuqian Fu, Nan Xu, Wei He, Xiangyuan Lan, Dongmei Jiang, and Dongbin Zhao. 2025. Enhancing LLM Reasoning with Iterative DPO: A Comprehensive Empirical Investigation. https://arxiv.org/abs/2503.12854 [46] Songjun Tu, Jiahao Lin, Qichao Zhang, Xiangyu Tian, Linjing Li, Xiangyuan Lan, and Dongbin Zhao. 2025. Learning When to Think: Shaping Adaptive Reasoning in R1-Style Models via Multi-Stage RL. https://arxiv.org/abs/2505.10832 [47] Songjun Tu, Jingbo Sun, Qichao Zhang, Xiangyuan Lan, and Dongbin Zhao. 2024. Online Preference-based Reinforcement Learning with Self-augmented Feedback from Large Language Model. https://arxiv.org/abs/ 2412.16878 [48] Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. 2023. Voyager: An Open-Ended Embodied Agent with Large Language Models. https://arxiv.org/ abs/2305.16291 [49] Hongcheng Wang, Yinuo Huang, Sukai Wang, Guanghui Ren, and Hao Dong. 2025. GRPO-MA: Multi-Answer Generation in GRPO for Stable and Efficient Chain-of-Thought Training. https://arxiv.org/abs/2509. 24494 [50] Ke Wang, Junting Pan, Weikang Shi, Zimu Lu, Mingjie Zhan, and Hongsheng Li. 2024. Measuring Multimodal Mathematical Reasoning with MATH-Vision Dataset. Ruoyao Wang, Peter Jansen, Marc-Alexandre Côté, and Prithviraj Ammanabrolu. 2022. ScienceWorld: Is your Agent Smarter than a 5th Grader? https://arxiv.org/abs/2203.07540 [52] Yuxiang Wei, Olivier Duchenne, Jade Copet, Quentin Carbonneaux, Lingming Zhang, Daniel Fried, Gabriel Synnaeve, Rishabh Singh, and Sida I. Wang. 2025. SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution. CoRR abs/2502.18449 (2025). [53] Di Wu, Jiaxin Fan, Junzhe Zang, Guanbo Wang, Wei Yin, Wenhao Li, and Bo Jin. 2025. Reinforced Reasoning for Embodied Planning. CoRR abs/2505.22050 (2025). [54] Zhenyu Wu, Ziwei Wang, Xiuwei Xu, Jiwen Lu, and Haibin Yan. 2023. Embodied Task Planning with Large Language Models. CoRR abs/2307.01848 (2023). [55] Zhenyu Wu, Ziwei Wang, Xiuwei Xu, Jiwen Lu, and Haibin Yan. 2023. Embodied Task Planning with Large Language Models. 
https://arxiv.org/abs/2307.01848 [56] Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, and Tao Gui. 2023. The Rise and Potential of Large Language Model Based Agents: A Survey. https://arxiv.org/abs/2309.07864 [57] Zhiyuan Xu, Kun Wu, Junjie Wen, Jinming Li, Ning Liu, Zhengping Che, and Jian Tang. 2024. A Survey on Robotics with Foundation Models: toward Embodied AI. https://arxiv.org/abs/2402.02385 [58] Rui Yang, Hanyang Chen, Junyu Zhang, Mark Zhao, Cheng Qian, Kangrui Wang, Qineng Wang, Teja Venkat Koripella, Marziyeh Movahedi, Manling Li, Heng Ji, Huan Zhang, and Tong Zhang. 2025. EmbodiedBench: Comprehensive Benchmarking Multi-modal Large Language Models for Vision-Driven Embodied Agents. https://arxiv.org/abs/2502.09560 [59] Edward Yeo, Yuxuan Tong, Morry Niu, Graham Neubig, and Xiang Yue. 2025. Demystifying Long Chain-of-Thought Reasoning in LLMs. https://arxiv.org/abs/2502.03373 [60] Michał Zawalski, William Chen, Karl Pertsch, Oier Mees, Chelsea Finn, and Sergey Levine. 2025. Robotic Control via Embodied Chain-of-Thought Reasoning. https://arxiv.org/abs/2407.08693 [61] Aohan Zeng, Mingdao Liu, Rui Lu, Bowen Wang, Xiao Liu, Yuxiao Dong, and Jie Tang. 2023. AgentTuning: Enabling Generalized Agent Abilities for LLMs. [62] Huaye Zeng, Dongfu Jiang, Haozhe Wang, Ping Nie, Xiaotong Chen, and Wenhu Chen. 2025. ACECODER: Acing Coder RL via Automated Test-Case Synthesis. https://arxiv.org/abs/2502.01718 [63] Jingyi Zhang, Jiaxing Huang, Sheng Jin, and Shijian Lu. 2024. Vision-Language Models for Vision Tasks: A Survey. https://arxiv.org/abs/2304.00685 [64] Jingyi Zhang, Jiaxing Huang, Huanjin Yao, Shunyu Liu, Xikun Zhang, Shijian Lu, and Dacheng Tao. 2025. R1-VL: Learning to Reason with Multimodal Large Language Models via Step-wise Group Relative Policy Optimization. https://arxiv.org/abs/2503.12937 [65] Wenqi Zhang, Mengna Wang, Gangao Liu, Xu Huixin, Yiwei Jiang, Yongliang Shen, Guiyang Hou, Zhe Zheng, Hang Zhang, Xin Li, Weiming Lu, Peng Li, and Yueting Zhuang. 2025. Embodied-Reasoner: Synergizing Visual Search, Reasoning, and Action for Embodied Interactive Tasks. https://arxiv.org/abs/2503.21696 [66] Zijing Zhang, Ziyang Chen, Mingxiao Li, Zhaopeng Tu, and Xiaolong Li. 2025. RLVMR: Reinforcement Learning with Verifiable Meta-Reasoning Rewards for Robust Long-Horizon Agents. CoRR abs/2507.22844 (2025).

Appendix A EXPERIMENTAL DETAILS
A.1 SFT Details
In all experiments conducted in this paper, the hyperparameter settings for the supervised fine-tuning (SFT) phase strictly adhere to the configurations listed in Table 5, using 8× Ascend 910B3 64GB NPUs as computational devices. We empirically set 'num_train_epochs' to 2 in all SFT experiments to ensure that the model learns effectively without severe overfitting.
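As a concrete illustration, the settings in Table 5 could be materialized as a LLaMA-Factory-style YAML config. The snippet below is a minimal sketch only: the key names are taken from Table 5, while the output file name and the launch command are illustrative assumptions.

```python
# Hypothetical sketch: writing the Table 5 settings into a
# LLaMA-Factory-style YAML config. Key names follow Table 5;
# the file name and the CLI invocation are assumptions.
import yaml

sft_config = {
    "stage": "sft",
    "finetuning_type": "full",
    "template": "qwen2_vl",
    "trust_remote_code": True,
    "freeze_vision_tower": True,
    "freeze_multi_modal_projector": True,
    "freeze_language_model": False,
    "image_max_pixels": 262144,
    "video_max_pixels": 16384,
    "cutoff_len": 9216,
    "max_samples": 50000,
    "per_device_train_batch_size": 1,
    "gradient_accumulation_steps": 2,
    "learning_rate": 1.0e-5,
    "num_train_epochs": 2.0,
    "lr_scheduler_type": "cosine",
    "warmup_ratio": 0.1,
    "bf16": True,
    "deepspeed": "ds_z2_config.json",
    "ddp_timeout": 180000000,
    "overwrite_cache": True,
}

with open("sft_qwen2_vl.yaml", "w") as f:
    yaml.safe_dump(sft_config, f, sort_keys=False)
# Training would then be launched with the LLaMA-Factory CLI,
# e.g. `llamafactory-cli train sft_qwen2_vl.yaml` (command assumed).
```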
Table 5: SFT hyperparameter settings used in our experiments.

Hyperparameter                   Value
image_max_pixels                 262144
video_max_pixels                 16384
trust_remote_code                true
stage                            sft
finetuning_type                  full
freeze_vision_tower              true
freeze_multi_modal_projector     true
freeze_language_model            false
deepspeed                        ds_z2_config.json
ddp_timeout                      180000000
template                         qwen2_vl
cutoff_len                       9216
max_samples                      50000
overwrite_cache                  true
per_device_train_batch_size      1
gradient_accumulation_steps      2
learning_rate                    0.00001
num_train_epochs                 2.0
lr_scheduler_type                cosine
warmup_ratio                     0.1
bf16                             true
nproc_per_node                   8

A.2 RFT Details
The hyperparameter settings remained consistent across all experiments during the RFT phase, as listed in Table 6. The RFT phase utilized 4× NVIDIA H20 96GB GPUs as computational devices. For fairness of comparison, the iteration count for the RFT phase was uniformly set to 80 steps.

Table 6: RFT hyperparameter settings used in our experiments.

Hyperparameter                                          Value
data
  data.video_fps                                        2.0
  data.min_pixels                                       40000
  data.max_pixels                                       4194304
  data.max_prompt_length                                6144
  data.max_response_length                              3072
  data.train_batch_size                                 512
  data.shuffle                                          true
actor_rollout_ref.actor
  actor_rollout_ref.actor.ppo_mini_batch_size           128
  actor_rollout_ref.actor.ppo_micro_batch_size_per_gpu  1
  actor_rollout_ref.actor.use_dynamic_bsz               true
  actor_rollout_ref.actor.ppo_max_token_len_per_gpu     16384
  actor_rollout_ref.actor.grad_clip                     1.0
  actor_rollout_ref.actor.clip_ratio                    0.2
  actor_rollout_ref.actor.entropy_coeff                 0.0
  actor_rollout_ref.actor.use_kl_loss                   true
  actor_rollout_ref.actor.kl_loss_type                  low_var_kl
  actor_rollout_ref.actor.kl_loss_coef                  1.0e-2
  actor_rollout_ref.actor.ppo_epochs                    1
  actor_rollout_ref.actor.shuffle                       false
  actor_rollout_ref.actor.ulysses_sequence_parallel_size 1
  actor_rollout_ref.actor.optim.lr                      1.0e-6
  actor_rollout_ref.actor.optim.lr_warmup_steps         -1
  actor_rollout_ref.actor.optim.lr_warmup_steps_ratio   0.0
  actor_rollout_ref.actor.optim.lr_scheduler_type       constant
  actor_rollout_ref.actor.optim.total_training_steps    -1
  actor_rollout_ref.actor.optim.weight_decay            1.0e-2
actor_rollout_ref.ref
  actor_rollout_ref.ref.log_prob_micro_batch_size_per_gpu 2
  actor_rollout_ref.ref.log_prob_use_dynamic_bsz        true
  actor_rollout_ref.ref.log_prob_max_token_len_per_gpu  16384
  actor_rollout_ref.ref.ulysses_sequence_parallel_size  1
actor_rollout_ref.rollout
  actor_rollout_ref.rollout.name                        vllm
  actor_rollout_ref.rollout.temperature                 1.0
  actor_rollout_ref.rollout.top_k                       -1
  actor_rollout_ref.rollout.top_p                       1.0
  actor_rollout_ref.rollout.n                           5
  actor_rollout_ref.rollout.response_length             3072
  actor_rollout_ref.rollout.gpu_memory_utilization      0.6
  actor_rollout_ref.rollout.ignore_eos                  false
  actor_rollout_ref.rollout.enforce_eager               false
  actor_rollout_ref.rollout.tensor_model_parallel_size  4
  actor_rollout_ref.rollout.max_num_batched_tokens      12288
  actor_rollout_ref.rollout.max_num_seqs                1024
algorithm
  algorithm.gamma                                       1.0
  algorithm.lam                                         1.0
  algorithm.adv_estimator                               grpo
  algorithm.use_kl_in_reward                            false
  algorithm.kl_penalty                                  low_var_kl
  algorithm.kl_ctrl.type                                fixed
  algorithm.kl_ctrl.kl_coef                             0.001
  algorithm.kl_ctrl.horizon                             10000
  algorithm.kl_ctrl.target_kl                           0.1
  algorithm.rollout_is                                  false
trainer
  trainer.nnodes                                        1
  trainer.n_gpus_per_node                               4

B DATASET DETAILS
B.1 Embodied Planning Data Processing
The embodied planning task data in our constructed dataset was primarily processed from two datasets released by REBP. We performed the following data processing steps: First, we cleaned the original REBP dataset by removing samples with obvious errors, such as incomplete responses or abnormally truncated content. Next, we addressed the inconsistent ordering of the four key-value pairs within the response content. Following the logical sequence of "Perception → Reasoning → Providing Answers → Formatting Output", we standardized the order of the key-value pairs to: "visual_state_description", "reasoning_and_reflection", "language_plan" and "executable_plan". Additionally, the original data inputs contain a large amount of example information, which consumes numerous tokens. We removed all example prompts to align the data to a zero-shot state, as sketched below. This significantly reduces the data length, shortening each sample by approximately one-third, which in turn shortens model training time and reduces computing resource consumption during training.
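A minimal sketch of this response normalization is shown below. The four field names are the ones given above; the record layout ("prompt"/"response" fields) and the "### Examples" delimiter used to strip few-shot blocks are illustrative assumptions.

```python
# Sketch of the B.1 processing: reorder the four response fields into the
# fixed Perception -> Reasoning -> Answer -> Format sequence, drop broken
# samples, and strip in-context examples to reach a zero-shot state.
import json

KEY_ORDER = [
    "visual_state_description",
    "reasoning_and_reflection",
    "language_plan",
    "executable_plan",
]

def normalize_record(record: dict) -> dict | None:
    response = json.loads(record["response"])
    # Discard samples with incomplete or abnormally truncated responses.
    if not all(k in response and response[k] for k in KEY_ORDER):
        return None
    ordered = {k: response[k] for k in KEY_ORDER}
    # Remove few-shot example blocks from the prompt; the delimiter
    # string is an assumption for illustration.
    prompt = record["prompt"].split("### Examples")[0].rstrip()
    return {"prompt": prompt, "response": json.dumps(ordered)}
```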
B.2 Composition of the Base and Aug Datasets
Base. The Base dataset primarily consists of the REBP open-source SFT embodied planning data processed as described in the previous section, supplemented by a small amount of data extracted from the RoboVQA and MATH-Vision datasets. The final Base dataset contains over 5,000 samples, comprising over 4,000 embodied planning samples and over 1,000 samples for other tasks. The inclusion of other tasks aims to prevent the model from overfitting to embodied planning, which could compromise its inherent multimodal perception and reasoning capabilities.

Aug. The core of the Aug dataset consists of over 40,000 entries processed from the REBP open-source RFT dataset following the methodology described in the previous section. Additionally, it incorporates all 4,000+ embodied planning samples from the Base dataset to prevent catastrophic forgetting during the RFT phase. The final Aug dataset contains over 45,000 samples.

C EVALUATION DETAILS
When evaluating models in EmbodiedBench, the number of input examples during evaluation can be controlled via the "n_shots" parameter, with a default maximum setting of 10. As described in the previous section, we align all training data to the 0-shot state during the processing phase. Therefore, to maintain consistency with the training process, the final evaluation results for our model were obtained using the setting "n_shots=0". When testing general models, our preliminary experiments revealed that the n-shot strategy is indispensable for general models to complete EmbodiedBench tests. Furthermore, the number of examples provided significantly affects the performance of general models, as shown in Table 7. When no examples are provided at all (that is, in 0-shot scenarios), both the Qwen and GPT series models are completely unable to perform embodied planning within EB-ALFRED. Providing just 1 example versus 10 examples also results in a significant gap in test performance. To ensure a conservative comparison, all results for the general models directly tested in this paper in the main results (Table 1) were obtained under the setting "n_shots=10", i.e., the optimal performance configuration. This guarantees that the experimental results provide a conservative estimate of our method's performance advantage. All other test parameters use the default settings of EmbodiedBench.
Table 7: Impact of the n-shot strategy on the performance of general models on EB-ALFRED.

Model                  0-shot   1-shot   10-shot
GPT-4.1                0        46.00    64.67
GPT-4o                 0        34.33    51.67
GPT-4o-mini            0        0        24.00
Qwen2.5-VL-72B-Ins.    0        35.00    43.67
Qwen2.5-VL-7B-Ins.     0        0.33     2.67
Qwen2.5-VL-3B-Ins.     0        0.67     1.33
NEURAL IMPLICIT FLOW FIELDS FOR SPATIO-TEMPORAL MOTION MAPPING
Yufei Zhu∗ Shih-Min Yang∗ Andrey Rudenko† Tomasz P. Kucner‡ Achim J. Lilienthal∗† Martin Magnusson∗
∗Örebro University, Sweden. †Technical University of Munich, Germany. ‡Aalto University, Finland.

ABSTRACT
Safe and efficient robot operation in complex human environments can benefit from good models of site-specific motion patterns. Maps of Dynamics (MoDs) provide such models by encoding statistical motion patterns in a map, but existing representations use discrete spatial sampling and typically require costly offline construction. We propose a continuous spatio-temporal MoD representation based on implicit neural functions that directly map coordinates to the parameters of a Semi-Wrapped Gaussian Mixture Model. This removes the need for discretization and imputation for unevenly sampled regions, enabling smooth generalization across both space and time. Evaluated on a large public dataset with long-term real-world people tracking data, our method achieves better accuracy of motion representation and smoother velocity distributions in sparse regions than available baselines, while remaining computationally efficient. The proposed approach demonstrates a powerful and efficient way of modeling complex human motion patterns.

1 INTRODUCTION
Safe and efficient operation in complex, dynamic and densely crowded human environments is a critical prerequisite for deploying robots in various tasks to support people in their daily activities. Extending the environment model with human motion patterns using a map of dynamics (MoD) is one way to achieve unobtrusive navigation, compliant with existing site-specific motion flows (Palmieri et al., 2017; Swaminathan et al., 2022).

As illustrated in Fig. 1, incorporating MoDs into motion planning provides benefits in crowded environments, since they encode information about the expected motion outside of the robot's sensor range, allowing for less reactive behavior. In the example shown in Fig. 1, the oncoming pedestrian flow is initially outside the robot's observation radius. Without MoD awareness, the robot chooses a direct path to the goal but later becomes trapped in the oncoming crowd. In contrast, a planner informed by MoDs can exploit prior knowledge of human motion patterns to generate a trajectory that aligns with the expected flow, allowing the robot to reach the goal safely and efficiently. MoDs can also be applied to long-term human motion prediction (Zhu et al., 2023). As shown in the right of Fig. 1, MoDs help predict realistic trajectories that implicitly respect the complex topology of the environment, such as navigating around corners or avoiding obstacles.

Several approaches have been proposed for constructing MoDs. Early methods modeled human motion on occupancy grid maps, treating dynamics as shifts in occupancy (Wang et al., 2015; 2016). These approaches struggle with noisy or incomplete trajectory data. Later, velocity-based representations have been introduced, most notably the CLiFF-map (Kucner et al., 2017), which models local motion patterns with Gaussian mixture models, effectively captures multimodality in human flows, and has been successfully used in both robot navigation and prediction tasks. The methods above are computed in batch, given a set of observations. Online learning methods have also been explored to update motion models as new observations arrive (Zhu et al., 2025), allowing robots to adapt to changing environments without costly retraining from scratch.
Figure 1: Indicative applications of Maps of Dynamics (MoDs) for motion planning and human motion prediction. Left: Illustration of how MoDs can support socially aware motion planning. The robot (red diamond) navigates toward the goal (green cross) in the presence of two opposing human flows, with the underlying MoD shown as colored arrows. In this scenario, the oncoming pedestrian flow moving in the opposing direction is initially outside the robot's observation radius (grey dashed circle). Without guidance from the MoD, the planner initially takes a direct path to the goal but eventually becomes stuck and collides with the oncoming flow. In contrast, when informed by the MoD, the robot aligns its trajectory with the motion patterns and reaches the goal efficiently and safely. Right: Human motion prediction with a 60 s horizon. The red line represents the ground truth trajectory and the green line represents the observed trajectory. With MoD guidance, CLiFF-LHMP (Zhu et al., 2023) makes more accurate and realistic predictions than deep learning methods. While the trajectories predicted by Social LSTM (Alahi et al., 2016), TUTR (Shi et al., 2023) and MID (Gu et al., 2022) are often unfeasible (e.g., crossing walls), CLiFF-LHMP predictions implicitly follow the topology of the environment.

Temporal MoDs have also been explored, including STeF-maps (Molina et al., 2022), which apply frequency-based models to encode periodic variations in the flow.

However, existing MoD representations require spatial discretization, with a manually selected map resolution for point locations and interpolation to estimate motion at arbitrary positions. This discretization introduces information loss, reduces flexibility, and complicates tuning across different environments.

To address these challenges, instead of representing motion patterns on a discrete grid, we propose a continuous map of dynamics using an implicit neural representation. We learn a neural function that maps spatio-temporal coordinates to the parameters of a local motion distribution. Implicit neural representations have emerged as powerful tools for encoding continuous functions, providing compact and differentiable models with strong generalization. Leveraging these properties, this formulation allows the model to smoothly generalize across space and time, while maintaining multimodality in places where flows tend to go in more than one direction, since it produces a wrapped Gaussian mixture model of expected motion given a query location and time.

We evaluate our approach on real-world datasets of human motion and show that continuous MoDs not only improve representation accuracy but can also drastically reduce map construction time. Our method yields smoother and more consistent velocity distributions, resulting in more accurate representations of human motion patterns. In contrast to baseline approaches that rely on time-consuming per-cell motion modeling, it computes nearly two orders of magnitude faster than CLiFF-map (Kucner et al., 2017). Unlike the faster but discretised representation of STeF-map (Molina et al., 2022), our method preserves non-discretised directions, yielding results closer in spirit to CLiFF.
In summary, the main contribution of this work is an entirely novel representation of flow-aware maps of dynamics, named NeMo-map. In contrast to existing methods, NeMo allows continuous spatio-temporal queries to generate location- and time-specific multimodal flow predictions. As evidenced by our experimental validation on real-world human motion data, NeMo efficiently learns a highly accurate statistical representation of motion in large-scale maps.

2 RELATED WORK
A map of dynamics (MoD) is a representation that augments the geometric map of an environment with statistical information about observed motion patterns. Unlike static maps, MoDs incorporate spatio-temporal flow information, allowing robots to reason about how humans typically move in a given environment. MoDs can be built from various sources of input, such as trajectories (Bennewitz et al., 2005), dynamics samples, or information about the flow of continuous media (e.g., air or water) (Bennetts et al., 2017). Furthermore, these models can feature diverse underlying representations, including evidence grids, histograms, graphs, or Gaussian mixtures. There are several types of MoDs described in the literature, generally striving to provide an efficient tool for storing and querying information about historical or expected changes in states within the environment.

Occupancy-based methods focus on mapping human dynamics on occupancy grid maps, modeling motion as shifts in occupancy (Wang et al., 2015; 2016). Trajectory-based methods extract human trajectories and group them into clusters, with each cluster representing a typical path through the environment (Bennewitz et al., 2005). These approaches suffer from noisy or incomplete trajectories. To address this, Chen et al. (2016) formulate trajectory modeling as a dictionary learning problem and use augmented semi-nonnegative sparse coding to find local motion patterns characterized by partial trajectory segments.

MoDs can also be based on velocity observations. With velocity mapping, human dynamics can be modeled through flow models. Kucner et al. (2017) presented a probabilistic framework for mapping velocity observations, named the Circular-Linear Flow Field map (CLiFF-map). CLiFF-map represents local flow patterns as a multi-modal, continuous joint distribution of speed and orientation, as further described in Sec. 3. A benefit of CLiFF-map is that it can be built from incomplete or spatially sparse velocity observations (de Almeida et al., 2024), without the need to store a long history of data or deploy advanced tracking algorithms. CLiFF-maps are typically built offline because of the high computational cost of the building process, which limits their applicability in real environments.

Figure 2: Probability density of a Semi-Wrapped Gaussian Mixture Model (SWGMM) with two components, visualized on a cylinder. Orientation θ is wrapped around the circular axis, while speed ρ extends along the vertical axis. The representation allows joint modeling of angular (orientation) and linear (speed) variables, capturing multimodality in motion patterns.

When building flow models, temporal information can also be incorporated. Molina et al. (2022) apply Frequency Map Enhancement (FreMEn; Krajník et al., 2017), a model describing spatio-temporal dynamics in the frequency domain, to build a time-dependent probabilistic map of periodic changes in people flow, called STeF-map.
The motion orientations in STeF-map are discretized. Another method of incorporating temporal information is proposed by Zhi et al. (2019). Their approach uses a kernel recurrent mixture density network to provide a multimodal probability distribution of movement directions of a typical object in the environment over time, though it models only the orientation and not the speed of human motion.

It is important to note that a map of dynamics is not a trajectory prediction model. Whereas trajectory predictors (e.g., LSTMs) aim to forecast the future state of agents by propagating state information forward in time from an initial state, our goal is fundamentally different. We seek to construct a spatio-temporal prior that encodes the distribution of motion patterns in the environment itself. This prior can be queried directly at any spatial coordinate and any time of day, providing motion statistics that can support downstream tasks such as planning or long-term prediction, but it does not by itself generate trajectories for individual agents.

3 METHODOLOGY
3.1 PROBABILISTIC MODELING OF HUMAN MOTION
Our spatio-temporal map of dynamics produces probability distributions over human motion velocities. A velocity v is defined by the pair of speed (a positive linear variable ρ ∈ R+) and orientation (a circular variable θ ∈ [0, 2π)).

To capture the statistical structure of such data, we model human motion patterns with a Semi-Wrapped Gaussian Mixture Model (SWGMM), similar to the CLiFF-map representation (Kucner et al., 2017). While a von Mises distribution would be effective for purely angular variables, it is not suitable when combining circular and linear components. Roy et al. (2012) proposed the von Mises-Gaussian mixture model (VMGMM) to jointly represent one circular variable and linear variables. However, their model assumes independence between the circular and linear dimensions, which limits its ability to capture real-world correlations. To overcome this, the SWGMM (Roy et al., 2016) jointly models circular-linear variables and allows correlations between them.

An SWGMM models velocity v = [ρ, θ]⊤ as a mixture of J Semi-Wrapped Normal Distributions (SWNDs):

    p(v | ξ) = Σ_{j=1}^{J} w_j N^{SW}_{µ_j, Σ_j}(v),    (1)

where ξ = {ξ_j = (w_j, µ_j, Σ_j)}_{j=1}^{J} denotes a finite set of components of the SWGMM. Each w_j is a mixing weight satisfying 0 ≤ w_j ≤ 1, µ_j is the component mean, and Σ_j the covariance. An SWND N^{SW}_{µ,Σ} is formally defined as

    N^{SW}_{µ,Σ}(v) = Σ_{k∈Z} N_{µ,Σ}([ρ, θ]⊤ + 2π[0, k]⊤),    (2)

where k is a winding number. In practice, the PDF can be approximated adequately by taking k ∈ {−1, 0, 1} (Mardia & Jupp, 2008).

The SWGMM density function over velocities can be visualized as a function on a cylinder, as shown in Fig. 2. Orientation values θ are wrapped on the unit circle and the speed ρ runs along the length of the cylinder. This formulation yields a flexible and interpretable probabilistic representation of local human motion, capturing multimodality and correlations between orientation and speed.
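For concreteness, the sketch below evaluates the SWGMM density of Eqs. (1)-(2), truncating the winding sum to k ∈ {−1, 0, 1} as suggested above. The parameter layout (lists of weights, means, and covariances) is an illustrative assumption.

```python
# Minimal NumPy/SciPy sketch of the SWGMM density, Eqs. (1)-(2).
import numpy as np
from scipy.stats import multivariate_normal

def swnd_pdf(v, mu, cov, windings=(-1, 0, 1)):
    """Semi-wrapped normal density of v = [rho, theta]; wraps theta."""
    return sum(
        multivariate_normal.pdf(v + np.array([0.0, 2.0 * np.pi * k]),
                                mean=mu, cov=cov)
        for k in windings
    )

def swgmm_pdf(v, weights, means, covs):
    """Mixture density p(v | xi) = sum_j w_j * N^SW_{mu_j, Sigma_j}(v)."""
    return sum(w * swnd_pdf(v, mu, cov)
               for w, mu, cov in zip(weights, means, covs))

# Example: a two-component SWGMM evaluated at speed 1.2 m/s, heading pi/2.
p = swgmm_pdf(
    np.array([1.2, np.pi / 2]),
    weights=[0.7, 0.3],
    means=[np.array([1.0, np.pi / 2]), np.array([0.8, 3 * np.pi / 2])],
    covs=[0.1 * np.eye(2), 0.2 * np.eye(2)],
)
```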
3.2 LEARNING CONTINUOUS MOTION FIELDS
Previous MoD approaches, such as CLiFF-maps and STeF-maps, rely on discretizing the environment into cells and fitting local probability models. Discretization leads to information loss and prevents querying at arbitrary locations. We address this by introducing a continuous map of dynamics parameterized by a neural implicit representation. The method overview is shown in Fig. 3.

In many real-world environments, human motion patterns exhibit strong daily periodicity, such as morning and evening rush hours or lunchtime activity. Motivated by this structure, we model time as a periodic variable and condition the MoD on the time of day. This assumption allows the representation to capture long-term temporal variations without requiring sequential rollouts, and enables efficient queries of motion dynamics at arbitrary spatio-temporal coordinates (x, y, t).

Problem statement. Given a dataset D of N spatio-temporal motion samples,

    D = {(x_i, t_i, v_i)}_{i=1}^{N},

where x_i ∈ R² is the spatial coordinate, t_i ∈ [0, 1] is the normalized time of day, and v_i = [ρ_i, θ_i]⊤ is the observed velocity, we learn a continuous function Φ_θ that maps a spatio-temporal coordinate (x, t) to SWGMM parameters:

    Φ_θ(x, t) = {w_j(x, t), µ_j(x, t), Σ_j(x, t)}_{j=1}^{J},    (3)

where J is the number of mixture components, with weights w_j ≥ 0 and Σ_{j=1}^{J} w_j = 1. Each of the J components models the joint velocity v = [ρ, θ]⊤ with a Semi-Wrapped Normal Distribution N^{SW}_{µ,Σ}. At inference time, querying Φ_θ at any coordinate yields the full set of SWGMM parameters, resulting in a continuous probabilistic representation of motion dynamics. This formulation enables the model to learn smooth, continuous motion fields while retaining the multimodal characteristic of human motion.

Figure 3: Method overview. A spatio-temporal query (x, t) is mapped to parameters of a Semi-Wrapped Gaussian Mixture Model (SWGMM). The spatial coordinate x is used to interpolate features from a learnable spatial grid G_s, and the temporal coordinate t is encoded using a SIREN network. The spatial features f_s(x), temporal encoding f_t(t), and raw coordinates are concatenated and passed through an MLP, which outputs the parameters of an SWGMM, providing a continuous, multimodal probabilistic representation of motion dynamics at the queried location and time.

Architecture. In our neural representation, we parameterize Φ_θ with a fully connected multilayer perceptron (MLP), conditioned on both spatial features f_s(x) ∈ R^{C_s} and a temporal encoding f_t(t) ∈ R^{C_t}. For the spatial features, a learnable grid G_s ∈ R^{H×W×C_s} is queried at location x by bilinear interpolation, producing f_s(x). This captures local variations in motion patterns while remaining continuous in space. For the temporal encoding, we encode t with SIREN, the sinusoidal representation network (Sitzmann et al., 2020), which uses periodic activation functions throughout the network. The MLP input concatenates the raw coordinates and the spatial and temporal features, z = [x, t, f_s(x), f_t(t)], and the MLP outputs the SWGMM parameters. This feature-conditioned representation enables the model to flexibly encode local variations in motion dynamics while maintaining global smoothness across both space and time.

Likelihood and training. For a spatio-temporal coordinate (x_i, t_i), the velocity likelihood under the predicted SWGMM is

    p(v_i | Φ_θ(x_i, t_i)) = Σ_{j=1}^{J} w_j(x_i, t_i) N^{SW}_{µ_j(x_i,t_i), Σ_j(x_i,t_i)}(v_i),    (4)

where N^{SW} denotes the semi-wrapped normal distribution that wraps the angular component (see Eq. (2)). The model is trained by minimizing the negative log-likelihood of the motion samples in the dataset under the probability density function (PDF) produced by the model:

    L(θ) = −(1/N) Σ_{i=1}^{N} log p(v_i | Φ_θ(x_i, t_i)).    (5)
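A condensed PyTorch sketch of the field Φ_θ is given below: a learnable spatial feature grid queried by bilinear interpolation, a small SIREN for the time of day, and an MLP head emitting 6J raw SWGMM parameters per query. It follows the concatenation z = [x, t, f_s(x), f_t(t)] described above (the implementation in Sec. 4.3 fuses the two streams with FiLM instead); the grid size and feature widths are assumptions.

```python
# Sketch of Phi_theta: spatial grid features + SIREN time encoding + MLP.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeMoField(nn.Module):
    def __init__(self, H=64, W=64, Cs=32, Ct=16, J=3):
        super().__init__()
        self.grid = nn.Parameter(0.01 * torch.randn(1, Cs, H, W))  # G_s
        # Two-layer SIREN: sine activations with omega_0 = 30, then 1.
        self.t1 = nn.Linear(1, Ct)
        self.t2 = nn.Linear(Ct, Ct)
        self.mlp = nn.Sequential(
            nn.Linear(3 + Cs + Ct, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 6 * J),            # raw SWGMM parameters
        )

    def forward(self, x, t):
        """x: (N, 2) coordinates in [-1, 1]; t: (N, 1) time of day in [0, 1]."""
        fs = F.grid_sample(                  # bilinear lookup f_s(x) in G_s
            self.grid, x.view(1, -1, 1, 2), align_corners=True
        ).squeeze(0).squeeze(-1).T           # -> (N, Cs)
        ft = torch.sin(self.t2(torch.sin(30.0 * self.t1(t))))  # f_t(t)
        z = torch.cat([x, t, fs, ft], dim=-1)  # z = [x, t, f_s(x), f_t(t)]
        return self.mlp(z)                   # (N, 6J) raw outputs
```

Training would then minimize Eq. (5) over mini-batches with Adam: convert the raw outputs into valid SWGMM parameters (see the sketch in Sec. 4.3), evaluate Eq. (4) at each observed v_i, and backpropagate the mean negative log-likelihood.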
Figure 4: Layout of the ATC dataset environment (Bršćić et al., 2013), showing the main east corridor and open areas annotated with semantic information such as entry and exit points, shops, seating areas, and stairs.

4 RESULTS
4.1 DATASET
To evaluate spatio-temporal maps of dynamics that capture changes of human motion patterns over time, it is essential to use datasets that span multiple days and reflect variations in human motion patterns throughout the day. Our experiments were conducted using a real-world dataset, ATC (Bršćić et al., 2013), which provides sufficient multi-day coverage for evaluation. This dataset was collected in a shopping mall in Japan using multiple 3D range sensors, recording pedestrian trajectories between 9:00 and 21:00 over a total of 92 days. ATC covers a large indoor environment, with a total covered area of approximately 900 m². Because of the large scale of the dataset, we use the first four days in the dataset (2012 Oct 24, 2012 Oct 28, 2012 Oct 31, and 2012 Nov 04) for our experiments. The data from October 24 is used for training, while the other three days are used for evaluation. The observation rate is downsampled from over 10 Hz to 1 Hz. After downsampling, the training set contains 717,875 recorded motion samples, and the test set contains 5,114,478 samples.
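The sketch below illustrates one way such tracking data could be turned into (x, t, v) training samples: downsample each trajectory to 1 Hz, normalize the time stamp to a fraction of the day, and derive speed and orientation by finite differences between consecutive positions. The exact velocity-estimation procedure used for ATC is an assumption here.

```python
# Sketch: raw pedestrian tracks -> (x, y, t_norm, rho, theta) samples.
import numpy as np

def to_samples(track, day_seconds=86400.0):
    """track: array of (timestamp [s], x [m], y [m]) rows at ~10 Hz."""
    # Keep the first observation of every whole second (1 Hz downsampling).
    keep = np.unique(track[:, 0].astype(int), return_index=True)[1]
    tr = track[keep]
    samples = []
    for a, b in zip(tr[:-1], tr[1:]):
        dt = b[0] - a[0]
        if dt <= 0:
            continue
        dx, dy = b[1] - a[1], b[2] - a[2]
        rho = np.hypot(dx, dy) / dt                   # speed (m/s)
        theta = np.arctan2(dy, dx) % (2 * np.pi)      # orientation in [0, 2pi)
        t_norm = (a[0] % day_seconds) / day_seconds   # time of day in [0, 1]
        samples.append((a[1], a[2], t_norm, rho, theta))
    return np.asarray(samples)
```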
4.2 BASELINES
Circular-Linear Flow Field Map (CLiFF-map). CLiFF-map (Kucner et al., 2017) represents motion patterns by associating each discretized grid location with an SWGMM fitted from local observations. The environment is divided into a set of grid locations, and each grid location aggregates motion samples within a fixed radius. The SWGMM parameters at each grid location are estimated via expectation-maximization (EM) (Dempster et al., 1977), with the number and initial positions of the mixture components determined using mean shift clustering (Cheng, 1995). When training the CLiFF-map, the convergence precision is set to 1e-5 for both the mean shift and EM algorithms, with a maximum iteration count of 100. The grid resolution is set to 1 m. To evaluate different hours, we train separate CLiFF-maps for each hour using the motion samples observed during that time.

Spatio-Temporal Flow Map (STeF-map). STeF-map (Molina et al., 2022) is a spatio-temporal map of dynamics that models the likelihood of human motion directions using harmonic functions. Each grid location maintains k_stef temporal models, corresponding to k_stef discretized orientations of people moving through that location over time. By modeling periodic patterns, STeF-map captures long-term temporal variations in crowd movements and can predict activities at specific times of day under the assumption of periodicity in the environment. Following Molina et al. (2022), we set k_stef = 8 in the experiments, and the model order for training STeF-map, i.e. the number of periodicities, is set to 2.

Online CLiFF-map. Online CLiFF-map (Zhu et al., 2025) extends the static CLiFF model by updating the SWGMM parameters incrementally as new motion observations become available. Each grid location maintains an SWGMM, which is initialized upon first receiving observations and subsequently updated using the stochastic expectation-maximization (sEM) algorithm (Cappé & Moulines, 2009). In sEM, the expectation step of the original EM algorithm is replaced by a stochastic approximation step, while the maximization step remains unchanged. Like the static CLiFF-map, Online CLiFF-map outputs SWGMM parameters at each grid location, but supports continuous adaptation over time. In the experiments, we follow a spatio-temporal setting by generating an online CLiFF-map for each hour. Observations collected in an hour interval are treated as the new data batch for updating the SWGMMs, producing a temporally adaptive representation of motion dynamics.

4.3 IMPLEMENTATION DETAILS
The output of the network Φ_θ parameterizes an SWGMM over speed and orientation. For J mixture components, the network predicts 6J raw values per query coordinate. Each component j is defined by: a mixture weight w_j, obtained by applying a softmax over the raw weights; a mean speed µ_{j,s} = max(0, µ̃_{j,s}) and mean orientation µ_{j,a} = µ̃_{j,a} mod 2π; variances σ²_{j,s} = exp(clamp(ṽ_{j,s}, −10, 10)) and σ²_{j,a} = exp(clamp(ṽ_{j,a}, −10, 10)); and a correlation coefficient ρ_j = 0.99 tanh(ρ̃_j). Altogether, the network defines a valid SWGMM with parameters as in Eq. (3), where µ_j = (µ_{j,s}, µ_{j,a}) and Σ_j is the covariance matrix with diagonal entries σ²_{j,s}, σ²_{j,a} and correlation ρ_j. In the experiments, J is set to 3 and coordinates are normalized to [−1, 1]. The spatial input is processed by an MLP with hidden sizes [128, 64] and ReLU activations. The temporal input is processed by a two-layer SIREN (sine activations with ω_0 = 30 in the first layer and ω_0 = 1 in the hidden layer). The two streams are fused via FiLM modulation (Perez et al., 2018). The fused representation is passed to a linear head producing 6J outputs; a sketch of the output mapping follows below. Models are trained using the Adam optimizer with learning rate 10⁻³ for 100 epochs. An ablation of alternative temporal encodings is provided in Sec. 4.6.
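The following PyTorch sketch implements the output-head mapping just described: the 6J raw network outputs per query are squashed into valid SWGMM parameters. The tensor layout (one row of six raw values per component) is an assumption.

```python
# Sketch: raw (N, 6J) network outputs -> valid SWGMM parameters.
import math
import torch

def raw_to_swgmm(raw, J=3):
    raw = raw.view(-1, J, 6)
    w = torch.softmax(raw[..., 0], dim=-1)                    # mixture weights
    mu_s = torch.clamp(raw[..., 1], min=0.0)                  # mean speed >= 0
    mu_a = raw[..., 2] % (2 * math.pi)                        # mean orientation
    var_s = torch.exp(torch.clamp(raw[..., 3], -10.0, 10.0))  # speed variance
    var_a = torch.exp(torch.clamp(raw[..., 4], -10.0, 10.0))  # angular variance
    corr = 0.99 * torch.tanh(raw[..., 5])                     # bounded correlation
    return w, mu_s, mu_a, var_s, var_a, corr
```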
4.4 QUANTITATIVE RESULTS
To quantitatively evaluate the accuracy of modeling human motion patterns (MoD quality), we use the negative log-likelihood (NLL). An MoD represents human motion as a probability distribution over velocity conditioned on a spatio-temporal coordinate (x, y, t), implemented as either an SWGMM (our method and the CLiFF-maps) or a histogram (STeF-map). To evaluate representation accuracy, we use test data consisting of observed human motions in the same environment. For each test sample (x, y, t), we query the MoD to obtain the predicted distribution and compute the likelihood of the observed motion under this distribution. A higher likelihood indicates that the predicted distribution better aligns with the observed data. We report NLL for numerical stability and easy comparison, so lower NLL values correspond to more accurate motion representations, i.e., higher quality MoDs.

Table 1 reports the accuracy results. Our method achieves the lowest NLL (0.775 ± 2.052), outperforming all baselines. Online CLiFF-map, CLiFF-map, and STeF-map exhibit significantly higher NLLs, with paired t-tests showing p < 0.001 under the null hypothesis that the baseline NLL is less than or equal to ours. The reductions relative to our method are +0.752 (Online CLiFF-map), +1.189 (CLiFF-map), and +4.801 (STeF-map), all with 95% confidence intervals.

Compared with STeF-maps, methods based on SWGMMs, such as ours and CLiFF-map, offer two key advantages. They jointly model speed and orientation, whereas STeF-maps do not include speed information. In addition, SWGMMs represent orientation continuously rather than through a discretized 8-bin histogram as in STeF-map. These aspects lead to a more accurate representation of human motion and contribute to the improved performance.

The limitations of CLiFF-maps stem from discretizing the environment into grid cells, with each cell storing a locally fitted SWGMM. This grid-based design limits spatial resolution and introduces discontinuities at cell boundaries in both space and time. In particular, dividing time into hourly intervals is a coarse approximation that can produce abrupt changes, since human motion patterns do not necessarily shift at exact hour boundaries. In contrast, our method models the MoD as a continuous neural implicit representation. This enables smooth generalization across space and time, supports queries at arbitrary spatio-temporal coordinates, and provides a compact representation that avoids the memory overhead of storing distributions for every grid cell.

We also compare the map building time of the baselines against our approach, as shown in Table 2. For the baselines, the training time corresponds to convergence on all grid cells, while for our method it corresponds to the neural network training time. Our method trains in 19 minutes, substantially faster than CLiFF-map (over 30 hours), while achieving higher accuracy. These results highlight the practicality of continuous MoDs for real-time applications, combining both accuracy and efficiency.

Table 1: Accuracy evaluation on the ATC dataset using average negative log-likelihood (NLL), where lower values indicate better accuracy. We report mean ± standard deviation, together with the reduction in NLL relative to our method and the corresponding 95% confidence interval (CI).

Method            NLL↓            NLL reduction (vs Ours)   95% CI
Ours              0.775 ± 2.052   –                          –
Online CLiFF-map  1.527 ± 4.156   +0.752                     [0.749, 0.755]
CLiFF-map         1.964 ± 4.953   +1.189                     [1.185, 1.192]
STeF-map          5.576 ± 9.314   +4.801                     [4.794, 4.809]

Table 2: Training and inference times for map building on the ATC dataset. Lower values indicate faster performance. Experiments were conducted on a desktop computer equipped with an Intel i9-12900K CPU and an NVIDIA GeForce RTX 3060 GPU running Ubuntu 20.04.

Method            Train time (minute)↓   Inference time (second)↓
Ours              19.26                   1.363 × 10⁻⁶
Online CLiFF-map  23.859                  1.914 × 10⁻³
CLiFF-map         1831                    1.914 × 10⁻³
STeF-map          0.815                   5.665 × 10⁻⁵

4.5 QUALITATIVE RESULTS
Figure 5: NeMo-map in the ATC dataset, for 09:00 (left), 11:00 (middle) and 18:00 (right), showing changes of motion patterns throughout the day. Predicted Semi-Wrapped Gaussian Mixture Models (SWGMMs) are visualized. At each location, arrow color encodes orientation and arrow length encodes speed. The top row shows multimodality by rendering all SWGMM components with transparency proportional to their weights, while the bottom row more clearly shows the dominant flow, displaying only the mixture component with the largest weight.

Examples of the NeMo-map are shown in Fig. 5. The model is queried at regular spatial intervals at three different times of day, at locations where human motion appears in the training dataset. Across the day, the map adapts smoothly to changes in human motion patterns. For example, in the east corridor (right side of the ATC map), the flow is directed left/upwards in the morning, shifts direction at noon, where pedestrians keep left when facing oncoming flows, and turns into right/downwards in the evening. (These patterns are most clearly seen when displaying only the SWGMM mixture component with the largest weight, in the bottom row, but please note that the map maintains a representation of the full multimodal distribution at all times.) The generated flow fields capture such temporal variations and implicitly align with the environment's topology, even though no explicit map was provided during training. For instance, speeds decrease near resting benches, motion flows pass through exits, and flows follow the corridors.
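A Fig. 5-style query could be sketched as below: evaluate the field on a regular spatial grid at a fixed time of day and keep, per location, the mean velocity of the SWGMM component with the largest weight. The `model` and `raw_to_swgmm` names follow the earlier sketches and are assumptions, not the released implementation.

```python
# Sketch: dominant-component flow field for visualization (cf. Fig. 5).
import torch

@torch.no_grad()
def dominant_flow(model, t_of_day, n=50):
    xs = torch.linspace(-1, 1, n)
    X, Y = torch.meshgrid(xs, xs, indexing="ij")
    coords = torch.stack([X.reshape(-1), Y.reshape(-1)], dim=-1)  # (n*n, 2)
    t = torch.full((coords.shape[0], 1), t_of_day)
    w, mu_s, mu_a, *_ = raw_to_swgmm(model(coords, t))
    j = w.argmax(dim=-1, keepdim=True)            # index of dominant component
    speed = mu_s.gather(-1, j).squeeze(-1)        # arrow length
    heading = mu_a.gather(-1, j).squeeze(-1)      # arrow color / direction
    return coords, speed, heading  # e.g. feed to a matplotlib quiver plot
```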
4.6 ABLATION STUDY
We perform an ablation study on alternative methods for temporal encoding. In our method, we use a SIREN network to process the temporal input. For comparison, we evaluate two alternative mappings of time t into a temporal feature vector f_t(t):

• Temporal grid. A learnable grid G_t ∈ R^{K×C_t} that captures daily periodicity, where K is the number of discretized time bins (set to 24). The grid feature corresponding to each time bin is concatenated with the spatial feature and passed through an MLP with hidden sizes [128, 64] and ReLU activations.

• Fourier features. The time input t is mapped into a periodic embedding using Fourier features (Tancik et al., 2020; Mildenhall et al., 2020). For F frequencies, we construct

    f_t(t) = [sin(2ⁿ · 2πt), cos(2ⁿ · 2πt)]_{n=0}^{F−1}.

This representation enables the model to capture time-dependent variations at multiple resolutions. The implementation is identical to the temporal grid variant, except the time grid is replaced by Fourier features with F = 4, yielding an 8-dimensional temporal embedding (see the sketch after Table 3).

Table 3 summarizes the results of the ablation study on temporal encoding. Replacing SIREN with a temporal grid or Fourier features results in higher NLL, confirming the advantage of using SIREN for modeling continuous temporal dynamics. Among the alternatives, Fourier features outperform the temporal grid, but both remain less accurate than SIREN. The reductions in NLL relative to our method are 0.082 for the temporal grid and 0.063 for Fourier features, with 95% confidence intervals.

Table 3: Comparing alternative time encodings for NeMo-map, again using the ATC dataset and comparing average negative log-likelihood (NLL), where lower indicates better accuracy. We report mean ± standard deviation, together with the reduction in NLL relative to our method and the corresponding 95% confidence interval (CI). All models are trained using the Adam optimizer with learning rate 10⁻³ for 100 epochs.

Method            NLL↓            NLL reduction (vs Ours)   95% CI
Ours              0.775 ± 2.052   –                          –
Temporal grid     0.857 ± 2.113   +0.082                     [0.081, 0.083]
Fourier features  0.838 ± 2.105   +0.063                     [0.062, 0.064]
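The Fourier-feature variant compared above admits a direct implementation; the sketch below follows the stated formula with F = 4 frequencies, yielding the 8-dimensional embedding.

```python
# Sketch of the Fourier-feature time encoding used in the ablation:
# f_t(t) = [sin(2^n * 2*pi*t), cos(2^n * 2*pi*t)] for n = 0..F-1.
import numpy as np

def fourier_time_features(t, F=4):
    """t: scalar or array of normalized times of day in [0, 1]."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    freqs = 2.0 ** np.arange(F) * 2.0 * np.pi       # 2^n * 2*pi
    angles = t[:, None] * freqs[None, :]            # (N, F)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)  # (N, 2F)
```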
5 CONCLUSIONS
We introduced a first-of-its-kind continuous spatio-temporal map-of-dynamics representation, NeMo-map, a novel formulation of MoDs using implicit neural representations. In contrast to prior discretized methods such as CLiFF-map and STeF-map, our approach parametrizes a continuous neural function which outputs the parameters of a Semi-Wrapped Gaussian Mixture Model at arbitrary spatio-temporal coordinates. The model enables smooth generalization across space and time, and provides a compact representation that avoids storing per-cell distributions.

Through experiments on the large-scale ATC dataset, we demonstrated that NeMo-map achieves substantially higher accuracy (lower negative log-likelihood) than existing MoD baselines, while reducing map building time. Qualitative results further show that the learned flow fields capture multimodality, temporal variations, and environment topology without requiring explicit maps. Ablation studies confirmed the advantage of using SIREN-based temporal encoding over discrete or Fourier alternatives.

In summary, the results highlight continuous MoDs as a practical and scalable tool for modeling human motion dynamics. By combining accuracy, efficiency, and flexibility, the representation offers a powerful prior for downstream tasks such as socially aware navigation, long-term motion prediction, and localization in dynamic environments. In future work, we plan to extend this formulation with online update mechanisms to adapt continuously to evolving crowd behaviors, further bridging the gap toward long-term real-world deployment.

REFERENCES
A. Alahi, K. Goel, V. Ramanathan, A. Robicquet, L. Fei-Fei, and S. Savarese. Social LSTM: Human trajectory prediction in crowded spaces. In Proc. of the IEEE Conf. on Comp. Vis. and Pat. Rec. (CVPR), pp. 961-971, 2016.

Victor Hernandez Bennetts, Tomasz Piotr Kucner, Erik Schaffernicht, Patrick P. Neumann, Han Fan, and Achim J. Lilienthal. Probabilistic air flow modelling using turbulent and laminar characteristics for ground and aerial robots. IEEE Robotics and Automation Letters, 2(2):1117-1123, 2017.

M. Bennewitz, W. Burgard, G. Cielniak, and S. Thrun. Learning motion patterns of people for compliant robot motion. Int. J. of Robotics Research, 24(1):31-48, 2005.

D. Bršćić, T. Kanda, T. Ikeda, and T. Miyashita. Person tracking in large public spaces using 3-D range sensors. IEEE Trans. on Human-Machine Systems, 43(6):522-534, 2013.

Olivier Cappé and Eric Moulines. On-line expectation-maximization algorithm for latent data models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 71(3):593-613, 2009.

Y. F. Chen, M. Liu, and J. P. How. Augmented dictionary learning for motion prediction. In Proc. of the IEEE Int. Conf. on Robotics and Automation (ICRA), pp. 2527-2534, 2016.

Yizong Cheng. Mean shift, mode seeking, and clustering. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1995.

Tiago Rodrigues de Almeida, Yufei Zhu, Andrey Rudenko, Tomasz P. Kucner, Johannes A. Stork, Martin Magnusson, and Achim J. Lilienthal. Trajectory prediction for heterogeneous agents: A performance analysis on small and imbalanced datasets. IEEE Robotics and Automation Letters, pp. 1-8, 2024.

A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society: Series B (Methodological), 39(1):1-38, 1977.

Tianpei Gu, Guangyi Chen, Junlong Li, Chunze Lin, Yongming Rao, Jie Zhou, and Jiwen Lu. Stochastic trajectory prediction via motion indeterminacy diffusion. In Proc. of the IEEE Conf. on Comp. Vis. and Pat. Rec. (CVPR), 2022.

T. Krajník, J. P. Fentanes, J. M. Santos, and T. Duckett. FreMEn: Frequency map enhancement for long-term mobile robot autonomy in changing environments. IEEE Trans. on Robotics (TRO), 33(4):964-977, 2017.

T. P. Kucner, M. Magnusson, E. Schaffernicht, V. H. Bennetts, and A. J. Lilienthal. Enabling flow awareness for mobile robots in partially observable environments. IEEE Robotics and Automation Letters, 2(2):1093-1100, 2017.
Lilienthal. Enabling flow awareness for mobile robots in partially observable environments. IEEE Robotics and Automation Letters, 2(2):1093–1100, 2017. K. V. Mardia and P. E. Jupp. Directional Statistics. Wiley, 2008. Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In Proc. of the Europ. Conf. on Comp. Vision (ECCV), 2020. Sergi Molina, Grzegorz Cielniak, and Tom Duckett. Robotic exploration for learning human motion patterns. IEEE Trans. on Robotics and Automation (TRO), 2022. Luigi Palmieri, Tomasz P Kucner, Martin Magnusson, Achim J Lilienthal, and K. O. Arras. Kino- dynamic motion planning on gaussian mixture fields. In Proc. of the IEEE Int. Conf. on Robotics and Automation (ICRA), pp. 6176–6181. IEEE, 2017. 10 Ethan Perez, Florian Strub, Harm de Vries, Vincent Dumoulin, and Aaron C. Courville. Film: Visual reasoning with a general conditioning layer. In Proc. of the AAAI Conf. on Artificial Intelligence (AAAI), 2018. Anandarup Roy, Swapan K. Parui, and Utpal Roy. A mixture model of circular-linear distributions for color image segmentation. International Journal of Computer Applications, 58(9):6–11, 11 2012. ISSN 0975-8887. Anandarup Roy, Swapan K. Parui, and Utpal Roy. SWGMM: a semi-wrapped gaussian mixture model for clustering of circular-linear data. Pattern Anal. Appl., 19(3):631–645, 2016. Liushuai Shi, Le Wang, Sanping Zhou, and Gang Hua. Trajectory unified transformer for pedestrian trajectory prediction. In Proc. of the IEEE Int. Conf. on Computer Vision (ICCV), pp. 9675–9684, October 2023. Vincent Sitzmann, Julien N.P. Martel, Alexander W. Bergman, David B. Lindell, and Gordon Wet- zstein. Implicit neural representations with periodic activation functions. In Advances in Neural Inf. Proc. Syst. (NeurIPS), 2020. Chittaranjan Srinivas Swaminathan, Tomasz Piotr Kucner, Martin Magnusson, Luigi Palmieri, Sergi Molina, Anna Mannucci, Federico Pecora, and Achim J. Lilienthal. Benchmarking the utility of maps of dynamics for human-aware motion planning. Frontiers in Robotics and AI, 9, 2022. ISSN 2296-9144. Matthew Tancik, Pratul P. Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan T. Barron, and Ren Ng. Fourier features let net- works learn high frequency functions in low dimensional domains. Advances in Neural Inf. Proc. Syst. (NeurIPS), 2020. Zhan Wang, Patric Jensfelt, and John Folkesson. Modeling spatial-temporal dynamics of human movements for predicting future trajectories. In Workshop Proc. of the AAAI Conf. on Artificial Intelligence ”Knowledge, Skill, and Behavior Transfer in Autonomous Robots”, 2015. Zhan Wang, Patric Jensfelt, and John Folkesson. Building a human behavior map from local obser- vations. In Proc. of the IEEE Int. Symp. on Robot and Human Interactive Comm. (RO-MAN), pp. 64–70, 2016. W. Zhi, R. Senanayake, L. Ott, and F. Ramos. Spatiotemporal learning of directional uncertainty in urban environments with kernel recurrent mixture density networks. IEEE Robotics and Automa- tion Letters, 4(4):4306–4313, 2019. Yufei Zhu, Andrey Rudenko, Tomasz P. Kucner, Luigi Palmieri, Kai O. Arras, Achim J. Lilienthal, and Martin Magnusson. CLiFF-LHMP: Using spatial dynamics patterns for long-term human motion prediction. In Proc. of the IEEE Int. Conf. on Intell. Robots and Syst. (IROS), 2023. Yufei Zhu, Andrey Rudenko, Luigi Palmieri, Lukas Heuer, Achim J. 
Lilienthal, and Martin Mag- nusson. Fast online learning of cliff-maps in changing environments. In Proc. of the IEEE Int. Conf. on Robotics and Automation (ICRA), 2025. 11
NEURAL IMPLICIT FLOW FIELDS FOR SPATIOTEMPORAL MOTION MAPPING

Yufei Zhu∗ Shih-Min Yang∗ Andrey Rudenko† Tomasz P. Kucner‡ Achim J. Lilienthal∗† Martin Magnusson∗
∗Örebro University, Sweden. †Technical . ‡Aalto University, Finland.

ABSTRACT

Safe and efficient robot operation in complex human environments can benefit from good models of site-specific motion patterns. Maps of Dynamics (MoDs) provide such models by encoding statistical motion patterns in a map, but existing representations use discrete spatial sampling and typically require costly offline construction. We propose a continuous spatio-temporal MoD representation based on implicit neural functions that directly map coordinates to the parameters of a Semi-Wrapped Gaussian Mixture Model. This removes the need for discretization and for imputation in unevenly sampled regions, enabling smooth generalization across both space and time. Evaluated on a large public dataset with long-term real-world people tracking data, our method achieves more accurate motion representation and smoother velocity distributions in sparse regions than available baselines, while remaining computationally efficient. The proposed approach demonstrates a powerful and efficient way of modeling complex human motion patterns.

1 INTRODUCTION

Safe and efficient operation in complex, dynamic and densely crowded human environments is a critical prerequisite for deploying robots in various tasks to support people in their daily activities. Extending the environment model with human motion patterns using a map of dynamics (MoD) is one way to achieve unobtrusive navigation, compliant with existing site-specific motion flows (Palmieri et al., 2017; Swaminathan et al., 2022). As illustrated in Fig. 1, incorporating MoDs into motion planning provides benefits in crowded environments, since they encode information about the expected motion outside of the robot's sensor range, allowing for less reactive behavior. In the example shown in Fig. 1, the oncoming pedestrian flow is initially outside the robot's observation radius. Without MoD awareness, the robot chooses a direct path to the goal but later becomes trapped in the oncoming crowd. In contrast, a planner informed by MoDs can exploit prior knowledge of human motion patterns to generate a trajectory that aligns with the expected flow, allowing the robot to reach the goal safely and efficiently. MoDs can also be applied to long-term human motion prediction (Zhu et al., 2023). As shown in the right of Fig. 1, MoDs help predict realistic trajectories that implicitly respect the complex topology of the environment, such as navigating around corners or avoiding obstacles.

Several approaches have been proposed for constructing MoDs. Early methods modeled human motion on occupancy grid maps, treating dynamics as shifts in occupancy (Wang et al., 2015; 2016). These approaches struggle with noisy or incomplete trajectory data. Later, velocity-based representations have been introduced, most notably the CLiFF-map (Kucner et al., 2017), which models local motion patterns with Gaussian mixture models, effectively captures multimodality in human flows, and has been successfully used in both robot navigation and prediction tasks. The methods above are computed in batch, given a set of observations. Online learning methods have also been explored to update motion models as new observations arrive (Zhu et al., 2025), allowing robots to
adapt to changing environments without costly retraining from scratch. Temporal MoDs have also been explored, including STeF-maps (Molina et al., 2022), which apply frequency-based models to encode periodic variations in the flow. However, existing MoD representations require spatial discretization, with a manually selected map resolution for point locations and interpolation to estimate motion at arbitrary positions. This discretization introduces information loss, reduces flexibility, and complicates tuning across different environments.

Figure 1: Indicative applications of Maps of Dynamics (MoDs) for motion planning and human motion prediction. Left: Illustration of how MoDs can support socially aware motion planning. The robot (red diamond) navigates toward the goal (green cross) in the presence of two opposing human flows, with the underlying MoD shown as colored arrows. In this scenario, the oncoming pedestrian flow moving in the opposing direction is initially outside the robot's observation radius (grey dashed circle). Without guidance from the MoD, the planner initially takes a direct path to the goal but eventually becomes stuck and collides with the oncoming flow. In contrast, when informed by the MoD, the robot aligns its trajectory with the motion patterns and reaches the goal efficiently and safely. Right: Human motion prediction with a 60 s horizon. The red line represents the ground truth trajectory and the green line represents the observed trajectory. With MoD guidance, CLiFF-LHMP (Zhu et al., 2023) makes more accurate and realistic predictions than deep learning methods. While the trajectories predicted by Social LSTM (Alahi et al., 2016), TUTR (Shi et al., 2023) and MID (Gu et al., 2022) are often infeasible (e.g., crossing walls), CLiFF-LHMP predictions implicitly follow the topology of the environment.

To address these challenges, in this work, instead of representing motion patterns on a discrete grid, we propose a continuous map of dynamics using an implicit neural representation. We learn a neural function that maps spatio-temporal coordinates to the parameters of a local motion distribution. Implicit neural representations have emerged as powerful tools for encoding continuous functions, providing compact and differentiable models with strong generalization. Leveraging these properties, this formulation allows the model to generalize smoothly across space and time while maintaining multimodality in places where flows tend to go in more than one direction, since it produces a wrapped Gaussian mixture model of expected motion for a given query location and time.

We evaluate our approach on real-world datasets of human motion and show that continuous MoDs not only improve representation accuracy but can also drastically reduce map construction time. Our method yields smoother and more consistent velocity distributions, resulting in more accurate representations of human motion patterns. In contrast to baseline approaches that rely on time-consuming per-cell motion modeling, it computes the map nearly two orders of magnitude faster than CLiFF-map (Kucner et al., 2017). Unlike the faster but discretised representation of STeF-map (Molina et al., 2022), our method preserves non-discretised directions, yielding results closer in spirit to CLiFF-map. In summary, the main contribution of this work is an entirely novel representation of flow-aware maps of dynamics, named NeMo-map.
In contrast to existing methods, NeMo-map allows continuous spatio-temporal queries to generate location- and time-specific multimodal flow predictions. As evidenced by our experimental validation on real-world human motion data, NeMo-map efficiently learns a highly accurate statistical representation of motion in large-scale maps.

2 RELATED WORK

A map of dynamics (MoD) is a representation that augments the geometric map of an environment with statistical information about observed motion patterns. Unlike static maps, MoDs incorporate spatio-temporal flow information, allowing robots to reason about how humans typically move in a given environment. MoDs can be built from various sources of input, such as trajectories (Bennewitz et al., 2005), dynamics samples, or information about the flow of continuous media (e.g., air or water) (Bennetts et al., 2017). Furthermore, these models can feature diverse underlying representations, including evidence grids, histograms, graphs, or Gaussian mixtures. There are several types of MoDs described in the literature, generally striving to provide an efficient tool for storing and querying information about historical or expected changes of state within the environment.

Occupancy-based methods focus on mapping human dynamics on occupancy grid maps, modeling motion as shifts in occupancy (Wang et al., 2015; 2016). Trajectory-based methods extract human trajectories and group them into clusters, with each cluster representing a typical path through the environment (Bennewitz et al., 2005). These approaches struggle with noisy or incomplete trajectories. To address this, Chen et al. (2016) formulate trajectory modeling as a dictionary learning problem and use augmented semi-nonnegative sparse coding to find local motion patterns characterized by partial trajectory segments.

MoDs can also be based on velocity observations. With velocity mapping, human dynamics can be modeled through flow models. Kucner et al. (2017) presented a probabilistic framework for mapping velocity observations, named the Circular-Linear Flow Field map (CLiFF-map). CLiFF-map represents local flow patterns as a multi-modal, continuous joint distribution of speed and orientation, as further described in Sec. 3. A benefit of CLiFF-map is that it can be built from incomplete or spatially sparse velocity observations (de Almeida et al., 2024), without the need to store a long history of data or deploy advanced tracking algorithms. CLiFF-maps are typically built offline, owing to the high computational cost of the building process. This constraint limits their applicability in real environments.

Figure 2: Probability density of a Semi-Wrapped Gaussian Mixture Model (SWGMM) with two components, visualized on a cylinder. Orientation θ is wrapped around the circular axis, while speed ρ extends along the vertical axis. The representation allows joint modeling of angular (orientation) and linear (speed) variables, capturing multimodality in motion patterns.

When building flow models, temporal information can also be incorporated. Molina et al. (2022) apply Frequency Map Enhancement (FreMEn; Krajník et al., 2017), a model describing spatio-temporal dynamics in the frequency domain, to build a time-dependent probabilistic map of periodic changes in people flow, called STeF-map. The motion orientations in STeF-map are discretized. Another method of incorporating temporal information is proposed by Zhi et al. (2019).
Their approach uses a kernel recurrent mixture density network to provide a multimodal probability distribution over the movement directions of a typical object in the environment over time, though it models only orientation and not the speed of human motion.

It is important to note that a map of dynamics is not a trajectory prediction model. Whereas trajectory predictors (e.g., LSTMs) aim to forecast the future state of agents by propagating state information forward in time from an initial state, our goal is fundamentally different. We seek to construct a spatio-temporal prior that encodes the distribution of motion patterns in the environment itself. This prior can be queried directly at any spatial coordinate and any time of day, providing motion statistics that can support downstream tasks such as planning or long-term prediction, but it does not by itself generate trajectories for individual agents.

3 METHODOLOGY

3.1 PROBABILISTIC MODELING OF HUMAN MOTION

Our spatio-temporal map of dynamics produces probability distributions over human motion velocities. A velocity v is defined by a pair of speed (a positive linear variable ρ ∈ R+) and orientation (a circular variable θ ∈ [0, 2π)).

To capture the statistical structure of such data, we model human motion patterns with a Semi-Wrapped Gaussian Mixture Model (SWGMM), similar to the CLiFF-map representation (Kucner et al., 2017). While a von Mises distribution would be effective for purely angular variables, it is not suitable when combining circular and linear components. Roy et al. (2012) proposed the von Mises-Gaussian mixture model (VMGMM) to jointly represent one circular variable and linear variables. However, their model assumes independence between the circular and linear dimensions, which limits its ability to capture real-world correlations. To overcome this, the SWGMM (Roy et al., 2016) jointly models circular-linear variables and allows correlations between them.

An SWGMM models velocity v = [ρ, θ]⊤ as a mixture of J Semi-Wrapped Normal Distributions (SWNDs):

p(v | ξ) = \sum_{j=1}^{J} w_j \mathcal{N}^{SW}_{\mu_j, \Sigma_j}(v),   (1)

where ξ = {ξ_j = (w_j, μ_j, Σ_j)}_{j=1}^{J} denotes a finite set of SWGMM components. Each w_j is a mixing weight satisfying 0 ≤ w_j ≤ 1, μ_j is the component mean, and Σ_j the covariance. An SWND \mathcal{N}^{SW}_{\mu, \Sigma} is formally defined as

\mathcal{N}^{SW}_{\mu, \Sigma}(v) = \sum_{k \in \mathbb{Z}} \mathcal{N}_{\mu, \Sigma}([\rho, \theta]^⊤ + 2π [0, k]^⊤),   (2)

where k is a winding number. In practice, the PDF can be approximated adequately by taking k ∈ {-1, 0, 1} (Mardia & Jupp, 2008).

The SWGMM density function over velocities can be visualized as a function on a cylinder, as shown in Fig. 2. Orientation values θ are wrapped on the unit circle and the speed ρ runs along the length of the cylinder. This formulation yields a flexible and interpretable probabilistic representation of local human motion, capturing multimodality and correlations between orientation and speed.
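To make Eqs. (1)-(2) concrete, the following minimal sketch (our illustration, not code released with the paper) evaluates an SWGMM density with the truncated winding sum k ∈ {-1, 0, 1}:

```python
import numpy as np
from scipy.stats import multivariate_normal

def swgmm_pdf(v, weights, means, covs, windings=(-1, 0, 1)):
    """Evaluate the SWGMM density of Eq. (1) at velocities v = [rho, theta].

    The infinite winding sum of Eq. (2) is truncated to k in {-1, 0, 1};
    `means` and `covs` hold one (2,) mean and one (2, 2) covariance per component.
    """
    v = np.atleast_2d(v)                      # (N, 2): speed, orientation
    pdf = np.zeros(len(v))
    for w, mu, cov in zip(weights, means, covs):
        for k in windings:                    # wrap the angular component
            shifted = v + np.array([0.0, 2.0 * np.pi * k])
            pdf += w * multivariate_normal.pdf(shifted, mean=mu, cov=cov)
    return pdf
```

For a single component with unit weight, this reduces to the semi-wrapped normal density of Eq. (2).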
3.2 LEARNING CONTINUOUS MOTION FIELDS

Previous MoD approaches, such as CLiFF-maps and STeF-maps, rely on discretizing the environment into cells and fitting local probability models. Discretization leads to information loss and prevents querying at arbitrary locations. We address this by introducing a continuous map of dynamics parameterized by a neural implicit representation. The method overview is shown in Fig. 3.

In many real-world environments, human motion patterns exhibit strong daily periodicity, such as morning and evening rush hours or lunchtime activity. Motivated by this structure, we model time as a periodic variable and condition the MoD on the time of day. This assumption allows the representation to capture long-term temporal variations without requiring sequential rollouts, and enables efficient queries of motion dynamics at arbitrary spatio-temporal coordinates (x, y, t).

Problem statement. Given a dataset D of N spatio-temporal motion samples, D = {(x_i, t_i, v_i)}_{i=1}^{N}, where x_i ∈ R² is the spatial coordinate, t_i ∈ [0, 1] is the normalized time of day, and v_i = [ρ_i, θ_i]⊤ is the observed velocity, we learn a continuous function Φ_θ that maps a spatio-temporal coordinate (x, t) to SWGMM parameters:

Φ_θ(x, t) = { w_j(x, t), μ_j(x, t), Σ_j(x, t) }_{j=1}^{J},   (3)

where J is the number of mixture components, with weights w_j ≥ 0 and \sum_{j=1}^{J} w_j = 1. Each of the J components models the joint velocity v = [ρ, θ]⊤ with a Semi-Wrapped Normal Distribution \mathcal{N}^{SW}_{\mu, \Sigma}. At inference time, querying Φ_θ at any coordinate yields the full set of SWGMM parameters, resulting in a continuous probabilistic representation of motion dynamics. This formulation enables the model to learn smooth, continuous motion fields while retaining the multimodal character of human motion.

Figure 3: Method overview. A spatio-temporal query (x, t) is mapped to parameters of a Semi-Wrapped Gaussian Mixture Model (SWGMM). The spatial coordinate x is used to interpolate features from a learnable spatial grid G_s, and the temporal coordinate t is encoded using a SIREN network. The spatial features f_s(x), temporal encoding f_t(t), and raw coordinates are concatenated and passed through an MLP, which outputs the parameters of an SWGMM, providing a continuous, multimodal probabilistic representation of motion dynamics at the queried location and time.

Architecture. In our neural representation, we parameterize Φ_θ with a fully connected multilayer perceptron (MLP), conditioned on both spatial features f_s(x) ∈ R^{C_s} and a temporal encoding f_t(t) ∈ R^{C_t}. For the spatial features, a learnable grid G_s ∈ R^{H×W×C_s} is queried at location x by bilinear interpolation, producing f_s(x). This captures local variations in motion patterns while remaining continuous in space. For the temporal encoding, we encode t with SIREN, the sinusoidal representation network (Sitzmann et al., 2020), which uses periodic activation functions throughout the network. The MLP input concatenates the raw coordinates and the spatial and temporal features, z = [x, t, f_s(x), f_t(t)], and outputs the SWGMM parameters. This feature-conditioned representation enables the model to flexibly encode local variations in motion dynamics while maintaining global smoothness across both space and time.

Likelihood and training. For a spatio-temporal coordinate (x_i, t_i), the velocity likelihood under the predicted SWGMM is

p(v_i | Φ_θ(x_i, t_i)) = \sum_{j=1}^{J} w_j(x_i, t_i) \mathcal{N}^{SW}_{\mu_j(x_i, t_i), \Sigma_j(x_i, t_i)}(v_i),   (4)

where \mathcal{N}^{SW} denotes the semi-wrapped normal distribution that wraps the angular component (see Eq. (2)). The model is trained by minimizing the negative log-likelihood of motion samples from the dataset under the probability density function (PDF) produced by the model:

L(θ) = -(1/N) \sum_{i=1}^{N} log p(v_i | Φ_θ(x_i, t_i)).   (5)
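As an illustration of this training objective (our sketch, not the authors' released code; the tensor shapes are assumptions), Eq. (5) can be written with a log-sum-exp over components and winding numbers:

```python
import torch
from torch.distributions import MultivariateNormal

def swgmm_nll(v, weights, means, covs, windings=(-1.0, 0.0, 1.0)):
    """Negative log-likelihood of Eq. (5) for predicted SWGMM parameters.

    v: (N, 2) observed [speed, angle]; weights: (N, J); means: (N, J, 2);
    covs: (N, J, 2, 2). The angular component is wrapped over k in windings.
    """
    comp = MultivariateNormal(means, covariance_matrix=covs)  # batch (N, J)
    log_terms = []
    for k in windings:
        shift = v.new_tensor([0.0, 2.0 * torch.pi * k])
        # (N, 1, 2) broadcasts against the (N, J) component batch
        log_terms.append(comp.log_prob(v.unsqueeze(1) + shift))
    log_wrapped = torch.logsumexp(torch.stack(log_terms, -1), dim=-1)   # (N, J)
    log_mix = torch.logsumexp(log_wrapped + weights.clamp_min(1e-12).log(), -1)
    return -log_mix.mean()
```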
Figure 4: Layout of the ATC dataset environment (Brščić et al., 2013), showing the main east corridor and open areas annotated with semantic information such as entry and exit points, shops, seating areas, and stairs.

4 RESULTS

4.1 DATASET

To evaluate spatio-temporal maps of dynamics that capture changes of human motion patterns over time, it is essential to use datasets that span multiple days and reflect variations in human motion throughout the day. Our experiments were conducted on a real-world dataset, ATC (Brščić et al., 2013), which provides sufficient multi-day coverage for evaluation. This dataset was collected in a shopping mall in Japan using multiple 3D range sensors, recording pedestrian trajectories between 9:00 and 21:00 over a total of 92 days. ATC covers a large indoor environment, with a total covered area of approximately 900 m². Because of the large scale of the dataset, we use the first four days in the dataset (2012 Oct 24, 2012 Oct 28, 2012 Oct 31, and 2012 Nov 04) for experiments. The data from October 24 is used for training, while the other three days are used for evaluation. The observation rate is downsampled from over 10 Hz to 1 Hz. After downsampling, the training set contains 717,875 recorded motion samples and the test set contains 5,114,478 samples.

4.2 BASELINES

Circular-Linear Flow Field Map (CLiFF-map). CLiFF-map (Kucner et al., 2017) represents motion patterns by associating each discretized grid location with an SWGMM fitted from local observations. The environment is divided into a set of grid locations, and each grid location aggregates motion samples within a fixed radius. The SWGMM parameters at each grid location are estimated via expectation-maximization (EM) (Dempster et al., 1977), with the number and initial positions of mixture components determined using mean shift clustering (Cheng, 1995). When training the CLiFF-map, the convergence precision is set to 10⁻⁵ for both the mean shift and EM algorithms, with a maximum iteration count of 100. The grid resolution is set to 1 m. To evaluate different hours, we train separate CLiFF-maps for each hour using the motion samples observed during that time.

Spatio-Temporal Flow Map (STeF-map). STeF-map (Molina et al., 2022) is a spatio-temporal map of dynamics that models the likelihood of human motion directions using harmonic functions. Each grid location maintains k_stef temporal models, corresponding to k_stef discretized orientations of people moving through that location over time. By modeling periodic patterns, STeF-map captures long-term temporal variations in crowd movements and can predict activities at specific times of day under the assumption of periodicity in the environment. Following Molina et al. (2022), we set k_stef = 8 in the experiments, and the model order for training STeF-map, i.e. the number of periodicities, is set to 2.

Online CLiFF-map. Online CLiFF-map (Zhu et al., 2025) extends the static CLiFF model by updating the SWGMM parameters incrementally as new motion observations become available. Each grid location maintains an SWGMM, which is initialized upon first receiving observations and subsequently updated using the stochastic expectation-maximization (sEM) algorithm (Cappé & Moulines, 2009).
In sEM, the expectation step of the original EM algorithm is replaced by a stochastic approximation step, while the maximization step remains unchanged. Like the static CLiFF-map, Online CLiFF-map outputs SWGMM parameters at each grid location, but supports continuous adaptation over time. In the experiments, we follow a spatio-temporal setting by generating an online CLiFF-map for each hour. Observations collected in an hour interval are treated as the new data batch for updating the SWGMMs, producing a temporally adaptive representation of motion dynamics.

4.3 IMPLEMENTATION DETAILS

The output of the network Φ_θ parameterizes an SWGMM over speed and orientation. For J mixture components, the network predicts 6J raw values per query coordinate. Each component j is defined by: a mixture weight w_j, obtained by applying a softmax over the raw weights; a mean speed μ_{j,s} = max(0, μ̃_{j,s}) and mean orientation μ_{j,a} = μ̃_{j,a} mod 2π; variances σ²_{j,s} = exp(clamp(ṽ_{j,s}, -10, 10)) and σ²_{j,a} = exp(clamp(ṽ_{j,a}, -10, 10)); and a correlation coefficient ρ_j = 0.99 tanh(ρ̃_j). Altogether, the network defines a valid SWGMM with parameters as in Eq. (3), where μ_j = [μ_{j,s}, μ_{j,a}]⊤ and Σ_j is the covariance matrix with diagonal entries σ²_{j,s}, σ²_{j,a} and correlation ρ_j.

In the experiments, J is set to 3 and coordinates are normalized to [-1, 1]. Spatial input is processed by an MLP with hidden sizes [128, 64] and ReLU activations. Temporal input is processed by a two-layer SIREN (sine activations with ω₀ = 30 in the first layer and ω₀ = 1 in the hidden layer). The two streams are fused via FiLM modulation (Perez et al., 2018). The fused representation is passed to a linear head producing 6J outputs. Models are trained using the Adam optimizer with learning rate 10⁻³ for 100 epochs. An ablation of alternative temporal encodings is provided in Sec. 4.6.
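A minimal sketch of this output parameterization (our reconstruction; the per-component ordering of the 6J raw values is an assumption not specified in the paper):

```python
import torch
import torch.nn.functional as F

def raw_to_swgmm(raw, J=3):
    """Map 6J raw network outputs to valid SWGMM parameters (Sec. 4.3).

    raw: (N, 6J), assumed grouped per component as [w, mu_s, mu_a, v_s, v_a, rho].
    """
    w_raw, mu_s, mu_a, v_s, v_a, rho_raw = raw.view(-1, J, 6).unbind(-1)
    weights = F.softmax(w_raw, dim=-1)            # weights sum to one
    mean_speed = mu_s.clamp_min(0.0)              # mu_{j,s} = max(0, raw mean)
    mean_angle = mu_a % (2.0 * torch.pi)          # orientation in [0, 2*pi)
    var_s = torch.exp(v_s.clamp(-10.0, 10.0))     # clamped log-variances
    var_a = torch.exp(v_a.clamp(-10.0, 10.0))
    corr = 0.99 * torch.tanh(rho_raw)             # keep |rho_j| < 1
    cov_sa = corr * torch.sqrt(var_s * var_a)     # off-diagonal covariance
    covs = torch.stack([torch.stack([var_s, cov_sa], -1),
                        torch.stack([cov_sa, var_a], -1)], -2)
    means = torch.stack([mean_speed, mean_angle], -1)
    return weights, means, covs
```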
4.4 QUANTITATIVE RESULTS

To quantitatively evaluate the accuracy of modeling human motion patterns (MoD quality), we use the negative log-likelihood (NLL). An MoD represents human motion as a probability distribution over velocity conditioned on a spatio-temporal coordinate (x, y, t), implemented as either an SWGMM (our method and CLiFF-maps) or a histogram (STeF-map). To evaluate representation accuracy, we use test data consisting of observed human motions in the same environment. For each test sample (x, y, t), we query the MoD to obtain the predicted distribution and compute the likelihood of the observed motion under this distribution. A higher likelihood indicates that the predicted distribution better aligns with the observed data. We report NLL for numerical stability and easy comparison, so lower NLL values correspond to more accurate motion representations, i.e., higher quality MoDs.

Table 1 reports the accuracy results. Our method achieves the lowest NLL (0.775 ± 2.052), outperforming all baselines. Online CLiFF-map, CLiFF-map, and STeF-map exhibit significantly higher NLLs, with paired t-tests showing p < 0.001 under the null hypothesis that baseline NLL is less than or equal to ours. The reductions relative to our method are respectively +0.752 (Online CLiFF-map), +1.189 (CLiFF-map), and +4.801 (STeF-map), all with 95% confidence intervals.

Compared with STeF-maps, methods based on SWGMMs, such as ours and CLiFF-map, offer two key advantages. They jointly model speed and orientation, whereas STeF-maps do not include speed information. In addition, SWGMMs represent orientation continuously rather than through a discretized 8-bin histogram as in STeF-map. These aspects lead to a more accurate representation of human motion and contribute to the improved performance.

The limitations of CLiFF-maps stem from discretizing the environment into grid cells, with each cell storing a locally fitted SWGMM. This grid-based design limits spatial resolution and introduces discontinuities at cell boundaries in both space and time. In particular, dividing time into hourly intervals is a coarse approximation that can produce abrupt changes, since human motion patterns do not necessarily shift at exact hour boundaries. In contrast, our method models the MoD as a continuous neural implicit representation. This enables smooth generalization across space and time, supports queries at arbitrary spatio-temporal coordinates, and provides a compact representation that avoids the memory overhead of storing distributions for every grid cell.

We also compare the map building time of the baselines against our approach, as shown in Table 2. For the baselines, the training time corresponds to convergence on all grid cells, while for our method it corresponds to the neural network training time. Our method trains in 19 minutes, substantially faster than CLiFF-map (over 30 hours) while achieving higher accuracy. These results highlight the practicality of continuous MoDs for real-time applications, combining both accuracy and efficiency.

Table 1: Accuracy evaluation on the ATC dataset using average negative log-likelihood (NLL), where lower values indicate better accuracy. We report mean ± standard deviation, together with the reduction in NLL relative to our method and the corresponding 95% confidence interval (CI).

Method           | NLL↓          | NLL reduction (vs Ours) | 95% CI
Ours             | 0.775 ± 2.052 | -                       | -
Online CLiFF-map | 1.527 ± 4.156 | +0.752                  | [0.749, 0.755]
CLiFF-map        | 1.964 ± 4.953 | +1.189                  | [1.185, 1.192]
STeF-map         | 5.576 ± 9.314 | +4.801                  | [4.794, 4.809]

Table 2: Training and inference times for map building on the ATC dataset. Lower values indicate faster performance. Experiments were conducted on a desktop computer equipped with an Intel i9-12900K CPU and an NVIDIA GeForce RTX 3060 GPU running Ubuntu 20.04.

Method           | Train time (minute)↓ | Inference time (second)↓
Ours             | 19.26                | 1.363 × 10⁻⁶
Online CLiFF-map | 23.859               | 1.914 × 10⁻³
CLiFF-map        | 1831                 | 1.914 × 10⁻³
STeF-map         | 0.815                | 5.665 × 10⁻⁵

4.5 QUALITATIVE RESULTS

Examples of NeMo-map are shown in Fig. 5. The model is queried at regular spatial intervals at three different times of day, at locations where human motion appears in the training dataset. Across the day, the map adapts smoothly to changes in human motion patterns. For example, in the east corridor (right side of the ATC map), the flow is directed left/upwards in the morning, shifts direction at noon, where pedestrians keep left when facing oncoming flows, and turns right/downwards in the evening. (These patterns are most clearly seen when displaying only the SWGMM mixture component with the largest weight, in the bottom row, but please note that the map maintains a representation of the full multimodal distribution at all times.) The generated flow fields capture such temporal variations and implicitly align with the environment's topology, even though no explicit map was provided during training. For instance, speeds decrease near resting benches, motion flows pass through exits, and flows follow the corridors.

4.6 ABLATION STUDY

We perform an ablation study on alternative methods for temporal encoding. In our method, we use a SIREN network to process the temporal input.
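For concreteness, a two-layer SIREN temporal encoder with the ω₀ values quoted in Sec. 4.3 could be sketched as follows (our illustration; the layer widths are assumptions):

```python
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """y = sin(omega_0 * (W x + b)), the SIREN layer of Sitzmann et al. (2020)."""
    def __init__(self, in_features, out_features, omega_0):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

# Two-layer encoder: omega_0 = 30 in the first layer, 1 in the hidden layer.
temporal_encoder = nn.Sequential(SineLayer(1, 64, omega_0=30.0),
                                 SineLayer(64, 32, omega_0=1.0))
f_t = temporal_encoder(torch.tensor([[0.25]]))  # encoding of t = 0.25 (06:00)
```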
For comparison, we evaluate two alternative mappings of time t into a temporal feature vector f_t(t):

• Temporal grid. A learnable grid G_t ∈ R^{K×C_t} that captures daily periodicity, where K is the number of discretized time bins (set to 24). The grid feature corresponding to each time bin is concatenated with the spatial feature and passed through an MLP with hidden sizes [128, 64] and ReLU activations.

• Fourier features. The time input t is mapped into a periodic embedding using Fourier features (Tancik et al., 2020; Mildenhall et al., 2020). For F frequencies, we construct f_t(t) = [sin(2ⁿ · 2πt), cos(2ⁿ · 2πt)]_{n=0}^{F-1}. This representation enables the model to capture time-dependent variations at multiple resolutions. The implementation is identical to the temporal grid variant, except the time grid is replaced by Fourier features with F = 4, yielding an 8-dimensional temporal embedding (a sketch of this mapping follows below).
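A minimal sketch of the Fourier mapping used in this ablation (our illustration, assuming normalized time t ∈ [0, 1)):

```python
import numpy as np

def fourier_time_features(t, F=4):
    """Return [sin(2^n * 2*pi*t), cos(2^n * 2*pi*t)] for n = 0..F-1.

    For F = 4 this yields the 8-dimensional temporal embedding used above.
    """
    t = np.asarray(t, dtype=float)[..., None]        # (..., 1)
    freqs = (2.0 ** np.arange(F)) * 2.0 * np.pi      # (F,)
    angles = t * freqs                               # (..., F)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
```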
Figure 5: NeMo-map in the ATC dataset, for 09:00 (left), 11:00 (middle) and 18:00 (right), showing changes of motion patterns throughout the day. Predicted Semi-Wrapped Gaussian Mixture Models (SWGMMs) are visualized. At each location, arrow color encodes orientation and arrow length encodes speed. The top row shows multimodality by rendering all SWGMM components with transparency proportional to their weights, while the bottom row more clearly shows the dominant flow, displaying only the mixture component with the largest weight.

Table 3 summarizes the results of the ablation study on temporal encoding. Replacing SIREN with a temporal grid or Fourier features results in higher NLL, confirming the advantage of using SIREN for modeling continuous temporal dynamics. Among the alternatives, Fourier features outperform the temporal grid, but both remain less accurate than SIREN. The reductions in NLL relative to our method are 0.082 for the temporal grid and 0.063 for Fourier features, with 95% confidence intervals.

Table 3: Comparing alternative time encodings for NeMo-map, again using the ATC dataset and comparing average negative log-likelihood (NLL), where lower indicates better accuracy. We report mean ± standard deviation, together with the reduction in NLL relative to our method and the corresponding 95% confidence interval (CI). All models are trained using the Adam optimizer with learning rate 10⁻³ for 100 epochs.

Method           | NLL↓          | NLL reduction (vs Ours) | 95% CI
Ours             | 0.775 ± 2.052 | -                       | -
Temporal grid    | 0.857 ± 2.113 | +0.082                  | [0.081, 0.083]
Fourier features | 0.838 ± 2.105 | +0.063                  | [0.062, 0.064]

5 CONCLUSIONS

We introduced NeMo-map, a first-of-its-kind continuous spatio-temporal map-of-dynamics representation and a novel formulation of MoDs using implicit neural representations. In contrast to prior discretized methods such as CLiFF-map and STeF-map, our approach parametrizes a continuous neural function, which outputs the parameters of a Semi-Wrapped Gaussian Mixture Model at arbitrary spatio-temporal coordinates. The model enables smooth generalization across space and time, and provides a compact representation that avoids storing per-cell distributions.

Through experiments on the large-scale ATC dataset, we demonstrated that NeMo-map achieves substantially higher accuracy (lower negative log-likelihood) than existing MoD baselines, while reducing map building time. Qualitative results further show that the learned flow fields capture multimodality, temporal variations, and environment topology without requiring explicit maps. Ablation studies confirmed the advantage of SIREN-based temporal encoding over discrete or Fourier alternatives.

In summary, the results highlight continuous MoDs as a practical and scalable tool for modeling human motion dynamics. By combining accuracy, efficiency, and flexibility, the representation offers a powerful prior for downstream tasks such as socially aware navigation, long-term motion prediction, and localization in dynamic environments. In future work, we plan to extend this formulation with online update mechanisms to adapt continuously to evolving crowd behaviors, further bridging the gap toward long-term real-world deployment.

REFERENCES

A. Alahi, K. Goel, V. Ramanathan, A. Robicquet, L. Fei-Fei, and S. Savarese. Social LSTM: Human trajectory prediction in crowded spaces. In Proc. of the IEEE Conf. on Comp. Vis. and Pat. Rec. (CVPR), pp. 961-971, 2016.
Victor Hernandez Bennetts, Tomasz Piotr Kucner, Erik Schaffernicht, Patrick P. Neumann, Han Fan, and Achim J. Lilienthal. Probabilistic air flow modelling using turbulent and laminar characteristics for ground and aerial robots. IEEE Robotics and Automation Letters, 2(2):1117-1123, 2017.
M. Bennewitz, W. Burgard, G. Cielniak, and S. Thrun. Learning motion patterns of people for compliant robot motion. Int. J. of Robotics Research, 24(1):31-48, 2005.
D. Brščić, T. Kanda, T. Ikeda, and T. Miyashita. Person tracking in large public spaces using 3-d range sensors. IEEE Trans. on Human-Machine Systems, 43(6):522-534, 2013.
Olivier Cappé and Eric Moulines. On-line expectation-maximization algorithm for latent data models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 71(3):593-613, 2009.
Y. F. Chen, M. Liu, and J. P. How. Augmented dictionary learning for motion prediction. In Proc. of the IEEE Int. Conf. on Robotics and Automation (ICRA), pp. 2527-2534, 2016.
Yizong Cheng. Mean shift, mode seeking, and clustering. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1995.
Tiago Rodrigues de Almeida, Yufei Zhu, Andrey Rudenko, Tomasz P. Kucner, Johannes A. Stork, Martin Magnusson, and Achim J. Lilienthal. Trajectory prediction for heterogeneous agents: A performance analysis on small and imbalanced datasets. IEEE Robotics and Automation Letters, pp. 1-8, 2024.
A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society: Series B (Methodological), 39(1):1-38, 1977.
Tianpei Gu, Guangyi Chen, Junlong Li, Chunze Lin, Yongming Rao, Jie Zhou, and Jiwen Lu. Stochastic trajectory prediction via motion indeterminacy diffusion. In Proc. of the IEEE Conf. on Comp. Vis. and Pat. Rec. (CVPR), 2022.
T. Krajník, J. P. Fentanes, J. M. Santos, and T. Duckett. FreMEn: Frequency map enhancement for long-term mobile robot autonomy in changing environments. IEEE Trans. on Robotics (TRO), 33(4):964-977, 2017.
T. P. Kucner, M. Magnusson, E. Schaffernicht, V. H. Bennetts, and A. J. Lilienthal. Enabling flow awareness for mobile robots in partially observable environments. IEEE Robotics and Automation Letters, 2(2):1093-1100, 2017.
K. V. Mardia and P. E. Jupp. Directional Statistics. Wiley, 2008.
Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. NeRF: Representing scenes as neural radiance fields for view synthesis. In Proc. of the Europ. Conf. on Comp. Vision (ECCV), 2020.
Sergi Molina, Grzegorz Cielniak, and Tom Duckett. Robotic exploration for learning human motion patterns. IEEE Trans. on Robotics and Automation (TRO), 2022.
Luigi Palmieri, Tomasz P. Kucner, Martin Magnusson, Achim J. Lilienthal, and K. O. Arras. Kinodynamic motion planning on gaussian mixture fields. In Proc. of the IEEE Int. Conf. on Robotics and Automation (ICRA), pp. 6176-6181. IEEE, 2017.
Ethan Perez, Florian Strub, Harm de Vries, Vincent Dumoulin, and Aaron C. Courville. FiLM: Visual reasoning with a general conditioning layer. In Proc. of the AAAI Conf. on Artificial Intelligence (AAAI), 2018.
Anandarup Roy, Swapan K. Parui, and Utpal Roy. A mixture model of circular-linear distributions for color image segmentation. International Journal of Computer Applications, 58(9):6-11, 2012. ISSN 0975-8887.
Anandarup Roy, Swapan K. Parui, and Utpal Roy. SWGMM: a semi-wrapped gaussian mixture model for clustering of circular-linear data. Pattern Anal. Appl., 19(3):631-645, 2016.
Liushuai Shi, Le Wang, Sanping Zhou, and Gang Hua. Trajectory unified transformer for pedestrian trajectory prediction. In Proc. of the IEEE Int. Conf. on Computer Vision (ICCV), pp. 9675-9684, October 2023.
Vincent Sitzmann, Julien N.P. Martel, Alexander W. Bergman, David B. Lindell, and Gordon Wetzstein. Implicit neural representations with periodic activation functions. In Advances in Neural Inf. Proc. Syst. (NeurIPS), 2020.
Chittaranjan Srinivas Swaminathan, Tomasz Piotr Kucner, Martin Magnusson, Luigi Palmieri, Sergi Molina, Anna Mannucci, Federico Pecora, and Achim J. Lilienthal. Benchmarking the utility of maps of dynamics for human-aware motion planning. Frontiers in Robotics and AI, 9, 2022. ISSN 2296-9144.
Matthew Tancik, Pratul P. Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan T. Barron, and Ren Ng. Fourier features let networks learn high frequency functions in low dimensional domains. Advances in Neural Inf. Proc. Syst. (NeurIPS), 2020.
Zhan Wang, Patric Jensfelt, and John Folkesson. Modeling spatial-temporal dynamics of human movements for predicting future trajectories. In Workshop Proc. of the AAAI Conf. on Artificial Intelligence "Knowledge, Skill, and Behavior Transfer in Autonomous Robots", 2015.
Zhan Wang, Patric Jensfelt, and John Folkesson. Building a human behavior map from local observations. In Proc. of the IEEE Int. Symp. on Robot and Human Interactive Comm. (RO-MAN), pp. 64-70, 2016.
W. Zhi, R. Senanayake, L. Ott, and F. Ramos. Spatiotemporal learning of directional uncertainty in urban environments with kernel recurrent mixture density networks. IEEE Robotics and Automation Letters, 4(4):4306-4313, 2019.
Yufei Zhu, Andrey Rudenko, Tomasz P. Kucner, Luigi Palmieri, Kai O. Arras, Achim J. Lilienthal, and Martin Magnusson. CLiFF-LHMP: Using spatial dynamics patterns for long-term human motion prediction. In Proc. of the IEEE Int. Conf. on Intell. Robots and Syst. (IROS), 2023.
Yufei Zhu, Andrey Rudenko, Luigi Palmieri, Lukas Heuer, Achim J. Lilienthal, and Martin Magnusson. Fast online learning of CLiFF-maps in changing environments. In Proc. of the IEEE Int. Conf. on Robotics and Automation (ICRA), 2025.
arXiv:2510.14829v1 [astro-ph.GA] 16 Oct 2025
MNRAS 000, 1–18 (2025) Preprint 17 October 2025 Compiled using MNRAS LaTeX style file v3.3

The Launching of Galactic Winds from a Multiphase ISM

Fernando Hidalgo-Pineda,1★ Max Gronke2,1 and Philipp Grete3
1Max Planck Institute for Astrophysics, Garching D-85748, Germany
2Astronomisches Rechen-Institut, Zentrum für Astronomie, Universität Heidelberg, Mönchhofstraße 12-14, 69120 Heidelberg, Germany
3Hamburger Sternwarte, Universität Hamburg, Gojenbergsweg 112, 21029 Hamburg, Germany
★E-mail: fernando@mpa-garching.mpg.de

Draft from 17 October 2025

ABSTRACT

Galactic outflows are a key agent of galaxy evolution, yet their observed multiphase nature remains difficult to reconcile with theoretical models, which often fail to explain how cold gas survives interactions with hot, fast winds. Here we present high-resolution 3D hydrodynamic simulations of hot outflows interacting with a multiphase interstellar medium (ISM), parameterised by its cold-gas volume filling fraction f_v, depth L_ISM, and characteristic clump size r_cl. We identify a universal survival criterion, f_v L_ISM ≳ r_crit, that generalises the classical single-cloud condition (r_cl > r_crit) and correctly predicts cold-gas survival across a wide range of ISM configurations – including scale-free – down to r_cl/r_crit ∼ 10⁻². Remarkably, the resulting cold phase rapidly loses memory of the initial ISM structure and converges toward a self-similar clump mass spectrum following Zipf's law (dN/dm ∝ m⁻²), implying that turbulent mixing and radiative condensation universally shape the morphology of multiphase outflows. Surviving gas assembles into extended plumes or confined cold shells of size ∼ χ r_cl,min, which grow as mass is accreted from the hot phase. The interaction of an initially laminar wind with a clumpy ISM naturally drives turbulence in both phases, which we characterise through first-order velocity structure functions that follow a Kolmogorov scaling with an injection scale set by L_ISM, and velocity dispersions reaching σ ∼ c_s,cold. Finally, the areal covering fraction of the cold gas approaches unity even for f_v ∼ 10⁻³, while the volume filling fraction remains low, naturally explaining the "misty" nature of observed outflows. Together, these results link small-scale cloud–wind interactions to galaxy-scale feedback, and we discuss their implications for interpreting observations and for modelling multiphase galactic winds in larger-scale simulations.

Key words: hydrodynamics – galaxies: evolution – ISM: structure – ISM: jets and outflows – Galaxy: kinematics and dynamics

1 INTRODUCTION

Galactic winds are a hallmark of star-forming galaxies and a central mechanism in their evolution. Observations show that the velocities and extent of these outflows correlate strongly with star formation rates (Thompson & Heckman 2024; Rubin et al. 2014), highlighting their connection to stellar feedback processes. Their presence is ubiquitous across cosmic time, playing a crucial role in shaping the phase distribution of the interstellar medium (ISM), regulating star formation, and redistributing baryons across galactic, circumgalactic (CGM) and even intergalactic scales (see reviews by Veilleux et al. 2005; Péroux et al. 2018; Thompson & Heckman 2024).

There is broad consensus on the importance of galactic winds in cosmic evolution. First, spectroscopic observations reveal that outflows are often metal-enriched compared to the surrounding gas, confirming their ISM origin and their role in enriching the CGM and intergalactic medium (IGM) with metals (Lopez et al. 2020; Veilleux et al. 2022).
Second, multi-wavelength data consistently show that winds are both multiphase and multiscale, exhibiting structures across wide spatial extents, central to both small and large scales of the baryon cycle (Rupke 2018; Lopez et al. 2025). Finally, cosmological simulations consistently require energetic feedback in the form of outflows to reproduce the observed stellar mass function, metallicity gradients, and gas fractions in galaxies (Vogelsberger et al. 2014; Nelson et al. 2019; Somerville & Davé 2015).

Despite this, the precise coupling between supernova (SN)-driven outflows and the ISM remains poorly understood. Several simulation efforts have tried to model outflows in realistic ISM environments. Stratified disk simulations such as TIGRESS (Kim et al. 2023) and SILCC (Walch et al. 2015; Girichidis et al. 2016a) reproduce the multiphase structure and chemistry of the ISM and use stellar feedback to launch cold gas flows. However, these flows mostly take the form of fountains that fall back onto the galaxy. Their design also makes it difficult to follow large-scale expanding winds (e.g. Martizzi et al. 2016). Global disk models with embedded supernovae, such as Schneider & Mao (2024), improve the treatment of wind geometry but still lack the resolution needed to resolve the small-scale dynamics of cold cloudlets interacting with the hot wind.

X-ray and millimetric observations reveal that outflows have a complex phase structure, with cold gas (∼10⁴ K) embedded within a hot wind (∼10⁶ K) (Heckman et al. 1990; Fisher et al. 2024). Capturing this multiphase composition and dynamics is challenging not only for advanced simulations but also for simplified models. Classically, these simplified approaches study in detail the interaction of a single cold clump of gas with stellar-driven feedback, treating it as a test case for multiphase outflow formation. In this framework,
However, the ISM is not composed of isolated spherical clouds but instead follows a scale-free distribu- tion shaped by turbulence (Elmegreen 1997; Federrath 2016; Beattie et al. 2025; Grete et al. 2025). In such media, the single-cloud model cannot fully explain the persistence of cold gas: neighboring clouds can shield one another from shear instabilities, cloud drag may be enhanced by certain geometries, and mixing can be suppressed in clumpier environments. Additional physics such as magnetic fields and viscosity further complicate this picture (McCourt et al. 2015; Hidalgo-Pineda et al. 2024; Brüggen et al. 2023). Cold gas survival is therefore unlikely to depend on a single critical scale, but rather on the combined influence of geometry and gas properties. Previous work such as Banda-Barragán et al. (2021) investigate the driving of multiphase outflows through the interaction of a fast- travelling wind with fractal-like structures and thus bridge the ‘clas- sical’ cloud crushing studies with the stratified disk simulations dis- cussed above. However, the connection to the analytical ‘survival criterion’ described above remains largely unexplored. Antipov et al. (2025) identify and study the cooling regime of single clouds in mul- ticloud set-ups, but their analysis is limited to two simulation runs from Banda-Barragán et al. (2021), without a broad survey of the parameter space. In this work, we will address this point by studying the impact of a fast, hot wind on a more realistic, multiphase ISM. Using high- resolution 3D hydrodynamical simulations, we aim to resolve the relevant length scales (e.g., 𝑟crit), and thus aim to generalize the sur- vival criterion 𝑡cool,mix < 𝑡cc, which is based on a spherical cloud geometry, to provide a condition testable also for larger scale simu- lations. Our goal is to identify the physical conditions under which multiphase outflows emerge, determine how ISM geometry affects cold gas survival, and characterise the dynamical and thermal cou- pling of winds to the ISM. This paper is organized as follows. In Section 2, we describe the simulation setup and the modeling of the ISM. Section 3 presents the main results from the wind-tunnel runs, outlining the conditions for the emergence of multiphase outflows. In Section 4.1, we de- rive a general criterion for cold gas survival, followed by an analysis of outflow properties and potential observational signatures in Sec- tion 4.3. We conclude by summarising our findings and discussing key limitations in Section 5. 2 METHODS We conduct a suite of wind-tunnel simulations of hot, fast outflows interacting with idealised ISM density fields. These simulations are designed to mimic a localised region of the galactic disk subjected to continuous wind-driven feedback. We extract a slab of multiphase ISM gas and expose it to a supersonic, high-temperature wind trav- eling parallel to the slab where we can study in detail the issue of entrainment. Figure 1 shows a volume rendering of one quarter of a simulation box from our suite. 2.1 ISM generation Both observations (Elmegreen & Scalo 2004a; Groves et al. 2023), simulations of the ISM (Elmegreen 1997; Federrath 2016; Naab & Ostriker 2017) and simulations of turbulent media (Gronke et al. 2022; Das & Gronke 2024; Federrath et al. 2009; Beattie et al. 2025; Grete et al. 2025) show cold matter arranges in clumpy filamentary structures, surrounded by a hot phase 106𝐾gas. 
We generate this binary distribution of the ISM from two parameters: a volume filling fraction f_v, describing the abundance of cold gas relative to the tenuous phase, and a characteristic spatial lengthscale r_cl for the clumps. In principle, ISM turbulence is effectively scale-free over at least several orders of magnitude (Rathjen et al. 2023; Federrath et al. 2009). Nevertheless, as a first step, we will study the behaviour for specific average clump sizes r_cl and later generalise it to a scale-free ISM setup.

For simplicity, we assume an isotropic distribution of gas, which we represent as a Poisson-like probability distribution such that each cell in our domain has a probability to be cold gas, φ(x, y, z) ∼ U{0, 1}. We introduce spatial coherence by correlating neighboring cells to create an average clump size r_cl, rather than treating gas cells as completely independent of their surroundings. Smoothing this Poissonian field with a Gaussian kernel (with standard deviation r_cl),

φ̃(x, y, z) = (G_{r_cl} ∗ φ)(x, y, z),   (1)

imprints the desired intrinsic scale. For this field we can now employ a cutoff such that the final field has a cold gas volume filling fraction f_v. Specifically, we can define a piecewise function for the density field ρ such that

ρ(x, y, z) = ρ_hot if φ̃(x, y, z) < f_v,   ρ(x, y, z) = ρ_cold if φ̃(x, y, z) ≥ f_v,   (2)

or, compactly written with a Heaviside function with offset f_v,

ρ(x, y, z) = ρ_hot [1 + (χ − 1) H(φ̃ − f_v)].   (3)

Figure 2 represents the result of this process. The total cold mass of each set-up will also vary according to the total depth of the ISM in the direction of the wind, L_ISM (where L_ISM is parallel to the y axis). This parameter, along with f_v and r_cl, characterises our simulations. As we want to trace a universal criterion for the emergence of outflows, our suite of simulations spans ∼4 orders of magnitude in all three variables (see details in Table 1). The resultant density field is used in our wind-tunnel simulation domain, where the x and z dimensions match the ISM extent, while the y dimension is 2-3 times the ISM length (L_ISM).
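The construction of Eqs. (1)-(3) can be sketched numerically as follows (our illustration, not the authors' code; we place the cutoff at the (1 − f_v) quantile of the smoothed field so the cold phase fills a fraction f_v of the volume, an assumption about how the offset is chosen in practice):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def make_binary_ism(shape, r_cl, f_v, chi=100.0, rho_hot=1e-26, seed=0):
    """Binary multiphase ISM: smoothed random field with a Heaviside cutoff.

    shape: grid cells per axis; r_cl: Gaussian kernel width in cells (clump
    size); f_v: target cold-gas volume filling fraction; chi: density contrast.
    """
    rng = np.random.default_rng(seed)
    phi = rng.random(shape)                        # Poisson-like uniform field
    phi_smooth = gaussian_filter(phi, sigma=r_cl)  # imprint the scale r_cl
    cutoff = np.quantile(phi_smooth, 1.0 - f_v)    # cold fraction f_v by volume
    cold = phi_smooth >= cutoff                    # Heaviside selection, Eq. (3)
    return rho_hot * (1.0 + (chi - 1.0) * cold)    # rho_hot or chi * rho_hot
```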
As a second setup, we generalise the ISM generation to produce a scale-free ISM (discussed in Section 4.2), ensuring that the mass distribution of clumps follows a mass probability density function dN/dm ∝ m⁻², as found in both theory and observations of the ISM and of turbulent structures (Gronke et al. 2022; Ilyasi et al. 2025; Tan & Fielding 2024). From the definition of the volume filling fraction, f_v = N r_cl³/V_tot ∝ N, and since N ∝ m⁻¹ via simple integration of the mass distribution, we can establish the relation

f_v(r_cl) = f_v,max (r_cl,min / r_cl)³,   (4)

where f_v,max corresponds to our desired final volume filling fraction, which is dominated by the minimum clump size r_cl,min of our ISM sample. Larger sizes r_cl > r_cl,min in the equation contribute progressively lower individual f_v. Producing a series of realisations for different r_cl fields following this volume-filling-fraction relation recovers the clump mass power law. The final combined f_v roughly corresponds to the sum of the f_v for each r_cl size. Essentially, the Heaviside function in Eq. (3) is replaced by

H' = Σ_{r_cl = r_cl,min} H(φ̃ − m/r_cl³),   (5)

where m = f_v,max r_cl,min³. An example of the final mass distribution can be found in Appendix A. Note that, for all our runs, we satisfy the resolution criterion r_cl/d_cell ≥ 8, which was found by previous 'cloud crushing' studies to be sufficient to converge on the cold gas mass evolution (Gronke & Oh 2020; Kanjilal et al. 2020).

Figure 1. Volume rendering of our simulation set-up: a stellar wind with M_w = 1.5 enters from the left, driving turbulent mixing as it moves through 1/4 of the cropped domain box. The rendering highlights regions of varying gas density. Bright yellow denotes gas that is overdense by a factor of 100 relative to the diffuse background phase (shown in light blue). Intermediate overdensities appear in muted colors, with black indicating regions where the overdensity is around 10.

Table 1. Parameters for our wind-tunnel ISM simulations: domain lengths parallel and perpendicular to the wind (L_y and L_z (L_x)), cold gas volumetric fraction (f_v), clump size relative to the critical survival size (r_cl/r_crit), ISM length along the wind (L_ISM), and inflow Mach number (M_w).

#  | L_y [r_cl] | L_z (L_x) [r_cl] | f_v  | r_cl/r_crit | L_ISM [r_cl] | M_w
1  | 96   | 32 | 10⁻¹ | 10    | 30   | 1.5
2  | 96   | 32 | 10⁻¹ | 10    | 6    | 1.5
3  | 96   | 32 | 10⁻² | 10    | 30   | 1.5
4  | 96   | 32 | 10⁻³ | 10    | 30   | 1.5
5  | 96   | 32 | 10⁻¹ | 1     | 30   | 1.5
6  | 96   | 16 | 10⁻¹ | 1     | 30   | 1.5
7  | 112  | 48 | 10⁻¹ | 1     | 20   | 1.5
8  | 96   | 32 | 10⁻² | 1     | 30   | 1.5
9  | 864  | 32 | 10⁻² | 1     | 300  | 1.5
10 | 864  | 32 | 10⁻³ | 1     | 600  | 1.5
11 | 4320 | 32 | 10⁻³ | 1     | 3000 | 1.5
13 | 112  | 48 | 10⁻¹ | 0.5   | 40   | 0.7
14 | 176  | 48 | 10⁻² | 0.5   | 40   | 0.7
15 | 864  | 32 | 10⁻¹ | 0.1   | 300  | 1.5
16 | 96   | 32 | 10⁻¹ | 0.1   | 30   | 1.5
16 | 240  | 48 | 10⁻¹ | 0.05  | 80   | 0.7
17 | 432  | 32 | 10⁻¹ | 10⁻²  | 300  | 1.5
18 | 4320 | 32 | 10⁻¹ | 10⁻²  | 3000 | 1.5

Figure 2. 2D slices of our initial binary multiphase ISM fields, where the wind enters from the left boundary (outside the shown region). We vary the volumetric cold gas filling fraction f_v along the x axis (panels: f_v = 0.5, 0.1, 0.05) and the clump size r_cl along the y axis, expressed as a fraction of the critical lengthscale for single-cloud survival r_crit (panels: r_cl/r_crit = 0.3, 1, 2). Colour-coded in yellow and navy are the density values for the cold and hot phases, respectively (colour bar: ρ = 10⁻²⁶-10⁻²⁴ g cm⁻³).

2.2 Numerical implementation

We perform wind-tunnel hydrodynamic simulations using the publicly available code AthenaPK¹, which implements finite-volume (magneto)hydrodynamic algorithms on top of the Parthenon framework (Grete et al. 2023). We employ a formally second-order hydrodynamic finite-volume scheme with a predictor–corrector Van Leer integrator, a Harten–Lax–van Leer with Contact (HLLC) Riemann solver, and piecewise parabolic reconstruction in primitive variables. The performance portability of AthenaPK – achieved through Kokkos (Trott et al. 2022) – allows us to run on accelerator-based architectures and perform large-domain simulations on a static grid while strictly satisfying the aforementioned mass-growth resolution criterion of r_cl/d_cell ≥ 8.

¹ AthenaPK is an open-source, performance-portable code for astrophysical MHD simulations: https://github.com/parthenon-hpc-lab/athenapk.

Our simulation meshes are structured as a rectangular domain with transverse lengths of 256 or 128 cells (L⊥/r_cl = 32 and 16, respectively) and a longitudinal dimension aligned with the wind direction (y-axis), typically extending to L_y ∼ 3 L_ISM (see Table 1 for details). Both transverse lengths are sufficient to capture the dynamics, though we adopt the 256-cell width as the fiducial resolution (see Appendix B for a resolution test). Our multiphase ISM density field realisations assign a temperature of T = 10⁶ K and a density of ρ_hot = 10⁻²⁶ g cm⁻³ to the background (hot) phase, while the overdense (cold) phase is set to T = 10⁴ K and ρ_cold = 10⁻²⁴ g cm⁻³.
2.2 Numerical implementation

We perform wind-tunnel hydrodynamic simulations using the publicly available code AthenaPK¹, which implements finite-volume (magneto)hydrodynamic algorithms on top of the Parthenon framework (Grete et al. 2023). We employ a formally second-order hydrodynamic finite-volume scheme with a predictor-corrector Van Leer integrator, a Harten-Lax-van Leer with Contact (HLLC) Riemann solver, and piecewise parabolic reconstruction in primitive variables. The performance portability of AthenaPK – achieved through Kokkos (Trott et al. 2022) – allows us to run on accelerator-based architectures and perform large-domain simulations on a static grid while strictly satisfying the aforementioned mass-growth resolution criterion of r_cl/d_cell ≥ 8.

Our simulation meshes are structured as a rectangular domain with transverse lengths of 256 or 128 cells (L_⊥/r_cl = 32 and 16, respectively) and a longitudinal dimension aligned with the wind direction (y-axis), typically extending to L_y ∼ 3 L_ISM (see Table 1 for details). Both transverse lengths are sufficient to capture the dynamics, though we adopt the 256-cell width as the fiducial resolution (see Appendix B for a resolution test). Our multiphase ISM density field realisations assign a temperature of T = 10^6 K and a density of ρ_hot = 10^-26 g cm^-3 to the background (hot) phase, while the overdense (cold) phase is set to T = 10^4 K and ρ_cold = 10^-24 g cm^-3. Note that it is only the relative overdensity of the gas at these two temperatures that affects the dynamical timescales, producing a self-similar solution regardless of the specific density values (see § 2.2 in Dutta et al. 2025 for an explanation of self-similarity).

¹ AthenaPK is an open-source, performance-portable code for astrophysical MHD simulations: https://github.com/parthenon-hpc-lab/athenapk.

The ISM slab, of length L_ISM, is placed 8 r_cl from the upstream y boundary of the domain. From this boundary, we continuously inject a mildly transonic (M ≈ 1.5) or subsonic (M ≈ 0.7) hot wind in the positive y-direction. No initial turbulent velocities are imposed on either of the two phases, i.e., the hot wind is initially fully laminar. Periodic boundary conditions are applied in the transverse directions, x and z, and a convergence study assessing potential artifacts from this set-up is presented in Appendix B.

To compute radiative losses, we employ the Townsend (2009) radiative cooling algorithm, using the collisional ionization equilibrium (CIE) cooling tables from Gnat & Sternberg (2007) for solar metallicity. To mimic heating from the UV background, we disable cooling above 6 × 10^5 K and impose a temperature floor of 10^4 K.

To optimize computational efficiency and better track the motion of dense material, we perform the simulations in a frame-boosted fashion akin to previous works (McCourt et al. 2015; Scannapieco & Brüggen 2015; Dutta & Sharma 2019), computing the cold gas mass-weighted velocity

    ⟨v_cold⟩ = ∫ ρ(T < 2 T_cold) v_y dV / ∫ ρ(T < 2 T_cold) dV,    (6)

with ρ, T, V the density, temperature and volume, and T_cold = 10^4 K, so that the simulation frame is the rest frame of the cold gas, i.e. ⟨v_cold⟩ = 0.

3 RESULTS

3.1 Cold gas structure and morphology

In Figure 3 we show simulations where two of the three ISM control parameters (f_v, r_cl, L_ISM) are fixed and the third is varied. The characteristic destruction timescale for any of these systems is the canonical cloud-crushing timescale (Klein et al. 1994; Scannapieco & Brüggen 2015),

    t_cc = χ^{1/2} r_cl / v_wind.    (7)

In Fig. 3, as well as hereafter, we refer to the dimensionless timescale

    τ ≡ (t − 0.5 t_sh) / t_cc,    (8)

where t_sh = L_ISM/v_wind. This is a convenient choice when comparing runs of different ISM depths, as the effects of the wind on cold gas mass and speed only become evident after it has traversed half the total ISM depth. Similarly, the entrainment time can be derived from momentum conservation, ρ_w A_cl v_w t_ent ≈ A_cl (ρ_cl f_v L_ISM + ρ_w (1 − f_v) L_ISM), with A_cl, v_w, ρ the area of the cloud, the velocity of the wind, and the density (of either the hot, ρ_w, or cold, ρ_cl, gas component). In contrast to single-cloud models, assuming ρ_cl ≫ ρ_w, the entrainment time of an ISM run,

    t_ent ≈ (L_ISM / v_w)(χ f_v + 1) = t_sh (χ f_v + 1),    (9)

is directly proportional to the total cold gas mass depth and can be orders of magnitude larger than t_cc (a minimal sketch of these diagnostics follows below).
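For concreteness, the diagnostics of Eqs. (6)-(9) reduce to a few one-line helpers; a minimal sketch follows, with illustrative argument names and equal cell volumes assumed in the mass weighting.

```python
# Minimal sketch of the diagnostics in Eqs. (6)-(9); not the paper's code.
import numpy as np

def v_cold_mass_weighted(rho, v_y, T, T_cold=1e4):
    """Eq. (6): mass-weighted y-velocity of gas with T < 2*T_cold."""
    m = np.where(T < 2.0 * T_cold, rho, 0.0)   # equal cell volumes assumed
    return (m * v_y).sum() / m.sum()

def timescales(chi, f_v, r_cl, L_ism, v_wind):
    t_cc = chi**0.5 * r_cl / L_ism * L_ism / v_wind  # == chi^0.5 r_cl / v_wind, Eq. (7)
    t_sh = L_ism / v_wind                            # wind crossing of the ISM slab
    t_ent = t_sh * (chi * f_v + 1.0)                 # Eq. (9)
    return t_cc, t_sh, t_ent

def tau(t, t_sh, t_cc):
    """Eq. (8): dimensionless time used throughout the figures."""
    return (t - 0.5 * t_sh) / t_cc
```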
Figure 3 shows that the final structure (rightmost column) of the outflows varies significantly depending on the initial conditions. In the top subfigure (top 3 rows), f_v and L_ISM are fixed at 0.1 and 300 r_cl respectively, while r_cl/r_crit is varied as 0.1, 1, and 10 from top to bottom. In this case, all three runs exhibit similar evolutionary behaviour. Initially, a clumpy multiphase distribution of the ISM is present, as described in the initial conditions. At approximately half the entrainment time (t ∼ 0.5 t_ent), some cold clouds begin to coagulate, forming larger clumps while simultaneously losing some of their mass, as indicated by the black coloration, which traces gas mixing. The top panel, for example, shows clumpier structures at this time with respect to the other panels, with substantial mixing taking place. By the entrainment time (t ∼ t_ent), all runs retain a large fraction of cold mass. The bottom half of each panel (below the dashed white line), showing the mass projected over the full box size, reveals the formation of regions where the local concentration of cold gas is higher, while at the same time exhibiting a clumpy nature when integrated over shorter lengthscales (top half of the panels).

In the central subfigure of Fig. 3 (middle 3 rows), we fix r_cl = r_crit and L_ISM = 300 r_cl while varying the volume filling fraction f_v from top to bottom: f_v = 10^-1, 10^-2, and 10^-3. At high filling fractions (f_v = 10^-1, top row), the evolution resembles the runs shown in the subfigure above. Entrainment drives coagulation that competes with mixing, forming a coherent blob where cold gas preferentially survives. At intermediate filling fractions (f_v = 10^-2, middle row), the evolution follows a similar pattern with continued mixing and coagulation. However, by the final snapshot, the surviving cold gas is mistier and has a smaller projected area than for f_v = 0.1. Comparing the rightmost column for both f_v = 10^-1 and f_v = 10^-2 reveals that lower filling fractions produce mistier structures, while higher filling fractions create more distinct, compact clumps. Such differences may help observers identify which galactic regions preferentially drive multiphase gas to galaxy outskirts, based on whether the cold phase appears misty or clumpy (e.g., Chen et al. 2023). At low filling fractions (f_v = 10^-3, bottom row), the cold gas does not survive the interaction with the hot wind, and no cold mass remains by the time of entrainment.

Figure 3 also demonstrates that L_ISM helps determine the phase structure of multiphase outflows. In the last subfigure (bottom 2 rows), we show two runs with f_v = 0.1 and r_cl = r_crit, but with L_ISM = 6 and 30 r_cl, top and bottom, respectively. Although the shallower ISM does not retain any of its initial cold mass after the wind interaction, increasing its depth by a factor of 5 leads to the formation of large cold phase structures.

3.2 Evolution of gas phases

3.2.1 Cold mass evolution

In order to place tighter constraints on the formation of multiphase outflows, we quantitatively analyse the evolution of the cold gas mass over time. Figure 4 shows the time evolution of the cold gas mass (top) and the shear velocity relative to the wind (bottom) for two representative cases: one that strictly satisfies the classical survival criterion (r_cl/r_crit = 10) and one borderline case (r_cl/r_crit = 1). Linestyles denote volume filling fractions, with solid for f_v = 10^-1, dotted for f_v = 10^-2 and dashed for f_v = 10^-3. The runs are colour-coded by ISM depth relative to the initial clump size (the full set of initial conditions is provided in Table 1).

The left panel of Fig. 4 shows that cold gas survives across all values of f_v and L_ISM. The cold gas mass increases over time for all ISM depths and volume filling fractions. Moreover, the bottom panel reveals that this mass growth is accompanied by acceleration to the wind speed.
In other words, for the simulations shown in the left column, only the intrinsic initial clump size r_cl affects cold gas survival – neither f_v nor L_ISM plays a significant role. Notice that the entrainment time is t_ent ∼ t_sh(χ f_v + 1), as discussed in § 3.1.

Figure 3. Density projection (∫ρ(z)dz / (ℓ_z ρ_hot)) at the time of wind-ISM interaction (left column), mid-way through entrainment (middle) and at the time of entrainment (right) for different simulations. Each panel is divided into two halves by a dashed white line, where the top uses a total integration depth ℓ_z = r_cl, and the bottom ℓ_z = L_z,box. Top (rows 1-3): runs of varying r_cl and fixed f_v = 0.1, L_ISM = 300 r_cl. Middle (rows 4-6): runs of varying f_v and fixed L_ISM = 300 r_cl, r_cl = r_crit. Bottom (rows 7-9): 2 runs of varying L_ISM and fixed r_cl = r_crit, f_v = 0.1. Note that we do not show the entire simulation domain but instead focus on the upstream boundary of the cold gas with the hot wind, and use a length of L_y ≈ 100 r_cl along the y axis.

Figure 4. Total cold gas mass (top) and shear speed (bottom) evolution as a function of the dimensionless temporal variable τ (see § 3.1). Left panels display runs with r_cl/r_crit = 10 (where the classical survival criterion for cold gas is strictly satisfied), and r_cl/r_crit = 1 for the right panels. Linestyles represent the volumetric filling fraction of cold gas in the ISM, and runs are colour-coded by the total depth of the initial density field in units of the clump size r_cl.

In the r_cl/r_crit = 1 case (right panels of Fig. 4), survival is no longer guaranteed. Solid lines (f_v = 0.1) show mass ablation at early times. For instance, simulations with ISM depths of 10 r_cl (black lines) rapidly lose most of their mass within a few τ. However, as we increase the ISM depth to 30 r_cl (dark purple) or more, mass growth resumes. At lower f_v (e.g., 10^-2), a similar trend is observed: shallower depths (∼10 r_cl) lead to destruction, while deeper ISM columns of gas (∼100 r_cl) allow the cold phase to survive and eventually grow. For instance, the blue dotted line with L_ISM = 300 r_cl and f_v = 10^-2 shows mass recovery and growth after an initial drop to 10% of the starting mass. Expectedly, the velocity evolution (bottom row panels) of the surviving cases in this regime is similar among them and to that of the ISM runs with r_cl/r_crit = 10. The same behaviour holds for volumetric fractions as low as f_v = 10^-3 (dashed lines in Fig. 4): while some cases (navy line) experience destruction, others (light blue curve) survive and grow. For the latter case, the ISM depth exceeds the individual clump size r_cl by three orders of magnitude.

3.2.2 Temperature-velocity evolution

Figure 5 shows the general temperature and velocity evolution of all gas cells at three snapshots: the initial shock-ISM interaction (t ≈ 0.1 t_ent), mid-entrainment (t ≈ 0.5 t_ent), and near full entrainment (t ≈ t_ent), from left to right, for a simulation with (f_v, r_cl, L_ISM) = (10^-1, r_crit, 30 r_cl).
At t = 0.1 t_ent, the top panel shows that as the wind encounters the ISM, the majority of the cold phase remains slow, while the inflowing hot phase is predominantly faster than the rest of the ISM gas. The bottom panel reveals that the wind has only begun to interact with the interstellar gas, producing three main phase populations in dim yellow: the fast-travelling wind (T > 10^6 K and v ≃ v_w), an equally tenuous static phase with v ≪ v_w, and a static 10^4 K ISM component. Thermally-unstable gas at intermediate temperatures arises directly from early-disrupted ISM regions heated by the shock front, and is therefore scarce at this stage of the evolution. The spread in velocities of this intermediate gas is a result of turbulent mixing in clouds swept up by the wind, with v_gas ∼ v_w, and of freshly disrupted clumps that mix into ∼10^5 K gas at v ∼ 0, thereby filling a wide velocity range below v_w.

At half the entrainment time (t = 0.5 t_ent), the top panel shows that the cold phase velocity is now well described by a Schechter-like function, m(v)/m_cl ≈ A (v/v_c) exp(−(v/v_c)^2) (with A = 10^4 and v_c = 0.1 v_w, shown by the thick dashed black line), with the bulk of its mass travelling at high speeds and a tail of accelerating gas extending down to v/v_w ∼ 10^-3. During this time, however, the hot phase still carries most of the momentum, followed closely by mixed gas, which contains more overall mass and travels faster than the cold phase of the outflow. The bottom panel shows that the initially unstable, low-velocity 10^5 K gas now occupies a narrower v_gas distribution closer to v_w, with gas at these temperatures mainly produced through mixing from surviving clumps at higher speeds. Cold gas forms a distinct elongated region centred at T ∼ 10^4 K and v ∼ 0.1 v_w. The spread in v_y for the cold phase is a distinctive feature of multicloud systems. Unlike in single-cloud simulations, clouds in multicloud environments undergo differential entrainment: clouds at the leading edge are accelerated early by the wind, while those farther downstream have not yet been reached. Additionally, low-velocity fragments from ablated clouds can coagulate into larger clumps that retain these initially low velocities. These reformed clumps remain slow-moving until radiative cooling becomes efficient enough to enable their entrainment. This coagulation process, also illustrated in the central panel of Figure 3, is absent in single-cloud simulations.

At t = t_ent, the top panel shows that once both cold and hot phases are approximately co-moving, the mixed gas mass becomes negligible, and the cold phase carries most of the wind's momentum. The bottom panel indicates that low-speed cold gas has mostly vanished, with negligible gas remaining at v ≲ 10^-3 v_w. The surviving cold (10^4 K) and hot (10^6 K) components now co-travel roughly at the hot phase transonic speed, M_w ∼ 1.5. Gas cells at 10^5 K sit below 0.01 m_clump and cluster around v ∼ v_w, consistent with turbulent radiative mixing layer theory in the fully entrained single-cloud picture. This mixed gas is, however, four orders of magnitude lower in mass than the two main wind phases.

Overall, while the gas structure in our set-ups is distinctly more intricate than in idealised single-cloud models, the evolution of the system as a whole is broadly consistent with the expected dynamical and thermal evolution of individual clumps. Nevertheless, the formation of multiphase outflows does not strictly follow from the traditional single-cloud survival criterion. In the following section, we examine how this criterion must be modified to accurately predict multiphase gas formation.
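The fit quoted in § 3.2.2 can be reproduced with a standard least-squares routine; below is a minimal sketch using scipy.optimize.curve_fit on a synthetic binned mass-velocity histogram (the data arrays here are placeholders, not simulation output).

```python
# Minimal sketch of the modified Schechter fit of Sec. 3.2.2:
# m(v)/m_cl ~ A (v/v_c) exp(-(v/v_c)^2).
import numpy as np
from scipy.optimize import curve_fit

def schechter_like(v, A, v_c):
    x = v / v_c
    return A * x * np.exp(-x**2)

# v_bins: bin-centre velocities in units of v_w; m_bins: cold mass per bin.
# Synthetic data with small lognormal scatter, for demonstration only.
v_bins = np.linspace(0.01, 1.0, 50)
m_bins = schechter_like(v_bins, 1e4, 0.1) * np.random.lognormal(0.0, 0.1, 50)

popt, pcov = curve_fit(schechter_like, v_bins, m_bins, p0=(1e4, 0.1))
print(f"A = {popt[0]:.3g}, v_c/v_w = {popt[1]:.3g}")
```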
3.3 Cold gas survival

We define a unique ISM by three parameters (f_v, r_cl, L_ISM; see section 2.1). Analysing the mass evolution of our set of simulations showed that if r_cl ≤ r_crit, estimating a threshold for the formation of a multiphase outflow requires a non-trivial combination of these three values, as hinted by the alternating survival status of similar runs in the right panel of Figure 4.

Analogous to the single cloud survival criterion r_cl > r_crit, we can compare the average cold gas length through the ISM, f_v L_ISM, to the critical cloud radius and conjecture that

    α f_v L_ISM > r_crit    (10)

(where α is a fudge factor of order unity) leads to cold gas survival also in more complex ISM gas distributions.

Figure 5. Top: mass carried per velocity bin during the wind-ISM interaction (t ∼ 0.1 t_ent, left), mid-way through entrainment (t ∼ 0.5 t_ent, middle) and at the time of entrainment (t ∼ t_ent, right). In black, the total mass-velocity curve; blue, green and red show the mass curves for the temperature cuts T ≤ 10^5, 10^5 ≤ T ≤ 10^5.8, and T ≥ 10^5.8 K, respectively. We perform a least-squares fit (see § 3.2.2) to the mass distribution of cold gas in the middle panel (dashed black). Bottom: phase diagrams of gas temperature (y-axis) and bulk flow speed along the wind direction (x-axis) as a fraction of the wind speed v_w, weighted by phase bin mass (colourbar). This corresponds to the run r_cl/r_crit = 1, f_v = 0.1 and L_ISM = 30 r_cl.

Equation (10) gives a clear threshold for multiphase outflows, in which the volumetric filling fraction f_v and the ISM depth L_ISM both influence survival. Notice how survival no longer depends directly on the original clump size r_cl. This is consistent with our results, as for all r_cl ≤ r_crit we always find a regime where the ISM survives the interaction (we comment on the limits of our criterion in section 4.1). This equation simultaneously captures the single-cloud criterion for clouds with r_cl > r_crit, as setting f_v ≈ 1 and L_ISM = r_cl in equation (10) automatically satisfies the inequality.

We compare this criterion with our results by extracting the effective ISM depth of our runs. In Figure 6, f_v L_ISM is plotted as a function of the initial ISM coherence lengthscale. Surviving runs are labelled as green (light blue) dots and destroyed runs as red (violet) crosses for M ∼ 1.5 (M ∼ 0.7). As observed in section 3.2, ISM patches that are coherent over lengthscales larger than the single cloud criterion self-consistently survive regardless of f_v or L_ISM. As we move towards the r_cl < r_crit regime, all runs that would classically experience destruction now display survival above a certain f_v L_ISM value. The threshold for survival follows a linear dependence. More specifically, we find it to follow 0.5 f_v L_ISM = r_crit with a break-point at r_cl ≈ r_crit, in excellent agreement with Eq. (10).

Our criterion accurately extends to r_cl ≪ r_crit. For instance, the left-most points in Fig. 6 lie two orders of magnitude below the traditional critical clump size.
For both of these simulations we use f_v = 0.1, with L_ISM = 3000 r_cl and 800 r_cl, where only the former, strictly satisfying the expected L > 10^3 r_cl limit for survival, can form a multiphase outflow. We comment further on this point in the discussion, § 4.1.

Figure 6. Emergence (dot) or absence (cross) of multiphase outflows as a function of the cold mass-equivalent ISM depth, f_v L_ISM (y-axis), and the clump size r_cl (x-axis), expressed as fractions of the ISM clump size and the critical cloud radius, respectively. The dotted line indicates values proportional to the critical radius for single cloud survival.

3.3.1 Mass distribution

Figure 7 shows the mass probability density function (PDF; top row) and the cumulative distribution function (CDF; bottom row) of three runs, prior to the interaction with the wind (left panels) and during/after entrainment (right panels). We use SciPy² union-find connected-component labelling (CCL) to identify clumps in each snapshot (see the sketch below). Strikingly, we find that the cold gas distribution stabilises shortly after the wind-ISM interaction and remains time-invariant throughout the simulation (shown in light gray).

² https://docs.scipy.org/doc/scipy/

Figure 7. PDF (top) and CDF (bottom) of the clump size distribution of cold gas in the wind at the time of initialisation (left) and at the time of interaction with the wind (at t ∼ t_sh; right) for runs with (f_v, r_cl/r_crit, L_ISM) = (10^-1, 1, 30 r_cl) and (10^-1, 0.1, 300 r_cl) as blue and green solid lines, respectively. The blue dashed curve shows a run identical to the solid blue one but with f_v = 10^-2, for completeness. Similar mass distributions are shown in light grey for later snapshots.

The blue and green curves correspond to simulations with initial clump sizes of r_cl = r_crit and 0.1 r_crit, respectively, with a volumetric filling factor of f_v = 0.1. Initially, both follow a similar bell-shaped density profile, reflecting our Gaussian-like initialisation of the ISM. These initial distributions do not reflect realistic ISM structures (see § 2.1), but are useful for isolating and testing survival parameters. To highlight the impact of f_v on the clump distribution, we also show an identical run with f_v = 10^-2 in Fig. 7 as a dashed blue curve.

In the second column, after t ∼ t_sh, the shape of the clump distribution changes significantly. The profiles in all three cases flatten into a power-law form consistent with N(>V) ∝ V^-1 and dN/dr ∝ r^-4, corresponding to a ‘Zipf-like’ mass distribution dN/dm ∝ m^-2, in good agreement with previous results from turbulent, multiphase media (Gronke et al. 2022; Das & Gronke 2024), ‘shattering’ simulations (Yao et al. 2025), galactic winds (Tan & Fielding 2024), and ICM simulations (Li et al. 2015; Fournier et al. 2024). This shape is identically reproduced for the lower-f_v run, demonstrating that the ∝ m^-2 mass distribution appears to be universally emergent – and does not depend on the initial distribution. This power-law distribution implies that we do not find a clear characteristic, maximum, or minimum clump size across the runs; the power law becomes established over the initial mass range of the simulation (with the lower cut-off given by our resolution and the upper cut-off given by the biggest clump, for the PDF, or the total mass, for the CDF, of the system).
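The clump statistics above can be sketched with scipy.ndimage.label, one possible stand-in for the connected-component labelling the text refers to; the toy temperature field and the default 6-connectivity are illustrative assumptions.

```python
# Minimal sketch of the clump identification of Sec. 3.3.1.
import numpy as np
from scipy import ndimage

def clump_sizes(T, T_cut=1e5):
    """Volume (in cells) of each connected cold-gas clump (T < T_cut)."""
    labels, n = ndimage.label(T < T_cut)       # face (6-)connectivity by default
    return np.bincount(labels.ravel())[1:]     # drop label 0 (background)

# Toy field: ~5% of cells cold, for demonstration only.
T = np.where(np.random.rand(64, 64, 64) < 0.05, 1e4, 1e6)
V = clump_sizes(T)

# Cumulative distribution N(>= V), cf. the CDF panels of Fig. 7.
V_sorted = np.sort(V)[::-1]
N_geq = np.arange(1, V_sorted.size + 1)
```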
Notably, runs with lower initial cold gas volume filling fractions f_v produce smaller clumps in the wind. This is most evident when comparing the end-tails of the two blue curves in the CDF distributions, where the dashed line with f_v = 10^-2 terminates a factor of a few below the maximum clump size for f_v = 10^-1, a behaviour already observed in the middle subfigure of Fig. 3.

Figure 8. Averaged turbulent velocity of the cold and hot phases along the wind direction (top), and projected along the z-axis (middle and bottom), for the run r_cl/r_crit = 10, f_v = 0.1, and L_ISM = 30 r_cl at t ≈ 0.2 t_ent. We assume equipartition along the three dimensions when computing turbulent dispersion velocities.

Our findings show that, regardless of the initial ISM conditions, clump distributions during and after the initial interaction with the wind converge toward a universal clump mass distribution, effectively erasing any memory of the initial ISM configuration. This suggests that, at least structurally, multiphase outflows do not retain a direct imprint of the source galaxy's cold-phase morphology.

3.4 Emergent turbulence in the outflow

As stellar and AGN-driven winds propagate through a porous ISM, the resulting interaction not only disturbs the cold gas but can also trigger turbulent motions in the initially laminar hot flow. Figure 8 shows the mass-weighted projection of the velocity dispersion v_turb = (3/2 σ_x^2 + 3/2 σ_z^2)^{1/2}, where σ_{x/z} represents the standard deviation of the velocity component along the x/z axis. The 3/2 prefactor accounts for unmeasured turbulent motions along the direction of the wind under the assumption of isotropic turbulence since, as we show below, turbulent speeds are well below v_w, effectively hiding σ_y in the bulk flow of the outflow. σ_z and σ_x are self-similar (we further demonstrate the self-similarity of the turbulence in § 4.4). The top panels show the marginalised spatial averages along the direction of the wind (a minimal sketch of this diagnostic follows below).

Figure 8 shows that a volume-filling, turbulent hot phase quickly develops throughout the simulation domain, whereas cold gas turbulence remains more localised. Interestingly, while we have demonstrated the bulk coupling of the cold and hot speeds along the outflow direction in Fig. 4, the turbulence of the two phases is also coupled. The hot, 10^6 K gas shows only a factor of ∼2 larger turbulent velocities than the cold phase, but both are of order σ ∼ c_s,cold. While the hot gas shows a larger dependence on the ISM depth and f_v, with lower average turbulence for lower cold gas fractions, the cold gas turbulence consistently converges to ∼c_s,cold regardless of wind speed and ISM conditions.
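A minimal sketch of the dispersion diagnostic defined above, assuming equal-volume cells so that density weighting equals mass weighting; the array names are placeholders.

```python
# Minimal sketch of v_turb = sqrt(3/2 sigma_x^2 + 3/2 sigma_z^2),
# with mass-weighted second moments (Sec. 3.4).
import numpy as np

def v_turb(rho, v_x, v_z):
    w = rho / rho.sum()                               # mass weights
    sig2 = lambda v: (w * v**2).sum() - ((w * v).sum())**2
    return np.sqrt(1.5 * sig2(v_x) + 1.5 * sig2(v_z))
```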
Figure 9 shows the time evolution of the mass-weighted mean turbulent velocity dispersion. The left and right panels represent the cold (T ≤ 10^5 K) and hot (T > 10^5 K) phases, respectively. Linestyles follow those in Figure 4, with curves colour-coded by the cold gas depth. For completeness, we show additional runs with wind Mach numbers M_w = 0.7 and 2 in orange and purple, respectively.

Figure 9. Average turbulent velocity (solid line) and 20-80 percentile ranges (shaded regions) as a function of time. The ISM depth is encoded in the colourbar. Different volume filling fractions are represented by solid and dashed linestyles. Additional runs are included for comparison, with wind Mach numbers M_w ≈ 0.7 and 2 in orange and purple, respectively.

We identify three key features:

(i) All ISM configurations follow a qualitatively similar evolution. The peak in turbulence occurs around t ∼ t_sh, immediately at the wind-crossing time, when the shear between the hot wind and the cold structures is strongest.

(ii) The 10^4 K gas consistently displays turbulent motion of magnitude c_s,cold, with peaks typically reaching v_turb/c_s,cold ∼ 1.5 and only occasionally exceeding it (∼2 c_s,cold) for a handful of clumps, as shown by the shaded regions, particularly for runs with narrower ISM depths. Turbulence in the cold phase remains stable at these values.

(iii) The hot gas retains a relatively laminar flow throughout most of the interaction, with v_w ≫ v_turb,hot. Peaks in turbulent velocity are observed only at early times and lie an order of magnitude below c_s,hot. Interestingly, the peak in turbulence shows a mild correlation with f_v, as opposed to the cold phase: higher volume filling fractions impose a larger inertial resistance to the wind flow and therefore a larger initial turbulence.

All curves follow a gentle decay towards lower turbulent speeds, likely due to the merging of small clumps into larger structures.

3.5 Mass growth

Previous work has studied the growth of cold mass in ‘wind tunnel’ set-ups as well as in turbulent radiative mixing layer simulations (Gronke & Oh 2020; Tan et al. 2021; Fielding et al. 2020). Tan et al. (2021) suggested that the effective cooling time of a turbulent, multiphase system is given by the geometric mean of the mixing and cooling times of the gas, leading to a growth time of t_grow ≡ m/ṁ ∼ χ(t_eddy t_cool)^{1/2}. This was confirmed to hold in simulations of multiphase turbulence – with and without magnetic fields (Gronke et al. 2022; Das & Gronke 2024). Figure 10 shows the cold gas mass evolution normalised by this expected value. Our choice for t_eddy corresponds to the Kelvin-Helmholtz timescale of individual clumps, t_kh = χ^{1/2} r_cl/v_w, and for t_cool we use the mixed gas cooling time t_cool,mix³.

Figure 10. Mass growth of the runs in Figure 4, divided by m_0/t_grow, with m_0 the initial cold mass and t_grow = χ(t_cool,mix t_kh)^{1/2}. These runs match those of Figure 4 and follow the same linestyles and colour-coding. In black, the mass growth of an additional scale-free ISM run with (f_v, L_ISM) = (0.1, 30).

The above-mentioned analytical expectation shows excellent agreement with the mass growth in our study, independent of the volume filling fraction and the global ISM properties. Notice that we include a ‘scale-free’ ISM run in the mass growth analysis. For this case, which contains a range of clump sizes, we evaluate the Kelvin-Helmholtz timescale using the largest clump in the sample, t_kh,max. This ‘scale-free’ run is shown as the black curve in Fig. 10. We comment further on this and other scale-free ISM runs in § 4.2.

³ Note that this differs slightly from Gronke et al. (2022), who use the minimum cooling time. The difference is, however, only a constant offset of a factor of ∼4 and does not significantly change our findings.
4 DISCUSSION

4.1 A universal survival criterion for multiphase outflows

Figure 6 shows that the survival of cold gas against entrainment into a wind depends on only two parameters: the volume filling fraction of the cold phase of the multiphase medium, f_v, and its depth in the direction of the wind propagation, L_ISM. The equivalent depth of the ISM, that is, the product of these variables, needs to be larger than the critical survival radius of a radiatively cooling plasma, previously constrained by Gronke & Oh (2018). Specifically, survival is guaranteed for

    f_v L_ISM ≳ 3 pc · (T_cl,4^{5/2} M_wind) / (P_3 Λ_mix,−21.4) · χ_100,    (11)

where T_cl,4 ≡ T_cl/(10^4 K), P_3 ≡ nT/(10^3 cm^-3 K), Λ_mix,−21.4 ≡ Λ(T_mix)/(10^-21.4 erg cm^3 s^-1), M is the Mach number of the wind, we write v_wind ∼ c_s,cl M χ^{1/2}, and χ_100 = χ/100.

Figure 11. Mass evolution (left) and shear speed (right) for three aligned spheres along the wind direction, each with borderline destruction size 0.5 r_crit (dashed). A single isolated cloud does not survive (grey dashed). The simulations vary the intercloud separation (colour bar), and we run an identical set-up with the cloud size increased to 0.75 r_crit (solid).

Equivalently, since f_v L_ISM ∝ 1/n, the column density of the ISM between the hot wind injection site and the surface needs to fulfil

    N_cold ≳ 10^18 cm^-2 · M · (χ/100) · [Λ(T_4)/Λ(T_mix) / 0.1]    (12)

in order for cold gas to survive in outflows. This means that the total projected cold gas along the direction of propagation of the wind requires column densities above ∼10^18 cm^-2 to efficiently entrain and grow cold mass within an outflow. Furthermore, the particular original geometry of the ISM is irrelevant to the question of survival: as long as the column density criterion is met, a cold outflow will form. Consequently, this criterion applies more generally to any scenario in which cold gas is accelerated by a more tenuous phase. Importantly, here L_ISM does not represent the entire height/depth of the ISM but rather the distance from the hot medium injection site (e.g., the clustered SN site) to the disk surface.

These survival criteria are directly analogous to those found in the single-cloud case (Gronke & Oh 2018). Note, however, that there is ongoing debate in the literature about the exact formulation of this criterion (Li et al. 2020a; Sparre et al. 2020; Kanjilal et al. 2020; Abruzzo et al. 2022), with divergent criteria found for χ ≳ 10^4. As we focus in this study on χ ≈ 100 (see also § 4.2, where we check the robustness of the survival criterion for χ = 1000), where the criteria mostly agree, we do not contribute to the discussion of survival timescales. We rather note that, independent of the exact formulation of the ‘single cloud criterion’, there is a generalised survival criterion applicable to complex ISM morphologies, captured by the effective depth of the ISM patch, into which the details enter in the form of r_crit.
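The criterion is cheap to evaluate; a minimal sketch follows, implementing Eq. (11) with the fiducial normalisations of the text and the α ≈ 0.5 break-point found in § 3.3. The function names and default values are illustrative.

```python
# Minimal sketch of the survival criteria, Eqs. (10)-(12).
def r_crit_pc(T_cl4=1.0, mach=1.5, P3=1.0, lam_mix=1.0, chi100=1.0):
    """Eq. (11): critical survival radius in parsec (fiducial scalings)."""
    return 3.0 * T_cl4**2.5 * mach / (P3 * lam_mix) * chi100

def survives(f_v, L_ism_pc, alpha=0.5, **kw):
    """Eq. (10): a multiphase outflow forms if alpha * f_v * L_ISM > r_crit."""
    return alpha * f_v * L_ism_pc > r_crit_pc(**kw)

print(survives(f_v=0.1, L_ism_pc=300))   # deep effective column -> survival
print(survives(f_v=1e-3, L_ism_pc=30))   # shallow column        -> destruction
```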
It is possible to conceive of a limit in volume filling fraction below which our survival criterion, Eq. (10), breaks down. The underlying assumption in our derivation is that separate clumps always interact with each other in a multicloud set-up. We expect a certain threshold in f_v below which inter-cloud distances become larger than the clouds' ‘radius of interaction’, and this assumption no longer holds. We find no evidence for such a break-down at f_v = 10^-3, the lower limit in the parameter space of our suite of simulations. Exploring this parameter space further is computationally expensive, so we instead examine the limiting radius of influence in an idealised scenario and extrapolate to ISM conditions. Figure 11 shows the mass (left) and relative velocity evolution (right) of 3 spherical clouds aligned along the direction of the incoming wind, in analogy to our multicloud analysis, now colour-coded by the separation between them. We choose the individual cloud sizes to be r_cl/r_crit ≈ 0.5 < 1, i.e., not satisfying the survival criterion for a single cloud. The destruction of a single isolated cloud is shown as the grey dashed line in Fig. 11. The system does, however, experience cold gas growth for cloud-cloud separations below ∼40 r_cl = 4 χ^{1/2} r_cl, while destruction reemerges when the clouds are initialised with distances larger than this limit. From our isotropic ISM distributions, where f_v = r_cl^3/d^3, this break-down in survival occurs at f_v ≈ 10^-5 (see Appendix B). This critical cloud separation aligns with previous single cloud crushing studies, which found that the extent of single-cloud tails before destruction is ∼2 χ^{1/2} r_cl (Gronke & Oh 2020). This implies that survival also depends on a second requirement, 4 f_v^{1/3} χ^{1/2} ≳ 1, a criterion often met under ISM gas conditions.

Note that this approach of considering the shielding effect between individual clumps is similar to that of previous ‘multi cloud crushing’ studies (Pittard et al. 2005; Seidl et al. 2025; Forbes & Lin 2019). Recently, Seidl et al. (2025) systematically studied a variety of multicloud set-ups including radiative cooling and obtained a survival criterion defined by the overlap of what they call the ‘effective volume’ of single clouds. They approximate the effective cloud volume as a cone aligned with the wind, with dimensions L_∥ = a_∥ r_crit and L_⊥ = a_⊥ r_crit with (a_∥, a_⊥) = (7.5, 3), and use this to derive a critical effective filling fraction, f̃_v,crit ≈ 0.24, above which they find cold gas survival. Translated to our set-up, the picture of overlapping volumes can lead to two scenarios: one where the intercloud separation d_cl > L_∥, and one where d_cl < L_∥⁴. The latter case, L_∥ > d_cl, generally holds for our runs with r_cl < r_crit and f_v ≥ 10^-3. For this case we can approximate the effective volume filling fraction f̃_V by considering an elongated volume Ṽ = A(L_ISM + a_∥ r_crit), where A = L_z L_x is the cross section of our domain. Thus,

    f̃_V ≈ N r_cl^3 / Ṽ = f_v / [1 + a_∥ (r_crit/L_ISM)],    (13)

where N is, as above, the number of clumps. We can extract an analogous effective critical volume filling fraction f̃_v,crit by substituting our survival criterion f_v L_ISM = r_crit into Eq. (13), giving f̃_v,crit ≈ 1/(a_∥ + f_v^{-1}). This shows that, for L_∥ > d_cl (i.e. r_cl ≪ r_crit and 4 f_v^{1/3} χ^{1/2} ≥ 1), the effective critical filling fraction should in fact vary with f_v rather than remain at a fixed value. However, Seidl et al. (2025) only explored volume filling fractions of order f_V ≳ 10^-1, for which our results align well with their findings (e.g. for f_v = 0.3, f̃_v,crit ≈ 0.1).

⁴ Because d_∥ ≈ d_⊥ in our simulations, we take L_∥ as the main constraining volume length, since distances in the direction of the wind are the most stringent for the survival of cold gas; see § 4.1.
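For reference, Eq. (13) and the derived effective critical filling fraction in a minimal sketch (a_∥ = 7.5 taken from the Seidl et al. 2025 fit; the function names are ours):

```python
# Minimal sketch of Eq. (13) and the effective critical filling fraction.
def f_v_eff(f_v, r_crit, L_ism, a_par=7.5):
    """Eq. (13): effective volume filling fraction for L_par > d_cl."""
    return f_v / (1.0 + a_par * r_crit / L_ism)

def f_v_crit_eff(f_v, a_par=7.5):
    """Effective critical filling fraction, ~1/(a_par + 1/f_v)."""
    return 1.0 / (a_par + 1.0 / f_v)

print(f"{f_v_crit_eff(0.3):.2g}")  # ~0.09, consistent with the ~0.1 quoted above
```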
In summary, our generalised survival criterion provides a simple yet powerful framework for predicting when multiphase gas can persist in galactic outflows. It shows that the emergence of cold gas in winds is not determined by the details of individual clouds, but rather by the integrated cold gas column between the wind-launching region and the surrounding medium. This criterion not only unifies a range of previous single- and multi-cloud results, but also explains why cold material is ubiquitous in observed galactic winds: as long as the effective column exceeds the critical threshold, cold gas will survive, grow, and assemble into extended shells or plumes irrespective of the original ISM geometry.

Beyond galactic winds, these results may have implications for simulations of ram-pressure stripping of galaxies moving through the intracluster medium (ICM; e.g. Ghosh et al. 2024). There, our criterion suggests that the fate of the stripped gas – whether it forms a multiphase tail or is rapidly destroyed and mixed – likewise depends on the effective cold gas depth along the flow direction. This prediction can be tested with high-resolution simulations that systematically vary the ISM structure and distribution.

Figure 12. Mass and shear evolution for ‘scale-free’ ISM runs with f_v L_ISM = 3 r_crit and one with f_v L_ISM = 0.3 r_crit (green). Dark blue: clumps from 0.05 r_crit to r_crit. Purple: same as blue but with Mach number M_w = 0.7 (440 km s^-1). Light red: ‘scale-free’ ISM with overdensities χ = 10^3. Dark red: same as light red but with Mach number M_w = 1.5 (725 km s^-1).

4.2 The effect of scale-free ISM morphologies, larger overdensities and different wind speeds

Observations and simulations of the multiphase interstellar medium (Federrath et al. 2009; Elmegreen & Scalo 2004b) reveal gas structure extending spatially over several orders of magnitude. This differs from the ISM density fields generated in the previous sections, where the cold clumps possess a characteristic scale. This raises the question of how our survival criterion connects to a realistic, scale-free ISM structure.

The dark blue line in Fig. 12 shows the evolution of a simulation in which we initialise the ISM with a range of clump sizes, spanning from an initial r_cl ∼ 10 r_crit = 80 d_cell to r_cl ∼ r_crit = 8 d_cell, using the ISM generation algorithm described in § 2.1 (the range of cloud sizes is shown in the appendix figure A2). We use an effective cold gas depth of f_v L_ISM = 3 r_crit with a volume filling fraction of f_v ≈ 0.15. As shown by the blue line in the top panel of Fig. 12, the cold gas is well entrained within t_ent, growing above its initial mass. This behaviour follows the expectation from our analysis. Since the emergent survival criterion in Eq. (12) reveals that survival does not depend on an intrinsic lengthscale of the ISM, it applies equally to our scale-free ISMs. We further demonstrate this point by analysing the evolution, in solid green, of an identical set-up
but with f_v L_ISM ≈ 0.3 r_crit, where the cold phase is completely absent by t_ent.

To explore multiphase outflow formation at larger overdensities, we conduct additional runs with a higher wind temperature. All runs in Fig. 12 use f_v L_ISM = 3 r_crit and f_v ≈ 0.15. The purple curve shows a scale-free ISM run with the original parameters: χ = 100, T_w = 10^6 K, T_cold = 10^4 K, and M_w = 2. The light red and dark red curves use an increased wind temperature of T_w = 10^7 K (giving χ = 10^3 with T_cold = 10^4 K), with M_w = 0.4 (v_w ≈ 220 km s^-1) and M_w = 1.5, respectively. Despite spanning different temperature regimes and parameter combinations, these runs are initialised to satisfy Eq. (10). All cases show clear survival and accurately follow the survival criterion.

4.3 Multiphase morphology

The ubiquity of cold outflows at both low and high redshifts in turn requires a launch mechanism that can efficiently expel large numbers of cold clumps (Veilleux et al. 2005, 2020; Thompson & Heckman 2024). Similarly, observations of galactic outflows show that while the cool gas permeates the field of view with covering fractions near unity, the inferred volumetric fractions are well below unity, f_v ≪ 1 (see Xu et al. 2022, 2023; Thompson & Heckman 2024, for more references). Fig. 3 demonstrates that cold gas preferentially concentrates in ‘shells’ where f_A ∼ 1 in runs exhibiting cloud survival.

Figure 13. Volume filling fraction (left), areal covering fraction (top-right) and cold gas shell extent in the wind direction (bottom-right) as a function of time, for runs of different initial clump size (colourbar). We express ℓ_slab as a fraction of a cloud's tail, ∼χ r_cl. The initial volume filling fraction is denoted by the linestyle.

In Fig. 13 we explicitly show the cold gas volume filling fraction (left panel), its areal covering fraction (top right panel), and the extent of these cold gas shells along the direction of the wind (bottom right panel) for runs with f_v L_ISM > r_crit, i.e., those in which multiphase outflows form. We colour-code the curves by ISM depth, and the linestyles denote the initial volume filling fraction. We compute both f_v and f_A within the limits of the cold outflow, i.e. from the negative- to the positive-most cold gas component along the y axis, and within the full x, z extent (a minimal sketch follows below). This allows us to set an upper limit on both parameters. The total f_v is the ratio of the number of cold gas cells to the total number within these limits. For this same domain we compute the covering fraction f_A from the left-most boundary in the direction perpendicular to the wind velocity, i.e., along the z direction.

The volumetric filling fraction for these runs remains well below unity at all times. Despite initial drops, f_v consistently increases with time in all survival scenarios. Meanwhile, the areal covering fraction of the winds, f_A, asymptotically reaches unity. The general picture shows f_A ∼ 1 throughout; remarkably, even for initial ISM conditions with extremely low volume filling fractions, the covering fraction naturally tends toward unity, as in the other cases. This is expected, as the clumps gather around r_crit in order to survive, occupying larger projected areas than their original sizes. Observations by Xu et al. (2022) of line multiplets and doublets in 45 low-redshift starburst galaxies showed that outflow winds retain a large covering fraction of approximately f_A ∼ 0.64, in good agreement with our results, where we find f_A ∼ 1. On the other hand, Xu et al.
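One possible implementation of these two measurements, assuming a boolean cold-gas cube ordered (x, y, z) with the wind along y; the choice of projecting f_A along z follows the text, everything else is illustrative.

```python
# Minimal sketch of the f_v and f_A measurements of Sec. 4.3.
import numpy as np

def filling_and_covering(cold):
    """cold: boolean (x, y, z) cube; wind along the y axis."""
    has_cold = cold.any(axis=(0, 2))             # y-columns containing cold gas
    lo, hi = np.where(has_cold)[0][[0, -1]]
    sub = cold[:, lo:hi + 1, :]                  # restrict to the cold-gas extent
    f_v = sub.mean()                             # cold cells / total cells in slab
    f_A = sub.any(axis=2).mean()                 # covering fraction, projected along z
    return f_v, f_A
```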
(2023) estimated the volumetric filling fractions of cold gas in the M82 outflow from the resolved [S II] λλ6717, 6731 emission lines, and found very low values of f_v ∼ 10^-3 - 10^-4. While these values satisfy f_v ≪ 1, as in our simulations, they are 1-2 orders of magnitude lower than most of our runs. Furthermore, our simulations show f_v increasing monotonically (cf. Fig. 13), whereas their estimates show f_v decreasing with radius, with a dependence f_v ∝ r^-1.8. This discrepancy is, however, an expected consequence of our set-up, as wind-tunnel simulations miss the expanding nature of outflows, in which cold gas redistributes across larger volumes as it travels with the wind. Pressure and density properties also evolve in such scenarios, leading to a decrease in the mass transfer rate between the hot and cold material (Dutta et al. 2025). We also caution that our estimates of f_v and f_A are restricted to a shell of gas and are thus, strictly speaking, upper limits compared to the observed values.

Note that in Fig. 13 some runs show an apparent plateau in volume filling fraction at f_v ∼ 10^-1. This could be attributed to artifacts of the periodic boundary conditions, which can alter the mass growth once the gas shells increase in mass. Figure 10, however, shows that the mass growth of the same runs remains close to its predicted value. Indeed, it is possible that this property is inherent to multiphase systems, which can undergo a strong fragmentation process and hence reduce the overall volume that the cold gas fills. This could explain why mass growth continues despite a fixed cold gas volume fraction.

While the cold gas volume filling fraction mostly remains low, some of the projections in Fig. 3 clearly show the formation of "cold gas shells", i.e., confined regions of the outflows where most of the cold material is concentrated, leading to large f_v locally. In the bottom-right panel of Fig. 13, we analyse the temporal evolution of the extent of cold gas along the wind direction, ℓ_slab. The initial expansion or compression of the ISM is tightly coupled to its initial depth: systems with L_ISM > 10^2 r_cl are compressed, in contrast to runs with L_ISM ≤ 10^2 r_cl, which expand. This can be understood by equating the shear timescale of the wind, t_sh ∼ L_ISM/v_w, with the reaction time of individual clumps, t_ent ∼ χ r_cl/v_w, which shows that this transition first occurs at L_ISM ∼ χ r_cl,min. In other words, initial depths shallower than the expanding tail of surviving clouds lead to an apparent extension of the cold shell, whereas if the initial distribution is smaller, the cold gas extends to ∼χ r_cl. Not only do single cloud tails determine whether the cold gas expands or contracts initially, they also set a minimum evolving shell size for multiphase outflows. This is shown by the curves in the bottom right of Fig. 13, where all curves settle close to ℓ_slab ∼ χ r_cl, after which further fragmentation and mass accretion can lead to a slow and continuous increase in size. The compression of the interstellar medium (ISM) is especially important when examining outflow energetics. In particular, in studies of momentum- and energy-driven outflows, the presence of a compact shell of cold gas may cause a momentum-driven outflow to resemble an energy-driven phase. This occurs because the hot gas performs P dV work on the dense cold shell, rather than escaping through a porous medium, a phenomenon common in AGN-driven winds (Faucher-Giguère & Quataert 2012; Ward et al. 2024).
This criterion for the expansion and compression of cold gas could explain the formation of both the plumes and the shells observed in starburst galaxies. For example, 3.3 μm PAH emission in Fisher et al. (2025) reveals plumes extending up to 200-300 pc, where the cold dust appears to align with the direction of the wind and is embedded with clumps of sizes 5-15 pc. These observations are consistent with our findings: for clump sizes of 5 pc in a sufficiently narrow ISM above the SNe event (L_ISM < χ r_cl ≈ 500 pc), cold gas should form plumes that extend by 2 orders of magnitude with respect to their initial size. Similarly, studies by Ha et al. (2025) and Rupke et al. (2019) on the Makani galaxy and by Lopez et al. (2025) on M82 focus on the formation and properties of outflows traced by O IV and Hα emission shells. These observed cold gas morphologies stand in stark contrast to the cometary structures predicted by single-cloud simulations (see the discussion in Thompson & Heckman 2024). For example, Lopez et al. (2025) show the formation of slabs and arcs of cold gas in M82 that are 14-50 pc deep along the direction of the wind. Comparing this to the values of, e.g., the red curve in the bottom right panel of Figure 13, we find that for initial clump sizes of r_cl ∼ 0.1 r_crit ∼ 0.2 pc (using the fiducial values in Eq. (11)), the final shell size reaches ∼10 pc, in agreement with their findings. Another critical point against single-cloud studies is their inability to form arc-like structures. In our case, some of the morphology maps (middle panel of Figure 3) begin to show asymmetric arc-like features. Although they are not as clear as in the M82 observations (Thompson & Heckman 2024; Lopez et al. 2025), the expanding background absent from our simulations is likely essential to accentuate these structures. Previous single-cloud studies also highlight the potential role of magnetic fields in forming more filamentary structures (Shin et al. 2008; Grønnow et al. 2018; Hidalgo-Pineda et al. 2024). While we do not include magnetic fields in this work, we hope to explore them in the future.

Another question we address is how the multiphase morphology of the emergent wind relates to the morphology of the originating ISM. In other words, do galactic winds retain any imprint of the initial cold gas scale? Our results show that the mass distribution in an outflow is governed by Zipf's law (dN/dm ∝ m^-2; cf. Fig. 7). This distribution has been observed in various simulation set-ups: galactic outflows (Tan & Fielding 2024), multiphase turbulence (Das & Gronke 2024; Gronke et al. 2022), and simulations of the ICM (Li & Bryan 2014; Fournier et al. 2024). In our study of ISM-wind interactions, this power law permeates the cold gas structure across time, demonstrating that even with different initial conditions the cold phase rapidly converges toward this behaviour, and suggesting that the power law is a universal attractor for gas arising from cold-hot phase interactions – the ISM structure is not imprinted in the winds. Observations of nearby multiphase winds now reach the resolution required to compare this clump mass distribution to data (e.g. Lopez et al. 2025; Fisher et al. 2025). However, a quantitative analysis is still outstanding.

4.4 Outflow kinematics

Multicloud set-ups offer a closer approximation to realistic galactic outflows, as shown above by several properties of ISM-wind interactions not captured in single-cloud studies.
For instance, the mass-carrying phase of a single outflow can exhibit a wide range of bulk speeds along the wind direction (see Fig. 5). Specifically, we predict a modified ‘Schechter-like’ function for the spread of cold gas in mass and velocity in winds (cf. § 3.2.2), which is in principle observable. We show that the characteristic (v/v_c) exp(−(v/v_c)^2) distribution forms after t ∼ t_sh, while during the initial wind-ISM interaction the clump distribution is semi-random, with most of its material travelling at v ∼ 0. The peak and normalisation increase with time, until reaching v_c ≈ v_w at t ∼ t_ent, i.e. typically after tens of Myr. This implies that by using, e.g., ‘down the barrel’ observations and comparing low-ionization absorption line shapes, one can infer the evolutionary stage of an observed outflow and what this implies for the distance the cold gas has travelled, allowing one, for example, to estimate the mass outflow rate of the wind. In fact, observations of nearby galaxies have shown potentially compatible functional forms in their low-ionization species absorption lines, reaching ∼hundreds of kilometers per second (Rivera-Thorsen et al. 2015; Barger et al. 2016). Further observations and comparisons to simulated outflow profiles such as the ones suggested here will help constrain the nature of galactic winds.

We also study the emergent turbulent motion of the (initially laminar) outflow (§ 3.4). There, we show that the phases remain kinematically coupled throughout the evolution. In particular, the turbulent velocity of the cold phase converges to ∼c_s,cold across all simulations. The turbulent velocity of the hot phase, on the other hand, is slightly higher – about a factor of ∼2 larger than that of the cold phase – and shows a weak dependence on the initial volume filling factor f_v. Specifically, the hot-phase turbulent velocity increases by roughly a factor of ∼2 when f_v increases from 10^-2 to 10^-1, indicating that higher cold-gas volume fractions lead to stronger turbulent motions in the hot medium.

We further investigate the turbulent cascade in these outflows by computing the velocity structure function (VSF hereinafter), a standard diagnostic of (multiphase) turbulence (von Hoerner 1951; Li et al. 2020b; Fournier et al. 2024); see the sketch below. Figure 14 shows the first-order velocity structure function of the cold gas for runs with (r_cl/r_crit, f_v, L_ISM/r_cl) = (1, 10^-1, 30) (left), (1, 10^-2, 300) (middle), and (0.1, 10^-1, 300) (right), i.e., runs in which the cold gas survives. The line colour saturation towards darker tones indicates the temporal evolution of the VSF as the wind traverses L_ISM, as indicated by the colour-bar. In all three cases, the VSF progressively flattens as the ISM is traversed by the wind, reaching a plateau at t ≈ t_sh, when the wind has crossed the ISM, displayed by the darkest solid line in each panel. At the smallest scales, the curves display a Kolmogorov relation following an ℓ^{1/3} power law. This relation is absent at early times, but once the wind has traversed the full depth of the cold gas, the inertial turbulent regime develops, spatially spanning nearly two orders of magnitude from r_cl to L_ISM for the run shown in the right panel of Fig. 14 (f_v = 0.1, r_cl/r_crit = 0.1, L_ISM = 300 r_cl). The injection scale can be inferred from the flattening of the VSF curves at large scales, which consistently matches ℓ ∼ L_ISM for all runs.
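A minimal sketch of the first-order VSF estimator, using random pair sampling over cold-gas cells; the pair count, log-spaced binning, and NaN gaps for empty bins (cf. the caption of Fig. 14) are illustrative choices rather than the paper's pipeline.

```python
# Minimal sketch of the first-order velocity structure function,
# S1(l) = <|v(r + l) - v(r)|>, over randomly drawn cell pairs.
import numpy as np

def vsf1(pos, vel, n_pairs=200_000, n_bins=24, seed=None):
    """pos, vel: (N, 3) arrays of cold-gas cell positions and velocities."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(pos), n_pairs)
    j = rng.integers(0, len(pos), n_pairs)
    l = np.linalg.norm(pos[i] - pos[j], axis=1)
    dv = np.linalg.norm(vel[i] - vel[j], axis=1)
    keep = l > 0
    bins = np.geomspace(l[keep].min(), l[keep].max(), n_bins + 1)
    which = np.digitize(l[keep], bins) - 1
    s1 = np.array([dv[keep][which == b].mean() if np.any(which == b)
                   else np.nan for b in range(n_bins)])  # NaN -> gap in the VSF
    return 0.5 * (bins[1:] + bins[:-1]), s1
```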
The temporal evolution of the VSF magnitude is in agreement with Fig. 9, where we showed that turbulence peaks for both the cold and hot phases at t ∼ L_ISM/v_w, reaching v_turb ∼ c_s,cold. Comparing the first and second panels of Fig. 14, one can see that f_v does not alter the inertial regime of the VSF where Kolmogorov turbulence applies; in both cases the driving scale is ∼L_ISM. In addition, the magnitude is S_1(L_ISM) ∼ c_s,cold. This behaviour is also shown in Figure 9, where f_v has a negligible impact on the cold gas rms velocity.

We are not aware of studies of the VSF in the cold gas component of star-driven outflows in galaxies. Existing work has been limited to cool-core clusters, where AGN activity produces long Hα filaments (Li et al. 2020b; Hu et al. 2022). In cases where the outflows are supersonic, a short t_cool leads to partial thermalisation of the energy cascade, steepening the scaling to ℓ^{1/2}. This is the case for Perseus, as shown by Li et al. (2020b), where the effect was first identified, and later confirmed through simulations by Hu et al. (2022). In the latter study, simulations with subsonic outflows recover the ℓ^{1/3} scaling, consistent with our results, and show a similar temporal trend in the VSF: an increase with time that eventually settles into the Kolmogorov cascade after wind entrainment. Our results suggest that turbulence develops naturally in multiphase winds, and that measuring the VSF can reveal not only the evolutionary state of the wind but also the driving scale, which in turn can constrain the original ISM from which the outflow emerged.

4.5 Comparison to previous work

A large body of literature has focused on cold gas-wind interactions. The simplest cases are, as already outlined in § 1, the single ‘cloud crushing’ studies (Klein et al. 1994; Schneider & Robertson 2017; Scannapieco & Brüggen 2015), which have been extended with radiative cooling to develop a cold gas survival criterion (Gronke & Oh 2018; Li et al. 2020a; Kanjilal et al. 2020) – with which our general criterion is consistent (see § 4.1).

Several studies have also focused on ensembles of clouds and the associated ‘shielding’ effects (e.g. Poludnenko et al. 2002; Alūzas et al. 2012). As mentioned in § 4.1, Seidl et al. (2025) performed multi-cloud simulations including radiative cooling, which allowed them to derive a critical volume filling fraction from the overlap of single-cloud effective volumes. Using this method they extract a limiting f̃_v,crit ≈ 0.24 above which they find cold gas survival. This aligns with our findings for the large volume filling fractions probed by their work, but fails to predict survival for f_v ≪ 0.1.

A handful of higher-resolution studies have turned to other set-ups via the introduction of more complex density fields that mimic ISM gas distributions (e.g. Banda-Barragán et al. 2021; Antipov et al. 2025; Borodina et al. 2025). Specifically, Banda-Barragán et al. (2021) perform wind-tunnel simulations with an ISM initialised from a log-normal gas distribution spanning 10^4-10^6 K. They examine the formation of clumpy, rain-like cold droplets for both compressive and solenoidal density fields. Although their initial conditions yield surviving cloudlets of cold gas at later times, they do not discuss a criterion for survival. Their results show a broad velocity spread for the cold phase, particularly at early times, but do not recover features such as shell formation, since their initial ISM depths remain below χ r_cl (although the expected initial expansion is visible). Survival is revisited by Antipov et al. (2025), who explore a run from Banda-Barragán et al. (2021) and study the survival of cold gas on an individual, clump-by-clump basis via a friends-of-friends algorithm applied to the cold-phase domain of their box.
They find on average t_cool,cl ≪ t_cc, explaining the aforementioned formation of a multiphase outflow. However, their study does not explore the broader parameter space or explain the classical destruction regime where cloud-cloud interactions dominate survival and multiphase outflows emerge. Indeed, their analysis is limited to two identical runs with L_ISM ≈ 50 pc, and does not comment on the role of f_v in survival.

Figure 14. First-order structure functions for the cold phase of the wind (T < 10^5 K), shown for three simulations (from left to right panel): (r_cl/r_crit, f_v, L_ISM) = (1, 0.1, 30 r_cl), (1, 0.1, 300 r_cl), and (0.1, 0.1, 300 r_cl). Temporal evolution is indicated by line colour saturation, in units of the shear time t_sh = L_ISM/v_w. Due to our resolution, clump pairs are undetected at certain separations, leading to gaps in the VSF where the contribution is zero.

Multiple studies have also investigated SNe-ISM interactions in vertically-stratified disk set-ups. Simulations consistently show that the efficiency of generating ‘hot’ outflows depends strongly on the scale height h_SN at which the SNe take place (Martizzi et al. 2016; Li et al. 2017; Creasey et al. 2013). Those exploding at high h_SN, and therefore lower column densities, drive hot outflows more efficiently. Recently, Vijayan et al. (2025) systematically studied the effect of the SN scale height on the formation of multiphase outflows. They show that the cold-mass loading factor becomes significant, while the hot phase carries sufficient specific energy to form an outflow, when the equivalent height-scale of the gas in the disk, h_gas, is comparable to h_SN. Since their simulations use h_gas ∼ 1 kpc, this sweet spot can be explained by the column of ionised gas extending for a few hundred pc above the characteristic h_gas. By our criterion, these set-ups, with an average ISM f_v = 0.1 and L_ISM ≈ 100 pc above h_SN, are well above the survival threshold for multiphase outflows. However, for h_SN ≫ h_gas their work reports that the outflows contain effectively no cold gas mass. This makes intuitive sense: if there is no cold gas in a ‘wind tunnel’ set-up in the first place, no multiphase outflow will emerge. In the case of h_SN ≳ h_gas, one must be cautious. Our criterion indicates that, in principle, gas layers extending only a few tens of parsecs above the supernova scale height should still form cold gas in the outflow. However, one must take the numerical resolution into account when interpreting these results. In order to capture the formation and growth of cold gas in the outflow, the critical cloud radius r_crit – which can be significantly smaller than a parsec under typical ISM conditions (cf. Eq. (11)) – needs to be resolved by at least ∼8 grid cells. If this condition is not met, cold gas formation will be artificially suppressed, even in cases where our criterion would otherwise predict its presence.
This requirement becomes particularly important when only a small amount of cold gas is present above h_SN, i.e., when existing clumps are intrinsically small: in such situations, under-resolving these clumps (and r_crit) will prevent the condensation and growth of cold structures, leading to an underestimation of the cold mass in the simulated outflow.

Additionally, it is worth mentioning that while their study focuses exclusively on the role of L_ISM, we show that f_v is also a key parameter. This distinction is particularly relevant for, e.g., young stellar clusters, where SNe can create low-f_v channels and subsequent explosions can both impart large specific energies to the hot phase and form a significant cold gas component.

Multiple stratified disk studies include more complex physics such as self-gravity and chemical networks (Girichidis et al. 2016b; Walch et al. 2015; Kim & Ostriker 2018) and report difficulties launching multiphase winds with supernova feedback alone (Rathjen et al. 2023). It is important to consider that in typical ISM conditions, where the cold gas mass spectrum approximately follows dN/dm ∝ m^-2 (from m_min,ISM to m_max), the mass fraction of clumps capable of surviving ram pressure acceleration is

f_res = ln(m_max/m_min,wind) / ln(m_max/m_min,ISM),   (14)

where m_min,wind and m_min,ISM are the minimum clump masses that can be launched into the wind and that are present in the ISM, respectively. Ideally, m_min,wind should be set by r_crit. However, most large-scale simulations (e.g., stratified disk models) employ resolutions of Δx ∼ a few parsec, leaving r_crit unresolved. In this regime, m_min,ISM is determined by the cell size: m_min,ISM ∝ Δx^3. Crucially, clumps must span several cells per dimension to survive and grow via mixing in the wind. We therefore adopt m_min,wind ∝ (8Δx)^3 to ensure adequate numerical resolution. This resolution requirement has significant consequences. Adopting an upper cutoff of ∼100 pc for the maximum clump size and using Δx = 4 pc (as in Walch et al. 2015; Girichidis et al. 2016b), only f_res ≈ 0.35 of the simulated ISM mass could be launched into the wind via ram-pressure acceleration (see the sketch after this discussion).

For gas scale heights of h_gas ≈ 100 pc (Walch et al. 2015; Kim & Ostriker 2017), the layers of cold gas above the supernova scale height can extend only a fraction of h_gas, i.e. ∼20 pc. Combining these results, we obtain an effective cold-gas depth of f_v (L_ISM f_res) ≈ 0.1 × 0.3 × 20 pc = 0.6 pc < r_crit for average ISM density values, which therefore does not satisfy our criterion for the formation of multiphase outflows. In addition, it is also worth noting that f_res L_ISM ≲ Δx for most stratified disk studies, and so the 'effective' cold gas is not well resolved and will not launch multiphase outflows. In other words, while supernovae detonating too deep in the disk fail to launch winds because they must propagate through too much cold gas before reaching the surface, those occurring under more favourable conditions (∼h_gas) may still not produce multiphase outflows if the amount of cold gas above the explosion site is insufficient, or insufficiently resolved, to seed the growth of a cold phase in the wind. Higher-resolution simulations, which well resolve r_crit, will shed light on this issue.
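A minimal numerical reading of Eq. (14), assuming clump mass scales with radius as m ∝ r^3; the function and argument names are illustrative.

```python
import numpy as np

def f_res(r_max_pc, dx_pc, cells_per_clump=8):
    """Resolvable mass fraction of a dN/dm ~ m^-2 clump spectrum (Eq. 14),
    assuming clump mass scales as m ~ r^3."""
    m_max = r_max_pc**3                         # largest clump
    m_min_ism = dx_pc**3                        # smallest clump the grid can hold
    m_min_wind = (cells_per_clump * dx_pc)**3   # smallest clump that can be launched
    return np.log(m_max / m_min_wind) / np.log(m_max / m_min_ism)

print(round(f_res(100.0, 4.0), 2))  # 0.35 for the quoted 100 pc cutoff and dx = 4 pc
```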
Similar arguments apply in principle to isolated disk and larger-scale simulations (Tan & Fielding 2024; Smith et al. 2021), but two additional complications arise. First, these simulations often employ adaptive mesh refinement, making it unclear whether the resolution criteria discussed above remain valid. Second, the impact of specific sink-particle and supernova-injection schemes on outflow properties remains an open question (Kim & Ostriker 2018), particularly in simulations including self-gravity. Notably, high supernova rates combined with clustered star formation can efficiently drive multiphase winds even when h_SN ≪ h_gas (e.g. Schneider & Mao 2024).

Finally, although our work only partially explores the role of higher-energy winds, the AGN community has extensively studied the effect of powerful outflows in clumpy ISMs. These winds are characterised by large overdensities and wind velocities (Bourne & Sijacki 2017; Costa et al. 2014). Ward et al. (2024) show that introducing clumpiness in AGN winds can alter the coupling between hot and cold phases. In our framework, multiphase ISMs naturally lead to the formation of gas shells (see section 4.3). This shell structure has important implications for understanding AGN outflow driving mechanisms: for example, a momentum-driven wind can efficiently couple to the cold gas shell through P dV work, producing observational signatures that mimic energy-driven outflows. A complementary study by Borodina et al. (2025) investigates jet propagation through multiphase ISMs and finds that the orientation and jet properties outside the disk depend on the intrinsic AGN power. At low luminosities, cold outflows are rarely produced, whereas at intermediate luminosities of order L ∼ 10^40 erg s^-1, the outflow direction can be strongly altered depending on inclination and ISM depth. Although neither study reaches resolutions down to r_crit, our results demonstrate that the survival criterion applies for χ ∼ 1000 and v_w ∼ 700 km s^-1 (see section 4.2) and that it can account for substantial differences in the observed phase structure and energetics of AGN-driven outflows, where the energy budget is usually larger than for SNe-driven feedback.

4.6 Caveats

• Set-up and resolution. Our simulations resolve individual clouds by ≳8 cells per radius, which is sufficient to capture growth and survival (Gronke & Oh 2020). At this resolution, computational limits restrict us to f_v ≥ 10^-3. We carry out an alternative analysis indicating that survival should extend down to f_v = 10^-5 (cf. Appendix C), although dedicated simulations are still required to confirm this. The periodic boundary condition in the direction perpendicular to the wind is designed to mimic a larger ISM patch. Tests with varying widths (see Appendix B) show the results are converged.

• Magnetic fields. Initialising magnetic fields is non-trivial and adds computational cost. Single-cloud studies (McCourt et al. 2015; Dursi & Pfrommer 2008; Gronke & Oh 2020) show that magnetic fields alter the picture of entrainment and morphology for single clouds. In particular, Hidalgo-Pineda et al. (2024) demonstrated that near-equipartition magnetic fields can boost survival by two orders of magnitude (for χ ∼ 100), shifting the effective threshold in f_v L_ISM down to sub-parsec cloud sizes. This process, and how it connects to a more realistic ISM morphology, remains incompletely understood, and further follow-up work is required.

• Outflow energetics and geometry. We explore only part of the parameter space spanned by wind velocities, temperatures, and density contrasts in the literature.
While our criterion successfully predicts cloud survival in winds with velocities up to 700 km s^-1 (M_w = 1.5) and contrasts of χ ∼ 10^3, these Mach numbers and overdensities remain slightly below the typical conditions for SNe-ISM interactions, and well below the ones expected for AGN-ISM interactions (Bourne & Sijacki 2017; Costa et al. 2014). Extending the analysis to these regimes will be an important direction for future work. As discussed in § 4.3, we do not consider the change in wind properties, e.g., due to adiabatic expansion. This will change the morphology and mass transfer rates at larger distances (Gronke & Oh 2020; Dutta et al. 2025). However, the core conclusions of this study are related to the initial launching, and are thus not affected by this.

• Thermal conduction and viscosity. While single-cloud studies have shown that thermal conduction and viscosity can alter the cold gas morphology and dynamics (Brüggen et al. 2023), they have an overall small effect on the mass transfer rate between the phases, and thus on the survival criterion of cold gas. This is because turbulent diffusion dominates over thermal conduction (Tan et al. 2021; Fielding & Bryan 2022), and rapid cooling sharpens the density (and velocity) interface, thus counteracting the effects of viscosity (Marin-Gilabert et al. 2025).

5 CONCLUSIONS

Galactic outflows are inherently multiphase: they regulate the cold gas content of galaxies, enrich the surrounding medium with ∼10^4 K gas and metals, and can suppress or delay star formation. Yet, current theoretical models provide no consistent explanation for their origin, structure, and composition. In particular, the scarcity of cold gas in simulated outflows remains at odds with the observed multiphase character of galactic winds. Most attempts to address this tension have relied on single-cloud simulations, an idealised configuration that does not capture the complexity of a realistic ISM.

In this work, we performed 3D hydrodynamic simulations of hot outflows interacting with a multicloud ISM, characterised by its cold-gas volume filling fraction f_v, depth L_ISM, and typical clump size r_cl. This framework enabled us to identify the parameter regimes that naturally lead to multiphase outflows, and to assess their relevance in scale-free ISMs, which better approximate real galactic environments. Our main findings are as follows:

(i) Universal multiphase outflow criterion: Cold clumps grow for ISMs satisfying the criterion f_v L_ISM ≥ r_crit, where r_crit corresponds to the single-cloud size threshold for survival (Gronke & Oh 2018; Li et al. 2020a; Kanjilal et al. 2020), f_v is the cold gas volume filling fraction, and L_ISM the distance the hot wind has to travel through the ISM. This survival criterion can be rephrased in terms of a critical column density N_crit ≳ 10^18 χ_100 cm^-2 required for survival, analogous to the single-cloud criterion. This generalised survival criterion can explain the survival of cold gas clouds present, e.g., in a fractal-like ISM morphology, which would fall in the 'destruction regime' (t_cool,mix < t_cc) if considered individually. We show that this criterion holds two orders of magnitude below the classical threshold, i.e., for r_cl/r_crit ≲ 10^-2, and only breaks down at f_v ≲ 10^-5, where intercloud distances overtake their interaction radius d_int ∼ 4 χ^1/2 r_cl.
(ii) Self-similar cold gas morphology: Independent of the initial ISM structure, the cold phase rapidly converges toward a scale-free mass distribution following Zipf's law, dN/dm ∝ m^-2. This behaviour emerges across all simulations and persists over time, effectively erasing memory of the initial morphology and suggesting that turbulent mixing and radiative condensation drive multiphase gas toward a universal self-similar state. As a result, the cold clump mass spectrum, rather than showing imprints of the original ISM geometry, becomes a fundamental distribution of multiphase outflows.

(iii) Compression and cold shells: Depending on the initial conditions, the ISM depth can be compressed or expanded by the hot wind, with L_ISM ≲ χ r_cl leading to expansion. Surviving outflows concentrate their cold material into shells or plumes of size d_cold = χ r_cl,min along the direction of the wind, which then slowly grow as hot gas continues to be accreted.

(iv) Flow and turbulence: As the T ∼ 10^6 K wind crosses the T ∼ 10^4 K ISM, turbulence is driven in both phases, peaking at the ISM crossing time t_sh. The cold and hot phases are kinematically coupled, both in bulk flow and turbulence, with the cold phase reaching σ/c_s,cold ≈ 1. First-order velocity structure functions show that the emergent turbulence is compatible with a Kolmogorov turbulent cascade and that the injection scales are set by L_ISM.

(v) Evolving outflow properties: The growth of multicloud outflows is governed by the Kelvin-Helmholtz instability timescale of the largest clumps, m/ṁ ∼ χ (t_kh,max t_cool,mix)^1/2. Despite ISM compression, both f_v and the areal covering fraction f_A increase in time. f_A rapidly approaches unity even for f_v = 10^-3. Although f_v also grows, it remains ≪1, consistent with observations.

Taken together, these results establish a framework that connects idealised single-cloud studies with stratified disk simulations, extending the analytical survival criterion to more complex ISM morphologies. While further work is required to test the robustness of this criterion under additional physical processes and a larger parameter space, our findings already provide a pathway for comparison with both larger-scale numerical simulations and observations of multiphase galactic outflows.

ACKNOWLEDGEMENTS

FHP thanks Martin Fournier for help with VSF functions. FHP also thanks Tiago Costa, Aditi Vijayan and Benedikt Seidl for insightful discussions. MG thanks the Max Planck Society for support through the Max Planck Research Group, and the European Union for support through ERC-2024-STG 101165038 (ReMMU). P.G. acknowledges funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – 555983577.

REFERENCES

Abruzzo M. W., Bryan G. L., Fielding D. B., 2022, ApJ, 925, 199
Alūzas R., Pittard J. M., Hartquist T. W., Falle S. A. E. G., Langton R., 2012, MNRAS, 425, 2212
Antipov A., Banda-Barragán W. E., Birnboim Y., Federrath C., Gnat O., Brüggen M., 2025, MNRAS, 540, 3798
Armillotta L., Fraternali F., Werk J., Prochaska J. X., Marinacci F., 2017, MNRAS, 470, 114
Banda-Barragán W. E., Brüggen M., Heesen V., Scannapieco E., Cottle J., Federrath C., Wagner A. Y., 2021, MNRAS, 506, 5658
Barger K., Lehner N., Howk J. C., 2016, ApJ, 817, 91
Beattie J. R., Noer Kolborg A., Ramirez-Ruiz E., Federrath C., 2025, arXiv e-prints, arXiv:2501.09855
Borodina O., et al., 2025, ApJ, 981, 149
Bourne M. A., Sijacki D., 2017, MNRAS, 472, 4707
Brüggen M., Scannapieco E., Grete P., 2023, ApJ, 951, 113
Chen H.-W., et al., 2023, ApJL, 955, L25
Cooper J. L., Bicknell G. V., Sutherland R. S., Bland-Hawthorn J., 2009, ApJ, 703, 330
Costa T., Sijacki D., Haehnelt M. G., 2014, MNRAS, 444, 2355
Creasey P., Theuns T., Bower R. G., 2013, MNRAS, 429, 1922
Das H. K., Gronke M., 2024, MNRAS, 527, 991
Dursi L., Pfrommer C., 2008, ApJ, 677, 993
Dutta A., Sharma P., 2019, Research Notes of the AAS, 3, 148
Dutta A., Sharma P., Gronke M., 2025, arXiv e-prints, arXiv:2506.08545
Elmegreen B. G., 1997, ApJ, 477, 196
Elmegreen B. G., Scalo J., 2004a, ARA&A, 42, 211
Elmegreen B. G., Scalo J., 2004b, ARA&A, 42, 211
Farber R., Ruszkowski M., Yang H.-Y., Zweibel E., 2018, ApJ, 856, 112
Faucher-Giguère C.-A., Quataert E., 2012, MNRAS, 425, 605
Federrath C., 2016, MNRAS, 457, 375
Federrath C., Klessen R. S., Schmidt W., 2009, ApJ, 692, 364
Fielding D. B., Bryan G. L., 2022, ApJ, 924, 82
Fielding D. B., Ostriker E. C., Bryan G. L., Jermyn A. S., 2020, ApJ, 894, L24
Fisher D. B., et al., 2024, arXiv e-prints, arXiv:2405.03686
Fisher D. B., et al., 2025, MNRAS, 538, 3068
Forbes J. C., Lin D. N., 2019, AJ, 158, 124
Fournier M., Grete P., Brüggen M., Glines F. W., O'Shea B. W., 2024, A&A, 691, A239
Ghosh R., Dutta A., Sharma P., 2024, MNRAS, 531, 3445
Girichidis P., et al., 2016a, MNRAS, 456, 3432
Girichidis P., et al., 2016b, MNRAS, 456, 3432
Gnat O., Sternberg A., 2007, ApJS, 168, 213
Grete P., et al., 2023, The International Journal of High Performance Computing Applications, 37, 465
Grete P., Scannapieco E., Brüggen M., Pan L., 2025, ApJ, 987, 122
Gronke M., Oh S. P., 2018, MNRAS, 480, L111
Gronke M., Oh S. P., 2020, MNRAS, 492, 1970
Gronke M., Oh S. P., Ji S., Norman C., 2022, MNRAS, 511, 859
Grønnow A., Tepper-García T., Bland-Hawthorn J., 2018, ApJ, 865, 64
Groves B., et al., 2023, MNRAS, 520, 4902
Ha T., et al., 2025, ApJ, 986, 87
Heckman T. M., Armus L., Miley G. K., 1990, ApJS, 74, 833
Hidalgo-Pineda F., Farber R. J., Gronke M., 2024, MNRAS, 527, 135
Hu H., Qiu Y., Gendron-Marsolais M.-L., Bogdanović T., Hlavacek-Larrondo J., Ho L. C., Inayoshi K., McNamara B. R., 2022, ApJL, 929, L30
Ilyasi B., Neelamkodan N., Tokuda K., Barman S., Sewiło M., Sano H., Onishi T., 2025, ApJ, 984, 85
Kanjilal V., Dutta A., Sharma P., 2020, MNRAS, 501, 1143
Kim C.-G., Ostriker E. C., 2017, ApJ, 846, 133
Kim C.-G., Ostriker E. C., 2018, ApJ, 853, 173
Kim C.-G., Kim J.-G., Gong M., Ostriker E. C., 2023, ApJ, 946, 3
Klein R. I., McKee C. F., Colella P., 1994, ApJ, 420, 213
Li Y., Bryan G. L., 2014, ApJ, 789, 153
Li Y., Bryan G. L., Ruszkowski M., Voit G. M., O'Shea B. W., Donahue M., 2015, ApJ, 811, 73
Li M., Bryan G. L., Ostriker J. P., 2017, ApJ, 841, 101
Li Z., Hopkins P. F., Squire J., Hummels C., 2020a, MNRAS, 492, 1841
Li Y., et al., 2020b, ApJ, 889, L1
Lopez L. A., Mathur S., Nguyen D. D., Thompson T. A., Olivier G. M., 2020, ApJ, 904, 152
Lopez S., Lopez L. A., Thompson T. A., Leroy A. K., Bolatto A. D., 2025, arXiv e-prints, arXiv:2502.06934
Marin-Gilabert T., Gronke M., Oh S. P., 2025, arXiv e-prints, arXiv:2504.15345
Martizzi D., Fielding D., Faucher-Giguère C.-A., Quataert E., 2016, MNRAS, 459, 2311
McCourt M., O'Leary R. M., Madigan A.-M., Quataert E., 2015, MNRAS, 449, 2
Naab T., Ostriker J. P., 2017, ARA&A, 55, 59
Nelson D., et al., 2019, MNRAS, 490, 3234
Péroux C., Rahmani H., Arrigoni Battaia F., Augustin R., 2018, MNRAS, 479, L50
Pittard J. M., Dyson J., Falle S., Hartquist T., 2005, MNRAS, 361, 1077
Poludnenko A. Y., Frank A., Blackman E. G., 2002, ApJ, 576, 832
Rathjen T.-E., Naab T., Walch S., Seifried D., Girichidis P., Wünsch R., 2023, MNRAS, 522, 1843
Rivera-Thorsen T. E., et al., 2015, ApJ, 805, 14
Rubin K. H., Prochaska J. X., Koo D. C., Phillips A. C., Martin C. L., Winstrom L. O., 2014, ApJ, 794, 156
Rupke D., 2018, Galaxies, 6, 138
Rupke D. S., et al., 2019, Nature, 574, 643
Scannapieco E., Brüggen M., 2015, ApJ, 805, 158
Schneider E. E., Mao S. A., 2024, ApJ, 966, 37
Schneider E. E., Robertson B. E., 2017, Hydrodynamical coupling of mass and momentum in multiphase galactic winds
Seidl B. S., Gronke M., Farber R. J., Dolag K., 2025, arXiv e-prints, arXiv:2506.05448
Shin M.-S., Stone J. M., Snyder G. F., 2008, ApJ, 680, 336
Smith M. C., Bryan G. L., Somerville R. S., Hu C.-Y., Teyssier R., Burkhart B., Hernquist L., 2021, MNRAS, 506, 3882
Somerville R. S., Davé R., 2015, ARA&A, 53, 51
Sparre M., Pfrommer C., Ehlert K., 2020, MNRAS, 499, 4261
Tan B., Fielding D. B., 2024, MNRAS, 527, 9683
Tan B., Oh S. P., Gronke M., 2021, MNRAS, 502, 3179
Thompson T. A., Heckman T. M., 2024, ARA&A, 62
Townsend R., 2009, ApJS, 181, 391
Trott C. R., et al., 2022, IEEE Transactions on Parallel and Distributed Systems, 33, 805
Veilleux S., Cecil G., Bland-Hawthorn J., 2005, ARA&A, 43, 769
Veilleux S., Maiolino R., Bolatto A. D., Aalto S., 2020, A&ARv, 28
Veilleux S., et al., 2022, ApJ, 926, 60
Vijayan A., Krumholz M. R., Wibking B. D., 2025, MNRAS, 539, 1706
Vogelsberger M., et al., 2014, MNRAS, 444, 1518
Walch S., et al., 2015, MNRAS, 454, 238
Ward S. R., Costa T., Harrison C. M., Mainieri V., 2024, MNRAS, 533, 1733
Xu X., et al., 2022, ApJ, 933, 222
Xu X., et al., 2023, ApJ, 948, 28
Yao Z., Mandelker N., Oh S. P., Aung H., Dekel A., 2025, MNRAS, 536, 3053
Zhang D., Thompson T. A., Quataert E., Murray N., 2017, MNRAS, 468, 4801
von Hoerner S., 1951, Zeitschrift für Astrophysik, 30, 17

APPENDIX A: ISM GENERATION

In section 2 we describe the formalism to create a binary ISM. Figure A1 shows how this distribution peaks at a characteristic cloud mass scale. Lower volumetric fractions preferentially reduce the high-mass clouds, leading to a slight displacement in the peak by a factor of a few. As we study variations in cloud sizes of orders of magnitude, this has negligible impact on the survival probability. This potential factor-of-a-few effect is in turn captured by the order-unity fudge factor α in eq. 10, which accounts for geometric effects.

Similarly, in section 2, we also create ISMs of varying clump sizes following the mass density profile dN/dm ∝ m^-2 to mimic observations of 'scale-free' ISMs. We show the density profile distribution of the scale-free example of section 4.2 in figure A2, corresponding to the black solid line in figure 12. We use an overdensity of χ = 100 and initial ISM parameters (f_v, L_ISM/r_crit, r_cl,min/r_crit) = (0.1, 30, 0.1).
The initial clump size distribution spans over an order of magnitude above r_cl,min, and the slope of the distribution roughly matches the yellow band, representing the expected Zipf's law where the PDF is proportional to m^-2.

Figure A1. Clump size distribution for the ISM modelling in section 2, shown for volume filling fractions f_v = 10^-1 to 10^-4 for a cubic box of side 10^3 cells. The x axis represents the measured clump radius as a fraction of the expected input clump size, r/r_input.

Figure A2. Cumulative distribution function of clump masses, N(≥V), for the scale-free ISM example with overdensity χ = 100 and initial parameters (f_v, L_ISM/r_crit, r_cl,min/r_crit) = (0.1, 30, 0.1). The clump size distribution spans over an order of magnitude above r_cl,min, with a slope roughly matching Zipf's law (yellow band, r^-4 ± 0.3 dex), where the PDF follows dN/dm ∝ m^-2.

APPENDIX B: MASS CONVERGENCE

In order to capture the extended nature of the ISM in the direction perpendicular to the wind, we impose periodic boundary conditions along the x and z axes. We show that for our two fiducial resolution cases, L_⊥,box/r_cl = 32 and 16, mass growth is well converged by comparing the evolution of two runs with (r_cl/r_crit, f_v, L_ISM/r_cl) = (1, 0.1, 30). Notice that we generate separate initial conditions from identical ISM parameters (r_cl/r_crit, f_v, L_ISM/r_cl). Even in this case, the mass evolution is close to identical, only showing deviations of a factor of a few around 15 t_cc.

Figure B1. Mass growth for two simulations with (r_cl/r_crit, f_v, L_ISM/r_cl) = (1, 0.1, 30) for different mesh lengths in the direction perpendicular to the direction of propagation of the wind. In black, we use a box of L_⊥ = 128 cells (16 r_cl). In purple, our fiducial simulation domain with L_⊥ = 256 cells (32 r_cl).

APPENDIX C: LIMITATIONS OF THE MULTIPHASE SURVIVAL CRITERION

As demonstrated in section 4.1, inter-cloud distances of 4 χ^1/2 r_cl mark the limit for survival in multicloud set-ups. Clouds separated by distances larger than this limit cannot interact and do not exhibit survival for r_cl < r_crit. We can associate this separation with a volume filling fraction of the cold phase: since a roughly isotropic and homogeneous ISM should have f_v = N r_cl^3/d_cl^3 (with N the number of clouds and d_cl the cloud separation), the intercloud separation follows as d_cl/r_cl ≈ f_v^-1/3, which in turn shows that the limiting radius of influence, 4 χ^1/2 r_cl, corresponds to a volume filling fraction of ≈10^-5.

Figure C1 shows the intercloud separation averaged over the 6 nearest neighbours using 125 r_cl^3 ISM realisations. The isotropic cloud approximation accurately predicts d_cl, only deviating by a factor of a few for volume filling fractions above 10^-2. This is potentially an artifact of the k-d tree and CCL algorithms (see methods in section 3.3.1), as structures overlap more at higher volume filling fractions. The prediction for the dependence of the cloud separation on f_v agrees well with the empirically found separation of clumps in the ISM. Indicated in grey is the predicted critical separation (and therefore volume filling fraction) for surviving clouds.
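The geometric argument above reduces to a one-line estimate, sketched below; the function name and the prefactor argument are illustrative, with the prefactor 4 taken from the interaction radius d_int ∼ 4 χ^1/2 r_cl of section 4.1.

```python
import numpy as np

def critical_fv(chi, prefactor=4.0):
    """Volume filling fraction below which clouds stop interacting.

    Uses d_cl / r_cl ~ f_v^(-1/3) for a homogeneous ISM and the interaction
    radius d_int ~ 4 chi^(1/2) r_cl, i.e. interaction requires
    f_v^(-1/3) <= 4 chi^(1/2).
    """
    return (prefactor * np.sqrt(chi)) ** -3

print(critical_fv(100))  # ~1.6e-5, the ~1e-5 threshold quoted in the text
```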
Figure C1. Average intercloud separation as a function of volume filling fraction for the cold phase, together with the expected d ∝ f_v^-1/3 relation. The thick dashed line shows the expected relation for a homogeneous ISM. The dotted light grey line marks the critical volume filling fraction for intercloud distances larger than 4 χ^1/2 r_cl.

This paper has been typeset from a TeX/LaTeX file prepared by the author.
MNRAS 000, 1-18 (2025)    Preprint 17 October 2025    Compiled using MNRAS LaTeX style file v3.3

The Launching of Galactic Winds from a Multiphase ISM

Fernando Hidalgo-Pineda,1★ Max Gronke2,1 and Philipp Grete3
1 Max Planck Institute for Astrophysics, Garching D-85748, Germany
2 Astronomisches Rechen-Institut, Zentrum für Astronomie, Universität Heidelberg, Mönchhofstraße 12-14, 69120 Heidelberg, Germany
3 Hamburger Sternwarte, Universität Hamburg, Gojenbergsweg 112, 21029 Hamburg, Germany

Draft from 17 October 2025

ABSTRACT

Galactic outflows are a key agent of galaxy evolution, yet their observed multiphase nature remains difficult to reconcile with theoretical models, which often fail to explain how cold gas survives interactions with hot, fast winds. Here we present high-resolution 3D hydrodynamic simulations of hot outflows interacting with a multiphase interstellar medium (ISM), parameterised by its cold-gas volume filling fraction f_v, depth L_ISM, and characteristic clump size r_cl. We identify a universal survival criterion, f_v L_ISM ≳ r_crit, that generalises the classical single-cloud condition (r_cl > r_crit) and correctly predicts cold-gas survival across a wide range of ISM configurations (including scale-free ones) down to r_cl/r_crit ∼ 10^-2. Remarkably, the resulting cold phase rapidly loses memory of the initial ISM structure and converges toward a self-similar clump mass spectrum following Zipf's law (dN/dm ∝ m^-2), implying that turbulent mixing and radiative condensation universally shape the morphology of multiphase outflows. Surviving gas assembles into extended plumes or confined cold shells of size ∼χ r_cl,min, which grow as mass is accreted from the hot phase. The interaction of an initially laminar wind with a clumpy ISM naturally drives turbulence in both phases, which we characterise through first-order velocity structure functions that follow a Kolmogorov scaling with an injection scale set by L_ISM, and velocity dispersions reaching σ ∼ c_s,cold. Finally, the areal covering fraction of the cold gas approaches unity even for f_v ∼ 10^-3, while the volume filling fraction remains low, naturally explaining the 'misty' nature of observed outflows. Together, these results link small-scale cloud-wind interactions to galaxy-scale feedback, and we discuss their implications for interpreting observations and for modelling multiphase galactic winds in larger-scale simulations.

Key words: hydrodynamics - galaxies: evolution - ISM: structure - ISM: jets and outflows - Galaxy: kinematics and dynamics

1 INTRODUCTION

Galactic winds are a hallmark of star-forming galaxies and a central mechanism in their evolution. Observations show that the velocities and extent of these outflows correlate strongly with star formation rates (Thompson & Heckman 2024; Rubin et al. 2014), highlighting their connection to stellar feedback processes. Their presence is ubiquitous across cosmic time, playing a crucial role in shaping the phase distribution of the interstellar medium (ISM), regulating star formation, and redistributing baryons across galactic, circumgalactic (CGM) and even intergalactic scales (see reviews by Veilleux et al. 2005; Péroux et al. 2018; Thompson & Heckman 2024).

There is broad consensus on the importance of galactic winds in cosmic evolution. First, spectroscopic observations reveal that outflows are often metal-enriched compared to the surrounding gas, confirming their ISM origin and their role in enriching the CGM and intergalactic medium (IGM) with metals (Lopez et al. 2020; Veilleux et al. 2022).
Second, multi-wavelength data consistently show that winds are both multiphase and multiscale, exhibiting structures across wide spatial extents, central to both small and large scales of the baryon cycle (Rupke 2018; Lopez et al. 2025). Finally, cosmological simulations consistently require energetic feedback in the form of outflows to reproduce the observed stellar mass function, metallicity gradients, and gas fractions in galaxies (Vogelsberger et al. 2014; Nelson et al. 2019; Somerville & Davé 2015). Despite this, the precise coupling between supernova (SN)-driven outflows and the ISM remains poorly understood.

Several simulation efforts have tried to model outflows in realistic ISM environments. Stratified disk simulations such as TIGRESS (Kim et al. 2023) and SILCC (Walch et al. 2015; Girichidis et al. 2016a) reproduce the multiphase structure and chemistry of the ISM and use stellar feedback to launch cold gas flows. However, these flows mostly take the form of fountains that fall back onto the galaxy. Their design also makes it difficult to follow large-scale expanding winds (e.g. Martizzi et al. 2016). Global disk models with embedded supernovae, such as Schneider & Mao (2024), improve the treatment of wind geometry but still lack the resolution needed to resolve the small-scale dynamics of cold cloudlets interacting with the hot wind.

X-ray and millimetric observations reveal that outflows have a complex phase structure, with cold gas (∼10^4 K) embedded within a hot wind (∼10^6 K) (Heckman et al. 1990; Fisher et al. 2024). Capturing this multiphase composition and dynamics is challenging not only for advanced simulations but also for simplified models. Classically, these simplified approaches study in detail the interaction of a single cold clump of gas with stellar-driven feedback, treating it as a test case for multiphase outflow formation. In this framework, the acceleration timescale for a cloudlet to be entrained by the wind is given by t_acc ∼ χ r_cl/v_wind, where χ is the cloud-to-wind density contrast, r_cl the cloud radius, and v_wind the wind speed. In contrast, the cloud destruction timescale due to hydrodynamical instabilities is t_dest ∼ χ^1/2 r_cl/v_wind, so for χ = T_hot/T_cold ≫ 1, t_dest ≪ t_acc, suggesting clouds are destroyed before they can be accelerated (Klein et al. 1994; Zhang et al. 2017). Instead, observations suggest that cold gas is dynamically coupled to outflows and is a fundamental outcome of feedback. Failing to reproduce this behaviour comes at the cost of mischaracterising the properties of galaxies and their environments.

Extensive theoretical work has closely examined the survival of these cold clumps embedded in fast, hot winds. Several studies have shown that radiative cooling can alter the outcome significantly (Cooper et al. 2009; Scannapieco & Brüggen 2015; Armillotta et al. 2017; Farber et al. 2018). In particular, Gronke & Oh (2018) showed that in the strong cooling regime, where the cooling time of cold clouds is shorter than their disruption timescale, overdense clouds can resist destruction by hydrodynamic instabilities (also see Li et al. 2020a; Kanjilal et al. 2020; Sparre et al. 2020, for further investigations of the relevant cooling time). This introduces a critical radius r_crit, above which radiative gains exceed destructive mixing losses (Gronke & Oh 2020).
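To make the scaling explicit, a small sketch comparing these two timescales; the function name and unit conventions are our own, not part of the paper's analysis.

```python
import numpy as np

PC_KM = 3.086e13   # km per parsec
MYR_S = 3.156e13   # seconds per Myr

def entrainment_timescales(chi, r_cl_pc, v_wind_kms):
    """t_acc ~ chi r_cl / v_wind and t_dest ~ chi^(1/2) r_cl / v_wind, in Myr."""
    t_cross = r_cl_pc * PC_KM / v_wind_kms / MYR_S   # cloud crossing time in Myr
    return chi * t_cross, np.sqrt(chi) * t_cross

t_acc, t_dest = entrainment_timescales(chi=100, r_cl_pc=10, v_wind_kms=700)
print(t_acc, t_dest, t_dest / t_acc)  # ratio = chi^-0.5 = 0.1: destroyed before entrained
```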
However, the ISM is not composed of isolated spherical clouds but instead follows a scale-free distribution shaped by turbulence (Elmegreen 1997; Federrath 2016; Beattie et al. 2025; Grete et al. 2025). In such media, the single-cloud model cannot fully explain the persistence of cold gas: neighbouring clouds can shield one another from shear instabilities, cloud drag may be enhanced by certain geometries, and mixing can be suppressed in clumpier environments. Additional physics such as magnetic fields and viscosity further complicate this picture (McCourt et al. 2015; Hidalgo-Pineda et al. 2024; Brüggen et al. 2023). Cold gas survival is therefore unlikely to depend on a single critical scale, but rather on the combined influence of geometry and gas properties.

Previous work such as Banda-Barragán et al. (2021) investigated the driving of multiphase outflows through the interaction of a fast-travelling wind with fractal-like structures, and thus bridges the 'classical' cloud crushing studies with the stratified disk simulations discussed above. However, the connection to the analytical 'survival criterion' described above remains largely unexplored. Antipov et al. (2025) identify and study the cooling regime of single clouds in multicloud set-ups, but their analysis is limited to two simulation runs from Banda-Barragán et al. (2021), without a broad survey of the parameter space.

In this work, we will address this point by studying the impact of a fast, hot wind on a more realistic, multiphase ISM. Using high-resolution 3D hydrodynamical simulations, we aim to resolve the relevant length scales (e.g., r_crit), and thus aim to generalise the survival criterion.

Clump radii larger than r_cl,min in the equation lead to lower individual f_v. Producing a series of realisations for different r_cl fields following this volumetric filling fraction relation recovers the clump mass power law; the final combined f_v roughly corresponds to the sum of the f_v for each r_cl size. Essentially, the Heaviside function in Eq. (3) is replaced by

H' = Σ_{r=r_0} H(φ̃ − m/r_cl^3),   (5)

where m = f_v,max r_cl,min^3. An example of the final mass distribution can be found in Appendix A. Note that, for all our runs, we satisfy the resolution criterion r_cl/d_cell ≥ 8, which was found by previous 'cloud crushing' studies to be sufficient in order to converge on the cold gas mass evolution (Gronke & Oh 2020; Kanjilal et al. 2020).

Figure 2. 2D slices of our initial binary multiphase ISM fields, where the wind enters from the left boundary (outside the shown region). We vary the volumetric cold gas filling fraction f_v (0.5, 0.1, 0.05) along the x axis, and the clump size r_cl (r/r_crit = 0.3, 1, 2) along the y axis, expressed as a fraction of the critical lengthscale for single cloud survival, r_crit. Yellow and navy encode the density values (ρ ∼ 10^-24 to 10^-26 g cm^-3) of the cold and hot phases, respectively.
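A minimal sketch of this thresholding construction, under our reading of Eq. (5): a random field smoothed on scale r_cl is cut with a Heaviside function to reach a target filling fraction, and the scale-free field is the union over clump sizes with per-size filling fraction f_v,max (r_cl,min/r)^3. All function names and the use of a Gaussian-smoothed white-noise field are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def binary_ism(shape, r_cl, f_v, rng=None):
    """Binary cold/hot field: smooth white noise on scale r_cl (in cells),
    then threshold it (Heaviside cut) so a fraction f_v of cells is cold."""
    rng = np.random.default_rng(rng)
    phi = gaussian_filter(rng.standard_normal(shape), sigma=r_cl)
    cut = np.quantile(phi, 1.0 - f_v)        # threshold reproducing the target f_v
    return phi > cut

def scale_free_ism(shape, r_min, r_max, f_v_max, n_scales=8, rng=None):
    """Union of thresholded fields over clump sizes, with per-size filling
    fraction f_v(r) = f_v_max (r_min/r)^3, our reading of Eq. (5)."""
    cold = np.zeros(shape, dtype=bool)
    for r in np.geomspace(r_min, r_max, n_scales):
        cold |= binary_ism(shape, r, f_v_max * (r_min / r) ** 3, rng)
    return cold
```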
2.2 Numerical implementation

We perform wind-tunnel hydrodynamic simulations using the publicly available code AthenaPK1, which implements finite-volume (magneto)hydrodynamic algorithms on top of the Parthenon framework (Grete et al. 2023). We employ a formally second-order hydrodynamic finite-volume scheme with a predictor-corrector Van Leer integrator, a Harten-Lax-van Leer with Contact (HLLC) Riemann solver, and piecewise parabolic reconstruction in primitive variables. The performance portability of AthenaPK, achieved through Kokkos (Trott et al. 2022), allows us to run on accelerator-based architectures and perform large-domain simulations on a static grid while strictly satisfying the aforementioned mass-growth resolution criterion of r_cl/d_cell ≥ 8.

1 AthenaPK is an open-source, performance-portable code for astrophysical MHD simulations: https://github.com/parthenon-hpc-lab/athenapk.

Our simulation meshes are structured as a rectangular domain with transverse lengths of 256 or 128 cells (L_⊥/r_cl = 32 and 16, respectively) and a longitudinal dimension aligned with the wind direction (y-axis), typically extending to L_y ∼ 3 L_ISM (see Table 1 for details). Both transverse lengths are sufficient to capture the dynamics, though we adopt the 256-cell width as the fiducial resolution (see Appendix B for a resolution test). Our multiphase ISM density field realisations assign a temperature of T = 10^6 K and a density of ρ_hot = 10^-26 g cm^-3 to the background (hot) phase, while the overdense (cold) phase is set to T = 10^4 K and ρ_cold = 10^-24 g cm^-3. Note that it is only the relative overdensity of the gas at these two temperatures that affects the dynamical timescales, producing a self-similar solution regardless of the specific density values (see § 2.2 in Dutta et al. 2025 for an explanation of self-similarity).

The ISM slab, of length L_ISM, is placed 8 r_cl from the upstream y boundary of the domain. From this boundary, we continuously inject a mildly transonic (M ≈ 1.5) or subsonic (M ≈ 0.7) hot wind in the positive y-direction. No initial turbulent velocities are imposed on either of the two phases, i.e., the hot wind is initially fully laminar. Periodic boundary conditions are applied in the transverse directions, x and z, and a convergence study assessing potential artifacts from this set-up is presented in Appendix B.

To compute radiative losses, we employ the Townsend (2009) radiative cooling algorithm, using the collisional ionisation equilibrium (CIE) cooling tables from Gnat & Sternberg (2007) for solar metallicity. To mimic heating from the UV background, we disable cooling above 6 × 10^5 K, and impose a temperature floor of 10^4 K.

To optimise computational efficiency and better track the motion of dense material, we perform simulations in a frame-boosted fashion, as in previous works (McCourt et al. 2015; Scannapieco & Brüggen 2015; Dutta & Sharma 2019), by computing the cold gas mass-weighted velocity

⟨v_cold⟩ = ∫_{T<T_cold} ρ v dV / ∫_{T<T_cold} ρ dV.

At this early stage, the gas separates into three components: a hot, tenuous wind phase (T > 10^6 K and v ≃ v_w), an equally tenuous static phase with v ≪ v_w, and a static 10^4 K ISM component. Thermally-unstable gas at intermediate temperatures arises directly from early-disrupted ISM regions heated by the shock front, and is therefore scarce at this stage in evolution. The spread in velocities of this intermediate gas is a result of turbulent mixing in clouds swept by the wind, with v_gas ∼ v_w, and freshly disrupted clumps that mix into ∼10^5 K gas at v ∼ 0, therefore filling a wide velocity range below v_w.

At half the entrainment time (t ∼ 0.5 t_ent), the top panel shows that the cold phase velocity is now well described by a Schechter function, m(v)/m_cl ≈ A (v/v_c) exp(−(v/v_c)^2) (with A = 10^4 and v_c = 0.1 v_w, shown by the thick dashed black line), with the bulk of its mass travelling at high speeds and a tail of accelerating gas extending down to v/v_w ∼ 10^-3. During this time, however, the hot phase still carries most of the momentum, followed closely by mixed gas, which contains more overall mass and travels faster than the cold phase of the outflow.
The bottom panel shows that the initially unstable, low-velocity 10^5 K gas now occupies a narrower v_gas distribution closer to v_w, with gas at these temperatures mainly produced through mixing from surviving clumps at higher speeds. Cold gas forms a distinct elongated region centred at T ∼ 10^4 K and v ∼ 0.1 v_w. The spread in v_y for the cold phase is a distinctive feature of multicloud systems. Unlike single-cloud simulations, clouds in multicloud environments undergo differential entrainment: clouds at the leading edge are accelerated early by the wind, while those farther downstream have not yet been reached. Additionally, low-velocity fragments from ablated clouds can coagulate into larger clumps that retain these initially low velocities. These reformed clumps remain slow-moving until radiative cooling becomes efficient enough to enable their entrainment. This coagulation process, also illustrated in the central panel of Figure 3, is absent in single-cloud simulations.

At t ∼ t_ent, the top panel shows that once both cold and hot phases are approximately co-moving, the mixed gas mass becomes negligible, and the cold phase carries most of the wind's momentum. The bottom panel indicates that low-speed cold gas has mostly vanished, with negligible gas remaining at v ≲ 10^-3 v_w. The surviving cold (10^4 K) and hot (10^6 K) components now co-travel roughly at the hot phase transonic speed, M_w ∼ 1.5. Gas cells at 10^5 K sit below 0.01 m_clump and cluster around v ∼ v_w, consistent with turbulent radiative mixing layer theory in the fully entrained single-cloud picture. This mixed gas is, however, four orders of magnitude lower in mass than the two main wind phases.

Overall, while the gas structure in our set-ups is distinctly more intricate than in idealised single-cloud models, the evolution of the system as a whole is broadly consistent with the expected dynamic and thermal evolution of individual clumps. Nevertheless, the formation of multiphase outflows does not strictly follow from the traditional single-cloud survival criterion. In the following section, we examine how this criterion must be modified to accurately predict multiphase gas formation.

Figure 5. Top: mass carried per velocity bin during the wind-ISM interaction (t ∼ 0.1 t_ent, left), mid-way through entrainment (t ∼ 0.5 t_ent, middle) and at the time of entrainment (t ∼ t_ent, right). In black, the total mass velocity curve; blue, green and red show the mass curves for temperature cuts T ≤ 10^5, 10^5 ≤ T ≤ 10^5.8, and T ≥ 10^5.8 K, respectively. We perform a least-squares fit (see § 3.2.2) to the mass distribution of cold gas in the middle panel (dashed black). Bottom: phase diagrams for gas temperature (y-axis) and bulk flow speed along the wind direction (x-axis) as a fraction of the wind speed v_w, weighted by phase bin mass (colourbar). This corresponds to the run r_cl/r_crit = 1, f_v = 0.1 and L_ISM = 30 r_cl.
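For reference, a minimal sketch of fitting the quoted modified Schechter form to a measured mass-velocity curve; the synthetic data, initial guesses, and names are illustrative, not the least-squares fit used in § 3.2.2.

```python
import numpy as np
from scipy.optimize import curve_fit

def schechter_like(v, A, v_c):
    """Modified Schechter form for cold-gas mass per velocity bin,
    m(v)/m_cl ~ A (v/v_c) exp(-(v/v_c)^2), as quoted for t ~ 0.5 t_ent."""
    x = v / v_c
    return A * x * np.exp(-x**2)

# toy usage on synthetic data mimicking the quoted best fit (A = 1e4, v_c = 0.1 v_w)
v = np.geomspace(1e-3, 1.0, 50)                  # velocities in units of v_w
m = schechter_like(v, 1e4, 0.1) * np.random.default_rng(0).lognormal(0, 0.1, v.size)
popt, _ = curve_fit(schechter_like, v, m, p0=(1e3, 0.05))
print(popt)                                      # recovers roughly (1e4, 0.1)
```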
3.3 Cold gas survival

We define a unique ISM from three parameters (f_v, r_cl, L_ISM; see section 2.1). Analysing the mass evolution of our set of simulations showed that if r_cl ≤ r_crit, estimating a threshold for the formation of a multiphase outflow requires a non-trivial combination of these three values, hinted at by the alternating survival status of similar runs in the right panel of figure 4. Analogous to the single-cloud survival criterion r_cl > r_crit, we can compare the average cold gas length through the ISM, f_v L_ISM, to the critical cloud radius and conjecture that

α f_v L_ISM > r_crit   (10)

(where α is a fudge factor of order unity) leads to cold gas survival also in more complex ISM gas distributions. Equation 10 gives a clear threshold for multiphase outflows, where the volumetric filling fraction f_v and the ISM depth L_ISM both influence survival. Notice how survival no longer depends directly on the original clump size r_cl. This is consistent with our results, as for all r_cl ≤ r_crit we always find a regime where the ISM survives the interaction (we comment on the limits of our criterion in section 4.1). This equation simultaneously captures the single-cloud criterion for clouds with r_cl > r_crit, as setting f_v ≈ 1 and L_ISM = r_cl in equation 10 automatically satisfies the inequality.

We compare this criterion with our results by extracting the effective ISM depth for our runs. In figure 6, f_v L_ISM is plotted as a function of the initial ISM coherent lengthscale. Survived runs are labelled as green (light blue) dots and destroyed runs as red (violet) crosses, respectively, for M ∼ 1.5 (M ∼ 0.7). As observed in section 3.2, ISM patches that are coherent over lengthscales larger than the single-cloud criterion self-consistently survive regardless of f_v or L_ISM. As we move towards the r_cl < r_crit regime, only runs above the f_v L_ISM ∝ r_crit limit for survival can form a multiphase outflow. We further comment on this point in the discussion, § 4.1.

Figure 6. Emergence (dot) or absence (cross) of multiphase outflows for winds with M_w = 1.5 and 0.7, as a function of cold mass-equivalent ISM depth, f_v L_ISM (y-axis), and clump size r_cl (x-axis), expressed as fractions of the ISM clump size and the critical cloud radius, respectively. The dotted line indicates values proportional to the critical radius for single-cloud survival.

3.3.1 Mass distribution

Figure 7 shows the mass probability density function (PDF, top row) and cumulative distribution (CDF, bottom row) of three runs, prior to the interaction with the wind (left panels) and during/after entrainment (right panels). We use SciPy2 union-find connected-component labelling (CCL) to identify clumps for each snapshot. Strikingly, we find that the cold gas distribution stabilises shortly after the wind-ISM interaction, and remains time-invariant throughout the simulation (shown in light grey).

2 https://docs.scipy.org/doc/scipy/
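A minimal sketch of such a clump identification step, here using scipy.ndimage connected-component labelling on a temperature cut; the spherical-clump radius conversion, the threshold, and the toy snapshot are our own illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def clump_spectrum(temperature, t_cold=1e5, dx=1.0):
    """Label connected cold regions (T < t_cold) and return effective clump
    radii, assuming roughly spherical clumps: V = 4/3 pi r^3."""
    cold = temperature < t_cold
    labels, n = ndimage.label(cold)   # default 6-connectivity; pass a custom
                                      # structure for 26-connectivity if desired
    volumes = ndimage.sum_labels(cold, labels, index=np.arange(1, n + 1)) * dx**3
    return (3.0 * volumes / (4.0 * np.pi)) ** (1.0 / 3.0)

# toy usage: a random binary field stands in for a simulation snapshot
T = np.where(np.random.default_rng(1).random((64, 64, 64)) < 0.05, 1e4, 1e6)
radii = clump_spectrum(T)
print(np.sort(radii)[::-1][:5])   # a Zipf spectrum would show N(>V) ~ V^-1
```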
Figure 7. PDF (top) and CDF (bottom) of the clump size distribution of cold gas in the wind at the time of initialisation (left) and at the time of interaction with the wind (at t ∼ t_sh; right) for runs with (f_v, r_cl/r_crit, L_ISM) = (10^-1, 1, 30 r_cl) and (10^-1, 0.1, 300 r_cl) as blue and green solid lines, respectively, together with r^-2 and r^-4 (PDF) and V^-1 and V^-2 (CDF) reference slopes. The blue dashed curve shows a run identical to the solid blue one but with f_v = 10^-2, for completeness. Similar mass distributions are shown in light grey for later snapshots.

The blue and green curves correspond to simulations with initial clump sizes of r_cl = r_crit and 0.1 r_crit, respectively, with a volumetric filling factor of f_v = 0.1. Initially, both follow a similar bell-shaped density profile, reflecting our Gaussian-like initialisation of the ISM. These initial distributions do not reflect realistic ISM structures (see § 2.1), but are useful for isolating and testing survival parameters. To highlight the impact of f_v on the distribution of the clumps, we also show an identical run with f_v = 10^-2 in Fig. 7 as a dashed blue curve.

In the second column, after t ∼ t_sh, the shape of the clump distribution changes significantly. The profiles across all three cases flatten into a power-law form consistent with N(>V) ∝ V^-1 and dN/dr ∝ r^-4, corresponding to a 'Zipf-like' mass distribution dN/dm ∝ m^-2, in good agreement with previous results from turbulent, multiphase media (Gronke et al. 2022; Das & Gronke 2024), 'shattering' simulations (Yao et al. 2025), galactic winds (Tan & Fielding 2024), and ICM simulations (Li et al. 2015; Fournier et al. 2024). This shape is identically reproduced for the lower-f_v run, demonstrating that the ∝ m^-2 mass distribution seems to be universally emergent and does not depend on the initial distribution. This power-law distribution implies that we do not find a clear characteristic, maximum, or minimum clump size across the runs, and the power law establishes itself around the initial mass range of the simulation (with the lower cut-off given by our resolution and the maximum cut-off given by the biggest clump, for the PDF, or the total mass, for the CDF, of the system). Notably, runs with lower initial cold gas volume filling fractions f_v produce smaller clumps in the wind. This is more evident when comparing the end-tails of both blue curves in the CDF distributions, where the dashed line with f_v = 10^-2 terminates a factor of a few below the maximum clump size for f_v = 10^-1, a behaviour that we had already observed in the middle subfigure of Fig. 3.

Our findings show that regardless of initial ISM conditions, clump distributions during and after the initial interaction with the wind converge toward a universal clump mass distribution, effectively erasing any memory of the initial ISM configuration. This suggests that, at least structurally, multiphase outflows do not retain a direct imprint of the source galaxy's cold-phase morphology.

3.4 Emergent turbulence in the outflow

As stellar and AGN-driven winds propagate through a porous ISM, the resulting interaction not only disturbs the cold gas but can also trigger turbulent motions in the initially laminar hot gas flow. Figure 8 shows the mass-weighted projection of the velocity dispersion v_turb = (3/2 σ_x^2 + 3/2 σ_z^2)^1/2, where σ_x/z represents the standard deviation of the velocity component along the x/z axis (a minimal sketch of this measurement is given below). The 3/2 prefactor accounts for unmeasured turbulent motions along the direction of the wind under the assumption of turbulence isotropy since, as we show below, turbulent speeds are well below v_w, effectively hiding σ_y in the bulk flow of the outflow. σ_z and σ_x are self-similar (we further prove the self-similarity of turbulence in § 4.4).

Figure 8. Averaged turbulent velocity of the cold and hot phases along the wind direction (top), and projected along the z-axis (middle and bottom), for the run r_cl/r_crit = 10, f_v = 0.1, and L_ISM = 30 r_cl at t ≈ 0.2 t_ent. We assume equipartition along the three dimensions when computing turbulent dispersion velocities.
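A minimal sketch of this measurement, assuming the dispersions are taken mass-weighted along the projection axis; the function and argument names are illustrative.

```python
import numpy as np

def vturb_map(vx, vz, rho, axis=2):
    """Mass-weighted turbulent velocity map projected along one axis,
    v_turb = sqrt(3/2 sigma_x^2 + 3/2 sigma_z^2); the 3/2 prefactors
    compensate for the unmeasured line-of-wind component assuming isotropy."""
    def wstd(v):
        mean = np.average(v, weights=rho, axis=axis)
        var = np.average((v - np.expand_dims(mean, axis))**2,
                         weights=rho, axis=axis)
        return np.sqrt(var)
    return np.sqrt(1.5 * wstd(vx)**2 + 1.5 * wstd(vz)**2)
```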
Top panels show the marginalised spatial averages along the direction of the wind. Figure 8 shows that a volume-filling, turbulent hot phase quickly develops throughout the simulation domain, whereas cold gas turbulence remains more localised. Interestingly, while we have proven the bulk coupling of cold and hot speeds along the outflow direction in Fig. 4, turbulence is also coupled between the phases. The hot, 10^6 K gas only shows a factor of ∼2 larger turbulent velocities than the cold phase, but both are of order σ ∼ c_s,cold. While hot gas shows a larger dependence on the ISM depth and f_v, with lower average turbulence for lower cold gas fractions, cold gas turbulence consistently converges to ∼c_s,cold regardless of wind speed and ISM conditions.

Figure 9 shows the time evolution of the mass-weighted mean turbulent velocity dispersion. The left and right panels represent the cold (T ≤ 10^5 K) and hot (T > 10^5 K) phases, respectively. Linestyles follow those in Figure 4, with curves colour-coded by the cold gas depth. For completeness, we show additional runs with wind Mach numbers M_w ≈ 0.7 and 2 in orange and purple, respectively. We identify three key features:

(i) All ISM configurations follow qualitatively a similar evolution. The peak in turbulence occurs around t ∼ t_sh, immediately at wind-crossing time, when shear between the hot wind and cold structures is strongest.

(ii) The 10^4 K gas consistently displays turbulent motion of magnitude c_s,cold, with peaks typically reaching values v_turb/c_s,cold ∼ 1.5 and only occasionally exceeding it (∼2 c_s,cold) for a handful of clumps, as shown by the shaded regions, particularly for runs with narrower ISM depths. Turbulence for the cold phase remains stable at these values.

(iii) The hot gas retains a relatively laminar flow throughout most of the interaction, with v_w ≫ v_turb,hot. Peaks in turbulent velocity are observed only at early times and lie an order of magnitude below c_s,hot. Interestingly, the peak in turbulence shows a mild correlation with f_v, as opposed to the cold phase: higher volume filling fractions impose a larger inertial resistance to the wind flow and therefore a larger initial turbulence.

All curves follow a gentle decay towards lower turbulent speeds, likely due to the merging of small clumps into larger structures.

Figure 9. Average turbulent velocity (solid line) and 20-80 percentile ranges (shaded regions) as a function of time, for the cold (T ≤ 10^5 K, left) and hot (T > 10^5 K, right) phases. The ISM depth is encoded in the colourbar. We represent different volume filling fractions (f_v = 10^-1, 10^-2) with solid and dashed linestyles. Some additional runs are added for comparison with wind Mach numbers M_w ≈ 0.7 and 2, in orange and purple, respectively.

3.5 Mass growth

Previous work has studied the growth of cold mass in 'wind tunnel' set-ups as well as in turbulent radiative mixing layer simulations (Gronke & Oh 2020; Tan et al. 2021; Fielding et al. 2020). Tan et al. (2021) suggested that the effective cooling time of a turbulent, multiphase system is given by the geometric mean of the mixing and cooling times of the gas, thus leading to a growth time of t_grow ≡ m/ṁ ∼ χ (t_eddy t_cool)^1/2. This was confirmed to hold in simulations of multiphase turbulence, with and without magnetic fields (Gronke et al. 2022; Das & Gronke 2024). Figure 10 shows the cold gas mass evolution normalised by this expected value. Our choice for t_eddy corresponds to the Kelvin-Helmholtz timescale of individual clumps, t_kh = χ^1/2 r_cl/v_w, and for t_cool we use the mixed gas cooling time t_cool,mix3.

3 Note that this differs slightly from Gronke et al. (2022), who use the minimum cooling time. The difference is, however, only a constant offset of a factor of ∼4 and does not significantly change our findings.
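The normalisation used in Fig. 10 then reduces to the short sketch below; names are illustrative, and the commented usage line assumes hypothetical variables for a measured growth curve.

```python
import numpy as np

def t_kh(chi, r_cl, v_w):
    """Kelvin-Helmholtz timescale of an individual clump, chi^(1/2) r_cl / v_w."""
    return np.sqrt(chi) * r_cl / v_w

def t_grow(chi, t_kh_val, t_cool_mix):
    """Cold-mass growth time t_grow = m/mdot ~ chi (t_kh t_cool,mix)^(1/2)."""
    return chi * np.sqrt(t_kh_val * t_cool_mix)

# normalising a measured growth rate as in Fig. 10 (mdot, m0, etc. hypothetical):
# mdot_norm = mdot * t_grow(chi, t_kh(chi, r_cl_max, v_w), t_cool_mix) / m0
```

For scale-free runs one would pass the largest clump radius, consistent with the choice of t_kh,max in the text.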
Our choice for teddy corresponds to the 0 10 20 30 40 τ 10-2 10-1 100 101 ̇m tgrow/m0 rcl 101 102 103 104 LISM [rcl] fv = 10-1 fv = 10-2 fv = 10-3 Figure 10. Mass growth of runs in figure 4, divided by m0/tgrow, with m0 as initial cold mass and tgrow = χ(tcool,mixtkh)1/2. These runs match those of Figure 4 and follow the same linestyle and colourcode. In black, the mass growth of an additional scale-free ISM run with ( fv, LISM) = (0.1, 30). Kelvin-Helmholtz timescale of individual clumps, tkh = χ1/2rcl/vw, and for tcool we use the mixed gas cooling time tcool,mix3. The abovementioned analytical expectation shows excellent agreement with the mass growth in our study,independent of volume filling fraction and global ISM properties. Notice that we include a 'scale-free' ISM run for the mass growth analysis. For this case, which contains a range of clump sizes, we evaluate the Kelvin-Helmholtz timescale using the largest clump in the sample, tkh,max. This 'scale free' run is shown as the black curve in Fig. 10. We further comment on this and other scale-free ISM runs in § 4.2. 4 DISCUSSION 4.1 A universal survival criterion for multiphase outflows Figure 6 shows that the survival of cold gas to the entrainment of a wind depends only on two parameters: the volume of the cold phase of a multiphase medium, fvand its depth in the direction of the wind propagation, LISM. The equivalent depth of the ISM, that is, the product of these variables, needs to be larger than the critical survival radius of a radiatively cooling plasma, which has been previously constrained by studies from Gronke & Oh (2018). Specifically, survival is guaranteed for: fvLISM ≳3 pc T5/2 cl,4Mwind P3Λmix,-21.4 χ100 (11) where Tcl,4 ≡(Tcl/104 K), P3 ≡nT/(103 cm-3 K), Λmix,-21.4 ≡ Λ(Tmix)/(10-21.4 ergcm3s-1), M is the Mach number of the wind, and we write vwind ∼cs,clMχ1/2, and χ100 = χ/100. Equivalent, since fvLISM ∝1/nthe column density of the ISM 3 Note that this differs slightly from Gronke et al. (2022) who use the mininum cooling time. The difference is, however, only a constant offset of a factor of ∼4 and does not significantly change our findings. MNRAS 000, 1-18 (2025) 10 Hidalgo-Pineda, Gronke & Grete 0 20 40 t[tcc] -3 -2 -1 0 1 log (m(T L∥, and one where dcl dcl, generally holds for our runs with rcl dcl (i.e. rcl ≪rcrit and 4 f1/3 v χ1/2 ≥1), the effective critical filling fraction should in fact vary with fv, not remain at a fixed value. However, Seidl et al. (2025) only explored volume filling fractions of the order of fV≳10-1, for which our results do align well with their findings (e.g. for fv= 0.3, ̃fv,crit ≈0.1). In summary, our generalised survival criterion provides a simple yet powerful framework for predicting when multiphase gas can persist in galactic outflows. It shows that the emergence of cold gas in winds is not determined by the details of individual clouds, but rather by the integrated cold gas column between the wind-launching region and the surrounding medium. This criterion not only unifies a range of previous single- and multi-cloud results, but also explains why cold material is ubiquitous in observed galactic winds: as long as the effective column exceeds the critical threshold, cold gas will survive, grow, and assemble into extended shells or plumes irrespective of the original ISM geometry. Beyond galactic winds, these results can have implications for simulations of ram-pressure stripping in galaxies moving through the intracluster medium (ICM; e.g. Ghosh et al. 2024). 
There, our criterion suggests the fate of stripped gas - and whether it forms a 4 Because d∥≈d⊥in our simulations, we take L∥as the main constraining volume length, since distances in the direction of the wind are the most stringent for the survival of cold gas, see § 4.1 MNRAS 000, 1-18 (2025) Launch of Multiphase Winds 11 -3 -2 -1 0 1 log (m(T rcrit, i.e., in which multiphase outflows are formed. We colour-code the curves by ISM depth and the linestyles denote MNRAS 000, 1-18 (2025) 12 Hidalgo-Pineda, Gronke & Grete the initial volume filling fraction. We compute both fvand fAwithin the limits of the cold outflow, i.e. from the negative- to positive-most cold gas component along the yaxis, and within the full x, zextent. This allow us to set an upper-limit on both parameters. The total fvis the ratio of number of cold gas cells to total within these limits. For this same domain we compute the covering fraction fAfrom the leftmost boundary in the direction perpendicular to the wind velocity, i.e., along the zdirection. The volumetric filling fraction for these runs remains well below unity at all times. Despite initial drops, fvconsistently increases with time for all survival scenarios. Meanwhile, the areal covering fraction of winds, fA, asymptotically reaches unity. The general picture shows that fA∼1 throughout; remarkably, even for initial ISM conditions with extremely low volume filling fractions, the covering fraction naturally tends toward unity, as in the other cases. This is expected, as the clumps gather around rcrit in order to survive, occupying larger projected areas than their original sizes. Observations from Xu et al. (2022) of line multiplets and doublets in 45 low-redshift starburst galaxies showed that outflow winds retain a large covering fraction of approximately fA∼0.64, in very good agreement with our results, where we find fA∼1. On the other hand, Xu et al. (2023) estimated the volumetric filling fractions of cold gas in M82 outflows from the resolved [S II], λλ6717, 6731 emission lines, and found very low values of fv∼10-3 -10-4. While these values satisfy fv≪1 as in our simulations, they are 1-2 orders of magnitude lower than most of our runs. Furthermore, our simulations show fvincreasing monotonically (cf. 13), whereas their estimates show fvdecreasing with radius, with a dependence fv∝r-1.8. This discrepancy is, however, an expected consequence of our set-up, as wind-tunnel simulations miss the expanding nature of outflows in which cold gas redistributes across larger volumes as it travels with the wind. Pressure and density properties also evolve in such scenarios, leading to a decrease of the mass transfer rate between the hot and cold material (Dutta et al. 2025). We also caution that our estimates of fvand fAare restricted to a shell of gas and, thus, are strictly speaking upper limits compared to the observed values. Note that in Fig. 13 some runs show an apparent plateau in volume filling fractions of fv∼10-1. This could be attributed to artifacts of the periodic boundary conditions, which can alter the mass growth once gas shells increase in mass. Figure 10, however, shows that the mass growth of the same runs remain close to its predicted value. Indeed, it is possible that this property is inherent to multiphase systems which can undergo a strong fragmentation process, and hence reduce the overall volume the cold gas is filling. This could explain why mass growth continues despite a fixed cold gas volume fraction. 
While the cold gas volume filling fraction mostly remains low, some of the projections in Fig. 3 clearly show the formation of "cold gas shells", i.e., confined regions of the outflows where most of the cold material is concentrated, leading to large f_v locally.

In the bottom-right panel of Fig. 13, we analyse the temporal evolution of the extent of cold gas along the wind direction, l_slab. The initial expansion or compression of the ISM is tightly coupled with its initial depth: systems with L_ISM > 10^2 r_cl are compressed, in contrast to runs with L_ISM ≤ 10^2 r_cl, which expand. This can be understood by equating the shear timescale of the wind, t_sh ∼ L_ISM/v_w, with the reaction time of individual clumps, t_ent ∼ χ r_cl/v_w, which shows that this transition first occurs at L_ISM ∼ χ r_cl,min. In other words, initial depths shallower than the expanding tail of surviving clouds lead to an apparent extension of the cold shell: whereas, if the initial distribution is smaller, the cold gas extends to ∼χ r_cl. Not only do single-cloud tails determine whether the cold gas expands or contracts initially, but they also set a minimum evolving shell size for multiphase outflows. This is shown by the curves in the bottom right of Fig. 13, where all the curves settle close to l_slab ∼ χ r_cl, after which further fragmentation and mass accretion can lead to a slow and continuous increase in size.

The compression of the interstellar medium (ISM) is especially important when examining outflow energetics. In particular, in studies of momentum- and energy-driven outflows, the presence of a compact shell of cold gas may cause a momentum-driven outflow to resemble an energy-driven phase. This occurs because the hot gas performs PdV work on the dense cold shell, rather than escaping through a porous medium, a phenomenon common in AGN-driven winds (Faucher-Giguère & Quataert 2012; Ward et al. 2024).

This criterion for the expansion and compression of cold gas could explain the formation of both plumes and shells observed in starburst galaxies. For example, 3.3 μm PAH emission (Fisher et al. 2025) reveals plumes extending up to 200–300 pc, where the cold dust appears to align with the direction of the wind and is embedded with clumps of sizes 5–15 pc. These observations are consistent with our findings: for clump sizes of 5 pc in a sufficiently narrow ISM above the SNe event (L_ISM < χ r_cl ≈ 500 pc), cold gas should form plumes that extend by 2 orders of magnitude with respect to their initial size. Similarly, studies by Ha et al. (2025); Rupke et al. (2019) on the Makani galaxy and by Lopez et al. (2025) on M82 focus on the formation and properties of outflows traced by O IV and Hα emission shells. These observed cold gas morphologies stand in stark contrast to the cometary structures predicted by single-cloud simulations (see discussion in Thompson & Heckman 2024). For example, Lopez et al. (2025) show the formation of slabs and arcs of cold gas in M82 that are 14–50 pc deep along the direction of the wind. If one compares this to the values of, e.g., the red curve in the bottom right panel of Figure 13, we find that for initial clump sizes of r_cl ∼ 0.1 r_crit ∼ 0.2 pc (using the fiducial values in Eq. (11)), the final shell size reaches ∼10 pc, in agreement with their findings.
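Since v_w cancels when comparing t_sh ∼ L_ISM/v_w with t_ent ∼ χ r_cl/v_w, the expansion/compression criterion above boils down to a single ratio. A minimal sketch (ours, purely illustrative):

```python
# Minimal sketch (our own illustration): compare the wind shear time,
# t_sh ~ L_ISM / v_w, with the clump reaction/entrainment time,
# t_ent ~ chi * r_cl / v_w. The wind speed cancels, so the cold layer is
# initially compressed if L_ISM > chi * r_cl and expands otherwise.

def initial_response(L_ism, r_cl, chi=100.0):
    return "compressed" if L_ism > chi * r_cl else "expands to ~ chi * r_cl"

print(initial_response(L_ism=300.0, r_cl=1.0))  # chi*r_cl = 100 -> compressed
print(initial_response(L_ism=30.0,  r_cl=1.0))  # -> expands
```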
Another critical point against single-cloud studies is their inability to form arc-like structures. In our case, some of the morphology maps (middle panel of Figure 3) begin to show asymmetric arc-like features. Although these are not as clear as in the M82 observations (Thompson & Heckman 2024; Lopez et al. 2025), the expanding background, absent in our simulations, is likely essential to accentuate such structures. Previous single-cloud studies also highlight the potential role of magnetic fields in forming more filamentary structures (Shin et al. 2008; Grønnow et al. 2018; Hidalgo-Pineda et al. 2024). While we do not include magnetic fields in this work, we hope to explore their role in the future.

Another question we address is how the multiphase morphology of the emergent wind is related to the morphology of the originating ISM. In other words, do galactic winds retain any imprint of the initial cold gas scale? Our results show that the mass distribution in an outflow is governed by Zipf's law (dN/dm ∝ m^-2; cf. Fig. 7). This distribution has been observed in various simulation setups: galactic outflows (Tan & Fielding 2024), multiphase turbulence (Das & Gronke 2024; Gronke et al. 2022), and simulations of the ICM (Li & Bryan 2014; Fournier et al. 2024). In our study of ISM-wind interactions, this power law permeates the cold gas structure across time, demonstrating that even with different initial conditions, the cold phase rapidly converges toward this behaviour, and suggesting that the power law is a universal attractor for gas arising from cold-hot phase interactions - and that the ISM structure is not imprinted in the winds. Observations of nearby multiphase winds now reach the resolution required to compare this clump mass distribution to data (e.g. Lopez et al. 2025; Fisher et al. 2025). However, a quantitative analysis is still outstanding.

4.4 Outflow kinematics

Multicloud setups offer a closer approximation to realistic galactic outflows, as shown above by several properties of ISM-wind interactions not captured in single-cloud studies. For instance, the mass-carrying phase of a single outflow can exhibit a wide range of bulk speeds along the wind direction (see Fig. 5). Specifically, we predict a modified 'Schechter-like' function for the cold gas spread in mass and velocities in winds (cf. § 3.2.2), which is in principle observable. We show that the characteristic v/v_c exp(−(v/v_c)^2) distribution forms after t ∼ t_sh, while during the initial wind-ISM interaction the clump distribution is semi-random, with most of its material travelling at v_c ∼ 0. The peak and normalisation increase with time, until reaching v_c ≈ v_w at time t ∼ t_ent, i.e. typically after tens of Myr. This implies that by using, e.g., 'down the barrel' observations and comparing low-ionization absorption line shapes, one can infer the evolutionary stage of an observed outflow and the distance the cold gas has travelled, allowing one, e.g., to estimate the mass outflow rate of the wind. In fact, observations of nearby galaxies have shown potentially compatible functional forms in their low-ionization species absorption lines, reaching ∼hundreds of kilometers per second (Rivera-Thorsen et al. 2015; Barger et al. 2016). Further observations and comparisons to simulated outflow profiles such as the ones suggested here will help constrain the nature of galactic winds.
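For concreteness, a minimal sketch (our parametrisation; the velocity scale and normalisation are illustrative) of the 'Schechter-like' mass-velocity distribution quoted above:

```python
import numpy as np

# Minimal sketch (our parametrisation): the 'Schechter-like' distribution
# quoted in the text, dm/dv ~ (v/v_c) * exp(-(v/v_c)^2), normalised
# numerically. The characteristic velocity v_c grows toward v_w on ~t_ent.

def dm_dv(v, v_c):
    x = np.asarray(v) / v_c
    return x * np.exp(-x**2)

v = np.linspace(0.0, 400.0, 1000)   # km/s, illustrative range
w = dm_dv(v, v_c=100.0)
w /= np.trapz(w, v)                 # normalise to unit cold mass
print(v[np.argmax(w)])              # peaks at v_c / sqrt(2) ~ 71 km/s
```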
We also study the emergent turbulent motion of the (initially laminar) outflow (Fig. 8). There, we show that the phases remain kinematically coupled throughout evolution. In particular, the turbulent velocity of the cold phase converges to ∼c_s,cold across all simulations. The turbulent velocity of the hot phase, on the other hand, is slightly higher - about a factor of ∼2 larger than that of the cold phase - and shows a weak dependence on the initial volume filling factor f_v. Specifically, the hot-phase turbulent velocity increases by roughly a factor of ∼2 when f_v increases from 10^-2 to 10^-1, indicating that higher cold-gas volume fractions lead to stronger turbulent motions in the hot medium.

We further investigate the turbulent cascade in these outflows by computing the velocity structure function (VSF hereinafter), a standard diagnostic of (multiphase) turbulence (von Hoerner 1951; Li et al. 2020b; Fournier et al. 2024). Figure 14 shows the first-order velocity structure function of cold gas for runs with (r_cl/r_crit, f_v, L_ISM/r_cl) = (1, 10^-1, 30) (left), (1, 10^-2, 300) (middle), and (0.1, 10^-1, 300) (right), i.e., runs in which the cold gas survives. The line colour saturation towards darker tones indicates the temporal evolution of the VSF as the wind traverses L_ISM, as indicated by the colour bar. For these three cases, the VSF progressively flattens as the ISM is traversed by the wind, reaching a plateau at t ≈ t_sh, when the wind has traversed the ISM, displayed by the darkest solid line in each panel. At the smallest scales, the curves display a Kolmogorov relation, following an l^{1/3} power law. This relation is absent at early times, but once the wind has traversed the full depth of the cold gas, the inertial turbulent regime develops, spatially spanning nearly two orders of magnitude from r_cl to L_ISM for the run shown in the right panel of Fig. 14 (f_v = 0.1, r_cl/r_crit = 0.1, L_ISM = 300 r_cl). The injection scale can be inferred from the flattening of the VSF curves at large scales, which consistently matches l ∼ L_ISM for all runs. The temporal evolution of the VSF magnitude is in agreement with Fig. 9, where we showed that turbulence peaks for both cold and hot phases at t ∼ L_ISM/v_w, reaching v_turb ∼ c_s,cold. By comparing the first and second panels of Fig. 14, one can see that f_v does not alter the inertial regime of the VSF, where Kolmogorov turbulence applies; in both cases the driving scale is ∼L_ISM. In addition, the magnitude is S1(L_ISM) ∼ c_s,cold. This behaviour is also shown in Figure 9, where f_v has negligible impact on the cold gas rms velocity.

We are not aware of studies of the VSF in the cold gas component of star-driven outflows in galaxies. Existing work has been limited to cool-core clusters, where AGN activity produces long Hα filaments (Li et al. 2020b; Hu et al. 2022). In cases where outflows are supersonic, a short t_cool leads to partial thermalisation of the energy cascade, steepening the scaling to l^{1/2}. This is the case for Perseus, as shown by Li et al. (2020b), where the effect was first identified, and later confirmed through simulations in Hu et al. (2022). In the latter study, simulations with subsonic outflows recover the l^{1/3} scaling, consistent with our results, and show a similar temporal trend in the VSF: an increase with time that eventually settles into the Kolmogorov cascade after wind entrainment. Our results suggest that turbulence does develop naturally in multiphase winds, and that measuring the VSF can reveal not only the evolutionary state of the wind but also the driving scale, which in turn can constrain the original ISM from which the outflow emerged.
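A first-order VSF as used above can be estimated by Monte-Carlo sampling of cell pairs. The following is a minimal sketch (ours, not the paper's implementation; the pair-subsampling strategy is an assumption):

```python
import numpy as np

# Minimal sketch (our own): first-order velocity structure function of the
# cold phase, S1(l) = <|v(r + l) - v(r)|>, from cold-cell positions pos (N, 3)
# and velocities vel (N, 3), binned in pair separation. For large N only a
# Monte-Carlo estimate over random pairs is needed.

def vsf1(pos, vel, bins, n_pairs=200_000, rng=np.random.default_rng(0)):
    i = rng.integers(0, len(pos), n_pairs)
    j = rng.integers(0, len(pos), n_pairs)
    keep = i != j
    i, j = i[keep], j[keep]
    l = np.linalg.norm(pos[i] - pos[j], axis=1)    # 3D pair separation
    dv = np.linalg.norm(vel[i] - vel[j], axis=1)   # |velocity difference|
    which = np.digitize(l, bins)
    # Empty bins yield NaN, mirroring the gaps noted in the Fig. 14 caption.
    s1 = np.array([dv[which == k].mean() if np.any(which == k) else np.nan
                   for k in range(1, len(bins))])
    return 0.5 * (bins[1:] + bins[:-1]), s1        # bin centres, S1(l)
```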
4.5 Comparison to previous work

A large body of literature has focused on cold gas-wind interactions. The simplest cases are, as already introduced in § 1, the single 'cloud crushing' studies (Klein et al. 1994; Schneider & Robertson 2017; Scannapieco & Brüggen 2015), which have been extended with radiative cooling to develop a cold gas survival criterion (Gronke & Oh 2018; Li et al. 2020a; Kanjilal et al. 2020) - with which our general criterion is consistent (see § 4.1). Several studies also focused on ensembles of clouds and the associated 'shielding' effects (e.g. Poludnenko et al. 2002; Alūzas et al. 2012). As mentioned in § 4.1, Seidl et al. (2025) specifically performed multi-cloud simulations including radiative cooling, which allowed them to derive a critical volume filling fraction from the overlap of single-cloud effective volumes. Using this method they extract a limiting f̃_v,crit ≈ 0.24 above which they find cold gas survival. This aligns with our findings for large volume filling fractions (probed by their work) but fails to predict survival for f_v ≪ 0.1.

A handful of higher-resolution studies have turned to other setups via the introduction of more complex density fields that mimic ISM gas distributions (e.g. Banda-Barragán et al. 2021; Antipov et al. 2025; Borodina et al. 2025). Specifically, Banda-Barragán et al. (2021) perform wind-tunnel simulations with an ISM initialised from a log-normal gas distribution spanning 10^4–10^6 K. They examine the formation of clumpy, rain-like cold droplets for both compressive and solenoidal density fields. Although their initial conditions yield surviving cloudlets of cold gas at later times, they do not discuss the criterion for survival. Their results show a broad velocity spread for the cold phase, particularly at early times, but do not recover features such as shell formation, since the initial ISM depths remain below χ r_cl (although the expected initial expansion is visible). Survival is revisited in Antipov et al. (2025), who explore a run from Banda-Barragán et al. (2021) and study the survival of cold gas on an individual, clump-by-clump basis by applying a friends-of-friends algorithm to the cold-phase domain of their box. They find on average t_cool,cl ≪ t_cc, explaining the aforementioned formation of a multiphase outflow. However, their study does not explore the broader parameter space or explain the classical destruction regime where cloud-cloud interactions dominate survival and multiphase outflows emerge. Indeed, their analysis is limited to two identical runs of L_ISM ≈ 50 pc, and does not comment on the role of f_v in survival.

Figure 14. First-order structure functions for the cold phase of the wind (T < 10^5 K), shown for three simulations (from left to right panel): (r_cl/r_crit, f_v, L_ISM) = (1, 0.1, 30 r_cl), (1, 0.1, 300 r_cl), and (0.1, 0.1, 300 r_cl). Temporal evolution is indicated by line colour saturation in units of the shear time t_sh = L_ISM/v_w. Due to our resolution, clump pairs are undetected at certain separations, leading to gaps in the VSF where the contribution is 0. (Axes: l_3D = |r_i − r_j| in units of r_cl versus S̄1 = ⟨|v(r + l) − v(r)|⟩/c_s,cold; guide lines l^{1/3} and l^1.)

Multiple studies have also investigated SNe-ISM interactions through vertically stratified disk set-ups.
Simulations consistently show that the efficiency in generating 'hot' outflows depends strongly on the height scale h_SN at which SNe take place (Martizzi et al. 2016; Li et al. 2017; Creasey et al. 2013). Those exploding at high h_SN, and therefore lower column densities, drive hot outflows more efficiently. Recently, Vijayan et al. (2025) systematically studied the effect of the SN scale height on the formation of multiphase outflows. They show that the cold-mass loading factor becomes significant, while the hot phase carries sufficient specific energy to form an outflow, when the equivalent height scale for gas in the disk, h_gas, is comparable to h_SN. Since their simulations use h_gas ∼ 1 kpc, this sweet spot can be explained by the column of ionised gas extending for a few hundred pc above the characteristic h_gas. For our criterion, these set-ups, with an average ISM f_v = 0.1 and L_ISM ≈ 100 pc above h_SN, are well above the survival threshold for multiphase outflows. However, for h_SN ≫ h_gas their work reports that outflows contain effectively no cold gas mass. This makes intuitive sense: if there is no cold gas in a 'wind-tunnel' setup in the first place, no multiphase outflow will emerge.

In the case of h_SN ≳ h_gas, one must be cautious. Our criterion indicates that in principle gas layers extending only a few tens of parsecs above the supernova scale height should still form cold gas in the outflow. However, one must take the numerical resolution into account when interpreting these results. In order to capture the formation and growth of cold gas in the outflow, the critical cloud radius r_crit - which can be significantly smaller than a parsec under typical ISM conditions (cf. Eq. (11)) - needs to be resolved by at least ∼8 grid cells. If this condition is not met, cold gas formation will be artificially suppressed, even in cases where our criterion would otherwise predict its presence. This requirement becomes particularly important when only a small amount of cold gas is present above h_SN, i.e., when existing clumps are intrinsically small: in such situations, under-resolving these clumps (and r_crit) will prevent the condensation and growth of cold structures, leading to an underestimation of the cold mass in the simulated outflow. Additionally, it is worth mentioning that while their study focuses exclusively on the role of L_ISM, we show that f_v is also a key parameter. This distinction is particularly relevant for, e.g., young stellar clusters, where SNe can create low-f_v channels and subsequent explosions can both have large specific energies for the hot phase and form a significant cold gas component.

Multiple stratified disk studies include more complex physics such as self-gravity and chemical networks (Girichidis et al. 2016b; Walch et al. 2015; Kim & Ostriker 2018) and report difficulties launching multiphase winds with supernova feedback alone (Rathjen et al. 2023). It is important to consider that in typical ISM conditions - where the cold gas approximately follows dN/dm ∝ m^-2 (from m_min,ISM to m_max) - the mass fraction of clumps capable of surviving ram-pressure acceleration is

f_res = ln(m_max/m_min,wind) / ln(m_max/m_min,ISM)    (14)

where m_min,wind and m_min,ISM are the minimum clump masses that can be launched into the wind and that are present in the ISM, respectively.
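Equation (14) is cheap to evaluate. The sketch below (ours) reproduces the worked example discussed next, where m_min,ISM ∝ Δx^3 and m_min,wind ∝ (8Δx)^3; since m ∝ r^3, the factors of 3 cancel in the logarithmic ratios, so sizes can be used directly.

```python
import numpy as np

# Minimal sketch (our own): the launchable ISM mass fraction of Eq. (14),
# f_res = ln(m_max/m_min,wind) / ln(m_max/m_min,ISM), for a dN/dm ~ m^-2
# clump spectrum. With m ~ r^3 the common factor of 3 cancels, so the
# ratios can be evaluated with sizes instead of masses.

def f_res(r_max, r_min_wind, r_min_ism):
    return np.log(r_max / r_min_wind) / np.log(r_max / r_min_ism)

# Worked example from the text: r_max ~ 100 pc, dx = 4 pc,
# m_min,ISM ~ dx^3 and m_min,wind ~ (8*dx)^3.
dx = 4.0
print(round(f_res(100.0, 8 * dx, dx), 2))   # ~0.35
```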
Ideally, m_min,wind should be set by r_crit. However, most large-scale simulations (e.g., stratified disk models) employ resolutions of Δx ∼ a few parsec, leaving r_crit unresolved. In this regime, m_min,ISM is determined by the cell size: m_min,ISM ∝ Δx^3. Crucially, clumps must span several cells per dimension to survive and grow via mixing in the wind. We therefore adopt m_min,wind ∝ (8Δx)^3 to ensure adequate numerical resolution. This resolution requirement has significant consequences. Adopting an upper cutoff of ∼100 pc for the maximum clump size and using Δx = 4 pc (as in Walch et al. 2015; Girichidis et al. 2016b), only f_res ≈ 0.35 of the simulated ISM mass could be launched into the wind via ram-pressure acceleration. For gas scale heights of h_gas ≈ 100 pc (Walch et al. 2015; Kim & Ostriker 2017), the layers of cold gas above the supernova scale height can extend only a fraction of h_gas, i.e. ∼20 pc. Combining these results, we obtain an effective cold-gas depth of f_v (L_ISM f_res) ≈ 0.1 × 0.3 × 20 pc = 0.6 pc < r_crit for the average density values in the ISM, which therefore does not satisfy our criterion for the formation of multiphase outflows. In addition, it is also worth noting that f_res L_ISM ≲ Δx for most stratified disk studies, and so the 'effective' cold gas is not well resolved and will not launch multiphase outflows. In other words, while supernovae detonating too deep in the disk fail to launch winds because they must propagate through too much cold gas before reaching the surface, those occurring under more favourable conditions (∼h_gas) may still not produce multiphase outflows if the amount of cold gas above the explosion site is insufficient - or insufficiently resolved - to seed the growth of a cold phase in the wind. Higher-resolution simulations - which resolve r_crit well - will shed light on this issue.

Similar arguments apply in principle to isolated disk and larger-scale simulations (Tan & Fielding 2024; Smith et al. 2021), but two additional complications arise. First, these simulations often employ adaptive mesh refinement, making it unclear whether the resolution criteria discussed above remain valid. Second, the impact of specific sink-particle and supernova-injection schemes on outflow properties remains an open question (Kim & Ostriker 2018), particularly in simulations including self-gravity. Notably, high supernova rates combined with clustered star formation can efficiently drive multiphase winds even when h_SN ≪ h_gas (e.g. Schneider & Mao 2024).

Finally, although our work only partially explores the role of higher-energy winds, the AGN community has extensively studied the effect of powerful outflows in clumpy ISMs. These winds are characterised by large overdensities and wind velocities (Bourne & Sijacki 2017; Costa et al. 2014). Ward et al. (2024) show that introducing clumpiness in AGN winds can alter the coupling between hot and cold phases. In our framework, multiphase ISMs naturally lead to the formation of gas shells (see § 4.3). This shell structure has important implications for understanding AGN outflow driving mechanisms: for example, a momentum-driven wind can efficiently couple to the cold gas shell through PdV work, producing observational signatures that mimic energy-driven outflows. A complementary study by Borodina et al. (2025) investigates jet propagation through multiphase ISMs and finds that the orientation and jet properties outside the disk depend on the intrinsic AGN power.
At low luminosities, cold outflows are rarely produced, whereas at intermediate luminosities of order L ∼ 10^40 erg s^-1 the outflow direction can be strongly altered depending on inclination and ISM depth. Although neither study reaches resolutions down to r_crit, our results demonstrate that the survival criterion applies for χ ∼ 1000 and v_w ∼ 700 km s^-1 (see § 4.2), and that it can account for substantial differences in the observed phase structure and energetics of AGN-driven outflows, where the energy budget is usually larger than for SNe-driven feedback.

4.6 Caveats

• Set-up and resolution. Our simulations resolve individual clouds by ≳8 cells per radius, which is sufficient to capture growth and survival (Gronke & Oh 2020); the associated computational cost restricts us to f_v ≥ 10^-3. We carry out an alternative analysis indicating that survival should extend down to f_v = 10^-5 (cf. Appendix C), although dedicated simulations are still required to confirm this. The periodic boundary condition in the direction perpendicular to the wind is designed to mimic a larger ISM patch. Tests with varying widths (see Appendix B) show the results are converged.

• Magnetic fields. Initialising magnetic fields is non-trivial and adds computational cost. Single-cloud studies (McCourt et al. 2015; Dursi & Pfrommer 2008; Gronke & Oh 2020) show that magnetic fields alter the picture of entrainment and morphology for single clouds. In particular, Hidalgo-Pineda et al. (2024) demonstrated that near-equipartition magnetic fields can boost survival by two orders of magnitude (for χ ∼ 100), shifting the effective threshold in f_v L_ISM down to sub-parsec cloud sizes. This process - and how it connects to a more realistic ISM morphology - remains incompletely understood, and further follow-up work is required.

• Outflow energetics and geometry. We explore only part of the parameter space spanned by wind velocities, temperatures, and density contrasts in the literature. While our criterion successfully predicts cloud survival in winds with velocities up to 700 km s^-1 (M_w = 1.5) and contrasts of χ ∼ 10^3, these Mach numbers and overdensities remain slightly below the typical conditions for SNe-ISM interactions, and well below the ones expected for AGN-ISM interactions (Bourne & Sijacki 2017; Costa et al. 2014). Extending the analysis to these regimes will be an important direction for future work. As discussed in § 4.3, we do not consider changes in wind properties, e.g., due to adiabatic expansion. This will change the morphology and mass transfer rates at larger distances (Gronke & Oh 2020; Dutta et al. 2025). However, the core conclusions of this study relate to the initial launching and are thus not affected by this.

• Thermal conduction and viscosity. While single-cloud studies have shown that thermal conduction and viscosity can alter the cold gas morphology and dynamics (Brüggen et al. 2023), they have an overall small effect on the mass transfer rate between the phases, and thus on the survival criterion of cold gas. This is because turbulent diffusion dominates over thermal conduction (Tan et al. 2021; Fielding & Bryan 2022), and rapid cooling sharpens the density (and velocity) interface, thus counteracting the effects of viscosity (Marin-Gilabert et al. 2025).

5 CONCLUSIONS

Galactic outflows are inherently multiphase: they regulate the cold gas content of galaxies, enrich the surrounding medium with ∼10^4 K gas and metals, and can suppress or delay star formation.
Yet, current theoretical models provide no consistent explanation for their origin, structure, and composition. In particular, the scarcity of cold gas in simulated outflows remains at odds with the observed multiphase character of galactic winds. Most attempts to address this tension have relied on single-cloud simulations, an idealised configuration that does not capture the complexity of a realistic ISM.

In this work, we performed 3D hydrodynamic simulations of hot outflows interacting with a multicloud ISM, characterised by its cold-gas volume filling fraction f_v, depth L_ISM, and typical clump size r_cl. This framework enabled us to identify the parameter regimes that naturally lead to multiphase outflows, and to assess their relevance in scale-free ISMs, which better approximate real galactic environments. Our main findings are as follows:

(i) Universal multiphase outflow criterion: Cold clumps grow for ISMs satisfying the criterion f_v L_ISM ≥ r_crit, where r_crit corresponds to the single-cloud size threshold for survival (Gronke & Oh 2018; Li et al. 2020a; Kanjilal et al. 2020), f_v is the cold gas volume filling fraction, and L_ISM the distance the hot wind has to travel through the ISM. This survival criterion can be rephrased in terms of a critical column density N_crit ≳ 10^18 χ_100 cm^-2 required for survival, analogous to the single-cloud criterion. This generalised survival criterion can explain the survival of cold gas clouds present, e.g., in a fractal-like ISM morphology, which would fall in the 'destruction regime' (t_cool,mix < t_cc) if considered individually. We show that this criterion holds two orders of magnitude below the classical threshold, i.e., for r_cl/r_crit ≲ 10^-2, and only breaks down at f_v ≲ 10^-5, where intercloud distances overtake the interaction radius d_int ∼ 4χ^{1/2} r_cl.

(ii) Self-similar cold gas morphology: Independent of the initial ISM structure, the cold phase rapidly converges toward a scale-free mass distribution following Zipf's law, dN/dm ∝ m^-2. This behaviour emerges across all simulations and persists over time, effectively erasing memory of the initial morphology and suggesting that turbulent mixing and radiative condensation drive multiphase gas toward a universal self-similar state. As a result, the cold clump mass spectrum, rather than showing imprints of the original ISM geometry, becomes a fundamental distribution of multiphase outflows.

(iii) Compression and cold shells: Depending on the initial conditions, the ISM depth can be compressed or expanded by the hot wind, with L_ISM ≲ χ r_cl leading to expansion. Surviving outflows concentrate their cold material into shells or plumes of size d_cold = χ r_cl,min along the direction of the wind, which then slowly grow as hot gas continues to be accreted.

(iv) Flow and turbulence: As the T ∼ 10^6 K wind crosses the T ∼ 10^4 K ISM, turbulence is driven in both phases, peaking at the ISM crossing time t_sh. The cold and hot phases are kinematically coupled, both in bulk flow and turbulence, with the cold phase reaching σ/c_s,cold ≈ 1. First-order velocity structure functions show that the emergent turbulence is compatible with a Kolmogorov turbulent cascade, and the injection scales are set by L_ISM.

(v) Evolving outflow properties: The growth of multicloud outflows is governed by the Kelvin-Helmholtz instability timescale of the largest clumps, m/ṁ ∼ χ (t_kh,max t_cool,mix)^{1/2}. Despite ISM compression, both f_v and the areal covering fraction f_A increase in time.
f_A rapidly approaches unity even for f_v = 10^-3. Although f_v also grows, it remains ≪ 1, consistent with observations.

Taken together, these results establish a framework that connects idealised single-cloud studies with stratified disk simulations, extending the analytical survival criterion to more complex ISM morphologies. While further work is required to test the robustness of this criterion under additional physical processes and a larger parameter space, our findings already provide a pathway for comparison with both larger-scale numerical simulations and observations of multiphase galactic outflows.

ACKNOWLEDGEMENTS

FHP thanks Martin Fournier for help with VSF functions. FHP also thanks Tiago Costa, Aditi Vijayan and Benedikt Seidl for insightful discussions. MG thanks the Max Planck Society for support through the Max Planck Research Group, and the European Union for support through ERC-2024-STG 101165038 (ReMMU). P.G. acknowledges funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 555983577.

REFERENCES

Abruzzo M. W., Bryan G. L., Fielding D. B., 2022, ApJ, 925, 199
Alūzas R., Pittard J. M., Hartquist T. W., Falle S. A. E. G., Langton R., 2012, MNRAS, 425, 2212
Antipov A., Banda-Barragán W. E., Birnboim Y., Federrath C., Gnat O., Brüggen M., 2025, MNRAS, 540, 3798
Armillotta L., Fraternali F., Werk J., Prochaska J. X., Marinacci F., 2017, MNRAS, 470, 114
Banda-Barragán W. E., Brüggen M., Heesen V., Scannapieco E., Cottle J., Federrath C., Wagner A. Y., 2021, MNRAS, 506, 5658
Barger K., Lehner N., Howk J. C., 2016, ApJ, 817, 91
Beattie J. R., Noer Kolborg A., Ramirez-Ruiz E., Federrath C., 2025, arXiv e-prints, p.
Borodina O., et al., 2025, ApJ, 981, 149
Bourne M. A., Sijacki D., 2017, MNRAS, 472, 4707
Brüggen M., Scannapieco E., Grete P., 2023, The Astrophysical Journal, 951, 113
Chen H.-W., et al., 2023, ApJ Letters, 955, L25
Cooper J. L., Bicknell G. V., Sutherland R. S., Bland-Hawthorn J., 2009, ApJ, 703, 330
Costa T., Sijacki D., Haehnelt M. G., 2014, MNRAS, 444, 2355
Creasey P., Theuns T., Bower R. G., 2013, MNRAS, 429, 1922
Das H. K., Gronke M., 2024, MNRAS, 527, 991
Dursi L., Pfrommer C., 2008, ApJ, 677, 993
Dutta A., Sharma P., 2019, Research Notes of the AAS, 3, 148
Dutta A., Sharma P., Gronke M., 2025, arXiv e-prints, p.
Elmegreen B. G., 1997, ApJ, 477, 196
Elmegreen B. G., Scalo J., 2004a, ARA&A, 42, 211
Elmegreen B. G., Scalo J., 2004b, ARA&A, 42, 211
Farber R., Ruszkowski M., Yang H.-Y., Zweibel E., 2018, ApJ, 856, 112
Faucher-Giguère C.-A., Quataert E., 2012, MNRAS, 425, 605
Federrath C., 2016, MNRAS, 457, 375
Federrath C., Klessen R. S., Schmidt W., 2009, ApJ, 692, 364
Fielding D. B., Bryan G. L., 2022, ApJ, 924, 82
Fielding D. B., Ostriker E. C., Bryan G. L., Jermyn A. S., 2020, ApJ, 894, L24
Fisher D. B., et al., 2024, arXiv preprint
Fisher D. B., et al., 2025, MNRAS, 538, 3068
Forbes J. C., Lin D. N., 2019, The Astronomical Journal, 158, 124
Fournier M., Grete P., Brüggen M., Glines F. W., O'Shea B. W., 2024, A&A, 691, A239
Ghosh R., Dutta A., Sharma P., 2024, MNRAS, 531, 3445
Girichidis P., et al., 2016a, MNRAS, 456, 3432
Girichidis P., et al., 2016b, MNRAS, 456, 3432
Gnat O., Sternberg A., 2007, ApJ Supplement Series, 168, 213
Grete P., et al., 2023, The International Journal of High Performance Computing Applications, 37, 465
Grete P., Scannapieco E., Brüggen M., Pan L., 2025, ApJ, 987, 122
Gronke M., Oh S. P., 2018, MNRAS, 480, L111
Gronke M., Oh S. P., 2020, MNRAS, 492, 1970
Gronke M., Oh S. P., Ji S., Norman C., 2022, MNRAS, 511, 859
Grønnow A., Tepper-García T., Bland-Hawthorn J., 2018, ApJ, 865, 64
Groves B., et al., 2023, MNRAS, 520, 4902
Ha T., et al., 2025, ApJ, 986, 87
Heckman T. M., Armus L., Miley G. K., 1990, ApJS, 74, 833
Hidalgo-Pineda F., Farber R. J., Gronke M., 2024, MNRAS, 527, 135
Hu H., Qiu Y., Gendron-Marsolais M.-L., Bogdanović T., Hlavacek-Larrondo J., Ho L. C., Inayoshi K., McNamara B. R., 2022, ApJ Letters, 929, L30
Ilyasi B., Neelamkodan N., Tokuda K., Barman S., Sewiło M., Sano H., Onishi T., 2025, The Astrophysical Journal, 984, 85
Kanjilal V., Dutta A., Sharma P., 2020, MNRAS, 501, 1143
Kim C.-G., Ostriker E. C., 2017, ApJ, 846, 133
Kim C.-G., Ostriker E. C., 2018, ApJ, 853, 173
Kim C.-G., Kim J.-G., Gong M., Ostriker E. C., 2023, ApJ, 946, 3
Klein R. I., McKee C. F., Colella P., 1994, ApJ, 420, 213
Li Y., Bryan G. L., 2014, ApJ, 789, 153
Li Y., Bryan G. L., Ruszkowski M., Voit G. M., O'Shea B. W., Donahue M., 2015, ApJ, 811, 73
Li M., Bryan G. L., Ostriker J. P., 2017, ApJ, 841, 101
Li Z., Hopkins P. F., Squire J., Hummels C., 2020a, MNRAS, 492, 1841
Li Y., et al., 2020b, ApJ, 889, L1
Lopez L. A., Mathur S., Nguyen D. D., Thompson T. A., Olivier G. M., 2020, ApJ, 904, 152
Lopez S., Lopez L. A., Thompson T. A., Leroy A. K., Bolatto A. D., 2025, arXiv preprint
Marin-Gilabert T., Gronke M., Oh S. P., 2025, arXiv preprint
Martizzi D., Fielding D., Faucher-Giguère C.-A., Quataert E., 2016, MNRAS, 459, 2311
McCourt M., O'Leary R. M., Madigan A.-M., Quataert E., 2015, MNRAS, 449, 2
Naab T., Ostriker J. P., 2017, Annual Review of Astronomy and Astrophysics, 55, 59
Nelson D., et al., 2019, MNRAS, 490, 3234
Péroux C., Rahmani H., Arrigoni Battaia F., Augustin R., 2018, MNRAS Letters, 479, L50
Pittard J. M., Dyson J., Falle S., Hartquist T., 2005, MNRAS, 361, 1077
Poludnenko A. Y., Frank A., Blackman E. G., 2002, ApJ, 576, 832
Rathjen T.-E., Naab T., Walch S., Seifried D., Girichidis P., Wünsch R., 2023, MNRAS, 522, 1843
Rivera-Thorsen T. E., et al., 2015, ApJ, 805, 14
Rubin K. H., Prochaska J. X., Koo D. C., Phillips A. C., Martin C. L., Winstrom L. O., 2014, ApJ, 794, 156
Rupke D., 2018, Galaxies, 6, 138
Rupke D. S., et al., 2019, Nature, 574, 643
Scannapieco E., Brüggen M., 2015, ApJ, 805, 158
Schneider E. E., Mao S. A., 2024, ApJ, 966, 37
Schneider E. E., Robertson B. E., 2017, Hydrodynamical coupling of mass and momentum in multiphase galactic winds
Seidl B. S., Gronke M., Farber R. J., Dolag K., 2025, arXiv e-prints, p.
Shin M.-S., Stone J. M., Snyder G. F., 2008, ApJ, 680, 336
Smith M. C., Bryan G. L., Somerville R. S., Hu C.-Y., Teyssier R., Burkhart B., Hernquist L., 2021, MNRAS, 506, 3882
Somerville R. S., Davé R., 2015, Annual Review of Astronomy and Astrophysics, 53, 51
Sparre M., Pfrommer C., Ehlert K., 2020, MNRAS, 499, 4261
Tan B., Fielding D. B., 2024, MNRAS, 527, 9683
Tan B., Oh S. P., Gronke M., 2021, MNRAS, 502, 3179
Thompson T. A., Heckman T. M., 2024, Annual Review of Astronomy and Astrophysics, 62
Townsend R., 2009, ApJ Supplement Series, 181, 391
Trott C. R., et al., 2022, IEEE Transactions on Parallel and Distributed Systems, 33, 805
Veilleux S., Cecil G., Bland-Hawthorn J., 2005, Annu. Rev. Astron. Astrophys., 43, 769
Veilleux S., Maiolino R., Bolatto A. D., Aalto S., 2020, The Astronomy and Astrophysics Review, 28
Veilleux S., et al., 2022, ApJ, 926, 60
Vijayan A., Krumholz M. R., Wibking B. D., 2025, MNRAS, 539, 1706
Vogelsberger M., et al., 2014, MNRAS, 444, 1518
Walch S., et al., 2015, MNRAS, 454, 238
Ward S. R., Costa T., Harrison C. M., Mainieri V., 2024, MNRAS, 533, 1733
Xu X., et al., 2022, ApJ, 933, 222
Xu X., et al., 2023, ApJ, 948, 28
Yao Z., Mandelker N., Oh S. P., Aung H., Dekel A., 2025, MNRAS, 536, 3053
Zhang D., Thompson T. A., Quataert E., Murray N., 2017, MNRAS, 468, 4801
von Hoerner S., 1951, Zeitschrift für Astrophysik, 30, 17

APPENDIX A: ISM GENERATION

In § 2 we describe the formalism to create a binary ISM. Figure A1 shows how this distribution peaks at a characteristic cloud mass scale. Lower volumetric fractions preferentially reduce the number of high-mass clouds, leading to a slight displacement of the peak by a factor of a few. As we study variations in cloud sizes over orders of magnitude, this has negligible impact on the survival probability. This potential factor-of-a-few effect is in turn captured by the order-of-magnitude fudge factor in Eq. (10), where α accounts for geometric effects.

Similarly, in § 2, we also create ISMs of varying clump sizes following the mass density profile dN/dm ∝ m^-2 to mimic observations of "scale-free" ISMs (a sampling sketch is given below, after Appendix B). We show the density profile distribution of the scale-free example of § 4.2 in Figure A2, corresponding to the black solid line in Figure 12. We use an overdensity of χ = 100 and initial ISM parameters (f_v, L_ISM/r_crit, r_cl,min/r_crit) = (0.1, 30, 0.1). The initial clump size distribution spans over an order of magnitude above r_cl,min, whereas the slope of the distribution roughly matches the yellow band, representing the expected Zipf's law, where the PDF is proportional to m^-2.

Figure A1. Clump size distribution for the ISM modelling in § 2, shown for different volume filling fractions (f_v = 10^-1 to 10^-4) for a cubic box of side 10^3 cells. The x axis represents the measured clump radius as a fraction of the expected input clump size, r/r_input.

Figure A2. Cumulative distribution function of clump masses, N(≥V) versus r [r_cl], for the scale-free ISM example with overdensity χ = 100 and initial parameters (f_v, L_ISM/r_crit, r_cl,min/r_crit) = (0.1, 30, 0.1). The clump size distribution spans over an order of magnitude above r_cl,min, with a slope roughly matching Zipf's law (yellow band, r^-4 ± 0.3 dex), where the PDF follows dN/dm ∝ m^-2.

APPENDIX B: MASS CONVERGENCE

In order to capture the extended nature of the ISM in the direction perpendicular to the wind, we impose periodic boundary conditions for the x and z axes. We show that for our two fiducial resolution cases, L⊥,box/d_cell = 32 and 16, the mass growth is well converged, by comparing the evolution of two runs with (r_cl/r_crit, f_v, L_ISM/r_cl) = (1, 0.1, 30). Notice that we generate separate initial conditions from identical ISM parameters (r_cl/r_crit, f_v, L_ISM/r_cl). Even for this case, the mass evolution is close to identical, only showing deviations of a factor of a few around 15 t_cc.

Figure B1. Mass growth, log(m/m_0) versus t [t_cc], for two simulations with (r_cl/r_crit, f_v, L_ISM/r_cl) = (1, 0.1, 30) for different mesh lengths in the direction perpendicular to the direction of propagation of the wind (L⊥,box = 32 r_cl and 16 r_cl). In black, we use a box of L⊥ = 128 cells. In purple, our fiducial simulation domain with L⊥ = 256.
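As referenced in Appendix A, clump masses for the scale-free ISM realisations follow dN/dm ∝ m^-2. A minimal inverse-transform sampling sketch (ours, not the generation code used in the paper):

```python
import numpy as np

# Minimal sketch (our own): draw clump masses from dN/dm ~ m^-2 between
# m_min and m_max by inverse-transform sampling. For a power-law index
# alpha = 2 the CDF inverts to m = m_min / (1 - u * (1 - m_min/m_max)).

def sample_clump_masses(n, m_min, m_max, rng=np.random.default_rng(0)):
    u = rng.random(n)
    return m_min / (1.0 - u * (1.0 - m_min / m_max))

m = sample_clump_masses(10_000, m_min=1.0, m_max=1e3)
# Sanity check: dN/dm ~ m^-2 implies equal mass per logarithmic mass bin,
# so a histogram of m * dN/dln(m) should be roughly flat.
```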
APPENDIX C: LIMITATIONS OF THE MULTIPHASE SURVIVAL CRITERION

As demonstrated in § 4.1, inter-cloud distances of 4χ^{1/2} r_cl mark the limit for survival in multicloud set-ups. Clouds separated by distances larger than this limit cannot interact and do not exhibit survival for r_cl < r_crit. We can associate this separation with a volume filling fraction of the cold phase: since a roughly isotropic and homogeneous ISM should have f_v = N r_cl^3/d_cl^3 (with N the number of clouds and d_cl the cloud separation), the intercloud separation follows as d_cl/r_cl ≈ f_v^{-1/3}, which in turn shows that the limiting radius of influence, 4χ^{1/2} r_cl, corresponds to a volume filling fraction of ≈10^-5 (for χ = 100, (4χ^{1/2})^{-3} = 40^{-3} ≈ 1.6 × 10^-5).

The bottom panel in Figure C1 shows the intercloud separation averaged for the 6 k-nearest neighbours using 125 r_cl^3 ISM realisations. The isotropic cloud approximation accurately predicts d_cl, only deviating by a factor of ∼a few for volume filling fractions above 10^-2. This is potentially an artifact of the k-d tree and CCL algorithms (see the methods of § 3.3.1), as structures overlap more at higher volume filling fractions. The prediction for the dependence of the cloud separation on f_v agrees well with the empirically found separation of clumps in the ISM. The dashed grey line indicates the predicted critical separation (and therefore volume filling fraction) for surviving clouds.

Figure C1. Average intercloud separation, d_sep [r_cl], as a function of volume filling fraction f_v for the cold phase. The thick dashed line shows the expected relation, d ∝ f_v^{-1/3}, for a homogeneous ISM. In dotted light grey, the critical volume filling fraction for intercloud distances larger than 4χ^{1/2} r_cl.

This paper has been typeset from a TEX/LATEX file prepared by the author.
arXiv:2510.14831v1 [cs.CV] 16 Oct 2025

Scaling Tumor Segmentation: Best Lessons from Real and Synthetic Data

Qi Chen1,2 Xinze Zhou1 Chen Liu1,3 Hao Chen4 Wenxuan Li1 Zekun Jiang5 Ziyan Huang6,7 Yuxuan Zhao8 Dexin Yu8 Junjun He7 Yefeng Zheng9 Ling Shao2 Alan Yuille1 Zongwei Zhou1,*

1Johns Hopkins University 2UCAS-Terminus AI Lab, University of Chinese Academy of Sciences 3Hong Kong Polytechnic University 4University of Cambridge 5Sichuan University 6Shanghai Jiao Tong University 7Shanghai AI Laboratory 8Qilu Hospital of Shandong University 9Westlake University

Code, Model & Data: https://github.com/BodyMaps/AndomenAtlas2.0

*Correspondence to Zongwei Zhou (ZZHOU82@JH.EDU)

Abstract

AI for tumor segmentation is limited by the lack of large, voxel-wise annotated datasets, which are hard to create and require medical experts. In our proprietary JHH dataset of 3,000 annotated pancreatic tumor scans, we found that AI performance stopped improving after 1,500 scans. With synthetic data, we reached the same performance using only 500 real scans. This finding suggests that synthetic data can steepen data scaling laws, enabling more efficient model training than real data alone. Motivated by these lessons, we created AbdomenAtlas 2.0—a dataset of 10,135 CT scans with a total of 15,130 tumor instances per-voxel manually annotated in six organs (pancreas, liver, kidney, colon, esophagus, and uterus) and 5,893 control scans. Annotated by 23 expert radiologists, it is several orders of magnitude larger than existing public tumor datasets. While we continue expanding the dataset, the current version of AbdomenAtlas 2.0 already provides a strong foundation—based on lessons from the JHH dataset—for training AI to segment tumors in six organs. It achieves notable improvements over public datasets, with a +7% DSC gain on in-distribution tests and +16% on out-of-distribution tests.

1. Introduction

Developing AI models for tumor segmentation is fundamentally challenged by the scarcity of large, annotated datasets—owing to the immense time and expertise required for per-voxel annotation [70, 103, 107]. Inspired by scaling laws [20, 51, 78, 85], we first leveraged a proprietary dataset of 3,000 pancreatic tumor scans, per-voxel annotated over five years by expert radiologists and verified by pathology reports, to estimate the impact of data scale on tumor segmentation performance. Our previous work [61, 96] showed that this dataset enabled AI to reach radiologist-level detection accuracy. However, as shown in Figure 1, performance gains plateaued after 1,500 scans, suggesting diminishing returns from adding more real data. Recognizing that annotating 1,500 scans is still a considerable undertaking for a single tumor type, we explored the potential of synthetic data [13, 28, 40, 58, 63, 73] to further advance this plateau. By adding synthetic tumors—three times the number of real tumors—we achieved similar or better performance with only 500 real tumor scans.

Figure 1. Data scaling laws study. Experimental results on the proprietary dataset demonstrate that increasing the scale of real data improves the segmentation (gray curve). Notably, supplementing the dataset with an additional 3× synthetic data (red curve) can further enhance the results, revealing the potential of a larger public dataset to advance tumor research.
This reduces annotation needs by a large margin and shows that synthetic data can accelerate learning, effectively steepening the scaling curve more than real data alone.

Figure 2. Overview of the AbdomenAtlas 2.0 dataset. For each CT scan, AbdomenAtlas 2.0 provides precise and high-quality annotations following a well-designed AI-driven annotation pipeline. Compared to existing datasets, AbdomenAtlas 2.0 collects large-scale CT scans from diverse clinical sources, encompassing a wide range of tumor types (i.e., liver, pancreas, kidney, colon, esophageal, and uterine tumors) and comprehensive tumor sizes. This extensive scale makes it the largest human-annotated tumor mask dataset.

The lesson on the proprietary dataset helps estimate how many annotated tumor scans are needed to train effective AI models, e.g., matching radiologist performance. Considering that pancreatic tumors are especially hard to detect on CT, with 80% detected only at late stages [38, 62], we hypothesize that if 1,500 real scans—or 500 with synthetic data—are enough for pancreatic tumors, the same or fewer might work for other organs. Based on this idea, our first contribution is to create a publicly available dataset with 500–1,500 per-voxel annotated CT scans for tumors in six organs: pancreas, liver, kidney, colon, esophagus, and uterus. This is also the first public dataset that offers per-voxel annotations for esophageal and uterine tumors. We name this six-tumor dataset AbdomenAtlas 2.0, which comprises 4,242 CT scans with per-voxel annotations of 15,130 benign/malignant tumor instances and 5,893 normal scans as control (§3). Importantly, it includes many early-stage tumors (<20 mm): 5,709 in liver, 850 in pancreas, 4,638 in kidney, 29 in colon, 17 in esophagus, and 39 in uterus—rare and hard to collect.

While AbdomenAtlas 2.0 is much larger than public tumor datasets combined [6, 35, 49, 70], the 500–1,500 scans per tumor type are still insufficient for building robust AI across diverse data sources. This limitation is clear in our data-scaling analysis (Figure 1), where performance plateaued only on in-distribution tests. For out-of-distribution data—CT scans from different centers—performance kept improving up to 3,000 scans, suggesting that broader diversity is critical for generalization. However, scaling to that level is costly: annotating just 500–1,500 scans per tumor type required 23 radiologists and several months of effort. Selecting the most valuable scans to annotate is also challenging, since out-of-distribution data are unknown in advance.

To address this, our second contribution is to scale data and annotations through DiffTumor to produce different types of tumors (Figure 5). The data-scaling analysis (Figure 1) suggested that training AI on synthetic tumors can significantly enhance in-distribution test performance. More importantly, since collecting normal scans is much easier than acquiring and annotating tumor scans, synthetic tumors can be added to normal scans from a range of out-of-distribution sources, bypassing the need for manual per-voxel annotation. These synthetic tumors are automatically paired with per-voxel annotations, as they are generated with their masks. Training AI on these normal scans augmented by synthetic tumors can greatly improve performance in out-of-distribution tests (Figure 7).

In summary, we bring data-scaling lessons from both real and synthetic data on a large proprietary dataset to develop AbdomenAtlas 2.0, achieving two key advancements for six-tumor segmentation:
1. Scaling real and synthetic data enhances performance in abdominal tumor segmentation. We rank first in the MSD challenge, with a substantial performance improvement. We also achieve the highest performance on the validation sets of our AbdomenAtlas 2.0 dataset, improving DSC scores by +5%, +9%, +3%, +4%, +7%, and +2% for segmenting tumors in the liver, pancreas, kidney, colon, esophagus, and uterus, respectively, compared to the runner-up algorithms (§3.3, Tables 2–3).

2. Scaling real and synthetic data enhances generalizable performance in abdominal tumor segmentation without additional tuning and adaptation. AbdomenAtlas 2.0 significantly outperforms the runner-up algorithms by +14% DSC on four external datasets (§3.3, Table 4).

Dataset | release | # scans | # slices (K) | # tumors | tumor in | # hospitals | countries‡ | annotators
LiTS [6] [link] | 2019 | 131 | 58.6 | 853 | liver | 7 | E, NL, CA, FR, IL | human
MSD-Colon [2] [link] | 2021 | 126 | 13.5 | 131 | colon | 1 | US | human & AI
MSD-Pancreas [2] [link] | 2021 | 281 | 26.7 | 283 | pancreas | 1 | US | human & AI
FLARE23 [2] [link] | 2022 | 2,200 | 629.1 | 1,511 | unknown† | 30 | N/A | human & AI
KiTS [36] [link] | 2023 | 489 | 250.9 | 568 | kidney | 1 | US | human
ULS-Liver [18] [link] | 2023 | 49 | 6.3 | 49 | liver | 1 | - | human
ULS-Pancreas [18] [link] | 2023 | 120 | 15.4 | 120 | pancreas | 1 | NI | human
ULS-Kidney [18] [link] | 2023 | 50 | 6.4 | 50 | kidney | 1 | N/A | human
AbdomenAtlas 2.0 (ours) | 2025 | 10,135 | 4,700 | 15,130 | liver, pancreas, kidneys, colon, esophagus, uterus | 89 | MT, IE, BR, BA, AUS, TH, CA, TR, CL, ES, MA, US, DE, NL, FR, IL, CN | human

† Tumors labeled in the FLARE23 dataset fall under a general 'Tumor' category without specific tumor type information.
‡ US: United States, DE: Germany, NL: Netherlands, CA: Canada, FR: France, IL: Israel, IE: Ireland, BR: Brazil, BA: Bosnia and Herzegowina, CN: China, TR: Turkey, CH: Switzerland, AUS: Australia, TH: Thailand, CL: Chile, ES: Spain, MA: Morocco, and MT: Malta.

Table 1. Dataset comparison. We compare AbdomenAtlas 2.0 against existing abdominal tumor segmentation datasets, including those with and without tumor labels. AbdomenAtlas 2.0 outperforms these datasets in terms of scale and diversity.

2. Related Work

Large-scale Annotated Tumor Datasets are scarce due to the limited availability of scan data and the substantial costs of obtaining per-voxel annotations. Despite these hurdles, datasets such as DeepLesion [99], AutoPET [23], PANORAMA [1], FLARE [70], and MSD [3] serve as significant efforts to mitigate this limitation. A detailed comparison of related datasets is provided in Table 1. AbdomenAtlas 2.0 comprises more than 10,000 CT scans with voxel-level annotations across six abdominal tumors. Notably, AbdomenAtlas 2.0 features esophageal and uterine tumor scans, which have not been previously available in public datasets.

Neural Scaling Laws establish the power-law relationships that correlate model performance with key scaling factors such as model size, dataset volume, and computational resources. They were initially discovered in the domain of language models, as highlighted by Kaplan et al. [51], and have since also been observed in generative visual modeling [37, 80] and multi-modality modeling [47]. This trend of scaling underpins the recent achievements of foundation models [78, 85], emphasizing how scaling up systematically boosts model generalization and effectiveness across various tasks.
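To illustrate the scaling-law analysis referenced here and applied in §4, below is a minimal sketch (ours): fitting the standard power-law-plus-floor form to segmentation error versus dataset size. The data points are hypothetical placeholders in the spirit of Figure 1, not measurements from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative sketch (ours, not the paper's analysis): fit the power-law
# form suggested by neural scaling laws, error(N) ~ a * N^(-b) + c, to
# segmentation error (1 - DSC) versus number of annotated scans N, to
# locate where adding more real data stops paying off.

def scaling_law(N, a, b, c):
    return a * N**(-b) + c   # c = irreducible error floor (plateau)

# Hypothetical points (N scans, mean DSC); placeholders only.
N   = np.array([250, 500, 1000, 1500, 2000, 3000])
dsc = np.array([0.52, 0.58, 0.63, 0.655, 0.66, 0.662])

(a, b, c), _ = curve_fit(scaling_law, N, 1.0 - dsc, p0=(100.0, 1.0, 0.3))
print(b, c)   # decay rate, and the plateau (1 - best attainable DSC)
```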
However, for tumor analysis and synthetic data, scaling laws remain underexplored due to the limited availability of annotated tumor data. Leveraging our new, large-scale tumor dataset, we investigate whether similar data scaling laws exist in tumor segmentation and whether appropriate data scaling can yield a robust segmentation model capable of generalizing to detect and segment tumors from CT scans, encompassing a broad spectrum of patient demographics, imaging protocols, and healthcare facilities.

3. AbdomenAtlas 2.0

3.1. Dataset Construction

Accurate annotations are the foundation of high-quality medical datasets. However, conventional per-voxel labeling is labor-intensive. Annotating each scan typically takes 4–5 minutes, while extensive tumors may take up to 40 minutes [7, 70]. In addition, precisely delineating tumor boundaries takes substantial time and requires the specialized expertise of highly trained radiologists, making it impractical to scale annotations to datasets with 10,000 or more scans. To address this bottleneck, we establish a semi-automated annotation pipeline for CT scans that significantly reduces the manual workload and requires only minimal revision time from radiologists.

SMART-Annotator Procedure. Annotating missed tumors from scratch takes much longer than removing AI-generated false positives. Therefore, our annotation pipeline is designed to prioritize minimizing under-segmentation errors, thereby reducing the typical annotation time from 5 minutes per scan to less than 5 seconds on average, while maintaining high accuracy. The proposed pipeline, named SMART-Annotator, stands for Segmentation Model-Assisted Rapid Tumor Annotator. As depicted in Figure 3, it consists of the following four key stages:

Figure 3. Overview of the SMART-Annotator. Towards annotating a large-scale tumor dataset, developing our SMART-Annotator involves four stages. ① Training a Segmentation Model using public datasets to provide tumor segmentation logits across AbdomenAtlas 2.0. ② Analyzing the FROC curve and selecting a threshold that enhances sensitivity to minimize missed tumors while maintaining an acceptable specificity score. ③ Removing false positives from the adjusted predictions by senior radiologists. ④ Revising the final annotations to get ground truth by junior radiologists.

Stage 1: Model Preparation. For each tumor, we separately train a Segmentation Model (denoted as f(·)) using publicly available datasets. The tumor-specific f(·) is optimized for tumor segmentation and detection tasks.

Stage 2: FROC Curve Analysis. To determine the optimal threshold, we construct the Free-response ROC (FROC) curve by equipping f(·, θ) with a set of threshold values θ, obtaining the trade-off map (shown by the purple shaded region in Figure 3) between sensitivity and false positive rate. Experimental results on tumor analysis in CT scans reveal that a lower θ* maximizes sensitivity while maintaining an acceptable false positive rate.

Stage 3: Tumor Candidate Generation. For CT scans requiring annotation, we apply the tumor-specific model f(·, θ*) to perform voxel-wise analysis. This process generates preliminary tumor segmentation candidates, while identifying potential tumor regions that need further refinement and validation. Since these potential regions are typically challenging, senior radiologists are then required to conduct a review to confirm true positives and eliminate false positive cases.
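Stages 2–3 amount to sweeping a probability threshold and reading an operating point off the FROC curve. Below is a minimal sketch (ours, not the authors' code) of that selection logic; `detect` is a hypothetical helper, and the sensitivity target and false-positive budget are taken from the accuracy analysis reported in the next stage.

```python
import numpy as np

# Minimal sketch (our own) of the Stage-2 threshold selection: sweep the
# probability threshold theta, compute dataset-level sensitivity and false
# positives per scan (an FROC curve), and keep the lowest theta satisfying
# both constraints (lower theta favours sensitivity).
# detect(scan, theta) is a hypothetical helper returning
# (n_true_positive, n_false_positive, n_tumors) for one scan.

def pick_threshold(scans, detect, target_sens=0.90, max_fp_per_scan=2.5):
    for theta in np.linspace(0.05, 0.95, 19):   # ascending sweep
        tp = fp = n = 0
        for scan in scans:
            t, f, k = detect(scan, theta)
            tp, fp, n = tp + t, fp + f, n + k
        sens, fp_rate = tp / n, fp / len(scans)
        if sens >= target_sens and fp_rate <= max_fp_per_scan:
            return theta   # first (lowest) qualifying threshold
    return None            # no threshold meets both constraints
```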
Stage 4: Annotation Revision. The reviewed tumor segmentation candidates undergo further refinement by junior radiologists, who annotate missed tumors and adjust mask boundaries to ensure accurate and precise tumor annotations. The final revised annotations are thoroughly reviewed by senior radiologists to guarantee high-quality ground truth.

Annotation Accuracy Analysis. For each specific organ, our pipeline adaptively adjusts the threshold θ* based on the FROC curve to ensure over 90% sensitivity. A common concern is whether such high sensitivity might result in a significant number of false positives. To answer this, we validate SMART-Annotator on three public datasets and find that the pipeline maintains manageable false-positive rates, with an average of 1.2 false positives per scan for pancreatic tumors, 2 for liver tumors, and 2.4 for kidney tumors. These results highlight the effectiveness of our AI-driven approach in tumor detection. Because tumors are pre-identified with pseudo-annotations, radiologists can quickly verify true positives, correct false positives, and, if necessary, provide additional annotations for false negatives, thereby efficiently annotating tumor scans in AbdomenAtlas 2.0.

Figure 4. Dataset statistics analysis on the distributions of (a) different tumor proportions, (b) tumor radius, and (c) different tumor sizes categorized as tiny, small, medium, and large.

Annotation Efficiency Analysis. AbdomenAtlas 2.0 incorporates proprietary esophagus and uterus scans alongside unannotated data from 12 publicly available sources. Our approach applies the SMART-Annotator pipeline to all scans. Given that full manual annotation typically requires 5 minutes per scan, whereas annotation with SMART-Annotator takes only 5 seconds (i.e., 1/12 of a minute), this AI-driven approach substantially alleviates the annotation workload, conserving approximately 10,135 × (5 − 1/12) ≈ 49,826 minutes of valuable radiologist time for annotating the entire AbdomenAtlas 2.0 collection. Assuming a radiologist works 10 hours per day, this corresponds to 83 workdays saved.
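A quick numerical check (ours) of the time-savings estimate above:

```python
# Worked check (ours) of the annotation-time estimate in the text:
# ~5 min/scan fully manual vs ~5 s/scan (= 1/12 min) with SMART-Annotator.
n_scans   = 10_135
saved_min = n_scans * (5.0 - 5.0 / 60.0)   # minutes saved across all scans
workdays  = saved_min / 60.0 / 10.0        # assuming 10-hour workdays
print(round(saved_min), round(workdays))   # ~49,830 min (text: ~49,826), ~83 days
```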
Our annotation pipeline adopts a multi-stage review process (see Figure 3), integrating AI algorithms with human expertise to enhance efficiency while maintaining high annotation quality. All images and annotations undergo rigorous quality control. This process iteratively refined the annotations until no further major revisions were necessary.

| Method | Task03 Liver DSC | Task03 Liver NSD | Task07 Pancreas DSC | Task07 Pancreas NSD |
|---|---|---|---|---|
| Kim et al. [54] | 73.0 | 88.6 | 51.8 | 73.1 |
| C2FNAS [102] | 72.9 | 89.2 | 54.4 | 75.6 |
| Trans VW [29] | 76.9±20.0 | 92.0±16.8 | 51.1±32.8 | 70.1±37.4 |
| Models Gen. [106] | 77.5±20.4 | 91.9±17.9 | 50.4±32.6 | 70.0±37.2 |
| nnU-Net [44] | 76.0±22.1 | 90.7±18.3 | 52.8±33.0 | 71.5±36.6 |
| DiNTS [33] | 74.6±21.3 | 91.0±17.3 | 55.4±29.8 | 75.9±32.0 |
| Swin UNETR [92] | 75.7±20.4 | 91.6±16.8 | 58.2±28.6 | 79.1±29.7 |
| Uni. Model [66] | 79.4±17.0 | 93.4±15.2 | 62.3±26.6 | 82.9±27.2 |
| AbdomenAtlas 2.0 | 82.6±11.0 | 96.9±6.4 | 67.2±24.7 | 86.0±25.2 |
| ∆ | +3.2 | +3.5 | +4.9 | +3.1 |

Table 2. Leaderboard performance on the MSD Challenge. The results are assessed on the MSD official server using the MSD competition test dataset. All DSC and NSD metrics are sourced from the MSD leaderboard. The outcomes for the remaining tasks were produced by Universal Model [66, 67].

3.3. Advantages of AbdomenAtlas 2.0

Strong performance on in-distribution data. We report detailed comparisons on the official test set of the Medical Segmentation Decathlon (MSD) leaderboard in Table 2. With AbdomenAtlas 2.0, we significantly surpass the previously leading Universal Model [66] (denoted as Uni. Model in Table 2) and achieve the #1 performance on the leaderboard, underscoring the superiority of AbdomenAtlas 2.0 for medical segmentation.

To comprehensively evaluate the six tumor types in AbdomenAtlas 2.0, we train ResEncM [45] with the annotated tumor data in AbdomenAtlas 2.0 and compare with state-of-the-art segmentation models in the medical field (i.e., UNETR [32], Swin UNETR [92], nnU-Net [44], ResEncM [45], and STU-Net-B [43]) trained with publicly available tumor datasets. The evaluations are conducted on the validation set of AbdomenAtlas 2.0 and reported in Table 3. Training ResEncM with AbdomenAtlas 2.0 (denoted as AbdomenAtlas 2.0) consistently improves performance and outperforms the state of the art across all tumor segmentation tasks. Compared with the second-ranked STU-Net-B, AbdomenAtlas 2.0 achieves a remarkable DSC improvement of 7.3% on esophageal tumors and 4.9% on liver tumors. These results demonstrate the superiority of AbdomenAtlas 2.0 in delivering high-quality tumor data for model training compared to existing datasets, contributing to alleviating the data scarcity issue in tumor segmentation.

Better generalization for out-of-distribution data. A critical requirement for medical AI models is their ability to generalize across diverse, out-of-distribution (OOD) data from multiple hospitals, rather than being optimized solely for a single, in-distribution dataset. As shown in Table 1, AbdomenAtlas 2.0 provides a considerably more diverse collection of CT scans from 89 hospitals across 18 countries. To verify the generalizability offered by AbdomenAtlas 2.0, we further conduct evaluations on four external datasets: 3D-IRCADb [90], PANORAMA [1], Kipa [34], and a proprietary JHH dataset [96], none of which are included in the training phase. We train ResEncM [45] with the annotated tumor data in AbdomenAtlas 2.0 and compare with the following state-of-the-art medical image segmentation models: UNETR [32], Swin UNETR [92], nnU-Net [44], ResEncM [45], STU-Net [43], SegResNet [76], Universal Model [66], and SuPreM [59]. As shown in Table 4, our model significantly outperforms previous methods on all external datasets, achieving a notable DSC improvement of 14.0% and an NSD improvement of 17.0% on the 3D-IRCADb dataset.

| Method | Param | Liver Sen. | Liver DSC | Liver NSD | Pancreatic Sen. | Pancreatic DSC | Pancreatic NSD | Kidney Sen. | Kidney DSC | Kidney NSD |
|---|---|---|---|---|---|---|---|---|---|---|
| UNETR [32] | 101.8M | 77.1 | 55.6 | 53.7 | 66.7 | 31.1 | 27.2 | 95.8 | 67.2 | 55.7 |
| Swin UNETR [92] | 72.8M | 76.6 | 66.8 | 68.4 | 81.5 | 44.7 | 43.8 | 95.8 | 72.3 | 67.7 |
| nnU-Net [44] | 31.1M | 80.3 | 71.7 | 74.6 | 81.5 | 56.7 | 54.3 | 100 | 84.8 | 80.7 |
| ResEncM [45] | 63.1M | 89.1 | 71.9 | 74.7 | 84.0 | 57.0 | 54.6 | 100 | 84.8 | 81.1 |
| STU-Net-B [43] | 58.3M | 79.3 | 72.6 | 74.9 | 85.2 | 56.1 | 54.4 | 100 | 82.4 | 77.6 |
| AbdomenAtlas 2.0 | 63.1M | 83.7 | 77.5 | 81.0 | 96.0 | 65.8 | 64.7 | 100 | 87.9 | 84.4 |
| ∆ | | -5.4 | +4.9 | +6.1 | +10.8 | +8.8 | +10.1 | +0.0 | +3.1 | +3.3 |

| Method | Param | Colon Sen. | Colon DSC | Colon NSD | Esophagus Sen. | Esophagus DSC | Esophagus NSD | Uterus Sen. | Uterus DSC | Uterus NSD |
|---|---|---|---|---|---|---|---|---|---|---|
| UNETR [32] | 101.8M | 69.2 | 27.8 | 29.2 | 92.3 | 42.3 | 44.1 | 95.8 | 69.9 | 60.7 |
| Swin UNETR [92] | 72.8M | 65.4 | 36.8 | 39.4 | 84.6 | 48.2 | 49.0 | 95.8 | 73.8 | 65.0 |
| nnU-Net [44] | 31.3M | 65.4 | 42.8 | 43.7 | 92.3 | 52.7 | 53.2 | 95.8 | 78.5 | 70.2 |
| ResEncM [45] | 63.1M | 65.4 | 43.8 | 45.9 | 84.6 | 53.3 | 51.9 | 95.8 | 78.7 | 68.4 |
| STU-Net-B [43] | 58.3M | 73.1 | 47.1 | 48.7 | 88.5 | 53.9 | 54.1 | 95.8 | 78.2 | 68.8 |
| AbdomenAtlas 2.0 | 63.1M | 96.2 | 50.7 | 47.6 | 96.2 | 61.2 | 61.7 | 95.8 | 80.1 | 70.3 |
| ∆ | | +23.1 | +3.6 | -1.1 | +3.9 | +7.3 | +7.6 | +0.0 | +1.4 | +0.1 |

Table 3. Strong performance for in-distribution data: results on AbdomenAtlas 2.0. We compare AbdomenAtlas 2.0 with common AI algorithms, using the validation sets from AbdomenAtlas 2.0. AbdomenAtlas 2.0 demonstrates superior tumor segmentation performance overall, showing significant improvements in segmenting liver tumors (+4.9%), pancreatic tumors (+8.8%), kidney tumors (+3.1%), colon tumors (+3.6%), esophageal tumors (+7.3%), and uterine tumors (+1.4%).

4. Scaling Laws in Tumor Segmentation

In this section, we explore the existence of data scaling laws in tumor segmentation and assess whether appropriate data scaling can yield a robust segmentation model. This segmentation model should be generalizable to detect and segment tumors from CT scans, handling a broad spectrum of patient demographics, imaging protocols, and healthcare facilities. Specifically, we first examine the impact of increasing the number of annotated real-tumor scans on in-distribution performance. Then we analyze how the scale of annotated real-tumor data influences the model's ability to generalize to out-of-distribution tumor data.

Figure 5. Tumor size and feature distribution of real vs. synthetic tumors.
(a) Tumor size distribution across liver, pancreatic, and kidney tumors from real and synthetic data. (b–d) Feature distributions of liver, pancreatic, and kidney tumors. We extract features using a pretrained encoder [85] and visualize them with t-SNE to compare synthetic tumors with real ones.

Columns per dataset are Sen. | DSC | NSD, for 3D-IRCADb [90] (liver tumor), PANORAMA [1] (pancreatic tumor), Kipa [34] (kidney tumor), and JHH (pancreatic tumor).

| Method | Sen. | DSC | NSD | Sen. | DSC | NSD | Sen. | DSC | NSD | Sen. | DSC | NSD |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| UNETR [32] | 74.4 (87/117) | 50.1 | 46.8 | 58.8 (77/131) | 21.4 | 18.0 | 70.8 (51/72) | 43.1 | 35.8 | 51.4 (152/296) | 13.0 | 9.0 |
| Swin UNETR [92] | 76.9 (90/117) | 57.9 | 53.7 | 69.5 (91/131) | 34.0 | 30.9 | 81.9 (59/72) | 64.3 | 56.6 | 71.3 (211/296) | 31.9 | 21.9 |
| nnU-Net [44] | 77.8 (91/117) | 65.1 | 62.2 | 75.6 (99/131) | 42.4 | 38.6 | 80.6 (58/72) | 64.3 | 58.9 | 69.9 (207/296) | 34.1 | 24.7 |
| ResEncM [45] | 76.9 (90/117) | 57.6 | 53.3 | 61.1 (80/131) | 33.5 | 30.0 | 90.2 (65/72) | 76.4 | 77.0 | 68.6 (203/296) | 34.8 | 26.5 |
| STU-Net [43] | 78.6 (92/117) | 67.1 | 64.5 | 74.0 (97/131) | 42.7 | 40.3 | 55.6 (40/72) | 71.2 | 70.4 | 68.9 (204/296) | 34.1 | 24.7 |
| SegResNet [76] | 65.0 (76/117) | 54.6 | 51.3 | 84.0 (110/131) | 43.0 | 40.3 | 94.4 (68/72) | 73.6 | 70.0 | 77.7 (211/296) | 39.5 | 31.1 |
| Universal Model [66] | 86.3 (101/117) | 62.8 | 57.4 | 77.9 (102/131) | 37.0 | 33.9 | 97.2 (67/72) | 47.8 | 37.1 | 78.4 (232/296) | 32.6 | 27.1 |
| SuPreM [59] | 58.1 (68/117) | 50.2 | 47.8 | 67.9 (89/131) | 30.5 | 28.0 | 84.7 (61/72) | 42.3 | 36.0 | 63.2 (187/296) | 24.7 | 19.8 |
| AbdomenAtlas 2.0 | 86.3 (101/117) | 81.1 | 81.5 | 94.6 (124/131) | 55.3 | 52.2 | 97.2 (70/72) | 83.6 | 83.0 | 80.7 (239/296) | 45.1 | 35.7 |
| ∆ | +0.0 | +14.0 | +17.0 | +10.6 | +12.3 | +11.9 | +0.0 | +7.2 | +6.0 | +2.3 | +5.6 | +4.6 |

Table 4. Better generalizability for out-of-distribution data: results on external datasets. We evaluate AbdomenAtlas 2.0 and 8 other models on data from three publicly available and one private external source without additional fine-tuning or domain adaptation. Compared to dataset-specific models, AbdomenAtlas 2.0 demonstrates greater robustness when handling CT scans obtained from a variety of scanners, protocols, and institutes.

4.1. Experimental Setup

We evaluate the scaling behavior with two data setups: (1) only real-tumor scans, and (2) a combination of synthetic and real tumor scans. Since small tumors are rare in public datasets but crucial for clinical applications, we employ DiffTumor [12] to generate synthetic tumors, with a ratio of 4:2:1 for small, medium, and large tumors, respectively. The total number of synthetic tumor scans generated is three times the size of AbdomenAtlas 2.0. The distribution of tumor sizes and the combined data distribution are illustrated in Figure 5. We combine the generated tumors with different scales of the AbdomenAtlas 2.0 training set to train the supervised ResEncM [45]. The evaluation is conducted with segmentation metrics (i.e., DSC, NSD) and detection metrics (i.e., tumor-level and patient-level sensitivity), using the validation set of AbdomenAtlas 2.0 and six external datasets (3D-IRCADb, ULS-Liver, ULS-Pancreas, PANORAMA, Kipa, and the JHH dataset).
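To make this setup concrete, the sketch below shows one way to assemble such a training mixture: real scans plus three times as many synthetic scans, drawn at the 4:2:1 small/medium/large ratio. The function and variable names and the sampling-without-replacement choice are our illustration, not the released training code.

```python
import random

SIZE_RATIO = {"small": 4, "medium": 2, "large": 1}  # 4:2:1, as in Sec. 4.1

def build_training_mixture(real_scans, synth_pool, synth_multiple=3, seed=0):
    """Mix real scans with `synth_multiple` times as many synthetic scans.

    `real_scans` is a list of scan identifiers; `synth_pool` maps a size
    category ("small"/"medium"/"large") to the available synthetic scans.
    """
    rng = random.Random(seed)
    n_synth = synth_multiple * len(real_scans)
    total_weight = sum(SIZE_RATIO.values())
    synthetic = []
    for size, weight in SIZE_RATIO.items():
        k = round(n_synth * weight / total_weight)
        k = min(k, len(synth_pool[size]))  # cannot draw more than exists
        synthetic += rng.sample(synth_pool[size], k)
    return real_scans + synthetic
```

4.2. Plateau in In-Distribution Evaluation

We report the in-distribution segmentation performance in Figure 6 and include the detection metrics in Appendix E. Our analysis of tumor segmentation scaling behavior reveals a clear trend in in-distribution performance: as the number of annotated real-tumor scans increases, the performance gains gradually saturate.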
As illustrated by the gray lines in Figure 6, in-distribution performance initially improves with increasing data but eventually reaches a plateau across all three tumor types. This saturation indicates diminishing returns: adding more real tumor data yields progressively smaller performance gains.

However, combining a certain amount of synthetic tumors with real data during training accelerates this in-distribution saturation process. As shown by the red lines in Figure 6, with the participation of synthetic data, saturation can be reached with only 40% to 60% of the annotated real-tumor scans, indicating that synthetic data effectively expedites the model's convergence to its optimal performance within a given domain.

Figure 6. Scaling data shows a performance plateau in in-distribution evaluation. We conduct a scaling study using the AbdomenAtlas 2.0 and JHH datasets as real tumor data and evaluate performance on their corresponding validation sets. While scaling up the dataset initially enhances in-distribution performance, it eventually plateaus. These results align with the data-scaling lesson in §1. By supplementing real tumor data with well-designed synthetic data, we only need to collect and annotate a small amount of real data. This approach is especially beneficial for scenarios where data is scarce and annotation is costly, enabling high-accuracy segmentation with reduced effort.

This finding shows that we can achieve strong segmentation performance without collecting large amounts of real data. By supplementing real tumor data with well-designed synthetic data, we can significantly reduce the effort of costly real-data annotation while maintaining strong in-distribution segmentation accuracy. This lesson demonstrates the tangible benefits of introducing synthetic data into the training process and is particularly valuable for scenarios where real-data acquisition is costly or limited.

Figure 7. Scaling data leads to greater generalizability. We conduct a scaling study using liver, pancreatic, and kidney tumors from the Cancerverse dataset as real tumor data and evaluate performance on six external datasets. Unlike in-distribution performance, which plateaus with more data, OOD generalization continues to improve with the addition of real tumor data. Notably, the integration of synthetic data further improves generalizability, with models trained on both real and synthetic scans consistently outperforming those using only real data. These findings underscore the critical importance of data diversity in enhancing model robustness across diverse imaging conditions.

4.3. Scaling Data Leads to Greater Generalizability

Figure 7 reports out-of-distribution (OOD) segmentation performance. As indicated by the gray and red lines, OOD accuracy consistently rises as the dataset expands. The impact of data scaling on out-of-distribution performance follows a consistently positive trend: as the amount of real tumor data increases, OOD performance continues to improve without signs of saturation, even after exhausting the entire Cancerverse dataset. We include more OOD results in Appendix F. In contrast with the in-distribution performance, which tends to saturate with increasing data, this finding reveals that OOD generalization continues to benefit from additional real tumor data without exhibiting diminishing returns.
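One way to make the plateau-versus-no-plateau contrast quantitative is to fit a saturating power law to each scaling curve and compare the fitted asymptotes. The sketch below fits the in-distribution DSC column of Table 5 (real data only); the functional form and starting values are our assumptions, not an analysis performed in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def saturating_power_law(n, a, b, c):
    # DSC(n) = a - b * n^(-c): rises with dataset size n, plateaus at a
    return a - b * np.power(n, -c)

# In-distribution DSC vs. number of real scans (from Table 5, real data only)
n_scans = np.array([60, 120, 278, 435, 750, 1125, 1500, 3159], dtype=float)
dsc = np.array([40.2, 51.0, 52.7, 54.4, 55.4, 56.1, 59.3, 59.7])

(a, b, c), _ = curve_fit(saturating_power_law, n_scans, dsc,
                         p0=(60.0, 100.0, 0.5), maxfev=10000)
print(f"fitted plateau: {a:.1f} DSC (last observed value: {dsc[-1]:.1f})")
# A fitted asymptote close to the last observations signals saturation;
# repeating the fit on an OOD curve should yield a plateau well above the
# observed values, or fail to saturate, matching the trend in Figure 7.
```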
This non-diminishing trend remains evident even when synthetic data is incorporated into the training process. Furthermore, models trained with both real and synthetic tumor scans consistently outperform those trained with real data alone. These findings underscore the critical role of data diversity in enhancing OOD generalization, showing that a carefully curated combination of real and synthetic data strengthens model robustness across diverse imaging settings.

5. Conclusion

This study examined not just whether more data helps tumor segmentation, but what kind of data is most valuable. Using AbdomenAtlas 2.0, a large, expert-annotated dataset, together with systematic scaling experiments, we learned three key lessons. First, performance on in-distribution data plateaus early. On internal datasets like JHH, segmentation accuracy stops improving after about 1,500 real scans. This means adding more similar data yields limited benefit once a moderate threshold is reached. Second, synthetic tumors significantly reduce the need for manual annotation. By using synthetic lesions generated with DiffTumor, we can reach the same performance with only 500 real scans, cutting annotation effort by 70%. Synthetic data improves data efficiency and accelerates model convergence. Third, out-of-distribution generalization continues to improve with data diversity. Unlike the in-distribution case, performance on external datasets keeps increasing even after 1,500 scans and sees additional gains when synthetic tumors are added. This shows that model robustness depends more on data diversity than on quantity alone.

These lessons have important implications. Future expansion of AbdomenAtlas 2.0 should focus on including scans from different hospitals and imaging protocols, which will help the model perform better on new and unfamiliar data. For underrepresented tumor types like esophageal and uterine cancers, a few hundred well-selected scans combined with synthetic data can be enough to build useful models. AbdomenAtlas 2.0 also makes it possible to benchmark data scaling and annotation strategies that were previously limited by small dataset sizes. The SMART-Annotator pipeline further shows how AI-assisted pre-labeling can reduce radiologist time from minutes to seconds per scan without sacrificing accuracy, especially when combined with synthetic tumor generation.

There are several limitations worth noting. First, the performance plateau observed at 1,500 scans applies only to pancreatic tumors in abdominal CT and to the ResEncM model. It is unclear whether this threshold holds for other organs, especially those with more complex or subtle tumor appearances. Future studies should examine how data scaling behaves across different tumor types, imaging modalities, and model architectures to see whether similar saturation points occur. Second, although synthetic tumors improve performance, their anatomical realism, particularly for infiltrative, necrotic, or early-stage lesions, has not been fully verified by expert review or radiomic analysis. Ensuring clinically realistic synthesis remains a key challenge for building trust and interpretability.

Acknowledgments. This work was supported by the Lustgarten Foundation for Pancreatic Cancer Research and the National Institutes of Health (NIH) under Award Number R01EB037669.
We would like to thank the Johns Hopkins Research IT team in IT@JH for their support and infras- tructure resources where some of these analyses were con- ducted; especially DISCOVERY HPC. References [1] Nuno Alves, Maarten Schuurmans, Darius Rutkowski, D. Yakar, Ingfrid Haldorsen, Marianne Liedenbaum, Anders Molven, Paolo Vendittelli, Geert Litjens, Johan Hermans, and Henk Huisman. The panorama study protocol: Pan- creatic cancer diagnosis - radiologists meet ai, 2024. 3, 6, 7 [2] Michela Antonelli, Annika Reinke, Spyridon Bakas, Key- van Farahani, Bennett A Landman, Geert Litjens, Bjoern Menze, Olaf Ronneberger, Ronald M Summers, Bram van Ginneken, et al. The medical segmentation decathlon. arXiv preprint arXiv:2106.05735, 2021. 3, 19 [3] Michela Antonelli, Annika Reinke, Spyridon Bakas, et al. The medical segmentation decathlon. Nature Communica- tions, 13:4128, 2022. 3 [4] Pedro RAS Bassi, Wenxuan Li, Jieneng Chen, Zheren Zhu, Tianyu Lin, Sergio Decherchi, Andrea Cavalli, Kang Wang, Yang Yang, Alan L Yuille, et al. Learning segmentation from radiology reports. arXiv preprint arXiv:2507.05582, 2025. 17 [5] Pedro RAS Bassi, Mehmet Can Yavuz, Kang Wang, Xi- aoxi Chen, Wenxuan Li, Sergio Decherchi, Andrea Cav- alli, Yang Yang, Alan Yuille, and Zongwei Zhou. Radgpt: Constructing 3d image-text tumor datasets. arXiv preprint arXiv:2501.04678, 2025. 17 [6] Patrick Bilic, Patrick Ferdinand Christ, Eugene Vorontsov, Grzegorz Chlebus, Hao Chen, Qi Dou, Chi-Wing Fu, Xiao Han, Pheng-Ann Heng, J¨urgen Hesser, et al. The liver tumor segmentation benchmark (lits). arXiv preprint arXiv:1901.04056, 2019. 2, 3, 17, 19 [7] Patrick Bilic, Patrick Christ, Hongwei Bran Li, Eugene Vorontsov, Avi Ben-Cohen, Georgios Kaissis, Adi Szeskin, Colin Jacobs, Gabriel Efrain Humpire Mamani, Gabriel Chartrand, et al. The liver tumor segmentation benchmark (lits). Medical Image Analysis, 84:102680, 2023. 3 [8] Jingye Chen, Jieneng Chen, Zongwei Zhou, Bin Li, Alan Yuille, and Yongyi Lu. Mt-transunet: Mediating multi- task tokens in transformers for skin lesion segmentation and classification. arXiv preprint arXiv:2112.01767, 2021. 17 [9] Jieneng Chen, Yongyi Lu, Qihang Yu, Xiangde Luo, Ehsan Adeli, Yan Wang, Le Lu, Alan L Yuille, and Yuyin Zhou. Transunet: Transformers make strong encoders for medi- cal image segmentation. arXiv preprint arXiv:2102.04306, 2021. 17 [10] Jieneng Chen, Jieru Mei, Xianhang Li, Yongyi Lu, Qi- hang Yu, Qingyue Wei, Xiangde Luo, Yutong Xie, Ehsan Adeli, Yan Wang, Matthew Lungren, Lei Xing, Le Lu, Alan Yuille, and Yuyin Zhou. 3d transunet: Advancing medical image segmentation through vision transformers, 2023. 17 [11] Qi Chen, Mingxing Li, Jiacheng Li, Bo Hu, and Zhiwei Xiong. Mask rearranging data augmentation for 3d mi- tochondria segmentation. In International Conference on Medical Image Computing and Computer-Assisted Inter- vention, pages 36–46. Springer, 2022. 18 [12] Qi Chen, Xiaoxi Chen, Haorui Song, Zhiwei Xiong, Alan Yuille, Chen Wei, and Zongwei Zhou. Towards generaliz- able tumor synthesis. In IEEE/CVF Conference on Com- puter Vision and Pattern Recognition, 2024. 7, 18 [13] Qi Chen, Yuxiang Lai, Xiaoxi Chen, Qixin Hu, Alan Yuille, and Zongwei Zhou. Analyzing tumors by synthesis. arXiv preprint arXiv:2409.06035, 2024. 1 [14] Richard J Chen, Ming Y Lu, Tiffany Y Chen, Drew FK Williamson, and Faisal Mahmood. Synthetic data in ma- chine learning for medicine and healthcare. Nature Biomed- ical Engineering, 5(6):493–497, 2021. 17 [15] Douglas C Cheung and Antonio Finelli. 
Active surveillance in small renal masses in the elderly: a literature review. Eu- ropean urology focus, 3(4-5):340–351, 2017. 18 [16] ¨Ozg¨un C¸ ic¸ek, Ahmed Abdulkadir, Soeren S Lienkamp, Thomas Brox, and Olaf Ronneberger. 3d u-net: learning dense volumetric segmentation from sparse annotation. In Medical Image Computing and Computer-Assisted Inter- vention, pages 424–432. Springer, 2016. 17 [17] Errol Colak, Hui-Ming Lin, Robyn Ball, Melissa Davis, Adam Flanders, Sabeena Jalal, Kirti Magudia, Brett Marinelli, Savvas Nicolaou, Luciano Prevedello, Jeff Rudie, George Shih, Maryam Vazirabad, and John Mon- gan. Rsna 2023 abdominal trauma detection, 2023. 19 [18] MJJ de Grauw, E Th Scholten, EJ Smit, MJCM Rutten, M Prokop, B van Ginneken, and A Hering. The uls23 chal- lenge: a baseline model and benchmark dataset for 3d uni- versal lesion segmentation in computed tomography. arXiv preprint arXiv:2406.05231, 2024. 3 [19] Pedro Domingos. A few useful things to know about ma- chine learning. Communications of the ACM, 55(10):78– 87, 2012. 17 [20] Lijie Fan, Kaifeng Chen, Dilip Krishnan, Dina Katabi, Phillip Isola, and Yonglong Tian. Scaling laws of synthetic images for model training... for now. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7382–7392, 2024. 1 [21] Virginia Fernandez, Walter Hugo Lopez Pinaya, Pedro Borges, Petru-Daniel Tudosiu, Mark S Graham, Tom Ver- cauteren, and M Jorge Cardoso. Can segmentation models be trained with fully synthetically generated data? In Inter- national Workshop on Simulation and Synthesis in Medical Imaging, pages 79–90. Springer, 2022. 18 [22] Kathryn J Fowler, Adam Burgoyne, Tyler J Fraum, Moj- gan Hosseini, Shintaro Ichikawa, Sooah Kim, Azusa Ki- tao, Jeong Min Lee, Val´erie Paradis, Bachir Taouli, et al. Pathologic, molecular, and prognostic radiologic features of hepatocellular carcinoma. Radiographics, 41(6):1611– 1631, 2021. 18 [23] Sergios Gatidis, Marcel Fr¨uh, Matthias P Fabritius, Sijing Gu, Konstantin Nikolaou, Christian La Foug`ere, Jin Ye, Junjun He, Yige Peng, Lei Bi, et al. Results from the autopet challenge on fully automated lesion segmentation in oncologic pet/ct imaging. Nature Machine Intelligence, pages 1–10, 2024. 3 [24] CH Golias, A Charalabopoulos, and K Charalabopoulos. Cell proliferation and cell cycle control: a mini review. In- ternational journal of clinical practice, 58(12):1134–1141, 2004. 17 [25] Kuang Gong, Keith Johnson, Georges El Fakhri, Quanzheng Li, and Tinsu Pan. Pet image denoising based on denoising diffusion probabilistic model. European Jour- nal of Nuclear Medicine and Molecular Imaging, pages 1– 11, 2023. 18 [26] Chloe Gui, Suzanne E Kosteniuk, Jonathan C Lau, and Joseph F Megyesi. Tumor growth dynamics in serially- imaged low-grade glioma patients. Journal of Neuro- Oncology, 139:167–175, 2018. 18 [27] Pengfei Guo, Can Zhao, Dong Yang, Ziyue Xu, Vish- wesh Nath, Yucheng Tang, Benjamin Simon, Mason Belue, Stephanie Harmon, Baris Turkbey, et al. Maisi: Medical ai for synthetic imaging. arXiv preprint arXiv:2409.11169, 2024. 18 [28] Pengfei Guo, Can Zhao, Dong Yang, Yufan He, Vish- wesh Nath, Ziyue Xu, Pedro RAS Bassi, Zongwei Zhou, Benjamin D Simon, Stephanie Anne Harmon, et al. Text2ct: Towards 3d ct volume generation from free- text descriptions using diffusion model. arXiv preprint arXiv:2505.04522, 2025. 1 [29] Fatemeh Haghighi, Mohammad Reza Hosseinzadeh Taher, Zongwei Zhou, Michael B Gotway, and Jianming Liang. 
Transferable visual words: Exploiting the semantics of anatomical patterns for self-supervised learning. IEEE Transactions on Medical Imaging, 2021. 5 [30] Ibrahim Ethem Hamamci, Sezgin Er, Anjany Sekuboy- ina, Enis Simsar, Alperen Tezcan, Ayse Gulnihan Sim- sek, Sevval Nil Esirgun, Furkan Almas, Irem Do˘gan, Muhammed Furkan Dasdelen, et al. Generatect: text- conditional generation of 3d chest ct volumes. In European Conference on Computer Vision, pages 126–143. Springer, 2025. 18 [31] Ali Hatamizadeh, Vishwesh Nath, Yucheng Tang, Dong Yang, Holger R Roth, and Daguang Xu. Swin unetr: Swin transformers for semantic segmentation of brain tumors in mri images. In International MICCAI Brainlesion Work- shop, pages 272–284. Springer, 2022. 17 [32] Ali Hatamizadeh, Yucheng Tang, Vishwesh Nath, Dong Yang, Andriy Myronenko, Bennett Landman, Holger R Roth, and Daguang Xu. Unetr: Transformers for 3d medical image segmentation. In Proceedings of the IEEE/CVF win- ter conference on applications of computer vision, pages 574–584, 2022. 5, 6, 7, 17 [33] Yufan He, Dong Yang, Holger Roth, Can Zhao, and Daguang Xu. Dints: Differentiable neural network topol- ogy search for 3d medical image segmentation. In Proceed- ings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5841–5850, 2021. 5 [34] Yuting He, Guanyu Yang, Jian Yang, Rongjun Ge, Youy- ong Kong, Xiaomei Zhu, Shaobo Zhang, Pengfei Shao, Huazhong Shu, Jean-Louis Dillenseger, et al. Meta grayscale adaptive network for 3d integrated renal struc- tures segmentation. Medical image analysis, 71:102055, 2021. 6, 7 [35] Nicholas Heller, Niranjan Sathianathen, Arveen Kalapara, Edward Walczak, Keenan Moore, Heather Kaluzniak, Joel Rosenberg, Paul Blake, Zachary Rengel, Makinna Oestre- ich, et al. The kits19 challenge data: 300 kidney tumor cases with clinical context, ct semantic segmentations, and surgical outcomes. arXiv preprint arXiv:1904.00445, 2019. 2 [36] Nicholas Heller, Fabian Isensee, Dasha Trofimova, Resha Tejpaul, Zhongchen Zhao, Huai Chen, Lisheng Wang, Alex Golts, Daniel Khapun, Daniel Shats, et al. The kits21 chal- lenge: Automatic segmentation of kidneys, renal tumors, and renal cysts in corticomedullary-phase ct. arXiv preprint arXiv:2307.01984, 2023. 3, 17, 19 [37] Tom Henighan, Jared Kaplan, Mor Katz, Mark Chen, Christopher Hesse, Jacob Jackson, Heewoo Jun, Tom B Brown, Prafulla Dhariwal, Scott Gray, et al. Scaling laws for autoregressive generative modeling. arXiv preprint arXiv:2010.14701, 2020. 3 [38] Nora B Henrikson, Erin J Aiello Bowles, Paula R Blasi, Caitlin C Morrison, Matt Nguyen, Venu G Pillarisetty, and Jennifer S Lin. Screening for pancreatic cancer: updated evidence report and systematic review for the us preventive services task force. Jama, 322(5):445–454, 2019. 2 [39] N Hiraoka, Y Ino, S Sekine, H Tsuda, K Shimada, T Ko- suge, J Zavada, M Yoshida, K Yamada, T Koyama, et al. Tumour necrosis is a postoperative prognostic marker for pancreatic cancer patients with a high interobserver repro- ducibility in histological evaluation. British journal of can- cer, 103(7):1057–1065, 2010. 18 [40] Qixin Hu, Junfei Xiao, Yixiong Chen, Shuwen Sun, Jie- Neng Chen, Alan Yuille, and Zongwei Zhou. Synthetic tu- mors make ai segment tumors better. NeurIPS Workshop on Medical Imaging meets NeurIPS, 2022. 1 [41] Qixin Hu, Yixiong Chen, Junfei Xiao, Shuwen Sun, Jieneng Chen, Alan L Yuille, and Zongwei Zhou. Label-free liver tumor segmentation. 
In IEEE/CVF Conference on Com- puter Vision and Pattern Recognition, pages 7422–7432, 2023. 18 [42] Qixin Hu, Alan Yuille, and Zongwei Zhou. Synthetic data as validation. arXiv preprint arXiv:2310.16052, 2023. 17 [43] Ziyan Huang, Haoyu Wang, Zhongying Deng, Jin Ye, Yanzhou Su, Hui Sun, Junjun He, Yun Gu, Lixu Gu, Shaot- ing Zhang, et al. Stu-net: Scalable and transferable medi- cal image segmentation models empowered by large-scale supervised pre-training. arXiv preprint arXiv:2304.06716, 2023. 5, 6, 7, 17 [44] Fabian Isensee, Paul F Jaeger, Simon AA Kohl, Jens Petersen, and Klaus H Maier-Hein. nnu-net: a self- configuring method for deep learning-based biomedical im- age segmentation. Nature Methods, 18(2):203–211, 2021. 5, 6, 7 [45] Fabian Isensee, Tassilo Wald, Constantin Ulrich, Michael Baumgartner, Saikat Roy, Klaus Maier-Hein, and Paul F Jaeger. nnu-net revisited: A call for rigorous validation in 3d medical image segmentation. In International Confer- ence on Medical Image Computing and Computer-Assisted Intervention, pages 488–498. Springer, 2024. 5, 6, 7 [46] Yuanfeng Ji, Haotian Bai, Jie Yang, Chongjian Ge, Ye Zhu, Ruimao Zhang, Zhen Li, Lingyan Zhang, Wanling Ma, Xi- ang Wan, et al. Amos: A large-scale abdominal multi-organ benchmark for versatile medical image segmentation. arXiv preprint arXiv:2206.08023, 2022. 17, 19 [47] Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. In In- ternational conference on machine learning, pages 4904– 4916. PMLR, 2021. 3 [48] James Jordon, Jinsung Yoon, and Mihaela Van Der Schaar. Pate-gan: Generating synthetic data with differential pri- vacy guarantees. In International conference on learning representations, 2018. 17 [49] Mintong Kang, Bowen Li, Zengle Zhu, Yongyi Lu, Elliot K Fishman, Alan Yuille, and Zongwei Zhou. Label-assemble: Leveraging multiple datasets with partial labels. In IEEE International Symposium on Biomedical Imaging, pages 1– 5. IEEE, 2023. 2 [50] Mee Joo Kang, Jin-Young Jang, Soo Jin Kim, Kyoung Bun Lee, Ji Kon Ryu, Yong-Tae Kim, Yong Bum Yoon, and Sun-Whe Kim. Cyst growth rate predicts malignancy in pa- tients with branch duct intraductal papillary mucinous neo- plasms. Clinical Gastroenterology and Hepatology, 9(1): 87–93, 2011. 18 [51] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361, 2020. 1, 3 [52] David J Kerr, Daniel G Haller, and Michael Baumann. Ox- ford textbook of oncology. oxford university press, 2016. 18 [53] Boah Kim, Yujin Oh, and Jong Chul Ye. Diffusion adver- sarial representation learning for self-supervised vessel seg- mentation. arXiv preprint arXiv:2209.14566, 2022. 18 [54] Sungwoong Kim, Ildoo Kim, Sungbin Lim, Woonhyuk Baek, Chiheon Kim, Hyungjoo Cho, Boogeon Yoon, and Taesup Kim. Scalable neural architecture search for 3d medical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted In- tervention, pages 220–228. Springer, 2019. 5 [55] Vinay Kumar, Abul Abbas, and Jon C Aster. Robbins basic pathology. Elsevier Health Sciences, 2017. 17, 18 [56] Yuxiang Lai, Xiaoxi Chen, Angtian Wang, Alan Yuille, and Zongwei Zhou. From pixel to cancer: Cellular automata in computed tomography. arXiv preprint arXiv:2403.06459, 2024. 
18 [57] Bennett Landman, Zhoubing Xu, J Igelsias, Martin Styner, Thomas Langerak, and Arno Klein. Miccai multi-atlas la- beling beyond the cranial vault–workshop and challenge. In Proc. MICCAI Multi-Atlas Labeling Beyond Cranial Vault—Workshop Challenge, page 12, 2015. 19 [58] Bowen Li, Yu-Cheng Chou, Shuwen Sun, Hualin Qiao, Alan Yuille, and Zongwei Zhou. Early detection and local- ization of pancreatic cancer by label-free tumor synthesis. MICCAI Workshop on Big Task Small Data, 1001-AI, 2023. 1 [59] Wenxuan Li, Chongyu Qu, Xiaoxi Chen, Pedro RAS Bassi, Yijia Shi, Yuxiang Lai, Qian Yu, Huimin Xue, Yix- iong Chen, Xiaorui Lin, et al. Abdomenatlas: A large- scale, detailed-annotated, & multi-center dataset for effi- cient transfer learning and open algorithmic benchmarking. Medical Image Analysis, page 103285, 2024. 6, 7 [60] Wenxuan Li, Alan Yuille, and Zongwei Zhou. How well do supervised models transfer to 3d image segmentation? In International Conference on Learning Representations, 2024. 17 [61] Wenxuan Li, Pedro RAS Bassi, Tianyu Lin, Yu-Cheng Chou, Xinze Zhou, Yucheng Tang, Fabian Isensee, Kang Wang, Qi Chen, Xiaowei Xu, et al. Scalemai: Accelerating the development of trusted datasets and ai models. arXiv preprint arXiv:2501.03410, 2025. 1, 17 [62] Wenxuan Li, Xinze Zhou, Qi Chen, Tianyu Lin, Pedro RAS Bassi, Szymon Plotka, Jaroslaw B Cwikla, Xiaoxi Chen, Chen Ye, Zheren Zhu, et al. Pants: The pancreatic tumor segmentation dataset. arXiv preprint arXiv:2507.01291, 2025. 2 [63] Xinran Li, Yi Shuai, Chen Liu, Qi Chen, Qilong Wu, Pengfei Guo, Dong Yang, Can Zhao, Pedro RAS Bassi, Daguang Xu, et al. Text-driven tumor synthesis. arXiv preprint arXiv:2412.18589, 2024. 1 [64] Tianyu Lin, Xinran Li, Chuntung Zhuang, Qi Chen, Yuanhao Cai, Kai Ding, Alan L Yuille, and Zongwei Zhou. Are pixel-wise metrics reliable for sparse-view computed tomography reconstruction? arXiv preprint arXiv:2506.02093, 2025. 18 [65] Jie Liu, Alan Yuille, Yucheng Tang, and Zongwei Zhou. Clip-driven universal model for partially labeled organ and pan-cancer segmentation. In MICCAI 2023 FLARE Chal- lenge, 2023. 17 [66] Jie Liu, Yixiao Zhang, Jie-Neng Chen, Junfei Xiao, Yongyi Lu, Bennett A Landman, Yixuan Yuan, Alan Yuille, Yucheng Tang, and Zongwei Zhou. Clip-driven univer- sal model for organ segmentation and tumor detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 21152–21164, 2023. 5, 6, 7, 17 [67] Jie Liu, Yixiao Zhang, Kang Wang, Mehmet Can Yavuz, Xiaoxi Chen, Yixuan Yuan, Haoliang Li, Yang Yang, Alan Yuille, Yucheng Tang, et al. Universal and extensible language-vision models for organ segmentation and tumor detection from abdominal computed tomography. Medical Image Analysis, page 103226, 2024. 5 [68] Xiangde Luo, Wenjun Liao, Jianghong Xiao, Tao Song, Xi- aofan Zhang, Kang Li, Guotai Wang, and Shaoting Zhang. Word: Revisiting organs segmentation in the whole abdom- inal region. arXiv preprint arXiv:2111.02403, 2021. 17, 19 [69] Qing Lyu and Ge Wang. Conversion between ct and mri images using diffusion and score-matching models. arXiv preprint arXiv:2209.12104, 2022. 18 [70] J. Ma and B. Wang. Fast, low-resource, accurate, and robust organ and pan-cancer segmentation. In 27th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2024). Zenodo, 2024. 1, 2, 3 [71] Jun Ma, Yao Zhang, Song Gu, Cheng Zhu, Cheng Ge, Yichi Zhang, Xingle An, Congcong Wang, Qiyuan Wang, Xin Liu, et al. 
Abdomenct-1k: Is abdominal organ segmentation a solved problem. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021. 19 [72] Jun Ma, Yao Zhang, Song Gu, Xingle An, Zhihe Wang, Cheng Ge, Congcong Wang, Fan Zhang, Yu Wang, Yinan Xu, et al. Fast and low-gpu-memory abdomen ct organ seg- mentation: the flare challenge. Medical Image Analysis, 82: 102616, 2022. 19 [73] Jiawei Mao, Yuhan Wang, Yucheng Tang, Daguang Xu, Kang Wang, Yang Yang, Zongwei Zhou, and Yuyin Zhou. Medsegfactory: Text-guided generation of medical image- mask pairs. arXiv preprint arXiv:2504.06897, 2025. 1 [74] Xiangxi Meng, Yuning Gu, Yongsheng Pan, Nizhuan Wang, Peng Xue, Mengkang Lu, Xuming He, Yiqiang Zhan, and Dinggang Shen. A novel unified conditional score-based generative framework for multi-modal medical image completion. arXiv preprint arXiv:2207.03430, 2022. 18 [75] Fausto Milletari, Nassir Navab, and Seyed-Ahmad Ahmadi. V-net: Fully convolutional neural networks for volumetric medical image segmentation. In 2016 Fourth International Conference on 3D Vision (3DV), pages 565–571. IEEE, 2016. 17 [76] Andriy Myronenko. 3d mri brain tumor segmentation using autoencoder regularization. In Brainlesion: Glioma, Mul- tiple Sclerosis, Stroke and Traumatic Brain Injuries: 4th International Workshop, BrainLes 2018, Held in Conjunc- tion with MICCAI 2018, Granada, Spain, September 16, 2018, Revised Selected Papers, Part II 4, pages 311–320. Springer, 2019. 6, 7 [77] Piyush Nathani, Purva Gopal, Nicole Rich, Adam Yopp, Takeshi Yokoo, Binu John, Jorge Marrero, Neehar Parikh, and Amit G Singal. Hepatocellular carcinoma tumour vol- ume doubling time: a systematic review and meta-analysis. Gut, 70(2):401–407, 2021. 18 [78] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Car- roll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training lan- guage models to follow instructions with human feedback. Advances in neural information processing systems, 35: 27730–27744, 2022. 1, 3 [79] Muzaffer ¨Ozbey, Onat Dalmaz, Salman UH Dar, Hasan A Bedel, S¸aban ¨Ozturk, Alper G¨ung¨or, and Tolga C¸ ukur. Un- supervised medical image translation with adversarial dif- fusion models. IEEE Transactions on Medical Imaging, 2023. 18 [80] William Peebles and Saining Xie. Scalable diffusion mod- els with transformers. In Proceedings of the IEEE/CVF In- ternational Conference on Computer Vision, pages 4195– 4205, 2023. 3 [81] Micha Pfeiffer, Isabel Funke, Maria R Robu, Sebastian Bo- denstedt, Leon Strenger, Sandy Engelhardt, Tobias Roß, Matthew J Clarkson, Kurinchi Gurusamy, Brian R David- son, et al. Generating large labeled data sets for laparo- scopic image processing tasks using unpaired image-to- image translation. In Medical Image Computing and Com- puter Assisted Intervention, pages 119–127. Springer, 2019. 18 [82] Marion J Pollheimer, Peter Kornprat, Richard A Lindtner, Lars Harbaum, Andrea Schlemmer, Peter Rehak, and Cord Langner. Tumor necrosis is a new promising prognostic fac- tor in colorectal cancer. Human pathology, 41(12):1749– 1757, 2010. 18 [83] Colin H Richards, Zahra Mohammed, Tahir Qayyum, Paul G Horgan, and Donald C McMillan. The prognostic value of histological tumor necrosis in solid organ malig- nant disease: a systematic review. Future oncology, 7(10): 1223–1235, 2011. 18 [84] Blaine Rister, Darvin Yi, Kaushik Shivakumar, Tomomi Nobashi, and Daniel L Rubin. Ct-org, a new dataset for multiple organ segmentation in computed tomography. Sci- entific Data, 7(1):1–9, 2020. 
19 [85] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bj¨orn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684–10695, 2022. 1, 3, 6 [86] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U- net: Convolutional networks for biomedical image seg- mentation. In International Conference on Medical Im- age Computing and Computer-Assisted Intervention, pages 234–241. Springer, 2015. 17 [87] Holger R Roth, Le Lu, Amal Farag, Hoo-Chang Shin, Ji- amin Liu, Evrim B Turkbey, and Ronald M Summers. Deeporgan: Multi-level deep convolutional networks for automated pancreas segmentation. In International confer- ence on medical image computing and computer-assisted intervention, pages 556–564. Springer, 2015. 19 [88] Younghak Shin, Hemin Ali Qadir, and Ilangko Balasing- ham. Abnormal colon polyp image synthesis using condi- tional adversarial networks for improved detection perfor- mance. IEEE Access, 6:56007–56017, 2018. 18 [89] Marc C Smaldone, Alexander Kutikov, Brian L Egle- ston, Daniel J Canter, Rosalia Viterbo, David YT Chen, Michael A Jewett, Richard E Greenberg, and Robert G Uzzo. Small renal masses progressing to metastases under active surveillance: a systematic review and pooled analy- sis. Cancer, 118(4):997–1006, 2012. 18 [90] L Soler, A Hostettler, V Agnus, A Charnoz, J Fasquel, J Moreau, A Osswald, M Bouhadjar, and J Marescaux. 3d image reconstruction for comparison of algorithm database: A patient specific anatomical and medical image database. IRCAD, Strasbourg, France, Tech. Rep, 2010. 6, 7 [91] Yang Song, Liyue Shen, Lei Xing, and Stefano Ermon. Solving inverse problems in medical imaging with score- based generative models. arXiv preprint arXiv:2111.08005, 2021. 18 [92] Yucheng Tang, Dong Yang, Wenqi Li, Holger R Roth, Bennett Landman, Daguang Xu, Vishwesh Nath, and Ali Hatamizadeh. Self-supervised pre-training of swin trans- formers for 3d medical image analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20730–20740, 2022. 5, 6, 7, 17 [93] Vanya V Valindria, Nick Pawlowski, Martin Rajchl, Ioannis Lavdas, Eric O Aboagye, Andrea G Rockall, Daniel Rueck- ert, and Ben Glocker. Multi-modal learning from unpaired images: Application to multi-organ segmentation in ct and mri. In 2018 IEEE winter conference on applications of computer vision (WACV), pages 547–556. IEEE, 2018. 19 [94] Julia Wolleb, Robin Sandk¨uhler, Florentin Bieder, Philippe Valmaggia, and Philippe C Cattin. Diffusion models for implicit image segmentation ensembles. In International Conference on Medical Imaging with Deep Learning, pages 1336–1348. PMLR, 2022. 18 [95] Linshan Wu, Jiaxin Zhuang, Xuefeng Ni, and Hao Chen. Freetumor: Advance tumor segmentation via large-scale tu- mor synthesis. arXiv preprint arXiv:2406.01264, 2024. 18 [96] Yingda Xia, Qihang Yu, Linda Chu, Satomi Kawamoto, Seyoun Park, Fengze Liu, Jieneng Chen, Zhuotun Zhu, Bowen Li, Zongwei Zhou, et al. The felix project: Deep networks to detect pancreatic neoplasms. medRxiv, 2022. 1, 6 [97] Tiange Xiang, Yixiao Zhang, Yongyi Lu, Alan Yuille, Chaoyi Zhang, Weidong Cai, and Zongwei Zhou. Ex- ploiting structural consistency of chest anatomy for unsu- pervised anomaly detection in radiography images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024. 17 [98] Yutong Xie and Quanzheng Li. 
Measurement-conditioned denoising diffusion probabilistic model for under-sampled medical image reconstruction. In International Conference on Medical Image Computing and Computer-Assisted In- tervention, pages 655–664. Springer, 2022. 18 [99] Ke Yan, Xiaosong Wang, Le Lu, and Ronald M Sum- mers. Deeplesion: Automated deep mining, categorization and detection of significant radiology image findings us- ing large-scale clinical lesion annotations. arXiv preprint arXiv:1710.01766, 2017. 3 [100] Yijun Yang, Zhao-Yang Wang, Qiuping Liu, Shuwen Sun, Kang Wang, Rama Chellappa, Zongwei Zhou, Alan Yuille, Lei Zhu, Yu-Dong Zhang, et al. Medical world model: Gen- erative simulation of tumor evolution for treatment plan- ning. arXiv preprint arXiv:2506.02327, 2025. 17 [101] Jinsung Yoon, Daniel Jarrett, and Mihaela Van der Schaar. Time-series generative adversarial networks. Advances in neural information processing systems, 32, 2019. 17 [102] Qihang Yu, Dong Yang, Holger Roth, Yutong Bai, Yixiao Zhang, Alan L Yuille, and Daguang Xu. C2fnas: Coarse- to-fine neural architecture search for 3d medical image seg- mentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4126– 4135, 2020. 5 [103] Zongwei Zhou. Towards Annotation-Efficient Deep Learn- ing for Computer-Aided Diagnosis. PhD thesis, Arizona State University, 2021. 1 [104] Zongwei Zhou, Md Mahfuzur Rahman Siddiquee, Nima Tajbakhsh, and Jianming Liang. Unet++: A nested u-net ar- chitecture for medical image segmentation. In Deep Learn- ing in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, pages 3–11. Springer, 2018. 17 [105] Zongwei Zhou, Md Mahfuzur Rahman Siddiquee, Nima Tajbakhsh, and Jianming Liang. Unet++: Redesigning skip connections to exploit multiscale features in image seg- mentation. IEEE Transactions on Medical Imaging, 39(6): 1856–1867, 2019. 17 [106] Zongwei Zhou, Vatsal Sodha, Jiaxuan Pang, Michael B Gotway, and Jianming Liang. Models genesis. Medical Image Analysis, 67:101840, 2021. 5 [107] Zongwei Zhou, Michael B Gotway, and Jianming Liang. Interpreting medical images. In Intelligent Systems in Medicine and Health, pages 343–371. Springer, 2022. 1 [108] Lingting Zhu, Noel Codella, Dongdong Chen, Zhenchao Jin, Lu Yuan, and Lequan Yu. Generative enhancement for 3d medical images. arXiv preprint arXiv:2403.12852, 2024. 18 Scaling Tumor Segmentation: Best Lessons from Real and Synthetic Data Supplementary Material This appendix is organized as follows: • § A provides comprehensive results with scaled real data with proprietary dataset and synthetic data. • § B provides comprehensive related works. ◦B.1: AI Development on Real Tumors ◦B.2: AI Development on Synthetic Tumors • § C provides implementation details for Tumor Genesis and comparative models. ◦C.1: details of public and private datasets used in AbdomenAtlas 2.0. ◦C.2: implementation details of comparative models • § D provides more visual examples from AbdomenAtlas 2.0. • § E presents additional results on the key insights gained from scaling real tumor data. • § F presents additional results on the key insights gained from scaling real and synthetic tumor data. A. Best Lesson Proof on proprietary dataset Figure 8. Best lesson proof on proprietary dataset. 
Comprehensive experimental results trained on the proprietary dataset show that increasing the scale of real data (gray curve) improves segmentation (DSC and NSD) and detection (patient-level sensitivity, tumor-level sensitivity, and early tumor sensitivity) for both in-distribution and out-of-distribution data. Additionally, augmenting the dataset with an extra 3× synthetic data (red curve) consistently enhances the results. The specific numerical results in this figure can be referenced in Table 5. Given the substantial GPU requirements, the results were obtained from a single experiment. To reach a more reliable conclusion, we will conduct the experiments at least 10 times.

Scaling with real data:

| #real CT | Patient-level Sen. | Tumor-level Sen. | DSC | NSD | Early Tumor Sen. |
|---|---|---|---|---|---|
| Test on in-distribution data | | | | | |
| 60 | 86.4 | 74.3 | 40.2 | 35.0 | 45.7 |
| 120 | 90.1 | 79.4 | 51.0 | 44.4 | 40.0 |
| 278 | 91.5 | 82.4 | 52.7 | 47.2 | 51.4 |
| 435 | 90.1 | 80.7 | 54.4 | 48.6 | 42.9 |
| 750 | 89.7 | 80.1 | 55.4 | 49.1 | 41.4 |
| 1125 | 91.2 | 82.1 | 56.1 | 50.7 | 44.3 |
| 1500 | 90.4 | 82.1 | 59.3 | 53.6 | 48.6 |
| 3159 | 91.5 | 84.8 | 59.7 | 54.3 | 58.6 |
| Test on out-of-distribution data | | | | | |
| 60 | 100.0 | 51.2 | 6.6 | 4.4 | 22.0 |
| 120 | 100.0 | 69.5 | 22.6 | 17.4 | 42.0 |
| 278 | 89.2 | 74.1 | 33.2 | 27.9 | 44.0 |
| 435 | 92.3 | 75.6 | 36.8 | 30.5 | 48.0 |
| 750 | 88.5 | 71.0 | 35.3 | 28.5 | 38.0 |
| 1125 | 80.0 | 72.5 | 37.1 | 31.4 | 48.0 |
| 1500 | 90.0 | 81.7 | 38.4 | 31.2 | 60.0 |
| 3159 | 81.5 | 74.1 | 41.2 | 35.5 | 46.0 |

Scaling with real & synthetic data:

| #real CT | Patient-level Sen. | Tumor-level Sen. | DSC | NSD | Early Tumor Sen. |
|---|---|---|---|---|---|
| Test on in-distribution data | | | | | |
| 60 | 89.7 | 77.4 | 48.2 | 39.8 | 40.0 |
| 120 | 94.1 | 82.1 | 56.5 | 47.7 | 44.3 |
| 278 | 98.2 | 86.1 | 58.1 | 50.0 | 57.1 |
| 435 | 97.4 | 86.5 | 59.1 | 51.0 | 55.7 |
| 750 | 95.2 | 86.5 | 58.1 | 50.4 | 51.4 |
| 1125 | 97.1 | 86.1 | 59.9 | 52.0 | 52.9 |
| 1500 | 95.2 | 85.1 | 59.2 | 51.1 | 51.4 |
| 3159 | 96.3 | 85.8 | 59.3 | 51.2 | 50.0 |
| Test on out-of-distribution data | | | | | |
| 60 | 96.2 | 83.2 | 42.7 | 39.2 | 64.7 |
| 120 | 91.5 | 82.4 | 45.5 | 40.6 | 64.7 |
| 278 | 99.2 | 92.4 | 51.8 | 48.3 | 80.4 |
| 435 | 98.5 | 90.8 | 52.8 | 49.6 | 78.4 |
| 750 | 96.9 | 90.8 | 54.1 | 50.6 | 78.4 |
| 1125 | 98.5 | 90.1 | 53.0 | 48.4 | 78.4 |
| 1500 | 97.7 | 87.8 | 52.6 | 48.0 | 70.6 |
| 3159 | 96.9 | 87.8 | 53.5 | 49.1 | 72.5 |

Table 5. Best lesson proof on the proprietary dataset. The proprietary dataset comprises a total of 5,176 CT scans, including scans of patients with pancreatic tumors as well as healthy scans without pancreatic tumors. We utilized 3,159 scans for training, while the remaining 2,017 were allocated for testing within the same distribution. For the out-of-distribution dataset, we selected the PANORAMA dataset. Detailed information regarding the dataset split can be found in § C. For the segmentation model, we employed the SegResNet model based on the MONAI codebase for training and assessed the tumor segmentation and detection results using the DSC, NSD, and sensitivity metrics.

B. Related works

B.1. AI Development on Real Tumors

AI algorithms. Tumor detection and segmentation have been long-standing problems in medical image analysis. To achieve deliverable results, many recent works leverage state-of-the-art deep learning technology [43, 92]. The U-Net architecture [86] has been widely adopted in medical image analysis. Over the years, numerous well-designed networks have been proposed to improve the U-Net architecture, including UNet++ [104, 105], TransU-Net [9], UNETR [32], Swin-UNETR [31], and many others [8, 10, 16, 75]. While these methods have demonstrated remarkable performance in tumor detection and segmentation, they typically rely on a significant number of annotations. The process of annotating real tumors is not only time-consuming but also requires extensive medical expertise.
Sometimes it requires the assistance of radiology reports [4, 5], or the annotation may even be impossible to obtain [6, 42, 97, 100]. Therefore, the use of synthetic tumors emerges as a promising solution.

Liu et al. [66] integrate text embeddings derived from Contrastive Language-Image Pre-training (CLIP) into segmentation models, effectively capturing anatomical relationships and enabling the model to learn structured feature embeddings across multiple organ and tumor types. With pre-training on large-scale CT scans with per-voxel annotations for 25 anatomical structures and seven tumor types, Li et al. [60] have developed a suite of models demonstrating robust transfer learning capabilities across various downstream organ and tumor segmentation tasks.

Preexisting public datasets have made significant contributions to the advancement of AI in tumor detection [61]. We summarize key characteristics of existing public datasets for organ and tumor segmentation in Table 1, categorized into those with and without tumor labels. Datasets such as LiTS [6] and KiTS [36] provide essential tumor labels but are limited in size and variety, with 131 and 489 scans, respectively, and fewer hospitals contributing data (7 for LiTS and 1 for KiTS). Larger datasets like FLARE23 [65] include 2,200 scans and span contributions from 30 hospitals, yet they focus on a single organ and provide no explicit tumor-specific labels. Similarly, datasets without tumor labels, such as WORD [68] and AMOS22 [46], are useful for broader anatomical segmentation tasks but lack tumor-specific annotations. In contrast, AbdomenAtlas 2.0 distinguishes itself by offering the most extensive dataset to date, with 10,136 scans, 4,700K slices, and 13,223 tumors annotated across multiple organs, including rarer tumor types like esophageal and uterine tumors. The dataset incorporates data from 89 hospitals across a wide range of countries, providing unprecedented diversity and comprehensiveness for multi-organ tumor research.

B.2. AI Development on Synthetic Tumors

Tumor synthesis enables the generation of artificial tumors in medical images, aiding the training of AI models for tumor detection and segmentation [14, 48, 101]. Synthetic tumors become particularly valuable when acquiring per-voxel annotations of real tumors is challenging, such as in the early stages of tumor development. There are several advantages of synthetic tumors over real tumors.

Quality Control: Synthetic data allows for the control of specific variables and the introduction of desired diversity into the dataset. Real-world datasets often suffer from imbalances, such as an overrepresentation of certain demographics or tumor stages. Synthetic data can be generated to balance these datasets, ensuring that machine learning models are trained on a comprehensive and representative sample of data. For rare cancers, collecting enough patient data is particularly difficult. Synthetic data can help augment these limited datasets, enabling the development of more robust and accurate models for rare cancer types. Additionally, synthetic data can be used to simulate hard cases that are difficult to capture in real-world data. Researchers can rapidly iterate and refine their models, leading to faster advancements in tumor detection, diagnosis, and treatment.

Privacy and Ethical Considerations: One of the major advantages of synthetic data is that it can be used without compromising patient privacy. Since synthetic data is not directly tied to any real individual, it eliminates the risk of exposing sensitive patient information. By using synthetic data, researchers can bypass ethical dilemmas associated with real patient data, such as the need for patient consent and the risk of data breaches.

Synthetic tumors can aid AI models for tumor detection and segmentation, particularly in situations where detailed annotations are scarce [14, 19]. Therefore, an effective and universally applicable tumor synthesis approach is urgently needed to accelerate the development of tumor detection and segmentation methods.

Tumor development is intricately regulated by biological mechanisms at various scales. Tumors, which arise from DNA mutations in a single cell and represent genetic disorders [55], undergo complex growth processes. Mutated cells lead to uncontrolled proliferation, which can be benign or malignant [24]. Differences between benign and malignant tumors include growth rate and invasiveness [55]. Malignant tumors tend to exhibit larger final sizes and faster growth rates compared to benign lesions [50]. Additionally, slow tumor growth rates have been associated with low malignant potential [15, 89]. These patterns have also been observed in several studies [26, 77]. Malignant tumors usually invade surrounding tissues, while benign tumors typically remain confined to their original sites. Moreover, even slowly growing malignant tumors can invade surrounding tissues [52], leading to blurry boundaries between tumors and adjacent tissues. Therefore, it is necessary to design Accumulation and Growth rules to simulate these features. Tumor necrosis, a form of cell death, indicates a worse prognosis [82, 83]. Histologically, necrosis is caused by hypoxia resulting from rapid cell proliferation surpassing vascular supply [39], presenting as non-enhancing irregular areas in CT images [22].

Hu et al. [41] developed a program that integrates medical knowledge to generate realistic liver tumors. However, such models are generally organ-specific and require adaptation to work with other organs. Lai et al. [56] proposed a framework that leverages cellular automata to simulate tumor growth, invasion, and necrosis, enabling realistic synthetic tumor generation across multiple organs. Generative models have been effectively utilized in the medical field for tasks like image-to-image translation [69, 74, 79, 81], reconstruction [64, 91, 98], segmentation [11, 21, 53, 94], and image denoising [25]. Utilizing advanced generative models to synthesize various tumors is also a promising direction [27, 30, 95, 108]. Shin et al. [88] advanced detection by generating synthetic abnormal colon polyps using Conditional Adversarial Networks. Chen et al. [12] employed a diffusion model that capitalizes on similarities in early-stage tumor imaging for cross-organ tumor synthesis. Wu et al. [95] employ an adversarial discriminator to automatically filter out low-quality synthetic tumors and improve tumor synthesis. Guo et al. [27] incorporate ControlNet to use organ segmentation as an additional condition, guiding the generation of CT images with flexible volume dimensions and voxel spacing.

C. Implementation Details
C.1. Dataset Composition

| AbdomenAtlas 2.0 components | # of scans | annotated tumor (original) | annotators |
|---|---|---|---|
| Public CT in AbdomenAtlas 2.0 (AbdomenAtlas 1.1) | 9,901 | liver, pancreas, kidney, colon | human & AI |
| CHAOS [2018] [link] | 20 | - | human |
| BTCV [2015] [link] | 47 | - | human |
| Pancreas-CT [2015] [link] | 42 | - | human |
| CT-ORG [2020] [link] | 140 | - | human & AI |
| WORD [2021] [link] | 120 | - | human |
| LiTS [2019] [link] | 130 | liver | human |
| AMOS22 [2022] [link] | 200 | - | human & AI |
| KiTS [2023] [link] | 489 | kidney | human |
| AbdomenCT-1K [2021] [link] | 1,000 | - | human & AI |
| MSD-CT [2021] [link] | 945 | liver, pancreas, colon | human & AI |
| FLARE'23 [2022] [link] | 4,100 | - | human & AI |
| Abdominal Trauma Det [2023] [link] | 4,711 | - | - |
| Private CT in AbdomenAtlas 2.0 | 233 | liver, pancreas, kidney, colon, esophagus, uterus | human & AI |

Table 6. Dataset composition of AbdomenAtlas 2.0. AbdomenAtlas 2.0 comprises two components: CT scans from the public AbdomenAtlas 1.1 dataset and CT scans from a private source, totaling 10,135 tumor-annotated CT scans, with additional scans expected from various sources. Note that, for CT scans from the AbdomenAtlas 1.1 dataset, we fully annotate six tumor types for each CT scan.

C.2. Comparative Models

The code for the comparative models is implemented in Python using the MONAI and nnU-Net frameworks.

nnU-Net Framework. nnU-Net serves as a framework for the automatic configuration of AI-driven semantic segmentation pipelines. When presented with a new segmentation dataset, it extracts pertinent metadata from the training cases to automatically determine its hyperparameters. It has withstood the test of time and continues to deliver state-of-the-art results. nnU-Net effectively illustrates that meticulously configuring and validating segmentation pipelines across a diverse range of segmentation tasks can yield a remarkably powerful algorithm. We implement UNETR, Swin UNETR, nnU-Net, ResEncM, and STU-Net using the nnU-Net framework. The orientation of CT scans is adjusted to specific axcodes. Isotropic spacing is employed to resample each scan, achieving a uniform voxel size of 1.5 × 1.5 × 1.5 mm³. Additionally, the intensity in each scan is truncated to the range [-175, 250] and then linearly normalized to [0, 1]. During training, we crop random fixed-size 96 × 96 × 96 regions, selecting centers from either a foreground or background voxel according to a pre-defined ratio. The data augmentation during training adheres to the default strategies outlined in the nnU-Net framework. All models are trained for 1,000 epochs, with each epoch consisting of 250 iterations. We utilize the SGD optimizer with a base learning rate of 0.01, and the batch size is set to 2. During inference, we utilize test-time augmentation, following the default implementation in the nnU-Net framework, and use a sliding-window strategy with the overlapping area ratio set to 0.5.

MONAI Framework. MONAI (Medical Open Network for AI) is an open-source framework that supports AI in healthcare. Built on PyTorch, it offers a comprehensive set of tools for configuring, training, inferring, and deploying medical AI models. We implement SegResNet, Universal Model, and SuPreM utilizing the MONAI framework. Since different methods have varying hyperparameter settings, we trained and tested the models exactly according to the original hyperparameters specified in the corresponding papers.
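As a concrete reading of the preprocessing described above, the sketch below expresses the stated settings (isotropic 1.5 mm spacing, intensity clipping to [-175, 250] with linear normalization to [0, 1], random 96³ crops, and sliding-window inference with 0.5 overlap) as a MONAI transform chain. The RAS axcodes and the 1:1 foreground/background crop ratio are our assumptions; the text above only says "specific axcodes" and "a pre-defined ratio".

```python
from monai import transforms as T
from monai.inferers import SlidingWindowInferer

# Training-time preprocessing following the settings stated in C.2
preprocess = T.Compose([
    T.LoadImaged(keys=["image", "label"]),
    T.EnsureChannelFirstd(keys=["image", "label"]),
    T.Orientationd(keys=["image", "label"], axcodes="RAS"),   # assumed axcodes
    T.Spacingd(keys=["image", "label"], pixdim=(1.5, 1.5, 1.5),
               mode=("bilinear", "nearest")),                 # 1.5 mm isotropic
    T.ScaleIntensityRanged(keys=["image"], a_min=-175, a_max=250,
                           b_min=0.0, b_max=1.0, clip=True),  # HU clip + normalize
    T.RandCropByPosNegLabeld(keys=["image", "label"], label_key="label",
                             spatial_size=(96, 96, 96),
                             pos=1, neg=1, num_samples=2),    # assumed 1:1 fg/bg ratio
])

# Inference: sliding window over the full volume with 50% overlap
inferer = SlidingWindowInferer(roi_size=(96, 96, 96), sw_batch_size=1,
                               overlap=0.5)
```

D. Visual Real Examples in AbdomenAtlas 2.0

Figure 9. Visual examples of six tumor types annotated in AbdomenAtlas 2.0.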
D. Visual Real Examples in AbdomenAtlas 2.0
Figure 9. Visual examples of six tumor types annotated in AbdomenAtlas 2.0. AbdomenAtlas 2.0 features a diverse distribution across various tumor stages and sizes. These comprehensive, high-quality tumors, accompanied by per-voxel annotations, significantly improve the performance of AI models on both in-distribution and out-of-distribution data (Figure 10).
E. More Results: Best Lesson from Real Data
Figure 10. Best lesson from real data: results on in-distribution and out-of-distribution data. (a) While increasing data scale initially enhances in-distribution performance across various metrics (sensitivity, DSC, and NSD), it eventually plateaus. Notably, certain organ types, such as the liver and kidney, exhibit a decline in performance at the largest scales. (b) In contrast, the scaling trends observed on out-of-distribution datasets demonstrate consistent improvements on specific datasets (e.g., 3D-IRCADb, ULS-Pancreas) without reaching a plateau, indicating that larger data volumes may enhance generalizability. These results relate to the data-scaling lesson in §1 (a plateau near 1,500 scans when training with real data only). Larger datasets are needed for effective out-of-distribution generalizability.
F. More Results: Best Lesson for Generalizability
Figure 11. Best lesson for pancreatic tumors. Integrating real and synthetic data, compared to using real data alone, consistently improves generalizable performance in sensitivity, DSC, and NSD across various scenarios and data scales. These results underscore the benefits of this combination in enhancing the accuracy of pancreatic tumor analysis.
Figure 12. Best lesson for kidney tumors. Combining real and synthetic data consistently enhances generalizable performance in sensitivity, DSC, and NSD across various scenarios and data scales, highlighting its effectiveness in improving kidney tumor diagnosis accuracy.
Scaling Tumor Segmentation: Best Lessons from Real and Synthetic Data
Qi Chen1,2 Xinze Zhou1 Chen Liu1,3 Hao Chen4 Wenxuan Li1 Zekun Jiang5 Ziyan Huang6,7 Yuxuan Zhao8 Dexin Yu8 Junjun He7 Yefeng Zheng9 Ling Shao2 Alan Yuille1 Zongwei Zhou1,*
1Johns Hopkins University 2UCAS-Terminus AI Lab 3Hong Kong Polytechnic University 4 5Sichuan University 6Shanghai Jiao Tong University 7Shanghai AI Laboratory 8Qilu Hospital of Shandong University 9Westlake University
Code, Model & Data: https://github.com/BodyMaps/AndomenAtlas2.0

Abstract
AI for tumor segmentation is limited by the lack of large, voxel-wise annotated datasets, which are hard to create and require medical experts. In our proprietary JHH dataset of 3,000 annotated pancreatic tumor scans, we found that AI performance stopped improving after 1,500 scans. With synthetic data, we reached the same performance using only 500 real scans. This finding suggests that synthetic data can steepen data scaling laws, enabling more efficient model training than real data alone. Motivated by these lessons, we created AbdomenAtlas 2.0, a dataset of 10,135 CT scans with a total of 15,130 tumor instances per-voxel manually annotated in six organs (pancreas, liver, kidney, colon, esophagus, and uterus) and 5,893 control scans. Annotated by 23 expert radiologists, it is several orders of magnitude larger than existing public tumor datasets. While we continue expanding the dataset, the current version of AbdomenAtlas 2.0 already provides a strong foundation, based on lessons from the JHH dataset, for training AI to segment tumors in six organs. It achieves notable improvements over public datasets, with a +7% DSC gain on in-distribution tests and +16% on out-of-distribution tests.

1. Introduction
Developing AI models for tumor segmentation is fundamentally challenged by the scarcity of large, annotated datasets, owing to the immense time and expertise required for per-voxel annotation [70, 103, 107].
*Correspondence to Zongwei Zhou ( )
Figure 1. Data scaling laws study. Experimental results on the proprietary dataset demonstrate that increasing the scale of real data improves segmentation (gray curve). Notably, supplementing the dataset with an additional 3× synthetic data (red curve) can further enhance the results, revealing the potential of a larger public dataset to advance tumor research.
Inspired by scaling laws [20, 51, 78, 85], to estimate the impact of data scale on tumor segmentation performance, we first leveraged a proprietary dataset of 3,000 pancreatic tumor scans, per-voxel annotated over five years by expert radiologists and verified by pathology reports. Our previous work [61, 96] showed that this dataset enabled AI to reach radiologist-level detection accuracy. However, as shown in Figure 1, performance gains plateaued after 1,500 scans, suggesting diminishing returns from adding more real data. Recognizing that annotating 1,500 scans is still a considerable undertaking for a single tumor type, we explored the potential of synthetic data [13, 28, 40, 58, 63, 73] to further advance this plateau. By adding synthetic tumors, three times the number of real tumors, we achieved similar or
better performance with only 500 real tumor scans. This reduces annotation needs by a large margin and shows that synthetic data can accelerate learning, effectively steepening the scaling curve more than real data alone.
Figure 2. Overview of the AbdomenAtlas 2.0 dataset. For each CT scan, AbdomenAtlas 2.0 provides precise and high-quality annotations following a well-designed AI-driven annotation pipeline. Compared to existing datasets, AbdomenAtlas 2.0 collects large-scale CT scans from diverse clinical sources, encompassing a wide range of tumor types (i.e., liver, pancreas, kidney, colon, esophageal, and uterine tumors) and comprehensive tumor sizes. This extensive scale makes it the largest human-annotated tumor mask dataset.
The lesson on the proprietary dataset helps estimate how many annotated tumor scans are needed to train effective AI models, e.g., matching radiologist performance. Considering that pancreatic tumors are especially hard to detect on CT, with 80% detected only at late stages [38, 62], we hypothesize that if 1,500 real scans, or 500 with synthetic data, are enough for pancreatic tumors, the same or fewer might work for other organs. Based on this idea, our first contribution is to create a publicly available dataset with 500-1,500 per-voxel annotated CT scans for tumors in six organs: pancreas, liver, kidney, colon, esophagus, and uterus. This is also the first public dataset that offers per-voxel annotations for esophageal and uterine tumors. We name this six-tumor dataset AbdomenAtlas 2.0, which comprises 4,242 CT scans with per-voxel annotations of 15,130 benign/malignant tumor instances and 5,893 normal scans as control (§3). Importantly, it includes many early-stage tumors (<20 mm): 5,709 in liver, 850 in pancreas, 4,638 in kidney, 29 in colon, 17 in esophagus, and 39 in uterus; such cases are rare and hard to collect.
While AbdomenAtlas 2.0 is much larger than existing public tumor datasets combined [6, 35, 49, 70], the 500-1,500 scans per tumor type are still insufficient for building robust AI across diverse data sources. This limitation is clear in our data-scaling analysis (Figure 1), where performance plateaued only on in-distribution tests. For out-of-distribution data (CT scans from different centers), performance kept improving up to 3,000 scans, suggesting that broader diversity is critical for generalization. However, scaling to that level is costly: annotating just 500-1,500 scans per tumor type required 23 radiologists and several months of effort. Selecting the most valuable scans to annotate is also challenging, since out-of-distribution data are unknown in advance.
To address this, our second contribution is to scale data and annotations through DiffTumor to produce different types of tumors (Figure 5). The data-scaling analysis (Figure 1) suggested that training AI on synthetic tumors can significantly enhance in-distribution test performance. More importantly, since collecting normal scans is much easier than acquiring and annotating tumor scans, synthetic tumors can be added to normal scans from a range of out-of-distribution sources, bypassing the need for manual per-voxel annotation. These synthetic tumors are automatically paired with per-voxel annotations because they are generated together with their masks. Training AI on these normal scans augmented by synthetic tumors can greatly improve performance in out-of-distribution tests (Figure 7).
In summary, we bring data-scaling lessons from both real and synthetic data on a large proprietary dataset to develop AbdomenAtlas 2.0, achieving two key advancements for six-tumor segmentation, specifically,
1. Scaling real and synthetic data enhances performance in abdominal tumor segmentation.
We rank first in the MSD challenge, leading to substantial performance improvement. We also achieve the highest performance on the validation sets of our AbdomenAtlas 2.0 dataset, improving DSC scores by +5%, +9%, +3%, +4%, +7%, and +2% for segmenting tumors in the liver, pancreas, kidney, colon, esophagus, and uterus, respectively, compared to the runner-up algorithms (§3.3, Tables 2-3).
2. Scaling real and synthetic data enhances generalizable performance in abdominal tumor segmentation without additional tuning and adaptation. AbdomenAtlas 2.0 significantly outperforms the runner-up algorithms by +14% DSC on four external datasets (§3.3, Table 4).

Dataset | release | # scans | # slices (K) | # tumors | tumor in | # hospitals | # countries‡ | annotators
LiTS [6] [link] | 2019 | 131 | 58.6 | 853 | liver | 7 | DE, NL, CA, FR, IL | human
MSD-Colon [2] [link] | 2021 | 126 | 13.5 | 131 | colon | 1 | US | human & AI
MSD-Pancreas [2] [link] | 2021 | 281 | 26.7 | 283 | pancreas | 1 | US | human & AI
FLARE23 [2] [link] | 2022 | 2,200 | 629.1 | 1,511 | unknown† | 30 | N/A | human & AI
KiTS [36] [link] | 2023 | 489 | 250.9 | 568 | kidney | 1 | US | human
ULS-Liver [18] [link] | 2023 | 49 | 6.3 | 49 | liver | 1 | - | human
ULS-Pancreas [18] [link] | 2023 | 120 | 15.4 | 120 | pancreas | 1 | NL | human
ULS-Kidney [18] [link] | 2023 | 50 | 6.4 | 50 | kidney | 1 | N/A | human
AbdomenAtlas 2.0 (ours) | 2025 | 10,135 | 4,700 | 15,130 | liver, pancreas, kidneys, colon, esophagus, uterus | 89 | MT, IE, BR, BA, AUS, TH, CA, TR, CL, ES, MA, US, DE, NL, FR, IL, CN | human
† Tumors labeled in the FLARE23 dataset fall under a general 'Tumor' category without specific tumor type information.
‡ US: United States, DE: Germany, NL: Netherlands, CA: Canada, FR: France, IL: Israel, IE: Ireland, BR: Brazil, BA: Bosnia and Herzegovina, CN: China, TR: Turkey, CH: Switzerland, AUS: Australia, TH: Thailand, CL: Chile, ES: Spain, MA: Morocco, and MT: Malta.
Table 1. Dataset comparison. We compare AbdomenAtlas 2.0 against existing abdominal tumor segmentation datasets, including those with and without tumor labels. AbdomenAtlas 2.0 outperforms these datasets in terms of scale and diversity.

2. Related Work
Large-scale Annotated Tumor Datasets are scarce due to the limited availability of scan data and the substantial costs of obtaining per-voxel annotations. Despite these hurdles, datasets such as DeepLesion [99], AutoPET [23], PANORAMA [1], FLARE [70], and MSD [3] serve as significant efforts to mitigate this limitation. A detailed comparison of related datasets is provided in Table 1. AbdomenAtlas 2.0 comprises more than 10,000 CT scans with voxel-level annotations across six abdominal tumor types. Notably, AbdomenAtlas 2.0 features esophageal and uterine tumor scans, which have not been previously available in public datasets.
Neural Scaling Laws establish the power-law relationships that correlate model performance with key scaling factors such as model size, dataset volume, and computational resources. They were initially discovered in the domain of language models by Kaplan et al. [51] and have since been observed in generative visual modeling [37, 80] and multi-modality modeling [47]. This trend of scaling underpins the recent achievements of foundation models [78, 85], emphasizing how scaling up systematically boosts model generalization and effectiveness across various tasks. However, for tumor analysis and synthetic data, scaling laws remain underexplored due to the limited availability of annotated tumor data.
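For intuition, a saturating power law of the form DSC(n) = a - b·n^(-c) can be fit to (dataset size, performance) pairs to test for the scaling behavior discussed here. The sketch below is illustrative only, reusing the in-distribution DSC values later reported in Table 5; the functional form and initial guesses are our assumptions, not the paper's fitting procedure.

```python
# Fit a saturating power law DSC(n) = a - b * n**(-c) to
# (num_scans, DSC) pairs taken from Table 5 (in-distribution, real data).
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, b, c):
    return a - b * n ** (-c)

n = np.array([60, 120, 278, 435, 750, 1125, 1500, 3159], dtype=float)
dsc = np.array([40.2, 51.0, 52.7, 54.4, 55.4, 56.1, 59.3, 59.7])

params, _ = curve_fit(power_law, n, dsc, p0=(60.0, 100.0, 0.5), maxfev=10000)
a, b, c = params
print(f"asymptote a = {a:.1f} DSC, exponent c = {c:.2f}")
# The fitted asymptote estimates the plateau that more in-distribution
# real data cannot push past (cf. Figure 1 and Section 4.2).
```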
Leveraging our new, large-scale tumor dataset, we investigate whether similar data scaling laws exist in tumor segmentation and whether appropriate data scaling can yield a robust segmentation model capable of generalizing to detect and segment tumors from CT scans, encompassing a broad spectrum of patient demographics, imaging protocols, and healthcare facilities.

3. AbdomenAtlas 2.0
3.1. Dataset Construction
Accurate annotations are the foundation of high-quality medical datasets. However, conventional per-voxel labeling is labor-intensive. Annotating each scan typically takes 4-5 minutes, while scans with extensive tumors may take up to 40 minutes [7, 70]. In addition, precisely delineating tumor boundaries takes substantial time and requires the specialized expertise of highly trained radiologists, making it impractical to scale annotations to datasets with 10,000 or more scans. To address this bottleneck, we establish a semi-automated annotation pipeline for CT scans that significantly reduces the manual workload and requires only minimal revision time from radiologists.
SMART-Annotator Procedure. Annotating missed tumors from scratch takes much longer than removing AI-generated false positives. Therefore, our annotation pipeline is designed to prioritize minimizing under-segmentation errors, thereby reducing the typical annotation time from 5 minutes per scan to less than 5 seconds on average, while maintaining high accuracy. The proposed pipeline, named SMART-Annotator, stands for Segmentation Model-Assisted Rapid Tumor Annotator. As depicted in Figure 3, it consists of the following four key stages:
Figure 3. Overview of the SMART-Annotator. Towards annotating a large-scale tumor dataset, developing our SMART-Annotator involves four stages. (1) Train a Segmentation Model using public datasets to provide tumor segmentation logits across AbdomenAtlas 2.0. (2) Analyze the FROC curve and select a threshold that enhances sensitivity to minimize missed tumors while maintaining an acceptable specificity score. (3) Remove false positives from the adjusted predictions (senior radiologists). (4) Revise the final annotations to obtain ground truth (junior radiologists).
Stage 1: Model Preparation. For each tumor, we separately train a Segmentation Model (denoted as f(·)) using publicly available datasets. The tumor-specific f(·) is optimized for tumor segmentation and detection tasks.
Stage 2: FROC Curve Analysis. To determine the optimal threshold, we construct the Free-response ROC (FROC) curve by equipping f(·, θ) with a set of threshold values θ, obtaining the trade-off map (shown by the purple shaded region in Figure 3) between sensitivity and false-positive rate. Experimental results on tumor analysis in CT scans reveal that a lower θ* maximizes sensitivity while maintaining an acceptable false-positive rate.
Stage 3: Tumor Candidate Generation. For CT scans requiring annotation, we apply the tumor-specific model f(·, θ*) to perform voxel-wise analysis. This process generates preliminary tumor segmentation candidates while identifying potential tumor regions that need further refinement and validation. Since these potential regions are typically challenging, senior radiologists then conduct a review to confirm true positives and eliminate false positives.
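Stages 2-3 reduce to a simple rule: sweep candidate thresholds, keep those whose tumor-level sensitivity stays at or above the target, and among them pick the one with the fewest false positives per scan. A minimal sketch, assuming per-threshold FROC operating points have already been computed (the numbers below are illustrative, not measured values):

```python
# Pick the operating threshold theta* from precomputed FROC operating
# points: among thresholds meeting the target sensitivity, take the
# one with the fewest false positives per scan. Values are illustrative.
import numpy as np

thresholds  = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6])
sensitivity = np.array([0.97, 0.95, 0.92, 0.88, 0.82, 0.75])  # tumor-level
fp_per_scan = np.array([4.8, 3.1, 1.9, 1.2, 0.8, 0.5])

target = 0.90                                   # over 90% sensitivity
ok = sensitivity >= target                      # feasible operating points
best = np.argmin(np.where(ok, fp_per_scan, np.inf))
theta_star = thresholds[best]
print(f"theta* = {theta_star}: sens = {sensitivity[best]:.2f}, "
      f"FP/scan = {fp_per_scan[best]:.1f}")
```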
Stage 4: Annotation Revision. The reviewed tumor segmentation candidates undergo further refinement by junior radiologists, who annotate missed tumors and adjust mask boundaries to ensure accurate and precise tumor annotations. The final revised annotations are thoroughly reviewed by senior radiologists to guarantee high-quality ground truth.
Annotation Accuracy Analysis. For each specific organ, our pipeline adaptively adjusts the threshold θ* based on the FROC curve to ensure over 90% sensitivity. A common concern is whether such high sensitivity might result in a significant number of false positives. To answer this, we validate SMART-Annotator on three public datasets and find that the pipeline maintains manageable false-positive rates, with an average of 1.2 false positives per scan for pancreatic tumors, 2 for liver tumors, and 2.4 for kidney tumors. These results highlight the effectiveness of our AI-driven approach in tumor detection. By pre-identifying tumors with pseudo-annotations, radiologists can quickly verify true positives, correct false positives, and, if necessary, provide additional annotations for false negatives, thereby efficiently annotating tumor scans in AbdomenAtlas 2.0.
Annotation Efficiency Analysis. AbdomenAtlas 2.0 incorporates proprietary esophagus and uterus scans alongside unannotated data from 12 publicly available sources. Our approach applies the SMART-Annotator pipeline to all scans. Given that full manual annotation typically requires 5 minutes per scan, whereas annotation with SMART-Annotator takes only about 5 seconds (1/12 of a minute), this AI-driven approach substantially alleviates the annotation workload, conserving approximately 10,135 × (5 - 1/12) ≈ 49,826 minutes of valuable radiologist time for annotating the entire AbdomenAtlas 2.0 collection. Assuming a radiologist works 10 hours per day, this corresponds to 83 workdays saved.
Figure 4. Dataset statistics analysis on the distributions of (a) different tumor proportions, (b) tumor radius, and (c) different tumor sizes categorized as tiny, small, medium, and large.
3.2. Dataset Statistical Analysis
AbdomenAtlas 2.0 is the largest public, human-annotated tumor segmentation dataset, covering six tumor types (Figure 2). It improves on existing datasets in five ways:
1. Large-Scale CT Coverage. AbdomenAtlas 2.0 includes 10,136 fully annotated CT scans, totaling over 4.7 million slices. It provides labels for 31 anatomical structures, including 25 organs and 6 tumor types. The data comes from 89 hospitals, ensuring diverse patient populations and clinical conditions.
2. Diverse Tumor Types. Most public datasets focus on a single tumor type (Table 1). In contrast, AbdomenAtlas 2.0 includes liver, pancreas, kidney, colon, esophageal, and uterine tumors. It is the first public dataset with voxel-wise annotations for esophageal and uterine tumors, supporting research on rare and underrepresented cancers (Figure 4a).
3. Wide Tumor Size Range. Tumor sizes in AbdomenAtlas 2.0 range from 0 to 100 mm. We group them into four categories: tiny (r ≤ 5 mm), small (5 < r ≤ 10 mm), medium (10 < r ≤ 20 mm), and large (r > 20 mm); see the sketch after this list. AbdomenAtlas 2.0 provides a balanced size distribution across all tumor types (Figure 4b-c), enabling robust and scalable model training.
4. Abundant Tumor Masks. The dataset contains 10,260 annotated tumor masks across six tumor types and all size groups. It surpasses existing datasets such as LiTS and KiTS in both scale and tumor diversity (Figure 2b, Table 1).
5. High-Quality Annotations. Our annotation pipeline adopts a multi-stage review process (see Figure 3), integrating AI algorithms with human expertise to enhance efficiency while maintaining high annotation quality. All images and annotations undergo rigorous quality control. This process iteratively refined the annotations until no further major revisions were necessary.
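The size grouping in point 3 above is a pure function of the tumor radius. A one-line sketch; treating the boundaries as half-open intervals is our assumption (the original ranges are printed with overlapping endpoints):

```python
# Categorize a tumor by radius in mm, following the size groups in
# Section 3.2 (half-open interval boundaries are our assumption).
def size_category(radius_mm: float) -> str:
    if radius_mm <= 5:
        return "tiny"
    elif radius_mm <= 10:
        return "small"
    elif radius_mm <= 20:
        return "medium"
    return "large"

assert size_category(4.2) == "tiny" and size_category(23.0) == "large"
```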
3.3. Advantages of AbdomenAtlas 2.0
Strong performance on in-distribution data. We report detailed comparisons on the official test set of the Medical Segmentation Decathlon (MSD) leaderboard in Table 2. As can be seen, with AbdomenAtlas 2.0, we significantly surpass the previously leading Universal Model [66] (denoted as Uni. Model in Table 2) and achieve the #1 performance on the leaderboard, underscoring the superiority of AbdomenAtlas 2.0 in the task of medical segmentation.

Method | Task03 Liver DSC | NSD | Task07 Pancreas DSC | NSD
Kim et al. [54] | 73.0 | 88.6 | 51.8 | 73.1
C2FNAS [102] | 72.9 | 89.2 | 54.4 | 75.6
Trans VW [29] | 76.9±20.0 | 92.0±16.8 | 51.1±32.8 | 70.1±37.4
Models Gen. [106] | 77.5±20.4 | 91.9±17.9 | 50.4±32.6 | 70.0±37.2
nnU-Net [44] | 76.0±22.1 | 90.7±18.3 | 52.8±33.0 | 71.5±36.6
DiNTS [33] | 74.6±21.3 | 91.0±17.3 | 55.4±29.8 | 75.9±32.0
Swin UNETR [92] | 75.7±20.4 | 91.6±16.8 | 58.2±28.6 | 79.1±29.7
Uni. Model [66] | 79.4±17.0 | 93.4±15.2 | 62.3±26.6 | 82.9±27.2
AbdomenAtlas 2.0 | 82.6±11.0 | 96.9±6.4 | 67.2±24.7 | 86.0±25.2
∆ | +3.2 | +3.5 | +4.9 | +3.1
Table 2. Leaderboard performance on MSD Challenge. The results are assessed on the MSD official server using the MSD competition test dataset. All DSC and NSD metrics are sourced from the MSD leaderboard. The outcomes for the remaining tasks were produced by the Universal Model [66, 67].

To comprehensively evaluate the six tumor types in AbdomenAtlas 2.0, we train ResEncM [45] with the annotated tumor data in AbdomenAtlas 2.0 and compare it with state-of-the-art segmentation models in the medical field (i.e., UNETR [32], Swin UNETR [92], nnU-Net [44], ResEncM [45], and STU-Net-B [43]) that are trained with publicly available tumor datasets. The evaluations are conducted on the validation set of AbdomenAtlas 2.0 and reported in Table 3. As can be seen, training ResEncM with AbdomenAtlas 2.0 (denoted as AbdomenAtlas 2.0) consistently improves the performance and outperforms the state of the art across all tumor segmentation tasks. Compared with the second-ranked STU-Net-B, AbdomenAtlas 2.0 achieves a remarkable DSC improvement of 7.3% on esophageal tumors and 4.9% on liver tumors, respectively. These results demonstrate the superiority of AbdomenAtlas 2.0 in delivering high-quality tumor data for model training compared to existing datasets, contributing to alleviating the data scarcity issue in tumor segmentation.
Better generalization for out-of-distribution data. A critical requirement for medical AI models is their ability to generalize across diverse, out-of-distribution (OOD) data from multiple hospitals, rather than being optimized solely for a single, in-distribution dataset.

Liver Tumor | Pancreatic Tumor | Kidney Tumor
Method | Param | Sen. | DSC | NSD | Sen. | DSC | NSD | Sen. | DSC | NSD
UNETR [32] | 101.8M | 77.1 (102/131) | 55.6 | 53.7 | 66.7 (102/131) | 31.1 | 27.2 | 95.8 (102/131) | 67.2 | 55.7
Swin UNETR [92] | 72.8M | 76.6 (102/131) | 66.8 | 68.4 | 81.5 (102/131) | 44.7 | 43.8 | 95.8 (102/131) | 72.3 | 67.7
nnU-Net [44] | 31.1M | 80.3 (102/131) | 71.7 | 74.6 | 81.5 (102/131) | 56.7 | 54.3 | 100 (102/131) | 84.8 | 80.7
ResEncM [45] | 63.1M | 89.1 (102/131) | 71.9 | 74.7 | 84.0 (102/131) | 57.0 | 54.6 | 100 (102/131) | 84.8 | 81.1
STU-Net-B [43] | 58.3M | 79.3 (102/131) | 72.6 | 74.9 | 85.2 (102/131) | 56.1 | 54.4 | 100 (102/131) | 82.4 | 77.6
AbdomenAtlas 2.0 | 63.1M | 83.7 (102/131) | 77.5 | 81.0 | 96.0 (102/131) | 65.8 | 64.7 | 100 (102/131) | 87.9 | 84.4
∆ | | -5.4 | +4.9 | +6.1 | +10.8 | +8.8 | +10.1 | +0.0 | +3.1 | +3.3
Colon Tumor | Esophagus Tumor | Uterus Tumor
Method | Param | Sen. | DSC | NSD | Sen. | DSC | NSD | Sen. | DSC | NSD
UNETR [32] | 101.8M | 69.2 (102/131) | 27.8 | 29.2 | 92.3 (102/131) | 42.3 | 44.1 | 95.8 (102/131) | 69.9 | 60.7
Swin UNETR [92] | 72.8M | 65.4 (102/131) | 36.8 | 39.4 | 84.6 (102/131) | 48.2 | 49.0 | 95.8 (102/131) | 73.8 | 65.0
nnU-Net [44] | 31.3M | 65.4 (102/131) | 42.8 | 43.7 | 92.3 (102/131) | 52.7 | 53.2 | 95.8 (102/131) | 78.5 | 70.2
ResEncM [45] | 63.1M | 65.4 (102/131) | 43.8 | 45.9 | 84.6 (102/131) | 53.3 | 51.9 | 95.8 (102/131) | 78.7 | 68.4
STU-Net-B [43] | 58.3M | 73.1 (102/131) | 47.1 | 48.7 | 88.5 (102/131) | 53.9 | 54.1 | 95.8 (102/131) | 78.2 | 68.8
AbdomenAtlas 2.0 | 63.1M | 96.2 (102/131) | 50.7 | 47.6 | 96.2 (102/131) | 61.2 | 61.7 | 95.8 (102/131) | 80.1 | 70.3
∆ | | +23.1 | +3.6 | -1.1 | +3.9 | +7.3 | +7.6 | +0.0 | +1.4 | +0.1
Table 3. Strong performance for in-distribution data: Results on AbdomenAtlas 2.0. We compare AbdomenAtlas 2.0 with common AI algorithms, using the validation sets from AbdomenAtlas 2.0. AbdomenAtlas 2.0 demonstrates superior tumor segmentation performance overall, showing significant improvements in segmenting liver tumors (+4.9%), pancreatic tumors (+8.8%), kidney tumors (+3.1%), colon tumors (+3.6%), esophagus tumors (+7.3%), and uterus tumors (+1.4%).

As shown in Table 1, AbdomenAtlas 2.0 provides a considerably more diverse collection of CT scans from 89 hospitals across 18 countries. To verify the generalizability offered by AbdomenAtlas 2.0, we further conduct evaluations on four external datasets: 3D-IRCADb [90], PANORAMA [1], Kipa [34], and a proprietary JHH dataset [96], none of which are included in the training phase. We train ResEncM [45] with the annotated tumor data in AbdomenAtlas 2.0 and compare it with the following state-of-the-art medical image segmentation models: UNETR [32], Swin UNETR [92], nnU-Net [44], ResEncM [45], STU-Net [43], SegResNet [76], Universal Model [66], and SuPreM [59]. As shown in Table 4, our model significantly outperforms previous methods on all external datasets, achieving a notable DSC improvement of 14.0% and an NSD improvement of 17.0% on the 3D-IRCADb dataset.

4. Scaling Laws in Tumor Segmentation
In this section, we explore the existence of data scaling laws in tumor segmentation and assess whether appropriate data scaling can yield a robust segmentation model. This segmentation model should be generalizable to detect and segment tumors from CT scans, handling a broad spectrum of patient demographics, imaging protocols, and healthcare facilities. Specifically, we first examine the impact of increasing the number of annotated real-tumor scans on in-distribution performance. Then we analyze how the scale of annotated real-tumor data influences the model's ability to generalize to out-of-distribution tumor data.
Figure 5. Tumor size and feature distribution of real vs. synthetic tumors. (a) Tumor size distribution across liver, pancreatic, and kidney tumors from real and synthetic data. (b-d) Feature distributions of liver, pancreatic, and kidney tumors. We extract features using a pretrained encoder [85] and visualize them with t-SNE to compare synthetic tumors with real ones.

3D-IRCADb [90] - Liver Tumor | PANORAMA [1] - Pancreatic Tumor | Kipa [34] - Kidney Tumor | JHH - Pancreatic Tumor
Method | Sen. | DSC | NSD | Sen. | DSC | NSD | Sen. | DSC | NSD | Sen. | DSC | NSD
UNETR [32] | 74.4 (87/117) | 50.1 | 46.8 | 58.8 (77/131) | 21.4 | 18.0 | 70.8 (51/72) | 43.1 | 35.8 | 51.4 (152/296) | 13.0 | 9.0
Swin UNETR [92] | 76.9 (90/117) | 57.9 | 53.7 | 69.5 (91/131) | 34.0 | 30.9 | 81.9 (59/72) | 64.3 | 56.6 | 71.3 (211/296) | 31.9 | 21.9
nnU-Net [44] | 77.8 (91/117) | 65.1 | 62.2 | 75.6 (99/131) | 42.4 | 38.6 | 80.6 (58/72) | 64.3 | 58.9 | 69.9 (207/296) | 34.1 | 24.7
ResEncM [45] | 76.9 (90/117) | 57.6 | 53.3 | 61.1 (80/131) | 33.5 | 30.0 | 90.2 (65/72) | 76.4 | 77.0 | 68.6 (203/296) | 34.8 | 26.5
STU-Net [43] | 78.6 (92/117) | 67.1 | 64.5 | 74.0 (97/131) | 42.7 | 40.3 | 55.6 (40/72) | 71.2 | 70.4 | 68.9 (204/296) | 34.1 | 24.7
SegResNet [76] | 65.0 (76/117) | 54.6 | 51.3 | 84.0 (110/131) | 43.0 | 40.3 | 94.4 (68/72) | 73.6 | 70.0 | 77.7 (211/296) | 39.5 | 31.1
Universal Model [66] | 86.3 (101/117) | 62.8 | 57.4 | 77.9 (102/131) | 37.0 | 33.9 | 97.2 (67/72) | 47.8 | 37.1 | 78.4 (232/296) | 32.6 | 27.1
SuPreM [59] | 58.1 (68/117) | 50.2 | 47.8 | 67.9 (89/131) | 30.5 | 28.0 | 84.7 (61/72) | 42.3 | 36.0 | 63.2 (187/296) | 24.7 | 19.8
AbdomenAtlas 2.0 | 86.3 (101/117) | 81.1 | 81.5 | 94.6 (124/131) | 55.3 | 52.2 | 97.2 (70/72) | 83.6 | 83.0 | 80.7 (239/296) | 45.1 | 35.7
∆ | +0.0 | +14.0 | +17.0 | +10.6 | +12.3 | +11.9 | +0.0 | +7.2 | +6.0 | +2.3 | +5.6 | +4.6
Table 4. Better generalizability for out-of-distribution data: Results on external datasets. We evaluate AbdomenAtlas 2.0 and 8 other models on data from three publicly available and one private external source without additional fine-tuning or domain adaptation. Compared to dataset-specific models, AbdomenAtlas 2.0 demonstrates greater robustness when handling CT scans obtained from a variety of scanners, protocols, and institutes.

4.1. Experimental Setup
We evaluate the scaling behavior with two data setups: (1) only real-tumor scans, and (2) a combination of synthetic and real tumor scans. Since small tumors are rare in public datasets but crucial for clinical applications, we employ DiffTumor [12] to generate synthetic tumors, with a ratio of 4:2:1 for small, medium, and large tumors, respectively. The total number of synthetic tumor scans generated is three times that of AbdomenAtlas 2.0. The distribution of tumor sizes and the combined data distribution are illustrated in Figure 5. We combine the generated tumors with different scales of the AbdomenAtlas 2.0 training set to train the supervised ResEncM [45]. The evaluation is conducted with segmentation metrics (i.e., DSC, NSD) and detection metrics (i.e., tumor-level and patient-level sensitivity), using the validation set of AbdomenAtlas 2.0 and six external datasets (3D-IRCADb, ULS-Liver, ULS-Pancreas, PANORAMA, Kipa, and the JHH dataset).
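The 4:2:1 small:medium:large ratio in the setup above amounts to weighted sampling over size groups. A minimal sketch, assuming the target count is known in advance; the function name and seed are illustrative, not part of the published pipeline:

```python
# Draw target size groups for synthetic tumors at the 4:2:1
# small:medium:large ratio described in Section 4.1 (sketch only).
import random

def sample_size_groups(num_tumors: int, seed: int = 0) -> list[str]:
    rng = random.Random(seed)
    groups, weights = ["small", "medium", "large"], [4, 2, 1]
    return rng.choices(groups, weights=weights, k=num_tumors)

targets = sample_size_groups(7000)
print({g: targets.count(g) for g in ("small", "medium", "large")})
# Each target group would then condition the generator (DiffTumor [12])
# to synthesize a lesion of roughly that size into a normal scan.
```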
4.2. Plateau in In-Distribution Evaluation
We report the in-distribution segmentation performance in Figure 6 and include the detection metrics in Appendix E. Our analysis of tumor segmentation scaling behavior reveals a clear trend in in-distribution performance: as the number of annotated real-tumor scans increases, the in-distribution performance gains gradually saturate. As illustrated by the gray lines in Figure 6, in-distribution performance initially improves with increasing data but eventually reaches a plateau across all three tumor types. This saturation indicates diminishing returns: adding more real tumor data yields progressively smaller performance gains. However, combining a certain amount of synthetic tumors with real data during training helps to accelerate this in-distribution saturation process. As shown by the red lines in Figure 6, with the participation of synthetic data, saturation can be reached with only 40% to 60% of the annotated real-tumor scans, indicating that synthetic data effectively expedites the model's convergence to its optimal performance within a given domain.
Figure 6. Scaling data shows performance plateau in in-distribution evaluation. We conduct a scaling study using the AbdomenAtlas 2.0 and JHH datasets as real tumor data and evaluate performance on their corresponding validation sets. While scaling up the dataset initially enhances in-distribution performance, it eventually plateaus. These results align with the data-scaling lesson in §1. By supplementing real tumor data with well-designed synthetic data, we only need to collect and annotate a small amount of real data. This approach is especially beneficial for scenarios where data is scarce and annotation is costly, enabling high-accuracy segmentation with reduced effort.
This finding shows that we can achieve strong segmentation performance without collecting large amounts of real data. By supplementing real tumor data with well-designed synthetic data, we can significantly reduce the effort of costly real-data annotation while maintaining strong in-distribution segmentation accuracy. This lesson demonstrates the tangible benefits of introducing synthetic data into the training process and is particularly valuable for scenarios where real-data acquisition is costly or limited.
Figure 7. Scaling data leads to greater generalizability. We conduct a scaling study using liver, pancreatic, and kidney tumors from the Cancerverse dataset as real tumor data and evaluate performance on six external datasets. Unlike in-distribution performance, which plateaus with more data, OOD generalization continues to improve with the addition of real tumor data. Notably, the integration of synthetic data further improves generalizability, with models trained on both real and synthetic scans consistently outperforming those using only real data. These findings underscore the critical importance of data diversity in enhancing model robustness across diverse imaging conditions.
4.3. Scaling Data Leads to Greater Generalizability
Figure 7 reports out-of-distribution (OOD) segmentation performance. As indicated by the gray and red lines, OOD accuracy consistently rises as the dataset expands. The impact of data scaling on out-of-distribution performance follows a consistently positive trend: as the amount of real tumor data increases, OOD performance continues to improve without signs of saturation, even after exhausting the entire Cancerverse dataset. We include more OOD results in Appendix F. In contrast with the in-distribution performance, which tends to saturate with increasing data, this finding reveals that OOD generalization continues to benefit from additional real tumor data without exhibiting diminishing returns. This non-diminishing trend remains evident even when synthetic data is incorporated into the training process. Furthermore, models trained with both real and synthetic tumor scans consistently outperform those trained with real data alone.
These findings underscore the critical role of data diversity in enhancing OOD generalization, showing that a carefully curated combination of real and synthetic data strengthens model robustness across diverse imaging settings.

5. Conclusion
This study examined not just whether more data helps tumor segmentation, but what kind of data is most valuable. Using AbdomenAtlas 2.0, a large, expert-annotated dataset, and systematic scaling experiments, we learned three key lessons. First, performance on in-distribution data plateaus early. On internal datasets like JHH, segmentation accuracy stops improving after about 1,500 real scans. This means adding more similar data yields limited benefit once a moderate threshold is reached. Second, synthetic tumors significantly reduce the need for manual annotation. By using synthetic lesions generated with DiffTumor, we can reach the same performance with only 500 real scans, cutting annotation effort by 70%. Synthetic data improves data efficiency and accelerates model convergence. Third, out-of-distribution generalization continues to improve with data diversity. Unlike the in-distribution case, performance on external datasets keeps increasing even after 1,500 scans and sees additional gains when synthetic tumors are added. This shows that model robustness depends more on data diversity than on quantity alone.
These lessons have important implications. Future expansion of AbdomenAtlas 2.0 should focus on including scans from different hospitals and imaging protocols. This will help the model perform better on new and unfamiliar data. For underrepresented tumor types like esophageal and uterine cancers, a few hundred well-selected scans combined with synthetic data can be enough to build useful models. AbdomenAtlas 2.0 also makes it possible to benchmark various data scaling and annotation strategies that were previously limited by small dataset sizes. The SMART-Annotator pipeline further shows how AI-assisted pre-labeling can reduce radiologist time from minutes to seconds per scan without sacrificing accuracy, especially when combined with synthetic tumor generation.
There are several limitations worth noting. First, the performance plateau observed at 1,500 scans applies only to pancreatic tumors in abdominal CT and to the ResEncM model. It is unclear whether this threshold holds for other organs, especially those with more complex or subtle tumor appearances. Future studies should examine how data scaling behaves across different tumor types, imaging modalities, and model architectures to see if similar saturation points occur. Second, although synthetic tumors improve performance, their anatomical realism, particularly for infiltrative, necrotic, or early-stage lesions, has not been fully verified by expert review or radiomic analysis. Ensuring clinically realistic synthesis remains a key challenge for building trust and interpretability.
Acknowledgments. This work was supported by the Lustgarten Foundation for Pancreatic Cancer Research and the National Institutes of Health (NIH) under Award Number R01EB037669. We would like to thank the Johns Hopkins Research IT team in IT@JH for their support and infrastructure resources where some of these analyses were conducted, especially DISCOVERY HPC.

References
[1] Nuno Alves, Maarten Schuurmans, Darius Rutkowski, D. Yakar, Ingfrid Haldorsen, Marianne Liedenbaum, Anders Molven, Paolo Vendittelli, Geert Litjens, Johan Hermans, and Henk Huisman.
The panorama study protocol: Pancreatic cancer diagnosis - radiologists meet ai, 2024. 3, 6, 7
[2] Michela Antonelli, Annika Reinke, Spyridon Bakas, Keyvan Farahani, Bennett A Landman, Geert Litjens, Bjoern Menze, Olaf Ronneberger, Ronald M Summers, Bram van Ginneken, et al. The medical segmentation decathlon. arXiv preprint , 2021. 3, 19
[3] Michela Antonelli, Annika Reinke, Spyridon Bakas, et al. The medical segmentation decathlon. Nature Communications, 13:4128, 2022. 3
[4] Pedro RAS Bassi, Wenxuan Li, Jieneng Chen, Zheren Zhu, Tianyu Lin, Sergio Decherchi, Andrea Cavalli, Kang Wang, Yang Yang, Alan L Yuille, et al. Learning segmentation from radiology reports. arXiv preprint , 2025. 17
[5] Pedro RAS Bassi, Mehmet Can Yavuz, Kang Wang, Xiaoxi Chen, Wenxuan Li, Sergio Decherchi, Andrea Cavalli, Yang Yang, Alan Yuille, and Zongwei Zhou. Radgpt: Constructing 3d image-text tumor datasets. arXiv preprint , 2025. 17
[6] Patrick Bilic, Patrick Ferdinand Christ, Eugene Vorontsov, Grzegorz Chlebus, Hao Chen, Qi Dou, Chi-Wing Fu, Xiao Han, Pheng-Ann Heng, Jürgen Hesser, et al. The liver tumor segmentation benchmark (lits). arXiv preprint , 2019. 2, 3, 17, 19
[7] Patrick Bilic, Patrick Christ, Hongwei Bran Li, Eugene Vorontsov, Avi Ben-Cohen, Georgios Kaissis, Adi Szeskin, Colin Jacobs, Gabriel Efrain Humpire Mamani, Gabriel Chartrand, et al. The liver tumor segmentation benchmark (lits). Medical Image Analysis, 84:102680, 2023. 3
[8] Jingye Chen, Jieneng Chen, Zongwei Zhou, Bin Li, Alan Yuille, and Yongyi Lu. Mt-transunet: Mediating multitask tokens in transformers for skin lesion segmentation and classification. arXiv preprint , 2021. 17
[9] Jieneng Chen, Yongyi Lu, Qihang Yu, Xiangde Luo, Ehsan Adeli, Yan Wang, Le Lu, Alan L Yuille, and Yuyin Zhou. Transunet: Transformers make strong encoders for medical image segmentation. arXiv preprint , 2021. 17
[10] Jieneng Chen, Jieru Mei, Xianhang Li, Yongyi Lu, Qihang Yu, Qingyue Wei, Xiangde Luo, Yutong Xie, Ehsan Adeli, Yan Wang, Matthew Lungren, Lei Xing, Le Lu, Alan Yuille, and Yuyin Zhou. 3d transunet: Advancing medical image segmentation through vision transformers, 2023. 17
[11] Qi Chen, Mingxing Li, Jiacheng Li, Bo Hu, and Zhiwei Xiong. Mask rearranging data augmentation for 3d mitochondria segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 36-46. Springer, 2022. 18
[12] Qi Chen, Xiaoxi Chen, Haorui Song, Zhiwei Xiong, Alan Yuille, Chen Wei, and Zongwei Zhou. Towards generalizable tumor synthesis. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024. 7, 18
[13] Qi Chen, Yuxiang Lai, Xiaoxi Chen, Qixin Hu, Alan Yuille, and Zongwei Zhou. Analyzing tumors by synthesis. arXiv preprint , 2024. 1
[14] Richard J Chen, Ming Y Lu, Tiffany Y Chen, Drew FK Williamson, and Faisal Mahmood. Synthetic data in machine learning for medicine and healthcare. Nature Biomedical Engineering, 5(6):493-497, 2021. 17
[15] Douglas C Cheung and Antonio Finelli. Active surveillance in small renal masses in the elderly: a literature review. European urology focus, 3(4-5):340-351, 2017. 18
[16] Özgün Çiçek, Ahmed Abdulkadir, Soeren S Lienkamp, Thomas Brox, and Olaf Ronneberger. 3d u-net: learning dense volumetric segmentation from sparse annotation. In Medical Image Computing and Computer-Assisted Intervention, pages 424-432. Springer, 2016. 17
[17] Errol Colak, Hui-Ming Lin, Robyn Ball, Melissa Davis, Adam Flanders, Sabeena Jalal, Kirti Magudia, Brett Marinelli, Savvas Nicolaou, Luciano Prevedello, Jeff Rudie, George Shih, Maryam Vazirabad, and John Mongan. Rsna 2023 abdominal trauma detection, 2023. 19
[18] MJJ de Grauw, E Th Scholten, EJ Smit, MJCM Rutten, M Prokop, B van Ginneken, and A Hering. The uls23 challenge: a baseline model and benchmark dataset for 3d universal lesion segmentation in computed tomography. arXiv preprint , 2024. 3
[19] Pedro Domingos. A few useful things to know about machine learning. Communications of the ACM, 55(10):78-87, 2012. 17
[20] Lijie Fan, Kaifeng Chen, Dilip Krishnan, Dina Katabi, Phillip Isola, and Yonglong Tian. Scaling laws of synthetic images for model training... for now. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7382-7392, 2024. 1
[21] Virginia Fernandez, Walter Hugo Lopez Pinaya, Pedro Borges, Petru-Daniel Tudosiu, Mark S Graham, Tom Vercauteren, and M Jorge Cardoso. Can segmentation models be trained with fully synthetically generated data? In International Workshop on Simulation and Synthesis in Medical Imaging, pages 79-90. Springer, 2022. 18
[22] Kathryn J Fowler, Adam Burgoyne, Tyler J Fraum, Mojgan Hosseini, Shintaro Ichikawa, Sooah Kim, Azusa Kitao, Jeong Min Lee, Valérie Paradis, Bachir Taouli, et al. Pathologic, molecular, and prognostic radiologic features of hepatocellular carcinoma. Radiographics, 41(6):1611-1631, 2021. 18
[23] Sergios Gatidis, Marcel Früh, Matthias P Fabritius, Sijing Gu, Konstantin Nikolaou, Christian La Fougère, Jin Ye, Junjun He, Yige Peng, Lei Bi, et al. Results from the autopet challenge on fully automated lesion segmentation in oncologic pet/ct imaging. Nature Machine Intelligence, pages 1-10, 2024. 3
[24] CH Golias, A Charalabopoulos, and K Charalabopoulos. Cell proliferation and cell cycle control: a mini review. International journal of clinical practice, 58(12):1134-1141, 2004. 17
[25] Kuang Gong, Keith Johnson, Georges El Fakhri, Quanzheng Li, and Tinsu Pan. Pet image denoising based on denoising diffusion probabilistic model. European Journal of Nuclear Medicine and Molecular Imaging, pages 1-11, 2023. 18
[26] Chloe Gui, Suzanne E Kosteniuk, Jonathan C Lau, and Joseph F Megyesi. Tumor growth dynamics in serially-imaged low-grade glioma patients. Journal of Neuro-Oncology, 139:167-175, 2018. 18
[27] Pengfei Guo, Can Zhao, Dong Yang, Ziyue Xu, Vishwesh Nath, Yucheng Tang, Benjamin Simon, Mason Belue, Stephanie Harmon, Baris Turkbey, et al. Maisi: Medical ai for synthetic imaging. arXiv preprint , 2024. 18
[28] Pengfei Guo, Can Zhao, Dong Yang, Yufan He, Vishwesh Nath, Ziyue Xu, Pedro RAS Bassi, Zongwei Zhou, Benjamin D Simon, Stephanie Anne Harmon, et al. Text2ct: Towards 3d ct volume generation from free-text descriptions using diffusion model. arXiv preprint , 2025. 1
[29] Fatemeh Haghighi, Mohammad Reza Hosseinzadeh Taher, Zongwei Zhou, Michael B Gotway, and Jianming Liang. Transferable visual words: Exploiting the semantics of anatomical patterns for self-supervised learning. IEEE Transactions on Medical Imaging, 2021. 5
[30] Ibrahim Ethem Hamamci, Sezgin Er, Anjany Sekuboyina, Enis Simsar, Alperen Tezcan, Ayse Gulnihan Simsek, Sevval Nil Esirgun, Furkan Almas, Irem Doğan, Muhammed Furkan Dasdelen, et al. Generatect: text-conditional generation of 3d chest ct volumes. In European Conference on Computer Vision, pages 126-143. Springer, 2025.
18 [31] Ali Hatamizadeh, Vishwesh Nath, Yucheng Tang, Dong Yang, Holger R Roth, and Daguang Xu. Swin unetr: Swin transformers for semantic segmentation of brain tumors in mri images. In International MICCAI Brainlesion Workshop, pages 272-284. Springer, 2022. 17 [32] Ali Hatamizadeh, Yucheng Tang, Vishwesh Nath, Dong Yang, Andriy Myronenko, Bennett Landman, Holger R Roth, and Daguang Xu. Unetr: Transformers for 3d medical image segmentation. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, pages 574-584, 2022. 5, 6, 7, 17 [33] Yufan He, Dong Yang, Holger Roth, Can Zhao, and Daguang Xu. Dints: Differentiable neural network topology search for 3d medical image segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5841-5850, 2021. 5 [34] Yuting He, Guanyu Yang, Jian Yang, Rongjun Ge, Youyong Kong, Xiaomei Zhu, Shaobo Zhang, Pengfei Shao, Huazhong Shu, Jean-Louis Dillenseger, et al. Meta grayscale adaptive network for 3d integrated renal structures segmentation. Medical image analysis, 71:102055, 2021. 6, 7 [35] Nicholas Heller, Niranjan Sathianathen, Arveen Kalapara, Edward Walczak, Keenan Moore, Heather Kaluzniak, Joel Rosenberg, Paul Blake, Zachary Rengel, Makinna Oestreich, et al. The kits19 challenge data: 300 kidney tumor cases with clinical context, ct semantic segmentations, and surgical outcomes. arXiv preprint , 2019. 2 [36] Nicholas Heller, Fabian Isensee, Dasha Trofimova, Resha Tejpaul, Zhongchen Zhao, Huai Chen, Lisheng Wang, Alex Golts, Daniel Khapun, Daniel Shats, et al. The kits21 challenge: Automatic segmentation of kidneys, renal tumors, and renal cysts in corticomedullary-phase ct. arXiv preprint , 2023. 3, 17, 19 [37] Tom Henighan, Jared Kaplan, Mor Katz, Mark Chen, Christopher Hesse, Jacob Jackson, Heewoo Jun, Tom B Brown, Prafulla Dhariwal, Scott Gray, et al. Scaling laws for autoregressive generative modeling. arXiv preprint , 2020. 3 [38] Nora B Henrikson, Erin J Aiello Bowles, Paula R Blasi, Caitlin C Morrison, Matt Nguyen, Venu G Pillarisetty, and Jennifer S Lin. Screening for pancreatic cancer: updated evidence report and systematic review for the us preventive services task force. Jama, 322(5):445-454, 2019. 2 [39] N Hiraoka, Y Ino, S Sekine, H Tsuda, K Shimada, T Kosuge, J Zavada, M Yoshida, K Yamada, T Koyama, et al. Tumour necrosis is a postoperative prognostic marker for pancreatic cancer patients with a high interobserver reproducibility in histological evaluation. British journal of cancer, 103(7):1057-1065, 2010. 18 [40] Qixin Hu, Junfei Xiao, Yixiong Chen, Shuwen Sun, JieNeng Chen, Alan Yuille, and Zongwei Zhou. Synthetic tumors make ai segment tumors better. NeurIPS Workshop on Medical Imaging meets NeurIPS, 2022. 1 [41] Qixin Hu, Yixiong Chen, Junfei Xiao, Shuwen Sun, Jieneng Chen, Alan L Yuille, and Zongwei Zhou. Label-free liver tumor segmentation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7422-7432, 2023. 18 [42] Qixin Hu, Alan Yuille, and Zongwei Zhou. Synthetic data as validation. arXiv preprint , 2023. 17 [43] Ziyan Huang, Haoyu Wang, Zhongying Deng, Jin Ye, Yanzhou Su, Hui Sun, Junjun He, Yun Gu, Lixu Gu, Shaoting Zhang, et al. Stu-net: Scalable and transferable medical image segmentation models empowered by large-scale supervised pre-training. arXiv preprint , 2023. 5, 6, 7, 17 [44] Fabian Isensee, Paul F Jaeger, Simon AA Kohl, Jens Petersen, and Klaus H Maier-Hein. 
nnu-net: a self-configuring method for deep learning-based biomedical image segmentation. Nature Methods, 18(2):203-211, 2021. 5, 6, 7
[45] Fabian Isensee, Tassilo Wald, Constantin Ulrich, Michael Baumgartner, Saikat Roy, Klaus Maier-Hein, and Paul F Jaeger. nnu-net revisited: A call for rigorous validation in 3d medical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 488-498. Springer, 2024. 5, 6, 7
[46] Yuanfeng Ji, Haotian Bai, Jie Yang, Chongjian Ge, Ye Zhu, Ruimao Zhang, Zhen Li, Lingyan Zhang, Wanling Ma, Xiang Wan, et al. Amos: A large-scale abdominal multi-organ benchmark for versatile medical image segmentation. arXiv preprint , 2022. 17, 19
[47] Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. In International conference on machine learning, pages 4904-4916. PMLR, 2021. 3
[48] James Jordon, Jinsung Yoon, and Mihaela Van Der Schaar. Pate-gan: Generating synthetic data with differential privacy guarantees. In International conference on learning representations, 2018. 17
[49] Mintong Kang, Bowen Li, Zengle Zhu, Yongyi Lu, Elliot K Fishman, Alan Yuille, and Zongwei Zhou. Label-assemble: Leveraging multiple datasets with partial labels. In IEEE International Symposium on Biomedical Imaging, pages 1-5. IEEE, 2023. 2
[50] Mee Joo Kang, Jin-Young Jang, Soo Jin Kim, Kyoung Bun Lee, Ji Kon Ryu, Yong-Tae Kim, Yong Bum Yoon, and Sun-Whe Kim. Cyst growth rate predicts malignancy in patients with branch duct intraductal papillary mucinous neoplasms. Clinical Gastroenterology and Hepatology, 9(1):87-93, 2011. 18
[51] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. arXiv preprint , 2020. 1, 3
[52] David J Kerr, Daniel G Haller, and Michael Baumann. Oxford textbook of oncology. oxford university press, 2016. 18
[53] Boah Kim, Yujin Oh, and Jong Chul Ye. Diffusion adversarial representation learning for self-supervised vessel segmentation. arXiv preprint , 2022. 18
[54] Sungwoong Kim, Ildoo Kim, Sungbin Lim, Woonhyuk Baek, Chiheon Kim, Hyungjoo Cho, Boogeon Yoon, and Taesup Kim. Scalable neural architecture search for 3d medical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 220-228. Springer, 2019. 5
[55] Vinay Kumar, Abul Abbas, and Jon C Aster. Robbins basic pathology. Elsevier Health Sciences, 2017. 17, 18
[56] Yuxiang Lai, Xiaoxi Chen, Angtian Wang, Alan Yuille, and Zongwei Zhou. From pixel to cancer: Cellular automata in computed tomography. arXiv preprint , 2024. 18
[57] Bennett Landman, Zhoubing Xu, J Igelsias, Martin Styner, Thomas Langerak, and Arno Klein. Miccai multi-atlas labeling beyond the cranial vault-workshop and challenge. In Proc. MICCAI Multi-Atlas Labeling Beyond Cranial Vault-Workshop Challenge, page 12, 2015. 19
[58] Bowen Li, Yu-Cheng Chou, Shuwen Sun, Hualin Qiao, Alan Yuille, and Zongwei Zhou. Early detection and localization of pancreatic cancer by label-free tumor synthesis. MICCAI Workshop on Big Task Small Data, 1001-AI, 2023. 1
[59] Wenxuan Li, Chongyu Qu, Xiaoxi Chen, Pedro RAS Bassi, Yijia Shi, Yuxiang Lai, Qian Yu, Huimin Xue, Yixiong Chen, Xiaorui Lin, et al.
Abdomenatlas: A large-scale, detailed-annotated, & multi-center dataset for efficient transfer learning and open algorithmic benchmarking. Medical Image Analysis, page 103285, 2024. 6, 7
[60] Wenxuan Li, Alan Yuille, and Zongwei Zhou. How well do supervised models transfer to 3d image segmentation? In International Conference on Learning Representations, 2024. 17
[61] Wenxuan Li, Pedro RAS Bassi, Tianyu Lin, Yu-Cheng Chou, Xinze Zhou, Yucheng Tang, Fabian Isensee, Kang Wang, Qi Chen, Xiaowei Xu, et al. Scalemai: Accelerating the development of trusted datasets and ai models. arXiv preprint , 2025. 1, 17
[62] Wenxuan Li, Xinze Zhou, Qi Chen, Tianyu Lin, Pedro RAS Bassi, Szymon Plotka, Jaroslaw B Cwikla, Xiaoxi Chen, Chen Ye, Zheren Zhu, et al. Pants: The pancreatic tumor segmentation dataset. arXiv preprint , 2025. 2
[63] Xinran Li, Yi Shuai, Chen Liu, Qi Chen, Qilong Wu, Pengfei Guo, Dong Yang, Can Zhao, Pedro RAS Bassi, Daguang Xu, et al. Text-driven tumor synthesis. arXiv preprint , 2024. 1
[64] Tianyu Lin, Xinran Li, Chuntung Zhuang, Qi Chen, Yuanhao Cai, Kai Ding, Alan L Yuille, and Zongwei Zhou. Are pixel-wise metrics reliable for sparse-view computed tomography reconstruction? arXiv preprint , 2025. 18
[65] Jie Liu, Alan Yuille, Yucheng Tang, and Zongwei Zhou. Clip-driven universal model for partially labeled organ and pan-cancer segmentation. In MICCAI 2023 FLARE Challenge, 2023. 17
[66] Jie Liu, Yixiao Zhang, Jie-Neng Chen, Junfei Xiao, Yongyi Lu, Bennett A Landman, Yixuan Yuan, Alan Yuille, Yucheng Tang, and Zongwei Zhou. Clip-driven universal model for organ segmentation and tumor detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 21152-21164, 2023. 5, 6, 7, 17
[67] Jie Liu, Yixiao Zhang, Kang Wang, Mehmet Can Yavuz, Xiaoxi Chen, Yixuan Yuan, Haoliang Li, Yang Yang, Alan Yuille, Yucheng Tang, et al. Universal and extensible language-vision models for organ segmentation and tumor detection from abdominal computed tomography. Medical Image Analysis, page 103226, 2024. 5
[68] Xiangde Luo, Wenjun Liao, Jianghong Xiao, Tao Song, Xiaofan Zhang, Kang Li, Guotai Wang, and Shaoting Zhang. Word: Revisiting organs segmentation in the whole abdominal region. arXiv preprint , 2021. 17, 19
[69] Qing Lyu and Ge Wang. Conversion between ct and mri images using diffusion and score-matching models. arXiv preprint , 2022. 18
[70] J. Ma and B. Wang. Fast, low-resource, accurate, and robust organ and pan-cancer segmentation. In 27th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2024). Zenodo, 2024. 1, 2, 3
[71] Jun Ma, Yao Zhang, Song Gu, Cheng Zhu, Cheng Ge, Yichi Zhang, Xingle An, Congcong Wang, Qiyuan Wang, Xin Liu, et al. Abdomenct-1k: Is abdominal organ segmentation a solved problem? IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021. 19
[72] Jun Ma, Yao Zhang, Song Gu, Xingle An, Zhihe Wang, Cheng Ge, Congcong Wang, Fan Zhang, Yu Wang, Yinan Xu, et al. Fast and low-gpu-memory abdomen ct organ segmentation: the flare challenge. Medical Image Analysis, 82:102616, 2022. 19
[73] Jiawei Mao, Yuhan Wang, Yucheng Tang, Daguang Xu, Kang Wang, Yang Yang, Zongwei Zhou, and Yuyin Zhou. Medsegfactory: Text-guided generation of medical image-mask pairs. arXiv preprint , 2025. 1
[74] Xiangxi Meng, Yuning Gu, Yongsheng Pan, Nizhuan Wang, Peng Xue, Mengkang Lu, Xuming He, Yiqiang Zhan, and Dinggang Shen.
A novel unified conditional score-based generative framework for multi-modal medical image completion. arXiv preprint , 2022. 18
[75] Fausto Milletari, Nassir Navab, and Seyed-Ahmad Ahmadi. V-net: Fully convolutional neural networks for volumetric medical image segmentation. In 2016 Fourth International Conference on 3D Vision (3DV), pages 565-571. IEEE, 2016. 17
[76] Andriy Myronenko. 3d mri brain tumor segmentation using autoencoder regularization. In Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries: 4th International Workshop, BrainLes 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, September 16, 2018, Revised Selected Papers, Part II 4, pages 311-320. Springer, 2019. 6, 7
[77] Piyush Nathani, Purva Gopal, Nicole Rich, Adam Yopp, Takeshi Yokoo, Binu John, Jorge Marrero, Neehar Parikh, and Amit G Singal. Hepatocellular carcinoma tumour volume doubling time: a systematic review and meta-analysis. Gut, 70(2):401-407, 2021. 18
[78] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in neural information processing systems, 35:27730-27744, 2022. 1, 3
[79] Muzaffer Özbey, Onat Dalmaz, Salman UH Dar, Hasan A Bedel, Şaban Öztürk, Alper Güngör, and Tolga Çukur. Unsupervised medical image translation with adversarial diffusion models. IEEE Transactions on Medical Imaging, 2023. 18
[80] William Peebles and Saining Xie. Scalable diffusion models with transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4195-4205, 2023. 3
[81] Micha Pfeiffer, Isabel Funke, Maria R Robu, Sebastian Bodenstedt, Leon Strenger, Sandy Engelhardt, Tobias Roß, Matthew J Clarkson, Kurinchi Gurusamy, Brian R Davidson, et al. Generating large labeled data sets for laparoscopic image processing tasks using unpaired image-to-image translation. In Medical Image Computing and Computer Assisted Intervention, pages 119-127. Springer, 2019. 18
[82] Marion J Pollheimer, Peter Kornprat, Richard A Lindtner, Lars Harbaum, Andrea Schlemmer, Peter Rehak, and Cord Langner. Tumor necrosis is a new promising prognostic factor in colorectal cancer. Human pathology, 41(12):1749-1757, 2010. 18
[83] Colin H Richards, Zahra Mohammed, Tahir Qayyum, Paul G Horgan, and Donald C McMillan. The prognostic value of histological tumor necrosis in solid organ malignant disease: a systematic review. Future oncology, 7(10):1223-1235, 2011. 18
[84] Blaine Rister, Darvin Yi, Kaushik Shivakumar, Tomomi Nobashi, and Daniel L Rubin. Ct-org, a new dataset for multiple organ segmentation in computed tomography. Scientific Data, 7(1):1-9, 2020. 19
[85] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684-10695, 2022. 1, 3, 6
[86] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 234-241. Springer, 2015. 17
[87] Holger R Roth, Le Lu, Amal Farag, Hoo-Chang Shin, Jiamin Liu, Evrim B Turkbey, and Ronald M Summers. Deeporgan: Multi-level deep convolutional networks for automated pancreas segmentation.
In International conference on medical image computing and computer-assisted intervention, pages 556-564. Springer, 2015. 19
[88] Younghak Shin, Hemin Ali Qadir, and Ilangko Balasingham. Abnormal colon polyp image synthesis using conditional adversarial networks for improved detection performance. IEEE Access, 6:56007-56017, 2018. 18
[89] Marc C Smaldone, Alexander Kutikov, Brian L Egleston, Daniel J Canter, Rosalia Viterbo, David YT Chen, Michael A Jewett, Richard E Greenberg, and Robert G Uzzo. Small renal masses progressing to metastases under active surveillance: a systematic review and pooled analysis. Cancer, 118(4):997-1006, 2012. 18
[90] L Soler, A Hostettler, V Agnus, A Charnoz, J Fasquel, J Moreau, A Osswald, M Bouhadjar, and J Marescaux. 3d image reconstruction for comparison of algorithm database: A patient specific anatomical and medical image database. IRCAD, Strasbourg, France, Tech. Rep, 2010. 6, 7
[91] Yang Song, Liyue Shen, Lei Xing, and Stefano Ermon. Solving inverse problems in medical imaging with score-based generative models. arXiv preprint , 2021. 18
[92] Yucheng Tang, Dong Yang, Wenqi Li, Holger R Roth, Bennett Landman, Daguang Xu, Vishwesh Nath, and Ali Hatamizadeh. Self-supervised pre-training of swin transformers for 3d medical image analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20730-20740, 2022. 5, 6, 7, 17
[93] Vanya V Valindria, Nick Pawlowski, Martin Rajchl, Ioannis Lavdas, Eric O Aboagye, Andrea G Rockall, Daniel Rueckert, and Ben Glocker. Multi-modal learning from unpaired images: Application to multi-organ segmentation in ct and mri. In 2018 IEEE winter conference on applications of computer vision (WACV), pages 547-556. IEEE, 2018. 19
[94] Julia Wolleb, Robin Sandkühler, Florentin Bieder, Philippe Valmaggia, and Philippe C Cattin. Diffusion models for implicit image segmentation ensembles. In International Conference on Medical Imaging with Deep Learning, pages 1336-1348. PMLR, 2022. 18
[95] Linshan Wu, Jiaxin Zhuang, Xuefeng Ni, and Hao Chen. Freetumor: Advance tumor segmentation via large-scale tumor synthesis. arXiv preprint , 2024. 18
[96] Yingda Xia, Qihang Yu, Linda Chu, Satomi Kawamoto, Seyoun Park, Fengze Liu, Jieneng Chen, Zhuotun Zhu, Bowen Li, Zongwei Zhou, et al. The felix project: Deep networks to detect pancreatic neoplasms. medRxiv, 2022. 1, 6
[97] Tiange Xiang, Yixiao Zhang, Yongyi Lu, Alan Yuille, Chaoyi Zhang, Weidong Cai, and Zongwei Zhou. Exploiting structural consistency of chest anatomy for unsupervised anomaly detection in radiography images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024. 17
[98] Yutong Xie and Quanzheng Li. Measurement-conditioned denoising diffusion probabilistic model for under-sampled medical image reconstruction. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 655-664. Springer, 2022. 18
[99] Ke Yan, Xiaosong Wang, Le Lu, and Ronald M Summers. Deeplesion: Automated deep mining, categorization and detection of significant radiology image findings using large-scale clinical lesion annotations. arXiv preprint , 2017. 3
[100] Yijun Yang, Zhao-Yang Wang, Qiuping Liu, Shuwen Sun, Kang Wang, Rama Chellappa, Zongwei Zhou, Alan Yuille, Lei Zhu, Yu-Dong Zhang, et al. Medical world model: Generative simulation of tumor evolution for treatment planning. arXiv preprint , 2025. 17
[101] Jinsung Yoon, Daniel Jarrett, and Mihaela Van der Schaar. Time-series generative adversarial networks.
Advances in Neural Information Processing Systems, 32, 2019. 17
[102] Qihang Yu, Dong Yang, Holger Roth, Yutong Bai, Yixiao Zhang, Alan L Yuille, and Daguang Xu. C2FNAS: Coarse-to-fine neural architecture search for 3d medical image segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4126-4135, 2020. 5
[103] Zongwei Zhou. Towards Annotation-Efficient Deep Learning for Computer-Aided Diagnosis. PhD thesis, Arizona State University, 2021. 1
[104] Zongwei Zhou, Md Mahfuzur Rahman Siddiquee, Nima Tajbakhsh, and Jianming Liang. UNet++: A nested u-net architecture for medical image segmentation. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, pages 3-11. Springer, 2018. 17
[105] Zongwei Zhou, Md Mahfuzur Rahman Siddiquee, Nima Tajbakhsh, and Jianming Liang. UNet++: Redesigning skip connections to exploit multiscale features in image segmentation. IEEE Transactions on Medical Imaging, 39(6):1856-1867, 2019. 17
[106] Zongwei Zhou, Vatsal Sodha, Jiaxuan Pang, Michael B Gotway, and Jianming Liang. Models genesis. Medical Image Analysis, 67:101840, 2021. 5
[107] Zongwei Zhou, Michael B Gotway, and Jianming Liang. Interpreting medical images. In Intelligent Systems in Medicine and Health, pages 343-371. Springer, 2022. 1
[108] Lingting Zhu, Noel Codella, Dongdong Chen, Zhenchao Jin, Lu Yuan, and Lequan Yu. Generative enhancement for 3d medical images. arXiv preprint, 2024. 18

Scaling Tumor Segmentation: Best Lessons from Real and Synthetic Data
Supplementary Material

This appendix is organized as follows:
• § A provides comprehensive results with scaled real data on the proprietary dataset and with synthetic data.
• § B provides comprehensive related works.
  ◦ B.1: AI Development on Real Tumors
  ◦ B.2: AI Development on Synthetic Tumors
• § C provides implementation details for Tumor Genesis and comparative models.
  ◦ C.1: details of public and private datasets used in AbdomenAtlas 2.0.
  ◦ C.2: implementation details of comparative models.
• § D provides more visual examples from AbdomenAtlas 2.0.
• § E presents additional results on the key insights gained from scaling real tumor data.
• § F presents additional results on the key insights gained from scaling real and synthetic tumor data.

A. Best Lesson Proof on the Proprietary Dataset

Figure 8. Best lesson proof on the proprietary dataset. Comprehensive experimental results trained on the proprietary dataset show that increasing the scale of real data (gray curve) improves segmentation (DSC and NSD) and detection (patient-level sensitivity, tumor-level sensitivity, and early tumor sensitivity) for both in-distribution and out-of-distribution data. Additionally, augmenting the dataset with an extra 3× synthetic data (red curve) consistently enhances the results. The specific numerical results in this figure can be referenced in Table 5. Given the substantial GPU requirements, the results were obtained from a single experiment. To reach a more reliable conclusion, we will conduct the experiments at least 10 times.
Scaling with real data.

#real CT   Patient-level Sen.   Tumor-level Sen.   DSC    NSD    Early Tumor Sen.

Test on in-distribution data
60         86.4                 74.3               40.2   35.0   45.7
120        90.1                 79.4               51.0   44.4   40.0
278        91.5                 82.4               52.7   47.2   51.4
435        90.1                 80.7               54.4   48.6   42.9
750        89.7                 80.1               55.4   49.1   41.4
1125       91.2                 82.1               56.1   50.7   44.3
1500       90.4                 82.1               59.3   53.6   48.6
3159       91.5                 84.8               59.7   54.3   58.6

Test on out-of-distribution data
60         100.0                51.2               6.6    4.4    22.0
120        100.0                69.5               22.6   17.4   42.0
278        89.2                 74.1               33.2   27.9   44.0
435        92.3                 75.6               36.8   30.5   48.0
750        88.5                 71.0               35.3   28.5   38.0
1125       80.0                 72.5               37.1   31.4   48.0
1500       90.0                 81.7               38.4   31.2   60.0
3159       81.5                 74.1               41.2   35.5   46.0

Scaling with real & synthetic data.

#real CT   Patient-level Sen.   Tumor-level Sen.   DSC    NSD    Early Tumor Sen.

Test on in-distribution data
60         89.7                 77.4               48.2   39.8   40.0
120        94.1                 82.1               56.5   47.7   44.3
278        98.2                 86.1               58.1   50.0   57.1
435        97.4                 86.5               59.1   51.0   55.7
750        95.2                 86.5               58.1   50.4   51.4
1125       97.1                 86.1               59.9   52.0   52.9
1500       95.2                 85.1               59.2   51.1   51.4
3159       96.3                 85.8               59.3   51.2   50.0

Test on out-of-distribution data
60         96.2                 83.2               42.7   39.2   64.7
120        91.5                 82.4               45.5   40.6   64.7
278        99.2                 92.4               51.8   48.3   80.4
435        98.5                 90.8               52.8   49.6   78.4
750        96.9                 90.8               54.1   50.6   78.4
1125       98.5                 90.1               53.0   48.4   78.4
1500       97.7                 87.8               52.6   48.0   70.6
3159       96.9                 87.8               53.5   49.1   72.5

Table 5. Best lesson proof on the proprietary dataset. The proprietary dataset comprises a total of 5,176 CT scans, which include scans of patients with pancreatic tumors as well as healthy scans without pancreatic tumors. We utilized 3,159 scans for training, while the remaining 2,017 were allocated for testing within the same distribution. For the out-of-distribution dataset, we selected the Panorama dataset. Detailed information regarding the dataset split can be found in § C. For the segmentation model, we employed the SegResNet model based on the MONAI codebase for training and assessed the tumor segmentation and detection results using the DSC, NSD, and sensitivity metrics.

B. Related works

B.1. AI Development on Real Tumors

AI algorithms. Tumor detection and segmentation have been long-standing problems in medical image analysis. To achieve deliverable results, many recent works leverage state-of-the-art deep learning technology [43, 92]. The U-Net architecture [86] has been widely adopted in medical image analysis. Over the years, numerous well-designed networks have been proposed to improve the U-Net architecture, including UNet++ [104, 105], TransU-Net [9], UNETR [32], Swin-UNETR [31], and many others [8, 10, 16, 75]. While these methods have demonstrated remarkable performance in tumor detection and segmentation, they typically rely on a significant number of annotations. The process of annotating real tumors is not only time-consuming but also requires extensive medical expertise. In some cases, annotation requires the assistance of radiology reports [4, 5]; in others, it is impossible to obtain at all [6, 42, 97, 100]. Therefore, the use of synthetic tumors emerges as a promising solution. Liu et al. [66] integrate text embeddings derived from Contrastive Language-Image Pre-training (CLIP) into segmentation models, effectively capturing anatomical relationships and enabling the model to learn structured feature embeddings across multiple organ and tumor types. With pre-training on large-scale CT scans with per-voxel annotations for 25 anatomical structures and seven tumor types, Li et al. [60] have developed a suite of models demonstrating robust transfer learning capabilities across various downstream organ and tumor segmentation tasks. Preexisting public datasets have made significant contributions to the advancement of AI in tumor detection [61].
We summarize key characteristics of existing public datasets for organ and tumor segmentation in Table 1, categorized into those with and without tumor labels. Datasets such as LiTS [6] and KiTS [36] provide essential tumor labels but are limited in size and variety, with 131 and 489 scans, respectively, and fewer hospitals contributing data (7 for LiTS and 1 for KiTS). Larger datasets like FLARE23 [65] include 2,200 scans and span contributions from 30 hospitals, yet they focus on a single organ and provide no explicit tumor-specific labels. Similarly, datasets without tumor labels, such as WORD [68] and AMOS22 [46], are useful for broader anatomical segmentation tasks but lack tumor-specific annotations. In contrast, AbdomenAtlas 2.0 distinguishes itself by offering the most extensive dataset to date, with 10,136 scans, 4,700K slices, and 13,223 tumors annotated across multiple organs, including rarer tumor types such as esophageal and uterine tumors. The dataset incorporates data from 89 hospitals across a wide range of countries, providing unprecedented diversity and comprehensiveness for multi-organ tumor research.

B.2. AI Development on Synthetic Tumors

Tumor synthesis enables the generation of artificial tumors in medical images, aiding in the training of AI models for tumor detection and segmentation [14, 48, 101]. Synthetic tumors become particularly valuable when acquiring per-voxel annotations of real tumors is challenging, such as in the early stages of tumor development. There are several advantages of synthetic tumors over real tumors.

Quality Control: Synthetic data allows for the control of specific variables and the introduction of desired diversity into the dataset. Real-world datasets often suffer from imbalances, such as an overrepresentation of certain demographics or tumor stages. Synthetic data can be generated to balance these datasets, ensuring that machine learning models are trained on a comprehensive and representative sample of data. For rare cancers, collecting enough patient data is particularly difficult. Synthetic data can help augment these limited datasets, enabling the development of more robust and accurate models for rare cancer types. Additionally, synthetic data can be used to simulate hard cases that are difficult to capture in real-world data. Researchers can rapidly iterate and refine their models, leading to faster advancements in tumor detection, diagnosis, and treatment.

Privacy and Ethical Considerations: One of the major advantages of synthetic data is that it can be used without compromising patient privacy. Since synthetic data is not directly tied to any real individual, it eliminates the risk of exposing sensitive patient information. By using synthetic data, researchers can bypass ethical dilemmas associated with real patient data, such as the need for patient consent and the risk of data breaches.

Synthetic tumors can aid AI models in tumor detection and segmentation, particularly in situations where detailed annotations are scarce [14, 19]. Therefore, an effective and universally applicable tumor synthesis approach is urgently needed to accelerate the development of tumor detection and segmentation methods. Tumor development is intricately regulated by biological mechanisms at various scales. Tumors, which arise from DNA mutations in a single cell and represent genetic disorders [55], undergo complex growth processes. Mutated cells lead to uncontrolled proliferation, which can be benign or malignant [24].
Differences between benign and malignant tumors include growth rate and invasiveness [55]. Malignant tumors tend to exhibit larger final sizes and faster growth rates compared to benign lesions [50]. Additionally, slow tumor growth rates have been associated with low malignant potential [15, 89]. These patterns have also been observed in several studies [26, 77]. Malignant tumors usually invade surrounding tissues, while benign tumors typically remain confined to their original sites. Moreover, even slowly growing malignant tumors can invade surrounding tissues [52], leading to blurry boundaries between tumors and adjacent tissues. Therefore, it is necessary to design Accumulation and Growth rules to simulate these features. Tumor necrosis, a form of cell death, indicates a worse prognosis [82, 83]. Histologically, necrosis is caused by hypoxia resulting from rapid cell proliferation surpassing vascular supply [39], presenting as non-enhancing irregular areas in CT images [22]. Hu et al. [41] developed a program that integrates medical knowledge to generate realistic liver tumors. However, these models are generally organ-specific and require adaptation to work with other organs. Lai et al. [56] proposed a framework that leverages cellular automata to simulate tumor growth, invasion, and necrosis, enabling realistic synthetic tumor generation across multiple organs.

Generative models have been effectively utilized in the medical field for tasks like image-to-image translation [69, 74, 79, 81], reconstruction [64, 91, 98], segmentation [11, 21, 53, 94], and image denoising [25]. Utilizing advanced generative models to synthesize various tumors is also a promising direction [27, 30, 95, 108]. Shin et al. [88] advanced detection by generating synthetic abnormal colon polyps using Conditional Adversarial Networks. Chen et al. [12] employed a diffusion model that capitalizes on similarities in early-stage tumor imaging for cross-organ tumor synthesis. Wu et al. [95] employ an adversarial discriminator to automatically filter out low-quality synthetic tumors and improve tumor synthesis. Guo et al. [27] incorporate ControlNet to process organ segmentation as an additional condition to guide the generation of CT images with flexible volume dimensions and voxel spacing.

C. Implementation Details

C.1. Dataset Composition

AbdomenAtlas 2.0 components                        # of scans   annotated tumor (original)                          annotators
Public CT in AbdomenAtlas 2.0 (AbdomenAtlas 1.1)   9,901        liver, pancreas, kidney, colon                      human & AI
  CHAOS [2018] [link]                              20           -                                                   human
  BTCV [2015] [link]                               47           -                                                   human
  Pancreas-CT [2015] [link]                        42           -                                                   human
  CT-ORG [2020] [link]                             140          -                                                   human & AI
  WORD [2021] [link]                               120          -                                                   human
  LiTS [2019] [link]                               130          liver                                               human
  AMOS22 [2022] [link]                             200          -                                                   human & AI
  KiTS [2023] [link]                               489          kidney                                              human
  AbdomenCT-1K [2021] [link]                       1,000        -                                                   human & AI
  MSD-CT [2021] [link]                             945          liver, pancreas, colon                              human & AI
  FLARE'23 [2022] [link]                           4,100        -                                                   human & AI
  Abdominal Trauma Det [2023] [link]               4,711        -                                                   -
Private CT in AbdomenAtlas 2.0                     233          liver, pancreas, kidney, colon, esophagus, uterus   human & AI

Table 6. Dataset composition of AbdomenAtlas 2.0. Our AbdomenAtlas 2.0 comprises two components: CT scans from the public AbdomenAtlas 1.1 dataset and CT scans from a private source, totaling 10,135 tumor-annotated CT scans, with additional scans expected from various sources. Note that, for CT scans from the AbdomenAtlas 1.1 dataset, we fully annotate six tumor types for each CT scan.
C.2. Comparative Models

The comparative models are implemented in Python using the MONAI and nnU-Net frameworks.

nnU-Net Framework. nnU-Net serves as a framework for the automatic configuration of AI-driven semantic segmentation pipelines. When presented with a new segmentation dataset, it extracts pertinent metadata from the training cases to automatically determine its hyperparameters. It has withstood the test of time and continues to deliver state-of-the-art results. nnU-Net effectively illustrates that meticulously configuring and validating segmentation pipelines across a diverse range of segmentation tasks can yield a remarkably powerful algorithm. We implement UNETR, Swin UNETR, nnU-Net, ResEncM, and STU-Net using the nnU-Net framework. The orientation of CT scans is adjusted to specific axcodes. Isotropic spacing is employed to resample each scan, achieving a uniform voxel size of 1.5 × 1.5 × 1.5 mm³. Additionally, the intensity in each scan is truncated to the range [-175, 250] and then linearly normalized to [0, 1]. During training, we crop random fixed-size 96 × 96 × 96 regions, selecting centers from either a foreground or background voxel according to a pre-defined ratio. Furthermore, the data augmentation during training adheres to the default strategies of the nnU-Net framework. All models are trained for 1,000 epochs, each consisting of 250 iterations, using the SGD optimizer with a base learning rate of 0.01 and a batch size of 2. During inference, we apply test-time augmentation following the default implementation in the nnU-Net framework, and we use a sliding-window strategy with an overlap ratio of 0.5.
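For reference, the preprocessing recipe above can be sketched with MONAI dictionary transforms as follows. This is a minimal illustration rather than the exact training code: the "image"/"label" keys, the RAS axcodes, the pos/neg sampling ratio, and the number of samples per volume are assumptions for this sketch.

from monai.transforms import (
    Compose, LoadImaged, EnsureChannelFirstd, Orientationd, Spacingd,
    ScaleIntensityRanged, RandCropByPosNegLabeld,
)

train_transforms = Compose([
    LoadImaged(keys=["image", "label"]),
    EnsureChannelFirstd(keys=["image", "label"]),
    Orientationd(keys=["image", "label"], axcodes="RAS"),        # fix axcodes
    Spacingd(keys=["image", "label"], pixdim=(1.5, 1.5, 1.5),    # isotropic 1.5 mm
             mode=("bilinear", "nearest")),
    ScaleIntensityRanged(keys=["image"], a_min=-175, a_max=250,  # truncate intensity,
                         b_min=0.0, b_max=1.0, clip=True),       # then map to [0, 1]
    RandCropByPosNegLabeld(keys=["image", "label"], label_key="label",
                           spatial_size=(96, 96, 96),            # random 96^3 patches
                           pos=1, neg=1, num_samples=2),         # fg/bg center ratio
])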
MONAI Framework. MONAI (Medical Open Network for AI) is an open-source framework that supports AI in healthcare. Built on PyTorch, it offers a comprehensive set of tools for configuring, training, inferring, and deploying medical AI models. We implement SegResNet, Universal Model, and Suprem using the MONAI framework. Since different methods have varying hyperparameter settings, we trained and tested the models exactly according to the original hyperparameters specified in the corresponding papers.

D. Visual Real Examples in AbdomenAtlas 2.0

Figure 9. Visual examples of six tumor types annotated in AbdomenAtlas 2.0. AbdomenAtlas 2.0 features a diverse distribution across various tumor stages and sizes. These comprehensive, high-quality tumors, accompanied by per-voxel annotations, significantly improve the performance of AI models on both in-distribution and out-of-distribution data (see Figure 10).

E. More Results: Best Lesson from Real Data

Figure 10. Best lesson from real data: results on in-distribution and out-of-distribution data. (a) While increasing data scale initially enhances in-distribution performance across various metrics (sensitivity, DSC, and NSD), it eventually plateaus. Notably, certain organ types, such as the liver and kidney, exhibit a decline in performance at the largest scales. (b) In contrast, the scaling trends observed on out-of-distribution datasets demonstrate consistent improvements on specific datasets (e.g., 3D-IRCADb, ULS-Pancreas) without reaching a plateau, indicating that larger data volumes may enhance generalizability. These results relate to the data-scaling lesson in §1 (1,500 if with real data only). Larger datasets are needed for effective out-of-distribution generalizability.

F. More Results: Best Lesson for Generalizability

Figure 11. Best lesson for pancreatic tumors. Integrating real and synthetic data, compared to using real data alone, consistently improves generalizable performance in sensitivity, DSC, and NSD across various scenarios and data scales. These results underscore the benefits of this combination in enhancing the accuracy of pancreatic tumor analysis.

Figure 12. Best lesson for kidney tumors. Combining real and synthetic data consistently enhances generalizable performance in sensitivity, DSC, and NSD across various scenarios and data scales, highlighting its effectiveness in improving kidney tumor diagnosis accuracy.
Supervised Fine-Tuning or Contrastive Learning? Towards Better Multimodal LLM Reranking

Ziqi Dai1∗, Xin Zhang1,2∗, Mingxin Li, Yanzhao Zhang, Dingkun Long, Pengjun Xie, Meishan Zhang1, Wenjie Li2, Min Zhang1
1Harbin Institute of Technology, Shenzhen  2The Hong Kong Polytechnic University
{ziqi.dai, zhangxin2023}@stu.hit.edu.cn
Release at https://github.com/vec-ai/lychee-rerank-mm

ABSTRACT

In information retrieval, training reranking models mainly focuses on two types of objectives: metric learning (e.g., contrastive loss to increase the predicted scores on relevant query-document pairs) and classification (binary label prediction of relevance vs. irrelevance). For BERT-style encoders, various studies have shown that contrastive learning (CL) can be more effective than discriminative (classification) learning. However, for large language models (LLMs), classification via supervised fine-tuning (SFT), which predicts a "yes" (resp. "no") token for relevant (resp. irrelevant) pairs, appears more promising, as it aligns well with the generative nature of LLMs. This divergence raises a central question: which objective is intrinsically better suited to LLM-based reranking, and what mechanism underlies the difference? In this work, we conduct a comprehensive comparison and analysis of CL and SFT for reranking, taking universal multimodal retrieval (UMR) as the experimental playground. We first decompose the objectives into two components: weight, which controls the magnitude of model updates, and direction, which guides the updates; we then present a unified framework for understanding their interactions. Through probing experiments, we find that SFT provides a substantially stronger weighting scheme than CL, whereas the preferred scoring direction shows no clear winner. Taken together, these results point to a consistent advantage of SFT over CL for LLM reranking. To further validate our findings, we conduct large-scale training with SFT and present new state-of-the-art rerankers on the MRB benchmark. We also provide ablations on SFT settings and expect our findings to benefit future research and applications in this area.

1 INTRODUCTION

Reranking is a crucial step in the retrieval pipeline (Lin et al., 2022), aiming to refine the initial results obtained from the previous search stage by reordering them based on their relevance to a given query. In recent years, the integration of Large Language Models (LLMs) into reranking techniques has shown promising results in text retrieval (Ma et al., 2024b) and has gradually become the standard approach (Sharifymoghaddam et al., 2025). When extending to the multimodal setting (Liu et al., 2023; Wei et al., 2025), multimodal LLMs (MLLMs) also become the promising backbone choice (Lin et al., 2025; Zhang et al., 2025a) given their strong multimodal understanding capabilities.

Currently, widely used rerankers typically follow the point-wise setting (Lin et al., 2022), which independently scores each query-candidate pair and ranks the candidates. The simple architecture of point-wise rerankers makes them easy and efficient to apply in real-world scenarios, and various open-source state-of-the-art (SOTA) models have emerged (Chen et al., 2024; Zhang et al., 2024), particularly LLM-based ones (Sharifymoghaddam et al., 2025; Zhang et al., 2025b).
To train such rerankers¹, one straightforward approach follows the pre-LLM practice of contrastive learning (CL) (Nogueira et al., 2019; Zhang et al., 2024), computing the InfoNCE loss (Oord et al., 2018) on predicted relevance scores. Another approach is to directly perform supervised fine-tuning (SFT) (Nogueira et al., 2020; Zhang et al., 2025b), which optimizes the model to predict the next token ("true/yes" for relevant, "false/no" for irrelevant) and takes the "true/yes" token probability as the relevance score. An illustration of the two is shown in Figure 1.

∗Equal contribution
¹Throughout this work, reranking primarily refers to the point-wise reranking setting.

Figure 1: Comparison of Supervised Fine-Tuning (SFT) and Contrastive Learning (CL) for the MLLM reranker. (SFT pushes p("yes" | ins, q, d⁺) → 1 and p("no" | ins, q, d⁻) → 1 for each pair independently, while CL pushes p(O_0^+ | O_0^+, O_1^-, ..., O_N^-) → 1 across the candidate list.)

Before the emergence of LLMs, contrastive learning was the dominant approach for leveraging BERT-style encoders due to its strong performance (Nogueira et al., 2019; Zhang et al., 2024). However, SFT is now widely applied to LLMs (Nogueira et al., 2020; Zhang et al., 2025b) and appears to deliver competitive results. This raises a natural research question: which objective is intrinsically better for LLM reranking, and why?

Meanwhile, research on multimodal reranking remains largely restricted to single datasets or narrowly defined tasks (Xu et al., 2025), limiting the generalizability of existing approaches. Building on recent advances in universal multimodal retrieval (Zhang et al., 2025a), our objective is to develop a universal multimodal reranking model that can consistently adapt across diverse modalities.

In this work, we explore the question by providing a theoretical analysis and empirical comparison of the two approaches, using the universal multimodal retrieval task as a testbed. We first design the General Multimodal Reranker (GMR, §3.1), then analyze the two training approaches and decompose their loss functions (§3.3) into weight and direction. Based on this, we implement a unified framework for the CL and SFT losses and conduct experiments to compare and analyze them (§4). To enable comprehensive evaluation of multimodal reranking, we compile a new unified benchmark called MRB (Multimodal Reranking Benchmark, §5).

Through analysis and comparison, we find that SFT consistently outperforms CL for LLM-based rerankers, and that: (1) the weight component, rather than the direction, accounts for most of the performance gap; (2) a larger weight improves robustness to numerical errors in training, and SFT intrinsically assigns larger weights than CL; (3) the function of the weight is input-specific guidance: it down-weights already-well-learned input pairs and up-weights hard or under-fit pairs; (4) the native SFT direction is almost optimal, whereas CL can be further improved by tuning its direction matrix. To further validate the potential of SFT, we train two reranking models (i.e., GMR-3B and GMR-7B), which set new state-of-the-art results on MRB. We will release code, data, and models to facilitate future research in this area.

Our contributions are:

• We provide a unified analysis of SFT and CL for LLM-based reranking, showing that SFT intrinsically outperforms CL.
By decomposing the loss into weight and direction components, we reveal that SFT's weight term delivers stronger optimization signals.
• We introduce the MRB benchmark, comprising 40 datasets across single-, cross-, and fused-modal retrieval, offering a comprehensive evaluation for universal multimodal reranking.
• We develop the GMR models, instruction-aware multimodal LLM rerankers trained on 1.5M diverse pairs. GMR-3B and GMR-7B achieve state-of-the-art results on MRB, highlighting the effectiveness of SFT and providing strong backbones for future research.

2 RELATED WORK

Reranking with Large Language Models. Reranking improves retrieval output quality by jointly modeling the query and the retrieved candidates and reordering the candidates (Lin et al., 2022). In recent years, reranking has been dominated by methods based on pretrained language models (Nogueira et al., 2019; 2020), with LLM-based approaches becoming particularly prominent in the latest advancements (Ma et al., 2024b; Zhuang et al., 2024; Sharifymoghaddam et al., 2025). Compared to the widely studied list-wise reranking (Ren et al., 2025; Liu et al., 2025), in this work we focus on the more straightforward and widely used point-wise approach (Zhang et al., 2024; Guo et al., 2025), which scores each query-candidate pair independently and ranks the candidates.

Training point-wise rerankers has traditionally relied on contrastive learning (CL) (Nogueira et al., 2019; Zhang et al., 2024), which is also a verified choice for LLM-based models (Ma et al., 2024b). However, for such generative language models, a supervised fine-tuning (SFT) approach (Nogueira et al., 2020) seems more aligned with the model's nature, as it directly optimizes the model to predict the next token ("true/yes" for relevant, "false/no" for irrelevant) given the input query and candidate, rather than relying on a contrastive loss that compares the relevant and irrelevant candidates. There is as yet no clear consensus on which approach is better. To bridge this research gap, we conduct a theoretical analysis with an empirical comparison of the two approaches, and demonstrate that SFT outperforms CL in terms of performance.

Multimodal Information Retrieval. Multimodal retrieval aims to retrieve relevant candidates from, and conditioned on, modalities beyond text (Wang et al., 2024), and involves various sub-tasks such as image-text retrieval (Cao et al., 2022) and composed image retrieval (Song et al., 2025). Recent advancements in this field have shifted to a more generalized view, exploring universal multimodal retrieval (UMR) (Liu et al., 2023; Wei et al., 2025; Zhang et al., 2025a), which compiles a wide range of datasets and tasks into unified benchmarks. Retrievers driven by multimodal LLMs (Lin et al., 2025; Zhang et al., 2025a) have shown significant improvements in understanding and processing multimodal data, enabling more effective retrieval across different modalities. While the reranking stage is crucial for enhancing the precision of retrieval systems, it has been less studied in UMR (Lin et al., 2025). In this work, we investigate how to build better LLM reranking models, presenting state-of-the-art MLLM-based rerankers for UMR.

3 METHOD

In this work we analyze the contrastive learning (CL) and supervised fine-tuning (SFT) approaches to reranking, taking multimodal retrieval as the experimental playground. We first introduce our reranking model (§3.1) and its training by CL or SFT (§3.2), and then present our tools for analysis (§3.3).
3.1 RERANKER IMPLEMENTATION

Our general multimodal reranker (namely GMR) follows the conventional design of LLM-based point-wise reranking models. We employ a strong MLLM as the backbone, which can process diverse input modalities, encompassing images, text, and multimodal combinations.

Instruction-Aware Reranking. Given a query q and a document d, we set an instruction ins to describe detailed task objectives, which has proven highly effective in MLLM-based multimodal retrieval (Lin et al., 2025; Zhang et al., 2025a). For example, in the Visual Document Retrieval task (Ma et al., 2024a; Faysse et al., 2025), we use the instruction "Find a screenshot that relevant to the user's question." to guide the model to better evaluate the relevance between the query and the visual document. We list all instructions of our model in Appendix D.3. The inputs take the form (ins, q, d) and are formatted into the template shown in Figure 6 before being fed into the MLLM backbone.

Relevance Score Computation. In the SFT setting, given the task instruction ins, query q, and document d, our reranker takes the probability of the next token being "yes" rather than "no" as the relevance score σ. This can be formally expressed as

σ(ins, q, d) = exp(P("yes" | {ins, q, d})) / [ exp(P("yes" | {ins, q, d})) + exp(P("no" | {ins, q, d})) ],   (1)

where P("yes" | {ins, q, d}) and P("no" | {ins, q, d}) represent the scores of the next token being "yes" or "no", respectively, given the document and query as context. With such relevance scores, we can rerank all retrieved candidates more precisely. This method is more aligned with the generative nature of MLLMs and thus allows us to leverage their powerful understanding ability while providing an effective scoring mechanism for reranking. In the CL setting, the relevance score is the "yes" score only:

σ(ins, q, d) = P("yes" | {ins, q, d}).   (2)

3.2 RERANKER TRAINING

In reranking, each data example contains one query q, one relevant document (positive) d_0^+, and several irrelevant documents (negatives; the selection is described in Appendix C.3) {d_1^-, d_2^-, ..., d_N^-}. As shown in Figure 1, we explore both CL- and SFT-based training.

• Contrastive Learning: With the relevance score σ from Equation 2, we compute the InfoNCE loss (Oord et al., 2018) for each example:

  L_CL = -log [ exp(σ(ins, q, d_0^+)) / ( exp(σ(ins, q, d_0^+)) + Σ_i exp(σ(ins, q, d_i^-)) ) ].   (3)

• Supervised Fine-Tuning: The objective is to predict the correct next token (the relevance label) for each input pair independently. We reorganize one example into multiple triples (ins, q, d_i), each corresponding to a different d. We then predict the likelihood of "yes" and "no" for each triple and compute a per-triple cross-entropy loss with the token of the ground-truth label l:

  L_i^SFT = -log p(l | P({"yes", "no"} | {ins, q, d_i})),   (4)

  where P({"yes", "no"} | {ins, q, d_i}) denotes the likelihoods of "yes" and "no". The relevance label l is "yes" for positive documents and "no" for negatives. This loss encourages the model to assign higher probabilities to the correct tokens, thereby improving the ranking performance.
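To make the scoring concrete, below is a minimal PyTorch sketch of Equations 1 and 2. It assumes the backbone's final-position next-token logits are available as a [batch, vocab] tensor and that yes_id/no_id are the tokenizer ids of the "yes" and "no" tokens; these helper names are ours, not from the released code.

import torch

def sft_score(logits: torch.Tensor, yes_id: int, no_id: int) -> torch.Tensor:
    """Eq. (1): two-way softmax over the "yes"/"no" next-token logits.
    logits: [batch, vocab] at the final position of each (ins, q, d) input."""
    pair = torch.stack((logits[:, yes_id], logits[:, no_id]), dim=-1)  # [batch, 2]
    return torch.softmax(pair, dim=-1)[:, 0]

def cl_score(logits: torch.Tensor, yes_id: int) -> torch.Tensor:
    """Eq. (2): the "yes" score alone; it is exponentiated inside InfoNCE (Eq. 3)."""
    return logits[:, yes_id]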
3.3 LOSS FUNCTION DECOMPOSITION

We analyze the two reranking loss functions by decomposing them into two key components: weight and direction. With this decomposition, we perform probing experiments in §4.

Basic Notation. We denote the SFT-style data instance with a positive (resp. negative) document as o_0^+ = {ins, q, d_0^+} (resp. o_i^- = {ins, q, d_i^-}, i = 1, 2, ..., N). The reranker is conceptualized as two components: a mapping function f(·|θ) (parameterized by θ) that converts o_i to the feature representation h_i = f(o_i|θ), and a transformation M_y that maps h_i to the "yes"-token likelihood score s_y(h_i) = h_i · M_y. The "no"-token score s_n is computed analogously with M_n.

Unified View. From §3.2, the SFT loss is calculated separately for each positive or negative document of an example, while the CL loss is computed in an integrated manner across all positive and negative documents of the same example. To enable a fair comparison, we adopt the total loss L({o_i}_{i=0}^N, θ) over an entire example (with one positive and N negatives) as the unit of analysis. We thus have the gradient

∂L/∂θ = (∂L/∂h_0^+)(∂h_0^+/∂θ) + Σ_i (∂L/∂h_i^-)(∂h_i^-/∂θ),   (5)

where h_0^+ is the feature of the positive document and h_i^- that of the i-th negative. To understand the influence of the positive and the negatives on the model, we calculate the partial derivative of the loss function with respect to the hidden state. For CL, we only use the "yes" token, and by substituting the specific loss (Equation 3) into the gradient, we obtain the partial derivatives:

-∂L_CL/∂h_0^+ = [ Σ_i exp(s_y(h_i^-)) / ( exp(s_y(h_0^+)) + Σ_i exp(s_y(h_i^-)) ) ] M_y,   (6)

-∂L_CL/∂h_i^- = -[ exp(s_y(h_i^-)) / ( exp(s_y(h_0^+)) + Σ_i exp(s_y(h_i^-)) ) ] M_y.   (7)

For SFT, we first merge the per-pair losses of Equation 4 for one example into the total loss

L_SFT = -log [ exp(s_y(h_0^+)) / ( exp(s_y(h_0^+)) + exp(s_n(h_0^+)) ) ] - Σ_i log [ exp(s_n(h_i^-)) / ( exp(s_y(h_i^-)) + exp(s_n(h_i^-)) ) ].

We then have the partial derivatives

-∂L_SFT/∂h_0^+ = [ exp(s_n(h_0^+)) / ( exp(s_y(h_0^+)) + exp(s_n(h_0^+)) ) ] (M_y - M_n),   (8)

-∂L_SFT/∂h_i^- = -[ exp(s_y(h_i^-)) / ( exp(s_y(h_i^-)) + exp(s_n(h_i^-)) ) ] (M_y - M_n).   (9)

The complete derivation of the above is provided in Appendix A.2.

Loss Decomposition. As the gradients above look similar in form, we can break them down into two parts, weight and direction, which reflect the differences between CL and SFT.

• Weight W is a scalar that controls the magnitude of the updates. From Equations 6-9, we obtain the weights:

  W_CL^+ = Σ_i exp(s_y(h_i^-)) / ( exp(s_y(h_0^+)) + Σ_i exp(s_y(h_i^-)) ),   (10)
  W_CL^- = exp(s_y(h_i^-)) / ( exp(s_y(h_0^+)) + Σ_i exp(s_y(h_i^-)) ),   (11)
  W_SFT^+ = exp(s_n(h_0^+)) / ( exp(s_y(h_0^+)) + exp(s_n(h_0^+)) ),   (12)
  W_SFT^- = exp(s_y(h_i^-)) / ( exp(s_y(h_i^-)) + exp(s_n(h_i^-)) ).   (13)

  Compared with CL, W_SFT focuses only on the single document, without the interaction with all negatives of the same query that CL has.

• Direction D is a vector that controls the direction of the model updates. From Equations 6 and 8, the direction from the positive d^+ is D_CL^+ = M_y for CL and D_SFT^+ = M_y - M_n for SFT, while from Equations 7 and 9 the directions from the negatives are D_CL^- = -M_y and D_SFT^- = -(M_y - M_n). Apparently, for both CL and SFT, the update directions of the positive and the negatives are opposite.

Algorithm 1 Unified Reranking Loss
Require: inputs O ← {o_0^+, o_1^-, ..., o_N^-}
Ensure: loss value L
 1: M ← lm_head("yes", "no")
 2: logits ← M · f(O|θ)   ▷ shape [N+1, 2]; column 0 is "yes", column 1 is "no"
 //— weight branch —
 3: if weight = "sft" then
 4:   s ← Softmax(logits, over tokens)[:, "yes"].detach()
 5:   W^+ ← W_SFT^+ ← 1 - s[0];  W^- ← W_SFT^- ← s[1:]
 6: else  ▷ weight = "cl"
 7:   s ← Softmax(logits[:, "yes"], over documents).detach()
 8:   W^+ ← W_CL^+ ← 1 - s[0];  W^- ← W_CL^- ← s[1:]
 9: end if
 //— direction branch —
10: M_y ← logits[:, "yes"];  M_n ← logits[:, "no"]
11: if direction = "sft" then
12:   D^+ ← D_SFT^+ ← M_n[0] - M_y[0];  D^- ← D_SFT^- ← M_y[1:] - M_n[1:]
13: else  ▷ direction = "cl"
14:   D^+ ← D_CL^+ ← -M_y[0];  D^- ← D_CL^- ← M_y[1:]
15: end if
16: L ← mean( W^+ D^+ + Σ_i W_i^- D_i^- )
17: return L
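For concreteness, a runnable PyTorch rendering of Algorithm 1 might look as follows. It is a minimal sketch assuming precomputed "yes"/"no" logits for a single example (row 0 positive, rows 1..N negatives); batching and the exact reduction are simplified relative to the actual training code.

import torch

def unified_reranking_loss(logits: torch.Tensor,
                           weight: str = "sft",
                           direction: str = "sft") -> torch.Tensor:
    """logits: [N+1, 2]; column 0 is the "yes" logit s_y, column 1 the "no" logit s_n."""
    s_y, s_n = logits[:, 0], logits[:, 1]

    # Weight branch: detached, so the weights carry no gradient themselves.
    if weight == "sft":
        p_yes = torch.softmax(logits, dim=-1)[:, 0].detach()  # per-doc P("yes")
        w_pos, w_neg = 1.0 - p_yes[0], p_yes[1:]              # Eq. (12)-(13)
    else:  # "cl": softmax of the "yes" logits across the candidate list
        p = torch.softmax(s_y, dim=0).detach()
        w_pos, w_neg = 1.0 - p[0], p[1:]                      # Eq. (10)-(11)

    # Direction branch: scalar loss terms whose gradients give D^+ and D^-.
    if direction == "sft":
        d_pos, d_neg = s_n[0] - s_y[0], s_y[1:] - s_n[1:]
    else:  # "cl"
        d_pos, d_neg = -s_y[0], s_y[1:]

    return w_pos * d_pos + (w_neg * d_neg).sum()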
In summary, CL and SFT share similar direction components, and we believe that differing initializations² are insufficient to account for the performance differences. In contrast, CL computes the weight using all positive and negative documents within a sample, while SFT assigns weights independently per document, making this the likely key factor in the performance variation.

²M_y compared to M_y - M_n.

Unified Framework. Building on the above decomposition, we propose a unified reranking loss framework (URL), with pseudo-code provided in Algorithm 1. This framework allows us to independently analyze weight and direction, thereby facilitating a deeper understanding of the differences between the two training paradigms through controlled adjustments during computation. We then validate our analysis through probing experiments in the following §4.

4 ANALYSIS

In this section, we continue and validate the analysis in §3.3 through probing experiments. We choose universal multimodal retrieval as the testbed, compiling a new benchmark (MRB, §5.1) that includes single-modal tasks (text-to-text, image-to-image), cross-modal tasks (e.g., text-to-image), and fused-modal tasks (where either the query or the document may consist of text + image). We defer the description of the experimental settings and the evaluation benchmark to §5.1.

Figure 2: Performance comparison of the original implementations and our URL (differences between each original implementation and URL are not statistically significant, n.s.).

General Empirical Comparison. We first train both CL and SFT rerankers with the original implementations and with our URL framework to (1) find the winner in practice and (2) verify that URL faithfully reproduces the original implementations, supporting the subsequent analyses built on URL. As shown in Figure 2, under identical settings, SFT consistently outperforms CL. Meanwhile, URL yields performance statistically indistinguishable from the original implementations; it can thus be trusted in the following analysis.

         D_SFT   D_CL    ∆D
W_SFT    58.09   57.88   ▼0.21
W_CL     56.99   56.40   ▼0.59
∆W       ▼1.10   ▼1.48

Table 1: MRB results of all loss-component combinations, where the weight W delivers the dominant influence on performance.

Weight W Dominates Performance. To investigate why SFT outperforms CL, we first dissect the contributions of weight and direction. In Table 1, we train the model with all combinations via URL. We observe that the improvements from the weight (i.e., ∆W) are more significant than those from the direction (∆D). This suggests that the weight W is the dominant factor in the performance gap between SFT and CL, guiding us to focus on the weight in the following section. However, the direction also contributes to the gap, which is investigated in §4.2.

4.1 FUNCTION OF WEIGHT

To figure out why W_CL is less effective than W_SFT and what the function of W is, we start from an observation of Chen et al. (2021): in small-batch CL training with InfoNCE, gradients can shrink to a very small scale, close to random precision errors, and thus cease to provide effective learning guidance. We suppose this is even more salient in reranking, where small batch sizes are common³. We validate their findings by training a CL model with fully half-precision loss computation, which yields degraded performance compared to precision-safe training (refer to Appendix B.2). Back in our framework, W controls the step size of model updates, that is, the gradient scale. According to Chen et al. (2021), W_CL should be small during training.
And we expect W_SFT to be larger than W_CL, providing a better optimization signal, given that SFT presents better performance. To verify this, we plot the W of CL and SFT during training in Figure 3, where W_CL indeed shows relatively small values. SFT provides larger (better) W than CL, thereby achieving stronger empirical performance. Equations 10-13 also show that W_SFT is larger than W_CL, since the denominator of W_CL involves a sum over all negatives while the denominator of W_SFT only involves the current instance.

Figure 3: Evolution of the average weights of positives and negatives during training for SFT and CL (x-axis: training step, 0-2,500; y-axis: average weight, 0-1).

³Consider a batch of instances {O_1, ..., O_j} forwarded simultaneously during training with k negatives per sample. While dense retrieval can reach a negative size of j·(k+1) per instance, reranking models are limited to k+1. Furthermore, the larger number of input tokens at the reranking stage, compared to dense retrieval, imposes additional memory constraints, further reducing the feasible negative size.

Next, we investigate the fine-grained function of W. To create a cleaner analysis setting, we fix the direction in URL to D_SFT, as it performs better. Following Chen et al. (2021), we first take as a baseline (W_base) weights whose totals over the positive and over the negatives are the fixed constant 1:

W^+ = 1,   W_j^- = exp(s(h_j^-)) / Σ_j exp(s(h_j^-)),   Σ_j W_j^- = 1.   (14)

Although the earlier analysis suggests that a larger W is preferable, the value 1 never appears in Figure 3, so we expect this setting to perform poorly. The experiment in Table 2 confirms this. Hence, we suppose that W should lie in a reasonable range. Meanwhile, the failure of constant W indicates that instance-specific adjustment is necessary: the model should update less on already-mastered instances and more on those it has not yet grasped.

No.  Method     Avg    ∆
1    W_base     49.47  –
2    + τ mask   56.57  ▲7.10
3    + W_CL     56.23  ▲6.76
4    + W_SFT    58.19  ▲8.72

Table 2: Evaluation of weight properties. ∆ denotes the performance gain relative to W_base.

We adopt the predicted relevance scores s as a guide and apply a masking rule: if a positive score is high enough, i.e., s(h_0^+) > 1 - τ (or, conversely, a negative score is low enough, s(h_j^-) < τ), we set W^+ = 0 (resp. W_j^- = 0) to halt further learning on that instance. In addition, we replace the baseline weights with W_CL and W_SFT and conduct training under the same conditions. The results are shown in Table 2: the simple masking rule already provides strong performance, comparable to CL. This indicates that both CL and SFT follow the instance-specific weighting behavior described above. More details of the experiment can be found in Appendix B.2.
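As a toy numerical illustration of the scale difference (made-up logits, not values from our experiments): each CL negative weight shares one softmax mass over the whole candidate list, so it shrinks as N grows, while each SFT weight is an independent two-way softmax per document.

import torch

# One example with a positive (index 0) and N = 4 negatives.
s_y = torch.tensor([2.0, 1.5, 1.0, 0.5, 0.0])  # "yes" logits
s_n = torch.tensor([0.0, 1.0, 1.5, 2.0, 2.5])  # "no" logits

w_pos_cl = s_y[1:].exp().sum() / s_y.exp().sum()           # Eq. (10)
w_neg_cl = s_y[1:].exp() / s_y.exp().sum()                 # Eq. (11)
w_pos_sft = s_n[0].exp() / (s_y[0].exp() + s_n[0].exp())   # Eq. (12)
w_neg_sft = s_y[1:].exp() / (s_y[1:].exp() + s_n[1:].exp())  # Eq. (13)

print(w_pos_cl, w_neg_cl)    # CL negatives split one softmax mass
print(w_pos_sft, w_neg_sft)  # SFT weights are per-document and not diluted by N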
4.2 SEARCHING FOR A BETTER DIRECTION

Results in Table 1 indicate that the direction component also affects model performance, but it is not the dominant factor. Here we conduct additional experiments to try to find a better direction.

Figure 4: Results with different token numbers in SFT (2/4/8/16 tokens yield 57.97/57.88/57.90/58.02). The setting with 2 tokens is the standard SFT training.

Does adding more tokens improve performance? SFT-based training is in effect a binary classification over the token labels, where D_SFT only involves the "yes" and "no" tokens. One natural question is whether adding more tokens (e.g., "true", "false", "maybe", etc.) during training could improve the direction component and model performance. To investigate this, we randomly select 10,000 training instances and identify the top 16 tokens with the highest logits in the model's output, including "yes" and "no". For a comprehensive list of these tokens and details, please refer to Appendix B.3. We then train the model using this expanded token set. Figure 4 presents the results, which indicate that increasing the number of tokens does not significantly impact model performance. This suggests that using only the "yes" and "no" tokens is sufficient for effective SFT.

Weight   Direction   Perf.   ∆
W_SFT    D_SFT       58.09   –
         D_Rand.     56.75   ▼1.34
W_CL     D_CL        56.40   –
         D_Rand.     57.72   ▲1.32

Table 3: Performance comparison of SFT and CL directions against random initialization D_Rand..

Is it possible to learn a better direction? The direction components, in essence, correspond to the token embeddings of the LLM, which are pre-trained and kept frozen during training. Before LLMs, CL-based rerankers often learned a score-projection matrix from scratch. To see whether this still helps, we implement a randomly-initialized learnable projection D_Rand. in URL. Table 3 shows that, for CL models, it does improve performance, yet still trails behind SFT. For SFT models, however, the strategy hurts performance. This is in line with intuition: SFT is trained to predict the "yes"/"no" tokens, so replacing them with a randomly-initialized projection loses the semantic signal of the pre-trained token embeddings.

Model      Size    T→T(14)  I→I(1)  T→I(4)  T→VD(5)  I→T(5)  T→IT(2)  IT→T(4)  IT→I(2)  IT→IT(3)  ALL(40)
GME-2B     2.21B   49.59    30.75   48.46   66.39    52.62   77.02    39.88    36.70    66.89     52.54
Qwen3      4.02B   60.49    –       –       –        –       –        –        –        –         –
Jina-m0    2.21B   55.36    27.50   59.46   73.13    55.43   74.95    27.82    37.65    51.54     54.36
MonoQwen   2.21B   48.89    12.59   58.73   71.29    19.62   76.46    14.35    31.75    35.83     44.20
GMR-3B     3.75B   59.22    29.76   58.85   72.38    63.06   81.96    48.81    43.97    79.08     61.40
GMR-7B     8.29B   61.08    32.83   61.18   72.94    66.61   84.55    53.29    47.39    82.19     63.85

Table 4: Performance of different models on MRB. Each column corresponds to a task category (Single-Modal: T→T, I→I; Cross-Modal: T→I, T→VD, I→T; Fused-Modal: T→IT, IT→T, IT→I, IT→IT), with the number of test sets indicated in parentheses. Evaluation metrics are provided in Appendix E.1. We adopt GME-2B as the retrieval backbone, while all other models rerank the top-100 retrieved candidates. In the original formatting, the best and second-best results among the reranking models are highlighted.

5 EXPERIMENTS

5.1 SETTINGS

Training Dataset. To develop a universal multimodal reranking model, we follow the settings of GME and curate training data from three categories: single-modal data (T→T, I→I), cross-modal data (I↔T, T→VD), and fused-modal data (IT↔T, IT→I, IT→IT). In total, we compile approximately 1.5 million training instances from diverse sources, including M-BEIR (Wei et al., 2025), ViDoRe (Faysse et al., 2025), ImageNet-1K (Deng et al., 2009), E-VQA (Mensink et al., 2023), and MS MARCO (Nguyen et al., 2016). To ensure fairness and efficiency in the comparative experiments reported in §4, we additionally construct a balanced and category-representative subset consisting of about 270K samples drawn from the full training dataset. The models GMR-3B and GMR-7B are trained on the complete dataset to achieve optimal performance, whereas the models evaluated in §4 are trained on the constructed subset. Details can be found in Appendix C.1.
MRB Benchmark. To facilitate a more rigorous evaluation of model performance, we construct the MRB benchmark, which comprises 40 test datasets sourced from BEIR (Kamalloo et al., 2024), UMRB (Zhang et al., 2025a), ViDoRe (Faysse et al., 2025; Macé et al., 2025), and MIEB (Xiao et al., 2025). Collectively, these datasets span diverse modalities, domains, and task types, ensuring that the benchmark provides a comprehensive and representative assessment of model generalization. To more clearly highlight performance differences among models, we exclude test datasets on which GME-2B exhibits exceptionally high performance. A detailed description of the MRB benchmark composition is provided in Appendix C.2.

Training Configuration. We adopt the Qwen2.5-VL-Instruct (Team, 2025) model series as the backbone of our multimodal large language model (MLLM), and conduct training at both 3-billion (3B) and 7-billion (7B) parameter scales. For efficient adaptation, we employ Low-Rank Adaptation (LoRA) with a rank of 16 and a learning rate of 1e-4. As evidenced by the comparative results in §4, within the domain of multimodal LLM reranking, SFT consistently outperforms CL; consequently, we adopt SFT as the training strategy for our GMR series models.

During training, we set the maximum input length to 3,200 tokens. Each training sample is paired with 16 negative instances for the GMR-3B and GMR-7B models, and with 4 negative instances for the models mentioned in §4. Regarding the selection of negatives, we employ two strategies, Random Selection and Hard Mining, maintaining a balanced 1:1 ratio between them. Further details on the negative sampling strategy are provided in Appendix C.3. To optimize GPU memory usage, we train the models in bfloat16 precision. All experiments were conducted on eight NVIDIA A100 GPUs, each with 80 GB of memory.
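Below is a sketch of this adaptation setup (LoRA rank 16, learning rate 1e-4, bfloat16), assuming the Hugging Face transformers and peft libraries; the LoRA alpha and target modules shown are illustrative assumptions rather than our exact configuration.

import torch
from transformers import Qwen2_5_VLForConditionalGeneration
from peft import LoraConfig, get_peft_model

# Load the 3B backbone in bfloat16 to reduce GPU memory usage.
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-3B-Instruct", torch_dtype=torch.bfloat16)

# LoRA with rank 16; alpha and target modules are illustrative here.
lora = LoraConfig(r=16, lora_alpha=32,
                  target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)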
To identify an appropriate setting under our com- putational budget, we experiment with varying numbers of neg- atives (Figure 5). Performance consistently improves with more negatives, peaking at 16. Moreover, SFT outperforms CL across all settings (Appendix E.2). Based on these results, we set the number of negatives to 16 in training. Given the impact of ran- dom initialization on performance (§4.2), we also conduct an ab- lation on freezing the LM head (Appendix E.3) and find that has no effect on SFT performance. We next examine the evaluation results. Table 4 presents a comprehensive overview of the baseline systems’ performance. The reported scores are averaged across the respective sub-tasks and are organized according to the retrieval modality: Single-Modal, Cross-Modal, and Fused-Modal. For completeness, the overall micro-average score across all sub-tasks is provided in the final column. Achieve state-of-the-art performance in universal multimodal reranking. Analyzing the aver- age metrics, our smaller model, GMR-3B, exhibits superior results compared to the fused-modal reranking model (Jina-Rerank-m0). The larger GMR-7B further elevates this performance, under- scoring the efficacy in addressing universal multimodal reranking challenges. Rival and surpass leading textual reranker. We conduct a comparative analysis with the state-of- the-art textual reranking model, Qwen3-Reranker, which is specifically optimized for the T→T task within the Single-Modal category and comprises approximately 4 billion parameters. Our smaller model exhibited similar performance metrics when evaluated against models of similar parameter scale. Notably, our larger model surpass the performance of Qwen3-Reranker, providing strong empirical evidence for the efficacy of our proposed methodology. Adapt seamlessly to visual-document reranking. We compare with the visual document reranking model, MonoQwen2-VL-v0.1, which is specifically tailored for the T→VD task. Our proposed models demonstrate performance metrics that are surpass those of this task-specific baseline, which suggests a promising direction for developing more efficient and adaptable information re-reanking systems that can seamlessly handle diverse modalities within a single architecture. 6 CONCLUSION In summary, our study shows that supervised fine-tuning (SFT) consistently outperforms contrastive learning (CL) for LLM-based reranking. By decomposing the loss into weight and direction compo- nents, we find that the weight term primarily drives performance gains by strengthening optimization signals and providing input-specific guidance. While SFT’s directional component is nearly opti- mal, CL requires learning a score-projection matrix to achieve comparable results. Building on these insights, we develop the GMR-3B and GMR-7B models, which set new state-of-the-art results on the MRB benchmark covering 40 datasets. By releasing MRB, our models, and code, we pro- vide a solid foundation for future research in large-scale multimodal retrieval and universal LLM reranking, underscoring both methodological and practical significance. 4https://huggingface.co/jinaai/jina-reranker-m0 9 REFERENCES Min Cao, Shiping Li, Juntao Li, Liqiang Nie, and Min Zhang. Image-text retrieval: A survey on re- cent research and development. In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI-22, pp. 5410–5417, 7 2022. doi: 10.24963/ijcai.2022/759. URL https://doi.org/10.24963/ijcai.2022/759. Survey Track. 
Antoine Chaffin and Aurélien Lac. Monoqwen: Visual document reranking, 2024. URL https://huggingface.co/lightonai/MonoQwen2-VL-v0.1.

Jianlyu Chen, Shitao Xiao, Peitian Zhang, Kun Luo, Defu Lian, and Zheng Liu. M3-embedding: Multi-linguality, multi-functionality, multi-granularity text embeddings through self-knowledge distillation. In Findings of the Association for Computational Linguistics: ACL 2024, pp. 2318-2335, Bangkok, Thailand, August 2024. URL https://aclanthology.org/2024.findings-acl.137/.

Junya Chen, Zhe Gan, Xuan Li, Qing Guo, Liqun Chen, Shuyang Gao, Tagyoung Chung, Yi Xu, Belinda Zeng, Wenlian Lu, Fan Li, Lawrence Carin, and Chenyang Tao. Simpler, faster, stronger: Breaking the log-k curse on contrastive learners with flatnce, 2021. URL https://arxiv.org/abs/2107.01152.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248-255, 2009. doi: 10.1109/CVPR.2009.5206848.

Manuel Faysse, Hugues Sibille, Tony Wu, Bilel Omrani, Gautier Viaud, Céline Hudelot, and Pierre Colombo. Colpali: Efficient document retrieval with vision language models. In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=ogjBpZ8uSi.

Fang Guo, Wenyu Li, Honglei Zhuang, Yun Luo, Yafu Li, Le Yan, Qi Zhu, and Yue Zhang. Mcranker: Generating diverse criteria on-the-fly to improve pointwise llm rankers. In Proceedings of the Eighteenth ACM International Conference on Web Search and Data Mining, pp. 944-953, Hannover, Germany, 2025. doi: 10.1145/3701551.3703583. URL https://doi.org/10.1145/3701551.3703583.

Ehsan Kamalloo, Nandan Thakur, Carlos Lassance, Xueguang Ma, Jheng-Hong Yang, and Jimmy Lin. Resources for brewing beir: Reproducible reference models and statistical analyses. In Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '24, pp. 1431-1440, New York, NY, USA, 2024. Association for Computing Machinery. ISBN 9798400704314. doi: 10.1145/3626772.3657862. URL https://doi.org/10.1145/3626772.3657862.

Jimmy Lin, Rodrigo Nogueira, and Andrew Yates. Pretrained transformers for text ranking: Bert and beyond. Springer Nature, 2022.

Sheng-Chieh Lin, Chankyu Lee, Mohammad Shoeybi, Jimmy Lin, Bryan Catanzaro, and Wei Ping. MM-EMBED: Universal multimodal retrieval with multimodal LLMs. In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=i45NQb2iKO.

Qi Liu, Bo Wang, Nan Wang, and Jiaxin Mao. Leveraging passage embeddings for efficient listwise reranking with large language models. In Proceedings of the ACM on Web Conference 2025, pp. 4274-4283, Sydney NSW, Australia, 2025. doi: 10.1145/3696410.3714554. URL https://doi.org/10.1145/3696410.3714554.

Zhenghao Liu, Chenyan Xiong, Yuanhuiyi Lv, Zhiyuan Liu, and Ge Yu. Universal vision-language dense retrieval: Learning a unified representation space for multi-modal retrieval. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=PQOlkgsBsik.

Xueguang Ma, Sheng-Chieh Lin, Minghan Li, Wenhu Chen, and Jimmy Lin. Unifying multimodal retrieval via document screenshot embedding. In Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen (eds.), Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pp.
6492-6505, Miami, Florida, USA, November 2024a. Association for Computational Linguistics. doi: 10.18653/v1/2024.emnlp-main.373. URL https://aclanthology.org/2024.emnlp-main.373/.

Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, and Jimmy Lin. Fine-tuning llama for multi-stage text retrieval. In Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 2421-2425, Washington DC, USA, 2024b. doi: 10.1145/3626772.3657951. URL https://doi.org/10.1145/3626772.3657951.

Quentin Macé, António Loison, and Manuel Faysse. Vidore benchmark v2: Raising the bar for visual retrieval, 2025. URL https://arxiv.org/abs/2505.17166.

Thomas Mensink, Jasper Uijlings, Lluis Castrejon, Arushi Goel, Felipe Cadar, Howard Zhou, Fei Sha, André Araujo, and Vittorio Ferrari. Encyclopedic vqa: Visual questions about detailed properties of fine-grained categories. In 2023 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 3090-3101, 2023. doi: 10.1109/ICCV51070.2023.00289.

Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. MS MARCO: A human generated machine reading comprehension dataset. CoRR, abs/1611.09268, 2016. URL http://arxiv.org/abs/1611.09268.

Rodrigo Nogueira, Wei Yang, Kyunghyun Cho, and Jimmy Lin. Multi-stage document ranking with bert. arXiv preprint arXiv:1910.14424, 2019.

Rodrigo Nogueira, Zhiying Jiang, Ronak Pradeep, and Jimmy Lin. Document ranking with a pretrained sequence-to-sequence model. In Findings of the Association for Computational Linguistics: EMNLP 2020, pp. 708-718, Online, November 2020. doi: 10.18653/v1/2020.findings-emnlp.63. URL https://aclanthology.org/2020.findings-emnlp.63/.

Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.

Ruiyang Ren, Yuhao Wang, Kun Zhou, Wayne Xin Zhao, Wenjie Wang, Jing Liu, Ji-Rong Wen, and Tat-Seng Chua. Self-calibrated listwise reranking with large language models. In Proceedings of the ACM on Web Conference 2025, pp. 3692-3701, Sydney NSW, Australia, 2025. doi: 10.1145/3696410.3714658. URL https://doi.org/10.1145/3696410.3714658.

Sahel Sharifymoghaddam, Ronak Pradeep, Andre Slavescu, Ryan Nguyen, Andrew Xu, Zijian Chen, Yilin Zhang, Yidi Chen, Jasper Xian, and Jimmy Lin. RankLLM: A python package for reranking with llms. arXiv:2505.19284, 2025.

Xuemeng Song, Haoqiang Lin, Haokun Wen, Bohan Hou, Mingzhu Xu, and Liqiang Nie. A comprehensive survey on composed image retrieval. arXiv preprint arXiv:2502.18495, 2025.

Qwen Team. Qwen2.5-vl, January 2025. URL https://qwenlm.github.io/blog/qwen2.5-vl/.

Tianshi Wang, Fengling Li, Lei Zhu, Jingjing Li, Zheng Zhang, and Heng Tao Shen. Cross-modal retrieval: A systematic review of methods and future directions. Proceedings of the IEEE, 112(11):1716-1754, 2024. doi: 10.1109/JPROC.2024.3525147.

Cong Wei, Yang Chen, Haonan Chen, Hexiang Hu, Ge Zhang, Jie Fu, Alan Ritter, and Wenhu Chen. Uniir: Training and benchmarking universal multimodal information retrievers. In Proceedings of the 18th European Conference on Computer Vision, pp. 387-404, Milan, Italy, 2025.

Chenghao Xiao, Isaac Chung, Imene Kerboua, Jamie Stirling, Xin Zhang, Márton Kardos, Roman Solomatin, Noura Al Moubayed, Kenneth Enevoldsen, and Niklas Muennighoff. Mieb: Massive image embedding benchmark, 2025. URL https://arxiv.org/abs/2504.10471.
Mingjun Xu, Jinhan Dong, Jue Hou, Zehui Wang, Sihang Li, Zhifeng Gao, Renxin Zhong, and Hengxing Cai. MM-R5: Multimodal reasoning-enhanced reranker via reinforcement learning for document retrieval. arXiv preprint arXiv:2506.12364, 2025.

Xin Zhang, Yanzhao Zhang, Dingkun Long, Wen Xie, Ziqi Dai, Jialong Tang, Huan Lin, Baosong Yang, Pengjun Xie, Fei Huang, Meishan Zhang, Wenjie Li, and Min Zhang. mGTE: Generalized long-context text representation and reranking models for multilingual text retrieval. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track, pp. 1393–1412, Miami, Florida, US, November 2024. URL https://aclanthology.org/2024.emnlp-industry.103/.

Xin Zhang, Yanzhao Zhang, Wen Xie, Mingxin Li, Ziqi Dai, Dingkun Long, Pengjun Xie, Meishan Zhang, Wenjie Li, and Min Zhang. Bridging modalities: Improving universal multimodal retrieval by multimodal large language models. In Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), pp. 9274–9285, June 2025a.

Yanzhao Zhang, Mingxin Li, Dingkun Long, Xin Zhang, Huan Lin, Baosong Yang, Pengjun Xie, An Yang, Dayiheng Liu, Junyang Lin, et al. Qwen3 Embedding: Advancing text embedding and reranking through foundation models. arXiv preprint arXiv:2506.05176, 2025b.

Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, and Guido Zuccon. A setwise approach for effective and highly efficient zero-shot ranking with large language models. In Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 38–47, Washington DC, USA, 2024. doi: 10.1145/3626772.3657813. URL https://doi.org/10.1145/3626772.3657813.

APPENDIX

A METHOD DETAILS

A.1 GMR INPUT TEMPLATE

Following a chat-based template, the prompt formulates a binary classification task by providing the model with a specific Instruction, Query, and Document for evaluation, as shown in Figure 6.

<|im_start|>system
Judge whether the Document meets the requirements based on the Query and the Instruct provided. Note that the answer can only be "yes" or "no".<|im_end|>
<|im_start|>user
<Instruction>: {Instruction}
<Query>: {Query}
<Document>: {Document}<|im_end|>
<|im_start|>assistant

Figure 6: The structured input template for GMR series models.
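As a concrete illustration, the following minimal Python sketch assembles one (Instruction, Query, Document) triple into the Figure 6 template; the function name and the example strings are illustrative, not part of the released GMR code.

```python
def build_gmr_prompt(instruction: str, query: str, document: str) -> str:
    """Assemble one (Instruction, Query, Document) triple into the Figure 6 chat template."""
    return (
        "<|im_start|>system\n"
        "Judge whether the Document meets the requirements based on the Query and "
        'the Instruct provided. Note that the answer can only be "yes" or "no".<|im_end|>\n'
        "<|im_start|>user\n"
        f"<Instruction>: {instruction}\n"
        f"<Query>: {query}\n"
        f"<Document>: {document}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

# Example usage with an MS MARCO-style instruction (illustrative strings):
print(build_gmr_prompt(
    "Given a question, retrieve relevant passages that answer the question.",
    "what is multimodal reranking?",
    "Reranking reorders the candidates returned by a first-stage retriever...",
))
```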
A.2 LOSS FUNCTION DECOMPOSITION

In this section, we elaborate on the derivation of the equations in §3.3. Throughout, recall that in the CL setting $\sigma(ins, q, d) = s_y(h)$, and that $\partial s_y(h)/\partial h = M_y$ and $\partial s_n(h)/\partial h = M_n$.

• Equation 6:
$$
\begin{aligned}
-\frac{\partial \mathcal{L}_{\mathrm{CL}}}{\partial h_0^{+}}
&= -\frac{\partial \mathcal{L}_{\mathrm{CL}}}{\partial s_y(h_0^{+})}\,\frac{\partial s_y(h_0^{+})}{\partial h_0^{+}}
 = \frac{\partial}{\partial s_y(h_0^{+})}\left[\log\frac{\exp(s_y(h_0^{+}))}{\exp(s_y(h_0^{+}))+\sum_i \exp(s_y(h_i^{-}))}\right]\frac{\partial s_y(h_0^{+})}{\partial h_0^{+}} \\
&= \frac{\exp(s_y(h_0^{+}))+\sum_i \exp(s_y(h_i^{-}))}{\exp(s_y(h_0^{+}))}\cdot
   \frac{\sum_i \exp(s_y(h_i^{-}))}{\left(\exp(s_y(h_0^{+}))+\sum_i \exp(s_y(h_i^{-}))\right)^{2}}\cdot
   \exp(s_y(h_0^{+}))\cdot\frac{\partial s_y(h_0^{+})}{\partial h_0^{+}} \\
&= \frac{\sum_j \exp(s_y(h_j^{-}))}{\exp(s_y(h_0^{+}))+\sum_j \exp(s_y(h_j^{-}))}\,M_y
\end{aligned}
\qquad (15)
$$

• Equation 7:
$$
\begin{aligned}
-\frac{\partial \mathcal{L}_{\mathrm{CL}}}{\partial h_i^{-}}
&= -\frac{\partial \mathcal{L}_{\mathrm{CL}}}{\partial s_y(h_i^{-})}\,\frac{\partial s_y(h_i^{-})}{\partial h_i^{-}}
 = \frac{\partial}{\partial s_y(h_i^{-})}\left[\log\frac{\exp(s_y(h_0^{+}))}{\exp(s_y(h_0^{+}))+\sum_j \exp(s_y(h_j^{-}))}\right]\frac{\partial s_y(h_i^{-})}{\partial h_i^{-}} \\
&= \frac{\exp(s_y(h_0^{+}))+\sum_j \exp(s_y(h_j^{-}))}{\exp(s_y(h_0^{+}))}\cdot
   \left(-\frac{\exp(s_y(h_0^{+}))}{\left(\exp(s_y(h_0^{+}))+\sum_j \exp(s_y(h_j^{-}))\right)^{2}}\right)\cdot
   \exp(s_y(h_i^{-}))\cdot\frac{\partial s_y(h_i^{-})}{\partial h_i^{-}} \\
&= -\frac{\exp(s_y(h_i^{-}))}{\exp(s_y(h_0^{+}))+\sum_j \exp(s_y(h_j^{-}))}\,M_y
\end{aligned}
\qquad (16)
$$

• Equation 8:
$$
\begin{aligned}
-\frac{\partial \mathcal{L}_{\mathrm{SFT}}}{\partial h_0^{+}}
&= -\frac{\partial \mathcal{L}_{\mathrm{SFT}}}{\partial s_y(h_0^{+})}\,\frac{\partial s_y(h_0^{+})}{\partial h_0^{+}}
   -\frac{\partial \mathcal{L}_{\mathrm{SFT}}}{\partial s_n(h_0^{+})}\,\frac{\partial s_n(h_0^{+})}{\partial h_0^{+}} \\
&= \frac{\partial}{\partial s_y(h_0^{+})}\left[\log\frac{\exp(s_y(h_0^{+}))}{\exp(s_y(h_0^{+}))+\exp(s_n(h_0^{+}))}\right]M_y
 + \frac{\partial}{\partial s_n(h_0^{+})}\left[\log\frac{\exp(s_y(h_0^{+}))}{\exp(s_y(h_0^{+}))+\exp(s_n(h_0^{+}))}\right]M_n \\
&= \frac{\exp(s_n(h_0^{+}))}{\exp(s_y(h_0^{+}))+\exp(s_n(h_0^{+}))}\,M_y
 - \frac{\exp(s_n(h_0^{+}))}{\exp(s_y(h_0^{+}))+\exp(s_n(h_0^{+}))}\,M_n \\
&= \frac{\exp(s_n(h_0^{+}))}{\exp(s_y(h_0^{+}))+\exp(s_n(h_0^{+}))}\,(M_y - M_n)
\end{aligned}
\qquad (17)
$$

• Equation 9:
$$
\begin{aligned}
-\frac{\partial \mathcal{L}_{\mathrm{SFT}}}{\partial h_i^{-}}
&= -\frac{\partial \mathcal{L}_{\mathrm{SFT}}}{\partial s_y(h_i^{-})}\,\frac{\partial s_y(h_i^{-})}{\partial h_i^{-}}
   -\frac{\partial \mathcal{L}_{\mathrm{SFT}}}{\partial s_n(h_i^{-})}\,\frac{\partial s_n(h_i^{-})}{\partial h_i^{-}} \\
&= \frac{\partial}{\partial s_y(h_i^{-})}\left[\log\frac{\exp(s_n(h_i^{-}))}{\exp(s_y(h_i^{-}))+\exp(s_n(h_i^{-}))}\right]M_y
 + \frac{\partial}{\partial s_n(h_i^{-})}\left[\log\frac{\exp(s_n(h_i^{-}))}{\exp(s_y(h_i^{-}))+\exp(s_n(h_i^{-}))}\right]M_n \\
&= -\frac{\exp(s_y(h_i^{-}))}{\exp(s_y(h_i^{-}))+\exp(s_n(h_i^{-}))}\,(M_y - M_n)
\end{aligned}
\qquad (18)
$$
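These closed forms can be checked numerically. The sketch below (ours, not the paper's training code) treats the features as leaf tensors, defines $s_y(h) = h \cdot M_y$ and $s_n(h) = h \cdot M_n$, and verifies autograd gradients against Equations 15 and 17.

```python
import torch

torch.manual_seed(0)
d = 8                                            # feature dimension (illustrative)
My, Mn = torch.randn(d), torch.randn(d)          # "yes"/"no" rows of the LM head
h_pos = torch.randn(d, requires_grad=True)       # h_0^+
h_neg = torch.randn(4, d, requires_grad=True)    # h_1^-, ..., h_4^-

# CL loss (Equation 3 with sigma = s_y): InfoNCE over one positive and N negatives
s_pos, s_neg = h_pos @ My, h_neg @ My
loss_cl = -torch.log(s_pos.exp() / (s_pos.exp() + s_neg.exp().sum()))
loss_cl.backward()

# Closed form from Equation 15: -dL/dh_0^+ = W_CL^+ * M_y
w_cl_pos = s_neg.exp().sum() / (s_pos.exp() + s_neg.exp().sum())
assert torch.allclose(-h_pos.grad, w_cl_pos * My, atol=1e-5)

# SFT loss on the positive (Equation 4): binary softmax over the "yes"/"no" scores
h = h_pos.detach().clone().requires_grad_(True)
sy, sn = h @ My, h @ Mn
loss_sft = -torch.log(sy.exp() / (sy.exp() + sn.exp()))
loss_sft.backward()

# Closed form from Equation 17: -dL/dh_0^+ = W_SFT^+ * (M_y - M_n)
w_sft_pos = sn.exp() / (sy.exp() + sn.exp())
assert torch.allclose(-h.grad, w_sft_pos * (My - Mn), atol=1e-5)
print("autograd matches Equations 15 and 17")
```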
B ANALYSIS EXPERIMENT

B.1 THE INFLUENCE OF PRECISION ON CL

Method   Precision   Avg     ∆
CL       FP16        56.09   -
         FP32        56.40   ▲0.31

Table 5: Impact of precision on Contrastive Learning's performance.

We validate the findings of FlatNCE (Chen et al., 2021) by performing fully half-precision loss computation for the contrastive learning (CL) model. Specifically, we train the model in BF16 and, in the loss computation (refer to Algorithm 1), control all other variables while varying the precision of the weight computations between FP16 and FP32 to assess the impact on model performance. The results show that FP32 yields better performance than FP16, confirming that computational precision significantly affects the effectiveness of contrastive learning.

B.2 FUNCTION OF WEIGHT

Method      τ      Avg
w/ τ mask   1e-2   55.07
            1e-3   56.57
            1e-4   55.89

Table 6: The performance of the model under different values of τ.

To investigate the role of the weight, we first define $s(h_i) = \frac{\exp(s_y(h_i))}{\exp(s_y(h_i)) + \exp(s_n(h_i))}$. Since $s(h_i)$ is bounded within [0, 1], prior experience with embedding models suggests that an appropriate scaling factor is necessary to accelerate model convergence. Therefore, we introduce a temperature parameter $\beta = 5\times10^{-2}$ into Equation 14, yielding $W_j^- = \frac{\exp(s(h_j^-)/\beta)}{\sum_j \exp(s(h_j^-)/\beta)}$. In addition, for experiments involving the masking rule, we vary $\tau \in \{10^{-2}, 10^{-3}, 10^{-4}\}$ to identify the configuration that achieves optimal performance. For the experiment with $W_{CL}$, we follow Equations 10 and 11; consistent with the requirements of contrastive learning, the positive and negative weights must satisfy the constraint $W^+ = \sum_j W_j^-$. Since directly setting $W^+ = W^+_{Base} \cdot W^+_{CL}$ would violate this condition, we instead use $W^+ = W^+_{CL}$ for the comparison with $W_{Base}$. For the experiment with $W_{SFT}$, we aim to demonstrate that $W_{SFT}$ can effectively enhance the performance of $W_{Base}$; following Equations 12 and 13, we set $W^+ = W^+_{Base} \cdot W^+_{SFT}$ and evaluate its impact on model performance.
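For concreteness, here is a small sketch of our reading of the tempered weights and the τ mask; the helper name is hypothetical, and we assume the mask simply zeroes the weight of already-mastered instances.

```python
import torch

def base_weights_with_mask(s_pos, s_neg, beta=5e-2, tau=1e-3):
    """Tempered W_Base weights (Equation 14 with temperature beta) plus the tau mask.

    s_pos: scalar tensor, relevance score s(h_0^+) in [0, 1]
    s_neg: 1-D tensor of negative scores s(h_j^-) in [0, 1]
    """
    w_neg = torch.softmax(s_neg / beta, dim=0)   # W_j^-: sums to 1 over the negatives
    w_pos = torch.ones(())                       # W^+ = 1 in the W_Base setting
    if s_pos > 1 - tau:                          # positive already scored high enough
        w_pos = torch.zeros(())
    # negatives that are already well separated contribute no update
    w_neg = torch.where(s_neg < tau, torch.zeros_like(w_neg), w_neg)
    return w_pos, w_neg

w_pos, w_neg = base_weights_with_mask(torch.tensor(0.9), torch.tensor([0.4, 0.2, 1e-5]))
print(w_pos, w_neg)   # the near-zero negative is masked out
```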
B.3 THE INFLUENCE OF TOKEN SELECTION

To examine whether introducing additional tokens during training can enhance the directionality component and improve model performance, we randomly sample 10,000 instances together with their corresponding positives and negatives. Based on the model outputs, we identify the top 16 tokens with the highest average logits, which include "yes" and "no". The remaining tokens in this set are: {"No", "Yes", "NO", "YES", "The", "None", "In", "Answer", "This", "To", "Not", "not", "There", "-no"}.

C EXPERIMENT SETTING

C.1 TRAINING DATASETS

Our training dataset is curated from diverse sources, including M-BEIR, ViDoRe, ImageNet-1K, E-VQA, and MS MARCO. These datasets cover a wide array of domains, ensuring that the model is exposed to varied and representative examples across different tasks. To ensure balanced representation across task domains, we sample 100k instances from ImageNet-1K and integrate them into our training corpus. In total, our training dataset consists of approximately 1.5 million instances, distributed across various domains to ensure robust learning. The detailed distribution of the data across these domains is visualized in Figure 7.

Figure 7: The proportion of the training data.

To ensure a fair comparison between supervised fine-tuning and contrastive learning, we construct a balanced, category-representative subset of approximately 270K samples from our training dataset; the details can be found in Table 7.

Class              Task        Datasets                             Number
Single-Modal (4)   T→T (2)     WebQA†, MS MARCO                     30000
                   I→I (2)     Nights†, ImageNet-1K                 30000
Cross-Modal (6)    T→I (2)     Fashion200k†, VisualNews†            29958
                   T→VD (1)    ViDoRe                               30000
                   I→T (3)     Fashion200k†, MSCOCO†, VisualNews†   30882
Fused-Modal (11)   T→IT (2)    EDIS†, WebQA†                        30000
                   IT→T (3)    LLaVA†, OVEN†, ReMuQ†                30382
                   IT→I (2)    CIRR†, FashionIQ†                    29528
                   IT→IT (3)   E-VQA, OVEN†                         30000

Table 7: The details of the training subset. † means the dataset belongs to M-BEIR.

C.2 MRB BENCHMARK

Since overly simple tasks fail to differentiate the performance of different reranking models, we exclude the datasets on which the GME-2B model achieves exceptionally high performance. Detailed descriptions of the MRB benchmark are provided in Tables 8 and 9.

Class               Task        Datasets
Single-Modal (15)   T→T (14)    ArguAna†, Climate-FEVER†, CQADupStack†, DBPedia†, FiQA2018†, HotpotQA†, MSMARCO†, NFCorpus†, NQ†, Quora†, SCIDOCS†, SciFact†, Touche2020†, TRECCOVID†
                    I→I (1)     Nights∗
Cross-Modal (14)    T→I (4)     VisualNews∗, Fashion200k∗, Memotion⋆, HatefulMemes⋆
                    T→VD (5)    TAT-DQA‡, ArxivQA‡, DocVQA‡, MIT Tissue Interaction‡, World Economic Reports‡
                    I→T (5)     VisualNews∗, Fashion200k∗, Memotion⋆, GLDv2⋆, HatefulMemes⋆
Fused-Modal (11)    T→IT (2)    WebQA∗, EDIS∗
                    IT→T (4)    OVEN∗, INFOSEEK∗, OKVQA∗, VizWiz⋆
                    IT→I (2)    FashionIQ∗, CIRR∗
                    IT→IT (3)   OVEN∗, E-VQA∗, INFOSEEK∗

Table 8: An overview of datasets in MRB. † means the dataset belongs to BEIR, ∗ to UMRB, ‡ to ViDoRe, and ⋆ to MIEB.

C.3 NEGATIVE SELECTION

The quality and diversity of negatives greatly affect the final performance of the reranker. Overly simple negatives can leave the model unable to distinguish hard negatives from positives, while overly difficult documents are likely to be false negatives that give the model incorrect update signals. Therefore, we adopt two strategies to select negatives: (1) Random selection. Randomly select irrelevant documents as negatives to enhance the generalization ability of the model. (2) Hard mining. For each query in every dataset, we use GME-2B to retrieve the top 100 documents and randomly select k irrelevant samples from them as hard negatives to improve reranking performance. We employ this set of hard negatives for all the models trained in this paper. During training, we always keep the ratio of random negatives to hard negatives at 1:1 to balance the diversity and quality of the data.
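The mixing strategy can be sketched as follows; the helper and its signature are hypothetical, assuming the GME-2B top-100 list per query is precomputed and that both pools contain at least k // 2 candidates.

```python
import random

def select_negatives(positive_ids, top100_ids, corpus_ids, k=8, seed=42):
    """Pick k negatives per query: half hard (from the GME-2B top 100), half random."""
    rng = random.Random(seed)
    positives = set(positive_ids)
    hard_pool = [c for c in top100_ids if c not in positives]
    hard_set = set(hard_pool)
    rand_pool = [c for c in corpus_ids if c not in positives and c not in hard_set]
    hard = rng.sample(hard_pool, k // 2)    # hard negatives sharpen the decision boundary
    easy = rng.sample(rand_pool, k // 2)    # random negatives aid generalization
    return hard + easy
```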
D MODEL SETTINGS

D.1 GME-2B

We employ the GME-2B model as the foundational retrieval model, generating the initial retrieval results that serve as the input to our diverse reranking approaches. Recognizing that the GME series models leverage instruction fine-tuning, we incorporate task-specific instructions into the input query to enhance the retrieval model's performance. Aligning with the UMRB benchmark, we curate specific instructions for each task, as detailed in Table 13.

D.2 QWEN3-RERANKER

Paralleling our approach, Qwen3-Reranker leverages large language models for point-wise reranking within a single contextual framework. To facilitate instruction following, the model incorporates task-specific instructions directly into the input context. By utilizing the LLM's inherent chat template, the similarity assessment is reframed as a binary classification paradigm. Specifically, for T→T tasks, we set the task-specific instructions the same as for GME, as illustrated in Table 13.

Name            Type          Categ.   Eval Samples   Candidates   Query avg. chars   Cand. avg. chars
ArguAna         Single-Modal  T→T      1,406          8,674        192.98             166.80
Climate-FEVER   Single-Modal  T→T      1,535          5,416,593    20.13              84.76
CQADupStack     Single-Modal  T→T      13,145         457,199      8.59               129.09
DBPedia         Single-Modal  T→T      400            4,635,922    5.39               49.68
FiQA2018        Single-Modal  T→T      648            57,638       10.77              132.32
HotpotQA        Single-Modal  T→T      7,405          5,233,329    17.61              46.30
MSMARCO         Single-Modal  T→T      6,980          8,841,823    5.96               55.98
NFCorpus        Single-Modal  T→T      323            3,633        3.30               232.26
NQ              Single-Modal  T→T      3,452          2,681,468    9.16               78.88
Quora           Single-Modal  T→T      10,000         522,931      9.53               11.44
SCIDOCS         Single-Modal  T→T      1,000          25,657       9.38               176.19
SciFact         Single-Modal  T→T      300            5,183        12.37              213.63
Touche2020      Single-Modal  T→T      49             382,545      6.55               292.37
TRECCOVID       Single-Modal  T→T      50             171,332      10.60              160.77
Nights          Single-Modal  I→I      2,120          40,038       -                  -
VisualNews      Cross-Modal   T→I      19,995         542,246      18.78              -
Fashion200k     Cross-Modal   T→I      1,719          201,824      4.89               -
HatefulMemes    Cross-Modal   T→I      1,000          10,000       10.42              -
Memotion        Cross-Modal   T→I      697            6,988        14.77              -
TAT-DQA         Cross-Modal   T→VD     1,646          277          12.44              -
ArxivQA         Cross-Modal   T→VD     500            500          17.12              -
DocVQA          Cross-Modal   T→VD     451            500          8.23               -
WER             Cross-Modal   T→VD     58             452          13.05              -
MITTI           Cross-Modal   T→VD     160            1,016        13.91              -
VisualNews      Cross-Modal   I→T      20,000         537,568      -                  18.53
Fashion200k     Cross-Modal   I→T      4,889          61,707       -                  4.95
GLDv2           Cross-Modal   I→T      1,704          674          -                  3.18
Memotion        Cross-Modal   I→T      697            6,988        -                  14.67
HatefulMemes    Cross-Modal   I→T      1,000          10,000       -                  11.53
WebQA           Fused-Modal   T→IT     2,511          403,196      16.43              12.83
EDIS            Fused-Modal   T→IT     3,241          1,047,067    20.07              15.53
OVEN            Fused-Modal   IT→T     50,004         676,667      6.52               82.13
INFOSEEK        Fused-Modal   IT→T     11,323         611,651      8.76               91.49
OKVQA           Fused-Modal   IT→T     5,046          114,516      8.09               102.55
VizWiz          Fused-Modal   IT→T     4,319          2,091        7.17               -
FashionIQ       Fused-Modal   IT→I     6,003          74,381       11.70              -
CIRR            Fused-Modal   IT→I     4,170          21,551       11.01              -
OVEN            Fused-Modal   IT→IT    14,741         335,135      5.91               94.76
EVQA            Fused-Modal   IT→IT    3,743          68,313       9.38               211.12
INFOSEEK        Fused-Modal   IT→IT    17,593         481,782      7.94               96.00

Table 9: Tasks in MRB. Following UMRB, we report, for each dataset, the task category, the number of evaluation samples, the size of the candidate set, and the average character length of queries and candidates.
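To make the point-wise yes/no scoring shared by these rerankers concrete (cf. Equation 1 in §3.1), here is a minimal Hugging Face-style sketch; it assumes "yes" and "no" are single tokens in the tokenizer and is not the exact inference code of any released model.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def yes_no_relevance(model, tokenizer, prompt: str) -> float:
    """Binary yes/no relevance score from next-token logits, assuming an HF-style causal LM."""
    inputs = tokenizer(prompt, return_tensors="pt")
    logits = model(**inputs).logits[0, -1]            # logits for the next token
    yes_id = tokenizer.convert_tokens_to_ids("yes")
    no_id = tokenizer.convert_tokens_to_ids("no")
    # softmax over just the "yes"/"no" logits; index 0 is P("yes")
    return F.softmax(logits[[yes_id, no_id]], dim=-1)[0].item()
```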
D.3 GMR

In our GMR series models, we incorporate the retrieval instructions into the input context, which yields two advantages. First, this eliminates the need to redesign task-specific instructions at the reranking stage, enabling seamless instruction transfer from the retrieval phase. Second, integrating instructions into the contextual input guides the model's comprehension, facilitating task understanding and robust instruction following. The instruction sets for the training and testing phases are detailed in Tables 10 and 13, respectively.

Task    Dataset        Query Instruction
T→T     WebQA          Retrieve passages from Wikipedia that provide answers to the following question.
        MS MARCO       Given a question, retrieve relevant passages that answer the question.
I→I     Nights         Find a day-to-day image that looks similar to the provided image.
        ImageNet-1K    Retrieve images of the same type as the one in the question.
T→I     Fashion200k    Based on the following fashion description, retrieve the best matching image.
        VisualNews     Identify the news-related image in line with the described event.
T→VD    ViDoRe         Find a screenshot that relevant to the user's question.
I→T     VisualNews     Find a caption for the news in the given photo.
        Fashion200k    Find a product description for the fashion item in the image.
        MSCOCO         Find an image caption describing the following everyday image.
T→IT    WebQA          Find a Wikipedia image that answers this question.
        EDIS           Find a news image that matches the provided caption.
IT→T    OVEN           Retrieve a Wikipedia paragraph that provides an answer to the given query about the image.
        LLaVA          Provide a specific description of the image along with the following question.
        ReMuQ          Retrieve a fact-based paragraph that provides an answer to the given query about the image.
IT→I    FashionIQ      Find a fashion image that aligns with the reference image and style note.
        CIRR           Retrieve a day-to-day image that aligns with the modification instructions of the provided image.
IT→IT   OVEN           Retrieve a Wikipedia image-description pair that provides evidence for the question of this image.
        E-VQA          Determine the Wikipedia image-snippet pair that matches my question about this image.

Table 10: The instructions for the training datasets. We set the instructions for the GMR series models on each task during training as shown in the table.

D.4 JINA-RERANK-M0

Jina-rerank-m0 demonstrates inherent capabilities for processing single-modal and cross-modal tasks. By leveraging the architectural flexibility of the multimodal large language model framework, we extend its operational scope to encompass fused-modal tasks through an input template adaptation. For text- and image-modal inputs, Jina-rerank-m0 organizes the Query/Document configurations as illustrated in Table 11. Building upon this template, we design an input organization strategy for fused-modal scenarios, represented in the Fused configuration. Ultimately, the model's input is standardized to the canonical format "{Document}\n{Query}".

Text    **Query**:\n{query}
        **Document**:\n{doc}
Image   **Query**: <vision_start><image_pad><vision_end>
        **Document**: <vision_start><image_pad><vision_end>
Fused   **Query**: <vision_start><image_pad><vision_end>{query}
        **Document**: <vision_start><image_pad><vision_end>{doc}

Table 11: The input template of Jina-rerank-m0. We refer to its format settings for Text and Image to set the input format of fused-modal data, then format the input as "{Document}\n{Query}".

D.5 MONOQWEN2-VL-V0.1

Analogous to our approach with Jina-rerank-m0, we conduct a comprehensive evaluation of MonoQwen2-VL-v0.1 across the full spectrum of task types. Given that MonoQwen2-VL-v0.1 is exclusively trained and tested on the T→VD task, its input configuration is tailored to that scenario, as illustrated in Table 12. Notably, since MonoQwen2-VL-v0.1 does not incorporate additional instructions during training and lacks inherent instruction-following capabilities, we leverage the established T→VD input template to uniformly configure the inputs for all other tasks, as shown under "Others" in Table 12.

         Input Format
T→VD     {doc}\nAssert the relevance of the previous image document to the following query, answer True or False. The query is: {query}
Others   {doc}\nAssert the relevance of the previous document to the following query, answer True or False. The query is: {query}

Table 12: The input template of MonoQwen2-VL-v0.1. T→VD is its original input format, and we design the input formats for the other tasks based on it, as shown under "Others".
Task    Dataset                                     Query Instruction
T→T     ArguAna                                     Given a claim, find documents that refute the claim.
        Climate-FEVER                               Given a claim about climate change, retrieve documents that support or refute the claim.
        CQADupStack                                 Given a question, retrieve detailed question descriptions from Stackexchange that are duplicates to the given question.
        DBPedia                                     Given a query, retrieve relevant entity descriptions from DBPedia.
        FiQA2018                                    Given a financial question, retrieve user replies that best answer the question.
        HotpotQA                                    Given a multi-hop question, retrieve documents that can help answer the question.
        MSMARCO                                     Given a web search query, retrieve relevant passages that answer the query.
        NFCorpus                                    Given a question, retrieve relevant documents that best answer the question.
        NQ                                          Given a question, retrieve Wikipedia passages that answer the question.
        Quora                                       Given a question, retrieve questions that are semantically equivalent to the given question.
        SCIDOCS                                     Given a scientific paper title, retrieve paper abstracts that are cited by the given paper.
        SciFact                                     Given a scientific claim, retrieve documents that support or refute the claim.
        Touche2020                                  Given a question, retrieve detailed and persuasive arguments that answer the question.
        TRECCOVID                                   Given a query on COVID-19, retrieve documents that answer the query.
I→I     Nights                                      Find a day-to-day image that looks similar to the provided image.
T→I     VisualNews                                  Identify the news-related image in line with the described event.
        Fashion200k                                 Based on the following fashion description, retrieve the best matching image.
        Memotion / HatefulMemes                     Retrieve the meme based on the given caption.
T→VD    TAT-DQA / ArxivQA / DocVQA / MITTI / WER    Find a screenshot that relevant to the user's question.
I→T     VisualNews                                  Find a caption for the news in the given photo.
        Fashion200k                                 Find a product description for the fashion item in the image.
        GLDv2                                       Retrieve the name of the landmark based on the given image.
        Memotion / HatefulMemes                     Retrieve the caption based on the given meme.
T→IT    WebQA                                       Find a Wikipedia image that answers this question.
        EDIS                                        Find a news image that matches the provided caption.
IT→T    OVEN                                        Retrieve a Wikipedia paragraph that provides an answer to the given query about the image.
        INFOSEEK                                    Find a paragraph from Wikipedia that answers my question about this image.
        OKVQA                                       Retrieve documents that provide an answer to the question alongside the image.
        VizWiz                                      Retrieve the correct answer for a question about an image.
IT→I    FashionIQ                                   Find a fashion image that aligns with the reference image and style note.
        CIRR                                        Retrieve a day-to-day image that aligns with the modification instructions of the provided image.
IT→IT   OVEN                                        Retrieve a Wikipedia image-description pair that provides evidence for the question of this image.
        INFOSEEK                                    Find an image and subject description from Wikipedia that answers my question about this image.
        E-VQA                                       Obtain illustrated documents that correspond to the inquiry alongside the provided image.

Table 13: The instructions for different tasks. We set the instructions for the GME-2B and GMR series models on each task as shown in the table. WER means World Economic Reports, and MITTI means MIT Tissue Interaction.

E MAIN RESULT

E.1 DETAILED RESULTS

We evaluate all models described in §5 on our benchmark. The evaluation metrics and the detailed results for each dataset are reported in Table 14.
Class        Dataset          GME-2B   Qwen3   MonoQwen   Jina-m0   GMR-3B   GMR-7B
T→T (14)     ArguAna‡         47.11    86.00   50.93      56.07     80.42    84.49
             SCIDOCS‡         22.65    26.42   18.31      22.12     25.49    28.77
             TRECCOVID‡       79.11    87.83   79.84      85.36     87.23    85.56
             Quora‡           87.35    88.16   82.71      87.98     89.51    89.91
             SciFact‡         66.53    79.83   74.94      79.18     77.52    79.70
             NFCorpus‡        36.90    41.88   38.29      40.99     40.51    40.81
             Climate-FEVER‡   32.15    49.08   19.78      34.33     50.14    50.26
             FiQA2018‡        46.35    56.25   44.11      50.72     54.79    59.64
             HotpotQA‡        70.45    82.66   71.64      80.49     82.86    83.84
             DBPedia‡         43.17    52.69   41.75      49.60     52.99    53.96
             Touche2020‡      33.18    43.00   36.71      38.40     32.17    37.26
             NQ‡              51.22    63.33   49.08      62.06     62.49    66.48
             MSMARCO‡         40.79    44.57   35.57      43.09     45.90    47.60
             CQADupStack‡     37.25    45.18   40.83      44.66     47.10    46.81
I→I (1)      Nights⋆          30.75    -       12.59      27.50     29.76    32.83
T→I (4)      Fashion200k∗     25.77    -       29.14      29.38     25.01    27.57
             HatefulMemes†    52.09    -       74.93      76.57     75.07    75.19
             Memotion†        77.41    -       93.47      93.40     93.17    93.52
             VisualNews⋆      38.55    -       37.39      38.48     42.16    48.44
T→VD (5)     TAT-DQA†         71.23    -       79.99      82.05     83.23    84.00
             DocVQA†          56.44    -       57.51      61.69     61.48    62.87
             ArxivQA†         84.21    -       87.61      89.38     88.99    90.99
             WER†             58.78    -       63.00      63.47     62.13    61.00
             MITTI†           61.29    -       68.32      69.07     66.06    65.82
I→T (5)      Fashion200k∗     27.67    -       7.55       17.14     26.22    29.80
             HatefulMemes†    57.85    -       32.27      80.90     81.21    81.23
             Memotion†        80.01    -       44.74      94.84     96.08    96.68
             GLDv2†           59.28    -       5.72       59.21     68.68    76.74
             VisualNews⋆      38.28    -       7.83       25.05     43.12    48.60
T→IT (2)     WebQA⋆           83.03    -       87.30      87.14     86.98    87.46
             EDIS⋆            71.00    -       65.63      62.76     76.95    81.64
IT→T (4)     OKVQA∗           29.71    -       20.13      30.34     37.71    40.09
             VizWiz†          29.56    -       5.11       20.36     35.96    41.29
             INFOSEEK⋆        39.77    -       23.97      36.84     59.17    63.01
             OVEN⋆            60.46    -       8.18       23.74     62.41    68.78
IT→I (2)     FashionIQ∗       26.57    -       21.41      25.97     30.70    33.32
             CIRR⋆            46.83    -       42.09      49.33     57.24    61.46
IT→IT (3)    INFOSEEK⋆        44.61    -       35.39      53.28     73.89    76.31
             E-VQA⋆           79.11    -       55.81      61.21     84.66    86.08
             OVEN⋆            76.96    -       16.28      40.12     78.68    84.17

Table 14: Detailed scores of each model on the MRB datasets. Qwen3 stands for Qwen3-Reranker, MonoQwen for MonoQwen2-VL-v0.1, and Jina-m0 for Jina-Reranker-m0. WER means World Economic Reports, and MITTI means MIT Tissue Interaction. For datasets denoted with ⋆ we report Recall@5; Recall@10 is adopted for datasets marked with ∗; NDCG@5 is used for †-annotated datasets; and NDCG@10 is reported for those designated with ‡.

E.2 THE INFLUENCE OF THE NUMBER OF NEGATIVES

Figure 8: Average performance for different numbers of negatives per sample (#Neg ∈ {2, 4, 8, 16}) under SFT and CL.

In §5.2, we examine the effect of incorporating negatives in supervised fine-tuning (SFT) and observe that, within the limits of available computational resources, increasing the number of negative examples consistently improves model performance; the best performance is achieved when the number of negatives reaches 16. For comparison, we further conduct experiments on the role of negatives in contrastive learning. As shown in Figure 8, the results indicate that, similar to SFT, a larger number of negative examples leads to better performance. Nevertheless, the overall performance of contrastive learning remains lower than that of supervised fine-tuning.

E.3 THE INFLUENCE OF FREEZING THE LM HEAD

       -F      -NF     ∆f
SFT    57.97   57.94   ▼0.03
CL     55.95   57.20   ▲1.25

Table 15: Impact of freezing the LM head on performance. -F denotes frozen; -NF denotes not frozen.

In §4, we observe that SFT can exploit semantic signals from pre-trained token embeddings, whereas CL must learn the score-projection matrix from scratch. To rule out the potential influence of the language modeling (LM) head parameters, we conduct an ablation study on LM head freezing, with the results presented in Table 15. The findings show that freezing or unfreezing the LM head has no effect on SFT. In contrast, CL achieves better performance when the LM head parameters are not frozen. These results suggest that SFT effectively leverages the semantic information embedded in the pre-trained tokens of the LLM, while CL has to relearn the score-projection matrix.
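For reference, the -F setting corresponds to excluding the LM head from optimization; a generic sketch (with a stand-in module, not the actual training script) looks as follows.

```python
import torch
from torch import nn

# Minimal stand-in for an HF-style causal LM with an lm_head (illustrative only).
class TinyLM(nn.Module):
    def __init__(self, hidden=16, vocab=32):
        super().__init__()
        self.backbone = nn.Linear(hidden, hidden)
        self.lm_head = nn.Linear(hidden, vocab, bias=False)

model = TinyLM()

# The -F setting in Table 15: exclude the LM head from optimization.
for param in model.lm_head.parameters():
    param.requires_grad = False

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-5
)
print(sum(p.numel() for p in model.parameters() if p.requires_grad), "trainable params")
```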
F LIMITATION

In this work, we introduce MRB, a benchmark designed for training and evaluating multimodal reranking. To address this challenge, we investigate strategies for adapting multimodal large language models (MLLMs) into general-purpose multimodal reranking models and propose GMR, a reranking model capable of handling candidates across different modalities. Despite these contributions, our work has the following limitations:

• Single-language constraint. Although the backbone model, Qwen2.5-VL-Instruct, supports multiple languages, we trained and evaluated GMR exclusively in English. Consequently, the performance of GMR in other languages remains unexplored.

• Single-image constraint for queries and documents. For reasons of training efficiency and the limited availability of relevant data, both queries and candidates in MRB are restricted to a single image each. As a result, the benchmark cannot assess performance on interleaved inputs that involve multiple images and texts.
• Equation 6: -∂LCL ∂h+ 0 = -∂LCL ∂sy(h+ 0 ) ∂sy(h+ 0 ) ∂h+ 0 = - ∂(-log exp(σ(ins,q,d+ 0 )) exp(σ(ins,q,d+ 0 ))+P i exp(σ(ins,q,di ))) ∂sy(h+ 0 ) ∂sy(h+ 0 ) ∂h+ 0 = ∂(log exp(sy(h+ 0 )) exp(sy(h+ 0 ))+P i exp(sy(hi ))) ∂sy(h+ 0 ) ∂sy(h+ 0 ) ∂h+ 0 = exp(sy(h+ 0 )) + P i exp(sy(hi )) exp(sy(h+ 0 )) · ∂( exp(sy(h+ 0 )) exp(sy(h+ 0 ))+P i exp(sy(hi ))) ∂sy(h+ 0 ) · ∂sy(h+ 0 ) ∂h+ 0 = exp(sy(h+ 0 )) + P i exp(sy(hi )) exp(sy(h+ 0 )) · P i exp(sy(hi )) (exp(sy(h+ 0 )) + P i exp(sy(hi )))2 · ∂exp(sy(h+ 0 )) ∂sy(h+ 0 ) · ∂sy(h+ 0 ) ∂h+ 0 = exp(sy(h+ 0 )) + P i exp(sy(hi )) exp(sy(h+ 0 )) · P i exp(sy(hi )) (exp(sy(h+ 0 )) + P i exp(sy(hi )))2 · exp(sy(h+ 0 )) · ∂sy(h+ 0 ) ∂h+ 0 = P j exp(sy(hj )) exp(sy(h+ 0 )) + P j exp(sy(hj )) ∂sy(h+ 0 ) ∂h+ 0 = P j exp(sy(hj )) exp(sy(h+ 0 )) + P j exp(sy(hj ))My (15) 13 • Equation 7: -∂LCL ∂hi = -∂LCL ∂sy(hi ) ∂sy(hi ) ∂hi = - ∂(-log exp(σ(ins,q,d+ 0 )) exp(σ(ins,q,d+ 0 ))+P i exp(σ(ins,q,di ))) ∂sy(hi ) ∂sy(hi ) ∂hi = ∂(log exp(sy(h+ 0 )) exp(sy(h+ 0 ))+P i exp(sy(hi ))) ∂sy(hi ) ∂sy(hi ) ∂hi = exp(sy(h+ 0 )) + P i exp(sy(hi )) exp(sy(h+ 0 )) · (- exp(sy(h+ 0 )) (exp(sy(h+ 0 )) + P i exp(sy(hi )))2 ) · exp(sy(hi )) · ∂sy(hi ) ∂h+ 0 = - exp(sy(hi )) exp(sy(h+ 0 )) + P j exp(sy(hi )) ∂sy(hi ) ∂hi = - exp(sy(hi )) exp(sy(h+ 0 )) + P j exp(sy(hi ))My (16) • Equation 8: -∂LSFT ∂h+ 0 = -∂LSFT ∂sy(h+ 0 ) ∂sy(h+ 0 ) ∂h+ 0 -∂LSFT ∂sn(h+ 0 ) ∂sn(h+ 0 ) ∂h+ 0 = -∂(-log(p("yes"|P({"yes","no"}|{ins, q, di})))) ∂sy(h+ 0 ) ∂sy(h+ 0 ) ∂h+ 0 -∂(-log(p("yes"|P({"yes","no"}|{ins, q, di})))) ∂sn(h+ 0 ) ∂sn(h+ 0 ) ∂h+ 0 = ∂(log eP ("yes"|{ins,q,d}) eP ("yes"|{ins,q,d})+eP ("no"|{ins,q,d}) ) ∂sy(h+ 0 ) ∂sy(h+ 0 ) ∂h+ 0 + ∂(log eP ("yes"|{ins,q,d}) eP ("yes"|{ins,q,d})+eP ("no"|{ins,q,d}) ) ∂sn(h+ 0 ) ∂sn(h+ 0 ) ∂h+ 0 = ∂(log exp(sy(h+ 0 )) exp(sy(h+ 0 ))+exp(sn(h+ 0 ))) ∂sy(h+ 0 ) ∂sy(h+ 0 ) ∂h+ 0 + ∂(log exp(sy(h+ 0 )) exp(sy(h+ 0 ))+exp(sn(h+ 0 ))) ∂sn(h+ 0 ) ∂sn(h+ 0 ) ∂h+ 0 = exp(sn(h+ 0 )) exp(sy(h+ 0 )) + exp(sn(h+ 0 )) ∂sy(h+ 0 ) ∂h+ 0 - exp(sn(h+ 0 )) exp(sy(h+ 0 )) + exp(sn(h+ 0 )) ∂sn(h+ 0 ) ∂h+ 0 = exp(sn(h+ 0 )) exp(sy(h+ 0 )) + exp(sn(h+ 0 ))(∂sy(h+ 0 ) ∂h+ 0 -∂sn(h+ 0 ) ∂h+ 0 ) = exp(sn(h+ 0 )) exp(sy(h+ 0 )) + exp(sn(h+ 0 ))(My -Mn) (17) 14 • Equation 9: -∂LSFT ∂hi = -∂LSFT ∂sy(hj ) ∂sy(hi ) ∂hi -∂LSFT ∂sn(hi ) ∂sn(hi ) ∂hi = -∂(-log(p("no"|P({"yes","no"}|{ins, q, di})))) ∂sy(hi ) ∂sy(hi ) ∂hi -∂(-log(p("no"|P({"yes","no"}|{ins, q, di})))) ∂sn(hi ) ∂sn(hi ) ∂hi = ∂(log eP ("no"|{ins,q,d}) eP ("yes"|{ins,q,d})+eP ("no"|{ins,q,d}) ) ∂sy(hi ) ∂sy(hi ) ∂hi + ∂(log eP ("no"|{ins,q,d}) eP ("yes"|{ins,q,d})+eP ("no"|{ins,q,d}) ) ∂sn(hi ) ∂sn(hi ) ∂hi = ∂(log exp(sn(hi )) exp(sy(hi ))+exp(sn(hi ))) ∂sy(hi ) ∂sy(hi ) ∂hi + ∂(log exp(sn(hi )) exp(sy(hi ))+exp(sn(hi ))) ∂sn(hi ) ∂sn(hi ) ∂hi = - exp(sy(hi )) exp(sy(hi )) + exp(sn(hi ))(∂sy(hi ) ∂hi -∂sn(hi ) ∂hi ) = - exp(sy(hi )) exp(sy(hi )) + exp(sn(hi ))(My -Mn) (18) B ANALYSIS EXPERIMENT B.1 THE INFLUENCE OF PRECISION ON CL Method Precision Avg ∆ CL FP16 56.09 - FP32 56.40 ▲0.31 Table 5: Impact of precision on Contrastive Learning's performance. We validate the findings of FlatNCE by performing full half-precision training during loss function computation on the contrastive learning (CL) model. Specifically, we configure the model to use BF16 for accuracy, and in the loss computation process (refer to Algorithm 1), we control all other variables while varying the precision of the weight computations between FP16 and FP32 to assess their impact on model performance. 
The results show that FP32 precision yields better performance than FP16 precision, confirming that computational precision significantly affects the effectiveness of contrastive learning. B.2 FUNCTION OF WEIGHT Method τ Avg w/ τ mask 1e-2 55.07 1e-3 56.57 1e-4 55.89 Table 6: The performance of the model under different values of τ. To investigate the role of the weight, we first define s(hi) = exp(sy(hi)) exp(sy(hi))+exp(sn(hi)). Since s(hi) is bounded within [0, 1], prior experience with embedding models suggests that an appropriate scaling factor is necessary to accelerate model convergence. Therefore, we introduce a temperature parameter β = 5×10-2 into Equation 14, yielding W - j = exp(s(hj )/β) P j exp(s(h-)/β). In addition, for experiments involving the masking rule, we vary τ ∈10-2, 10-3, 10-4 to identify the configuration that achieves optimal performance. For the experiment with WCL, we follow Equation 10 and 11, consistent with the requirements of contrastive learning, where the positive and negative weights must satisfy the constraint W + = P W -. Since directly setting W+WCL = WBaseWCL would violate this condition, we instead use W+WCL = 15 WCL for comparison with WBase. For the experiment with WSF T , we aim to demonstrate that WSF T can effectively enhance the performance of WBase. Following Equation 12 and 13, we set W+WSF T = WBaseWSF T and evaluate its impact on model performance. B.3 THE INFLUENCE OF TOKEN SELECTION To examine whether introducing additional tokens during training can enhance the directionality component and improve model performance, we randomly sample 10,000 instances together with their corresponding positives and negatives. Based on the model outputs, we identify the top 16 tokens with the highest average logits, which include "yes" and "no." The remaining tokens in this set are: {"No," "Yes," "NO," "YES," "The," "None," "In," "Answer," "This," "To," "Not," "not," "There," "-no"}. C EXPERIMENT SETTING C.1 TRAINING DATASETS Figure 7: The proportion of the training data. Our training dataset is curated from diverse sources, including M-BEIR, ViDoRe, ImageNet-1K, EVQA, and Ms Marco. These datasets cover a wide array of domains, ensuring that the model is exposed to varied and representative examples across different tasks. To ensure balanced representation across task domains, we sample 100k instances from ImageNet-1K and integrated them into our training corpus. In total, our training dataset consists of approximately 1.5 million instances, which are distributed across various domains to ensure robust learning. The detailed distribution of the data across these domains is carefully visualized in Figure 7. To ensure a fair comparison between supervised fine-tuning and contrastive learning, we construct a balanced, category-representative subset of approximately 270K samples from our training dataset, and the details could be found in Table 7. Class Task Datasets Number Single-Modal(4) T→T (2) WebQA† Ms Marco 30000 I→I (2) Nights† ImageNet-1K 30000 Cross-Modal(6) T→I (2) Fashion200k† VisualNews† 29958 T→VD (1) ViDoRe 30000 I→T (3) Fashion200k† MScoco† VisualNews† 30882 Fused-Modal(11) T→IT (2) EDIS† WebQA† 30000 IT→T (3) LLava† OVEN† Remuq† 30382 IT→I (2) CIRR† FashionIQ† 29528 IT→IT (3) E-VQA OVEN† 30000 Table 7: The details of sub trainset. † means that they belong to the M-BEIR dataset. 
C.2 MRB BENCHMARK

Since overly simple tasks fail to effectively differentiate the performance of various rerank models, we exclude the datasets on which the GME-2B model achieves exceptionally high performance. Detailed descriptions of the MRB benchmark are provided in Tables 8 and 9.

| Class | Task | Datasets |
| Single-Modal (15) | T→T (14) | ArguAna†, Climate-FEVER†, CQADupStack†, DBPedia†, FIQA2018†, HotpotQA†, MSMARCO†, NFCorpus†, NQ†, Quora†, SCIDOCS†, SciFact†, Touche2020†, TRECCOVID† |
| | I→I (1) | Nights∗ |
| Cross-Modal (14) | T→I (4) | VisualNews∗, Fashion200k∗, Memotion⋆, HatefulMemes⋆ |
| | T→VD (5) | TAT-DQA‡, ArxivQA‡, DocVQA‡, MIT Tissue Interaction‡, World Economic Reports‡ |
| | I→T (5) | VisualNews∗, Fashion200K∗, Memotion⋆, GLDv2⋆, HatefulMemes⋆ |
| Fused-Modal (11) | T→IT (2) | WebQA∗, EDIS∗ |
| | IT→T (4) | OVEN∗, INFOSEEK∗, OKVQA∗, VizWiz⋆ |
| | IT→I (2) | FashionIQ∗, CIRR∗ |
| | IT→IT (3) | OVEN∗, E-VQA∗, INFOSEEK∗ |

Table 8: An overview of datasets in MRB. † means the dataset belongs to BEIR, ∗ to UMRB, ‡ to ViDoRe, and ⋆ to MIEB.

C.3 NEGATIVE SELECTION

The quality and diversity of negatives greatly affect the final performance of the reranker. Overly simple negatives can make the model lack the ability to distinguish hard negatives from positives, while overly difficult documents are very likely to be false negatives that give the model an incorrect update signal. Therefore, we adopt two strategies to select negatives: (1) Random Selection. Randomly select irrelevant documents as negatives to enhance the generalization ability of the model. (2) Hard Mining. For each query in every dataset, we use GME-2B to search for the corresponding documents to obtain the top 100, and randomly select k irrelevant samples from them as hard negatives to improve the reranking performance. We employ this set of hard negatives for all the models trained in this paper. While training, we always maintain the ratio of random negatives to hard negatives at 1:1 to balance the diversity and quality of the data.

D MODEL SETTINGS

D.1 GME-2B

We employ the GME-2B model as the foundational retrieval model, generating the initial retrieval results that serve as the input to our diverse reranking approaches. Recognizing that the GME series models leverage instruction fine-tuning, we incorporate task-specific instructions into the input query to enhance the retrieval model's performance. Aligning with the UMRB benchmark, we curate the specific instructions for each task, as detailed in Table 13.

D.2 QWEN3-RERANKER

Paralleling our approach, Qwen3-Reranker leverages Large Language Models for point-wise reranking within a singular contextual framework. To facilitate instruction-following capabilities, the model incorporates task-specific instructions directly into the input context. By utilizing the LLM's inherent chat template, the similarity assessment is reframed as a binary classification paradigm. Specifically, for T→T tasks, we set task-specific instructions the same as GME, as illustrated in Table 13.
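For concreteness, below is a minimal sketch of this point-wise "yes"/"no" scoring scheme (our illustration, not Qwen3-Reranker's actual implementation; the backbone checkpoint and prompt wording are placeholders, and we assume "yes"/"no" encode to single tokens):

```python
# Sketch of point-wise LLM reranking as binary classification over {"yes", "no"}.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

NAME = "Qwen/Qwen2.5-0.5B-Instruct"  # placeholder backbone, not the actual reranker
tok = AutoTokenizer.from_pretrained(NAME)
lm = AutoModelForCausalLM.from_pretrained(NAME)

def relevance(ins: str, query: str, doc: str) -> float:
    # Instruction, query and candidate are packed into one chat-template context.
    msgs = [{"role": "user",
             "content": f"{ins}\nQuery: {query}\nDocument: {doc}\nRelevant? Answer yes or no."}]
    ids = tok.apply_chat_template(msgs, add_generation_prompt=True, return_tensors="pt")
    logits = lm(ids).logits[0, -1]                 # next-token logits at the last position
    yes, no = tok.encode("yes")[0], tok.encode("no")[0]  # assumes single-token encodings
    return torch.softmax(logits[[yes, no]], dim=0)[0].item()  # p("yes") = relevance score
```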
| Name | Type | Categ. | Eval Query Nums | Eval Candidates Nums | Query avg. chars | Candidate avg. chars |
| ArguAna | Single-Modal | T→T | 1406 | 8,674 | 192.98 | 166.80 |
| Climate-FEVER | Single-Modal | T→T | 1,535 | 5,416,593 | 20.13 | 84.76 |
| CQADupStack | Single-Modal | T→T | 13,145 | 457,199 | 8.59 | 129.09 |
| DBPedia | Single-Modal | T→T | 400 | 4,635,922 | 5.39 | 49.68 |
| FiQA2018 | Single-Modal | T→T | 648 | 57,638 | 10.77 | 132.32 |
| HotpotQA | Single-Modal | T→T | 7,405 | 5,233,329 | 17.61 | 46.30 |
| MSMARCO | Single-Modal | T→T | 6,980 | 8,841,823 | 5.96 | 55.98 |
| NFCorpus | Single-Modal | T→T | 323 | 3,633 | 3.30 | 232.26 |
| NQ | Single-Modal | T→T | 3,452 | 2,681,468 | 9.16 | 78.88 |
| Quora | Single-Modal | T→T | 10,000 | 522,931 | 9.53 | 11.44 |
| SCIDOCS | Single-Modal | T→T | 1,000 | 25,657 | 9.38 | 176.19 |
| SciFact | Single-Modal | T→T | 300 | 5,183 | 12.37 | 213.63 |
| Touche2020 | Single-Modal | T→T | 49 | 382,545 | 6.55 | 292.37 |
| TRECCOVID | Single-Modal | T→T | 50 | 171,332 | 10.60 | 160.77 |
| Nights | Single-Modal | I→I | 2,120 | 40,038 | - | - |
| VisualNews | Cross-Modal | T→I | 19,995 | 542,246 | 18.78 | - |
| Fashion200k | Cross-Modal | T→I | 1,719 | 201,824 | 4.89 | - |
| HatefulMemes | Cross-Modal | T→I | 1000 | 10000 | 10.42 | - |
| Memotion | Cross-Modal | T→I | 697 | 6988 | 14.77 | - |
| TAT-DQA | Cross-Modal | T→VD | 1,646 | 277 | 12.44 | - |
| ArxivQA | Cross-Modal | T→VD | 500 | 500 | 17.12 | - |
| DocVQA | Cross-Modal | T→VD | 451 | 500 | 8.23 | - |
| WER | Cross-Modal | T→VD | 58 | 452 | 13.05 | - |
| MITTI | Cross-Modal | T→VD | 160 | 1016 | 13.91 | - |
| VisualNews | Cross-Modal | I→T | 20,000 | 537,568 | - | 18.53 |
| Fashion200k | Cross-Modal | I→T | 4,889 | 61,707 | - | 4.95 |
| GLDv2 | Cross-Modal | I→T | 1704 | 674 | - | 3.18 |
| Memotion | Cross-Modal | I→T | 697 | 6988 | - | 14.67 |
| HatefulMemes | Cross-Modal | I→T | 1000 | 10000 | - | 11.53 |
| WebQA | Fused-Modal | T→IT | 2,511 | 403,196 | 16.43 | 12.83 |
| EDIS | Fused-Modal | T→IT | 3,241 | 1,047,067 | 20.07 | 15.53 |
| OVEN | Fused-Modal | IT→T | 50,004 | 676,667 | 6.52 | 82.13 |
| INFOSEEK | Fused-Modal | IT→T | 11,323 | 611,651 | 8.76 | 91.49 |
| OKVQA | Fused-Modal | IT→T | 5,046 | 114,516 | 8.09 | 102.55 |
| VizWiz | Fused-Modal | IT→T | 4319 | 2091 | 7.17 | - |
| FashionIQ | Fused-Modal | IT→I | 6,003 | 74,381 | 11.70 | - |
| CIRR | Fused-Modal | IT→I | 4,170 | 21,551 | 11.01 | - |
| OVEN | Fused-Modal | IT→IT | 14,741 | 335,135 | 5.91 | 94.76 |
| EVQA | Fused-Modal | IT→IT | 3,743 | 68,313 | 9.38 | 211.12 |
| INFOSEEK | Fused-Modal | IT→IT | 17,593 | 481,782 | 7.94 | 96.00 |

Table 9: Tasks in MRB. Following UMRB, we count the number of datasets under each task type, the number of evaluation instances, the size of the candidate set, and the average length of the text.

D.3 GMR

In our GMR series models, we incorporate the retrieval instructions into the input context, yielding two advantages. Primarily, this approach eliminates the need for task-specific instruction redesign at the reranking stage, enabling seamless instruction transfer from the retrieval phase. Moreover, by strategically integrating instructions into the contextual input, we effectively guide the model's comprehension, facilitating enhanced task understanding and robust instruction-following capabilities. The comprehensive instruction sets for the training and testing phases are detailed in Tables 10 and 13, respectively.

D.4 JINA-RERANK-M0

Jina-rerank-m0 demonstrates inherent capabilities for processing single-modal and cross-modal tasks. By leveraging the architectural flexibility of the Multimodal Large Language Model framework, we extend its operational scope to encompass fused-modal tasks through an input-template adaptation.
| Task | Dataset | Query Instruction |
| T→T | WebQA | Retrieve passages from Wikipedia that provide answers to the following question. |
| | Ms Marco | Given a question, retrieve relevant passages that answer the question. |
| I→I | Nights | Find a day-to-day image that looks similar to the provided image. |
| | ImageNet-1K | Retrieve images of the same type as the one in the question. |
| T→I | Fashion200k | Based on the following fashion description, retrieve the best matching image. |
| | VisualNews | Identify the news-related image in line with the described event. |
| T→VD | ViDoRe | Find a screenshot that is relevant to the user's question. |
| I→T | VisualNews | Find a caption for the news in the given photo. |
| | Fashion200k | Find a product description for the fashion item in the image. |
| | MSCOCO | Find an image caption describing the following everyday image. |
| T→IT | WebQA | Find a Wikipedia image that answers this question. |
| | EDIS | Find a news image that matches the provided caption. |
| IT→T | OVEN | Retrieve a Wikipedia paragraph that provides an answer to the given query about the image. |
| | LLaVA | Provide a specific description of the image along with the following question. |
| | ReMuQ | Retrieve a fact-based paragraph that provides an answer to the given query about the image. |
| IT→I | FashionIQ | Find a fashion image that aligns with the reference image and style note. |
| | CIRR | Retrieve a day-to-day image that aligns with the modification instructions of the provided image. |
| IT→IT | OVEN | Retrieve a Wikipedia image-description pair that provides evidence for the question of this image. |
| | E-VQA | Determine the Wikipedia image-snippet pair that matches my question about this image. |

Table 10: The instructions for the training datasets. We set the instructions for the GMR series models on each task during training as shown in the table.

For text and image-modal inputs, Jina-rerank-m0 organizes Query/Document configurations as illustrated in Table 11. Building upon this foundational template, we design an input-organization strategy for fused-modal scenarios, represented in the Fused configuration. Ultimately, the model's input is standardized to the canonical format "{Document} ".

| Modality | Query | Document |
| Text | **Query**: {query} | **Document**: {doc} |
| Image | **Query**: <image> | **Document**: <image> |
| Fused | **Query**: {query} <image> | **Document**: {doc} <image> |

Table 11: The input template of Jina-rerank-m0. We refer to its format settings for Text and Image to set the input format of fused-modal data, then format the input as "{Document} ".

D.5 MONOQWEN2-VL-V0.1

Analogous to our approach with Jina-rerank-m0, we conduct a comprehensive evaluation of MonoQwen2-VL-v0.1 across the full spectrum of task types. Given that MonoQwen2-VL-v0.1 is exclusively trained and tested on the T→VD task, its input configuration is specifically tailored to this particular scenario, as illustrated in Table 12. Notably, since MonoQwen2-VL-v0.1 does not incorporate additional instructions during training and lacks inherent instruction-following capabilities, we leverage the established T→VD input template to uniformly configure the inputs for all other tasks, as shown under Others in Table 12.

| Input | Format |
| T→VD | {doc} the relevance of the previous image document to the following query, answer True or False. The query is: {query} |
| Others | {doc} the relevance of the previous document to the following query, answer True or False. The query is: {query} |

Table 12: The input template of MonoQwen2-VL-v0.1. T→VD is its original input format, and we design the input formats for other tasks based on this format, as shown under Others.
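As an illustration of how such point-wise templates are applied to a candidate list (our sketch, not the official MonoQwen2-VL-v0.1 code; the helper name and example strings are hypothetical):

```python
# Sketch: build one point-wise reranking prompt per candidate from a fixed
# template (template text follows Table 12); the reranker scores each prompt
# independently and the candidates are sorted by the resulting scores.
T_VD_TEMPLATE = ("{doc} the relevance of the previous image document to the "
                 "following query, answer True or False. The query is: {query}")

def build_inputs(query: str, docs: list[str], template: str = T_VD_TEMPLATE) -> list[str]:
    return [template.format(doc=d, query=query) for d in docs]

# Hypothetical usage with two document-page placeholders:
prompts = build_inputs("total revenue in 2021?",
                       ["<page-1 screenshot>", "<page-2 screenshot>"])
```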
| Task | Dataset | Query Instruction |
| T→T | ArguAna | Given a claim, find documents that refute the claim. |
| | Climate-FEVER | Given a claim about climate change, retrieve documents that support or refute the claim. |
| | CQADupStack | Given a question, retrieve detailed question descriptions from Stackexchange that are duplicates of the given question. |
| | DBPedia | Given a query, retrieve relevant entity descriptions from DBPedia. |
| | FiQA2018 | Given a financial question, retrieve user replies that best answer the question. |
| | HotpotQA | Given a multi-hop question, retrieve documents that can help answer the question. |
| | MSMARCO | Given a web search query, retrieve relevant passages that answer the query. |
| | NFCorpus | Given a question, retrieve relevant documents that best answer the question. |
| | NQ | Given a question, retrieve Wikipedia passages that answer the question. |
| | Quora | Given a question, retrieve questions that are semantically equivalent to the given question. |
| | SCIDOCS | Given a scientific paper title, retrieve paper abstracts that are cited by the given paper. |
| | SciFact | Given a scientific claim, retrieve documents that support or refute the claim. |
| | Touche2020 | Given a question, retrieve detailed and persuasive arguments that answer the question. |
| | TRECCOVID | Given a query on COVID-19, retrieve documents that answer the query. |
| I→I | Nights | Find a day-to-day image that looks similar to the provided image. |
| T→I | VisualNews | Identify the news-related image in line with the described event. |
| | Fashion200k | Based on the following fashion description, retrieve the best matching image. |
| | Memotion, HatefulMemes | Retrieve the meme based on the given caption. |
| T→VD | TAT-DQA, ArxivQA, DocVQA, MITTI, WER | Find a screenshot that is relevant to the user's question. |
| I→T | VisualNews | Find a caption for the news in the given photo. |
| | Fashion200k | Find a product description for the fashion item in the image. |
| | GLDv2 | Retrieve the name of the landmark based on the given image. |
| | Memotion, HatefulMemes | Retrieve the caption based on the given meme. |
| T→IT | WebQA | Find a Wikipedia image that answers this question. |
| | EDIS | Find a news image that matches the provided caption. |
| IT→T | OVEN | Retrieve a Wikipedia paragraph that provides an answer to the given query about the image. |
| | INFOSEEK | Find a paragraph from Wikipedia that answers my question about this image. |
| | OKVQA | Retrieve documents that provide an answer to the question alongside the image. |
| | VizWiz | Retrieve the correct answer for a question about an image. |
| IT→I | FashionIQ | Find a fashion image that aligns with the reference image and style note. |
| | CIRR | Retrieve a day-to-day image that aligns with the modification instructions of the provided image. |
| IT→IT | OVEN | Retrieve a Wikipedia image-description pair that provides evidence for the question of this image. |
| | INFOSEEK | Find an image and subject description from Wikipedia that answers my question about this image. |
| | E-VQA | Obtain illustrated documents that correspond to the inquiry alongside the provided image. |

Table 13: The instructions for different tasks. We set the instructions for the GME-2B and GMR series models on each task as shown in the table. WER means World Economic Reports, and MITTI means MIT Tissue Interaction.

E MAIN RESULT

E.1 DETAILED RESULTS

We evaluate all models described in §5 on our benchmark. The evaluation metrics and the detailed results for each dataset are reported in Table 14.
| Class | Dataset | GME-2B | Qwen3 | MonoQwen | Jina-m0 | GMR-3B | GMR-7B |
| T→T (14) | ArguAna† | 47.11 | 86.00 | 50.93 | 56.07 | 80.42 | 84.49 |
| | SCIDOCS† | 22.65 | 26.42 | 18.31 | 22.12 | 25.49 | 28.77 |
| | TRECCOVID† | 79.11 | 87.83 | 79.84 | 85.36 | 87.23 | 85.56 |
| | Quora† | 87.35 | 88.16 | 82.71 | 87.98 | 89.51 | 89.91 |
| | SciFact† | 66.53 | 79.83 | 74.94 | 79.18 | 77.52 | 79.70 |
| | NFCorpus† | 36.90 | 41.88 | 38.29 | 40.99 | 40.51 | 40.81 |
| | Climate-FEVER† | 32.15 | 49.08 | 19.78 | 34.33 | 50.14 | 50.26 |
| | FiQA2018† | 46.35 | 56.25 | 44.11 | 50.72 | 54.79 | 59.64 |
| | HotpotQA† | 70.45 | 82.66 | 71.64 | 80.49 | 82.86 | 83.84 |
| | DBPedia† | 43.17 | 52.69 | 41.75 | 49.60 | 52.99 | 53.96 |
| | Touche2020† | 33.18 | 43.00 | 36.71 | 38.40 | 32.17 | 37.26 |
| | NQ† | 51.22 | 63.33 | 49.08 | 62.06 | 62.49 | 66.48 |
| | MSMARCO† | 40.79 | 44.57 | 35.57 | 43.09 | 45.90 | 47.60 |
| | CQADupStack† | 37.25 | 45.18 | 40.83 | 44.66 | 47.10 | 46.81 |
| I→I (1) | Nights⋆ | 30.75 | - | 12.59 | 27.50 | 29.76 | 32.83 |
| T→I (4) | Fashion200k∗ | 25.77 | - | 29.14 | 29.38 | 25.01 | 27.57 |
| | HatefulMemes‡ | 52.09 | - | 74.93 | 76.57 | 75.07 | 75.19 |
| | Memotion‡ | 77.41 | - | 93.47 | 93.40 | 93.17 | 93.52 |
| | VisualNews⋆ | 38.55 | - | 37.39 | 38.48 | 42.16 | 48.44 |
| T→VD (5) | TAT-DQA‡ | 71.23 | - | 79.99 | 82.05 | 83.23 | 84.00 |
| | DocVQA‡ | 56.44 | - | 57.51 | 61.69 | 61.48 | 62.87 |
| | ArxivQA‡ | 84.21 | - | 87.61 | 89.38 | 88.99 | 90.99 |
| | WER‡ | 58.78 | - | 63.00 | 63.47 | 62.13 | 61.00 |
| | MITTI‡ | 61.29 | - | 68.32 | 69.07 | 66.06 | 65.82 |
| I→T (5) | Fashion200k∗ | 27.67 | - | 7.55 | 17.14 | 26.22 | 29.80 |
| | HatefulMemes‡ | 57.85 | - | 32.27 | 80.90 | 81.21 | 81.23 |
| | Memotion‡ | 80.01 | - | 44.74 | 94.84 | 96.08 | 96.68 |
| | GLDv2‡ | 59.28 | - | 5.72 | 59.21 | 68.68 | 76.74 |
| | VisualNews⋆ | 38.28 | - | 7.83 | 25.05 | 43.12 | 48.60 |
| T→IT (2) | WebQA⋆ | 83.03 | - | 87.30 | 87.14 | 86.98 | 87.46 |
| | EDIS⋆ | 71.00 | - | 65.63 | 62.76 | 76.95 | 81.64 |
| IT→T (4) | OKVQA∗ | 29.71 | - | 20.13 | 30.34 | 37.71 | 40.09 |
| | VizWiz‡ | 29.56 | - | 5.11 | 20.36 | 35.96 | 41.29 |
| | INFOSEEK⋆ | 39.77 | - | 23.97 | 36.84 | 59.17 | 63.01 |
| | OVEN⋆ | 60.46 | - | 8.18 | 23.74 | 62.41 | 68.78 |
| IT→I (2) | FashionIQ∗ | 26.57 | - | 21.41 | 25.97 | 30.70 | 33.32 |
| | CIRR⋆ | 46.83 | - | 42.09 | 49.33 | 57.24 | 61.46 |
| IT→IT (3) | INFOSEEK⋆ | 44.61 | - | 35.39 | 53.28 | 73.89 | 76.31 |
| | E-VQA⋆ | 79.11 | - | 55.81 | 61.21 | 84.66 | 86.08 |
| | OVEN⋆ | 76.96 | - | 16.28 | 40.12 | 78.68 | 84.17 |

Table 14: Detailed scores of each model on various datasets on MRB. Qwen3 stands for Qwen3-Reranker, MonoQwen stands for MonoQwen2-VL-v0.1, Jina-m0 stands for Jina-Reranker-m0. WER means World Economic Reports, and MITTI means MIT Tissue Interaction. For the datasets denoted with ⋆, we report the Recall@5 metric. Correspondingly, the Recall@10 metric is adopted for the datasets marked with ∗. Furthermore, the NDCG@5 score is utilized for the ‡-annotated datasets, while the NDCG@10 score is reported for those designated with †.

E.2 THE INFLUENCE OF THE NUMBER OF NEGATIVES

[Figure 8: Average performance as a function of the number of negatives per sample (2, 4, 8, 16), for SFT and CL.]

In §5.2, we examine the effect of incorporating negatives in supervised fine-tuning (SFT) and observe that, within the limits of available computational resources, increasing the number of negative examples consistently improves model performance. The best performance is achieved when the number of negative examples reaches 16. For comparison, we further conduct experiments on the role of negatives in contrastive learning. As shown in Figure 8, the results indicate that, similar to SFT, a larger number of negative examples leads to better performance. Nevertheless, the overall performance of contrastive learning remains lower than that of supervised fine-tuning.

E.3 THE INFLUENCE OF FREEZING THE LM HEAD

| Method | -F | -NF | ∆ |
| SFT | 57.97 | 57.94 | ▼0.03 |
| CL | 55.95 | 57.20 | ▲1.25 |

Table 15: Impact of freezing the LM head on performance. -F denotes frozen, while -NF denotes not frozen.
In §4, we observe that SFT can exploit semantic signals from pre-trained token embeddings, whereas CL must learn the score-projection matrix from scratch. To rule out the potential influence of the language modeling (LM) head parameters, we conduct an ablation study on freezing the LM head, with the results presented in Table 15. The findings show that freezing or unfreezing the LM head has no effect on SFT. In contrast, CL achieves better performance when the LM head parameters are not frozen. These results suggest that SFT effectively leverages the semantic information embedded in the LLM's pre-trained token embeddings, while CL requires relearning the score-projection matrix.

F LIMITATION

In this work, we introduce MRB, a benchmark designed for training and evaluating multimodal reranking tasks. To address this challenge, we investigate strategies for adapting Multimodal Large Language Models (MLLMs) into general-purpose multimodal reranking models, and propose GMR, a reranking model capable of handling candidates across different modalities. Despite these contributions, our work has the following limitations:

• Single-language constraint. Although the backbone model, Qwen2.5-VL-Instruction, supports multiple languages, we trained and evaluated GMR exclusively in English. Consequently, the performance of GMR in other languages remains unexplored.

• Single-image constraint for queries and documents. For reasons of training efficiency and limited availability of relevant data, both queries and candidates in MRB are restricted to a single image per query and per document. As a result, the benchmark cannot assess performance on interleaved inputs that involve multiple images and texts.
2510.14826
To Infinity and Beyond: Tool-Use Unlocks Length Generalization in State Space Models

Eran Malach, Omid Saremi, Sinead Williamson, Arwen Bradley, Aryo Lotfi, Emmanuel Abbe, Josh Susskind, Etai Littwin
Apple

State Space Models (SSMs) have become the leading alternative to Transformers for sequence modeling. Their primary advantage is efficiency in long-context and long-form generation, enabled by fixed-size memory and linear scaling of computational complexity. We begin this work by showing a simple theoretical result stating that SSMs cannot accurately solve any "truly long-form" generation problem (in a sense we formally define), undermining their main competitive advantage. However, we show that this limitation can be mitigated by allowing SSMs interactive access to external tools. In fact, we show that given the right choice of tool access and problem-dependent training data, SSMs can learn to solve any tractable problem and generalize to arbitrary problem length/complexity (i.e., achieve length generalization). Following our theoretical finding, we demonstrate that tool-augmented SSMs achieve remarkable length generalization on a variety of arithmetic, reasoning, and coding tasks. These findings highlight SSMs as a potential efficient alternative to Transformers in interactive tool-based and agentic settings.

Correspondence: Eran Malach: e_malach@apple.com
Date: October 17, 2025

1 Introduction

Transformers (Vaswani et al., 2017), the main architecture powering large language models, have a well-known limitation: due to the attention mechanism, their computational complexity scales quadratically with the sequence length, and their memory scales linearly with length.¹ This quadratic dependency becomes a major limitation for tasks that require long-context and long-form generation. As test-time scaling paradigms that involve the generation of long Chain of Thought (CoT) (Wei et al., 2022) have become the leading solution for improving reasoning capabilities (Jaech et al., 2024; Guo et al., 2025), the ability to efficiently generate long sequences becomes even more crucial.

To solve this limitation, various works suggested replacing the attention mechanism with other modules where memory and per-token compute are fixed as a function of the sequence length (Choromanski et al., 2020). Examples of such architectures include variants of Linear Transformers (Katharopoulos et al., 2020) and State Space Models (Gu et al., 2021) such as Mamba (Gu & Dao, 2023; Dao & Gu, 2024), DeltaNet (Yang et al., 2024c) and GatedDeltaNet (Yang et al., 2024b). These architectures achieve performance similar to Transformers across a wide range of domains (Qu et al., 2024) at a lower inference cost. However, some works have also pointed out significant limitations of these architectures in certain tasks that involve memorization of long sequences and in-context learning (Jelassi et al., 2024; Park et al., 2024; Akyürek et al., 2024). Possibly due to these limitations, linear-time models are still not widely adopted as a replacement for Transformers.

The goal of this work is to understand the capabilities and limitations of SSMs, focusing on tasks that require long-form generation. We formally define long-form generation tasks to be problems where the effective number of outputs grows with the complexity of the problem. We focus on such tasks as these are the tasks where SSMs display a clear benefit over Transformers in terms of inference efficiency.
However, we show that this efficiency comes at a cost of inherent performance degradation. Namely, we prove that SSMs fail to solve long-form generation tasks when the complexity of the task increases beyond the capacity of the model, even if the model is allowed to generate CoT of any length. This limitation arises from the bounded memory of the model, which limits the expressive power when generating long sequences. This is in contrast with Transformers which, using CoT, can in principle solve any computationally tractable problem, utilizing their unbounded memory (Merrill & Sabharwal, 2023).

¹While a naive implementation of attention requires quadratic memory complexity, efficient implementations such as FlashAttention (Dao et al., 2022) and KV caching enable close to linear complexity.

[Figure 1: We finetune Mamba and Pythia (Transformer) on trajectories collected from different tool-use agents for solving a coding problem. 1) Single-Turn Tool-Use: a hard-coded agent that prints all the files in the repository and then outputs all the required changes. 2) Interactive Tool-Use: a hard-coded agent that iteratively runs the code, changes a few files, runs the code again, etc., until all problems are resolved. 3) Distillation: a SWE-agent language model (Yang et al., 2025) instructed to solve the bug in the code. Models are trained on codebases of up to 16 files (dashed red line), with context length 8,192, and evaluated on larger codebases with longer context. While Pythia outperforms Mamba on smaller codebases and single-turn tool use, Mamba displays favorable performance on large codebases when trained to imitate interactive agents (agents 2 and 3), extrapolating beyond the training distribution.]

So, to solve long-form generation tasks we can either use Transformers and suffer quadratic scaling of compute, or use SSMs and suffer performance degradation. Another alternative is to use hybrid models that mix attention and SSM layers and have been recently shown to achieve state-of-the-art performance at large scale (Blakeman et al., 2025). However, this ultimately does not eliminate the quadratic dependence on the sequence length.

Following the observation above, we explore another alternative: allowing SSMs to interactively use external tools. LLMs are now increasingly used as agents that interact with external tools for solving tasks such as coding, math or question answering (Luo et al., 2025; Yehudai et al., 2025). These tools can allow agents to query and read from external resources, and write information that can be used later. Therefore, such tool-use can naturally augment the internal memory of the model, allowing it access to practically unbounded external memory.
We introduce a new theoretical framework for studying ReAct (Yao et al., 2023) agents, and show that allowing SSMs access to external memory through interactive tool-use makes them much more powerful. We prove that tool-augmented SSMs trained on task-specific trajectories can achieve length generalization on any tractable long-form generation task. That is, we show that for any such task we can construct training data with tool-use trajectories such that a simple training paradigm learns to execute the task with high accuracy, even when evaluated beyond the length of the training data. Importantly, this result only holds for interactive tool-use, and we show that single-turn tool-use SSMs are still limited.

Experimentally, we show that SSMs trained to interactively use external memory tools achieve length generalization on tasks such as arithmetic, logical reasoning and coding. For example, a Mamba model trained to solve a simple coding task extrapolates to codebases larger than those seen during training when trained on trajectories with interactive tool-use (Figure 1). Additionally, a Mamba model trained to execute long-form multi-digit addition using pointer-based memory can generalize from 5-digit addition to 1,000-digit addition (Figure 2). We observe similar results on multiplication and on a logical reasoning task, and more modest extrapolation on solving the Tower of Hanoi task (a task which proved to be difficult for reasoning models, see Shojaee et al. (2025)). Taken together, our theoretical and experimental results highlight the potential advantage of using SSMs as agents with interactive tool access, instead of using them as standalone systems.

[Figure 2: Left: Illustration of an interactive tool-use agent trajectory with a pointer-based memory tool for solving multi-digit addition. The agent can generate thoughts (blue), outputs (purple) or commands (orange), and receive observations (green) from the memory tool. At each step, we show the state of the memory context on the top row, and below it show the sequence of generated tokens. Right: Accuracy of recurrent/SSM models (Mamba, LSTM, GRU) and Transformers (Pythia, Mistral) trained on trajectories for ≤5-digit addition, evaluated on up to 1,000 digits (log scale).]

1.1 Related Work

Chain-of-Thought and Scratchpad  When solving problems that require reasoning, LLMs are known to significantly benefit from generating a CoT, detailing the step-by-step process required for solving the target task (Wei et al., 2022; Nye et al., 2021). Indeed, many datasets used for training models on mathematical problems include such CoT in the training data (Toshniwal et al., 2024b,a; Cobbe et al., 2021). Theoretically, CoT is shown to improve both the expressive power of language models (Merrill & Sabharwal, 2023) and their optimization and learnability (Wies et al., 2022; Malach, 2023). Additionally, it was shown that choices of CoT training data that "localize" the computation enable efficient learning and length generalization (Abbe et al., 2024).
In another work, CoT that encodes the operation of a Turing machine was used to improve length generalization on various tasks (Hou et al., 2024). In our work, we follow a similar approach for improving the length generalization capabilities of language models. However, we focus on SSMs instead of Transformers, and study the effect of interactive tool-use for improving learning and generalization.

Emulations and Neural Turing Machines  The goal of learning to execute general algorithms with neural networks has been discussed in various works. Abbe & Sandon (2023) and Abbe et al. (2021) show universal learning properties of poly-size neural networks trained by stochastic gradient descent. Graves et al. (2014) introduces the Neural Turing Machine (NTM), a neural network that can simulate Turing machines and thus execute computable algorithms. NTMs were studied in different settings (Malekmohamadi Faradonbe et al., 2020), with some improvements such as the Neural GPU (Kaiser & Sutskever, 2015), but were ultimately not widely adopted. Similar works suggested augmenting LSTMs (Hochreiter & Schmidhuber, 1997) with an external stack or tape (Delétang et al., 2022; Joulin & Mikolov, 2015). We use similar ideas to study the algorithmic learning and length generalization capabilities of SSMs in the setting of tool-augmented interactive agents.

Length Generalization  The problem of length generalization, training models on short/simple problems and evaluating them on longer/more complex instances, has been studied in many works. These works often focus on training Transformers on arithmetic or algorithmic tasks such as sorting, copying or multi-digit addition (Jelassi et al., 2023; Nogueira et al., 2021). Different works suggest various techniques for improving the length generalization capabilities of Transformers, including various choices of positional encodings and output format (Zhou et al., 2024; Cho et al., 2024; McLeish et al., 2024; Kazemnejad et al., 2023; Ruoss et al., 2023), scratchpads (Nye et al., 2021; Lee et al., 2023; Zhou et al., 2023; Abbe et al., 2024), architecture (Ontanon et al., 2021; Li & McClelland, 2023), mixing different tasks for "task hinting" (Awasthi & Gupta, 2023) or using looped Transformers (Fan et al., 2024). Some works aim to give a scientific or theoretical explanation of the capabilities and limitations of Transformers in extrapolating beyond the context length (Golowich et al., 2025; Zhou et al., 2023; Huang et al., 2024; Bhattamishra et al., 2022). SSMs have been shown to display robust length generalization capabilities in certain cases. Gu & Dao (2023) demonstrate that Mamba achieves significantly better length generalization performance compared to Transformers on some tasks. Other works show that the length extrapolation of SSMs can be significantly improved with modifications to the model (Ben-Kish et al., 2024) or the training pipeline (Ruiz & Gu, 2025). In this paper, we study the length generalization of SSMs when trained on data with tool-use trajectories. We show that SSMs can achieve perfect length generalization in this setting on various tasks.

2 Theory

In this section we formally define the notion of long-form generation tasks: tasks that require generating longer output sequences as their complexity increases. Following this, we define a family of functions that generalizes the class of SSMs, and theoretically analyze their limitations and capabilities in different tool-use settings.

Definitions and Notation. Fix some set Z and some distribution P over Z.
For some subset $S \subseteq Z$, we denote by $P(S)$ the probability mass of S under P, i.e. $P(S) := \Pr_{z\sim P}[z \in S]$. For some function $f : Z \to Z'$, we denote by $f(P)$ the probability distribution of $f(z)$ for $z \sim P$. For some set B, we denote by $\Delta(B)$ the set of probability distributions over B.

Definition 2.1. For some finite set Z and some distribution P over Z, we define the minimum support size of mass $0 \le \alpha \le 1$ for P to be the size of the smallest set that covers $\alpha$ probability mass: $\mathrm{supp}_\alpha(P) := \min\{|S| : S \subseteq Z,\ P(S) \ge \alpha\}$.

2.1 Long-Form Generation

Let $\Sigma$ be a dictionary of tokens, and denote by $\Sigma^*$ the set of strings of tokens. Let $X_1, X_2, \dots \subseteq \Sigma^*$ be a sequence of input spaces, and let $Y_1, Y_2, \dots \subseteq \Sigma^*$ be a sequence of output spaces. We assume that the input and output spaces are finite, i.e. $|X_n|, |Y_n| < \infty$ for all n. Let $D_1, D_2, \dots$ be a sequence of distributions, such that $D_n$ is a distribution over $X_n$. Finally, let $f : \Sigma^* \to \Sigma^*$ be some ground-truth function that satisfies, for all n, that $f(X_n) \subseteq Y_n$. We think of the parameter n as a complexity parameter, and so the distribution $D_n$ generates more complex inputs as $n \to \infty$. We give the following definition of long-form generation tasks:

Definition 2.2. We say that $f, \{D_n\}_{n=1}^\infty$ is a long-form generation task with coverage $\alpha \in (0, 1)$ if $\mathrm{supp}_\alpha(f(D_n))$ is monotonically increasing with n,² and $\lim_{n\to\infty} \mathrm{supp}_\alpha(f(D_n)) = \infty$.

Namely, we require that as the complexity n increases, the effective number of possible outputs (i.e., outputs that have non-negligible probability mass) increases as well. We note that many natural long-form generation tasks indeed satisfy these conditions, for example: 1) Multi-Digit Addition (or Multiplication): $D_n$ is a distribution over strings of the form a + b = (or a × b =), where a, b are uniformly random numbers with n digits. The function f maps the input strings to the solution, e.g. f("a + b = ") = c where c = a + b (or c = a · b). 2) Sorting: $D_n$ is a distribution over n items, f maps to the sorted list of items. 3) Code Fixing: $D_n$ is a distribution over python codes that have bugs that require changing n lines of code, f maps the code to the necessary changes.

²The condition that the support size is monotonically increasing makes the theoretical results slightly easier to introduce, and holds for practically all reasonable long-form generation problems.

2.2 Generalized State Space Models

In this section, we follow similar definitions and notations as in Jelassi et al. (2024). We define a state space to be some finite set S with $|S| < \infty$. A generalized state space model (GSSM) is a (potentially probabilistic) function $h : \Sigma^* \to \Delta(\Sigma^*)$ defined by an initial state $s_0 \in S$ and two rules: an update rule $u : S \times \Sigma \to S$, and an output rule $r : S \to \Delta(\Sigma)$. Given some input $x \in \Sigma^*$, the function h generates a sequence $y \in \Sigma^*$. We define the state and the output of h at time t recursively s.t. $s_t = u_h(s_{t-1}, x_{t-1})$ if $t \le |x|$ and $s_t = u_h(s_{t-1}, y_{t-|x|})$ if $t > |x|$, and we sample $y_t \sim r(s_{|x|+t})$. We terminate when an end-of-sequence token $[EOS] \in \Sigma$ is observed.
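To make the GSSM abstraction concrete, here is a minimal sketch (our illustration, with toy types; the update rule u, output rule r and state set are placeholders, and the indexing is simplified relative to the formal definition):

```python
# Sketch of a generalized state space model: fixed-size state, an update rule u
# and an output rule r, consuming the input and then generating autoregressively.
import random
from typing import Callable

State, Token = int, str

def run_gssm(x: list[Token],
             u: Callable[[State, Token], State],        # update rule u: S x Sigma -> S
             r: Callable[[State], dict[Token, float]],  # output rule r: S -> Delta(Sigma)
             s0: State = 0, max_steps: int = 100) -> list[Token]:
    s, y = s0, []
    for tok in x:                      # consume the input prefix
        s = u(s, tok)
    for _ in range(max_steps):         # then generate from the output distribution
        dist = r(s)
        t = random.choices(list(dist), weights=list(dist.values()))[0]
        if t == "[EOS]":
            break
        y.append(t)
        s = u(s, t)                    # generated tokens are fed back into the state
    return y
```

Note that the memory available to the model is exactly the state s, which does not grow with the sequence length; this is the property the negative result below exploits.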
Note that any model that has fixed memory as a function of the sequence length satisfies the definition of a GSSM. This includes common choices of recurrent models, such as: LSTM (Hochreiter & Schmidhuber, 1997), Linear Transformers (Katharopoulos et al., 2020), H3 (Fu et al., 2022), RetNet (Sun et al., 2023), Mamba-1 and Mamba-2 (Gu & Dao, 2023; Dao & Gu, 2024) and other variants (Yang et al., 2024b). Additionally, Transformers where all the attention layers have local (sliding window) attention with fixed window size are also GSSMs. Other computational models that use Transformers to process fixed-length sequences and update fixed-size "memory" vectors (Hutchins et al., 2022) are also GSSMs. Transformers and hybrid-SSM models are not GSSMs, since their memory increases with the sequence length.

CoT, Single-Turn and Interactive Tool-Use. We analyze multiple settings where the model can invoke reasoning and tool-use. We follow the popular ReAct framework (Yao et al., 2023) and let the model generate either thoughts, that capture the internal reasoning of the model, or actions that are followed by observations from the environment. The thoughts and actions can be interleaved during the runtime of the model. We specify two types of actions: command actions, that are sent to a tool-oracle O that returns an observation following the execution of the command, and output actions that are simply tokens appended to an output stream and do not result in an observation. The output stream captures the final response of the model, which is then evaluated against the ground-truth.³ Thoughts, commands and observations are placed between dedicated open/close tags (e.g. [THINK], [\THINK]). We define more formally the tool-oracle and the interaction protocol in Appendix A.

We analyze three settings for problem-solving agents. 1) CoT-only: The model is allowed to use only thoughts or outputs, but cannot issue commands or receive external observations.⁴ 2) Single-Turn Tool-Use: The model is allowed to issue a single command, followed by an observation, and then generate the output. The model can use thoughts before and after the tool call, and during the output generation. 3) Interactive Tool-Use: The model is allowed to use as many commands as it needs, and freely interleave thoughts, commands and outputs.

³We focus on agents for solving input-output problems, where the task of the model is to generate some output given the input problem (e.g., question answering, coding, mathematical proofs, etc.). This is a different setting from an agent that performs actions and collects rewards, as in many Reinforcement Learning problems.
⁴This setting also includes the case where the model generates the output immediately, without using CoT.
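A minimal sketch of this interaction protocol follows (our illustration; the tag names mirror the text, while the segment parsing and the oracle interface are assumptions):

```python
# Sketch of the ReAct-style loop: the model emits tagged segments; commands are
# routed to the tool-oracle, and output tokens accumulate in the output stream.
def run_agent(model, oracle, prompt: str, max_turns: int = 100) -> str:
    ctx, output = prompt, []
    for _ in range(max_turns):
        seg = model.generate(ctx)           # e.g. "[CMD]pointer1.read()[\CMD]"
        ctx += seg
        if seg.startswith("[CMD]"):
            obs = oracle(seg[len("[CMD]"):-len("[\\CMD]")])
            ctx += f"[OBS]{obs}[\\OBS]"     # observation fed back to the model
        elif seg.startswith("[OUT]"):
            output.append(seg[len("[OUT]"):-len("[\\OUT]")])
        elif seg == "[EOS]":
            break
        # [THINK] segments stay in ctx but trigger no external effect
    return "".join(output)                  # the output stream = final answer
```

In the single-turn setting the loop above would allow at most one command; in the interactive setting the number of commands is unbounded, which is exactly what the positive result below relies on.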
2.3 Learning Algorithms and Length Generalization

Fix some task $f, \{D_n\}_{n=1}^\infty$. We now define training data distributions for learning the task f. We note that for many downstream tasks, it is common to collect training data that contains CoT reasoning and/or tool-use traces for solving the problem. We therefore allow the training distributions to contain task-specific reasoning and tool-use trajectories. Given some trajectory $z \in \Sigma^*$, we denote by $z^{(out)}$ the value of the output stream after execution of the trajectory. Formally, a training distribution for the task $f, \{D_n\}_{n=1}^\infty$ is a sequence of distributions $\{P_n\}_{n=1}^\infty$ s.t. $P_n$ is a distribution over $X_n \times \Sigma^*$ satisfying that: 1) $D_n$ is the marginal distribution of $P_n$ w.r.t. $X_n$, and 2) For $(x, z) \sim P_n$, with probability 1 it holds that $z^{(out)} = f(x)$ (i.e. the output stream at the end of generation evaluates to the correct answer).

A learning algorithm A is an algorithm that, for some given length n, draws a sample of size m from $P_1, \dots, P_n$,⁵ and returns some hypothesis $h : \Sigma^* \to \Delta(\Sigma^*)$ that, given an input problem, can generate a reasoning and tool-use trajectory. We denote the output of A in this case by $A(P_1, \dots, P_n)$. We say that A is a GSSM learning algorithm if it always returns a GSSM. We define the error of h w.r.t. f for some complexity n by $\mathrm{err}_n(h) = \Pr\left[h^{(out)}(x) \ne f(x)\right]$, with probability over $x \sim D_n$ and the randomness of h. We now define length generalization of an algorithm:

⁵We let the algorithm choose freely how to sample from these distributions.

Definition 2.3. We say that A achieves length generalization, if for every $\epsilon, \delta \in (0, 1)$ there exists some minimal complexity $n_0$ and sample size m s.t. w.p. $\ge 1 - \delta$ we have that $h_{n_0} = A(P_1, \dots, P_{n_0})$ satisfies $\mathrm{err}_n(h_{n_0}) \le \epsilon$ for all $n \ge n_0$.

Namely, we require that the algorithm returns a hypothesis with low error on problems with arbitrarily large complexity n, as long as it sees "complex enough" input sequences in the training data (with complexity larger than $n_0$). This requirement may seem relatively strong, as we could expect that the error of the learned model would grow with the complexity of the problem. However, we will show theoretically (and to some extent, empirically) that with carefully constructed training data, achieving such "infinite" length generalization is possible.

2.4 Main Results

In this subsection, we state the main theoretical results in the paper. We begin by showing a negative result, stating that GSSMs cannot solve long-form generation tasks if they operate in the CoT-only or single-turn tool-use setting. Following this, we show a positive result, proving that for any computable long-form generation task we can construct training data such that a simple learning algorithm achieves length generalization on the target task in the interactive tool-use setting.

GSSMs cannot Solve Long-Form Generation Tasks without Interaction. We begin by stating the negative result. The proof is relatively simple: since the model has a fixed memory, and outputs are a function of the state of the memory, the model cannot generate all outputs as complexity grows.

Theorem 2.1. Let f be a long-form generation task over $\{D_n\}_{n=1}^\infty$ with coverage parameter $\alpha \in (0, 1)$. Then, for any CoT-only or Single-Turn GSSM h there exists some problem complexity $n_0$ s.t. for all $n \ge n_0$ the model h has error: $\mathrm{err}_n(h) \ge 1 - \alpha$.

The full proof is given in Appendix B. An immediate implication of this result is that GSSM learning algorithms cannot achieve length generalization on long-form generation tasks without interaction.

GSSMs with Interactive Tool-Use can Length Generalize on Long-Form Generation Tasks. For some function $f : \Sigma^* \to \Sigma^*$, we say that f is computationally tractable if there exists a Turing machine T s.t. for any $x \in \Sigma^*$, if T begins with x written on its tape, it halts with $f(x)$ written on its tape. The following result shows that a GSSM learning algorithm can achieve length generalization with interactive tool-use, given proper training data:

Theorem 2.2. There exists a memory-tool oracle O and a simple GSSM learning algorithm A⁶ s.t. for any computationally tractable long-form generation task $f, \{D_n\}_{n=1}^\infty$, there exists a sequence of training distributions $\{P_n\}_{n=1}^\infty$ for which A achieves length generalization in the interactive setting.

To show the above result, we define a simple tool that allows read/write access to some external memory, using a pointer that can move left or right between the memory cells. Using this tool, we can simulate the operations of a Turing machine, where we use the external memory as the tape of the Turing machine, use thoughts to track the state of the machine and commands to move the head and read/write symbols.
Since the transition function of the Turing machine is defined for every pair of state and symbol, to prove that length generalization is achieved we show that, for large enough $n_0$, most of these pairs are seen in the training data. We give the complete proof in Appendix B.

To conclude, the above results show that interactive tool-use is both necessary and sufficient for GSSMs to achieve length generalization on tractable long-form generation problems.

⁶The algorithm that we analyze is "simple" in the sense that it learns a function that operates using simple string-matching with the training data, similar to e.g. n-gram models. While this is not a "standard" learning algorithm, we believe that similar results can be obtained for more natural algorithms (e.g., gradient descent on a simple RNN), at the cost of making the analysis much more involved.

Table 1: Experimental results for synthetic tasks for different models. The notation n → m (p%) means a model trained on length n achieves accuracy p on length m (for the largest m s.t. p ≥ 5%).

| Model | n × 1 | n × 2 | Logical Graph | Hanoi⁷ |
| Mamba | 10→1K (100%) | 10→1K (100%) | 10→1K (98%) | 8→12 (49%) |
| LSTM | 10→500 (100%) | 10→100 (100%) | 10→1K (100%) | 8→8 (100%) |
| GRU | 10→500 (100%) | 10→100 (100%) | 10→1K (100%) | 8→8 (100%) |
| Pythia | 10→20 (79%) | 10→14 (12%) | 10→1K (5%) | 8→8 (100%) |
| Mistral | 10→13 (25%) | 10→20 (33%) | 10→500 (9%) | 8→8 (100%) |

⁷Due to the sensitivity of the Hanoi experiments to the initialization seed, we tried 10 seeds for Mamba and Pythia and 3 seeds for the rest of the models. The best-performing seed is reported. See Appendix D.4 for more details.

3 Experiments

In this section we evaluate the length generalization capabilities of GSSMs and Transformer-based language models on various tasks, including arithmetic, reasoning and coding. We experiment with different choices of tools that allow read/write memory access, using either pointer-based memory access, a search tool, or arbitrary bash commands for reading and changing files. We use both tasks where we synthetically generate the ground-truth trajectory and tool commands, as well as a coding task where we collect the trajectories from a SWE coding agent. In our experiments, we largely follow a framework similar to the ReAct agents defined in the previous section, where the model can interleave thoughts, outputs ("final answer" tokens), and commands that are followed by observations from the environment. In our experiments we use Mamba SSM (Gu & Dao, 2023), LSTM (Hochreiter & Schmidhuber, 1997), GRU (Cho et al., 2014), Pythia Transformer (Biderman et al., 2023) and a Transformer with sliding window (local) attention based on the Mistral architecture (Jiang et al., 2023). In all experiments, we see that SSMs/RNNs achieve length generalization performance that is much better compared to Transformers. See Appendix C for experimental details.

3.1 Arithmetic Tasks

In the following set of experiments, we augment the model with a pointer-based memory tool that gives the model access to past tokens in the input/output context. In this setting, the model can execute the following commands: 1) initialize a new pointer, 2) move the pointer left or right by a single token and 3) read the token under a given pointer. By default, a new pointer is initialized to the first token position of the input context. The thoughts and outputs are appended to the context, and are therefore accessible by the pointers (if they reach beyond the length of the input), but commands and observations are discarded (i.e., they are not appended to the context memory and cannot be read by the pointers). We give a detailed description of how thoughts, commands and outputs are specified in Appendix D.1. The final answer is written in the output stream at the end of the generation.
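A minimal sketch of such a pointer-based memory tool is given below (our illustration; the command names mirror Figure 2, while the class interface is an assumption):

```python
# Sketch of the pointer-based memory tool: named pointers over the token
# context, supporting init / move_left / move_right / read, as described above.
class PointerMemory:
    def __init__(self, context: list[str]):
        self.context = context             # input tokens + appended thoughts/outputs
        self.pointers: dict[str, int] = {}

    def init_pointer(self, name: str):
        self.pointers[name] = 0            # new pointers start at the first token

    def move_left(self, name: str):
        self.pointers[name] = max(0, self.pointers[name] - 1)

    def move_right(self, name: str):
        self.pointers[name] = min(len(self.context) - 1, self.pointers[name] + 1)

    def read(self, name: str) -> str:      # the observation returned to the model
        return self.context[self.pointers[name]]

    def append(self, tokens: list[str]):   # thoughts/outputs become readable later;
        self.context.extend(tokens)        # commands/observations are never appended
```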
We train the model using the standard next-token prediction objective with teacher-forcing, while masking from the loss the input question and the observations (the outputs of a read operation, which will be generated by the memory tool). For the training data, we construct synthetic trajectories that simulate the desired algorithm, and train the model to exactly execute the algorithm required for solving the problem using the memory-tool interaction.

Multi-Digit Addition. For this task, we train the model to perform multi-digit addition. We fix some maximal training length n, and for each training example we sample uniformly n₁, n₂ ∼ {1, . . . , n}, then sample two numbers x₁, x₂ where xᵢ is a uniformly random nᵢ-digit number. We construct a training example with the trajectory for solving x₁ + x₂, essentially mimicking the long addition algorithm (see Appendix D.3 for details). For evaluation, we choose n′ ≥ n and evaluate on addition of two n′-digit numbers. In evaluation, we measure the accuracy of exact recovery of the trajectory and the final answer (i.e., we measure the probability of generating a solution that exactly matches the desired algorithm). Figure 2 (right) shows the results of this experiment. We observe that Mamba and LSTM trained on 5-digit demonstrations learn to perfectly perform 1,000-digit addition (we did not measure the accuracy beyond this). A Transformer trained in the same setting fails to extrapolate. Additional ablations, such as training with no CoT, no tool-use and single-turn tool-use, result in little to no length generalization, and are discussed in Appendix D.7. A sketch of how we construct such trajectories is shown below.

Multi-Digit Multiplication. For this task we use the same pointer-based memory tool described above for learning the multiplication algorithm. In this task, we increase the length of only the first operand, keeping the second operand fixed. Specifically, we fix some maximal training length n, choose n₁ ∼ {1, . . . , n} to be the length of the first operand and choose n₂ ∼ {1, 2} to be the length of the second operand (i.e. we multiply an n₁-digit number by a 1-digit or 2-digit number). We sample x₁, x₂ where xᵢ is a uniformly random nᵢ-digit number, and construct the trajectory for solving x₁ × x₂ (see details in Appendix D.3). We then evaluate on n′ × 1 and n′ × 2 digit multiplication, for some n′ ≥ n, and report exact recovery accuracy. We train different SSMs/RNNs and Transformers where the first operand has n ≤ 10 digits, and evaluate on multiplications of numbers with up to 1,000 digits (Table 1). Here too we see that Mamba models maintain high accuracy when evaluated on numbers that have orders of magnitude more digits than in the training data (also see Appendix D.6 for ablations on training steps and maximum number of digits seen during training).
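The following is an illustrative generator for such addition trajectories (a sketch under an assumed tag/command syntax, not the exact data-generation code used in the paper):

```python
# Sketch: synthesize an interactive tool-use trajectory for long addition,
# mimicking the schoolbook algorithm with two digit pointers (cf. Figure 2).
def addition_trajectory(a: str, b: str) -> str:
    traj = [f"{a} + {b} ="]
    traj += ["[CMD]pointer1.init()[\\CMD]", "[CMD]pointer2.init()[\\CMD]"]
    carry, out = 0, []
    for i in range(max(len(a), len(b))):       # least-significant digit first
        d1 = int(a[-1 - i]) if i < len(a) else 0
        d2 = int(b[-1 - i]) if i < len(b) else 0
        traj += [f"[CMD]pointer1.read()[\\CMD][OBS]{d1}[\\OBS]",
                 f"[CMD]pointer2.read()[\\CMD][OBS]{d2}[\\OBS]"]
        carry, digit = divmod(d1 + d2 + carry, 10)
        traj.append(f"[THINK]sum={digit},carry={carry}[\\THINK]")
        out.append(str(digit))
        traj += ["[CMD]pointer1.move_left()[\\CMD]", "[CMD]pointer2.move_left()[\\CMD]"]
    if carry:
        out.append("1")
    traj.append("[OUT]" + "".join(reversed(out)) + "[\\OUT]")
    return " ".join(traj)
```

During training, the observation spans ([OBS]...[\OBS]) would be masked from the loss, since at inference time they are produced by the tool rather than by the model.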
[Figure 3: Illustration of the Logical Graph reasoning task (top) and the code-fixing task (bottom). We generate random graphs that define the logical function structure or the code dependencies, and synthetically generate problems according to these graph structures.]

Task Mixture. We examine whether co-training a primary task with an auxiliary task that shares a related computational structure yields synergistic benefits (Awasthi & Gupta, 2023). Our experiments indicate that such co-training improves the length generalization of the primary task under limited training budgets. In our experiments, the primary task is multiplication (n-digit × 2-digit), co-trained with addition (n+n digits) as an auxiliary task. Both tasks share structural similarities when expressed as sequences of tool calls. The training distribution for multiplication contains samples up to 20 digits. We compare the accuracy as a function of test length for various training budgets (250, 500 or 800 steps) and various choices of task mixtures (see Appendix D.8). We observe that under limited budgets (250 steps), introducing auxiliary addition samples yields minor improvements. At intermediate budgets (500 steps), the benefit becomes more pronounced, with certain weights extending generalization to much larger n. However, with sufficient training (800 steps), all settings converge to strong generalization, and the auxiliary data provides no additional gain.

3.2 Algorithmic/Reasoning Tasks

We next turn to evaluate the tool-use paradigm on tasks that test certain "reasoning" capabilities.

Tower of Hanoi. This task is based on the popular puzzle game, which was also recently used for testing reasoning capabilities of frontier LLMs, showing that they struggle to solve this task as complexity increases
Again, we see that Mamba and recurrent models can solve this problem, extrapolating to graphs with n = 1, 000 nodes. 3.3 Coding Task For the previous tasks, we trained models “from scratch” on synthetic trajectories that invoke tool use for solving arithmetic and algorithmic problems. This allowed us to demonstrate the length generalization capability of SSMs equipped with tool-use in a clean and controlled setting, resulting in perfect recovery of the underlying algorithm in many cases. We now turn to study extrapolation of tool-use agents in a more realistic coding setting. Importantly, this setting will allow us to go beyond programmatically generated trajectories and collect trajectories from an existing SWE coding agent. This demonstrates that our results and observations can also be applicable in settings where the underlying algorithm/method for solving the task are not known or well-specified. Our task will be fixing a “bug” in a given codebase. To construct the codebase we generate n python functions, each function saved in a separate python file. The functions form a dependency graph, with one root function called main (stored in main.py). Each function declares variables (named v0, v1, . . . , v9), gets some variables as inputs and passes some variables to other functions it imports. We generate this codebase by randomly generating a dependency graph, iteratively adding nodes (functions) to this graph and connecting each node to existing nodes, where each edge represents a parent function importing a child function. Function names are randomly selected from f0, . . . , f999, except for the last function added to the graph which is called foo. We then randomly assign variables and print them and/or pass them from parent functions to child functions. The code always has the following “bug”: there is a special variable v10 that is declared in main.py and is called in foo.py without properly passing it from main. In order to fix the code, we need to pass the variable v10 through all the dependency paths from main to foo (ideally without changing other functions, though we do not enforce this). See Figure 3 for an illustration of how we generate the code. We start by running a coding agent and collecting its trajectories when attempting to solve this code-fixing task, as we are varying the number of functions n in the codebase (choosing n ∈{4, . . . , 16}). We use three types of agents for generating trajectories, (illustrated in Figure 1): 1) Single-Turn Agent: Hard-coded agent that prints all the files and immediately generates the correct code edits. 2) Interactive Agent: Hard-coded agent that iteratively runs the code, resolves the issue in up to 3 files, then runs the code again, and keeps going until the code runs without errors. 3) Distillation: An agent based on SWE-agent-LM-32B (Yang et al., 2025), a capable open-source coding model that we couple with mini-SWE-agent8 (Yang et al., 8https://github.com/SWE-agent/mini-swe-agent 9 2024a) as a simple agent environment which gives the model access to the code through bash commands. We instruct the model to fix the bug in the code, specifically telling it what the bug is and how it should fix it (pass the variable v10 from main to foo). See the full prompt and further details in Appendix E. We observe that while this task is relatively simple, the model’s performance degrades as the complexity (number of functions) in the codebase increases (see statistics in Appendix E). 
We therefore filter the trajectories to include only trajectories that correctly fixed the code, and also filter for short trajectories (shorter than the average length for a given size n). After collecting around 100K trajectories from each coding agent, we finetune two comparable models on these trajectories: Pythia-1.4B (Transformer-based model, Biderman et al. (2023)) and Mamba-1.4B (Gu & Dao, 2023), both pretrained on The Pile (Gao et al., 2020). We train both models with context length 8,192, on codebases of up to 16 functions (if the trajectory is longer than the context length, we train only on the first 8,192 tokens). We then evaluate both models on codebases of different sizes, letting the models generate beyond the context length.9 We measure the probability of correctly fixing the code (using the same environment used for collecting the trajectories). As shown in Figure 1, we observe that for codebases with small number of functions, both Transformer and Mamba models perform well in all settings. Notably, the Transformer-based model outperforms the Mamba SSM for small codebases in the agent distillation setting, achieving over 90% pass rate. However, for larger codebases, beyond the training distribution (both in terms of number of functions and trajectory length), we see that the Mamba model maintains much better accuracy as the complexity increases when trained to imitate interactive agents (agents 2 and 3), but fails on complex codebases when trained in the single-turn setting (agent 1). This finding aligns with our theoretical results, and also matches the previous synthetic experiments. 4 Conclusion and Discussion We started this work by comparing two families of models for long-form generation: Transformers and SSMs. Transformers are inefficient for long-context and long-form generation, as their computational complexity scales quadratically with the sequence length. SSMs, on the other hand, offer linear scaling of compute but, as we showed, cannot accurately solve long-form generation tasks (without tools). This demonstrates a clear trade-off between efficiency and accuracy that seems to be inescapable. Indeed, several works have observed that SSMs are inferior to Transformers in various tasks that require memorization of long sequences (Jelassi et al. (2024); Waleffe et al. (2024)). On the positive side, we show that in the agentic/tool-use setting, SSMs can leverage tools to overcome their memory bottleneck, thus offering efficiency, accuracy, and generalization to longer sequences. In hindsight, SSMs seem to be a natural fit for tool-use settings: tools often generate large quantities of content, which SSMs can parse efficiently, and also involve multi-turn interactions that can quickly overflow the context of a standard Transformer. However, it seems that there is little work on building SSM-based agents, and thus their evaluation is restricted to the “standalone” setting, where they are inherently limited. We do not believe this is due to any inability of SSMs to learn tool-use behavior. For example, while Mistral’s Mamba-Codestral-7B-v0.1 model does not naively support function calling, it is able to achieve comparable tool-use performance to several function-calling-enabled transformer-based models (Appendix F). We therefore believe this work should encourage the development of a tool-based SSMs that operate in various agentic settings, such as coding, search or reasoning. 
This application could potentially unlock the full capabilities of these models, making them competitive with, or even superior to, Transformer-based agents. Finally, we want to emphasize that our work is one of the first to analyze the performance of language modeling architectures in a system, rather than as standalone models. To some extent, our analysis shows that certain architectures can be "weaker" when operating standalone, but in fact perform better when incorporated as part of a system. Since LLMs are now rarely used as standalone tools, we believe that this aspect of language modeling deserves more attention and focus in the field.

References
Emmanuel Abbe and Colin Sandon. Polynomial-time universality and limitations of deep learning. Communications on Pure and Applied Mathematics, 76(11):3493–3549, 2023. doi: https://doi.org/10.1002/cpa.22121. URL https://onlinelibrary.wiley.com/doi/abs/10.1002/cpa.22121.
Emmanuel Abbe, Pritish Kamath, Eran Malach, Colin Sandon, and Nathan Srebro. On the power of differentiable learning versus PAC and SQ learning. In Advances in Neural Information Processing Systems, volume 34, 2021.
Emmanuel Abbe, Samy Bengio, Aryo Lotfi, Colin Sandon, and Omid Saremi. How far can transformers reason? The globality barrier and inductive scratchpad. Advances in Neural Information Processing Systems, 37:27850–27895, 2024.
Ekin Akyürek, Bailin Wang, Yoon Kim, and Jacob Andreas. In-context language learning: Architectures and algorithms. arXiv preprint arXiv:2401.12973, 2024.
Pranjal Awasthi and Anupam Gupta. Improving length-generalization in transformers via task hinting. arXiv preprint arXiv:2310.00726, 2023. URL https://arxiv.org/abs/2310.00726.
Assaf Ben-Kish, Itamar Zimerman, Shady Abu-Hussein, Nadav Cohen, Amir Globerson, Lior Wolf, and Raja Giryes. Decimamba: Exploring the length extrapolation potential of mamba. arXiv preprint arXiv:2406.14528, 2024.
Satwik Bhattamishra, Arkil Patel, Varun Kanade, and Phil Blunsom. Simplicity bias in transformers and their ability to learn sparse boolean functions. arXiv preprint arXiv:2211.12316, 2022.
Stella Biderman, Hailey Schoelkopf, Quentin Gregory Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, et al. Pythia: A suite for analyzing large language models across training and scaling. In International Conference on Machine Learning, pp. 2397–2430. PMLR, 2023.
Aaron Blakeman, Aarti Basant, Abhinav Khattar, Adithya Renduchintala, Akhiad Bercovich, Aleksander Ficek, Alexis Bjorlin, Ali Taghibakhshi, Amala Sanjay Deshmukh, Ameya Sunil Mahabaleshwarkar, et al. Nemotron-h: A family of accurate and efficient hybrid mamba-transformer models. arXiv preprint arXiv:2504.03624, 2025.
Hanseul Cho, Jaeyoung Cha, Pranjal Awasthi, Srinadh Bhojanapalli, Anupam Gupta, and Chulhee Yun. Position coupling: Improving length generalization of arithmetic transformers using task structure. Advances in Neural Information Processing Systems, 37:22233–22315, 2024.
Kyunghyun Cho, Bart Van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259, 2014.
Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, et al. Rethinking attention with performers. arXiv preprint arXiv:2009.14794, 2020.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
Tri Dao and Albert Gu. Transformers are ssms: Generalized models and efficient algorithms through structured state space duality. arXiv preprint arXiv:2405.21060, 2024.
Tri Dao, Dan Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. Flashattention: Fast and memory-efficient exact attention with io-awareness. Advances in Neural Information Processing Systems, 35:16344–16359, 2022.
Grégoire Delétang, Anian Ruoss, Jordi Grau-Moya, Tim Genewein, Li Kevin Wenliang, Elliot Catt, Chris Cundy, Marcus Hutter, Shane Legg, Joel Veness, et al. Neural networks and the chomsky hierarchy. arXiv preprint arXiv:2207.02098, 2022.
Ying Fan, Yilun Du, Kannan Ramchandran, and Kangwook Lee. Looped transformers for length generalization. arXiv preprint arXiv:2409.15647, 2024.
Daniel Y Fu, Tri Dao, Khaled K Saab, Armin W Thomas, Atri Rudra, and Christopher Ré. Hungry hungry hippos: Towards language modeling with state space models. arXiv preprint arXiv:2212.14052, 2022.
Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. The Pile: An 800gb dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027, 2020.
Noah Golowich, Samy Jelassi, David Brandfonbrener, Sham M Kakade, and Eran Malach. The role of sparsity for length generalization in transformers. arXiv preprint arXiv:2502.16792, 2025.
Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. arXiv preprint arXiv:1410.5401, 2014.
Albert Gu and Tri Dao. Mamba: Linear-time sequence modeling with selective state spaces. arXiv preprint arXiv:2312.00752, 2023.
Albert Gu, Karan Goel, and Christopher Ré. Efficiently modeling long sequences with structured state spaces. arXiv preprint arXiv:2111.00396, 2021.
Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
Kaiying Hou, David Brandfonbrener, Sham Kakade, Samy Jelassi, and Eran Malach. Universal length generalization with turing programs. arXiv preprint arXiv:2407.03310, 2024.
Xinting Huang, Andy Yang, Satwik Bhattamishra, Yash Sarrof, Andreas Krebs, Hattie Zhou, Preetum Nakkiran, and Michael Hahn. A formal framework for understanding length generalization in transformers. arXiv preprint arXiv:2410.02140, 2024.
DeLesley Hutchins, Imanol Schlag, Yuhuai Wu, Ethan Dyer, and Behnam Neyshabur. Block-recurrent transformers. Advances in Neural Information Processing Systems, 35:33248–33261, 2022.
Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, et al. Openai o1 system card. arXiv preprint arXiv:2412.16720, 2024.
Samy Jelassi, Stéphane d'Ascoli, Carles Domingo-Enrich, Yuhuai Wu, Yuanzhi Li, and François Charton. Length generalization in arithmetic transformers. arXiv preprint arXiv:2306.15400, 2023.
Samy Jelassi, David Brandfonbrener, Sham M Kakade, and Eran Malach. Repeat after me: Transformers are better than state space models at copying. In International Conference on Machine Learning, pp. 21502–21521. PMLR, 2024.
Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. Mistral 7b, 2023. URL https://arxiv.org/abs/2310.06825.
Armand Joulin and Tomas Mikolov. Inferring algorithmic patterns with stack-augmented recurrent nets. Advances in Neural Information Processing Systems, 28, 2015.
Łukasz Kaiser and Ilya Sutskever. Neural gpus learn algorithms. arXiv preprint arXiv:1511.08228, 2015.
Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, and François Fleuret. Transformers are rnns: Fast autoregressive transformers with linear attention. In International Conference on Machine Learning, pp. 5156–5165. PMLR, 2020.
Amirhossein Kazemnejad, Inkit Padhi, Karthikeyan Natesan Ramamurthy, Payel Das, and Siva Reddy. The impact of positional encoding on length generalization in transformers. Advances in Neural Information Processing Systems, 36:24892–24928, 2023.
Nayoung Lee, Kartik Sreenivasan, Jason D Lee, Kangwook Lee, and Dimitris Papailiopoulos. Teaching arithmetic to small transformers. arXiv preprint arXiv:2307.03381, 2023.
Yuxuan Li and James McClelland. Representations and computations in transformers that support generalization on structured tasks. Transactions on Machine Learning Research, 2023.
Junyu Luo, Weizhi Zhang, Ye Yuan, Yusheng Zhao, Junwei Yang, Yiyang Gu, Bohan Wu, Binqi Chen, Ziyue Qiao, Qingqing Long, et al. Large language model agent: A survey on methodology, applications and challenges. arXiv preprint arXiv:2503.21460, 2025.
Eran Malach. Auto-regressive next-token predictors are universal learners. arXiv preprint arXiv:2309.06979, 2023.
Soroor Malekmohamadi Faradonbe, Faramarz Safi-Esfahani, and Morteza Karimian-Kelishadrokhi. A review on neural turing machine (ntm). SN Computer Science, 1(6):333, 2020.
Sean McLeish, Arpit Bansal, Alex Stein, Neel Jain, John Kirchenbauer, Brian Bartoldson, Bhavya Kailkhura, Abhinav Bhatele, Jonas Geiping, Avi Schwarzschild, et al. Transformers can do arithmetic with the right embeddings. Advances in Neural Information Processing Systems, 37:108012–108041, 2024.
William Merrill and Ashish Sabharwal. The expressive power of transformers with chain of thought. arXiv preprint arXiv:2310.07923, 2023.
Rodrigo Nogueira, Zhiying Jiang, and Jimmy Lin. Investigating the limitations of transformers with simple arithmetic tasks. arXiv preprint arXiv:2102.13019, 2021.
Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, et al. Show your work: Scratchpads for intermediate computation with language models. 2021.
Santiago Ontanon, Joshua Ainslie, Vaclav Cvicek, and Zachary Fisher. Making transformers solve compositional tasks. arXiv preprint arXiv:2108.04378, 2021.
Jongho Park, Jaeseung Park, Zheyang Xiong, Nayoung Lee, Jaewoong Cho, Samet Oymak, Kangwook Lee, and Dimitris Papailiopoulos. Can mamba learn how to learn? a comparative study on in-context learning tasks. arXiv preprint arXiv:2402.04248, 2024.
Shishir G Patil, Huanzhi Mao, Fanjia Yan, Charlie Cheng-Jie Ji, Vishnu Suresh, Ion Stoica, and Joseph E Gonzalez. The berkeley function calling leaderboard (bfcl): From tool use to agentic evaluation of large language models. In International Conference on Machine Learning.
Haohao Qu, Liangbo Ning, Rui An, Wenqi Fan, Tyler Derr, Hui Liu, Xin Xu, and Qing Li. A survey of mamba. arXiv preprint arXiv:2408.01129, 2024.
Ricardo Buitrago Ruiz and Albert Gu. Understanding and improving length generalization in recurrent models. arXiv preprint arXiv:2507.02782, 2025.
Anian Ruoss, Grégoire Delétang, Tim Genewein, Jordi Grau-Moya, Róbert Csordás, Mehdi Bennani, Shane Legg, and Joel Veness. Randomized positional encodings boost length generalization of transformers. arXiv preprint arXiv:2305.16843, 2023.
Parshin Shojaee, Iman Mirzadeh, Keivan Alizadeh, Maxwell Horton, Samy Bengio, and Mehrdad Farajtabar. The illusion of thinking: Understanding the strengths and limitations of reasoning models via the lens of problem complexity. arXiv preprint arXiv:2506.06941, 2025.
Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, and Furu Wei. Retentive network: A successor to transformer for large language models. arXiv preprint arXiv:2307.08621, 2023.
Shubham Toshniwal, Wei Du, Ivan Moshkov, Branislav Kisacanin, Alexan Ayrapetyan, and Igor Gitman. Openmathinstruct-2: Accelerating ai for math with massive open-source instruction data. arXiv preprint arXiv:2410.01560, 2024a.
Shubham Toshniwal, Ivan Moshkov, Sean Narenthiran, Daria Gitman, Fei Jia, and Igor Gitman. Openmathinstruct-1: A 1.8 million math instruction tuning dataset. Advances in Neural Information Processing Systems, 37:34737–34774, 2024b.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.
Roger Waleffe, Wonmin Byeon, Duncan Riach, Brandon Norick, Vijay Korthikanti, Tri Dao, Albert Gu, Ali Hatamizadeh, Sudhakar Singh, Deepak Narayanan, et al. An empirical study of mamba-based language models. arXiv preprint arXiv:2406.07887, 2024.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022.
Noam Wies, Yoav Levine, and Amnon Shashua. Sub-task decomposition enables learning in sequence to sequence tasks. arXiv preprint arXiv:2204.02892, 2022.
John Yang, Carlos E Jimenez, Alexander Wettig, Kilian Lieret, Shunyu Yao, Karthik R Narasimhan, and Ofir Press. SWE-agent: Agent-computer interfaces enable automated software engineering. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024a. URL https://arxiv.org/abs/2405.15793.
John Yang, Kilian Lieret, Carlos E Jimenez, Alexander Wettig, Kabir Khandpur, Yanzhe Zhang, Binyuan Hui, Ofir Press, Ludwig Schmidt, and Diyi Yang. Swe-smith: Scaling data for software engineering agents. arXiv preprint arXiv:2504.21798, 2025.
Songlin Yang, Jan Kautz, and Ali Hatamizadeh. Gated delta networks: Improving mamba2 with delta rule. arXiv preprint arXiv:2412.06464, 2024b.
Songlin Yang, Bailin Wang, Yu Zhang, Yikang Shen, and Yoon Kim. Parallelizing linear transformers with the delta rule over sequence length. Advances in Neural Information Processing Systems, 37:115491–115522, 2024c.
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. React: Synergizing reasoning and acting in language models. In International Conference on Learning Representations (ICLR), 2023.
Asaf Yehudai, Lilach Eden, Alan Li, Guy Uziel, Yilun Zhao, Roy Bar-Haim, Arman Cohan, and Michal Shmueli-Scheuer. Survey on evaluation of llm-based agents. arXiv preprint arXiv:2503.16416, 2025.
Hattie Zhou, Arwen Bradley, Etai Littwin, Noam Razin, Omid Saremi, Josh Susskind, Samy Bengio, and Preetum Nakkiran. What algorithms can transformers learn? a study in length generalization. arXiv preprint arXiv:2310.16028, 2023.
Yongchao Zhou, Uri Alon, Xinyun Chen, Xuezhi Wang, Rishabh Agarwal, and Denny Zhou. Transformers can achieve length generalization but not robustly. arXiv preprint arXiv:2402.09371, 2024.

A More Definitions
Here we give a more complete and formal definition for models with tool-use. We start by defining a tool-use oracle, which receives tool-use commands and returns an observation corresponding to the execution of the command. This oracle is stateful, meaning that its responses can vary depending on the memory of the oracle, which can be updated based on the commands it receives (for example, the oracle can receive a command for updating the contents of a file on disk, which will affect its memory and hence future requests for reading that file). Let M be some set that corresponds to the set of memories of the oracle. We denote by Mt ∈ M the memory of the oracle after receiving t commands, and let M0 be the initial memory of the oracle. Importantly, we let the initial memory of the oracle depend on the input (e.g., the input can be stored in the memory of the oracle). For some memory Mt ∈ M, we define O_{Mt} : Σ∗ → Σ∗ to be the mapping from tool calls to observations, given memory Mt.

We augment the dictionary with additional tokens: [TOOL], [\TOOL], [OBS], [\OBS], [THINK], [\THINK] ∈ Σ. At any point in the generation, the model h can issue a call to a tool by generating a sequence of the form [TOOL], z, [\TOOL], for some z ∈ Σ∗ which encodes the tool command. The command z is passed to the tool oracle, and the resulting observation is then appended to the context of the model, with the format [OBS], O_{Mt}(z), [\OBS]. The model can also generate thoughts/reasoning, generated as a sequence [THINK], z, [\THINK]. All other tokens (tokens that are not tool commands, observations or thinking tokens) are considered output tokens, and are appended to the output stream. In the CoT-only setting, the model is only allowed to use thinking and output tokens. In the single-turn setting, the model can issue a single tool command, and can start generating the output after receiving the observation from the command (but can think before, after and during the output generation). In the interactive setting, the model can issue a tool call at any point in the generation, possibly interleaved with output tokens and tool commands. When evaluating the output, we ignore all tool commands, observations and thoughts, and only consider the output stream at the end of generation.
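As a concrete illustration of this protocol, the following minimal sketch (ours, not the paper's implementation; the class and command names are assumptions) shows a stateful oracle exposing the pointer-based memory commands used in the proof of Theorem 2.2 below:

# Minimal sketch (ours) of a stateful tool oracle O with pointer-based memory.
class PointerOracle:
    def __init__(self, x: str):
        self.mem = list(x)   # M_0: the input is stored in the oracle's memory
        self.i = 0           # pointer/head position (the index i_t)

    def __call__(self, cmd: str) -> str:
        # O_{M_t}: maps a tool command to an observation, updating the memory.
        if cmd == "read":
            return self.mem[self.i] if self.i < len(self.mem) else "[EOS]"
        if cmd.startswith("write "):
            self.mem[self.i] = cmd.split(" ", 1)[1]
        elif cmd == "move_left":
            self.i = max(0, self.i - 1)
        elif cmd == "move_right":
            self.i += 1
            if self.i == len(self.mem):
                self.mem.append(" ")  # grow the tape on demand
        return ""  # non-read commands return an empty observation

oracle = PointerOracle("abc")
print(oracle("read"))        # 'a'
oracle("move_right"); oracle("write z")
print("".join(oracle.mem))   # 'azc'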
B Proofs
B.1 Proof of Theorem 2.1
Before we introduce the complete proof of the theorem, we prove a simple version of the theorem for the case where the output rule of the GSSM is deterministic. This proof is simpler than the more general stochastic setting, and captures the key principles behind the result.

Proof of Theorem 2.1 for deterministic GSSMs. Let S be the state space of h. By definition, there exists some n0 such that for all n ≥ n0 it holds that supp_α(f(Dn)) > |S|. Now, fix some n ≥ n0, and let A ⊆ Σ∗ be the set of all possible outputs of h. Note that the output is determined by the state of h before the output tokens are generated. Therefore, |A| ≤ |S| < supp_α(f(Dn)), and so we have

Pr_{x∼Dn}[f(x) ∈ A] = Pr_{y∼f(Dn)}[y ∈ A] = f(Dn)(A) < α.

Finally, we have

Pr_{x∼Dn}[f(x) = h(x)] = Pr[f(x) = h(x) | f(x) ∈ A] Pr[f(x) ∈ A] + Pr[f(x) = h(x) | f(x) ∉ A] Pr[f(x) ∉ A] ≤ f(Dn)(A) < α,

where the second term is zero, since h(x) always lies in A, so f(x) = h(x) is impossible when f(x) ∉ A. Hence errn(h) ≥ 1 − α.

We now show the proof of the theorem for the general (stochastic) setting:

Proof of Theorem 2.1. Let S be the state space of h, and assume that h operates either in the CoT-only or the single-turn setting. Denote by U(x) the state of the model before generating the first output token. The model h can generate thinking tokens and/or a single tool command before generating the first output token, and therefore U(x) is a random variable that depends on x and the randomness of h. (We assume the oracle, i.e., the environment, is deterministic; the result can easily be extended to capture a stochastic oracle.) Let R(s) be the distribution of outputs (i.e., values of the output stream) generated by the model h if it is at state s before generating the first output token. Treating h(out)(x) as a random variable over outputs, we note that it depends only on the state after parsing x, and therefore h(out)(x) = R(U(x)). Additionally, we denote by p(y|x) the conditional distribution over outputs induced by h(out). Again, since the future generation of the model depends only on the state, we have p(y|x) = p(y|U(x)).

Now, by definition, there exists some n0 such that for all n ≥ n0 it holds that supp_α(f(Dn)) > |S|. Fix some n ≥ n0 and some s ∈ S. Let ys be an output with maximal probability under the distribution Dn, conditioned on the event that U(x) = s:

ys = arg max_y Pr_{x∼Dn}[f(x) = y | U(x) = s].

Denote by A = {ys : s ∈ S} the set of maximal-probability outputs. Note that |A| ≤ |S| < supp_α(f(Dn)), and so we have

Pr_{x∼Dn}[f(x) ∈ A] = Pr_{y∼f(Dn)}[y ∈ A] = f(Dn)(A) < α.

Observe that:

Pr[f(x) = h(x) | U(x) = s]
  = Σ_{y∈Yn} E[1_{h(x)=y} · 1_{f(x)=y} | U(x) = s]
  = Σ_{y∈Yn} E[1_{R(U(x))=y} · 1_{f(x)=y} | U(x) = s]
  = Σ_{y∈Yn} E[1_{R(s)=y} · 1_{f(x)=y} | U(x) = s]
  = Σ_{y∈Yn} p(y|s) · Pr_{x∼Dn}[f(x) = y | U(x) = s]
  ≤ Σ_{y∈Yn} p(y|s) · Pr[f(x) = ys | U(x) = s]
  ≤ Pr[f(x) = ys | U(x) = s],

where in the fourth equality we use the fact that the two indicator variables are independent. Therefore, we have:

Pr_{x∼Dn}[f(x) = h(x)]
  = Σ_{s∈S} Pr[f(x) = h(x) | U(x) = s] Pr[U(x) = s]
  ≤ Σ_{s∈S} Pr[f(x) = ys | U(x) = s] Pr[U(x) = s]
  ≤ Σ_{s∈S} Pr[f(x) ∈ A | U(x) = s] Pr[U(x) = s]
  = Pr[f(x) ∈ A] < α,

and so errn(h) ≥ 1 − α.
B.2 Proof of Theorem 2.2
Proof of Theorem 2.2. First, let us define the oracle O. The memory of the oracle at iteration t holds a sequence of tokens mt ∈ Σ∗, together with an index it ∈ N. At the first iteration, we set m0 = x and i0 = 0. The oracle O accepts the following commands:
• read: outputs the it-th token of mt. If it > |mt|, output [EOS].
• write σ: updates the it-th token of mt to be σ.
• move_left, move_right: subtracts/adds 1 from/to it.

Next, we describe the training distributions Pn. Since f is tractable, there exists some Turing machine T that computes f. By definition, the machine halts for every input, and we can assume w.l.o.g. that it halts when the head is at position 0. Let Q be the (finite) set of states of T, and let q0 be the initial state. We assume the dictionary Σ contains the following symbols: {0, 1, [STATE], [\STATE]} ⊆ Σ. For each state q ∈ Q, we define the encoding of the state enc(q) = [STATE] zq [\STATE], where zq ∈ {0, 1}^{log(|Q|)} is a binary encoding of the state. Then, for some input x ∼ Dn, we construct a CoT of x, denoted F(x), that captures the "trace" of the machine T:
• The sequence F(x) begins with: [THINK]enc(q0)[\THINK][TOOL]read[\TOOL].
• In each step of the Turing machine processing x, we add to F(x) the sequence
[THINK]enc(q)[\THINK][TOOL]read[\TOOL][OBS]σ[\OBS][TOOL]write σ′[\TOOL]
where q is the current state, and σ′ is the next symbol to write when reading σ in state q. Additionally, we add [TOOL]move_left[\TOOL] if the machine moves the head to the left, and otherwise [TOOL]move_right[\TOOL].
• When the machine reaches a halting state, for every i = 1 . . . |f(x)| we add:
[TOOL]move_right[\TOOL][TOOL]read[\TOOL][OBS]f(x)i[\OBS]f(x)i
Note that since the machine computes f(x), the output will be written on its tape when it reaches a halting state. Therefore, it is easy to verify that the memory of the oracle O at step t will hold the state of the tape and the correct position of the head, and that all the tool observations will be correct. Finally, x ∼ Dn and F(x) together define the distribution Pn for all n, and this is indeed a training distribution for the task (since the non-tool tokens after the [ANS] token correspond to the correct output f(x)).
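To illustrate this construction, the following sketch (ours; it assumes a minimal Turing-machine format and reuses the PointerOracle sketched after Appendix A, with raw states in place of the enc(q) encoding) emits the trace F(x) from a transition table:

# Sketch (ours): emit the trace F(x) for a Turing machine given as a table
# delta[(q, s)] = (q_next, s_write, move), with move in {"L", "R"}.
def emit_trace(delta, q0, halt, x):
    oracle, trace, q = PointerOracle(x), [], q0
    while q not in halt:
        trace += [f"[THINK]{q}[\\THINK]", "[TOOL]read[\\TOOL]"]
        s = oracle("read")                # blank cells read as " "
        q, s_write, move = delta[(q, s)]
        trace += [f"[OBS]{s}[\\OBS]", f"[TOOL]write {s_write}[\\TOOL]"]
        cmd = "move_left" if move == "L" else "move_right"
        oracle(f"write {s_write}"); oracle(cmd)
        trace.append(f"[TOOL]{cmd}[\\TOOL]")
    # Halting: copy f(x) off the tape, one output token per read. This
    # assumes the head halts at position 0 with f(x) to its right,
    # terminated by a blank cell.
    while True:
        oracle("move_right")
        s = oracle("read")
        trace += ["[TOOL]move_right[\\TOOL]", "[TOOL]read[\\TOOL]"]
        if s in (" ", "[EOS]"):
            break
        trace += [f"[OBS]{s}[\\OBS]", s]  # observation, then an output token
    return trace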
Next, we show that a simple tool-SSM algorithm can achieve length generalization on this task. Let {(x1, F(x1)), . . . , (xm, F(xm))} be the set of examples observed by the algorithm. Let Â be the set of all pairs of state encodings and symbols that appear together in some F(xi):

Â := {(q, σ) : ∃i, [THINK]enc(q)[\THINK][TOOL]read[\TOOL][OBS]σ[\OBS] ∈ F(xi)}.

Note that for every (q, σ) ∈ Â there is a single symbol σ′, a single command d ∈ {move_left, move_right} and a single state q′ that follow (q, σ) in the trace (corresponding to the operation of the Turing machine). Let R be the function mapping (q, σ) to (q′, σ′, d). Note that both Â and R can be encoded with fixed (finite) memory. Therefore, we define a GSSM h_{Â,R} that generates tokens as follows:
• Immediately after the input, generate: [THINK]enc(q0)[\THINK][TOOL]read[\TOOL].
• Following each response to a read command, generate
[THINK]enc(q′)[\THINK][TOOL]write σ′[\TOOL][TOOL]d[\TOOL]
where (q′, σ′, d) = R(q, σ), for the σ returned by the tool oracle.
• When a halting state is reached, generate the sequence [TOOL]move_right[\TOOL][TOOL]read[\TOOL] and, following the observation [OBS]σ[\OBS], output σ (if σ = [EOS], halt the generation).
• If at some point we observe a pair (q, σ) ∉ Â, output [EOS].

Denote by A(x) ⊆ Q × Σ the set of state-symbol pairs observed by T when processing x. It is easy to verify that for every x s.t. A(x) ⊆ Â, the GSSM h_{Â,R} will exactly recover F(x). Therefore, the following lemma suffices for proving the theorem:

Lemma B.1. Fix some ϵ, δ ∈ (0, 1). There exist some n0 and m s.t. with probability at least 1 − δ over sampling from P1, . . . , Pn0 it holds that:

∀n ≥ n0: Pr_{x∼Dn}[A(x) ⊆ Â] > 1 − ϵ.

Proof. For every pair of a symbol σ ∈ Σ and a state q ∈ Q, denote by pn(q, σ) := Pr_{x∼Dn}[(q, σ) ∈ A(x)] the probability over sampling x ∼ Dn that the machine T reads the symbol σ while in state q, when processing x. Let M := |Q| · |Σ|. Denote:

Aϵ = {(q, σ) ∈ Q × Σ s.t. sup_n pn(q, σ) ≥ 2ϵ/M}.

Now, for every (q, σ) ∈ Aϵ, let n0(q, σ) be the minimal n s.t. pn(q, σ) ≥ ϵ/M. Let n0 = max_{(q,σ)∈Aϵ} n0(q, σ). Let m = n0 M log(M/δ)/ϵ, and sample m′ = m/n0 = M log(M/δ)/ϵ examples from each of D1, . . . , Dn0. Fix some (q, σ) ∈ Aϵ.

Claim: with probability at least 1 − δ/M we have (q, σ) ∈ Â.
Proof: Note that n0(q, σ) ≤ n0, and therefore we sample m′ examples from D_{n0(q,σ)}. Let p := Pr_{x∼D_{n0(q,σ)}}[(q, σ) ∈ A(x)]; by definition p ≥ ϵ/M. Therefore, for the m′ samples we draw, the probability that we do not encounter (q, σ) in any of the traces is at most (1 − p)^{m′} ≤ (1 − ϵ/M)^{m′} ≤ exp(−m′ϵ/M) ≤ δ/M.

From the above claim, using the union bound, we get that with probability at least 1 − δ we have Aϵ ⊆ Â. Assume this holds, and fix some n ≥ n0. For every (q, σ) ∈ Q × Σ \ Aϵ it holds that Pr_{x∼Dn}[(q, σ) ∈ A(x)] ≤ ϵ/M. From the union bound, the probability over x ∼ Dn that there exists some (q, σ) ∉ Aϵ ⊆ Â s.t. (q, σ) ∈ A(x) is at most |Q × Σ \ Aϵ| · ϵ/M ≤ ϵ. Therefore, the required follows.

From the above lemma, the proof of Theorem 2.2 follows.

C Architecture and training details
We train the following architectures for the synthetic experiments:
• Mamba-130M (https://huggingface.co/state-spaces/mamba-130m-hf): a selective state-space (SSM) language model. We use a 24-layer configuration with intermediate size 1536 and model size 768 to match the Transformer baselines while retaining linear-time sequence modeling.
• LSTM: a multi-layer recurrent baseline of roughly comparable capacity (4 layers, hidden size 1536), used to probe how classical RNNs fare on our trajectory-style tasks.
• GRU: a gated-recurrent baseline (4 layers, hidden size 1536) offering a stronger RNN comparator with fewer parameters per unit than the LSTM.
• Pythia (GPT-NeoX style) (https://huggingface.co/EleutherAI/pythia-160m): a decoder-only Transformer from the Pythia scaling suite. We adopt a 24-layer, 8-head variant with intermediate size 1536, model size 768 and RoPE, roughly matching Mamba's scale.
• Mistral-style Transformer (https://huggingface.co/mistralai/Mistral-7B-v0.1): a modern decoder-only Transformer with sliding-window (512) sparse attention, utilizing RoPE. We use a scaled-down 8-layer configuration with intermediate size 1536 and model size 768.

For the synthetic experiments, we perform a hyper-parameter search over learning rate, batch size and weight decay. We choose learning_rate ∈ {0.0001, 0.0003, 0.0005, 0.001, 0.003, 0.005}, batch_size ∈ {128, 256, 512, 1024}, weight_decay ∈ {0, 0.01}, and fix the number of training steps to 2,000. We run each experiment with 2 seeds and report the accuracy of the best model. For the Tower of Hanoi experiments, due to their sensitivity, we exceptionally used 10 seeds for the Mamba and Pythia models and 3 seeds for the other architectures. For the code-fixing experiment, we finetune a pretrained Mamba-1.4b and Pythia-1.4b, both trained on The Pile (Gao et al., 2020), with learning rate 0.0001, weight decay 0.01, batch size 512 and 200 training steps. For all experiments, we use a single node with 8 H100 GPUs.

D Synthetic Experiments Details
D.1 Memory Tool Definitions
As discussed in Section 3, we use either a pointer-based or a search-based memory tool to augment the memory of the model. We now describe how the model interacts with the memory tool, and how we differentiate between thoughts, outputs, commands and observations. We generally try to reduce the number of tokens by using dedicated command tokens, and differentiate between output, thought and observation tokens based on context (instead of using open/closing tags).
Pointer-based Memory. The commands for this memory are given as special tokens that the model can output, e.g. [pointer1.read()] or [pointer2.move_left()]. A read command is immediately followed by a single observation token: the token read by the pointer at its current position. All other tokens (tokens that are not command tokens or observation tokens, which always immediately follow a read command) are either thoughts or outputs. We use a single token [ANS] that indicates the final answer of the model, where all tokens before the [ANS] token are considered thoughts and all tokens after the [ANS] token are considered outputs. Both thoughts and outputs are appended to the context memory, and the pointers can move beyond the input context and start reading thought or output tokens that were previously generated by the model. Commands and observations are discarded and are not appended to the external memory (though they do, of course, affect the internal memory and representation of the model). The model can freely interleave commands, thoughts and outputs, and can therefore interact with the memory while producing the answer.

Search-based Memory. This memory tool allows the model to search for a given pattern in the context. A search command is a sequence of tokens of the form [COMMAND]find[VALUE]x, where x is some sequence of tokens to search for. Following the search command, the model receives a set of observations of the form [OBSERVATION]z1, . . . , zk, where z1, . . . , zk are all the lines in the memory context that contain the string x (similar to the operation of a grep command). As before, all other tokens are either thoughts or outputs, and are appended to the memory and can be searched for in future iterations. In this case we take the output to be the last line generated by the model.
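A minimal sketch (ours; the class and method names are illustrative, not the experimental implementation) of these two memory tools over a growing token list:

# Sketch (ours) of the pointer-based and search-based memory tools.
class ContextMemory:
    def __init__(self, input_tokens):
        self.tokens = list(input_tokens)   # input + generated thoughts/outputs
        self.pointers = [0, 0]             # pointer-based access

    def append(self, token):               # thoughts/outputs extend the memory
        self.tokens.append(token)

    def read(self, p):                     # [pointerP.read()]
        i = self.pointers[p]
        return self.tokens[i] if 0 <= i < len(self.tokens) else "[EOS]"

    def move(self, p, delta):              # [pointerP.move_left()/move_right()]
        self.pointers[p] += delta

    def find(self, pattern):               # [COMMAND]find[VALUE]pattern
        lines = "".join(self.tokens).split("\n")
        return [line for line in lines if pattern in line]  # grep-like

mem = ContextMemory("12+34=")
mem.move(1, 5)
print(mem.read(1))   # '=' -- the second pointer parked at the '=' token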
D.2 Logical Reasoning Task
As described in Section 3, we generate a random logical computation graph with k = 3 input nodes, where each intermediate node is a boolean expression over one or two variables or their negations. The graph is encoded as python code given to the model as input. An illustration of the graph and the code is shown in Figure 4.

Figure 4 Example of a logical reasoning graph and its encoding: v630 = True; v872 = False; v622 = True; v191 = not v872; v240 = v191; v539 = not v191; v526 = not v872; v792 = v526 and not v630; v054 = not v792 or not v191; v903 = v054 and v622; v903 = ? (Expected answer: True.)

D.3 Tool-Use Algorithms
We describe the synthetically generated tool-use trajectories for solving the different tasks presented in Section 3.

Multi-Digit Addition. We follow the standard long-addition algorithm, summing one digit at a time from right to left while keeping the "carry" digit. The model uses pointer-based memory with two pointers, and performs the following steps (a sketch of the resulting trace generator appears after this list):
1. Move each pointer to the least significant digit of its summand, where the first pointer points to a digit of the first summand and the second pointer to a digit of the second summand. To do this, we move the first pointer until we read the token +, and move the second pointer until we read the token =.
2. Read one digit from each summand, compute the sum of the digits and the carry from the previous iteration (if it exists), output the new sum and carry as thoughts, and move both pointers to the left. Stop each pointer if it reaches a non-digit token. If both pointers have reached non-digit tokens, output [ANS] and move to the next step.
3. At this step we have the sum written in reverse in memory, along with the carry digits from each iteration. To write the final output (undoing the reversal), we move the second pointer to the right until it reaches the [ANS] token, then move it to the left, outputting the sum digit at each iteration, until the pointer reaches the = token.
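For concreteness, a reference rendering (ours; it reuses the ContextMemory sketch above and manipulates the pointers directly rather than through command tokens) of the reversed-sum trace produced by steps 1–2:

# Sketch (ours): emit the long-addition trace over "a+b=" with two pointers.
def addition_trace(a: str, b: str):
    mem = ContextMemory(f"{a}+{b}=")
    i, j = len(a) - 1, len(a) + len(b)  # least significant digits of a and b
    mem.pointers = [i, j]
    trace, carry = [], 0
    while i >= 0 or j > len(a):
        da = int(mem.read(0)) if i >= 0 else 0
        db = int(mem.read(1)) if j > len(a) else 0
        s = da + db + carry
        carry = s // 10
        trace += [str(s % 10), f"c{carry}"]   # thought tokens: digit + carry
        i, j = i - 1, j - 1
        mem.pointers = [i, j]
    if carry:
        trace.append(str(carry))
    trace.append("[ANS]")
    return trace  # reversed sum digits (with carry thoughts), then [ANS]

print(addition_trace("17", "25"))  # ['2', 'c1', '4', 'c0', '[ANS]']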
Multi-Digit Multiplication. We follow the long-multiplication algorithm for multiplying an n-digit number by a k-digit number (for fixed k). We use pointer-based memory with max(k, 2) pointers. The algorithm executes the following steps:
1. Move the first pointer to the least significant digit of the first operand, and the second pointer to the least significant digit of the second operand.
2. Move the first pointer to the left, each time reading a digit and multiplying it by the digit under the second pointer. Add the result to the previous carry, and write it together with the new carry. If we reach the most significant digit of the first operand, move the first pointer back to the least significant digit, move the second pointer one position to the left, and output a + sign and zeros as required (depending on which digit of the second operand we are reading). If the second pointer has reached the × sign, move to the next step.
3. At this step we have a summation problem with k summands, where the summands are written in reverse and also contain carry digits that should be ignored. We move each pointer to the least significant digit of its respective summand, read all digits, compute the sum and the carry, move each pointer to the right, and skip carry digits. We continue until we reach the most significant digit of all summands, and then output an [ANS] token.
4. Finally, we have the answer written in reverse with carry digits. We move the first pointer to the [ANS] token, then move it one token to the left at a time, outputting the tokens read by the pointer and skipping carry digits.

Tower of Hanoi. The Tower of Hanoi puzzle can be solved by a simple recursive algorithm. Let n be the number of disks in the puzzle. The recursive algorithm involves three steps: (1) recursively moving the top n − 1 disks from rod A to rod B; (2) moving the largest disk from rod A to rod C; and (3) recursively moving the n − 1 disks from rod B to rod C. Therefore, the puzzle can be solved in 2^n − 1 moves. This algorithm can also be stated iteratively:
• At the first step, the smallest disk is moved from rod A to rod B or C, depending on whether n is even or odd, respectively.
• At the second and other even steps, the only legal move not involving the smallest disk is performed.
• At the third and other odd steps, the smallest disk is moved to the rod it was not on two turns ago.

Our model uses the iterative algorithm described above to solve the puzzle. The model uses pointer-based memory with one pointer. At each step of the algorithm, the model outputs the next move and the subsequent state of the rods. Specifically, the model takes the following steps:
1. The pointer traverses rod A from its base, reading disks one by one and outputting B and C alternately. This step computes the parity of n, which is crucial for the first move. After reaching the end of rod A, the pointer rewinds to the beginning of the current state representation. The model is now ready to predict the next move.
2. At this step, the model outputs the next move, e.g., (5)AC$ (meaning disk (5) is moved from rod A to rod C; $ is used after each move as a delimiter). Next, the model outputs the new state of the puzzle and proceeds to the step described below. If all disks are on rod C, no move is predicted and the output ends at this step.
3. The model uses the pointer to output the new state. The pointer starts by reading the previous state one disk at a time, outputting the new state. Note that the new state differs from the previous state only by the position of a single disk. After processing the previous state, the pointer is advanced past the outputted move (e.g., (5)AC$) to position itself at the beginning of the newly generated state. The model then goes back to the move-prediction step above.

We also trained our models using the recursive algorithm; however, their length generalization performance was weaker. Details of this experiment are presented in Appendix D.4.

Logical Reasoning. We use a search-based memory tool to solve the logical reasoning problem detailed in Appendix D.2, resolving variables' truth values recursively using depth-first search (DFS). Namely, starting with the output variable, we recursively search for the values of the variables in a given expression. If we find a variable with a boolean (True/False) value, we update the expression, replacing the variable's name by its value. If we find a child variable that is still not resolved, we search for the variables in the child's expression, while also logging the value of the parent variable (which we can use for "backtracking"). When we are done resolving a variable's value, we backtrack to its parent and try to resolve the parent's value. Once the output node's value is resolved, we finish the generation.
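A minimal non-tool reference for this DFS resolution (a sketch of ours, operating directly on the parsed graph rather than through the search tool):

# Sketch (ours): resolve the output variable of a logical graph by DFS.
def resolve(graph, var, env):
    # graph maps each variable to its defining python expression, e.g.
    # {"v191": "not v872", "v872": "False", ...}
    if var in env:
        return env[var]
    expr = graph[var]
    for child in sorted(set(expr.split()) & graph.keys()):
        resolve(graph, child, env)          # recurse into unresolved children
    env[var] = eval(expr, {}, dict(env))    # evaluate with resolved values
    return env[var]

graph = {"v872": "False", "v191": "not v872", "v622": "True",
         "v903": "v191 and v622"}
print(resolve(graph, "v903", {}))  # True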
D.4 Tower of Hanoi Experiment Details
In contrast to the other tasks presented in this paper, the solution length for the Tower of Hanoi puzzle scales exponentially with the number of disks. In particular, solving the puzzle for 9 and 12 disks requires over 42,000 and 385,000 tokens, respectively. A solution is considered correct only if all of its tokens are correctly generated. As a result, even a model with 99% token accuracy may not show any length generalization capability. In agreement with this intuition, we found that Hanoi length generalization performance is very sensitive to the random seed. Hence, we used 10 seeds for the Mamba and Pythia models and 3 seeds for the rest of the models. In our experiments with Mamba models, we noticed that token accuracy is always high (e.g., ≥ 99.75% for 12 disks); however, the actual accuracy varies with the seed. The performance of our 10 seeds (all trained on puzzles with up to 8 disks) is shown in Figure 5. We note that Pythia did not show any length generalization; in particular, even its token accuracy did not exceed 93%.

Figure 5 Performance of different seeds when training a Mamba model on Tower of Hanoi puzzles with up to 8 disks (accuracy vs. number of disks, 7–12).

As discussed in Appendix D.3, the Tower of Hanoi puzzle can also be solved recursively. We also tried the recursive variant of the algorithm; however, its length generalization performance was weaker than that of the iterative algorithm. Here, we include the implementation and results of the recursive variant.

Recursive Implementation. For a puzzle with n disks, the model outputs the lists of moves for puzzles of size 1, 2, . . . , n sequentially, and uses the move list generated for the puzzle of size i − 1 to output the list of moves for the puzzle of size i via the recursive pattern. The model uses pointer-based memory with two pointers. While outputting the list of moves for the puzzle of size i, the first pointer points to the ith disk (the largest disk moved in the moves of the puzzle of size i). The second pointer is used for implementing the recursive pattern, iterating over the moves of the puzzle of size i − 1 while generating the moves for size i. More precisely, the input gives the list of disks (e.g., (7)(5)(2)) and the model executes the following steps:
1. Both pointers are moved to the smallest (top) disk, and the model outputs the first move, i.e., moving the top disk from rod A to rod C. This solves the problem for a single disk. The first pointer moves one step back (now pointing to the second smallest disk) and the second pointer advances, pointing to the beginning of the first move. The model is now ready to output the moves for solving the puzzle with two disks.
2. At this step, the model copies the last list of moves, swapping rod labels B and C. To achieve the latter, the second pointer traverses the last list of moves and the model reads and outputs one token at a time (performing the swap if needed). This step corresponds to the first step of the recursive algorithm. At the end of the copying, the second pointer is rewound to point to the beginning of the list of moves again.
3. Next, the middle move, i.e., moving the largest disk (the ith disk while outputting the moves of size i), is performed. This disk is identified by the value of the first pointer, and the move is always from rod A to C. This step corresponds to the second step of the recursive algorithm.
4. Similar to step 2, the model copies the list of moves again, swapping B and A. The second pointer is used again for iterating over the list of moves and copying. This step corresponds to the third step of the recursive algorithm. After the copying is finished, the second pointer advances and points to the beginning of the newly constructed move list. The first pointer goes one step back so that it points to the next larger disk. This completes the generation for size i, and the process iterates by returning to step 2 for size i + 1. The generation terminates when there is no disk remaining for the first pointer, indicating that the lists of moves have been generated for all puzzle sizes 1, . . . , n. (We use a delimiter, e.g., #, between the lists of moves for different numbers of disks so that they remain separable.)

The performance of the recursive solution for the Tower of Hanoi puzzle is reported in Table 2.

Table 2 Experimental results for the Tower of Hanoi puzzle solved recursively by different models.

Model   | Hanoi (recursive)
Mamba   | 7→8 (100%)
LSTM    | 7→9 (83%)
GRU     | 7→8 (100%)
Pythia  | 7→7 (100%)
Mistral | 7→8 (87%)
D.5 SSMs and Transformer Baselines
In Figure 6 we report the accuracies of our baseline models on the Multi-Digit Multiplication, logical reasoning and Tower of Hanoi tasks. We train each model on the same trajectories, using CoT and tool use. We perform hyperparameter optimization for each model as described in Appendix C. Our results generally point to a length generalization advantage for state space models over the baseline Transformer models.

Figure 6 We train various Transformer (Pythia, Mistral) and SSM (Mamba, LSTM, GRU) models on the Multi-Digit Multiplication, Logical Graph and Tower of Hanoi tasks, with CoT + memory tools. Multi-Digit Multiplication: we train models on multiplying a number of up to 10 digits by a 1-digit or 2-digit number, using the pointer-based memory tool. Logical Graph: we train models to solve the logical graph reasoning problem using the search-based memory tool, training on graphs with up to 10 variables. Tower of Hanoi: we train models to solve the Tower of Hanoi (recursive implementation) reasoning problem using the search-based memory tool, training on problems with up to 7 disks. The first point in each plot is the maximal problem size seen during training (i.e., all other points are out-of-distribution extrapolation). (Panels: accuracy vs. sequence length for Multiplication by 1-Digit, Multiplication by 2-Digit and Logical Graph, and accuracy vs. number of disks for Tower of Hanoi.)

D.6 Ablating training steps and digit length for multiplication
In Figure 7 we investigate the impact of different training configurations on generalization for multi-digit multiplication, varying the training budget (250, 500, or 800 steps) and the maximum number of digits of the first operand seen during training (5, 10, or 20). For this experiment, the learning rate was set to 0.003 based on validation. The results indicate that increasing the maximum number of digits of the first operand shown during training consistently improved the stability of OOD generalization, with results improving with more training steps. In particular, training with up to 20 digits yields perfectly stable generalization up to the maximum digit size tested (1,000 digits). However, even training with up to 5 and 10 digits shows progressive improvements as the number of training steps increases.

Figure 7 Multiplication generalization performance for Mamba across different training configurations. Each subplot shows accuracy as a function of sequence length for a specific maximum number of training steps (250, 500, 800). Different colored lines represent different training sequence lengths (train_len ∈ {5, 10, 20}), with the error envelope indicating the median absolute discrepancy across 5 runs.

D.7 Additional Ablations
We run the following ablations on the multi-digit addition task:
1. No-CoT: we train the model to directly output the final answer, without any CoT or tool-use.
2. No-CoT, reversed answer: we train the model to directly output the final answer in reverse (the reversed format was shown to improve length generalization in certain settings, e.g., Zhou et al. (2024)).
3. No Tool-Use: the model is trained on trajectories similar to those in the main experiment, but now needs to predict the output of the memory tool instead of receiving it as observations. Namely, the trajectory is used as CoT data.
4. Single-Turn Tool-Use: we train the model with a "calculator", where the model needs to generate a single addition command following the input (i.e., given an input a + b it needs to generate add(a, b)).

We train the Mamba model in all settings with extensive hyper-parameter tuning on 5-digit addition. Experiments 1, 2 and 4 result in perfect accuracy on 5-digit addition, but little to no length generalization. Experiment 3 results in poor performance even in-distribution.

D.8 Task mixture
In Figure 8, each panel shows accuracy as a function of test length for various training budgets (250, 500, or 800 steps). The curves correspond to different mixing weights, where w = 0 denotes the baseline trained only on the main task and higher values indicate the normalized fraction of auxiliary samples. The error bars indicate variability across random seeds.

Figure 8 Multiplication task accuracy under co-training with varying training budgets (see Section 3.1). (Panels: accuracy vs. number of digits for 250, 500 and 800 maximum steps; curves for w ∈ {0.0, 0.33, 0.5, 0.6}.)

The accuracy plots for the main task (multiplication) in the task mixture experiment were presented in Section 3.1. For completeness, we show the auxiliary task accuracy in Figure 9.

Figure 9 Addition task accuracy under co-training with varying training budgets (250, 500, 800 steps). Curves show different mixing weights. See Section 3.1.

E Code Fixing Agent Setup
We use the same system prompt and input prompt as in mini-SWE-agent (Yang et al., 2024a), from https://github.com/SWE-agent/mini-swe-agent. We instruct the model to solve the bug in main.py, and explain how the bug should be solved. We modify the original prompt of mini-SWE-agent to instruct the model to interactively debug the code and generate a fix for up to 3 files at a time.

Please solve this issue: Fix the bug in main.py. Make sure to pass variable v10 to foo() and all other relevant functions. Pass v10 to ONLY the relevant functions, do not pass it if it is not needed. You can execute bash commands and edit files to implement the necessary changes.
## Recommended Workflow
This workflow should be done step-by-step so that you can iterate on your changes and any possible problems.
1. Create a script to reproduce the issue and run it
2. Spot 3 files that might be causing the issue
3. Read the content of these 3 files.
4. Edit the source code of these files to resolve the issue. Do not edit more than 3 files before running the script again, even if the code is not completely fixed.
5. Verify your fix works by running your script again, if not - analyze at most 3 more files that might cause the issue and repeat the debugging process
6. Submit your changes and finish your work by issuing the following command: 'echo COMPLETE_TASK_AND_SUBMIT_FINAL_OUTPUT'. Do not combine it with any other command. ⟨important⟩After this command, you cannot continue working on this task.⟨/important⟩
We plot the pass rate and the generated trajectory length of the SWE agent as a function of the number of functions in the code in Figure 10.

Figure 10 Pass rate and median sequence length for SWE-agent-LM-32B on the code fixing task.

F Tool use capabilities of pretrained SSMs
At the time of writing, we were unable to find any publicly available SSM models that were fine-tuned for function calling. The closest we could find is Mistral's Mamba-Codestral-7B-v0.1, which was fine-tuned on coding tasks. We evaluated this model on the Berkeley Function Calling Leaderboard (Patil et al.), and found an overall accuracy of 16.58%, comparable with the reported accuracies of 16.22% for Falcon3-3B-Instruct and 15.58% for Llama-3.1-8B-Instruct.

Apple and the Apple logo are trademarks of Apple Inc., registered in the U.S. and other countries and regions.
To Infinity and Beyond: Tool-Use Unlocks Length Generalization in State Space Models
Eran Malach, Omid Saremi, Sinead Williamson, Arwen Bradley, Aryo Lotfi, Emmanuel Abbe, Josh Susskind, Etai Littwin
Apple

State Space Models (SSMs) have become the leading alternative to Transformers for sequence modeling. Their primary advantage is efficiency in long-context and long-form generation, enabled by fixed-size memory and linear scaling of computational complexity. We begin this work by showing a simple theoretical result stating that SSMs cannot accurately solve any "truly long-form" generation problem (in a sense we formally define), undermining their main competitive advantage. However, we show that this limitation can be mitigated by allowing SSMs interactive access to external tools. In fact, we show that given the right choice of tool access and problem-dependent training data, SSMs can learn to solve any tractable problem and generalize to arbitrary problem length/complexity (i.e., achieve length generalization). Following our theoretical finding, we demonstrate that tool-augmented SSMs achieve remarkable length generalization on a variety of arithmetic, reasoning, and coding tasks. These findings highlight SSMs as a potential efficient alternative to Transformers in interactive tool-based and agentic settings.

Correspondence: Eran Malach
Date: October 17, 2025

1 Introduction
Transformers (Vaswani et al., 2017), the main architecture powering large language models, have a well-known limitation: due to the attention mechanism, their computational complexity scales quadratically with the sequence length, and their memory scales linearly with length. (While a naive implementation of attention requires quadratic memory complexity, efficient implementations such as Flash Attention (Dao et al., 2022) and KV caching enable close to linear complexity.) This quadratic dependency becomes a major limitation for tasks that require long-context and long-form generation. As test-time scaling paradigms that involve the generation of long Chain of Thought (CoT) (Wei et al., 2022) have become the leading solution for improving reasoning capabilities (Jaech et al., 2024; Guo et al., 2025), the ability to efficiently generate long sequences becomes even more crucial.

To solve this limitation, various works suggested replacing the attention mechanism with other modules where memory and per-token compute are fixed as a function of the sequence length (Choromanski et al., 2020). Examples of such architectures include variants of Linear Transformers (Katharopoulos et al., 2020) and State Space Models (Gu et al., 2021) such as Mamba (Gu & Dao, 2023; Dao & Gu, 2024), DeltaNet (Yang et al., 2024c) and GatedDeltaNet (Yang et al., 2024b). These architectures achieve performance similar to Transformers across a wide range of domains (Qu et al., 2024) at a lower inference cost. However, some works have also pointed out significant limitations of these architectures in certain tasks that involve memorization of long sequences and in-context learning (Jelassi et al., 2024; Park et al., 2024; Akyürek et al., 2024). Possibly due to these limitations, linear-time models are still not widely adopted as a replacement for Transformers.

The goal of this work is to understand the capabilities and limitations of SSMs, focusing on tasks that require long-form generation. We formally define long-form generation tasks to be problems where the effective number of outputs grows with the complexity of the problem. We focus on such tasks as these are the tasks where SSMs display a clear benefit over Transformers in terms of inference efficiency.
However, we show ...

... |x|, and we sample yt ∼ r(s_{|x|+t}). We terminate when an end-of-sequence token [EOS] ∈ Σ is observed. Note that any model that has fixed memory as a function of the sequence length satisfies the definition of a GSSM. This includes common choices of recurrent models, such as: LSTM (Hochreiter & Schmidhuber, 1997), Linear Transformers (Katharopoulos et al., 2020), H3 (Fu et al., 2022), RetNet (Sun et al., 2023), Mamba-1 and Mamba-2 (Gu & Dao, 2023; Dao & Gu, 2024) and other variants (Yang et al., 2024b). Additionally, Transformers where all the attention layers have local (sliding-window) attention with a fixed window size are also GSSMs. Other computational models that use Transformers to process fixed-length sequences and update fixed-size "memory" vectors (Hutchins et al., 2022) are also GSSMs. Transformers and hybrid-SSM models are not GSSMs, since their memory increases with the sequence length.

CoT, Single-Turn and Interactive Tool-Use. We analyze multiple settings where the model can invoke reasoning and tool-use. We follow the popular ReAct framework (Yao et al., 2023) and let the model generate either thoughts, which capture the internal reasoning of the model, or actions, which are followed by observations from the environment. The thoughts and actions can be interleaved during the runtime of the model. We specify two types of actions: command actions, which are sent to a tool-oracle O that returns an observation following the execution of the command, and output actions, which are simply tokens appended to an output stream and do not result in an observation. The output stream captures the final response of the model, which is then evaluated against the ground truth. (We focus on agents for solving input-output problems, where the task of the model is to generate some output given the input problem, e.g., question answering, coding, mathematical proofs, etc. This is a different setting from an agent that performs actions and collects rewards, as in many Reinforcement Learning problems.) Thoughts, commands and observations are placed between dedicated open/close tags (e.g., [THINK], [\THINK]). We define the tool-oracle and the interaction protocol more formally in Appendix A.

We analyze three settings for problem-solving agents. 1) CoT-only: The model is allowed to use only thoughts or outputs, but cannot issue commands or receive external observations. (This setting also includes the case where the model generates the output immediately, without using CoT.) 2) Single-Turn Tool-Use: The model is allowed to issue a single command, followed by an observation, and then generate the output. The model can use thoughts before and after the tool call, and during the output generation. 3) Interactive Tool-Use: The model is allowed to use as many commands as it needs, and freely interleave thoughts, commands and outputs.

2.3 Learning Algorithms and Length Generalization
Fix some task f, {Dn}∞n=1. We now define training data distributions for learning the task f. We note that for many downstream tasks, it is common to collect training data that contains CoT reasoning and/or tool-use traces for solving the problem. We therefore allow the training distributions to contain task-specific reasoning and tool-use trajectories. Given some trajectory z ∈ Σ∗, we denote by z(out) the value of the output stream after execution of the trajectory. Formally, a training distribution for the task f, {Dn}∞n=1 is a sequence of distributions {Pn}∞n=1 s.t. Pn is a distribution over Xn × Σ∗ satisfying: 1) Dn is the marginal distribution of Pn w.r.t. Xn, and 2) for (x, z) ∼ Pn, with probability 1 it holds that z(out) = f(x) (i.e., the output stream at the end of generation evaluates to the correct answer).
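For concreteness, a small sketch (ours) of how the output stream z(out) can be extracted from a trajectory string by discarding the tagged thought/command/observation spans:

# Sketch (ours): extract the output stream z_out from a trajectory string.
import re

def output_stream(z: str) -> str:
    # Drop [THINK]...[\THINK], [TOOL]...[\TOOL] and [OBS]...[\OBS] spans;
    # whatever remains are output tokens.
    pattern = r"\[(THINK|TOOL|OBS)\].*?\[\\\1\]"
    return re.sub(pattern, "", z, flags=re.DOTALL)

z = "[THINK]add digits[\\THINK][TOOL]read[\\TOOL][OBS]7[\\OBS]4 2"
print(output_stream(z))  # '4 2' -- the only non-tagged tokens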
A learning algorithm A is an algorithm that, for some given length n, draws a sample of size m from P1, . . . , Pn,[5] and returns some hypothesis h : Σ∗ → ∆(Σ∗) that, given an input problem, can generate a reasoning and tool-use trajectory. We denote the output of A in this case by A(P1, . . . , Pn). We say that A is a GSSM learning algorithm if it always returns a GSSM. We define the error of h w.r.t. f for some complexity n by errn(h) = Pr[h(out)(x) ≠ f(x)], where the probability is over x ∼ Dn and the randomness of h. We now define length generalization of an algorithm:

[5] We let the algorithm choose freely how to sample from these distributions.

Definition 2.3. We say that A achieves length generalization if for every ε, δ ∈ (0, 1) there exist some minimal complexity n0 and sample size m s.t. w.p. ≥ 1 - δ we have that hn0 = A(P1, . . . , Pn0) satisfies errn(hn0) ≤ ε for all n ≥ n0.

Namely, we require that the algorithm returns a hypothesis with low error on problems of arbitrarily large complexity n, as long as it sees complex enough input sequences in the training data (with complexity larger than n0). This requirement may seem relatively strong, as we could expect the error of the learned model to grow with the complexity of the problem. However, we will show theoretically (and, to some extent, empirically) that with carefully constructed training data, achieving such "infinite" length generalization is possible.

2.4 Main Results

In this subsection, we state the main theoretical results of the paper. We begin by showing a negative result, stating that GSSMs cannot solve long-form generation tasks if they operate in the CoT-only or single-turn tool-use setting. Following this, we show a positive result, proving that for any computable long-form generation task we can construct training data such that a simple learning algorithm achieves length generalization on the target task in the interactive tool-use setting.

GSSMs cannot Solve Long-Form Generation Tasks without Interaction. We begin by stating the negative result. The proof is relatively simple: since the model has a fixed memory, and outputs are a function of the state of the memory, the model cannot generate all outputs as complexity grows.

Theorem 2.1. Let f be a long-form generation task over {Dn}∞n=1 with coverage parameter α ∈ (0, 1). Then, for any CoT-only or Single-Turn GSSM h there exists some problem complexity n0 s.t. for all n ≥ n0 the model h has error errn(h) ≥ 1 - α.

The full proof is given in Appendix B. An immediate implication of this result is that GSSM learning algorithms cannot achieve length generalization on long-form generation tasks without interaction.

GSSMs with Interactive Tool-Use can Length Generalize on Long-Form Generation Tasks. For some function f : Σ∗ → Σ∗, we say that f is computationally tractable if there exists a Turing machine T s.t. for any x ∈ Σ∗, if T begins with x written on its tape, it halts with f(x) written on its tape.
The following result shows that a GSSM learning algorithm can achieve length generalization with interactive tool-use, given proper training data:

Theorem 2.2. There exists a memory-tool oracle O and a simple GSSM learning algorithm A[6] s.t. for any computationally tractable long-form generation task f, {Dn}∞n=1, there exists a sequence of training distributions {Pn}∞n=1 for which A achieves length generalization in the interactive setting.

[6] The algorithm that we analyze is "simple" in the sense that it learns a function that operates using simple string-matching with the training data, similar to e.g. n-gram models. While this is not a "standard" learning algorithm, we believe that similar results can be obtained for more natural algorithms (e.g., gradient descent on a simple RNN), at the cost of making the analysis much more involved.

To show the above result, we define a simple tool that allows read/write access to some external memory, using a pointer that can move left or right between the memory cells. Using this tool, we can simulate the operations of a Turing machine: we use the external memory as the tape of the Turing machine, use thoughts to track the state of the machine, and use commands to move the head and read/write symbols. Since the transition function of the Turing machine is defined for every pair of state and symbol, to prove that length generalization is achieved we show that, for large enough n0, most of these pairs are seen in the training data. We give the complete proof in Appendix B.

To conclude, the above results show that interactive tool-use is both necessary and sufficient for GSSMs to achieve length generalization on tractable long-form generation problems.

Table 1: Experimental results for synthetic tasks for different models. The notation n → m (p%) means a model trained on length n achieves accuracy p on length m (for the largest m s.t. p ≥ 5%).

Model   | n × 1         | n × 2         | Logical Graph | Hanoi[7]
Mamba   | 10→1K (100%)  | 10→1K (100%)  | 10→1K (98%)   | 8→12 (49%)
LSTM    | 10→500 (100%) | 10→100 (100%) | 10→1K (100%)  | 8→8 (100%)
GRU     | 10→500 (100%) | 10→100 (100%) | 10→1K (100%)  | 8→8 (100%)
Pythia  | 10→20 (79%)   | 10→14 (12%)   | 10→1K (5%)    | 8→8 (100%)
Mistral | 10→13 (25%)   | 10→20 (33%)   | 10→500 (9%)   | 8→8 (100%)

[7] Due to the sensitivity of the Hanoi experiments to the initialization seed, we tried 10 seeds for Mamba and Pythia and 3 seeds for the rest of the models. The best performing seed is reported. See Appendix D.4 for more details.

3 Experiments

In this section we evaluate the length generalization capabilities of GSSMs and Transformer-based language models on various tasks, including arithmetic, reasoning and coding. We experiment with different choices of tools that allow read/write memory access, using either pointer-based memory access, a search tool, or arbitrary bash commands for reading and changing files. We use both tasks where we synthetically generate the ground-truth trajectory and tool commands, as well as a coding task where we collect the trajectories from a SWE coding agent. In our experiments, we largely follow the framework for ReAct agents defined in the previous section, where the model can interleave thoughts, outputs ("final answer" tokens), and commands that are followed by observations from the environment. We use the Mamba SSM (Gu & Dao, 2023), LSTM (Hochreiter & Schmidhuber, 1997), GRU (Cho et al., 2014), the Pythia Transformer (Biderman et al., 2023) and a Transformer with sliding-window (local) attention based on the Mistral architecture (Jiang et al., 2023). In all experiments, we see that SSMs/RNNs achieve length generalization performance that is much better than that of Transformers. See Appendix C for experimental details.
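As a concrete picture of the construction behind Theorem 2.2, the sketch below is our own minimal Python illustration of the memory-tool oracle and of how a fixed-memory model can use it to simulate a Turing machine: the command names (read, write, move_left, move_right) follow Appendix B, while everything else (class names, the blank symbol "_") is assumed for illustration.

class MemoryTool:
    # External memory: a tape m with a head position i, initialized with
    # the input. The model itself never stores the tape, only the tool does.
    def __init__(self, x):
        self.m, self.i = list(x), 0

    def __call__(self, cmd):
        if cmd == "read":
            return self.m[self.i] if self.i < len(self.m) else "[EOS]"
        if cmd.startswith("write "):
            sigma = cmd.split(" ", 1)[1]
            while self.i >= len(self.m):
                self.m.append("_")       # extend the tape on demand
            self.m[self.i] = sigma
        elif cmd == "move_left":
            self.i = max(self.i - 1, 0)
        elif cmd == "move_right":
            self.i += 1
        return ""

def simulate_tm(delta, q0, halting, x):
    # To run the machine, a model only needs to remember the current state q
    # (fixed memory); delta maps (state, symbol) to (state', symbol', move).
    tool, q = MemoryTool(x), q0
    while q not in halting:
        sigma = tool("read")
        q, sigma_new, d = delta[(q, sigma)]
        tool(f"write {sigma_new}")
        tool(d)                          # "move_left" or "move_right"
    return "".join(tool.m).strip("_")    # the tape now holds f(x)

For example, a toy machine that flips every bit of its input and halts at the end:

delta = {("q0", "0"): ("q0", "1", "move_right"),
         ("q0", "1"): ("q0", "0", "move_right"),
         ("q0", "[EOS]"): ("halt", "_", "move_left")}
assert simulate_tm(delta, "q0", {"halt"}, "0110") == "1001"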
3.1 Arithmetic Tasks

In the following set of experiments, we augment the model with a pointer-based memory tool that gives the model access to past tokens in the input/output context. In this setting, the model can execute the following commands: 1) initialize a new pointer, 2) move a pointer left or right by a single token, and 3) read the token under a given pointer. By default, a new pointer is initialized to the first token position of the input context. The thoughts and outputs are appended to the context, and are therefore accessible by the pointers (if they reach beyond the length of the input), but commands and observations are discarded (i.e., they are not appended to the context memory and cannot be read by the pointers). We give a detailed description of how thoughts, commands and outputs are specified in Appendix D.1. The final answer is written in the output stream at the end of the generation. We train the model using the standard next-token prediction objective with teacher forcing, while masking from the loss the input question and the observations (the outputs of a read operation, which will be generated by the memory tool). For the training data, we construct synthetic trajectories that simulate the desired algorithm, and train the model to exactly execute the algorithm required for solving the problem using the memory-tool interaction.

Multi-Digit Addition. For this task, we train the model to perform multi-digit addition. We fix some maximal training length n, and for each training example we sample uniformly n1, n2 ∼ {1, . . . , n}, then sample two numbers x1, x2 where xi is a uniformly random ni-digit number. We construct a training example with the trajectory for solving x1 + x2, essentially mimicking the long addition algorithm (see Appendix D.3 for details). For evaluation, we choose n′ ≥ n and evaluate on addition of two n′-digit numbers. In evaluation, we measure the accuracy of exact recovery of the trajectory and the final answer (i.e., we measure the probability of generating a solution that exactly matches the desired algorithm). Figure 2 (right) shows the results of this experiment. We observe that Mamba and LSTM trained on 5-digit demonstrations learn to perfectly perform 1,000-digit addition (we did not measure the accuracy beyond this). A Transformer trained in the same setting fails to extrapolate. Additional ablations, such as training with no CoT, no tool-use and single-turn tool-use, result in little to no length generalization, and are discussed in Appendix D.7.
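As an illustration of how such a training example might be assembled, here is a simplified sketch of our own: the actual trajectories interleave pointer commands and observations as specified in Appendix D.3, which we elide here, and the token format ([THINK], [ANS], the "c" carry notation) is assumed for illustration.

import random
from itertools import zip_longest

def make_addition_example(max_len):
    n1, n2 = random.randint(1, max_len), random.randint(1, max_len)
    x1 = random.randint(10 ** (n1 - 1), 10 ** n1 - 1)   # uniform n1-digit number
    x2 = random.randint(10 ** (n2 - 1), 10 ** n2 - 1)
    thoughts, carry = [], 0
    # Long addition, right to left, logging each digit and carry as a thought.
    for d1, d2 in zip_longest(reversed(str(x1)), reversed(str(x2)), fillvalue="0"):
        carry, digit = divmod(int(d1) + int(d2) + carry, 10)
        thoughts.append(f"[THINK]{digit} c{carry}[/THINK]")
    if carry:
        thoughts.append(f"[THINK]{carry} c0[/THINK]")
    return f"{x1}+{x2}=", "".join(thoughts) + f"[ANS]{x1 + x2}"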
Figure 3: Illustration of the Logical Graph reasoning task (top) and the code-fixing task (bottom). We generate random graphs that define the logical function structure or the code dependencies, and synthetically generate problems according to these graph structures.

Multi-Digit Multiplication. For this task we use the same pointer-based memory tool described above for learning the multiplication algorithm. In this task, we increase the length of only the first operand, keeping the second operand fixed. Specifically, we fix some maximal training length n, choose n1 ∼ {1, . . . , n} to be the length of the first operand and choose n2 ∼ {1, 2} to be the length of the second operand (i.e. we multiply an n1-digit number by a 1-digit or 2-digit number). We sample x1, x2 where xi is a uniformly random ni-digit number, and construct the trajectory for solving x1 × x2 (see details in Appendix D.3). We then evaluate on n′ × 1 and n′ × 2 digit multiplication, for some n′ ≥ n, and report exact-recovery accuracy. We train different SSMs/RNNs and Transformers where the first operand has n ≤ 10 digits, and evaluate on multiplications of numbers with up to 1,000 digits (Table 1). Here too we see that Mamba models maintain high accuracy when evaluated on numbers that have orders of magnitude more digits than in the training data (see also Appendix D.6 for ablations on training steps and the maximum number of digits seen during training).

Task Mixture. We examine whether co-training a primary task with an auxiliary task that shares a related computational structure yields synergistic benefits (Awasthi & Gupta, 2023). Our experiments indicate that such co-training improves the length generalization of the primary task under limited training budgets. In our experiments, the primary task is multiplication (n-digit × 2-digit), co-trained with addition (n+n digits) as an auxiliary task. Both tasks share structural similarities when expressed as sequences of tool calls. The training distribution for multiplication contains samples of up to 20 digits. We compare the accuracy as a function of test length for various training budgets (250, 500 or 800 steps) and various choices of task mixtures (see Appendix D.8). We observe that under limited budgets (250 steps), introducing auxiliary addition samples yields minor improvements. At intermediate budgets (500 steps), the benefit becomes more pronounced, with certain weights extending generalization to much larger n. However, with sufficient training (800 steps), all settings converge to strong generalization, and the auxiliary data provides no additional gain.

3.2 Algorithmic/Reasoning Tasks

We next turn to evaluate the tool-use paradigm on tasks that test certain "reasoning" capabilities.

Tower of Hanoi. This task is based on the popular puzzle game, which was also recently used for testing the reasoning capabilities of frontier LLMs, showing that they struggle to solve this task as complexity increases (Shojaee et al., 2025). In our setup, we randomly sample (without replacement) n disks of sizes in {1, . . . , 100}. These disks are placed on the first rod (labeled A), ordered from the largest to the smallest, with rods B and C being empty. The input to the model is the list of disks, which captures the initial state of the game. The model then needs to output a sequence of valid moves that results in placing the pile on rod C. We use the same pointer-based memory tool as in the previous experiments, and train the model on trajectories with up to n disks, evaluating on larger n′ (see Appendix D.3). In this experiment we observe more limited length generalization (Table 1), but note that, unlike in other experiments, here the length of the output increases exponentially with n.

Logical Graph. In this task, we construct a directed acyclic computation graph with n nodes. The graph has k input nodes (for some fixed k), and each internal node computes a Boolean operation (AND/OR) on one or two input variables or their negations.
We construct the graph by iteratively adding new internal nodes and randomly choosing their Boolean operation and their connectivity to existing nodes in the graph. We take the last node that is added to be the output node. All nodes are randomly labeled, and the model receives the graph structure and an assignment for the input variables as python code (see Figure 3). In this task, instead of using the pointer-based memory tool as in previous tasks, we use a search tool: the model can issue a command find(x), and gets a list of all occurrences of the pattern x in the context. As before, all thoughts and outputs generated by the model are appended to the context and are therefore searchable in future iterations. We fix k = 3 and train the model on trajectories for solving this problem for graphs with up to n = 10 nodes. We then evaluate on graphs with n′ ≥ n nodes, and report the exact-match accuracy in Table 1. Again, we see that Mamba and recurrent models can solve this problem, extrapolating to graphs with n = 1,000 nodes.

3.3 Coding Task

For the previous tasks, we trained models "from scratch" on synthetic trajectories that invoke tool use for solving arithmetic and algorithmic problems. This allowed us to demonstrate the length generalization capability of SSMs equipped with tool-use in a clean and controlled setting, resulting in perfect recovery of the underlying algorithm in many cases. We now turn to study extrapolation of tool-use agents in a more realistic coding setting. Importantly, this setting allows us to go beyond programmatically generated trajectories and collect trajectories from an existing SWE coding agent. This demonstrates that our results and observations can also be applicable in settings where the underlying algorithm/method for solving the task is not known or well-specified.

Our task is fixing a "bug" in a given codebase. To construct the codebase we generate n python functions, each function saved in a separate python file. The functions form a dependency graph, with one root function called main (stored in main.py). Each function declares variables (named v0, v1, . . . , v9), gets some variables as inputs and passes some variables to other functions it imports. We generate this codebase by randomly generating a dependency graph, iteratively adding nodes (functions) to this graph and connecting each node to existing nodes, where each edge represents a parent function importing a child function. Function names are randomly selected from f0, . . . , f999, except for the last function added to the graph, which is called foo. We then randomly assign variables and print them and/or pass them from parent functions to child functions. The code always has the following "bug": there is a special variable v10 that is declared in main.py and is used in foo.py without being properly passed from main. In order to fix the code, we need to pass the variable v10 through all the dependency paths from main to foo (ideally without changing other functions, though we do not enforce this). See Figure 3 for an illustration of how we generate the code; a simplified sketch of the generation procedure is given below.
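The sketch below is our own simplified version of the codebase generator: real instances also pass variables between functions and may attach a node to several parents, both of which we omit here; names such as n_funcs and the exact code templates are assumptions.

import random

def generate_codebase(n_funcs):
    names = (["main"]
             + random.sample([f"f{i}" for i in range(1000)], n_funcs - 2)
             + ["foo"])
    children = {name: [] for name in names}
    for i in range(1, n_funcs):                  # grow the dependency DAG
        parent = names[random.randrange(i)]      # attach to an earlier node
        children[parent].append(names[i])
    files = {}
    for name in names:
        lines = [f"from {c} import {c}" for c in children[name]]
        lines += [f"def {name}():"]
        lines += [f"    v{random.randrange(10)} = {random.randrange(100)}"]
        if name == "main":
            lines += ["    v10 = 42"]            # v10 is declared only in main
        if name == "foo":
            lines += ["    print(v10)"]          # the planted bug: v10 is unbound
        lines += [f"    {c}()" for c in children[name]]
        files[f"{name}.py"] = "\n".join(lines)
    return files

Since every node is attached to an earlier node and main is the first node, every function (including foo) is reachable from main, so running main.py always triggers the NameError in foo.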
We start by running a coding agent and collecting its trajectories when attempting to solve this code-fixing task, while varying the number of functions n in the codebase (choosing n ∈ {4, . . . , 16}). We use three types of agents for generating trajectories (illustrated in Figure 1): 1) Single-Turn Agent: a hard-coded agent that prints all the files and immediately generates the correct code edits. 2) Interactive Agent: a hard-coded agent that iteratively runs the code, resolves the issue in up to 3 files, then runs the code again, and keeps going until the code runs without errors. 3) Distillation: an agent based on SWE-agent-LM-32B (Yang et al., 2025), a capable open-source coding model that we couple with mini-SWE-agent[8] (Yang et al., 2024a) as a simple agent environment which gives the model access to the code through bash commands. We instruct the model to fix the bug in the code, specifically telling it what the bug is and how it should fix it (pass the variable v10 from main to foo). See the full prompt and further details in Appendix E.

[8] https://github.com/SWE-agent/mini-swe-agent

We observe that while this task is relatively simple, the model's performance degrades as the complexity (number of functions) of the codebase increases (see statistics in Appendix E). We therefore filter the trajectories to include only trajectories that correctly fixed the code, and also filter for short trajectories (shorter than the average length for a given size n). After collecting around 100K trajectories from each coding agent, we finetune two comparable models on these trajectories: Pythia-1.4B (a Transformer-based model, Biderman et al. (2023)) and Mamba-1.4B (Gu & Dao, 2023), both pretrained on The Pile (Gao et al., 2020). We train both models with context length 8,192, on codebases of up to 16 functions (if a trajectory is longer than the context length, we train only on the first 8,192 tokens). We then evaluate both models on codebases of different sizes, letting the models generate beyond the context length.[9] We measure the probability of correctly fixing the code (using the same environment used for collecting the trajectories). As shown in Figure 1, we observe that for codebases with a small number of functions, both the Transformer and Mamba models perform well in all settings. Notably, the Transformer-based model outperforms the Mamba SSM for small codebases in the agent distillation setting, achieving over a 90% pass rate. However, for larger codebases, beyond the training distribution (both in terms of number of functions and trajectory length), we see that the Mamba model maintains much better accuracy as the complexity increases when trained to imitate interactive agents (agents 2 and 3), but fails on complex codebases when trained in the single-turn setting (agent 1). This finding aligns with our theoretical results, and also matches the previous synthetic experiments.

[9] We experimented with applying RoPE scaling when using the Transformer beyond the training context length, both in finetuning and evaluation, and observed mixed results. We report the accuracy for the best choice (with or without RoPE scaling) in each setting.

4 Conclusion and Discussion

We started this work by comparing two families of models for long-form generation: Transformers and SSMs. Transformers are inefficient for long-context and long-form generation, as their computational complexity scales quadratically with the sequence length. SSMs, on the other hand, offer linear scaling of compute but, as we showed, cannot accurately solve long-form generation tasks (without tools). This demonstrates a clear trade-off between efficiency and accuracy that seems to be inescapable. Indeed, several works have observed that SSMs are inferior to Transformers in various tasks that require memorization of long sequences (Jelassi et al., 2024; Waleffe et al., 2024). On the positive side, we show that in the agentic/tool-use setting, SSMs can leverage tools to overcome their memory bottleneck, thus offering efficiency, accuracy, and generalization to longer sequences.
In hindsight, SSMs seem to be a natural fit for tool-use settings: tools often generate large quantities of content, which SSMs can parse efficiently, and tool use involves multi-turn interactions that can quickly overflow the context of a standard Transformer. However, there seems to be little work on building SSM-based agents, and thus their evaluation is restricted to the "standalone" setting, where they are inherently limited. We do not believe this is due to any inability of SSMs to learn tool-use behavior. For example, while Mistral's Mamba-Codestral-7B-v0.1 model does not natively support function calling, it is able to achieve tool-use performance comparable to several function-calling-enabled Transformer-based models (Appendix F). We therefore believe this work should encourage the development of tool-based SSMs that operate in various agentic settings, such as coding, search or reasoning. This application could potentially unlock the full capabilities of these models, making them competitive with, or even superior to, Transformer-based agents.

Finally, we want to emphasize that our work is one of the first to analyze the performance of language modeling architectures in a system, rather than as standalone models. To some extent, our analysis shows that certain architectures can be "weaker" when operating standalone, but in fact perform better when incorporated as part of a system. Since LLMs are now rarely used as standalone tools, we believe that this aspect of language modeling deserves more attention and focus in the field.

References

Emmanuel Abbe and Colin Sandon. Polynomial-time universality and limitations of deep learning. Communications on Pure and Applied Mathematics, 76(11):3493-3549, 2023. URL https://onlinelibrary.wiley.com/doi/abs/10.1002/cpa.22121.

Emmanuel Abbe, Pritish Kamath, Eran Malach, Colin Sandon, and Nathan Srebro. On the power of differentiable learning versus PAC and SQ learning. In Advances in Neural Information Processing Systems, volume 34, 2021.

Emmanuel Abbe, Samy Bengio, Aryo Lotfi, Colin Sandon, and Omid Saremi. How far can transformers reason? The globality barrier and inductive scratchpad. Advances in Neural Information Processing Systems, 37:27850-27895, 2024.

Ekin Akyürek, Bailin Wang, Yoon Kim, and Jacob Andreas. In-context language learning: Architectures and algorithms. arXiv preprint, 2024.

Pranjal Awasthi and Anupam Gupta. Improving length-generalization in transformers via task hinting. arXiv preprint, 2023. URL https://arxiv.org/abs/2310.00726.

Assaf Ben-Kish, Itamar Zimerman, Shady Abu-Hussein, Nadav Cohen, Amir Globerson, Lior Wolf, and Raja Giryes. Decimamba: Exploring the length extrapolation potential of mamba. arXiv preprint, 2024.

Satwik Bhattamishra, Arkil Patel, Varun Kanade, and Phil Blunsom. Simplicity bias in transformers and their ability to learn sparse boolean functions. arXiv preprint, 2022.

Stella Biderman, Hailey Schoelkopf, Quentin Gregory Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, et al. Pythia: A suite for analyzing large language models across training and scaling. In International Conference on Machine Learning, pp. 2397-2430. PMLR, 2023.
Aaron Blakeman, Aarti Basant, Abhinav Khattar, Adithya Renduchintala, Akhiad Bercovich, Aleksander Ficek, Alexis Bjorlin, Ali Taghibakhshi, Amala Sanjay Deshmukh, Ameya Sunil Mahabaleshwarkar, et al. Nemotron-h: A family of accurate and efficient hybrid mamba-transformer models. arXiv preprint, 2025.

Hanseul Cho, Jaeyoung Cha, Pranjal Awasthi, Srinadh Bhojanapalli, Anupam Gupta, and Chulhee Yun. Position coupling: Improving length generalization of arithmetic transformers using task structure. Advances in Neural Information Processing Systems, 37:22233-22315, 2024.

Kyunghyun Cho, Bart Van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint, 2014.

Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, et al. Rethinking attention with performers. arXiv preprint, 2020.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint, 2021.

Tri Dao and Albert Gu. Transformers are ssms: Generalized models and efficient algorithms through structured state space duality. arXiv preprint, 2024.

Tri Dao, Dan Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. Flashattention: Fast and memory-efficient exact attention with io-awareness. Advances in Neural Information Processing Systems, 35:16344-16359, 2022.

Grégoire Delétang, Anian Ruoss, Jordi Grau-Moya, Tim Genewein, Li Kevin Wenliang, Elliot Catt, Chris Cundy, Marcus Hutter, Shane Legg, Joel Veness, et al. Neural networks and the chomsky hierarchy. arXiv preprint, 2022.

Ying Fan, Yilun Du, Kannan Ramchandran, and Kangwook Lee. Looped transformers for length generalization. arXiv preprint, 2024.

Daniel Y Fu, Tri Dao, Khaled K Saab, Armin W Thomas, Atri Rudra, and Christopher Ré. Hungry hungry hippos: Towards language modeling with state space models. arXiv preprint, 2022.

Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. The Pile: An 800GB dataset of diverse text for language modeling. arXiv preprint, 2020.

Noah Golowich, Samy Jelassi, David Brandfonbrener, Sham M Kakade, and Eran Malach. The role of sparsity for length generalization in transformers. arXiv preprint, 2025.

Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. arXiv preprint, 2014.

Albert Gu and Tri Dao. Mamba: Linear-time sequence modeling with selective state spaces. arXiv preprint, 2023.

Albert Gu, Karan Goel, and Christopher Ré. Efficiently modeling long sequences with structured state spaces. arXiv preprint, 2021.

Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint, 2025.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

Kaiying Hou, David Brandfonbrener, Sham Kakade, Samy Jelassi, and Eran Malach. Universal length generalization with turing programs. arXiv preprint, 2024.

Xinting Huang, Andy Yang, Satwik Bhattamishra, Yash Sarrof, Andreas Krebs, Hattie Zhou, Preetum Nakkiran, and Michael Hahn.
A formal framework for understanding length generalization in transformers. arXiv preprint, 2024.

DeLesley Hutchins, Imanol Schlag, Yuhuai Wu, Ethan Dyer, and Behnam Neyshabur. Block-recurrent transformers. Advances in Neural Information Processing Systems, 35:33248-33261, 2022.

Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, et al. Openai o1 system card. arXiv preprint, 2024.

Samy Jelassi, Stéphane d'Ascoli, Carles Domingo-Enrich, Yuhuai Wu, Yuanzhi Li, and François Charton. Length generalization in arithmetic transformers. arXiv preprint, 2023.

Samy Jelassi, David Brandfonbrener, Sham M Kakade, and Eran Malach. Repeat after me: Transformers are better than state space models at copying. In International Conference on Machine Learning, pp. 21502-21521. PMLR, 2024.

Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. Mistral 7b, 2023. URL https://arxiv.org/abs/2310.06825.

Armand Joulin and Tomas Mikolov. Inferring algorithmic patterns with stack-augmented recurrent nets. Advances in Neural Information Processing Systems, 28, 2015.

Łukasz Kaiser and Ilya Sutskever. Neural gpus learn algorithms. arXiv preprint, 2015.

Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, and François Fleuret. Transformers are rnns: Fast autoregressive transformers with linear attention. In International Conference on Machine Learning, pp. 5156-5165. PMLR, 2020.

Amirhossein Kazemnejad, Inkit Padhi, Karthikeyan Natesan Ramamurthy, Payel Das, and Siva Reddy. The impact of positional encoding on length generalization in transformers. Advances in Neural Information Processing Systems, 36:24892-24928, 2023.

Nayoung Lee, Kartik Sreenivasan, Jason D Lee, Kangwook Lee, and Dimitris Papailiopoulos. Teaching arithmetic to small transformers. arXiv preprint, 2023.

Yuxuan Li and James McClelland. Representations and computations in transformers that support generalization on structured tasks. Transactions on Machine Learning Research, 2023.

Junyu Luo, Weizhi Zhang, Ye Yuan, Yusheng Zhao, Junwei Yang, Yiyang Gu, Bohan Wu, Binqi Chen, Ziyue Qiao, Qingqing Long, et al. Large language model agent: A survey on methodology, applications and challenges. arXiv preprint, 2025.

Eran Malach. Auto-regressive next-token predictors are universal learners. arXiv preprint, 2023.

Soroor Malekmohamadi Faradonbe, Faramarz Safi-Esfahani, and Morteza Karimian-Kelishadrokhi. A review on neural turing machine (ntm). SN Computer Science, 1(6):333, 2020.

Sean McLeish, Arpit Bansal, Alex Stein, Neel Jain, John Kirchenbauer, Brian Bartoldson, Bhavya Kailkhura, Abhinav Bhatele, Jonas Geiping, Avi Schwarzschild, et al. Transformers can do arithmetic with the right embeddings. Advances in Neural Information Processing Systems, 37:108012-108041, 2024.

William Merrill and Ashish Sabharwal. The expressive power of transformers with chain of thought. arXiv preprint, 2023.

Rodrigo Nogueira, Zhiying Jiang, and Jimmy Lin. Investigating the limitations of transformers with simple arithmetic tasks. arXiv preprint, 2021.

Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, et al.
Show your work: Scratchpads for intermediate computation with language models. 2021.

Santiago Ontanon, Joshua Ainslie, Vaclav Cvicek, and Zachary Fisher. Making transformers solve compositional tasks. arXiv preprint, 2021.

Jongho Park, Jaeseung Park, Zheyang Xiong, Nayoung Lee, Jaewoong Cho, Samet Oymak, Kangwook Lee, and Dimitris Papailiopoulos. Can mamba learn how to learn? A comparative study on in-context learning tasks. arXiv preprint, 2024.

Shishir G Patil, Huanzhi Mao, Fanjia Yan, Charlie Cheng-Jie Ji, Vishnu Suresh, Ion Stoica, and Joseph E Gonzalez. The berkeley function calling leaderboard (bfcl): From tool use to agentic evaluation of large language models. In International Conference on Machine Learning.

Haohao Qu, Liangbo Ning, Rui An, Wenqi Fan, Tyler Derr, Hui Liu, Xin Xu, and Qing Li. A survey of mamba. arXiv preprint, 2024.

Ricardo Buitrago Ruiz and Albert Gu. Understanding and improving length generalization in recurrent models. arXiv preprint, 2025.

Anian Ruoss, Grégoire Delétang, Tim Genewein, Jordi Grau-Moya, Róbert Csordás, Mehdi Bennani, Shane Legg, and Joel Veness. Randomized positional encodings boost length generalization of transformers. arXiv preprint, 2023.

Parshin Shojaee, Iman Mirzadeh, Keivan Alizadeh, Maxwell Horton, Samy Bengio, and Mehrdad Farajtabar. The illusion of thinking: Understanding the strengths and limitations of reasoning models via the lens of problem complexity. arXiv preprint, 2025.

Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, and Furu Wei. Retentive network: A successor to transformer for large language models. arXiv preprint, 2023.

Shubham Toshniwal, Wei Du, Ivan Moshkov, Branislav Kisacanin, Alexan Ayrapetyan, and Igor Gitman. Openmathinstruct-2: Accelerating ai for math with massive open-source instruction data. arXiv preprint, 2024a.

Shubham Toshniwal, Ivan Moshkov, Sean Narenthiran, Daria Gitman, Fei Jia, and Igor Gitman. Openmathinstruct-1: A 1.8 million math instruction tuning dataset. Advances in Neural Information Processing Systems, 37:34737-34774, 2024b.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.

Roger Waleffe, Wonmin Byeon, Duncan Riach, Brandon Norick, Vijay Korthikanti, Tri Dao, Albert Gu, Ali Hatamizadeh, Sudhakar Singh, Deepak Narayanan, et al. An empirical study of mamba-based language models. arXiv preprint, 2024.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824-24837, 2022.

Noam Wies, Yoav Levine, and Amnon Shashua. Sub-task decomposition enables learning in sequence to sequence tasks. arXiv preprint, 2022.

John Yang, Carlos E Jimenez, Alexander Wettig, Kilian Lieret, Shunyu Yao, Karthik R Narasimhan, and Ofir Press. SWE-agent: Agent-computer interfaces enable automated software engineering. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024a. URL https://arxiv.org/abs/2405.15793.

John Yang, Kilian Leret, Carlos E Jimenez, Alexander Wettig, Kabir Khandpur, Yanzhe Zhang, Binyuan Hui, Ofir Press, Ludwig Schmidt, and Diyi Yang. Swe-smith: Scaling data for software engineering agents. arXiv preprint, 2025.

Songlin Yang, Jan Kautz, and Ali Hatamizadeh.
Gated delta networks: Improving mamba2 with delta rule. arXiv preprint, 2024b.

Songlin Yang, Bailin Wang, Yu Zhang, Yikang Shen, and Yoon Kim. Parallelizing linear transformers with the delta rule over sequence length. Advances in Neural Information Processing Systems, 37:115491-115522, 2024c.

Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. React: Synergizing reasoning and acting in language models. In International Conference on Learning Representations (ICLR), 2023.

Asaf Yehudai, Lilach Eden, Alan Li, Guy Uziel, Yilun Zhao, Roy Bar-Haim, Arman Cohan, and Michal Shmueli-Scheuer. Survey on evaluation of llm-based agents. arXiv preprint, 2025.

Hattie Zhou, Arwen Bradley, Etai Littwin, Noam Razin, Omid Saremi, Josh Susskind, Samy Bengio, and Preetum Nakkiran. What algorithms can transformers learn? A study in length generalization. arXiv preprint, 2023.

Yongchao Zhou, Uri Alon, Xinyun Chen, Xuezhi Wang, Rishabh Agarwal, and Denny Zhou. Transformers can achieve length generalization but not robustly. arXiv preprint, 2024.

A More Definitions

Here we give a more complete and formal definition of models with tool-use. We start by defining a tool-use oracle, which receives tool-use commands and returns an observation that corresponds to the execution of the command. This oracle is stateful, meaning that its responses can vary depending on the memory of the oracle, which can be updated based on the commands it receives. Let M be some set which corresponds to the set of memories of the oracle. We denote by Mt ∈ M the memory of the oracle after receiving t commands, and let M0 be the initial memory of the oracle. Importantly, we let the initial memory of the oracle depend on the input (e.g., the input can be stored in the memory of the oracle). For some memory Mt ∈ M, we define OMt : Σ∗ → Σ∗ to be the mapping from tool calls to observations, given memory Mt.

We augment the dictionary with additional tokens: [TOOL], [/TOOL], [OBS], [/OBS], [THINK], [/THINK] ∈ Σ. At any point in the generation, the model h can issue a call to a tool by generating a sequence of the form [TOOL], z, [/TOOL], for some z ∈ Σ∗ which encodes the tool command. The command z is passed to the tool oracle, and the resulting observation is then appended to the context of the model, with the format [OBS], OMt(z), [/OBS]. The model can also generate thoughts/reasoning, as a sequence [THINK], z, [/THINK]. All other tokens (tokens that are not tool commands, observations or thinking tokens) are considered output tokens, and are appended to the output stream. In the CoT-only setting, the model is only allowed to use thinking and output tokens. In the single-turn setting, the model can issue a single tool command, and can start generating the output after receiving the observation from the command (but can think before, after and during the output generation). In the interactive setting, a model can issue a tool call at any point in the generation, possibly interleaved with output tokens and tool commands. When evaluating the output, we ignore all tool commands, observations and thoughts, and only consider the output stream at the end of generation.

B Proofs

B.1 Proof of Theorem 2.1

Before we introduce the complete proof of the theorem, we prove a simple version of the theorem for the case where the output rule of the GSSM is deterministic. This proof is simpler than the more general stochastic setting, and captures the key principles behind this result.
Proof of Theorem 2.1 for deterministic GSSMs. Let S be the state space of h. By definition, there exists some n0 such that for all n ≥ n0 it holds that suppα(f(Dn)) > |S|. Now, fix some n ≥ n0, and let A ⊆ Σ∗ be the set of all possible outputs of h. Note that the output is determined by the state of h before the output tokens are generated; therefore, |A| ≤ |S|. Fix some s ∈ S, and let ys be an output with maximal probability under the distribution Dn, conditioned on the event that U(x) = s:

ys = arg max_y Pr_{x∼Dn}[f(x) = y | U(x) = s]

Denote by A the set of maximal-probability outputs, A = {ys : s ∈ S}. Note that |A| ≤ |S|.

B.2 Proof of Theorem 2.2

We define the memory-tool oracle O as follows. The memory of the oracle is a pair Mt = (mt, it), where mt ∈ Σ∗ is a string (the external tape) and it is the position of a pointer into mt. The oracle supports the following commands:

• read: returns the it-th token of mt; if it > |mt|, output [EOS].
• write σ: updates the it-th token of mt to be σ.
• move_left, move_right: subtracts/adds 1 from it, respectively.

Next, we describe the training distributions Pn. Since f is tractable, there exists some Turing machine T that computes f. By definition, the machine halts for every input, and we can assume w.l.o.g. that it halts when the head is at position 0. Let Q be the (finite) set of states of T, and let q0 be the initial state. We assume the dictionary Σ contains the following symbols: {0, 1, [STATE], [/STATE]} ⊆ Σ. For each state q ∈ Q, we define the encoding of the state enc(q) = [STATE]zq[/STATE], where zq ∈ {0, 1}^log(|Q|) is a binary encoding of the state. Then, for some input x ∼ Dn, we construct a CoT for x, denoted by F(x), that captures the "trace" of the machine T:

• The sequence F(x) begins with: [THINK]enc(q0)[/THINK][TOOL]read[/TOOL].
• For each step of the Turing machine processing x, we add to F(x) the sequence [THINK]enc(q)[/THINK][TOOL]read[/TOOL][OBS]σ[/OBS][TOOL]write σ′[/TOOL], where q is the current state, and σ′ is the next symbol to write when reading σ in state q. Additionally, we add [TOOL]move_left[/TOOL] if the machine moves the head to the left, and otherwise [TOOL]move_right[/TOOL].
• When the machine reaches a halting state, for every i = 1, . . . , |f(x)| we add: [TOOL]move_right[/TOOL][TOOL]read[/TOOL][OBS]f(x)i[/OBS]f(x)i.

Note that since the machine computes f(x), the output will be written on its tape when it reaches a halting state. Therefore, it is easy to verify that the memory of the oracle O at step t holds the state of the tape and the correct position of the head, and that all the tool observations are correct. Finally, x ∼ Dn and F(x) together define the distribution Pn for all n, and it is indeed a training distribution for the task (since the output-stream tokens at the end of the trace correspond to the correct output f(x)).

Next, we show that a simple tool-SSM algorithm can achieve length generalization on this task. Let {(x1, F(x1)), . . . , (xm, F(xm))} be the set of examples observed by the algorithm. Let Â be the set of all pairs of state encodings and symbols that appear together in some F(xi):

Â := {(q, σ) : ∃i s.t. [THINK]enc(q)[/THINK][TOOL]read[/TOOL][OBS]σ[/OBS] appears in F(xi)}

Note that for every (q, σ) ∈ Â there is a single symbol σ′, a single command d ∈ {move_left, move_right} and a single state q′ that follow (q, σ) in the trace (corresponding to the operation of the Turing machine). Let R be the function mapping (q, σ) to (q′, σ′, d). Note that both Â and R can be encoded with fixed (finite) memory. Therefore, we define a GSSM hÂ,R that generates tokens as follows:

• Immediately after the input, generate: [THINK]enc(q0)[/THINK][TOOL]read[/TOOL].
• Following each response to a read command, generate: [THINK]enc(q′)[/THINK][TOOL]write σ′[/TOOL][TOOL]d[/TOOL], where (q′, σ′, d) = R(q, σ), for the σ returned by the tool oracle.
• When a halting state is reached, generate the sequence [TOOL]move_right[/TOOL][TOOL]read[/TOOL], and following the observation [OBS]σ[/OBS], output σ (if σ = [EOS] we halt the generation).
• If at some point we observe a pair (q, σ) ∉ Â, output [EOS].

Denote by A(x) ⊆ Q × Σ the set of state-symbol pairs observed by T when processing x. It is easy to verify that for every x s.t. A(x) ⊆ Â, the GSSM hÂ,R will exactly recover F(x). Therefore, the following lemma suffices for proving the theorem:

Lemma B.1. Fix some ε, δ ∈ (0, 1). There exist some n0 and m s.t. w.p. at least 1 - δ over the sampling from P1, . . . , Pn0 it holds that for all n ≥ n0: Pr_{x∼Dn}[A(x) ⊆ Â] > 1 - ε.

Proof. For every symbol σ ∈ Σ and state q ∈ Q, denote by pn(σ, q) := Pr_{x∼Dn}[(q, σ) ∈ A(x)] the probability, over sampling x ∼ Dn, that the machine T reads the symbol σ while in state q when processing x. Let M := |Q| · |Σ|. Denote:

Aε = {(q, σ) ∈ Q × Σ s.t. sup_n pn(q, σ) ≥ 2ε/M}

Now, for every (q, σ) ∈ Aε, let n0(q, σ) be the minimal n s.t. pn(q, σ) ≥ ε/M. Let n0 = max_{(q,σ)∈Aε} n0(q, σ). Let m = n0 · M log(M/δ)/ε, and we will sample m′ = m/n0 = M log(M/δ)/ε examples from each of D1, . . . , Dn0.

Fix some (q, σ) ∈ Aε. Claim: w.p. at least 1 - δ/M we have (q, σ) ∈ Â. Proof: Note that n0(q, σ) ≤ n0, and therefore we sample m′ examples from Dn0(q,σ). Let p := Pr_{x∼Dn0(q,σ)}[(q, σ) ∈ A(x)]; by definition p ≥ ε/M. Therefore, for the m′ samples we draw, the probability that we do not encounter (q, σ) in any of the traces is at most (1 - p)^m′ ≤ (1 - ε/M)^m′ ≤ exp(-m′ε/M) ≤ δ/M.

From the above claim, using the union bound, we get that w.p. at least 1 - δ we have Aε ⊆ Â. Assume this holds, and fix some n ≥ n0. For every (q, σ) ∈ Q × Σ \ Aε it holds that Pr_{x∼Dn}[(q, σ) ∈ A(x)] ≤ ε/M. From the union bound, the probability over x ∼ Dn that there exists some (q, σ) ∉ Aε s.t. (q, σ) ∈ A(x) is at most |Q × Σ \ Aε| · ε/M ≤ ε, and since Aε ⊆ Â the required follows.

From the above lemma, the proof of Theorem 2.2 follows.

C Architecture and training details

We train the following architectures for the synthetic experiments:

• Mamba-130M (https://huggingface.co/state-spaces/mamba-130m-hf): a selective state-space (SSM) language model. We use a 24-layer configuration with 1536-d intermediate size and 768-d model size to match the Transformer baselines while retaining linear-time sequence modeling.
• LSTM: a multi-layer recurrent baseline sized to roughly comparable capacity (4 layers, hidden size 1536) to probe how classical RNNs fare on our trajectory-style tasks.
• GRU: a gated-recurrent baseline (4 layers, hidden size 1536) offering a stronger RNN comparator with fewer parameters per unit than the LSTM.
• Pythia (GPT-NeoX style) (https://huggingface.co/EleutherAI/pythia-160m): a decoder-only Transformer from the Pythia scaling suite. We adopt a 24-layer, 8-head variant with 1536-d intermediate size, 768-d model size and RoPE, roughly matching Mamba's scale.
• Mistral-style Transformer (https://huggingface.co/mistralai/Mistral-7B-v0.1): a modern decoder-only Transformer with sliding-window (512) sparse attention, utilizing RoPE. We use a scaled-down 8-layer configuration with 1536-d intermediate size and 768-d model size.

For the synthetic experiments, we perform a hyper-parameter search over the learning rate, batch size and weight decay. We choose learning_rate ∈ {0.0001, 0.0003, 0.0005, 0.001, 0.003, 0.005}, batch_size ∈ {128, 256, 512, 1024}, weight_decay ∈ {0, 0.01}, and fix the number of training steps to 2,000. We run each experiment with 2 seeds, and report the accuracy of the best model.
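For reference, a grid search of this shape can be written in a few lines. The sketch below is our own illustration; train_and_eval is a hypothetical stand-in for the actual training harness and is assumed to return a validation accuracy.

from itertools import product

GRID = {
    "learning_rate": [0.0001, 0.0003, 0.0005, 0.001, 0.003, 0.005],
    "batch_size": [128, 256, 512, 1024],
    "weight_decay": [0.0, 0.01],
}

def sweep(train_and_eval, seeds=(0, 1), max_steps=2_000):
    # Exhaustive grid search; per configuration, keep the best of the seeds,
    # mirroring the "best model" reporting described above.
    best = (-1.0, None)
    for values in product(*GRID.values()):
        config = dict(zip(GRID.keys(), values), max_steps=max_steps)
        acc = max(train_and_eval(seed=s, **config) for s in seeds)
        best = max(best, (acc, config), key=lambda t: t[0])
    return best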
For the Tower of Hanoi experiments, due to their sensitivity, we exceptionally used 10 seeds for the Mamba and Pythia models and 3 seeds for the other architectures. For the code-fixing experiment, we finetune a pretrained Mamba-1.4b and Pythia-1.4b, both trained on The Pile (Gao et al., 2020), with learning rate 0.0001, weight decay 0.01, batch size 512 and 200 training steps. For all experiments, we use a single node with 8 H100 GPUs.

D Synthetic Experiments Details

D.1 Memory Tool Definitions

As discussed in Section 3, we use either a pointer-based or a search-based memory tool to augment the memory of the model. We now describe how the model interacts with the memory tool, and how we differentiate between thoughts, outputs, commands and observations. We generally try to reduce the number of tokens by using dedicated command tokens, and differentiate between output, thought and observation tokens based on the context (instead of using open/closing tags).

Pointer-based Memory. The commands for this memory are given as special tokens that the model can output, e.g. [pointer1.read()] or [pointer2.move_left()]. A read command will be immediately followed by a single observation token, that is, the token read by the pointer at its current position. All other tokens (tokens that are not command tokens or observation tokens, which always immediately follow a read command) are either thoughts or outputs. We use a single token [ANS] that indicates the final answer of the model, where all tokens before the [ANS] token are considered thoughts and all tokens after the [ANS] token are considered outputs. Both thoughts and outputs are appended to the context memory, and the pointers can move beyond the input context and start reading thought or output tokens that were previously generated by the model. Commands and observations are discarded and are not appended to the external memory (but of course do affect the internal memory and representation of the model). The model can freely interleave commands, thoughts and outputs, and therefore the model can interact with the memory while producing the answer.

Search-based Memory. This memory tool allows the model to search for a given pattern in the context. A search command is a sequence of tokens of the form [COMMAND]find[VALUE]x, where x is some sequence of tokens to search for. Following the search command, the model will receive a set of observations of the form [OBSERVATION]z1, . . . , zk, where z1, . . . , zk are all the lines in the memory context that contain the string x (similar to the operation of a grep command). As before, all other tokens are either thoughts or outputs, and are appended to the memory and can be searched for in future iterations. In this case we take the output to be the last line generated by the model.

D.2 Logical Reasoning Task

As described in Section 3, we generate a random logical computation graph with k = 3 input nodes, where each intermediate node is a Boolean expression over one or two variables or their negations. The graph is encoded as python code given to the model as input. An illustration of the graph and its encoding is shown in Figure 4.

Figure 4: Example of a logical reasoning graph and its encoding:

v630 = True
v872 = False
v622 = True
v191 = not v872
v240 = v191
v539 = not v191
v526 = not v872
v792 = v526 and not v630
v054 = not v792 or not v191
v903 = v054 and v622
v903 = ?

Expected answer: True

D.3 Tool-Use Algorithms

We describe the synthetically generated tool-use trajectories for solving the different tasks presented in Section 3.

Multi-Digit Addition. We follow the standard long addition algorithm, summing digit by digit from right to left while keeping the "carry" digit.
The model uses pointer-based memory with two pointers, and performs the following steps:

1. Move each pointer to the least significant digit of its summand: the first pointer points to a digit of the first summand, and the second pointer to a digit of the second summand. To do this, we move the first pointer until we read the token +, and move the second pointer until we read the token =.
2. Read one digit from each summand, compute the sum of the digits and the carry from the previous iteration (if it exists), output the new sum and carry as thoughts, and move both pointers to the left. Stop each pointer if it reaches a non-digit token. If both pointers have reached non-digit tokens, output [ANS] and move to the next step.
3. At this step we have the sum written in reverse in memory, along with the carry digits from each iteration. To write the final output in the correct order, we move the second pointer to the right until it reaches the [ANS] token, then start moving to the left, outputting the sum digit at each step, until the pointer reaches the = token.

Multi-Digit Multiplication. We follow the long multiplication algorithm for multiplying an n-digit number by a k-digit number (for fixed k). We use pointer-based memory with max(k, 2) pointers. The algorithm executes the following steps:

1. Move the first pointer to the least significant digit of the first operand, and the second pointer to the least significant digit of the second operand.
2. Move the first pointer to the left, each time reading a digit and multiplying it with the digit of the second pointer. Add the result to the previous carry, and write it together with the new carry. If we reach the most significant digit of the first operand, move the first pointer back to the least significant digit, move the second pointer one position to the left, and output a + sign and zeros as required (depending on which digit of the second operand we read). If the second pointer has reached the × sign, move to the next step.
3. At this step we have a summation problem with k summands, where the summands are written in reverse and also contain carry digits that we should ignore. We move each pointer to the least significant digit of its respective summand, read all digits, compute the sum and the carry, and move each pointer to the right, skipping carry digits. We continue until we reach the most significant digit of all summands, and then output an [ANS] token.
4. Finally, we have the answer written in reverse with carry digits. We move the first pointer to the [ANS] token, then move it one token to the left and output the tokens read by the pointer, skipping carry digits.

Tower of Hanoi. The Tower of Hanoi puzzle can be solved by a simple recursive algorithm. Let n be the number of disks in the puzzle. The recursive algorithm involves three steps: (1) recursively moving the top n - 1 disks from rod A to rod B; (2) moving the largest disk from rod A to rod C; and (3) recursively moving the n - 1 disks from rod B to rod C. Therefore, the puzzle can be solved with 2^n - 1 moves; a minimal sketch of this recursion is shown below.
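The following short Python sketch (ours, not the model's trajectory format) generates the recursive move list; the move notation (n)XY follows the examples used in this appendix.

def hanoi_recursive(n, src="A", aux="B", dst="C"):
    # (1) move the top n-1 disks src -> aux, (2) move disk n src -> dst,
    # (3) move the n-1 disks aux -> dst.
    if n == 0:
        return []
    return (hanoi_recursive(n - 1, src, dst, aux)
            + [f"({n}){src}{dst}"]
            + hanoi_recursive(n - 1, aux, src, dst))

moves = hanoi_recursive(3)
assert moves[0] == "(1)AC" and len(moves) == 2 ** 3 - 1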
This algorithm can also be stated iteratively:

• At the first step, the smallest disk is moved from rod A to rod B or C, depending on whether n is even or odd, respectively.
• At the second and other even steps, the only legal move not involving the smallest disk is performed.
• At the third and other odd steps, the smallest disk is moved to the rod it was not on two turns ago.

Our model uses the iterative algorithm described above to solve the puzzle. The model uses pointer-based memory with one pointer. At each step of the algorithm, the model outputs the next move and the subsequent state of the rods. Specifically, the model takes the following steps:

1. The pointer traverses rod A from its base, reading disks one by one and outputting B and C alternately. This step computes the parity of n, which is crucial for the first move. After reaching the end of rod A, the pointer rewinds to the beginning of the current state representation. The model is now ready to predict the next move.
2. At this step, the model outputs the next move (e.g., (5)AC) and the subsequent state of the rods; the pointer then advances to position itself at the beginning of the newly generated state. The model then goes back to the move prediction step above.

We also trained our models using the recursive algorithm; however, their length generalization performance was weaker. Details of this experiment are presented in Appendix D.4.

Logical Reasoning. We use a search-based memory tool to solve the logical reasoning problem detailed in Appendix D.2. We resolve variables' truth values recursively using depth-first search (DFS). Namely, starting with the output variable, we recursively search for the values of the variables in a given expression. If we find a variable with a Boolean (True/False) value, we update the expression, replacing the variable's name by its value. If we find a child variable that is still not resolved, we search for the variables in the child's expression, while also logging the value of the parent variable (which we can use for "backtracking"). When we are done resolving a variable's value, we backtrack to its parent, trying to resolve the parent's value. When we have resolved the output node's value, we finish the generation; a simplified sketch of this resolution strategy is given below.
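The following is a hedged sketch of the DFS resolution strategy, operating directly on the python-style encoding of Figure 4. It is our own simplification: we parse with regular expressions and recurse in Python instead of issuing find() commands and backtracking through the context.

import re

def resolve(program, target):
    defs = dict(line.split(" = ", 1) for line in program.strip().splitlines()
                if " = " in line and not line.rstrip().endswith("?"))

    def value(var):
        expr = defs[var]
        if expr in ("True", "False"):
            return expr == "True"
        # Replace each referenced variable by its recursively resolved value,
        # mirroring the backtracking step of the search-based trajectory.
        for ref in re.findall(r"v\d+", expr):
            expr = expr.replace(ref, str(value(ref)))
        return eval(expr)   # the expression now contains only and/or/not/bools

    return value(target)

On the Figure 4 example, resolve(program, "v903") returns True, matching the expected answer.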
D.4 Tower of Hanoi Experiment Details

In contrast to the other tasks presented in this paper, the solution length for the Tower of Hanoi puzzle scales exponentially with the number of disks. In particular, solving the puzzle for 9 and 12 disks would require over 42,000 and 385,000 tokens, respectively.[12] A solution is considered correct only if all of its tokens are correctly generated. As a result, even if a model has 99% token accuracy, it may not show any length generalization capability. In agreement with this intuition, we found that Hanoi length generalization performance is very sensitive to the random seed. Hence, we used 10 seeds for the Mamba and Pythia models and 3 seeds for the rest of the models. In our experiments with Mamba models, we noticed that token accuracy is always high (e.g., ≥ 99.75% for 12 disks); however, the actual accuracy varies based on the seed. The performance of our 10 seeds (all trained on puzzles with up to 8 disks) is shown in Figure 5. We note that Pythia did not show any length generalization; in particular, even its token accuracy did not exceed 93%.

[12] $ is used after each move as a delimiter.

Figure 5: Performance of different seeds when training a Mamba model on Tower of Hanoi puzzles with up to 8 disks.

As discussed in Appendix D.3, the Tower of Hanoi puzzle can also be solved recursively. We also tried the recursive variant of the algorithm; however, its length generalization performance was weaker than that of the iterative algorithm. Here, we include the implementation and results of the recursive variant.

Recursive Implementation. For a puzzle with n disks, the model outputs the lists of moves for puzzles of size 1, 2, . . . , n sequentially, and uses the move list generated for the puzzle of size i - 1 to output the list of moves for the puzzle of size i, following the recursive pattern. The model uses pointer-based memory with two pointers. While outputting the list of moves for the puzzle of size i, the first pointer points to the ith disk (the largest disk moved in the moves of the puzzle of size i). The second pointer is used for implementing the recursive pattern, iterating over the moves of the puzzle of size i - 1 while generating the moves for size i. More precisely, the input gives the list of disks (e.g., (7)(5)(2)) and the model executes the following steps:

1. Both pointers are moved to the smallest (top) disk, and the model outputs the first move, i.e., moving the top disk from rod A to rod C. This solves the problem for a single disk. The first pointer moves one step back (now pointing to the second smallest disk) and the second pointer advances, pointing to the beginning of the first move. The model is now ready to output the moves for solving the puzzle for two disks.
2. At this step, the model copies the last list of moves, swapping rod labels B and C. To achieve the latter, the second pointer traverses the last list of moves and the model reads and outputs one token at a time (performing the swap if needed). This step corresponds to the first step of the recursive algorithm. At the end of copying, the second pointer is rewound to point to the beginning of the list of moves again.
3. Next, the middle move, i.e., moving the largest disk (the ith disk while outputting the moves of size i), is performed. This disk is identified by the value of the first pointer, and the move is always from rod A to C. This step corresponds to the second step of the recursive algorithm.
4. Similar to step 2, the model copies the list of moves again, swapping B and A. The second pointer is used again for iterating over the list of moves and copying. This step corresponds to the third step of the recursive algorithm. After the copying is finished, the second pointer advances and points to the beginning of the newly constructed move list. The first pointer goes one step back so that it points to the next larger disk. This completes the generation for size i, and the process iterates by returning to step 2 for size i + 1.
Logical Graph: We train models to perform a logical graph reasoning problem using search-based memory tool, training on graphs with up to 10 variables. Tower of Hanoi: We train models to solve the Tower of Hanoi (recursive implementation) reasoning problem using search-based memory tool, training on problems with up to 7 disks. The first point in each plot is the maximal problem size seen during training (i.e., all other points are out-of-distribution extrapolation). to step 2 for size i + 1. The generation terminates if there is no disk remaining for the first pointer, indicating that the lists of moves have been generated for all puzzle sizes 1, . . . , n.13 The performance of the recursive solution for the Tower of Hanoi puzzle is reported in Table 2. D.5 SSMs and Transformer Baselines In Figure 6 we report accuracies of our baseline models on the Multi-Digit Multiplication, logical reasoning and Tower of Hanoi tasks. We train each model on the same trajectories, using CoT and tool use. We perform hyperparameter optimization for each model as described in C. Our results generally point to a length generalization advantage for state space models over baseline transformer models. D.6 Ablating training steps and digit length for multiplication In Figure 7 we investigate the impact of different training configurations on generalization for multi-digit multiplication, varying the training budget (250, 500, or 800 steps) and the maximum number of digits seen during training for the first operand (5, 10, or 20). For this experiment, the learning rate was set to 0.003 based on validation. Results indicate that increasing the maximum number of digits shown during training for the first operand improved stablity of OOD generalization consistently, with results improving with more training steps. In particular, training with up to 20 digits improves generalization stability perfectly up to the 13We note that we use a delimiter (e.g., #) between the list of moves for different number of disks so that they become separable. 23 101 102 103 Sequence Length 0.0 0.2 0.4 0.6 0.8 1.0 Accuracy Task 1, max_steps=250 train_len=5 train_len=10 train_len=20 101 102 103 Sequence Length 0.0 0.2 0.4 0.6 0.8 1.0 Accuracy Task 1, max_steps=500 train_len=5 train_len=10 train_len=20 101 102 103 Sequence Length 0.0 0.2 0.4 0.6 0.8 1.0 Accuracy Task 1, max_steps=800 train_len=5 train_len=10 train_len=20 101 102 103 Sequence Length 0.0 0.2 0.4 0.6 0.8 1.0 Accuracy Task 2, max_steps=250 train_len=5 train_len=10 train_len=20 101 102 103 Sequence Length 0.0 0.2 0.4 0.6 0.8 1.0 Accuracy Task 2, max_steps=500 train_len=5 train_len=10 train_len=20 101 102 103 Sequence Length 0.0 0.2 0.4 0.6 0.8 1.0 Accuracy Task 2, max_steps=800 train_len=5 train_len=10 train_len=20 Figure 7 Multiplication generalization performance for Mamba across different training configurations. Each subplot shows accuracy as a function of sequence length for a specific maximum training steps value. Different colored lines represent different training sequence lengths, with error envelope indicating median absolute discrepancy across 5 runs. maximum digit size tested (1000 digits). However, even training with up to 5 and 10 digits show progressive improvements as the number of training steps increases. D.7 Additional Ablations We run the following ablations on the multi-digit addition task: 1. No-CoT: we train the model to directly output the final answer, without any CoT or tool-use. 2. 
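Steps 2–4 amount to a doubling construction over move lists. Below is a minimal Python sketch of that construction, again using an illustrative (disk, src, dst) move encoding rather than the model's token format:

```python
def hanoi_move_lists(n):
    """Build move lists for puzzle sizes 1..n via the recursive copy pattern:
    copy the previous list swapping B/C, emit the middle move (largest disk,
    A -> C), then copy the previous list again swapping A/B."""
    def swap(mv, a, b):
        rl = {a: b, b: a}
        return [(d, rl.get(s, s), rl.get(t, t)) for d, s, t in mv]
    lists = {1: [(1, "A", "C")]}
    for i in range(2, n + 1):
        prev = lists[i - 1]
        lists[i] = swap(prev, "B", "C") + [(i, "A", "C")] + swap(prev, "A", "B")
    return lists  # lists[n] transfers n disks from rod A to rod C
```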
Figure 6 We train various transformer (Pythia, Mistral) and SSM (Mamba, LSTM, GRU) models on Multi-Digit Multiplication, Logical Graph and Tower of Hanoi tasks, with CoT + pointer-based memory tool. Multi-Digit Multiplication: We train models on multiplying a number of up to 10 digits by a 1-digit or 2-digit number, using the pointer-based memory tool. Logical Graph: We train models to perform a logical graph reasoning problem using the search-based memory tool, training on graphs with up to 10 variables. Tower of Hanoi: We train models to solve the Tower of Hanoi (recursive implementation) reasoning problem using the search-based memory tool, training on problems with up to 7 disks. The first point in each plot is the maximal problem size seen during training (i.e., all other points are out-of-distribution extrapolation).
D.5 SSMs and Transformer Baselines
In Figure 6 we report accuracies of our baseline models on the Multi-Digit Multiplication, logical reasoning and Tower of Hanoi tasks. We train each model on the same trajectories, using CoT and tool use. We perform hyperparameter optimization for each model as described in Appendix C. Our results generally point to a length generalization advantage for state space models over baseline transformer models.
D.6 Ablating training steps and digit length for multiplication
In Figure 7 we investigate the impact of different training configurations on generalization for multi-digit multiplication, varying the training budget (250, 500, or 800 steps) and the maximum number of digits seen during training for the first operand (5, 10, or 20). For this experiment, the learning rate was set to 0.003 based on validation. Results indicate that increasing the maximum number of digits shown during training for the first operand consistently improved the stability of OOD generalization, with results improving with more training steps. In particular, training with up to 20 digits yields stable generalization up to the maximum digit size tested (1,000 digits). However, even training with up to 5 and 10 digits shows progressive improvement as the number of training steps increases.
Figure 7 Multiplication generalization performance for Mamba across different training configurations. Each subplot shows accuracy as a function of sequence length for a specific maximum-training-steps value. Different colored lines represent different training sequence lengths, with an error envelope indicating the median absolute discrepancy across 5 runs.
D.7 Additional Ablations
We run the following ablations on the multi-digit addition task:
1. No-CoT: we train the model to directly output the final answer, without any CoT or tool-use.
2. No-CoT, reversed answer: we train the model to directly output the final answer in reverse (the reverse format was shown to improve length generalization in certain settings, e.g., Zhou et al. (2024)).
3. No Tool-Use: the model is trained on trajectories similar to those in the main experiment, but now needs to predict the output of the memory tool instead of receiving these outputs as observations. Namely, the trajectory is used as CoT data.
4. Single-Turn Tool-Use: we train the model with a "calculator", where the model needs to generate a single addition command following the input (i.e., given an input a + b it needs to generate add(a, b)).
We train the Mamba model in all settings with extensive hyper-parameter tuning on 5-digit addition. Experiments 1, 2 and 4 result in perfect accuracy on 5-digit addition, but little to no length generalization. Experiment 3 results in poor performance even in-distribution.
D.8 Task mixture
In Figure 8 each panel shows accuracy as a function of test length for various training budgets (250, 500, or 800 steps). The curves correspond to different mixing weights, where w = 0 denotes the baseline trained only on the main task and higher values indicate a normalized fraction of auxiliary samples. The error bars indicate variability across random seeds.
Figure 8 Multiplication task accuracy under co-training with varying training budgets (see Sec 3.1).
The accuracy plots for the main task (multiplication) in the task mixture experiment were presented in Section 3.1. For completeness, we show the auxiliary task accuracy in Figure 9.
Figure 9 Addition task accuracy under co-training with varying training budgets (250, 500, 800 steps). Curves show different mixing weights. See Section 3.1.
E Code Fixing Agent Setup
We use the same system prompt and input prompt as in mini-SWE-agent (Yang et al., 2024a) from: https://github.com/SWE-agent/mini-swe-agent. We instruct the model to solve the bug in main.py, and explain how the bug should be solved. We modify the original prompt of mini-SWE-agent to instruct the model to interactively debug the code and generate a fix for up to 3 files at a time.
Please solve this issue: Fix the bug in main.py. Make sure to pass variable v10 to foo() and all other relevant functions. Pass v10 to ONLY the relevant functions, do not pass it if it is not needed. You can execute bash commands and edit files to implement the necessary changes.
## Recommended Workflow
This workflow should be done step-by-step so that you can iterate on your changes and any possible problems.
1. Create a script to reproduce the issue and run it
2. Spot 3 files that might be causing the issue
3. Read the content of these 3 files.
4. Edit the source code of these files to resolve the issue. Do not edit more than 3 files before running the script again, even if the code is not completely fixed.
5. Verify your fix works by running your script again; if not, analyze at most 3 more files that might cause the issue and repeat the debugging process
6. Submit your changes and finish your work by issuing the following command: 'echo COMPLETE_TASK_AND_SUBMIT_FINAL_OUTPUT'. Do not combine it with any other command. <important>After this command, you cannot continue working on this task.</important>
We plot the pass rate and generated trajectory length of the SWE agent as a function of the number of functions in the code in Figure 10.
Figure 10 Pass rate and median sequence length for SWE-agent-LM-32B on the code fixing task.
F Tool use capabilities of pretrained SSMs
At the time of writing, we were unable to find any publicly-available SSM models that were fine-tuned for function calling. The closest we could find is Mistral's Mamba-Codestral-7B-v0.1, which was fine-tuned on coding tasks. We evaluated this model on the Berkeley Function Calling Leaderboard (Patil et al.), and found an overall accuracy of 16.58%, comparable with the reported accuracies of 16.22% for Falcon3-3B-Instruct and 15.58% for Llama-3.1-8B-Instruct.
Apple and the Apple logo are trademarks of Apple Inc., registered in the U.S. and other countries and regions.
Regression Model Selection Under General Conditions Amaze Lusompa * Federal Reserve Bank of Kansas City October 17, 2025 Abstract Model selection criteria are one of the most important tools in statistics. Proofs showing a model selection criterion is asymptotically optimal are tailored to the type of model (linear regression, quantile regression, penalized regression, etc.), the estima- tion method (linear smoothers, maximum likelihood, generalized method of moments (GMM), etc.), the type of data (i.i.d., dependent, high dimensional, etc.), and the type of model selection criterion. Moreover, assumptions are often restrictive and unreal- istic making it a slow and winding process for researchers to determine if a model selection criterion is selecting an optimal model. This paper provides general proofs showing asymptotic optimality for a wide range of model selection criteria under gen- eral conditions. This paper not only asymptotically justifies model selection criteria for most situations, but it also unifies and extends a range of previously disparate results. *I thank Sai Avinash, Jason P. Brown, Òscar Jordà, and Francisco Scott for helpful comments, discussions, and suggestions. I thank Johnson Oliyide for excellent research assistance. The views expressed are those of the author and do not necessarily reflect the positions of the Federal Reserve Bank of Kansas City or the Federal Reserve System. 1 arXiv:2510.14822v1 [math.ST] 16 Oct 2025 1 Introduction Model selection criteria are one of the most important tools in statistics and are used for a variety of things such as model selection, model comparison, model averaging, or to evaluate the forecasting ability of models. Model selection criteria are used in a wide range of models from parametric, semiparametric, to nonparametric. Though model selection criteria have been around for decades and are a lynchpin in statistics, the behavior of these methods are complex and not fully understood (Bates et al., 2024). Asymptotic optimality of these criteria (their ability to select the true model or the best approximating model if the true model is not in the set), have only been shown for certain types of models in a limited number of situations. Model selection criteria can be broken down into three major classes: Cross Validation (CV), information criteria, and pseudo-out-of-sample forecasting.1,2 Though CV is consid- ered by many to be the gold standard for model selection, asymptotic optimality results for CV have been limited. Li (1987) shows that leave-one-out Cross-Validation (LOO CV) is asymptotically optimal for a class of linear smoother models under the assumption that the data are i.i.d. Andrews (1991) extends the results to handle regressions with independent but heteroskedastic residuals. Extending these results to time series, however, has been spotty at best. Results for CV either assume strict exogeneity, which is well known to be an unrealistic assumption for time series (Stock and Watson, 2007), or if they allow for lagged dependent variables, the assumptions for these models end up being highly restric- tive such as requiring the residuals to be i.i.d. which rules out models with heteroskedas- ticity of any kind as well as models with autocorrelated residuals (see for example Zhang et al. (2013), Sun et al. 
(2021)).3 As noted in Hansen and Racine (2012), extending the proofs to time-series regression would be quite challenging.4 Moving away from linear 1Bayesian model selection via the marginal likelihood is a class not discussed in this paper in part because Bayesian and frequentist notions of model selection do not necessarily agree (Moreno et al., 2015) (see Chib and Kuffner (2016) and references therein for more on model selection consistency in the Bayesian case). It is common for forecasting models estimated using Bayesian methods to use frequentist model selection criteria such as pseudo-out-of-sample forecasting. 2General-to-specific and specific-to-general testing are not a classes discussed in this paper. Though they have some popularity, unlike the other major classes they lack wide applicability (e.g. you cannot use it to find the optimal tuning parameters or things not directly involving variable selection). 3This rules out most time series models which include but are not limited to: Local Projections and direct forecast regressions (Jordà, 2005) and Vector Autoregressions or autoregressive distributed lag models with heteroskedasticity of any kind. 4To my knowledge there have been scant results for panel data with Gao et al. (2016), Yu et al. (2025) being exceptions. 2 smoothers, assumptions are even more restrictive and things covered are not that broad.5 Popular methods that are not covered at all or only covered under restrictive conditions include but are not limited to: Lasso and its variants (adaptive lasso, elastic net, square root lasso, etc.), bridge estimators more generally, least angle regression (LARS) and other stepwise methods, quantile regressions, partially linear regression, nonlinear regressions, generalized linear models, neural networks, regression trees, ensemble methods, and other machine learning methods. Asymptotic optimality of information criteria such as AIC and BIC have typically only been shown under more stringent assumptions such as i.i.d. data or i.i.d. residuals (see for example Shao (1997), Claeskens (2016), Ding et al. (2018) for extensive reviews). An exception is Sin and White (1996) who consider information criteria for dependent processes, but they do not account for nonparametric or time-varying parameter models. The paper also assumes restrictive conditions such as the models being finite dimensional as well as continuous differentiability of the log quasi-likelihood function which rules out: quantile regressions, many penalized regressions (such as lasso, its variants, most bridge estimators, and trend filtering), robust regressions (Huber loss functions), LARS and other stepwise methods, and regression trees just to name a few. Sin and White (1996) ac- knowledge that generalizing their results to a wider class of data generating processes and models represents an interesting and challenging area for future research, but general progress has not been made.6 Real-time out of sample forecasting exercises (also known as pseudo-out-of-sample forecasting) in economics and finance are regarded by many researchers as the “ultimate test of a forecasting model” in time series (see Stock and Watson (2007) page 571).7 These methods are the standard way to choose/evaluate forecast models. Despite the influx of 5Chetverikov et al. (2021) show the validity of k-fold cross validation for Lasso in high dimensions assum- ing the errors are i.i.d. and neither sub-Gaussian nor sub-exponential. As noted in their paper, the results do not cover LOO CV. 
6Sin and White (1996) show asymptotic optimality by showing the model selection criteria choose the model with the lowest Kullback-Leibler divergence from the data generating process. In this paper asymptotic optimality is demonstrated by showing model selection criteria choose the model with the lowest integrated mean squared error. Sin and White (1996) also prove strong consistency of certain information criteria. Strong consistency arguments are not attempted in this paper. See the discussion in Section 4 for the reasoning.
7This is particularly the case in economics and finance. For example, central banks routinely evaluate their forecasting models using pseudo-out-of-sample exercises (Faust and Wright, 2013), and asset pricing studies typically require pseudo-out-of-sample evidence for predictability claims to be taken seriously (see for example Welch and Goyal (2008)). The preference for pseudo-out-of-sample testing reflects the high cost of forecast errors in policy and investment decisions.
several new and more complicated methods over the past few decades (e.g. penalized regression methods, model averaging, machine learning methods, etc.), asymptotic optimality of pseudo-out-of-sample forecast methods such as rolling window and recursive methods is generally limited to least squares estimators or standard time-varying parameter models (see Rossi (2021) and references therein).
Currently, researchers spend entire papers or substantial parts of papers showing asymptotic optimality of a selection procedure, if it is shown at all, for a limited class of estimators under restrictive assumptions. In this paper, I derive the asymptotic optimality of model selection criteria under fairly general conditions. The proofs do not rely on a specific estimation method and encompass a wide array of data generating processes. Section 2 reviews CV, information criteria, and pseudo-out-of-sample forecasting. Section 3 presents the main proofs for CV, information criteria, and pseudo-out-of-sample forecasting. Section 4 provides implications and a broader discussion of results in the literature. Section 5 concludes.
Some notation: $\xrightarrow{p}$ denotes convergence in probability, $\|\cdot\|$ is the Frobenius/Euclidean norm, $O_p(\cdot)$ is big O in probability notation, and $o_p(\cdot)$ is little o in probability notation.
2 Preliminaries for Cross Validation, Information Criteria, and Pseudo-Out-Of-Sample Forecasting
Let the true model be $y_i = \mu_i + \varepsilon_i$ for $i = 1, 2, \dots, T$, where $y_i$ is the dependent variable, $\mu_i$ is the conditional mean, and $\varepsilon_i$ is the residual. In the special case of linear regressions, $y_i = x_i\beta + \varepsilon_i$ for $i = 1, 2, \dots, T$, where $x_i$ is a $1 \times p$ vector of regressors and $\beta$ is a $p \times 1$ vector of regression coefficients. The leave-one-out (LOO) residual for a regression model is calculated by estimating $\beta$ for all but one observation, and predicting the residual for the left out observation. More formally, for observation $i$,
$$\tilde\beta_{-i} = \Big(\sum_{j\neq i} x_j' x_j\Big)^{-1} \sum_{j\neq i} x_j' y_j, \qquad \tilde\mu_{-i} = x_i\tilde\beta_{-i}, \qquad \tilde\varepsilon_i = y_i - \tilde\mu_{-i},$$
where $\tilde\beta_{-i}$ is the leave-$i$-out OLS estimate of $\beta$, $\tilde\mu_{-i}$ is the leave-$i$-out estimate of $\mu_i$, and $\tilde\varepsilon_i$ is the leave-$i$-out estimate of $\varepsilon_i$. LOO CV is calculated by finding the model that minimizes the mean-squared error (MSE) of the LOO residuals, that is, by finding the model that minimizes $T^{-1}\sum_{i=1}^{T} \tilde\varepsilon_i^2$, though other loss functions such as the absolute value can be used. LOO CV was developed and is mainly used for independent data.
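The criterion just defined is easy to state in code. The following is a minimal numpy sketch of LOO CV for an OLS regression, refitting by brute force for clarity (for linear smoothers the usual hat-matrix shortcut avoids the refit); the function name and setup are illustrative, not from the paper.

```python
import numpy as np

def loo_cv_mse(X, y):
    """LOO CV MSE: estimate beta_{-i} from all observations except i, then
    average the squared prediction errors (y_i - x_i beta_{-i})^2."""
    T = len(y)
    resid2 = np.empty(T)
    for i in range(T):
        keep = np.arange(T) != i                 # drop observation i
        beta_i, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        resid2[i] = (y[i] - X[i] @ beta_i) ** 2  # LOO squared residual
    return resid2.mean()
```

One would compute loo_cv_mse for each candidate model (e.g., each column subset of X) and keep the minimizer.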
It is well known that LOO CV can perform poorly in finite samples when used for dependent data due to the residuals and regressors being autocorrelated. This can lead to overfitting or underfitting (Opsomer et al., 2001, Arlot and Celisse, 2010, Hansen, 2010). One solution is to use block CV methods (Burman et al., 1994). Block methods work the same way as the LOO method, but instead of leaving out just observation $i$, one would additionally leave out $h$ observations on both sides of $i$ to break up the dependence. More formally, one would leave out observations $i-h, i-h+1, \dots, i, \dots, i+h-1, i+h$, so for $h$-block CV via OLS estimation
$$\tilde\beta_{-i} = \Big(\sum_{j \neq i-h:i+h} x_j' x_j\Big)^{-1} \sum_{j \neq i-h:i+h} x_j' y_j.$$
Though fairly popular, the block methods have not been asymptotically justified.
Another major strand of model selection criteria are information criteria. They generally take the form of
$$\log\Big(\frac{1}{T}\sum_{i=1}^{T}\{y_i - \hat\mu_i\}^2\Big) + \frac{\lambda_T p}{T},$$
where in the standard linear regression case $\hat\mu_i = x_i\hat\beta$, $\hat\beta$ is the estimate of $\beta$ based on all of the data, $p$ is the number of predictors, and $\lambda_T > 0$ is the penalty coefficient. For the AIC $\lambda_T = 2$, for the BIC $\lambda_T = \log(T)$, for the HQIC $\lambda_T = c\log(\log(T))$ where $c > 2$, for the RIC $\lambda_T = 2\log(p)$, etc. One chooses the model that minimizes the information criterion of choice.
The last strand of model selection criteria we will discuss is pseudo-out-of-sample forecasting. The two most popular methods are rolling window forecasts and recursive forecasts (Clark and McCracken, 2009).8 In the standard linear regression case estimated using OLS, rolling window forecasts estimate a model on the last $R$ observations so that
$$\ddot\beta_i = \Big(\sum_{j=i-R}^{i-1} x_j' x_j\Big)^{-1}\sum_{j=i-R}^{i-1} x_j' y_j, \qquad \ddot\mu_i = x_i\ddot\beta_i, \qquad \ddot\varepsilon_i = y_i - \ddot\mu_i.$$
Recursive models alternatively estimate the model based on all observations up to that point with
$$\ddot\beta_i = \Big(\sum_{j=1}^{i-1} x_j' x_j\Big)^{-1}\sum_{j=1}^{i-1} x_j' y_j.$$
The optimal rolling window and recursive forecast models are chosen by finding the model that minimizes the MSE of the forecasted residuals. Note that even though the above examples estimate the standard linear regression model using OLS, the proofs of asymptotic optimality in this paper are general and can handle most types of regression models and estimation methods.
Ideally, the goal of these methods is to minimize the integrated mean squared error (IMSE). The IMSE is a distance or loss function between the true conditional mean and the estimated conditional mean for model $\alpha$, and is a measure of how well the conditional mean for model $\alpha$ approximates the true conditional mean. A model with a smaller IMSE means that model is closer to the truth in the squared error loss sense. Define $A_T$ as the set of all models being compared, with $\alpha$ the model index.9 The IMSE is defined as
$$\tilde L_T(\alpha) = \frac{1}{T}\sum_{i=1}^{T}(\mu_i - \tilde\mu_{-i}(\alpha))^2 \quad\text{and}\quad L_T(\alpha) = \frac{1}{T}\sum_{i=1}^{T}(\mu_i - \hat\mu_i(\alpha))^2$$
for leave-out and full sample estimates respectively.
8In the least squares case, recursive forecasts are also known as predictive least squares (Wei, 1992).
9Note that the models being compared may be individual models or combination/model averaged models.
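The other two strands can be sketched in the same style as the LOO example above. The following numpy functions are illustrative, not from the paper: h = 0 recovers LOO CV; lam picks the information-criterion penalty (2 for AIC, log T for BIC, and so on); and R and t0 play the roles of the rolling window size and the recursive starting point defined in the text.

```python
import numpy as np

def hblock_cv_mse(X, y, h):
    """h-block CV: drop observations i-h,...,i+h before refitting."""
    T, idx = len(y), np.arange(len(y))
    resid2 = np.empty(T)
    for i in range(T):
        keep = np.abs(idx - i) > h               # keep points outside the block
        b, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        resid2[i] = (y[i] - X[i] @ b) ** 2
    return resid2.mean()

def info_criterion(X, y, lam):
    """log of the full-sample MSE plus the penalty lam * p / T."""
    T, p = X.shape
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.log(np.mean((y - X @ b) ** 2)) + lam * p / T

def pseudo_oos_mse(X, y, R=None, t0=20):
    """One-step pseudo-out-of-sample MSE: rolling window of size R,
    or recursive (expanding window) when R is None."""
    T = len(y)
    errs = []
    for i in range(t0, T):
        lo = 0 if R is None else max(0, i - R)   # estimation sample start
        b, *_ = np.linalg.lstsq(X[lo:i], y[lo:i], rcond=None)
        errs.append((y[i] - X[i] @ b) ** 2)
    return np.mean(errs)
```

Each criterion is minimized over the candidate set; the theorems below state that, under the paper's assumptions, all three choices coincide asymptotically with minimizing the IMSE.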
For recursive and rolling window estimates, the IMSE is defined as
$$\ddot L_T(\alpha) = \frac{1}{T-t_0}\sum_{i=t_0+1}^{T}(\mu_i - \ddot\mu_i(\alpha))^2 \quad\text{and}\quad \ddot L_T(\alpha) = \frac{1}{T-R}\sum_{i=R+1}^{T}(\mu_i - \ddot\mu_i(\alpha))^2,$$
where $t_0$ is the minimum number of observations needed so $\ddot\mu_i(\alpha)$ is uniquely defined for all models and all observations $i$.10 To show asymptotic optimality, I follow the standard in the literature and show asymptotic optimality in the sense that
$$\frac{L_T(\hat\alpha)}{\inf_{\alpha\in A_T} L_T(\alpha)} \xrightarrow{p} 1,$$
where $\hat\alpha$ is the model in the set $A_T$ selected by CV, information criteria, or pseudo-out-of-sample forecasting methods. Intuitively, the above formula says that as the sample size tends toward infinity, the probability of the model selection procedure choosing the model with the smallest IMSE converges to 1.11
10For least squares models, $t_0$ would be the max $p(\alpha)$ for all models in the recursive case. In the rolling window case, $R$ would need to be large enough that $\ddot\mu_i(\alpha)$ is uniquely defined.
11Note that asymptotic optimality is shown in terms of the full sample IMSE.
3 Optimality of Model Selection Under General Conditions
This section shows the proofs for asymptotic optimality. Section 3.1 presents the proofs for CV, Section 3.2 for information criteria, and Section 3.3 for pseudo-out-of-sample forecasting.
3.1 Cross-Validation
I use the following assumptions to show asymptotic optimality for CV:
Assumption 1. The following conditions are satisfied $\forall \alpha \in A_T$:
(a) For all $i$, $E(\mu_i\varepsilon_i) = 0$, $E(x_i(\alpha)\varepsilon_i) = 0$, $E(\varepsilon_i^2) < \infty$, and each element in $x_i(\alpha)\varepsilon_i$ has finite second moments.
(b) $\|\hat\mu_i(\alpha) - \tilde\mu_{-i}(\alpha)\| \xrightarrow{p} 0$ and $\|\hat\mu_i(\alpha) - \tilde\mu_{-i}(\alpha)\| < \infty\ \forall i$.
(c) $\{\mu_i\varepsilon_i\}_{i=1}^{T}$ satisfies conditions for a law of large numbers.
(d) $\{x_i(\alpha)\varepsilon_i\}_{i=1}^{T}$ satisfies conditions for a law of large numbers.
(e) The dimension of $x_i(\alpha)$, $p(\alpha)$, can grow with the sample size, but it diverges at a slower rate than the rate of convergence for the applicable law of large numbers.
(f) $\sup_{\alpha\in A_T} |\tilde L_T(\alpha)/L_T(\alpha) - 1| \xrightarrow{p} 0$.
(g) Each model $\alpha$ can be written as a linear regression $y_i = x_i(\alpha)\beta(\alpha) + \varepsilon_i(\alpha)$ where $\mu_i(\alpha) = x_i(\alpha)\beta(\alpha)$.
(h) Either all of the models in the set $A_T$ are misspecified, or at most one model in the set is true.
First note that $\varepsilon_i$ is the true residual and $\varepsilon_i(\alpha)$ is the residual for model $\alpha$. Assumption 1(a) should be an uncontroversial assumption since it is made up of standard first order moment conditions and requires that the second moments for the true residual and the regression score exist and are bounded. The first two parts follow directly from the exogeneity assumption, $E(\varepsilon_i|x_i) = 0$, and depending on the models being considered could be strictly (past, present, and future) exogenous, past and present exogenous, or past exogenous. Most time series models would be past or past and present exogenous, while non-time series models are mostly assumed to be strictly exogenous. Note that the dimension of $x_i(\alpha)\varepsilon_i$ is $1 \times p(\alpha)$. The dimensions are suppressed in the proofs to make the notation simpler, but the dimensions are taken into account in the proofs. Assumption 1(a) can even apply to quantile regressions, though it does not apply to quantile regressions in all situations (see the discussion in Machado and Silva (2019)).12 Note that 1(a) would apply even in the case of omitted variable bias or classical measurement error since $\varepsilon_i$ is the true residual. If, however, there is simultaneity, 1(a) would not apply unless one were to first instrument the endogenous variable(s).
The first part of Assumption 1(b) is a more general version of the assumption made in Li (1987) and Hansen and Racine (2012) that the leverage values for each model $\alpha$ dissipate to zero as $T \to \infty$. The second part is just a uniform boundedness assumption which rules out razor's-edge cases where, for example, the difference grows as a function of the sample size. As argued in Li (1987), it should be the case that the impact that leaving out an observation has on an estimate dissipates asymptotically for any reasonable estimator.13 Note that in the case of h-block CV, one can allow $h$ to grow with the sample size, but to ensure that the condition is satisfied, one can let $\frac{h}{T} \to 0$ as $h, T \to \infty$.14
Assumptions 1(c) and 1(d) are standard model assumptions that are used when proving consistency. There is a wide range of laws of large numbers (LLN) for different data including: i.i.d., independent but not identically distributed, stationary, mixing, mixingale, near-epoch dependent, and locally stationary, just to name a few. For a more exhaustive list for different types of cross sectional and time series data, see for example chapter 3 in White (2001) or part 4 in Davidson (2022).
Assumption 1(e) places a restriction on the number of variables that can be included in a model. For example, many laws of large numbers converge at rate $O_p(T^{-1/2})$ for each element in the vector, so assuming $\frac{p(\alpha)^2}{T} \to 0$ as $p(\alpha), T \to \infty$ would be sufficient in many cases. Assuming that $\frac{p(\alpha)^2}{T} \to 0$ or $\frac{p(\alpha)^3}{T} \to 0$ is standard in infinite dimensional time series models, infinite dimensional semiparametric models, as well as high dimensional models (Lütkepohl, 2005, Chen, 2007, Fan and Peng, 2004).15 This assumption also rules out the razor's-edge case of perfectly predicting $y_i$ by including as many variables as there are observations.
Assumption 1(f) is standard in the literature, and can be imposed for example by assuming that either $\|\mu_i\|$ or $\|\mu_i - \hat\mu_i(\alpha)\|$ are bounded above for all $i$ and $\alpha$ in addition to Assumption 1(b) (see Lemma 1 in the appendix). Assumption 1(g) assumes that the candidate models can be written as linear regressions and is a standard assumption in model selection.16 Though 1(g) is set up for variable/specification selection, the proofs also apply to bandwidth selection or other tuning parameters where $\alpha$ denotes the model for a specific tuning parameter. Assumption 1(g) is a broad assumption that includes a broad range of regression models; however, it excludes nonparametric kernel regression models (e.g. Nadaraya-Watson or Gasser-Muller types) as well as time-varying parameter models. These models will require modifications to the proof and will be addressed in Theorem 2.
12In the context of this paper, for quantile regressions one would want to minimize the IMSE between the conditional quantile for model $\alpha$ and the true conditional quantile. Other loss functions have been used in the literature for quantile regressions (see for example Lu and Su (2015)).
13This would clearly apply to M-estimators that satisfy standard regularity conditions and would apply more generally to extremum estimators that satisfy certain regularity conditions (see for example Theorem 2.1 in Newey and McFadden (1994)). It is also consistent with convergence of misspecified models (see for example White (1981, 1982, 1994) and references therein).
14This is standard in block bootstrapping (Lahiri, 2003). It may also be possible to set $h$ as a constant fraction of the sample size.
Assumption 1(h) is a standard assumption in the literature (see Li (1987), Shao (1997), Hansen and Racine (2012) and references therein). It is consistent with the statistical adage from George Box that "All models are wrong" or "All models are wrong, but some are useful" as a way to point out that in science, all models are approximate, so we should not believe the true model is in the set. This assumption is satisfied for infinite dimensional models since the true model cannot be in the set. For finite dimensional models, people do not think the true model is in the set anyway.17
15Note that high-dimensional models with a diverging number of parameters such that $\frac{p(\alpha)}{T} \in (0,\infty)$ as $p(\alpha), T \to \infty$ (which are sometimes called proportional asymptotics) are excluded.
16See for example White (1981, 1982, 1994) and references therein for more about pseudo-true models.
17If one includes multiple true models in the set, the model selection procedures will still select a true model, since asymptotically the model selection procedure will choose a model whose IMSE converges to zero, though it may not select the most parsimonious version of the true model in the set, whose IMSE converges to zero faster than those of the other true models (Shao, 1997). So it is essentially a razor's-edge case where a true model is selected (and the selected true model will have an IMSE which converges to zero), but the true model that is selected does not necessarily have the IMSE which converges to zero the fastest, so the model selection procedure may not be asymptotically optimal in this case.
Theorem 1. Under Assumption 1, $\hat\alpha$ chosen by LOO CV or h-block CV is asymptotically optimal.
Proof. Note that $\tilde\varepsilon_{-i}(\alpha) = y_i - \tilde\mu_{-i}(\alpha) = \mu_i + \varepsilon_i - \tilde\mu_{-i}(\alpha)$. Using simple algebra, it can be shown that the LOO (or the h-block equivalent) squared residual can be decomposed as
$$\tilde\varepsilon_{-i}^2(\alpha) = \varepsilon_i^2 + 2(\mu_i - \tilde\mu_{-i}(\alpha))\varepsilon_i + (\mu_i - \tilde\mu_{-i}(\alpha))^2,$$
implying that
$$\frac{1}{T}\sum_{i=1}^{T}\tilde\varepsilon_{-i}^2(\alpha) = \frac{1}{T}\sum_{i=1}^{T}\varepsilon_i^2 + \frac{1}{T}\sum_{i=1}^{T}2\mu_i\varepsilon_i - \frac{1}{T}\sum_{i=1}^{T}2\tilde\mu_{-i}(\alpha)\varepsilon_i + \tilde L_T(\alpha).$$
Note that the first term, $\frac{1}{T}\sum_{i=1}^{T}\varepsilon_i^2$, is the mean squared error of the true residuals and shows up in every model, so it can be ignored since it will not affect the ordering of the models. By Assumption 1(f), $\tilde L_T(\alpha)$ can be replaced by $L_T(\alpha)$ asymptotically. To show asymptotic optimality, it is therefore sufficient to show that $\frac{1}{T}\sum_{i=1}^{T}2\mu_i\varepsilon_i \xrightarrow{p} 0$ and $\frac{1}{T}\sum_{i=1}^{T}2\tilde\mu_{-i}(\alpha)\varepsilon_i \xrightarrow{p} 0$, since if this is the case, the LOO MSEs of the different models would only differ because of their model specific IMSE.18 Therefore, choosing the model with the smallest LOO MSE would be choosing the model with the smallest IMSE. $\frac{1}{T}\sum_{i=1}^{T}2\mu_i\varepsilon_i \xrightarrow{p} 0$ follows directly from the exogeneity condition (Assumption 1(a)) and an appropriate LLN (Assumption 1(c)). To show that $\frac{1}{T}\sum_{i=1}^{T}2\tilde\mu_{-i}(\alpha)\varepsilon_i \xrightarrow{p} 0$, first note that by 1(g) $\tilde\mu_{-i}(\alpha) = x_i(\alpha)\tilde\beta_{-i}(\alpha)$. So
$$\frac{1}{T}\sum_{i=1}^{T}2\tilde\mu_{-i}(\alpha)\varepsilon_i = \frac{1}{T}\sum_{i=1}^{T}2x_i(\alpha)\hat\beta(\alpha)\varepsilon_i + \Big[\frac{1}{T}\sum_{i=1}^{T}2x_i(\alpha)(\tilde\beta_{-i}(\alpha)-\hat\beta(\alpha))\varepsilon_i\Big] = \underbrace{\Big[\frac{1}{T}\sum_{i=1}^{T}2x_i(\alpha)\varepsilon_i\Big]\hat\beta(\alpha)}_{\xrightarrow{p}\,0} + \underbrace{\Big[\frac{1}{T}\sum_{i=1}^{T}2x_i(\alpha)(\tilde\beta_{-i}(\alpha)-\hat\beta(\alpha))\varepsilon_i\Big]}_{\xrightarrow{p}\,0}.$$
To understand why the first term converges to zero, note that $\big[\frac{1}{T}\sum_{i=1}^{T}2x_i(\alpha)\varepsilon_i\big]$ converges in probability to a zero vector by the exogeneity condition (Assumption 1(a)) and an appropriate LLN for $x_i(\alpha)\varepsilon_i$ (Assumption 1(d)). In the finite dimension case that is enough for the first term to converge. If the dimension $p(\alpha)$ grows with the sample size, by Assumption 1(e) $p(\alpha)$ grows at a slower rate than the appropriate LLN, so the entire first term converges to zero in that case as well.19 To show why the second term converges to zero, it is sufficient to show that
$$E\,\Big\|\frac{1}{T}\sum_{i=1}^{T}2x_i(\alpha)(\tilde\beta_{-i}(\alpha)-\hat\beta(\alpha))\varepsilon_i\Big\| \xrightarrow{p} 0$$
since convergence in mean implies convergence in probability. Note that
$$E\,\Big\|\frac{1}{T}\sum_{i=1}^{T}2x_i(\alpha)(\tilde\beta_{-i}(\alpha)-\hat\beta(\alpha))\varepsilon_i\Big\| \leq \frac{1}{T}\sum_{i=1}^{T}E\,\big\|2x_i(\alpha)(\tilde\beta_{-i}(\alpha)-\hat\beta(\alpha))\varepsilon_i\big\| \leq \frac{1}{T}\sum_{i=1}^{T}\underbrace{\big(E\,\|2x_i(\alpha)(\tilde\beta_{-i}(\alpha)-\hat\beta(\alpha))\|^2\big)^{1/2}}_{\xrightarrow{p}\,0}\,\underbrace{\big(E(\varepsilon_i^2)\big)^{1/2}}_{\text{bounded}} \leq \text{constant}\cdot\frac{1}{T}\sum_{i=1}^{T}o_p(1) = o_p(1)$$
due to the triangle inequality, the Cauchy-Schwarz inequality, Assumption 1(a), and Assumption 1(b).
18Technically, since $\frac{1}{T}\sum_{i=1}^{T}2\mu_i\varepsilon_i$ shows up in every model, it can be ignored as well.
19If for example the LLN converges at the standard rate of $O_p(T^{-1/2})$, then $\|[\frac{1}{T}\sum_{i=1}^{T}2x_i(\alpha)\varepsilon_i]\hat\beta(\alpha)\| = O_p(\frac{p(\alpha)}{\sqrt{T}})$ and as long as $p(\alpha)$ grows slower than $\sqrt{T}$, there will be convergence to zero in probability.
As was just shown, setting up the problem by decomposing the LOO MSE in terms of the MSE of the true residuals, the IMSE, and what are essentially standard first order moment conditions allows us to leverage standard statistical results without having to specify the type of estimation method, and easily allows for generality in the type of model and type of data. The above proof also shows h-block CV is asymptotically optimal, which, to my knowledge, no one has actually shown (Burman et al., 1994, Racine, 2000). These proofs could also be applied to k-fold CV, but it would require assumptions on the fold size similar to h-block CV (e.g. if you have $k$ folds with $h$ observations in each fold, $h$ can grow with the sample size but $\frac{h}{T} \to 0$).
As mentioned earlier, Theorem 1 does not allow for observation dependent coefficients, so it does not account for time-varying parameter models or Nadaraya-Watson type nonparametric models. To address this shortcoming, we need a slightly different set of assumptions.
Assumption 2. Assumption 1 holds except for 1(e) and 1(g). In addition, assume that either condition (a) or condition (b) below holds. Lastly, condition (c) below holds.
(a) For all $\alpha \in A_T$, $y_i = x_i(\alpha)\beta(\alpha, x_i(\alpha)) + \varepsilon_i(\alpha)$ where $\mu_i(\alpha) = x_i(\alpha)\beta(\alpha, x_i(\alpha))$ for the nonparametric kernel regression case. $\beta(\alpha, x_i(\alpha))$ is continuous in $x_i(\alpha)$.
(b) For all $\alpha \in A_T$, $y_i = x_i(\alpha)\beta_i(\alpha) + \varepsilon_i(\alpha)$ where $\mu_i(\alpha) = x_i(\alpha)\beta_i(\alpha)$ in the time-varying parameter case. $\beta_i(\alpha)$ is continuous in $i$ in the infill asymptotic sense.
(c) The data can be divided into $m$ blocks with $\ell$ observations in each block. These blocks can be constructed in such a way that $\ell$ goes to infinity, but $\hat\beta_i(\alpha) = \hat\beta_r(\alpha) + o_p(1)$ or $\hat\beta(\alpha, x_i) = \hat\beta(\alpha, x_r) + o_p(1)$ for $i \neq r$ within the same block.
For time-varying parameter models, βi(α) is a continuous function in time and follows standard infill asymptotics arguments (Robinson, 1989, Cai, 2007, Chen and Hong, 2012, Dahlhaus, 2012). Note that for time-varying parameter case, the continuous function can be deterministic or stochastic.20 The continuity assumption for nonparametric kernel re- gressions is also standard (Fan and Gijbels, 1996). Assumption 2(c) just says as the sample size grows, the difference between estimates in the same block should decrease asymptot- ically. Note that the construction of the m blocks of size ℓis in line with consistency assumptions and intuition behind nonparametric kernel regressions and infill asymptotics of time-varying parameter models. In the case where the estimators are consistent/pseudo consistent, this condition is satisfied.21 There are also cases where this condition can be satisfied even if the estimators are not consistent/pseudo consistent (e.g. rolling window regression where the window is a constant fraction of the sample size or more generally a kernel regression where the bandwidth does not go to zero fast enough to satisfy the consistency condition). The second difference from Assumption 1 is that the dimension, p(α), is not growing with the sample size. This is standard in the time-varying parameter and nonparametric kernel regression literature. As will be seen in the proofs, the proofs may be able to handle p(α) growing with the sample size, but it require being specific about the rates at which p(α) and ℓgrow. Theorem 2. Under Assumptions 2, ˆα chosen by LOO CV or h-block CV is asymptotically optimal. Proof. The proof is the same as Theorem 1, except for the proof that 1 T ∑T i=1 2̃µ−i(α)εi con- verges in probability to 0. The proof will be written for time-varying parameter model case, but it also applies to the nonparametric kernel regression. It is written in terms of the 20The standard in nonparametric time-varying parameter estimation is to use nonparametric kernels (Robinson, 1989, Cai, 2007, Dahlhaus, 2012), and it is typically assumed that the function is determinis- tic and differentiable. It has been shown that kernel methods can be used for certain stochastic processes (Giraitis et al., 2014, 2021). As long as the continuous functions can be written as an infinite order of or- thogonal basis functions (whether it be by the Stone-Weierstrass Theorem or Karhunen-Loève Theorem), the time-varying parameters can alternatively be estimated using basis function approximations (Huang et al., 2002, 2004). 21It is fairly standard in nonparametric regression literature to assume that the class of estimators being compared are consistent/pseudo consistent (see for example Li (1985), Härdle and Linton (1994), Hart (1994), Leung (2005), Sun et al. (2021) and references therein). 12 time-varying parameter model for convenience since observations are naturally ordered in regards to time. To show 1 T ∑T i=1 2̃µ−i(α)εi converges in probability to 0, first divide the data into m blocks with ℓobservations in each block. The blocks are constructed in such a way that ℓgoes to infinity, but ℓis small enough such that for any two time-varying parameter vectors in the same block ˆβi(α) = ˆβr(α) + op(1). 
For any one block we have 1 ℓ ℓ ∑ i=1 2̃µ−i(α)εi = 1 ℓ ℓ ∑ i=1 2xi(α)ˆβi(α)εi + [1 ℓ ℓ ∑ i=1 2xi(α)(̃β−i(α) −ˆβi(α))εi] = [1 ℓ ℓ ∑ i=1 2xi(α)εi]ˆβr(α) ´¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¸¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¶ p→0 +1 ℓ ℓ ∑ i=1 2xi(α)εiop(1) + [1 ℓ ℓ ∑ i=1 2xi(α)(̃β−i(α) −ˆβi(α))εi] ´¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¸¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¶ p→0 where convergence in probability to zero for the first term follows from the exogeneity condition (Assumption 1(a)) and an appropriate LLN (Assumption 1(d)). Convergence of the third terms follows the same argument used in the proof of Theorem 1 and is omitted for brevity. To show convergence of the second term, note that it is sufficient to show that E ∥1 ℓ ℓ ∑ i=1 2xi(α)εiop(1) ∥ pÐ→0. Similar to an argument used in the proof used in Theorem 1, note that E ∥1 ℓ ℓ ∑ i=1 2xi(α)εiop(1) ∥≤1 ℓ ℓ ∑ i=1 (E ∥2xi(α)εi ∥2)1/2 ´¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¸¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¶ Op(p(α))=Op(1) (E ∥op(1) ∥2)1/2 ´¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¸¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¶ p→0 ≤constant × 1 ℓ ℓ ∑ i=1 op(1) = op(1) due to the triangle inequality, the Cauchy-Schwarz inequality, and Assumption 1(a). Since for each block we have ∥1 ℓ∑ℓ i=1 2̃µ−i(α)εi ∥converging to zero in probability, the average over those blocks will also converge to zero, thus completing the proof. Remark 1. For the time-varying parameter case, it is assumed that the parameters map to a continuous function. This may be unrealistic if one thinks that the time-varying parameter process is not continuous and instead exhibits structural breaks. The mapping to a contin- uous function is not a necessary assumption. In the case of structural breaks, the proofs still go through as is, but the blocks need to correspond to break fractions (Andrews, 1993, 13 Bai and Perron, 1998). For the non-parametric kernel regression case, these proofs could also be applied to k-fold CV.22 3.2 Information Criteria I use the following assumptions to show asymptotic optimality for information criteria: Assumption 3. Either Assumption 1 or Assumption 2 is satisfied (except for 1(b) and 1(f)). In addition, assume the penalty term λT p(α) T →0 for all α ∈AT Assumptions 1(b) and 1(f) are dropped since information criteria use all of the data when estimating the residuals unlike CV. The penalty term tending toward zero asymptot- ically occurs for most if not all of the popular information criteria including but not limited to: AIC, BIC, HQIC, RIC, and Generalized information criteria. Theorem 3. Under Assumption 3, ˆα chosen by an information criteria is asymptotically op- timal. Proof. The proof follows along similar lines as the proof of Theorems 1 and 2 except there is no need to show ∥1 T ∑T i=1 2xi(α)(̃β−i(α) −ˆβ(α))εi] ∥or ∥[1 ℓ∑ℓ i=1 2xi(α)(̃β−i(α) − ˆβi(α))εi] ∥converge to 0 in probability for the constant parameter and time-varying pa- rameter model/nonparametric model cases, respectively. It is sufficient to just demonstrate the proof under the constant parameter case and the time-varying/nonparametric regres- sion cases are omitted for brevity since they are simpler versions of the proof of Theorem 2. 
In the constant parameter case with information criteria, we have 1 T T ∑ i=1 ˆε2 i (α) = 1 T T ∑ i=1 ε2 i + 1 T T ∑ i=1 2µiεi −1 T T ∑ i=1 2ˆµi(α)εi + LT(α). The penalty has no impact on the ordering of the models asymptotically since the penalty converges to 0 asymptotically, and the natural log does not change the ordering of the in- formation criteria for the models since it is a strictly monotonic function. Therefore to show asymptotic optimality, it is sufficient to show that 1 T ∑T i=1 2µiεi p→0 and 1 T ∑T i=1 2ˆµi(α)εi p→0 since by the continuous mapping theorem the only difference between the information criteria asymptotically would be the model specific IMSE. The proof 1 T ∑T i=1 2µiεi p→0 fol- lows same argument used in proof of Theorem 1 and after noting that ˆµi(α) = xi(α)ˆβ(α), 22Again, though 2(a) and 2(b) are set up for variable/specification selection, these proofs also apply to bandwidth selection or other tuning parameters where α denotes the model for a specific tuning parameter. 14 1 T ∑T i=1 2ˆµi(α)εi p→0 follows from the argument that [ 1 T ∑T i=1 2xi(α)εi]ˆβ(α) p→0 shown in Theorem 1. So despite AIC, BIC, and other information criteria being derived under fairly restrictive conditions (i.i.d. data or independent residuals), they are asymptotically optimal under more general conditions. This result holds for any penalty that converges to 0 asymp- totically, thus indicating the arbitrariness of the penalty from an asymptotic optimality standpoint. In finite samples, the chosen model may vary drastically due to the penalty, but currently it is not clear what the penalty should be, and it may be the case that there is no best penalty for all situations. 3.3 Pseudo-Out-Of-Sample Forecast Methods 3.3.1 Constant Parameter Case To show asymptotic optimality of rolling window and recursive forecasts in the constant parameter case, we need updated assumptions. Assumption 4. Assume as R and t0 grow, Assumption 1 holds. In Assumption 1, ̃β−i(α), ̃µ−i(α), and ̃LT(α) are replaced by ˙˙˙˙ β i(α), ˙˙˙˙µi(α), and ˙˙˙˙ LT(α), respectively. In addition, assume (a) R and t0 grow with the sample size but at a slower rate such that R,t0,T →∞but T −R,T −t0 →∞. Assumption 4 is the rolling window and recursive version of Assumption 1. Assumption 4(a), which requires that the window size, R, grows with the sample size but at a slower rate, is standard in the literature (see Inoue et al. (2017) and references therein). Letting the initial starting point for the recursive forecast, t0, tend toward infinity is also common (e.g. West (1996), West and McCracken (1998), Clark and McCracken (2009)), though many papers also assume that it is either constant or a constant fraction of the sample size.23 Theorem 4. Under Assumption 4, ˆα chosen by rolling window and recursive forecast is asymp- totically optimal. Proof. See appendix. 23The proofs for the recursive case can be written under the alternative assumptions for t0, though it would require different proofs, and it is convenient to assume that it grows with the sample size as it allows the same proof to be used for both the rolling window and recursive forecast cases. 15 3.3.1 Time-varying Parameter Case To show asymptotic optimality for rolling window and recursive forecasts in the time- varying parameter case, we use the following assumption: Assumption 5. Assume R and t0 grow in such a way that Assumption 2 holds. In Assumption 2, ̃β−i(α), ̃µ−i(α), and ̃LT(α) are replaced by ˙˙˙˙ β i(α), ˙˙˙˙µi(α), and ˙˙˙˙ LT(α), respectively. 
In addition, assume (a) R,t0,T →∞, but T −R,T −t0 →∞. As long as R and t0 grow at suitable rates, the modified Assumption 2 should be satis- fied. In the case of nonparametric kernel estimation with time-varying parameters, rolling window estimates are simply one sided kernel estimates of the time-varying parameters (Inoue et al., 2017, Cai and Juhl, 2023, Farmer et al., 2023), as opposed to the full sample estimates which are generally based on two sided kernels. Taking that into account along with the infill asymptotic framework, the recursive/rolling window version of Assumption 2(c) is not unrealistic.24 Note that the rate at which the block size, ℓ, grows may be depen- dent on the rate at which R and t0 grow. Assumption 5(a) is the same as 4(a) and follows the same reasoning. Theorem 5. Under Assumption 5, ˆα chosen by rolling window and recursive forecasts is asymptotically optimal. Proof. See appendix. 4 Implications and Discussion Under fairly general conditions, CV, the most popular information criteria, and the most popular pseudo-out-of-sample forecasting methods will asymptotically all choose the model with the same conditional mean, so from an asymptotic optimality standpoint, there is no benefit from using one procedure versus another. In regards to what model selection pro- cedure one should use in finite samples, the arguments in this paper cannot speak to them. I do think the results help highlight why recommendations for certain model selection pro- cedures are not as strong as they first appear. 24This also makes sense if one uses the basis function approximation approach to estimating time-varying parameter models (Huang et al., 2002, 2004). 16 Many people over the last few decades have advocated the use of model selection criteria such as BIC and HQIC on the grounds that they are consistent (see for example Shao (1997), Claeskens (2016), Ding et al. (2018) and references therein). What does not get talked about enough is consistency is a distinction without a difference. Intuitively, consistency of a model selection procedure simply means that if the most parsimonious version of the true model is in a set, then the model selection procedure will choose the most parsimonious version of the true model asymptotically. What does not get brought up enough is that it could be the case that in finite samples, the researcher would prefer a criteria that is not consistent. Consistent model selection criteria tend to choose a more parsimonious version of the model, which may not be ideal. There is a host of Monte Carlo evidence indicating this is the case. For example Lütkepohl (2005) (page 155) shows in a Monte Carlo where the true model is a vector autoregression (VAR) with two lags that AIC selected the true model more often than BIC and HQIC, despite the latter two criteria being consistent. Burnham and Anderson (2004) shows in a Monte Carlo that when the true model is denser (has more variables), AIC does a better job of selecting a model closer to the truth than BIC, but the results depend on the size of the parameters. Ng and Perron (2005) and Ng (2013) show that neither AIC or BIC dominate, but again, the results depend on the size of the parameters. If the true model is not in the set (which would be the case for infinite dimensional models), a model selection procedure cannot be consistent since the true model can never be in the set of models being compared. 
Even if the true model is finite dimensional, if the true model is not in the set, then we currently do not have a reason (from the asymptotic point of view) to prefer one model selection procedure over another. This is in line with the George Box argument that “All models are wrong” or “All models are wrong, but some are useful” as a way to point out that in science, all models are approximate, so we should not believe the true model is in the set.25 There has also been focus on model selection procedures, such as AIC, being mini- max.26,27 A model selection procedure being minimax simply means that in the worst case scenario, the minimax model selection procedure has a smaller IMSE than a non-minimax procedure, or alternatively, minimax procedures minimize downside risk. But for any par- 25Many researchers have realized this, and it is one of the reasons why model averaging has become popular over the past 30 years (Steel, 2020). 26Conditions for minimax estimators are generally derived under i.i.d. data, and to my knowledge these results have not been extended. 27Yang (2005) shows an estimator cannot be strongly consistent and minimax at the same time. 17 ticular situation a researcher is interested in, a procedure being minimax does not say anything about which criteria is preferred in finite samples because the situation may be far from the worst case scenario. Minimax procedures tend to choose larger models, and consistent procedures tend to choose smaller models, so depending how accurately and efficiently estimated parameters are and whether you are more worried about overfitting or underfitting, you may prefer one procedure over another. In a finite sample, there is a bias variance tradeoff because while parameters may be biased when the selected model is too small, the parameter estimates may be highly inefficient if the estimated model in- cludes too many parameters (Ng, 2013, Rossi, 2021). Furthermore, as long as λT is a finite constant greater than 1, the information criteria is minimax (Shao, 1997, Yang, 2005). So even within the class of minimax procedures, it is not clear which one should be chosen since as shown in Theorem 3, they are all asymptotically optimal.28 Based off the results in this paper and in the literature, I believe it is the case that the literature should focus more on the finite sample properties or maybe use local asymptotics or set up problems in such a way that the differences in model selection criteria show up in the limit.29 5 Conclusion This paper provides general proofs showing optimality for a wide range of model selection criteria under fairly general conditions. This paper not only asymptotically justifies model selection criteria for most situations, but it also unifies and extends a range of previously disparate results. The results from this paper should allow researchers to move on from showing the asymptotic optimality of model selection procedures for most situations and to potentially focus on showing the theoretical finite sample properties of these methods. Since it is the case that the most popular methods end up being asymptotically optimal under general conditions, the choice of which model selection procedure to choose from, like most things in statistics, appears to be a finite sample choice and not an asymptotic one. 28The same holds for consistent model selection procedures. A sufficient condition for consistent model selection procedures require that λT →∞but λT p T →0. 
Since there are an infinite number of ways to construct a consistent model selection procedures and they are asymptotically optimal under the conditions stated in this paper, the literature currently cannot distinguish between procedures within this class from an asymptotic standpoint. 29An important topic of research in the literature that is beyond the scope of this paper is taking into account the impact model selection has on subsequent inference (see for example Leeb and Pötscher (2005)). 18 References Andrews, D. W. (1991). Asymptotic optimality of generalized cl, cross-validation, and generalized cross-validation in regression with heteroskedastic errors. Journal of Econo- metrics 47(2-3), 359–377. 2 Andrews, D. W. K. (1993). Tests for parameter instability and structural change with unknown change point. Econometrica 61(4), 821–856. 13 Arlot, S. and A. Celisse (2010). A survey of cross-validation procedures for model selection. Statistical Survey 4, 40–79. 5 Bai, J. and P. Perron (1998). Estimating and testing linear models with multiple structural changes. Econometrica 66(1), 47–78. 14 Bates, S., T. Hastie, and R. Tibshirani (2024). Cross-validation: What does it estimate and how well does it do it? Journal of the American Statistical Association 119(546), 1434–1445. 2 Burman, P., E. Chow, and D. Nolan (1994). A cross-validatory method for dependent data. Biometrika 81(2), 351–358. 5, 11 Burnham, K. P. and D. R. Anderson (2004). Multimodel inference understanding aic and bic in model selection. Sociological Methods & Research 33(2), 261 – 304. 17 Cai, Z. (2007). Trending time-varying coefficient time series models with serially corre- lated errors. Journal of Econometrics 136(1), 163–188. 12 Cai, Z. and T. Juhl (2023). The distribution of rolling regression estimators. Journal of Econometrics 235(2), 1447–1463. 16 Chen, B. and Y. Hong (2012). Testing for smooth structural changes in time series models via nonparametric regression. Econometrica 80(3), 1157–1183. 12 Chen, X. (2007). Handbook of Econometrics, Volume 6, Chapter LARGE SAMPLE SIEVE ESTIMATION OF SEMI-NONPARAMETRIC MODELS. 8 Chetverikov, D., Z. Liao, and V. Chernozhukov (2021). On cross-validated lasso in high dimensions. Annals of Statistics 49(3), 1300–1317. 3 19 Chib, S. and T. A. Kuffner (2016). Bayes factor consistency. Working Paper. 2 Claeskens, G. (2016). Statistical model choice. Annual Review of Statistics and Its Applica- tion 3, 233–256. 3, 17 Clark, T. E. and M. W. McCracken (2009). Improving forecast accuracy by combining recursive and rolling forecasts. International Economic Review 50(2), 363– 395. 5, 15 Dahlhaus, R. (2012). Handbook of Statistics, Volume 30, Chapter Locally Stationary Pro- cesses. 12 Davidson, J. (2022). Stochastic Limit Theory: An Introduction for Econometricians (2nd ed.). Oxford University Press. 8 Ding, J., V. Tarokh, and Y. Yang (2018). Model selection techniques—an overview. IEEE Signal Processing Magazine 35(6), 16–34. 3, 17 Fan, J. and I. Gijbels (1996). Local Polynomial Modelling and Its Applications. CHAPMAN & HALL/CRC. 12 Fan, J. and H. Peng (2004). Nonconcave penalized likelihood with a diverging number of parameters. Annals of Statistics 32(3), 928–961. 8 Farmer, L. E., L. Schmidt, and A. Timmermann (2023). Pockets of predictability. The Journal of Finance 78(3), 1279–1341. 16 Faust, J. and J. H. Wright (2013). Handbook of Economic Forecasting, Volume 2, Chapter Forecasting Inflation, pp. 2–56. 3 Gao, Y., X. Zhang, S. Wang, and G. Zou (2016). 
Model averaging based on leave-subject- out cross-validation. Journal of Econometrics 192, 139–151. 2 Giraitis, L., G. Kapetanios, and M. Marcellino (2021). Time-varying instrumental variable estimation. Journal of Econometrics 224(2), 394–415. 12 Giraitis, L., G. Kapetanios, and T. Yates (2014). Inference on stochastic time-varying coef- ficient models. Journal of Econometrics 179(1), 46–65. 12 Hansen, B. E. (2010). Multi-step forecast model selection. Working Paper. 5 20 Hansen, B. E. and J. Racine (2012). Jackknife model averaging. Journal of Economet- rics 167(1), 38–46. 2, 8, 9 Härdle, W. and O. B. Linton (1994). Handbook of Econometrics, Volume 4, Chapter Applied nonparametric methods, pp. 2295–2339. 12 Hart, J. D. (1994). Automated kernel smoothing of dependent data by using time series cross-validation. Journal of the Royal Statistical Society. Series B 56(3), 529–542. 12 Huang, J. Z., C. O. Wu, and L. Zhou (2002). Varying-coefficient models and basis function approximations for the analysis of repeated measurements. Biometrika 89(1), 111–128. 12, 16 Huang, J. Z., C. O. Wu, and L. Zhou (2004). Polynomial spline estimation and inference for varying coefficient models with longitudinal data. Statistica Sinica 14(763-788). 12, 16 Inoue, A., L. Jin, and B. Rossi (2017). Rolling window selection for out-of-sample forecast- ing with time-varying parameters. Journal of Econometrics 196(1), 55–67. 15, 16 Jordà, Ò. (2005). Estimation and inference of impulse responses by local projections. American Economic Review 95(1), 161–182. 2 Lahiri, S. (2003). Resampling Methods for Dependent Data. New York: Springer. 8 Leeb, H. and B. M. Pötscher (2005). Model selection and inference: Facts and fiction. Econometric Theory 21(1), 21–59. 18 Leung, D. H.-Y. (2005). Cross-validation in nonparametric regression with outliers. Annals of Statistics 33(5), 2291–2310. 12 Li, K.-C. (1985). From stein’s unbiased risk estimates to the method of generalized cross validation. The Annals of Statistics 13(4), 1352–1377. 12 Li, K.-C. (1987). Asymptotic optimality for cp, cl, cross-validation,and generalized cross- validation: Discrete index set. Annals of Statistics 15(3), 958–975. 2, 8, 9 Lu, X. and L. Su (2015). Jackknife model averaging for quantile regressions. Journal of Econometrics 188(1), 40–58. 8 21 Lütkepohl, H. (2005). New Introduction to Multiple Time Series Analysis. Berlin: Springer- Verlag. 8, 17 Machado, J. A. and J. S. Silva (2019). Quantiles via moments. Journal of Economet- rics 213(1), 145–173. 8 Moreno, E., J. Girón, and G. Casella (2015). Posterior model consistency in variable selec- tion as the model dimension grows. Statistical Science 30(2), 228–241. 2 Newey, W. K. and D. McFadden (1994). Large sample estimation and hypothesis testing, Volume 4, Chapter 36, pp. 2111–2245. Amsterdam: Elsevier. 8 Ng, S. (2013). Handbook of Forecasting, Volume 3, Chapter Variable Selection in Predictive Regressions. 17, 18 Ng, S. and P. Perron (2005). A note on the selection of time series models. Oxford Bulletin of Economics and Statistics 67(1), 115–134. 17 Opsomer, J., Y. Wang, and Y. Yang (2001). Nonparametric regression with correlated errors. Statistical Science 16(2), 134–153. 5 Racine, J. (2000). Consistent cross-validatory model-selection for dependent data: hv- block cross-validation. Journal of Econometrics 99(1), 39–61. 11 Robinson, P. M. (1989). Statistical Analysis and Forecasting of Economic Structural Change, Chapter Nonparametric Estimation of Time-Varying Parameters, pp. 253–264. 
Rossi, B. (2021). Forecasting in the presence of instabilities: How we know whether models predict well and how to improve them. Journal of Economic Literature 59(4), 1135–1190.
Shao, J. (1997). An asymptotic theory for linear model selection. Statistica Sinica 7, 221–264.
Sin, C.-Y. and H. White (1996). Information criteria for selecting possibly misspecified parametric models. Journal of Econometrics 71(1-2), 207–225.
Steel, M. F. (2020). Model averaging and its use in economics. Journal of Economic Literature 58(3), 644–719.
Stock, J. and M. Watson (2007). Introduction to Econometrics. Addison Wesley Longman.
Sun, Y., Y. Hong, T.-H. Lee, S. Wang, and X. Zhang (2021). Time-varying model averaging. Journal of Econometrics 222(2), 974–992.
Wei, C. Z. (1992). On predictive least squares principles. Annals of Statistics 20(1), 1–42.
Welch, I. and A. Goyal (2008). A comprehensive look at the empirical performance of equity premium prediction. The Review of Financial Studies 21(4), 1455–1508.
West, K. D. (1996). Asymptotic inference about predictive ability. Econometrica 64(5), 1067–1084.
West, K. D. and M. W. McCracken (1998). Regression-based tests of predictive ability. International Economic Review 39(4), 817–840.
White, H. (1981). Consequences and detection of misspecified nonlinear regression models. Journal of the American Statistical Association 76(374), 419–433.
White, H. (1982). Maximum likelihood estimation of misspecified models. Econometrica 50(1), 1–25.
White, H. (1994). Estimation, Inference and Specification Analysis. Cambridge University Press.
White, H. (2001). Asymptotic Theory for Econometricians. Emerald Group Publishing Limited.
Yang, Y. (2005). Can the strengths of AIC and BIC be shared? A conflict between model identification and regression estimation. Biometrika 92(4), 937–950.
Yu, D., X. Zhang, and H. Liang (2025). Unified optimal model averaging with a general loss function based on cross-validation. Journal of the American Statistical Association.
Zhang, X., A. T. Wan, and G. Zou (2013). Model averaging by jackknife criterion in models with dependent data. Journal of Econometrics 174(2), 82–94.

Proof of Theorem 4

Proof. The proofs are written for the rolling window case; for the recursive case, one would just replace $R$ with $t_0$. The proof follows along similar lines as the proof of Theorem 1. Note that
$$\ddddot{\varepsilon}_i^{\,2}(\alpha) = \varepsilon_i^2 + 2(\mu_i - \ddddot{\mu}_i(\alpha))\varepsilon_i + (\mu_i - \ddddot{\mu}_i(\alpha))^2,$$
which implies
$$\frac{1}{T-R}\sum_{i=R+1}^{T}\ddddot{\varepsilon}_i^{\,2}(\alpha) = \frac{1}{T-R}\sum_{i=R+1}^{T}\varepsilon_i^2 + \frac{1}{T-R}\sum_{i=R+1}^{T}2\mu_i\varepsilon_i - \frac{1}{T-R}\sum_{i=R+1}^{T}2\ddddot{\mu}_i(\alpha)\varepsilon_i + \ddddot{L}_T(\alpha).$$
To prove asymptotic optimality, it is sufficient to show that $\frac{1}{T-R}\sum_{i=R+1}^{T}2\mu_i\varepsilon_i \overset{p}{\to} 0$ and $\frac{1}{T-R}\sum_{i=R+1}^{T}2\ddddot{\mu}_i(\alpha)\varepsilon_i \overset{p}{\to} 0$. Again, the convergence $\frac{1}{T-R}\sum_{i=R+1}^{T}2\mu_i\varepsilon_i \overset{p}{\to} 0$ follows from the same arguments used in Theorem 1. To show $\frac{1}{T-R}\sum_{i=R+1}^{T}2\ddddot{\mu}_i(\alpha)\varepsilon_i \overset{p}{\to} 0$, note that
$$\frac{1}{T-R}\sum_{i=R+1}^{T}2\ddddot{\mu}_i(\alpha)\varepsilon_i = \Big[\frac{1}{T-R}\sum_{i=R+1}^{T}2x_i(\alpha)\varepsilon_i\Big]\hat{\beta}(\alpha) + \frac{1}{T-R}\sum_{i=R+1}^{T}2x_i(\alpha)\big(\ddddot{\beta}_i(\alpha)-\hat{\beta}(\alpha)\big)\varepsilon_i,$$
where the first term converges to zero in probability by the exogeneity condition (Assumption 1(a)), an appropriate LLN (Assumption 1(d)), and, in the infinite dimensional case, Assumption 1(e). To show that the second term converges to zero, note that
$$E\,\Big\|\frac{1}{T-R}\sum_{i=R+1}^{T}2x_i(\alpha)\big(\ddddot{\beta}_i(\alpha)-\hat{\beta}(\alpha)\big)\varepsilon_i\Big\| \le \frac{1}{T-R}\sum_{i=R+1}^{T}\Big(E\,\big\|2x_i(\alpha)\big(\ddddot{\beta}_i(\alpha)-\hat{\beta}(\alpha)\big)\big\|^2\Big)^{1/2}\big(E(\varepsilon_i^2)\big)^{1/2} = o_p(1),$$
where $(E(\varepsilon_i^2))^{1/2}$ is bounded and the remaining average vanishes, due to the triangle inequality, the Cauchy-Schwarz inequality, Assumption 1(a), and Assumption 4(a), thus proving that $\frac{1}{T-R}\sum_{i=R+1}^{T}2\ddddot{\mu}_i(\alpha)\varepsilon_i \overset{p}{\to} 0$.
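The decomposition at the start of this proof is an algebraic identity and can be checked numerically. The following is a minimal sketch, not part of the formal argument; the simulation design (a two-regressor linear model with i.i.d. Gaussian errors), the window size, and all variable names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
T, R = 500, 100                        # sample size and rolling window (illustrative)
X = np.column_stack([np.ones(T), rng.normal(size=T)])
mu = X @ np.array([1.0, 0.5])          # true conditional mean
eps = rng.normal(size=T)               # true residuals
y = mu + eps

# Rolling-window OLS: beta for period i is estimated on observations i-R,...,i-1
mu_roll = np.full(T, np.nan)
for i in range(R, T):
    b = np.linalg.lstsq(X[i - R:i], y[i - R:i], rcond=None)[0]
    mu_roll[i] = X[i] @ b

idx = np.arange(R, T)                  # forecasted periods
e_roll = y[idx] - mu_roll[idx]         # pseudo-out-of-sample residuals

# Identity from the proof: mean(e_roll^2) equals
# mean(eps^2) + 2*mean(mu*eps) - 2*mean(mu_roll*eps) + L_T (the rolling IMSE)
L_T = np.mean((mu[idx] - mu_roll[idx]) ** 2)
lhs = np.mean(e_roll ** 2)
rhs = (np.mean(eps[idx] ** 2) + 2 * np.mean(mu[idx] * eps[idx])
       - 2 * np.mean(mu_roll[idx] * eps[idx]) + L_T)
print(lhs, rhs)                        # agree up to floating-point error
```

Because the identity holds observation by observation, the two printed quantities agree for any realization; the content of the proof is that the two cross terms vanish asymptotically, so that the ranking of candidate models is driven by $\ddddot{L}_T(\alpha)$ alone.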
Proof of Theorem 5

Proof. The proofs are written for the rolling window case; for the recursive case, one would just replace $R$ with $t_0$. Note that
$$\frac{1}{T-R}\sum_{i=R+1}^{T}\ddddot{\varepsilon}_i^{\,2}(\alpha) = \frac{1}{T-R}\sum_{i=R+1}^{T}\varepsilon_i^2 + \frac{1}{T-R}\sum_{i=R+1}^{T}2\mu_i\varepsilon_i - \frac{1}{T-R}\sum_{i=R+1}^{T}2\ddddot{\mu}_i(\alpha)\varepsilon_i + \ddddot{L}_T(\alpha).$$
To prove asymptotic optimality, it is sufficient to show that $\frac{1}{T-R}\sum_{i=R+1}^{T}2\mu_i\varepsilon_i \overset{p}{\to} 0$ and $\frac{1}{T-R}\sum_{i=R+1}^{T}2\ddddot{\mu}_i(\alpha)\varepsilon_i \overset{p}{\to} 0$. Again, the first convergence follows from the same arguments used in Theorem 1, and the second follows from the same argument used in the proof of Theorem 2, except that $(\tilde{\beta}_{-i}(\alpha)-\hat{\beta}_i(\alpha))$ is replaced with $(\ddddot{\beta}_i(\alpha)-\hat{\beta}_i(\alpha))$.

Lemma 1. Assume Assumption 1(b) holds. In addition, assume either $\|\mu_i\|$ or $\|\mu_i-\hat{\mu}_i(\alpha)\|$ is bounded above for all $i$ and $\alpha$. Then $\big|\tilde{L}_T(\alpha)/L_T(\alpha)-1\big| \overset{p}{\to} 0$ for all $\alpha \in A_T$.

Proof. The proof is shown in two cases: the first under $\|\mu_i\| < \infty$ and the second under $\|\mu_i-\hat{\mu}_i(\alpha)\| < \infty$. Under case 1, note that
$$\tilde{L}_T(\alpha)-L_T(\alpha) = \frac{1}{T}\sum_{i=1}^{T}(\mu_i-\tilde{\mu}_{-i}(\alpha))^2 - \frac{1}{T}\sum_{i=1}^{T}(\mu_i-\hat{\mu}_i(\alpha))^2 = \frac{1}{T}\sum_{i=1}^{T}2\mu_i\big(\hat{\mu}_i(\alpha)-\tilde{\mu}_{-i}(\alpha)\big) + \frac{1}{T}\sum_{i=1}^{T}\big(\tilde{\mu}_{-i}(\alpha)^2-\hat{\mu}_i(\alpha)^2\big).$$
To complete the proof for case 1, it is sufficient to show that the expected norm of the right hand side converges to zero. Note that
$$E\,\Big\|\frac{1}{T}\sum_{i=1}^{T}2\mu_i\big(\hat{\mu}_i(\alpha)-\tilde{\mu}_{-i}(\alpha)\big) + \frac{1}{T}\sum_{i=1}^{T}\big(\tilde{\mu}_{-i}(\alpha)^2-\hat{\mu}_i(\alpha)^2\big)\Big\| \le \frac{1}{T}\sum_{i=1}^{T}\big(E\|2\mu_i\|^2\big)^{1/2}\big(E\|\hat{\mu}_i(\alpha)-\tilde{\mu}_{-i}(\alpha)\|^2\big)^{1/2} + \frac{1}{T}\sum_{i=1}^{T}E\,\big\|\tilde{\mu}_{-i}(\alpha)^2-\hat{\mu}_i(\alpha)^2\big\| = o_p(1)$$
by the triangle inequality, Assumption 1(b), and the assumption that $\|\mu_i\|$ is bounded for all $i$. For case 2, note that
$$(\mu_i-\tilde{\mu}_{-i}(\alpha))^2 = \big([\mu_i-\hat{\mu}_i(\alpha)]+[\hat{\mu}_i(\alpha)-\tilde{\mu}_{-i}(\alpha)]\big)^2 = [\mu_i-\hat{\mu}_i(\alpha)]^2 + 2[\mu_i-\hat{\mu}_i(\alpha)][\hat{\mu}_i(\alpha)-\tilde{\mu}_{-i}(\alpha)] + [\hat{\mu}_i(\alpha)-\tilde{\mu}_{-i}(\alpha)]^2.$$
It follows that
$$\tilde{L}_T(\alpha) = L_T(\alpha) + \frac{1}{T}\sum_{i=1}^{T}2[\mu_i-\hat{\mu}_i(\alpha)][\hat{\mu}_i(\alpha)-\tilde{\mu}_{-i}(\alpha)] + \frac{1}{T}\sum_{i=1}^{T}[\hat{\mu}_i(\alpha)-\tilde{\mu}_{-i}(\alpha)]^2.$$
To show $\big|\tilde{L}_T(\alpha)/L_T(\alpha)-1\big| \overset{p}{\to} 0$, it is sufficient to show that the second and third terms on the right hand side converge to zero in probability. Convergence of the third term follows from Assumption 1(b). For the second term, it is sufficient to show that its expected norm converges to zero. Note that
$$E\,\Big\|\frac{1}{T}\sum_{i=1}^{T}2[\mu_i-\hat{\mu}_i(\alpha)][\hat{\mu}_i(\alpha)-\tilde{\mu}_{-i}(\alpha)]\Big\| \le \frac{1}{T}\sum_{i=1}^{T}E\Big(\big\|2[\mu_i-\hat{\mu}_i(\alpha)]\big\|\,\big\|\hat{\mu}_i(\alpha)-\tilde{\mu}_{-i}(\alpha)\big\|\Big) = o_p(1)$$
by the triangle inequality, the assumption that $\|\mu_i-\hat{\mu}_i(\alpha)\|$ is bounded above for all $i$ and $\alpha$, and Assumption 1(b).
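As a numerical illustration of the Lemma's conclusion (a sketch under assumed, illustrative simulation settings, with OLS standing in for a generic estimator), the ratio $\tilde{L}_T(\alpha)/L_T(\alpha)$ can be computed directly and watched approach one as $T$ grows:

```python
import numpy as np

rng = np.random.default_rng(1)
for T in (100, 400, 1600):
    X = np.column_stack([np.ones(T), rng.normal(size=T)])
    mu = X @ np.array([1.0, 0.5])                # true conditional mean
    y = mu + rng.normal(size=T)

    b_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    mu_hat = X @ b_hat                           # full-sample fit
    mu_loo = np.empty(T)
    for i in range(T):                           # leave-one-out fits
        keep = np.delete(np.arange(T), i)
        b_i = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
        mu_loo[i] = X[i] @ b_i

    L_T = np.mean((mu - mu_hat) ** 2)            # full-sample IMSE
    L_T_tilde = np.mean((mu - mu_loo) ** 2)      # leave-one-out IMSE
    print(T, L_T_tilde / L_T)                    # ratio tends to 1 as T grows
```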
Remark 2. The proof of Lemma 1 also applies for the rolling window and recursive cases, for both the constant parameter and time-varying cases. One just needs to replace $\tilde{\beta}_{-i}(\alpha)$, $\tilde{\mu}_{-i}(\alpha)$, and $\tilde{L}_T(\alpha)$ with $\ddddot{\beta}_i(\alpha)$, $\ddddot{\mu}_i(\alpha)$, and $\ddddot{L}_T(\alpha)$, respectively, and use the corresponding assumptions in Assumptions 4 and 5.
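In the spirit of Remark 2, the same kind of check goes through with the leave-out fit swapped for a pseudo-out-of-sample fit. A minimal sketch of the recursive (expanding-window) estimator $\ddddot{\mu}_i(\alpha)$ follows; the function name and the use of OLS are illustrative assumptions:

```python
import numpy as np

def recursive_forecasts(y, X, t0):
    """Recursive (expanding-window) OLS forecasts: the coefficient for
    period i is estimated from observations 1,...,i-1, so forecasts are
    produced for i = t0+1,...,T (0-based indices t0,...,T-1 here)."""
    T = len(y)
    mu_rec = np.full(T, np.nan)
    for i in range(t0, T):
        b = np.linalg.lstsq(X[:i], y[:i], rcond=None)[0]
        mu_rec[i] = X[i] @ b
    return mu_rec
```

A candidate model $\alpha$ would then be scored by the pseudo-out-of-sample MSE, np.mean((y[t0:] - recursive_forecasts(y, X_alpha, t0)[t0:]) ** 2), with the minimizer selected, exactly as in the recursive criterion of Section 3.3.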
Since there are an infinite number of ways to construct a consistent model selection procedures and they are asymptotically optimal under the conditions stated in this paper, the literature currently cannot distinguish between procedures within this class from an asymptotic standpoint. 29An important topic of research in the literature that is beyond the scope of this paper is taking into account the impact model selection has on subsequent inference (see for example Leeb and Pötscher (2005)). 18 References Andrews, D. W. (1991). Asymptotic optimality of generalized cl, cross-validation, and generalized cross-validation in regression with heteroskedastic errors. Journal of Econometrics 47(2-3), 359-377. 2 Andrews, D. W. K. (1993). Tests for parameter instability and structural change with unknown change point. Econometrica 61(4), 821-856. 13 Arlot, S. and A. Celisse (2010). A survey of cross-validation procedures for model selection. Statistical Survey 4, 40-79. 5 Bai, J. and P. Perron (1998). Estimating and testing linear models with multiple structural changes. Econometrica 66(1), 47-78. 14 Bates, S., T. Hastie, and R. Tibshirani (2024). Cross-validation: What does it estimate and how well does it do it? Journal of the American Statistical Association 119(546), 1434-1445. 2 Burman, P., E. Chow, and D. Nolan (1994). A cross-validatory method for dependent data. Biometrika 81(2), 351-358. 5, 11 Burnham, K. P. and D. R. Anderson (2004). Multimodel inference understanding aic and bic in model selection. Sociological Methods & Research 33(2), 261 - 304. 17 Cai, Z. (2007). Trending time-varying coefficient time series models with serially correlated errors. Journal of Econometrics 136(1), 163-188. 12 Cai, Z. and T. Juhl (2023). The distribution of rolling regression estimators. Journal of Econometrics 235(2), 1447-1463. 16 Chen, B. and Y. Hong (2012). Testing for smooth structural changes in time series models via nonparametric regression. Econometrica 80(3), 1157-1183. 12 Chen, X. (2007). Handbook of Econometrics, Volume 6, Chapter LARGE SAMPLE SIEVE ESTIMATION OF SEMI-NONPARAMETRIC MODELS. 8 Chetverikov, D., Z. Liao, and V. Chernozhukov (2021). On cross-validated lasso in high dimensions. Annals of Statistics 49(3), 1300-1317. 3 19 Chib, S. and T. A. Kuffner (2016). Bayes factor consistency. Working Paper. 2 Claeskens, G. (2016). Statistical model choice. Annual Review of Statistics and Its Application 3, 233-256. 3, 17 Clark, T. E. and M. W. McCracken (2009). Improving forecast accuracy by combining recursive and rolling forecasts. International Economic Review 50(2), 363- 395. 5, 15 Dahlhaus, R. (2012). Handbook of Statistics, Volume 30, Chapter Locally Stationary Processes. 12 Davidson, J. (2022). Stochastic Limit Theory: An Introduction for Econometricians (2nd ed.). Oxford University Press. 8 Ding, J., V. Tarokh, and Y. Yang (2018). Model selection techniques-an overview. IEEE Signal Processing Magazine 35(6), 16-34. 3, 17 Fan, J. and I. Gijbels (1996). Local Polynomial Modelling and Its Applications. CHAPMAN & HALL/CRC. 12 Fan, J. and H. Peng (2004). Nonconcave penalized likelihood with a diverging number of parameters. Annals of Statistics 32(3), 928-961. 8 Farmer, L. E., L. Schmidt, and A. Timmermann (2023). Pockets of predictability. The Journal of Finance 78(3), 1279-1341. 16 Faust, J. and J. H. Wright (2013). Handbook of Economic Forecasting, Volume 2, Chapter Forecasting Inflation, pp. 2-56. 3 Gao, Y., X. Zhang, S. Wang, and G. Zou (2016). 
Model averaging based on leave-subjectout cross-validation. Journal of Econometrics 192, 139-151. 2 Giraitis, L., G. Kapetanios, and M. Marcellino (2021). Time-varying instrumental variable estimation. Journal of Econometrics 224(2), 394-415. 12 Giraitis, L., G. Kapetanios, and T. Yates (2014). Inference on stochastic time-varying coefficient models. Journal of Econometrics 179(1), 46-65. 12 Hansen, B. E. (2010). Multi-step forecast model selection. Working Paper. 5 20 Hansen, B. E. and J. Racine (2012). Jackknife model averaging. Journal of Econometrics 167(1), 38-46. 2, 8, 9 Härdle, W. and O. B. Linton (1994). Handbook of Econometrics, Volume 4, Chapter Applied nonparametric methods, pp. 2295-2339. 12 Hart, J. D. (1994). Automated kernel smoothing of dependent data by using time series cross-validation. Journal of the Royal Statistical Society. Series B 56(3), 529-542. 12 Huang, J. Z., C. O. Wu, and L. Zhou (2002). Varying-coefficient models and basis function approximations for the analysis of repeated measurements. Biometrika 89(1), 111-128. 12, 16 Huang, J. Z., C. O. Wu, and L. Zhou (2004). Polynomial spline estimation and inference for varying coefficient models with longitudinal data. Statistica Sinica 14(763-788). 12, 16 Inoue, A., L. Jin, and B. Rossi (2017). Rolling window selection for out-of-sample forecasting with time-varying parameters. Journal of Econometrics 196(1), 55-67. 15, 16 Jordà, Ò. (2005). Estimation and inference of impulse responses by local projections. American Economic Review 95(1), 161-182. 2 Lahiri, S. (2003). Resampling Methods for Dependent Data. New York: Springer. 8 Leeb, H. and B. M. Pötscher (2005). Model selection and inference: Facts and fiction. Econometric Theory 21(1), 21-59. 18 Leung, D. H.-Y. (2005). Cross-validation in nonparametric regression with outliers. Annals of Statistics 33(5), 2291-2310. 12 Li, K.-C. (1985). From stein's unbiased risk estimates to the method of generalized cross validation. The Annals of Statistics 13(4), 1352-1377. 12 Li, K.-C. (1987). Asymptotic optimality for cp, cl, cross-validation,and generalized crossvalidation: Discrete index set. Annals of Statistics 15(3), 958-975. 2, 8, 9 Lu, X. and L. Su (2015). Jackknife model averaging for quantile regressions. Journal of Econometrics 188(1), 40-58. 8 21 Lütkepohl, H. (2005). New Introduction to Multiple Time Series Analysis. Berlin: SpringerVerlag. 8, 17 Machado, J. A. and J. S. Silva (2019). Quantiles via moments. Journal of Econometrics 213(1), 145-173. 8 Moreno, E., J. Girón, and G. Casella (2015). Posterior model consistency in variable selection as the model dimension grows. Statistical Science 30(2), 228-241. 2 Newey, W. K. and D. McFadden (1994). Large sample estimation and hypothesis testing, Volume 4, Chapter 36, pp. 2111-2245. Amsterdam: Elsevier. 8 Ng, S. (2013). Handbook of Forecasting, Volume 3, Chapter Variable Selection in Predictive Regressions. 17, 18 Ng, S. and P. Perron (2005). A note on the selection of time series models. Oxford Bulletin of Economics and Statistics 67(1), 115-134. 17 Opsomer, J., Y. Wang, and Y. Yang (2001). Nonparametric regression with correlated errors. Statistical Science 16(2), 134-153. 5 Racine, J. (2000). Consistent cross-validatory model-selection for dependent data: hvblock cross-validation. Journal of Econometrics 99(1), 39-61. 11 Robinson, P. M. (1989). Statistical Analysis and Forecasting of Economic Structural Change, Chapter Nonparametric Estimation of Time-Varying Parameters, pp. 253-264. Berlin, Heidelberg.: Springer. 
Proof of Theorem 4

Proof. The proofs are written for the rolling window case; for the recursive case, one would simply replace $R$ with $t_0$. The proof follows along similar lines as the proof of Theorem 1. Note that
$$\ddot\varepsilon_i^2(\alpha) = \varepsilon_i^2 + 2\big(\mu_i - \ddot\mu_i(\alpha)\big)\varepsilon_i + \big(\mu_i - \ddot\mu_i(\alpha)\big)^2,$$
which implies
$$\frac{1}{T-R}\sum_{i=R+1}^{T} \ddot\varepsilon_i^2(\alpha) = \frac{1}{T-R}\sum_{i=R+1}^{T} \varepsilon_i^2 + \frac{1}{T-R}\sum_{i=R+1}^{T} 2\mu_i\varepsilon_i - \frac{1}{T-R}\sum_{i=R+1}^{T} 2\ddot\mu_i(\alpha)\varepsilon_i + \ddot L_T(\alpha).$$
To prove asymptotic optimality, it is sufficient to show that $\frac{1}{T-R}\sum_{i=R+1}^{T} 2\mu_i\varepsilon_i \xrightarrow{p} 0$ and $\frac{1}{T-R}\sum_{i=R+1}^{T} 2\ddot\mu_i(\alpha)\varepsilon_i \xrightarrow{p} 0$. The first convergence follows from the same arguments used in Theorem 1.
To show $\frac{1}{T-R}\sum_{i=R+1}^{T} 2\ddot\mu_i(\alpha)\varepsilon_i \xrightarrow{p} 0$, note that
$$\frac{1}{T-R}\sum_{i=R+1}^{T} 2\ddot\mu_i(\alpha)\varepsilon_i = \underbrace{\Big[\frac{1}{T-R}\sum_{i=R+1}^{T} 2x_i(\alpha)\varepsilon_i\Big]\hat\beta(\alpha)}_{\xrightarrow{p}\, 0} + \underbrace{\Big[\frac{1}{T-R}\sum_{i=R+1}^{T} 2x_i(\alpha)\big(\ddot\beta_i(\alpha) - \hat\beta(\alpha)\big)\varepsilon_i\Big]}_{\xrightarrow{p}\, 0},$$
where the first term converges to zero by the exogeneity condition (Assumption 1(a)), an appropriate LLN (Assumption 1(d)), and, in the infinite-dimensional case, Assumption 1(e). To show that the second term converges to zero, note that
$$E\Big\|\frac{1}{T-R}\sum_{i=R+1}^{T} 2x_i(\alpha)\big(\ddot\beta_i(\alpha)-\hat\beta(\alpha)\big)\varepsilon_i\Big\| \le \frac{1}{T-R}\sum_{i=R+1}^{T} \big(E\|2x_i(\alpha)(\ddot\beta_i(\alpha)-\hat\beta(\alpha))\|^2\big)^{1/2}\,\underbrace{\big(E(\varepsilon_i^2)\big)^{1/2}}_{\text{bounded}} \le \text{const}\cdot\underbrace{\frac{1}{T-R}\sum_{i=R+1}^{T} \big(E\|2x_i(\alpha)(\ddot\beta_i(\alpha)-\hat\beta(\alpha))\|^2\big)^{1/2}}_{\to\, 0} = o_p(1)$$
due to the triangle inequality, the Cauchy-Schwarz inequality, Assumption 1(a), and Assumption 4(a), thus proving that $\frac{1}{T-R}\sum_{i=R+1}^{T} 2\ddot\mu_i(\alpha)\varepsilon_i \xrightarrow{p} 0$.

Proof of Theorem 5

Proof. The proofs are written for the rolling window case; for the recursive case, one would simply replace $R$ with $t_0$. Note that
$$\frac{1}{T-R}\sum_{i=R+1}^{T} \ddot\varepsilon_i^2(\alpha) = \frac{1}{T-R}\sum_{i=R+1}^{T} \varepsilon_i^2 + \frac{1}{T-R}\sum_{i=R+1}^{T} 2\mu_i\varepsilon_i - \frac{1}{T-R}\sum_{i=R+1}^{T} 2\ddot\mu_i(\alpha)\varepsilon_i + \ddot L_T(\alpha).$$
To prove asymptotic optimality, it is sufficient to show that $\frac{1}{T-R}\sum_{i=R+1}^{T} 2\mu_i\varepsilon_i \xrightarrow{p} 0$ and $\frac{1}{T-R}\sum_{i=R+1}^{T} 2\ddot\mu_i(\alpha)\varepsilon_i \xrightarrow{p} 0$. The first convergence follows from the same arguments used in Theorem 1. The second follows the same argument used in the proof of Theorem 2, except that $(\tilde\beta_{-i}(\alpha) - \hat\beta_i(\alpha))$ is replaced with $(\ddot\beta_i(\alpha) - \hat\beta_i(\alpha))$.

Lemma 1. Assume Assumption 1(b) holds. In addition, assume that either $\|\mu_i\|$ or $\|\mu_i - \hat\mu_i(\alpha)\|$ is bounded above for all $i$ and $\alpha$. Then $\big|\tilde L_T(\alpha)/L_T(\alpha) - 1\big| \xrightarrow{p} 0$ for all $\alpha \in \mathcal{A}_T$.

Proof. The proof proceeds in two cases: first under $\|\mu_i\| < \infty$ and second under $\|\mu_i - \hat\mu_i(\alpha)\| < \infty$. Under case 1, note that
$$\tilde L_T(\alpha) - L_T(\alpha) = \frac{1}{T}\sum_{i=1}^{T} \big(\mu_i - \tilde\mu_{-i}(\alpha)\big)^2 - \frac{1}{T}\sum_{i=1}^{T} \big(\mu_i - \hat\mu_i(\alpha)\big)^2 = \frac{1}{T}\sum_{i=1}^{T} 2\big(\mu_i\hat\mu_i(\alpha) - \mu_i\tilde\mu_{-i}(\alpha)\big) + \frac{1}{T}\sum_{i=1}^{T} \big(\tilde\mu_{-i}(\alpha)^2 - \hat\mu_i(\alpha)^2\big) = \frac{1}{T}\sum_{i=1}^{T} 2\mu_i\big(\hat\mu_i(\alpha) - \tilde\mu_{-i}(\alpha)\big) + \frac{1}{T}\sum_{i=1}^{T} \big(\tilde\mu_{-i}(\alpha)^2 - \hat\mu_i(\alpha)^2\big).$$
To complete the proof for case 1, it is sufficient to show
$$E\Big\|\frac{1}{T}\sum_{i=1}^{T} 2\mu_i\big(\hat\mu_i(\alpha) - \tilde\mu_{-i}(\alpha)\big) + \frac{1}{T}\sum_{i=1}^{T} \big(\tilde\mu_{-i}(\alpha)^2 - \hat\mu_i(\alpha)^2\big)\Big\| \to 0.$$
Note that
$$E\Big\|\frac{1}{T}\sum_{i=1}^{T} 2\mu_i\big(\hat\mu_i(\alpha) - \tilde\mu_{-i}(\alpha)\big) + \frac{1}{T}\sum_{i=1}^{T} \big(\tilde\mu_{-i}(\alpha)^2 - \hat\mu_i(\alpha)^2\big)\Big\| \le \frac{1}{T}\sum_{i=1}^{T} \underbrace{\big(E\|2\mu_i\|^2\big)^{1/2}}_{\text{bounded}}\,\underbrace{\big(E\|\hat\mu_i(\alpha) - \tilde\mu_{-i}(\alpha)\|^2\big)^{1/2}}_{\to\, 0} + \frac{1}{T}\sum_{i=1}^{T} \underbrace{E\big\|\tilde\mu_{-i}(\alpha)^2 - \hat\mu_i(\alpha)^2\big\|}_{\to\, 0} = o(1)$$
by the triangle inequality, Assumption 1(b), and the assumption that $\|\mu_i\|$ is bounded for all $i$.

For case 2, note that
$$\big(\mu_i - \tilde\mu_{-i}(\alpha)\big)^2 = \big([\mu_i - \hat\mu_i(\alpha)] + [\hat\mu_i(\alpha) - \tilde\mu_{-i}(\alpha)]\big)^2 = [\mu_i - \hat\mu_i(\alpha)]^2 + 2[\mu_i - \hat\mu_i(\alpha)][\hat\mu_i(\alpha) - \tilde\mu_{-i}(\alpha)] + [\hat\mu_i(\alpha) - \tilde\mu_{-i}(\alpha)]^2.$$
It follows that
$$\tilde L_T(\alpha) = L_T(\alpha) + \frac{1}{T}\sum_{i=1}^{T} 2[\mu_i - \hat\mu_i(\alpha)][\hat\mu_i(\alpha) - \tilde\mu_{-i}(\alpha)] + \frac{1}{T}\sum_{i=1}^{T} [\hat\mu_i(\alpha) - \tilde\mu_{-i}(\alpha)]^2.$$
To show $\big|\tilde L_T(\alpha)/L_T(\alpha) - 1\big| \xrightarrow{p} 0$, it is sufficient to show that the second and third terms on the right-hand side converge to zero in probability. Convergence of the third term follows from Assumption 1(b). For the second term, it is sufficient to show that
$$E\Big\|\frac{1}{T}\sum_{i=1}^{T} 2[\mu_i - \hat\mu_i(\alpha)][\hat\mu_i(\alpha) - \tilde\mu_{-i}(\alpha)]\Big\| \to 0.$$
Note that
$$E\Big\|\frac{1}{T}\sum_{i=1}^{T} 2[\mu_i - \hat\mu_i(\alpha)][\hat\mu_i(\alpha) - \tilde\mu_{-i}(\alpha)]\Big\| \le \frac{1}{T}\sum_{i=1}^{T} E\Big(\underbrace{\|2[\mu_i - \hat\mu_i(\alpha)]\|}_{\text{bounded}}\,\|\hat\mu_i(\alpha) - \tilde\mu_{-i}(\alpha)\|\Big) \le \text{const}\cdot\underbrace{\frac{1}{T}\sum_{i=1}^{T} E\|\hat\mu_i(\alpha) - \tilde\mu_{-i}(\alpha)\|}_{\to\, 0} = o(1),$$
by the triangle inequality, the assumption that $\|\mu_i - \hat\mu_i(\alpha)\|$ is bounded above for all $i$ and $\alpha$, and Assumption 1(b).

Remark 2. The proof of Lemma 1 also applies to the rolling window and recursive cases, for both the constant-parameter and time-varying cases. One just needs to replace $\tilde\beta_{-i}(\alpha)$, $\tilde\mu_{-i}(\alpha)$, and $\tilde L_T(\alpha)$ with $\ddot\beta_i(\alpha)$, $\ddot\mu_i(\alpha)$, and $\ddot L_T(\alpha)$, respectively, and use the corresponding assumptions in Assumptions 4 and 5.
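To make the rolling-window objects appearing in these proofs concrete, the following is a minimal numerical sketch of the criterion $\frac{1}{T-R}\sum_{i=R+1}^{T}\ddot\varepsilon_i^2(\alpha)$ for a linear candidate model re-estimated on each window. All names are illustrative rather than the paper's code, and the estimator and loss are assumptions (window OLS, squared error).

import numpy as np

def rolling_cv(y, X, cols, R):
    """Average squared pseudo-out-of-sample residual for the candidate
    model using columns `cols`, re-estimated on a rolling window of size R."""
    T = len(y)
    sq_resid = []
    for i in range(R, T):
        Xw, yw = X[i - R:i][:, cols], y[i - R:i]        # rolling estimation window
        beta, *_ = np.linalg.lstsq(Xw, yw, rcond=None)  # window OLS estimate
        e = y[i] - X[i, cols] @ beta                    # one-step-ahead residual
        sq_resid.append(e ** 2)
    return np.mean(sq_resid)  # criterion to be minimized over candidates

Minimizing this quantity over the candidate set (the index $\alpha$ above) is what the asymptotic-optimality results justify; for the recursive case, the window start would be fixed at 0 instead of sliding.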
arXiv:2510.14825
PROGRAMMATIC REPRESENTATION LEARNING WITH LANGUAGE MODELS

Gabriel Poesia∗, Kempner Institute at Harvard University, gabriel poesia@fas.harvard.edu
Georgia Gabriela Sampaio∗, Stanford University, gsamp@stanford.edu
∗Authors contributed equally. Code available at https://github.com/gpoesia/leapr/

ABSTRACT

Classical models for supervised machine learning, such as decision trees, are efficient and interpretable predictors, but their quality is highly dependent on the particular choice of input features. Although neural networks can learn useful representations directly from raw data (e.g., images or text), this comes at the expense of interpretability and the need for specialized hardware to run them efficiently. In this paper, we explore a hypothesis class we call Learned Programmatic Representations (LeaPR) models, which stack arbitrary features represented as code (functions from data points to scalars) and decision tree predictors. We synthesize feature functions using Large Language Models (LLMs), which have rich prior knowledge in a wide range of domains and a remarkable ability to write code using existing domain-specific libraries. We propose two algorithms to learn LeaPR models from supervised data. First, we design an adaptation of FunSearch to learn features rather than directly generate predictors. Then, we develop a novel variant of the classical ID3 algorithm for decision tree learning, where new features are generated on demand when splitting leaf nodes. In experiments from chess position evaluation to image and text classification, our methods learn high-quality, neural network-free predictors often competitive with neural networks. Our work suggests a flexible paradigm for learning interpretable representations end-to-end, where features and predictions can be readily inspected and understood.

1 INTRODUCTION

The central problem in supervised machine learning is to find a predictor h : X → Y in a hypothesis class H that minimizes a certain risk function R(h), such as 0-1 error in classification or mean-squared error in regression (Michalski et al., 2013). Classical choices for H include linear models, decision trees, and ensembles thereof, which are compellingly simple to understand and debug, and are both compute- and data-efficient. However, their effectiveness is highly limited in domains with low-level, high-dimensional inputs, such as images or text. For these domains, high-quality models are often best learned by first constructing a high-level representation of an input x ∈ X using a set of features fi : X → R that yield a higher-level encoding of the input that predictors can then rely on. While this offers great flexibility, in practice the effort and domain expertise required to engineer a good set of features for a particular learning task severely limits the quality of models that can be obtained with classical predictors in high-dimensional input domains without extensive human effort (Dong & Liu, 2018; Cheng & Camargo, 2023).

A remarkably successful paradigm that avoids the need for hand-designed feature engineering is deep learning, where H is set to a parameterized family of neural networks of a domain-appropriate architecture. The core advantage of deep learning is the ability of gradient-based optimization to automatically learn useful representations from raw data (Bengio & LeCun, 2007; Damian et al., 2022). Indeed, deep neural networks can be seen as computing a set of complex, non-linear neural features, then applying a simple predictor on top (e.g., the last fully-connected layer, corresponding to a linear model).
However, despite being highly effective for prediction, neural features have several drawbacks. First, deep neural networks are highly data intensive, and their ability to generalize drops drastically when in-domain data are scarce (Wang et al., 2023). Second, neural features are not easily interpretable: analyses of large-scale neural networks typically only find a fraction of neurons that seem to encode human-aligned concepts (Huben et al., 2023). This limits the potential of neural models to provide faithful explanations for their predictions (e.g., express why a given x is being classified as y), which are important when experts rely on learned models to support high-stakes decision-making (Doshi-Velez & Kim, 2017).

[Figure 1 here: pipeline from training data through a learning algorithm (FunSearch or Dynamic ID3) to programmatic features, e.g., def feature(board: chess.Board) -> float: "Count backward pawns (white - black)", combined with a decision tree.]
Figure 1: Learned Programmatic Representation models combine programmatic features, synthesized by LLMs as code, and decision tree predictors, yielding interpretable models. We give two algorithms for learning them end-to-end from supervised data in high-dimensional input domains.

In this paper, we seek to investigate alternative paradigms for representation learning that, like deep learning, do not require manual feature engineering from users and, like classical methods, yield interpretable, fast, and efficiently-learnable predictors. To that end, we propose learning LeaPR (Learned Programmatic Representation) models: a class of predictors with programmatic features paired with decision tree predictors, as illustrated in Figure 1. We leverage Large Language Models' (LLMs) ability to generate code using domain-specific libraries to encode features as arbitrary Python functions from the input domain into the reals: these functions are generated during training with the goal of improving the empirical risk of the current predictor.

For learning from supervised data, we propose two alternative methods. First, we explore a variant of the FunSearch method (Romera-Paredes et al., 2024) called F2 (for Features FunSearch). In F2, an LLM iteratively generates functions that compute features from the input domain, which are then evaluated by training a Random Forest (Breiman, 2001) predictor and extracting importance weights. These scores are used in-context to steer the LLM to generate features with high importance that are thus as predictive as possible. F2 is a black-box procedure with respect to the underlying classic predictor, leveraging an off-the-shelf Random Forest training method.

We further introduce a white-box training procedure for LeaPR models called Dynamic ID3, or D-ID3, inspired by the classical ID3 algorithm for training decision trees. D-ID3 follows an ID3-like training loop where leaf nodes of a growing decision tree are selected and "split" by introducing two new leaves and a decision based on the value of a specific feature. While ID3 operates with pre-existing features, D-ID3 queries an LLM to propose new programmatic features on the fly that can help split the node under consideration. D-ID3 gives the LLM the decision path leading to the node, possibly with concrete examples of training samples in that branch, and asks for novel features that are useful for that specific context.
The vast prior knowledge of modern LLMs about domain-specific Python libraries makes both approaches highly applicable across diverse, high-dimensional input domains. We evaluate both methods on two LLMs across three diverse domains: chess position evaluation, image classification (MNIST (LeCun, 1998) and Fashion-MNIST (Xiao et al., 2017)), and text classification (AI vs. human text classification on the Ghostbuster dataset (Verma et al., 2024)). Across domains, both LeaPR methods learn high-quality neural network-free representations that extract empirically useful features, often spanning tens of thousands of lines of Python code spread across hundreds of functions, all stitched together by decision trees. Moreover, the learned features tend to be highly intuitive yet specific enough to lead to high-quality predictors. Our main contributions are:

• We propose jointly learning programmatic features, represented as LLM-generated functions from the input domain to the reals, together with decision tree predictors, thus obtaining fast, interpretable predictors.
• We introduce F2 and D-ID3, two algorithms for learning LeaPR models, where features are generated on demand during decision tree training.
• We evaluate LeaPR models on three domains: chess positions, images and text, showing comparable accuracy and often favorable data efficiency compared to baseline methods. We analyze the learned features and show how programmatic features can be useful for data exploration and for understanding model failures.

2 RELATED WORK

Code generation with LLMs Our work is enabled by the capability of LLMs to generate code under flexible task specifications, including natural language instructions and examples (Chen et al., 2021). This ability has been explored in a variety of domains, including using code as an intermediate tool for reasoning (Chen et al., 2022) and as an interface for agents to interact with an external environment (Lv et al., 2024). We exploit code generation for synthesizing features from arbitrary input domains, building on LLMs' prior knowledge of a rich set of existing libraries. Feature generation with LLMs has been explored in tabular datasets (Ko et al., 2025; Zhang & Liu, 2024), whereas we explore domains with complex, low-level input.

Black-box code evolution with LLMs LLMs can also be used to optimize a black-box objective function (Lange et al., 2024). This approach was pioneered in FunSearch (Romera-Paredes et al., 2024), the method that inspires our F2 method (Section 3.1). In FunSearch, the LLM is given the type signature of a function to synthesize, and it outputs candidate functions which are scored with a metric unknown to the model. LLMs can then see past candidate function examples and their scores in following rounds (Romera-Paredes et al., 2024), and can mutate and evolve previous attempts to propose new, more successful ones. AlphaEvolve (Novikov et al., 2025) explored this idea with larger models and novel methods to ensure diverse candidates. Our work differs from FunSearch and AlphaEvolve in that LeaPR models use LLM-generated code simply to generate features, and not the full predictor end-to-end. Using a decision tree to combine LLM-generated features allows our method to scale and simultaneously employ thousands of LLM-generated features at once: for instance, a single model we synthesize in our experiments in chess can use up to 50k lines of LLM-generated code to compute features.
We believe our work showcases some of the largest fully AI-generated programs written so far, as a result of the modularity that our framework provides (even ignoring the decision trees, which could also be compiled to nested conditionals).

Learning programmatic world models A modular approach for using LLM-synthesized code has been recently employed in PoE-World (Piriyakulkij et al., 2025) for the problem of learning world models. In PoE-World, LLMs generate a set of small predictors that individually explain a specific behavior of the environment; a separate model combines these predictors with weights learned by gradient descent. This scales better than the monolithic code representations that WorldCoder introduced (Tang et al., 2024). Like LeaPR models, PoE-World can generate world models with thousands of lines since LLMs do not have to unilaterally generate all code at once. We propose a similarly modular architecture aimed toward the general problem of supervised learning.

Mechanistic interpretability of neural networks The widespread deployment of deep neural networks strongly motivated the research community to understand the behavior of neural models (Mu & Andreas, 2021), often through the lens of mechanistic interpretability (Conmy et al., 2023). One key hypothesis is that neural networks have interpretable "circuits" that drive specific behaviors (Marks et al., 2025) or encode learned knowledge (Yao et al., 2025). Prior work has also explored neural architectures that are more amenable to interpretation by construction (Doumbouya et al., 2025; Hewitt et al., 2023), rather than post-hoc. We share this approach of training models that allow for transparent inspection of their behavior, but focusing on a simple composition of programs and decision trees rather than neural networks.

3 LEARNING PROGRAMMATIC REPRESENTATIONS

Classical predictors considered in supervised machine learning, like decision trees, are intrinsically explainable in the sense that their structure is simple enough to warrant human inspection: this is highly desirable when humans might want to trust (e.g., in high-stakes decision making) or learn from (e.g., someone learning to play chess) machine learning models. However, this simplicity comes at a steep cost: the performance of these methods is notoriously impacted by the particular choice of features given as their input.

Consider learning a board evaluation decision tree for the game of chess, for example: a predictor for the likelihood that the player with the white pieces will win in a given board position. During inference, a decision tree proceeds by testing the value of one input variable at a time. If given only the raw information available on the board (e.g., the piece lying on each of the 8x8 squares), each input variable alone carries negligible information about the overall situation in the game. Each leaf node is forced to be invariant to all features that are not tested on the path from the root to that node; thus, if any decision is made before testing all 64 squares, there is often a chance that something critical was missed (e.g., a queen on one of the unobserved squares). Thus, this representation is too low-level for decision trees to be useful, even though all that there is to know about the board is derivable from the input.

On the other hand, if we instead train the model on a single useful feature computed from the board, like the material difference between players (given by a weighted sum of the pieces of each color still on the board, and sketched below), suddenly a very shallow decision tree can easily learn to make predictions that are significantly better than random, since a large material advantage is highly correlated with one's chance of winning. Adding more informative features will progressively allow a decision tree learner to steadily find better predictors.
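As a concrete illustration, here is a minimal sketch of such a material-difference feature, written against the python-chess API that the paper's prompts expose (Appendix C). The conventional 1/3/3/5/9 piece weights are an assumption on our part, not necessarily what a generated feature would use.

import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9}

def feature(board: chess.Board) -> float:
    "Material balance: weighted piece count, White minus Black."
    score = 0.0
    for _, piece in board.piece_map().items():
        value = PIECE_VALUES.get(piece.piece_type, 0)  # kings excluded
        score += value if piece.color == chess.WHITE else -value
    return score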
In this paper, our starting point is the insight that LLMs have two capabilities that allow them to serve as highly effective feature engineers for classical ML models. First, many useful features, such as the material balance in chess described above, can be implemented in a few lines of code, and modern LLMs are capable of flexibly generating code to accomplish a variety of tasks. Second, broad pre-training of LLMs encodes useful prior knowledge about a very wide range of domains (Bommasani et al., 2021), equipping them with strong priors over what kinds of features could be useful for making predictions across these domains. Our main hypothesis is that we might be able to train high-quality classical models by leveraging LLM-generated features at scale.

Our goal here is thus to learn a hypothesis class that consists of (a) programmatic features and (b) decision tree predictors. We call these Learned Programmatic Representation models, or LeaPR models. The main challenge we now tackle is how to elicit predictive features from language models.

3.1 BLACK-BOX FEATURE GENERATION

Given a supervised dataset D and any scoring function SD(f) that measures the quality of a proposed feature f : X → R, a simple methodology to obtain increasingly good features is to apply a FunSearch-style procedure, where an LLM is used to propose candidates to try to maximize S. Modern LLMs are capable of generating complex functions even for high-dimensional input domains, such as images and text, partly due to their ability to write code that uses existing human-written libraries for each of these domains. Thus, the main component needed for this approach to work is to answer: what makes f a good feature?

In the standard FunSearch setup (Romera-Paredes et al., 2024), the scoring function S evaluates candidates independently: proposals are self-contained solutions of the task. Our setup here is different: for the purpose of learning a predictor from all generated features at once, a new candidate fk is valuable only to the extent that it contributes predictive information when taking into account the existing feature set f1:k−1.

Assuming that we keep track of the set of features generated so far and propose new features in a loop, a naïve adaptation of FunSearch would thus score a new feature fk on its added predictive power once it has been proposed. For that, we could train a new predictor using f1:k and compare its risk against the previous predictor trained on f1:k−1. However, this runs into the issue that early features receive disproportionately high scores simply because their baselines (initial predictors based on few features) are severely limited. In practice, with this approach, the highest-scoring features are essentially fixed after the first few iterations, which is undesirable: we would like to detect and reward powerful features even if they appear late during training.
To overcome this problem, we simultaneously score all existing features f1:k independently of the order in which they were proposed. Given a learned decision tree, prior work has proposed several metrics of importance of each input feature (e.g., measuring decrease in "impurity" in decision nodes that use a given feature (Breiman, 2001)). Importance metrics only depend on the final learned predictor, and decision tree learning methods are order-invariant with respect to input features.

Algorithm 1: Features FunSearch (F2)
Input: Language model LM, supervised dataset D ∈ 2^(X×Y)
Output: List of features F, where fi : X → R
  F ← [ ]
  for iteration ∈ [1, ..., T] do
    rf ← TrainRandomForest(D, F)
    imp ← FeatureImportances(rf)
    top_k ← TopKFeatures(F, imp)
    r ← RandomKFeatures(F \ top_k, imp)
    p ← ProposeFeatures(LM, top_k, r)
    F.extend(p)
  end
  return F

Algorithm 2: Dynamic ID3 (D-ID3)
Input: Language model LM, supervised dataset D ∈ 2^(X×Y)
Output: List of features F, where fi : X → R
  T ← Leaf(Dtrain)
  for iteration ∈ [1, ..., T] do
    l ← argmax over leaves n of TotalError(T, n.data)
    p ← ProposeFeatures(LM, l.path_to_root)
    ⟨f, t⟩ ← argmin over f ∈ F, t of SplitError(f, t, l.data)
    l.split(f, t, {x ∈ l.data | f(x) < t}, {x ∈ l.data | f(x) ≥ t})
  end
  return {n.splitting_feature | n ∈ T ∧ Internal(n)}

Figure 2: Two learning algorithms for LeaPR models. F2 (left) uses a FunSearch-style loop that attempts to evolve features that are globally useful to train a Random Forest predictor, as estimated by feature importances. D-ID3 (right) runs an ID3-style decision tree training loop and attempts to propose new features that are locally useful for splitting specific leaf nodes, attempting to minimize their impurity (e.g., variance in regression, or entropy in classification).

Algorithm 1 (Figure 2, left) shows Features FunSearch (F2), our representation learning algorithm based on this FunSearch-style approach but with features scored as a set. Specifically, F2 takes a supervised dataset and learns a programmatic representation, i.e., a set of feature functions fi : X → R, represented as executable code. Like FunSearch, F2 iteratively uses an LLM to make batches of proposals conditioned on a sample of existing features, which are shown to the model along with their assigned scores; the LLM's task is to propose new features that will be assigned high importance scores in a newly trained Random Forest predictor. These scores are a global estimate of the predictive power of each feature in a predictor trained with all of them.
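The scoring step that F2 relies on can be summarized in a few lines. The sketch below is illustrative (function names and hyperparameters are ours, not the paper's), shown for the regression case with scikit-learn's mean-decrease-in-impurity importances.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

def score_features(features, inputs, targets):
    # Column j holds feature j evaluated on every training input.
    F = np.array([[f(x) for f in features] for x in inputs])
    rf = RandomForestRegressor(n_estimators=100).fit(F, targets)
    return rf.feature_importances_  # importances sum to 1 across features

Because these importances are computed jointly on the full feature set, a strong feature proposed late in training can still receive a high score, which is exactly the property the naive incremental scoring lacked.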
3.2 DYNAMIC SPLITTING

While F2 uses an underlying decision tree learner as a black-box, the insight that LLMs can be used to generate features on demand can serve as the basis for designing the decision tree learner itself. Recall that during inference in a decision tree, we start at the root and repeatedly follow the "decisions" associated with each node until we reach a leaf. Each such decision consists of testing the value of a particular feature: if this node splits on feature fk, we compare fk(x) with a threshold t learned during training. If fk(x) < t, the node recursively returns the prediction made by its left child (or right child if fk(x) ≥ t). Leaf nodes return a fixed prediction defined during training, e.g., the most common class label (classification), or average value (regression) for training points that fall on that leaf.

For training, classical decision tree learning algorithms (e.g., ID3 or CART (Quinlan, 1986)) start with a single node and repeatedly improve the current decision tree predictor by (a) choosing a leaf node and (b) partitioning it into a new decision node with two new leaves as its children. Partitioning searches for a feature and comparison threshold that minimize the "impurity" (e.g., variance in the continuous case, or entropy of class labels in classification) of data points falling on both sides. For instance, in classification, the best-case scenario would be to find a partition where all training data points falling on each new leaf belong to the same class. However, the ability of classical algorithms to find good splits is limited by the predictive power of preexisting dataset features.

Here, we revisit this recursive splitting strategy considering that we can attempt to generate new features on demand for the purpose of successfully partitioning a particular leaf node. When we decide to split a leaf, we have significant local context aside from global dataset information: in particular, we know the specific path of decisions that leads to that leaf, and we have a corresponding set of training examples, with their labels, falling onto that node. To be locally useful, a feature only needs to help distinguish between examples in that set. Indeed, for informing the proposals of potentially useful features, we can even leverage the ability of LLMs to perform inductive reasoning, by presenting actual examples (if possible in text), along with their labels, in the model's context: its objective, then, is to propose a feature that would explain the variation in the labels between those examples and others that reach the same leaf.

Algorithm 2, Dynamic ID3 (D-ID3), realizes this idea. In each iteration, D-ID3 selects the current leaf in the tree that accounts for the largest portion of training error (e.g., number of misclassified training examples). D-ID3 then generates new candidate features with an LLM on the fly to split that particular leaf. In modalities where we can easily represent examples in text, the LLM receives a sample of examples and their labels that fall in this branch (in our experiments, this only excludes image classification, where we only show a sample of image class labels in the prompt). D-ID3 considers these features, as well as all candidate features generated for ancestor nodes, and finds the best split for this leaf according to a user-defined impurity metric (all metrics available for classical methods are also possible here). This process repeats for a number of iterations. At the end, like F2, D-ID3 returns a learned representation: the set of programmatic features for the input domain that were used in the resulting decision tree. We note several practical considerations for both F2 and D-ID3, as well as other implementation details, in Appendix A.
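For concreteness, here is a minimal sketch of the split-selection step of D-ID3 for a regression leaf (variance impurity). Names and details are illustrative; the paper's implementation may differ.

import numpy as np

def best_split(candidate_features, xs, ys):
    """Pick the (feature, threshold) pair minimizing total child impurity
    over the examples (xs, ys) falling on the leaf being split."""
    ys = np.asarray(ys)
    best = (None, None, np.inf)  # (feature, threshold, impurity)
    for f in candidate_features:  # LLM-proposed plus ancestors' candidates
        values = np.array([f(x) for x in xs])
        for t in np.unique(values)[1:]:  # thresholds between observed values
            left, right = ys[values < t], ys[values >= t]
            impurity = len(left) * np.var(left) + len(right) * np.var(right)
            if impurity < best[2]:
                best = (f, t, impurity)
    return best

For classification, np.var would be replaced by an entropy or Gini computation over class labels, exactly as in classical ID3/CART.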
4 EXPERIMENTS

We now evaluate programmatic representation models on three tasks with complex input domains where the standard practice is to train neural networks: chess position evaluation (given a chess board, predict the probability that White wins), image classification on MNIST (LeCun, 1998) and Fashion-MNIST (Xiao et al., 2017), and text classification (detecting whether a piece of text is human- or AI-generated) on Ghostbuster (Verma et al., 2024). In all domains, we compare LeaPR against standard neural network baselines; additionally, in chess and image classification, we also include a Random Forest baseline where we feed a simple "raw" encoding of the input (the piece in each square for chess boards, or pixel values for images). We discuss the features our methods learn in each domain, and finally conduct a case study debugging a classifier that has learned to rely on a spurious feature in the Waterbird dataset (Sagawa et al., 2020).

We run F2 and D-ID3 using two OpenAI models, for a total of four LeaPR models per task: GPT 4o-mini (gpt-4o-mini-2024-07-21; Hurst et al., 2024) and GPT 5-mini (gpt-5-mini-2025-08-07; OpenAI, 2025). We run both methods so that they output a maximum of 1000 features — this means using 1000 iterations of D-ID3, and 100 iterations of F2 with a proposal batch size of 10 features in each call. We sometimes end with fewer than 1000 features because we discard features that fail validation (see Appendix A). Using the features learned by either algorithm, we then train a Random Forest model using the standard Scikit-Learn (Pedregosa et al., 2011) implementation, with 500 trees and a maximum depth of 50.

4.1 CHESS POSITION EVALUATION

Table 1: Performance of state-value prediction models on chess positions from Lichess. We train the 270M-parameter Transformer from Ruoss et al. (2024) with up to 50M data points, and match their results for their full run on 10x more data.

Predictor                          Training Size   RMSE   ρ      Acc.
Random policy                      0                             11.4%
Transformer                        5 × 10^7        .161   .795   30.3%
Transformer (Ruoss et al., 2024)   5 × 10^8                      58.5%
Random Forest (raw board)          200k            .248   .306   14.5%
LeaPR:
  F2 + GPT 5-mini                  200k            .169   .762   31.4%
  F2 + GPT 4o-mini                 200k            .163   .783   16.7%
  D-ID3 + GPT 5-mini               200k            .160   .789   33.5%
  D-ID3 + GPT 4o-mini              200k            .156   .806   17.2%

First, we train models on the regression task of state-value prediction in the game of chess: given the board position, predict the win probability for each player. We use a publicly available dataset of games from the Lichess online platform (Lichess.org), and hold out 1000 random board positions for evaluation. The dataset comes with state values estimated by Stockfish (Romstad et al., 2008), the strongest publicly available chess engine. We use Stockfish's prediction value as the ground truth (Stockfish outputs values in "centipawns", which we convert to win percentages using the standard formula used by Lichess and other prior work). To represent and manipulate chess boards, we use the popular python-chess library, with a standard API that facilitates iterating through the board, locating pieces, generating available moves, and testing for various pieces of game state (e.g., a player's turn, whether the current player is in check, etc.). Our prompts contain a short listing of the main API classes, methods, and functions available in the library. The models are instructed to generate features that receive an argument of type chess.Board and return a float value. We provide full prompts in Appendix C.
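For reference, the centipawn-to-win-probability conversion alluded to above is, in the version popularized by Lichess, a logistic squashing of the engine score. The constant below is Lichess's fitted value; this is an assumption on our part, since the paper does not restate the formula.

import math

def centipawns_to_win_prob(cp: float) -> float:
    "Win probability for White, in [0, 1], from a centipawn evaluation."
    return 0.5 + 0.5 * (2.0 / (1.0 + math.exp(-0.00368208 * cp)) - 1.0)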
Transformer baseline. As a neural baseline, we train the 270M parameter Transformer architecture proposed in Ruoss et al. (2024) (their largest model) to predict the discretized win-probability for White (128 buckets) given a position encoded in the standard FEN format. Ruoss et al. (2024) compared models that predict both state values and state-action values; when controlled for the number of data points, models trained on state values slightly outperformed when playing games against each other. Their strongest model (trained on 15.3B state-action values) achieved grandmaster-level play. Due to computational constraints, we reproduce their training of a state-value prediction Transformer run only up to 50M data points (10x less than their total state-value dataset of 500M data points; though we approximately match their number of epochs over the training data at 2.5).

Table 1 summarizes the results for this regression task. Here, we show both root mean square error (RMSE) and Pearson correlation (ρ) between model predictions and Stockfish's estimate. Our LeaPR models, trained on 200k board positions, compare favorably with the Transformer predictor trained on 250x more data. In contrast, as expected, Random Forests trained on the raw board struggle. LeaPR models benefit from the significant prior knowledge that LLMs have about useful chess concepts. We see basic features such as one that "Calculates the total piece value of both sides" (the model's own function documentation string) proposed and implemented by GPT 4o-mini in 6 lines of code early during training, as well as significantly more complex, specific features such as "Pawn promotion pressure: sum over pawns of 1/(1+steps to promotion) weighted by being passed (white minus black). Encourages advanced, passed pawns.", implemented by GPT 5-mini late in the D-ID3 run (with 51 lines of code). Generally, D-ID3 features appear to become more specific as training progresses, likely because the LLM is asked to distinguish only a subset of board positions that already share many similarities (due to falling on a specific leaf node), yielding better models than F2 even with the same number of total features.

We also compare models in terms of Top-1 move accuracy compared to Stockfish at its maximum strength. Since we only estimate state values, to use our predictors we select the move that leads to the best successor value from the point of view of the current player (i.e., the move leading to the highest or lowest win-probability for White, depending on who plays), and measure how often this matches Stockfish's top move. Interestingly, we find that regression performance is not necessarily predictive of move accuracy: LeaPR models trained with GPT 4o-mini are significantly worse when used to select moves. Our best action predictors achieve non-trivial move selection accuracy: whereas random performance for this task is 11.4%, the D-ID3 model trained with GPT 5-mini predicts the top Stockfish move in 33.5% of the cases, with the Transformer baseline underperforming at 30.3%. Ruoss et al. (2024) trained this same state-value model with up to 500M data points and managed to achieve a move accuracy of 58.5%, showing that the Transformer keeps improving for much longer. Their most accurate model for action prediction is trained to directly predict action-values from 15B training data points, achieving an accuracy of 63.5% and a grandmaster-level ELO rating when playing with humans online.
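The move-selection rule just described amounts to a one-ply search over successor positions. A minimal sketch with python-chess follows, where predict_white_win_prob stands for any trained value predictor (a hypothetical name).

import chess

def select_move(board: chess.Board, predict_white_win_prob) -> chess.Move:
    def successor_value(move: chess.Move) -> float:
        board.push(move)                    # apply the move
        v = predict_white_win_prob(board)   # evaluate the successor position
        board.pop()                         # undo
        return v
    moves = list(board.legal_moves)
    # White maximizes the predicted win probability; Black minimizes it.
    pick = max if board.turn == chess.WHITE else min
    return pick(moves, key=successor_value)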
Although there remains a significant gap between their best results achieved with a Transformer and what we demonstrate here, LeaPR models still get surprisingly far in this challenging regression task. If the scalability challenges associated with LeaPR models can be overcome (e.g., we see negligible benefits from training Random Forests beyond 200k training data points), we might be able to obtain chess policies that not only play at a high level but can also "explain" their moves — a feature that no existing chess engine possesses.

4.2 IMAGE CLASSIFICATION

Table 2: Top-1 accuracy on image classification on MNIST and Fashion-MNIST.

Predictor                     MNIST    Fashion
ResNet-50                     98.71%   89.54%
EfficientNetV2                98.8%    90.94%
Random Forest (raw pixels)    95.6%    88.29%
LeaPR:
  F2 + GPT 5-mini             92.54%   85.77%
  F2 + GPT 4o-mini            89.26%   80.26%
  D-ID3 + GPT 5-mini          96.91%   88.51%
  D-ID3 + GPT 4o-mini         93.71%   83.80%

We now evaluate LeaPR models on two image classification datasets: MNIST (handwritten digit classification) from LeCun (1998) and Fashion-MNIST (grayscale fashion product classification) from Xiao et al. (2017). We train standard ResNet-50 and EfficientNetV2 baselines to convergence on the same datasets (training details in Appendix B.2). Unlike in Section 4.1, D-ID3 does not add images in the prompt when calling the LLM, but only a textual description of class labels of a set of training examples belonging to the leaf being split (e.g., "digit 0" in MNIST, or "T-shirt" in Fashion-MNIST). This is a significant limitation for this domain, since the features need to rely solely on the LLM's prior knowledge about what the described objects might look like and hypotheses about how they will be detectable in a small grayscale image. Still, the best LeaPR models in this task achieve comparable accuracy to the neural baselines, even without the ability to construct features by directly observing the training data. Again, especially with D-ID3 we observe specific features that attempt to distinguish between particular classes, such as "Count of ink 'endpoints': ink pixels with only one ink neighbor (8-connected). Loops like 0/8 have few endpoints; open strokes like 5 have endpoints" being the feature with the best split (thus selected) in a leaf where the majority classes were 8 and 5. GPT 5-mini correctly implemented the above feature in 30 lines of Python using numpy. Though MNIST and Fashion-MNIST generally only serve as "sanity checks" for computer vision models (with even Random Forests trained on the raw pixels performing near the neural baselines, given the small image and training set sizes), we believe that this result presents an encouraging signal towards the ability of LeaPR models to achieve comparable accuracy while proposing simple and interpretable programmatic features.
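The "ink endpoints" feature quoted above is straightforward to realize. Below is a minimal sketch with numpy; the binarization threshold and other details are assumptions, and the model's 30-line version may differ.

import numpy as np

def feature(image: np.ndarray) -> float:
    "Number of ink endpoints in a grayscale image (e.g., 28x28 MNIST)."
    ink = (image > 127).astype(int)   # binarize (threshold is an assumption)
    padded = np.pad(ink, 1)
    # Count of the 8 neighbors for every pixel, via shifted slices.
    neighbors = sum(padded[1 + dy:padded.shape[0] - 1 + dy,
                           1 + dx:padded.shape[1] - 1 + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
    # Endpoints: ink pixels with exactly one ink neighbor (8-connected).
    return float(np.sum((ink == 1) & (neighbors == 1)))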
4.3 TEXT CLASSIFICATION

Table 3: F1 score on Ghostbuster: classifying text as human- or AI-written (Verma et al., 2024).

Predictor               F1
Perplexity only         81.5
GPTZero                 93.1
RoBERTa                 98.1
Ghostbuster             99.0
LeaPR:
  F2 + GPT 5-mini       97.7
  F2 + GPT 4o-mini      98.6
  D-ID3 + GPT 5-mini    98.8
  D-ID3 + GPT 4o-mini   98.6

We now evaluate LeaPR models on a binary text classification task: detecting whether the input was written by an LLM or a human. We use the Ghostbuster dataset (Verma et al., 2024), which contains a collection of student essays, creative writing, and news articles written by humans and by ChatGPT and Claude (Anthropic, 2024) given the same or similar prompts. In Table 3 we compare LeaPR models with the results for the Ghostbuster model and other neural baselines reported in Verma et al. (2024) for the "in distribution" setting with all domains combined (we omit DetectGPT, which achieves a low F1 score of 51.6% due to having been trained to detect another LLM). Here, LeaPR models are the only neural network-free predictors. Still, our models perform competitively when evaluated in F1 score: they outperform all baselines except for Ghostbuster itself, with the best LeaPR model (with features obtained by D-ID3 with GPT 5-mini) closely matching Ghostbuster (98.8 vs 99.0 in F1 score).

4.4 CASE STUDIES: UNDERSTANDING FEATURES AND MODEL PREDICTIONS

The interpretable representations that LeaPR models learn have potential uses beyond their predictive power. We now describe two case studies using SHAP values (Lundberg & Lee, 2017) as a lens into the patterns that LeaPR models find in their training data, and into why a particular prediction — especially if erroneous — was made. SHAP values are a metric of feature importance for a model that can be applied both at a dataset level or to a particular prediction; we refer to Lundberg & Lee (2017) for details. This is an especially compelling tool for understanding LeaPR models given that their features already come with natural language descriptions, in the form of documentation strings.

First, we compute SHAP values on the Ghostbuster training set to understand what features the LeaPR model trained with D-ID3 and GPT 5-mini has learned to use to identify AI-generated text. Sorting features by their SHAP values on a sample of 150 training examples, we find that two of the top-3 most important features for the model are (1) the "Fraction of quotation marks that are curly/typographic quotes (e.g., ' ' " ") vs plain ASCII quotes, indicating published/edited text" and (2) the "Proportion of characters that lie inside parentheses (measures parenthetical/planned content like '(50 words)')" (the other top-3 feature also looks for kinds of quotation characters and is strongly correlated with the first). Together, these two features already capture distinctive patterns in human- and AI-written samples in Ghostbuster.

[Figure 3 here. Left panel, "Top 2 Ghostbuster Features by SHAP value": scatter of AI vs. human samples over Feature 1 ("Fraction of Curly vs Plain Quotes") and Feature 2 ("Fraction of Text in Parenthesis"). Right panel: a misclassified example (true label: land bird; prediction: water bird) with its top 3 features by SHAP value: 1. [0.044] Mean approximate saturation in the center region (colorfulness); 2. [0.042] Ratio of mean blue to mean green in the central region (blue vs vegetation center bias); 3. [0.030] Fraction of center-region pixels that are green-dominant and have noticeable chroma (vegetation patches).]
Figure 3: (Left) Distribution of the top 2 LeaPR-learned features with highest SHAP values on the Ghostbuster dataset (Gaussian jitter added to aid visualization). On Feature 1 (which measures the fraction of "curly", or typographic, characters versus plain ASCII quotes), human text tends to cluster at the extremes, while AI-generated text features mid-range values. Feature 2 shows high values primarily for human-written text. (Right) A land bird misclassified by a LeaPR model on the Waterbird dataset; the top SHAP-valued features for this example show a clear reliance on the background.

Figure 3 (left) shows training samples projected on these two features, with small Gaussian jitter added to both coordinates to aid visualization. For Feature 1, human-generated text either has value 1.0, meaning all quote characters are typographic, or "curly", quotes, or 0.0, meaning all quotes in the text are plain ASCII quotes (this could, for instance, reflect default settings in the user device, with curly quotes being much more frequent). This is in stark contrast with AI-generated text, which often mixes both kinds of characters in the same text: AI-generated text displays the full range from 0 to 1 in this feature. For Feature 2, humans seem much more likely to wrap a significant fraction of the text in parentheses: almost all samples with value over 0.1 in this feature (over 10% in parentheses) were human-written. LeaPR models allowed us to quickly discover these patterns without having to formulate specific a priori assumptions: combined, our models contain thousands of automatically generated domain-relevant features, thus serving as a highly useful tool for data understanding.
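A feature like Feature 1 can be sketched in a few lines. The exact character sets and the fallback value for texts without quotes are assumptions on our part, since the learned code is not reproduced in the paper body.

def feature(text: str) -> float:
    "Fraction of quote characters that are curly/typographic vs plain ASCII."
    curly = sum(text.count(c) for c in "\u2018\u2019\u201c\u201d")
    plain = text.count("'") + text.count('"')
    total = curly + plain
    return curly / total if total > 0 else 0.5  # neutral value if no quotes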
Finally, we conduct a case study on the Waterbird dataset (Sagawa et al., 2020) showing how programmatic features can serve to debug model failures. This dataset contains images with two classes of birds: land birds and water birds — which are the two target classification labels. However, a trained classifier might learn instead to rely on the background, rather than use features of the bird itself. The dataset contains a subset of land birds placed on water backgrounds, and vice-versa: typically, classification accuracy drops significantly across groups when models learn to predict bird classes based on the (spuriously correlated) background. When we train a LeaPR model with F2 and GPT 5-mini on Waterbird, it achieves 100% validation accuracy when evaluated on land birds on land backgrounds, but it drops to 84% when evaluated on land birds on water backgrounds. Again, SHAP values can help us understand why this happens in a particular case. Figure 3 (right) shows the first validation example of a land bird on water background that is misclassified. When we show the top 3 features by their SHAP values, the second and third features explicitly indicate that their goal is to detect vegetation — a spurious feature. We can indeed find several examples of features that attempt to characterize the bird, such as "Fraction of warm (red-dominant) pixels in the center region (bird color cue)", that are however ignored by the trained predictor. This example shows how LeaPR models can make their failures transparent. Since model failures often reflect properties of the training data, we believe that LeaPR can serve as a debugging tool for both models and datasets, applicable to a wide range of domains.
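The analysis underlying both case studies can be reproduced with the shap package. The following is a minimal sketch, assuming a Random Forest regressor-style explainer and features that carry their natural-language descriptions as docstrings; function names are ours.

import numpy as np
import shap

def top_features_by_shap(rf, features, F_sample, k=3):
    """Rank programmatic features by mean absolute SHAP value on a sample
    of featurized inputs F_sample (one column per feature)."""
    explainer = shap.TreeExplainer(rf)
    shap_values = explainer.shap_values(F_sample)  # (n_samples, n_features)
    importance = np.abs(shap_values).mean(axis=0)
    for j in np.argsort(importance)[::-1][:k]:
        print(f"[{importance[j]:.3f}] {features[j].__doc__}")

Because each column corresponds to a named, documented function, the printout directly yields human-readable explanations like the ones shown in Figure 3.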
Moreover, our experiments were still done at a small scale, and there are scalability challenges — both in representation learning as well as in training predictors — to be overcome if we want to give competitive predictive performance in data- rich domains, like chess, where neural networks improve predictably with more data and compute. Despite these limitations, we find our results encouraging for further exploring novel learning paradigms that yield interpretable models by construction, rather than post-hoc. With a rapidly advancing AI toolbox, we can imagine that future tools might allow us to learn interpretable models just as easily as we can train neural networks today, with little to no sacrifice in quality. Overcoming the limitations in the LeaPR paradigm can thus be a path to make this possible. ACKNOWLEDGMENTS This work has been made possible in part by a gift from the Chan Zuckerberg Initiative Foundation to establish the Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University. In the earlier stages of the work, GP was also supported by a Stanford Graduate Inter- disciplinary Fellowship. REFERENCES Anthropic. Claude, 2024. URL https://www.anthropic.com. Yoshua Bengio and Yann LeCun. Scaling learning algorithms towards AI. In Large Scale Kernel Machines. MIT Press, 2007. Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. On the opportu- nities and risks of foundation models. arXiv e-prints, pp. arXiv–2108, 2021. Leo Breiman. Random forests. Machine learning, 45(1):5–32, 2001. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021. Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W Cohen. Program of thoughts prompt- ing: Disentangling computation from reasoning for numerical reasoning tasks. Transactions on Machine Learning Research, 2022. Isaac Cheng and Chico Camargo. Machine learning to study patterns in chess games. Master’s thesis, University of Exeter, 2023. Arthur Conmy, Augustine Mavor-Parker, Aengus Lynch, Stefan Heimersheim, and Adri`a Garriga- Alonso. Towards automated circuit discovery for mechanistic interpretability. Advances in Neural Information Processing Systems, 36:16318–16352, 2023. Alexandru Damian, Jason Lee, and Mahdi Soltanolkotabi. Neural networks can learn representations with gradient descent. In Conference on Learning Theory, pp. 5413–5452. PMLR, 2022. Guozhu Dong and Huan Liu. Feature engineering for machine learning and data analytics. CRC press, 2018. Finale Doshi-Velez and Been Kim. Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608, 2017. 10 Preprint Moussa Koulako Bala Doumbouya, Dan Jurafsky, and Christopher D. Manning. Tversky neural networks: Psychologically plausible deep learning with differentiable tversky similarity. 2025. URL https://arxiv.org/abs/2506.11035. John Hewitt, John Thickstun, Christopher D Manning, and Percy Liang. Backpack language models. arXiv preprint arXiv:2305.16765, 2023. Robert Huben, Hoagy Cunningham, Logan Riggs Smith, Aidan Ewart, and Lee Sharkey. Sparse autoencoders find highly interpretable features in language models. In The Twelfth International Conference on Learning Representations, 2023. 
Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. GPT-4o system card. arXiv preprint arXiv:2410.21276, 2024.
Jeonghyun Ko, Gyeongyun Park, Donghoon Lee, and Kyunam Lee. FERG-LLM: Feature engineering by reason generation large language models. In Findings of the Association for Computational Linguistics: NAACL 2025, pp. 4211-4228, 2025.
Robert Lange, Yingtao Tian, and Yujin Tang. Large language models as evolution strategies. In Proceedings of the Genetic and Evolutionary Computation Conference Companion, pp. 579-582, 2024.
Yann LeCun. The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/, 1998.
Lichess.org. Lichess. https://lichess.org/about.
Scott M Lundberg and Su-In Lee. A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, 30, 2017.
Weijie Lv, Xuan Xia, and Sheng-Jun Huang. CodeAct: Code adaptive compute-efficient tuning framework for code LLMs. arXiv preprint arXiv:2408.02193, 2024.
Samuel Marks, Can Rager, Eric J. Michaud, Yonatan Belinkov, David Bau, and Aaron Mueller. Sparse feature circuits: Discovering and editing interpretable causal graphs in language models. 2025. URL https://arxiv.org/abs/2403.19647.
Ryszard Stanislaw Michalski, Jaime Guillermo Carbonell, and Tom M Mitchell. Machine learning: An artificial intelligence approach. Springer Science & Business Media, 2013.
Jesse Mu and Jacob Andreas. Compositional explanations of neurons. 2021. URL https://arxiv.org/abs/2006.14032.
Alexander Novikov, Ngân Vũ, Marvin Eisenberger, Emilien Dupont, Po-Sen Huang, Adam Zsolt Wagner, Sergey Shirobokov, Borislav Kozlovskii, Francisco JR Ruiz, Abbas Mehrabian, et al. AlphaEvolve: A coding agent for scientific and algorithmic discovery. arXiv preprint arXiv:2506.13131, 2025.
OpenAI. GPT-5 System Card. https://cdn.openai.com/gpt-5-system-card.pdf, August 2025.
F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830, 2011.
Wasu Top Piriyakulkij, Yichao Liang, Hao Tang, Adrian Weller, Marta Kryven, and Kevin Ellis. PoE-World: Compositional world modeling with products of programmatic experts. Advances in Neural Information Processing Systems (to appear), 2025.
J. Ross Quinlan. Induction of decision trees. Machine Learning, 1(1):81-106, 1986.
Bernardino Romera-Paredes, Mohammadamin Barekatain, Alexander Novikov, Matej Balog, M Pawan Kumar, Emilien Dupont, Francisco JR Ruiz, Jordan S Ellenberg, Pengming Wang, Omar Fawzi, et al. Mathematical discoveries from program search with large language models. Nature, 625(7995):468-475, 2024.
Tord Romstad, Marco Costalba, Joona Kiiski, Gary Linscott, Yu Nasu, Motohiro Isozaki, Hisayori Noda, et al. Stockfish, 2008. URL https://stockfishchess.org.
Anian Ruoss, Grégoire Delétang, Sourabh Medapati, Jordi Grau-Moya, Li K Wenliang, Elliot Catt, John Reid, Cannada A Lewis, Joel Veness, and Tim Genewein. Amortized planning with large-scale transformers: A case study on chess. Advances in Neural Information Processing Systems, 37:65765-65790, 2024.
Shiori Sagawa, Pang Wei Koh, Tatsunori B Hashimoto, and Percy Liang. Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization. International Conference on Learning Representations, 2020.
Hao Tang, Darren Key, and Kevin Ellis. WorldCoder, a model-based LLM agent: Building world models by writing code and interacting with the environment. Advances in Neural Information Processing Systems, 37:70148-70212, 2024.
Vivek Verma, Eve Fleisig, Nicholas Tomlin, and Dan Klein. Ghostbuster: Detecting text ghostwritten by large language models. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pp. 1702-1717, 2024.
Jindong Wang, Cuiling Lan, Chang Liu, Yidong Ouyang, Tao Qin, Wang Lu, Yiqiang Chen, Wenjun Zeng, and Philip S. Yu. Generalizing to unseen domains: A survey on domain generalization. IEEE Transactions on Knowledge and Data Engineering, 35(8):8052-8072, 2023. doi: 10.1109/TKDE.2022.3178128.
Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747, 2017.
Yunzhi Yao, Ningyu Zhang, Zekun Xi, Mengru Wang, Ziwen Xu, Shumin Deng, and Huajun Chen. Knowledge circuits in pretrained transformers. 2025. URL https://arxiv.org/abs/2405.17969.
Xinhao Zhang and Kunpeng Liu. TIFG: Text-informed feature generation with large language models. In 2024 IEEE International Conference on Big Data (BigData), pp. 8256-8258. IEEE, 2024.

A IMPLEMENTATION DETAILS FOR F2 AND D-ID3

Both F2 and D-ID3 run for a user-specified number of iterations; this number is exactly equal to the number of LLM calls that the algorithm will perform, allowing users to budget for LLM usage. In a sense, both algorithms are "anytime algorithms" — they can always return their latest set of learned features. The algorithms return a representation, rather than a predictor (e.g., the decision tree constructed by D-ID3), to allow for separation of concerns: having a representation, users can later iterate on learning predictors (which need not be decision trees) without additional LLM calls. Most of the time in F2 and D-ID3 is generally spent computing features; luckily, feature computation for all relevant examples is embarrassingly parallel, and we exploit this in our implementation. During training, we always validate proposed features on a subset of the training set (we use 10k examples in our experiments), and discard features that throw exceptions, time out, or return non-finite values for some example (e.g., NaN or ±∞). Our training runs for D-ID3 were the most expensive, with cost ranging from 0.5 to 5 US dollars per run with GPT 5-mini (1000 iterations). Runs with F2 were around 10x cheaper, due to performing 10x fewer LLM calls. Runs took from 5 to 24 hours on a CPU-only commodity machine.
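A minimal sketch of this validation filter follows. Names are illustrative; the timeout guard is only indicated, since it would require a subprocess- or signal-based mechanism in practice.

import math

def is_valid_feature(f, validation_inputs) -> bool:
    "Reject features that raise, or return non-finite or non-numeric values."
    for x in validation_inputs:
        try:
            v = float(f(x))   # feature must be coercible to a float
        except Exception:
            return False      # feature raised on some input
        if not math.isfinite(v):
            return False      # NaN or +-infinity
    return True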
B.2 MNIST AND FASHION-MNIST TRAINING We train standard ResNet-50 and EfficientNet-V2 models for 4000 steps and a batch size of 1024 with Adam on a single H100 GPU. We use the default random initialization from PyTorch. For ResNet-50, we use a learning rate of 0.03; for EfficientNet-V2, we use 0.001, which we tuned using the validation set. C PROMPTS AND EXAMPLE FEATURES C.1 CHESS POSITION EVALUATION C.1.1 F2 1 You are an expert chess programmer creating feature functions to help a machine learning model predict chess position evaluations. 2 3 Your task is to write a feature function that helps discriminate between board positions with different evaluations (e.g., probability that white or black wins). 4 A feature function is a Python function that takes a chess Board and computes a feature out of the board. It should return a float, but note that a feature could also be effectively boolean-valued (0.0 or 1.0), or integer-valued, even if its type is float. 5 6 You have access to the following API from the ‘chess‘ library: 7 8 # Chess API Documentation 9 ## Class chess.Board Methods 10 - board.turn: True if White to move, False if Black 11 - board.fullmove_number: Current move number 12 - board.halfmove_clock: Halfmove clock for 50-move rule 13 - board.is_check(): True if current player is in check 14 - board.is_checkmate(): True if current player is checkmated 15 - board.is_stalemate(): True if stalemate 16 - board.is_insufficient_material(): Returns True if insufficient material 17 - board.piece_at(square): Returns piece at given square (or None) 18 - board.piece_map(): Returns dict mapping squares to pieces 19 - board.legal_moves: Iterator over legal moves 20 - board.attackers(color, square): Returns set of squares attacking given square 21 - board.is_attacked_by(color, square): Returns True if square is attacked by color 22 23 ## Chess Squares and Pieces 24 - chess.A1, chess.A2, ..., chess.H8: Square constants 25 - chess.PAWN, chess.KNIGHT, chess.BISHOP, chess.ROOK, chess.QUEEN, chess.KING: Piece types 26 - chess.WHITE, chess.BLACK: Colors 27 - piece.piece_type: Type of piece (PAWN, KNIGHT, etc.) 
28 - piece.color: Color of piece (WHITE or BLACK) 29 30 ## Useful Functions 31 - chess.square_name(square): Convert square index to name (e.g., "e4") 32 - chess.parse_square(name): Convert square name to index 33 - chess.square_file(square): Get file (0-7) of square 34 - chess.square_rank(square): Get rank (0-7) of square 35 - chess.square_distance(sq1, sq2): Manhattan distance between squares 36 37 38 ## Current Feature Database 39 Here are some of our existing features and their importance to the current model (higher importance means this is a more useful feature for the current model): 40 41 Feature: 42 def feature(board: chess.Board) -> float: 13 Preprint 43 ’King centralization in endgames: positive if White king is closer to center than Black king (only active when low material)’ 44 total_non_king_pieces = sum(1 for _, p in board.piece_map().items() if p.piece_type != chess.KING) 45 # Activation threshold: small material (pawns + pieces <= 6) 46 if total_non_king_pieces > 6: 47 return 0.0 48 center_sq = [chess.parse_square(s) for s in (’d4’,’e4’,’d5’,’e5’)] 49 center_sq_avg = sum(center_sq) # not used directly; use center coords (3.5,3.5) 50 def king_dist(color): 51 for sq, p in board.piece_map().items(): 52 if p.piece_type == chess.KING and p.color == color: 53 # distance to center by min over central squares 54 return min(chess.square_distance(sq, c) for c in center_sq) 55 return 8.0 56 wd = king_dist(chess.WHITE) 57 bd = king_dist(chess.BLACK) 58 # Normalize in range roughly -1..1 59 return float((bd - wd) / 8.0) 60 61 62 Importance: 0.014 63 --- 64 65 66 Feature: 67 def feature(board: chess.Board) -> float: 68 ’Bishop-pair and minor-piece balance: (White advantage) bishops and minor piece composition bonus’ 69 w_bishops = 0 70 b_bishops = 0 71 w_minors = 0 72 b_minors = 0 73 for _, p in board.piece_map().items(): 74 if p.piece_type == chess.BISHOP: 75 if p.color == chess.WHITE: 76 w_bishops += 1 77 else: 78 b_bishops += 1 79 if p.piece_type in (chess.BISHOP, chess.KNIGHT): 80 if p.color == chess.WHITE: 81 w_minors += 1 82 else: 83 b_minors += 1 84 score = 0.0 85 # bishop pair bonus 86 if w_bishops >= 2: 87 score += 0.6 88 if b_bishops >= 2: 89 score -= 0.6 90 # minor piece imbalance small weight 91 score += 0.12 * (w_minors - b_minors) 92 return float(score) 93 94 95 Importance: 0.039 96 --- 97 98 99 Feature: 100 def feature(board: chess.Board) -> float: 101 ’Center control: difference in control of d4,e4,d5,e5 (occupied=1, attacked=0.5)’ 102 center_squares = [chess.parse_square(s) for s in (’d4’,’e4’,’d5’,’e5’)] 103 def control_for(color): 104 c = 0.0 105 for sq in center_squares: 106 occ = board.piece_at(sq) 107 if occ is not None and occ.color == color: 108 c += 1.0 109 # attacked by color 110 if board.is_attacked_by(color, sq): 111 c += 0.5 112 return c 113 wc = control_for(chess.WHITE) 114 bc = control_for(chess.BLACK) 115 return float(wc - bc) 116 117 118 Importance: 0.044 119 --- 120 14 Preprint 121 122 Feature: 123 def feature(board: chess.Board) -> float: 124 ’Piece activity squares: difference in number of unique squares attacked by non-pawn, non- king pieces (White - Black) normalized by 64’ 125 def active_squares(color): 126 count = 0 127 for sq in range(64): 128 attackers = board.attackers(color, sq) 129 found = False 130 for a in attackers: 131 p = board.piece_at(a) 132 if p is None: 133 continue 134 if p.piece_type not in (chess.PAWN, chess.KING): 135 found = True 136 break 137 if found: 138 count += 1 139 return count 140 w = active_squares(chess.WHITE) 141 b = 
active_squares(chess.BLACK) 142 return float((w - b) / 64.0) 143 144 145 Importance: 0.329 146 --- 147 148 149 Feature: 150 def feature(board: chess.Board) -> float: 151 ’Pawn structure weakness: (Black penalties - White penalties), positive if Black worse ( good for White). Penalty = doubled*0.5 + isolated*0.7’ 152 def pawn_penalty(color): 153 files = {f:0 for f in range(8)} 154 pawn_sqs = [] 155 for sq, p in board.piece_map().items(): 156 if p.piece_type == chess.PAWN and p.color == color: 157 f = chess.square_file(sq) 158 files[f] += 1 159 pawn_sqs.append(sq) 160 doubled = sum(max(0, cnt-1) for cnt in files.values()) 161 isolated = 0 162 for sq in pawn_sqs: 163 f = chess.square_file(sq) 164 if files.get(f-1,0) == 0 and files.get(f+1,0) == 0: 165 isolated += 1 166 return doubled * 0.5 + isolated * 0.7 167 bp = pawn_penalty(chess.BLACK) 168 wp = pawn_penalty(chess.WHITE) 169 return float(bp - wp) 170 171 172 Importance: 0.065 173 --- 174 175 176 Feature: 177 def feature(board: chess.Board) -> float: 178 ’King safety pressure: weighted sum of attackers near each king (positive = pressure on Black king > White king)’ 179 def king_square(color): 180 for sq, p in board.piece_map().items(): 181 if p.piece_type == chess.KING and p.color == color: 182 return sq 183 return None 184 wk = king_square(chess.WHITE) 185 bk = king_square(chess.BLACK) 186 values = {chess.PAWN:1.0, chess.KNIGHT:3.0, chess.BISHOP:3.25, chess.ROOK:5.0, chess.QUEEN :9.0, chess.KING:0.5} 187 def pressure_on(king_sq, attacker_color): 188 if king_sq is None: 189 return 0.0 190 attackers = board.attackers(attacker_color, king_sq) 191 s = 0.0 192 for a in attackers: 193 p = board.piece_at(a) 194 if p is None: 195 continue 196 dist = chess.square_distance(a, king_sq) 197 s += values.get(p.piece_type, 0.0) / (1.0 + dist) 15 Preprint 198 return s 199 p_on_black = pressure_on(bk, chess.WHITE) 200 p_on_white = pressure_on(wk, chess.BLACK) 201 return float(p_on_black - p_on_white) 202 203 204 Importance: 0.005 205 --- 206 207 208 Feature: 209 def feature(board: chess.Board) -> float: 210 ’Undefended high-value threats: (value of Black pieces under more attackers than defenders ) - (value of White pieces similarly threatened)’ 211 values = {chess.PAWN:1.0, chess.KNIGHT:3.0, chess.BISHOP:3.25, chess.ROOK:5.0, chess.QUEEN :9.0, chess.KING:0.0} 212 threat_black = 0.0 213 threat_white = 0.0 214 for sq, p in board.piece_map().items(): 215 if p.piece_type == chess.PAWN: 216 continue 217 attackers = board.attackers(not p.color, sq) 218 defenders = board.attackers(p.color, sq) 219 atk = sum(1 for _ in attackers) 220 defn = sum(1 for _ in defenders) 221 if atk > defn and atk > 0: 222 score = values.get(p.piece_type, 0.0) * (atk - defn) 223 if p.color == chess.BLACK: 224 threat_black += score 225 else: 226 threat_white += score 227 return float(threat_black - threat_white) 228 229 Importance: 0.045 230 --- 231 232 233 Feature: 234 def feature(board: chess.Board) -> float: 235 ’Material balance (White minus Black) using common piece values: P=1,N=3,B=3.25,R=5,Q=9’ 236 values = {chess.PAWN:1.0, chess.KNIGHT:3.0, chess.BISHOP:3.25, chess.ROOK:5.0, chess.QUEEN :9.0, chess.KING:0.0} 237 total = 0.0 238 for sq, piece in board.piece_map().items(): 239 val = values.get(piece.piece_type, 0.0) 240 total += val if piece.color == chess.WHITE else -val 241 return float(total) 242 243 244 Importance: 0.199 245 --- 246 247 248 Feature: 249 def feature(board: chess.Board) -> float: 250 ’Normalized mobility difference: (White legal moves - Black legal moves) / 
100’ 251 try: 252 white_moves = 0 253 black_moves = 0 254 # count current side moves 255 white_board = board.copy() 256 white_board.turn = chess.WHITE 257 white_moves = sum(1 for _ in white_board.legal_moves) 258 black_board = board.copy() 259 black_board.turn = chess.BLACK 260 black_moves = sum(1 for _ in black_board.legal_moves) 261 return float((white_moves - black_moves) / 100.0) 262 except Exception: 263 return 0.0 264 265 266 Importance: 0.165 267 --- 268 269 270 Feature: 271 def feature(board: chess.Board) -> float: 272 ’Passed pawns score: sum of passed-pawn strengths (White minus Black), advanced pawns weighted more’ 273 def is_passed(sq, color): 274 f = chess.square_file(sq) 16 Preprint 275 r = chess.square_rank(sq) 276 if color == chess.WHITE: 277 ahead_ranks = range(r+1, 8) 278 opp_color = chess.BLACK 279 for ar in ahead_ranks: 280 for df in (-1,0,1): 281 ff = f + df 282 if 0 <= ff < 8: 283 sq2 = chess.square(ff, ar) 284 p = board.piece_at(sq2) 285 if p is not None and p.color == opp_color and p.piece_type == chess. PAWN: 286 return False 287 return True 288 else: 289 ahead_ranks = range(r-1, -1, -1) 290 opp_color = chess.WHITE 291 for ar in ahead_ranks: 292 for df in (-1,0,1): 293 ff = f + df 294 if 0 <= ff < 8: 295 sq2 = chess.square(ff, ar) 296 p = board.piece_at(sq2) 297 if p is not None and p.color == opp_color and p.piece_type == chess. PAWN: 298 return False 299 return True 300 score_w = 0.0 301 score_b = 0.0 302 for sq, p in board.piece_map().items(): 303 if p.piece_type != chess.PAWN: 304 continue 305 rank = chess.square_rank(sq) 306 if p.color == chess.WHITE: 307 if is_passed(sq, chess.WHITE): 308 advancement = (rank - 1) / 6.0 if rank >= 1 else 0.0 309 score_w += 1.0 + advancement 310 else: 311 if is_passed(sq, chess.BLACK): 312 advancement = (6 - rank) / 6.0 if rank <= 6 else 0.0 313 score_b += 1.0 + advancement 314 return float(score_w - score_b) 315 316 317 Importance: 0.094 318 --- 319 320 321 # Task 322 Generate 10 new chess board feature functions in Python that: 323 324 1. Help us discriminate between strong and weak board positions, hopefully with positions before and after the optimal split point having the lowest possible variance between their evaluations. 325 2. Return a float value given a board position. 326 3. Handle edge cases gracefully - won’t crash on unusual positions 327 328 Your task is to generate diverse, creative features that are relevant to explain the evaluations for the board positions shown above. Focus on features that would help distinguish between positions of different strengths. These features will be used in this decision tree that will predict the evaluation of a given board position in estimated % win probability for white (e.g., 20 means Black winning with around 80% probability). Think about new features that would help such a predictor in the particular cases above, trying to add information that the already existing features shown above are missing. 329 330 # Code Requirements 331 332 - Use single quotes for docstrings: "description here" 333 - No markdown code blocks 334 - No explanatory text after the function 335 - Each function should be complete and standalone, and return a float 336 337 # Output Format 338 Generate exactly 10 features in this format: 339 340 def feature(board: chess.Board) -> float: 341 "Simple, clear description of what this feature measures" 342 # ... 
Calculate and return the feature value 343 return result 344 345 def feature(board: chess.Board) -> float: 17 Preprint 346 "Another feature description" 347 # ... Calculate and return the feature value 348 return result 349 350 The body of the function can be anything, but the first line (function declaration) should be identical to those examples above, and the second line should be a one-line docstring. Don’t output explanatory text - just the function definitions as shown above. 351 352 Optimize for producing discriminant features that are novel compared to the existing features and that are likely to achieve a high importance for scoring positions, once we retrain the model using your new features combined with the existing ones. C.1.2 D-ID3 - PROMPT 1 You are an expert chess programmer creating feature functions to help a machine learning model predict chess position evaluations. 2 3 Your task is to write a feature function that helps discriminate between the board positions given below. 4 A feature function is a Python function that takes a chess Board and computes a feature out of the board. It should return a float, but note that a feature could also be effectively boolean-valued (0.0 or 1.0), or integer-valued, even if its type is float. 5 6 You have access to the following API from the ‘chess‘ library: 7 8 # Chess API Documentation 9 ## Class chess.Board Methods 10 - board.turn: True if White to move, False if Black 11 - board.fullmove_number: Current move number 12 - board.halfmove_clock: Halfmove clock for 50-move rule 13 - board.is_check(): True if current player is in check 14 - board.is_checkmate(): True if current player is checkmated 15 - board.is_stalemate(): True if stalemate 16 - board.is_insufficient_material(): Returns True if insufficient material 17 - board.piece_at(square): Returns piece at given square (or None) 18 - board.piece_map(): Returns dict mapping squares to pieces 19 - board.legal_moves: Iterator over legal moves 20 - board.attackers(color, square): Returns set of squares attacking given square 21 - board.is_attacked_by(color, square): Returns True if square is attacked by color 22 23 ## Chess Squares and Pieces 24 - chess.A1, chess.A2, ..., chess.H8: Square constants 25 - chess.PAWN, chess.KNIGHT, chess.BISHOP, chess.ROOK, chess.QUEEN, chess.KING: Piece types 26 - chess.WHITE, chess.BLACK: Colors 27 - piece.piece_type: Type of piece (PAWN, KNIGHT, etc.) 28 - piece.color: Color of piece (WHITE or BLACK) 29 30 ## Useful Functions 31 - chess.square_name(square): Convert square index to name (e.g., "e4") 32 - chess.parse_square(name): Convert square name to index 33 - chess.square_file(square): Get file (0-7) of square 34 - chess.square_rank(square): Get rank (0-7) of square 35 - chess.square_distance(sq1, sq2): Manhattan distance between squares 36 37 38 # Task 39 Generate 10 new chess board feature functions in Python that: 40 41 1. Help us discriminate between strong and weak board positions, hopefully with positions before and after the optimal split point having the lowest possible variance between their evaluations. 42 2. Return a float value given a board position. 43 3. Handle edge cases gracefully - won’t crash on unusual positions 44 45 Your task is to generate diverse, creative features that are relevant to explain the evaluations for the board positions shown above. Focus on features that would help distinguish between positions of different strengths. 
These features will be used in this decision tree that will predict the evaluation of a given board position in estimated % win probability for white (e.g., 20 means Black winning with around 80% probability). Think about new features that would help such a predictor in the particular cases above, trying to add information that the already existing features shown above are missing. 46 47 # Code Requirements 48 49 - Use single quotes for docstrings: "description here" 50 - No markdown code blocks 51 - No explanatory text after the function 52 - Each function should be complete and standalone, and return a float 53 54 # Output Format 18 Preprint 55 Generate exactly 10 features in this format: 56 57 def feature(board: chess.Board) -> float: 58 "Simple, clear description of what this feature measures" 59 # ... Calculate and return the feature value 60 return result 61 62 def feature(board: chess.Board) -> float: 63 "Another feature description" 64 # ... Calculate and return the feature value 65 return result 66 67 The body of the function can be anything, but the first line (function declaration) should be identical to those examples above, and the second line should be a one-line docstring. Don’t output explanatory text - just the function definitions as shown above. 68 69 # Current decision tree node 70 You are currently focusing on features that explain the position evaluation of board positions in the following subtree of a decision tree: 71 72 [root] 73 -> value < 3.000 for "Measure the material balance between both players" -> value < 1.182 for "Counts the number of pieces for each player and computes the ratio of the piece counts." -> value > 0.650 for "Counts the number of squares attacked by pieces of each color to assess control of the board." 74 75 # Board positions 76 Here are examples of board positions in this subtree, along with their position evaluations ( computed by Stockfish): 77 78 Board: 79 . . r r . . k . 80 . . . . . p p p 81 p b . . p . . . 82 . p . . P . . . 83 . P . . N P n q 84 P . . . . . P . 85 . B . . Q . . P 86 R . . . . R . K 87 Evaluation: 38.08281678856247 88 --- 89 90 Board: 91 r . . . . . k . 92 . b . r . p p . 93 p . . P . . . . 94 . p . . . . . Q 95 . P . b . . . . 96 P . . . . . . . 97 . . . . . . P P 98 . . . . . R . K 99 Evaluation: 15.383900101915987 100 --- 101 102 Board: 103 r n b . k . . r 104 p p p . n p p p 105 . . . b . . . . 106 . . . B . . . . 107 . . . P P p . q 108 . . . . . . . . 109 P P P . . . P P 110 R N B Q . K N R 111 Evaluation: 47.700323643230064 112 --- 113 114 Board: 115 . . . r . . . k 116 p p . . . p p . 117 . . . . p . . p 118 . . b r P . . P 119 . q . N R . . . 120 . . . . B . . . 121 . . . . . K P . 122 . . R Q . . . . 123 Evaluation: 50.0 124 --- 125 126 Board: 127 r n b q k b n r 128 p p p p . p p p 19 Preprint 129 . . . . . . . . 130 . . . . p . . . 131 . . . . P . . . 132 . . . . . N . . 133 P P P P . P P P 134 R N B Q K B . R 135 Evaluation: 51.19644756166626 136 --- 137 138 139 Optimize for producing discriminant features that are novel compared to the existing features used to arrive at this subtree. Focus on explaining the differences between the board positions shown above. C.1.3 D-ID3 - EXAMPLE FEATURES 1 def feature(board: chess.Board) -> float: 2 "Measure the material balance between both players" 3 material_values = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3, chess.ROOK: 5, chess. QUEEN: 9, chess.KING: 0} 4 white_material = sum(material_values[piece.piece_type] for piece in board.piece_map(). 
values() if piece.color == chess.WHITE) 5 black_material = sum(material_values[piece.piece_type] for piece in board.piece_map(). values() if piece.color == chess.BLACK) 6 return float(white_material - black_material) 7 8 9 def feature(board: chess.Board) -> float: 10 "Counts the number of pieces for each player and computes the ratio of the piece counts." 11 piece_count_white = sum(1 for piece in board.piece_map().values() if piece.color == chess. WHITE) 12 piece_count_black = sum(1 for piece in board.piece_map().values() if piece.color == chess. BLACK) 13 if piece_count_black == 0: 14 return float(’inf’) # Black has no pieces left 15 return piece_count_white / piece_count_black 16 17 def feature(board: chess.Board) -> float: 18 "Counts the number of squares attacked by pieces of each color to assess control of the board." 19 white_attacks = sum(board.is_attacked_by(chess.WHITE, square) for square in chess.SQUARES) 20 black_attacks = sum(board.is_attacked_by(chess.BLACK, square) for square in chess.SQUARES) 21 control_ratio = white_attacks / (black_attacks + 1) # Avoid division by zero 22 return float(control_ratio) 23 24 def feature(board: chess.Board) -> float: 25 "Calculates the center control by counting pieces in the central squares." 26 central_squares = [chess.D4, chess.D5, chess.E4, chess.E5] 27 control = sum(1 for square in central_squares if board.piece_at(square) is not None) 28 return float(control) 29 30 def feature(board: chess.Board) -> float: 31 "Calculates the total piece value for each player based on standard chess piece values." 32 piece_values = { 33 chess.PAWN: 1, 34 chess.KNIGHT: 3, 35 chess.BISHOP: 3, 36 chess.ROOK: 5, 37 chess.QUEEN: 9, 38 chess.KING: 0 # King is invaluable 39 } 40 white_value = sum(piece_values[piece.piece_type] for piece in board.piece_map().values() if piece.color == chess.WHITE) 41 black_value = sum(piece_values[piece.piece_type] for piece in board.piece_map().values() if piece.color == chess.BLACK) 42 return float(white_value - black_value) C.2 IMAGE CLASSIFICATION C.2.1 F2 1 You are an expert computer vision programmer creating evaluation features for a machine learning model that predicts values from images. 2 3 This is a classification task with the following classes: 0: landbird, 1: waterbird. 4 5 Your task is to write a feature function that helps discriminate between the image classes above. 20 Preprint 6 A feature function is a Python function that takes an image array and computes a feature out of the image. It should return a float, but note that a feature could also be effectively boolean-valued (0.0 or 1.0), or integer-valued, even if its type is float. 7 8 You have access to the following API from image processing libraries: 9 10 11 # Image Processing API Documentation 12 13 The features receive an image as a numpy array, so you can use any numpy functions on it. For RGB images, shape is (height, width, 3). For grayscale, shape is (height, width). 
14 15 ## Image Processing Methods 16 - image.shape: Returns (height, width, channels) for RGB or (height, width) for grayscale 17 - image.mean(): Average pixel intensity across all channels 18 - image.std(): Standard deviation of pixel intensities 19 - image.max(), image.min(): Maximum and minimum pixel values 20 - np.sum(image): Sum of all pixel values 21 - np.count_nonzero(image): Count of non-zero pixels 22 23 ## Handle Both Grayscale and RGB 24 - Check format: len(image.shape) == 2 for grayscale, len(image.shape) == 3 for RGB 25 - Unpack safely: h, w = image.shape[:2] # Works for both formats 26 - For RGB only: image[:,:,0] (red), image[:,:,1] (green), image[:,:,2] (blue) 27 28 ## Useful Functions 29 - np.mean(image): Average intensity 30 - np.std(image): Standard deviation 31 - np.gradient(image): Image gradients - for RGB use on single channel: np.gradient(image [:,:,0]) 32 - np.where(condition, x, y): Conditional selection 33 - np.argmax(image), np.argmin(image): Location of max/min values 34 - np.percentile(image, q): Percentile values 35 - np.histogram(image.flatten(), bins): Intensity histogram 36 37 ## Spatial Analysis 38 - image[start_row:end_row, start_col:end_col]: Region selection 39 - Center region: image[h//4:3*h//4, w//4:3*w//4] 40 - Edge detection: np.gradient(np.mean(image, axis=2)) for RGB 41 - Color channel differences: image[:,:,0] - image[:,:,1] 42 43 ## Example Feature Function 44 def feature(image: np.ndarray) -> float: 45 "Average pixel intensity in the center region" 46 if len(image.shape) == 3: 47 h, w, c = image.shape 48 gray = np.mean(image, axis=2) 49 else: 50 h, w = image.shape 51 gray = image 52 center_h, center_w = h // 4, w // 4 53 center_region = gray[center_h:3*center_h, center_w:3*center_w] 54 return float(np.mean(center_region)) 55 56 57 ## Current Feature Database 58 Here are some existing features and their importances to the current classifier (importance = benefit from that feature, higher is better): 59 60 Feature: 61 def feature(image: np.ndarray) -> float: 62 ’Relative difference in average blue intensity between bottom half and top half’ 63 import numpy as np 64 h, w = image.shape[:2] 65 if len(image.shape) != 3 or image.shape[2] < 3: 66 return float(0.0) 67 b = image[:, :, 2].astype(float) 68 top_mean = np.mean(b[:h // 2, :]) if h // 2 > 0 else np.mean(b) 69 bot_mean = np.mean(b[h // 2:, :]) if h - h // 2 > 0 else np.mean(b) 70 denom = (np.mean(b) + 1e-8) 71 return float((bot_mean - top_mean) / denom) 72 73 74 Importance: 0.081 75 --- 76 77 78 Feature: 79 def feature(image: np.ndarray) -> float: 80 ’Aspect ratio (width/height) of bounding box of pixels significantly different from median intensity’ 21 Preprint 81 import numpy as np 82 h, w = image.shape[:2] 83 if len(image.shape) == 3: 84 gray = np.mean(image, axis=2).astype(float) 85 else: 86 gray = image.astype(float) 87 med = np.median(gray) 88 dynamic_range = np.max(gray) - np.min(gray) 89 thresh = 0.15 * (dynamic_range + 1e-8) 90 mask = np.abs(gray - med) > thresh 91 if not np.any(mask): 92 return float(0.5) 93 rows = np.where(np.any(mask, axis=1))[0] 94 cols = np.where(np.any(mask, axis=0))[0] 95 if rows.size == 0 or cols.size == 0: 96 return float(0.5) 97 r0, r1 = rows[0], rows[-1] 98 c0, c1 = cols[0], cols[-1] 99 bbox_h = (r1 - r0 + 1) 100 bbox_w = (c1 - c0 + 1) 101 return float(bbox_w / (bbox_h + 1e-8)) 102 103 Importance: 0.082 104 --- 105 106 107 Feature: 108 def feature(image: np.ndarray) -> float: 109 ’Left-right symmetry score (1.0 = perfectly symmetric, lower = less 
symmetric)’ 110 import numpy as np 111 if len(image.shape) == 3: 112 gray = np.mean(image, axis=2).astype(float) 113 else: 114 gray = image.astype(float) 115 flipped = np.fliplr(gray) 116 diff = np.mean(np.abs(gray - flipped)) 117 # normalize by image contrast to avoid small-image issues 118 denom = np.mean(np.abs(gray - np.mean(gray))) + 1e-8 119 norm_diff = diff / denom 120 score = 1.0 - np.tanh(norm_diff) # bounded between ˜0 and 1 121 return float(np.clip(score, 0.0, 1.0)) 122 123 124 Importance: 0.084 125 --- 126 127 128 Feature: 129 def feature(image: np.ndarray) -> float: 130 ’Ratio of average horizontal gradient magnitude to vertical gradient magnitude (texture orientation)’ 131 import numpy as np 132 # compute gray 133 if len(image.shape) == 3: 134 gray = np.mean(image, axis=2).astype(float) 135 else: 136 gray = image.astype(float) 137 gy, gx = np.gradient(gray) 138 mean_dx = np.mean(np.abs(gx)) 139 mean_dy = np.mean(np.abs(gy)) 140 return float(mean_dx / (mean_dy + 1e-8)) 141 142 143 Importance: 0.159 144 --- 145 146 147 Feature: 148 def feature(image: np.ndarray) -> float: 149 ’Proportion of pixels where green channel is significantly high (vegetation cue)’ 150 import numpy as np 151 h, w = image.shape[:2] 152 if len(image.shape) != 3 or image.shape[2] < 3: 153 return float(0.0) 154 img = image.astype(float) 155 r, g, b = img[:, :, 0], img[:, :, 1], img[:, :, 2] 156 # require green greater than both red and blue and at least above 60th percentile 157 thresh = np.percentile(g.flatten(), 60) 158 mask = (g > r) & (g > b) & (g > thresh) 159 return float(np.count_nonzero(mask) / (h * w + 1e-12)) 160 22 Preprint 161 162 Importance: 0.094 163 --- 164 165 166 Feature: 167 def feature(image: np.ndarray) -> float: 168 ’Fraction of pixels brighter than the 90th percentile (bright spot proportion)’ 169 import numpy as np 170 if len(image.shape) == 3: 171 gray = np.mean(image, axis=2).astype(float) 172 else: 173 gray = image.astype(float) 174 thresh = np.percentile(gray.flatten(), 90) 175 frac = np.count_nonzero(gray > thresh) / (gray.size + 1e-12) 176 return float(frac) 177 178 179 Importance: 0.080 180 --- 181 182 183 Feature: 184 def feature(image: np.ndarray) -> float: 185 ’Normalized difference between center region brightness and border brightness’ 186 import numpy as np 187 h, w = image.shape[:2] 188 if len(image.shape) == 3: 189 gray = np.mean(image, axis=2).astype(float) 190 else: 191 gray = image.astype(float) 192 ch0, ch1 = h // 4, w // 4 193 center = gray[ch0:3 * ch0 or h, ch1:3 * ch1 or w] 194 # border defined as whole minus center 195 mask_border = np.ones_like(gray, dtype=bool) 196 mask_border[ch0:3 * ch0 or h, ch1:3 * ch1 or w] = False 197 center_mean = np.mean(center) if center.size > 0 else 0.0 198 border_mean = np.mean(gray[mask_border]) if np.any(mask_border) else 0.0 199 denom = np.mean(np.abs(gray)) + 1e-8 200 return float((center_mean - border_mean) / denom) 201 202 203 Importance: 0.088 204 --- 205 206 207 Feature: 208 def feature(image: np.ndarray) -> float: 209 ’Proportion of pixels where blue channel is dominant (blue > red and blue > green)’ 210 import numpy as np 211 h, w = image.shape[:2] 212 if len(image.shape) != 3 or image.shape[2] < 3: 213 return float(0.0) 214 img = image.astype(float) 215 r, g, b = img[:, :, 0], img[:, :, 1], img[:, :, 2] 216 mask = (b > r) & (b > g) 217 return float(np.count_nonzero(mask) / (h * w + 1e-12)) 218 219 220 Importance: 0.117 221 --- 222 223 224 Feature: 225 def feature(image: np.ndarray) -> float: 226 ’Edge density: fraction 
of pixels with gradient magnitude above (mean+std)’ 227 import numpy as np 228 if len(image.shape) == 3: 229 gray = np.mean(image, axis=2).astype(float) 230 else: 231 gray = image.astype(float) 232 gy, gx = np.gradient(gray) 233 mag = np.hypot(gx, gy) 234 thresh = np.mean(mag) + np.std(mag) 235 count = np.count_nonzero(mag > thresh) 236 total = gray.size 237 return float(count / (total + 1e-12)) 238 239 240 Importance: 0.114 241 --- 23 Preprint 242 243 244 Feature: 245 def feature(image: np.ndarray) -> float: 246 ’Average per-pixel color "saturation" approximated by (max-min)/max across channels’ 247 import numpy as np 248 if len(image.shape) != 3 or image.shape[2] < 3: 249 return float(0.0) 250 img = image.astype(float) 251 mx = np.max(img, axis=2) 252 mn = np.min(img, axis=2) 253 # avoid division by zero 254 sat = (mx - mn) / (mx + 1e-8) 255 return float(np.mean(sat)) 256 257 258 Importance: 0.100 259 --- 260 261 262 ## Task 263 Generate 10 NEW image features that: 264 265 1. Are different from existing features 266 2. Capture useful visual patterns 267 3. Return float values 268 4. Handle edge cases gracefully - Won’t crash on unusual images 269 5. Use simple, short docstrings - Use single quotes, not triple quotes 270 6. Are efficient to compute 271 272 Your task is to generate diverse, creative features that capture different aspects of image content for prediction. Focus on features that would help distinguish between different samples. These features will be used as input to a learned model that predicts target values from images. 273 274 ## IMPORTANT CODE REQUIREMENTS 275 - Use SINGLE quotes for docstrings: "description here" 276 - NO triple quotes (""") anywhere in the code 277 - NO markdown code blocks 278 - NO explanatory text after the function 279 - Each function should be complete and standalone 280 281 ## Output Format 282 Generate exactly 10 features in this format: 283 284 def feature(image: np.ndarray) -> float: 285 "Clear description of what this feature measures" 286 # ... Calculate and return the feature value 287 return float(result) 288 289 def feature(image: np.ndarray) -> float: 290 "Another feature description" 291 # ... Calculate and return the feature value 292 return float(result) 293 294 The body of the functions can be anything, but the first line (function declaration) should be identical to those examples above (always ’def feature(...)’), and the second line should be a one-line docstring. Don’t output explanatory text - just the function definitions as shown above. C.2.2 D-ID3 - PROMPT 1 You are an expert image processing programmer creating feature functions to help a machine learning model perform image classification. 2 3 This is a classification task with the following classes: 0: digit zero, 1: digit one, 2: digit two, 3: digit three, 4: digit four, 5: digit five, 6: digit six, 7: digit seven, 8: digit eight, 9: digit nine. 4 5 Your task is to write a feature function that helps discriminate between the image samples given below. 6 A feature function is a Python function that takes an image array and computes a feature out of the image. It should return a float, but note that a feature could also be effectively boolean-valued (0.0 or 1.0), or integer-valued, even if its type is float. 7 8 You have access to the following API from image processing libraries: 9 10 11 # Image Processing API Documentation 12 24 Preprint 13 The features receive an image as a numpy array, so you can use any numpy functions on it. 
For RGB images, shape is (height, width, 3). For grayscale, shape is (height, width). 14 15 ## Image Processing Methods 16 - image.shape: Returns (height, width, channels) for RGB or (height, width) for grayscale 17 - image.mean(): Average pixel intensity across all channels 18 - image.std(): Standard deviation of pixel intensities 19 - image.max(), image.min(): Maximum and minimum pixel values 20 - np.sum(image): Sum of all pixel values 21 - np.count_nonzero(image): Count of non-zero pixels 22 23 ## Handle Both Grayscale and RGB 24 - Check format: len(image.shape) == 2 for grayscale, len(image.shape) == 3 for RGB 25 - Unpack safely: h, w = image.shape[:2] # Works for both formats 26 - For RGB only: image[:,:,0] (red), image[:,:,1] (green), image[:,:,2] (blue) 27 28 ## Useful Functions 29 - np.mean(image): Average intensity 30 - np.std(image): Standard deviation 31 - np.gradient(image): Image gradients - for RGB use on single channel: np.gradient(image [:,:,0]) 32 - np.where(condition, x, y): Conditional selection 33 - np.argmax(image), np.argmin(image): Location of max/min values 34 - np.percentile(image, q): Percentile values 35 - np.histogram(image.flatten(), bins): Intensity histogram 36 37 ## Spatial Analysis 38 - image[start_row:end_row, start_col:end_col]: Region selection 39 - Center region: image[h//4:3*h//4, w//4:3*w//4] 40 - Edge detection: np.gradient(np.mean(image, axis=2)) for RGB 41 - Color channel differences: image[:,:,0] - image[:,:,1] 42 43 ## Example Feature Function 44 def feature(image: np.ndarray) -> float: 45 "Average pixel intensity in the center region" 46 if len(image.shape) == 3: 47 h, w, c = image.shape 48 gray = np.mean(image, axis=2) 49 else: 50 h, w = image.shape 51 gray = image 52 center_h, center_w = h // 4, w // 4 53 center_region = gray[center_h:3*center_h, center_w:3*center_w] 54 return float(np.mean(center_region)) 55 56 57 # Task 58 Generate 10 new image feature functions in Python that: 59 60 1. Help us discriminate between different image classes, hopefully with samples before and after the optimal split point having the lowest possible variance between their classifications. 61 2. Return a float value given an image. 62 3. Handle edge cases gracefully - won’t crash on unusual images 63 64 Your task is to generate diverse, creative features that are relevant to explain the classifications for the image samples shown above. Focus on features that would help distinguish between samples of different classes. These features will be used in this decision tree that will predict the classification of a given image sample. Think about new features that would help such a predictor in the particular cases above, trying to add information that the already existing features shown above are missing. 65 66 # Code Requirements 67 - Use single quotes for docstrings: "description here" 68 - No markdown code blocks 69 - No explanatory text after the function 70 - Each function should be complete and standalone, and return a float 71 72 # Output Format 73 Generate exactly 10 features in this format: 74 75 def feature(image: np.ndarray) -> float: 76 "Simple, clear description of what this feature measures" 77 # ... Calculate and return the feature value 78 return result 79 80 def feature(image: np.ndarray) -> float: 81 "Another feature description" 82 # ... 
Calculate and return the feature value 83 return result 84 25 Preprint 85 The body of the function can be anything, but the first line (function declaration) should be identical to those examples above, and the second line should be a one-line docstring. Don’t output explanatory text - just the function definitions as shown above. 86 87 # Current decision tree node 88 You are currently focusing on features that explain the image classifications in the following subtree of a decision tree: 89 90 [root] 91 -> value > 0.344 for "Ratio of edge pixels to the total number of pixels" -> value > 72.865 for "Computes the brightness of the central region as a percentage of the whole image" -> value > 0.805 for "Determines the ratio of vertical to horizontal gradients in the image, indicating edge direction" 92 93 # Image samples 94 Here are examples of image samples in this subtree, along with their target classifications: 95 96 Sample (Target: 9) 97 Sample (Target: 6) 98 Sample (Target: 7) 99 Sample (Target: 4) 100 Sample (Target: 4) 101 102 Optimize for producing discriminant features that are novel compared to the existing features used to arrive at this subtree. Focus on explaining the differences between the image samples shown above. C.2.3 D-ID3 - EXAMPLE FEATURES 1 def feature(image: np.ndarray) -> float: 2 "Ratio of edge pixels to the total number of pixels" 3 if len(image.shape) == 3: 4 gray = np.mean(image, axis=2) 5 else: 6 gray = image 7 edges = np.gradient(gray.astype(float)) 8 edge_pixels = np.count_nonzero(edges[0]) + np.count_nonzero(edges[1]) 9 total_pixels = gray.size 10 return float(edge_pixels / total_pixels) 11 12 13 def feature(image: np.ndarray) -> float: 14 "Computes the brightness of the central region as a percentage of the whole image" 15 if len(image.shape) == 3: 16 gray = np.mean(image, axis=2) 17 else: 18 gray = image 19 h, w = gray.shape 20 center_region = gray[h//4:3*h//4, w//4:3*w//4] 21 total_brightness = np.sum(gray) 22 center_brightness = np.sum(center_region) 23 return float(center_brightness) / (total_brightness + 1e-7) * 100 24 25 26 def feature(image: np.ndarray) -> float: 27 "Determines the ratio of vertical to horizontal gradients in the image, indicating edge direction" 28 if len(image.shape) == 3: 29 gray = np.mean(image, axis=2) 30 else: 31 gray = image 32 grad_y, grad_x = np.gradient(gray) 33 vertical_grad = np.sum(np.abs(grad_x)) 34 horizontal_grad = np.sum(np.abs(grad_y)) 35 return float(vertical_grad) / (horizontal_grad + 1e-7) # Avoid division by zero 36 37 def feature(image: np.ndarray) -> float: 38 "Calculates the ratio of bright pixels (above a threshold) to total pixels in the image" 39 if len(image.shape) == 3: 40 gray = np.mean(image, axis=2) 41 else: 42 gray = image 43 threshold = 200 # A threshold for bright pixels 44 bright_pixels = np.count_nonzero(gray > threshold) 45 total_pixels = gray.size 46 return float(bright_pixels) / total_pixels 47 48 def feature(image: np.ndarray) -> float: 49 "Measures the contrast of the image based on the standard deviation of pixel intensities" 50 if len(image.shape) == 3: 26 Preprint 51 gray = np.mean(image, axis=2) 52 else: 53 gray = image 54 return float(np.std(gray)) C.3 TEXT CLASSIFICATION C.3.1 F2 1 You are an expert text analysis programmer creating evaluation features for a machine learning model that classifies text. 2 3 4 # Text Processing API Documentation 5 6 The features receive text as a string, so you can use any string methods and text processing functions. 
7 8 ## String Methods 9 - text.lower(), text.upper(): Case conversion 10 - text.strip(): Remove whitespace 11 - text.split(delimiter): Split into list 12 - text.count(substring): Count occurrences 13 - text.startswith(prefix), text.endswith(suffix): Check prefixes/suffixes 14 - text.find(substring): Find position of substring 15 - text.replace(old, new): Replace text 16 17 ## Text Analysis 18 - len(text): Length of text 19 - text.isdigit(), text.isalpha(), text.isalnum(): Character type checks 20 - sum(1 for c in text if c.isupper()): Count uppercase letters 21 - text.split(): Split on whitespace to get words 22 23 ## Regular Expressions (re module) 24 - re.findall(pattern, text): Find all matches 25 - re.search(pattern, text): Find first match 26 - re.sub(pattern, replacement, text): Replace patterns 27 - len(re.findall(r’\w+’, text)): Count words 28 - len(re.findall(r’[.!?]’, text)): Count sentences 29 30 ## Useful Patterns 31 - Word count: len(text.split()) 32 - Sentence count: text.count(’.’) + text.count(’!’) + text.count(’?’) 33 - Average word length: sum(len(word) for word in text.split()) / len(text.split()) 34 - Punctuation density: sum(1 for c in text if not c.isalnum() and not c.isspace()) / len(text) 35 36 ## Example Feature Function 37 def feature(text: str) -> float: 38 "Average word length in the text" 39 words = text.split() 40 if not words: 41 return 0.0 42 return sum(len(word) for word in words) / len(words) 43 44 45 ## Current Feature Database 46 Here are some existing features and their performance (Performance improvement = benefit from that feature, higher is better): 47 48 Feature: 49 def feature(text: str) -> float: 50 ’Average number of words per sentence (sentences split on . ! ?)’ 51 import re 52 if not text or not text.strip(): 53 return float(0.0) 54 # Split on one or more sentence-ending punctuation and filter empties 55 sentences = [s.strip() for s in re.split(r’[.!?]+’, text) if s.strip()] 56 if not sentences: 57 return float(0.0) 58 total_words = sum(len(s.split()) for s in sentences) 59 return float(total_words / len(sentences)) 60 61 62 Importance: 0.057 63 --- 64 65 66 Feature: 67 def feature(text: str) -> float: 27 Preprint 68 ’Average character-uniqueness per word (unique chars / word length), averaged over words’ 69 words = [w for w in text.split() if any(ch.isalnum() for ch in w)] 70 if not words: 71 return float(0.0) 72 ratios = [] 73 for w in words: 74 chars = [c for c in w if not c.isspace()] 75 if not chars: 76 continue 77 unique = len(set(chars)) 78 ratios.append(unique / len(chars)) 79 if not ratios: 80 return float(0.0) 81 return float(sum(ratios) / len(ratios)) 82 83 84 Importance: 0.057 85 --- 86 87 88 Feature: 89 def feature(text: str) -> float: 90 ’Proportion of alphabetic characters that are uppercase’ 91 if not text: 92 return float(0.0) 93 alpha_chars = [c for c in text if c.isalpha()] 94 if not alpha_chars: 95 return float(0.0) 96 upper_count = sum(1 for c in alpha_chars if c.isupper()) 97 return float(upper_count / len(alpha_chars)) 98 99 100 Importance: 0.083 101 --- 102 103 104 Feature: 105 def feature(text: str) -> float: 106 ’Proportion of long words (length > 7) among all words’ 107 words = [w for w in text.split() if w] 108 if not words: 109 return float(0.0) 110 long_count = sum(1 for w in words if len(w) > 7) 111 return float(long_count / len(words)) 112 113 114 Importance: 0.194 115 --- 116 117 118 Feature: 119 def feature(text: str) -> float: 120 ’Punctuation characters per word (punctuation = not alnum and not whitespace)’ 
121 if not text or not text.strip(): 122 return float(0.0) 123 words = text.split() 124 if not words: 125 return float(0.0) 126 punct_count = sum(1 for c in text if not c.isalnum() and not c.isspace()) 127 return float(punct_count / len(words)) 128 129 130 Importance: 0.096 131 --- 132 133 134 Feature: 135 def feature(text: str) -> float: 136 ’Proportion of sentences that are questions (based on ? count over total sentence terminators)’ 137 if not text or not text.strip(): 138 return float(0.0) 139 question_marks = text.count(’?’) 140 sentence_terminators = text.count(’.’) + text.count(’!’) + text.count(’?’) 141 if sentence_terminators == 0: 142 return float(0.0) 143 return float(question_marks / sentence_terminators) 144 145 Importance: 0.023 146 --- 147 28 Preprint 148 149 Feature: 150 def feature(text: str) -> float: 151 ’Type-token ratio: unique word tokens / total words (case-insensitive, alphanumeric tokens )’ 152 import re 153 tokens = re.findall(r’\w+’, text.lower()) 154 if not tokens: 155 return float(0.0) 156 unique = len(set(tokens)) 157 return float(unique / len(tokens)) 158 159 160 Importance: 0.089 161 --- 162 163 164 Feature: 165 def feature(text: str) -> float: 166 ’Ratio of tokens that contain at least one digit’ 167 tokens = text.split() 168 if not tokens: 169 return float(0.0) 170 num_with_digit = sum(1 for t in tokens if any(ch.isdigit() for ch in t)) 171 return float(num_with_digit / len(tokens)) 172 173 174 Importance: 0.165 175 --- 176 177 178 Feature: 179 def feature(text: str) -> float: 180 ’Longest run of the same character normalized by text length’ 181 if not text: 182 return float(0.0) 183 max_run = 1 184 current_run = 1 185 prev = text[0] 186 for c in text[1:]: 187 if c == prev: 188 current_run += 1 189 if current_run > max_run: 190 max_run = current_run 191 else: 192 current_run = 1 193 prev = c 194 return float(max_run / max(1, len(text))) 195 196 197 Importance: 0.126 198 --- 199 200 201 Feature: 202 def feature(text: str) -> float: 203 ’Stopword density: fraction of tokens that are common English stopwords’ 204 stopwords = { 205 ’the’,’and’,’is’,’in’,’it’,’of’,’to’,’a’,’an’,’that’,’this’,’for’,’on’,’with’, 206 ’as’,’by’,’at’,’from’,’or’,’be’,’are’,’was’,’were’,’has’,’have’,’not’,’but’, 207 ’they’,’their’,’you’,’I’ 208 } 209 tokens = [t.lower().strip(".,!?;:\"’()[]") for t in text.split()] 210 if not tokens: 211 return float(0.0) 212 stop_count = sum(1 for t in tokens if t and t in stopwords) 213 return float(stop_count / len(tokens)) 214 215 216 Importance: 0.111 217 --- 218 219 220 ## Task 221 Generate 10 NEW text features that: 222 223 1. Are different from existing features 224 2. Capture useful textual patterns 225 3. Return float values 226 4. Handle edge cases gracefully - Won’t crash on unusual texts 227 5. Use simple, short docstrings - Use single quotes, not triple quotes 29 Preprint 228 6. Are efficient to compute 229 230 Your task is to generate diverse, creative features that capture different aspects of text content for classification. Focus on features that would help distinguish between different text classes. These features will be used as input to a learned model that predicts target values from text. 
231 232 ## IMPORTANT CODE REQUIREMENTS 233 - Use SINGLE quotes for docstrings: "description here" 234 - NO triple quotes (""") anywhere in the code 235 - NO markdown code blocks 236 - NO explanatory text after the function 237 - Each function should be complete and standalone 238 239 ## Output Format 240 Generate exactly 10 features in this format: 241 242 def feature(text: str) -> float: 243 "Clear description of what this feature measures" 244 # ... Calculate and return the feature value 245 return float(result) 246 247 def feature(text: str) -> float: 248 "Another feature description" 249 # ... Calculate and return the feature value 250 return float(result) 251 252 The body of the functions can be anything, but the first line (function declaration) should be identical to those examples above (always ’def feature(...)’), and the second line should be a one-line docstring. Don’t output explanatory text - just the function definitions as shown above. C.3.2 D-ID3 - PROMPT 1 You are an expert text analysis programmer creating feature functions to help a machine learning model perform text classification. 2 3 This is a classification task with the following classes: 0: human-written text, 1: AI- generated text. 4 5 Your task is to write a feature function that helps discriminate between the text samples given below. 6 A feature function is a Python function that takes a text string and computes a feature out of the text. It should return a float, but note that a feature could also be effectively boolean-valued (0.0 or 1.0), or integer-valued, even if its type is float. 7 8 You have access to the following API from text processing libraries: 9 10 11 # Text Processing API Documentation 12 13 The features receive text as a string, so you can use any string methods and text processing functions. 14 15 ## String Methods 16 - text.lower(), text.upper(): Case conversion 17 - text.strip(): Remove whitespace 18 - text.split(delimiter): Split into list 19 - text.count(substring): Count occurrences 20 - text.startswith(prefix), text.endswith(suffix): Check prefixes/suffixes 21 - text.find(substring): Find position of substring 22 - text.replace(old, new): Replace text 23 24 ## Text Analysis 25 - len(text): Length of text 26 - text.isdigit(), text.isalpha(), text.isalnum(): Character type checks 27 - sum(1 for c in text if c.isupper()): Count uppercase letters 28 - text.split(): Split on whitespace to get words 29 30 ## Regular Expressions (re module) 31 - re.findall(pattern, text): Find all matches 32 - re.search(pattern, text): Find first match 33 - re.sub(pattern, replacement, text): Replace patterns 34 - len(re.findall(r’\w+’, text)): Count words 35 - len(re.findall(r’[.!?]’, text)): Count sentences 36 37 ## Useful Patterns 38 - Word count: len(text.split()) 39 - Sentence count: text.count(’.’) + text.count(’!’) + text.count(’?’) 40 - Average word length: sum(len(word) for word in text.split()) / len(text.split()) 30 Preprint 41 - Punctuation density: sum(1 for c in text if not c.isalnum() and not c.isspace()) / len(text) 42 43 ## Example Feature Function 44 def feature(text: str) -> float: 45 "Average word length in the text" 46 words = text.split() 47 if not words: 48 return 0.0 49 return sum(len(word) for word in words) / len(words) 50 51 52 # Task 53 Generate 10 new text feature functions in Python that: 54 55 1. Help us discriminate between different text classes, hopefully with samples before and after the optimal split point having the lowest possible variance between their classifications. 
56 2. Return a float value given a text string. 57 3. Handle edge cases gracefully - won’t crash on unusual texts 58 59 Your task is to generate diverse, creative features that are relevant to explain the classifications for the text samples shown above. Focus on features that would help distinguish between samples of different classes. These features will be used in this decision tree that will predict the classification of a given text sample. Think about new features that would help such a predictor in the particular cases above, trying to add information that the already existing features shown above are missing. 60 61 # Code Requirements 62 - Use single quotes for docstrings: "description here" 63 - No markdown code blocks 64 - No explanatory text after the function 65 - Each function should be complete and standalone, and return a float 66 67 # Output Format 68 Generate exactly 10 features in this format: 69 70 def feature(text: str) -> float: 71 "Simple, clear description of what this feature measures" 72 # ... Calculate and return the feature value 73 return result 74 75 def feature(text: str) -> float: 76 "Another feature description" 77 # ... Calculate and return the feature value 78 return result 79 80 The body of the function can be anything, but the first line (function declaration) should be identical to those examples above, and the second line should be a one-line docstring. Don’t output explanatory text - just the function definitions as shown above. 81 82 # Current decision tree node 83 You are currently focusing on features that explain the text classifications in the following subtree of a decision tree: 84 85 [root] 86 -> value > 4.468 for "Average character length of words in the text" -> value > 0.024 for " Calculates the proportion of text that is in passive voice" -> value < 26.000 for " Assesses the use of passive voice constructions in the text" 87 88 # Text samples 89 Here are examples of text samples in this subtree, along with their target classifications: 90 91 Sample: ’The Strange Tank 92 93 John awoke with a start. He was submerged in a tank of pink, viscous liquid. He thrashed around in a panic, trying to determine which way was up. He finally surfaced, gasping for ai...’, (Target: 1) 94 Sample: ’The old map crumbled at the edges as Juan carefully unrolled it across the table. His grandfather had given him the map many years ago, telling him it showed the true history of this land. A history m...’, (Target: 1) 95 Sample: ’The call came at midnight, just as I finished cleaning and assembling my sniper rifle . My burner phone buzzed, the one dedicated only to receiving instructions. A robotic female voice spoke: "Contract...’, (Target: 1) 96 Sample: ’"Bang Bang Baby" can be labeled a science fiction musical as it incorporates elements of both genres in an exciting plot. After watching the trailer first, I was surprised by the energetic nature of t...’, (Target: 0) 97 Sample: ’The red dust swirled around John as he took his first step onto the rust-colored soil of Mars. After decades of training and a perilous 9 month journey through the inky black void of space, he had fin...’, (Target: 1) 98 31 Preprint 99 Optimize for producing discriminant features that are novel compared to the existing features used to arrive at this subtree. Focus on explaining the differences between the text samples shown above. 
C.3.3 D-ID3 - EXAMPLE FEATURES

def feature(text: str) -> float:
    "Average character length of words in the text"
    words = text.split()
    if not words:
        return 0.0
    return sum(len(word) for word in words) / len(words)

def feature(text: str) -> float:
    "Calculates the proportion of text that is in passive voice"
    import re  # added: this feature relies on re being in scope
    passive_pattern = r'\b(?:is|was|were|be|being|been) \w+\b'
    passive_count = len(re.findall(passive_pattern, text))
    return float(passive_count) / len(text.split()) if text.split() else 0.0

def feature(text: str) -> float:
    "Assesses the use of passive voice constructions in the text"
    import re  # added: this feature relies on re being in scope
    passive_voice = re.findall(r'\b(is|are|was|were|be|being|been)\s+\w+\b', text)
    return float(len(passive_voice))

def feature(text: str) -> float:
    "Counts the number of transition words to evaluate the flow of text"
    transition_words = set(['however', 'therefore', 'moreover', 'furthermore', 'nevertheless', 'consequently'])
    words = text.lower().split()
    count = sum(1 for word in words if word in transition_words)
    return float(count)

def feature(text: str) -> float:
    "Measures the proportion of first-person pronouns in the text"
    first_person_pronouns = set(['I', 'me', 'my', 'mine', 'we', 'us', 'our', 'ours'])
    words = text.lower().split()
    count = sum(1 for word in words if word in first_person_pronouns)
    return float(count) / len(words) if words else 0.0
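Feature functions like those above are only half of a LeaPR model: a decision tree is learned over their outputs. The following is a minimal sketch of that glue, assuming scikit-learn; the names featurize and fit_leapr_predictor are hypothetical, and this is an illustration rather than the paper's exact training code.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def featurize(feature_fns, inputs):
    "Rows are inputs; columns are programmatic features (each a function input -> float)."
    return np.array([[fn(x) for fn in feature_fns] for x in inputs])

def fit_leapr_predictor(feature_fns, train_inputs, train_labels):
    "Fit a plain decision tree on top of the programmatic representation."
    return DecisionTreeClassifier().fit(featurize(feature_fns, train_inputs), train_labels)

For a regression target such as the chess position evaluations, a regression tree (or the Random Forest used by F2) would take the place of the classifier.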
PROGRAMMATIC REPRESENTATION LEARNING WITH LANGUAGE MODELS

Gabriel Poesia∗ (Kempner Institute at Harvard University)
Georgia Gabriela Sampaio∗ (Stanford University)

∗Authors contributed equally. Code available at https://github.com/gpoesia/leapr/

ABSTRACT

Classical models for supervised machine learning, such as decision trees, are efficient and interpretable predictors, but their quality is highly dependent on the particular choice of input features. Although neural networks can learn useful representations directly from raw data (e.g., images or text), this comes at the expense of interpretability and the need for specialized hardware to run them efficiently. In this paper, we explore a hypothesis class we call Learned Programmatic Representations (LeaPR) models, which stack arbitrary features represented as code (functions from data points to scalars) and decision tree predictors. We synthesize feature functions using Large Language Models (LLMs), which have rich prior knowledge in a wide range of domains and a remarkable ability to write code using existing domain-specific libraries. We propose two algorithms to learn LeaPR models from supervised data. First, we design an adaptation of FunSearch to learn features rather than directly generate predictors. Then, we develop a novel variant of the classical ID3 algorithm for decision tree learning, where new features are generated on demand when splitting leaf nodes. In experiments from chess position evaluation to image and text classification, our methods learn high-quality, neural network-free predictors often competitive with neural networks. Our work suggests a flexible paradigm for learning interpretable representations end-to-end where features and predictions can be readily inspected and understood.

1 INTRODUCTION

The central problem in supervised machine learning is to find a predictor h : X → Y in a hypothesis class H that minimizes a certain risk function R(h), such as 0-1 error in classification or mean-squared error in regression (Michalski et al., 2013). Classical choices for H include linear models, decision trees, and ensembles thereof, which are compellingly simple to understand and debug, and are both compute- and data-efficient. However, their effectiveness is highly limited in domains with low-level, high-dimensional inputs, such as images or text. For these domains, high-quality models are often best learned by first constructing a high-level representation of an input x ∈ X using a set of features fi : X → R that yield a higher-level encoding of the input that predictors can then rely on. While this offers great flexibility, in practice the effort and domain expertise required to engineer a good set of features for a particular learning task severely limits the quality of models that can be obtained with classical predictors in high-dimensional input domains without extensive human effort (Dong & Liu, 2018; Cheng & Camargo, 2023).

A remarkably successful paradigm that avoids the need for hand-designed feature engineering is deep learning, where H is set to a parameterized family of neural networks of a domain-appropriate architecture. The core advantage of deep learning is the ability of gradient-based optimization to automatically learn useful representations from raw data (Bengio & LeCun, 2007; Damian et al., 2022). Indeed, deep neural networks can be seen as computing a set of complex, non-linear neural features, then applying a simple predictor on top (e.g., the last fully-connected layer, corresponding to a linear model).
However, despite being highly effective for prediction, neural features have several drawbacks. First, deep neural networks are highly data intensive, and their ability to generalize drops drastically when in-domain data are scarce (Wang et al., 2023). Second, neural features are not easily interpretable: analyses of large-scale neural networks typically only find a fraction of neurons that seem to encode human-aligned concepts (Huben et al., 2023). This limits the potential of neural models to provide faithful explanations for their predictions (e.g., express why a given x is being classified as y), which are important when experts rely on learned models to support high-stakes decision-making (Doshi-Velez & Kim, 2017). ∗Authors contributed equally. Code available at https://github.com/gpoesia/leapr/ Figure 1: Learned Programmatic Representation models combine programmatic features, synthesized by LLMs as code, and decision tree predictors, yielding interpretable models. We give two algorithms for learning them end-to-end from supervised data in high-dimensional input domains. In this paper, we seek to investigate alternative paradigms for representation learning that, like deep learning, do not require manual feature engineering from users and, like classical methods, yield interpretable, fast, and efficiently-learnable predictors. To that end, we propose learning LeaPR (Learned Programmatic Representation) models: a class of predictors with programmatic features paired with decision tree predictors, as illustrated in Figure 1. We leverage Large Language Models' (LLMs) ability to generate code using domain-specific libraries to encode features as arbitrary Python functions from the input domain into the reals: these functions are generated during training with the goal of improving the empirical risk of the current predictor. For learning from supervised data, we propose two alternative methods. First, we explore a variant of the FunSearch method (Romera-Paredes et al., 2024) called F2 (for Features FunSearch). In F2, an LLM iteratively generates functions that compute features from the input domain, which are then evaluated by training a Random Forest (Breiman, 2001) predictor and extracting importance weights. These scores are used in-context to steer the LLM to generate features with high importance that are thus as predictive as possible. F2 is a black-box procedure with respect to the underlying classic predictor, leveraging an off-the-shelf Random Forest training method. We further introduce a white-box training procedure for LeaPR models called Dynamic ID3, or D-ID3, inspired by the classical ID3 algorithm for training decision trees. D-ID3 follows an ID3-like training loop where leaf nodes of a growing decision tree are selected and "split" by introducing two new leaves and a decision based on the value of a specific feature. While ID3 operates with pre-existing features, D-ID3 queries an LLM to propose new programmatic features on the fly that can help split the node under consideration. D-ID3 gives the LLM the decision path leading to the node, possibly with concrete examples of training samples in that branch, and asks for novel features that are useful for that specific context.
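To make the LeaPR hypothesis class concrete before the learning algorithms are developed, the following is a minimal sketch of how programmatic features compose with a decision tree predictor. It assumes scikit-learn; the two feature functions are illustrative stand-ins for LLM-generated ones, not code from the paper.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def avg_word_length(text: str) -> float:
    "Average character length of words in the text"
    words = text.split()
    return sum(len(w) for w in words) / len(words) if words else 0.0

def digit_token_ratio(text: str) -> float:
    "Fraction of tokens containing at least one digit"
    tokens = text.split()
    if not tokens:
        return 0.0
    return sum(any(c.isdigit() for c in t) for t in tokens) / len(tokens)

# The learned representation is just a list of feature callables.
features = [avg_word_length, digit_token_ratio]

def featurize(inputs):
    # Evaluate every feature function on every raw input -> design matrix.
    return np.array([[f(x) for f in features] for x in inputs])

train_x = ["The report was written by the committee.", "Call 555 0199 now!"]
train_y = [1, 0]
tree = DecisionTreeClassifier().fit(featurize(train_x), train_y)
print(tree.predict(featurize(["Results were obtained by the team."])))

Both F2 and D-ID3 produce models of exactly this shape; they differ only in how the list of feature functions is grown during training.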
The vast prior knowledge of modern LLMs about domain-specific Python libraries makes both approaches highly applicable across diverse, high-dimensional input domains. We evaluate both methods on two LLMs across three diverse domains: chess position evaluation, image classification (MNIST (LeCun, 1998) and Fashion-MNIST (Xiao et al., 2017)), and text classification (AI vs. human text classification on the Ghostbuster dataset (Verma et al., 2024)). Across domains, both LeaPR methods learn high-quality neural network-free representations that extract empirically useful features, often spanning tens of thousands of lines of Python code spread across hundreds of functions, all stitched together by decision trees. Moreover, the learned features tend to be highly intuitive yet specific enough to lead to high-quality predictors. Our main contributions are: • We propose jointly learning programmatic features, represented as LLM-generated functions from the input domain to the reals, together with decision tree predictors, thus obtaining fast, interpretable predictors. • We introduce F2 and D-ID3, two algorithms for learning LeaPR models, where features are generated on demand during decision tree training. • We evaluate LeaPR models on three domains: chess positions, images and text, showing comparable accuracy and often favorable data efficiency compared to baseline methods. We analyze the learned features and show how programmatic features can be useful for data exploration and for understanding model failures. 2 RELATED WORK Code generation with LLMs Our work is enabled by the capability of LLMs to generate code under flexible task specifications, including natural language instructions and examples (Chen et al., 2021). This ability has been explored in a variety of domains, including using code as an intermediate tool for reasoning (Chen et al., 2022) and as an interface for agents to interact with an external environment (Lv et al., 2024). We exploit code generation for synthesizing features from arbitrary input domains, building on LLMs' prior knowledge of a rich set of existing libraries. Feature generation with LLMs has been explored in tabular datasets (Ko et al., 2025; Zhang & Liu, 2024), whereas we explore domains with complex, low-level input. Black-box code evolution with LLMs LLMs can also be used to optimize a black-box objective function (Lange et al., 2024). This approach was pioneered in FunSearch (Romera-Paredes et al., 2024), the method that inspires our F2 algorithm (Section 3.1). In FunSearch, the LLM is given the type signature of a function to synthesize, and it outputs candidate functions which are scored with a metric unknown to the model. LLMs can then see past candidate function examples and their scores in following rounds (Romera-Paredes et al., 2024), and can mutate and evolve previous attempts to propose new, more successful ones. AlphaEvolve (Novikov et al., 2025) explored this idea with larger models and novel methods to ensure diverse candidates. Our work differs from FunSearch and AlphaEvolve in that LeaPR models use LLM-generated code simply to generate features, and not the full predictor end-to-end. Using a decision tree to combine LLM-generated features allows our method to scale and simultaneously employ thousands of LLM-generated features at once: for instance, a single model we synthesize in our experiments in chess can use up to 50k lines of LLM-generated code to compute features.
We believe our work showcases some of the largest fully AI-generated programs written so far as a result of the modularity that our framework provides (even ignoring the decision trees, which could also be compiled to nested conditionals). Learning programmatic world models A modular approach for using LLM-synthesized code has been recently employed in PoE-World (Piriyakulkij et al., 2025) for the problem of learning world models. In PoE-World, LLMs generate a set of small predictors that individually explain a specific behavior of the environment; a separate model combines these predictors with weights learned by gradient descent. This scales better than the monolithic code representations introduced in WorldCoder (Tang et al., 2024). Like LeaPR models, PoE-World can generate world models with thousands of lines since LLMs do not have to unilaterally generate all code at once. We propose a similarly modular architecture aimed toward the general problem of supervised learning. Mechanistic interpretability of neural networks The widespread deployment of deep neural networks strongly motivated the research community to understand the behavior of neural models (Mu & Andreas, 2021), often through the lens of mechanistic interpretability (Conmy et al., 2023). One key hypothesis is that neural networks have interpretable "circuits" that drive specific behaviors (Marks et al., 2025) or encode learned knowledge (Yao et al., 2025). Prior work has also explored neural architectures that are more amenable to interpretation by construction (Doumbouya et al., 2025; Hewitt et al., 2023), rather than post-hoc. We share this approach of training models that allow for transparent inspection of their behavior, but focusing on a simple composition of programs and decision trees rather than neural networks. 3 LEARNING PROGRAMMATIC REPRESENTATIONS Classical predictors considered in supervised machine learning, like decision trees, are intrinsically explainable in the sense that their structure is simple enough to warrant human inspection: this is highly desirable when humans might want to trust (e.g. in high-stakes decision making) or learn from (e.g., someone learning to play chess) machine learning models. However, this simplicity
On the other hand, if we instead train the model on a single useful feature computed from the board, like the material difference between players (given by a weighted sum of the pieces of each color still on the board), suddenly a very shallow decision tree can easily learn to make predictions that are significantly better than random, since a large material advantage is highly correlated with one's chance of winning. Adding more informative features will progressively allow a decision tree learner to find better steadily predictors. In this paper, our starting point is the insight that LLMs have two capabilities that allow them to serve as highly effective feature engineers for classical ML models. First, many useful features, such as the material balance in chess described above, can be implemented in a few lines of code, and modern LLMs are capable of flexibly generating code to accomplish a variety of tasks. Second, broad pre-training of LLMs encodes useful prior knowledge about a very wide range of domains (Bommasani et al., 2021), equipping them with strong priors over what kinds of features could be useful for making predictions across these domains. Our main hypothesis is that we might be able to train high-quality classical models by leveraging LLM-generated features at scale. Our goal here is thus to learn a hypothesis class that consists of (a) programmatic features and (b) decision tree predictors. We call these Learned Programmatic Representation models, or LeaPR models. The main challenge we now tackle is how to elicit predictive features from language models. 3.1 BLACK-BOX FEATURE GENERATION Given a supervised dataset D and any scoring function SD(f) that measures the quality of a proposed feature f : X →R, a simple methodology to obtain increasingly good features is to apply a FunSearch-style procedure, where an LLM is used to propose candidates to try to maximize S. Modern LLMs are capable of generating complex functions even for high-dimensional input domains, such as images and text, partly due to their ability to write code that uses existing human-written libraries for each of these domains. Thus, the main component needed for this approach to work is to answer: what makes f a good feature? In the standard FunSearch setup (Romera-Paredes et al., 2024), the scoring function S evaluates candidates independently: proposals are self-contained solutions of the task. Our setup here is different: for the purpose of learning a predictor from all generated features at once, a new candidate fk is valuable only to the extent that it contributes predictive information when taking into account the existing feature set f1:k-1. Assuming that we keep track of the set of features generated so far and propose new features in a loop, a na ̈ıve adaptation of FunSearch would thus score a new feature fk on its added predictive power once it has been proposed. For that, we could train a new predictor using f1:k and compare its risk against the previous predictor trained on f1:k-1. However, this runs into the issue that early features receive disproportionately high scores simply because their baselines (initial predictors based on few features) are severely limited. In practice, with this approach, the highest-scoring features are essentially fixed after the first few iterations, which is undesirable: we would like to detect and reward powerful features even if they appear late during training. 
To overcome this problem, we simultaneously score all existing features f1:k independently of the order in which they were proposed. Given a learned decision tree, prior work has proposed several metrics of importance of each input feature (e.g., measuring decrease in "impurity" in decision nodes that use a given feature (Breiman, 2001)). Importance metrics only depend on the final learned predictor, and decision tree learning methods are order-invariant with respect to input features.

Algorithm 1: Features FunSearch (F2)
Input: Language model LM, supervised dataset D ∈ 2^(X×Y)
Output: List of features F, where fi : X → R
  F ← [ ]
  for iteration ∈ [1, · · · , T] do
    rf ← TrainRandomForest(D, F)
    imp ← FeatureImportances(rf)
    top_k ← TopKFeatures(F, imp)
    r ← RandomKFeatures(F \ top_k, imp)
    p ← ProposeFeatures(LM, top_k, r)
    F.extend(p)
  end
  return F

Algorithm 2: Dynamic ID3 (D-ID3)
Input: Language model LM, supervised dataset D ∈ 2^(X×Y)
Output: List of features F, where fi : X → R
  T ← Leaf(D_train)
  for iteration ∈ [1, · · · , T] do
    l ← arg max_{n : IsLeaf(n)} TotalError(T, n.data)
    p ← ProposeFeatures(LM, l.path_to_root)
    ⟨f, t⟩ ← arg min_{f ∈ F, t} SplitError(f, t, l.data)
    l.split(f, t, {x ∈ l.data | f(x) ≤ t})
  end
  return {n.splitting_feature | n ∈ T ∧ Internal(n)}

Figure 2: Two learning algorithms for LeaPR models. F2 (left) uses a FunSearch-style loop that attempts to evolve features that are globally useful to train a Random Forest predictor, as estimated by feature importances. D-ID3 (right) runs an ID3-style decision tree training loop and attempts to propose new features that are locally useful for splitting specific leaf nodes, attempting to minimize their impurity (e.g., variance in regression, or entropy in classification).

Algorithm 1 (Figure 2, left) shows Features FunSearch (F2), our representation learning algorithm based on this FunSearch-style approach but with features scored as a set. Specifically, F2 takes a supervised dataset and learns a programmatic representation - i.e. a set of feature functions fi : X → R, represented as executable code. Like FunSearch, F2 iteratively uses an LLM to make batches of proposals conditioned on a sample of existing features, which are shown to the model along with their assigned scores - the LLM's task is to propose new features that will be assigned high importance score in a newly trained Random Forest predictor. These scores are a global estimate of the predictive power of each feature in a predictor trained with all of them. 3.2 DYNAMIC SPLITTING While F2 uses an underlying decision tree learner as a black-box, the insight that LLMs can be used to generate features on demand can serve as the basis for designing the decision tree learner itself. Recall that during inference in a decision tree, we start at the root and repeatedly follow the "decisions" associated with each node until we reach a leaf. Each such decision consists of testing the value of a particular feature: if this node splits on feature fk, we compare fk(x) with a threshold t learned during training.
If fk(x) ≤ t, we descend into one subtree, and otherwise into the other. 42 def feature(board: chess.Board) -> float: 43 'King centralization in endgames: positive if White king is closer to center than Black king (only active when low material)' 44 total_non_king_pieces = sum(1 for _, p in board.piece_map().items() if p.piece_type != chess.KING) 45 # Activation threshold: small material (pawns + pieces <= 6) 46 if total_non_king_pieces > 6: 47 return 0.0 48 center_sq = [chess.parse_square(s) for s in ('d4','e4','d5','e5')] 49 center_sq_avg = sum(center_sq) # not used directly; use center coords (3.5,3.5) 50 def king_dist(color): 51 for sq, p in board.piece_map().items(): 52 if p.piece_type == chess.KING and p.color == color: 53 # distance to center by min over central squares 54 return min(chess.square_distance(sq, c) for c in center_sq) 55 return 8.0 56 wd = king_dist(chess.WHITE) 57 bd = king_dist(chess.BLACK) 58 # Normalize in range roughly -1..1 59 return float((bd - wd) / 8.0) 60 61 62 Importance: 0.014 63 --- 64 65 66 Feature: 67 def feature(board: chess.Board) -> float: 68 'Bishop-pair and minor-piece balance: (White advantage) bishops and minor piece composition bonus' 69 w_bishops = 0 70 b_bishops = 0 71 w_minors = 0 72 b_minors = 0 73 for _, p in board.piece_map().items(): 74 if p.piece_type == chess.BISHOP: 75 if p.color == chess.WHITE: 76 w_bishops += 1 77 else: 78 b_bishops += 1 79 if p.piece_type in (chess.BISHOP, chess.KNIGHT): 80 if p.color == chess.WHITE: 81 w_minors += 1 82 else: 83 b_minors += 1 84 score = 0.0 85 # bishop pair bonus 86 if w_bishops >= 2: 87 score += 0.6 88 if b_bishops >= 2: 89 score -= 0.6 90 # minor piece imbalance small weight 91 score += 0.12 * (w_minors - b_minors) 92 return float(score) 93 94 95 Importance: 0.039 96 --- 97 98 99 Feature: 100 def feature(board: chess.Board) -> float: 101 'Center control: difference in control of d4,e4,d5,e5 (occupied=1, attacked=0.5)' 102 center_squares = [chess.parse_square(s) for s in ('d4','e4','d5','e5')] 103 def control_for(color): 104 c = 0.0 105 for sq in center_squares: 106 occ = board.piece_at(sq) 107 if occ is not None and occ.color == color: 108 c += 1.0 109 # attacked by color 110 if board.is_attacked_by(color, sq): 111 c += 0.5 112 return c 113 wc = control_for(chess.WHITE) 114 bc = control_for(chess.BLACK) 115 return float(wc - bc) 116 117 118 Importance: 0.044 119 --- 120 121 122 Feature: 123 def feature(board: chess.Board) -> float: 124 'Piece activity squares: difference in number of unique squares attacked by non-pawn, non-king pieces (White - Black) normalized by 64' 125 def active_squares(color): 126 count = 0 127 for sq in range(64): 128 attackers = board.attackers(color, sq) 129 found = False 130 for a in attackers: 131 p = board.piece_at(a) 132 if p is None: 133 continue 134 if p.piece_type not in (chess.PAWN, chess.KING): 135 found = True 136 break 137 if found: 138 count += 1 139 return count 140 w = active_squares(chess.WHITE) 141 b = active_squares(chess.BLACK) 142 return float((w - b) / 64.0) 143 144 145 Importance: 0.329 146 --- 147 148 149 Feature: 150 def feature(board: chess.Board) -> float: 151 'Pawn structure weakness: (Black penalties - White penalties), positive if Black worse (good for White).
Penalty = doubled*0.5 + isolated*0.7' 152 def pawn_penalty(color): 153 files = {f:0 for f in range(8)} 154 pawn_sqs = [] 155 for sq, p in board.piece_map().items(): 156 if p.piece_type == chess.PAWN and p.color == color: 157 f = chess.square_file(sq) 158 files[f] += 1 159 pawn_sqs.append(sq) 160 doubled = sum(max(0, cnt-1) for cnt in files.values()) 161 isolated = 0 162 for sq in pawn_sqs: 163 f = chess.square_file(sq) 164 if files.get(f-1,0) == 0 and files.get(f+1,0) == 0: 165 isolated += 1 166 return doubled * 0.5 + isolated * 0.7 167 bp = pawn_penalty(chess.BLACK) 168 wp = pawn_penalty(chess.WHITE) 169 return float(bp - wp) 170 171 172 Importance: 0.065 173 --- 174 175 176 Feature: 177 def feature(board: chess.Board) -> float: 178 'King safety pressure: weighted sum of attackers near each king (positive = pressure on Black king > White king)' 179 def king_square(color): 180 for sq, p in board.piece_map().items(): 181 if p.piece_type == chess.KING and p.color == color: 182 return sq 183 return None 184 wk = king_square(chess.WHITE) 185 bk = king_square(chess.BLACK) 186 values = {chess.PAWN:1.0, chess.KNIGHT:3.0, chess.BISHOP:3.25, chess.ROOK:5.0, chess.QUEEN :9.0, chess.KING:0.5} 187 def pressure_on(king_sq, attacker_color): 188 if king_sq is None: 189 return 0.0 190 attackers = board.attackers(attacker_color, king_sq) 191 s = 0.0 192 for a in attackers: 193 p = board.piece_at(a) 194 if p is None: 195 continue 196 dist = chess.square_distance(a, king_sq) 197 s += values.get(p.piece_type, 0.0) / (1.0 + dist) 15 Preprint 198 return s 199 p_on_black = pressure_on(bk, chess.WHITE) 200 p_on_white = pressure_on(wk, chess.BLACK) 201 return float(p_on_black - p_on_white) 202 203 204 Importance: 0.005 205 --- 206 207 208 Feature: 209 def feature(board: chess.Board) -> float: 210 'Undefended high-value threats: (value of Black pieces under more attackers than defenders ) - (value of White pieces similarly threatened)' 211 values = {chess.PAWN:1.0, chess.KNIGHT:3.0, chess.BISHOP:3.25, chess.ROOK:5.0, chess.QUEEN :9.0, chess.KING:0.0} 212 threat_black = 0.0 213 threat_white = 0.0 214 for sq, p in board.piece_map().items(): 215 if p.piece_type == chess.PAWN: 216 continue 217 attackers = board.attackers(not p.color, sq) 218 defenders = board.attackers(p.color, sq) 219 atk = sum(1 for _ in attackers) 220 defn = sum(1 for _ in defenders) 221 if atk > defn and atk > 0: 222 score = values.get(p.piece_type, 0.0) * (atk - defn) 223 if p.color == chess.BLACK: 224 threat_black += score 225 else: 226 threat_white += score 227 return float(threat_black - threat_white) 228 229 Importance: 0.045 230 --- 231 232 233 Feature: 234 def feature(board: chess.Board) -> float: 235 'Material balance (White minus Black) using common piece values: P=1,N=3,B=3.25,R=5,Q=9' 236 values = {chess.PAWN:1.0, chess.KNIGHT:3.0, chess.BISHOP:3.25, chess.ROOK:5.0, chess.QUEEN :9.0, chess.KING:0.0} 237 total = 0.0 238 for sq, piece in board.piece_map().items(): 239 val = values.get(piece.piece_type, 0.0) 240 total += val if piece.color == chess.WHITE else -val 241 return float(total) 242 243 244 Importance: 0.199 245 --- 246 247 248 Feature: 249 def feature(board: chess.Board) -> float: 250 'Normalized mobility difference: (White legal moves - Black legal moves) / 100' 251 try: 252 white_moves = 0 253 black_moves = 0 254 # count current side moves 255 white_board = board.copy() 256 white_board.turn = chess.WHITE 257 white_moves = sum(1 for _ in white_board.legal_moves) 258 black_board = board.copy() 259 black_board.turn = chess.BLACK 
260 black_moves = sum(1 for _ in black_board.legal_moves) 261 return float((white_moves - black_moves) / 100.0) 262 except Exception: 263 return 0.0 264 265 266 Importance: 0.165 267 --- 268 269 270 Feature: 271 def feature(board: chess.Board) -> float: 272 'Passed pawns score: sum of passed-pawn strengths (White minus Black), advanced pawns weighted more' 273 def is_passed(sq, color): 274 f = chess.square_file(sq) 275 r = chess.square_rank(sq) 276 if color == chess.WHITE: 277 ahead_ranks = range(r+1, 8) 278 opp_color = chess.BLACK 279 for ar in ahead_ranks: 280 for df in (-1,0,1): 281 ff = f + df 282 if 0 <= ff <= 7: 308 advancement = (rank - 1) / 6.0 if rank >= 1 else 0.0 309 score_w += 1.0 + advancement 310 else: 311 if is_passed(sq, chess.BLACK): 312 advancement = (6 - rank) / 6.0 if rank <= 6 else 0.0 313 score_b += 1.0 + advancement 340 def feature(board: chess.Board) -> float: 341 "Simple, clear description of what this feature measures" 342 # ... Calculate and return the feature value 343 return result 344 345 def feature(board: chess.Board) -> float: 346 "Another feature description" 347 # ... Calculate and return the feature value 348 return result 349 350 The body of the function can be anything, but the first line (function declaration) should be identical to those examples above, and the second line should be a one-line docstring. Don't output explanatory text - just the function definitions as shown above. 351 352 Optimize for producing discriminant features that are novel compared to the existing features and that are likely to achieve a high importance for scoring positions, once we retrain the model using your new features combined with the existing ones. C.1.2 D-ID3 - PROMPT 1 You are an expert chess programmer creating feature functions to help a machine learning model predict chess position evaluations. 2 3 Your task is to write a feature function that helps discriminate between the board positions given below. 4 A feature function is a Python function that takes a chess Board and computes a feature out of the board. It should return a float, but note that a feature could also be effectively boolean-valued (0.0 or 1.0), or integer-valued, even if its type is float. 5 6 You have access to the following API from the 'chess' library: 7 8 # Chess API Documentation 9 ## Class chess.Board Methods 10 - board.turn: True if White to move, False if Black 11 - board.fullmove_number: Current move number 12 - board.halfmove_clock: Halfmove clock for 50-move rule 13 - board.is_check(): True if current player is in check 14 - board.is_checkmate(): True if current player is checkmated 15 - board.is_stalemate(): True if stalemate 16 - board.is_insufficient_material(): Returns True if insufficient material 17 - board.piece_at(square): Returns piece at given square (or None) 18 - board.piece_map(): Returns dict mapping squares to pieces 19 - board.legal_moves: Iterator over legal moves 20 - board.attackers(color, square): Returns set of squares attacking given square 21 - board.is_attacked_by(color, square): Returns True if square is attacked by color 22 23 ## Chess Squares and Pieces 24 - chess.A1, chess.A2, ..., chess.H8: Square constants 25 - chess.PAWN, chess.KNIGHT, chess.BISHOP, chess.ROOK, chess.QUEEN, chess.KING: Piece types 26 - chess.WHITE, chess.BLACK: Colors 27 - piece.piece_type: Type of piece (PAWN, KNIGHT, etc.)
28 - piece.color: Color of piece (WHITE or BLACK) 29 30 ## Useful Functions 31 - chess.square_name(square): Convert square index to name (e.g., "e4") 32 - chess.parse_square(name): Convert square name to index 33 - chess.square_file(square): Get file (0-7) of square 34 - chess.square_rank(square): Get rank (0-7) of square 35 - chess.square_distance(sq1, sq2): Manhattan distance between squares 36 37 38 # Task 39 Generate 10 new chess board feature functions in Python that: 40 41 1. Help us discriminate between strong and weak board positions, hopefully with positions before and after the optimal split point having the lowest possible variance between their evaluations. 42 2. Return a float value given a board position. 43 3. Handle edge cases gracefully - won't crash on unusual positions 44 45 Your task is to generate diverse, creative features that are relevant to explain the evaluations for the board positions shown above. Focus on features that would help distinguish between positions of different strengths. These features will be used in this decision tree that will predict the evaluation of a given board position in estimated % win probability for white (e.g., 20 means Black winning with around 80% probability). Think about new features that would help such a predictor in the particular cases above, trying to add information that the already existing features shown above are missing. 46 47 # Code Requirements 48 49 - Use single quotes for docstrings: "description here" 50 - No markdown code blocks 51 - No explanatory text after the function 52 - Each function should be complete and standalone, and return a float 53 54 # Output Format 18 Preprint 55 Generate exactly 10 features in this format: 56 57 def feature(board: chess.Board) -> float: 58 "Simple, clear description of what this feature measures" 59 # ... Calculate and return the feature value 60 return result 61 62 def feature(board: chess.Board) -> float: 63 "Another feature description" 64 # ... Calculate and return the feature value 65 return result 66 67 The body of the function can be anything, but the first line (function declaration) should be identical to those examples above, and the second line should be a one-line docstring. Don't output explanatory text - just the function definitions as shown above. 68 69 # Current decision tree node 70 You are currently focusing on features that explain the position evaluation of board positions in the following subtree of a decision tree: 71 72 [root] 73 -> value value value > 0.650 for "Counts the number of squares attacked by pieces of each color to assess control of the board." 74 75 # Board positions 76 Here are examples of board positions in this subtree, along with their position evaluations ( computed by Stockfish): 77 78 Board: 79 . . r r . . k . 80 . . . . . p p p 81 p b . . p . . . 82 . p . . P . . . 83 . P . . N P n q 84 P . . . . . P . 85 . B . . Q . . P 86 R . . . . R . K 87 Evaluation: 38.08281678856247 88 --- 89 90 Board: 91 r . . . . . k . 92 . b . r . p p . 93 p . . P . . . . 94 . p . . . . . Q 95 . P . b . . . . 96 P . . . . . . . 97 . . . . . . P P 98 . . . . . R . K 99 Evaluation: 15.383900101915987 100 --- 101 102 Board: 103 r n b . k . . r 104 p p p . n p p p 105 . . . b . . . . 106 . . . B . . . . 107 . . . P P p . q 108 . . . . . . . . 109 P P P . . . P P 110 R N B Q . K N R 111 Evaluation: 47.700323643230064 112 --- 113 114 Board: 115 . . . r . . . k 116 p p . . . p p . 117 . . . . p . . p 118 . . b r P . . P 119 . q . N R . . . 120 . . . . B . . . 121 . . . 
. . K P . 122 . . R Q . . . . 123 Evaluation: 50.0 124 --- 125 126 Board: 127 r n b q k b n r 128 p p p p . p p p 19 Preprint 129 . . . . . . . . 130 . . . . p . . . 131 . . . . P . . . 132 . . . . . N . . 133 P P P P . P P P 134 R N B Q K B . R 135 Evaluation: 51.19644756166626 136 --- 137 138 139 Optimize for producing discriminant features that are novel compared to the existing features used to arrive at this subtree. Focus on explaining the differences between the board positions shown above. C.1.3 D-ID3 - EXAMPLE FEATURES 1 def feature(board: chess.Board) -> float: 2 "Measure the material balance between both players" 3 material_values = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3, chess.ROOK: 5, chess. QUEEN: 9, chess.KING: 0} 4 white_material = sum(material_values[piece.piece_type] for piece in board.piece_map(). values() if piece.color == chess.WHITE) 5 black_material = sum(material_values[piece.piece_type] for piece in board.piece_map(). values() if piece.color == chess.BLACK) 6 return float(white_material - black_material) 7 8 9 def feature(board: chess.Board) -> float: 10 "Counts the number of pieces for each player and computes the ratio of the piece counts." 11 piece_count_white = sum(1 for piece in board.piece_map().values() if piece.color == chess. WHITE) 12 piece_count_black = sum(1 for piece in board.piece_map().values() if piece.color == chess. BLACK) 13 if piece_count_black == 0: 14 return float('inf') # Black has no pieces left 15 return piece_count_white / piece_count_black 16 17 def feature(board: chess.Board) -> float: 18 "Counts the number of squares attacked by pieces of each color to assess control of the board." 19 white_attacks = sum(board.is_attacked_by(chess.WHITE, square) for square in chess.SQUARES) 20 black_attacks = sum(board.is_attacked_by(chess.BLACK, square) for square in chess.SQUARES) 21 control_ratio = white_attacks / (black_attacks + 1) # Avoid division by zero 22 return float(control_ratio) 23 24 def feature(board: chess.Board) -> float: 25 "Calculates the center control by counting pieces in the central squares." 26 central_squares = [chess.D4, chess.D5, chess.E4, chess.E5] 27 control = sum(1 for square in central_squares if board.piece_at(square) is not None) 28 return float(control) 29 30 def feature(board: chess.Board) -> float: 31 "Calculates the total piece value for each player based on standard chess piece values." 32 piece_values = { 33 chess.PAWN: 1, 34 chess.KNIGHT: 3, 35 chess.BISHOP: 3, 36 chess.ROOK: 5, 37 chess.QUEEN: 9, 38 chess.KING: 0 # King is invaluable 39 } 40 white_value = sum(piece_values[piece.piece_type] for piece in board.piece_map().values() if piece.color == chess.WHITE) 41 black_value = sum(piece_values[piece.piece_type] for piece in board.piece_map().values() if piece.color == chess.BLACK) 42 return float(white_value - black_value) C.2 IMAGE CLASSIFICATION C.2.1 F2 1 You are an expert computer vision programmer creating evaluation features for a machine learning model that predicts values from images. 2 3 This is a classification task with the following classes: 0: landbird, 1: waterbird. 4 5 Your task is to write a feature function that helps discriminate between the image classes above. 20 Preprint 6 A feature function is a Python function that takes an image array and computes a feature out of the image. It should return a float, but note that a feature could also be effectively boolean-valued (0.0 or 1.0), or integer-valued, even if its type is float. 
7 8 You have access to the following API from image processing libraries: 9 10 11 # Image Processing API Documentation 12 13 The features receive an image as a numpy array, so you can use any numpy functions on it. For RGB images, shape is (height, width, 3). For grayscale, shape is (height, width). 14 15 ## Image Processing Methods 16 - image.shape: Returns (height, width, channels) for RGB or (height, width) for grayscale 17 - image.mean(): Average pixel intensity across all channels 18 - image.std(): Standard deviation of pixel intensities 19 - image.max(), image.min(): Maximum and minimum pixel values 20 - np.sum(image): Sum of all pixel values 21 - np.count_nonzero(image): Count of non-zero pixels 22 23 ## Handle Both Grayscale and RGB 24 - Check format: len(image.shape) == 2 for grayscale, len(image.shape) == 3 for RGB 25 - Unpack safely: h, w = image.shape[:2] # Works for both formats 26 - For RGB only: image[:,:,0] (red), image[:,:,1] (green), image[:,:,2] (blue) 27 28 ## Useful Functions 29 - np.mean(image): Average intensity 30 - np.std(image): Standard deviation 31 - np.gradient(image): Image gradients - for RGB use on single channel: np.gradient(image[:,:,0]) 32 - np.where(condition, x, y): Conditional selection 33 - np.argmax(image), np.argmin(image): Location of max/min values 34 - np.percentile(image, q): Percentile values 35 - np.histogram(image.flatten(), bins): Intensity histogram 36 37 ## Spatial Analysis 38 - image[start_row:end_row, start_col:end_col]: Region selection 39 - Center region: image[h//4:3*h//4, w//4:3*w//4] 40 - Edge detection: np.gradient(np.mean(image, axis=2)) for RGB 41 - Color channel differences: image[:,:,0] - image[:,:,1] 42 43 ## Example Feature Function 44 def feature(image: np.ndarray) -> float: 45 "Average pixel intensity in the center region" 46 if len(image.shape) == 3: 47 h, w, c = image.shape 48 gray = np.mean(image, axis=2) 49 else: 50 h, w = image.shape 51 gray = image 52 center_h, center_w = h // 4, w // 4 53 center_region = gray[center_h:3*center_h, center_w:3*center_w] 54 return float(np.mean(center_region)) 55 56 57 ## Current Feature Database 58 Here are some existing features and their importances to the current classifier (importance = benefit from that feature, higher is better): 59 60 Feature: 61 def feature(image: np.ndarray) -> float: 62 'Relative difference in average blue intensity between bottom half and top half' 63 import numpy as np 64 h, w = image.shape[:2] 65 if len(image.shape) != 3 or image.shape[2] < 3: 66 return 0.0 67 b = image[:, :, 2].astype(float) 68 top_mean = np.mean(b[:h // 2, :]) if h // 2 > 0 else np.mean(b) 69 bot_mean = np.mean(b[h // 2:, :]) if h - h // 2 > 0 else np.mean(b) 70 denom = (np.mean(b) + 1e-8) 71 return float((bot_mean - top_mean) / denom) 72 73 74 Importance: 0.081 75 --- 76 77 78 Feature: 79 def feature(image: np.ndarray) -> float: 80 'Aspect ratio (width/height) of bounding box of pixels significantly different from median intensity' 81 import numpy as np 82 h, w = image.shape[:2] 83 if len(image.shape) == 3: 84 gray = np.mean(image, axis=2).astype(float) 85 else: 86 gray = image.astype(float) 87 med = np.median(gray) 88 dynamic_range = np.max(gray) - np.min(gray) 89 thresh = 0.15 * (dynamic_range + 1e-8) 90 mask = np.abs(gray - med) > thresh 91 if not np.any(mask): 92 return float(0.5) 93 rows = np.where(np.any(mask, axis=1))[0] 94 cols = np.where(np.any(mask, axis=0))[0] 95 if rows.size == 0 or cols.size == 0: 96 return float(0.5) 97 r0, r1 = rows[0], rows[-1] 98 c0, c1 = cols[0], cols[-1] 99 bbox_h = (r1 - r0 + 1) 100 bbox_w = (c1 - c0 + 1) 101 return float(bbox_w / (bbox_h +
1e-8)) 102 103 Importance: 0.082 104 --- 105 106 107 Feature: 108 def feature(image: np.ndarray) -> float: 109 'Left-right symmetry score (1.0 = perfectly symmetric, lower = less symmetric)' 110 import numpy as np 111 if len(image.shape) == 3: 112 gray = np.mean(image, axis=2).astype(float) 113 else: 114 gray = image.astype(float) 115 flipped = np.fliplr(gray) 116 diff = np.mean(np.abs(gray - flipped)) 117 # normalize by image contrast to avoid small-image issues 118 denom = np.mean(np.abs(gray - np.mean(gray))) + 1e-8 119 norm_diff = diff / denom 120 score = 1.0 - np.tanh(norm_diff) # bounded between ̃0 and 1 121 return float(np.clip(score, 0.0, 1.0)) 122 123 124 Importance: 0.084 125 --- 126 127 128 Feature: 129 def feature(image: np.ndarray) -> float: 130 'Ratio of average horizontal gradient magnitude to vertical gradient magnitude (texture orientation)' 131 import numpy as np 132 # compute gray 133 if len(image.shape) == 3: 134 gray = np.mean(image, axis=2).astype(float) 135 else: 136 gray = image.astype(float) 137 gy, gx = np.gradient(gray) 138 mean_dx = np.mean(np.abs(gx)) 139 mean_dy = np.mean(np.abs(gy)) 140 return float(mean_dx / (mean_dy + 1e-8)) 141 142 143 Importance: 0.159 144 --- 145 146 147 Feature: 148 def feature(image: np.ndarray) -> float: 149 'Proportion of pixels where green channel is significantly high (vegetation cue)' 150 import numpy as np 151 h, w = image.shape[:2] 152 if len(image.shape) != 3 or image.shape[2] r) & (g > b) & (g > thresh) 159 return float(np.count_nonzero(mask) / (h * w + 1e-12)) 160 22 Preprint 161 162 Importance: 0.094 163 --- 164 165 166 Feature: 167 def feature(image: np.ndarray) -> float: 168 'Fraction of pixels brighter than the 90th percentile (bright spot proportion)' 169 import numpy as np 170 if len(image.shape) == 3: 171 gray = np.mean(image, axis=2).astype(float) 172 else: 173 gray = image.astype(float) 174 thresh = np.percentile(gray.flatten(), 90) 175 frac = np.count_nonzero(gray > thresh) / (gray.size + 1e-12) 176 return float(frac) 177 178 179 Importance: 0.080 180 --- 181 182 183 Feature: 184 def feature(image: np.ndarray) -> float: 185 'Normalized difference between center region brightness and border brightness' 186 import numpy as np 187 h, w = image.shape[:2] 188 if len(image.shape) == 3: 189 gray = np.mean(image, axis=2).astype(float) 190 else: 191 gray = image.astype(float) 192 ch0, ch1 = h // 4, w // 4 193 center = gray[ch0:3 * ch0 or h, ch1:3 * ch1 or w] 194 # border defined as whole minus center 195 mask_border = np.ones_like(gray, dtype=bool) 196 mask_border[ch0:3 * ch0 or h, ch1:3 * ch1 or w] = False 197 center_mean = np.mean(center) if center.size > 0 else 0.0 198 border_mean = np.mean(gray[mask_border]) if np.any(mask_border) else 0.0 199 denom = np.mean(np.abs(gray)) + 1e-8 200 return float((center_mean - border_mean) / denom) 201 202 203 Importance: 0.088 204 --- 205 206 207 Feature: 208 def feature(image: np.ndarray) -> float: 209 'Proportion of pixels where blue channel is dominant (blue > red and blue > green)' 210 import numpy as np 211 h, w = image.shape[:2] 212 if len(image.shape) != 3 or image.shape[2] r) & (b > g) 217 return float(np.count_nonzero(mask) / (h * w + 1e-12)) 218 219 220 Importance: 0.117 221 --- 222 223 224 Feature: 225 def feature(image: np.ndarray) -> float: 226 'Edge density: fraction of pixels with gradient magnitude above (mean+std)' 227 import numpy as np 228 if len(image.shape) == 3: 229 gray = np.mean(image, axis=2).astype(float) 230 else: 231 gray = image.astype(float) 232 gy, gx = 
np.gradient(gray) 233 mag = np.hypot(gx, gy) 234 thresh = np.mean(mag) + np.std(mag) 235 count = np.count_nonzero(mag > thresh) 236 total = gray.size 237 return float(count / (total + 1e-12)) 238 239 240 Importance: 0.114 241 --- 23 Preprint 242 243 244 Feature: 245 def feature(image: np.ndarray) -> float: 246 'Average per-pixel color "saturation" approximated by (max-min)/max across channels' 247 import numpy as np 248 if len(image.shape) != 3 or image.shape[2] float: 285 "Clear description of what this feature measures" 286 # ... Calculate and return the feature value 287 return float(result) 288 289 def feature(image: np.ndarray) -> float: 290 "Another feature description" 291 # ... Calculate and return the feature value 292 return float(result) 293 294 The body of the functions can be anything, but the first line (function declaration) should be identical to those examples above (always 'def feature(...)'), and the second line should be a one-line docstring. Don't output explanatory text - just the function definitions as shown above. C.2.2 D-ID3 - PROMPT 1 You are an expert image processing programmer creating feature functions to help a machine learning model perform image classification. 2 3 This is a classification task with the following classes: 0: digit zero, 1: digit one, 2: digit two, 3: digit three, 4: digit four, 5: digit five, 6: digit six, 7: digit seven, 8: digit eight, 9: digit nine. 4 5 Your task is to write a feature function that helps discriminate between the image samples given below. 6 A feature function is a Python function that takes an image array and computes a feature out of the image. It should return a float, but note that a feature could also be effectively boolean-valued (0.0 or 1.0), or integer-valued, even if its type is float. 7 8 You have access to the following API from image processing libraries: 9 10 11 # Image Processing API Documentation 12 24 Preprint 13 The features receive an image as a numpy array, so you can use any numpy functions on it. For RGB images, shape is (height, width, 3). For grayscale, shape is (height, width). 
14 15 ## Image Processing Methods 16 - image.shape: Returns (height, width, channels) for RGB or (height, width) for grayscale 17 - image.mean(): Average pixel intensity across all channels 18 - image.std(): Standard deviation of pixel intensities 19 - image.max(), image.min(): Maximum and minimum pixel values 20 - np.sum(image): Sum of all pixel values 21 - np.count_nonzero(image): Count of non-zero pixels 22 23 ## Handle Both Grayscale and RGB 24 - Check format: len(image.shape) == 2 for grayscale, len(image.shape) == 3 for RGB 25 - Unpack safely: h, w = image.shape[:2] # Works for both formats 26 - For RGB only: image[:,:,0] (red), image[:,:,1] (green), image[:,:,2] (blue) 27 28 ## Useful Functions 29 - np.mean(image): Average intensity 30 - np.std(image): Standard deviation 31 - np.gradient(image): Image gradients - for RGB use on single channel: np.gradient(image [:,:,0]) 32 - np.where(condition, x, y): Conditional selection 33 - np.argmax(image), np.argmin(image): Location of max/min values 34 - np.percentile(image, q): Percentile values 35 - np.histogram(image.flatten(), bins): Intensity histogram 36 37 ## Spatial Analysis 38 - image[start_row:end_row, start_col:end_col]: Region selection 39 - Center region: image[h//4:3*h//4, w//4:3*w//4] 40 - Edge detection: np.gradient(np.mean(image, axis=2)) for RGB 41 - Color channel differences: image[:,:,0] - image[:,:,1] 42 43 ## Example Feature Function 44 def feature(image: np.ndarray) -> float: 45 "Average pixel intensity in the center region" 46 if len(image.shape) == 3: 47 h, w, c = image.shape 48 gray = np.mean(image, axis=2) 49 else: 50 h, w = image.shape 51 gray = image 52 center_h, center_w = h // 4, w // 4 53 center_region = gray[center_h:3*center_h, center_w:3*center_w] 54 return float(np.mean(center_region)) 55 56 57 # Task 58 Generate 10 new image feature functions in Python that: 59 60 1. Help us discriminate between different image classes, hopefully with samples before and after the optimal split point having the lowest possible variance between their classifications. 61 2. Return a float value given an image. 62 3. Handle edge cases gracefully - won't crash on unusual images 63 64 Your task is to generate diverse, creative features that are relevant to explain the classifications for the image samples shown above. Focus on features that would help distinguish between samples of different classes. These features will be used in this decision tree that will predict the classification of a given image sample. Think about new features that would help such a predictor in the particular cases above, trying to add information that the already existing features shown above are missing. 65 66 # Code Requirements 67 - Use single quotes for docstrings: "description here" 68 - No markdown code blocks 69 - No explanatory text after the function 70 - Each function should be complete and standalone, and return a float 71 72 # Output Format 73 Generate exactly 10 features in this format: 74 75 def feature(image: np.ndarray) -> float: 76 "Simple, clear description of what this feature measures" 77 # ... Calculate and return the feature value 78 return result 79 80 def feature(image: np.ndarray) -> float: 81 "Another feature description" 82 # ... Calculate and return the feature value 83 return result 84 25 Preprint 85 The body of the function can be anything, but the first line (function declaration) should be identical to those examples above, and the second line should be a one-line docstring. 
Don't output explanatory text - just the function definitions as shown above. 86 87 # Current decision tree node 88 You are currently focusing on features that explain the image classifications in the following subtree of a decision tree: 89 90 [root] 91 -> value > 0.344 for "Ratio of edge pixels to the total number of pixels" -> value > 72.865 for "Computes the brightness of the central region as a percentage of the whole image" -> value > 0.805 for "Determines the ratio of vertical to horizontal gradients in the image, indicating edge direction" 92 93 # Image samples 94 Here are examples of image samples in this subtree, along with their target classifications: 95 96 Sample (Target: 9) 97 Sample (Target: 6) 98 Sample (Target: 7) 99 Sample (Target: 4) 100 Sample (Target: 4) 101 102 Optimize for producing discriminant features that are novel compared to the existing features used to arrive at this subtree. Focus on explaining the differences between the image samples shown above. C.2.3 D-ID3 - EXAMPLE FEATURES 1 def feature(image: np.ndarray) -> float: 2 "Ratio of edge pixels to the total number of pixels" 3 if len(image.shape) == 3: 4 gray = np.mean(image, axis=2) 5 else: 6 gray = image 7 edges = np.gradient(gray.astype(float)) 8 edge_pixels = np.count_nonzero(edges[0]) + np.count_nonzero(edges[1]) 9 total_pixels = gray.size 10 return float(edge_pixels / total_pixels) 11 12 13 def feature(image: np.ndarray) -> float: 14 "Computes the brightness of the central region as a percentage of the whole image" 15 if len(image.shape) == 3: 16 gray = np.mean(image, axis=2) 17 else: 18 gray = image 19 h, w = gray.shape 20 center_region = gray[h//4:3*h//4, w//4:3*w//4] 21 total_brightness = np.sum(gray) 22 center_brightness = np.sum(center_region) 23 return float(center_brightness) / (total_brightness + 1e-7) * 100 24 25 26 def feature(image: np.ndarray) -> float: 27 "Determines the ratio of vertical to horizontal gradients in the image, indicating edge direction" 28 if len(image.shape) == 3: 29 gray = np.mean(image, axis=2) 30 else: 31 gray = image 32 grad_y, grad_x = np.gradient(gray) 33 vertical_grad = np.sum(np.abs(grad_x)) 34 horizontal_grad = np.sum(np.abs(grad_y)) 35 return float(vertical_grad) / (horizontal_grad + 1e-7) # Avoid division by zero 36 37 def feature(image: np.ndarray) -> float: 38 "Calculates the ratio of bright pixels (above a threshold) to total pixels in the image" 39 if len(image.shape) == 3: 40 gray = np.mean(image, axis=2) 41 else: 42 gray = image 43 threshold = 200 # A threshold for bright pixels 44 bright_pixels = np.count_nonzero(gray > threshold) 45 total_pixels = gray.size 46 return float(bright_pixels) / total_pixels 47 48 def feature(image: np.ndarray) -> float: 49 "Measures the contrast of the image based on the standard deviation of pixel intensities" 50 if len(image.shape) == 3: 26 Preprint 51 gray = np.mean(image, axis=2) 52 else: 53 gray = image 54 return float(np.std(gray)) C.3 TEXT CLASSIFICATION C.3.1 F2 1 You are an expert text analysis programmer creating evaluation features for a machine learning model that classifies text. 2 3 4 # Text Processing API Documentation 5 6 The features receive text as a string, so you can use any string methods and text processing functions. 
7 8 ## String Methods 9 - text.lower(), text.upper(): Case conversion 10 - text.strip(): Remove whitespace 11 - text.split(delimiter): Split into list 12 - text.count(substring): Count occurrences 13 - text.startswith(prefix), text.endswith(suffix): Check prefixes/suffixes 14 - text.find(substring): Find position of substring 15 - text.replace(old, new): Replace text 16 17 ## Text Analysis 18 - len(text): Length of text 19 - text.isdigit(), text.isalpha(), text.isalnum(): Character type checks 20 - sum(1 for c in text if c.isupper()): Count uppercase letters 21 - text.split(): Split on whitespace to get words 22 23 ## Regular Expressions (re module) 24 - re.findall(pattern, text): Find all matches 25 - re.search(pattern, text): Find first match 26 - re.sub(pattern, replacement, text): Replace patterns 27 - len(re.findall(r'\w+', text)): Count words 28 - len(re.findall(r'[.!?]', text)): Count sentences 29 30 ## Useful Patterns 31 - Word count: len(text.split()) 32 - Sentence count: text.count('.') + text.count('!') + text.count('?') 33 - Average word length: sum(len(word) for word in text.split()) / len(text.split()) 34 - Punctuation density: sum(1 for c in text if not c.isalnum() and not c.isspace()) / len(text) 35 36 ## Example Feature Function 37 def feature(text: str) -> float: 38 "Average word length in the text" 39 words = text.split() 40 if not words: 41 return 0.0 42 return sum(len(word) for word in words) / len(words) 43 44 45 ## Current Feature Database 46 Here are some existing features and their performance (Performance improvement = benefit from that feature, higher is better): 47 48 Feature: 49 def feature(text: str) -> float: 50 'Average number of words per sentence (sentences split on . ! ?)' 51 import re 52 if not text or not text.strip(): 53 return float(0.0) 54 # Split on one or more sentence-ending punctuation and filter empties 55 sentences = [s.strip() for s in re.split(r'[.!?]+', text) if s.strip()] 56 if not sentences: 57 return float(0.0) 58 total_words = sum(len(s.split()) for s in sentences) 59 return float(total_words / len(sentences)) 60 61 62 Importance: 0.057 63 --- 64 65 66 Feature: 67 def feature(text: str) -> float: 68 'Average character-uniqueness per word (unique chars / word length), averaged over words' 69 words = [w for w in text.split() if any(ch.isalnum() for ch in w)] 70 if not words: 71 return float(0.0) 72 ratios = [] 73 for w in words: 74 chars = [c for c in w if not c.isspace()] 75 if not chars: 76 continue 77 unique = len(set(chars)) 78 ratios.append(unique / len(chars)) 79 if not ratios: 80 return float(0.0) 81 return float(sum(ratios) / len(ratios)) 82 83 84 Importance: 0.057 85 --- 86 87 88 Feature: 89 def feature(text: str) -> float: 90 'Proportion of alphabetic characters that are uppercase' 91 if not text: 92 return float(0.0) 93 alpha_chars = [c for c in text if c.isalpha()] 94 if not alpha_chars: 95 return float(0.0) 96 upper_count = sum(1 for c in alpha_chars if c.isupper()) 97 return float(upper_count / len(alpha_chars)) 98 99 100 Importance: 0.083 101 --- 102 103 104 Feature: 105 def feature(text: str) -> float: 106 'Proportion of long words (length > 7) among all words' 107 words = [w for w in text.split() if w] 108 if not words: 109 return float(0.0) 110 long_count = sum(1 for w in words if len(w) > 7) 111 return float(long_count / len(words)) 112 113 114 Importance: 0.194 115 --- 116 117 118 Feature: 119 def feature(text: str) -> float: 120 'Punctuation characters per word (punctuation = not alnum and not whitespace)'
121 if not text or not text.strip(): 122 return float(0.0) 123 words = text.split() 124 if not words: 125 return float(0.0) 126 punct_count = sum(1 for c in text if not c.isalnum() and not c.isspace()) 127 return float(punct_count / len(words)) 128 129 130 Importance: 0.096 131 --- 132 133 134 Feature: 135 def feature(text: str) -> float: 136 'Proportion of sentences that are questions (based on ? count over total sentence terminators)' 137 if not text or not text.strip(): 138 return float(0.0) 139 question_marks = text.count('?') 140 sentence_terminators = text.count('.') + text.count('!') + text.count('?') 141 if sentence_terminators == 0: 142 return float(0.0) 143 return float(question_marks / sentence_terminators) 144 145 Importance: 0.023 146 --- 147 148 149 Feature: 150 def feature(text: str) -> float: 151 'Type-token ratio: unique word tokens / total words (case-insensitive, alphanumeric tokens)' 152 import re 153 tokens = re.findall(r'\w+', text.lower()) 154 if not tokens: 155 return float(0.0) 156 unique = len(set(tokens)) 157 return float(unique / len(tokens)) 158 159 160 Importance: 0.089 161 --- 162 163 164 Feature: 165 def feature(text: str) -> float: 166 'Ratio of tokens that contain at least one digit' 167 tokens = text.split() 168 if not tokens: 169 return float(0.0) 170 num_with_digit = sum(1 for t in tokens if any(ch.isdigit() for ch in t)) 171 return float(num_with_digit / len(tokens)) 172 173 174 Importance: 0.165 175 --- 176 177 178 Feature: 179 def feature(text: str) -> float: 180 'Longest run of the same character normalized by text length' 181 if not text: 182 return float(0.0) 183 max_run = 1 184 current_run = 1 185 prev = text[0] 186 for c in text[1:]: 187 if c == prev: 188 current_run += 1 189 if current_run > max_run: 190 max_run = current_run 191 else: 192 current_run = 1 193 prev = c 194 return float(max_run / max(1, len(text))) 195 196 197 Importance: 0.126 198 --- 199 200 201 Feature: 202 def feature(text: str) -> float: 203 'Stopword density: fraction of tokens that are common English stopwords' 204 stopwords = { 205 'the','and','is','in','it','of','to','a','an','that','this','for','on','with', 206 'as','by','at','from','or','be','are','was','were','has','have','not','but', 207 'they','their','you','I' 208 } 209 tokens = [t.lower().strip(".,!?;:\"'()[]") for t in text.split()] 210 if not tokens: 211 return float(0.0) 212 stop_count = sum(1 for t in tokens if t and t in stopwords) 213 return float(stop_count / len(tokens)) 214 215 216 Importance: 0.111 217 --- 218 219 220 ## Task 221 Generate 10 NEW text features that: 222 223 1. Are different from existing features 224 2. Capture useful textual patterns 225 3. Return float values 226 4. Handle edge cases gracefully - Won't crash on unusual texts 227 5. Use simple, short docstrings - Use single quotes, not triple quotes 228 6. Are efficient to compute 229 230 Your task is to generate diverse, creative features that capture different aspects of text content for classification. Focus on features that would help distinguish between different text classes. These features will be used as input to a learned model that predicts target values from text.
## IMPORTANT CODE REQUIREMENTS
- Use SINGLE quotes for docstrings: "description here"
- NO triple quotes (""") anywhere in the code
- NO markdown code blocks
- NO explanatory text after the function
- Each function should be complete and standalone

## Output Format
Generate exactly 10 features in this format:

def feature(text: str) -> float:
    "Clear description of what this feature measures"
    # ... Calculate and return the feature value
    return float(result)

def feature(text: str) -> float:
    "Another feature description"
    # ... Calculate and return the feature value
    return float(result)

The body of the functions can be anything, but the first line (function declaration) should be identical to those examples above (always 'def feature(...)'), and the second line should be a one-line docstring. Don't output explanatory text - just the function definitions as shown above.

C.3.2 D-ID3 - PROMPT

You are an expert text analysis programmer creating feature functions to help a machine learning model perform text classification.

This is a classification task with the following classes: 0: human-written text, 1: AI-generated text.

Your task is to write a feature function that helps discriminate between the text samples given below.
A feature function is a Python function that takes a text string and computes a feature out of the text. It should return a float, but note that a feature could also be effectively boolean-valued (0.0 or 1.0), or integer-valued, even if its type is float.

You have access to the following API from text processing libraries:

# Text Processing API Documentation

The features receive text as a string, so you can use any string methods and text processing functions.

## String Methods
- text.lower(), text.upper(): Case conversion
- text.strip(): Remove whitespace
- text.split(delimiter): Split into list
- text.count(substring): Count occurrences
- text.startswith(prefix), text.endswith(suffix): Check prefixes/suffixes
- text.find(substring): Find position of substring
- text.replace(old, new): Replace text

## Text Analysis
- len(text): Length of text
- text.isdigit(), text.isalpha(), text.isalnum(): Character type checks
- sum(1 for c in text if c.isupper()): Count uppercase letters
- text.split(): Split on whitespace to get words

## Regular Expressions (re module)
- re.findall(pattern, text): Find all matches
- re.search(pattern, text): Find first match
- re.sub(pattern, replacement, text): Replace patterns
- len(re.findall(r'\w+', text)): Count words
- len(re.findall(r'[.!?]', text)): Count sentences

## Useful Patterns
- Word count: len(text.split())
- Sentence count: text.count('.') + text.count('!') + text.count('?')
- Average word length: sum(len(word) for word in text.split()) / len(text.split())
- Punctuation density: sum(1 for c in text if not c.isalnum() and not c.isspace()) / len(text)

## Example Feature Function
def feature(text: str) -> float:
    "Average word length in the text"
    words = text.split()
    if not words:
        return 0.0
    return sum(len(word) for word in words) / len(words)

# Task
Generate 10 new text feature functions in Python that:

1. Help us discriminate between different text classes, hopefully with samples before and after the optimal split point having the lowest possible variance between their classifications.
2. Return a float value given a text string.
3. Handle edge cases gracefully - won't crash on unusual texts

Your task is to generate diverse, creative features that are relevant to explain the classifications for the text samples shown above. Focus on features that would help distinguish between samples of different classes. These features will be used in this decision tree that will predict the classification of a given text sample. Think about new features that would help such a predictor in the particular cases above, trying to add information that the already existing features shown above are missing.

# Code Requirements
- Use single quotes for docstrings: "description here"
- No markdown code blocks
- No explanatory text after the function
- Each function should be complete and standalone, and return a float

# Output Format
Generate exactly 10 features in this format:

def feature(text: str) -> float:
    "Simple, clear description of what this feature measures"
    # ... Calculate and return the feature value
    return result

def feature(text: str) -> float:
    "Another feature description"
    # ... Calculate and return the feature value
    return result

The body of the function can be anything, but the first line (function declaration) should be identical to those examples above, and the second line should be a one-line docstring. Don't output explanatory text - just the function definitions as shown above.

# Current decision tree node
You are currently focusing on features that explain the text classifications in the following subtree of a decision tree:

[root]
-> value > 4.468 for "Average character length of words in the text"
-> value > 0.024 for "Calculates the proportion of text that is in passive voice"
-> value ...

def feature(text: str) -> float:
    "Average character length of words in the text"
    words = text.split()
    if not words:
        return 0.0
    return sum(len(word) for word in words) / len(words)

def feature(text: str) -> float:
    "Calculates the proportion of text that is in passive voice"
    passive_pattern = r'\b(?:is|was|were|be|being|been)\s+\w+ed\b'
    passive_count = len(re.findall(passive_pattern, text))
    return float(passive_count) / len(text.split()) if text.split() else 0.0

def feature(text: str) -> float:
    "Assesses the use of passive voice constructions in the text"
    passive_voice = re.findall(r'\b(is|are|was|were|be|being|been)\s+\w+ed\b', text)
    return float(len(passive_voice))

def feature(text: str) -> float:
    "Counts the number of transition words to evaluate the flow of text"
    transition_words = set(['however', 'therefore', 'moreover', 'furthermore', 'nevertheless', 'consequently'])
    words = text.lower().split()
    count = sum(1 for word in words if word in transition_words)
    return float(count)

def feature(text: str) -> float:
    "Measures the proportion of first-person pronouns in the text"
    first_person_pronouns = set(['i', 'me', 'my', 'mine', 'we', 'us', 'our', 'ours'])  # lowercased to match the lowercased words below
    words = text.lower().split()
    count = sum(1 for word in words if word in first_person_pronouns)
    return float(count) / len(words) if words else 0.0
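To make the intended use of such generated features concrete, here is a minimal, hypothetical sketch (ours, not from the paper) of how a decision-tree node like the one traced above might route a sample; the split list is illustrative, since the later splits of the printed path were lost in extraction.

def avg_word_len(text: str) -> float:
    "Average character length of words in the text"
    words = text.split()
    if not words:
        return 0.0
    return sum(len(w) for w in words) / len(words)

# Root-to-node splits mirroring the '[root] -> value > 4.468 for ...' trace above.
PATH = [(avg_word_len, 4.468)]

def reaches_node(text: str) -> bool:
    # A sample belongs to this subtree iff it satisfies every split on the path.
    return all(f(text) > thr for f, thr in PATH)

# Example: reaches_node('Lexical sophistication increases average word length considerably.')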
arXiv:2510.14821v1 [math.LO] 16 Oct 2025

THE ORDERING PRINCIPLE AND HIGHER DEPENDENT CHOICE

PETER HOLY AND JONATHAN SCHILHAN

Abstract. We provide, for any regular uncountable cardinal κ, a new argument for Pincus' result on the consistency of ZF with the higher dependent choice principle DC<κ and the ordering principle in the presence of a failure of the axiom of choice. We also generalise his methods and obtain these consistency results in a larger class of models.

2020 Mathematics Subject Classification. 03E25, 03E35, 06A05.
Key words and phrases. Ordering Principle, Dependent Choice, Symmetric extensions.
This research was funded in whole or in part by the Austrian Science Fund (FWF) [10.55776/ESP5711024]. For open access purposes, the authors have applied a CC BY public copyright license to any author-accepted manuscript version arising from this submission.

1. Introduction

The ordering principle, OP, is the statement that every set can be linearly ordered. The axiom of choice, AC, in one of its equivalent forms, states that every set can be wellordered, and thus clearly implies OP. If δ is an infinite cardinal, the principle DCδ of higher dependent choice can be stated as follows: whenever T is a tree without terminal nodes that is closed under increasing sequences of length less than δ, then it contains an increasing sequence of length δ. Note that by an easy argument (see [2, Section 8]), these principles become stronger as δ increases. The principle of dependent choice DC, that is the statement that whenever R is a relation on a set X with the property that ∀x ∈ X ∃y ∈ X x R y there exists a sequence ⟨xi | i < ω⟩ of elements of X such that ∀i < ω xi R xi+1, is easily seen to be equivalent to DCω. Finally, for an uncountable cardinal κ, DC<κ denotes the statement that DCδ holds whenever δ < κ is a cardinal.

In his [3], Pincus provided two arguments for the consistency of ZF + OP + DC + ¬AC (in fact, ¬DCω1). His first argument builds on the basic Cohen model (adding countably many Cohen subsets of ω and then passing to a symmetric submodel where AC, but also DC, fails), and then adding certain maps on top of that, in order to resurrect DC. Since it was difficult to follow anything beyond Pincus' basic outline of the argument in [3], we provided a modern presentation of this result in our [1]. Pincus' second argument, which is even harder to grasp, in fact yielded the (stronger) consistency of ZF + OP + DC<κ + ¬AC (in fact, ¬DCκ) for an arbitrary regular and uncountable cardinal κ (while preserving cardinals at least up to and including κ). In fact, we didn't manage to follow much of Pincus' original arguments here at all, but analysing a notion of hereditary almost disjointness that is introduced in his [3], we came up with a similar notion of hereditarily almost disjoint towers, and eventually with a new proof of Pincus' consistency result.¹ Over a suitable ground model (for example, Gödel's constructible universe), we now obtain the above consistency result (as did Pincus) starting with add(κ, κ), the standard forcing notion to add κ-many Cohen subsets of κ, and then continuing in κ-many steps, where at each stage 0 < α < κ, we add κ-many maps from cardinals less than κ to the set of things that we have added so far, in a careful way. We finally obtain our desired model by passing to a suitable symmetric submodel of the above-described forcing extension of our universe.
While the very basic construction may seem somewhat similar to the one that we presented in [1] at first glance, both the construction and the arguments here are in fact very much different. We also provide further models witnessing these consistency results, that is, if κ < κ+ < λ are both regular and uncountable cardinals, we obtain a model of ZF + OP + DC<λ + ¬DCλ starting with add(κ, λ), and then continuing to add certain maps in λ-many steps.

Throughout this paper, let κ be a fixed regular and uncountable cardinal, and let λ be a fixed regular and uncountable cardinal such that either κ = λ or κ < κ+ < λ. (Note in particular that this excludes the case λ = κ+.) The case when λ = κ will produce the models that are essentially due to Pincus, while the case λ > κ+ will produce new models for the above described consistency results.

2. Hereditarily almost disjoint towers

A key ingredient of our constructions will be what we call hereditarily almost disjoint (or HAD) towers. They are fairly similar to and strongly inspired by the concept of HAD functions introduced by Pincus in [3].²

Definition 1. We say that p is a λ-tower if:
• p is a function with domain dom(p) ⊆ (λ \ {0}) × λ and |dom(p)| < λ,
• If (α, β) ∈ dom(p), then for some nonzero cardinal δ < λ, p(α, β): δ → α × λ is an injection.
• If (α, β0) and (α, β1) are both in dom p, then p(α, β0) ≠ p(α, β1).

Given λ-towers p and q, we say that q extends p, and write q ≤ p, if q ⊇ p. We will write pα,β or p(α,β) rather than p(α, β). Since λ will be fixed throughout our paper, we will simply write tower rather than λ-tower.

Definition 2. Let p be a tower. We define the target of p to be
t(p) = dom(p) ∪ ⋃_{γ ∈ dom(p)} range(pγ).
We say that p is complete if t(p) \ ({0} × λ) = dom(p).

¹This also yields a different (and in fact, probably somewhat easier than the one provided in [1]) argument for the consistency of ZF + OP + DC + ¬AC.
²The actual conditions that we will use for our forcing notion, that we will define in the next section of this paper, will contain further information (or in order to be somewhat more specific already, this part of our conditions will then work on adding λ-many Cohen subsets of κ), for which we will leave space at level 0 of our towers below.

Note that by the regularity of λ, |t(p)| < λ. Given towers p and q, we say that they are compatible if there is a tower r such that r ≤ p, q. Note that in this case, p ∪ q is their (unique) greatest lower bound in the ordering of towers. Similarly, if {pi | i ∈ I} is a family of towers that has a common lower bound with respect to ≤, ⋃_{i∈I} pi is their greatest lower bound, which is again a tower. Note that whenever a union of complete towers is a tower, then it is complete.

Definition 3. Given a complete tower p, and a set e ⊆ λ × λ, we define the target t(p, e) ⊆ λ × λ of p on e, by inductively defining a sequence ⟨tn(p, e) | n < ω⟩, with each tn(p, e) ⊆ t(p), and then taking t(p, e) = ⋃_{n<ω} tn(p, e), as follows:
• t0(p, e) = e ∩ t(p).
• Given tn(p, e), let tn+1(p, e) = tn(p, e) ∪ ⋃ {range(pγ) | γ ∈ tn(p, e) \ ({0} × λ)}.

Note that if α < λ is such that e ⊆ α × λ, then also t(p, e) ⊆ α × λ. Note also that t(p, λ × λ) = t(p, t(p)) = t(p).
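As a toy illustration of Definitions 1-3 (an example of our own, not taken from the paper), typeset in LaTeX:

% A two-point tower: dom(p) = {(1,0)}, with p_{1,0} : 1 \to 1\times\lambda
% the injection sending 0 to (0,5). Then
%   t(p) = \mathrm{dom}(p) \cup \mathrm{range}(p_{1,0}) = \{(1,0),(0,5)\},
% and t(p) \setminus (\{0\}\times\lambda) = \{(1,0)\} = \mathrm{dom}(p), so p is complete.
% The target of p on e = \{(1,0)\} closes off after one step:
\[
t_0(p,e) = \{(1,0)\}, \qquad
t_1(p,e) = t_0(p,e) \cup \mathrm{range}(p_{1,0}) = \{(1,0),(0,5)\} = t(p,e).
\]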
This now allows us to introduce what is essentially Pincus' concept of hereditary almost disjointness [3]:

Definition 4 (HAD towers). Let p be a complete tower. If d ⊆ t(p), we say that d is finitely generated (in p) if there is a finite set e ⊆ d such that d = t(p, e). We also say that d is (finitely) generated by e (in p) in this case. We say that p is hereditarily almost disjoint, or HAD, if whenever γ0, γ1 ∈ t(p), then t(p, {γ0}) ∩ t(p, {γ1}) is finitely generated (in p).

Given two compatible HAD towers p and q, p ∪ q is easily seen to be a HAD tower. An analogous remark applies to arbitrary families of HAD towers with a common lower bound. By the finitary nature of the HAD property, any ≤-decreasing <λ-sequence of HAD towers has a HAD tower as its greatest lower bound. Adding elements to the target of a HAD tower is essentially trivial:

Lemma 5. If p is a HAD tower, and α, β < λ with (α, β) ∉ t(p), then there is a HAD tower q ≤ p such that
• (α, β) ∈ t(q) and
• t(q) is the disjoint union t(q) = t(p) ∪ t(q, {(α, β)}).

Proof. If α = 0, pick ¯β such that (1, ¯β) ∉ dom(p). Let q1,¯β be the function with domain 1 that maps 0 to (0, β), and let qγ = pγ for γ ∈ dom(p). If α > 0, pick ¯β < λ such that (0, ¯β) ∉ t(p), let qα,β be the function with domain 1 that maps 0 to (0, ¯β), and let qγ = pγ for γ ∈ dom(p). Note that in both cases, since qα,β ≠ pα,β′ whenever (α, β′) ∈ dom(p), q is a complete tower, and it obviously has the two properties listed in the statement of the lemma. The HAD property of q trivially follows from the HAD property of p together with the second of these properties. □

An easy to verify, yet crucial property of HAD towers is that they can be extended so that the range of a single element covers the target of the original tower.

Lemma 6. If p is a HAD tower, then there is a HAD tower q ≤ p and an ordinal α∗ < λ such that:
• t(q) = t(q, {(α∗, 0)}).
• t(p) = ran(qα∗,0).

Proof. Pick α∗ < λ such that dom(p) ⊆ α∗ × λ. Let t(p) be enumerated by ⟨tϵ | ϵ < δ⟩ for a cardinal δ < λ. Extend p to a complete tower q ≤ p by setting qα∗,0 = ⟨tϵ | ϵ < δ⟩, and letting qγ = pγ otherwise. We need to check that q is a HAD tower. Note that if γ ∈ t(q), then t(q, {γ}) ∩ t(q, {(α∗, 0)}) = t(q, {γ}), which is finitely generated (by {γ}). If γ0, γ1 ∈ t(q) are both different to (α∗, 0), i.e., elements of t(p), then t(q, {γ0}) ∩ t(q, {γ1}) = t(p, {γ0}) ∩ t(p, {γ1}), which is finitely generated in the HAD tower p, and thus also in q. □

Lemma 7. Let p be a HAD tower, let n ∈ ω, and let γ0, . . . , γn ∈ t(p). Then, ⋂_{i≤n} t(p, {γi}) is finitely generated (in p).

Proof. First note that for any e ⊆ t(p), t(p, e) = ⋃_{γ∈e} t(p, {γ}). We verify the lemma by induction on n. The case n = 0 is trivial. Suppose inductively that the lemma is true for a particular value n ≥ 0, and let γ0, . . . , γn, γn+1 ∈ t(p). Then,
⋂_{i≤n+1} t(p, {γi}) = (⋂_{i≤n} t(p, {γi})) ∩ t(p, {γn+1}) = t(p, e) ∩ t(p, {γn+1}) = ⋃_{γ∈e} (t(p, {γ}) ∩ t(p, {γn+1})) = ⋃_{γ∈e} t(p, eγ) = t(p, ⋃_{γ∈e} eγ),
for appropriate finite e ⊆ t(p) and eγ ⊆ t(p) for γ ∈ e, using the HAD property and our inductive hypothesis. □

3. Our forcing notion

The forcing notion that we use will be the product P0 × P1, where P0 = add(κ, λ) and P1 is the set of all HAD towers, ordered by extension as in Definition 1. Let us agree that whenever I ⊆ Ord, we think of conditions q in add(κ, I), the standard forcing notion to add a Cohen subset of κ for every i ∈ I, as sequences ⟨qα | α ∈ J⟩ with a domain J that is a <κ-size subset of I, and with sequents being functions from some ordinal less than κ to 2. These conditions are ordered by componentwise reverse inclusion, as usual. For the sake of simplicity of notation, conditions p = (p0, ¯p) ∈ P = P0 × P1 will also be written as
p = ⟨pα,β | (α = 0 ∧ β ∈ dom p0) ∨ (α > 0 ∧ (α, β) ∈ dom(¯p))⟩.
We let dom p = ({0} × dom p0) ∪ dom ¯p, and we think of p as a function with domain dom p. We let t(p) = ({0} × dom p0) ∪ t(¯p), and also t(p, e) = (e ∩ ({0} × dom p0)) ∪ t(¯p, e) whenever e ⊆ λ × λ. If α < λ, we also let pα = ⟨pα,β | β < λ ∧ (α, β) ∈ dom(p)⟩ and we let dom pα = {β | (α, β) ∈ dom p}.

Assume the GCH, and that there is a global wellorder (say for example that we start in L).³ P0 = add(κ, λ) is <κ-closed and κ+-cc. Since HAD towers are closed under <λ-unions, P1 is <λ-closed. Using the GCH, P is also of size λ, so forcing with P preserves all cardinals.⁴

For any β < λ, let ˙g0,β be the canonical P0 = add(κ, λ)-name, which we can also think of as a P-name, for the βth Cohen subset of κ added. We now proceed to define further objects inductively. Given 0 < α < λ, assume that we have defined ˙g¯α,β whenever ¯α < α and β < λ. We also allow for the notation ˙g(¯α,β) rather than ˙g¯α,β. For every β < λ, let ˙gα,β denote the canonical P-name for the function with domain dom pα,β mapping any given ϵ ∈ dom pα,β to ˙gpα,β(ϵ) whenever p is a HAD tower in the generic filter with (α, β) ∈ t(p). To be precise,
˙gα,β := {(p, (ˇϵ, ˙gpα,β(ϵ))•) | p ∈ P, (α, β) ∈ t(p), ϵ ∈ dom pα,β}.⁵
For every α < λ, let ˙Aα = {˙gα,β | β < λ}•, and for α ≤ λ, let ˙A<α = ⋃_{¯α<α} ˙A¯α. Let ˙A = ˙A<λ. If G is P-generic, α, β < λ, and we are in a context where G is the only P-generic that we currently make use of, we let gα,β = (˙gα,β)G, Aα = (˙Aα)G etc. Let ˙G be the canonical P-name for the P-generic filter.

4. Our symmetric system

We next define a symmetric system S = ⟨P, G, F⟩ using the notion of forcing P that we have already defined above.

Definition 8. Let G be the set of sequences π = ⟨πα | α < λ⟩ of permutations of λ, with each sequent moving only less than λ-many ordinals, and with only less than λ-many nontrivial sequents, which form a group using componentwise composition. Given such π, we let π act on λ × λ, letting, for (α, β) ∈ λ × λ, π((α, β)) = (α, πα(β)). If δ < λ is a cardinal and f : δ → λ × λ, we let π(f) be the function with domain δ such that π(f)(ϵ) = π(f(ϵ)) for every ϵ < δ. We let π ∈ G act on a condition p ∈ P as follows:
• dom π(p)α = πα[dom pα] for every α < λ.
• π(p)0,π0(β) = p0,β whenever β ∈ dom p0.
• π(p)α,πα(β) = π(pα,β) whenever α > 0 and β ∈ dom pα.

Note that for every e ⊆ t(p), t(π(p), π[e]) = π[t(p, e)]. This implies that the HAD property is preserved from p to π(p), that is π(p) ∈ P.

We use finite support to define our filter F on the set of subgroups of G, that is, F is generated by the subgroups fix(e) = {π ∈ G | π ↾ e = id} ≤ G for e ⊆ λ × λ finite. Note that π fix(e)π⁻¹ = fix(π[e]), so F is indeed a normal filter. The symmetry group of a P-name ˙x is sym(˙x) = {π ∈ G | π(˙x) = ˙x}, and if fix(e) ≤ sym(˙x), we also say that e is a support of ˙x. Note that for α, β < λ, π(˙gα,β) = ˙gπ(α,β) = ˙gα,πα(β). In particular, each ˙gα,β is symmetric, with symmetry group fix({(α, β)}). Moreover, each ˙Aα is symmetric with symmetry group G, as is each ˙A<α, and also ⟨˙Aα | α < λ⟩•.

³It is easy to see that the GCH could be replaced by somewhat weaker assumptions here; we will leave the details of figuring out what exactly is needed to the interested reader.
⁴It would be enough for a meaningful result if it preserved all cardinals ≤ λ.
⁵Given a finite tuple (˙x0, . . . , ˙xn) of P-names, (˙x0, . . . , ˙xn)• denotes the canonical P-name for the tuple consisting of the evaluations of the ˙xi.
Likewise, for a set X of P-names, X• denotes the canonical P-name for the set containing exactly the evaluations of the elements of X. For any set I, ⟨˙xi | i ∈ ˇI⟩• denotes the canonical P-name for the I-sequence of evaluations of the ˙xi.

We will later use the following standard fact, which says that we can uniformly find names for definable objects. We include the short proof for the convenience of our readers.

Fact 9. Let φ(u, v0, . . . , vn) be a formula in the language of set theory. Then, there is a definable class function F so that for any S-names ˙x0, . . . , ˙xn and p ∈ P with p ⊩S ∃!y φ(y, ˙x0, . . . , ˙xn), ˙y = F(p, ˙x0, . . . , ˙xn) is an S-name with ⋂_{i≤n} sym(˙xi) ≤ sym(˙y) so that p ⊩S φ(˙y, ˙x0, . . . , ˙xn).

Proof. Let γ be the least ordinal such that p ⊩S ∃y ∈ HS•γ φ(y, ˙x0, . . . , ˙xn). Let F(p, ˙x0, . . . , ˙xn) = ˙y = {(q, ˙z) ∈ P × HSγ | q ⊩ ∀y(φ(y, ˙x0, . . . , ˙xn) → ˙z ∈ y)}. □

5. The failure of AC

We first verify a fairly general lemma.

Lemma 10 (Restriction Lemma). Let φ be a formula in the language of set theory and let ˙x be an S-name with support e ∈ [λ × λ]<ω. Whenever p ⊩S φ(˙x), already the restriction p ↾ t(p, e) of p to t(p, e), defined in the obvious way, forces φ(˙x).

Proof. Assume for a contradiction that there is q ≤ p ↾ t(p, e) which forces ¬φ(˙x). Pick a permutation π = ⟨πγ | γ < λ⟩ ∈ G such that π fixes t(p, e) = t(q, e) pointwise, and which swaps t(q) \ t(q, e) with a set that is disjoint from t(q). Such π can easily be found. We will thus reach a contradiction if we can show that p ∥ π(q). We will verify the stronger statement that q ∥ π(q).

Claim 11. q ∥ π(q).

Proof. Let r be the componentwise union r = q ∪ π(q), which makes sense as any γ ∈ t(q) ∪ t(π(q)) is contained in exactly one of t(q, e), t(q) \ t(q, e) or t(π(q)) \ t(q, e) by our choice of π. In the first case, qγ = π(q)γ, while in the remaining two cases, γ is contained in either t(q) or t(π(q)), but not both simultaneously. We are left to show that r has the HAD property and is thus a condition in P. The only nontrivial case is when γ0 ∈ t(q) \ t(q, e) and γ1 ∈ t(π(q)) \ t(q, e). But then, the following hold:
• t(r, {γ0}) = t(q, {γ0}).
• ∃γ′ ∈ t(q) \ t(q, e) γ1 = π(γ′).
• t(r, {γ1}) = t(π(q), {π(γ′)}) = π[t(q, {γ′})].
• By our choice of π, t(q, {γ0}) ∩ π[t(q, {γ′})] ⊆ t(q, e), since already t(q) ∩ t(π(q)) = t(q) ∩ π[t(q)] ⊆ t(q, e).

We will be essentially done once we show the following:

Claim 12. t(q, {γ0}) ∩ π[t(q, {γ′})] = t(q, {γ0}) ∩ t(q, {γ′}) ∩ t(q, e).

Proof. If ¯γ is an element of the left hand side expression of the above equation, it follows that ¯γ ∈ t(q, e) by the final of the above items. It thus follows that π(¯γ) = ¯γ, which means that ¯γ ∈ t(q, {γ′}), and thus it is an element of the right hand side expression. In the other direction, if ¯γ is an element of the right hand side expression, we again obtain that π(¯γ) = ¯γ and then that ¯γ is an element of the left hand side expression. □

Now, since q is HAD, using Lemma 7, we find a finite c ⊆ t(q) such that
t(r, {γ0}) ∩ t(r, {γ1}) = t(q, {γ0}) ∩ t(q, {γ′}) ∩ t(q, e) = t(q, c).
This finishes the argument to show that r is a HAD tower. □ □

Theorem 13. Let G be P-generic. There is no choice function for the sequence ⟨Aα | α < λ⟩ in V[G]S. This implies that DCλ, and hence in particular AC, fails in V[G]S.

Proof. Assume for a contradiction that ˙F is an S-name which is forced by some condition p ∈ P to actually be such a choice function.
Let e ⊆ λ × λ be finite such that fix(e) ≤ sym(˙F). Pick α < λ such that α > max dom(e). Pick q ≤ p and β < λ such that q ⊩ ˙F(ˇα) = ˙gα,β and, using Lemma 5, (α, β) ∈ t(q). Pick a permutation π = ⟨πγ | γ < λ⟩ ∈ G such that π fixes t(q, e) pointwise, and which swaps t(q) \ t(q, e) with a set that is disjoint from t(q). Such π can easily be found, and since (α, β) ∉ t(q, e), π(α, β) = (α, β′) for some β′ ≠ β. Then, ⊩ π(˙gα,β) = ˙gα,β′ ≠ ˙gα,β, and also π(q) ⊩ ˙F(ˇα) = ˙gα,β′. But this is a contradiction since q ∥ π(q) by Claim 11 – note that we are in exactly the same situation as in that claim. □

6. Minimal Supports

In this section, we want to introduce a concept of minimal supports for S-names, and show that every S-name has such a minimal support.

Definition 14. Let p ∈ P. We say that a finite subset a ⊆ t(p) is irreducible (in p) if t(p, b) ⊊ t(p, a) whenever b ⊊ a.

Lemma 15. If ˙x and ˙y are S-names with finite supports a, b ⊆ λ × λ respectively, and p ∈ P is such that p ⊩ ˙x = ˙y and a ∪ b ⊆ t(p), then there is an irreducible c ⊆ t(p, a) ∩ t(p, b), and an S-name ˙z with fix(c) ≤ sym(˙z), such that p ⊩ ˙z = ˙x.

Proof. Consider ˙y′ = {(s, τ) : ∃(r, τ) ∈ ˙y s ≤ r, p}. Clearly, p ⊩ ˙y = ˙y′. Using the HAD property (together with the assumption that a and b are both finite), let c ⊆ t(p, a) ∩ t(p, b) be finite such that t(p, c) = t(p, a) ∩ t(p, b). By possibly shrinking c by one element finitely many times, we may additionally assume that c is irreducible. Now, simply consider
˙z = ⋃_{π ∈ fix(c)} π(˙y′).
We obviously have ˙z ∈ HS and fix(c) ≤ sym(˙z). We claim that indeed p ⊩ ˙z = ˙x. Toward this end, let G be an arbitrary P-generic containing the condition p. We already know that ˙xG = (˙y′)G = id(˙y′)G ⊆ ˙zG. Thus, it suffices to show that for any π ∈ fix(c), π(˙y′)G ⊆ ˙xG. So let π ∈ fix(c). If π(p) ∉ G, clearly π(˙y′)G = ∅, as every condition appearing in a pair in π(˙y′) is below π(p). So assume that π(p) ∈ G. Let d = {γ ∈ λ × λ | π(γ) ≠ γ} ∪ t(p), which is of size less than λ. Pick σ = ⟨σα | α < λ⟩ ∈ fix(t(p, a)) so that σ swaps the elements of b \ t(p, a) with pairs of ordinals in (λ × λ) \ d, and such that σ(p) ∈ G. This is possible:

Claim 16. For any q ≤ p there exists σ ∈ fix(t(p, a)) that swaps the elements of b \ t(p, a) with pairs of ordinals in (λ × λ) \ d, and for which we have q ∥ σ(p). Thus, by the genericity of G, there exists a desired σ with σ(p) ∈ G.

Proof. Let q ≤ p, and let e = d ∪ t(q). Pick σ = ⟨σα | α < λ⟩ ∈ G fixing t(p, a) pointwise, and which swaps t(q) \ t(p, a) with a set that is disjoint from e. Such σ can easily be found. Remember that t(q, a) = t(p, a). Arguing exactly as in Claim 11 (with σ in place of π, and with a in place of e), we obtain the stronger conclusion that q ∥ σ(q). Now, this shows that for any q ≤ p there is a permutation σ which is as desired, and we may thus pick r ≤ q, σ(p). This yields a dense set of conditions r, so we may pick one such r ∈ G. For the corresponding permutation σ, it thus follows that σ(p) ∈ G, as desired. □

Then, note that σ(p) ⊩ ˙x = σ(˙y′) = σ(˙y). Note also that π(σ(p) ∪ p) ∈ G, since, by the properties of σ, it is weaker than π(p) ∪ σ(p) ∈ G. Since fix(b) ≤ sym(˙y), it follows that fix(σ[b]) ≤ sym(σ(˙y)). Let's take a closer look at σ[b]. It can be written as a disjoint union of ¯a := σ[b] ∩ t(p, a) and of ¯b := σ[b] \ t(p, a). The set ¯a is pointwise fixed by σ, because t(p, a) is, so in fact, ¯a = b ∩ t(p, a) ⊆ t(p, a) ∩ t(p, b) ⊆ t(p, c). The set ¯b is pointwise fixed by π, as follows easily from the definition of σ. That is, π ∈ fix(c ∪ ¯b).
We also have σ[b] = ¯a ∪ ¯b ⊆ t(p ∪ σ(p), c ∪ ¯b) by the above. Thus, by Lemma 24, there is a name ˙y∗ ∈ HS with fix(c ∪ ¯b) ≤ sym(˙y∗) and such that p ∪ σ(p) ⊩ ˙y∗ = σ(˙y). This means that π ∈ sym(˙y∗), and therefore, π(σ(p) ∪ p) ∪ p ⊩ π(˙x) = ˙y∗ = σ(˙y). Overall, since also π(p) ⊩ π(˙x) = π(˙y′), it follows in particular that ˙xG = σ(˙y)G = π(˙x)G = π(˙y′)G, as desired. □

Definition 17. Let p ∈ P. We define a relation ⊴p on the set of all irreducible subsets of t(p), letting, for a, b irreducible in p, a ⊴p b if t(p, a) ⊆ t(p, b). We define the strict relation ⊲ by setting a ⊲ b if a ⊴ b ∧ a ≠ b.

We will usually omit the subscript p when the relevant tower is clear from context. Note also that if q ≤ p are complete towers and a ⊴p b, then also a ⊴q b, and also if a ⊴q b and b ⊆ t(p), then also a ⊆ t(p), and a ⊴p b.

Lemma 18. Let p be a complete tower. Then, ⊴ = ⊴p is a well-founded partial order.

Proof. Clearly, ⊴ is transitive and reflexive. In order to check antisymmetry, suppose for a contradiction that t(p, a) = t(p, b) but a ≠ b. Let α be largest so that aα ≠ bα, where aα := {β | (α, β) ∈ a}, and similarly for b. Say, without loss of generality, that β ∈ aα \ bα. As (α, β) ∈ t(p, a) = t(p, b), there must be some ¯α > α and ¯β < λ with (¯α, ¯β) ∈ b and (α, β) ∈ t(p, {(¯α, ¯β)}). But then (¯α, ¯β) ∈ a as well, as α was chosen largest with aα ≠ bα. We obtain that t(p, a) = t(p, a \ {(α, β)}), so a is not irreducible, which is a contradiction.

To check well-foundedness, for an irreducible a ⊆ t(p), let
δ(a) := ∑_{α ∈ dom a} ω^α · |aα|,
using ordinal arithmetic. It suffices to note that a ⊲ b implies δ(a) < δ(b). Towards this end, again, let α be largest so that aα ≠ bα. We claim that aα ⊆ bα. In particular then, aα must be a strict subset of bα and we obtain that δ(a) < δ(b). So suppose otherwise, that there is β ∈ aα \ bα. Just as before, we obtain that a is not irreducible, using that t(p, a) ⊆ t(p, b), which is again a contradiction. □
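As a quick illustration of the rank δ (our own example, not from the paper), the highest level on which two irreducible sets differ dominates the comparison:

% Suppose a_2 = \{\beta\} and a_0 = \{\beta_0,\beta_1\}, while b_2 = \{\beta,\beta'\}
% (with arbitrary finite sets on the levels below 2). Then, in ordinal arithmetic,
\[
\delta(a) = \omega^2 \cdot 1 + \omega^0 \cdot 2 = \omega^2 + 2
\;<\; \omega^2 \cdot 2 \;\le\; \delta(b),
\]
% no matter how many points b has on levels 0 and 1, since those only contribute
% terms below \omega^2.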
Theorem 19 (Minimal Supports). If ˙x is an S-name and p ∈ P, then there is q ≤ p, a unique (with respect to q) irreducible (in q) b ⊆ t(q), and ˙y ∈ HS with support b for which q ⊩ ˙y = ˙x, and whenever a ⊲ b and ˙z is an S-name with support a, then q ⊩ ˙z ≠ ˙x. We say that b is the minimal support for ˙x below q in this case.

Proof. Use Lemma 15 repeatedly, in order to obtain successively stronger conditions qi ≤ p, S-names ˙yi and successively smaller (according to ⊲) irreducible bi, such that for each i, qi ⊩ ˙yi = ˙x and fix(bi) ≤ sym(˙yi). By Lemma 18, this construction has to break down after a final finite stage i. Then clearly, qi, bi and ˙yi are as desired, where the uniqueness of bi follows from the fact that ⊴ is a partial order, that is if some irreducible b satisfies b ⊴ bi and bi ⊴ b, then already b = bi. □

Note that if b is the minimal support for an S-name ˙x below a condition q ∈ P and r ≤ q, then b is also the minimal support for ˙x below r. Moreover, if π ∈ G, then π[b] is the minimal support for π(˙x) below π(q).

7. The Ordering Principle

We now want to show that the ordering principle holds in our symmetric extension. The arguments in this section will be very similar to the corresponding arguments presented in [1].

Lemma 20. There is an S-name ˙< for a linear order of ˙A, such that sym(˙<) = G.

Proof. In any model of ZF, we can consider the definable sequence of sets ⟨Xα : α ∈ Ord⟩, obtained recursively by setting X0 = ^κ2, Xα+1 = ^ω(Xα) and Xα = ⋃_{β<α} Xβ for limit α. We can recursively define linear orders <α on Xα, by letting <0 be the lexicographic ordering on ^κ2, <α+1 be the lexicographic ordering on Xα+1 obtained from <α, and for limit α, x <α y iff, for β least such that x ∈ Xβ, either y ∉ Xγ for all γ ≤ β, or y ∈ Xβ and x <β y. Then <λ is a definable linear order of Xλ. Note that ˙A is forced to be contained in Xλ, and by Fact 9, there is an S-name ˙< as required. □

Theorem 21. There is a class S-name ˙F for an injection of the symmetric extension by S into Ord × ˙A<ω such that sym(˙F) = G. In particular, OP holds in our symmetric extension.

Proof. Fix a global well-order ≺ of our ground model V, and let G be P-generic over V. We first provide a definition of such an injection F in the full P-generic extension V[G]. Then, we will observe that all the parameters in this definition have symmetric names, which will let us directly build an S-name ˙F for F.

For each a ∈ [λ × λ]<ω and each enumeration h = ⟨γi : i < k⟩ of a, define ˙Ga = {˙gγ | γ ∈ a}• and ˙th = ⟨˙gγi : i < k⟩•. Define ˙Γ = {π(˙G) : π ∈ G}•. While ˙Γ is not an S-name in general, it is still a symmetric P-name. Let Γ = ˙ΓG and < = ˙<G. Given x ∈ V[G]S, F(x) will be found as follows: First, let (p, ˙z, a, h) be ≺-minimal with the following properties:
(1) in V, a is the minimal support for ˙z below p,
(2) in V, h is an enumeration of a so that p forces that ˙th enumerates ˙Ga in the order of ˙<,
(3) in V[G], there is H ∈ Γ with p ∈ H and ˙zH = x.
Such a tuple certainly exists by Theorem 19 and since G ∈ Γ.

Claim 22. For any H, K ∈ Γ with p ∈ H, K, the following are equivalent:
(a) (˙th)H = (˙th)K,
(b) ˙zH = ˙zK.

Proof. Let H, K ∈ Γ, p ∈ H, K. H is itself a P-generic filter, and ˙ΓH = ˙ΓG = Γ, as can be easily checked. Thus, there is π ∈ G so that K = π(˙G)H. Now, note that π(˙G)H = π⁻¹[H] and (˙th)K = (˙th)π⁻¹[H] = π(˙th)H. Similarly, ˙zK = π(˙z)H.

Suppose that (˙th)H = (˙th)K. Then, (˙th)H = π(˙th)H. By the way that permutations act on the names ˙gγ (see Section 4), and thus on ˙th, the only way this is possible is if π(γ) = γ for every γ ∈ a. In other words, π ∈ fix(a). Thus, ˙zH = π(˙z)H = ˙zK.

Now, suppose that ˙zH = ˙zK = π(˙z)H. Since p ∈ K = π⁻¹[H], it follows that π(p) ∈ H. Thus, there is r ≤ p, π(p) in H with r ⊩ ˙z = π(˙z). Since a is the minimal support for ˙z below p, and hence also below r, also π[a] is the minimal support for π(˙z) below π(p), hence also below r. But by the uniqueness property in Theorem 19, this implies that π[a] = a. This also means that ˙Ga = π(˙Ga). As p forces that ˙th is the ˙<-enumeration of ˙Ga, π(p) forces that π(˙th) is the π(˙<)-enumeration of π(˙Ga). Since p ∈ K and π(p) ∈ H, this implies that (˙th)K = π(˙th)H is the enumeration of π(˙Ga)H = (˙Ga)H according to π(˙<)H = <, which is exactly what (˙th)H is. □

By the claim, there is a unique t ∈ A<ω so that t = (˙th)H, for some, or equivalently all, H ∈ Γ with p ∈ H and ˙zH = x. We let F(x) = (ξ, t), where (p, ˙z, a, h) is the ξth element of V according to ≺. To see that this is an injection, assume that x and y both yield the same (p, ˙z, a, h) and t. Let H, K ∈ Γ with p ∈ H, K, and with ˙zH = x, ˙zK = y. By our definition, t = (˙th)H = (˙th)K, and according to the claim, x = ˙zH = ˙zK = y. This finishes the definition of F.
The definition we have just given can be rephrased as F(x) = y iff φ(x, y, Γ, <), where φ is a first order formula using the parameters Γ and <, and the only parameters that are not shown are parameters from V, such as the class ≺ or the class of tuples (p, ˙z, a, h) so that (1) and (2) hold. Simply let
˙F = {(p, (˙x, ˙y)•) : ˙x, ˙y ∈ HS ∧ p ⊩P φ(˙x, ˙y, ˙Γ, ˙<)},
where the parameters from V in φ are replaced by their check-names. Then, ˙F ⊆ P × HS, and sym(˙F) = G, so ˙F is a class S-name, as desired.

It follows that OP holds in any symmetric extension by S since by Lemma 20 and Fact 9, there is an S-name for a linear order of Ord × ˙A<ω, which can be pulled back to produce a class that is a linear order of the sets of our symmetric extension using F. □

8. Higher dependent choice

Recall the symmetric P-name ˙Γ = {π(˙G) : π ∈ G}• from the previous proof. We need the following fairly general result:

Lemma 23. Let ˙x be a P-name and e ∈ [λ × λ]<ω so that fix(e) ≤ sym(˙x). Whenever G is P-generic, x = ˙xG and Γ = ˙ΓG, then x is definable in V[G] from elements of V, from Γ and from ⟨gγ | γ ∈ e⟩, as the only parameters.

Proof. In V[G], define y to consist exactly of those z so that z ∈ ˙xH for some H ∈ Γ with (˙gγ)G = (˙gγ)H for all γ ∈ e. We claim that x = y. Clearly, x ⊆ y as G ∈ Γ. Now suppose that H ∈ Γ is arbitrary, so that (˙gγ)G = (˙gγ)H for all γ ∈ e. Then, H = π(˙G)G, for some π ∈ G. We obtain that (˙gγ)G = (˙gγ)H = π(˙gγ)G = (˙gπ(γ))G, for each γ ∈ e. But this is only possible if π ∈ fix(e). So also ˙xH = π(˙x)G = ˙xG, and we are done. □

A key idea of our forcing construction is captured by the following lemma.

Lemma 24. Let p ∈ P and ˙y ∈ HS have finite support e0 ⊆ t(p, e1), for some e1 ∈ [t(p)]<ω. Then, there is ˙y∗ ∈ HS with support e1 such that p ⊩ ˙y = ˙y∗.

Proof. Using Lemma 23, whenever G is P-generic, y = ˙yG and Γ = ˙ΓG, then y is definable (by a fixed formula that does not depend on the particular choice of generic G) in V[G] from elements of V, from Γ and from ⟨gγ | γ ∈ e0⟩ as the only parameters. Note that if p ∈ G, since e0 ⊆ t(p, e1), each gγ for γ ∈ e0 is definable in V[G] from some gγ′ with γ′ ∈ e1. More specifically, there is a finite sequence n0, . . . , nk of ordinals (in V, that can be read off from p) such that p ⊩ ˙gγ′(n0)(n1) . . . (nk) = ˙gγ. So we can find a formula φ such that
p ⊩ ˙y = {w | φ(w, ˙Γ, ⟨˙gγ′ | γ′ ∈ e1⟩•, ˇv)}
for some v ∈ V. For some large enough ξ, define
˙y∗ = {(r, ˙w) ∈ P × HSξ | r ⊩ φ(˙w, ˙Γ, ⟨˙gγ′ | γ′ ∈ e1⟩•, ˇv)}.
We obtain that fix(e1) ≤ sym(˙y∗) and p ⊩ ˙y = ˙y∗, as desired. □

Theorem 25. Let G be P-generic. If λ = κ, then V[G]S is closed under <κ-sequences in V[G]. In particular thus, since DC<κ holds in V[G] |= ZFC, it follows that DC<κ holds in V[G]S.

Proof. Let ⃗x be a δ-sequence of elements ⟨xϵ | ϵ < δ⟩ of V[G]S in V[G], for some cardinal δ < κ. Let x = {xϵ | ϵ < δ} denote the range of ⃗x, and let ˙x and ˙⃗x be P-names for x and ⃗x respectively. For some p ∈ G and some large enough ordinal ξ, p ⊩ ˙x ⊆ HS•ξ. By further strengthening p, using that P is <κ-closed, we can find a sequence of S-names ⟨˙xϵ | ϵ < δ⟩ so that p ⊩ ˙⃗x is a function with domain δ and ∀ϵ < δ ˙⃗x(ϵ) = ˙xϵ. For each ϵ < δ, there is eϵ ∈ [κ × κ]<ω so that fix(eϵ) ≤ sym(˙xϵ). Let α < κ be a large enough ordinal so that for each ϵ < δ, there is such eϵ in [α × κ]<ω, and such that α ≥ dom(p). Let e = ⋃_{ϵ<δ} eϵ, which is of size at most δ < κ.
Using Lemma 5, the <κ-closure of P, and Lemma 6, let q ≤ p, α∗ ≥ α, and let q ∈ G be a HAD tower with the property that e ⊆ t(q) = t(q, {(α∗, 0)}).

Fix some ϵ < δ. By Lemma 24, we find ˙x′ϵ ∈ HS with support {(α∗, 0)} such that q ⊩ ˙xϵ = ˙x′ϵ. Let ˙⃗y = ⟨˙x′ϵ | ϵ < δ⟩•. Then, fix({(α∗, 0)}) ≤ sym(˙⃗y), and we obtain that ⃗x = ˙⃗yG ∈ V[G]S, as desired. □

Theorem 26. Let G be P-generic. If λ > κ+, then DC<λ holds in V[G]S.

Proof. Suppose that ˙T is an S-name for a <δ-closed (in the symmetric extension) tree without terminal nodes, where, without loss of generality, κ < δ < λ is regular. Let p0 = (p0^0, ¯p0) ∈ P be arbitrary. By possibly strengthening p0, we may assume that the support of ˙T is contained in t(p0). We want to find a condition q ≤ p0 forcing that ˙T contains an increasing sequence of length δ in order to verify the theorem. Fix a name ˙F as obtained from Theorem 21.

We will recursively define a decreasing sequence ⟨pξ : ξ < δ⟩ in P, with each pξ of the form pξ = (p0^0, ¯pξ), and a ⊆-increasing sequence ⟨Xξ : ξ < δ⟩, where Xξ ⊆ Ord and |Xξ| < λ, for each ξ < δ. Initially, we are already given p0 and we let X0 = ∅. At limit steps ξ < δ, we let Xξ = ⋃_{ξ′<ξ} Xξ′ and we pick pξ to be a lower bound for ⟨pξ′ : ξ′ < ξ⟩. At successor steps, given p = pξ and X = Xξ, we proceed as follows. First, by extending p, using Lemma 6, we can assume that there is γ ∈ t(p) such that t(p) = t(p, {γ}). Fix, for now, a P-generic G with p ∈ G, and let T := ˙TG, F := ˙FG and gα,β := (˙gα,β)G, for every (α, β) ∈ λ × λ. Note that the least ZF-model extending V and containing gγ as an element is V(gγ) = V[⟨g0,β : (0, β) ∈ t(p)⟩], which is an add(κ, t(p) ∩ ({0} × λ))-generic extension, and thus a model of ZFC. Moreover define Ap := {gγ′ : γ′ ∈ t(p)} ∈ V(gγ) and note that Ap has size < λ. In particular, V(gγ) |= |(X × Ap<ω)<δ| < λ. Whenever ⟨(ηi, ai) : i < δ′⟩ ∈ (X × Ap<ω)<δ ∩ V(gγ), the sequence ⟨F⁻¹(ηi, ai) : i < δ′⟩ may or may not be a chain in T. In case it is, since T is closed under increasing sequences of length less than δ, there is some (η, e) ∈ Ord × (λ × λ)<ω, so that F⁻¹(η, ge) is an upper bound, where ge is defined as ⟨gei : i < |e|⟩ when e = ⟨ei | i < |e|⟩. All in all, in V[G], there is Y ⊆ Ord and E ⊆ λ × λ, both of size <λ, such that we can find pairs (η, e) witnessing any of the above described instances within Y × E<ω.
Since add(κ, t(p) ∩({0} × λ)) has the κ+-cc and κ < δ, THE ORDERING PRINCIPLE AND HIGHER DEPENDENT CHOICE 13 there is J ⊆t(p) ∩({0} × λ) of size < δ so that ⟨(ηi, ei) : i < δ′⟩∈V [⟨g0,β : β ∈J⟩]. By the regularity of δ, there is ξ < δ such that ηi ∈Xξ and ei ⊆t(pξ) for each i < δ′, and such that 0 × J ⊆t(pξ) = t(pξ, {γ′}) for some (unique) γ′ ∈t(pξ). In particular then, ⟨(ηi, ei) : i < δ′⟩∈V (gγ′), and we ensured in the next step of our above recursive construction that there is an upper bound in V (gγ). □ Finally, constructing a branch ⟨(ηi, ei) : i < δ⟩through ˜T in V (gγ) |= ZFC, we find that ⟨F −1(ηi, gei) : i < δ⟩is a branch through T in V [G]S. □ References [1] Peter Holy and Jonathan Schilhan. The ordering principle and dependent choice. 2025. Sub- mitted. [2] Thomas Jech. The axiom of choice. Dover books on mathematics. Dover publications, 1973. [3] David Pincus. Adding dependent choice. Ann. Math. Logic, 11(1):105–145, 1977. Institut f¨ur diskrete Mathematik und Geometrie, TU Wien, Wiedner Hauptstrasse 8-10/104, 1040 Vienna, Austria Email address: peter.holy@tuwien.ac.at University of Vienna, Institute of Mathematics, Kurt G¨odel Research Center, Kolin- gasse 14-16, 1090 Vienna, Austria Email address: jonathan.schilhan@univie.ac.at
Draft version October 17, 2025
Typeset using LaTeX twocolumn style in AASTeX 7.0.1
arXiv:2510.14820v1 [astro-ph.HE] 16 Oct 2025

Exploring a cosmic ray inverse-Compton origin to the SZ-to-X-ray pressure deficit in the cool core cluster ZwCl 3146

Emily M. Silich,¹ Jack Sayers,¹ Philip F. Hopkins,¹ Charles Romero,² Brian Mason,³ John Orlowski-Scherer,² and Craig L. Sarazin⁴

¹Cahill Center for Astronomy and Astrophysics, California Institute of Technology, Pasadena, CA 91125, USA
²Department of Physics and Astronomy, University of Pennsylvania, 209 South 33rd Street, Philadelphia, PA 19104, USA
³National Radio Astronomy Observatory, 520 Edgemont Rd, Charlottesville, VA 22903
⁴Department of Astronomy and Virginia Institute for Theoretical Astronomy, University of Virginia, P.O. Box 400325, Charlottesville, VA 22904, USA

Email: esilich@caltech.edu

ABSTRACT

We explore the possibility that inverse-Compton (IC) scattering of cosmic microwave background photons by ∼GeV cosmic rays (CRs) injected by the central active galactic nucleus (AGN) in cool core (CC) clusters produces a non-negligible continuum-like X-ray signal that is easily misinterpreted as intracluster medium (ICM) thermal bremsstrahlung continuum. This is particularly relevant to the cooling flow problem: the lack of star formation relative to X-ray-inferred ICM cooling rates. Using ZwCl 3146, a relaxed CC system at z = 0.291, we compare pressure profiles derived via X-rays and the thermal Sunyaev-Zel'dovich (SZ) effect. While SZ measurements probe only thermal ICM electrons, additional CR-IC emission would appear to boost the X-ray-inferred pressure. Relative to unity, we measure a ≃30% decrement in PSZ/PX within 100 kpc of the ZwCl 3146 center at a statistical significance of ≃3.3σ, consistent with predicted deficits from CR-IC contamination in reasonable models of central AGN-driven CR injection. X-ray spectral fits of a two-component model with thermal ICM and CR-IC emission are consistent with CR-IC as the cause of this deficit. We test alternative explanations and systematics that could drive such a decrement, with the leading order systematics associated with halo triaxiality. Collectively, these systematics are unlikely to produce a PSZ/PX decrement ≳10%. While our results establish that non-negligible CR-IC emission is plausible in ZwCl 3146, we stress that more detailed studies of larger cluster samples are required to robustly assess whether CR-IC is relevant to the cooling flow problem.

1. INTRODUCTION

Cool core (CC) galaxy clusters are characterized by central (r ≲ 100 kpc), rapidly-cooling regions of the intracluster medium (ICM) that exhibit radiative cooling timescales shorter than a few Gyr (see A. C. Fabian 1994, for a review). The cooling rates associated with these CCs inferred from X-ray observations are significantly higher than observed star formation rates in the central brightest cluster galaxies (BCGs) or cold gas reservoir quantities allow (e.g., J. N. Bregman et al. 2006; C. P. O'Dea et al. 2008; J. R. Peterson et al. 2001; M. McDonald et al. 2011, 2018). Historically, this discrepancy has been termed the "cooling flow problem", though modern studies have proposed a resolution in the form of a self-regulating cycle comprising the rapidly-cooling ICM as it falls onto the central BCG, which itself hosts an active galactic nucleus (AGN) that, fueled by the infalling gas, provides a form of mechanical heating that re-deposits energy into the surrounding medium and prevents catastrophic cooling of the ICM within the CC (for a review, see B. R.
McNamara & P. E. J. Nulsen 2007). While these scenarios require a relatively fine tuning between the ICM thermodynamics and accretion rate onto the AGN, plausible models for this fine tuning have been proposed (e.g., G. M. Voit et al. 2015).

In support of such a self-regulating scenario, CCs tend to host strong radio AGNs (R. Mittal et al. 2009; M. Sun 2009). The leptonic and kinetic power injected by radio jets in CCs is comparable to the apparent X-ray cooling luminosity within the CC (L. Bîrzan et al. 2004; E. O'Sullivan et al. 2011; J. Hlavacek-Larrondo et al. 2012), and AGN radio luminosities are weakly correlated with the X-ray CC size (W. Liu et al. 2024). However, the physics governing the regulation of ICM cooling and heating in CCs are not well established. The balance of ICM cooling with heating via feedback in galaxy groups and clusters is only self-regulated well on long timescales (M. McDonald et al. 2018), and while many AGN-driven heating mechanisms in CCs have been proposed, it is unclear to what extent these processes are responsible for the thermalization of energy in the ICM: for example, the dissipation of sound waves and weak shocks driven by expanding AGN bubbles (e.g., A. C. Fabian et al. 2003; M. Ruszkowski et al. 2004; W. Forman et al. 2005; W. G. Mathews et al. 2006), AGN-bubble-driven generation and dissipation of gravity waves via turbulence (I. Zhuravleva et al. 2014, 2016; Y. Li et al. 2020), mixing of hot AGN bubble plasma with the ICM (H. Y. K. Yang & C. S. Reynolds 2016; S. Hillel & N. Soker 2016, 2017), and cosmic ray (CR) heating via Alfvén waves (C. Pfrommer 2013).

The correlation of radio AGN luminosity with X-ray CC luminosity implies that a significant population of CRs is being accelerated in CCs. The radio emission is primarily generated by high-energy leptons (E ≫ 10−100 GeV), which, having short lifetimes (≲10⁷ yr; see M. Ruszkowski & C. Pfrommer 2023 for a review), typically generate emission only within a few ∼kpc of the CC centers (e.g., F. de Gasperin et al. 2012). Most of the total CR lepton energy should, however, be contained by a population of low-energy (E ∼ 0.1−1 GeV) leptons (P. F. Hopkins et al. 2025b). In contrast to the high-energy CR leptons generating the radio emission, these low-energy leptons have much longer lifetimes (∼Gyr), so they should diffuse or stream out to larger radii (∼100 kpc) before losing most of their energy, comprising extended, ancient cosmic ray halos (ACRHs; P. F. Hopkins et al. 2025b). These ACRHs would produce ∼keV X-ray emission via inverse Compton (IC) scattering with cosmic microwave background (CMB) photons. Since this CR population is peaked around one GeV, CR-IC from these halos would exhibit thermal continuum-like X-ray spectra (P. F. Hopkins et al. 2025a).

In massive clusters, X-ray emission from CR-IC would be insignificant relative to the thermal bremsstrahlung emission at large radii, but it would appear as ∼keV emission in the cluster cores at radii ≲100 kpc (P. F. Hopkins et al. 2025b). Because the 0.1−1 GeV CR lifetime is of order the age of a cluster (∼Gyr), ACRHs should be present in a significant fraction of clusters. In addition, given the similarity of the power injected by radio jets in CCs and the apparent X-ray cooling luminosity, CR-IC from ACRHs could non-negligibly bias X-ray-inferred CC properties.
The expected emissivity profile for CR-IC emission in these ACRHs is similar to the observed X-ray emission profiles in CCs (P. F. Hopkins et al. 2025b). Therefore, interpreting a non-negligible fraction of the X-ray emission as CR-IC could alleviate the most challenging aspects of the cooling flow problem: the CR-IC contamination could explain why the apparent X-ray cooling rates of CCs are so large compared to constraints on observed cooling rates; the radio AGN properties could be so well-correlated with the apparent X-ray cooling luminosity because the AGN are the direct source of the low-energy leptons producing the CR-IC in X-rays, and the radio emission is tracing a younger population of high-energy leptons; and it would explain why CC clusters appear to follow universal profile shapes in their X-ray-inferred thermodynamic quantities (P. F. Hopkins et al. 2025b).

Because the CR-IC spectra exhibit shapes similar to thermal continuum, observationally determining the fraction of CR-IC in X-rays via spectroscopy is extremely challenging. Perhaps the cleanest test for such CR-IC contributions involves comparison with the thermal Sunyaev-Zel'dovich (SZ) effect, which is the IC scattering of ∼keV thermal electrons in the ICM with CMB photons (see T. Mroczkowski et al. 2019, for a review). Since the number of thermal electrons in the ICM is many orders-of-magnitude larger than the number of CR leptons, and the thermal SZ effect is sensitive to the non-relativistic electron pressure, SZ measurements provide a robust measurement of the true thermal gas pressure of the CC. Conversely, the X-ray-derived pressure profiles of CCs are primarily driven by the X-ray-inferred density, which is derived from the observed soft X-ray emission. Therefore, the pressure derived from X-rays would in general be different, and most often boosted, relative to the true thermal gas pressure when CR-IC contributes to the X-ray continuum, and equal to the true thermal gas pressure in the absence of CR-IC contamination. So, the ratio of SZ-to-X-ray derived pressure (PSZ/PX) as a function of radius in CCs provides a potentially powerful test of these ACRH models, which we will leverage in this study.

ZwCl 3146 is a massive (M500 ≃ 7.7 × 10¹⁴ M⊙; C. E. Romero et al. 2020), relaxed CC cluster (e.g., W. Kausch et al. 2007) at z = 0.291 (S. W. Allen et al. 1992). Notably, ZwCl 3146 exhibits a large X-ray inferred cooling rate (∼1000 M⊙ yr⁻¹; A. C. Edge et al. 1994; E. Egami et al. 2006; W. Kausch et al. 2007; M. McDonald et al. 2018), and it also hosts a central radio source embedded within a diffuse radio minihalo extending out to ∼90 kpc in radius (S. Giacintucci et al. 2014). Given the need to resolve the inner ≃100 kpc of a CC to test for the presence of CR-IC contamination via pressure profile comparisons, deep, high-resolution X-ray and SZ data are required. ZwCl 3146 represents the only such CC with not only adequate available observations in each waveband (having 86 ks of archival Chandra data and being detected at 61σ significance with MUSTANG-2; C. E. Romero et al. 2020), but also a sufficiently low mm-wave-brightness central AGN so as to be disentangled from SZ signal (C. E. Romero et al. 2020). In this work, we explore the possibility that CR-IC generated by CRs injected by the central AGN are contributing significantly to the X-ray continuum in ZwCl 3146, and thus test whether such CR-IC could be biasing X-ray-inferred thermodynamic properties of CCs.
In Section 2, we describe the ZwCl 3146 SZ and X-ray data analysis, and Section 3 provides an overview of the calculation of the SZ- and X-ray-derived pressure profiles from these datasets. We compare these ZwCl 3146 pressure profiles in Section 4, and highlight a detection of an SZ-to-X-ray pressure deficit in the central ∼100 kpc of ZwCl 3146. In Section 5, we discuss physical scenarios that could contribute to the observed pressure deficit, including CR-IC from an ACRH as well as additional possible contaminants and systematics. Our conclusions are given in Section 6. Throughout this work, we assume a concordance cosmology: H0 = 70 km s⁻¹ Mpc⁻¹, Ωm,0 = 0.3, unless otherwise specified.

2. DATA ANALYSIS

2.1. SZ

We use MUSTANG-2 data which has been presented in earlier works, C. E. Romero et al. (2020, 2023). Operating on the 100-m Green Bank Telescope (GBT), MUSTANG-2 achieves 10′′ resolution at full-width half-maximum (FWHM) with an instantaneous field of view (FOV) of 4′.2 (S. R. Dicker et al. 2014). MUSTANG-2 observes targets via on-the-fly mapping. In the case of ZwCl 3146, Lissajous daisy scans, primarily of 2′.5 and 3′.0 scanning radii, were used.

In C. E. Romero et al. (2020), an analysis of the MUSTANG-2 data in the time domain was found to better recover the pressure profile at large radii relative to a more canonical map-domain analysis. The time-domain analysis relies on characterizing the noise in Fourier space. To create a map of MUSTANG-2 for visualization and comparison with the X-ray images, we consider each pixel to be a model component (see Sec. 3.1). The complexity here negates the potential for an explicit solution, and rather a (preconditioned) conjugate gradient descent is used (see again C. E. Romero et al. 2020). We show the resultant map in Figure 1, though, to reiterate, the pressure profile is not derived from this map.

2.2. X-ray

We performed the Chandra X-ray data reduction with CIAO version 4.16 (A. Fruscione et al. 2006), utilizing a modified version of the data reduction pipeline outlined in E. M. Silich et al. (2024) applied to two ZwCl 3146 datasets: Chandra ObsIDs 909 and 9371. We calibrated the raw ACIS-I data for each ObsID using the CalDB version 4.11.3 with chandra_repro. For each dataset, we performed an exposure correction with fluximage, point source identification via wavdetect (P. E. Freeman et al. 2002), and exclusion of point sources from the exposure-corrected data. We filtered the light curves for each observation with deflare and applied the identified good-time-intervals (GTI) to the data. After filtering, the total GTI is 86 ks between ObsIDs 909 and 9371. From these filtered datasets, we generated a "clean" (filtered, exposure-corrected, point-source-free) merged 0.5–7 keV surface brightness (SX) map for each ObsID with flux_obs.

We then constructed blank-sky background event files that are reprojected and scaled to match each of the ZwCl 3146 observations with blanksky. These CalDB blank-sky datasets, which are constructed by averaging deep (high-statistics), point source-free observations across large regions of the sky, characterize contributions from the Chandra particle-induced instrumental background (I. Bartalucci et al. 2014) and astrophysical foreground and background components. For each ObsID, we generated a 0.5–7 keV background-subtracted SX map from the corresponding clean SX map and blank-sky background event file with blanksky_image.
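As a rough sketch of this reduction sequence (ours, not the authors' pipeline), the CIAO tools named above can be chained as below; all paths and parameter values are illustrative assumptions, and real runs require additional inputs (e.g., for wavdetect and deflare) that are omitted here.

# Illustrative chaining of the named CIAO tools via subprocess; assumes a shell
# where CIAO 4.16 has been initialized. Filenames are hypothetical placeholders.
import subprocess

def ciao(cmd: str) -> None:
    subprocess.run(cmd, shell=True, check=True)

for obsid in ("909", "9371"):  # the two ObsIDs used in the paper
    ciao(f"chandra_repro indir={obsid} outdir={obsid}/repro")      # recalibrate with CalDB
    evt = f"{obsid}/repro/evt2.fits"                                # hypothetical event file name
    ciao(f"fluximage {evt} outroot={obsid}/img bands=0.5:7:2.3")    # exposure-corrected image
    # wavdetect (point-source detection) and deflare (light-curve/GTI filtering)
    # would run here, each with several required parameters not shown.
    ciao(f"blanksky evtfile={evt} outfile={obsid}/blank.fits")      # tailored blank-sky background
    ciao(f"blanksky_image bkgfile={obsid}/blank.fits outroot={obsid}/sub "
         f"imgfile={obsid}/img_0.5-7_thresh.img")                   # background-subtracted S_X map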
We merged the background-subtracted images for each ObsID to generate a single ZwCl 3146 clean background-subtracted 0.5–7 keV counts map. Then, we calculated the X-ray peak of ZwCl 3146 from this map. We defined a set of circular annuli relative to this peak within the radial range of 25 < r < 625 kpc, beginning with the innermost annulus and defining bin boundaries each time a threshold of tc = 7.5 × 10^3 background-subtracted counts was reached. In this way, we ensure that each of the 11 annular bins contains sufficient cluster counts to robustly obtain density and temperature values from a spectral fit. We then extracted source and background spectra for each ObsID in each annular bin. We defined an additional thin (20 pixel; ≃86 kpc) outermost bin, which is used to initialize the spectral deprojection procedure, but not included in the pressure profile evaluation.

Figure 1. Left: 0.5–7 keV (background-subtracted) Chandra X-ray counts map of ZwCl 3146 with annular bins defining pressure profile extraction regions. Middle: MUSTANG-2 SZ map of ZwCl 3146 for visualization. Note that the pressure profile is not derived from this map, but rather the time-ordered data. Right: SZ- and X-ray-derived pressure profiles. Uncertainties are 1σ.

With these spectra, we performed a spectral deprojection with dsdeproj (J. S. Sanders & A. C. Fabian 2007; H. R. Russell et al. 2008) for each ObsID, assuming a spherical underlying source geometry beginning with our additionally defined outermost annular bin. When calculating the emission volumes in this deprojection, we included only the fraction of each annulus not contaminated by point sources. The result of this deprojection is a set of background-subtracted deprojected X-ray spectra for each annular bin.
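As a schematic illustration of the counts-threshold binning described above, the helper below (a hypothetical sketch, not part of any released code) accumulates background-subtracted counts outward from the X-ray peak and closes an annulus each time the threshold tc is reached:

```python
import numpy as np

def define_annuli(r_pix, counts, r_min, r_max, t_c=7.5e3):
    """Group pixels, sorted by radius from the X-ray peak, into annuli
    that each accumulate >= t_c background-subtracted counts.

    r_pix  : per-pixel radii (same units as r_min/r_max, e.g. kpc)
    counts : per-pixel background-subtracted counts
    """
    order = np.argsort(r_pix)
    r_sorted, c_sorted = r_pix[order], counts[order]
    keep = (r_sorted >= r_min) & (r_sorted <= r_max)
    r_sorted, c_sorted = r_sorted[keep], c_sorted[keep]

    edges, acc = [r_min], 0.0
    for r, c in zip(r_sorted, c_sorted):
        acc += c
        if acc >= t_c:          # close this annulus, start the next
            edges.append(r)
            acc = 0.0
    edges[-1] = r_max           # force the outer boundary
    return np.array(edges)
```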
3. PRESSURE PROFILE GENERATION

3.1. PSZ

In C. E. Romero et al. (2020), annular surface brightness rings were fit to the MUSTANG-2 data, along with Gaussians to model the compact point sources, including the central AGN. In C. E. Romero et al. (2023), another model component of a parabola in (telescope) elevation was added to further reduce large-scale noise. In principle, a reduction in the recovered source signal at large angular scales is possible from such an elevation correction, but any such reduction is expected to be small compared to the measurement uncertainties of the derived pressure profiles (see C. E. Romero et al. 2023, their Figure 3).

We begin with the MCMC chains for the pressure profile fits from C. E. Romero et al. (2023). For each step, we evaluated the pressure profile at the X-ray-defined bin centers (see Section 2.2). The outermost of these bins is evaluated at ≃1.64′, which is well inside the ≃2′ radius within which MUSTANG-2 reliably recovers signal in a minimally biased manner. The SZ-derived pressure profiles are centered at the position of the XMM-Newton X-ray centroid, which is within an XMM-Newton PSF width (≃6″) of the more precisely determined Chandra X-ray peak. C. E. Romero et al. (2020) note that the choice of centroid (derived from XMM-Newton or MUSTANG-2, which are also separated within an XMM-Newton PSF width) has a negligible effect on the derived pressure profile. Taking this, combined with the much larger PSF size of MUSTANG-2 (≃10″), we therefore expect the offset between the X-ray- and SZ-derived pressure profile centers to result in negligible bias when comparing pressure values. From these realizations of the pressure profile, we estimated the median and 1σ uncertainty of PSZ in each bin, which comprise our PSZ profile (see Fig. 1).

3.2. PX

We performed a simultaneous fit of the background-subtracted deprojected spectra with Sherpa for each ObsID in a given annular bin over an energy range of 0.5–7 keV. We model the spectra as a single collisionally ionized plasma modified by interstellar absorption (tbabs × apec; J. Wilms et al. 2000; R. K. Smith et al. 2001; A. R. Foster et al. 2012) with fixed hydrogen column nH = 2 × 10^20 cm^−2 (HI4PI Collaboration et al. 2016), redshift z = 0.291 (S. W. Allen et al. 1992), and metallicity Z = 0.3 Z⊙ with abundances from E. Anders & N. Grevesse (1989). The free parameters are thus the plasma temperature and normalization (which is itself a function of the electron and ion densities).

We calculate the pressure from each spectral fit as the product of the electron number density ne and temperature kT. We derive ne from the fitted apec normalization:

\eta = \frac{10^{-14}}{4\pi\,[D_A(1+z)]^2} \int n_e n_H \, dV \qquad (1)

for DA being the angular diameter distance to the source, nH being the H number density, V being the volume of the source region, and assuming all units are CGS. The deprojected spectra are normalized to unit volume, and we assume ne ≃ 1.2 nH for a fully ionized solar abundance plasma, and that the density and temperature are uniform within each bin. Then, the pressure in each annular bin is simply PX = ne · kT. We estimate the 1σ uncertainty on PX by sampling 1000 random realizations of PX from ne and kT drawn from the calculated posterior distributions of the η and kT parameters (see Fig. 1).

At large radii, CC cluster pressure profiles derived from X-rays and the SZ effect should agree well. However, X-ray-derived temperature (and therefore, pressure) uncertainties associated with effective area calibration uncertainties exist in modern X-ray observatories, including Chandra. These calibration uncertainties have been examined in great detail, and are generally expected to occur at the ≳10% level (G. Schellenberger et al. 2015; K. Migkas et al. 2024), in contrast to the percent-level calibration possible for SZ data (J. T. Wan et al. 2021). Therefore, we choose to calibrate the absolute normalization of the PX profile to the PSZ profile (generated as described in Section 3.1). To calculate this correction factor and calibrate the uncertainty thereof, we generate 1000 realizations of PSZ and PX by randomly sampling the posterior distributions of each profile. In each realization, we estimate the mean value of αP ≡ PSZ/PX|>100 kpc over all bins outside of 100 kpc. The pressure profiles at these radii are less likely to suffer from projection effects associated with environmental complexity induced by the central AGN, and they enable accounting for biases in the absolute calibration of the X-ray-derived profiles relative to those derived from SZ without (by design) washing out the possible signal (decrement) that we seek to test at inner radii. We find that αP = 0.94 ± 0.04, i.e., the X-ray-derived pressure profiles are on average biased ≃6% high relative to those derived from the SZ effect.
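A minimal sketch of this PX construction and αP calibration, from the fitted apec normalization of Eq. (1) to posterior-sampled pressures, is given below; the posterior arrays (eta_post, kT_post, P_SZ_post) and their shapes are illustrative assumptions:

```python
import numpy as np

def n_e_from_norm(eta, D_A_cm, z, V_cm3=1.0):
    """Invert Eq. (1) for n_e, assuming n_e = 1.2 n_H and CGS units;
    the deprojected spectra are normalized to unit volume (V_cm3 = 1)."""
    return np.sqrt(1.2 * eta * 4.0 * np.pi * (D_A_cm * (1.0 + z))**2
                   * 1.0e14 / V_cm3)

def pressure_and_alpha(eta_post, kT_post, P_SZ_post, r_bin, z, D_A_cm):
    """eta_post, kT_post, P_SZ_post: hypothetical (n_real, n_bin) posterior
    samples of the apec normalization, temperature, and SZ pressure."""
    n_e = n_e_from_norm(eta_post, D_A_cm, z)
    P_X = n_e * kT_post                     # P_X = n_e kT per realization
    outer = r_bin > 100.0                   # kpc; large-radius calibration bins
    alpha_P = (P_SZ_post[:, outer] / P_X[:, outer]).mean(axis=1)
    return P_X, alpha_P                     # alpha_P: one value per realization
```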
This value of αP is consistent with the expected level of calibration uncertainty, and with the value derived from a comparison of the SZ- and X-ray (via Chandra)-derived pressure profiles for a large sample of clusters in J. T. Wan et al. (2021) (their normalization value being ≃0.91). We thus scale the entire PX profile by αP to ensure agreement between the pressure profiles at large radii (see Figure 1), though see Section 4 for further discussion of this.

4. RESULTS

We calculate the SZ-to-X-ray pressure deficit within 100 kpc of the X-ray peak from PSZ and PX by leveraging the 1000 realizations of PSZ and PX described in Section 3.2. For each realization, we apply the mean value of αP for that realization to PX. Once PX is corrected, we estimate the (unweighted) mean ratio of PSZ/PX for all bins within 100 kpc. We use this distribution of 1000 pressure deficit values to estimate the total SZ-to-X-ray pressure deficit:

P_{def} \equiv \langle P_{SZ}/P_X \rangle_{r \le 100\,\mathrm{kpc}} = 0.72 \pm 0.08.

While Pdef represents the average pressure deficit within 100 kpc, the values of PSZ/PX within this radius consistently trend downwards as a function of decreasing cluster radius. This procedure encapsulates the variation in αP, which is not precisely constrained in the literature, within the statistical uncertainty estimate. In principle, the expected degree of X-ray calibration uncertainty is larger at higher plasma temperatures (G. Schellenberger et al. 2015), i.e., biased towards outer cluster radii for a CC. Even given the aforementioned motivation for normalizing PX using the outer radii values, we perform the analysis without applying this normalization correction, obtaining Pdef = 0.68 ± 0.07. Since the αP correction shifts PX down relative to PSZ, neglecting this correction (and its associated scatter) results in a downward shift of PSZ/PX, and thus in a mildly more statistically significant measurement of Pdef.

In principle, our results are further dependent on two main modeling systematics; namely, the counts threshold used to define bin radii and the width of the outermost bin used in the spectral deprojection. To determine the importance of these two systematic uncertainties in our modeling, we evaluate a distribution of Pdef and αP values using the above-described procedures assuming a range of reasonable counts thresholds (tc = [6000, 6500, ..., 9000] counts) and outermost bin radii (rw = [10, 15, ..., 30] image pixels), for 35 combinations in total. The additional systematic uncertainty associated with our modeling procedures from these distributions is negligibly small relative to our statistical uncertainty: σPdef ≃ 0.02. We therefore report Pdef as above with the statistical uncertainty exclusively.

Finally, while we evaluate the ZwCl 3146 PX profiles within a set of discrete annular bins assuming constant pressure within each bin, we calculate PSZ at a series of nodes corresponding to each linear bin center (where each node is connected to adjacent nodes via a power law). In principle, this difference could introduce a bias in the pressure profile comparisons. To test for this possible bias, we evaluate the PSZ profiles via two methods. First, as in the observational analysis, we estimate PSZ at each linear bin center. Second, we estimate PSZ at the "X-ray emission-weighted center", i.e., √⟨P²⟩ (assuming constant temperature within each bin). Within each bin, differences in PSZ evaluated via each of these procedures are far subdominant to the uncertainty on each respective pressure estimate, so we conclude that this methodology difference should not significantly bias our comparisons.
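Schematically, the Pdef estimate described above reduces to a few lines given the posterior realizations of Section 3.2 (array names and shapes again illustrative):

```python
import numpy as np

def pressure_deficit(P_SZ_post, P_X_post, alpha_P, r_bin, r_cut=100.0):
    """Mean SZ-to-X-ray pressure ratio within r_cut (kpc), evaluated over
    posterior realizations of shape (n_real, n_bin)."""
    P_X_cal = P_X_post * alpha_P[:, None]     # apply per-realization alpha_P
    inner = r_bin <= r_cut
    ratios = (P_SZ_post[:, inner] / P_X_cal[:, inner]).mean(axis=1)
    return ratios.mean(), ratios.std()        # P_def and its 1-sigma spread
```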
Figure 2. Ratios of SZ-to-X-ray derived pressure profiles (black data points; 1σ uncertainties) with predictions from P. F. Hopkins et al. (2025b) overplotted for various CR injection luminosities: LCRs ∼ 10^43, 10^44, and 10^45 erg s^−1 (corresponding to central deficits of Pdef ≃ 0.9, 0.7, and 0.3, respectively) and a no-CR-IC case (Pdef = 1). The predicted deficit in PSZ/PX at radii within ≃100 kpc of the cluster center becomes more extreme for higher CR injection luminosities. Our data from MUSTANG-2 and Chandra indicate the presence of a deficit in PSZ/PX (Pdef = 0.72 ± 0.08) at ≃3.3σ significance.

5. DISCUSSION

We detect a ≳3σ deficit in the SZ-to-X-ray pressure ratio within 100 kpc of the ZwCl 3146 cluster center, Pdef = 0.72 ± 0.08. Although of moderate statistical significance, this result aligns with a scenario in which CR-IC emission non-negligibly contributes to the X-ray continuum, artificially boosting the pressure inferred from X-rays. The models introduced in P. F. Hopkins et al. (2025a,b) naturally predict this behavior, indicating that for realistic levels of AGN-driven CR injection, central CC pressure ratios can be suppressed to values as low as Pdef ≃ 0.3 within 100 kpc. Of these, models assuming a CR injection luminosity LCRs ∼ 10^44 erg s^−1 predict a central pressure deficit Pdef ≃ 0.7 within 100 kpc, consistent with the ZwCl 3146 value. This demonstrates that a significant non-thermal X-ray continuum component could be present in ZwCl 3146 and plausibly generated by CR-IC in an ACRH. Energetically, a CR injection luminosity of order LCRs ∼ 10^44 erg s^−1 is reasonable given the large observed jet kinetic power of the ZwCl 3146 AGN (∼6 × 10^45 erg s^−1; D. A. Rafferty et al. 2006), which is consistent with measured trends between the observed jet kinetic power and the X-ray luminosity (D. A. Rafferty et al. 2006). Such CR-IC would provide a natural physical mechanism for offsetting X-ray-inferred radiative cooling in CCs: by contributing ∼keV thermal-continuum-like emission in the soft X-ray band, CR-IC scattering masks the true thermal budget of the ICM, thereby alleviating the apparent discrepancy between X-ray-inferred and true gas cooling rates at the heart of the cooling flow problem.

5.1. Other possible systematics

In this section, we explore additional possible systematics and physical processes that could contribute to the observed deficit in PSZ/PX. The most precise quantitative calibration of these effects would require a one-to-one comparison of full instrument mock SZ and X-ray pressure measurements derived from a cosmological sample of CCs, thus incorporating instrumental effects (e.g., PSF smearing, deprojection of a non-truncated gas distribution, etc.), and astrophysical effects (see below), which is beyond the scope of this work. Instead, we provide estimates for the expected contributions from the leading sources of bias in our analysis calculated via available analytic models or numerical calibrations. A summary of all contributions we consider and their relative importance to the measured SZ-to-X-ray pressure deficit in ZwCl 3146 is given in Table 1.

5.1.1. Halo triaxiality + orientation
Dark matter halos are intrinsically triaxial, with shapes correlated with their formation histories (E. T. Lau et al. 2021), and within the gravitational potential of a cluster, the ICM inherits this triaxiality (L. F. Machado Poletti Valle et al. 2021). Such asphericity affects both cluster selection (since elongated systems aligned with the LOS are preferentially detected; J. P. Dietrich et al. 2014; H. Saxena et al. 2025), and the interpretation of multiwavelength observables. Since X-ray and SZ observables depend differently on the LOS projection, neglecting gas triaxiality could introduce systematic biases in derived thermodynamic profiles (e.g., J. T. Wan et al. 2021).

We estimated the effects of halo triaxiality and orientation on our result as follows. We first obtained a catalog of axial ratios for a sample of 253 M200 ≥ 10^14 M⊙ halos from IllustrisTNG, which is itself a subset of the sample presented in L. F. Machado Poletti Valle et al. (2021) and references therein. For each halo in this sample, we thus obtain the average (as a function of radius) gas minor-to-major and intermediate-to-major axial ratios, and we initialize a 3D grid with the axes of this triaxial ellipsoid aligned with those of the grid. We distribute a general (arbitrarily normalized) electron number density function over the triaxial ellipsoid, which is assumed to have constant axial ratios as a function of radius, in the shape of a cuspy β-profile (A. Vikhlinin et al. 2006):

n_e \propto \frac{(r/r_c)^{-\alpha/2}}{\left[1 + (r/r_c)^2\right]^{3\beta/2 - \alpha/4}} \qquad (2)

for β-profile shape parameters typical of a CC cluster: core radius rc = 0.1 r2500, inner slope parameter range α = [0.5, 1.5, 3] (I. Bartalucci et al. 2023), and β-profile parameter β = 0.7. For each halo, we assign 4 random line-of-sight (LOS) unit vectors, so we obtain in total ∼1000 realizations of the triaxial halos. Then, to calculate the effects of an underlying triaxial gas density distribution on the SZ- and X-ray-derived pressure profiles, we create simple mock maps of the SZ and X-ray observables projected along the random LOS vectors assuming two cases: first, a triaxial gas distribution using the axial ratios from our catalog for a given halo (wherein r in Equation 2 is the elliptical radius), and second, a spherical distribution (wherein r in Equation 2 is the spherical radius and the axial ratios are set to 1). For a roughly isothermal distribution, the SZ-derived pressure is Pe ∝ ∫ ne dl and the X-ray-derived pressure is PX ∝ √(∫ ne² dl), so we project these quantities (ne and ne², respectively) along the LOS vectors for each halo.

We then performed a deprojection of these maps using an analogous spherical geometric deprojection formalism as the X-ray observational analysis (see Sec. 2.2). Since this deprojection routine implicitly assumes that the projected quantity is equal to zero outside the outermost deprojected radius, we apply a truncation to the density function at the outermost bin edge (router = 5 Mpc) before the projection. Estimating Pdef in an analogous way as in the observational analysis, we tested the effect of the choice of α using the spherical distributions. For (de-)projections generated at identical resolution as the observational data, we find no significant bias as a result of varying α, though a systematic offset of ≃1−2% can emerge for lower-resolution data when the deprojection is unable to recover the innermost shape of the density/pressure profiles given the steep gradient at small radii within each deprojection bin.
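The following sketch illustrates the core of this mock-observation test for a single halo with the LOS aligned with a grid axis; the axial ratios, grid size, and profile parameters are placeholders, and the full analysis additionally rotates to random LOS vectors and applies the spherical deprojection described above:

```python
import numpy as np

def cuspy_beta(r, r_c=100.0, alpha=1.5, beta=0.7):
    """Cuspy beta-model of Eq. (2), arbitrary normalization."""
    x = r / r_c
    return x**(-alpha / 2.0) / (1.0 + x**2)**(3.0 * beta / 2.0 - alpha / 4.0)

def sz_xray_proxies(q_int=0.8, q_min=0.6, n=128, half_size=2000.0):
    """Project n_e and n_e^2 through a triaxial ellipsoid (illustrative axial
    ratios q_int, q_min) and return maps proportional to the SZ-like
    (int n_e dl) and X-ray-like (sqrt(int n_e^2 dl)) pressures."""
    g = np.linspace(-half_size, half_size, n)          # kpc
    X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
    r_ell = np.sqrt(X**2 + (Y / q_int)**2 + (Z / q_min)**2)
    n_e = cuspy_beta(np.clip(r_ell, 1.0, None))        # avoid r = 0 divergence
    n_e[r_ell > half_size] = 0.0                       # truncate at grid edge
    P_sz = n_e.sum(axis=2)                             # ~ int n_e dl along Z
    P_x = np.sqrt((n_e**2).sum(axis=2))                # ~ sqrt(int n_e^2 dl)
    return P_sz, P_x

# Spherical control case: set both axial ratios to unity.
P_sz_sph, P_x_sph = sz_xray_proxies(q_int=1.0, q_min=1.0)
```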
Ultimately, we find the 90% confidence range of the recovered SZ-to-X-ray pressure ratio for the average triaxial distributions across all tested α values is 1.01 ≳ Pdef ≳ 0.98, though the distribution of Pdef values is skewed towards higher recovered pressure ratios, with values as extreme as ≃1.13 possible.

This approach represents a very simple estimate of the effects of gas triaxiality on the SZ-to-X-ray derived pressure profiles, neglecting possible additional corrections associated with, e.g., non-isothermality or variation in ICM axial ratios as a function of radius. To address the latter point, using the tabulated distribution of ICM axial ratios as a function of cluster-centric radius in E. T. Lau et al. (2011), we perform an analogous numerical estimate of the effects of differential ellipticity in the ICM distribution on Pdef. For a single cluster of ZwCl 3146 mass, we randomly sample 1000 LOS vectors and evaluate the ratio of measured SZ-to-X-ray pressures within 100 kpc for the E. T. Lau et al. (2011) ICM axial ratio distribution (using all clusters in their sample, measured at z = 0 for simulations including cooling and feedback), finding a 90% confidence range of the recovered SZ-to-X-ray pressure ratio of 1.06 ≳ Pdef ≳ 0.92. Therefore, the predicted contribution to Pdef due to halo triaxiality and cluster orientation is not large enough to explain the observed pressure deficit in ZwCl 3146.

Our calculations of the characteristic bias in Pdef due to the underlying halo shapes are likely a conservative estimate (i.e., over-estimate) of the true variation in the pressure profiles due to gas triaxiality and cluster orientation. ZwCl 3146 (and any representative CC sample) comprises a specific class of galaxy cluster morphology (associated with relaxed systems; see, e.g., W. Kausch et al. 2007), while both our IllustrisTNG catalog and the E. T. Lau et al. (2011) axial ratios used here are agnostic to cluster morphology, and thus may include more disturbed (elongated) systems than we would consider observationally. For this analysis, we implicitly assume that our cluster is selected agnostic to orientation, which will not be true for any larger sample of CCs, even while these relaxed objects are likely to be less sensitive to orientation systematics than more disturbed systems.

Table 1. Census of possible contributions to the SZ-to-X-ray pressure deficit within 100 kpc of the ZwCl 3146 center.
Contribution | Expected effect | Additional considerations
CR-IC (a) | 1 ≥ Pdef ≳ 0.3 | Tested across two orders of magnitude of CR injection luminosity
Halo triaxiality + orientation (constant axial ratios) (b) | 1.01 ≳ Pdef ≳ 0.98 | Skewed towards high values of Pdef, as extreme as Pdef ≲ 1.13
Halo triaxiality + orientation (radially-dependent axial ratios) (b) | 1.06 ≳ Pdef ≳ 0.92 | Tested for a single model of radially-dependent halo triaxiality
Gas clumping (b) | 1.02 ≳ Pdef ≳ 0.98 | Limited sensitivity due to probing radii ≲ 0.5 R500
Helium sedimentation | Pdef = 1 | Plasma instabilities, bulk motions, and turbulence driven by AGN outflows and merger activity render such sedimentation negligible
Cosmological parameters (a) | Pdef = 1 | Normalization of PX to PSZ at large radii removes the (constant-in-radius) bias from PX ∝ DA(z)^−1/2
X-ray calibration uncertainty | Pdef = 1 | Bias of ∼20% possible in the absence of our large-radius normalization of PX to PSZ

(a) Contributions to Pdef from CR-IC and the choice of cosmological parameters are quantified as a characteristic range derived from a discrete set of analytic models.
(b) Contributions to Pdef from halo triaxiality and orientation and from gas clumping are quantified as a 90% confidence range calculated via numerical modeling.

5.1.2. Helium sedimentation

The X-ray emission from massive galaxy clusters is dominated by the thermal bremsstrahlung continuum generated via electrons free-free scattering with hydrogen and helium nuclei. Theoretical calculations have predicted that the heavier helium nuclei can sediment into the gravitational potential of the cluster on timescales of order the cluster age (e.g., F. Abramopoulos et al. 1981; M. R. Gilfanov & R. A. Syunyaev 1984), resulting in an enhanced abundance of helium nuclei in cluster cores that could bias the electron pressure derived from X-ray emission if one were to assume an emitting ICM composed of primordial abundances.

Several previous works have studied the effects of possible helium sedimentation on the X-ray observables of galaxy clusters (e.g., B. Qin & X.-P. Wu 2000; L. Chuzhoy & A. Nusser 2003; L. Chuzhoy & A. Loeb 2004; M. Markevitch 2007; S. Ettori & A. C. Fabian 2006; F. Peng & D. Nagai 2009; G. E. Bulbul et al. 2011). In particular, the X-ray surface brightness dominated by thermal bremsstrahlung (including contributions from H and He ions) in a cluster may be written as

S_X \propto \int_{LOS} \left( n_e n_H \Lambda_{eH} + n_e n_{He} \Lambda_{eHe} \right) dl \propto n_H^2 \,(1+2x)(1+4x)\, \Lambda_{eH} \qquad (3)

for x ≡ nHe/nH such that ne = (1 + 2x) nH, with the X-ray band cooling function Λe(H/He) describing the emission from the free-free scattering of electrons with hydrogen and helium nuclei, and neglecting continuum contributions from all elements heavier than helium (F. Peng & D. Nagai 2009). At fixed (measured) SX, Equation 3 implies nH ∝ [(1 + 2x)(1 + 4x)]^−1/2, so for a fixed intrinsic gas temperature and X-ray spectroscopic electron temperature the electron pressure scales as

P_e \propto \left( \frac{1+2x}{1+4x} \right)^{1/2} \qquad (4)

In reality, spectroscopic fits to obtain ne are dependent on the spectroscopically-fit gas temperature, which determines the shape of the thermal bremsstrahlung continuum, as well as influences the line emission, which is further dependent on the metallicity of the plasma, though the line emission is a very small contribution to the overall surface brightness. S. Ettori & A. C. Fabian (2006) note that the X-ray spectroscopically-fit temperature of a cluster does not vary significantly in the presence of helium enrichment. While the metallicity is dependent on the underlying helium abundance, we fix the metallicity to 0.3 Z⊙ in our analysis due to the large parameter degeneracy and limited constraining power of the metallicity parameter in our fits.
For an ICM composed of primordial abundances (X = 0.75, Y = 0.25), we would have x = 0.083. Therefore, in the case where the He abundance were doubled from its primordial value, if we were to calculate the electron pressure measured at fixed SX assuming x = 0.083, we would overestimate Pe by ≃5%.

Using the prediction for the helium-to-hydrogen mass fraction radial distribution from the F. Peng & D. Nagai (2009) 1D helium sedimentation model applied to a cluster with ages of tage = 9 Gyr and tage = 11 Gyr with an X-ray spectroscopic temperature of 10 keV (distributed according to the profile of A. Vikhlinin et al. 2006), we calculate the overestimation of the electron pressure that would be inferred from an X-ray measurement as a function of cluster radius. Based on this simple laminar model, we estimate an equivalent SZ-to-X-ray pressure deficit within 100 kpc for the oldest evaluated cluster age (tage = 11 Gyr) of Pdef ≃ 0.9. This simple estimate is ≃2σ less extreme than the measured deficit, and contributions become even less important for lower cluster ages.

However, 1D models like that of F. Peng & D. Nagai (2009) are likely an overly idealized estimate of the effects of He sedimentation in cluster cores. First, they do not include the effects of cluster environments, which are subject to, e.g., bulk motions or turbulence driven by AGN outflows, merger activity, etc. (I. Zhuravleva et al. 2014; E. T. Lau et al. 2017). To first order, we can scale the amount of He sedimentation expected in a simple laminar system to account for the effects of bulk mixing and turbulence like

\eta_{sed,mix} \sim \eta_{sed} \cdot e^{-t_{sed}/t_{mix}} \qquad (5)

with ηsed being the amount of He sedimentation predicted for a laminar system (here causing Pdef ≃ 0.9), tsed the He sedimentation time, and tmix the mixing time. Following F. Peng & D. Nagai (2009), and assuming the cluster is approximately in hydrostatic equilibrium, we can write tsed/tmix as a function of cluster-centric radius R:

t_{sed}/t_{mix} \sim 12 \left( \frac{M_s}{0.1} \right) \cdot \left( \frac{R}{100\,\mathrm{kpc}} \right) \cdot \left[ \frac{\lambda_{mix}}{(f_B/0.2)\,(n_{gas}/10^{-3}\,\mathrm{cm^{-3}})\,(T/10\,\mathrm{keV})^{5/2}} \right] \qquad (6)

for the sonic Mach number of the bulk flows or turbulence Ms = vmix/cs (with the largest-scale bulk mixing velocity vmix and sound speed cs), λmix ∼ 1 being the ratio of the coherence length of the largest-scale coherent flows (in/outflows, turbulence) to the cluster-centric radius in cluster CCs, the magnetic suppression factor fB ∼ 0.2 for tangled magnetic fields, and the cluster temperature T ∼ 5 keV and density ngas ∼ 10^−2 cm^−3 at a radius of ∼100 kpc in a characteristic CC (e.g., N. Truong et al. 2024). Finally, applying radial scalings of density and temperature (ngas ∝ 1/R, T ∝ R^0.6 from 10−100 kpc; C. E. Romero et al. 2020), we have

t_{sed}/t_{mix} \sim 6 \left( \frac{R}{100\,\mathrm{kpc}} \right)^{1/2} \cdot \left( \frac{v_{mix}}{100\,\mathrm{km\,s^{-1}}} \right) \qquad (7)

Assuming a characteristic turbulent velocity dispersion derived from observations of the Perseus CC with Hitomi (∼200 km s^−1; Hitomi Collaboration et al. 2016), the effects of He sedimentation (via Equations 5 and 7) would be reduced to < 1% of those predicted in the equivalent laminar system at a radius of 100 kpc. Even this relatively quiescent turbulence level thus acts to significantly suppress He sedimentation. Simulations of Perseus-like clusters from TNG-Cluster support this observed level of velocity dispersion inferred from X-rays, and additionally suggest the coexistence of bulk flows in excess of ∼1000 km s^−1 (N. Truong et al. 2024), which would significantly boost the suppression of He sedimentation, further reducing its effects.
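Both of the quoted numbers can be reproduced directly from Equations (4), (5), and (7); the short check below (illustrative only, not part of the analysis code) recovers the ≃5% pressure bias for doubled He and the < 1% residual sedimentation for Hitomi-like turbulence:

```python
import numpy as np

def Pe_scale(x):
    """Eq. (4): electron-pressure scaling at fixed S_X."""
    return np.sqrt((1.0 + 2.0 * x) / (1.0 + 4.0 * x))

x0 = 0.083                                  # primordial He-to-H number ratio
bias = Pe_scale(x0) / Pe_scale(2.0 * x0)    # assume x0 when true value is 2*x0
print(f"P_e overestimate for doubled He: {100 * (bias - 1):.1f}%")   # ~4.6%

def tsed_over_tmix(R_kpc, v_mix_kms):
    """Eq. (7): sedimentation-to-mixing timescale ratio."""
    return 6.0 * np.sqrt(R_kpc / 100.0) * (v_mix_kms / 100.0)

t = tsed_over_tmix(100.0, 200.0)            # Hitomi-like turbulence, R = 100 kpc
print(f"sedimentation suppression e^-t = {np.exp(-t):.1e}")          # ~6e-6
```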
Second, and more dramatically, the 1D simulations further neglect effects from small-scale plasma instabilities in the presence of a He composition gradient. These instabilities, primarily the heat-flux-driven buoyancy instability (HBI) at small cluster-centric radii and the magnetothermal instability (MTI) at large radii, would effectively erase the composition gradients predicted by the 1D simulations (see T. Berlok & M. E. Pessah 2015, 2016). In particular, a He composition gradient would act to increase the (already fast-growing relative to the sedimentation timescale) growth rates of such instabilities in cluster core regions, further redistributing the ICM. Taking this in addition to the effects of turbulent/outflow-driven mixing on the He composition gradient outlined above, we conclude that helium sedimentation should have negligible effects on the observed pressure deficit in ZwCl 3146.

5.1.3. Gas clumping

Another possible systematic that could affect our pressure measurements is enhanced X-ray emission from the inhomogeneous density field ("gas clumping") from ongoing mass assembly in cluster outskirts (R ≳ R500; e.g., D. Eckert et al. 2015). While both PX and PSZ are linearly dependent on the electron temperature, the dependence on the electron density is quadratic in the case of PX (being derived spectroscopically via the deprojected SX) and linear in the case of PSZ (being derived simply from the deprojected LOS integral of the electron pressure). Thus, any dominant bias in the ratio of PSZ/PX as a result of gas clumping would enter via the X-ray emission. Both simulations (e.g., D. Nagai & E. T. Lau 2011; N. Battaglia et al. 2015; S. Planelles et al. 2017) and observations (e.g., D. Eckert et al. 2015; A. Simionescu et al. 2011; O. Urban et al. 2014) have shown that the clumping factor (which can be interpreted as the ratio of mean to median deprojected SX; see D. Eckert et al. 2015) in relaxed clusters is reasonably flat over the radial range within which we normalize PX to PSZ (∼0.1−0.4 R500), and is expected to be negligible at smaller radii where we calculate Pdef.

Nevertheless, to quantify the effects of gas clumping on the measured ZwCl 3146 SZ-to-X-ray pressure deficit, we leverage the analytic form for the (square root of the) observed clumping factor (√C(r)) fit to 31 clusters observed with ROSAT/PSPC (D. Eckert et al. 2015). We assume "true" initial underlying distributions for PX and PSZ equal to unity, and we generate 1000 realizations of PX · √C(r). Following the procedure outlined above for ZwCl 3146, we normalize PX · √C(r) to PSZ using all bins outside of 100 kpc, and we estimate the resulting value of Pdef within 100 kpc. We find that including this gas clumping effect results in no statistically significant bias to Pdef (with a 90% confidence range of 1.02 ≳ Pdef ≳ 0.98). Given that the clumping factor across our cluster radial range is reasonably flat (D. Eckert et al. 2015), combined with our normalization of PX to PSZ outside of 100 kpc, the mild scatter and lack of a statistically significant bias in Pdef is expected, and gas clumping should not contribute significantly to the measured ZwCl 3146 pressure deficit.
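The structure of this clumping test is sketched below. Note that the √C(r) form used here is a deliberately simple placeholder, not the D. Eckert et al. (2015) fit (which should be substituted in practice), and the bin radii and per-bin scatter are likewise illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
r = np.array([33., 55., 80., 120., 170., 230., 300., 390., 500., 620.])  # kpc

def sqrt_clumping(r_kpc):
    """Placeholder toy for sqrt(C(r)): ~1 in the core, rising mildly outward.
    NOT the D. Eckert et al. (2015) analytic fit."""
    return 1.0 + 0.02 * (r_kpc / 500.0) ** 2

P_def = []
for _ in range(1000):
    P_sz = np.ones_like(r)                          # "true" underlying profiles
    P_x = sqrt_clumping(r) * (1.0 + 0.03 * rng.standard_normal(r.size))
    alpha = (P_sz[r > 100] / P_x[r > 100]).mean()   # large-radius normalization
    P_def.append((P_sz[r <= 100] / (alpha * P_x[r <= 100])).mean())

lo, hi = np.percentile(P_def, [5, 95])
print(f"90% confidence range of P_def: {lo:.3f} - {hi:.3f}")
```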
5.1.4. Cosmological parameters

The choice of cosmological parameters within a given flat ΛCDM model can affect the X-ray- and SZ-derived pressure values. While the conversion of angular to physical scale in the source frame to determine the proper radii at which the pressure profiles are evaluated is identical within both the SZ and X-ray analyses, the conversion of X-ray emissivity to electron density (via the apec normalization, see Eq. 1) scales as DA(z)^−1/2 (when one factors in the cosmological dependence of the source region volume V ∝ DA(z)^3). Thus, assuming the most recent Planck cosmological parameterization (H0 = 67.66 km s^−1 Mpc^−1, Ωm,0 = 0.31; Planck Collaboration et al. 2020) in place of the concordance cosmology would result in a ≃1.6% change in PX ∝ DA(z)^−1/2, independent of the radius at which PX is evaluated. However, in practice, our results are entirely unaffected by this bias, since we calibrate for this systematic via the normalization of PX to PSZ at large radii (see Section 3.2).

5.1.5. X-ray instrument calibration

As stated in Section 3.2, the measured ICM electron temperature (and therefore, pressure) can vary significantly across modern X-ray instruments, most likely due to effective area calibration uncertainties. For example, if Chandra were to measure a plasma temperature characteristic of ZwCl 3146 (≃7 keV) in a standard band (0.7−7 keV), detectors on XMM-Newton would measure a temperature ≃20% lower (G. Schellenberger et al. 2015), and eROSITA would measure a temperature nearly ≃40% lower (K. Migkas et al. 2024). Given these large uncertainties and the lack of clarity regarding which temperature measurement is closest to the true underlying electron temperature, we normalize PX to PSZ using the radial bins outside of 100 kpc to eliminate this potential systematic bias (see again Section 3.2).

5.1.6. AGN contamination

In principle, bright mm-wave emission from both the central AGN core and associated extended emission from its radio jets or lobes could bias the measurement of PSZ in the innermost radial bins, although in practice the steep-spectrum lobes are generally very dim at mm wavelengths and thus not relevant. By design, we have selected ZwCl 3146 in part due to the relatively low mm-wave brightness of its central AGN core (C. E. Romero et al. 2020). The central AGN in ZwCl 3146 is modeled as a compact point-like source (C. E. Romero et al. 2020, 2023) and jointly fit with the pressure profile, so that any uncertainty associated with the central AGN is captured in the fitting methodology. Given the half width at half maximum (HWHM) of the MUSTANG-2 PSF of ≃5″ (≃20 kpc), the innermost PSZ radial bin (≃33 kpc) is evaluated well outside the anticipated radius of significant PSF smearing of the central AGN (and even more so for the X-ray observations, given Chandra's PSF HWHM of ≃0.25″, nearly 20× better than MUSTANG-2). In addition, because of the low surface brightness and steep spectral slope of the ZwCl 3146 radio minihalo, we do not expect it to contribute significantly at the mm-wave frequencies of interest (S. Giacintucci et al. 2014). Therefore, we do not expect any significant systematic bias associated with mm-wave emission from the ZwCl 3146 central AGN.
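Closing this census of systematics, the DA(z)^−1/2 scaling quoted in Section 5.1.4 is straightforward to verify numerically with astropy (a standalone check, not part of the analysis pipeline):

```python
from astropy.cosmology import FlatLambdaCDM

z = 0.291
concordance = FlatLambdaCDM(H0=70.0, Om0=0.30)
planck = FlatLambdaCDM(H0=67.66, Om0=0.31)

da_c = concordance.angular_diameter_distance(z).value   # Mpc
da_p = planck.angular_diameter_distance(z).value        # Mpc

# P_X scales as D_A(z)^(-1/2), so the Planck-vs-concordance shift is:
ratio = (da_p / da_c) ** -0.5
print(f"P_X(Planck) / P_X(concordance) = {ratio:.4f}")  # ~0.984, a ~1.6% shift
```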
5.2. Fitting a CR-IC model to the X-ray spectra

Our measurement of an SZ-to-X-ray pressure deficit in ZwCl 3146 implies that a spectral component associated with CR-IC should be present in the inner core regions of ZwCl 3146. The measured ratio PSZ/PX should reflect the ratio of the SZ-inferred true thermal pressure (PSZ ≡ Pth) to the X-ray-inferred pressure in the presence of CR-IC (PX ≡ Pth+IC). To first order, when fitting a single absorbed apec model to the X-ray spectra, we obtain Pth+IC ∼ ηth+IC^{1/2} kTth+IC, for ηth+IC and kTth+IC corresponding to the normalization and temperature of that model. In order to obtain Pth from the X-ray spectra in the presence of non-negligible CR-IC, a two-component fit that includes the standard absorbed apec model plus a model of the CR-IC must be performed. In this scenario, Pth ∼ ηth^{1/2} kTth for ηth and kTth being the normalization and temperature of the absorbed apec model component. We consider the innermost radial bin (≃33 kpc), where a deficit in PSZ/PX is detected at the highest statistical significance and where we thus expect the largest amount of CR-IC contamination. To determine if the measured X-ray spectra from this bin allow for this CR-IC emission, we re-fit these data using the two-component model described above.

Figure 3. Left: spectral fit of the innermost annular bin in the ZwCl 3146 core (χ²/dof = 110.04/106), with contributions from the fitted tbabs × (apec + nlapec) model for ICM thermal and CR-IC emission indicated for ObsIDs 909 and 9371. Right: same, for the fitted tbabs × apec model for ICM thermal emission exclusively (no CR-IC component; χ²/dof = 110.11/108). This check confirms that the thermal pressure inferred from X-ray spectroscopy and SZ are consistent for the innermost ZwCl 3146 bin (≃33 kpc) in the simple case where the CR-IC spectral shape is similar to the thermal ICM continuum shape.

Performing such a fit requires a spectral model of the CR-IC emission, which is predicted to be similar in shape to thermal continuum emission (P. F. Hopkins et al. in prep). In particular, P. F. Hopkins et al. (in prep) predict that, as the CR electrons age and undergo Coulomb, IC, and bremsstrahlung losses (and with relatively weak dependence on the CR injection spectrum), the soft X-ray CR-IC spectrum will begin to exhibit curvature. This CR-IC spectral shape should mimic increasingly soft thermal bremsstrahlung continuum shapes as the CRs propagate out from the AGN injection site. Model calculations assuming an effective streaming speed ≃100 km s^−1 (P. F. Hopkins et al. in prep) predict that after the CRs have propagated out to r ∼ 30 kpc, the CR-IC spectrum will exhibit a shape similar to thermal bremsstrahlung continuum with an effective temperature of kTIC ≃ 2.6 keV. However, within the range of plausible CR injection spectra and CR propagation models, the effective temperature of the CR-IC emission could be as large as kTIC ≃ 10 keV. Given this large uncertainty, combined with the lack of constraining power of the X-ray data to simultaneously fit all of the CR-IC model parameters (often degenerate with the thermal ICM model parameters in the full model, tbabs × (apec + nlapec)), we approximate the CR-IC spectral contributions for the simplest case: kTIC = kTth, and link these parameters in the spectral fit, while leaving free the normalization of the CR-IC component.
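A schematic Sherpa session for this two-component fit is sketched below; the spectrum file name and starting values are hypothetical, nlapec refers to the XSPEC no-line APEC model (assumed here to be exposed in Sherpa as xsnlapec), and the parameter choices mirror those described in the surrounding text:

```python
# Schematic of the tbabs x (apec + nlapec) fit; not our exact fitting script.
from sherpa.astro import ui

ui.load_pha("bin1_deproj.pi")           # hypothetical deprojected spectrum
ui.notice(0.5, 7.0)                     # fit over 0.5-7 keV
ui.set_stat("chi2gehrels")

ui.set_source(ui.xstbabs.absm * (ui.xsapec.icm + ui.xsnlapec.cric))
absm = ui.get_model_component("absm")
icm = ui.get_model_component("icm")
cric = ui.get_model_component("cric")

absm.nH = 0.02                          # 2e20 cm^-2, in units of 1e22 cm^-2
icm.redshift = 0.291
cric.redshift = 0.291
ui.freeze(absm.nH, icm.redshift, cric.redshift)
ui.thaw(icm.Abundanc)                   # metallicity of the thermal ICM free

cric.kT = icm.kT                        # simplest case: kT_IC linked to kT_th
ui.freeze(cric.Abundanc)

ui.fit()                                # free: icm.kT, icm.Abundanc,
ui.plot_fit_delchi()                    #       icm.norm, cric.norm
```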
As in the observational analysis (see Sec. 3.2), we fix the hydrogen column and redshift for ZwCl 3146, and we leave the temperature and normalization of the thermal ICM component free. Since an additional continuum component can, in principle, affect the X-ray-inferred metallicity of the plasma (via line strengths), we free the metallicity for the thermal ICM emission.

The results of this two-component spectral fit compared to the single-component spectral fit without the CR-IC model are shown in Figure 3. In brief, we find that there is negligible difference in fit quality, with a total χ² = 110 in both cases. Solely from the two-component X-ray spectral fit, we find Pth/Pth+IC ≡ Pth/PX = 0.45 ± 0.11. This value can be compared to the one obtained from the ratio of SZ to X-ray pressure in Sec. 4 for the same radial bin, with PSZ/PX ≡ Pth/PX = 0.51 (+0.14, −0.20) from that analysis. Thus, we conclude that the recovered thermal pressure in the innermost bin is consistent between the method that determines Pth from the SZ data and the method that fits a two-component model to the X-ray spectra with kTIC = kTth. In this simple case, the flux ratio in the 0.5–1.0 keV band for the CR-IC and thermal ICM components is F_IC^{0.5−1.0}/F_th^{0.5−1.0} ≃ 3.3.

This test demonstrates that the innermost radial bin considered in our analysis of ZwCl 3146, near 33 kpc, allows for a substantial CR-IC continuum component with a spectral shape similar to the ICM thermal bremsstrahlung continuum. However, this is a simple consistency check, and we stress that any detailed modeling of such a CR-IC component should fully account for the uncertainties in the theoretical modeling, as well as marginalize over the possible systematics discussed in Section 5.1. For example, given the broad range of theoretically motivated CR-IC spectral shapes (which are dependent on the CR injection spectrum, CR transport properties, etc.), one should in principle fit for this shape (here approximated as a thermal bremsstrahlung-like continuum at a similar temperature as the ICM; P. F. Hopkins et al. 2025b), leveraging deeper X-ray observations. Additional theoretical modeling is also required for more complex cases, where the CR-IC spectral shape is discrepant from the shape of the thermal ICM emission. In such a case, the addition of a softer/harder CR-IC spectral component will influence the X-ray-inferred ICM temperature relative to that derived from a single-component model fit. These temperature variations could then non-trivially influence the X-ray-inferred pressure values, and by extension, the ratio PSZ/PX. A study exploring these detailed models of CR-IC should further include the relevant radial information (e.g., PSZ/PX trends as a function of radial bin) in a comprehensive fit. We leave such joint theoretical and observational exploration of these details for future work.

6. SUMMARY & CONCLUSIONS

In this work, we have tested the possibility that CRs injected by a central AGN in CCs could non-negligibly bias the X-ray-inferred thermodynamic properties of such clusters by comparing X-ray and SZ observations of ZwCl 3146 from Chandra and MUSTANG-2, respectively. Our main findings are summarized below:

1. We detect a ≃3.3σ (statistical) significance deficit in the SZ-derived pressure relative to that derived from X-rays (Pdef = 0.72 ± 0.08) within 100 kpc of the cluster center in ZwCl 3146. This decrement consistently trends downwards as a function of decreasing cluster radius at r ≲ 100 kpc.
2. This deficit is consistent with simple analytic models for CR injection by a central AGN introduced in P. F. Hopkins et al. (2025a,b), where ∼GeV CR leptons populate an ACRH within ∼100 kpc of the CC center and IC scatter with CMB photons to produce thermal-continuum-like emission around ∼keV. The observed Pdef = 0.72 is within the predicted range of models spanning two orders of magnitude in CR injection luminosity (LCRs ∼ 10^43−10^45 erg s^−1; P. F. Hopkins et al. 2025b). This not only demonstrates that a significant non-thermal X-ray continuum component plausibly generated by CR-IC could be present in ZwCl 3146, but that such an effect could play a role in observed trends between AGN jet kinetic power and X-ray luminosity in CCs.

3. We considered additional astrophysical, instrumental, and methodological contributions that could drive such an SZ-to-X-ray pressure deficit, including the effects of halo (gas) triaxiality and cluster orientation, helium sedimentation in the cluster potential, the ability of the deprojection routine to resolve the steep slope of the inner thermodynamic profiles, gas clumping due to mass assembly, choice of cosmological parameters, X-ray instrument calibration uncertainty, and mm-wave AGN contamination. In aggregate, these systematics are unlikely to contribute to a PSZ/PX pressure decrement ≳10%, which is insufficient to fully explain our measured deficit of ≃30% in ZwCl 3146.

4. While the procedure for normalizing PX to PSZ at large radii (> 100 kpc) was designed to account for the leading-order systematic associated with the uncertainty of X-ray-derived temperature (pressure) measurements, we find that such a normalization also mitigates biases associated with other astrophysical phenomena. Namely, the large-radius normalization of PX to PSZ reduces effects on Pdef due to halo triaxiality and cluster orientation, gas clumping, and choice of cosmological parameters. This normalization is thus a necessary step in comparing pressure profiles derived from X-rays and the SZ effect.

5. As a simple check, we confirmed that the ratio of Pth/Pth+IC is consistent between a method estimating Pth via the SZ-derived pressure and another calculating Pth from an X-ray spectral fit to a two-component model including both CR-IC and thermal ICM emission for the innermost ZwCl 3146 radial bin near 33 kpc, where the highest CR-IC flux is expected. This is a simplified estimate for a case where the CR-IC spectral shape can be reasonably approximated by a thermal bremsstrahlung-like continuum at the temperature of the coincident ICM. In this test case, the ratio of F_IC^{0.5−1.0}/F_th^{0.5−1.0} ≃ 3.3.

6. We have shown that an SZ-to-X-ray pressure deficit within ∼100 kpc of a CC center can be observed at moderate statistical significance via measurements of the ZwCl 3146 system with existing instrumentation, and the systematics associated with such a pressure deficit can be straightforwardly handled. However, to make a robust claim of whether CR-IC contamination in the X-ray emission is contributing significantly to the cooling flow problem in CCs, more detailed studies accounting for the following are required: (1) additional observations of a representative population of CCs, (2) an empirical calibration of (instrumental, astrophysical, and methodological) systematics derived from mock observations of a large sample of CC systems in cosmological simulations,
(3) an exploration of theoretical modeling choices/assumptions, including the effects of CR-IC spectral shapes discrepant from the thermal ICM spectral shapes, and (4) simultaneous handling of spectral and radial information in each cluster.

ACKNOWLEDGMENTS

EMS acknowledges support from a National Science Foundation Graduate Research Fellowship (NSF GRFP) under Grant No. DGE-1745301. Support for PFH was provided by a Simons Investigator Award.

Facilities: CXO, MUSTANG-2

Software: CIAO (A. Fruscione et al. 2006), Xspec 12.12.0 (K. A. Arnaud 1996), Astropy (Astropy Collaboration et al. 2013, 2018), mpi4py (L. Dalcin & Y.-L. L. Fang 2021), NumPy (S. van der Walt et al. 2011; C. R. Harris et al. 2020), Matplotlib (J. D. Hunter 2007)

REFERENCES

Abramopoulos, F., Chanan, G. A., & Ku, W. H. M. 1981, ApJ, 248, 429, doi: 10.1086/159168
Allen, S. W., Edge, A. C., Fabian, A. C., et al. 1992, MNRAS, 259, 67, doi: 10.1093/mnras/259.1.67
Anders, E., & Grevesse, N. 1989, GeoCoA, 53, 197, doi: 10.1016/0016-7037(89)90286-X
Arnaud, K. A. 1996, in Astronomical Society of the Pacific Conference Series, Vol. 101, Astronomical Data Analysis Software and Systems V, ed. G. H. Jacoby & J. Barnes, 17
Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, A&A, 558, A33, doi: 10.1051/0004-6361/201322068
Astropy Collaboration, Price-Whelan, A. M., Sipőcz, B. M., et al. 2018, AJ, 156, 123, doi: 10.3847/1538-3881/aabc4f
Bartalucci, I., Mazzotta, P., Bourdin, H., & Vikhlinin, A. 2014, A&A, 566, A25, doi: 10.1051/0004-6361/201423443
Bartalucci, I., Molendi, S., Rasia, E., et al. 2023, A&A, 674, A179, doi: 10.1051/0004-6361/202346189
Battaglia, N., Bond, J. R., Pfrommer, C., & Sievers, J. L. 2015, ApJ, 806, 43, doi: 10.1088/0004-637X/806/1/43
Berlok, T., & Pessah, M. E. 2015, ApJ, 813, 22, doi: 10.1088/0004-637X/813/1/22
Berlok, T., & Pessah, M. E. 2016, ApJ, 833, 164, doi: 10.3847/1538-4357/833/2/164
Bîrzan, L., Rafferty, D. A., McNamara, B. R., Wise, M. W., & Nulsen, P. E. J. 2004, ApJ, 607, 800, doi: 10.1086/383519
Bregman, J. N., Fabian, A. C., Miller, E. D., & Irwin, J. A. 2006, ApJ, 642, 746, doi: 10.1086/501112
Bulbul, G. E., Hasler, N., Bonamente, M., et al. 2011, A&A, 533, A6, doi: 10.1051/0004-6361/201016407
Chuzhoy, L., & Loeb, A. 2004, MNRAS, 349, L13, doi: 10.1111/j.1365-2966.2004.07688.x
Chuzhoy, L., & Nusser, A. 2003, MNRAS, 342, L5, doi: 10.1046/j.1365-8711.2003.06641.x
Dalcin, L., & Fang, Y.-L. L. 2021, Computing in Science and Engineering, 23, 47, doi: 10.1109/MCSE.2021.3083216
de Gasperin, F., Orrú, E., Murgia, M., et al. 2012, A&A, 547, A56, doi: 10.1051/0004-6361/201220209
Dicker, S. R., Ade, P. A. R., Aguirre, J., et al. 2014, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 9153, MUSTANG2: a large focal plane array for the 100 meter Green Bank Telescope, 91530J, doi: 10.1117/12.2056455
Dietrich, J. P., Zhang, Y., Song, J., et al. 2014, MNRAS, 443, 1713, doi: 10.1093/mnras/stu1282
Eckert, D., Roncarelli, M., Ettori, S., et al. 2015, MNRAS, 447, 2198, doi: 10.1093/mnras/stu2590
Edge, A. C., Fabian, A. C., Allen, S. W., et al. 1994, MNRAS, 270, L1, doi: 10.1093/mnras/270.1.L1
Egami, E., Misselt, K. A., Rieke, G. H., et al. 2006, ApJ, 647, 922, doi: 10.1086/504519
Ettori, S., & Fabian, A. C. 2006, MNRAS, 369, L42, doi: 10.1111/j.1745-3933.2006.00170.x
Fabian, A. C. 1994, ARA&A, 32, 277, doi: 10.1146/annurev.aa.32.090194.001425
Fabian, A. C., Sanders, J. S., Allen, S. W., et al. 2003, MNRAS, 344, L43, doi: 10.1046/j.1365-8711.2003.06902.x
Forman, W., Nulsen, P., Heinz, S., et al. 2005, ApJ, 635, 894, doi: 10.1086/429746
Foster, A. R., Ji, L., Smith, R. K., & Brickhouse, N. S. 2012, ApJ, 756, 128, doi: 10.1088/0004-637X/756/2/128
Freeman, P. E., Kashyap, V., Rosner, R., & Lamb, D. Q. 2002, ApJS, 138, 185, doi: 10.1086/324017
Fruscione, A., McDowell, J. C., Allen, G. E., et al. 2006, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 6270, ed. D. R. Silva & R. E. Doxsey, 62701V, doi: 10.1117/12.671760
Giacintucci, S., Markevitch, M., Venturi, T., et al. 2014, ApJ, 781, 9, doi: 10.1088/0004-637X/781/1/9
Gilfanov, M. R., & Syunyaev, R. A. 1984, Soviet Astronomy Letters, 10, 137
Harris, C. R., Millman, K. J., van der Walt, S. J., et al. 2020, Nature, 585, 357, doi: 10.1038/s41586-020-2649-2
HI4PI Collaboration, Ben Bekhti, N., Flöer, L., et al. 2016, A&A, 594, A116, doi: 10.1051/0004-6361/201629178
Hillel, S., & Soker, N. 2016, MNRAS, 455, 2139, doi: 10.1093/mnras/stv2483
Hillel, S., & Soker, N. 2017, ApJ, 845, 91, doi: 10.3847/1538-4357/aa81c5
Hitomi Collaboration, Aharonian, F., Akamatsu, H., et al. 2016, Nature, 535, 117, doi: 10.1038/nature18627
Hlavacek-Larrondo, J., Fabian, A. C., Edge, A. C., et al. 2012, MNRAS, 421, 1360, doi: 10.1111/j.1365-2966.2011.20405.x
Hopkins, P. F., Quataert, E., Ponnada, S. B., & Silich, E. 2025a, The Open Journal of Astrophysics, 8, 78, doi: 10.33232/001c.141293
Hopkins, P. F., Quataert, E., Silich, E. M., et al. 2025b, arXiv e-prints, arXiv:2507.18712, doi: 10.48550/arXiv.2507.18712
Hunter, J. D. 2007, Computing in Science and Engineering, 9, 90, doi: 10.1109/MCSE.2007.55
Kausch, W., Gitti, M., Erben, T., & Schindler, S. 2007, A&A, 471, 31, doi: 10.1051/0004-6361:20054413
Lau, E. T., Gaspari, M., Nagai, D., & Coppi, P. 2017, ApJ, 849, 54, doi: 10.3847/1538-4357/aa8c00
Lau, E. T., Hearin, A. P., Nagai, D., & Cappelluti, N. 2021, MNRAS, 500, 1029, doi: 10.1093/mnras/staa3313
Lau, E. T., Nagai, D., Kravtsov, A. V., & Zentner, A. R. 2011, ApJ, 734, 93, doi: 10.1088/0004-637X/734/2/93
Li, Y., Gendron-Marsolais, M.-L., Zhuravleva, I., et al. 2020, ApJL, 889, L1, doi: 10.3847/2041-8213/ab65c7
Liu, W., Sun, M., Voit, G. M., et al. 2024, MNRAS, 531, 2063, doi: 10.1093/mnras/stae1285
Machado Poletti Valle, L. F., Avestruz, C., Barnes, D. J., et al. 2021, MNRAS, 507, 1468, doi: 10.1093/mnras/stab2252
Markevitch, M. 2007, arXiv e-prints, arXiv:0705.3289, doi: 10.48550/arXiv.0705.3289
Mathews, W. G., Faltenbacher, A., & Brighenti, F. 2006, ApJ, 638, 659, doi: 10.1086/499119
McDonald, M., Gaspari, M., McNamara, B. R., & Tremblay, G. R. 2018, ApJ, 858, 45, doi: 10.3847/1538-4357/aabace
McDonald, M., Veilleux, S., Rupke, D. S. N., Mushotzky, R., & Reynolds, C. 2011, ApJ, 734, 95, doi: 10.1088/0004-637X/734/2/95
McNamara, B. R., & Nulsen, P. E. J. 2007, ARA&A, 45, 117, doi: 10.1146/annurev.astro.45.051806.110625
Migkas, K., Kox, D., Schellenberger, G., et al. 2024, A&A, 688, A107, doi: 10.1051/0004-6361/202349006
Mittal, R., Hudson, D. S., Reiprich, T. H., & Clarke, T. 2009, A&A, 501, 835, doi: 10.1051/0004-6361/200810836
Mroczkowski, T., Nagai, D., Basu, K., et al. 2019, SSRv, 215, 17, doi: 10.1007/s11214-019-0581-2
Nagai, D., & Lau, E. T. 2011, ApJL, 731, L10, doi: 10.1088/2041-8205/731/1/L10
O'Dea, C. P., Baum, S. A., Privon, G., et al. 2008, ApJ, 681, 1035, doi: 10.1086/588212
O'Sullivan, E., Giacintucci, S., David, L. P., et al. 2011, ApJ, 735, 11, doi: 10.1088/0004-637X/735/1/11
Peng, F., & Nagai, D. 2009, ApJ, 693, 839, doi: 10.1088/0004-637X/693/1/839
Peterson, J. R., Paerels, F. B. S., Kaastra, J. S., et al. 2001, A&A, 365, L104, doi: 10.1051/0004-6361:20000021
Pfrommer, C. 2013, ApJ, 779, 10, doi: 10.1088/0004-637X/779/1/10
Planck Collaboration, Aghanim, N., Akrami, Y., et al. 2020, A&A, 641, A6, doi: 10.1051/0004-6361/201833910
Planelles, S., Fabjan, D., Borgani, S., et al. 2017, MNRAS, 467, 3827, doi: 10.1093/mnras/stx318
Qin, B., & Wu, X.-P. 2000, ApJL, 529, L1, doi: 10.1086/312445
Rafferty, D. A., McNamara, B. R., Nulsen, P. E. J., & Wise, M. W. 2006, ApJ, 652, 216, doi: 10.1086/507672
Romero, C. E., Sievers, J., Ghirardini, V., et al. 2020, ApJ, 891, 90, doi: 10.3847/1538-4357/ab6d70
Romero, C. E., Gaspari, M., Schellenberger, G., et al. 2023, ApJ, 951, 41, doi: 10.3847/1538-4357/acd3f0
Russell, H. R., Sanders, J. S., & Fabian, A. C. 2008, MNRAS, 390, 1207, doi: 10.1111/j.1365-2966.2008.13823.x
Ruszkowski, M., Brüggen, M., & Begelman, M. C. 2004, ApJ, 615, 675, doi: 10.1086/424702
Ruszkowski, M., & Pfrommer, C. 2023, A&A Rv, 31, 4, doi: 10.1007/s00159-023-00149-2
Sanders, J. S., & Fabian, A. C. 2007, MNRAS, 381, 1381, doi: 10.1111/j.1365-2966.2007.12347.x
Saxena, H., Sayers, J., Gavidia, A., et al. 2025, A&A, 700, A128, doi: 10.1051/0004-6361/202555719
Schellenberger, G., Reiprich, T. H., Lovisari, L., Nevalainen, J., & David, L. 2015, A&A, 575, A30, doi: 10.1051/0004-6361/201424085
Silich, E. M., Bellomi, E., Sayers, J., et al. 2024, ApJ, 968, 74, doi: 10.3847/1538-4357/ad3fb5
Simionescu, A., Allen, S. W., Mantz, A., et al. 2011, Science, 331, 1576, doi: 10.1126/science.1200331
Smith, R. K., Brickhouse, N. S., Liedahl, D. A., & Raymond, J. C. 2001, ApJL, 556, L91, doi: 10.1086/322992
Sun, M. 2009, ApJ, 704, 1586, doi: 10.1088/0004-637X/704/2/1586
Truong, N., Pillepich, A., Nelson, D., et al. 2024, A&A, 686, A200, doi: 10.1051/0004-6361/202348562
Urban, O., Simionescu, A., Werner, N., et al. 2014, MNRAS, 437, 3939, doi: 10.1093/mnras/stt2209
van der Walt, S., Colbert, S. C., & Varoquaux, G. 2011, Computing in Science and Engineering, 13, 22, doi: 10.1109/MCSE.2011.37
Vikhlinin, A., Kravtsov, A., Forman, W., et al. 2006, ApJ, 640, 691, doi: 10.1086/500288
Voit, G. M., Donahue, M., Bryan, G. L., & McDonald, M. 2015, Nature, 519, 203, doi: 10.1038/nature14167
Wan, J. T., Mantz, A. B., Sayers, J., et al. 2021, MNRAS, 504, 1062, doi: 10.1093/mnras/stab948
Wilms, J., Allen, A., & McCray, R. 2000, ApJ, 542, 914, doi: 10.1086/317016
Yang, H. Y. K., & Reynolds, C. S. 2016, ApJ, 829, 90, doi: 10.3847/0004-637X/829/2/90
Zhuravleva, I., Churazov, E., Schekochihin, A. A., et al. 2014, Nature, 515, 85, doi: 10.1038/nature13830
Zhuravleva, I., Churazov, E., Arévalo, P., et al. 2016, MNRAS, 458, 2902, doi: 10.1093/mnras/stw520
Exploring a cosmic ray inverse-Compton origin to the SZ-to-X-ray pressure deficit in the cool core cluster ZwCl 3146

Emily M. Silich,1 Jack Sayers,1 Philip F. Hopkins,1 Charles Romero,2 Brian Mason,3 John Orlowski-Scherer,2 and Craig L. Sarazin4

1 Cahill Center for Astronomy and Astrophysics, California Institute of Technology, Pasadena, CA 91125, USA
2 209 South 33rd Street, Philadelphia, PA 19104, USA
3 National Radio Astronomy Observatory, 520 Edgemont Rd, Charlottesville, VA 22903, USA
4 P.O. Box 400325, Charlottesville, VA 22904, USA

ABSTRACT

We explore the possibility that inverse-Compton (IC) scattering of cosmic microwave background photons by ∼GeV cosmic rays (CRs) injected by the central active galactic nucleus (AGN) in cool core (CC) clusters produces a non-negligible continuum-like X-ray signal that is easily misinterpreted as intracluster medium (ICM) thermal bremsstrahlung continuum. This is particularly relevant to the cooling flow problem: the lack of star formation relative to X-ray-inferred ICM cooling rates. Using ZwCl 3146, a relaxed CC system at z = 0.291, we compare pressure profiles derived via X-rays and the thermal Sunyaev-Zel'dovich (SZ) effect. While SZ measurements probe only thermal ICM electrons, additional CR-IC emission would appear to boost the X-ray-inferred pressure. Relative to unity, we measure a ≃30% decrement in PSZ/PX within 100 kpc of the ZwCl 3146 center at a statistical significance of ≃3.3σ, consistent with predicted deficits from CR-IC contamination in reasonable models of central AGN-driven CR injection. X-ray spectral fits of a two-component model with thermal ICM and CR-IC emission are consistent with CR-IC as the cause of this deficit. We test alternative explanations and systematics that could drive such a decrement, with the leading-order systematics associated with halo triaxiality. Collectively, these systematics are unlikely to produce a PSZ/PX decrement ≳10%. While our results establish that non-negligible CR-IC emission is plausible in ZwCl 3146, we stress that more detailed studies of larger cluster samples are required to robustly assess whether CR-IC is relevant to the cooling flow problem.

1. INTRODUCTION

Cool core (CC) galaxy clusters are characterized by central (r ≲ 100 kpc), rapidly-cooling regions of the intracluster medium (ICM) that exhibit radiative cooling timescales shorter than a few Gyr (see A. C. Fabian 1994, for a review). The cooling rates associated with these CCs inferred from X-ray observations are significantly higher than observed star formation rates in the central brightest cluster galaxies (BCGs) or cold gas reservoir quantities allow (e.g., J. N. Bregman et al. 2006; C. P. O'Dea et al. 2008; J. R. Peterson et al. 2001; M. McDonald et al. 2011, 2018). Historically, this discrepancy has been termed the "cooling flow problem", though modern studies have proposed a resolution in the form of a self-regulating cycle comprising the rapidly-cooling ICM as it falls onto the central BCG, which itself hosts an active galactic nucleus (AGN) that, fueled by the infalling gas, provides a form of mechanical heating that re-deposits energy into the surrounding medium and prevents catastrophic cooling of the ICM within the CC (for a review, see B. R. McNamara & P. E. J. Nulsen 2007). While these scenarios require a relatively fine tuning between the ICM thermodynamics and accretion rate onto the AGN, plausible models for this fine tuning have been proposed (e.g., G. M. Voit et al.
2015). In support of such a self-regulating scenario, CCs tend to host strong radio AGNs (R. Mittal et al. 2009; M. Sun 2009). The leptonic and kinetic power injected by radio jets in CCs is comparable to the apparent X-ray cooling luminosity within the CC (L. Bîrzan et al. 2004; E. O'Sullivan et al. 2011; J. Hlavacek-Larrondo et al. 2012), and AGN radio luminosities are weakly correlated with the X-ray CC size (W. Liu et al. 2024). However, the physics governing the regulation of ICM cooling and heating in CCs are not well established. The balance of ICM cooling with heating via feedback in galaxy groups and clusters is only self-regulated well on long timescales (M. McDonald et al. 2018), and while many AGN-driven heating mechanisms in CCs have been proposed, it is unclear to what extent these processes are responsible for the thermalization of energy in the ICM: for example, the dissipation of sound waves and weak shocks driven by expanding AGN bubbles (e.g., A. C. Fabian et al. 2003; M. Ruszkowski et al. 2004; W. Forman et al. 2005; W. G. Mathews et al. 2006), AGN-bubble-driven generation and dissipation of gravity waves via turbulence (I. Zhuravleva et al. 2014, 2016; Y. Li et al. 2020), mixing of hot AGN bubble plasma with the ICM (H. Y. K. Yang & C. S. Reynolds 2016; S. Hillel & N. Soker 2016, 2017), and cosmic ray (CR) heating via Alfvén waves (C. Pfrommer 2013). The correlation of radio AGN luminosity with X-ray CC luminosity implies that a significant population of CRs is being accelerated in CCs. The radio emission is primarily generated by high-energy leptons (E ≫ 10-100 GeV), which, having short lifetimes (≲10^7 yr; see M. Ruszkowski & C. Pfrommer 2023 for a review), typically generate emission only within a few kpc of the CC centers (e.g., F. de Gasperin et al. 2012). Most of the total CR lepton energy should, however, be contained in a population of low-energy (E ∼ 0.1-1 GeV) leptons (P. F. Hopkins et al. 2025b). In contrast to the high-energy CR leptons generating the radio emission, these low-energy leptons have much longer lifetimes (∼Gyr), so they should diffuse or stream out to larger radii (∼100 kpc) before losing most of their energy, comprising extended, ancient cosmic ray halos (ACRHs; P. F. Hopkins et al. 2025b). These ACRHs would produce ∼keV X-ray emission via inverse Compton (IC) scattering with cosmic microwave background (CMB) photons. Since this CR population is peaked around one GeV, CR-IC from these halos would exhibit thermal continuum-like X-ray spectra (P. F. Hopkins et al. 2025a). In massive clusters, X-ray emission from CR-IC would be insignificant relative to the thermal bremsstrahlung emission at large radii, but it would appear as ∼keV emission in the cluster cores at radii ≲100 kpc (P. F. Hopkins et al. 2025b). Because the 0.1-1 GeV CR lifetime is of order the age of a cluster (∼Gyr), ACRHs should be present in a significant fraction of clusters. In addition, given the similarity of the power injected by radio jets in CCs and the apparent X-ray cooling luminosity, CR-IC from ACRHs could non-negligibly bias X-ray-inferred CC properties. The expected emissivity profile for CR-IC emission in these ACRHs is similar to the observed X-ray emission profiles in CCs (P. F. Hopkins et al. 2025b).
Therefore, interpreting a non-negligible fraction of the X-ray emission as CR-IC could alleviate the most challenging aspects of the cooling flow problem: the CR-IC contamination could explain why the apparent X-ray cooling rates of CCs are so large compared to constraints on observed cooling rates; the radio AGN properties could be so well-correlated with the apparent X-ray cooling luminosity because the AGN are the direct source of the low-energy leptons producing the CR-IC in X-rays, while the radio emission traces a younger population of high-energy leptons; and it would explain why CC clusters appear to follow universal profile shapes in their X-ray-inferred thermodynamic quantities (P. F. Hopkins et al. 2025b). Because the CR-IC spectra exhibit shapes similar to thermal continuum, observationally determining the fraction of CR-IC in X-rays via spectroscopy is extremely challenging. Perhaps the cleanest test for such CR-IC contributions involves comparison with the thermal Sunyaev-Zel'dovich (SZ) effect, which is the IC scattering of ∼keV thermal electrons in the ICM with CMB photons (see T. Mroczkowski et al. 2019, for a review). Since the number of thermal electrons in the ICM is many orders of magnitude larger than the number of CR leptons, and the thermal SZ effect is sensitive to the non-relativistic electron pressure, SZ measurements provide a robust measurement of the true thermal gas pressure of the CC. Conversely, the X-ray-derived pressure profiles of CCs are primarily driven by the X-ray-inferred density, which is derived from the observed soft X-ray emission. Therefore, the pressure derived from X-rays would in general be different, and most often boosted, relative to the true thermal gas pressure when CR-IC contributes to the X-ray continuum, and equal to the true thermal gas pressure in the absence of CR-IC contamination. So, the ratio of SZ-to-X-ray derived pressure (PSZ/PX) as a function of radius in CCs provides a potentially powerful test of these ACRH models, which we will leverage in this study. ZwCl 3146 is a massive (M500 ≃ 7.7 × 10^14 M⊙; C. E. Romero et al. 2020), relaxed CC cluster (e.g., W. Kausch et al. 2007) at z = 0.291 (S. W. Allen et al. 1992). Notably, ZwCl 3146 exhibits a large X-ray-inferred cooling rate (∼1000 M⊙ yr^-1; A. C. Edge et al. 1994; E. Egami et al. 2006; W. Kausch et al. 2007; M. McDonald et al. 2018), and it also hosts a central radio source embedded within a diffuse radio minihalo extending out to ∼90 kpc in radius (S. Giacintucci et al. 2014). Given the need to resolve the inner ≃100 kpc of a CC to test for the presence of CR-IC contamination via pressure profile comparisons, deep, high-resolution X-ray and SZ data are required. ZwCl 3146 represents the only such CC with not only adequate available observations in each waveband (having 86 ks of archival Chandra data and being detected at 61σ significance with MUSTANG-2; C. E. Romero et al. 2020), but also a sufficiently low mm-wave-brightness central AGN so as to be disentangled from the SZ signal (C. E. Romero et al. 2020). In this work, we explore the possibility that CR-IC generated by CRs injected by the central AGN contributes significantly to the X-ray continuum in ZwCl 3146, and thus test whether such CR-IC could be biasing X-ray-inferred thermodynamic properties of CCs. In Section 2, we describe the ZwCl 3146 SZ and X-ray data analysis, and Section 3 provides an overview of the calculation of the SZ- and X-ray-derived pressure profiles from these datasets.
We compare these ZwCl 3146 pressure profiles in Section 4, and highlight a detection of an SZ-to-X-ray pressure deficit in the central ∼100 kpc of ZwCl 3146. In Section 5, we discuss physical scenarios that could contribute to the observed pressure deficit, including CR-IC from an ACRH as well as additional possible contaminants and systematics. Our conclusions are given in Section 6. Throughout this work, we assume a concordance cosmology: H0 = 70 km s^-1 Mpc^-1, Ωm,0 = 0.3, unless otherwise specified.
2. DATA ANALYSIS
2.1. SZ
We use MUSTANG-2 data that have been presented in earlier works (C. E. Romero et al. 2020, 2023). Operating on the 100-m Green Bank Telescope (GBT), MUSTANG-2 achieves 10″ resolution at full width at half-maximum (FWHM) with an instantaneous field of view (FOV) of 4.2′ (S. R. Dicker et al. 2014). MUSTANG-2 observes targets via on-the-fly mapping. In the case of ZwCl 3146, Lissajous daisy scans, primarily of 2.5′ and 3.0′ scanning radii, were used. In C. E. Romero et al. (2020), an analysis of the MUSTANG-2 data in the time domain was found to better recover the pressure profile at large radii relative to a more canonical map-domain analysis. The time-domain analysis relies on characterizing the noise in Fourier space. To create a MUSTANG-2 map for visualization and comparison with the X-ray images, we consider each pixel to be a model component (see Sec. 3.1). The complexity here negates the potential for an explicit solution, and rather a (preconditioned) conjugate gradient descent is used (see again C. E. Romero et al. 2020). We show the resultant map in Figure 1, though, to reiterate, the pressure profile is not derived from this map.
2.2. X-ray
We performed the Chandra X-ray data reduction with CIAO version 4.16 (A. Fruscione et al. 2006), utilizing a modified version of the data reduction pipeline outlined in E. M. Silich et al. (2024) applied to two ZwCl 3146 datasets: Chandra ObsIDs 909 and 9371. We calibrated the raw ACIS-I data for each ObsID using CalDB version 4.11.3 with chandra_repro. For each dataset, we performed an exposure correction with fluximage, point source identification via wavdetect (P. E. Freeman et al. 2002), and exclusion of point sources from the exposure-corrected data. We filtered the light curves for each observation with deflare and applied the identified good-time intervals (GTIs) to the data. After filtering, the total GTI is 86 ks between ObsIDs 909 and 9371. From these filtered datasets, we generated a "clean" (filtered, exposure-corrected, point-source-free) merged 0.5-7 keV surface brightness (SX) map for each ObsID with flux_obs. We then constructed blank-sky background event files that are reprojected and scaled to match each of the ZwCl 3146 observations with blanksky. These CalDB blank-sky datasets, which are constructed by averaging deep (high-statistics), point-source-free observations across large regions of the sky, characterize contributions from the Chandra particle-induced instrumental background (I. Bartalucci et al. 2014) and astrophysical foreground and background components. For each ObsID, we generated a 0.5-7 keV background-subtracted SX map from the corresponding clean SX map and blank-sky background event file with blanksky_image. We merged the background-subtracted images for each ObsID to generate a single ZwCl 3146 clean background-subtracted 0.5-7 keV counts map. Then, we calculated the X-ray peak of ZwCl 3146 from the clean background-subtracted 0.5-7 keV counts map.
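The tool names above are the actual CIAO utilities; as a rough sketch of how such a reduction could be scripted from Python (the file names, output roots, and several parameter values here are illustrative assumptions of ours, not the exact settings used in this work; consult the CIAO documentation for the full parameter lists):

```python
import subprocess

def run(cmd):
    # Thin wrapper so each CIAO call fails loudly if a tool errors out.
    subprocess.run(cmd, check=True)

for obsid in ("909", "9371"):
    # Recalibrate the raw ACIS-I data against the installed CalDB.
    run(["chandra_repro", f"indir={obsid}", f"outdir={obsid}/repro"])
    # Exposure-corrected 0.5-7 keV image (band syntax lo:hi:effective_energy);
    # the reprocessed event-file name below is a placeholder.
    run(["fluximage", f"{obsid}/repro/evt2_repro.fits", f"{obsid}/flux",
         "bands=0.5:7.0:2.3"])
    # Point-source detection on the exposure-corrected image.
    run(["wavdetect", f"infile={obsid}/flux_0.5-7.0.img",
         f"outfile={obsid}/srcs.fits", f"scellfile={obsid}/scell.fits",
         f"imagefile={obsid}/recon.fits", f"defnbkgfile={obsid}/nbkg.fits",
         "scales=1.0 2.0 4.0 8.0"])
    # Light-curve filtering; the light curve itself would be extracted first
    # (e.g., with dmextract), which is omitted here.
    run(["deflare", f"{obsid}/lc.fits", f"{obsid}/clean.gti", "method=clean"])
    # Matched, reprojected blank-sky background event file for this ObsID
    # (the background-subtracted image would then come from blanksky_image).
    run(["blanksky", f"evtfile={obsid}/repro/evt2_repro.fits",
         f"outfile={obsid}/blank.evt"])
```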
We defined a set of circular annuli relative to this peak, beginning at a radius of 25 kpc, and we compute the normalization αP between the SZ- and X-ray-derived pressure profiles using all bins outside of 100 kpc. The pressure profiles at these radii are less likely to suffer from projection effects associated with environmental complexity induced by the central AGN, and they enable accounting for biases in the absolute calibration of the X-ray-derived profiles relative to those derived from SZ without (by design) washing out the possible signal (decrement) that we seek to test at inner radii. We find that αP = 0.94 ± 0.04, i.e., the X-ray-derived pressure profiles are on average biased ≃6% higher than those derived from the SZ effect. This value of αP is consistent with the expected level of calibration uncertainty and with the value derived from a comparison of the SZ- and X-ray (via Chandra)-derived pressure profiles for a large sample of clusters in J. T. Wan et al. (2021) (their normalization value being ≃0.91). We thus scale the entire PX profile by αP to ensure agreement between the pressure profiles at large radii (see Figure 1), though see Section 4 for further discussion of this.
4. RESULTS
We calculate the SZ-to-X-ray pressure deficit within 100 kpc of the X-ray peak from PSZ and PX by leveraging the 1000 realizations of PSZ and PX described in Section 3.2. For each realization, we apply the mean value of αP for that realization to PX. Once PX is corrected, we estimate the (unweighted) mean ratio of PSZ/PX for all bins within 100 kpc. We use this distribution of 1000 pressure deficit values to estimate the total SZ-to-X-ray pressure deficit: Pdef ≡ ⟨PSZ/PX⟩_{r ≤ 100 kpc} = 0.72 ± 0.08. While Pdef represents the average pressure deficit within 100 kpc, the values of PSZ/PX within this radius consistently trend downwards as a function of decreasing cluster radius. This procedure encapsulates the variation in αP, which is not precisely constrained in the literature, within the statistical uncertainty estimate. In principle, the expected degree of X-ray calibration uncertainty is larger at higher plasma temperatures (G. Schellenberger et al. 2015), i.e., biased towards outer cluster radii for a CC. Even given the aforementioned motivation for normalizing PX using the outer-radii values, we perform the analysis without applying this normalization correction, obtaining Pdef = 0.68 ± 0.07. Since the αP correction shifts PX down relative to PSZ, neglecting this correction (and its associated scatter) results in a downward shift of PSZ/PX, and thus a mildly more statistically significant measurement of Pdef. In principle, our results are further dependent on two main modeling systematics, namely the counts threshold used to define bin radii and the width of the outermost bin used in the spectral deprojection. To determine the importance of these two systematic uncertainties in our modeling, we evaluate a distribution of Pdef and αP values using the above-described procedures assuming a range of reasonable counts thresholds (tc = [6000, 6500, ..., 9000] counts) and outermost bin radii (rw = [10, 15, ..., 30] image pixels), for 35 combinations in total. The additional systematic uncertainty associated with our modeling procedures from these distributions is negligibly small relative to our statistical uncertainty: σ_Pdef ≃ 0.02. We therefore report Pdef as above with the statistical uncertainty exclusively.
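Schematically, this deficit estimate reduces to a few lines of numpy; the arrays below are hypothetical stand-ins for the 1000 posterior realizations of PSZ and PX described above:

```python
import numpy as np

rng = np.random.default_rng(0)
n_real, n_bins = 1000, 8                             # realizations x radial bins
r = np.array([33, 45, 60, 80, 105, 140, 190, 260])   # hypothetical bin centers [kpc]

# Hypothetical posterior realizations of the SZ- and X-ray-derived pressures
# (arbitrary units; the real profiles come from the MUSTANG-2 and Chandra fits).
p_sz = rng.normal(1.0, 0.1, (n_real, n_bins))
p_x = rng.normal(1.1, 0.1, (n_real, n_bins))

inner = r <= 100        # bins used to measure the deficit
outer = r > 100         # bins used to calibrate alpha_P

# Per-realization calibration factor: mean SZ-to-X-ray ratio outside 100 kpc,
# applied to the entire X-ray profile.
alpha_p = (p_sz[:, outer] / p_x[:, outer]).mean(axis=1)
p_x_cal = p_x * alpha_p[:, None]

# Unweighted mean ratio within 100 kpc, per realization; the spread of this
# distribution gives the statistical uncertainty on Pdef.
p_def = (p_sz[:, inner] / p_x_cal[:, inner]).mean(axis=1)
print(f"Pdef = {p_def.mean():.2f} +/- {p_def.std():.2f}")
```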
Finally, while we evaluate the ZwCl 3146 PX profiles within a set of discrete annular bins assuming constant pressure within each bin, we calculate PSZ at a series of nodes corresponding to each linear bin center (where each node is connected to adjacent nodes via a power law). In principle, this difference could introduce a bias in the pressure profile comparisons. To test for this possible bias, we evaluate the PSZ profiles via two methods. First, as in the observational analysis, we estimate PSZ at each linear bin center. Second, we estimate PSZ at the "X-ray emission-weighted center", i.e., √⟨P²⟩ (assuming constant temperature within each bin). Within each bin, differences in PSZ evaluated via each of these procedures are far subdominant to the uncertainty on each respective pressure estimate, so we conclude that this methodology difference should not significantly bias our comparisons.

Figure 2. Ratios of SZ-to-X-ray derived pressure profiles (black data points; 1σ uncertainties) with predictions from P. F. Hopkins et al. (2025b) overplotted for various CR injection luminosities (L_CRs = 10^43, 10^44, and 10^45 erg s^-1, corresponding to Pdef ≃ 0.9, 0.7, and 0.3, plus a no-CR-IC case with Pdef = 1). The predicted deficit in PSZ/PX at radii within ≃100 kpc of the cluster center becomes more extreme for higher CR injection luminosities. Our data from MUSTANG-2 and Chandra indicate the presence of a deficit in PSZ/PX at ≃3.3σ significance.

5. DISCUSSION
We detect a ≳3σ deficit in the SZ-to-X-ray pressure ratio within 100 kpc of the ZwCl 3146 cluster center, Pdef = 0.72 ± 0.08. Although of moderate statistical significance, this result aligns with a scenario in which CR-IC emission non-negligibly contributes to the X-ray continuum, artificially boosting the pressure inferred from X-rays. The models introduced in P. F. Hopkins et al. (2025a,b) naturally predict this behavior, indicating that for realistic levels of AGN-driven CR injection, central CC pressure ratios can be suppressed to values as low as Pdef ≃ 0.3 within 100 kpc. Of these, models assuming a CR injection luminosity L_CRs ∼ 10^44 erg s^-1 predict a central pressure deficit Pdef ≃ 0.7 within 100 kpc, consistent with the ZwCl 3146 value. This demonstrates that a significant non-thermal X-ray continuum component could be present in ZwCl 3146 and plausibly generated by CR-IC in an ACRH. Energetically, a CR injection luminosity of order L_CRs ∼ 10^44 erg s^-1 is reasonable given the large observed jet kinetic power of the ZwCl 3146 AGN (∼6 × 10^45 erg s^-1; D. A. Rafferty et al. 2006), which is consistent with measured trends between the observed jet kinetic power and the X-ray luminosity (D. A. Rafferty et al. 2006). Such CR-IC would provide a natural physical mechanism for offsetting X-ray-inferred radiative cooling in CCs: by contributing ∼keV thermal continuum-like emission in the soft X-ray band, CR-IC emission masks the true thermal budget of the ICM, thereby alleviating the apparent discrepancy between X-ray-inferred and true gas cooling rates at the heart of the cooling flow problem.
5.1. Other possible systematics
In this section, we explore additional possible systematics and physical processes that could contribute to the observed deficit in PSZ/PX.
The most precise quantitative calibration of these effects would require a one-to-one comparison of full-instrument mock SZ and X-ray pressure measurements derived from a cosmological sample of CCs, thus incorporating instrumental effects (e.g., PSF smearing, deprojection of a non-truncated gas distribution, etc.) and astrophysical effects (see below), which is beyond the scope of this work. Instead, we provide estimates for the expected contributions from the leading sources of bias in our analysis, calculated via available analytic models or numerical calibrations. A summary of all contributions we consider and their relative importance to the measured SZ-to-X-ray pressure deficit in ZwCl 3146 is given in Table 1.
5.1.1. Halo triaxiality + orientation
Dark matter halos are intrinsically triaxial, with shapes correlated with their formation histories (E. T. Lau et al. 2021), and within the gravitational potential of a cluster, the ICM inherits this triaxiality (L. F. Machado Poletti Valle et al. 2021). Such asphericity affects both cluster selection (since elongated systems aligned with the LOS are preferentially detected; J. P. Dietrich et al. 2014; H. Saxena et al. 2025) and the interpretation of multiwavelength observables. Since X-ray and SZ observables depend differently on the LOS projection, neglecting gas triaxiality could introduce systematic biases in derived thermodynamic profiles (e.g., J. T. Wan et al. 2021). We estimated the effects of halo triaxiality and orientation on our result as follows. We first obtained a catalog of axial ratios for a sample of 253 M200 ≥ 10^14 M⊙ halos from IllustrisTNG, which is itself a subset of the sample presented in L. F. Machado Poletti Valle et al. (2021) and references therein. For each halo in this sample, we thus obtain the average (as a function of radius) gas minor-to-major and intermediate-to-major axial ratios, and we initialize a 3D grid with the axes of this triaxial ellipsoid aligned with those of the grid. We distribute a general (arbitrarily normalized) electron number density function over the triaxial ellipsoid, which is assumed to have constant axial ratios as a function of radius, in the shape of a cuspy β-profile (A. Vikhlinin et al. 2006):

n_e ∝ (r/r_c)^{-α/2} / [1 + (r/r_c)^2]^{3β/2 - α/4}    (2)

for β-profile shape parameters typical of a CC cluster: the core radius r_c = 0.1 r_2500, inner slope parameter range α = [0.5, 1.5, 3] (I. Bartalucci et al. 2023), and β-profile parameter β = 0.7. For each halo, we assign four random line-of-sight (LOS) unit vectors, so we obtain in total ∼1000 realizations of the triaxial halos. Then, to calculate the effects of an underlying triaxial gas density distribution on the SZ- and X-ray-derived pressure profiles, we create simple mock maps of the SZ and X-ray observables projected along the random LOS vectors assuming two cases: first, a triaxial gas distribution using the axial ratios from our catalog for a given halo (wherein r in Equation 2 above is the elliptical radius), and second, a spherical distribution (wherein r in Equation 2 is the spherical radius and the axial ratios are set to 1). For a roughly isothermal distribution, the SZ-derived pressure is P_e ∝ ∫ n_e dl and the X-ray-derived pressure is P_X ∝ (∫ n_e^2 dl)^{1/2}, so we project these quantities (n_e and n_e^2, respectively) along the LOS vectors for each halo. We then performed a deprojection of these maps using an analogous spherical geometric deprojection formalism as in the X-ray observational analysis (see Sec. 2.2).
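As a simplified illustration of the projection step only (not the full mock-map deprojection described above), the following numpy sketch compares the SZ-like and X-ray-like LOS integrals of the Equation 2 profile through an elongated versus a spherical halo; the axial ratio, sight-line radius, and profile parameters are illustrative assumptions:

```python
import numpy as np

def ne(r, rc=100.0, alpha=1.5, beta=0.7):
    # Cuspy beta-profile of Eq. (2); arbitrary normalization, r and rc in kpc.
    return (r / rc) ** (-alpha / 2) / (1 + (r / rc) ** 2) ** (3 * beta / 2 - alpha / 4)

z = np.linspace(-5000.0, 5000.0, 20001)  # LOS coordinate [kpc]; truncated at grid edge
R_perp = 50.0                            # projected radius of the sight line [kpc]

def sz_and_xray(q_los):
    # q_los: axial ratio of the axis aligned with the LOS (1 = spherical);
    # r is then the elliptical radius entering the density profile.
    r = np.sqrt(R_perp**2 + (z / q_los) ** 2)
    n = ne(r)
    return np.trapz(n, z), np.sqrt(np.trapz(n**2, z))  # ~P_SZ, ~P_X

sz_s, x_s = sz_and_xray(1.0)   # spherical reference
sz_t, x_t = sz_and_xray(1.3)   # halo elongated along the LOS (illustrative)

# Multiplicative bias induced in P_SZ/P_X by the unmodeled elongation;
# stretching the LOS boosts the linear integral more than the sqrt-quadratic one.
print((sz_t / x_t) / (sz_s / x_s))
```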
Since this deprojection routine implicitly assumes that the projected quantity is equal to zero outside the outermost deprojected radius, we apply a truncation to the density function at the outermost bin edge (r_outer = 5 Mpc) before the projection. Estimating Pdef in an analogous way as in the observational analysis, we tested the effect of the choice of α using the spherical distributions. For (de-)projections generated at identical resolution as the observational data, we find no significant bias as a result of varying α, though a systematic offset of ≃1-2% can emerge for lower-resolution data when the deprojection is unable to recover the innermost shape of the density/pressure profiles given the steep gradient at small radii within each deprojection bin. Ultimately, we find the 90% confidence range of the recovered SZ-to-X-ray pressure ratio for the average triaxial distributions across all tested α values is 1.01 ≳ Pdef ≳ 0.98, though the distribution of Pdef values is skewed towards higher recovered pressure ratios, with values as extreme as ≃1.13 possible. This approach represents a very simple estimate of the effects of gas triaxiality on the SZ-to-X-ray derived pressure profiles, neglecting possible additional corrections associated with, e.g., non-isothermality or variation in ICM axial ratios as a function of radius. To address the latter point, using the tabulated distribution of ICM axial ratios as a function of cluster-centric radius in E. T. Lau et al. (2011), we perform an analogous numerical estimate of the effects of differential ellipticity in the ICM distribution on Pdef. For a single cluster of ZwCl 3146 mass, we randomly sample 1000 LOS vectors and evaluate the ratio of measured SZ-to-X-ray pressures within 100 kpc for the E. T. Lau et al. (2011) ICM axial ratio distribution (using all clusters in their sample, measured at z = 0 for simulations including cooling and feedback), finding a 90% confidence range of the recovered SZ-to-X-ray pressure ratio of 1.06 ≳ Pdef ≳ 0.92. Therefore, the predicted contribution to Pdef due to halo triaxiality and cluster orientation is not large enough to explain the observed pressure deficit in ZwCl 3146. Our calculations of the characteristic bias in Pdef due to the underlying halo shapes are likely a conservative estimate (i.e., over-estimate) of the true variation in the pressure profiles due to gas triaxiality and cluster orientation. ZwCl 3146 (like any representative CC sample) represents a specific class of galaxy cluster morphology (associated with relaxed systems; see, e.g., W. Kausch et al. 2007), while both our IllustrisTNG catalog and the E. T. Lau et al. (2011) axial ratios used here are agnostic to cluster morphology, and thus may include more disturbed (elongated) systems than we would consider observationally. For this analysis, we implicitly assume that our cluster is selected agnostic to orientation, which will not be true for any larger sample of CCs, even while these relaxed objects are likely to be less sensitive to orientation systematics than more disturbed systems.
5.1.2. Helium sedimentation
The X-ray emission from massive galaxy clusters is dominated by the thermal bremsstrahlung continuum generated via the free-free scattering of electrons with hydrogen and helium nuclei.
Table 1. Census of possible contributions to the SZ-to-X-ray pressure deficit within 100 kpc of the ZwCl 3146 center.

Contribution | Expected effect | Additional considerations
CR-IC (a) | 1 ≥ Pdef ≳ 0.3 | Tested across two orders of magnitude of CR injection luminosity
Halo triaxiality + orientation (constant axial ratios) (b) | 1.01 ≳ Pdef ≳ 0.98 | Skewed towards high values of Pdef, as extreme as Pdef ≲ 1.13
Halo triaxiality + orientation (radially-dependent axial ratios) (b) | 1.06 ≳ Pdef ≳ 0.92 | Tested for a single model of radially-dependent halo triaxiality
Gas clumping (b) | 1.02 ≳ Pdef ≳ 0.98 | Limited sensitivity due to probing radii ≲ 0.5 R500
Helium sedimentation | Pdef = 1 | Plasma instabilities, bulk motions, and turbulence driven by AGN outflows and merger activity render such sedimentation negligible
Cosmological parameters (a) | Pdef = 1 | Normalization of PX to PSZ at large radii removes the (constant in radius) bias from PX ∝ DA(z)^{-1/2}
X-ray calibration uncertainty | Pdef = 1 | Bias of ∼20% possible in the absence of our large-radius normalization of PX to PSZ
(a) Contributions to Pdef from CR-IC and choice of cosmological parameters are quantified as a characteristic range derived from a discrete set of analytic models.
(b) Contributions to Pdef from halo triaxiality and orientation and gas clumping are quantified as a 90% confidence range calculated via numerical modeling.

Theoretical calculations have predicted that the larger helium nuclei can sediment into the gravitational potential of the cluster on timescales of order the cluster age (e.g., F. Abramopoulos et al. 1981; M. R. Gilfanov & R. A. Syunyaev 1984), resulting in an enhanced abundance of helium nuclei in cluster cores that could bias the electron pressure derived from X-ray emission if one were to assume an emitting ICM composed of primordial abundances. Several previous works have studied the effects of possible helium sedimentation on the X-ray observables of galaxy clusters (e.g., B. Qin & X.-P. Wu 2000; L. Chuzhoy & A. Nusser 2003; L. Chuzhoy & A. Loeb 2004; M. Markevitch 2007; S. Ettori & A. C. Fabian 2006; F. Peng & D. Nagai 2009; G. E. Bulbul et al. 2011). In particular, the X-ray surface brightness dominated by thermal bremsstrahlung (including contributions from H and He ions) in a cluster may be written as

S_X ∝ ∫_LOS (n_e n_H Λ_eH + n_e n_He Λ_eHe) dl ∝ n_H^2 (1 + 2x)(1 + 4x) Λ_eH    (3)

for x ≡ n_He/n_H such that n_e = (1 + 2x) n_H, with the X-ray-band cooling function Λ_e(H/He) describing the emission from the free-free scattering of electrons with hydrogen and helium nuclei, and neglecting continuum contributions from all elements heavier than helium (F. Peng & D. Nagai 2009). Then, the electron pressure scales as

P_e ∝ [(1 + 2x)/(1 + 4x)]^{1/2}    (4)

for a fixed (measured) S_X, intrinsic gas temperature, and X-ray spectroscopic electron temperature. In reality, spectroscopic fits to obtain n_e depend on the spectroscopically-fit gas temperature, which determines the shape of the thermal bremsstrahlung continuum and also influences the line emission, which is further dependent on the metallicity of the plasma, though the line emission is a very small contribution to the overall surface brightness. S. Ettori & A. C. Fabian (2006) note that the X-ray spectroscopically-fit temperature of a cluster does not vary significantly in the presence of helium enrichment. While the metallicity is dependent on the underlying helium abundance, we fix the metallicity to 0.3 Z⊙ in our analysis due to the large parameter degeneracy and limited constraining power of the metallicity parameter in our fits.
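As a worked check of Equation 4 (a short calculation of ours, anticipating the numbers derived next):

```python
import numpy as np

def pe_at_fixed_sx(x):
    # Electron pressure inferred at fixed X-ray surface brightness, Eq. (4),
    # up to an x-independent normalization.
    return np.sqrt((1 + 2 * x) / (1 + 4 * x))

x_prim = 0.083        # primordial He-to-H number ratio (X = 0.75, Y = 0.25)
x_sed = 2 * x_prim    # He abundance doubled by sedimentation

# Bias when the true abundance is x_sed but x_prim is assumed in the fit.
bias = pe_at_fixed_sx(x_prim) / pe_at_fixed_sx(x_sed)
print(f"Pe overestimated by {100 * (bias - 1):.1f}%")  # ~4.6%, i.e. the ~5% quoted
```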
For an ICM composed of primordial abundances (X = 0.75, Y = 0.25), we would have x = 0.083. Therefore, in the case where the He abundance were doubled from its primordial value, if we were to calculate the electron pressure measured at fixed S_X assuming x = 0.083, we would overestimate P_e by ≃5%. Using the prediction for the helium-to-hydrogen mass fraction radial distribution from the F. Peng & D. Nagai (2009) 1D helium sedimentation model applied to a cluster with ages of t_age = 9 Gyr and t_age = 11 Gyr and an X-ray spectroscopic temperature of 10 keV (distributed according to the profile of A. Vikhlinin et al. 2006), we calculate the overestimation of the electron pressure that would be inferred from an X-ray measurement as a function of cluster radius. Based on this simple laminar model, we estimate an equivalent SZ-to-X-ray pressure deficit within 100 kpc for the oldest evaluated cluster age (t_age = 11 Gyr) of Pdef ≃ 0.9. This simple estimate is ≃2σ less extreme than the measured deficit, and contributions become even less important for lower cluster ages. However, 1D models like that of F. Peng & D. Nagai (2009) are likely an overly idealized estimate of the effects of He sedimentation in cluster cores. First, they do not include the effects of cluster environments, which are subject to, e.g., bulk motions or turbulence driven by AGN outflows, merger activity, etc. (I. Zhuravleva et al. 2014; E. T. Lau et al. 2017). To first order, we can scale the amount of He sedimentation expected in a simple laminar system to account for the effects of bulk mixing and turbulence like

η_sed,mix ∼ η_sed · e^{-t_sed/t_mix}    (5)

with η_sed being the amount of He sedimentation predicted for a laminar system (here causing Pdef ≃ 0.9), t_sed the He sedimentation time, and t_mix the mixing time. Following F. Peng & D. Nagai (2009), and assuming the cluster is approximately in hydrostatic equilibrium, we can write t_sed/t_mix as a function of cluster-centric radius R:

t_sed/t_mix ∼ 12 (M_s/0.1) · (R/100 kpc) / [λ_mix (f_B/0.2)(n_gas/10^{-3} cm^{-3})(T/10 keV)^{5/2}]    (6)

for the sonic Mach number of the bulk flows or turbulence M_s = v_mix/c_s (with the largest-scale bulk mixing velocity v_mix and sound speed c_s), λ_mix ∼ 1 being the ratio of the coherence length of the largest-scale coherent flows (in/outflows, turbulence) to the cluster-centric radius in cluster CCs, the magnetic suppression factor f_B ∼ 0.2 for tangled magnetic fields, and the cluster temperature T ∼ 5 keV and density n_gas ∼ 10^{-2} cm^{-3} at a radius of ∼100 kpc in a characteristic CC (e.g., N. Truong et al. 2024). Finally, applying radial scalings of density and temperature (n_gas ∝ 1/R, T ∝ R^{0.6} from 10-100 kpc; C. E. Romero et al. 2020), we have

t_sed/t_mix ∼ 6 (R/100 kpc)^{1/2} · (v_mix/100 km s^{-1})    (7)

Assuming a characteristic turbulent velocity dispersion derived from observations of the Perseus CC with Hitomi (∼200 km s^{-1}; Hitomi Collaboration et al. 2016), the effects of He sedimentation (via Equations 5 and 7) would be reduced to a negligible level (Pdef ≃ 1; see Table 1).

While our normalization of PX to PSZ at large radii (> 100 kpc) was designed to account for the leading-order systematic associated with the uncertainty of X-ray-derived temperature (pressure) measurements, we find that such a normalization mitigates biases associated with other astrophysical phenomena. Namely, the large-radius normalization of PX to PSZ reduces effects on Pdef due to halo triaxiality and cluster orientation, gas clumping, and choice of cosmological parameters. This normalization is thus a necessary step in comparing pressure profiles derived from X-rays and the SZ effect.
5. As a simple check, we confirmed that the ratio P_th/P_{th+IC} is consistent between a method estimating P_th via the SZ-derived pressure and another calculating P_th from an X-ray spectral fit to a two-component model including both CR-IC and thermal ICM emission for the innermost ZwCl 3146 radial bin near 33 kpc, where the highest CR-IC flux is expected. This is a simplified estimate for a case where the CR-IC spectral shape can be reasonably approximated by a thermal bremsstrahlung-like continuum at the temperature of the coincident ICM. In this test case, the ratio F_IC^{0.5-1.0}/F_th^{0.5-1.0} ≃ 3.3.

6. We have shown that an SZ-to-X-ray pressure deficit within ∼100 kpc of a CC center can be observed at moderate statistical significance via measurements of the ZwCl 3146 system with existing instrumentation, and the systematics associated with such a pressure deficit can be straightforwardly handled. However, to make a robust claim of whether CR-IC contamination in the X-ray emission is contributing significantly to the cooling flow problem in CCs, more detailed studies accounting for the following are required:
1. additional observations of a representative population of CCs,
2. an empirical calibration of (instrumental, astrophysical, and methodological) systematics derived from mock observations of a large sample of CC systems in cosmological simulations,
3. an exploration of theoretical modeling choices/assumptions, including the effects of CR-IC spectral shapes discrepant from the thermal ICM spectral shapes, and
4. simultaneous handling of spectral and radial information in each cluster.

ACKNOWLEDGMENTS
EMS acknowledges support from a National Science Foundation Graduate Research Fellowship (NSF GRFP) under Grant No. DGE-1745301. Support for PFH was provided by a Simons Investigator Award.
Facilities: CXO, MUSTANG-2
Software: CIAO (A. Fruscione et al. 2006), Xspec 12.12.0 (K. A. Arnaud 1996), Astropy (Astropy Collaboration et al. 2013, 2018), mpi4py (L. Dalcin & Y.-L. L. Fang 2021), NumPy (S. van der Walt et al. 2011; C. R. Harris et al. 2020), Matplotlib (J. D. Hunter 2007)

REFERENCES
Abramopoulos, F., Chanan, G. A., & Ku, W. H. M. 1981, ApJ, 248, 429.
Allen, S. W., Edge, A. C., Fabian, A. C., et al. 1992, MNRAS, 259, 67.
Anders, E., & Grevesse, N. 1989, GeoCoA, 53, 197.
Arnaud, K. A. 1996, in Astronomical Society of the Pacific Conference Series, Vol. 101, Astronomical Data Analysis Software and Systems V, ed. G. H. Jacoby & J. Barnes, 17.
Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, A&A, 558, A33.
Astropy Collaboration, Price-Whelan, A. M., Sipőcz, B. M., et al. 2018, AJ, 156, 123.
Bartalucci, I., Mazzotta, P., Bourdin, H., & Vikhlinin, A. 2014, A&A, 566, A25.
Bartalucci, I., Molendi, S., Rasia, E., et al. 2023, A&A, 674, A179.
Battaglia, N., Bond, J. R., Pfrommer, C., & Sievers, J. L. 2015, ApJ, 806, 43.
Berlok, T., & Pessah, M. E. 2015, ApJ, 813, 22.
Berlok, T., & Pessah, M. E. 2016, ApJ, 833, 164.
Bîrzan, L., Rafferty, D. A., McNamara, B. R., Wise, M. W., & Nulsen, P. E. J. 2004, ApJ, 607, 800.
Bregman, J. N., Fabian, A. C., Miller, E. D., & Irwin, J. A. 2006, ApJ, 642, 746.
Bulbul, G. E., Hasler, N., Bonamente, M., et al. 2011, A&A, 533, A6.
Chuzhoy, L., & Loeb, A. 2004, MNRAS, 349, L13.
Chuzhoy, L., & Nusser, A. 2003, MNRAS, 342, L5.
Dalcin, L., & Fang, Y.-L. L. 2021, Computing in Science and Engineering, 23, 47.
de Gasperin, F., Orrù, E., Murgia, M., et al. 2012, A&A, 547, A56.
Dicker, S. R., Ade, P. A. R., Aguirre, J., et al. 2014, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 9153, MUSTANG2: a large focal plane array for the 100 meter Green Bank Telescope, 91530J.
Dietrich, J. P., Zhang, Y., Song, J., et al. 2014, MNRAS, 443, 1713.
Eckert, D., Roncarelli, M., Ettori, S., et al. 2015, MNRAS, 447, 2198.
Edge, A. C., Fabian, A. C., Allen, S. W., et al. 1994, MNRAS, 270, L1.
Egami, E., Misselt, K. A., Rieke, G. H., et al. 2006, ApJ, 647, 922.
Ettori, S., & Fabian, A. C. 2006, MNRAS, 369, L42.
Fabian, A. C. 1994, ARA&A, 32, 277.
Fabian, A. C., Sanders, J. S., Allen, S. W., et al. 2003, MNRAS, 344, L43.
Forman, W., Nulsen, P., Heinz, S., et al. 2005, ApJ, 635, 894.
Foster, A. R., Ji, L., Smith, R. K., & Brickhouse, N. S. 2012, ApJ, 756, 128.
Freeman, P. E., Kashyap, V., Rosner, R., & Lamb, D. Q. 2002, ApJS, 138, 185.
Fruscione, A., McDowell, J. C., Allen, G. E., et al. 2006, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 6270, ed. D. R. Silva & R. E. Doxsey, 62701V.
Giacintucci, S., Markevitch, M., Venturi, T., et al. 2014, ApJ, 781, 9.
Gilfanov, M. R., & Syunyaev, R. A. 1984, Soviet Astronomy Letters, 10, 137.
Harris, C. R., Millman, K. J., van der Walt, S. J., et al. 2020, Nature, 585, 357.
HI4PI Collaboration, Ben Bekhti, N., Flöer, L., et al. 2016, A&A, 594, A116.
Hillel, S., & Soker, N. 2016, MNRAS, 455, 2139.
Hillel, S., & Soker, N. 2017, ApJ, 845, 91.
Hitomi Collaboration, Aharonian, F., Akamatsu, H., et al. 2016, Nature, 535, 117.
Hlavacek-Larrondo, J., Fabian, A. C., Edge, A. C., et al. 2012, MNRAS, 421, 1360.
Hopkins, P. F., Quataert, E., Ponnada, S. B., & Silich, E. 2025a, The Open Journal of Astrophysics, 8, 78.
Hopkins, P. F., Quataert, E., Silich, E. M., et al. 2025b, arXiv e-prints.
Hunter, J. D. 2007, Computing in Science and Engineering, 9, 90.
Kausch, W., Gitti, M., Erben, T., & Schindler, S. 2007, A&A, 471, 31.
Lau, E. T., Gaspari, M., Nagai, D., & Coppi, P. 2017, ApJ, 849, 54.
Lau, E. T., Hearin, A. P., Nagai, D., & Cappelluti, N. 2021, MNRAS, 500, 1029.
Lau, E. T., Nagai, D., Kravtsov, A. V., & Zentner, A. R. 2011, ApJ, 734, 93.
Li, Y., Gendron-Marsolais, M.-L., Zhuravleva, I., et al. 2020, ApJL, 889, L1.
Liu, W., Sun, M., Voit, G. M., et al. 2024, MNRAS, 531, 2063.
Machado Poletti Valle, L. F., Avestruz, C., Barnes, D. J., et al. 2021, MNRAS, 507, 1468.
Markevitch, M. 2007, arXiv e-prints.
Mathews, W. G., Faltenbacher, A., & Brighenti, F. 2006, ApJ, 638, 659.
McDonald, M., Gaspari, M., McNamara, B. R., & Tremblay, G. R. 2018, ApJ, 858, 45.
McDonald, M., Veilleux, S., Rupke, D. S. N., Mushotzky, R., & Reynolds, C. 2011, ApJ, 734, 95.
McNamara, B. R., & Nulsen, P. E. J. 2007, ARA&A, 45, 117.
Migkas, K., Kox, D., Schellenberger, G., et al. 2024, A&A, 688, A107.
Mittal, R., Hudson, D. S., Reiprich, T. H., & Clarke, T. 2009, A&A, 501, 835.
Mroczkowski, T., Nagai, D., Basu, K., et al. 2019, SSRv, 215, 17.
Nagai, D., & Lau, E. T. 2011, ApJL, 731, L10.
O'Dea, C. P., Baum, S. A., Privon, G., et al. 2008, ApJ, 681, 1035.
O'Sullivan, E., Giacintucci, S., David, L. P., et al. 2011, ApJ, 735, 11.
Peng, F., & Nagai, D. 2009, ApJ, 693, 839.
Peterson, J. R., Paerels, F. B. S., Kaastra, J. S., et al. 2001, A&A, 365, L104.
Pfrommer, C. 2013, ApJ, 779, 10.
Planck Collaboration, Aghanim, N., Akrami, Y., et al. 2020, A&A, 641, A6.
Planelles, S., Fabjan, D., Borgani, S., et al. 2017, MNRAS, 467, 3827.
Qin, B., & Wu, X.-P. 2000, ApJL, 529, L1.
Rafferty, D. A., McNamara, B. R., Nulsen, P. E. J., & Wise, M. W. 2006, ApJ, 652, 216.
Romero, C. E., Sievers, J., Ghirardini, V., et al. 2020, ApJ, 891, 90.
Romero, C. E., Gaspari, M., Schellenberger, G., et al. 2023, ApJ, 951, 41.
Russell, H. R., Sanders, J. S., & Fabian, A. C. 2008, MNRAS, 390, 1207.
Ruszkowski, M., Brüggen, M., & Begelman, M. C. 2004, ApJ, 615, 675.
Ruszkowski, M., & Pfrommer, C. 2023, A&A Rv, 31, 4.
Sanders, J. S., & Fabian, A. C. 2007, MNRAS, 381, 1381.
Saxena, H., Sayers, J., Gavidia, A., et al. 2025, A&A, 700, A128.
Schellenberger, G., Reiprich, T. H., Lovisari, L., Nevalainen, J., & David, L. 2015, A&A, 575, A30.
Silich, E. M., Bellomi, E., Sayers, J., et al. 2024, ApJ, 968, 74.
Simionescu, A., Allen, S. W., Mantz, A., et al. 2011, Science, 331, 1576.
Smith, R. K., Brickhouse, N. S., Liedahl, D. A., & Raymond, J. C. 2001, ApJL, 556, L91.
Sun, M. 2009, ApJ, 704, 1586.
Truong, N., Pillepich, A., Nelson, D., et al. 2024, A&A, 686, A200.
Urban, O., Simionescu, A., Werner, N., et al. 2014, MNRAS, 437, 3939.
van der Walt, S., Colbert, S. C., & Varoquaux, G. 2011, Computing in Science and Engineering, 13, 22.
Vikhlinin, A., Kravtsov, A., Forman, W., et al. 2006, ApJ, 640, 691.
Voit, G. M., Donahue, M., Bryan, G. L., & McDonald, M. 2015, Nature, 519, 203.
Wan, J. T., Mantz, A. B., Sayers, J., et al. 2021, MNRAS, 504, 1062.
Wilms, J., Allen, A., & McCray, R. 2000, ApJ, 542, 914.
Yang, H. Y. K., & Reynolds, C. S. 2016, ApJ, 829, 90.
Zhuravleva, I., Churazov, E., Schekochihin, A. A., et al. 2014, Nature, 515, 85.
Zhuravleva, I., Churazov, E., Arévalo, P., et al. 2016, MNRAS, 458, 2902.
arXiv:2510.14823v1 [cs.CV] 16 Oct 2025
FraQAT: Quantization Aware Training with Fractional bits
Luca Morreale, Alberto Gil C. P. Ramos, Malcolm Chadwick, Mehdi Noroozi, Ruchika Chavhan, Abhinav Mehrotra, Sourav Bhattacharya
Samsung AI Center Cambridge, UK
Figure 1: FraQAT is a Quantization Aware Training (QAT) technique that grants generative models high fidelity at a fraction of the training time required. Large text-to-image (T2I) models quantized with FraQAT (W4A8) achieve a 16% lower FID score than the state of the art. (Panels: Sana 600M, SD3.5-M, and Flux-schnell, each quantized with FraQAT.)
Abstract
State-of-the-art (SOTA) generative models have demonstrated impressive capabilities in image synthesis or text generation, often with a large capacity model. However, these large models cannot be deployed on smartphones due to the limited availability of on-board memory and computations. Quantization methods lower the precision of the model parameters, allowing for efficient computations, e.g., in INT8. Although aggressive quantization addresses efficiency and memory constraints, preserving the quality of the model remains a challenge. To retain quality under aggressive quantization, we propose a new fractional bits quantization (FraQAT) approach. The novelty is a simple yet effective idea: we progressively reduce the model's precision from 32 to 4 bits per parameter, and exploit the fractional bits during optimization to maintain high generation quality. We show that FraQAT yields improved quality on a variety of diffusion models, including SD3.5-Medium, Sana, PixArt-Σ, and FLUX.1-schnell, while achieving 4-7% lower FID than standard QAT. Finally, we deploy and run Sana on a Samsung S25U, which runs on the Qualcomm SM8750-AB Snapdragon 8 Elite Hexagon Tensor Processor (HTP).
1 Introduction
Over the past few years, generative models have made impressive progress in synthesizing high-quality images [1, 2, 3] and texts [4, 5]. Such a breakthrough is partly achieved by enlarging the model's size; e.g., Diffusion Transformer (DiT) models with over 10 billion (10B) parameters are increasingly common. However, larger models require significantly more resources, and hence higher latency, even at inference. This increase is particularly problematic for deploying these models on resource-limited devices, e.g., smartphones, thus limiting their wide-scale usage.
A well-established approach to mitigate these resource constraints is quantization: by shifting parameters from 32 bits to a lower precision, e.g., 4 bits, the model's weight-allocation footprint in its computational graph is significantly reduced. While past quantization research aimed mostly at decreasing model size, low-precision hardware support, such as NPUs on smartphones, drives researchers to further decrease inference latency. For example, latency gains from reduced data movement are boosted by native support for low-precision operations, e.g., using 4-bit weights and 8-bit activations (W4A8). Although initially few devices offered support for these operations, modern hardware manufacturers readily offer low-precision operations across devices: W4A8 in the Qualcomm Snapdragon HTP [6], INT8, BF16 and FP16 in Intel CPUs [7], Block FP16 in AMD CPUs [8], and FP8/FP4 in NVIDIA H100/H200 GPUs [9], to name a few. The advantages of deploying cloud-quality generative models on-device are multi-fold: it preserves users' privacy while offering a low-latency experience.
For service providers, it reduces operating costs by pushing compute from expensive servers to users' personal devices, as well as avoiding violations of country-specific privacy regulations. In this work, we target mobile deployment, and restrict ourselves to W4A8 given its ubiquitous availability across devices.
Quantization approaches fall under two main categories: Post-Training Quantization (PTQ) and Quantization-Aware Training (QAT). PTQ creates a low-precision model from a high-precision pre-trained model using a small calibration dataset. Recent progress in PTQ research has resulted in W4A32 and W8A32 high-quality quantized models from pre-trained SANA [10], SANA 1.5 [2] and SANA-Sprint [11] (see https://github.com/NVlabs/Sana/blob/main/asset/docs/4bit_sana.md and https://github.com/NVlabs/Sana/blob/main/asset/docs/8bit_sana.md). Mixed-precision W4A32 and W16A32 approaches like SVDQuant [12] have also yielded high-quality quantized models from pre-trained FLUX.1-schnell. In essence, PTQ is ideal for cases where access to a large training dataset or compute cluster is limited. Despite its success, PTQ requires careful data selection [13]. For example, a poorly selected calibration dataset may manifest in poor prompt adherence or exhibit color shifts during deployment. Instead, Quantization-Aware Training (QAT) optimizes weights in lower precision to boost the overall model's performance [14, 15, 16]. In general, QAT approaches yield better results, at lower precision, when a large training dataset or a compute cluster is available. Nonetheless, quantized models suffer from a quality loss compared to the original FP32 model.
We propose fractional bits quantization (FraQAT) to bridge the quality gap between the original and the quantized model. Inspired by Curriculum Learning [17], our training process progressively increases the quantization complexity, i.e., gradually lowers parameter precision, while replicating the original model's output. We show that FraQAT reduces outliers, stabilizes training, and yields improved prompt adherence and image generation quality (Section 2.2). We apply FraQAT to the linear layers of SOTA generative models as they contain the majority of the parameters, and empirically demonstrate the advantages of the proposed techniques on diffusion models (Sections 4.1, 4.2). In terms of image quality, FraQAT achieves 16% lower FID than SOTA QAT. To address computational costs, we perform an outlier analysis (Section 4.3), and selectively train a subset of the model's layers and show its performance. Finally, we quantize and deploy a model on a Samsung S25U, running on the Qualcomm SM8750-AB Snapdragon 8 Elite Hexagon Tensor Processor (HTP) (Section 4.4).
2 Method
2.1 Quantization preliminaries
The goal of quantization is to approximate, in dynamic or static finite precision, internal model operations, such as the operations within linear layers xW, where x ∈ R^{B×m} and W ∈ R^{m×n}. Depending on hardware support, the quantized approximation W_b of a matrix W at b bits can be expressed with a narrower range as:

Q(W)_b := round( (2^{b-1} - 1) / max_{i,j} |[W]_{i,j}| · W ) ∈ {-2^{b-1}, ..., 2^{b-1} - 1}
S(W)_b := max_{i,j} |[W]_{i,j}| / (2^{b-1} - 1) ∈ R_+
W_b := S(W)_b Q(W)_b    (1)

or with a wider range as:

Q(W)_b := floor( 2^b (W - w_min) / (w_max - w_min) ) ∈ {0, ..., 2^b - 1}
S(W)_b := (w_max - w_min) / 2^b ∈ R_+
W_b := S(W)_b Q(W)_b + w_min    (2)

where w_min := min_{i,j} [W]_{i,j} and w_max := max_{i,j} [W]_{i,j}. Most simply for (1), matrix multiplications can be rewritten as:

x_{b_x} W_{b_W} = (S(x)_{b_x} S(W)_{b_W}) (Q(x)_{b_x} Q(W)_{b_W})

where b_x and b_W may differ.
Therefore, the matrix multiplication x_{b_x} W_{b_W} reduces to the multiplication of two floats, S(x)_{b_x} S(W)_{b_W}, and the matrix multiplication of two integer matrices, Q(x)_{b_x} Q(W)_{b_W}. Furthermore, we refer to dynamic quantization when w_min and w_max are computed at runtime, per sample, based on the input, while in static quantization w_min and w_max are pre-computed and shared across all samples. Dynamic quantization, especially when applied to activations, allows outliers to be handled robustly, as each sample's range is computed to maximize representability. On the other hand, static quantization is more restrictive and generates more outliers, thus making the quantization problem strictly harder. Edge devices, such as smartphones, only support static quantization, while GPUs support both. In contrast, activations are often quantized through a look-up table mapping from a 2^b-sized partition of the input range into a fixed number of quantized output values, e.g., the previous layer's output x. In general, weights and activations may be quantized to different precisions, upcast to the same precision before computation and downcast after computation. Hereafter, we make the number of bits in weights and activations explicit with subscripts, e.g., x_32 refers to a 32-bit approximation of x.
Due to restricted address spaces in most mobile accelerators, it is critical to decrease weight precision aggressively, especially in large vision or language models, e.g., 12-billion-parameter models, otherwise these models cannot even be placed on the target devices. However, naively lowering the weights' precision from FP32 to INT4 causes severe degradation in the generated results. This degradation is exacerbated by lowering the activations' precision, as required by integer-only accelerators, most often to INT8 for reduced generation latency. At a high level, the quality degradation phenomenon is attributed to outliers in both activations and weights due to training. The overall challenge of quantization is to approximate the original network's behavior while lowering the precision:

xW ≈ x_8 W_4.    (3)

2.2 Fractional Quantization-Aware-Training
Intuitively, Quantization-Aware Training (QAT) approaches, including the proposed FraQAT, handle outliers, both in weights and activations, by shifting parameters to quantization centroids within or towards adjacent bins, hence re-distributing weights in a more compact space. Consequently, the further apart bins are, the harder the optimization problem. We further speculate that it is slower to optimize for lower precisions (INT4 vs. FP32), as the gap between two adjacent representable numbers is much larger. Indeed, it can be observed from Figure 2 that the loss is higher for lower precisions; thus, outliers appear gradually. To address this issue, we take inspiration from the Curriculum Learning [17] literature: we progressively increase the complexity of the task during the optimization by gradually lowering the weights' precision while approximating the full-precision model's output. This is achieved by two key designs: first, FraQAT leverages weights from pre-trained models; second, FraQAT continuously steps between discrete quantization ranges to exploit the fact that Eqs. (1)-(2) are purely a software construct, hence it is possible to span any continuous, not just discrete, precision b ∈ [1, 32] ⊂ R.
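For concreteness, a minimal PyTorch sketch of the narrow- and wide-range quantizers of Eqs. (1)-(2), reflecting our reading of the formulas (with explicit round/floor and clamping), not the authors' released code:

```python
import torch

def quantize_symmetric(w: torch.Tensor, b: int) -> torch.Tensor:
    # Narrow-range symmetric quantization, Eq. (1): W_b = S(W)_b * Q(W)_b.
    qmax = 2 ** (b - 1) - 1
    s = w.abs().max() / qmax                                    # S(W)_b
    q = torch.clamp(torch.round(w / s), -(2 ** (b - 1)), qmax)  # Q(W)_b
    return s * q

def quantize_affine(w: torch.Tensor, b: int) -> torch.Tensor:
    # Wide-range affine quantization, Eq. (2); the clamp guards the W = w_max endpoint.
    wmin, wmax = w.min(), w.max()
    s = (wmax - wmin) / 2 ** b                                  # S(W)_b
    q = torch.clamp(torch.floor((w - wmin) / s), 0, 2 ** b - 1)
    return s * q + wmin

w = torch.randn(64, 64)
for b in (8, 4):
    err = (w - quantize_symmetric(w, b)).abs().max()
    print(f"b={b}: max round-trip error {err:.4f}")  # grows as b shrinks, cf. Eq. (3)
```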
From a quantization perspective, this implies outliers incrementally affect the student model. Figure 3: Classic QAT first computes the loss computed at the lower precision (◦), then propagates it back to the original precision and optimize the weights (→). This results in coarse and noisy gradients. Fractional Quantization Aware Training rely on intermediate precisions (from INT8 to INT4 as training progresses) to incrementally adjust to weights distributions. Consequently, during training Parameters smoothly shift between bins (•) thanks to the finer gradients from in intermediate precision. discrete quantization ranges to exploit the fact that Eq. (1)–(2) are purely a software construct, hence, it is possible to span any continuous — not just discrete – precision b ∈[32; 1] ∈R. Coupled together, these concepts establish FraQAT as a faster and higher quality QAT scheme dubbed Fractional Quantization Aware Training (FraQAT). Given a model, FraQAT progressively lowers the precision, first coarsely between FP32 and INT8 and then finely from INT8 to INT4, stepping through intermediate fractional bits during training as depicted in Figure 3 and in Algorithm 1. As the training progresses, outliers gradually appear, as shown in Figure 2, and are addressed. Furthermore, by optimizing at fractional bit precision in a curriculum fashion, FraQAT allows the weights to move to stable configurations, yielding higher quality samples, and reducing training time. Finally, throughout the entire training process FraQAT keeps all activation quantization constant (INT8). As training progresses, this progressive lowering of precision, smoothly moves weights distribution and thus facilitates quantization (c.f., Figure 3). Note that it is possible to even set b = 5.5. Although half-bits precision have no meaning, in practice they bridge the gap of range of representable numbers between two precisions: INT6 ∈[−32; 31], INT5.5 ∈[−22; 21], INT5 ∈[−16; 15]. In other words, half-bits precisions reduce the distance between adjacent bins, speeding up convergence without ad-hoc hyper-parameters, such as learning rate. 4 The proposed Fractional Quantization-Aware-Training (FraQAT) approach is generally applicable to any model and quantization level. Given the wide-spread usage of DiT and MM-DiT blocks in SOTA T2I models, we focus the presentation on DiT models. Since model size limits must first be met for any on-device placement, FraQAT quantizes linear layers as they contain the bulk of the parameters of DiT models (99.9%). In particular, FraQAT targets the most aggressive W4A8 quantization, as it allows for a wider range of models to fit edge accelerators with the lowest generation latency. Nonetheless, the proposed technique allows to target any precision. 3 Related works Large diffusion models are the de-facto framework for image generation [18, 19, 20, 21]. On the other hand, Large Language Models (LLMs) shows human-like abilities with text [4, 22, 23]. However quality and diversity comes at a cost: these models have a huge amount of parameters and cannot be hosted on an on-device NPU without some form of quantization. Tackling computational complexity Diffusion models’ computational complexity has two major sources: the amount of denoising passes and the conditioning mechanism. The former issue can be addressed by distilling the model to few or a single pass [24, 25], while the latter by modeling the latent-noise space [26] to decrease the number of function evaluations. 
Despite the success of these approaches, a major bottleneck remains: the memory required for inference. Quantization aim to preserve the original model’s quality when moving to lower precision – thus saving memory and enabling deployment. Quantization-Aware-Training QAT methods optimize model’s weights at lower precision [15, 27] aiming to recover the original performance. Early approaches [15, 27] study QAT on ResNet for classification: starting from low bit precision – b = 2 or 4 – the weights quantization is progressively reduced [27] or selected at random[15]. Although [15, 27] closely relate to FraQAT, they (i) focus on classification networks, (ii) ignore the gap with full precision models and the hierarchal nature of different precisions by starting from a low-bit quantization, (iii) aim to get models at different precisions. In parallel, Fracbits [28] introduces bit-width optimization by relying on a non-standard quantization formula for fractional bits. Bit-width are regularized to achieve the desired precision, followed by a binary search and fine tuning process to finalize weights. However, this procedure focuses on average bit length across the layers, obtaining lower bits in some layers at the cost of higher bits in other layers, which may not map to readily available hardware. Finally, as [15, 27], Fracbits focuses on classification rather than generative tasks. More recently, MatryoshkaQAT [14] exploits the nested structure of a number’s byte representation to encode LLM’s weights at different precision – 8, 4, and 2. The joint training at the three precisions result in a multi-precision model. Parallel to this work, Liu et al. [29] extend [30], and discover that model quantized to lower than 4 bits develop a different representation from the original models. Finally, [31] based on the Teaching Assistant distillation framework [32], quantize LLM models to Algorithm 1 Fractional Quantization Aware Training Input: Pre-trained model MW 32A32, dataset D, loss function L, quantization schedule B (e.g., {8, 5, 4.5, 4}), optimizer O Output: Quantized model MW 4A8 1: MW bA8 ←MW 32A32 2: for b ∈B do 3: MW bA8 ←QUANTIZELINEARLAYER(MW bA8, WbA8) 4: for batch ∈D do 5: OW bA8 ←FORWARD(MW bA8, batch) 6: OW 32A32 ←FORWARD(MW 32A32, batch) 7: l ←L(stop grad(OW 32A32), OW bA8) 8: OPTIMIZE(O, l, MW bA8) 9: end for 10: end for 11: return MW bA8 5 W1A1. Similar to our work, the authors use a progressive strategy, however limited to W1A4 (to W1A2) to W1A1, where intermediate models (W1A2) models are used as teachers. Combined with a series of techniques (e.g., gradient clipping, elastic binarization, etc) to stabilize the optimization process the authors achieve a binary quantized model. Since the quantization is binary, the model is not deployable to edge-devices. In this work, we show progressive quantization is enough to quantize a model that can be deployed on edge-devices. Related to diffusion models, Bitfusion [16] combines different QAT techniques, such as distillation and fine tuning, to convert SD1.5 [19] to 1.99 bits. Following a similar trend, Wang et al. in [33] selectively fine tune SD1.5 to handle activation distribution. BinaryDM [34] takes model quantization one step further by applying a multi-stage QAT approach to quantization. Notwithstanding these impressive results, none of these techniques showcase low-bit quantization of large scale DiT models such as SD3.5-M [1] (2.2B) or FLUX.1-schnell [3] (12B). Ours is the first QAT approach applied to such models. 
Post-Training Quantization Among the recent seminal works on LLMs, SmoothQuant [35] proposes a PTQ approach that injects a smoothing factor into linear layers to reduce the impact of outliers in LLMs. AWQ [36] and MobileQuant [37] extend this approach to lower the precision to W4A8, thus enabling an LLM to run on-device. These works have been extended to DiT models with a specific focus on timesteps. PTQ4DiT [38] builds a calibration dataset by sampling timesteps before quantizing the diffusion model. DiTAS [39] proposes a temporal-aggregated smoothing technique combined with LoRA and a grid search to reduce the quantization errors of small DiT networks with W4A8 quantization. QuEST [33] achieves W4A4 quantization through layer-specific (PTQ) fine-tuning. Q-DiT [40], inspired by [36, 41, 37], combines a fine-grained group quantization with a novel automatic allocation algorithm to account for the weights' spatial variance. Most recently, SVDQuant [12] and FBQuant [42] have shown impressive preservation of image generation quality when quantizing FLUX.1-schnell [3] to W4A16. The authors rely on a low-rank approximation of the original weights and a residual branch to absorb outliers.

4 Experiments

Models We focus our evaluation on recent text-to-image (T2I) models, since there is increasing interest in lowering their computational requirements due to their large number of parameters. In particular, we assess the soundness of the proposed approach on 4 different diffusion models: SD3.5-Medium [1], Sana [10], PixArt-Σ [43], and FLUX.1-schnell [3]. These models span a wide range of parameter counts (0.6B-12B) and architectural innovations (linear and non-linear attention, DiT, MM-DiT, etc.). In all our experiments, we start from a pre-trained W32A32 model and reduce it to W4A8 through FraQAT. We bootstrap the student at INT8 and optimize it to replicate its FP32 counterpart. This initialization allows FraQAT to start with a minimal gap between teacher (FP32) and student (INT8). After T epochs we lower the precision of the model (the number of bits) and continue the optimization. This procedure is repeated until the precision reaches 4 bits. Since we apply a fake quantization process, we can emulate arbitrary precisions that have no hardware support, e.g., INT4.5. Unless stated otherwise, all our experiments follow the same progression: 8 → 7 → 6 → 5.5 → 5 → 4.75 → 4.5 → 4.25 → 4, targeting linear layers. Further hyper-parameters are detailed in the appendix. In all cases we distill a W4A8 model through a knowledge-distillation loss, using dynamic quantization. Note that the proposed approach is applicable to any quantization precision, e.g., W2A8, and to static quantization.

Baselines We compare the proposed approach with state-of-the-art PTQ techniques: DiTAS [39] (W4A8) and SVDQuant [12] (W4A16). In both cases, we use the publicly available code and train (calibrate) the model on the train dataset (see below). To further prove the soundness of FraQAT, we implement a vanilla QAT (vQAT) approach and an SVDQuant-like QAT (SVDQAT). In vQAT, we apply W4A8 quantization to all linear layers and optimize them with the same loss as FraQAT. In SVDQAT, we instead inject a LoRA-like layer into all linear layers, as in [12], and optimize both the low-rank and the residual branch, almost doubling the number of parameters. Finally, we report the results of naive quantization (Dynamic Q.) to the desired precision through torchao (https://github.com/pytorch/ao).
Table 1: Quantitative evaluation: we evaluate FraQAT using a fractional quantization schedule on the PixArt Evaluation dataset [43] and the MidJourney HQ Evaluation dataset [44], measuring FID and CLIP-FID w.r.t. the original model, and ImageReward (IR) [45]. Per model, cells read FID↓ / CLIP-FID↓ / IR↑.

PixArt-Σ evaluation dataset
Method      Precision   SD3.5 Medium           Sana 600M             PixArt-Σ               Flux-schnell
Dynamic Q.  W4A8        9.36 / 2.08 / 0.56     2.22 / 0.24 / 0.57    13.35 / 6.19 / 0.35    8.17 / 1.13 / -0.73
DiTAS       W4A8        27.93 / 13.77 / 0.41   12.87 / 4.58 / 0.62   7.30 / 3.95 / 0.84     -
SVDQuant    W4A16       14.42 / 3.14 / 0.66    2.43 / 0.24 / 0.60    6.80 / 2.02 / 0.79     2.26 / 0.36 / 0.84
SVDQAT      W4A8        2.57 / 0.28 / 0.80     1.93 / 0.13 / 0.48    5.38 / 1.48 / 0.76     -
vQAT        W4A8        2.67 / 0.31 / 0.78     2.13 / 0.16 / 0.45    7.00 / 2.52 / 0.79     3.40 / 0.66 / 0.87
FraQAT      W4A8        2.54 / 0.27 / 0.82     2.17 / 0.19 / 0.48    4.48 / 1.07 / 0.79     2.55 / 0.30 / 0.86

MJHQ evaluation dataset
Method      Precision   SD3.5 Medium           Sana 600M             PixArt-Σ               Flux-schnell
Dynamic Q.  W4A8        10.29 / 2.11 / 0.65    2.40 / 0.28 / 0.63    15.04 / 5.55 / 0.44    8.66 / 1.24 / -0.90
DiTAS       W4A8        32.04 / 14.06 / 0.41   12.91 / 5.59 / 0.68   8.63 / 4.07 / 1.04     -
SVDQuant    W4A16       15.10 / 3.06 / 0.78    2.48 / 0.25 / 0.62    6.95 / 1.71 / 0.99     2.41 / 0.41 / 0.96
SVDQAT      W4A8        2.85 / 0.32 / 0.91     2.04 / 0.16 / 0.53    5.83 / 1.44 / 0.96     -
vQAT        W4A8        3.01 / 0.37 / 0.89     2.13 / 0.20 / 0.47    7.38 / 2.12 / 0.99     3.56 / 0.73 / 0.99
FraQAT      W4A8        2.78 / 0.32 / 0.96     2.34 / 0.24 / 0.50    4.95 / 1.05 / 0.97     2.55 / 0.39 / 0.99

Datasets All models in all our experiments are trained (calibrated) on the YE-POP dataset (https://huggingface.co/datasets/Ejafa/ye-pop). We split it into training (97.5%) and validation (2.5%). The quantized models are then evaluated on two different datasets: the PixArt-Σ Evaluation dataset [43] (https://huggingface.co/datasets/PixArt-alpha/PixArt-Eval-30K) and the MidJourney HQ Evaluation dataset [44]. In all cases, during training and evaluation, we generate 512 × 512 images.

Metrics We quantitatively evaluate the proposed technique with a variety of metrics measuring image quality and feature distributions. In particular, we measure image quality with ImageReward (IR) [45], and measure the feature-distribution disparity between the samples generated by the quantized model and the original one with FID [46] and CLIP-FID [47]. This choice allows us to quantify the similarity between the original model and its quantized version: similar images have similar features, and thus a lower FID score.
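For reference, below is a minimal sketch of the Fréchet distance underlying FID [46] and CLIP-FID [47] as used here, i.e., computed between features of the quantized model's samples and the original model's samples. The feature extractor (an Inception or CLIP image encoder) and array shapes are our assumptions, not the paper's exact evaluation code.

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_ref: np.ndarray, feats_gen: np.ndarray) -> float:
    """Frechet distance between Gaussian fits of two feature sets of shape
    (N, D): ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^{1/2})."""
    mu1, mu2 = feats_ref.mean(axis=0), feats_gen.mean(axis=0)
    s1 = np.cov(feats_ref, rowvar=False)
    s2 = np.cov(feats_gen, rowvar=False)
    covmean, _ = linalg.sqrtm(s1 @ s2, disp=False)  # matrix square root
    covmean = covmean.real                          # drop numerical imaginary parts
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(s1 + s2 - 2.0 * covmean))
```

Because the reference features come from images generated by the original model rather than natural images, a low score indicates the quantized model reproduces the original model's output distribution.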
4.1 Quantitative evaluation

Table 1 shows a quantitative comparison of FraQAT across the four aforementioned models, two SOTA QAT approaches for direct comparison, and three PTQ techniques for overall completeness across all quantization approaches. All combinations are evaluated on two different test datasets. Due to memory requirements, we are unable to apply some techniques to FLUX.1-schnell [3] (a 12B model). SVDQuant was developed and optimized for Sana, PixArt-Σ, and FLUX.1-schnell, and thus achieves lower performance on SD3.5-Medium, as shown in Table 1, yielding particularly worse FID and CLIP-FID metrics on both test datasets. Table 1 reveals mixed results for Dynamic Quantization and DiTAS on both test datasets. Specifically, DiTAS outperforms Dynamic Quantization on PixArt-Σ but proves overall worse on SD3.5-Medium and Sana. These models have different architectures, namely DiT, MM-DiT, and linear attention, which indicates that DiTAS can be particularly sensitive to the model family. As for the other QAT approaches, our SVDQuant-like QAT (SVDQAT) consistently outperforms vanilla QAT (vQAT) across the test datasets. Arguably, its increased number of parameters (LoRA and residual branch) better copes with the lower precision. Our motivation for establishing this strong QAT baseline was the success of SVDQuant as a PTQ approach, alongside its need for the higher precision W4A16, which hinders latency on on-device accelerators. Finally, Table 1 shows FraQAT outperforms even the strongest QAT baseline we developed, namely SVDQAT, with overall higher gains for SD3.5-Medium and PixArt-Σ on both test datasets.

Figure 4: Qualitative comparison across SD3.5-M, Sana 600M, PixArt-Σ, and Flux-schnell of (a) the original model, (b) SVDQuant, (c) vQAT, and (d) FraQAT (ours): FraQAT (d) generates images similar to the original model (a). Prompts are from the MJHQ dataset [44].

Figure 5: Outliers: the distribution of activation outliers (% of outliers after the norm1, attn, and ff layers) varies across models. SD3.5-M (left) experiences most of its outliers right after Feed-Forward layers, while for PixArt-Σ most outliers are in Attention layers.

Table 2: Outlier analysis: we optimize specific layer types while the rest of the model is frozen and quantized (W4A8). FID and CLIP-FID are computed on the PixArt-Σ [43] evaluation dataset.

Model       Layer   FID ↓   CLIP-FID ↓
SD3.5-M     FF      2.23    0.23
            Attn    2.32    0.24
            TF      2.49    0.28
            All     2.54    0.22
Sana 600M   FF      2.18    0.17
            Attn    2.10    0.16
            TF      2.13    0.16
            All     2.17    0.19
PixArt-Σ    FF      5.34    1.55
            Attn    6.48    2.23
            TF      4.40    1.13
            All     4.48    1.07

4.2 Qualitative evaluation

To complement the quantitative evaluation of the preceding section, Figure 4 depicts, for each of the four models considered, the original model, one PTQ representative (SVDQuant), one QAT alternative (vQAT), and the proposed QAT approach (FraQAT). We selected SVDQuant over DiTAS and Dynamic Quantization given its popularity. We likewise focus on vQAT rather than our other QAT baseline (SVDQAT) given its overall popularity. Note that the images generated in each row of Figure 4 share the same seed and prompt. As expected from a PTQ approach, SVDQuant under-performs when generating certain high-frequency image details. This is especially visible when multiple faces are present in a generated image, as shown in the first row. QAT improves high-frequency image details, but generates significantly different images from the original model for the same prompt and seed. Finally, FraQAT both preserves high-frequency details and generates the images closest to the original model across all baselines.

4.3 Outlier analysis

Activation outliers disrupt the quantization process by introducing artifacts or biases. By analyzing these outliers across different models, we discover that different models produce outliers in different layers. For example, in SD3.5-M outliers emerge after Feed-Forward (FF) layers, while in PixArt-Σ outliers arise from Attention (Attn) layers; see Figure 5. By selectively training specific layers, we can reduce FraQAT's computational demand while still obtaining a deployable model. In this vein, we analyze the impact of selective training, i.e., we optimize only certain layers while the rest of the network is frozen and quantized (W4A8). In particular, we focus on attention layers (Attn), feed-forward layers (FF), and transformer blocks (TF), and compare them with training the entire network (All). The quantitative results in Table 2 show that there is no clear winner, i.e., no single layer type works best for all architectures.
Different models benefit from optimizing different layers. Nevertheless, we recommend starting by quantizing Transformer Blocks (TF), as this reduces memory requirements, lowers computational demands, and addresses all outliers.

4.4 On-device model deployment

To demonstrate the feasibility of deploying models quantized with FraQAT on edge devices, we quantized Sana 600M [10] to W4A8 and deployed it on the Samsung S25U, which runs on the Qualcomm SM8750-AB Snapdragon 8 Elite Hexagon Tensor Processor (HTP). Compared to CPUs and GPUs, integer accelerators support a limited range of precisions and exclusively support static quantization for both weights and activations. Please refer to Section 2.1 for a discussion of static and dynamic quantization. In contrast to the baselines used in Section 4.1, FraQAT supports both the dynamic and static quantization paradigms. To apply FraQAT to Sana 600M [10] with static weight and activation quantization, we pre-compute scale and offset through a statistical analysis of the features: we select 100 random samples and pass them through the DiT. Feature values for every layer are recorded and used to compute their mean and standard deviation. Following the literature [48], we use a 3-standard-deviation range for inliers. Finally, we use these scales and offsets during the QAT, while the overall training procedure does not change from the one discussed in Section 4.1. All linear layers of the quantized model run at precision W4A8 except for the last layer, which runs at W4A16. This is a good compromise that preserves quality without impacting latency. Overall, the model has a latency of 66 ms per forward step, while the same model running at W4A16 (the bit-width supported by SVDQuant) has a latency of 95 ms, a 30.5% latency improvement. Finally, to assess the on-device quality, we generate samples and compare them with those from the original model in Figure 6. The quantized model produces high-quality pictures that resemble those of the original model and its GPU version.

Figure 6: On-device generation: we generate images on a mobile phone (c) and compare the results with the samples generated on GPU by the original Sana 600M model (a) and the quantized model (b).
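As an illustration of the static-calibration step above, here is a minimal sketch of pre-computing per-layer activation ranges from recorded statistics with a 3-std inlier range, from which (scale, offset) pairs in the style of Eq. (2) follow. The hook mechanics and helper name are our assumptions.

```python
import torch

@torch.no_grad()
def calibrate_static_ranges(model, samples, num_std: float = 3.0, bits: int = 8):
    """Record per-layer activation statistics on ~100 calibration samples and
    derive static (scale, offset) pairs from a mean +/- 3*std inlier range."""
    stats, hooks = {}, []

    def make_hook(name):
        def hook(module, inp, out):
            stats.setdefault(name, []).append(out.detach().flatten())
        return hook

    for name, m in model.named_modules():
        if isinstance(m, torch.nn.Linear):
            hooks.append(m.register_forward_hook(make_hook(name)))
    for x in samples:                       # e.g., 100 random samples
        model(x)
    for h in hooks:
        h.remove()

    ranges = {}
    for name, feats in stats.items():
        f = torch.cat(feats)
        lo = f.mean() - num_std * f.std()   # clip outliers beyond 3 std
        hi = f.mean() + num_std * f.std()
        scale = (hi - lo) / (2 ** bits)     # S, Eq. (2) with a clipped range
        ranges[name] = (scale.item(), lo.item())  # (scale, offset)
    return ranges
```

These frozen (scale, offset) pairs then replace the per-sample min/max of dynamic quantization during QAT, matching what integer-only accelerators such as the HTP expect.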
5 Limitations and future work

The proposed approach is a step forward compared to SOTA Quantization-Aware-Training (QAT). Like most, if not all, QAT techniques, FraQAT yields higher quality but is computationally more expensive than Post-Training Quantization (PTQ). Compared with multi-precision SOTA QAT approaches for LLMs, such as MatryoshkaQAT, FraQAT's quantized model is tailored to a single bit precision, and we leave multi-precision support for future work. Furthermore, the intermediate precision levels are hand-picked; in the future, we plan to design an algorithm to select the most impactful precisions. The proposed training scheme may also benefit from regularizers such as weight decay and data augmentation. For example, preliminary regularization tests on Sana 0.6B [10] show that weight decay boosts performance by ∼10%. A proper investigation of regularization and its impact on training is left to future work. Finally, FraQAT's networks are optimized using knowledge distillation only, but different losses, such as feature and task losses, may further boost image generation quality.

6 Conclusions

FraQAT is a novel Quantization-Aware-Training technique that exploits fractional bits while progressively reducing the parameter precision during the quantization process. Thanks to this curriculum learning strategy, we address the outliers, as they arise, at different precisions, achieving more stable and faster training. The proposed method is evaluated on a variety of state-of-the-art DiT and MM-DiT models. We show that, both qualitatively and quantitatively, the quantized models achieve superior performance compared to state-of-the-art QAT approaches. Such improved quality, deployed on-device, may boost mobile users' productivity, preserve their privacy, and generate personalized content for users.

References

[1] S. AI, "Stable diffusion 3.5." https://stability.ai/news/introducing-stable-diffusion-3-5, 2024.
[2] E. Xie, J. Chen, Y. Zhao, J. Yu, L. Zhu, C. Wu, Y. Lin, Z. Zhang, M. Li, J. Chen, H. Cai, B. Liu, D. Zhou, and S. Han, "SANA 1.5: Efficient Scaling of Training-Time and Inference-Time Compute in Linear Diffusion Transformer," arXiv:2501.18427, 2025.
[3] B. F. Labs, "Flux." https://github.com/black-forest-labs/flux, 2024.
[4] A. Grattafiori, A. Dubey, A. Jauhri, A. Pandey, A. Kadian, A. Al-Dahle, A. Letman, A. Mathur, A. Schelten, A. Vaughan, et al., "The llama 3 herd of models," arXiv:2407.21783, 2024.
[5] G. Team, A. Kamath, J. Ferret, S. Pathak, N. Vieillard, R. Merhej, S. Perrin, T. Matejovicova, A. Ramé, M. Rivière, et al., "Gemma 3 technical report," arXiv:2503.19786, 2025.
[6] Qualcomm, "Snapdragon 8 Elite Mobile Platform." https://docs.qualcomm.com/bundle/publicresource/87-83196-1_REV_D_Snapdragon_8_Elite_Mobile_Platform_Product_Brief.pdf.
[7] Intel, "Processors with Performance-Cores (P-Cores)." https://www.intel.com/content/www/us/en/products/details/processors/xeon/xeon6-p-cores.html.
[8] AMD, "Ryzen AI 300 Series Processors." https://www.amd.com/content/dam/amd/en/documents/partner-hub/ryzen/amd-ryzen-ai-300-series-vs-qualcomm-snapdragon-x-elite-deck.pdf.
[9] NVIDIA, "NVIDIA HGX Platform." https://www.nvidia.com/en-gb/data-center/hgx.
[10] E. Xie, J. Chen, J. Chen, H. Cai, H. Tang, Y. Lin, Z. Zhang, M. Li, L. Zhu, Y. Lu, and S. Han, "SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformers," arXiv:2410.10629, 2024.
[11] J. Chen, S. Xue, Y. Zhao, J. Yu, S. Paul, J. Chen, H. Cai, E. Xie, and S. Han, "SANA-Sprint: One-Step Diffusion with Continuous-Time Consistency Distillation," arXiv:2503.09641, 2025.
[12] M. Li, Y. Lin, Z. Zhang, T. Cai, X. Li, J. Guo, E. Xie, C. Meng, J.-Y. Zhu, and S. Han, "SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models," arXiv:2411.05007, 2024.
[13] Z. Zhang, Y. Gao, J. Fan, Z. Zhao, Y. Yang, and S. Yan, "SelectQ: Calibration data selection for post-training quantization," Machine Intelligence Research, pp. 1-12, 2025.
[14] P. Nair, P. Datta, J. Dean, P. Jain, and A. Kusupati, "Matryoshka Quantization," arXiv:2502.06786, 2025.
[15] A. Bulat and G. Tzimiropoulos, "Bit-Mixer: Mixed-precision networks with runtime bit-width selection," in ICCV, pp. 5188-5197, 2021.
[16] Y. Sui, Y. Li, A. Kag, Y. Idelbayev, J. Cao, J. Hu, D. Sagar, B. Yuan, S. Tulyakov, and J. Ren, "Bitsfusion: 1.99 bits weight quantization of diffusion model," arXiv:2406.04333, 2024.
[17] Y. Bengio, J. Louradour, R. Collobert, and J. Weston, "Curriculum learning," in Proceedings of the 26th Annual International Conference on Machine Learning, pp. 41-48, 2009.
[18] P. Dhariwal and A. Nichol, "Diffusion models beat GANs on image synthesis," Advances in Neural Information Processing Systems, vol. 34, pp. 8780-8794, 2021.
[19] R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer, "High-Resolution Image Synthesis with Latent Diffusion Models," in CVPR, pp. 10684-10695, 2022.
[20] Y. Lipman, R. T. Q. Chen, H. Ben-Hamu, M. Nickel, and M. Le, "Flow Matching for Generative Modeling," arXiv:2210.02747, 2023.
[21] I. Gat, T. Remez, N. Shaul, F. Kreuk, R. T. Chen, G. Synnaeve, Y. Adi, and Y. Lipman, "Discrete flow matching," Advances in Neural Information Processing Systems, vol. 37, pp. 133345-133385, 2024.
[22] J. Achiam, S. Adler, S. Agarwal, L. Ahmad, I. Akkaya, F. L. Aleman, D. Almeida, J. Altenschmidt, S. Altman, S. Anadkat, et al., "GPT-4 technical report," arXiv:2303.08774, 2023.
[23] G. Team, R. Anil, S. Borgeaud, J.-B. Alayrac, J. Yu, R. Soricut, J. Schalkwyk, A. M. Dai, A. Hauth, K. Millican, et al., "Gemini: a family of highly capable multimodal models," arXiv:2312.11805, 2023.
[24] T. Salimans and J. Ho, "Progressive distillation for fast sampling of diffusion models," arXiv:2202.00512, 2022.
[25] M. Noroozi, I. Hadji, B. Martinez, A. Bulat, and G. Tzimiropoulos, "You Only Need One Step: Fast Super-Resolution with Stable Diffusion via Scale Distillation," in ECCV, pp. 145-161, Springer, 2025.
[26] M. Noroozi, A. G. Ramos, L. Morreale, R. Chavhan, M. Chadwick, A. Mehrotra, and S. Bhattacharya, "Guidance free image editing via explicit conditioning," arXiv:2503.17593, 2025.
[27] Q. Jin, L. Yang, and Z. Liao, "AdaBits: Neural Network Quantization with Adaptive Bit-Widths," in CVPR, pp. 2146-2156, 2020.
[28] L. Yang and Q. Jin, "Fracbits: Mixed precision quantization via fractional bit-widths," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 10612-10620, 2021.
[29] Z. Liu, C. Zhao, H. Huang, S. Chen, J. Zhang, J. Zhao, S. Roy, L. Jin, Y. Xiong, Y. Shi, et al., "ParetoQ: Scaling Laws in Extremely Low-bit LLM Quantization," arXiv:2502.02631, 2025.
[30] M. Nagel, M. Fournarakis, Y. Bondarenko, and T. Blankevoort, "Overcoming Oscillations in Quantization-Aware Training," in ICML, pp. 16318-16330, PMLR, 2022.
[31] Z. Liu, B. Oguz, A. Pappu, L. Xiao, S. Yih, M. Li, R. Krishnamoorthi, and Y. Mehdad, "BiT: Robustly binarized multi-distilled transformer," Advances in Neural Information Processing Systems, vol. 35, pp. 14303-14316, 2022.
[32] S. I. Mirzadeh, M. Farajtabar, A. Li, N. Levine, A. Matsukawa, and H. Ghasemzadeh, "Improved knowledge distillation via teacher assistant," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 5191-5198, 2020.
[33] H. Wang, Y. Shang, Z. Yuan, J. Wu, J. Yan, and Y. Yan, "QuEST: Low-bit Diffusion Model Quantization via Efficient Selective Finetuning," arXiv:2402.03666, 2024.
[34] X. Zheng, X. Liu, H. Qin, X. Ma, M. Zhang, H. Hao, J. Wang, Z. Zhao, J. Guo, and M. Magno, "BinaryDM: Accurate Weight Binarization for Efficient Diffusion Models," arXiv:2404.05662, 2024.
[35] G. Xiao, J. Lin, M. Seznec, H. Wu, J. Demouth, and S. Han, "SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models," arXiv:2211.10438, 2024.
[36] J. Lin, J. Tang, H. Tang, S. Yang, W.-M. Chen, W.-C. Wang, G. Xiao, X. Dang, C. Gan, and S. Han, "AWQ: Activation-aware Weight Quantization for On-device LLM Compression and Acceleration," Proceedings of Machine Learning and Systems, vol. 6, pp. 87-100, 2024.
[37] F. Tan, R. Lee, Ł. Dudziak, S. X. Hu, S. Bhattacharya, T. Hospedales, G. Tzimiropoulos, and B. Martinez, "MobileQuant: Mobile-friendly Quantization for On-device Language Models," arXiv:2408.13933, 2024.
[38] J. Wu, H. Wang, Y. Shang, M. Shah, and Y. Yan, "PTQ4DiT: Post-training Quantization for Diffusion Transformers," arXiv:2405.16005, 2024.
[39] Z. Dong and S. Q. Zhang, "DiTAS: Quantizing Diffusion Transformers via Enhanced Activation Smoothing," arXiv:2409.07756, 2024.
[40] L. Chen, Y. Meng, C. Tang, X. Ma, J. Jiang, X. Wang, Z. Wang, and W. Zhu, "Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers," arXiv:2406.17343, 2024.
[41] Y. Zhao, C.-Y. Lin, K. Zhu, Z. Ye, L. Chen, S. Zheng, L. Ceze, A. Krishnamurthy, T. Chen, and B. Kasikci, "Atom: Low-bit quantization for efficient and accurate LLM serving," Proceedings of Machine Learning and Systems, vol. 6, pp. 196-209, 2024.
[42] Y. Liu, H. Fang, L. He, R. Zhang, Y. Bai, Y. Du, and L. Du, "FBQuant: FeedBack Quantization for Large Language Models," arXiv:2501.16385, 2025.
[43] J. Chen, C. Ge, E. Xie, Y. Wu, L. Yao, X. Ren, Z. Wang, P. Luo, H. Lu, and Z. Li, "PixArt-Σ: Weak-to-strong training of diffusion transformer for 4K text-to-image generation," in European Conference on Computer Vision, pp. 74-91, Springer, 2024.
[44] D. Li, A. Kamko, E. Akhgari, A. Sabet, L. Xu, and S. Doshi, "Playground v2.5: Three insights towards enhancing aesthetic quality in text-to-image generation," arXiv:2402.17245, 2024.
[45] J. Xu, X. Liu, Y. Wu, Y. Tong, Q. Li, M. Ding, J. Tang, and Y. Dong, "ImageReward: Learning and evaluating human preferences for text-to-image generation," Advances in Neural Information Processing Systems, vol. 36, pp. 15903-15935, 2023.
[46] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, "Rethinking the inception architecture for computer vision," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2818-2826, 2016.
[47] T. Kynkäänniemi, T. Karras, M. Aittala, T. Aila, and J. Lehtinen, "The role of ImageNet classes in Fréchet inception distance," arXiv:2203.06026, 2022.
[48] R. Wang, Y. Gong, X. Liu, G. Zhao, Z. Yang, B. Guo, Z. Zha, and P. Cheng, "Optimizing large language model training using FP4 quantization," arXiv:2501.17116, 2025.
[49] Z. Lin, D. Pathak, B. Li, J. Li, X. Xia, G. Neubig, P. Zhang, and D. Ramanan, "Evaluating text-to-visual generation with image-to-text generation," arXiv:2404.01291, 2024.
[50] J. Wang, K. C. Chan, and C. C. Loy, "Exploring CLIP for assessing the look and feel of images," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 37, pp. 2555-2563, 2023.
[51] G. Team, "Gemma," 2024.
[52] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu, "Exploring the limits of transfer learning with a unified text-to-text transformer," Journal of Machine Learning Research, vol. 21, no. 140, pp. 1-67, 2020.
[53] C. Clark, K. Lee, M.-W. Chang, T. Kwiatkowski, M. Collins, and K. Toutanova, "BoolQ: Exploring the surprising difficulty of natural yes/no questions," arXiv:1905.10044, 2019.
[54] A. Talmor, J. Herzig, N. Lourie, and J. Berant, "CommonsenseQA: A question answering challenge targeting commonsense knowledge," arXiv:1811.00937, 2018.

A Experimental evaluation

A.1 Baselines

For the state-of-the-art baselines we rely on the code released by the authors (SVDQuant: https://github.com/mit-han-lab/deepcompressor; DiTAS: https://github.com/DZY122/DiTAS) and use the default parameters. For all approaches we use pre-trained models with the default resolution of 512 × 512.
Where needed, we change the baseline configurations to use the same model.

A.2 Hyper-parameters for QAT

We detail the hyper-parameters for all QAT experiments in Table 3. In all cases we rely on FusedAdam as the optimizer and optimize for 25 epochs. All experiments run on AMD MI300X GPUs and are implemented using PyTorch (https://pytorch.org/), Lightning (https://lightning.ai/docs/pytorch/stable/), and torchao (https://github.com/pytorch/ao), with seed 1234.

Table 3: Hyper-parameters: detailed hyper-parameters required to replicate all experiments (lr / batch size / low rank).

Method  Precision   SD3.5-M             Sana 600M           PixArt-Σ
SVDQAT  W4A8        10^-5 / 128 / 32    10^-6 / 128 / 16    10^-6 / 256 / 16
vQAT    W4A8        10^-5 / 256 / -     10^-6 / 128 / -     10^-6 / 128 / -
FraQAT  W4A8        10^-6 / 256 / -     10^-7 / 128 / -     10^-6 / 128 / -

For all FraQAT experiments, we follow the schedule highlighted in Table 4.

Table 4: Precision schedule: during training we progressively reduce the precision following the prescribed schedule.

Precision   8   7   6   5.5   5   4.75   4.5   4.25   4
# epochs    1   1   1   1     1   2      2     2      14

Experiments with the configuration highlighted above take on average 192 GPUh for Sana, 576 GPUh for PixArt-Σ, and 1008 GPUh for SD3.5-Medium.
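To tie Table 4 to the training loop, the following is a small sketch of how such a precision schedule could be expressed in code; the constant name and dict form are our illustrative assumptions.

```python
# Precision schedule of Table 4: bit-width -> number of epochs.
FRAQAT_SCHEDULE = {8: 1, 7: 1, 6: 1, 5.5: 1, 5: 1, 4.75: 2, 4.5: 2, 4.25: 2, 4: 14}

def schedule_steps():
    """Yield the bit-width to use at each epoch of training."""
    for bits, epochs in FRAQAT_SCHEDULE.items():
        for _ in range(epochs):
            yield bits

# Sanity check: the schedule spans the 25 epochs reported in Section A.2.
assert sum(FRAQAT_SCHEDULE.values()) == 25
```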
A.3 Qualitative evaluation

For additional qualitative evaluation on the MJHQ dataset [44], please see the HTML pages in the zip file.

A.4 Quantitative evaluation

Here we report additional evaluation of the proposed approach with a wider set of metrics. In particular, we rely on VQA [49] to measure the adherence of the generated samples to the input prompts, and we measure image quality with CLIP-IQA [50]. Table 5 shows FraQAT outperforms even the strongest QAT baseline we developed, namely SVDQAT, with overall higher gains for SD3.5-Medium and PixArt-Σ on both test datasets.

Table 5: Quantitative evaluation: we evaluate FraQAT using a fractional quantization schedule on the PixArt Evaluation dataset [43] and the MidJourney HQ Evaluation dataset [44], measuring FID and CLIP-FID w.r.t. the original model, CLIP-IQA [50], ImageReward (IR) [45], and VQA score [49]. Per model, cells read FID↓ / CLIP-FID↓ / CLIP-IQA↑ / IR↑ / VQA score↑.

PixArt-Σ evaluation dataset
Method      Precision   SD3.5 Medium                          Sana 600M                             PixArt-Σ                              Flux-schnell
Dynamic Q.  W4A8        9.36 / 2.08 / 0.44 / 0.56 / 0.84      2.22 / 0.24 / 0.46 / 0.57 / 0.82      13.35 / 6.19 / 0.44 / 0.35 / 0.82     8.17 / 1.13 / 0.43 / -0.73 / 0.77
DiTAS       W4A8        27.93 / 13.77 / 0.47 / 0.41 / 0.82    12.87 / 4.58 / 0.45 / 0.62 / 0.82     7.30 / 3.95 / 0.46 / 0.84 / 0.86      -
SVDQuant    W4A16       14.42 / 3.14 / 0.42 / 0.66 / 0.85     2.43 / 0.24 / 0.43 / 0.60 / 0.82      6.80 / 2.02 / 0.43 / 0.79 / 0.86      2.26 / 0.36 / 0.42 / 0.84 / 0.85
SVDQAT      W4A8        2.57 / 0.28 / 0.45 / 0.80 / 0.85      1.93 / 0.13 / 0.43 / 0.48 / 0.82      5.38 / 1.48 / 0.43 / 0.76 / 0.86      -
vQAT        W4A8        2.67 / 0.31 / 0.44 / 0.78 / 0.85      2.13 / 0.16 / 0.43 / 0.45 / 0.81      7.00 / 2.52 / 0.45 / 0.79 / 0.85      3.40 / 0.66 / 0.41 / 0.87 / 0.86
FraQAT      W4A8        2.54 / 0.27 / 0.45 / 0.82 / 0.86      2.17 / 0.19 / 0.42 / 0.48 / 0.82      4.48 / 1.07 / 0.45 / 0.79 / 0.86      2.55 / 0.30 / 0.41 / 0.86 / 0.85

MJHQ evaluation dataset
Method      Precision   SD3.5 Medium                          Sana 600M                             PixArt-Σ                              Flux-schnell
Dynamic Q.  W4A8        10.29 / 2.11 / 0.44 / 0.65 / 0.79     2.40 / 0.28 / 0.45 / 0.63 / 0.74      15.04 / 5.55 / 0.43 / 0.44 / 0.74     8.66 / 1.24 / 0.42 / -0.90 / 0.65
DiTAS       W4A8        32.04 / 14.06 / 0.47 / 0.41 / 0.73    12.91 / 5.59 / 0.45 / 0.68 / 0.75     8.63 / 4.07 / 0.46 / 1.04 / 0.80      -
SVDQuant    W4A16       15.10 / 3.06 / 0.42 / 0.78 / 0.78     2.48 / 0.25 / 0.42 / 0.62 / 0.75      6.95 / 1.71 / 0.43 / 0.99 / 0.80      2.41 / 0.41 / 0.42 / 0.96 / 0.79
SVDQAT      W4A8        2.85 / 0.32 / 0.45 / 0.91 / 0.80      2.04 / 0.16 / 0.43 / 0.53 / 0.74      5.83 / 1.44 / 0.43 / 0.96 / 0.81      -
vQAT        W4A8        3.01 / 0.37 / 0.44 / 0.89 / 0.80      2.13 / 0.20 / 0.43 / 0.47 / 0.74      7.38 / 2.12 / 0.44 / 0.99 / 0.80      3.56 / 0.73 / 0.41 / 0.99 / 0.80
FraQAT      W4A8        2.78 / 0.32 / 0.45 / 0.96 / 0.81      2.34 / 0.24 / 0.42 / 0.50 / 0.74      4.95 / 1.05 / 0.44 / 0.97 / 0.80      2.55 / 0.39 / 0.41 / 0.99 / 0.80

A.5 Quantization schedule

To validate the benefits of the fractional quantization schedule (Table 4), we compare it with its integer counterpart (8 → 7 → 6 → 5 → 4) and a simpler progressive schedule (16 → 8 → 4). For a fair comparison, all experiments have the same computational budget. We measure the validation loss across training and plot it in Figure 7. The integer and simple schedules perform comparably to each other. On the other hand, the fractional schedule consistently outperforms the two competitors during training, resulting in a noticeably lower validation loss.

Figure 7: Fractional schedule (validation loss vs. training epoch): we train SD3.5-M using a simple progressive schedule (green), an integer schedule (orange), and a fractional schedule (blue). The fractional schedule achieves a lower validation loss.

B Additional evaluation

B.1 Language Model

The proposed method is agnostic to the architecture and the application. We apply FraQAT to Gemma2 2B IT [51] (https://huggingface.co/google/gemma-2-2b-it). We start from the FP16 model (the original), then quantize it to 4 bits in the same fashion as the T2I models in the main paper, following the schedule of Section A.2. The quantized model (W4A8) is then compared with the original model. As the training set we rely on a subset of the C4 dataset [52]: we randomly pick 384K samples for training and 38.4K samples for validation. The model is evaluated in a zero-shot fashion on two datasets: BoolQ [53] (https://huggingface.co/datasets/google/boolq) and Commonsense QA [54] (https://huggingface.co/datasets/allenai/social_i_qa). Table 6 shows a minimal drop when FraQAT is applied to the Gemma2 2B IT model, proving that FraQAT can be applied to language models as well as vision models.

Table 6: Evaluation on language models: we apply FraQAT to Gemma2 2B IT [51] exactly as we did for the T2I models. The quantized model is evaluated on Common Sense QA, BoolQ, and COQA. The resulting model shows a minimal drop from the original language model.

Model       Precision   Common Sense QA ↑   BoolQ ↑       COQA ↑
Original    W16A32      0.70 ± 0.01         0.76 ± 0.01   0.66 ± 0.01
FraQAT      W4A8        0.69 ± 0.01         0.72 ± 0.01   0.70 ± 0.01
From a quantization perspective, this implies outliers incrementally affect the student model. Figure 3: Classic QAT first computes the loss computed at the lower precision (◦), then propagates it back to the original precision and optimize the weights (→). This results in coarse and noisy gradients. Fractional Quantization Aware Training rely on intermediate precisions (from INT8 to INT4 as training progresses) to incrementally adjust to weights distributions. Consequently, during training Parameters smoothly shift between bins (•) thanks to the finer gradients from in intermediate precision. discrete quantization ranges to exploit the fact that Eq. (1)-(2) are purely a software construct, hence, it is possible to span any continuous - not just discrete - precision b ∈[32; 1] ∈R. Coupled together, these concepts establish FraQAT as a faster and higher quality QAT scheme dubbed Fractional Quantization Aware Training (FraQAT). Given a model, FraQAT progressively lowers the precision, first coarsely between FP32 and INT8 and then finely from INT8 to INT4, stepping through intermediate fractional bits during training as depicted in Figure 3 and in Algorithm 1. As the training progresses, outliers gradually appear, as shown in Figure 2, and are addressed. Furthermore, by optimizing at fractional bit precision in a curriculum fashion, FraQAT allows the weights to move to stable configurations, yielding higher quality samples, and reducing training time. Finally, throughout the entire training process FraQAT keeps all activation quantization constant (INT8). As training progresses, this progressive lowering of precision, smoothly moves weights distribution and thus facilitates quantization (c.f., Figure 3). Note that it is possible to even set b = 5.5. Although half-bits precision have no meaning, in practice they bridge the gap of range of representable numbers between two precisions: INT6 ∈[-32; 31], INT5.5 ∈[-22; 21], INT5 ∈[-16; 15]. In other words, half-bits precisions reduce the distance between adjacent bins, speeding up convergence without ad-hoc hyper-parameters, such as learning rate. 4 The proposed Fractional Quantization-Aware-Training (FraQAT) approach is generally applicable to any model and quantization level. Given the wide-spread usage of DiT and MM-DiT blocks in SOTA T2I models, we focus the presentation on DiT models. Since model size limits must first be met for any on-device placement, FraQAT quantizes linear layers as they contain the bulk of the parameters of DiT models (99.9%). In particular, FraQAT targets the most aggressive W4A8 quantization, as it allows for a wider range of models to fit edge accelerators with the lowest generation latency. Nonetheless, the proposed technique allows to target any precision. 3 Related works Large diffusion models are the de-facto framework for image generation [18, 19, 20, 21]. On the other hand, Large Language Models (LLMs) shows human-like abilities with text [4, 22, 23]. However quality and diversity comes at a cost: these models have a huge amount of parameters and cannot be hosted on an on-device NPU without some form of quantization. Tackling computational complexity Diffusion models' computational complexity has two major sources: the amount of denoising passes and the conditioning mechanism. The former issue can be addressed by distilling the model to few or a single pass [24, 25], while the latter by modeling the latent-noise space [26] to decrease the number of function evaluations. 
Despite the success of these approaches, a major bottleneck remains: the memory required for inference. Quantization aim to preserve the original model's quality when moving to lower precision - thus saving memory and enabling deployment. Quantization-Aware-Training QAT methods optimize model's weights at lower precision [15, 27] aiming to recover the original performance. Early approaches [15, 27] study QAT on ResNet for classification: starting from low bit precision - b = 2 or 4 - the weights quantization is progressively reduced [27] or selected at random[15]. Although [15, 27] closely relate to FraQAT, they (i) focus on classification networks, (ii) ignore the gap with full precision models and the hierarchal nature of different precisions by starting from a low-bit quantization, (iii) aim to get models at different precisions. In parallel, Fracbits [28] introduces bit-width optimization by relying on a non-standard quantization formula for fractional bits. Bit-width are regularized to achieve the desired precision, followed by a binary search and fine tuning process to finalize weights. However, this procedure focuses on average bit length across the layers, obtaining lower bits in some layers at the cost of higher bits in other layers, which may not map to readily available hardware. Finally, as [15, 27], Fracbits focuses on classification rather than generative tasks. More recently, MatryoshkaQAT [14] exploits the nested structure of a number's byte representation to encode LLM's weights at different precision - 8, 4, and 2. The joint training at the three precisions result in a multi-precision model. Parallel to this work, Liu et al. [29] extend [30], and discover that model quantized to lower than 4 bits develop a different representation from the original models. Finally, [31] based on the Teaching Assistant distillation framework [32], quantize LLM models to Algorithm 1 Fractional Quantization Aware Training Input: Pre-trained model MW 32A32, dataset D, loss function L, quantization schedule B (e.g., {8, 5, 4.5, 4}), optimizer O Output: Quantized model MW 4A8 1: MW bA8 ←MW 32A32 2: for b ∈B do 3: MW bA8 ←QUANTIZELINEARLAYER(MW bA8, WbA8) 4: for batch ∈D do 5: OW bA8 ←FORWARD(MW bA8, batch) 6: OW 32A32 ←FORWARD(MW 32A32, batch) 7: l ←L(stop grad(OW 32A32), OW bA8) 8: OPTIMIZE(O, l, MW bA8) 9: end for 10: end for 11: return MW bA8 5 W1A1. Similar to our work, the authors use a progressive strategy, however limited to W1A4 (to W1A2) to W1A1, where intermediate models (W1A2) models are used as teachers. Combined with a series of techniques (e.g., gradient clipping, elastic binarization, etc) to stabilize the optimization process the authors achieve a binary quantized model. Since the quantization is binary, the model is not deployable to edge-devices. In this work, we show progressive quantization is enough to quantize a model that can be deployed on edge-devices. Related to diffusion models, Bitfusion [16] combines different QAT techniques, such as distillation and fine tuning, to convert SD1.5 [19] to 1.99 bits. Following a similar trend, Wang et al. in [33] selectively fine tune SD1.5 to handle activation distribution. BinaryDM [34] takes model quantization one step further by applying a multi-stage QAT approach to quantization. Notwithstanding these impressive results, none of these techniques showcase low-bit quantization of large scale DiT models such as SD3.5-M [1] (2.2B) or FLUX.1-schnell [3] (12B). Ours is the first QAT approach applied to such models. 
Post-Training Quantization Among the recent seminal works on LLMs, SmoothQuant [35] proposes a PTQ approach by injecting a smoothing factor in linear layers to reduce the impact of outliers in LLMs. AWQ [36] and MobileQuant [37] extend this approach to lowers the precision to W4A8 thus enabling an LLM to run on-device. These works have been extended to DiT models with specific focus on timesteps. PTQ4DiT [38] builds a calibration dataset by sampling timesteps before quantizing the diffusion model. DiTAS [39] proposes a temporal-aggregated smoothing technique combined with LoRA and a grid-search to reduce quantization errors of small DiT networks with W4A8 quantization. QuEST [33] through layer specific (PTQ) fine tuning achieves W4A4 quantization. QDiT [40], inspired by [36, 41, 37], combines a fine-grained group quantization with a novel automatic allocation algorithm to account for weights' spatial variance. Most recently, SVDQuant [12] and FBQuant [42] have shown impressive preservation of image quality generation when quantizing FLUX.1-schnell [3] to W4A16. The authors rely on a low-rank approximation of the original weights and a residual branch to absorb outliers. 4 Experiments Models We focus our evaluation on recent text to image (T2I) models since there is an increasing interest in lowering their computational requirements due to their large number of parameters. In particular, we assess the soundness of the proposed approach over 4 different diffusion models: SD3.5-Medium [1], Sana [10], PixArt-Σ [43], and FLUX.1-schnell [3]. These models space a wide range of parameters, 0.6B-12B, and architectural innovations, linear and non-linear attentions, DiT, MM-DiT, etc. In all our experiments, we start from a pre-trained W32A32 model and through FraQAT reduce it to W4A8. We bootstrap the student at INT8 and optimize it to replicate its FP32 counter-part. This initialization allows FraQAT to start with minimal gap between teacher (FP32) and student (INT8). After T epochs we lower the precision of the model - number of bits -, and continue the optimization. This procedure is repeated until the precision reaches 4 bits. Since we apply a fake quantization process, we can emulate arbitrary precisions that have no hardware support, e.g., INT4.5. Unless stated otherwise all our experiments follow the same progression: 8 →7 →6 →5.5 →5 → 4.75 →4.5 →4.25 →4 targeting linear layers. Further hyper-parameters are detailed in the appendix. In all cases we distil a W4A8 model through knowledge-distillation loss, using dynamic quantization. Note that the proposed approach is applicable to any quantization precision, e.g., W2A8, and static quantization. Baselines We compare the proposed approach with state of the art PTQ techniques: DiTAS [39] (W4A8) and SVDQuant [12](W4A16). In both cases, we use the code publicly available and train (calibrate) the model over the train dataset (see below). To further prove the soundness of FraQAT, we implement a vanilla QAT (vQAT) approach, and an SVDQuant-like QAT (SVDQAT). In vQAT, we apply a W4A8 quantization to all linear layers and optimize them with the same loss as FraQAT. Instead in SVDQAT, we inject a LoRA-like layer in all linear layers, as in [12], and optimize both the low-rank and residual branch - almost doubling the number of parameters. Finally, we report the results for the naive quantization (Dynamic Q.) to desired precision through torchao3. 
3https://github.com/pytorch/ao 6 Table 1: Qualitative evaluation: we evaluate FraQAT using a fractional quantization schedule on PixArt Evaluation dataset [43] and MidJourney HQ Evaluation dataset [44] measuring FID, and CLIP-FID wrt the original model, and ImageReward (IR) [45]. PixArt-Σ SD3.5 Medium Sana 600M PixArt-Σ Flux-schnell Method Precision FID ↓ CLIP IR ↑ FID ↓ CLIP IR ↑ FID ↓ CLIP IR ↑ FID ↓ CLIP IR ↑ FID ↓ FID ↓ FID ↓ FID ↓ Dynamic Q. W4A8 9.36 2.08 0.56 2.22 0.24 0.57 13.35 6.19 0.35 8.17 1.13 -0.73 DiTAS W4A8 27.93 13.77 0.41 12.87 4.58 0.62 7.30 3.95 0.84 - - - SVDQuant W4A16 14.42 3.14 0.66 2.43 0.24 0.60 6.80 2.02 0.79 2.26 0.36 0.84 SVDQAT W4A8 2.57 0.28 0.80 1.93 0.13 0.48 5.38 1.48 0.76 - - - vQAT W4A8 2.67 0.31 0.78 2.13 0.16 0.45 7.00 2.52 0.79 3.40 0.66 0.87 FraQAT W4A8 2.54 0.27 0.82 2.17 0.19 0.48 4.48 1.07 0.79 2.55 0.30 0.86 MJHQ SD3.5 Medium Sana 600M PixArt-Σ Flux-schnell Method Precision FID ↓ CLIP IR ↑ FID ↓ CLIP IR ↑ FID ↓ CLIP IR ↑ FID ↓ CLIP IR ↑ FID ↓ FID ↓ FID ↓ FID ↓ Dynamic Q. W4A8 10.29 2.11 0.65 2.40 0.28 0.63 15.04 5.55 0.44 8.66 1.24 -0.90 DiTAS W4A8 32.04 14.06 0.41 12.91 5.59 0.68 8.63 4.07 1.04 - - - SVDQuant W4A16 15.10 3.06 0.78 2.48 0.25 0.62 6.95 1.71 0.99 2.41 0.41 0.96 SVDQAT W4A8 2.85 0.32 0.91 2.04 0.16 0.53 5.83 1.44 0.96 - - - vQAT W4A8 3.01 0.37 0.89 2.13 0.20 0.47 7.38 2.12 0.99 3.56 0.73 0.99 FraQAT W4A8 2.78 0.32 0.96 2.34 0.24 0.50 4.95 1.054 0.97 2.55 0.39 0.99 Datasets All models in all our experiments are trained - calibrated - on YE-POP4 dataset. We split it between training (97.5%) and validation (2.5%). Then, quantized models are evaluated on two different datasets: PixArt-Σ Evaluation dataset [43]5, and MidJourney HQ Evaluation dataset [44]. In all cases during training and evaluation, we generate 512 × 512 images. Metrics We quantitatively evaluate the proposed technique over a variety of metrics measuring image quality, and features distributions. In particular, we measure the image quality with Image Reward (IR) [45], and measure the features distribution disparity between the generated samples of the quantized model and the original one with FID [46] and CLIP-FID [47]. This choice allows us to quantify the similarity between the original model and its quantized version: similar images have similar features - thus lower FID score. 4.1 Quantitative evaluation Table 1 shows a quantitative comparison of FraQAT across the aforementioned five models, two SOTA QAT approaches for direct comparison, and three PTQ techniques for overall completeness across all quantization approaches. All combinations are evaluated across two different test datasets. Due to memory requirements, we are unable to apply some techniques to Flux-schnell [3] (12B model). SVDQuant was developed and optimized for Sana, PixArt-Σ and FLUX.1-schnell, thus achieves lower performance in SD3.5-Medium, as shown in Table 1, yielding particularly worse FID and CLIP FID metrics for both test datasets. Table 1 reveals mixed results for Dynamic Quantization and DiTaS on both test datasets. Specifically, DiTaS outperforms Dynamic Quantization in PixArt-Σ but reveals itself overall worse for SD3.5-Medium and Sana. These models have different architectures, namely DiT, MM-DiT and linear attention, which indicates that DiTaS can be particularly sensitive to the model family. As for other QAT approaches, our developed SVDQuant-like QAT (SVDQAT) consistently outperforms vanilla QAT (vQAT) across the test datasets. 
Arguably, the increased number of parameters - LoRA and residual branch - better cope with the lower precision. Our motivation to establish this strong QAT baseline was the success of SVDQuant as a PTQ approach alongside its need of a higher precision W4A16 that hinders latency in on-device accelerators. Finally, Table 1 shows FraQAT 4https://huggingface.co/datasets/Ejafa/ye-pop 5https://huggingface.co/datasets/PixArt-alpha/PixArt-Eval-30K 7 outperforms even the strongest QAT baseline we developed namely SVDQAT, with overall higher gains for SD3.5-Medium and PixArt-Σ for both test datasets. SD3.5-M Sana 600M PixArt-Σ Flux-Schnell (a) Original model (b) SVDQuant (c) vQAT (d) FraQAT (Ours) Figure 4: Qualitative comparison: FraQAT(d) generates images similar to the original model (a). Prompts are from MJHQ dataset [44]. 8 norm1 attn ff 0 10 20 30 40 50 % outliers norm1 attn ff 0 10 20 30 40 50 Layer name sd35 pixart Figure 5: Outliers: outliers distribution for activations varies across models. SD3.5-M (left) experience most of its outliers right after Feed Forward layers, while for PixArt-Σ, most outliers are in Attention layers. Table 2: Outlier analysis: we optimize specific layers types while the rest of the model is frozen and quantized (W4A8). FID and CLIP-FID are computed on PixArt-Σ [43] evaluation dataset. Model Layer FID ↓ CLIP FID ↓ SD3.5-M FF 2.23 0.23 Attn 2.32 0.24 TF 2.49 0.28 All 2.54 0.22 Sana 600M FF 2.18 0.17 Attn 2.10 0.16 TF 2.13 0.16 All 2.17 0.19 PixArt-Σ FF 5.34 1.55 Attn 6.48 2.23 TF 4.40 1.13 All 4.48 1.07 4.2 Qualitative evaluation To complement the quantitative evaluation in the preceding section, Figure 4 depicts for each of the four models considered: the original model, one PTQ representative (SVDQuant), one QAT alternative (vQAT) and the proposed QAT approach (FraQAT). We have selected SVDQuant among DiTaS and Dynamic Quantization given its popularity. We focus on vQAT rather than our other developed QAT baseline (SVDQAT) given its overall popularity. Note that, the images generated in each row in Figure 4 share the same seed and prompt. As expected from a PTQ approach, SVDQuant under-performs when generating certain high frequency image details. This is especially visible when multiple faces are present in a generated image as shown in the first row. QAT improves high frequency image details, but generates significantly different images than the original model for the same prompt and seed. Finally, FraQAT preserves both high frequency details and generates images as close to the original model across all baselines. 4.3 Outlier analysis Activation outliers disrupt the quantization process by introducing artifacts or biases. By analyzing these outliers across different models, we discover that different models produce outliers in different layers. For example, in SD3.5-M outliers emerge after Feed-Forward (FF) layers, while in PixArt-Σ outliers arise from Attention (Attn) layers, see Figure 5. Through selectively training specific layers, we can reduce FraQAT's computational demand while obtaining a deployable model. In this vein, we analyze the impact of selective training, i.e., we optimize only certain layers while the rest of the network is frozen and quantized (W4A8). In particular, we focus on attention layers (Attn), feed forward layers (FF), and transformer blocks (TF), and compare it with training the entire network (Full). Quantitative results in Table 2 show that there is no clear winner - a layer type for all architecture. 
By selectively training specific layers, we can reduce FraQAT's computational demand while still obtaining a deployable model. In this vein, we analyze the impact of selective training, i.e., we optimize only certain layers while the rest of the network is frozen and quantized (W4A8). In particular, we focus on attention layers (Attn), feed-forward layers (FF), and transformer blocks (TF), and compare them with training the entire network (All). The quantitative results in Table 2 show that there is no clear winner, i.e., no single layer type that is best across all architectures. Different models benefit from optimizing different layers. Nevertheless, we recommend starting with the transformer blocks (TF), as this reduces memory requirements, lowers computational demands, and addresses all outliers.

4.4 On-device model deployment

To demonstrate the feasibility of deploying models quantized with FraQAT on edge devices, we quantized Sana 600M [10] to W4A8 and deployed it on the Samsung S25U, which runs the Qualcomm SM8750-AB Snapdragon 8 Elite Hexagon Tensor Processor (HTP). Compared to CPUs and GPUs, integer accelerators support a limited range of precisions and exclusively support static quantization for both weights and activations; please refer to Section 2.1 for a discussion of static and dynamic quantization. Unlike the baselines used in Section 4.1, FraQAT supports both the dynamic and the static quantization paradigms. To apply FraQAT to Sana 600M [10] with static weight and activation quantization, we pre-compute scale and offset through a statistical analysis of the features: we select 100 random samples and pass them through the DiT. The feature values of every layer are recorded and used to compute their mean and standard deviation. Following the literature [48], we use a 3-standard-deviation range for inliers. Finally, we use these scales and offsets during QAT, while the overall training procedure does not change from the one discussed in Section 4.1.
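A minimal sketch of this calibration step is given below, assuming per-tensor asymmetric 8-bit activation quantization over the [mean - 3·std, mean + 3·std] inlier range; the helper names and example statistics are illustrative, not the authors' implementation.

```python
# Sketch: static activation quantization parameters from recorded statistics.
import torch

def static_qparams(mean: float, std: float, n_bits: int = 8):
    """Asymmetric uint scale/offset covering the 3-sigma inlier range;
    values outside this range saturate."""
    lo, hi = mean - 3.0 * std, mean + 3.0 * std
    qmax = 2 ** n_bits - 1
    scale = (hi - lo) / qmax
    offset = round(-lo / scale)            # zero-point in [0, qmax]
    return scale, offset

def fake_quantize(x: torch.Tensor, scale: float, offset: int, n_bits: int = 8):
    """Quantize-dequantize as used inside QAT forward passes."""
    qmax = 2 ** n_bits - 1
    q = torch.clamp(torch.round(x / scale) + offset, 0, qmax)
    return (q - offset) * scale

# Example: stats recorded for one layer from the 100 calibration samples.
scale, offset = static_qparams(mean=0.02, std=0.9)
x_q = fake_quantize(torch.randn(4, 16), scale, offset)
```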
All linear layers of the quantized model run at precision W4A8, except for the last layer, which runs in W4A16. This is a good compromise that preserves quality without impacting latency. Overall, the model has a latency of 66 ms per forward step, while the same model running in W4A16 (the bit-width supported by SVDQuant) has a latency of 95 ms, a 30.5% latency improvement. Finally, to assess the on-device quality, we generate samples and compare them with those from the original model in Figure 6. The quantized model produces high-quality pictures that resemble those of the original model and of its GPU version.

Figure 6: On-device generation: we generate images on a mobile phone (c) and compare the results with the samples generated on GPU by the original model (a) and the quantized model (b).

5 Limitations and future work

The proposed approach is a step forward compared to SOTA Quantization-Aware Training (QAT). Like most, if not all, QAT techniques, FraQAT yields higher quality but is more computationally expensive than Post-Training Quantization (PTQ). Compared with SOTA multi-precision QAT approaches for LLMs, such as Matryoshka Quantization [14], FraQAT's quantized model is tailored to a single bit precision, and we leave multi-precision support for future work. Furthermore, the intermediate precision levels are hand-picked; in the future, we plan to design an algorithm that selects the most impactful precisions. The proposed training scheme may also benefit from regularizers such as weight decay and data augmentation. For example, preliminary regularization tests on Sana 0.6B [10] show that weight decay boosts performance by ~10%. A proper investigation of regularization and its impact on training is left to future work. Finally, FraQAT's networks are optimized using knowledge distillation only, but different losses, such as feature and task losses, may further boost image generation quality.

6 Conclusions

FraQAT is a novel Quantization-Aware Training technique that exploits fractional bits while progressively reducing the parameter precision during the quantization process. Thanks to this curriculum learning strategy, we address the outliers, as they arise, at different precisions, achieving more stable and faster training. The proposed method is evaluated on a variety of state-of-the-art DiT and MM-DiT models. We show that, both qualitatively and quantitatively, the quantized models achieve superior performance compared to state-of-the-art QAT approaches. Such improved quality, if deployed on-device, may boost mobile users' productivity, preserve their privacy, and generate personalized content for users.

References

[1] Stability AI, "Stable diffusion 3.5." https://stability.ai/news/introducing-stable-diffusion-3-5, 2024.
[2] E. Xie, J. Chen, Y. Zhao, J. Yu, L. Zhu, C. Wu, Y. Lin, Z. Zhang, M. Li, J. Chen, H. Cai, B. Liu, D. Zhou, and S. Han, "SANA 1.5: Efficient Scaling of Training-Time and Inference-Time Compute in Linear Diffusion Transformer," 2025.
[3] Black Forest Labs, "Flux." https://github.com/black-forest-labs/flux, 2024.
[4] A. Grattafiori, A. Dubey, A. Jauhri, A. Pandey, A. Kadian, A. Al-Dahle, A. Letman, A. Mathur, A. Schelten, A. Vaughan, et al., "The llama 3 herd of models," arXiv preprint, 2024.
[5] G. Team, A. Kamath, J. Ferret, S. Pathak, N. Vieillard, R. Merhej, S. Perrin, T. Matejovicova, A. Ramé, M. Rivière, et al., "Gemma 3 technical report," arXiv preprint, 2025.
[6] Qualcomm, "Snapdragon 8 Elite Mobile Platform." https://docs.qualcomm.com/bundle/publicresource/87-83196-1_REV_D_Snapdragon_8_Elite_Mobile_Platform_Product_Brief.pdf.
[7] Intel, "Xeon Processors with Performance-Cores (P-Cores)." https://www.intel.com/content/www/us/en/products/details/processors/xeon/xeon6-p-cores.html.
[8] AMD, "Ryzen AI 300 Series Processors." https://www.amd.com/content/dam/amd/en/documents/partner-hub/ryzen/amd-ryzen-ai-300-series-vs-qualcomm-snapdragon-x-elite-deck.pdf.
[9] NVIDIA, "NVIDIA HGX Platform." https://www.nvidia.com/en-gb/data-center/hgx.
[10] E. Xie, J. Chen, J. Chen, H. Cai, H. Tang, Y. Lin, Z. Zhang, M. Li, L. Zhu, Y. Lu, and S. Han, "SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformers," 2024.
[11] J. Chen, S. Xue, Y. Zhao, J. Yu, S. Paul, J. Chen, H. Cai, E. Xie, and S. Han, "SANA-Sprint: One-Step Diffusion with Continuous-Time Consistency Distillation," 2025.
[12] M. Li, Y. Lin, Z. Zhang, T. Cai, X. Li, J. Guo, E. Xie, C. Meng, J.-Y. Zhu, and S. Han, "SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models," 2024.
[13] Z. Zhang, Y. Gao, J. Fan, Z. Zhao, Y. Yang, and S. Yan, "Selectq: Calibration data selection for post-training quantization," Machine Intelligence Research, pp. 1-12, 2025.
[14] P. Nair, P. Datta, J. Dean, P. Jain, and A. Kusupati, "Matryoshka Quantization," 2025.
[15] A. Bulat and G. Tzimiropoulos, "Bit-Mixer: Mixed-precision networks with runtime bit-width selection," in ICCV, pp. 5188-5197, 2021.
[16] Y. Sui, Y. Li, A. Kag, Y. Idelbayev, J. Cao, J. Hu, D. Sagar, B. Yuan, S. Tulyakov, and J. Ren, "Bitsfusion: 1.99 bits weight quantization of diffusion model," 2024.
[17] Y. Bengio, J. Louradour, R. Collobert, and J. Weston, "Curriculum learning," in Proceedings of the 26th Annual International Conference on Machine Learning, pp. 41-48, 2009.
[18] P. Dhariwal and A. Nichol, "Diffusion models beat gans on image synthesis," Advances in Neural Information Processing Systems, vol. 34, pp. 8780-8794, 2021.
Ommer, "High-Resolution Image Synthesis with Latent Diffusion Models," in CVPR, pp. 10684-10695, 2022. [20] Y. Lipman, R. T. Q. Chen, H. Ben-Hamu, M. Nickel, and M. Le, "Flow Matching for Generative Modeling," , 2023. [21] I. Gat, T. Remez, N. Shaul, F. Kreuk, R. T. Chen, G. Synnaeve, Y. Adi, and Y. Lipman, "Discrete flow matching," Advances in Neural Information Processing Systems, vol. 37, pp. 133345133385, 2024. [22] J. Achiam, S. Adler, S. Agarwal, L. Ahmad, I. Akkaya, F. L. Aleman, D. Almeida, J. Altenschmidt, S. Altman, S. Anadkat, et al., "Gpt-4 technical report," arXiv preprint , 2023. [23] G. Team, R. Anil, S. Borgeaud, J.-B. Alayrac, J. Yu, R. Soricut, J. Schalkwyk, A. M. Dai, A. Hauth, K. Millican, et al., "Gemini: a family of highly capable multimodal models," arXiv preprint , 2023. [24] T. Salimans and J. Ho, "Progressive distillation for fast sampling of diffusion models," arXiv preprint , 2022. [25] M. Noroozi, I. Hadji, B. Martinez, A. Bulat, and G. Tzimiropoulos, "You Only Need One Step: Fast Super-Resolution with Stable Diffusion via Scale Distillation," in ECCV, pp. 145-161, Springer, 2025. [26] M. Noroozi, A. G. Ramos, L. Morreale, R. Chavhan, M. Chadwick, A. Mehrotra, and S. Bhattacharya, "Guidance free image editing via explicit conditioning," arXiv preprint , 2025. [27] Q. Jin, L. Yang, and Z. Liao, "AdaBits: Neural Network Quantization with Adaptive Bit-Widths," in CVPR, pp. 2146-2156, 2020. [28] L. Yang and Q. Jin, "Fracbits: Mixed precision quantization via fractional bit-widths," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 10612-10620, 2021. [29] Z. Liu, C. Zhao, H. Huang, S. Chen, J. Zhang, J. Zhao, S. Roy, L. Jin, Y. Xiong, Y. Shi, et al., "ParetoQ: Scaling Laws in Extremely Low-bit LLM Quantization," , 2025. [30] M. Nagel, M. Fournarakis, Y. Bondarenko, and T. Blankevoort, "Overcoming Oscillations in Quantization-Aware Training," in ICML, pp. 16318-16330, PMLR, 2022. [31] Z. Liu, B. Oguz, A. Pappu, L. Xiao, S. Yih, M. Li, R. Krishnamoorthi, and Y. Mehdad, "Bit: Robustly binarized multi-distilled transformer," Advances in neural information processing systems, vol. 35, pp. 14303-14316, 2022. [32] S. I. Mirzadeh, M. Farajtabar, A. Li, N. Levine, A. Matsukawa, and H. Ghasemzadeh, "Improved knowledge distillation via teacher assistant," in Proceedings of the AAAI conference on artificial intelligence, vol. 34, pp. 5191-5198, 2020. [33] H. Wang, Y. Shang, Z. Yuan, J. Wu, J. Yan, and Y. Yan, "QuEST: Low-bit Diffusion Model Quantization via Efficient Selective Finetuning," , 2024. [34] X. Zheng, X. Liu, H. Qin, X. Ma, M. Zhang, H. Hao, J. Wang, Z. Zhao, J. Guo, and M. Magno, "BinaryDM: Accurate Weight Binarization for Efficient Diffusion Models," , 2024. [35] G. Xiao, J. Lin, M. Seznec, H. Wu, J. Demouth, and S. Han, "SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models," , 2024. [36] J. Lin, J. Tang, H. Tang, S. Yang, W.-M. Chen, W.-C. Wang, G. Xiao, X. Dang, C. Gan, and S. Han, "AWQ: Activation-aware Weight Quantization for On-device LLM Compression and Acceleration," Proceedings of Machine Learning and Systems, vol. 6, pp. 87-100, 2024. 12 [37] F. Tan, R. Lee, Ł. Dudziak, S. X. Hu, S. Bhattacharya, T. Hospedales, G. Tzimiropoulos, and B. Martinez, "MobileQuant: Mobile-friendly Quantization for On-device Language Models," , 2024. [38] J. Wu, H. Wang, Y. Shang, M. Shah, and Y. Yan, "PTQ4DiT: Post-training Quantization for Diffusion Transformers," , 2024. [39] Z. Dong and S. Q. 
Zhang, "DiTAS: Quantizing Diffusion Transformers via Enhanced Activation Smoothing," , 2024. [40] L. Chen, Y. Meng, C. Tang, X. Ma, J. Jiang, X. Wang, Z. Wang, and W. Zhu, "Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers," , 2024. [41] Y. Zhao, C.-Y. Lin, K. Zhu, Z. Ye, L. Chen, S. Zheng, L. Ceze, A. Krishnamurthy, T. Chen, and B. Kasikci, "Atom: Low-bit quantization for efficient and accurate llm serving," Proceedings of Machine Learning and Systems, vol. 6, pp. 196-209, 2024. [42] Y. Liu, H. Fang, L. He, R. Zhang, Y. Bai, Y. Du, and L. Du, "FBQuant: FeedBack Quantization for Large Language Models," , 2025. [43] J. Chen, C. Ge, E. Xie, Y. Wu, L. Yao, X. Ren, Z. Wang, P. Luo, H. Lu, and Z. Li, "Pixart-σ: Weak-to-strong training of diffusion transformer for 4k text-to-image generation," in European Conference on Computer Vision, pp. 74-91, Springer, 2024. [44] D. Li, A. Kamko, E. Akhgari, A. Sabet, L. Xu, and S. Doshi, "Playground v2. 5: Three insights towards enhancing aesthetic quality in text-to-image generation," arXiv preprint , 2024. [45] J. Xu, X. Liu, Y. Wu, Y. Tong, Q. Li, M. Ding, J. Tang, and Y. Dong, "Imagereward: Learning and evaluating human preferences for text-to-image generation," Advances in Neural Information Processing Systems, vol. 36, pp. 15903-15935, 2023. [46] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, "Rethinking the inception architecture for computer vision," in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2818-2826, 2016. [47] T. Kynk ̈a ̈anniemi, T. Karras, M. Aittala, T. Aila, and J. Lehtinen, "The role of imagenet classes in fr\'echet inception distance," arXiv preprint , 2022. [48] R. Wang, Y. Gong, X. Liu, G. Zhao, Z. Yang, B. Guo, Z. Zha, and P. Cheng, "Optimizing large language model training using fp4 quantization," arXiv preprint , 2025. [49] Z. Lin, D. Pathak, B. Li, J. Li, X. Xia, G. Neubig, P. Zhang, and D. Ramanan, "Evaluating textto-visual generation with image-to-text generation," arXiv preprint , 2024. [50] J. Wang, K. C. Chan, and C. C. Loy, "Exploring clip for assessing the look and feel of images," in Proceedings of the AAAI conference on artificial intelligence, vol. 37, pp. 2555-2563, 2023. [51] G. Team, "Gemma," 2024. [52] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu, "Exploring the limits of transfer learning with a unified text-to-text transformer," Journal of machine learning research, vol. 21, no. 140, pp. 1-67, 2020. [53] C. Clark, K. Lee, M.-W. Chang, T. Kwiatkowski, M. Collins, and K. Toutanova, "Boolq: Exploring the surprising difficulty of natural yes/no questions," arXiv preprint , 2019. [54] A. Talmor, J. Herzig, N. Lourie, and J. Berant, "Commonsenseqa: A question answering challenge targeting commonsense knowledge," arXiv preprint , 2018. 13 A Experimental evaluation A.1 Baselines For state of the art baselines we rely on code released by authors67 and use the default parameters. For all approaches we use pre-trained models with default resolution 512 × 512. Where needed we change the baselines configurations to use the same model. A.2 Hyper-parameters for QAT We detail the various hyper parameters for all QAT experiments in Table 3. In all cases we rely on FuseAdam as optimizer and optimize for 25 epochs. All experiments run on AMD MI300X and are implemented using PyTorch8, Lightning9, torchao10, with seed 1234. Table 3: Hyper-parameters: Detailed hyper-parameters required to replicate all experiments. 
A.3 Qualitative evaluation

For additional qualitative evaluation on the MJHQ dataset [44], please see the HTML pages in the supplementary zip file.

A.4 Quantitative evaluation

Here we report additional evaluation of the proposed approach with a wider set of metrics. In particular, we rely on VQA score [49] to measure the adherence of the generated samples to the input prompts, and we measure image quality with CLIP-IQA [50]. Table 5 shows that FraQAT outperforms even the strongest QAT baseline we developed, namely SVDQAT, with overall higher gains for SD3.5-Medium and PixArt-Σ on both test datasets.

Table 5: Quantitative evaluation: we evaluate FraQAT using a fractional quantization schedule on the PixArt-Σ Evaluation dataset [43] and the MJHQ Evaluation dataset [44], measuring FID and CLIP-FID (CFID) w.r.t. the original model, CLIP-IQA (IQA) [50], ImageReward (IR) [45], and VQA score (VQA) [49].

PixArt-Σ Evaluation dataset
                      |  SD3.5 Medium                    |  Sana 600M                       |  PixArt-Σ                        |  Flux-schnell
Method     Precision  | FID↓  CFID↓  IQA↑  IR↑    VQA↑   | FID↓  CFID↓  IQA↑  IR↑    VQA↑   | FID↓  CFID↓  IQA↑  IR↑    VQA↑   | FID↓  CFID↓  IQA↑  IR↑    VQA↑
Dynamic Q. W4A8       | 9.36  2.08   0.44  0.56   0.84   | 2.22  0.24   0.46  0.57   0.82   | 13.35 6.19   0.44  0.35   0.82   | 8.17  1.13   0.43  -0.73  0.77
DiTAS      W4A8       | 27.93 13.77  0.47  0.41   0.82   | 12.87 4.58   0.45  0.62   0.82   | 7.30  3.95   0.46  0.84   0.86   | -     -      -     -      -
SVDQuant   W4A16      | 14.42 3.14   0.42  0.66   0.85   | 2.43  0.24   0.43  0.60   0.82   | 6.80  2.02   0.43  0.79   0.86   | 2.26  0.36   0.42  0.84   0.85
SVDQAT     W4A8       | 2.57  0.28   0.45  0.80   0.85   | 1.93  0.13   0.43  0.48   0.82   | 5.38  1.48   0.43  0.76   0.86   | -     -      -     -      -
vQAT       W4A8       | 2.67  0.31   0.44  0.78   0.85   | 2.13  0.16   0.43  0.45   0.81   | 7.00  2.52   0.45  0.79   0.85   | 3.40  0.66   0.41  0.87   0.86
FraQAT     W4A8       | 2.54  0.27   0.45  0.82   0.86   | 2.17  0.19   0.42  0.48   0.82   | 4.48  1.07   0.45  0.79   0.86   | 2.55  0.30   0.41  0.86   0.85

MJHQ Evaluation dataset
                      |  SD3.5 Medium                    |  Sana 600M                       |  PixArt-Σ                        |  Flux-schnell
Method     Precision  | FID↓  CFID↓  IQA↑  IR↑    VQA↑   | FID↓  CFID↓  IQA↑  IR↑    VQA↑   | FID↓  CFID↓  IQA↑  IR↑    VQA↑   | FID↓  CFID↓  IQA↑  IR↑    VQA↑
Dynamic Q. W4A8       | 10.29 2.11   0.44  0.65   0.79   | 2.40  0.28   0.45  0.63   0.74   | 15.04 5.55   0.43  0.44   0.74   | 8.66  1.24   0.42  -0.90  0.65
DiTAS      W4A8       | 32.04 14.06  0.47  0.41   0.73   | 12.91 5.59   0.45  0.68   0.75   | 8.63  4.07   0.46  1.04   0.80   | -     -      -     -      -
SVDQuant   W4A16      | 15.10 3.06   0.42  0.78   0.78   | 2.48  0.25   0.42  0.62   0.75   | 6.95  1.71   0.43  0.99   0.80   | 2.41  0.41   0.42  0.96   0.79
SVDQAT     W4A8       | 2.85  0.32   0.45  0.91   0.80   | 2.04  0.16   0.43  0.53   0.74   | 5.83  1.44   0.43  0.96   0.81   | -     -      -     -      -
vQAT       W4A8       | 3.01  0.37   0.44  0.89   0.80   | 2.13  0.20   0.43  0.47   0.74   | 7.38  2.12   0.44  0.99   0.80   | 3.56  0.73   0.41  0.99   0.80
FraQAT     W4A8       | 2.78  0.32   0.45  0.96   0.81   | 2.34  0.24   0.42  0.50   0.74   | 4.95  1.05   0.44  0.97   0.80   | 2.55  0.39   0.41  0.99   0.80

A.5 Quantization schedule

To validate the benefits of the fractional quantization schedule (Table 4), we compare it with its integer counterpart (8 → 7 → 6 → 5 → 4) and a simpler progressive schedule (16 → 8 → 4). For a fair comparison, all experiments have the same computational budget.
We measure the validation loss across training and plot it in Figure 7. The integer and simple schedules perform comparably to each other. The fractional schedule, on the other hand, consistently outperforms the two competitors throughout training, resulting in a noticeably lower validation loss.

Figure 7: Fractional schedule: we train SD3.5-M using a simple progressive schedule (16 → 8 → 4, green), an integer schedule (orange), and a fractional schedule (blue). As seen in the plot (x-axis: epoch, 0-25; y-axis: validation loss, ~0.001-0.004), the fractional schedule achieves a lower validation loss.

B Additional evaluation

B.1 Language Model

The proposed method is agnostic to the architecture and the application. We apply FraQAT to Gemma2 2B IT [51] (https://huggingface.co/google/gemma-2-2b-it). Starting from the original FP16 model, we quantize it to 4 bits in the same fashion as the T2I models in the main paper, following the schedule of Section A.2. The quantized model (W4A8) is then compared with the original model. As training set we rely on a subset of the C4 dataset [52]: we randomly pick 384K samples for training and 38.4K samples for validation. The model is evaluated in zero-shot fashion on two datasets: BoolQ [53] (https://huggingface.co/datasets/google/boolq) and Commonsense QA [54] (https://huggingface.co/datasets/allenai/social_i_qa).

Table 6: Evaluation on language models: we apply FraQAT to Gemma2 2B IT [51] exactly as we did to the T2I models. The quantized model is evaluated on Commonsense QA and BoolQ. The resulting model shows a minimal drop from the original language model.

Model      Precision   Common Sense QA↑   BoolQ↑        COQA↑
Original   W16A32      0.70 ± 0.01        0.76 ± 0.01   0.66 ± 0.01
FraQAT     W4A8        0.69 ± 0.01        0.72 ± 0.01   0.70 ± 0.01

Table 6 shows a minimal drop when FraQAT is applied to the Gemma2 2B IT model, demonstrating that FraQAT can be applied to language models as well as to vision models.
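A zero-shot evaluation of this kind can be reproduced along the lines below, scoring the log-likelihood of "yes" vs. "no" continuations on BoolQ; the prompt template and the scoring rule are our own illustrative choices, not the exact harness used for Table 6.

```python
# Sketch: zero-shot BoolQ accuracy via yes/no log-likelihood comparison.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2-2b-it").eval()

def answer_logprob(prompt: str, answer: str) -> float:
    """Sum of log-probabilities of the answer tokens given the prompt."""
    ids = tok(prompt + answer, return_tensors="pt").input_ids
    n_prompt = tok(prompt, return_tensors="pt").input_ids.shape[1]
    with torch.no_grad():
        logp = model(ids).logits.log_softmax(-1)
    tgt = ids[0, n_prompt:]
    # logits at position i predict token i+1, hence the shifted slice
    return logp[0, n_prompt - 1:-1].gather(-1, tgt[:, None]).sum().item()

data = load_dataset("google/boolq", split="validation")
correct = 0
for ex in data.select(range(200)):           # subsample for speed
    prompt = f"{ex['passage']}\nQuestion: {ex['question']}?\nAnswer:"
    pred = answer_logprob(prompt, " yes") > answer_logprob(prompt, " no")
    correct += int(pred == ex["answer"])
print("BoolQ zero-shot accuracy:", correct / 200)
```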
arXiv:2510.14817v1 [quant-ph] 16 Oct 2025
Signatures of Topological Symmetries on a Noisy Quantum Simulator

Christopher Lamb,1,∗ Robert M. Konik,2 Hubert Saleur,3,4 and Ananda Roy1,†
1 Department of Physics and Astronomy, Rutgers University, Piscataway, NJ 08854-8019, USA
2 Division of Condensed Matter Physics and Material Science, Brookhaven National Laboratory, Upton, NY 11973-5000, USA
3 Institut de physique théorique, CEA, CNRS, Université Paris-Saclay, France
4 Physics Department, University of Southern California, Los Angeles, USA

Topological symmetries, invertible and otherwise, play a fundamental role in the investigation of quantum field theories. Despite their ubiquitous importance across a multitude of disciplines ranging from string theory to condensed matter physics, controlled realizations of models exhibiting these symmetries in physical systems are rare. Quantum simulators based on engineered solid-state devices provide a novel alternative to conventional condensed matter systems for realizing these models. In this work, eigenstates of impurity Hamiltonians and loop operators associated with the topological symmetries for the Ising conformal field theory in two space-time dimensions are realized on IBM's ibm_kingston simulator. The relevant states are created on the quantum device using a hybrid quantum-classical algorithm. The latter is based on a variation of the quantum approximate optimization algorithm ansatz combined with the quantum natural gradient optimization method. Signatures of the topological symmetry are captured by measuring correlation functions of different qubit operators, with results obtained from the quantum device in reasonable agreement with those obtained from classical computations. The current work demonstrates the viability of noisy quantum simulators as platforms for investigating low-dimensional quantum field theories, with direct access to observables that are often difficult to probe in conventional condensed matter experiments.

Topological symmetries in quantum field theories are generalizations of global symmetries [1] which do not necessarily obey a group-like composition law and can even be non-invertible. The discovery of topological symmetries has shed new light on anomalies and renormalization group (RG) flows in non-abelian gauge theories [2, 3] and the standard model [4]. In contrast to their higher-dimensional counterparts, topological symmetries in conformal field theories (CFTs) residing in two space-time dimensions [5, 6] have explicit lattice realizations in terms of anyonic [7-11] and quantum rotor [12] chains. This allows quantitative characterization of entire RG flows [13, 14] and entanglement measures [12, 15]. These quantum field theories not only serve as toy models for their higher-dimensional counterparts, but also, in the Hamiltonian picture, realize variations of multi-channel Kondo models [16, 17] relevant for impurity problems in condensed matter physics.

Despite their ubiquitous importance, controlled realizations of these symmetries in realistic physical systems are rare. Engineered quantum devices provide an alternative to the established condensed matter setups for probing these models. For 2D CFTs, lattice realizations of the relevant impurity Hamiltonians as well as the associated topological symmetry operators have already been obtained [7-10, 18, 19], alongside a recipe for mapping the latter to qubit registers [20]. Even more important, certain topological symmetries in 2D CFTs are realized exactly in lattice realizations.
This removes the need to realize large system sizes to obtain agreement with field-theoretic predictions, a crucial feature allowing investigation of these symmetries using current quantum devices with modest sizes and coherence properties. In addition, the lattice models considered here are part of an integrable family of models [21]. This enables analytical computation of various equilibrium characteristics, precious for comparison with experimental results. As such, these models are ideal testbeds for the realization of topological symmetries on current engineered quantum systems. In fact, the latter provide direct access to several observables which are difficult to probe in conventional condensed matter experiments.

In this work, the ground state of an impurity Hamiltonian corresponding to the non-invertible topological symmetry of the Ising CFT is realized on IBM's superconducting-circuit-based ibm_kingston simulator. The relevant Hamiltonian is given by [14]

    H(v) = -\sum_{i=1}^{L-1} Z_i Z_{i+1} - \sum_{i=1}^{L} X_i - b\, Z_L Z_1 + H_d(v),    (1)

where

    H_d(v) = \frac{2\sinh^2(v)}{\cosh(2v)} \left( Z_j Z_{j+1} + X_j \right) + \frac{\sinh(2v)}{\cosh(2v)}\, Y_j Z_{j+1}.    (2)

In the above, the operators X_j, Z_j denote the Pauli operators at the jth site, with Y_j = iX_jZ_j. The parameter b is chosen to be either 1 or 0 to switch between a periodic and an open chain (see Fig. 1). The impurity is located between sites j and j+1 and its strength is parameterized by v. The latter tunes the strength of the impurity, with v → ∞ corresponding to the Ising chain with the non-invertible (Kramers-Wannier) duality defect [22, 23]. The case v = 0 corresponds to the usual Ising chain Hamiltonian with periodic (b = 1) or open (b = 0) boundary conditions. Variation of the 'Kondo screening length', given by l_B = e^{4v} [24], relative to the length scales under investigation provides access to the entire RG flow connecting the duality and the identity defects in the Ising CFT [14].
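Not part of the paper's toolchain, but useful for orientation: a small exact-diagonalization sketch of Eqs. (1, 2) in Python (NumPy/SciPy), of the kind that can generate the 'exact' reference values quoted below. Sites are 0-indexed here, and the impurity site j is a function argument.

```python
# Sketch: exact construction of H(v) of Eqs. (1, 2) and its ground state.
import numpy as np
from scipy.sparse import identity, kron, csr_matrix
from scipy.sparse.linalg import eigsh

X = csr_matrix(np.array([[0, 1], [1, 0]], dtype=complex))
Z = csr_matrix(np.array([[1, 0], [0, -1]], dtype=complex))
Y = 1j * X @ Z            # Y_j = i X_j Z_j, as in the text

def op(L, site_ops):
    """Tensor product with given single-site operators, identities elsewhere."""
    out = None
    for s in range(L):
        o = site_ops.get(s, identity(2, format="csr", dtype=complex))
        out = o if out is None else kron(out, o, format="csr")
    return out

def H(L, v, j, b=0):
    h = csr_matrix((2**L, 2**L), dtype=complex)
    for i in range(L - 1):
        h -= op(L, {i: Z, i + 1: Z})      # ferromagnetic ZZ couplings
    for i in range(L):
        h -= op(L, {i: X})                # transverse field
    h -= b * op(L, {L - 1: Z, 0: Z})      # b = 1: periodic, b = 0: open
    c = np.cosh(2 * v)                    # impurity H_d(v) between j, j+1
    h += (2 * np.sinh(v) ** 2 / c) * (op(L, {j: Z, j + 1: Z}) + op(L, {j: X}))
    h += (np.sinh(2 * v) / c) * op(L, {j: Y, j + 1: Z})
    return h

E0, psi0 = eigsh(H(L=10, v=4.0, j=4, b=0), k=1, which="SA")
print("ground state energy:", E0[0].real)
```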
This leads to a dramatic change in the behavior of the correlation func- tion ⟨Z1Zr⟩as the site index r crosses the defect location. Here, the correlation function is computed with respect to the ground state of the Hamiltonian for a given value of v. For v = 0, the correlation function exhibits the usual power- law decay with the well-known exponent governed by the scaling dimension of the spin field of the Ising CFT. However, for v →∞, the correlation drops abruptly to zero as soon as r crosses the defect location. These properties, which are zero temperature equilibrium characteristics of the model, are difficult to measure in a typical condensed matter experiment. However, the relevant ground states can be created in a quan- tum simulator by the application of a suitable parametrized quantum circuit with both periodic and open boundary condi- tions straightforwardly realizable. Subsequently, the ground state energy and the necessary two-point correlation function can be obtained by performing measurements of one and two- qubit observables. This is described next. To realize the relevant ground state, a parameterized circuit ansatz is used which is a variation of the quantum approxi- mate optimization algorithm operator ansatz [26–30]. Start- ing with all qubits pointing along the | →⟩state, N layers of unitary operators (orange boxes in Fig. 1) are applied. The final state |ψf⟩is given by: |ψf⟩= U NU N−1 · · · U 1, U α = U α ZU α XU α ZZ, U α X = Y j RXj(ζα j ), U α Z = Y j RZj(ϕα j ), U α ZZ = Y j RZjZj+1(θα j ), (3) where α(j) is the layer (site) index and RO(φ) is the unitary rotation by angle φ with generator O. Here, the boundary con- dition of the circuit ansatz is taken to be identical to the bound- ary condition imposed on the target Hamiltonian [Eq. (1)], although this is not essential for the approach to work (see Refs. [31, 32] for more general ans¨atze). Note that the cir- cuit ansatz does not preserve the Z2 symmetry of the Hamilto- nian arising from the conserved operator Q j Xj. The parame- ters, θα j , ϕα j and ζα j are subsequently determined iteratively us- ing the quantum natural gradient (QNG) method [30, 33, 34]. The latter is a sophisticated variation of the conventional gra- dient descent methods like BFGS [35] or ADAM [36], where the optimization is performed by taking into the account the geometry of the manifold of quantum states [37–39]. The cen- tral ingredient is the Fubini-Study metric tensor whose ele- ments are given by gpq = ℜ(Gpq), where Gpq(⃗Θ)= D ∂ψf ∂Θp ∂ψf ∂Θq E − D ∂ψf ∂Θp ψf ED ψf ∂ψf ∂Θq E (4) and ⃗Θ is a vector of all the circuit parameters to be deter- mined. From Eq. (3), the number of such parameters is 3LN and (3L −1)N for b = 1 and b = 0 respectively. Then, the circuit parameters at the (t + 1)th step are given by: ⃗Θt+1 = ⃗Θt −ηg−1 ∂L ∂⃗Θt , (5) where L = ⟨ψf|H|ψf⟩is the relevant cost-function being minimized to obtain the ground state and η is the learning rate. The QNG-based optimization approach often outperforms its competitors [30, 32, 34]. This superior efficacy comes at the price of additional quantum circuits that need to be run on the quantum simulator. This is because a computation of gpq is required at each optimization step in addition to the gra- dient of the cost function [Eq. (5)]. 
In contrast to the existing proposals for the determination of the Fubini-Study metric using a parameter-shift rule [33, 40, 41], here an alternative is proposed for the computation of the gradients and the metric elements needed for the optimization process. In this scheme (Fig. 2), the relevant multi-point correlation functions are computed by applying suitable controlled-unitary operations followed by measurements of the ancilla qubit in the X and Y bases. As shown in Fig. 2, the circuits required to compute g_{pq} are similar in depth to the gradient evaluations, but lead to an increase in the total number of circuits evaluated on the quantum processor. The measurement-based method using the ancilla qubit is verified to agree asymptotically with exact computations as the number of measurement shots is increased (see Supplementary Material, Secs. S1 and S2, for details).

FIG. 2. Circuits for the evaluation of the gradient of the cost function [panel (a)] and the Fubini-Study matrix elements [panels (b), (c)] required for the QNG optimization update [Eq. (5)]. [Panel formulas express ⟨∂ψ_f/∂Θ_p| h_j |ψ_f⟩, ⟨∂ψ_f/∂Θ_p|ψ_f⟩, and ⟨∂ψ_f/∂Θ_p|∂ψ_f/∂Θ_q⟩ as ancilla-assisted overlaps.] (a) For the computation of the gradient ∂L/∂Θ_p, first the portion of the circuit up to the rotation by Θ_p (denoted by U^p_<) is applied to the qubits initialized to |ψ_0⟩. This is followed by a controlled-Õ_p rotation with an ancilla qubit, initialized to |+⟩, as control, and the remaining gates of the circuit (denoted by U^p_>). Here, Õ_p = -iO_p. Finally, a controlled unitary rotation is performed by the jth term of the Hamiltonian, h_j. Averaging over the X-measurements of the ancilla qubit yields the contribution to the gradient from the jth term. The total gradient is the sum of the different such contributions. (b) Computation of the overlaps ⟨∂ψ_f/∂Θ_p|ψ_f⟩. In this case, after the application of U^p_< and the controlled-Õ_p, the average is performed over the Y-basis measurements of the ancilla qubit. (c) Computation of the overlaps ⟨∂ψ_f/∂Θ_p|∂ψ_f/∂Θ_q⟩. After application of U^p_< and controlled-Õ_p, the portion of the circuit up to the rotation by the angle Θ_q is applied (denoted by U^p_> U^q_<), followed by a controlled-iO_q. Averaging over the X-measurement results yields the relevant overlap. See Secs. S1 and S2 of the Supplementary Material for more details.

Next, results are presented for the Ising chain with an impurity at the center of the chain [Fig. 3(a)]. These results were obtained from the 156-qubit ibm_kingston simulator. Due to limited access to the quantum hardware, the circuit parameters required to realize the ground states were determined using classical computers. Subsequently, these circuits were implemented on the quantum hardware to compute the relevant observables. The number of layers required to reach an error in the target energy of < 0.1% was N = L/2. This is compatible with the observations of Ref. [30] for critical spin-chain Hamiltonians like that in Eq. (1). The learning rate was chosen to be η = 0.05. Each of the circuits was compiled into an ISA circuit with a pass manager configured with the highest level of circuit optimization and SABRE routing. After the initial compilation, each circuit was recompiled 20 times to find the circuit with the least number of two-qubit gates. The Dynamical Decoupling sequence XpXm and Twirled Readout Error Extinction were applied to all the results. Zero Noise Extrapolation (ZNE) was applied to all of the results, with the exception of the correlation functions for the open boundary case. The ZNE strategy used a range of noise factors from 1 to 3 in steps of 0.2, the gate-folding back amplifier, and a second-degree polynomial extrapolator.
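A minimal sketch of the extrapolation step just described: expectation values measured at noise factors 1.0 to 3.0 in steps of 0.2 are fit with a second-degree polynomial and evaluated at zero noise. The measured values below are placeholders, not device data.

```python
# Sketch: second-degree polynomial zero-noise extrapolation.
import numpy as np

noise_factors = np.arange(1.0, 3.01, 0.2)            # 1.0, 1.2, ..., 3.0
# Placeholder for <O> measured at each gate-folding noise factor.
measured = 0.90 - 0.05 * noise_factors + 0.01 * noise_factors**2

coeffs = np.polyfit(noise_factors, measured, deg=2)  # quadratic fit
zero_noise_estimate = np.polyval(coeffs, 0.0)        # extrapolate to zero noise
print("ZNE estimate:", zero_noise_estimate)
```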
FIG. 3. (a) Schematic of an open Ising chain with 12 qubits and the impurity between sites j, j+1 with j = 6; the schematic labels the terms Z_iZ_{i+1}, X_i and H_d(v). The topological defect is introduced (removed) by tuning the parameter v [Eq. (1)] to ∞ (0). (b) Ground state energies obtained from measurement of single- and two-qubit correlation functions corresponding to the different terms of the Hamiltonian [Eq. (1)], using ZNE and 5 runs, each with 1024 shots. For comparison, exact results are also shown. (c) Raw measurement data for the correlation function ⟨Z_1Z_r⟩ for a 12-qubit chain with v = 0 and 4, shown using orange circles and blue squares respectively. For comparison, the results computed using exact diagonalization are also shown. In contrast to the v = 0 case, where the correlation function exhibits a power-law decay characteristic of a critical theory, for v = 4 the correlation function drops abruptly to zero as the defect location is traversed. The data was obtained averaging over 10 runs with 8192 shots per run. For panels (b, c), the circuit parameters for the realization of the ground state were computed classically using the QNG optimization method. The so-obtained circuit was then implemented on ibm_kingston, followed by the relevant measurements (see main text for more details).

Ground state energies of panel (b):

  Qubits    v     Exact      Data       % Error
  L = 12    0.0   -14.9260   -15.3089   2.57
            4.0   -14.2572   -14.2195   0.25

Fig. 3(b) shows the ground state energies for an open Ising chain with 12 qubits for v = 0 (no defect) and v = 4 (sufficient to realize the duality defect for the chosen system size), obtained by computing the relevant one- and two-point correlation functions. With the help of the different error mitigation strategies, the energies are obtained to within a few percent of the exact results. Note that even though the noiseless simulations had an error < 0.1%, the noise in the actual quantum device resulted in higher errors in the data. Fig. 3(c) shows the results for the computation of the correlation function ⟨Z_1Z_r⟩. In contrast to the v = 0 case, where ⟨Z_1Z_r⟩ exhibits a power-law decay characteristic of critical theories, the correlation function drops abruptly to zero as r is varied across the defect location. This can be viewed as a 'smoking-gun signature' of the topological symmetry realized by the impurity Hamiltonian. Indeed, the duality defect couples the order and disorder fields of the Ising chain on either side of the defect, which leads to the observed behavior [22].
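For concreteness, a small helper (our illustration, not the paper's code) that estimates ⟨Z_1 Z_r⟩ from raw bitstring counts of the kind returned by the device:

```python
# Sketch: <Z_1 Z_r> from measurement counts (bitstring -> number of shots).
def zz_correlator(counts: dict, r: int) -> float:
    """Estimate <Z_1 Z_r>; qubit 1 is bit 0 after reversing the string,
    following the usual right-to-left bitstring ordering of qiskit results."""
    total = sum(counts.values())
    expval = 0.0
    for bits, n in counts.items():
        s = bits[::-1]
        parity = (int(s[0]) + int(s[r - 1])) % 2
        expval += (1 - 2 * parity) * n / total   # even parity -> +1, odd -> -1
    return expval

# Placeholder counts for an L = 12 chain (real data comes from the device).
counts = {"000000000000": 5000, "111111111111": 3192}
print([round(zz_correlator(counts, r), 3) for r in range(2, 13)])
```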
Topological symmetries in CFTs also leave their imprints in their respective g-functions [42]. The difference between the g-functions can be obtained from the change in the thermodynamic entropy with temperature. The thermodynamic entropy is amenable to numerical and sometimes analytical computations. It is also the quantity most often accessible in experimental settings. However, interchanging the roles of space and time on the torus [Fig. 1(a)], the different g-functions can be obtained directly from the expectation value of the loop operator in the ground state of the periodic Hamiltonian [Fig. 4(a)]. Indeed, the ground state is an eigenstate of the topological symmetry operator, with the relevant eigenvalue being the g-function. Owing to its origin in integrability, the forms of the different topological symmetry operators are known exactly in terms of the lattice spin operators [7, 9, 10]. The topological symmetry operator with a nontrivial g-function for the Ising CFT is the one corresponding to the duality defect [v → ∞ in Eq. (1)]. To evaluate the g-function using the spin-chain model considered here, it is sufficient to consider the operator [43]

    \bar{Y} = (-q)^L\, g_1^{-1} \cdots g_{2L-1}^{-1} + \mathrm{h.c.},    (6)

with q = i e^{iπ/4} and the braid operators g_j given by

    g_{2j-1} = q\, e^{i\pi X_j/4}, \qquad g_{2j} = q\, e^{i\pi Z_j Z_{j+1}/4}.    (7)

FIG. 4. (a) Schematic of the Ising chain with periodic boundary conditions with an impurity between sites j, j+1 [Eq. (1)]. The model reduces to the periodic (duality-twisted) Ising chain as v → 0 (∞). (b) Quantum circuit for the measurement of the topological symmetry operator Ȳ. After preparing the qubits in the ground state of H(v = 0) using circuit parameters obtained from QNG-based optimization, controlled braid operators g_j [Eq. (7)] are applied with an ancilla qubit as the control. Averaging over X-measurements of the ancilla yields the desired expectation value. (c) Comparison of measurement results for |⟨Ȳ⟩| [Eq. (6)] with exact predictions. The orange circles (crosses) denote experimental data (exact results) for the expectation value of Ȳ. (d) Comparison of the ground state energies obtained from the measurement of one- and two-qubit correlation functions with exact results for L = 8, 10, 12. The experimental data is obtained using ZNE and averaging over 5 runs of the experiment on the ibm_kingston simulator, with each run containing 1024 shots. See main text for details regarding the measurement protocol.

Ground state energies of panel (d):

  Qubits    v     Exact      Data       % Error
  L = 8     0.0   -10.252    -10.371    1.16
            4.0   -9.5147    -9.5294    0.15
  L = 10    0.0   -12.7849   -12.8095   0.19
            4.0   -12.0685   -11.9635   0.87
  L = 12    0.0   -15.3226   -15.3211   0.01
            4.0   -14.6198   -14.5047   0.79

The expectation value of Ȳ is obtained by first creating the ground state of the periodic Hamiltonian using the variational approach described earlier and then applying controlled unitary operators [Fig. 4(b)], where an ancilla qubit plays the role of the control qubit. Averaging over measurements of the ancilla qubit in the X-basis yields the desired expectation value. Fig. 4(c) shows the results for |⟨Ȳ⟩| (orange circles) compared with the exact result of √2 (orange crosses) for L = 8, 10, 12. The measurement protocol is identical to that used to obtain the results shown in Fig. 3(b). Note that the relevant topological symmetry operator is realized exactly on the lattice, which leads to good agreement with field theory predictions even for such small system sizes. For benchmarking purposes, ground state energies are also obtained from the measurement of correlation functions corresponding to the different terms of the Hamiltonian [Eq. (1)] for different system sizes. The measurement results, alongside those obtained from exact computations, are shown in Fig. 4(d).
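A hedged numerical cross-check of Eqs. (6, 7), reusing op(), H(), X and Z from the exact-diagonalization sketch given earlier: build Ȳ from the braid generators and evaluate |⟨Ȳ⟩| in the exactly computed ground state of the periodic chain at v = 0; per the text this should approach √2.

```python
# Sketch: evaluate |<Ybar>| in the periodic Ising ground state, Eqs. (6, 7).
# Assumes op(), H(), X, Z from the earlier exact-diagonalization sketch.
import numpy as np
from scipy.sparse import identity
from scipy.sparse.linalg import eigsh, expm

L = 8
q = 1j * np.exp(1j * np.pi / 4)

def braid(k):
    """g_{2j-1} = q exp(i pi X_j/4); g_{2j} = q exp(i pi Z_j Z_{j+1}/4)."""
    if k % 2 == 1:                       # odd index: single-site X generator
        gen = op(L, {(k - 1) // 2: X})
    else:                                # even index: nearest-neighbor ZZ
        j = k // 2 - 1
        gen = op(L, {j: Z, (j + 1) % L: Z})
    return q * expm(1j * np.pi / 4 * gen.tocsc())

_, psi0 = eigsh(H(L=L, v=0.0, j=L // 2 - 1, b=1), k=1, which="SA")
psi0 = psi0[:, 0]

half = identity(2**L, format="csc", dtype=complex) * (-q) ** L
for k in range(1, 2 * L):
    half = half @ braid(k).conj().T.tocsc()     # g_k^{-1} = g_k^dagger

expval = np.vdot(psi0, half @ psi0)
print("|<Ybar>| =", abs(2 * expval.real))       # Ybar = half + h.c.
```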
In summary, this work realizes the eigenstates of Hamiltonians and associated loop operators for topological symmetries in the Ising CFT and performs measurements of relevant observables on IBM's ibm_kingston simulator. The relevant eigenstates are created on the noisy quantum simulator using a hybrid quantum-classical algorithm based on a variational quantum circuit. The parameters of the latter are determined using the quantum natural gradient optimization method. Measurements are performed for observables that capture the signatures of the non-invertible topological symmetry of the Ising CFT. The measurement results are in close agreement with those obtained using classical methods, demonstrating the noise-resilience of the proposed protocol.

In contrast to the transport characteristics probed in typical condensed matter experiments, as shown in this work, hybrid algorithms on current quantum simulators provide access to a wider variety of observables for low-dimensional quantum field theories. The current work can be straightforwardly generalized using the framework developed in Ref. [20] to probe all non-invertible symmetries in minimal models of CFTs in two space-time dimensions [44] and to analyze characteristics along the RG flows connecting the various fixed points [13, 14]. With further advancement of quantum technologies, more exotic quantum field theories, including those realized by non-compact [45] and non-hermitian spin chains [46, 47], could be realized using quantum simulators, opening the door to the investigation of QFTs which lack controlled realization in other experimental setups.

The authors thank David Rogerson and Madhav Sinha for discussions and related collaborations. This research used resources of the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725. AR and RMK were supported by the U.S. Department of Energy, Office of Basic Energy Sciences, under Contract No. DE-SC0012704. The work of H.S. was supported by the French Agence Nationale de la Recherche (ANR) under grant ANR-21-CE40-0003 (project CONFICA).

∗ cdl92@physics.rutgers.edu
† ananda.roy@physics.rutgers.edu

[1] D. Gaiotto, A. Kapustin, N. Seiberg, and B. Willett, Generalized Global Symmetries, JHEP 02, 172, arXiv:1412.5148 [hep-th].
[2] Y. Choi, C. Cordova, P.-S. Hsin, H. T. Lam, and S.-H. Shao, Noninvertible duality defects in 3+1 dimensions, Phys. Rev. D 105, 125016 (2022), arXiv:2111.01139 [hep-th].
[3] J. Kaidi, K. Ohmori, and Y. Zheng, Kramers-Wannier-like duality defects in (3+1)d gauge theories, Phys. Rev. Lett. 128, 111601 (2022).
[4] Y. Choi, H. T. Lam, and S.-H. Shao, Noninvertible Global Symmetries in the Standard Model, Phys. Rev. Lett. 129, 161601 (2022), arXiv:2205.05086 [hep-th].
[5] V. B. Petkova and J. B. Zuber, Generalized twisted partition functions, Phys. Lett. B 504, 157 (2001), arXiv:hep-th/0011021.
[6] J. Fröhlich, J. Fuchs, I. Runkel, and C. Schweigert, Duality and defects in rational conformal field theory, Nucl. Phys. B 763, 354 (2007), arXiv:hep-th/0607247.
[7] D. Aasen, R. S. K. Mong, and P. Fendley, Topological Defects on the Lattice I: The Ising model, J. Phys. A 49, 354001 (2016), arXiv:1601.07185 [cond-mat.stat-mech].
[8] D. Aasen, P. Fendley, and R. S. K. Mong, Topological Defects on the Lattice: Dualities and Degeneracies (2020), arXiv:2008.08598 [cond-mat.stat-mech].
[9] J. Belletête, A. M. Gainutdinov, J. L. Jacobsen, H. Saleur, and T. S. Tavares, Topological defects in periodic RSOS models and anyonic chains (2020), arXiv:2003.11293 [math-ph].
[10] M. Sinha, F. Yan, L. Grans-Samuelsson, A. Roy, and H. Saleur, Lattice realizations of topological defects in the critical (1+1)-d three-state Potts model, JHEP 07, 225, arXiv:2310.19703 [hep-th].
[11] J. Belletête, A. M. Gainutdinov, J. L. Jacobsen, H. Saleur, and T. S. Tavares, Topological defects in lattice models and affine Temperley-Lieb algebra, Communications in Mathematical Physics, 10.1007/s00220-022-04618-0 (2023).
[12] A. Roy and H. Saleur, Topological interfaces of Luttinger liquids, Phys. Rev. B 109, L161107 (2024).
[13] M. Kormos, I. Runkel, and G. M. T. Watts, Defect flows in minimal models, JHEP 11, 057, arXiv:0907.1497 [hep-th].
[14] T. S. Tavares, M. Sinha, L. Grans-Samuelsson, A. Roy, and H. Saleur, Integrable RG Flows on Topological Defect Lines in 2D Conformal Field Theories, arXiv:2408.08241 [hep-th] (2024).
[15] A. Roy and H. Saleur, Entanglement Entropy in the Ising Model with Topological Defects, Phys. Rev. Lett. 128, 090603 (2022), arXiv:2111.04534 [hep-th].
[16] A. W. Ludwig and I. Affleck, Exact conformal-field-theory results on the multi-channel Kondo effect: Asymptotic three-dimensional space- and time-dependent multi-point and many-particle Green's functions, Nuclear Physics B 428, 545 (1994).
[17] C. Bachas and M. Gaberdiel, Loop operators and the Kondo problem, Journal of High Energy Physics 2004, 065 (2004).
[18] W. Koo and H. Saleur, Representations of the Virasoro algebra from lattice models, Nuclear Physics B 426, 459 (1994).
[19] J. Belletête, A. M. Gainutdinov, J. L. Jacobsen, H. Saleur, and T. S. Tavares, Topological defects in lattice models and affine Temperley-Lieb algebra (2020), arXiv:1811.02551 [hep-th].
[20] A. Roy, Variational Quantum Simulation of Anyonic Chains, arXiv:2412.17781 (2024).
[21] G. E. Andrews, R. J. Baxter, and P. J. Forrester, Eight-vertex SOS model and generalized Rogers-Ramanujan-type identities, J. Statist. Phys. 35, 193 (1984).
[22] M. Oshikawa and I. Affleck, Boundary conformal field theory approach to the critical two-dimensional Ising model with a defect line, Nuclear Physics B 495, 533 (1997).
[23] U. Grimm, Spectrum of a duality-twisted Ising quantum chain, J. Phys. A 35, L25 (2002), arXiv:hep-th/0111157.
[24] In fact, this model is closely related to the two-channel Kondo model at the Toulouse point [48].
[25] J. L. Cardy, Operator Content of Two-Dimensional Conformally Invariant Theories, Nucl. Phys. B 270, 186 (1986).
[26] E. Farhi, J. Goldstone, and S. Gutmann, A quantum approximate optimization algorithm (2014), arXiv:1411.4028 [quant-ph].
[27] S. Lloyd, Quantum approximate optimization is computationally universal (2018), arXiv:1812.11075 [quant-ph].
[28] S. Hadfield, Z. Wang, B. O'Gorman, E. Rieffel, D. Venturelli, and R. Biswas, From the quantum approximate optimization algorithm to a quantum alternating operator ansatz, Algorithms 12, 34 (2019).
[29] M. E. S. Morales, J. D. Biamonte, and Z. Zimborás, On the universality of the quantum approximate optimization algorithm, Quantum Information Processing 19, 291 (2020).
[30] A. Roy, S. Erramilli, and R. M. Konik, Efficient quantum circuits based on the quantum natural gradient, Phys. Rev. Res. 6, 043083 (2024).
[31] D. Rogerson and A. Roy, Quantum circuit optimization using differentiable programming of tensor network states, arXiv:2408.12583 [quant-ph] (2024).
[32] A. Roy, R. M. Konik, and D. Rogerson, Universal Euler-Cartan Circuits for Quantum Field Theories, arXiv:2407.21278 (2024).
[33] J. Stokes, J. Izaac, N. Killoran, and G. Carleo, Quantum natural gradient, Quantum 4, 269 (2020).
[34] D. Wierichs, C. Gogolin, and M. Kastoryano, Avoiding local minima in variational quantum eigensolvers with the natural gradient optimizer, Phys. Rev. Res. 2, 043246 (2020).
[35] R. Fletcher, Practical Methods of Optimization (Wiley, 2013).
[36] D. P. Kingma and J. Ba, Adam: A method for stochastic optimization (2017), arXiv:1412.6980 [cs.LG].
[37] P. Zanardi, P. Giorda, and M. Cozzini, Information-theoretic differential geometry of quantum phase transitions, Phys. Rev. Lett. 99, 100603 (2007).
[38] M. Kolodrubetz, V. Gritsev, and A. Polkovnikov, Classifying and measuring geometry of a quantum ground state manifold, Phys. Rev. B 88, 064304 (2013).
[39] M. Kolodrubetz, D. Sels, P. Mehta, and A. Polkovnikov, Geometry and non-adiabatic response in quantum and classical systems, Physics Reports 697, 1 (2017).
[40] A. Mari, T. R. Bromley, and N. Killoran, Estimating the gradient and higher-order derivatives on quantum hardware, Phys. Rev. A 103, 012405 (2021).
[41] D. Wierichs, J. Izaac, C. Wang, and C. Y.-Y. Lin, General parameter-shift rules for quantum gradients, Quantum 6, 677 (2022), arXiv:2107.12390 [quant-ph].
[42] I. Affleck and A. W. W. Ludwig, Universal noninteger "ground-state degeneracy" in critical quantum systems, Phys. Rev. Lett. 67, 161 (1991).
[43] This corresponds to dropping the operator performing translation by half a lattice site from the loop operator defined in Refs. [9, 10]. The dropped operator does not change the analyzed g-function.
[44] M. Sinha, T. S. Tavares, A. Roy, and H. Saleur, Integrability and lattice discretizations of all Topological Defect Lines in minimal CFTs (2025), arXiv:2509.04257 [hep-th].
[45] A. G. Bytsko and J. Teschner, Quantization of models with non-compact quantum group symmetry: Modular XXZ magnet and lattice sinh-Gordon model, Journal of Physics A: Mathematical and General 39, 12927 (2006).
[46] Y. Ikhlef, J. L. Jacobsen, and H. Saleur, Integrable spin chain for the SL(2,R)/U(1) black hole sigma model, Phys. Rev. Lett. 108, 081601 (2012).
[47] V. V. Bazhanov, G. A. Kotousov, S. M. Koval, and S. L. Lukyanov, Scaling limit of the Z2-invariant inhomogeneous six-vertex model, Nucl. Phys. B 965, 115337 (2021), arXiv:2010.10613 [math-ph].
[48] V. J. Emery and S. Kivelson, Mapping of the two-channel Kondo problem to a resonant-level model, Phys. Rev. B 46, 10812 (1992).
Signatures of Topological Symmetries on a Noisy Quantum Simulator Christopher Lamb,1, ∗Robert M. Konik,2 Hubert Saleur,3, 4 and Ananda Roy1, † 1 08854-8019 USA 2Division of Condensed Matter Physics and Material Science, Brookhaven National Laboratory, Upton, NY 11973-5000, USA 3Institut de physique th ́eorique, CEA, CNRS, Universit ́e Paris-Saclay, France 4Physics Department, . Despite their ubiquitous importance across a multitude of disciplines ranging from string theory to condensed matter physics, controlled realizations of models exhibiting these symmetries in physical systems are rare. Quantum simulators based on engineered solid-state devices provide a novel alternative to conventional condensed matter systems for realizing these models. In this work, eigenstates of impurity Hamiltonians and loop operators associated with the topological symmetries for the Ising conformal field theory in two space-time dimensions are realized on IBM's ibm kingston simulator. The relevant states are created on the quantum device using a hybrid quantum-classical algorithm. The latter is based on a variation of the quantum approximate optimization algorithm ansatz combined with the quantum natural gradient optimization method. Signatures of the topological symmetry are captured by measuring correlation functions of different qubit operators with results obtained from the quantum device in reasonable agreement with those obtained from classical computations. The current work demonstrates the viability of noisy quantum simulators as platforms for investigating low-dimensional quantum field theories with direct access to observables that are often difficult to probe in conventional condensed matter experiments. Topological symmetries in quantum field theories are generalizations of global symmetries [1] which do not necessarily obey group-like composition law and can even be noninvertible. The discovery of topological symmetries has shed new light on anomalies and renormalization group (RG) flows in non-abelian gauge theories [2, 3] and the standard model [4]. In contrast to their higher dimensional counterparts, topological symmetries in conformal field theories (CFTs) residing in two space-time dimensions [5, 6] have explicit lattice realizations in terms of anyonic [7-11] and quantum rotor [12] chains. This allows quantitative characterization of entire RG flows [13, 14] and entanglement measures [12, 15]. These quantum field theories not only serve as toy models for their higher dimensional counterparts, but also, in the Hamiltonian picture, realize variations of multi-channel Kondo models [16, 17] relevant for impurity problems in condensed matter physics. Despite their ubiquitous importance, controlled realizations of these symmetries in realistic physical systems are rare. Engineered quantum devices provide an alternative to the established condensed matter setups for probing these models. For 2D CFTs, lattice realizations of the relevant impurity Hamiltonians as well as the associated topological symmetry operators have already been obtained [7-10, 18, 19] alongside a recipe for mapping the latter to qubit registers [20]. Even more important, certain topological symmetries in 2D CFTs are realized exactly in lattice realizations. This removes the need to realize large system-sizes to obtain agreement with field theoretic predictions - a crucial feature allowing investigation of these symmetries using current quantum devices with modest sizes and coherence properties. 
In addition, the lattice models considered here are a part of an integrable family of models [21]. This enables analytical computation of various equilibrium characteristics precious for comparison with experimental results. As such, these models are ideal testbeds for realization of topological symmetries on current engineered quantum systems. In fact, the latter provide direct access to several observables which are difficult to probe in conventional condensed matter experiments. In this work, the ground state of an impurity Hamiltonian corresponding to the non-invertible topological symmetry of the Ising CFT is realized on IBM's superconducting circuitbased ibm kingston simulator. The relevant Hamiltonian is given by [14] H(v) = - L-1 X i=1 ZiZi+1 - L X i=1 Xi -bZLZ1 + Hd(v), (1) where Hd(v)= 2 sinh2(v) cosh(2v) (ZjZj+1+Xj)+ sinh(2v) cosh(2v)YjZj+1. (2) In the above, the operators Xj, Zj denote the Pauli operators at the jth site with Yj = iXjZj. The parameter b is chosen to be either 1 or 0 to switch between a periodic and an open chain (see Fig. 1). The impurity is located between sites j and j + 1 and its strength is parameterized by v. The latter tunes the strength of the impurity with v →∞corresponding to the Ising chain with the non-invertible (Kramers-Wannier) duality defect [22, 23]. The case v = 0 corresponds to the usual Ising chain Hamiltonian with periodic (b = 1) or open (b = 0) boundary conditions. Variation of the 'Kondo screening length', given by lB = e4v [24], relative to the length scales under investigation provides access to the entire RG flow connecting the duality and the identity defects in the Ising CFT [14]. The characteristics of the aforementioned model along the RG trajectory can be obtained by computing the scaling of 16 Oct 2025 2 (a) (c) (b) FIG. 1. (a) A topological symmetry/defect line (red dashed line) in a 2D CFT on a torus. (b) Schematic of the impurity Hamiltonian for the Ising case. The green and blue lines correspond to the ferromagnetic interaction and the transverse field for the qubits (gray circles) respectively. The orange line indicates a variable ferromagnetic coupling parameterized by b with b = 1(0) corresponding to the periodic (open) chain. The red hatched box corresponds to the impurity part of the Hamiltonian, Hd(v), at site j and j + 1 [see Eqs. (1, 2)]. (c) Representation of the variational quantum circuit optimization scheme used to realize the ground states of the Hamiltonian in Eq. (1). After initialization of the qubits in the |→⟩⊗L state, N layers of the unitary operators (orange dashed boxes) are applied. The parameters of the circuit are iteratively optimized using the quantum natural gradient (QNG) optimization method. the ground state energy as a function of the dimensionless parameter L/lB. Using standard arguments [5, 25], the relevant scaling dimension for v →∞(0) is given by 1/16(0) with numerical results available for the intermediate values [14]. In addition, a particularly interesting feature of the duality defect Hamiltonian is that it couples the order fields to the disorder fields on the two sides of the defect [22]. This leads to a dramatic change in the behavior of the correlation function ⟨Z1Zr⟩as the site index r crosses the defect location. Here, the correlation function is computed with respect to the ground state of the Hamiltonian for a given value of v. 
For v = 0, the correlation function exhibits the usual power-law decay with the well-known exponent governed by the scaling dimension of the spin field of the Ising CFT. However, for v → ∞, the correlation drops abruptly to zero as soon as r crosses the defect location. These properties, which are zero-temperature equilibrium characteristics of the model, are difficult to measure in a typical condensed matter experiment. However, the relevant ground states can be created in a quantum simulator by the application of a suitable parametrized quantum circuit, with both periodic and open boundary conditions straightforwardly realizable. Subsequently, the ground state energy and the necessary two-point correlation function can be obtained by performing measurements of one- and two-qubit observables. This is described next.

To realize the relevant ground state, a parameterized circuit ansatz is used which is a variation of the quantum approximate optimization algorithm operator ansatz [26-30]. Starting with all qubits pointing along the |→⟩ state, N layers of unitary operators (orange boxes in Fig. 1) are applied. The final state |ψ_f⟩ is given by:

|ψ_f⟩ = U^N U^{N−1} ··· U^1 |→⟩^{⊗L},  U^α = U^α_Z U^α_X U^α_{ZZ},
U^α_X = Π_j R_{X_j}(ζ^α_j),  U^α_Z = Π_j R_{Z_j}(φ^α_j),  U^α_{ZZ} = Π_j R_{Z_jZ_{j+1}}(θ^α_j),   (3)

where α (j) is the layer (site) index and R_O(φ) is the unitary rotation by angle φ with generator O. Here, the boundary condition of the circuit ansatz is taken to be identical to the boundary condition imposed on the target Hamiltonian [Eq. (1)], although this is not essential for the approach to work (see Refs. [31, 32] for more general ansätze). Note that the circuit ansatz does not preserve the Z_2 symmetry of the Hamiltonian arising from the conserved operator Π_j X_j. The parameters θ^α_j, φ^α_j and ζ^α_j are subsequently determined iteratively using the quantum natural gradient (QNG) method [30, 33, 34]. The latter is a sophisticated variation of conventional gradient descent methods like BFGS [35] or ADAM [36], where the optimization is performed by taking into account the geometry of the manifold of quantum states [37-39]. The central ingredient is the Fubini-Study metric tensor, whose elements are given by g_{pq} = Re(G_{pq}), where

G_{pq}(Θ⃗) = ⟨∂_{Θ_p}ψ_f | ∂_{Θ_q}ψ_f⟩ − ⟨∂_{Θ_p}ψ_f | ψ_f⟩⟨ψ_f | ∂_{Θ_q}ψ_f⟩   (4)

and Θ⃗ is the vector of all the circuit parameters to be determined. From Eq. (3), the number of such parameters is 3LN and (3L−1)N for b = 1 and b = 0 respectively. Then, the circuit parameters at the (t+1)th step are given by:

Θ⃗_{t+1} = Θ⃗_t − η g^{−1} ∂L/∂Θ⃗_t,   (5)

where L = ⟨ψ_f|H|ψ_f⟩ is the relevant cost function being minimized to obtain the ground state and η is the learning rate. The QNG-based optimization approach often outperforms its competitors [30, 32, 34]. This superior efficacy comes at the price of additional quantum circuits that need to be run on the quantum simulator. This is because a computation of g_{pq} is required at each optimization step in addition to the gradient of the cost function [Eq. (5)]. In contrast to the existing proposals for the determination of the Fubini-Study metric using a parameter-shift rule [33, 40, 41], here an alternative is proposed for the computation of the gradients and the metric elements needed for the optimization process. In this scheme (Fig. 2), the relevant multi-point correlation functions are computed by applying suitable controlled-unitary operations followed by measurements of the ancilla qubit in the X and Y bases.
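The overlaps entering Eqs. (4)-(5) are real and imaginary parts of matrix elements of unitaries, and each can be read off from the X- (or Y-) statistics of a single ancilla, as in Fig. 2. The following statevector sketch of such an ancilla (Hadamard-test) estimator is our own illustration and not the paper's code; the function name and the shot-sampling model are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def hadamard_test(U, psi, shots=100_000, imag=False):
    """Shot-based estimate of Re<psi|U|psi> (Im if imag=True) via one ancilla.

    The ancilla starts in |+>; a controlled-U acts on the system; measuring the
    ancilla in the X basis gives <X> = Re<psi|U|psi> (Y basis for the Im part).
    """
    branch0 = psi / np.sqrt(2)              # ancilla |0>: system untouched
    branch1 = (U @ psi) / np.sqrt(2)        # ancilla |1>: U applied
    if imag:
        branch1 = -1j * branch1             # S^dagger on ancilla -> Y readout
    p_plus = np.linalg.norm(branch0 + branch1)**2 / 2
    hits = rng.binomial(shots, np.clip(p_plus, 0.0, 1.0))
    return 2 * hits / shots - 1             # empirical <X> of the ancilla

# demo on a random 3-qubit state and unitary
d = 2**3
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)
U, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
est = hadamard_test(U, psi) + 1j * hadamard_test(U, psi, imag=True)
print(est, psi.conj() @ U @ psi)            # shot estimate vs exact element
```

As the number of shots grows, the estimate converges to the exact matrix element, which is the asymptotic agreement referenced in the text below.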
FIG. 2. Ancilla-based measurement scheme for the quantities ⟨∂_{Θ_p}ψ_f| h_j |ψ_f⟩, ⟨∂_{Θ_p}ψ_f|ψ_f⟩ and ⟨∂_{Θ_p}ψ_f|∂_{Θ_q}ψ_f⟩, each rewritten as an overlap of the form ⟨ψ_0| · |ψ_f⟩ involving the layer unitaries; here Õ_p = −iO_p. (a) Computation of the gradients: after the partial circuit up to the parameter Θ_p, a controlled-Õ_p is applied with the ancilla as control; finally, a controlled unitary rotation is performed by the jth term of the Hamiltonian, h_j. Averaging over the X-measurements of the ancilla qubit yields the contribution to the gradient from the jth term; the total gradient is the sum of the different such contributions. (b) Computation of the overlaps ⟨∂_{Θ_p}ψ_f|ψ_f⟩. In this case, after the application of U^p …, a controlled-(−iO_q) is applied. Averaging over the X-measurement results yields the relevant overlap. See Secs. S1 and S2 of the Supplementary Material for more details.

As shown in Fig. 2, the circuits required to compute g_{pq} are similar in depth to those for the gradient evaluations, but lead to an increase in the total number of circuits evaluated on the quantum processor. The measurement-based method using the ancilla qubit is verified to asymptotically agree with exact computations as the number of measurement shots is increased (see Supplementary Material, Secs. S1 and S2, for details).

Next, results are presented for the Ising chain with an impurity at the center of the chain [Fig. 3(a)]. These results were obtained from the 156-qubit ibm_kingston simulator. Due to limited access to the quantum hardware, the circuit parameters required to realize the ground states were determined using classical computers. Subsequently, these circuits were implemented on the quantum hardware to compute the relevant observables.

FIG. 3. (a) Schematic of an open Ising chain with 12 qubits and the impurity between sites j, j+1 with j = 6; the bond terms Z_iZ_{i+1}, the transverse fields X_i and the impurity term H_d(v) are indicated. The topological defect is introduced (removed) by tuning the parameter v [Eq. (1)] to ∞ (0). (b) Ground state energies obtained from measurement of single- and two-qubit correlation functions corresponding to the different terms of the Hamiltonian [Eq. (1)] using ZNE and 5 runs, each with 1024 shots. For comparison, exact results are also shown:

Qubits   v    Exact      Data       % Error
L = 12   0.0  −14.9260   −15.3089   2.57
         4.0  −14.2572   −14.2195   0.25

(c) Raw measurement data for the correlation function ⟨Z_1Z_r⟩ for a 12-qubit chain with v = 0 and 4, shown using orange circles and blue squares respectively. For comparison, the results computed using exact diagonalization are also shown. In contrast to the v = 0 case, where the correlation function exhibits a power-law decay characteristic of a critical theory, for v = 4 the correlation function drops abruptly to zero as the defect location is traversed. The data was obtained by averaging over 10 runs with 8192 shots per run. For panels (b, c), the circuit parameters for the realization of the ground state were computed classically using the QNG optimization method. The so-obtained circuit was then implemented on ibm_kingston, followed by the relevant measurements (see main text for more details).

The number of layers required to reach an error in target energy of < 0.1% was N = L/2. This is compatible with the observations of Ref. [30] for critical spin chain Hamiltonians like that in Eq. (1). The learning rate was chosen to be η = 0.05. Each of the circuits was compiled into an ISA circuit with a pass manager configured with the highest level of circuit optimization and SABRE routing. After the initial compilation, each circuit was recompiled 20 times to find the circuit with the least number of two-qubit gates. The Dynamical Decoupling sequence XpXm and Twirled Readout Error Extinction were applied to all the results. Zero Noise Extrapolation (ZNE) was applied to all of the results, with the exception of the correlation functions for the open boundary case. The ZNE strategy used a range of noise factors from 1 to 3 in steps of 0.2, the gate-folding noise amplifier, and a second-degree polynomial extrapolator.
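The extrapolation step just described reduces to a polynomial fit in the noise-amplification factor. A minimal sketch of ours with mock data (the drift model and all numbers are invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
factors = np.arange(1.0, 3.01, 0.2)        # noise amplification, as in the text

# Mock 'measured' energies: true value plus a smooth noise-induced drift and
# shot noise. A real run would obtain these from gate-folded circuits.
E_true = -14.93
E_meas = (E_true + 0.80*factors + 0.06*factors**2
          + rng.normal(0, 0.02, factors.size))

fit = np.polyfit(factors, E_meas, deg=2)   # second-degree polynomial extrapolator
print(np.polyval(fit, 0.0), "vs true", E_true)   # evaluate the fit at zero noise
```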
Fig. 3(b) shows the ground state energies for an open Ising chain with 12 qubits for v = 0 (no defect) and v = 4 (sufficient to realize the duality defect for the chosen system size), obtained by computing the relevant one- and two-point correlation functions. With the help of the different error mitigation strategies, the energies are obtained to within a few percent of the exact results. Note that even though the noiseless simulations had an error < 0.1%, the noise in the actual quantum device resulted in higher errors in the data. Fig. 3(c) shows the results for the computation of the correlation function ⟨Z_1Z_r⟩. In contrast to the v = 0 case, where ⟨Z_1Z_r⟩ exhibits a power-law decay characteristic of critical theories, the correlation function drops abruptly to zero as r is varied across the defect location. This can be viewed as a 'smoking-gun signature' of the topological symmetry realized by the impurity Hamiltonian. Indeed, the duality defect couples the order and disorder fields of the Ising chain on either side of the defect, which leads to the observed behavior [22].

FIG. 4. (a) Schematic of the Ising chain with periodic boundary conditions with an impurity between sites j, j+1 [Eq. (1)]. The model reduces to the periodic (duality-twisted) Ising chain as v → 0 (∞). (b) Quantum circuit for the measurement of the topological symmetry operator Ȳ. After preparing the qubits in the ground state of H(v = 0) using circuit parameters obtained from QNG-based optimization, controlled braid-operators g_j [Eq. (7)] are applied with an ancilla qubit as the control. Averaging over X-measurements of the ancilla yields the desired expectation value. (c) Comparison of measurement results for |⟨Ȳ⟩| [Eq. (6)] with exact predictions. The orange circles (crosses) denote experimental data (exact results) for the expectation value of Ȳ. (d) Comparison of the ground state energies obtained from the measurement of one- and two-qubit correlation functions with exact results for L = 8, 10, 12:

Qubits   v    Exact      Data       % Error
L = 8    0.0  −10.2520   −10.3710   1.16
         4.0   −9.5147    −9.5294   0.15
L = 10   0.0  −12.7849   −12.8095   0.19
         4.0  −12.0685   −11.9635   0.87
L = 12   0.0  −15.3226   −15.3211   0.01
         4.0  −14.6198   −14.5047   0.79

The experimental data is obtained using ZNE and averaging over 5 runs of the experiment on the ibm_kingston simulator, with each run containing 1024 shots. See main text for details regarding the measurement protocol.

Topological symmetries in CFTs also leave their imprints in their respective g-functions [42]. The difference between the g-functions can be obtained from the change in the thermodynamic entropy with temperature. The thermodynamic entropy is amenable to numerical and sometimes analytical computations. It is also the quantity often accessible in experimental settings. However, interchanging the role of space and time on the torus [Fig. 1(a)], the different g-functions can directly be obtained from the expectation value of the loop operator in the ground state of the periodic Hamiltonian [Fig. 4(a)].
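This loop-operator route can be cross-checked classically for small L using the lattice form of the duality operator given in Eqs. (6)-(7) below. The following numpy sketch is our own illustration, not the paper's code: the 0-based labelling conventions are assumptions, and the printed value should be compared against the CFT g-function value √2 quoted below.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op(L, ops):
    return reduce(np.kron, [ops.get(i, I2) for i in range(L)])

def braid_inv(L, k, q):
    """Inverse braid generator for lattice label k = 1, ..., 2L-1 [Eq. (7)].

    Each generator exponent G squares to the identity, so
    e^{-i pi G/4} = (I - i G)/sqrt(2). Odd labels 2j-1 carry X_j;
    even labels 2j carry Z_j Z_{j+1} (periodic chain)."""
    j = (k - 1) // 2                       # 0-based site index
    G = op(L, {j: X}) if k % 2 == 1 else op(L, {j: Z, (j + 1) % L: Z})
    return (np.eye(2**L) - 1j * G) / (np.sqrt(2) * q)

L = 8
q = 1j * np.exp(1j * np.pi / 4)
prod = reduce(lambda A, B: A @ B, [braid_inv(L, k, q) for k in range(1, 2*L)])
Ybar = (-q)**L * prod
Ybar = Ybar + Ybar.conj().T                # + h.c., Eq. (6)

# ground state of the critical periodic Ising chain, H(v = 0) with b = 1
H = -sum(op(L, {i: Z, (i + 1) % L: Z}) + op(L, {i: X}) for i in range(L))
w, P = np.linalg.eigh(H)
gs = P[:, 0]
print(abs(gs.conj() @ Ybar @ gs))          # compare with the g-function sqrt(2)
```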
Indeed, the ground state is an eigenstate of the topological symmetry operator, with the relevant eigenvalue being the g-function. Owing to its origin in integrability, the forms of the different topological symmetry operators are known exactly in terms of the lattice spin operators [7, 9, 10]. The topological symmetry operator with a nontrivial g-function for the Ising CFT is the one corresponding to the duality defect [v → ∞ in Eq. (1)]. To evaluate the g-function using the spin chain model considered here, it is sufficient to consider the operator [43]

Ȳ = (−q)^L g_1^{−1} ··· g_{2L−1}^{−1} + h.c.,   (6)

with q = i e^{iπ/4} and the braid operators g_j given by

g_{2j−1} = q e^{iπX_j/4},  g_{2j} = q e^{iπZ_jZ_{j+1}/4}.   (7)

The expectation value of Ȳ is obtained by first creating the ground state of the periodic Hamiltonian using the variational approach described earlier and then applying controlled unitary operators [Fig. 4(b)], where an ancilla qubit plays the role of the control qubit. Averaging over measurements of the ancilla qubit in the X-basis yields the desired expectation value. Fig. 4(c) shows the results for |⟨Ȳ⟩| (orange circles), compared with the exact result √2 (orange crosses) for L = 8, 10, 12. The measurement protocol is identical to that used to obtain the results shown in Fig. 3(b). Note that the relevant topological symmetry operator is realized exactly on the lattice, which leads to good agreement with field theory predictions even for such small system sizes. For benchmarking purposes, ground state energies are also obtained from the measurement of correlation functions corresponding to the different terms of the Hamiltonian [Eq. (1)] for different system sizes. The measurement results, alongside those obtained from exact computations, are shown in Fig. 4(d).

In summary, this work realizes the eigenstates of Hamiltonians and associated loop operators for topological symmetries in the Ising CFT and performs measurements of relevant observables on IBM's ibm_kingston simulator. The relevant eigenstates are created on the noisy quantum simulator using a hybrid quantum-classical algorithm based on a variational quantum circuit. The parameters of the latter are determined using the quantum natural gradient optimization method. Measurements are performed for observables that capture the signatures of the non-invertible topological symmetry of the Ising CFT. The measurement results are in close agreement with those obtained using classical methods, demonstrating the noise-resilience of the proposed protocol. In contrast to transport characteristics probed in typical condensed matter experiments, as shown in this work, hybrid algorithms on current quantum simulators provide access to a wider variety of observables for low-dimensional quantum field theories. The current work can be straightforwardly generalized using the framework developed in Ref. [20] to probe all non-invertible symmetries in minimal models of CFTs in two space-time dimensions [44] and to analyze characteristics along the RG flows connecting the various fixed points [13, 14]. With further advancement of quantum technologies, more exotic quantum field theories, including those realized by non-compact [45] and non-hermitian spin chains [46, 47], could be realized using quantum simulators, opening the door to the investigation of QFTs which lack controlled realization in other experimental setups.

The authors thank David Rogerson and Madhav Sinha for discussions and related collaborations.
This research used resources of the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725. AR and RMK were supported by the U.S. Department of Energy, Office of Basic Energy Sciences, under Contract No. DE-SC0012704. The work of H.S. was supported by the French Agence Nationale de la Recherche (ANR) under grant ANR-21-CE40-0003 (project CONFICA).

[1] D. Gaiotto, A. Kapustin, N. Seiberg, and B. Willett, Generalized global symmetries, JHEP 02, 172 (2015).
[2] Y. Choi, C. Cordova, P.-S. Hsin, H. T. Lam, and S.-H. Shao, Noninvertible duality defects in 3+1 dimensions, Phys. Rev. D 105, 125016 (2022).
[3] J. Kaidi, K. Ohmori, and Y. Zheng, Kramers-Wannier-like duality defects in (3+1)d gauge theories, Phys. Rev. Lett. 128, 111601 (2022).
[4] Y. Choi, H. T. Lam, and S.-H. Shao, Noninvertible global symmetries in the Standard Model, Phys. Rev. Lett. 129, 161601 (2022).
[5] V. B. Petkova and J. B. Zuber, Generalized twisted partition functions, Phys. Lett. B 504, 157 (2001), arXiv:hep-th/0011021.
[6] J. Frohlich, J. Fuchs, I. Runkel, and C. Schweigert, Duality and defects in rational conformal field theory, Nucl. Phys. B 763, 354 (2007), arXiv:hep-th/0607247.
[7] D. Aasen, R. S. K. Mong, and P. Fendley, Topological defects on the lattice I: The Ising model, J. Phys. A 49, 354001 (2016).
[8] D. Aasen, P. Fendley, and R. S. K. Mong, Topological defects on the lattice: Dualities and degeneracies (2020).
[9] J. Belletête, A. M. Gainutdinov, J. L. Jacobsen, H. Saleur, and T. S. Tavares, Topological defects in periodic RSOS models and anyonic chains (2020).
[10] M. Sinha, F. Yan, L. Grans-Samuelsson, A. Roy, and H. Saleur, Lattice realizations of topological defects in the critical (1+1)-d three-state Potts model, JHEP 07, 225 (2024).
[11] J. Belletête, A. M. Gainutdinov, J. L. Jacobsen, H. Saleur, and T. S. Tavares, Topological defects in lattice models and affine Temperley-Lieb algebra, Commun. Math. Phys. (2023), doi:10.1007/s00220-022-04618-0.
[12] A. Roy and H. Saleur, Topological interfaces of Luttinger liquids, Phys. Rev. B 109, L161107 (2024).
[13] M. Kormos, I. Runkel, and G. M. T. Watts, Defect flows in minimal models, JHEP 11, 057 (2009).
[14] T. S. Tavares, M. Sinha, L. Grans-Samuelsson, A. Roy, and H. Saleur, Integrable RG flows on topological defect lines in 2D conformal field theories (2024).
[15] A. Roy and H. Saleur, Entanglement entropy in the Ising model with topological defects, Phys. Rev. Lett. 128, 090603 (2022).
[16] A. W. Ludwig and I. Affleck, Exact conformal-field-theory results on the multi-channel Kondo effect: Asymptotic three-dimensional space- and time-dependent multi-point and many-particle Green's functions, Nucl. Phys. B 428, 545 (1994).
[17] C. Bachas and M. Gaberdiel, Loop operators and the Kondo problem, J. High Energy Phys. 2004, 065 (2004).
[18] W. Koo and H. Saleur, Representations of the Virasoro algebra from lattice models, Nucl. Phys. B 426, 459 (1994).
[19] J. Belletête, A. M. Gainutdinov, J. L. Jacobsen, H. Saleur, and T. S. Tavares, Topological defects in lattice models and affine Temperley-Lieb algebra (2020).
[20] A. Roy, Variational quantum simulation of anyonic chains (2024).
[21] G. E. Andrews, R. J. Baxter, and P. J. Forrester, Eight-vertex SOS model and generalized Rogers-Ramanujan-type identities, J. Stat. Phys. 35, 193 (1984).
[22] M. Oshikawa and I. Affleck, Boundary conformal field theory approach to the critical two-dimensional Ising model with a defect line, Nucl. Phys. B 495, 533 (1997).
[23] U. Grimm, Spectrum of a duality-twisted Ising quantum chain, J. Phys. A 35, L25 (2002), arXiv:hep-th/0111157.
[24] In fact, this model is closely related to the two-channel Kondo model at the Toulouse point [48].
[25] J. L. Cardy, Operator content of two-dimensional conformally invariant theories, Nucl. Phys. B 270, 186 (1986).
[26] E. Farhi, J. Goldstone, and S. Gutmann, A quantum approximate optimization algorithm (2014).
[27] S. Lloyd, Quantum approximate optimization is computationally universal (2018).
[28] S. Hadfield, Z. Wang, B. O'Gorman, E. Rieffel, D. Venturelli, and R. Biswas, From the quantum approximate optimization algorithm to a quantum alternating operator ansatz, Algorithms 12, 34 (2019).
[29] M. E. S. Morales, J. D. Biamonte, and Z. Zimborás, On the universality of the quantum approximate optimization algorithm, Quantum Inf. Process. 19, 291 (2020).
[30] A. Roy, S. Erramilli, and R. M. Konik, Efficient quantum circuits based on the quantum natural gradient, Phys. Rev. Res. 6, 043083 (2024).
[31] D. Rogerson and A. Roy, Quantum circuit optimization using differentiable programming of tensor network states (2024).
[32] A. Roy, R. M. Konik, and D. Rogerson, Universal Euler-Cartan circuits for quantum field theories (2024).
[33] J. Stokes, J. Izaac, N. Killoran, and G. Carleo, Quantum natural gradient, Quantum 4, 269 (2020).
[34] D. Wierichs, C. Gogolin, and M. Kastoryano, Avoiding local minima in variational quantum eigensolvers with the natural gradient optimizer, Phys. Rev. Res. 2, 043246 (2020).
[35] R. Fletcher, Practical Methods of Optimization (Wiley, 2013).
[36] D. P. Kingma and J. Ba, Adam: A method for stochastic optimization (2017).
[37] P. Zanardi, P. Giorda, and M. Cozzini, Information-theoretic differential geometry of quantum phase transitions, Phys. Rev. Lett. 99, 100603 (2007).
[38] M. Kolodrubetz, V. Gritsev, and A. Polkovnikov, Classifying and measuring geometry of a quantum ground state manifold, Phys. Rev. B 88, 064304 (2013).
[39] M. Kolodrubetz, D. Sels, P. Mehta, and A. Polkovnikov, Geometry and non-adiabatic response in quantum and classical systems, Phys. Rep. 697, 1 (2017).
[40] A. Mari, T. R. Bromley, and N. Killoran, Estimating the gradient and higher-order derivatives on quantum hardware, Phys. Rev. A 103, 012405 (2021).
[41] D. Wierichs, J. Izaac, C. Wang, and C. Y.-Y. Lin, General parameter-shift rules for quantum gradients, Quantum 6, 677 (2022).
[42] I. Affleck and A. W. W. Ludwig, Universal noninteger "ground-state degeneracy" in critical quantum systems, Phys. Rev. Lett. 67, 161 (1991).
[43] This corresponds to dropping the operator performing translation by half a lattice site from the loop operator defined in Refs. [9, 10]. The dropped operator does not change the analyzed g-function.
[44] M. Sinha, T. S. Tavares, A. Roy, and H. Saleur, Integrability and lattice discretizations of all topological defect lines in minimal CFTs (2025).
[45] A. G. Bytsko and J. Teschner, Quantization of models with non-compact quantum group symmetry: Modular XXZ magnet and lattice sinh-Gordon model, J. Phys. A: Math. Gen. 39, 12927 (2006).
[46] Y. Ikhlef, J. L. Jacobsen, and H. Saleur, Integrable spin chain for the SL(2,R)/U(1) black hole sigma model, Phys. Rev. Lett. 108, 081601 (2012).
[47] V. V. Bazhanov, G. A. Kotousov, S. M. Koval, and S. L. Lukyanov, Scaling limit of the Z2-invariant inhomogeneous six-vertex model, Nucl. Phys. B 965, 115337 (2021).
[48] V. J. Emery and S. Kivelson, Mapping of the two-channel Kondo problem to a resonant-level model, Phys. Rev. B 46, 10812 (1992).
STABLE TYPE I BLOW-UP FOR THE ONE-DIMENSIONAL WAVE EQUATION WITH TIME-DERIVATIVE NONLINEARITY

OLIVER GOUGH

Abstract. We study finite-time blow-up for the one-dimensional nonlinear wave equation with a quadratic time-derivative nonlinearity,

u_tt − u_xx = (u_t)^2,  (x, t) ∈ R × [0, T).

Building on the work of Ghoul, Liu, and Masmoudi [16] on the spatial-derivative analogue, we establish the non-existence of smooth, exact self-similar blow-up profiles. Instead we construct an explicit family of generalised self-similar solutions, bifurcating from the ODE blow-up, that are smooth within the past light cone and exhibit type-I blow-up at a prescribed point (x0, T). We further prove asymptotic stability of these profiles under small perturbations in the energy topology. In particular, these profiles verify that the spatially homogeneous ODE blow-up is not asymptotically stable.

1. Introduction

We consider the Cauchy problem for the nonlinear wave equation with quadratic time-derivative nonlinearity in one space dimension,

(1.1)  u_tt − u_xx = (u_t)^2,  (x, t) ∈ R × [0, T),

for an unknown function u : R × [0, T) → R. This equation admits the scaling and translation invariances

u(x, t) ↦ u((x − x0)/λ, (t − t0)/λ) + κ,  x0, t0, κ ∈ R,  λ > 0,

which will play a central role in the construction of self-similar blow-up solutions. Local well-posedness in H^{k+1}(R) × H^k(R), finite speed of propagation, and persistence of regularity follow from standard energy methods for semilinear wave equations; see, e.g., [41, 15].

Main results. As in the power-nonlinearity case, blow-up already occurs for spatially homogeneous data. If u = u(t), then (1.1) reduces to u_tt = (u_t)^2. Writing v := u_t gives v_t = v^2, hence

v(t) = 1/(T − t)  and  u(t) = −log(T − t) + κ,

which exhibits the Type I rate. Here we loosely define Type I blow-up as blow-up occurring at a rate comparable to that of the ODE blow-up profile. By time-translation and addition of constants, this yields the two-parameter ODE-type family

u_{T,κ}(t) = −log(1 − t/T) + κ,  ∂_t u_{T,κ}(t) = 1/(T − t),

blowing up at time T > 0 uniformly in space (a spatial translation x ↦ x − x0 does not change these homogeneous profiles).

Moreover, by a standard cutoff-and-finite-speed-of-propagation argument one can prescribe the blow-up point and make the initial data smooth and compactly supported, in particular in H^{k+1}(R) × H^k(R). Fix T > 0 and x0 ∈ R, choose R > T, and let χ ∈ C^∞_c(R) satisfy χ ≡ 1 on B_R(x0). For the Cauchy problem (1.1) with initial data

(u, ∂_t u)|_{t=0} = (κ χ(x), T^{−1} χ(x)),

finite speed of propagation implies that for all (x, t) in the past light cone

Γ(x0, T) := {(x, t) ∈ R × [0, T) : |x − x0| ≤ T − t}

the solution coincides with the spatially homogeneous ODE profile:

u(x, t) = −log(1 − t/T) + κ.

Thus the solution blows up at (x0, T) while the initial data are compactly supported. As in this example, it suffices to analyse the dynamics inside Γ(x0, T).

Unlike power-type semilinear wave equations, (1.1) is not invariant under Lorentz transformations. In particular, Lorentz boosts

(x, t) ↦ (γ(x − vt), γ(t − vx)),  γ = 1/√(1 − v²),  |v| < 1,

do not generate new solutions from ODE blow-up profiles. Nevertheless, following Ghoul-Liu-Masmoudi [16], one can still construct solutions to (1.1) that are effectively Lorentz-boosted ODE blow-up inside Γ(x0, T).

We first search for exactly self-similar solutions. Guided by scaling and translation symmetries, consider

u(x, t) = U((x − x0)/(T − t)) =: U(y),  y := (x − x0)/(T − t).

A direct computation yields the similarity ODE

(1.2)  (y² − 1) U″(y) + 2y U′(y) = y² (U′(y))²,

which U must satisfy.
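This reduction is easy to confirm symbolically. The following sympy sketch is our own check, not part of the paper: it substitutes the self-similar ansatz into (1.1), rescales by (T − t)², and evaluates along the ray x = x0 + w(T − t), so that the similarity variable becomes the symbol w.

```python
import sympy as sp

x, t, T, x0, w = sp.symbols('x t T x0 w', real=True)
U = sp.Function('U')

u = U((x - x0)/(T - t))                     # self-similar ansatz u = U(y)
residual = sp.diff(u, t, 2) - sp.diff(u, x, 2) - sp.diff(u, t)**2
expr = (T - t)**2 * residual
expr = expr.subs(x, x0 + w*(T - t)).doit()  # evaluate on the ray y = w
print(sp.simplify(expr))
# expected, up to reordering: (w**2 - 1)*U''(w) + 2*w*U'(w) - w**2*U'(w)**2
```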
The first result is that, apart from the symmetry-induced constant profiles U ≡ κ, there are no nontrivial, smooth, exact self-similar solutions of (1.2) on {|y| ≤ 1}.

We now introduce the temporal similarity variable

τ := −log(1 − t/T) = −log(T − t) + log T,

and observe that τ coincides, up to constants, with the ODE blow-up profile. Together with the spatial similarity variable y := (x − x0)/(T − t), the map (x, t) ↦ (y, τ) is a smooth diffeomorphism from the interior of Γ(x0, T) onto (−1, 1) × [0, ∞). Exact self-similarity corresponds to stationarity in these variables, which is ruled out below. Nevertheless, setting

(1.3)  u(x, t) = p τ + Ũ(y),  p ∈ R,

does produce explicit smooth solutions in the cone.

Theorem 1.1 (Non-existence of exact smooth self-similar blow-up solutions).
1) For any T > 0 and x0 ∈ R, there are no nontrivial smooth exact self-similar blow-up solutions to (1.1) in the past light cone Γ(x0, T) := {(x, t) ∈ R × [0, T) : |x − x0| ≤ T − t} ≡ {|y| ≤ 1}. Equivalently, apart from the symmetry-induced constant profiles Ũ ≡ κ, no smooth profile of the form u(x, t) = Ũ(y) exists in {|y| ≤ 1}.
2) There exists a five-parameter family of smooth generalised self-similar blow-up solutions to (1.1) with logarithmic growth,

u_{p,q,κ,T,x0}(x, t) = −p log(1 − t/T) − p log(1 + q √(1 − p) (x − x0)/(T − t)) + κ,

smooth in the past light cone Γ(x0, T), for all T > 0, κ, x0 ∈ R, q ∈ {±1}, and p ∈ (0, 1]. In terms of the similarity variables, these solutions read

u_{p,q,κ,T,x0}(x, t) = U_{p,q,κ}(τ, y) = p τ + Ũ_{p,q,κ}(y),  Ũ_{p,q,κ}(y) = −p log(1 + q √(1 − p) y) + κ.

In particular, u_{1,q,κ,T,x0}(x, t) = −log(1 − t/T) + κ recovers the spatially homogeneous ODE-type blow-up, and the two signs are related by spatial reflection,

u_{p,−1,κ,T,x0}(x, t) = u_{p,1,κ,T,−x0}(−x, t).

Remark 1.1 (Lorentz boost viewpoint). Under the boost t′ = (t − γx)/√(1 − γ²), x′ = (x − γt)/√(1 − γ²) with γ ∈ (−1, 1), taking the same spatially homogeneous ODE blow-up profile in the primed frame and mapping back yields

u(x, t) = −(1 − γ²) log(T − t − γ(x − x0)) + c2.

Moreover, since γ ∈ (−1, 1), the above is equivalent to

u(x, t) = −(1 − γ²) log(T − t + q|γ|(x − x0)) + c2

for q ∈ {−1, 1}. A simple relabelling p = 1 − γ² ∈ (0, 1] recovers the solution

u_{p,q,κ,x0,T}(x, t) = −p log(T − t + q √(1 − p) (x − x0)) + κ.

Equivalently, the travelling-wave ansatz u(x, t) = F(x − ct) reduces (1.1) to the Riccati-ODE mechanism (c² − 1)F″ = c²(F′)², giving F(ξ) = −(1 − c²) log(A + Bξ) + κ. Applying translational and scaling symmetries, we choose A, B so that A + B(x − ct) = T − t − c(x − x0) reproduces the same family. Blow-up occurs along T − t − c(x − x0) = 0. See Section 2.3.

Remark 1.2 (Values of p). For p < 0, the functions u_{p,q,κ,T,x0} still solve (1.1), but the argument of the logarithm vanishes at y = ±1/√(1 − p), which lies strictly inside the cone |y| < 1. Thus a singular point appears inside Γ(x0, T), and by persistence of regularity these profiles cannot arise from smooth initial data; we thus exclude them from the stability analysis.
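The family in Theorem 1.1(2) can be verified directly. A minimal sympy check of ours (with q = +1; the q = −1 case follows by the spatial reflection stated above):

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
T, x0, kappa = sp.symbols('T x0 kappa', positive=True)
p = sp.symbols('p', positive=True)        # think of p in (0, 1]

u = (-p*sp.log(1 - t/T)
     - p*sp.log(1 + sp.sqrt(1 - p)*(x - x0)/(T - t)) + kappa)

residual = sp.diff(u, t, 2) - sp.diff(u, x, 2) - sp.diff(u, t)**2
print(sp.simplify(residual))              # expected: 0
```

The cancellation uses exactly 1 − q²(1 − p) = p, which is where the constraint q ∈ {±1} enters.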
We now present the main theorem of asymptotic stability for these generalised self-similar blow-up solutions.

Theorem 1.2 (Asymptotic stability of generalised self-similar solutions). Let p0 ∈ (0, 1), T0 > 0, κ0, x0 ∈ R, and k ≥ 4. There exists ω0 ∈ (0, 1/2) such that for any δ ∈ (0, ω0) there exists ε > 0 with the following property. For any real-valued (f, g) ∈ H^{k+1}(R) × H^k(R) with

‖(f, g)‖_{H^{k+1}(R)×H^k(R)} ≤ ε,

there exist parameters p* ∈ (0, 1), T* > 0, κ* ∈ R and a unique solution u : Γ(x0, T*) → R to (1.1) with initial data

u(x, 0) = u_{p0,1,κ0,x0,T0}(x, 0) + f(x),  ∂_t u(x, 0) = ∂_t u_{p0,1,κ0,x0,T0}(x, 0) + g(x)  (x ∈ B_{T*}(x0)).

Then for all t ∈ [0, T*),

(T* − t)^{−1/2+s} ‖u(·, t) − u_{p*,1,κ*,x0,T*}(·, t)‖_{Ḣ^s(B_{T*−t}(x0))} ≲ (T* − t)^{ω0−δ},  s = 0, 1, …, k+1,
(T* − t)^{−1/2+s} ‖∂_t u(·, t) − ∂_t u_{p*,1,κ*,x0,T*}(·, t)‖_{Ḣ^{s−1}(B_{T*−t}(x0))} ≲ (T* − t)^{ω0−δ},  s = 1, …, k,

and the parameters obey

|p0 − p*| + |κ0 − κ*| + |1 − T0/T*| ≲ ε.

Remark 1.3 (Symmetry q = ±1). Since u_{p,1,κ,x0,T}(x, t) = u_{p,−1,κ,−x0,T}(−x, t), the conclusion of Theorem 1.2 also holds for the solutions with q = −1.

Remark 1.4 (Similarity normalisation). The prefactor (T* − t)^{−1/2+s} arises from renormalising the Ḣ^s norm in similarity variables. Writing τ := −log(1 − t/T*), the right-hand side satisfies

(T* − t)^{ω0−δ} = (T*)^{ω0−δ} e^{−(ω0−δ)τ},

so the perturbation decays exponentially at rate ω0 − δ > 0 in τ.

Theorem 1.3 (Failure of asymptotic stability of ODE blow-up). Fix x0 ∈ R, T > 0, κ ∈ R, and k ≥ 4. For every σ > 0 there exists p ∈ (0, 1), with 1 − p sufficiently small (depending on σ), such that the exact solution

u_{p,1,κ,T,x0}(x, t) = −p log(1 − t/T) − p log(1 + √(1 − p) (x − x0)/(T − t)) + κ

has small Cauchy data at t = 0 on B_T(x0) := {x : |x − x0| < T}:

inf_{a∈R} { T^{−1/2+(k+1)} ‖u_{p,1,κ,T,x0}(·, 0) − a‖_{H^{k+1}(B_T(x0))} + T^{−1/2+k} ‖∂_t u_{p,1,κ,T,x0}(·, 0) − T^{−1}‖_{H^k(B_T(x0))} } < σ.

Nevertheless, in similarity variables one has, for every a ∈ R,

lim_{τ→∞} ‖U_{p,1,κ}(τ, ·) − (τ + a)‖_{L²(−1,1)} = ∞,

and hence the spatially homogeneous ODE blow-up u(x, t) = −log(1 − t/T) + κ is not asymptotically stable in the sense of Theorem 1.2.

1.1. Related work. Power-type semilinear wave equations. For power nonlinearities F(u) = |u|^{p−1}u, the blow-up theory in 1D is by now very well developed, with universality of rates and stability of self-similar profiles established in a series of works by Merle-Zaag and collaborators (see, e.g., [27, 28, 29, 30, 31]). In higher dimensions for sub-conformal exponents, the blow-up rate is known and self-similar profiles are stable; see the references above. In the super-conformal regime, spectral/semigroup methods initiated by Donninger and collaborators [7, 11, 12, 13, 3, 8, 10, 43] yield stability of ODE-type blow-up and related self-similar solutions. Related constructions and stability phenomena also appear in other wave-type equations, including wave maps, Yang-Mills, and the Skyrme model. For energy-critical and supercritical equations, a distinct Type II mechanism, namely slower-than-ODE blow-up via concentration of stationary states, has been developed in seminal works starting with Krieger-Schlag-Tataru [24] and extended in [17, 19, 23, 22, 4, 6, 5].

Derivative nonlinear wave equations. Explicit blow-up profiles for quadratic derivative nonlinearities have been largely absent. So far, the work has been devoted to the lifespan of blow-up solutions. For equations with spatial derivative nonlinearities, see Rammaha [34, 35], Sasaki and collaborators [36], and the recent blow-up results of Shao-Takamura-Wang [40]. For the time-derivative model closest to our work, Sasaki [37] analysed the geometry and regularity of the blow-up curve; see also [18, 38, 39].
As for explicit profiles, Ghoul-Liu-Masmoudi [16] were the first to give a full construction-and-stability theory in this setting: for the 1D spatial-derivative model u_tt − u_xx = (u_x)² they proved nonexistence of smooth exact self-similar solutions, instead constructing explicit generalised self-similar blow-up solutions, and established their asymptotic stability. Their proof advances the spectral/semigroup program of Donninger and collaborators [7, 11, 12, 13, 3, 8, 10, 43] to handle a non-self-adjoint linearised operator with unstable eigenvalues and, crucially, non-compact perturbations. Key technical ingredients include a Lorentz transformation in self-similar variables (yielding a spectral equivalence), an adaptation of the functional framework, and a novel resolvent estimate. We apply their scheme to the present model and the analogous solutions, highlighting the similarities but also the differences.

1.2. Outline of the stability proof. We follow the approach of Ghoul-Liu-Masmoudi [16], who extend Donninger's method to non-compact perturbations; for completeness we briefly recall the key steps of their scheme. After constructing the explicit profiles, we pass to similarity coordinates

τ := −log(1 − t/T),  y := (x − x0)/(T − t),

under which (1.1) becomes

(1.4)  U_ττ + U_τ + 2y U_τy + (y² − 1)U_yy + 2y U_y = (U_τ + y U_y)²,  (y, τ) ∈ (−1, 1) × [0, ∞).

With U_{p,1,κ}(τ, y) = p τ + Ũ_{p,1,κ}(y) a solution of (1.4), we set U = U_{p,1,κ} + η and linearise. By introducing the first-order state

q := (q1, q2)^⊤ := (η, η_τ + y η_y)^⊤,

we recast the linearised problem as an abstract Cauchy problem,

(1.5)  ∂_τ (q1, q2)^⊤ = (−y ∂_y q1 + q2, ∂_yy q1 − y ∂_y q2 + (2p/(1 + y√(1 − p)) − 1) q2)^⊤.

It is convenient to decompose the linearised operator as L̃_p q = L q + L_{p,1} q,

L (q1, q2)^⊤ = (−y ∂_y q1 + q2, ∂_yy q1 − y ∂_y q2 − q2)^⊤,  L_{p,1} (q1, q2)^⊤ = (0, (2p/(1 + y√(1 − p))) q2)^⊤,

where L is the first-order wave operator in similarity variables, while L_{p,1} is the p-dependent (bounded but non-compact) perturbation. This sets the stage for the stability analysis.

(1) Mode stability. Because the similarity solutions are built from symmetries, the linearised operator carries unstable eigenvalues associated to the scaling and translational invariances: λ = 1 and λ = 0. However, these are not genuine instabilities; they correspond to the free choice of parameters. Mode stability consists in showing that for Re λ ≥ 0, the only eigenvalues are precisely these symmetry modes. This reduces to an ODE eigenvalue problem for (1.5). As noted in the aforementioned literature, a convenient route is to invoke a Lorentz transform: although (1.1) itself is not Lorentz invariant, the discrete spectrum is preserved under the change to the primed frame (see [29, 25] for the use of this), and, in view of Remark 1.1, our profiles arise from ODE blow-up in that frame. We therefore analyse the eigenvalue ODE in the primed variables, where the analysis becomes more tractable.

(2) Semigroup generation via the free wave operator. Our first step is to show that the linearised operator L̃_p generates a strongly continuous (C0) semigroup. We first prove that the free wave operator L generates a C0 semigroup and then invoke the bounded-perturbation theorem (Engel-Nagel [14], p. 158) to conclude the result for L̃_p. The proof uses the Lumer-Phillips theorem ([14], Theorem 3.15) together with a coercive energy norm equivalent to H^{k+1}(−1, 1) × H^k(−1, 1), since on a bounded interval the standard wave energy is only a seminorm.
As the argument is essentially the same as in Ghoul-Liu-Masmoudi [16] and, in higher dimensions, Ostermann [33], we state the results without proof.

(3) A new decomposition to recover compactness. One of the difficulties for PDEs such as (1.1) is that the perturbation L_{p,1} is not compact, unlike in the power-nonlinearity case. Worse still, the perturbation is not compact relative to L, which complicates the analysis of the essential spectrum. The key advancement in [16] was to use a subcoercivity estimate employed by Merle et al. [26] for the nonlinear Schrödinger equation. Since dissipativity is not guaranteed at lower Sobolev regularity, we must work at higher regularity. The subcoercivity estimate allows us to decompose the operator as the sum of a maximally dissipative operator and a finite-rank (compact) projection. This restores compactness properties and yields sufficient control of the spectrum.

(4) The linearised evolution. The above shows that, aside from the symmetry modes, the spectrum lies strictly in the left half-plane; this strictness is exactly the spectral gap we need. This is one of the adaptations in this paper: in the Ghoul-Liu-Masmoudi paper it is proved that for all Re λ > −1 the only eigenvalues are 0 and 1, whereas we are able to prove this only for Re λ ≥ 0. In order to exhibit the ω0 > 0 such that

σ(L̃_p) ⊂ {Re λ ≤ −ω0} ∪ {0, 1},

we use standard spectral theory results, providing an alternative route to the spectral gap. In our proposition we also have eigenfunctions arising from symmetries at 0 and 1, and a generalised eigenfunction resulting from the parameter p ∈ (0, 1). Unlike the power-nonlinearity case, the algebraic multiplicity exceeds the geometric multiplicity for the eigenvalue 0. To make this precise, we introduce the usual Riesz projections

P_{λ,p} := (2πi)^{−1} ∫_{γ_λ} (zI − L̃_p)^{−1} dz

for suitable contours. We prove, using a direct ODE analysis, that in a sufficiently high-regularity Sobolev space there are no further generalised eigenfunctions. To then pass from spectral information to semigroup decay on the stable subspace, we estimate the resolvent and invoke the Gearhart-Prüss-Greiner Theorem ([14], Theorem 1.11, p. 302). This resolvent estimate is obtained as in [16]. After this, a full description of the linearised evolution is given.

(5) Nonlinear stability. The full nonlinear system for the perturbation q is

∂_τ q = L̃_p q + N(q),  q(0, y) = q0(y),  where  N(q) = (0, q2²)^⊤.

By Duhamel's principle,

q(τ, ·) = S_p(τ) q0(·) + ∫_0^τ S_p(τ − τ′) N(q(τ′, ·)) dτ′.

Because the symmetry modes and the generalised eigenvector induce linear growth, this Duhamel formulation does not close globally on [0, ∞) without an adjustment. To remedy this, we follow the Lyapunov-Perron technique: we project the initial data onto the stable subspace and introduce a correction along the unstable directions to cancel their growth. Concretely, we consider the modified problem

q(τ, ·) = S_p(τ) U_{p,T,κ}(f) − C_p(U_{p,T,κ}(f), q(τ, ·)) + ∫_0^τ S_p(τ − τ′) N(q(τ′, ·)) dτ′,

where U_{p,T,κ} is the operator that rescales the initial data relative to the blow-up profile. We then show that there exist parameters p*, T*, κ* for which the correction term vanishes, yielding a genuine solution of the fixed-point problem. Proving that the correction vanishes is equivalent to showing that a certain linear functional, obtained by dual pairing with the correction term, is identically zero. Appealing to Brouwer's fixed-point theorem exhibits the required parameters.
Finally, by the restriction properties of the semigroup, the resulting mild solution upgrades to a classical solution.

1.3. Notation. In this paper we follow the same notation as in [16]. For x0 ∈ R and r > 0, we denote the open interval centered at x0 by B_r(x0) := (x0 − r, x0 + r); its closure is B̄_r(x0) = [x0 − r, x0 + r]. For x0 ∈ R and T > 0, the past light cone is

Γ(x0, T) := {(t, x) ∈ [0, T) × R : |x − x0| ≤ T − t}.

For a, b ∈ R, we write a ≲ b if there exists a constant C > 0 (independent of the variables under consideration) such that a ≤ C b. If C may depend on a parameter p, we write a ≲_p b. For a function f : R → R, y ↦ f(y), we may interchangeably denote the first derivative by f′ = ∂_y f = f_y and the k-th derivative by ∂^k_y f = ∂^k f, where ∂^k without subscript always means ∂^k_y. If Ω ⊂ R is a domain, then C^∞(Ω̄) denotes smooth functions on Ω with derivatives continuous up to ∂Ω. If Ω is bounded and k ∈ N ∪ {0}, we set

‖f‖²_{H^k(Ω)} := Σ_{i=0}^{k} ‖∂^i_y f‖²_{L²(Ω)},  ‖f‖_{Ḣ^k(Ω)} := ‖∂^k_y f‖_{L²(Ω)},  f ∈ C^∞(Ω̄).

The Sobolev space H^k(Ω) is the completion of C^∞(Ω̄) with respect to ‖·‖_{H^k(Ω)}. For k ≥ 0, we use the product (energy) space

H^k(−1, 1) := H^{k+1}(−1, 1) × H^k(−1, 1),

and we may omit the domain (e.g., (−1, 1)) when it is clear from context. We use boldface for tuples of functions, e.g. f ≡ (f1, f2)^⊤, q(t, ·) ≡ (q1(t, ·), q2(t, ·))^⊤; linear operators acting on such tuples are also written in boldface. If L is a closed linear operator on a Banach space X, we denote its domain by D(L), its spectrum by σ(L), and its point, discrete, and essential spectra by σ_p(L), σ_disc(L) and σ_ess(L), respectively. The resolvent set is ρ(L) := C \ σ(L) and the resolvent operator is R_L(z) := (z − L)^{−1}, z ∈ ρ(L). The space of bounded operators on X is denoted by L(X). For background on spectral theory we refer to Kato [20], and for C0-semigroups to Engel and Nagel [14] and the references therein.

2. Construction of smooth blow-up solutions

2.1. Non-existence of smooth self-similar blow-up. Since the nonlinear wave equation (1.1) is not Lorentz invariant, we cannot use Poincaré symmetries or Lorentz boosts, as in the power-nonlinearity setting, to generate spatial dependence from an ODE blow-up profile. Nevertheless, as mentioned, we construct solutions to (1.1) that effectively have the form of Lorentz-transformed ODE blow-up inside the cone. Guided by scaling and translations, we first consider the exactly self-similar ansatz

u(x, t) = U((x − x0)/(T − t)) = U(y),  y := (x − x0)/(T − t).

Substituting into (1.1) yields the similarity ODE

(2.1)  (y² − 1)U_yy + 2y U_y = y² (U_y)².

Besides the symmetry-induced constants U ≡ κ, we can solve (2.1) explicitly for V := U_y by rewriting it as

−(1/V)′ + (2y/(y² − 1)) (1/V) = y²/(y² − 1).

With W := 1/V and the integrating factor (y² − 1)^{−1}, we obtain

d/dy [W (y² − 1)^{−1}] = −y²/(y² − 1)² = 1/(4(y + 1)) − 1/(4(y + 1)²) − 1/(4(y − 1)) − 1/(4(y − 1)²).

Integrating gives

W = (1/4) [2y + c(y² − 1) + (y² − 1) log|(y + 1)/(y − 1)|] =: p_c(y),

so

U_y(y) = 4 [2y + c(y² − 1) + (y² − 1) log|(y + 1)/(y − 1)|]^{−1},  or  U_y ≡ 0.

Proof of Theorem 1.1(1). For any c ∈ R, the denominator p_c is continuous on (−1, 1) and satisfies p_c(−1) = −1/2 and p_c(1) = 1/2. By the intermediate value theorem, p_c has a zero in (−1, 1), whence U_y is singular there. Thus there are no nontrivial smooth exact self-similar blow-up profiles in the past light cone Γ(x0, T) ≡ {|y| ≤ 1}. □
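The sign change of p_c can also be seen numerically. A small sketch of ours (the sample values of c are arbitrary):

```python
import numpy as np
from scipy.optimize import brentq

def p_c(y, c):
    """p_c(y) = (1/4)[2y + c(y^2-1) + (y^2-1) log|(1+y)/(1-y)|] on (-1, 1)."""
    return 0.25*(2*y + c*(y**2 - 1) + (y**2 - 1)*np.log((1 + y)/(1 - y)))

eps = 1e-12
for c in (-10.0, -1.0, 0.0, 1.0, 10.0):
    root = brentq(p_c, -1 + eps, 1 - eps, args=(c,))   # sign change guaranteed
    print(f"c = {c:6.1f}: p_c vanishes at y = {root:+.6f}")
```

So U_y always develops a pole inside (−1, 1), in line with the proof above.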
2.2. Construction of generalised self-similar solutions. Since stationary similarity profiles are ruled out, we perturb the ansatz by allowing linear growth in the temporal similarity variable

τ := −log(1 − t/T) = log T − log(T − t).

(Recall that u(x, t) = τ is the spatially homogeneous ODE blow-up.) We look for

U(τ, y) = p τ + Ũ(y),  p ∈ R.

Substituting into (1.4) yields a Riccati equation for V := Ũ_y,

(2.2)  (1 − y²)Ũ_yy + 2y(p − 1)Ũ_y + p(p − 1) = −y²(Ũ_y)²,

i.e.

V′ = Q0(y) + Q1(y)V + Q2(y)V²,  Q0 = p(1 − p)/(1 − y²),  Q1 = 2(1 − p)y/(1 − y²),  Q2 = −y²/(1 − y²).

A simple ansatz V = a/(b + cy) produces the two elementary particular solutions

V_±(y) = p√(1 − p)/(±1 − y√(1 − p)).

These are real-valued for p ∈ (0, 1], and yield the ODE profile when p = 1.

The case p = 1: Setting p = 1 in (2.2) gives (1 − y²)Ũ_yy = −y²(Ũ_y)². Besides the constant solutions Ũ ≡ κ, any nontrivial solution is singular in (−1, 1). Indeed, with W := 1/Ũ_y we have

W′ = y²/(1 − y²) = −1 + 1/(2(1 + y)) + 1/(2(1 − y)),

hence

1/Ũ_y = −y + (1/2) log((1 + y)/(1 − y)) + c =: ρ_c(y),

and ρ_c(y) → ±∞ as y → ±1, so ρ_c has a zero in (−1, 1). Thus Ũ_y blows up, and only the constant profile remains smooth, reproducing the ODE blow-up.

The case p < 0. Although V_± are real-valued, √(1 − p) > 1 and hence V_± blow up at y = ±1/√(1 − p) ∈ (−1, 1). By persistence of regularity, such profiles cannot evolve from smooth Cauchy data, and we discard p < 0.

General solution for p ∈ (0, 1): Given a particular solution, the Riccati equation linearises via

V(y) = V_+(y) + 1/z(y),  z′ + (Q1 + 2Q2V_+) z = −Q2.

A computation, using partial fractions, shows

z′ + [2(1 − p)y/(1 − y²) − 2p√(1 − p) y²/((1 − y√(1 − p))(1 − y²))] z = y²/(1 − y²),

with integrating factor

I(y) = (1 − y√(1 − p))^{−2} ((1 − y)/(1 + y))^{√(1−p)}.

Integrating yields

z(y) = −[(1 − y√(1 − p))/(2p√(1 − p))] { (1 + y√(1 − p)) − c ((1 + y)/(1 − y))^{√(1−p)} (1 − y√(1 − p)) },

and hence

Ũ_y(y) = V_+(y) + 1/z(y) = p√(1 − p) / [1 − 2/(1 + c e^{2√(1−p) tanh^{−1}(y)}) − y√(1 − p)].

Thus c = 0 reproduces V_−, while c = ±∞ gives V_+.

To obtain smoothness on [−1, 1], we analyse the denominator

h_c(y) := 1 − 2/(1 + c ((1 + y)/(1 − y))^{√(1−p)}) − y√(1 − p).

For 0 < c < ∞ and any p ∈ (0, 1), h_c is continuous on [−1, 1] and

lim_{y→1} h_c(y) = 1 − √(1 − p),  lim_{y→−1} h_c(y) = −1 + √(1 − p),

which have opposite signs; by the intermediate value theorem, h_c vanishes in (−1, 1), so Ũ_y blows up. For c < 0, Ũ_y remains continuous but

Ũ_yy(y) = −p√(1 − p) [ 4c√(1 − p) e^{2√(1−p) tanh^{−1}(y)} / ((1 − y²)(c e^{2√(1−p) tanh^{−1}(y)} + 1)²) − √(1 − p) ] / [1 − 2/(c e^{2√(1−p) tanh^{−1}(y)} + 1) − y√(1 − p)]²

is unbounded at y = ±1. Consequently, smoothness on the closed cone forces c ∈ {0, ±∞}.

Collecting cases, for p ∈ (0, 1], q ∈ {±1}, and κ ∈ R, the smooth profiles are

Ũ_{p,q,κ}(y) = −p log(1 + q√(1 − p) y) + κ,

and hence

U_{p,q,κ}(τ, y) = p τ − p log(1 + q√(1 − p) y) + κ.

In (x, t)-variables this gives the five-parameter family

(2.3)  u_{p,q,κ,x0,T}(x, t) = −p log(1 − t/T) − p log(1 + q√(1 − p)(x − x0)/(T − t)) + κ
(2.4)   = −p log(T − t + q√(1 − p)(x − x0)) + p log T + κ,

with the special case p = 1 recovering the ODE blow-up, and

(2.5)  u_{p,−1,κ,x0,T}(x, t) = u_{p,1,κ,−x0,T}(−x, t).
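Both particular solutions V_± used above can be confirmed symbolically. A short sympy check of ours:

```python
import sympy as sp

y = sp.symbols('y', real=True)
p = sp.symbols('p', positive=True)
a = sp.sqrt(1 - p)

Q0 = p*(1 - p)/(1 - y**2)
Q1 = 2*(1 - p)*y/(1 - y**2)
Q2 = -y**2/(1 - y**2)

for sign in (1, -1):
    V = p*a/(sign - y*a)                       # V_plus / V_minus from the text
    residual = sp.diff(V, y) - (Q0 + Q1*V + Q2*V**2)
    print(sign, sp.simplify(residual))         # expected: 0 for both signs
```

The cancellation again rests on (√(1 − p))² + p = 1, mirroring the Lorentz-boost relabelling p = 1 − γ² used in the next subsection.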
2.3. Solutions obtained from the Lorentz transformation. Although neither (u_t)² nor (u_x)² is Lorentz invariant (in contrast with (u_t)² − (u_x)²), the Lorentz transform offers a complementary viewpoint on the preceding family. Let

t′ = (t − γx)/√(1 − γ²),  x′ = (x − γt)/√(1 − γ²),  γ ∈ (−1, 1),

with inverse t = (t′ + γx′)/√(1 − γ²), x = (x′ + γt′)/√(1 − γ²). If u solves (1.1), then v(x′, t′) := u(x(x′, t′), t(x′, t′)) satisfies

v_{t′t′} − v_{x′x′} = (1/(1 − γ²)) [(v_{t′})² − 2γ v_{x′} v_{t′} + γ²(v_{x′})²].

Considering the ODE mechanism v_{t′t′} = (1/(1 − γ²))(v_{t′})² yields v(x′, t′) = −(1 − γ²) log(c1 + t′) + c2. Transforming back, applying time reversal, and using translations gives

u(x, t) = −(1 − γ²) log(T − t − γ(x − x0)) + c2 = −(1 − γ²) log(T − t + q|γ|(x − x0)) + c2,

with q ∈ {±1}. Relabelling p = 1 − γ² ∈ (0, 1] reproduces

u_{p,q,κ,x0,T}(x, t) = −p log(T − t + q√(1 − p)(x − x0)) + κ,

up to the symmetry constant p log T.

Remark 2.1. By scaling symmetry, the Lorentz transform corresponds to travelling-wave profiles: the ansatz u(x, t) = F(x − ct) reduces (1.1) to the Riccati mechanism (c² − 1)F″ = c²(F′)², whose solutions are F(ξ) = −(1 − c²) log(A + Bξ) + κ. After translations and scaling, choosing A + B(x − ct) = T − t − c(x − x0) recovers the same family, with blow-up along T − t − c(x − x0) = 0.

3. Mode stability

We now turn to the stability of the above blow-up solutions. First we perform the most elementary stability analysis of U_{p,1,κ}(τ, y) in similarity coordinates, where it is enough to consider q = 1 due to the relation (2.5). We substitute U(τ, y) = U_{p,1,κ}(τ, y) + η(τ, y) into (1.4) and linearise, obtaining

(3.1)  ∂_ττ η + ∂_τ η + 2y ∂_τy η + 2y ∂_y η + (y² − 1) ∂_yy η = 2(∂_τ η + y ∂_y η) p/(1 + y√(1 − p)).

In the same way one puts the free wave equation □u = 0 into a first-order (in time) evolution equation in the arguments (u, ∂_t u), we let q = (q1, q2)^⊤ := (η, ∂_τ η + y ∂_y η)^⊤ and rewrite (3.1) as the first-order system

(3.2)  ∂_τ (q1, q2)^⊤ = (−y∂_y q1 + q2, ∂_yy q1 − y∂_y q2 + (2p/(1 + y√(1 − p)) − 1) q2)^⊤.

We denote the right-hand side as the operator

L̃_p (q1, q2)^⊤ := (−y∂_y q1 + q2, ∂_yy q1 − y∂_y q2 + (2p/(1 + y√(1 − p)) − 1) q2)^⊤ = L q + L_{p,1} q,

where

L (q1, q2)^⊤ = (−y∂_y q1 + q2, ∂_yy q1 − y∂_y q2 − q2)^⊤

is the free wave operator and

L_{p,1} (q1, q2)^⊤ := (0, (2p/(1 + y√(1 − p))) q2)^⊤

is the perturbation, which depends on p. In fact this operator depends only on p and not on κ, T, x0.

Note: We consider (3.2) as an evolution equation, so the q_j are understood as functions of y for each point in time τ; this is also known as an abstract Cauchy problem. Unfortunately, the linearised operator L̃_p is not self-adjoint on H^{k+1}(−1, 1) × H^k(−1, 1), so in general its spectrum need not be confined to the real axis. As mentioned, although the perturbation L_{p,1} is a bounded operator on H^{k+1} × H^k, it is not relatively compact with respect to L, which also complicates finding information on the essential spectrum of L̃_p.

By considering the separated-variable solutions η = e^{λτ} φ(y) to (3.1), we obtain the eigenequation ODE

(3.3)  [λ² + λ − 2pλ/(1 + y√(1 − p))] φ(y) + [(2λ + 2)y − 2yp/(1 + y√(1 − p))] φ′(y) + (y² − 1)φ″(y) = 0.

Equation (3.3) is called the eigenequation because there is a one-to-one correspondence between solutions to (3.3) and solutions (q, λ) of L̃_p q = λq, as given by the following proposition.

Proposition 3.1. If q = (q1, q2) ∈ C^∞[−1, 1] × C^∞[−1, 1] satisfies L̃_p q = λq for some λ ∈ C, then q1 solves (3.3). Conversely, if (φ, λ) solves (3.3) with φ ∈ C^∞[−1, 1], then Ψ := (φ, λφ + yφ′)^⊤ satisfies L̃_p Ψ = λΨ.

Proof. Direct computation: eliminating q2 from L̃_p q = λq yields (3.3); conversely, η(τ, y) = e^{λτ}φ(y) solves (3.1), hence (q1, q2) = (η, η_τ + yη_y) is an eigenvector, and factoring out e^{λτ} gives the stated Ψ. □

As a result, we make the following point-spectrum conventions.

Definition 3.2. We call λ ∈ C an eigenvalue of L̃_p if (3.3) admits a nontrivial φ ∈ C^∞[−1, 1].

Remark 3.1 (Sobolev vs. smooth eigenfunctions). It is equivalent to define eigenvalues of L̃_p using eigenfunctions in H^{k+1}(−1, 1) for k ≥ 4.
Indeed, the eigenequation (3.3) is a second-order elliptic ODE with smooth coefficients on (−1, 1), so any weak H^{k+1} solution is C^∞ in the interior by standard elliptic regularity (e.g., Evans [15], Ch. 6). Moreover, we show in Proposition A.1 (Appendix A) that such solutions are smooth, indeed real-analytic, up to the endpoints y = ±1. Hence the H^{k+1} and C^∞ notions of point spectrum coincide.

Definition 3.3. An eigenvalue λ ∈ C is unstable if Re λ ≥ 0; otherwise it is stable.

3.1. Symmetry-generated eigenmodes. If U(τ, y) solves (1.4), the following symmetry actions map solutions to solutions:
• Spatial translation: U_a(τ, y) = U(τ, y + a e^τ).
• Shift of the blow-up time: U_b(τ, y) = U(τ − log(1 − b e^τ), y/(1 − b e^τ)).
• τ-translation: U_c(τ, y) = U(τ + c, y).
• Additive constant: U_d(τ, y) = U(τ, y) + d.

Remark 3.2. The physical scaling in (x, t) is a composition of a time shift in t and a τ-translation, so it yields no additional independent direction in similarity variables.

Differentiating at parameter 0 gives the symmetry generators

∂_a U_a|_{a=0} = e^τ U_y,  ∂_b U_b|_{b=0} = e^τ(U_τ + yU_y),  ∂_c U_c|_{c=0} = U_τ,  ∂_d U_d|_{d=0} = 1.

For the profile U_{p,1,κ}(τ, y) = pτ − p log(1 + y√(1 − p)) + κ, p ∈ [0, 1], we have

∂_a U_a|_{a=0} = −e^τ p√(1 − p)/(1 + y√(1 − p)),  ∂_b U_b|_{b=0} = e^τ p/(1 + y√(1 − p)),  ∂_c U_c|_{c=0} = p,  ∂_d U_d|_{d=0} = 1.

Thus we have the following eigenvalues and eigenfunctions for 0 < p < 1:
• ∂_a U_a|_{a=0} and ∂_b U_b|_{b=0} are proportional, hence span the same one-dimensional unstable eigenspace at eigenvalue λ = 1.
• ∂_c U_c|_{c=0} and ∂_d U_d|_{d=0} are constant in y and span the same one-dimensional neutral eigenspace at λ = 0.

Equivalently, writing modes as e^{λτ}φ(y), we may take (up to normalisation)

φ1(y) = 1/(1 + y√(1 − p))  (λ = 1),  φ0(y) ≡ 1  (λ = 0).

A generalised neutral mode (λ = 0). Since U_{p,1,κ} also depends continuously on p, differentiating in p produces a generalised mode at λ = 0. In the first-order formulation this function reads

(q1, q2)^⊤ = (∂_p U_{p,1,κ}, ∂_p(U_τ + yU_y))^⊤,

which satisfies L̃_p (q1, q2)^⊤ = (φ0, 0)^⊤.
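Both symmetry modes can be checked against the eigenequation (3.3) directly. A small sympy verification of ours:

```python
import sympy as sp

y = sp.symbols('y', real=True)
p = sp.symbols('p', positive=True)
a = sp.sqrt(1 - p)

def eigeneq(phi, lam):
    """Left-hand side of the eigenequation (3.3)."""
    return sp.simplify(
        (lam**2 + lam - 2*p*lam/(1 + a*y))*phi
        + ((2*lam + 2)*y - 2*y*p/(1 + a*y))*sp.diff(phi, y)
        + (y**2 - 1)*sp.diff(phi, y, 2))

print(eigeneq(sp.Integer(1), 0))       # phi_0 = 1 at lambda = 0: expected 0
print(eigeneq(1/(1 + a*y), 1))         # phi_1 at lambda = 1: expected 0
```

Both residuals simplify to zero, confirming λ = 0 and λ = 1 as eigenvalues of L̃_p.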
We will say the profile is mode stable if the point spectrum in Re λ ≥ 0 consists only of the symmetry-generated modes.

Definition 3.4 (Mode stability). The solution U_{p,1,κ} is mode stable if the existence of a non-trivial φ_λ ∈ C^∞[−1, 1] satisfying (3.3) implies that either Re(λ) < 0 or λ ∈ {0, 1}, and these eigenvalues are simple.

We now begin the study of mode stability of the solutions U_{p,1,κ}. Rewriting (3.3) with partial fractions,

φ″ + [A/(y + 1) + B/(y − 1) + C/(1 + y√(1 − p))] φ′ + [D/(y − 1) + E/(y + 1) + F/(1 + y√(1 − p))] φ = 0,

where the coefficients are

A(λ) = λ − √(1 − p),  B(λ) = λ + √(1 − p),  C(λ) = 2√(1 − p),
D(λ) = λ²/2 − λ/2 + λ√(1 − p),  E(λ) = −λ²/2 + λ/2 + λ√(1 − p),  F(λ) = −2λ(1 − p),

we see the presence of four regular singular points, ±1, −1/√(1 − p) and ∞. ODEs of this class are called Heun differential equations. Unfortunately, the existence of smooth solutions to these ODEs, known as the 'connection problem', is still open. Therefore, following the method of the aforementioned reference, we transform this Heun ODE into the more tractable hypergeometric equation, which possesses only three regular singular points.

Note: When p = 1 (corresponding to the ODE blow-up solution), the coefficients C and F vanish, reducing (3.3) already to the hypergeometric equation

(3.4)  φ″ + [λ/(y − 1) + λ/(y + 1)] φ′ + (λ² − λ)/(y² − 1) φ = 0,

with the three singular points ±1 and ∞, an expected phenomenon for eigenequations associated with ODE blow-up solutions. We will treat this case separately since the eigenvalues are explicit. Before this, we recall a classical ODE result.

3.2. Frobenius theory and hypergeometric functions.

Lemma 3.5 (Frobenius-Fuchs [42], [9]). Consider the ODE over C

(3.5)  f″(z) + p(z)f′(z) + q(z)f(z) = 0.

Let D_R = {z ∈ C : |z| < R} denote the complex disc of radius R > 0, and let p, q : D_R \ {0} → C be holomorphic. Suppose the limits

p0 = lim_{z→0} [z p(z)],  q0 = lim_{z→0} [z² q(z)]

exist, i.e. z p(z) and z² q(z) are analytic at z = 0, i.e. p and q are meromorphic with poles at z = 0 of order at most 1 and 2 respectively. We can then write

z p(z) = Σ_{n=0}^∞ p_n z^n,  z² q(z) = Σ_{n=0}^∞ q_n z^n,

which converge on |z| < R. Define the 'indicial polynomial'

P(s) := s(s − 1) + p0 s + q0,

and denote by s_j ∈ C, j ∈ {1, 2}, the roots P(s_j) = 0, with Re(s1) ≥ Re(s2).

• If s1 − s2 ∉ N0 := N ∪ {0}, a fundamental system of solutions to (3.5) is given by f_j(z) = z^{s_j} h_j(z), where the functions h_j(z) = Σ_{n=0}^∞ a^j_n z^n are analytic at 0 from D_R → C and satisfy h_j(0) = a^j_0 = 1.

• If s1 − s2 = m ∈ N0, a fundamental system of solutions to (3.5) is

f1(z) = z^{s1} h1(z),  f2(z) = z^{s2} h3(z) + c log(z) f1(z),

where the h_j are analytic near z = 0 and satisfy h_j(0) = 1. The function h3 = Σ_{n=0}^∞ b_n z^n, where the coefficients b_n are determined by equating coefficients; again h3(0) = b0 = 1. The constant c ∈ C may be zero unless m = 0.

Moreover, the coefficients a^j_n can be obtained from the recurrence relation

(3.6)  a^j_n [(s_j + n)(s_j + n − 1) + (s_j + n)p0 + q0] + Σ_{k=0}^{n−1} a^j_k [(s_j + k)p_{n−k} + q_{n−k}] = 0.

Finally, in either case (s1 − s2 non-integer, or integer including zero), the radii of convergence of the series h_j are not less than the distance from z = 0 to the nearest other singularity of the differential equation [32].

Hypergeometric equations. A particular class of (3.5) is the hypergeometric differential equation, which takes the form

z(1 − z) d²w/dz² + [γ − (α + β + 1)z] dw/dz − αβ w = 0

for parameters α, β, γ, where p0 = γ and q0 = 0, so that

z p(z) = γ + (γ − α − β − 1) Σ_{k=1}^∞ z^k,  z² q(z) = −αβ Σ_{k=1}^∞ z^k

in |z| < 1. For these equations the indicial polynomial P(s) = s(s − 1) + sγ always has the roots s = 0 and s = 1 − γ. The Frobenius solutions have been fully studied and depend on the difference of the roots, γ − 1, which reduces to investigating whether γ is an integer or not. For our purposes, it suffices to know the following solutions around z = 0:

(1) γ not an integer: w(z) = A ₂F₁(α, β, γ; z) + B z^{1−γ} ₂F₁(α − γ + 1, β − γ + 1, 2 − γ; z).
(2) γ ∈ N0, e.g. γ = 1 (so s1 = s2 = 0): w(z) = c1 ₂F₁(α, β, 1; z) + c2 log(z) h(z), for h some analytic function which can be found in Abramowitz and Stegun [1].

The hypergeometric function is defined as

₂F₁(α, β, γ; z) = Σ_{n=0}^∞ [(α)_n (β)_n / (γ)_n] z^n/n!.

3.3. Mode instability at the ODE blow-up profile. To apply hypergeometric theory, write y = 2z − 1. Then (3.4) becomes

(3.7)  z(1 − z) φ″ + (λ − 2λz) φ′ − λ(λ − 1) φ = 0,

i.e. the Gauss hypergeometric equation with parameters a(λ) = λ − 1, b(λ) = λ, c(λ) = λ. The points z = 0, 1 are regular singular, and Frobenius-Fuchs theory yields exponents {0, 1 − λ} at both z = 0 and z = 1.

Proposition 3.6. If Re λ > −1, nontrivial solutions φ ∈ C^∞[0, 1] of (3.7) occur only for λ ∈ {0, 1}. However, the eigenvalue 0 is not simple, so the ODE blow-up is not mode stable.

Proof. A fundamental system near z = 0 is

φ1(z) = ₂F₁(λ − 1, λ; λ; z) = (1 − z)^{1−λ},  φ2(z) = z^{1−λ} ₂F₁(1, 1; 2 − λ; z).

Smoothness at z = 0 forces either 1 − λ ∈ N0 or the coefficient of the singular branch φ2 to vanish. By symmetry, the same condition arises at z = 1.
Thus 1 − λ = n ∈ N0. For these n, both z^n and (1 − z)^n solve (3.7) and are linearly independent, with Wronskian

W(z^n, (1 − z)^n) = −(1 − λ) z^{−λ}(1 − z)^{−λ} ≢ 0.

When Re λ > −1, the only possibilities are n = 0, 1, i.e. λ ∈ {1, 0}. However, at λ = 1 we obtain the constant mode, and at λ = 0 the eigenspace is two-dimensional, span{1, y}. □

In the original y-variable, representative eigenfunctions are φ(y) ∈ span{(1 + y)^{1−λ}, (1 − y)^{1−λ}}. This additional neutral direction verifies that the spatially homogeneous ODE blow-up is not asymptotically stable in our framework (see Theorem 1.3).
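The two polynomial branches in this proof are easy to confirm. A brief sympy check of ours, run for the first few integer values of n:

```python
import sympy as sp

z = sp.symbols('z', real=True)

def hyp(phi, lam):
    """Left-hand side of (3.7)."""
    return (z*(1 - z)*sp.diff(phi, z, 2)
            + (lam - 2*lam*z)*sp.diff(phi, z) - lam*(lam - 1)*phi)

for n in range(5):
    lam = 1 - n
    print(n,
          sp.simplify(hyp(z**n, lam)),          # expected: 0
          sp.simplify(hyp((1 - z)**n, lam)))    # expected: 0
```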
In standard hypergeometric form, with y′ = 2z′ −1 and ψ(y′) = ϕ(z′), (3.13) z′(1 −z′)ϕ′′ + λ − p 1 −p −2λz′ ϕ′ −λ(λ −1)ϕ = 0, i.e. Gauss’s hypergeometric equation with parameters a = λ, b = λ −1, c = λ − p 1 −p. Thus the exponents at z′ = 0 and z′ = 1 are {0, 1 −λ + √1 −p} and {0, 1 −λ −√1 −p}, respectively. Local Frobenius analysis at z′ = 1. Then we have the following cases • Case 1: s1 = 1 −λ −√1 −p and s2 = 0 since Re(λ) ≤1 −√1 −p • Case 2: s1 = 0 and s2 = 1 −λ −√1 −p assuming that Re(λ) > 1 −√1 −p In case 1, we have the sub-case where s1 −s2 /∈N0, then we have the independent solutions ϕ1,1 = (1 −z′)s1h1(z′ −1) ϕ1,2 = h2(z′ −1) for analytic h1, h2. Since s1 /∈N, any smooth solution must be a multiple of the analytic function ϕ0,2 = 2F1(a, b, 1 + a + b −c). The next subcase is if s1 −s2 ∈N0 and so 1 −λ −√1 −p ∈{0, 1, 2, . . . }. By restricting ourselves to Re(λ) ≥0, and noting that √1 −p ∈(0, 1), then we have s1 = 1 −λ −√1 −p < 1 −λ ≤1 which implies s1 = 0. The two independent solutions are thus ϕ1,3 = h1(z′ −1) ϕ1,4 = h2(z′ −1) + c1 log(z′ −1)ϕ11. Since s1−s2 = 0, c1 ̸= 0, so any smooth solution must be a multiple of ϕ11 = h1(z′−1) = 2F1(a, b, 1+a+b−c). Finally for case 2, we have that Re(λ) > 1 −√1 −p. The first subcase is if s1 −s2 /∈N0 which implies Re(s2) < 0, then we have the independent solutions ϕ2,1 = h1(z′ −1) ϕ2,2 = (z′ −1)s2h2(z′ −1) Since Re(s2) < 0 then ϕ2,2 is not smooth at z′ = 1, so the local smooth solution must be a multiple of h1(z′ −1) = 2F1(a, b, 1 + a + b −c; 1 −z′). The second sub-case is s1 −s2 ∈N0 for which we have the 18 OLIVER GOUGH solutions ϕ2,3 = h1(z′ −1) ϕ2,4 = (z′ −1)s2h2(z′ −1) + c log(z′ −1)ϕ2,1 Since Re(λ) > 1 −√1 −p, s2 ̸= 0 the constant c may be 0, but Re(s2) < 0 and so ϕ2,4 is not smooth regardless of c, so the local smooth solution is again a multiple of ϕ2,3 = h1(z′ −1) = 2F1(a, b, 1 + a + b −c) In all cases, any smooth local solution at z′ = 1 is a multiple of ϕ+(z′) = 2F1(a, b; 1 + a + b −c; 1 −z′). A symmetric Frobenius analysis at z′ = 0 yields the analogous conclusion there. Proposition 3.8. For p ∈(0, 1) and Re λ ≥0, there are no nonzero smooth solutions ϕλ ∈C∞[0, 1] to (3.13), except for λ ∈{0, 1}. Therefore, the profiles Up,1,κ are mode stable. Proof. By the above analysis that for any λ ∈C with Re(λ) ≥0 and any non-zero solution ϕ ∈C∞[0, 1] solving (3.11), then ϕ must also be analytic on [0, 1]. To show the mode stability, we show that if λ /∈{0, 1} and Re(λ) ≥0 then ϕ fails to be analytic on [0, 1], which proves the claim by the contrapositive. By the above analysis, we found that the smooth local solution about z′ = 1 is a multiple of ϕ(z′) = 2F1(a, b, 1 + a + b −c; 1 −z′) which is an analytic function around z′ = 1. However, the hypergeometric function is analytic everywhere except possibly at the singular points. Thus, when the radius of convergence is precisely 1, then there must be singularity located at z′ = 0, rendering the function non-analytic on [0, 1]. By definition of the hypergeometric function, the coefficients αn of 2F1(a, b, 1 + a + b −c; 1 −z′) are αn(λ) = (a)n(b)n n!(1 + a + b −c)n = (λ)n(λ −1)n n!(λ + √1 −p)n . If λ /∈{0, 1} as well as Re(λ) ≥0 then αn ̸= 0 for any n ∈N. Then the limiting behaviour of the radius of convergence may be found as rn(λ) = an+1(λ) an(λ) = (λ + n)(λ + n −1) (n + 1)(λ + √1 −p + n) →1, n →∞. □ 4. 
Spectral analysis of eLp Recall from (3.2) that the linearised flow about Up,1,κ can be written, for τ ∈[0, ∞) and y ∈[−1, 1], as (4.1) ∂τ  q1 q2  = eLp  q1 q2  , eLp q1 q2  :=   −y ∂yq1 + q2 ∂yyq1 −y ∂yq2 +  2p 1 + y√1 −p −1  q2  . Following [16], we add and subtract a boundary trace to the free wave part: eLp = eL + Lp,1, eL q1 q2  := −y ∂yq1 + q2 −q1(−1) ∂yyq1 −y ∂yq2 −q2  , Lp,1 q1 q2  :=   q1(−1) 2p 1 + y√1 −p q2  . Here the map q1 7→q1(−1) is rank-one (hence compact) on our state space. Secondly, although the multiplier y 7→ 2p 1 + y√1 −p is smooth on [−1, 1] for p ∈(0, 1], by construction, the multiplication operator q2 7→ 2p 1+y√1−pq2 is not compact, a standard functional analytic result. STABLE TYPE I BLOW-UP FOR THE 1D WAVE EQUATION 19 4.1. Functional framework. We use the same framework as in Ostermann [33] adapted to 1D. For smooth q = (q1, q2) and eq = (eq1, eq2) set ⟨q, eq⟩0 := Z 1 −1 (∂yq1) ∂yeq1 dy + Z 1 −1 q2 eq2 dy + q1(−1) eq1(−1), and, for k ≥1, ⟨q, eq⟩k := Z 1 −1 ∂k+1 y q1 ∂k+1 y eq1 dy + Z 1 −1 ∂k yq2 ∂ky eq2 dy + ⟨q, eq⟩0, ∥q∥2 k := ⟨q, q⟩k. By the trace theorem, q1(−1) is well-defined for q1 ∈H1(−1, 1), and for all k ≥0, (4.2) ∥q∥k ≃∥q1∥Hk+1(−1,1) + ∥q2∥Hk(−1,1). We write Hk := Hk+1(−1, 1) × Hk(−1, 1); the completion of Ck+1[−1, 1] × Ck[−1, 1] under ∥· ∥k is Hk. 4.2. Generation of the semigroup. All proofs follow [16] (and standard semigroup theory [14]); we only state the results we need. Lemma 4.1 (Dissipativity of the modified free wave). For k ≥0, we have Re ⟨eLq, q⟩k ≤−1 2 ∥q∥2 Hk for all q ∈(C∞[−1, 1])2. In particular, eL + 1 2 is dissipative. Next we have that the range of λ −eLp is dense in Hk, for some λ > −1 2. λ = 0 is chosen for convenience. Lemma 4.2 (Dense range at λ = 0). For any k ≥0, ε > 0 and any g ∈Hk, there exists q ∈C∞[−1, 1] × C∞[−1, 1] such that ∥−eLq −g∥k < ε By the Lumer Phillips theorem, we now have that the closure of the modified wave operator with domain Hk generates a C0 semigroup. Proposition 4.3 (Generation for eL). The closure of eL generates a contraction C0-semigroup (T(τ))τ≥0 on Hk for eL + 1 2. Consequently, S(τ) := e−τ 2 T(τ) is the C0-semigroup generated by eL and satisfies ∥S(τ)∥L(Hk) ≤e−τ 2 . To pass to the full operator we now treat Lp,1. Proposition 4.4 (Boundedness of Lp,1 and generation for eLp). For all k ≥0, and all p ∈(0, 1], the operator Lp,1 is bounded on Hk. As a corollary Lp := L + Lp,1 generates a strongly continuous one-parameter semi- group Sp(·) : [0, ∞) →L(Hk) which satisfies (4.3) ∥Sp(τ)∥L(Hk) ≤Me(−1 2 +∥Lp,1∥L(Hk))τ for all τ ≥0 and M > 0 is a constant independent of p. Proof. To show Lp,1 is bounded on Hk, take q ∈Hk ∥Lp,1q∥Hk = ∥q1(−1)∥Hk+1 + 2p 1 + y√1 −pq2 Hk ≲∥q1∥L∞+ 4∥q2∥Hk ≲∥q∥Hk Then using the bounded perturbation theorem ([14] p.158), the sum of the operators Lp = L+Lp,1 generates the C0 semigroup with the growth bound (4.3). □ 20 OLIVER GOUGH 4.3. A New Decomposition of Lp. Recall Lp = −y∂y 1 ∂yy −y∂y + (Up(y) −1)  = L + Lp,1 where we denote Up(y) = 2p 1+y√1−p = 2(∂τUp,1,k + y∂yUp,1,κ). As in the utt −uxx = u2 x problem, the pertur- bation Lp,1 is merely bounded, and neither compact nor L-compact. This complicates finding information about the spectrum of Lp. Indeed, recall Lp,1q =  q1(−1) 2p 1+y√1−pq2  . Although q1(−1) is a compact the second component, a multiplication operator, is not compact. In addition, Lp,1 is not L-compact. Indeed, suppose qn = (q1n, q2n), Lqn ∈Hk+1 × Hk such that, ∥qn∥Hk ≤C1, and ∥Lqn∥Hk ≤C2. 
It does not follow that Up(y)q2n contains a convergent subsequence in Hk. Assuming the bound ∥Lqn∥Hk only allows us to deduce that (1−y2)∂yq2 ∈Hk(−1, 1), rather than ∂yq2 ∈Hk(−1, 1) (which would let us extract a convergent subsequence by the compact embedding Hk+1(−1, 1) ⊂⊂Hk(−1, 1)). As a step towards finding information on the spectrum, we therefore investigate the dissipation of the overall operator Lp, firstly in the base space H1 × L2. By Lemma 4.1 Re⟨Lpq, q⟩0 = Re⟨Lq + Lp,1q, q⟩H1×L2 ≤−1 2∥q∥2 0 + Re Z 1 −1 2p 1 + y√1 −p|q2(y)|2 dy ≤−1 2(∥q1∥2 H1 + ∥q2∥2 L2) + 2(1 + p 1 −p)∥q2∥2 L2 = −1 2∥q1∥2 H1 + 3 2 + 2 p 1 −p  ∥q2∥2 L2 we see that dissipativity is not guaranteed for H0. We therefore look in higher order spaces. To proceed with computations in higher order Sobolev spaces, we first need a technical lemma showing how the operator Lp commutes with derivatives ∂k y. Lemma 4.5. Given p ∈(0, 1), and k ≥0, we have ∂k yLp = Lp,k∂k y + L′ p,k where Lp,k = −y∂y −k 1 ∂yy −y∂y −k −1 + Up(y)  and L′ p,k = 0 0 0 [∂k y, Up(y)]  which satisfies the pointwise bound (4.4) |L′ p,kq| ≲p  0 Pk−1 j=0 |∂j yq2|  Proof. To compute the commutation with diagonal terms, we use that [∂k y, y∂y] = k∂k y. STABLE TYPE I BLOW-UP FOR THE 1D WAVE EQUATION 21 For the pointwise bound, we compute the non-zero component of L′ p,kq. [∂k y, Up(y)]q2 = ∂k y (Up(y)q2) −Up(y)∂k yq2 = k X j=0 k j  ∂k−j y Up(y)  ∂j yq2  −Up(y)∂k yq2 = k−1 X j=0 k j  ∂k−j y Up(y)  ∂j yq2  Note that ∂k−j y  2p 1 + y√1 −p  = (−1)k−j(k −j)! p 1 −p k−j 2p (1 + y√1 −p)k−j+1 so ∂k−j y Up(y) L∞(−1,1) = (k −j)! 2p √1 −p k−j 1 + y√1 −p k−j+1 L∞(−1,1) ≤k! 2p √1 −p k−j 1 −√1 −p k−j+1 which we take as the p-dependent constant in the pointwise bound (4.4). □ We now define an energy which is an equivalent inner product to Hk, isolating the highest order derivative. Definition 4.6. For k ≥0, we define the new inner product ⟨⟨·, ·⟩⟩by ⟨⟨Ψ, eΨ⟩⟩k := (∂k yΨ, ∂k y eΨ)L2(−1,1) + (Ψ, eΨ)L2(−1,1) ≡ Z 1 −1 ∂k yΨ∂ky eΨ dy + Z 1 −1 ΨeΨ dy For tuples q = (q1, q2) ∈Hk+1(−1, 1) × Hk(−1, 1), we use ⟨⟨q, eq⟩⟩k := ⟨⟨q1, eq1⟩⟩k+1 + ⟨⟨q2, eq2⟩⟩k The induced norm ∥|Ψ∥|k := ⟨⟨Ψ, Ψ⟩⟩is equivalent to the standard Hk norm (See [2],p.135). Now, in order to control L′ p,k, we need the following coercivity estimate. Lemma 4.7 (Subcoercivity). Given m ≥1, there exists a sequence ϵn > 0, with limn→∞ϵn = 0, and {Πi}1≤i≤n ⊂Hm(−1, 1), and constants cn > 0 such that for all n ≥0 and Ψ ∈Hm(−1, 1), (4.5) ϵn⟨⟨Ψ, Ψ⟩⟩m ≥∥Ψ∥2 m −cn n X i=1 (Ψ, Πi)2 L2(−1,1) Proof. A proof can be found in [26, 21, 16]. □ Thanks to the above Lemma, we are able to decompose the linearised operator into a sum of a maximally dissipative operator plus a finite rank projection. Proposition 4.8 (Maximal Dissipativity). Let p ∈(0, 1], k ≥4, for ε ∈(0, 1 2), there exists an integer N = N(ε, k, p) and vectors (Πi,p)1≤i≤N ⊂Hk such that for the finite rank projection operator (4.6) bPp = N X i=1 ⟨⟨·, Πp,i⟩⟩kΠp,i the modified operator bLp := Lp −bPp 22 OLIVER GOUGH satisfies: (Dissipativity): (4.7) ∀q ∈D(Lp), Re⟨⟨bLpq, q⟩⟩k ≤−  k −7 2 −ε  ⟨⟨q, q⟩⟩k ≤  ε −1 2  ⟨⟨q, q⟩⟩k (Maximality) (4.8) λ −bLp is surjective for some λ > 0. Proof. For q ∈(C∞[−1, 1])2, we have using Lemma 4.5, that ⟨⟨−Lpq, q⟩⟩k = (−Lpq, q) ˙Hk+1× ˙Hk + (−Lpq, q)L2×L2 = (−(∂k+1Lpq)1, ∂k+1q1)L2 + (−(∂kLpq)2, ∂kq2)L2 + (−Lpq, q)L2×L2 = (−(Lp,k+1∂k+1q)1, ∂k+1q1)L2 + (−(Lp,k∂kq)2, ∂kq2)L2 + (−(L′ p,k+1q)1, ∂k+1q1)L2 + (−(L′ p,kq)2, ∂kq2)L2 + (−Lpq, q)L2×L2. 
For the first two terms of the above expression, integration by parts gives us Re(−(Lp,k+1∂k+1q)1, ∂k+1q1)L2 + Re(−(Lp,k∂kq)2, ∂kq2)L2 =  k + 1 2  (∥∂k+1q1∥2 L2 + ∥∂kq2∥2 L2) − Up(y)∂kq2, ∂kq2  L2 + 1 2  (∂k+1q1(±1)) ∓(∂kq2(±1)) 2 ≥  k + 1 2  (∥∂k+1q1∥2 L2 + ∥∂kq2∥2 L2) −2(1 + p 1 −p)∥∂kq2∥2 L2 ≥  k + 1 2  ∥∂k+1q1∥2 L2 +  k −7 2  ∥∂kq2∥2 L2. As for the last three terms, by Youngs inequality we have that for any ε > 0 |(−(L′ p,k+1q)1, ∂k+1q1)L2| + |(−(L′ p,kq)2, ∂kq2)L2| + |(−Lpq, q)L2×L2| = |(−(L′ p,kq)2, ∂kq2)L2| + |(Lpq, q)L2×L2| ≤∥(L′ p,kq)2∥L2∥∂kq2∥L2 + |(Lpq, q)L2×L2| ≤ε 2∥∂kq2∥2 L2 + cε,p,k ∥q1∥2 Hk + ∥q2∥2 Hk−1  . Therefore Re⟨⟨−Lpq, q⟩⟩k ≥  k + 1 2  ∥∂k+1q1∥2 L2 +  k −7 2 −ε 2  ∥∂kq2∥2 L2 −cε,p,k∥q∥2 Hk−1 ≥  k −7 2 −ε 2  ∥∂k+1q1∥2 L2 + ∥∂kq2∥2 L2  −cε,p,k∥q∥2 Hk−1. STABLE TYPE I BLOW-UP FOR THE 1D WAVE EQUATION 23 Applying the subcoercivity estimate for q1,q2 respectively allows us to trade the lower derivative terms into a ε 2 multiple of the first factor, modulo a finite rank defect, i.e ⟨⟨−Lpq, q⟩⟩k ≥(k −7 2 −ε)⟨⟨q, q⟩⟩k −c′ ε,k,p N1 X i=1 (q1, Π(1) i )2 L2 + N2 X i=1 (q2, Π(2) i )2 L2 ! . Now consider the linear functionals given by q → q c′ ε,k,p  q1, Π(1) i  L2 and q → q c′ ε,k,p  q2, Π(2) i  L2 which are actually continuous on Hk by Cauchy Schwarz. Thus by the Riesz Representation Theorem there exists (Πp,i)1≤i≤N1+N2 ⊂Hk such that ⟨⟨q, Πp,i⟩⟩k = q c′ ε,k,p  q1, Π(1) i  L2 for 1 ≤i ≤N1 ⟨⟨q, Πp,i⟩⟩k = q c′ ε,k,p  q1, Π(2) i  L2 for N1 + 1 ≤i ≤N1 + N2 := N. By defining the projection operator bPp as in (4.6) and bLp := Lp −bPp, we clearly get the dissipativity of bLp when q ∈(C∞[−1, 1])2. However, we must prove (4.7) for q ∈D(Lp) = Hk. Indeed, since Lp generates a C0 semigroup on Hk by Proposition 4.4, we have the well-defined characterisation Lpq = limτ→0 (Sp(τ)−I)q τ ∈Hk for arbitrary q ∈Hk. Let us fix q ∈Hk. Since Lp = L + Lp,1, where the latter operator is bounded on Hk, we have that Lq ∈Hk. By the density of (C∞[−1, 1])2 in Hk, there is a sequence (fn)n∈N ⊂(C∞[−1, 1])2 with ∥fn −(−Lq)∥k →0. By the constructive proof of Lemma 4.2, for every n ∈N, there exists a unique qn ∈(C∞[−1, 1])2 such that −Lqn = fn. We now show that qn →q. Indeed, by dissipativity of the free wave operator in Lemma 4.1 and Cauchy Schwarz, we have −1 2∥qn −qm∥2 k ≥⟨L(qn −qm), qn −qm⟩k ≥−∥L(qn −qm)∥k∥qn −qm∥k =⇒∥qn −qm∥2 k ≤2∥L(qn −qm)∥k →0. Thus (qn) is Cauchy and so there exists q′ ∈Hk with ∥qn −q′∥k →0. Since the operator L is closed, we have −Lq = −Lq′. By Proposition 4.3, and the Hille-Yosida Theorem (see [14]), (−1 2, ∞) ⊂ρ(L). In particular 0 ∈ρ(L), meaning −L is invertible and q = −q′ and we conclude qn →q′ = q. Now, since bLp = L + Lp,1 −bPp, where Lp,1 and bPp are bounded on Hk, we also have that bLpqn →bLpq. Therefore taking n →∞, (4.7) holds for q ∈D(Lp). We finally show that bLp is maximal, i.e λ −bLp : Hk →Hk is surjective for all λ > 0. Since we have shown dissipativity of bLp, it suffices to show this for one such λ > 0. Because Lp was the generator of a C0 semi-group with growth bound ω = −1 2 + ∥Lp,1∥L(Hk) < ∞, we can always find Reλ > ω with Reλ ∈ρ(Lp) meaning λ −Lp invertible. Again by Hille-Yosida, we have ∥(λ −Lp)−1 ∥L(Hk) ≲ 1 (λ −ω) ≲2 λ. Then writing (λ −bLp) = λ −Lp + bPp = (λ −Lp)(I + (λ −Lp)−1 bPp) we have by the boundedness of bPp and for λ > 0 sufficiently large ∥(λ −Lp)−1 bPp)∥L(Hk) < 1 24 OLIVER GOUGH so (I + (λ −Lp)−1 bPp)−1 is well-defined by a Neumann series. Therefore the above product is also invertible, and in particular surjective. 
□ Remark 4.1 (Choice of regularity). The Sobolev regularity k ≥4 could be chosen larger to improve the dissipative estimate (4.7), thereby pushing the spectrum to the left. However we choose the smallest k ∈N that ensures dissipativity. By comparison, the (ux)2 problem allows similar estimates for the solutions of wider parameter range α > 0, and consequently kα depends on α. Here we merely have parameter p ∈(0, 1] so the smallest k does not depend on p. 4.3.1. The spectrum of the linearised operator. We can now give a sufficiently detailed description of the spectrum of Lp. Proposition 4.9. Take p ∈(0, 1), k ≥4, then ∃ω0 ∈(0, 1 2) such that σ(Lp) ⊂{z ∈C : Rez ≤−ω0} ∪{0, 1} and {0, 1} ⊂σp(Lp). Furthermore, the geometric eigenspaces of eigenvalues 0 and 1 are spanned by the functions f0,p and f1,p respectively, where f0,p(y) = 1 0  and f1,p(y) = −p√1−p 1+y√1−p −p√1−p (1+y√1−p)2 ! . Moreover, there is a generalised eigenfunction g0,p of the eigenvalue 0 given by g0,p(y) = −log 1 + y√1 −p  − p 2(1−p) 1 1+y√1−p (2−p)y+2√1−p 2√1−p(1+y√1−p)2 ! satisfying Lp g0,p = f0,p, L2 p g0,p = 0. Proof. Fix ε ∈(0, 1 2) and set ω2 := 1 2 −ε > 0. By Proposition 4.8, bLp −ω2I is maximally dissipative, hence σ(bLp) ⊂{z ∈C : Rez ≤−ω2}. Since bLp = Lp −bPp with bPp compact, Kato’s compact-perturbation theorem [20, Thm. 5.35, p. 244] yields σess(Lp) = σess(bLp) ⊂{z ∈C : Rez ≤−ω2}. In particular, 0 /∈σess(Lp). Since σ(Lp) = σess(Lp)∪σdisc(Lp), the mode stability of Proposition 3.8 implies that the only eigenvalues with Rez ≥0 are {0, 1} ⊂σdisc(Lp). Possible spectrum in the strip {−ω2 < Rez < 0} is therefore discrete. Discrete eigenvalues can only accumulate at points of the essential spectrum (or at infinity), and 0 /∈σess(Lp), so there is no accumulation at 0. Hence the set Σ := σ(Lp) ∩{−ω2 < Rez < 0} is finite. Define ω0 :=    min  ω2, −maxλ∈Σ Reλ , Σ ̸= ∅, ω2, Σ = ∅. Then ω0 ∈(0, 1 2) and σ(Lp) ⊂{z ∈C : ℜz ≤−ω0} ∪{0, 1}. STABLE TYPE I BLOW-UP FOR THE 1D WAVE EQUATION 25 A direct computation shows f0,p, f1,p ∈D(Lp) are eigenfunctions with eigenvalues 0 and 1, respectively, and g0,p ∈D(L2 p) is a generalised eigenfunction satisfying Lpg0,p = f0,p and L2 pg0,p = 0. Finally, as proved in the appendix in Proposition A.1, any Hk+1(0, 1) eigenfunction (with k ≥4) is smooth, and the geometric eigenspaces at 0 and 1 are one-dimensional; hence they are spanned by f0,p and f1,p, respectively. □ Due to the presence of unstable modes, we now introduce the Riesz projectors of isolated eigenvalues 0 and 1 for Lp P0,p := Z γ0 (zI −Lp)−1 dz P1,p := Z γ1 (zI −Lp)−1 dz where we take γj : [0, 1] →C as γ0(s) = ω0 2 e2πis, γ1(s) = 1 + 1 2e2πis where the former radius is sufficiently small so that γ0 does not intersect σ(bLp). Recall that the algebraic multiplicity is the rank of the projection Pi,p, i.e the dimension of the range of the Pi,p. The next result shows that the algebraic multiplicity of the eigenvalue 0 exceeds the geometric multiplicity. Lemma 4.10. The projections P1,p and P0,p have rank 1 and 2 respectively. Proof. The argument is identical to that of [16] and we sketch only the main points for completeness. Since Lp = bLp + Pp with Pp finite rank, the eigenvalues 0 and 1 are isolated of finite algebraic multiplicity ([20, Thm 5.28]). The Riesz projections Pi,p depend continuously on p ∈(0, 1), so their ranks are constant ([20, Lemma 4.10 ]). It suffices to compute the ranks for a fixed value, say p = 3 4. 
As in [16], we have the Jordan chain Lpf0,p = 0, Lpg0,p = f0,p, and Lemma B.1 in the Appendix rules out any further generalised eigenfunction in H4. Hence rank P0,p = 2. For λ = 1, Lemma B.2 shows that (Lp −I)v = f1,p has no solution in D(Lp), so λ = 1 is simple and rank P1,p = 1. □ 4.4. A Resolvent Estimate. For 0 < ω1 < ω0, define the rectangle Ωm,n := {z ∈C : Rez ∈[−ω1, m], Imz ∈[−n, n]} and Ω′ m,n := {z ∈C : Rez ≥−ω1}\Ωm,n Proposition 4.11. Take p0 ∈(0, 1), and k ≥4, and ω1 ∈(0, ω0) and ε ∈ 0, min{ p0 2 , 1−p0 2 }  , then there exist m, n > 0 such that Ω′ m,n ⊂ρ(Lp) and sup λ∈Ω′m,n ∥(λ −Lp)−1∥L(Hk) ≤C for all p ∈Bε(p0), and C > 0 is a constant independent of p. Proof. The proof follows the same argument to [16]. □ 26 OLIVER GOUGH 5. The Linearised Evolution Proposition 5.1. Take p0 ∈(0, 1), k ≥4, ω1 ∈(0, ω0), ε ∈ 0, min{ p0 2 , 1−p0 2 }  and p ∈Bε(p0), then the following properties hold [Sp(τ), P1,p] = [Sp(τ), P0,p] = 0 such that Sp(τ)P1,p = eτP1,p Sp(τ)P0,p = P0,p + τLpP1,p Sp(τ)ePpq Hk ≤Me−ω1τ ePpq Hk for all τ ≥0, where ePp := I −P1,p −P0,p, and M = M(p0, ω1) is a constant depending only on p0, ω1. Moreover, we have ran(P1,p) = span(f1,p) ran(P0,p) = span(f0,p, g0,p) and P0,pP1,p = P1,pP0,p = 0 Proof. Any C0-semigroup commutes with its generator, and as a consequence of Hille Yosida Theorem, it commutes with the resolvent of its generator. Therefore Sp(τ) commutes with Pλ,p for λ ∈{0, 1}. This proves the first claim. For λ = 1 and λ = 0 respectively, by Lemma 4.10 the dimension of ran(Pλ,p) is 1 and 2 and so by Proposition 4.9 the spaces must be a span of {f1,p} and {f0,p, g0,p}. For the next identities, by definition of the semigroup, for any q ∈Hk we have ∂τSp(τ)q = LpSp(τ)q and by commutation and the eigenvalue property we have in particular ∂τSp(τ)P1,pq = LpSp(τ)P1,pq = Sp(τ)LpP1,pq = Sp(τ)P1,pq. Together with the initial condition Sp(τ)P1,pq|τ=0 = P1,pq|τ=0, the above implies Sp(τ)P1,p = eτP1,p. For the next identity, we first have ∂τSp(τ)P0,pq = LpSp(τ)P0,pq = Sp(τ)LpP0,pq Sp(τ)P0,pq|τ=0 = P0,pq Differentiating again, ∂ττSp(τ)P0,pq = ∂τSp(τ)LpP0,pq = LpSp(τ)LpP0,pq = Sp(τ)L2 pP0,pq = 0 Together with the other initial condition ∂τSp(τ)P0,pq|τ=0 = LpP0,pq|τ=0, the above implies Sp(τ)P0,p = P0,p + τLpP0,p We now prove the p-uniform growth estimate of Sp(τ)|ePpHk. By Theorem 6.17 [20] we have the decomposition of the spectrum of the reduced operator σ(Lp|ePpHk) ⊂{z ∈C : Rez ≤−ω0} and the resolvent of this reduced STABLE TYPE I BLOW-UP FOR THE 1D WAVE EQUATION 27 operator is now analytic in the box Ωm,n containing unstable modes, for all m, n > 0. Now, using Proposition 4.11, the resolvent of the reduced operator satisfies (λ −Lp|ePpHk)−1 L(Hk) ≤C for all λ ∈{z ∈C : Rez ≥−ω1} and p ∈Bε(p0) where C is independent of p. Finally, by the Gearhart- Prüss-Greiner Theorem ([14], Theorem 1.11, p.302), this is equivalent to uniform exponential stability, i.e the growth estimate with growth bound −ω1. □ 6. Nonlinear Stability We now write the full nonlinear system for the perturbation q (6.1) ( ∂τq = Lpq + N(q) q(τ = 0, y) = q0(y) where N(q) =  0 q2 2  . By Duhamel’s principle, we have q(τ, ·) = Sp(τ)q0(·) + Z τ 0 Sp(τ −τ ′)N(q(τ ′)) dτ ′. 6.1. Stabilised Nonlinear Evolution. Definition 6.1. 
For k ≥4, and ω1 ∈(0, ω0) arbitrary but fixed, we define the Banach space X k(−1, 1), ∥· ∥X k(−1,1)  by X k(−1, 1) := {q ∈C [0, ∞); Hk(−1, 1)  : ∥q(τ)∥Hk ≲e−ω1τ for all τ ≥0} ∥q∥X k(−1,1) := sup τ≥0 (eω1τ∥q∥Hk) We also define the closed balls of size ε > 0 Hk ε(−1, 1) := {f ∈Hk(−1, 1) : ∥f∥Hk(−1,1) ≤ε} X k ε (−1, 1) := {q ∈X k(−1, 1) : ∥q∥X k(−1,1) ≤ε} By formally applying the projections Pλ,p to the Duhamel formula, and using the properties of Proposition 5.1 and writing Pp := P1,p + P0,p we obtain the correction Cp(f, q) = Ppf + P0,p Z ∞ 0 N(q(τ ′)) dτ ′ + LpP0,p Z ∞ 0 (−τ ′)N(q(τ ′)) dτ ′ + P1,p Z ∞ 0 e−τ ′N(q(τ ′)) dτ ′ We note that C(f, q) ∈ran(Pp) ⊂Hk. We then anticipate that by subtracting the initial data by this term, we will be able to close a fixed point argument. Before this we state an estimate on the nonlinearity N. Lemma 6.2. for k ≥1 we have ∥N(q)∥Hk ≤∥q∥2 Hk ∥N(q) −N(eq)∥Hk ≲∥q −eq∥Hk(∥q∥Hk + ∥eq∥Hk) We now prove the existence of a small perturbation to the corrected abstract Cauchy problem. 28 OLIVER GOUGH Proposition 6.3. Take p0 ∈(0, 1), k ≥4, and ω1 ∈(0, ω0). There are constants 0 < ε0 < min{ p0 2 , 1−p0 2 } and C0 = C0(p, k, ω1) > 1 such that for all p ∈Bε0(p0) and all f ∈Hk(−1, 1) with ∥f∥Hk ≤ε0 C0 , there is a unique global solution qp ∈X k(−1, 1) satisfying ∥qp∥X k ≤ε0 and qp(τ) = Sp(τ)(f −Cp(f, q)) + Z τ 0 Sp(τ ′ −τ)N(q(τ ′)) dτ ′ for all τ ≥0. Moreover, the data-to-solution map Hkε0 C0 (−1, 1) →X k(−1, 1), f 7→qp is Lipschitz continuous. Proof. We take p0 ∈(0, 1), k ≥4 and ε0 ∈(0, min{ p0 2 , 1−p0 2 }) and ω1 ∈(0, ω0) arbitrary but fixed so that for all p ∈Bε0(p0) the semigroup Sp(τ) is well-defined and satisfies Proposition 5.1. We denote the solution operator Kp(f, q), depending on initial data f and solution q, as Kp(f, q)(τ) = Sp(τ)(f −Cp(f, q)) + Z τ 0 Sp(τ ′ −τ)N(q(τ ′)) dτ ′. Using the expression for the correction C(f, q), we rewrite this, recalling that ePp := P0,p + P1,p, as Kp(f, q)(τ) = Sp(τ)ePpf + Z τ 0 Sp(τ −τ ′)ePpN(q(τ ′)) dτ ′ −P0,p Z ∞ τ N(q(τ ′)) dτ ′ −LpP0,p Z ∞ τ (τ −τ ′)N(q(τ ′)) dτ ′ −P1,p Z ∞ τ eτ−τ ′N(q(τ ′)) dτ ′ (6.2) We now show the first step of the fixed point argument. Namely, that for ε0 > 0, f ∈Hk sufficiently small, that Kp(f, q) maps from the ball X k ε0 to itself. Indeed, by Proposition 5.1, and the nonlinear estimates in Lemma 6.2 we have ∥Kp(f, q)∥Hk(τ) ≲e−ω1τ∥f∥Hk + Z τ 0 e−ω1(τ−τ ′)∥q(τ ′)∥2 Hk dτ ′ + Z ∞ τ ∥q(τ ′)∥2 Hk dτ ′ + Z ∞ τ (τ ′ −τ)∥q(τ ′)∥2 Hk dτ ′ + Z ∞ τ eτ−τ ′∥q(τ ′)∥2 Hk dτ ′ ≲e−ω1τ ε0 C0 + ε2 0 Z τ 0 e−ω1(τ−τ ′)e−2ω1τ ′ dτ ′ + ε2 0 Z ∞ τ e−2ω1τ ′ dτ ′ + ε2 0 Z ∞ τ (τ ′ −τ)e−2ω1τ ′ dτ ′ + ε2 0 Z ∞ τ eτ−τ ′e−2ω1τ ′ dτ ′ ≲e−ω1τ ε0 C + ε2 0  ≤e−ω1τε0 for all τ ≥0, ε0 ∈(0, min{ p0 2 , 1−p0 2 }) sufficiently small, p ∈Bε0(p0), f ∈Hk satisfying ∥f∥Hk ≤ε0 C and q ∈X k ε0 and any C0 > 1. This shows that for such f ∈Hkε0 C0 , Kp(f, ·) : X k ε0 →X k ε0. STABLE TYPE I BLOW-UP FOR THE 1D WAVE EQUATION 29 We now show that Kp(f, ·) is a contraction. Using (6.2) and Lemma 6.2 we have ∥Kp(f, q)(τ) −Kp(f, eq)(τ)∥Hk ≲ Z τ 0 e−ω1(τ−τ ′)∥N(q)(τ ′) −N(eq)(τ ′)∥Hk dτ ′ + Z ∞ τ ∥N(q)(τ ′) −N(eq)(τ ′)∥Hk dτ ′ + Z ∞ τ (τ ′ −τ) ∥N(q)(τ ′) −N(eq)(τ ′)∥Hk dτ ′ + Z ∞ τ eτ−τ ′∥N(q)(τ ′) −N(eq)(τ ′)∥Hk dτ ′ ≲ϵ0 h Z τ 0 e−ω1(τ−τ ′)e−2ω1τ ′ dτ ′ + Z ∞ τ e−2ω1τ ′ dτ ′ + Z ∞ τ (τ ′ −τ)e−2ω1τ ′ dτ ′ + Z ∞ τ eτ−τ ′e−2ω1τ ′ dτ ′i ∥q −eq∥X k ≲ϵ0e−ω1τ∥q −eq∥X k. for all q, eq ∈X k ε0, τ ≥0, p ∈Bε0(p0) and f ∈Hk. 
We can now choose ε0 small enough so that ∥Kp(f, q) −Kp(f, eq)∥X k ≤1 2∥q −eq∥X k By Banach’s fixed point theorem, for f ∈Hkε0 C0 , there exists a unique fixed point qp ∈X k ε0 such that Kp(f, qp) = qp. Finally for Lipschitz continuity, we take f,ef ∈Hkε0 C0 and the associated Kp-fixed points qp, eqp ∈X k ε0, then firstly we write qp(τ) −eqp(τ) = Kp(f, qp)(τ) −Kp(ef, eqp)(τ) = Kp(f, qp)(τ) −Kp(f, eqp)(τ) + Sp(τ)ePp(f −ef). Using the contraction estimate we have 1 2∥qp −eqp∥X ≤∥Sp(τ)ePp(f −ef)∥X k ≲∥f −ef∥X k which proves Lipschitz continuous dependence on the initial data. □ 6.2. Stable flow near the blow up solution. Recall that we actually want to solve the Duhamel equation without the correction term C(f, q). To do this we take a p0, T0, κ0 that we will later choose so that the correction term vanishes. Definition 6.4. Take p, p0 ∈(0, 1), T, T0 > 0, κ, κ0 ∈R and k ≥4, with T < T0 √1 −p0. We define the p, T, κ-dependent initial data operator Up,T,κ : Hk(−1, 1) →Hk(−1, 1), f 7→f T + f T 0 −fp,κ where f = f1(y) f2(y)  and f T (y) =  f1(Ty) Tf2(Ty)  , f T 0 (y) =   eUp0,1,κ0( T T0 y) T T0 p0 +  T T0 2 y∂y eUp0,1,κ0( T T0 y)  , fp,κ(y) = eUp,1,κ(y) p + y∂y eUp,1,κ(y) ! We remark that this operator gives initial data for the perturbation for the abstract Cauchy problem and splits the initial data into f T and f T 0 , the initial data for a blow up solution with parameters p0, T0, κ0. 30 OLIVER GOUGH Lemma 6.5. Let p, p0 ∈(0, 1), T, T0 > 0, κ, κ0 ∈R, k ≥4, and T < T0 √1 −p0. Then Up,T,κ(f) = f T +  (κ −κ0) −p  T T0 −1  + p 2(1 −p)(p0 −p)  f0,p −  T T0 −1  √1 −p f1,p + (p0 −p)g0,p + r  p, T T0  for any f ∈Hk and f0,p, f1,p, g0,p are introduced in Proposition 4.9 and ∥r  p, T T0  ∥Hk ≲  T T0 −1 2 + (p −p0)2 for p ∈Bε1(p0) and T ∈BT0ε(T0) where ε1 = min  p0 2 , 1−p0 2 , δ 2 , with δ := √1 −p0 −T T0 . Proof. Recall eUp,1,κ = −p log(1 + y√1 −p) + κ. Define the distance δ := √1 −p0 − T T0 , then let ε1 = min{ p0 2 , 1−p0 2 , δ 2} and consider p ∈Bε1(p0) and T ∈BT0ε(T0). For fixed y ∈[−1, 1], a Taylor expansion around (p0, κ0, T0) in each component gives f T 0 −fp,κ =  (κ −κ0) −p0  T T0 −1  + p0 2(1 −p0)(p0 −p)  f0,p0 −  T T0 −1  √1 −p0 f1,p0 + (p0 −p)g0,p0 + er  p, T T0  where  er  p, T T0  i =  T T0 −1 2 Z 1 0 T 2 0 h ∂T ′T ′[f T ′ 0 ]i|T ′=T0+z(T −T0) i (1 −z) dz −(p −p0)2 Z 1 0  ∂p′p′[fp′,κ]i|p′=p+z(p−p0)  (1 −z) dz for i = 1, 2. Thus f T 0 −fp,κ =  (κ −κ0) −p  T T0 −1  + p 2(1 −p)(p0 −p)  f0,p −  T T0 −1  √1 −p f1,p + (p0 −p)g0,p + r  p, T T0  where r  p, T T0  = er(p, T) +  (κ −κ0) −  T T0 −1  (p −p0) +  p0 2(1 −p0) − p 2(1 −p)  (p0 −p)  (f0,p0 −f0,p) +  1 √1 −p − 1 √1 −p0   T T0 −1  (f1,p −f1,p0) + (p −p0)(g0,p0 −g0,p) =  1 √1 −p − 1 √1 −p0   T T0 −1  (f1,p −f1,p0) + (p −p0)(g0,p0 −g0,p) Note that the remainder er(p, T) contains the terms ∂T T f T 0 and ∂ppfp,κ which are smooth by construction, and are uniformly bounded in [−1, 1] for all p ∈Bε1(p0) and T ∈BT0ε(T0). Applying Youngs inequality to the above yields the bound ∥r  p, T T0  ∥Hk ≲  T T0 −1 2 + (p −p0)2 . □ STABLE TYPE I BLOW-UP FOR THE 1D WAVE EQUATION 31 Proposition 6.6. Let p0 ∈(0, 1), T0 > 0, κ0 ∈R, k ≥4, and 0 < ω1 < ω0. There are constants ε2 > 0 and C2 > 1 such that for all real-valued f ∈Hk(−1, 1) with ∥f∥Hk ≤ε2 C2 2 , there exists parameters p∗∈(0, 1), T0 > 0, κ∗∈R, and a unique qp∗,κ∗,T ∗∈C [0, ∞), Hk(−1, 1)  with ∥qp∗,κ∗,T ∗(τ)∥Hk ≤ε2e−ω1τ and (6.3) qp∗,κ∗,T ∗= Sp∗(τ)Up∗,κ∗,T ∗(f) + Z τ 0 Sp∗(τ −τ ′)N(qp∗,κ∗,T ∗(τ ′)) dτ ′ with |p∗−p0| + |κ∗−κ| + T T0 −1 ≤ε2 C2 Proof. 
Take the ε0 > 0 and C0 > 1 obtained from Proposition 6.3 and ε1 > 0 from Lemma 6.5. We let 0 < ε′ ≤min{ε0, ε1} and a C′ ≥C0. We further define ε2 and C2 via ε2 := ε′ M C2 := MC′ for some M ≥1 which we will choose later. First we bound Up,T,κ(f) for f ∈Hk with ∥f∥Hk ≤ ε2 C2 2 , p ∈B ε2 C2 (p0), T ∈B T0ε2 C2 (T0) and κ ∈B ε2 C2 (κ0). Indeed, ∥Up,T,κ(f)∥Hk ≤∥f∥Hk +  |κ −κ0| + p T T0 −1 + p 2(1 −p)|p −p0|  ∥f0,p∥Hk + 1 √1 −p T T0 −1 ∥f1,p∥Hk + |p −p0|∥g0,p∥Hk + ∥r  p, T T0  ∥Hk ≤ε2 C2 2 +  ε2 C2 + ε2 C2 + c1(p0) ε2 C2  + c2(p0) ε2 C2 + c3(p0) ε2 C2 + c4(p0) ε2 2 C2 2 ≤ε′ C′ for M sufficiently large. We conclude that it is possible to fix ε′ > 0 and C′ > 1, so that Up,T,κ satisfies the assumptions of the initial data in Proposition 6.3 which implies the existence and uniqueness of a solution qp,T,κ ∈X k(−1, 1) with ∥qp,T,κ∥X k ≤ε2 satisfying qp,T,κ(τ) = Sp(τ) (Up,T,κ(f) −Cp(Up,T,κ(f), qp,T,κ(τ))) + Z τ 0 Sp(τ −τ ′)N(qp,T,κ(τ ′)) dτ ′ for all p ∈B ε2 C2 (p0), T ∈B T0ε2 C2 (T0) and κ ∈B ε2 C2 (κ0). It remains to show the existence of parameters (p, T, κ) such that the correction Cp(Up,T,κ(f), qp,T,κ) ∈ran(Pp) is equal to 0. By Proposition 4.9, we have that ran(Pp) ⊂Hk is a finite dimensional sub-Hilbert space spanned by the linearly independent functions {g0,p, f0,p, , f1,p}. It suffices to show that there are parameters for which the linear functional ℓp,T,κ : ran(Pp) →R, g 7→(Cp(Up,T,κ(f), qp,T,κ), g)Hk 32 OLIVER GOUGH is zero on a basis of ran(Pp). Indeed, this would imply Cp(Up,T,κ(f), qp,T,κ) ⊥ran(Pp), which yields the result. Using the expression for the correction and Lemma 6.5, we have ℓp,κ,T (g) = (Ppf T , g)Hk +  (κ −κ0) −p  T T0 −1  + p 2(1 −p)(p0 −p)  (f0,p, g)Hk −  T T0 −1  √1 −p (f1,p, g)Hk + (p0 −p)(g0,p, g)Hk +  Ppr  p, T T0  , g  Hk +  P0,p Z ∞ 0 N(q(τ ′)) dτ ′, g  Hk +  LpP0,p Z ∞ 0 (−τ ′)N(q(τ ′)) dτ ′, g  Hk +  P1,p Z ∞ 0 e−τ ′N(q(τ ′)) dτ ′, g  Hk . We now want to construct a dual basis {g1 p, g2 p, g3 p} to the basis {g0,p, f0,p, f1,p}. To do this we use the Gram matrix Γ(p) =   (g0,p, g0,p)Hk (g0,p, f0,p)Hk (g0,p, f1,p)Hk (f0,p, g0,p)Hk (f0,p, f0,p)Hk (f0,p, f1,p)Hk (f1,p, g0,p)Hk (f1,p, f0,p)Hk (f1,p, f1,p)Hk  . with components Γ(p)ij, with i, j = 1, 2, 3. Since the eigenfunctions are linearly independent for p ∈(0, 1), the Gram matrix is invertible and we denote the elements of the inverse of Γ(p) as Γ(p)mn with m, n = 1, 2, 3. By defining gn p := Γ(p)n1g0,p + Γ(p)n2f0,p + Γ(p)n3f1,p n = 1, 2, 3 we get (g0,p, gj p)Hk = δj 1, (f0,p, gj p) = δj 2, (f1,p, gj p)Hk = δj 3 for j = 1, 2, 3. Moreover, by Cramer’s rule, the components of {g1 p, g2 p, g3 p} ⊂ran(Pp) depend smoothly on p since Γ(p) depends smoothly on p and the determinant is non-zero. We now define the continuous map F : B ε2 C2 (p0) × B ε2 C2 (κ0) × B T0ε2 C2 (T0) →R3 (p, κ, T) 7→  p0 + F1, κ0 + p  T T0 −1  − p 2(1 −p)(p0 −p) −F2, T0(1 + p 1 −pF3)  with real valued components defined as Fn(p, κ, T) := (Ppf T , gn p)Hk +  Ppr  p, T T0  , gn p  Hk +  P0,p Z ∞ 0 N(qp,κ,T (τ ′)) dτ ′, gn p  Hk +  LpP0,p Z ∞ 0 (−τ ′)N(qp,κ,T (τ ′)) dτ ′, gn p  Hk +  P1,p Z ∞ 0 e−τ ′N(qp,κ,T (τ ′)) dτ ′, gn p  Hk for n = 1, 2, 3, where we note Fn(p, κ, T) = ℓp,κ,T (gn p). To bound each component we use Lemma 6.2 and Lemma 6.5 to obtain |Fn(p, κ, T)| ≲∥gn p∥Hk  ∥f T ∥Hk + r  p, T T0  Hk + ∥qp,κ,T ∥2 X k  ≲ε2 C2 2 + ε2 2 C2 2 + ε2 2 = ε2 C2  1 MC′ + ε′ M 2C′ + ε′C′  whenever p ∈B ε2 C2 (p0), T ∈B T0ε2 C2 (T0) and κ ∈B ε2 C2 (κ0). We can now choose M and C′ large enough and ε′ small enough so that F is a self map. 
By Brouwer’s fixed point theorem, we obtain a fixed point STABLE TYPE I BLOW-UP FOR THE 1D WAVE EQUATION 33 (p∗, κ∗, T ∗) ∈B ε2 C2 (p0) × B ε2 C2 (κ0) × B T0ε2 C2 (T0) of F, satisfying p∗= p0 + F1(p∗, κ∗, T ∗) = p∗+ ℓp∗,κ∗,T ∗(g1 p∗) κ∗= κ0 + p∗ T ∗ T0 −1  − p∗ 2(1 −p∗)(p0 −p∗) −F2(p∗, κ∗, T ∗) = κ∗−ℓp∗,κ∗,T ∗(g2 p∗) T ∗= T0(1 + p 1 −p∗F3(p∗, κ∗, T ∗)) = T ∗+ T0 p 1 −p∗ℓp∗,κ∗,T ∗(g3 p∗) which implies ℓp∗,κ∗,T ∗≡0 on ran(Pp). □ 6.3. Proof of Theorem 1.2. Proof. Fix 0 < δ < ω0 with ω0 from Proposition 4.9, and let ω1 = ω0 −δ. We take ε2 > 0 and C2 > 1 from Proposition 6.6. Then for any real-valued (f, g) ∈Hk+1(R)×Hk(R) with ∥f∥Hk+1(R) +∥g∥Hk(R) ≤ε2 C2 there exists a unique mild solution qp∗,κ∗,T ∗=  q(1) p∗,κ∗,T ∗, q(2) p∗,κ∗,T ∗  ∈C([0, ∞); Hk(−1, 1)) satisfying (6.3) that is a jointly classical solution to the abstract Cauchy problem (6.1). We now reconstruct the solution to the original Cauchy problem. Recall we had U(τ, y) = Up∗,1,κ∗(τ, y) + q(1) p∗,κ∗,T ∗(τ, y). Transforming variables back u(x, t) = Up∗,1,κ∗  −log  1 −t T ∗  , x −x0 T ∗−t  + q(1) p∗,κ∗,T ∗  −log  1 −t T ∗  , x −x0 T ∗−t  is the unique solution to (1.1), with initial data, by construction of Up,κ,T (f, g), given by u(x, 0) = up0,1,κ,x0,T0(x, 0) + f(x) ∂tu(x, 0) = ∂tup,1,κ,x0,T (x, 0) + g(x) for x ∈BT ∗(x0). Moreover, by Proposition 6.6, we have ∥q(1) p∗,κ∗,T ∗(τ, ·)∥Hk+1(−1,1) ≤ε2e−ω1τ ∥q(2) p∗,κ∗,T ∗(τ, ·)∥Hk(−1,1) ≤ε2e−ω1τ which implies, by the scaling of the homogeneous semi-norms (T ∗−t)−1 2 +s∥u(·, t) −up∗,1,κ∗,x0,T (·, t)∥˙Hs(BT ∗−t(x0)) = q(1) p∗,κ∗,T ∗  −log  1 −t T ∗  , ·  ˙Hs(−1,1) ≤ε2(T ∗−t)ω0−δ for s = 0, 1, . . . , k + 1, and (T ∗−t)−1 2 +s∥∂tu(·, t) −∂tup∗,1,κ∗,x0,T (·, t)∥˙Hs−1(BT ∗−t(x0)) = q(2) p∗,κ∗,T ∗  −log  1 −t T ∗  , ·  ˙Hs−1(−1,1) ≤ε2(T ∗−t)ω0−δ for s = 1, . . . , k. □ Acknowledgments The author would like to thank Professor Manuel del Pino and Professor Monica Musso for their super- vision and helpful discussions. 34 OLIVER GOUGH Appendix A. Frobenius Analysis The eigenequation (3.3) may be written in standard Heun ODE form by letting φ(y) = ϕ(z) for y = 2z−1, (A.1) ϕ′′ +  γ z + δ z −1 + ε  z −  √1−p−1 2√1−p   ϕ′ +   λ(λ + 1)z −q z(z −1)  z −  √1−p−1 2√1−p   ϕ = 0 with γ = λ −√1 −p, δ = λ + √1 −p, ε = 2 and q = (√1 −p −1)λ2 + (√1 −p −1 + 2p)λ. By the Frobenius theorem, we can always find a local power series solution near the singular points by substituting ϕ(z) = (z −z0)s ∞ X k=1 ak(z −z0)k. Indeed, after substituting and matching the lowest order term, we obtain indicial polynomials for z0 ∈{0, 1} respectively. We have P0(s) = s(s −1 + γ) for z0 = 0 P1(s) = s(s −1 + δ) for z0 = 1 Proposition A.1. For p ∈(0, 1), λ ∈C, with Reλ ≥0 and k ≥4, then any Hk+1(0, 1) solution to (A.1) is in C∞[0, 1]. Moreover, the set of local solutions around z0 = 1 is one dimensional. Proof. First we do the Frobenius analysis around z0 = 0. We have the two roots to P0(s), s± ∈{0, 1 −λ + √1 −p} and the two cases: (1) s+ = 1 −λ + √1 −p, s−= 0 if 0 ≤Reλ ≤1 + √1 −p (2) s+ = 0, s−= 1 −λ + √1 −p if Reλ > 1 + √1 −p We now have the following sub-cases. Case 1.1: s+ −s−/∈N and 0 ≤Reλ ≤1 + √1 −p, then there are the two independent local solutions are ϕ11(z) = zs+h+(z) ϕ12(z) = h−(z) where h± are analytic functions around z = 0. However ϕ11 /∈H4+1(0, 1) (and therefore no larger regularity), since s+ /∈N, and Re s+ −5 < 0 for Reλ ≥0. Thus any local Hk+1(0, 1) solution must be a multiple of ϕ12, and hence is smooth and analytic at 0. 
Case 1.2: s+ −s−∈N and 0 ≤Reλ ≤1 + √1 −p, the two independent local solutions ϕ13(z) = zs+h+(z) ϕ14(z) = h−(z) + c ϕ13(z) log(z) with h± analytic around z = 0, and c ∈C which may be zero unless s+ = s−= 0. If c ̸= 0, then ϕ14 /∈H5(0, 1) due to the log(z) term, and so any Hk+1 solution would be a multiple of ϕ13. If c = 0, the solution is a multiple of ϕ13 or ϕ14 and so in either case the local solution is smooth and analytic at z = 0. Case 2.1: s+ = 0, s−= 1 −λ + √1 −p, and Reλ > 1 + √1 −p, the two independent local solutions are ϕ21(z) = h+(z) ϕ22(z) = zs−h−(z) + c ϕ21(z) log(z). STABLE TYPE I BLOW-UP FOR THE 1D WAVE EQUATION 35 Note that ϕ22 /∈H5(0, 1), regardless of the value of c, since s−< 0. Thus any local Hk+1 solution must be a multiple of h+ and is thus smooth and analytic at z = 0. We now do the analysis around z0 = 1, for which we have P1(s) = s(s −1 + δ) and roots s± ∈{0, 1 −λ − √1 −p}. We have the two cases (1) s+ = 1 −λ −√1 −p, s−= 0 if 0 ≤Reλ ≤1 −√1 −p (2) s+ = 0, s−= 1 −λ −√1 −p if Reλ > 1 −√1 −p We now have the following sub-cases. Case 1.1: s+ −s−/∈N and 0 ≤Reλ ≤1 −√1 −p. The two independent local solutions are eϕ11(z) = (z −1)s+h+(z −1) eϕ12(z) = h−(z −1) with h± analytic functions around 0. However, eϕ11 /∈H5(0, 1), since s+ /∈N and Re s+ −5 < 0. Thus any Hk+1 solution must be a multiple of eϕ12 and hence smooth and analytic at z = 1. Case 1.2: s+ −s−∈N, and 0 ≤Reλ ≤1 −√1 −p, then the two independent local solutions are eϕ13(z) = (z −1)s+h+(z −1) eϕ14(z) = h−(z −1) + c eϕ13(z −1) log(z −1). Since s+ = 1 −λ −√1 −p < 1 −λ ≤1 and s+ ∈N, then s+ = s−= 0. Thus c ̸= 0, and so eϕ14 /∈H5(0, 1). Thus any local Hk+1 solution must be a multiple of eϕ13 = h+(z −1) which is smooth and analytic around 1. Case 2.1: s+ = 0, s−= 1 −λ −√1 −p and Reλ > 1 −√1 −p. The two independent local solutions are eϕ21(z) = h+(z −1) eϕ22(z) = (z −1)s−+ c eϕ21(z −1)(z) log(z −1). We note eϕ22 /∈H5(0, 1) since s−< 0, regardless of the value of c. Thus any H5 local solution is smooth and analytic at 1. The above analysis shows that any local Hk+1 solution to (A.1) must be C∞, and analytic in a neigh- bourhood of z = 0, 1. In addition, since (A.1) is a second order elliptic equation with smooth coefficients on (0, 1), noting that the other singular point z = √1−p−1 2√1−p does not lie in (0, 1), any Hk+1(0, 1) solution must be in C∞(0, 1) by elliptic regularity theory. Together with the former point, we have that the solution must be in C∞[0, 1]. Moreover, the above analysis also shows that in each case the solution at each singular point is a multiple of the same analytic function, and thus the set is one-dimensional. □ Appendix B. Non-existence of other generalised eigenfunctions Lemma B.1. There do not exist v ∈H4 such that L 3 4 v = g0, 3 4 . Proof. Assume for contradiction that there exists (v1, v2) ∈H4(−1, 1) × H3(−1, 1) satisfying    −y∂yv1 + v2 = g0, 3 4 ,1 ≡−log 1 + y 2  − 3 2+y, ∂yyv1 −y∂yv2 +  3 2+y −1  v2 = g0, 3 4 ,2 ≡ 5y+4 (2+y)2 . 36 OLIVER GOUGH Substituting v2 = y∂yv1 + g0, 3 4 ,1 into the second equation gives (1 −y2)∂yyv1 −y(1 + 2y) y + 2 ∂yv1 = g(y) := 1 −y 2 + y log  1 + y 2  + −y2 + 3y + 7 (2 + y)2 . Introducing the integrating factor p(y) = (y + 2)2 1 −y 1 + y 1/2 , the equation becomes ∂yv1(y) = c p(y)−1 + p(y)−1 Z 1 y p(z)g(z) 1 −z2 dz. Computing the integral explicitly yields ∂yv1(y) = √y + 1  c + 3 √ 3 arctan  2y+1 √ 3−3y2  √1 −y (y + 2)2 + −y log(2) + (y −1) log(y + 2) −3 + log(2) (y + 2)2 . Asymptotics near y = 1. Let t = 1 −y ↓0. 
Using q 1−y 1+y ∼ √ t √ 2, 2y + 1 p 3 −3y2 ∼ 3 √ 6t, we have arctan  2y+1 √ 3−3y2  = π 2 − q 2 3 √ t + O(t3/2). Hence, 3 √ 3 arctan  2y+1 √ 3−3y2  = 3 √ 3π 2 −3 √ 2 √ t + O(t3/2), p(y) ∼ 9 √ 2 √ t. Therefore, ∂yv1(y) ∼ √ 2 9 √ t  c + 3 √ 3π 2  + O(1). For ∂yv1 ∈L2(−1, 1), the singular term ∼(1 −y)−1/2 must vanish, forcing c = −3 √ 3 2 π. With this choice of c, the leading singular term cancels and ∂yv1(y) = A + B p 1 −y + O(1 −y), y →1−, for some A, B ̸= 0. Differentiating gives ∂yyv1(y) ∼−B 2 (1 −y)−1/2, so |∂yyv1(y)|2 ∼C(1 −y)−1, which is not integrable near y = 1. Hence ∂yyv1 /∈L2(−1, 1), contradicting v1 ∈H2(−1, 1). Therefore, such v cannot exist. □ Lemma B.2. There do not exist u ∈H4 such that (L 3 4 −I)u = f1, 3 4 . STABLE TYPE I BLOW-UP FOR THE 1D WAVE EQUATION 37 Proof. Writing out the system and eliminating u2 as before yields (1 −y2)∂yyu1 −y(4y + 5) y + 2 ∂yu1 −1 + 2y y + 2 u1 = −3(y + 3) 4(y + 2)2 . A direct computation gives the general solution u1(y) = 3 log(y + 2) 4(y + 2) + c1 y + 2 + c2 √1 + y √1 −y(y + 2) + 3 √ 3√y + 1 4√1 −y(y + 2) arctan √ 3 p 1 −y2 2y + 1  . Near y = 1, the term with c2 behaves like (1 −y)−1/2 and thus forces c2 = 0 for u1 ∈L2(−1, 1). We now inspect the remaining arctan term. The argument √ 3√ 1−y2 2y+1 diverges as y →−1 2, with lim y→(−1/2)− √ 3 p 1 −y2 2y + 1 = −∞, lim y→(−1/2)+ √ 3 p 1 −y2 2y + 1 = +∞. Hence arctan(·) has a jump discontinuity of size π across y = −1 2, and since the pre-factor is finite and nonzero at y = −1 2, u1 itself has a finite jump there. Therefore u1 is discontinuous and thus not in H1(−1, 1), contradicting the assumption that u ∈H4. The claim follows. □ References [1] Milton Abramowitz and Irene A Stegun. Handbook of mathematical functions: with formulas, graphs, and mathematical tables, volume 55. Courier Corporation, 1965. [2] Robert A Adams and John JF Fournier. Sobolev spaces, volume 140. Elsevier, 2003. [3] Athanasios Chatzikaleas and Roland Donninger. Stable blowup for the cubic wave equation in higher dimensions. Journal of Differential Equations, 266(10):6809–6865, 2019. [4] Charles Collot. Type II blow up manifolds for the energy supercritical semilinear wave equation, volume 252. American Mathematical Society, 2018. [5] Charles Collot, Thomas Duyckaerts, Carlos Kenig, and Frank Merle. Soliton resolution for the radial quadratic wave equation in space dimension 6. Vietnam Journal of Mathematics, 52(3):735–773, 2024. [6] Charles Collot, Tej-Eddine Ghoul, and Nader Masmoudi. Singularity formation for burgers equation with transverse vis- cosity. arXiv preprint arXiv:1803.07826, 2018. [7] Roland Donninger. Nonlinear stability of self-similar solutions for semilinear wave equations. Communications in Partial Differential Equations, 35(4):669–684, 2010. [8] Roland Donninger. Strichartz estimates in similarity coordinates and stable blowup for the critical wave equation. Duke Mathematical Journal, 166:1627–1683, 2017. [9] Roland Donninger. Spectral theory and self-similar blowup in wave equations. Bulletin of the American Mathematical Society, 61(4):659–685, 2024. [10] Roland Donninger and Ziping Rao. Blowup stability at optimal regularity for the critical wave equation. Advances in Mathematics, 370:107219, 2020. [11] Roland Donninger and Birgit Schörkhuber. Stable self-similar blow up for energy subcritical wave equations. arXiv preprint arXiv:1201.4337, 2012. [12] Roland Donninger and Birgit Schörkhuber. Stable blow up dynamics for energy supercritical wave equations. 
Transactions of the American Mathematical Society, 366(4):2167–2189, 2014. [13] Roland Donninger and Birgit Schörkhuber. On blowup in supercritical wave equations. Communications in Mathematical Physics, 346(3):907–943, 2016. [14] Klaus-Jochen Engel and Rainer Nagel. One-parameter semigroups for linear evolution equations. Springer, 2000. [15] Lawrence C Evans. Partial differential equations, volume 19. American mathematical society, 2022. [16] Tej-eddine Ghoul, Jie Liu, and Nader Masmoudi. Blow-up of the one-dimensional wave equation with quadratic spatial derivative nonlinearity. arXiv preprint arXiv:2501.07887, 2025. 38 OLIVER GOUGH [17] Matthieu Hillairet, Pierre Raphaël, et al. Smooth type ii blow-up solutions to the four-dimensional energy-critical wave equation. Anal. PDE, 5(4):777–829, 2012. [18] Tetsuya Ishiwata and Takiko Sasaki. The blow-up curve of solutions to one dimensional nonlinear wave equations with the Dirichlet boundary conditions. Japan Journal of Industrial and Applied Mathematics, 37:339–363, 2020. [19] Jacek Jendrej. Construction of type ii blow-up solutions for the energy-critical wave equation in dimension 5. Journal of Functional Analysis, 272(3):866–917, 2017. [20] Tosio Kato. Perturbation theory for linear operators, volume 132. Springer Science & Business Media, 2013. [21] Jihoi Kim. Self-similar blow up for energy supercritical semilinear wave equation. arXiv preprint arXiv:2211.13699, 2022. [22] Joachim Krieger. On stability of type II blow up for the critical nonlinear wave equation on R3 +, volume 265 of Memoirs of the American Mathematical Society. American Mathematical Society, 2020. [23] Joachim Krieger and Joules Nahas. Instability of type ii blow up for the quintic nonlinear wave equation on mathbbR3+1. Bulletin de la Société mathématique de France, 143:339–355, 2015. [24] Joachim Krieger, Wilhelm Schlag, and Daniel Tataru. Slow blow-up solutions for the h1(R3) critical focusing semilinear wave equation. Duke Mathematical Journal, 2009. [25] Jie Liu. A note on the stability of self-similar blow-up solutions for superconformal semilinear wave equations. arXiv preprint arXiv:2504.05911, 2025. [26] Frank Merle, Pierre Raphaël, Igor Rodnianski, and Jeremie Szeftel. On blow up for the energy super critical defocusing nonlinear schrödinger equations. Inventiones mathematicae, 227(1):247–413, 2022. [27] Frank Merle and Hatem Zaag. Determination of the blow-up rate for the semilinear wave equation. American journal of mathematics, 125(5):1147–1164, 2003. [28] Frank Merle and Hatem Zaag. Determination of the blow-up rate for a critical semilinear wave equation. Mathematische Annalen, 331(2):395–416, 2005. [29] Frank Merle and Hatem Zaag. Existence and universality of the blow-up profile for the semilinear wave equation in one space dimension. Journal of Functional Analysis, 253(1):43–121, 2007. [30] Frank Merle and Hatem Zaag. On the stability of the notion of non-characteristic point and blow-up profile for semilinear wave equations. Communications in Mathematical Physics, 333(3):1529–1562, 2015. [31] Frank Merle and Hatem Zaag. Dynamics near explicit stationary solutions in similarity variables for solutions of a semilinear wave equation in higher dimensions. Transactions of the American Mathematical Society, 368(1):27–87, 2016. [32] Frank WJ Olver. NIST handbook of mathematical functions hardback and CD-ROM. Cambridge university press, 2010. [33] Matthias Ostermann. Stable blowup for focusing semilinear wave equations in all dimensions. 
Transactions of the American Mathematical Society, 377(07):4727–4778, 2024. [34] Mohammad A. Rammaha. Upper bounds for the life span of solutions to systems of nonlinear wave equations in two and three space dimensions. Nonlinear Analysis: Theory, Methods & Applications, 25(6):639–654, 1995. [35] Mohammad A. Rammaha. A note on a nonlinear wave equation in two and three space dimensions. Communications in Partial Differential Equations, 22(5-6):799–810, 1997. [36] T. Sasaki, S. Takamatsu, and H. Takamura. The lifespan of classical solutions of one dimensional wave equations with semilinear terms of the spatial derivative. AIMS Mathematics, 8(11):25477–25486, 2023. [37] Takiko Sasaki. Regularity and singularity of the blow-up curve for a wave equation with a derivative nonlinearity, 2018. preprint. [38] Takiko Sasaki. Convergence of a blow-up curve for a semilinear wave equation. Discrete and Continuous Dynamical Systems - Series S, 14(3):1133–1143, 2021. [39] Takiko Sasaki. Regularity of the blow-up curve at characteristic points for nonlinear wave equations, 2022. preprint. [40] K. Shao, H. Takamura, and C. Wang. Blow-up of solutions to semilinear wave equations with spatial derivatives. arXiv preprint arXiv:2406.02098, 2024. [41] Christopher Donald Sogge. Lectures on non-linear wave equations, volume 2. International Press Boston, MA, 1995. [42] Gerald Teschl. Ordinary differential equations and dynamical systems, volume 140. American Mathematical Soc., 2012. [43] David Wallauch. Strichartz estimates and blowup stability for energy critical nonlinear wave equations. Transactions of the American Mathematical Society, 376(06):4321–4360, 2023.
STABLE TYPE I BLOW-UP FOR THE ONE-DIMENSIONAL WAVE EQUATION WITH TIME-DERIVATIVE NONLINEARITY OLIVER GOUGH Abstract. We study finite-time blow-up for the one-dimensional nonlinear wave equation with a quadratic time-derivative nonlinearity, utt -uxx = (ut)2, (x, t) ∈R × [0, T). Building on the work of Ghoul, Liu, and Masmoudi [16] on the spatial-derivative analogue, we establish the non-existence of smooth, exact self-similar blow-up profiles. Instead we construct an explicit family of generalised self-similar solutions, bifurcating from the ODE blow-up, that are smooth within the past light cone and exhibit type-I blow-up at a prescribed point (x0, T). We further prove asymptotic stability of these profiles under small perturbations in the energy topology. In particular, these profiles verify that the spatially homogeneous ODE blow-up is not asymptotically stable. 1. Introduction We consider the Cauchy problem for the nonlinear wave equation with quadratic time-derivative nonlinearity in one space dimension, (1.1) utt -uxx = (ut)2, (x, t) ∈R × [0, T), for an unknown function u : R × [0, T) →R. This equation admits the scaling and translation invariances u(x, t) 7→u x -x0 λ , t -t0 λ + κ, x0, t0, κ ∈R, λ > 0, which will play a central role in the construction of self-similar blow-up solutions. Local well-posedness in Hk+1(R) × Hk(R), finite speed of propagation, and persistence of regularity follow from standard energy methods for semilinear wave equations; see, e.g., [41, 15]. Main results. As in the power-nonlinearity case, blow-up already occurs for spatially homogeneous data. If u = u(t), then (1.1) reduces to utt = (ut)2. Writing v := ut gives vt = v2, hence v(t) = 1 T -t and u(t) = -log(T -t) + κ, which exhibits the Type I rate. Here we loosely define Type I blow-up as blow-up occurring at a rate comparable to that of the ODE blow-up profile. By time-translation and addition of constants, this yields the two-parameter ODE-type family uT,κ(t) = -log 1 -t T + κ, ∂tuT,κ(t) = 1 T -t, blowing up at time T > 0 uniformly in space (a spatial translation x 7→x -x0 does not change these homogeneous profiles). Moreover, by a standard cutoff-and-finite-speed of propagation argument one can prescribe the blow-up point and make the initial data smooth and compactly supported and in particular Hk+1(R) × Hk(R). Fix 1 16 Oct 2025 2 OLIVER GOUGH T > 0 and x0 ∈R, choose R > T, and let χ ∈C∞ c (R) satisfy χ ≡1 on BR(x0). For the Cauchy problem (1.1) with initial data (u, ∂tu) t=0 = κ χ(x), T -1χ(x) , finite speed of propagation implies that for all (x, t) in the past light cone Γ(x0, T) := {(x, t) ∈R × [0, T) : |x -x0| ≤T -t} the solution coincides with the spatially homogeneous ODE profile: u(x, t) = -log 1 -t T + κ. Thus the solution blows up at (x0, T) while the initial data are compactly supported. As in this example, it suffices to analyse the dynamics inside Γ(x0, T). Unlike power-type semilinear wave equations, (1.1) is not invariant under Lorentz transformations. In particular, Lorentz boosts (x, t) 7→ γ(x -vt), γ(t -vx) , γ = 1 √ 1 -v2 , |v| 0 and x0 ∈R, there are no nontrivial smooth exact self-similar blow-up solutions to (1.1) in the past light cone Γ(x0, T) := {(x, t) ∈R × [0, T) : |x -x0| ≤T -t} ≡{|y| ≤1}. Equivalently, apart from the symmetry-induced constant profiles eU ≡κ, no smooth profile of the form u(x, t) = eU(y) exists in {|y| ≤1}. 
STABLE TYPE I BLOW-UP FOR THE 1D WAVE EQUATION 3 2) There exists a five-parameter family of smooth generalised self-similar blow-up solutions to (1.1) with logarithmic growth, up,q,κ,T,x0(x, t) = -p log 1 -t T -p log 1 + q p 1 -p x -x0 T -t + κ, smooth in the past light cone Γ(x0, T) for all T > 0, κ, x0 ∈R, q ∈{±1}, and p ∈(0, 1]. In terms of the similarity variables, these solutions read up,q,κ,T,x0(x, t) = Up,q,κ(τ, y) = p τ + eUp,q,κ(y), eUp,q,κ(y) = -p log 1 + q p 1 -p y + κ. In particular, u1,q,κ,T,x0(x, t) = -log 1 -t T + κ, recovers the spatially homogeneous ODE-type blow-up, and the two signs are related by spatial reflection, up,-1,κ,T,x0(x, t) = up,1,κ,T,-x0(-x, t). Remark 1.1 (Lorentz boost viewpoint). Under the boost t′ = t-γx √ 1-γ2 , x′ = x-γt √ 1-γ2 with γ ∈(-1, 1), taking the same spatially homogeneous ODE blow up profile in the primed frame and mapping back yields u(x, t) = -(1 -γ2) log (T -t -γ(x -x0)) + c2 Moreover, since γ ∈(-1, 1) we see that the above is equivalent to u(x, t) = -(1 -γ2) log (T -t + q|γ|(x -x0)) + c2 for q ∈{-1, 1}. A simple relabeling of p = 1 -γ2 ∈(0, 1] recovers the solution up,q,κ,x0,T (x, t) = -p log T -t + q p 1 -p(x -x0) + κ Equivalently, the travelling-wave ansatz u(x, t) = F(x -ct) reduces (1.1) to the Riccati-ODE mechanism (c2 -1)F ′′ = c2(F ′)2, giving F(ξ) = -(1 -c2) log(A + Bξ) + κ. Applying translational and scaling symmetries, we choose A, B so that A + B(x -ct) = T -t -c(x -x0) reproduces the same family. Blow-up occurs along T -t -c(x -x0) = 0. See section 2.3. Remark 1.2 (Values of p). For p 0, κ0, x0 ∈R, and k ≥4. There exists ω0 ∈(0, 1 2) such that for any δ ∈(0, ω0) there exists ε > 0 with the following property. For any real-valued (f, g) ∈Hk+1(R) × Hk(R) with ∥(f, g)∥Hk+1(R)×Hk(R) ≤ε, there exist parameters p∗∈(0, 1), T ∗> 0, κ∗∈R and a unique solution u : Γ(x0, T ∗) →R to (1.1) with initial data u(x, 0) = up0,1,κ0,x0,T0(x, 0) + f(x), ∂tu(x, 0) = ∂tup0,1,κ0,x0,T0(x, 0) + g(x) (x ∈BT ∗(x0)). 4 OLIVER GOUGH Then for all t ∈[0, T ∗), (T ∗-t)-1 2 +s ∥u(·, t) -up∗,1,κ∗,x0,T ∗(·, t)∥ ̇Hs(BT ∗-t(x0)) ≲(T ∗-t)ω0-δ, s = 0, 1, . . . , k + 1, (T ∗-t)-1 2 +s ∥∂tu(·, t) -∂tup∗,1,κ∗,x0,T ∗(·, t)∥ ̇Hs-1(BT ∗-t(x0)) ≲(T ∗-t)ω0-δ, s = 1, . . . , k, and the parameters obey |p0 -p∗| + |κ0 -κ∗| + 1 -T0 T ∗ ≲ε. Remark 1.3 (Symmetry q = ±1). Since up,1,κ,x0,T (x, t) = up,-1,κ,-x0,T (-x, t), the conclusion of Theorem 1.2 also holds for the solutions with q = -1. Remark 1.4 (Similarity normalisation). The prefactor (T ∗-t)-1 2 +s arises from renormalising the ̇Hs norm in similarity variables. Writing τ := -log(1 -t/T ∗), the right-hand side satisfies (T ∗-t)ω0-δ = (T ∗)ω0-δ e-(ω0-δ)τ, so the perturbation decays exponentially at rate ω0 -δ > 0 in τ. Theorem 1.3 (Failure of asymptotic stability of ODE blow-up). Fix x0 ∈R, T > 0, κ ∈R, and k ≥4. For every σ > 0 there exists p ∈(0, 1), with 1 -p sufficiently small (depending on σ), such that the exact solution up,1,κ,T,x0(x, t) = -p log 1 -t T -p log 1 + p 1 -p x -x0 T -t + κ has small Cauchy data at t = 0 on BT (x0) := {x : |x -x0| -1 the only eigenvalues are 0 and 1. We are able to prove this only for Reλ ≥0. In order to exhibit the ω0 > 0 such that σ(Lp) ⊂{Reλ ≤-ω0} ∪{0, 1}. we use standard spectral theory results, providing an alternative route to the spectral gap. In our proposition we also have eigenfunctions arising from symmetries at 0 and 1, and a generalised eigenfunction resulting from the parameter p ∈(0, 1). 
Unlike the power-nonlinearity case, the algebraic multiplicity exceeds the geometric multiplicity for the eigenvalue 0. To make this precise, we introduce the usual Riesz projections Pλ,p := Z γλ (zI -Lp)-1 dz for suitable contours. We prove, using a direct ODE analysis, that in a sufficiently high-regularity Sobolev space there are no further generalised eigenfunctions. To then pass from spectral information to semigroup decay on the stable subspace, we estimate the resolvent and invoke the Gearhart-Prüss-Greiner Theorem ([14], Theorem 1.11, p. 302). This resolvent estimate is obtained as in [16]. After this, a full description of the linearised evolution is given. (5) Nonlinear stability The full nonlinear system for the perturbation q is ( ∂τq = Lpq + N(q), q(0, y) = q0(y), where N(q) = 0 q2 2 ! . By Duhamel's principle, q(τ, ·) = Sp(τ) q0(·) + Z τ 0 Sp(τ -τ ′) N q(τ ′, ·) dτ ′. Because the symmetry and the generalised eigenvector induce linear growth, this Duhamel formulation does not close globally on [0, ∞) without an adjustment. To remedy this, we follow the Lyapunov-Perron technique: we project the initial data onto the stable subspace and introduce a STABLE TYPE I BLOW-UP FOR THE 1D WAVE EQUATION 7 correction along the unstable directions to cancel their growth. Concretely, we consider the modified problem q(τ, ·) = Sp(τ) Up,T,κ(f) -Cp Up,T,κ(f), q(τ, ·) + Z τ 0 Sp(τ -τ ′) N q(τ ′, ·) dτ ′, where Up,T,κ is the operator that rescales the initial data relative to the blow-up profile. We then show that there exist parameters p∗, T ∗, κ∗for which the correction term vanishes, yielding a genuine solution of the fixed-point problem. Proving that the correction vanishes is equivalent to showing that a certain linear functional-obtained by dual pairing with the correction term-is identically zero. Appealing to Brouwer's fixed-point theorem exhibits the required parameters. Finally, by the restriction properties of the semigroup, the resulting mild solution upgrades to a classical solution. 1.3. Notation. In this paper we follow the same notation as in [16]. For x0 ∈R and r > 0, we denote the open interval centered at x0 by Br(x0) := (x0 -r, x0 + r). Its closure is Br(x0) = [x0 -r, x0 + r]. For x0 ∈R and T > 0, the past light cone is Γ(x0, T) := {(t, x) ∈[0, T) × R : |x -x0| ≤T -t}. For a, b ∈R, we write a ≲b if there exists a constant C > 0 (independent of the variables under consideration) such that a ≤C b. If C may depend on a parameter p, we write a ≲p b. For a function f : R →R, y 7→f(y), we may interchangeably denote the first derivative by f ′ = ∂yf = fy and the k-th derivative by ∂k yf = ∂kf, where ∂k without subscript always means ∂k y. If Ω⊂R is a domain, then C∞(Ω) denotes smooth functions on Ωwith derivatives continuous up to ∂Ω. If Ωis bounded and k ∈N ∪{0}, we set ∥f∥2 Hk(Ω) := k X i=0 ∥∂i yf∥2 L2(Ω), ∥f∥ ̇Hk(Ω) := ∥∂k yf∥L2(Ω), f ∈C∞(Ω). The Sobolev space Hk(Ω) is the completion of C∞(Ω) with respect to ∥· ∥Hk(Ω). For k ≥0, we use the product (energy) space Hk(-1, 1) := Hk+1(-1, 1) × Hk(-1, 1). We may also omit the domain (e.g., (-1, 1)) when it is clear from context. We use boldface for tuples of functions; e.g. f ≡(f1, f2)⊤, q(t, ·) ≡ q1(t, ·), q2(t, ·) ⊤. Linear operators acting on such tuples are also written in boldface. If L is a closed linear operator on a Banach space X, we denote its domain by D(L), its spectrum by σ(L), and its point, discrete, and essential spectrum by σp(L), σdisc(L) and σess(L), respectively. 
The resolvent set is ρ(L) := C \ σ(L) and the resolvent operator is RL(z) := (z - L)^(-1), z ∈ ρ(L). The space of bounded operators on X is denoted by L(X). For background on spectral theory we refer to Kato [20], and for C0-semigroups to Engel and Nagel [14] and the references therein.

2. Construction of smooth blow-up solutions

2.1. Non-existence of smooth self-similar blow-up. Since the nonlinear wave equation (1.1) is not Lorentz invariant, we cannot use Poincaré symmetries or Lorentz boosts, as in the power-nonlinearity setting, to generate spatial dependence from an ODE blow-up profile. Nevertheless, as mentioned, we construct solutions to (1.1) that effectively have the form of Lorentz-transformed ODE blow-up inside the cone. Guided by scaling and translations, we first consider the exactly self-similar ansatz

u(x, t) = U((x - x0)/(T - t)) = U(y),  y := (x - x0)/(T - t).

Substituting into (1.1) yields the similarity ODE

(2.1)  (y² - 1)Uyy + 2y Uy = y² (Uy)².

Besides the symmetry-induced constants U ≡ κ, we can solve (2.1) explicitly for V := Uy by rewriting it as

-(1/V)′ + (2y/(y² - 1)) (1/V) = y²/(y² - 1).

With W := 1/V and the integrating factor (y² - 1)^(-1), we obtain

d/dy [W (y² - 1)^(-1)] = -y²/(y² - 1)² = 1/(4(y + 1)) - 1/(4(y + 1)²) - 1/(4(y - 1)) - 1/(4(y - 1)²).

Integrating gives

W = (1/4) [2y + c(y² - 1) + (y² - 1) log((1 + y)/(1 - y))] =: pc(y),

so on (-1, 1) either

Uy(y) = 4 [2y + c(y² - 1) + (y² - 1) log((1 + y)/(1 - y))]^(-1),  or Uy ≡ 0.

Proof of Theorem 1.1(1). For any c ∈ R, the denominator pc is continuous on (-1, 1) and extends continuously to [-1, 1] with pc(-1) = -1/2 and pc(1) = 1/2. By the intermediate value theorem, pc has a zero in (-1, 1), whence Uy is singular there. Thus there are no nontrivial smooth exact self-similar blow-up profiles in the past light cone Γ(x0, T) ≡ {|y| ≤ 1}. □

2.2. Construction of generalised self-similar solutions. Since stationary similarity profiles are ruled out, we perturb the ansatz by allowing linear growth in the temporal similarity variable τ := -log(1 - t/T) = log T - log(T - t). (Recall that u(x, t) = τ is the spatially homogeneous ODE blow-up.) We look for

U(τ, y) = pτ + Ũ(y),  p ∈ R.

Substituting into (1.4) yields a Riccati equation for V := Ũy,

(2.2)  (1 - y²)Ũyy + 2y(p - 1)Ũy + p(p - 1) = -y²(Ũy)²,

i.e.

V′ = Q0(y) + Q1(y)V + Q2(y)V²,  Q0 = p(1 - p)/(1 - y²),  Q1 = 2(1 - p)y/(1 - y²),  Q2 = -y²/(1 - y²).

A simple ansatz V = a/(b + cy) produces the two elementary particular solutions

V±(y) = p√(1-p) / (±1 - y√(1-p)).

These are real-valued for p ∈ (0, 1], and yield the ODE profile when p = 1.

The case p = 1: Setting p = 1 in (2.2) gives (1 - y²)Ũyy = -y²(Ũy)². Besides the constant solutions Ũ ≡ κ, any nontrivial solution is singular in (-1, 1). Indeed, with W := 1/Ũy we have

W′ = y²/(1 - y²) = -1 + 1/(2(1 + y)) + 1/(2(1 - y)),

hence

1/Ũy = -y + (1/2) log((1 + y)/(1 - y)) + c =: ρc(y),

and ρc(y) → ±∞ as y → ±1, so ρc has a zero in (-1, 1). Thus Ũy blows up, and only the constant profile remains smooth, reproducing the ODE blow-up.

The case p < 0: then √(1 - p) > 1 and hence V± blow up at y = ±1/√(1 - p) ∈ (-1, 1). By persistence of regularity, such profiles cannot evolve from smooth Cauchy data, and we discard p < 0.

For R > 0, let DR := {z ∈ C : |z| < R} denote the open disc of radius R.

Proposition (mode stability of the ODE blow-up). For Re λ > -1, the only φ ∈ C∞[-1, 1] solving (3.7) occur for λ ∈ {0, 1}. However, the eigenvalue 0 is not simple, so the ODE blow-up is not mode stable.

Proof. A fundamental system near z = 0 is

φ1(z) = 2F1(λ - 1, λ; λ; z) = (1 - z)^(1-λ),  φ2(z) = z^(1-λ) 2F1(1, 1; 2 - λ; z).

Smoothness at z = 0 forces either 1 - λ ∈ N0 or the coefficient of the singular branch φ2 to vanish. By symmetry, the same condition arises at z = 1; the resulting integer-exponent solutions are verified symbolically below.
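A minimal sympy sketch: the displayed ODE is the hypergeometric equation with parameters a = λ - 1, b = λ, c = λ, reconstructed from the fundamental system φ1, φ2 above (our reading of (3.7), not copied from the paper). It confirms that the integer exponents 1 - λ = n produce the polynomial solutions z^n and (1 - z)^n used in the next step of the proof.

```python
# sympy check: with lam = 1 - n (n in N0), both z^n and (1-z)^n solve
#     z(1-z) phi'' + lam (1-2z) phi' - lam(lam-1) phi = 0,
# the hypergeometric equation with a = lam-1, b = lam, c = lam.
import sympy as sp

z = sp.symbols("z")

def residual(phi, lam):
    return sp.expand(z*(1 - z)*sp.diff(phi, z, 2)
                     + lam*(1 - 2*z)*sp.diff(phi, z)
                     - lam*(lam - 1)*phi)

for n in range(5):
    lam = 1 - n
    print(n, residual(z**n, lam), residual((1 - z)**n, lam))  # all residuals are 0
```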
Thus 1 -λ = n ∈N0. For these n, both zn and (1 -z)n solve (3.7) and are linearly independent with Wronskian W(zn, (1 -z)n) = -(1 -λ) z-λ(1 -z)-λ ̸≡0. When Re λ > -1, the only possibilities are n = 0, 1, i.e. λ ∈{1, 0}. However, at λ = 1 we obtain the constant mode, and at λ = 0 the eigenspace is two-dimensional, span{1, y}. □ In the original y-variable, representative eigenfunctions are φ(y) ∈span{(1 + y)1-λ, (1 -y)1-λ}. This additional neutral direction verifies that the spatially homogeneous ODE blow-up is not asymptotically stable in our framework (see Theorem 1.3). 16 OLIVER GOUGH Remark 3.3 (Conjecture of codimension-one stability of the ODE blow-up). We conjecture that after modulation by the time-and additive-shift symmetries, there exists a locally invariant center-stable manifold of codimension one in the energy topology near U1,1,κ. We will investigate this in future work. 3.4. Mode stability of generalised self-similar solutions. We transform the eigenequation (3.3) for general p ∈(0, 1) into a hypergeometric ODE using the Lorentz transformation in similarity coordinates, first introduced in [29]. Lorentz transform and similarity variables. Recall that under the Lorentz transformation with parameter γ = √1 -p, the solutions up,1,κ,T,x0(x, t) = -p log 1 -t T -p log 1 + p 1 -p x -x0 T -t + κ correspond to vp,1,κ,T,x0(x′, t′) = -(1 -γ2) log(c1 + t′) + c2 = -p log T -t′ + p 1 -p (x′ -x′ 0) + p log T + κ, which solves (3.8) vt′t′ -vx′x′ = 1 p (vt′)2 -2 p 1 -p vx′vt′ + (1 -p)(vx′)2 . Thus vp,1,κ,T,x0 reduces to a spatially homogeneous ODE blow-up for (3.8). However, the stability of vp,1,κ,T,x0 does not automatically imply the stability of up,1,κ,T,x0, because: (1) perturbations for u live on {t = 0}, while perturbations for v live on {t′ = 0} = {t -γx = 0}; (2) stability forward in t′ does not a priori imply stability forward in t. (As observed in [25], the advantage is discrete-spectral equivalence: one can study the spectrum at up,1,κ,T,x0 by analysing the spectrum at vp,1,κ,T,x0 in self-similar variables.) Eigenequation and coefficients. For p ∈(0, 1) the separated eigenequation can be written in partial fractions as φ′′ + A y + 1 + B y -1 + C 1 + y√1 -p φ′ + D y -1 + E y + 1 + F 1 + y√1 -p φ = 0, with A = λ -√1 -p, B = λ + √1 -p, C = 2√1 -p, D = λ2 2 -λ 2 + λ√1 -p, E = -λ2 2 + λ 2 + λ√1 -p, F = -2λ(1 -p). Similarity variables in Lorentzian coordinates. Define τ ′ = log T ′ -log(T ′ -t′), y′ = x′ -x′ 0 T ′ -t′ . Then, for V (τ ′, y′) := v(x′(τ ′, y′), t′(τ ′, y′)), the PDE (3.8) becomes ∂τ ′τ ′V + ∂τ ′V + 2y′ ∂τ ′y′V + 2y′ ∂y′V + (y′2 -1) ∂y′y′V = 1 p (∂τ ′V + y′∂y′V )2 -2 p 1 -p ∂y′V · (∂τ ′V + y′∂y′V ) + (1 -p)(∂y′V )2 . (3.9) For our family Up,1,κ(τ, y) = pτ -p log(1 + y√1 -p) + κ, the similarity-coordinate Lorentz map is (3.10) τ = τ ′ -log 1 -√1 -p y′ p , y = y′ -√1 -p 1 -√1 -p y′ , and hence Vp,1,κ(τ ′, y′) = p τ ′ + κ. STABLE TYPE I BLOW-UP FOR THE 1D WAVE EQUATION 17 Linearisation and eigenequation in Lorentz variables. Linearising (3.9) about Vp,1,κ yields, for ξ, (3.11) ∂τ ′τ ′ξ -∂τ ′ξ + 2y′ ∂τ ′y′ξ + 2 p 1 -p ∂y′ξ + (y′2 -1) ∂y′y′ξ = 0. Proposition 3.7. If U = Up,1,κ + η with η solving the linearised equation in (τ, y), then ξ(τ ′, y′) := η τ(τ ′, y′), y(τ ′, y′) solves (3.11) under the similarity-coordinate Lorentz map (3.10). Proof. Direct substitution. □ With the ansatz ξ(τ ′, y′) = eλτ ′ψ(y′), (3.11) reduces to (3.12) (y′2 -1)ψ′′ + (2λy′ + 2 p 1 -p)ψ′ + λ(λ -1)ψ = 0. 
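As a consistency check, in the limit p → 1 (so √(1-p) = 0) equation (3.12) degenerates to (y′² - 1)ψ′′ + 2λy′ψ′ + λ(λ - 1)ψ = 0, and the representative eigenfunctions (1 ± y)^(1-λ) recorded earlier in this section solve it. A minimal sympy sketch (illustrative only):

```python
# sympy check: (1 -+ y)^(1-lam) solve the p -> 1 limit of (3.12),
#     (y^2 - 1) psi'' + 2 lam y psi' + lam (lam - 1) psi = 0.
import sympy as sp

y, lam = sp.symbols("y lam")

def check(psi):
    r = ((y**2 - 1)*sp.diff(psi, y, 2)
         + 2*lam*y*sp.diff(psi, y)
         + lam*(lam - 1)*psi)
    # dividing by psi leaves a rational function, which sympy cancels exactly
    return sp.simplify(r / psi)   # 0 iff psi solves the ODE (psi != 0)

print(check((1 - y)**(1 - lam)))  # 0
print(check((1 + y)**(1 - lam)))  # 0
```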
In standard hypergeometric form, with y′ = 2z′ -1 and ψ(y′) = φ(z′), (3.13) z′(1 -z′)φ′′ + λ - p 1 -p -2λz′ φ′ -λ(λ -1)φ = 0, i.e. Gauss's hypergeometric equation with parameters a = λ, b = λ -1, c = λ - p 1 -p. Thus the exponents at z′ = 0 and z′ = 1 are {0, 1 -λ + √1 -p} and {0, 1 -λ -√1 -p}, respectively. Local Frobenius analysis at z′ = 1. Then we have the following cases • Case 1: s1 = 1 -λ -√1 -p and s2 = 0 since Re(λ) ≤1 -√1 -p • Case 2: s1 = 0 and s2 = 1 -λ -√1 -p assuming that Re(λ) > 1 -√1 -p In case 1, we have the sub-case where s1 -s2 /∈N0, then we have the independent solutions φ1,1 = (1 -z′)s1h1(z′ -1) φ1,2 = h2(z′ -1) for analytic h1, h2. Since s1 /∈N, any smooth solution must be a multiple of the analytic function φ0,2 = 2F1(a, b, 1 + a + b -c). The next subcase is if s1 -s2 ∈N0 and so 1 -λ -√1 -p ∈{0, 1, 2, . . . }. By restricting ourselves to Re(λ) ≥0, and noting that √1 -p ∈(0, 1), then we have s1 = 1 -λ -√1 -p 1 -√1 -p. The first subcase is if s1 -s2 /∈N0 which implies Re(s2) 1 -√1 -p, s2 ̸= 0 the constant c may be 0, but Re(s2) -1 2. λ = 0 is chosen for convenience. Lemma 4.2 (Dense range at λ = 0). For any k ≥0, ε > 0 and any g ∈Hk, there exists q ∈C∞[-1, 1] × C∞[-1, 1] such that ∥-eLq -g∥k 0 is a constant independent of p. Proof. To show Lp,1 is bounded on Hk, take q ∈Hk ∥Lp,1q∥Hk = ∥q1(-1)∥Hk+1 + 2p 1 + y√1 -pq2 Hk ≲∥q1∥L∞+ 4∥q2∥Hk ≲∥q∥Hk Then using the bounded perturbation theorem ([14] p.158), the sum of the operators Lp = L+Lp,1 generates the C0 semigroup with the growth bound (4.3). □ 20 OLIVER GOUGH 4.3. A New Decomposition of Lp. Recall Lp = -y∂y 1 ∂yy -y∂y + (Up(y) -1) = L + Lp,1 where we denote Up(y) = 2p 1+y√1-p = 2(∂τUp,1,k + y∂yUp,1,κ). As in the utt -uxx = u2 x problem, the perturbation Lp,1 is merely bounded, and neither compact nor L-compact. This complicates finding information about the spectrum of Lp. Indeed, recall Lp,1q = q1(-1) 2p 1+y√1-pq2 . Although q1(-1) is a compact the second component, a multiplication operator, is not compact. In addition, Lp,1 is not L-compact. Indeed, suppose qn = (q1n, q2n), Lqn ∈Hk+1 × Hk such that, ∥qn∥Hk ≤C1, and ∥Lqn∥Hk ≤C2. It does not follow that Up(y)q2n contains a convergent subsequence in Hk. Assuming the bound ∥Lqn∥Hk only allows us to deduce that (1-y2)∂yq2 ∈Hk(-1, 1), rather than ∂yq2 ∈Hk(-1, 1) (which would let us extract a convergent subsequence by the compact embedding Hk+1(-1, 1) ⊂⊂Hk(-1, 1)). As a step towards finding information on the spectrum, we therefore investigate the dissipation of the overall operator Lp, firstly in the base space H1 × L2. By Lemma 4.1 Re⟨Lpq, q⟩0 = Re⟨Lq + Lp,1q, q⟩H1×L2 ≤-1 2∥q∥2 0 + Re Z 1 -1 2p 1 + y√1 -p|q2(y)|2 dy ≤-1 2(∥q1∥2 H1 + ∥q2∥2 L2) + 2(1 + p 1 -p)∥q2∥2 L2 = -1 2∥q1∥2 H1 + 3 2 + 2 p 1 -p ∥q2∥2 L2 we see that dissipativity is not guaranteed for H0. We therefore look in higher order spaces. To proceed with computations in higher order Sobolev spaces, we first need a technical lemma showing how the operator Lp commutes with derivatives ∂k y. Lemma 4.5. Given p ∈(0, 1), and k ≥0, we have ∂k yLp = Lp,k∂k y + L′ p,k where Lp,k = -y∂y -k 1 ∂yy -y∂y -k -1 + Up(y) and L′ p,k = 0 0 0 [∂k y, Up(y)] which satisfies the pointwise bound (4.4) |L′ p,kq| ≲p 0 Pk-1 j=0 |∂j yq2| Proof. To compute the commutation with diagonal terms, we use that [∂k y, y∂y] = k∂k y. STABLE TYPE I BLOW-UP FOR THE 1D WAVE EQUATION 21 For the pointwise bound, we compute the non-zero component of L′ p,kq. 
[∂k y, Up(y)]q2 = ∂k y (Up(y)q2) -Up(y)∂k yq2 = k X j=0 k j ∂k-j y Up(y) ∂j yq2 -Up(y)∂k yq2 = k-1 X j=0 k j ∂k-j y Up(y) ∂j yq2 Note that ∂k-j y 2p 1 + y√1 -p = (-1)k-j(k -j)! p 1 -p k-j 2p (1 + y√1 -p)k-j+1 so ∂k-j y Up(y) L∞(-1,1) = (k -j)! 2p √1 -p k-j 1 + y√1 -p k-j+1 L∞(-1,1) ≤k! 2p √1 -p k-j 1 -√1 -p k-j+1 which we take as the p-dependent constant in the pointwise bound (4.4). □ We now define an energy which is an equivalent inner product to Hk, isolating the highest order derivative. Definition 4.6. For k ≥0, we define the new inner product ⟨⟨·, ·⟩⟩by ⟨⟨Ψ, eΨ⟩⟩k := (∂k yΨ, ∂k y eΨ)L2(-1,1) + (Ψ, eΨ)L2(-1,1) ≡ Z 1 -1 ∂k yΨ∂ky eΨ dy + Z 1 -1 ΨeΨ dy For tuples q = (q1, q2) ∈Hk+1(-1, 1) × Hk(-1, 1), we use ⟨⟨q, eq⟩⟩k := ⟨⟨q1, eq1⟩⟩k+1 + ⟨⟨q2, eq2⟩⟩k The induced norm ∥|Ψ∥|k := ⟨⟨Ψ, Ψ⟩⟩is equivalent to the standard Hk norm (See [2],p.135). Now, in order to control L′ p,k, we need the following coercivity estimate. Lemma 4.7 (Subcoercivity). Given m ≥1, there exists a sequence εn > 0, with limn→∞εn = 0, and {Πi}1≤i≤n ⊂Hm(-1, 1), and constants cn > 0 such that for all n ≥0 and Ψ ∈Hm(-1, 1), (4.5) εn⟨⟨Ψ, Ψ⟩⟩m ≥∥Ψ∥2 m -cn n X i=1 (Ψ, Πi)2 L2(-1,1) Proof. A proof can be found in [26, 21, 16]. □ Thanks to the above Lemma, we are able to decompose the linearised operator into a sum of a maximally dissipative operator plus a finite rank projection. Proposition 4.8 (Maximal Dissipativity). Let p ∈(0, 1], k ≥4, for ε ∈(0, 1 2), there exists an integer N = N(ε, k, p) and vectors (Πi,p)1≤i≤N ⊂Hk such that for the finite rank projection operator (4.6) bPp = N X i=1 ⟨⟨·, Πp,i⟩⟩kΠp,i the modified operator bLp := Lp -bPp 22 OLIVER GOUGH satisfies: (Dissipativity): (4.7) ∀q ∈D(Lp), Re⟨⟨bLpq, q⟩⟩k ≤- k -7 2 -ε ⟨⟨q, q⟩⟩k ≤ ε -1 2 ⟨⟨q, q⟩⟩k (Maximality) (4.8) λ -bLp is surjective for some λ > 0. Proof. For q ∈(C∞[-1, 1])2, we have using Lemma 4.5, that ⟨⟨-Lpq, q⟩⟩k = (-Lpq, q) ̇Hk+1× ̇Hk + (-Lpq, q)L2×L2 = (-(∂k+1Lpq)1, ∂k+1q1)L2 + (-(∂kLpq)2, ∂kq2)L2 + (-Lpq, q)L2×L2 = (-(Lp,k+1∂k+1q)1, ∂k+1q1)L2 + (-(Lp,k∂kq)2, ∂kq2)L2 + (-(L′ p,k+1q)1, ∂k+1q1)L2 + (-(L′ p,kq)2, ∂kq2)L2 + (-Lpq, q)L2×L2. For the first two terms of the above expression, integration by parts gives us Re(-(Lp,k+1∂k+1q)1, ∂k+1q1)L2 + Re(-(Lp,k∂kq)2, ∂kq2)L2 = k + 1 2 (∥∂k+1q1∥2 L2 + ∥∂kq2∥2 L2) - Up(y)∂kq2, ∂kq2 L2 + 1 2 (∂k+1q1(±1)) ∓(∂kq2(±1)) 2 ≥ k + 1 2 (∥∂k+1q1∥2 L2 + ∥∂kq2∥2 L2) -2(1 + p 1 -p)∥∂kq2∥2 L2 ≥ k + 1 2 ∥∂k+1q1∥2 L2 + k -7 2 ∥∂kq2∥2 L2. As for the last three terms, by Youngs inequality we have that for any ε > 0 |(-(L′ p,k+1q)1, ∂k+1q1)L2| + |(-(L′ p,kq)2, ∂kq2)L2| + |(-Lpq, q)L2×L2| = |(-(L′ p,kq)2, ∂kq2)L2| + |(Lpq, q)L2×L2| ≤∥(L′ p,kq)2∥L2∥∂kq2∥L2 + |(Lpq, q)L2×L2| ≤ε 2∥∂kq2∥2 L2 + cε,p,k ∥q1∥2 Hk + ∥q2∥2 Hk-1 . Therefore Re⟨⟨-Lpq, q⟩⟩k ≥ k + 1 2 ∥∂k+1q1∥2 L2 + k -7 2 -ε 2 ∥∂kq2∥2 L2 -cε,p,k∥q∥2 Hk-1 ≥ k -7 2 -ε 2 ∥∂k+1q1∥2 L2 + ∥∂kq2∥2 L2 -cε,p,k∥q∥2 Hk-1. STABLE TYPE I BLOW-UP FOR THE 1D WAVE EQUATION 23 Applying the subcoercivity estimate for q1,q2 respectively allows us to trade the lower derivative terms into a ε 2 multiple of the first factor, modulo a finite rank defect, i.e ⟨⟨-Lpq, q⟩⟩k ≥(k -7 2 -ε)⟨⟨q, q⟩⟩k -c′ ε,k,p N1 X i=1 (q1, Π(1) i )2 L2 + N2 X i=1 (q2, Π(2) i )2 L2 ! . Now consider the linear functionals given by q → q c′ ε,k,p q1, Π(1) i L2 and q → q c′ ε,k,p q2, Π(2) i L2 which are actually continuous on Hk by Cauchy Schwarz. 
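The closed form for the derivatives of Up(y) used in the commutator bound above can itself be verified symbolically; a minimal sympy spot-check (illustrative only):

```python
# sympy spot-check of d^k/dy^k [ 2p / (1 + y sqrt(1-p)) ]
#   = (-1)^k k! (sqrt(1-p))^k * 2p / (1 + y sqrt(1-p))^(k+1).
import sympy as sp

y, p = sp.symbols("y p", real=True)
g = sp.sqrt(1 - p)
Up = 2*p / (1 + y*g)

for k in range(1, 5):
    lhs = sp.diff(Up, y, k)
    rhs = (-1)**k * sp.factorial(k) * g**k * 2*p / (1 + y*g)**(k + 1)
    print(k, sp.simplify(lhs - rhs))  # prints 0 for each k
```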
Thus by the Riesz Representation Theorem there exists (Πp,i)1≤i≤N1+N2 ⊂Hk such that ⟨⟨q, Πp,i⟩⟩k = q c′ ε,k,p q1, Π(1) i L2 for 1 ≤i ≤N1 ⟨⟨q, Πp,i⟩⟩k = q c′ ε,k,p q1, Π(2) i L2 for N1 + 1 ≤i ≤N1 + N2 := N. By defining the projection operator bPp as in (4.6) and bLp := Lp -bPp, we clearly get the dissipativity of bLp when q ∈(C∞[-1, 1])2. However, we must prove (4.7) for q ∈D(Lp) = Hk. Indeed, since Lp generates a C0 semigroup on Hk by Proposition 4.4, we have the well-defined characterisation Lpq = limτ→0 (Sp(τ)-I)q τ ∈Hk for arbitrary q ∈Hk. Let us fix q ∈Hk. Since Lp = L + Lp,1, where the latter operator is bounded on Hk, we have that Lq ∈Hk. By the density of (C∞[-1, 1])2 in Hk, there is a sequence (fn)n∈N ⊂(C∞[-1, 1])2 with ∥fn -(-Lq)∥k →0. By the constructive proof of Lemma 4.2, for every n ∈N, there exists a unique qn ∈(C∞[-1, 1])2 such that -Lqn = fn. We now show that qn →q. Indeed, by dissipativity of the free wave operator in Lemma 4.1 and Cauchy Schwarz, we have -1 2∥qn -qm∥2 k ≥⟨L(qn -qm), qn -qm⟩k ≥-∥L(qn -qm)∥k∥qn -qm∥k =⇒∥qn -qm∥2 k ≤2∥L(qn -qm)∥k →0. Thus (qn) is Cauchy and so there exists q′ ∈Hk with ∥qn -q′∥k →0. Since the operator L is closed, we have -Lq = -Lq′. By Proposition 4.3, and the Hille-Yosida Theorem (see [14]), (-1 2, ∞) ⊂ρ(L). In particular 0 ∈ρ(L), meaning -L is invertible and q = -q′ and we conclude qn →q′ = q. Now, since bLp = L + Lp,1 -bPp, where Lp,1 and bPp are bounded on Hk, we also have that bLpqn →bLpq. Therefore taking n →∞, (4.7) holds for q ∈D(Lp). We finally show that bLp is maximal, i.e λ -bLp : Hk →Hk is surjective for all λ > 0. Since we have shown dissipativity of bLp, it suffices to show this for one such λ > 0. Because Lp was the generator of a C0 semi-group with growth bound ω = -1 2 + ∥Lp,1∥L(Hk) ω with Reλ ∈ρ(Lp) meaning λ -Lp invertible. Again by Hille-Yosida, we have ∥(λ -Lp)-1 ∥L(Hk) ≲ 1 (λ -ω) ≲2 λ. Then writing (λ -bLp) = λ -Lp + bPp = (λ -Lp)(I + (λ -Lp)-1 bPp) we have by the boundedness of bPp and for λ > 0 sufficiently large ∥(λ -Lp)-1 bPp)∥L(Hk) 0, and consequently kα depends on α. Here we merely have parameter p ∈(0, 1] so the smallest k does not depend on p. 4.3.1. The spectrum of the linearised operator. We can now give a sufficiently detailed description of the spectrum of Lp. Proposition 4.9. Take p ∈(0, 1), k ≥4, then ∃ω0 ∈(0, 1 2) such that σ(Lp) ⊂{z ∈C : Rez ≤-ω0} ∪{0, 1} and {0, 1} ⊂σp(Lp). Furthermore, the geometric eigenspaces of eigenvalues 0 and 1 are spanned by the functions f0,p and f1,p respectively, where f0,p(y) = 1 0 and f1,p(y) = -p√1-p 1+y√1-p -p√1-p (1+y√1-p)2 ! . Moreover, there is a generalised eigenfunction g0,p of the eigenvalue 0 given by g0,p(y) = -log 1 + y√1 -p - p 2(1-p) 1 1+y√1-p (2-p)y+2√1-p 2√1-p(1+y√1-p)2 ! satisfying Lp g0,p = f0,p, L2 p g0,p = 0. Proof. Fix ε ∈(0, 1 2) and set ω2 := 1 2 -ε > 0. By Proposition 4.8, bLp -ω2I is maximally dissipative, hence σ(bLp) ⊂{z ∈C : Rez ≤-ω2}. Since bLp = Lp -bPp with bPp compact, Kato's compact-perturbation theorem [20, Thm. 5.35, p. 244] yields σess(Lp) = σess(bLp) ⊂{z ∈C : Rez ≤-ω2}. In particular, 0 /∈σess(Lp). Since σ(Lp) = σess(Lp)∪σdisc(Lp), the mode stability of Proposition 3.8 implies that the only eigenvalues with Rez ≥0 are {0, 1} ⊂σdisc(Lp). Possible spectrum in the strip {-ω2 0 such that Ω′ m,n ⊂ρ(Lp) and sup λ∈Ω′m,n ∥(λ -Lp)-1∥L(Hk) ≤C for all p ∈Bε(p0), and C > 0 is a constant independent of p. Proof. The proof follows the same argument to [16]. □ 26 OLIVER GOUGH 5. The Linearised Evolution Proposition 5.1. 
Take p0 ∈(0, 1), k ≥4, ω1 ∈(0, ω0), ε ∈ 0, min{ p0 2 , 1-p0 2 } and p ∈Bε(p0), then the following properties hold [Sp(τ), P1,p] = [Sp(τ), P0,p] = 0 such that Sp(τ)P1,p = eτP1,p Sp(τ)P0,p = P0,p + τLpP1,p Sp(τ)ePpq Hk ≤Me-ω1τ ePpq Hk for all τ ≥0, where ePp := I -P1,p -P0,p, and M = M(p0, ω1) is a constant depending only on p0, ω1. Moreover, we have ran(P1,p) = span(f1,p) ran(P0,p) = span(f0,p, g0,p) and P0,pP1,p = P1,pP0,p = 0 Proof. Any C0-semigroup commutes with its generator, and as a consequence of Hille Yosida Theorem, it commutes with the resolvent of its generator. Therefore Sp(τ) commutes with Pλ,p for λ ∈{0, 1}. This proves the first claim. For λ = 1 and λ = 0 respectively, by Lemma 4.10 the dimension of ran(Pλ,p) is 1 and 2 and so by Proposition 4.9 the spaces must be a span of {f1,p} and {f0,p, g0,p}. For the next identities, by definition of the semigroup, for any q ∈Hk we have ∂τSp(τ)q = LpSp(τ)q and by commutation and the eigenvalue property we have in particular ∂τSp(τ)P1,pq = LpSp(τ)P1,pq = Sp(τ)LpP1,pq = Sp(τ)P1,pq. Together with the initial condition Sp(τ)P1,pq|τ=0 = P1,pq|τ=0, the above implies Sp(τ)P1,p = eτP1,p. For the next identity, we first have ∂τSp(τ)P0,pq = LpSp(τ)P0,pq = Sp(τ)LpP0,pq Sp(τ)P0,pq|τ=0 = P0,pq Differentiating again, ∂ττSp(τ)P0,pq = ∂τSp(τ)LpP0,pq = LpSp(τ)LpP0,pq = Sp(τ)L2 pP0,pq = 0 Together with the other initial condition ∂τSp(τ)P0,pq|τ=0 = LpP0,pq|τ=0, the above implies Sp(τ)P0,p = P0,p + τLpP0,p We now prove the p-uniform growth estimate of Sp(τ)|ePpHk. By Theorem 6.17 [20] we have the decomposition of the spectrum of the reduced operator σ(Lp|ePpHk) ⊂{z ∈C : Rez ≤-ω0} and the resolvent of this reduced STABLE TYPE I BLOW-UP FOR THE 1D WAVE EQUATION 27 operator is now analytic in the box Ωm,n containing unstable modes, for all m, n > 0. Now, using Proposition 4.11, the resolvent of the reduced operator satisfies (λ -Lp|ePpHk)-1 L(Hk) ≤C for all λ ∈{z ∈C : Rez ≥-ω1} and p ∈Bε(p0) where C is independent of p. Finally, by the GearhartPrüss-Greiner Theorem ([14], Theorem 1.11, p.302), this is equivalent to uniform exponential stability, i.e the growth estimate with growth bound -ω1. □ 6. Nonlinear Stability We now write the full nonlinear system for the perturbation q (6.1) ( ∂τq = Lpq + N(q) q(τ = 0, y) = q0(y) where N(q) = 0 q2 2 . By Duhamel's principle, we have q(τ, ·) = Sp(τ)q0(·) + Z τ 0 Sp(τ -τ ′)N(q(τ ′)) dτ ′. 6.1. Stabilised Nonlinear Evolution. Definition 6.1. For k ≥4, and ω1 ∈(0, ω0) arbitrary but fixed, we define the Banach space X k(-1, 1), ∥· ∥X k(-1,1) by X k(-1, 1) := {q ∈C [0, ∞); Hk(-1, 1) : ∥q(τ)∥Hk ≲e-ω1τ for all τ ≥0} ∥q∥X k(-1,1) := sup τ≥0 (eω1τ∥q∥Hk) We also define the closed balls of size ε > 0 Hk ε(-1, 1) := {f ∈Hk(-1, 1) : ∥f∥Hk(-1,1) ≤ε} X k ε (-1, 1) := {q ∈X k(-1, 1) : ∥q∥X k(-1,1) ≤ε} By formally applying the projections Pλ,p to the Duhamel formula, and using the properties of Proposition 5.1 and writing Pp := P1,p + P0,p we obtain the correction Cp(f, q) = Ppf + P0,p Z ∞ 0 N(q(τ ′)) dτ ′ + LpP0,p Z ∞ 0 (-τ ′)N(q(τ ′)) dτ ′ + P1,p Z ∞ 0 e-τ ′N(q(τ ′)) dτ ′ We note that C(f, q) ∈ran(Pp) ⊂Hk. We then anticipate that by subtracting the initial data by this term, we will be able to close a fixed point argument. Before this we state an estimate on the nonlinearity N. Lemma 6.2. for k ≥1 we have ∥N(q)∥Hk ≤∥q∥2 Hk ∥N(q) -N(eq)∥Hk ≲∥q -eq∥Hk(∥q∥Hk + ∥eq∥Hk) We now prove the existence of a small perturbation to the corrected abstract Cauchy problem. 28 OLIVER GOUGH Proposition 6.3. Take p0 ∈(0, 1), k ≥4, and ω1 ∈(0, ω0). 
There are constants 0 1 such that for all p ∈Bε0(p0) and all f ∈Hk(-1, 1) with ∥f∥Hk ≤ε0 C0 , there is a unique global solution qp ∈X k(-1, 1) satisfying ∥qp∥X k ≤ε0 and qp(τ) = Sp(τ)(f -Cp(f, q)) + Z τ 0 Sp(τ ′ -τ)N(q(τ ′)) dτ ′ for all τ ≥0. Moreover, the data-to-solution map Hkε0 C0 (-1, 1) →X k(-1, 1), f 7→qp is Lipschitz continuous. Proof. We take p0 ∈(0, 1), k ≥4 and ε0 ∈(0, min{ p0 2 , 1-p0 2 }) and ω1 ∈(0, ω0) arbitrary but fixed so that for all p ∈Bε0(p0) the semigroup Sp(τ) is well-defined and satisfies Proposition 5.1. We denote the solution operator Kp(f, q), depending on initial data f and solution q, as Kp(f, q)(τ) = Sp(τ)(f -Cp(f, q)) + Z τ 0 Sp(τ ′ -τ)N(q(τ ′)) dτ ′. Using the expression for the correction C(f, q), we rewrite this, recalling that ePp := P0,p + P1,p, as Kp(f, q)(τ) = Sp(τ)ePpf + Z τ 0 Sp(τ -τ ′)ePpN(q(τ ′)) dτ ′ -P0,p Z ∞ τ N(q(τ ′)) dτ ′ -LpP0,p Z ∞ τ (τ -τ ′)N(q(τ ′)) dτ ′ -P1,p Z ∞ τ eτ-τ ′N(q(τ ′)) dτ ′ (6.2) We now show the first step of the fixed point argument. Namely, that for ε0 > 0, f ∈Hk sufficiently small, that Kp(f, q) maps from the ball X k ε0 to itself. Indeed, by Proposition 5.1, and the nonlinear estimates in Lemma 6.2 we have ∥Kp(f, q)∥Hk(τ) ≲e-ω1τ∥f∥Hk + Z τ 0 e-ω1(τ-τ ′)∥q(τ ′)∥2 Hk dτ ′ + Z ∞ τ ∥q(τ ′)∥2 Hk dτ ′ + Z ∞ τ (τ ′ -τ)∥q(τ ′)∥2 Hk dτ ′ + Z ∞ τ eτ-τ ′∥q(τ ′)∥2 Hk dτ ′ ≲e-ω1τ ε0 C0 + ε2 0 Z τ 0 e-ω1(τ-τ ′)e-2ω1τ ′ dτ ′ + ε2 0 Z ∞ τ e-2ω1τ ′ dτ ′ + ε2 0 Z ∞ τ (τ ′ -τ)e-2ω1τ ′ dτ ′ + ε2 0 Z ∞ τ eτ-τ ′e-2ω1τ ′ dτ ′ ≲e-ω1τ ε0 C + ε2 0 ≤e-ω1τε0 for all τ ≥0, ε0 ∈(0, min{ p0 2 , 1-p0 2 }) sufficiently small, p ∈Bε0(p0), f ∈Hk satisfying ∥f∥Hk ≤ε0 C and q ∈X k ε0 and any C0 > 1. This shows that for such f ∈Hkε0 C0 , Kp(f, ·) : X k ε0 →X k ε0. STABLE TYPE I BLOW-UP FOR THE 1D WAVE EQUATION 29 We now show that Kp(f, ·) is a contraction. Using (6.2) and Lemma 6.2 we have ∥Kp(f, q)(τ) -Kp(f, eq)(τ)∥Hk ≲ Z τ 0 e-ω1(τ-τ ′)∥N(q)(τ ′) -N(eq)(τ ′)∥Hk dτ ′ + Z ∞ τ ∥N(q)(τ ′) -N(eq)(τ ′)∥Hk dτ ′ + Z ∞ τ (τ ′ -τ) ∥N(q)(τ ′) -N(eq)(τ ′)∥Hk dτ ′ + Z ∞ τ eτ-τ ′∥N(q)(τ ′) -N(eq)(τ ′)∥Hk dτ ′ ≲ε0 h Z τ 0 e-ω1(τ-τ ′)e-2ω1τ ′ dτ ′ + Z ∞ τ e-2ω1τ ′ dτ ′ + Z ∞ τ (τ ′ -τ)e-2ω1τ ′ dτ ′ + Z ∞ τ eτ-τ ′e-2ω1τ ′ dτ ′i ∥q -eq∥X k ≲ε0e-ω1τ∥q -eq∥X k. for all q, eq ∈X k ε0, τ ≥0, p ∈Bε0(p0) and f ∈Hk. We can now choose ε0 small enough so that ∥Kp(f, q) -Kp(f, eq)∥X k ≤1 2∥q -eq∥X k By Banach's fixed point theorem, for f ∈Hkε0 C0 , there exists a unique fixed point qp ∈X k ε0 such that Kp(f, qp) = qp. Finally for Lipschitz continuity, we take f,ef ∈Hkε0 C0 and the associated Kp-fixed points qp, eqp ∈X k ε0, then firstly we write qp(τ) -eqp(τ) = Kp(f, qp)(τ) -Kp(ef, eqp)(τ) = Kp(f, qp)(τ) -Kp(f, eqp)(τ) + Sp(τ)ePp(f -ef). Using the contraction estimate we have 1 2∥qp -eqp∥X ≤∥Sp(τ)ePp(f -ef)∥X k ≲∥f -ef∥X k which proves Lipschitz continuous dependence on the initial data. □ 6.2. Stable flow near the blow up solution. Recall that we actually want to solve the Duhamel equation without the correction term C(f, q). To do this we take a p0, T0, κ0 that we will later choose so that the correction term vanishes. Definition 6.4. Take p, p0 ∈(0, 1), T, T0 > 0, κ, κ0 ∈R and k ≥4, with T 0, κ, κ0 ∈R, k ≥4, and T 0, κ0 ∈R, k ≥4, and 0 0 and C2 > 1 such that for all real-valued f ∈Hk(-1, 1) with ∥f∥Hk ≤ε2 C2 2 , there exists parameters p∗∈(0, 1), T0 > 0, κ∗∈R, and a unique qp∗,κ∗,T ∗∈C [0, ∞), Hk(-1, 1) with ∥qp∗,κ∗,T ∗(τ)∥Hk ≤ε2e-ω1τ and (6.3) qp∗,κ∗,T ∗= Sp∗(τ)Up∗,κ∗,T ∗(f) + Z τ 0 Sp∗(τ -τ ′)N(qp∗,κ∗,T ∗(τ ′)) dτ ′ with |p∗-p0| + |κ∗-κ| + T T0 -1 ≤ε2 C2 Proof. 
Take the ε0 > 0 and C0 > 1 obtained from Proposition 6.3 and ε1 > 0 from Lemma 6.5. We let 0 0 and C′ > 1, so that Up,T,κ satisfies the assumptions of the initial data in Proposition 6.3 which implies the existence and uniqueness of a solution qp,T,κ ∈X k(-1, 1) with ∥qp,T,κ∥X k ≤ε2 satisfying qp,T,κ(τ) = Sp(τ) (Up,T,κ(f) -Cp(Up,T,κ(f), qp,T,κ(τ))) + Z τ 0 Sp(τ -τ ′)N(qp,T,κ(τ ′)) dτ ′ for all p ∈B ε2 C2 (p0), T ∈B T0ε2 C2 (T0) and κ ∈B ε2 C2 (κ0). It remains to show the existence of parameters (p, T, κ) such that the correction Cp(Up,T,κ(f), qp,T,κ) ∈ran(Pp) is equal to 0. By Proposition 4.9, we have that ran(Pp) ⊂Hk is a finite dimensional sub-Hilbert space spanned by the linearly independent functions {g0,p, f0,p, , f1,p}. It suffices to show that there are parameters for which the linear functional lp,T,κ : ran(Pp) →R, g 7→(Cp(Up,T,κ(f), qp,T,κ), g)Hk 32 OLIVER GOUGH is zero on a basis of ran(Pp). Indeed, this would imply Cp(Up,T,κ(f), qp,T,κ) ⊥ran(Pp), which yields the result. Using the expression for the correction and Lemma 6.5, we have lp,κ,T (g) = (Ppf T , g)Hk + (κ -κ0) -p T T0 -1 + p 2(1 -p)(p0 -p) (f0,p, g)Hk - T T0 -1 √1 -p (f1,p, g)Hk + (p0 -p)(g0,p, g)Hk + Ppr p, T T0 , g Hk + P0,p Z ∞ 0 N(q(τ ′)) dτ ′, g Hk + LpP0,p Z ∞ 0 (-τ ′)N(q(τ ′)) dτ ′, g Hk + P1,p Z ∞ 0 e-τ ′N(q(τ ′)) dτ ′, g Hk . We now want to construct a dual basis {g1 p, g2 p, g3 p} to the basis {g0,p, f0,p, f1,p}. To do this we use the Gram matrix Γ(p) =   (g0,p, g0,p)Hk (g0,p, f0,p)Hk (g0,p, f1,p)Hk (f0,p, g0,p)Hk (f0,p, f0,p)Hk (f0,p, f1,p)Hk (f1,p, g0,p)Hk (f1,p, f0,p)Hk (f1,p, f1,p)Hk  . with components Γ(p)ij, with i, j = 1, 2, 3. Since the eigenfunctions are linearly independent for p ∈(0, 1), the Gram matrix is invertible and we denote the elements of the inverse of Γ(p) as Γ(p)mn with m, n = 1, 2, 3. By defining gn p := Γ(p)n1g0,p + Γ(p)n2f0,p + Γ(p)n3f1,p n = 1, 2, 3 we get (g0,p, gj p)Hk = δj 1, (f0,p, gj p) = δj 2, (f1,p, gj p)Hk = δj 3 for j = 1, 2, 3. Moreover, by Cramer's rule, the components of {g1 p, g2 p, g3 p} ⊂ran(Pp) depend smoothly on p since Γ(p) depends smoothly on p and the determinant is non-zero. We now define the continuous map F : B ε2 C2 (p0) × B ε2 C2 (κ0) × B T0ε2 C2 (T0) →R3 (p, κ, T) 7→ p0 + F1, κ0 + p T T0 -1 - p 2(1 -p)(p0 -p) -F2, T0(1 + p 1 -pF3) with real valued components defined as Fn(p, κ, T) := (Ppf T , gn p)Hk + Ppr p, T T0 , gn p Hk + P0,p Z ∞ 0 N(qp,κ,T (τ ′)) dτ ′, gn p Hk + LpP0,p Z ∞ 0 (-τ ′)N(qp,κ,T (τ ′)) dτ ′, gn p Hk + P1,p Z ∞ 0 e-τ ′N(qp,κ,T (τ ′)) dτ ′, gn p Hk for n = 1, 2, 3, where we note Fn(p, κ, T) = lp,κ,T (gn p). To bound each component we use Lemma 6.2 and Lemma 6.5 to obtain |Fn(p, κ, T)| ≲∥gn p∥Hk ∥f T ∥Hk + r p, T T0 Hk + ∥qp,κ,T ∥2 X k ≲ε2 C2 2 + ε2 2 C2 2 + ε2 2 = ε2 C2 1 MC′ + ε′ M 2C′ + ε′C′ whenever p ∈B ε2 C2 (p0), T ∈B T0ε2 C2 (T0) and κ ∈B ε2 C2 (κ0). We can now choose M and C′ large enough and ε′ small enough so that F is a self map. By Brouwer's fixed point theorem, we obtain a fixed point STABLE TYPE I BLOW-UP FOR THE 1D WAVE EQUATION 33 (p∗, κ∗, T ∗) ∈B ε2 C2 (p0) × B ε2 C2 (κ0) × B T0ε2 C2 (T0) of F, satisfying p∗= p0 + F1(p∗, κ∗, T ∗) = p∗+ lp∗,κ∗,T ∗(g1 p∗) κ∗= κ0 + p∗ T ∗ T0 -1 - p∗ 2(1 -p∗)(p0 -p∗) -F2(p∗, κ∗, T ∗) = κ∗-lp∗,κ∗,T ∗(g2 p∗) T ∗= T0(1 + p 1 -p∗F3(p∗, κ∗, T ∗)) = T ∗+ T0 p 1 -p∗lp∗,κ∗,T ∗(g3 p∗) which implies lp∗,κ∗,T ∗≡0 on ran(Pp). □ 6.3. Proof of Theorem 1.2. Proof. Fix 0 0 and C2 > 1 from Proposition 6.6. 
Then for any real-valued (f, g) ∈Hk+1(R)×Hk(R) with ∥f∥Hk+1(R) +∥g∥Hk(R) ≤ε2 C2 there exists a unique mild solution qp∗,κ∗,T ∗= q(1) p∗,κ∗,T ∗, q(2) p∗,κ∗,T ∗ ∈C([0, ∞); Hk(-1, 1)) satisfying (6.3) that is a jointly classical solution to the abstract Cauchy problem (6.1). We now reconstruct the solution to the original Cauchy problem. Recall we had U(τ, y) = Up∗,1,κ∗(τ, y) + q(1) p∗,κ∗,T ∗(τ, y). Transforming variables back u(x, t) = Up∗,1,κ∗ -log 1 -t T ∗ , x -x0 T ∗-t + q(1) p∗,κ∗,T ∗ -log 1 -t T ∗ , x -x0 T ∗-t is the unique solution to (1.1), with initial data, by construction of Up,κ,T (f, g), given by u(x, 0) = up0,1,κ,x0,T0(x, 0) + f(x) ∂tu(x, 0) = ∂tup,1,κ,x0,T (x, 0) + g(x) for x ∈BT ∗(x0). Moreover, by Proposition 6.6, we have ∥q(1) p∗,κ∗,T ∗(τ, ·)∥Hk+1(-1,1) ≤ε2e-ω1τ ∥q(2) p∗,κ∗,T ∗(τ, ·)∥Hk(-1,1) ≤ε2e-ω1τ which implies, by the scaling of the homogeneous semi-norms (T ∗-t)-1 2 +s∥u(·, t) -up∗,1,κ∗,x0,T (·, t)∥ ̇Hs(BT ∗-t(x0)) = q(1) p∗,κ∗,T ∗ -log 1 -t T ∗ , · ̇Hs(-1,1) ≤ε2(T ∗-t)ω0-δ for s = 0, 1, . . . , k + 1, and (T ∗-t)-1 2 +s∥∂tu(·, t) -∂tup∗,1,κ∗,x0,T (·, t)∥ ̇Hs-1(BT ∗-t(x0)) = q(2) p∗,κ∗,T ∗ -log 1 -t T ∗ , · ̇Hs-1(-1,1) ≤ε2(T ∗-t)ω0-δ for s = 1, . . . , k. □ Acknowledgments The author would like to thank Professor Manuel del Pino and Professor Monica Musso for their supervision and helpful discussions. 34 OLIVER GOUGH Appendix A. Frobenius Analysis The eigenequation (3.3) may be written in standard Heun ODE form by letting φ(y) = φ(z) for y = 2z-1, (A.1) φ′′ +  γ z + δ z -1 + ε z - √1-p-1 2√1-p  φ′ +   λ(λ + 1)z -q z(z -1) z - √1-p-1 2√1-p  φ = 0 with γ = λ -√1 -p, δ = λ + √1 -p, ε = 2 and q = (√1 -p -1)λ2 + (√1 -p -1 + 2p)λ. By the Frobenius theorem, we can always find a local power series solution near the singular points by substituting φ(z) = (z -z0)s ∞ X k=1 ak(z -z0)k. Indeed, after substituting and matching the lowest order term, we obtain indicial polynomials for z0 ∈{0, 1} respectively. We have P0(s) = s(s -1 + γ) for z0 = 0 P1(s) = s(s -1 + δ) for z0 = 1 Proposition A.1. For p ∈(0, 1), λ ∈C, with Reλ ≥0 and k ≥4, then any Hk+1(0, 1) solution to (A.1) is in C∞[0, 1]. Moreover, the set of local solutions around z0 = 1 is one dimensional. Proof. First we do the Frobenius analysis around z0 = 0. We have the two roots to P0(s), s± ∈{0, 1 -λ + √1 -p} and the two cases: (1) s+ = 1 -λ + √1 -p, s-= 0 if 0 ≤Reλ ≤1 + √1 -p (2) s+ = 0, s-= 1 -λ + √1 -p if Reλ > 1 + √1 -p We now have the following sub-cases. Case 1.1: s+ -s-/∈N and 0 ≤Reλ ≤1 + √1 -p, then there are the two independent local solutions are φ11(z) = zs+h+(z) φ12(z) = h-(z) where h± are analytic functions around z = 0. However φ11 /∈H4+1(0, 1) (and therefore no larger regularity), since s+ /∈N, and Re s+ -5 1 + √1 -p, the two independent local solutions are φ21(z) = h+(z) φ22(z) = zs-h-(z) + c φ21(z) log(z). STABLE TYPE I BLOW-UP FOR THE 1D WAVE EQUATION 35 Note that φ22 /∈H5(0, 1), regardless of the value of c, since s- 1 -√1 -p We now have the following sub-cases. Case 1.1: s+ -s-/∈N and 0 ≤Reλ ≤1 -√1 -p. The two independent local solutions are eφ11(z) = (z -1)s+h+(z -1) eφ12(z) = h-(z -1) with h± analytic functions around 0. However, eφ11 /∈H5(0, 1), since s+ /∈N and Re s+ -5 1 -√1 -p. The two independent local solutions are eφ21(z) = h+(z -1) eφ22(z) = (z -1)s-+ c eφ21(z -1)(z) log(z -1). We note eφ22 /∈H5(0, 1) since s-< 0, regardless of the value of c. Thus any H5 local solution is smooth and analytic at 1. 
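The exponent pairs driving the preceding case analysis come directly from the indicial polynomials of (A.1); a minimal sympy sketch of their roots (illustrative only):

```python
# sympy: roots of the indicial polynomials of (A.1),
#   P0(s) = s (s - 1 + gamma)  at z0 = 0,  gamma = lam - sqrt(1-p),
#   P1(s) = s (s - 1 + delta)  at z0 = 1,  delta = lam + sqrt(1-p).
import sympy as sp

s, lam, p = sp.symbols("s lam p")
gamma = lam - sp.sqrt(1 - p)
delta = lam + sp.sqrt(1 - p)

print(sp.solve(s*(s - 1 + gamma), s))  # [0, 1 - lam + sqrt(1 - p)]
print(sp.solve(s*(s - 1 + delta), s))  # [0, 1 - lam - sqrt(1 - p)]
```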
The above analysis shows that any local Hk+1 solution to (A.1) must be C∞, and analytic in a neighbourhood of z = 0, 1. In addition, since (A.1) is a second order elliptic equation with smooth coefficients on (0, 1), noting that the other singular point z = √1-p-1 2√1-p does not lie in (0, 1), any Hk+1(0, 1) solution must be in C∞(0, 1) by elliptic regularity theory. Together with the former point, we have that the solution must be in C∞[0, 1]. Moreover, the above analysis also shows that in each case the solution at each singular point is a multiple of the same analytic function, and thus the set is one-dimensional. □ Appendix B. Non-existence of other generalised eigenfunctions Lemma B.1. There do not exist v ∈H4 such that L 3 4 v = g0, 3 4 . Proof. Assume for contradiction that there exists (v1, v2) ∈H4(-1, 1) × H3(-1, 1) satisfying    -y∂yv1 + v2 = g0, 3 4 ,1 ≡-log 1 + y 2 - 3 2+y, ∂yyv1 -y∂yv2 + 3 2+y -1 v2 = g0, 3 4 ,2 ≡ 5y+4 (2+y)2 . 36 OLIVER GOUGH Substituting v2 = y∂yv1 + g0, 3 4 ,1 into the second equation gives (1 -y2)∂yyv1 -y(1 + 2y) y + 2 ∂yv1 = g(y) := 1 -y 2 + y log 1 + y 2 + -y2 + 3y + 7 (2 + y)2 . Introducing the integrating factor p(y) = (y + 2)2 1 -y 1 + y 1/2 , the equation becomes ∂yv1(y) = c p(y)-1 + p(y)-1 Z 1 y p(z)g(z) 1 -z2 dz. Computing the integral explicitly yields ∂yv1(y) = √y + 1 c + 3 √ 3 arctan 2y+1 √ 3-3y2 √1 -y (y + 2)2 + -y log(2) + (y -1) log(y + 2) -3 + log(2) (y + 2)2 . Asymptotics near y = 1. Let t = 1 -y ↓0. Using q 1-y 1+y ∼ √ t √ 2, 2y + 1 p 3 -3y2 ∼ 3 √ 6t, we have arctan 2y+1 √ 3-3y2 = π 2 - q 2 3 √ t + O(t3/2). Hence, 3 √ 3 arctan 2y+1 √ 3-3y2 = 3 √ 3π 2 -3 √ 2 √ t + O(t3/2), p(y) ∼ 9 √ 2 √ t. Therefore, ∂yv1(y) ∼ √ 2 9 √ t c + 3 √ 3π 2 + O(1). For ∂yv1 ∈L2(-1, 1), the singular term ∼(1 -y)-1/2 must vanish, forcing c = -3 √ 3 2 π. With this choice of c, the leading singular term cancels and ∂yv1(y) = A + B p 1 -y + O(1 -y), y →1-, for some A, B ̸= 0. Differentiating gives ∂yyv1(y) ∼-B 2 (1 -y)-1/2, so |∂yyv1(y)|2 ∼C(1 -y)-1, which is not integrable near y = 1. Hence ∂yyv1 /∈L2(-1, 1), contradicting v1 ∈H2(-1, 1). Therefore, such v cannot exist. □ Lemma B.2. There do not exist u ∈H4 such that (L 3 4 -I)u = f1, 3 4 . STABLE TYPE I BLOW-UP FOR THE 1D WAVE EQUATION 37 Proof. Writing out the system and eliminating u2 as before yields (1 -y2)∂yyu1 -y(4y + 5) y + 2 ∂yu1 -1 + 2y y + 2 u1 = -3(y + 3) 4(y + 2)2 . A direct computation gives the general solution u1(y) = 3 log(y + 2) 4(y + 2) + c1 y + 2 + c2 √1 + y √1 -y(y + 2) + 3 √ 3√y + 1 4√1 -y(y + 2) arctan √ 3 p 1 -y2 2y + 1 . Near y = 1, the term with c2 behaves like (1 -y)-1/2 and thus forces c2 = 0 for u1 ∈L2(-1, 1). We now inspect the remaining arctan term. The argument √ 3√ 1-y2 2y+1 diverges as y →-1 2, with lim y→(-1/2)- √ 3 p 1 -y2 2y + 1 = -∞, lim y→(-1/2)+ √ 3 p 1 -y2 2y + 1 = +∞. Hence arctan(·) has a jump discontinuity of size π across y = -1 2, and since the pre-factor is finite and nonzero at y = -1 2, u1 itself has a finite jump there. Therefore u1 is discontinuous and thus not in H1(-1, 1), contradicting the assumption that u ∈H4. The claim follows. □ References [1] Milton Abramowitz and Irene A Stegun. Handbook of mathematical functions: with formulas, graphs, and mathematical tables, volume 55. Courier Corporation, 1965. [2] Robert A Adams and John JF Fournier. Sobolev spaces, volume 140. Elsevier, 2003. [3] Athanasios Chatzikaleas and Roland Donninger. Stable blowup for the cubic wave equation in higher dimensions. Journal of Differential Equations, 266(10):6809-6865, 2019. 
[4] Charles Collot. Type II blow up manifolds for the energy supercritical semilinear wave equation, volume 252. American Mathematical Society, 2018. [5] Charles Collot, Thomas Duyckaerts, Carlos Kenig, and Frank Merle. Soliton resolution for the radial quadratic wave equation in space dimension 6. Vietnam Journal of Mathematics, 52(3):735-773, 2024. [6] Charles Collot, Tej-Eddine Ghoul, and Nader Masmoudi. Singularity formation for burgers equation with transverse viscosity. arXiv preprint , 2018. [7] Roland Donninger. Nonlinear stability of self-similar solutions for semilinear wave equations. Communications in Partial Differential Equations, 35(4):669-684, 2010. [8] Roland Donninger. Strichartz estimates in similarity coordinates and stable blowup for the critical wave equation. Duke Mathematical Journal, 166:1627-1683, 2017. [9] Roland Donninger. Spectral theory and self-similar blowup in wave equations. Bulletin of the American Mathematical Society, 61(4):659-685, 2024. [10] Roland Donninger and Ziping Rao. Blowup stability at optimal regularity for the critical wave equation. Advances in Mathematics, 370:107219, 2020. [11] Roland Donninger and Birgit Schörkhuber. Stable self-similar blow up for energy subcritical wave equations. arXiv preprint , 2012. [12] Roland Donninger and Birgit Schörkhuber. Stable blow up dynamics for energy supercritical wave equations. Transactions of the American Mathematical Society, 366(4):2167-2189, 2014. [13] Roland Donninger and Birgit Schörkhuber. On blowup in supercritical wave equations. Communications in Mathematical Physics, 346(3):907-943, 2016. [14] Klaus-Jochen Engel and Rainer Nagel. One-parameter semigroups for linear evolution equations. Springer, 2000. [15] Lawrence C Evans. Partial differential equations, volume 19. American mathematical society, 2022. [16] Tej-eddine Ghoul, Jie Liu, and Nader Masmoudi. Blow-up of the one-dimensional wave equation with quadratic spatial derivative nonlinearity. arXiv preprint , 2025. 38 OLIVER GOUGH [17] Matthieu Hillairet, Pierre Raphaël, et al. Smooth type ii blow-up solutions to the four-dimensional energy-critical wave equation. Anal. PDE, 5(4):777-829, 2012. [18] Tetsuya Ishiwata and Takiko Sasaki. The blow-up curve of solutions to one dimensional nonlinear wave equations with the Dirichlet boundary conditions. Japan Journal of Industrial and Applied Mathematics, 37:339-363, 2020. [19] Jacek Jendrej. Construction of type ii blow-up solutions for the energy-critical wave equation in dimension 5. Journal of Functional Analysis, 272(3):866-917, 2017. [20] Tosio Kato. Perturbation theory for linear operators, volume 132. Springer Science & Business Media, 2013. [21] Jihoi Kim. Self-similar blow up for energy supercritical semilinear wave equation. arXiv preprint , 2022. [22] Joachim Krieger. On stability of type II blow up for the critical nonlinear wave equation on R3 +, volume 265 of Memoirs of the American Mathematical Society. American Mathematical Society, 2020. [23] Joachim Krieger and Joules Nahas. Instability of type ii blow up for the quintic nonlinear wave equation on mathbbR3+1. Bulletin de la Société mathématique de France, 143:339-355, 2015. [24] Joachim Krieger, Wilhelm Schlag, and Daniel Tataru. Slow blow-up solutions for the h1(R3) critical focusing semilinear wave equation. Duke Mathematical Journal, 2009. [25] Jie Liu. A note on the stability of self-similar blow-up solutions for superconformal semilinear wave equations. arXiv preprint , 2025. 
[26] Frank Merle, Pierre Raphaël, Igor Rodnianski, and Jeremie Szeftel. On blow up for the energy super critical defocusing nonlinear schrödinger equations. Inventiones mathematicae, 227(1):247-413, 2022. [27] Frank Merle and Hatem Zaag. Determination of the blow-up rate for the semilinear wave equation. American journal of mathematics, 125(5):1147-1164, 2003. [28] Frank Merle and Hatem Zaag. Determination of the blow-up rate for a critical semilinear wave equation. Mathematische Annalen, 331(2):395-416, 2005. [29] Frank Merle and Hatem Zaag. Existence and universality of the blow-up profile for the semilinear wave equation in one space dimension. Journal of Functional Analysis, 253(1):43-121, 2007. [30] Frank Merle and Hatem Zaag. On the stability of the notion of non-characteristic point and blow-up profile for semilinear wave equations. Communications in Mathematical Physics, 333(3):1529-1562, 2015. [31] Frank Merle and Hatem Zaag. Dynamics near explicit stationary solutions in similarity variables for solutions of a semilinear wave equation in higher dimensions. Transactions of the American Mathematical Society, 368(1):27-87, 2016. [32] Frank WJ Olver. NIST handbook of mathematical functions hardback and CD-ROM. Cambridge university press, 2010. [33] Matthias Ostermann. Stable blowup for focusing semilinear wave equations in all dimensions. Transactions of the American Mathematical Society, 377(07):4727-4778, 2024. [34] Mohammad A. Rammaha. Upper bounds for the life span of solutions to systems of nonlinear wave equations in two and three space dimensions. Nonlinear Analysis: Theory, Methods & Applications, 25(6):639-654, 1995. [35] Mohammad A. Rammaha. A note on a nonlinear wave equation in two and three space dimensions. Communications in Partial Differential Equations, 22(5-6):799-810, 1997. [36] T. Sasaki, S. Takamatsu, and H. Takamura. The lifespan of classical solutions of one dimensional wave equations with semilinear terms of the spatial derivative. AIMS Mathematics, 8(11):25477-25486, 2023. [37] Takiko Sasaki. Regularity and singularity of the blow-up curve for a wave equation with a derivative nonlinearity, 2018. preprint. [38] Takiko Sasaki. Convergence of a blow-up curve for a semilinear wave equation. Discrete and Continuous Dynamical Systems - Series S, 14(3):1133-1143, 2021. [39] Takiko Sasaki. Regularity of the blow-up curve at characteristic points for nonlinear wave equations, 2022. preprint. [40] K. Shao, H. Takamura, and C. Wang. Blow-up of solutions to semilinear wave equations with spatial derivatives. arXiv preprint , 2024. [41] Christopher Donald Sogge. Lectures on non-linear wave equations, volume 2. International Press Boston, MA, 1995. [42] Gerald Teschl. Ordinary differential equations and dynamical systems, volume 140. American Mathematical Soc., 2012. [43] David Wallauch. Strichartz estimates and blowup stability for energy critical nonlinear wave equations. Transactions of the American Mathematical Society, 376(06):4321-4360, 2023.
RL-100: Performant Robotic Manipulation with Real-World Reinforcement Learning

Kun Lei1,2,*, Huanyu Li1,2,*, Dongjie Yu1,3,*, Zhenyu Wei5,*, Lingxiao Guo6, Zhennan Jiang7, Ziyu Wang4, Shiyu Liang2 and Huazhe Xu1,4

Abstract

Real-world robotic manipulation in homes and factories demands reliability, efficiency, and robustness that approach or surpass skilled human operators. We present RL-100, a real-world reinforcement learning training framework built on diffusion visuomotor policies trained by supervised learning. RL-100 introduces a three-stage pipeline. First, imitation learning leverages human priors. Second, iterative offline reinforcement learning uses an Offline Policy Evaluation procedure, abbreviated OPE, to gate PPO-style updates that are applied in the denoising process for conservative and reliable improvement. Third, online reinforcement learning eliminates residual failure modes. An additional lightweight consistency distillation head compresses the multi-step sampling process in diffusion into a single-step policy, enabling high-frequency control with an order-of-magnitude reduction in latency while preserving task performance. The framework is task-, embodiment-, and representation-agnostic and supports both 3D point clouds and 2D RGB inputs, a variety of robot platforms, and both single-step and action-chunk policies. We evaluate RL-100 on seven real-robot tasks spanning dynamic rigid-body control, such as Push-T and Agile Bowling, fluids and granular pouring, deformable cloth folding, precise dexterous unscrewing, and multi-stage orange juicing. RL-100 attains 100% success across evaluated trials for a total of 900 out of 900 episodes, including up to 250 out of 250 consecutive trials on one task. The method achieves near-human teleoperation or better time efficiency and demonstrates multi-hour robustness with uninterrupted operation lasting up to two hours. The resulting policies generalize zero-shot to novel dynamics with an average success of 92.5% and adapt in a few-shot fashion to significant task variations, reaching an average of 86.7% after one to three hours of additional training. These results suggest a practical path to deployment-ready robot learning by starting from human priors, aligning training objectives with human-grounded metrics, and reliably extending performance beyond human demonstrations. For more results and videos, please visit our project website: https://lei-kun.github.io/RL-100/.

Keywords

Real-world Reinforcement Learning, Diffusion Policy, Robotic Manipulation, Offline-to-Online RL, Visuomotor Control

Introduction

Dexterous robotic manipulation stands as an iconic challenge in robotics (Cui and Trinkle 2021; Luo et al. 2025). Real-world deployment beyond laboratories requires human-level reliability, efficiency, and robustness. Recent learning-based advances across generative diffusion policies (Chi et al. 2023; Ze et al. 2024), diffusion-based robot foundation models (Black et al. 2024a; Intelligence et al. 2025), sim-to-real RL (Yuan et al. 2025; Lin et al. 2025), and real-world RL (Luo et al. 2025, 2024a) have narrowed this gap, demonstrating human-like manipulation proficiency. In particular, generative policies and foundation models are trained or fine-tuned on high-quality, human-collected real-robot datasets at varying scales, providing strong human priors and enabling robots to acquire the efficient strategies used by skilled tele-operators.
However, high-quality real-robot data remain scarce: teleoperation incurs perceptual and control latency and favors slow, conservative motions (Guo et al. 2025). Moreover, large-scale collection depends on skilled operators and is labor-intensive and expensive. As a result, state-action coverage is limited, undermining generalization and reliability. Consequently, this supervised paradigm is constrained by an imitation ceiling: under purely supervised objectives, performance is effectively bounded by demonstrator skill and inherits human inefficiencies, biases, and occasional errors. Reinforcement learning (RL) offers a complementary route by optimizing returns from interaction rather than imitation fidelity, enabling the discovery of strategies that are rare or absent in human demonstrations. At the same time, sim-to-real RL must contend with visual and dynamics gaps between simulation and reality, while naïvely training a learning-based generative policy on real hardware is risky and sample-inefficient. This raises a central question: how can we build a robotic learning system that leverages strong human priors yet continues to refine itself through autonomous exploration? A useful analogy comes from human learning: babies learn to walk under parental supervision, then reinforce the skill on their own until they master it and eventually transfer it across terrains. Analogously, a generalizable robotic learning system should combine skilled human priors with self-improvement to reach, and in some cases exceed, human capability in reliability, efficiency, and robustness.

1 Shanghai Qizhi Institute, China
2 Shanghai Jiao Tong University, China
3 The University of Hong Kong, HKSAR, China
4 IIIS, Tsinghua University, China
5 University of North Carolina at Chapel Hill, USA
6 Carnegie Mellon University, USA
7 Chinese Academy of Sciences, China
* Equal contribution

arXiv:2510.14830v1 [cs.RO] 16 Oct 2025

Figure 1. Teaser. Real-robot snapshots illustrating the diversity of our task suite. Panels are ordered top-left → bottom-right: (a) Dynamic Push-T; (b) Agile Bowling; (c) Pouring; (d) Dynamic Unscrewing with a dexterous hand; (e) Dual-arm Folding; (f) Juicing, with stage 1 and stage 2 shown side-by-side in the same panel. The grid highlights six of the seven tasks; the juicing stages 1/2 are grouped into one panel for space.

In this paper, we introduce RL-100, a framework that employs a real-world RL post-training phase on top of an imitation-based diffusion policy, preserving its expressive strengths while explicitly optimizing deployment metrics (success rate, time-to-completion, and robustness) under mild human-guided exploration. In short, we start from human priors, align with human-grounded objectives, and go beyond human performance. RL-100 has three stages with distinct roles and costs: (i) Imitation-learning (IL) pretraining on teleoperated demonstrations provides a competent, low-variance base, much like the sponge layer of a cake, on which subsequent learning can be built. (ii) Iterative offline RL post-training (offline updates on a growing buffer of policy rollouts) delivers the bulk of the improvement in success rate and efficiency across iterations, analogous to adding the cream layer. (iii) Online and on-policy RL post-training supplies the last-mile reliability, targeting rare failure modes that remain after iterative offline learning; these are the cherries on top. But this stage is resource-intensive on real hardware (parameter tuning, resets, approvals).
We therefore allocate most of the learning budget to iterative offline updates and use a small, targeted online budget to push performance from a high success rate (e.g., 95% ) to near-perfect (e.g., 99%+). Moreover, RL-100 is representation-agnostic: it operates in a vision-based setting and supports both 3D point clouds and 2D RGB images by simply swapping observation encoders, without modifying rest of the framework. While our real-robot experiments use 3D point clouds as the primary representation, ablations in simulation show the same performance trends with 2D inputs. In particular, we introduce a self-supervised visual encoder tailored for RL post-training, which furnishes stable, task-relevant features throughout policy exploration and updates. During policy rollouts, a human operator gives sparse success signals when needed, and the controller follows conservative operating limits. We use a unified policy- gradient objective across both iterative offline and online phases to fine-tune the diffusion sampler’s short-horizon denoising schedule (Song et al. 2021). This alignment yields stable updates across phases and strong fine-tuning sample efficiency. In addition, we interleave a lightweight distillation loss that compresses the K-step diffusion policy into a one- step consistency (Song et al. 2023) policy for deployment, reducing inference latency while maintaining or improving efficiency and robustness. Moreover, our framework is task- and embodiment- agnostic. We evaluate RL-100 across simulated tasks and on a real-world suite of seven manipulation tasks as illustrated in Fig. 1 and summarized in Tab. 1: Dynamic Push- T, Agile Bowling, Pouring, Soft-towel Folding, Dynamic Unscrewing, and Orange Juicing. Orange Juicing comprises two subtasks, placing and removal, which are trained and evaluated separately but reported as one task family. The suite includes rigid-body dynamics, deformable objects, fluids, and precision assembly, and the framework is deployed across multiple embodiments. For these tasks and embodiments, we select a specific control mode per task: a single-step control mode is used when a fast closed-loop reaction is necessary; action-chunk-based control is preferred for coordination-heavy or high precision tasks where smoothing mitigates jitter and limits error compounding (Zhao et al. 2023). Both regimes share the same diffusion backbone; only the action heads differ. Because we target deployment in homes and factories, we emphasize deployment-centric metrics: reliability (success rate), efficiency (time-to-completion), and robustness (sustained stability under long deployment time and perturbation). Real-world experiments show that RL-100 attains 100% reliability across all seven tasks (up to 250/250 consecutive successful trials) and maintains long-horizon stability. In terms of efficiency, RL-100 approaches human teleoperation-level time-to-completion and, on several tasks, matches or even surpasses skilled operators. In summary, our main contributions are as follows: (i) Unified training framework. We propose RL-100, a three-stage real-world RL training framework on top of teleoperation-trained generative diffusion policies. The pipeline chains IL pretraining, iterative offline RL and online RL, with most updates allocated to iterative offline and a small, targeted online budget for the last mile to deployment-grade performance. (ii) One objective, fast deployment. 
A unified policy-gradient loss fine-tunes the diffusion sampler's denoising schedule; a lightweight distillation compresses the multi-step diffusion policy to a one-step consistency policy.

(iii) Generality across tasks, embodiments, and visual representations. Our framework is task-, embodiment-, and visual-representation-agnostic. To the best of our knowledge, RL-100 is the first system to demonstrate vision-based RL post-training on real robots across diverse task modalities and embodiments.

(iv) Deployment-centric results. On real robots, RL-100 achieves 100% reliability across seven tasks, including runs of up to 250 consecutive successes, attains efficiency comparable to or exceeding human teleoperation on multiple tasks, and demonstrates long-horizon robustness with uninterrupted operation for up to two hours, offering a promising route to deployment in homes and factories.

(v) RL-specific network backbone. Our policy backbone is tailored for diffusion-based visuomotor control and is agnostic to the execution regime, supporting both single-step and action-chunk control. We use a self-supervised visual encoder to produce stable, drift-resistant representations throughout RL fine-tuning.

Preliminaries

Reinforcement Learning

We formulate the robotic manipulation problem as a Markov Decision Process (MDP) ⟨S, A, P, R, γ⟩, where S is the state space, A is the action space, P is the transition dynamics, R is the reward function, and γ is the discount factor. The robot policy π chooses action at at state st to maximize the discounted cumulative reward Σ_{t=0}^∞ γ^t R(st, at). The value function

V(s) = E[ Σ_{t=0}^∞ γ^t R(st, at) | s0 = s, at ∼ π(·|st) ]

measures robot performance starting from a given state s, and the Q-function

Q(s, a) = E[ Σ_{t=0}^∞ γ^t R(st, at) | s0 = s, a0 = a, at ∼ π(·|st) ]

starts from given s and a.

Offline-to-online RL. Our post-training follows an offline-to-online paradigm. Following Lei et al. (2024), we employ a proximal policy optimization (PPO)-style objective (Schulman et al. 2017) to unify both stages. The core learning objective incorporates importance sampling for proximal policy updates:

Ji(π) = E_{s∼ρπ, a∼πi} [ min( r(π) A(s, a), clip(r(π), 1 - ϵ, 1 + ϵ) A(s, a) ) ],  (1)

where ρπ is the stationary state distribution induced by policy π, r(π) = π(a|s)/πi(a|s) is the importance ratio, and A(s, a) = Q(s, a) - V(s) is the advantage function.

Table 1. Task suite, embodiments, and control modes. Embodiments listed are typical examples; our framework is task- and embodiment-agnostic. Control modes are selected per task as either Single-step (one action per control tick) or Action-chunk (short c-step segments).
Task | Control mode | Embodiments (examples) | Modality | Key challenges
Dynamic Push-T | Single-step | UR5 + 3D-printed end-effector (single-arm) | Rigid-body dynamics | Fast reaction to a moving goal pose and online perturbations
Agile Bowling | Single-step | UR5 + 3D-printed end-effector (single-arm) | Rigid-body dynamics | Release-timing control at high velocity; trajectory and release-pose accuracy
Pouring | Single-step | Franka + LeapHand (single-arm) | Fluids / granular | Spillage minimization; flow control; container alignment under motion
Dynamic Unscrewing | Action-chunk | Franka + LeapHand (single-arm) | Precision assembly | Time-varying alignment; torque/pose regulation; cross-thread avoidance
Soft-towel Folding | Action-chunk | xArm + Franka + Robotiq (dual-arm) | Deformable cloth | Large deformation; coordinated contacts; fold accuracy
Orange Juicing‡ | Action-chunk | xArm + Robotiq (single-arm) | Deformable manipulation | Confined-space insertion/ejection; generalization to fruit variability

Control modes: Single-step (one action per control tick); Action-chunk (short c-step segments). ‡ Two subtasks: Placing (place fruit into the press zone) and Removal (remove the spent fruit); the spent fruit is deformable, fruit sizes vary substantially (requiring strong generalization), and operation occurs in a confined cavity with narrow clearances.

The key distinction between offline and online stages lies in advantage estimation:
• Offline: Implicit Q-Learning (IQL)-style (Kostrikov et al. 2022) value functions: Aoff(s, a) = Q(s, a) - V(s).
• Online: Generalized Advantage Estimation (GAE) (Schulman et al. 2016), Aon(s, a) = GAE(Rt, V), to balance variance and bias.

Diffusion models

We overload the subscript t from the timestep in the MDP to the step index in the diffusion process in the following two subsections. Diffusion models (Ho et al. 2020a) learn to reverse a noising process that gradually corrupts clean data x0 ∈ R^d into Gaussian noise, in order to reconstruct the original clean data distribution. Given a schedule {αt}_{t=1}^T with αt = 1 - βt and a sample x0 drawn from the clean distribution, the forward noising process has the closed form

xt = √(ᾱt) x0 + √(1 - ᾱt) ε,  ε ∼ N(0, I),  (2a)
ᾱt = Π_{s=1}^t αs.  (2b)

A denoiser εθ is trained to recognize the noise inside the noisy sample via

Lsimple(θ) = E_{x0, t, ε} [ ∥ε - εθ(xt, t)∥²₂ ],  (3)

so as to recover the clean sample.

DDIM sampling with stochastic form

Denoising Diffusion Implicit Models (DDIM) (Song et al. 2021) provide a family of samplers that interpolate between deterministic and stochastic generation. Given a learned denoiser εθ, the predicted clean sample at time t is commonly written as

x̂0(xt, t) = (xt - √(1 - ᾱt) εθ(xt, t)) / √(ᾱt),  (4)

where ᾱt = Π_{i=1}^t αi denotes the cumulative noise schedule. We consider a (possibly) coarse time-subsequence for sampling τK > τK-1 > · · · > τ1, with K ≪ T (for example, T = 50-1000 and K = 5-10). To cover both the single-step (t → t-1) and subsampled (τk → τk-1) cases in a unified notation, denote a generic transition from time t to an earlier time m (with m < t) by t → m. DDIM then defines a stochastic update with variance parameter σt→m ≥ 0 as

µθ(xt, t→m) = √(ᾱm) x̂0(xt, t) + √(1 - ᾱm - σ²t→m) εθ(xt, t),  (5a)
xm = µθ(xt, t→m) + σt→m εt→m,  εt→m ∼ N(0, I).  (5b)

The radical in (5a) requires the constraint

1 - ᾱm - σ²t→m ≥ 0, hence 0 ≤ σt→m ≤ √(1 - ᾱm).  (6)

In particular, the deterministic DDIM update is recovered when σt→m = 0 (the distribution degenerates to a Dirac at µθ). Conversely, positive values of σt→m inject stochasticity into the transition.
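A minimal numpy sketch of the transition (5a)-(5b), including the constraint (6) and the deterministic σ = 0 limit; the names (ddim_step, eps_model, alpha_bar) are illustrative and not from the paper:

```python
# Stochastic DDIM transition t -> m (m < t), following (4)-(6).
import numpy as np

def ddim_step(x_t, t, m, eps_model, alpha_bar, sigma, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    a_t, a_m = alpha_bar[t], alpha_bar[m]
    assert 0.0 <= sigma <= np.sqrt(1.0 - a_m), "variance violates constraint (6)"
    eps = eps_model(x_t, t)                                           # learned denoiser
    x0_hat = (x_t - np.sqrt(1.0 - a_t) * eps) / np.sqrt(a_t)          # (4)
    mu = np.sqrt(a_m) * x0_hat + np.sqrt(1.0 - a_m - sigma**2) * eps  # (5a)
    if sigma == 0.0:
        return mu                                # deterministic DDIM (Dirac at mu)
    return mu + sigma * rng.standard_normal(x_t.shape)                # (5b)
```

Chaining ddim_step along the subsampled schedule τK > · · · > τ1 reproduces the full sampler; choosing σ > 0 at a step turns that transition into the Gaussian sub-policy discussed next.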
Policy perspective and log-likelihood. When $\sigma_{t\to m} > 0$, the transition from $x_t$ to $x_m$ can be viewed as a Gaussian sub-policy

$\pi_\theta(x_m \mid x_t, t\to m) = \mathcal{N}\big(\mu_\theta(x_t, t\to m),\ \sigma_{t\to m}^2 I\big),$ (7a)
$\log \pi_\theta(x_m \mid x_t, t\to m) = -\dfrac{1}{2\sigma_{t\to m}^2}\,\big\|x_m - \mu_\theta(x_t, t\to m)\big\|^2 + C,$ (7b)

where C is a constant independent of the parameters θ. Note that (7b) is only valid for $\sigma_{t\to m} > 0$; when $\sigma_{t\to m} = 0$ the density becomes singular, and the transition is best described as the deterministic mapping $x_m = \mu_\theta(x_t, t\to m)$, or equivalently a Dirac distribution. A full DDIM sampling process recovers a clean sample $x_0$ by chaining the sub-policies $\{\pi_\theta(x_{\tau_{k-1}} \mid x_{\tau_k}, \tau_k\to\tau_{k-1})\}_{k=K}^{1}$, starting from $x_{\tau_K} \sim \mathcal{N}(0, I)$. In practice, one may set $\sigma_{t\to m} = 0$ for fully deterministic sampling, or choose a small positive $\sigma_{t\to m}$ (subject to (6)) to trade off between sample diversity and stability. If the log-likelihood in (7b) is later used as an objective or as part of a fine-tuning criterion, care must be taken in handling the $\sigma_{t\to m}\to 0$ limit (e.g., by restricting likelihood-based terms to steps with strictly positive variance).

Notation and scheduling conventions. Throughout the paper we distinguish MDP timesteps from diffusion (denoising) timesteps. Environment timesteps are denoted by t, while diffusion indices follow a (possibly subsampled) schedule $\tau_K > \tau_{K-1} > \cdots > \tau_1$ and are written as superscripts (e.g., $a^{\tau_k}$). A generic denoising transition from time t to an earlier time m is denoted by t → m (for the subsampled schedule this will typically be written $\tau_k \to \tau_{k-1}$). Variance parameters are indexed consistently as $\sigma_{\tau_k\to\tau_{k-1}}$ (abbreviated as $\sigma_{\tau_k}$ when unambiguous). We always enforce the constraint

$0 \le \sigma_{\tau_k} \le \sqrt{1-\bar\alpha_{\tau_{k-1}}},$ (8)

so that all square roots appearing in the DDIM updates are real. When $\sigma_{\tau_k} = 0$ the corresponding transition degenerates to a deterministic mapping (Dirac) and Gaussian log-densities are not defined; therefore any likelihood-based objective (e.g., a policy gradient using log π) must only use steps with strictly positive variance.

Consistency Models

Consistency models (Song et al. 2023) learn a single-step mapping from noisy inputs at arbitrary noise levels to clean data. Denote the consistency model by $C_\theta(x^\tau, \tau)$, where the superscript indicates the diffusion index (per the notation above). Given a frozen diffusion teacher $\Psi_\varphi$ (for instance, a K-step DDIM teacher that follows the same subsampled schedule $\{\tau_k\}$), consistency distillation minimizes the squared regression objective

$\mathcal{L}_{\mathrm{CD}}(\theta) = \mathbb{E}_{x_0,\, \tau,\, \varepsilon}\Big[\big\|C_\theta(x^\tau, \tau) - \mathrm{sg}\big[\Psi_\varphi(x^\tau, \tau\to 0)\big]\big\|_2^2\Big],$ (9)

where sg[·] denotes stop-gradient and $\Psi_\varphi(x^\tau, \tau\to 0)$ is the teacher's output after running the teacher's denoising chain from $x^\tau$ down to (approximately) $x_0$. The teacher must be run using the same subsampled schedule $\{\tau_k\}$ that the student will emulate or distill from. At inference time, a consistency model requires only a single evaluation:

$x_0 \approx C_\theta(x^{\tau_K}, \tau_K), \qquad x^{\tau_K} \sim \mathcal{N}(0, I).$ (10)

Diffusion policy and RL fine-tuning

Diffusion Policy (Chi et al. 2023) performs diffusion over robot actions conditioned on observations. Given an observation o, we roll out along the K-step subsampled schedule $\{\tau_K > \cdots > \tau_1\}$:

$a^{\tau_{k-1}} = f_\theta(a^{\tau_k}, \tau_k \mid o), \qquad k = K, \ldots, 2,$ (11)

where $f_\theta$ follows the DDIM update (5) conditioned on o. We then retrieve a clean action $a_t := a^{\tau_0}$ from this so-called diffusion policy.
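Below is a sketch of the K-step rollout of Eq. (11), reusing `ddim_step` from the sketch above; the denoiser signature `eps_net(a, tau, obs)` and the schedule containers are assumptions. The per-step Gaussian log-probabilities (Eq. (7b)) are recorded only where σ > 0, anticipating the RL formulation that follows.

```python
import torch

def rollout_diffusion_policy(eps_net, obs, taus, abars, sigmas, act_dim):
    a = torch.randn(act_dim)                       # a^{tau_K} ~ N(0, I)
    logps = []
    for k in range(len(taus) - 1):
        t, m = taus[k], taus[k + 1]                # taus is descending
        eps_pred = eps_net(a, t, obs)
        a, mu, sig = ddim_step(a, eps_pred, abars[t], abars[m], sigmas[k])
        if sig > 0:                                # Eq. (7b) is defined only for sigma > 0
            logps.append(-((a - mu) ** 2).sum() / (2.0 * sig ** 2))
    return a, logps                                # clean action and sub-policy log-probs
```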
RL formulation. We now embed the K-step diffusion process as a sub-MDP inside a single step of the robotic manipulation MDP. Each denoising step can be viewed as sampling a cleaner noisy action $a^{\tau_{k-1}}$ from the Gaussian sub-policy in (7a). We therefore model the process as a K-step sub-MDP with:

• Initial state: $s_K = (a^{\tau_K}, \tau_K, o)$ with $a^{\tau_K} \sim \mathcal{N}(0, I)$.
• State: $s_k = (a^{\tau_k}, \tau_k, o)$, k = K, ..., 1.
• Action: $u_k = a^{\tau_{k-1}}$, drawn from the denoising sub-policy $\pi_\theta(u_k \mid s_k) = \mathcal{N}\big(\mu_\theta(a^{\tau_k}, \tau_k, o),\ \sigma_{\tau_k}^2 I\big)$.
• Transition: $s_{k-1} = (u_k, \tau_{k-1}, o)$.
• Reward: this sub-MDP only receives the terminal reward $R(a^{\tau_0})$ from the upper environment MDP.

The log-likelihood defined in (7b) then computes the density of the sub-policies, enabling end-to-end optimization of task rewards with respect to the denoising sub-policies via policy-gradient updates (e.g., PPO (Schulman et al. 2017)).

Methods

We present RL-100, a unified framework for robot learning that combines IL with RL. As illustrated in Fig. 2, our approach consists of three stages: (1) imitation learning from human demonstrations, (2) iterative offline RL with progressive data expansion, and (3) online fine-tuning. The key innovation lies in unifying offline and online RL through a shared PPO-style objective applied across diffusion denoising steps.

Imitation Learning

We initialize the policy by behavior cloning on human-teleoperated trajectories. Our approach uses conditional diffusion to learn robust visuomotor policies from demonstrations. Each episode provides synchronized tuples

$\{(o_t, q_t, a_t)\}_{t=1}^{T_e},$ (12)

where $o_t$ are visual observations (RGB images or 3D point clouds), $q_t$ denotes robot proprioception (joint positions/velocities, gripper state), and $a_t$ is either a single-step action or an action chunk.

Figure 2. Overview of RL-100. We first learn from human demonstrations through imitation learning with diffusion policies, then apply iterative offline RL with data expansion, followed by online fine-tuning for final performance optimization.

Conditioning and prediction horizon. We fuse recent observations into a conditioning vector

$c_t = [\phi(o_i, q_i)]_{i=t-n_o+1}^{t},$ (13)

where the perception encoder φ(·) processes the most recent $n_o$ frames (typically $n_o = 2$) and [·] is the operator concatenating multiple vectors. The clean diffusion target $a_t^{\tau_0}$ at timestep t is set to a single action $a_t^{\tau_0} = u_t \in \mathbb{R}^{d_a}$ or an action chunk $a_t^{\tau_0} = [u_t, \ldots, u_{t+n_c-1}] \in \mathbb{R}^{n_c d_a}$, where $n_c$ is the chunk size (typically 8–16). Actions are normalized per dimension; we predict delta end-effector pose when applicable.

Diffusion parameterization. Following conditional diffusion over actions, we corrupt $a_t^{\tau_0}$ to $a_t^{\tau_K}$ via the forward process (Eq. (2)). The denoiser $\varepsilon_\theta(a^\tau, \tau, c_t)$ is trained with the noise-prediction objective

$\mathcal{L}_{\mathrm{IL}}(\theta) = \mathbb{E}_{(a^{\tau_0}, c_t)\sim\mathcal{D},\, \tau,\, \varepsilon}\big[\|\varepsilon - \varepsilon_\theta(a^\tau, \tau, c_t)\|_2^2\big],$ (14)

where D is the demonstration dataset, $\tau \in \{\tau_K > \cdots > \tau_1\}$ indexes a K-step schedule, and $\varepsilon \sim \mathcal{N}(0, I)$. The policy backbone is shared across control modes; only the output head differs: $\mathbb{R}^{d_a}$ (single-step) or $\mathbb{R}^{n_c d_a}$ (chunked).
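A minimal training-step sketch for the noise-prediction objective in Eq. (14); the denoiser signature and the uniform sampling over schedule indices are assumptions.

```python
import torch
import torch.nn.functional as F

def il_diffusion_loss(eps_net, a0, cond, taus, abars):
    # a0: clean actions (B, d); cond: conditioning vectors c_t (B, c);
    # taus: subsampled schedule indices; abars: matching cumulative alpha-bars.
    B = a0.shape[0]
    k = torch.randint(len(taus), (B,))             # random schedule index per sample
    abar = abars[k].view(B, 1)                     # broadcast over action dims
    eps = torch.randn_like(a0)
    a_noisy = abar.sqrt() * a0 + (1.0 - abar).sqrt() * eps   # forward process, Eq. (2)
    return F.mse_loss(eps_net(a_noisy, taus[k], cond), eps)  # Eq. (14)
```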
Vision and proprioception encoders. For RGB input, we use pretrained ResNet/ViT backbones; for point-cloud input, we adapt the 3D encoder of DP3 (Ze et al. 2024) with reconstruction regularization for stability during RL fine-tuning. Visual embeddings are projected and concatenated with proprioceptive features to form $c_t$. All encoders are trained end-to-end with Eq. (14). When applicable, we add reconstruction (Recon) and variational information bottleneck (VIB) regularization:

$\mathcal{L}_{\mathrm{recon}} = \beta_{\mathrm{recon}}\big(d_{\mathrm{Chamfer}}(\hat{o}, o) + \|\hat{q} - q\|_2^2\big),$ (15)
$\mathcal{L}_{\mathrm{KL}} = \beta_{\mathrm{KL}}\, \mathrm{KL}\big(\phi(z \mid o, s) \,\|\, \mathcal{N}(0, I)\big),$ (16)

where o and q denote the observed point cloud and proprioceptive vector, $\hat{o}$ and $\hat{q}$ are the reconstructed observations given the encoded embedding φ(o, q), and $d_{\mathrm{Chamfer}}$ is the Chamfer distance between two sets of point clouds. The complete imitation learning objective becomes

$\mathcal{L}^{\mathrm{IL}}_{\mathrm{total}} = \mathcal{L}_{\mathrm{IL}} + \mathcal{L}_{\mathrm{recon}} + \mathcal{L}_{\mathrm{KL}}.$ (17)

During RL fine-tuning, we reduce $\beta_{\mathrm{recon}}$ and $\beta_{\mathrm{KL}}$ by a factor of 10 to allow policy improvement while maintaining representation stability.

Inference and control. At deployment, K-step DDIM sampling (Eq. (5)) generates actions: $\hat{a}_t^{\tau_0} \leftarrow \mathrm{DDIM}_K(\varepsilon_\theta(\cdot, \cdot, c_t))$. Single-step control executes $u_t \leftarrow \hat{a}_t^{\tau_0}$ immediately; chunked control executes $[u_t, \ldots, u_{t+n_c-1}] \leftarrow \hat{a}_t^{\tau_0}$ over the following $n_c$ timesteps. Single-step control excels in reactive tasks (e.g., dynamic bowling), while action chunking reduces jitter in precision tasks (e.g., assembly). Both modes share the same architecture, enabling task-adaptive deployment.

Unified Offline and Online RL Fine-tuning

Handling single action vs. action chunk. Our framework supports both single-step and chunked action execution, which affects value computation and credit assignment:

• Single action: standard MDP formulation with per-step rewards $R_t$ and discount γ.
• Action chunk: each chunk of $n_c$ actions is treated as a single decision. The chunk receives the cumulative reward $R_{\mathrm{chunk}} = R_{t:t+n_c-1} = \sum_{j=0}^{n_c-1}\gamma^j R_{t+j}$, and the equivalent discount factor between chunks is $\gamma^{n_c}$.

For clarity, we present the single-action case below; the chunked case works similarly by replacing per-step quantities with their chunk equivalents.

Two-level MDP structure. Our approach operates on two temporal scales:

1. Environment MDP: standard robot control with state $s_t$, action $a_t$, reward $R_t$.
2. Denoising MDP: a K-step diffusion process generating each $a_t^{\tau_0}$ through iterative refinement.

The denoising MDP is embedded within each environment timestep, creating a hierarchical structure where K denoising steps produce one environment action.

Unified PPO objective over denoising steps. Given the two-level MDP structure, we optimize the PPO objective with respect to the K-step diffusion process at each timestep t in iteration i via a summation across all denoising steps k:

$J_i(\pi) = \mathbb{E}_{s_t\sim\rho_\pi,\, a_t\sim\pi_i}\Big[\sum_{k=1}^{K}\min\big(r_k(\pi)\, A_t,\ \mathrm{clip}(r_k(\pi),\, 1-\epsilon,\, 1+\epsilon)\, A_t\big)\Big],$ (18)

and the loss function is

$\mathcal{L}^{\mathrm{off}}_{\mathrm{RL}} = -J_i(\pi),$ (19)

where $r_k(\pi)$ is the per-denoising-step importance ratio and $A_t$ is the task advantage tied to the environment timestep t. Here, $\pi_i$ denotes the behavior policy at PPO iteration i, and $\rho_\pi$ is the (discounted) state distribution under the current policy π. The key insight is to share the same environment-level advantage $A_t$ across all K denoising steps, providing dense learning signals throughout the denoising process while maintaining consistency with the environment reward structure; a sketch follows below.
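The sketch below instantiates Eqs. (18)-(19): per-step log-probabilities over the K denoising steps share one environment-level advantage. Shapes and names are illustrative assumptions.

```python
import torch

def denoising_ppo_loss(logp_new, logp_old, adv, eps=0.2):
    # logp_new, logp_old: (B, K), one entry per denoising step k.
    # adv: (B,), the single environment-level advantage A_t shared by all K steps.
    ratio = torch.exp(logp_new - logp_old)          # r_k(pi)
    adv = adv.unsqueeze(-1)                         # broadcast A_t across steps
    surr = torch.min(ratio * adv,
                     torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * adv)
    return -surr.sum(dim=-1).mean()                 # L_RL = -J_i(pi), Eq. (19)
```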
Offline RL Setting. Given an offline dataset D, we initialize the behavior policy with the diffusion policy acquired from IL: $\pi_0 := \pi_{\mathrm{IL}}$. No new data are collected in this pure offline stage.

Policy improvement on D. At offline iteration i, we optimize the PPO-style surrogate (Eq. (18)) applied across K denoising steps, using the offline-policy ratio

$r_k^{\mathrm{off}}(\pi) = \dfrac{\pi(a^{\tau_{k-1}} \mid s_k)}{\pi_i(a^{\tau_{k-1}} \mid s_k)},$

with standard clipping. Offline advantages are computed as $A_t^{\mathrm{off}} = Q_\psi(s_t, a_t) - V_\psi(s_t)$, where the critics $(Q_\psi, V_\psi)$ are pre-trained on D following IQL (Kostrikov et al. 2022). This yields a candidate policy π from $\pi_i$ after several epochs of gradient updates on D.

OPE gate and iteration advancement. We use AM-Q (Lei et al. 2024) for offline policy evaluation (OPE), without further interaction with the environment, to compare the candidate with the current behavior policy:

$\hat{J}_{\mathrm{AM\text{-}Q}}(\pi) = \mathbb{E}_{(s,a)\sim(\hat{\mathcal{T}},\, \pi)}\Big[\sum_{t=0}^{H-1} Q_\psi(s_t, a_t)\Big],$

where $\hat{\mathcal{T}}$ is a learned transition model. We accept the candidate and advance the behavior-policy iteration only if

$\hat{J}_{\mathrm{AM\text{-}Q}}(\pi) - \hat{J}_{\mathrm{AM\text{-}Q}}(\pi_i) \ge \delta,$ (20)

setting the updated policy as the behavior policy: $\pi_{i+1} := \pi$. Otherwise, we reject and keep the behavior policy unchanged ($\pi_{i+1} := \pi_i$). In practice, we set $\delta = 0.05\,|\hat{J}_{\mathrm{AM\text{-}Q}}(\pi_i)|$ for adaptive thresholding. This OPE-gated rule yields conservative, monotonic behavior-policy improvement on D.

Shared and frozen encoders. To ensure stable representation learning and efficient computation, all components in our offline RL pipeline share the same fixed visual encoder φ pre-trained during imitation learning. During offline RL, we keep $\phi_{\mathrm{IL}}$ fixed and only update the task-specific heads of each module.

Online RL

The online stage uses on-policy components. We use the on-policy ratio for each diffusion step,

$r_k^{\mathrm{on}}(\pi) = \dfrac{\pi(a^{\tau_{k-1}} \mid s_k)}{\pi_i(a^{\tau_{k-1}} \mid s_k)},$

and compute advantages using GAE, $A_t^{\mathrm{on}} = \mathrm{GAE}(\lambda, \gamma; r_t, V_\psi)$, sharing the same $A_t^{\mathrm{on}}$ across all K denoising steps that produce the environment action at time t. We minimize the total loss

$\mathcal{L}^{\mathrm{on}}_{\mathrm{RL}} = -J_i(\pi) + \lambda_V\, \mathbb{E}\big[(V_\psi(s_t) - \hat{V}_t)^2\big],$ (21)

where $\hat{V}_t = \sum_{l=0}^{\infty}\gamma^l r_{t+l}$ is the discounted return and $\lambda_V$ weights the value-function loss.

Distillation to One-step Consistency Policy

High-frequency control is crucial for robotics applications. While our diffusion policy achieves strong performance, the K-step denoising process introduces latency that can limit real-time deployment. To address this, we jointly train a consistency model $c_w$ that learns to map noise directly to actions in a single step, distilling knowledge from the multi-step diffusion teacher $\pi_\theta$. During both offline and online RL training, we augment the policy optimization objective with the consistency distillation loss from Eq. (9):

$\mathcal{L}_{\mathrm{total}} = \mathcal{L}_{\mathrm{RL}} + \lambda_{\mathrm{CD}} \cdot \mathcal{L}_{\mathrm{CD}},$ (22)

where $\mathcal{L}_{\mathrm{RL}}$ is either the offline objective (Eq. (19) with IQL-based advantages) or the online objective (Eq. (21) with GAE). The consistency loss $\mathcal{L}_{\mathrm{CD}}$ follows Eq. (9), with the teacher being our diffusion policy $\pi_\theta$, which performs K-step denoising conditioned on the observation. The stop-gradient operator ensures the teacher policy continues to improve through the RL objectives while simultaneously serving as a distillation target.
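A sketch of the joint objective in Eq. (22); `student` (the consistency model $c_w$) and `teacher_x0` (the frozen-target output of the K-step DDIM teacher) are assumed names, and the stop-gradient of Eq. (9) is the `.detach()` call.

```python
import torch.nn.functional as F

def total_loss(rl_loss, student, teacher_x0, a_noisy, tau, obs, lam_cd=1.0):
    # Consistency distillation, Eq. (9): regress the one-step student onto
    # the teacher's multi-step output, with stop-gradient on the target.
    cd_loss = F.mse_loss(student(a_noisy, tau, obs), teacher_x0.detach())
    # Joint objective, Eq. (22): rl_loss is Eq. (19) offline or Eq. (21) online.
    return rl_loss + lam_cd * cd_loss
```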
High-frequency control is essential for practical robot deployment in industrial settings. First, faster control loops translate directly into improved task-completion efficiency: a robot operating at 20 Hz can execute the same trajectory in half the time of 10 Hz operation, significantly increasing throughput in factory automation, where cycle time directly impacts productivity. Second, many manipulation tasks inherently require high-frequency feedback for reliable execution. Dynamic tasks such as catching moving objects, maintaining contact during sliding motions, or recovering from unexpected disturbances demand sub-50 ms response times that multi-step diffusion cannot provide. Furthermore, tasks involving compliance control, force feedback, or human-robot collaboration often fail catastrophically when the control frequency drops below critical thresholds, as the system cannot react quickly enough to prevent excessive forces or maintain stable contact.

During inference, the consistency model generates actions in a single forward pass, $a^{\tau_0} = c_w(a^{\tau_K}, \tau_K \mid o)$, achieving a K× speedup (e.g., from 100 ms to 10 ms latency) while preserving the performance of the diffusion policy. This order-of-magnitude improvement enables deployment in real-world manufacturing scenarios where robots must maintain consistent cycle times, respond to conveyor-belt speeds, and operate safely alongside human workers, requirements that are infeasible with standard multi-step diffusion policies.

Overall training framework

While each component above can be used independently, we propose an iterative procedure that combines them for progressive improvement. Instead of applying offline RL once on fixed demonstrations, we alternate between training an IL policy on the current dataset, improving it via offline RL with conservative updates, collecting new data with the improved policy, and re-training IL on the expanded dataset, as summarized in Algo. 1. This creates a virtuous cycle where better policies generate better data, which in turn enables learning even better policies.

Algorithm 1 RL-100 training pipeline
1: Input: demonstrations $D_0$, iterations M
2: Initialize: $\pi^{\mathrm{IL}}_0 \leftarrow$ ImitationLearning($D_0$)
3: for iteration m = 0 to M−1 do
4:   // Offline RL improvement
5:   Train critics: $(Q_{\psi_m}, V_{\psi_m}) \leftarrow$ IQL($D_m$)
6:   Train transition model: $T_{\theta_m}(s' \mid s, a)$
7:   Optimize:
8:   $\pi^{\mathrm{ddim}}_m, \pi^{\mathrm{cm}}_m \leftarrow$ OfflineRL($\pi^{\mathrm{IL}}_m, Q_{\psi_m}, V_{\psi_m}, T_{\theta_m}$)
9:   // Data expansion
10:  Deploy: $D_{\mathrm{new}} \leftarrow$ Rollout($\pi^{\mathrm{ddim}}_m$ or $\pi^{\mathrm{cm}}_m$)
11:  Merge: $D_{m+1} \leftarrow D_m \cup D_{\mathrm{new}}$
12:  // IL re-training on expanded data
13:  $\pi^{\mathrm{IL}}_{m+1} \leftarrow$ ImitationLearning($D_{m+1}$)
14: end for
15: // Final online fine-tuning
16: $\pi^{\mathrm{final}}_{\mathrm{ddim}}, \pi^{\mathrm{final}}_{\mathrm{cm}} \leftarrow$ OnlineRL($\pi_{M-1}, V_{\psi_{M-1}}$)
17: Output: $\pi^{\mathrm{final}}_{\mathrm{ddim}}, \pi^{\mathrm{final}}_{\mathrm{cm}}$

Why IL re-training matters. Re-training with IL on the expanded dataset (Algo. 1, line 13) is crucial for several reasons:

• Distribution shift: IL naturally adapts to the evolving data distribution as higher-quality trajectories are added.
• Stability: supervised learning is more stable than RL on mixed-quality data.
• Multimodality: IL preserves the diffusion policy's ability to model multiple solution modes.
• Distillation: IL effectively distills both human demonstrations and RL improvements into a unified policy.

Final online fine-tuning. After the iterative offline procedure converges, we apply online RL for final performance optimization. This stage benefits from: (1) a strong initialization from iterative offline training, (2) pre-trained value functions that accelerate learning, and (3) a diverse dataset for replay and regularization.

Variance clipping for stable exploration. To ensure stable learning during RL fine-tuning, we introduce variance clipping in the stochastic DDIM sampling process. Specifically, we constrain the standard deviation at each denoising step:

$\tilde\sigma_k = \mathrm{clip}(\sigma_k, \sigma_{\min}, \sigma_{\max}),$ (23)

where $\sigma_k$ is the original DDIM variance parameter from Eq. (5b) and $[\sigma_{\min}, \sigma_{\max}]$ defines the permissible range. This modification bounds the stochasticity of the behavior policy $\pi_\theta(a^{\tau_{k-1}} \mid a^{\tau_k}, \tau_k) = \mathcal{N}\big(\mu_\theta(a^{\tau_k}, \tau_k),\ \tilde\sigma_k^2 I\big)$, preventing both:

• Excessive exploration when $\sigma_k$ is too large, which can lead to out-of-distribution actions that destabilize training or cause safety violations on physical systems.
• Premature convergence when $\sigma_k$ approaches zero, which eliminates exploration and prevents the policy from discovering better modes.

In practice, we set $\sigma_{\min} = 0.01$ to maintain minimal exploration even in late denoising steps, and $\sigma_{\max} = 0.8$ to prevent destructive exploration in early steps. This bounded variance ensures that the importance ratios $r_k(\pi) = \pi(a^{\tau_{k-1}} \mid s_k)/\pi_i(a^{\tau_{k-1}} \mid s_k)$ remain well behaved during PPO updates, as extreme variance differences between the current and behavior policies are avoided. We will empirically demonstrate that this simple modification is crucial for achieving stable fine-tuning performance.
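A minimal sketch of the bounded behavior sub-policy implied by Eq. (23), using the σ bounds quoted in the text; the helper name is hypothetical.

```python
import torch

SIGMA_MIN, SIGMA_MAX = 0.01, 0.8   # bounds reported in the text

def sample_clipped_subpolicy(mu, sigma_k):
    # sigma_tilde = clip(sigma_k, sigma_min, sigma_max), Eq. (23).
    sigma_tilde = min(max(sigma_k, SIGMA_MIN), SIGMA_MAX)
    # Behavior sub-policy N(mu, sigma_tilde^2 I) with bounded stochasticity.
    return mu + sigma_tilde * torch.randn_like(mu), sigma_tilde
```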
Related Work

Reinforcement Learning with Generative Diffusion Models

The integration of generative diffusion models with RL represents a paradigm shift in policy representation and optimization. Building upon foundational work in diffusion models (Ho et al. 2020b; Song et al. 2020) and flow matching (Lipman et al. 2023), recent advances have demonstrated the power of these generative frameworks in capturing the complex, multimodal action distributions inherent in RL problems. Diffusion Q-Learning (DQL) (Wang et al. 2023b) pioneered this integration by replacing traditional Gaussian policies with conditional diffusion models in offline reinforcement learning, addressing fundamental limitations of parametric policies in modeling multimodal behaviors. This approach has evolved in multiple directions: weighted regression methods (Kang et al. 2023; Lu et al. 2023; Ding et al. 2024a) train diffusion policies through importance-weighted objectives to maximize learned Q-functions; reparameterization gradient approaches (Psenka et al. 2024; He et al. 2023; Ding et al. 2024b) use gradient-based optimization despite temporal backpropagation challenges; and sampling-based methods (Chen et al. 2023; Hansen-Estruch et al. 2024) provide effective but computationally expensive solutions. More recently, consistency-based extensions (Li et al. 2024) have further generalized diffusion and consistency policies to visual RL.

Figure 3. Illustrations from rollouts on seven real-world tasks. Each row shows one task, and columns are time-ordered frames (left to right) subsampled from a single trajectory. From top to bottom: (a) Dynamic Push-T, (b) Agile Bowling, (c) Pouring, (d) Dynamic Unscrewing, (e) Soft-towel Folding, (f) Orange Juicing – Placing, (g) Orange Juicing – Removal. The suite spans fluids/granular media, tool use, deformable-object manipulation, dynamic non-prehensile manipulation, and precise insertion, highlighting the diversity and dynamics of our benchmark.

Recent works have also explored using RL to directly optimize diffusion models beyond traditional likelihood-based training. Black et al. (2023) demonstrate fine-tuning diffusion models with RL to maximize arbitrary reward functions, while Fan et al. (2024) apply similar techniques specifically to text-to-image generation using human feedback. Ren et al. (2024) introduce policy-gradient methods tailored to diffusion-based policies, enabling effective online optimization while maintaining the expressiveness benefits of diffusion models.
To address computational bottlenecks inherent in diffusion models, Park et al. (2025) present Flow Q-Learning (FQL), which leverages flow-matching policies to model complex action distributions while avoiding recursive backpropagation through denoising processes. By training an independent single-step policy that matches the flow model's output, FQL achieves computational efficiency without sacrificing expressiveness, demonstrating state-of-the-art performance on offline RL benchmarks with significant computational advantages, particularly in offline-to-online fine-tuning settings. Similarly, One-Step Flow Q-Learning (OFQL) (Nguyen and Yoo 2025) extends this framework for even faster single-step action generation.

Generative Diffusion Models in Robotics

The application of diffusion models to robotics has yielded transformative advances in visuomotor control, trajectory planning, and real-world deployment. Diffusion Policy (Chi et al. 2023) demonstrated breakthrough performance in visuomotor control by modeling action distributions as conditional diffusion processes, excelling at handling multimodal behaviors through visual conditioning and receding-horizon control. This approach has inspired extensions including DiffClone (Sabatelli et al. 2024) and Diffusion Model-Augmented Behavioral Cloning (Wang et al. 2023a), which further improve upon traditional behavioral cloning by leveraging the expressiveness of diffusion models.

Figure 4. Point-cloud trajectories for seven tasks (ordered top to bottom): (A) Push-T, (B) Bowling, (C) Pouring, (D) Unscrewing, (E) Folding, (F) Orange Juicing – Placing, (G) Orange Juicing – Removal.

The integration with pretrained visual representations, including R3M (Nair et al. 2022), VC-1 (Majumdar et al. 2023), and MoCo (He et al. 2020), has proven particularly effective for generalization across visual variations. Recent work on 3D Diffusion Policy (Ze et al. 2024) extends these capabilities to point-cloud observations, while FlowPolicy (Chen et al. 2024) introduces consistency flow matching for robust 3D manipulation tasks. Lu et al. (2025) propose a triply-hierarchical diffusion policy that decomposes complex visuomotor tasks into multiple levels of abstraction, improving both learning efficiency and generalization. Beyond direct policy learning, diffusion models have revolutionized trajectory planning in robotics. Janner et al. (2022) and Ajay et al. (2023) reformulate planning as conditional generation, producing complete state-action trajectories conditioned on rewards and constraints, excelling in long-horizon tasks requiring complex coordination. Shang et al. (2024) introduce an alternative paradigm by generating policy parameters rather than trajectories in latent spaces, offering computational advantages. Real-world deployment challenges have been addressed through methods like One-Step Diffusion Policy (Wang et al. 2024), which uses distillation to achieve real-time performance suitable for robotic control. Successful applications now span deformable-object manipulation with PinchBot (Liu et al. 2024), multi-task learning, and navigation scenarios. The availability of large-scale datasets like BridgeData V2 (Walke et al. 2023) and of foundation models such as π0 (Black et al. 2024b) enables broader generalization across robotic platforms and tasks, accelerating the transition from laboratory demonstrations to practical robotic systems.
Offline-to-Online Reinforcement Learning

The transition from offline pretraining to online fine-tuning presents unique challenges in managing distribution shift and preventing catastrophic forgetting while enabling continuous improvement. Conservative Q-Learning (CQL) (Kumar et al. 2020) established the foundational framework for safe offline RL through pessimistic value estimation, preventing overestimation for out-of-distribution actions. Advantage-Weighted Regression (AWR) (Peng et al. 2019) provides a scalable framework that has influenced numerous subsequent works, though its restriction to Gaussian policies limits expressiveness for complex behavioral patterns. Calibrated Q-Learning (Cal-QL) (Nakamoto et al. 2023) addresses initialization challenges by ensuring conservative Q-values are appropriately scaled for effective online fine-tuning, providing crucial insights for successful offline-to-online transfer. Uni-O4 (Lei et al. 2024) directly applies the PPO (Schulman et al. 2017) objective to unify offline and online learning, eliminating the need for extra regularization. Recent advances focus on efficiently leveraging both offline and online data through hybrid strategies. RLPD (Ball et al. 2023) achieves sample-efficient online learning by mixing offline and online experiences, demonstrating that careful data-mixture strategies can significantly accelerate learning. Nair et al. (2020) established fundamental frameworks for combining offline pretraining with online fine-tuning, showing that hybrid approaches achieve superior sample efficiency compared to pure online or offline methods. A paradigm shift in offline-to-online adaptation is represented by Wagenmaker et al. (2025), which introduces DSRL (Diffusion Steering with Reinforcement Learning), operating RL entirely in the latent-noise space of pretrained diffusion policies rather than modifying base policy weights.

Real-world RL

Real-world RL trains directly on real robot dynamics, optimizing deployment metrics (reliability, speed, safety) and yielding robust performance that continually adapts to disturbances, without sim-to-real gaps. Critical requirements include sample efficiency, stability under high-dimensional perception, safe continuous operation, and automated reward/reset mechanisms. While early work demonstrated end-to-end visuomotor learning (Levine et al. 2016) and large-scale grasping with off-policy methods (Kalashnikov et al. 2018), subsequent advances established key algorithmic foundations: off-policy actor-critic methods (SAC, TD3) for data efficiency (Haarnoja et al. 2018; Fujimoto et al. 2018), model-based approaches for sample acceleration (Chua et al. 2018; Janner et al. 2019), reset-free learning for autonomous operation (Eysenbach et al. 2018; Gupta et al. 2021), and learned reward specification from visual classifiers or human feedback (Singh et al. 2019; Christiano et al. 2017). Despite these advances, most systems required extensive engineering, task-specific tuning, or long training times to achieve reliable performance. These scattered advances converge in SERL (Luo et al. 2024b), a comprehensive framework that integrates high update-to-data-ratio off-policy learning, automated resets, and visual reward specification to achieve several manipulation tasks. However, SERL relies solely on demonstrations and struggles with tasks requiring precision or recovery from failures.
HIL-SERL (Luo et al. 2024c) addresses these limitations by incorporating real-time human corrections during training, enabling the policy to learn from mistakes and achieve perfect success rates across diverse tasks, including dual-arm coordination and dynamic manipulation.

While SERL and HIL-SERL report impressive on-robot learning efficiency and reliability on well-scoped tabletop skills, their evaluations typically employ action-space shaping (e.g., limiting wrist rotations and encouraging near-planar end-effector motion) and focus on short-horizon regimes with relatively low-dimensional control (Luo et al. 2024b,c). Such constraints are pragmatic for safety and sample efficiency, but they reduce policy expressivity and can cap performance on orientation-critical, contact-rich, or compositionally complex tasks. In everyday home and factory scenarios, many skills inherently require full SE(3) control and substantial reorientation, including: deformable manipulation with twist and regrasp (e.g., towel folding), insertion/ejection in confined cavities with large tilt changes (e.g., orange juicing), fluid and granular control that hinges on container tilt (e.g., controlled pouring), dynamic release and trajectory shaping (e.g., agile bowling), cable routing or cloth placement with out-of-plane rotations, and bimanual reorientation.*

* We do not claim these systems cannot be extended to such settings; rather, the reported experiments emphasize rapid learning on well-scoped tasks, leaving broader generalization, long-horizon composition, and orientation-sensitive control less explored.

In contrast, our system retains full 6-DoF control without hard rotation constraints and targets these under-explored regimes by (i) using a diffusion/consistency visuomotor policy to capture diverse human strategies, (ii) unifying offline-to-online improvement with an OPE-gated PPO-style objective for nearly monotonic improvement, and (iii) enabling high-frequency control via one-step consistency distillation, across dual-arm, deformable, and dynamic tasks with larger cross-object generalization.

Real-world Experiments

In this section, we detail the experimental setup and report results on reliability (success rate), efficiency (time-to-completion), and robustness (generalization across objects and initial conditions). We compare policies trained with RL-100 to human teleoperators and strong baselines. Across tasks, RL-100 achieves higher precision and greater efficiency than human teleoperation. We then present ablations to analyze the contribution of each component of RL-100. Collectively, these results demonstrate the practical viability of RL-100 for real-world deployment.

Overview

We evaluate RL-100 on seven real-world manipulation tasks that jointly cover dynamic rigid-body control, deformable-object handling, and precision assembly (Tab. 1, Fig. 1), to comprehensively evaluate the versatility of our framework. The suite spans single- and dual-arm embodiments (UR5, Franka with LeapHand, xArm-Franka) and two control modes (single-step actions vs. chunked action sequences). The trajectory of each task is visualized in images and point clouds in Fig. 3 and Fig. 4, respectively. Key specifications for the observation and action spaces of all tasks are summarized in Tab. 2. We randomize initial object placements for each trial to encourage policy generalization, with the specific ranges for each task depicted in Fig. 5. Two types of reward structures are used across our tasks.
For the five tasks other than Dynamic Push-T, we use a sparse reward function: the agent receives a reward of +1 upon successful task completion, labeled by a human supervisor pressing a key, and 0 at all other timesteps. The Dynamic Push-T task, which requires continuous, high-precision control, uses a dense shaped reward. The total reward at each timestep t is $r_t = r_{\mathrm{pose}} + r_{\mathrm{static}} + r_{\mathrm{smooth}}$, where $r_{\mathrm{pose}} = \exp(-3e) - 1$ punishes the SE(3) discrepancy e between the T-block and the desired pose; $r_{\mathrm{static}} = -1$ if the movement of the T-block within one timestep is below a given threshold, and 0 otherwise; and $r_{\mathrm{smooth}} = -5\|a_t - a_{t-1}\|_2^2$ punishes jerky actions. More details about our setup (calibration, point-cloud processing, low-level control) are provided in the supplementary materials.
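Assembling the three terms above, a sketch of the dense Push-T reward; the motion threshold `move_thresh` is not specified in the text and is an assumed placeholder value.

```python
import numpy as np

def pusht_reward(pose_err, block_displacement, a_t, a_prev, move_thresh=1e-3):
    r_pose = np.exp(-3.0 * pose_err) - 1.0                        # SE(3) discrepancy term
    r_static = -1.0 if block_displacement < move_thresh else 0.0  # stalled T-block penalty
    r_smooth = -5.0 * float(np.sum((a_t - a_prev) ** 2))          # jerky-action penalty
    return r_pose + r_static + r_smooth
```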
Across tasks, RL-100 achieves higher success rates and shorter time-to-completion than baselines and human operators, with especially large gains in settings that require fast online corrections (dynamic contacts, narrow clearances) or stable dexterous grasping under pose variability. A comprehensive robustness protocol evaluates either zero-shot or few-shot transfer across challenging variations (e.g., changed objects, visual distractors, external perturbations). Results show consistent performance retention under these shifts. We defer task-specific objectives and success criteria to the following section, and present aggregate reliability/robustness/efficiency results afterwards.

Description of Tasks

Following the overview, we describe the objective, challenges, and any evaluation protocols beyond the standard ones for each task individually.

Dynamic Push-T. The Dynamic Push-T task requires the robot arm, equipped with a 3D-printed stick-like end-effector, to push a T-shaped block from various initial locations into a wooden slot that is only slightly larger (3 mm on each side) than the block. The T-block's initial position and orientation are fully randomized, uniformly distributed within a 55 cm × 60 cm reachable workspace of the arm. The robot's initial position and the target slot's position are fixed. The task is considered successful only when the T-block is fully and correctly inserted into the slot. As a 3D extension of the 2D Push-T task, this task inherits the challenge of requiring high-frequency and precise dynamic adjustments but introduces additional complexities. For instance, due to the slot's geometry, the friction coefficient at the slot's edges varies when the T-block is partially suspended, leading to unpredictable behaviors such as significant rotations of the block. Additionally, the robot must avoid pushing the T-block into the slot in an incorrect orientation, which could lead to failure and end-effector fracture. The policy must therefore exhibit robust control to handle these dynamic and frictional variations while ensuring precise alignment with the slot. To evaluate the generalization capabilities of the framework, we further perform the task under multiple variations: (i) different initial positions and orientations of the T-block, (ii) a wooden surface with a desktop sticker to reduce friction, (iii) the presence of additional objects on the workspace to introduce visual input distractions, and (iv) external disturbances applied to the T-block during pushing. These variations challenge the policy to dynamically adjust with high precision, ignore environmental distractions, and avoid incorrect paths, demonstrating its robustness in maintaining high-frequency dynamic control while accurately completing the task.

Agile Bowling. The Agile Bowling task requires the robot arm, equipped with a 3D-printed semi-circular end-effector, to push a curling stone to knock down six bowling pins positioned 60 cm away. The robot arm starts from a fixed initial position, while the curling stone and bowling pins are uniformly distributed along an 18 cm starting line and a 20 cm target line, respectively. The task is considered successful only when more than five pins are knocked down. This task presents several challenges due to the highly sensitive dynamics of the setup. The curling surface is extremely smooth, and the end-effector is 4 mm larger in diameter than the curling stone, meaning even minor variations in the robot's movements can significantly alter the stone's trajectory. The robot must precisely initiate the stone's motion from its starting position, align it toward the bowling pins, and execute a controlled push while making fine adjustments to account for potential trajectory deviations. Furthermore, knocking down almost all pins requires applying sufficient force to accurately strike the first pin, as slight misalignments can result in some pins remaining upright, leading to failure. The small size of both the curling stone and bowling pins provides limited visual input, adding to the challenge of achieving precise control. To evaluate the generalization capabilities of the framework, we test the task under varying initial positions of the curling stone and bowling pins along their respective lines. We further test (1) zero-shot transfer to a coarser trail surface and (2) adaptation of the policy by further fine-tuning on an inverted pin placement. These variations challenge the policy to adapt its pushing strategy and trajectory corrections to different configurations while maintaining accuracy.

Table 2. State and action components of each task. We abbreviate dimension as 'dim.' and proprioception as 'prop.'.

Task | Point cloud dim. | Prop. dim. | Prop. note | Action dim. | Action note
Dynamic Push-T | (512, 3) | 6 | UR5 joints | 2 | Normalized Δ(x, y) of end-effector
Agile Bowling | (512, 3) | 6 | Same as above | 2 | Same as above
Pouring | (512, 3) | 23 = 7 + 16 | Franka joints, Leap hand joints | 22 = 6 + 16 | Target arm pose, target hand joints
Dynamic Unscrewing | (1024, 3) | 23 | Same as above | 23 = 22 + 1 | Same as above; 1: enable arm motion
Soft-towel Folding | (1024, 3) | 16 = 2×(7+1) | Dual-arm joints + grippers | 16 = 2×(7+1) | Dual arms: target pose (pos + quat), target gripper
Orange Juicing – Placing | (1024, 3) | 7 = 6 + 1 | xArm joints, gripper state | 8 = 6 + 1 + 1 | Target arm pose, target gripper, task done
Orange Juicing – Removal | (1024, 3) | 7 | Same as above | 8 | Same as above

Figure 5. The object initialization workspaces for the seven real-world tasks. At the beginning of each episode, objects of interest are randomly positioned within the areas denoted by the red boundaries and annotations.

Pouring. This task requires the robot to grasp a cup containing mixed nuts and snacks of varying sizes and textures with a LeapHand dexterous hand, then precisely pour the contents into a target plate through controlled wrist rotation.
The cup's initial position is randomized within 39 cm × 16 cm on the table surface, while the plate position varies within 28 cm × 20 cm. The robot begins from a fixed initial pose, grasps the cup, rotates the wrist to invert the container, and pours the contents into the plate. The task is considered successful if the robot maintains a stable cup grasp and accurate alignment during pouring, with no spillage attributable to misalignment; incidental bounce-outs from unavoidable rebounds are disregarded. This task presents multiple challenges that demand sophisticated sensorimotor coordination. First, the cup's smooth surface makes it inherently difficult to grasp securely, requiring the policy to learn appropriate finger configurations that balance grip stability with the ability to maintain a suitable pouring orientation. Second, the substantial randomization in both cup and plate positions necessitates robust spatial generalization. The policy must adapt its grasping strategy and pouring trajectory to varying relative positions while maintaining precision. Third, the mixed contents with different physical properties (varying nut sizes and weights) create unpredictable dynamics during pouring, requiring adaptive control to ensure complete and accurate transfer without overshooting. To evaluate the generalization capabilities of our framework, we extend this task to three additional variations with distinct physical properties: (i) pouring small soft candies that exhibit different flow characteristics due to their lighter weight and higher elasticity, (ii) pouring water from a cup, which introduces liquid dynamics requiring smooth, continuous motion control to prevent splashing, and (iii) pouring water using a long-spout teapot, which demands adaptation to a different container geometry and requires precise control of the extended spout for accurate targeting. These variations test the policy's ability to transfer learned manipulation skills across different materials (granular solids to liquids) and container morphologies while maintaining the core competency of controlled pouring.

Dynamic Unscrewing. This task simulates industrial assembly operations by requiring the robot to unscrew a nut from a vertically mounted bolt with a LeapHand dexterous hand. The bolt position is randomized within a circle of 12.5 cm diameter on the work surface, demanding robust adaptation to varying target locations. The task involves a multi-phase manipulation sequence: the robot first approaches and uses its thumb to engage the nut with rotational motions, gradually lifting it along the threaded shaft. As the nut ascends, the robot must continuously adjust its hand pose to maintain effective contact and torque transmission. Once the nut clears the threads, the robot should precisely grasp it with a pinch grip between the index finger and thumb, then transport and place it on the plate. Success is evaluated based on three criteria: (i) complete unscrewing of the nut, (ii) stable grasping without dropping, and (iii) accurate placement at the target position. This task presents several interconnected challenges that test the limits of dexterous manipulation.

Figure 6. Comparison between images of (A) an unremovable nut and (B) a removable one, as well as their corresponding point-cloud observations (C-D). Robot policies must be accurate enough to recognize the removable state by the tiny tilt shown in (B) and (D) to guarantee successful grasps.
First, the initial approach requires precise positioning to establish an effective contact configuration: the thumb must reach a pose that enables sufficient rotational leverage while maintaining clearance for continuous motion. Second, the dynamic nature of the unscrewing process demands adaptive control: as the nut rises along the threads, the optimal contact points and force directions continuously change, requiring the policy to learn time-varying manipulation strategies that maintain consistent rotational motion despite the evolving kinematic constraints. Third, the transition from unscrewing to grasping requires accurate state estimation, as shown in Fig. 6: the policy must recognize when the nut has cleared the threads and swiftly reconfigure from a rotating contact to a stable pinch grip. Finally, precise grasping of a small object tests fine motor control and hand-eye coordination, as even minor positioning errors can result in unstable grasps or dropped nuts.

Soft-towel Folding. This task requires dual robot arms to collaboratively fold a randomly crumpled towel, uniformly distributed in the 42 cm × 40 cm central region of a table, into a neatly folded state. The task is considered successful only when the arms lift and flatten the towel, then perform two precise folds, ensuring no wrinkles or misalignments occur. This task is challenging due to the need for precise dual-arm coordination and the handling of a deformable object with significant dynamic variability. The towel's initial position and crumpled configuration vary greatly, leading to substantial differences in point-cloud observations. The task is further complicated by potential issues such as uneven folding, unflipped towel corners, or failed grasps, requiring the policy to perform robust failure recovery. The arms must dynamically adjust their motions to flatten and fold the towel accurately while responding to unexpected deformations or missteps. To evaluate the generalization capabilities of the framework, we test the policy under conditions with external human-induced disturbances during the folding process, assessing its ability to recover from failures and adapt to disruptions. This task challenges the model's capacity for dual-arm coordination, generalization across diverse initial towel configurations, manipulation of deformable objects, and effective error correction.

Orange juicing. The orange juicing task consists of three subtasks: placing a halved orange on the juicer, pressing down the lever to extract juice, and removing the discarded orange pulp. The robot's end-effector is replaced with two 3D-printed sleeves that fix two rubber grippers, enabling robust grasping of irregular fruit surfaces. For each subtask, the initial and final poses of the robot arm are fixed. Note that these poses are defined individually for each subtask and therefore differ across the three subtasks.

Orange placing. In this subtask, a randomly sized half-orange is placed at a random position and orientation within a 29 cm × 20 cm area inside a tray. The robot first grasps the orange with the gripper vertically aligned to the surface of the tray, then rotates its wrist joint by 180° to flip the orange, and finally places it onto the filter section of a lever-style juicer before retracting the gripper. The task is considered successful if the orange is properly placed on top of the filter. This step is challenging due to the high variability in orange size, inclination, height, and placement, as shown in Fig. 7A and 7B.
The policy must therefore exhibit strong robustness to spatial randomness to achieve reliable placement.

Lever pressing. The pressing process follows a relatively fixed trajectory. To ensure consistent execution, we use kinesthetic teaching to record the downward and upward motions of the lever in advance. During deployment, the robot replays this trajectory to lower the hammer-like press, squeeze the juice out of the orange, and then reset the lever. The task is considered successful if the lever is fully pressed down and restored.

Discard removal. After pressing, the flattened orange half is randomly embedded within the 11 cm-diameter juicer filter. The robot begins with the gripper closed and aligned parallel to the filter surface, pushing against the flattened fruit to move it from its tightly embedded position. The wrist then rotates by 90°, the gripper opens to grasp the orange, and the discarded pulp is placed into a disposal container on the left side of the juicer. The task is considered successful if the flattened orange is removed from the filter and dropped into the container. This subtask is particularly challenging because it involves force-sensitive manipulation of a deformable object in a confined space. Fig. 7C indicates that sufficient force is needed to move the orange, but excessive force can cause deformation and hinder grasping. Furthermore, juice on the surface makes the orange slippery, and inappropriate grasping poses may result in failed pickups.

Figure 7. Challenging parts of the orange juicing task: (A)-(B) spatial robustness w.r.t. orange appearances and positions ((A) various appearances of halves; (B) different sizes of oranges); (C) force-sensitive manipulation of deformable orange discards (proper force is needed for removal).

Table 3. Success rate (%) across tasks. DP-2D and DP3 are imitation baselines; RL-100 (ours) groups an iterative offline fine-tuning stage followed by online RL with either a DDIM or a one-step CM policy. Averages are unweighted across tasks.

Task | DP-2D | DP3 | Iterative Offline RL | Online RL (DDIM) | Online RL (CM)
Dynamic Push-T | 40 (20/50) | 64 (32/50) | 90 (45/50) | 100 (50/50) | 100 (50/50)
Agile Bowling | 14 (7/50) | 80 (40/50) | 88 (44/50) | 100 (50/50) | 100 (50/50)
Pouring | 42 (21/50) | 48 (24/50) | 92 (46/50) | 100 (50/50) | 100 (50/50)
Soft-towel Folding | 46 (23/50) | 68 (34/50) | 94 (47/50) | 100 (50/50) | 100 (250/250)
Dynamic Unscrewing | 82 (41/50) | 70 (35/50) | 94 (47/50) | 100 (50/50) | 100 (50/50)
Orange Juicing – Placing | 78 (39/50) | 88 (44/50) | 94 (47/50) | 100 (100/100) | 100 (50/50)
Orange Juicing – Removal | 48 (24/50) | 76 (38/50) | 86 (43/50) | 100 (50/50) | —
Mean (unweighted) | 50.0 | 70.6 | 91.1 | 100.0 | 100.0†

† Mean over the six tasks evaluated with CM.

Results on Real Robots

Main results

Tab. 3 traces performance from imitation-only policies to our RL-100 method. The imitation baselines produce only moderate results. The DP-2D baseline attains an average success of 50.0%, while DP3 improves to 70.6%. Both baselines are especially challenged by tasks with dynamic or deformable objects, as shown by Agile Bowling, with success rates of 14% and 80% respectively, and by Pouring, with success rates of 42% and 48% respectively. The iterative offline RL phase of RL-100 yields a large improvement and raises average success to 91.1%.
This stage produces the largest absolute gains on the hardest tasks, namely an increase of 8 points over DP3 on Agile Bowling (88% versus 80%), an increase of 44 points on Pouring (92% versus 48%), and an increase of 24 points on Dynamic Unscrewing (94% versus 70%). These results indicate that conservative model-guided policy refinement reliably improves performance across diverse manipulation regimes while avoiding the instability associated with naive online updates. The full RL-100 pipeline, consisting of iterative offline RL followed by brief on-policy online refinement, attains perfect success across all evaluated trials. With the DDIM policy, we observe 400 successful episodes out of 400 across seven tasks. The consistency-model variant, evaluated on six tasks, achieves the same perfect success rate with 500 successful episodes out of 500 and enables single-step high-frequency control, including 250 successful trials out of 250 on the challenging dual-arm soft-towel folding task. Overall, the results highlight the contribution of each component: OPE-gated iterative learning delivers substantial and reliable improvements under a conservative regime, and short on-policy fine-tuning together with consistency distillation removes the remaining failure modes and produces computationally efficient, deployment-ready policies.

Zero-shot adaptation to new dynamics

To assess the robustness of our learned policies, we evaluate zero-shot transfer to environmental variations unseen during training, listed in Tab. 4 and Fig. 8. On Dynamic Push-T, as shown in Fig. 8B-C, the policy maintains perfect 100% success when the surface friction is substantially altered, and achieves 80% success even when visual and physical clutter from interference objects of various shapes is introduced into the workspace. Similarly, Agile Bowling (Fig. 8D) achieves 100% success on modified surface properties that change ball-rolling dynamics. For Pouring (Fig. 8A), replacing granular nuts with fluid water, a significant shift in material properties and flow dynamics, still yields 90% success, demonstrating that the policy has learned sufficiently stable and robust manipulation strategies rather than overfitting to specific physical parameters.

Figure 8. Zero-shot adaptation to novel dynamics. Our method generalizes to unseen physical variations without retraining. (A) Pouring with different granular/fluid materials. (B) Dynamic Push-T on altered surface friction. (C) Dynamic Push-T with visual and physical interference objects. (D) Agile Bowling on modified surface properties with a target-based scoring system. Red arrows highlight the changed environmental conditions.

Table 4. Zero-shot generalization success rates (%) on novel dynamics and environmental variations. All policies are trained on nominal conditions and evaluated without any retraining or fine-tuning.

Task Variation | Success Rate (%)
Pouring (Water) | 90
Push-T (Changed Surface) | 100
Push-T (Interference Objects) | 80
Bowling (Changed Surface) | 100
Average | 92.5

Across all four variations, our method achieves 92.5% average success without any retraining or fine-tuning, indicating strong generalization to distribution shifts in environmental dynamics.

Few-shot adaptation

Beyond zero-shot generalization, we evaluate the sample efficiency of adapting to more substantial task modifications through brief fine-tuning, as shown in Tab. 5 and Fig. 9.
After only 1–3 hours of additional training on each variation, our policies achieve high success rates across diverse changes. The Soft-towel Folding policy adapts perfectly to a different towel material with altered deformable properties, achieving a 100% success rate as shown in Fig. 9A and demonstrating effective transfer despite significant changes in cloth dynamics. Similarly, Agile Bowling achieves 100% success on the inverted pin arrangement shown in Fig. 9C, requiring the policy to adapt its trajectory planning and aiming strategy. The Pouring task with a modified container geometry, shown in Fig. 9B, achieves 60% success, showing that while geometric changes require more adaptation, the policy can still leverage its learned manipulation primitives. Averaging 86.7% success across these variations with minimal retraining demonstrates the sample efficiency and adaptability of our approach, highlighting that the learned representations and control strategies transfer effectively to modified task configurations.

Figure 9. Few-shot adaptation to novel task variations. Our method rapidly adapts to unseen scenarios with minimal additional training data. (A) Soft-towel Folding with a different towel material exhibiting altered deformable properties. (B) Pouring with a modified container shape and size. (C) Agile Bowling with an inverted pin arrangement requiring adapted trajectory planning.

Table 5. Few-shot adaptation success rates (%) after brief fine-tuning (1–3 hours) on novel task variations.

Task Variation | Success Rate (%)
Pour (New Container) | 60
Folding (Changed Object) | 100
Bowling (Inverted pin) | 100
Average | 86.7

Robustness to physical disturbances

We evaluate the resilience of RL-100 policies to real-world physical perturbations by introducing human-applied disturbances during task execution (Tab. 6, Fig. 10). For Soft-towel Folding, we apply external forces at two critical stages: during initial grasping (Stage 1, Fig. 10A) and during pre-folding manipulation (Stage 2, Fig. 10B), achieving 90% success at both stages despite the perturbations. On the Dynamic Unscrewing task (Fig. 10C), we apply sustained counter-rotational interference for up to 4 seconds during both the unscrewing motion and at the critical zero-boundary point where visual judgment is required to grasp the nut, a particularly challenging moment as described in our task specifications. Remarkably, the policy achieves 100% success, demonstrating the ability to recover from prolonged disturbances even during precision-critical manipulation phases. Similarly, Dynamic Push-T (Fig. 10D) achieves 100% success despite multiple dragging perturbations applied throughout the pushing motion. Across all tested scenarios, our policies maintain an average 95.0% success rate, validating that the closed-loop control learned through RL-100 enables robust recovery and task completion in the presence of external perturbations typical of unstructured real-world environments.

Figure 10. Robust recovery from real-world disturbances. RL-100 policies demonstrate resilience to external perturbations across diverse manipulation tasks. (A-B) Soft-towel Folding recovers from external pulling and lateral dragging at different manipulation stages. (C) Dynamic Unscrewing withstands counter-rotational interference from a human hand. (D) Dynamic Push-T handles dragging perturbations to the target object. Orange arrows indicate disturbance directions and timing.
The closed-loop policies successfully recover and complete all tasks, demonstrating robustness critical for unstructured real-world environments.

Table 6. Task completion success rates (%) after recovering from physical disturbances.

Task & Disturbance Stage | Success Rate (%)
Folding (Stage 1: Grasping) | 90
Folding (Stage 2: Pre-folding) | 90
Unscrewing | 100
Push-T (Whole stage) | 100
Average | 95.0

Execution Efficiency

We evaluate the execution efficiency of RL-100's policies from four complementary perspectives, with results presented in Fig. 11.

Episode length in successful trajectories. We first compare the number of environment steps needed to complete a task successfully (Fig. 11A). RL-finetuned policies (RL-100) consistently outperform imitation baselines across all three tasks. This advantage stems from reward-driven policy optimization, which steers the agent toward more efficient, and potentially optimal, trajectories for earlier task completion. Specifically, RL-100 achieves substantial step reductions: in Soft-towel Folding, 390 steps (DP-2D) → 312 steps (RL-100 CM; 1.25× fewer), and in Dynamic Unscrewing, 361 steps (DP-2D) → 280 steps (RL-100 DDIM; 1.29× fewer). These gains reflect the value-driven objective's tendency to produce more decisive state transitions.

Wall-clock time per episode. Beyond step count, wall-clock time is the most intuitive metric for real-world deployment, especially in dynamic tasks requiring real-time action prediction. Fig. 11B reveals that inference latency depends on two factors: (i) input modality: point clouds (DP3, 1024 points) incur lower computational overhead than 128×128 RGB images (DP-2D), yielding a 1.02–1.31× speedup despite similar step counts; (ii) inference method: RL-100 (CM)'s one-step generation per action reduces control-loop latency compared to the 10-step DDIM sampler, providing an additional 1.05–1.16× wall-clock speedup over RL-100 (DDIM). For example, in Orange Juicing – Placing, RL-100 (CM) completes episodes in 9.2 s versus 10.2 s for RL-100 (DDIM) (1.11× faster) and 10.6 s for DP-2D (1.15× faster).

Figure 11. Comparison of RL-100 against algorithmic baselines or human operators in terms of task-completion efficiency. (A) Episode length (number of environment steps) across three tasks; lower is better. (B) Wall-clock time (s) to finish one episode; lower is better. (C) Dynamic Push-T throughput (number of successful episodes per unit time); higher is better. (D) Episode length in Dynamic Push-T; lower is better. Bars show absolute values; annotations indicate speedup factors relative to the baseline (DP-2D for A, B, and D; Human Beginner for C).

RL-100 vs. human operators. A critical question for practical deployment is whether robotic policies can match or exceed human efficiency. To address this, we compare RL-100 against human teleoperators on Dynamic Push-T (Fig. 11C). RL-100 (DDIM) achieves 20 successful episodes per unit time, surpassing the human expert who collected the demonstrations (17 episodes; 1.18× improvement) and substantially exceeding human beginners (13 episodes; 1.54× improvement). This demonstrates that our learned policy not only matches expert-level task-completion quality but also executes more efficiently in repetitive scenarios.

Episode length including failures. The previous comparisons focused solely on successful trajectories.
However, a comprehensive efficiency analysis must account for all attempts, including failures. Fig. 11D shows that when considering all trials on Dynamic Push-T, RL-100 maintains significantly shorter average horizons: 822 steps (DP-2D) → 658 steps (DP3) → 382 steps (RL-100 Offline) → 322 steps (RL-100 DDIM), representing up to 2.55× fewer steps than DP-2D. This indicates that RL-100 produces policies that both succeed more reliably and terminate unpromising attempts quickly, avoiding wasted execution time.

Summary. Overall, the efficiency gains of RL-100 arise from three synergistic sources: (1) efficient encoding (point clouds over RGB images), (2) reward-driven policy optimization (shorter, more decisive rollouts via the discount factor γ < 1), and (3) low-latency action generation (CM's one-step versus DDIM's 10-step inference). The monotonic improvement, DP-2D → DP3 → RL-100 (DDIM) → RL-100 (CM), validates each design choice's contribution. Crucially, RL-100's execution efficiency surpasses both algorithmic baselines and human operators, demonstrating readiness for practical deployment.

Training efficiency

Fig. 12 shows the online RL training progression for Agile Bowling. The policy achieves consistent 100% success after approximately 120 episodes of on-policy rollouts, with a moving average demonstrating stable learning without a performance drop from the offline initialization. The final 100+ episodes maintain perfect success, validating robust policy convergence. These success rates reflect training rollout episodes with exploration noise, not formal evaluations. Since rollouts involve noise for exploration, while evaluation uses deterministic actions, the reported rates underestimate the true policy performance. However, frequent evaluation on physical hardware is prohibitively costly. We therefore track learning progress through rollout success rates as an efficient proxy, providing real-time visibility into policy improvement; a minimal sketch of this tracking appears below. We present this curve for Agile Bowling as it exemplifies the rapid convergence and stability characteristic of our online RL finetuning phase.

Figure 12. Online RL training curve for Agile Bowling. Per-episode success rate during the online RL finetuning phase, showing rapid convergence from the iterative offline baseline (approximately 80+%) to perfect performance (100%). The plot shows the per-episode success indicator, its moving average (window = 10), and the perfect-success (100%) level.
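As a concrete illustration of this proxy, the sketch below computes the per-episode success indicator and its window-10 trailing moving average from rollout outcomes, matching the quantities plotted in Fig. 12. The function and variable names are our own illustrative choices, not part of the released codebase.

    import numpy as np

    def rollout_success_curve(successes, window=10):
        """Moving-average success rate over on-policy training rollouts.

        successes: sequence of 0/1 outcomes, one per rollout episode
        (exploration noise included, as in Fig. 12). Returns the raw
        indicator array and its trailing moving average.
        """
        s = np.asarray(successes, dtype=float)
        # Trailing average; for early episodes, average over what is available.
        avg = np.array([s[max(0, i - window + 1): i + 1].mean() for i in range(len(s))])
        return s, avg

    # Example: a run that converges to consistent success.
    outcomes = [0, 1, 0] + [1] * 10
    raw, smooth = rollout_success_curve(outcomes)
    print(smooth[-1])  # -> 1.0 once the trailing window contains only successes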
Simulation Experiments

To complement our real-robot evaluations, we benchmark RL-100 in simulation and compare it against state-of-the-art diffusion/flow-based offline-to-online RL methods, including DPPO (Ren et al. 2024), ReinFlow (Zhang et al. 2025), and DSRL (Wagenmaker et al. 2025). We also conduct ablations to verify the contribution of each component in RL-100.

Main Results

We evaluate on three standard continuous-control suites that cover both dexterous manipulation and locomotion: Adroit (Rajeswaran et al. 2018), Meta-World (Yu et al. 2020), and MuJoCo locomotion (Fu et al. 2020; Todorov et al. 2012). Unless noted otherwise, observations include robot proprioception together with visual inputs (RGB frames or point clouds), and actions are continuous joint/EE commands. We report normalized returns and task success rates (mean ± std over multiple seeds) and compare RL-100 with baselines that finetune diffusion/flow policies, following their offline/online training protocols.

As shown in Fig. 13, RL-100 (red curves) consistently achieves the highest or near-highest performance across all ten tasks. On MuJoCo locomotion, RL-100 reaches a return of 10,000 on halfcheetah-medium-v2, which is 2.2× higher than DPPO's 4,500 and 3.3× higher than DSRL's 3,000, and it substantially outperforms all baselines on hopper and walker2d. On Adroit dexterous tasks, RL-100 attains near-perfect success rates, about 100%, on door and hammer, while DPPO plateaus at 0.9 and ReinFlow struggles below 0.6 on hammer. For Meta-World precision tasks like peg-insert-side, RL-100 quickly reaches and maintains a stable 1.0 success rate, whereas ReinFlow fails to exceed 0.2. From a sample-efficiency and stability perspective, RL-100 demonstrates faster convergence, achieving peak performance within 1–2M steps on most tasks versus 3–5M for the baselines, and exhibits notably lower variance (narrower confidence intervals) across random seeds. On challenging coordination tasks (pen, disassemble), RL-100's advantage is most pronounced, showing both higher asymptotic performance and more stable learning trajectories. The consistent gains across diverse modalities, from high-dimensional locomotion to contact-rich manipulation and precision insertion, validate RL-100's generality as a unified framework for diffusion policy fine-tuning.

Table 7. Summary of the number of episodes and corresponding collection time (h) across all tasks during the three training stages: human demonstration, iterative offline RL, and online RL. Note that in the offline RL stage, the number is the sum over all iterations.

    Task                       Human Demo.           Iterative Offline RL   Online RL
                               # epi. / time (h)     # epi. / time (h)      # epi. / time (h)
    Dynamic Push-T             100 / 2               821 / 8                763 / 7.5
    Agile Bowling              100 / 2               249 / 2                213 / 2.5
    Pouring                    64 / 1                741 / 6.8              129 / 1.5
    Soft-towel Folding         400 / 5               896 / 11               654 / 8.5
    Dynamic Unscrewing         31 / 0.5              467 / 4.5              288 / 3
    Orange Juicing – Placing   80 / 1.5              642 / 10.5             750 / 12.5
    Orange Juicing – Removal   29 / 0.5              149 / 2.5              240 / 4

Figure 13. Learning curves of the compared methods (Ours, DPPO, DSRL, ReinFlow) on MuJoCo locomotion (halfcheetah-medium-v2, hopper-medium-v2, walker2d-medium-v2), Adroit (door, hammer, pen), and Meta-World (box-close, disassemble, peg-insert-side, push-wall), across three random seeds. Solid lines indicate the mean performance, while the shaded regions represent the corresponding 95% confidence interval. We use robot proprioception as the policy input for MuJoCo locomotion, and both 3D visual observations and proprioceptive state for Adroit and Meta-World; baseline methods follow the input modalities specified in their original papers.
Policy Inference Frequency

Fig. 14 presents a comparative analysis of our proposed method, RL-100 (CM), against representative baselines in terms of model size (number of parameters) and policy inference speed (frequency in Hz). The results demonstrate the superior efficiency of our approach. Specifically, RL-100 (CM) with a Skip-Net architecture achieves an exceptional inference frequency of 378 Hz, outperforming DSRL (35 Hz) and DPPO (30 Hz) by over an order of magnitude, while maintaining a compact model size of just 3.9M parameters versus DSRL's 52.3M. Even the larger U-Net architecture (39.2M parameters) enjoys an inference speed boost of up to 133 Hz via CM. ReinFlow benefits from a relatively high frequency due to its small network, but it remains notably slower than ours despite our larger model. This significant performance gain is attributed to our core methodology: we distill the diffusion policy into a Consistency Model (CM) capable of single-step inference, alongside training/finetuning the original policy. Unlike diffusion-based policies that inherently rely on a slow, multi-step iterative sampling process, the CM collapses policy generation into a single, efficient forward pass. Consequently, we achieve near real-time policy inference, effectively removing the decision-making model as a bottleneck. The overall system frequency becomes primarily limited by the speed of the perception system (e.g., a camera frame rate of 30 Hz), enabling truly reactive control and timely responses. This low latency is critical for robust performance in dynamic environments, where the system delay introduced by iterative samplers can severely hinder task success, as shown in our real-world experiments.

Figure 14. Comparison of two RL-100 variants with different action heads against other baselines in simulation, in terms of model size and inference speed. Model size is the number of parameters in the policy network (RL-100 (CM) w/ Skip-Net: 3.9M; RL-100 (CM) w/ U-Net: 39.2M; DPPO: 1.2M; DSRL: 52.3M; ReinFlow: 1.3M). Inference speed is the frequency at which a policy outputs an action given observations at each timestep (378 Hz, 133 Hz, 30 Hz, 35 Hz, and 170 Hz, respectively).
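To make the latency argument concrete, the sketch below times a K-step denoising loop against a single forward pass through the same network, mirroring how a 10-step DDIM sampler compares with a one-step consistency head. The tiny MLP, the placeholder update rule, and all names are illustrative stand-ins, not our deployed architecture.

    import time
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    net = nn.Sequential(nn.Linear(64 + 1, 256), nn.ReLU(), nn.Linear(256, 64))  # toy denoiser

    def sample(num_steps, batch=1):
        """Run `num_steps` denoising passes; a single pass emulates a consistency head."""
        x = torch.randn(batch, 64)
        for k in range(num_steps, 0, -1):
            t = torch.full((batch, 1), float(k))
            x = x - net(torch.cat([x, t], dim=-1))  # placeholder update, not a real DDIM step
        return x

    with torch.no_grad():
        for steps, name in [(10, "10-step DDIM-style"), (1, "1-step CM-style")]:
            t0 = time.perf_counter()
            for _ in range(100):
                sample(steps)
            dt = (time.perf_counter() - t0) / 100
            print(f"{name}: {1.0 / dt:.0f} Hz")  # per-action control-loop frequency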
Ablation Studies

We conduct ablation studies on five components: (i) visual modality (2D vs. 3D), (ii) diffusion-noise clipping, (iii) sampler/model (CM vs. DDIM), (iv) representation learning during fine-tuning (ReconVIB / no ReconVIB / frozen encoder), and (v) diffusion parameterization (ϵ- vs. x0-prediction). The comparison is shown in Fig. 15, and the detailed configurations and results are provided below.

Figure 15. Learning curves of the ablation studies on the door task in Adroit, presented across three random seeds. Solid lines indicate the mean performance, while the shaded regions represent the corresponding 95% confidence interval. From left to right: (A) 2D vision modality (i.e., images) vs. the 3D point clouds used by RL-100. (B) Different clip ranges used to bound the standard deviation of stochastic DDIM sampling steps. (C) CM vs. DDIM samplers in RL-100. (D) Whether reconstruction and VIB losses are included during finetuning, and whether the encoder is frozen during finetuning. (E) Different diffusion parameterizations (i.e., the output of the neural net; epsilon: predicting noise, sample: predicting clean samples).

Visual modality: 2D vs. 3D. Our framework is modality-agnostic and accepts either 2D RGB images or 3D point clouds as visual inputs. On the Adroit door task, a relatively clean scene, the 3D variant learns faster and attains a higher final success rate. We attribute this to the ability of 3D inputs to enable precise geometric filtering (e.g., ROI/radius "clipping") that cleanly isolates task-relevant structures such as the handle and contact surfaces, yielding a higher signal-to-noise ratio and easier credit assignment than 2D images. This ablation thus demonstrates both the flexibility of our framework (2D/3D optional) and the practical advantage of 3D in clean, contact-rich settings.

Clipping the diffusion noise scale. Bounding the standard deviation of stochastic DDIM steps with a moderate clip (e.g., 0.8) yields the best stability–performance trade-off; a minimal sketch of this bounding appears after this subsection. An overly tight bound (0.1) under-explores and converges lower, whereas no clipping induces pronounced oscillations and wider confidence intervals. We use 0.8 for Adroit, MuJoCo locomotion tasks, and real-world single-action-mode tasks, and 0.1 for Meta-World and real-world chunk-action-mode tasks.

Consistency model (CM) vs. DDIM. One-step CM matches multi-step DDIM performance with a K× speedup. The distilled consistency model (CM, blue) achieves nearly identical learning curves and final success rates (∼1.0) compared to the K-step DDIM policy (red), validating that our lightweight distillation successfully preserves task performance while enabling single-forward-pass inference. Both policies converge at similar rates and maintain comparable stability (overlapping confidence intervals), demonstrating that the CM effectively compresses the iterative denoising process without sacrificing control quality. This enables the high-frequency deployment (100+ Hz) critical for dynamic real-world tasks.

Representation learning during fine-tuning. Including reconstruction plus VIB losses while updating the encoder achieves the highest and most stable success. Removing these losses degrades stability and final performance, and freezing the encoder further limits gains; joint policy–representation adaptation mitigates representational drift and improves sample efficiency.

Diffusion parameterization: ϵ vs. x0. Noise prediction (ϵ-prediction) substantially outperforms clean-sample prediction (x0-prediction), achieving ∼40% higher final success (∼1.0 vs. ∼0.6) with faster learning and reduced variance, suggesting better-conditioned gradients and closer alignment with the PPO-style objective used during RL post-training.
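The noise-scale clipping ablated in panel (B) reduces, in code, to bounding the per-step standard deviation of a stochastic DDIM transition before sampling, with the clip value (0.8 or 0.1) chosen per task as above. The sketch below is a minimal rendering with illustrative names; the scheduled std `sigma_k` is assumed to be computed elsewhere from the noise schedule.

    import torch

    def clipped_ddim_noise(mean_k, sigma_k, clip=0.8):
        """Sample one stochastic DDIM sub-step with a bounded noise scale.

        mean_k:  predicted mean of the denoising transition, shape (B, action_dim)
        sigma_k: scheduled std for this step (scalar tensor); clipping it trades
                 exploration (larger std) against training stability (smaller std).
        """
        sigma = torch.clamp(sigma_k, max=clip)  # the ablated bound: 0.1 vs. 0.8 vs. none
        return mean_k + sigma * torch.randn_like(mean_k)

    # Example: a scheduled std of 1.3 is reduced to the 0.8 bound before sampling.
    mean = torch.zeros(2, 7)
    print(clipped_ddim_noise(mean, torch.tensor(1.3), clip=0.8).shape)  # torch.Size([2, 7])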
Prediction parameterization analysis

Prior work on diffusion models for robot learning (e.g., DP3) predominantly adopts the x0-prediction parameterization rather than ϵ-prediction. The two parameterizations recover the clean action (or action chunk) \hat{x}_0 as

\hat{x}_0 = \frac{x_t - \sqrt{1-\bar\alpha_t}\, f_\theta(x_t, t)}{\sqrt{\bar\alpha_t}} \quad (\epsilon\text{-prediction}), \qquad (24)

\hat{x}_0 = f_\theta(x_t, t) \quad (x_0\text{-prediction}). \qquad (25)

Analysis. A practical reason is variance amplification in (24): early in the reverse process (t → T), the factor 1/\sqrt{\bar\alpha_t} becomes large, which magnifies estimation noise in ε_θ and yields a higher-variance \hat{x}_0. In supervised imitation settings (or when actions are executed open-loop), this extra variance can destabilize training and control, making the direct x0-prediction attractive.

Variance measurements. We pretrain identical policies under the two parameterizations and, during environment rollouts, perform 100 forward passes per time step to estimate the predictive variance; a sketch of this procedure follows below. On a locomotion task (Hopper), the empirical variance of \hat{x}_0 is 0.1316 for ϵ-prediction versus 0.0870 for x0-prediction; on a manipulation task (Adroit-Door) it is 0.0589 for ϵ-prediction versus 0.0290 for x0-prediction.

Implication for online RL. In our two-level MDP, where each action is produced by a K-step denoising trajectory, the higher stochasticity induced by ϵ-prediction acts as structured exploration within the denoising chain, improving coverage of latent action modes and helping avoid local optima during policy-gradient fine-tuning. With all other components fixed, online RL with ϵ-prediction achieves faster and more reliable improvement on Adroit-Door (see Fig. 15), whereas x0-prediction exhibits slower improvement and more frequent premature convergence.

Takeaway. While x0-prediction can be preferable for low-variance imitation and open-loop execution, the variance amplification inherent to ϵ-prediction is beneficial for exploration in diffusion-based online RL. Consequently, we adopt ϵ-prediction for post-training, leveraging its exploratory behavior inside the denoising steps.
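The following sketch renders both recovery rules (24)–(25) and a Monte-Carlo variance estimate in the spirit of the measurements above: for a fixed (x_t, t) we draw repeated stochastic forward passes and inspect the empirical variance of \hat{x}_0. The toy network is illustrative only, and dropout here merely makes repeated passes differ; it is not how the paper's measurements were produced.

    import torch

    def recover_x0(x_t, bar_alpha_t, net_out, param="epsilon"):
        """Eqs. (24)/(25): recover the clean action from the network output.

        bar_alpha_t: cumulative schedule term for this step.
        param: "epsilon" (net predicts noise) or "sample" (net predicts x0).
        """
        if param == "epsilon":
            return (x_t - (1 - bar_alpha_t).sqrt() * net_out) / bar_alpha_t.sqrt()
        return net_out  # x0-prediction: the network output is already \hat{x}_0

    # Monte-Carlo predictive variance at a fixed (x_t, t), 100 forward passes.
    net = torch.nn.Sequential(torch.nn.Linear(7, 64), torch.nn.Dropout(0.1), torch.nn.Linear(64, 7))
    net.train()  # keep dropout active so repeated passes differ
    x_t, bar_alpha = torch.randn(1, 7), torch.tensor(0.05)  # small bar_alpha: early reverse step
    samples = torch.stack([recover_x0(x_t, bar_alpha, net(x_t), "epsilon") for _ in range(100)])
    print(samples.var(dim=0).mean())  # variance is amplified by the 1/sqrt(bar_alpha_t) factor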
Conclusion and Future Work

This paper focuses on the core challenges of real-world deployment of robot learning, which demands systems with high reliability, efficiency, and robustness. To this end, we present RL-100, a three-stage real-world RL training framework built atop a diffusion-based visuomotor policy, designed to "start from human priors" and then "go beyond human performance" in robotic manipulation. By unifying imitation learning and reinforcement learning under a single policy optimization objective, RL-100 preserves the strengths of human teleoperation demonstrations while continuously self-improving through autonomous exploration. In particular, a shared PPO-style policy-gradient loss is applied across both the offline and online RL phases to fine-tune the diffusion policy's denoising process, providing stable, near-monotonic performance gains. Moreover, the framework introduces a consistency distillation technique that compresses the multi-step diffusion policy (requiring K = 5–10 denoising steps) into a single-step consistency model, enabling high-frequency control at deployment without sacrificing performance. This unification of IL and RL under one objective, coupled with the one-step distilled policy, yields a controller that is both efficient to fine-tune and fast to execute in real-world settings.

RL-100's efficacy was demonstrated across a diverse suite of real-world tasks, encompassing dynamic object pushing, dual-arm deformable cloth folding, agile high-speed bowling, precision pouring of granular and fluid materials, unscrewing, and a multi-stage orange juicing task. Deployment-level results show that the fully trained RL-100 policy achieved 100% success rates on every task, attaining virtually perfect reliability in extensive evaluations. Notably, RL-100 also improved task efficiency, often matching or even surpassing skilled human operators in time-to-completion on multiple tasks. The trained policies proved robust over long durations, maintaining stable performance over 2+ hours of continuous operation, which attests to the framework's long-horizon reliability. These results collectively indicate that RL-100 satisfies the stringent reliability, efficiency, and robustness requirements needed for real-world deployment.

Beyond excelling under nominal conditions, RL-100 exhibited strong generalization and robustness in the face of new variations and disturbances. Without any retraining, a single policy maintained high performance under significant environmental shifts. Across several such unseen scenario variations, RL-100 attained an average 92.5% success rate without any fine-tuning, underscoring the policy's robustness to modest domain shifts. Furthermore, the framework's consistency-model variant (the one-step distilled policy) demonstrated that high-frequency control is possible without loss of accuracy, as it too reached a 100% success rate on all evaluated tasks, including 250/250 successes in the challenging dual-arm towel folding. This blend of reliability and adaptability indicates that RL-100 can handle the kind of dynamic perturbations and variability inherent in real home and factory environments, rather than overfitting to narrow training conditions.

In summary, RL-100 integrates imitation learning and reinforcement learning into a unified paradigm that yields deployment-grade performance on real robots. It leverages human demonstration data as a foundation, aligns training with human-grounded objectives, and exceeds human-level reliability through iterative self-improvement. The framework's near-perfect success rates, broad task generality, and sustained operational robustness represent a significant step toward deploying autonomous manipulators in unstructured environments. These outcomes offer a credible path to deployment in homes and factories, where robots must perform daily manipulation tasks with super-human consistency and safety.

Future Work. There are several promising directions in which to extend and enhance this work. A priority is to extend evaluation to more complex, cluttered, and partially observable scenes that better mirror the variability of homes and factories. This includes dynamic multi-object settings, occlusions, specular/transparent materials, changing illumination, and non-stationary layouts. Stress-testing the policy beyond the current benchmark, where we already observe near-perfect success, strong long-horizon stability, and human-teleoperation-level time-to-completion on multiple tasks, will help quantify limits and failure modes under realistic deployment conditions.

Our results indicate that small diffusion policies can attain high reliability and efficiency after modest fine-tuning. Building on this, we plan to scale post-training to larger, multi-task, multi-robot Vision-Language-Action (VLA) models. Key directions include:
(i) scaling laws relating data and model size to real-robot sample efficiency; (ii) cross-embodiment and cross-task transfer in a single policy; and (iii) aligning large VLA priors with RL-100's unified objective to retain zero-shot semantic generalization while preserving the high success rates we report.

Although our pipeline already incorporates conservative operation and stable fine-tuning, reset and recovery remain practical bottlenecks. We will investigate autonomous reset mechanisms, including learned reset policies, scripted recovery behaviors, task-aware environment fixtures, and failure-aware action chunking, to minimize on-robot human effort during training and evaluation. A principled reset strategy should reduce downtime, smooth data collection, and further stabilize online improvement, complementing RL-100's OPE-gated iterative updates.

Contribution List

Kun: Project lead, Core; responsible for the original idea and codebase, the overall design and training of the simulation and real-world experiments, and paper writing.
Huanyu: Core; optimized the training infrastructure, led the unscrewing, pouring, and juicing tasks, contributed to paper writing, and implemented part of the simulation baselines.
Dongjie: Core; led the juicing task, contributed to paper writing and most of the real-world baselines, and implemented part of the simulation baselines.
Zhenyu: Core; responsible for the real-world robot setup, design, and data collection, and contributed to paper writing.
Lingxiao: Core; led data collection for the folding task, managed its offline training process, and explored dual-arm embodiments in the early stages.
Zhennan: Core; contributed to part of the algorithm design, improved consistency policy distillation, debugged and explored settings in simulation, and developed most of the simulation baselines.
Ziyu: Managed the Meta-World domain tasks.
Shiyu: Writing refinement.
Huazhe: Principal Investigator (PI), Core; responsible for project direction and guidance, and contributed to paper writing.

Acknowledgement

We would like to thank Tianhai Liang for his contributions to the hardware design for this project, and Jiacheng You and Zhecheng Yuan for their valuable discussions and insights.

References

Ajay A, Du Y, Gupta A, Tenenbaum J, Jaakkola T and Barbu A (2023) Is conditional generative modeling all you need for decision making? arXiv preprint arXiv:2211.15657.
Ball PJ, Smith L, Kostrikov I and Levine S (2023) Efficient online reinforcement learning with offline data. In: International Conference on Machine Learning.
Black K, Brown N, Driess D, Esmail A, Equi M, Finn C, Fusai N, Groom L, Hausman K, Ichter B, Jakubczak S, Jones T, Ke L, Levine S, Li-Bell A, Mothukuri M, Nair S, Pertsch K, Shi LX, Tanner J, Vuong Q, Walling A, Wang H and Zhilinsky U (2024a) π0: A vision-language-action flow model for general robot control. URL https://arxiv.org/abs/2410.24164.
Black K, Brown N, Driess D, Ko AE, Hu K, Lee A, Nakayama M, Singh A, Xu Q, Yang Q et al. (2024b) π0: A vision-language-action flow model for general robot control. arXiv preprint arXiv:2410.24164.
Black K, Janner M, Du Y, Kostrikov I and Levine S (2023) Training diffusion models with reinforcement learning. arXiv preprint arXiv:2305.13301.
Chen H, Lu C, Ying C, Su H and Zhu J (2023) Score regularized policy optimization through diffusion behavior. arXiv preprint arXiv:2310.07297.
Chen Q, Deng Z, Wang Y, Bao J, Lu Y, Zeng W, Niebles JC, Gao R, Li FF and Wu J (2024) FlowPolicy: Enabling fast and robust 3D flow-based policy via consistency flow matching for robot manipulation.
arXiv preprint arXiv:2412.04987.
Chi C, Xu Z, Feng S, Cousineau E, Du Y, Burchfiel B, Tedrake R and Song S (2023) Diffusion policy: Visuomotor policy learning via action diffusion. The International Journal of Robotics Research: 02783649241273668.
Christiano PF, Leike J, Brown TB et al. (2017) Deep reinforcement learning from human preferences. In: NeurIPS.
Chua K, Calandra R, McAllister R and Levine S (2018) Deep reinforcement learning in a handful of trials using probabilistic dynamics models. In: NeurIPS.
Cui J and Trinkle J (2021) Toward next-generation learned robot manipulation. Science Robotics 6(54): eabd9461. DOI:10.1126/scirobotics.abd9461. URL https://www.science.org/doi/abs/10.1126/scirobotics.abd9461.
Ding S, Hu C, Liu Z, Zhang C and Liang Y (2024a) Diffusion-based reinforcement learning via q-weighted variational policy optimization. In: Advances in Neural Information Processing Systems.
Ding Z, Zhang C, Dong J and Jia J (2024b) Consistency models as a rich and efficient policy class for reinforcement learning. arXiv preprint arXiv:2309.16984.
Eysenbach B, Gu S, Ibarz J and Levine S (2018) Leave no trace: Learning to reset for safe and autonomous reinforcement learning. In: International Conference on Learning Representations.
Fan Y, Watkins O, Du Y, Liu H, Ryu M, Goodman C, Jain A, Vafa K, Kim D, Yu J, Tang H, Hao T, Tenenbaum J, Garg S, Jesse A, Abbeel P and Singh AK (2024) DPOK: Reinforcement learning for fine-tuning text-to-image diffusion models. Advances in Neural Information Processing Systems.
Fu J, Kumar A, Nachum O, Tucker G and Levine S (2020) D4RL: Datasets for deep data-driven reinforcement learning.
Fujimoto S, van Hoof H and Meger D (2018) Addressing function approximation error in actor-critic methods. In: International Conference on Machine Learning (ICML).
Guo L, Xue Z, Xu Z and Xu H (2025) DemoSpeedup: Accelerating visuomotor policies via entropy-guided demonstration acceleration. URL https://arxiv.org/abs/2506.05064.
Gupta A, Mendonca R, Zhao Y, Emmert-Streib F and Levine S (2021) Reset-free reinforcement learning via multi-task learning: Learning dexterous manipulation behaviors without human intervention. In: IEEE International Conference on Robotics and Automation.
Haarnoja T, Zhou A, Abbeel P and Levine S (2018) Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In: International Conference on Machine Learning (ICML).
Hansen-Estruch P, Kostrikov I, Janner M, Kuba JG and Levine S (2024) Revisiting diffusion q-learning: From iterative denoising to one-step action generation. In: International Conference on Machine Learning.
He H, Bai C, Xu K, Yang Z, Zhang W, Wang D, Zhao B and Li X (2023) Diffusion model is an effective planner and data synthesizer for multi-task reinforcement learning. arXiv preprint arXiv:2305.18459.
He K, Fan H, Wu Y, Xie S and Girshick R (2020) Momentum contrast for unsupervised visual representation learning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 9729–9738.
Ho J, Jain A and Abbeel P (2020a) Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33: 6840–6851.
Ho J, Jain A and Abbeel P (2020b) Denoising diffusion probabilistic models. In: Advances in Neural Information Processing Systems, volume 33. pp. 6840–6851.
Intelligence P, Black K, Brown N, Darpinian J, Dhabalia K, Driess D, Esmail A, Equi M, Finn C, Fusai N, Galliker MY, Ghosh D, Groom L, Hausman K, Ichter B, Jakubczak S, Jones T, Ke L, LeBlanc D, Levine S, Li-Bell A, Mothukuri M, Nair S, Pertsch K, Ren AZ, Shi LX, Smith L, Springenberg JT, Stachowicz K, Tanner J, Vuong Q, Walke H, Walling A, Wang H, Yu L and Zhilinsky U (2025) π0.5: A vision-language-action model with open-world generalization. URL https://arxiv.org/abs/2504.16054.
Janner M, Du Y, Tenenbaum JB and Levine S (2022) Planning with diffusion for flexible behavior synthesis. In: International Conference on Machine Learning. pp. 9902–9915.
Janner M, Fu J, Zhang M and Levine S (2019) When to trust your model: Model-based policy optimization. In: Advances in Neural Information Processing Systems.
Kalashnikov D, Irpan A, Pastor P, Ibarz J et al. (2018) QT-Opt: Scalable deep reinforcement learning for vision-based robotic manipulation. In: Conference on Robot Learning (CoRL).
Kang B, Jie X, Du D, Li J, Chen X, Zhang Y and Tian Y (2023) Efficient diffusion policies for offline reinforcement learning. arXiv preprint arXiv:2305.20081.
Kostrikov I, Nair A and Levine S (2022) Offline reinforcement learning with implicit q-learning. In: ICLR.
Kumar A, Zhou A, Tucker G and Levine S (2020) Conservative q-learning for offline reinforcement learning. In: Advances in Neural Information Processing Systems, volume 33. pp. 1179–1191.
Lei K, He Z, Lu C, Hu K, Gao Y and Xu H (2024) Uni-O4: Unifying online and offline deep reinforcement learning with multi-step on-policy optimization. In: ICLR.
Levine S, Finn C, Darrell T and Abbeel P (2016) End-to-end training of deep visuomotor policies. In: Robotics: Science and Systems (RSS).
Li H, Jiang Z, Chen Y and Zhao D (2024) Generalizing consistency policy to visual RL with prioritized proximal experience regularization. In: The Thirty-eighth Annual Conference on Neural Information Processing Systems. URL https://openreview.net/forum?id=MOFwt8OeXr.
Lin T, Sachdev K, Fan L, Malik J and Zhu Y (2025) Sim-to-real reinforcement learning for vision-based dexterous manipulation on humanoids. arXiv preprint arXiv:2502.20396.
Lipman Y, Chen RT, Ben-Hamu H, Nickel M and Le M (2023) Flow matching for generative modeling. In: International Conference on Learning Representations.
Liu Y, Bao J and Li KH (2024) PinchBot: Long-horizon deformable manipulation with guided diffusion policy. arXiv preprint arXiv:2507.17846.
Lu C, Chen H, Chen J, Su H, Li C and Zhu J (2023) Contrastive energy prediction for exact energy-guided diffusion sampling in offline reinforcement learning. arXiv preprint arXiv:2304.08349.
Lu Y, Tian Y, Yuan Z, Wang X, Hua P, Xue Z and Xu H (2025) H3DP: Triply-hierarchical diffusion policy for visuomotor learning. arXiv preprint arXiv:2505.07819.
Luo J, Hu Z, Xu C, Tan YL, Berg J, Sharma A, Schaal S, Finn C, Gupta A and Levine S (2024a) SERL: A software suite for sample-efficient robotic reinforcement learning. In: 2024 IEEE International Conference on Robotics and Automation (ICRA). pp. 16961–16969. DOI:10.1109/ICRA57147.2024.10610040.
Luo J, Hu Z, Xu C, You S, Darrell T and Levine S (2024b) SERL: A software suite for sample-efficient robotic reinforcement learning. In: IEEE International Conference on Robotics and Automation.
Luo J, Xu C, Liu F, Trevor D and Levine S (2024c) Precise and dexterous robotic manipulation via human-in-the-loop reinforcement learning. Science Robotics 9(96): eads5033.
Luo J, Xu C, Wu J and Levine S (2025) Precise and dexterous robotic manipulation via human-in-the-loop reinforcement learning. Science Robotics 10(105): eads5033. DOI:10.1126/scirobotics.ads5033. URL https://www.science.org/doi/abs/10.1126/scirobotics.ads5033.
Majumdar A, Yadav K, Arnaud S, Ma YJ, Chen C, Silwal S, Jain A, Berges VP, Abbeel P, Malik J et al. (2023) Where are we in the search for an artificial visual cortex for embodied intelligence? arXiv preprint arXiv:2303.18240.
Nair A, Dalal M, Gupta A and Levine S (2020) AWAC: Accelerating online reinforcement learning with offline datasets. In: International Conference on Machine Learning.
Nair S, Rajeswaran A, Kumar V, Finn C and Gupta A (2022) R3M: A universal visual representation for robot manipulation. In: Conference on Robot Learning.
Nakamoto M, Zhai Y, Singh A, Mark MS, Ma Y, Finn C, Kumar A and Levine S (2023) Cal-QL: Calibrated offline RL pre-training for efficient online fine-tuning. In: Advances in Neural Information Processing Systems.
Nguyen T and Yoo CD (2025) Revisiting diffusion q-learning: From iterative denoising to one-step action generation. arXiv preprint arXiv:2508.13904.
Park S, Kang K and Levine S (2025) Flow q-learning. arXiv preprint arXiv:2502.02538.
Peng XB, Kumar A, Zhang G and Levine S (2019) Advantage-weighted regression: Simple and scalable off-policy reinforcement learning. In: International Conference on Learning Representations.
Psenka M, Escontrela A, Abbeel P and Ma Y (2024) Learning a diffusion model policy from rewards via q-score matching. arXiv preprint arXiv:2312.11752.
Rajeswaran A, Kumar V, Gupta A, Vezzani G, Schulman J, Todorov E and Levine S (2018) Learning complex dexterous manipulation with deep reinforcement learning and demonstrations. In: Proceedings of Robotics: Science and Systems. Pittsburgh, Pennsylvania. DOI:10.15607/RSS.2018.XIV.049.
Ren AZ, Dixit A, Bodrova AS, Singh S, Tu S, Brown N, Xu P, Takayama L, Xia F, Varley J, Xu Z, Sadigh D, Zeng A and Majumdar A (2024) Diffusion policy policy optimization. arXiv preprint arXiv:2409.00588.
Sabatelli M, Suresh S and Zhang B (2024) DiffClone: Enhanced behaviour cloning in robotics with diffusion-driven policy learning. arXiv preprint arXiv:2401.09243.
Schulman J, Moritz P, Levine S, Jordan MI and Abbeel P (2016) High-dimensional continuous control using generalized advantage estimation. In: ICLR.
Schulman J, Wolski F, Dhariwal P, Radford A and Klimov O (2017) Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347.
Shang S, Zheng Y, Yin Z and Duan Y (2024) Latent weight diffusion: Generating policies from trajectories. arXiv preprint arXiv:2410.14040.
Singh A, Yang L, Hartikainen K, Finn C and Levine S (2019) End-to-end robotic reinforcement learning without reward engineering. In: Robotics: Science and Systems.
Song J, Meng C and Ermon S (2021) Denoising diffusion implicit models. In: International Conference on Learning Representations. URL https://openreview.net/forum?id=St1giarCHLP.
Song Y, Dhariwal P, Chen M and Sutskever I (2023) Consistency models. In: Proceedings of the 40th International Conference on Machine Learning, ICML'23. JMLR.org.
Song Y, Sohl-Dickstein J, Kingma DP, Kumar A, Ermon S and Poole B (2020) Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456.
Todorov E, Erez T and Tassa Y (2012) MuJoCo: A physics engine for model-based control. In: 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems. pp. 5026–5033. DOI:10.1109/IROS.2012.6386109.
Wagenmaker A, Beeson A, Ajay A and Gupta A (2025) Steering your diffusion policy with latent space reinforcement learning. arXiv preprint arXiv:2501.xxxxx.
Walke H, Black K, Lee A, Kim MJ, Du M, Zheng C, Zhao T, Hansen-Estruch P, Xu Q, Dragan A et al. (2023) BridgeData V2: A dataset for robot learning at scale. arXiv preprint arXiv:2308.12952.
Wang SY, Kubota Y and Shah D (2023a) Diffusion model-augmented behavioral cloning. arXiv preprint arXiv:2302.13335.
Wang Z, Hunt JJ and Zhou M (2023b) Diffusion policies as an expressive policy class for offline reinforcement learning. In: International Conference on Learning Representations.
Wang Z, Zhu Y, Liu S and Song S (2024) One-step diffusion policy: Fast visuomotor policies via diffusion distillation. arXiv preprint arXiv:2410.21257.
Yu T, Quillen D, He Z, Julian R, Hausman K, Finn C and Levine S (2020) Meta-World: A benchmark and evaluation for multi-task and meta reinforcement learning. In: Conference on Robot Learning. PMLR, pp. 1094–1100.
Yuan Z, Wei T, Gu L, Hua P, Liang T, Chen Y and Xu H (2025) HERMES: Human-to-robot embodied learning from multi-source motion data for mobile dexterous manipulation. URL https://arxiv.org/abs/2508.20085.
Ze Y, Zhang G, Zhang K, Hu C, Wang M and Xu H (2024) 3D Diffusion Policy: Generalizable visuomotor policy learning via simple 3D representations. In: Proceedings of Robotics: Science and Systems. Delft, Netherlands. DOI:10.15607/RSS.2024.XX.067.
Zhang T, Yu C, Su S and Wang Y (2025) ReinFlow: Fine-tuning flow matching policy with online reinforcement learning. arXiv preprint arXiv:2505.22094.
Zhao TZ, Kumar V, Levine S and Finn C (2023) Learning fine-grained bimanual manipulation with low-cost hardware. In: Proceedings of Robotics: Science and Systems. Daegu, Republic of Korea. DOI:10.15607/RSS.2023.XIX.016.

Appendix

Experiment Setting

This section presents the setup and methodology for conducting the real-world robot manipulation experiments. The experimental framework is designed to integrate perception, control, and demonstration collection in a coherent pipeline. We employ the Intel RealSense L515 camera to capture depth images, which are subsequently processed into 3D point clouds and transformed into the robot's root frame for a consistent spatial representation. Intrinsic and extrinsic calibrations ensure an accurate mapping between image coordinates and the robot workspace, while spatial filtering and downsampling produce compact yet informative point cloud observations suitable for high-frequency control.

The robotic platform consists of three manipulators, a UR5, an xArm, and a Franka Emika Panda, equipped with dexterous or gripper-type end-effectors, as well as a passive fixed effector. Asynchronous control schemes are implemented across all manipulators to achieve low-latency, high-frequency execution, with task-specific interpolation strategies ensuring smooth and responsive motion.

Demonstrations for seven manipulation tasks are collected using teleoperation interfaces matched to task complexity. For dexterous 3D tasks, the Apple Vision Pro provides natural hand tracking, which is converted into robot joint or Cartesian commands via inverse kinematics for the LeapHand. For planar or low-dimensional tasks, a standard game joystick provides delta-action control of the end-effector in the plane. Each task includes approximately 100 demonstration trajectories, efficiently recorded within a few hours per task.
Overall, the experimental setup provides a consistent, flexible, and practical framework for acquiring high-quality real-world data and evaluating robot manipulation policies across a diverse set of tasks.

Camera Calibration

In the real-world experiments, we employ the Intel RealSense L515 camera, chosen for its ability to provide high-fidelity 3D point cloud observations owing to its superior depth accuracy. This ensures reliable perception for robot manipulation tasks across diverse environments.

The intrinsic calibration is performed using a Charuco board, which serves as the reference pattern for estimating the camera's internal parameters. Multiple images of the board are captured from varying angles and distances. The board corners are then detected and used to compute the intrinsic matrix and distortion coefficients through an optimization that minimizes the reprojection error. This procedure guarantees an accurate mapping between 2D image coordinates and their corresponding 3D spatial points.

The extrinsic calibration aligns the camera frame with the robot's root frame using an AprilTag placed on the tabletop. First, a manual measurement establishes the transformation T_{tag2root}, which encodes the pose of the tag relative to the robot base. Subsequently, the AprilTag is detected in the camera's RGB channel, yielding the transformation T_{tag2camera} from the tag to the camera frame. The camera-to-root transformation is then obtained as

T_{camera2root} = T_{tag2root} \cdot (T_{tag2camera})^{-1}.

This calibrated extrinsic transformation is stored and used throughout the experiments to maintain consistent spatial alignment between the camera observations and the robot workspace.

Point Cloud Processing

The raw depth maps captured by the RealSense L515 are first back-projected into 3D point clouds using the calibrated intrinsic parameters, establishing a metric reconstruction of the scene in the camera frame. These point clouds are then transformed into the robot's root frame using the calibrated extrinsic transformation T_{camera2root}, ensuring spatial consistency between visual observations and the robot's operational workspace. To suppress environmental noise and spurious reflections, a spatial bounding box is applied in the root frame, filtering out irrelevant regions while preserving only the volume above the tabletop. This step enhances the semantic relevance of the retained points, focusing on task-relevant structures. Finally, to improve computational efficiency while maintaining geometric fidelity, farthest point sampling (FPS) is applied; the processed point cloud is uniformly downsampled to 512 points, yielding a compact yet informative representation suitable for robot perception and policy learning. During deployment, the camera operates in a dedicated thread with a buffered pipeline, enabling asynchronous acquisition of observations. This design minimizes latency and allows the effective point cloud update rate to exceed 25 Hz, ensuring timely and reliable perception for closed-loop control.
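The calibration and filtering pipeline above reduces, in code, to composing two rigid transforms and then cropping and downsampling the cloud. The sketch below assumes 4×4 homogeneous-matrix conventions; the farthest-point-sampling loop is a straightforward greedy reference implementation, and all names are illustrative.

    import numpy as np

    def camera_to_root(T_tag2root, T_tag2camera):
        """Extrinsics as in the text: T_camera2root = T_tag2root @ inv(T_tag2camera)."""
        return T_tag2root @ np.linalg.inv(T_tag2camera)

    def process_cloud(points_cam, T_camera2root, lo, hi, n_out=512):
        """Transform points to the root frame, crop to a workspace box, then FPS."""
        pts = points_cam @ T_camera2root[:3, :3].T + T_camera2root[:3, 3]
        pts = pts[np.all((pts >= lo) & (pts <= hi), axis=1)]  # keep the volume above the table
        # Farthest point sampling: greedily add the point farthest from the chosen set.
        chosen = [0]
        d = np.linalg.norm(pts - pts[0], axis=1)
        for _ in range(min(n_out, len(pts)) - 1):
            chosen.append(int(d.argmax()))
            d = np.minimum(d, np.linalg.norm(pts - pts[chosen[-1]], axis=1))
        return pts[chosen]

    # Example with an identity extrinsic and a unit workspace box.
    cloud = np.random.rand(2000, 3)
    out = process_cloud(cloud, np.eye(4), lo=np.zeros(3), hi=np.ones(3))
    print(out.shape)  # (512, 3)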
Robotic Arm & End-effector Control

The experimental platform incorporates three robotic manipulators, a UR5, an xArm, and a Franka Emika Panda, equipped with different types of end-effectors, including the LeapHand dexterous hand, the Robotiq 2F-85 gripper, and a custom 3D-printed fixed effector. All control modules are designed to operate asynchronously, ensuring low-latency communication and high-frequency execution for responsive and stable manipulation.

For the UR5, relative action commands are transmitted at 30 Hz. These commands are handled through the Real-Time Data Exchange (RTDE) interface, where they are interpolated to 125 Hz for execution. A lookahead time of 1/30 s and a control gain of 200 are used, providing a balance between smooth motion trajectories and timely responsiveness.

For the xArm, control is performed via absolute Cartesian position commands at 30 Hz. Linear interpolation is applied to the position trajectories, while orientation is represented in quaternion form and interpolated via spherical linear interpolation (slerp); a minimal sketch of this upsampling appears at the end of this appendix section. The resulting trajectory is executed at 200 Hz, yielding continuous and smooth motion in task space.

For the Franka Emika Panda, absolute Cartesian position commands are also issued at 30 Hz. The robot's low-level controller performs trajectory interpolation, converting the input into control signals at 1000 Hz, thereby ensuring precise and compliant execution.

Regarding end-effector control, both the LeapHand and the Robotiq 2F-85 are driven asynchronously at 30 Hz. The policy performs end-to-end joint control for the LeapHand, while the gripper uses binary control for the Soft-towel Folding task and continuous control for the Orange Juicing task. The custom 3D-printed fixed effector, by contrast, is purely passive and requires no active control, serving as a simple but reliable interface for specific experimental settings such as Dynamic Push-T and Agile Bowling.

Data Collection

Data for seven real-world manipulation tasks are collected using teleoperation interfaces selected to match the complexity and dimensionality of each task. These tasks include both dexterous three-dimensional manipulations, such as folding, pouring, unscrewing, and juicing, and planar or constrained motions, such as Dynamic Push-T and Agile Bowling. The data collection strategy aims to provide sufficient coverage of the task space while maintaining practical efficiency in demonstration acquisition.

For tasks involving complex 3D hand motions, the Apple Vision Pro is employed. Its built-in spatial hand tracking captures natural hand and finger movements, which are mapped to robot end-effector motions. Before starting each demonstration, the operator's hand pose is aligned with the robot end-effector through an initialization procedure, ensuring a consistent reference frame. During teleoperation, wrist and finger motions are streamed in real time and converted into robot joint or Cartesian commands, while gripper grasping actions are triggered via pinch gestures. For the LeapHand dexterous hand, inverse kinematics is solved in PyBullet to produce feasible joint configurations from the tracked hand poses. This setup allows operators to efficiently provide demonstrations for tasks requiring dexterous, spatially rich manipulation.

For planar or low-dimensional manipulation tasks, such as Dynamic Push-T and Agile Bowling, a standard game joystick is used. The joystick's XY-axis values are mapped to delta actions of the robot end-effector in the plane. This approach provides an intuitive and efficient means of recording demonstrations where precise two-dimensional control is sufficient.

Our dataset is intentionally varied. Some tasks feature a large number of demonstrations to test performance limits, while others use smaller datasets to evaluate sample efficiency. Tab. 7 provides a complete summary.
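As a companion to the xArm command-interpolation scheme described in the control subsection above, the following sketch upsamples consecutive 30 Hz pose commands toward the 200 Hz execution rate, linearly in position and via slerp in orientation. It uses SciPy's rotation utilities as an assumed convenience; our controller does not necessarily use SciPy, and all names are illustrative.

    import numpy as np
    from scipy.spatial.transform import Rotation, Slerp

    def upsample_pose(p0, p1, q0, q1, n=7):
        """Interpolate between two 30 Hz pose commands for ~200 Hz execution.

        p0, p1: xyz positions; q0, q1: orientations as xyzw quaternions.
        Returns n intermediate (position, quaternion) pairs, endpoints included.
        """
        ts = np.linspace(0.0, 1.0, n)
        positions = (1 - ts)[:, None] * p0 + ts[:, None] * p1    # linear in position
        slerp = Slerp([0.0, 1.0], Rotation.from_quat([q0, q1]))  # spherical in orientation
        quats = slerp(ts).as_quat()
        return positions, quats

    pos, quat = upsample_pose(np.zeros(3), np.array([0.1, 0.0, 0.0]),
                              np.array([0.0, 0.0, 0.0, 1.0]),
                              Rotation.from_euler("z", 0.2).as_quat())
    print(pos.shape, quat.shape)  # (7, 3) (7, 4)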
RL-100: Performant Robotic Manipulation with Real-World Reinforcement Learning

Kun Lei1,2,*, Huanyu Li1,2,*, Dongjie Yu1,3,*, Zhenyu Wei5,*, Lingxiao Guo6, Zhennan Jiang7, Ziyu Wang4, Shiyu Liang2 and Huazhe Xu1,4

1Shanghai Qizhi Institute, China; 2Shanghai Jiao Tong University, China; 3The ; 4IIIS, Tsinghua University, China; 5 ; 6Carnegie Mellon University, USA; 7Chinese Academy of Sciences, China. *Equal contribution.

Abstract

Real-world robotic manipulation in homes and factories demands reliability, efficiency, and robustness that approach or surpass skilled human operators. We present RL-100, a real-world reinforcement learning training framework built on diffusion visuomotor policies trained by supervised learning. RL-100 introduces a three-stage pipeline. First, imitation learning leverages human priors. Second, iterative offline reinforcement learning uses an Offline Policy Evaluation procedure, abbreviated OPE, to gate PPO-style updates that are applied in the denoising process for conservative and reliable improvement. Third, online reinforcement learning eliminates residual failure modes. An additional lightweight consistency distillation head compresses the multi-step sampling process in diffusion into a single-step policy, enabling high-frequency control with an order-of-magnitude reduction in latency while preserving task performance. The framework is task-, embodiment-, and representation-agnostic and supports both 3D point clouds and 2D RGB inputs, a variety of robot platforms, and both single-step and action-chunk policies. We evaluate RL-100 on seven real-robot tasks spanning dynamic rigid-body control, such as Push-T and Agile Bowling, fluid and granular pouring, deformable cloth folding, precise dexterous unscrewing, and multi-stage orange juicing. RL-100 attains 100% success across evaluated trials, for a total of 900 out of 900 episodes, including up to 250 out of 250 consecutive trials on one task. The method achieves near-human-teleoperation or better time efficiency and demonstrates multi-hour robustness, with uninterrupted operation lasting up to two hours. The resulting policies generalize zero-shot to novel dynamics with an average success of 92.5% and adapt in a few-shot fashion to significant task variations, reaching an average of 86.7% after one to three hours of additional training. These results suggest a practical path to deployment-ready robot learning: start from human priors, align training objectives with human-grounded metrics, and reliably extend performance beyond human demonstrations. For more results and videos, please visit our project website: https://lei-kun.github.io/RL-100/.

Keywords

Real-world Reinforcement Learning, Diffusion Policy, Robotic Manipulation, Offline-to-Online RL, Visuomotor Control

Introduction

Dexterous robotic manipulation stands as an iconic challenge in robotics (Cui and Trinkle 2021; Luo et al. 2025). Real-world deployment beyond laboratories requires human-level reliability, efficiency, and robustness. Recent learning-based advances across generative diffusion policies (Chi et al. 2023; Ze et al. 2024), diffusion-based robot foundation models (Black et al. 2024a; Intelligence et al. 2025), sim-to-real RL (Yuan et al. 2025; Lin et al. 2025), and real-world RL (Luo et al. 2025, 2024a) have narrowed this gap, demonstrating human-like manipulation proficiency. In particular, generative policies and foundation models are trained or fine-tuned on high-quality, human-collected real-robot datasets at varying scales, providing strong human priors and enabling robots to acquire the efficient strategies used by skilled tele-operators.
However, high-quality real-robot data remain scarce: teleoperation incurs perceptual and control latency and favors slow, conservative motions (Guo et al. 2025). Moreover, large-scale collection depends on skilled operators and is labor-intensive and expensive. As a result, state-action coverage is limited, undermining generalization and reliability. Consequently, this supervised paradigm is constrained by an imitation ceiling: under purely supervised objectives, performance is effectively bounded by demonstrator skill and inherits human inefficiencies, biases, and occasional errors.

Reinforcement learning (RL) offers a complementary route by optimizing returns from interaction rather than imitation fidelity, enabling the discovery of strategies that are rare or absent in human demonstrations. At the same time, sim-to-real RL must contend with visual and dynamics gaps between simulation and reality, while naively training a learning-based generative policy on real hardware is risky and sample-inefficient. This raises a central question: how can we build a robotic learning system that leverages strong human priors yet continues to refine itself through autonomous exploration? A useful analogy comes from human learning: babies learn to walk under parental supervision, then reinforce the skill on their own until they master it and eventually transfer it across terrains. Analogously, a generalizable robotic learning system should combine skilled human priors with self-improvement to reach, and in some cases exceed, human capability in reliability, efficiency, and robustness.

Figure 1. Teaser. Real-robot snapshots illustrating the diversity of our task suite. Panels are ordered top-left → bottom-right: (a) Dynamic Push-T; (b) Agile Bowling; (c) Pouring; (d) Dynamic Unscrewing with a dexterous hand; (e) Dual-arm Folding; (f) Juicing, with stage 1 and stage 2 shown side by side in the same panel. The grid highlights six of the seven tasks; the juicing stages are grouped into one panel for space.

In this paper, we introduce RL-100, a framework that employs a real-world RL post-training phase on top of an imitation-based diffusion policy, preserving its expressive strengths while explicitly optimizing deployment metrics (success rate, time-to-completion, and robustness) under mild human-guided exploration. In short, we start from human priors, align with human-grounded objectives, and go beyond human performance. RL-100 has three stages with distinct roles and costs: (i) Imitation-learning (IL) pretraining on teleoperated demonstrations provides a competent, low-variance base, much like the sponge layer of a cake, on which subsequent learning can be built. (ii) Iterative offline RL post-training (offline updates on a growing buffer of policy rollouts) delivers the bulk of the improvement in success rate and efficiency across iterations, analogous to adding the cream layer. (iii) Online, on-policy RL post-training supplies the last-mile reliability, targeting the rare failure modes that remain after iterative offline learning: the cherries on top. It is, however, resource-intensive on real hardware (parameter tuning, resets, approvals).
We therefore allocate most of the learning budget to iterative offline updates and use a small, targeted online budget to push performance from a high success rate (e.g., 95%) to near-perfect (e.g., 99%+). Moreover, RL-100 is representation-agnostic: it operates in a vision-based setting and supports both 3D point clouds and 2D RGB images by simply swapping observation encoders, without modifying the rest of the framework. While our real-robot experiments use 3D point clouds as the primary representation, ablations in simulation show the same performance trends with 2D inputs. In particular, we introduce a self-supervised visual encoder tailored for RL post-training, which furnishes stable, task-relevant features throughout policy exploration and updates. During policy rollouts, a human operator gives sparse success signals when needed, and the controller follows conservative operating limits. We use a unified policy-gradient objective across both the iterative offline and online phases to fine-tune the diffusion sampler's short-horizon denoising schedule (Song et al. 2021). This alignment yields stable updates across phases and strong fine-tuning sample efficiency. In addition, we interleave a lightweight distillation loss that compresses the K-step diffusion policy into a one-step consistency policy (Song et al. 2023) for deployment, reducing inference latency while maintaining or improving efficiency and robustness.

Moreover, our framework is task- and embodiment-agnostic. We evaluate RL-100 across simulated tasks and on a real-world suite of seven manipulation tasks, as illustrated in Fig. 1 and summarized in Tab. 1: Dynamic Push-T, Agile Bowling, Pouring, Soft-towel Folding, Dynamic Unscrewing, and Orange Juicing. Orange Juicing comprises two subtasks, placing and removal, which are trained and evaluated separately but reported as one task family. The suite includes rigid-body dynamics, deformable objects, fluids, and precision assembly, and the framework is deployed across multiple embodiments. For these tasks and embodiments, we select a specific control mode per task: a single-step control mode is used when a fast closed-loop reaction is necessary; action-chunk-based control is preferred for coordination-heavy or high-precision tasks, where smoothing mitigates jitter and limits error compounding (Zhao et al. 2023). Both regimes share the same diffusion backbone; only the action heads differ. Because we target deployment in homes and factories, we emphasize deployment-centric metrics: reliability (success rate), efficiency (time-to-completion), and robustness (sustained stability under long deployment times and perturbations). Real-world experiments show that RL-100 attains 100% reliability across all seven tasks (up to 250/250 consecutive successful trials) and maintains long-horizon stability. In terms of efficiency, RL-100 approaches human-teleoperation-level time-to-completion and, on several tasks, matches or even surpasses skilled operators.

In summary, our main contributions are as follows: (i) Unified training framework. We propose RL-100, a three-stage real-world RL training framework on top of teleoperation-trained generative diffusion policies. The pipeline chains IL pretraining, iterative offline RL, and online RL, with most updates allocated to the iterative offline stage and a small, targeted online budget for the last mile to deployment-grade performance. (ii) One objective, fast deployment.
A unified policy-gradient loss fine-tunes the diffusion sampler's denoising schedule, and a lightweight distillation compresses the multi-step diffusion policy into a one-step consistency policy. (iii) Generality across tasks, embodiments, and visual representations. Our framework is task-, embodiment-, and visual-representation-agnostic. To the best of our knowledge, RL-100 is the first system to demonstrate vision-based RL post-training on real robots across diverse task modalities and embodiments. (iv) Deployment-centric results. On real robots, RL-100 achieves 100% reliability across seven tasks, including runs of up to 250 consecutive successes, attains efficiency comparable to or exceeding human teleoperation on multiple tasks, and demonstrates long-horizon robustness with uninterrupted operation for up to two hours, offering a promising route to deployment in homes and factories. (v) RL-specific network backbone. Our policy backbone is tailored for diffusion-based visuomotor control and is agnostic to the execution regime, supporting both single-step and action-chunk control. We use a self-supervised visual encoder to produce stable, drift-resistant representations throughout RL fine-tuning.

Preliminaries

Reinforcement Learning

We formulate the robotic manipulation problem as a Markov Decision Process (MDP) ⟨S, A, P, R, γ⟩, where S is the state space, A is the action space, P is the transition dynamics, R is the reward function, and γ is the discount factor. The robot policy π chooses action a_t at state s_t to maximize the discounted cumulative reward \sum_{t=0}^{\infty} \gamma^t R(s_t, a_t). The value function V(s) = \mathbb{E}\big[\sum_{t=0}^{\infty} \gamma^t R(s_t, a_t) \mid s_0 = s,\ a_t \sim \pi(\cdot|s_t)\big] measures robot performance starting from a given state s, and the Q function Q(s, a) = \mathbb{E}\big[\sum_{t=0}^{\infty} \gamma^t R(s_t, a_t) \mid s_0 = s,\ a_0 = a,\ a_t \sim \pi(\cdot|s_t)\big] starts from a given s and a.

Offline-to-online RL. Our post-training follows an offline-to-online paradigm. Following Lei et al. (2024), we employ a proximal policy optimization (PPO)-style objective (Schulman et al. 2017) to unify both stages. The core learning objective incorporates importance sampling for proximal policy updates:

J_i(\pi) = \mathbb{E}_{s \sim \rho_\pi,\, a \sim \pi_i}\left[ \min\Big( r(\pi)\, A(s, a),\ \mathrm{clip}\big(r(\pi),\, 1-\varepsilon,\, 1+\varepsilon\big)\, A(s, a) \Big) \right] \qquad (1)

where ρ_π is the stationary state distribution induced by policy π, r(π) = π(a|s) / π_i(a|s) is the importance ratio, and A(s, a) = Q(s, a) − V(s) is the advantage function.
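In code, the clipped surrogate of Eq. (1) is a few lines. The sketch below is a generic PPO-style loss over a batch of log-probabilities and advantages, not the exact implementation used in RL-100; all names are illustrative.

    import torch

    def clipped_surrogate(logp_new, logp_old, advantage, eps=0.2):
        """PPO-style objective of Eq. (1), returned as a loss (negated for descent).

        logp_new:  log pi(a|s) under the policy being optimized
        logp_old:  log pi_i(a|s) under the behavior policy that produced the data
        advantage: A(s, a), e.g., IQL-style Q - V offline or GAE online
        """
        ratio = torch.exp(logp_new - logp_old)                   # importance ratio r(pi)
        unclipped = ratio * advantage
        clipped = torch.clamp(ratio, 1 - eps, 1 + eps) * advantage
        return -torch.min(unclipped, clipped).mean()

    loss = clipped_surrogate(torch.randn(32), torch.randn(32), torch.randn(32))
    print(loss)  # scalar loss for the optimizer step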
Table 1. Task suite, embodiments, and control modes. Embodiments listed are typical examples; our framework is task- and embodiment-agnostic. Control modes are selected per task as either Single-step (one action per control tick) or Action-chunk (short c-step segments).

Task | Control mode | Embodiments (examples) | Modality | Key challenges
Dynamic Push-T | Single-step | UR5 + 3D-printed end-effector (single-arm) | Rigid-body dynamics | Fast reaction to a moving goal pose and online perturbations
Agile Bowling | Single-step | UR5 + 3D-printed end-effector (single-arm) | Rigid-body dynamics | Release-timing control at high velocity; trajectory and release-pose accuracy
Pouring | Single-step | Franka + LeapHand (single-arm) | Fluids / granular | Spillage minimization; flow control; container alignment under motion
Dynamic Unscrewing | Action-chunk | Franka + LeapHand (single-arm) | Precision assembly | Time-varying alignment; torque/pose regulation; cross-thread avoidance
Soft-towel Folding | Action-chunk | xArm + Franka + Robotiq (dual-arm) | Deformable cloth | Large deformation; coordinated contacts; fold accuracy
Orange Juicing‡ | Action-chunk | xArm + Robotiq (single-arm) | Deformable manipulation | Confined-space insertion/ejection; generalization to fruit variability

Control modes: Single-step (one action per control tick); Action-chunk (short c-step segments). ‡ Two subtasks: Placing (place fruit into the press zone) and Removal (remove the spent fruit); the spent fruit is deformable, fruit sizes vary substantially (requiring strong generalization), and operation occurs in a confined cavity with narrow clearances.

The key distinction between offline and online stages lies in advantage estimation:
• Offline: Implicit Q-Learning (IQL)-style (Kostrikov et al. 2022) value functions: $A^{\mathrm{off}}(s, a) = Q(s, a) - V(s)$.
• Online: Generalized Advantage Estimation (GAE) (Schulman et al. 2016): $A^{\mathrm{on}}(s, a) = \mathrm{GAE}(R_t, V)$ to balance variance and bias.

Diffusion models
We overload the subscript $t$ from the timestep in the MDP to the step index in the diffusion process in the following two subsections. Diffusion models (Ho et al. 2020a) learn to reverse a noising process that gradually corrupts clean data $x_0 \in \mathbb{R}^d$ into Gaussian noise, so as to reconstruct the original clean data distribution. Given a schedule $\{\alpha_t\}_{t=1}^{T}$ with $\alpha_t = 1 - \beta_t$ and a sample $x_0$ drawn from the clean distribution, the forward noising process follows a closed form:
$$x_t = \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1 - \bar{\alpha}_t}\, \varepsilon, \quad \varepsilon \sim \mathcal{N}(0, I), \quad (2a)$$
$$\bar{\alpha}_t = \prod_{s=1}^{t} \alpha_s. \quad (2b)$$
A denoiser $\varepsilon_\theta$ is trained to recognize the noise inside the noisy sample via
$$\mathcal{L}_{\mathrm{simple}}(\theta) = \mathbb{E}_{x_0,\, t,\, \varepsilon}\big[\|\varepsilon - \varepsilon_\theta(x_t, t)\|_2^2\big] \quad (3)$$
so as to recover the clean sample.

DDIM sampling with stochastic form
Denoising Diffusion Implicit Models (DDIM) (Song et al. 2021) provide a family of samplers that interpolate between deterministic and stochastic generation. Given a learned denoiser $\varepsilon_\theta$, the predicted clean sample at time $t$ is commonly written as
$$\hat{x}_0(x_t, t) = \frac{x_t - \sqrt{1 - \bar{\alpha}_t}\, \varepsilon_\theta(x_t, t)}{\sqrt{\bar{\alpha}_t}}, \quad (4)$$
where $\bar{\alpha}_t = \prod_{i=1}^{t} \alpha_i$ denotes the cumulative noise schedule. We consider a (possibly) coarse time-subsequence for sampling $\tau_K > \tau_{K-1} > \cdots > \tau_1$, with $K \ll T$ (for example, $T = 50{\sim}1000$ and $K = 5{\sim}10$). To cover both the single-step ($t \to t-1$) and subsampled ($\tau_k \to \tau_{k-1}$) cases in a unified notation, denote a generic transition from time $t$ to an earlier time $m$ (with $m < t$). The DDIM update then reads
$$\mu_\theta(x_t, t \to m) = \sqrt{\bar{\alpha}_m}\, \hat{x}_0(x_t, t) + \sqrt{1 - \bar{\alpha}_m - \sigma_{t \to m}^2}\, \varepsilon_\theta(x_t, t), \quad (5a)$$
$$x_m = \mu_\theta(x_t, t \to m) + \sigma_{t \to m}\, \varepsilon, \quad \varepsilon \sim \mathcal{N}(0, I), \quad (5b)$$
subject to
$$0 \le \sigma_{t \to m} \le \sqrt{1 - \bar{\alpha}_m}. \quad (6)$$
For $\sigma_{t \to m} > 0$ the transition from $x_t$ to $x_m$ can be viewed as a Gaussian sub-policy
$$\pi_\theta(x_m \mid x_t, t \to m) = \mathcal{N}\big(\mu_\theta(x_t, t \to m),\ \sigma_{t \to m}^2 I\big), \quad (7a)$$
$$\log \pi_\theta(x_m \mid x_t, t \to m) = -\frac{1}{2\sigma_{t \to m}^2}\, \big\|x_m - \mu_\theta(x_t, t \to m)\big\|^2 + C, \quad (7b)$$
where $C$ is a constant independent of the parameters $\theta$. Note that (7b) is only valid for $\sigma_{t \to m} > 0$; when $\sigma_{t \to m} = 0$ the density becomes singular and the transition is best described as the deterministic mapping $x_m = \mu_\theta(x_t, t \to m)$, or equivalently a Dirac distribution.
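The stochastic DDIM transition of Eqs. (4)-(7) is straightforward to state in code. The sketch below is our own minimal rendering (the function name ddim_step and the callable eps_theta are ours, and the update follows the reconstruction of Eqs. (5a)-(5b) above), not the released implementation:

```python
import torch

def ddim_step(eps_theta, x_t, t: int, m: int, alpha_bar: torch.Tensor, sigma: float):
    """One stochastic DDIM transition x_t -> x_m (Eqs. 4-5), viewed as
    sampling from the Gaussian sub-policy of Eq. (7a).
    `alpha_bar` holds the cumulative schedule; `sigma` must satisfy Eq. (6)."""
    a_t, a_m = alpha_bar[t], alpha_bar[m]
    eps = eps_theta(x_t, t)
    x0_hat = (x_t - torch.sqrt(1 - a_t) * eps) / torch.sqrt(a_t)            # Eq. (4)
    mu = torch.sqrt(a_m) * x0_hat + torch.sqrt(1 - a_m - sigma ** 2) * eps  # Eq. (5a)
    if sigma > 0:
        x_m = mu + sigma * torch.randn_like(x_t)                            # Eq. (5b)
        logp = -((x_m - mu) ** 2).sum() / (2 * sigma ** 2)                  # Eq. (7b), up to C
        return x_m, logp
    return mu, None  # sigma = 0: deterministic (Dirac) transition, no density
```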
A full DDIM sampling process recovers a clean sample $x_0$ by chaining the sub-policies $\{\pi_\theta(x_{\tau_{k-1}} \mid x_{\tau_k}, \tau_k \to \tau_{k-1})\}_{k=K}^{1}$, starting from $x_{\tau_K} \sim \mathcal{N}(0, I)$. In practice, one may set $\sigma_{t \to m} = 0$ for fully deterministic sampling, or choose a small positive $\sigma_{t \to m}$ (subject to (6)) to trade off between sample diversity and stability. If the log-likelihood in (7b) is later used as an objective or as part of a fine-tuning criterion, care must be taken in handling the $\sigma_{t \to m} \to 0$ limit (e.g., by restricting likelihood-based terms to steps with strictly positive variance).

Notation and scheduling conventions. Throughout the paper we distinguish MDP timesteps from diffusion (denoising) timesteps. Environment timesteps are denoted by $t$, while diffusion indices follow a (possibly subsampled) schedule $\tau_K > \tau_{K-1} > \cdots > \tau_1$ and are written as superscripts (e.g., $a^{\tau_k}$). A generic denoising transition from time $t$ to an earlier time $m$ is denoted by $t \to m$ (for the subsampled schedule this will typically be written $\tau_k \to \tau_{k-1}$). Variance parameters are indexed consistently as $\sigma_{\tau_k \to \tau_{k-1}}$ (abbreviated as $\sigma_{\tau_k}$ when unambiguous). We always enforce the constraint
$$0 \le \sigma_{\tau_k} \le \sqrt{1 - \bar{\alpha}_{\tau_{k-1}}}, \quad (8)$$
so that all square roots appearing in the DDIM updates are real. When $\sigma_{\tau_k} = 0$ the corresponding transition degenerates to a deterministic mapping (Dirac), and Gaussian log-densities are not defined; therefore any likelihood-based objective (e.g., policy gradient using $\log \pi$) must only use steps with strictly positive variance.

Consistency Models
Consistency models (Song et al. 2023) learn a single-step mapping from noisy inputs at arbitrary noise levels to clean data. Denote the consistency model by $C_\theta(x^\tau, \tau)$, where the superscript indicates the diffusion index (per the notation above). Given a frozen diffusion teacher $\Psi_\phi$ (for instance a K-step DDIM teacher that follows the same subsampled schedule $\{\tau_k\}$), consistency distillation minimizes the squared regression objective
$$\mathcal{L}_{\mathrm{CD}}(\theta) = \mathbb{E}_{x_0, \tau, \varepsilon}\Big[\big\|C_\theta(x^\tau, \tau) - \mathrm{sg}\big[\Psi_\phi(x^\tau, \tau \to 0)\big]\big\|_2^2\Big], \quad (9)$$
where $\mathrm{sg}[\cdot]$ denotes stop-gradient and $\Psi_\phi(x^\tau, \tau \to 0)$ is the teacher's output after running the teacher's denoising chain from $x^\tau$ down to (approximate) $x_0$. The teacher must be run using the same subsampled schedule $\{\tau_k\}$ that the student will emulate or distill from. At inference time a consistency model requires only a single evaluation:
$$x_0 \approx C_\theta(x^{\tau_K}, \tau_K), \quad x^{\tau_K} \sim \mathcal{N}(0, I). \quad (10)$$

Diffusion policy and RL fine-tuning
Diffusion Policy (Chi et al. 2023) performs diffusion over robot actions conditioned on observations. Given an observation $o$, we roll out along the K-step subsampled schedule $\{\tau_K > \cdots > \tau_1\}$:
$$a^{\tau_{k-1}} = f_\theta(a^{\tau_k}, \tau_k \mid o), \quad k = K, \ldots, 1, \quad (11)$$
where $f_\theta$ follows the DDIM schedule (5) conditioned on $o$. We can then retrieve a clean action $a_t := a^{\tau_0}$ from the so-called diffusion policy.

RL formulation. We now embed the K-step diffusion process as a sub-MDP inside a single step of the robotic manipulation MDP. Each denoising step can be viewed as sampling a cleaner noisy action $a^{\tau_{k-1}}$ from the Gaussian sub-policy in (7a). We therefore model the process as a K-step sub-MDP with (a rollout sketch follows this list):
• Initial state: $s_K = (a^{\tau_K}, \tau_K, o)$ with $a^{\tau_K} \sim \mathcal{N}(0, I)$.
• State: $s_k = (a^{\tau_k}, \tau_k, o)$, $k = K, \ldots, 1$.
• Action: $u_k = a^{\tau_{k-1}}$, drawn from the denoising sub-policy $\pi_\theta(u_k \mid s_k) = \mathcal{N}\big(\mu_\theta(a^{\tau_k}, \tau_k, o),\ \sigma_{\tau_k}^2 I\big)$.
• Transition: $s_{k-1} = (u_k, \tau_{k-1}, o)$.
• Reward: this sub-MDP only receives the terminal reward $R(a^{\tau_0})$ from the upper environment MDP.
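As a complement, the following sketch chains these sub-policies into a full rollout of the denoising sub-MDP, recording the per-step log-probabilities that the policy-gradient machinery below consumes. It reuses the hypothetical ddim_step helper from the previous sketch and is likewise illustrative only:

```python
import torch

def rollout_denoising_chain(eps_theta, taus, alpha_bar, sigmas, shape):
    """Chain the Gaussian sub-policies from tau_K down to tau_0, recording
    (state, action, log-prob) tuples of the sub-MDP.
    `taus` lists the subsampled schedule in increasing order tau_0 .. tau_K;
    `sigmas[k]` is the variance parameter for the step leaving tau_k."""
    x = torch.randn(shape)                      # a^{tau_K} ~ N(0, I)
    transitions = []
    for k in range(len(taus) - 1, 0, -1):       # k = K, ..., 1
        t, m = taus[k], taus[k - 1]
        x_next, logp = ddim_step(eps_theta, x, t, m, alpha_bar, sigmas[k])
        transitions.append((x, x_next, logp))   # (s_k, u_k, log pi_theta)
        x = x_next
    return x, transitions                       # clean action and its chain
```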
The log-likelihood defined in (7b) then gives the density of each sub-policy, enabling end-to-end optimization of task rewards with respect to the denoising sub-policies via policy-gradient updates (e.g., PPO (Schulman et al. 2017)).

Methods
We present RL-100, a unified framework for robot learning that combines IL with RL. As illustrated in Fig. 2, our approach consists of three stages: (1) imitation learning from human demonstrations, (2) iterative offline RL with progressive data expansion, and (3) online fine-tuning. The key innovation lies in unifying offline and online RL through a shared PPO-style objective applied across diffusion denoising steps.

Imitation Learning
We initialize the policy by behavior cloning on human-teleoperated trajectories. Our approach uses conditional diffusion to learn robust visuomotor policies from demonstrations. Each episode provides synchronized tuples
$$\{(o_t, q_t, a_t)\}_{t=1}^{T_e}, \quad (12)$$
where $o_t$ are visual observations (RGB images or 3D point clouds), $q_t$ denotes robot proprioception (joint positions/velocities, gripper state), and $a_t$ is either a single-step action or an action chunk.

Figure 2. Overview of RL-100. We first learn from human demonstrations through imitation learning with diffusion policies, then apply iterative offline RL with data expansion, followed by online fine-tuning for final performance optimization.

Conditioning and prediction horizon. We fuse recent observations into a conditioning vector
$$c_t = \big[\varphi(o_i, q_i)\big]_{i=t-n_o+1}^{t}, \quad (13)$$
where the perception encoder $\varphi(\cdot)$ processes the most recent $n_o$ frames (typically $n_o = 2$) and $[\cdot]$ is the operator concatenating multiple vectors. The clean diffusion target $a_t^{\tau_0}$ at timestep $t$ is set to a single action $a_t^{\tau_0} = u_t \in \mathbb{R}^{d_a}$ or an action chunk $a_t^{\tau_0} = [u_t, \ldots, u_{t+n_c-1}] \in \mathbb{R}^{n_c d_a}$, where $n_c$ is the chunk size (typically 8-16). Actions are normalized per dimension; we predict delta end-effector pose when applicable.

Diffusion parameterization. Following conditional diffusion over actions, we corrupt $a_t^{\tau_0}$ to $a_t^{\tau_K}$ via the forward process (Eq. (2)). The denoiser $\varepsilon_\theta(a^\tau, \tau, c_t)$ is trained with the noise-prediction objective:
$$\mathcal{L}_{\mathrm{IL}}(\theta) = \mathbb{E}_{(a^{\tau_0}, c_t) \sim \mathcal{D},\, \tau,\, \varepsilon}\big[\|\varepsilon - \varepsilon_\theta(a^\tau, \tau, c_t)\|_2^2\big], \quad (14)$$
where $\mathcal{D}$ is the demonstration dataset, $\tau \in \{\tau_K > \cdots > \tau_1\}$ indexes a K-step schedule, and $\varepsilon \sim \mathcal{N}(0, I)$. The policy backbone is shared across control modes; only the output head differs: $\mathbb{R}^{d_a}$ (single-step) or $\mathbb{R}^{n_c d_a}$ (chunked).

Vision and proprioception encoders. For RGB input, we use pretrained ResNet/ViT backbones; for point-cloud input, we adapt the 3D encoder of DP3 (Ze et al. 2024) with reconstruction regularization for stability during RL fine-tuning. Visual embeddings are projected and concatenated with proprioceptive features to form $c_t$. All encoders are trained end-to-end with Eq. (14).
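A minimal sketch of the noise-prediction objective of Eq. (14), assuming the schedule is stored as a tensor of step indices and the denoiser is an arbitrary conditioned network (all names are ours):

```python
import torch
import torch.nn.functional as F

def il_loss(eps_theta, a0, cond, taus, alpha_bar):
    """Conditional noise-prediction loss of Eq. (14) on a batch of clean
    actions `a0` with observation conditioning `cond`; `taus` is the K-step
    schedule as a LongTensor and `alpha_bar` the cumulative schedule."""
    B = a0.shape[0]
    tau = taus[torch.randint(0, len(taus), (B,))]        # sample a schedule step
    a_bar = alpha_bar[tau].view(B, *([1] * (a0.dim() - 1)))
    eps = torch.randn_like(a0)
    a_tau = torch.sqrt(a_bar) * a0 + torch.sqrt(1 - a_bar) * eps   # Eq. (2a)
    return F.mse_loss(eps_theta(a_tau, tau, cond), eps)
```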
When applicable, we add reconstruction (Recon) and variational information bottleneck (VIB) regularization:
$$\mathcal{L}_{\mathrm{recon}} = \beta_{\mathrm{recon}}\big(d_{\mathrm{Chamfer}}(\hat{o}, o) + \|\hat{q} - q\|_2^2\big), \quad (15)$$
$$\mathcal{L}_{\mathrm{KL}} = \beta_{\mathrm{KL}}\, \mathrm{KL}\big(\varphi(z \mid o, q)\,\big\|\,\mathcal{N}(0, I)\big), \quad (16)$$
where $o$ and $q$ denote the observed point cloud and proprioceptive vector; $\hat{o}$ and $\hat{q}$ are the reconstructed observations given the encoded embedding $\varphi(o, q)$; and $d_{\mathrm{Chamfer}}$ is the Chamfer distance between two point sets. The complete imitation learning objective becomes:
$$\mathcal{L}_{\mathrm{IL}}^{\mathrm{total}} = \mathcal{L}_{\mathrm{IL}} + \mathcal{L}_{\mathrm{recon}} + \mathcal{L}_{\mathrm{KL}}. \quad (17)$$
During RL fine-tuning, we reduce $\beta_{\mathrm{recon}}$ and $\beta_{\mathrm{KL}}$ by a factor of 10 to allow policy improvement while maintaining representation stability.

Inference and control. At deployment, K-step DDIM sampling (Eq. (5)) generates actions: $\hat{a}_t^{\tau_0} \leftarrow \mathrm{DDIM}_K\big(\varepsilon_\theta(\cdot, \cdot, c_t)\big)$. Single-step control executes $u_t \leftarrow \hat{a}_t^{\tau_0}$ immediately; chunked control executes $[u_t, \ldots, u_{t+n_c-1}] \leftarrow \hat{a}_t^{\tau_0}$ over the following $n_c$ timesteps. Single-step control excels in reactive tasks (e.g., dynamic bowling), while action chunking reduces jitter in precision tasks (e.g., assembly). Both modes share the same architecture, enabling task-adaptive deployment.

Unified Offline and Online RL Fine-tuning
Handling single action vs. action chunk. Our framework supports both single-step and chunked action execution, which affects value computation and credit assignment:
• Single action: standard MDP formulation with per-step rewards $R_t$ and discount $\gamma$.
• Action chunk: each chunk of $n_c$ actions is treated as a single decision. The chunk receives the cumulative reward $R_{\mathrm{chunk}} = R_{t:t+n_c-1} = \sum_{j=0}^{n_c-1} \gamma^j R_{t+j}$, and the equivalent discount factor between chunks is $\gamma^{n_c}$.
For clarity, we present the single-action case below; the chunked case works analogously by replacing per-step quantities with their chunk equivalents.

Two-level MDP structure. Our approach operates on two temporal scales:
1. Environment MDP: standard robot control with state $s_t$, action $a_t$, reward $R_t$.
2. Denoising MDP: the K-step diffusion process generating each $a_t^{\tau_0}$ through iterative refinement.
The denoising MDP is embedded within each environment timestep, creating a hierarchical structure where K denoising steps produce one environment action.

Unified PPO objective over denoising steps. Given the two-level MDP structure, we optimize the PPO objective with respect to the K-step diffusion process at each timestep $t$ in iteration $i$ via the summation across all denoising steps $k$:
$$J_i(\pi) = \mathbb{E}_{s_t \sim \rho_\pi,\, a_t \sim \pi_i}\Bigg[\sum_{k=1}^{K} \min\big(r_k(\pi) A_t,\ \mathrm{clip}(r_k(\pi),\, 1-\varepsilon,\, 1+\varepsilon)\, A_t\big)\Bigg], \quad (18)$$
and the loss function is
$$\mathcal{L}_{\mathrm{RL}}^{\mathrm{off}} = -J_i(\pi), \quad (19)$$
where $r_k(\pi)$ is the per-denoising-step importance ratio and $A_t$ is the task advantage tied to the environment timestep $t$. Here, $\pi_i$ denotes the behavior policy at PPO iteration $i$, and $\rho_\pi$ is the (discounted) state distribution under the current policy $\pi$. The key insight is to share the same environment-level advantage $A_t$ across all K denoising steps, providing dense learning signals throughout the denoising process while maintaining consistency with the environment reward structure.

Offline RL Setting. Given an offline dataset $\mathcal{D}$, we initialize the behavior policy with the diffusion policy acquired from IL: $\pi_0 := \pi_{\mathrm{IL}}$. No new data are collected in this pure offline stage.

Policy improvement on $\mathcal{D}$. At offline iteration $i$, we optimize the PPO-style surrogate (Eq. (18)) applied across K denoising steps, using the offline-policy ratio $r_k^{\mathrm{off}}(\pi) = \frac{\pi(a^{\tau_{k-1}} \mid s_k)}{\pi_i(a^{\tau_{k-1}} \mid s_k)}$ with standard clipping. Offline advantages are computed as $A_t^{\mathrm{off}} = Q_\psi(s_t, a_t) - V_\psi(s_t)$, where the critics $(Q_\psi, V_\psi)$ are pre-trained on $\mathcal{D}$ following IQL (Kostrikov et al. 2022).
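Concretely, the distinctive feature of Eq. (18), a single environment-level advantage broadcast over all K denoising steps, can be sketched as follows (tensor shapes and names are our own):

```python
import torch

def denoising_ppo_loss(logp_new: torch.Tensor,
                       logp_old: torch.Tensor,
                       adv: torch.Tensor,
                       eps: float = 0.2) -> torch.Tensor:
    """Clipped surrogate of Eqs. (18)-(19), summed over denoising steps.
    logp_new, logp_old: (B, K) per-denoising-step log-probs from Eq. (7b);
    adv: (B,) environment-level advantages, shared across the K steps."""
    ratio = torch.exp(logp_new - logp_old)        # r_k(pi), shape (B, K)
    a = adv.unsqueeze(-1)                         # broadcast A_t over k
    surrogate = torch.min(ratio * a,
                          torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * a)
    return -surrogate.sum(dim=1).mean()           # L_RL^off = -J_i(pi)
```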
This yields a candidate policy $\pi$ from $\pi_i$ via several epochs of gradient updates on $\mathcal{D}$.

OPE gate and iteration advancement. We use AM-Q (Lei et al. 2024) for offline policy evaluation (OPE), without further interaction with the environment, to compare the candidate with the current behavior policy:
$$\hat{J}_{\mathrm{AM\text{-}Q}}(\pi) = \mathbb{E}_{(s,a) \sim (\hat{\mathcal{T}}, \pi)}\Bigg[\sum_{t=0}^{H-1} Q_\psi(s_t, a_t)\Bigg],$$
where $\hat{\mathcal{T}}$ is a learned transition model. We accept the candidate and advance the behavior-policy iteration only if
$$\hat{J}_{\mathrm{AM\text{-}Q}}(\pi) - \hat{J}_{\mathrm{AM\text{-}Q}}(\pi_i) \ge \delta, \quad (20)$$
by setting the updated policy as the behavior policy: $\pi_{i+1} := \pi$. Otherwise, we reject and keep the behavior policy unchanged ($\pi_{i+1} := \pi_i$). In practice, we set $\delta = 0.05 \cdot |\hat{J}_{\mathrm{AM\text{-}Q}}(\pi_i)|$ for adaptive thresholding. This OPE-gated rule yields conservative, monotonic behavior-policy improvement on $\mathcal{D}$.

Shared and frozen encoders. To ensure stable representation learning and efficient computation, all components in our offline RL pipeline share the same fixed visual encoder $\varphi$ pre-trained during imitation learning. During offline RL, we keep $\varphi_{\mathrm{IL}}$ fixed and only update the task-specific heads of each module.

Online RL
The online stage uses on-policy components. We use an on-policy ratio for each diffusion step, $r_k^{\mathrm{on}}(\pi) = \frac{\pi(a^{\tau_{k-1}} \mid s_k)}{\pi_i(a^{\tau_{k-1}} \mid s_k)}$, and compute advantages using GAE: $A_t^{\mathrm{on}} = \mathrm{GAE}(\lambda, \gamma; r_t, V_\psi)$, sharing the same $A_t^{\mathrm{on}}$ across all K denoising steps that produce the environment action at time $t$. We minimize the total loss:
$$\mathcal{L}_{\mathrm{RL}}^{\mathrm{on}} = -J_i(\pi) + \lambda_V\, \mathbb{E}\big[(V_\psi(s_t) - \hat{V}_t)^2\big], \quad (21)$$
where $\hat{V}_t = \sum_{l=0}^{\infty} \gamma^l r_{t+l}$ is the discounted return and $\lambda_V$ weights the value-function loss.

Distillation to One-step Consistency Policy
High-frequency control is crucial for robotics applications. While our diffusion policy achieves strong performance, the K-step denoising process introduces latency that can limit real-time deployment. To address this, we jointly train a consistency model $c_w$ that learns to directly map noise to actions in a single step, distilling knowledge from the multi-step diffusion teacher $\pi_\theta$. During both offline and online RL training, we augment the policy optimization objective with the consistency distillation loss from Eq. (9):
$$\mathcal{L}_{\mathrm{total}} = \mathcal{L}_{\mathrm{RL}} + \lambda_{\mathrm{CD}} \cdot \mathcal{L}_{\mathrm{CD}}, \quad (22)$$
where $\mathcal{L}_{\mathrm{RL}}$ is either the offline objective (Eq. (19) with IQL-based advantages) or the online objective (Eq. (21) with GAE). The consistency loss $\mathcal{L}_{\mathrm{CD}}$ follows Eq. (9), with the teacher being our diffusion policy $\pi_\theta$, which performs K-step denoising conditioned on the observation. The stop-gradient operator ensures the teacher policy continues to improve through the RL objectives while simultaneously serving as a distillation target.
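The joint objective of Eq. (22) amounts to adding a regression term on the teacher's output with the gradient blocked. A minimal sketch, with the teacher's K-step chain abstracted behind a hypothetical teacher_ddim callable:

```python
import torch
import torch.nn.functional as F

def joint_loss(rl_loss: torch.Tensor,
               consistency_model, teacher_ddim,
               a_tau: torch.Tensor, tau: torch.Tensor, obs: torch.Tensor,
               lambda_cd: float = 1.0) -> torch.Tensor:
    """Eq. (22): RL objective plus the consistency-distillation term of
    Eq. (9). The teacher's chain tau -> 0 is treated as a fixed target."""
    with torch.no_grad():                        # sg[.]: stop-gradient
        target = teacher_ddim(a_tau, tau, obs)   # K-step denoised action
    student = consistency_model(a_tau, tau, obs) # one-step prediction
    return rl_loss + lambda_cd * F.mse_loss(student, target)
```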
High-frequency control is essential for practical robot deployment in industrial settings. First, faster control loops directly translate to improved task-completion efficiency: a robot operating at 20 Hz can execute the same trajectory in half the time compared to 10 Hz operation, significantly increasing throughput in factory automation, where cycle time directly impacts productivity. Second, many manipulation tasks inherently require high-frequency feedback for reliable execution. Dynamic tasks such as catching moving objects, maintaining contact during sliding motions, or recovering from unexpected disturbances demand sub-50 ms response times that multi-step diffusion cannot provide. Furthermore, tasks involving compliance control, force feedback, or human-robot collaboration often fail catastrophically when the control frequency drops below critical thresholds, as the system cannot react quickly enough to prevent excessive forces or maintain stable contact.

During inference, the consistency model generates actions in a single forward pass, $a^{\tau_0} = c_w(a^{\tau_K}, \tau_K \mid o)$, achieving a K× speedup (e.g., from 100 ms to 10 ms latency) while preserving the performance of the diffusion policy. This order-of-magnitude improvement enables deployment in real-world manufacturing scenarios where robots must maintain consistent cycle times, respond to conveyor-belt speeds, and safely operate alongside human workers, requirements that are infeasible with standard multi-step diffusion policies.

Overall training framework
While each component above can be used independently, we propose an iterative procedure that combines them for progressive improvement. Instead of applying offline RL once on fixed demonstrations, we alternate between training an IL policy on the current dataset, improving it via offline RL with conservative updates, collecting new data with the improved policy, and re-training IL on the expanded dataset, as summarized in Algo. 1. This creates a virtuous cycle where better policies generate better data, which in turn enables learning even better policies.

Algorithm 1: RL-100 training pipeline
1: Input: demonstrations $\mathcal{D}_0$, iterations $M$
2: Initialize: $\pi_0^{\mathrm{IL}} \leftarrow \mathrm{ImitationLearning}(\mathcal{D}_0)$
3: for iteration $m = 0$ to $M-1$ do
4:   // Offline RL improvement
5:   Train critics: $(Q_{\psi_m}, V_{\psi_m}) \leftarrow \mathrm{IQL}(\mathcal{D}_m)$
6:   Train transition model: $\mathcal{T}_{\theta_m}(s' \mid s, a)$
7:   Optimize:
8:   $\pi_m^{\mathrm{ddim}}, \pi_m^{\mathrm{cm}} \leftarrow \mathrm{OfflineRL}(\pi_m^{\mathrm{IL}}, Q_{\psi_m}, V_{\psi_m}, \mathcal{T}_{\theta_m})$
9:   // Data expansion
10:  Deploy: $\mathcal{D}_{\mathrm{new}} \leftarrow \mathrm{Rollout}(\pi_m^{\mathrm{ddim}}$ or $\pi_m^{\mathrm{cm}})$
11:  Merge: $\mathcal{D}_{m+1} \leftarrow \mathcal{D}_m \cup \mathcal{D}_{\mathrm{new}}$
12:  // IL re-training on expanded data
13:  $\pi_{m+1}^{\mathrm{IL}} \leftarrow \mathrm{ImitationLearning}(\mathcal{D}_{m+1})$
14: end for
15: // Final online fine-tuning
16: $\pi_{\mathrm{ddim}}^{\mathrm{final}}, \pi_{\mathrm{cm}}^{\mathrm{final}} \leftarrow \mathrm{OnlineRL}(\pi_{M-1}, V_{\psi_{M-1}})$
17: Output: $\pi_{\mathrm{ddim}}^{\mathrm{final}}, \pi_{\mathrm{cm}}^{\mathrm{final}}$

Why IL re-training matters. Re-training with IL on the expanded dataset (Algo. 1, line 13) is crucial for several reasons:
• Distribution shift: IL naturally adapts to the evolving data distribution as higher-quality trajectories are added.
• Stability: supervised learning is more stable than RL on mixed-quality data.
• Multimodality: IL preserves the diffusion policy's ability to model multiple solution modes.
• Distillation: IL effectively distills both human demonstrations and RL improvements into a unified policy.

Final online fine-tuning. After the iterative offline procedure converges, we apply online RL for final performance optimization. This stage benefits from: (1) a strong initialization from iterative offline training, (2) pretrained value functions that accelerate learning, and (3) a diverse dataset for replay and regularization.

Variance clipping for stable exploration. To ensure stable learning during RL fine-tuning, we introduce variance clipping in the stochastic DDIM sampling process. Specifically, we constrain the standard deviation at each denoising step:
$$\tilde{\sigma}_k = \mathrm{clip}(\sigma_k,\, \sigma_{\min},\, \sigma_{\max}), \quad (23)$$
where $\sigma_k$ is the original DDIM variance parameter from Eq. (5b) and $[\sigma_{\min}, \sigma_{\max}]$ defines the permissible range (a minimal sketch follows).
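In code, Eq. (23) is a one-line clamp applied wherever the sub-policy standard deviation is formed; the sketch below (names are ours, default bounds taken from the text) illustrates this:

```python
import torch

def clipped_sigma(sigma_k: torch.Tensor,
                  sigma_min: float = 0.01,
                  sigma_max: float = 0.8) -> torch.Tensor:
    """Eq. (23): bound the per-step DDIM standard deviation so the behavior
    policy neither collapses to a Dirac (no exploration, undefined log-prob)
    nor over-explores with out-of-distribution actions."""
    return torch.clamp(sigma_k, sigma_min, sigma_max)
```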
This modification effectively bounds the stochasticity of the behavior policy $\pi_\theta(a^{\tau_{k-1}} \mid a^{\tau_k}, \tau_k) = \mathcal{N}\big(\mu_\theta(a^{\tau_k}, \tau_k),\ \tilde{\sigma}_k^2 I\big)$, preventing both:
• Excessive exploration when $\sigma_k$ is too large, which can lead to out-of-distribution actions that destabilize training or cause safety violations in physical systems.
• Premature convergence when $\sigma_k$ approaches zero, which eliminates exploration and prevents the policy from discovering better modes.
In practice, we set $\sigma_{\min} = 0.01$ to maintain minimal exploration even in late denoising steps, and $\sigma_{\max} = 0.8$ to prevent destructive exploration in early steps. This bounded variance ensures that the importance ratio $r_k(\pi) = \frac{\pi(a^{\tau_{k-1}} \mid s_k)}{\pi_i(a^{\tau_{k-1}} \mid s_k)}$ remains well-behaved during PPO updates, as extreme variance differences between the current and behavior policies are avoided. We will empirically demonstrate that this simple modification is crucial for achieving stable fine-tuning performance.

Related Work

Reinforcement Learning with Generative Diffusion Models
The integration of generative diffusion models with RL represents a paradigm shift in policy representation and optimization. Building upon foundational work in diffusion models (Ho et al. 2020b; Song et al. 2020) and flow matching (Lipman et al. 2023), recent advances have demonstrated the power of these generative frameworks in capturing the complex, multimodal action distributions inherent in RL problems. Diffusion Q-Learning (DQL) (Wang et al. 2023b) pioneered this integration by replacing traditional Gaussian policies with conditional diffusion models in offline reinforcement learning, addressing fundamental limitations of parametric policies in modeling multimodal behaviors. This approach has evolved through multiple directions: weighted-regression methods (Kang et al. 2023; Lu et al. 2023; Ding et al. 2024a) train diffusion policies through importance-weighted objectives to maximize learned Q-functions; reparameterization-gradient approaches (Psenka et al. 2024; He et al. 2023; Ding et al. 2024b) utilize gradient-based optimization despite temporal backpropagation challenges; and sampling-based methods (Chen et al. 2023; Hansen-Estruch et al. 2024) provide effective but computationally expensive solutions. More recently, consistency-based extensions (Li et al. 2024) have further generalized diffusion and consistency policies to visual RL.

Figure 3. Illustrations from rollouts on seven real-world tasks. Each row shows one task, and columns are time-ordered frames (left→right) subsampled from a single trajectory. From top to bottom: (a) Dynamic Push-T, (b) Agile Bowling, (c) Pouring, (d) Dynamic Unscrewing, (e) Soft-towel Folding, (f) Orange Juicing - Placing, (g) Orange Juicing - Removal. The suite spans fluids or particles, tool use, deformable-object manipulation, dynamic non-prehensile manipulation, and precise insertion, highlighting the diversity and dynamics of our benchmark.

Recent works have also explored using RL to directly optimize diffusion models beyond traditional likelihood-based training. Black et al. (2023) demonstrate fine-tuning diffusion models with RL to maximize arbitrary reward functions, while Fan et al. (2024) apply similar techniques specifically to text-to-image generation using human feedback. Ren et al. (2024) introduce policy-gradient methods tailored for diffusion-based policies, enabling effective online optimization while maintaining the expressiveness benefits of diffusion models. To address computational bottlenecks inherent in diffusion models, Park et al.
(2025) present Flow Q-Learning (FQL), which leverages flow-matching policies to model complex action distributions while avoiding recursive backpropagation through denoising processes. By training an independent single-step policy that matches the flow model's output, FQL achieves computational efficiency without sacrificing expressiveness, demonstrating state-of-the-art performance on offline RL benchmarks with significant computational advantages, particularly in offline-to-online fine-tuning settings. Similarly, One-Step Flow Q-Learning (OFQL) (Nguyen and Yoo 2025) extends this framework for even faster single-step action generation.

Generative Diffusion Models in Robotics
The application of diffusion models to robotics has yielded transformative advances in visuomotor control, trajectory planning, and real-world deployment. Diffusion Policy (Chi et al. 2023) demonstrated breakthrough performance in visuomotor control by modeling action distributions as conditional diffusion processes, excelling at handling multimodal behaviors through visual conditioning and receding-horizon control. This approach has inspired extensions including DiffClone (Sabatelli et al. 2024) and Diffusion Model-Augmented Behavioral Cloning (Wang et al. 2023a), which further improve upon traditional behavioral cloning by leveraging the expressiveness of diffusion models.

Figure 4. Point-cloud trajectories for seven tasks (ordered top-to-bottom): (A) Push-T, (B) Bowling, (C) Pouring, (D) Unscrewing, (E) Folding, (F) Orange Juicing - Placing, (G) Orange Juicing - Removal.

The integration with pretrained visual representations, including R3M (Nair et al. 2022), VC-1 (Majumdar et al. 2023), and MoCo (He et al. 2020), has proven particularly effective for generalization across visual variations. Recent work on 3D Diffusion Policy (Ze et al. 2024) extends these capabilities to point-cloud observations, while FlowPolicy (Chen et al. 2024) introduces consistency flow matching for robust 3D manipulation tasks. Lu et al. (2025) propose a triply-hierarchical diffusion policy that decomposes complex visuomotor tasks into multiple levels of abstraction, improving both learning efficiency and generalization. Beyond direct policy learning, diffusion models have revolutionized trajectory planning in robotics. Janner et al. (2022) and Ajay et al. (2023) reformulate planning as conditional generation, producing complete state-action trajectories conditioned on rewards and constraints and excelling in long-horizon tasks requiring complex coordination. Shang et al. (2024) introduce an alternative paradigm that generates policy parameters rather than trajectories in latent spaces, offering computational advantages. Real-world deployment challenges have been addressed through methods like One-Step Diffusion Policy (Wang et al. 2024), which uses distillation to achieve real-time performance suitable for robotic control. Successful applications now span deformable-object manipulation with PinchBot (Liu et al. 2024), multi-task learning, and navigation scenarios. The availability of large-scale datasets like BridgeData V2 (Walke et al. 2023) and foundation models such as π0 (Black et al. 2024b) enables broader generalization across robotic platforms and tasks, accelerating the transition from laboratory demonstrations to practical robotic systems.
Offline-to-Online Reinforcement Learning
The transition from offline pretraining to online fine-tuning presents unique challenges in managing distribution shift and preventing catastrophic forgetting while enabling continuous improvement. Conservative Q-Learning (CQL) (Kumar et al. 2020) established the foundational framework for safe offline RL through pessimistic value estimation, preventing overestimation for out-of-distribution actions. Advantage-Weighted Regression (AWR) (Peng et al. 2019) provides a scalable framework that has influenced numerous subsequent works, though its restriction to Gaussian policies limits expressiveness for complex behavioral patterns. Calibrated Q-Learning (Cal-QL) (Nakamoto et al. 2023) addresses initialization challenges by ensuring conservative Q-values are appropriately scaled for effective online fine-tuning, providing crucial insights for successful offline-to-online transfer. Uni-O4 (Lei et al. 2024) directly applies the PPO (Schulman et al. 2017) objective to unify offline and online learning, eliminating the need for extra regularization. Recent advances focus on efficiently leveraging both offline and online data through hybrid strategies. RLPD (Ball et al. 2023) achieves sample-efficient online learning by mixing offline and online experiences, demonstrating that careful data-mixture strategies can accelerate learning significantly. Nair et al. (2020) established fundamental frameworks for combining offline pretraining with online fine-tuning, showing that hybrid approaches achieve superior sample efficiency compared to pure online or offline methods. A paradigm shift in offline-to-online adaptation is represented by Wagenmaker et al. (2025), who introduce DSRL (Diffusion Steering with Reinforcement Learning), which operates RL entirely in the latent noise space of pretrained diffusion policies rather than modifying base policy weights.

Real-world RL
Real-world RL trains directly on real robot dynamics, optimizing deployment metrics (reliability, speed, safety) and yielding robust performance that continually adapts to disturbances, without sim-to-real gaps. Critical requirements include sample efficiency, stability under high-dimensional perception, safe continuous operation, and automated reward/reset mechanisms. While early work demonstrated end-to-end visuomotor learning (Levine et al. 2016) and large-scale grasping with off-policy methods (Kalashnikov et al. 2018), subsequent advances established key algorithmic foundations: off-policy actor-critic methods (SAC, TD3) for data efficiency (Haarnoja et al. 2018; Fujimoto et al. 2018), model-based approaches for sample acceleration (Chua et al. 2018; Janner et al. 2019), reset-free learning for autonomous operation (Eysenbach et al. 2018; Gupta et al. 2021), and learned reward specifications from visual classifiers or human feedback (Singh et al. 2019; Christiano et al. 2017). Despite these advances, most systems required extensive engineering, task-specific tuning, or long training times to achieve reliable performance. These scattered advances converge in SERL (Luo et al. 2024b), a comprehensive framework that integrates high update-to-data-ratio off-policy learning, automated resets, and visual reward specification to solve several manipulation tasks. However, SERL relies solely on demonstrations and struggles with tasks requiring precision or recovery from failures. HIL-SERL (Luo et al.
2024c) addresses these limitations by incorporating real-time human corrections during training, enabling the policy to learn from mistakes and achieve perfect success rates across diverse tasks, including dual-arm coordination and dynamic manipulation. While SERL and HIL-SERL report impressive on-robot learning efficiency and reliability on well-scoped tabletop skills, their evaluations typically employ action-space shaping (e.g., limiting wrist rotations and encouraging near-planar end-effector motion) and focus on short-horizon regimes with relatively low-dimensional control (Luo et al. 2024b,c). Such constraints are pragmatic for safety and sample efficiency, but they reduce policy expressivity and can cap performance on orientation-critical, contact-rich, or compositionally complex tasks. In everyday home and factory scenarios, many skills inherently require full SE(3) control and substantial reorientation, including: deformable manipulation with twist and regrasp (e.g., towel folding), insertion/ejection in confined cavities with large tilt changes (e.g., orange juicing), fluid and granular control that hinges on container tilt (e.g., controlled pouring), dynamic release and trajectory shaping (e.g., agile bowling), cable routing or cloth placement with out-of-plane rotations, and bimanual reorientation.* In contrast, our system retains full 6-DoF control without hard rotation constraints and targets these under-explored regimes by (i) using a diffusion/consistency visuomotor policy to capture diverse human strategies, (ii) unifying offline-to-online improvement with an OPE-gated PPO-style objective for nearly monotonic improvement, and (iii) enabling high-frequency control via one-step consistency distillation, across dual-arm, deformable, and dynamic tasks with larger cross-object generalization.

* We do not claim these systems cannot be extended to such settings; rather, the reported experiments emphasize rapid learning on well-scoped tasks, leaving broader generalization, long-horizon composition, and orientation-sensitive control less explored.

Real-world Experiments
In this section, we detail the experimental setup and report results on reliability (success rate), efficiency (time-to-completion), and robustness (generalization across objects and initial conditions). We compare policies trained with RL-100 to human teleoperators and strong baselines. Across tasks, RL-100 achieves higher precision and greater efficiency than human teleoperation. We then present ablations to analyze the contribution of each component of RL-100. Collectively, these results demonstrate the practical viability of RL-100 for real-world deployment.

Overview
We evaluate RL-100 on seven real-world manipulation tasks that jointly cover dynamic rigid-body control, deformable-object handling, and precision assembly (Tab. 1, Fig. 1), comprehensively evaluating the versatility of our framework. The suite spans single- and dual-arm embodiments (UR5, Franka with LeapHand, xArm-Franka) and two control modes (single-step actions vs. chunked action sequences). The trajectory of each task is visualized in images and point clouds in Fig. 3 and Fig. 4, respectively. Key specifications of the observation and action spaces for all tasks are summarized in Tab. 2. We randomize initial object placements for each trial to encourage policy generalization, with specific ranges for each task depicted in Fig. 5. Two types of reward structure are used for our tasks.
For the five tasks other than Dynamic Push-T, we use a sparse reward function: the agent receives a reward of +1 upon successful task completion, labeled by a human supervisor pressing a keyboard, and 0 at all other timesteps. The Dynamic Push-T task, which requires continuous, high-precision control, uses a dense shaped reward. The total reward at each timestep $t$ is $r_t = r_{\mathrm{pose}} + r_{\mathrm{static}} + r_{\mathrm{smooth}}$, where $r_{\mathrm{pose}} = \exp(-3e) - 1$ penalizes the SE(3) discrepancy $e$ between the T-block and the desired pose; $r_{\mathrm{static}} = -1$ if the movement of the T-block within one timestep is below a given threshold, and 0 otherwise; and $r_{\mathrm{smooth}} = -5\|a_t - a_{t-1}\|_2^2$ penalizes jerky actions. More details about our setup (calibration, point-cloud processing, low-level control) are provided in the supplementary materials.

Across tasks, RL-100 achieves higher success rates and shorter time-to-completion than baselines and human operators, with especially large gains in settings that require fast online corrections (dynamic contacts, narrow clearances) or stable dexterous grasping under pose variability. A comprehensive robustness protocol evaluates either zero-shot or few-shot transfer across challenging variations (e.g., changed objects, visual distractors, external perturbations). Results show consistent performance retention under these shifts. We defer task-specific objectives and success criteria to the following section, and present aggregate reliability/robustness/efficiency results afterwards.

Description of Tasks
Following the overview, we describe the objective, the challenges, and any evaluation protocols beyond the standard ones for each task individually.

Dynamic Push-T. The Dynamic Push-T task requires the robot arm, equipped with a 3D-printed stick-like end-effector, to push a T-shaped block from various initial locations into a wooden slot that is only slightly larger (3 mm on each side) than the block. The T-block's initial position and orientation are fully randomized, uniformly distributed within a 55 cm × 60 cm reachable workspace of the arm. The robot's initial position and the target slot's position are fixed. The task is considered successful only when the T-block is fully and correctly inserted into the slot. As a 3D extension of the 2D Push-T task, this task inherits the challenge of requiring high-frequency and precise dynamic adjustments but introduces additional complexities. For instance, due to the slot's geometry, the friction coefficient at the slot's edges varies when the T-block is partially suspended, leading to unpredictable behaviors such as significant rotations of the block. Additionally, the robot must avoid pushing the T-block into the slot in an incorrect orientation, which could lead to failure and end-effector fracture. The policy must therefore exhibit robust control to handle these dynamic and frictional variations while ensuring precise alignment with the slot. To evaluate the generalization capabilities of the framework, we further perform the task under multiple variations: (i) different initial positions and orientations of the T-block, (ii) a wooden surface with a desktop sticker to reduce friction, (iii) the presence of additional objects on the workspace to introduce visual input distractions, and (iv) external disturbances applied to the T-block during pushing.
These variations challenge the policy to dynamically adjust with high precision, ignore environmental distractions, and avoid incorrect paths, demonstrating its robustness in maintaining high-frequency dynamic control while accurately completing the task.

Agile Bowling. The Agile Bowling task requires the robot arm, equipped with a 3D-printed semi-circular end-effector, to push a curling stone to knock down six bowling pins positioned 60 cm away. The robot arm starts from a fixed initial position, while the curling stone and bowling pins are uniformly distributed along an 18 cm starting line and a 20 cm target line, respectively. The task is considered successful only when more than five pins are knocked down. This task presents several challenges due to the highly sensitive dynamics of the setup. The curling surface is extremely smooth, and the end-effector is 4 mm larger in diameter than the curling stone, meaning even minor variations in the robot's movements can significantly alter the stone's trajectory. The robot must precisely initiate the stone's motion from its starting position, align it toward the bowling pins, and execute a controlled push while making fine adjustments to account for potential trajectory deviations. Furthermore, knocking down almost all pins requires applying sufficient force to accurately strike the first pin, as slight misalignments can result in some pins remaining upright, leading to failure. The small size of both the curling stone and the bowling pins provides limited visual input, adding to the challenge of achieving precise control. To evaluate the generalization capabilities of the framework, we test the task under varying initial positions of the curling stone and bowling pins along their respective lines. We further test (1) zero-shot transferability on a coarser trail surface and (2) adaptation by further fine-tuning the policy on an inverted pin placement. These variations challenge the policy to adapt its pushing strategy and trajectory corrections to different configurations while maintaining accuracy.

Table 2. State and action components of each task. We abbreviate dimension as 'dim.' and proprioception as 'prop.'.
Task | Point cloud dim. | Prop. dim. (note) | Action dim. (note)
Dynamic Push-T | (512, 3) | 6 (UR5 joints) | 2 (normalized Δ(x, y) of end-effector)
Agile Bowling | (512, 3) | 6 (same as above) | 2 (same as above)
Pouring | (512, 3) | 23 = 7 + 16 (Franka joints, Leap hand joints) | 22 = 6 + 16 (target arm pose, target hand joints)
Dynamic Unscrewing | (1024, 3) | 23 (same as above) | 23 = 22 + 1 (same as above; 1: enable arm motion)
Soft-towel Folding | (1024, 3) | 16 = 2×(7+1) (dual arm joints + grippers) | 16 = 2×(7+1) (dual arms: target pose (pos + quat), target gripper)
Orange Juicing - Placing | (1024, 3) | 7 = 6 + 1 (xArm joints, gripper state) | 8 = 6 + 1 + 1 (target arm pose, target gripper, task done)
Orange Juicing - Removal | (1024, 3) | 7 (same as above) | 8 (same as above)

Figure 5. The object initialization workspaces for the seven real-world tasks. At the beginning of each episode, objects of interest are randomly positioned within the areas denoted by the red boundaries and annotations.

Pouring. This task requires the robot to grasp a cup containing mixed nuts and snacks of varying sizes and textures with a LeapHand dexterous hand, then precisely pour the contents into a target plate through controlled wrist rotation.
The cup's initial position is randomized within 39 cm × 16 cm on the table surface, while the plate position varies within 28 cm × 20 cm. The robot begins from a fixed initial pose, grasps the cup, rotates the wrist to invert the container, and pours the contents into the plate. The task is considered successful if the robot maintains a stable cup grasp and accurate alignment during pouring, with no spillage attributable to misalignment; incidental bounce-outs from unavoidable rebounds are disregarded. This task presents multiple challenges that demand sophisticated sensorimotor coordination. First, the cup's smooth surface makes it inherently difficult to grasp securely, requiring the policy to learn appropriate finger configurations that balance grip stability with the ability to maintain a suitable pouring orientation. Second, the substantial randomization in both cup and plate positions necessitates robust spatial generalization: the policy must adapt its grasping strategy and pouring trajectory to varying relative positions while maintaining precision. Third, the mixed contents with different physical properties (varying nut sizes and weights) create unpredictable dynamics during pouring, requiring adaptive control to ensure complete and accurate transfer without overshooting. To evaluate the generalization capabilities of our framework, we extend this task to three additional variations with distinct physical properties: (i) pouring small soft candies that exhibit different flow characteristics due to their lighter weight and higher elasticity, (ii) pouring water from a cup, which introduces liquid dynamics requiring smooth, continuous motion control to prevent splashing, and (iii) pouring water using a long-spout teapot, which demands adaptation to a different container geometry and requires precise control of the extended spout for accurate targeting. These variations test the policy's ability to transfer learned manipulation skills across different materials (granular solids to liquids) and container morphologies while maintaining the core competency of controlled pouring.

Dynamic Unscrewing. This task simulates industrial assembly operations by requiring the robot to unscrew a nut from a vertically mounted bolt with a LeapHand dexterous hand. The bolt position is randomized within a circle of 12.5 cm diameter on the work surface, demanding robust adaptation to varying target locations. The task involves a multi-phase manipulation sequence: the robot first approaches and uses its thumb to engage the nut with rotational motions, gradually lifting it along the threaded shaft. As the nut ascends, the robot must continuously adjust its hand pose to maintain effective contact and torque transmission. Once the nut clears the threads, the robot should precisely grasp it with a pinch grip between the index finger and thumb, then transport and place it on the plate. Success is evaluated based on three criteria: (i) complete unscrewing of the nut, (ii) stable grasping without dropping, and (iii) accurate placement at the target position.

Figure 6. Comparison between images of (A) an unremovable nut and (B) a removable one, as well as their corresponding point-cloud observations (C-D). Robot policies must be accurate enough to recognize the removable state by the tiny tilt shown in (B) and (D) to guarantee successful grasps.

This task presents several interconnected challenges that test the limits of dexterous manipulation.
First, the initial approach requires precise positioning to establish an effective contact configuration: the thumb must reach a pose that enables sufficient rotational leverage while maintaining clearance for continuous motion. Second, the dynamic nature of the unscrewing process demands adaptive control: as the nut rises along the threads, the optimal contact points and force directions continuously change, requiring the policy to learn time-varying manipulation strategies that maintain consistent rotational motion despite the evolving kinematic constraints. Third, the transition from unscrewing to grasping requires accurate state estimation, as shown in Fig. 6: the policy must recognize when the nut has cleared the threads and swiftly reconfigure from a rotating contact to a stable pinch grip. Finally, precise grasping of a small object tests fine motor control and hand-eye coordination, as even minor positioning errors can result in unstable grasps or dropped nuts.

Soft-towel Folding. This task requires dual robot arms to collaboratively fold a randomly crumpled towel, uniformly distributed in the 42 cm × 40 cm central region of a table, into a neatly folded state. The task is considered successful only when the arms lift and flatten the towel, then perform two precise folds, ensuring no wrinkles or misalignments occur. This task is challenging due to the need for precise dual-arm coordination and the handling of a deformable object with significant dynamic variability. The towel's initial position and crumpled configuration vary greatly, leading to substantial differences in point-cloud observations. The task is further complicated by potential issues such as uneven folding, unflipped towel corners, or failed grasps, requiring the policy to perform robust failure recovery. The arms must dynamically adjust their motions to flatten and fold the towel accurately while responding to unexpected deformations or missteps. To evaluate the generalization capabilities of the framework, we test the policy under conditions with external human-induced disturbances during the folding process, assessing its ability to recover from failures and adapt to disruptions. This task challenges the model's capacity for dual-arm coordination, generalization across diverse initial towel configurations, manipulation of deformable objects, and effective error correction.

Orange Juicing. The orange juicing task consists of three subtasks: placing a halved orange on the juicer, pressing down the lever to extract juice, and removing the discarded orange pulp. The robot's end-effector is replaced with two 3D-printed sleeves that fix two rubber grippers, enabling robust grasping of irregular fruit surfaces. For each subtask, the initial and final poses of the robot arm are fixed. Note that these poses are defined individually for each subtask and therefore differ across the three subtasks.

Orange placing. In this subtask, a randomly sized half-orange is placed at a random position and orientation within a 29 cm × 20 cm area of a tray. The robot first grasps the orange with the gripper vertically aligned to the surface of the tray, then rotates its wrist joint by 180° to flip the orange, and finally places it onto the filter section of a lever-style juicer before retracting the gripper. The task is considered successful if the orange is properly placed on top of the filter. This step is challenging due to the high variability in orange size, inclination, height, and placement, as shown in Fig. 7A and 7B.
The policy must therefore exhibit strong robustness to spatial randomness to achieve reliable placement.

Lever pressing. The pressing process follows a relatively fixed trajectory. To ensure consistent execution, we use kinesthetic teaching to record the downward and upward motions of the lever in advance. During deployment, the robot replays this trajectory to lower the hammer-like press, squeeze the juice out of the orange, and then reset the lever. The task is considered successful if the lever is fully pressed down and restored.

Discard removal. After pressing, the flattened orange half is randomly embedded within the 11 cm diameter juicer filter. The robot begins with the gripper closed and aligned parallel to the filter surface, pushing against the flattened fruit to dislodge it from its tightly embedded position. The wrist then rotates by 90°, the gripper opens to grasp the orange, and the discarded pulp is placed into a disposal container on the left side of the juicer. The task is considered successful if the flattened orange is removed from the filter and dropped into the container. This subtask is particularly challenging because it involves force-sensitive manipulation of a deformable object in a confined space. Fig. 7C indicates that sufficient force is needed to move the orange, but excessive force can cause deformation and hinder grasping. Furthermore, juice on the surface makes the orange slippery, and inappropriate grasping poses may result in failed pickups.

Figure 7. Challenging parts of the orange juicing task: (A) various appearances of halves and (B) different sizes of oranges, requiring spatial robustness w.r.t. orange appearances and positions; (C) proper force is needed for removal, i.e., force-sensitive manipulation of deformable orange discards.

Table 3. Success rate (%) across tasks. RL-100 (ours) groups an iterative offline fine-tuning stage followed by online RL with either a DDIM or a one-step CM policy. Averages are unweighted across tasks.
Task | DP-2D | DP3 | Iterative Offline RL | Online RL (DDIM) | Online RL (CM)
Dynamic Push-T | 40 (20/50) | 64 (32/50) | 90 (45/50) | 100 (50/50) | 100 (50/50)
Agile Bowling | 14 (7/50) | 80 (40/50) | 88 (44/50) | 100 (50/50) | 100 (50/50)
Pouring | 42 (21/50) | 48 (24/50) | 92 (46/50) | 100 (50/50) | 100 (50/50)
Soft-towel Folding | 46 (23/50) | 68 (34/50) | 94 (47/50) | 100 (50/50) | 100 (250/250)
Dynamic Unscrewing | 82 (41/50) | 70 (35/50) | 94 (47/50) | 100 (50/50) | 100 (50/50)
Orange Juicing - Placing | 78 (39/50) | 88 (44/50) | 94 (47/50) | 100 (100/100) | 100 (50/50)
Orange Juicing - Removal | 48 (24/50) | 76 (38/50) | 86 (43/50) | 100 (50/50) | -
Mean (unweighted) | 50.0 | 70.6 | 91.1 | 100.0 | 100.0†
DP-2D and DP3 are imitation baselines; the remaining columns are RL-100 (ours). † Mean over the six tasks evaluated with CM.

Results on Real Robots
Main results. Tab. 3 traces performance from imitation-only policies to our RL-100 method. The imitation baselines produce only moderate results: the DP-2D baseline attains an average success rate of 50.0%, while DP3 improves to 70.6%. Both baselines are especially challenged by tasks with dynamic or deformable objects, as shown by Agile Bowling with success rates of 14% and 80%, respectively, and by Pouring with success rates of 42% and 48%, respectively. The iterative offline RL phase of RL-100 yields a large improvement and raises average success to 91.1%.
This stage produces the largest absolute gains on the hardest tasks, namely an increase of 8 points over DP3 on Agile Bowling (88% versus 80%), an increase of 44 points on Pouring (92% versus 48%), and an increase of 24 points on Dynamic Unscrewing (94% versus 70%). These results indicate that conservative model-guided policy refinement reliably improves performance across diverse manipulation regimes while avoiding the instability associated with naive online updates. The full RL-100 pipeline, consisting of iterative offline RL followed by brief on-policy online refinement, attains perfect success across all evaluated trials. With the DDIM policy, we observe 400 successful episodes out of 400 across the seven tasks. The consistency-model variant, evaluated on six tasks, achieves the same perfect success rate with 500 successful episodes out of 500 and enables single-step high-frequency control, including 250 successful trials out of 250 on the challenging dual-arm Soft-towel Folding task. Overall, the results highlight the contribution of each component: OPE-gated iterative learning delivers substantial and reliable improvements under a conservative regime, and short on-policy fine-tuning together with consistency distillation removes the remaining failure modes and produces computationally efficient, deployment-ready policies.

Zero-shot adaptation to new dynamics. To assess the robustness of our learned policies, we evaluate zero-shot transfer to environmental variations unseen during training, listed in Tab. 4 and Fig. 8. On Dynamic Push-T, as shown in Fig. 8B-C, the policy maintains perfect 100% success when the surface friction is substantially altered, and achieves 80% success even when visual and physical clutter from interference objects of various shapes is introduced into the workspace. Similarly, Agile Bowling (Fig. 8D) achieves 100% success on modified surface properties that change the ball's rolling dynamics. For Pouring (Fig. 8A), replacing granular nuts with fluid water, a significant shift in material properties and flow dynamics, still yields 90% success, demonstrating that the policy has learned sufficiently stable and robust manipulation strategies rather than overfitting to specific physical parameters. Across all four variations, our method achieves 92.5% average success without any retraining or fine-tuning, indicating strong generalization to distribution shifts in environmental dynamics.

Figure 8. Zero-shot adaptation to novel dynamics. Our method generalizes to unseen physical variations without retraining. (A) Pouring with different granular/fluid materials. (B) Dynamic Push-T on altered surface friction. (C) Dynamic Push-T with visual and physical interference objects. (D) Agile Bowling on modified surface properties with a target-based scoring system. Red arrows highlight the changed environmental conditions.

Table 4. Zero-shot generalization success rates (%) on novel dynamics and environmental variations. All policies are trained on nominal conditions and evaluated without any retraining or fine-tuning.
Task (Variation) | Success Rate (%)
Pouring (Water) | 90
Push-T (Changed surface) | 100
Push-T (Interference objects) | 80
Bowling (Changed surface) | 100
Average | 92.5

Few-shot adaptation. Beyond zero-shot generalization, we evaluate the sample efficiency of adapting to more substantial task modifications through brief fine-tuning, as shown in Tab. 5 and Fig. 9.
After only 1-3 hours of additional training on each variation, our policies achieve high success rates across diverse changes. The Soft-towel Folding policy adapts perfectly to a different towel material with altered deformable properties, achieving a 100% success rate as shown in Fig. 9A and demonstrating effective transfer despite significant changes in cloth dynamics. Similarly, Agile Bowling achieves 100% success on an inverted pin arrangement (Fig. 9C), requiring the policy to adapt its trajectory planning and aiming strategy. The Pouring task with a modified container geometry (Fig. 9B) achieves 60% success, showing that while geometric changes require more adaptation, the policy can still leverage its learned manipulation primitives.

Figure 9. Few-shot adaptation to novel task variations. Our method rapidly adapts to unseen scenarios with minimal additional training data. (A) Soft-towel Folding with a different towel material exhibiting altered deformable properties. (B) Pouring with a modified container shape and size. (C) Agile Bowling with an inverted pin arrangement requiring adapted trajectory planning.

Table 5. Few-shot adaptation success rates (%) after brief fine-tuning (1-3 hours) on novel task variations.
Task (Variation) | Success Rate (%)
Pouring (New container) | 60
Folding (Changed object) | 100
Bowling (Inverted pins) | 100
Average | 86.7

Averaging 86.7% success across these variations with minimal retraining demonstrates the sample efficiency and adaptability of our approach, highlighting that the learned representations and control strategies effectively transfer to modified task configurations.

Robustness to physical disturbances. We evaluate the resilience of RL-100 policies to real-world physical perturbations by introducing human-applied disturbances during task execution (Tab. 6, Fig. 10). For Soft-towel Folding, we apply external forces at two critical stages: during initial grasping (Stage 1, Fig. 10A) and during pre-folding manipulation (Stage 2, Fig. 10B), achieving 90% success at both stages despite the perturbations. On the Dynamic Unscrewing task (Fig. 10C), we apply sustained counter-rotational interference for up to 4 seconds during both the unscrewing motion and at the critical zero-boundary point where visual judgment is required to grasp the nut, a particularly challenging moment as described in our task specifications. Remarkably, the policy achieves 100% success, demonstrating the ability to recover from prolonged disturbances even during precision-critical manipulation phases. Similarly, Dynamic Push-T (Fig. 10D) achieves 100% success despite multiple dragging perturbations applied throughout the pushing motion. Across all tested scenarios, our policies maintain an average 95.0% success rate, validating that the closed-loop control learned through RL-100 enables robust recovery and task completion in the presence of external perturbations typical of unstructured real-world environments.

Figure 10. Robust recovery from real-world disturbances. RL-100 policies demonstrate resilience to external perturbations across diverse manipulation tasks. (A-B) Soft-towel Folding recovers from external pulling and lateral dragging at different manipulation stages. (C) Dynamic Unscrewing withstands counter-rotational interference from a human hand. (D) Dynamic Push-T handles dragging perturbations to the target object. Orange arrows indicate disturbance directions and timing.
The closed-loop policies successfully recover and complete all tasks, demonstrating robustness critical for unstructured real-world environments.

Table 6. Task completion success rates (%) after recovering from physical disturbances.

  Task & Disturbance Stage         Success Rate (%)
  Folding (Stage 1: Grasping)      90
  Folding (Stage 2: Pre-folding)   90
  Unscrewing                       100
  Push-T (Whole stage)             100
  Average                          95.0

Execution Efficiency
We evaluate the execution efficiency of RL-100's policies from four complementary perspectives, with results presented in Fig. 11.

Episode length in successful trajectories. We first compare the number of environment steps needed to complete a task successfully (Fig. 11A). RL-fine-tuned policies (RL-100) consistently outperform imitation baselines across all three tasks. This advantage stems from reward-driven policy optimization, which steers the agent toward more efficient, and potentially optimal, trajectories for earlier task completion. Specifically, RL-100 achieves substantial step reductions: in Soft-towel Folding, 390 steps (DP-2D) → 312 steps (RL-100 CM; 1.25× fewer), and in Dynamic Unscrewing, 361 steps (DP-2D) → 280 steps (RL-100 DDIM; 1.29× fewer). These gains reflect the value-driven objective's tendency to produce more decisive state transitions.

Wall-clock time per episode. Beyond step count, wall-clock time is the most intuitive metric for real-world deployment, especially in dynamic tasks requiring real-time action prediction. Fig. 11B reveals that inference latency depends on two factors: (i) input modality: point clouds (DP3, 1024 points) incur lower computational overhead than 128×128 RGB images (DP-2D), yielding a 1.02-1.31× speedup despite similar step counts; (ii) inference method: RL-100 (CM)'s one-step generation per action reduces control-loop latency compared to the 10-step DDIM sampler, providing an additional 1.05-1.16× wall-clock speedup over RL-100 (DDIM). For example, in Orange Juicing-Placing, RL-100 (CM) completes episodes in 9.2 s versus 10.2 s for RL-100 (DDIM) (1.11× faster) and 10.6 s for DP-2D (1.15× faster).

Figure 11. Comparison of RL-100 against algorithmic baselines and human operators in terms of task completion efficiency. (A) Episode length (number of environment steps) across three tasks; lower is better. (B) Wall-clock time (s) to finish one episode; lower is better. (C) Dynamic Push-T throughput (number of successful episodes per unit time); higher is better. (D) Episode length in Dynamic Push-T; lower is better. Bars show absolute values; annotations indicate speedup factors relative to the baseline (DP-2D for A, B, and D; Human Beginner for C). We abbreviate 'episode' as 'epi.'.

RL-100 vs. human operators. A critical question for practical deployment is whether robotic policies can match or exceed human efficiency. To address this, we compare RL-100 against human teleoperators on Dynamic Push-T (Fig. 11C). RL-100 (DDIM) achieves 20 successful episodes per unit time, surpassing the human expert who collected the demonstrations (17 episodes; 1.18× improvement) and substantially exceeding human beginners (13 episodes; 1.54× improvement). This demonstrates that our learned policy not only matches expert-level task completion quality but also executes more efficiently in repetitive scenarios.

Episode length including failures. The previous comparisons focused solely on successful trajectories.
However, a comprehensive efficiency analysis must account for all attempts, including failures. Fig. 11D shows that when considering all trials on Dynamic Push-T, RL-100 maintains significantly shorter average horizons: 822 steps (DP-2D) → 658 steps (DP3) → 382 steps (RL-100 Offline) → 322 steps (RL-100 DDIM), representing up to 2.55× fewer steps than DP-2D. This indicates that RL-100 produces policies that both succeed more reliably and terminate unpromising attempts quickly, avoiding wasted execution time.

Summary. Overall, the efficiency gains of RL-100 arise from three synergistic sources: (1) efficient encoding (point clouds over RGB images), (2) reward-driven policy optimization (shorter, more decisive rollouts via the discount factor γ < 1), and (3) low-latency action generation (CM's one-step versus DDIM's 10-step inference). The monotonic improvement, DP-2D → DP3 → RL-100 (DDIM) → RL-100 (CM), validates each design choice's contribution. Crucially, RL-100's execution efficiency surpasses both algorithmic baselines and human operators, demonstrating readiness for practical deployment.

Training efficiency
Fig. 12 shows the online RL training progression for Agile Bowling. The policy achieves consistent 100% success after approximately 120 episodes of on-policy rollouts, with a moving average demonstrating stable learning and no performance drop relative to the offline-trained starting point. The final 100+ episodes maintain perfect success, validating robust policy convergence. These success rates reflect training rollout episodes with exploration noise, not formal evaluations. Since rollouts inject noise for exploration while evaluation uses deterministic actions, the reported rates underestimate the true policy performance. However, frequent evaluation on physical hardware is prohibitively costly. We therefore track learning progress through rollout success rates as an efficient proxy, providing real-time visibility into policy improvement. We present this curve for Agile Bowling as it exemplifies the rapid convergence and stability characteristic of our online RL fine-tuning phase.

Figure 12. Online RL training curve for Agile Bowling. Per-episode success rate during the online RL fine-tuning phase, showing rapid convergence from the iterative offline baseline (approximately 80+%) to perfect performance (100%). The plot shows per-episode success, its moving average (window = 10), and the perfect-success (100%) level.
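Because the curve in Fig. 12 reports rollout success rather than formal evaluation, the monitoring itself is easy to reproduce. Below is a minimal sketch of the moving-average success proxy described above; the window size is taken from the figure legend, while the function name and everything else are illustrative assumptions, not the authors' code.

```python
# Track per-episode rollout success and its moving average as a cheap proxy
# for policy improvement, avoiding costly formal evaluations on hardware.
from collections import deque

def track_success(episode_successes, window=10):
    buf, averages = deque(maxlen=window), []
    for s in episode_successes:          # s is 1 for success, 0 for failure
        buf.append(s)
        averages.append(sum(buf) / len(buf))
    return averages

# e.g., track_success([0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1]) rises toward 1.0
```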
Simulation Experiments
To complement our real-robot evaluations, we benchmark RL-100 in simulation and compare it against state-of-the-art diffusion/flow-based offline-to-online RL methods, including DPPO (Ren et al. 2024), ReinFlow (Zhang et al. 2025), and DSRL (Wagenmaker et al. 2025). We also conduct ablations to verify the contribution of each component in RL-100.

Main Results
We evaluate on three standard continuous-control suites that cover both dexterous manipulation and locomotion: Adroit (Rajeswaran et al. 2018), Meta-World (Yu et al. 2020), and MuJoCo locomotion (Fu et al. 2020; Todorov et al. 2012). Unless noted otherwise, observations include robot proprioception together with visual inputs (RGB frames or point clouds), and actions are continuous joint/EE commands. We report normalized returns and task success rates (mean ± std over multiple seeds) and compare RL-100 with baselines that fine-tune diffusion/flow policies, as well as their offline/online training protocols. As shown in Fig. 13, RL-100 (red curves) consistently achieves the highest or near-highest performance across all ten tasks. On MuJoCo locomotion, RL-100 reaches a 10,000 return on halfcheetah-medium-v2, which is 2.2× higher than DPPO's 4,500 and 3.3× higher than DSRL's 3,000, and it substantially outperforms all baselines on hopper and walker2d. On Adroit dexterous tasks, RL-100 attains near-perfect success rates, about 100%, on door and hammer, while DPPO plateaus at 0.9 and ReinFlow struggles below 0.6 on hammer. For Meta-World precision tasks like peg-insert-side, RL-100 maintains a stable 1.0 success rate after a few steps, where ReinFlow fails to exceed 0.2. From a sample-efficiency and stability perspective, RL-100 demonstrates faster convergence, achieving peak performance within 1-2M steps on most tasks versus 3-5M for baselines, and exhibits notably lower variance (narrower confidence intervals) across random seeds. On challenging coordination tasks (pen, disassemble), RL-100's advantage is most pronounced, showing both higher asymptotic performance and more stable learning trajectories. The consistent gains across diverse modalities, from high-dimensional locomotion to contact-rich manipulation and precision insertion, validate RL-100's generality as a unified framework for diffusion policy fine-tuning.

Table 7. Numbers of episodes and corresponding collection time (h) across all tasks during the three training stages: human demonstration, iterative offline RL, and online RL. For the offline RL stage, the number is the sum over all iterations.

  Task                        Human Demonstration    Iterative Offline RL    Online RL
                              # epi.  Time (h)       # epi.  Time (h)        # epi.  Time (h)
  Dynamic Push-T              100     2              821     8               763     7.5
  Agile Bowling               100     2              249     2               213     2.5
  Pouring                     64      1              741     6.8             129     1.5
  Soft-towel Folding          400     5              896     11              654     8.5
  Dynamic Unscrewing          31      0.5            467     4.5             288     3
  Orange Juicing - Placing    80      1.5            642     10.5            750     12.5
  Orange Juicing - Removal    29      0.5            149     2.5             240     4

Figure 13. Learning curves of various methods on MuJoCo locomotion, Adroit, and Meta-World tasks across three random seeds. Solid lines indicate the mean performance, while the shaded regions represent the corresponding 95% confidence interval. We use robot proprioception as the policy input for MuJoCo locomotion, and both 3D visual observations and proprioceptive state for Adroit (Door, Hammer, Pen) and Meta-World (Box-Close, Disassemble, Peg-Insert-Side, Push-Wall); baseline methods follow the input modalities specified in their original papers.

Policy Inference Frequency
Fig. 14 presents a comparative analysis of our proposed method, RL-100 (CM), against representative baselines in terms of model size (number of parameters) and policy inference speed (frequency in Hz).
The results demonstrate the superior efficiency of our approach. Specifically, RL-100 (CM) with a Skip-Net architecture achieves an exceptional inference frequency of 378 Hz, outperforming DSRL (35 Hz) and DPPO (30 Hz) by over an order of magnitude, while maintaining a compact model size of just 3.9M parameters compared with DSRL's 52.3M. Even a larger U-Net architecture (39.2M parameters) enjoys an inference-speed boost of up to 133 Hz via CM. ReinFlow benefits from a relatively high frequency due to its smaller network size, but it is still notably lower than ours, even against our smaller model. This significant performance gain is attributed to our core methodology: we distill the diffusion policy into a Consistency Model (CM) capable of single-step inference alongside training/fine-tuning the original policy. Unlike diffusion-based policies that inherently rely on a slow, multi-step iterative sampling process, CM collapses policy generation into a single, efficient forward pass. Consequently, we achieve near real-time policy inference, effectively removing the decision-making model as a bottleneck. The overall system frequency becomes primarily limited by the speed of the perception system (e.g., a camera frame rate of 30 Hz), enabling truly reactive control and timely responses. This low latency is critical for robust performance in dynamic environments, where the system delay introduced by iterative samplers can severely hinder task success, as shown in our real-world experiments.

Figure 14. Comparison of two RL-100 variants with different action heads against other baselines in simulation in terms of model size and inference speed: RL-100 (CM) w/ Skip-Net (3.9M parameters, 378 Hz), RL-100 (CM) w/ U-Net (39.2M, 133 Hz), DPPO (1.2M, 30 Hz), DSRL (52.3M, 35 Hz), and ReinFlow (1.3M, 170 Hz). Model size is the number of parameters in the policy neural net; inference speed is the frequency at which a policy outputs an action given observations at each timestep.

Ablation Studies
We conduct ablation studies on five components: (i) visual modality (2D vs. 3D), (ii) diffusion-noise clipping, (iii) sampler/model (CM vs. DDIM), (iv) representation learning during fine-tuning (ReconVIB / no ReconVIB / frozen encoder), and (v) diffusion parameterization (ε- vs. x0-prediction). The comparison is shown in Fig. 15, and the detailed configurations and results are provided below.

Figure 15. Learning curves of the ablation studies on the door task in Adroit across three random seeds. Solid lines indicate the mean performance, while the shaded regions represent the corresponding 95% confidence interval. From left to right: (A) 2D vision modality (i.e., images) vs. the 3D point clouds used by RL-100. (B) Different clip ranges used to bound the standard deviation of stochastic DDIM sampling steps. (C) Different models in RL-100 (CM vs. DDIM). (D) Whether reconstruction and VIB losses are included during fine-tuning, and whether the encoder is frozen. (E) Different diffusion parameterizations (i.e., the output of the neural net; epsilon: predicting noise, sample: predicting clean samples).
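To make the inference-speed gap and the CM vs. DDIM ablation below concrete, here is a minimal sketch of the two inference modes. The networks eps_net and cm_net, the tensor-valued schedule alpha_bar, and the 7-dimensional action are illustrative assumptions, not RL-100's actual implementation.

```python
# K-step iterative DDIM sampling vs. one-step consistency-model inference.
import torch

@torch.no_grad()
def ddim_sample(eps_net, obs, alpha_bar, K=10, act_dim=7):
    # Deterministic DDIM: denoise a Gaussian sample over K steps.
    x = torch.randn(1, act_dim)
    ts = torch.linspace(len(alpha_bar) - 1, 0, K + 1).long()
    for t_cur, t_next in zip(ts[:-1], ts[1:]):
        a_cur, a_next = alpha_bar[t_cur], alpha_bar[t_next]
        eps = eps_net(x, t_cur, obs)
        x0_hat = (x - (1 - a_cur).sqrt() * eps) / a_cur.sqrt()   # recover clean action estimate
        x = a_next.sqrt() * x0_hat + (1 - a_next).sqrt() * eps   # deterministic DDIM step
    return x

@torch.no_grad()
def cm_sample(cm_net, obs, act_dim=7):
    # Distilled consistency model: one forward pass maps noise directly to an action.
    return cm_net(torch.randn(1, act_dim), obs)
```

The design point is latency: ddim_sample costs K network evaluations per action, while cm_sample costs one, which is why the distilled policy can run at camera rate instead of being bottlenecked by the sampler.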
Visual modality: 2D vs. 3D. Our framework is modality-agnostic and accepts either 2D RGB images or 3D point clouds as visual inputs. On the Adroit door task, a relatively clean scene, the 3D variant learns faster and attains a higher final success rate. We attribute this to the ability of 3D inputs to enable precise geometric filtering (e.g., ROI/radius "clipping") that cleanly isolates task-relevant structures such as the handle and contact surfaces, yielding a higher signal-to-noise ratio and easier credit assignment than 2D images. This ablation thus demonstrates both the flexibility of our framework (2D/3D optional) and the practical advantage of 3D in clean, contact-rich settings.

Clipping the diffusion noise scale. Bounding the standard deviation of stochastic DDIM steps with a moderate clip (e.g., 0.8) yields the best stability-performance trade-off. An overly tight bound (0.1) under-explores and converges lower, whereas no clipping induces pronounced oscillations and wider confidence intervals. We use 0.8 for Adroit, MuJoCo locomotion, and real-world single-action-mode tasks, and 0.1 for Meta-World and real-world chunk-action-mode tasks.

Consistency model (CM) vs. DDIM. One-step CM matches multi-step DDIM performance with a K× speedup. The distilled consistency model (CM, blue) achieves nearly identical learning curves and final success rates (∼1.0) compared to the K-step DDIM policy (red), validating that our lightweight distillation preserves task performance while enabling single-forward-pass inference. Both policies converge at similar rates and maintain comparable stability (overlapping confidence intervals), demonstrating that the CM effectively compresses the iterative denoising process without sacrificing control quality. This enables the high-frequency deployment (100+ Hz) critical for dynamic real-world tasks.

Representation learning during fine-tuning. Including reconstruction plus VIB losses while updating the encoder achieves the highest and most stable success. Removing these losses degrades stability and final performance, and freezing the encoder further limits gains; joint policy-representation adaptation mitigates representational drift and improves sample efficiency.

Diffusion parameterization: ε vs. x0. Noise prediction (ε-prediction) substantially outperforms clean-sample prediction (x0-prediction), achieving ∼40% higher final success (∼1.0 vs. ∼0.6) with faster learning and reduced variance, suggesting better-conditioned gradients and closer alignment with the PPO-style objective used during RL post-training.

Prediction parameterization analysis
Prior work on diffusion models for robot learning (e.g., DP3) predominantly adopts the x0-prediction parameterization rather than ε-prediction. The two parameterizations recover the clean action (or action chunk) x̂_0 as

x̂_0 = (x_t − √(1 − ᾱ_t) · f_θ(x_t, t)) / √(ᾱ_t)   (ε-prediction),   (24)

x̂_0 = f_θ(x_t, t)   (x0-prediction).   (25)
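As a minimal sketch, the two recoveries in Eqs. (24)-(25) translate directly into code. The names f_theta and alpha_bar are illustrative stand-ins for the denoising network and the cumulative noise schedule, not the authors' implementation.

```python
import torch

def recover_x0(f_theta, x_t, t, alpha_bar, parameterization="epsilon"):
    a = alpha_bar[t]
    if parameterization == "epsilon":
        # Eq. (24): invert the forward noising process from the predicted noise.
        # The 1/sqrt(alpha_bar_t) factor grows as t -> T, amplifying prediction noise.
        return (x_t - (1 - a).sqrt() * f_theta(x_t, t)) / a.sqrt()
    # Eq. (25): the network outputs the clean action chunk directly.
    return f_theta(x_t, t)
```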
Analysis. A practical reason is variance amplification in Eq. (24): early in the reverse process (t → T), the factor 1/√ᾱ_t becomes large, which magnifies estimation noise in ε_θ and yields a higher-variance x̂_0. In supervised imitation settings (or when actions are executed open-loop), this extra variance can destabilize training and control, making direct x0-prediction attractive.

Variance measurements. We pretrain identical policies under the two parameterizations and, during environment rollouts, perform 100 forward passes per time step to estimate the predictive variance. On a locomotion task (Hopper), the empirical variance of x̂_0 is 0.1316 for ε-prediction versus 0.0870 for x0-prediction; on a manipulation task (Adroit-Door), it is 0.0589 for ε-prediction versus 0.0290 for x0-prediction.

Implication for online RL. In our two-level MDP, where each action is produced by a K-step denoising trajectory, the higher stochasticity induced by ε-prediction acts as structured exploration within the denoising chain, improving coverage of latent action modes and helping avoid local optima during policy-gradient fine-tuning. With all other components fixed, online RL with ε-prediction achieves faster and more reliable improvement on Adroit-Door (see Fig. 15), whereas x0-prediction exhibits slower improvement and more frequent premature convergence.

Takeaway. While x0-prediction can be preferable for low-variance imitation and open-loop execution, the variance amplification inherent to ε-prediction is beneficial for exploration in diffusion-based online RL. Consequently, we adopt ε-prediction for post-training, leveraging its exploratory behavior inside the denoising steps.

Conclusion and Future Work
This paper focuses on the core challenges of real-world deployment of robot learning, which demands systems with high reliability, efficiency, and robustness. To this end, we present RL-100, a three-stage real-world RL training framework built atop a diffusion-based visuomotor policy, designed to "start from human priors" and then "go beyond human performance" in robotic manipulation. By unifying imitation learning and reinforcement learning under a single policy-optimization objective, RL-100 preserves the strengths of human teleoperation demonstrations while continuously self-improving through autonomous exploration. In particular, a shared PPO-style policy-gradient loss is applied across both the offline and online RL phases to fine-tune the diffusion policy's denoising process, providing stable, near-monotonic performance gains. Moreover, the framework introduces a consistency distillation technique that compresses the multi-step diffusion policy (requiring K = 5-10 denoising steps) into a single-step consistency model, enabling high-frequency control at deployment without sacrificing performance. This unification of IL and RL under one objective, coupled with the one-step distilled policy, yields a controller that is both efficient to fine-tune and fast to execute in real-world settings. RL-100's efficacy was demonstrated across a diverse suite of real-world tasks, encompassing dynamic object pushing, dual-arm deformable cloth folding, agile high-speed bowling, precision pouring of granular and fluid materials, unscrewing, and a multi-stage orange juicing task.
Deployment-level results show that the fully trained RL-100 policy achieved 100% success rates on every task, attaining virtually perfect reliability in extensive evaluations. Notably, RL-100 also improved task efficiency, often matching or even surpassing skilled human operators in time-to-completion on multiple tasks. The trained policies proved robust over long durations, maintaining stable performance over 2+ hours of continuous operation, which attests to the framework's long-horizon reliability. These results collectively indicate that RL-100 satisfies the stringent reliability, efficiency, and robustness requirements needed for real-world deployment. Beyond excelling under nominal conditions, RL-100 exhibited strong generalization and robustness in the face of new variations and disturbances. Without any retraining, a single policy maintained high performance under significant environmental shifts. Across several such unseen scenario variations, RL-100 attained an average 92.5% success rate without any fine-tuning, underscoring the policy's robustness to modest domain shifts. Furthermore, the framework's consistency-model variant (the one-step distilled policy) demonstrated that high-frequency control is possible without loss of accuracy, as it too reached a 100% success rate on all evaluated tasks, including 250/250 successes in the challenging dual-arm towel folding. This blend of reliability and adaptability indicates that RL-100 can handle the kind of dynamic perturbations and variability inherent in real home or factory environments, rather than overfitting to narrow training conditions. In summary, RL-100 integrates imitation learning and reinforcement learning into a unified paradigm that yields deployment-grade performance on real robots. It leverages human demonstration data as a foundation and then aligns to human-grounded objectives while exceeding human-level reliability through iterative self-improvement. The framework's near-perfect success rates, broad task generality, and sustained operational robustness represent a significant step toward deploying autonomous manipulators in unstructured environments. These outcomes "offer a credible path to deployment in homes and factories", where robots must perform daily manipulation tasks with superhuman consistency and safety.

Future Work. There are several promising directions to extend and enhance this work. A priority is to extend evaluation to more complex, cluttered, and partially observable scenes that better mirror the variability of homes and factories. This includes dynamic multi-object settings, occlusions, specular/transparent materials, changing illumination, and non-stationary layouts. Stress-testing the policy beyond the current benchmark, where we already observe near-perfect success, strong long-horizon stability, and human-teleoperation-level time-to-completion on multiple tasks, will help quantify limits and failure modes under realistic deployment conditions. Our results indicate that small diffusion policies can attain high reliability and efficiency after modest fine-tuning. Building on this, we plan to scale post-training to larger, multi-task, multi-robot Vision-Language-Action (VLA) models. Key directions include: (i) scaling laws for data/model size vs.
real-robot sample efficiency; (ii) cross-embodiment and cross-task transfer in a single policy; and (iii) aligning large VLA priors with RL-100's unified objective to retain zero-shot semantic generalization while preserving the high success rates we report. Although our pipeline already incorporates conservative operation and stable fine-tuning, reset and recovery remain practical bottlenecks. We will investigate autonomous reset mechanisms (learned reset policies, scripted recovery behaviors, task-aware environment fixtures, and failure-aware action chunking) to minimize on-robot human effort during training and evaluation. A principled reset strategy should reduce downtime, smooth data collection, and further stabilize online improvement, complementing RL-100's OPE-gated iterative updates.

Contribution List
Kun: Project lead, Core; responsible for the original idea and codebase, overall design and training of simulation and real-world experiments, and paper writing.
Huanyu: Core; optimized training infrastructure, led the unscrew, pour, and juicing tasks, contributed to paper writing, and implemented part of the simulation baselines.
Dongjie: Core; led the juicing task, contributed to paper writing and most of the real-world baselines, and implemented part of the simulation baselines.
Zhenyu: Core; responsible for the real-world robot setup, design, and data collection, and contributed to paper writing.
Lingxiao: Core; led data collection for the folding task, managed its offline training process, and explored dual-arm embodiments in the early stages.
Zhennan: Core; contributed to part of the algorithm design, improved consistency policy distillation, debugged and explored settings in simulation, and developed most of the simulation baselines.
Ziyu: Managed the Meta-World domain tasks.
Shiyu: Writing refinement.
Huazhe: Principal Investigator (PI), Core; responsible for project direction and guidance, and contributed to paper writing.

Acknowledgement
We would like to thank Tianhai Liang for his contributions to the hardware design for this project, and Jiacheng You and Zhecheng Yuan for their valuable discussions and insights.

References
Ajay A, Du Y, Gupta A, Tenenbaum J, Jaakkola T and Barbu A (2023) Is conditional generative modeling all you need for decision making? arXiv preprint.
Ball PJ, Smith L, Kostrikov I and Levine S (2023) Efficient online reinforcement learning with offline data. In: International Conference on Machine Learning.
Black K, Brown N, Driess D, Esmail A, Equi M, Finn C, Fusai N, Groom L, Hausman K, Ichter B, Jakubczak S, Jones T, Ke L, Levine S, Li-Bell A, Mothukuri M, Nair S, Pertsch K, Shi LX, Tanner J, Vuong Q, Walling A, Wang H and Zhilinsky U (2024a) π0: A vision-language-action flow model for general robot control. URL https://arxiv.org/abs/2410.24164.
Black K, Brown N, Driess D, Ko AE, Hu K, Lee A, Nakayama M, Singh A, Xu Q, Yang Q et al. (2024b) π0: A vision-language-action flow model for general robot control. arXiv preprint.
Black K, Janner M, Du Y, Kostrikov I and Levine S (2023) Training diffusion models with reinforcement learning. arXiv preprint.
Chen H, Lu C, Ying C, Su H and Zhu J (2023) Score regularized policy optimization through diffusion behavior. arXiv preprint.
Chen Q, Deng Z, Wang Y, Bao J, Lu Y, Zeng W, Niebles JC, Gao R, Li FF and Wu J (2024) FlowPolicy: Enabling fast and robust 3D flow-based policy via consistency flow matching for robot manipulation. arXiv preprint.
Chi C, Xu Z, Feng S, Cousineau E, Du Y, Burchfiel B, Tedrake R and Song S (2023) Diffusion policy: Visuomotor policy learning via action diffusion. The International Journal of Robotics Research: 02783649241273668.
Christiano PF, Leike J, Brown TB et al. (2017) Deep reinforcement learning from human preferences. In: NeurIPS.
Chua K, Calandra R, McAllister R and Levine S (2018) Deep reinforcement learning in a handful of trials using probabilistic dynamics models. In: NeurIPS.
Cui J and Trinkle J (2021) Toward next-generation learned robot manipulation. Science Robotics 6(54): eabd9461. DOI 10.1126/scirobotics.abd9461. URL https://www.science.org/doi/abs/10.1126/scirobotics.abd9461.
Ding S, Hu C, Liu Z, Zhang C and Liang Y (2024a) Diffusion-based reinforcement learning via q-weighted variational policy optimization. In: Advances in Neural Information Processing Systems.
Ding Z, Zhang C, Dong J and Jia J (2024b) Consistency models as a rich and efficient policy class for reinforcement learning. arXiv preprint.
Eysenbach B, Gu S, Ibarz J and Levine S (2018) Leave no trace: Learning to reset for safe and autonomous reinforcement learning. In: International Conference on Learning Representations.
Fan Y, Watkins O, Du Y, Liu H, Ryu M, Goodman C, Jain A, Vafa K, Kim D, Yu J, Tang H, Hao T, Tenenbaum J, Garg S, Jesse A, Abbeel P and Singh AK (2024) DPOK: Reinforcement learning for fine-tuning text-to-image diffusion models. Advances in Neural Information Processing Systems.
Fu J, Kumar A, Nachum O, Tucker G and Levine S (2020) D4RL: Datasets for deep data-driven reinforcement learning.
Fujimoto S, van Hoof H and Meger D (2018) Addressing function approximation error in actor-critic methods. In: International Conference on Machine Learning (ICML).
Guo L, Xue Z, Xu Z and Xu H (2025) DemoSpeedup: Accelerating visuomotor policies via entropy-guided demonstration acceleration. URL https://arxiv.org/abs/2506.05064.
Gupta A, Mendonca R, Zhao Y, Emmert-Streib F and Levine S (2021) Reset-free reinforcement learning via multi-task learning: Learning dexterous manipulation behaviors without human intervention. In: IEEE International Conference on Robotics and Automation.
Haarnoja T, Zhou A, Abbeel P and Levine S (2018) Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In: International Conference on Machine Learning (ICML).
Hansen-Estruch P, Kostrikov I, Janner M, Kuba JG and Levine S (2024) Revisiting diffusion q-learning: From iterative denoising to one-step action generation. In: International Conference on Machine Learning.
He H, Bai C, Xu K, Yang Z, Zhang W, Wang D, Zhao B and Li X (2023) Diffusion model is an effective planner and data synthesizer for multi-task reinforcement learning. arXiv preprint.
He K, Fan H, Wu Y, Xie S and Girshick R (2020) Momentum contrast for unsupervised visual representation learning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 9729-9738.
Ho J, Jain A and Abbeel P (2020a) Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33: 6840-6851.
Ho J, Jain A and Abbeel P (2020b) Denoising diffusion probabilistic models. In: Advances in Neural Information Processing Systems, volume 33. pp. 6840-6851.
Intelligence P, Black K, Brown N, Darpinian J, Dhabalia K, Driess D, Esmail A, Equi M, Finn C, Fusai N, Galliker MY, Ghosh D, Groom L, Hausman K, Ichter B, Jakubczak S, Jones T, Ke L, LeBlanc D, Levine S, Li-Bell A, Mothukuri M, Nair S, Pertsch K, Ren AZ, Shi LX, Smith L, Springenberg JT, Stachowicz K, Tanner J, Vuong Q, Walke H, Walling A, Wang H, Yu L and Zhilinsky U (2025) π0.5: A vision-language-action model with open-world generalization. URL https://arxiv.org/abs/2504.16054.
Janner M, Du Y, Tenenbaum JB and Levine S (2022) Planning with diffusion for flexible behavior synthesis. In: International Conference on Machine Learning. pp. 9902-9915.
Janner M, Fu J, Zhang M and Levine S (2019) When to trust your model: Model-based policy optimization. In: Advances in Neural Information Processing Systems.
Kalashnikov D, Irpan A, Pastor P, Ibarz J et al. (2018) QT-Opt: Scalable deep reinforcement learning for vision-based robotic manipulation. In: Conference on Robot Learning (CoRL).
Kang B, Jie X, Du D, Li J, Chen X, Zhang Y and Tian Y (2023) Efficient diffusion policies for offline reinforcement learning. arXiv preprint.
Kostrikov I, Nair A and Levine S (2022) Offline reinforcement learning with implicit q-learning. In: ICLR.
Kumar A, Zhou A, Tucker G and Levine S (2020) Conservative q-learning for offline reinforcement learning. In: Advances in Neural Information Processing Systems, volume 33. pp. 1179-1191.
Lei K, He Z, Lu C, Hu K, Gao Y and Xu H (2024) Uni-O4: Unifying online and offline deep reinforcement learning with multi-step on-policy optimization. In: ICLR.
Levine S, Finn C, Darrell T and Abbeel P (2016) End-to-end training of deep visuomotor policies. In: Robotics: Science and Systems (RSS).
Li H, Jiang Z, Chen Y and Zhao D (2024) Generalizing consistency policy to visual RL with prioritized proximal experience regularization. In: The Thirty-eighth Annual Conference on Neural Information Processing Systems. URL https://openreview.net/forum?id=MOFwt8OeXr.
Lin T, Sachdev K, Fan L, Malik J and Zhu Y (2025) Sim-to-real reinforcement learning for vision-based dexterous manipulation on humanoids. arXiv preprint.
Lipman Y, Chen RT, Ben-Hamu H, Nickel M and Le M (2023) Flow matching for generative modeling. In: International Conference on Learning Representations.
Liu Y, Bao J and Li KH (2024) PinchBot: Long-horizon deformable manipulation with guided diffusion policy. arXiv preprint.
Lu C, Chen H, Chen J, Su H, Li C and Zhu J (2023) Contrastive energy prediction for exact energy-guided diffusion sampling in offline reinforcement learning. arXiv preprint.
Lu Y, Tian Y, Yuan Z, Wang X, Hua P, Xue Z and Xu H (2025) H3DP: Triply-hierarchical diffusion policy for visuomotor learning. arXiv preprint.
Luo J, Hu Z, Xu C, Tan YL, Berg J, Sharma A, Schaal S, Finn C, Gupta A and Levine S (2024a) SERL: A software suite for sample-efficient robotic reinforcement learning. In: 2024 IEEE International Conference on Robotics and Automation (ICRA). pp. 16961-16969.
Luo J, Hu Z, Xu C, You S, Darrell T and Levine S (2024b) SERL: A software suite for sample-efficient robotic reinforcement learning. In: IEEE International Conference on Robotics and Automation.
Luo J, Xu C, Liu F, Trevor D and Levine S (2024c) Precise and dexterous robotic manipulation via human-in-the-loop reinforcement learning. Science Robotics 9(96): eads5033.
Luo J, Xu C, Wu J and Levine S (2025) Precise and dexterous robotic manipulation via human-in-the-loop reinforcement learning. Science Robotics 10(105): eads5033.
DOI 10.1126/scirobotics.ads5033. URL https://www.science.org/doi/abs/10.1126/scirobotics.ads5033.
Majumdar A, Yadav K, Arnaud S, Ma YJ, Chen C, Silwal S, Jain A, Berges VP, Abbeel P, Malik J et al. (2023) Where are we in the search for an artificial visual cortex for embodied intelligence? arXiv preprint.
Nair A, Dalal M, Gupta A and Levine S (2020) AWAC: Accelerating online reinforcement learning with offline datasets. In: International Conference on Machine Learning.
Nair S, Rajeswaran A, Kumar V, Finn C and Gupta A (2022) R3M: A universal visual representation for robot manipulation. In: Conference on Robot Learning.
Nakamoto M, Zhai Y, Singh A, Mark MS, Ma Y, Finn C, Kumar A and Levine S (2023) Cal-QL: Calibrated offline RL pre-training for efficient online fine-tuning. In: Advances in Neural Information Processing Systems.
Nguyen T and Yoo CD (2025) Revisiting diffusion q-learning: From iterative denoising to one-step action generation. arXiv preprint.
Park S, Kang K and Levine S (2025) Flow q-learning. arXiv preprint.
Peng XB, Kumar A, Zhang G and Levine S (2019) Advantage-weighted regression: Simple and scalable off-policy reinforcement learning. In: International Conference on Learning Representations.
Psenka M, Escontrela A, Abbeel P and Ma Y (2024) Learning a diffusion model policy from rewards via q-score matching. arXiv preprint.
Rajeswaran A, Kumar V, Gupta A, Vezzani G, Schulman J, Todorov E and Levine S (2018) Learning complex dexterous manipulation with deep reinforcement learning and demonstrations. In: Proceedings of Robotics: Science and Systems. Pittsburgh, Pennsylvania. DOI 10.15607/RSS.2018.XIV.049.
Ren AZ, Dixit A, Bodrova AS, Singh S, Tu S, Brown N, Xu P, Takayama L, Xia F, Varley J, Xu Z, Sadigh D, Zeng A and Majumdar A (2024) Diffusion policy policy optimization. arXiv preprint.
Sabatelli M, Suresh S and Zhang B (2024) DiffClone: Enhanced behaviour cloning in robotics with diffusion-driven policy learning. arXiv preprint.
Schulman J, Moritz P, Levine S, Jordan MI and Abbeel P (2016) High-dimensional continuous control using generalized advantage estimation. In: ICLR.
Schulman J, Wolski F, Dhariwal P, Radford A and Klimov O (2017) Proximal policy optimization algorithms. arXiv preprint abs/1707.06347.
Shang S, Zheng Y, Yin Z and Duan Y (2024) Latent weight diffusion: Generating policies from trajectories. arXiv preprint.
Singh A, Yang L, Hartikainen K, Finn C and Levine S (2019) End-to-end robotic reinforcement learning without reward engineering. In: Robotics: Science and Systems.
Song J, Meng C and Ermon S (2021) Denoising diffusion implicit models. In: International Conference on Learning Representations. URL https://openreview.net/forum?id=St1giarCHLP.
Song Y, Dhariwal P, Chen M and Sutskever I (2023) Consistency models. In: Proceedings of the 40th International Conference on Machine Learning, ICML'23. JMLR.org.
Song Y, Sohl-Dickstein J, Kingma DP, Kumar A, Ermon S and Poole B (2020) Score-based generative modeling through stochastic differential equations. arXiv preprint.
Todorov E, Erez T and Tassa Y (2012) MuJoCo: A physics engine for model-based control. In: 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems. pp. 5026-5033.
Wagenmaker A, Beeson A, Ajay A and Gupta A (2025) Steering your diffusion policy with latent space reinforcement learning. arXiv preprint arXiv:2501.xxxxx.
Walke H, Black K, Lee A, Kim MJ, Du M, Zheng C, Zhao T, Hansen-Estruch P, Xu Q, Dragan A et al. (2023) BridgeData V2: A dataset for robot learning at scale. arXiv preprint.
Wang SY, Kubota Y and Shah D (2023a) Diffusion model-augmented behavioral cloning. arXiv preprint.
Wang Z, Hunt JJ and Zhou M (2023b) Diffusion policies as an expressive policy class for offline reinforcement learning. In: International Conference on Learning Representations.
Wang Z, Zhu Y, Liu S and Song S (2024) One-step diffusion policy: Fast visuomotor policies via diffusion distillation. arXiv preprint.
Yu T, Quillen D, He Z, Julian R, Hausman K, Finn C and Levine S (2020) Meta-World: A benchmark and evaluation for multi-task and meta reinforcement learning. In: Conference on Robot Learning. PMLR, pp. 1094-1100.
Yuan Z, Wei T, Gu L, Hua P, Liang T, Chen Y and Xu H (2025) HERMES: Human-to-robot embodied learning from multi-source motion data for mobile dexterous manipulation. URL https://arxiv.org/abs/2508.20085.
Ze Y, Zhang G, Zhang K, Hu C, Wang M and Xu H (2024) 3D Diffusion Policy: Generalizable visuomotor policy learning via simple 3D representations. In: Proceedings of Robotics: Science and Systems. Delft, Netherlands. DOI 10.15607/RSS.2024.XX.067.
Zhang T, Yu C, Su S and Wang Y (2025) ReinFlow: Fine-tuning flow matching policy with online reinforcement learning. arXiv preprint.
Zhao TZ, Kumar V, Levine S and Finn C (2023) Learning fine-grained bimanual manipulation with low-cost hardware. In: Proceedings of Robotics: Science and Systems. Daegu, Republic of Korea.

Appendix

Experiment Setting
This section presents the setup and methodology for conducting real-world robot manipulation experiments. The experimental framework is designed to integrate perception, control, and demonstration collection in a coherent pipeline. We employ the Intel RealSense L515 camera to capture depth images, which are subsequently processed into 3D point clouds and transformed into the robot's root frame for a consistent spatial representation. Intrinsic and extrinsic calibrations ensure accurate mapping between image coordinates and the robot workspace, while spatial filtering and downsampling produce compact yet informative point cloud observations suitable for high-frequency control. The robotic platform consists of three manipulators (UR5, xArm, and Franka Emika Panda) equipped with dexterous or gripper-type end-effectors, as well as a passive fixed effector. Asynchronous control schemes are implemented across all manipulators to achieve low-latency, high-frequency execution, with task-specific interpolation strategies ensuring smooth and responsive motion. Demonstrations for seven manipulation tasks are collected using teleoperation interfaces matched to task complexity. For dexterous 3D tasks, the Apple Vision Pro provides natural hand tracking, which is converted into robot joint or Cartesian commands via inverse kinematics for the LeapHand. For planar or low-dimensional tasks, a standard game joystick is used to provide delta-action control of the end-effector in the plane. Each task includes approximately 100 demonstration trajectories, efficiently recorded within a few hours per task. Overall, the experimental setup provides a consistent, flexible, and practical framework for acquiring high-quality real-world data and evaluating robot manipulation policies across a diverse set of tasks.

Camera Calibration
In the real-world experiments, we employ the Intel RealSense L515 camera, chosen for its ability to provide high-fidelity 3D point cloud observations owing to its superior depth accuracy. This ensures reliable perception for robot manipulation tasks across diverse environments.
The intrinsic calibration is performed using a Charuco board, which serves as the reference pattern for estimating the camera's internal parameters. Multiple images of the board are captured from varying angles and distances. The board corners are then detected and used to compute the intrinsic matrix and distortion coefficients through an optimization process that minimizes the reprojection error. This procedure guarantees an accurate mapping between 2D image coordinates and their corresponding 3D spatial points.

The extrinsic calibration aligns the camera frame with the robot's root frame using an AprilTag placed on the tabletop. First, a manual measurement establishes the transformation T_tag2root, which encodes the pose of the tag relative to the robot base. Subsequently, the AprilTag is detected in the camera's RGB channel, yielding the transformation T_tag2camera from the tag to the camera frame. The camera-to-root transformation is then obtained as:

T_camera2root = T_tag2root · (T_tag2camera)^{-1}

This calibrated extrinsic transformation is stored and utilized throughout the experiments to maintain consistent spatial alignment between the camera observations and the robot workspace.

Point Cloud Processing
The raw depth maps captured by the RealSense L515 are first back-projected into 3D point clouds using the calibrated intrinsic parameters, thereby establishing a metric reconstruction of the scene in the camera frame. These point clouds are then transformed into the robot's root frame using the calibrated extrinsic transformation T_camera2root, ensuring spatial consistency between visual observations and the robot's operational workspace. To suppress environmental noise and spurious reflections, a spatial bounding box is applied in the root frame, filtering out irrelevant regions while preserving only the volume above the tabletop. This step enhances the semantic relevance of the retained points, focusing on task-relevant structures. Finally, to improve computational efficiency while maintaining geometric fidelity, farthest point sampling (FPS) is applied. The processed point cloud is uniformly downsampled to 512 points, yielding a compact yet informative representation suitable for robot perception and policy learning. During deployment, the camera operates in a dedicated thread with a buffered pipeline, enabling asynchronous acquisition of observations. This design minimizes latency and allows the effective point cloud update rate to exceed 25 Hz, ensuring timely and reliable perception for closed-loop control.
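Putting the calibration and processing steps together, here is a minimal NumPy sketch of the pipeline just described (extrinsic composition, back-projection, workspace crop, and FPS to 512 points). The intrinsics, box bounds, and function names are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def camera_to_root(T_tag2root, T_tag2camera):
    # Extrinsics from the AprilTag calibration: T_camera2root = T_tag2root · (T_tag2camera)^{-1}
    return T_tag2root @ np.linalg.inv(T_tag2camera)

def depth_to_root_cloud(depth, fx, fy, cx, cy, T_camera2root):
    # Back-project each pixel (u, v, depth) into camera-frame 3D coordinates.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pts = np.stack([(u - cx) * depth / fx, (v - cy) * depth / fy, depth], -1).reshape(-1, 3)
    pts_h = np.concatenate([pts, np.ones((len(pts), 1))], axis=1)
    return (pts_h @ T_camera2root.T)[:, :3]      # homogeneous transform into the root frame

def crop_and_fps(pts, lo, hi, n=512):
    pts = pts[np.all((pts >= lo) & (pts <= hi), axis=1)]   # keep only the tabletop volume
    # Farthest point sampling: greedily add the point farthest from the selected set.
    idx = [0]
    d = np.linalg.norm(pts - pts[0], axis=1)
    for _ in range(n - 1):
        idx.append(int(d.argmax()))
        d = np.minimum(d, np.linalg.norm(pts - pts[idx[-1]], axis=1))
    return pts[idx]
```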
Robotic Arm & End-effector Control
The experimental platform incorporates three robotic manipulators (UR5, xArm, and Franka Emika Panda) equipped with different types of end-effectors, including the LeapHand dexterous hand, the Robotiq 2F-85 gripper, and a custom 3D-printed fixed effector. All control modules are designed to operate asynchronously, ensuring low-latency communication and high-frequency execution for responsive and stable manipulation. For the UR5, relative action commands are transmitted at 30 Hz. These commands are handled through the Real-Time Data Exchange (RTDE) interface, where they are interpolated to 125 Hz for execution. A lookahead time of 1/30 s and a control gain of 200 are used, providing a balance between smooth motion trajectories and timely responsiveness. For the xArm, control is performed via absolute Cartesian position commands at 30 Hz. Linear interpolation is applied to the position trajectories, while orientation is represented in quaternion form and interpolated via spherical linear interpolation (slerp). The resulting trajectory is executed at 200 Hz, yielding continuous and smooth motion in task space. For the Franka Emika Panda, absolute Cartesian position commands are also issued at 30 Hz. The robot's low-level controller performs trajectory interpolation, converting the input into control signals at 1000 Hz, thereby ensuring precise and compliant execution. Regarding end-effector control, both the LeapHand and the Robotiq 2F-85 are driven asynchronously at 30 Hz. The policy performs end-to-end joint control for the LeapHand, while the gripper uses binary control for the Soft-towel Folding task and continuous control for the Orange Juicing task. The custom 3D-printed fixed effector, by contrast, is purely passive and requires no active control, serving as a simple but reliable interface for specific experimental settings such as Dynamic Push-T and Agile Bowling.

Data Collection
Data for seven real-world manipulation tasks are collected using teleoperation interfaces selected to match the complexity and dimensionality of each task. These tasks include both dexterous three-dimensional manipulations, such as folding, pouring, unscrewing, and juicing, and planar or constrained motions, such as Dynamic Push-T and Agile Bowling. The data collection strategy aims to provide sufficient coverage of the task space while maintaining practical efficiency in demonstration acquisition. For tasks involving complex 3D hand motions, the Apple Vision Pro is employed. Its built-in spatial hand tracking captures natural hand and finger movements, which are mapped to robot end-effector motions. Before starting each demonstration, the operator's hand pose is aligned with the robot end-effector through an initialization procedure, ensuring a consistent reference frame. During teleoperation, wrist and finger motions are streamed in real time and converted into robot joint or Cartesian commands, while gripper grasping actions are triggered via pinch gestures. For the LeapHand dexterous hand, inverse kinematics is solved in PyBullet to produce feasible joint configurations from the tracked hand poses. This setup allows operators to efficiently provide demonstrations for tasks requiring dexterous, spatially rich manipulations. For planar or low-dimensional manipulation tasks, such as Dynamic Push-T and Agile Bowling, a standard game joystick is used. The joystick's XY-axis values are mapped to delta actions of the robot end-effector in the plane. This approach provides an intuitive and efficient means of recording demonstrations where precise two-dimensional control is sufficient. Our dataset is intentionally varied: some tasks feature a large number of demonstrations to test performance limits, while others use smaller datasets to evaluate sample efficiency. Tab. 7 provides a complete summary.
arXiv:2510.14814v1 [cs.LG] 16 Oct 2025
TACKLING TIME-SERIES FORECASTING GENERALIZATION VIA MITIGATING CONCEPT DRIFT

Zhiyuan Zhao, Haoxin Liu, B. Aditya Prakash
School of Computational Science & Engineering
Georgia Institute of Technology
Atlanta, GA 30332, USA
{leozhao1997@, hliu763@, badityap@cc.}gatech.edu

ABSTRACT
Time-series forecasting finds broad applications in real-world scenarios. Due to the dynamic nature of time series data, it is important for time-series forecasting models to handle potential distribution shifts over time. In this paper, we initially identify two types of distribution shifts in time series: concept drift and temporal shift. We acknowledge that while existing studies primarily focus on addressing temporal shift issues in time series forecasting, designing proper concept drift methods for time series forecasting has received comparatively less attention. Motivated by the need to address potential concept drift, and noting that conventional concept drift methods via invariant learning face certain challenges in time-series forecasting, we propose a soft attention mechanism that finds invariant patterns from both lookback and horizon time series. Additionally, we emphasize the critical importance of mitigating temporal shifts as a preliminary to addressing concept drift. In this context, we introduce ShifTS, a method-agnostic framework designed to tackle temporal shift first and then concept drift within a unified approach. Extensive experiments demonstrate the efficacy of ShifTS in consistently enhancing the forecasting accuracy of agnostic models across multiple datasets, and in outperforming existing concept drift, temporal shift, and combined baselines.

1 INTRODUCTION
Time-series forecasting finds applications in various real-world scenarios such as economics, urban computing, and epidemiology (Zhu & Shasha, 2002; Zheng et al., 2014; Deb et al., 2017; Mathis et al., 2024). These applications involve predicting future trends or events based on historical time-series data. For example, economists use forecasts to make financial and marketing plans, while sociologists use them to allocate resources and formulate policies for traffic or disease control. The recent advent of deep learning has revolutionized time-series forecasting, resulting in a series of advanced forecasting models (Lai et al., 2018; Torres et al., 2021; Salinas et al., 2020; Nie et al., 2023; Zhou et al., 2021). However, despite these successes, time-series forecasting faces certain challenges from distribution shifts due to the dynamic and complex nature of time series data.

The distribution shifts in time series can be categorized into two types (Granger, 2003). First, the data distributions of the time series themselves can change over time, including shifts in mean, variance, and autocorrelation structure, which is referred to as non-stationarity or temporal drift in time-series forecasting (Shimodaira, 2000; Du et al., 2021). Second, time-series forecasting is compounded by unforeseen exogenous factors, which shift the distribution of the target time series. These phenomena, categorized as concept drift problems in time-series forecasting (Gama et al., 2014; Lu et al., 2018), make forecasting even more challenging. While prior research has investigated strategies to mitigate temporal shifts (Liu et al., 2022; Kim et al., 2021; Fan et al., 2023), addressing concept drift in time-series forecasting has been largely overlooked.
Although concept drift is a well-studied problem in general machine learning (Sagawa et al., 2019; Arjovsky et al., 2019; Ahuja et al., 2021), adapting these solutions to time-series forecasting is challenging. Many of these methods require environment labels, which are typically unavailable in time-series datasets (Liu et al., 2024a). Indeed, the few concept drift approaches developed for time-series data are designed exclusively for online settings (Guo et al., 2021), which require iterative retraining over time steps and are infeasible for standard time-series forecasting tasks. Therefore, in this paper we aim to close this gap in the literature, that is, to mitigate concept drift in standard time-series forecasting tasks. The contributions of this paper are:

1. Concept Drift Method: We introduce soft attention masking (SAM), designed to mitigate concept drift by using the invariant patterns in exogenous features. The soft attention allows time-series forecasting models to weigh and ensemble invariant patterns at multiple horizon time steps to enhance generalization ability.

2. Distribution Shift Generalized Framework: We show the necessity of addressing temporal shift as a preliminary step when addressing concept drift. We therefore propose ShifTS, a practical, distribution-shift-generalized, model-agnostic framework that tackles temporal shift and concept drift within a unified approach.

3. Comprehensive Evaluations: We conduct extensive experiments on various time series datasets with multiple advanced time-series forecasting models. The proposed ShifTS demonstrates its effectiveness through consistent performance improvements for agnostic forecasting models, as well as better forecasting accuracy than distribution shift baselines.

We provide related works on time-series analysis and distribution shift generalization in Appendix A.

2 PROBLEM FORMULATION

2.1 TIME-SERIES FORECASTING
Time-series forecasting involves predicting future values of one or more dependent time series based on historical data, augmented with exogenous covariate features. Denote the target time series as Y and its associated exogenous covariate features as X. At any time step t, time-series forecasting aims to predict Y^H_t = [y_{t+1}, y_{t+2}, ..., y_{t+H}] ∈ Y using historical data (X^L_t, Y^L_t), where L is the length of the historical data window, known as the lookback window, and H denotes the number of forecasting time steps, known as the horizon window. Here, X^L_t = [x_{t-L+1}, x_{t-L+2}, ..., x_t] ∈ X and Y^L_t = [y_{t-L+1}, y_{t-L+2}, ..., y_t] ∈ Y. For simplicity, we denote Y^H = {Y^H_t} for all t as the collection of horizon time series over all time steps, and similarly for Y^L and X^L. Conventional time-series forecasting involves learning a model parameterized by θ through empirical risk minimization (ERM) to obtain f_θ : (X^L, Y^L) → Y^H for all time steps t. In this study, we focus on univariate time-series forecasting with exogenous features, where d_Y = 1 and d_X ≥ 1.
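As a minimal illustration of this setup (not from the paper), the (lookback, horizon) training pairs can be constructed with a sliding window; the array names and shapes below are illustrative assumptions.

```python
import numpy as np

def make_windows(y: np.ndarray, x: np.ndarray, L: int, H: int):
    """Build (X^L_t, Y^L_t, Y^H_t) tuples from y of shape (T,) and x of shape (T, d_X)."""
    X_L, Y_L, Y_H = [], [], []
    for t in range(L - 1, len(y) - H):
        X_L.append(x[t - L + 1 : t + 1])   # X^L_t = [x_{t-L+1}, ..., x_t]
        Y_L.append(y[t - L + 1 : t + 1])   # Y^L_t = [y_{t-L+1}, ..., y_t]
        Y_H.append(y[t + 1 : t + H + 1])   # Y^H_t = [y_{t+1}, ..., y_{t+H}]
    return np.stack(X_L), np.stack(Y_L), np.stack(Y_H)

# e.g., univariate target with d_X = 3 exogenous features:
X_L, Y_L, Y_H = make_windows(np.arange(100.0), np.random.rand(100, 3), L=24, H=6)
```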
2.2 DISTRIBUTION SHIFT IN TIME SERIES
Given the time-series forecasting setup, a time-series forecasting model aims to predict the target distribution P(Y^H) = P(Y^H|Y^L)P(Y^L) + P(Y^H|X^L)P(X^L), which should generalize across both training and testing time steps. However, due to the dynamic nature of time-series data, forecasting faces challenges from distribution shifts, categorized into two types: temporal shift and concept drift. These two types of distribution shifts are defined as follows:

Definition 2.1 (Temporal Shift (Shimodaira, 2000; Du et al., 2021)) Temporal shift (also known as virtual shift (Tsymbal, 2004)) is the marginal probability distributions changing over time, while the conditional distributions remain the same.

Definition 2.2 (Concept Drift (Lu et al., 2018)) Concept drift (also known as real concept drift (Gama et al., 2014)¹) is the conditional distributions changing over time, while the marginal probability distributions remain the same.

¹ (Gama et al., 2014) defines concept drift as both virtual shift and real concept drift. Our concept drift definition is consistent with the definition of real concept drift in (Gama et al., 2014).

Intuitively, a temporal shift indicates unstable marginal distributions (e.g., P(Y^H) ≠ P(Y^L)), while concept drift indicates unstable conditional distributions (P(Y^H_i|X^L_i) ≠ P(Y^H_j|X^L_j) for some i, j ∈ t). Existing methods for distribution shifts in time-series forecasting typically focus on mitigating temporal shifts through normalization, ensuring P(Y^H) = P(Y^L) by normalizing both to standard 0-1 distributions (Kim et al., 2021; Liu et al., 2022; Fan et al., 2023). In contrast, concept drift remains relatively underexplored in time-series forecasting. Nevertheless, time-series forecasting does face challenges from concept drift: the correlations between X and Y can change over time, making the conditional distributions P(Y^H|X^L) unstable and less predictable. A demonstration visualizing the differences and relationships between temporal shift and concept drift is provided in Appendix B.

While concept drift has received considerable attention in existing studies on general machine learning, applying those methods, mostly invariant learning approaches, to time-series forecasting presents certain challenges. First, conventional approaches mitigate concept drift through invariant learning; however, these methods typically rely on explicit environment labels as input (e.g., labeled rotation or noisy images in image classification), which are not readily available in time series datasets. Second, these invariant learning methods assume that all correlated exogenous features necessary to fully determine the target variable are accessible (Liu et al., 2024a), which often does not hold for time series datasets (e.g., the lookback window does not sufficiently determine the horizon target). Indeed, a few concept drift methods not based on invariant learning have been proposed for time-series forecasting (Guo et al., 2021). However, these methods are designed for the online setting, which does not fit standard time-series forecasting, and they have been validated only on limited synthetic datasets rather than complicated real-world ones.

3 METHODOLOGY
The main idea of our methodology is to address concept drift through SAM by modeling stable conditional distributions on surrogate exogenous features with invariant patterns, rather than on the sole lookback window. Furthermore, we recognize that effectively mitigating temporal shifts is a preliminary for addressing concept drift. To this end, we propose ShifTS, which handles concept drift by first resolving temporal shifts as a preliminary step within a unified framework.

3.1 MITIGATING CONCEPT DRIFT

Methodology Intuition.
3 METHODOLOGY

The main idea of our methodology is to address concept drift through SAM by modeling stable conditional distributions on surrogate exogenous features with invariant patterns, rather than on the lookback window alone. Furthermore, we recognize that effectively mitigating temporal shifts is a preliminary for addressing concept drift. To this end, we propose ShifTS, which effectively handles concept drift by first resolving temporal shifts as a preliminary step within a unified framework.

3.1 MITIGATING CONCEPT DRIFT

Methodology Intuition. As defined in Definition 2.2, concept drift in time series refers to the changing correlations between X and Y over time (P(Y^H_i|X^L_i) ≠ P(Y^H_j|X^L_j) for i, j ∈ t), which introduces instability when modeling the conditional distribution P(Y^H|X^L).

Figure 1: Comparison between conventional time-series forecasting and our approach. Our approach identifies invariant patterns in the lookback and horizon windows as X^SUR and then models a stable conditional distribution accordingly to mitigate concept drift.

This instability arises because, for a given exogenous feature X, its lookback window X^L alone may lack sufficient information to predict Y^H, while learning a stable conditional distribution requires that the inputs provide sufficient information to predict the output (Sagawa et al., 2019; Arjovsky et al., 2019). There are possible patterns in the horizon window X^H, jointly with X^L, that influence the target. Thus, modeling P(Y^H|X^L, X^H) leads to a more stable conditional distribution than P(Y^H|X^L), as [X^L, X^H] captures additional causal relationships across future time steps. We assume that incorporating causal relationships from the horizon window enables more complete causality modeling between that exogenous feature and the target, given that the future cannot influence the past (e.g., X^H_{t+1} ↛ Y^H_t). However, these causal effects from the horizon window, while important for learning stable conditional distributions, are often overlooked by conventional time-series forecasting methods, as illustrated in Figure 1(a).

Therefore, we propose leveraging both lookback and horizon information from exogenous features (i.e., [X^L, X^H]) to predict the target, enabling a more stable conditional distribution. However, directly modeling P(Y^H|X^L, X^H) in practice presents two challenges. First, X^H typically represents unknown future values during testing. To model P(Y^H|X^L, X^H), one may first need to predict X^H by modeling P(X^H|X^L), which can be as challenging as predicting Y^H directly. Second, not every pattern in X^H at every time step holds a causal relationship with the target. Modeling all patterns from X^L and X^H may introduce noisy causal relationships (which invariant learning methods aim to mitigate) and reduce the stability of conditional distributions.

To address the above challenges, instead of directly modeling P(Y^H|X^L, X^H), we propose a two-step approach: first, identify the patterns in [X^L, X^H] that lead to stable conditional distributions (namely, invariant patterns), and then model these conditional distributions accordingly. To determine stability, a natural intuition is to assess whether a pattern's correlation with the target remains consistent across all time steps. For instance, if a subsequence of [X^L, X^H] consistently exhibits stable correlations with the target over all or most time steps (e.g., an increase of the subsequence always results in an increase of the target), then its conditional distribution should be explicitly modeled due to that stability. Conversely, if a subsequence demonstrates correlations with the target only sporadically or locally, these correlations are likely spurious, yielding conditional distributions that do not transfer to other time steps. We leverage this intuition to identify all invariant patterns and aggregate them into a surrogate feature X^SUR, accounting for the fact that the target can be determined by multiple patterns.
For instance, an influenza-like illness (ILI) outbreak in winter can be triggered by either extreme cold weather in winter or extreme heat waves in summer (Nielsen et al., 2011; Jaakkola et al., 2014). By incorporating this information, we model the corresponding conditional distribution P(Y^H|X^SUR), as illustrated in Figure 1(b). The effectiveness of X^SUR in predicting Y^H stems from two key insights. First, P(Y^H|X^SUR) is a stable conditional distribution to model, as it captures invariant patterns across both the lookback and horizon windows. Second, while there is a trade-off (P(Y^H|X^SUR) provides stability, but estimating X^SUR may introduce additional errors), practical evaluations demonstrate that the benefits of constructing stable conditional distributions outweigh the potential estimation errors of X^SUR. This is because X^SUR contains only partial information, which is easier to predict than the entire X^H.

Methodology Implementation. Recognizing that P(Y^H|X^SUR) is the desirable conditional distribution to learn, the remaining challenge is to identify X^SUR in practice. To achieve this, we propose a soft attention masking mechanism (SAM) that operates as follows. First, we concatenate [X^L, X^H] to form an entire time series of length L + H. The entire series is then sliced using a sliding window of size H, resulting in L + 1 slices. This process extracts local patterns ([X^H_{t-L}, ..., X^H_t] at each time step t), which are subsequently used to identify invariant patterns. Second, we model the conditional distributions of all local patterns, [P(Y^H_t|X^H_{t-L}), ..., P(Y^H_t|X^H_t)] at each time step t, applying a learnable soft attention matrix M to weigh each local pattern. This matrix incorporates softmax, sparsity, and normalization operations, which can be described as:

    Softmax:   M_j = Softmax(M_j)
    Sparsity:  M_ij = M_ij · 1[(M_ij - µ(M_j)) ≥ 0]
    Normalize: M_j = M_j / |M_j|                                  (1)

where i and j index the first and second dimensions of M. These operations are essential for SAM to identify invariant patterns. The intuition is that we treat sliced windows from the lookback and horizon over time steps as candidate invariant patterns. We use the softmax operation to compute and update the weight with which each pattern contributes to the target Y^H. We then apply a sparsity operation to filter out patterns with low weights, leaving only the patterns with high weights. These high-weight patterns, which consistently contribute to the target across all instances at all time steps, are regarded as invariant patterns over time; intuitively, they are invariant because P(Y^H_i|X^H_{i-k}) ≈ P(Y^H_j|X^H_{j-k}) for some k ∈ [0, L] and i, j ∈ t. While multiple invariant patterns may be identified, we compute a weighted sum of these patterns, proportional to their contributions in predicting the target. The weighted-sum patterns formulate the surrogate feature X^SUR.

Figure 2: Diagram of ShifTS, consisting of three components: (a) normalization at the start and (c) denormalization at the end to address temporal shifts, and (b) a two-stage forecasting process. The first stage predicts surrogate exogenous features, X̂^SUR, identified by SAM, which capture invariant patterns essential for forecasting the target; the second stage uses both the predicted surrogate exogenous features and the original Y^L to predict Y^H.

For simplicity, we denote this process as:

    X^SUR = SAM([X^L, X^H]) = Σ_{L+1} M ⊙ Slice([X^L, X^H])       (2)

where Slice(·) slices the time series from shape [L + H, d_X] to [H, L + 1, d_X], and M ∈ R^{(L+1)×d_X} is the learnable soft attention mask of Equation 1. In practice, X^SUR may include horizon information that is unavailable during testing. To address this, SAM estimates the surrogate features X̂^SUR using agnostic forecasting models. The surrogate loss for estimating X̂^SUR is defined as:

    L_SUR = MSE(X^SUR, X̂^SUR)                                     (3)
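A minimal PyTorch sketch of Equations 1 and 2, reflecting our own reading of the operations; the tensor layouts, the clamp for numerical safety, and the names sam_mask/sam are assumptions:

```python
import torch

def sam_mask(M_raw):
    """Soft attention mask per Eq. (1); M_raw: learnable logits, shape [L+1, d_X]."""
    M = torch.softmax(M_raw, dim=0)                    # weigh the L+1 candidate patterns
    M = M * (M - M.mean(dim=0, keepdim=True) >= 0)     # sparsity: drop below-mean weights
    return M / M.sum(dim=0, keepdim=True).clamp_min(1e-8)  # renormalize what survives

def sam(x_full, M_raw, H):
    """Eq. (2): x_full = [X^L; X^H], shape [B, L+H, d_X] -> X^SUR, shape [B, H, d_X]."""
    slices = x_full.unfold(1, H, 1)        # [B, L+1, d_X, H]: all length-H windows
    slices = slices.permute(0, 3, 1, 2)    # [B, H, L+1, d_X]
    M = sam_mask(M_raw)                    # [L+1, d_X]
    return (M * slices).sum(dim=2)         # weighted sum over the L+1 patterns

B, L, H, d_X = 8, 96, 96, 3
M_raw = torch.nn.Parameter(torch.zeros(L + 1, d_X))
x_full = torch.randn(B, L + H, d_X)
print(sam(x_full, M_raw, H).shape)  # torch.Size([8, 96, 3])
```

Note that the sparsity indicator in Eq. (1) is non-differentiable; gradients here flow through the surviving softmax weights, which matches a literal reading of the equation.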
3.2 MITIGATING TEMPORAL SHIFT

While the primary contribution of this work is to mitigate concept drift in time-series forecasting, addressing temporal shifts is equally critical and serves as a prerequisite for effectively managing concept drift. The key intuition is that SAM seeks to learn invariant patterns that result in a stable conditional distribution, P(Y^H|X^SUR). However, achieving this stability becomes challenging if the marginal distributions (e.g., P(Y^H) or P(X^SUR)) are not fixed, as these distributions may change over time because of temporal shift.

To address this issue, a natural solution is to learn the conditional distribution under standardized marginal distributions. This can be achieved using temporal shift methods, which employ instance normalization techniques to stabilize the marginals. The core intuition behind popular temporal shift methods is to normalize data distributions before the model processes them and to denormalize the outputs afterward. This ensures that the normalized sequences maintain consistent mean and variance between the inputs and outputs of the forecasting model; specifically, P(X^L_Norm) ≈ P(X^H_Norm) ~ Dist(0, 1) and P(Y^L_Norm) ≈ P(Y^H_Norm) ~ Dist(0, 1), thereby mitigating temporal shifts (i.e., shifts in marginal distributions over time). Among the existing methods, Reversible Instance Normalization (RevIN) (Kim et al., 2021) stands out for its simplicity and effectiveness, making it the method of choice in this work. Advanced techniques, such as SAN (Liu et al., 2023) and the N-S Transformer (Liu et al., 2022), have also demonstrated promise in addressing temporal shifts. However, these methods often require modifications to forecasting models or additional pre-training strategies. While exploring these advanced temporal shift approaches remains a promising avenue for further performance improvements, it is beyond the scope of this study.
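A minimal sketch of the normalize/denormalize pattern in the spirit of RevIN, using instance statistics only (the full method also learns affine parameters; the class name and shapes here are our assumptions):

```python
import torch

class InstanceNorm1D:
    """Minimal reversible instance normalization for [B, T, d] series.

    Statistics are computed per instance over the lookback window, so the
    normalized inputs and the denormalized outputs share mean and variance.
    """
    def __init__(self, eps=1e-5):
        self.eps = eps

    def normalize(self, x):
        self.mean = x.mean(dim=1, keepdim=True)            # per-instance mean
        self.std = x.std(dim=1, keepdim=True) + self.eps   # per-instance std
        return (x - self.mean) / self.std

    def denormalize(self, y):
        return y * self.std + self.mean                    # reverse on model outputs

norm = InstanceNorm1D()
x = torch.randn(8, 96, 1) * 3 + 10               # lookback with nonzero mean/variance
x_n = norm.normalize(x)                          # ~ Dist(0, 1) per instance
y_hat = norm.denormalize(torch.randn(8, 96, 1))  # map forecasts back to the data scale
```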
3.3 SHIFTS: THE INTEGRATED FRAMEWORK

To address concept drift in time-series forecasting, while acknowledging that mitigating temporal shifts is a prerequisite for resolving concept drift, we propose ShifTS, a comprehensive framework designed to tackle both challenges. ShifTS is model-agnostic, as the stable conditional distributions distinguished by SAM can be learned by any time-series forecasting model. The workflow of ShifTS is illustrated in Figure 2 and consists of the following steps: (1) normalize the input time series; (2) forecast the surrogate exogenous features X̂^SUR that invariantly support the target series, as determined by SAM; (3) apply an aggregation MLP that uses X̂^SUR to forecast the target, denoted Agg(·) in Figure 2 and Algorithm 1; (4) denormalize the output time series. Conceptually, steps 1 and 4 mitigate temporal shift, step 2 addresses concept drift, and step 3 performs a weighted aggregation of exogenous features to support the target series.

The optimization objective of ShifTS is as follows:

    L = L_SUR(X^SUR, X̂^SUR) + L_TS(Y^H, Ŷ^H)                      (4)

Here, L_SUR is the surrogate loss that encourages learning to forecast the exogenous features, and L_TS is the MSE loss used in conventional time-series forecasting. The pseudo-code for training and testing ShifTS is provided in Algorithm 1.

Algorithm 1 ShifTS
1: Training:
   Require: training data X^L, X^H, Y^L, Y^H; initial parameters f_0, M_0, Agg_0. Output: model parameters f, M, Agg.
2: for i in range(E):
3:   Normalization: [X^L_Norm, Y^L_Norm] = Norm([X^L, Y^L])
4:   Time-series forecasting: [X̂^SUR_Norm, Ŷ^H_Norm] = f_i([X^L_Norm, Y^L_Norm])
5:   Exogenous feature aggregation: Ŷ^H_Norm = Ŷ^H_Norm + Agg_i(X̂^SUR_Norm)
6:   Denormalization: [X̂^SUR, Ŷ^H] = Denorm([X̂^SUR_Norm, Ŷ^H_Norm])
7:   Obtain surrogate exogenous features: X^SUR = SAM([X^L, X^H])
8:   Compute loss: L = L_SUR(X^SUR, X̂^SUR) + L_TS(Y^H, Ŷ^H)
9:   Update model parameters: f_{i+1} ← f_i, M_{i+1} ← M_i, Agg_{i+1} ← Agg_i
10: Final model parameters: f ← f_E, M ← M_E, Agg ← Agg_E
11: Testing:
    Require: test data X^L, Y^L. Output: forecast target Ŷ^H.
12: Normalization: [X^L_Norm, Y^L_Norm] = Norm([X^L, Y^L])
13: Time-series forecasting: [X̂^SUR_Norm, Ŷ^H_Norm] = f([X^L_Norm, Y^L_Norm])
14: Exogenous feature aggregation: Ŷ^H_Norm = Ŷ^H_Norm + Agg(X̂^SUR_Norm)
15: Denormalization: [X̂^SUR, Ŷ^H] = Denorm([X̂^SUR_Norm, Ŷ^H_Norm])
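A condensed sketch of one training iteration mirroring steps 3-9 of Algorithm 1, reusing the sam and InstanceNorm1D helpers sketched above; the (X^L, Y^L) → (X̂^SUR, Ŷ^H) interface of the base forecaster f is our assumption:

```python
import torch

def shifts_train_step(f, agg, M_raw, opt, XL, XH, YL, YH, H):
    """One training iteration of ShifTS (Algorithm 1, steps 3-9).

    f   : base forecaster mapping (XL_n, YL_n) -> (XSUR_hat_n, YH_hat_n)  [assumed]
    agg : aggregation MLP Agg(.) applied to the predicted surrogate features
    """
    x_norm, y_norm = InstanceNorm1D(), InstanceNorm1D()
    XL_n, YL_n = x_norm.normalize(XL), y_norm.normalize(YL)   # step 3: Norm
    XSUR_hat_n, YH_hat_n = f(XL_n, YL_n)                      # step 4: forecast
    YH_hat_n = YH_hat_n + agg(XSUR_hat_n)                     # step 5: Agg(.)
    XSUR_hat = x_norm.denormalize(XSUR_hat_n)                 # step 6: Denorm
    YH_hat = y_norm.denormalize(YH_hat_n)
    XSUR = sam(torch.cat([XL, XH], dim=1), M_raw, H)          # step 7: SAM target
    loss = torch.mean((XSUR - XSUR_hat) ** 2) \
         + torch.mean((YH - YH_hat) ** 2)                     # step 8: L_SUR + L_TS
    opt.zero_grad()
    loss.backward()                                           # step 9: update f, M, Agg
    opt.step()
    return float(loss.detach())
```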
4 EXPERIMENTS

4.1 SETUP

Datasets. We conduct experiments using six time-series datasets as leveraged in (Liu et al., 2024a): the daily reported currency exchange rates (Exchange) (Lai et al., 2018); the weekly reported influenza-like illness patients (ILI) (Kamarthi et al., 2021); and the hourly and minutely reported electricity transformer temperatures (ETTh1/ETTh2 and ETTm1/ETTm2, respectively) (Zhou et al., 2021). We follow the established experimental setups and target variable selections of previous works (Wu et al., 2021; 2022; Nie et al., 2023; Liu et al., 2024b). Datasets such as Traffic (PeMS) (Zhao et al., 2017) and Weather (Wu et al., 2021) are excluded from our evaluations, as their time series exhibit near-stationary behavior with only moderate distribution shift issues. Further details on the dataset differences are discussed in Appendix C.1.

Baselines. We include two types of baselines for a comprehensive evaluation of ShifTS. Forecasting Model Baselines: since ShifTS is model-agnostic, we include six time-series forecasting models (referred to as 'Model' in Tables 1 and 4): Informer (Zhou et al., 2021), Pyraformer (Liu et al., 2021), Crossformer (Zhang & Yan, 2022), PatchTST (Nie et al., 2023), TimeMixer (Wang et al., 2024), and iTransformer (Liu et al., 2024b), the last two of which are state-of-the-art (SOTA) forecasting models. These models are used to demonstrate that ShifTS consistently enhances forecasting accuracy across various models, including SOTA ones. Distribution Shift Baselines: we compare ShifTS with various distribution shift methods (referred to as 'Method' in Table 2): (1) three non-stationary methods for addressing temporal distribution shifts in time-series forecasting, namely N-S Trans. (Liu et al., 2022), RevIN (Kim et al., 2021), and SAN (Liu et al., 2023); we omit Dish-TS (Fan et al., 2023) and SIN (Han et al., 2024) from the main text due to their instability on univariate targets; (2) four concept drift methods, including GroupDRO (Sagawa et al., 2019), IRM (Arjovsky et al., 2019), VREx (Krueger et al., 2021), and EIIL (Creager et al., 2021), which are primarily designed for general applications; and (3) three combined methods for both temporal distribution shifts and concept drift: IRM+RevIN, EIIL+RevIN, and the SOTA time-series distribution shift method FOIL (Liu et al., 2024a). These comparisons highlight the advantages of ShifTS in distribution shift generalization over existing distribution shift approaches.

Evaluation. We measure forecasting errors using the mean squared error (MSE) and mean absolute error (MAE): MSE = (1/n) Σ_{i=1}^{n} (y_i - ŷ_i)² and MAE = (1/n) Σ_{i=1}^{n} |y_i - ŷ_i|.

Reproducibility. All models are trained on NVIDIA Tesla V100 32GB GPUs. All training data and code are available at: https://github.com/AdityaLab/ShifTS. More experiment details are presented in Appendix C.2.

Table 1: Performance comparison on forecasting errors without (ERM) and with ShifTS. Employing ShifTS shows consistent performance gains agnostic to forecasting models. The top-performing method is in bold in the original. 'IMP.' denotes the average improvement over all horizons of ShifTS vs. ERM.

Model               | Crossformer (ICLR'23)       | PatchTST (ICLR'23)          | iTransformer (ICLR'24)
Method              | ERM          | ShifTS       | ERM          | ShifTS       | ERM          | ShifTS
Dataset   Horizon   | MSE    MAE   | MSE    MAE   | MSE    MAE   | MSE    MAE   | MSE    MAE   | MSE    MAE
ILI       24        | 3.409  1.604 | 0.674  0.590 | 0.772  0.634 | 0.656  0.618 | 0.824  0.653 | 0.799  0.642
          36        | 4.001  1.772 | 0.687  0.617 | 0.763  0.649 | 0.694  0.602 | 0.917  0.738 | 0.690  0.640
          48        | 3.720  1.724 | 0.652  0.611 | 0.753  0.692 | 0.654  0.630 | 0.772  0.699 | 0.680  0.665
          60        | 3.689  1.715 | 0.658  0.633 | 0.761  0.724 | 0.680  0.656 | 0.729  0.710 | 0.672  0.667
          IMP.      |              | 81.9%  64.0% |              | 12.0%  7.1%  |              | 13.8%  6.5%
Exchange  96        | 0.338  0.475 | 0.102  0.237 | 0.130  0.265 | 0.102  0.236 | 0.135  0.272 | 0.115  0.255
          192       | 0.566  0.622 | 0.203  0.338 | 0.247  0.394 | 0.194  0.332 | 0.250  0.376 | 0.209  0.343
          336       | 1.078  0.867 | 0.407  0.484 | 0.522  0.557 | 0.388  0.477 | 0.450  0.503 | 0.426  0.495
          720       | 1.292  0.963 | 1.165  0.813 | 1.171  0.824 | 0.995  0.747 | 1.501  0.941 | 1.138  0.827
          IMP.      |              | 53.5%  38.9% |              | 20.9%  12.6% |              | 15.2%  6.9%
ETTh1     96        | 0.145  0.312 | 0.055  0.180 | 0.064  0.193 | 0.056  0.181 | 0.061  0.190 | 0.056  0.181
          192       | 0.240  0.420 | 0.072  0.206 | 0.085  0.222 | 0.073  0.209 | 0.076  0.219 | 0.072  0.205
          336       | 0.240  0.424 | 0.084  0.228 | 0.096  0.244 | 0.089  0.235 | 0.086  0.227 | 0.083  0.225
          720       | 0.391  0.553 | 0.095  0.244 | 0.128  0.282 | 0.097  0.245 | 0.085  0.232 | 0.082  0.230
          IMP.      |              | 68.2%  48.8% |              | 14.5%  7.2%  |              | 5.1%   3.3%
ETTh2     96        | 0.255  0.408 | 0.137  0.286 | 0.154  0.309 | 0.139  0.287 | 0.141  0.292 | 0.137  0.288
          192       | 1.257  1.034 | 0.182  0.338 | 0.204  0.374 | 0.191  0.345 | 0.194  0.347 | 0.184  0.339
          336       | 0.783  0.771 | 0.234  0.388 | 0.252  0.406 | 0.222  0.381 | 0.229  0.383 | 0.225  0.381
          720       | 1.455  1.100 | 0.234  0.389 | 0.259  0.411 | 0.236  0.390 | 0.266  0.413 | 0.235  0.390
          IMP.      |              | 71.4%  52.9% |              | 9.2%   6.5%  |              | 5.4%   2.5%
ETTm1     96        | 0.050  0.174 | 0.028  0.126 | 0.031  0.135 | 0.029  0.128 | 0.030  0.131 | 0.030  0.131
          192       | 0.271  0.454 | 0.043  0.158 | 0.048  0.166 | 0.044  0.161 | 0.049  0.171 | 0.046  0.165
          336       | 0.731  0.805 | 0.057  0.184 | 0.058  0.190 | 0.058  0.186 | 0.066  0.199 | 0.059  0.188
          720       | 0.829  0.849 | 0.083  0.219 | 0.083  0.223 | 0.080  0.219 | 0.082  0.219 | 0.079  0.217
          IMP.      |              | 77.3%  61.0% |              | 4.6%   3.0%  |              | 5.1%   2.5%
ETTm2     96        | 0.153  0.315 | 0.069  0.190 | 0.078  0.206 | 0.067  0.188 | 0.073  0.200 | 0.073  0.195
          192       | 0.408  0.526 | 0.105  0.242 | 0.113  0.246 | 0.101  0.237 | 0.119  0.251 | 0.108  0.248
          336       | 0.428  0.504 | 0.146  0.289 | 0.176  0.320 | 0.134  0.278 | 0.157  0.302 | 0.144  0.291
          720       | 1.965  1.205 | 0.191  0.342 | 0.220  0.368 | 0.185  0.334 | 0.196  0.347 | 0.193  0.344
          IMP.      |              | 71.3%  52.0% |              | 15.9%  8.6%  |              | 4.8%   2.1%
4.2 PERFORMANCE IMPROVEMENT ACROSS BASE FORECASTING MODELS

To evaluate the effectiveness of ShifTS in reducing forecasting errors, we conduct experiments comparing performance with and without ShifTS across popular time-series datasets and four different forecasting horizons. These experiments utilize five transformer-based models and one MLP-based model. Evaluation results for Crossformer, PatchTST, and iTransformer are presented in Table 1, while additional results for older models, including Informer, Pyraformer, and TimeMixer, are provided in Table 4 in Appendix D.1.

Table 2: Averaged performance comparison between ShifTS and distribution shift baselines with Crossformer. ShifTS achieves the best and second-best performance in 6 and 2 out of 8 evaluations, respectively (best results are bolded and second-best results underlined in the original).

Dataset                        | ILI           | Exchange      | ETTh1         | ETTh2
Method                         | MSE    MAE    | MSE    MAE    | MSE    MAE    | MSE    MAE
Base            ERM            | 3.705  1.704  | 0.819  0.732  | 0.254  0.427  | 0.937  0.828
Concept Drift   GroupDRO       | 2.285  1.287  | 0.821  0.751  | 0.278  0.453  | 1.150  0.936
Method          IRM            | 2.248  1.237  | 0.846  0.754  | 0.201  0.367  | 0.878  0.792
                VREx           | 2.285  1.286  | 0.821  0.742  | 0.314  0.486  | 1.142  0.938
                EIIL           | 2.036  1.159  | 0.822  0.749  | 0.212  0.433  | 1.122  0.930
Temporal Shift  RevIN          | 0.815  0.708  | 0.475  0.476  | 0.085  0.224  | 0.205  0.358
Method          N-S Trans.     | 0.781  0.688  | 0.484  0.481  | 0.086  0.226  | 0.203  0.355
                SAN            | 0.757  0.715  | 0.415  0.453  | 0.088  0.225  | 0.199  0.348
Combined        IRM+RevIN      | 0.809  0.711  | 0.481  0.476  | 0.089  0.231  | 0.202  0.362
Method          EIIL+RevIN     | 0.799  0.706  | 0.483  0.485  | 0.085  0.225  | 0.218  0.380
                FOIL           | 0.735  0.651  | 0.497  0.481  | 0.081  0.219  | 0.206  0.357
ShifTS (Ours)                  | 0.668  0.613  | 0.470  0.468  | 0.076  0.214  | 0.194  0.348

The experimental results consistently demonstrate the effectiveness of ShifTS in improving forecasting performance across agnostic forecasting models. Notably, ShifTS achieves reductions in forecasting errors of up to 15% when integrated with advanced models like iTransformer. Furthermore, ShifTS shows even greater relative effectiveness when applied to older or less advanced forecasting models, such as Informer and Crossformer. Beyond the observed performance improvements, our results reveal two further insights.

The effectiveness of ShifTS relies on the insights provided by the horizon data. The performance improvements vary across datasets. For instance, applying ShifTS to the ILI and Exchange datasets yields greater overall performance improvements than applying it to the ETT datasets. To interpret this phenomenon and determine the conditions under which ShifTS is most effective in practice, we quantify the mutual information I(X^H; Y^H) shared between X^H and Y^H (detailed setup provided in Appendix C.3). We plot the relationship between I(X^H; Y^H) and performance gains in Figure 3(a). The scatter plot shows a positive linear correlation between I(X^H; Y^H) and performance gains, supported by a p-value p = 0.012 ≤ 0.05. This observation suggests that the more useful information the exogenous features carry within the horizon window, the more substantial the performance gains achieved by ShifTS. This insight aligns with the design of ShifTS, as higher mutual information indicates clearer correlations and causal relationships between the target Y^H and the exogenous features in the horizon window, relationships that are often overlooked by conventional time-series models. Stronger correlations imply a greater extent of misrepresented dependencies under ERM, leading to more significant improvements with ShifTS.
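The mutual-information diagnostic used here (and formalized in Equation 5 of Appendix C.3) can be reproduced with a simple histogram estimator; a sketch, where the bin count and the flattening of windows into scalar samples are our assumptions:

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of I(X; Y) = sum p(x,y) log[p(x,y) / (p(x)p(y))]."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()                       # joint distribution
    px = pxy.sum(axis=1, keepdims=True)         # marginal of x
    py = pxy.sum(axis=0, keepdims=True)         # marginal of y
    mask = pxy > 0                              # skip zero cells to avoid log(0)
    return float((pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])).sum())

rng = np.random.default_rng(2)
x = rng.normal(size=5000)
print(mutual_information(x, 2 * x + rng.normal(scale=0.5, size=5000)))  # high MI
print(mutual_information(x, rng.normal(size=5000)))  # ~ 0, up to binning bias
```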
The extent of the quantitative performance gains achieved by ShifTS depends on the underlying forecasting model. For example, the performance gains of ShifTS on the simpler Informer model are more significant than on the SOTA iTransformer model. Importantly, we emphasize two key observations. First, even when applied to iTransformer, ShifTS demonstrates a notable performance boost of approximately 15% on both the ILI and Exchange datasets, consistent with the aforementioned intuition. Second, integrating ShifTS into forecasting processes should, at the very least, maintain or improve the performance of standalone forecasting models, as evidenced by the consistent performance enhancements observed across all datasets with iTransformer.

4.3 COMPARISON WITH DISTRIBUTION SHIFT METHODS

To illustrate the advantages of ShifTS over other model-agnostic approaches for addressing distribution shifts, we perform experiments comparing its performance against distribution shift baselines, including methods designed for concept drift, temporal shift, and combined approaches. We exclude evaluations on the minutely ETT datasets, following (Liu et al., 2024a), as their data characteristics and forecasting performance closely resemble those of the hourly ETT datasets. The experiments utilize Crossformer as the forecasting model, and the averaged results are presented in Table 2.

Figure 3: Left (a): the performance gains of ShifTS versus the mutual information shared between X^H and Y^H; greater mutual information correlates with more significant performance gains achieved by ShifTS. Right (b): ablation study; addressing either concept drift or temporal shift individually provides certain benefits in forecasting accuracy, and ShifTS, which tackles both, achieves the lowest forecasting error.

The results highlight the advantages of ShifTS over existing distribution shift methods, achieving the highest average forecasting accuracy in 6 out of 8 evaluations and ranking second in the remaining 2. Notably, as discussed in Section 3.2, we choose RevIN because it is one of the most popular yet simple and effective temporal shift methods. However, ShifTS is flexible and can integrate more advanced temporal shift methods to further enhance performance. While exploring these advanced temporal shift methods is beyond the scope of this work, we illustrate the potential benefits of such integration. For example, on the Exchange dataset, where SAN outperforms ShifTS, incorporating SAN in place of RevIN within ShifTS leads to even greater accuracy improvements. Detailed MSE values for these evaluations are provided in Table 3. Furthermore, the results underscore the importance of addressing concept drift using SAM once temporal shifts are effectively addressed.

Table 3: MSE comparison between ShifTS, SAN, and ShifTS+SAN on the Exchange dataset. ShifTS+SAN achieves the best performance in all evaluations.

Horizon   | ShifTS  | SAN     | ShifTS w. SAN
96        | 0.102   | 0.091   | 0.089
192       | 0.207   | 0.195   | 0.187
336       | 0.407   | 0.373   | 0.372
720       | 1.165   | 1.001   | 0.981
Avg.      | 0.470   | 0.415   | 0.407

4.4 ABLATION STUDY

To demonstrate the effectiveness of each module in ShifTS, we conducted an ablation study using two modified versions: ShifTS\TS and ShifTS\CD. ShifTS\TS excludes the temporal shift adjustment via RevIN, while ShifTS\CD excludes the concept drift handling via SAM.
Additionally, conventional forecasting models that address neither concept drift nor temporal shift are denoted as 'Base'. We performed experiments on the Exchange dataset using the previous three baseline forecasting models, with a fixed forecasting horizon of 96. The results are visualized in Figure 3(b) and reveal the following observations. First, addressing temporal shift and concept drift together, as implemented in ShifTS, yields lower forecasting errors than addressing only one type of distribution shift (ShifTS\TS and ShifTS\CD) or neither (Base). This suggests that temporal shift and concept drift are interrelated and co-exist in time-series data, and that addressing both provides significant benefits. Second, for forecasting models that inherently address temporal shift, such as PatchTST and iTransformer, which incorporate norm/denorm operations, the performance gains from mitigating concept drift are more significant than those from additionally mitigating temporal shift using RevIN. In contrast, for models without any temporal shift mitigation, such as Crossformer, tackling temporal shift leads to a greater performance improvement than tackling concept drift. These observations suggest that mitigating temporal shift is a prerequisite for mitigating concept drift, which matches the intuition in Section 3.2.

5 CONCLUSION AND LIMITATION DISCUSSION

In this paper, we identify the challenges posed by both concept drift and temporal shift in time-series forecasting. While mitigating temporal shifts has garnered significant attention within the time-series forecasting community, concept drift has remained largely overlooked. To bridge this gap, we propose SAM, a method designed to effectively address concept drift in time-series forecasting by modeling conditional distributions through surrogate exogenous features. Building on SAM, we introduce ShifTS, a model-agnostic framework that handles concept drift in practice by first mitigating temporal shift as a preliminary step. Our comprehensive evaluations highlight the effectiveness of ShifTS, and the benefits of SAM are further demonstrated through an ablation study. We discuss the limitations of our approach in Appendix E.

6 ACKNOWLEDGMENT

This paper was supported in part by the NSF (Expeditions CCF-1918770, CAREER IIS-2028586, Medium IIS-1955883, Medium IIS-2106961, Medium IIS-2403240, PIPP CCF-2200269), the CDC MInD program, a Meta faculty gift, and funds/computing resources from Georgia Tech and GTRI.

REFERENCES

Kartik Ahuja, Ethan Caballero, Dinghuai Zhang, Jean-Christophe Gagnon-Audet, Yoshua Bengio, Ioannis Mitliagkas, and Irina Rish. Invariance principle meets information bottleneck for out-of-distribution generalization. Advances in Neural Information Processing Systems, 34:3438–3450, 2021.

Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. arXiv preprint arXiv:1907.02893, 2019.

Peter L Bartlett. Learning with a slowly changing distribution. In Proceedings of the Fifth Annual Workshop on Computational Learning Theory, pp. 243–252, 1992.

Elliot Creager, Jörn-Henrik Jacobsen, and Richard Zemel. Environment inference for invariant learning. In International Conference on Machine Learning, pp. 2189–2200. PMLR, 2021.

Chirag Deb, Fan Zhang, Junjing Yang, Siew Eang Lee, and Kwok Wei Shah. A review on time series forecasting techniques for building energy consumption. Renewable and Sustainable Energy Reviews, 74:902–924, 2017.
Yuntao Du, Jindong Wang, Wenjie Feng, Sinno Pan, Tao Qin, Renjun Xu, and Chongjun Wang. AdaRNN: Adaptive learning and forecasting of time series. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, pp. 402–411, 2021.

Wei Fan, Pengyang Wang, Dongkun Wang, Dongjie Wang, Yuanchun Zhou, and Yanjie Fu. Dish-TS: A general paradigm for alleviating distribution shift in time series forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pp. 7522–7529, 2023.

João Gama, Indrė Žliobaitė, Albert Bifet, Mykola Pechenizkiy, and Abdelhamid Bouchachia. A survey on concept drift adaptation. ACM Computing Surveys (CSUR), 46(4):1–37, 2014.

Clive WJ Granger. Time series concepts for conditional distributions. Oxford Bulletin of Economics and Statistics, 65:689–701, 2003.

Husheng Guo, Shuai Zhang, and Wenjian Wang. Selective ensemble-based online adaptive deep neural networks for streaming data with concept drift. Neural Networks, 142:437–456, 2021.

Lu Han, Han-Jia Ye, and De-Chuan Zhan. SIN: Selective and interpretable normalization for long-term time series forecasting. In Forty-first International Conference on Machine Learning, 2024.

Kari Jaakkola, Annika Saukkoriipi, Jari Jokelainen, Raija Juvonen, Jaana Kauppila, Olli Vainio, Thedi Ziegler, Esa Rönkkö, Jouni JK Jaakkola, Tiina M Ikäheimo, et al. Decline in temperature and humidity increases the occurrence of influenza in cold climate. Environmental Health, 13:1–8, 2014.

Harshavardhan Kamarthi, Lingkai Kong, Alexander Rodriguez, Chao Zhang, and B Aditya Prakash. When in doubt: Neural non-parametric uncertainty quantification for epidemic forecasting. Advances in Neural Information Processing Systems, 34:19796–19807, 2021.

Taesung Kim, Jinhee Kim, Yunwon Tae, Cheonbok Park, Jang-Ho Choi, and Jaegul Choo. Reversible instance normalization for accurate time-series forecasting against distribution shift. In International Conference on Learning Representations, 2021.

David Krueger, Ethan Caballero, Joern-Henrik Jacobsen, Amy Zhang, Jonathan Binas, Dinghuai Zhang, Remi Le Priol, and Aaron Courville. Out-of-distribution generalization via risk extrapolation (REx). In International Conference on Machine Learning, pp. 5815–5826. PMLR, 2021.

Anthony Kuh, Thomas Petsche, and Ronald Rivest. Learning time-varying concepts. Advances in Neural Information Processing Systems, 3, 1990.

Guokun Lai, Wei-Cheng Chang, Yiming Yang, and Hanxiao Liu. Modeling long- and short-term temporal patterns with deep neural networks. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, pp. 95–104, 2018.

Haoxin Liu, Harshavardhan Kamarthi, Lingkai Kong, Zhiyuan Zhao, Chao Zhang, and B Aditya Prakash. Time-series forecasting for out-of-distribution generalization using invariant learning. In Forty-first International Conference on Machine Learning, 2024a.

Shizhan Liu, Hang Yu, Cong Liao, Jianguo Li, Weiyao Lin, Alex X Liu, and Schahram Dustdar. Pyraformer: Low-complexity pyramidal attention for long-range time series modeling and forecasting. In International Conference on Learning Representations, 2021.

Yong Liu, Haixu Wu, Jianmin Wang, and Mingsheng Long. Non-stationary transformers: Exploring the stationarity in time series forecasting. Advances in Neural Information Processing Systems, 35:9881–9893, 2022.

Yong Liu, Tengge Hu, Haoran Zhang, Haixu Wu, Shiyu Wang, Lintao Ma, and Mingsheng Long.
iTransformer: Inverted transformers are effective for time series forecasting. In International Conference on Learning Representations (ICLR), 2024b.

Zhiding Liu, Mingyue Cheng, Zhi Li, Zhenya Huang, Qi Liu, Yanhu Xie, and Enhong Chen. Adaptive normalization for non-stationary time series forecasting: A temporal slice perspective. In Thirty-seventh Conference on Neural Information Processing Systems, 2023.

Jie Lu, Anjin Liu, Fan Dong, Feng Gu, Joao Gama, and Guangquan Zhang. Learning under concept drift: A review. IEEE Transactions on Knowledge and Data Engineering, 31(12):2346–2363, 2018.

Wang Lu, Jindong Wang, Xinwei Sun, Yiqiang Chen, and Xing Xie. Out-of-distribution representation learning for time series classification. In International Conference on Learning Representations, 2023.

Sarabeth M Mathis, Alexander E Webber, Tomás M León, Erin L Murray, Monica Sun, Lauren A White, Logan C Brooks, Alden Green, Addison J Hu, Roni Rosenfeld, et al. Evaluation of FluSight influenza forecasting in the 2021–22 and 2022–23 seasons with a new target laboratory-confirmed influenza hospitalizations. Nature Communications, 15(1):6289, 2024.

Yuqi Nie, Nam H. Nguyen, Phanwadee Sinthong, and Jayant Kalagnanam. A time series is worth 64 words: Long-term forecasting with transformers. In International Conference on Learning Representations, 2023.

Jens Nielsen, Anne Mazick, Steffen Glismann, and Kåre Mølbak. Excess mortality related to seasonal influenza and extreme temperatures in Denmark, 1994–2010. BMC Infectious Diseases, 11:1–13, 2011.

Boris N. Oreshkin, Dmitri Carpov, Nicolas Chapados, and Yoshua Bengio. N-BEATS: Neural basis expansion analysis for interpretable time series forecasting. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=r1ecqn4YwB.

Mohammad Pezeshki, Oumar Kaba, Yoshua Bengio, Aaron C Courville, Doina Precup, and Guillaume Lajoie. Gradient starvation: A learning proclivity in neural networks. Advances in Neural Information Processing Systems, 34:1256–1272, 2021.

Shiori Sagawa, Pang Wei Koh, Tatsunori B Hashimoto, and Percy Liang. Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization. arXiv preprint arXiv:1911.08731, 2019.

David Salinas, Valentin Flunkert, Jan Gasthaus, and Tim Januschowski. DeepAR: Probabilistic forecasting with autoregressive recurrent networks. International Journal of Forecasting, 36(3):1181–1191, 2020.

Alex Sherstinsky. Fundamentals of recurrent neural network (RNN) and long short-term memory (LSTM) network. Physica D: Nonlinear Phenomena, 404:132306, 2020.

Hidetoshi Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 90(2):227–244, 2000.

José F Torres, Dalil Hadjout, Abderrazak Sebaa, Francisco Martínez-Álvarez, and Alicia Troncoso. Deep learning for time series forecasting: A survey. Big Data, 9(1):3–21, 2021.

Alexey Tsymbal. The problem of concept drift: Definitions and related work. Computer Science Department, Trinity College Dublin, 106(2):58, 2004.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.

Shiyu Wang, Haixu Wu, Xiaoming Shi, Tengge Hu, Huakun Luo, Lintao Ma, James Y Zhang, and Jun Zhou.
TimeMixer: Decomposable multiscale mixing for time series forecasting. In International Conference on Learning Representations (ICLR), 2024.

Qingsong Wen, Weiqi Chen, Liang Sun, Zhang Zhang, Liang Wang, Rong Jin, Tieniu Tan, et al. OneNet: Enhancing time series forecasting models under concept drift by online ensembling. Advances in Neural Information Processing Systems, 36, 2024.

Haixu Wu, Jiehui Xu, Jianmin Wang, and Mingsheng Long. Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting. Advances in Neural Information Processing Systems, 34:22419–22430, 2021.

Haixu Wu, Tengge Hu, Yong Liu, Hang Zhou, Jianmin Wang, and Mingsheng Long. TimesNet: Temporal 2D-variation modeling for general time series analysis. In The Eleventh International Conference on Learning Representations, 2022.

Xinyu Zhang, Shanshan Feng, Jianghong Ma, Huiwei Lin, Xutao Li, Yunming Ye, Fan Li, and Yew Soon Ong. FRNet: Frequency-based rotation network for long-term time series forecasting. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pp. 3586–3597, 2024.

Yunhao Zhang and Junchi Yan. Crossformer: Transformer utilizing cross-dimension dependency for multivariate time series forecasting. In The Eleventh International Conference on Learning Representations, 2022.

Zheng Zhao, Weihai Chen, Xingming Wu, Peter CY Chen, and Jingmeng Liu. LSTM network: A deep learning approach for short-term traffic forecast. IET Intelligent Transport Systems, 11(2):68–75, 2017.

Zhiyuan Zhao, Alexander Rodriguez, and B Aditya Prakash. Performative time-series forecasting. arXiv preprint arXiv:2310.06077, 2023.

Yu Zheng, Licia Capra, Ouri Wolfson, and Hai Yang. Urban computing: Concepts, methodologies, and applications. ACM Transactions on Intelligent Systems and Technology (TIST), 5(3):1–55, 2014.

Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang. Informer: Beyond efficient transformer for long sequence time-series forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pp. 11106–11115, 2021.

Tian Zhou, Ziqing Ma, Qingsong Wen, Xue Wang, Liang Sun, and Rong Jin. FEDformer: Frequency enhanced decomposed transformer for long-term series forecasting. In Proc. 39th International Conference on Machine Learning (ICML 2022), 2022.

Yunyue Zhu and Dennis Shasha. StatStream: Statistical monitoring of thousands of data streams in real time. In VLDB'02: Proceedings of the 28th International Conference on Very Large Databases, pp. 358–369. Elsevier, 2002.

A RELATED WORKS

Time-Series Forecasting. Recent works in deep learning have achieved notable successes in time-series forecasting, such as RNNs, LSTNet, and N-BEATS (Sherstinsky, 2020; Lai et al., 2018; Oreshkin et al., 2020). State-of-the-art models build upon the success of self-attention mechanisms (Vaswani et al., 2017) with transformer-based architectures and significantly improve forecasting accuracy; examples include Informer, Autoformer, FEDformer, PatchTST, iTransformer, and FRNet (Zhou et al., 2021; Wu et al., 2021; Zhou et al., 2022; Nie et al., 2023; Liu et al., 2024b; Zhang et al., 2024). However, these advanced models primarily rely on empirical risk minimization (ERM) under IID assumptions, i.e., that the train and test datasets follow the same data distribution, which exhibits limitations when distribution shifts arise in time series.

Distribution Shift in Time-Series Forecasting.
In recent decades, learning under non-stationary distributions, where the target distribution over instances changes with time, has attracted attention within learning theory (Kuh et al., 1990; Bartlett, 1992). In the context of time series, distribution shift can be categorized into concept drift and temporal shift. General concept drift methods (via invariant learning) (Arjovsky et al., 2019; Ahuja et al., 2021; Krueger et al., 2021; Pezeshki et al., 2021; Sagawa et al., 2019) assume instances sampled from various environments and propose to identify and utilize invariant predictors across these environments. However, when applied to time-series forecasting, these methods encounter limitations. Additional methods specifically tailored for time-series data also face certain constraints: DIVERSITY (Lu et al., 2023) is designed only for time-series classification and detection; OneNet (Wen et al., 2024) is tailored solely to online forecasting scenarios using online ensembling; and PeTS (Zhao et al., 2023) focuses on distribution shifts induced by the specific phenomenon of performativity. Other works specifically tackle temporal shift issues in time-series forecasting (Kim et al., 2021; Liu et al., 2022; Fan et al., 2023; Liu et al., 2023). These approaches implement carefully crafted normalization strategies to ensure that both the lookback and horizon of a univariate time series adhere to normalized distributions. This alignment helps alleviate potential temporal shifts, where the statistical properties of the lookback and horizon time series may differ over time.

B TEMPORAL SHIFT AND CONCEPT DRIFT

To highlight the differences between concept drift and temporal shift, we provide visualizations of both phenomena: Figure 4 illustrates temporal shift, while Figure 5 demonstrates concept drift².

²Figures adapted from: https://github.com/ts-kim/RevIN

Temporal shift refers to changes in the statistical properties of univariate time-series data, such as mean, variance, and autocorrelation structure, over time. For instance, the mean and variance of the given time series shift between the lookback window and the horizon window, as depicted in Figure 4. This issue is inherent to time-series forecasting and can occur in any given time series, regardless of whether the data pertain to the target series or to exogenous features.

In contrast, concept drift describes changes in the correlations between exogenous features and the target series over time. Figure 5 illustrates this phenomenon, where increases in the exogenous features at earlier time steps lead to increases in the target series, while increases at later time steps result in decreases. Unlike temporal shift, concept drift involves multiple correlated time series and is not an inherent issue of univariate time-series analysis.

Figure 4: Demonstration of the temporal shift phenomenon within time-series data, showcasing variations in statistical properties, including mean and variance, over time (red: ground truth; yellow: N-BEATS prediction; blue: N-BEATS+RevIN prediction).

Figure 5: Demonstration of the concept drift phenomenon within time-series data, showcasing variations in the correlation structure between the target series Y and an exogenous feature X over time (red: ground truth; yellow: N-BEATS prediction; blue: N-BEATS+RevIN prediction).
C ADDITIONAL EXPERIMENT DETAILS

C.1 DATASETS

We conduct experiments on six real-world datasets, which are commonly used as benchmarks:

• ILI. The ILI dataset collects data on influenza-like illness patients weekly, with eight variables.

• Exchange. The Exchange dataset records the daily exchange rates of eight currencies.

• ETT. The ETT dataset contains four sub-datasets: ETTh1, ETTh2, ETTm1, and ETTm2. They record electricity transformer temperatures from two separate counties in China (distinguished by '1' and '2') at two granularities, minutely and hourly (distinguished by 'm' and 'h'). All sub-datasets have seven variables/features.

We follow (Wu et al., 2022; Nie et al., 2023; Liu et al., 2024b) to preprocess the data, which guides splitting the datasets into train/validation/test sets and selecting the target variables. All datasets are preprocessed using zero-mean normalization. Additional popular time-series datasets, such as Traffic (which records road occupancy rates from various sensors on San Francisco freeways), Electricity (which tracks hourly electricity consumption for 321 customers), and Weather (which collects 21 meteorological indicators in Germany, such as humidity and air temperature), are omitted from our evaluations. These datasets exhibit strong periodic signals and display near-stationary properties, making distribution shift issues less prevalent. A visual comparison between the ETTh1 and Traffic datasets, shown in Figure 6, further supports this observation.

Figure 6: Distribution shift issues across datasets. Left (a): ETT. Both temporal shift and concept drift are present: the target series shows varying statistics over time (e.g., lower variance in earlier periods and higher variance later), causing temporal shift, and the correlation between X and Y is unclear and unstable, causing concept drift. Right (b): Traffic. Both temporal shift and concept drift are moderate: the target series exhibits near-periodicity, making temporal shift moderate, and the correlation between X and Y remains stable (e.g., both increase or decrease simultaneously), making concept drift moderate.

C.2 BASELINE IMPLEMENTATION

We follow the commonly adopted setup for defining the forecasting horizon window length, as outlined in prior works (Wu et al., 2022; Nie et al., 2023; Liu et al., 2024b). Specifically, for the ETT and Exchange datasets, the forecasting horizon windows are chosen from [96, 192, 336, 720], with a fixed lookback window size of 96 and a consistent label window size of 48 for the decoder (if required). Similarly, for the weekly reported ILI dataset, we employ forecasting horizon windows from [24, 36, 48, 60], with a fixed lookback window size of 36 and a constant label window size of 18 for the decoder (if required).

In the context of concept drift baselines, several baselines such as GroupDRO, IRM, and VREx require environment labels, which are typically absent in time-series datasets. To address this, we partition the training set into k equal-length time segments to serve as predefined environment labels (sketched below). For the baseline time-series forecasting models, we follow the implementations and suggested hyperparameters (with additional tuning) sourced from the Time Series Library³. For concept drift baselines, we utilize the implementations and hyperparameter tuning strategies recommended by DomainBed⁴. For temporal shift baselines, we adopt the implementations and hyperparameter configurations outlined in their respective papers.

³https://github.com/thuml/Time-Series-Library
⁴https://github.com/facebookresearch/DomainBed
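The equal-length segmentation used as predefined environment labels takes only a few lines; a sketch, where the value of k and the helper name are our assumptions (the paper does not report k):

```python
import numpy as np

def environment_labels(n_train, k=4):
    """Assign each training time step to one of k equal-length segments,
    used as predefined environment labels for GroupDRO/IRM/VREx."""
    bounds = np.linspace(0, n_train, k + 1).astype(int)   # segment boundaries
    return np.repeat(np.arange(k), np.diff(bounds))       # one label per time step

labels = environment_labels(n_train=1000, k=4)
print(labels.shape, np.bincount(labels))  # (1000,) [250 250 250 250]
```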
Additionally, we add an MLP layer to the end of PatchTST to effectively utilize exogenous features, following (Liu et al., 2024a). In the ablation study, for the implementation of PatchTST and iTransformer, we follow the original approach by applying norm and denorm operations to the 'Base' model. To clarify our notation, ShifTS\TS refers to the model with standard norm/denorm operations and SAM, while ShifTS\CD denotes the version where the regular norm/denorm is replaced with RevIN.

C.3 MUTUAL INFORMATION VISUALIZATION

For a given time-series dataset, we compute the mutual information I(X^H; Y^H) for each training time step and each exogenous feature dimension individually, following:

    I(X^H; Y^H) = Σ_{x∈X^H} Σ_{y∈Y^H} P(x, y) log [ P(x, y) / (P(x)P(y)) ]     (5)

We then average the mutual information across all time steps for each exogenous feature dimension and identify the maximum averaged mutual information over all feature dimensions. This process allows us to assess the information content of each feature dimension in relation to the target series. We visualize the maximum averaged mutual information plotted against the corresponding performance gain in Figure 3(a). This visualization provides insight into how the information content of different feature dimensions relates to the performance improvement achieved by the forecasting model.

D ADDITIONAL RESULTS

D.1 EVALUATIONS ON AGNOSTIC PERFORMANCE GAINS

To further demonstrate the benefit of ShifTS in improving forecasting accuracy over agnostic forecasting models, we additionally evaluate the performance differences without and with ShifTS on Informer, Pyraformer, and TimeMixer. The detailed results are presented in Table 4. The additional evaluations again show consistent performance improvements for these models. Moreover, compared to the results in Table 1, the performance gains on these older models are even more significant. This observation highlights the need to mitigate both concept drift and temporal shift in time-series forecasting, as such problems are rarely considered in these models, unlike later models (e.g., PatchTST and iTransformer, which incorporate normalization/denormalization processes).

E LIMITATION DISCUSSION

This work introduces SAM to address concept drift and proposes an integrated framework, ShifTS, which combines SAM with temporal shift mitigation techniques to enhance the accuracy of time-series forecasting. Extensive empirical evaluations support the effectiveness of these methods. However, the limitations of this study lie in two aspects. First, the distribution shift methods in time-series forecasting, including ShifTS, lack a theoretical guarantee; for example, no analysis quantifies how much the error bound can be tightened by addressing concept drift or temporal shift compared to vanilla time-series forecasting methods. Second, while this paper defines the concept drift and temporal shift issues within the context of time-series forecasting, SAM and ShifTS are not the only possible solutions; exploring alternative approaches remains an avenue for future research beyond the scope of this work. These two limitations highlight opportunities for future investigation.
Table 4: Performance comparison on forecasting errors without (ERM) and with ShifTS on Informer, Pyraformer, and TimeMixer. Employing ShifTS again shows near-consistent performance gains agnostic to forecasting models. The top-performing method is in bold in the original. 'IMP.' denotes the average improvement over all horizons of ShifTS vs. ERM.

Model               | Informer (AAAI'21)          | Pyraformer (ICLR'21)        | TimeMixer (ICLR'24)
Method              | ERM          | ShifTS       | ERM          | ShifTS       | ERM          | ShifTS
Dataset   Horizon   | MSE    MAE   | MSE    MAE   | MSE    MAE   | MSE    MAE   | MSE    MAE   | MSE    MAE
ILI       24        | 5.032  1.935 | 1.030  0.812 | 4.692  1.898 | 0.979  0.749 | 0.853  0.733 | 0.789  0.702
          36        | 4.475  1.876 | 1.046  0.850 | 4.814  1.950 | 0.866  0.740 | 0.721  0.676 | 0.697  0.665
          48        | 4.506  1.879 | 0.918  0.818 | 4.109  1.801 | 0.789  0.732 | 0.737  0.692 | 0.741  0.711
          60        | 4.313  1.850 | 0.957  0.839 | 4.483  1.850 | 0.723  0.698 | 0.788  0.723 | 0.670  0.659
          IMP.      |              | 78.4%  56.0% |              | 81.5%  61.1% |              | 6.3%   3.0%
Exchange  96        | 0.839  0.746 | 0.137  0.277 | 0.410  0.525 | 0.145  0.275 | 0.127  0.268 | 0.098  0.234
          192       | 0.862  0.773 | 0.210  0.346 | 0.529  0.610 | 0.300  0.404 | 0.229  0.355 | 0.214  0.352
          336       | 1.597  1.063 | 0.378  0.485 | 0.851  0.778 | 0.440  0.506 | 0.553  0.560 | 0.440  0.491
          720       | 4.358  1.935 | 0.760  0.655 | 1.558  1.067 | 1.509  0.963 | 1.173  0.834 | 0.962  0.747
          IMP.      |              | 79.5%  59.7% |              | 39.8%  31.5% |              | 16.9%  9.1%
ETTh1     96        | 0.891  0.863 | 0.095  0.231 | 0.653  0.748 | 0.065  0.197 | 0.059  0.184 | 0.059  0.187
          192       | 1.027  0.958 | 0.096  0.237 | 0.853  0.828 | 0.075  0.210 | 0.099  0.247 | 0.077  0.211
          336       | 1.055  0.961 | 0.092  0.237 | 0.705  0.797 | 0.092  0.238 | 0.121  0.279 | 0.098  0.246
          720       | 1.077  0.969 | 0.100  0.252 | 0.562  0.695 | 0.126  0.279 | 0.139  0.299 | 0.099  0.252
          IMP.      |              | 90.7%  74.5% |              | 86.4%  69.6% |              | 23.3%  10.1%
ETTh2     96        | 3.195  1.651 | 0.232  0.381 | 1.598  1.127 | 0.156  0.307 | 0.152  0.303 | 0.146  0.299
          192       | 3.569  1.778 | 0.334  0.464 | 3.314  1.599 | 0.217  0.367 | 0.195  0.349 | 0.185  0.343
          336       | 2.556  1.468 | 0.400  0.512 | 2.571  1.489 | 0.245  0.398 | 0.238  0.392 | 0.230  0.381
          720       | 2.723  1.532 | 0.489  0.579 | 2.294  1.409 | 0.261  0.410 | 0.273  0.421 | 0.249  0.397
          IMP.      |              | 82.0%  69.5% |              | 90.6%  73.5% |              | 5.3%   2.9%
ETTm1     96        | 0.320  0.433 | 0.055  0.175 | 0.130  0.298 | 0.028  0.125 | 0.030  0.128 | 0.029  0.126
          192       | 0.459  0.582 | 0.079  0.211 | 0.240  0.4112| 0.045  0.162 | 0.047  0.165 | 0.047  0.164
          336       | 0.457  0.556 | 0.104  0.243 | 0.359  0.512 | 0.062  0.192 | 0.063  0.191 | 0.060  0.189
          720       | 0.735  0.760 | 0.148  0.294 | 0.657  0.750 | 0.091  0.231 | 0.083  0.223 | 0.081  0.220
          IMP.      |              | 80.7%  60.3% |              | 82.2%  62.6% |              | 2.3%   1.1%
ETTm2     96        | 0.191  0.345 | 0.154  0.298 | 0.275  0.422 | 0.075  0.200 | 0.079  0.205 | 0.075  0.201
          192       | 0.458  0.556 | 0.243  0.378 | 0.484  0.552 | 0.107  0.248 | 0.121  0.259 | 0.111  0.250
          336       | 0.606  0.624 | 0.515  0.539 | 1.138  0.909 | 0.146  0.293 | 0.150  0.295 | 0.148  0.294
          720       | 1.175  0.879 | 0.564  0.592 | 2.920  1.537 | 0.196  0.347 | 0.246  0.387 | 0.198  0.346
          IMP.      |              | 33.4%  23.0% |              | 82.8%  63.2% |              | 8.5%   4.1%
Pre-print

TACKLING TIME-SERIES FORECASTING GENERALIZATION VIA MITIGATING CONCEPT DRIFT

Zhiyuan Zhao, Haoxin Liu, B. Aditya Prakash
Georgia Institute of Technology, Atlanta, GA 30332, USA

ABSTRACT

Time-series forecasting finds broad applications in real-world scenarios. Due to the dynamic nature of time-series data, it is important for time-series forecasting models to handle potential distribution shifts over time. In this paper, we initially identify two types of distribution shifts in time series: concept drift and temporal shift. We acknowledge that, while existing studies primarily focus on addressing temporal shift issues in time-series forecasting, designing proper concept drift methods for time-series forecasting has received comparatively less attention. Motivated by the need to address potential concept drift, and given that conventional concept drift methods via invariant learning face certain challenges in time-series forecasting, we propose a soft attention mechanism that finds invariant patterns in both the lookback and horizon time series. Additionally, we emphasize the critical importance of mitigating temporal shifts as a preliminary to addressing concept drift. In this context, we introduce ShifTS, a method-agnostic framework designed to tackle temporal shift first and then concept drift within a unified approach. Extensive experiments demonstrate the efficacy of ShifTS in consistently enhancing the forecasting accuracy of agnostic models across multiple datasets and in outperforming existing concept drift, temporal shift, and combined baselines.

1 INTRODUCTION

Time-series forecasting finds applications in various real-world scenarios such as economics, urban computing, and epidemiology (Zhu & Shasha, 2002; Zheng et al., 2014; Deb et al., 2017; Mathis et al., 2024). These applications involve predicting future trends or events based on historical time-series data. For example, economists use forecasts to make financial and marketing plans, while sociologists use them to allocate resources and formulate policies for traffic or disease control. The recent advent of deep learning has revolutionized time-series forecasting, resulting in a series of advanced forecasting models (Lai et al., 2018; Torres et al., 2021; Salinas et al., 2020; Nie et al., 2023; Zhou et al., 2021). However, despite these successes, time-series forecasting faces certain challenges from distribution shifts due to the dynamic and complex nature of time-series data.

The distribution shifts in time series can be categorized into two types (Granger, 2003). First, the data distributions of the time series themselves can change over time, including shifts in mean, variance, and autocorrelation structure, which is referred to as non-stationarity or temporal shift in time-series forecasting (Shimodaira, 2000; Du et al., 2021). Second, time-series forecasting is compounded by unforeseen exogenous factors, which shift the distribution of the target time series. Such phenomena, categorized as concept drift problems in time-series forecasting (Gama et al., 2014; Lu et al., 2018), make it even more challenging. While prior research has investigated strategies to mitigate temporal shifts (Liu et al., 2022; Kim et al., 2021; Fan et al., 2023), addressing concept drift issues in time-series forecasting has been largely overlooked.
Although concept drift is a well-studied problem in general machine learning (Sagawa et al., 2019; Arjovsky et al., 2019; Ahuja et al., 2021), adapting these solutions to time-series forecasting is challenging. Many of these methods require environment labels, which are typically unavailable in time-series datasets (Liu et al., 2024a). Indeed, the few concept drift approaches 1 16 Oct 2025 Pre-print developed for time-series data are designed exclusively for online settings (Guo et al., 2021), which requires iterative retraining over time steps and is infeasible when applied to standard time-series forecasting tasks. Therefore, we aim to close this gap in the literature in this paper, that is, to mitigate concept drift in time-series forecasting for standard time-series forecasting tasks. The contributions of this paper are: 1. Concept Drift Method: We introduce soft attention masking (SAM) designed to mitigate concept drift by using the invariant patterns in exogenous features. The soft attention allows the time-series forecasting models to weigh and ensemble of invariant patterns at multiple horizon time steps to enhance the generalization ability. 2. Distribution Shift Generalized Framework: We show the necessity of addressing temporal shift as a preliminary when addressing concept drift. We therefore propose ShifTS, a practical, distribution shift generalized, model-agnostic framework that tackles temporal shift and concept drift within a unified approach. 3. Comprehensive Evaluations: We conduct extensive experiments on various time series datasets with multiple advanced time-series forecasting models. The proposed ShifTS demonstrates effectiveness by consistent performance improvements to agnostic forecasting models, as well as outperforming distribution shift baselines in better forecasting accuracy. We provide related works on time-series analysis and distribution shift generalization in Appendix A. 2 PROBLEM FORMULATION 2.1 TIME-SERIES FORECASTING Time-series forecasting involves predicting future values of one or more dependent time series based on historical data, augmented with exogenous covariate features. Let denote the target time series as Y and its associated exogenous covariate features as X. At any time step t, time-series forecasting aims to predict YH t = [yt + 1, yt+2, . . . , yt+H] ∈Y using historical data (XL t , YL t ), where L represents the length of the historical data window, known as the lookback window, and H denotes the forecasting time steps, known as the horizon window. Here, XL t = [xt-L+1, xt-L+2, . . . , xt] ∈X and YL t = [yt-L+1, yt-L+2, . . . , yt] ∈Y. For simplicity, we denote YH = {YH t } for ∀t as the collection of horizon time-series of all time steps, and similar for YL and XL. Conventional timeseries forecasting involves learning a model parameterized by θ through empirical risk minimization (ERM) to obtain fθ : (XL, YL) →YH for all time steps t. In this study, we focus on univariate time-series forecasting with exogenous features, where dY = 1 and dX ≥1. 2.2 DISTRIBUTION SHIFT IN TIME SERIES Given the time-series forecasting setups, a time-series forecasting model aims to predict the target distribution P(YH) = P(YH|YL)P(YL) + P(YH|XL)P(XL), which should be generalizable for both training and testing time steps. However, due to the dynamic nature of time-series data, forecasting faces challenges from distribution shifts, categorized into two types: temporal shift and concept drift. 
These two types of distribution shifts are defined as follows:

Definition 2.1 (Temporal Shift (Shimodaira, 2000; Du et al., 2021)) Temporal shift (also known as virtual shift (Tsymbal, 2004)) refers to marginal probability distributions changing over time while the conditional distributions stay the same.

Definition 2.2 (Concept Drift (Lu et al., 2018)) Concept drift (also known as real concept drift (Gama et al., 2014)¹) refers to conditional distributions changing over time while the marginal probability distributions stay the same.

¹(Gama et al., 2014) defines concept drift as covering both virtual shift and real concept drift. Our definition of concept drift is consistent with the definition of real concept drift in (Gama et al., 2014).

Intuitively, a temporal shift indicates unstable marginal distributions (e.g., $P(Y^H) \neq P(Y^L)$), while a concept drift indicates unstable conditional distributions ($P(Y^H_i|X^L_i) \neq P(Y^H_j|X^L_j)$ for some time steps $i, j$). Existing methods for distribution shifts in time-series forecasting typically focus on mitigating temporal shift through normalization, ensuring $P(Y^H) = P(Y^L)$ by normalizing both to standard distributions with zero mean and unit variance (Kim et al., 2021; Liu et al., 2022; Fan et al., 2023). In contrast, concept drift remains relatively underexplored in time-series forecasting. Nevertheless, time-series forecasting does face challenges from concept drift: the correlations between $X$ and $Y$ can change over time, making the conditional distribution $P(Y^H|X^L)$ unstable and less predictable. A demonstration visualizing the differences and relationships between temporal shift and concept drift is provided in Appendix B.

While concept drift has received considerable attention in general machine learning, applying existing solutions, mostly invariant learning approaches, to time-series forecasting presents certain challenges. First, these invariant learning methods typically rely on explicit environment labels as input (e.g., labeled rotations or noise types in image classification), which are not readily available in time series datasets. Second, these methods assume that all correlated exogenous features necessary to fully determine the target variable are accessible (Liu et al., 2024a), an assumption that often fails for time series datasets (e.g., the lookback window alone does not sufficiently determine the horizon target). A few concept drift methods not based on invariant learning have been proposed for time-series forecasting (Guo et al., 2021). However, these methods are designed for the online setting, which does not fit standard time-series forecasting, and they are only validated on limited synthetic datasets rather than complicated real-world ones.

3 METHODOLOGY

The main idea of our methodology is to address concept drift through SAM by modeling stable conditional distributions on surrogate exogenous features with invariant patterns, rather than on the lookback window alone. Furthermore, we recognize that effectively mitigating temporal shift is a prerequisite for addressing concept drift. To this end, we propose ShifTS, which handles concept drift by first resolving temporal shift as a preliminary step within a unified framework.

3.1 MITIGATING CONCEPT DRIFT

Methodology Intuition.
As defined in Definition 2.2, concept drift in time series refers to the correlations between $X$ and $Y$ changing over time ($P(Y^H_i|X^L_i) \neq P(Y^H_j|X^L_j)$ for time steps $i, j$), which introduces instability when modeling the conditional distribution $P(Y^H|X^L)$.

Figure 1: Comparison between conventional time-series forecasting and our approach. Our approach identifies invariant patterns in the lookback and horizon windows as XSUR and then models a stable conditional distribution accordingly to mitigate concept drift.

This instability arises because, for a given exogenous feature $X$, its lookback window $X^L$ alone may lack sufficient information to predict $Y^H$, while learning a stable conditional distribution requires that the inputs provide sufficient information to predict the output (Sagawa et al., 2019; Arjovsky et al., 2019). Patterns in the horizon window $X^H$, jointly with $X^L$, may influence the target. Thus, modeling $P(Y^H|X^L, X^H)$ leads to a more stable conditional distribution than $P(Y^H|X^L)$, as $[X^L, X^H]$ captures additional causal relationships across future time steps. We assume that incorporating causal relationships from the horizon window enables more complete causality modeling between an exogenous feature and the target, given that the future cannot influence the past (e.g., $X^H_{t+1} \not\to Y^H_t$). However, these causal effects from the horizon window, while important for learning stable conditional distributions, are often overlooked by conventional time-series forecasting methods, as illustrated in Figure 1(a). Therefore, we propose leveraging both lookback and horizon information from exogenous features (i.e., $[X^L, X^H]$) to predict the target, enabling a more stable conditional distribution.

However, directly modeling $P(Y^H|X^L, X^H)$ in practice presents two challenges. First, $X^H$ typically represents unknown future values during testing. To model $P(Y^H|X^L, X^H)$, one may have to first predict $X^H$ by modeling $P(X^H|X^L)$, which can be as challenging as predicting $Y^H$ directly. Second, not every pattern in $X^H$ at every time step holds a causal relationship with the target. Modeling all patterns from $X^L$ and $X^H$ may introduce noisy causal relationships (which invariant learning methods aim to mitigate) and reduce the stability of the conditional distributions.

To address the above challenges, instead of directly modeling $P(Y^H|X^L, X^H)$, we propose a two-step approach: first, identifying patterns in $[X^L, X^H]$ that lead to stable conditional distributions (namely invariant patterns), and then modeling these conditional distributions accordingly. To determine stability, a natural intuition is to assess whether a pattern's correlation with the target remains consistent across all time steps. For instance, if a subsequence of $[X^L, X^H]$ consistently exhibits stable correlations with the target over all or most time steps (e.g., an increase of the subsequence always results in an increase of the target), then its conditional distribution should be explicitly modeled due to this stability. Conversely, if a subsequence demonstrates correlations with the target only sporadically or locally, these correlations are likely spurious and yield conditional distributions that do not transfer to other time steps. We leverage this intuition to identify all invariant patterns and aggregate them into a surrogate feature XSUR, accounting for the fact that the target can be determined by multiple patterns.
For instance, an influenza-like illness (ILI) outbreak in winter can be triggered by either extreme cold weather in winter or extreme heat waves in summer (Nielsen et al., 2011; Jaakkola et al., 2014). By incorporating this information, we model the corresponding conditional distribution $P(Y^H|X^{SUR})$, as illustrated in Figure 1(b). The effectiveness of XSUR in predicting $Y^H$ stems from two key insights. First, $P(Y^H|X^{SUR})$ is a stable conditional distribution to model, as it captures invariant patterns across both the lookback and horizon windows. Second, while there is a trade-off ($P(Y^H|X^{SUR})$ provides stability, but estimating XSUR may introduce additional errors), practical evaluations demonstrate that the benefits of constructing stable conditional distributions outweigh the potential estimation errors of XSUR. This is because XSUR contains only partial information, which is easier to predict than the entire $X^H$.

Methodology Implementation. Recognizing that $P(Y^H|X^{SUR})$ is the desirable conditional distribution to learn, the remaining challenge is to identify XSUR in practice. To achieve this, we propose a soft attention masking mechanism (SAM) that operates as follows. First, we concatenate $[X^L, X^H]$ to form an entire time series of length $L + H$. The entire series is then sliced using a sliding window of size $H$, resulting in $L + 1$ slices. This process extracts local patterns ($[X^H_{t-L}, \dots, X^H_t]$ at each time step $t$), which are subsequently used to identify invariant patterns. Second, we model the conditional distributions for all local patterns $[P(Y^H_t|X^H_{t-L}), \dots, P(Y^H_t|X^H_t)]$ at each time step $t$, applying a learnable soft attention matrix $M$ to weigh each local pattern. This matrix incorporates softmax, sparsity, and normalization operations, which can be mathematically described as:

Softmax:   $M_{\cdot j} = \mathrm{Softmax}(M_{\cdot j})$
Sparsity:  $M_{ij} = M_{ij} \cdot \mathbb{1}[(M_{ij} - \mu(M_{\cdot j})) \ge 0]$    (1)
Normalize: $M_{\cdot j} = M_{\cdot j} / |M_{\cdot j}|$

where $i$ and $j$ index the first and second dimensions of $M$. These operations are essential for SAM to identify invariant patterns. The intuition is that we consider the sliced windows from the lookback and horizon over time steps as candidate invariant patterns. We use the softmax operation to compute and update the weight of each pattern's contribution to the target $Y^H$. We then apply a sparsity operation to filter out patterns with low weights, leaving only the patterns with high weights. These high-weight patterns, which consistently contribute to the target across all instances at all time steps, are regarded as invariant patterns over time; intuitively, $P(Y^H_i|X^H_{i-k}) \approx P(Y^H_j|X^H_{j-k})$ for some $k \in [0, L]$ and time steps $i, j$. While multiple invariant patterns may be identified, we compute a weighted sum of these patterns, proportional to their contributions in predicting the target. The weighted-sum patterns form the surrogate feature XSUR.
For simplicity, we denote this process as:

$$X^{SUR} = \mathrm{SAM}([X^L, X^H]) = \sum_{L+1} M \odot \mathrm{Slice}([X^L, X^H])$$    (2)

where the sum runs over the $L+1$ slices, $\mathrm{Slice}(\cdot)$ represents slicing the time series, $[L + H, d_X] \to [H, L + 1, d_X]$, and $M \in \mathbb{R}^{(L+1) \times d_X}$ is the learnable soft attention matrix of Equation 1. In practice, XSUR may include horizon information unavailable during testing. To address this, SAM estimates the surrogate features X̂SUR using agnostic forecasting models. The surrogate loss for estimating X̂SUR is defined as:

$$\mathcal{L}_{SUR} = \mathrm{MSE}(X^{SUR}, \hat{X}^{SUR})$$    (3)

3.2 MITIGATING TEMPORAL SHIFT

While the primary contribution of this work is to mitigate concept drift in time-series forecasting, addressing temporal shift is equally critical and serves as a prerequisite for effectively managing concept drift. The key intuition is that SAM seeks to learn invariant patterns that result in a stable conditional distribution, $P(Y^H|X^{SUR})$. However, achieving this stability becomes challenging if the marginal distributions (e.g., $P(Y^H)$ or $P(X^{SUR})$) are not fixed, as these distributions may change over time because of temporal shift. To address this issue, a natural solution is to learn the conditional distribution under standardized marginal distributions. This can be achieved using temporal shift methods, which employ instance normalization techniques to stabilize the marginals.

The core idea behind popular temporal shift methods is to normalize data distributions before the model processes them and to denormalize the outputs afterward. This ensures that the normalized sequences maintain consistent mean and variance between the inputs and outputs of the forecasting model; specifically, $P(X^L_{Norm}) \approx P(X^H_{Norm}) \sim \mathrm{Dist}(0, 1)$ and $P(Y^L_{Norm}) \approx P(Y^H_{Norm}) \sim \mathrm{Dist}(0, 1)$, thereby mitigating temporal shift (i.e., shifts in marginal distributions over time). Among existing methods, Reversible Instance Normalization (RevIN) (Kim et al., 2021) stands out for its simplicity and effectiveness, making it the method of choice in this work. Advanced techniques, such as SAN (Liu et al., 2023) and the N-S Transformer (Liu et al., 2022), have also demonstrated promise in addressing temporal shift. However, these methods often require modifications to the forecasting model or additional pre-training strategies. While exploring these advanced temporal shift approaches remains a promising avenue for further performance improvements, it is beyond the scope of this study.

Figure 2: Diagram of ShifTS, consisting of three components: (a) normalization at the start and (c) denormalization at the end to address temporal shift, and (b) a two-stage forecasting process. The first stage predicts surrogate exogenous features, X̂SUR, identified by SAM, which capture invariant patterns essential for forecasting the target; the second stage uses both the predicted surrogate exogenous features and the original Y^L to predict Y^H.

3.3 SHIFTS: THE INTEGRATED FRAMEWORK

To address concept drift in time-series forecasting, while acknowledging that mitigating temporal shift is a prerequisite for resolving concept drift, we propose ShifTS, a comprehensive framework designed to tackle both challenges. ShifTS is model-agnostic, as the stable conditional distributions identified by SAM can be learned by any time-series forecasting model. The workflow of ShifTS is illustrated in Figure 2 and consists of the following steps: (1) normalize the input time series; (2) forecast the surrogate exogenous features X̂SUR that invariantly support the target series, as determined by SAM; (3) apply an aggregation MLP that uses X̂SUR to forecast the target, denoted Agg(·) in Figure 2 and Algorithm 1; (4) denormalize the output time series. Conceptually, steps 1 and 4 mitigate temporal shift, step 2 addresses concept drift, and step 3 performs a weighted aggregation of exogenous features to support the target series.
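To make the SAM operations concrete, the following is a minimal PyTorch-style sketch of Equations 1 and 2. It is our reading of the formulas, not the authors' released code (available at https://github.com/AdityaLab/ShifTS); the tensor layout, the function name sam_surrogate, and the L1 renormalization of surviving weights are assumptions.

```python
import torch

def sam_surrogate(x_full: torch.Tensor, M: torch.Tensor, H: int) -> torch.Tensor:
    """Sketch of Eqs. (1)-(2): weigh the L+1 sliding windows of [X^L, X^H]
    and sum them into the surrogate feature X^SUR.
    x_full: [batch, L+H, d_x], concatenation of lookback and horizon features.
    M:      [L+1, d_x], learnable attention logits (one per window and feature).
    """
    # Slice(.): sliding windows of size H along time -> [batch, L+1, d_x, H]
    slices = x_full.unfold(dimension=1, size=H, step=1)
    slices = slices.permute(0, 3, 1, 2)          # [batch, H, L+1, d_x]

    # Eq. (1), Softmax: normalize logits over the window axis, per feature
    W = torch.softmax(M, dim=0)                  # [L+1, d_x]
    # Eq. (1), Sparsity: zero out windows whose weight is below the column mean
    W = W * (W - W.mean(dim=0, keepdim=True) >= 0)
    # Eq. (1), Normalize: rescale surviving weights (L1 here; an assumption)
    W = W / (W.sum(dim=0, keepdim=True) + 1e-8)

    # Eq. (2): weighted sum over the L+1 candidate patterns
    return (slices * W).sum(dim=2)               # [batch, H, d_x]
```

Note that the hard indicator in the sparsity step blocks gradients to the filtered windows; the softmax logits of the remaining windows stay trainable, which matches the intent of keeping only consistently contributing patterns.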
The optimization objective of ShifTS is:

$$\mathcal{L} = \mathcal{L}_{SUR}(X^{SUR}, \hat{X}^{SUR}) + \mathcal{L}_{TS}(Y^H, \hat{Y}^H)$$    (4)

Here, $\mathcal{L}_{SUR}$ is the surrogate loss that encourages learning to forecast the exogenous features, and $\mathcal{L}_{TS}$ is the MSE loss used in conventional time-series forecasting. The pseudo-code for training and testing ShifTS is provided in Algorithm 1.

Algorithm 1 ShifTS
Training:
  Require: training data X^L, X^H, Y^L, Y^H; initial parameters f_0, M_0, Agg_0
  Output: model parameters f, M, Agg
  1: for i in range(E):
  2:   Normalization: [X^L_Norm, Y^L_Norm] = Norm([X^L, Y^L])
  3:   Time-series forecasting: [X̂^SUR_Norm, Ŷ^H_Norm] = f_i([X^L_Norm, Y^L_Norm])
  4:   Exogenous feature aggregation: Ŷ^H_Norm = Ŷ^H_Norm + Agg_i(X̂^SUR_Norm)
  5:   Denormalization: [X̂^SUR, Ŷ^H] = Denorm([X̂^SUR_Norm, Ŷ^H_Norm])
  6:   Obtain surrogate exogenous features: X^SUR = SAM([X^L, X^H])
  7:   Compute loss: L = L_SUR(X^SUR, X̂^SUR) + L_TS(Y^H, Ŷ^H)
  8:   Update model parameters: f_{i+1} ← f_i, M_{i+1} ← M_i, Agg_{i+1} ← Agg_i
  9: Final model parameters: f ← f_E, M ← M_E, Agg ← Agg_E
Testing:
  Require: test data X^L, Y^L
  Output: forecast target Ŷ^H
  10: Normalization: [X^L_Norm, Y^L_Norm] = Norm([X^L, Y^L])
  11: Time-series forecasting: [X̂^SUR_Norm, Ŷ^H_Norm] = f([X^L_Norm, Y^L_Norm])
  12: Exogenous feature aggregation: Ŷ^H_Norm = Ŷ^H_Norm + Agg(X̂^SUR_Norm)
  13: Denormalization: [X̂^SUR, Ŷ^H] = Denorm([X̂^SUR_Norm, Ŷ^H_Norm])

4 EXPERIMENTS

4.1 SETUP

Datasets. We conduct experiments using six time-series datasets as leveraged in (Liu et al., 2024a): the daily reported currency exchange rates (Exchange) (Lai et al., 2018); the weekly reported influenza-like illness patients (ILI) (Kamarthi et al., 2021); and the hourly/minutely reported electricity transformer temperatures (ETTh1/ETTh2 and ETTm1/ETTm2, respectively) (Zhou et al., 2021). We follow the established experimental setups and target variable selections of previous works (Wu et al., 2021; 2022; Nie et al., 2023; Liu et al., 2024b). Datasets such as Traffic (PeMS) (Zhao et al., 2017) and Weather (Wu et al., 2021) are excluded from our evaluations, as their time series exhibit near-stationary behavior with only moderate distribution shift issues. Further details on the dataset differences are discussed in Appendix C.1.

Baselines. We include two types of baselines for a comprehensive evaluation of ShifTS.
Forecasting Model Baselines: Since ShifTS is model-agnostic, we include six time-series forecasting models (referred to as 'Model' in Tables 1 and 4): Informer (Zhou et al., 2021), Pyraformer (Liu et al., 2021), Crossformer (Zhang & Yan, 2022), PatchTST (Nie et al., 2023), TimeMixer (Wang et al., 2024), and iTransformer (Liu et al., 2024b), of which the last two are state-of-the-art (SOTA) forecasting models. These models are used to demonstrate that ShifTS consistently enhances forecasting accuracy across various models, including SOTA ones.
Distribution Shift Baselines: We compare ShifTS with various distribution shift methods (referred to as 'Method' in Table 2): (1) three non-stationary methods for addressing temporal shift in time-series forecasting, N-S Trans. (Liu et al., 2022), RevIN (Kim et al., 2021), and SAN (Liu et al., 2023); we omit Dish-TS (Fan et al., 2023) and SIN (Han et al., 2024) from the main text due to their instability on univariate targets; (2) four concept drift methods, GroupDRO (Sagawa et al., 2019), IRM (Arjovsky et al., 2019), VREx (Krueger et al., 2021), and EIIL (Creager et al., 2021), which are primarily designed for general applications; (3) three combined methods for both temporal shift and concept drift: IRM+RevIN, EIIL+RevIN, and the SOTA time-series distribution shift method FOIL (Liu et al., 2024a). These comparisons highlight the advantages of ShifTS in distribution shift generalization over existing approaches.

Evaluation. We measure forecasting errors using mean squared error (MSE) and mean absolute error (MAE): $\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2$ and $\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}|y_i - \hat{y}_i|$.

Reproducibility. All models are trained on NVIDIA Tesla V100 32GB GPUs. All training data and code are available at: https://github.com/AdityaLab/ShifTS. More experiment details are presented in Appendix C.2.

Table 1: Performance comparison on forecasting errors without (ERM) and with ShifTS. Employing ShifTS shows consistent performance gains agnostic to forecasting models. The top-performing method is in bold. 'IMP.' denotes the average improvement over all horizons of ShifTS vs. ERM.

Model              Crossformer (ICLR'23)      PatchTST (ICLR'23)         iTransformer (ICLR'24)
Method             ERM          ShifTS        ERM          ShifTS        ERM          ShifTS
Dataset   Horizon  MSE    MAE   MSE    MAE    MSE    MAE   MSE    MAE    MSE    MAE   MSE    MAE
ILI       24       3.409  1.604 0.674  0.590  0.772  0.634 0.656  0.618  0.824  0.653 0.799  0.642
          36       4.001  1.772 0.687  0.617  0.763  0.649 0.694  0.602  0.917  0.738 0.690  0.640
          48       3.720  1.724 0.652  0.611  0.753  0.692 0.654  0.630  0.772  0.699 0.680  0.665
          60       3.689  1.715 0.658  0.633  0.761  0.724 0.680  0.656  0.729  0.710 0.672  0.667
          IMP.     81.9% / 64.0%              12.0% / 7.1%               13.8% / 6.5%
Exchange  96       0.338  0.475 0.102  0.237  0.130  0.265 0.102  0.236  0.135  0.272 0.115  0.255
          192      0.566  0.622 0.203  0.338  0.247  0.394 0.194  0.332  0.250  0.376 0.209  0.343
          336      1.078  0.867 0.407  0.484  0.522  0.557 0.388  0.477  0.450  0.503 0.426  0.495
          720      1.292  0.963 1.165  0.813  1.171  0.824 0.995  0.747  1.501  0.941 1.138  0.827
          IMP.     53.5% / 38.9%              20.9% / 12.6%              15.2% / 6.9%
ETTh1     96       0.145  0.312 0.055  0.180  0.064  0.193 0.056  0.181  0.061  0.190 0.056  0.181
          192      0.240  0.420 0.072  0.206  0.085  0.222 0.073  0.209  0.076  0.219 0.072  0.205
          336      0.240  0.424 0.084  0.228  0.096  0.244 0.089  0.235  0.086  0.227 0.083  0.225
          720      0.391  0.553 0.095  0.244  0.128  0.282 0.097  0.245  0.085  0.232 0.082  0.230
          IMP.     68.2% / 48.8%              14.5% / 7.2%               5.1% / 3.3%
ETTh2     96       0.255  0.408 0.137  0.286  0.154  0.309 0.139  0.287  0.141  0.292 0.137  0.288
          192      1.257  1.034 0.182  0.338  0.204  0.374 0.191  0.345  0.194  0.347 0.184  0.339
          336      0.783  0.771 0.234  0.388  0.252  0.406 0.222  0.381  0.229  0.383 0.225  0.381
          720      1.455  1.100 0.234  0.389  0.259  0.411 0.236  0.390  0.266  0.413 0.235  0.390
          IMP.     71.4% / 52.9%              9.2% / 6.5%                5.4% / 2.5%
ETTm1     96       0.050  0.174 0.028  0.126  0.031  0.135 0.029  0.128  0.030  0.131 0.030  0.131
          192      0.271  0.454 0.043  0.158  0.048  0.166 0.044  0.161  0.049  0.171 0.046  0.165
          336      0.731  0.805 0.057  0.184  0.058  0.190 0.058  0.186  0.066  0.199 0.059  0.188
          720      0.829  0.849 0.083  0.219  0.083  0.223 0.080  0.219  0.082  0.219 0.079  0.217
          IMP.     77.3% / 61.0%              4.6% / 3.0%                5.1% / 2.5%
ETTm2     96       0.153  0.315 0.069  0.190  0.078  0.206 0.067  0.188  0.073  0.200 0.073  0.195
          192      0.408  0.526 0.105  0.242  0.113  0.246 0.101  0.237  0.119  0.251 0.108  0.248
          336      0.428  0.504 0.146  0.289  0.176  0.320 0.134  0.278  0.157  0.302 0.144  0.291
          720      1.965  1.205 0.191  0.342  0.220  0.368 0.185  0.334  0.196  0.347 0.193  0.344
          IMP.     71.3% / 52.0%              15.9% / 8.6%               4.8% / 2.1%
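For completeness, the two error metrics defined in the Evaluation paragraph above are trivial to transcribe; this NumPy snippet is our own convenience illustration, with function names of our choosing:

```python
import numpy as np

def mse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    # Mean squared error: (1/n) * sum((y - y_hat)^2)
    return float(np.mean((y_true - y_pred) ** 2))

def mae(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    # Mean absolute error: (1/n) * sum(|y - y_hat|)
    return float(np.mean(np.abs(y_true - y_pred)))
```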
4.2 PERFORMANCE IMPROVEMENT ACROSS BASE FORECASTING MODELS

To evaluate the effectiveness of ShifTS in reducing forecasting errors, we conduct experiments comparing performance with and without ShifTS across popular time-series datasets and four forecasting horizons. These experiments utilize five transformer-based models and one MLP-based model. Evaluation results for Crossformer, PatchTST, and iTransformer are presented in Table 1, while additional results for older models, including Informer, Pyraformer, and TimeMixer, are provided in Table 4 in Appendix D.1.

Table 2: Averaged performance comparison between ShifTS and distribution shift baselines with Crossformer. ShifTS achieves the best and second-best performance in 6 and 2 out of 8 evaluations, respectively. The best results are highlighted in bold and the second-best results are underlined.

Dataset                   ILI           Exchange      ETTh1         ETTh2
Method                    MSE    MAE    MSE    MAE    MSE    MAE    MSE    MAE
Base          ERM         3.705  1.704  0.819  0.732  0.254  0.427  0.937  0.828
Concept       GroupDRO    2.285  1.287  0.821  0.751  0.278  0.453  1.150  0.936
Drift         IRM         2.248  1.237  0.846  0.754  0.201  0.367  0.878  0.792
Method        VREx        2.285  1.286  0.821  0.742  0.314  0.486  1.142  0.938
              EIIL        2.036  1.159  0.822  0.749  0.212  0.433  1.122  0.930
Temporal      RevIN       0.815  0.708  0.475  0.476  0.085  0.224  0.205  0.358
Shift         N-S Trans.  0.781  0.688  0.484  0.481  0.086  0.226  0.203  0.355
Method        SAN         0.757  0.715  0.415  0.453  0.088  0.225  0.199  0.348
Combined      IRM+RevIN   0.809  0.711  0.481  0.476  0.089  0.231  0.202  0.362
Method        EIIL+RevIN  0.799  0.706  0.483  0.485  0.085  0.225  0.218  0.380
              FOIL        0.735  0.651  0.497  0.481  0.081  0.219  0.206  0.357
ShifTS (Ours)             0.668  0.613  0.470  0.468  0.076  0.214  0.194  0.348

The experimental results consistently demonstrate the effectiveness of ShifTS in improving forecasting performance across agnostic forecasting models. Notably, ShifTS achieves reductions in forecasting errors of up to 15% when integrated with advanced models like iTransformer. Furthermore, ShifTS shows even greater relative effectiveness when applied to older or less advanced forecasting models, such as Informer and Crossformer. In addition to the observed performance improvements, our results reveal two further insights.

The effectiveness of ShifTS relies on the insights provided by the horizon data. The performance improvements vary across datasets. For instance, applying ShifTS to the ILI and Exchange datasets yields greater performance improvements overall than on the ETT datasets. To interpret this phenomenon and determine the conditions under which ShifTS is most effective in practice, we quantify the mutual information $I(X^H; Y^H)$ shared between $X^H$ and $Y^H$ (detailed setup provided in Appendix C.3). We plot the relationship between $I(X^H; Y^H)$ and performance gains in Figure 3(a). The scatter plot shows a positive linear correlation between $I(X^H; Y^H)$ and performance gains, supported by a p-value of $p = 0.012 \le 0.05$. This observation suggests that the more useful information the exogenous features carry within the horizon window, the more substantial the performance gains achieved by ShifTS. This insight aligns with the design of ShifTS, as higher mutual information indicates clearer correlations and causal relationships between the target $Y^H$ and the exogenous features in the horizon window, relationships often overlooked by conventional time-series models. Stronger correlations imply a greater extent of misrepresented dependencies in ERM, leading to more significant improvements with ShifTS.
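The mutual information diagnostic used above can be estimated by discretizing the two series and applying the definition in Equation 5 of Appendix C.3. The histogram-based estimator below is our own sketch; the number of bins and the per-feature looping are assumptions, since the paper specifies the estimator only at the level of the formula.

```python
import numpy as np

def mutual_information(x: np.ndarray, y: np.ndarray, bins: int = 10) -> float:
    """Histogram-based estimate of I(X; Y) between two 1-D arrays,
    following I(X;Y) = sum_{x,y} P(x,y) log( P(x,y) / (P(x)P(y)) )."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                      # joint distribution P(x, y)
    px = pxy.sum(axis=1, keepdims=True)            # marginal P(x)
    py = pxy.sum(axis=0, keepdims=True)            # marginal P(y)
    mask = pxy > 0                                 # avoid log(0) on empty cells
    return float((pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])).sum())
```

In the paper's protocol, this quantity is computed per training time step and per exogenous feature dimension, averaged over time steps, and the maximum over feature dimensions is reported (Appendix C.3).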
The extent of quantitative performance gains achieved by ShifTS depends on the underlying forecasting model. Notably, the extent of performance enhancement varies across forecasting models. For example, the performance gains of ShifTS on the simpler Informer model are more significant than on the SOTA iTransformer model. Importantly, we emphasize two key observations. First, even when applied to the iTransformer model, ShifTS demonstrates a notable performance boost of approximately 15% on both the ILI and Exchange datasets, consistent with the aforementioned intuition. Second, integrating ShifTS into the forecasting process should, at the very least, maintain or improve the performance of the standalone forecasting model, as evidenced by the consistent performance enhancements observed across all datasets with the iTransformer model.

4.3 COMPARISON WITH DISTRIBUTION SHIFT METHODS

To illustrate the advantages of ShifTS over other model-agnostic approaches for addressing distribution shifts, we perform experiments comparing its performance against distribution shift baselines, including methods designed for concept drift, temporal shift, and combined approaches. We exclude evaluations on the minutely ETT datasets, following (Liu et al., 2024a), as their data characteristics and forecasting performance closely resemble those of the hourly ETT datasets. The experiments utilize Crossformer as the forecasting model, and the averaged results are presented in Table 2.

Figure 3: Left (a): The performance gains of ShifTS versus the mutual information shared between XH and YH. Greater mutual information between XH and YH correlates with more significant performance gains achieved by ShifTS. Right (b): Ablation study. Addressing either concept drift or temporal shift individually provides certain benefits in forecasting accuracy; ShifTS, which tackles both, achieves the lowest forecasting error.

The results highlight the advantages of ShifTS over existing distribution shift methods, achieving the highest average forecasting accuracy in 6 out of 8 evaluations and ranking second in the remaining 2. Notably, as discussed in Section 3.2, we choose RevIN as it is one of the most popular yet simple and effective temporal shift methods. However, ShifTS is flexible and can integrate more advanced temporal shift methods to further enhance performance. While exploring these advanced temporal shift methods is beyond the scope of this work, we illustrate the potential benefit of such integration.

Table 3: MSE comparison between ShifTS, SAN, and ShifTS+SAN on the Exchange dataset. ShifTS+SAN achieves the best performance on all evaluations.

Horizon   ShifTS   SAN     ShifTS w. SAN
96        0.102    0.091   0.089
192       0.207    0.195   0.187
336       0.407    0.373   0.372
720       1.165    1.001   0.981
Avg.      0.470    0.415   0.407

For example, on the Exchange dataset, where SAN outperforms ShifTS, incorporating SAN in place of RevIN within ShifTS leads to even greater accuracy improvements. Detailed MSE values for these evaluations are provided in Table 3. Furthermore, the results underscore the importance of addressing concept drift using SAM once temporal shifts are effectively addressed.

4.4 ABLATION STUDY

To demonstrate the effectiveness of each module in ShifTS, we conducted an ablation study using two modified versions: ShifTS w/o RevIN, which excludes the temporal shift adjustment via RevIN, and ShifTS w/o SAM, which excludes the concept drift handling via SAM.
Additionally, conventional forecasting models that do not address either concept drift or temporal shift are denoted as 'Base'. We performed experiments on the Exchange dataset using the previous three baseline forecasting models, with a fixed forecasting horizon of 96. The results are visualized in Figure 3(b).

The visualization reveals the following observations. First, addressing temporal shift and concept drift together, as implemented in ShifTS, yields lower forecasting errors than addressing only one type of distribution shift (ShifTS w/o RevIN and ShifTS w/o SAM) or not considering any distribution shift adjustment (Base). This suggests that temporal shift and concept drift are interrelated and co-exist in time series data, and that addressing both provides significant benefits. Second, for forecasting models that inherently address temporal shift, such as PatchTST and iTransformer, which incorporate norm/denorm operations, the performance gains from mitigating concept drift are more significant than those from additionally mitigating temporal shift using RevIN. In contrast, for models without any temporal shift mitigation, such as Crossformer, tackling temporal shift leads to a greater performance improvement than tackling concept drift. These observations suggest that mitigating temporal shift is a necessary step in mitigating concept drift, which matches the intuition in Section 3.2.

5 CONCLUSION AND LIMITATION DISCUSSION

In this paper, we identify the challenges posed by both concept drift and temporal shift in time-series forecasting. While mitigating temporal shift has garnered significant attention within the time-series forecasting community, concept drift has remained largely overlooked. To bridge this gap, we propose SAM, a method designed to effectively address concept drift in time-series forecasting by modeling conditional distributions through surrogate exogenous features. Building on SAM, we introduce ShifTS, a model-agnostic framework that handles concept drift in practice by first mitigating temporal shift as a preliminary step. Our comprehensive evaluations highlight the effectiveness of ShifTS, while the benefits of SAM are further demonstrated through an ablation study. We discuss the limitations of our approach in Appendix E.

6 ACKNOWLEDGMENT

This paper was supported in part by the NSF (Expeditions CCF-1918770, CAREER IIS-2028586, Medium IIS-1955883, Medium IIS-2106961, Medium IIS-2403240, PIPP CCF-2200269), the CDC MInD program, a Meta faculty gift, and funds/computing resources from Georgia Tech and GTRI.

REFERENCES

Kartik Ahuja, Ethan Caballero, Dinghuai Zhang, Jean-Christophe Gagnon-Audet, Yoshua Bengio, Ioannis Mitliagkas, and Irina Rish. Invariance principle meets information bottleneck for out-of-distribution generalization. Advances in Neural Information Processing Systems, 34:3438-3450, 2021.

Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. arXiv preprint, 2019.

Peter L Bartlett. Learning with a slowly changing distribution. In Proceedings of the Fifth Annual Workshop on Computational Learning Theory, pp. 243-252, 1992.

Elliot Creager, Jörn-Henrik Jacobsen, and Richard Zemel. Environment inference for invariant learning. In International Conference on Machine Learning, pp. 2189-2200. PMLR, 2021.

Chirag Deb, Fan Zhang, Junjing Yang, Siew Eang Lee, and Kwok Wei Shah. A review on time series forecasting techniques for building energy consumption. Renewable and Sustainable Energy Reviews, 74:902-924, 2017.
Yuntao Du, Jindong Wang, Wenjie Feng, Sinno Pan, Tao Qin, Renjun Xu, and Chongjun Wang. AdaRNN: Adaptive learning and forecasting of time series. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, pp. 402-411, 2021.

Wei Fan, Pengyang Wang, Dongkun Wang, Dongjie Wang, Yuanchun Zhou, and Yanjie Fu. Dish-TS: a general paradigm for alleviating distribution shift in time series forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pp. 7522-7529, 2023.

João Gama, Indrė Žliobaitė, Albert Bifet, Mykola Pechenizkiy, and Abdelhamid Bouchachia. A survey on concept drift adaptation. ACM Computing Surveys (CSUR), 46(4):1-37, 2014.

Clive WJ Granger. Time series concepts for conditional distributions. Oxford Bulletin of Economics and Statistics, 65:689-701, 2003.

Husheng Guo, Shuai Zhang, and Wenjian Wang. Selective ensemble-based online adaptive deep neural networks for streaming data with concept drift. Neural Networks, 142:437-456, 2021.

Lu Han, Han-Jia Ye, and De-Chuan Zhan. SIN: Selective and interpretable normalization for long-term time series forecasting. In Forty-first International Conference on Machine Learning, 2024.

Kari Jaakkola, Annika Saukkoriipi, Jari Jokelainen, Raija Juvonen, Jaana Kauppila, Olli Vainio, Thedi Ziegler, Esa Rönkkö, Jouni JK Jaakkola, Tiina M Ikäheimo, et al. Decline in temperature and humidity increases the occurrence of influenza in cold climate. Environmental Health, 13:1-8, 2014.

Harshavardhan Kamarthi, Lingkai Kong, Alexander Rodriguez, Chao Zhang, and B Aditya Prakash. When in doubt: Neural non-parametric uncertainty quantification for epidemic forecasting. Advances in Neural Information Processing Systems, 34:19796-19807, 2021.

Taesung Kim, Jinhee Kim, Yunwon Tae, Cheonbok Park, Jang-Ho Choi, and Jaegul Choo. Reversible instance normalization for accurate time-series forecasting against distribution shift. In International Conference on Learning Representations, 2021.

David Krueger, Ethan Caballero, Joern-Henrik Jacobsen, Amy Zhang, Jonathan Binas, Dinghuai Zhang, Remi Le Priol, and Aaron Courville. Out-of-distribution generalization via risk extrapolation (REx). In International Conference on Machine Learning, pp. 5815-5826. PMLR, 2021.

Anthony Kuh, Thomas Petsche, and Ronald Rivest. Learning time-varying concepts. Advances in Neural Information Processing Systems, 3, 1990.

Guokun Lai, Wei-Cheng Chang, Yiming Yang, and Hanxiao Liu. Modeling long- and short-term temporal patterns with deep neural networks. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, pp. 95-104, 2018.

Haoxin Liu, Harshavardhan Kamarthi, Lingkai Kong, Zhiyuan Zhao, Chao Zhang, and B Aditya Prakash. Time-series forecasting for out-of-distribution generalization using invariant learning. Forty-first International Conference on Machine Learning, 2024a.

Shizhan Liu, Hang Yu, Cong Liao, Jianguo Li, Weiyao Lin, Alex X Liu, and Schahram Dustdar. Pyraformer: Low-complexity pyramidal attention for long-range time series modeling and forecasting. In International Conference on Learning Representations, 2021.

Yong Liu, Haixu Wu, Jianmin Wang, and Mingsheng Long. Non-stationary transformers: Exploring the stationarity in time series forecasting. Advances in Neural Information Processing Systems, 35:9881-9893, 2022.

Yong Liu, Tengge Hu, Haoran Zhang, Haixu Wu, Shiyu Wang, Lintao Ma, and Mingsheng Long.
iTransformer: Inverted transformers are effective for time series forecasting. In International Conference on Learning Representations (ICLR), 2024b.

Zhiding Liu, Mingyue Cheng, Zhi Li, Zhenya Huang, Qi Liu, Yanhu Xie, and Enhong Chen. Adaptive normalization for non-stationary time series forecasting: A temporal slice perspective. In Thirty-seventh Conference on Neural Information Processing Systems, 2023.

Jie Lu, Anjin Liu, Fan Dong, Feng Gu, Joao Gama, and Guangquan Zhang. Learning under concept drift: A review. IEEE Transactions on Knowledge and Data Engineering, 31(12):2346-2363, 2018.

Wang Lu, Jindong Wang, Xinwei Sun, Yiqiang Chen, and Xing Xie. Out-of-distribution representation learning for time series classification. In International Conference on Learning Representations, 2023.

Sarabeth M Mathis, Alexander E Webber, Tomás M León, Erin L Murray, Monica Sun, Lauren A White, Logan C Brooks, Alden Green, Addison J Hu, Roni Rosenfeld, et al. Evaluation of FluSight influenza forecasting in the 2021-22 and 2022-23 seasons with a new target laboratory-confirmed influenza hospitalizations. Nature Communications, 15(1):6289, 2024.

Yuqi Nie, Nam H. Nguyen, Phanwadee Sinthong, and Jayant Kalagnanam. A time series is worth 64 words: Long-term forecasting with transformers. In International Conference on Learning Representations, 2023.

Jens Nielsen, Anne Mazick, Steffen Glismann, and Kåre Mølbak. Excess mortality related to seasonal influenza and extreme temperatures in Denmark, 1994-2010. BMC Infectious Diseases, 11:1-13, 2011.

Boris N. Oreshkin, Dmitri Carpov, Nicolas Chapados, and Yoshua Bengio. N-BEATS: Neural basis expansion analysis for interpretable time series forecasting. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=r1ecqn4YwB.

Mohammad Pezeshki, Oumar Kaba, Yoshua Bengio, Aaron C Courville, Doina Precup, and Guillaume Lajoie. Gradient starvation: A learning proclivity in neural networks. Advances in Neural Information Processing Systems, 34:1256-1272, 2021.

Shiori Sagawa, Pang Wei Koh, Tatsunori B Hashimoto, and Percy Liang. Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization. arXiv preprint, 2019.

David Salinas, Valentin Flunkert, Jan Gasthaus, and Tim Januschowski. DeepAR: Probabilistic forecasting with autoregressive recurrent networks. International Journal of Forecasting, 36(3):1181-1191, 2020.

Alex Sherstinsky. Fundamentals of recurrent neural network (RNN) and long short-term memory (LSTM) network. Physica D: Nonlinear Phenomena, 404:132306, 2020.

Hidetoshi Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 90(2):227-244, 2000.

José F Torres, Dalil Hadjout, Abderrazak Sebaa, Francisco Martínez-Álvarez, and Alicia Troncoso. Deep learning for time series forecasting: a survey. Big Data, 9(1):3-21, 2021.

Alexey Tsymbal. The problem of concept drift: definitions and related work. Computer Science Department, Trinity College Dublin, 106(2):58, 2004.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.

Shiyu Wang, Haixu Wu, Xiaoming Shi, Tengge Hu, Huakun Luo, Lintao Ma, James Y Zhang, and Jun Zhou. TimeMixer: Decomposable multiscale mixing for time series forecasting.
In International Conference on Learning Representations (ICLR), 2024.

Qingsong Wen, Weiqi Chen, Liang Sun, Zhang Zhang, Liang Wang, Rong Jin, Tieniu Tan, et al. OneNet: Enhancing time series forecasting models under concept drift by online ensembling. Advances in Neural Information Processing Systems, 36, 2024.

Haixu Wu, Jiehui Xu, Jianmin Wang, and Mingsheng Long. Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting. Advances in Neural Information Processing Systems, 34:22419-22430, 2021.

Haixu Wu, Tengge Hu, Yong Liu, Hang Zhou, Jianmin Wang, and Mingsheng Long. TimesNet: Temporal 2D-variation modeling for general time series analysis. In The Eleventh International Conference on Learning Representations, 2022.

Xinyu Zhang, Shanshan Feng, Jianghong Ma, Huiwei Lin, Xutao Li, Yunming Ye, Fan Li, and Yew Soon Ong. FRNet: Frequency-based rotation network for long-term time series forecasting. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pp. 3586-3597, 2024.

Yunhao Zhang and Junchi Yan. Crossformer: Transformer utilizing cross-dimension dependency for multivariate time series forecasting. In The Eleventh International Conference on Learning Representations, 2022.

Zheng Zhao, Weihai Chen, Xingming Wu, Peter CY Chen, and Jingmeng Liu. LSTM network: a deep learning approach for short-term traffic forecast. IET Intelligent Transport Systems, 11(2):68-75, 2017.

Zhiyuan Zhao, Alexander Rodriguez, and B Aditya Prakash. Performative time-series forecasting. arXiv preprint, 2023.

Yu Zheng, Licia Capra, Ouri Wolfson, and Hai Yang. Urban computing: concepts, methodologies, and applications. ACM Transactions on Intelligent Systems and Technology (TIST), 5(3):1-55, 2014.

Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang. Informer: Beyond efficient transformer for long sequence time-series forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pp. 11106-11115, 2021.

Tian Zhou, Ziqing Ma, Qingsong Wen, Xue Wang, Liang Sun, and Rong Jin. FEDformer: Frequency enhanced decomposed transformer for long-term series forecasting. In Proc. 39th International Conference on Machine Learning (ICML 2022), 2022.

Yunyue Zhu and Dennis Shasha. StatStream: Statistical monitoring of thousands of data streams in real time. In VLDB'02: Proceedings of the 28th International Conference on Very Large Databases, pp. 358-369. Elsevier, 2002.

A RELATED WORKS

Time-Series Forecasting. Recent works in deep learning have achieved notable success in time-series forecasting, such as RNNs, LSTNet, and N-BEATS (Sherstinsky, 2020; Lai et al., 2018; Oreshkin et al., 2020). State-of-the-art models build upon the success of self-attention mechanisms (Vaswani et al., 2017) with transformer-based architectures and significantly improve forecasting accuracy, such as Informer, Autoformer, FEDformer, PatchTST, iTransformer, and FRNet (Zhou et al., 2021; Wu et al., 2021; Zhou et al., 2022; Nie et al., 2023; Liu et al., 2024b; Zhang et al., 2024). However, these advanced models primarily rely on empirical risk minimization (ERM) with IID assumptions, i.e., that the train and test datasets follow the same data distribution, which exhibits limitations when distribution shifts occur in time series.

Distribution Shift in Time-Series Forecasting.
In recent decades, learning under non-stationary distributions, where the target distribution over instances changes with time, has attracted attention within learning theory (Kuh et al., 1990; Bartlett, 1992). In the context of time series, distribution shift can be categorized into concept drift and temporal shift. General concept drift methods (via invariant learning) (Arjovsky et al., 2019; Ahuja et al., 2021; Krueger et al., 2021; Pezeshki et al., 2021; Sagawa et al., 2019) assume instances are sampled from various environments and propose to identify and utilize invariant predictors across these environments. However, when applied to time-series forecasting, these methods encounter limitations. Additional methods specifically tailored for time series data also face certain constraints: DIVERSITY (Lu et al., 2023) is designed for time series classification and detection only; OneNet (Wen et al., 2024) is tailored solely for online forecasting scenarios using online ensembling; and PeTS (Zhao et al., 2023) focuses on distribution shifts induced by the specific phenomenon of performativity. Other works specifically tackle temporal shift in time-series forecasting (Kim et al., 2021; Liu et al., 2022; Fan et al., 2023; Liu et al., 2023). These approaches implement carefully crafted normalization strategies to ensure that both the lookback and horizon of a univariate time series adhere to normalized distributions. This alignment helps alleviate potential temporal shifts, where the statistical properties of the lookback and horizon time series may differ over time.

B TEMPORAL SHIFT AND CONCEPT DRIFT

To highlight the differences between concept drift and temporal shift, we provide visualizations of both phenomena. Figure 4 illustrates temporal shift, while Figure 5 demonstrates concept drift.²

²Figures adapted from: https://github.com/ts-kim/RevIN

Temporal shift refers to changes in the statistical properties of a univariate time series, such as mean, variance, and autocorrelation structure, over time. For instance, the mean and variance of the given time series shift between the lookback window and the horizon window, as depicted in Figure 4. This issue is inherent in time series forecasting and can occur in any given time series, regardless of whether the data pertains to the target series or exogenous features.

In contrast, concept drift describes changes in the correlations between exogenous features and the target series over time. Figure 5 illustrates this phenomenon, where increases in the exogenous features at earlier time steps lead to increases in the target series, while increases at later time steps result in decreases. Unlike temporal shift, concept drift involves multiple correlated time series and is not an inherent issue in univariate time series analysis.

Figure 4: Demonstration of the temporal shift phenomenon within time series data, showcasing the variations in statistical properties, including mean and variance, over time as the emergence of temporal shift (Red: ground truth; Yellow: N-BEATS prediction; Blue: N-BEATS+RevIN prediction).

Figure 5: Demonstration of the concept drift phenomenon within time series data, showcasing the variations in correlation structures between target series Y and exogenous feature X over time as the emergence of concept drift (Red: ground truth; Yellow: N-BEATS prediction; Blue: N-BEATS+RevIN prediction).
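As a toy illustration of the two phenomena (our own example, not part of the paper's experiments), the following NumPy snippet generates one series exhibiting temporal shift (drifting mean and variance) and one pair of series exhibiting concept drift (the X-to-Y relationship flips sign halfway through while the marginal of X stays fixed):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 400

# Temporal shift: the marginal statistics of y drift over time
# (mean ramps up; variance doubles in the second half).
mean = np.linspace(0.0, 3.0, T)
std = np.where(np.arange(T) < T // 2, 0.5, 1.0)
y_temporal_shift = mean + std * rng.standard_normal(T)

# Concept drift: the conditional relationship y|x changes over time
# (x drives y positively in the first half, negatively in the second),
# while x itself keeps the same marginal distribution throughout.
x = rng.standard_normal(T)
coef = np.where(np.arange(T) < T // 2, 1.0, -1.0)
y_concept_drift = coef * x + 0.1 * rng.standard_normal(T)
```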
C ADDITIONAL EXPERIMENT DETAILS

C.1 DATASETS

We conduct experiments on six real-world datasets, which are commonly used as benchmarks:

• ILI. The ILI dataset collects data on influenza-like illness patients weekly, with eight variables.
• Exchange. The Exchange dataset records the daily exchange rates of eight currencies.
• ETT. The ETT dataset contains four sub-datasets: ETTh1, ETTh2, ETTm1, and ETTm2. They record electricity transformer temperatures from two separate counties in China (distinguished by '1' and '2') at two granularities: minutely and hourly (distinguished by 'm' and 'h'). All sub-datasets have seven variables/features.

We follow (Wu et al., 2022; Nie et al., 2023; Liu et al., 2024b) to preprocess the data, which guides splitting the datasets into train/validation/test sets and selecting the target variables. All datasets are preprocessed using zero-mean normalization. Additional popular time-series datasets, such as Traffic (which records road occupancy rates from various sensors on San Francisco freeways), Electricity (which tracks hourly electricity consumption for 321 customers), and Weather (which collects 21 meteorological indicators in Germany, such as humidity and air temperature), are omitted from our evaluations. These datasets exhibit strong periodic signals and near-stationary properties, making distribution shift issues less prevalent. A visual comparison between the ETTh1 and Traffic datasets, shown in Figure 6, further supports this observation.

Figure 6: Distribution shift issues across datasets. Left (a): ETT. Both temporal shift and concept drift are present. The target series shows varying statistics over time (e.g., lower variance in earlier periods and higher variance later), causing temporal shift. The correlation between X and Y is unclear and unstable, causing concept drift. Right (b): Traffic. Both temporal shift and concept drift are moderate. The target series exhibits near-periodicity, making the temporal shift moderate. Moreover, the correlation between X and Y remains stable (e.g., both increase or decrease simultaneously), making concept drift moderate.

C.2 BASELINE IMPLEMENTATION

We follow the commonly adopted setup for defining the forecasting horizon window length, as outlined in prior works (Wu et al., 2022; Nie et al., 2023; Liu et al., 2024b). Specifically, for datasets such as ETT and Exchange, the forecasting horizon windows are chosen from the set [96, 192, 336, 720], with a fixed lookback window size of 96 and a consistent label window size of 48 for the decoder (if required). Similarly, for the weekly reported ILI dataset, we employ forecasting horizon windows from [24, 36, 48, 60], with a fixed lookback window size of 36 and a constant label window size of 18 for the decoder (if required).

In the context of concept drift baselines, several baselines such as GroupDRO, IRM, and VREx require environment labels, which are typically absent in time series datasets. To address this, we partition the training set into k equal-length time segments to serve as predefined environment labels. For baseline time-series forecasting models, we follow the implementations and suggested hyperparameters (with additional tuning) from the Time Series Library.³ For concept drift baselines, we utilize the implementations and hyperparameter tuning strategies recommended by DomainBed.⁴ For temporal shift baselines, we adopt the implementations and hyperparameter configurations outlined in their respective papers.

³https://github.com/thuml/Time-Series-Library
⁴https://github.com/facebookresearch/DomainBed
Additionally, we add an extra MLP layer at the end of PatchTST to effectively utilize exogenous features, following (Liu et al., 2024a). In the ablation study, for the implementation of PatchTST and iTransformer, we follow the original approach by applying norm and denorm operations to the 'Base' model. To clarify our notation, ShifTS w/o RevIN refers to the model with standard norm/denorm operations and SAM, while ShifTS denotes the version where the regular norm/denorm is replaced with RevIN.

C.3 MUTUAL INFORMATION VISUALIZATION

For a given time series dataset, we compute the mutual information $I(X^H; Y^H)$ for each training time step and each exogenous feature dimension individually, following:

$$I(X^H; Y^H) = \sum_{x \in X^H} \sum_{y \in Y^H} P(x, y) \log \frac{P(x, y)}{P(x)P(y)}$$    (5)

We then average the mutual information across all time steps for each exogenous feature dimension and identify the maximum averaged mutual information over all feature dimensions. This process allows us to assess the information content of each feature dimension in relation to the target series. We visualize the maximum averaged mutual information plotted against the corresponding performance gain in Figure 3(a). This visualization provides insight into how the information content of different feature dimensions relates to the performance improvement achieved by the forecasting model.

D ADDITIONAL RESULTS

D.1 EVALUATIONS ON AGNOSTIC PERFORMANCE GAINS

To further demonstrate the benefit of ShifTS in improving forecasting accuracy over agnostic forecasting models, we additionally evaluate the performance differences without and with ShifTS on Informer, Pyraformer, and TimeMixer. The detailed results are presented in Table 4. These additional evaluations again show consistent performance improvements for these models. Moreover, compared to the results in Table 1, the performance gains on these older models are even more significant. This observation highlights the need to mitigate both concept drift and temporal shift in time-series forecasting, as such problems are rarely considered in these models, unlike later models (e.g., PatchTST and iTransformer incorporate normalization/denormalization processes).

E LIMITATION DISCUSSION

This work introduces SAM to address concept drift and proposes an integrated framework, ShifTS, which combines SAM with temporal shift mitigation techniques to enhance the accuracy of time-series forecasting. Extensive empirical evaluations support the effectiveness of these methods. However, the limitations of this study lie in two aspects. First, the distribution shift methods in time-series forecasting, including ShifTS, lack a theoretical guarantee. For example, no analysis quantifies how much the error bound can be tightened by addressing concept drift or temporal shift compared to vanilla time-series forecasting methods. Second, while this paper defines concept drift and temporal shift within the context of time-series forecasting, SAM and ShifTS are not the only possible solutions. Exploring alternative approaches remains an avenue for future research beyond the scope of this work. These two limitations highlight opportunities for future investigation.

Table 4: Performance comparison on forecasting errors without (ERM) and with ShifTS on Informer, Pyraformer, and TimeMixer. Employing ShifTS again shows near-consistent performance gains agnostic to forecasting models. The top-performing method is in bold. 'IMP.' denotes the average improvement over all horizons of ShifTS vs. ERM.

Model              Informer (AAAI'21)         Pyraformer (ICLR'21)        TimeMixer (ICLR'24)
Method             ERM          ShifTS        ERM           ShifTS        ERM          ShifTS
Dataset   Horizon  MSE    MAE   MSE    MAE    MSE    MAE    MSE    MAE    MSE    MAE   MSE    MAE
ILI       24       5.032  1.935 1.030  0.812  4.692  1.898  0.979  0.749  0.853  0.733 0.789  0.702
          36       4.475  1.876 1.046  0.850  4.814  1.950  0.866  0.740  0.721  0.676 0.697  0.665
          48       4.506  1.879 0.918  0.818  4.109  1.801  0.789  0.732  0.737  0.692 0.741  0.711
          60       4.313  1.850 0.957  0.839  4.483  1.850  0.723  0.698  0.788  0.723 0.670  0.659
          IMP.     78.4% / 56.0%              81.5% / 61.1%               6.3% / 3.0%
Exchange  96       0.839  0.746 0.137  0.277  0.410  0.525  0.145  0.275  0.127  0.268 0.098  0.234
          192      0.862  0.773 0.210  0.346  0.529  0.610  0.300  0.404  0.229  0.355 0.214  0.352
          336      1.597  1.063 0.378  0.485  0.851  0.778  0.440  0.506  0.553  0.560 0.440  0.491
          720      4.358  1.935 0.760  0.655  1.558  1.067  1.509  0.963  1.173  0.834 0.962  0.747
          IMP.     79.5% / 59.7%              39.8% / 31.5%               16.9% / 9.1%
ETTh1     96       0.891  0.863 0.095  0.231  0.653  0.748  0.065  0.197  0.059  0.184 0.059  0.187
          192      1.027  0.958 0.096  0.237  0.853  0.828  0.075  0.210  0.099  0.247 0.077  0.211
          336      1.055  0.961 0.092  0.237  0.705  0.797  0.092  0.238  0.121  0.279 0.098  0.246
          720      1.077  0.969 0.100  0.252  0.562  0.695  0.126  0.279  0.139  0.299 0.099  0.252
          IMP.     90.7% / 74.5%              86.4% / 69.6%               23.3% / 10.1%
ETTh2     96       3.195  1.651 0.232  0.381  1.598  1.127  0.156  0.307  0.152  0.303 0.146  0.299
          192      3.569  1.778 0.334  0.464  3.314  1.599  0.217  0.367  0.195  0.349 0.185  0.343
          336      2.556  1.468 0.400  0.512  2.571  1.489  0.245  0.398  0.238  0.392 0.230  0.381
          720      2.723  1.532 0.489  0.579  2.294  1.409  0.261  0.410  0.273  0.421 0.249  0.397
          IMP.     82.0% / 69.5%              90.6% / 73.5%               5.3% / 2.9%
ETTm1     96       0.320  0.433 0.055  0.175  0.130  0.298  0.028  0.125  0.030  0.128 0.029  0.126
          192      0.459  0.582 0.079  0.211  0.240  0.4112 0.045  0.162  0.047  0.165 0.047  0.164
          336      0.457  0.556 0.104  0.243  0.359  0.512  0.062  0.192  0.063  0.191 0.060  0.189
          720      0.735  0.760 0.148  0.294  0.657  0.750  0.091  0.231  0.083  0.223 0.081  0.220
          IMP.     80.7% / 60.3%              82.2% / 62.6%               2.3% / 1.1%
ETTm2     96       0.191  0.345 0.154  0.298  0.275  0.422  0.075  0.200  0.079  0.205 0.075  0.201
          192      0.458  0.556 0.243  0.378  0.484  0.552  0.107  0.248  0.121  0.259 0.111  0.250
          336      0.606  0.624 0.515  0.539  1.138  0.909  0.146  0.293  0.150  0.295 0.148  0.294
          720      1.175  0.879 0.564  0.592  2.920  1.537  0.196  0.347  0.246  0.387 0.198  0.346
          IMP.     33.4% / 23.0%              82.8% / 63.2%               8.5% / 4.1%
POLYNOMIAL PRECONDITIONING FOR INDEFINITE MATRICES

HAYDEN HENSON† AND RONALD B. MORGAN‡

Abstract. Polynomial preconditioning is an important tool in solving large linear systems and eigenvalue problems. A polynomial from GMRES can be used to precondition restarted GMRES and restarted Arnoldi. Here we give methods for indefinite matrices that make polynomial preconditioning more generally applicable. The new techniques include balancing the polynomial so that it produces a definite spectrum. Then a stability approach is given that is specialized for the indefinite case. Also, very complex spectra are examined. Then convergence estimates are given for polynomial preconditioning of real, indefinite spectra. Finally, tests are performed of finding interior eigenvalues.

Key words. linear equations, polynomial preconditioning, GMRES, indefinite matrices

AMS subject classifications. 65F10, 15A06

1. Introduction. Polynomial preconditioning is a powerful tool for improving convergence when solving large linear equations [18] or finding eigenvalues [8]. However, there can be difficulties for indefinite linear equations and interior eigenvalue problems. Here we give techniques for polynomial preconditioning to be effective in these situations. This is important because indefinite/interior problems tend to be difficult and so especially benefit from polynomial preconditioning.

Polynomial preconditioning has been extensively studied; see for example [15, 37, 29, 30, 31, 3, 35, 9, 4, 13, 39, 32, 38, 2, 17, 16, 8, 18, 41]. The GMRES polynomial was used in [27, 12] for Richardson iteration and in [38, 2, 17] for polynomial preconditioning. However, more recently in [8, 18], the GMRES polynomial was improved in efficiency, stability, and ease of determining the polynomial. This implementation uses roots of the polynomial, which enhances stability and allows for the insertion of extra copies of roots for further stability control.

For indefinite problems, it is desirable to have the polynomial preconditioning turn the spectrum definite. For this, we choose a polynomial that we call "balanced". This balanced polynomial has derivative zero at the origin. Several methods for balancing are given, along with some guidance on which to use. Also, cubic Hermite splines are suggested for checking whether a polynomial will be effective. Significantly complex spectra are considered, and one conclusion is that balancing will probably not be helpful if the origin is mostly surrounded by eigenvalues.

Next, the stability control method in [8, 18] may be ineffective for indefinite problems, because adding small roots to one side of the origin can increase the polynomial size and variability on the other side. This is addressed by not adding roots on the smaller side and by applying stability fixes of deflating eigenvalues and a short GMRES run.

Section 2 of the paper has a quick review of items needed in this paper. Section 3 has techniques for balancing the polynomial. Section 4 focuses on matrices with significantly complex spectra. Then stability of the polynomial is in Section 5, and Section 6 has convergence estimates for polynomial preconditioning of indefinite linear equations. Finally, interior eigenvalue problems are in Section 7.

†Department of Mathematics, Baylor University, Waco, TX 76798-7328 (Hayden Henson1@baylor.edu).
‡Department of Mathematics, Baylor University, Waco, TX 76798-7328 (Ronald Morgan@baylor.edu).

2. Review.
2. Review.

2.1. Polynomial Preconditioning with the GMRES Polynomial. Polynomial preconditioning is a way to transform the spectrum of a matrix and thus improve convergence of Krylov iterative methods. For linear equations with polynomial p and right preconditioning, this is

Ap(A)y = b,  x = p(A)y.

Defining φ(z) ≡ zp(z), the preconditioned system of linear equations is φ(A)y = b. In [8, 18], the polynomials are found with the GMRES algorithm [33]. Starting with the GMRES residual polynomial π, the polynomial φ is chosen as φ(z) = 1 − π(z), and thus p is also determined. The roots of π are the harmonic Ritz values [19, 28, 25], and they are used to implement both polynomials φ and p. Then GMRES can also be used to solve the linear equations as a part of polynomial preconditioned GMRES; see Algorithm 1, which is from [18]. Note that if A is a real matrix, then φ has real coefficients and φ(A) is real.

Algorithm 1 Polynomial Preconditioned GMRES, PP(d)-GMRES(m)
1. Construction of the polynomial preconditioner:
 (a) For a degree d preconditioner, run one cycle of GMRES(d) using a random starting vector.
 (b) Find the harmonic Ritz values θ_1, ..., θ_d, which are the roots of the GMRES polynomial: with Arnoldi decomposition A V_d = V_{d+1} H_{d+1,d}, find the eigenvalues of H_{d,d} + h_{d+1,d}^2 f e_d^T, where f = H_{d,d}^{-*} e_d with elementary coordinate vector e_d = [0, ..., 0, 1]^T.
 (c) Order the GMRES roots using Leja ordering [6, Alg. 3.1] and apply stability control as in [18, Algorithm 2].
2. PP-GMRES: Apply restarted GMRES to the matrix φ(A) = I − Π_{i=1}^{d} (I − A/θ_i) to compute an approximate solution to the right-preconditioned system φ(A)y = b, using [18, Algorithm 1] for φ(A) times a vector. To find x, compute p(A)y with [18, Algorithm 3].
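As an illustration (a sketch of ours, not the authors' implementation), the two core pieces of Algorithm 1, extracting the harmonic Ritz values from an Arnoldi decomposition and applying φ(A) to a vector in root form, can be written in NumPy as follows. Complex conjugate root pairs are handled here in complex arithmetic rather than the paired real form used for real matrices.

```python
import numpy as np

def harmonic_ritz_values(H):
    """Roots of the GMRES residual polynomial pi, from the (d+1) x d
    Arnoldi Hessenberg matrix H: the eigenvalues of
    H_dd + h_{d+1,d}^2 * f * e_d^T with f = H_dd^{-*} e_d."""
    d = H.shape[1]
    Hdd = H[:d, :]
    e_d = np.zeros(d); e_d[-1] = 1.0
    f = np.linalg.solve(Hdd.conj().T, e_d)   # f = H_dd^{-*} e_d
    return np.linalg.eigvals(Hdd + H[d, d - 1]**2 * np.outer(f, e_d))

def apply_phi(A, v, thetas):
    """phi(A) v = v - prod_i (I - A/theta_i) v; the roots thetas are
    assumed already Leja-ordered, which is what keeps the product stable."""
    w = v.astype(complex)
    for theta in thetas:
        w = w - (A @ w) / theta              # w <- (I - A/theta_i) w
    return v - w
```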
2.2. Stability control. For a matrix with an eigenvalue that stands out from the rest of the spectrum, the GMRES residual polynomial π generally has a root at that eigenvalue. The slope of the polynomial is likely to be very steep at that root, which can lead to ill-conditioning and cause φ(A) times a vector to be unstable. To improve stability, extra copies of roots corresponding to outstanding eigenvalues can be added to π. This is implemented in [18, Algorithm 2] (see also [8, p. A21]). For each root θ_k, one computes a diagnostic quantity called pof(k) that measures the magnitude of π(θ_k) with the (1 − z/θ_k) term removed. When log10(pof(k)) exceeds some threshold pofcutoff, extra (1 − z/θ_k) terms are appended to π (in [18, Algorithm 2], pofcutoff is set to 4).

2.3. Range Restricted GMRES. Some image processing problems have matrices with zero and very small eigenvalues that correspond to noise. The associated linear equations need to be solved so that the effect of these small eigenvalues is essentially removed. Range Restricted GMRES (RRGMRES) [7] chooses A·b as the initial vector for its subspace. This starting vector has small eigencomponents corresponding to the small eigenvalues, thus reducing their effect.

3. Balancing the Polynomial. In this section, we give ways of adjusting polynomial preconditioning to make it more effective for indefinite problems. We begin with an example showing that polynomial preconditioning with the GMRES polynomial can be very effective but needs some improvement.

Example 1. We use a matrix that is bidiagonal with diagonal elements −2500, −2499, −2498, ..., −2, −1, 1, 2, 3, ..., 2499, 2500 and super-diagonal elements all 1. So while the matrix is nonsymmetric, the spectrum is real and is mirrored about the origin. The right-hand side is generated random normal and then is scaled to norm 1. The residual norm tolerance is 10^{-10}. Table 3.1 has results comparing restarted GMRES(50) to PP(d)-GMRES(50), which stands for polynomial preconditioned GMRES with a φ polynomial of degree d and GMRES restarted at dimension 50. The preconditioning is effective for this very indefinite case. Comparing no polynomial preconditioning (d = 1) to d = 50, the polynomial preconditioning improves the time by a factor of 200.

Table 3.1: Bidiagonal matrix from Example 1, which is nonsymmetric with a real, symmetric (mirrored) spectrum. PP(d)-GMRES(50) is used to solve linear equations. MVP's is the total matrix-vector products, v op's is the total length-n vector operations, and dp's is the total dot products.

               GMRES Polynomial                     Balanced Poly - Method 1 - Added Root
 d, deg        Time     MVP's    v op's   dp's      Time     MVP's    v op's   dp's
 of φ          (sec's)  (tho's)  (mil's)  (mil's)   (sec's)  (tho's)  (mil's)  (mil's)
 1 (No PP)     2346     4363     245      116
 5             -        -        -        -         198      2014     20.5     8.91
 10            280      4305     28.0     11.4      47.1     767      4.61     1.85
 50            11.5     444      0.94     0.24      2.77     95.3     0.20     0.051
 100           10.5     535      0.84     0.15      2.18     86.7     0.14     0.028
 150           43.6     2442     3.36     0.44      1.79     64.9     0.11     0.023
 151           4.35     212      0.31     0.048     1.86     68.6     0.12     0.024
 200           3.85     180      0.27     0.044     1.86     61.5     0.12     0.028
 250           3.60     146      0.24     0.047     2.07     47.9     0.12     0.036

[Fig. 3.1, plot omitted: Bidiagonal matrix of size n = 5000. Top has φ polynomials of degree d = 5, d = 25 and d = 50. Bottom has a closeup of the same polynomials.]

The top of Figure 3.1 shows the polynomials used for degrees 5, 25 and 50. The spectra are significantly improved by the polynomials of degrees 25 and 50. The eigenvalues are mapped to near 1, except for eigenvalues near the origin. This explains why the method is effective for these polynomials. However, with the degree 5 polynomial, many eigenvalues near zero are mapped very close to zero. This is especially shown in the closeup in the bottom half of the figure, with the 200 eigenvalues closest to the origin all mapped very near zero.

Looking further at the results in Table 3.1, higher degree polynomials can be even more effective. However, degree 150 takes about 10 times longer than degree 151. This is explained by the closeup in the bottom of Figure 3.2. Both the degree 150 and 151 polynomials dip into the lower half of the plane, so the spectrum of φ(A) is indefinite; both have 11 negative eigenvalues. But the important difference is that degree 150 happens to have an eigenvalue fall very near zero. This smallest eigenvalue of φ(A) is at −2.6 × 10^{-4}, compared to a smallest of 3.0 × 10^{-3} for degree 151. Having one very small eigenvalue can significantly slow down GMRES(50).

[Fig. 3.2, plot omitted: Bidiagonal matrix of size n = 5000. Top has φ polynomials of degree d = 150 and d = 151. Bottom has a closeup of the same polynomials plotted at the eigenvalues.]

So this example gives two reasons why we need a polynomial that stays in the top half of the plane over the spectrum. Such a polynomial gives a definite spectrum and avoids creating very small eigenvalues for φ(A). We call a polynomial "balanced" if it has slope zero with respect to the real axis at the origin. This way it will be positive over the real axis near the origin and hopefully over all of the spectrum. We will give several ways of balancing the polynomial.

3.1. Balance Method 1: Add a root. The first way of balancing the polynomial is to add a root that makes the slope zero at the origin. As with adding roots for stability in Subsection 2.2, the resulting polynomial is no longer a minimum residual polynomial, but it may nearly have that property since it is a modification of the GMRES polynomial. The slope of the polynomial φ at the origin is

φ'(0) = Σ_{i=1}^{d} 1/θ_i.

In order to balance, we add one extra root to π that makes the slope of φ at the origin zero. We call this root the "balancing root", defined as η = −1/Σ_{i=1}^{d} (1/θ_i), and denote the new polynomial by φ_1; see Algorithm 2.
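A minimal sketch of this computation (ours; Algorithm 2 below states the method formally):

```python
import numpy as np

def balance_method_1(thetas):
    """Append the balancing root eta = -1 / sum_i (1/theta_i), which
    makes the slope of the new phi zero at the origin (Balance Method 1)."""
    thetas = np.asarray(thetas, dtype=complex)
    eta = -1.0 / np.sum(1.0 / thetas)
    return np.append(thetas, eta)

# after balancing, phi'(0) = sum(1/theta_i) vanishes:
assert abs(np.sum(1.0 / balance_method_1([2.0, -3.0, 5.0]))) < 1e-12
```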
Algorithm 2 Balance Method 1: Add a Root
0. Apply GMRES(d) to form a polynomial φ(α) of degree d. Let the roots of the corresponding polynomial π(α) (harmonic Ritz values) be θ_1, ..., θ_d.
1. Compute η = −1/Σ_{i=1}^{d} (1/θ_i) and add it to the list of polynomial roots of π.
2. The new polynomial of degree d + 1 is φ_1(α) = 1 − π(α) · (1 − α/η).

Example 1 (continued). The right half of Table 3.1 is for the balanced polynomial. All of the φ_1 polynomials used are one degree higher than the degree listed on the left because of the added root. The results are consistently better with the balancing. Even the degree six polynomial (d = 5 before the added root) is effective, converging in 198 seconds compared to over 2000 seconds without polynomial preconditioning. For polynomials with d = 50 and above, the times are all below three seconds, and dot products are reduced by more than three orders of magnitude versus no polynomial preconditioning.

Example 2. We next use a matrix that is bidiagonal with diagonal elements −100, −99, −98, ..., −2, −1, 1, 2, 3, ..., 4899, 4900 and super-diagonal elements all 1. This matrix is much less indefinite than the first one. Table 3.2 has timing results for PP(d)-GMRES(50) with different degree polynomials and with different balance methods. This example motivates the balance methods that will be given in the next few subsections.

Table 3.2: Bidiagonal matrix with diagonal −100, −99, ..., −1, 1, 2, 3, ..., 4900, and superdiagonal of ones. PP(d)-GMRES(50) is used to solve linear equations. Times are compared for different ways of balancing the polynomial. Times are given in seconds. For Balance 2, "same" means that no root is subtracted but a root is added, so it is the same as Balance 1. Superscript * indicates the residual norm only reaches 7.94 × 10^{-10}, and superscript † indicates the residual norm only reaches 6.44 × 10^{-10}. For Balance 4, the degree 10 polynomial is composed of a degree 5 inner polynomial and a degree 2 outer polynomial. Similarly, degrees 15 and 25 have an inner polynomial of degree 5, while the higher ones have an inner polynomial of degree 10.

 d - degree   No balance   Balance 1   Balance 2     Balance 3   Balance 4
 of poly φ    (seconds)    add root    remove, add   RRG poly    Composite
 No PP        229
 5            18.6         52.8        118           97.4        -
 9            17.3         373         28.2          13.9        -
 10           -            8.44        same          9.48        34.9
 15           10.6         3.16        3.09          3.33        19.2
 25           20.4         2.33        2.62          1.92        9.28
 50           2.92         1.83        2.11          1.59        1.53
 100          1.95         1.92        same          1.23        1.06
 150          1.82         1.44        3.04          1.87*       1.10
 200          2.08         -           2.69          2.20†       0.77

With Balance Method 1, the best times are an improvement of more than a factor of 150 compared to no preconditioning. However, for the degree five polynomial, balancing is not necessary. Figure 3.3 shows why. The unbalanced approach has a polynomial that goes negative over the negative part of the spectrum of A, so the resulting polynomial preconditioned spectrum remains indefinite. But it does not have as many small eigenvalues as with Balance Method 1, which is fairly flat around the origin. Specifically, with the original degree five polynomial, φ(A) has 100 negative eigenvalues, but only 12 with absolute value less than 0.02, and the smallest is 3.2 × 10^{-3}. Meanwhile, with the balanced degree six polynomial, φ_1(A) has all positive eigenvalues, but has 108 less than 0.02, and the smallest is 6.7 × 10^{-6}.

[Fig. 3.3, plot omitted: Bidiagonal matrix of size n = 5000 with 100 negative eigenvalues. Top has φ polynomials of original degree d = 5 with different balancing methods. Bottom has a closeup of the same polynomials near the origin.]

At degree 10, convergence is not achieved without balancing, as the residual norm stalls at 0.0217. Balance Method 1 is a big improvement for this degree and for degrees 15 and 25. For degrees 50 and higher, convergence is reached in under three seconds even without balancing, though balancing still yields improvements in some cases. The degree 9 polynomial with Balance Method 1 (so degree 10 with the extra root) does poorly. The added root is around −645, and the linear factor (1 + α/645) causes the polynomial to increase in size at the large part of the spectrum. Figure 3.4 shows this polynomial dips below zero and so gives an indefinite spectrum.

[Fig. 3.4, plot omitted: Bidiagonal matrix of size n = 5000 with 100 negative eigenvalues. Top has φ polynomials of degree 9 with different balancing. Bottom has a closeup of the same polynomials near the origin.]

The degree 200 polynomial has a similar problem as degree 9. Balance Method 2 is developed next to deal with these situations.
3.2. Balance Method 2: Remove root(s), then add a root. Sometimes it may be beneficial to remove one or two roots from the GMRES polynomial π, but still add a balancing root. The motivation for first removing a root is to decrease the value of |φ'(0)|. This gives a balancing root η further from the origin and a linear factor (1 − α/η) closer to 1 across the spectrum of A. To make the value of |φ'(0)| smaller, we look for the harmonic Ritz value whose reciprocal (or sum of reciprocals in the complex case) is closest to φ'(0). Balance Method 2 is in Algorithm 3.

Algorithm 3 Balance Method 2: Remove root(s), then add a root
0. Apply GMRES(d) to form a polynomial φ(α) of degree d. Let the roots of the corresponding polynomial π(α) (harmonic Ritz values) be θ_1, ..., θ_d.
1. Compute the difference from the derivative:
 • For each real root θ_i, compute |φ'(0) − 1/θ_i|.
 • For complex conjugate pairs (θ_i, θ̄_i), compute |φ'(0) − (1/θ_i + 1/θ̄_i)|.
2. Let ξ be the inverse (or sum of inverses) with the smallest difference from φ'(0).
3. If |φ'(0) − ξ| ≥ |φ'(0)|, do not remove any roots and add η = −1/Σ_{i=1}^{d} (1/θ_i) to the list of polynomial roots of π.
4. If |φ'(0) − ξ| < |φ'(0)|, remove the root(s) whose sum of inverses yields ξ and add η = −1/(Σ_{i=1}^{d} 1/θ_i − ξ) to the list of polynomial roots of π.
5. The new polynomial of degree d, d + 1, or d − 1 is φ_2 = 1 − π_r(α) · (1 − α/η), where π_r is the π polynomial after having root(s) removed.
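A sketch of this selection in NumPy (our illustration of Algorithm 3; conjugate pairs are detected by matching each complex root with its conjugate):

```python
import numpy as np

def balance_method_2(thetas, tol=1e-12):
    """Balance Method 2: remove the root (or conjugate pair) whose
    reciprocal sum is closest to phi'(0) = sum(1/theta_i) if that helps,
    then append the balancing root eta = -1/(phi'(0) - xi)."""
    thetas = np.asarray(thetas, dtype=complex)
    s = np.sum(1.0 / thetas)                        # phi'(0)
    candidates = []                                 # (indices, xi)
    for i, t in enumerate(thetas):
        if abs(t.imag) < tol:                       # real root
            candidates.append(([i], 1.0 / t))
        else:                                       # conjugate pair
            j = int(np.argmin(np.abs(thetas - np.conj(t))))
            if i < j:                               # count each pair once
                candidates.append(([i, j], 1.0 / t + 1.0 / np.conj(t)))
    idx, xi = min(candidates, key=lambda c: abs(s - c[1]))
    if abs(s - xi) >= abs(s):                       # removing would not help
        return np.append(thetas, -1.0 / s)          # same as Balance Method 1
    return np.append(np.delete(thetas, idx), -1.0 / (s - xi))
```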
Example 3. We consider another bidiagonal matrix, with diagonal elements −100, −99, ..., −2, −1, 1, 2, ..., 9850, 9860, 9870, ..., 10330, 10340, 10350 and super-diagonal of all ones. As can be seen in Table 3.3, polynomial preconditioning provides improvement of up to 14 times compared to no preconditioning, and up to 43 times improvement with Balance Method 2. For the degree 10 polynomial, we see that Balance Method 2 harms convergence compared to Balance Method 1 and to no balancing. Caution should be used when applying Balance Method 2 to low degree polynomials, as removing a root relatively takes away a lot of information. Next, with a degree 40 polynomial, φ(A) has an eigenvalue at 2.5 × 10^{-4}, which slows down convergence considerably. Balance Method 1 helps this problem (see the top right of Figure 3.5), but still produces a polynomial that is negative over the intervals (9900, 10090), (10230, 10280), and (10340, 10350), which can be seen in the top left of Figure 3.5. This is in contrast to Balance Method 2, which gives a polynomial that remains positive over the spectrum of A (see the bottom of Figure 3.5) and gives much faster convergence.

Table 3.3: Bidiagonal matrix of size n = 10,000 with diagonal −100, −99, ..., −2, −1, 1, 2, ..., 9850, 9860, 9870, ..., 10330, 10340, 10350 and super-diagonal of ones. PP(d)-GMRES(50) is used to solve linear equations. A dash denotes polynomial degrees where Balance Method 1 resulted in an indefinite spectrum for φ(A).

 d - degree   No balance           Balance 1 (add root)   Balance 2 (subtr., add)
 of poly φ    MVP       Time       MVP       Time         MVP       Time
              (thous)   (sec)      (thous)   (sec)        (thous)   (sec)
 No poly      872       976
 10           210       28.1       298       40.4         798       136
 40           12,940    749        4,394     254          42.6      3.6
 70           69.3      5.6        377       19.0         26.0      2.2
 100          59.0      4.7        203       10.6         20.2      1.9

[Fig. 3.5, plot omitted: Bidiagonal matrix of size n = 10,000. The bottom graph has the φ polynomial of degree 40 with no balance, Balance Method 1 and Balance Method 2. Top right has the same polynomials zoomed in at the origin. Top left demonstrates the intervals where the polynomial from Balance Method 1 is negative.]

3.3. Using Cubic Splines. Since the goal of the balancing root η is to ensure that the φ polynomial remains positive over the spectrum of A, it is useful to verify that the GMRES polynomial π stays below 1; then φ will stay above zero. For this, we employ Hermite cubic splines to estimate the values of the balanced π over the intervals between its real roots. This approach is a test to determine the polynomial's suitability for preconditioning. We develop a cubic spline C_j on each interval (θ_j, θ_{j+1}) which meets the following conditions:
1. C_j(θ_j) = C_j(θ_{j+1}) = 0.
2. C_j'(θ_j) = π'(θ_j) and C_j'(θ_{j+1}) = π'(θ_{j+1}).
The goal is to see if C_j stays below 1 over the interval, which suggests that π behaves likewise. We need not check the intervals where π'(θ_j) < 0, because it is only possible for C_j(x) ≥ 0 over [θ_j, θ_{j+1}] if π'(θ_j) ≥ 0. The last part of the algorithm considers the Ritz values along with the harmonic Ritz values; spurious harmonic Ritz values can occur outside of the spectrum, as is also discussed in Section 5.

Example 3 (continued). Figure 3.5 shows that the Balance Method 1 polynomial of degree 40 has large fluctuation over the spectrum of A. The cubic spline test in Algorithm 4 appropriately flags the interval between the two harmonic Ritz values at about 9990 and 10,115, so the polynomial is rejected.

Another, more complicated, option for testing a polynomial is to actually find the maximum value of π on each interval between real roots. This could be done with a bisection method using the polynomial and its derivative.

Algorithm 4 Using cubic splines to show positive definiteness
0. Let θ_1 ≤ θ_2 ≤ ... ≤ θ_d̃ be the real harmonic Ritz values, which are the real roots of π, and let θ̃_min and θ̃_max be the Ritz values with smallest (most negative) and largest real parts, respectively. For each interval (θ_j, θ_{j+1}), j = 1, ..., d̃ − 1, where π'(θ_j) ≥ 0 and where π'(θ_j) and π'(θ_{j+1}) are not both 0, do:
1. Calculate C_j(x) = (1/6)a(x − θ_j)^3 + (1/2)b(x − θ_j)^2 + c(x − θ_j), where a = 6(π'(θ_{j+1}) + π'(θ_j))/(θ_{j+1} − θ_j)^2, b = −(2π'(θ_{j+1}) + 4π'(θ_j))/(θ_{j+1} − θ_j), and c = π'(θ_j).
2. Find the critical point(s) x̂_j = θ_j + (−b ± √(b^2 − 2ac))/a. Select the root x̂_j ∈ (θ_j, θ_{j+1}).
3. If C_j(x̂_j) > 1 and any of the following hold:
 • x̂_j ≤ Re(θ̃_min) ≤ θ_{j+1} and C_j(Re(θ̃_min)) > 1,
 • θ_j ≤ Re(θ̃_max) ≤ x̂_j and C_j(Re(θ̃_max)) > 1,
 • Re(θ̃_min) ∉ (x̂_j, θ_{j+1}) and Re(θ̃_max) ∉ (θ_j, x̂_j),
then φ is not positive over the spectrum of A.
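The spline construction and the interior-maximum check at the heart of Algorithm 4 can be sketched as follows (our illustration; the Ritz-value refinements of step 3 are omitted):

```python
def spline_exceeds_one(theta_j, theta_j1, dpi_j, dpi_j1):
    """Hermite-cubic surrogate for pi on (theta_j, theta_j1), built from
    the endpoint slopes dpi_j = pi'(theta_j) and dpi_j1 = pi'(theta_j1).
    Returns True if the surrogate rises above 1, suggesting phi = 1 - pi
    may go negative on the interval."""
    h = theta_j1 - theta_j
    a = 6.0 * (dpi_j1 + dpi_j) / h**2
    b = -(2.0 * dpi_j1 + 4.0 * dpi_j) / h
    c = dpi_j
    C = lambda t: a * t**3 / 6.0 + b * t**2 / 2.0 + c * t
    disc = b * b - 2.0 * a * c
    if a == 0.0 or disc < 0.0:
        return False          # degenerate or no interior critical point
    for t in ((-b + disc**0.5) / a, (-b - disc**0.5) / a):
        if 0.0 < t < h and C(t) > 1.0:
            return True
    return False
```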
3.4. Balancing with a polynomial from Range Restricted GMRES. This section discusses two methods that use Range Restricted GMRES [7] (RRGMRES). This approach automatically gives a balanced polynomial (the RRGMRES iterate lies in span{Ab, ..., A^d b}, so φ(z) = zp(z) has no linear term and φ'(0) = 0). There is more than one way to represent the polynomial from RRGMRES. Here we implement it in Newton form and call it Balance Method 3; see Algorithm 5. Determining the polynomial for Balance Method 3 is not efficient for high degree polynomials, as it uses double the matrix-vector products. Also, it may not always be stable, due to the non-orthogonal basis for V and the use of the Normal Equations (the columns of V do approximate an orthogonal matrix). For a more stable and more complicated implementation of this Newton form of the polynomial, see [17, Subsection 3.3] (this can be adjusted for RRGMRES).

Balance Method 4 uses a composite polynomial with the Newton form of the RRGMRES polynomial as the inner polynomial. The outer polynomial comes from polynomial preconditioned GMRES. For details of using composite polynomials, we refer to [18], where they are called double polynomials.

Algorithm 5 Determine the Polynomial from Range Restricted GMRES for Balance Method 3
0. Choose the degree d of the φ polynomial.
1. Apply the Arnoldi iteration with starting vector A·b for d iterations. Compute the Ritz values θ_1, ..., θ_d (regular Ritz, not harmonic).
2. Generate a matrix V with Newton basis vectors as columns, so the vectors are b, (A − θ_1)b, (A − θ_2)(A − θ_1)b, ..., (A − θ_{d−1}) ··· (A − θ_2)(A − θ_1)b. This can be modified to keep real vectors for the complex Ritz value case.
3. Solve the Normal Equations (AV)^T(AV)g = (AV)^T b. Then the Newton coefficients are in g.
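A compact sketch of Algorithm 5 (ours; breakdown checks are omitted, and the least-squares solve below is the numerically equivalent, more stable form of the stated Normal Equations):

```python
import numpy as np

def rrgmres_newton_poly(A, b, d):
    """Balance Method 3 sketch. Arnoldi on the starting vector A @ b gives
    (regular) Ritz values; the Newton basis V and the normal equations
    (AV)^T (AV) g = (AV)^T b give the Newton coefficients g.
    Then p(A)b = V g and phi(A)b = (A V) g."""
    n = len(b)
    Q = np.zeros((n, d + 1)); H = np.zeros((d + 1, d))
    q = A @ b
    Q[:, 0] = q / np.linalg.norm(q)
    for k in range(d):                        # d Arnoldi iterations
        w = A @ Q[:, k]
        for i in range(k + 1):
            H[i, k] = Q[:, i] @ w
            w -= H[i, k] * Q[:, i]
        H[k + 1, k] = np.linalg.norm(w)       # assume no breakdown
        Q[:, k + 1] = w / H[k + 1, k]
    thetas = np.linalg.eigvals(H[:d, :d])     # Ritz values
    V = np.zeros((n, d), dtype=complex)       # Newton basis (kept complex)
    V[:, 0] = b
    for k in range(1, d):
        V[:, k] = A @ V[:, k - 1] - thetas[k - 1] * V[:, k - 1]
    AV = A @ V
    g = np.linalg.lstsq(AV, b.astype(complex), rcond=None)[0]
    return thetas, V, g
```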
Example 2 (continued). The two right-most columns of Table 3.2 have results with Balance Methods 3 and 4. These methods are mostly competitive, and Method 4 has the best times for high degree polynomials. Note that Balance Method 3 does not quite give full accuracy at the high degrees.

3.5. Choice of balancing. We first give an example where balancing is detrimental, then discuss when balancing should be used and which version is appropriate.

Example 4. Let the matrix be diagonal of size n = 5000 with eigenvalues −1000, −999, −998, ..., −100, 0.1, 0.2, 0.3, ..., 1, 2, 3, ..., 4090. We apply polynomial preconditioning with and without balancing. PP(50)-GMRES(50) converges to residual norm below 10^{-10} in 10 cycles without balancing. Using balancing is disastrous: with Method 1 it takes 4235 cycles, Balance 2 has 6713 cycles, and Balance 3 uses 4185. The top of Figure 3.6 has the polynomial with and without Balance Method 1. The unbalanced polynomial comes through the origin with significant slope, which allows it to separate the small eigenvalues from the origin more than the balanced polynomial does. The smallest eigenvalue of φ(A) without balancing is 6.12 × 10^{-4}, while the smallest with balancing is much smaller at 1.27 × 10^{-6}. The bottom of Figure 3.6 shows the polynomial at these small eigenvalues.

[Fig. 3.6, plot omitted: Matrix with a gap in the spectrum on one side of the origin. Balance Method 1 is compared to no balancing with d = 50. The lower half of the figure has 'x' and 'o' marks showing the polynomial values at the small eigenvalues.]

Now we attempt to give guidance on when to balance. Some rough knowledge about the spectrum is required; otherwise it may be necessary to experiment with and without balancing to determine which works better. If the eigenvalues are real or almost all real, and they are fairly equally distributed near the origin, then there will probably be a gain from balancing an indefinite spectrum and hopefully turning it definite. For such a spectrum that is nearly mirrored on both sides of the origin, use Balance Method 1. Otherwise, one can try Methods 1 and 2, possibly testing with splines to see if they pass. If not, or if they do not work in an actual test, switch to Balance 3, or to Balance 4 for high degree polynomials. As the last example showed, if there is a gap between the origin and the eigenvalues on one side of the spectrum, then it may be best to not balance; the polynomial can then dip down in this gap, and the preconditioned spectrum may still be definite. Finally, when there are eigenvalues surrounding the origin with significant imaginary parts, this is both a difficult situation and probably one where balancing is not beneficial. This is discussed in the next section.
4. Complex Spectra. Matrices with spectra spread out in the complex plane need to be investigated. A polynomial cannot be expected to be effective if the origin is completely surrounded by eigenvalues, based on the minimum modulus principle in complex analysis. For instance, if the spectrum is on a circle centered at the origin, we would want the φ polynomial to be zero at the origin and then, on the circle, to have real part near one and imaginary part near zero. However, such a polynomial would have its minimum modulus in the interior of the circle, violating the principle. We look at approaching this difficult situation of having the origin surrounded in this and later sections. We first have an example where both polynomial preconditioning and balancing are effective for a matrix with a significantly complex spectrum, but one that is not complex at the origin. Then the next example looks at how far we can surround the origin and still solve linear equations.

Example 5. We consider a Hatano-Nelson [10] matrix of size n = 2500, which has a complex and indefinite spectrum; see the lower right part of Figure 4.1. The results are in Table 4.1. GMRES(50) does not converge; the residual norm stalls at ||r|| = 0.359. This can be fixed with polynomial preconditioning. For degree d = 15, Balance Methods 3 and 4 converge rapidly. Balance Methods 1 and 2 fail to converge, but for degrees 20 and 25 they work well.

Table 4.1: PP-GMRES applied to the Hatano-Nelson matrix [10] with n = 2500, γ = 0.5, and d = 0.9·4·rand(n, 1). For Balance Method 4, the inner polynomial is selected to have the highest possible degree, up to 15, that evenly divides the degree of the overall composite polynomial. Times are given in seconds.

 d - degree   No balance   Balance 1   Balance 2       Balance 3   Balance 4
 of poly φ                 add root    subtr., add     RRG poly    Composite
 15           4.66         -           -               2.08        2.85
 20           22.6         4.50        4.29            2.69        3.73
 25           12.0         3.92        3.92            3.32        4.03
 50           10.3         7.18        -               2.01        3.70
 100          64.2         5.02        same as bal 1   1.63        5.62

The left side of Figure 4.1 shows the spectra after polynomial preconditioning of degree 25 using no balancing and Balance Methods 1 and 2. With no balancing, the spectrum is very indefinite. A close-up with the two balancings in the upper right part of the figure shows that the spectra are now definite.

[Fig. 4.1, plots omitted: Graphs for the Hatano-Nelson matrix with n = 2500, γ = 0.5 and d = 0.9·4·rand(n, 1). The bottom right graph is the original spectrum of the matrix. The top left is the spectrum transformed with a degree 25 preconditioning polynomial without balancing. The middle left and bottom left graphs are the spectrum transformed with the Balance Method 1 and Balance Method 2 polynomials, respectively. The top right shows Balance Methods 1 and 2 zoomed in at the origin to show that they give positive definite spectra.]

Example 6. Polynomial preconditioning is examined for matrices with very complex spectra. Each matrix has size n = 2000. Twenty eigenvalues are equally spaced on 50 rays that move out from the origin and go much of the way around it; see the (blue) dots on the left parts of Figure 4.2. GMRES(50) is run with and without polynomial preconditioning to a relative residual norm tolerance of 10^{-8} and with right-hand sides generated random normal and then normed to one. No balancing is used.
The red stars on the left parts of Figure 4.2 are the roots of the GMRES residual polynomials of degree 50. Plots on the right show how the spectrum is changed by the preconditioning. The first case, shown in the top half of the figure, has eigenvalues 230 degrees around the origin. The polynomial succeeds in transforming the original indefinite spectrum on the left into one that is nearly definite on the right (there are only six eigenvalues with negative real part after the preconditioning). For the second case of 280 degrees, the spectrum is also improved by the preconditioning, even though it stays quite indefinite; there is more spacing from the origin, and many eigenvalues are in a blob. GMRES(50) converges very slowly for the easier matrix and not at all for the tougher one, but PP(50)-GMRES(50) converges rapidly for both cases. See Table 4.2 for results with these two matrices and one in between them. For the 230 degree case, there is remarkable improvement with just a degree 5 polynomial. For the more difficult matrices, a degree 50 polynomial is needed.

[Fig. 4.2, plots omitted: The effect of polynomial preconditioning on very complex spectra. The top two plots are for a matrix with eigenvalues going around 230°, and the bottom plots are for 280°. On the left side, the original eigenvalues are shown with dots in the complex plane; the (red) asterisks are the polynomial roots. On the right two plots, the 'x' marks show the eigenvalues of the polynomial preconditioned spectrum of φ(A) with degree d = 50.]

Table 4.2: Complex matrix of Example 6. PP(d)-GMRES(50) is used to solve linear equations with eigenvalues through three angles around the origin.

 d - degree    Angle 230°              Angle 255°              Angle 280°
 of poly φ     MVP (thou's)  time      MVP (thou's)  time      MVP (thou's)  time
 1 (no pp)     2.6e+6        7.6 days  -             -         -             -
 5             11.2          0.54 sec  33,485        18 min    -             -
 50            6.4           0.28 sec  24.4          0.45 sec  772           6.7 sec

Further cases with eigenvalues extending further around the origin require even higher degree polynomials. For example, with a spectrum 290 degrees around the origin, using a degree 50 polynomial takes 13.9 minutes, while d = 150 converges in only 6.4 seconds. Going to the even more difficult case of 300 degrees, both a high degree polynomial and a larger GMRES restarting parameter are needed, as PP(150)-GMRES(50) takes 13.5 minutes but PP(150)-GMRES(150) converges in 10.9 seconds. Note that polynomials of degree higher than 150 may not be stable in this situation.

As mentioned earlier, a φ polynomial in the complex plane cannot be zero at the origin and move only toward 1 as it moves away from the origin; it needs to have both ups and downs while moving away. However, for this example the spectrum only partly surrounds the origin. Thus the polynomial is somewhat able to head toward 1 as it moves away from the origin over the spectrum. For the case of the spectrum going 230 degrees around, the upper left plot in Figure 4.3 shows contours for the real part of the degree 5 polynomial, and the upper right has the imaginary part. The eigenvalues are yellow dots. The polynomial is able to flatten out over most of the spectrum and push many of the eigenvalues to have real parts between 0.5 and 1.0. Extending further out from this relatively flat portion in the real plot are five valleys and five ridges. Heading left from the origin is a valley, and to the right of the flat area is a rising ridge; valleys and ridges alternate going around. Next, for the 280° matrix, the real contours are in the bottom left. For this more difficult spectrum, the real part is not able to make it to 0.5 except at a few eigenvalues. A higher degree polynomial can be significantly more effective. The real part of the degree 50 polynomial is shown in the lower right part of the figure. This polynomial goes above 0.5 except for the part of the spectrum near the origin. There are many valleys and ridges for this polynomial.

[Fig. 4.3, contour plots omitted: Contour maps for polynomials with the matrices from Example 6.]

Unlike for the previous example, balancing is not effective. These contour maps help show why. Balancing keeps the real part of the polynomial from being able to dive down quickly when moving to the left from the origin. Similarly, moving to the right, the polynomial could not as quickly move up toward 1. Two upcoming examples (8 and 10) will back up the result that balancing may be detrimental when the spectrum goes much of the way around the origin.
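For experimentation, a spectrum of the Example 6 type can be generated as follows (a sketch under our own assumptions: the paper does not state the ray length, the fan orientation, or how the eigenvalue count reaches n = 2000, so only the stated 50 × 20 ray pattern is reproduced):

```python
import numpy as np

def ray_spectrum(n_rays=50, per_ray=20, degrees=230.0, r_max=1.0):
    """Eigenvalues equally spaced along n_rays rays fanning `degrees`
    around the origin, per_ray on each ray as in Example 6. The fan is
    centered on the positive real axis here; r_max is an assumed length."""
    angles = np.linspace(-degrees / 2.0, degrees / 2.0, n_rays) * np.pi / 180.0
    radii = np.linspace(r_max / per_ray, r_max, per_ray)
    return (radii[None, :] * np.exp(1j * angles[:, None])).ravel()

A = np.diag(ray_spectrum())   # a diagonal test matrix with this spectrum
```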
5. Stability Control. Here we give stability methods specialized for polynomial preconditioning of indefinite problems. Stability of the GMRES residual polynomial is mentioned in Subsection 2.2 (from sources [8, 18]). A pof value is associated with each root θ_j of the GMRES residual polynomial π (harmonic Ritz value),

pof(θ_j) = Π_{i≠j} (1 − θ_j/θ_i).

A high pof value indicates a root that stands out from the others, and indicates that there may be instability (unless the root is spurious). To fix this, extra copies of these outstanding roots are added to the π polynomial. However, for a small root θ_i, the linear factor (I − A/θ_i) from an extra copy can blow up the polynomial at large portions of the spectrum. For an indefinite spectrum, there can be outstanding eigenvalues on one side of the spectrum that are much smaller than eigenvalues on the other side. For this situation, where there are high pof's on the shorter side of the spectrum, we suggest alternatives to adding extra roots.
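Computing this diagnostic is straightforward; a sketch (ours), taking the magnitude of the product as the pof value:

```python
import numpy as np

def pof_values(thetas):
    """pof(theta_j) = | prod_{i != j} (1 - theta_j / theta_i) |, the
    stability diagnostic for each root of the GMRES residual polynomial."""
    thetas = np.asarray(thetas, dtype=complex)
    d = len(thetas)
    pofs = np.empty(d)
    for j in range(d):
        mask = np.arange(d) != j
        pofs[j] = np.abs(np.prod(1.0 - thetas[j] / thetas[mask]))
    return pofs
```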
The following theorem shows a correlation between a high pof value and the accuracy of the corresponding harmonic Ritz vector. It is assumed that the root of interest is equal to an eigenvalue, since a large pof will usually correspond to a root that closely approximates an outstanding eigenvalue (spurious harmonic Ritz values do not need to be dealt with, as is mentioned right before Algorithm 6).

Theorem 5.1. Assume a GMRES residual polynomial π is generated with d iterations of solving Ax = b, where A is diagonalizable and b is norm one. Let the roots of π (the harmonic Ritz values) be θ_i, for i = 1, ..., d. Let the eigenvectors of A be z_i, and let Z be the matrix with these as columns. Let b = Σ_{i=1}^{n} β_i z_i. Assume a particular harmonic Ritz value θ_j equals an eigenvalue λ_j. Then

||r_j|| ≤ |θ_j| ||Z^{-1}|| / (|β_j| pof(θ_j)),  where pof(θ_j) = Π_{i≠j} (1 − θ_j/θ_i).

Proof. Let y_j be a harmonic Ritz vector. From [25, 21], it can be written as

y_j = ω_j(A)b = s_j Π_{i≠j} (I − A/θ_i) b,  (5.1)

where s_j is the constant that normalizes ω_j at θ_j, meaning s_j = 1/Π_{i≠j} (1 − θ_j/θ_i). Multiply (5.1) by (I − A/θ_j), use that π(A) = Π_{i=1}^{d} (I − A/θ_i), and rearrange:

A y_j − θ_j y_j = −θ_j s_j π(A) b.

Since the GMRES residual norm cannot rise, ||b|| = 1 implies that ||π(A)b|| ≤ 1. So

||A y_j − θ_j y_j|| = |θ_j| |s_j| ||π(A)b|| ≤ |θ_j| |s_j| = |θ_j| / pof(θ_j).  (5.2)

Next,

||y_j|| = ||Z [β_1 ω_j(λ_1), ..., β_n ω_j(λ_n)]^T|| ≥ (1/||Z^{-1}||) ||[β_1 ω_j(λ_1), ..., β_n ω_j(λ_n)]^T|| ≥ (1/||Z^{-1}||) |β_j| |ω_j(λ_j)| ≥ (1/||Z^{-1}||) |β_j|,

using that θ_j = λ_j and that ω_j is normalized to be one at θ_j. Combined with (5.2), we have

||r_j|| = ||A y_j − θ_j y_j|| / ||y_j|| ≤ |θ_j| ||Z^{-1}|| / (|β_j| pof(θ_j)).

This theorem indicates that for an accurate θ_j with a large pof, y_j will generally be a good approximate eigenvector. This motivates one of the stability procedures that follow, where approximate eigenvectors are used for deflation.

Algorithm 6 gives our approach for dealing with an unstable polynomial. It has the previous way of adding extra copies of roots at outstanding eigenvalues, but now only on the side in the complex plane where the spectrum is larger. Then it has corrections for the other side if there are outstanding eigenvalues there. The Step 5.a correction uses the fact that instability error on the smaller side is mostly in the direction of just a few eigencomponents corresponding to roots with high pof values, so projecting over a few eigenvectors reduces the rogue eigencomponents. Likewise, in Step 5.b, it generally takes only a few iterations of GMRES to improve the residual vector.

In the algorithm, it is important to identify whether there are large spurious harmonic Ritz values. These are much more likely to occur for indefinite problems than for definite ones (large spurious harmonic Ritz values correspond to spurious Ritz values of A^{-1} near the origin [19], which for symmetric problems can happen only in the indefinite case). Extra roots for stability control are not needed at spurious harmonic Ritz values. If there is a high pof for a particular large value, then the residual norm can tell us whether or not it is spurious.
Algorithm 6 Choice of Stability Control for an Indefinite Spectrum
0. Compute pof values for the polynomial roots. Set pofcutoff (mentioned in Subsection 2.2) and rncutoff (used to indicate a reasonably accurate approximate eigenvector). Determine which is the larger side of the spectrum in the complex plane; this is the side for which the harmonic Ritz values (with spurious values not counted) extend the furthest from the imaginary axis. Alternatively, Ritz values can be used, and then large spurious values are unlikely.
1. Optional: Add a limited number of extra copies of roots on the smaller side of the spectrum that have pof > pofcutoff and ||r_j|| ≤ rncutoff. Then recompute the pof values.
2. If the largest pof on the small side of the spectrum is greater than 10^{20}, reduce the degree of the polynomial and begin again.
3. Add extra copies of roots on the larger side if pof > pofcutoff. The number of copies at a root is the least integer greater than (log10(pof(k)) − pofcutoff)/14.
4. Apply PP-GMRES. Solve until a shortcut residual norm reaches the desired tolerance. If the actual residual norm is not as accurate, then the correction methods can be applied.
5. Correction phase: apply one or both correction steps.
 a. Apply Galerkin projection (Algorithm 2 in [24]) over the span of the approximate eigenvectors corresponding to the roots on the smaller side of the spectrum that have both pof(θ_j) ≥ pofcutoff and ||r_j|| ≤ rncutoff. This deflates these eigencomponents from the residual vector.
 b. Run GMRES (without polynomial preconditioning) for a few iterations.

In the optional Step 1 of the algorithm, perhaps only one root should be added on the shorter side, or only roots that are within half the magnitude of the largest non-spurious harmonic Ritz value on the other side. The effect of adding these roots on the polynomial on the large side can be checked with the Hermite spline polynomials of Subsection 3.3.
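A sketch of the copy count in step 3 (ours; pofcutoff is taken on the log10 scale here, matching its use in Subsection 2.2):

```python
import math

def extra_copies(pof_k, pofcutoff=4.0):
    """Algorithm 6, step 3: the least integer greater than
    (log10(pof(k)) - pofcutoff) / 14, and zero below the cutoff."""
    excess = (math.log10(pof_k) - pofcutoff) / 14.0
    return math.floor(excess) + 1 if excess > 0.0 else 0
```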
Example 7. The stability algorithm is tested with a matrix that has outstanding eigenvalues on both sides of the complex plane. However, the outstanding eigenvalues on the left side are much smaller in magnitude than those on the other side. The matrix is bidiagonal of size n = 5000 with diagonal elements −500, −400, −300, −200, −100, 0.001, 0.01, 0.02, 0.03, ..., 0.09, 0.1, 0.2, 0.3, ..., 0.9, 1, 2, 3, ..., 4971, 5000, 5100, 5200, 5300, 5400 and with superdiagonal elements all 0.1. The right-hand side is generated random normal, and then is normed to one. PP(d)-GMRES is applied using the stability control in Algorithm 6 with roots added only on the larger (right) side (i.e., Step 1 is not activated). No balancing is used. Linear equations are solved until a shortcut residual norm reaches 10^{-10}. The rncutoff value is not needed, because no spurious harmonic Ritz values occur. The value pofcutoff = 10^6 is used. The correction in Step 5.a has different numbers of approximate eigenvectors: for example, with degree 57 only two roots have reached pof > 10^6, while for degree 100 there are five. For the GMRES correction in Step 5.b, 10 GMRES iterations are used.

Table 5.1: Bidiagonal matrix of Example 7. PP(d)-GMRES(50) is run, followed by correction procedures.

 d - degree   Time      max       Res Norm      Deflate      GMRES        Both
 of poly φ    (sec's)   pof       w/o correct   correction   correction   corrections
 57 + 2       44        2.9e+9    5.8e-6        1.1e-10      9.6e-11      9.6e-11
 75 + 4       0.92      1.7e+14   1.9e+2        2.8e-10      1.1e-10      5.1e-11
 90 + 4       0.97      2.5e+18   8.3e+4        1.5e-9       2.5e-9       4.2e-11
 100 + 5      0.92      1.1e+21   1.7e+5        2.3e-9       3.2e-10      3.8e-11
 110 + 5      3.5       4.6e+23   1.0e+9        1.4e-5       6.3e-8       1.2e-7
 115 + 5      -         9.0e+24   -             -            -            -

Table 5.1 has results with three types of correction. First, the eigenvector deflation is in the fifth column, then the next column has the GMRES correction, followed by both (deflation, then GMRES). Much more accurate results are produced. Using both corrections is usually better, but may not be needed.

Because of the difficulty of this problem, polynomial preconditioning is very important. In fact, PP-GMRES does not converge until the degree of the polynomial reaches 57 (and not for every degree just larger than that). We finish this example by activating the optional Step 1 for the polynomial of degree initially 115. For this high degree polynomial, the method does not converge even in the shortcut residual norm. We need to activate Step 1 of the algorithm in order to keep the polynomial more controlled on the short side. Here we add one root copy to each root on the left side with pof over 10^{14}; this results in three extra roots. Then the rest of the algorithm is applied along with the GMRES correction. The final residual norm is 1.1 × 10^{-10}. So by using Step 1, we are able to get an accurate result. However, we do not succeed with polynomials of much higher degree.

Example 8. The matrix is cz20468 from the SuiteSparse Matrix Collection. Standard GMRES(50) fails to converge, prompting the use of preconditioning techniques. ILU factorization (with droptol = 0.01) is applied as an initial preconditioner. This produces a preconditioned operator with an indefinite spectrum, for which GMRES(50) still does not converge. We add polynomial preconditioning and now can get convergence; see Table 5.2. For this example, a solution with residual norm of 10^{-8} is sought, so PP-GMRES is run until the shortcut residual norm produces a solution of 10^{-10} to compensate for potential instability in the polynomial preconditioner. The parameters are set to pofcutoff = 10^4 and rncutoff = 10^{-3}. Results for polynomial degrees ranging from 5 to 40 are presented in Table 5.2.

Table 5.2: cz20468 matrix with ILU factorization (droptol = 0.01) of Example 8. PP(d)-GMRES(50) is run, followed by correction procedures.

 d - degree   Time      max       Res Norm      Deflate      GMRES        Both
 of poly φ    (sec's)   pof       w/o correct   correction   correction   corrections
 1            -
 5 + 0        173       5.7       5.6e-9        -            2.7e-10      -
 10 + 0       166       5.8e+2    2.3e-8        -            2.1e-10      -
 20 + 2       2186      7.5e+7    3.7e-7        1.6e-8       1.8e-10      3.3e-10
 30 + 5       404       3.7e+19   1.6           1.1e-8       2.9e-10      3.9e-10
 35 + 6       333       1.4e+24   3.2e+4        9.4e-9       1.9e-9       1.2e-9
 40 + 7       166       2.0e+29   7.6e+9        3.7e-4       5.6e-4       6.6e-6
 40 + 8       162       2.0e+29   8.1e+2        1.9e-7       7.5e-8       4.3e-8

The higher the degree of the polynomial, the more unstable the polynomial can be. The max pof column provides a glimpse of this, as some pof's become greater than 10^{20} for degrees 35 and higher. These high pof's are observed on the larger side, while the pof's on the shorter side remain below 10^{20}, as stipulated in Algorithm 6. At higher polynomial degrees, more vectors are available for deflation: for the degree 20 polynomial only one approximate eigenvector is deflated, while 7 are used for the degree 40 polynomial. Applications of deflation and/or GMRES correction consistently restore accuracy to the solution. Notably, 10 iterations of GMRES alone often serve as a sufficient correction, and even outperform using both corrections for degrees 20 and 30. The bottom two rows of Table 5.2 show the degree 40 polynomial with and without the optional Step 1 in Algorithm 6. Applying Step 1 reduces the uncorrected residual norm from 7.6 × 10^9 to 8.1 × 10^2, which allows the accuracy to reach the desired goal of 10^{-8} after deflation.

6. Convergence Estimates. This section develops estimates for the effectiveness of polynomial preconditioned GMRES for indefinite problems. Using Chebyshev polynomial results, we derive theoretical estimates of how well polynomial preconditioning can enhance the convergence of restarted GMRES. The analysis reveals a significant reduction in the required number of matrix-vector products under idealized conditions. We assume that all polynomials can be approximated by Chebyshev polynomials, including both the polynomial for preconditioning and the polynomials that underlie the GMRES method.
We assume the spectrum of the matrix satisfies σ(A) ⊂ [u, v] ∪ [a, b] ⊂ R, where u ≪ v < 0 < a ≪ b, with the longer interval on the right of the origin. It is also assumed that

T_m(1 + δ) ≐ 1 + m^2 δ,  (6.1)

where T_m is the standard Chebyshev polynomial of the first kind of degree m and 0 < δ ≪ 1.

To approximate the GMRES polynomial that is being used for polynomial preconditioning, we will use a composite polynomial. For a spectrum that is about the same on both sides of the origin, one could use an inner polynomial that is a quadratic and maps to a positive definite spectrum; at that point, a standard shifted and scaled Chebyshev polynomial can be applied as the outer polynomial (see [19, Thm. 6] for such an approach). This makes it possible to estimate the success of polynomial preconditioning. We skip this development with a quadratic and jump ahead to having a cubic as the inner polynomial. This is better for lopsided spectra that extend further on one side of the origin than the other; the cubic is best for spectra that extend about three times further on one side than the other. Higher degree inner polynomials could be used for more lopsided spectra.

We first develop a degree 3 polynomial f(x) which maps [u, v] ∪ [a, b] onto the interval [−1, 1] while ensuring f(0) > 1. Consider h(x) = (x − a)(x − b)(x − v), which has roots at x = a, b, v, a local maximum at

γ_1 = [(a + b + v) − √(a^2 + b^2 + v^2 − (ab + av + bv))]/3,

and a local minimum at

γ_2 = [(a + b + v) + √(a^2 + b^2 + v^2 − (ab + av + bv))]/3.

It can be shown that γ_1 ∈ [v, a] and γ_2 ∈ [a, b]. It can also be shown that h(γ_2) = h(a + b + v − 2γ_2) < 0, so if u < a + b + v − 2γ_2, then h(u) < h(γ_2). With this, we can define

f(x) = −2h(x)/h(u) + 1  if u ≤ a + b + v − 2γ_2,  and  f(x) = −2h(x)/h(γ_2) + 1  if u > a + b + v − 2γ_2.

In the case where u > a + b + v − 2γ_2,

f(0) = 2abv/[(γ_2 − a)(γ_2 − b)(γ_2 − v)] + 1 ≐ 2abv/[(2b/3 − a)(2b/3 − b)(2b/3 − v)] + 1 ≐ 1 − 27av/(2b^2),

and if u ≤ a + b + v − 2γ_2,

f(0) = 2abv/[(u − a)(u − b)(u − v)] + 1 ≐ 2abv/[u^2(u − b)] + 1.

In both cases, we conclude that f(0) = 1 + δ with δ small, allowing us to utilize (6.1) above. Now, composing f(x) with the Chebyshev polynomial T_{m/3}, we obtain

T_{m/3}(f(x)) / T_{m/3}(f(0)),

which forms the desired polynomial of degree m. The maximum value of this Chebyshev polynomial over [u, v] ∪ [a, b] is

1 / T_{m/3}(1 + 2abv/[(ξ − a)(ξ − b)(ξ − v)]),

where ξ = γ_2 or ξ = u. This quantity estimates the improvement in residual norm per one cycle of GMRES(m). Using approximation (6.1), for either value of ξ, the residual norm is improved by approximately

1 / (1 + δ(m/3)^2) ≐ 1 − (m^2/9)δ.

For d cycles of GMRES(m), the improvement factor is approximately

(1 − (m^2/9)δ)^d ≐ 1 − (dm^2/9)δ.  (6.2)
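The cubic map and the size of δ are easy to check numerically; a minimal sketch (ours), on a sample spectrum where b is about three times |u|:

```python
import numpy as np

def cubic_map(u, v, a, b):
    """The degree-3 map f of Section 6: scales h(x) = (x-a)(x-b)(x-v) so
    that [u,v] U [a,b] lands in [-1,1] with f(0) = 1 + delta > 1."""
    h = lambda x: (x - a) * (x - b) * (x - v)
    s = a + b + v
    g2 = (s + np.sqrt(a*a + b*b + v*v - (a*b + a*v + b*v))) / 3.0  # local min
    hmin = h(u) if u <= s - 2.0 * g2 else h(g2)
    return lambda x: -2.0 * h(x) / hmin + 1.0

f = cubic_map(u=-30.0, v=-1.0, a=1.0, b=100.0)
xs = np.concatenate([np.linspace(-30, -1, 500), np.linspace(1, 100, 500)])
print(f(xs).min(), f(xs).max())   # stays within [-1, 1] over the spectrum
print(f(0.0) - 1.0)               # a small positive delta
```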
We view a single cycle of PP(d)-GMRES(m) as a composition of two polynomials: the preconditioner polynomial from GMRES(d) on the inside and the GMRES(m) polynomial on the outside. This can be modeled by comparing shifted and scaled Chebyshev polynomials, leading to the residual improvement estimate

1 / T_m(T_{d/3}(1 + 2abv/[(ξ − a)(ξ − b)(ξ − v)])) ≐ 1 / T_m(1 + δ(d/3)^2) ≐ 1 / (1 + δ m^2 d^2/9) ≐ 1 − (d^2 m^2/9)δ.  (6.3)

Comparing the improvements from (6.2) and (6.3), we conclude that GMRES(m) with a degree d polynomial preconditioner converges approximately d times faster in terms of matrix-vector products.

7. Interior Eigenvalue Problems. Computing eigenvalues and eigenvectors of large matrices is another important task in linear algebra. The standard approach is restarted Arnoldi [36, 20, 40, 26], or restarted Lanczos for symmetric problems [40, 1] (non-restarted CG [11] can be used in LOBPCG [14] for symmetric problems). Polynomial preconditioning is even more important for large eigenvalue problems than it is for linear equations. Eigenvalue problems tend to be difficult, because standard preconditioning is less effective for eigenvalue problems and often not used. Standard preconditioning can be incorporated into more complicated methods such as LOBPCG, Jacobi-Davidson [34] and Preconditioned Lanczos [23]; however, then only one eigenvalue can be targeted at a time [22]. Eigenvalue polynomial preconditioning using a GMRES polynomial is given in [8]. Here we focus on the case where the desired eigenvalues are in the interior of the spectrum.

The polynomial for preconditioning interior eigenvalue problems is found similarly as for indefinite systems of linear equations. GMRES is applied to a shifted problem in order to generate a polynomial that targets a certain zone. Here we give another balancing method that balances on an interval, meaning that the value of the polynomial φ is equal at the two ends of the interval (see [16] for this type of balancing of a Chebyshev polynomial for symmetric eigenvalue problems). Unlike the other balancings, it does not attempt to make the derivative of the φ polynomial zero at a point. However, like Balance Method 1, it is done by adding a root. We give this Balance Method 5 for the case of balancing around the origin, so on the interval [−a, a], but it can be shifted for an interval around another spot.

Algorithm 7 Balance Method 5: Add a root for balancing on the interval [−a, a]
0. Let the polynomial φ(α) of degree d correspond to π(α) with roots θ_1, ..., θ_d.
1. Compute β = π(−a)/π(a), then η = a(1 + β)/(1 − β). Add η to the list of polynomial roots of π.
2. The new polynomial of degree d + 1 is φ_5(α) = 1 − π(α) · (1 − α/η).

Balancing the polynomial is important for interior eigenvalue problems, because otherwise a lopsided number of eigenvalues may be found on one side of the requested target. Here we focus on Balance Method 1, but others could be used. In particular, Balance Method 5 gives essentially the same results as Method 1, so only Method 1 is given in the results.
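A sketch of Algorithm 7 (ours; the formula for η here is obtained by requiring φ_5(a) = φ_5(−a) exactly):

```python
import numpy as np

def evaluate_pi(alpha, thetas):
    """GMRES residual polynomial pi(alpha) = prod_i (1 - alpha/theta_i)."""
    return np.prod(1.0 - alpha / np.asarray(thetas, dtype=complex))

def balance_method_5(thetas, a):
    """Balance on [-a, a]: append eta so that phi5 = 1 - pi*(1 - alpha/eta)
    takes equal values at alpha = a and alpha = -a. Setting
    phi5(a) = phi5(-a) gives eta = a(1+beta)/(1-beta), beta = pi(-a)/pi(a)."""
    beta = evaluate_pi(-a, thetas) / evaluate_pi(a, thetas)
    eta = a * (1.0 + beta) / (1.0 - beta)
    return np.append(np.asarray(thetas, dtype=complex), eta)
```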
Example 9. We use a diagonal matrix with eigenvalues 1, 2, 3, ..., 499, 500, 500.2, 500.4, ..., 519.8, 520, 521, 522, ..., 4920. So n = 5000, and there is a group of 101 eigenvalues close together starting at 500. We seek the 30 eigenvalues nearest 500.33. Arnoldi(80,40) is used, so the Krylov subspace is built out to dimension 80 and then restarted with 40 vectors. Reorthogonalization is applied in this and the other examples of eigenvalue computation in this section. The residual norm tolerance is set to 10^{-8}. Table 7.1 has results with polynomial preconditioning of degrees 10, 50 and 100, both with and without balancing. This is compared to no polynomial preconditioning. (Note that for some of our interior eigenvalue testing, harmonic Rayleigh-Ritz [19, 28, 25, 26] performs better, but for this particular test regular Rayleigh-Ritz is best and is reported.)

Table 7.1: Diagonal matrix with n = 5000. PP(d)-Arnoldi(80,40) is used to find 30 interior eigenvalues near 500.33. Degrees of the balanced polynomial are one higher than indicated because of the added root. Residual tolerance is 10^{-8}. The asterisks indicate that the max residual norm only reaches 5.3 × 10^{-8}.

               Unbalanced GMRES Polynomial               Balanced Polynomial
 d, deg        Time     MVP's    Eigenvalues             Time     MVP's    Eigenvalues
 of φ          (sec's)  (tho's)  found                   (sec's)  (tho's)  found
 1 (no pp)     363      118      496 - 505
 10            19       61       496 - 505               20       70       496 - 505
 50            3.1      38.1     478 - 503.4             4.4      57.2     496 - 505 (missing some)
 100           1.7      32       472 - 503.6             7.2*     166*     496 - 504.4 (missing some) + 3 large ones

The degree 10 polynomial dramatically speeds up the eigenvalue computation and does not need balancing. Degree 50 is even better (about two orders of magnitude faster than without polynomial preconditioning), and with balancing it finds the eigenvalues closest to the requested value of 500.33. Without balancing, the polynomial dips below the axis to the left of the origin, and so eigenvalues are found much further to the left of 500.33 than to the right. Also, some are missed where the polynomial takes them well below zero (our program finds the ones closest to zero). With balancing, the correct eigenvalues are quickly found. The degree 100 polynomial also has problems without balancing, but with balancing it is too volatile at the largest eigenvalues (among the 30 eigenvalues it finds are the large ones 4843, 4846 and 4870). So here it is best to use a lower degree polynomial. If a high degree polynomial is desired, then Balance Method 4 could be applied.

We next consider two tests of computing interior eigenvalues with matrices from applications.
If Balance Method 1 is applied to the degree 50 polynomial, the results are worse: only 252 eigenvalues are accurate to 10−3 after 25 cycles. Figure 7.2 shows how the spectrum is changed with balancing. The upper right portion has the 50 eigenvalues of A numbered in order of distance from −4. The lower left has where these move to with the preconditioned spectrum. Then the lower right has them with balancing added. The balancing does succeed in moving the real eigenvalues on the left side over to the right, so that the real portion of the spectrum looks positive definite. However, the rest of the spectrum does not move in a predictable fashion. Also, note these 50 eigenvalues are much closer together after balancing and thus harder to find. The non-balanced polynomial goes through the desired region with a slope and so does not push the eigenvalues together. Example 11. Large, complex, non-Hermitian matrices arise in quantum chromo- dynamics (QCD). However, they can be transformed into Hermitian by multiplying by the QCD γ5 matrix. The spectrum is then real and indefinite with about the same spread of eigenvalues on both sides of the origin. We compute 15 eigenvalues and eigenvectors around the origin for a matrix of size n = 3.98 million. These can be used to deflate solution of linear equations with the conjugate gradient method [1]. 24 H. HENSON and R. B. MORGAN -150 -100 -50 0 real axis -200 0 200 imaginary axis A spectrum -1 -0.5 0 0.5 1 1.5 real axis -0.5 0 0.5 imaginary axis -4.5 -4 -3.5 -0.6 -0.4 -0.2 0 0.2 0.4 0.6 0.8 imaginary axis 1 2 3 4 567 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 A -0.05 0 0.05 0.1 real axis -0.05 0 0.05 0.1 imaginary axis 1 2 3 4 5678 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 -2 -1 0 1 2 real axis 10-3 -2 -1 0 1 2 10-3 12345678 9 10 11 12 1314 15 1617 18 19 20 21 22 23 24 25 26 27 28 29 3031 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 Fig. 7.2: The upper left has the spectrum of Af23560 in the complex plane, and just below is the polynomial preconditioned spectrum with no balancing. The upper right of the figure has eigenvalues of A numbered in order of distance from −4. The lower left has where these move for the preconditioned spectrum with polynomial of degree 50. Then the lower right has balancing added. Table 7.2 compares restarted Arnoldi(60,25) with and without polynomial precondi- tioning. The polynomial is balanced with Balance Method 1. For this test, it does not make much difference, but we have seen situations where balancing is needed for QCD problems. The polynomial preconditioning reduces the time dramatically and makes using eigenvalue deflation much more practical for QCD calculations. Table 7.2: A quenched QCD matrix of dimension n = 3.98 million is from a 244 lattice at zero quark mass. The 15 eigenvalues nearest the origin are computed with Arnoldi(60,25) and stopped when 15 eigenvalues reach residual norms below 10−8. degree of polynomial time in hours no polynomial 149 d = 25+1 7.82 d = 100+1 5.49 8. Conclusion. Polynomial preconditioning is especially important for difficult indefinite linear equations and interior eigenvalues. Standard preconditioning is gen- erally less effective for such indefinite problems so polynomial preconditioning can INDEFINITE POLYNOMIAL PRECONDITIONING 25 assist. 
The work in this paper makes polynomial preconditioning more practical for gen- eral matrices. Polynomial preconditioning faces special challenges for indefinite prob- lems. These are addressed here, first with methods to balance the polynomial with the goal of creating a definite spectrum. Then an approach is given for making the polynomial stable for indefinite problems. In this, the previous stability method of adding extra copies of roots is applied only to one side of the spectrum. For the other side, corrections are given using eigenvalue deflation and using GMRES iterations. Looking forward, the parallelizabilty of polynomial preconditioners makes them well-suited for modern high-performance computing environments, including multi- core processors and GPUs. In some cases, polynomial preconditioners may even re- place traditional methods, because polynomials are easier to parallelize than incom- plete factorizations. The effects of PPGMRES on GPU architectures should be further investigated to fully understand its potential for accelerating large computations. Fu- ture work should also focus on further researching the effectiveness of polynomial preconditioning for interior eigenvalue problems and comparing performance across a wider range of problems. Acknowledgments. The authors would like to thank Mark Embree for provid- ing the Hatano-Nelson matrix in Example 5. REFERENCES [1] A. M. Abdel-Rehim, R. B. Morgan, D. A. Nicely, and W. Wilcox, Deflated and restarted symmetric Lanczos methods for eigenvalues and linear equations with multiple right-hand sides, SIAM J. Sci. Comput., 32 (2010), pp. 129–149. [2] A. M. Abdel-Rehim, R. B. Morgan, and W. Wilcox, Improved seed methods for symmetric positive definite linear equations with multiple right-hand sides, Numer. Linear Algebra Appl., 21 (2014), pp. 453–471. [3] S. F. Ashby, Polynomial preconditioning for conjugate gradient methods. PhD Thesis, Uni- versity of Illinois at Urbana-Champaign, 1987. [4] S. F. Ashby, T. A. Manteuffel, and J. S. Otto, A comparison of adaptive Chebyshev and least squares polynomial preconditioning for conjugate gradient methods, SIAM J. Sci. Statist. Comput., 13 (1992), pp. 1–29. [5] A. Bai, D. Day, J. Demmel, and J. Dongarra, A test matrix collection for non-hermitian eigenvalue problems, Tech. Rep. CS-97-355, University of Tennessee, Knoxville, Tennessee, 1997. [6] Z. Bai, D. Hu, and L. Reichel, A Newton basis GMRES implementation, IMA J. Numer. Anal., 14 (1994), pp. 563–581. [7] D. Calvetti, B. Lewis, and L. Reichel, On the choice of subspace for iterative methods for linear discrete ill-posed problems, Int. J. Appl. Math. Comput. Sci., 11 (2001), pp. 1069– 1092. [8] M. Embree, J. A. Loe, and R. B. Morgan, Polynomial preconditioned Arnoldi with stability control, SIAM J. Sci. Comput., 43 (2021), pp. A1–A25. [9] B. Fischer and L. Reichel, A stable Richardson iteration method for complex linear systems, Numer. Math., 54 (1988), pp. 225–241. [10] N. Hatano and D. R. Nelson, Localization transitions in non-Hermitian quantum mechanics, Phys. Rev. Lett., 77 (1996), pp. 570–573. [11] M. R. Hestenes and E. Stiefel, Methods of conjugate gradients for solving linear systems, J. Res. Nat. Bur. Standards, 49 (1952), pp. 409–436. [12] W. Joubert, On the convergence behavior of the restarted GMRES algorithm for solving nonsymmetric linear systems, Numer. Linear Algebra Appl., 1 (1994), pp. 427–447. [13] , A robust GMRES-based adaptive polynomial preconditioning algorithm for nonsymmet- ric linear systems, SIAM J. Sci. 
Comput., 15 (1994), pp. 427–439. [14] A. V. Knyazev and K. Neymeyr, Efficient solution of symmetric eigenvalue problems using multigrid preconditioners in the locally optimal block conjugate gradient method, Elec. Trans. Numer. Anal., 15 (2003), pp. 38–55. 26 H. HENSON and R. B. MORGAN [15] C. Lanczos, Chebyshev polynomials in the solution large-scale linear systems, Proc. ACM, (1952), pp. 124–133. [16] R. Li, Y. Xi, E. Vecharynski, C. Yang, and Y. Saad, A thick-restart Lanczos algorithm with polynomial filtering for Hermitian eigenvalue problems, SIAM J. Sci. Comput., 38 (2016), pp. A2512–A2534. [17] Q. Liu, R. B. Morgan, and W. Wilcox, Polynomial preconditioned GMRES and GMRES- DR, SIAM J. Sci. Comput., 37 (2015), pp. S407–S428. [18] J. A. Loe and R. B. Morgan, Toward efficient polynomial preconditioning for GMRES, Numer. Linear Algebra Appl., 29 (2021), pp. 1–21. [19] R. B. Morgan, Computing interior eigenvalues of large matrices, Linear Algebra Appl., 154– 156 (1991), pp. 289–309. [20] , On restarting the Arnoldi method for large nonsymmetric eigenvalue problems, Math. Comp., 65 (1996), pp. 1213–1230. [21] , Implicitly restarted GMRES and Arnoldi methods for nonsymmetric systems of equa- tions, SIAM J. Matrix Anal. Appl., 21 (2000), pp. 1112–1135. [22] , Preconditioning eigenvalues and some comparison of solvers, J. Comput. Appl. Math., 123 (2000), pp. 101–115. [23] R. B. Morgan and D. S. Scott, Preconditioning the Lanczos algorithm for sparse symmetric eigenvalue problems, SIAM J. Sci. Comput., 14 (1993), pp. 585–593. [24] R. B. Morgan, T. Whyte, W. Wilcox, and Z. Yang, Two-grid deflated Krylov methods for linear equations, Elec. Trans. Numer. Anal., 63 (2025), pp. 1–21. [25] R. B. Morgan and M. Zeng, Harmonic projection methods for large non-symmetric eigen- value problems, Numer. Linear Algebra Appl., 5 (1998), pp. 33–55. [26] , A harmonic restarted Arnoldi algorithm for calculating eigenvalues and determining multiplicity, Linear Algebra Appl., 415 (2006), pp. 96–113. [27] N. M. Nachtigal, L. Reichel, and L. N. Trefethen, A hybrid GMRES algorithm for non- symmetric linear systems, SIAM J. Matrix Anal. Appl., 13 (1992), pp. 796–825. [28] C. C. Paige, B. N. Parlett, and H. A. van der Vorst, Approximate solutions and eigenvalue bounds from Krylov subspaces, Numer. Linear Algebra Appl., 2 (1995), pp. 115–133. [29] H. Rutishauser, Theory of gradient methods, in Refined Iterative Methods for Computation of the Solution and the Eigenvalues of Self-Adjoint Boundary Value Problems, M. Engeli, T. Ginsburg, H. Rutishauser, and E. Stiefel, eds., Birkhauser, Basel, 1959, pp. 24–49. [30] Y. Saad, Chebychev acceleration techniques for solving large nonsymmetric eigenvalue prob- lems, Math. Comp., 42 (1984), pp. 567–588. [31] , Least squares polynomials in the complex plane and their use for solving sparse non- symmetric linear systems, SIAM J. Numer. Anal., 24 (1987), pp. 155–169. [32] , Iterative Methods for Sparse Linear Systems, 2nd Edition, SIAM, Philadelphia, PA, 2003. [33] Y. Saad and M. H. Schultz, GMRES: a generalized minimum residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Stat. Comput., 7 (1986), pp. 856–869. [34] G. L. G. Sleijpen and H. A. van der Vorst, A Jacobi-Davidson iteration method for linear eigenvalue problems, SIAM J. Matrix Anal. Appl., 17 (1996), pp. 401–425. [35] D. C. Smolarski and P. E. Saylor, An optimal iterative method for solving any linear system with a square matrix, BIT, 28 (1988), pp. 163–178. [36] D. C. 
Sorensen, Implicit application of polynomial filters in a k-step Arnoldi method, SIAM J. Matrix Anal. Appl., 13 (1992), pp. 357–385. [37] E. L. Stiefel, Kernel polynomials in linear algebra and their numerical applications, Nat. Bur. Standards, Appl. Math. Ser., 49 (1958), pp. 1–22. [38] H. K. Thornquist, Fixed-Polynomial Approximate Spectral Transformations for Precondition- ing the Eigenvalue Problem, PhD thesis, Rice University, 2006. Technical report TR06-05, Department of Computational and Applied Mathematics, Rice University. [39] M. B. van Gijzen, A polynomial preconditioner for the GMRES algorithm, J. Comput. Appl. Math., 59 (1995), pp. 91–107. [40] K. Wu and H. Simon, Thick-restart Lanczos method for symmetric eigenvalue problems, SIAM J. Matrix Anal. Appl., 22 (2000), pp. 602 – 616. [41] X. Ye, Y. Xi, and Y. Saad, Proxy-GMRES: Preconditioning via GMRES in polynomial space, SIAM J. Sci. Comput., 42 (2021), pp. 1248–1267.
POLYNOMIAL PRECONDITIONING FOR INDEFINITE MATRICES

HAYDEN HENSON AND RONALD B. MORGAN

Abstract. Polynomial preconditioning is an important tool in solving large linear systems and eigenvalue problems. A polynomial from GMRES can be used to precondition restarted GMRES and restarted Arnoldi. Here we give methods for indefinite matrices that make polynomial preconditioning more generally applicable. The new techniques include balancing the polynomial so that it produces a definite spectrum. Then a stability approach is given that is specialized for the indefinite case. Also, very complex spectra are examined. Then convergence estimates are given for polynomial preconditioning of real, indefinite spectra. Finally, tests of finding interior eigenvalues are performed.

Key words. linear equations, polynomial preconditioning, GMRES, indefinite matrices

AMS subject classifications. 65F10, 15A06

1. Introduction. Polynomial preconditioning is a powerful tool for improving convergence when solving large linear equations [18] or finding eigenvalues [8]. However, there can be difficulties for indefinite linear equations and interior eigenvalue problems. Here we give techniques that let polynomial preconditioning be effective in these situations. This is important because indefinite/interior problems tend to be difficult and so especially benefit from polynomial preconditioning.

Polynomial preconditioning has been extensively studied; see for example [15, 37, 29, 30, 31, 3, 35, 9, 4, 13, 39, 32, 38, 2, 17, 16, 8, 18, 41]. The GMRES polynomial was used in [27, 12] for Richardson iteration and in [38, 2, 17] for polynomial preconditioning. More recently, in [8, 18], the GMRES polynomial was improved in efficiency, stability, and ease of determining the polynomial. This implementation uses the roots of the polynomial, which enhances stability and allows for the insertion of extra copies of roots for further stability control.

For indefinite problems, it is desirable to have the polynomial preconditioning turn the spectrum definite. For this, we choose a polynomial that we call "balanced". This balanced polynomial has derivative zero at the origin. Several methods for balancing are given, with some guidance on which to use. Also, cubic Hermite splines are suggested for checking whether a polynomial will be effective. Significantly complex spectra are considered, and one conclusion is that balancing will probably not be helpful if the origin is mostly surrounded by eigenvalues. Next, the stability control method in [8, 18] may be ineffective for indefinite problems, because adding small roots to one side of the origin can increase the polynomial size and variability on the other side. This is addressed by not adding roots on the smaller side and instead applying stability fixes of deflating eigenvalues and a short GMRES run.

Section 2 has a quick review of items needed in this paper. Section 3 has techniques for balancing the polynomial. Section 4 focuses on matrices with significantly complex spectra. Stability of the polynomial is in Section 5, and Section 6 has convergence estimates for polynomial preconditioning of indefinite linear equations. Finally, interior eigenvalue problems are in Section 7.

2. Review.

2.1. Polynomial Preconditioning with the GMRES Polynomial. Polynomial preconditioning is a way to transform the spectrum of a matrix and thus improve convergence of Krylov iterative methods.
For linear equations with polynomial p and right preconditioning, this is A p(A)y = b, x = p(A)y. Defining φ(z) ≡ z p(z), the preconditioned system of linear equations is φ(A)y = b. In [8, 18], the polynomials are found with the GMRES algorithm [33]. Starting with the GMRES residual polynomial π, the polynomial φ is chosen as φ(z) = 1 − π(z), and thus p is also determined. The roots of π are the harmonic Ritz values [19, 28, 25], and they are used to implement both polynomials φ and p. Then GMRES can also be used to solve the linear equations as a part of polynomial preconditioned GMRES; see Algorithm 1, which is from [18]. Note that if A is a real matrix, then φ has real coefficients and φ(A) is real.

Algorithm 1 Polynomial Preconditioned GMRES, PP(d)-GMRES(m)
1. Construction of the polynomial preconditioner:
(a) For a degree d preconditioner, run one cycle of GMRES(d) using a random starting vector.
(b) Find the harmonic Ritz values θ1, . . . , θd, which are the roots of the GMRES polynomial: with Arnoldi decomposition A V_d = V_{d+1} H_{d+1,d}, find the eigenvalues of H_{d,d} + h²_{d+1,d} f e_d^T, where f = H_{d,d}^{−*} e_d, with elementary coordinate vector e_d = [0, . . . , 0, 1]^T.
(c) Order the GMRES roots using Leja ordering [6, Alg. 3.1] and apply stability control as in [18, Algorithm 2].
2. PP-GMRES: Apply restarted GMRES to the matrix φ(A) = I − ∏_{i=1}^{d} (I − A/θ_i) to compute an approximate solution to the right-preconditioned system φ(A)y = b, using [18, Algorithm 1] for φ(A) times a vector. To find x, compute p(A)y with [18, Algorithm 3].

2.2. Stability control. For a matrix with an eigenvalue that stands out from the rest of the spectrum, the GMRES residual polynomial π generally has a root at that eigenvalue. The slope of the polynomial is likely to be very steep at that root, which can lead to ill-conditioning and cause φ(A) times a vector to be unstable. To improve stability, extra copies of roots corresponding to outstanding eigenvalues can be added to π. This is implemented in [18, Algorithm 2] (see also [8, p. A21]). For each root θ_k, one computes a diagnostic quantity called pof(k) that measures the magnitude of π(θ_k) with the (1 − z/θ_k) term removed. When log10(pof(k)) exceeds some threshold pofcutoff, extra (1 − z/θ_k) terms are appended to π (in [18, Algorithm 2], pofcutoff is set to 4).

2.3. Range Restricted GMRES. Some image processing problems have matrices with zero and very small eigenvalues that correspond to noise. The associated linear equations need to be solved so that the effect of these small eigenvalues is essentially removed. Range Restricted GMRES (RRGMRES) [7] chooses A·b as the initial vector for its subspace. This starting vector has small eigencomponents corresponding to the small eigenvalues, thus reducing their effect.
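To make the product form concrete, the following is a minimal NumPy sketch of φ(A) times a vector (Step 2 of Algorithm 1). It is a simplification of [18, Algorithm 1]: it works in complex arithmetic throughout instead of pairing conjugate roots to keep the work real.

```python
import numpy as np

def apply_phi(A, v, thetas):
    """phi(A)v = v - prod_i (I - A/theta_i) v, applied factor by factor.
    `thetas` are the harmonic Ritz values from one GMRES(d) cycle,
    assumed already Leja-ordered with any stability copies included."""
    w = v.astype(complex)
    for theta in thetas:
        w = w - (A @ w) / theta     # w <- (I - A/theta) w
    return v - w                    # phi(A) v = v - pi(A) v
```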
3. Balancing the Polynomial. In this section, we give ways of adjusting polynomial preconditioning to make it more effective for indefinite problems. We begin with an example showing that polynomial preconditioning with the GMRES polynomial can be very effective but needs some improvement.

Example 1. We use a matrix that is bidiagonal with diagonal elements −2500, −2499, −2498, . . . , −2, −1, 1, 2, 3, . . . , 2499, 2500 and super-diagonal elements all 1. So while the matrix is nonsymmetric, the spectrum is real and is mirrored about the origin. The right-hand side is generated random normal and then is scaled to norm 1. The residual norm tolerance is 10^{−10}. Table 3.1 has results comparing restarted GMRES(50) to PP(d)-GMRES(50), which stands for polynomial preconditioned GMRES with φ polynomial of degree d and GMRES restarted at dimension 50. The preconditioning is effective for this very indefinite case. Comparing no polynomial preconditioning (d = 1) to d = 50, the polynomial preconditioning improves the time by a factor of 200. The top of Figure 3.1 shows the polynomials used for degrees 5, 25 and 50. The spectra are significantly improved by the polynomials of degrees 25 and 50. The eigenvalues are mapped to near 1, except for eigenvalues near the origin. This explains why the method is effective for these polynomials. However, with the degree 5 polynomial, many eigenvalues near zero are mapped very close to zero. This is especially shown in the closeup in the bottom half of the figure, with the 200 eigenvalues closest to the origin all mapped very near zero.

Table 3.1: Bidiagonal matrix from Example 1, which is nonsymmetric with a real, symmetric (mirrored) spectrum. PP(d)-GMRES(50) is used to solve linear equations. MVP's is the total matrix-vector products, v op's is the total length-n vector operations and dp's is the total dot products.

            GMRES Polynomial                    Balanced Poly - Method 1 - Added Root
 d, deg    Time     MVP's    v op's   dp's     Time     MVP's    v op's   dp's
 of φ      (sec's)  (tho's)  (mil's)  (mil's)  (sec's)  (tho's)  (mil's)  (mil's)
 1 (No PP) 2346     4363     245      116
 5         -        -        -        -        198      2014     20.5     8.91
 10        280      4305     28.0     11.4     47.1     767      4.61     1.85
 50        11.5     444      0.94     0.24     2.77     95.3     0.20     0.051
 100       10.5     535      0.84     0.15     2.18     86.7     0.14     0.028
 150       43.6     2442     3.36     0.44     1.79     64.9     0.11     0.023
 151       4.35     212      0.31     0.048    1.86     68.6     0.12     0.024
 200       3.85     180      0.27     0.044    1.86     61.5     0.12     0.028
 250       3.60     146      0.24     0.047    2.07     47.9     0.12     0.036

Fig. 3.1: Bidiagonal matrix of size n = 5000. Top has φ polynomials of degree d = 5, d = 25 and d = 50. Bottom has a closeup of the same polynomials.

Looking further at the results in Table 3.1, higher degree polynomials can be even more effective. However, degree 150 takes about 10 times longer than degree 151. This is explained by the closeup in the bottom of Figure 3.2. Both the degree 150 and 151 polynomials dip into the lower half of the plane, so the spectrum of φ(A) is indefinite. Both have 11 negative eigenvalues. But the important difference is that degree 150 happens to have an eigenvalue fall very near zero. This smallest eigenvalue of φ(A) is at −2.6 ∗ 10^{−4}, compared to a smallest of 3.0 ∗ 10^{−3} for degree 151. Having one very small eigenvalue can significantly slow down GMRES(50).

Fig. 3.2: Bidiagonal matrix of size n = 5000. Top has φ polynomials of degree d = 150 and d = 151. Bottom has a closeup of the same polynomials plotted at the eigenvalues.

So this example gives two reasons why we need a polynomial that stays in the top half of the plane over the spectrum. Such a polynomial gives a definite spectrum and avoids creating very small eigenvalues for φ(A). We call a polynomial "balanced" if it has slope zero with respect to the real axis at the origin. This way it will be positive over the real axis near the origin and hopefully over all the spectrum. We will give several ways of balancing the polynomial.
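The Example 1 test problem itself is easy to reproduce; a SciPy sketch (the PP(d)-GMRES(50) driver is not shown here):

```python
import numpy as np
from scipy import sparse

# Example 1 matrix: bidiagonal with diagonal -2500,...,-1, 1,...,2500
# and superdiagonal of ones; nonsymmetric, real mirrored spectrum.
diag = np.concatenate([np.arange(-2500, 0), np.arange(1, 2501)]).astype(float)
A = sparse.diags([diag, np.ones(len(diag) - 1)], offsets=[0, 1], format="csr")

b = np.random.randn(A.shape[0])
b /= np.linalg.norm(b)          # right-hand side scaled to norm 1
```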
3.1. Balance Method 1: Add a root. The first way of balancing the polynomial is to add a root that makes the slope zero at the origin. As with adding roots for stability in Subsection 2.2, the resulting polynomial is no longer a minimum residual polynomial, but it may nearly have that property since it is a modification of the GMRES polynomial. The slope of the polynomial φ at the origin is

φ′(0) = Σ_{i=1}^{d} 1/θ_i.

In order to balance the polynomial, we add one extra root to π to make the slope of φ at the origin zero. We call this root the "balancing root", defined as η = −1 / Σ_{i=1}^{d} (1/θ_i), and denote the new polynomial φ1; see Algorithm 2.

Algorithm 2 Balance Method 1: Add a Root
0. Apply GMRES(d) to form a polynomial φ(α) of degree d. Let the roots of the corresponding polynomial π(α) (harmonic Ritz values) be θ1, . . . , θd.
1. Compute η = −1 / Σ_{i=1}^{d} (1/θ_i) and add it to the list of polynomial roots of π.
2. The new polynomial of degree d + 1 is φ1(α) = 1 − π(α) · (1 − α/η).

Example 1 (continued). The right half of Table 3.1 is for the balanced polynomial. All of the φ1 polynomials used are one degree higher than the degree listed on the left because of the added root. The results are consistently better with the balancing. Even the degree six polynomial (d = 5 before the added root) is effective, converging in 198 seconds compared to over 2000 seconds without polynomial preconditioning. For polynomials with d = 50 and above, the times are all below three seconds, and dot products are reduced by more than three orders of magnitude versus no polynomial preconditioning.
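Algorithm 2 is a one-line computation on the root list; a sketch:

```python
import numpy as np

def add_balancing_root(thetas):
    """Balance Method 1 (Algorithm 2): since phi'(0) = sum_i 1/theta_i,
    appending the root eta = -1/sum_i(1/theta_i) makes the slope of the
    new polynomial phi1 zero at the origin."""
    thetas = np.asarray(thetas, dtype=complex)
    eta = -1.0 / np.sum(1.0 / thetas)
    return np.append(thetas, eta)
```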
Example 2. We next use a matrix that is bidiagonal with diagonal elements −100, −99, −98, . . . , −2, −1, 1, 2, 3, . . . , 4899, 4900 and super-diagonal elements all 1. This matrix is much less indefinite than the first one. Table 3.2 has timing results for PP(d)-GMRES(50) with different degree polynomials and with different balance methods. This example motivates the balance methods that will be given in the next few subsections.

Table 3.2: Bidiagonal matrix with diagonal −100, −99, . . . , −1, 1, 2, 3, . . . , 4900, and superdiagonal of ones. PP(d)-GMRES(50) is used to solve linear equations. Times are compared for different ways of balancing the polynomial. Times are given in seconds. For Balance 2, "same" means that no root is subtracted but a root is added, so it is the same as Balance 1. Superscript ∗ indicates the residual norm only reaches 7.94 ∗ 10^{−10}, and superscript † indicates the residual norm only reaches 6.44 ∗ 10^{−10}. For Balance 4, the degree 10 polynomial is composed of a degree 5 inner polynomial and a degree 2 outer polynomial. Similarly, degrees 15 and 25 have an inner polynomial of degree 5, while the higher ones have an inner polynomial of degree 10.

 d - degree   No balance   Balance 1   Balance 2      Balance 3   Balance 4
 of poly φ    (seconds)    add root    remove, add    RRG poly    Composite
 No PP        229
 5            18.6         52.8        118            97.4        -
 9            17.3         373         28.2           13.9        -
 10           -            8.44        same           9.48        34.9
 15           10.6         3.16        3.09           3.33        19.2
 25           20.4         2.33        2.62           1.92        9.28
 50           2.92         1.83        2.11           1.59        1.53
 100          1.95         1.92        same           1.23        1.06
 150          1.82         1.44        3.04           1.87∗       1.10
 200          2.08         -           2.69           2.20†       0.77

With Balance Method 1, the best times are an improvement of more than a factor of 150 compared to no preconditioning. However, for the degree five polynomial, balancing is not necessary. Figure 3.3 shows why. The unbalanced approach has a polynomial that goes negative over the negative part of the spectrum of A, so the resulting polynomial preconditioned spectrum remains indefinite. But it does not have as many small eigenvalues as with Balance Method 1, which is fairly flat around the origin. Specifically, with the original degree five polynomial, φ(A) has 100 negative eigenvalues, but only 12 with absolute value less than 0.02, and the smallest is 3.2 ∗ 10^{−3}. Meanwhile, with the balanced degree six polynomial, φ1(A) has all positive eigenvalues, but has 108 less than 0.02, and the smallest is 6.7 ∗ 10^{−6}.

Fig. 3.3: Bidiagonal matrix of size n = 5000 with 100 negative eigenvalues. Top has φ polynomials of original degree d = 5 with different balancing methods. Bottom has a closeup of the same polynomials near the origin.

At degree 10, convergence is not achieved without balancing, as the residual norm stalls at 0.0217. Balance Method 1 is a big improvement for this degree and for degrees 15 and 25. For degrees 50 and higher, convergence is reached in under three seconds even without balancing, though balancing still yields improvements in some cases. The degree 9 polynomial with Balance Method 1 (so degree 10 with the extra root) does poorly. The added root is around −645, and the linear factor (1 + α/645) causes the polynomial to increase in size at the large part of the spectrum. Figure 3.4 shows this polynomial dips below zero and so gives an indefinite spectrum. The degree 200 polynomial has a similar problem as degree 9. Balance Method 2 is developed next to deal with these situations.

3.2. Balance Method 2: Remove root(s), then add a root. Sometimes it may be beneficial to remove one or two roots from the GMRES polynomial π, but still add a balancing root. The motivation for first removing a root is to decrease the value of |φ′(0)|. This gives a balancing root η further from the origin and a linear factor (1 − α/η) closer to 1 across the spectrum of A. To make the value of |φ′(0)| smaller, we look for the harmonic Ritz value whose reciprocal (or sum of reciprocals in the complex case) is closest to φ′(0). Balance Method 2 is in Algorithm 3.

Algorithm 3 Balance Method 2: Remove root(s), then add a root
0. Apply GMRES(d) to form a polynomial φ(α) of degree d. Let the roots of the corresponding polynomial π(α) (harmonic Ritz values) be θ1, . . . , θd.
1. Compute the difference from the derivative:
   • For each real root θi, compute |φ′(0) − 1/θi|.
   • For complex conjugate pairs (θi, θ̄i), compute |φ′(0) − (1/θi + 1/θ̄i)|.
2. Let ξ be the inverse (or sum of inverses) with the smallest difference from φ′(0).
3. If |φ′(0) − ξ| ≥ |φ′(0)|, do not remove any roots; add η = −1 / Σ_{i=1}^{d} (1/θi) to the list of polynomial roots of π.
4. If |φ′(0) − ξ| < |φ′(0)|, remove the corresponding root (or conjugate pair) from π, then add the balancing root η = −1 / Σ (1/θi), with the sum over the remaining roots.
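A sketch of Algorithm 3, restricted to real roots (the complex-conjugate-pair branch is omitted):

```python
import numpy as np

def balance_remove_then_add(thetas):
    """Balance Method 2 (Algorithm 3), real roots only: find the root
    whose reciprocal is closest to phi'(0); removing it shrinks
    |phi'(0)|, which pushes the balancing root away from the origin."""
    thetas = np.asarray(thetas, dtype=float)
    slope = np.sum(1.0 / thetas)                    # phi'(0)
    k = np.argmin(np.abs(slope - 1.0 / thetas))     # candidate root to remove
    if np.abs(slope - 1.0 / thetas[k]) < np.abs(slope):
        thetas = np.delete(thetas, k)               # Step 4: removal helps
    eta = -1.0 / np.sum(1.0 / thetas)               # balancing root
    return np.append(thetas, eta)
```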
3.3. Checking the polynomial with cubic Hermite splines. Cubic Hermite splines give a quick check of whether a φ polynomial will be positive over the spectrum before it is used. A spline C_j models π between consecutive real roots θ_j and θ_{j+1}, with x̂_j a critical point of C_j between them, and with θ̃min and θ̃max estimates of the extreme eigenvalues of A. If C_j(x̂_j) > 1 and any of the following hold:
• x̂_j ≤ Re(θ̃min) ≤ θ_{j+1} and C_j(Re(θ̃min)) > 1,
• θ_j ≤ Re(θ̃max) ≤ x̂_j and C_j(Re(θ̃max)) > 1,
• Re(θ̃min) ∉ (x̂_j, θ_{j+1}) and Re(θ̃max) ∉ (θ_j, x̂_j),
then φ is not positive over the spectrum of A.

Example 3. We consider another bidiagonal matrix with diagonal elements −100, −99, . . . , −2, −1, 1, 2, . . . , 9850, 9860, 9870, . . . , 10330, 10340, 10350 and super-diagonal of all ones. As can be seen in Table 3.3, polynomial preconditioning provides improvement of up to 14 times compared to no preconditioning, and up to 43 times with Balance Method 2, in terms of matrix-vector products.

Table 3.3: Bidiagonal matrix of size n = 10,000 with diagonal −100, −99, . . . , −2, −1, 1, 2, . . . , 9850, 9860, 9870, . . . , 10330, 10340, 10350 and super-diagonal of ones. PP(d)-GMRES(50) is used to solve linear equations.

 d - degree   No balance          Balance 1 (add root)   Balance 2 (subtr., add)
 of poly φ    MVP      Time       MVP      Time          MVP      Time
              (thous)  (sec)      (thous)  (sec)         (thous)  (sec)
 No poly      872      976
 10           210      28.1       298      40.4          798      136
 40           12,940   749        4,394    254           42.6     3.6
 70           69.3     5.6        377      19.0          26.0     2.2
 100          59.0     4.7        203      10.6          20.2     1.9

Fig. 3.5: Bidiagonal matrix of size n = 10,000. The bottom graph has the φ polynomial of degree 40 with no balance, Balance Method 1 and Balance Method 2. Top right has the same polynomials zoomed in at the origin. Top left demonstrates the intervals where the polynomial from Balance Method 1 is negative.

3.4. Balancing with a polynomial from Range Restricted GMRES. This section discusses two methods that use Range Restricted GMRES [7] (RRGMRES). This approach automatically gives a balanced polynomial. There is more than one way to represent the polynomial from RRGMRES. Here we implement it in Newton form and call it Balance Method 3; see Algorithm 5. Determining the polynomial for Balance Method 3 is not efficient for high degree polynomials, as it uses double the matrix-vector products. Also, it may not always be stable, due to the non-orthogonal basis V and the use of the Normal Equations (though the columns of V do approximate an orthogonal matrix). For a more stable and more complicated implementation of this Newton form of the polynomial, see [17, Subsection 3.3] (this can be adjusted for RRGMRES). Balance Method 4 uses a composite polynomial with the Newton form of the RRGMRES polynomial as the inner polynomial. The outer polynomial comes from polynomial preconditioned GMRES. For details of using composite polynomials, we refer to [18], where they are called double polynomials.

Algorithm 5 Determine the Polynomial from Range Restricted GMRES for Balance Method 3
0. Choose the degree d of the φ polynomial.
1. Apply the Arnoldi iteration with starting vector A · b for d iterations. Compute Ritz values θ1, . . . , θd (regular Ritz, not harmonic).
2. Generate a matrix V with Newton basis vectors as columns, so the vectors are b, (A − θ1)b, (A − θ2)(A − θ1)b, . . . , (A − θ_{d−1}) · · · (A − θ2)(A − θ1)b. This can be modified to keep real vectors in the complex Ritz value case.
3. Solve the Normal Equations (AV)^T (AV) g = (AV)^T b. The Newton coefficients are in g.

Example 2 (continued). The two right-most columns of Table 3.2 have results with Balance Methods 3 and 4. These methods are mostly competitive, and Method 4 has the best times for high degree polynomials. Note that Balance Method 3 does not quite give full accuracy at the high degrees.
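A direct, simplified transcription of Algorithm 5, assuming real Ritz values and dense arrays:

```python
import numpy as np

def rrgmres_poly(A, b, d):
    """Balance Method 3 (Algorithm 5): the degree-d RRGMRES polynomial
    in Newton form.  Returns the Newton shifts (regular Ritz values)
    and coefficients g.  Note the doubled matrix-vector products, which
    makes this inefficient for high degrees."""
    n = len(b)
    Q = np.zeros((n, d + 1)); H = np.zeros((d + 1, d))
    v = A @ b                                   # RRGMRES starting vector A*b
    Q[:, 0] = v / np.linalg.norm(v)
    for j in range(d):                          # Arnoldi iteration
        w = A @ Q[:, j]
        for i in range(j + 1):                  # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w = w - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        Q[:, j + 1] = w / H[j + 1, j]
    shifts = np.linalg.eigvals(H[:d, :d]).real  # Ritz values (real case assumed)
    V = np.zeros((n, d)); V[:, 0] = b           # Newton basis: b, (A - t1)b, ...
    for k in range(1, d):
        V[:, k] = A @ V[:, k - 1] - shifts[k - 1] * V[:, k - 1]
    AV = A @ V
    g = np.linalg.solve(AV.T @ AV, AV.T @ b)    # normal equations for coefficients
    return shifts, g
```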
3.5. Choice of balancing. We first give an example where balancing is detrimental, then discuss when balancing should be used and which version is appropriate.

Example 4. Let the matrix be diagonal of size n = 5000 with eigenvalues −1000, −999, −998, . . . , −100, 0.1, 0.2, 0.3, . . . , 1, 2, 3, . . . , 4090. We apply polynomial preconditioning with and without balancing. PP(50)-GMRES(50) converges to residual norm below 10^{−10} in 10 cycles without balancing. Using balancing is disastrous: with Method 1 it takes 4235 cycles, Balance 2 takes 6713 cycles, and Balance 3 uses 4185. The top of Figure 3.6 has the polynomial with and without Balance Method 1. The unbalanced polynomial comes through the origin with significant slope, which allows it to separate the small eigenvalues from the origin more than the balanced polynomial does. The smallest eigenvalue of φ(A) without balancing is 6.12 ∗ 10^{−4}, while the smallest with balancing is much smaller at 1.27 ∗ 10^{−6}. The bottom of Figure 3.6 shows the polynomial at these small eigenvalues.

Fig. 3.6: Matrix with a gap in the spectrum on one side of the origin. Balance Method 1 is compared to no balancing with d = 50. The lower half of the figure has 'x' and 'o' marks showing the polynomial values at the small eigenvalues.

Now we attempt to give guidance on when to balance. Some rough knowledge about the spectrum is required; otherwise it may be necessary to experiment with and without balancing to determine which works better. If the eigenvalues are real or almost all real, and they are fairly equally distributed near the origin, then there will probably be a gain from balancing an indefinite spectrum and hopefully turning it definite. For such a spectrum that is nearly mirrored on both sides of the origin, use Balance Method 1. Otherwise, one can try Methods 1 and 2, possibly testing with splines to see if they pass. If not, or if they don't work in an actual test, switch to Balance 3, or to Balance 4 for high degree polynomials. As the last example showed, if there is a gap between the origin and the eigenvalues on one side of the spectrum, then it may be best not to balance. Then the polynomial can dip down in this gap, and the preconditioned spectrum may still be definite. Finally, when there are eigenvalues surrounding the origin with significant imaginary parts, this is both a difficult situation and probably one where balancing is not beneficial. This is discussed in the next section.

4. Complex Spectra. Matrices with spectra spread out in the complex plane need to be investigated. A polynomial cannot be expected to be effective if the origin is completely surrounded by eigenvalues, based on the minimum modulus principle in complex analysis.
For instance, if the spectrum is on a circle centered at the origin, we would want the φ polynomial to be zero at the origin and then on the circle have real part near one and imaginary part near zero. However, such a polynomial would have its minimum modulus in the interior of the circle, violating the principle. We look at approaching this difficult situation of having the origin surrounded in this and later sections. We first have an example where both polynomial preconditioning and balancing are effective for a matrix with a significantly complex spectrum that is not complex at the origin. Then the next example looks at how far the spectrum can surround the origin while linear equations can still be solved.

Example 5. We consider a Hatano-Nelson [10] matrix of size n = 2500, with γ = 0.5 and random diagonal d = 0.9 ∗ 4 ∗ rand(n, 1), which has a complex and indefinite spectrum; see the lower right part of Figure 4.1. The results are in Table 4.1. GMRES(50) does not converge; the residual norm stalls at ∥r∥ = 0.359. This can be fixed with polynomial preconditioning. For degree d = 15, Balance Methods 3 and 4 converge rapidly. Balance Methods 1 and 2 fail to converge, but for degrees 20 and 25 they work well. The left side of Figure 4.1 shows the spectra after polynomial preconditioning of degree 25 using no balancing and Balance Methods 1 and 2. With no balancing, the spectrum is very indefinite. A close-up of the two balancings in the upper right part of the figure shows those spectra are now definite.

Table 4.1: PP-GMRES applied to the Hatano-Nelson matrix [10] with n = 2500, γ = 0.5, and d = 0.9 ∗ 4 ∗ rand(n, 1). For Balance Method 4, the inner polynomial is selected to have the highest possible degree, up to 15, that evenly divides the degree of the overall composite polynomial. Times are given in seconds.

 d - degree   No balance   Balance 1   Balance 2       Balance 3   Balance 4
 of poly φ                 add root    remove, add     RRG poly    Composite
 15           4.66         -           -               2.08        2.85
 20           22.6         4.50        4.29            2.69        3.73
 25           12.0         3.92        3.92            3.32        4.03
 50           10.3         7.18        -               2.01        3.70
 100          64.2         5.02        same as Bal 1   1.63        5.62

Fig. 4.1: Graphs for the Hatano-Nelson matrix with n = 2500, γ = 0.5 and d = 0.9 ∗ 4 ∗ rand(n, 1). The bottom right graph is the original spectrum of the matrix. The top left is the spectrum transformed with a degree 25 preconditioning polynomial without balancing. The middle left and bottom left graphs are the spectrum transformed with the Balance Method 1 and Balance Method 2 polynomials, respectively. The top right shows Balance Methods 1 and 2 zoomed in at the origin, showing that their spectra are positive definite.
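A Hatano-Nelson-type matrix can be sketched as below. The hopping convention, scaling, and boundary treatment are assumptions here; the actual test matrix in Example 5 was provided by Mark Embree (see the Acknowledgments) and may differ in convention.

```python
import numpy as np

def hatano_nelson(n=2500, gamma=0.5, seed=0):
    """Sketch of a Hatano-Nelson-type matrix [10]: tridiagonal with
    asymmetric hoppings exp(+gamma), exp(-gamma) and random diagonal
    potential d = 0.9*4*rand(n,1)."""
    rng = np.random.default_rng(seed)
    A = np.diag(0.9 * 4 * rng.random(n))
    A += np.diag(np.exp(gamma) * np.ones(n - 1), 1)    # hopping to the right
    A += np.diag(np.exp(-gamma) * np.ones(n - 1), -1)  # hopping to the left
    return A
```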
Example 6. Polynomial preconditioning is examined for matrices with very complex spectra. Each matrix has size n = 2000. Twenty eigenvalues are equally spaced on 50 rays that move out from the origin and go much of the way around it; see the (blue) dots on the left parts of Figure 4.2. GMRES(50) is run with and without polynomial preconditioning to a relative residual norm tolerance of 10^{−8}, with right-hand sides generated random normal and then normed to one. No balancing is used. The red stars on the left parts of Figure 4.2 are the roots of the GMRES residual polynomials of degree 50. Plots on the right show how the spectrum is changed by the preconditioning. The first case, shown in the top half of the figure, has eigenvalues 230 degrees around the origin. The polynomial succeeds in transforming the original indefinite spectrum on the left into one that is nearly definite on the right (there are only six eigenvalues with negative real part after the preconditioning). For the second case of 280 degrees, the spectrum is also improved by the preconditioning, even though it stays quite indefinite. There is more spacing from the origin, and many eigenvalues are in a blob. GMRES(50) converges very slowly for the easier matrix and not at all for the tougher one, but PP(50)-GMRES(50) converges rapidly for both cases. See Table 4.2 for results with these two matrices and one in-between them. For the 230° case, there is remarkable improvement with just a degree 5 polynomial. For the more difficult matrices, a degree 50 polynomial is needed. Cases with eigenvalues extending even further around the origin require still higher degree polynomials. For example, with a spectrum 290 degrees around the origin, a degree 50 polynomial takes 13.9 minutes while d = 150 converges in only 6.4 seconds. Going to the even more difficult case of 300 degrees, both a high degree polynomial and a larger GMRES restart parameter are needed, as PP(150)-GMRES(50) takes 13.5 minutes but PP(150)-GMRES(150) converges in 10.9 seconds. Note that polynomials of degree higher than 150 may not be stable in this situation.

Fig. 4.2: The effect of polynomial preconditioning on very complex spectra. The top two plots are for a matrix with eigenvalues going around 230°, and the bottom plots are for 280°. On the left side, the original eigenvalues are shown with dots in the complex plane; the (red) asterisks are the polynomial roots. The right two plots show with 'x' marks the eigenvalues of the polynomial preconditioned spectrum of φ(A) with degree d = 50.

Table 4.2: Complex matrices of Example 6. PP(d)-GMRES(50) is used to solve linear equations with eigenvalues going three different angles around the origin.

 d - degree   Angle 230°               Angle 255°               Angle 280°
 of poly φ    MVP (thou's)  time       MVP (thou's)  time       MVP (thou's)  time
 1 (no pp)    2.6e+6        7.6 days   -             -          -             -
 5            11.2          0.54 sec   33,485        18 min     -             -
 50           6.4           0.28 sec   24.4          0.45 sec   772           6.7 sec

As mentioned earlier, a φ polynomial in the complex plane cannot be zero at the origin and move only toward 1 as it moves away from the origin; it needs to have both ups and downs while moving away. However, for this example the spectrum only partly surrounds the origin, so the polynomial is somewhat able to head toward 1 as it moves away from the origin over the spectrum. For the case of the spectrum going 230 degrees around, the upper-left plot in Figure 4.3 shows contours for the real part of the degree 5 polynomial, and the upper-right has the imaginary part. The eigenvalues are yellow dots. The polynomial is able to flatten out over most of the spectrum and push many of the eigenvalues to have real parts between 0.5 and 1.0. Extending further out from this relatively flat portion in the real plot are five valleys and five ridges. Heading left from the origin is a valley, and to the right of the flat area is a rising ridge. Valleys and ridges alternate going around. Next, for the 280° matrix, the real contours are in the bottom-left. For this more difficult spectrum, the real part is not able to make it to 0.5 except at a few eigenvalues. A higher degree polynomial can be significantly more effective. The real part of the degree 50 polynomial is shown in the lower-right part of the figure. This polynomial goes above 0.5 except for the part of the spectrum near the origin. There are many valleys and ridges for this polynomial. Unlike for the previous example, balancing is not effective here, and these contour maps help show why. Balancing keeps the real part of the polynomial from being able to dive down quickly when moving to the left from the origin; similarly, moving to the right, the polynomial could not as quickly move up toward 1. Two upcoming examples (8 and 10) will back up the observation that balancing may be detrimental when the spectrum goes much of the way around the origin.

Fig. 4.3: Contour maps for polynomials with the matrices from Example 6.

5. Stability Control. Here we give stability methods specialized for polynomial preconditioning of indefinite problems. Stability of the GMRES residual polynomial is discussed in Subsection 2.2 (from sources [8, 18]).
A pof value is associated with each root θj of the GMRES residual polynomial π (harmonic Ritz value):

pof(θj) = ∏_{i≠j} |1 − θj/θi|.

A high pof value indicates a root that stands out from the others and signals that there may be instability (unless the root is spurious). To fix this, extra copies of these outstanding roots are added to the π polynomial. However, for a small root θi, the linear factor (I − A/θi) from an extra copy can blow up the polynomial at large portions of the spectrum. For an indefinite spectrum, there can be outstanding eigenvalues on one side of the spectrum that are much smaller than eigenvalues on the other side. For this situation, where there are high pof's on the shorter side of the spectrum, we suggest alternatives to adding extra roots. The following theorem shows a correlation between a high pof value and the accuracy of the corresponding harmonic Ritz vector. It is assumed that the root of interest is equal to an eigenvalue, since a large pof will usually correspond to a root that closely approximates an outstanding eigenvalue (spurious harmonic Ritz values do not need to be dealt with, as is mentioned right before Algorithm 6).

Theorem 5.1. Assume a GMRES residual polynomial π is generated with d iterations of solving Ax = b, where A is diagonalizable and b is norm one. Let the roots of π (the harmonic Ritz values) be θi, for i = 1, . . . , d. Let the eigenvectors of A be zi, and let Z be the matrix with these as columns. Let b = Σ_{i=1}^{n} βi zi. Assume a particular Ritz value θj equals an eigenvalue λj. Then

∥rj∥ ≤ |θj| ∥Z^{−1}∥ / (|βj| pof(θj)),    where pof(θj) = ∏_{i≠j} |1 − θj/θi|.

Proof. Let yj be a harmonic Ritz vector. From [25, 21], it can be written as

yj = ωj(A) b = sj ∏_{i≠j} (I − A/θi) b,    (5.1)

where sj is the constant that normalizes ωj at θj, meaning sj = 1 / ∏_{i≠j} (1 − θj/θi). Multiply (5.1) by (I − A/θj), use that π(A) = ∏_{i=1}^{d} (I − A/θi), and rearrange:

A yj − θj yj = −θj sj π(A) b.

Since the GMRES residual norm cannot rise, ∥b∥ = 1 implies that ∥π(A)b∥ ≤ 1. So

∥A yj − θj yj∥ = |θj| |sj| ∥π(A)b∥ ≤ |θj| |sj| = |θj| / pof(θj).    (5.2)
Next,

∥yj∥ = ∥Z [β1 ωj(λ1), . . . , βn ωj(λn)]^T∥ ≥ (1/∥Z^{−1}∥) ∥[β1 ωj(λ1), . . . , βn ωj(λn)]^T∥ ≥ (1/∥Z^{−1}∥) |βj| |ωj(λj)| ≥ |βj| / ∥Z^{−1}∥,

using that θj = λj and that ωj is normalized to be one at θj. Combined with (5.2), we have

∥rj∥ = ∥A yj − θj yj∥ / ∥yj∥ ≤ |θj| ∥Z^{−1}∥ / (|βj| pof(θj)).

This theorem indicates that for an accurate θj with a large pof, yj will generally be a good approximate eigenvector. This motivates one of the stability procedures that follow, where approximate eigenvectors are used for deflation.

Algorithm 6 gives our approach for dealing with an unstable polynomial. It has the previous way of adding extra copies of roots at outstanding eigenvalues, but now only on the side of the complex plane where the spectrum is larger. Then it has corrections for the other side if there are outstanding eigenvalues there. The Step 5.a correction uses the fact that instability error on the smaller side is mostly in the direction of just a few eigencomponents corresponding to roots with high pof values, so projecting over a few eigenvectors removes the rogue eigencomponents. Likewise, in Step 5.b, it generally takes only a few iterations of GMRES to improve the residual vector. In the algorithm, it is important to identify whether there are large spurious harmonic Ritz values. These are much more likely to occur for indefinite problems than for definite ones (large spurious harmonic Ritz values correspond to spurious Ritz values of A^{−1} near the origin [19], which for symmetric problems can happen only in the indefinite case). Extra roots for stability control are not needed at spurious harmonic Ritz values. If there is a high pof for a particular large value, then the residual norm can tell us whether or not it is spurious.

Algorithm 6 Choice of Stability Control for an Indefinite Spectrum
0. Compute pof values for the polynomial roots. Set pofcutoff (mentioned in Subsection 2.2) and rncutoff (used to indicate a reasonably accurate approximate eigenvector). Determine which is the larger side of the spectrum in the complex plane; this is the side for which the harmonic Ritz values (with spurious values not counted) extend the furthest from the imaginary axis. Alternatively, Ritz values can be used, and then large spurious values are unlikely.
1. Optional: Add a limited number of extra copies of roots on the smaller side of the spectrum that have pof > pofcutoff and ∥rj∥ ≤ rncutoff. Then recompute pof values.
2. If the largest pof on the small side of the spectrum is greater than 10^{20}, reduce the degree of the polynomial and begin again.
3. Add extra copies of roots on the larger side if pof > pofcutoff. The number of copies at a root is the least integer greater than (log10(pof(k)) − pofcutoff)/14.
4. Apply PP-GMRES. Solve until a shortcut residual norm reaches the desired tolerance. If the actual residual norm is not as accurate, then the correction methods can be applied.
5. Correction phase: Apply one or both correction steps.
   a. Apply Galerkin projection (Algorithm 2 in [24]) over the span of the approximate eigenvectors corresponding to the roots on the smaller side of the spectrum that have both pof(θj) ≥ pofcutoff and ∥rj∥ ≤ rncutoff. This deflates these eigencomponents from the residual vector.
   b. Run GMRES (without polynomial preconditioning) for a few iterations.

In the optional Step 1 of the algorithm, perhaps only one root should be added on the shorter side. Or, only add roots that are within half the magnitude of the largest non-spurious harmonic Ritz value on the other side.
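The pof diagnostic and the Step 3 copy count are simple to compute; a sketch in log10 form (to avoid overflow, since pof values up to about 10^{29} occur in the experiments below):

```python
import math
import numpy as np

def log10_pof(thetas):
    """pof(theta_j) = prod_{i != j} |1 - theta_j/theta_i| (Algorithm 6,
    Step 0), returned as log10(pof) for each root."""
    thetas = np.asarray(thetas, dtype=complex)
    out = np.empty(len(thetas))
    for j in range(len(thetas)):
        ratios = 1.0 - thetas[j] / np.delete(thetas, j)
        out[j] = np.sum(np.log10(np.abs(ratios)))
    return out

def extra_copies(logpof_k, pofcutoff=4):
    """Algorithm 6, Step 3: least integer greater than
    (log10(pof(k)) - pofcutoff)/14, and zero below the cutoff."""
    return max(0, math.floor((logpof_k - pofcutoff) / 14) + 1)
```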
The effect of adding these roots on the polynomial on the large side can be checked with the Hermite spline polynomials of Subsection 3.3.

Example 7. The stability algorithm is tested with a matrix that has outstanding eigenvalues on both sides of the complex plane. However, the outstanding eigenvalues on the left side are much smaller in magnitude than those on the other side. The matrix is bidiagonal of size n = 5000 with diagonal elements −500, −400, −300, −200, −100, 0.001, 0.01, 0.02, 0.03, . . . , 0.09, 0.1, 0.2, 0.3, . . . , 0.9, 1, 2, 3, . . . , 4971, 5000, 5100, 5200, 5300, 5400 and with superdiagonal elements all 0.1. The right-hand side is generated random normal and then is normed to one. PP(d)-GMRES is applied using the stability control in Algorithm 6, with roots added only on the larger (right) side (i.e., Step 1 is not activated). No balancing is used. Linear equations are solved until a shortcut residual norm reaches 10^{−10}. The rncutoff value is not needed because no spurious harmonic Ritz values occur. The value pofcutoff = 10^6 is used. The correction in Step 5.a has different numbers of approximate eigenvectors: for example, with degree 57 only two roots have reached pof > 10^6, while for degree 100 there are five. For the GMRES correction in Step 5.b, 10 GMRES iterations are used.

Table 5.1: Bidiagonal matrix of Example 7. PP(d)-GMRES(50) is run, followed by correction procedures.

 d - degree   Time     max       Res Norm      Deflate      GMRES        Both
 of poly φ    (sec's)  pof       w/o correct   correction   correction   corrections
 57 + 2       44       2.9e+9    5.8e-6        1.1e-10      9.6e-11      9.6e-11
 75 + 4       0.92     1.7e+14   1.9e+2        2.8e-10      1.1e-10      5.1e-11
 90 + 4       0.97     2.5e+18   8.3e+4        1.5e-9       2.5e-9       4.2e-11
 100 + 5      0.92     1.1e+21   1.7e+5        2.3e-9       3.2e-10      3.8e-11
 110 + 5      3.5      4.6e+23   1.0e+9        1.4e-5       6.3e-8       1.2e-7
 115 + 5      -        9.0e+24   -             -            -            -

Table 5.1 has results with three types of correction. First, the eigenvector deflation is in the fifth column, then the next column has the GMRES correction, followed by both (deflation, then GMRES). Much more accurate results are produced. Using both corrections is usually better, but may not be needed. Because of the difficulty of this problem, polynomial preconditioning is very important. In fact, PP-GMRES does not converge until the degree of the polynomial reaches 57 (and not for every degree just above that). We finish this example by activating the optional Step 1 for the polynomial of degree initially 115. For this high degree polynomial, the method does not converge even in the shortcut residual norm, so we need Step 1 of the algorithm in order to keep the polynomial more controlled on the short side. Here we add one root copy to each root on the left side with pof over 10^{14}; this results in three extra roots. Then the rest of the algorithm is applied, along with the GMRES correction. The final residual norm is 1.1 ∗ 10^{−10}. So by using Step 1, we are able to get an accurate result. However, we do not succeed with polynomials of much higher degree.
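Step 5.a can be sketched as a generic Galerkin projection over the flagged approximate eigenvectors; this simplifies Algorithm 2 of [24]:

```python
import numpy as np

def galerkin_correction(A, x, b, Y):
    """Galerkin projection over approximate eigenvectors Y (n-by-k)
    corresponding to small-side roots with high pof values: solve the
    projected system (Y^H A Y) c = Y^H r and update x, which removes
    the rogue eigencomponents from the error.  Follow with a few plain
    GMRES iterations for Step 5.b."""
    r = b - A @ x
    c = np.linalg.solve(Y.conj().T @ (A @ Y), Y.conj().T @ r)
    return x + Y @ c
```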
Example 8. The matrix is cz20468 from the SuiteSparse Matrix Collection. Standard GMRES(50) fails to converge, prompting the use of preconditioning techniques. ILU factorization (with droptol = 0.01) is applied as an initial preconditioner. This produces a preconditioned operator with an indefinite spectrum, for which GMRES(50) still does not converge. We add polynomial preconditioning and now can get convergence; see Table 5.2. For this example, a solution with residual norm 10^{−8} is sought, so PP-GMRES is run until the shortcut residual norm produces a solution of 10^{−10}, to compensate for potential instability in the polynomial preconditioner. The parameters are set to pofcutoff = 10^4 and rncutoff = 10^{−3}. Results for polynomial degrees ranging from 5 to 40 are presented in Table 5.2. The higher the degree of the polynomial, the more unstable the polynomial can be. The max pof column provides a glimpse of this, as some pof's become greater than 10^{20} for degrees 35 and higher. These high pof's are observed on the larger side, while the pof's on the shorter side remain below 10^{20}, as stipulated in Algorithm 6. At higher polynomial degrees, more vectors are available for deflation. For the degree 20 polynomial only one approximate eigenvector is deflated, while 7 are used for the degree 40 polynomial. Applications of deflation and/or GMRES correction consistently restore accuracy to the solution. Notably, 10 iterations of GMRES alone often serve as a sufficient correction and even outperform using both corrections for degrees 20 and 30. The bottom two rows of Table 5.2 show the degree 40 polynomial with and without the optional Step 1 of Algorithm 6. Applying Step 1 improves the uncorrected residual norm from 7.6 ∗ 10^9 to 8.1 ∗ 10^2, which allows the accuracy to reach the desired goal of 10^{−8} after deflation.

Table 5.2: cz20468 matrix with ILU factorization (droptol = 0.01) of Example 8. PP(d)-GMRES(50) is run, followed by correction procedures.

 d - degree   Time     max       Res Norm      Deflate      GMRES        Both
 of poly φ    (sec's)  pof       w/o correct   correction   correction   corrections
 1            -
 5 + 0        173      5.7       5.6e-9        -            2.7e-10      -
 10 + 0       166      5.8e+2    2.3e-8        -            2.1e-10      -
 20 + 2       2186     7.5e+7    3.7e-7        1.6e-8       1.8e-10      3.3e-10
 30 + 5       404      3.7e+19   1.6           1.1e-8       2.9e-10      3.9e-10
 35 + 6       333      1.4e+24   3.2e+4        9.4e-9       1.9e-9       1.2e-9
 40 + 7       166      2.0e+29   7.6e+9        3.7e-4       5.6e-4       6.6e-6
 40 + 8       162      2.0e+29   8.1e+2        1.9e-7       7.5e-8       4.3e-8

6. Convergence Estimates. This section develops estimates for the effectiveness of polynomial preconditioned GMRES for indefinite problems. Using Chebyshev polynomial results, we derive theoretical estimates of how well polynomial preconditioning can enhance the convergence of restarted GMRES. The analysis reveals a significant reduction in the required number of matrix-vector products under idealized conditions. We assume that all polynomials can be approximated by Chebyshev polynomials, including both the polynomial for preconditioning and the polynomials that underlie the GMRES method. For small δ > 0, the degree-k Chebyshev polynomial satisfies

T_k(1 + δ) ≐ 1 + k²δ.    (6.1)

We assume the spectrum of the matrix satisfies σ(A) ⊂ [u, v] ∪ [a, b] ⊂ R, where u ≪ v < 0 < a ≪ b. Consider h(x) = (x − a)(x − b)(x − v), which has roots at x = a, b, v, a local maximum at

γ1 = ((a + b + v) − √(a² + b² + v² − (ab + av + bv)))/3,

and a local minimum at

γ2 = ((a + b + v) + √(a² + b² + v² − (ab + av + bv)))/3.

It can be shown that γ1 ∈ [v, a] and γ2 ∈ [a, b]. It can also be shown that h(γ2) = h(a + b + v − 2γ2). Take f(x) = 1 − 2h(x)/h(ξ), with ξ = γ2 or ξ = u as determined below, so that f maps the spectrum approximately into [−1, 1]. In the case where u > a + b + v − 2γ2, ξ = γ2 and

f(0) = 2abv/((γ2 − a)(γ2 − b)(γ2 − v)) + 1 ≐ 2abv/((2b/3 − a)(2b/3 − b)(2b/3 − v)) + 1 ≐ −27av/(2b²) + 1,

using γ2 ≐ 2b/3 when a, |v| ≪ b. If u ≤ a + b + v − 2γ2, then ξ = u and

f(0) = 2abv/((u − a)(u − b)(u − v)) + 1 ≐ 2abv/(u²(u − b)) + 1.

In both cases, we conclude that f(0) = 1 + δ with δ small and positive, allowing us to utilize (6.1) above. Now, composing f(x) with the Chebyshev polynomial T_{m/3}, we obtain T_{m/3}(f(x))/T_{m/3}(f(0)), which forms the desired polynomial of degree m.
The maximum value of this Chebyshev polynomial over [u, v] ∪ [a, b] is

1 / T_{m/3}(1 + 2abv/((ξ − a)(ξ − b)(ξ − v))),    where ξ = γ2 or ξ = u.

This quantity estimates the improvement in residual norm per one cycle of GMRES(m). Using approximation (6.1), for either value of ξ, the residual norm is improved per cycle by approximately

1/(1 + δ (m/3)²) ≐ 1 − (m²/9) δ.

For d cycles of GMRES(m), the improvement factor is approximately

(1 − (m²/9) δ)^d ≐ 1 − (d m²/9) δ.    (6.2)

We view a single cycle of PP(d)-GMRES(m) as a composition of two polynomials: the preconditioner polynomial from GMRES(d) on the inside and the GMRES(m) polynomial on the outside. This can be modeled by comparing shifted and scaled Chebyshev polynomials, leading to the residual improvement estimate

1 / T_m(T_{d/3}(1 + 2abv/((ξ − a)(ξ − b)(ξ − v)))) ≐ 1 / T_m(1 + δ (d/3)²) ≐ 1/(1 + δ m²d²/9) ≐ 1 − (d²m²/9) δ.    (6.3)

Comparing the improvements from (6.2) and (6.3), we conclude that polynomial preconditioned GMRES converges approximately d times faster in terms of matrix-vector products.
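The estimate is straightforward to evaluate numerically. The sketch below uses the reconstruction of f above, with made-up interval endpoints u, v, a, b for illustration:

```python
import numpy as np

# Numeric check of the Section 6 estimate for a spectrum in [u,v] U [a,b]
# with u << v < 0 < a << b (illustrative values only).
u, v, a, b = -200.0, -1.0, 1.0, 4800.0
s = a + b + v
gamma2 = (s + np.sqrt(a*a + b*b + v*v - (a*b + a*v + b*v))) / 3  # local min of h
h = lambda x: (x - a) * (x - b) * (x - v)
xi = gamma2 if u > s - 2 * gamma2 else u         # case split from the text
delta = 2 * a * b * v / h(xi)                    # f(0) = 1 + delta
m, d = 50, 50
per_cycle_plain = 1 - (m**2 / 9) * delta         # one GMRES(m) cycle, from (6.2)
per_cycle_pp = 1 - (d**2 * m**2 / 9) * delta     # one PP(d)-GMRES(m) cycle, (6.3)
print(delta, per_cycle_plain, per_cycle_pp)
```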
7. Interior Eigenvalue Problems. Computing eigenvalues and eigenvectors of large matrices is another important task in linear algebra. The standard approach is restarted Arnoldi [36, 20, 40, 26], or restarted Lanczos for symmetric problems [40, 1] (non-restarted CG [11] can be used in LOBPCG [14] for symmetric problems). Polynomial preconditioning is even more important for large eigenvalue problems than it is for linear equations. Eigenvalue problems tend to be difficult, because standard preconditioning is less effective for them and often is not used. Standard preconditioning can be incorporated into more complicated methods such as LOBPCG, Jacobi-Davidson [34] and Preconditioned Lanczos [23]; however, then only one eigenvalue can be targeted at a time [22]. Eigenvalue polynomial preconditioning using a GMRES polynomial is given in [8]. Here we focus on the case where the desired eigenvalues are in the interior of the spectrum. The polynomial for preconditioning interior eigenvalue problems is found similarly as for an indefinite system of linear equations: GMRES is applied to a shifted problem in order to generate a polynomial that targets a certain zone. Here we give another balancing method that balances on an interval, meaning that the value of the polynomial φ is equal at the two ends of the interval (see [16] for this type of balancing of a Chebyshev polynomial for symmetric eigenvalue problems). Unlike the other balancings, it does not attempt to make the derivative of the φ polynomial zero at a point. However, like Balance Method 1, it is done by adding a root. We give this Balance Method 5 for the case of balancing around the origin, so on the interval [−a, a], but this can be shifted for an interval around another spot.

Algorithm 7 Balance Method 5: Add a root for balancing on the interval [−a, a]
0. Let polynomial φ(α) of degree d correspond to π(α) with roots θ1, . . . , θd.
1. Compute β = π(a)/π(−a), then η = a(1 + β)/(β − 1). Add η to the list of polynomial roots.
2. The new polynomial of degree d + 1 is φ5(α) = 1 − π(α) · (1 − α/η).
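A sketch of Algorithm 7; the closed form for η here is derived from the equal-endpoint condition φ5(a) = φ5(−a), which reduces to π(a)(1 − a/η) = π(−a)(1 + a/η):

```python
import numpy as np

def balance_root_interval(thetas, a):
    """Balance Method 5 (Algorithm 7): choose the added root eta so that
    phi5(a) = phi5(-a) on the interval [-a, a].  Equivalent closed form:
    eta = a (pi(a) + pi(-a)) / (pi(a) - pi(-a))."""
    pi = lambda x: np.prod([1.0 - x / t for t in thetas])
    beta = pi(a) / pi(-a)
    return a * (1 + beta) / (beta - 1)
```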
We next consider two tests of computing interior eigenvalues with matrices from applications.

Example 10. The matrix Af23560 is part of the package NEP [5] (Non-Hermitian Eigenvalue Problem) and is available from SuiteSparse. The matrix is of size n = 23,560 and was developed for stability analysis of the Navier-Stokes equations. The upper left portion of Figure 7.2 has the overall spectrum, which is very complex and lies in the left half of the complex plane. We target the eigenvalues nearest -4 using Arnoldi(600,300). This is an interior eigenvalue problem, because the 300 nearest eigenvalues do not include the right-most (exterior) ones. We compare a polynomial preconditioner of degree 50 to no preconditioner. The second picture down on the left of Figure 7.2 has the preconditioned spectrum with no balancing. Note it is very indefinite. PP(50)-Arnoldi without balancing is run for 25 cycles and finds 284 Ritz pairs with residual norm below $10^{-3}$. This takes 32 minutes. The 284 Ritz values are plotted with (blue) x's in Figure 7.1. Regular restarted Arnoldi finds a similar number of Ritz pairs below $10^{-3}$ in residual norm, specifically 286, with 125 cycles. This takes 2.6 hours, five times longer, while using fewer matrix-vector products.

Fig. 7.1: Matrix Af23560 with a target of finding eigenvalues near -4. Ritz values computed by regular Arnoldi are (red) circles, while (blue) x's show values from PP(50)-Arnoldi. Polynomial preconditioning is much better at finding eigenvalues around the target.

However, the important point is that regular Arnoldi solves an easier problem, focusing more on the eigenvalues nearer the edge of the spectrum. Even if it is run longer, 467 cycles (9.7 hours) so that the first 200 residual norms are below $10^{-7}$, the eigenvalues found are still mostly to one side of the target. Figure 7.1 shows with (red) circles the 298 Ritz values from this run that have residual norms below $10^{-3}$. This shows that polynomial preconditioned Arnoldi does better at finding the eigenvalues nearest the target. If Balance Method 1 is applied to the degree 50 polynomial, the results are worse: only 252 eigenvalues are accurate to $10^{-3}$ after 25 cycles. Figure 7.2 shows how the spectrum is changed with balancing. The upper right portion has the 50 eigenvalues of A numbered in order of distance from -4. The lower left has where these move to with the preconditioned spectrum. Then the lower right has them with balancing added. The balancing does succeed in moving the real eigenvalues on the left side over to the right, so that the real portion of the spectrum looks positive definite. However, the rest of the spectrum does not move in a predictable fashion. Also, note these 50 eigenvalues are much closer together after balancing and thus harder to find. The non-balanced polynomial goes through the desired region with a slope and so does not push the eigenvalues together.

Example 11. Large, complex, non-Hermitian matrices arise in quantum chromodynamics (QCD). However, they can be made Hermitian by multiplying by the QCD γ5 matrix. The spectrum is then real and indefinite with about the same spread of eigenvalues on both sides of the origin. We compute 15 eigenvalues and eigenvectors around the origin for a matrix of size n = 3.98 million. These can be used to deflate the solution of linear equations with the conjugate gradient method [1].

Fig. 7.2: The upper left has the spectrum of Af23560 in the complex plane, and just below is the polynomial preconditioned spectrum with no balancing. The upper right of the figure has eigenvalues of A numbered in order of distance from -4. The lower left has where these move for the preconditioned spectrum with polynomial of degree 50. Then the lower right has balancing added.
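The transformation in Example 11 can be checked on a toy matrix. The sketch below builds a small matrix with the γ5-Hermiticity property that lattice Dirac operators satisfy (γ5 M γ5 = M†) and verifies that γ5 M is Hermitian with a real, indefinite spectrum; the sizes and the random construction are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
sites = 3
n = 4 * sites
g5 = np.kron(np.eye(sites), np.diag([1.0, 1.0, -1.0, -1.0]))  # site-wise gamma_5

# Toy non-Hermitian matrix with the property g5 M g5 = M^dagger
H = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (H + H.conj().T) / 2          # Hermitian ingredient
M = g5 @ H                        # stand-in for a gamma_5-Hermitian QCD matrix

print(np.allclose(g5 @ M @ g5, M.conj().T))  # gamma_5-Hermiticity holds
A = g5 @ M                                   # the transformed operator
print(np.allclose(A, A.conj().T))            # True: A is Hermitian
evals = np.linalg.eigvalsh(A)
print(evals.min() < 0 < evals.max())         # real and indefinite spectrum
```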
Table 7.2 compares restarted Arnoldi(60,25) with and without polynomial preconditioning. The polynomial is balanced with Balance Method 1. For this test, it does not make much difference, but we have seen situations where balancing is needed for QCD problems. The polynomial preconditioning reduces the time dramatically and makes using eigenvalue deflation much more practical for QCD calculations.

Table 7.2: A quenched QCD matrix of dimension n = 3.98 million from a 24^4 lattice at zero quark mass. The 15 eigenvalues nearest the origin are computed with Arnoldi(60,25), stopped when 15 eigenvalues reach residual norms below $10^{-8}$.

degree of polynomial | time in hours
no polynomial        | 149
d = 25+1             | 7.82
d = 100+1            | 5.49

8. Conclusion. Polynomial preconditioning is especially important for difficult indefinite linear equations and interior eigenvalues. Standard preconditioning is generally less effective for such indefinite problems, so polynomial preconditioning can assist. The work in this paper makes polynomial preconditioning more practical for general matrices.

Polynomial preconditioning faces special challenges for indefinite problems. These are addressed here, first with methods to balance the polynomial with the goal of creating a definite spectrum. Then an approach is given for making the polynomial stable for indefinite problems. In this, the previous stability method of adding extra copies of roots is applied only to one side of the spectrum. For the other side, corrections are given using eigenvalue deflation and using GMRES iterations.

Looking forward, the parallelizability of polynomial preconditioners makes them well-suited for modern high-performance computing environments, including multicore processors and GPUs. In some cases, polynomial preconditioners may even replace traditional methods, because polynomials are easier to parallelize than incomplete factorizations. The effects of PP-GMRES on GPU architectures should be further investigated to fully understand its potential for accelerating large computations. Future work should also focus on further researching the effectiveness of polynomial preconditioning for interior eigenvalue problems and comparing performance across a wider range of problems.

Acknowledgments. The authors would like to thank Mark Embree for providing the Hatano-Nelson matrix in Example 5.

REFERENCES

[1] A. M. Abdel-Rehim, R. B. Morgan, D. A. Nicely, and W. Wilcox, Deflated and restarted symmetric Lanczos methods for eigenvalues and linear equations with multiple right-hand sides, SIAM J. Sci. Comput., 32 (2010), pp. 129–149.
[2] A. M. Abdel-Rehim, R. B. Morgan, and W. Wilcox, Improved seed methods for symmetric positive definite linear equations with multiple right-hand sides, Numer. Linear Algebra Appl., 21 (2014), pp. 453–471.
[3] S. F. Ashby, Polynomial preconditioning for conjugate gradient methods, PhD thesis, University of Illinois at Urbana-Champaign, 1987.
[4] S. F. Ashby, T. A. Manteuffel, and J. S. Otto, A comparison of adaptive Chebyshev and least squares polynomial preconditioning for conjugate gradient methods, SIAM J. Sci. Statist. Comput., 13 (1992), pp. 1–29.
[5] Z. Bai, D. Day, J. Demmel, and J. Dongarra, A test matrix collection for non-Hermitian eigenvalue problems, Tech. Rep. CS-97-355, 1997.
[6] Z. Bai, D. Hu, and L. Reichel, A Newton basis GMRES implementation, IMA J. Numer. Anal., 14 (1994), pp. 563–581.
[7] D. Calvetti, B. Lewis, and L. Reichel, On the choice of subspace for iterative methods for linear discrete ill-posed problems, Int. J. Appl. Math. Comput. Sci., 11 (2001), pp. 1069–1092.
[8] M. Embree, J. A. Loe, and R. B. Morgan, Polynomial preconditioned Arnoldi with stability control, SIAM J. Sci. Comput., 43 (2021), pp. A1–A25.
[9] B. Fischer and L. Reichel, A stable Richardson iteration method for complex linear systems, Numer. Math., 54 (1988), pp. 225–241.
[10] N. Hatano and D. R. Nelson, Localization transitions in non-Hermitian quantum mechanics, Phys. Rev. Lett., 77 (1996), pp. 570–573.
[11] M. R. Hestenes and E. Stiefel, Methods of conjugate gradients for solving linear systems, J. Res. Nat. Bur. Standards, 49 (1952), pp. 409–436.
[12] W. Joubert, On the convergence behavior of the restarted GMRES algorithm for solving nonsymmetric linear systems, Numer. Linear Algebra Appl., 1 (1994), pp. 427–447.
[13] W. Joubert, A robust GMRES-based adaptive polynomial preconditioning algorithm for nonsymmetric linear systems, SIAM J. Sci. Comput., 15 (1994), pp. 427–439.
[14] A. V. Knyazev and K. Neymeyr, Efficient solution of symmetric eigenvalue problems using multigrid preconditioners in the locally optimal block conjugate gradient method, Elec. Trans. Numer. Anal., 15 (2003), pp. 38–55.
[15] C. Lanczos, Chebyshev polynomials in the solution of large-scale linear systems, Proc. ACM, (1952), pp. 124–133.
[16] R. Li, Y. Xi, E. Vecharynski, C. Yang, and Y. Saad, A thick-restart Lanczos algorithm with polynomial filtering for Hermitian eigenvalue problems, SIAM J. Sci. Comput., 38 (2016), pp. A2512–A2534.
[17] Q. Liu, R. B. Morgan, and W. Wilcox, Polynomial preconditioned GMRES and GMRES-DR, SIAM J. Sci. Comput., 37 (2015), pp. S407–S428.
[18] J. A. Loe and R. B. Morgan, Toward efficient polynomial preconditioning for GMRES, Numer. Linear Algebra Appl., 29 (2021), pp. 1–21.
[19] R. B. Morgan, Computing interior eigenvalues of large matrices, Linear Algebra Appl., 154–156 (1991), pp. 289–309.
[20] R. B. Morgan, On restarting the Arnoldi method for large nonsymmetric eigenvalue problems, Math. Comp., 65 (1996), pp. 1213–1230.
[21] R. B. Morgan, Implicitly restarted GMRES and Arnoldi methods for nonsymmetric systems of equations, SIAM J. Matrix Anal. Appl., 21 (2000), pp. 1112–1135.
[22] R. B. Morgan, Preconditioning eigenvalues and some comparison of solvers, J. Comput. Appl. Math., 123 (2000), pp. 101–115.
[23] R. B. Morgan and D. S. Scott, Preconditioning the Lanczos algorithm for sparse symmetric eigenvalue problems, SIAM J. Sci. Comput., 14 (1993), pp. 585–593.
[24] R. B. Morgan, T. Whyte, W. Wilcox, and Z. Yang, Two-grid deflated Krylov methods for linear equations, Elec. Trans. Numer. Anal., 63 (2025), pp. 1–21.
[25] R. B. Morgan and M. Zeng, Harmonic projection methods for large non-symmetric eigenvalue problems, Numer. Linear Algebra Appl., 5 (1998), pp. 33–55.
[26] R. B. Morgan and M. Zeng, A harmonic restarted Arnoldi algorithm for calculating eigenvalues and determining multiplicity, Linear Algebra Appl., 415 (2006), pp. 96–113.
[27] N. M. Nachtigal, L. Reichel, and L. N. Trefethen, A hybrid GMRES algorithm for nonsymmetric linear systems, SIAM J. Matrix Anal. Appl., 13 (1992), pp. 796–825.
[28] C. C. Paige, B. N. Parlett, and H. A. van der Vorst, Approximate solutions and eigenvalue bounds from Krylov subspaces, Numer. Linear Algebra Appl., 2 (1995), pp. 115–133.
[29] H. Rutishauser, Theory of gradient methods, in Refined Iterative Methods for Computation of the Solution and the Eigenvalues of Self-Adjoint Boundary Value Problems, M. Engeli, T. Ginsburg, H. Rutishauser, and E. Stiefel, eds., Birkhäuser, Basel, 1959, pp. 24–49.
[30] Y. Saad, Chebyshev acceleration techniques for solving large nonsymmetric eigenvalue problems, Math. Comp., 42 (1984), pp. 567–588.
[31] Y. Saad, Least squares polynomials in the complex plane and their use for solving sparse nonsymmetric linear systems, SIAM J. Numer. Anal., 24 (1987), pp. 155–169.
[32] Y. Saad, Iterative Methods for Sparse Linear Systems, 2nd ed., SIAM, Philadelphia, PA, 2003.
[33] Y. Saad and M. H. Schultz, GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Stat. Comput., 7 (1986), pp. 856–869.
[34] G. L. G. Sleijpen and H. A. van der Vorst, A Jacobi-Davidson iteration method for linear eigenvalue problems, SIAM J. Matrix Anal. Appl., 17 (1996), pp. 401–425.
[35] D. C. Smolarski and P. E. Saylor, An optimal iterative method for solving any linear system with a square matrix, BIT, 28 (1988), pp. 163–178.
[36] D. C. Sorensen, Implicit application of polynomial filters in a k-step Arnoldi method, SIAM J. Matrix Anal. Appl., 13 (1992), pp. 357–385.
[37] E. L. Stiefel, Kernel polynomials in linear algebra and their numerical applications, Nat. Bur. Standards, Appl. Math. Ser., 49 (1958), pp. 1–22.
[38] H. K. Thornquist, Fixed-Polynomial Approximate Spectral Transformations for Preconditioning the Eigenvalue Problem, PhD thesis, Rice University, 2006. Technical report TR06-05.
[39] M. B. van Gijzen, A polynomial preconditioner for the GMRES algorithm, J. Comput. Appl. Math., 59 (1995), pp. 91–107.
[40] K. Wu and H. Simon, Thick-restart Lanczos method for symmetric eigenvalue problems, SIAM J. Matrix Anal. Appl., 22 (2000), pp. 602–616.
[41] X. Ye, Y. Xi, and Y. Saad, Proxy-GMRES: Preconditioning via GMRES in polynomial space, SIAM J. Sci. Comput., 42 (2021), pp. 1248–1267.
arXiv:2510.14818v1 [cs.CY] 16 Oct 2025
Trends of Pink Slime Journalism Advertisement Expenditure and Spread on Facebook from 2019-2024

Christine Sowa Lepird, Lynnette Hui Xian Ng, and Kathleen M. Carley
Carnegie Mellon University, USA
csowa@andrew.cmu.edu

Abstract. Pink slime journalism is a practice where news outlets publish low-quality or inflammatory partisan articles while claiming to be local news networks. This paper examines the spread of pink slime sites on Facebook using public posts from Pages and Groups. We evaluate the trends of sharing pink slime sites on Facebook and patterns in the advertisements purchased by the parent organizations of the pink slime news networks. Our analysis discovers that while the number of pink slime posts on Facebook pages has decreased over the years, advertising dollars have increased, and the increase in advertising dollars influences an increase in Facebook group posts. Further, advertising expenditure increases during election years, but contentious topics are still discussed during non-election years. By illustrating the patterns and themes from the US election years of 2020, 2022, and 2024, this research offers insights into potentially dangerous journalism tactics and provides predictions for future US Presidential Elections.

Keywords: Pink slime journalism, local news, Facebook ads.

1 Introduction

Pink slime journalism is a practice in which organizations masquerade as local news outlets and publish low-quality, hyperpartisan or inflammatory articles [Moore et al., 2023]. These organizations typically have no ties to the local community and are owned by a parent organization that operates a network of news outlets to push "local" news targeting multiple states. This practice is common in America, where the term "local" represents a region in the United States the size of a state or smaller. Pink slime news sites are characterized by algorithmically generated websites with visually similar content, layouts and origins [Burton and Koehorst, 2020]. The same message is pushed to multiple states, while its tone is tweaked to appeal to the local community.

To exploit the trust in local reporting, parent organizations like Metric Media have created almost 1,000 local news sites [Bengani, 2021]. Local reporting remains a highly trusted source of news, although there is a dearth of authentic news reporting. The creators of pink slime news networks take advantage of this trust and publish low-quality news on their websites, often on politically related and partisan issues. Besides publishing stories on news websites, these pink slime journalism networks have created social media accounts through which they share the URL (Uniform Resource Locator) links of pink slime news on social media platforms.

In this article, we study the activity and patterns of sharing of pink slime websites on Facebook. Facebook was chosen because it is a social media platform that is commonly used for sharing URLs to pink slime news sites [Moore et al., 2023]. Publishing partisan news opens the possibility of manipulating public opinion, all the more so if these pink slime news networks are heavily funded by advertising money. This leads us to our first research question: RQ1: How much money was spent by pink slime organizations to target which populations?
Through this research question, we look at the advertising dollars spent in total on Facebook advertisements (ads) by the larger parent organizations, and the breakdown of the targeted ad demographics. Through textual analysis methods, we examine the posts on the Facebook groups that shared pink slime news URLs. This investigation answers our second research question: RQ2: How did the ad spend influence online conversations?

Pink slime news organizations are typically active throughout the year, but since they primarily share news with a political slant, we examine one last research question: RQ3: How have pink slime tactics differed over the course of a political event, in particular the United States (US) elections?

With this study of pink slime news URLs on Facebook, we examine targeting patterns of low-quality local news and the advertising money poured into the pink slime journalism ecosystem. Our studies provide insights into the evolving role of journalism in the digital age, where news websites and social media platforms overlap in their dissemination and broadcasting functions.

1.1 Contributions

Although there are studies pointing to the use of generated news sites for political issues [Burton and Koehorst, 2020], the success of pink slime journalism on social media platforms is, overall, understudied. In this study, we make use of content and geographical analysis to understand how the production strategies of pink slime organizations have evolved on Facebook. We study the production and distribution of pink slime URLs across three key political events in the United States: (1) the 2020 Presidential Elections, (2) the 2024 Presidential Elections, and (3) the 2022 Midterm Elections. In this study, we make the following contributions:

1. We analyze the advertisement expenditure of pink slime news organizations per state across political election timelines, allowing for projection of the advertising dollars poured into political campaigns.

2. To facilitate our analysis, we collected a dataset from Facebook containing posts that have pink slime URLs and advertising expenditure from the 2020 to 2024 United States (US) Elections. This dataset provides a representation of pink slime journalism on Facebook in the context of the US Elections, thereby facilitating future studies of this phenomenon.

3. Finally, we analyze the messaging techniques of pink slime news organizations per state across political election timelines.

2 Background

2.1 Local News Journalism

Local news agencies have faced many challenges in the 21st century. The newspaper industry, which is predominantly local newspapers, has experienced a 67% decline in advertising revenue since its peak in 2005 [Pew, 2019]. Between 2008 and 2018, newsrooms experienced a 25% reduction in employment, with newspapers seeing a much higher 47% loss of labor [Hendrickson, 2019]. Since 2004, over 2,000 local newspapers have closed, resulting in 200 US counties having no local newspaper. Americans have higher trust in local news than in national news organizations [Gottfried and Liedke, 2021].
71% of US adults (falsely) believe their local news outlets are doing well financially, despite only 14% of these adults having donated to these outlets in the past year [Pew, 2019]. To cope with the falling revenue, some regional newspapers have been acquired by larger corporations, forming corporate-owned local news sites, which thus have to publish content to be shared across multiple locales in a region [LeBrun et al., 2022, Toff and Mathews, 2021]. These acquisitions result in decreased local content publication and increased national news coverage. Some news outlets, like the sports-focused The Athletic, thrived by implementing a national framework to report on local events [Ferrucci and Perreault, 2022]. Further, these news outlets suffer from a dearth of local reporters to write stories, so newsrooms have turned to Artificial Intelligence (AI) to assist in writing news stories in various fashions. While some examples of AI usage are designed to automate tasks (like the Los Angeles Times' QuakeBot, which automatically drafts an article if the U.S. Geological Survey detects an earthquake), other usages can be more sinister and fully generate "local" stories with no local context [Quéré and Jakesch, 2022].

2.2 Pink Slime Journalism

In this age of digital media, digital journalism production evolved to include pink slime journalism, which was initially described in 2005 as automated news reporting [Cohen, 2015]. Given the particular financial and labor challenges the local newsroom has faced since then, five organizations have adopted this genre of news with the goal of creating "local" news using algorithms and a handful of non-local reporters. Media companies face declining profits and expanding demands for content; thus pink slime journalism relies on digital tools to generate content and publish more for less, reducing the quality of the news [Moore et al., 2023].

The largest of the most prominent organizations, Metric Media, controls vast swaths of pink slime sites that do not appear to have foreign ties [Bengani, 2021], but are currently financed by political candidates and political action committees with the hope of swaying election results. When speaking of threats to election integrity, Alex Stamos, director of the Stanford Internet Observatory, remarked "The issue [...] is not going to be foreign interference. It's much more likely that legitimate domestic actors possibly operating under their own name — with LLCs or corporations with very shady funding that are not required to disclose what that funding is — are going to dominate the online conversation about the outcome of the election" [Murphy and Venkataramakrishnan, 2020].

Metric Media has been shown to algorithmically generate most of its content, and it prioritizes the publication of state and national partisan content at the expense of local news [Royal and Napoli, 2022]. One such example of algorithmically generated content is the use of an Application Programming Interface (API) to display the local weather. The API is a piece of computer script that retrieves the local weather from a weather source given only a zip code. The local weather is the news topic which American adults rate as the most important local news topic for their daily lives [Pew, 2019], and it can be easily generated via computer programming.
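To illustrate the mechanism, a minimal sketch of such a zip-code-driven weather story generator follows; the endpoint and response fields are hypothetical placeholders, not the actual service any pink slime network uses.

```python
import requests

WEATHER_API = "https://api.example.com/v1/weather"  # hypothetical weather source

def weather_blurb(zip_code: str) -> str:
    """Generate a 'local' weather sentence from nothing but a zip code."""
    resp = requests.get(WEATHER_API, params={"zip": zip_code}, timeout=10)
    resp.raise_for_status()
    data = resp.json()  # assumed fields: city, conditions, high_f, low_f
    return (f"{data['city']} forecast: {data['conditions']}, "
            f"high of {data['high_f']} F and low of {data['low_f']} F.")

# One template, any community: the same script "localizes" to every zip code
print(weather_blurb("15213"))
```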
Further, research from Stanford University analyzed news consumption of pink slime sites and found that at least 3.7% of American adults visited at least one pink slime site during the 2020 Presidential election period [Moore et al., 2023]. The frequency with which pink slime sites are visited reflects the extent to which these sites can have an effect on popular opinion, especially during crucial events like elections. Social media posts by these pink slime websites have been observed to implement information maneuvers against local and state elections. In contrast, authentic local news sites focused on the national elections and state figureheads [Lepird and Carley, 2023].

2.3 Spread of News Through Facebook

Around 70% of Americans get their news from the social media platform Facebook [Gramlich, 2019]. 15% of adults cited social media as their preferred platform for reading local news, while 13% prefer print newspapers [Pew, 2019]. Those who prefer reading their local news via social media are less likely to be closely following the news stories than those who consume local news via television or print [Pew, 2019]. Social media platforms like Facebook have been a serious threat to the traditional local news publishing industry. Local news organizations previously held exclusive access to their readers; however, in today's journalism climate, Facebook knows the intended audience of news websites through its micro-targeting tactics [Hendrickson, 2019]. When local news organizations post to Facebook, they have a greater incentive to optimize for engagement metrics. Therefore, these organizations post content pertaining to national news stories as opposed to local stories [Toff and Mathews, 2021]. Despite the scale of news advertisements on Facebook, not all of this news comes from quality news sources: 15% of referrals to fake news sites come from Facebook [Guess et al., 2020], and 17.7% of visits to pink slime sites are referred by Facebook [Moore et al., 2023].

Many local news organizations have official pages on Facebook to share their stories with their social media followers. During the COVID-19 pandemic, researchers analyzed the posts of over 1,000 local news sites to Facebook Pages and found that the engagement on these posts rose proportionately with the size of the population the local news organization was targeting [Le Quéré et al., 2022]. In general, US citizens rate posts from local Facebook groups as more trustworthy than those from non-local Facebook groups. Posts from Facebook pages of local news are perceived to be more interesting and more trustworthy to local citizens as compared to posts from local neighborhood Facebook groups [Aubin Le Quéré et al., 2022]. Much like authentic local news outlets, many of the known sources of pink slime have associated social media accounts on platforms like Facebook to amplify the spread of their messaging to the community. The names of these pink slime sites frequently contain the target community in the domain name; examples are The Bucks County Standard, North Alaska News, the Michigan Star, and Pensacola Times. Per Mihailidis and Foster [2021], there is serious reason to be concerned about neighborhood Facebook groups: the platform makes it difficult to discern who owns a group. Even if a group advertises itself as apolitical, it may still have a political slant.
This results in further political manipulation, as Mihailidis and Foster [2021] express: "A fractured local media ecosystem, especially without the investigative power once shared by healthy local newspaper competition, has left local-and-state politics open to further manipulation by outsiders. Local and state governments draw considerable attention because they hold more power than the average citizen acknowledges".

Local news organizations boost their presence on social media not only by frequent posting but also by advertisements. Meta allows companies to purchase advertisements on Facebook, so newsrooms can have their stories promoted to their intended audiences while micro-targeting the desired age, locations and gender of the readership. During the 2018 US Midterm elections, Haenschen [2022] ran ads in Texas to examine whether such advertisements affect voter turnout; only one set of targeted ads, focusing on abortion rights and women's healthcare in a competitive district, was shown to significantly increase turnout.

In 2018, Meta launched the Facebook Ad Library for increased transparency on the organizations that are running ads on Facebook. This library provides information on how much money the organizations are spending and what messages they are promoting¹. This resource has allowed researchers to analyze the spread of advertising of posts with false information and political posts, observing that micro-targeted ads reduced the likelihood of persuading Democrat respondents to vote for Democrat candidates [Liberini et al., 2023, Mejova and Kalimeri, 2020].

3 Data Collection

Facebook is the social media platform with the plurality of referrals to pink slime sites [Moore et al., 2023]; hence this paper studies the spread of pink slime sites on Facebook. We study how the sites share pink slime news through both organic and paid means. Two datasets were collected with respect to pink slime domains: (1) public posts on Facebook groups and Facebook pages (termed the "Facebook posts dataset"), and (2) the advertisement expenditure (ad spend) per organization (termed the "Advertisement expenditure dataset").

Facebook posts dataset. We collected this data using the Python CrowdTangle API². For each pink slime domain listed in Bengani [2022], we used the API to collect all the posts linking to it from public Facebook groups and pages from 2019 until August 2024, which was when the CrowdTangle platform was shut down. Overall, this data collection yielded 1,108,059 posts from pages and 43,357 posts from groups. In total, we collected posts from 12,644 Facebook pages, linking to 531 unique pink slime domains, and posts across 8,528 Facebook groups, linking to 198 unique pink slime domains. From this Facebook posts dataset, we extracted and quantified the number of pink slime domains per organization.

1 https://www.facebook.com/ads/library
2 https://pypi.org/project/PyCrowdTangle/
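A minimal sketch of this collection loop is shown below. It is written against CrowdTangle's link-search endpoint (retired along with the platform in August 2024) rather than the PyCrowdTangle wrapper; the token, dates, domain, and response field names are placeholders to be checked against the archived API documentation.

```python
import requests

CT_LINKS = "https://api.crowdtangle.com/links"  # link-search endpoint (retired)
TOKEN = "YOUR_CROWDTANGLE_TOKEN"                # placeholder

def posts_linking_to(domain):
    """Fetch public Facebook posts that link to one pink slime domain."""
    params = {"token": TOKEN, "link": domain, "platforms": "facebook",
              "startDate": "2019-01-01", "endDate": "2024-08-01", "count": 100}
    resp = requests.get(CT_LINKS, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json().get("result", {}).get("posts", [])  # assumed response shape

# Loop over the Bengani [2022] domain list; the domain here is illustrative
for post in posts_linking_to("example-pink-slime-site.com"):
    account = post.get("account", {})            # assumed field names
    print(account.get("accountType"), post.get("postUrl"))
```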
Table 1 shows the breakdown of news domains present in this dataset. The largest organizations, Metric Media and Star, are known for their politically conservative leanings, while Local Report, Courier, and American Independent have liberal political leanings.

Table 1: The number of pink slime sites shared on Facebook in this study, by parent organization.

Parent Organization  | Sub-Specialty   | Websites
Metric Media         | Metric Media    | 398
                     | Metro Business  | 51
                     | LGIS            | 34
                     | Record          | 11
                     | Franklin Archer | 10
                     | Local Labs      | 8
Star                 |                 | 11
Local Report         |                 | 9
Courier              |                 | 9
American Independent |                 | 7

It should be noted that Facebook page posts in this dataset are dominated by posts made by official Facebook pages dedicated to sharing news from individual pink slime sites. For that reason, the page data can be seen as a glimpse into the (free) marketing efforts of the organizations. The Facebook groups that were not operated by the organizations yet shared news from these pink slime websites indicate an organic news spread within local communities: these groups are not seeded with the websites by the official community, and their sharing patterns indicate the natural propagation of pink slime sites. These groups had a median subscriber count of 1,075 group members and frequently had local community names in their group names, like 'Republicans of the Palm Beaches', 'Progressive Democrats of North Carolina', 'Michigan Republicans/Democrats Debate Group', 'Middle Georgia Young Democrats', 'Philadelphia 8th Ward GOP', and 'Republican Liberty Caucus of Indiana.'

Advertisement expenditure dataset. This dataset was collected using the Facebook Ad Library³. It consists of the posts that pink slime parent organizations paid to promote on Facebook pages, the amount of money known pink slime organizations spent on advertisement posts per state, and the impressions garnered. It was collected by searching for political ads from the parent organizations: Courier's ("Cardinal & Pine", "Courier Newsroom", "Courier Newsroom, Inc.", "Floricuas", "Granite Post", "Iowa Starting Line", "The 'Gander Newsroom", "The Copper Courier", "The Keystone", "The Nevadan", "UpNorthNews", "The Keystone Courier"), Metric Media's ("Metric Media LLC", "Franklin Archer", "Local Government Information Services", "The Record"), and American Independent's ("American Independent Media"). No ads were found from the Star and Local Report parent organizations. We collected 14,581 advertisement posts shared from 310 Facebook pages, linking to 383 unique domains, the majority of which are direct pink slime domains.

3 https://www.facebook.com/ads/library
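The Ad Library also exposes a programmatic interface alongside the web search used here. The sketch below outlines what pulling spend and impressions for one parent organization could look like; the API version, token, and field list are assumptions to verify against Meta's current documentation.

```python
import requests

ADS_ARCHIVE = "https://graph.facebook.com/v19.0/ads_archive"  # version assumed
TOKEN = "YOUR_META_ACCESS_TOKEN"                              # placeholder

params = {
    "access_token": TOKEN,
    "search_terms": "Metric Media LLC",      # one parent-organization page name
    "ad_type": "POLITICAL_AND_ISSUE_ADS",
    "ad_reached_countries": "['US']",
    "fields": "page_name,spend,impressions,delivery_by_region",
    "limit": 100,
}
resp = requests.get(ADS_ARCHIVE, params=params, timeout=30)
resp.raise_for_status()
for ad in resp.json().get("data", []):
    # spend and impressions come back as ranges (lower/upper bounds)
    print(ad.get("page_name"), ad.get("spend"), ad.get("impressions"))
```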
4 RQ1: How much money was spent by pink slime organizations to target which populations?

Our first research question examines the advertising expenditure (ad spend) of pink slime organizations and their targeted populations. We make use of the Advertisement expenditure dataset to perform this investigation. Table 2 shows the aggregated ad spend of all the organizations from 2018 through 2024 and the total number of impressions those ads received. In terms of ad spending across the years, the election years of 2020 and 2024 saw spikes in ad spend, with the election year of 2022 seeing the next highest ad spending. The total impressions on advertisements were highest during the election years of 2020, 2022, and 2024, indicating the appeal of these pink slime organizations' political ads during those years.

Table 2: Pink Slime Facebook Ads Over Time

Year | Number of Ads | Total Impressions | Total Ad Spend
2018 | 374           | 1,328,813         | $21,713
2019 | 1,367         | 16,091,317        | $200,067
2020 | 2,774         | 105,847,118       | $1,118,763
2021 | 1,735         | 14,787,133        | $158,883
2022 | 3,696         | 43,776,653        | $731,752
2023 | 1,404         | 18,664,300        | $544,398
2024 | 3,231         | 328,211,953       | $7,952,185

With the advertising expenditure segregated by state in Figure 1, we analyze the expenditure per state. We observe that three of the five largest known pink slime organizations purchased Facebook ads throughout the time period we examined, and Metric Media was the only organization to consistently purchase ads across all time periods. Courier purchased ads through 2020 then stopped doing so until December 2023, and American Independent purchased ads from 2022 to 2024. All of the organizations that purchased ads targeted Pennsylvania, a key swing state that determined Joe Biden's 2020 victory over Donald Trump, as well as the swing states of Michigan, Wisconsin, and Georgia. The top ten states by ad spend (from highest to lowest) are North Carolina, Pennsylvania, Arizona, Michigan, Nevada, Wisconsin, Ohio, Georgia, Texas, and West Virginia. All of these states, with the exception of Ohio and West Virginia, were among those with the closest 2020 Presidential Election spreads. Iowa had the 15th highest ad spend due to its unique position as the first state in the country to hold an electoral event, with its caucus every four years. While swing states are an important target during election years, other states' significance was the result of isolated events. For example, Courier spent $16,999 on ads in Iowa in 2020. During the 2020 elections, Trump won Iowa against Biden by 8.2 percentage points. Given the small difference, it is likely that Courier, a left-leaning organization, poured more resources into boosting its chances in Iowa.

Fig. 1: Total Pink Slime Advertising Expenditure by State During Election Years

When we assess the differences in the demographics of those targeted by the pink slime Facebook ads (Table 3), we notice gender and age shifts between the organizations. Two politically left-leaning sites, American Independent and Courier Newsroom, target women more than men in their ads, and the majority of their impressions come from individuals under 55. However, Metric Media, a right-leaning organization, displays its ads to more men than women, and the majority of their impressions come from Facebook users who are over the age of 55. It seems that left-leaning sites generally target younger women, while right-leaning sites generally target older men.

Table 3: Breakdown of targeted ad demographics by gender and age for pink slime organizations

Organization         | Female | Male  | 18-24 | 25-34 | 35-44 | 45-54 | 55-64 | 65+
American Independent | 69.6%  | 29.8% | 4.5%  | 18.8% | 22.6% | 20.2% | 17.7% | 16.2%
Courier Newsroom     | 57.2%  | 40.2% | 14.3% | 27.0% | 20.3% | 14.7% | 11.4% | 10.5%
Metric Media         | 43.4%  | 49.8% | 5.4%  | 11.3% | 10.3% | 14.3% | 22.5% | 30.3%

Finally, we examined whether state variables have an effect on ad spend. We performed Pearson correlations of several state variables against the 2022 and 2024 pink slime ad spend, including the 2020 voter spread⁴, 2020 GDP⁵, percentage of the population living in rural areas⁶, percentage of the population with a bachelor's degree⁷, and so forth. We used the statistics retrieved from the 2020 data, envisioning that organizations only had access to that information in 2022 and 2024. Table 4 presents the Pearson correlation coefficients between the state variables and the ad spend, and the corresponding p-values. We find that only the 2020 Voter Spread is significantly correlated with ad spend at the p < 0.05 level.

Table 4: State variables and their Pearson correlation to 2022 and 2024 pink slime ad spend

Variable                                    | 2022 Correlation | 2022 Significance | 2024 Correlation | 2024 Significance
2020 Voter Spread                           | -0.52  | 0.0040 | -0.52  | 0.0072
Cities Over 100k Population                 | 0.032  | 0.88   | -0.12  | 0.57
Percent of Population Living in Rural Areas | -0.085 | 0.66   | -0.041 | 0.84
Percent of Population with a Bachelors Degree | -0.055 | 0.78 | -0.12  | 0.65
2020 GDP                                    | 0.085  | 0.66   | -0.087 | 0.68
Median Age of State                         | -0.12  | 0.53   | 0.12   | 0.58
March 2020 Governor's Party                 | -0.15  | 0.44   | 0.34   | 0.093
Electoral College Votes                     | 0.15   | 0.45   | -0.35  | 0.87

4 https://www.presidency.ucsb.edu/statistics/elections/2020
5 https://www.bea.gov/
6 https://www.census.gov/en.html
7 https://fred.stlouisfed.org/release/tables?rid=330&eid=391444&od=2020-01-01
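The correlation computation itself is standard; a minimal sketch with placeholder arrays (not the study's actual state-level data) follows.

```python
import numpy as np
from scipy.stats import pearsonr

# Placeholder vectors, one entry per state (illustrative values only)
voter_spread_2020 = np.array([1.2, 0.6, 8.2, 2.8, 10.1, 0.3])   # pct.-point margins
ad_spend_2022 = np.array([91_000, 124_000, 7_000, 63_000, 2_000, 150_000])  # dollars

r, p = pearsonr(voter_spread_2020, ad_spend_2022)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")  # cf. Table 4: r = -0.52, p = 0.0040
```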
Fig. 2: Advertising Expenditure by Parent Organization Over Time

Fig. 3: Facebook Pages and Group Posts Over Time. (a) Page posts linking to pink slime sites by organization (through August 2024). (b) Group posts linking to pink slime sites by organization (through August 2024).

Fig. 4: Facebook Pages and Groups Posts by State

5 RQ2: How did the ad spend influence online conversations?

For this research question, we combined the advertising expenditure data with post content and examined the effects of ad spend on online conversations. We begin by analyzing themes within the online conversations through a temporal lens, breaking down the content of the Facebook ad messaging across the years into word clouds. To do so, we first pre-processed the text of the messaging to remove stopwords and URLs, before using Python's wordcloud package⁸ to formulate the word cloud of the top 100 words, sized by frequency of appearance. Figure 5 illustrates the changing focus of topics in election and off-election years, and Figure 6 shows how these words changed between the two elections, generated using the word shift graphs of Gallagher et al. [2021].

8 https://pypi.org/project/wordcloud/
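A minimal sketch of this preprocessing and rendering step follows, using two ad titles from the dataset as stand-ins for the full ad-message corpus.

```python
import re
from wordcloud import WordCloud, STOPWORDS

# Two ad titles from the dataset stand in for the full corpus; the URL is illustrative
ads = [
    "'Inflation has shot up a staggering 13.2%' since Biden took office, "
    "Arizona's CPI at 13% https://example.com/story",
    "3 in 5 Americans concerned about housing affordability, "
    "Wisconsin's average rent up 17%",
]
text = " ".join(re.sub(r"https?://\S+", "", ad) for ad in ads)  # strip URLs

wc = WordCloud(max_words=100,        # keep the top 100 words, sized by frequency
               stopwords=STOPWORDS,  # drop stopwords
               width=800, height=400,
               collocations=False).generate(text)
wc.to_file("pink_slime_ads_wordcloud.png")
```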
Throughout all of the years, the most targeted states ('Texas', 'Michigan', 'Georgia', 'Arizona', 'Florida', 'Wisconsin', and more) remain top terms. One key tactic is to write ads with the same message while switching out the name of the state for the one being targeted in the ad. For example, some of the ads run in 2022 had the following titles: "'Inflation has shot up a staggering 13.2%' since Biden took office, Arizona's CPI at 13%", "'Inflation has shot up a staggering 13.2%' since Biden took office, Michigan's CPI at 8.1%", "3 in 5 Americans concerned about housing affordability, North Carolina's average rent up 30%", and "3 in 5 Americans concerned about housing affordability, Wisconsin's average rent up 17%."

During election years, ad spending increases drastically, and the conversations naturally turn political. While the presidential candidates' names were not at the forefront of the 2020 conversation, there was a focus on the phrase 'Catholic'. The ads containing references to 'Catholic' were mostly run in September and October 2020 by the Metric Media organization, leading into the appointment of Catholic Supreme Court Justice Amy Coney Barrett. The top two of these ads, garnering 275,000 and 112,000 impressions, respectively, were titled 'President Trump addresses Catholics directly' and 'Catholic Vote: Biden's anti-school choice stance should worry WI Catholic school parents'; the 'Catholic' phrase was indirectly used to support President Trump's re-election.

During the midterms in 2022, President Biden was the top phrase, with secondary attention paid to key economic issues like 'inflation', 'gas', and 'prices'. Three of the top four ads (by impressions) in 2022 were purchased by the left-leaning American Independent, criticizing Georgia senate candidate Herschel Walker and garnering over 3 million impressions from 9 distinct ads. The right-leaning Metric Media, however, garnered over 350,000 total impressions with the ad title 'As Pennsylvanians receive fourth stimulus check, Pigott points out negative real wage growth: "Joe Biden is the pay cut president"'. Much like the 2020 efforts to turn the Catholic narrative into praise for President Trump, the 2022 tactic was to use negative economic news to undermine President Biden (and the Democratic party) ahead of the midterm election.

Ad expenditure during the years between elections drastically diminished, and the discourse focused more on 'court'. These ads highlight state supreme and high courts, and are not necessarily political or partisan in nature. For example, 'Appeals court vacates ruling against Parkways Authority over Turnpike toll fees' received over 22,000 impressions in 2023. This strategy may serve to establish the Facebook Pages sharing the news as nonpartisan, trustworthy local news outlets while they are not actively trying to push a political message, and to keep the organizations active and visible to the Facebook audience even during the down time.

Finally, to see how the ad spend influenced organic conversation, we generate Figure 7 to compare the amount of money spent promoting a given pink slime domain via Facebook ads with the number of times that domain appeared in a Facebook group. A linear trend line (R² = 0.35) indicates a weak positive correlation between the two measures, ad spend and organic conversation.
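Fitting such a trend line is a one-liner; a sketch with illustrative per-domain values (not the study's data) is:

```python
import numpy as np
from scipy.stats import linregress

# Illustrative per-domain values: ad spend vs. appearances in Facebook groups
spend = np.array([0, 500, 2_000, 10_000, 50_000, 120_000])  # dollars per domain
group_posts = np.array([3, 10, 8, 45, 90, 310])             # group appearances

fit = linregress(spend, group_posts)
print(f"slope = {fit.slope:.5f}, R^2 = {fit.rvalue ** 2:.2f}")  # paper: R^2 = 0.35
```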
Fig. 5: Wordclouds of the Top 100 Words Appearing in Pink Slime Facebook Ads Over Time

Fig. 6: Change in words used in Facebook ads by Pink Slime Organizations in 2020 (left) and 2022 (right)

Fig. 7: Number of Instances a Pink Slime Domain Appears in a Facebook Group by Ad Spend for those Domains

6 RQ3: How have pink slime tactics differed over the course of a political event?

Given data across multiple years, we finally ask how the different organizations have adapted their tactics during different political phases, namely two presidential elections, a midterm election, and the periods between elections. Figures 2, 3b, and 3a show over-time plots of the advertisement impressions, group posts linking to pink slime sites, and page posts linking to pink slime sites per organization, respectively. In general, advertising impressions increase drastically during election years and decrease during the years in between elections. While advertisement impressions have increased over the years, the total number of posts on Facebook pages has decreased since 2020, indicating that pink slime organizations are now pivoting away from posting to their Facebook pages organically and are instead spending more for paid advertisements run from those pages. Facebook groups show different trends depending on the parent organization: Star had peaks in 2020 and then again in 2023, while Courier and Local Report had peaks in 2020, and Metric Media peaked in 2022. All of these, with the exception of Star, show peaks in organic activity around election years (like ad impressions); however, since the dataset only runs through August 2024, it is possible that Star would have exceeded its 2023 numbers had we been able to collect the full 2024 calendar year.

Figure 8 shows the number of Facebook group posts that share pink slime websites affiliated with each state, and Figure 9 shows the number of pink slime Facebook Page posts per state. Metric Media has the most prolific activity in Facebook posts, posting news stories relating to all states on its pages. Metric Media owns hundreds of these pink slime sites, whereas the other four organizations own fewer than 15 sites each. Their goal seems to be to reach every corner of the United States, with a strong focus on Illinois, where the founder of Metric Media, Brian Timpone, resides. The other pink slime organizations concentrate their sharing in Facebook groups and pages on states of higher electoral importance.

Metric Media. Metric Media is the largest known parent organization of pink slime, and has several sub-networks associated with and sharing the IP space of the main Metric Media sites [Bengani, 2021]. Metric Media has spent more on ads ($1,927,128.50) than American Independent but less than Courier. They started in 2019 and spent $208,604.50 during the 2020 presidential election year. However, in 2022, they increased ad spend significantly, up to $400,039 during the midterm year, and finally to $977,253 during the 2024 presidential election year. Metric Media dominates the Facebook page post dataset with 392,849 posts in 2020. These numbers then rapidly decreased, to 213,249 in 2022 and just 4,735 in the first 8 months of 2024. This data, coupled with the ad data, indicates a business decision to pivot from aggressively posting to their Facebook pages towards spending more on ads. Facebook groups have posted their news sites 5,409 times, with the highest year being 2022 (1,489 posts) followed by 2020 (1,011 posts). This is consistent with past work, which has also observed significant publication by Metric Media in Ohio, Michigan, Texas, California and Illinois [Royal and Napoli, 2022].

Local Report. Local Report is a new player in this space, having only posted on Facebook during the 2022 elections. The organization had no known ads purchased on Facebook. Unlike the other parent organizations, their 14 sites do not have dedicated Facebook Pages. Unsurprisingly, given their lack of social media accounts, they had minimal page posts: only 28, all in 2022 except for one in 2023. In groups, these sites have only been posted 12 times, all in 2022.

Star. Like Local Report, Star had no known ads purchased on Facebook, but unlike Local Report, they have active Facebook pages.
The 10 largest Facebook page sharers were the official social media accounts for these platforms, and they account for 85% of the posts to Facebook pages that linked to their 11 news sites. Page posts increased from 2020 (23,826 posts) to 2022 (32,151 posts) but declined in the first 8 months of 2024 (3,035 posts). Star has the most group posts of any pink slime organization: their news sites have been posted to groups 21,878 times, with the highest posting frequency in 2023 (5,755 posts).

Courier. Courier purchased Facebook ads in 2018 ($495), 2019 ($166,347) and 2020 ($910,158.50) before going on a break and returning with vigor in the lead-up to the 2024 presidential election, buying ads in December 2023 ($996) and 2024 ($6,909,696.50). They have seen a total of 51,933 posts from Facebook pages, with the plurality coming in 2020 (15,460 posts). In 2022 they had only 8,829 page posts, and in the first 8 months of 2024 they had 7,110 page posts. Courier has the second highest number of group shares, with a total of 9,376 posts to Facebook groups linking to their news sites; their biggest year for group shares was 2020 (3,875 posts).

American Independent. The American Independent is the organization with the least ad spend, having started purchasing ads only in 2022. In 2022 alone they spent $331,713 on Facebook ads, and they proceeded to spend more on ads in 2023 (surprisingly, a non-election year) with $415,990. Their 2024 ad spend fell to just $65,235. There were 20,400 instances of Facebook pages and 6,682 instances of Facebook groups sharing links to the American Independent's news sites, the plurality of which occurred in 2020; however, the vast majority of these posts to Facebook groups and pages are to the American Independent parent site (not one of their smaller sites targeting swing states).

7 Discussion

In today's digital age, news reporting goes beyond traditional television and print newspapers. Journalists have begun to make use of social media to disseminate their news articles and broadcast their news, keeping up with shifting readership trends. This has given rise to a new phenomenon: pink slime journalism. In this paper, we discuss the impact of pink slime journalism with respect to the US elections.

Understanding the impact of pink slime journalism through its propagation on Facebook is important for a few reasons. This understanding provides insight into the scope and scale of pink slime networks throughout the US. While ad expenditure and group posts are focused on key states, page posts are dispersed across almost every state, indicating the prevalence of pink slime advertisements across America.

In general, advertising expenditure by pink slime organizations has increased across the years. Metric Media is one of the largest players in the space; it spends heavily and broadly, producing ads that target a wide number of states and demographics. Other pink slime organizations produce ads that are more targeted towards swing states and vulnerable demographics. Courier was spending heavily on advertising in Iowa and, to a lesser degree, Michigan and Pennsylvania; however, they paused their advertising expenditure after the 2020 Elections, resuming only in late 2023. In contrast, American Independent started advertising in the 2022 Elections. In terms of messaging, during election years the focus of pink slime news websites was heavily on the politics of the elections.
Between elections, these websites did not simply fizzle away, but kept talking about court cases and lawsuits. This is a technique to keep the Facebook accounts and pages active and fresh in people's memory.

Understanding the geographical trends of pink slime distribution in the 2020 to 2024 elections will be useful in providing insight into possible trends in future elections. From our studies, we believe that in future US elections, pink slime advertisement expenditure and Facebook posting will increase drastically. Metric Media will continue to spend heavily in an attempt to influence the election, and American Independent will continue with a focused and high ad spend on swing states. Finally, should another seat open up on the Supreme Court, the ads will target the religion and values of the president as a consideration.

Ethical considerations. Our data, while obtained from a social media platform, focused on the quantity of posts and ad spend. Our study only uses public data and does not involve any personally identifiable information. We analyzed aggregate information, not personal or individual information. Future work on pink slime journalism could link the production of pink slime as observed within our study with the consumption of pink slime, thereby further quantifying the impact of pink slime news. This includes analyzing the pink slime phenomenon to reveal whether consumption habits match production aims, providing insight into the success of these journalism outlets.

8 Conclusion

Pink slime journalism is increasingly being used as an advertising technique in which politically slanted junk news is made to look like local news. In this work, we investigated the prevalence of pink slime journalism in Facebook groups and pages from 2018 to 2024, characterizing its quantity across the states of America. Our observations show that pink slime news organizations have increased their advertising expenditure across the years, and that this has impacted organic conversation on social media. Pink slime journalism is a growing phenomenon in the US and the world, for it leverages the public's trust in local news sites to dispense political news. Our work sets forth paths for analyzing this phenomenon across the US, in terms of messaging and advertising expenditure, opening avenues for examination of this new journalistic trend.

9 Acknowledgments

This work was supported in part by the Office of Naval Research (ONR) Award N00014182106, the Knight Foundation, the Center for Computational Analysis of Social and Organizational Systems (CASOS), and the Center for Informed Democracy and Social-cybersecurity (IDeaS). The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the ONR or the U.S. government.

9.1 IRB Approval

This research was conducted with IRB approval in the Spring of 2024 (Federalwide Assurance No. FWA00004206; IRB Registration No. IRB00000603).

Bibliography

Marianne Aubin Le Quéré, Mor Naaman, and Jenna Fields. Local, social, and online: Comparing the perceptions and impact of local online groups and local media pages on Facebook. In Proceedings of the Computation+Journalism Symposium, New York, NY, June 2022. C+J.
Priyanjana Bengani. The Metric Media network runs more than 1,200 local news sites. Here are some of the non-profits funding them. https://www.cjr.org/tow_center_reports/metric-media-lobbyists-funding.php, 2021. [Accessed 10-09-2023].

Priyanjana Bengani. 'Pink slime' network gets $1.6M election boost from PACs backed by oil-and-gas, shipping magnates. https://www.cjr.org/tow_center/pink-slime-network-gets-1-6m-election-boost-from-pacs-backed-by-oil-and-gas-shipping-magnates.php, 2022. [Accessed 10-09-2023].

Anthony G Burton and Dimitri Koehorst. Research note: The spread of political misinformation on online subcultural platforms. HKS Misinformation Review, 1(6), 2020.

Nicole S Cohen. From pink slips to pink slime: Transforming media labor in a digital age. The Communication Review, 18(2):98–122, 2015.

Patrick Ferrucci and Gregory Perreault. Local is now national: The Athletic as a model for online local news. New Media & Society, August 2022. https://doi.org/10.1177/14614448221117748.

Ryan J. Gallagher, Morgan R. Frank, Lewis Mitchell, Aaron J. Schwartz, Andrew J. Reagan, Christopher M. Danforth, and Peter Sheridan Dodds. Generalized word shift graphs: a method for visualizing and explaining pairwise comparisons between texts. EPJ Data Science, 10(1), January 2021. https://doi.org/10.1140/epjds/s13688-021-00260-3.

Jeffrey Gottfried and Jacob Liedke. Partisan divides in media trust widen, driven by a decline among Republicans. https://www.pewresearch.org/fact-tank/2021/08/30/partisan-divides-in-media-trust-widen-driven-by-a-decline-among-republicans/, 2021. [Accessed 10-09-2023].

John Gramlich. 10 facts about Americans and Facebook. https://www.pewresearch.org/fact-tank/2019/05/16/facts-about-americans-and-facebook/, 2019. [Accessed 10-09-2023].

Andrew M Guess, Brendan Nyhan, and Jason Reifler. Exposure to untrustworthy websites in the 2016 US election. Nature Human Behaviour, 4(5):472–480, 2020.

Katherine Haenschen. The conditional effects of microtargeted Facebook advertisements on voter turnout. Political Behavior, 45(4):1661–1681, 2022. https://doi.org/10.1007/s11109-022-09781-7.

Clara Hendrickson. Local journalism in crisis: Why America must revive its local newsrooms. https://www.brookings.edu/wp-content/uploads/2019/11/Local-Journalism-in-Crisis.pdf, 2019.

Marianne Aubin Le Quéré, Ting-Wei Chiang, and Mor Naaman. Understanding local news social coverage and engagement at scale during the COVID-19 pandemic. Proceedings of the International AAAI Conference on Web and Social Media, 16:560–572, May 2022. https://doi.org/10.1609/icwsm.v16i1.19315.

Benjamin LeBrun, Kaitlyn Todd, and Andrew Piper. Buying the news: A quantitative study of the effects of corporate acquisition on local news. New Media & Society, March 2022. https://doi.org/10.1177/14614448221079030.

Christine Sowa Lepird and Kathleen M Carley. Comparison of online maneuvers by authentic and inauthentic local news organizations. arXiv preprint arXiv:2312.07613, 2023.
Federica Liberini, Michela Redoano, Antonio Russo, ´Angel Cuevas, and Ruben Cuevas. Politics in the facebook era. evidence from the 2016 us presiden- tial elections. 2023. https://doi.org/10.2139/ssrn.4669077. URL http: //dx.doi.org/10.2139/ssrn.4669077. Yelena Mejova and Kyriaki Kalimeri. Covid-19 on facebook ads: Competing agendas around a public health crisis. In Proceedings of the 3rd ACM SIG- CAS Conference on Computing and Sustainable Societies, COMPASS ’20. ACM, June 2020. https://doi.org/10.1145/3378393.3402241. URL http: //dx.doi.org/10.1145/3378393.3402241. Paul Mihailidis and Bobbie Foster. The cost of disbelief: Fractur- ing news ecosystems in an age of rampant media cynicism. 4: 616–631, 2021. ISSN 65. https://doi.org/10.1177/0002764220pa978470. URL https://journals.sagepub.com/doi/full/10.1177/ 0002764220978470?casa token=q3Z5Gqjr zwAAAAA% 3AXgFLlEFYznqsUI5Sr51ZuHzzNRmpkYOnkF j6K-skbatcqcipuR- MjKa3Cmdvjvio7KrSEcxnrOruw. Ryan Moore, Ross Dahlke, Priyanjana Bengani, and Jeffrey Hancock. The con- sumption of pink slime journalism: Who, what, when, where, and why? 2023. Hannah Murphy and Siddharth Venkataramakrishnan. Local news is drowning in pink slime ahead of US election — Financial Times — ft.com. https:// www.ft.com/content/f36c3e2e-bd62-4d93-853f-0d09cd8d9079, 2020. [Ac- cessed 10-09-2023]. Pew. For Local News, Americans Embrace Digital but Still Want Strong Community Connection — pewresearch.org. https://www.pewresearch.org/ journalism/2019/03/26/for-local-news-americans-embrace-digital- but-still-want-strong-community-connection/, March 2019. Marianne Aubin Le Qu´er´e and Maurice Jakesch. Trust in ai in under-resourced environments: Lessons from local journalism. In CHI ’22: Workshop on Trust and Reliance in AI-Human Teams, New York, NY, 2022. ACM. 22 Christine Sowa Lepird , Lynnette Hui Xian Ng, and Kathleen M. Carley Asa Royal and Philip M Napoli. Local journalism without journalists metric media and the future of local news. Journal of Creative Industries and Cultural Studies-JOCIS, (8):119–147, 2022. Benjamin Toff and Nick Mathews. Is social media killing local news? an ex- amination of engagement and ownership patterns in u.s. community news on facebook. Digital Journalism, page 1–20, October 2021. ISSN 2167-082X. https://doi.org/10.1080/21670811.2021.1977668. URL http://dx.doi.org/ 10.1080/21670811.2021.1977668. A Appendix A: Over Time Maps by Parent Organization Title Suppressed Due to Excessive Length 23 Fig. 8: Sum of all the posts linking from public Facebook groups to pink slime sites targeting different states by year through August 2024. 24 Christine Sowa Lepird , Lynnette Hui Xian Ng, and Kathleen M. Carley Fig. 9: Sum of all the posts linking from Facebook Pages to pink slime sites targeting different states by year through August 2024. Title Suppressed Due to Excessive Length 25 Fig. 10: Total Facebook ad expenditure (in US Dollars) by state over time by the various pink slime organizations.
Trends of Pink Slime Journalism Advertisement Expenditure and Spread on Facebook from 2019-2024

Christine Sowa Lepird, Lynnette Hui Xian Ng, and Kathleen M. Carley
Carnegie Mellon University, USA

Abstract. Pink slime journalism is a practice where news outlets publish low-quality or inflammatory partisan articles while claiming to be local news networks. This paper examines the spread of pink slime sites on Facebook using public posts from Pages and Groups. We evaluate the trends of sharing pink slime sites on Facebook and patterns in the advertisements purchased by the parent organizations of the pink slime news networks. Our analysis discovers that while the number of pink slime posts on Facebook pages has decreased over the years, advertising dollars have increased. The increase in advertising dollars influences an increase in Facebook group posts. Further, advertising expenditure increases during election years, but contentious topics are still discussed during non-election years. By illustrating the patterns and themes from the US election years of 2020, 2022, and 2024, this research offers insights into potentially dangerous journalism tactics and provides predictions for future US Presidential Elections.

Keywords: Pink slime journalism, local news, Facebook ads.

1 Introduction

Pink slime journalism is a practice in which organizations masquerade as local news outlets and publish low-quality, hyperpartisan or inflammatory articles [Moore et al., 2023]. These organizations typically have no ties to the local community and are owned by a parent organization that operates a network of news outlets to push "local" news targeting multiple states. This practice is common in America, where the term "local" represents a region in the United States the size of a state or smaller. Pink slime news sites are characterized by algorithmically-generated websites with visually similar content, layouts and origins [Burton and Koehorst, 2020]. The same message is pushed to multiple states, while its tone is tweaked to appeal to the local community. To exploit the trust in local reporting, parent organizations like Metric Media have created almost 1,000 local news sites [Bengani, 2021]. Local reporting remains a highly trusted source of news, although there is a dearth of authentic news reporting. The creators of pink slime news networks take advantage of this trust and publish low-quality news on their websites, often on politically related and partisan issues.

Besides solely publishing stories on news websites, these pink slime journalism networks have created social media accounts through which they share the URL (Uniform Resource Locator) links of pink slime news on social media platforms. In this article, we study the activity and patterns of sharing of pink slime websites on Facebook. Facebook was chosen because it is a social media platform that is commonly used for sharing URLs to pink slime news sites [Moore et al., 2023]. The activity of publishing partisan news opens the possibility of manipulation of public opinion, all the more so if these pink slime news networks are heavily funded by advertising money. This leads us to our first research question: RQ1: How much money was spent by pink slime organizations to target which populations?
Through this research question, we look at the advertising dollars spent in total on Facebook advertisements (ads) by the larger parent organizations, and the breakdown of the targeted advertisement demographics. Through textual analysis methods, we examine the posts in the Facebook groups that shared pink slime news URLs. This investigation answers our second research question: RQ2: How did the ad spend influence online conversations?

Pink slime news organizations are typically active throughout the year, but since they primarily share news with a political slant, we examine one last research question: RQ3: How have pink slime tactics differed over the course of a political event, in particular the United States (US) elections?

With this study of pink slime news URLs on Facebook, we examine the targeting patterns of low-quality local news and the advertising money poured into the pink slime journalism ecosystem. Our studies provide insights into the evolving role of journalism in the digital age, where news websites and social media platforms overlap in their dissemination and broadcasting functions.

1.1 Contributions

Although there are studies pointing to the use of generated news sites for political issues [Burton and Koehorst, 2020], the success of pink slime journalism on social media platforms is, overall, understudied. In this study, we make use of content and geographical analysis to understand how the production strategies of pink slime organizations have evolved on Facebook. We study the production and distribution of pink slime URLs across three key political events in the United States: (1) the 2020 Presidential Elections, (2) the 2024 Presidential Elections, and (3) the 2022 Midterm Elections. In this study, we make the following contributions:

1. We analyze the advertisement expenditure of pink slime news organizations per state across political election timelines, allowing for projection of the advertising dollars poured into political campaigns.
2. To facilitate our analysis, we collected a dataset from Facebook containing posts that have pink slime URLs and advertising expenditure from the 2020 to 2024 United States (US) Elections. This dataset provides a representation of pink slime journalism on Facebook in the context of the US Elections, thereby facilitating future studies of this phenomenon.
3. Finally, we analyze the messaging techniques of pink slime news organizations per state across political election timelines.

2 Background

2.1 Local News Journalism

Local news agencies have faced many challenges in the 21st century. The newspaper industry, which is predominantly local newspapers, has experienced a 67% decline in advertising revenue since its peak in 2005 [Pew, 2019]. Between 2008 and 2018, newsrooms experienced a 25% reduction in employment, with newspapers seeing a much higher 47% loss of labor [Hendrickson, 2019]. Since 2004, over 2,000 local newspapers have closed, resulting in 200 US counties having no local newspaper. Americans have higher trust in local news than in national news organizations [Gottfried and Liedke, 2021].
71% of US adults (falsely) believe their local news outlets are doing well financially, despite only 14% of these adults having donated to such outlets in the past year [Pew, 2019]. To cope with the falling revenue, some regional newspapers have been acquired by larger corporations, forming corporate-owned local news sites, which thus have to publish content to be shared across multiple locales in a region [LeBrun et al., 2022, Toff and Mathews, 2021]. These acquisitions result in decreased local content publication and increased national news coverage. Some news outlets, like the sports-focused The Athletic, thrived by implementing a national framework to report on local events [Ferrucci and Perreault, 2022]. Further, these news outlets suffer from a dearth of local reporters to write stories, so newsrooms have turned to Artificial Intelligence (AI) to assist in writing news stories in various fashions. While some examples of AI usage are designed to automate tasks (like the Los Angeles Times' QuakeBot, which automatically drafts an article if the U.S. Geological Survey detects an earthquake), other usages can be more sinister and fully generate "local" stories with no local context [Aubin Le Quéré and Jakesch, 2022].

2.2 Pink Slime Journalism

In this age of digital media, digital journalism production evolved to include pink slime journalism, which was initially described in 2005 as automated news reporting [Cohen, 2015]. Given the particular financial and labor challenges the local newsroom has faced since then, five organizations have adopted this genre of news with the goal of creating "local" news using algorithms and a handful of non-local reporters. Media companies face declining profits and expanding demands for content; pink slime journalism thus relies on digital tools to generate content and publish more for less, reducing the quality of the news [Moore et al., 2023].

The largest of these organizations, Metric Media, controls vast swaths of pink slime sites that do not appear to have foreign ties [Bengani, 2021], but are currently financed by political candidates and political action committees with the hope of swaying election results. When speaking of threats to election integrity, Alex Stamos, director of the Stanford Internet Observatory, remarked: "The issue [...] is not going to be foreign interference. It's much more likely that legitimate domestic actors possibly operating under their own name - with LLCs or corporations with very shady funding that are not required to disclose what that funding is - are going to dominate the online conversation about the outcome of the election" [Murphy and Venkataramakrishnan, 2020].

Metric Media has been shown to algorithmically generate most of its content, and it prioritizes the publication of state and national partisan content at the expense of local news [Royal and Napoli, 2022]. One example of algorithmically generated content is the use of an Application Programming Interface (API) to display the local weather: a piece of computer script that retrieves the local weather from a weather source given only a zip code. The local weather is the news topic American adults rate as most important to their daily lives [Pew, 2019], and it can be easily generated via computer programming.
Further, research from Stanford University analyzed news consumption of pink slime sites and found that at least 3.7% of American adults visited at least one pink slime site during the 2020 Presidential election period [Moore et al., 2023]. The frequency with which pink slime sites are visited reflects the extent to which these sites can affect public opinion, especially during crucial events like elections. Social media posts by these pink slime websites have been observed to implement information maneuvers against local and state elections; in contrast, authentic local news sites focused on the national elections and state figureheads [Lepird and Carley, 2023].

2.3 Spread of News Through Facebook

Around 70% of Americans get their news from the social media platform Facebook [Gramlich, 2019]. 15% of adults cite social media as their preferred platform for reading local news, while 13% prefer print newspapers [Pew, 2019]. Those who prefer reading their local news via social media are less likely to be closely following news stories than those who consume local news via television or print [Pew, 2019]. Social media platforms like Facebook have been a serious threat to the traditional local news publishing industry. Local news organizations previously held exclusive access to their readers; however, in today's journalism climate, Facebook knows the intended audience of news websites through its micro-targeting tactics [Hendrickson, 2019]. When local news organizations post to Facebook, they have a greater incentive to optimize for engagement metrics. Therefore, these organizations post content pertaining to national news stories as opposed to local stories [Toff and Mathews, 2021]. Despite the scale of news advertisements on Facebook, not all of this news comes from quality news sources: 15% of referrals to fake news sites come from Facebook [Guess et al., 2020], and 17.7% of visits to pink slime sites are referred by Facebook [Moore et al., 2023].

Many local news organizations have official pages on Facebook to share their stories with their social media followers. During the COVID-19 pandemic, researchers analyzed the posts of over 1,000 local news sites to Facebook Pages and found that the engagement on these posts rose proportionately with the size of the population the local news organization was targeting [Le Quéré et al., 2022]. In general, US citizens rate posts from local Facebook groups as more trustworthy than those from non-local Facebook groups. Posts from Facebook pages of local news are perceived as more interesting and more trustworthy by local citizens compared to posts from local neighborhood Facebook groups [Aubin Le Quéré et al., 2022].

Much like authentic local news outlets, many of the known sources of pink slime have associated social media accounts on platforms like Facebook to amplify the spread of their messaging to the community. The names of these pink slime sites frequently contain the target community in the domain name; examples are The Bucks County Standard, North Alaska News, the Michigan Star, and the Pensacola Times. Per Mihailidis and Foster [2021], there is serious reason to be concerned about neighborhood Facebook groups: the platform makes it difficult to discern who owns a group. Even if a group advertises itself as apolitical, it may still have a political slant.
This results in further political manipulation, as Mihailidis and Foster [2021] express: "A fractured local media ecosystem, especially without the investigative power once shared by healthy local newspaper competition, has left local-and-state politics open to further manipulation by outsiders. Local and state governments draw considerable attention because they hold more power than the average citizen acknowledges".

Local news organizations boost their presence on social media not only by frequent posting but also by advertisements. Meta allows companies to purchase advertisements on Facebook. Newsrooms can have their stories promoted to their intended audiences on Facebook while micro-targeting the desired age, locations and gender of the readership. During the 2018 US Midterm elections, Haenschen [2022] ran ads in Texas to examine whether the advertisements affect voter turnout; only one set of targeted ads, focusing on abortion rights and women's healthcare in a competitive district, was shown to significantly increase turnout.

In 2018, Meta launched the Facebook Ad Library (https://www.facebook.com/ads/library) for increased transparency about the organizations that run ads on Facebook. This library provides information on how much money the organizations are spending and what messages they are promoting. This resource has allowed researchers to analyze the spread of advertising of posts with false information and political posts, observing that micro-targeted ads reduced the likelihood of persuading Democrat respondents to vote for Democrat candidates [Liberini et al., 2023, Mejova and Kalimeri, 2020].

3 Data Collection

Facebook is the social media platform with the plurality of referrals to pink slime sites [Moore et al., 2023]; hence this paper studies the spread of pink slime sites on Facebook. We study how the sites share pink slime news through both organic and paid means. Three datasets were collected with respect to pink slime domains: public posts on (1) Facebook groups and (2) Facebook pages (together termed the "Facebook posts dataset"), and (3) total advertisement expenditure (ad spend) per organization (termed the "Advertisement expenditure dataset").

Facebook posts dataset. We collected this data using the Python CrowdTangle API (https://pypi.org/project/PyCrowdTangle/). For each pink slime domain listed in Bengani [2022], we used the API to collect all the posts containing a link to it from public Facebook groups and pages from 2019 until August 2024, which was when the CrowdTangle platform was shut down. Overall, this data collection yielded 1,108,059 posts from pages and 43,357 posts from groups. In total, we collected posts from 12,644 Facebook pages, linking to 531 unique pink slime domains, and posts across 8,528 Facebook groups, linking to 198 unique pink slime domains. From this Facebook posts dataset, we extracted and quantified the number of pink slime domains per organization.
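For concreteness, the sketch below illustrates how such a per-domain collection could have been scripted. It is a minimal illustration, not the authors' pipeline: the paper uses the PyCrowdTangle wrapper, whereas this sketch calls the underlying (now discontinued) CrowdTangle /links endpoint directly; the token, example domain, and the response field names (e.g., `account.accountType`) are assumptions from memory of that API and may differ.

```python
import requests
import time

API_TOKEN = "YOUR_CROWDTANGLE_TOKEN"  # placeholder; CrowdTangle access ended in August 2024
BASE_URL = "https://api.crowdtangle.com/links"

def fetch_posts_for_domain(domain, start="2019-01-01", end="2024-08-14"):
    """Fetch public Facebook posts linking to `domain` via CrowdTangle's /links endpoint."""
    posts, offset = [], 0
    while True:
        resp = requests.get(BASE_URL, params={
            "token": API_TOKEN,
            "link": domain,          # matches posts containing links to this domain
            "startDate": start,
            "endDate": end,
            "platforms": "facebook",
            "count": 100,            # page size
            "offset": offset,
        })
        resp.raise_for_status()
        batch = resp.json().get("result", {}).get("posts", [])
        posts.extend(batch)
        if len(batch) < 100:
            return posts
        offset += 100
        time.sleep(10)               # stay under the rate limit

# Separate group posts from page posts using the account type attached to each post.
posts = fetch_posts_for_domain("example-pink-slime-site.com")  # hypothetical domain
group_posts = [p for p in posts if p.get("account", {}).get("accountType") == "facebook_group"]
page_posts  = [p for p in posts if p.get("account", {}).get("accountType") == "facebook_page"]
```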
Table 1 shows the breakdown of the news domains present in this dataset.

Table 1: The number of pink slime sites shared on Facebook in this study, by parent organization.

  Parent Organization    Sub-Specialty      Websites
  Metric Media           Metric Media       398
                         Metro Business     51
                         LGIS               34
                         Record             11
                         Franklin Archer    10
                         Local Labs         8
  Star                   -                  11
  Local Report           -                  9
  Courier                -                  9
  American Independent   -                  7

The largest organizations, Metric Media and Star, are known for their politically conservative leanings, while Local Report, Courier, and American Independent have liberal political leanings. It should be noted that Facebook page posts in this dataset are dominated by posts made by official Facebook pages dedicated to sharing news from individual pink slime sites. For that reason, the page data can be seen as a glimpse into the (free) marketing efforts by the organizations. The Facebook groups that were not operated by the organizations and shared news from these pink slime websites indicate an organic news spread within local communities. These groups are not seeded with the websites by the official community, and their sharing patterns indicate the natural propagation of pink slime sites. These groups had a median subscriber count of 1,075 group members and frequently had local community names in their group names, like 'Republicans of the Palm Beaches', 'Progressive Democrats of North Carolina', 'Michigan Republicans/Democrats Debate Group', 'Middle Georgia Young Democrats', 'Philadelphia 8th Ward GOP', and 'Republican Liberty Caucus of Indiana.'

Advertisement expenditure dataset. This dataset was collected using the Facebook Ad Library (https://www.facebook.com/ads/library). The dataset consists of the posts that pink slime parent organizations paid to promote on Facebook pages, the amount of money known pink slime organizations spent on advertisement posts per state, and the impressions garnered. It was collected by searching for political ads from the parent organizations: Courier's ("Cardinal & Pine", "Courier Newsroom", "Courier Newsroom, Inc.", "Floricuas", "Granite Post", "Iowa Starting Line", "The 'Gander Newsroom", "The Copper Courier", "The Keystone", "The Nevadan", "UpNorthNews", "The Keystone Courier"), Metric Media's ("Metric Media LLC", "Franklin Archer", "Local Government Information Services", "The Record"), and American Independent's ("American Independent Media"). No ads were found from the Star and Local Report parent organizations. We collected 14,581 advertisement posts shared from 310 Facebook pages, linking to 383 unique domains, the majority of which are direct pink slime domains.

4 RQ1: How much money was spent by pink slime organizations to target which populations?

Our first research question examines the advertising expenditure (ad spend) of pink slime organizations and their targeted populations. We make use of data from the Advertisement expenditure dataset to perform this investigation. Table 2 shows the aggregated ad spend of all the organizations from 2018 through 2024 and the total number of impressions those ads received. In terms of ad spending across the years, the election years of 2020 and 2024 observed spikes in ad spend, and the midterm election year of 2022 observed the next highest ad spending.
The total impressions on advertisements were highest during the election years of 2020, 2022, and 2024, indicating the appeal of the political ads run by these pink slime organizations during those years.

Table 2: Aggregated number of ads, impressions, and ad spend by year.

  Year   Number of Ads   Total Impressions   Total Ad Spend (USD)
  2018   374             1,328,813           200,067
  2020   2,774           105,847,118         158,883
  2022   3,696           43,776,653          544,398
  2024   3,231           328,211,953         16,999

Broken down by state, Courier spent heavily in ads to Iowa in 2020. During the 2020 elections, Trump won Iowa against Biden by 8.2 percentage points. Given the small difference, it is likely that Courier, a left-leaning organization, poured more resources into boosting chances in Iowa.

When we assess the differences in the demographics of those targeted by the pink slime Facebook ads (Table 3), we notice gender and age shifts between the organizations. Two politically left-leaning sites, American Independent and Courier Newsroom, target women more than men in their ads, and the majority of their impressions come from individuals under 55. However, Metric Media, a right-leaning organization, displays its ads to more men than women, and the majority of its impressions come from Facebook users who are over the age of 55. It seems that left-leaning sites generally target younger women, while right-leaning sites generally target older men.

Fig. 1: Total Pink Slime Advertising Expenditure by State During Election Years.

Table 3: Breakdown of targeted ad demographics by gender and age for pink slime organizations.

  Organization           Female   Male    18-24   25-34   35-44   45-54   55-64   65+
  American Independent   69.6%    29.8%   4.5%    18.8%   22.6%   20.2%   17.7%   16.2%
  Courier Newsroom       57.2%    40.2%   14.3%   27.0%   20.3%   14.7%   11.4%   10.5%
  Metric Media           43.4%    49.8%   5.4%    11.3%   10.3%   14.3%   22.5%   30.3%

Finally, we examined whether state variables have an effect on ad spend. We performed Pearson correlations of several state variables against the 2022 and 2024 pink slime ad spend, including the 2020 voter spread (https://www.presidency.ucsb.edu/statistics/elections/2020), 2020 GDP (https://www.bea.gov/), percentage of the population living in rural areas (https://www.census.gov/en.html), percentage of the population with a bachelor's degree (https://fred.stlouisfed.org/release/tables?rid=330&eid=391444&od=2020-01-01), and so forth. We used statistics retrieved from 2020 data, envisioning that organizations only had access to that information in 2022 and 2024. Table 4 presents the Pearson correlation coefficients between the state variables and the ad spend, with the corresponding p-values. We find that only the 2020 Voter Spread is significantly correlated with ad spend at the p < 0.05 level.

Fig. 2: Advertising Expenditure by Parent Organization Over Time.
Fig. 3: Facebook Pages and Group Posts Over Time. (a) An over-time plot of Page Posts Linking to Pink Slime Sites by Organization (through August 2024). (b) An over-time plot of Group Posts Linking to Pink Slime Sites by Organization (through August 2024).
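To make the per-state correlation test reported in Table 4 (below) concrete, here is a minimal sketch using scipy; the variable names and the small data excerpt are placeholders, not the study's actual per-state table.

```python
from scipy.stats import pearsonr

# Hypothetical per-state excerpt: 2020 voter spread (percentage points) and
# 2022 pink slime ad spend (USD). The real analysis uses all states with data.
voter_spread = [8.2, 0.3, 1.2, 10.5, 2.8]                 # placeholder values
ad_spend_2022 = [12_000, 85_000, 60_000, 4_000, 40_000]   # placeholder values

r, p = pearsonr(voter_spread, ad_spend_2022)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")
# A negative r, as reported in Table 4 (-0.52), indicates that closer races
# (smaller voter spread) attract more pink slime advertising dollars.
```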
Fig. 4: Facebook Pages and Groups Posts by State.

Table 4: State variables and their Pearson correlation to 2022 and 2024 pink slime ad spend.

                                                   2022                        2024
  Variable                                         Correlation  Significance   Correlation  Significance
  2020 Voter Spread                                -0.52        0.0040         -0.52        0.0072
  Cities Over 100k Population                      0.032        0.88           -0.12        0.57
  Percent of Population Living in Rural Areas      -0.085       0.66           -0.041       0.84
  Percent of Population with a Bachelor's Degree   -0.055       0.78           -0.12        0.65
  2020 GDP                                         0.085        0.66           -0.087       0.68
  Median Age of State                              -0.12        0.53           0.12         0.58
  March 2020 Governor's Party                      -0.15        0.44           0.34         0.093
  Electoral College Votes                          0.15         0.45           -0.35        0.87

5 RQ2: How did the ad spend influence online conversations?

In this research question, we combined data from advertising expenditure and post content, and examined the effects of ad spend on online conversations. We begin by analyzing themes within the online conversations through a temporal lens. We broke down the content within Facebook ad messaging across the years and populated the frequency of messaging into a word cloud. To do so, we first pre-processed the text in the messaging to remove stopwords and URLs, before using Python's wordcloud package (https://pypi.org/project/wordcloud/) to build a word cloud of the top 100 words, sized by frequency of appearance (see the sketch below). Figure 5 illustrates the changing focus of topics in election and off-election years, and Figure 6 shows how these words changed between the two elections, generated using the word shift graphs of Gallagher et al. [2021].

Throughout all of the years, the most targeted states ('Texas', 'Michigan', 'Georgia', 'Arizona', 'Florida', 'Wisconsin', and more) remain top terms. One key tactic is to write ads with the same message while switching out the name of the state being targeted in the ad. For example, some of the ads run in 2022 had the following titles: "'Inflation has shot up a staggering 13.2%' since Biden took office, Arizona's CPI at 13%", "'Inflation has shot up a staggering 13.2%' since Biden took office, Michigan's CPI at 8.1%", "3 in 5 Americans concerned about housing affordability, North Carolina's average rent up 30%", and "3 in 5 Americans concerned about housing affordability, Wisconsin's average rent up 17%."

During election years, ad spending increases drastically, and the conversations naturally turn political. While the presidential candidates' names were not at the forefront of the 2020 conversation, there was a focus on the word 'Catholic'. The ads containing references to 'Catholic' were mostly run in September and October 2020 by the Metric Media organization, leading into the appointment of Catholic Supreme Court Justice Amy Coney Barrett. The top two of these ads, garnering 275,000 and 112,000 impressions respectively, were titled 'President Trump addresses Catholics directly' and 'Catholic Vote: Biden's anti-school choice stance should worry WI Catholic school parents'; the 'Catholic' phrase was indirectly used to support President Trump's re-election.

During the midterms in 2022, President Biden was the top phrase, with secondary attention paid to key economic issues like 'inflation', 'gas', and 'prices'. Three of the top four ads (by impressions) in 2022 were purchased by the left-leaning American Independent, criticizing the Georgia senate candidate Herschel Walker and garnering over 3 million impressions from 9 distinct ads.
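As referenced above, a minimal sketch of the word-cloud step; the input ad texts and the output filename are placeholders, and the paper's exact preprocessing may differ.

```python
import re
from wordcloud import WordCloud, STOPWORDS

ad_texts = ["...", "..."]  # placeholder: ad messages for one year

def preprocess(text):
    """Lowercase the text and strip URLs; stopwords are removed by WordCloud itself."""
    return re.sub(r"https?://\S+", " ", text).lower()

corpus = " ".join(preprocess(t) for t in ad_texts)

# Top 100 words, sized by frequency of appearance.
wc = WordCloud(max_words=100, stopwords=STOPWORDS, width=1200, height=600)
wc.generate(corpus)
wc.to_file("ads_2022_wordcloud.png")
```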
Right-leaning Metric Media, however, garnered over 350,000 total impressions with the ad title 'As Pennsylvanians receive fourth stimulus check, Pigott points out negative real wage growth: 'Joe Biden is the pay cut president''. Much like the 2020 effort to fold the Catholic narrative into praise for President Trump, the 2022 tactic was to use negative economic news to undermine President Biden (and the Democratic party) ahead of the midterm election.

Ad expenditure during the years between elections drastically diminished, and the discourse focused more on "court". These ads highlight state supreme and high courts, and are not necessarily political or partisan in nature. For example, 'Appeals court vacates ruling against Parkways Authority over Turnpike toll fees' received over 22,000 impressions in 2023. This strategy may serve to establish the Facebook Pages sharing the news as nonpartisan, trustworthy local news outlets while they are not actively trying to push a political message, and to keep the organizations active and visible to the Facebook audience even during the down time.

Finally, to see how the ad spend influenced organic conversation, we generated Figure 7 to compare the amount of money spent promoting a given pink slime domain via Facebook ads and the number of times that domain appeared in a Facebook group. A linear trend line (R² = 0.35) indicates a weak positive correlation between the two measures, ad spend and organic conversation.

Fig. 5: Wordclouds of the Top 100 Words Appearing in Pink Slime Facebook Ads Over Time.
Fig. 6: Change in words used in Facebook ads by Pink Slime Organizations in 2020 (left) and 2022 (right).
Fig. 7: Number of Instances a Pink Slime Domain Appears in a Facebook Group by Ad Spend for those Domains.

6 RQ3: How have pink slime tactics differed over time of a political event?

Given data across multiple years, we finally ask how the different organizations have adapted their tactics during different political phases, namely two presidential elections, a midterm election, and the periods between elections. Figures 2, 3b, and 3a show over-time plots of the advertisement impressions, group posts linking to pink slime sites, and page posts linking to pink slime sites per organization, respectively. In general, advertising impressions increase drastically during election years and decrease during the years in between. While advertisement impressions have increased over the years, the total number of posts on Facebook pages has decreased since 2020, indicating that pink slime organizations are now pivoting away from posting to their Facebook pages organically and instead spending more on paid advertisements run from those pages.

Facebook groups show different trends depending on the parent organization. Star had peaks in 2020 and then again in 2023, while Courier and Local Report had peaks in 2020, and Metric Media peaked in 2022. All of these, with the exception of Star, show peaks in organic activity around election years (like ad impressions); however, since the 2024 data only runs through August 2024, it is possible that Star would have exceeded its 2023 numbers had we been able to collect the full 2024 calendar year.
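A minimal sketch of the trend-line fit behind Figure 7, assuming per-domain arrays of ad spend and group-post counts (the values below are placeholders); numpy's polyfit and corrcoef are standard.

```python
import numpy as np

# Placeholder per-domain data: total ad spend (USD) and group-post count.
ad_spend = np.array([0, 500, 2_000, 10_000, 50_000], dtype=float)
group_posts = np.array([2, 5, 12, 30, 90], dtype=float)

# Ordinary least-squares trend line and its R^2.
slope, intercept = np.polyfit(ad_spend, group_posts, deg=1)
r = np.corrcoef(ad_spend, group_posts)[0, 1]
print(f"trend: posts ~ {slope:.4f} * spend + {intercept:.1f}, R^2 = {r**2:.2f}")
# An R^2 around 0.35, as in Figure 7, means ad spend explains roughly a third
# of the variance in organic group sharing: a weak positive relationship.
```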
Figure 8 shows the number of Facebook group posts that share pink slime websites affiliated with each state, and Figure 9 shows the number of pink slime Facebook Page posts per state. Metric Media has the most prolific activity in Facebook posts, posting news stories relating to all states on its pages. Metric Media owns hundreds of these pink slime sites, whereas the other four organizations own fewer than 15 sites each. Its goal seems to be to reach every corner of the United States, with a strong focus on Illinois, where the founder of Metric Media, Brian Timpone, resides. The other pink slime organizations concentrate their sharing in Facebook groups and pages on states of higher electoral importance.

Metric Media. Metric Media is the largest known parent organization of pink slime, and has several sub-networks associated with and sharing the IP space with the main Metric Media sites [Bengani, 2021]. Metric Media spent $208,604.50 on ads during the 2020 presidential election year; in 2022 they increased ad spend significantly, up to $977,253 during the 2024 presidential election year. Metric Media dominates the Facebook page post dataset with 392,849 posts in 2020. These numbers then rapidly decreased, to 213,249 in 2022 and just 4,735 in the first 8 months of 2024. This data, coupled with the ad data, indicates a business decision to pivot from aggressively posting to their Facebook pages towards spending more on ads. Facebook groups have posted their news sites 5,409 times, with the highest year being 2022 (1,489 posts) followed by 2020 (1,011 posts). This is consistent with past work, which has also observed significant publication by Metric Media in Ohio, Michigan, Texas, California and Illinois [Royal and Napoli, 2022].

Local Report. Local Report is a new player in this space, having only posted on Facebook during the 2022 elections. The organization had no known ads purchased on Facebook. Unlike the other parent organizations, their 14 sites do not have dedicated Facebook Pages. Unsurprisingly, given their lack of social media accounts, they had minimal page posts: only 28, all in 2022 except for one in 2023. In groups, these sites have been posted only 12 times, all in 2022.

Star. Like Local Report, Star had no known ads purchased on Facebook, but unlike Local Report, they have active Facebook pages. The 10 largest Facebook page sharers were the official social media accounts for these platforms and account for 85% of the posts to Facebook pages that linked to their 11 news sites. The page posts increased from 2020 (23,826 posts) to 2022 (32,151 posts) but declined in the first 8 months of 2024 (3,035 posts). Star has the most group posts of any pink slime organization: their news sites have been posted to groups 21,878 times, with the highest posting frequency in 2023 (5,755 posts).

Courier. Courier purchased Facebook ads in 2018 ($166,347), 2020 ($996), and 2024 ($331,713), and spent more on ads in 2023 (surprisingly, a non-election year), with $65,235.
American Independent. There were 20,400 instances of Facebook pages and 6,682 instances of Facebook groups sharing links to the American Independent's news sites, the plurality of which occurred in 2020; however, the vast majority of these posts to Facebook groups and pages link to the American Independent parent site (not one of their smaller sites targeting swing states).

7 Discussion

In today's digital age, news reporting goes beyond traditional television and print newspapers. Journalists have begun to make use of social media to disseminate their news articles and broadcast their news, keeping up with shifting readership trends. This gives rise to a new phenomenon: pink slime journalism. In this paper, we discuss the impact of pink slime journalism with respect to the US elections.

Understanding the impact of pink slime journalism through its propagation on Facebook is important for a few reasons. This understanding provides insight into the scope and the scale of pink slime networks throughout the US. While ad expenditure and group posts are focused on key states, page posts are dispersed across almost every state, indicating the prevalence of pink slime advertisements across America.

In general, advertising expenditure from pink slime organizations has increased across the years. Metric Media is one of the largest players in the space; it spends heavily and broadly, producing ads that target a wide number of states and demographics. Other pink slime organizations produce ads that are more targeted towards swing states and vulnerable demographics. Courier was spending heavily on advertising to Iowa and, to a lesser degree, Michigan and Pennsylvania; however, they seem to have discontinued their advertising expenditure after the 2020 Elections. In contrast, American Independent started advertising in the 2022 Elections.

In terms of messaging, during election years the focus of pink slime news websites was heavily on the politics of the elections. Between elections, these websites did not simply fizzle away, but kept talking about court cases and lawsuits. This is a technique to keep the Facebook accounts and pages active and fresh in people's memory.

Understanding the geographical trends of pink slime distribution in the 2020 to 2024 elections will be useful in providing insight into possible trends in future elections. From our studies, we believe that in future US elections, pink slime advertisement expenditure and Facebook posting will increase drastically. Metric Media will continue to spend heavily in an attempt to influence the election, and American Independent will continue with a focused and high ad spend on swing states. Finally, should another seat open up on the Supreme Court, the ads will target the religion and values of the president as a consideration.

Ethical considerations

Our data, while obtained from a social media platform, focused on the quantity of posts and ad spend. Our study only uses public data and does not involve any personally identifiable information. We analyzed aggregate information, not personal or individual information. Future work on pink slime journalism can link the production of pink slime as observed within our study with the consumption of pink slime, thereby further quantifying the impact of pink slime news.
This includes analyzing the pink slime phenomenon to reveal whether consumption habits match production ideals, providing insights into the success of these journalism outlets.

8 Conclusion

Pink slime journalism is increasingly being used as an advertising technique, in which politically leaning junk news is made to look like local news. In this work, we investigated the prevalence of pink slime journalism in Facebook groups and pages across five years, from 2018 to 2024, characterizing its quantity across the states of America. Our observations show that pink slime news organizations have increased their advertising expenditure across the years, and this has impacted organic conversation on social media. Pink slime journalism is a growing phenomenon in the US and the world, for it leverages the public's trust of local news sites to dispense political news. Our work sets forth paths for analyzing this phenomenon across the US, in terms of messaging and advertising expenditure, opening avenues for examination of this new journalistic trend.

9 Acknowledgments

This work was supported in part by the Office of Naval Research (ONR) Award N00014182106, the Knight Foundation, the Center for Computational Analysis of Social and Organizational Systems (CASOS), and the Center for Informed Democracy and Social-cybersecurity (IDeaS). The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the ONR or the U.S. government.

9.1 IRB Approval

This research was conducted with IRB approval in the Spring of 2024. Federalwide Assurance No: FWA00004206; IRB Registration No: IRB00000603.

Bibliography

Marianne Aubin Le Quéré, Mor Naaman, and Jenna Fields. Local, social, and online: Comparing the perceptions and impact of local online groups and local media pages on Facebook. In Proceedings of the Computation+Journalism Symposium, New York, NY, June 2022. C+J.

Priyanjana Bengani. The Metric Media network runs more than 1,200 local news sites. Here are some of the non-profits funding them. Columbia Journalism Review, https://www.cjr.org/tow_center_reports/metric-media-lobbyists-funding.php, 2021. [Accessed 10-09-2023].

Priyanjana Bengani. 'Pink slime' network gets $1.6M election boost from PACs backed by oil-and-gas, shipping magnates. Columbia Journalism Review, https://www.cjr.org/tow_center/pink-slime-network-gets-1-6m-election-boost-from-pacs-backed-by-oil-and-gas-shipping-magnates.php, 2022. [Accessed 10-09-2023].

Anthony G Burton and Dimitri Koehorst. Research note: The spread of political misinformation on online subcultural platforms. HKS Misinformation Review, 1(6), 2020.

Nicole S Cohen. From pink slips to pink slime: Transforming media labor in a digital age. The Communication Review, 18(2):98-122, 2015.

Patrick Ferrucci and Gregory Perreault. Local is now national: The Athletic as a model for online local news. New Media & Society, August 2022. https://doi.org/10.1177/14614448221117748.

Ryan J. Gallagher, Morgan R. Frank, Lewis Mitchell, Aaron J. Schwartz, Andrew J. Reagan, Christopher M. Danforth, and Peter Sheridan Dodds. Generalized word shift graphs: a method for visualizing and explaining pairwise comparisons between texts. EPJ Data Science, 10(1), January 2021. https://doi.org/10.1140/epjds/s13688-021-00260-3.
Jeffrey Gottfried and Jacob Liedke. Partisan divides in media trust widen, driven by a decline among Republicans. Pew Research Center, https://www.pewresearch.org/fact-tank/2021/08/30/partisan-divides-in-media-trust-widen-driven-by-a-decline-among-republicans/, 2021. [Accessed 10-09-2023].

John Gramlich. 10 facts about Americans and Facebook. Pew Research Center, https://www.pewresearch.org/fact-tank/2019/05/16/facts-about-americans-and-facebook/, 2019. [Accessed 10-09-2023].

Andrew M Guess, Brendan Nyhan, and Jason Reifler. Exposure to untrustworthy websites in the 2016 US election. Nature Human Behaviour, 4(5):472-480, 2020.

Katherine Haenschen. The conditional effects of microtargeted Facebook advertisements on voter turnout. Political Behavior, 45(4):1661-1681, 2022. https://doi.org/10.1007/s11109-022-09781-7.

Clara Hendrickson. Local journalism in crisis: Why America must revive its local newsrooms. Brookings Institution, https://www.brookings.edu/wp-content/uploads/2019/11/Local-Journalism-in-Crisis.pdf, 2019.

Marianne Aubin Le Quéré, Ting-Wei Chiang, and Mor Naaman. Understanding local news social coverage and engagement at scale during the COVID-19 pandemic. Proceedings of the International AAAI Conference on Web and Social Media, 16:560-572, May 2022. https://doi.org/10.1609/icwsm.v16i1.19315.

Benjamin LeBrun, Kaitlyn Todd, and Andrew Piper. Buying the news: A quantitative study of the effects of corporate acquisition on local news. New Media & Society, March 2022. https://doi.org/10.1177/14614448221079030.

Christine Sowa Lepird and Kathleen M Carley. Comparison of online maneuvers by authentic and inauthentic local news organizations. arXiv preprint arXiv:2312.07613, 2023.

Federica Liberini, Michela Redoano, Antonio Russo, Ángel Cuevas, and Ruben Cuevas. Politics in the Facebook era: Evidence from the 2016 US presidential elections. 2023. https://doi.org/10.2139/ssrn.4669077.

Yelena Mejova and Kyriaki Kalimeri. COVID-19 on Facebook ads: Competing agendas around a public health crisis. In Proceedings of the 3rd ACM SIGCAS Conference on Computing and Sustainable Societies, COMPASS '20. ACM, June 2020. https://doi.org/10.1145/3378393.3402241.

Paul Mihailidis and Bobbie Foster. The cost of disbelief: Fracturing news ecosystems in an age of rampant media cynicism. American Behavioral Scientist, 65(4):616-631, 2021. https://doi.org/10.1177/0002764220978470.

Ryan Moore, Ross Dahlke, Priyanjana Bengani, and Jeffrey Hancock. The consumption of pink slime journalism: Who, what, when, where, and why? 2023.

Hannah Murphy and Siddharth Venkataramakrishnan. Local news is drowning in pink slime ahead of US election. Financial Times, https://www.ft.com/content/f36c3e2e-bd62-4d93-853f-0d09cd8d9079, 2020. [Accessed 10-09-2023].
Pew. For Local News, Americans Embrace Digital but Still Want Strong Community Connection. Pew Research Center, https://www.pewresearch.org/journalism/2019/03/26/for-local-news-americans-embrace-digital-but-still-want-strong-community-connection/, March 2019.

Marianne Aubin Le Quéré and Maurice Jakesch. Trust in AI in under-resourced environments: Lessons from local journalism. In CHI '22: Workshop on Trust and Reliance in AI-Human Teams, New York, NY, 2022. ACM.

Asa Royal and Philip M Napoli. Local journalism without journalists: Metric Media and the future of local news. Journal of Creative Industries and Cultural Studies (JOCIS), (8):119-147, 2022.

Benjamin Toff and Nick Mathews. Is social media killing local news? An examination of engagement and ownership patterns in U.S. community news on Facebook. Digital Journalism, pages 1-20, October 2021. https://doi.org/10.1080/21670811.2021.1977668.

A Appendix A: Over Time Maps by Parent Organization

Fig. 8: Sum of all the posts linking from public Facebook groups to pink slime sites targeting different states by year through August 2024.

Fig. 9: Sum of all the posts linking from Facebook Pages to pink slime sites targeting different states by year through August 2024.

Fig. 10: Total Facebook ad expenditure (in US Dollars) by state over time by the various pink slime organizations.
Published as a conference paper at ICLR 2026

EFFICIENT DYNAMIC STRUCTURED SPARSE TRAINING WITH LEARNED SHUFFLES

Abhishek Tyagi1, Arjun Iyer2, Liam Young2, William H. Renninger2, Christopher Kanan1, Yuhao Zhu1
1 Department of Computer Science  2 The Institute of Optics
University of Rochester, NY, USA
{atyagi2,aiyer2,lyoung12,wrenning}@ur.rochester.edu
ckanan@cs.rochester.edu, yzhu@rochester.edu

ABSTRACT

Structured sparsity accelerates training and inference on modern GPUs, yet it still trails unstructured dynamic sparse training (DST) in accuracy. The shortfall stems from a loss of expressivity: whereas a dense layer can realise every possible mask obtained by choosing any w active weights out of n, a fixed block or N:M layout explores only a subset of those possibilities. We propose to close this gap by learning, for each layer, a single permutation matrix jointly with the structured weight matrix. Applied to three canonical structures (block, N:M, and diagonals), we show that permutation-augmented DST (PA-DST) matches unstructured baselines (RigL, SET) at 90-95% sparsity on ImageNet-1K (ViT-B/16) and WikiText-103 (GPT-2), yet trains up to 1.21x and infers up to 2.9x faster. The results position structure + learned permutation as a sweet spot between accuracy and efficiency.

1 INTRODUCTION

Over the years, deep neural networks (DNNs) have grown, and their performance on complex tasks has increased to and beyond human-level performance (Sparkes, 2023; Lykiardopoulou, 2023). However, the cost of training and inference for such large DNNs has increased drastically (Cottier, 2023). A principled way to curb this cost, while retaining dense-level accuracy, is to exploit sparsity by removing unnecessary weights via pruning (Molchanov et al., 2016; Tanaka et al., 2020) or training directly in the sparse regime (Jaiswal et al., 2022; Zhang et al., 2023), often achieving similar algorithmic performance as dense models under comparable budgets (Frankle & Carbin, 2018; Blalock et al., 2020; Mostafa & Wang, 2019).

Weights are typically made sparse either in an unstructured manner, by zeroing arbitrary individual parameters (Evci et al., 2020; Han et al., 2015), or in a structured manner, by enforcing hardware-friendly patterns such as blocks or N:M groups (Liu et al., 2019). Unstructured sparsity preserves accuracy at high sparsity levels but is hard to accelerate on modern accelerators due to irregular memory access and limited kernel support. Structured sparsity aligns well with vendor-optimized kernels, yet it commonly underperforms its unstructured counterpart, especially at extreme sparsity, because rigid patterns reduce the expressivity of the network. Moreover, many structured approaches still rely on dense backpropagation or lose structure under transposition, limiting practical training speedups (Lasby et al., 2023; Hubara et al., 2021).

This work revisits structured sparsity from the angle of permutations. Our key observation is that permutations help bridge the gap between the performance of structured and unstructured sparsity. We show the benefits of co-learning the structure and permutations. Concretely, for each sparsified layer ℓ with input x_ℓ we learn a single permutation Π_ℓ and parameterize the layer as

\[ y_\ell = S_\ell\big(\Pi_\ell x_\ell\big), \]

where S_ℓ obeys a fixed, accelerator-friendly structured sparsity pattern (e.g., block or N:M).
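To fix ideas, here is a minimal PyTorch-style sketch of this layer formulation. It is illustrative only: the class name and mask layout are assumptions, and the permutation here is a fixed random one, whereas PA-DST learns it jointly with the weights (Sec. 4.2) and adapts the mask dynamically.

```python
import torch
import torch.nn as nn

class PermutedStructuredLinear(nn.Module):
    """y = S(P x): a structured-sparse weight S composed with an input permutation P.
    Sketch only: P is a fixed random permutation standing in for the learned one."""

    def __init__(self, in_features, out_features, mask):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)
        self.register_buffer("mask", mask)                          # fixed structured pattern
        self.register_buffer("perm", torch.randperm(in_features))   # stand-in for learned P

    def forward(self, x):
        x = x[..., self.perm]                        # permutation as cheap activation reindexing
        return x @ (self.weight * self.mask).t()     # structured-sparse matmul

# Example mask: a tied 2:4 template (2 nonzeros in every group of 4 input weights),
# for illustration only; PA-DST learns the nonzero positions within the structure.
def nm_mask(out_f, in_f, n=2, m=4):
    mask = torch.zeros(out_f, in_f)
    mask[:, torch.arange(in_f) % m < n] = 1.0
    return mask

layer = PermutedStructuredLinear(16, 8, nm_mask(8, 16))
y = layer(torch.randn(4, 16))
```

Note that the permutation is applied as a reindexing of the input activations, consistent with Fig. 1's observation that no explicit permutation matmul is needed at inference.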
While S_ℓ alone constrains the class of linear maps too severely, composing it with a learned permutation strictly enlarges the representable family (it recovers the classical structured model when Π_ℓ = I and extends it otherwise), thereby restoring much of the expressivity lost to structure. Importantly, the class is closed under transposition: (S_ℓ Π_ℓ)ᵀ = Π_ℓᵀ S_ℓᵀ, so the backward pass remains within the same permutation-structured family, enabling sparse-to-sparse acceleration in both forward and backward passes.

Fig. 1: Overview of the PA-DST training approach. The training starts by initializing the target weight matrices with the desired sparse structure and a soft permutation matrix. As training progresses, the soft permutation matrix moves towards a permutation matrix, and by the end of training we obtain the final sparse weight matrix and the permutation matrix. The permutation matrix is then used to reindex the activations of the layers so that an explicit matrix multiplication operation can be avoided during inference. More details can be found in Sec. 4.3.

In addition to reintroducing expressivity, we learn the sparsity itself during training. Rather than fixing the positions of non-zeros a priori, we dynamically adapt the structured mask of S_ℓ according to task-driven criteria, coupling optimal non-zero positioning with the learned permutation Π_ℓ. As we show in Sec. 6, this permutation-structured recipe delivers strong accuracy-speed trade-offs across architectures and sparsity regimes. This paper makes the following contributions:

1. We present Permutation-Augmented DST (PA-DST), a general layer formulation that combines any structured sparsity with a single learnt permutation matrix. We show that existing methods of learning permutations can be applied to transformer-based models to improve their generalization performance.
2. We prove tight combinatorial bounds showing that the added permutation recovers the expressivity lost by structured sparsity as compared to dense and unstructured sparse models.
3. We demonstrate that with learnt permutations, structured sparse methods can achieve unstructured-level accuracy across both vision (on ImageNet-1K) and language tasks (on WikiText-103), while achieving up to 1.21x training and 2.9x inference speedups at 90% sparsity for a ViT-B/16 network.

2 RELATED WORK

Sparse training. Classical pruning removes small-magnitude or low-salience weights from a pre-trained dense model and then fine-tunes the model. Despite strong compression, this approach still pays the dense pre-training cost (Molchanov et al., 2017; Cai et al., 2022; Lin et al., 2019; Lu et al., 2024). The Lottery Ticket Hypothesis (LTH) (Frankle & Carbin, 2018) shows that dense nets contain sparse "winning tickets," but training sparse models from scratch can be more compute-intensive than dense training itself. To avoid dense pre-training, sparse-from-scratch methods fall into static and dynamic regimes. Static sparse training (SST) fixes a mask at initialization (e.g., Pixelated Butterfly (Dao et al., 2021)) but tends to underperform dynamic strategies at high sparsity.
Dynamic sparse training (DST) updates connectivity during learning via prune-and-grow rules: SET prunes by magnitude and regrows randomly (Mocanu et al., 2018); MEST mixes magnitude and gradient signals (Yuan et al., 2021); RigL prunes by magnitude and regrows by gradient on missing connections (Evci et al., 2020); recent work explores topology-driven, gradient-free growth and N:M-style constraints for scalability (Zhang et al., 2024; 2023).

Permutation learning and channel mixing. Beyond fixed channel shuffles, AutoShuffleNet (Lyu et al., 2020) learns channel permutations during training by optimizing a Lipschitz-continuous penalty that drives a stochastic matrix toward a permutation. The Kaleidoscope (Dao et al., 2020) (K-matrix) framework provides a differentiable, expressive parameterization that includes permutations, and it has been used to learn latent permutations in permuted-image tasks within end-to-end training. Pool et al. (Pool & Yu, 2021) use offline permutations to prune networks to a fixed N:M sparsity pattern. In contrast, we learn permutations jointly during training, and our method is pattern-agnostic, i.e., not tied to any specific structured sparsity, leading to more hardware-efficient sparse networks.

3 COMBINATORIAL EXPRESSIVITY VIA LINEAR REGIONS

The question we would like to answer is: why is there an accuracy gap between the generalization performance of a structured versus an unstructured sparse method? Structured sparsity buys efficiency because accelerators can exploit regular patterns (blocks, N:M, diagonals). Yet those same patterns limit which directions each layer can "slice" the input space along. Unstructured sparsity, and of course dense networks, do not impose such directional restrictions and typically attain higher accuracy at the same sparsity. We make this precise by measuring expressivity via the number of linear regions (NLR) of ReLU networks and by asking: (i) how structure alone changes the classic depth-multiplicative region growth, and (ii) how a single learned permutation per layer restores it when depth is sufficient.

3.1 THEORETICAL SETUP & INTUITION

We consider a depth-L feed-forward ReLU network with layers ℓ = 1, ..., L:

\[ z_\ell(x) = W_\ell\, a_{\ell-1}(x) + b_\ell, \qquad a_\ell(x) = \phi\big(z_\ell(x)\big), \qquad \phi(t) = \max\{t, 0\}, \qquad a_0(x) = x \in \mathbb{R}^{d_0}. \]

Here x ∈ R^{d_0} is the input; a_{ℓ-1}(x) ∈ R^{d_{ℓ-1}} the activation at layer ℓ-1; W_ℓ ∈ R^{d_ℓ × d_{ℓ-1}} and b_ℓ ∈ R^{d_ℓ} the weight matrix and bias; z_ℓ(x) ∈ R^{d_ℓ} the pre-activation; and a_ℓ(x) ∈ R^{d_ℓ} the post-ReLU activation (with φ applied elementwise). We write n_ℓ := d_ℓ for the width of layer ℓ and (d_0, d_1, ..., d_L) for the layer dimensions. The activation pattern is constant on convex polyhedra; each maximal such set is a linear region.

Hyperplane arrangements & expressivity via NLR. Each neuron contributes an affine hyperplane that "slices" the affine subspace propagated to its layer. If, within that subspace, the hyperplanes are in subspace-general position (SGP), meaning their restricted normals are in general position, standard arrangement counts apply. We denote by NLR(f) the total number of linear regions of f. Classically, for dense ReLU networks with sufficiently wide layers, depth multiplies regions: each layer contributes a combinatorial factor that depends on the number of independent directions available, so NLR(f) grows multiplicatively with depth (Montúfar-style bounds (Montúfar et al., 2014)).

A generic lower-bound template.
A generic lower-bound template. Let k_ℓ be the effective dimension at layer ℓ (the number of independent row-normal directions realizable inside the current region). Under SGP,

    \mathrm{NLR}(f) \ge \prod_{\ell=1}^{L} \sum_{j=0}^{k_\ell} \binom{n_\ell}{j}.  (1)

All architectural effects reduce to identifying k_ℓ. To reason uniformly across settings, we track an accumulated span budget

    u_0 := 0, \qquad u_\ell := \min\{d_0,\, u_{\ell-1} + g_\ell\},  (2)

where g_ℓ is the number of fresh (linearly independent) directions that layer ℓ can inject beyond those already spanned. The effective dimension obeys

    k_\ell = \min\{n_\ell,\, h_\ell\}, \quad \text{with } h_\ell \in \{u_{\ell-1}, u_\ell\},  (3)

depending on whether the newly available directions are usable inside the current region. Subsequent subsections instantiate Eqn. 1–Eqn. 3 for dense matrices and for unstructured sparsity before turning to structured sparsity and permutations.

3.2 DENSE MATRICES: RECOVERING CLASSICAL MULTIPLICATIVE GROWTH

Dense layers impose no directional restriction: any normal in the ambient subspace can be realized. Thus there are no "fresh" directions to accumulate (g_ℓ = 0) and the first layer already sees the full input subspace; we adopt the standard convention u_0 = d_0. From Eqn. 3,

    k_\ell = \min\{n_\ell, u_{\ell-1}\} = \min\{n_\ell, d_0\} \quad \text{for all } \ell,  (4)

and plugging into Eqn. 1 yields

    \mathrm{NLR}(f) \ge \prod_{\ell=1}^{L} \sum_{j=0}^{\min\{n_\ell, d_0\}} \binom{n_\ell}{j}.  (5)

If n_ℓ ≥ d_0 for all ℓ, then k_ℓ = d_0 layer-wise and each factor equals \sum_{j=0}^{d_0} \binom{n_\ell}{j}, giving the classical statement that depth multiplies regions (Montúfar et al., 2014).

3.3 UNSTRUCTURED SPARSITY

Unstructured sparsity does not impose intrinsic directional caps: with generic weights, the row normals of W_ℓ can span any k ≤ min{n_ℓ, d_{ℓ−1}} directions inside the current subspace. Therefore, as in the dense case, we take g_ℓ = 0 and u_0 = d_0, which by Eqn. 3 gives

    k_\ell = \min\{n_\ell, u_{\ell-1}\} = \min\{n_\ell, d_0\},  (6)

and the lower bound Eqn. 1 coincides with the dense bound:

    \mathrm{NLR}(f) \ge \prod_{\ell=1}^{L} \sum_{j=0}^{\min\{n_\ell, d_0\}} \binom{n_\ell}{j}.  (7)

Interpretation. Through the NLR lens, truly unstructured sparsity has the same depth-multiplicative expressivity as dense networks at a given width profile; any observed gap with structured sparsity arises from structural caps that reduce k_ℓ (analyzed next), rather than from sparsity alone.

3.4 STRUCTURED SPARSITY WITHOUT MIXING

We now turn to structured (axis-aligned) sparsity without any re-orientation across layers (e.g., no permutations/mixers). Let A_ℓ ⊂ R^{d_{ℓ−1}} be the set of admissible row-normal directions at layer ℓ induced by the fixed pattern (diagonals, bands, blocks, or tied N:M groups). Define the directional rank cap at layer ℓ by

    r_\ell := \dim(\mathrm{span}(A_\ell)) \le d_{\ell-1}.  (8)

For the axis-aligned families we study, the orientation and admissible directions are the same at every depth, so r_ℓ = r_struct is constant across layers. Set s := min{d_0, r_struct}. Because the admissible directions lie in the same r_struct-dimensional coordinate subspace at every layer, the first layer can realize at most s independent directions, and no fresh directions can be injected later. Using the global template Eqn. 1, this yields

    \mathrm{NLR}(f) \ge \prod_{\ell=1}^{L} \sum_{j=0}^{\min\{n_\ell, s\}} \binom{n_\ell}{j}.  (9)

What this means. Unlike dense or unstructured layers, k_ℓ is uniformly capped by s independently of depth, so the per-layer factor in Eqn. 1 is bounded and depth-multiplicative growth stalls.

Instantiations of r_struct for each sparsity structure: for Diagonal-K, r_struct = K, hence s = min{d_0, K}; for Block-B, r_struct = B, hence s = min{d_0, B}; and for N:M with a tied group template, r_struct = αd_0 with α = N/M, hence s = αd_0. Substituting each resulting s into Eqn. 9 gives the corresponding lower bound on NLR(f).
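Because Eqns. 1–3 are purely combinatorial, the template can be evaluated mechanically. The following sketch is ours (not the authors' code; the widths and caps are the toy values reused in Apdx. C.1): g_ℓ = 0 with u_0 = d_0 reproduces the dense/unstructured bounds of Eqns. 5 and 7, a single injection of s directions reproduces the structured bound of Eqn. 9, and a constant g_ℓ anticipates the mixing recursion introduced in Sec. 3.5.

```python
from math import comb

def nlr_lower_bound(widths, d0, g, u0=0):
    """Master bound (Eqn. 1) with the span budget of Eqns. 2-3, taking
    h_l = u_l (fresh directions usable inside the current region)."""
    u, bound = u0, 1
    for n_l, g_l in zip(widths, g):
        u = min(d0, u + g_l)                               # Eqn. 2
        k_l = min(n_l, u)                                  # Eqn. 3
        bound *= sum(comb(n_l, j) for j in range(k_l + 1))
    return bound

widths, d0 = [8, 8, 8], 4
dense      = nlr_lower_bound(widths, d0, g=[0, 0, 0], u0=d0)  # Eqn. 5: 163**3
structured = nlr_lower_bound(widths, d0, g=[2, 0, 0])         # Eqn. 9, s=2: 37**3
mixed      = nlr_lower_bound(widths, d0, g=[2, 2, 2])         # Eqn. 10: 37*163*163
print(dense, structured, mixed)
```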
3.5 STRUCTURED SPARSITY WITH MIXING

We now allow a per-layer re-orientation—a learned permutation or, more generally, any full-rank mixer—applied before the axis-aligned mask. Such mixing prevents successive layers from aligning to the same small coordinate subspace, so each structured layer can contribute up to r_struct fresh independent directions until the input dimension d_0 is saturated.

Generic consequence of mixing. Let the span budget evolve as u_0 := 0 and u_ℓ = min{d_0, u_{ℓ−1} + r_struct}, and use the effective-dimension cap k_ℓ = min{n_ℓ, u_ℓ} in the master template Eqn. 1. This yields the mixing-enabled lower bound

    \mathrm{NLR}(f) \ge \prod_{\ell=1}^{L} \sum_{j=0}^{\min\{n_\ell,\, u_\ell\}} \binom{n_\ell}{j}, \qquad u_\ell = \min\{d_0,\, u_{\ell-1} + r_{\mathrm{struct}}\}.  (10)

Meaning. Each layer injects r_struct new independent directions; the span budget grows additively and the dense per-layer factor is recovered after a short warm-up of

    L_{\mathrm{overhead}} = \lceil d_0 / r_{\mathrm{struct}} \rceil \text{ layers}.  (11)

The recipe applies unchanged to each axis-structured family by substituting its r_struct: for Diagonal-K, r_struct = K ⇒ u_ℓ = min{d_0, u_{ℓ−1} + K} and L_overhead = ⌈d_0/K⌉; for Block-B, r_struct = B ⇒ u_ℓ = min{d_0, u_{ℓ−1} + B} and L_overhead = ⌈d_0/B⌉; and for N:M, r_struct = αd_0 with α = N/M ⇒ u_ℓ = min{d_0, u_{ℓ−1} + αd_0} and L_overhead = ⌈M/N⌉. With a single mixer per layer, axis-structured sparsity thus regains dense-like, depth-multiplicative expressivity at the same widths after an explicit, structure-dependent overhead.

Why permutations (and what mixing suffices)? Any full-rank per-layer mixer that varies across depth suffices for the theory above (e.g., permutations, co-prime stride shuffles, S-random interleavers, or very sparse full-rank transforms such as butterfly-style matrices). We emphasize permutations because they are (i) parameter-free and invertible, (ii) cheap and friendly to accelerator memory layouts, and (iii) structure-preserving: W_ℓ keeps its axis-aligned pattern, allowing the same efficient kernels at inference. Empirically, learned permutations tend to outperform fixed random shuffles; the framework, however, only requires full rank and depth variation—not learnability per se. We discuss the practical consequences and predictions of our theory in detail in Apdx. C.

4 PERMUTATION-AUGMENTED DYNAMIC SPARSE TRAINING

This section presents PA-DST, our algorithm that learns (i) sparse weight values and positions and (ii) one permutation matrix per layer, all under a fixed global sparsity budget. In Sec. 4.1, we describe how a linear layer formulation changes with the introduction of a permutation matrix. In Sec. 4.2, we elaborate on our approach to learning a permutation matrix in a differentiable manner. Lastly, in Sec. 4.3, we take the example of a ViT network to point out the changes when training with permutations.

4.1 LAYER FORMULATION

For a sparse weight matrix W ∈ R^{R×C} we define the permuted weight matrix W′ ∈ R^{R×C} as

    W' = W P, \qquad P \in \mathcal{P}_d, \qquad d = \min\{R, C\},  (12)

where P is a column permutation matrix. Here, W is a sparse weight matrix whose non-zero positions are learnt during training while adhering to a pre-defined structure. Row permutations can be applied to the weight matrices instead, but in our analysis we find no significant difference in the algorithmic performance of the two choices, as shown in Sec. 6.4.

4.2 DIFFERENTIABLE PERMUTATION ESTIMATOR

Permutation matrices are by nature discrete, preventing direct gradient-based optimization. To ensure end-to-end differentiability, we require a method to learn permutation matrices through gradient descent. We adopt the permutation-learning formulation of AutoShuffleNet (Lyu et al., 2020) to enable end-to-end training. Rather than optimizing a discrete permutation matrix P, we learn a soft matrix M ∈ R^{N×N} constrained to the Birkhoff polytope (doubly-stochastic matrices), and decode a hard permutation at evaluation time. Following Lyu et al. (2020), we couple our task loss with an exact, Lipschitz-continuous ℓ1−ℓ2 row/column penalty that drives a doubly-stochastic M toward a true permutation:

    \mathcal{L} = \mathcal{L}_{\mathrm{task}} + \lambda\, P(M) \quad \text{subject to} \quad M \ge 0, \; M\mathbf{1} = \mathbf{1}, \; M^\top \mathbf{1} = \mathbf{1},  (13)

    P(M) = \sum_{i=1}^{N} \big( \|M_{i:}\|_1 - \|M_{i:}\|_2 \big) + \sum_{j=1}^{N} \big( \|M_{:j}\|_1 - \|M_{:j}\|_2 \big).  (14)

As shown by Lyu et al. (2020), for matrices satisfying the doubly-stochastic constraints, P(M) = 0 if and only if M is a permutation. In our setting, this penalty provided the most stable training among soft-permutation alternatives.
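For concreteness, here is a minimal PyTorch sketch of such an estimator (our illustration, not the released implementation). The penalty follows Eqn. 14 exactly; the doubly-stochastic constraints of Eqn. 13, however, are approximated here with a few Sinkhorn normalization steps—a simplification of ours, not necessarily the constraint handling used by Lyu et al. (2020)—and the hard decode via row-wise argmax stands in for a proper assignment step.

```python
import torch
import torch.nn as nn

class SoftPermutation(nn.Module):
    """Learnable soft matrix M near the Birkhoff polytope; the l1-l2 penalty
    of Eqn. 14 vanishes exactly when a doubly-stochastic M is a permutation."""
    def __init__(self, n, sinkhorn_iters=5):
        super().__init__()
        self.logits = nn.Parameter(0.1 * torch.randn(n, n))
        self.sinkhorn_iters = sinkhorn_iters

    def forward(self):
        M = torch.exp(self.logits)               # positivity
        for _ in range(self.sinkhorn_iters):     # approximate M1 = 1, M^T 1 = 1
            M = M / M.sum(dim=1, keepdim=True)
            M = M / M.sum(dim=0, keepdim=True)
        return M

    @staticmethod
    def penalty(M):
        """Eqn. 14: sum over rows and columns of ||.||_1 - ||.||_2."""
        rows = M.norm(p=1, dim=1) - M.norm(p=2, dim=1)
        cols = M.norm(p=1, dim=0) - M.norm(p=2, dim=0)
        return rows.sum() + cols.sum()

    def hard_permutation(self):
        return torch.argmax(self.forward(), dim=1)  # decode at evaluation time

perm = SoftPermutation(8)
M = perm()
loss = 0.1 * SoftPermutation.penalty(M)  # plus the task loss L_task in practice
loss.backward()
```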
4.3 PERMUTATION MATRICES DURING TRAINING AND INFERENCE

In this work we focus on transformer-based neural networks. For a general transformer block, such as in a ViT-B/16, we sparsify the output projection linear layer of the multi-headed attention (MHA) block and the fully connected layers in the feedforward (MLP) blocks. This is in line with previous work in the literature (Lasby et al., 2023; Tyagi et al., 2025). We describe below in detail how the aforementioned layers are transformed.

MHA output projection. Let H ∈ R^d be the concatenated head output and W_O ∈ S ⊂ R^{d×d} the structured-sparse output projection. We learn a column permutation P_O, and the forward pass can be written as

    o = W_O P_O H = W_O a, \qquad a := P_O H.  (15)

At inference, the learned permutation sits next to the attention output projection: W_O maps from d to d, and P_O permutes the concatenated head output H ∈ R^d. A direct implementation would compute o = W_O (P_O H). Instead of multiplying by P_O, we precompute an index map ℓ_O : [d] → [d] such that (P_O H)_i = H_{ℓ_O(i)}, and we re-index during head concatenation (i.e., we write the concatenated buffer in permuted order, or equivalently read it with permuted indices), then run the usual sparse GEMM

    o_k = \sum_{i=1}^{d} W_O[k, i]\, H_{\ell_O(i)}.  (16)

This keeps Q, K, V and the softmax unchanged, adds no extra matmul or copy, and is cheaper than a permutation multiply.

FFN / MLP (both linears sparsified; permutations absorbed at inference). Let W↑ ∈ S ⊂ R^{d_ff×d} denote the expansion (up-projection) and W↓ ∈ S ⊂ R^{d×d_ff} the contraction (down-projection) matrix in the FFN. We learn one column permutation for each sparsified linear: P↑ ∈ {0, 1}^{d×d} and P↓ ∈ {0, 1}^{d_ff×d_ff} (a column permutation of W↓). With σ(·) the activation and LN(·) the layer norm, the forward pass is

    y = W_\downarrow P_\downarrow\, \sigma\big(W_\uparrow P_\uparrow\, \mathrm{LN}(x)\big).  (17)

At inference, multiplying by permutation matrices involves extra computation, so we remove them by re-indexing. We precompute index maps ℓ↑ : [d] → [d] and ℓ↓ : [d_ff] → [d_ff] so that (P↑x)_j = x_{ℓ↑(j)} and (P↓h)_i = h_{ℓ↓(i)}. The forward pass becomes

    u_i = \sum_{j=1}^{d} W_\uparrow[i, j]\, x_{\ell_\uparrow(j)}, \qquad h = \sigma(u), \qquad y_k = \sum_{i=1}^{d_{\mathrm{ff}}} W_\downarrow[k, i]\, h_{\ell_\downarrow(i)}.  (18)

No explicit P↑ or P↓ is formed at runtime; we only change which indices we read or write inside the existing kernels. This is computationally cheaper than multiplying by permutation matrices: a permutation multiply performs a shuffle that adds an extra pass over memory and an extra kernel launch, whereas re-indexing costs only a small lookup and a few integer address operations per element, with no extra matrix multiplies and an unchanged structured-sparse GEMM.
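The equivalence behind Eqns. 17–18 is easy to verify numerically. The sketch below is ours (dense weights stand in for the structured-sparse matrices, LN is omitted, and σ = ReLU): absorbing the permutations into index lookups reproduces the explicit permutation-multiply forward pass up to floating-point associativity.

```python
import torch

torch.manual_seed(0)
d, dff = 8, 32
x = torch.randn(d)
W_up, W_dn = torch.randn(dff, d), torch.randn(d, dff)  # structured-sparse in practice
perm_up = torch.randperm(d)      # index map: (P_up x)_j = x[perm_up[j]]
perm_dn = torch.randperm(dff)    # index map: (P_dn h)_i = h[perm_dn[i]]

# Training-time view: explicit permutation matrices (Eqn. 17)
P_up = torch.eye(d)[perm_up]
P_dn = torch.eye(dff)[perm_dn]
y_ref = W_dn @ P_dn @ torch.relu(W_up @ P_up @ x)

# Inference-time view: permutations absorbed into gathers (Eqn. 18)
y_fast = W_dn @ torch.relu(W_up @ x[perm_up])[perm_dn]

assert torch.allclose(y_ref, y_fast, atol=1e-5)
```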
Fig. 2: Accuracy vs. sparsity of the networks listed in Sec. 5; panels cover ViT-B/16, ViT-L/16, and Mixer-S/16 (Top-1 accuracy on ImageNet-1K) and GPT-2 Small and GPT-2 Medium (perplexity on WikiText-103). The plots in blue, green, and red correspond to unstructured, structured, and structured + permutation-based (PA-DST) methods, respectively. A permutation helps improve generalization performance (red curves lie above the green ones). Moreover, permutations help bridge the gap between unstructured and structured sparse methods (structure + permutations, in red, comes close to the unstructured methods, in blue).

5 EXPERIMENTAL SETUP

The aim of our experiments is to show the benefits of learning permutations during dynamic sparse training of neural networks, across different modalities (vision and language), model types (MLP and attention), and sparsity regimes.

Evaluation. We assess the generalization ability of all structured sparse methods with and without permutations using Top-1 accuracy for vision tasks and perplexity (PPL) for language tasks. Additionally, we measure the training and inference times for each method at varying sparsity levels through end-to-end execution, and assess the overheads associated with permutation-based training.

Baselines. We compare the performance of structured DST with and without permutations against the following approaches. A Dense network is trained with 0% sparsity in the weight matrices. For Unstructured DST, we use methods from the literature such as RigL (Evci et al., 2020), MEST (Yuan et al., 2021), SET (Mocanu et al., 2018), CHT (Zhang et al., 2024), and CHTs (Zhang et al., 2025) to produce unstructured sparse neural networks. For Structured DST, we use SRigL (Lasby et al., 2023), DSB (Jiang et al., 2022), and DynaDiag (diagonal sparsity) (Tyagi et al., 2025) to train networks with structured sparsity via DST, and we use PixelatedBFly (Dao et al., 2021) to compare against a structured static sparse training (SST) method. Finally, for Structured Sparse Training + Permutations, we apply either random or learned permutations to the above structured DST methods to observe the impact on the network's generalization performance. In the random case, a permutation is generated before the start of training and applied to the network.

6 RESULTS

We present our main results, comparing the performance of various structured DST methods with and without permutations for vision (Sec. 6.1) and language (Sec. 6.1.1) tasks. We then compare the runtime of all sparse methods (training and inference time on GPUs) in Sec. 6.2 to understand the overheads associated with permutation-based learning. We wrap up by looking at the learned permutations in more detail (Sec. 6.3) and presenting ablation studies in Sec. 6.4.
Fig. 3: Impact of permutations on the training and inference time for a ViT-B/16 model. We observe a 3.16%–8.69% permutation-related overhead during inference for the obtained accuracy gains. For training, learning permutations increases the overall training time, but at higher sparsity levels we still obtain a speedup over a dense model for all structured sparsities. Training time for GPT models can be found in Tbl. 5.

6.1 ACCURACY VERSUS SPARSITY: VISION EXPERIMENTS

Setup. We evaluate two architectures for vision tasks on the ImageNet-1K (Deng et al., 2009) dataset. We train two variants of Vision Transformers (ViTs) (Dosovitskiy, 2020), ViT-B/16 and ViT-L/16, to study scalability aspects of sparsity. We also train the Mixer-S/16 variant of MLP-Mixers (Tolstikhin et al., 2021). We test all sparse training methods (with and without permutations) at 60%, 70%, 80%, 90%, and 95% sparsity in the targeted layers of the networks. Details of which layers are sparsified can be found in Apdx. C.5.

Results. Fig. 2 presents a summary of our results, comparing the performance of the three networks at varying levels of sparsity, reporting the average of three runs. The results show that learned permutations improve the generalization performance of structured sparse networks, helping to bridge the gap with unstructured sparsity. For instance, on ViT-L/16 at 70% sparsity, we obtain RigL-level accuracy with DynaDiag + PA-DST. We see an improvement in accuracy across all structured sparsity patterns, with an increase of 1.22% for SRigL at 90% sparsity (ViT-L/16). The raw values, along with comparisons with random permutations, can be found in Tbl. 11.

6.1.1 PERPLEXITY VERSUS SPARSITY: LANGUAGE EXPERIMENTS

Setup. For language tasks, we evaluate on the WikiText-103 dataset using two variants (Small and Medium) of GPT-2 (Radford et al., 2019) at sparsities s ∈ {40%, 50%, 60%, 80%, 90%}. We sparsify both the attention and the MLP layers of the model.

Results. We train GPT2-Small and GPT2-Medium (Radford et al., 2019) from scratch on the WikiText-103 dataset and show the resulting performance in Fig. 2 (d) and (e). We report the average PPL over two runs. We see that structured sparse models benefit from learning permutations, achieving PPL similar to that of unstructured sparse networks. Notably, PA-DST-based methods demonstrate increasingly significant improvements over no-permutation baselines. For example, at 80% sparsity on GPT2-Medium, SRigL + PA-DST achieves a perplexity 2.27 points lower than SRigL alone while remaining within 0.98 points of the performance ceiling established by RigL. The raw values, along with comparisons with random permutations, can be found in Tbl. 12.

6.2 TRAINING AND INFERENCE COST

Learning permutations entails extra computation, so it is essential to understand the associated overheads. We measure the inference and training wall-clock times of the various structured sparse training approaches with and without permutations.
Fig. 4: Distance of the learned permutations to the identity matrix (no permutation) for a ViT-B/16 at 90% sparsity trained with DynaDiag, plotted per layer (attn.proj, mlp.fc1, mlp.fc2 across blocks b0–b11) using δ(P) = 1 − ∥P − I∥_F/√(2N). We see that with depth the learned permutations move closer to the identity, signalling less shuffling. A: attention block; F: fully connected; b: block.

Setup. We use the best available libraries to accelerate the execution of the structured sparsities mentioned in Sec. 5, and we use cuSPARSE (Naumov et al., 2010) to execute all unstructured sparsities. We estimate the execution times for SRigL (Lasby et al., 2023) following their methodology, as we are unable to run end-to-end training and inference due to lack of support in PyTorch. For DSB and PixelatedBFly, we use the Triton-based library from PixelatedBFly (Dao et al., 2021) to accelerate inference for both and training for PixelatedBFly (DSB lacks support for integrating the kernels with the training pipeline). Lastly, we use the provided CUDA kernels to accelerate DynaDiag's execution on the GPUs.

Results. In Fig. 3, we compare the inference and training times of the models listed in Sec. 6.1 and Sec. 6.1.1. Using the approach described in Sec. 4.3, we can re-index the output of a layer instead of explicitly applying a permutation matrix, leading to an overhead of at most 8.69% over the no-permutation baselines across all sparsity levels. Even with this overhead, we achieve an inference speedup as high as 2.9× with DynaDiag at 90% sparsity. During training, we see an increase in training time for all structured sparse methods due to the extra computation required for learning the permutations. We elaborate in Apdx. C.2 on these overheads and on our approach to early-stopping the permutation learning by tracking the loss and heuristically determining a stopping point. Still, at higher sparsities, even with permutations it is possible to obtain better training times than dense and unstructured sparse models. For example, at 95% sparsity, the training times for SRigL, PBFly, DSB, and DynaDiag are 1.09×, 1.18×, 1.13×, and 1.23× that of RigL, respectively. DynaDiag is the fastest to train, which can be attributed to the transposable nature of its sparse pattern (Tyagi et al., 2025), which helps accelerate the backward pass.

6.3 LEARNED PERMUTATIONS

To understand what the learned permutations look like, we take a ViT-B/16 trained with DynaDiag at 90% sparsity and plot the normalized distance between the learned permutations and the identity matrix. The identity is the natural no-permutation baseline, so proximity to it directly measures how much reordering a layer learns. We use a width-invariant, normalized Frobenius metric for a permutation matrix P ∈ R^{N×N}:

    \delta(P) = 1 - \frac{\|P - I\|_F}{\sqrt{2N}} \in [0, 1].

The result is shown in Fig. 4. We observe two primary trends: 1) across depth, early blocks display the lowest distance to the identity matrix, i.e., they learn the strongest reindexing, while in later layers the permutation matrices are closer to the identity, meaning weaker shuffles; 2) attention projections tend to remain more permuted than the MLP layers, which align more closely with the identity as depth increases.
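As a quick illustration (ours, not the paper's analysis code), the metric is a one-liner: a permutation that moves m of the N indices satisfies ∥P − I∥_F² = 2m, so δ(P) = 1 for the identity and δ(P) = 0 when every index moves, e.g., for a full cyclic shift.

```python
import torch

def dist_to_identity(P: torch.Tensor) -> float:
    """delta(P) = 1 - ||P - I||_F / sqrt(2N) for a permutation matrix P:
    1.0 means no shuffling (P = I); values toward 0 mean stronger reordering."""
    N = P.shape[0]
    return 1.0 - torch.linalg.norm(P - torch.eye(N), ord='fro').item() / (2 * N) ** 0.5

I6 = torch.eye(6)
cyc = torch.roll(I6, shifts=1, dims=0)              # cyclic shift: every index moves
print(dist_to_identity(I6), dist_to_identity(cyc))  # -> 1.0 and ~0.0
```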
6.4 ABLATION STUDY

Row vs. column permutations. Training structured sparse networks with column or row permutations yields networks with similar generalization performance. Tbl. 10 compares a structured sparse ViT-B/16 trained on ImageNet-1K with a column vs. a row permutation matrix: instead of formulating a linear layer as y = WPx, we train with y = PWx, keeping all other hyperparameters identical. We observe no significant difference between the two in generalization performance.

7 CONCLUSION

We theoretically show that learning one permutation matrix per layer restores the expressivity lost to structured sparsity while preserving accelerator-friendly layouts. We back these claims with experimental results showing that training structured sparsity with the resulting PA-DST algorithm attains unstructured-level accuracy while still realizing speed-ups during training and inference.

REFERENCES

Davis Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, and John Guttag. What is the state of neural network pruning? Proceedings of Machine Learning and Systems, 2:129–146, 2020.

Yaohui Cai, Weizhe Hua, Hongzheng Chen, G Edward Suh, Christopher De Sa, and Zhiru Zhang. Structured pruning is all you need for pruning CNNs at initialization. arXiv preprint arXiv:2203.02549, 2022.

Ben Cottier. Trends in the dollar training cost of machine learning systems. Epoch, January 31, 2023.

Tri Dao, Nimit S Sohoni, Albert Gu, Matthew Eichhorn, Amit Blonder, Megan Leszczynski, Atri Rudra, and Christopher Ré. Kaleidoscope: An efficient, learnable representation for all structured linear maps. arXiv preprint arXiv:2012.14966, 2020.

Tri Dao, Beidi Chen, Kaizhao Liang, Jiaming Yang, Zhao Song, Atri Rudra, and Christopher Ré. Pixelated butterfly: Simple and efficient sparse training for neural network models. arXiv preprint arXiv:2112.00029, 2021.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255. IEEE, 2009.

Alexey Dosovitskiy. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.

Utku Evci, Trevor Gale, Jacob Menick, Pablo Samuel Castro, and Erich Elsen. Rigging the lottery: Making all tickets winners. In International Conference on Machine Learning, pp. 2943–2952. PMLR, 2020.

Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Finding sparse, trainable neural networks. arXiv preprint arXiv:1803.03635, 2018.

Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. Advances in Neural Information Processing Systems, 28, 2015.

Itay Hubara, Brian Chmiel, Moshe Island, Ron Banner, Joseph Naor, and Daniel Soudry. Accelerated sparse neural training: A provable and efficient method to find N:M transposable masks. Advances in Neural Information Processing Systems, 34:21099–21111, 2021.

Ajay Kumar Jaiswal, Haoyu Ma, Tianlong Chen, Ying Ding, and Zhangyang Wang. Training your sparse neural network better with any mask. In International Conference on Machine Learning, pp. 9833–9844. PMLR, 2022.
Peng Jiang, Lihan Hu, and Shihui Song. Exposing and exploiting fine-grained block structures for fast and accurate sparse training. Advances in Neural Information Processing Systems, 35:38345–38357, 2022.

Mike Lasby, Anna Golubeva, Utku Evci, Mihai Nica, and Yani Ioannou. Dynamic sparse training with structured sparsity. arXiv preprint arXiv:2305.02299, 2023.

Shaohui Lin, Rongrong Ji, Chenqian Yan, Baochang Zhang, Liujuan Cao, Qixiang Ye, Feiyue Huang, and David Doermann. Towards optimal structured CNN pruning via generative adversarial learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2790–2799, 2019.

Shiwei Liu, Decebal Constantin Mocanu, and Mykola Pechenizkiy. On improving deep learning generalization with adaptive sparse connectivity. arXiv preprint arXiv:1906.11626, 2019.

Haiquan Lu, Yefan Zhou, Shiwei Liu, Zhangyang Wang, Michael W Mahoney, and Yaoqing Yang. AlphaPruning: Using heavy-tailed self regularization theory for improved layer-wise pruning of large language models. arXiv preprint arXiv:2410.10912, 2024.

Ioanna Lykiardopoulou. AI beats humans for the first time in physical skill game. https://www.newscientist.com/article/2402645, December 2023. Accessed: 2025-01-20.

Jiancheng Lyu, Shuai Zhang, Yingyong Qi, and Jack Xin. AutoShuffleNet: Learning permutation matrices via an exact Lipschitz continuous penalty in deep convolutional neural networks. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 608–616, 2020.

Mitch Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330, 1993.

Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843, 2016.

Decebal Constantin Mocanu, Elena Mocanu, Peter Stone, Phuong H Nguyen, Madeleine Gibescu, and Antonio Liotta. Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science. Nature Communications, 9(1):2383, 2018.

Dmitry Molchanov, Arsenii Ashukha, and Dmitry Vetrov. Variational dropout sparsifies deep neural networks. In International Conference on Machine Learning, pp. 2498–2507. PMLR, 2017.

Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, and Jan Kautz. Pruning convolutional neural networks for resource efficient inference. arXiv preprint arXiv:1611.06440, 2016.

Guido Montúfar, Razvan Pascanu, Kyunghyun Cho, and Yoshua Bengio. On the number of linear regions of deep neural networks. Advances in Neural Information Processing Systems, 27, 2014.

Hesham Mostafa and Xin Wang. Parameter efficient training of deep convolutional neural networks by dynamic sparse reparameterization. In International Conference on Machine Learning, pp. 4646–4655. PMLR, 2019.

Maxim Naumov, L Chien, Philippe Vandermersch, and Ujval Kapasi. cuSPARSE library. In GPU Technology Conference, volume 12, 2010.

Jeff Pool and Chong Yu. Channel permutations for N:M sparsity. Advances in Neural Information Processing Systems, 34:13316–13327, 2021.

Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9, 2019.

Matthew Sparkes. Game-playing DeepMind AI can beat top humans at chess, Go and poker. https://thenextweb.com/news/, November 2023. Accessed: 2025-01-20.
Hidenori Tanaka, Daniel Kunin, Daniel L Yamins, and Surya Ganguli. Pruning neural networks without any data by iteratively conserving synaptic flow. Advances in Neural Information Processing Systems, 33:6377–6389, 2020.

Ilya O Tolstikhin, Neil Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Thomas Unterthiner, Jessica Yung, Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, et al. MLP-Mixer: An all-MLP architecture for vision. Advances in Neural Information Processing Systems, 34:24261–24272, 2021.

Abhishek Tyagi, Arjun Iyer, William H Renninger, Christopher Kanan, and Yuhao Zhu. Dynamic sparse training of diagonally sparse networks. arXiv preprint arXiv:2506.11449, 2025.

Geng Yuan, Xiaolong Ma, Wei Niu, Zhengang Li, Zhenglun Kong, Ning Liu, Yifan Gong, Zheng Zhan, Chaoyang He, Qing Jin, et al. MEST: Accurate and fast memory-economic sparse training framework on the edge. Advances in Neural Information Processing Systems, 34:20838–20850, 2021.

Xin-Jie Zhang, Jack Murdoch Moore, Gang Yan, and Xiang Li. Universal structural patterns in sparse recurrent neural networks. Communications Physics, 6(1):243, 2023.

Yingtao Zhang, Jialin Zhao, Wenjing Wu, Alessandro Muscoloni, and Carlo Vittorio Cannistraci. Epitopological learning and Cannistraci-Hebb network shape intelligence brain-inspired theory for ultra-sparse advantage in deep learning. In The Twelfth International Conference on Learning Representations, 2024.

Yingtao Zhang, Jialin Zhao, Wenjing Wu, Ziheng Liao, Umberto Michieli, and Carlo Vittorio Cannistraci. Brain-inspired sparse training enables transformers and LLMs to perform as fully connected. arXiv preprint arXiv:2501.19107, 2025.

Appendix

A MAPPING DENSITY TO PATTERN PARAMETERS

Given a target per-layer density δ ∈ (0, 1] and input size n_in, we choose the smallest feasible integers so that the per-row nnz ≈ δ n_in:

    K = B = round(δ n_in), \qquad 2b+1 = \text{nearest odd to } δ n_in, \qquad α = N/M = δ \; \text{(tied N:M)}.

For non-cyclic bands/diagonals, use wrap-around (or adjust a few edge rows by ±1 nnz) so the total nnz matches the target. In our ViT-L/16 surrogate at δ = 0.05: n_in = 1024 ⇒ K = B = 51 and 2b+1 = 51; n_in = 4096 ⇒ K′ = B′ = 205 and 2b′+1 = 205; tied N:M uses α = 0.05 throughout.
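The mapping is simple enough to compute programmatically; the sketch below (ours) reproduces the surrogate values quoted above. When round(δ n_in) is even we take the next odd number for 2b+1, which is one way to realize the "nearest odd" rule.

```python
def pattern_params(density: float, n_in: int) -> dict:
    """Map a target density to per-pattern parameters (App. A):
    per-row nnz ~= density * n_in, and alpha = N/M = density for tied N:M."""
    nnz = round(density * n_in)
    band = nnz if nnz % 2 == 1 else nnz + 1   # 2b+1: next odd if nnz is even
    return {"K": nnz, "B": nnz, "2b+1": band, "alpha": density}

# ViT-L/16 surrogate at 5% density (values quoted in App. A)
print(pattern_params(0.05, 1024))  # K = B = 51,  2b+1 = 51
print(pattern_params(0.05, 4096))  # K = B = 205, 2b+1 = 205
```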
B WORKED-EXAMPLE CALCULATIONS (DETAILS)

Using the master bound Eqn. 1 and the span update u_ℓ = min{d_0, u_{ℓ−1} + r_struct(n_in,ℓ)} with d_0 = 1024, the alternating widths yield a per-block gain

    r_pair = r_struct(1024) + min{r_struct(4096), d_0} = 51 + 205 = 256.

Thus u_{2t} = min{1024, 256t}, and dense-like factors are guaranteed once u_{2t} = 1024, i.e., after t = ⌈1024/256⌉ = 4 blocks (≈ 8 layers). Without mixing, u_ℓ ≡ 51 and the per-layer factor remains strictly below dense at every depth.

C PRACTICAL CONSEQUENCES & PREDICTIONS OF OUR THEORY

We now analyze the impact of our theory via a concise worked example where all structured families operate at (approximately) the same sparsity level. To stay within our ReLU/MLP framework, we use an MLP surrogate whose layer sizes are motivated by ViT-L/16, which has 24 encoder blocks (307M params). Each block contains a two-layer FFN with d_model = 1024 and d_ff = 4096. We analyze the MLP that stacks these FFN layers in order,

    1024 → 4096 → 1024 (block 1 FFN) → 1024 → 4096 → 1024 (block 2 FFN) → · · · (24 blocks),

yielding a depth-L = 48 ReLU-MLP used solely to set widths for the master bound Eqn. 1. At 95% sparsity (density δ = 0.05), the structural caps (Sec. 3.4) are r_struct(1024) = 51 and r_struct(4096) = 205 for Diagonal-K, Banded-b (with 2b+1 odd), Block-B, and tied N:M with α = N/M = 0.05 (mapping in App. A).

Consequences. Without mixing, all axis-structured families stall at k_ℓ ≤ 51 (no multiplicative growth). With one mixer per layer (permutations suffice), the span budget grows additively; because widths alternate 1024 ↔ 4096, each block contributes 51 + 205 = 256 fresh directions toward d_0 = 1024, so dense-like per-layer factors are guaranteed after ⌈1024/256⌉ = 4 blocks (≈ 8 layers). In this setting, the structured families share the same catch-up point (4 blocks) when mixing is used.

C.1 COMBINATORIAL EXPRESSIVITY EXAMPLE

Take d_0 = 4, equal widths n_ℓ = 8, and L = 3 layers.
1. Dense / unstructured (RigL/SET-like): k_ℓ = min{n_ℓ, d_0} = 4 at every layer. Per-layer factor \sum_{j=0}^{4} \binom{8}{j} = 1 + 8 + 28 + 56 + 70 = 163. Hence NLR ≥ 163³.
2. Block-B without permutations, B = 2: k_ℓ = min{n_ℓ, B} = 2 at every layer (no new directions with depth). Per-layer factor 1 + 8 + 28 = 37. Hence NLR ≥ 37³.
3. Block-B with a learned permutation at each layer, B = 2 (r_s = 2): u_0 = 0 → u_1 = 2 and u_2 = 4; thereafter u_ℓ = 4. Per-layer factors: layer 1 gives 37, layers 2 and 3 give 163 each. Thus NLR ≥ 37 · 163 · 163. After a one-layer overhead, the per-layer factor matches dense.

This concretely illustrates the phenomenon: structure alone stalls multiplicative growth; adding permutations restores it after a short, explicit warm-up in depth.

Table 1: Lower bounds summary (instantiate Eqn. 1). Here d_0 is the input dimension, n_ℓ the width, and r_s ∈ {K, 2b+1, B} the per-layer structural cap for diagonal/banded/block. For tied N:M, α = N/M.

Setting | Effective k_ℓ | Span recursion u_ℓ | Depth overhead
Dense | min{n_ℓ, d_0} | u_ℓ = d_0 | 0
Unstructured DST (free masks) | min{n_ℓ, d_0} | u_ℓ = d_0 | 0
N:M (free supports) | min{n_ℓ, d_0} | u_ℓ = d_0 | 0
N:M (tied template) | min{n_ℓ, αu_{ℓ−1}} | u_ℓ = u_{ℓ−1} | – (stalls)
Diagonal-K (no perm) | min{n_ℓ, K} | u_ℓ = min{d_0, K} | – (stalls)
Banded-b (no perm) | min{n_ℓ, 2b+1} | u_ℓ = min{d_0, 2b+1} | – (stalls)
Block-B (no perm) | min{n_ℓ, B} | u_ℓ = min{d_0, B} | – (stalls)
Diagonal-K + permutation | min{n_ℓ, u_ℓ} | u_ℓ = min{d_0, u_{ℓ−1} + K} | ⌈d_0/K⌉
Banded-b + permutation | min{n_ℓ, u_ℓ} | u_ℓ = min{d_0, u_{ℓ−1} + 2b+1} | ⌈d_0/(2b+1)⌉
Block-B + permutation | min{n_ℓ, u_ℓ} | u_ℓ = min{d_0, u_{ℓ−1} + B} | ⌈d_0/B⌉

C.2 PERMUTATION LEARNING SCHEDULE & OVERHEAD

Fig. 5: Tracking the loss of the permutation matrices based on Eqn. 14 (permutation penalty vs. training epoch for a ViT-B/16 at 90% sparsity, per layer: attn.proj, mlp.fc1, mlp.fc2 across blocks 0–11). We plot the loss every 10 training epochs.

C.2.1 PERMUTATION LEARNING SCHEDULE

While training, we can either learn the permutations until the end of training or find a way to stop early without losing out on optimal permutations. From our experiments, we find it useful to monitor the loss of the soft-permutation matrix. We set a threshold loss value δ at which point we switch from a soft to a hard permutation matrix for that particular layer. This means that instead of carrying out a multiplication operation for that layer, we can instead do re-indexing, which is a much cheaper operation. We show in Fig. 5 the loss associated with each permutation matrix when training a ViT-B/16 network at 90% sparsity with DynaDiag. The loss decreases drastically and saturates after the knee in the plot.
Moreover, we see a clear trend that earlier permutation matrices take longer to reach this knee point. For the given experiment, we set δ = 0.22; Fig. 6 shows when each layer reaches that threshold, at which point we stop training the permutation matrix corresponding to that layer.

Fig. 6: The epoch at which each layer of a ViT-B/16 network reaches the permutation-penalty threshold (0.22), per layer type (mlp.fc1, mlp.fc2, attn.proj) across blocks 0–11. The cutoff epoch varies drastically across the network, which we use to our advantage to reduce training time.

C.2.2 OVERHEADS AND ADDITIONAL RESULTS

The tables below quantify the training overhead associated with various permutation methods. Specifically, Tbl. 2 and Tbl. 3 detail the memory overhead in gigabytes and as a percentage for the GPT-2 Small model, using diagonal and N:M sparsity, respectively. Tbl. 4 presents a similar memory-overhead analysis for the ViT-B/16 model, at the higher 90% and 95% sparsity levels. Finally, Tbl. 5 expands this analysis to the GPT-2 Medium model, showing the overhead for both training time (in hours) and memory. Across all tables, we compare permutation strategies such as AutoShufflePerm and KaleidoscopePerm against their corresponding non-permuted baselines to clearly illustrate the computational trade-offs.

Table 2: Memory overhead of permutation methods for GPT-2 Small with diagonal sparsity on WikiText-103. The overhead percentage is calculated relative to the DynaDiag method.

Method | Memory (GB) @ 60% | % Overhead | Memory (GB) @ 80% | % Overhead
Unstructured | 133.64 | – | 132.12 | –
DynaDiag | 137.32 | (Baseline) | 132.82 | (Baseline)
+ FixedRandPerm | 139.21 | +1.38% | 134.09 | +0.96%
+ PA-DST | 156.17 | +13.73% | 145.41 | +9.48%
+ KaleidoscopePerm | 141.27 | +2.88% | 137.43 | +3.47%

Table 3: Memory overhead of permutation methods for GPT-2 Small with SRigL on WikiText-103. The overhead percentage is calculated relative to the SRigL method.

Method | Memory (GB) @ 60% | % Overhead | Memory (GB) @ 80% | % Overhead
Unstructured | 133.64 | – | 132.12 | –
SRigL | 135.12 | (Baseline) | 131.42 | (Baseline)
+ FixedRandPerm | 136.91 | +1.32% | 132.65 | +0.94%
+ PA-DST | 148.23 | +9.70% | 143.19 | +8.96%

Table 4: Memory overhead of permutation methods for ViT-B/16 with diagonal sparsity on ImageNet-1K. The overhead percentage is calculated relative to the DynaDiag method.

Method | Memory (GB) @ 90% | % Overhead | Memory (GB) @ 95% | % Overhead
Unstructured | 112.64 | – | 112.64 | –
DynaDiag | 114.50 | (Baseline) | 111.43 | (Baseline)
+ PA-DST | 138.38 | +20.86% | 135.01 | +21.16%
+ KaleidoscopePerm | 120.67 | +5.39% | 117.43 | +5.38%
Table 5: Time and memory overhead of permutation methods for GPT-2 Medium with diagonal sparsity on WikiText-103. The overhead percentage is calculated relative to the DynaDiag method.

Sparsity @ 60% — Method | Time (h) | % Overhead | Memory (GB) | % Overhead
Unstructured | 36.43 | – | 152.34 | –
DynaDiag | 33.21 | (Base) | 149.64 | (Base)
+ FixedRandPerm | 33.39 | +0.54% | 149.41 | −0.15%
+ PA-DST | 37.49 | +12.88% | 156.17 | +4.36%
+ KaleidoscopePerm | 41.98 | +26.41% | 155.27 | +3.76%

Sparsity @ 80% — Method | Time (h) | % Overhead | Memory (GB) | % Overhead
Unstructured | 36.32 | – | 152.32 | –
DynaDiag | 29.82 | (Base) | 142.82 | (Base)
+ FixedRandPerm | 29.23 | −1.98% | 144.09 | +0.89%
+ PA-DST | 34.28 | +14.95% | 145.41 | +1.81%
+ KaleidoscopePerm | 37.98 | +27.36% | 137.43 | −3.77%

C.3 EXPERIMENT DETAILS

All experiments are conducted on NVIDIA Tesla A100 GPUs with the following configuration:
• Model: NVIDIA A100 40GB
• Memory: 40GB HBM2e
• Memory Bandwidth: ∼2.0 TB/s
• TDP: 400W (PCIe: 300W)
• Peak FP32 Performance: ∼19.5 TFLOPS
• Peak FP16 Performance: ∼312 TFLOPS

C.3.1 DATASETS

1. ImageNet-1K (Deng et al., 2009) covers 1,000 object classes, with 1.28M training, 50,000 validation, and 100,000 test images. Images are typically resized and cropped to 224 × 224 for processing.
2. WikiText-103 (Merity et al., 2016) comprises over 100 million tokens extracted from verified Wikipedia articles. It is significantly larger than other language datasets, such as the Penn Treebank (PTB) (Marcus et al., 1993).

Table 6: Configuration of the CIFAR10 and CIFAR100 experiments with MLPMixer.

Parameter | Value
Adam β1 | 0.9
Adam β2 | 0.99
AutoAugment | True
Batch Size | 128
CutMix Probability | 0.5
CutMix β | 1.0
Dropout | 0.0
Epochs | 300
Hidden C | 512
Hidden S | 64
Hidden | 128
(Initial LR, Final LR) | (10⁻³, 10⁻⁶)
Label Smoothing | 0.1
Layers | 8
LR Scheduler | Cosine
Optimizer | Adam
Random Seed | 3407
Weight Decay | 5 × 10⁻⁵
Warmup | 5 epochs

Table 7: Configuration of the ImageNet experiments with ViT-Base and MLPMixer. Here X represents any of the sparse training methods that train a ViT-Base network.

Model | Optimizer | Weight Decay | Learning Rate | Drop Path | Warmup/Epoch
ViT-Base | AdamW | 0.05 | 0.001 | 0.1 | 5/300
X-ViT-Base | AdamW | 0.05 | 0.001 | 0 | 5/300
Mixer-Small | AdamW | 0.1 | 0.001 | 0.1 | 5/300
X-Mixer-Small | AdamW | 0.1 | 0.001 | 0 | 5/300

Table 8: Configuration of the ImageNet experiments with ViT-Large and ViT-Huge.

Parameter | Value
Batch size | 256
Optimizer | AdamW
Learning Rate (LR) | 3 × 10⁻³
LR decay | cosine
Weight decay | 0.02
Warmup epochs | 5
Label smoothing ε | 0.1
Dropout | ✗
Stochastic Depth | ✓
Repeated Aug | ✓
Gradient Clipping | 1.0
Horizontal flip | ✓
Random Resized Crop (RRC) | ✓
Rand Augment | ✗
3-Augment (ours) | ✓
LayerScale | ✓
Mixup α | 0.8
Cutmix α | 1.0
Erasing prob. | ✗
ColorJitter | 0.3
Test crop ratio | 1.0
Loss | BCE

Table 9: Configuration of the WikiText-103 experiments with GPT-2 Small.

Model | Optimizer | Weight Decay | Learning Rate | Dropout | Warmup/Epoch
GPT-2-Small | AdamW | 0.1 | 0.0001 | 0.1 | 5/100
DynaDiag | AdamW | 0.1 | 0.0001 | 0.1 | 5/100

C.4 RAW VALUES
Table 10: Top-1 accuracy of structured sparse training methods at varying sparsities for a ViT-B/16 on ImageNet-1K (dense accuracy = 78.5). The results shown are from three runs for each data point. There is no significant difference between the generalization performance of networks with learnt row and column permutations.

Method | Perm. | 60% | 70% | 80% | 90% | 95%
SRigL | Col | 78.04 ± 0.011 | 78.02 ± 0.008 | 77.83 ± 0.009 | 76.16 ± 0.007 | 69.24 ± 0.008
PixelatedBFly | Col | 78.10 ± 0.005 | 78.04 ± 0.007 | 77.49 ± 0.007 | 74.09 ± 0.008 | 62.82 ± 0.006
DSB | Col | 78.11 ± 0.009 | 77.95 ± 0.008 | 76.34 ± 0.005 | 73.09 ± 0.004 | 64.49 ± 0.004
DynaDiag | Col | 78.53 ± 0.007 | 78.26 ± 0.003 | 77.85 ± 0.005 | 77.19 ± 0.004 | 70.12 ± 0.007
SRigL | Row | 78.03 ± 0.007 | 78.02 ± 0.001 | 77.83 ± 0.007 | 76.16 ± 0.005 | 69.24 ± 0.007
PixelatedBFly | Row | 78.12 ± 0.004 | 78.04 ± 0.011 | 77.49 ± 0.003 | 74.09 ± 0.002 | 62.82 ± 0.009
DSB | Row | 78.09 ± 0.004 | 77.95 ± 0.005 | 76.34 ± 0.003 | 73.09 ± 0.004 | 64.49 ± 0.009
DynaDiag | Row | 78.54 ± 0.006 | 78.26 ± 0.008 | 77.85 ± 0.006 | 77.19 ± 0.003 | 70.12 ± 0.010

C.5 DETAILS OF LAYERS SPARSIFIED

In our ViT-B/16 experiments, we applied sparsity to the initial patch projection layer, the MLP layers, and the output projections of the multi-head attention (MHA) modules. For the GPT models, we sparsified all attention and MLP layers.

D USE OF AI

We acknowledge the use of Google's Gemini and ChatGPT for assistance in editing the manuscript. The tools were used to refine sentence structure, correct grammatical errors, and improve the overall readability of the text. The intellectual contributions, including all methodologies, analyses, and conclusions, are solely those of the authors, who bear full responsibility for the final content of this work.
Table 11: Top-1 accuracy of sparse training methods at varying sparsities on ImageNet-1K. We bold results that are not significantly different (based on paired asymptotic McNemar tests, α = 0.05) from the best-performing method (marked with a *) in each column. We see an increase in the generalization performance of all the structured sparse networks.

ViT-B/16 (dense accuracy = 78.5):
Method | Perm. | Struc. | 60% | 70% | 80% | 90% | 95%
RigL | – | no | 79.75 | 79.28 | 78.71 | 77.24 | 71.50
SET | – | no | 78.15 | 78.01 | 77.78 | 77.01 | 71.48
CHT | – | no | 79.78 | 79.37 | 79.06* | 77.66* | 71.68*
CHTs | – | no | 79.88* | 79.38* | 79.05 | 77.54 | 71.61
MEST | – | no | 78.04 | 77.76 | 77.39 | 76.45 | 69.67
SRigL | – | yes | 77.79 | 77.84 | 77.35 | 75.90 | 68.70
PixelatedBFly | – | yes | 78.04 | 77.90 | 77.31 | 73.89 | 62.52
DSB | – | yes | 77.98 | 77.85 | 76.26 | 72.89 | 64.17
DynaDiag | – | yes | 78.29 | 77.94 | 77.62 | 76.91 | 69.54
SRigL | Random | yes | 77.95 | 77.81 | 77.31 | 75.69 | 68.74
PixelatedBFly | Random | yes | 77.91 | 77.94 | 77.34 | 73.93 | 62.45
DSB | Random | yes | 78.06 | 77.84 | 76.27 | 72.84 | 64.23
DynaDiag | Random | yes | 78.21 | 77.92 | 77.67 | 76.93 | 69.54
SRigL | PA-DST | yes | 78.04 | 78.02 | 77.83 | 76.16 | 69.24
PixelatedBFly | PA-DST | yes | 78.10 | 78.04 | 77.49 | 74.09 | 62.82
DSB | PA-DST | yes | 78.11 | 77.95 | 76.34 | 73.09 | 64.49
DynaDiag | PA-DST | yes | 78.53 | 78.26 | 77.85 | 77.19 | 70.12

ViT-L/16 (dense accuracy = 82.2):
Method | Perm. | Struc. | 60% | 70% | 80% | 90% | 95%
RigL | – | no | 81.85* | 81.57* | 81.70* | 78.26* | 72.11*
SRigL | – | yes | 79.87 | 78.94 | 77.54 | 75.46 | 66.68
PixelatedBFly | – | yes | 79.13 | 79.06 | 79.33 | 75.12 | 66.59
DSB | – | yes | 79.44 | 77.46 | 75.34 | 73.55 | 66.77
DynaDiag | – | yes | 81.52 | 81.46 | 81.37 | 77.74 | 69.59
SRigL | Random | yes | 79.94 | 78.95 | 77.55 | 75.56 | 66.74
PixelatedBFly | Random | yes | 79.14 | 79.19 | 79.34 | 75.29 | 66.70
DSB | Random | yes | 79.42 | 77.51 | 75.39 | 73.71 | 66.74
DynaDiag | Random | yes | 80.21 | 80.16 | 79.52 | 75.31 | 69.56
SRigL | PA-DST | yes | 80.29 | 79.63 | 77.96 | 76.78 | 66.86
PixelatedBFly | PA-DST | yes | 79.33 | 79.23 | 80.33 | 75.67 | 66.79
DSB | PA-DST | yes | 79.59 | 77.61 | 75.51 | 73.79 | 66.91
DynaDiag | PA-DST | yes | 81.66 | 81.59 | 81.49 | 77.96 | 70.16

Mixer-S/16 (dense accuracy = 72.4):
Method | Perm. | Struc. | 60% | 70% | 80% | 90% | 95%
RigL | – | no | 73.21* | 73.23* | 73.47* | 73.01* | 70.41*
SRigL | – | yes | 71.89 | 72.05 | 71.71 | 70.21 | 66.87
PixelatedBFly | – | yes | 71.95 | 71.91 | 71.45 | 69.17 | 67.85
DSB | – | yes | 69.94 | 70.21 | 68.90 | 65.16 | 60.88
DynaDiag | – | yes | 72.92 | 72.95 | 73.05 | 72.31 | 68.89
SRigL | Random | yes | 71.84 | 72.07 | 71.74 | 70.21 | 66.86
PixelatedBFly | Random | yes | 71.91 | 71.94 | 71.49 | 69.17 | 67.91
DSB | Random | yes | 69.93 | 70.23 | 68.81 | 65.16 | 60.81
DynaDiag | Random | yes | 72.93 | 72.41 | 72.32 | 71.91 | 68.01
SRigL | PA-DST | yes | 72.56 | 72.91 | 72.79 | 71.24 | 67.16
PixelatedBFly | PA-DST | yes | 72.14 | 72.09 | 71.86 | 69.47 | 68.71
DSB | PA-DST | yes | 70.22 | 70.39 | 69.12 | 65.16 | 61.08
DynaDiag | PA-DST | yes | 73.09 | 73.11 | 73.21 | 72.51 | 69.19

Table 12: Perplexity of sparse training methods at varying levels of sparsity on WikiText-103 (lower is better). We bold results that are not significantly different from the best-performing method (marked with a *) based on paired asymptotic McNemar tests (α = 0.05). We see an improvement in the PPL score for all structured sparse training methods with permutations.

GPT2-S (dense PPL = 22.21):
Method | Perm. | 40% | 50% | 60% | 80% | 90%
RigL | – | 22.34* | 22.80 | 23.79* | 29.87* | 53.76*
SRigL | – | 22.74 | 23.19 | 25.09 | 31.08 | 62.55
PixelatedBFly | – | 22.50 | 23.25 | 25.98 | 34.89 | 66.44
DynaDiag | – | 22.60 | 22.74 | 24.67 | 30.46 | 56.33
SRigL | Random | 22.70 | 23.19 | 25.21 | 31.11 | 62.42
PixelatedBFly | Random | 22.54 | 23.22 | 26.09 | 34.84 | 66.46
DynaDiag | Random | 22.61 | 23.19 | 25.12 | 31.69 | 57.61
SRigL | PA-DST | 22.41 | 22.94 | 24.59 | 30.40 | 60.96
PixelatedBFly | PA-DST | 22.41 | 23.01 | 25.69 | 34.71 | 66.06
DynaDiag | PA-DST | 22.44 | 22.69* | 24.51 | 30.26 | 55.49

GPT2-M (dense PPL = 20.18):
Method | Perm. | 40% | 50% | 60% | 80% | 90%
RigL | – | 20.45* | 21.60* | 23.49* | 28.87* | 51.76*
SRigL | – | 21.14 | 22.59 | 26.09 | 32.16 | 55.66
PixelatedBFly | – | 20.86 | 22.49 | 25.45 | 34.24 | 56.09
DynaDiag | – | 20.69 | 22.14 | 24.98 | 29.65 | 54.87
SRigL | Random | 21.19 | 22.55 | 26.01 | 32.19 | 55.69
PixelatedBFly | Random | 20.90 | 22.51 | 25.44 | 34.22 | 56.01
DynaDiag | Random | 21.65 | 22.67 | 25.17 | 30.39 | 54.81
SRigL | PA-DST | 20.57 | 22.20 | 25.04 | 29.89 | 55.13
PixelatedBFly | PA-DST | 20.69 | 22.23 | 25.31 | 33.19 | 55.71
DynaDiag | PA-DST | 20.55 | 21.91 | 24.71 | 29.21 | 54.26
Published as a conference paper at ICLR 2026 EFFICIENT DYNAMIC STRUCTURED SPARSE TRAINING WITH LEARNED SHUFFLES Abhishek Tyagi1 Arjun Iyer2 Liam Young2 William H. Renninger2 Christopher Kanan1 Yuhao Zhu1 1 2The { , ABSTRACT Structured sparsity accelerates training and inference on modern GPUs, yet it still trails unstructured dynamic sparse training (DST) in accuracy. The shortfall stems from a loss of expressivity: whereas a dense layer can realise every possible mask obtained by choosing any w active weights out of n, a fixed block, or N:M layout explores only a subset of those possibilities. We propose to close this gap by learning, for each layer, a single permutation matrix jointly with the structured weight matrix. Applied to three canonical structures-block, N:M and diagonals-we show that permutation-augmented DST (PA-DST) matches unstructured baselines (RigL, SET) at 90-95% sparsity on ImageNet-1K (ViT-B/16) and WikiText-103 (GPT-2), yet trains upto 1.21× and infers up to 2.9× faster. The results position structure + learned permutation as a sweet-spot between accuracy and efficiency. 1 INTRODUCTION Over the years, deep neural networks (DNNs) have grown, and their performance on complex tasks has increased to and beyond human-level performance (Sparkes, 2023; Lykiardopoulou, 2023). However, the cost of training and inference for such large DNNs has increased drastically (Cottier, 2023). A principled way to curb this cost, while retaining dense level accuracy, is to exploit sparsity by removing unnecessary weights via pruning (Molchanov et al., 2016; Tanaka et al., 2020) or training directly in the sparse regime (Jaiswal et al., 2022; Zhang et al., 2023), often achieving similar algorithmic performance as dense models under comparable budgets (Frankle & Carbin, 2018; Blalock et al., 2020; Mostafa & Wang, 2019). Weights are typically made sparse either unstructurally, by zeroing arbitrary individual parameters (Evci et al., 2020; Han et al., 2015), or structurally, by enforcing hardware-friendly patterns such as blocks or N :M groups (Liu et al., 2019). Unstructured sparsity preserves accuracy at high sparsity levels but is hard to accelerate on modern accelerators due to irregular memory access and limited kernel support. Structured sparsity aligns well with vendor-optimized kernels, yet it commonly underperforms its unstructured counterpart-especially at extreme sparsity-because rigid patterns reduce the expressivity of the network. Moreover, many structured approaches still rely on dense backpropagation or lose structure under transposition, limiting practical training speedups (Lasby et al., 2023; Hubara et al., 2021). This work revisits structured sparsity from the angle of permutations. Our key observation is that permutations help bridge the gap between the performance of structured and unstructured sparsity. We show the benefits of co-learning the structure and permutations. Concretely, for each sparsified layer lwith input xlwe learn a single permutation Πland parameterize the layer as yl= Sl Πlxl , where Slobeys a fixed, accelerator-friendly structured sparsity pattern (e.g., block or N :M). While Slalone constrains the class of linear maps too severely, composing it with a learned permutation strictly enlarges the representable family (it recovers the classical structured model when Πl= I and extends it otherwise), thereby restoring much of the expressivity lost to structure. 
Importantly, the class is closed under transposition: (SlΠl)⊤= Π⊤ lS⊤ l, so the backward pass remains within 1 16 Oct 2025 Published as a conference paper at ICLR 2026 Fig. 1: Overview of the PA-DST training approach. The training starts by initializing the target weight matrices with the desired sparse structure and a soft permutation matrix. As the training progresses, the soft permutation matrix starts moving towards a permutation matrix, and by the end of the training, we obtain the final sparse weight matrix and the permutation matrix. The permutation matrix is then used to reindex the activations of the layers so that an explicit matrix multiplication operation can be avoided during inference. More details can be found in Sec. 4.3. the same permutation-structured family, enabling sparse-to-sparse acceleration in both forward and backward passes. In addition to reintroducing expressivity, we learn the sparsity itself during training. Rather than fixing the position of non-zeros a priori, we dynamically adapt the structured mask of Slaccording to task-driven criteria, coupling optimal non-zero positioning with the learned permutation Πl. As we show in Sec. 6, this permutation-structured recipe delivers strong accuracy-speed trade-offs across architectures and sparsity regimes. This paper makes the following contributions: 1. We present, Permutation-Augmented DST (PA-DST), a general layer formulation that combines any structured sparsity with a single learnt permutation matrix. We show that existing methods of learning permutations can be employed to transformer based models to improve their generalization performance. 2. We prove tight combinatorial bounds showing that the added permutation recovers the expressivity lost by structured sparsity as compared to dense and unstructured sparse models. 3. We demonstrate that with learnt permutations, structured sparse methods can achieve unstructured level accuracy across both vision (on ImageNet-1K) and language tasks (on WikiText-103), while achieving up to 1.21× training and 2.9× inference speedup at 90% sparsity for a ViT-B/16 network. 2 RELATED WORK Sparse training. Classical pruning removes small-magnitude or low-salience weights from a pretrained dense model and then fine-tunes the model. Despite strong compression, this approach still pays the dense pre-training cost (Molchanov et al., 2017; Cai et al., 2022; Lin et al., 2019; Lu et al., 2024). Lottery Ticket Hypothesis (LTH) (Frankle & Carbin, 2018) shows that dense nets contain sparse "winning tickets," but training sparse models from scratch can be more compute-intensive than dense training itself. To avoid dense pre-training, sparse-from-scratch methods fall into static and dynamic regimes. Static sparse training (SST) fixes a mask at initialization (e.g., Pixelated Butterfly (Dao et al., 2021)) but tends to underperform dynamic strategies at high sparsity. Dynamic sparse training (DST) updates connectivity during learning via prune-and-grow rules: SET prunes by magnitude and regrows randomly (Mocanu et al., 2018); MEST mixes magnitude and gradient signals (Yuan et al., 2021); RigL prunes by magnitude and regrows by gradient on missing connections (Evci et al., 2020); recent work explores topology-driven, gradient-free growth and N:M-style constraints for scalability (Zhang et al., 2024; 2023). 2 Published as a conference paper at ICLR 2026 Permutation learning and channel mixing. 
Beyond fixed channel shuffles, AutoShuffleNet (Lyu et al., 2020) learns channel permutations during training by optimizing a Lipschitz-continuous penalty that drives a stochastic matrix toward a permutation. The Kaleidoscope (Dao et al., 2020) (K-matrix) framework provides a differentiable, expressive parameterization that includes permutations, and it has been used to learn latent permutations in permuted-image tasks within end-to-end training. Pool et al. (Pool & Yu, 2021) use offline permutations to prune networks to a fixed N:M sparsity pattern. In contrast, we learn permutations jointly during training, and our method is pattern-agnostic, i.e., not tied to any specific structured sparsity, leading to more hardware-efficient sparse networks. 3 COMBINATORIAL EXPRESSIVITY VIA LINEAR REGIONS The question we would like to answer is: Why is there an accuracy gap between the generalization performance of a structured vs an unstructured sparse method? Structured sparsity buys efficiency because accelerators can exploit regular patterns (blocks, N:M, diagonals). Yet those same patterns limit which directions each layer can "slice" the input space along. Unstructured sparsity, and of course dense networks, do not impose such directional restrictions and typically attain higher accuracy at the same sparsity. We make this precise by measuring expressivity via the number of linear regions (NLR) of ReLU networks and by asking: (i) how structure alone changes the classic depth-multiplicative region growth, and (ii) how a single learned permutation per layer restores it when depth is sufficient. 3.1 THEORETICAL SETUP & INTUITION We consider a depth-L feed-forward ReLU network with layers l= 1, . . . , L: zl(x) = Wlal-1(x) + bl, al(x) = φ zl(x) , φ(t) = max{t, 0}, a0(x) = x ∈Rd0. Here x ∈Rd0 is the input; al-1(x) ∈Rdl-1 the activation at layer l-1; Wl∈Rdl×dl-1 and bl∈Rdlthe weight matrix and bias; zl(x) ∈Rdlthe pre-activation; and al(x) ∈Rdlthe postReLU activation (with φ applied elementwise). We write nl:= dlfor the width of layer land (d0, d1, . . . , dL) for the layer dimensions. The activation pattern is constant on convex polyhedra; each maximal such set is a linear region. Hyperplane arrangements & expressivity via NLR. Each neuron contributes an affine hyperplane that "slices" the affine subspace propagated to its layer. If, within that subspace, the hyperplanes are in subspace-general position (SGP)-i.e., their restricted normals are in general position-standard arrangement counts apply. We denote by NLR(f) the total number of linear regions of f. Classically, for dense ReLU networks with sufficiently wide layers, depth multiplies regions: each layer contributes a combinatorial factor that depends on the number of independent directions available, so NLR(f) grows multiplicatively with depth (Mont ́ufar-style bounds (Mont ́ufar et al., 2014)). A generic lower-bound template. Let klbe the effective dimension at layer l(the number of independent row-normal directions realizable inside the current region). Under SGP, NLR(f) ≥ L Y l=1 kl X j=0 nl j . (1) All architectural effects reduce to identifying kl. To reason uniformly across settings, we track an accumulated span budget u0 := 0, ul:= min{d0, ul-1 + gl}, (2) where glis the number of fresh (linearly independent) directions that layer lcan inject beyond those already spanned. The effective dimension obeys kl= min{ nl, hl}, with hl∈{ul-1, ul} (3) depending on whether the newly available directions are usable inside the current region. 
Subsequent subsections instantiate Eqn. 1-Eqn. 3 for dense matrices and for unstructured sparsity, before turning to structured sparsity and permutations.
3.2 DENSE MATRICES: RECOVERING CLASSICAL MULTIPLICATIVE GROWTH
Dense layers impose no directional restriction: any normal in the ambient subspace can be realized. Thus, there are no "fresh" directions to accumulate (g_l = 0) and the first layer already sees the full input subspace; we adopt the standard convention u_0 = d_0. From Eqn. 3,
k_l = min{n_l, u_{l-1}} = min{n_l, d_0} for all l, (4)
and plugging into Eqn. 1 yields
NLR(f) ≥ ∏_{l=1}^{L} Σ_{j=0}^{min{n_l, d_0}} C(n_l, j). (5)
If n_l ≥ d_0 for all l, then k_l = d_0 layer-wise and each factor equals Σ_{j=0}^{d_0} C(n_l, j), giving the classical statement that depth multiplies regions (Montúfar et al., 2014).
3.3 UNSTRUCTURED SPARSITY
Unstructured sparsity does not impose intrinsic directional caps: with generic weights, the row normals of W_l can span any k ≤ min{n_l, d_{l-1}} directions inside the current subspace. Therefore, as in the dense case, we take g_l = 0 and u_0 = d_0, which by Eqn. 3 gives
k_l = min{n_l, u_{l-1}} = min{n_l, d_0}, (6)
and the lower bound Eqn. 1 coincides with the dense bound:
NLR(f) ≥ ∏_{l=1}^{L} Σ_{j=0}^{min{n_l, d_0}} C(n_l, j). (7)
Interpretation. In the NLR lens, truly unstructured sparsity has the same depth-multiplicative expressivity as dense networks at a given width profile; any observed gap with structured sparsity arises from structural caps that reduce k_l (analyzed in later sections) rather than from sparsity alone.
3.4 STRUCTURED SPARSITY WITHOUT MIXING
We now turn to structured (axis-aligned) sparsity without any re-orientation across layers (e.g., no permutations/mixers). Let A_l ⊂ R^{d_{l-1}} be the set of admissible row-normal directions at layer l induced by the fixed pattern (diagonals, bands, blocks, or tied N:M groups). Define the directional rank cap at layer l by
r_l := dim span(A_l) ≤ d_{l-1}. (8)
For the axis-aligned families we study, the orientation and admissible directions are the same at every depth, so r_l = r_struct is constant across layers. Set s := min{d_0, r_struct}. Because the admissible directions lie in the same r_struct-dimensional coordinate subspace at every layer, the first layer can realize at most s independent directions and no fresh directions can be injected later. Using the global template Eqn. 1, this yields
NLR(f) ≥ ∏_{l=1}^{L} Σ_{j=0}^{min{n_l, s}} C(n_l, j). (9)
What this means. Unlike dense or unstructured layers, k_l is uniformly capped by s independently of depth, so the per-layer factor in Eqn. 1 is bounded and depth-multiplicative growth stalls.
Instantiations of r_struct for each sparsity structure: for Diagonal-K, r_struct = K, hence s = min{d_0, K}; for Block-B, r_struct = B, hence s = min{d_0, B}; and for N:M with a tied group template, r_struct = αd_0 with α = N/M, hence s = αd_0. Substituting each resulting s into Eqn. 9 gives the corresponding lower bound on NLR(f).
3.5 STRUCTURED SPARSITY WITH MIXING
We now allow a per-layer re-orientation (a learned permutation or, more generally, any full-rank mixer) applied before the axis-aligned mask. Such mixing prevents successive layers from aligning to the same small coordinate subspace, so each structured layer can contribute up to r_struct fresh independent directions until the input dimension d_0 is saturated.
Generic consequence of mixing.
Let the span budget evolve as u_0 := 0 and u_l = min{d_0, u_{l-1} + r_struct}, and use the effective-dimension cap k_l = min{n_l, u_l} in the master template Eqn. 1. This yields the mixing-enabled lower bound
NLR(f) ≥ ∏_{l=1}^{L} Σ_{j=0}^{min{n_l, u_l}} C(n_l, j), u_l = min{d_0, u_{l-1} + r_struct}. (10)
Meaning. Each layer injects r_struct new independent directions; the span budget grows additively and the dense per-layer factor is recovered after a short warm-up of
L_overhead = ⌈d_0 / r_struct⌉ layers. (11)
The recipe above applies unchanged to each axis-structured family by substituting its r_struct: for Diagonal-K, r_struct = K ⇒ u_l = min{d_0, u_{l-1} + K} and L_overhead = ⌈d_0/K⌉; for Block-B, r_struct = B ⇒ u_l = min{d_0, u_{l-1} + B} and L_overhead = ⌈d_0/B⌉; and for N:M, r_struct = αd_0 with α = N/M ⇒ u_l = min{d_0, u_{l-1} + αd_0} and L_overhead = ⌈M/N⌉. With a single mixer per layer, axis-structured sparsity thus regains dense-like, depth-multiplicative expressivity at the same widths after an explicit, structure-dependent overhead.
Why permutations (and what mixing suffices)? Any full-rank per-layer mixer that varies across depth suffices for the theory above (e.g., permutations, co-prime stride shuffles, S-random interleavers, or very sparse full-rank transforms such as butterfly-style maps). We emphasize permutations because they are (i) parameter-free and invertible, (ii) cheap and friendly to accelerator memory layouts, and (iii) preserve the axis-structure of W_l, allowing the same efficient kernels at inference. Empirically, learned permutations tend to outperform fixed random shuffles; the framework, however, only requires full rank and depth variation, not learnability per se. We discuss the practical consequences and predictions of our theory in detail in Apdx. C.
4 PERMUTATION-AUGMENTED DYNAMIC SPARSE TRAINING
This section presents PA-DST, our algorithm that learns (i) sparse weight values and positions and (ii) one permutation matrix per layer, all under a fixed global sparsity budget. In Sec. 4.1, we describe how the formulation of a linear layer changes with the introduction of a permutation matrix. In Sec. 4.2, we elaborate on our approach to learning a permutation matrix in a differentiable manner. Lastly, in Sec. 4.3, we take the example of a ViT network to point out the changes when training with permutations.
4.1 LAYER FORMULATION
For a sparse weight matrix W ∈ R^{R×C} we define the permuted weight matrix W′ ∈ R^{R×C} as
W′ = WP, P ∈ P_d, d = min{R, C}, (12)
where P is a column permutation matrix. Here, W is a sparse weight matrix where the positions of the non-zeros are learnt during training while adhering to a pre-defined structure. Row permutations can also be applied to the weight matrices, but in our analysis we do not find any significant difference in the algorithmic performance of the two choices, as shown in Sec. 6.4.
4.2 DIFFERENTIABLE PERMUTATION ESTIMATOR
Permutation matrices are by nature discrete, preventing direct gradient-based optimization. To ensure end-to-end differentiability, we require a method to learn permutation matrices through gradient descent. We adopt the permutation-learning formulation of AutoShuffleNet (Lyu et al., 2020) to enable end-to-end training. Rather than optimizing a discrete permutation matrix P, we learn a soft matrix M ∈ R^{N×N} constrained to the Birkhoff polytope (doubly-stochastic matrices), and decode a hard permutation at evaluation time.
Following Lyu et al. (2020), we couple our task loss with an exact, Lipschitz-continuous l1-l2 row/column penalty that drives a doubly-stochastic M toward a true permutation:
L = L_task + λ P(M) subject to M ≥ 0, M1 = 1, M⊤1 = 1, (13)
P(M) = Σ_{i=1}^{N} (∥M_{i:}∥₁ − ∥M_{i:}∥₂) + Σ_{j=1}^{N} (∥M_{:j}∥₁ − ∥M_{:j}∥₂). (14)
As shown by Lyu et al. (2020), for matrices satisfying the doubly-stochastic constraints, P(M) = 0 if and only if M is a permutation. In our setting, this penalty provided the most stable training among soft-permutation alternatives.
4.3 PERMUTATION MATRICES DURING TRAINING AND INFERENCE
For this work, we are interested in transformer-based neural networks. For a general transformer block, such as in a ViT-B/16, we sparsify the output projection linear layers of the multi-headed attention (MHA) blocks and the fully connected layers in the feedforward (MLP) blocks. This is in line with previous work in the literature (Lasby et al., 2023; Tyagi et al., 2025). We describe below in detail how the abovementioned layers are transformed.
MHA output projection. Let H ∈ R^d be the concatenated head output and W_O ∈ S ⊂ R^{d×d} the structured-sparse output projection. We learn a column permutation P_O, and the forward pass can be written as:
o = W_O P_O H = W_O a, a := P_O H. (15)
At inference, the learned permutation sits next to the attention output linear: W_O is the output projection (from d to d), and P_O permutes the concatenated head output H ∈ R^d. A direct implementation would compute o = W_O (P_O H). Instead of multiplying by P_O, we precompute an index map l_O: [d] → [d] such that (P_O H)_i = H_{l_O(i)}, and we re-index during head concatenation (i.e., we write the concatenated buffer in permuted order, or equivalently read it with permuted indices), then run the usual sparse GEMM
o_k = Σ_{i=1}^{d} W_O[k, i] H_{l_O(i)}. (16)
This keeps Q, K, V and the softmax unchanged, adds no extra matmul or copy, and is cheaper than a permutation multiply.
FFN / MLP (both linears sparsified; permutations absorbed at inference). Let W↑ ∈ S ⊂ R^{d_ff×d} denote the expansion (up-projection) and W↓ ∈ S ⊂ R^{d×d_ff} the contraction (down-projection) matrix in the FFN. We learn one column permutation for each sparsified linear: P↑ ∈ {0, 1}^{d×d} and P↓ ∈ {0, 1}^{d_ff×d_ff} (a column permutation of W↓). With σ(·) being the activation and LN(·) the layer norm, the forward pass is
y = W↓ P↓ σ(W↑ P↑ LN(x)). (17)
At inference, multiplying by permutation matrices would involve extra computation. Instead of performing these extra multiplications, we remove them by re-indexing. We precompute index maps l↑: [d] → [d] and l↓: [d_ff] → [d_ff] so that (P↑x)_j = x_{l↑(j)} and (P↓h)_i = h_{l↓(i)}. The forward pass becomes:
u_i = Σ_{j=1}^{d} W↑[i, j] x_{l↑(j)}, h = σ(u), y_k = Σ_{i=1}^{d_ff} W↓[k, i] h_{l↓(i)}. (18)
No explicit P↑ or P↓ is formed at runtime; we only change which indices we read or write inside the existing kernels. This is computationally cheaper than multiplying by permutation matrices: a permutation multiply performs a shuffle, adding an extra pass over memory and an extra kernel launch, whereas re-indexing costs only a small lookup and a few integer address operations per element, with no extra matrix multiplies and an unchanged structured-sparse GEMM.
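As a concrete companion to Eqns. 13-18, here is a small numpy sketch (ours, not the released implementation) of the penalty and of the inference-time re-indexing; `perm_penalty` and `ffn_forward_reindexed` are illustrative names, LayerNorm is omitted, and dense matmuls stand in for the structured-sparse GEMMs.

```python
import numpy as np

def perm_penalty(M: np.ndarray) -> float:
    """Exact l1-l2 row/column penalty of Eqn. 14; on the Birkhoff polytope
    it is zero iff M is a permutation (Lyu et al., 2020)."""
    rows = np.abs(M).sum(axis=1) - np.linalg.norm(M, axis=1)  # ||M_i:||_1 - ||M_i:||_2
    cols = np.abs(M).sum(axis=0) - np.linalg.norm(M, axis=0)  # ||M_:j||_1 - ||M_:j||_2
    return float(rows.sum() + cols.sum())

def ffn_forward_reindexed(x, W_up, W_dn, P_up, P_dn,
                          act=lambda u: np.maximum(u, 0.0)):
    """Inference-time FFN of Eqn. 18 with permutations absorbed into index
    maps, so that (P x)_j = x[idx[j]] and no permutation matmul is formed."""
    idx_up = P_up.argmax(axis=1)  # l_up(j): column holding the 1 in row j
    idx_dn = P_dn.argmax(axis=1)  # l_dn(i); both precomputed once in practice
    u = W_up @ x[idx_up]          # u_i = sum_j W_up[i, j] * x_{l_up(j)}
    h = act(u)
    return W_dn @ h[idx_dn]       # y_k = sum_i W_dn[k, i] * h_{l_dn(i)}
```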
[Fig. 2 panels: Top-1 accuracy vs. sparsity for ViT-B/16, ViT-L/16, and Mixer-S/16, and perplexity vs. sparsity for GPT-2 Small and GPT-2 Medium, for the Dense, RigL, SET, CHT, CHTs, Mest, SRigL, PixelatedBFly, DSB, and DynaDiag methods and their PA-DST variants.]
Fig. 2: Accuracy vs. sparsity of the networks listed in Sec. 5. The plots in blue, green, and red correspond to unstructured, structured, and structured + permutation-based (PA-DST) methods, respectively. We see that a permutation helps improve generalization performance (red plots lie above the green plots). Moreover, permutations help bridge the gap between unstructured and structured sparse methods: the structured + permutation results (red) come close to the unstructured methods (blue).
5 EXPERIMENTAL SETUP
The aim of our experiments is to show the benefits of learning permutations during dynamic sparse training of neural networks, across different modalities (vision and language), model types (MLP and attention), and sparsity regimes.
Evaluation. We assess the generalization ability of all structured sparse methods with and without permutations using Top-1 accuracy for vision tasks and perplexity (PPL) for language tasks. Additionally, we measure the training and inference times for each method at varying sparsity levels through end-to-end execution, and assess the overheads associated with permutation-based training.
Baselines. We compare the performance of structured DST with and without permutations against the following approaches. A Dense network is trained with 0% sparsity in the weight matrices. For Unstructured DST, we use methods from the literature such as RigL (Evci et al., 2020), Mest (Yuan et al., 2021), SET (Mocanu et al., 2018), CHT (Zhang et al., 2024), and CHTs (Zhang et al., 2025) to produce unstructured sparse neural networks. For Structured DST, we use SRigL (Lasby et al., 2023), DSB (Jiang et al., 2022), and DynaDiag (Diag Sparsity) (Tyagi et al., 2025) to train networks with structured sparsity via DST, and we use PixelatedBFly (Dao et al., 2021) to compare against a structured Static Sparse Training (SST) method. Finally, for Structured Sparse Training + Permutations, we apply either random or learned permutations to the above structured DST methods to observe the impact on the network's generalization performance. In the random case, a permutation is generated before the start of training and applied to the network.
6 RESULTS
We present our main results, comparing the performance of various structured DST methods with and without permutations for vision (Sec. 6.1) and language (Sec. 6.1.1) tasks. We then compare the runtime of all sparse methods (training and inference time on GPUs) in Sec. 6.2 to understand the overheads associated with permutation-based learning.
We wrap up this section by looking at the learned permutations in more detail (Sec. 6.3) and showing the results of ablation studies in Sec. 6.4.
Fig. 3: Impact of permutations on the training and inference time for a ViT-B/16 model. We observe a 3.16%-8.69% overhead related to permutations during inference for the obtained accuracy gains. Even for training, learning permutations increases the overall training time, but at higher sparsity levels we still obtain a speedup compared to a dense model for all structured sparsities. Training time for GPT models can be found in Tbl. 5.
6.1 ACCURACY VERSUS SPARSITY: VISION EXPERIMENTS
Setup. We evaluate two architectures for vision tasks on the ImageNet-1K (Deng et al., 2009) dataset. We train two variants of Vision Transformers (ViTs) (Dosovitskiy, 2020), ViT-B/16 and ViT-L/16, to study the scalability aspects of sparsity. We also train the Mixer-S/16 variant of MLP-Mixers (Tolstikhin et al., 2021). We test all the sparse training methods (with and without permutations) at 60%, 70%, 80%, 90%, and 95% sparsity in the targeted layers of the networks. Details of which layers are sparsified can be found in Apdx. C.5.
Results. Fig. 2 presents a summary of our results, comparing the performance of the three networks at varying levels of sparsity, reporting the average of three runs. The results show that learned permutations improve the generalization performance of structured sparse networks, helping to bridge the gap with unstructured sparsity. For instance, on ViT-L/16 at 70% sparsity, we obtain RigL-level accuracies with DynaDiag + PA-DST. We see an improvement in accuracy across all structured sparsity patterns, with an increase of 1.22% for SRigL at 90% sparsity (ViT-L). The raw values, along with comparisons with random permutations, can be found in Tbl. 11.
6.1.1 PERPLEXITY VERSUS SPARSITY: LANGUAGE EXPERIMENTS
Setup. For language tasks, we evaluate on the WikiText-103 dataset using two variants (Small and Medium) of GPT-2 (Radford et al., 2019) at varying sparsities s ∈ {40%, 50%, 60%, 80%, 90%}. We sparsify both the attention and the MLP layers of the model.
Results. We train GPT2-Small and GPT2-Medium (Radford et al., 2019) from scratch on the WikiText-103 dataset and show the resulting performance in Fig. 2d) and e). We report the average PPL obtained from two runs. We see that structured sparse models benefit from learning permutations, achieving PPL similar to that of unstructured sparse networks. Notably, PA-DST-based methods demonstrate increasingly significant improvements over no-permutation baselines. For example, at 80% sparsity on GPT2-Medium, SRigL + PA-DST achieves a perplexity 2.27 points lower than SRigL alone while remaining within 0.98 points of the performance ceiling established by RigL. The raw values, along with comparisons with random permutations, can be found in Tbl. 12.
6.2 TRAINING AND INFERENCE COST
Learning permutations means extra computation, and hence it is essential to understand the associated overheads. We measure the inference and training wall-clock times of various structured sparse training approaches with and without permutations.
[Fig. 4 panel: per-layer distance to the identity matrix, 1 − ∥P − I∥_F/√(2N), for the attn.proj, mlp.fc1, and mlp.fc2 layers across blocks b0-b11 of a ViT-B/16.]
Fig. 4: Distance of permutations to an identity matrix (no permutation) for a ViT-B/16 at 90% sparsity trained with DynaDiag. We see that with depth, the learned permutations move closer to an identity matrix, signalling less shuffling with depth. A: Attention block; F: Fully Connected; b: block.
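The width-normalized metric plotted in Fig. 4 (defined formally in Sec. 6.3 below) is straightforward to compute; the snippet is a minimal sketch with hypothetical names, using the ViT-B/16 hidden size only as an example.

```python
import numpy as np

def delta_to_identity(P: np.ndarray) -> float:
    """delta(P) = 1 - ||P - I||_F / sqrt(2N) for an N x N permutation P.
    delta = 1 when P = I (no reordering); delta = 0 when P moves every index,
    since a derangement gives ||P - I||_F^2 = 2N exactly."""
    n = P.shape[0]
    return 1.0 - np.linalg.norm(P - np.eye(n), ord="fro") / np.sqrt(2 * n)

def perm_matrix(perm: np.ndarray) -> np.ndarray:
    """Build the permutation matrix with P[i, perm[i]] = 1."""
    n = len(perm)
    P = np.zeros((n, n))
    P[np.arange(n), perm] = 1.0
    return P

n = 768  # example width; 768 happens to be the ViT-B/16 hidden size
print(delta_to_identity(perm_matrix(np.arange(n))))              # 1.0
print(delta_to_identity(perm_matrix(np.roll(np.arange(n), 1))))  # 0.0
```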
Setup. We use the best available libraries to accelerate the execution of the structured sparsities mentioned in Sec. 5, and we use cuSparse (Naumov et al., 2010) to execute all the unstructured sparsities. We estimate the execution times for SRigL (Lasby et al., 2023) as per their methodology, since we are unable to run end-to-end training and inference due to lack of support in PyTorch. For DSB and PixelatedBFly, we use the Triton-based library package from PixelatedBFly (Dao et al., 2021) to accelerate inference for both and training for PixelatedBFly (DSB lacks support for integrating the kernels with the training pipeline). Lastly, we use the provided CUDA kernels to accelerate DynaDiag's execution on the GPUs.
Results. In Fig. 3, we compare the inference and training times of the models listed in Sec. 6.1 and Sec. 6.1.1. Using the approach described in Sec. 4.3, we can re-index the output of a layer instead of explicitly applying a permutation matrix, leading to an overhead of at most 8.69% compared to the no-permutation baselines at all sparsity levels. Even with this overhead, we achieve an inference speedup as high as 2.9× with DynaDiag at 90% sparsity. During training, we see an increase in training time for all structured sparse methods due to the extra computation required for learning the permutations. We elaborate in Apdx. C.2 on these overheads and on our approach to early-stopping the permutation learning by tracking the loss and heuristically determining a stopping point. However, at higher sparsities, even with permutations, it is possible to obtain better training times than dense and unstructured sparse models. For example, at 95% sparsity, the training times for SRigL, PBFly, DSB, and DynaDiag are 1.09×, 1.18×, 1.13×, and 1.23× that of RigL, respectively. We can also see that DynaDiag is the fastest at training the sparse models, which can be attributed to the transposable nature of its sparse pattern (Tyagi et al., 2025), which helps accelerate the backward pass.
6.3 LEARNED PERMUTATIONS
To understand what the learned permutations look like, we take the example of a ViT-B/16 trained with DynaDiag at 90% sparsity and plot the normalized distance between the learned permutations and an identity matrix. The identity is the natural no-permutation baseline, so measuring proximity to it directly quantifies how much reordering a layer learns. We use a width-invariant, normalized Frobenius metric for a permutation matrix P ∈ R^{N×N}: δ(P) = 1 − ∥P − I∥_F / √(2N) ∈ [0, 1]. The result is shown in Fig. 4. We observe two primary trends: 1) across depths, early blocks display the lowest distance to the identity matrix, conveying that they learn stronger reindexing, while in later layers the permutation matrices are closer to an identity matrix, meaning weaker shuffles; 2) attention projections tend to remain more permuted than MLP layers, which align more with an identity matrix with depth.
6.4 ABLATION STUDY
Row vs. Col Permutations. Training structured sparse networks with column or row permutations yields networks with similar generalization performance. Tbl. 10 shows the results of training a structured sparse network with a column vs. a row permutation matrix for a ViT-B/16 network on ImageNet-1K. This means that instead of formulating a linear layer as y = WPx, we train instead with y = PWx, while keeping all other hyperparameters identical.
We observe that there is no significant difference between their generalization performance. 7 CONCLUSION We theoretically show that learning one permutation matrix per layer restores the expressivity lost in structured sparsity while preserving accelerator-friendly layouts. We back these claims with experimental results showing that training structured sparsity with the resulting PA-DST algorithm attains unstructured-level accuracy while still realizing speed-ups during training and inference. REFERENCES Davis Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, and John Guttag. What is the state of neural network pruning? Proceedings of machine learning and systems, 2:129-146, 2020. Yaohui Cai, Weizhe Hua, Hongzheng Chen, G Edward Suh, Christopher De Sa, and Zhiru Zhang. Structured pruning is all you need for pruning cnns at initialization. arXiv preprint , 2022. Ben Cottier. Trends in the dollar training cost of machine learning systems. Epoch. January, 31:2023, 2023. Tri Dao, Nimit S Sohoni, Albert Gu, Matthew Eichhorn, Amit Blonder, Megan Leszczynski, Atri Rudra, and Christopher R ́e. Kaleidoscope: An efficient, learnable representation for all structured linear maps. arXiv preprint , 2020. Tri Dao, Beidi Chen, Kaizhao Liang, Jiaming Yang, Zhao Song, Atri Rudra, and Christopher Re. Pixelated butterfly: Simple and efficient sparse training for neural network models. arXiv preprint , 2021. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248-255. Ieee, 2009. Alexey Dosovitskiy. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint , 2020. Utku Evci, Trevor Gale, Jacob Menick, Pablo Samuel Castro, and Erich Elsen. Rigging the lottery: Making all tickets winners. In International conference on machine learning, pp. 2943-2952. PMLR, 2020. Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Finding sparse, trainable neural networks. arXiv preprint , 2018. Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. Advances in neural information processing systems, 28, 2015. Itay Hubara, Brian Chmiel, Moshe Island, Ron Banner, Joseph Naor, and Daniel Soudry. Accelerated sparse neural training: A provable and efficient method to find n: m transposable masks. Advances in neural information processing systems, 34:21099-21111, 2021. Ajay Kumar Jaiswal, Haoyu Ma, Tianlong Chen, Ying Ding, and Zhangyang Wang. Training your sparse neural network better with any mask. In International Conference on Machine Learning, pp. 9833-9844. PMLR, 2022. Peng Jiang, Lihan Hu, and Shihui Song. Exposing and exploiting fine-grained block structures for fast and accurate sparse training. Advances in Neural Information Processing Systems, 35: 38345-38357, 2022. 10 Published as a conference paper at ICLR 2026 Mike Lasby, Anna Golubeva, Utku Evci, Mihai Nica, and Yani Ioannou. Dynamic sparse training with structured sparsity. arXiv preprint , 2023. Shaohui Lin, Rongrong Ji, Chenqian Yan, Baochang Zhang, Liujuan Cao, Qixiang Ye, Feiyue Huang, and David Doermann. Towards optimal structured cnn pruning via generative adversarial learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 2790-2799, 2019. Shiwei Liu, Decebal Constantin Mocanu, and Mykola Pechenizkiy. 
On improving deep learning generalization with adaptive sparse connectivity. arXiv preprint , 2019. Haiquan Lu, Yefan Zhou, Shiwei Liu, Zhangyang Wang, Michael W Mahoney, and Yaoqing Yang. Alphapruning: Using heavy-tailed self regularization theory for improved layer-wise pruning of large language models. arXiv preprint , 2024. Ioanna Lykiardopoulou. Ai beats humans for the first time in physical skill game. https://www. newscientist.com/article/2402645, 12 2023. Accessed: 20 01 2025. Jiancheng Lyu, Shuai Zhang, Yingyong Qi, and Jack Xin. Autoshufflenet: Learning permutation matrices via an exact lipschitz continuous penalty in deep convolutional neural networks. In Proceedings of the 26th ACM SIGKDD international conference on knowledge discovery & data mining, pp. 608-616, 2020. Mitch Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. Building a large annotated corpus of english: The penn treebank. Computational linguistics, 19(2):313-330, 1993. Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. arXiv preprint , 2016. Decebal Constantin Mocanu, Elena Mocanu, Peter Stone, Phuong H Nguyen, Madeleine Gibescu, and Antonio Liotta. Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science. Nature communications, 9(1):2383, 2018. Dmitry Molchanov, Arsenii Ashukha, and Dmitry Vetrov. Variational dropout sparsifies deep neural networks. In International conference on machine learning, pp. 2498-2507. PMLR, 2017. Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, and Jan Kautz. Pruning convolutional neural networks for resource efficient inference. arXiv preprint , 2016. Guido Mont ́ufar, Razvan Pascanu, Kyunghyun Cho, and Yoshua Bengio. On the number of linear regions of deep neural networks. Advances in neural information processing systems, 27, 2014. Hesham Mostafa and Xin Wang. Parameter efficient training of deep convolutional neural networks by dynamic sparse reparameterization. In International Conference on Machine Learning, pp. 4646-4655. PMLR, 2019. Maxim Naumov, L Chien, Philippe Vandermersch, and Ujval Kapasi. Cusparse library. In GPU Technology Conference, volume 12, 2010. Jeff Pool and Chong Yu. Channel permutations for n: m sparsity. Advances in neural information processing systems, 34:13316-13327, 2021. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019. Matthew Sparkes. Game-playing deepmind ai can beat top humans at chess, go and poker. https: //thenextweb.com/news/, 11 2023. Accessed: 20 01 2025. Hidenori Tanaka, Daniel Kunin, Daniel L Yamins, and Surya Ganguli. Pruning neural networks without any data by iteratively conserving synaptic flow. Advances in neural information processing systems, 33:6377-6389, 2020. 11 Published as a conference paper at ICLR 2026 Ilya O Tolstikhin, Neil Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Thomas Unterthiner, Jessica Yung, Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, et al. Mlp-mixer: An all-mlp architecture for vision. Advances in neural information processing systems, 34:2426124272, 2021. Abhishek Tyagi, Arjun Iyer, William H Renninger, Christopher Kanan, and Yuhao Zhu. Dynamic sparse training of diagonally sparse networks. arXiv preprint , 2025. Geng Yuan, Xiaolong Ma, Wei Niu, Zhengang Li, Zhenglun Kong, Ning Liu, Yifan Gong, Zheng Zhan, Chaoyang He, Qing Jin, et al. 
Mest: Accurate and fast memory-economic sparse training framework on the edge. Advances in Neural Information Processing Systems, 34:20838-20850, 2021. Xin-Jie Zhang, Jack Murdoch Moore, Gang Yan, and Xiang Li. Universal structural patterns in sparse recurrent neural networks. Communications Physics, 6(1):243, 2023. Yingtao Zhang, Jialin Zhao, Wenjing Wu, Alessandro Muscoloni, and Carlo Vittorio Cannistraci. Epitopological learning and cannistraci-hebb network shape intelligence brain-inspired theory for ultra-sparse advantage in deep learning. In The Twelfth International Conference on Learning Representations, 2024. Yingtao Zhang, Jialin Zhao, Wenjing Wu, Ziheng Liao, Umberto Michieli, and Carlo Vittorio Cannistraci. Brain-inspired sparse training enables transformers and llms to perform as fully connected. arXiv preprint , 2025. 12 Published as a conference paper at ICLR 2026 Appendix A MAPPING DENSITY TO PATTERN PARAMETERS Given a target per-layer density δ ∈(0, 1] and input size nin, we choose the smallest feasible integers so that per-row nnz ≈δ nin: K = B = round(δ nin), 2b+1 = nearest odd to δ nin, α = N M = δ (tied N:M). For non-cyclic bands/diagonals, use wrap-around (or adjust a few edge rows by ±1 nnz) so the total nnz matches the target. In our ViT-L/16 surrogate at δ = 0.05: nin=1024 ⇒K=B=51, 2b+1=51; nin=4096 ⇒K′=B′=205, 2b′+1=205; tied N:M uses α = 0.05 throughout. B WORKED-EXAMPLE CALCULATIONS (DETAILS) Using the master bound equation 1 and span update ul= min{d0, ul-1 + rstruct(nin,l)} with d0 = 1024, the alternating widths yield per-block gain rpair = rstruct(1024) + min{rstruct(4096), d0} = 51 + 205 = 256. Thus u2t = min{1024, 256 t} and dense-like factors are guaranteed once u2t = 1024, i.e., t = ⌈1024/256⌉= 4 blocks (≈8 layers). Without mixing, ul≡51 and the per-layer factor remains strictly below dense for all depth. C PRACTICAL CONSEQUENCES & PREDICTIONS OF OUR THEORY We now analyze the impact of our theory via a concise worked example where all structured families operate at (approximately) the same sparsity level. To stay within our ReLU/MLP framework, we use an MLP surrogate whose layer sizes are motivated by ViT-L/16, which has 24 encoder blocks (307M params). Each block contains a two-layer FFN with dmodel=1024 and dff=4096. We analyze the MLP that stacks these FFN layers in order, 1024 →4096 →1024 | {z } block 1 FFN →1024 →4096 →1024 | {z } block 2 FFN →· · · (24 blocks), yielding a depth-L=48 ReLU-MLP used solely to set widths for the master bound equation 1. At 95% sparsity (density δ=0.05), the structural caps (Sec. 3.4) are rstruct(1024) = 51, rstruct(4096) = 205, for Diagonal-K, Banded-b (with 2b+1 odd), Block-B, and tied N:M with α = N/M = 0.05 (mapping in App. A). Consequences. Without mixing, all axis-structured families stall at kl≤51 (no multiplicative growth). With one mixer per layer (permutations suffice), the span budget grows additively; because widths alternate 1024 ↔4096, each block contributes 51+205 = 256 fresh directions toward d0=1024, so dense-like per-layer factors are guaranteed after 1024/256 = 4 blocks (≈8 layers). In this setting, the structured families share the same catch-up point (4 blocks) when mixing is used. C.1 COMBINATORIAL EXPRESSIVITY EXAMPLE Take d0 = 4, equal widths nl= 8, and L = 3 layers. 1. Dense / Unstructured (RigL/SET-like): kl= min{nl, d0} = 4 at every layer. Per-layer factor P4 j=0 8 j = 1 + 8 + 28 + 56 + 70 = 163. Hence NLR ≥1633. 2. 
Block-B without permutations, B = 2: k_l = min{n_l, B} = 2 at every layer (no new directions with depth). Per-layer factor 1 + 8 + 28 = 37. Hence NLR ≥ 37³.
3. Block-B with a learned permutation each layer, B = 2 (r_s = 2): u_0 = 0 → u_1 = 2 and u_2 = 4; thereafter u_l = 4. Per-layer factors: layer 1 gives 37, layers 2 and 3 give 163 each. Thus NLR ≥ 37 · 163 · 163. After a one-layer overhead, the per-layer factor matches dense.
This concretely illustrates the phenomenon: structure alone stalls multiplicative growth; adding permutations restores it after a short, explicit warm-up in depth.
Table 1: Lower bounds summary (instantiating Eqn. 1). Here d_0 is the input dimension, n_l the width, and r_s ∈ {K, 2b+1, B} the per-layer structural cap for diagonal/banded/block. For tied N:M, α = N/M.
Setting | Effective k_l | Span recursion u_l | Depth overhead
Dense | min{n_l, d_0} | u_l = d_0 | 0
Unstructured DST (free masks) | min{n_l, d_0} | u_l = d_0 | 0
N:M (free supports) | min{n_l, d_0} | u_l = d_0 | 0
N:M (tied template) | min{n_l, α u_{l-1}} | u_l = u_{l-1} | - (stalls)
Diagonal-K (no perm) | min{n_l, K} | u_l = min{d_0, K} | - (stalls)
Banded-b (no perm) | min{n_l, 2b+1} | u_l = min{d_0, 2b+1} | - (stalls)
Block-B (no perm) | min{n_l, B} | u_l = min{d_0, B} | - (stalls)
Diagonal-K + permutation | min{n_l, u_l} | u_l = min{d_0, u_{l-1} + K} | ⌈d_0/K⌉
Banded-b + permutation | min{n_l, u_l} | u_l = min{d_0, u_{l-1} + 2b+1} | ⌈d_0/(2b+1)⌉
Block-B + permutation | min{n_l, u_l} | u_l = min{d_0, u_{l-1} + B} | ⌈d_0/B⌉
C.2 PERMUTATION LEARNING SCHEDULE & OVERHEAD
[Fig. 5 panel: permutation penalty (Eqn. 14) vs. training epoch for the attn.proj, mlp.fc1, and mlp.fc2 permutations across blocks 0-11 of a ViT-B/16 at 90% sparsity.]
Fig. 5: Tracking the loss of permutation matrices based on Eqn. 14. We plot the loss every 10 training epochs.
C.2.1 PERMUTATION LEARNING SCHEDULE
While training, we can either learn the permutations until the end of training or find a way to stop early without losing out on optimal permutations. From our experiments, we find it useful to monitor the loss of the soft-permutation matrix. For our training, we set a threshold loss value δ at which point we go from a soft to a hard permutation matrix for that particular layer. This means that instead of carrying out a multiplication operation for that layer, we can instead do re-indexing, which is a much cheaper operation. We show in Fig. 5 the loss associated with each permutation matrix when training a ViT-B/16 network at 90% sparsity with DynaDiag. We can see that the loss decreases drastically and saturates after the knee in the plot. Moreover, we see a clear trend that earlier permutation matrices take longer to reach this knee point. For the given experiment, we set δ = 0.22, and we show in Fig. 6 when each layer reaches that threshold, at which point we stop training the permutation matrix corresponding to that layer (a minimal sketch of this switch is given below, after the overview of Sec. C.2.2).
C.2.2 OVERHEADS AND ADDITIONAL RESULTS
The tables quantify the training overhead associated with various permutation methods. Specifically, Tbl. 2 and Tbl. 3 detail the memory overhead in gigabytes and as a percentage for the GPT-2 Small model, using Diagonal and N:M sparsity, respectively. Tbl. 4 presents a similar memory overhead analysis for the ViT-B/16 model, but at higher 90% and 95% sparsity levels. Finally, Tbl. 5 expands this analysis for the GPT-2 Medium model, showing the overhead for both training time (in hours) and memory. Across all tables, we compare permutation strategies such as AutoShufflePerm and the KaleidoscopePerm against their corresponding non-permuted baselines to clearly illustrate the computational trade-offs.
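A minimal sketch of the soft-to-hard switch described in Sec. C.2.1 above; the decode step is not spelled out in the text, so the Hungarian assignment used here (scipy's linear_sum_assignment) is our assumption, and `maybe_harden` is an illustrative name.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

PENALTY_THRESHOLD = 0.22  # the delta used for the ViT-B/16 run in Fig. 6

def maybe_harden(M, penalty, thr=PENALTY_THRESHOLD):
    """Early-stopping rule of Apdx. C.2.1: once the Eqn. 14 penalty of a
    layer's doubly-stochastic matrix M drops below `thr`, decode M into a
    hard permutation and freeze it, switching that layer from a matmul to
    cheap re-indexing for the rest of training."""
    if penalty >= thr:
        return M, False                     # keep training the soft matrix
    rows, cols = linear_sum_assignment(-M)  # maximize the total mass kept
    P = np.zeros_like(M)
    P[rows, cols] = 1.0
    return P, True                          # frozen hard permutation
```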
[Fig. 6 panel: per-layer epoch at which the permutation penalty reaches the 0.22 threshold, for the mlp.fc1, mlp.fc2, and attn.proj layers across blocks 0-11 of a ViT-B/16; cutoff epochs range roughly from 85 to 145.]
Fig. 6: We show the epoch count of each layer in a ViT-B/16 network when the corresponding permutation loss hits the threshold. We can see that the cutoff epoch varies drastically across the network, which we use to our advantage to reduce the training time.
Table 2: Memory overhead of permutation methods for GPT-2 Small with Diagonal sparsity on WikiText-103. The overhead percentage is calculated relative to the DynaDiag method.
Model | Method | Memory (GB) @ 60% | % Overhead | Memory (GB) @ 80% | % Overhead
GPT-2 Small (DynaDiag) | Unstructured | 133.64 | - | 132.12 | -
 | DynaDiag | 137.32 | - (Baseline) | 132.82 | - (Baseline)
 | + FixedRandPerm | 139.21 | +1.38% | 134.09 | +0.96%
 | + PA-DST | 156.17 | +13.73% | 145.41 | +9.48%
 | + KaleidoscopePerm | 141.27 | +2.88% | 137.43 | +3.47%
Table 3: Memory overhead of permutation methods for GPT-2 Small with SRigL on WikiText-103. The overhead percentage is calculated relative to the SRigL method.
Model | Method | Memory (GB) @ 60% | % Overhead | Memory (GB) @ 80% | % Overhead
GPT-2 Small | Unstructured | 133.64 | - | 132.12 | -
 | SRigL | 135.12 | - (Baseline) | 131.42 | - (Baseline)
 | + FixedRandPerm | 136.91 | +1.32% | 132.65 | +0.94%
 | + PA-DST | 148.23 | +9.70% | 143.19 | +8.96%
C.3 EXPERIMENT DETAILS
All experiments are conducted on NVIDIA Tesla A100 GPUs with the following configuration:
• Model: NVIDIA A100 40GB
(2016) comprises over 100 million tokens extracted from verified Wikipedia articles. It is significantly larger than other language datasets, such as Penn Treebank (PTB) Marcus et al. (1993). Table 6: Configuration of the CIFAR10 and CIFAR100 experiments with MLPMixer. Parameter Value Parameter Value Adam β1 0.9 Hidden 128 Adam β2 0.99 (Initial LR, Final LR) (10-3, 10-6) AutoAugment True Label Smoothing 0.1 Batch Size 128 Layers 8 CutMix Probability 0.5 LR Scheduler Cosine CutMix β 1.0 Optimizer Adam Dropout 0.0 Random Seed 3407 Epochs 300 Weight Decay 5 × 10-5 Hidden C 512 Warmup 5 epochs Hidden S 64 16 Published as a conference paper at ICLR 2026 Table 7: Configuration of the ImageNet experiments with ViT-Base and MLPMixer.Here X represents any of the sparse training methods that train a ViT-Base network. Model Optimizer Weight Decay Learning Rate Drop Path Warmup/Epoch ViT-Base AdamW 0.05 0.001 0.1 5/300 X-ViT-Base AdamW 0.05 0.001 0 5/300 Mixer-Small AdamW 0.1 0.001 0.1 5/300 X-Mixer-Small AdamW 0.1 0.001 0 5/300 Table 8: Configuration of the ImageNet experiments with ViT-Large and Huge. Parameter Value Parameter Value Batch size 256 Horizontal flip ✓ Optimizer AdamW Random Resized Crop (RRC) ✓ Learning Rate (LR) 3 × 10-3 Rand Augment ✗ LR decay cosine 3 Augment (ours) ✓ Weight decay 0.02 LayerScale ✓ Warmup epochs 5 Mixup α 0.8 Label smoothing ε 0.1 Cutmix α 1.0 Dropout ✗ Erasing prob. ✗ Stochastic Depth ✓ ColorJitter 0.3 Repeated Aug ✓ Test crop ratio 1.0 Gradient Clipping 1.0 Loss BCE Table 9: Configuration of the Wikitext-103 experiments GPT-2Small experiments. Model Optimizer Weight Decay Learning Rate Dropout Warmup/Epoch GPT-2-Small AdamW 0.1 0.0001 0.1 5/100 DynaDiag AdamW 0.1 0.0001 0.1 5/100 17 Published as a conference paper at ICLR 2026 C.4 RAW VALUES Table 10: Top-1 accuracy of structured sparse training methods at varying sparsities for a ViT-B/16 on ImageNet-1K. The results shown are from three runs for each data point. We see that there is no significant difference between the generalization performance of networks with learnt row and col permutation. Method Perm. 60% 70% 80% 90% 95% dense accuracy = 78.5 SRigL Col 78.04 ± 0.011 78.02 ± 0.008 77.83 ± 0.009 76.16 ± 0.007 69.24 ± 0.008 PixelatedBFly Col 78.10 ± 0.005 78.04 ± 0.007 77.49 ± 0.007 74.09 ± 0.008 62.82 ± 0.006 DSB Col 78.11 ± 0.009 77.95 ± 0.008 76.34 ± 0.005 73.09 ± 0.004 64.49 ± 0.004 DynaDiag Col 78.53 ± 0.007 78.26 ± 0.003 77.85 ± 0.005 77.19 ± 0.004 70.12 ± 0.007 SRigL Row 78.03 ± 0.007 78.02 ± 0.001 77.83 ± 0.007 76.16 ± 0.005 69.24 ± 0.007 PixelatedBFly Row 78.12 ± 0.004 78.04 ± 0.011 77.49 ± 0.003 74.09 ± 0.002 62.82 ± 0.009 DSB Row 78.09 ± 0.004 77.95 ± 0.005 76.34 ± 0.003 73.09 ± 0.004 64.49 ± 0.009 DynaDiag Row 78.54 ± 0.006 78.26 ± 0.008 77.85 ± 0.006 77.19 ± 0.003 70.12 ± 0.010 C.5 DETAILS OF LAYERS SPARSIFIED In our ViT-B/16 experiments, we applied sparsity to the initial patch projection layer, the MLP layers, and the output projections of the multi-head attention (MHA) modules. For the GPT models, we sparsified all attention and MLP layers. D USE OF AI We acknowledge the use of Google's Gemini and ChatGPT for assistance in editing the manuscript. The tools were used to refine sentence structure, correct grammatical errors, and improve the overall readability of the text. The intellectual contributions, including all methodologies, analyses, and conclusions, are solely those of the authors, who bear full responsibility for the final content of this work. 
18 Published as a conference paper at ICLR 2026 Table 11: Top-1 accuracy of sparse training methods at varying sparsities. We bold results that are not significantly different (based on paired asymptotic McNemar tests (α = 0.05)) from the best-performing method (marked with a *) in each column. We see an increase in the generalization performance of all the structured sparse networks on the ImageNet-1K dataset. Model Method Perm. Struc. 60% 70% 80% 90% 95% ViT-B/16 dense accuracy = 78.5 RigL - no 79.75 79.28 78.71 77.24 71.50 SET - no 78.15 78.01 77.78 77.01 71.48 CHT - no 79.78 79.37 79.06* 77.66* 71.68* CHTs - no 79.88* 79.38* 79.05 77.54 71.61 Mest - no 78.04 77.76 77.39 76.45 69.67 SRigl - yes 77.79 77.84 77.35 75.90 68.70 PixelatedBFly - yes 78.04 77.90 77.31 73.89 62.52 DSB - yes 77.98 77.85 76.26 72.89 64.17 DynaDiag - yes 78.29 77.94 77.62 76.91 69.54 SRigL Random yes 77.95 77.81 77.31 75.69 68.74 PixelatedBFly Random yes 77.91 77.94 77.34 73.93 62.45 DSB Random yes 78.06 77.84 76.27 72.84 64.23 DynaDiag Random yes 78.21 77.92 77.67 76.93 69.54 SRigL PA-DST yes 78.04 78.02 77.83 76.16 69.24 PixelatedBFly PA-DST yes 78.10 78.04 77.49 74.09 62.82 DSB PA-DST yes 78.11 77.95 76.34 73.09 64.49 DynaDiag PA-DST yes 78.53 78.26 77.85 77.19 70.12 ViT-L/16 dense accuracy = 82.2 RigL - no 81.85* 81.57* 81.70* 78.26* 72.11* SRigL - yes 79.87 78.94 77.54 75.46 66.68 PixelatedBFly - yes 79.13 79.06 79.33 75.12 66.59 DSB - yes 79.44 77.46 75.34 73.55 66.77 DynaDiag - yes 81.52 81.46 81.37 77.74 69.59 SRigL Random yes 79.94 78.95 77.55 75.56 66.74 PixelatedBFly Random yes 79.14 79.19 79.34 75.29 66.70 DSB Random yes 79.42 77.51 75.39 73.71 66.74 DynaDiag Random yes 80.21 80.16 79.52 75.31 69.56 SRigL PA-DST yes 80.29 79.63 77.96 76.78 66.86 PixelatedBFly PA-DST yes 79.33 79.23 80.33 75.67 66.79 DSB PA-DST yes 79.59 77.61 75.51 73.79 66.91 DynaDiag PA-DST yes 81.66 81.59 81.49 77.96 70.16 Mixer-S/16 dense accuracy = 72.4 RigL - no 73.21* 73.23* 73.47* 73.01* 70.41* SRigL - yes 71.89 72.05 71.71 70.21 66.87 PixelatedBFly - yes 71.95 71.91 71.45 69.17 67.85 DSB - yes 69.94 70.21 68.90 65.16 60.88 DynaDiag - yes 72.92 72.95 73.05 72.31 68.89 SRigL Random yes 71.84 72.07 71.74 70.21 66.86 PixelatedBFly Random yes 71.91 71.94 71.49 69.17 67.91 DSB Random yes 69.93 70.23 68.81 65.16 60.81 DynaDiag Random yes 72.93 72.41 72.32 71.91 68.01 SRigL PA-DST yes 72.56 72.91 72.79 71.24 67.16 PixelatedBFly PA-DST yes 72.14 72.09 71.86 69.47 68.71 DSB PA-DST yes 70.22 70.39 69.12 65.16 61.08 DynaDiag PA-DST yes 73.09 73.11 73.21 72.51 69.19 19 Published as a conference paper at ICLR 2026 Table 12: Perplexity of sparse training methods at varying levels of sparsity. We bold results that are not significantly different from the best-performing method (marked with a *) based on paired asymptotic McNemar tests (α = 0.05). We see an improvement in the PPL (lower the better) score for all structured sparse training methods with permutaitons on the WikiText-103 dataset. Model Method Perm. 
40% 50% 60% 80% 90% GPT2-S dense PPL = 22.21 RigL - 22.34* 22.80 23.79* 29.87* 53.76* SRigL - 22.74 23.19 25.09 31.08 62.55 PixelatedBFly - 22.50 23.25 25.98 34.89 66.44 DynaDiag - 22.60 22.74 24.67 30.46 56.33 SRigL Random 22.70 23.19 25.21 31.11 62.42 PixelatedBFly Random 22.54 23.22 26.09 34.84 66.46 DynaDiag Random 22.61 23.19 25.12 31.69 57.61 SRigL PA-DST 22.41 22.94 24.59 30.40 60.96 PixelatedBFly PA-DST 22.41 23.01 25.69 34.71 66.06 DynaDiag PA-DST 22.44 22.69* 24.51 30.26 55.49 GPT2-M dense PPL = 20.18 RigL - 20.45* 21.60* 23.49* 28.87* 51.76* SRigL - 21.14 22.59 26.09 32.16 55.66 PixelatedBFly - 20.86 22.49 25.45 34.24 56.09 DynaDiag - 20.69 22.14 24.98 29.65 54.87 SRigL Random 21.19 22.55 26.01 32.19 55.69 PixelatedBFly Random 20.90 22.51 25.44 34.22 56.01 DynaDiag Random 21.65 22.67 25.17 30.39 54.81 SRigL PA-DST 20.57 22.20 25.04 29.89 55.13 PixelatedBFly PA-DST 20.69 22.23 25.31 33.19 55.71 DynaDiag PA-DST 20.55 21.91 24.71 29.21 54.26 20
Efficient adaptive control strategy for multi-parameter quantum metrology in two-dimensional systems Qifei Wei1 and Shengshi Pang1,2∗ 1School of Physics, Sun Yat-sen University, Guangzhou, Guangdong 510275, China and 2Hefei National Laboratory, Hefei 230088, China Quantum metrology leverages quantum resources such as entanglement and squeezing to enhance parameter estimation precision beyond classical limits. While optimal quantum control strategies can assist to reach or even surpass the Heisenberg limit, their practical implementation often re- quires the knowledge of the parameters to be estimated, necessitating adaptive control methods with feedback. Such adaptive control methods have been considered in single-parameter quantum metrology, but not much in multi-parameter quantum metrology so far. In this work, we bridge this gap by proposing an efficient adaptive control strategy for multi-parameter quantum metrology in two-dimensional systems. By eliminating the trade-offs among optimal measurements, initial states, and control Hamiltonians through a system extension scheme, we derive an explicit relation between the estimator variance and evolution time. Through a reparameterization technique, the optimization of evolution times in adaptive iterations are obtained, and a recursive relation is es- tablished to characterize the precision improvement across the iterations. The proposed strategy achieves the optimal performance up to an overall factor of constant order with only a few iter- ations and demonstrates strong robustness against deviations in the errors of control parameters at individual iterations. Further analysis shows the effectiveness of this strategy for Hamiltonians with arbitrary parameter dependence. This work provides a practical approach for multi-parameter quantum metrology with adaptive Hamiltonian control in realistic scenarios. Precision measurement plays a fundamental role across various disciplines of science and technology. Quantum metrology [1–3], rooted in the principles of quantum me- chanics and statistical inference, exploits nonclassical re- sources such as entanglement and squeezing to realize es- timation of parameters in quantum dynamics with high precision. This technique has been widely applied in atomic interferometers [4], atomic clocks [5–7], gravita- tional wave detection [8, 9] and so on. Over the past decades, quantum metrology has seen rapid advancement in both theoretical innovation and experimental break- throughs. Theoretically, entangled probes evolving in paral- lel under parameter-dependent dynamics can achieve the Heisenberg limit [1, 10]. Alternatively, a sequen- tial strategy—where a single probe evolves under the parameter-dependent dynamic and adaptive control— also achieves the Heisenberg limit and offers advan- tages when entanglement is difficult to generate or maintain [11–13]. Quantum metrology considers both single-parameter estimation [14–16] and multi-parameter [12, 17–37] estimation. While single-parameter estima- tion has been well understood, the multi-parameter quan- tum metrology poses additional challenges due to the in- compatibility of the optimal measurements, initial states, and control strategies with respect to different parame- ters [26–28]. Besides, environmental noise is inevitable in realistic scenarios, and significant progress has been made in addressing quantum metrology for open systems, both in exploring estimation precision limits [15, 38–40] and developing noise-resilient strategies [41–45]. 
∗pangshsh@mail.sysu.edu.cn Experimentally, quantum metrology has been imple- mented on a variety of physical systems, e.g., photonic systems [28, 45–52], nuclear magnetic resonance [53, 54], superconducting circuits [55], etc. These experiments have realized key theoretical breakthroughs, such as at- taining the Heisenberg limit [56], improving the efficacy by control-enhanced strategies [54], full estimation of magnetic fields [29, 50], and mitigating the incompati- bility of multi-parameter estimation [28], etc. In quantum metrology, quantum control serves as a powerful tool to boost the estimation precision. In noise- less scenarios, Hamiltonian control has shown the ca- pability of increasing the precision to the Heisenberg limit and even beyond [13]. The optimal control strate- gies have been well established for single-parameter es- timation, including both time-independent and time- dependent Hamiltonians [11, 13]. For multi-parameter quantum metrology, significant progress has also been made in two-dimensional systems [12, 57] where the system extension scheme eliminates the trade-offs com- pletely, but the realizability of optimal quantum control in practice remains much less explored. The optimal control Hamiltonian usually relies on the knowledge of the unknown parameters to be estimated. This necessitates the use of adaptive control strategies to iteratively refine the control Hamiltonian based on the estimated values of the parameters from previous mea- surements. Although preliminary work has addressed adaptive control for the single-parameter estimation [13], efficient adaptive strategies for multi-parameter scenar- ios remain largely an open problem. Moreover, while one can certainly enhance the estimation precision by an in- creasing number of iterations and trials in each iteration, quantum resources are limited for any quantum proto- arXiv:2510.14811v1 [quant-ph] 16 Oct 2025 2 col. So it is crucial to design an efficient adaptive control strategy that enables rapid convergence to the optimal Hamiltonian control with given resources. In this work, we bridge this gap by introducing an efficient adaptive control strategy tailored for multi- parameter quantum metrology. Considering the feasi- bility of analytical computation, we focus our research on two-dimensional systems, similar as most studies of multi-parameter quantum metrology with quantum con- trol have pursued [12, 17, 20, 50, 57, 58], but the analysis can be effective for general quantum systems. We analyze the time dependence of the estimation variances of un- known parameters, and elucidate the mechanism under- lying the Hamiltonian control strategy that can achieve the Heisenberg limit. By integrating a system extension scheme with iterative feedback control and leveraging the reparameterization technique, we design an efficient adaptive control strategy for estimating the three orthog- onal components of a qubit Hamiltonian in the Pauli ba- sis, which can eliminate the trade-offs among measure- ments, initial states, and control strategies and achieves the optimal precision up to an overall factor with only a few iterations while maintaining the robustness against deviations in the errors of control parameters. Further- more, we prove the general applicability of our approach to Hamiltonians with arbitrary parameter dependence, making it a practical tool for quantum metrology in re- alistic experimental scenarios. RESULTS Quantum multi-parameter estimation theory. 
In quantum single-parameter estimation, the quantum Cramer-Rao bound tells that the variance of an estima- tor is bounded by the inverse of the quantum Fisher in- formation, as the quantum Fisher information character- izes the sensitivity of a parameter-dependent quantum state to the variations in the parameter [59, 60]. For multi-parameter estimation, the quantum Fisher infor- mation can be extended mathematically to the quantum Fisher information matrix [61, 62]. However, the quan- tum Cramér-Rao bound based on symmetric logarithmic derivatives is not always attainable due to the poten- tial incompatibility between the optimal measurements for different parameters [26–30], unless specific condi- tions are satisfied, e.g. the weak commutativity in the asymptotic limit of collective measurement on an unlim- ited number of systems [63–65] or a more strict condition when the number of accessible systems is finite [24, 66]. Therefore, estimating multiple unknown parameters in a quantum state is a challenging problem. Suppose a quantum state ρα depends on q un- known parameters denoted in a vector form α = (α1, α2, . . . , αq). The estimation precision of the un- known parameters is characterized by the covariance ma- trix C, which is bounded by the quantum Fisher infor- mation matrix F, C ≥(nF)−1 , (1) where “≥” represents the matrix semi-definite positivity and n refers to the number of trials. The entries of quan- tum Fisher information matrix are given by Fij = 1 2Tr (ρα {Li, Lj}) , (2) where Li is a symmetric logarithmic derivative defined by 2∂iρα = ραLi + Liρα, (3) where ∂i is the abbreviation of ∂αi for simplicity. In reality, the unknown parameters in a quantum state are usually encoded by physical processes. If the physical process is a unitary evolution Uα and the initial state of the quantum system is |ψ0⟩, the entries of quantum Fisher information matrix are given by Fij = 4 1 2 ⟨{hi, hj}⟩−⟨hi⟩⟨hj⟩  , (4) where hi := −i ∂iU † α  Uα is the generator of the in- finitesimal translation of Uα with respect to the i-th parameter αi, { , } denotes the anti-commutator, and ⟨·⟩= ⟨ψ0| · |ψ0⟩. To address the potential incompatibility issue between the optimal measurements for different parameters, a real symmetric matrix W can be introduced to assign weights to different parameters and define a weighted mean pre- cision Tr (WC) as the overall benchmark for the perfor- mance of estimation. The lower bound of this weighted mean precision can be derived from the quantum Cramér- Rao bound, S (W) = 1 nTr WF −1 . (5) The weighted mean precision can be optimized and attain the Holevo bound in the asymptotic limit of the number of trials if collective measurements on multiple quantum systems are allowed [25, 27, 65]. But the Holevo bound is usually hard to be solved explicitly, as it still involves a complex matrix optimization problem. Nevertheless, when a weak commutativity condition is satisfied, i.e., for any two parameters αi and αj, Tr (ρα [Li, Lj]) = 0, (6) the quantum Cramér-Rao bound coincides with the Holevo bound and can therefore be attained [63, 64]. For the estimation of parameters α in a unitary operator Uα, if the initial state of the system is |ψ0⟩, the weak com- mutativity condition can be further simplified as Im ⟨hihj⟩= 0, (7) 3 and independent measurements on individual systems are sufficient to achieve the quantum Cramér-Rao bound in this case [17, 63]. Multi-parameter quantum metrology in two- dimensional systems. 
Quantum metrology can gen- erally be decomposed to four steps: preparation of the initial states, parameter-dependent evolution, measure- ments on the final states, and post-processing of the mea- surement results to extract the parameters. The estima- tion precision can be improved by initial state optimiza- tion, feedback control, and measurement optimization at the first three steps and by using proper estimation strategies at the final step. In multi-parameter quan- tum metrology, the incompatibility issue lies in several aspects: in addition to the measurement incompatibility, the optimal initial states and optimal feedback controls for different parameters can be incompatible as well. System extension scheme has been widely used in quantum metrology, for instance, to establish upper bounds on the quantum Fisher information in noisy en- vironments [15, 39] and to eliminate the aforementioned incompatibilities in multi-parameter quantum estimation [12, 17]. Fig. 1 shows the system extension scheme, where a probe and an ancilla are coupled. The uni- tary evolution Uα (t) = exp (−iHαt) governed by the parameter-dependent Hamiltonian Hα acts on the probe only. When the joint system is initialized in a maximally en- tangled state, the weak commutativity condition is sat- isfied, eliminating the measurement tradeoff [12, 17] (see Supplementary Note 1). In two-dimensional systems, this configuration enables optimal estimation of all the parameters via projective measurements along the Bell basis, addressing the initial-state trade-off issue [12, 17] (see Supplementary Note 2). Furthermore, when the ini- tial Hamiltonian H(init) α is independent of time, feedback control using the reverse of H(init) α obtains the optimal es- timation for all the parameters and achieves the Heisen- berg limit (see Supplementary Note 3), thereby removing the control trade-off [12]. However, the optimal control Hamiltonian depends on the true values of the unknown parameters, so an adap- tive control is generally required to update the value of the parameters in the control Hamiltonian iteratively, so that the control Hamiltonian can approach the op- timum progressively. Such an adaptive feedback control scheme has been studied for the single-parameter quan- tum metrology Pang and Jordan [13], but has not re- ceived much investigation in multi-parameter quantum metrology. The dependence of the control Hamiltonian on the parameter estimation precisions from the previ- ous rounds at each iteration makes it challenging to eval- uate the overall performance of the adaptive procedure and design efficient iterative feedback strategies, even for two-dimensional systems. Variance-time relation. With all trade-offs elimi- nated, we analyze the time dependence of the variances of the parameters to be estimated, offering guidance for probe ancilla ƿڝ Figure 1: System extension scheme. An ancilla with the same dimension as the probe is introduced, with the unitary evolution Uα acts only on the probe. The initial state can be any quantum state of the joint system, and measurements are performed on the joint system. the design of efficient adaptive control strategies. In adaptive Hamiltonian control, the total Hamilto- nian Hα comprises the initial Hamiltonian H(init) α and a control Hamiltonian Hc. In a two-dimensional system with an ancilla, the joint Hamiltonian is Hα ⊗IA, where IA denotes the two-dimensional identity operator on the ancilla. 
The joint evolution of the probe and the ancilla is U_α(t) ⊗ I_A, where U_α(t) = exp(−iH_α t), and the generator of the infinitesimal translation of U_α(t) ⊗ I_A with respect to the parameter α_i is h_i(t) ⊗ I_A, where

h_i(t) = −i (∂_i U_α(t)†) U_α(t).  (8)

We choose a maximally entangled state,

|ψ0⟩ = (|0_P 0_A⟩ + |1_P 1_A⟩)/√2,  (9)

as the initial state, where {|0_P⟩, |1_P⟩} and {|0_A⟩, |1_A⟩} are complete orthonormal bases for the probe and the ancilla, respectively. According to Eq. (4), the entries of the quantum Fisher information matrix can be obtained as

F_ij(t) = 2 Tr(h_i(t) h_j(t)) − Tr(h_i(t)) Tr(h_j(t)).  (10)

To make the time dependence of the quantum Fisher information matrix explicit for the design of the adaptive control strategy, we first analyze the generator h_i(t). By applying an integral formula for the derivative of an operator exponential,

∂e^{M_α t}/∂α = ∫_0^t e^{M_α(t−τ)} (∂M_α/∂α) e^{M_α τ} dτ,  (11)

we obtain

h_i(t) = ∫_0^t e^{iH_α τ} (∂_i H_α) e^{−iH_α τ} dτ.  (12)

Through the spectral decomposition of the Hamiltonian of the probe, H_α = E_0|E_0⟩⟨E_0| + E_1|E_1⟩⟨E_1| [10, 67], we obtain

F_ij(t) = Σ_{l=0}^{1} { t²[(∂_i E_l)(∂_j E_l) − (∂_i E_l)(∂_j E_{1−l})] − 8 sin²(δE t/2) ⟨E_l|∂_i E_{1−l}⟩⟨E_{1−l}|∂_j E_l⟩ },  (13)

where δE = E_0 − E_1 is the energy gap of the probe Hamiltonian. The complete derivation is provided in Supplementary Note 4. This equation characterizes the time dependence of the quantum Fisher information matrix: the major term grows quadratically with time, while the other oscillates with time.

The estimation precision of the parameters is characterized by the estimator variances, which are bounded by the diagonal elements of the inverse of the quantum Fisher information matrix and the number of trials. As the system is two-dimensional, we assume α = (α_1, α_2, α_3). The results for the estimator variances are given in Supplementary Note 5, where a detailed analysis is provided. Since the estimator variances for the different parameters are symmetric with respect to each other, we only present the estimator variance for the first parameter as an example. The estimator variance for the first parameter α_1 can be obtained as

⟨δ²α̂_1⟩ = (1/n) [csc²(δE t/2) t² ξ_1 + ξ_2] / (t² ξ_3),  (14)

where

ξ_1 = (μ_2 ∂_3 δE − μ_3 ∂_2 δE)² + (ν_2 ∂_3 δE − ν_3 ∂_2 δE)²,
ξ_2 = 16 (μ_3 ν_2 − μ_2 ν_3)²,
ξ_3 = 16 [μ_1(ν_3 ∂_2 δE − ν_2 ∂_3 δE) + μ_2(ν_1 ∂_3 δE − ν_3 ∂_1 δE) + μ_3(ν_2 ∂_1 δE − ν_1 ∂_2 δE)]²,  (15)

and μ_i and ν_i are given by

μ_i = Re(⟨E_0|∂_i E_1⟩),  ν_i = Im(⟨E_0|∂_i E_1⟩).  (16)

The variance exhibits only two characteristic time scalings, as shown in Fig. 2. Fig. 2a shows the time scaling of the variance for the case with ξ_1 ≠ 0: when t ≪ 1/|δE|, csc²(δE t/2) ≈ 4/(δE t)², hence the estimation variance for the first parameter decays quadratically with time, while for a longer evolution time t, the variance oscillates at a frequency of |δE|/2π, with its lower envelope decaying and rapidly converging to ξ_1/(nξ_3). Fig. 2b shows the time scaling of the variance for ξ_1 = 0, where the variance achieves the Heisenberg scaling.

As mentioned above, the optimal control Hamiltonian is H_c = −H^(init)_α, but its dependence on the unknown parameters poses a practical challenge. The adaptive control strategy provides a solution to this challenge: an initial estimation of the unknown parameters is performed without quantum control to yield a rough approximate value for the parameters used in the control Hamiltonian; in the following rounds, the control Hamiltonian is implemented with the estimated values of the unknown parameters from the previous rounds, resulting in improved precisions. This process is iterated for multiple rounds.
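As a sanity check on Eqs. (10)-(12), the sketch below (ours, not the paper's) computes h_i(t) by direct quadrature of Eq. (12) for an illustrative qubit Hamiltonian H = α·σ and assembles F_ij(t) from Eq. (10), which holds for the maximally entangled initial state of Eq. (9); the grid size of the trapezoidal rule is an arbitrary choice:

    import numpy as np
    from scipy.linalg import expm

    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = np.array([[0 + 0j, 0], [0, 0]]) + np.diag([1, -1]).astype(complex)

    def H(a):
        return a[0]*sx + a[1]*sy + a[2]*sz

    def h_gen(a, i, t, n=2000, eps=1e-6):
        """h_i(t) of Eq. (12), by trapezoidal quadrature; dH by finite differences
        (here it simply equals the i-th Pauli matrix)."""
        d = np.zeros(3); d[i] = eps
        dH = (H(a + d) - H(a - d)) / (2*eps)
        taus = np.linspace(0.0, t, n + 1)
        vals = np.array([expm(1j*H(a)*s) @ dH @ expm(-1j*H(a)*s) for s in taus])
        dtau = taus[1] - taus[0]
        return (vals[0]/2 + vals[1:-1].sum(axis=0) + vals[-1]/2) * dtau

    def qfi_bell(a, t):
        """F_ij(t) of Eq. (10) for the maximally entangled initial state."""
        hs = [h_gen(a, i, t) for i in range(3)]
        F = np.zeros((3, 3))
        for i in range(3):
            for j in range(3):
                F[i, j] = (2*np.trace(hs[i] @ hs[j])
                           - np.trace(hs[i])*np.trace(hs[j])).real
        return F

    print(qfi_bell(np.array([0.4, 0.1, 0.3]), t=2.0))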
As H_c approaches −H^(init)_α, the total Hamiltonian H_α, and with it δE², converges toward zero. Based on the preceding results, the decrease of δE² extends the evolution time t satisfying t ≪ 1/|δE| during which the estimator variances decay quadratically with time. Consequently, the estimation precision asymptotically approaches the Heisenberg limit (see Supplementary Note 3).

Figure 2: Relation between estimation variance and evolution time. The estimation variance of α̂_1 exhibits two characteristic time scalings. Panel (a) shows the time scaling of the variance for ξ_1 ≠ 0, depicted by the orange curve. For t ≪ 1/|δE|, the variance decays quadratically with time. As t increases, the variance oscillates with time and diverges at integer multiples of 2π/|δE|, with the asymptotes plotted by the green dashed lines. The lower envelope of the variance, depicted by the green solid line, decays and rapidly converges to ξ_1/(nξ_3). Panel (b) illustrates the time scaling of the estimation variance for ξ_1 = 0, where the variance decays quadratically with time and reaches the Heisenberg scaling.

Therefore, in order to achieve the Heisenberg-limited time scaling for an evolution time as long as possible, our objective is to design a strategy that reduces δE² rapidly within a given total evolution time.

The dependence of the Hamiltonian H_α on α can be arbitrary in general. To simplify the following study, we reparameterize the Hamiltonian as

H^(init)_β = β_1 σ_x + β_2 σ_y + β_3 σ_z,  (17)

with β_1, β_2, and β_3 being the parameters to be estimated, which are transformed from α_1, α_2, α_3; the superscript (init) denotes that this is the initial Hamiltonian without any control. The necessity of the reparameterization is shown in Supplementary Note 6. We will also show the validity of our results for arbitrary parameter dependence of the Hamiltonian later.

The control Hamiltonian at the (k+1)-th iteration is denoted by

H_{c,k+1} = −β̂_k · σ,  (18)

where β̂_k = (β̂_{k,1}, β̂_{k,2}, β̂_{k,3}) denotes the control parameters, which are essentially the estimates of β from the k-th iteration. Suppose β̂_k deviates from the true value of β by δβ_k, i.e.,

β̂_k = β_0 + δβ_k,  (19)

with β_0 = (β_{0,1}, β_{0,2}, β_{0,3}) being the true value of β and δβ_k = (δβ_{k,1}, δβ_{k,2}, δβ_{k,3}) being the estimation errors from the k-th iteration; we obtain the δE² of the (k+1)-th iteration as

δE²_{k+1} = 4‖δβ_k‖².  (20)

When the number of trials in the k-th iteration, n_k, is sufficiently large, the central limit theorem guarantees that δβ_k asymptotically follows a three-dimensional normal distribution,

δβ_k ∼ N(0, C^(β)_k),  (21)

where C^(β)_k = (n_k F^(β)_k)^(-1). As δE²_{k+1} depends on δβ_k, it is also a random variable. Therefore, we reformulate the optimization objective as minimizing the expectation value ⟨δE²_{k+1}⟩,

⟨δE²_{k+1}⟩ = 4(⟨δβ²_{k,1}⟩ + ⟨δβ²_{k,2}⟩ + ⟨δβ²_{k,3}⟩) = 4 Tr C^(β)_k.  (22)

Optimal evolution time for each trial in one iteration. Since the optimal control Hamiltonian requires the knowledge of the unknown parameters to be estimated, we take an adaptive approach with feedback to progressively update the control parameters. As time is an important resource in quantum metrology, we consider a given total evolution time for the k-th iteration, e.g., T_k = n_k t_k, where n_k and t_k are the number of trials and the evolution time per trial in the k-th iteration, respectively, and study how to determine the evolution time t_k that minimizes ⟨δE²_{k+1}⟩.
The control Hamiltonian in the k-th iteration depends on the estimated values of the unknown parameters from the (k−1)-th iteration. The covariance matrix for the parameters β_1, β_2, and β_3 is provided in Supplementary Note 7. Applying that covariance matrix in Eq. (22), we obtain

⟨δE²_{k+1}⟩ = (1/T_k) [1/t_k + 2 t_k ‖δβ_{k−1}‖² csc²(‖δβ_{k−1}‖ t_k)].  (23)

To find the optimal t_k that minimizes ⟨δE²_{k+1}⟩, we take the derivative of ⟨δE²_{k+1}⟩ with respect to t_k. Let g = ‖δβ_{k−1}‖ t_k; a numerical computation yields the optimal value of g that minimizes ⟨δE²_{k+1}⟩ as

g_0 ≈ 1.2986.  (24)

By using ‖δβ_{k−1}‖² = δE²_k/4 and replacing δE²_k with ⟨δE²_k⟩, we obtain the optimal evolution time for each trial in the k-th iteration as

t_opt,k = 2g_0/√⟨δE²_k⟩  (25)

and a recursive relation for ⟨δE²_k⟩ between two consecutive iterations,

⟨δE²_{k+1}⟩ = [G(g_0)/n_k] ⟨δE²_k⟩,  (26)

where G(x) = 1/(4x²) + csc²(x)/2 and δE²_1 = 4‖β_0‖².

The scheme with an equal number of trials in each iteration. To determine the performance of the adaptive control strategy with the optimal evolution time derived above, we propose a scheme where all iterations consist of n trials, each iteration using its respective optimal evolution time. We compare its estimation error with that of the optimal control strategy, which uses the true values of the unknown parameters.

We define V_k = ⟨δE²_{k+1}⟩/4 to represent the sum of the variances of β̂_1, β̂_2, and β̂_3 in the k-th iteration. For the scheme with an equal number n of trials in each iteration, the estimation error after m iterations is

V_m = (G(g_0)/n)^m V_0  (27)

according to Eq. (26). If the target estimation error is V, the required number of iterations is given by

m = ⌈ log_{G(g_0)/n}(V/V_0) ⌉.  (28)

This result shows that the growth of m with decreasing V is slow, as it is logarithmic in V, implying that the target precision can be achieved with only a few iterations. The optimal evolution time for the k-th iteration is

t_opt,k = g_0 V_0^(−1/2) (n/G(g_0))^((k−1)/2),  (29)

according to Eq. (25). Define t_tot,m = Σ_{k=1}^{m} t_opt,k, the total evolution time when all the trials in each iteration are carried out in parallel. To compare with the Heisenberg limit of the optimal control strategy in Supplementary Note 3, we derive the relation between V_m and t_tot,m,

V_m = (G(g_0)/n)^m [ g_0 (1 − √(n/G(g_0))^m) / (1 − √(n/G(g_0))) ]² t_tot,m^(−2).  (30)

For √(n/G(g_0)) ≫ 1, Eq. (30) simplifies to

V_m ≈ g_0² G(g_0) / (n t_tot,m²).  (31)

This result confirms that the Heisenberg limit can be achieved by the above adaptive control approach with the evolution time optimized in each iteration.

For the optimal control Hamiltonian, H_c = −β_0 · σ, Supplementary Note 3 shows that the covariance matrix for the parameters β_1, β_2, and β_3 is

C^(β)_oc = [1/(4 n_oc t_oc²)] I_{3×3},  (32)

where I_{3×3} is the three-dimensional identity matrix, and the total variance of the estimators β̂_1, β̂_2, and β̂_3 is therefore

V_oc = 3/(4 n_oc t_oc²).  (33)

Compared to this optimal control strategy with precise control parameters, V_m is only 4g_0² G(g_0)/3 ≈ 1.55 times larger, implying that the estimation precision of this adaptive control strategy achieves the optimum up to an overall factor.

The above result is obtained for the parameters in the Pauli basis, β_1, β_2, and β_3, rather than the original parameters α_i in the Hamiltonian.
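The constant g_0 of Eq. (24) and the iteration count of Eq. (28) are easy to reproduce numerically. A short sketch (ours, with illustrative numbers of our own choosing):

    import numpy as np
    from scipy.optimize import minimize_scalar

    # From Eq. (23) with g = ||delta beta_{k-1}|| t_k, the quantity to minimize
    # over g in (0, pi) is proportional to 1/g + 2 g csc(g)^2.
    phi = lambda g: 1.0/g + 2.0*g/np.sin(g)**2
    g0 = minimize_scalar(phi, bounds=(0.1, np.pi - 0.1), method='bounded').x
    print(g0)                          # ~1.2986, reproducing Eq. (24)

    G = lambda x: 1.0/(4.0*x**2) + 1.0/(2.0*np.sin(x)**2)   # G(x) of Eq. (26)

    # Iterations needed for a target error V, Eqs. (27)-(28); numbers are arbitrary
    n, V0, V = 100, 4.0, 1e-10
    m = int(np.ceil(np.log(V/V0) / np.log(G(g0)/n)))
    print(m)                           # only a few iterations suffice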
Supplementary Note 8 derives the relation between the estimation variances of the original parameters with adaptive and optimal control, from which we obtain

⟨δ²α̂_i⟩_m ≤ (4g_0² csc²(g_0) − 1) ⟨δ²α̂_i⟩_oc ≈ 6.27 ⟨δ²α̂_i⟩_oc,  (34)

where ⟨δ²α̂_i⟩_m and ⟨δ²α̂_i⟩_oc represent the estimation variances of α̂_i with adaptive and optimal control, respectively. This indicates that our adaptive control strategy works for arbitrary parameters of a Hamiltonian in general, and that the estimation precision for any unknown parameter in the Hamiltonian can also achieve the optimum up to a factor of constant order.

Discussion. The optimal evolution time t_opt,k in the k-th iteration is derived based on the expectation value of δE²_k. In practical experiments, the measurement results of the (k−1)-th iteration can be random, so δE²_k, which is obtained from the (k−1)-th iteration, can also be random accordingly. Hence, the real value of δE²_k can deviate from its expectation value in practice, which may affect the estimation precision at the k-th iteration. In the Methods, we study the effect of this randomness on the optimal evolution time scheme and the robustness of this scheme. In particular, we show that the estimation precision is more likely to benefit from such deviations in δE²_k and to perform better than in the case with the expectation value of δE²_k. Therefore, the random deviation in δE²_k can actually be favorable to this adaptive feedback control strategy.

METHODS

Robustness analysis. To facilitate the following analysis, Fig. 3 schematically depicts the relations between the physical quantities in the optimal evolution time scheme, using dashed arrows for the no-deviation case with the errors of the control parameters averaged and solid arrows for the practical cases with random deviations in the average errors of the control parameters.

To explicitly characterize the deviation of δE²_k from its expectation value, we introduce a deviation factor D_k, which is also random due to the randomness of δE²_k,

δE²_k = D_k ⟨δE²_k⟩.  (35)

We use the evolution time t_opt,k, which is derived from the mean of δE²_k according to Eq. (25), to perform the k-th iteration. If δE²_k deviates from its mean in the experiment, Eq. (25) shows that the deviation factor D_k rescales g_0 to √(D_k) g_0, and thus the recursive relation Eq. (26) becomes

⟨δE²_{k+1}⟩_{D_k} = [G(√(D_k) g_0)/n_k] D_k ⟨δE²_k⟩.  (36)

The impact of the deviation factor D_k on the estimation precision at the k-th iteration is characterized by the ratio

R_k = V_{k,D_k}/V_k = D_k G(√(D_k) g_0)/G(g_0),  (37)

as illustrated in Fig. 4a. The estimation precision decreases as the deviation increases for 0 < D_k < (π/g_0)². When 0 < D_k < 1, where the control Hamiltonian obtained from the estimate in the (k−1)-th iteration leads to a δE²_k smaller than its mean, implying that the estimate from the (k−1)-th iteration is better than the average, we have R_k < 1. As D_k → 0, V_{k,D_k} → 3/(4n_k t_opt,k²), which is exactly the bound given in Supplementary Note 3. For 1 ≤ D_k < (π/g_0)², indicating that the estimate from the (k−1)-th iteration is worse than the average, it follows that R_k ≥ 1.

The deviation factor D_k follows a generalized χ² distribution,

D_k = [χ²_1(1) + g_0² csc²(g_0) (χ²_2(1) + χ²_3(1))] / (2g_0² csc²(g_0) + 1),  (38)

where χ²_1(1), χ²_2(1), and χ²_3(1) are squares of independent standard normal random variables. Details of the derivation can be found in Supplementary Note 9. The probability density f_{D_k} of the deviation factor D_k is shown in Fig. 4b.
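Eqs. (37)-(38) can be explored by Monte Carlo sampling. The following sketch (ours, not from the paper) draws D_k from the generalized χ² law of Eq. (38) and evaluates the ratio R_k of Eq. (37); the sample size is arbitrary:

    import numpy as np

    rng = np.random.default_rng(0)
    g0 = 1.2986
    w = g0**2 / np.sin(g0)**2            # weight g0^2 csc^2(g0)

    # Sample D_k of Eq. (38) from three independent chi^2(1) variables
    chi = rng.standard_normal((100_000, 3))**2
    D = (chi[:, 0] + w*(chi[:, 1] + chi[:, 2])) / (2*w + 1)

    G = lambda x: 1/(4*x**2) + 1/(2*np.sin(x)**2)
    R = D * G(np.sqrt(D)*g0) / G(g0)     # ratio R_k of Eq. (37)

    print(D.mean())                      # ~1 by construction
    print(np.mean(D <= 3))               # ~0.97, cf. the annotation of Fig. 4b
    print(np.mean(R <= 1.1))             # bulk of the mass near the no-deviation case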
Fig. 4 suggests that the deviation factor D_k most likely lies in a region where R_k is almost insensitive to D_k and close to 1, demonstrating strong robustness of the estimation precision against deviations in the errors of the control parameters in a single iteration.

Figure 3: Relations between different physical quantities. Arrows schematically denote the relations between the different physical quantities occurring in the proposed optimal evolution time scheme, pointing from one quantity to the derived quantity. The upper section, linked by dashed arrows, depicts the no-deviation cases with the errors of all control parameters averaged. The lower section, linked by solid arrows, depicts the practical cases where the errors of the control parameters deviate randomly from their average values, manifested by the deviation factor D_k for the k-th iteration.

In practice, the deviation factor D_k modifies the optimal evolution time t_opt,k, which is determined from the mean of δE²_k, to t_opt,k/√(D_k). Therefore, a procedure to modify the evolution time is required. Suppose the estimate obtained from the (k−1)-th iteration is β′_{k−1}, which determines the control Hamiltonian at the k-th iteration. By continuing to repeat the trials of the (k−1)-th iteration, a more precise estimate β′_0 can be obtained. The modified evolution time t̃_opt,k is determined by ‖β′_{k−1} − β′_0‖ t̃_opt,k = g_0, after which the k-th iteration proceeds with H_{c,k} = −β′_{k−1} · σ and t̃_opt,k.

We now consider the effect of the deviation factors on the estimation precision of the estimation process consisting of m iterations with this evolution-time-modified procedure. Since the evolution time of each iteration is modified to the optimal evolution time, g_0 is not scaled by the deviation factor D_k. To ensure an effective comparison with the same total evolution time, we adopt the equivalent form of Eq. (26),

⟨δE²_{k+1}⟩ = [2g_0 G(g_0)/T_k] √⟨δE²_k⟩,  (39)

which yields

⟨δE²_{k+1}⟩̃_{D_k} = [2g_0 G(g_0)/T_k] √(D_k ⟨δE²_k⟩̃_{D_{k−1}}),  (40)

where D_1 = 1 and ⟨δE²_1⟩̃_{D_0} = ⟨δE²_1⟩. The effect of the deviation factors on the estimation precision of the entire estimation process is characterized by the ratio

R̃_tot,m = Ṽ_{m,D_k}/V_m = Π_{k=1}^{m} D_k^(1/2^(m−(k−1))),  (41)

where Ṽ_{m,D_k} = ⟨δE²_{m+1}⟩̃_{D_k}/4.

Figure 4: Robustness of a single iteration against deviations in the errors of control parameters. Panel (a) illustrates the effect of the deviation factor on the estimation precision. The estimation precision decreases as D_k increases. When D_k is sufficiently low, the real estimation precision approaches the optimal precision with precise control parameters, so the ratio R_k between the real estimation precision and the estimation precision with average errors in the control parameters drops below 1, as shown in the left panel of the figure. In the region indicated by the red dashed line, the estimation precision is almost insensitive to the random deviation of the errors of the control parameters. Panel (b) shows the probability density function of D_k (with p(D_k ≤ 3) ≈ 0.97), which suggests that D_k lies, with high probability, in an interval where the estimation precision is close to that with average errors in the control parameters, indicating strong robustness of the optimal evolution time scheme against deviations in the errors of the control parameters.
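The cumulative distribution of R̃_tot,m in Eq. (41) can likewise be estimated by Monte Carlo; a sketch (ours), with D_1 = 1 as in Eq. (40):

    import numpy as np

    rng = np.random.default_rng(1)
    g0 = 1.2986
    w = g0**2 / np.sin(g0)**2

    def sample_D(size):
        """Deviation factors from the generalized chi^2 law of Eq. (38)."""
        chi = rng.standard_normal((size, 3))**2
        return (chi[:, 0] + w*(chi[:, 1] + chi[:, 2])) / (2*w + 1)

    def R_tot(m, samples=100_000):
        """Cumulative ratio of Eq. (41); the first iteration has D_1 = 1."""
        R = np.ones(samples)
        for k in range(1, m + 1):
            D = np.ones(samples) if k == 1 else sample_D(samples)
            R *= D**(1.0 / 2**(m - (k - 1)))
        return R

    for m in (2, 3, 4):
        # Probability that the deviated precision beats the no-deviation one,
        # cf. the values 0.61, 0.68, 0.71 reported in Fig. 5
        print(m, np.mean(R_tot(m) <= 1.0))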
Fig. 5 shows the cumulative distribution functions of R̃_tot,m for m = 2, 3, and 4. A surprising result from the figure is that the probabilities that the estimation variance with deviations in δE²_k surpasses that without deviations exceed 50% and increase with the number of iterations, suggesting that the real estimation precision is more likely to benefit from the deviations in δE²_k and to become better than the expected estimation precision.

Figure 5: Robustness of the optimal evolution time scheme. The figure illustrates the effect of the deviation in δE²_k on the estimation precision for a total of two (orange curve), three (blue curve), and four (green curve) iterations. The orange, blue, and green dashed lines plot the probabilities that the precision with the deviation in δE²_k surpasses that without deviation for the different iteration numbers (0.61, 0.68, and 0.71, respectively). They all exceed 50% and increase with the number of iterations, implying that the real estimation precision actually benefits from the deviation in δE²_k and becomes better than the expected estimation precision.

DATA AVAILABILITY

The code and data used in this work are available upon request to the corresponding author.

ACKNOWLEDGMENTS

This work is supported by the National Natural Science Foundation of China (Grant No. 12075323), the Natural Science Foundation of Guangdong Province of China (Grant No. 2025A1515011440), and the Innovation Program for Quantum Science and Technology (Grant No. 2021ZD0300702).

AUTHOR CONTRIBUTIONS

Q.W. initiated this work and carried out the main calculations. S.P. participated in scientific discussions and assisted with the calculations. Both authors contributed to the writing of the manuscript.

COMPETING FINANCIAL INTERESTS

The authors declare no competing financial interests.

[1] V. Giovannetti, S. Lloyd, and L. Maccone, Physical Review Letters 96, 010401 (2006).
[2] M. G. A. Paris, International Journal of Quantum Information 07, 125 (2009).
[3] V. Giovannetti, S. Lloyd, and L. Maccone, Nature Photonics 5, 222 (2011).
[4] J. Jacobson, G. Björk, and Y. Yamamoto, Applied Physics B 60, 187 (1995).
[5] A. André, A. S. Sørensen, and M. D. Lukin, Physical Review Letters 92, 230801 (2004).
[6] J. Borregaard and A. S. Sørensen, Physical Review Letters 111, 090801 (2013).
[7] E. M. Kessler, P. Kómár, M. Bishof, L. Jiang, A. S. Sørensen, J. Ye, and M. D. Lukin, Physical Review Letters 112, 190403 (2014).
[8] R. Schnabel, N. Mavalvala, D. E. McClelland, and P. K. Lam, Nature Communications 1, 121 (2010).
[9] The LIGO Scientific Collaboration, Nature Physics 7, 962 (2011).
[10] S. Pang and T. A. Brun, Physical Review A 90, 022117 (2014).
[11] H. Yuan and C.-H. F. Fung, Physical Review Letters 115, 110401 (2015).
[12] H. Yuan, Physical Review Letters 117, 160801 (2016).
[13] S. Pang and A. N. Jordan, Nature Communications 8, 14695 (2017).
[14] S. L. Braunstein and C. M. Caves, Physical Review Letters 72, 3439 (1994).
[15] J. Kołodyński and R. Demkowicz-Dobrzański, New Journal of Physics 15, 073043 (2013).
[16] R. Demkowicz-Dobrzański and L. Maccone, Physical Review Letters 113, 250801 (2014).
[17] A. Fujiwara, Physical Review A 65, 012316 (2001).
[18] P. C. Humphreys, M. Barbieri, A. Datta, and I. A. Walmsley, Physical Review Letters 111, 070403 (2013).
[19] J. Suzuki, Journal of Mathematical Physics 57, 042201 (2016).
[20] T. Baumgratz and A. Datta, Physical Review Letters 116, 030801 (2016).
[21] M. Szczykulska, T. Baumgratz, and A. Datta, Advances in Physics: X 1, 621 (2016).
[22] T. J. Proctor, P. A. Knott, and J. A. Dunningham, Physical Review Letters 120, 080501 (2018).
[23] S. Altenburg and S. Wölk, Physica Scripta 94, 014001 (2018).
[24] J. Yang, S. Pang, Y. Zhou, and A. N. Jordan, Physical Review A 100, 032104 (2019).
[25] F. Albarelli, J. F. Friel, and A. Datta, Physical Review Letters 123, 200503 (2019).
[26] A. Carollo, B. Spagnolo, A. A. Dubkov, and D. Valenti, Journal of Statistical Mechanics: Theory and Experiment 2019, 094010 (2019).
[27] J. S. Sidhu, Y. Ouyang, E. T. Campbell, and P. Kok, Physical Review X 11, 011028 (2021).
[28] B. Xia, J. Huang, H. Li, H. Wang, and G. Zeng, Nature Communications 14, 1021 (2023).
[29] Z. Hou, Z. Zhang, G.-Y. Xiang, C.-F. Li, G.-C. Guo, H. Chen, L. Liu, and H. Yuan, Physical Review Letters 125, 020501 (2020).
[30] F. Albarelli, M. Barbieri, M. G. Genoni, and I. Gianani, Physics Letters A 384, 126311 (2020).
[31] J. Suzuki, Y. Yang, and M. Hayashi, Journal of Physics A: Mathematical and Theoretical 53, 453001 (2020).
[32] J. Suzuki, Journal of Physics A: Mathematical and Theoretical 53, 264001 (2020).
[33] F. Belliardo and V. Giovannetti, New Journal of Physics 23, 063055 (2021).
[34] V. Katariya and M. M. Wilde, New Journal of Physics 23, 073040 (2021).
[35] X.-M. Lu and X. Wang, Physical Review Letters 126, 120503 (2021).
[36] W. Górecki and R. Demkowicz-Dobrzański, Physical Review A 106, 012424 (2022).
[37] F. Albarelli and R. Demkowicz-Dobrzański, Physical Review X 12, 011039 (2022).
[38] A. Fujiwara and H. Imai, Journal of Physics A: Mathematical and Theoretical 41, 255304 (2008).
[39] B. M. Escher, R. L. de Matos Filho, and L. Davidovich, Nature Physics 7, 406 (2011).
[40] R. Demkowicz-Dobrzański, J. Kołodyński, and M. Guţă, Nature Communications 3, 1063 (2012).
[41] P. Sekatski, M. Skotiniotis, and W. Dür, New Journal of Physics 18, 073034 (2016).
[42] Y. Dong, X.-D. Chen, G.-C. Guo, and F.-W. Sun, Physical Review A 94, 052322 (2016).
[43] S. Zhou, M. Zhang, J. Preskill, and L. Jiang, Nature Communications 9, 78 (2018).
[44] W. Górecki, S. Zhou, L. Jiang, and R. Demkowicz-Dobrzański, Quantum 4, 288 (2020).
[45] G. Chen, L. Zhang, W.-H. Zhang, X.-X. Peng, L. Xu, Z.-D. Liu, X.-Y. Xu, J.-S. Tang, Y.-N. Sun, D.-Y. He, J.-S. Xu, Z.-Q. Zhou, C.-F. Li, and G.-C. Guo, Physical Review Letters 121, 060506 (2018).
[46] D. Leibfried, M. D. Barrett, T. Schaetz, J. Britton, J. Chiaverini, W. M. Itano, J. D. Jost, C. Langer, and D. J. Wineland, Science 304, 1476 (2004).
[47] T. Nagata, R. Okamoto, J. L. O’Brien, K. Sasaki, and S. Takeuchi, Science 316, 726 (2007).
[48] K. J. Resch, K. L. Pregnell, R. Prevedel, A. Gilchrist, G. J. Pryde, J. L. O’Brien, and A. G. White, Physical Review Letters 98, 223601 (2007).
[49] G. Chen, N. Aharon, Y.-N. Sun, Z.-H. Zhang, W.-H. Zhang, D.-Y. He, J.-S. Tang, X.-Y. Xu, Y. Kedem, C.-F. Li, and G.-C. Guo, Nature Communications 9, 93 (2018).
[50] Z. Hou, J.-F. Tang, H. Chen, H. Yuan, G.-Y. Xiang, C.-F. Li, and G.-C. Guo, Science Advances 7, eabd2986 (2021).
[51] M. Markiewicz, M. Pandit, and W. Laskowski, Scientific Reports 11, 15669 (2021).
[52] P. Yin, X. Zhao, Y. Yang, Y. Guo, W.-H. Zhang, G.-C. Li, Y.-J. Han, B.-H. Liu, J.-S. Xu, G. Chiribella, G. Chen, C.-F. Li, and G.-C. Guo, Nature Physics 19, 1122 (2023).
[53] Y.-N. Lu, Y.-R. Zhang, G.-Q. Liu, F. Nori, H. Fan, and X.-Y. Pan, Physical Review Letters 124, 210502 (2020).
[54] Y. Zhai, X. Yang, K. Tang, X. Long, X. Nie, T. Xin, D. Lu, and J. Li, Physical Review A 107, 022602 (2023).
[55] M. Naghiloo, A. N. Jordan, and K. W. Murch, Physical Review Letters 119, 180801 (2017).
[56] J. J. Bollinger, W. M. Itano, D. J. Wineland, and D. J. Heinzen, Physical Review A 54, R4649 (1996).
[57] Z. Hu, S. Wang, L. Qiao, T. Isogawa, C. Li, Y. Yang, G. Wang, H. Yuan, and P. Cappellaro, Control incompatibility in multiparameter quantum metrology (2024), arXiv:2411.18896 [quant-ph].
[58] Y. Yang, S. Ru, M. An, Y. Wang, F. Wang, P. Zhang, and F. Li, Physical Review A 105, 022406 (2022).
[59] W. K. Wootters, Physical Review D 23, 357 (1981).
[60] J. S. Sidhu and P. Kok, AVS Quantum Science 2, 014701 (2020).
[61] C. W. Helstrom, Physics Letters A 25, 101 (1967).
[62] J. Liu, H. Yuan, X.-M. Lu, and X. Wang, Journal of Physics A: Mathematical and Theoretical 53, 023001 (2019).
[63] K. Matsumoto, Journal of Physics A: Mathematical and General 35, 3111 (2002).
[64] S. Ragy, M. Jarzyna, and R. Demkowicz-Dobrzański, Physical Review A 94, 052108 (2016).
[65] R. Demkowicz-Dobrzański, W. Górecki, and M. Guţă, Journal of Physics A: Mathematical and Theoretical 53, 363001 (2020).
[66] H. Chen, Y. Chen, and H. Yuan, Physical Review A 105, 062442 (2022).
[67] R. M. Wilcox, Journal of Mathematical Physics 8, 962 (1967).

SUPPLEMENTARY NOTE 1. ELIMINATING MEASUREMENT TRADE-OFF THROUGH SYSTEM EXTENSION

This Supplementary Note proves that the system extension scheme with the initial state being a maximally entangled state eliminates the measurement trade-off.

For a d-dimensional system governed by a Hamiltonian H_α, we introduce an ancillary system with dimension no less than d, whose Hamiltonian is the identity operator I_A. The initial state is prepared as an arbitrary pure state of the joint system, denoted as |ψ0⟩ = Σ_{l=0}^{d−1} λ_l |l_P⟩⊗|l_A⟩, where {|l_P⟩ | l = 0, 1, ..., d−1} forms a complete orthonormal basis of the system, {|l_A⟩ | l = 0, 1, ..., d−1} is a set of mutually orthogonal basis vectors for the ancillary system, and the non-negative coefficients λ_l satisfy Σ_{l=0}^{d−1} λ_l² = 1. The Hamiltonian of the joint system is H_α ⊗ I_A, so the generator of the infinitesimal translation of U_α ⊗ I_A with respect to the parameter α_i is given by h_i ⊗ I_A, where h_i = −i(∂_i U_α†)U_α and U_α = exp(−iH_α t). From Eq. (7) of the main manuscript, we obtain

⟨ψ0| h_i h_j ⊗ I_A |ψ0⟩ = ⟨ψ0| h_j h_i ⊗ I_A |ψ0⟩.  (S1)

Calculating both sides of the equation separately, we obtain

⟨ψ0| h_i h_j ⊗ I_A |ψ0⟩ = Σ_{l=0}^{d−1} λ_l² ⟨l_P| h_i h_j |l_P⟩  (S2)

and

⟨ψ0| h_j h_i ⊗ I_A |ψ0⟩ = Σ_{l=0}^{d−1} λ_l² ⟨l_P| h_j h_i |l_P⟩.  (S3)

Since h_i and h_j generally do not commute, Eq. (S1) holds only when λ_l = 1/√d, due to the cyclic property of the trace. If the dimension of the ancillary system is equal to that of the system, we conclude that the weak commutativity condition is satisfied when the initial state is a maximally entangled state.
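A quick numerical check of this conclusion for d = 2 (our sketch, not part of the Supplementary Material): for an illustrative qubit encoding U = exp(−i β·σ) and the maximally entangled state, Im⟨h_i h_j ⊗ I_A⟩ vanishes up to finite-difference error:

    import numpy as np
    from scipy.linalg import expm

    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    I2 = np.eye(2, dtype=complex)

    def gen(beta, i, eps=1e-6):
        """h_i = -i (d_i U^dag) U for U = exp(-i beta.sigma)."""
        H = lambda b: b[0]*sx + b[1]*sy + b[2]*sz
        d = np.zeros(3); d[i] = eps
        dUdag = (expm(1j*H(beta + d)) - expm(1j*H(beta - d))) / (2*eps)
        return -1j * dUdag @ expm(-1j*H(beta))

    beta = np.array([0.7, -0.2, 0.4])
    bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # (|00>+|11>)/sqrt(2)

    for i in range(3):
        for j in range(3):
            hij = np.kron(gen(beta, i) @ gen(beta, j), I2)     # (h_i h_j) ⊗ I_A
            print(i, j, (bell.conj() @ hij @ bell).imag)       # ~0: Eq. (S1) holds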
SUPPLEMENTARY NOTE 2. ELIMINATING INITIAL-STATE TRADE-OFF IN TWO-DIMENSIONAL SYSTEMS

This Supplementary Note proves that, for a two-dimensional system with system extension, choosing a maximally entangled state as the initial state makes the quantum Fisher information matrix optimal.

Suppose the joint Hamiltonian of a two-dimensional system and a two-dimensional ancillary system is H_α ⊗ I_A and the initial state is |ψ0⟩ = √x |0_P 0_A⟩ + √(1−x) |1_P 1_A⟩ (0 ≤ x ≤ 1), where {|0_P⟩, |1_P⟩} and {|0_A⟩, |1_A⟩} are complete orthonormal bases for the system and the ancillary system, respectively. According to Eq. (4) in the main manuscript, the entries of the quantum Fisher information matrix of the final state are given by

F_ij = 4 { Re[x⟨0_P|h_i h_j|0_P⟩ + (1−x)⟨1_P|h_i h_j|1_P⟩] − [x⟨0_P|h_i|0_P⟩ + (1−x)⟨1_P|h_i|1_P⟩][x⟨0_P|h_j|0_P⟩ + (1−x)⟨1_P|h_j|1_P⟩] }.  (S4)

Denoting the matrix representation of h_i in the basis {|0_P⟩, |1_P⟩} as

h^(M)_i = [ h^(M)_{i,11}, h^(M)_{i,12} ; h^(M)_{i,21}, h^(M)_{i,22} ],  (S5)

we have

F_ij = 4 { Re[ x(h^(M)_{i,11} h^(M)_{j,11} + h^(M)_{i,12} h^(M)_{j,21}) + (1−x)(h^(M)_{i,21} h^(M)_{j,12} + h^(M)_{i,22} h^(M)_{j,22}) ] − [x h^(M)_{i,11} + (1−x) h^(M)_{i,22}][x h^(M)_{j,11} + (1−x) h^(M)_{j,22}] }.  (S6)

We define δF = F|_{x=1/2} − F; the elements of δF are given by

δF_ij = 4 (h^(M)_{i,11} − h^(M)_{i,22})(h^(M)_{j,11} − h^(M)_{j,22})(x − 1/2)².  (S7)

For both the two-parameter and three-parameter cases, all principal minors of δF are non-negative, which indicates that δF is positive semi-definite. Therefore, the maximally entangled state is the optimal pure state.

Let ρ_0 = Σ_i p_i |φ_i⟩⟨φ_i| be an arbitrary mixed initial state. Under the unitary evolution U_α ⊗ I_A, the final state is ρ_t = Σ_i p_i (U_α ⊗ I_A)|φ_i⟩⟨φ_i|(U_α† ⊗ I_A). According to the convexity of the quantum Fisher information matrix [62], we obtain

F(ρ_t) ≤ Σ_i p_i F((U_α ⊗ I_A)|φ_i⟩⟨φ_i|(U_α† ⊗ I_A)) ≤ Σ_i p_i F((U_α ⊗ I_A)|ψ0⟩⟨ψ0|(U_α† ⊗ I_A))|_{x=1/2} = F|_{x=1/2},  (S8)

demonstrating that the maximally entangled initial state is optimal.

SUPPLEMENTARY NOTE 3. ACHIEVING HEISENBERG SCALING VIA HAMILTONIAN CONTROL

This Supplementary Note proves that using the reverse of the initial Hamiltonian as the control Hamiltonian enables the parameter estimation precision to reach the Heisenberg limit.

Consider a two-dimensional system with Hamiltonian H_α and a two-dimensional ancillary system with Hamiltonian I_A. The probe and the ancilla are initialized in a maximally entangled state and evolve under the joint Hamiltonian H_α ⊗ I_A. The entries of the quantum Fisher information matrix for the final evolved state are given by

F_ij = 2 Tr(h_i(t) h_j(t)) − t² Tr(∂_i H_α) Tr(∂_j H_α),  (S9)

where h_i(t) can be expressed as

h_i(t) = ∫_0^t e^{iH_α τ} (∂_i H_α) e^{−iH_α τ} dτ,  (S10)

and we have used Tr(h_i(t)) = t Tr(∂_i H_α).

Suppose the initial Hamiltonian of the system is H^(init)_α; then the control Hamiltonian is H_c = −H^(init)_α. Here H^(init)_α is parameter-dependent, while H_c is parameter-independent. The total Hamiltonian H_α = H^(init)_α + H_c becomes the zero operator, yielding h_i(t) = t ∂_i H^(init)_α, and hence

F^(c)_ij = t² { 2 Tr[(∂_i H^(init)_α)(∂_j H^(init)_α)] − Tr(∂_i H^(init)_α) Tr(∂_j H^(init)_α) }.  (S11)

The estimator variance of the parameter α_i is given by

⟨δ²α̂_i⟩ = [(n F^(c))^(-1)]_ii ∝ 1/(nt²),  (S12)

where δ²α̂_i := E[(α̂_i/∂_{α_i}E(α̂_i) − α_i)²].

SUPPLEMENTARY NOTE 4. TIME DEPENDENCE OF QUANTUM FISHER INFORMATION MATRIX

In this Supplementary Note, we derive the explicit time dependence of the quantum Fisher information matrix.

To obtain the explicit time dependence of the quantum Fisher information matrix, we first solve for the explicit time dependence of h_i(t) in Eq. (S10). Let Y(τ) = e^{iH_α τ}(∂_i H_α)e^{−iH_α τ}; we obtain

∂Y(τ)/∂τ = i[H_α, Y],  (S13)

where Y(0) = ∂_i H_α. The operator H(·) := [H_α, ·] is a Hermitian superoperator with four eigenvalues Λ_1, Λ_2, Λ_3, Λ_4, where Λ_k = 0 for k = 1, ..., r and Λ_k ≠ 0 for k = r+1, ..., 4. The corresponding eigenvectors are Γ_1, Γ_2, Γ_3, Γ_4, which satisfy Tr[Γ_i† Γ_j] = δ_ij. We can decompose Y(0) in terms of {Γ_k} as

Y(0) = Σ_{k=1}^{4} c_k Γ_k,  (S14)

where c_k = Tr(Γ_k† ∂_i H_α).
The solution for Y(τ) is

Y(τ) = Σ_{k=1}^{4} Tr(Γ_k† ∂_i H_α) exp(iΛ_k τ) Γ_k,  (S15)

leading to

h_i(t) = t Σ_{k=1}^{r} Tr(Γ_k† ∂_i H_α) Γ_k − i Σ_{k=r+1}^{4} [(exp(iΛ_k t) − 1)/Λ_k] Tr(Γ_k† ∂_i H_α) Γ_k.  (S16)

Suppose H_α has two eigenvalues E_0 and E_1 with E_0 ≠ E_1 and two corresponding eigenvectors |E_0⟩ and |E_1⟩; the eigenvalues and eigenvectors of H(·) are

Λ_1 = 0, Γ_1 = |E_0⟩⟨E_0|;  Λ_2 = 0, Γ_2 = |E_1⟩⟨E_1|;  Λ_3 = E_0 − E_1, Γ_3 = |E_0⟩⟨E_1|;  Λ_4 = E_1 − E_0, Γ_4 = |E_1⟩⟨E_0|.  (S17)

The corresponding coefficients Tr(Γ_k† ∂_i H_α) are

Tr(Γ_1† ∂_i H_α) = ∂_i E_0,  Tr(Γ_2† ∂_i H_α) = ∂_i E_1,  Tr(Γ_3† ∂_i H_α) = (E_1 − E_0)⟨E_0|∂_i E_1⟩,  Tr(Γ_4† ∂_i H_α) = (E_0 − E_1)⟨E_1|∂_i E_0⟩.  (S18)

Substituting Eq. (S17) and Eq. (S18) into Eq. (S16), we obtain

h_i(t) = Σ_{l=0}^{1} { t(∂_i E_l)|E_l⟩⟨E_l| + i[exp(i(E_l − E_{1−l})t) − 1]⟨E_l|∂_i E_{1−l}⟩ |E_l⟩⟨E_{1−l}| }.  (S19)

Finally, substituting Eq. (S19) into Eq. (S9) yields

F_ij(t) = Σ_{l=0}^{1} { t²[(∂_i E_l)(∂_j E_l) − (∂_i E_l)(∂_j E_{1−l})] − 8 sin²((E_l − E_{1−l})t/2) ⟨E_l|∂_i E_{1−l}⟩⟨E_{1−l}|∂_j E_l⟩ }.  (S20)

SUPPLEMENTARY NOTE 5. TIME DEPENDENCE OF ESTIMATOR VARIANCES

This Supplementary Note provides explicit expressions for the estimator variances and analyzes their dependence on time.

The quantum Fisher information matrix can be directly obtained from Eq. (S20). The variances of α̂_1, α̂_2, and α̂_3 are

⟨δ²α̂_1⟩ = { t² csc²(δE t/2)[(μ_2∂_3δE − μ_3∂_2δE)² + (ν_2∂_3δE − ν_3∂_2δE)²] + 16(μ_3ν_2 − μ_2ν_3)² } / { 16nt² [μ_1(ν_3∂_2δE − ν_2∂_3δE) + μ_2(ν_1∂_3δE − ν_3∂_1δE) + μ_3(ν_2∂_1δE − ν_1∂_2δE)]² },  (S21)

⟨δ²α̂_2⟩ = { t² csc²(δE t/2)[(μ_1∂_3δE − μ_3∂_1δE)² + (ν_1∂_3δE − ν_3∂_1δE)²] + 16(μ_3ν_1 − μ_1ν_3)² } / { 16nt² [μ_1(ν_3∂_2δE − ν_2∂_3δE) + μ_2(ν_1∂_3δE − ν_3∂_1δE) + μ_3(ν_2∂_1δE − ν_1∂_2δE)]² },  (S22)

and

⟨δ²α̂_3⟩ = { t² csc²(δE t/2)[(μ_1∂_2δE − μ_2∂_1δE)² + (ν_1∂_2δE − ν_2∂_1δE)²] + 16(μ_2ν_1 − μ_1ν_2)² } / { 16nt² [μ_1(ν_3∂_2δE − ν_2∂_3δE) + μ_2(ν_1∂_3δE − ν_3∂_1δE) + μ_3(ν_2∂_1δE − ν_1∂_2δE)]² },  (S23)

respectively, where δE = E_0 − E_1 and

μ_i = Re(⟨E_0|∂_i E_1⟩),  ν_i = Im(⟨E_0|∂_i E_1⟩).  (S24)

Since ⟨δ²α̂_1⟩, ⟨δ²α̂_2⟩, and ⟨δ²α̂_3⟩ are symmetric with respect to each other, we consider only ⟨δ²α̂_1⟩ in the following analysis.

When (μ_2∂_3δE − μ_3∂_2δE)² + (ν_2∂_3δE − ν_3∂_2δE)² = 0, the oscillatory term vanishes. To ensure a nonzero numerator, we need ∂_2δE = ∂_3δE = 0, leading to

⟨δ²α̂_1⟩ = 1/[nt²(∂_1δE)²],  (S25)

where ∂_1δE ≠ 0. In this case, ⟨δ²α̂_1⟩ achieves the Heisenberg limit. For instance, consider the Hamiltonian

H(B, θ, φ) = B(cos θ cos φ σ_x + cos θ sin φ σ_y + sin θ σ_z),  (S26)

where σ_x, σ_y, and σ_z are Pauli matrices. For estimating B, the fact that the eigenvalues of the Hamiltonian are independent of θ and φ enables the estimation precision to reach the Heisenberg limit.

When (μ_2∂_3δE − μ_3∂_2δE)² + (ν_2∂_3δE − ν_3∂_2δE)² ≠ 0, we first consider two extreme cases. When t is sufficiently small such that t ≪ 1/|δE|, we have csc²(δE t/2) ≈ 4/(δE t)², so

⟨δ²α̂_1⟩ ∝ 1/t².  (S27)

When t is sufficiently large, we have

⟨δ²α̂_1⟩ ∝ csc²(δE t/2),  (S28)

which is a periodic function. For general t, the overall trend can be analyzed via periodic sampling. We set the initial sampling time as t_{s,0} ∈ (0, 2π/|δE|) and define c_0 = csc²(δE t_{s,0}/2). For t_{s,k} = t_{s,0} + 2kπ/|δE| with k = 1, 2, 3, ..., we have csc²(δE t_{s,k}/2) = c_0. The function

⟨δ²α̂_1⟩_{c_0} = (c_0 t² ξ_1 + ξ_2)/(n t² ξ_3)  (S29)

passes through all these sampling points, where

ξ_1 = (μ_2∂_3δE − μ_3∂_2δE)² + (ν_2∂_3δE − ν_3∂_2δE)²,
ξ_2 = 16(μ_3ν_2 − μ_2ν_3)²,
ξ_3 = 16[μ_1(ν_3∂_2δE − ν_2∂_3δE) + μ_2(ν_1∂_3δE − ν_3∂_1δE) + μ_3(ν_2∂_1δE − ν_1∂_2δE)]².  (S30)

Since

∂_t ⟨δ²α̂_1⟩_{c_0} ≤ 0,  (S31)

the variance shows an overall decreasing trend. From Eq. (S28), we know that at large t the minimum variance per period occurs at t_k = (2k+1)π/|δE|, i.e., at c_0 = 1. Therefore, the infimum of the variance is

inf ⟨δ²α̂_1⟩ = ξ_1/(nξ_3).  (S32)
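The closed form Eq. (S19) can be verified against direct quadrature of Eq. (S10). In the sketch below (ours), ⟨E_l|∂_i E_{1−l}⟩ is evaluated through first-order perturbation theory, ⟨E_l|∂_i E_m⟩ = ⟨E_l|∂_i H_α|E_m⟩/(E_m − E_l) for l ≠ m, consistently with Eq. (S18):

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(2)

    # A random 2x2 Hermitian H_alpha and a fixed dH = partial_i H_alpha
    A = rng.standard_normal((2, 2)) + 1j*rng.standard_normal((2, 2))
    Hm = (A + A.conj().T) / 2
    dH = np.array([[0, 1], [1, 0]], dtype=complex)  # e.g. a sigma_x coefficient

    t = 1.7

    # Direct quadrature of Eq. (S10) (trapezoidal rule)
    taus = np.linspace(0, t, 4001)
    vals = np.array([expm(1j*Hm*s) @ dH @ expm(-1j*Hm*s) for s in taus])
    dtau = taus[1] - taus[0]
    h_num = (vals[0]/2 + vals[1:-1].sum(axis=0) + vals[-1]/2) * dtau

    # Closed form Eq. (S19) via the spectral decomposition of H_alpha
    E, V = np.linalg.eigh(Hm)
    h_spec = np.zeros((2, 2), dtype=complex)
    for l in range(2):
        ket_l, ket_m = V[:, l], V[:, 1 - l]
        # t * dE_l |E_l><E_l| with dE_l = <E_l|dH|E_l> (Hellmann-Feynman)
        h_spec += t * (ket_l.conj() @ dH @ ket_l) * np.outer(ket_l, ket_l.conj())
        # off-diagonal piece with <E_l|d_i E_{1-l}> from perturbation theory
        ovl = (ket_l.conj() @ dH @ ket_m) / (E[1 - l] - E[l])
        h_spec += 1j*(np.exp(1j*(E[l] - E[1 - l])*t) - 1) * ovl \
                  * np.outer(ket_l, ket_m.conj())

    print(np.max(np.abs(h_num - h_spec)))   # ~0, confirming Eq. (S19)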
However, since ∂_t⟨δ²α̂_1⟩_{c_0} decays cubically with time, it is not worthwhile to expend much more time for limited gains in precision.

SUPPLEMENTARY NOTE 6. NECESSITY OF REPARAMETERIZATION

This Supplementary Note shows the necessity of the reparameterization.

In general, the parameterization of the Hamiltonian of a two-dimensional system can be arbitrary. For example, one can parameterize the Hamiltonian by the coefficients in the Pauli basis or by a strength and angular parameters. For the current problem of reducing δE² so as to extend the evolution time for the Heisenberg scaling, it will turn out that the parameterization by the coefficients in the Pauli basis is more convenient, so we focus on this parameterization.

Usually the unknown parameters can have complex correlations between each other, manifested by the correlation matrix of the estimation. To simplify the estimation task, a powerful tool is to transform the unknown parameters to other parameters which have simpler correlations (e.g., a diagonal correlation matrix) [30, 31], sometimes known as reparameterization. If the vector of parameters α is reparameterized as β = (β_1, β_2, ..., β_k), the quantum Fisher information matrix of β is related to the original quantum Fisher information matrix of α through

F_β = Jᵀ F_α J,  (S33)

where J is the Jacobian matrix defined by J_ij = ∂α_i/∂β_j.

Suppose the original Hamiltonian of the probe is decomposed in the Pauli basis as

H^(init)_α = f(α) · σ,  (S34)

where σ = (σ_x, σ_y, σ_z), α = (α_1, α_2, α_3), and f(α) = (f_1(α), f_2(α), f_3(α)) with f_1(α), f_2(α), and f_3(α) being real-valued functions. The control Hamiltonian at the (k+1)-th iteration is denoted by

H_{c,k+1} = −f(α̂_k) · σ,  (S35)

where α̂_k = (α̂_{k,1}, α̂_{k,2}, α̂_{k,3}) denotes the control parameters, which are essentially the estimates of α from the k-th iteration. Suppose α̂_k deviates from the true value of α by δα_k, i.e.,

α̂_k = α_0 + δα_k,  (S36)

with α_0 = (α_{0,1}, α_{0,2}, α_{0,3}) being the true value of α and δα_k = (δα_{k,1}, δα_{k,2}, δα_{k,3}) being the estimation errors from the k-th iteration; we obtain the δE² of the (k+1)-th iteration as

δE²_{k+1} = 4‖f(α_0) − f(α̂_k)‖².  (S37)

When ‖δα_k‖ ≪ 1, Eq. (S37) simplifies to

δE²_{k+1} ≈ 4‖Jδα_k‖²,  (S38)

where J is the Jacobian matrix defined by J_ij = ∂_{α_j} f_i(α) evaluated at α_0. If we reparameterize the Hamiltonian as

H^(init)_β = β_1σ_x + β_2σ_y + β_3σ_z,  (S39)

where the new parameters are defined as β_1 = f_1(α), β_2 = f_2(α), and β_3 = f_3(α), then according to Eq. (S37) we obtain

δE²_{k+1} = 4‖δβ_k‖²,  (S40)

which greatly simplifies the subsequent optimization.

SUPPLEMENTARY NOTE 7. DERIVATION OF COVARIANCE MATRIX

In this Supplementary Note, we derive the covariance matrix for the k-th iteration.

Suppose the estimate from the (k−1)-th iteration is β̂_{k−1}. The control Hamiltonian in the k-th iteration is H_{c,k} = −β̂_{k−1} · σ and the total Hamiltonian is H_{β,k} = (β − β̂_{k−1}) · σ. From Eq. (10) in the main manuscript, the entries of the quantum Fisher information matrix are

F^(β)_{k,ii} = 4[ δβ²_{k−1,i} ‖δβ_{k−1}‖² t_k² + (‖δβ_{k−1}‖² − δβ²_{k−1,i}) sin²(‖δβ_{k−1}‖ t_k) ] / ‖δβ_{k−1}‖⁴,
F^(β)_{k,ij} = 4 δβ_{k−1,i} δβ_{k−1,j} [ ‖δβ_{k−1}‖² t_k² − sin²(‖δβ_{k−1}‖ t_k) ] / ‖δβ_{k−1}‖⁴,  i ≠ j.  (S41)

The covariance matrix in the k-th iteration is given by

C^(β)_k = (n_k F^(β)_k)^(-1),  (S42)

the elements of which turn out to be

C^(β)_{k,ii} = { δβ²_{k−1,i}/(t_k² ‖δβ_{k−1}‖²) + (‖δβ_{k−1}‖² − δβ²_{k−1,i}) csc²(‖δβ_{k−1}‖ t_k) } / (4n_k),
C^(β)_{k,ij} = δβ_{k−1,i} δβ_{k−1,j} [ 1/(t_k² ‖δβ_{k−1}‖²) − csc²(‖δβ_{k−1}‖ t_k) ] / (4n_k),  i ≠ j.  (S43)
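Consistency of Eqs. (S41)-(S43) is easy to confirm numerically: inverting n_k F^(β)_k from Eq. (S41) should reproduce the entries of Eq. (S43). A sketch with arbitrary illustrative numbers (ours):

    import numpy as np

    db = np.array([0.23, -0.11, 0.37])   # illustrative residual error delta beta_{k-1}
    t_k, n_k = 1.9, 500                  # arbitrary evolution time and trial number
    r = np.linalg.norm(db)

    # QFI matrix of Eq. (S41), written compactly with the projector onto db
    P = np.outer(db, db) / r**2
    F = 4 * (t_k**2 * P + (np.sin(r*t_k)**2 / r**2) * (np.eye(3) - P))

    # Covariance matrix from Eq. (S42)
    C = np.linalg.inv(n_k * F)

    # Entries of Eq. (S43)
    C_s43 = np.empty((3, 3))
    for i in range(3):
        for j in range(3):
            if i == j:
                C_s43[i, j] = (db[i]**2 / (t_k**2 * r**2)
                               + (r**2 - db[i]**2) / np.sin(r*t_k)**2) / (4*n_k)
            else:
                C_s43[i, j] = db[i]*db[j] * (1/(t_k**2 * r**2)
                               - 1/np.sin(r*t_k)**2) / (4*n_k)

    print(np.max(np.abs(C - C_s43)))     # ~0: Eqs. (S41)-(S43) agree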
SUPPLEMENTARY NOTE 8. RELATION BETWEEN ESTIMATION VARIANCES OF THE ORIGINAL PARAMETERS WITH ADAPTIVE AND OPTIMAL CONTROL

In this Supplementary Note, we derive the relation between the estimation variances of the original parameters under the adaptive and the optimal control strategies.

Suppose the estimation variance obtained from the adaptive control strategy with m iterations is V_m = Tr C^(β)_m, and that from the optimal control strategy with the true values of the unknown parameters is V_oc, both for the parameters in the Pauli basis, β̂_1, β̂_2, and β̂_3, after the reparameterization, rather than for the original parameters in the Hamiltonian. If the estimation variance of an original parameter α_i using the adaptive control strategy approaches the variance obtained with the optimal control strategy, analogously to how V_m approaches V_oc, this establishes the effectiveness of our adaptive control strategy in estimating the original parameters.

For the adaptive control strategy, the covariance matrix after m iterations is C^(β)_m, with elements given in Eq. (S43). When the optimal evolution time scheme is applied, the total variance is

V_m = [ ‖δβ_{m−1}‖²/g_0² + 2‖δβ_{m−1}‖² csc²(g_0) ] / (4n_m),  (S44)

where we have used Eq. (25) in the main manuscript. For the optimal control Hamiltonian, V_oc is given by Eq. (33) in the main manuscript.

When the Hamiltonian contains three independent unknown parameters, α = (α_1, α_2, α_3), the quantum Fisher information matrix of α is related to the quantum Fisher information matrix of β via

F_α = Jᵀ F_β J,  (S45)

where J is given by J_ij = ∂β_i/∂α_j, with β_i = f_i(α). Therefore, the relation between their covariance matrices is

C^(α) = J^(-1) C^(β) (J^(-1))ᵀ.  (S46)

For the adaptive control strategy, Eq. (S46) yields the variances of the original parameters after m iterations as

C^(α)_{m,ii} = (1/4n_m) Σ_{r,s=1, r≠s}^{3} J^(-1)_{ir} J^(-1)_{is} δβ_{m−1,r} δβ_{m−1,s} [1/g_0² − csc²(g_0)] + (1/4n_m) Σ_{r=1}^{3} (J^(-1)_{ir})² [ δβ²_{m−1,r}/g_0² + (‖δβ_{m−1}‖² − δβ²_{m−1,r}) csc²(g_0) ],  (S47)

where we have used the optimal evolution time scheme. Let

μ_max,i = max{ |J^(-1)_{ir} J^(-1)_{is}| : r, s ∈ {1, 2, 3}, r ≠ s },  ν_max,i = max{ (J^(-1)_{ir})² : r ∈ {1, 2, 3} }.  (S48)

We obtain

C^(α)_{m,ii} ≤ [ 2μ_max,i (g_0² csc²(g_0) − 1)/(1 + 2g_0² csc²(g_0)) + ν_max,i ] V_m.  (S49)

For the optimal control strategy, Eq. (S46) yields the variances of the original parameters as

C^(α)_{oc,ii} = [ (J^(-1)_{i1})² + (J^(-1)_{i2})² + (J^(-1)_{i3})² ] / (4n_oc t_oc²) ≥ (ν_max,i/3) V_oc.  (S50)

Using

V_m = κ V_oc  (S51)

to connect Eq. (S49) and Eq. (S50), we obtain

C^(α)_{m,ii} ≤ [ (12g_0² csc²(g_0) − 3)/(1 + 2g_0² csc²(g_0)) ] κ C^(α)_{oc,ii},  (S52)

where μ_max,i ≤ ν_max,i has been used.

SUPPLEMENTARY NOTE 9. PROBABILITY DENSITY FUNCTION OF THE DEVIATION FACTOR

In this Supplementary Note, we derive the probability density function of the deviation factor D_k for the k-th iteration.

Since δE²_k = 4‖δβ_{k−1}‖² and δβ_{k−1} follows the Gaussian distribution δβ_{k−1} ∼ N(0, C^(β)_{k−1}) in the asymptotic limit of the number of trials, δE²_k follows a generalized χ² distribution, expressed as a weighted sum of squares of independent standard normal random variables, where the weights are four times the eigenvalues of C^(β)_{k−1}. Together with ‖δβ_{k−2}‖ t_opt,k−1 = g_0, we obtain

δE²_k = [1/(n_{k−1} t²_opt,k−1)] χ²_1(1) + [g_0² csc²(g_0)/(n_{k−1} t²_opt,k−1)] χ²_2(1) + [g_0² csc²(g_0)/(n_{k−1} t²_opt,k−1)] χ²_3(1).  (S53)

The mean of δE²_k is (2g_0² csc²(g_0) + 1)/(n_{k−1} t²_opt,k−1). From Eq. (35) of the main manuscript, we have

D_k = [χ²_1(1) + g_0² csc²(g_0) (χ²_2(1) + χ²_3(1))] / (2g_0² csc²(g_0) + 1).  (S54)
Using the Laplace transform, we obtain the probability density function of D_k as

f_{D_k}(D_k) = (2g_0² + sin²(g_0)) e^(−D_k sin²(g_0)/(2g_0²) − D_k) erf( √( D_k (−sin²(g_0)/g_0² + 2g_0² csc²(g_0) − 1) ) / √2 ) / [ 2g_0 √(g_0² − sin²(g_0)) ],  (S55)

where

erf(x) = (2/√π) ∫_0^x e^(−t²) dt.  (S56)
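As a cross-check of Eq. (S55) (our sketch, not part of the Supplementary Material), one can histogram samples of D_k drawn according to Eqs. (S53)-(S54) and compare with the closed-form density:

    import numpy as np
    from scipy.special import erf

    rng = np.random.default_rng(3)
    g0 = 1.2986
    w = g0**2 / np.sin(g0)**2                 # g0^2 csc^2(g0)

    def pdf_D(d):
        """Closed-form density of Eq. (S55)."""
        pref = (2*g0**2 + np.sin(g0)**2) / (2*g0*np.sqrt(g0**2 - np.sin(g0)**2))
        expo = np.exp(-d*np.sin(g0)**2/(2*g0**2) - d)
        arg = np.sqrt(d*(2*w - 1 - np.sin(g0)**2/g0**2)) / np.sqrt(2)
        return pref * expo * erf(arg)

    # Monte Carlo histogram of D_k sampled as in Eq. (S54)
    chi = rng.standard_normal((1_000_000, 3))**2
    D = (chi[:, 0] + w*(chi[:, 1] + chi[:, 2])) / (2*w + 1)
    hist, edges = np.histogram(D, bins=60, range=(0, 5), density=True)
    mid = (edges[:-1] + edges[1:]) / 2
    print(np.max(np.abs(hist - pdf_D(mid))))  # small: samples match Eq. (S55)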
In quantum single-parameter estimation, the quantum Cramer-Rao bound tells that the variance of an estimator is bounded by the inverse of the quantum Fisher information, as the quantum Fisher information characterizes the sensitivity of a parameter-dependent quantum state to the variations in the parameter [59, 60]. For multi-parameter estimation, the quantum Fisher information can be extended mathematically to the quantum Fisher information matrix [61, 62]. However, the quantum Cramér-Rao bound based on symmetric logarithmic derivatives is not always attainable due to the potential incompatibility between the optimal measurements for different parameters [26-30], unless specific conditions are satisfied, e.g. the weak commutativity in the asymptotic limit of collective measurement on an unlimited number of systems [63-65] or a more strict condition when the number of accessible systems is finite [24, 66]. Therefore, estimating multiple unknown parameters in a quantum state is a challenging problem. Suppose a quantum state ρα depends on q unknown parameters denoted in a vector form α = (α1, α2, . . . , αq). The estimation precision of the unknown parameters is characterized by the covariance matrix C, which is bounded by the quantum Fisher information matrix F, C ≥(nF)-1 , (1) where "≥" represents the matrix semi-definite positivity and n refers to the number of trials. The entries of quantum Fisher information matrix are given by Fij = 1 2Tr (ρα {Li, Lj}) , (2) where Li is a symmetric logarithmic derivative defined by 2∂iρα = ραLi + Liρα, (3) where ∂i is the abbreviation of ∂αi for simplicity. In reality, the unknown parameters in a quantum state are usually encoded by physical processes. If the physical process is a unitary evolution Uα and the initial state of the quantum system is |ψ0⟩, the entries of quantum Fisher information matrix are given by Fij = 4 1 2 ⟨{hi, hj}⟩-⟨hi⟩⟨hj⟩ , (4) where hi := -i ∂iU † α Uα is the generator of the infinitesimal translation of Uα with respect to the i-th parameter αi, { , } denotes the anti-commutator, and ⟨·⟩= ⟨ψ0| · |ψ0⟩. To address the potential incompatibility issue between the optimal measurements for different parameters, a real symmetric matrix W can be introduced to assign weights to different parameters and define a weighted mean precision Tr (WC) as the overall benchmark for the performance of estimation. The lower bound of this weighted mean precision can be derived from the quantum CramérRao bound, S (W) = 1 nTr WF -1 . (5) The weighted mean precision can be optimized and attain the Holevo bound in the asymptotic limit of the number of trials if collective measurements on multiple quantum systems are allowed [25, 27, 65]. But the Holevo bound is usually hard to be solved explicitly, as it still involves a complex matrix optimization problem. Nevertheless, when a weak commutativity condition is satisfied, i.e., for any two parameters αi and αj, Tr (ρα [Li, Lj]) = 0, (6) the quantum Cramér-Rao bound coincides with the Holevo bound and can therefore be attained [63, 64]. For the estimation of parameters α in a unitary operator Uα, if the initial state of the system is |ψ0⟩, the weak commutativity condition can be further simplified as Im ⟨hihj⟩= 0, (7) 3 and independent measurements on individual systems are sufficient to achieve the quantum Cramér-Rao bound in this case [17, 63]. Multi-parameter quantum metrology in twodimensional systems. 
Quantum metrology can generally be decomposed to four steps: preparation of the initial states, parameter-dependent evolution, measurements on the final states, and post-processing of the measurement results to extract the parameters. The estimation precision can be improved by initial state optimization, feedback control, and measurement optimization at the first three steps and by using proper estimation strategies at the final step. In multi-parameter quantum metrology, the incompatibility issue lies in several aspects: in addition to the measurement incompatibility, the optimal initial states and optimal feedback controls for different parameters can be incompatible as well. System extension scheme has been widely used in quantum metrology, for instance, to establish upper bounds on the quantum Fisher information in noisy environments [15, 39] and to eliminate the aforementioned incompatibilities in multi-parameter quantum estimation [12, 17]. Fig. 1 shows the system extension scheme, where a probe and an ancilla are coupled. The unitary evolution Uα (t) = exp (-iHαt) governed by the parameter-dependent Hamiltonian Hα acts on the probe only. When the joint system is initialized in a maximally entangled state, the weak commutativity condition is satisfied, eliminating the measurement tradeoff [12, 17] (see Supplementary Note 1). In two-dimensional systems, this configuration enables optimal estimation of all the parameters via projective measurements along the Bell basis, addressing the initial-state trade-off issue [12, 17] (see Supplementary Note 2). Furthermore, when the initial Hamiltonian H(init) α is independent of time, feedback control using the reverse of H(init) α obtains the optimal estimation for all the parameters and achieves the Heisenberg limit (see Supplementary Note 3), thereby removing the control trade-off [12]. However, the optimal control Hamiltonian depends on the true values of the unknown parameters, so an adaptive control is generally required to update the value of the parameters in the control Hamiltonian iteratively, so that the control Hamiltonian can approach the optimum progressively. Such an adaptive feedback control scheme has been studied for the single-parameter quantum metrology Pang and Jordan [13], but has not received much investigation in multi-parameter quantum metrology. The dependence of the control Hamiltonian on the parameter estimation precisions from the previous rounds at each iteration makes it challenging to evaluate the overall performance of the adaptive procedure and design efficient iterative feedback strategies, even for two-dimensional systems. Variance-time relation. With all trade-offs eliminated, we analyze the time dependence of the variances of the parameters to be estimated, offering guidance for probe ancilla ƿڝ Figure 1: System extension scheme. An ancilla with the same dimension as the probe is introduced, with the unitary evolution Uα acts only on the probe. The initial state can be any quantum state of the joint system, and measurements are performed on the joint system. the design of efficient adaptive control strategies. In adaptive Hamiltonian control, the total Hamiltonian Hα comprises the initial Hamiltonian H(init) α and a control Hamiltonian Hc. In a two-dimensional system with an ancilla, the joint Hamiltonian is Hα ⊗IA, where IA denotes the two-dimensional identity operator on the ancilla. 
The joint evolution of the probe and the ancilla is Uα (t)⊗IA, where Uα (t) = exp (-iHαt), and the generator of the infinitesimal translation of Uα (t) ⊗IA with respect to the parameter αi is hi (t) ⊗IA, where hi (t) = -i ∂iUα (t)† Uα (t) . (8) We choose a maximally entangled state, |ψ0⟩= (|0P0A⟩+ |1P1A⟩) / √ 2, (9) as the initial state, where {|0P⟩, |1P⟩} and {|0A⟩, |1A⟩} are sets of complete orthogonal basis for the probe and the ancilla, respectively. According to Eq. (4), the entries of quantum Fisher information matrix can be obtained as Fij (t) = 2Tr (hi (t) hj (t)) -Tr (hi (t)) Tr (hj (t)) . (10) To make the time dependence of the quantum Fisher information matrix explicit for the design of adaptive control strategy, we first analyze the generator hi (t). By applying an integral formula for the derivative of an operator exponential, ∂eMαt ∂α = Z t 0 eMα(t-τ) ∂Mα ∂α eMατdτ, (11) we obtain hi (t) = Z t 0 eiHατ (∂iHα) e-iHατdτ. (12) Through the spectral decomposition of the Hamiltonian of the probe, Hα = E0 |E0⟩⟨E0| + E1 |E1⟩⟨E1| [10, 67], we obtain Fij (t) = P1 l=0 t2 [(∂iEl) (∂jEl) -(∂iEl) (∂jE1-l)] -8 sin2 δEt 2 ⟨El| ∂iE1-l⟩⟨E1-l| ∂jEl⟩, (13) 4 where δE is the energy gap of the probe Hamiltonian, δE = E0 -E1. The complete derivation is provided in Supplementary Note 4. This equation characterizes the time dependence of quantum Fisher information matrix: the major term grows quadratically with time, while the other oscillates with time. The estimation precision of parameters is characterized by the estimator variances, which are bounded by the diagonal elements of the inverse of the quantum Fisher information matrix and the number of trials. As the system is two-dimensional, we assume α = (α1, α2, α3). The results for the estimator variances are given in Supplementary Note 5, where a detailed analysis is provided. Since the estimator variances for different parameters are symmetric, we only present the estimator variance for the first parameter as an example. The estimator variance for the first parameter α1 can be obtained as δ2bα1 = 1 n csc2 1 2δEt t2ξ1 + ξ2 t2ξ3 , (14) where ξ1 = (μ2∂3δE -μ3∂2δE)2 + (ν2∂3δE -ν3∂2δE)2 , ξ2 = 16 (μ3ν2 -μ2ν3)2 , ξ3 = 16 [μ1 (ν3∂2δE -ν2∂3δE) + μ2 (ν1∂3δE -ν3∂1δE) +μ3 (ν2∂1δE -ν1∂2δE)]2 , (15) and μi and νi are given by μi = Re (⟨E0| ∂iE1⟩) , νi = Im (⟨E0| ∂iE1⟩) . (16) The variance exhibits only two characteristic time scalings, as shown in Fig. 2. Fig. 2a shows the time scaling of variance for the case with ξ1 ̸= 0: when t ≪1/ |δE|, csc2 (δEt/2) ≈4/ (δEt)2, hence the estimation variance for the first parameter decays quadratically with time, while for a longer evolution time t, the variance oscillates at a frequency of |δE| /2π, with its lower envelope decaying and rapidly converging to ξ1/nξ3. Fig. 2b shows the time scaling of variance for ξ1 = 0, where the variance achieves the Heisenberg scaling. As aforementioned, the optimal control Hamiltonian is Hc = -H(init) α , but its dependence on the unknown parameters poses a practical challenge. The adaptive control strategy provides a solution to this challenge: an initial estimation of the unknown parameters is performed without quantum control to yield a rough approximate value for the parameters used in the control Hamiltonian; in the following rounds, the control Hamiltonian is implemented with the estimated values of the unknown parameters from the previous rounds, resulting in improved precisions. This process is iterated for multiple rounds. 
As $H_c$ approaches $-H_\alpha^{(\mathrm{init})}$, the total Hamiltonian $H_\alpha$, as well as $\delta E^2$, converges toward zero. Based on the preceding results, the decrease of $\delta E^2$ extends the evolution time $t$ satisfying $t \ll 1/|\delta E|$ during which the estimator variances decay quadratically with time. Consequently, the estimation precision asymptotically approaches the Heisenberg limit (see Supplementary Note 3). Therefore, in order to achieve the Heisenberg-limited time scaling for an evolution time as long as possible, our objective is to design a strategy that reduces $\delta E^2$ rapidly within a given total evolution time.

Figure 2: Relation between estimation variance and evolution time. The estimation variance of $\hat{\alpha}_1$ exhibits two characteristic time scalings. Panel (a) shows the time scaling of the variance for $\xi_1 \neq 0$, depicted by the orange curve. For $t \ll 1/|\delta E|$, the variance decays quadratically with time. As $t$ increases, the variance oscillates with time and diverges at integer multiples of $2\pi/|\delta E|$, with the asymptotes plotted by the green dashed lines. The lower envelope of the variance, depicted by the green solid line, decays and rapidly converges to $\xi_1/n\xi_3$. Panel (b) illustrates the time scaling of the estimation variance for $\xi_1 = 0$, where the variance decays quadratically with time and reaches the Heisenberg scaling.

The dependence of the Hamiltonian $H_\alpha$ on $\alpha$ can be arbitrary in general. To simplify the following study, we reparameterize the Hamiltonian as
$$H_\beta^{(\mathrm{init})} = \beta_1\sigma_x + \beta_2\sigma_y + \beta_3\sigma_z, \quad (17)$$
with $\beta_1$, $\beta_2$, and $\beta_3$ being the parameters to be estimated, which are transformed from $\alpha_1, \alpha_2, \alpha_3$; the superscript $(\mathrm{init})$ denotes that it is the initial Hamiltonian without any control. The necessity of the reparameterization is shown in Supplementary Note 6. We will also show the validity of our results for arbitrary parameter dependence of the Hamiltonian later. The control Hamiltonian at the $(k+1)$-th iteration is denoted by
$$H_{c,k+1} = -\hat{\boldsymbol{\beta}}_k \cdot \boldsymbol{\sigma}, \quad (18)$$
where $\hat{\boldsymbol{\beta}}_k = (\hat{\beta}_{k,1}, \hat{\beta}_{k,2}, \hat{\beta}_{k,3})$ denotes the control parameters, which are essentially the estimates of $\boldsymbol{\beta}$ from the $k$-th iteration. Suppose $\hat{\boldsymbol{\beta}}_k$ deviates from the true value of $\boldsymbol{\beta}$ by $\delta\boldsymbol{\beta}_k$, i.e.,
$$\hat{\boldsymbol{\beta}}_k = \boldsymbol{\beta}_0 + \delta\boldsymbol{\beta}_k, \quad (19)$$
with $\boldsymbol{\beta}_0 = (\beta_{0,1}, \beta_{0,2}, \beta_{0,3})$ being the true value of $\boldsymbol{\beta}$ and $\delta\boldsymbol{\beta}_k = (\delta\beta_{k,1}, \delta\beta_{k,2}, \delta\beta_{k,3})$ being the estimation errors from the $k$-th iteration. We then obtain the $\delta E^2$ of the $(k+1)$-th iteration as
$$\delta E^2_{k+1} = 4\,\|\delta\boldsymbol{\beta}_k\|^2. \quad (20)$$
When the number of trials in the $k$-th iteration, $n_k$, is sufficiently large, the central limit theorem guarantees that $\delta\boldsymbol{\beta}_k$ asymptotically follows a three-dimensional normal distribution,
$$\delta\boldsymbol{\beta}_k \sim \mathcal{N}\!\left(0, C^{(\beta)}_k\right), \quad (21)$$
where $C^{(\beta)}_k = \left(n_k F^{(\beta)}_k\right)^{-1}$. As $\delta E^2_{k+1}$ depends on $\delta\boldsymbol{\beta}_k$, it is also a random variable. Therefore, we reformulate the optimization objective as minimizing the expectation value $\langle \delta E^2_{k+1} \rangle$,
$$\langle \delta E^2_{k+1} \rangle = 4\left(\langle \delta\beta^2_{k,1} \rangle + \langle \delta\beta^2_{k,2} \rangle + \langle \delta\beta^2_{k,3} \rangle\right) = 4\,\mathrm{Tr}\,C^{(\beta)}_k. \quad (22)$$

Optimal evolution time for each trial in one iteration. Since the optimal control Hamiltonian requires knowledge of the unknown parameters to be estimated, we take an adaptive approach with feedback to progressively update the control parameters. As time is an important resource in quantum metrology, we consider a given total evolution time for the $k$-th iteration, $T_k = n_k t_k$, where $n_k$ and $t_k$ are the number of trials and the evolution time per trial in the $k$-th iteration, respectively, and study how to determine the evolution time $t_k$ that minimizes $\langle \delta E^2_{k+1} \rangle$.
The control Hamiltonian in the $k$-th iteration depends on the estimated values of the unknown parameters from the $(k-1)$-th iteration. The covariance matrix for the parameters $\beta_1$, $\beta_2$, and $\beta_3$ is provided in Supplementary Note 7. Applying the covariance matrix in Eq. (22), we obtain
$$\langle \delta E^2_{k+1} \rangle = \frac{1}{T_k}\left[\frac{1}{t_k} + 2 t_k\,\|\delta\boldsymbol{\beta}_{k-1}\|^2\csc^2\!\left(\|\delta\boldsymbol{\beta}_{k-1}\|\, t_k\right)\right]. \quad (23)$$
To find the optimal $t_k$ that minimizes $\langle \delta E^2_{k+1} \rangle$, we take the derivative of $\langle \delta E^2_{k+1} \rangle$ with respect to $t_k$. Let $g = \|\delta\boldsymbol{\beta}_{k-1}\|\, t_k$; a numerical computation yields the optimal value of $g$ that minimizes $\langle \delta E^2_{k+1} \rangle$ as
$$g_0 \approx 1.2986. \quad (24)$$
By using $\|\delta\boldsymbol{\beta}_{k-1}\|^2 = \delta E^2_k/4$ and replacing $\delta E^2_k$ with $\langle \delta E^2_k \rangle$, we obtain the optimal evolution time for each trial in the $k$-th iteration as
$$t_{\mathrm{opt},k} = \frac{2 g_0}{\sqrt{\langle \delta E^2_k \rangle}} \quad (25)$$
and a recursive relation for $\langle \delta E^2_k \rangle$ between two consecutive iterations,
$$\langle \delta E^2_{k+1} \rangle = \frac{G(g_0)}{n_k}\,\langle \delta E^2_k \rangle, \quad (26)$$
where $G(x) = 1/(4x^2) + \csc^2(x)/2$ and $\delta E^2_1 = 4\|\boldsymbol{\beta}_0\|^2$.

The scheme with an equal number of trials in each iteration. To determine the performance of the adaptive control strategy with the optimal evolution time derived above, we propose a scheme where all iterations consist of $n$ trials with the respective optimal evolution time in each iteration. We compare its estimation error with that of the optimal control strategy, which uses the true values of the unknown parameters. We define $V_k = \langle \delta E^2_{k+1} \rangle/4$ to represent the sum of the variances of $\hat{\beta}_1$, $\hat{\beta}_2$, and $\hat{\beta}_3$ in the $k$-th iteration. For the scheme with an equal number $n$ of trials in each iteration, the estimation error after $m$ iterations is
$$V_m = \left(\frac{G(g_0)}{n}\right)^m V_0 \quad (27)$$
according to Eq. (26). If the target of the estimation error is $V$, the required number of iterations is given by
$$m = \log_{G(g_0)/n}\!\left(\frac{V}{V_0}\right). \quad (28)$$
This result shows that the growth of $m$ with decreasing $V$ is slow, as it is logarithmic in $V$, implying that the target precision can be achieved with only a few iterations. The optimal evolution time for the $k$-th iteration is
$$t_{\mathrm{opt},k} = g_0\, V_0^{-1/2}\left(\frac{n}{G(g_0)}\right)^{(k-1)/2} \quad (29)$$
according to Eq. (25), and we define $t_{\mathrm{tot},m} = \sum_{k=1}^{m} t_{\mathrm{opt},k}$, which is the total evolution time when all the trials are carried out in parallel for each iteration. To compare with the Heisenberg limit of the optimal control strategy in Supplementary Note 3, we derive the relation between $V_m$ and $t_{\mathrm{tot},m}$,
$$V_m = \left(\frac{G(g_0)}{n}\right)^m\left[g_0\,\frac{1 - \left(\sqrt{n/G(g_0)}\right)^m}{1 - \sqrt{n/G(g_0)}}\right]^2 t_{\mathrm{tot},m}^{-2}. \quad (30)$$
For $\sqrt{n/G(g_0)} \gg 1$, Eq. (30) simplifies to
$$V_m \approx \frac{g_0^2\, G(g_0)}{n\, t_{\mathrm{tot},m}^2}. \quad (31)$$
This result confirms that the Heisenberg limit can be achieved by the above adaptive control approach with the evolution time optimized in each iteration. For the optimal control Hamiltonian, $H_c = -\boldsymbol{\beta}_0 \cdot \boldsymbol{\sigma}$, Supplementary Note 3 shows that the covariance matrix for the parameters $\beta_1$, $\beta_2$, and $\beta_3$ is
$$C^{(\beta)}_{\mathrm{oc}} = \frac{1}{4 n_{\mathrm{oc}} t_{\mathrm{oc}}^2}\, I_{3\times 3}, \quad (32)$$
where $I_{3\times 3}$ is the three-dimensional identity matrix, and the total variance of the estimators $\hat{\beta}_1$, $\hat{\beta}_2$, and $\hat{\beta}_3$ is therefore
$$V_{\mathrm{oc}} = \frac{3}{4 n_{\mathrm{oc}} t_{\mathrm{oc}}^2}. \quad (33)$$
Compared to this optimal control strategy with precise control parameters, $V_m$ is only $4 g_0^2\, G(g_0)/3 \approx 1.55$ times larger, implying that the estimation precision of this adaptive control strategy achieves the optimum up to an overall factor. The above result is obtained for the parameters in the Pauli basis, $\beta_1$, $\beta_2$, and $\beta_3$, rather than the original parameters $\alpha_i$ in the Hamiltonian.
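The constant $g_0$ and the contraction factor $G(g_0)$ are straightforward to reproduce numerically. A hypothetical sketch (ours; $n$, $V_0$, and $V$ below are made-up example values):

```python
# Reproduce g0 of Eq. (24) and the per-iteration factor G(g0) of Eq. (26).
# Illustrative sketch, not the authors' code.
import numpy as np
from scipy.optimize import minimize_scalar

# Eq. (23) with g = ||delta beta_{k-1}|| t_k reduces to minimizing
# f(g) = 1/g + 2 g csc^2(g) over g in (0, pi).
f = lambda g: 1.0 / g + 2.0 * g / np.sin(g) ** 2
g0 = minimize_scalar(f, bounds=(1e-3, np.pi - 1e-3), method="bounded").x
G = lambda x: 1.0 / (4.0 * x ** 2) + 1.0 / (2.0 * np.sin(x) ** 2)
print(g0, G(g0))          # g0 ~ 1.2986, G(g0) ~ 0.69

# Iterations needed for a target error V, from Eq. (28):
# m = log_{G(g0)/n}(V / V0), rounded up.
n, V0, V = 100, 1.0, 1e-8
print(int(np.ceil(np.log(V / V0) / np.log(G(g0) / n))))
```

Because $G(g_0)/n \ll 1$ for any reasonable number of trials, the logarithm in Eq. (28) confirms that only a handful of iterations are needed even for very small targets $V$.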
Supplementary Note 8 derives the relation between the estimation variances of the original parameters with adaptive and optimal control, from which we obtain
$$\langle \delta^2\hat{\alpha}_i \rangle_m \leq \left[4 g_0^2\csc^2(g_0) - 1\right]\langle \delta^2\hat{\alpha}_i \rangle_{\mathrm{oc}} \approx 6.27\,\langle \delta^2\hat{\alpha}_i \rangle_{\mathrm{oc}}, \quad (34)$$
where $\langle \delta^2\hat{\alpha}_i \rangle_m$ and $\langle \delta^2\hat{\alpha}_i \rangle_{\mathrm{oc}}$ represent the estimation variances of $\hat{\alpha}_i$ with adaptive and optimal control, respectively. This indicates that our adaptive control strategy works for arbitrary parameterizations of a Hamiltonian in general, and the estimation precision for any unknown parameter in the Hamiltonian can also achieve the optimum up to a factor of constant order.

Discussion. The optimal evolution time $t_{\mathrm{opt},k}$ in the $k$-th iteration is derived based on the expectation value of $\delta E^2_k$. In practical experiments, the measurement results of the $(k-1)$-th iteration can be random, so $\delta E^2_k$, which is obtained from the $(k-1)$-th iteration, can also be random accordingly. Hence, the real value of $\delta E^2_k$ can deviate from its expectation value in practice, which may affect the estimation precision at the $k$-th iteration. In the Methods, we study the effect of this randomness on the optimal evolution time scheme and the robustness of this scheme. In particular, we show that it is more probable that the estimation precision benefits from such a deviation in $\delta E^2_k$ and performs better than the case with the expectation value of $\delta E^2_k$. Therefore, the random deviation in $\delta E^2_k$ can actually be favorable to this adaptive feedback control strategy.

METHODS

Robustness analysis. To facilitate the following analysis, Fig. 3 schematically depicts the relations between the physical quantities in the optimal evolution time scheme, using dashed arrows for the no-deviation case with the errors of the control parameters averaged and solid arrows for the practical cases with random deviations in the average errors of the control parameters. To explicitly characterize the deviation of $\delta E^2_k$ from its expectation value, we introduce a deviation factor $D_k$, which is also random due to the randomness of $\delta E^2_k$,
$$\delta E^2_k = D_k\,\langle \delta E^2_k \rangle. \quad (35)$$
We use the evolution time $t_{\mathrm{opt},k}$, which is derived based on the mean of $\delta E^2_k$ according to Eq. (25), to perform the $k$-th iteration. If $\delta E^2_k$ deviates from its mean in the experiment, Eq. (25) shows that the deviation factor $D_k$ scales $g_0$ to $\sqrt{D_k}\, g_0$, and thus the recursive relation Eq. (26) becomes
$$\langle \delta E^2_{k+1} \rangle_{D_k} = \frac{G(\sqrt{D_k}\, g_0)}{n_k}\, D_k\,\langle \delta E^2_k \rangle. \quad (36)$$
The impact of the deviation factor $D_k$ on the estimation precision at the $k$-th iteration is characterized by the ratio
$$R_k = \frac{V_{k,D_k}}{V_k} = \frac{D_k\, G(\sqrt{D_k}\, g_0)}{G(g_0)}, \quad (37)$$
as illustrated in Fig. 4a. The estimation precision decreases as the deviation increases for $0 < D_k < (\pi/g_0)^2$. When $0 < D_k < 1$, where the control Hamiltonian obtained from the estimate in the $(k-1)$-th iteration leads to a $\delta E^2_k$ smaller than its mean, implying that the estimate from the $(k-1)$-th iteration is better than the average, we have $R_k < 1$. As $D_k \to 0$, $V_{k,D_k} \to 3/(4 n_k t^2_{\mathrm{opt},k})$, which is exactly the bound given in Supplementary Note 3. For $1 \leq D_k < (\pi/g_0)^2$, indicating that the estimate from the $(k-1)$-th iteration is worse than the average, it follows that $R_k \geq 1$.

The deviation factor $D_k$ follows a generalized $\chi^2$ distribution,
$$D_k = \frac{\chi^2_1(1) + g_0^2\csc^2(g_0)\left[\chi^2_2(1) + \chi^2_3(1)\right]}{2 g_0^2\csc^2(g_0) + 1}, \quad (38)$$
where $\chi^2_1(1)$, $\chi^2_2(1)$, and $\chi^2_3(1)$ are squares of independent standard normal random variables. Details of the derivation can be found in Supplementary Note 9. The probability density $f_{D_k}$ of the deviation factor $D_k$ is shown in Fig. 4b.
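The single-iteration robustness statistics can be checked by direct sampling of Eq. (38). A hypothetical Monte Carlo sketch (ours; sample size and seed are arbitrary choices):

```python
# Sample the deviation factor D_k of Eq. (38) and the ratio R_k of Eq. (37).
import numpy as np

rng = np.random.default_rng(0)
g0 = 1.2986
w = g0 ** 2 / np.sin(g0) ** 2                   # g0^2 csc^2(g0)
chi = rng.standard_normal((3, 1_000_000)) ** 2  # chi^2_1(1), chi^2_2(1), chi^2_3(1)
Dk = (chi[0] + w * (chi[1] + chi[2])) / (2 * w + 1)   # Eq. (38); mean is 1
print((Dk <= 3).mean())                         # ~0.97, cf. Fig. 4b

G = lambda x: 1.0 / (4.0 * x ** 2) + 1.0 / (2.0 * np.sin(x) ** 2)
Rk = Dk * G(np.sqrt(Dk) * g0) / G(g0)           # Eq. (37)
print(np.median(Rk))                            # close to 1: weak sensitivity
```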
Fig. 4 suggests that the deviation factor $D_k$ most likely lies in a region where $R_k$ is almost insensitive to $D_k$ and close to 1, demonstrating strong robustness of the estimation precision against deviations in the errors of the control parameters in a single iteration.

Figure 3: Relations between different physical quantities. Arrows schematically denote the relations between the physical quantities occurring in the proposed optimal evolution time scheme, pointing from one quantity to the derived quantity. The upper section, linked by dashed arrows, depicts the no-deviation case with the errors of all control parameters averaged. The lower section, linked by solid arrows, depicts the practical cases where the errors of the control parameters have random deviations from their average values, manifested by the deviation factor $D_k$ for the $k$-th iteration.

In practice, the deviation factor $D_k$ modifies the optimal evolution time $t_{\mathrm{opt},k}$, which is determined based on the mean of $\delta E^2_k$, to $t_{\mathrm{opt},k}/\sqrt{D_k}$. Therefore, a procedure to modify the evolution time is required. Suppose the estimate obtained from the $(k-1)$-th iteration is $\boldsymbol{\beta}'_{k-1}$, which determines the control Hamiltonian at the $k$-th iteration. By continuing to repeat the trials of the $(k-1)$-th iteration, a more precise estimate $\boldsymbol{\beta}'_0$ can be obtained. The modified evolution time $\tilde{t}_{\mathrm{opt},k}$ is determined by $\|\boldsymbol{\beta}'_{k-1} - \boldsymbol{\beta}'_0\|\,\tilde{t}_{\mathrm{opt},k} = g_0$, after which the $k$-th iteration proceeds with $H_{c,k} = -\boldsymbol{\beta}'_{k-1}\cdot\boldsymbol{\sigma}$ and $\tilde{t}_{\mathrm{opt},k}$.

We now consider the effect of the deviation factors on the estimation precision of the estimation process consisting of $m$ iterations with the evolution-time-modified procedure. Since the evolution time of each iteration is modified to the optimal evolution time, $g_0$ is not scaled by the deviation factor $D_k$. To ensure an effective comparison with the same total evolution time, we adopt the equivalent form of Eq. (26),
$$\langle \delta E^2_{k+1} \rangle = \frac{2 g_0\, G(g_0)}{T_k}\sqrt{\langle \delta E^2_k \rangle}, \quad (39)$$
which yields
$$\widetilde{\langle \delta E^2_{k+1} \rangle}_{D_k} = \frac{2 g_0\, G(g_0)}{T_k}\sqrt{D_k\,\widetilde{\langle \delta E^2_k \rangle}_{D_{k-1}}}, \quad (40)$$
where $D_1 = 1$ and $\widetilde{\langle \delta E^2_1 \rangle}_{D_0} = \langle \delta E^2_1 \rangle$. The effect of the deviation factors on the estimation precision of the entire estimation process is characterized by the ratio
$$\tilde{R}_{\mathrm{tot},m} = \frac{\tilde{V}_{m,D_k}}{V_m} = \prod_{k=1}^{m} D_k^{1/2^{\,m-(k-1)}}, \quad (41)$$
where $\tilde{V}_{m,D_k} = \widetilde{\langle \delta E^2_{m+1} \rangle}_{D_k}/4$.

Figure 4: Robustness of a single iteration against deviations in the errors of the control parameters. Panel (a) illustrates the effect of the deviation factor on the estimation precision. The estimation precision decreases as $D_k$ increases. When $D_k$ is sufficiently low, the real estimation precision approaches the optimal precision with precise control parameters, so the ratio $R_k$ between the real estimation precision and the estimation precision with average errors in the control parameters drops below 1, as shown in the left panel. In the region indicated by the red dashed line, the estimation precision is almost insensitive to the random deviation of the errors of the control parameters. Panel (b) shows the probability density function of $D_k$, with $p(D_k \leq 3) \approx 0.97$, which suggests that $D_k$ lies with high probability in an interval where the estimation precision is close to that with average errors in the control parameters, indicating strong robustness of the optimal evolution time scheme against deviations in the errors of the control parameters.
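The cumulative ratio of Eq. (41) can likewise be sampled directly. A hypothetical sketch (ours; per Fig. 5 one expects $P(\tilde{R}_{\mathrm{tot},m} \leq 1)$ to exceed 0.5 and to grow with $m$, but we do not assert the exact values here):

```python
# Monte Carlo estimate of Rtot_m = prod_k D_k^{1/2^{m-(k-1)}}, Eq. (41),
# with D_1 = 1 and the D_k for k >= 2 drawn from Eq. (38).
import numpy as np

rng = np.random.default_rng(1)
g0 = 1.2986
w = g0 ** 2 / np.sin(g0) ** 2

def sample_D(size):
    chi = rng.standard_normal((3, size)) ** 2
    return (chi[0] + w * (chi[1] + chi[2])) / (2 * w + 1)   # Eq. (38)

N = 1_000_000
for m in (2, 3, 4):
    Rtot = np.ones(N)
    for k in range(2, m + 1):          # D_1 = 1 contributes a trivial factor
        Rtot *= sample_D(N) ** (1.0 / 2 ** (m - (k - 1)))
    print(m, (Rtot <= 1).mean())
```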
Fig. 5 shows the cumulative distribution functions of $\tilde{R}_{\mathrm{tot},m}$ for $m = 2$, $3$, and $4$. A surprising result from the figure is that the probabilities that the estimation variance with deviation in $\delta E^2_k$ surpasses that without deviation exceed 50% and increase with the number of iterations, suggesting that the real estimation precision is more likely to benefit from the deviation in $\delta E^2_k$ and become better than the expected estimation precision.

Figure 5: Robustness of the optimal evolution time scheme. The figure illustrates the effect of the deviation in $\delta E^2_k$ on the estimation precision for a total of two (orange curve), three (blue curve), and four (green curve) iterations. The orange, blue, and green dashed lines plot the probabilities (0.61, 0.68, and 0.71, respectively) that the precision with the deviation in $\delta E^2_k$ surpasses that without deviation for the different iteration numbers. They all exceed 50% and increase with the number of iterations, implying that the real estimation precision actually benefits from the deviation in $\delta E^2_k$ and becomes better than the expected estimation precision.

DATA AVAILABILITY

The code and data used in this work are available upon request to the corresponding author.

ACKNOWLEDGMENTS

This work is supported by the National Natural Science Foundation of China (Grant No. 12075323), the Natural Science Foundation of Guangdong Province of China (Grant No. 2025A1515011440), and the Innovation Program for Quantum Science and Technology (Grant No. 2021ZD0300702).

AUTHOR CONTRIBUTIONS

Q.W. initiated this work and carried out the main calculations. S.P. participated in scientific discussions and assisted with the calculations. Both authors contributed to the writing of the manuscript.

COMPETING FINANCIAL INTERESTS

The authors declare no competing financial interests.

[1] V. Giovannetti, S. Lloyd, and L. Maccone, Physical Review Letters 96, 010401 (2006).
[2] M. G. A. Paris, International Journal of Quantum Information 07, 125 (2009).
[3] V. Giovannetti, S. Lloyd, and L. Maccone, Nature Photonics 5, 222 (2011).
[4] J. Jacobson, G. Björk, and Y. Yamamoto, Applied Physics B 60, 187 (1995).
[5] A. André, A. S. Sørensen, and M. D. Lukin, Physical Review Letters 92, 230801 (2004).
[6] J. Borregaard and A. S. Sørensen, Physical Review Letters 111, 090801 (2013).
[7] E. M. Kessler, P. Kómár, M. Bishof, L. Jiang, A. S. Sørensen, J. Ye, and M. D. Lukin, Physical Review Letters 112, 190403 (2014).
[8] R. Schnabel, N. Mavalvala, D. E. McClelland, and P. K. Lam, Nature Communications 1, 121 (2010).
[9] The LIGO Scientific Collaboration, Nature Physics 7, 962 (2011).
[10] S. Pang and T. A. Brun, Physical Review A 90, 022117 (2014).
[11] H. Yuan and C.-H. F. Fung, Physical Review Letters 115, 110401 (2015).
[12] H. Yuan, Physical Review Letters 117, 160801 (2016).
[13] S. Pang and A. N. Jordan, Nature Communications 8, 14695 (2017).
[14] S. L. Braunstein and C. M. Caves, Physical Review Letters 72, 3439 (1994).
[15] J. Kołodyński and R. Demkowicz-Dobrzański, New Journal of Physics 15, 073043 (2013).
[16] R. Demkowicz-Dobrzański and L. Maccone, Physical Review Letters 113, 250801 (2014).
[17] A. Fujiwara, Physical Review A 65, 012316 (2001).
[18] P. C. Humphreys, M. Barbieri, A. Datta, and I. A. Walmsley, Physical Review Letters 111, 070403 (2013).
[19] J. Suzuki, Journal of Mathematical Physics 57, 042201 (2016).
[20] T. Baumgratz and A. Datta, Physical Review Letters 116, 030801 (2016).
[21] M. Szczykulska, T. Baumgratz, and A.
Datta, Advances in Physics: X 1, 621 (2016).
[22] T. J. Proctor, P. A. Knott, and J. A. Dunningham, Physical Review Letters 120, 080501 (2018).
[23] S. Altenburg and S. Wölk, Physica Scripta 94, 014001 (2018).
[24] J. Yang, S. Pang, Y. Zhou, and A. N. Jordan, Physical Review A 100, 032104 (2019).
[25] F. Albarelli, J. F. Friel, and A. Datta, Physical Review Letters 123, 200503 (2019).
[26] A. Carollo, B. Spagnolo, A. A. Dubkov, and D. Valenti, Journal of Statistical Mechanics: Theory and Experiment 2019, 094010 (2019).
[27] J. S. Sidhu, Y. Ouyang, E. T. Campbell, and P. Kok, Physical Review X 11, 011028 (2021).
[28] B. Xia, J. Huang, H. Li, H. Wang, and G. Zeng, Nature Communications 14, 1021 (2023).
[29] Z. Hou, Z. Zhang, G.-Y. Xiang, C.-F. Li, G.-C. Guo, H. Chen, L. Liu, and H. Yuan, Physical Review Letters 125, 020501 (2020).
[30] F. Albarelli, M. Barbieri, M. G. Genoni, and I. Gianani, Physics Letters A 384, 126311 (2020).
[31] J. Suzuki, Y. Yang, and M. Hayashi, Journal of Physics A: Mathematical and Theoretical 53, 453001 (2020).
[32] J. Suzuki, Journal of Physics A: Mathematical and Theoretical 53, 264001 (2020).
[33] F. Belliardo and V. Giovannetti, New Journal of Physics 23, 063055 (2021).
[34] V. Katariya and M. M. Wilde, New Journal of Physics 23, 073040 (2021).
[35] X.-M. Lu and X. Wang, Physical Review Letters 126, 120503 (2021).
[36] W. Górecki and R. Demkowicz-Dobrzański, Physical Review A 106, 012424 (2022).
[37] F. Albarelli and R. Demkowicz-Dobrzański, Physical Review X 12, 011039 (2022).
[38] A. Fujiwara and H. Imai, Journal of Physics A: Mathematical and Theoretical 41, 255304 (2008).
[39] B. M. Escher, R. L. de Matos Filho, and L. Davidovich, Nature Physics 7, 406 (2011).
[40] R. Demkowicz-Dobrzański, J. Kołodyński, and M. Guţă, Nature Communications 3, 1063 (2012).
[41] P. Sekatski, M. Skotiniotis, and W. Dür, New Journal of Physics 18, 073034 (2016).
[42] Y. Dong, X.-D. Chen, G.-C. Guo, and F.-W. Sun, Physical Review A 94, 052322 (2016).
[43] S. Zhou, M. Zhang, J. Preskill, and L. Jiang, Nature Communications 9, 78 (2018).
[44] W. Górecki, S. Zhou, L. Jiang, and R. Demkowicz-Dobrzański, Quantum 4, 288 (2020).
[45] G. Chen, L. Zhang, W.-H. Zhang, X.-X. Peng, L. Xu, Z.-D. Liu, X.-Y. Xu, J.-S. Tang, Y.-N. Sun, D.-Y. He, J.-S. Xu, Z.-Q. Zhou, C.-F. Li, and G.-C. Guo, Physical Review Letters 121, 060506 (2018).
[46] D. Leibfried, M. D. Barrett, T. Schaetz, J. Britton, J. Chiaverini, W. M. Itano, J. D. Jost, C. Langer, and D. J. Wineland, Science 304, 1476 (2004).
[47] T. Nagata, R. Okamoto, J. L. O'Brien, K. Sasaki, and S. Takeuchi, Science 316, 726 (2007).
[48] K. J. Resch, K. L. Pregnell, R. Prevedel, A. Gilchrist, G. J. Pryde, J. L. O'Brien, and A. G. White, Physical Review Letters 98, 223601 (2007).
[49] G. Chen, N. Aharon, Y.-N. Sun, Z.-H. Zhang, W.-H. Zhang, D.-Y. He, J.-S. Tang, X.-Y. Xu, Y. Kedem, C.-F. Li, and G.-C. Guo, Nature Communications 9, 93 (2018).
[50] Z. Hou, J.-F. Tang, H. Chen, H. Yuan, G.-Y. Xiang, C.-F. Li, and G.-C. Guo, Science Advances 7, eabd2986 (2021).
[51] M. Markiewicz, M. Pandit, and W. Laskowski, Scientific Reports 11, 15669 (2021).
[52] P. Yin, X. Zhao, Y. Yang, Y. Guo, W.-H. Zhang, G.-C. Li, Y.-J. Han, B.-H. Liu, J.-S. Xu, G. Chiribella, G. Chen, C.-F. Li, and G.-C. Guo, Nature Physics 19, 1122 (2023).
[53] Y.-N. Lu, Y.-R. Zhang, G.-Q. Liu, F. Nori, H. Fan, and X.-Y. Pan, Physical Review Letters 124, 210502 (2020).
[54] Y. Zhai, X. Yang, K. Tang, X. Long, X. Nie, T. Xin, D. Lu, and J. Li, Physical Review A 107, 022602 (2023).
[55] M.
Naghiloo, A. N. Jordan, and K. W. Murch, Physical Review Letters 119, 180801 (2017).
[56] J. J. Bollinger, W. M. Itano, D. J. Wineland, and D. J. Heinzen, Physical Review A 54, R4649 (1996).
[57] Z. Hu, S. Wang, L. Qiao, T. Isogawa, C. Li, Y. Yang, G. Wang, H. Yuan, and P. Cappellaro, Control incompatibility in multiparameter quantum metrology (2024).
[58] Y. Yang, S. Ru, M. An, Y. Wang, F. Wang, P. Zhang, and F. Li, Physical Review A 105, 022406 (2022).
[59] W. K. Wootters, Physical Review D 23, 357 (1981).
[60] J. S. Sidhu and P. Kok, AVS Quantum Science 2, 014701 (2020).
[61] C. W. Helstrom, Physics Letters A 25, 101 (1967).
[62] J. Liu, H. Yuan, X.-M. Lu, and X. Wang, Journal of Physics A: Mathematical and Theoretical 53, 023001 (2019).
[63] K. Matsumoto, Journal of Physics A: Mathematical and General 35, 3111 (2002).
[64] S. Ragy, M. Jarzyna, and R. Demkowicz-Dobrzański, Physical Review A 94, 052108 (2016).
[65] R. Demkowicz-Dobrzański, W. Górecki, and M. Guţă, Journal of Physics A: Mathematical and Theoretical 53, 363001 (2020).
[66] H. Chen, Y. Chen, and H. Yuan, Physical Review A 105, 062442 (2022).
[67] R. M. Wilcox, Journal of Mathematical Physics 8, 962 (1967).

SUPPLEMENTARY NOTE 1. ELIMINATING MEASUREMENT TRADE-OFF THROUGH SYSTEM EXTENSION

This Supplementary Note proves that the system extension scheme with the initial state being a maximally entangled state eliminates the measurement trade-off. For a $d$-dimensional system governed by a Hamiltonian $H_\alpha$, we introduce an ancillary system with dimension no less than $d$, whose Hamiltonian is the identity operator $I_A$. The initial state is prepared as an arbitrary pure state of the joint system, denoted as $|\psi_0\rangle = \sum_{l=0}^{d-1}\lambda_l\, |l_P\rangle\otimes|l_A\rangle$, where $\{|l_P\rangle \,|\, l = 0, 1, \ldots, d-1\}$ forms a complete orthonormal basis of the system, $\{|l_A\rangle \,|\, l = 0, 1, \ldots, d-1\}$ is a set of mutually orthogonal basis vectors for the ancillary system, and the non-negative coefficients $\lambda_l$ satisfy $\sum_{l=0}^{d-1}\lambda_l^2 = 1$. The Hamiltonian of the joint system is $H_\alpha \otimes I_A$, so the generator of the infinitesimal translation of $U_\alpha \otimes I_A$ with respect to the parameter $\alpha_i$ is given by $h_i \otimes I_A$, where $h_i = -i(\partial_i U_\alpha^\dagger) U_\alpha$ and $U_\alpha = \exp(-iH_\alpha t)$. From Eq. (7) of the main manuscript, we obtain
$$\langle\psi_0|\, h_i h_j \otimes I_A\, |\psi_0\rangle = \langle\psi_0|\, h_j h_i \otimes I_A\, |\psi_0\rangle. \quad (S1)$$
Calculating both sides of the equation separately, we obtain
$$\langle\psi_0|\, h_i h_j \otimes I_A\, |\psi_0\rangle = \sum_{l=0}^{d-1}\lambda_l^2\,\langle l_P|\, h_i h_j\, |l_P\rangle \quad (S2)$$
and
$$\langle\psi_0|\, h_j h_i \otimes I_A\, |\psi_0\rangle = \sum_{l=0}^{d-1}\lambda_l^2\,\langle l_P|\, h_j h_i\, |l_P\rangle. \quad (S3)$$
Since $h_i$ and $h_j$ generally do not commute, Eq. (S1) holds only when $\lambda_l = \frac{1}{\sqrt{d}}$, due to the cyclic property of the trace operator. If the dimension of the ancillary system is equal to that of the system, we conclude that the weak commutativity condition is satisfied when the initial state is the maximally entangled state.

SUPPLEMENTARY NOTE 2. ELIMINATING INITIAL-STATE TRADE-OFF IN TWO-DIMENSIONAL SYSTEMS

This Supplementary Note proves that, for a two-dimensional system with system extension, choosing the maximally entangled state as the initial state makes the quantum Fisher information matrix optimal. Suppose the joint Hamiltonian of a two-dimensional system and a two-dimensional ancillary system is $H_\alpha \otimes I_A$ and the initial state is $|\psi_0\rangle = \sqrt{x}\,|0_P 0_A\rangle + \sqrt{1-x}\,|1_P 1_A\rangle$ $(0 \leq x \leq 1)$, where $\{|0_P\rangle, |1_P\rangle\}$ and $\{|0_A\rangle, |1_A\rangle\}$ are complete orthonormal bases for the system and the ancillary system, respectively. According to Eq.
(4) in the main manuscript, the entries of the quantum Fisher information matrix of the final state are given by
$$F_{ij} = 4\left\{\mathrm{Re}\left[x\langle 0_P| h_i h_j |0_P\rangle + (1-x)\langle 1_P| h_i h_j |1_P\rangle\right] - \left[x\langle 0_P| h_i |0_P\rangle + (1-x)\langle 1_P| h_i |1_P\rangle\right]\left[x\langle 0_P| h_j |0_P\rangle + (1-x)\langle 1_P| h_j |1_P\rangle\right]\right\}. \quad (S4)$$
Denote the matrix representation of $h_i$ in the basis $\{|0_P\rangle, |1_P\rangle\}$ as
$$h_i^{(M)} = \begin{pmatrix} h_{i,11}^{(M)} & h_{i,12}^{(M)} \\ h_{i,21}^{(M)} & h_{i,22}^{(M)} \end{pmatrix}; \quad (S5)$$
we then have
$$F_{ij} = 4\left\{\mathrm{Re}\left[x\left(h_{i,11}^{(M)} h_{j,11}^{(M)} + h_{i,12}^{(M)} h_{j,21}^{(M)}\right) + (1-x)\left(h_{i,21}^{(M)} h_{j,12}^{(M)} + h_{i,22}^{(M)} h_{j,22}^{(M)}\right)\right] - \left[x h_{i,11}^{(M)} + (1-x) h_{i,22}^{(M)}\right]\left[x h_{j,11}^{(M)} + (1-x) h_{j,22}^{(M)}\right]\right\}. \quad (S6)$$
We define $\delta F = F|_{x=1/2} - F$; the elements of $\delta F$ are given by
$$\delta F_{ij} = 4\left(h_{i,11}^{(M)} - h_{i,22}^{(M)}\right)\left(h_{j,11}^{(M)} - h_{j,22}^{(M)}\right)\left(x - \tfrac{1}{2}\right)^2. \quad (S7)$$
For both the two-parameter and three-parameter cases, all principal minors of $\delta F$ are non-negative, which indicates that $\delta F$ is positive semi-definite. Therefore, the maximally entangled state is the optimal pure state. Let $\rho_0 = \sum_i p_i |\varphi_i\rangle\langle\varphi_i|$ be an arbitrary mixed initial state. Under the unitary evolution $U_\alpha \otimes I_A$, the final state is $\rho_t = \sum_i p_i\, (U_\alpha \otimes I_A)|\varphi_i\rangle\langle\varphi_i|(U_\alpha^\dagger \otimes I_A)$. According to the convexity of the quantum Fisher information matrix [62], we obtain
$$F(\rho_t) \leq \sum_i p_i\, F\!\left((U_\alpha \otimes I_A)|\varphi_i\rangle\langle\varphi_i|(U_\alpha^\dagger \otimes I_A)\right) \leq \sum_i p_i\, F\!\left((U_\alpha \otimes I_A)|\psi_0\rangle\langle\psi_0|(U_\alpha^\dagger \otimes I_A)\right)\Big|_{x=\frac{1}{2}} = F|_{x=\frac{1}{2}}, \quad (S8)$$
demonstrating that the maximally entangled initial state is optimal.

SUPPLEMENTARY NOTE 3. ACHIEVING HEISENBERG SCALING VIA HAMILTONIAN CONTROL

This Supplementary Note proves that using the reverse of the initial Hamiltonian as the control Hamiltonian enables the parameter estimation precision to reach the Heisenberg limit. Consider a two-dimensional system with Hamiltonian $H_\alpha$ and a two-dimensional ancillary system with Hamiltonian $I_A$. The probe and ancilla are initialized in a maximally entangled state and evolve under the joint Hamiltonian $H_\alpha \otimes I_A$. The entries of the quantum Fisher information matrix for the final evolved state are given by
$$F_{ij} = 2\,\mathrm{Tr}[h_i(t) h_j(t)] - t^2\,\mathrm{Tr}(\partial_i H_\alpha)\,\mathrm{Tr}(\partial_j H_\alpha), \quad (S9)$$
where $h_i(t)$ can be expressed as
$$h_i(t) = \int_0^t e^{iH_\alpha\tau}(\partial_i H_\alpha) e^{-iH_\alpha\tau}\, d\tau \quad (S10)$$
and we have used $\mathrm{Tr}[h_i(t)] = t\,\mathrm{Tr}(\partial_i H_\alpha)$. Suppose the initial Hamiltonian of the system is $H_\alpha^{(\mathrm{init})}$; then the control Hamiltonian is $H_c = -H_\alpha^{(\mathrm{init})}$. Here, $H_\alpha^{(\mathrm{init})}$ is parameter-dependent, while $H_c$ is parameter-independent. The total Hamiltonian $H_\alpha = H_\alpha^{(\mathrm{init})} + H_c$ becomes the zero operator, yielding $h_i(t) = t\,\partial_i H_\alpha^{(\mathrm{init})}$, and hence
$$F^{(c)}_{ij} = t^2\left\{2\,\mathrm{Tr}\!\left[\left(\partial_i H_\alpha^{(\mathrm{init})}\right)\left(\partial_j H_\alpha^{(\mathrm{init})}\right)\right] - \mathrm{Tr}\!\left(\partial_i H_\alpha^{(\mathrm{init})}\right)\mathrm{Tr}\!\left(\partial_j H_\alpha^{(\mathrm{init})}\right)\right\}. \quad (S11)$$
The estimator variance of the parameter $\alpha_i$ is given by
$$\delta^2\hat{\alpha}_i = \left[\left(n F^{(c)}\right)^{-1}\right]_{ii} \propto \frac{1}{n t^2}, \quad (S12)$$
where $\delta^2\hat{\alpha}_i := E\!\left[\left(\hat{\alpha}_i/\partial_{\alpha_i} E(\hat{\alpha}_i) - \alpha_i\right)^2\right]$.

SUPPLEMENTARY NOTE 4. TIME DEPENDENCE OF THE QUANTUM FISHER INFORMATION MATRIX

In this Supplementary Note, we derive the explicit time dependence of the quantum Fisher information matrix. To obtain it, we first solve for the explicit time dependence of $h_i(t)$ in Eq. (S10). Let $Y(\tau) = e^{iH_\alpha\tau}(\partial_i H_\alpha) e^{-iH_\alpha\tau}$; we obtain
$$\frac{\partial Y(\tau)}{\partial\tau} = i[H_\alpha, Y], \quad (S13)$$
where $Y(0) = \partial_i H_\alpha$. The superoperator $\mathcal{H}(\cdot) := [H_\alpha, \cdot]$ is Hermitian, with four eigenvalues $\Lambda_1, \Lambda_2, \Lambda_3, \Lambda_4$, where $\Lambda_k = 0$ for $k = 1, \ldots, r$ and $\Lambda_k \neq 0$ for $k = r+1, \ldots, 4$. The corresponding eigenvectors are $\Gamma_1, \Gamma_2, \Gamma_3, \Gamma_4$, which satisfy $\mathrm{Tr}[\Gamma_i^\dagger\Gamma_j] = \delta_{ij}$. We can decompose $Y(0)$ in the basis $\{\Gamma_k\}$ as
$$Y(0) = \sum_{k=1}^{4} c_k\,\Gamma_k, \quad (S14)$$
where $c_k = \mathrm{Tr}[\Gamma_k^\dagger\,\partial_i H_\alpha]$. The solution for $Y(\tau)$ is
$$Y(\tau) = \sum_{k=1}^{4}\mathrm{Tr}[\Gamma_k^\dagger\,\partial_i H_\alpha]\,\exp(i\Lambda_k\tau)\,\Gamma_k, \quad (S15)$$
leading to
$$h_i(t) = t\sum_{k=1}^{r}\mathrm{Tr}[\Gamma_k^\dagger\,\partial_i H_\alpha]\,\Gamma_k - i\sum_{k=r+1}^{4}\frac{\exp(i\Lambda_k t) - 1}{\Lambda_k}\,\mathrm{Tr}[\Gamma_k^\dagger\,\partial_i H_\alpha]\,\Gamma_k. \quad (S16)$$
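The superoperator construction above can be made concrete by vectorizing the 2×2 operators, so that $\mathcal{H}(\cdot) = [H_\alpha, \cdot]$ becomes a 4×4 Hermitian matrix. A hypothetical sketch (ours, not the authors' code) implementing Eqs. (S14)–(S16); it can be cross-checked against direct quadrature of Eq. (S10):

```python
# Spectral evaluation of h_i(t) via Eqs. (S14)-(S16). With row-major (C-order)
# vectorization, [H, Y] maps to (H kron I - I kron H^T) vec(Y).
import numpy as np

def h_spectral(H, dH, t, tol=1e-12):
    """H: Hermitian 2x2 Hamiltonian; dH: its derivative w.r.t. alpha_i."""
    I = np.eye(2)
    S = np.kron(H, I) - np.kron(I, H.T)     # 4x4 matrix of the superoperator
    Lam, G = np.linalg.eigh(S)              # Lambda_k; columns are vec(Gamma_k)
    c = G.conj().T @ dH.reshape(-1)         # c_k = Tr(Gamma_k^dag d_i H), Eq. (S14)
    h = np.zeros(4, dtype=complex)
    for k in range(4):
        if abs(Lam[k]) < tol:               # zero modes: t-linear part of Eq. (S16)
            h += t * c[k] * G[:, k]
        else:                               # oscillating part of Eq. (S16)
            h += -1j * (np.exp(1j * Lam[k] * t) - 1) / Lam[k] * c[k] * G[:, k]
    return h.reshape(2, 2)
```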
Suppose $H_\alpha$ has two eigenvalues $E_0$ and $E_1$ with $E_0 \neq E_1$, and two corresponding eigenvectors $|E_0\rangle$ and $|E_1\rangle$; the eigenvalues and eigenvectors of $\mathcal{H}(\cdot)$ are
$$\Lambda_1 = 0,\ \Gamma_1 = |E_0\rangle\langle E_0|; \quad \Lambda_2 = 0,\ \Gamma_2 = |E_1\rangle\langle E_1|; \quad \Lambda_3 = E_0 - E_1,\ \Gamma_3 = |E_0\rangle\langle E_1|; \quad \Lambda_4 = E_1 - E_0,\ \Gamma_4 = |E_1\rangle\langle E_0|. \quad (S17)$$
The corresponding traces are
$$\mathrm{Tr}[\Gamma_1^\dagger\,\partial_i H_\alpha] = \partial_i E_0, \quad \mathrm{Tr}[\Gamma_2^\dagger\,\partial_i H_\alpha] = \partial_i E_1, \quad \mathrm{Tr}[\Gamma_3^\dagger\,\partial_i H_\alpha] = (E_1 - E_0)\langle E_0|\partial_i E_1\rangle, \quad \mathrm{Tr}[\Gamma_4^\dagger\,\partial_i H_\alpha] = (E_0 - E_1)\langle E_1|\partial_i E_0\rangle. \quad (S18)$$
Substituting Eq. (S17) and Eq. (S18) into Eq. (S16), we obtain
$$h_i(t) = \sum_{l=0}^{1} t(\partial_i E_l)|E_l\rangle\langle E_l| + i\left[\exp\!\left(i(E_l - E_{1-l})t\right) - 1\right]\langle E_l|\partial_i E_{1-l}\rangle\,|E_l\rangle\langle E_{1-l}|. \quad (S19)$$
Finally, substituting Eq. (S19) into Eq. (S9) yields
$$F_{ij}(t) = \sum_{l=0}^{1} t^2\left[(\partial_i E_l)(\partial_j E_l) - (\partial_i E_l)(\partial_j E_{1-l})\right] - 8\sin^2\!\left(\frac{(E_l - E_{1-l})t}{2}\right)\langle E_l|\partial_i E_{1-l}\rangle\langle E_{1-l}|\partial_j E_l\rangle. \quad (S20)$$

SUPPLEMENTARY NOTE 5. TIME DEPENDENCE OF THE ESTIMATOR VARIANCES

This Supplementary Note provides explicit expressions for the estimator variances and analyzes their dependence on time. The quantum Fisher information matrix can be directly obtained from Eq. (S20). The variances of $\hat{\alpha}_1$, $\hat{\alpha}_2$, and $\hat{\alpha}_3$ are
$$\delta^2\hat{\alpha}_1 = \frac{t^2\csc^2\!\left(\frac{1}{2}\delta E t\right)\left[(\mu_2\partial_3\delta E - \mu_3\partial_2\delta E)^2 + (\nu_2\partial_3\delta E - \nu_3\partial_2\delta E)^2\right] + 16(\mu_3\nu_2 - \mu_2\nu_3)^2}{16 n t^2\left[\mu_1(\nu_3\partial_2\delta E - \nu_2\partial_3\delta E) + \mu_2(\nu_1\partial_3\delta E - \nu_3\partial_1\delta E) + \mu_3(\nu_2\partial_1\delta E - \nu_1\partial_2\delta E)\right]^2}, \quad (S21)$$
$$\delta^2\hat{\alpha}_2 = \frac{t^2\csc^2\!\left(\frac{1}{2}\delta E t\right)\left[(\mu_1\partial_3\delta E - \mu_3\partial_1\delta E)^2 + (\nu_1\partial_3\delta E - \nu_3\partial_1\delta E)^2\right] + 16(\mu_3\nu_1 - \mu_1\nu_3)^2}{16 n t^2\left[\mu_1(\nu_3\partial_2\delta E - \nu_2\partial_3\delta E) + \mu_2(\nu_1\partial_3\delta E - \nu_3\partial_1\delta E) + \mu_3(\nu_2\partial_1\delta E - \nu_1\partial_2\delta E)\right]^2}, \quad (S22)$$
and
$$\delta^2\hat{\alpha}_3 = \frac{t^2\csc^2\!\left(\frac{1}{2}\delta E t\right)\left[(\mu_1\partial_2\delta E - \mu_2\partial_1\delta E)^2 + (\nu_1\partial_2\delta E - \nu_2\partial_1\delta E)^2\right] + 16(\mu_2\nu_1 - \mu_1\nu_2)^2}{16 n t^2\left[\mu_1(\nu_3\partial_2\delta E - \nu_2\partial_3\delta E) + \mu_2(\nu_1\partial_3\delta E - \nu_3\partial_1\delta E) + \mu_3(\nu_2\partial_1\delta E - \nu_1\partial_2\delta E)\right]^2}, \quad (S23)$$
respectively, where $\delta E = E_0 - E_1$ and
$$\mu_i = \mathrm{Re}\left(\langle E_0|\partial_i E_1\rangle\right), \quad \nu_i = \mathrm{Im}\left(\langle E_0|\partial_i E_1\rangle\right). \quad (S24)$$
Since $\delta^2\hat{\alpha}_1$, $\delta^2\hat{\alpha}_2$, and $\delta^2\hat{\alpha}_3$ are symmetric with respect to each other, we consider only $\delta^2\hat{\alpha}_1$ in the following analysis. When $(\mu_2\partial_3\delta E - \mu_3\partial_2\delta E)^2 + (\nu_2\partial_3\delta E - \nu_3\partial_2\delta E)^2 = 0$, the oscillatory term vanishes. To ensure a nonzero numerator, we require $\partial_2\delta E = \partial_3\delta E = 0$, leading to
$$\delta^2\hat{\alpha}_1 = \frac{1}{n t^2 (\partial_1\delta E)^2}, \quad (S25)$$
where $\partial_1\delta E \neq 0$. In this case, $\delta^2\hat{\alpha}_1$ achieves the Heisenberg limit. For instance, consider the Hamiltonian
$$H_{(B,\theta,\varphi)} = B\left[\cos\theta\cos\varphi\,\sigma_x + \cos\theta\sin\varphi\,\sigma_y + \sin\theta\,\sigma_z\right], \quad (S26)$$
where $\sigma_x$, $\sigma_y$, and $\sigma_z$ are Pauli matrices. For estimating $B$, the fact that the eigenvalues of the Hamiltonian are independent of $\theta$ and $\varphi$ enables the estimation precision to reach the Heisenberg limit.

When $(\mu_2\partial_3\delta E - \mu_3\partial_2\delta E)^2 + (\nu_2\partial_3\delta E - \nu_3\partial_2\delta E)^2 \neq 0$, we first consider two extreme cases. When $t$ is sufficiently small such that $t \ll 1/|\delta E|$, we have $\csc^2(\frac{1}{2}\delta E t) \approx 4/(\delta E t)^2$, so
$$\delta^2\hat{\alpha}_1 \propto \frac{1}{t^2}. \quad (S27)$$
When $t$ is sufficiently large, we have
$$\delta^2\hat{\alpha}_1 \propto \csc^2\!\left(\tfrac{1}{2}\delta E t\right), \quad (S28)$$
which is a periodic function. For general $t$, the overall trend can be analyzed via periodic sampling. We set the initial sampling time as $t_{s,0} \in (0, 2\pi/|\delta E|)$ and define $c_0 = \csc^2(\frac{1}{2}\delta E t_{s,0})$. For $t_{s,k} = t_{s,0} + \frac{2k\pi}{|\delta E|}$ with $k = 1, 2, 3, \ldots$, we have $\csc^2(\frac{1}{2}\delta E t_{s,k}) = c_0$. The function
$$\left.\delta^2\hat{\alpha}_1\right|_{c_0} = \frac{c_0 t^2\xi_1 + \xi_2}{n t^2\xi_3} \quad (S29)$$
passes through all these sampling points, where
$$\xi_1 = (\mu_2\partial_3\delta E - \mu_3\partial_2\delta E)^2 + (\nu_2\partial_3\delta E - \nu_3\partial_2\delta E)^2, \quad \xi_2 = 16(\mu_3\nu_2 - \mu_2\nu_3)^2,$$
$$\xi_3 = 16\left[\mu_1(\nu_3\partial_2\delta E - \nu_2\partial_3\delta E) + \mu_2(\nu_1\partial_3\delta E - \nu_3\partial_1\delta E) + \mu_3(\nu_2\partial_1\delta E - \nu_1\partial_2\delta E)\right]^2. \quad (S30)$$
Since
$$\partial_t\left(\left.\delta^2\hat{\alpha}_1\right|_{c_0}\right) \leq 0, \quad (S31)$$
the variance shows an overall decreasing trend. From Eq. (S28), we know that at large $t$, the minimum variance per period occurs at $t_k = (2k+1)\pi/|\delta E|$, i.e., $c_0 = 1$. Therefore, the infimum of the variance is
$$\inf\delta^2\hat{\alpha}_1 = \frac{\xi_1}{n\xi_3}. \quad (S32)$$
However, since $\partial_t(\delta^2\hat{\alpha}_1|_{c_0})$ decays cubically with time, it is not worthwhile to expend more time for limited gains in precision.
SUPPLEMENTARY NOTE 6. NECESSITY FOR THE REPARAMETERIZATION

This Supplementary Note shows the necessity for the reparameterization. In general, the parameterization of the Hamiltonian of a two-dimensional system can be arbitrary. For example, one can parameterize the Hamiltonian by the coefficients in the Pauli basis or by the strength and angular parameters. For the current problem of reducing $\delta E^2$ so as to extend the evolution time for the Heisenberg scaling, it will turn out that the parameterization by the coefficients in the Pauli basis is more convenient, so we will focus on this parameterization. Usually the unknown parameters can have complex correlations with each other, manifested by the correlation matrix of the estimation. To simplify the estimation task, a powerful tool is to transform the unknown parameters to other parameters which have simpler correlations (e.g., a diagonal correlation matrix) [30, 31], sometimes known as reparameterization. If the vector of the parameters $\alpha$ is reparameterized as $\beta = (\beta_1, \beta_2, \ldots, \beta_k)$, the quantum Fisher information matrix of $\beta$ is related to the original quantum Fisher information matrix for $\alpha$ through
$$F_\beta = J^T F_\alpha J, \quad (S33)$$
where $J$ is the Jacobian matrix defined as $J_{ij} = \partial\alpha_i/\partial\beta_j$. Suppose the original Hamiltonian of the probe is decomposed in the Pauli basis as
$$H_\alpha^{(\mathrm{init})} = \boldsymbol{f}(\alpha)\cdot\boldsymbol{\sigma}, \quad (S34)$$
where $\boldsymbol{\sigma} = (\sigma_x, \sigma_y, \sigma_z)$, $\alpha = (\alpha_1, \alpha_2, \alpha_3)$, and $\boldsymbol{f}(\alpha) = (f_1(\alpha), f_2(\alpha), f_3(\alpha))$ with $f_1(\alpha)$, $f_2(\alpha)$, and $f_3(\alpha)$ being real-valued functions. The control Hamiltonian at the $(k+1)$-th iteration is denoted by
$$H_{c,k+1} = -\boldsymbol{f}(\hat{\alpha}_k)\cdot\boldsymbol{\sigma}, \quad (S35)$$
where $\hat{\alpha}_k = (\hat{\alpha}_{k,1}, \hat{\alpha}_{k,2}, \hat{\alpha}_{k,3})$ denotes the control parameters, which are essentially the estimates of $\alpha$ from the $k$-th iteration. Suppose $\hat{\alpha}_k$ deviates from the true value of $\alpha$ by $\delta\alpha_k$, i.e.,
$$\hat{\alpha}_k = \alpha_0 + \delta\alpha_k, \quad (S36)$$
with $\alpha_0 = (\alpha_{0,1}, \alpha_{0,2}, \alpha_{0,3})$ being the true value of $\alpha$ and $\delta\alpha_k = (\delta\alpha_{k,1}, \delta\alpha_{k,2}, \delta\alpha_{k,3})$ being the estimation errors from the $k$-th iteration. We then obtain the $\delta E^2$ of the $(k+1)$-th iteration as
$$\delta E^2_{k+1} = 4\,\|\boldsymbol{f}(\alpha_0) - \boldsymbol{f}(\hat{\alpha}_k)\|^2. \quad (S37)$$
When $\|\delta\alpha_k\| \ll 1$, Eq. (S37) simplifies to
$$\delta E^2_{k+1} \approx 4\,\|J\,\delta\alpha_k\|^2, \quad (S38)$$
where $J$ is the Jacobian matrix defined by $J_{ij} = \partial_{\alpha_j} f_i(\alpha)$ evaluated at $\alpha_0$. If we reparameterize the Hamiltonian as
$$H_\beta^{(\mathrm{init})} = \beta_1\sigma_x + \beta_2\sigma_y + \beta_3\sigma_z, \quad (S39)$$
where the new parameters are defined as $\beta_1 = f_1(\alpha)$, $\beta_2 = f_2(\alpha)$, and $\beta_3 = f_3(\alpha)$, then according to Eq. (S37) we obtain
$$\delta E^2_{k+1} = 4\,\|\delta\boldsymbol{\beta}_k\|^2, \quad (S40)$$
which greatly simplifies the subsequent optimization.

SUPPLEMENTARY NOTE 7. DERIVATION OF THE COVARIANCE MATRIX

In this Supplementary Note, we derive the covariance matrix for the $k$-th iteration. Suppose the estimate from the $(k-1)$-th iteration is $\hat{\boldsymbol{\beta}}_{k-1}$. The control Hamiltonian in the $k$-th iteration is $H_{c,k} = -\hat{\boldsymbol{\beta}}_{k-1}\cdot\boldsymbol{\sigma}$ and the total Hamiltonian is $H_{\beta,k} = (\boldsymbol{\beta} - \hat{\boldsymbol{\beta}}_{k-1})\cdot\boldsymbol{\sigma}$. From Eq. (10) in the main manuscript, the entries of the quantum Fisher information matrix are
$$F^{(\beta)}_{k,ii} = 4\,\frac{\delta\beta^2_{k-1,i}\,\|\delta\boldsymbol{\beta}_{k-1}\|^2 t_k^2 + \left(\|\delta\boldsymbol{\beta}_{k-1}\|^2 - \delta\beta^2_{k-1,i}\right)\sin^2\!\left(\|\delta\boldsymbol{\beta}_{k-1}\| t_k\right)}{\|\delta\boldsymbol{\beta}_{k-1}\|^4},$$
$$F^{(\beta)}_{k,ij} = 4\,\delta\beta_{k-1,i}\,\delta\beta_{k-1,j}\,\frac{\|\delta\boldsymbol{\beta}_{k-1}\|^2 t_k^2 - \sin^2\!\left(\|\delta\boldsymbol{\beta}_{k-1}\| t_k\right)}{\|\delta\boldsymbol{\beta}_{k-1}\|^4}, \quad i \neq j. \quad (S41)$$
The covariance matrix in the $k$-th iteration is given by
$$C^{(\beta)}_k = \left(n_k F^{(\beta)}_k\right)^{-1}, \quad (S42)$$
the elements of which turn out to be
$$C^{(\beta)}_{k,ii} = \frac{1}{4 n_k}\left[\frac{\delta\beta^2_{k-1,i}}{t_k^2\,\|\delta\boldsymbol{\beta}_{k-1}\|^2} + \left(\|\delta\boldsymbol{\beta}_{k-1}\|^2 - \delta\beta^2_{k-1,i}\right)\csc^2\!\left(\|\delta\boldsymbol{\beta}_{k-1}\| t_k\right)\right],$$
$$C^{(\beta)}_{k,ij} = \frac{\delta\beta_{k-1,i}\,\delta\beta_{k-1,j}}{4 n_k}\left[\frac{1}{t_k^2\,\|\delta\boldsymbol{\beta}_{k-1}\|^2} - \csc^2\!\left(\|\delta\boldsymbol{\beta}_{k-1}\| t_k\right)\right], \quad i \neq j. \quad (S43)$$
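As a sanity check on Eqs. (S41)–(S43), one can build the quantum Fisher information matrix from the closed form and verify that its inverse is the stated covariance matrix. A hypothetical sketch (ours; $n_k$ set to 1 and the test values are arbitrary):

```python
# Verify that the inverse of Eq. (S41) reproduces Eq. (S43) (with n_k = 1).
import numpy as np

rng = np.random.default_rng(2)
db, t = rng.normal(size=3), 0.7      # delta beta_{k-1} and t_k (test values)
r = np.linalg.norm(db)
g = r * t

F = np.empty((3, 3)); C = np.empty((3, 3))
for i in range(3):
    for j in range(3):
        if i == j:
            F[i, j] = 4 * (db[i]**2 * r**2 * t**2
                           + (r**2 - db[i]**2) * np.sin(g)**2) / r**4
            C[i, j] = (db[i]**2 / (t**2 * r**2)
                       + (r**2 - db[i]**2) / np.sin(g)**2) / 4
        else:
            F[i, j] = 4 * db[i] * db[j] * (r**2 * t**2 - np.sin(g)**2) / r**4
            C[i, j] = db[i] * db[j] * (1 / (t**2 * r**2) - 1 / np.sin(g)**2) / 4

print(np.allclose(np.linalg.inv(F), C))   # expect True
```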
SUPPLEMENTARY NOTE 8. RELATION BETWEEN ESTIMATION VARIANCES OF THE ORIGINAL PARAMETERS WITH ADAPTIVE AND OPTIMAL CONTROL

In this Supplementary Note, we derive the relation between the estimation variances of the original parameters under the adaptive and the optimal control strategies. Suppose the estimation variance obtained from the adaptive control strategy with $m$ iterations is $V_m = \mathrm{Tr}\,C^{(\beta)}_m$, and that from the optimal control strategy with the true values of the unknown parameters is $V_{\mathrm{oc}}$, both for the parameters in the Pauli basis, $\hat{\beta}_1$, $\hat{\beta}_2$, and $\hat{\beta}_3$, after the reparameterization, rather than for the original parameters in the Hamiltonian. If the estimation variance of the original parameter $\alpha_i$ using the adaptive control strategy approaches the variance obtained with the optimal control strategy, analogous to how $V_m$ approaches $V_{\mathrm{oc}}$, this establishes the effectiveness of our adaptive control strategy for estimating the original parameters.

For the adaptive control strategy, the covariance matrix after $m$ iterations is $C^{(\beta)}_m$, with elements given in Eq. (S43). When the optimal evolution time scheme is applied, the total variance is
$$V_m = \frac{\|\delta\boldsymbol{\beta}_{m-1}\|^2/g_0^2 + 2\,\|\delta\boldsymbol{\beta}_{m-1}\|^2\csc^2(g_0)}{4 n_m}, \quad (S44)$$
where we have used Eq. (25) of the main manuscript. For the optimal control Hamiltonian, $V_{\mathrm{oc}}$ is given by Eq. (33) of the main manuscript. When the Hamiltonian contains three independent unknown parameters, $\alpha = (\alpha_1, \alpha_2, \alpha_3)$, the quantum Fisher information matrix of $\alpha$ is related to that of $\beta$ via
$$F_\alpha = J^T F_\beta J, \quad (S45)$$
where $J$ is given by $J_{ij} = \partial\beta_i/\partial\alpha_j$, with $\beta_i = f_i(\alpha)$. Therefore, the relation between their covariance matrices is
$$C^{(\alpha)} = J^{-1} C^{(\beta)} \left(J^{-1}\right)^T. \quad (S46)$$
For the adaptive control strategy, Eq. (S46) yields the variances of the original parameters after $m$ iterations as
$$C^{(\alpha)}_{m,ii} = \frac{1}{4 n_m}\sum_{\substack{r,s=1\\ r\neq s}}^{3}(J^{-1})_{ir}(J^{-1})_{is}\,\delta\beta_{m-1,r}\,\delta\beta_{m-1,s}\left[\frac{1}{g_0^2} - \csc^2(g_0)\right] + \frac{1}{4 n_m}\sum_{r=1}^{3}(J^{-1})^2_{ir}\left[\frac{\delta\beta^2_{m-1,r}}{g_0^2} + \left(\|\delta\boldsymbol{\beta}_{m-1}\|^2 - \delta\beta^2_{m-1,r}\right)\csc^2(g_0)\right], \quad (S47)$$
where we have used the optimal evolution time scheme. Let
$$\mu_{\max,i} = \max\left\{\left|(J^{-1})_{ir}(J^{-1})_{is}\right| \;\middle|\; r, s \in \{1,2,3\},\ r \neq s\right\}, \quad \nu_{\max,i} = \max\left\{(J^{-1})^2_{ir} \;\middle|\; r \in \{1,2,3\}\right\}. \quad (S48)$$
We obtain
$$C^{(\alpha)}_{m,ii} \leq \left[\frac{2\mu_{\max,i}\left(g_0^2\csc^2(g_0) - 1\right)}{1 + 2 g_0^2\csc^2(g_0)} + \nu_{\max,i}\right] V_m. \quad (S49)$$
For the optimal control strategy, Eq. (S46) yields the variances of the original parameters as
$$C^{(\alpha)}_{\mathrm{oc},ii} = \frac{(J^{-1})^2_{i1} + (J^{-1})^2_{i2} + (J^{-1})^2_{i3}}{4 n_{\mathrm{oc}} t_{\mathrm{oc}}^2} \geq \frac{\nu_{\max,i}}{3}\, V_{\mathrm{oc}}. \quad (S50)$$
Using
$$V_m = \kappa V_{\mathrm{oc}} \quad (S51)$$
to connect Eq. (S49) and Eq. (S50), we obtain
$$C^{(\alpha)}_{m,ii} \leq \frac{12 g_0^2\csc^2(g_0) - 3}{1 + 2 g_0^2\csc^2(g_0)}\,\kappa\, C^{(\alpha)}_{\mathrm{oc},ii}, \quad (S52)$$
where $\mu_{\max,i} \leq \nu_{\max,i}$ has been used.

SUPPLEMENTARY NOTE 9. PROBABILITY DENSITY FUNCTION OF THE DEVIATION FACTOR

In this Supplementary Note, we derive the probability density function of the deviation factor $D_k$ for the $k$-th iteration. Since $\delta E^2_k = 4\|\delta\boldsymbol{\beta}_{k-1}\|^2$ and $\delta\boldsymbol{\beta}_{k-1}$ follows a Gaussian distribution $\delta\boldsymbol{\beta}_{k-1} \sim \mathcal{N}(0, C^{(\beta)}_{k-1})$ in the asymptotic limit of the number of trials, $\delta E^2_k$ follows a generalized $\chi^2$ distribution, expressed as a weighted sum of squares of independent standard normal random variables, where the weights are four times the eigenvalues of $C^{(\beta)}_{k-1}$. Together with $\|\delta\boldsymbol{\beta}_{k-2}\|\, t_{\mathrm{opt},k-1} = g_0$, we obtain
$$\delta E^2_k = \frac{1}{n_{k-1} t^2_{\mathrm{opt},k-1}}\,\chi^2_1(1) + \frac{g_0^2\csc^2(g_0)}{n_{k-1} t^2_{\mathrm{opt},k-1}}\,\chi^2_2(1) + \frac{g_0^2\csc^2(g_0)}{n_{k-1} t^2_{\mathrm{opt},k-1}}\,\chi^2_3(1). \quad (S53)$$
The mean of $\delta E^2_k$ is $\left[2 g_0^2\csc^2(g_0) + 1\right]/\left(n_{k-1} t^2_{\mathrm{opt},k-1}\right)$. From Eq. (35) of the main manuscript, we have
$$D_k = \frac{\chi^2_1(1) + g_0^2\csc^2(g_0)\left[\chi^2_2(1) + \chi^2_3(1)\right]}{2 g_0^2\csc^2(g_0) + 1}. \quad (S54)$$
Using the Laplace transform, we obtain the probability density function of $D_k$ as
$$f_{D_k} = \frac{\left(2 g_0^2 + \sin^2 g_0\right)\, e^{-D_k\left(\frac{\sin^2 g_0}{2 g_0^2} + 1\right)}\, \mathrm{erf}\!\left(\sqrt{\frac{D_k\left(2 g_0^2\csc^2 g_0 - \frac{\sin^2 g_0}{g_0^2} - 1\right)}{2}}\right)}{2 g_0\sqrt{g_0^2 - \sin^2 g_0}}, \quad (S55)$$
where
$$\mathrm{erf}(x) = \frac{2}{\sqrt{\pi}}\int_0^x e^{-t^2}\, dt. \quad (S56)$$
Quantum Fisher Information as a Thermal and Dynamical Probe in Frustrated Magnets: Insights from Quantum Spin Ice

Chengkang Zhou,1,∗ Zhengbang Zhou,2,∗ Félix Desrochers,2,3 Yong Baek Kim,2 and Zi Yang Meng1

1 Department of Physics and HK Institute of Quantum Science & Technology, The University of Hong Kong, Pokfulam Road, Hong Kong
2 Department of Physics, University of Toronto, Toronto, Ontario M5S 1A7, Canada
3 Department of Physics, Harvard University, Cambridge, MA 02138, USA
(Dated: October 17, 2025)

Quantum Fisher information (QFI) is a novel measure of multipartite quantum entanglement that can be measured in inelastic neutron scattering experiments on quantum magnets. In this work, we demonstrate that the QFI can be used to understand the thermal and dynamical properties of quantum magnets by focusing on the pyrochlore lattice model of quantum spin ice (QSI), a three-dimensional quantum spin liquid that hosts fractionalized quasiparticles and emergent photons. We use the newly developed multi-directed loop update quantum Monte Carlo (QMC) algorithm and exact diagonalization (ED) to compute the QFI, which is further utilized to calibrate the gauge mean-field theory results. We show that the temperature and momentum dependence of the QFI can reveal characteristic energy scales of distinct phases and phase transitions in the global phase diagram. In particular, the QFI can clearly distinguish the ferromagnetic ordered phase, the thermal critical region above it, as well as two distinct QSI phases, namely zero-flux and π-flux QSI. Moreover, the QFI shows two crossover temperature scales, one from the trivial paramagnet to the classical spin ice regime and a lower-temperature crossover to QSI. We discuss our results, especially for the π-flux QSI, in light of the ongoing experimental efforts on Cerium-based pyrochlore systems. Our results demonstrate that the QFI not only detects entanglement properties but can also be viewed as a sensitive thermal and dynamical probe in the investigation of quantum magnets.

Introduction.— Entanglement is arguably the most important concept in quantum physics [1–7]. It is at the heart of our modern understanding of phases of matter and transitions between them [8–23]. As a notable example, entanglement is the fundamental characteristic of quantum spin liquids (QSLs) [20, 24–27] — paramagnetic phases of frustrated spin systems that fail to magnetically order down to zero temperature, and host fractional excitations and emergent gauge fields [28–34]. QSLs are characterized by long-range entanglement, which may be measured for a gapped QSL by a non-vanishing topological entanglement entropy [20, 35–38]. However, despite its role as a theoretical cornerstone and its usefulness in numerical studies [19–23, 37–43], topological entanglement entropy is not experimentally accessible in solid-state platforms, where only local correlations are typically measured.

One still has experimental access to other measures of entanglement. A particularly useful one that has recently been at the forefront of the experimental search for QSLs is the quantum Fisher information (QFI) [44–52]. Initially employed to define the maximal achievable precision in parameter estimation for a given quantum state in the quantum metrology community [53–58], the QFI is directly related to the dynamical susceptibility [59], which is routinely measured in inelastic neutron scattering experiments [60–62].
The QFI density fQ provides a lower bound on the multipartite entanglement in the system [63, 64]. This bound can be used to experimentally differentiate between QSLs and other, more trivial states, such as random singlet states driven by strong disorder [65, 66]. These states may otherwise be challenging to distinguish experimentally, as they can both lead to similar signatures, such as continua of excitations in inelastic neutron scattering or a lack of experimentally observable finite-temperature phase transitions [67–78]. Nevertheless, it should be emphasized that a large value of the QFI, although promising, does not provide evidence for the realization of a QSL in and of itself. Indeed, trivial ordered states sufficiently close to a quantum critical point may have an arbitrarily large value of fQ [59]. Be that as it may, if precise theoretical predictions for the momentum and temperature dependence of the QFI for prospective QSLs exist, they offer stringent quantitative predictions which, if measured, may provide significantly more convincing evidence than qualitative features, like the presence of broad continua of excitations.

In this letter, we make such detailed predictions for the QFI of one of the most paradigmatic QSLs: quantum spin ice (QSI) [79–84]. QSI is a three-dimensional QSL that is the ground state of an XXZ model with dominant Ising and subleading transverse couplings on the pyrochlore lattice. It realizes the deconfined (Coulomb) phase of compact U(1) gauge theory and, as such, hosts emergent photon excitations, spin-1/2 spinons that act as emergent electric charges, and magnetic monopoles [85]. QSI is the ideal platform for making specific predictions for the QFI, as it is one of the few known examples of an experimentally relevant model that stabilizes a well-understood QSL that is numerically accessible over a parameter regime (i.e., ferromagnetic transverse couplings) with sign-problem-free quantum Monte Carlo (QMC) [85–90]. Furthermore, several compounds have historically been considered as possible experimental realizations of QSI, such as Tb2Ti2O7 [91–100], Pr2(Sn,Zr,Hf)2O7 [101–112], and Yb2Ti2O7 [83, 113–124]. The most recently considered candidate materials are the Cerium-based pyrochlore compounds Ce2(Zr,Hf,Sn)2O7 [125–144].

FIG. 1. Heat maps of the QFI as functions of temperature T and J±. Panels (a) and (b) show the QFI density fQ(S±q, T) in the S± channel at Γ = (0, 0, 0), while panels (c) and (d) show it at Γ′ = (4π, 4π, 0). Panels (a) and (c) are obtained from ED calculations on a 16-site cluster with J± ranging from −0.045 to 0.08, and panels (b) and (d) are from QMC simulations of 4 × L³ sites (L = 4) with J± ranging from 0.04 to 0.10. In panels (a) and (b), fQ(S±Γ, T) maps out the thermodynamic phase boundaries between the ferromagnetic phase (FM) and QSI0. The temperature dependence of fQ(S±Γ, T) from QMC further discerns the crossover temperature scales from the high-temperature paramagnetic regime to classical spin ice and eventually to the QSI0 regime (see also Fig. 2(a)).
In panels (c) and (d), fQ(S±Γ′, T) reflects the strength of the fluctuations in the thermal and quantum phase diagram, with strong QFI in the classical critical region above the FM phase and stronger QFI in the classical spin ice and QSI0 regions. Moreover, the strongest QFI signal, represented by the red region in panel (c) for J± < 0, reflects the QSIπ regime (see also Fig. 2(b)).

For this last family of compounds, no magnetic order has been reported in Ce2(Zr,Hf)2O7. Experimental determinations of their microscopic couplings indicate that Ce2Zr2O7 and Ce2Hf2O7 likely fall in a region of parameter space that stabilizes so-called π-flux QSI [131–133, 140–143], where a constant flux of the emergent gauge field threads the hexagonal plaquettes such that translation acts projectively on spinon excitations [127, 145–150]. Energy-integrated and inelastic neutron scattering measurements consistent with theoretical predictions have also been reported [129, 131, 133, 135, 142, 150], and a cubic scaling of the low-temperature heat capacity has even been recently measured in Ce2Zr2O7 [135]. Precise theoretical predictions for the QFI should thus be experimentally measurable and provide rigorous tests for these and future candidate materials.

We employ large-scale QMC simulations [151–154] to evaluate the QFI of the XXZ model with ferromagnetic transverse coupling. To this end, we develop a multi-directed loop (MDL) update algorithm to efficiently sample the highly frustrated (3+1)d configurational space; the details are provided in the Supplemental Material (SM) [155]. Exact diagonalization (ED) is also used to compute the QFI [156, 157] for both antiferromagnetic and ferromagnetic transverse couplings. We show that the ED and QMC results are consistent for ferromagnetic XY exchange, which justifies extending our ED results to the antiferromagnetic exchange case, where π-flux QSI is the ground state. These numerical results are further compared with predictions from the gauge mean-field theory (GMFT) parton construction [145, 146, 149, 150, 158–161].

We find that the QFI at finite temperature is sensitive to thermal phase transitions and crossovers, as well as to regions of large critical fluctuations, depending on the momentum position. For example, the QFI associated with the transverse spin components S± at the Γ = (0, 0, 0) point clearly maps out the thermal phase diagram of the pyrochlore XXZ model, revealing the two crossover temperature scales between the high-temperature paramagnet and classical spin ice as well as between classical spin ice and QSI [85, 89]. Its evolution also delineates the ferromagnetic (FM) thermal phase transition for large ferromagnetic transverse couplings [86, 88]. In contrast, the QFI at the Γ′ = (4π, 4π, 0) point appears to be sensitive to critical fluctuations near the phase transition between QSI and the ferromagnetically ordered phase. Altogether, our work provides the first large-scale, unbiased computation of the entanglement properties of QSI, offering exhaustive numerical results that can be quantitatively compared with future experimental results on current candidate materials.

Model and Method.— The pyrochlore lattice is a face-centered cubic lattice with four sublattices per unit cell, forming a network of corner-sharing tetrahedra.
We consider spin-1/2 moments on the pyrochlore lattice interacting through an XXZ model of the form
$$H = J_z\sum_{\langle i,j\rangle} S^z_i S^z_j - J_\pm\sum_{\langle i,j\rangle}\left(S^+_i S^-_j + S^-_i S^+_j\right), \quad (1)$$
where the summation $\langle i,j\rangle$ runs over all nearest-neighbor pairs. We focus on the regime with a dominant antiferromagnetic Ising coupling (i.e., $J_z > |J_\pm| > 0$), which provides geometric frustration within each tetrahedron and energetically favors "classical spin ice configurations" where $\sum_{i\in t} S^z_i = 0$, with the sum running over all spins in a given tetrahedron. In the perturbative regime $|J_\pm| \ll J_z$, the transverse coupling allows for tunneling between the classically degenerate configurations that respect this local energetic constraint [79]. This tunneling stabilizes 0-flux QSI (QSI0) for $0 < J_\pm/J_z < 0.052$ at low temperatures ($T \lesssim 12|J_\pm|^3/J_z^2$), as shown in QMC [85–89]. When $-1 < J_\pm/J_z < 0$, π-flux QSI (QSIπ) is instead stabilized, where the flux refers to the emergent static magnetic flux threading the hexagonal plaquettes of the pyrochlore lattice in the ground state [79, 145, 147, 149, 150, 162–164]. For $J_\pm/J_z > 0.052$, the system undergoes a fluctuation-induced first-order transition [165, 166] into an XY ferromagnetic (FM) ordered phase below a critical temperature $T_c$ [86, 88, 89]. Hereafter, we set $J_z = 1$ as the unit of energy. For reference, the dominant exchange is usually slightly smaller than 1 K in Cerium-based dipolar-octupolar candidates [131–133, 140–143].

To quantify the entanglement properties of a given state, the QFI provides a lower bound on the multipartite entanglement, also known as the entanglement depth [50, 65, 167]. For a collective generator $O = \sum_{i=1}^{N} O_i$ with local eigenvalue range $\Delta\lambda = \lambda_{\max} - \lambda_{\min}$, any $m$-producible state $\rho$ obeys [46, 50, 63, 64, 168]
$$\frac{F_Q[\rho, O]}{N} \leq m(\Delta\lambda)^2. \quad (2)$$
Equivalently, with $f_Q(O) := F_Q[\rho, O]/N$, we can define the normalized QFI (nQFI) as the lower bound of the entanglement depth, such that
$$\mathrm{nQFI}(O) := \frac{f_Q(O)}{(\Delta\lambda)^2} > m \implies \text{entanglement depth} \geq m + 1. \quad (3)$$
For the operator $O = S^\alpha_{\mathbf{q}} := \sum_i S^\alpha_{\mathbf{R}_i} e^{i\mathbf{q}\cdot\mathbf{R}_i}$, the QFI density is related to the dynamical structure factor $A^\alpha(\mathbf{q}, \omega) := \frac{1}{2\pi N}\int dt\,\langle S^{\alpha\dagger}_{\mathbf{q}}(t) S^\alpha_{\mathbf{q}}(0)\rangle e^{i\omega t}$ at momentum $\mathbf{q}$ by [59]
$$f_Q(S^\alpha_{\mathbf{q}}, T) = 4\int_0^\infty d\omega\,\tanh\!\left(\frac{\omega}{2T}\right)\left(1 - e^{-\omega/T}\right) A^\alpha(\mathbf{q}, \omega). \quad (4)$$
Here, $T$ is the temperature and $\alpha$ labels the different pseudospin components, $\alpha \in \{x, y, z\}$. We see that the QFI density $f_Q(S^\alpha_{\mathbf{q}}, T)$ can be obtained by integrating $A^\alpha(\mathbf{q}, \omega)$ over all frequencies at a fixed temperature and momentum. Experimental measurements of the QFI have already been reported for the quasi-1D material KCuF3 [46] and for the 2D frustrated triangular-lattice material KYbSe2 [47, 52].
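To illustrate how Eq. (4) is applied in practice, the following sketch (our own toy example, not the QMC+SAC pipeline of this work) integrates the QFI kernel against a Lorentzian stand-in for $A^\alpha(\mathbf{q}, \omega)$; the mode position and width are arbitrary assumptions.

```python
# Toy evaluation of Eq. (4): f_Q = 4 int_0^inf dw tanh(w/2T)(1-e^{-w/T}) A(w).
import numpy as np
from scipy.integrate import quad

def f_Q(A, T, wmax=50.0):
    kernel = lambda w: np.tanh(w / (2 * T)) * (1 - np.exp(-w / T)) * A(w)
    return 4 * quad(kernel, 0.0, wmax, limit=400)[0]

w0, eta = 1.0, 0.05                     # assumed mode position and width
A = lambda w: (eta / np.pi) / ((w - w0) ** 2 + eta ** 2)  # unit spectral weight

for T in (0.01, 0.1, 1.0):
    print(T, f_Q(A, T))                 # -> 4x the spectral weight as T -> 0
```

As T → 0 the thermal kernel approaches unity, so fQ saturates at four times the integrated spectral weight, while thermal occupation suppresses it at high T.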
To simulate the QSI0 phase, we developed an MDL-QMC update that enables efficient simulations of the QSI0 phase on the pyrochlore lattice. This task is difficult to achieve with the conventional directed loop update algorithm since, in the QSI0 phase, the simplest quantum fluctuation that connects different classical spin-ice states involves six spin flips around a hexagon, which is hard to realize by a simple directed loop update. Our MDL algorithm overcomes this limitation by allowing the insertion of multiple operator pairs, thereby naturally generating higher-order processes during the Monte Carlo update and making the QSI0 phase accessible (see SM [155]). We then measure the imaginary-time correlation functions $G^\pm = \frac{1}{2N}\sum_{\gamma,\nu}\langle S^+_{-\mathbf{q},\gamma}(\tau) S^-_{\mathbf{q},\nu}(0) + S^-_{-\mathbf{q},\gamma}(\tau) S^+_{\mathbf{q},\nu}(0)\rangle$ and $G^z = \frac{1}{N}\sum_{\gamma,\nu}\langle S^z_{-\mathbf{q},\gamma}(\tau) S^z_{\mathbf{q},\nu}(0)\rangle$, where $\gamma, \nu$ label the four pyrochlore sublattices, the momentum transfer $\mathbf{q}$ is measured in the 3D pyrochlore Brillouin zone (BZ), and $\tau \in [0, \beta]$ denotes the imaginary time, with $\beta = 1/T$ the inverse temperature. We utilize the stochastic analytic continuation (SAC) scheme [169–173] to convert the imaginary-time correlation function into a real-frequency dynamic structure factor. This QMC+SAC scheme has been successfully applied to a variety of lattice models, producing reliable spectral properties ranging from magnon and amplitude modes in magnetically ordered states [174, 175] to fractionalized excitations in QSL and QSI models [85, 176–179].

We further compute the QFI density using ED and GMFT. The ED calculations are performed on a 16-site periodic cubic conventional cluster (see SM [155]), providing access to the momentum positions Γ, Γ′, and X. We first compare the ED and GMFT results with our high-quality QMC data before extending these calculations into the π-flux J± < 0 regime, where most QSI candidate materials reside and QMC encounters a sign problem.

FIG. 2. Temperature evolution of the QFI in different phases obtained with various computational methods. The QFI fQ(S±q, T) is shown for (a) the S± channel at the Γ point and (b) the S± channel at the Γ′ point. The blue disk points represent results from ED calculations with J± = −0.3 and J± = −0.125 in the QSIπ regime, while the orange triangles correspond to QMC simulations with J± = 0.045 and J± = 0.05 in the QSI0 regime. The orange shaded areas accompanying the data highlight the crossover from the paramagnetic regime to the classical spin ice regime at T ∼ 1 and that from the classical spin ice regime to QSI0 at T ∼ |J±|³. The red triangles indicate QMC results with J± ranging from 0.06 to 0.08 in the FM regime. The green points denote the GMFT results at J± = 0.045 (QSI0) and J± = −0.3 (QSIπ) for the different ground states at zero temperature. The QMC, ED, and GMFT results are consistent (see SM [155] for details).

Results.— Our main results are summarized in Fig. 1. Let us first define the total spinon QFI $f_Q(S^\pm_{\mathbf{q}}, T) = f_Q(S^x_{\mathbf{q}} + S^y_{\mathbf{q}}, T) = f_Q(S^x_{\mathbf{q}}, T) + f_Q(S^y_{\mathbf{q}}, T)$ for the XXZ model. Panels (a, b) and (c, d) of Fig. 1 show fQ(S±Γ, T) and fQ(S±Γ′, T), respectively, at different values of J±; panels (a, c) are obtained from ED and panels (b, d) from QMC on an L = 4 lattice (4 × L³ sites). We additionally computed the QFI from GMFT and compared it with QMC simulations at J± = 0.045 for system sizes L = 3 and L = 4. We find that, while the QMC and ED results are consistent with each other, the GMFT results are only off by a factor of approximately 6/7 compared to QMC and ED at low temperatures (see SM [155]). This is impressive given the mean-field nature of the GMFT. As such, we argue that the ED and GMFT results can be readily extended into the J± < 0 regime.

We would like to first highlight that the QFI shows non-trivial entanglement depth across various phases. For spin-1/2 systems, we find nQFI(S±q) = fQ(S±q, T)/2 (see SM [155]).
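For the ED route, the frequency integral in Eq. (4) can be carried out exactly in the Lehmann representation, where it reduces to the standard mixed-state QFI $F_Q = 2\sum_{n,m}\frac{(p_n - p_m)^2}{p_n + p_m}|\langle n|O|m\rangle|^2$. The sketch below (ours) demonstrates this on a small 1D XXZ chain — a toy stand-in for the 16-site pyrochlore cluster, which we do not reproduce here:

```python
# Thermal QFI density of O = S^z_q on a periodic XXZ chain by full ED.
# Stand-in toy model (not the pyrochlore cluster); q = pi keeps O Hermitian.
import numpy as np
from functools import reduce

def op_at(site, op, L):
    """Embed a single-site operator at `site` in an L-site chain."""
    I = np.eye(2)
    return reduce(np.kron, [op if s == site else I for s in range(L)])

sz = np.diag([0.5, -0.5]); sp = np.array([[0., 1.], [0., 0.]]); sm = sp.T
L, Jz, Jpm, T, q = 8, 1.0, 0.05, 0.1, np.pi

H = sum(Jz * op_at(i, sz, L) @ op_at((i + 1) % L, sz, L)
        - Jpm * (op_at(i, sp, L) @ op_at((i + 1) % L, sm, L)
                 + op_at(i, sm, L) @ op_at((i + 1) % L, sp, L))
        for i in range(L))
O = sum(np.exp(1j * q * i) * op_at(i, sz, L) for i in range(L))

E, V = np.linalg.eigh(H)
p = np.exp(-(E - E.min()) / T); p /= p.sum()          # Boltzmann weights
Om = V.conj().T @ O @ V                               # <n|O|m>
num, den = (p[:, None] - p[None, :]) ** 2, p[:, None] + p[None, :]
ratio = np.zeros_like(den); mask = den > 1e-14
ratio[mask] = num[mask] / den[mask]
print("f_Q =", (2 * np.sum(ratio * np.abs(Om) ** 2)).real / L)
```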
As shown in Fig. 1(a,b), $f_Q(S^\pm_\Gamma, T) \gtrsim 3$ in QSI0 and $\gtrsim 5$ in the FM phase, implying at least 2- and 3-partite entanglement, respectively, in sharp contrast to the high-temperature paramagnet (PM), where $f_Q(S^\pm_\Gamma, T) = 0$. On the other hand, we see that $f_Q(S^\pm_\Gamma, T)$ of QSIπ is ∼0. We hereby stress that the QFI is a mere lower bound on the entanglement depth — such results do not imply that QSIπ is trivially entangled, nor that the FM phase is more entangled than the QSIπ phase. In fact, the QFI depends heavily on the momentum position. In principle, to aptly categorize the lower bound of the entanglement depth of a certain phase, one should rigorously search through momentum space to find the largest QFI. One guiding principle for determining said momentum position is to examine the equal-time spin structure factor (ETSF), since the QFI density at $T \to 0$ becomes $\langle S^+_{-\mathbf{q}} S^-_{\mathbf{q}} + S^-_{-\mathbf{q}} S^+_{\mathbf{q}}\rangle/2$. We demonstrate this point by presenting $f_Q(S^\pm_{\Gamma'}, T)$ in Fig. 1(c,d), where Γ′ = (4π, 4π, 0), at different values of J±. This momentum position corresponds to a local maximum of the ETSF for QSIπ [149, 150] and, consequently, a local maximum of the QFI at low temperatures [46, 50]. Indeed, here $f_Q(S^\pm_{\Gamma'}, T)$ of QSIπ is ≳3, whereas for QSI0 and FM it is ≳2 and ≳1, respectively. From this, we can conclude that QSIπ is at least 2-partite entangled.

Even more importantly, one insightful feature of the QFI is that it can serve as a clear indicator of thermal phase boundaries and crossovers [52].

FIG. 3. QFI in experimental coordinates for Cerium-based pyrochlore compounds. Panel (a) presents a heat map of the QMC-computed QFI fQ(S^DO_Γ′, T), with J± ranging from 0.04 to 0.10. Panel (b) shows line cuts of panel (a) from J± = 0.045 to 0.08. The crossover temperature scales from the paramagnetic phase to CSI at T ∼ 1 and from CSI to QSI0 at T ∼ |J±|³ are clearly manifest in the shaded data, highlighted in orange. Panel (c) shows ED results for fQ(S^DO_Γ′, T) for J± between 0.01 and 0.08. Panel (d) focuses on fQ(S^DO_X, T) with X = (0, 0, 2π) for J± in the range from −0.425 to 0.00.

Notably, at finite temperature, $f_Q(S^\pm_\Gamma, T)$ faithfully tracks the two crossover temperatures of the QSI0 regime, as shown in Fig. 2(a) for J± = 0.045 and 0.05. From previous QMC calculations of the specific heat [85], it is known that as one cools down from the high-temperature paramagnetic (PM) phase, the system first crosses over into classical spin ice (CSI) at T ∼ 1 before finally entering the QSI0 regime at T ∼ |J±|³. These crossover temperatures are marked by the two peaks in the specific heat. But these features are equally well represented in the QFI density. In Fig. 2(a), $f_Q(S^\pm_\Gamma, T)$ first starts increasing from 0 in the PM phase at T ∼ 1 when we cross over into CSI. After that, $f_Q(S^\pm_\Gamma, T)$ plateaus until the second temperature scale T ∼ |J±|³, where it rises again as we cross over into the QSI0 phase. Similarly, the phase boundary between the FM phase and the high-temperature PM is clearly delineated by the increase in $f_Q(S^\pm_\Gamma, T)$, as shown in Fig. 2(a).
As such, the QFI also appears to be sensitive to this thermal phase transition [86].
On the other hand, the two crossover temperature scales are also clearly demarcated by the QFI for the QSIπ phase, as shown in Fig. 1(a,c): both fQ(S±_Γ, T) and fQ(S±_Γ′, T) increase near T ∼ 1; at T ∼ |J±|³, fQ(S±_Γ, T) decreases while fQ(S±_Γ′, T) increases for J± ≲ −0.3. At J± ∼ −0.3, the two temperature scales coalesce, as shown by the essentially flat QFI below the first temperature scale of 1 K in Fig. 2(a,b). This behavior is consistent with Ce2Zr2O7 (best-fit J± ≈ −0.3 [131, 133, 136]), where only one peak is observed in specific heat measurements [133]. In contrast, specific heat measurements on Ce2Hf2O7 (J± ≈ −0.125 in the QSI scenario) [141–143, 180] show two well-separated peaks [142], mirrored by the predicted two inflection points of the QFI, most pronounced in fQ(S±_Γ, T) as shown in Fig. 2(a).
The evolution of the QFI also strongly depends on the momentum position. The two aforementioned temperature scales manifest themselves in a strikingly different way when looking at fQ(S±_Γ′, T) in Fig. 2(b) as opposed to fQ(S±_Γ, T) in Fig. 2(a). As we cool down to the CSI regime from the high-temperature PM at the temperature scale Jz, fQ(S±_Γ′, T) rapidly rises and eventually plateaus. But after the second temperature scale is met, fQ(S±_Γ′, T) dips instead of increasing, giving rise to the characteristic hump shown in Fig. 2(b), which precisely marks the region of large critical fluctuations. This speaks to the versatility of the QFI as a potentially powerful experimental probe — by adjusting the momentum position, we can design probes that are sensitive to specific features of interest.
Discussion.— So far, we have considered the QFI of an abstract XXZ model. The same framework connects directly to real neutron scattering experiments once the microscopic nature of the local moments and their coupling to the neutron's magnetic moment are specified (see the detailed derivation in the SM [155]). In particular, we present predictions in Fig. 3 for the Ce-based dipolar–octupolar pyrochlore Ce2Zr2O7 [131, 133, 144] — a leading QSI candidate. While Ce ions carry both dipolar and octupolar moments, neutron scattering primarily couples to the dipolar moments, which correspond to the transverse spin components S± in the current model [125, 181]. Fig. 3 shows the corresponding QFI fQ(S^DO_q, T), which takes into account the transverse momentum projector in inelastic neutron scattering.
In Fig. 3, panels (a,b) are obtained from QMC simulations for L = 4 and panels (c,d) are the ED results. One sees that in the global frame, the QFI again vividly captures the thermodynamic phase structure. The crossover from the paramagnetic region at high temperature (T > Jz) to the CSI and its plateau at intermediate temperature (|J±|³ < T < Jz), and eventually to QSI0 at low temperature (T < |J±|³), is manifest in both the ED and QMC data. The evolution of the QFI closely resembles that of the thermal entropy measurements [85].
One important point is that, in principle, the bounds on fQ(S^DO_q, T) for an m-producible state are momentum dependent because of the transverse projector. But we can establish a q-independent bound, as derived in the SM [155], giving us the following relationship: nQFI(S^DO_q) = (3/2) fQ(S^DO_q).
As a result, values extracted directly from our computed data in a neutron scattering setting already certify nontrivial entanglement depth: nQFI(S^DO_Γ′, T) ≳ 2.2 for the 0-flux QSI and nQFI(S^DO_Γ′, T) ≳ 3.7 for the all-in–all-out order (FM in the local spin frame), respectively. At the X = (0, 0, 2π) point, nQFI(S^DO_X, T) ≳ 2.3 certifies that the π-flux QSI is at least 3-partite entangled. Moreover, in principle, polarized neutron measurements with the non-spin-flip (NSF) and spin-flip (SF) channels are the optimal probes to find the best entanglement depth bound. As shown in the SM [155], nQFI(S^(N)SF_q) = 3 fQ(S^(N)SF_q); therefore, even a modest fQ(S^(N)SF_q) can signal deep multipartite entanglement.
Based on the results and analysis above, we believe the QFI can offer richer information than initially anticipated. Besides providing insightful information about the entanglement depth, it can be used as both a thermal and a dynamical probe in the investigation of quantum magnets, with quantum spin ice as an archetypal manifestation. Based on spin spectra in both local and global frames, the temperature and momentum dependence of the QFI can faithfully trace out the ground-state and finite-temperature phase boundaries in the QSI system, consistent with previous theoretical and numerical knowledge. These properties of the QFI can bridge the fundamental concept of entanglement to practical model computations and applications to experiments. In this way, other exotic quantum critical points, QSL phases, and their associated theory, computation, and experimental verification could be expected to unite from the entanglement perspective.
Acknowledgments.— We acknowledge inspiring discussions with Bruce Gaulin. CKZ and ZYM thank Menghan Song and Ting-Tung Wang for discussions on entanglement-related topics and acknowledge the support from the Research Grants Council (RGC) of Hong Kong (Project Nos. 17309822, C7037-22GF, 17302223, 17301924) and the ANR/RGC Joint Research Scheme sponsored by the RGC of Hong Kong and the French National Research Agency (Project No. A HKU703/22). We thank the HPC2021 system under the Information Technology Services at the University of Hong Kong, as well as the Beijing Paratera Tech Corp., Ltd. [182], for providing HPC resources that have contributed to the research results reported within this paper. ZZ, FD, and YBK were supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) Grant No. RGPIN-2023-03296 and the Centre of Quantum Materials at the University of Toronto. Computations at the University of Toronto were performed on the Cedar and Fir clusters, which are hosted by the Digital Research Alliance of Canada. F.D. is further supported by the Vanier Canada Graduate Scholarship (CGV-186886).
∗ These authors contributed equally to this work.
[1] A. Einstein, B. Podolsky, and N. Rosen, Physical Review 47, 777 (1935).
[2] R. Horodecki, P. Horodecki, M. Horodecki, and K. Horodecki, Reviews of Modern Physics 81, 865 (2009).
[3] J. S. Bell, Physics Physique Fizika 1, 195 (1964).
[4] J. F. Clauser, M. A. Horne, A. Shimony, and R. A. Holt, Physical Review Letters 23, 880 (1969).
[5] A. Aspect, Nature 398, 189 (1999).
[6] A. Zeilinger, Reviews of Modern Physics 71, S288 (1999).
[7] V. Vedral, Nature 453, 1004 (2008).
[8] X.-G. Wen, International Journal of Modern Physics B 4, 239 (1990).
[9] G. Vidal, J. I. Latorre, E. Rico, and A. Kitaev, Physical Review Letters 90, 227902 (2003).
[10] X.-G. Wen,
Quantum Field Theory of Many-Body Systems: From the Origin of Sound to an Origin of Light and Electrons (Oxford University Press, 2004).
[11] M. Levin and X.-G. Wen, Reviews of Modern Physics 77, 871 (2005).
[12] Z.-C. Gu, M. Levin, and X.-G. Wen, Physical Review B—Condensed Matter and Materials Physics 78, 205116 (2008).
[13] L. Amico, R. Fazio, A. Osterloh, and V. Vedral, Reviews of Modern Physics 80, 517 (2008).
[14] X. Chen, Z.-C. Gu, and X.-G. Wen, Phys. Rev. B 82, 155138 (2010).
[15] N. Laflorencie, Physics Reports 646, 1 (2016).
[16] X.-G. Wen, Reviews of Modern Physics 89, 041004 (2017).
[17] G. De Chiara and A. Sanpera, Reports on Progress in Physics 81, 074002 (2018).
[18] X.-G. Wen, Science 363, eaal3099 (2019).
[19] J. Zhao, Y.-C. Wang, Z. Yan, M. Cheng, and Z. Y. Meng, Phys. Rev. Lett. 128, 010601 (2022).
[20] J. Zhao, B.-B. Chen, Y.-C. Wang, Z. Yan, M. Cheng, and Z. Y. Meng, npj Quantum Materials 7, 69 (2022).
[21] Y. Da Liao, M. Song, J. Zhao, and Z. Y. Meng, Phys. Rev. B 110, 235111 (2024).
[22] M. Song, J. Zhao, M. Cheng, C. Xu, M. Scherer, L. Janssen, and Z. Y. Meng, Science Advances 11, eadr0634 (2025).
[23] J. Zhao, Z. Y. Meng, Y.-C. Wang, and N. Ma, Phys. Rev. B 112, 094416 (2025).
[24] X.-G. Wen, Phys. Rev. B 65, 165113 (2002).
[25] X.-G. Wen, Phys. Rev. D 68, 065003 (2003).
[26] M. A. Levin and X.-G. Wen, Phys. Rev. B 71, 045110 (2005).
[27] Z.-C. Gu, M. Levin, B. Swingle, and X.-G. Wen, Phys. Rev. B 79, 085118 (2009).
[28] P. W. Anderson, Science 235, 1196 (1987).
[29] P. A. Lee, Reports on Progress in Physics 71, 012501 (2007).
[30] L. Savary and L. Balents, Reports on Progress in Physics 80, 016502 (2016).
[31] Y. Zhou, K. Kanoda, and T.-K. Ng, Rev. Mod. Phys. 89, 025003 (2017).
[32] J. Knolle and R. Moessner, Annual Review of Condensed Matter Physics 10, 451 (2019).
[33] H. Takagi, T. Takayama, G. Jackeli, G. Khaliullin, and S. E. Nagler, Nature Reviews Physics 1, 264 (2019).
[34] C. Broholm, R. Cava, S. Kivelson, D. Nocera, M. Norman, and T. Senthil, Science 367, eaay0668 (2020).
[35] M. Levin and X.-G. Wen, Phys. Rev. Lett. 96, 110405 (2006).
[36] A. Kitaev and J. Preskill, Phys. Rev. Lett. 96, 110404 (2006).
[37] S. V. Isakov, M. B. Hastings, and R. G. Melko, Nature Physics 7, 772 (2011).
[38] B.-B. Chen, H.-H. Tu, Z. Y. Meng, and M. Cheng, Phys. Rev. B 106, 094415 (2022).
[39] T. Grover, A. M. Turner, and A. Vishwanath, Physical Review B—Condensed Matter and Materials Physics 84, 195120 (2011).
[40] H.-C. Jiang, Z. Wang, and L. Balents, Nature Physics 8, 902 (2012).
[41] T. Grover, Physical Review Letters 111, 130402 (2013).
[42] V. Alba, Phys. Rev. E 95, 062132 (2017).
[43] J. D'Emidio, Phys. Rev. Lett. 124, 110602 (2020).
[44] J. Lambert and E. S. Sørensen, Physical Review B 99, 045117 (2019).
[45] J. Lambert and E. S. Sørensen, Physical Review B 102, 224401 (2020).
[46] A. Scheie, P. Laurell, A. M. Samarakoon, B. Lake, S. E. Nagler, G. E. Granroth, S. Okamoto, G. Alvarez, and D. A. Tennant, Phys. Rev. B 103, 224434 (2021).
[47] A. Scheie, E. Ghioldi, J. Xing, J. Paddison, N. Sherman, M. Dupont, L. Sanjeewa, S. Lee, A. Woods, D. Abernathy, et al., Nat. Phys. 20, 74 (2024).
[48] P. Laurell, A. Scheie, C. J. Mukherjee, M. M. Koza, M. Enderle, Z. Tylczynski, S. Okamoto, R. Coldea, D. A. Tennant, and G. Alvarez, Physical Review Letters 127, 037201 (2021).
[49] V. Menon, N. E. Sherman, M. Dupont, A. O. Scheie, D. A. Tennant, and J. E. Moore, Physical Review B 107, 054422 (2023).
[50] A. Scheie, P. Laurell, W. Simeth, E. Dagotto, and D. A. Tennant,
Materials Today Quantum 5, 100020 (2025).
[51] P. Laurell, A. Scheie, E. Dagotto, and D. A. Tennant, Advanced Quantum Technologies 8, 2400196 (2025).
[52] T. Shimokawa, S. Sabharwal, and N. Shannon, arXiv:2505.11874 (2025).
[53] S. L. Braunstein and C. M. Caves, Physical Review Letters 72, 3439 (1994).
[54] D. Petz and C. Sudár, Journal of Mathematical Physics 37, 2662 (1996).
[55] D. Petz, Journal of Physics A: Mathematical and General 35, 929 (2002).
[56] M. G. Paris, International Journal of Quantum Information 7, 125 (2009).
[57] B. Escher, R. L. de Matos Filho, and L. Davidovich, Nature Physics 7, 406 (2011).
[58] G. M. Tino and M. A. Kasevich, Atom Interferometry, Vol. 188 (IOS Press, 2014).
[59] P. Hauke, M. Heyl, L. Tagliacozzo, and P. Zoller, Nature Physics 12, 778 (2016).
[60] S. W. Lovesey, Theory of Neutron Scattering from Condensed Matter (Oxford University Press, 1984).
[61] A. T. Boothroyd, Principles of Neutron Scattering from Condensed Matter (Oxford University Press, 2020).
[62] S. T. Bramwell and B. Keimer, Nature Materials 13, 763 (2014).
[63] G. Tóth, Physical Review A—Atomic, Molecular, and Optical Physics 85, 022322 (2012).
[64] P. Hyllus, W. Laskowski, R. Krischek, C. Schwemmer, W. Wieczorek, H. Weinfurter, L. Pezzé, and A. Smerzi, Physical Review A—Atomic, Molecular, and Optical Physics 85, 022321 (2012).
[65] S. Sabharwal, T. Shimokawa, and N. Shannon, Phys. Rev. Res. 7, 023271 (2025).
[66] T. Shimokawa, S. Sabharwal, and N. Shannon, arXiv preprint arXiv:2505.11874 (2025).
[67] A. Murani, Journal of Applied Physics 49, 1604 (1978).
[68] S.-H. Lee, C. Broholm, G. Aeppli, A. Ramirez, T. Perring, C. Carlile, M. Adams, T. Jones, and B. Hessen, EPL (Europhysics Letters) 35, 127 (1996).
[69] J. A. Paddison, M. Daum, Z. Dun, G. Ehlers, Y. Liu, M. B. Stone, H. Zhou, and M. Mourigal, Nature Physics 13, 117 (2017).
[70] Z. Zhu, P. Maksimov, S. R. White, and A. Chernyshev, Physical Review Letters 119, 157201 (2017).
[71] I. Kimchi, A. Nahum, and T. Senthil, Physical Review X 8, 031028 (2018).
[72] B. Gao, T. Chen, C.-L. Huang, Y. Qiu, G. Xu, J. Liebman, L. Chen, M. B. Stone, E. Feng, H. Cao, et al., Physical Review B 108, 024431 (2023).
[73] K. A. Ross, J. W. Krizan, J. A. Rodriguez-Rivera, R. J. Cava, and C. L. Broholm, Physical Review B 93, 014433 (2016).
[74] R. Sarkar, J. W. Krizan, F. Brückner, E. d. C. Andrade, S. Rachel, M. Vojta, R. J. Cava, and H.-H. Klauss, Physical Review B 96, 235117 (2017).
[75] M. Aizenman and J. Wehr, Physical Review Letters 62, 2503 (1989).
[76] R. L. Greenblatt, M. Aizenman, and J. L. Lebowitz, Physical Review Letters 103, 197201 (2009).
[77] M. Aizenman, R. L. Greenblatt, and J. L. Lebowitz, Journal of Mathematical Physics 53 (2012).
[78] T. Vojta, Journal of Physics A: Mathematical and General 39, R143 (2006).
[79] M. Hermele, M. P. A. Fisher, and L. Balents, Phys. Rev. B 69, 064404 (2004).
[80] A. Castro Neto, P. Pujol, and E. Fradkin, Physical Review B—Condensed Matter and Materials Physics 74, 024302 (2006).
[81] O. Benton, O. Sikora, and N. Shannon, Phys. Rev. B 86, 075154 (2012).
[82] C. Castelnovo, R. Moessner, and S. L. Sondhi, Annu. Rev. Condens. Matter Phys. 3, 35 (2012).
[83] M. J. Gingras and P. A. McClarty, Reports on Progress in Physics 77, 056501 (2014).
[84] M. Udagawa and L. Jaubert, Spin Ice (Springer, 2021).
[85] C.-J. Huang, Y. Deng, Y. Wan, and Z. Y. Meng, Phys. Rev. Lett. 120, 167202 (2018).
[86] A. Banerjee, S. V. Isakov, K. Damle, and Y. B. Kim, Phys. Rev. Lett. 100, 047208 (2008).
[87] N. Shannon, O. Sikora, F. Pollmann, K. Penc, and P. Fulde, Phys. Rev. Lett. 108, 067204 (2012).
[88] J.-P. Lv, G. Chen, Y. Deng, and Z. Y. Meng, Phys. Rev. Lett. 115, 037202 (2015).
[89] Y. Kato and S. Onoda, Phys. Rev. Lett. 115, 077202 (2015).
[90] C.-J. Huang, C. Liu, Z. Meng, Y. Yu, Y. Deng, and G. Chen, Phys. Rev. Research 2, 042022 (2020).
[91] H. R. Molavian, M. J. Gingras, and B. Canals, Physical Review Letters 98, 157204 (2007).
[92] H. R. Molavian, P. A. McClarty, and M. J. Gingras, arXiv preprint arXiv:0912.2957 (2009).
[93] M. Gingras, B. Den Hertog, M. Faucher, J. Gardner, S. Dunsiger, L. Chang, B. Gaulin, N. Raju, and J. Greedan, Physical Review B 62, 6496 (2000).
[94] Y.-J. Kao, M. Enjalran, A. Del Maestro, H. R. Molavian, and M. J. Gingras, Physical Review B 68, 172407 (2003).
[95] M. Enjalran and M. J. Gingras, Physical Review B—Condensed Matter and Materials Physics 70, 174426 (2004).
[96] S. Guitteny, J. Robert, P. Bonville, J. Ollivier, C. Decorse, P. Steffens, M. Boehm, H. Mutka, I. Mirebeau, and S. Petit, Physical Review Letters 111, 087201 (2013).
[97] S. Petit, P. Bonville, J. Robert, C. Decorse, and I. Mirebeau, Physical Review B—Condensed Matter and Materials Physics 86, 174403 (2012).
[98] H. Takatsu, S. Onoda, S. Kittaka, A. Kasahara, Y. Kono, T. Sakakibara, Y. Kato, B. Fåk, J. Ollivier, J. W. Lynn, et al., Physical Review Letters 116, 217201 (2016).
[99] J. Gardner, S. Dunsiger, B. Gaulin, M. Gingras, J. Greedan, R. Kiefl, M. Lumsden, W. MacFarlane, N. Raju, J. Sonier, et al., Physical Review Letters 82, 1012 (1999).
[100] T. Taniguchi, H. Kadowaki, H. Takatsu, B. Fåk, J. Ollivier, T. Yamazaki, T. Sato, H. Yoshizawa, Y. Shimura, T. Sakakibara, et al., Physical Review B—Condensed Matter and Materials Physics 87, 060408 (2013).
[101] H. Zhou, C. Wiebe, J. Janik, L. Balicas, Y. Yo, Y. Qiu, J. Copley, and J. Gardner, Physical Review Letters 101, 227204 (2008).
[102] S. Onoda and Y. Tanaka, Phys. Rev. B 83, 094411 (2011).
[103] Y. Kato and S. Onoda, Phys. Rev. Lett. 115, 077202 (2015).
[104] K. Matsuhira, C. Sekine, C. Paulsen, and Y. Hinatsu, Journal of Magnetism and Magnetic Materials 272, E981 (2004).
[105] K. Kimura, S. Nakatsuji, J. Wen, C. Broholm, M. Stone, E. Nishibori, and H. Sawa, Nature Communications 4, 1934 (2013).
[106] P. Bonville, S. Guitteny, A. Gukasov, I. Mirebeau, S. Petit, C. Decorse, M. C. Hatnean, and G. Balakrishnan, Physical Review B 94, 134428 (2016).
[107] S. Petit, E. Lhotel, S. Guitteny, O. Florea, J. Robert, P. Bonville, I. Mirebeau, J. Ollivier, H. Mutka, E. Ressouche, et al., Physical Review B 94, 165153 (2016).
[108] N. Martin, P. Bonville, E. Lhotel, S. Guitteny, A. Wildes, C. Decorse, M. Ciomaga Hatnean, G. Balakrishnan, I. Mirebeau, and S. Petit, Physical Review X 7, 041028 (2017).
[109] J.-J. Wen, S. Koohpayeh, K. Ross, B. Trump, T. McQueen, K. Kimura, S. Nakatsuji, Y. Qiu, D. Pajerowski, J. Copley, et al., Physical Review Letters 118, 107206 (2017).
[110] V. Anand, L. Opherden, J. Xu, D. Adroja, A. Islam, T. Herrmannsdörfer, J. Hornung, R. Schönemann, M. Uhlarz, H. Walker, et al., Physical Review B 94, 144415 (2016).
[111] R. Sibille, E. Lhotel, M. C. Hatnean, G. Balakrishnan, B. Fåk, N. Gauthier, T. Fennell, and M. Kenzelmann, Physical Review B 94, 024436 (2016).
[112] R. Sibille, N. Gauthier, H. Yan, M. Ciomaga Hatnean, J. Ollivier, B. Winn, U. Filges, G. Balakrishnan, M. Kenzelmann, N. Shannon, et al., Nature Physics 14, 711 (2018).
[113] K. A. Ross, L. Savary, B. D. Gaulin, and L. Balents, Phys. Rev. X 1, 021002 (2011).
[114] Y. Yasui, M. Soda, S. Iikubo, M. Ito, M. Sato, N. Hamaguchi, T. Matsushita, N. Wada, T. Takeuchi, N. Aso, et al., Journal of the Physical Society of Japan 72, 3014 (2003).
[115] L.-J. Chang, S. Onoda, Y. Su, Y.-J. Kao, K.-D. Tsuei, Y. Yasui, K. Kakurai, and M. R. Lees, Nature Communications 3, 1 (2012).
[116] A. Yaouanc, P. D. De Réotier, L. Keller, B. Roessli, and A. Forget, Journal of Physics: Condensed Matter 28, 426002 (2016).
[117] J. Gaudet, K. Ross, E. Kermarrec, N. Butch, G. Ehlers, H. Dabkowska, and B. Gaulin, Physical Review B 93, 064406 (2016).
[118] E. Kermarrec, J. Gaudet, K. Fritsch, R. Khasanov, Z. Guguchia, C. Ritter, K. Ross, H. Dabkowska, and B. Gaulin, Nature Communications 8, 14810 (2017).
[119] V. Peçanha-Antonio, E. Feng, Y. Su, V. Pomjakushin, F. Demmel, L.-J. Chang, R. J. Aldus, Y. Xiao, M. R. Lees, and T. Brückel, Physical Review B 96, 214415 (2017).
[120] J. Robert, E. Lhotel, G. Remenyi, S. Sahling, I. Mirebeau, C. Decorse, B. Canals, and S. Petit, Phys. Rev. B 92, 064425 (2015).
[121] J. Thompson, P. A. McClarty, D. Prabhakaran, I. Cabrera, T. Guidi, and R. Coldea, Physical Review Letters 119, 057203 (2017).
[122] L. Jaubert, O. Benton, J. G. Rau, J. Oitmaa, R. Singh, N. Shannon, and M. J. Gingras, Physical Review Letters 115, 267208 (2015).
[123] K. Arpino, B. Trump, A. Scheie, T. McQueen, and S. Koohpayeh, Physical Review B 95, 094407 (2017).
[124] S. Petit, Proceedings of the National Academy of Sciences 117, 29263 (2020).
[125] Y.-P. Huang, G. Chen, and M. Hermele, Phys. Rev. Lett. 112, 167203 (2014).
[126] R. Sibille, E. Lhotel, V. Pomjakushin, C. Baines, T. Fennell, and M. Kenzelmann, Phys. Rev. Lett. 115, 097202 (2015).
[127] Y.-D. Li and G. Chen, Phys. Rev. B 95, 041106 (2017).
[128] J. Gaudet, E. M. Smith, J. Dudemaine, J. Beare, C. R. C. Buhariwalla, N. P. Butch, M. B. Stone, A. I. Kolesnikov, G. Xu, D. R. Yahne, K. A. Ross, C. A. Marjerrison, J. D. Garrett, G. M. Luke, A. D. Bianchi, and B. D. Gaulin, Phys. Rev. Lett. 122, 187201 (2019).
[129] B. Gao, T. Chen, D. W. Tam, C.-L. Huang, K. Sasmal, D. T. Adroja, F. Ye, H. Cao, G. Sala, M. B. Stone, et al., Nature Physics 15, 1052 (2019).
[130] B. Gao, T. Chen, H. Yan, C. Duan, C.-L. Huang, X. P. Yao, F. Ye, C. Balz, J. R. Stewart, K. Nakajima, S. Ohira-Kawamura, G. Xu, X. Xu, S.-W. Cheong, E. Morosan, A. H. Nevidomskyy, G. Chen, and P. Dai, Phys. Rev. B 106, 094425 (2022).
[131] E. M. Smith, O. Benton, D. R. Yahne, B. Placke, R. Schäfer, J. Gaudet, J. Dudemaine, A. Fitterman, J. Beare, A. R. Wildes, S. Bhattacharya, T. DeLazzer, C. R. C. Buhariwalla, N. P. Butch, R. Movshovich, J. D. Garrett, C. A. Marjerrison, J. P. Clancy, E. Kermarrec, G. M. Luke, A. D. Bianchi, K. A. Ross, and B. D. Gaulin, Phys. Rev. X 12, 021015 (2022).
[132] A. Bhardwaj, S. Zhang, H. Yan, R. Moessner, A. H. Nevidomskyy, and H. J. Changlani, npj Quantum Materials 7, 1 (2022).
[133] E. M. Smith, J. Dudemaine, B. Placke, R. Schäfer, D. R. Yahne, T. DeLazzer, A. Fitterman, J. Beare, J. Gaudet, C. R. C. Buhariwalla, A. Podlesnyak, G. Xu, J. P. Clancy, R. Movshovich, G. M. Luke, K. A. Ross, R. Moessner, O. Benton, A. D. Bianchi, and B. D. Gaulin, Phys. Rev. B 108, 054438 (2023).
[134] J. Beare, E. M. Smith, J. Dudemaine, R. Schäfer, M. R. Rutherford, S. Sharma, A. Fitterman, C. A. Marjerrison, T. J. Williams, A. A. Aczel, S. R. Dunsiger, A. D. Bianchi, B. D. Gaulin, and G. M. Luke, Phys. Rev. B 108, 174411 (2023).
[135] B. Gao, F. Desrochers, D. W. Tam, D. M. Kirschbaum, P. Steffens, A. Hiess, D. H. Nguyen, Y. Su, S.-W. Cheong, S. Paschen, Y. B. Kim, and P. Dai, Nature Physics 21, 1203 (2025).
[136] E. Smith, R. Schäfer, J. Dudemaine, B. Placke, B. Yuan, Z. Morgan, F. Ye, R. Moessner, O. Benton, A. Bianchi, et al., Physical Review X 15, 021033 (2025).
[137] R. Sibille, N. Gauthier, E. Lhotel, V. Porée, V. Pomjakushin, R. A. Ewings, T. G. Perring, J. Ollivier, A. Wildes, C. Ritter, et al., Nature Physics 16, 546 (2020).
[138] D. R. Yahne, B. Placke, R. Schäfer, O. Benton, R. Moessner, M. Powell, J. W. Kolis, C. M. Pasco, A. F. May, M. D. Frontzek, E. M. Smith, B. D. Gaulin, S. Calder, and K. A. Ross, Phys. Rev. X 14, 011005 (2024).
[139] V. Porée, H. Yan, F. Desrochers, S. Petit, E. Lhotel, M. Appel, J. Ollivier, Y. B. Kim, A. H. Nevidomskyy, and R. Sibille, Nature Physics 21, 83 (2025).
[140] V. Porée, E. Lhotel, S. Petit, A. Krajewska, P. Puphal, A. H. Clark, V. Pomjakushin, H. C. Walker, N. Gauthier, D. J. Gawryluk, and R. Sibille, Phys. Rev. Mater. 6, 044406 (2022).
[141] V. Porée, A. Bhardwaj, E. Lhotel, S. Petit, N. Gauthier, H. Yan, V. Pomjakushin, J. Ollivier, J. A. Quilliam, A. H. Nevidomskyy, H. J. Changlani, and R. Sibille, "Dipolar-octupolar correlations and hierarchy of exchange interactions in Ce2Hf2O7," (2025), arXiv:2305.08261 [cond-mat.str-el].
[142] E. Smith, A. Fitterman, R. Schäfer, B. Placke, A. Woods, S. Lee, S.-Y. Huang, J. Beare, S. Sharma, D. Chatterjee, et al., Physical Review Letters 135, 086702 (2025).
[143] E. Kermarrec, G. Chen, H. Okamoto, C.-J. Huang, H. Yan, J. Yan, H. Takeda, Y. Shimizu, E. M. Smith, A. Fitterman, et al., arXiv preprint arXiv:2509.09189 (2025).
[144] E. M. Smith, E. Lhotel, S. Petit, and B. D. Gaulin, Annual Review of Condensed Matter Physics 16 (2024).
[145] S. Lee, S. Onoda, and L. Balents, Phys. Rev. B 86, 104412 (2012).
[146] L. Savary and L. Balents, in Spin Ice (Springer, 2021) pp. 239–271.
[147] G. Chen, Phys. Rev. B 96, 085136 (2017).
[148] X.-P. Yao, Y.-D. Li, and G. Chen, Phys. Rev. Res. 2, 013334 (2020).
[149] F. Desrochers, L. E. Chern, and Y. B. Kim, Phys. Rev. B 107, 064404 (2023).
[150] F. Desrochers and Y. B. Kim, Phys. Rev. Lett. 132, 066502 (2024).
[151] F. Alet, S. Wessel, and M. Troyer, Phys. Rev. E 71, 036706 (2005).
[152] A. W. Sandvik, in AIP Conference Proceedings, Vol. 1297 (AIP, 2010) pp. 135–338.
[153] A. W. Sandvik, Phys. Rev. E 68, 056701 (2003).
[154] O. F. Syljuåsen and A. W. Sandvik, Phys. Rev. E 66, 046701 (2002).
[155] The Supplemental Material details our multi-directed-loop QMC algorithm for efficient sampling of the QSI model, the exact diagonalization and the gauge field theory calculations, and the computation of the neutron scattering cross section in the experimental setting.
[156] S. Sugiura and A. Shimizu, Phys. Rev. Lett. 108, 240401 (2012).
[157] S. Sugiura and A. Shimizu, Phys. Rev. Lett. 111, 010401 (2013).
[158] L. Savary and L. Balents, Phys. Rev. Lett. 108, 037202 (2012).
[159] L. Savary and L. Balents, Phys. Rev. B 87, 205130 (2013).
[160] Z. Zhou, F. Desrochers, and Y. B. Kim, Phys. Rev. B 110, 174441 (2024).
[161] Z. Zhou and Y. B. Kim, Physical Review B 112, L060407 (2025).
[162] A. S. Patri, M. Hosoi, and Y. B. Kim, Phys. Rev. Res. 2, 023253 (2020).
[163] O. Benton, Phys. Rev. B 102, 104408 (2020).
[164] L. E. Chern, F. Desrochers, Y. B. Kim, and C. Castelnovo, Phys. Rev. B 109, 184421 (2024).
[165] B. Halperin, T. Lubensky, and S.-k. Ma, Physical Review Letters 32, 292 (1974).
[166] I. Makhfudz, Physical Review B 89, 024401 (2014).
[167] P. Hyllus, W. Laskowski, R. Krischek, C. Schwemmer, W. Wieczorek, H. Weinfurter, L. Pezzé, and A. Smerzi, Phys. Rev. A 85, 022321 (2012).
[168] J. Liu, J. Chen, X.-X. Jing, and X. Wang, Journal of Physics A: Mathematical and Theoretical 49, 275302 (2016).
[169] A. W. Sandvik, Phys. Rev. B 57, 10287 (1998).
[170] K. S. D. Beach, arXiv e-prints, cond-mat/0403055 (2004), arXiv:cond-mat/0403055 [cond-mat.str-el].
[171] O. F. Syljuåsen, Phys. Rev. B 78, 174429 (2008).
[172] A. W. Sandvik, Phys. Rev. E 94, 063308 (2016).
[173] H. Shao and A. W. Sandvik, Phys. Rep. 1003, 1 (2023).
[174] H. Shao, Y. Q. Qin, S. Capponi, S. Chesi, Z. Y. Meng, and A. W. Sandvik, Phys. Rev. X 7, 041072 (2017).
[175] C. Zhou, Z. Yan, H.-Q. Wu, K. Sun, O. A. Starykh, and Z. Y. Meng, Phys. Rev. Lett. 126, 227201 (2021).
[176] G.-Y. Sun, Y.-C. Wang, C. Fang, Y. Qi, M. Cheng, and Z. Y. Meng, Phys. Rev. Lett. 121, 077201 (2018).
[177] N. Ma, G.-Y. Sun, Y.-Z. You, C. Xu, A. Vishwanath, A. W. Sandvik, and Z. Y. Meng, Phys. Rev. B 98, 174421 (2018).
[178] Y.-C. Wang, M. Cheng, W. Witczak-Krempa, and Z. Y. Meng, Nature Communications 12, 5347 (2021).
[179] C. Chen, U. F. P. Seifert, K. Feng, O. A. Starykh, L. Balents, and Z. Y. Meng, arXiv e-prints, arXiv:2508.08528 (2025), arXiv:2508.08528 [cond-mat.str-el].
[180] A. Bhardwaj, V. Porée, H. Yan, N. Gauthier, E. Lhotel, S. Petit, J. A. Quilliam, A. H. Nevidomskyy, R. Sibille, and H. J. Changlani, Physical Review B 111, 155137 (2025).
[181] J. G. Rau and M. J. Gingras, Annual Review of Condensed Matter Physics 10, 357 (2019).
[182] Beijing PARATERA Tech Co., Ltd.
[183] H. Suwa and S. Todo, Phys. Rev. Lett. 105, 120603 (2010).
[184] M. Hermele, M. P. A. Fisher, and L. Balents, Phys. Rev. B 69, 064404 (2004).
[185] Y.-C. Wang, X.-F. Zhang, F. Pollmann, M. Cheng, and Z. Y. Meng, Phys. Rev. Lett. 121, 057202 (2018).
[186] J. Liu, H. Yuan, X.-M. Lu, and X. Wang, Journal of Physics A: Mathematical and Theoretical 53, 023001 (2019).
[187] L. J. Fiderer, T. Tufarelli, S. Piano, and G. Adesso, PRX Quantum 2, 020308 (2021).
[188] D. Šafránek, Phys. Rev. A 97, 042322 (2018).

Supplemental Material for "Quantum Fisher Information as a Thermal and Dynamical Probe in Frustrated Magnets: Insights from Quantum Spin Ice"

The Supplemental Material provides details on quantum Monte Carlo simulations using a multi-directed loop (MDL) update scheme, specifically designed for quantum spin ice systems, as well as on the measurement of quantum Fisher information and other physical observables therein. It also includes detailed benchmarking between QMC, exact diagonalization, and gauge mean-field theory calculations. We also discuss the conversion of local-frame QFI data to that obtained from the neutron scattering cross section of dipolar-octupolar QSI material candidates in the global frame.

I. MULTI-DIRECTED LOOP QMC SCHEME FOR THE QUANTUM SPIN ICE MODEL

In this section, we briefly introduce the stochastic series expansion quantum Monte Carlo (SSE-QMC) simulation on the pyrochlore lattice [151, 152, 154]. The basic idea is to construct the configuration space by expanding the partition function, Z = Tr(e^{-βH}), as a Taylor series into an operator string. We then sample this configuration space by proposing updates to the operator string which fulfill the balance condition [183],

\sum_i W(A_i)\, P(A_i \to B) = W(B),

or even the detailed balance condition,

W(A)\, P(A \to B) = W(B)\, P(B \to A).

Here, A_i are all possible configurations that can be updated to configuration B, W(A) is the weight of configuration A, and P(A → B) is the transition probability from configuration A to B.
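Both conditions are straightforward to verify numerically. The following minimal Python sketch (a toy illustration, not the simulation code used in this work) builds a Metropolis transition matrix from four arbitrary placeholder configuration weights and checks detailed balance and stationarity:

```python
import numpy as np

# Toy check of the (detailed) balance condition for a Metropolis rule with a
# symmetric, uniform proposal distribution over a 4-configuration space.
rng = np.random.default_rng(0)
W = rng.random(4) + 0.1            # positive configuration weights (placeholders)
n = len(W)

P = np.zeros((n, n))
for a in range(n):
    for b in range(n):
        if a != b:
            # propose uniformly among the other configurations, then accept
            # with the Metropolis probability min(1, W(B)/W(A))
            P[a, b] = (1.0 / (n - 1)) * min(1.0, W[b] / W[a])
    P[a, a] = 1.0 - P[a].sum()     # rejected proposals stay put

flow = W[:, None] * P              # flow[a, b] = W(A) P(A -> B)
assert np.allclose(flow, flow.T)   # detailed balance: W(A)P(A->B) = W(B)P(B->A)
assert np.allclose(W @ P, W)       # balance: sum_A W(A) P(A->B) = W(B)
```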
To do so, one of the widely used methods is the directed loop (DL) update algorithm.

Within the SSE-QMC framework, the DL update algorithm is a highly efficient method for simulating quantum spin systems: by inserting a pair of operators (e.g., S^+_i and S^-_i), the configuration temporarily moves into an extended configuration space in which these two operators evolve, and it returns to the original Hilbert space when the two operators meet and annihilate. However, on the pyrochlore lattice within the spin-ice regime (0 < J± < 0.052), the DL update fails to drive the system into the quantum spin ice (QSI) regime. These results are plotted as orange points in Fig. S2(a–c). This failure can be understood from the perspective that first-order perturbations vanish, while third-order processes persist in the spin-ice regime [184]. Consequently, the simplest quantum fluctuation that connects different classical spin-ice states involves six spin flips around a hexagon, which is challenging to realize by inserting only a single operator pair. To address this problem, we develop a multi-directed loop (MDL) update that allows the insertion of multiple operator pairs, enabling efficient sampling in the QSI regime.

In our simulation, we first decompose the Hamiltonian into a sum of tetrahedron operators (see Fig. S1(a)), which is given by

H = \sum_t H_{t,0} + \sum_t H_{t,1}, \qquad H_{t,0} = J_z \sum_{\langle i,j \rangle \in t} S^z_i S^z_j + H_{c,0}, \qquad H_{t,1} = -J_{\pm} \sum_{\langle i,j \rangle \in t} \left( S^+_i S^-_j + S^-_i S^+_j \right).   (S1)

Here, t denotes a tetrahedron in the pyrochlore lattice and ⟨i, j⟩ ∈ t represents the nearest-neighbor pairs within the tetrahedron t. To make all matrix elements non-negative, we introduce a constant shift H_{c,0} = 3J_z/2 to the diagonal operator. Therefore, for a given tetrahedron state |α_i⟩ = |S^z_1, S^z_2, S^z_3, S^z_4⟩, the matrix elements of the operators are w_{i,j,x} = ⟨α_j|H_{t,x}|α_i⟩ with x = 0, 1, and the weight of a given operator string is

W = \frac{\beta^n (L-n)!}{L!} \prod_{p=1}^{n} w_{i_p, j_p, x},   (S2)

where L refers to the cutoff of the expansion and n is the number of operators in the string. In practice, each H_{t,x} is represented by an 8-leg vertex, as illustrated in Fig. S1(b). We employ the diagonal update, the DL update, and the MDL update to sample the configuration space efficiently. We also apply a thermal annealing process from inverse temperature β = 1 to β = 1000 with step δβ = 1. At each β = 1/T, we perform 10,000 steps of warming up and 10,000 steps of measurement. The details of these updates are as follows.

FIG. S1. Schematic diagram of (a) a tetrahedron in the pyrochlore lattice and (b) the corresponding four-leg operator vertex in the sampling space. (c) Illustration of the single-loop update, (d) bi-loop update, and (e) tri-loop update in the multi-directed loop algorithm.

1. Diagonal update

In the diagonal update, we follow the detailed balance condition and apply the Metropolis algorithm. We sweep through the operator string and attempt to insert or remove diagonal operators H_{t,0} at each time slice. In the insertion process, we randomly select a tetrahedron t and propose to insert the diagonal operator H_{t,0}. In the removal process, each diagonal operator is proposed to be removed.
Therefore, the acceptance probability for insertion is given by

P^{\text{insert}}_{\text{accept}} = \min\left[ 1, \frac{N_t \beta\, w_{i,i,0}}{L - n} \right],   (S3)

and that for removal is

P^{\text{remove}}_{\text{accept}} = \min\left[ 1, \frac{L - n + 1}{N_t \beta\, w_{i,i,0}} \right],   (S4)

where w_{i,i,0} denotes the weight of H_{t,0} at this time slice and N_t represents the total number of tetrahedra.

2. Directed loop update

In the DL update, we follow the balance condition and utilize the update scheme proposed in Ref. [183], which, to the best of our knowledge, achieves the most efficient sampling [176, 185]. In practice, we first randomly select a vertex and one of its eight legs to insert a pair of operators, S^+_i and S^-_i. The selection probability is P_select = 1/(N_v × 8), where N_v is the total number of vertices in the given operator string. Then, we choose one of this pair of operators to be the head, while the other operator becomes the tail. The head then propagates through the operator string by entering and exiting vertices until it meets the tail again, at which point the loop is closed. The propagation process is illustrated in Fig. S1(c). When the head enters a vertex through one leg (l_a), the exiting leg (l_b) is chosen based on a set of exit probabilities P_exit, which update the operator from w_a to w_b. The exiting leg is selected according to these probabilities so that the balance condition (or the detailed balance condition) is satisfied.

In our simulation, the exit probabilities P_exit are determined by following the balance condition and utilizing the algorithm proposed in Ref. [183] to minimize the average rejection rate, i.e., the rate at which the head exits from the entrance leg without changing the vertex. The exit probabilities are given by

P_{\text{exit}}(l_a \to l_b) = \frac{1}{w_a} \max\left[ 0, \min\left[ \Delta_{ab},\; w_a + w_b - \Delta_{ab},\; w_a,\; w_b \right] \right], \qquad \Delta_{ab} = S_a - S_{b-1} + w_1, \quad 1 \le a, b \le n, \qquad S_i = \sum_{k=1}^{i} w_k, \quad 1 \le i \le n, \quad S_0 = S_n.   (S5)

Here, n = 8 is the total number of candidate exit legs. In practice, we enumerate all nonzero vertices and iterate over all possible entrance legs to calculate and store the corresponding exit probabilities. For each vertex and entrance leg, we identify all candidate exit legs (n in total) and their associated weights w_i, which are sorted in descending order. The sums {S_i} are then computed for these candidates and used to determine the exit probabilities P_exit(l_a → l_i). All results are precomputed and stored before the simulation; during the simulation, the exiting leg l_b is selected according to these stored probabilities. This update method achieves highly efficient sampling in various quantum spin systems, including the Balents–Fisher–Girvin model [176, 185]. However, in the spin-ice regime, the DL update still fails to drive the system into the QSI phase, as shown by the orange points in Fig. S2(a–c).

3. Multi-directed loop update

Inspired by the fact that the simplest quantum fluctuation connecting different classical spin-ice states involves six spin flips around a hexagon, we develop an MDL update that allows the insertion of multiple operator pairs, enabling efficient access to the QSI regime. In this update method, we first select a hexagon in the operator string and insert three pairs of operators, S^+_i and S^-_i, on three alternate sites of the hexagon. Then, for each pair of operators, we randomly choose one operator to be the head, while the other operator becomes the tail.
The three heads then propagate through the operator string by entering and exiting vertices until all of the heads meet their tails again at the same propagation step, at which point all the loops are closed. Different from the single-loop update, the propagation process of the three heads is more complicated, including the cases where one head enters and exits a vertex (single-loop update, Fig. S1(c)), two heads enter and exit a vertex (bi-loop update, Fig. S1(d)), and three heads enter and exit a vertex (tri-loop update, Fig. S1(e)).

To show that the MDL update fulfills the detailed balance condition, we follow the worm–antiworm construction principle [151]. In the MDL update, the transition probability from configuration A to B is given by

P(A \to B) = P_{\text{insert}} \prod_{i=1}^{N_m} \prod_{j=1}^{n_v} P_{\text{prop}}(v_{i,j} \to v_{i+1,j}),   (S6)

where P_insert is the probability of inserting the multiple operator pairs and N_m is the number of propagation steps taken by these operator pairs before all loops are closed. Here n_v is the number of vertices visited by the heads at a given step (for example, two heads entering one vertex as in Fig. S1(d), or three heads as in Fig. S1(e)), and P_prop(v_{i,j} → v_{i+1,j}) is the propagation probability for updating the j-th visited vertex from v_{i,j} to v_{i+1,j}, corresponding to the choice of exit legs for all heads visiting this vertex. This propagation probability is determined by listing all candidate vertices that can be updated from v_{i,j} and solving for the probabilities that satisfy the detailed balance condition, P_prop(v_{i,j} → v_{i+1,j}) W(v_{i,j}) = P_prop(v_{i+1,j} → v_{i,j}) W(v_{i+1,j}). The transition probability for the antiworm process is given by

P(B \to A) = P'_{\text{insert}} \prod_{i=1}^{N_m} \prod_{j=1}^{n_v} P_{\text{prop}}(v_{N_m-i+1,j} \to v_{N_m-i,j});   (S7)

therefore, we have

\frac{P(A \to B)}{P(B \to A)} = \frac{P_{\text{insert}}}{P'_{\text{insert}}} \prod_{i=1}^{N_m} \prod_{j=1}^{n_v} \frac{W(v_{i+1,j})}{W(v_{i,j})} = \frac{P_{\text{insert}}}{P'_{\text{insert}}}\, \frac{W(B)}{W(A)}.   (S8)

The MDL update is completed only when all three heads meet a different tail at one propagation step, ensuring that the inverse process is identical to the forward process and that the insertion probabilities satisfy P_insert/P'_insert = 1. Therefore, the MDL update fulfills the detailed balance condition. Note that if one of the heads meets its tail and closes the corresponding loop while the other heads continue to propagate, the inverse process differs from the forward process, and the corresponding update violates the detailed balance condition.

In practice, before the simulation we enumerate all nonzero vertices and iterate over all possible entrance cases, including the single-, bi-, and tri-loop cases (see Fig. S1(c–e)), to calculate and store the propagation probability for each exit case. For each vertex, entrance head number, and entrance leg, we identify the propagation probability for each candidate exit-leg case via Eq. (S5). All probabilities are precomputed and stored prior to the simulation (a minimal sketch of this precomputation is given at the end of this subsection).

It is worth mentioning that the MDL update enlarges the intermediate extended configuration space compared to the DL update, which is beneficial for sampling the QSI regime. However, it also increases the computational cost per update. Assuming that the heads appear on each vertex leg with equal probability after a long propagation, the probability of the heads meeting the tails and closing the loops is 3!/(4N_v)³, with N_v ∼ L³β, where L refers to the system size. In contrast, this probability is 1/(4N_v) for the DL update. Therefore, it is harder to close the loops in the MDL update, and the computational cost per update of the MDL is roughly (L³β)² times that of the DL update. This cost difference becomes even more significant in the FM phase. Therefore, we utilize both the MDL and DL updates to simulate the system in the QSI regime (J± = 0.04, 0.045, 0.05), while using only the DL update in the FM regime (J± = 0.06, 0.07, 0.08, 0.09, 0.10).
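To illustrate the precomputed exit tables shared by the DL and MDL updates, the following minimal Python sketch (not the production code of this work) implements the exit probabilities of Eq. (S5) for a single entrance leg; the four candidate weights are arbitrary placeholders, and the bookkeeping that maps the descending-sorted order back to physical legs is omitted:

```python
import numpy as np

def exit_probabilities(weights):
    """Exit-probability table following Eq. (S5) (Suwa-Todo construction).

    `weights` are the positive weights w_1 >= w_2 >= ... >= w_n of the vertex
    configurations reachable through each candidate exit leg. Returns the
    matrix P[a, b] = P_exit(l_a -> l_b)."""
    w = np.sort(np.asarray(weights, dtype=float))[::-1]  # descending order
    n = len(w)
    S = np.cumsum(w)                                     # S_i = w_1 + ... + w_i
    P = np.zeros((n, n))
    for a in range(1, n + 1):
        for b in range(1, n + 1):
            S_bm1 = S[b - 2] if b >= 2 else S[-1]        # convention S_0 := S_n
            delta = S[a - 1] - S_bm1 + w[0]              # Delta_ab
            flow = max(0.0, min(delta, w[a - 1] + w[b - 1] - delta,
                                w[a - 1], w[b - 1]))
            P[a - 1, b - 1] = flow / w[a - 1]
    return P

w = np.array([1.0, 0.7, 0.4, 0.2])                       # placeholder weights
P = exit_probabilities(w)
assert np.allclose(P.sum(axis=1), 1.0)   # exit probabilities are normalized
# Balance condition (not detailed balance): sum_a w_a P(a->b) = w_b.
assert np.allclose((w[:, None] * P).sum(axis=0), w)
```

Note that this construction satisfies the weaker balance condition rather than detailed balance, which is exactly what allows it to suppress the average rejection (bounce) rate.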
4. Measurement

Here, we briefly introduce the measurement of physical observables in our simulation. We first measure the energy per site e and compare the results obtained with and without MDL updates in the QSI regime, as shown in Fig. S2(a–c) for J± ranging from 0.04 to 0.05. The orange points correspond to the case without the MDL update, while the blue points are with the MDL update. In all three cases, the MDL update successfully drives the system into the QSI regime at low temperature, as evidenced by a clear energy decrease, while the DL update does not. Specifically, in Fig. S2(b), a distinct energy drop from −0.2547 to −0.2550 at T ∼ 0.0011 is observed in the MDL update results.

FIG. S2. The energy per site e as a function of temperature T at (a) J± = 0.04, (b) J± = 0.045, (c) J± = 0.05, and (d) J± ranging from 0.06 to 0.10 with an increment of 0.01. In (a–c), the blue points are obtained from the SSE-QMC simulation with the MDL update, and the orange lines are without the MDL update. In (d), all points are obtained from the SSE-QMC simulation without the MDL update.

This small energy drop corresponds to the crossover from classical spin ice (CSI) to QSI and is consistent with the theoretical prediction T ∼ 12|J±|³ ≈ 0.0011 and previous calculations [89]. In contrast, the DL update results remain around −0.2547 with only a slight decreasing tendency as the temperature decreases, indicating that the DL update fails to sample the QSI regime efficiently and becomes trapped in a local minimum. The energy density in the FM phase is shown in Fig. S2(d) for J± ranging from 0.06 to 0.10; in this regime, the DL update efficiently samples the configuration space and drives the system into the FM phase at low temperature.

Then, we measure the imaginary-time spin-spin correlation functions,

G^{\pm} = \frac{1}{2N} \sum_{\gamma,\nu} \left\langle S^+_{-q,\gamma}(\tau) S^-_{q,\nu}(0) + S^-_{-q,\gamma}(\tau) S^+_{q,\nu}(0) \right\rangle, \qquad G^{zz} = \frac{1}{N} \sum_{\gamma,\nu} \left\langle S^z_{-q,\gamma}(\tau) S^z_{q,\nu}(0) \right\rangle,   (S9)

where τ is the imaginary time, γ and ν are sublattice indices, and N is the total number of sites. The G^± correlation is measured by tracing the head's path in the DL update process [175]. The corresponding spectral function is then obtained via the stochastic analytic continuation (SAC) method [85, 173, 175–177]. Finally, the QFI density fQ is calculated by integrating the spectral function according to Eq. (4) in the main text. Here, we focus on the QFI density at Γ = (0, 0, 0), Γ′ = (4π, 4π, 0) and X = (0, 0, 2π) in both the S± and Sz channels. The results for fQ(S±_Γ, T) are shown in Fig. S3: panels (a) and (b) display fQ(S±_Γ, T) for J± in the QSI regime (0.04 to 0.05) and the FM regime (0.06 to 0.10), respectively.
Panels (c) and (d) present heat maps of fQ(S±_Γ, T) and the energy density e as functions of temperature T and J± from 0.04 to 0.10. The two observables agree well, but the crossover signal in the QFI density is more pronounced.

FIG. S3. The QFI density fQ in the S± channel at Γ as a function of temperature T with J± (a) ranging from 0.04 to 0.05 with an increment of 0.005 and (b) ranging from 0.06 to 0.10 with an increment of 0.01. (c) The QFI density heat map as a function of temperature T and J± ranging from 0.04 to 0.10, while (d) is the corresponding heat map of the energy density e.

Fig. S4 presents the QFI density fQ(S±_q, T) at the X and Γ′ points. Panels (a) and (b) show fQ(S±_X, T) and fQ(S±_Γ′, T) in the QSI regime, while panels (c) and (d) display the corresponding results in the FM phase. The heat maps in panels (e) and (f) illustrate fQ(S±_X, T) and fQ(S±_Γ′, T) as functions of temperature T and J± from 0.04 to 0.10. In combination with Fig. S3, we find that fQ(S±_Γ, T) exhibits the strongest signal in the S± channel, while fQ(S±_Γ′, T) captures the critical fluctuations. Therefore, we focus on these two momentum points in the main text and include the results at X in the Supplemental Material for completeness.

FIG. S4. The QFI density fQ in the S± channel (a, c, e) at X and (b, d, f) at Γ′ as a function of temperature T and J±. Panels (a, b) illustrate fQ with J± ranging from 0.04 to 0.05 with an increment of 0.005, while (c, d) show J± ranging from 0.06 to 0.10 with an increment of 0.01. Panels (e, f) are the QFI density heat maps as functions of temperature T and J± ranging from 0.04 to 0.10.

Meanwhile, we also measure the QFI density fQ(Sz_q, T) in the Sz channel, as shown in Fig. S5. Panels (a) and (b) illustrate fQ(Sz_X, T) and fQ(Sz_Γ′, T) in the QSI regime, while panels (c) and (d) show the corresponding results in the FM phase. The heat maps of fQ(Sz_X, T) and fQ(Sz_Γ′, T) as functions of temperature T and J± from 0.04 to 0.10 are presented in panels (e) and (f), respectively. Compared to the S± channel, the QFI density in the Sz channel exhibits a similar trend but with a much smaller value, which is why we include these results only in the Supplemental Material for completeness.

FIG. S5. The QFI density fQ in the Sz channel (a, c, e) at X and (b, d, f) at Γ′ as a function of temperature T and J±. Panels (a, b) illustrate fQ with J± ranging from 0.04 to 0.05 with an increment of 0.005, while (c, d) show J± ranging from 0.06 to 0.10 with an increment of 0.01. Panels (e, f) are the QFI density heat maps as functions of temperature T and J± ranging from 0.04 to 0.10.

II. EXACT DIAGONALIZATION

The exact diagonalization scheme follows that of the microcanonical thermal pure quantum (mTPQ) method [156, 157]. We first construct a state at T → ∞, |ψ_0⟩ = Σ_i c_i|i⟩, such that {c_i} is a set of random complex numbers. Then, for a given Hamiltonian H, the kth TPQ state is constructed iteratively via

|\psi_k\rangle = (L - H)|\psi_{k-1}\rangle,   (S10)

where L is some constant greater than the largest eigenvalue of the Hamiltonian. We can then estimate the energy
and the inverse temperature via

E_k = \frac{\langle \psi_k | H | \psi_k \rangle}{\langle \psi_k | \psi_k \rangle}   (S11)

and

\beta_k = \frac{2k}{L - E_k}   (S12)

for the kth iteration. The idea is that the TPQ state provides an efficient Gaussian sampling near some energy window E_k in the thermodynamic limit [156], and lower-energy states acquire heavier weights as we apply the iteration of Eq. (S10). As such, one can calculate finite-temperature observables simply by evaluating them over the TPQ state instead of taking the thermal average over the partition function (a minimal numerical sketch of this iteration is given below). Furthermore, we can compute dynamical properties by first time-evolving the TPQ states via

|\psi_k(t_n)\rangle = \exp(-iH\Delta t)|\psi_k(t_{n-1})\rangle,   (S13)

where exp(−iH∆t) = Σ_{l=0}^{m} (1/l!)(−iH∆t)^l is approximated by the Taylor expansion of the matrix exponential up to the mth order. The spectral function with respect to some operator O is therefore

\langle O(t) O(0) \rangle_{\beta_k} = \langle \psi_k | e^{iHt} O e^{-iHt} O | \psi_k \rangle = \langle \psi_k(t) | O | \phi_k(t) \rangle,   (S14)

where |\phi_k(t)\rangle = e^{-iHt} O |\psi_k\rangle.
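The following minimal Python sketch illustrates the mTPQ iteration of Eqs. (S10)–(S12) on a small random Hermitian matrix standing in for the cluster Hamiltonian; the dimension, seed, and iteration count are placeholders, not the values used in this work:

```python
import numpy as np

# Toy mTPQ run following Eqs. (S10)-(S12) on a dense random Hamiltonian.
rng = np.random.default_rng(1)
dim = 200
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (A + A.conj().T) / 2                      # Hermitian toy Hamiltonian
# Any constant above the top eigenvalue works for L; we diagonalize here
# only for convenience of the toy example.
Lshift = np.linalg.eigvalsh(H)[-1] + 1.0

# |psi_0>: random complex vector, i.e., an infinite-temperature TPQ state.
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)

for k in range(1, 60):
    psi = Lshift * psi - H @ psi              # |psi_k> = (L - H)|psi_{k-1}>
    psi /= np.linalg.norm(psi)                # normalize for numerical stability
    Ek = np.real(psi.conj() @ H @ psi)        # Eq. (S11)
    beta_k = 2 * k / (Lshift - Ek)            # Eq. (S12)
print(f"E_k = {Ek:.4f} at inverse temperature beta_k = {beta_k:.3f}")
```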
FIG. S6. 16-site cluster setup for the exact diagonalization calculation (sublattices 0–3).

Due to the geometry of the pyrochlore lattice, one symmetry-respecting cluster is a 16-site cluster made up of 4 tetrahedra, as shown in Fig. S6. For the real-time evolution, we find numerical convergence at Taylor expansion order m = 80 and dt = 0.001. It is worth mentioning the limitation of the TPQ method here: the finite-size effect incurred by using the mTPQ method is not negligible, given the relatively small cluster size of 16. The literature roughly states that N ≥ 20 [156, 157] is a numerically stable size beyond which the dependence on the lattice size N becomes insignificant. However, increasing the lattice size in this case would be intractable, as the next symmetry-respecting cluster would have 32 sites. We combat this numerical error by averaging over multiple samples; we present data averaged over four samples in the main text.

FIG. S7. The ED calculation results for the QFI density fQ (a–c) in the S± channel and (d, e) in the Sz channel as a function of temperature T and J±. (a) shows fQ at Γ = (0, 0, 0), while (b) is at X and (c) at Γ′ = (4π, 4π, 0). (d) shows fQ at X, while (e) is at Γ′ = (4π, 4π, 0).

Besides the results shown in the main text, we also include the S± channel at the X point as well as the Sz channel in Fig. S7. We can clearly observe the delineation of the two temperature scales from the computed QFI values, as discussed in the main text. Furthermore, we would like to highlight the increasing magnitude of fQ(Sz_q) as we move towards the Heisenberg point J± = −0.5. The QFI also encapsulates the total fluctuation generated by e^{iθO} via some operator O. In the U(1) lattice gauge theory description, Sz ∼ E, the canonical electric field. Therefore, at the lowest temperature, fQ(Sz_q) ∼ ⟨EE⟩ can be thought of as a measure of the total gauge fluctuation.

III. GAUGE MEAN FIELD THEORY

We now provide a detailed account of the gauge mean-field theory (GMFT) formalism, which offers a canonical mapping of the spin Hamiltonian to that of a lattice U(1) gauge theory coupled to a bosonic matter field. To keep the presentation transparent, let us first focus on the regime where the coupling Jz is the dominant interaction, i.e., |Jz| > |Jxx| = |Jyy|. In this formulation, one introduces a slave "charge" degree of freedom on the sites of the parent diamond lattice, defined as

Q_{r_\alpha} = \sum_{\mu \in \partial t_{r_\alpha}} S^z_{R_\mu},   (S15)

where t_{r_α} denotes the tetrahedron centered at r_α, ∂t_{r_α} denotes the four pyrochlore spins forming it, and µ denotes the sublattice index. The index α ∈ {A, B} distinguishes the two diamond sublattices (up- and down-pointing tetrahedra). Notice that we make an explicit distinction between the sublattice-indexed spin coordinates, denoted by R_µ, where µ ∈ {0, 1, 2, 3} represents the sublattice index, and the parent diamond lattice coordinates, denoted by the lowercase r_α. These two coordinate systems are related by

R_\mu = r_\alpha + \eta_\alpha b_\mu / 2,   (S16)

where b_µ connects A-sublattice sites to their four nearest B-sublattice neighbors, η_A = 1 and η_B = −1, and

b_0 = -\tfrac{1}{4}(1, 1, 1),   (S17a)
b_1 = \tfrac{1}{4}(-1, 1, 1),   (S17b)
b_2 = \tfrac{1}{4}(1, -1, 1),   (S17c)
b_3 = \tfrac{1}{4}(1, 1, -1).   (S17d)

By canonical construction, the conjugate variable φ_{r_α} obeys [φ_{r_α}, Q_{r_α′}] = i δ_{r_α r_α′}, allowing one to define bosonic (spinon) raising and lowering operators

\Phi^\dagger_{r_\alpha} = e^{i\varphi_{r_\alpha}}, \qquad \Phi_{r_\alpha} = e^{-i\varphi_{r_\alpha}}.   (S18)

The original pseudospin operators can now be re-expressed in the enlarged Hilbert space H = H_Q ⊗ H_spin, where the mapping is given by

S^+_{R_\mu} \to \Phi^\dagger_{r_A}\, \tfrac{1}{2} e^{i A_{r_A, r_A+b_\mu}}\, \Phi_{r_A+b_\mu},   (S19)
S^z_{R_\mu} \to E_{r_A, r_A+b_\mu},   (S20)

with A_{r_A, r_A+b_µ} and E_{r_A, r_A+b_µ} denoting conjugate gauge and electric fields on the diamond links. Physically, this construction makes explicit the emergent gauge structure inherent in the spin-ice manifold: spin flips map to spinon matter hopping minimally coupled to compact U(1) gauge fields. The resulting Hamiltonian contains quadratic spinon charge terms and spinon hopping terms (J±), yielding

H = \frac{J_z}{2} \sum_{r_\alpha} Q^2_{r_\alpha} - \frac{J_\pm}{4} \sum_{r_\alpha} \sum_{\mu, \nu \neq \mu} \Phi^\dagger_{r_\alpha+\eta_\alpha b_\mu} \Phi_{r_\alpha+\eta_\alpha b_\nu}\, e^{i\eta_\alpha \left( A_{r_\alpha, r_\alpha+\eta_\alpha b_\nu} - A_{r_\alpha, r_\alpha+\eta_\alpha b_\mu} \right)},   (S21)

where J± = −(Jxx + Jyy)/4. At this stage, we adopt two crucial approximations: (i) the electric field E is integrated out, leaving behind purely bosonic matter fields coupled to static background fluxes; and (ii) the gauge field A is frozen to its mean-field value Ā, thereby neglecting dynamical gauge fluctuations.
These simplifications transform the model into a tractable quadratic bosonic theory, whose self-consistent solution captures the stability of the U(1) quantum spin liquid. Evaluating the Hamiltonian on the spin-ice manifold, where Q = 0, we obtain

H = -\frac{J_\pm}{4} \sum_{r_\alpha} \sum_{\mu, \nu \neq \mu} \Phi^\dagger_{r_\alpha+\eta_\alpha b_\mu} \Phi_{r_\alpha+\eta_\alpha b_\nu}\, e^{i\eta_\alpha \left( \bar{A}_{r_\alpha, r_\alpha+\eta_\alpha b_\nu} - \bar{A}_{r_\alpha, r_\alpha+\eta_\alpha b_\mu} \right)}.   (S22)

To compute the QFI, we need to calculate the dynamical spin susceptibility. Under the GMFT formalism, since the photon mode is integrated out, we only have access to the spinon dynamics; in other words, we only have access to ⟨S+S−⟩ ∼ ⟨Φ†ΦΦ†Φ⟩. In particular, we can compute the spin structure factor via

S^{+-}_{\mu\nu}(q, \omega) = \frac{1}{N} \sum_{R_\mu, R'_\nu} e^{iq\cdot(R_\mu - R'_\nu)} \int dt\, e^{i\omega t} \langle S^+_{R_\mu}(t) S^-_{R'_\nu}(0) \rangle = \frac{1}{N} \sum_{r_A, r'_A} \sum_{\mu,\nu} e^{iq\cdot(r_A - r'_A + (b_\mu - b_\nu)/2)} \int dt\, e^{i\omega t}\, \frac{1}{4} \left\langle \Phi^\dagger_{r_A}(t)\, e^{iA_{r_A, r_A+b_\mu}}\, \Phi_{r_A+b_\mu}(t)\, \Phi^\dagger_{r'_A+b_\nu}(0)\, e^{-iA_{r'_A, r'_A+b_\nu}}\, \Phi_{r'_A}(0) \right\rangle,   (S23)

where N is the number of spins. We can then evaluate this using Wick contractions. Within the XXZ model, it turns out that the only non-trivial term is

S^{+-}_{\mu\nu}(q, \omega) = \frac{1}{N} \sum_{r_A, r'_A} \sum_{\mu,\nu} e^{iq\cdot(r_A - r'_A + (b_\mu - b_\nu)/2)} \int dt\, e^{i\omega t}\, \frac{F_{\mu\nu}}{4} \left\langle \Phi_{r_A+b_\mu}(t) \Phi^\dagger_{r'_A+b_\nu}(0) \right\rangle \left\langle \Phi^\dagger_{r_A}(t) \Phi_{r'_A}(0) \right\rangle,   (S24)

where F_{µν} = e^{i(Ā_{r_A, r_A+b_ν} − Ā_{r_A, r_A+b_µ})}. In other words, we can compute the dynamical spin susceptibility by evaluating the Green's functions of the spinons. We can then integrate this to find the QFI of the spinon channel at zero temperature.

We show the QFI computed from GMFT at zero temperature in Fig. S8. Despite the drastic assumptions made in the theory, the GMFT results align quite well with the QMC results (up to a factor of approximately 6/7).

FIG. S8. The GMFT calculation results for the QFI density fQ in the S± channel. Panel (a) shows the comparison between GMFT results (with a factor of 6/7) for various system sizes from L = 3 to L = 25 at J± = 0.045, and the QMC results for L = 3 and L = 4. Panel (b) shows the QFI density as a function of J± at zero temperature in the S± channel at Γ, Γ′, and X.

IV. NEUTRON SCATTERING CROSS SECTION OF DIPOLAR-OCTUPOLAR MATERIAL CANDIDATES

The neutron scattering cross section is obtained via Fermi's golden rule; after tracing over all the sample spin degrees of freedom, we obtain

\frac{d^2\sigma}{d\Omega\, d\omega} \propto \frac{k_f}{k_i} \sum_{\alpha\beta} \left( \delta_{\alpha\beta} - \hat{q}_\alpha \cdot \hat{q}_\beta \right) \mathcal{M}^{\alpha\beta}(q, \omega),   (S25)

where k_f (k_i) is the norm of the momentum of the outgoing (incoming) neutron (in what follows, we drop the kinetic prefactor by assuming k_f/k_i ≈ 1 for the experiments of interest), and

\mathcal{M}^{\alpha\beta}(q, \omega) = \frac{1}{2\pi} \int dt\, \langle M^\alpha(-q, t) M^\beta(q, 0) \rangle\, e^{i\omega t}.   (S26)

Here M^α(q, ω) is the Fourier-transformed magnetization density operator. In the case of DO pyrochlore compounds, the lowest-lying crystal electric field doublet |±⟩ can be modeled using pseudospin operators τ^z = (1/2)(|+⟩⟨+| − |−⟩⟨−|) and τ^± = |±⟩⟨∓| that transform non-trivially under point-group operations [125, 181]. In the case of cerium-based candidates, only the pseudospin component τ^z has a dipolar magnetic charge density and couples linearly to the magnetic field (e.g., |±⟩ = |J = 7/2, m_J = ±3/2⟩ for Ce2Zr2O7 [128, 129, 144]). The pseudospin components τ^z are defined with respect to the local z-axis, as determined by crystal electric field analysis.
These local z-axes vary according to the tetrahedron sublattice index and are given by

z_0 = \tfrac{1}{\sqrt{3}}(1, 1, 1),   (S27a)
z_1 = \tfrac{1}{\sqrt{3}}(1, -1, -1),   (S27b)
z_2 = \tfrac{1}{\sqrt{3}}(-1, 1, -1),   (S27c)
z_3 = \tfrac{1}{\sqrt{3}}(-1, -1, 1).   (S27d)

Therefore, in the dipolar approximation, the magnetization density operator is

M^\alpha(q, t) \propto \frac{1}{\sqrt{N}} \sum_{R_\mu} e^{iq\cdot R_\mu}\, \hat{z}^\alpha_\mu\, \tau^z_{R_\mu}(t) = \sum_\mu \hat{z}^\alpha_\mu\, \tau^z_{q,\mu}(t),   (S28)

where we have defined τ^z_{q,µ}(t) = (1/√N) Σ_{R_µ} e^{iq·R_µ} τ^z_{R_µ}(t) via the standard Fourier transform. Plugging this into Eqs. (S25) and (S26),

\frac{d^2\sigma}{d\Omega\, d\omega} \propto \frac{1}{N} \sum_{\alpha\beta\mu\nu} \left( \delta_{\alpha\beta} - \hat{q}_\alpha \cdot \hat{q}_\beta \right) (\hat{z}_\mu)^\alpha (\hat{z}_\nu)^\beta \langle \tau^z_{-q,\mu}(t) \tau^z_{q,\nu}(0) \rangle = \frac{1}{N} \sum_{\mu\nu} \left[ \hat{z}_\mu \cdot \hat{z}_\nu - \frac{(\hat{z}_\mu \cdot Q)(\hat{z}_\nu \cdot Q)}{|Q|^2} \right] \langle \tau^z_{-q,\mu}(t) \tau^z_{q,\nu}(0) \rangle   (S29)
:= A^{DO}(q, t).   (S30)

We can then write out explicitly the transverse projector in a DO pyrochlore system for the ⟨τ^z τ^z⟩ channel as

F_{\mu\nu} := \hat{z}_\mu \cdot \hat{z}_\nu - \frac{(\hat{z}_\mu \cdot Q)(\hat{z}_\nu \cdot Q)}{|Q|^2}.   (S31)

In this work, we want to compute the experimentally relevant QFI for DO compounds, which are generally described by an XYZ Hamiltonian with a finite T_{xz}:

H_{DO} = \sum_{\langle i,j \rangle} \left[ \sum_\alpha T_{\alpha\alpha}\, \tau^\alpha_i \tau^\alpha_j + T_{xz} \left( \tau^x_i \tau^z_j + \tau^z_i \tau^x_j \right) \right] - g\mu_B \sum_i \mathbf{B}_i \cdot \tau^z_i,   (S32)

where α ∈ {x, y, z}. We can remove the T_{xz} term by applying a rotation about the y-axis: τ^y = S̃^y; τ^x = cos(θ) S̃^x − sin(θ) S̃^z; τ^z = sin(θ) S̃^x + cos(θ) S̃^z; tan(2θ) = 2T_{xz}/(T_{xx} − T_{zz}). By doing so, we map this problem to the XYZ model

H_{XYZ} = \sum_{\langle i,j \rangle} \sum_\alpha \tilde{J}_{\alpha\alpha}\, \tilde{S}^\alpha_i \tilde{S}^\alpha_j - g\mu_B \sum_i \mathbf{B}_i \cdot \left( \tilde{S}^z_i \cos\theta + \tilde{S}^x_i \sin\theta \right).   (S33)

Finally, we recover the XXZ model described in Eq. (1) under the simplifying assumption that J̃_{αα} = J̃_{ββ} = −2J± < J̃_{γγ} = Jz, with one dominant component, where α, β, γ label the three pseudospin components. In doing so, we map S^z = S̃^γ and the other two components to x and y in Eq. (1). Let us define

C^{\alpha\beta}_{\mu\nu}(q, \omega) = \frac{1}{2\pi} \int dt\, e^{i\omega t} \langle S^\alpha_{-q,\mu}(t) S^\beta_{q,\nu}(0) \rangle.   (S34)

In the case of Ce2Zr2O7, the proposed parameter sets are (0.062, 0.063, 0.011) meV and (0.063, 0.062, 0.011) meV, with a potentially small θ ≲ 0.1π; these two sets have equally excellent goodness of fit [133]. As such, depending on whether Jxx or Jyy is dominant, S̃^z = (S^+ − S^−)/2i or S̃^z = (S^+ + S^−)/2, as either S̃^x or S̃^y becomes the dominant longitudinal component. Therefore, in the case of Ce2Zr2O7,

A^{DO}(q, \omega) \sim \sum_{\mu\nu} \frac{F_{\mu\nu}}{4} \left[ \left( C^{+-}_{\mu\nu}(q, \omega) + C^{-+}_{\mu\nu}(q, \omega) \right) \pm \left( C^{++}_{\mu\nu}(q, \omega) + C^{--}_{\mu\nu}(q, \omega) \right) \right],   (S35)

where the ± sign depends on whether S̃^x or S̃^y is dominant. Under the XXZ model, the latter two terms vanish as a consequence of the U(1) symmetry of the Hamiltonian. As such, to compute A^{DO},

A^{DO}(q, \omega) = \sum_{\mu\nu} \frac{F_{\mu\nu}}{4} \left( C^{+-}_{\mu\nu}(q, \omega) + C^{-+}_{\mu\nu}(q, \omega) \right).   (S36)

V. QUANTUM FISHER INFORMATION MATRIX

A. Review of the Quantum Fisher Information

To keep this appendix as self-contained as possible, we first review some basic facts about the QFI before discussing more specifically its relation to spin structure factors and neutron scattering cross sections. For a differentiable family of quantum states ρ_θ = e^{−iθO} ρ e^{iθO} generated by a Hermitian operator O, the quantum Fisher information (QFI) is defined via the symmetric logarithmic derivative (SLD) L as [168]

F_Q[\rho, O] \equiv \mathrm{Tr}\left( \rho L^2 \right), \qquad \partial_\theta \rho_\theta \big|_{\theta=0} = \tfrac{1}{2}\{\rho, L\} = -i[O, \rho].

In the eigenbasis ρ = Σ_n p_n |n⟩⟨n|, the QFI admits the Lehmann spectral representation

F_Q[\rho, O] = 2 \sum_{m,n} \frac{(p_m - p_n)^2}{p_m + p_n} \left| \langle m|O|n \rangle \right|^2,

where terms with p_m + p_n = 0 are omitted. For pure states ρ = |ψ⟩⟨ψ|, this reduces to

F_Q[|\psi\rangle, O] = 4 \left( \langle \psi|O^2|\psi \rangle - \langle \psi|O|\psi \rangle^2 \right) = 4\, \mathrm{Var}_\psi(O).

For thermal states ρ ∝ e^{−βH}, the same expression holds with p_n = e^{−βE_n}/Z and {|n⟩} chosen as the energy eigenbasis of H.
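As a minimal numerical illustration of these definitions (a toy sketch, not the code used in this work), the following Python snippet evaluates the Lehmann representation of F_Q for a thermal state of a small random Hamiltonian and checks the pure-state limit F_Q → 4 Var_ψ(O) at low temperature; the matrix sizes, seeds, and tolerances are placeholders:

```python
import numpy as np

def qfi_thermal(H, O, T):
    """QFI of rho ~ exp(-H/T) for a Hermitian generator O, via the Lehmann
    representation F_Q = 2 sum_{m,n} (p_m - p_n)^2/(p_m + p_n) |<m|O|n>|^2."""
    E, V = np.linalg.eigh(H)                  # energy eigenbasis {|n>}
    p = np.exp(-(E - E.min()) / T)
    p /= p.sum()                              # Boltzmann weights p_n
    Omn = V.conj().T @ O @ V                  # matrix elements <m|O|n>
    num = (p[:, None] - p[None, :]) ** 2
    den = p[:, None] + p[None, :]
    ratio = np.divide(num, den, out=np.zeros_like(num), where=den > 0)
    return 2.0 * np.sum(ratio * np.abs(Omn) ** 2)

# Cross-check the pure-state limit F_Q -> 4 Var_psi(O) at very low temperature.
rng = np.random.default_rng(2)
A = rng.normal(size=(6, 6)); H = (A + A.T) / 2
B = rng.normal(size=(6, 6)); O = (B + B.T) / 2
E, V = np.linalg.eigh(H)
g = V[:, 0]                                   # ground state |psi>
var = g @ O @ O @ g - (g @ O @ g) ** 2
assert np.isclose(qfi_thermal(H, O, T=1e-4), 4 * var, rtol=1e-3)
```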
A pure N-particle state is called m-producible if it factorizes into a tensor product of blocks, each containing at most m parties; a mixed state is m-producible if it is a convex mixture of such pure states. The entanglement depth is the smallest m for which the state is m-producible. For a generator that is a sum of local terms, O = N X i=1 Oi, ∆i ≡λmax(Oi) −λmin(Oi), where ∆i is the spectral width of Oi and λmax(min) are the cor- responding largest (smallest) eigenvalues, every m-producible state ρ satisfies the QFI bound (Hyllus–T´oth) [63, 64] FQ[ρ, O] ≤ max partitions {Bℓ} |Bℓ|≤m X ℓ  X i∈Bℓ ∆i 2, (S37) where the maximum runs over all partitions of {1, . . . , N} into blocks Bℓwith |Bℓ| ≤m. If ∆i = ∆for all i, then (S37) becomes FQ[ρ, O] ≤∆2 X ℓ |Bℓ|2. Thus, for fixed N = P ℓ|Bℓ| and the constraint |Bℓ| ≤m, max- imizing the bound reduces to maximizing P ℓ|Bℓ|2. The op- timal partition is therefore as many size-m blocks as possible and at most one residual block. Writing N = sm + r with s = ⌊N/m⌋and 0 ≤r < m, the maximizing partition has s blocks of size m and, if r > 0, one block of size r (all other blocks empty). Therefore FQ[ρ, O] ≤∆2  sm2 + r2 , (S38) with equality attained by that partition (if r = 0, it is s blocks of size m). Hence, if a measured (or computed) QFI violates (S38) for some m, i.e. FQ[ρ, O] > ∆2  sm2 + r2 , then the state has entanglement depth at least m+1. Remark 1 (inhomogeneous local spectra). Let O = PN i=1 ciOi with ci ∈ R and local spectral widths ∆i := λmax(Oi) −λmin(Oi). Note that the spectral width of ciOi is |ci|∆i. Define the QFI density fQ(O) := 1 N FQ[ρ, O]. If ρ is m-producible, then (maximizing over all partitions {Bℓ} with |Bℓ| ≤m) fQ(O) ≤1 N max {Bℓ:|Bℓ|≤m} X ℓ  X i∈Bℓ |ci|∆i 2 ≤1 N max {Bℓ} X ℓ |Bℓ| X i∈Bℓ c2 i ∆2 i (Cauchy–Schwarz on each block) ≤m N N X i=1 c2 i ∆2 i =: m(c∆)2 (S39) where (c∆)2 := 1 N P i c2 i ∆2 i . It is therefore natural to normalize by the inhomogeneous factor η := (c∆)2, so that fQ(O) η ≤m. (S40) Hence, the density witness fQ(O) η > m =⇒entanglement depth ≥m+1. (S41) Remark 2 (sum of witnesses and spectral-width control). Suppose we define a composite quantity as the sum of two QFI densities, f (Σ) Q ≡fQ(O(1)) + fQ(O(2)), with additive generators O(k) = PN i=1 O(k) i , ∆(k) i := λmax O(k) i  −λmin O(k) i . For any m-producible state ρ one has, by the standard block-partition argument and Cauchy–Schwarz, fQ(O(k)) ≤m(∆(k))2, (∆(k))2 := 1 N N X i=1 ∆(k) i 2, (S42) 9 hence the sum bound f (Σ) Q ≤m  (∆(1))2 + (∆(2))2 . (S43) Therefore, the normalized sum witness f (Σ) Q (∆(1))2 + (∆(2))2 > m =⇒entanglement depth ≥m + 1. (S44) B. Spin Structure Factors In the main text, we have use S α q = P i S α i eiq·Ri as the op- erator associated with the QFI. One would note that the usual definition of QFI requires an operator O to be Hermitian such that the induced generator eiθO is unitary and thereby gener- ates norm-preserving dynamics. Although S α q is not Hermi- tian, we show below that the definition of QFI in Eq. (4) of the main text still encompasses all the important properties of the conventional QFI. Namely, it is a robust measure of fluc- tuations, and we can still derive a meaningful lower bound of entanglement depth. To keep our results general, let us consider some Hermitian operator ORµ on each site under the coordinate system speci- fied in Eq. (S16) such that the operator we associate with QFI has the form Oq = P Rµ ORµeiq·Rµ, where ORµ is some local observable with homogeneous spectral width ∆Rµ = ∆. 
For later discussion, let us denote such a homogeneous spectral width as ∆(ORµ) to be explicit about the underlying local op- erators. We can plug in ORµ = S α Rµ where α ∈{x, y, z} at the end to make a direct connection with the definitions used in the main text. To begin, let us split Oq = Oc;q + iOs;q into an Hermitian and antihermitian parts, where Oc;q and Os;q are Hermitian operators: Oc;q = Oq + O† q 2 = X Rµ ORµ cos(q · Rµ) (S45) Os;q = Oq −O† q 2i = X Rµ ORµ sin(q · Rµ) (S46) Now we construct a QFI matrix (QFIM) F [186–188], whose elements are specified by: Fab(T) := 4 Z dω tanh  ω 2T   1 −e−ω/T Aab(q, ω). (S47) where Aab(q, ω) = 1 2πN Z dteiωt⟨Oa,q(t)Ob,q(0)⟩. (S48) with a, b ∈{c, s}. Now we wish to show that fQ(Oq, T) defined in the main text via Eq. (4) is related to the trace of this QFIM. Since Oq = Oc;q + iOs;q, O† qOq = Oc;qOc;q + Os;qOs;q + i[Oc;q, Os;q] = Oc;qOc;q + Os;qOs;q, (S49) as [Oc;q, Os;q] = 0. Therefore, we see that A(q, ω) := 1 2πN Z dt⟨O† q(t)Oq(0)⟩eiωt = 1 2πN Z dteiωt⟨Oc;q(t)Oc;q(0)⟩+ ⟨Os;q(t)Os;q(0)⟩ = Acc + Ass. (S50) As a result, fQ(Oq, T) = Z dω tanh( ω 2T )(1 −e−ω/T)A(q, ω) = Z dω tanh( ω 2T )(1 −e−ω/T)(Acc(q, ω) + Ass(q, ω)) = fQ(Oc;q, T) + fQ(Os;q, T) = Tr(F). (S51) In other words, the QFI used in the main text is precisely the trace of the QFI matrix introduced above. Interpreting F as a covariance matrix, this shows that the fQ(Oq, T) equals the total variance — the sum of the variances carried by all fluc- tuation modes. Furthermore, fQ(Oq, T) provides a robust lower bound on the entanglement depth as fQ(Oq, T) = fQ(Oc;q, T) + fQ(Os;q, T) is the well-bounded sum witness discussed in re- mark 2. Using the result outlined in the remark, for an m- producible state, we have fQ(Oq, T) ≤m  (∆Oc,q)2 + (∆Os,q)2 . (S52) Finally, by negation, fQ(Oq, T) > m  (∆Oc,q)2 + (∆Os,q)2 (S53) implies the state is at least (m + 1)-partite entangled. As such, all that remains is to obtain a bound for these two channels. Using Eq. (S39) with homogeneous spectral width for each local operator ORµ, ∆Rµ = ∆(ORµ), and cRµ = cos(q · Rµ), (∆Oc,q)2 =  ∆(ORµ) 2 N X Rµ cos2(q · Rµ). (S54) Therefore, by arguing the same for Os,q, we can finally bound fQ(Oq) for any m-producible state: fQ(Oq) ≤ m  ∆(ORµ) 2 N X Rµ  cos2(q · Rµ) + sin2(q · Rµ)  = m  ∆(ORµ) 2 . (S55) As such, we find that fQ(Oq) is bounded above by m  ∆(ORµ) 2. This is precisely the definition of the entangle- ment bound for conventional QFI [46, 50, 168] as if we have ignored the exponential part eiq·Rµ altogether. 10 To conclude, for an m-producible state, any operator of the form Oq = P Rµ ORµeiq·Rµ, where ORµ is some observable with homogeneous spectrum ±1/2 across all sites, has the corre- sponding QFI bound fQ(Oq, T) ≤m  ∆(ORµ) 2 . (S56) Therefore, for Oq = S α q where α ∈{x, y, z} in a spin-1/2 sys- tem: fQ(S α q, T) > m(∆(S α Rµ))2 = m(1/2 −(−1/2))2 = m, (S57) implies that the state is at least (m + 1)-partite entangled, re- gardless of momentum position q. Now, since we have established a well-constructed QFI for non-Hermitian operators, we can extend this even further for α = ± in the main text. As discussed in the main text, fQ(S ± q, T) = fQ(S x q, T) + fQ(S y q, T) is a sum witness. There- fore, by remark 2, for an m-producible state: fQ(S ± q, T) = fQ(S x q, T) + fQ(S y q, T) ≤m(1 + 1) = 2m. (S58) Equivalently, this is to say that if the spinon channel has nQFI nQFT(S α q) = fQ(S ± q, T)/2 > m, (S59) then the state is at least (m + 1)-partite entangled. C. 
Neutron Scattering Cross Section Similarly, for ADO, to have a meaningful interpretation of the entanglement depth, we would have to determine λmax and λmin of the operator in which we are evaluating our QFI to determine the QFI spectral bound η. To do so, first, we should determine the exact operator O that the neutron scat- tering cross section contains. Let us define operators S NSF q = X Rµ  ˆp · ˆzµ  τz Rµeiq·Rµ (S60a) S SF q = X Rµ  ˆv · ˆzµ  τz Rµeiq·Rµ. (S60b) Here ˆp denotes the neutron polarization, which is perpendic- ular to the momentum transfer (ˆv · ˆp = 0), and ˆv is a vec- tor perpendicular to both the polarization vector and the neu- tron momentum transfer (ˆv · ˆp = ˆv · ˆq = 0). These channels correspond to polarized neutron scattering experiments in the spin-flip/non-spin-flip (SF/NSF) channels with respect to the neutron polarization vector ˆp. By definition, ADO ∼⟨S NSF† q S NSF q ⟩+ ⟨S SF† q S SF q ⟩, (S61) where we sum up all transverse modes, which is equivalent to taking the transverse projector in Eq.(S29). Accordingly, we assemble a 2 × 2 QFIM FDO from the two transverse com- ponents—taken here as the NSF and SF channels—so that the cross section ADO defined in Eq. (S29) is identified with Tr FDO. This construction is necessary because the transverse projector has two independent components. Hence, there is no single scalar operator S DO with ADO ∼⟨S DO†S DO⟩. Neverthe- less, we still denote the QFI related to ADO as fQ(S DO q , T) for consistency’s sake. The operators entering FDO are generally non-Hermitian, which is admissible since the bounds derived in the previous section apply directly. Applying Eq. (S56) to each channel then yields a lower bound on the entanglement depth. Finally, to bound the QFI induced by unpolarized neu- tron scattering, let us denote the corresponding QFI from the ADO channel as the sum witness in remark 2 by fQ(S DO q ) = fQ(S NSF q ) + fQ(S SF q ). (S62) Therefore, let us bound the NSF and SF channels to bound the total unpolarized neutron scattering cross section. To do so, we have to determine λmax and λmin of each basis operator to get an actual interpretation of the entanglement depth. To do so, we need to evaluate at a specific momentum position q. For clarity of later discussion, let us define vector ΠNS F and ΠS F with four components corresponding to the four py- rochlore sublattices µ as Πµ NSF(q) = ˆp · ˆzµ (S63a) Πµ SF(q) = ˆv · ˆzµ. (S63b) Since, in principle, the operator S (N)SF q is non-Hermitian, we should break it down into the same cosine and sine chan- nels again and discuss the sum of the spectral width on each channel. By Eq. (S52), we need to bound the cosine and the sine channel again. Let us denote S (N)SF c(s),q as the correspond- ing cosine and sine channels. Then, using Eq. (S39) with cµ = Πµ (N)SF: (∆S (N)SF c,q )2 = 1 N X Rµ cos2(q · Rµ)|Πµ (N)SF|2. (S64) Therefore, using Eq. (S52), we can derive a QFI bound on the NSF and SF channel separately for an m-producible state: fQ(S (N)SF q , T) ≤m N X Rµ  cos2(q · Rµ) + sin2(q · Rµ)  |Πµ (N)SF|2 = m N X Rµ |Πµ (N)SF|2 = m 4 X µ |Πµ (N)SF|2. (S65) In the last line, we have used the fact that Πµ (N)SF depends only on the sublattice but not on the unit cell position. We can then sum over all unit cells, which gives a prefactor of N/4 Finally, the upper bound on fQ(S DO q ) for an m-producible state is fQ(S DO q ) = fQ(S NSF q ) + fQ(S SF q ) ≤m 4 X µ  |Πµ NSF|2 + |Πµ SF|2 . 
(S66) Now, to actually evaluate this bound, let us write X µ Πµ (N)SF 2 = X µ ˆa · ˆzµ 2 = ˆa⊤X µ ˆzµˆz⊤ µ  ˆa = ˆa⊤Aˆa, (S67) 1 where A := P µ ˆzµˆz⊤ µ and ˆa = ˆp if NSF and ˆa = ˆv if SF. For a single pyrochlore tetrahedron, take the four local ˆzµ, a direct sum gives A = 3 X µ=0 ˆzµˆz⊤ µ = 4 313×3, (S68) since all off-diagonal entries cancel by symmetry (the signs appear equally often with opposite parity) and Tr A = P µ ∥ˆzµ∥2 = 4 fixes the proportionality constant. Hence X µ Πµ (N)SF 2 = 4 3||ˆa||2 = 4 3, (S69) which is independent of the direction of the unit vector ˆa. As a result, for an m-producible state, QFI upper bounds on NSF, SF, and the total scattering channel are given by fQ(S (N)SF q , T) ≤m 4 X µ |Πµ (N)SF|2 = m 3 . (S70a) fQ(S DO q , T) ≤2m 3 . (S70b) To summarize, we derived a bound independent of q for the NSF, SF, and the unpolarized scattering channel with corre- sponding nQFI: nQFI(S (N)SF q , T) = 3fQ(S (N)SF q , T) (S71a) nQFI(S DO q , T) = 3 2 fQ(S (N)SF q , T). (S71b) We again stress that this bound is not the optimal bound in general but an upper bound of such via Cauchy-Schwartz as shown in Eq. (S65). The true optimal bound depends strongly on the choice of ˆp and ˆv and thereby the incident neutron mo- mentum ˆq. In theory, one would need to apply Eq. (S37) care- fully to find the best bound.
Quantum Fisher Information as a Thermal and Dynamical Probe in Frustrated Magnets: Insights from Quantum Spin Ice

Chengkang Zhou,1, ∗ Zhengbang Zhou,2, ∗ Félix Desrochers,2, 3 Yong Baek Kim,2 and Zi Yang Meng1
1The University of Hong Kong, Hong Kong, China
2Department of Physics, University of Toronto, Toronto, Ontario M5S 1A7, Canada
3Department of Physics, Harvard University, Cambridge, Massachusetts 02138, USA
(Dated: October 17, 2025)

Quantum Fisher information (QFI) is a novel measure of multipartite quantum entanglement that can be measured in inelastic neutron scattering experiments on quantum magnets. In this work, we demonstrate that the QFI can be used to understand the thermal and dynamical properties of quantum magnets by focusing on the pyrochlore lattice model of quantum spin ice (QSI), a three-dimensional quantum spin liquid that hosts fractionalized quasiparticles and emergent photons. We use the newly developed multi-directed loop update quantum Monte Carlo (QMC) algorithm and exact diagonalization (ED) to compute the QFI, which is further utilized to calibrate the gauge mean-field theory results. We show that the temperature and momentum dependence of the QFI can reveal characteristic energy scales of distinct phases and phase transitions in the global phase diagram. In particular, the QFI can clearly distinguish the ferromagnetic ordered phase, the thermal critical region above it, as well as two distinct QSI phases, namely zero-flux and π-flux QSI. Moreover, the QFI shows two crossover temperature scales, one from the trivial paramagnet to the classical spin ice regime and a lower temperature crossover to QSI. We discuss our results, especially for the π-flux QSI, in light of the ongoing experimental efforts on Cerium-based pyrochlore systems. Our results demonstrate that the QFI not only detects entanglement properties but can also be viewed as a sensitive thermal and dynamical probe in the investigation of quantum magnets.

Introduction.- Entanglement is arguably the most important concept in quantum physics [1-7]. It is at the heart of our modern understanding of phases of matter and transitions between them [8-23]. As a notable example, entanglement is the fundamental characteristic of quantum spin liquids (QSLs) [20, 24-27] - paramagnetic phases of frustrated spin systems that fail to magnetically order down to zero temperature, and host fractional excitations and emergent gauge fields [28-34]. QSLs are characterized by long-range entanglement, which may be measured for gapped QSLs by a non-vanishing topological entanglement entropy [20, 35-38]. However, despite its role as a theoretical cornerstone and its usefulness in numerical studies [19-23, 37-43], topological entanglement entropy is not experimentally accessible in solid-state platforms, where only local correlations are typically measured.

One still has experimental access to other measures of entanglement. A particularly useful one that has recently been at the forefront of the experimental search for QSLs is the quantum Fisher information (QFI) [44-52]. Initially employed to define the maximal achievable precision in parameter estimation for a given quantum state in the quantum metrology community [53-58], the QFI is directly related to the dynamical susceptibility [59], which is routinely measured in inelastic neutron scattering experiments [60-62]. The QFI density fQ provides a lower bound on the multipartite entanglement in the system [63, 64]. This bound can be used to experimentally differentiate between QSLs and other, more trivial states, such as random singlet states driven by strong disorder [65, 66].
These states may otherwise be challenging to distinguish experimentally, as they can both lead to similar signatures, such as continua of excitations in inelastic neutron scattering or a lack of experimentally observable finite-temperature phase transitions [67-78]. Nevertheless, it should be emphasized that a large value of the QFI, although promising, does not provide evidence for the realization of a QSL in and of itself. Indeed, trivial ordered states sufficiently close to a quantum critical point may have an arbitrarily large value of fQ [59]. Be that as it may, if precise theoretical predictions for the momentum and temperature dependence of the QFI for prospective QSLs exist, they offer stringent quantitative predictions which, if measured, may provide significantly more convincing evidence than qualitative features, like the presence of broad continua of excitations.

In this letter, we make such detailed predictions for the QFI on one of the most paradigmatic QSLs: quantum spin ice (QSI) [79-84]. QSI is a three-dimensional QSL that is the ground state of an XXZ model with dominant Ising and subleading transverse couplings on the pyrochlore lattice. It realizes the deconfined (Coulomb) phase of compact U(1) gauge theory and, as such, hosts emergent photon excitations, spin-1/2 spinons that act as emergent electric charges, and magnetic monopoles [85]. QSI is the ideal platform for making specific predictions for the QFI, as it is one of the few known examples of an experimentally relevant model that stabilizes a well-understood QSL that is numerically accessible over a parameter regime (i.e., ferromagnetic transverse couplings) with sign-problem-free quantum Monte Carlo (QMC) [85-90]. Furthermore, several compounds have historically been considered as possible experimental realizations of QSI, such as Tb2Ti2O7 [91-100], Pr2(Sn,Zr,Hf)2O7 [101-112], and Yb2Ti2O7 [83, 113-124]. The most recently considered candidate materials are the Cerium-based pyrochlore compounds Ce2(Zr,Hf,Sn)2O7 [125-144]. For this last family, the π-flux QSI regime studied below is of direct experimental relevance.

FIG. 1. Heat maps of the QFI as functions of temperature T and J±. Panels (a) and (b) show the QFI density fQ(S ± q, T) in the S ± channel at Γ = (0, 0, 0), while panels (c) and (d) show it at Γ′ = (4π, 4π, 0). Panels (a) and (c) are obtained from the ED calculation of a 16-site cluster with J± ranging from -0.045 to 0.08, and panels (b) and (d) are from the QMC simulation with 4 × L3 sites (L = 4) and J± ranging from 0.04 to 0.10. In panels (a) and (b), fQ(S ± Γ, T) maps out the thermodynamic phase boundaries between the ferromagnetic phase (FM) and QSI0. The temperature dependence of fQ(S ± Γ, T) from QMC further discerns the crossover temperature scales from the high-temperature paramagnetic regime to classical spin ice and eventually to the QSI0 regime (see also Fig. 2 (a)). In panels (c) and (d), fQ(S ± Γ′, T) reflects the strength of the fluctuations in the thermal and quantum phase diagram, with strong QFI at the classical critical region above the FM phase and stronger QFI in the classical spin ice and QSI0 region.
Moreover, the strongest QFI signal, represented by the red region in panel (c) for J± < 0, reflects the enhanced fluctuations of the QSIπ regime.

Model and methods.- We consider spin-1/2 pseudospins on the pyrochlore lattice governed by the XXZ model

H = Σ⟨i, j⟩ [Jz S z i S z j - J± (S + i S - j + S - i S + j)], (1)

with a dominant antiferromagnetic coupling Jz > 0 (and Jz ≫ |J±| > 0), which provides geometric frustration within each tetrahedron and energetically favors "classical spin ice configurations" where Σi∈t S z i = 0, with the sum over all spins in a given tetrahedron t. In the perturbative regime |J±| ≪ Jz, the transverse coupling allows for tunnelling between the classically degenerate configurations that respect this local energetic constraint [79]. This tunnelling stabilizes 0-flux QSI (QSI0) for 0 < J± < 0.052 and π-flux QSI (QSIπ) for J± < 0. For J± > 0.052, the system undergoes a fluctuation-induced first-order transition [165, 166] into an XY ferromagnetic (FM) ordered phase below a critical temperature Tc [86, 88, 89]. Hereafter, we set Jz = 1 as the unit of energy. For reference, the dominant exchange is usually slightly smaller than 1 K in Cerium-based dipolar-octupolar candidates [131-133, 140-143].

To quantify the entanglement properties of a given state, the QFI provides a lower bound on multipartite entanglement, also known as entanglement depth [50, 65, 167]. For a collective generator O = ΣN i=1 Oi with local eigenvalue range ∆λ = λmax - λmin, any m-producible state ρ obeys [46, 50, 63, 64, 168]

FQ[ρ, O]/N ≤ m(∆λ)2. (2)

Equivalently, with fQ(O) := FQ[ρ, O]/N, we can define the normalized QFI (nQFI) as the lower bound of entanglement depth, such that

nQFI(O) := fQ(O)/(∆λ)2 > m =⇒ entanglement depth ≥ m + 1. (3)

For the operator O = S α q := Σi S α Ri eiq·Ri, the QFI density is related to the dynamical structure factor Aα(q, ω) := (1/2πN) ∫ dt ⟨S α† q (t)S α q(0)⟩ eiωt at momentum q by [59]

fQ(S α q, T) = 4 ∫0∞ dω tanh(ω/2T) (1 - e-ω/T) Aα(q, ω). (4)

Here, T is the temperature and α labels different pseudospin components, such as α ∈ {x, y, z}. We see that the QFI density fQ(S α q, T) can be obtained by integrating Aα(q, ω) over all frequencies at a fixed temperature and momentum. Experimental measurements of the QFI have already been reported for the quasi-1D material KCuF3 [46] and for the 2D frustrated triangular lattice material KYbSe2 [47, 52].
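Equation (4) reduces to a one-dimensional frequency integral once Aα(q, ω) is known. A minimal numerical sketch in Python, assuming the structure factor is available on a discrete positive-frequency grid (the grid and normalization here are illustrative, not from the paper):

import numpy as np

def qfi_density(omega, A, T):
    # Eq. (4): fQ = 4 * int_0^inf dw tanh(w / 2T) * (1 - exp(-w / T)) * A(q, w)
    kernel = np.tanh(omega / (2.0 * T)) * (1.0 - np.exp(-omega / T))
    return 4.0 * np.trapz(kernel * A, omega)

At T → 0 the kernel tends to 1 for all ω > 0, so a single spectral mode of weight W contributes fQ → 4W, which is the pure-state variance formula quoted in the supplement.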
To simulate the QSI0 phase, we developed a multi-directed loop (MDL) QMC update that enables efficient simulations of the QSI0 phase on the pyrochlore lattice. This task is difficult to achieve with the conventional directed loop update algorithm, as in the QSI0 phase the simplest quantum fluctuation that connects different classical spin-ice states involves six spin flips around a hexagon, which is hard to realize by a simple directed loop update.

FIG. 2. Temperature evolution of the QFI in different phases obtained with various computational methods. The QFI fQ(S ± q, T) is shown for (a) the S ± channel at the Γ point and (b) the S ± channel at the Γ′ point. The blue disk points represent results from ED calculations with J± = -0.3 and J± = -0.125 in the QSIπ regime, while the orange triangles correspond to QMC simulations with J± = 0.045 and J± = 0.05 in the QSI0 regime. The orange shaded areas accompanying the data highlight the crossover from the paramagnetic regime to the classical spin ice regime at T ∼ 1 and that from the classical spin ice regime to QSI0 at T ∼ |J3 ±|. The red triangles indicate QMC results with J± ranging from 0.06 to 0.08 in the FM regime. The green points denote the GMFT calculation results at J± = 0.045 (QSI0) and J± = -0.3 (QSIπ) for different ground states at zero temperature. The QMC, ED, and GMFT results are consistent (see SM [155] for details).

Our MDL algorithm overcomes this limitation by allowing the insertion of multiple operator pairs, thereby naturally generating higher-order processes during the Monte Carlo update and making the QSI0 phase accessible (see SM [155]). We then measure the imaginary time correlation functions G± = (1/2N) Σγ,ν ⟨S + -q,γ(τ)S - q,ν(0) + S - -q,γ(τ)S + q,ν(0)⟩ and Gz = (1/N) Σγ,ν ⟨S z -q,γ(τ)S z q,ν(0)⟩, where γ, ν label the four pyrochlore sublattices, the momentum transfer q is measured in the 3D pyrochlore Brillouin zone (BZ), and τ ∈ [0, β] denotes the imaginary time with β = 1/T the inverse temperature. We utilize the stochastic analytic continuation (SAC) scheme [169-173] to convert the imaginary time correlation function to a real-frequency dynamic structure factor. This QMC+SAC scheme has been successfully applied to a variety of lattice models, producing reliable spectral properties ranging from magnon and amplitude modes in a magnetically ordered state [174, 175] to fractionalized excitations in QSL and QSI models [85, 176-179].

We further compute the QFI density using ED and GMFT. The ED calculations are performed on a 16-site periodic cubic conventional cluster (see SM [155]), providing access to the momentum positions Γ, Γ′, and X. We first compare the ED and GMFT results with our high-quality QMC data before extending these calculations into the π-flux (J± < 0) regime. The QFI tracks the crossover from the high-temperature paramagnet (T > Jz) to the CSI and its plateau at intermediate temperature (|J±|3 < T < Jz), followed by the crossover into the QSI regime at lower temperatures.

III. GAUGE MEAN-FIELD THEORY

GMFT is formulated for the regime Jzz > |Jxx| = |Jyy|, so that we may set Jz = Jzz. In this formulation, one introduces a slave "charge" degree of freedom on the sites of the parent diamond lattice, defined as

Qrα = Σμ∈∂trα S z Rμ, (S15)

where trα denotes the tetrahedron centered at rα, ∂trα are the four pyrochlore spins forming its corners, and μ denotes the sublattice index. The index α ∈ {A, B} distinguishes the two diamond sublattices (up- and down-pointing tetrahedra). Notice we make an explicit distinction between the sublattice-indexed spin coordinates, denoted by Rμ, where μ ∈ {0, 1, 2, 3} represents the sublattice index, and the parent diamond lattice, denoted by the lower case rα. These two coordinate systems are related by

Rμ = rα + ηαbμ/2, (S16)

where bμ connects A-sublattice sites to their four nearest B-sublattice neighbors, and ηA = 1, ηB = -1:

b0 = -(1/4)(1, 1, 1) (S17a)
b1 = (1/4)(-1, 1, 1) (S17b)
b2 = (1/4)(1, -1, 1) (S17c)
b3 = (1/4)(1, 1, -1). (S17d)

By canonical construction, the conjugate variable φrα obeys [φrα, Qrα′] = iδrαrα′, allowing one to define bosonic (spinon) raising and lowering operators

Φ† rα = eiφrα, Φrα = e-iφrα. (S18)

The original pseudospin operators can now be re-expressed in the enlarged Hilbert space H = HQ ⊗ Hspin, where the mapping is given by

S + Rμ → (1/2) Φ† rA eiArA,rA+bμ ΦrA+bμ, (S19)
S z Rμ → ErA,rA+bμ, (S20)

with ArA,rA+bμ and ErA,rA+bμ denoting conjugate gauge and electric fields on the diamond links. Physically, this construction makes explicit the emergent gauge structure inherent in the spin-ice manifold: spin flips map to spinon matter hopping minimally coupled to compact U(1) gauge fields.
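The coordinate bookkeeping of Eqs. (S16)-(S17) is easy to make concrete. A short Python sketch (units of the cubic lattice constant assumed; the helper name is hypothetical):

import numpy as np

# nearest-neighbour diamond vectors b_mu of Eq. (S17)
b = np.array([[-1.0, -1.0, -1.0],
              [-1.0,  1.0,  1.0],
              [ 1.0, -1.0,  1.0],
              [ 1.0,  1.0, -1.0]]) / 4.0

eta = {"A": +1, "B": -1}

def pyrochlore_site(r_alpha, alpha, mu):
    # Eq. (S16): R_mu = r_alpha + eta_alpha * b_mu / 2
    return np.asarray(r_alpha) + eta[alpha] * b[mu] / 2.0

# each pyrochlore site sits midway between an A diamond site and its B neighbour
rA = np.zeros(3)
assert np.allclose(pyrochlore_site(rA, "A", 2), (rA + (rA + b[2])) / 2.0)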
The resulting Hamiltonian contains quadratic spinon charge terms and spinon hopping terms (J±), yielding

H = (Jz/2) Σrα Q2 rα - (J±/4) Σrα Σμ≠ν Φ† rα+ηαbμ Φrα+ηαbν eiηα(Arα,rα+ηαbν - Arα,rα+ηαbμ), (S21)

where J± = -(Jxx + Jyy)/4. At this stage, we adopt two crucial approximations: (i) the electric field E is integrated out, leaving behind purely bosonic matter fields coupled to static background fluxes; and (ii) the gauge field A is frozen to its mean-field value Ā, thereby neglecting dynamical gauge fluctuations. These simplifications transform the model into a tractable quadratic bosonic theory, whose self-consistent solution captures the stability of the U(1) quantum spin liquid. Evaluating the Hamiltonian on the spin ice manifold, where Q = 0, we obtain

H = -(J±/4) Σrα Σμ≠ν Φ† rα+ηαbμ Φrα+ηαbν eiηα(Ārα,rα+ηαbν - Ārα,rα+ηαbμ). (S22)

To compute the QFI, we need to calculate the dynamical spin susceptibility. Under the GMFT formalism, since the photonic mode is integrated out, we only have access to the spinon dynamics. In other words, we only have access to ⟨S +S -⟩ ∼ ⟨Φ†ΦΦ†Φ⟩. In particular, we can compute the spin structure factor via the following formalism:

S +- μν (q, ω) = (1/N) ΣRμ,R′ν eiq·(Rμ-R′ν) ∫ dt eiωt ⟨S + Rμ(t)S - R′ν(0)⟩
= (1/N) ΣrA,r′A Σμ,ν eiq·(rA-r′A+(bμ-bν)/2) ∫ dt eiωt (1/4) ⟨Φ† rA(t) eiArA,rA+bμ ΦrA+bμ(t) Φ† r′A+bν(0) e-iAr′A,r′A+bν Φr′A(0)⟩, (S23)

where N is the number of spins. We can then evaluate this using Wick contractions. Within the XXZ model, it turns out that the only non-trivial term is

S +- μν (q, ω) = (1/N) ΣrA,r′A Σμ,ν eiq·(rA-r′A+(bμ-bν)/2) ∫ dt eiωt (Fμν/4) ⟨ΦrA+bμ(t)Φ† r′A+bν(0)⟩ ⟨Φ† rA(t)Φr′A(0)⟩, (S24)

where Fμν = ei(ĀrA,rA+bν - ĀrA,rA+bμ). In other words, we can compute the dynamical spin susceptibility by evaluating the Green's function of the spinons. We can then integrate this to find the QFI of the spinon channel at zero temperature. We show the QFI computed from GMFT at zero temperature in Fig. S8. Despite having made drastic assumptions in the theory, the results of GMFT actually align quite well with the QMC results (up to a factor of approximately 6/7).

FIG. S8. The GMFT calculation results of the QFI density fQ in the S ± channel. Panel (a) shows the comparison between GMFT results (with a factor 6/7) for various system sizes from L = 3 to L = 25 at J± = 0.045, and the QMC results for L = 3 and L = 4. Panel (b) shows the QFI density as a function of J± at zero temperature in the S ± channel at Γ, Γ′, and X.

IV. NEUTRON SCATTERING CROSS SECTION OF DIPOLAR-OCTUPOLAR MATERIAL CANDIDATES

The neutron scattering cross section is obtained via Fermi's golden rule; after tracing over all the sample spin degrees of freedom, we obtain

d2σ/dΩdω ∝ (kf/ki) Σαβ (δαβ - ˆqα · ˆqβ) Mαβ(q, ω), (S25)

where kf (ki) is the norm of the momentum of the outgoing (incoming) neutron (in what follows, we drop the kinetic prefactor by assuming kf/ki ≈ 1 for the experiments of interest), and

Mαβ(q, ω) = (1/2π) ∫ dt ⟨Mα(-q, t)Mβ(q, 0)⟩ eiωt. (S26)

Here Mα(q, ω) is the Fourier-transformed magnetization density operator.
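The polarization factor (δαβ - ˆqα ˆqβ) in Eq. (S25) is simply the projector transverse to the momentum transfer; a two-line sketch (the helper name is hypothetical):

import numpy as np

def transverse_projector(q):
    # (delta_ab - qhat_a qhat_b) of Eq. (S25): projects onto the plane perpendicular to q
    qhat = np.asarray(q, dtype=float)
    qhat = qhat / np.linalg.norm(qhat)
    return np.eye(3) - np.outer(qhat, qhat)

P = transverse_projector([1.0, 1.0, 0.0])
assert np.allclose(P @ np.array([1.0, 1.0, 0.0]), 0.0)  # longitudinal part removed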
In the case of DO pyrochlore compounds, the lowest-lying crystal electric field doublet |±⟩ can be modeled using the pseudospin operators τz = (1/2)(|+⟩⟨+| - |-⟩⟨-|) and τ± = |±⟩⟨∓| that transform non-trivially under point group operations [125, 181]. In the case of Cerium-based candidates, only the pseudospin component τz has a dipolar magnetic charge density and linearly couples with the magnetic field (e.g., |±⟩ = |J = 7/2, mJ = ±3/2⟩ for Ce2Zr2O7 [128, 129, 144]). The pseudospin components τz are defined with respect to the local z-axis, as determined by crystal electric field analysis. These local z-axes vary according to the tetrahedron sublattice index and are given by

z0 = (1/√3)(1, 1, 1) (S27a)
z1 = (1/√3)(1, -1, -1) (S27b)
z2 = (1/√3)(-1, 1, -1) (S27c)
z3 = (1/√3)(-1, -1, 1). (S27d)

Therefore, in the dipolar approximation, the magnetization density operator is

Mα(q, t) ∝ (1/√N) ΣRμ eiq·Rμ ẑα μ τz Rμ(t) = Σμ ẑα μ τz q,μ(t), (S28)

where we have defined τz q,μ(t) = (1/√N) ΣRμ eiq·Rμ τz Rμ(t) via the standard Fourier transform. Plugging this into Eqs. (S25) and (S26) gives

d2σ/dΩdω ∝ (1/N) Σαβμν (δαβ - ˆqα · ˆqβ)(ẑμ)α(ẑν)β ⟨τz -q,μ(t)τz q,ν(0)⟩
= (1/N) Σμν [ẑμ · ẑν - (ẑμ · Q)(ẑν · Q)/|Q|2] ⟨τz -q,μ(t)τz q,ν(0)⟩ (S29)
:= ADO(q, t). (S30)

We can then write out explicitly the transverse projector in a DO pyrochlore system for the ⟨τzτz⟩ channel as

Fμν := ẑμ · ẑν - (ẑμ · Q)(ẑν · Q)/|Q|2. (S31)

In this work, we want to compute the experimentally relevant QFI for DO compounds, which are generally described by an XYZ Hamiltonian with a finite Txz:

HDO = Σ⟨i, j⟩ Tαα τα i τα j + Txz(τx i τz j + τz i τx j) - gμB Σi Bi · τz i, (S32)

where α ∈ {x, y, z}. We can remove the Txz term by applying a rotation about the y-axis: τy = S̃ y, τx = cos(θ)S̃ x - sin(θ)S̃ z, τz = sin(θ)S̃ x + cos(θ)S̃ z, with tan(2θ) = 2Txz/(Txx - Tzz). By doing so, we map this problem to the XYZ model

HXYZ = Σ⟨i, j⟩ J̃αα S̃ α i S̃ α j - gμB Σi Bi · (S̃ z i cos θ + S̃ x i sin θ). (S33)

Finally, we recover the XXZ model described in Eq. (1) under the simplifying assumption that J̃αα = J̃ββ = -2J± < J̃γγ = Jz with one dominant component, where α, β, γ are the 3 pseudospin components. In doing so, we map S z = S̃ γ and the other two components to x and y in Eq. (1). Let us define

Cαβ μν(q, ω) = (1/2π) ∫ dt eiωt ⟨S α -q,μ(t)S β q,ν(0)⟩. (S34)

In the case of Ce2Zr2O7, the proposed parameter sets are (0.062, 0.063, 0.011) meV and (0.063, 0.062, 0.011) meV with a potentially small θ ≲ 0.1π. These two sets have equally excellent goodness of fit [133]. As such, depending on whether Jxx or Jyy is dominant, S̃ z = (S + - S -)/2i or S̃ z = (S + + S -)/2, as either S̃ x or S̃ y becomes the dominant longitudinal component. Therefore, in the case of Ce2Zr2O7,

ADO(q, ω) ∼ Σμ,ν (Fμν/4) [(C+- μν (q, ω) + C-+ μν (q, ω)) ± 8(C++ μν (q, ω) + C-- μν (q, ω))], (S35)

where the ± sign depends on whether S̃ x or S̃ y is dominant. Under the XXZ model, the latter two terms vanish as a consequence of the U(1) symmetry of the Hamiltonian. As such, to compute ADO,

ADO(q, ω) = Σμ,ν Fμν (C+- μν (q, ω) + C-+ μν (q, ω))/4. (S36)

V. QUANTUM FISHER INFORMATION MATRIX

A. Review of the Quantum Fisher Information

For this appendix to be as self-contained as possible, we will first review some basic facts about the QFI before discussing more specifically its relation to spin structure factors and neutron scattering cross sections. For a differentiable family of quantum states ρθ = e-iθO ρ eiθO generated by a Hermitian operator O, the quantum Fisher information (QFI) is defined via the symmetric logarithmic derivative (SLD) L as [168]

FQ[ρ, O] ≡ Tr(ρL2), ∂θρθ|θ=0 = (1/2){ρ, L} = -i[O, ρ].

In the eigenbasis ρ = Σn pn|n⟩⟨n|, the QFI admits the Lehmann spectral representation

FQ[ρ, O] = 2 Σm,n [(pm - pn)2/(pm + pn)] |⟨m|O|n⟩|2,

where terms with pm + pn = 0 are omitted. For pure states ρ = |ψ⟩⟨ψ|, this reduces to

FQ[|ψ⟩, O] = 4(⟨ψ|O2|ψ⟩ - ⟨ψ|O|ψ⟩2) = 4 Varψ(O).

For thermal states ρ ∝ e-βH, the same expression holds with pn = e-βEn/Z and the {|n⟩} chosen as the energy eigenbasis of H.

A pure N-particle state is called m-producible if it factorizes into a tensor product of blocks, each containing at most m parties; a mixed state is m-producible if it is a convex mixture of such pure states. The entanglement depth is the smallest m for which the state is m-producible. For a generator that is a sum of local terms,

O = ΣN i=1 Oi, ∆i ≡ λmax(Oi) - λmin(Oi),

where ∆i is the spectral width of Oi and λmax(min) are the corresponding largest (smallest) eigenvalues, every m-producible state ρ satisfies the QFI bound (Hyllus-Tóth) [63, 64]

FQ[ρ, O] ≤ max{Bℓ}, |Bℓ|≤m Σℓ (Σi∈Bℓ ∆i)2, (S37)

where the maximum runs over all partitions of {1, . . . , N} into blocks Bℓ with |Bℓ| ≤ m. If ∆i = ∆ for all i, then (S37) becomes FQ[ρ, O] ≤ ∆2 Σℓ |Bℓ|2. Thus, for fixed N = Σℓ |Bℓ| and the constraint |Bℓ| ≤ m, maximizing the bound reduces to maximizing Σℓ |Bℓ|2. The optimal partition is therefore as many size-m blocks as possible and at most one residual block. Writing N = sm + r with s = ⌊N/m⌋ and 0 ≤ r < m, the maximizing partition has s blocks of size m and, if r > 0, one block of size r (all other blocks empty). Therefore

FQ[ρ, O] ≤ ∆2 (sm2 + r2), (S38)

with equality attained by that partition (if r = 0, it is s blocks of size m). Hence, if a measured (or computed) QFI violates (S38) for some m, i.e. FQ[ρ, O] > ∆2(sm2 + r2), then the state has entanglement depth at least m + 1.

Remark 1 (inhomogeneous local spectra). Let O = ΣN i=1 ciOi with ci ∈ R and local spectral widths ∆i := λmax(Oi) - λmin(Oi). Note that the spectral width of ciOi is |ci|∆i. Define the QFI density fQ(O) := FQ[ρ, O]/N. If ρ is m-producible, then (maximizing over all partitions {Bℓ} with |Bℓ| ≤ m)

fQ(O) ≤ (1/N) max{Bℓ:|Bℓ|≤m} Σℓ (Σi∈Bℓ |ci|∆i)2
≤ (1/N) max{Bℓ} Σℓ |Bℓ| Σi∈Bℓ c2 i ∆2 i (Cauchy-Schwarz on each block)
≤ (m/N) ΣN i=1 c2 i ∆2 i =: m(c∆)2, (S39)

where (c∆)2 := (1/N) Σi c2 i ∆2 i. It is therefore natural to normalize by the inhomogeneous factor η := (c∆)2, so that

fQ(O)/η ≤ m. (S40)

Hence, the density witness

fQ(O)/η > m =⇒ entanglement depth ≥ m + 1. (S41)
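The optimal partition bound of Eq. (S38) is straightforward to evaluate; a minimal sketch for the homogeneous case (∆i = ∆), with a hypothetical helper name:

def qfi_producibility_bound(N, m, delta=1.0):
    # Eq. (S38): FQ <= Delta^2 * (s * m^2 + r^2) with N = s*m + r, 0 <= r < m
    s, r = divmod(N, m)
    return delta**2 * (s * m**2 + r**2)

# for N a multiple of m the bound per site is exactly m * Delta^2, so measuring
# fQ > m * Delta^2 witnesses entanglement depth of at least m + 1
assert qfi_producibility_bound(12, 3) == 4 * 9  # s = 4 blocks of size m = 3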
Remark 2 (sum of witnesses and spectral-width control). Suppose we define a composite quantity as the sum of two QFI densities, f (Σ) Q ≡ fQ(O(1)) + fQ(O(2)), with additive generators O(k) = ΣN i=1 O(k) i, ∆(k) i := λmax(O(k) i) - λmin(O(k) i). For any m-producible state ρ one has, by the standard block-partition argument and Cauchy-Schwarz,

fQ(O(k)) ≤ m(∆(k))2, (∆(k))2 := (1/N) ΣN i=1 (∆(k) i)2, (S42)

hence the sum bound

f (Σ) Q ≤ m [(∆(1))2 + (∆(2))2]. (S43)

Therefore, the normalized sum witness

f (Σ) Q / [(∆(1))2 + (∆(2))2] > m =⇒ entanglement depth ≥ m + 1. (S44)

B. Spin Structure Factors

In the main text, we have used S α q = Σi S α i eiq·Ri as the operator associated with the QFI. One may note that the usual definition of the QFI requires an operator O to be Hermitian, such that the induced generator eiθO is unitary and thereby generates norm-preserving dynamics. Although S α q is not Hermitian, we show below that the definition of QFI in Eq. (4) of the main text still encompasses all the important properties of the conventional QFI. Namely, it is a robust measure of fluctuations, and we can still derive a meaningful lower bound on the entanglement depth.

To keep our results general, let us consider some Hermitian operator ORμ on each site under the coordinate system specified in Eq. (S16), such that the operator we associate with the QFI has the form Oq = ΣRμ ORμ eiq·Rμ, where ORμ is some local observable with homogeneous spectral width ∆Rμ = ∆. For later discussion, let us denote such a homogeneous spectral width as ∆(ORμ) to be explicit about the underlying local operators. We can plug in ORμ = S α Rμ, where α ∈ {x, y, z}, at the end to make a direct connection with the definitions used in the main text.

To begin, let us split Oq = Oc;q + iOs;q into Hermitian and anti-Hermitian parts, where Oc;q and Os;q are Hermitian operators:

Oc;q = (Oq + O† q)/2 = ΣRμ ORμ cos(q · Rμ) (S45)
Os;q = (Oq - O† q)/2i = ΣRμ ORμ sin(q · Rμ). (S46)

Now we construct a QFI matrix (QFIM) F [186-188], whose elements are specified by

Fab(T) := 4 ∫ dω tanh(ω/2T)(1 - e-ω/T) Aab(q, ω), (S47)

where

Aab(q, ω) = (1/2πN) ∫ dt eiωt ⟨Oa,q(t)Ob,q(0)⟩, (S48)

with a, b ∈ {c, s}. Now we wish to show that fQ(Oq, T), defined in the main text via Eq. (4), is related to the trace of this QFIM. Since Oq = Oc;q + iOs;q,

O† q Oq = Oc;qOc;q + Os;qOs;q + i[Oc;q, Os;q] = Oc;qOc;q + Os;qOs;q, (S49)

as [Oc;q, Os;q] = 0. Therefore, we see that

A(q, ω) := (1/2πN) ∫ dt ⟨O† q(t)Oq(0)⟩ eiωt = (1/2πN) ∫ dt eiωt [⟨Oc;q(t)Oc;q(0)⟩ + ⟨Os;q(t)Os;q(0)⟩] = Acc + Ass. (S50)

As a result,

fQ(Oq, T) = 4 ∫ dω tanh(ω/2T)(1 - e-ω/T) A(q, ω) = 4 ∫ dω tanh(ω/2T)(1 - e-ω/T)(Acc(q, ω) + Ass(q, ω)) = fQ(Oc;q, T) + fQ(Os;q, T) = Tr(F). (S51)

In other words, the QFI used in the main text is precisely the trace of the QFI matrix introduced above. Interpreting F as a covariance matrix, this shows that fQ(Oq, T) equals the total variance - the sum of the variances carried by all fluctuation modes.

Furthermore, fQ(Oq, T) provides a robust lower bound on the entanglement depth, as fQ(Oq, T) = fQ(Oc;q, T) + fQ(Os;q, T) is the well-bounded sum witness discussed in Remark 2. Using the result outlined in the remark, for an m-producible state we have

fQ(Oq, T) ≤ m [(∆Oc,q)2 + (∆Os,q)2]. (S52)

Finally, by negation,

fQ(Oq, T) > m [(∆Oc,q)2 + (∆Os,q)2] (S53)

implies the state is at least (m + 1)-partite entangled. As such, all that remains is to obtain a bound for these two channels. Using Eq. (S39) with homogeneous spectral width for each local operator ORμ, ∆Rμ = ∆(ORμ), and cRμ = cos(q · Rμ),

(∆Oc,q)2 = [(∆(ORμ))2/N] ΣRμ cos2(q · Rμ). (S54)

Therefore, by arguing the same for Os,q, we can finally bound fQ(Oq) for any m-producible state:

fQ(Oq) ≤ [m (∆(ORμ))2/N] ΣRμ [cos2(q · Rμ) + sin2(q · Rμ)] = m (∆(ORμ))2. (S55)

As such, we find that fQ(Oq) is bounded above by m(∆(ORμ))2. This is precisely the entanglement bound for the conventional QFI [46, 50, 168], as if we had ignored the phase factor eiq·Rμ altogether.
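The operator identity (S49) behind the trace relation (S51) can be verified directly on a small chain. A sketch with three spin-1/2 sites at illustrative positions (all numerical choices here are arbitrary):

import numpy as np

N, q = 3, 0.7
R = np.array([0.0, 1.0, 2.5])   # illustrative site positions
sz = np.diag([0.5, -0.5])

def site_op(op, i):
    # embed a single-site operator at site i of the N-site chain
    mats = [np.eye(2)] * N
    mats[i] = op
    out = mats[0]
    for M in mats[1:]:
        out = np.kron(out, M)
    return out

Oq = sum(np.exp(1j * q * R[i]) * site_op(sz, i) for i in range(N))
Oc = sum(np.cos(q * R[i]) * site_op(sz, i) for i in range(N))
Os = sum(np.sin(q * R[i]) * site_op(sz, i) for i in range(N))

# Eqs. (S45)-(S46) and (S49): Oq = Oc + i Os and Oq^dag Oq = Oc^2 + Os^2
assert np.allclose(Oq, Oc + 1j * Os)
assert np.allclose(Oq.conj().T @ Oq, Oc @ Oc + Os @ Os)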
To conclude, for an m-producible state, any operator of the form Oq = ΣRμ ORμ eiq·Rμ, where ORμ is some observable with homogeneous spectrum ±1/2 across all sites, has the corresponding QFI bound

fQ(Oq, T) ≤ m (∆(ORμ))2. (S56)

Therefore, for Oq = S α q, where α ∈ {x, y, z} in a spin-1/2 system,

fQ(S α q, T) > m(∆(S α Rμ))2 = m(1/2 - (-1/2))2 = m (S57)

implies that the state is at least (m + 1)-partite entangled, regardless of the momentum position q.

Now, since we have established a well-constructed QFI for non-Hermitian operators, we can extend this even further to α = ± in the main text. As discussed in the main text, fQ(S ± q, T) = fQ(S x q, T) + fQ(S y q, T) is a sum witness. Therefore, by Remark 2, for an m-producible state,

fQ(S ± q, T) = fQ(S x q, T) + fQ(S y q, T) ≤ m(1 + 1) = 2m. (S58)

Equivalently, this is to say that if the spinon channel has nQFI

nQFI(S ± q) = fQ(S ± q, T)/2 > m, (S59)

then the state is at least (m + 1)-partite entangled.

C. Neutron Scattering Cross Section

Similarly, for ADO, to have a meaningful interpretation of the entanglement depth, we would have to determine λmax and λmin of the operator in which we are evaluating our QFI to determine the QFI spectral bound η. To do so, first, we should determine the exact operator O that the neutron scattering cross section contains. Let us define the operators

S NSF q = ΣRμ (ˆp · ẑμ) τz Rμ eiq·Rμ (S60a)
S SF q = ΣRμ (ˆv · ẑμ) τz Rμ eiq·Rμ. (S60b)

Here ˆp denotes the neutron polarization, which is perpendicular to the momentum transfer (ˆp · ˆq = 0), and ˆv is a vector perpendicular to both the polarization vector and the neutron momentum transfer (ˆv · ˆp = ˆv · ˆq = 0). These channels correspond to polarized neutron scattering experiments in the spin-flip/non-spin-flip (SF/NSF) channels with respect to the neutron polarization vector ˆp. By definition,

ADO ∼ ⟨S NSF† q S NSF q⟩ + ⟨S SF† q S SF q⟩, (S61)

where we sum up all transverse modes, which is equivalent to taking the transverse projector in Eq. (S29). Accordingly, we assemble a 2 × 2 QFIM FDO from the two transverse components (taken here as the NSF and SF channels) so that the cross section ADO defined in Eq. (S29) is identified with Tr FDO. This construction is necessary because the transverse projector has two independent components. Hence, there is no single scalar operator S DO with ADO ∼ ⟨S DO† S DO⟩. Nevertheless, we still denote the QFI related to ADO as fQ(S DO q, T) for consistency's sake. The operators entering FDO are generally non-Hermitian, which is admissible since the bounds derived in the previous section apply directly. Applying Eq. (S56) to each channel then yields a lower bound on the entanglement depth.

Finally, to bound the QFI induced by unpolarized neutron scattering, let us denote the corresponding QFI from the ADO channel as the sum witness of Remark 2,

fQ(S DO q) = fQ(S NSF q) + fQ(S SF q). (S62)

Therefore, let us bound the NSF and SF channels to bound the total unpolarized neutron scattering cross section. To do so, we have to determine λmax and λmin of each basis operator to get an actual interpretation of the entanglement depth, and we need to evaluate them at a specific momentum position q. For clarity of the later discussion, let us define the vectors ΠNSF and ΠSF, with four components corresponding to the four pyrochlore sublattices μ, as

Πμ NSF(q) = ˆp · ẑμ (S63a)
Πμ SF(q) = ˆv · ẑμ. (S63b)
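The sublattice weights of Eq. (S63) follow directly from the scattering geometry. A small sketch, assuming v̂ is constructed as q̂ × p̂ (consistent with v̂ · p̂ = v̂ · q̂ = 0; the helper name is hypothetical):

import numpy as np

zhat = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)

def nsf_sf_weights(qhat, phat):
    # Eq. (S63): Pi^mu_NSF = p . z_mu and Pi^mu_SF = v . z_mu, with v = q x p
    vhat = np.cross(qhat, phat)
    vhat = vhat / np.linalg.norm(vhat)
    return zhat @ phat, zhat @ vhat

Pi_nsf, Pi_sf = nsf_sf_weights(np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0]))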
Since, in principle, the operator S (N)SF q is non-Hermitian, we should break it down into the same cosine and sine channels again and discuss the sum of the spectral widths on each channel. By Eq. (S52), we need to bound the cosine and the sine channel again. Let us denote by S (N)SF c(s),q the corresponding cosine and sine channels. Then, using Eq. (S39) with cμ = Πμ (N)SF,

(∆S (N)SF c,q)2 = (1/N) ΣRμ cos2(q · Rμ) |Πμ (N)SF|2. (S64)

Therefore, using Eq. (S52), we can derive a QFI bound on the NSF and SF channels separately for an m-producible state:

fQ(S (N)SF q, T) ≤ (m/N) ΣRμ [cos2(q · Rμ) + sin2(q · Rμ)] |Πμ (N)SF|2 = (m/N) ΣRμ |Πμ (N)SF|2 = (m/4) Σμ |Πμ (N)SF|2. (S65)

In the last line, we have used the fact that Πμ (N)SF depends only on the sublattice but not on the unit cell position; we can then sum over all unit cells, which gives a prefactor of N/4. Finally, the upper bound on fQ(S DO q) for an m-producible state is

fQ(S DO q) = fQ(S NSF q) + fQ(S SF q) ≤ (m/4) Σμ [|Πμ NSF|2 + |Πμ SF|2]. (S66)

Now, to actually evaluate this bound, let us write

Σμ (Πμ (N)SF)2 = Σμ (ˆa · ẑμ)2 = ˆa⊤ (Σμ ẑμ ẑ⊤ μ) ˆa = ˆa⊤ A ˆa, (S67)

where A := Σμ ẑμ ẑ⊤ μ, and ˆa = ˆp for NSF and ˆa = ˆv for SF. For a single pyrochlore tetrahedron, taking the four local ẑμ, a direct sum gives

A = Σ3 μ=0 ẑμ ẑ⊤ μ = (4/3) 𝟙3×3, (S68)

since all off-diagonal entries cancel by symmetry (the signs appear equally often with opposite parity) and Tr A = Σμ ∥ẑμ∥2 = 4 fixes the proportionality constant. Hence

Σμ (Πμ (N)SF)2 = (4/3)∥ˆa∥2 = 4/3, (S69)

which is independent of the direction of the unit vector ˆa. As a result, for an m-producible state, the QFI upper bounds on the NSF, SF, and total scattering channels are given by

fQ(S (N)SF q, T) ≤ (m/4) Σμ |Πμ (N)SF|2 = m/3, (S70a)
fQ(S DO q, T) ≤ 2m/3. (S70b)

To summarize, we derived a bound independent of q for the NSF, SF, and unpolarized scattering channels, with corresponding nQFI

nQFI(S (N)SF q, T) = 3 fQ(S (N)SF q, T) (S71a)
nQFI(S DO q, T) = (3/2) fQ(S DO q, T). (S71b)

We again stress that this is not the optimal bound in general but an upper bound obtained via Cauchy-Schwarz, as shown in Eq. (S65). The true optimal bound depends strongly on the choice of ˆp and ˆv, and thereby on the incident neutron momentum ˆq. In theory, one would need to apply Eq. (S37) carefully to find the best bound.
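The tetrahedron identity (S68), and hence the q-independent value 4/3 in Eq. (S69), can be checked in a couple of lines:

import numpy as np

zhat = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
A = sum(np.outer(z, z) for z in zhat)

# Eq. (S68): the four local axes sum to (4/3) times the 3x3 identity
assert np.allclose(A, (4.0 / 3.0) * np.eye(3))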
arXiv:2510.14808v1 [cs.AI] 16 Oct 2025

Agentic NL2SQL to Reduce Computational Costs

Dominik Jehle
jehled@cs.uni-freiburg.de
University of Freiburg
Freiburg im Breisgau, Germany

Lennart Purucker
purucker@cs.uni-freiburg.de
University of Freiburg

Frank Hutter
Prior Labs
ELLIS Institute Tübingen
University of Freiburg

Abstract
Translating natural language queries into SQL queries (NL2SQL or Text-to-SQL) has recently been empowered by large language models (LLMs). Using LLMs to perform NL2SQL methods on a large collection of SQL databases necessitates processing large quantities of meta-information about the databases, which in turn results in lengthy prompts with many tokens and high processing costs. To address this challenge, we introduce the Datalake Agent, an agentic system designed to enable an LLM to solve NL2SQL tasks more efficiently. Instead of utilizing direct solvers for NL2SQL that call the LLM once with all meta-information in the prompt, the Datalake Agent employs an interactive loop to reduce the utilized meta-information. Within the loop, the LLM is used in a reasoning framework that selectively requests only the necessary information to solve a table question answering task. We evaluate the Datalake Agent on a collection of 23 databases with 100 table question answering tasks. The Datalake Agent reduces the tokens used by the LLM by up to 87% and thus allows for substantial cost reductions while maintaining competitive performance.

1 Introduction
Large language models (LLMs) show significant potential to advance the field of information retrieval and understanding of tabular data. Current models have demonstrated notable improvements in tasks related to the understanding of tabular data [17, 8, 14]. Optimizing the prompts plays a critical role in maximizing the performance of models and can significantly influence the output behavior for a specific task [9, 1]. At the same time, the number of tokens needed to represent the prompt directly determines the cost of a request [11, 2]. Reducing the number of tokens decreases the overall cost, which is often needed, as large prompts can quickly entail significant expenses.

In the field of translating natural language queries into SQL queries (NL2SQL or Text-to-SQL) [8], the prompt size can quickly explode when the number of SQL databases grows. This issue becomes particularly relevant when working in large enterprise companies that store an extensive amount of structured, tabular data. Typically, when solving tasks on large collections of databases, LLMs receive large amounts of meta-information about the structure and types of all databases in one direct prompt. Yet, most NL2SQL tasks require only a subset of the database, rendering much of the meta-information in the prompt useless while still incurring additional costs for users. Thus, we propose to select only the meta-information truly necessary for a single task to significantly reduce the cost of solving the NL2SQL task. To this end, we introduce the Datalake Agent, an agentic system designed to enable an LLM to solve NL2SQL tasks more efficiently by reducing the utilized meta-information through a reasoning framework. The framework operates across three core areas: information acquisition, iterative refinement, and query formulation. We evaluate the Datalake Agent by comparing it to a direct prompting strategy with OpenAI's GPT-4o mini [10].

39th Conference on Neural Information Processing Systems (NeurIPS 2025).
Both methods, the Datalake Agent and direct prompting, are evaluated on a set of 319 tables collected from 23 different databases, some of which contain real-world data [13]. Both methods are tested on a manually created collection of 100 table question answering tasks. Our results show that the Datalake Agent significantly reduces the costs of processing large amounts of data. Furthermore, the Datalake Agent retains competitive performance on average, while also improving the LLM's capability to process complex queries more effectively.

Our contributions are: (1) the Datalake Agent, a reasoning framework that enables large language models (LLMs) to efficiently navigate and query large volumes of meta-information; (2) a new benchmark of 100 table-based question answering tasks, carefully curated to evaluate model performance across varying levels of complexity.

2 Related Work
There are various methods for leveraging large language models (LLMs) for large data. One possible method is to provide the LLM direct access to all data by transmitting all information via one prompt to the model. However, Qiu et al. [12] have shown a clear negative performance trend across all tested models: when the prompt size increases, the overall performance declines. Likewise, Dong and Wang [3] observed that for prompts exceeding 1000 tokens, the effectiveness of GPT models for tabular data devolves to mere guesswork. Thus, alternatives have been developed, such as Text2API [15] or NL2SQL [8]. Text2API, instead of granting the model direct access to complete table information, enables interaction with the dataset through API endpoints [15]. While this method demonstrates impressive results, it restricts the model to retrieving information exclusively through the available API endpoints and cannot access data beyond them. Translating natural language queries into SQL (NL2SQL) instead offers two major advantages. First, there is no need to input entire tables into the model. Instead, the model receives only meta-information about the tables, which is especially beneficial for datasets with a large number of rows. Second, unlike Text2API, NL2SQL does not impose limitations on information retrieval. By generating SQL queries, the model can access and process all the necessary data dynamically [8]. While LLMs show remarkable capabilities in translating natural language queries into SQL, their accuracy decreases with growing schema complexity, larger numbers of tables, and more demanding query structures [8]. Our work focuses on improving the efficiency of NL2SQL with LLMs when there is an excessive amount of meta-information available, while maintaining or enhancing their performance.

The community has developed several benchmarks for NL2SQL, such as the Bird [6, 7] and Spider [16, 4, 5] benchmarks. Yet these benchmarks only provide the relevant, potentially necessary, meta-information to models instead of all the available meta-information. Our work establishes a new benchmark, building on prior benchmarks. We simulate a potentially more realistic scenario in an enterprise company where the user, and by extension the LLM, does not know which database contains the necessary meta-information. Instead, the LLM is tasked with solving an NL2SQL task, specifically a table question answering task, given the natural language query and the meta-information of all available databases and tables.
3 Method
To efficiently handle large volumes of meta-information, we employ the Datalake Agent, a reasoning framework that guides large language models (LLMs) through structured, task-driven information acquisition. The method can be explained in terms of three core areas:

Information Acquisition. During information acquisition, the LLM begins by gathering general schema knowledge. The Datalake Agent provides an intermediary layer for structured exploration using predefined commands: GetDBDescription retrieves high-level summaries of available databases, GetTables enumerates tables within a selected database, and GetColumns exposes column-level metadata, including names and types. A visual overview of the workflow is provided in Appendix A.

Iterative Refinement. During iterative refinement, the LLM follows a hierarchical approach, progressing from broad, high-level data toward task-specific details. At any point, it can revert to a coarser level of information if needed, before refining again. This flexible, feedback-driven loop allows the LLM to reason autonomously over complex database structures while ensuring that only relevant information is retrieved.

Query Formulation. Finally, in query formulation, once sufficient schema information has been collected, the LLM generates precise SQL queries using DBQueryFinalSQL. The Datalake Agent executes these queries through a dedicated access layer, which guarantees scalability and modularity. Integrating new databases requires minimal adjustments while preserving the systematic reasoning approach.

This hierarchical and iterative method allows the LLM to autonomously select relevant information, refine its understanding, and generate accurate queries. A minimal sketch of this loop is given below.
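The following Python sketch illustrates the interactive loop described above. The four command names (GetDBDescription, GetTables, GetColumns, DBQueryFinalSQL) and the forced-response cap of 10 requests (Appendix E) come from the paper; the llm_step callable, the catalog interface, and the JSON action format are hypothetical stand-ins for illustration.

import json

MAX_REQUESTS = 10  # forced-response safeguard, see Appendix E

def run_datalake_agent(question, catalog, llm_step, run_sql):
    # history accumulates the task and every schema observation so far
    history = [{"role": "user", "content": question}]
    for _ in range(MAX_REQUESTS):
        # the LLM picks the next command, e.g. {"command": "GetTables", "db": "f1"}
        action = json.loads(llm_step(history))
        cmd = action["command"]
        if cmd == "GetDBDescription":
            obs = catalog.describe_databases()                  # database summaries
        elif cmd == "GetTables":
            obs = catalog.list_tables(action["db"])             # tables of one database
        elif cmd == "GetColumns":
            obs = catalog.list_columns(action["db"], action["table"])  # names and types
        elif cmd == "DBQueryFinalSQL":
            return run_sql(action["sql"])                       # query formulation
        else:
            obs = f"Unknown command: {cmd}"
        history.append({"role": "tool", "content": str(obs)})
    # after 10 requests without a final query, force an SQL answer
    history.append({"role": "user", "content": "Provide your final SQL query now."})
    return run_sql(json.loads(llm_step(history))["sql"])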
4 Experimental Setup
Model. For our experiments, we employ the OpenAI GPT-4o mini model [10], accessed via the OpenAI API. To ensure consistent behavior, the temperature parameter is fixed at 0.1. The model outputs follow the structured response format described above.

Databases. As the foundation for evaluation, we select five databases from the Relational Deep Learning Benchmark (RelBench) [13]. Since enterprise environments typically consist of large, heterogeneous database collections, we expand this collection with 18 additional simulated databases from domains such as sports, politics, business, and geography. These simulated databases contain only schema information but no data, serving to increase the complexity of the evaluation setup.

Table Question Answering Benchmark. To evaluate the Datalake Agent, we construct a benchmark of 100 natural language Table Question Answering tasks. Each task requires the model to generate a SQL query without being informed which tables are relevant. Each of the five RelBench databases contributes 20 tasks, spanning two levels of difficulty. We evaluate the system under three experimental settings, each differing in the number of available databases and tables. The first setting includes only the RelBench databases, totaling 42 tables. The second and third settings incorporate the simulated databases, resulting in 159 and 319 tables, respectively. Examples of benchmark questions are presented in Appendix D.

Direct Prompt Baseline. For comparison, we design a direct prompt baseline that provides the model with all necessary schema information upfront. Its system prompt mirrors that of the Datalake Agent but excludes the option to request schema information incrementally.

5 Results
Performance. Examining the trends of both approaches, as illustrated in Figure 1, reveals that the Direct Solver initially outperforms the Datalake Agent. However, as data volume increases, both methods experience a decline in accuracy. The Direct Solver exhibits a steeper drop in performance compared to the Datalake Agent, a trend that is particularly pronounced for more complex tasks. A more detailed performance analysis is provided in Appendix C.

Figure 1: Performance trend of Direct Solver and Datalake Agent.
Figure 2: Trend of the average number of tokens required for solving one task.

Input Tokens. Figure 2 illustrates the average number of tokens consumed by the Datalake Agent and the Direct Solver as the number of tables increases. The Datalake Agent shows relatively stable token usage, rising slightly from 3,670 tokens with 42 tables to 4,264 tokens with 319 tables. In contrast, the Direct Solver exhibits a steep increase from 7,407 tokens at 42 tables to 34,602 tokens at 319 tables. These results indicate that the Datalake Agent scales more efficiently, maintaining consistent overhead even as database size grows, while the Direct Solver's token consumption increases nearly linearly with the number of tables. Additional information on input token usage can be found in Appendix B.

Figure 3: Comparison of costs for 1000 tasks across methods and LLMs.

Associated Costs. To contextualize token usage, Figure 3 presents a cost calculation based on average input tokens per task, scaled to 1000 tasks. API costs were taken from official company specifications [11, 2]. A significant difference emerges between the Direct Solver and the Datalake Agent. For a table size of 42, the Direct Solver incurs double the cost of the Datalake Agent, increasing to eight times the cost at 319 tables. For the OpenAI o1 model, this corresponds to a difference exceeding $450 per 1000 tasks.
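The cost figures follow from simple arithmetic over the reported average input tokens. A minimal sketch, assuming a price of $15 per million input tokens (an assumption consistent with the roughly $450 gap reported for OpenAI's o1; consult the providers' pricing pages [11, 2] for current values):

# average input tokens per task in the 319-table setting (from Section 5)
TOKENS_PER_TASK = {"Datalake Agent": 4264, "Direct Solver": 34602}
PRICE_PER_M_TOKENS = 15.0  # assumed $/1M input tokens

for method, tokens in TOKENS_PER_TASK.items():
    cost = 1000 * tokens * PRICE_PER_M_TOKENS / 1_000_000  # cost of 1000 tasks in $
    print(f"{method}: ${cost:.2f} per 1000 tasks")
# -> roughly $64 vs. $519, i.e. a gap of about $455 per 1000 tasks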
6 Conclusion
This study investigates the impact of large-scale databases on LLM performance using NL2SQL approaches. A total of 23 databases with 319 tables, including both real and simulated data, were employed. To handle the substantial information volume, the Datalake Agent was introduced, enabling the model to selectively query only the data required for each task. Evaluation on 100 Table-QA tasks, divided into simple (single-table) and complex (multi-table) tasks, across three settings with 42, 159, and 319 tables shows that while the Direct Solver performs well on smaller datasets, its performance deteriorates sharply as table numbers increase. In contrast, the Datalake Agent consistently outperforms the direct approach in larger and more complex settings, while requiring significantly fewer input tokens, thereby reducing computational costs.

Limitations. A key limitation of our agentic system is the potential for infinite reasoning loops. In some cases, the model struggles to identify the correct tables, leading to repeated requests and incorrect answers. A more detailed explanation is provided in Appendix E. Moreover, the evaluation is confined to GPT-4o mini and only a few database settings, which may limit generalizability to broader scenarios.

Future Work. Future research should extend evaluations to larger and more complex datasets to assess scalability and generalization more thoroughly. Addressing the infinite loop issue is critical, as developing strategies to prevent repeated queries could improve reliability and accuracy. Further studies could explore additional LLMs and more diverse task types to fully understand the potential and limitations of the Datalake Agent in real-world applications. To conclude, our Datalake Agent is a first step towards improving the efficiency of NL2SQL in enterprise use cases. Based on our promising first results, we believe the Datalake Agent can positively impact the use of LLMs for NL2SQL.

Acknowledgements
L.P. acknowledges funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under SFB 1597 (SmallData), grant number 499552394; Frank Hutter acknowledges the financial support of the Hector Foundation. Finally, we thank the reviewers for their constructive feedback and contribution to improving the paper.

References
[1] Ming Cheung. A reality check of the benefits of LLM in business, June 2024. URL http://arxiv.org/abs/2406.10249. arXiv:2406.10249 [cs].
[2] Deepseek. Models & pricing, 2025. URL https://api-docs.deepseek.com/quick_start/pricing.
[3] Haoyu Dong and Zhiruo Wang. Large language models for tabular data: Progresses and future directions. In Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '24, pages 2997-3000, New York, NY, USA, July 2024. Association for Computing Machinery. ISBN 979-8-4007-0431-4. doi: 10.1145/3626772.3661384. URL https://dl.acm.org/doi/10.1145/3626772.3661384.
[4] Fangyu Lei, Jixuan Chen, Yuxiao Ye, Ruisheng Cao, Dongchan Shin, Hongjin Su, Zhaoqing Suo, Hongcheng Gao, Wenjing Hu, Pengcheng Yin, Victor Zhong, Caiming Xiong, Ruoxi Sun, Qian Liu, Sida Wang, and Tao Yu. Spider 2.0: Evaluating language models on real-world enterprise text-to-SQL workflows, 2024. URL https://arxiv.org/abs/2411.07763.
[5] Fangyu Lei, Jixuan Chen, Yuxiao Ye, Ruisheng Cao, Dongchan Shin, Hongjin Su, Zhaoqing Suo, Hongcheng Gao, Wenjing Hu, Pengcheng Yin, et al. Spider 2.0: Evaluating language models on real-world enterprise text-to-SQL workflows. arXiv preprint arXiv:2411.07763, 2024.
[6] Jinyang Li, Binyuan Hui, Ge Qu, Jiaxi Yang, Binhua Li, Bowen Li, Bailin Wang, Bowen Qin, Ruiying Geng, Nan Huo, et al. Can LLM already serve as a database interface? A big bench for large-scale database grounded text-to-SQLs. Advances in Neural Information Processing Systems, 36:42330-42357, 2023.
[7] Jinyang Li, Xiaolong Li, Ge Qu, Per Jacobsson, Bowen Qin, Binyuan Hui, Shuzheng Si, Nan Huo, Xiaohan Xu, Yue Zhang, et al. SWE-SQL: Illuminating LLM pathways to solve user SQL issues in real-world applications. arXiv preprint arXiv:2506.18951, 2025.
[8] Xinyu Liu, Shuyu Shen, Boyan Li, Peixian Ma, Runzhi Jiang, Yuxin Zhang, Ju Fan, Guoliang Li, Nan Tang, and Yuyu Luo. A survey of NL2SQL with large language models: Where are we, and where are we going?, November 2024. URL http://arxiv.org/abs/2408.05109. arXiv:2408.05109.
[9] Zeeshan Memon, Muhammad Arham, Adnan Ul-Hasan, and Faisal Shafait. LLM-informed discrete prompt optimization. July 2024. URL https://openreview.net/forum?id=d0jQuZe6k0.
[10] OpenAI. GPT-4o mini: Advancing cost-efficient intelligence. OpenAI Blog, 2024. URL https://openai.com/index/gpt-4o-mini-advancing-cost-efficient-intelligence/.
[11] OpenAI. Pricing, 2025. URL https://openai.com/api/pricing/.
[12] Zipeng Qiu, You Peng, Guangxin He, Binhang Yuan, and Chen Wang. TQA-Bench: Evaluating LLMs for multi-table question answering with scalable context and symbolic extension, November 2024. URL http://arxiv.org/abs/2411.19504. arXiv:2411.19504 [cs].
[13] Joshua Robinson, Rishabh Ranjan, Weihua Hu, Kexin Huang, Jiaqi Han, Alejandro Dobles, Matthias Fey, Jan E. Lenssen, Yiwen Yuan, Zecheng Zhang, Xinwei He, and Jure Leskovec. RelBench: A Benchmark for Deep Learning on Relational Databases. Advances in Neural Information Processing Systems, 37:21330–21341, December 2024. URL https://proceedings.neurips.cc/paper_files/paper/2024/hash/25cd345233c65fac1fec0ce61d0f7836-Abstract-Datasets_and_Benchmarks_Track.html.
[14] Liang Shi, Zhengju Tang, Nan Zhang, Xiaotong Zhang, and Zhi Yang. A Survey on Employing Large Language Models for Text-to-SQL Tasks, November 2024. URL http://arxiv.org/abs/2407.15186. arXiv:2407.15186 [cs].
[15] Mingzhu Wang, Yuzhe Zhang, Qihang Zhao, Junyi Yang, and Hong Zhang. Redefining Information Retrieval of Structured Database via Large Language Models, November 2024. URL http://arxiv.org/abs/2405.05508. arXiv:2405.05508.
[16] Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, Zilin Zhang, and Dragomir R. Radev. Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-SQL task. CoRR, abs/1809.08887, 2018. URL http://arxiv.org/abs/1809.08887.
[17] Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zheng Liu, Zhicheng Dou, and Ji-Rong Wen. Large Language Models for Information Retrieval: A Survey, September 2024. URL http://arxiv.org/abs/2308.07107. arXiv:2308.07107 [cs].

A The Datalake Agent
Figure 4: Solution process of a Table Question Answering task using the Datalake Agent.
Figure 5: Communication loop between the model and the Datalake Agent before solving a task.

B Input Tokens
Figure 6: Required input tokens per test with 42 tables: comparison of the model using the Datalake Agent and the Direct Solver.
Figure 7: Required input tokens per test with 159 tables: comparison of the model using the Datalake Agent and the Direct Solver.
Figure 8: Required input tokens per test with 319 tables: comparison of the model using the Datalake Agent and the Direct Solver.

C Performance
Figure 9: Correct answers to simple and more complex tasks with 42 tables.
Figure 10: Correct answers to simple and more complex tasks with 159 tables.
Figure 11: Correct answers to simple and more complex tasks with 319 tables.

D Example Tasks for Evaluation
1 Which driver has the most wins in Formula 1?
2 Give me the latest search query on Avito.
3 Give me the titles of the 2 most recent clinical trials.
4 Give me the total number of posts in Stack-Exchange.
5 How many clinical studies are available?
7 How many comments does the most frequently commented post on Stack Exchange have?
8 How many searches did Avito have on April 28, 2015?
9 Name 3 clinical trial sponsors.
10 How many Formula 1 races were there in 2015?
11 Give me the ids of the 10 newest posts from Stack-Exchange.
12 What different member statuses does H&M have?
13 Tell me 3 Formula 1 drivers from Germany.
14 How many users does Avito have?
15 Give me the title of the oldest medical study.
16 Tell me the 8 most expensive products from H&M.
17 Show me the AccountId of the longest existing user on Stack-Exchange.
18 How many customers at H&M are younger than 40?
19 What is the cheapest advertising on Avito?
20 Which country has the most Formula 1 drivers?

E Handling of Infinite Reasoning Loops
A key limitation of our agentic system is the potential for infinite reasoning loops.
In such cases, the model repeatedly issues the same requests without acquiring any new information, effectively becoming stuck. For instance, it may continuously request the same metadata for a given table without progressing further. Attempts to prevent the model from repeating requests, or to encourage it to seek alternative information, were not successful.

Mitigation Strategy: Forced Response after 10 Requests. To ensure progress, the system is designed to force the model to provide an SQL query after 10 repeated requests. This intervention breaks the loop while maintaining the autonomy of the agent. This approach provides a practical safeguard against infinite loops and ensures continued evaluation of the agentic system.
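A minimal sketch of this safeguard follows; the agent interface (next_action, force_final_sql) is a hypothetical stand-in, and only the repeat-counting guard mirrors the mechanism described above.

```python
from collections import Counter
from dataclasses import dataclass
from typing import Callable, Optional

MAX_REPEATED_REQUESTS = 10  # threshold from the mitigation strategy above

@dataclass
class Action:
    kind: str     # "request" for a metadata lookup, or "final_sql"
    payload: str  # the request key, or the SQL string

def run_task(next_action: Callable[[Optional[str]], Action],
             force_final_sql: Callable[[], Action]) -> str:
    """Agent loop with the forced-response safeguard.

    `next_action` and `force_final_sql` stand in for the LLM-driven agent.
    """
    counts: Counter = Counter()
    observation: Optional[str] = None
    while True:
        action = next_action(observation)
        if action.kind == "final_sql":
            return action.payload
        counts[action.payload] += 1
        if counts[action.payload] >= MAX_REPEATED_REQUESTS:
            # Break the loop: force the model to commit to an SQL query.
            return force_final_sql().payload
        observation = f"metadata for {action.payload}"  # stubbed lookup result
```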
arXiv:2510.14819v1 [cs.CV] 16 Oct 2025
Unifying Environment Perception and Route Choice Modeling for Trajectory Representation Learning
Ji Cao1,3∗, Yu Wang1∗, Tongya Zheng2†, Zujie Ren3, Canghong Jin2, Gang Chen1, Mingli Song1
1Zhejiang University 2Hangzhou City University 3Zhejiang Lab
{caoj25, yu.wang, renzju, cg, brooksong}@zju.edu.cn, doujiang_zheng@163.com, jinch@hzcu.edu.cn

Abstract
Trajectory Representation Learning (TRL) aims to encode raw trajectories into low-dimensional vectors, which can then be leveraged in various downstream tasks, including travel time estimation, location prediction, and trajectory similarity analysis. However, existing TRL methods suffer from a key oversight: treating trajectories as isolated spatio-temporal sequences, without considering the external environment and internal route choice behavior that govern their formation. To bridge this gap, we propose a novel framework that unifies comprehensive environment Perception and explicit Route choice modeling for effective Trajectory representation learning, dubbed PRTraj. Specifically, PRTraj first introduces an Environment Perception Module to enhance the road network by capturing multi-granularity environmental semantics from surrounding POI distributions. Building on this environment-aware backbone, a Route Choice Encoder then captures the route choice behavior inherent in each trajectory by modeling its constituent road segment transitions as a sequence of decisions. These route-choice-aware representations are finally aggregated to form the global trajectory embedding. Extensive experiments on 3 real-world datasets across 5 downstream tasks validate the effectiveness and generalizability of PRTraj. Moreover, PRTraj demonstrates strong data efficiency, maintaining robust performance under few-shot scenarios. Our code is available online1.

CCS Concepts
• Information systems → Data mining.

Keywords
Urban computing, Trajectory data mining, Trajectory representation learning, Route choice modeling

1 Introduction
The proliferation of devices equipped with the Global Positioning System (GPS) facilitates the large-scale collection of spatio-temporal trajectories, capturing fine-grained movement patterns of vehicles or pedestrians [4, 38, 39, 57]. These trajectories contain rich spatio-temporal semantics and can be used to support various downstream applications, including travel time estimation [28], location prediction [43], and trajectory similarity analysis [19].

1https://anonymous.4open.science/r/PRTraj
∗Equal contribution. †Corresponding author.

Figure 1: Illustration of two core factors in trajectory formation, which are often overlooked in existing TRL methods. (a) External Urban Environment: The functional semantics of an urban area profoundly shape its traffic patterns. Left: a tourist site (Summer Palace) and two business districts (Zhongguancun, Beijing CBD) in Beijing. Right: business areas exhibit distinct hourly speed patterns (speed in km/h over the hour of day) compared to the tourist site. (b) Internal Route Choice: Different drivers may choose significantly different routes for the same origin-destination (OD) pair, reflecting diverse internal route choice behavior.

Trajectory representation learning (TRL), which encodes raw trajectory data into low-dimensional embeddings, receives considerable attention for its capacity to extract spatio-temporal patterns from trajectories [20, 29, 50].
Early TRL methods [24, 52] are predominantly task-specific, which inherently limits their generalizability across diverse downstream tasks. To overcome this limitation, subsequent studies adopt self-supervised learning strategies to improve the generalizability of learned representations. For instance, Trembr [14] employs an encoder-decoder architecture for trajectory reconstruction to capture spatio-temporal characteristics; PIM [6] maximizes mutual information to derive path representations from both global and local perspectives; START [20] captures travel semantics and temporal regularities via masked language modeling and contrastive learning; further extending START, TRACK [16] enriches road segment features with historical speed profiles, while GREEN [58] learns multi-scale contextual information from both grid-level and segment-level trajectories. However, despite these advancements, existing methods fundamentally regard a trajectory as an isolated spatio-temporal sequence, overlooking the latent influential factors that govern its formation. The formation of a trajectory does not occur in a vacuum, but is the result of a human's sequential decision-making process within a complex urban environment. Ignoring this underlying formation process leads to representations that lack behavioral fidelity and thus face inherent limitations in downstream tasks.

This trajectory formation process is governed by a duality of factors. On the one hand, the external urban environment profoundly shapes mobility patterns. As illustrated in Figure 1(a), the semantic function of an area profoundly dictates its traffic dynamics. Two distant business districts (Zhongguancun and Beijing CBD) exhibit remarkably similar hourly speed patterns, starkly contrasting with the nearby Summer Palace, a tourist site. On the other hand, the trajectory itself embodies the individual's internal route choice behavior. As shown in Figure 1(b), even for the same origin-destination (OD) pair, different drivers choose significantly different routes, revealing diverse preferences and decision criteria, which aligns with the core principles of Route Choice Models [31] from transportation studies. Crucially, these external and internal factors are not independent: an individual's route choice is fundamentally constrained and informed by their perception of the environment.

Motivated by these observations, we propose a novel framework that unifies comprehensive environment Perception with explicit Route choice modeling to enable more effective Trajectory representation learning, dubbed PRTraj. Firstly, our method constructs a semantically-rich road network by capturing and integrating multi-granularity environmental information from surrounding POIs. Specifically, we leverage a Large Language Model (LLM) to interpret raw POI distributions into rich descriptions, thereby enriching each road segment representation with an understanding of both local context and regional function. Secondly, leveraging this environment-aware road network, the model captures internal route choice behavior by framing each trajectory as a sequence of decisions, where each choice is modeled by a Wide & Deep network [7] that synthesizes key navigational factors with the rich environmental context from our aforementioned road network.
Finally, a Trajectory Semantic Integrator fuses these behavioral representations with temporal features and aggregates them into a global trajectory embedding for various downstream tasks. The entire model is end-to-end and optimized with self-supervised objectives, including Masked Language Modeling (MLM) and contrastive learning to enhance contextual and semantic understanding. To summarize, we make the following contributions:
• We identify a key oversight in existing TRL: treating trajectories as isolated spatio-temporal sequences, rather than as sequential human decision-making within an urban environment.
• We propose PRTraj, a novel TRL framework that first constructs an environment-aware road network backbone to support the subsequent modeling of route choice behavior, whose outputs are finally aggregated into a unified trajectory embedding.
• We conduct extensive experiments on 3 real-world trajectory datasets across 5 downstream tasks. The results demonstrate that our method significantly outperforms state-of-the-art baselines, validating its effectiveness.

2 Preliminaries
2.1 Definitions
Definition 1 (GPS Trajectory). A GPS trajectory is a sequence of GPS points $\mathcal{T}^{\mathrm{GPS}} = \{\tau^{\mathrm{GPS}}_1, \tau^{\mathrm{GPS}}_2, \ldots, \tau^{\mathrm{GPS}}_n\}$, where each point $\tau^{\mathrm{GPS}}_i = (\mathrm{lat}_i, \mathrm{lon}_i, t_i)$ comprises latitude, longitude, and timestamp.
Definition 2 (Road Network). A road network is modeled as a directed graph $\mathcal{G} = \langle \mathcal{V}, \mathcal{E} \rangle$, where $\mathcal{V}$ is the set of road segments, and $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ is a set of edges representing connectivity. An edge $e_{i,j} \in \mathcal{E}$ indicates that road segment $v_j$ is directly reachable from $v_i$.
Definition 3 (Road Network Constrained Trajectory). A road network constrained trajectory2 $\mathcal{T} = \{\tau_1, \tau_2, \ldots, \tau_n\}$ is a timestamped sequence of road segments, derived from a raw GPS trajectory through map matching. Each element $\tau_i = (r_i, t_i)$ consists of a road segment ID $r_i$ and its corresponding timestamp $t_i$.
Definition 4 (Point of Interest). A point of interest (POI) is a geographic entity represented as $p = (\mathrm{lat}, \mathrm{lon}, \mathit{cls}, \mathrm{name})$, where lat and lon are its latitude and longitude, $\mathit{cls} = (c_1, c_2)$ is a two-level functional category (e.g., $c_1$ = Shopping Services, $c_2$ = Convenience Stores), and name is the POI's name (e.g., 7-Eleven).

2For convenience, "trajectory" will refer to a "road network constrained trajectory" in unambiguous contexts throughout this paper.

2.2 Problem Statement
Given a trajectory dataset $\mathcal{D} = \{\mathcal{T}_1, \mathcal{T}_2, \ldots, \mathcal{T}_{|\mathcal{D}|}\}$ and a road network $\mathcal{G}$, our objective is to learn a mapping function $F: \mathcal{T} \rightarrow \mathbf{z} \in \mathbb{R}^d$, which encodes each trajectory $\mathcal{T}$ into a $d$-dimensional representation vector $\mathbf{z}$. The representation $\mathbf{z}$ is expected to be effective and informative for various downstream tasks.
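Read literally, Definitions 1-4 and the problem statement map onto simple data structures. The sketch below is one such rendering; all field names are illustrative rather than taken from the released code.

```python
# Minimal data structures corresponding to Definitions 1-4.
from dataclasses import dataclass

@dataclass
class GPSPoint:                 # Definition 1: one element of a GPS trajectory
    lat: float
    lon: float
    t: float                    # timestamp

@dataclass
class TrajectoryPoint:          # Definition 3: (road segment ID, timestamp)
    segment_id: int
    t: float

@dataclass
class POI:                      # Definition 4
    lat: float
    lon: float
    cls: tuple                  # two-level functional category (c1, c2)
    name: str

# Definition 2: road network as adjacency over segment IDs.
RoadNetwork = dict             # segment_id -> list of reachable segment_ids

# A road-network-constrained trajectory is a list of TrajectoryPoint; the
# learning objective is a map from such a list to a d-dimensional vector z.
Trajectory = list
```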
3 Methodology
In this section, we introduce PRTraj, a novel TRL framework designed to unify the perception of the external environment with the modeling of internal route choice behavior. As depicted in Figure 2, PRTraj consists of three core modules: (1) an Environment Perception Module that comprehensively models multi-granularity urban semantics via POI distributions; (2) a Route Choice Encoder designed to model route choice behavior explicitly; and (3) a Trajectory Semantic Integrator that aggregates segment-level representations into a global trajectory embedding. The framework is trained end-to-end in a self-supervised manner using both an MLM loss and contrastive learning.

Figure 2: Overview of the PRTraj framework. It first employs an Environment Perception Module to construct an environment-aware road network representation by capturing urban environmental semantics. Building on this, the Route Choice Encoder models the decision-making process to extract route choice behaviors embedded within the trajectory. Finally, a Trajectory Semantic Integrator aggregates these outputs to produce the global trajectory representation.

3.1 Environment Perception Module
The semantics of the transportation network are profoundly shaped by the surrounding urban environment, which can be effectively perceived through the distribution and attributes of POIs. However, disparities in spatial granularity and semantics between POIs and road segments hinder effective modeling. To bridge this gap, the Environment Perception Module leverages LLMs to model and integrate POI contexts at multiple granularities, thereby enriching the semantic representation of the road network.

To establish a basis for this enrichment, we first define the initial representation for each road segment. Specifically, each segment is encoded as a vector $\mathbf{r}_i \in \mathbb{R}^d$ by summing the embeddings of four attributes: road type, length, in-degree, and out-degree:
$$\mathbf{r}_i = \mathbf{r}^{\mathrm{type}}_i + \mathbf{r}^{\mathrm{length}}_i + \mathbf{r}^{\mathrm{in}}_i + \mathbf{r}^{\mathrm{out}}_i. \quad (1)$$
This initial representation is then enhanced by modeling environment semantics at both fine and coarse granularities.
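A minimal PyTorch sketch of Eq. (1) is given below; the vocabulary sizes are placeholders, and discretizing length and degrees into bins before the embedding lookup is an assumption, not a detail given in the text.

```python
import torch
import torch.nn as nn

class SegmentEmbedding(nn.Module):
    """Sketch of Eq. (1): the initial segment vector r_i as the sum of four
    attribute embeddings (road type, length bin, in-degree, out-degree)."""

    def __init__(self, d: int, n_types: int = 20, n_len_bins: int = 50,
                 max_deg: int = 10):
        super().__init__()
        self.type_emb = nn.Embedding(n_types, d)
        self.len_emb = nn.Embedding(n_len_bins, d)
        self.in_emb = nn.Embedding(max_deg + 1, d)
        self.out_emb = nn.Embedding(max_deg + 1, d)

    def forward(self, road_type, len_bin, in_deg, out_deg):
        # Eq. (1): r_i = r_type + r_length + r_in + r_out
        return (self.type_emb(road_type) + self.len_emb(len_bin)
                + self.in_emb(in_deg) + self.out_emb(out_deg))

# r = SegmentEmbedding(d=128)(torch.tensor([3]), torch.tensor([7]),
#                             torch.tensor([2]), torch.tensor([2]))
```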
Fine-Grained Semantic Modeling. For fine-grained modeling, we capture localized urban context at the road segment level by analyzing the distribution of proximate POIs. Specifically, POIs within a fixed proximity $\delta$ of each road segment are extracted and grouped according to their two-level functional category information. These grouped POIs are then used to prompt Qwen3-8B [46] to generate natural language descriptions that characterize the local environment of each road segment from a human-centric perspective. The prompt template and illustrative prompt-output pairs are provided in Section A.1. These descriptions are subsequently encoded into dense semantic vectors, $\mathbf{p}^{\mathrm{fine}}_i \in \mathbb{R}^d$, for each road segment $i$ using the Qwen3-Embedding-8B text embedding model [55].

To mitigate the limitations of isolated segment-level representations and incorporate broader contextual information, a CrossGAT is employed. Here, the road segment representation $\mathbf{r}_i$ serves as the query, while the POI context embeddings $\mathbf{p}^{\mathrm{fine}}_j$ of adjacent road segments act as keys and values in a cross-attention mechanism. This enables each segment to selectively aggregate semantically relevant contextual information from its surroundings:
$$\tilde{\mathbf{p}}^{\mathrm{fine}}_i = \mathbf{W}_p \mathbf{p}^{\mathrm{fine}}_i + \sum_{j \in \mathcal{N}(i)} \alpha_{i,j} \mathbf{W}_p \mathbf{p}^{\mathrm{fine}}_j, \quad (2)$$
where the attention coefficients $\alpha_{i,j}$ are computed as:
$$\alpha_{i,j} = \frac{\exp\!\big(\mathbf{a}^\top \mathrm{LeakyReLU}(\mathbf{W}_r \mathbf{r}_i + \mathbf{W}_p \mathbf{p}^{\mathrm{fine}}_j)\big)}{\sum_{k \in \mathcal{N}(i)} \exp\!\big(\mathbf{a}^\top \mathrm{LeakyReLU}(\mathbf{W}_r \mathbf{r}_i + \mathbf{W}_p \mathbf{p}^{\mathrm{fine}}_k)\big)}. \quad (3)$$
Here, $\mathcal{N}(i)$ is the set of segments adjacent to $r_i$. LeakyReLU [36] is an activation function with a negative input slope of 0.2. $\mathbf{a} \in \mathbb{R}^d$ and $\mathbf{W}_r, \mathbf{W}_p \in \mathbb{R}^{d \times d}$ are learnable parameters.

Coarse-Grained Semantic Modeling. To capture regional POI influence, the city is divided into grid cells of side length $L$. For each POI primary category $c$ (i.e., the top-level category $c_1$ in Definition 4), we identify clusters by selecting the top 10% of grid cells with the highest POI density. Note that a single grid cell may contain multiple clusters, each corresponding to a different primary category $c$. These structured POI distributions are used to prompt Qwen3-8B [46] to generate cluster-level natural language descriptions (refer to Section A.2 for details), which are subsequently encoded using Qwen3-Embedding-8B [55] into grid-level vectors $\mathbf{g}^c_{(x,y)} \in \mathbb{R}^d$, similar to the procedure used in fine-grained modeling. To model the spatial diffusion of influence from each POI cluster type $c$, a $3 \times 3$ convolutional kernel is applied over $\mathbf{g}^c_{(x,y)}$, enabling localized propagation of regional semantics within the urban grid:
$$\tilde{\mathbf{g}}^c_{(x,y)} = \sum_{a=-1}^{1} \sum_{b=-1}^{1} \mathbf{W}_{(a,b)} \, \mathbf{g}^c_{(x+a,\,y+b)}, \quad (4)$$
with learnable weights $\mathbf{W}_{(a,b)} \in \mathbb{R}^{d \times d}$ for each relative position $(a,b)$, and zero-padding applied at grid boundaries. Then, for road segment $i$ located at grid $(i_x, i_y)$, its coarse-grained environment features are aggregated via an attention mechanism which dynamically models the influence of different POI cluster types:
$$\tilde{\mathbf{p}}^{\mathrm{coarse}}_i = \sum_{c \in C} \mathbb{1}^c_{(i_x,i_y)} \, \alpha_c \, \mathbf{W}_v \big(\tilde{\mathbf{g}}^c_{(i_x,i_y)} \,\|\, \mathbf{e}_c\big), \quad (5)$$
where the attention coefficients $\alpha_c$ are calculated as:
$$\alpha_c = \frac{\exp\!\big(\mathbf{W}_q \mathbf{r}_i \, \big(\mathbf{W}_k (\tilde{\mathbf{g}}^c_{(i_x,i_y)} \| \mathbf{e}_c)\big)^{\!\top}\big)}{\sum_{c' \in C} \mathbb{1}^{c'}_{(i_x,i_y)} \exp\!\big(\mathbf{W}_q \mathbf{r}_i \, \big(\mathbf{W}_k (\tilde{\mathbf{g}}^{c'}_{(i_x,i_y)} \| \mathbf{e}_{c'})\big)^{\!\top}\big)}. \quad (6)$$
Here, $C$ denotes the set of all POI primary categories. $\mathbb{1}^c_{(x,y)} \in \{0, 1\}$ indicates whether grid $(x,y)$ has a valid semantic representation for primary category $c$, either due to inclusion in the top 10% or after spatial diffusion. $\mathbf{e}_c \in \mathbb{R}^d$ is a category-specific embedding of POI type $c$. $\mathbf{W}_q \in \mathbb{R}^{d \times d}$ and $\mathbf{W}_k, \mathbf{W}_v \in \mathbb{R}^{d \times 2d}$ are learnable matrices.

Multi-Granularity Semantic Integration. To produce a unified, environment-aware segment token, we integrate the fine-grained and coarse-grained environment semantics with the base road segment representation $\mathbf{r}_i$ via a gated fusion mechanism, thereby allowing each segment to adaptively absorb relevant environmental information from multiple scales. The final environment-aware segment token $\tilde{\mathbf{r}}_i$ is formulated as:
$$\tilde{\mathbf{r}}_i = \mathbf{r}_i + \tilde{\mathbf{p}}^{\mathrm{fine}}_i \odot \mathrm{Gate}(\tilde{\mathbf{p}}^{\mathrm{fine}}_i \| \mathbf{r}_i) + \tilde{\mathbf{p}}^{\mathrm{coarse}}_i \odot \mathrm{Gate}(\tilde{\mathbf{p}}^{\mathrm{coarse}}_i \| \mathbf{r}_i). \quad (7)$$
Here, the $\mathrm{Gate}(\cdot)$ function represents a gating mechanism composed of a linear transformation followed by a Tanh activation:
$$\mathrm{Gate}(\mathbf{x}) = \mathrm{Tanh}(\mathbf{W}_{\mathrm{Gate}} \mathbf{x} + \mathbf{b}_{\mathrm{Gate}}), \quad (8)$$
where $\mathbf{W}_{\mathrm{Gate}} \in \mathbb{R}^{d \times 2d}$ and $\mathbf{b}_{\mathrm{Gate}} \in \mathbb{R}^d$ are learnable parameters.
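The gated integration of Eqs. (7)-(8) can be sketched compactly as follows; whether the fine and coarse branches share gate parameters is not specified in the text, so separate gates are assumed here.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Sketch of Eqs. (7)-(8): fuse fine- and coarse-grained environment
    semantics into the base segment representation via Tanh gates."""

    def __init__(self, d: int):
        super().__init__()
        self.gate_fine = nn.Linear(2 * d, d)    # W_Gate, b_Gate (fine branch)
        self.gate_coarse = nn.Linear(2 * d, d)  # W_Gate, b_Gate (coarse branch)

    def forward(self, r: torch.Tensor, p_fine: torch.Tensor,
                p_coarse: torch.Tensor) -> torch.Tensor:
        # Gate(x) = Tanh(W x + b), applied to the concatenation [p || r]
        g_f = torch.tanh(self.gate_fine(torch.cat([p_fine, r], dim=-1)))
        g_c = torch.tanh(self.gate_coarse(torch.cat([p_coarse, r], dim=-1)))
        # Eq. (7): residual base plus gated fine- and coarse-grained semantics
        return r + p_fine * g_f + p_coarse * g_c
```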
3.2 Route Choice Encoder
Building upon the environment-aware road network, the Route Choice Encoder is designed to capture the internal route choice behavior within trajectories. Instead of viewing a trajectory as a static sequence, it models the dynamic, sequential decision-making process at each transition between segments. Specifically, to model the decision of transitioning from the current segment $r_i$ to a candidate next segment $r_c \in \mathcal{N}(r_i)$, we construct a context from the following four complementary features, which are designed to capture both key navigational factors and the rich environmental context:
• Journey Progression: We define the normalized journey progression $\rho_i$ to signify the global advancement within a journey:
$$\rho_i = \frac{\sum_{j=1}^{i} d_{r_j}}{\sum_{j=1}^{n} d_{r_j}}, \quad (9)$$
where $d_{r_j}$ is the length of road segment $r_j$.
• Historical Transition Likelihood: We derive a normalized historical transition likelihood from $r_i$ to a candidate adjacent segment $r_c$ to capture general traffic flow patterns:
$$P(r_c \mid r_i) = \frac{N(r_i \rightarrow r_c)}{\sum_{r'_c \in \mathcal{N}(r_i)} N(r_i \rightarrow r'_c)}. \quad (10)$$
Here, $N(\cdot)$ denotes the historical transition frequency between consecutive segments.
• Destination-Oriented Directional Deviation: This feature measures the angular deviation between the direction towards $r_c$ and the direct bearing to the destination $r_n$:
$$\Delta\theta_{r_c} = \angle\big(\overrightarrow{r_i r_c}, \, \overrightarrow{r_i r_n}\big). \quad (11)$$
• Environment-Aware Segment Tokens: We leverage the environment-aware segment tokens $\tilde{\mathbf{r}}_{r_i}$ and $\tilde{\mathbf{r}}_{r_c}$ for the current and candidate road segments, respectively. These tokens, produced by the Environment Perception Module, provide the essential environmental context for route choice modeling.

For each candidate transition from $r_i$ to $r_c \in \mathcal{N}(r_i)$, we process the aforementioned context features using a Wide & Deep network [7]. This architecture is chosen for its unique ability to capture the dual nature of route choice decisions. Specifically, its Wide component captures explicit, low-order patterns from sparse feature interactions, akin to memorizing simple routing heuristics. Concurrently, its Deep component learns abstract, high-order representations from dense embeddings, enabling it to generalize to unseen decision scenarios. The specific details are as follows.

Wide Component Design. The Wide component models the interaction between the historical transition likelihood $P(r_c \mid r_i)$ and the destination-oriented directional deviation $\Delta\theta_{r_c}$. We select these two features as they encapsulate the fundamental trade-off in navigation between following popular routes and taking the most direct path. To implement this, we first discretize $P(r_c \mid r_i)$ and $\Delta\theta_{r_c}$ into 5 and 8 uniform bins, respectively. A cross-product transformation is then applied to these binned features to generate a 40-dimensional one-hot vector $\mathbf{e}_{\mathrm{crossed}} \in \mathbb{R}^{40}$. The output is derived through a linear transformation:
$$\mathbf{c}_{\mathrm{Wide}} = \mathbf{W}_{\mathrm{Wide}} \, \mathbf{e}_{\mathrm{crossed}}. \quad (12)$$

Deep Component Design. The Deep component is designed to capture high-order feature interactions. We first project all contextual features into a common $d$-dimensional space. Notably, for the directional deviation $\Delta\theta_{r_c}$, we encode its sine and cosine components to preserve its cyclical nature. These projected features are then concatenated into a single vector $\mathbf{h}_{(r_i,r_c)} \in \mathbb{R}^{5d}$:
$$\mathbf{h}_{(r_i,r_c)} = \big[\, \mathbf{W}_\rho \rho_i \,\|\, \mathbf{W}_P P(r_c \mid r_i) \,\|\, \mathbf{W}_{r_i} \tilde{\mathbf{r}}_{r_i} \,\|\, \mathbf{W}_{r_c} \tilde{\mathbf{r}}_{r_c} \,\|\, \mathbf{W}_{\Delta\theta} [\sin \Delta\theta_{r_c}, \cos \Delta\theta_{r_c}] \,\big], \quad (13)$$
where $\mathbf{W}_\rho, \mathbf{W}_P \in \mathbb{R}^{d \times 1}$, $\mathbf{W}_{r_i}, \mathbf{W}_{r_c} \in \mathbb{R}^{d \times d}$, and $\mathbf{W}_{\Delta\theta} \in \mathbb{R}^{d \times 2}$ are learnable parameters. This vector is subsequently passed through an MLP mapping from $5d$ to $d$ dimensions to produce the Deep component's output $\mathbf{c}_{\mathrm{Deep}} \in \mathbb{R}^d$:
$$\mathbf{c}_{\mathrm{Deep}} = \mathrm{MLP}(\mathbf{h}_{(r_i,r_c)}). \quad (14)$$

Capturing Route Choice Behavior. The outputs from the Wide component $\mathbf{c}_{\mathrm{Wide}}$ and the Deep component $\mathbf{c}_{\mathrm{Deep}}$ are then integrated via element-wise summation to form a unified context vector $\mathbf{c}_{(r_i,r_c)}$ for each potential transition $(r_i, r_c)$:
$$\mathbf{c}_{(r_i,r_c)} = \mathbf{c}_{\mathrm{Wide}} + \mathbf{c}_{\mathrm{Deep}}. \quad (15)$$
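As a concrete illustration of Eqs. (12)-(15), the sketch below builds the Wide and Deep outputs for a batch of candidate transitions. The 5 x 8 = 40 crossed bins follow the text, while the exact bin boundaries and the two-layer MLP are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TransitionContext(nn.Module):
    """Sketch of Eqs. (12)-(15): Wide & Deep context for candidate
    transitions (r_i, r_c)."""

    def __init__(self, d: int):
        super().__init__()
        self.wide = nn.Linear(40, d, bias=False)   # Eq. (12): W_Wide
        self.w_rho = nn.Linear(1, d)
        self.w_p = nn.Linear(1, d)
        self.w_ri = nn.Linear(d, d)
        self.w_rc = nn.Linear(d, d)
        self.w_theta = nn.Linear(2, d)
        self.deep = nn.Sequential(nn.Linear(5 * d, d), nn.ReLU(),
                                  nn.Linear(d, d))  # Eq. (14): MLP, 5d -> d

    def forward(self, rho, p_trans, r_i, r_c, dtheta):
        # Wide: cross-product of binned transition likelihood (5 bins) and
        # directional deviation in [0, pi] (8 bins) as a 40-dim one-hot.
        p_bin = torch.clamp((p_trans * 5).long(), max=4)
        t_bin = torch.clamp((dtheta / torch.pi * 8).long(), min=0, max=7)
        crossed = F.one_hot(p_bin * 8 + t_bin, 40).float()
        c_wide = self.wide(crossed)
        # Deep: concatenation of projected context features, Eq. (13).
        h = torch.cat([self.w_rho(rho.unsqueeze(-1)),
                       self.w_p(p_trans.unsqueeze(-1)),
                       self.w_ri(r_i), self.w_rc(r_c),
                       self.w_theta(torch.stack([dtheta.sin(),
                                                 dtheta.cos()], -1))], dim=-1)
        return c_wide + self.deep(h)                # Eq. (15)
```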
To capture the user's route choice at segment $r_i$, we explicitly model the contrast between the selected transition and its alternatives. The context vector corresponding to the chosen transition, $(r_i, r_{i+1})$, is denoted as $\mathbf{c}^{\mathrm{selected}}_{r_i}$. Concurrently, the context vectors of all unselected alternatives in $\mathcal{N}(r_i) \setminus \{r_{i+1}\}$ are aggregated via mean pooling to form a single representation $\mathbf{c}^{\mathrm{unselected}}_{r_i}$:
$$\mathbf{c}^{\mathrm{unselected}}_{r_i} = \mathrm{Pool}\big(\{\mathbf{c}_{(r_i,r_c)} \mid r_c \in \mathcal{N}(r_i) \setminus \{r_{i+1}\}\}\big). \quad (16)$$
Finally, the selected and unselected context vectors are concatenated and projected through an MLP that maps from $\mathbb{R}^{2d}$ to $\mathbb{R}^d$, yielding the route choice behavior representation $\mathbf{c}_{r_i} \in \mathbb{R}^d$:
$$\mathbf{c}_{r_i} = \mathrm{MLP}\big(\mathbf{c}^{\mathrm{selected}}_{r_i} \,\|\, \mathbf{c}^{\mathrm{unselected}}_{r_i}\big). \quad (17)$$
This process is repeated for each segment in the trajectory to produce a sequence of route-choice-aware representations $(\mathbf{c}_{r_1}, \mathbf{c}_{r_2}, \ldots, \mathbf{c}_{r_n})$. For the terminal segment, $\mathbf{c}^{\mathrm{selected}}_{r_n}$ is set to a zero vector. These representations are then passed to the subsequent Trajectory Semantic Integrator module.

3.3 Trajectory Semantic Integrator
The Trajectory Semantic Integrator module is designed to generate a global trajectory embedding. Firstly, for each spatio-temporal point $(r_i, t_i)$ within a trajectory, we construct its representation $\mathbf{x}_i \in \mathbb{R}^d$ by combining the route-choice-aware representation with temporal embeddings:
$$\mathbf{x}_i = \mathbf{c}_{r_i} + \mathbf{t}^{\mathrm{m}}_{m(t_i)} + \mathbf{t}^{\mathrm{d}}_{d(t_i)}, \quad (18)$$
where $\mathbf{t}^{\mathrm{m}}_{m(t_i)}$ and $\mathbf{t}^{\mathrm{d}}_{d(t_i)}$ are learnable embeddings for the minute-of-day (1 to 1440) and day-of-week (1 to 7) of the timestamp $t_i$, respectively. Subsequently, we prepend a learnable classification token [CLS] to the sequence of spatio-temporal point representations. This sequence is then fed into a Transformer encoder [35] to produce context-aware hidden states:
$$(\mathbf{z}^{\mathrm{traj}}, \mathbf{z}_1, \ldots, \mathbf{z}_n) = \mathrm{TransformerEncoder}([\mathrm{CLS}], \mathbf{x}_1, \ldots, \mathbf{x}_n). \quad (19)$$
The final hidden state corresponding to the CLS token, $\mathbf{z}^{\mathrm{traj}} \in \mathbb{R}^d$, serves as the comprehensive representation for the entire trajectory.

3.4 Pre-Training Tasks
Following prior TRL studies [16, 20, 58], we adopt two self-supervised objectives to enhance the model's ability to capture contextual dependencies and semantic distinctions.

Masked Language Modeling (MLM) Loss. To model contextual dependencies within trajectories, we employ a SpanBERT-style MLM objective [21]. For each trajectory, we randomly mask multiple spans, each with an average length of 3 segments, such that the total masked proportion is 15% of the sequence. Each masked segment $r_m$ is substituted with a learnable embedding $\mathbf{x}_{[\mathrm{MASK}]}$, and the model is trained to predict the original segment ID:
$$\mathcal{L}_{\mathrm{MLM}} = -\frac{1}{|\mathcal{B}|} \sum_{\mathcal{T} \in \mathcal{B}} \frac{1}{|\mathcal{M}_{\mathcal{T}}|} \sum_{r_m \in \mathcal{M}_{\mathcal{T}}} \log\left(\frac{\exp(y_{m,r_m})}{\sum_{i \in \mathcal{V}} \exp(y_{m,i})}\right), \quad (20)$$
where $\mathbf{y}_m$ denotes the model's prediction (logits over $\mathcal{V}$) for a masked segment $r_m$.

Contrastive Loss. We further adopt the NT-Xent loss [3] to enforce semantic consistency across augmented views. For each trajectory in a batch $\mathcal{B}$, two distinct augmentations (details in Section B) are applied, yielding $2|\mathcal{B}|$ augmented views. The objective encourages the representations of positive pairs to be similar while distinguishing them from negative samples:
$$\mathcal{L}_{\mathrm{CL}} = -\frac{1}{2|\mathcal{B}|} \sum_{i=1}^{2|\mathcal{B}|} \log \frac{\exp\big(\mathrm{sim}(\mathbf{z}^{\mathrm{traj}}_i, \mathbf{z}^{\mathrm{traj}}_{\mathrm{pos}(i)})/\tau\big)}{\sum_{j=1}^{2|\mathcal{B}|} \mathbb{1}_{[j \neq i]} \exp\big(\mathrm{sim}(\mathbf{z}^{\mathrm{traj}}_i, \mathbf{z}^{\mathrm{traj}}_j)/\tau\big)}, \quad (21)$$
where $\mathrm{sim}(\cdot, \cdot)$ denotes cosine similarity, $\tau = 0.05$ is the temperature parameter, and $\mathrm{pos}(i)$ identifies the positive sample index for $\mathbf{z}_i$.
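A compact sketch of the NT-Xent objective in Eq. (21) follows; the view ordering (rows k and k + |B| forming a positive pair) is an implementation assumption, not something fixed by the text.

```python
import torch
import torch.nn.functional as F

def nt_xent(z: torch.Tensor, tau: float = 0.05) -> torch.Tensor:
    """Sketch of Eq. (21).

    `z` holds 2|B| trajectory embeddings, ordered so that rows k and
    k + |B| are the two augmented views of the same trajectory.
    """
    n = z.size(0)                       # n = 2|B|
    z = F.normalize(z, dim=-1)          # so the dot product is cosine sim
    sim = z @ z.t() / tau
    sim.fill_diagonal_(float("-inf"))   # exclude self-pairs (1[j != i])
    pos = torch.arange(n).roll(n // 2)  # pos(i): index of the positive view
    return F.cross_entropy(sim, pos)    # mean of -log softmax at pos(i)
```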
Pre-training Objective. The overall pre-training objective combines the two self-supervised losses in equal proportion:
$$\mathcal{L} = (\mathcal{L}_{\mathrm{MLM}} + \mathcal{L}_{\mathrm{CL}})/2. \quad (22)$$

4 Experiments
To demonstrate the effectiveness of the proposed PRTraj framework, we conducted extensive experiments on three real-world trajectory datasets to answer the following research questions:
• RQ1: How does PRTraj perform across various downstream tasks in comparison to state-of-the-art TRL methods?
• RQ2: What are the individual contributions of PRTraj's key architectural components to its overall performance?
• RQ3: How data-efficient is PRTraj, particularly under few-shot scenarios with limited training trajectories?
• RQ4: What is the computational efficiency of PRTraj in terms of training and inference time?
• RQ5: How does PRTraj perform in qualitative visualizations?

4.1 Experimental Setups
Datasets. We conduct experiments on three real-world trajectory datasets of Chinese cities, namely, Beijing, Chengdu, and Xi'an. Road networks for these cities are downloaded from OpenStreetMap3 using OSMnx [1], and trajectories are map-matched with FMM [47]. Each trajectory dataset is chronologically split into training, validation, and testing sets at a 7:1:2 ratio. Additionally, we acquire POI data for these three cities through the Amap API4, which includes detailed information such as POI names, categories, and geographical coordinates. A detailed description of the datasets is available in Section C.1.

Downstream Tasks. We evaluate the quality of the learned representations across five downstream tasks: road label prediction (RLP), trajectory destination prediction (TDP), travel time estimation (TTE), similar trajectory retrieval (STR), and path ranking (PR). These tasks assess our model's capabilities at two distinct levels: RLP evaluates the intermediate road segment representations, whereas the other four tasks evaluate the final trajectory-level representations. For STR, we directly use the learned trajectory embeddings to compute similarity, while for all other tasks (RLP, TDP, TTE, and PR), we append a task-specific MLP head and fine-tune the entire model end-to-end. More details are available in Section C.2.

Evaluation Metrics. For road label prediction, we adopt Macro-F1 and Micro-F1. For trajectory destination prediction, we report Acc@1, Acc@5, and Acc@10. For travel time estimation, we use mean absolute error (MAE), root mean square error (RMSE), and mean absolute percentage error (MAPE). For similar trajectory retrieval, we employ HR@1, HR@5, and mean reciprocal rank (MRR). For path ranking, we use Kendall's τ, Spearman's ρ, and MAE. The definitions of τ and ρ are provided in Section C.3.

3https://www.openstreetmap.org
4https://lbs.amap.com
Table 1: Performance comparison between PRTraj and baselines on three real-world trajectory datasets across five downstream tasks, including road label prediction (RLP), trajectory destination prediction (TDP), travel time estimation (TTE), similar trajectory retrieval (STR), and path ranking (PR). "Impr." denotes the relative improvement of our method over the best result, and "Avg." indicates the average rank across all evaluation metrics.

Task:    RLP                  | TDP                     | TTE                   | STR                   | PR                    |
Metric:  Macro-F1↑ Micro-F1↑  | Acc@1↑ Acc@5↑ Acc@10↑   | MAE↓ RMSE↓ MAPE↓      | HR@1↑ HR@5↑ MRR↑      | τ↑ ρ↑ MAE↓            | Avg.↓

Beijing
HRNR     0.7912 0.7990 | 0.0789 0.2019 0.2613 | 5.1072 9.4108 0.3388 | 0.8653 0.9226 0.8914 | 0.6259 0.6749 0.1492 | 6.3
Toast    0.5868 0.6285 | 0.0694 0.1809 0.2374 | 5.1042 9.2255 0.3423 | 0.8532 0.9112 0.8792 | 0.6298 0.6808 0.1444 | 7.8
PIM      0.6284 0.6519 | 0.0629 0.1614 0.2100 | 5.1622 9.3977 0.3456 | 0.8060 0.8615 0.8323 | 0.6125 0.6627 0.1506 | 8.6
DyToast  0.6263 0.6488 | 0.0772 0.1846 0.2415 | 4.2883 8.3221 0.2863 | 0.8773 0.9259 0.8998 | 0.6317 0.6822 0.1421 | 5.8
JCLRNT   0.6479 0.6751 | 0.0782 0.1879 0.2538 | 5.0191 9.1380 0.3281 | 0.8985 0.9369 0.9166 | 0.6352 0.6871 0.1405 | 5.1
START    0.7574 0.7776 | 0.0856 0.2153 0.2876 | 5.0103 9.1894 0.3376 | 0.9056 0.9442 0.9231 | 0.6421 0.6903 0.1397 | 3.5
TRACK    0.6744 0.6948 | 0.0873 0.2198 0.2939 | 4.0375 8.0509 0.2773 | 0.8941 0.9387 0.9158 | 0.6455 0.6898 0.1402 | 3.3
GREEN    0.6705 0.7006 | 0.0913 0.2285 0.3062 | 4.1657 8.2444 0.2915 | 0.9078 0.9437 0.9207 | 0.6302 0.6811 0.1438 | 3.6
PRTraj   0.8879 0.8953 | 0.1078 0.2649 0.3496 | 3.8719 7.8116 0.2607 | 0.9612 0.9828 0.9712 | 0.6813 0.7289 0.1282 | 1.0
Impr.    12.22% 12.05% | 18.07% 15.93% 14.17% | 4.10% 2.97% 5.99%    | 5.88% 4.09% 5.21%    | 5.55% 5.59% 8.23%    |

Chengdu
HRNR     0.7054 0.7903 | 0.4168 0.6215 0.6995 | 1.7796 2.7743 0.2308 | 0.6069 0.7705 0.6782 | 0.7489 0.7885 0.1007 | 5.4
Toast    0.4200 0.7508 | 0.3954 0.5940 0.6667 | 1.7804 2.7730 0.2301 | 0.5739 0.7180 0.6100 | 0.7359 0.7779 0.1020 | 8.3
PIM      0.4684 0.7569 | 0.4073 0.6129 0.6858 | 1.7877 2.7778 0.2292 | 0.5498 0.7012 0.6025 | 0.7502 0.7895 0.0969 | 6.9
DyToast  0.4511 0.7548 | 0.4007 0.6019 0.6703 | 1.5071 2.3719 0.2025 | 0.6256 0.7811 0.6962 | 0.7456 0.7858 0.0981 | 6.6
JCLRNT   0.6376 0.7754 | 0.4072 0.6154 0.6932 | 1.7959 2.7878 0.2343 | 0.7521 0.8706 0.8069 | 0.7526 0.7908 0.0958 | 5.6
START    0.6535 0.7805 | 0.4128 0.6231 0.7033 | 1.4901 2.3406 0.2010 | 0.7859 0.8911 0.8337 | 0.7559 0.7954 0.0931 | 3.1
TRACK    0.6107 0.7590 | 0.4159 0.6267 0.7041 | 1.4307 2.2401 0.1864 | 0.7712 0.8831 0.8221 | 0.7549 0.7951 0.0927 | 3.3
GREEN    0.6182 0.7692 | 0.4060 0.6137 0.6933 | 1.4259 2.2245 0.1882 | 0.7992 0.9044 0.8394 | 0.7428 0.7824 0.1014 | 4.6
PRTraj   0.8004 0.8615 | 0.4294 0.6408 0.7184 | 1.3668 2.1718 0.1776 | 0.9404 0.9728 0.9557 | 0.7620 0.8017 0.0933 | 1.1
Impr.    13.47% 9.01%  | 3.02% 2.25% 2.03%    | 4.14% 2.37% 4.72%    | 17.67% 7.56% 13.86%  | 0.81% 0.79% -0.65%   |

Xi'an
HRNR     0.6459 0.6651 | 0.3511 0.5800 0.6624 | 2.9449 4.7241 0.2835 | 0.6641 0.8062 0.7251 | 0.6972 0.7459 0.1062 | 5.7
Toast    0.4308 0.5129 | 0.3260 0.5421 0.6247 | 2.9350 4.6983 0.2824 | 0.6618 0.7838 0.6902 | 0.6894 0.7419 0.1094 | 8.1
PIM      0.4357 0.5155 | 0.3396 0.5629 0.6440 | 2.9540 4.7358 0.2786 | 0.6188 0.7453 0.6685 | 0.7022 0.7521 0.1038 | 7.0
DyToast  0.4327 0.5149 | 0.3304 0.5451 0.6222 | 2.2856 3.7257 0.2282 | 0.6900 0.8187 0.7488 | 0.6975 0.7483 0.1058 | 6.5
JCLRNT   0.4582 0.5076 | 0.3430 0.5687 0.6566 | 2.9791 4.7513 0.2869 | 0.8092 0.8895 0.8451 | 0.7024 0.7518 0.1039 | 6.1
START    0.6225 0.6363 | 0.3499 0.5844 0.6742 | 2.3700 3.7994 0.2482 | 0.8645 0.9214 0.8724 | 0.6985 0.7481 0.1047 | 3.7
TRACK    0.4652 0.5190 | 0.3503 0.5824 0.6705 | 2.2514 3.5998 0.2047 | 0.8613 0.9255 0.8793 | 0.6998 0.7480 0.1044 | 3.2
GREEN    0.5290 0.5545 | 0.3550 0.5889 0.6773 | 2.2743 3.7211 0.2056 | 0.8862 0.9403 0.9123 | 0.6969 0.7478 0.1041 | 3.6
PRTraj   0.7599 0.7591 | 0.3624 0.5971 0.6837 | 2.1507 3.6072 0.1943 | 0.9538 0.9864 0.9687 | 0.7131 0.7629 0.1010 | 1.1
Impr.    17.65% 14.13% | 2.08% 1.39% 0.94%    | 4.47% -0.21% 5.08%   | 7.63% 4.90% 6.18%    | 1.52% 1.44% 2.70%    |
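For one origin-destination pair, the path-ranking metrics can be computed as sketched below; the path scores are illustrative, and SciPy's implementations are used for Kendall's τ and Spearman's ρ.

```python
# Path-ranking metrics for a single OD pair, with illustrative scores.
import numpy as np
from scipy.stats import kendalltau, spearmanr

true_scores = np.array([0.90, 0.65, 0.40, 0.10])   # ground-truth path scores
pred_scores = np.array([0.80, 0.70, 0.30, 0.20])   # model-predicted scores

tau, _ = kendalltau(true_scores, pred_scores)      # rank agreement (pairs)
rho, _ = spearmanr(true_scores, pred_scores)       # rank correlation
mae = np.abs(true_scores - pred_scores).mean()
print(f"tau={tau:.3f}, rho={rho:.3f}, MAE={mae:.3f}")
```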
Baselines. We compare PRTraj against eight state-of-the-art baselines: one task-agnostic supervised method, HRNR [42], and seven self-supervised methods. The self-supervised methods fall into two categories: non-contrastive approaches, including Toast [6], PIM [48], and DyToast [5]; and contrastive-based methods, including JCLRNT [29], START [20], TRACK [16], and GREEN [58]. Further details are provided in Section C.4.

4.2 Overall Performance (RQ1)
Table 1 presents a performance comparison between PRTraj and the baseline methods, averaged over five independent runs. A detailed version of the results, including standard deviations, is provided in Section D.1. The key findings are summarized as follows:
• PRTraj steadily outperforms baselines. PRTraj achieves the best average ranking across all cities. Moreover, among the 42 evaluation metrics across three cities, it ranks first in 40 metrics, demonstrating its effectiveness and generalizability across different cities. This strong performance can be largely attributed to its unique ability to model the interplay between external environmental semantics and internal route choice logic, two critical, interconnected dimensions that baseline methods largely overlook, thus limiting their overall performance.
• Contrastive learning-based methods are competitive. Baselines like JCLRNT, START, TRACK, and GREEN achieve competitive performance. Their strength stems from the core principle of contrastive learning: by training the model to pull representations of semantically similar trajectories (i.e., positive pairs) closer while pushing dissimilar ones (i.e., negative pairs) apart, it learns a well-structured embedding space where distance reflects semantic relatedness. However, their holistic, sequence-level comparison struggles to effectively distill rich environmental context or model implicit route choice behavior in the trajectory. As a result, the learned representations are highly discriminative but lack interpretive depth, capturing the fact of trajectory divergence without encoding the reasons for it.
• HRNR excels in the Road Label Prediction task. HRNR achieves the best performance among baselines on this task, a result attributable to its hierarchical modeling of the road network. By hierarchically processing the network, it effectively captures latent structural patterns across multiple scales, ranging from local road connectivity to broader regional layouts. However, relying solely on abstract network structure is inherently limiting. PRTraj surpasses it by incorporating an Environment Perception Module to enrich road representations with real-world environmental semantics, demonstrating that this semantic context is ultimately more decisive for determining road attributes.
Task:    RLP        TDP      TTE     STR     PR
Metric:  Macro-F1↑  Acc@1↑   MAE↓    HR@1↑   τ↑

Beijing
w/o Fine-Grained    0.7158   0.0927  4.1735  0.9341  0.6738
w/o Coarse-Grained  0.7964   0.1014  4.1034  0.9447  0.6751
w/o Route Choice    0.8823   0.1026  4.0125  0.8974  0.6281
w/o Pre-Training    0.8698   0.1071  3.9467  0.7184  0.6761
PRTraj              0.8879   0.1078  3.8719  0.9612  0.6813

Chengdu
w/o Fine-Grained    0.5997   0.4255  1.4118  0.8699  0.7590
w/o Coarse-Grained  0.7127   0.4270  1.4005  0.8804  0.7601
w/o Route Choice    0.7944   0.4234  1.3877  0.8594  0.7411
w/o Pre-Training    0.7655   0.4282  1.3851  0.7264  0.7604
PRTraj              0.8004   0.4294  1.3668  0.9404  0.7620

Xi'an
w/o Fine-Grained    0.5957   0.3591  2.2103  0.9233  0.7067
w/o Coarse-Grained  0.7012   0.3598  2.2094  0.9375  0.7101
w/o Route Choice    0.7503   0.3596  2.2045  0.8706  0.6925
w/o Pre-Training    0.7421   0.3601  2.2006  0.7178  0.7086
PRTraj              0.7599   0.3624  2.1507  0.9538  0.7131

4.3 Ablation Studies (RQ2)

To validate PRTraj's designs, we conduct ablation studies by comparing the following variants: (1) w/o Fine-Grained removes the fine-grained semantic modeling within the Environment Perception Module, (2) w/o Coarse-Grained removes the coarse-grained semantic modeling within the Environment Perception Module, (3) w/o Route Choice ablates the Route Choice Encoder to assess the importance of capturing implicit route choice behavior, and (4) w/o Pre-Training bypasses the self-supervised pre-training stage, and the model is trained from scratch on each downstream task.

Table 2 presents the results of our ablation studies. These results demonstrate that each component of PRTraj plays a critical role in the model's overall performance, as removing any single module results in a degradation of performance. However, the contribution of each design varies across different tasks. In particular, the Road Label Prediction task is highly sensitive to the Environment Perception Module, with performance dropping significantly when either fine-grained or coarse-grained semantic modeling is removed. This is because the urban environment context captured by these components provides essential cues for inferring road attributes. In turn, ablating the Route Choice Encoder most harms Path Ranking, a task that directly evaluates the route choice behavior modeled by this component. Meanwhile, forgoing pre-training primarily affects Similar Trajectory Retrieval, since its contrastive objective is essential for learning a discriminative embedding space. The minimal impact of omitting pre-training on other tasks highlights that performance gains are largely attributable to PRTraj's architectural innovations in environment perception and route choice modeling, rather than pre-training alone.

Table 3: Average training time (hours per epoch) and inference time (milliseconds per trajectory) for PRTraj and baselines on the three trajectory datasets.

Dataset         Beijing                  Chengdu                  Xi'an
Metric    Training↓  Inference↓    Training↓  Inference↓    Training↓  Inference↓
START     1.9541     0.7879        0.2104     0.2985        0.2149     0.3234
TRACK     1.6134     1.1778        0.2258     0.4055        0.2192     0.4298
GREEN     0.5150     0.3191        0.1494     0.1897        0.1428     0.1987
PRTraj    0.8194     0.2803        0.1889     0.1714        0.1747     0.1702

4.4 Data Efficiency (RQ3)

We investigate the data efficiency of PRTraj by varying the proportion of data used for both pre-training and subsequent fine-tuning.
The results on the Beijing dataset are shown in Figure 3, with additional results for the Chengdu and Xi'an datasets provided in Section D.2 due to space limitations. Our findings demonstrate that PRTraj achieves superior data efficiency, significantly outperforming the top baseline on the road label prediction task with only 10% of the data, and matching its performance on trajectory retrieval and path ranking tasks with just 20%. This strong few-shot learning capability can be attributed to the core design of PRTraj, which incorporates a strong inductive bias for the learning process. By explicitly encoding external environmental factors and internal route choice logic, PRTraj reduces its reliance on inferring complex relationships purely from data, enabling robust generalization with a substantially reduced number of trajectories.

Figure 3: Performance of PRTraj on downstream tasks with varying proportions of training data on the Beijing trajectory dataset. The red dashed line denotes the best baseline performance. For brevity, we present one metric for each task. (Panels: Road Label Prediction, Macro-F1; Trajectory Destination Prediction, Acc@1; Travel Time Estimation, MAE; Similar Trajectory Retrieval, HR@1; Path Ranking, Kendall's τ.)

4.5 Computational Efficiency (RQ4)

Table 3 presents a comparative analysis of the computational efficiency of our proposed PRTraj model and three leading baselines: START, TRACK, and GREEN. The results demonstrate that PRTraj strikes an excellent balance between high performance and computational speed. While PRTraj's training time per epoch is second only to the most lightweight model, GREEN, it remains highly competitive and avoids excessive overhead. Crucially, in the inference phase, which is vital for real-world deployment, PRTraj consistently ranks as one of the fastest models. It matches the efficiency of GREEN and significantly surpasses START and TRACK across all datasets. The architectural designs of START and TRACK, which prioritize capturing complex dependencies (such as temporal regularities or historical information), are inherently computationally intensive. Consequently, this leads to longer processing times during both training and inference. These results demonstrate that the significant performance gains from PRTraj's sophisticated modeling do not come at the cost of practical efficiency, offering an excellent trade-off between effectiveness and computational speed.

4.6 Visualization Analysis (RQ5)

To intuitively showcase PRTraj's performance, we provide two qualitative visualizations: a case study on trajectory similarity retrieval and a t-SNE projection of the embeddings.

Case Study on Trajectory Similarity Retrieval. We visually assess our embeddings on the task of similar trajectory retrieval. As shown in Figure 4 on the Beijing dataset, PRTraj retrieves more similar trajectories for a given query compared to a strong baseline, START, thanks to its unified modeling of environment and route choice.

Figure 4: Case study of trajectory similarity retrieval on the Beijing dataset. (a) START; (b) PRTraj. Each panel shows the query trajectory together with its three most similar retrieved trajectories.
Due to space constraints, results for other datasets and baselines are provided in Section D.4.

t-SNE Projection of Trajectory Embeddings. To assess the model's sensitivity to environmental dynamics, we conduct a case study on a traffic control event (Sanyuan Bridge replacement, Beijing). Figure 5 visualizes the t-SNE projection of embeddings from normal versus restricted periods, comparing PRTraj against the strong baseline START. The visualization reveals that PRTraj forms distinct clusters, validating its capacity to perceive the influence of such environmental changes on route choices. Further baseline comparisons and traffic heatmaps for the traffic control event are available in Section D.4.

Figure 5: t-SNE visualization of trajectory embeddings before and during the traffic control in the Sanyuan Bridge area. (a) START; (b) PRTraj.

5 Related Work

Trajectory Representation Learning (TRL). TRL aims to transform raw trajectory data into low-dimensional embeddings. Early approaches in this domain are typically task-specific. For instance, some methods are tailored for trajectory similarity computation [12, 17, 24, 51], while others are developed for clustering [2, 11, 54] or anomaly detection [27, 41]. However, these representations suffer from poor generalizability. To address this, subsequent works shift to general-purpose pre-training, utilizing sequence-to-sequence models [14] or curriculum learning [48] to capture spatio-temporal dependencies. More recently, contrastive learning [3] emerges as a dominant paradigm for acquiring more robust and discriminative representations. For example, JCLRNT [29] aligns trajectory and road network features through cross-scale contrastive learning, while START [20] captures travel semantics and temporal regularities. TRACK [16] extends this by incorporating traffic conditions, and GREEN [58] further enhances representation by jointly modeling road network-based and grid-based trajectories.

LLMs in Spatio-Temporal Data Mining. Large Language Models (LLMs), proven to be highly effective in Natural Language Processing [30, 34], are now finding new applications in the domain of spatio-temporal data mining. Current efforts focus on tasks such as mobility generation [13, 32, 37], POI recommendation [8, 15, 23], geographic knowledge understanding [18, 26, 45], and traffic forecasting [25, 53, 56]. While these studies showcase the potential of LLMs, they seldom explicitly model the relationship between road segments and the surrounding POIs. To bridge this gap, we propose an Environment Perception Module that leverages an LLM to interpret multi-granularity POI information, thereby enriching the semantic representation of the road network.

6 Conclusion

In this work, we propose PRTraj, a novel TRL framework designed to address a key oversight in existing TRL methods: they merely treat trajectories as isolated spatio-temporal sequences, without considering the external environment and internal route choice behavior that govern their formation, which fundamentally limits their performance. To bridge this gap, PRTraj first employs an Environment Perception Module to construct a perception-aware road network by integrating multi-granularity POI semantics. This semantically-rich representation then serves as an essential foundation for the subsequent Route Choice Encoder, which models route choice behavior within this context-rich environment.
A final Trajectory Semantic Integrator aggregates these route-choice-aware insights into a holistic trajectory embedding. Extensive experiments validate the effectiveness of PRTraj and demonstrate its potential for application in various spatio-temporal downstream tasks.

References
[1] Geoff Boeing. 2025. Modeling and analyzing urban networks and amenities with OSMnx. Geographical Analysis (2025).
[2] Chao Chen, Chengwu Liao, Xuefeng Xie, Yasha Wang, and Junfeng Zhao. 2019. Trip2Vec: a deep embedding approach for clustering and profiling taxi trip purposes. Pers. Ubiquitous Comput. (2019).
[3] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020. A simple framework for contrastive learning of visual representations. In ICML.
[4] Wei Chen, Yuxuan Liang, Yuanshao Zhu, Yanchuan Chang, Kang Luo, Haomin Wen, Lei Li, Yanwei Yu, Qingsong Wen, Chao Chen, Kai Zheng, Yunjun Gao, Xiaofang Zhou, and Yu Zheng. 2024. Deep learning for trajectory data management and mining: A survey and beyond. arXiv preprint arXiv:2403.14151 (2024).
[5] Yile Chen, Xiucheng Li, Gao Cong, Zhifeng Bao, and Cheng Long. 2024. Semantic-enhanced representation learning for road networks with temporal dynamics. arXiv preprint arXiv:2403.11495 (2024).
[6] Yile Chen, Xiucheng Li, Gao Cong, Zhifeng Bao, Cheng Long, Yiding Liu, Arun Kumar Chandran, and Richard Ellison. 2021. Robust road network representation learning: When traffic patterns meet traveling semantics. In CIKM.
[7] Heng-Tze Cheng, Levent Koc, Jeremiah Harmsen, Tal Shaked, Tushar Chandra, Hrishi Aradhye, Glen Anderson, Greg Corrado, Wei Chai, Mustafa Ispir, Rohan Anil, Zakaria Haque, Lichan Hong, Vihan Jain, Xiaobing Liu, and Hemal Shah. 2016. Wide & deep learning for recommender systems. In Proceedings of the 1st Workshop on Deep Learning for Recommender Systems.
[8] Jiawei Cheng, Jingyuan Wang, Yichuan Zhang, Jiahao Ji, Yuanshao Zhu, Zhibo Zhang, and Xiangyu Zhao. 2025. POI-Enhancer: An LLM-based semantic enhancement framework for POI representation learning. In AAAI.
[9] Austin Derrow-Pinion, Jennifer She, David Wong, Oliver Lange, Todd Hester, Luis Perez, Marc Nunkesser, Seongjae Lee, Xueying Guo, Brett Wiltshire, Peter W. Battaglia, Vishal Gupta, Ang Li, Zhongwen Xu, Alvaro Sanchez-Gonzalez, Yujia Li, and Petar Velickovic. 2021. ETA prediction with graph neural networks in Google Maps. In CIKM.
[10] Xiaomin Fang, Jizhou Huang, Fan Wang, Lingke Zeng, Haijin Liang, and Haifeng Wang. 2020. ConSTGAT: Contextual spatial-temporal graph attention network for travel time estimation at Baidu Maps. In SIGKDD.
[11] Ziquan Fang, Yuntao Du, Lu Chen, Yujia Hu, Yunjun Gao, and Gang Chen. 2021. E2DTC: An end to end deep trajectory clustering framework via self-training. In ICDE.
[12] Ziquan Fang, Yuntao Du, Xinjun Zhu, Danlei Hu, Lu Chen, Yunjun Gao, and Christian S. Jensen. 2022. Spatio-temporal trajectory similarity learning in road networks. In SIGKDD.
[13] Jie Feng, Yuwei Du, Jie Zhao, and Yong Li. 2024. AgentMove: Predicting human mobility anywhere using large language model based agentic framework. arXiv preprint arXiv:2408.13986 (2024).
[14] Tao-Yang Fu and Wang-Chien Lee. 2020. Trembr: Exploring road networks for trajectory representation learning. ACM Trans. Intell. Syst. Technol. (2020).
[15] Letian Gong, Yan Lin, Xinyue Zhang, Yiwen Lu, Xuedi Han, Yichen Liu, Shengnan Guo, Youfang Lin, and Huaiyu Wan. 2024. Mobility-LLM: Learning visiting intentions and travel preference from human mobility data with large language models. In NeurIPS.
[16] Chengkai Han, Jingyuan Wang, Yongyao Wang, Xie Yu, Hao Lin, Chao Li, and Junjie Wu. 2025. Bridging traffic state and trajectory for dynamic road network and trajectory representation learning. In AAAI.
[17] Peng Han, Jin Wang, Di Yao, Shuo Shang, and Xiangliang Zhang. 2021. A graph-based approach for trajectory similarity computation in spatial networks. In SIGKDD.
[18] Xixuan Hao, Wei Chen, Yibo Yan, Siru Zhong, Kun Wang, Qingsong Wen, and Yuxuan Liang. 2025. UrbanVLP: Multi-granularity vision-language pretraining for urban socioeconomic indicator prediction. In AAAI.
[19] Danlei Hu, Lu Chen, Hanxi Fang, Ziquan Fang, Tianyi Li, and Yunjun Gao. 2023. Spatio-temporal trajectory similarity measures: A comprehensive survey and quantitative study. IEEE Trans. Knowl. Data Eng. (2023).
[20] Jiawei Jiang, Dayan Pan, Houxing Ren, Xiaohan Jiang, Chao Li, and Jingyuan Wang. 2023. Self-supervised trajectory representation learning with temporal regularities and travel semantics. In ICDE.
[21] Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020. SpanBERT: Improving pre-training by representing and predicting spans. Trans. Assoc. Comput. Linguistics (2020).
[22] Maurice G. Kendall. 1938. A new measure of rank correlation. Biometrika (1938).
[23] Peibo Li, Maarten de Rijke, Hao Xue, Shuang Ao, Yang Song, and Flora D. Salim. 2024. Large language models for next point-of-interest recommendation. In SIGIR.
[24] Xiucheng Li, Kaiqi Zhao, Gao Cong, Christian S. Jensen, and Wei Wei. 2018. Deep representation learning for trajectory similarity computation. In ICDE.
[25] Zhonghang Li, Lianghao Xia, Jiabin Tang, Yong Xu, Lei Shi, Long Xia, Dawei Yin, and Chao Huang. 2024. UrbanGPT: Spatio-temporal large language models. In SIGKDD.
[26] Zekun Li, Wenxuan Zhou, Yao-Yi Chiang, and Muhao Chen. 2023. GeoLM: Empowering language models for geospatially grounded language understanding. In EMNLP.
[27] Yiding Liu, Kaiqi Zhao, Gao Cong, and Zhifeng Bao. 2020. Online anomalous trajectory detection with deep generative sequence modeling. In ICDE.
[28] Xiaowei Mao, Yan Lin, Shengnan Guo, Yubin Chen, Xingyu Xian, Haomin Wen, Qisen Xu, Youfang Lin, and Huaiyu Wan. 2025. DutyTTE: Deciphering uncertainty in origin-destination travel time estimation. In AAAI.
[29] Zhenyu Mao, Ziyue Li, Dedong Li, Lei Bai, and Rui Zhao. 2022. Jointly contrastive representation learning on road network and trajectory. In CIKM.
[30] Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. In NeurIPS.
[31] Carlo Giacomo Prato. 2009. Route choice modeling: past, present and future research directions. Journal of Choice Modelling (2009).
[32] Chenyang Shao, Fengli Xu, Bingbing Fan, Jingtao Ding, Yuan Yuan, Meng Wang, and Yong Li. 2024. Chain-of-planned-behaviour workflow elicits few-shot mobility generation in LLMs. arXiv preprint arXiv:2402.09836 (2024).
[33] Charles Spearman. 1904. The proof and measurement of association between two things. Am. J. Psychol. (1904).
[34] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023).
[35] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS.
[36] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. 2018. Graph attention networks. In ICLR.
[37] Jiawei Wang, Renhe Jiang, Chuang Yang, Zengqing Wu, Makoto Onizuka, Ryosuke Shibasaki, Noboru Koshizuka, and Chuan Xiao. 2024. Large language models as urban residents: An LLM agent framework for personal mobility generation. In NeurIPS.
[38] Sheng Wang, Zhifeng Bao, J. Shane Culpepper, and Gao Cong. 2021. A survey on trajectory data management, analytics, and learning. ACM Comput. Surv. (2021).
[39] Senzhang Wang, Jiannong Cao, and S. Yu Philip. 2020. Deep learning for spatio-temporal data mining: A survey. IEEE Trans. Knowl. Data Eng. (2020).
[40] Zheng Wang, Kun Fu, and Jieping Ye. 2018. Learning to estimate the travel time. In SIGKDD.
[41] Hao Wu, Weiwei Sun, and Baihua Zheng. 2017. A fast trajectory outlier detection approach via driving behavior modeling. In CIKM.
[42] Ning Wu, Xin Wayne Zhao, Jingyuan Wang, and Dayan Pan. 2020. Learning effective road network representation with hierarchical graph neural networks. In SIGKDD.
[43] Xiaohang Xu, Renhe Jiang, Chuang Yang, Zipei Fan, and Kaoru Sezaki. 2024. Taming the long tail in human mobility prediction. In NeurIPS.
[44] Andy Yuan Xue, Rui Zhang, Yu Zheng, Xing Xie, Jin Huang, and Zhenghua Xu. 2013. Destination prediction by sub-trajectory synthesis and privacy protection against such prediction. In ICDE.
[45] Yibo Yan, Haomin Wen, Siru Zhong, Wei Chen, Haodong Chen, Qingsong Wen, Roger Zimmermann, and Yuxuan Liang. 2024. UrbanCLIP: Learning text-enhanced urban region profiling with contrastive language-image pretraining from the web. In WWW.
[46] An Yang, Anfeng Li, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Gao, Chengen Huang, Chenxu Lv, Chujie Zheng, Dayiheng Liu, Fan Zhou, Fei Huang, Feng Hu, Hao Ge, Haoran Wei, Huan Lin, Jialong Tang, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jing Zhou, Jingren Zhou, Junyang Lin, Kai Dang, Keqin Bao, Kexin Yang, Le Yu, Lianghao Deng, Mei Li, Mingfeng Xue, Mingze Li, Pei Zhang, Peng Wang, Qin Zhu, Rui Men, Ruize Gao, Shixuan Liu, Shuang Luo, Tianhao Li, Tianyi Tang, Wenbiao Yin, Xingzhang Ren, Xinyu Wang, Xinyu Zhang, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yinger Zhang, Yu Wan, Yuqiong Liu, Zekun Wang, Zeyu Cui, Zhenru Zhang, Zhipeng Zhou, and Zihan Qiu. 2025. Qwen3 technical report. arXiv preprint arXiv:2505.09388 (2025).
[47] Can Yang and Gyozo Gidofalvi. 2018. Fast map matching, an algorithm integrating hidden Markov model with precomputation. Int. J. Geogr. Inf. Sci. (2018).
[48] Sean Bin Yang, Chenjuan Guo, Jilin Hu, Jian Tang, and Bin Yang. 2021. Unsupervised path representation learning with curriculum negative sampling. In IJCAI.
[49] Sean Bin Yang, Chenjuan Guo, and Bin Yang. 2020. Context-aware path ranking in road networks. IEEE Trans. Knowl. Data Eng. (2020).
[50] Sean Bin Yang, Jilin Hu, Chenjuan Guo, Bin Yang, and Christian S. Jensen. 2023. LightPath: Lightweight and scalable path representation learning. In SIGKDD.
[51] Di Yao, Gao Cong, Chao Zhang, and Jingping Bi. 2019. Computing trajectory similarity in linear time: A generic seed-guided neural metric learning approach. In ICDE.
[52] Di Yao, Chao Zhang, Zhihua Zhu, Jianhui Huang, and Jingping Bi. 2017. Trajectory clustering via deep representation learning. In IJCNN.
[53] Yuan Yuan, Jingtao Ding, Jie Feng, Depeng Jin, and Yong Li. 2024. UniST: A prompt-empowered universal model for urban spatio-temporal prediction. In SIGKDD.
[54] Mingxuan Yue, Yaguang Li, Haoze Yang, Ritesh Ahuja, Yao-Yi Chiang, and Cyrus Shahabi. 2019. DETECT: Deep trajectory clustering for mobility-behavior analysis. In IEEE BigData.
[55] Yanzhao Zhang, Mingxin Li, Dingkun Long, Xin Zhang, Huan Lin, Baosong Yang, Pengjun Xie, An Yang, Dayiheng Liu, Junyang Lin, Fei Huang, and Jingren Zhou. 2025. Qwen3 Embedding: Advancing text embedding and reranking through foundation models. arXiv preprint arXiv:2506.05176 (2025).
[56] Zijian Zhang, Xiangyu Zhao, Qidong Liu, Chunxu Zhang, Qian Ma, Wanyu Wang, Hongwei Zhao, Yiqi Wang, and Zitao Liu. 2023. PromptST: Prompt-enhanced spatio-temporal multi-attribute prediction. In CIKM.
[57] Yu Zheng. 2015. Trajectory data mining: an overview. ACM Trans. Intell. Syst. Technol. (2015).
[58] Silin Zhou, Shuo Shang, Lisi Chen, Peng Han, and Christian S. Jensen. 2025. Grid and road expressions are complementary for trajectory representation learning. In SIGKDD.

A Prompt Design for POI Context Description

In this section, we detail the prompt for the Environment Perception Module. To address the environmental perception gap in TRL, we leverage LLMs to synthesize semantically rich, human-centric descriptions from raw POI data. This process captures the nuanced functional characteristics of urban spaces across multiple granularities, establishing a more expressive semantic foundation.

A.1 Prompts for Fine-Grained POI Context

At the fine-grained level, the objective is to generate detailed descriptions of individual road segments by summarizing their surrounding POI context. The prompt template fed into the LLM consists of the following components:
• Role Definition: The LLM is prompted to assume the role of a local resident, encouraging it to produce contextually grounded and locally nuanced descriptions.
• Structured POI Data Input: A structured list of nearby POIs is provided as factual grounding for the description.
• Task Instruction: The LLM is explicitly instructed to generate a description of the road segment based on the given information.

LLM Prompt Template (Fine-Grained).
You are a resident living in <city_name>, familiar with the local transportation network and surrounding POI information. There is a road segment with the following POIs located within a 100-meter radius:
<poi_count> POIs categorized under [<poi_primary_category>], further subdivided as:
• <poi_count> [<poi_subcategory>]: <poi_name_list>.
• <poi_count> [<poi_subcategory>]: <poi_name_list>.
• · · · (Can be further expanded here.)
· · · (Additional primary categories may be added in the same format.)
Please describe the relevant characteristics of this road section based on the POI information within a 100-meter radius around it.
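To make the template's structure concrete, the following minimal sketch shows how such a prompt could be assembled from structured POI records. This is our own illustration, not the authors' released code; the helper name build_fine_grained_prompt and the record layout are assumptions.

```python
from collections import defaultdict

def build_fine_grained_prompt(city: str, pois: list[dict]) -> str:
    """Assemble the fine-grained prompt of Section A.1.

    Each POI record is assumed to look like:
    {"name": "...", "primary": "Shopping Services", "sub": "Convenience Stores"}
    """
    # Group POIs by primary category, then by subcategory.
    grouped = defaultdict(lambda: defaultdict(list))
    for p in pois:
        grouped[p["primary"]][p["sub"]].append(p["name"])

    lines = [
        f"You are a resident living in {city}, familiar with the local "
        "transportation network and surrounding POI information. "
        "There is a road segment with the following POIs located within "
        "a 100-meter radius:"
    ]
    for primary, subs in grouped.items():
        total = sum(len(names) for names in subs.values())
        lines.append(f"{total} POIs categorized under [{primary}], further subdivided as:")
        for sub, names in subs.items():
            lines.append(f"- {len(names)} [{sub}]: {', '.join(names)}.")
    lines.append(
        "Please describe the relevant characteristics of this road section "
        "based on the POI information within a 100-meter radius around it."
    )
    return "\n".join(lines)
```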
To illustrate the applicability of the proposed template, we present a case study of a road segment on Qianmen East Street, which is located in a historically and commercially significant area of Beijing frequented by both residents and tourists.

Illustrative Prompt Instance (Fine-Grained).
You are a resident living in Beijing, familiar with the local transportation network and surrounding POI information. There is a road segment with the following POIs located within a 100-meter radius:
6 POIs categorized under [Shopping Services], further subdivided as:
• 2 [Clothing & Shoes Stores]: Fashion Frontline (Xinjiekou Street), Old Beijing Cloth Shoes (Qianmen West Houheyan Street).
• 2 [Specialty Stores]: Gaojiazhai Grocery, Beijing Specialty Store (Qianmen West Heyan Community).
• 2 [Convenience Stores]: Lehafu Supermarket, Cigarettes & Cold Drinks.
4 POIs categorized under [Catering Services], further subdivided as:
• 1 [Food-Related Establishment]: Juewei Taiwanese Salt & Pepper Chicken.
• 2 [Chinese Restaurants]: Mian Ai Mian (Qianmen Branch), Qipin Xiang Tofu.
• 1 [Fast Food Restaurant]: KFC (Qianmen Branch).
1 POIs categorized under [Transportation Facilities], further subdivided as:
• 1 [Subway Station]: Qianmen Subway Station (Southwest Exit C).
1 POIs categorized under [Government & Social Organizations], further subdivided as:
• 1 [Public Security Institution]: Dashilan Patrol Police Station.
1 POIs categorized under [Healthcare Services], further subdivided as:
• 1 [Healthcare Facility]: Blood Donation Cabin.
2 POIs categorized under [Accommodation Services], further subdivided as:
• 2 [Hotel]: Qianmen Guanqi Hotel (Qianmen East Branch), Shunchaolong Hotel.
Please describe the relevant characteristics of this road section based on the POI information within a 100-meter radius around it.

LLM-Generated Description (Fine-Grained).
Based on the POI information within a 100-meter radius of this road section, the following characteristics can be summarized:
1. Vibrant Commerce and Diverse Consumption Scenes
• Dense Shopping Services: Six shopping-related POIs are concentrated in this area, forming a commercial belt primarily focused on clothing, shoes, specialty retail, and convenience stores.
• Blend of Fashion and Tradition: Trendy stores like "Fashion Frontline (Xinjiekou Street)" and "Old Beijing Cloth Shoes (Qianmen West Houheyan Street)" showcase both modern trends and local cultural characteristics, attracting young crowds and tourists.
• Convenience Meets Specialty: Convenience stores like "Lehafu Supermarket" and "Cigarettes & Cold Drinks" cater to daily needs, while specialty shops like "Gaojiazhai Grocery" and "Beijing Specialty Store (Qianmen West Heyan Community)" focus on local products, ideal for tourists seeking souvenirs.
• Varied Dining Options: Four food-related POIs cover fast food, Chinese cuisine, and specialty snacks, creating an all-day dining scene.
• Local and International Flavors: Restaurants like "Mian Ai Mian (Qianmen Branch)" and "Qipin Xiang Tofu" specialize in Beijing cuisine, while "KFC (Qianmen Branch)" and "Juewei Taiwanese Salt & Pepper Chicken" offer quick and convenient meals, catering to both locals and tourists.
2. Transportation Hub and High Foot Traffic
• Direct Metro Access: The "Qianmen Subway Station (Southwest Exit C)" makes this area a junction between subway and ground transportation, especially crowded during rush hours, potentially serving both commercial and commuting purposes.
• Public Security: The presence of the "Dashilan Police Patrol Station" indicates strict public safety management, enhancing the sense of security for residents and visitors.
3. Prominent Tourism and Accommodation Functions
• Comprehensive Lodging Options: Hotels like "Qianmen Guanqi Hotel (Qianmen East Branch)" and "Shunchaolong Hotel" provide accommodation, confirming this area as a tourist hotspot, likely close to historical and cultural attractions like Qianmen Street.
• Cultural Experience Extension: Nearby POIs (e.g., "Old Beijing Cloth Shoes (Qianmen West Houheyan Street)" and "Beijing Specialty Store") complement Qianmen's historical and cultural district, boosting the area's tourism appeal.
4. Healthcare and Community Services
• Blood Donation Cabin: The presence of a blood donation cabin indicates basic medical support, offering public welfare services to residents and tourists and strengthening community functions.
Summary: This road section is located in the core Qianmen business district, integrating commerce, transportation, tourism, and residential functions, representing a typical blend of Beijing's historical culture and modern lifestyle. Its dense POI distribution and diverse services make it a key node for tourists, residents, and commuters, particularly suitable for walking exploration, shopping, leisure, and short-term stays.

A.2 Prompts for Coarse-Grained POI Context

At the coarse-grained level, we aim to generate a detailed description of a POI cluster with a primary category of c. Similar to the prompt designed for the fine-grained POI context described above, this prompt also comprises three components: Role Definition, Structured POI Data Input, and Task Instruction.

LLM Prompt Template (Coarse-Grained).
You are a resident living in <city_name>, familiar with the local transportation network and surrounding POI information. In a 1000m × 1000m area of <city_name>, POIs of the type [<poi_primary_category>] exhibit significant clustering characteristics. Data analysis shows that the number of [<poi_primary_category>] POIs in this area ranks within the top 10% in <city_name>. Further subdividing these [<poi_primary_category>] POIs, they include:
• <poi_count> [<poi_subcategory>]: <poi_name_list>.
• <poi_count> [<poi_subcategory>]: <poi_name_list>.
• · · · (Can be further expanded here.)
Please describe the characteristics of this 1000m × 1000m area where a large number of [<poi_primary_category>] POIs are clustered.

Here, we select a POI cluster with the primary category of "Shopping Services" located near the Beijing CBD to illustrate the applicability of the proposed template.

Illustrative Prompt Instance (Coarse-Grained).
You are a resident living in Beijing, familiar with the local transportation network and surrounding POI information. In a 1000m × 1000m area of Beijing, POIs of the type [Shopping Services] exhibit significant clustering characteristics. Data analysis shows that the number of [Shopping Services] POIs in this area ranks within the top 10% in Beijing. Further subdividing these [Shopping Services] POIs, they include:
• 790 [Specialty Stores]: Li-Ning (Fuli Plaza Store), Dionysus Wine Cellar, Youngor, etc.
• 80 [Personal Care & Cosmetics Stores]: Qingzhuang Cosmetics, Scent Library, PIPI FACE Perfume, etc.
• 24 [Sports Goods Stores]: Adidas (China World Trade Center South Zone), Reebok Treadmill Store (Guomao Store), Kingsmith Treadmill Store (Guomao Store), etc.
• 52 [Convenience Stores]: 7-Eleven (Huamao Store), Quick (Jianwai Branch 2), Wankeyuan Supermarket (Jingheng Street), etc.
• 40 [Shopping Malls]: Beijing Yintai Center in01, China World Trade Center South Zone, China World Trade Center North Zone, etc.
• 46 [Home & Building Material Markets]: Dakang Home Furnishing, Risheng Furniture & Hardware, Bairuisi Furniture, etc.
• 46 [Electronics & Appliance Stores]: Changhong Air Conditioning, ThinkPad Laptop Store, BOSE (China World Trade Center North Zone), etc.
• 8 [Stationery Stores]: JimmyBlack, 7G Office, Montblanc (China World Trade Center South Zone), etc.
• 708 [Clothing & Shoes Stores]: Yaliange, Xiu Clothing, Xiaojin Wardrobe (Jianwai SOHO Store), etc.
• 6 [Auction Houses]: Beijing Jianya Century Auction Co., Ltd., Beijing Zhonghan Auction Co., Ltd., Beijing Zhongzhao International Auction Co., Ltd. (Jintong East Road), etc.
• 62 [Flower & Pet Markets]: No. 18 Flower Shop, Love Flowers, LOVE Flower Studio, etc.
• 28 [Supermarkets]: Jingkelong Supermarket, Mannings (Yintai Center Store), CR Vanguard (Guomao Store), etc.

LLM-Generated Description (Coarse-Grained).
Based on the provided data, the 1000m × 1000m area exhibits highly concentrated commercial characteristics, which can be summarized as follows:
1. Highly Concentrated Business Types, Forming a Diverse Shopping Ecosystem
• Core Business Types: Dominated by [Specialty Stores] (790) and [Clothing & Shoes Stores] (708), indicating a strong presence of fashion and personalized shopping, likely including brand flagships and boutique outlets.
• Comprehensive Supporting Services: Includes [Convenience Stores] (52) and [Supermarkets] (28), enabling a one-stop shopping experience that serves daily consumer needs.
• Presence of High-End and Niche Segments: Categories like [Electronics & Appliance Stores] (46), [Home & Building Material Markets] (46), and [Stationery Stores] (8) suggest the area also caters to professionals and affluent residential communities.
2. Likely Location: Beijing's Core Commercial Districts
• POI Name Clues: Frequent use of "Guomao" in POI names implies proximity to the Guomao CBD or a similar core business district.
• Exceptionally High POI Density: Nearly 2,000 shopping-related POIs within one square kilometer, far exceeding typical commercial zones, points to a central retail hub like Guomao, Sanlitun, or Wangjing.
3. Multi-Tiered Consumption Landscape
• Mass Market Consumption: Supermarkets and convenience stores address the daily needs of nearby residents.
• High-End and Niche Offerings: Stores like "Dionysus Wine Cellar", "Montblanc", and "Beijing Jianya Century Auction Co., Ltd." indicate the presence of luxury brands and specialty services, appealing to affluent and interest-based clientele.
4. Commercial Vitality Driven by Transit and Foot Traffic
• Density Backed by Accessibility: Such a concentration of POIs suggests strong transport connectivity (e.g., metro, bus hubs) and population density (e.g., office towers, housing). The "Guomao" indicator further implies proximity to a metro hub or major retail cluster.
• Competitive and Complementary Dynamics: The coexistence of specialty, fashion, and general markets suggests differentiated positioning (e.g., brand vs. value), while services like convenience stores offer complementary utility, enhancing overall vitality.
5. Potential Challenges
• Risk of Oversaturation: High POI density may lead to homogeneous competition, putting pressure on smaller retailers.
• Traffic Congestion: Intensive commercial activity could strain nearby roads, requiring effective urban traffic planning and management.
Conclusion: The area is most likely a "core commercial district in Beijing" (e.g., Guomao, Sanlitun) or a "large emerging complex", characterized by:
• Diverse Business Types: From everyday retail to luxury brands.
• Broad Consumption Spectrum: Serving mass, middle-class, and affluent segments.
• Transit-Oriented Clustering: Commercial density supported by transit and population flow.
• Potential Functional Zoning: Unique commercial ecosystems inferred from specialized POIs.

B Trajectory Augmentation Strategies

We apply two data augmentation strategies to the trajectory data to generate distinct views for contrastive learning, as detailed below; a sketch of both strategies follows this subsection.

Trajectory Cropping. To better capture the spatial information in trajectories, we apply random cropping while preserving continuity. Specifically, we crop either the beginning or the end of a trajectory, with the cropping ratio uniformly sampled between 5% and 15%.

Temporal Perturbation. To emphasize temporal characteristics, we randomly perturb the travel times of approximately 15% of the segments. Specifically, given a road segment with travel time Δt, the perturbed value Δt′ is computed as Δt′ = Δt − r(Δt − Δt_avg), where Δt_avg denotes the average travel time for that road segment, and r is a random variable uniformly sampled between 15% and 30%.
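For concreteness, the following minimal sketch implements both augmentations under the parameters above. It is our own illustration, not the authors' code; the function names and the representation of a trajectory as a list of segments (with parallel travel-time lists) are assumptions.

```python
import random

def crop_trajectory(segments: list, low: float = 0.05, high: float = 0.15) -> list:
    """Randomly crop the beginning or the end of a trajectory,
    keeping the remaining segments contiguous."""
    ratio = random.uniform(low, high)
    n_drop = max(1, int(len(segments) * ratio))
    if random.random() < 0.5:
        return segments[n_drop:]   # crop the beginning
    return segments[:-n_drop]      # crop the end

def perturb_travel_times(times: list[float], avg_times: list[float],
                         frac: float = 0.15) -> list[float]:
    """Perturb ~frac of the segment travel times toward each segment's
    historical average: dt' = dt - r * (dt - dt_avg), with r ~ U(0.15, 0.30)."""
    perturbed = list(times)
    for i in range(len(times)):
        if random.random() < frac:
            r = random.uniform(0.15, 0.30)
            perturbed[i] = times[i] - r * (times[i] - avg_times[i])
    return perturbed
```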
C Details of the Experimental Setup

In this section, we provide a detailed description of our experimental setup, including the datasets, downstream evaluation tasks, evaluation metrics, and baseline models used for comparison.

C.1 Datasets

Here, we present the statistics and characteristics of the trajectory and POI datasets across three cities: Beijing, Chengdu, and Xi'an.

Trajectory Datasets. We evaluate the performance of PRTraj and other baseline methods on trajectory datasets collected from three cities: Beijing, Chengdu, and Xi'an. Table 4 reports the statistics of these datasets, and Figure 6 shows their spatial distributions.

Table 4: Statistics of Trajectory Datasets.
Dataset   # Trajectories   # Segments   Avg. Length   Avg. Time
Beijing   1,010,325        40,306       6.60 km       15.57 min
Chengdu   431,581          6,741        4.10 km       8.77 min
Xi'an     407,181          6,795        4.54 km       11.59 min

Figure 6: Distributions of the three trajectory datasets. (a) Beijing; (b) Chengdu; (c) Xi'an.

POI Datasets. We collect POI datasets for Beijing, Chengdu, and Xi'an using the Amap API. Each record includes the POI name, primary category, subcategory, and geographic coordinates. POIs located outside city boundaries or lacking valid coordinates are excluded. Table 5 presents the statistics of these POI datasets.

Table 5: Statistics of POI Datasets.
Dataset   # POIs    # Primary Categories   # Subcategories
Beijing   706,115   14                     128
Chengdu   141,918   14                     126
Xi'an     87,309    14                     124

To illustrate the structure of the POI dataset more clearly, Table 6 presents a representative record from Beijing.

Table 6: Example of a POI record from Beijing.
Field              Value
POI Name           National Museum of China
Primary Category   Science, Education and Cultural Services
Subcategory        Museum
Coordinates        (39.903744, 116.395475)

C.2 Downstream Tasks

Implementation details for the five downstream tasks used to evaluate the learned trajectory representations are presented below.

Road Label Prediction. Following prior studies [5, 6, 42], we evaluate the quality of the learned road segment representations by predicting segment-level attributes. In particular, we focus on predicting the number of lanes, as this attribute is not included in the input features defined in Equation (1). Since road segments with five or more lanes are extremely rare, including them may distort evaluation metrics (e.g., Macro-F1) and compromise reliability. Therefore, we restrict this task to segments with 1 to 4 lanes. The model is trained using the cross-entropy loss.

Trajectory Destination Prediction. This task aims to predict the final destination of a trajectory using partial trajectory information [42, 44], with applications in areas such as POI recommendation and map navigation. Specifically, we use the first 50% of the trajectory as input and predict the road segment where the trajectory terminates. We formulate this task as a classification problem, and the model is optimized using a cross-entropy loss.

Travel Time Estimation. Also known as Estimated Time of Arrival (ETA), this task aims to predict the time a vehicle takes to complete a given trajectory. As a fundamental component of navigation systems, it has broad real-world applications [9, 10, 40]. We treat this as a regression problem optimized with the MSE loss; the target travel time is measured in minutes.

Similar Trajectory Retrieval. This task aims to retrieve the most similar trajectory T from a large-scale trajectory dataset D, which has significant applications in identifying popular routes and similar drivers [24]. Following prior work [20, 29, 58], we generate ground truth by constructing detoured versions of original trajectories. Specifically, we sample n_q trajectories from the test set; for each trajectory T, we search for the top-k shortest paths from its origin to destination on the road network. The first path that is strictly longer than the original trajectory T is selected as the detoured trajectory T_d. To construct the candidate set D, we combine the original n_q trajectories with n_neg additional trajectories that are randomly sampled from the test set without overlap with the original n_q trajectories. During evaluation, each detoured trajectory T_d is used to query the candidate set D, and the retrieval is considered correct if the most similar trajectory retrieved is the corresponding original trajectory T. In our implementation, we set n_q = 5000 and n_neg = 50000.

Path Ranking. In this task, the model assigns a score in the range [0, 1] to each candidate trajectory to rank multiple paths connecting the same origin and destination, where a higher score indicates greater similarity to the "optimal" trajectory. Following prior work [49], we assign a score of 1.0 to the real trajectory and generate the top-k shortest paths, using their Intersection-over-Union (IoU) with the real trajectory as the target score. This task is framed as a regression problem and optimized using the MSE loss. To encourage spatial diversity among the generated trajectories, the top-k paths are generated and filtered sequentially (see the sketch below). Specifically, starting from top-1 to top-k, we evaluate each path in order. For the i-th candidate, we compute its IoU with the real trajectory as well as with all previously retained paths (i.e., top-1 to top-(i−1)). If any IoU exceeds a predefined threshold δ, the path is discarded. This iterative filtering ensures that the final candidate set avoids redundancy and covers diverse spatial patterns. In our implementation, we set k = 10 and δ = 0.8.
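A minimal sketch of this filtering step, under the assumption that each path is represented as a set of road segment IDs (the helper names are ours, not the authors'):

```python
def iou(a: set, b: set) -> float:
    """Intersection-over-Union of two paths given as sets of road segment IDs."""
    return len(a & b) / len(a | b)

def filter_candidates(real: set, candidates: list[set], delta: float = 0.8):
    """Sequentially keep top-k shortest paths that are not too similar
    to the real trajectory or to any previously retained path."""
    kept, scores = [], []
    for path in candidates:                      # candidates ordered top-1 ... top-k
        sims = [iou(path, real)] + [iou(path, p) for p in kept]
        if any(s > delta for s in sims):
            continue                             # discard redundant path
        kept.append(path)
        scores.append(iou(path, real))           # IoU with the real trajectory = target score
    return kept, scores
```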
C.3 Evaluation Metrics

Here, we detail the formulation of Kendall's τ and Spearman's ρ rank correlation coefficients, which are utilized to assess the performance of the path ranking task.

Kendall's Rank Correlation Coefficient (τ). Kendall's τ [22] measures the ordinal association between two ranked variables by counting the numbers of concordant and discordant pairs. Given a set of n items, the metric is computed as

    τ = (C − D) / (½ n(n − 1)),    (23)

where C is the number of concordant pairs and D is the number of discordant pairs. The value of τ ranges from -1 (complete disagreement) to 1 (complete agreement). It is widely used to evaluate the agreement between predicted and true rankings.

Spearman's Rank Correlation Coefficient (ρ). Spearman's ρ [33] evaluates how well the relationship between two variables can be described by a monotonic function. It is defined as the Pearson correlation between the rank variables and, in the absence of ties, can be computed as

    ρ = 1 − (6 Σᵢ dᵢ²) / (n(n² − 1)),    (24)

where dᵢ is the difference between the ranks of each item in the two lists, and n is the number of ranked items. Like Kendall's τ, the value of ρ ranges from -1 to 1, with higher values indicating stronger rank correlation.
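As a sanity check, Equations (23) and (24) can be computed directly from rank vectors; the following minimal sketch (our own illustration, covering only the tie-free case stated above) mirrors the two definitions:

```python
from itertools import combinations

def kendall_tau(pred_ranks: list[float], true_ranks: list[float]) -> float:
    """Equation (23): (C - D) / (n(n-1)/2), assuming no ties."""
    n = len(pred_ranks)
    c = d = 0
    for i, j in combinations(range(n), 2):
        agree = (pred_ranks[i] - pred_ranks[j]) * (true_ranks[i] - true_ranks[j])
        if agree > 0:
            c += 1      # concordant pair
        elif agree < 0:
            d += 1      # discordant pair
    return (c - d) / (n * (n - 1) / 2)

def spearman_rho(pred_ranks: list[float], true_ranks: list[float]) -> float:
    """Equation (24): 1 - 6 * sum(d_i^2) / (n(n^2 - 1)), assuming no ties."""
    n = len(pred_ranks)
    sq_diff = sum((p - t) ** 2 for p, t in zip(pred_ranks, true_ranks))
    return 1 - 6 * sq_diff / (n * (n ** 2 - 1))
```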
C.4 Baselines

We compare PRTraj's performance against the following eight state-of-the-art baselines:
• HRNR [42] is a hierarchical graph neural network that learns road network representations by modeling a three-level structure of "functional zones", "structural regions", and "road segments" to capture both structural and functional characteristics.
• Toast [6] combines a traffic context-aware skip-gram module to capture traffic patterns with a trajectory-enhanced Transformer module to learn traveling semantics from trajectory data.
• PIM [48] learns trajectory representations using two complementary discriminators that capture both global (trajectory-trajectory) and local (trajectory-road segment) views, along with a curriculum-based negative sampling strategy.
• DyToast [5] enhances Toast [6] by integrating learnable trigonometric temporal functions to explicitly model dynamic temporal patterns, yielding time-sensitive road network representations.
• JCLRNT [29] jointly learns road network and trajectory representations in an end-to-end framework through three tailored contrastive learning tasks: road-road, trajectory-trajectory, and a novel road-trajectory cross-scale contrast mechanism.
• START [20] employs GAT [36] to encode road representations enriched with travel semantics, and a Transformer [35] to model periodic temporal regularities. Two self-supervised objectives, span-masked recovery and contrastive learning, are designed to capture spatio-temporal characteristics.
• TRACK [16] extends START [20] by jointly learning dynamic road network and trajectory representations through explicitly bridging traffic state and trajectory data.
• GREEN [58] uses CNN- and GNN-based encoders to learn grid-based and road-level trajectory representations, and fuses them with a cross-attention module for the final trajectory embedding.

C.5 Implementation Details

The embedding dimension d is set to 128. The Environment Perception Module adopts a proximity threshold δ of 100 m and a grid cell side length L of 1000 m. Both the CrossGAT module and the coarse-grained semantic attention mechanism use 4 attention heads. The Transformer encoder in the Trajectory Semantic Integrator consists of 6 layers, each with 4 attention heads and a dropout rate of 0.1. For both pre-training and fine-tuning, the model is trained for 50 epochs using the AdamW optimizer with a batch size of 64. The learning rate is initialized to 2 × 10⁻⁴, undergoes linear warm-up over the first 5 epochs, and then decays via a cosine schedule to a minimum of 1 × 10⁻⁶ (see the sketch below). To ensure a fair comparison, all baseline models use the same embedding dimension (128) and Transformer depth (6 layers) as PRTraj, and their road segment embeddings are computed using Equation (1). Other configurations follow the default settings of their original implementations. All experiments are implemented with PyTorch 2.7.1 and conducted on a single NVIDIA RTX A6000 GPU.
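The warm-up-then-cosine schedule can be realized in PyTorch roughly as follows. This is a minimal sketch assuming epoch-level scheduler stepping and the hyperparameters above; the authors' exact implementation may differ.

```python
import math
import torch

EPOCHS, WARMUP, LR_MAX, LR_MIN = 50, 5, 2e-4, 1e-6

model = torch.nn.Linear(128, 128)  # stand-in for the PRTraj encoder
optimizer = torch.optim.AdamW(model.parameters(), lr=LR_MAX)

def lr_factor(epoch: int) -> float:
    """Multiplier on LR_MAX: linear warm-up, then cosine decay to LR_MIN."""
    if epoch < WARMUP:
        return (epoch + 1) / WARMUP
    progress = (epoch - WARMUP) / (EPOCHS - WARMUP)
    cosine = 0.5 * (1 + math.cos(math.pi * progress))
    return (LR_MIN + (LR_MAX - LR_MIN) * cosine) / LR_MAX

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lr_factor)

for epoch in range(EPOCHS):
    # ... one training epoch over the trajectory batches ...
    optimizer.step()   # placeholder for the inner loop's parameter updates
    scheduler.step()   # advance the learning-rate schedule once per epoch
```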
D More Detailed Experimental Results

In this section, we present a series of supplementary experiments for a more comprehensive evaluation of our proposed method. These include additional overall performance results, data efficiency results, hyperparameter sensitivity analyses, and visualization analyses.

D.1 Overall Performance with STD Values

To ensure the robustness and statistical reliability of our experimental results, we perform five independent runs and report the results along with their corresponding std values in Table 7.

D.2 Additional Results on Data Efficiency

Figure 7 presents the data efficiency results for the Chengdu and Xi'an datasets, which corroborate the findings from the Beijing dataset presented in the main paper. Across these heterogeneous urban environments, PRTraj consistently demonstrates superior data efficiency, often surpassing the fully-trained top baseline with only a small fraction of the training data. This robust performance underscores the generalizability of our approach, highlighting its practical value in scenarios where large-scale, high-quality trajectory data may be scarce or expensive to acquire.

D.3 Hyperparameter Sensitivity

Here, we evaluate the stability of PRTraj by analyzing its sensitivity to the two key hyperparameters within the Environment Perception Module: the proximity radius δ for fine-grained environment semantic modeling and the grid cell side length L for coarse-grained environment semantic modeling. While these parameters theoretically influence the model's perceptual granularity of the environment, a robust model should exhibit consistent performance across a reasonable range of values rather than being finely tuned to a specific configuration. To demonstrate this, we conduct experiments on all three datasets, varying δ across {50, 100, 200} meters and L across {500, 1000, 2000} meters. As illustrated in Figure 8, PRTraj is not overly sensitive to these choices and delivers consistently strong performance, thereby highlighting the model's stability and the generalizability of our selected configuration.

D.4 Visualization Analysis

To supplement the quantitative results, we conduct a qualitative visualization analysis to offer more intuitive insights into the capabilities of PRTraj. This analysis is designed to visually demonstrate how our unified modeling of environment perception and route choice translates into more semantically expressive and behaviorally aligned trajectory representations. We present two sets of visualizations: case studies on trajectory similarity retrieval and a t-SNE projection of the trajectory embeddings.

Case Study on Trajectory Similarity Retrieval. In this part, we conduct a case study to visually inspect the quality of our learned embeddings for similar trajectory retrieval. As illustrated in Figures 9, 10, and 11, for a randomly selected query trajectory, we retrieve its top-3 most similar counterparts using embeddings from PRTraj and from three leading baselines: START, TRACK, and GREEN. In contrast to the baselines, our method retrieves more similar trajectories, owing to its perception of the environment and its modeling of route choice behavior.

t-SNE Projection of Trajectory Embeddings. To further assess the model's ability to capture trajectory semantics under dynamic traffic conditions, we perform a case study centered on a traffic control event in Beijing. From November 13 to 15, 2015, the Sanyuan Bridge area underwent traffic restrictions due to a bridge replacement project, which resulted in a significant shift in traffic patterns, as illustrated by the heatmap comparison in Figure 12. We collect trajectories passing through this area during the control period and compare them with trajectories from normal days before the event. Figure 13 shows the t-SNE projections of the learned embeddings from PRTraj and three strong baselines: START, TRACK, and GREEN. As shown, PRTraj more clearly separates the restricted and normal trajectories, indicating its superior ability to perceive environmental changes and model route choice behavior. In contrast, the baselines yield more entangled representations, failing to effectively distinguish the impact of the traffic control.

Table 7: Performance comparison between PRTraj and baselines with mean and standard deviation on three real-world trajectory datasets across five downstream tasks. Results are reported as mean ± std over five independent runs.

Task:    RLP                         TDP                                        TTE                                     STR                                     PR
Metric:  Macro-F1↑     Micro-F1↑     Acc@1↑        Acc@5↑        Acc@10↑       MAE↓          RMSE↓         MAPE↓         HR@1↑         HR@5↑         MRR↑          τ↑            ρ↑            MAE↓

Beijing
HRNR     0.7912±0.0390 0.7990±0.0392 0.0789±0.0014 0.2019±0.0013 0.2613±0.0026 5.1072±0.0612 9.4108±0.0998 0.3388±0.0166 0.8653±0.0077 0.9226±0.0166 0.8914±0.0171 0.6259±0.0029 0.6749±0.0029 0.1492±0.0018
Toast    0.5868±0.0385 0.6285±0.0305 0.0694±0.0017 0.1809±0.0057 0.2374±0.0081 5.1042±0.0602 9.2255±0.0666 0.3423±0.0151 0.8532±0.0098 0.9112±0.0092 0.8792±0.0109 0.6298±0.0045 0.6808±0.0045 0.1444±0.0009
PIM      0.6284±0.0336 0.6519±0.0362 0.0629±0.0005 0.1614±0.0015 0.2100±0.0019 5.1622±0.0422 9.3977±0.0324 0.3456±0.0077 0.8060±0.0028 0.8615±0.0056 0.8323±0.0033 0.6125±0.0015 0.6627±0.0014 0.1506±0.0010
DyToast  0.6263±0.0331 0.6488±0.0353 0.0772±0.0020 0.1846±0.0034 0.2415±0.0041 4.2883±0.0244 8.3221±0.0201 0.2863±0.0034 0.8773±0.0091 0.9259±0.0065 0.8998±0.0080 0.6317±0.0027 0.6822±0.0027 0.1421±0.0019
JCLRNT   0.6479±0.0281 0.6751±0.0227 0.0782±0.0010 0.1879±0.0018 0.2538±0.0015 5.0191±0.0347 9.1380±0.0497 0.3281±0.0112 0.8985±0.0029 0.9369±0.0094 0.9166±0.0040 0.6352±0.0012 0.6871±0.0015 0.1405±0.0007
START    0.7574±0.0176 0.7776±0.0144 0.0856±0.0012 0.2153±0.0030 0.2876±0.0035 5.0103±0.0409 9.1894±0.0368 0.3376±0.0076 0.9056±0.0044 0.9442±0.0098 0.9231±0.0083 0.6421±0.0016 0.6903±0.0012 0.1397±0.0012
TRACK    0.6744±0.0157 0.6948±0.0126 0.0873±0.0014 0.2198±0.0032 0.2939±0.0033 4.0375±0.0368 8.0509±0.0472 0.2773±0.0095 0.8941±0.0023 0.9387±0.0064 0.9158±0.0071 0.6455±0.0016 0.6898±0.0016 0.1402±0.0019
GREEN    0.6705±0.0184 0.7006±0.0147 0.0913±0.0019 0.2285±0.0032 0.3062±0.0032 4.1657±0.0324 8.2444±0.0496 0.2915±0.0083 0.9078±0.0035 0.9437±0.0071 0.9207±0.0063 0.6302±0.0020 0.6811±0.0019 0.1438±0.0022
PRTraj   0.8879±0.0197 0.8953±0.0134 0.1078±0.0011 0.2649±0.0023 0.3496±0.0029 3.8719±0.0341 7.8116±0.0417 0.2607±0.0071 0.9612±0.0025 0.9828±0.0051 0.9712±0.0032 0.6813±0.0013 0.7289±0.0013 0.1282±0.0011

Chengdu
HRNR     0.7054±0.0254 0.7903±0.0209 0.4168±0.0012 0.6215±0.0020 0.6995±0.0013 1.7796±0.0087 2.7743±0.0272 0.2308±0.0033 0.6069±0.0154 0.7705±0.0156 0.6782±0.0146 0.7489±0.0027 0.7885±0.0029 0.1007±0.0016
Toast    0.4200±0.0279 0.7508±0.0138 0.3954±0.0005 0.5940±0.0006 0.6667±0.0009 1.7804±0.0126 2.7730±0.0308 0.2301±0.0067 0.5739±0.0166 0.7180±0.0173 0.6100±0.0196 0.7359±0.0019 0.7779±0.0017 0.1020±0.0011
PIM      0.4684±0.0394 0.7569±0.0129 0.4073±0.0009 0.6129±0.0010 0.6858±0.0009 1.7877±0.0042 2.7778±0.0145 0.2292±0.0026 0.5498±0.0044 0.7012±0.0057 0.6025±0.0050 0.7502±0.0019 0.7895±0.0015 0.0969±0.0016
DyToast  0.4511±0.0306 0.7548±0.0178 0.4007±0.0011 0.6019±0.0013 0.6703±0.0016 1.5071±0.0091 2.3719±0.0220 0.2025±0.0082 0.6256±0.0092 0.7811±0.0103 0.6962±0.0086 0.7456±0.0019 0.7858±0.0016 0.0981±0.0015
JCLRNT   0.6376±0.0392 0.7754±0.0194 0.4072±0.0011 0.6154±0.0011 0.6932±0.0010 1.7959±0.0049 2.7878±0.0209 0.2343±0.0034 0.7521±0.0045 0.8706±0.0093 0.8069±0.0058 0.7526±0.0015 0.7908±0.0013 0.0958±0.0005
START    0.6535±0.0358 0.7805±0.0231 0.4128±0.0015 0.6231±0.0018 0.7033±0.0020 1.4901±0.0052 2.3406±0.0133 0.2010±0.0033 0.7859±0.0170 0.8911±0.0183 0.8337±0.0199 0.7559±0.0014 0.7954±0.0012 0.0931±0.0009
TRACK    0.6107±0.0351 0.7590±0.0209 0.4159±0.0013 0.6267±0.0015 0.7041±0.0018 1.4307±0.0056 2.2401±0.0159 0.1864±0.0039 0.7712±0.0130 0.8831±0.0149 0.8221±0.0120 0.7549±0.0017 0.7951±0.0016 0.0927±0.0012
GREEN    0.6182±0.0275 0.7692±0.0136 0.4060±0.0008 0.6137±0.0009 0.6933±0.0009 1.4259±0.0035 2.2245±0.0129 0.1882±0.0023 0.7992±0.0071 0.9044±0.0089 0.8394±0.0071 0.7428±0.0011 0.7824±0.0010 0.1014±0.0007
PRTraj   0.8004±0.0262 0.8615±0.0175 0.4294±0.0006 0.6408±0.0007 0.7184±0.0009 1.3668±0.0087 2.1718±0.0194 0.1776±0.0059 0.9404±0.0081 0.9728±0.0117 0.9557±0.0084 0.7620±0.0015 0.8017±0.0014 0.0933±0.0006

Xi'an
HRNR     0.6459±0.0359 0.6651±0.0251 0.3511±0.0010 0.5800±0.0027 0.6624±0.0019 2.9449±0.0105 4.7241±0.0378 0.2835±0.0047 0.6641±0.0073 0.8062±0.0124 0.7251±0.0087 0.6972±0.0024 0.7459±0.0022 0.1062±0.0014
Toast    0.4308±0.0405 0.5129±0.0281 0.3260±0.0011 0.5421±0.0014 0.6247±0.0018 2.9350±0.0191 4.6983±0.0403 0.2824±0.0059 0.6618±0.0168 0.7838±0.0147 0.6902±0.0107 0.6894±0.0017 0.7419±0.0016 0.1094±0.0007
PIM      0.4357±0.0395 0.5155±0.0294 0.3396±0.0009 0.5629±0.0007 0.6440±0.0006 2.9540±0.0079 4.7358±0.0153 0.2786±0.0073 0.6188±0.0062 0.7453±0.0102 0.6685±0.0073 0.7022±0.0014 0.7521±0.0011 0.1038±0.0011
DyToast  0.4327±0.0362 0.5149±0.0260 0.3304±0.0016 0.5451±0.0026 0.6222±0.0019 2.2856±0.0176 3.7257±0.0393 0.2282±0.0056 0.6900±0.0061 0.8187±0.0040 0.7488±0.0041 0.6975±0.0029 0.7483±0.0026 0.1058±0.0028
JCLRNT   0.4582±0.0380 0.5076±0.0377 0.3430±0.0015 0.5687±0.0024 0.6566±0.0027 2.9791±0.0124 4.7513±0.0348 0.2869±0.0054 0.8092±0.0054 0.8895±0.0065 0.8451±0.0057 0.7024±0.0018 0.7518±0.0016 0.1039±0.0007
START    0.6225±0.0226 0.6363±0.0141 0.3499±0.0014 0.5844±0.0026 0.6742±0.0031 2.3700±0.0116 3.7994±0.0317 0.2482±0.0046 0.8645±0.0098 0.9214±0.0097 0.8724±0.0087 0.6985±0.0016 0.7481±0.0014 0.1047±0.0006
TRACK    0.4652±0.0238 0.5190±0.0170 0.3503±0.0012 0.5824±0.0018 0.6705±0.0021 2.2514±0.0198 3.5998±0.0399 0.2047±0.0059 0.8613±0.0077 0.9255±0.0131 0.8793±0.0089 0.6998±0.0022 0.7480±0.0020 0.1044±0.0013
GREEN    0.5290±0.0209 0.5545±0.0115 0.3550±0.0009 0.5889±0.0011 0.6773±0.0010 2.2743±0.0154 3.7211±0.0351 0.2056±0.0055 0.8862±0.0091 0.9403±0.0095 0.9123±0.0071 0.6969±0.0015 0.7478±0.0013 0.1041±0.0008
PRTraj   0.7599±0.0268 0.7591±0.0195 0.3624±0.0008 0.5971±0.0010 0.6837±0.0011 2.1507±0.0135 3.6072±0.0308 0.1943±0.0049 0.9538±0.0071 0.9864±0.0074 0.9687±0.0079 0.7131±0.0013 0.7629±0.0011 0.1010±0.0005

Figure 7: Performance of PRTraj on downstream tasks with varying proportions of training data on the Chengdu and Xi'an trajectory datasets. The red dashed line denotes the best baseline performance. For brevity, we report one metric for each task. (Panels per city: Road Label Prediction, Macro-F1; Trajectory Destination Prediction, Acc@1; Travel Time Estimation, MAE; Similar Trajectory Retrieval, HR@1; Path Ranking, Kendall's τ.)

Figure 8: Performance of PRTraj on downstream tasks under varying hyperparameter settings for the Environment Perception Module across the Beijing, Chengdu, and Xi'an datasets. For brevity, we present one metric for each task. (Heatmaps over δ ∈ {50m, 100m, 200m} and L ∈ {500m, 1000m, 2000m} for each task and city.)

Figure 9: Case study of trajectory similarity retrieval on the Beijing dataset. (a) START; (b) TRACK; (c) GREEN; (d) PRTraj. Each panel shows the query trajectory and its top-3 most similar retrieved trajectories. (Please zoom in for better visibility.)

Figure 10: Case study of trajectory similarity retrieval on the Chengdu dataset. (a) START; (b) TRACK; (c) GREEN; (d) PRTraj. (Please zoom in for better visibility.)

Figure 11: Case study of trajectory similarity retrieval on the Xi'an dataset. (a) START; (b) TRACK; (c) GREEN; (d) PRTraj. (Please zoom in for better visibility.)

Figure 12: Traffic heatmaps of the Sanyuan Bridge area. (a) Before Traffic Control; (b) During Traffic Control.

Figure 13: t-SNE visualization of trajectory embeddings before and during the traffic control event in the Sanyuan Bridge area. (a) START; (b) TRACK; (c) GREEN; (d) PRTraj.
Unifying Environment Perception and Route Choice Modeling for Trajectory Representation Learning

Ji Cao(1,3)*, Yu Wang(1)*, Tongya Zheng(2)†, Zujie Ren(3), Canghong Jin(2), Gang Chen(1), Mingli Song(1)
(1) Zhejiang University  (2) Hangzhou City University  (3) Zhejiang Lab
* Equal contribution. † Corresponding author.

Abstract

Trajectory Representation Learning (TRL) aims to encode raw trajectories into low-dimensional vectors, which can then be leveraged in various downstream tasks, including travel time estimation, location prediction, and trajectory similarity analysis. However, existing TRL methods suffer from a key oversight: treating trajectories as isolated spatio-temporal sequences, without considering the external environment and internal route choice behavior that govern their formation. To bridge this gap, we propose a novel framework that unifies comprehensive environment Perception and explicit Route choice modeling for effective Trajectory representation learning, dubbed PRTraj. Specifically, PRTraj first introduces an Environment Perception Module to enhance the road network by capturing multi-granularity environmental semantics from surrounding POI distributions. Building on this environment-aware backbone, a Route Choice Encoder then captures the route choice behavior inherent in each trajectory by modeling its constituent road segment transitions as a sequence of decisions. These route-choice-aware representations are finally aggregated to form the global trajectory embedding. Extensive experiments on 3 real-world datasets across 5 downstream tasks validate the effectiveness and generalizability of PRTraj. Moreover, PRTraj demonstrates strong data efficiency, maintaining robust performance under few-shot scenarios. Our code is available online (https://anonymous.4open.science/r/PRTraj).

CCS Concepts: • Information systems → Data mining.

Keywords: Urban computing, Trajectory data mining, Trajectory representation learning, Route choice modeling

1 Introduction

The proliferation of devices equipped with the Global Positioning System (GPS) facilitates the large-scale collection of spatio-temporal trajectories, capturing fine-grained movement patterns of vehicles or pedestrians [4, 38, 39, 57]. These trajectories contain rich spatio-temporal semantics and can be used to support various downstream applications, including travel time estimation [28], location prediction [43], and trajectory similarity analysis [19]. Trajectory representation learning (TRL), which encodes raw trajectory data into low-dimensional embeddings, receives considerable attention for its capacity to extract spatio-temporal patterns from trajectories [20, 29, 50].

[Figure 1: Illustration of two core factors in trajectory formation, which are often overlooked in existing TRL methods. (a) External Urban Environment: The functional semantics of an urban area profoundly shape its traffic patterns. Left: a tourist site (Summer Palace) and two business districts (Zhongguancun, Beijing CBD) in Beijing. Right: business areas exhibit distinct hourly speed patterns compared to the tourist site. (b) Internal Route Choice: Different drivers may choose significantly different routes for the same origin-destination (OD) pair, reflecting diverse internal route choice behavior.]
Early TRL methods [24, 52] are predominantly task-specific, which inherently limits their generalizability across diverse downstream tasks. To overcome this limitation, subsequent studies adopt self-supervised learning strategies to improve the generalizability of learned representations. For instance, Trembr [14] employs an encoder-decoder architecture for trajectory reconstruction to capture spatio-temporal characteristics; PIM [6] maximizes mutual information to derive path representations from both global and local perspectives; START [20] captures travel semantics and temporal regularities via masked language modeling and contrastive learning; further extending START, TRACK [16] enriches road segment features with historical speed profiles, while GREEN [58] learns multi-scale contextual information from both grid-level and segment-level trajectories.

However, despite these advancements, existing methods fundamentally regard a trajectory as an isolated spatio-temporal sequence, overlooking the latent influential factors that govern its formation. The formation of a trajectory does not occur in a vacuum, but is the result of a human's sequential decision-making process within a complex urban environment. Ignoring this underlying formation process leads to representations that lack behavioral fidelity and thus face inherent limitations in downstream tasks.

This trajectory formation process is governed by a duality of factors. On the one hand, the external urban environment profoundly shapes mobility patterns. As illustrated in Figure 1(a), the semantic function of an area profoundly dictates its traffic dynamics. Two distant business districts (Zhongguancun and Beijing CBD) exhibit remarkably similar hourly speed patterns, starkly contrasting with the nearby Summer Palace, a tourist site. On the other hand, the trajectory itself embodies the individual's internal route choice behavior. As shown in Figure 1(b), even for the same origin-destination (OD) pair, different drivers choose significantly different routes, revealing diverse preferences and decision criteria, which aligns with the core principles of Route Choice Models [31] from transportation studies. Crucially, these external and internal factors are not independent: an individual's route choice is fundamentally constrained and informed by their perception of the environment.

Motivated by these observations, we propose a novel framework that unifies comprehensive environment Perception with explicit Route choice modeling to enable more effective Trajectory representation learning, dubbed PRTraj. Firstly, our method constructs a semantically-rich road network by capturing and integrating multi-granularity environmental information from surrounding POIs. Specifically, we leverage a Large Language Model (LLM) to interpret raw POI distributions into rich descriptions, thereby enriching each road segment representation with an understanding of both local context and regional function. Secondly, leveraging this environment-aware road network, the model captures internal route choice behavior by framing each trajectory as a sequence of decisions, where each choice is modeled by a Wide & Deep network [7] that synthesizes key navigational factors with the rich environmental context from our aforementioned road network. Finally, a Trajectory Semantic Integrator fuses these behavioral representations with temporal features and aggregates them into a global trajectory embedding for various downstream tasks.
The entire model is end-to-end and optimized with self-supervised objectives, including Masked Language Modeling (MLM) and contrastive learning to enhance contextual and semantic understanding.

To summarize, we make the following contributions:
• We identify a key oversight in existing TRL: treating trajectories as isolated spatio-temporal sequences, rather than as sequential human decision-making within an urban environment.
• We propose PRTraj, a novel TRL framework that first constructs an environment-aware road network backbone to support the subsequent modeling of route choice behavior, whose outputs are finally aggregated into a unified trajectory embedding.
• We conduct extensive experiments on 3 real-world trajectory datasets across 5 downstream tasks. The results demonstrate that our method significantly outperforms state-of-the-art baselines, validating its effectiveness.

2 Preliminaries

2.1 Definitions

Definition 1 (GPS Trajectory). A GPS trajectory is a sequence of GPS points $T^{GPS} = \{\tau^{GPS}_1, \tau^{GPS}_2, \ldots, \tau^{GPS}_n\}$, where each point $\tau^{GPS}_i = (lat_i, lon_i, t_i)$ comprises latitude, longitude, and timestamp.

Definition 2 (Road Network). A road network is modeled as a directed graph $G = \langle V, E \rangle$, where $V$ is the set of road segments, and $E \subseteq V \times V$ is a set of edges representing connectivity. An edge $e_{i,j} \in E$ indicates that road segment $v_j$ is directly reachable from $v_i$.

Definition 3 (Road Network Constrained Trajectory). A road network constrained trajectory (for convenience, "trajectory" hereafter in unambiguous contexts) $T = \{\tau_1, \tau_2, \ldots, \tau_n\}$ is a timestamped sequence of road segments, derived from a raw GPS trajectory through map matching. Each element $\tau_i = (r_i, t_i)$ consists of a road segment ID $r_i$ and its corresponding timestamp $t_i$.

Definition 4 (Point of Interest). A point of interest (POI) is a geographic entity represented as $p = (lat, lon, cls, name)$, where $lat$ and $lon$ are its latitude and longitude, $cls = (c_1, c_2)$ is a two-level functional category (e.g., $c_1$ = Shopping Services, $c_2$ = Convenience Stores), and $name$ is the POI's name (e.g., 7-Eleven).

2.2 Problem Statement

Given a trajectory dataset $D = \{T_1, T_2, \ldots, T_{|D|}\}$ and a road network $G$, our objective is to learn a mapping function $F: T \to z \in \mathbb{R}^d$, which encodes each trajectory $T$ into a $d$-dimensional representation vector $z$. The representation $z$ is expected to be effective and informative for various downstream tasks.
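To make Definitions 1-4 and the problem statement concrete, the following is a minimal Python sketch of one possible encoding of these objects; the type names and field layout are our own illustrative assumptions, not part of the paper's released code.

from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class GPSPoint:                      # Definition 1: one point of a GPS trajectory
    lat: float
    lon: float
    t: float                         # timestamp

@dataclass
class POI:                           # Definition 4: point of interest
    lat: float
    lon: float
    cls: Tuple[str, str]             # two-level functional category (c1, c2)
    name: str

# Definition 2: road network G = <V, E> as an adjacency map over segment IDs;
# j in adj[i] encodes an edge e_{i,j}, i.e. v_j is directly reachable from v_i.
RoadNetwork = Dict[int, List[int]]

# Definition 3: map-matched trajectory as (road segment ID, timestamp) pairs.
Trajectory = List[Tuple[int, float]]

# Problem statement: learn F mapping a Trajectory to a d-dimensional vector z.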
3 Methodology

In this section, we introduce PRTraj, a novel TRL framework designed to unify the perception of the external environment with the modeling of internal route choice behavior. As depicted in Figure 2, PRTraj consists of three core modules: (1) an Environment Perception Module that comprehensively models multi-granularity urban semantics via POI distributions; (2) a Route Choice Encoder designed to model route choice behavior explicitly; and (3) a Trajectory Semantic Integrator that aggregates segment-level representations into a global trajectory embedding. The framework is trained end-to-end in a self-supervised manner using both MLM loss and contrastive learning.

[Figure 2: Overview of the PRTraj framework. It first employs an Environment Perception Module to construct an environment-aware road network representation by capturing urban environmental semantics. Building on this, the Route Choice Encoder models the decision-making process to extract route choice behaviors embedded within the trajectory. Finally, a Trajectory Semantic Integrator aggregates these outputs to produce the global trajectory representation.]

3.1 Environment Perception Module

The semantics of the transportation network are profoundly shaped by the surrounding urban environment, which can be effectively perceived through the distribution and attributes of POIs. However, disparities in spatial granularity and semantics between POIs and road segments hinder effective modeling. To bridge this gap, the Environment Perception Module leverages LLMs to model and integrate POI contexts at multiple granularities, thereby enriching the semantic representation of the road network.

To establish a basis for this enrichment, we first define the initial representation for each road segment. Specifically, each segment is encoded as a vector $r_i \in \mathbb{R}^d$ by summing the embeddings of four attributes: road type, length, in-degree, and out-degree:

  $r_i = r_i^{type} + r_i^{length} + r_i^{in} + r_i^{out}$.   (1)

This initial representation is then enhanced by modeling environment semantics at both fine and coarse granularities.

Fine-Grained Semantic Modeling. For fine-grained modeling, we capture localized urban context at the road segment level by analyzing the distribution of proximate POIs. Specifically, POIs within a fixed proximity $\delta$ of each road segment are extracted and grouped according to their two-level functional category information. These grouped POIs are then used to prompt Qwen3-8B [46] to generate natural language descriptions that characterize the local environment of each road segment from a human-centric perspective. The prompt template and illustrative prompt-output pairs are provided in Section A.1. These descriptions are subsequently encoded into dense semantic vectors, $p_i^{fine} \in \mathbb{R}^d$, for each road segment $i$ using the Qwen3-Embedding-8B text embedding model [55].

To mitigate the limitations of isolated segment-level representations and incorporate broader contextual information, a CrossGAT is employed. Here, the road segment representation $r_i$ serves as the query, while the POI context embeddings $p_j^{fine}$ of adjacent road segments act as keys and values in a cross-attention mechanism. This enables each segment to selectively aggregate semantically relevant contextual information from its surroundings:

  $\tilde{p}_i^{fine} = W_p p_i^{fine} + \sum_{j \in N(i)} \alpha_{i,j} W_p p_j^{fine}$,   (2)

where the attention coefficients $\alpha_{i,j}$ are computed as:

  $\alpha_{i,j} = \dfrac{\exp\big(a^\top \mathrm{LeakyReLU}(W_r r_i + W_p p_j^{fine})\big)}{\sum_{k \in N(i)} \exp\big(a^\top \mathrm{LeakyReLU}(W_r r_i + W_p p_k^{fine})\big)}$.   (3)

Here, $N(i)$ is the set of segments adjacent to $r_i$. LeakyReLU [36] is an activation function with a negative input slope of 0.2. $a \in \mathbb{R}^d$ and $W_r, W_p \in \mathbb{R}^{d \times d}$ are learnable parameters.
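To make the CrossGAT aggregation concrete, here is a minimal PyTorch sketch of Eqs. (2)-(3); the class name and the neighbor-list input format are our own illustrative assumptions rather than the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossGATSketch(nn.Module):
    # Each segment i attends, as query r_i, over the fine-grained POI
    # embeddings p_j of its adjacent segments j in N(i) (keys/values).
    def __init__(self, d):
        super().__init__()
        self.W_r = nn.Linear(d, d, bias=False)
        self.W_p = nn.Linear(d, d, bias=False)
        self.a = nn.Parameter(torch.randn(d))

    def forward(self, r, p, neighbors):
        # r, p: (N, d) tensors; neighbors[i]: LongTensor of indices in N(i)
        rows = []
        for i, nbr in enumerate(neighbors):
            self_term = self.W_p(p[i])                       # W_p p_i in Eq. (2)
            if len(nbr) == 0:
                rows.append(self_term)
                continue
            scores = F.leaky_relu(self.W_r(r[i]) + self.W_p(p[nbr]), 0.2) @ self.a
            alpha = torch.softmax(scores, dim=0)             # Eq. (3)
            rows.append(self_term + (alpha.unsqueeze(-1) * self.W_p(p[nbr])).sum(0))
        return torch.stack(rows)                             # stacked p~_i^fine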
Coarse-Grained Semantic Modeling. To capture regional POI influence, the city is divided into grid cells of side length $L$. For each POI primary category $c$ (i.e., the top-level category $c_1$ in Definition 4), we identify clusters by selecting the top 10% of grid cells with the highest POI density. Note that a single grid cell may contain multiple clusters, each corresponding to a different primary category $c$. These structured POI distributions are used to prompt Qwen3-8B [46] to generate cluster-level natural language descriptions (refer to Section A.2 for details), which are subsequently encoded using Qwen3-Embedding-8B [55] into grid-level vectors $g^c_{(x,y)} \in \mathbb{R}^d$, similar to the procedure used in fine-grained modeling.

To model the spatial diffusion of influence from each POI cluster type $c$, a 3 × 3 convolutional kernel is applied over $g^c_{(x,y)}$, enabling localized propagation of regional semantics within the urban grid:

  $\tilde{g}^c_{(x,y)} = \sum_{a=-1}^{1} \sum_{b=-1}^{1} W_{(a,b)}\, g^c_{(x+a,\, y+b)}$,   (4)

with learnable weights $W_{(a,b)} \in \mathbb{R}^{d \times d}$ for each relative position $(a,b)$, and zero-padding applied at grid boundaries. Then, for road segment $i$ located at grid $(i_x, i_y)$, its coarse-grained environment features are aggregated via an attention mechanism which dynamically models the influence of different POI cluster types:

  $\tilde{p}_i^{coarse} = \sum_{c \in C} \mathbb{1}^c_{(i_x,i_y)}\, \alpha_c\, W_v\big(\tilde{g}^c_{(i_x,i_y)} \,\|\, e_c\big)$,   (5)

where the attention coefficients $\alpha_c$ are calculated as:

  $\alpha_c = \dfrac{\exp\big(W_q r_i\, (W_k(\tilde{g}^c_{(i_x,i_y)} \| e_c))^\top\big)}{\sum_{c' \in C} \mathbb{1}^{c'}_{(i_x,i_y)} \exp\big(W_q r_i\, (W_k(\tilde{g}^{c'}_{(i_x,i_y)} \| e_{c'}))^\top\big)}$.   (6)

Here, $C$ denotes the set of all POI primary categories. $\mathbb{1}^c_{(x,y)} \in \{0, 1\}$ indicates whether grid $(x,y)$ has a valid semantic representation for primary category $c$, either due to inclusion in the top 10% or after spatial diffusion. $e_c \in \mathbb{R}^d$ is a category-specific embedding of POI type $c$. $W_q \in \mathbb{R}^{d \times d}$ and $W_k, W_v \in \mathbb{R}^{d \times 2d}$ are learnable matrices.

Multi-Granularity Semantic Integration. To produce a unified, environment-aware segment token, we integrate the fine-grained and coarse-grained environment semantics with the base road segment representation $r_i$ via a gated fusion mechanism, thereby allowing each segment to adaptively absorb relevant environmental information from multiple scales. The final environment-aware segment token $\tilde{r}_i$ is formulated as:

  $\tilde{r}_i = r_i + \tilde{p}_i^{fine} \odot \mathrm{Gate}(\tilde{p}_i^{fine} \| r_i) + \tilde{p}_i^{coarse} \odot \mathrm{Gate}(\tilde{p}_i^{coarse} \| r_i)$.   (7)

Here, the $\mathrm{Gate}(\cdot)$ function represents a gating mechanism composed of a linear transformation followed by a Tanh activation:

  $\mathrm{Gate}(x) = \mathrm{Tanh}(W_{Gate}\, x + b_{Gate})$,   (8)

where $W_{Gate} \in \mathbb{R}^{d \times 2d}$ and $b_{Gate} \in \mathbb{R}^d$ are learnable parameters.
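A minimal PyTorch sketch of the gated fusion in Eqs. (7)-(8) follows. The paper gives a single Gate(·) form but does not state whether the fine- and coarse-grained branches share parameters, so separate gates are assumed here for illustration.

import torch
import torch.nn as nn

class GatedFusionSketch(nn.Module):
    def __init__(self, d):
        super().__init__()
        # Eq. (8): Gate(x) = Tanh(W_Gate x + b_Gate), with W_Gate in R^{d x 2d}
        self.gate_fine = nn.Linear(2 * d, d)
        self.gate_coarse = nn.Linear(2 * d, d)   # assumption: unshared parameters

    def forward(self, r, p_fine, p_coarse):
        # r, p_fine, p_coarse: (N, d) base, fine-, and coarse-grained semantics
        g_f = torch.tanh(self.gate_fine(torch.cat([p_fine, r], dim=-1)))
        g_c = torch.tanh(self.gate_coarse(torch.cat([p_coarse, r], dim=-1)))
        return r + p_fine * g_f + p_coarse * g_c   # Eq. (7), elementwise products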
3.2 Route Choice Encoder

Building upon the environment-aware road network, the Route Choice Encoder is designed to capture the internal route choice behavior within trajectories. Instead of viewing a trajectory as a static sequence, it models the dynamic, sequential decision-making process at each transition between segments. Specifically, to model the decision of transitioning from the current segment $r_i$ to a candidate next segment $r_c \in N(r_i)$, we construct a context from the following four complementary features, which are designed to capture both key navigational factors and the rich environmental context:

• Journey Progression: We define the normalized journey progression $\rho_i$ to signify the global advancement within a journey:

  $\rho_i = \dfrac{\sum_{j=1}^{i} d_{r_j}}{\sum_{j=1}^{n} d_{r_j}}$,   (9)

where $d_{r_j}$ is the length of road segment $r_j$.

• Historical Transition Likelihood: We derive a normalized historical transition likelihood from $r_i$ to a candidate adjacent segment $r_c$ to capture general traffic flow patterns:

  $P(r_c \mid r_i) = \dfrac{N(r_i \to r_c)}{\sum_{r'_c \in N(r_i)} N(r_i \to r'_c)}$.   (10)

Here, $N(\cdot)$ denotes the historical transition frequency between consecutive segments.

• Destination-Oriented Directional Deviation: This feature measures the angular deviation between the direction towards $r_c$ and the direct bearing to the destination $r_n$:

  $\Delta\theta_{r_c} = \angle(\overrightarrow{r_i r_c}, \overrightarrow{r_i r_n})$.   (11)

• Environment-Aware Segment Tokens: We leverage the environment-aware segment tokens $\tilde{r}_{r_i}$ and $\tilde{r}_{r_c}$ for the current and candidate road segments, respectively. These tokens, produced by the Environment Perception Module, provide the essential environmental context for route choice modeling.

For each candidate transition from $r_i$ to $r_c \in N(r_i)$, we process the aforementioned context features using a Wide & Deep network [7]. This architecture is chosen for its unique ability to capture the dual nature of route choice decisions. Specifically, its Wide component captures explicit, low-order patterns from sparse feature interactions, akin to memorizing simple routing heuristics. Concurrently, its Deep component learns abstract, high-order representations from dense embeddings, enabling it to generalize to unseen decision scenarios. The specific details are as follows.

Wide Component Design. The Wide component models the interaction between the historical transition likelihood $P(r_c \mid r_i)$ and the destination-oriented directional deviation $\Delta\theta_{r_c}$. We select these two features as they encapsulate the fundamental trade-off in navigation between following popular routes and taking the most direct path. To implement this, we first discretize $P(r_c \mid r_i)$ and $\Delta\theta_{r_c}$ into 5 and 8 uniform bins, respectively. A cross-product transformation is then applied to these binned features to generate a 40-dimensional one-hot vector $e_{crossed} \in \mathbb{R}^{40}$. The output is derived through a linear transformation:

  $c_{Wide} = W_{Wide}\, e_{crossed}$.   (12)

Deep Component Design. The Deep component is designed to capture high-order feature interactions. We first project all contextual features into a common $d$-dimensional space. Notably, for the directional deviation $\Delta\theta_{r_c}$, we encode its sine and cosine components to preserve its cyclical nature. These projected features are then concatenated into a single vector $h_{(r_i,r_c)} \in \mathbb{R}^{5d}$:

  $h_{(r_i,r_c)} = W_\rho \rho_i \,\|\, W_P P(r_c \mid r_i) \,\|\, W_{r_i} \tilde{r}_{r_i} \,\|\, W_{r_c} \tilde{r}_{r_c} \,\|\, W_{\Delta\theta} [\sin \Delta\theta_{r_c}, \cos \Delta\theta_{r_c}]$,   (13)

where $W_\rho, W_P \in \mathbb{R}^{d \times 1}$, $W_{r_i}, W_{r_c} \in \mathbb{R}^{d \times d}$, and $W_{\Delta\theta} \in \mathbb{R}^{d \times 2}$ are learnable parameters. This vector is subsequently passed through an MLP mapping from $5d$ to $d$ dimensions to produce the Deep component's output $c_{Deep} \in \mathbb{R}^d$:

  $c_{Deep} = \mathrm{MLP}(h_{(r_i,r_c)})$.   (14)

Capturing Route Choice Behavior. The outputs from the Wide component $c_{Wide}$ and the Deep component $c_{Deep}$ are then integrated via element-wise summation to form a unified context vector $c_{(r_i,r_c)}$ for each potential transition $(r_i, r_c)$:

  $c_{(r_i,r_c)} = c_{Wide} + c_{Deep}$.   (15)
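For concreteness, here is a minimal PyTorch sketch of the Wide & Deep transition context of Eqs. (12)-(15). The uniform bin boundaries (over [0, 1] for $P(r_c \mid r_i)$ and [0, π] for $\Delta\theta$) and all module names are illustrative assumptions; only the feature set and dimensionalities follow the text.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TransitionContextSketch(nn.Module):
    def __init__(self, d, p_bins=5, ang_bins=8):
        super().__init__()
        self.p_bins, self.ang_bins = p_bins, ang_bins
        self.wide = nn.Linear(p_bins * ang_bins, d, bias=False)        # Eq. (12)
        self.w_rho, self.w_p = nn.Linear(1, d), nn.Linear(1, d)        # Eq. (13)
        self.w_ri, self.w_rc = nn.Linear(d, d), nn.Linear(d, d)
        self.w_ang = nn.Linear(2, d)
        self.deep = nn.Sequential(nn.Linear(5 * d, d), nn.ReLU(), nn.Linear(d, d))

    def forward(self, rho, p_trans, r_i, r_c, dtheta):
        # rho, p_trans, dtheta: (B,); r_i, r_c: (B, d) environment-aware tokens
        pb = (p_trans * self.p_bins).long().clamp(0, self.p_bins - 1)
        ab = (dtheta / torch.pi * self.ang_bins).long().clamp(0, self.ang_bins - 1)
        crossed = F.one_hot(pb * self.ang_bins + ab,
                            self.p_bins * self.ang_bins).float()       # e_crossed
        c_wide = self.wide(crossed)
        h = torch.cat([self.w_rho(rho.unsqueeze(-1)),                  # h in R^{5d}
                       self.w_p(p_trans.unsqueeze(-1)),
                       self.w_ri(r_i), self.w_rc(r_c),
                       self.w_ang(torch.stack([dtheta.sin(), dtheta.cos()], -1))],
                      dim=-1)
        return c_wide + self.deep(h)                                   # Eq. (15)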
To capture the user's route choice at the segment $r_i$, we explicitly model the contrast between the selected transition and its alternatives. The context vector corresponding to the chosen transition, $(r_i, r_{i+1})$, is denoted as $c_{r_i}^{selected}$. Concurrently, the context vectors of all unselected alternatives in $N(r_i) \setminus \{r_{i+1}\}$ are aggregated via mean pooling to form a single representation $c_{r_i}^{unselected}$:

  $c_{r_i}^{unselected} = \mathrm{Pool}\big(\{c_{(r_i,r_c)} \mid r_c \in N(r_i) \setminus \{r_{i+1}\}\}\big)$.   (16)

Finally, the selected and unselected context vectors are concatenated and projected through an MLP that maps from $\mathbb{R}^{2d}$ to $\mathbb{R}^d$, yielding the route choice behavior representation $c_{r_i} \in \mathbb{R}^d$:

  $c_{r_i} = \mathrm{MLP}\big(c_{r_i}^{selected} \,\|\, c_{r_i}^{unselected}\big)$.   (17)

This process is repeated for each segment in the trajectory to produce a sequence of route-choice-aware representations $(c_{r_1}, c_{r_2}, \ldots, c_{r_n})$. For the terminal segment $r_n$, its $c_{r_n}^{selected}$ is set to a zero vector. These representations are then passed to the subsequent Trajectory Semantic Integrator module.

3.3 Trajectory Semantic Integrator

The Trajectory Semantic Integrator module is designed to generate a global trajectory embedding. Firstly, for each spatio-temporal point $(r_i, t_i)$ within a trajectory, we construct its representation $x_i \in \mathbb{R}^d$ by combining the route-choice-aware representations with temporal embeddings:

  $x_i = c_{r_i} + t^m_{m(t_i)} + t^d_{d(t_i)}$,   (18)

where $t^m_{m(t_i)}$ and $t^d_{d(t_i)}$ are learnable embeddings for the minute-of-day (1 to 1440) and day-of-week (1 to 7) of the timestamp $t_i$, respectively.

Subsequently, we prepend a learnable classification token [CLS] to the sequence of spatio-temporal point representations. This sequence is then fed into a Transformer encoder [35] to produce context-aware hidden states:

  $(z^{traj}, z_1, \ldots, z_n) = \mathrm{TransformerEncoder}([CLS], x_1, \ldots, x_n)$.   (19)

The final hidden state corresponding to the [CLS] token, $z^{traj} \in \mathbb{R}^d$, serves as the comprehensive representation for the entire trajectory.

3.4 Pre-Training Tasks

Following prior TRL studies [16, 20, 58], we adopt two self-supervised objectives to enhance the model's ability to capture contextual dependencies and semantic distinctions.

Masked Language Modeling (MLM) Loss. To model contextual dependencies within trajectories, we employ a SpanBERT-style MLM objective [21]. For each trajectory, we randomly mask multiple spans, each with an average length of 3 segments, such that the total masked proportion is 15% of the sequence. Each masked segment $r_m$ is substituted with a learnable embedding $x_{[MASK]}$, and the model is trained to predict the original segment ID:

  $\mathcal{L}_{MLM} = -\dfrac{1}{|B|} \sum_{T \in B} \dfrac{1}{|M_T|} \sum_{r_m \in M_T} \log \dfrac{\exp(y_{m, r_m})}{\sum_{i \in V} \exp(y_{m, i})}$,   (20)

where $y_m$ is the model's prediction for a masked segment $r_m$.

Contrastive Loss. We further adopt the NT-Xent loss [3] to enforce semantic consistency across augmented views. For each trajectory in a batch $B$, two distinct augmentations (details in Section B) are applied, yielding $2|B|$ augmented views. The objective encourages the representations of positive pairs to be similar while distinguishing them from negative samples:

  $\mathcal{L}_{CL} = -\dfrac{1}{2|B|} \sum_{i=1}^{2|B|} \log \dfrac{\exp\big(\mathrm{sim}(z^{traj}_i, z^{traj}_{pos(i)})/\tau\big)}{\sum_{j=1}^{2|B|} \mathbb{1}_{[j \neq i]} \exp\big(\mathrm{sim}(z^{traj}_i, z^{traj}_j)/\tau\big)}$,   (21)

where $\mathrm{sim}(\cdot, \cdot)$ denotes cosine similarity, $\tau = 0.05$ is the temperature parameter, and $pos(i)$ identifies the positive sample index for $z_i$.

Pre-training Objective. The overall pre-training objective combines the two self-supervised losses in equal proportion:

  $\mathcal{L} = (\mathcal{L}_{MLM} + \mathcal{L}_{CL})/2$.   (22)
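As a concrete reference for the contrastive objective, the following is a minimal PyTorch sketch of the NT-Xent loss in Eq. (21), assuming the batch is arranged so that rows i and i + |B| are the two augmented views of trajectory i; the arrangement and function name are our own choices.

import torch
import torch.nn.functional as F

def nt_xent_sketch(z, tau=0.05):
    # z: (2B, d) trajectory embeddings z^traj for the 2|B| augmented views
    z = F.normalize(z, dim=-1)            # so dot products are cosine similarities
    sim = z @ z.t() / tau                 # pairwise sim(z_i, z_j) / tau
    n = z.size(0) // 2
    pos = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])   # pos(i)
    mask = torch.eye(2 * n, dtype=torch.bool)
    sim = sim.masked_fill(mask, float('-inf'))                      # enforce j != i
    return F.cross_entropy(sim, pos)      # mean of -log softmax at the positives

# The overall objective of Eq. (22) is then 0.5 * (mlm_loss + nt_xent_sketch(z)).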
4 Experiments

To demonstrate the effectiveness of the proposed PRTraj framework, we conducted extensive experiments on three real-world trajectory datasets to answer the following research questions:
• RQ1: How does PRTraj perform across various downstream tasks in comparison to state-of-the-art TRL methods?
• RQ2: What are the individual contributions of PRTraj's key architectural components to its overall performance?
• RQ3: How data-efficient is PRTraj, particularly under few-shot scenarios with limited training trajectories?
• RQ4: What is the computational efficiency of PRTraj in terms of training and inference time?
• RQ5: How does PRTraj perform in qualitative visualizations?

4.1 Experimental Setups

Datasets. We conduct experiments on three real-world trajectory datasets of Chinese cities, namely, Beijing, Chengdu, and Xi'an. Road networks for these cities are downloaded from OpenStreetMap (https://www.openstreetmap.org) using OSMnx [1], and trajectories are map-matched with FMM [47]. Each trajectory dataset is chronologically split into training, validation, and testing sets at a 7:1:2 ratio. Additionally, we acquire POI data for these three cities through the Amap API (https://lbs.amap.com), which includes detailed information such as POI names, categories, and geographical coordinates. A detailed description of the datasets is available in Section C.1.

Downstream Tasks. We evaluate the quality of the learned representations across five downstream tasks: road label prediction (RLP), trajectory destination prediction (TDP), travel time estimation (TTE), similar trajectory retrieval (STR), and path ranking (PR). These tasks assess our model's capabilities at two distinct levels: RLP evaluates the intermediate road segment representations, whereas the other four tasks evaluate the final trajectory-level representations. For STR, we directly use the learned trajectory embeddings to compute similarity, while for all other tasks (RLP, TDP, TTE, and PR), we append a task-specific MLP head and fine-tune the entire model end-to-end. More details are available in Section C.2.

Evaluation Metrics. For road label prediction, we adopt Macro-F1 and Micro-F1. For trajectory destination prediction, we report Acc@1, Acc@5, and Acc@10. For travel time estimation, we use mean absolute error (MAE), root mean square error (RMSE), and mean absolute percentage error (MAPE). For similar trajectory retrieval, we employ HR@1, HR@5, and mean reciprocal rank (MRR). For path ranking, we use Kendall's τ, Spearman's ρ, and MAE. The definitions of τ and ρ are provided in Section C.3.
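As a concrete reference for the path-ranking metric, the following is a minimal Python sketch of Kendall's τ [22] (the simple variant without tie correction); the function name and toy example are ours, not the paper's evaluation code.

from itertools import combinations

def kendall_tau_sketch(pred, true):
    # Normalized difference between concordant and discordant item pairs.
    pairs = list(combinations(range(len(pred)), 2))
    concordant = discordant = 0
    for i, j in pairs:
        s = (pred[i] - pred[j]) * (true[i] - true[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / len(pairs)

# e.g. kendall_tau_sketch([0.9, 0.4, 0.7], [1.0, 0.2, 0.5]) returns 1.0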
Table 1: Performance comparison between PRTraj and baselines on three real-world trajectory datasets across five downstream tasks, including road label prediction (RLP), trajectory destination prediction (TDP), travel time estimation (TTE), similar trajectory retrieval (STR), and path ranking (PR). "Impr." denotes the relative improvement of our method over the best result, and "Avg." indicates the average rank across all evaluation metrics. Columns per row: Macro-F1↑ | Micro-F1↑ (RLP); Acc@1↑ | Acc@5↑ | Acc@10↑ (TDP); MAE↓ | RMSE↓ | MAPE↓ (TTE); HR@1↑ | HR@5↑ | MRR↑ (STR); τ↑ | ρ↑ | MAE↓ (PR); Avg.↓

Beijing
HRNR | 0.7912 | 0.7990 | 0.0789 | 0.2019 | 0.2613 | 5.1072 | 9.4108 | 0.3388 | 0.8653 | 0.9226 | 0.8914 | 0.6259 | 0.6749 | 0.1492 | 6.3
Toast | 0.5868 | 0.6285 | 0.0694 | 0.1809 | 0.2374 | 5.1042 | 9.2255 | 0.3423 | 0.8532 | 0.9112 | 0.8792 | 0.6298 | 0.6808 | 0.1444 | 7.8
PIM | 0.6284 | 0.6519 | 0.0629 | 0.1614 | 0.2100 | 5.1622 | 9.3977 | 0.3456 | 0.8060 | 0.8615 | 0.8323 | 0.6125 | 0.6627 | 0.1506 | 8.6
DyToast | 0.6263 | 0.6488 | 0.0772 | 0.1846 | 0.2415 | 4.2883 | 8.3221 | 0.2863 | 0.8773 | 0.9259 | 0.8998 | 0.6317 | 0.6822 | 0.1421 | 5.8
JCLRNT | 0.6479 | 0.6751 | 0.0782 | 0.1879 | 0.2538 | 5.0191 | 9.1380 | 0.3281 | 0.8985 | 0.9369 | 0.9166 | 0.6352 | 0.6871 | 0.1405 | 5.1
START | 0.7574 | 0.7776 | 0.0856 | 0.2153 | 0.2876 | 5.0103 | 9.1894 | 0.3376 | 0.9056 | 0.9442 | 0.9231 | 0.6421 | 0.6903 | 0.1397 | 3.5
TRACK | 0.6744 | 0.6948 | 0.0873 | 0.2198 | 0.2939 | 4.0375 | 8.0509 | 0.2773 | 0.8941 | 0.9387 | 0.9158 | 0.6455 | 0.6898 | 0.1402 | 3.3
GREEN | 0.6705 | 0.7006 | 0.0913 | 0.2285 | 0.3062 | 4.1657 | 8.2444 | 0.2915 | 0.9078 | 0.9437 | 0.9207 | 0.6302 | 0.6811 | 0.1438 | 3.6
PRTraj | 0.8879 | 0.8953 | 0.1078 | 0.2649 | 0.3496 | 3.8719 | 7.8116 | 0.2607 | 0.9612 | 0.9828 | 0.9712 | 0.6813 | 0.7289 | 0.1282 | 1.0
Impr. | 12.22% | 12.05% | 18.07% | 15.93% | 14.17% | 4.10% | 2.97% | 5.99% | 5.88% | 4.09% | 5.21% | 5.55% | 5.59% | 8.23% |

Chengdu
HRNR | 0.7054 | 0.7903 | 0.4168 | 0.6215 | 0.6995 | 1.7796 | 2.7743 | 0.2308 | 0.6069 | 0.7705 | 0.6782 | 0.7489 | 0.7885 | 0.1007 | 5.4
Toast | 0.4200 | 0.7508 | 0.3954 | 0.5940 | 0.6667 | 1.7804 | 2.7730 | 0.2301 | 0.5739 | 0.7180 | 0.6100 | 0.7359 | 0.7779 | 0.1020 | 8.3
PIM | 0.4684 | 0.7569 | 0.4073 | 0.6129 | 0.6858 | 1.7877 | 2.7778 | 0.2292 | 0.5498 | 0.7012 | 0.6025 | 0.7502 | 0.7895 | 0.0969 | 6.9
DyToast | 0.4511 | 0.7548 | 0.4007 | 0.6019 | 0.6703 | 1.5071 | 2.3719 | 0.2025 | 0.6256 | 0.7811 | 0.6962 | 0.7456 | 0.7858 | 0.0981 | 6.6
JCLRNT | 0.6376 | 0.7754 | 0.4072 | 0.6154 | 0.6932 | 1.7959 | 2.7878 | 0.2343 | 0.7521 | 0.8706 | 0.8069 | 0.7526 | 0.7908 | 0.0958 | 5.6
START | 0.6535 | 0.7805 | 0.4128 | 0.6231 | 0.7033 | 1.4901 | 2.3406 | 0.2010 | 0.7859 | 0.8911 | 0.8337 | 0.7559 | 0.7954 | 0.0931 | 3.1
TRACK | 0.6107 | 0.7590 | 0.4159 | 0.6267 | 0.7041 | 1.4307 | 2.2401 | 0.1864 | 0.7712 | 0.8831 | 0.8221 | 0.7549 | 0.7951 | 0.0927 | 3.3
GREEN | 0.6182 | 0.7692 | 0.4060 | 0.6137 | 0.6933 | 1.4259 | 2.2245 | 0.1882 | 0.7992 | 0.9044 | 0.8394 | 0.7428 | 0.7824 | 0.1014 | 4.6
PRTraj | 0.8004 | 0.8615 | 0.4294 | 0.6408 | 0.7184 | 1.3668 | 2.1718 | 0.1776 | 0.9404 | 0.9728 | 0.9557 | 0.7620 | 0.8017 | 0.0933 | 1.1
Impr. | 13.47% | 9.01% | 3.02% | 2.25% | 2.03% | 4.14% | 2.37% | 4.72% | 17.67% | 7.56% | 13.86% | 0.81% | 0.79% | -0.65% |

Xi'an
HRNR | 0.6459 | 0.6651 | 0.3511 | 0.5800 | 0.6624 | 2.9449 | 4.7241 | 0.2835 | 0.6641 | 0.8062 | 0.7251 | 0.6972 | 0.7459 | 0.1062 | 5.7
Toast | 0.4308 | 0.5129 | 0.3260 | 0.5421 | 0.6247 | 2.9350 | 4.6983 | 0.2824 | 0.6618 | 0.7838 | 0.6902 | 0.6894 | 0.7419 | 0.1094 | 8.1
PIM | 0.4357 | 0.5155 | 0.3396 | 0.5629 | 0.6440 | 2.9540 | 4.7358 | 0.2786 | 0.6188 | 0.7453 | 0.6685 | 0.7022 | 0.7521 | 0.1038 | 7.0
DyToast | 0.4327 | 0.5149 | 0.3304 | 0.5451 | 0.6222 | 2.2856 | 3.7257 | 0.2282 | 0.6900 | 0.8187 | 0.7488 | 0.6975 | 0.7483 | 0.1058 | 6.5
JCLRNT | 0.4582 | 0.5076 | 0.3430 | 0.5687 | 0.6566 | 2.9791 | 4.7513 | 0.2869 | 0.8092 | 0.8895 | 0.8451 | 0.7024 | 0.7518 | 0.1039 | 6.1
START | 0.6225 | 0.6363 | 0.3499 | 0.5844 | 0.6742 | 2.3700 | 3.7994 | 0.2482 | 0.8645 | 0.9214 | 0.8724 | 0.6985 | 0.7481 | 0.1047 | 3.7
TRACK | 0.4652 | 0.5190 | 0.3503 | 0.5824 | 0.6705 | 2.2514 | 3.5998 | 0.2047 | 0.8613 | 0.9255 | 0.8793 | 0.6998 | 0.7480 | 0.1044 | 3.2
GREEN | 0.5290 | 0.5545 | 0.3550 | 0.5889 | 0.6773 | 2.2743 | 3.7211 | 0.2056 | 0.8862 | 0.9403 | 0.9123 | 0.6969 | 0.7478 | 0.1041 | 3.6
PRTraj | 0.7599 | 0.7591 | 0.3624 | 0.5971 | 0.6837 | 2.1507 | 3.6072 | 0.1943 | 0.9538 | 0.9864 | 0.9687 | 0.7131 | 0.7629 | 0.1010 | 1.1
Impr. | 17.65% | 14.13% | 2.08% | 1.39% | 0.94% | 4.47% | -0.21% | 5.08% | 7.63% | 4.90% | 6.18% | 1.52% | 1.44% | 2.70% |
Baselines. We compare PRTraj against eight state-of-the-art baselines: one task-agnostic supervised method, HRNR [42], and seven self-supervised methods. The self-supervised methods fall into two categories: non-contrastive approaches, including Toast [6], PIM [48], and DyToast [5]; and contrastive-based methods, including JCLRNT [29], START [20], TRACK [16], and GREEN [58]. Further details are provided in Section C.4.

4.2 Overall Performance (RQ1)

Table 1 presents a performance comparison between PRTraj and the baseline methods, averaged over five independent runs. A detailed version of the results, including standard deviations, is provided in Section D.1. The key findings are summarized as follows:

• PRTraj consistently outperforms the baselines. PRTraj achieves the best average ranking across all cities. Moreover, among the 42 evaluation metrics across three cities, it ranks first in 40 metrics, demonstrating its effectiveness and generalizability across different cities. This strong performance can be largely attributed to its unique ability to model the interplay between external environmental semantics and internal route choice logic, two critical, interconnected dimensions that baseline methods largely overlook, thus limiting their overall performance.

• Contrastive learning-based methods are competitive. Baselines like JCLRNT, START, TRACK, and GREEN achieve competitive performance. Their strength stems from the core principle of contrastive learning: by training the model to pull representations of semantically similar trajectories (i.e., positive pairs) closer while pushing dissimilar ones (i.e., negative pairs) apart, it learns a well-structured embedding space where distance reflects semantic relatedness. However, their holistic, sequence-level comparison struggles to effectively distill rich environmental context or model implicit route choice behavior in the trajectory. As a result, the learned representations are highly discriminative but lack interpretive depth, capturing the fact of trajectory divergence without encoding the reasons for it.

• HRNR excels in the Road Label Prediction task. HRNR achieves the best performance among baselines on this task, a result attributable to its hierarchical modeling of the road network. By hierarchically processing the network, it effectively captures latent structural patterns across multiple scales, ranging from local road connectivity to broader regional layouts. However, relying solely on abstract network structure is inherently limiting. PRTraj surpasses it by incorporating an Environment Perception Module to enrich road representations with real-world environmental semantics, demonstrating that this semantic context is ultimately more decisive for determining road attributes.
Table 2: Performance comparison of PRTraj variants on the five downstream tasks. For brevity, we report a single representative metric per task. Columns per row: Macro-F1↑ (RLP) | Acc@1↑ (TDP) | MAE↓ (TTE) | HR@1↑ (STR) | τ↑ (PR).

Beijing
w/o Fine-Grained | 0.7158 | 0.0927 | 4.1735 | 0.9341 | 0.6738
w/o Coarse-Grained | 0.7964 | 0.1014 | 4.1034 | 0.9447 | 0.6751
w/o Route Choice | 0.8823 | 0.1026 | 4.0125 | 0.8974 | 0.6281
w/o Pre-Training | 0.8698 | 0.1071 | 3.9467 | 0.7184 | 0.6761
PRTraj | 0.8879 | 0.1078 | 3.8719 | 0.9612 | 0.6813

Chengdu
w/o Fine-Grained | 0.5997 | 0.4255 | 1.4118 | 0.8699 | 0.7590
w/o Coarse-Grained | 0.7127 | 0.4270 | 1.4005 | 0.8804 | 0.7601
w/o Route Choice | 0.7944 | 0.4234 | 1.3877 | 0.8594 | 0.7411
w/o Pre-Training | 0.7655 | 0.4282 | 1.3851 | 0.7264 | 0.7604
PRTraj | 0.8004 | 0.4294 | 1.3668 | 0.9404 | 0.7620

Xi'an
w/o Fine-Grained | 0.5957 | 0.3591 | 2.2103 | 0.9233 | 0.7067
w/o Coarse-Grained | 0.7012 | 0.3598 | 2.2094 | 0.9375 | 0.7101
w/o Route Choice | 0.7503 | 0.3596 | 2.2045 | 0.8706 | 0.6925
w/o Pre-Training | 0.7421 | 0.3601 | 2.2006 | 0.7178 | 0.7086
PRTraj | 0.7599 | 0.3624 | 2.1507 | 0.9538 | 0.7131

4.3 Ablation Studies (RQ2)

To validate PRTraj's designs, we conduct ablation studies by comparing the following variants: (1) w/o Fine-Grained removes the fine-grained semantic modeling within the Environment Perception Module, (2) w/o Coarse-Grained removes the coarse-grained semantic modeling within the Environment Perception Module, (3) w/o Route Choice ablates the Route Choice Encoder to assess the importance of capturing implicit route choice behavior, and (4) w/o Pre-Training bypasses the self-supervised pre-training stage, and the model is trained from scratch on each downstream task.

Table 2 presents the results of our ablation studies. These results demonstrate that each component of PRTraj plays a critical role in the model's overall performance, as removing any single module results in a degradation of performance. However, the contribution of each design varies across different tasks. In particular, the Road Label Prediction task is highly sensitive to the Environment Perception Module, with performance dropping significantly when either fine-grained or coarse-grained semantic modeling is removed. This is because the urban environment context captured by these components provides essential cues for inferring road attributes. In turn, ablating the Route Choice Encoder most harms Path Ranking, a task that directly evaluates the route choice behavior modeled by this component. Meanwhile, forgoing pre-training primarily affects Similar Trajectory Retrieval, since its contrastive objective is essential for learning a discriminative embedding space. The minimal impact of omitting pre-training on other tasks highlights that performance gains are largely attributable to PRTraj's architectural innovations in environment perception and route choice modeling, rather than pre-training alone.

Table 3: Average training time (hours per epoch) and inference time (milliseconds per trajectory) for PRTraj and baselines on the three trajectory datasets. Columns per row: Beijing Training↓ | Beijing Inference↓ | Chengdu Training↓ | Chengdu Inference↓ | Xi'an Training↓ | Xi'an Inference↓.

START | 1.9541 | 0.7879 | 0.2104 | 0.2985 | 0.2149 | 0.3234
TRACK | 1.6134 | 1.1778 | 0.2258 | 0.4055 | 0.2192 | 0.4298
GREEN | 0.5150 | 0.3191 | 0.1494 | 0.1897 | 0.1428 | 0.1987
PRTraj | 0.8194 | 0.2803 | 0.1889 | 0.1714 | 0.1747 | 0.1702

4.4 Data Efficiency (RQ3)

We investigate the data efficiency of PRTraj by varying the proportion of data used for both pre-training and subsequent fine-tuning.
The results on the Beijing dataset are shown in Figure 3, with additional results for the Chengdu and Xi'an datasets provided in Section D.2 due to space limitations. Our findings demonstrate that PRTraj achieves superior data efficiency, significantly outperforming the top baseline on the road label prediction task with only 10% of the data, and matching its performance on trajectory retrieval and path ranking tasks with just 20%. This strong few-shot learning capability can be attributed to the core design of PRTraj, which incorporates a strong inductive bias for the learning process. By explicitly encoding external environmental factors and internal route choice logic, PRTraj reduces its reliance on inferring complex relationships purely from data, enabling robust generalization with a substantially reduced number of trajectories.

[Figure 3: Performance of PRTraj on downstream tasks with varying proportions of training data on the Beijing trajectory dataset. The red dashed line denotes the best baseline performance. For brevity, we present one metric for each task.]

4.5 Computational Efficiency (RQ4)

Table 3 presents a comparative analysis of the computational efficiency between our proposed PRTraj model and three leading baselines: START, TRACK, and GREEN. The results demonstrate that PRTraj strikes an excellent balance between high performance and computational speed. While PRTraj's training time per epoch is second only to the most lightweight model, GREEN, it remains highly competitive and avoids excessive overhead. Crucially, in the inference phase, which is vital for real-world deployment, PRTraj consistently ranks as one of the fastest models. It matches the efficiency of GREEN and significantly surpasses START and TRACK across all datasets. The architectural designs of START and TRACK, which prioritize capturing complex dependencies (like temporal regularities or historical information), are inherently computationally intensive. Consequently, this leads to longer processing times during both training and inference. These results demonstrate that the significant performance gains from PRTraj's sophisticated modeling do not come at the cost of practical efficiency, offering an excellent trade-off between effectiveness and computational speed.

4.6 Visualization Analysis (RQ5)

To intuitively showcase PRTraj's performance, we provide two qualitative visualizations: a case study on trajectory similarity retrieval and a t-SNE projection of the embeddings.

Case Study on Trajectory Similarity Retrieval. We visually assess our embeddings on the task of similar trajectory retrieval. As shown in Figure 4 on the Beijing dataset, PRTraj retrieves more similar trajectories for a given query compared to a strong baseline, START, thanks to its unified modeling of environment and route choice.

[Figure 4: Case study of trajectory similarity retrieval on the Beijing dataset, showing the query trajectory and the three most similar retrievals for (a) START and (b) PRTraj.]
Due to space constraints, results for other datasets and baselines are provided in Section D.4.

t-SNE Projection of Trajectory Embeddings. To assess the model's sensitivity to environmental dynamics, we conducted a case study on a traffic control event (Sanyuan Bridge replacement, Beijing). Figure 5 visualizes the t-SNE projection of embeddings from normal versus restricted periods, comparing PRTraj against the strong baseline START. The visualization reveals that PRTraj forms distinct clusters, validating its capacity to perceive the influence of such environmental changes on route choices. Further baseline comparisons and traffic heatmaps for the traffic control event are available in Section D.4.

[Figure 5: t-SNE visualization of trajectory embeddings before and during the traffic control in the Sanyuan Bridge area, for (a) START and (b) PRTraj.]

5 Related Work

Trajectory Representation Learning (TRL). TRL aims to transform raw trajectory data into low-dimensional embeddings. Early approaches in this domain are typically task-specific. For instance, some methods are tailored for trajectory similarity computation [12, 17, 24, 51], while others are developed for clustering [2, 11, 54] or anomaly detection [27, 41]. However, these representations suffer from poor generalizability. To address this, subsequent works shift to general-purpose pre-training, utilizing sequence-to-sequence models [14] or curriculum learning [48] to capture spatio-temporal dependencies. More recently, contrastive learning [3] emerges as a dominant paradigm for acquiring more robust and discriminative representations. For example, JCLRNT [29] aligns trajectory and road network features through cross-scale contrastive learning, while START [20] captures travel semantics and temporal regularities. TRACK [16] extends this by incorporating traffic conditions, and GREEN [58] further enhances representation by jointly modeling road network-based and grid-based trajectories.

LLMs in Spatio-Temporal Data Mining. Large Language Models (LLMs), proven to be highly effective in Natural Language Processing [30, 34], are now finding new applications in the domain of spatio-temporal data mining. Current efforts focus on tasks such as mobility generation [13, 32, 37], POI recommendation [8, 15, 23], geographic knowledge understanding [18, 26, 45], and traffic forecasting [25, 53, 56]. While these studies showcase the potential of LLMs, they seldom explicitly model the relationship between road segments and the surrounding POIs. To bridge this gap, we propose an Environment Perception Module that leverages an LLM to interpret multi-granularity POI information, thereby enriching the semantic representation of the road network.

6 Conclusion

In this work, we propose PRTraj, a novel TRL framework designed to address a key oversight in existing TRL methods: they merely treat trajectories as isolated spatio-temporal sequences, without considering the external environment and internal route choice behavior that govern their formation, which fundamentally limits their performance. To bridge this gap, PRTraj first employs an Environment Perception Module to construct a perception-aware road network by integrating multi-granularity POI semantics. This semantically-rich representation then serves as an essential foundation for the subsequent Route Choice Encoder, which models route choice behavior within this context-rich environment.
A final Trajectory Semantic Integrator aggregates these route-choice-aware insights into a holistic trajectory embedding. Extensive experiments validate the effectiveness of PRTraj and demonstrate its potential for application in various spatio-temporal downstream tasks.

References

[1] Geoff Boeing. 2025. Modeling and analyzing urban networks and amenities with OSMnx. Geographical Analysis (2025).
[2] Chao Chen, Chengwu Liao, Xuefeng Xie, Yasha Wang, and Junfeng Zhao. 2019. Trip2Vec: a deep embedding approach for clustering and profiling taxi trip purposes. Pers. Ubiquitous Comput. (2019).
[3] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020. A simple framework for contrastive learning of visual representations. In ICML.
[4] Wei Chen, Yuxuan Liang, Yuanshao Zhu, Yanchuan Chang, Kang Luo, Haomin Wen, Lei Li, Yanwei Yu, Qingsong Wen, Chao Chen, Kai Zheng, Yunjun Gao, Xiaofang Zhou, and Yu Zheng. 2024. Deep learning for trajectory data management and mining: A survey and beyond. arXiv preprint (2024).
[5] Yile Chen, Xiucheng Li, Gao Cong, Zhifeng Bao, and Cheng Long. 2024. Semantic-enhanced representation learning for road networks with temporal dynamics. arXiv preprint (2024).
[6] Yile Chen, Xiucheng Li, Gao Cong, Zhifeng Bao, Cheng Long, Yiding Liu, Arun Kumar Chandran, and Richard Ellison. 2021. Robust road network representation learning: When traffic patterns meet traveling semantics. In CIKM.
[7] Heng-Tze Cheng, Levent Koc, Jeremiah Harmsen, Tal Shaked, Tushar Chandra, Hrishi Aradhye, Glen Anderson, Greg Corrado, Wei Chai, Mustafa Ispir, Rohan Anil, Zakaria Haque, Lichan Hong, Vihan Jain, Xiaobing Liu, and Hemal Shah. 2016. Wide & deep learning for recommender systems. In Proceedings of the 1st workshop on deep learning for recommender systems.
[8] Jiawei Cheng, Jingyuan Wang, Yichuan Zhang, Jiahao Ji, Yuanshao Zhu, Zhibo Zhang, and Xiangyu Zhao. 2025. Poi-enhancer: An llm-based semantic enhancement framework for poi representation learning. In AAAI.
[9] Austin Derrow-Pinion, Jennifer She, David Wong, Oliver Lange, Todd Hester, Luis Perez, Marc Nunkesser, Seongjae Lee, Xueying Guo, Brett Wiltshire, Peter W. Battaglia, Vishal Gupta, Ang Li, Zhongwen Xu, Alvaro Sanchez-Gonzalez, Yujia Li, and Petar Velickovic. 2021. Eta prediction with graph neural networks in google maps. In CIKM.
[10] Xiaomin Fang, Jizhou Huang, Fan Wang, Lingke Zeng, Haijin Liang, and Haifeng Wang. 2020. Constgat: Contextual spatial-temporal graph attention network for travel time estimation at baidu maps. In SIGKDD.
[11] Ziquan Fang, Yuntao Du, Lu Chen, Yujia Hu, Yunjun Gao, and Gang Chen. 2021. E2dtc: An end to end deep trajectory clustering framework via self-training. In ICDE.
[12] Ziquan Fang, Yuntao Du, Xinjun Zhu, Danlei Hu, Lu Chen, Yunjun Gao, and Christian S Jensen. 2022. Spatio-temporal trajectory similarity learning in road networks. In SIGKDD.
[13] Jie Feng, Yuwei Du, Jie Zhao, and Yong Li. 2024. Agentmove: Predicting human mobility anywhere using large language model based agentic framework. arXiv preprint (2024).
[14] Tao-Yang Fu and Wang-Chien Lee. 2020. Trembr: Exploring road networks for trajectory representation learning. ACM Trans. Intell. Syst. Technol. (2020).
[15] Letian Gong, Yan Lin, Xinyue Zhang, Yiwen Lu, Xuedi Han, Yichen Liu, Shengnan Guo, Youfang Lin, and Huaiyu Wan. 2024. Mobility-llm: Learning visiting intentions and travel preference from human mobility data with large language models. In NeurIPS.
[16] Chengkai Han, Jingyuan Wang, Yongyao Wang, Xie Yu, Hao Lin, Chao Li, and Junjie Wu. 2025. Bridging traffic state and trajectory for dynamic road network and trajectory representation learning. In AAAI.
[17] Peng Han, Jin Wang, Di Yao, Shuo Shang, and Xiangliang Zhang. 2021. A graph-based approach for trajectory similarity computation in spatial networks. In SIGKDD.
[18] Xixuan Hao, Wei Chen, Yibo Yan, Siru Zhong, Kun Wang, Qingsong Wen, and Yuxuan Liang. 2025. Urbanvlp: Multi-granularity vision-language pretraining for urban socioeconomic indicator prediction. In AAAI.
[19] Danlei Hu, Lu Chen, Hanxi Fang, Ziquan Fang, Tianyi Li, and Yunjun Gao. 2023. Spatio-temporal trajectory similarity measures: A comprehensive survey and quantitative study. IEEE Trans. Knowl. Data Eng. (2023).
[20] Jiawei Jiang, Dayan Pan, Houxing Ren, Xiaohan Jiang, Chao Li, and Jingyuan Wang. 2023. Self-supervised trajectory representation learning with temporal regularities and travel semantics. In ICDE.
[21] Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. 2020. Spanbert: Improving pre-training by representing and predicting spans. Trans. Assoc. Comput. Linguistics (2020).
[22] Maurice G Kendall. 1938. A new measure of rank correlation. Biometrika (1938).
[23] Peibo Li, Maarten de Rijke, Hao Xue, Shuang Ao, Yang Song, and Flora D Salim. 2024. Large language models for next point-of-interest recommendation. In SIGIR.
[24] Xiucheng Li, Kaiqi Zhao, Gao Cong, Christian S Jensen, and Wei Wei. 2018. Deep representation learning for trajectory similarity computation. In ICDE.
[25] Zhonghang Li, Lianghao Xia, Jiabin Tang, Yong Xu, Lei Shi, Long Xia, Dawei Yin, and Chao Huang. 2024. UrbanGPT: Spatio-Temporal Large Language Models. In SIGKDD.
[26] Zekun Li, Wenxuan Zhou, Yao-Yi Chiang, and Muhao Chen. 2023. GeoLM: Empowering Language Models for Geospatially Grounded Language Understanding. In EMNLP.
[27] Yiding Liu, Kaiqi Zhao, Gao Cong, and Zhifeng Bao. 2020. Online anomalous trajectory detection with deep generative sequence modeling. In ICDE.
[28] Xiaowei Mao, Yan Lin, Shengnan Guo, Yubin Chen, Xingyu Xian, Haomin Wen, Qisen Xu, Youfang Lin, and Huaiyu Wan. 2025. DutyTTE: Deciphering Uncertainty in Origin-Destination Travel Time Estimation. In AAAI.
[29] Zhenyu Mao, Ziyue Li, Dedong Li, Lei Bai, and Rui Zhao. 2022. Jointly contrastive representation learning on road network and trajectory. In CIKM.
[30] Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. In NeurIPS.
[31] Carlo Giacomo Prato. 2009. Route choice modeling: past, present and future research directions. Journal of choice modelling (2009).
[32] Chenyang Shao, Fengli Xu, Bingbing Fan, Jingtao Ding, Yuan Yuan, Meng Wang, and Yong Li. 2024. Chain-of-planned-behaviour workflow elicits few-shot mobility generation in LLMs. arXiv preprint (2024).
[33] Charles Spearman. 1904. The proof and measurement of association between two things. Am. J. Psychol (1904).
[34] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. Llama: Open and efficient foundation language models. arXiv preprint (2023).
[35] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS.
[36] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2018. Graph Attention Networks. In ICLR.
[37] Jiawei Wang, Renhe Jiang, Chuang Yang, Zengqing Wu, Makoto Onizuka, Ryosuke Shibasaki, Noboru Koshizuka, and Chuan Xiao. 2024. Large language models as urban residents: An llm agent framework for personal mobility generation. In NeurIPS.
[38] Sheng Wang, Zhifeng Bao, J Shane Culpepper, and Gao Cong. 2021. A survey on trajectory data management, analytics, and learning. ACM Comput. Surv. (2021).
[39] Senzhang Wang, Jiannong Cao, and S Yu Philip. 2020. Deep learning for spatio-temporal data mining: A survey. IEEE Trans. Knowl. Data Eng. (2020).
[40] Zheng Wang, Kun Fu, and Jieping Ye. 2018. Learning to estimate the travel time. In SIGKDD.
[41] Hao Wu, Weiwei Sun, and Baihua Zheng. 2017. A fast trajectory outlier detection approach via driving behavior modeling. In CIKM.
[42] Ning Wu, Xin Wayne Zhao, Jingyuan Wang, and Dayan Pan. 2020. Learning effective road network representation with hierarchical graph neural networks. In SIGKDD.
[43] Xiaohang Xu, Renhe Jiang, Chuang Yang, Zipei Fan, and Kaoru Sezaki. 2024. Taming the long tail in human mobility prediction. In NeurIPS.
[44] Andy Yuan Xue, Rui Zhang, Yu Zheng, Xing Xie, Jin Huang, and Zhenghua Xu. 2013. Destination prediction by sub-trajectory synthesis and privacy protection against such prediction. In ICDE.
[45] Yibo Yan, Haomin Wen, Siru Zhong, Wei Chen, Haodong Chen, Qingsong Wen, Roger Zimmermann, and Yuxuan Liang. 2024. Urbanclip: Learning text-enhanced urban region profiling with contrastive language-image pretraining from the web. In WWW.
[46] An Yang, Anfeng Li, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Gao, Chengen Huang, Chenxu Lv, Chujie Zheng, Dayiheng Liu, Fan Zhou, Fei Huang, Feng Hu, Hao Ge, Haoran Wei, Huan Lin, Jialong Tang, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jing Zhou, Jingren Zhou, Junyang Lin, Kai Dang, Keqin Bao, Kexin Yang, Le Yu, Lianghao Deng, Mei Li, Mingfeng Xue, Mingze Li, Pei Zhang, Peng Wang, Qin Zhu, Rui Men, Ruize Gao, Shixuan Liu, Shuang Luo, Tianhao Li, Tianyi Tang, Wenbiao Yin, Xingzhang Ren, Xinyu Wang, Xinyu Zhang, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yinger Zhang, Yu Wan, Yuqiong Liu, Zekun Wang, Zeyu Cui, Zhenru Zhang, Zhipeng Zhou, and Zihan Qiu. 2025. Qwen3 technical report. arXiv preprint (2025).
[47] Can Yang and Gyozo Gidofalvi. 2018. Fast map matching, an algorithm integrating hidden Markov model with precomputation. Int. J. Geogr. Inf. Sci. (2018).
[48] Sean Bin Yang, Chenjuan Guo, Jilin Hu, Jian Tang, and Bin Yang. 2021. Unsupervised path representation learning with curriculum negative sampling. In IJCAI.
[49] Sean Bin Yang, Chenjuan Guo, and Bin Yang. 2020. Context-aware path ranking in road networks. IEEE Trans. Knowl. Data Eng. (2020).
[50] Sean Bin Yang, Jilin Hu, Chenjuan Guo, Bin Yang, and Christian S Jensen. 2023. Lightpath: Lightweight and scalable path representation learning. In SIGKDD.
[51] Di Yao, Gao Cong, Chao Zhang, and Jingping Bi. 2019. Computing trajectory similarity in linear time: A generic seed-guided neural metric learning approach. In ICDE.
[52] Di Yao, Chao Zhang, Zhihua Zhu, Jianhui Huang, and Jingping Bi. 2017. Trajectory clustering via deep representation learning. In IJCNN.
[53] Yuan Yuan, Jingtao Ding, Jie Feng, Depeng Jin, and Yong Li. 2024. UniST: A Prompt-Empowered Universal Model for Urban Spatio-Temporal Prediction. In SIGKDD.
[54] Mingxuan Yue, Yaguang Li, Haoze Yang, Ritesh Ahuja, Yao-Yi Chiang, and Cyrus Shahabi. 2019. Detect: Deep trajectory clustering for mobility-behavior analysis. In IEEE BigData.
[55] Yanzhao Zhang, Mingxin Li, Dingkun Long, Xin Zhang, Huan Lin, Baosong Yang, Pengjun Xie, An Yang, Dayiheng Liu, Junyang Lin, Fei Huang, and Jingren Zhou. 2025. Qwen3 Embedding: Advancing Text Embedding and Reranking Through Foundation Models. arXiv preprint (2025).
[56] Zijian Zhang, Xiangyu Zhao, Qidong Liu, Chunxu Zhang, Qian Ma, Wanyu Wang, Hongwei Zhao, Yiqi Wang, and Zitao Liu. 2023. Promptst: Prompt-enhanced spatio-temporal multi-attribute prediction. In CIKM.
[57] Yu Zheng. 2015. Trajectory data mining: an overview. ACM Trans. Intell. Syst. Technol. (2015).
[58] Silin Zhou, Shuo Shang, Lisi Chen, Peng Han, and Christian S. Jensen. 2025. Grid and road expressions are complementary for trajectory representation learning. In SIGKDD.

A Prompt Design for POI Context Description

In this section, we detail the prompts for the Environment Perception Module. To address the environmental perception gap in TRL, we leverage LLMs to synthesize semantically rich, human-centric descriptions from raw POI data. This process captures the nuanced functional characteristics of urban spaces across multiple granularities, establishing a more expressive semantic foundation.

A.1 Prompts for Fine-Grained POI Context

At the fine-grained level, the objective is to generate detailed descriptions of individual road segments by summarizing their surrounding POI context. The prompt template fed into the LLM consists of the following components:
• Role Definition: The LLM is prompted to assume the role of a local resident, encouraging it to produce contextually grounded and locally nuanced descriptions.
• Structured POI Data Input: A structured list of nearby POIs is provided as factual grounding for the description.
• Task Instruction: The LLM is explicitly instructed to generate a description of the road segment based on the given information.

LLM Prompt Template (Fine-Grained).
You are a resident living in {city}, familiar with the local transportation network and surrounding POI information. There is a road segment with the following POIs located within a 100-meter radius:
{count} POIs categorized under [{primary category}], further subdivided as:
• {count} [{subcategory}]: {POI names}.
• {count} [{subcategory}]: {POI names}.
• ··· (Can be further expanded here.)
··· (Additional primary categories may be added in the same format.)
Please describe the relevant characteristics of this road section based on the POI information within a 100-meter radius around it.
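A minimal Python sketch of how such a fine-grained prompt could be assembled from grouped POI data is given below; the function name, input format, and exact line breaks are our own assumptions, with the wording taken from the template above.

def build_fine_grained_prompt(city, poi_groups, radius_m=100):
    # poi_groups: {primary category: {subcategory: [POI names]}}
    lines = [f"You are a resident living in {city}, familiar with the local "
             "transportation network and surrounding POI information.",
             "There is a road segment with the following POIs located within "
             f"a {radius_m}-meter radius:"]
    for primary, subs in poi_groups.items():
        total = sum(len(names) for names in subs.values())
        lines.append(f"{total} POIs categorized under [{primary}], "
                     "further subdivided as:")
        for sub, names in subs.items():
            lines.append(f"- {len(names)} [{sub}]: {', '.join(names)}.")
    lines.append("Please describe the relevant characteristics of this road "
                 "section based on the POI information within a "
                 f"{radius_m}-meter radius around it.")
    return "\n".join(lines)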
There is a road segment with the following POIs located within a 100-meter radius:
6 POIs categorized under [Shopping Services], further subdivided as:
• 2 [Clothing & Shoes Stores]: Fashion Frontline (Xinjiekou Street), Old Beijing Cloth Shoes (Qianmen West Houheyan Street).
• 2 [Specialty Stores]: Gaojiazhai Grocery, Beijing Specialty Store (Qianmen West Heyan Community).
• 2 [Convenience Stores]: Lehafu Supermarket, Cigarettes & Cold Drinks.
4 POIs categorized under [Catering Services], further subdivided as:
• 1 [Food-Related Establishment]: Juewei Taiwanese Salt & Pepper Chicken.
• 2 [Chinese Restaurants]: Mian Ai Mian (Qianmen Branch), Qipin Xiang Tofu.
• 1 [Fast Food Restaurant]: KFC (Qianmen Branch).
1 POI categorized under [Transportation Facilities], further subdivided as:
• 1 [Subway Station]: Qianmen Subway Station (Southwest Exit C).
1 POI categorized under [Government & Social Organizations], further subdivided as:
• 1 [Public Security Institution]: Dashilan Patrol Police Station.
1 POI categorized under [Healthcare Services], further subdivided as:
• 1 [Healthcare Facility]: Blood Donation Cabin.
2 POIs categorized under [Accommodation Services], further subdivided as:
• 2 [Hotel]: Qianmen Guanqi Hotel (Qianmen East Branch), Shunchaolong Hotel.
Please describe the relevant characteristics of this road section based on the POI information within a 100-meter radius around it.

LLM-Generated Description (Fine-Grained).
Based on the POI information within a 100-meter radius of this road section, the following characteristics can be summarized:
1. Vibrant Commerce and Diverse Consumption Scenes
• Dense Shopping Services: Six shopping-related POIs are concentrated in this area, forming a commercial belt primarily focused on clothing, shoes, specialty retail, and convenience stores.
• Blend of Fashion and Tradition: Trendy stores like "Fashion Frontline (Xinjiekou Street)" and "Old Beijing Cloth Shoes (Qianmen West Houheyan Street)" showcase both modern trends and local cultural characteristics, attracting young crowds and tourists.
• Convenience Meets Specialty: Convenience stores like "Lehafu Supermarket" and "Cigarettes & Cold Drinks" cater to daily needs, while specialty shops like "Gaojiazhai Grocery" and "Beijing Specialty Store (Qianmen West Heyan Community)" focus on local products, ideal for tourists seeking souvenirs.
• Varied Dining Options: Four food-related POIs cover fast food, Chinese cuisine, and specialty snacks, creating an all-day dining scene.
• Local and International Flavors: Restaurants like "Mian Ai Mian (Qianmen Branch)" and "Qipin Xiang Tofu" specialize in Beijing cuisine, while "KFC (Qianmen Branch)" and "Juewei Taiwanese Salt & Pepper Chicken" offer quick and convenient meals, catering to both locals and tourists.
2. Transportation Hub and High Foot Traffic
• Direct Metro Access: The "Qianmen Subway Station (Southwest Exit C)" makes this area a junction between subway and ground transportation, especially crowded during rush hours, potentially serving both commercial and commuting purposes.
• Public Security: The presence of the "Dashilan Police Patrol Station" indicates strict public safety management, enhancing the sense of security for residents and visitors.
3. Prominent Tourism and Accommodation Functions
• Comprehensive Lodging Options: Hotels like "Qianmen Guanqi Hotel (Qianmen East Branch)" and "Shunchaolong Hotel" provide accommodation, confirming this area as a tourist hotspot, likely close to historical and cultural attractions like Qianmen Street.
• Cultural Experience Extension: Nearby POIs (e.g., "Old Beijing Cloth Shoes (Qianmen West Houheyan Street)" and "Beijing Specialty Store") complement Qianmen's historical and cultural district, boosting the area's tourism appeal.
4. Healthcare and Community Services
• Blood Donation Cabin: The presence of a blood donation cabin indicates basic medical support, offering public welfare services to residents and tourists and strengthening community functions.
Summary: This road section is located in the core Qianmen business district, integrating commerce, transportation, tourism, and residential functions, representing a typical blend of Beijing's historical culture and modern lifestyle. Its dense POI distribution and diverse services make it a key node for tourists, residents, and commuters, particularly suitable for walking exploration, shopping, leisure, and short-term stays.

A.2 Prompts for Coarse-Grained POI Context

At the coarse-grained level, we aim to generate a detailed description of a POI cluster with a primary category of c. Similar to the prompt designed for fine-grained POI context described above, this prompt also comprises three components: Role Definition, Structured POI Data Input, and Task Instruction.

LLM Prompt Template (Coarse-Grained).
You are a resident living in {city}, familiar with the local transportation network and surrounding POI information. In a 1000m × 1000m area of {city}, POIs of the type [{primary category}] exhibit significant clustering characteristics. Data analysis shows that the number of [{primary category}] POIs in this area ranks within the top 10% in {city}. Further subdividing these [{primary category}] POIs, they include:
• {count} [{subcategory}]: {POI names}.
• {count} [{subcategory}]: {POI names}.
• · · · (Can be further expanded here.)
Please describe the characteristics of this 1000m × 1000m area where a large number of [{primary category}] POIs are clustered.

Here, we select a POI cluster with the primary category of "shopping services" located near the Beijing CBD to illustrate the applicability of the proposed template.

Illustrative Prompt Instance (Coarse-Grained).
You are a resident living in Beijing, familiar with the local transportation network and surrounding POI information. In a 1000m × 1000m area of Beijing, POIs of the type [Shopping Services] exhibit significant clustering characteristics. Data analysis shows that the number of [Shopping Services] POIs in this area ranks within the top 10% in Beijing. Further subdividing these [Shopping Services] POIs, they include:
• 790 [Specialty Stores]: Li-Ning (Fuli Plaza Store), Dionysus Wine Cellar, Youngor, etc.
• 80 [Personal Care & Cosmetics Stores]: Qingzhuang Cosmetics, Scent Library, PIPI FACE Perfume, etc.
• 24 [Sports Goods Stores]: Adidas (China World Trade Center South Zone), Reebok Treadmill Store (Guomao Store), Kingsmith Treadmill Store (Guomao Store), etc.
• 52 [Convenience Stores]: 7-Eleven (Huamao Store), Quick (Jianwai Branch 2), Wankeyuan Supermarket (Jingheng Street), etc.
• 40 [Shopping Malls]: Beijing Yintai Center in01, China World Trade Center South Zone, China World Trade Center North Zone, etc.
• 46 [Home & Building Material Markets]: Dakang Home Furnishing, Risheng Furniture & Hardware, Bairuisi Furniture, etc.
• 46 [Electronics & Appliance Stores]: Changhong Air Conditioning, ThinkPad Laptop Store, BOSE (China World Trade Center North Zone), etc.
• 8 [Stationery Stores]: JimmyBlack, 7G Office, Montblanc (China World Trade Center South Zone), etc.
• 708 [Clothing & Shoes Stores]: Yaliange, Xiu Clothing, Xiaojin Wardrobe (Jianwai SOHO Store), etc.
• 6 [Auction Houses]: Beijing Jianya Century Auction Co., Ltd., Beijing Zhonghan Auction Co., Ltd., Beijing Zhongzhao International Auction Co., Ltd. (Jintong East Road), etc.
• 62 [Flower & Pet Markets]: No. 18 Flower Shop, Love Flowers, LOVE Flower Studio, etc.
• 28 [Supermarkets]: Jingkelong Supermarket, Mannings (Yintai Center Store), CR Vanguard (Guomao Store), etc.

LLM-Generated Description (Coarse-Grained).
Based on the provided data, the 1000m × 1000m area exhibits highly concentrated commercial characteristics, which can be summarized as follows:
1. Highly Concentrated Business Types, Forming a Diverse Shopping Ecosystem
• Core Business Types: Dominated by [specialty stores] (790) and [clothing & shoes stores] (708), indicating a strong presence of fashion and personalized shopping, likely including brand flagships and boutique outlets.
• Comprehensive Supporting Services: Includes [convenience stores] (52) and [supermarkets] (28), enabling a one-stop shopping experience that serves daily consumer needs.
• Presence of High-End and Niche Segments: Categories like [electronics & appliance stores] (46), [home & building material markets] (46), and [stationery stores] (8) suggest the area also caters to professionals and affluent residential communities.
2. Likely Location: Beijing's Core Commercial Districts
• POI Name Clues: Frequent use of "Guomao" in POI names implies proximity to the Guomao CBD or a similar core business district.
• Exceptionally High POI Density: Nearly 2,000 shopping-related POIs within one square kilometer, far exceeding typical commercial zones, points to a central retail hub like Guomao, Sanlitun, or Wangjing.
3. Multi-Tiered Consumption Landscape
• Mass Market Consumption: Supermarkets and convenience stores address daily needs of nearby residents.
• High-End and Niche Offerings: Stores like "Dionysus Wine Cellar", "Montblanc", and "Beijing Jianya Century Auction Co., Ltd." indicate the presence of luxury brands and specialty services, appealing to affluent and interest-based clientele.
4. Commercial Vitality Driven by Transit and Foot Traffic
• Density Backed by Accessibility: Such a concentration of POIs suggests strong transport connectivity (e.g., metro, bus hubs) and population density (e.g., office towers, housing). The "Guomao" indicator further implies proximity to a metro hub or major retail cluster.
• Competitive and Complementary Dynamics: Coexistence of specialty, fashion, and general markets suggests differentiated positioning (e.g., brand vs. value), while services like convenience stores offer complementary utility, enhancing overall vitality.
5. Potential Challenges
• Risk of Oversaturation: High POI density may lead to homogeneous competition, putting pressure on smaller retailers.
• Traffic Congestion: Intensive commercial activity could strain nearby roads, requiring effective urban traffic planning and management.
Conclusion: The area is most likely a "core commercial district in Beijing" (e.g., Guomao, Sanlitun) or a "large emerging complex", characterized by:
• Diverse Business Types: From everyday retail to luxury brands.
• Broad Consumption Spectrum: Serving mass, middle-class, and affluent segments.
• Transit-Oriented Clustering: Commercial density supported by transit and population flow.
• Potential Functional Zoning: Unique commercial ecosystems inferred from specialized POIs.

B Trajectory Augmentation Strategies

We apply two data augmentation strategies to the trajectory data to generate distinct views for contrastive learning, as detailed below.
Trajectory Cropping. To better capture the spatial information in trajectories, we apply random cropping while preserving continuity. Specifically, we crop either the beginning or the end of a trajectory, with the cropping ratio uniformly sampled between 5% and 15%.
Temporal Perturbation. To emphasize temporal characteristics, we randomly perturb the travel times of approximately 15% of the segments. Specifically, given a road segment with travel time Δt, the perturbed value Δt′ is computed as Δt′ = Δt − r(Δt − Δt_avg), where Δt_avg denotes the average travel time for that road segment, and r is a random variable uniformly sampled between 15% and 30%.

C Details of the Experimental Setup

In this section, we provide a detailed description of our experimental setup, including the datasets, downstream evaluation tasks, evaluation metrics, and baseline models used for comparison.

C.1 Datasets

Here, we present the statistics and characteristics of the trajectory and POI datasets across three cities: Beijing, Chengdu, and Xi'an.
Trajectory Datasets. We evaluated the performance of PRTraj and other baseline methods on trajectory datasets collected from three cities: Beijing, Chengdu, and Xi'an. Table 4 shows the statistics of these datasets, and Figure 6 shows their distributions.

Table 4: Statistics of Trajectory Datasets.
Dataset | # Trajectories | # Segments | Avg. Length | Avg. Time
Beijing | 1,010,325 | 40,306 | 6.60 km | 15.57 min
Chengdu | 431,581 | 6,741 | 4.10 km | 8.77 min
Xi'an | 407,181 | 6,795 | 4.54 km | 11.59 min

Figure 6: Distributions of the three trajectory datasets: (a) Beijing, (b) Chengdu, (c) Xi'an.

POI Datasets. We collect POI datasets for Beijing, Chengdu, and Xi'an using the Amap API. Each record includes the POI name, primary category, subcategory, and geographic coordinates. POIs located outside city boundaries or lacking valid coordinates are excluded. Table 5 presents the statistics of these POI datasets.

Table 5: Statistics of POI Datasets.
Dataset | # POIs | # Primary Categories | # Subcategories
Beijing | 706,115 | 14 | 128
Chengdu | 141,918 | 14 | 126
Xi'an | 87,309 | 14 | 124

To illustrate the structure of the POI dataset more clearly, Table 6 presents a representative record from Beijing.

Table 6: Example of a POI record from Beijing.
Field | Value
POI Name | National Museum of China
Primary Category | Science, Education and Cultural Services
Subcategory | Museum
Coordinates | (39.903744, 116.395475)

C.2 Downstream Tasks

Implementation details for the five downstream tasks used to evaluate the learned trajectory representations are presented below.
Road Label Prediction. Following prior studies [5, 6, 42], we evaluate the quality of the learned road segment representations by predicting segment-level attributes. In particular, we focus on predicting the number of lanes, as this attribute is not included in the input features defined in Equation (1). Since road segments with five or more lanes are extremely rare, including them may distort evaluation metrics (e.g., Macro-F1) and compromise reliability.
Therefore, we restrict this task to segments with 1 to 4 lanes. The model is trained using the cross-entropy loss.
Trajectory Destination Prediction. This task aims to predict the final destination of a trajectory using partial trajectory information [42, 44], with applications in areas such as POI recommendation and map navigation. Specifically, we use the first 50% of the trajectory as input and predict the road segment where the trajectory terminates. We formulate this task as a classification problem, and the model is optimized using a cross-entropy loss.
Travel Time Estimation. Also known as Estimated Time of Arrival (ETA), this task aims to predict the time a vehicle takes to complete a given trajectory. As a fundamental component of navigation systems, it has broad real-world applications [9, 10, 40]. We treat this as a regression problem optimized with the MSE loss, and the target travel time is measured in minutes.
Similar Trajectory Retrieval. This task aims to retrieve the most similar trajectory T from a large-scale trajectory dataset D, which has significant applications in identifying popular routes and similar drivers [24]. Following prior work [20, 29, 58], we generate ground truth by constructing detoured versions of original trajectories. Specifically, we sample n_q trajectories from the test set; for each trajectory T, we search for the top-k shortest paths from its origin to its destination on the road network. The first path that is strictly longer than the original trajectory T is selected as the detoured trajectory T_d. To construct the candidate set D, we combine the original n_q trajectories with n_neg additional trajectories that are randomly sampled from the test set without overlap with the original n_q trajectories. During evaluation, each detoured trajectory T_d is used to query the candidate set D, and the retrieval is considered correct if the most similar trajectory retrieved is the corresponding original trajectory T. In our implementation, we set n_q = 5000 and n_neg = 50000.
Path Ranking. In this task, the model assigns a score in the range [0, 1] to each candidate trajectory to rank multiple paths connecting the same origin and destination, where a higher score indicates greater similarity to the "optimal" trajectory. Following prior work [49], we assign a score of 1.0 to the real trajectory and generate the top-k shortest paths, using their Intersection-over-Union (IoU) with the real trajectory as the target score. This task is framed as a regression problem and optimized using the MSE loss. To encourage spatial diversity among generated trajectories, the top-k paths are generated and filtered sequentially. Specifically, starting from top-1 to top-k, we evaluate each path in order. For the i-th candidate, we compute its IoU with the real trajectory as well as with all previously retained paths (i.e., top-1 to top-(i−1)). If any IoU exceeds a predefined threshold δ, the path is discarded. This iterative filtering ensures that the final candidate set avoids redundancy and covers diverse spatial patterns. In our implementation, we set k = 10 and δ = 0.8.

C.3 Evaluation Metrics

Here, we detail the formulation of Kendall's τ and Spearman's ρ rank correlation coefficients, which are utilized to assess the performance of the path ranking task.
Kendall's Rank Correlation Coefficient (τ). Kendall's τ [22] measures the ordinal association between two ranked variables by counting the number of concordant and discordant pairs.
Given a set of n items, the metric is computed as:

τ = (C − D) / (n(n − 1)/2),   (23)

where C is the number of concordant pairs and D is the number of discordant pairs. The value of τ ranges from −1 (complete disagreement) to 1 (complete agreement). It is widely used to evaluate the agreement between predicted and true rankings.
Spearman's Rank Correlation Coefficient (ρ). Spearman's ρ [33] evaluates how well the relationship between two variables can be described by a monotonic function. It is defined as the Pearson correlation between the rank variables and, in the absence of ties, can be computed as:

ρ = 1 − (6 Σ_{i=1}^{n} d_i²) / (n(n² − 1)),   (24)

where d_i is the difference between the ranks of each item in the two lists, and n is the number of ranked items. Like Kendall's τ, the value of ρ ranges from −1 to 1, with higher values indicating stronger rank correlation.

C.4 Baselines

We compare PRTraj's performance against the following eight state-of-the-art baselines:
• HRNR [42] is a hierarchical graph neural network that learns road network representations by modeling a three-level structure of "functional zones", "structural regions", and "road segments" to capture both structural and functional characteristics.
• Toast [6] combines a traffic context-aware skip-gram module to capture traffic patterns with a trajectory-enhanced Transformer module to learn traveling semantics from trajectory data.
• PIM [48] learns trajectory representations using two complementary discriminators that capture both global (trajectory-trajectory) and local (trajectory-road segment) views, along with a curriculum-based negative sampling strategy.
• DyToast [5] enhances Toast [6] by integrating learnable trigonometric temporal functions to explicitly model dynamic temporal patterns, yielding time-sensitive road network representations.
• JCLRNT [29] jointly learns road network and trajectory representations in an end-to-end framework through three tailored contrastive learning tasks: road-road, trajectory-trajectory, and a novel road-trajectory cross-scale contrast mechanism.
• START [20] employs GAT [36] to encode road representations enriched with travel semantics, and a Transformer [35] to model periodic temporal regularities. Two self-supervised objectives, span-masked recovery and contrastive learning, are designed to capture spatio-temporal characteristics.
• TRACK [16] extends START [20] by jointly learning dynamic road network and trajectory representations through explicitly bridging traffic state and trajectory data.
• GREEN [58] uses CNN- and GNN-based encoders to learn grid-based and road-level trajectory representations, and fuses them with a cross-attention module for the final trajectory embedding.

C.5 Implementation Details

The embedding dimension d is set to 128. The Environment Perception Module adopts a proximity threshold δ of 100m and a grid cell side length L of 1000m. Both the CrossGAT module and the coarse-grained semantic attention mechanism utilize 4 attention heads. The Transformer encoder in the Trajectory Semantic Integrator consists of 6 layers, each with 4 attention heads and a dropout rate of 0.1. For both pre-training and fine-tuning, the model is trained for 50 epochs using the AdamW optimizer with a batch size of 64. The learning rate is initialized to 2 × 10^-4, undergoes linear warmup over the first 5 epochs, and then decays via a cosine schedule to a minimum of 1 × 10^-6.
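For concreteness, the warmup-plus-cosine schedule described above can be expressed as follows. This is a minimal sketch, not the authors' code; the model variable, the per-epoch stepping, and the helper names are our assumptions based on the stated hyperparameters.

import math
import torch

base_lr, min_lr = 2e-4, 1e-6
warmup_epochs, total_epochs = 5, 50

optimizer = torch.optim.AdamW(model.parameters(), lr=base_lr)

def lr_lambda(epoch):
    # Linear warmup to the base learning rate over the first 5 epochs.
    if epoch < warmup_epochs:
        return (epoch + 1) / warmup_epochs
    # Cosine decay from base_lr down to min_lr afterwards.
    progress = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    cosine = 0.5 * (1.0 + math.cos(math.pi * progress))
    return (min_lr + (base_lr - min_lr) * cosine) / base_lr

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
# scheduler.step() is then called once per training epoch.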
To ensure a fair comparison, all baseline models use the same embedding dimension (128) and Transformer depth (6 layers) as PRTraj, and their road segment embeddings are computed using Equation (1). Other configurations follow the default settings of their original implementations. All experiments are implemented with PyTorch 2.7.1 and conducted on a single NVIDIA RTX A6000 GPU.

D More Detailed Experimental Results

In this section, we present a series of supplementary experiments for a more comprehensive evaluation of our proposed method. These include additional overall performance results, data efficiency results, hyperparameter sensitivity analyses, and visualization analyses.

D.1 Overall Performance with STD Value

To ensure the robustness and statistical reliability of our experimental results, we perform five independent runs and report the results along with their corresponding standard deviation (std) values in Table 7.

D.2 Additional Results on Data Efficiency

Figure 7 presents the data efficiency results for the Chengdu and Xi'an datasets, which corroborate the findings from the Beijing dataset presented in the main paper. Across these heterogeneous urban environments, PRTraj consistently demonstrates superior data efficiency, often surpassing the fully-trained top baseline with only a small fraction of the training data. This robust performance underscores the generalizability of our approach, highlighting its practical value in scenarios where large-scale, high-quality trajectory data may be scarce or expensive to acquire.

D.3 Hyperparameter Sensitivity

Here, we evaluate the stability of PRTraj by analyzing its sensitivity to the two key hyperparameters within the Environment Perception Module: the proximity radius δ for fine-grained environment semantic modeling and the grid cell side length L for coarse-grained environment semantic modeling. While these parameters theoretically influence the model's perceptual granularity of the environment, a robust model should exhibit consistent performance across a reasonable range of values rather than being finely tuned to a specific configuration. To demonstrate this, we conducted experiments on all three datasets, varying δ across {50, 100, 200} meters and L across {500, 1000, 2000} meters. As illustrated in Figure 8, PRTraj is not overly sensitive to these choices and delivers consistently strong performance, thereby highlighting the model's stability and the generalizability of our selected configuration.

D.4 Visualization Analysis

To supplement the quantitative results, we conduct a qualitative visualization analysis to offer more intuitive insights into the capabilities of PRTraj. This analysis is designed to visually demonstrate how our unified modeling of environment perception and route choice translates into more semantically expressive and behaviorally aligned trajectory representations. We present two sets of visualizations: case studies on trajectory similarity retrieval and a t-SNE projection of the trajectory embeddings.
Case Study on Trajectory Similarity Retrieval. In this part, we conduct a case study to visually inspect the quality of our learned embeddings for similar trajectory retrieval. As illustrated in Figure 9, Figure 10, and Figure 11, for a randomly selected query trajectory, we retrieve its top-3 most similar counterparts using embeddings from PRTraj and those from three leading baselines: START, TRACK, and GREEN.
In contrast to the baselines, our method retrieves more similar trajectories, owing to its perception of the environment and its modeling of route choice behavior.
t-SNE Projection of Trajectory Embeddings. To further assess the model's ability to capture trajectory semantics under dynamic traffic conditions, we perform a case study centered on a traffic control event in Beijing. From November 13 to 15, 2015, the Sanyuan Bridge area underwent traffic restrictions due to a bridge replacement project, which resulted in a significant shift in traffic patterns, as illustrated by the heatmap comparison in Figure 12. We collect trajectories passing through this area during the control period and compare them with trajectories from normal days before the event. Figure 13 shows the t-SNE projections of the learned embeddings from PRTraj and three strong baselines: START, TRACK, and GREEN. As shown, PRTraj more clearly separates the restricted and normal trajectories, indicating its superior ability to perceive environmental changes and model route choice behavior. In contrast, the baselines yield more entangled representations, failing to effectively distinguish the impact of the traffic control.

Table 7: Performance comparison between PRTraj and baselines with mean and standard deviation on three real-world trajectory datasets across five downstream tasks. Results are reported as mean ± std over five independent runs. Columns are grouped by task: RLP (Macro-F1↑, Micro-F1↑), TDP (Acc@1↑, Acc@5↑, Acc@10↑), TTE (MAE↓, RMSE↓, MAPE↓), STR (HR@1↑, HR@5↑, MRR↑), and PR (τ↑, ρ↑, MAE↓).

Method | Macro-F1 | Micro-F1 | Acc@1 | Acc@5 | Acc@10 | MAE | RMSE | MAPE | HR@1 | HR@5 | MRR | τ | ρ | MAE

Beijing
HRNR | 0.7912±0.0390 | 0.7990±0.0392 | 0.0789±0.0014 | 0.2019±0.0013 | 0.2613±0.0026 | 5.1072±0.0612 | 9.4108±0.0998 | 0.3388±0.0166 | 0.8653±0.0077 | 0.9226±0.0166 | 0.8914±0.0171 | 0.6259±0.0029 | 0.6749±0.0029 | 0.1492±0.0018
Toast | 0.5868±0.0385 | 0.6285±0.0305 | 0.0694±0.0017 | 0.1809±0.0057 | 0.2374±0.0081 | 5.1042±0.0602 | 9.2255±0.0666 | 0.3423±0.0151 | 0.8532±0.0098 | 0.9112±0.0092 | 0.8792±0.0109 | 0.6298±0.0045 | 0.6808±0.0045 | 0.1444±0.0009
PIM | 0.6284±0.0336 | 0.6519±0.0362 | 0.0629±0.0005 | 0.1614±0.0015 | 0.2100±0.0019 | 5.1622±0.0422 | 9.3977±0.0324 | 0.3456±0.0077 | 0.8060±0.0028 | 0.8615±0.0056 | 0.8323±0.0033 | 0.6125±0.0015 | 0.6627±0.0014 | 0.1506±0.0010
DyToast | 0.6263±0.0331 | 0.6488±0.0353 | 0.0772±0.0020 | 0.1846±0.0034 | 0.2415±0.0041 | 4.2883±0.0244 | 8.3221±0.0201 | 0.2863±0.0034 | 0.8773±0.0091 | 0.9259±0.0065 | 0.8998±0.0080 | 0.6317±0.0027 | 0.6822±0.0027 | 0.1421±0.0019
JCLRNT | 0.6479±0.0281 | 0.6751±0.0227 | 0.0782±0.0010 | 0.1879±0.0018 | 0.2538±0.0015 | 5.0191±0.0347 | 9.1380±0.0497 | 0.3281±0.0112 | 0.8985±0.0029 | 0.9369±0.0094 | 0.9166±0.0040 | 0.6352±0.0012 | 0.6871±0.0015 | 0.1405±0.0007
START | 0.7574±0.0176 | 0.7776±0.0144 | 0.0856±0.0012 | 0.2153±0.0030 | 0.2876±0.0035 | 5.0103±0.0409 | 9.1894±0.0368 | 0.3376±0.0076 | 0.9056±0.0044 | 0.9442±0.0098 | 0.9231±0.0083 | 0.6421±0.0016 | 0.6903±0.0012 | 0.1397±0.0012
TRACK | 0.6744±0.0157 | 0.6948±0.0126 | 0.0873±0.0014 | 0.2198±0.0032 | 0.2939±0.0033 | 4.0375±0.0368 | 8.0509±0.0472 | 0.2773±0.0095 | 0.8941±0.0023 | 0.9387±0.0064 | 0.9158±0.0071 | 0.6455±0.0016 | 0.6898±0.0016 | 0.1402±0.0019
GREEN | 0.6705±0.0184 | 0.7006±0.0147 | 0.0913±0.0019 | 0.2285±0.0032 | 0.3062±0.0032 | 4.1657±0.0324 | 8.2444±0.0496 | 0.2915±0.0083 | 0.9078±0.0035 | 0.9437±0.0071 | 0.9207±0.0063 | 0.6302±0.0020 | 0.6811±0.0019 | 0.1438±0.0022
PRTraj | 0.8879±0.0197 | 0.8953±0.0134 | 0.1078±0.0011 | 0.2649±0.0023 | 0.3496±0.0029 | 3.8719±0.0341 | 7.8116±0.0417 | 0.2607±0.0071 | 0.9612±0.0025 | 0.9828±0.0051 | 0.9712±0.0032 | 0.6813±0.0013 | 0.7289±0.0013 | 0.1282±0.0011

Chengdu
HRNR | 0.7054±0.0254 | 0.7903±0.0209 | 0.4168±0.0012 | 0.6215±0.0020 | 0.6995±0.0013 | 1.7796±0.0087 | 2.7743±0.0272 | 0.2308±0.0033 | 0.6069±0.0154 | 0.7705±0.0156 | 0.6782±0.0146 | 0.7489±0.0027 | 0.7885±0.0029 | 0.1007±0.0016
Toast | 0.4200±0.0279 | 0.7508±0.0138 | 0.3954±0.0005 | 0.5940±0.0006 | 0.6667±0.0009 | 1.7804±0.0126 | 2.7730±0.0308 | 0.2301±0.0067 | 0.5739±0.0166 | 0.7180±0.0173 | 0.6100±0.0196 | 0.7359±0.0019 | 0.7779±0.0017 | 0.1020±0.0011
PIM | 0.4684±0.0394 | 0.7569±0.0129 | 0.4073±0.0009 | 0.6129±0.0010 | 0.6858±0.0009 | 1.7877±0.0042 | 2.7778±0.0145 | 0.2292±0.0026 | 0.5498±0.0044 | 0.7012±0.0057 | 0.6025±0.0050 | 0.7502±0.0019 | 0.7895±0.0015 | 0.0969±0.0016
DyToast | 0.4511±0.0306 | 0.7548±0.0178 | 0.4007±0.0011 | 0.6019±0.0013 | 0.6703±0.0016 | 1.5071±0.0091 | 2.3719±0.0220 | 0.2025±0.0082 | 0.6256±0.0092 | 0.7811±0.0103 | 0.6962±0.0086 | 0.7456±0.0019 | 0.7858±0.0016 | 0.0981±0.0015
JCLRNT | 0.6376±0.0392 | 0.7754±0.0194 | 0.4072±0.0011 | 0.6154±0.0011 | 0.6932±0.0010 | 1.7959±0.0049 | 2.7878±0.0209 | 0.2343±0.0034 | 0.7521±0.0045 | 0.8706±0.0093 | 0.8069±0.0058 | 0.7526±0.0015 | 0.7908±0.0013 | 0.0958±0.0005
START | 0.6535±0.0358 | 0.7805±0.0231 | 0.4128±0.0015 | 0.6231±0.0018 | 0.7033±0.0020 | 1.4901±0.0052 | 2.3406±0.0133 | 0.2010±0.0033 | 0.7859±0.0170 | 0.8911±0.0183 | 0.8337±0.0199 | 0.7559±0.0014 | 0.7954±0.0012 | 0.0931±0.0009
TRACK | 0.6107±0.0351 | 0.7590±0.0209 | 0.4159±0.0013 | 0.6267±0.0015 | 0.7041±0.0018 | 1.4307±0.0056 | 2.2401±0.0159 | 0.1864±0.0039 | 0.7712±0.0130 | 0.8831±0.0149 | 0.8221±0.0120 | 0.7549±0.0017 | 0.7951±0.0016 | 0.0927±0.0012
GREEN | 0.6182±0.0275 | 0.7692±0.0136 | 0.4060±0.0008 | 0.6137±0.0009 | 0.6933±0.0009 | 1.4259±0.0035 | 2.2245±0.0129 | 0.1882±0.0023 | 0.7992±0.0071 | 0.9044±0.0089 | 0.8394±0.0071 | 0.7428±0.0011 | 0.7824±0.0010 | 0.1014±0.0007
PRTraj | 0.8004±0.0262 | 0.8615±0.0175 | 0.4294±0.0006 | 0.6408±0.0007 | 0.7184±0.0009 | 1.3668±0.0087 | 2.1718±0.0194 | 0.1776±0.0059 | 0.9404±0.0081 | 0.9728±0.0117 | 0.9557±0.0084 | 0.7620±0.0015 | 0.8017±0.0014 | 0.0933±0.0006

Xi'an
HRNR | 0.6459±0.0359 | 0.6651±0.0251 | 0.3511±0.0010 | 0.5800±0.0027 | 0.6624±0.0019 | 2.9449±0.0105 | 4.7241±0.0378 | 0.2835±0.0047 | 0.6641±0.0073 | 0.8062±0.0124 | 0.7251±0.0087 | 0.6972±0.0024 | 0.7459±0.0022 | 0.1062±0.0014
Toast | 0.4308±0.0405 | 0.5129±0.0281 | 0.3260±0.0011 | 0.5421±0.0014 | 0.6247±0.0018 | 2.9350±0.0191 | 4.6983±0.0403 | 0.2824±0.0059 | 0.6618±0.0168 | 0.7838±0.0147 | 0.6902±0.0107 | 0.6894±0.0017 | 0.7419±0.0016 | 0.1094±0.0007
PIM | 0.4357±0.0395 | 0.5155±0.0294 | 0.3396±0.0009 | 0.5629±0.0007 | 0.6440±0.0006 | 2.9540±0.0079 | 4.7358±0.0153 | 0.2786±0.0073 | 0.6188±0.0062 | 0.7453±0.0102 | 0.6685±0.0073 | 0.7022±0.0014 | 0.7521±0.0011 | 0.1038±0.0011
DyToast | 0.4327±0.0362 | 0.5149±0.0260 | 0.3304±0.0016 | 0.5451±0.0026 | 0.6222±0.0019 | 2.2856±0.0176 | 3.7257±0.0393 | 0.2282±0.0056 | 0.6900±0.0061 | 0.8187±0.0040 | 0.7488±0.0041 | 0.6975±0.0029 | 0.7483±0.0026 | 0.1058±0.0028
JCLRNT | 0.4582±0.0380 | 0.5076±0.0377 | 0.3430±0.0015 | 0.5687±0.0024 | 0.6566±0.0027 | 2.9791±0.0124 | 4.7513±0.0348 | 0.2869±0.0054 | 0.8092±0.0054 | 0.8895±0.0065 | 0.8451±0.0057 | 0.7024±0.0018 | 0.7518±0.0016 | 0.1039±0.0007
START | 0.6225±0.0226 | 0.6363±0.0141 | 0.3499±0.0014 | 0.5844±0.0026 | 0.6742±0.0031 | 2.3700±0.0116 | 3.7994±0.0317 | 0.2482±0.0046 | 0.8645±0.0098 | 0.9214±0.0097 | 0.8724±0.0087 | 0.6985±0.0016 | 0.7481±0.0014 | 0.1047±0.0006
TRACK | 0.4652±0.0238 | 0.5190±0.0170 | 0.3503±0.0012 | 0.5824±0.0018 | 0.6705±0.0021 | 2.2514±0.0198 | 3.5998±0.0399 | 0.2047±0.0059 | 0.8613±0.0077 | 0.9255±0.0131 | 0.8793±0.0089 | 0.6998±0.0022 | 0.7480±0.0020 | 0.1044±0.0013
GREEN | 0.5290±0.0209 | 0.5545±0.0115 | 0.3550±0.0009 | 0.5889±0.0011 | 0.6773±0.0010 | 2.2743±0.0154 | 3.7211±0.0351 | 0.2056±0.0055 | 0.8862±0.0091 | 0.9403±0.0095 | 0.9123±0.0071 | 0.6969±0.0015 | 0.7478±0.0013 | 0.1041±0.0008
PRTraj | 0.7599±0.0268 | 0.7591±0.0195 | 0.3624±0.0008 | 0.5971±0.0010 | 0.6837±0.0011 | 2.1507±0.0135 | 3.6072±0.0308 | 0.1943±0.0049 | 0.9538±0.0071 | 0.9864±0.0074 | 0.9687±0.0079 | 0.7131±0.0013 | 0.7629±0.0011 | 0.1010±0.0005

Figure 7: Performance of PRTraj on downstream tasks with varying proportions of training data on the (a) Chengdu and (b) Xi'an trajectory datasets. The red dashed line denotes the best baseline performance. For brevity, we report one metric for each task.

Figure 8: Performance of PRTraj on downstream tasks under varying hyperparameter settings for the Environment Perception Module across the (a) Beijing, (b) Chengdu, and (c) Xi'an datasets. For brevity, we present one metric for each task.
Figure 9: Case study of trajectory similarity retrieval on the Beijing dataset, comparing (a) START, (b) TRACK, (c) GREEN, and (d) PRTraj. Each panel shows the query trajectory together with the most similar, 2nd most similar, and 3rd most similar retrieved trajectories. (Please zoom in for better visibility.)

Figure 10: Case study of trajectory similarity retrieval on the Chengdu dataset, with the same layout as Figure 9. (Please zoom in for better visibility.)

Figure 11: Case study of trajectory similarity retrieval on the Xi'an dataset, with the same layout as Figure 9. (Please zoom in for better visibility.)

Figure 12: Traffic heatmaps of the Sanyuan Bridge area (a) before and (b) during traffic control.

Figure 13: t-SNE visualization of trajectory embeddings before and during the traffic control event in the Sanyuan Bridge area, for (a) START, (b) TRACK, (c) GREEN, and (d) PRTraj.
Technical Report

SIMKO: SIMPLE PASS@K POLICY OPTIMIZATION

Ruotian Peng1,* Yi Ren2,* Zhouliang Yu3 Weiyang Liu3 Yandong Wen1,†
1Westlake University 2University of British Columbia 3CUHK

ABSTRACT

Reinforcement learning with verifiable rewards (RLVR) has advanced the reasoning capabilities of large language models (LLMs). However, prevailing RLVR methods exhibit a systematic bias toward exploitation over exploration, as evidenced by improved pass@1 but reduced pass@K (K>1) performance. To understand this issue, we analyze the training dynamics of RLVR methods by tracking the token-level probability distributions over vocabulary candidates. Our analysis reveals a consistent probability concentration effect where the top-1 candidate increasingly accumulates probability mass and suppresses that of other candidates. More importantly, stronger over-concentration correlates with worse pass@K performance. Inspired by this finding, we propose Simple Pass@K Optimization (SimKO), a method designed to mitigate the over-concentration issue, thereby encouraging exploration. SimKO operates in an asymmetrical manner. For verified-correct responses, it boosts the probabilities of the top-K candidates. For verified-incorrect responses, it applies stronger penalties to the top-1 candidate. We observe that this asymmetric design is particularly effective at mitigating over-concentration when applied at tokens with high entropy. Across various math and logical-reasoning benchmarks, SimKO consistently yields higher pass@K for a wide range of K, providing a simple way to improve RLVR's exploration.

Figure 1: SimKO improves pass@K performance on math tasks (AIME24/25, AMC, MATH500, Minerva, Olympiadbench) and logic tasks (Synlogic, BBH) compared to GRPO, as shown in the plots (left and middle). The figure on the right shows the k-th highest candidate probabilities averaged over the dataset. The SimKO-trained model exhibits a less concentrated probability distribution compared to GRPO.

1 INTRODUCTION

Reinforcement learning with verifiable rewards (RLVR) offers a simple recipe for improving LLM reasoning: the model generates responses, and updates itself by increasing the probability of the correct ones while decreasing that of the incorrect ones (Shao et al., 2024; Schulman et al., 2017; Hu, 2025). This coupled process induces a systematic bias, whereby the model progressively collapses to a narrow set of safe responses, implicitly prioritizing exploitation over exploration (Liang et al., 2025; He et al., 2025a). Such an effect is evident in improved pass@1, which measures the expected quality of a single reasoning path, but degraded pass@K, which measures the coverage of multiple reasoning paths (Yue et al., 2025; Wu et al., 2025). It has been observed that the reduced exploration ability limits the model's reasoning potential and deteriorates its ability to generalize to novel or more challenging scenarios (Chen et al., 2025a; Song et al., 2025).
*Equal contributions. †Corresponding author. Project page: spherelab.ai/simko

Several approaches have been proposed to mitigate this exploration deficit. Data-centric methods focus on data augmentation, exposing the model to broader reasoning environments to promote response diversity. Techniques include using off-policy data (Dong et al., 2025; Li et al., 2025), generating more responses for challenging samples (Yang et al., 2025), and creating new task variations from the model itself (Liang et al., 2025). As a complementary line of work, reward-centric methods derive group-wise rewards to evaluate the collective quality of multiple responses. These approaches provide an unbiased estimate of pass@K (Walder & Karkhanis, 2025; Chen et al., 2025b), which the model can then directly optimize. Despite their promise, these methods operate only at the input or output end, leaving the internal learning dynamics underexplored. More recently, entropy-based methods (Cui et al., 2025; Cheng et al., 2025) use output entropy as a proxy for exploration control, preventing the model from collapsing. While these methods yield valuable insights, entropy remains a coarse and incomplete measure that cannot capture fine-grained exploration behavior.

In this work, we take an alternative perspective grounded in next-token prediction. At each decoding step, an LLM outputs a probability distribution over its entire vocabulary, offering a direct and fine-grained view of its exploration behavior. How this probability mass is distributed across vocabulary candidates determines whether the model explores multiple reasoning paths or collapses into a single deterministic trajectory (Deng et al., 2025b; Zhu et al., 2025). Unfortunately, capturing the full distribution is computationally prohibitive, as modern vocabularies often exceed 100K candidates. This practical constraint likely explains why prior work favored scalar measures like entropy.

We seek a practical solution to this constraint by revisiting the token-level probability distribution. Our empirical evidence shows that these distributions are highly skewed, as illustrated in Figure 1, with only a few candidates carrying non-negligible probability mass. This finding justifies a tractable approximation: by focusing on the top-K candidates, we can effectively characterize exploration behavior. Building on this view, we further analyze the training dynamics of RLVR algorithms and observe a distinct pattern: probability mass gradually concentrates on the top-1 candidate, while other candidates are suppressed. This over-concentration pushes the model towards deterministic behavior and, more importantly, directly explains the degradation of pass@K performance.

Motivated by this observation, we propose Simple Pass@K Optimization (SimKO), a method designed to explicitly mitigate distribution collapse. The core idea is to redistribute and balance gradient updates across the top-K candidates rather than allowing the top-1 to dominate. For verified-correct responses, SimKO shares positive gradients between the generated token and other high-probability candidates, reducing over-concentration on a single choice. For incorrect responses, SimKO applies stronger penalties to the top-1 candidate, encouraging probability mass to flow into other alternative candidates.
This asymmetric regularization proves especially effective when applied to "semantic forking" tokens in the reasoning path, where token-level entropy is high.

We evaluate SimKO on several mathematical and logical reasoning benchmarks across multiple LLM backbones. SimKO consistently improves over vanilla GRPO and surpasses strong baselines across a wide range of K values (up to 256), as shown in Figure 1. Ablation studies further confirm the mechanism behind its gains, offering new evidence for the role of probability concentration in the exploration-exploitation trade-off. In summary, our work makes three key contributions:
• We introduce a new perspective on understanding RLVR dynamics. Specifically, we adopt top-K posterior probabilities as a tractable approximation of the full token-level distribution, providing direct insight into why existing RLVR methods often improve pass@1 at the cost of exploration.
• We propose a simple, effective yet principled RLVR method, termed SimKO. SimKO mitigates probability collapse through asymmetric gradient redistribution. By redistributing probability mass from overconfident top-1 tokens to other highly confident candidates, SimKO can explicitly promote exploration, thereby improving the pass@K performance.
• SimKO consistently improves the pass@K performance by a considerable margin. In our experiments, we demonstrate SimKO's effectiveness on multiple challenging math and logic benchmarks, showing consistent improvements in pass@K without sacrificing pass@1.

2 BACKGROUND AND PRELIMINARIES

Group relative policy optimization (GRPO), a representative and widely used policy-based RLVR method (Shao et al., 2024), is a variant of PPO (Schulman et al., 2017) tailored for LLM post-training.

Figure 2: (a) The exploration behavior visualized according to the token-level posterior probability. (b) Comparison of two exploration strategies. (c) An example of two distributions with identical entropy but distinct probability distributions.

Given a question x, the model generates G different responses {y_i}_{i=1}^{G} and updates its parameters according to the following objective function:

J_GRPO(θ; γ_{i,l}) = (1/G) Σ_{i=1}^{G} (1/|y_i|) Σ_{l=1}^{|y_i|} [ min(γ_{i,l} A_{i,l}, clip(γ_{i,l}, 1 − ϵ, 1 + ϵ) A_{i,l}) − β D_KL(π_θ || π_ref) ],

where γ_{i,l} = π_θ(y_{i,l} | s_{i,l}) / π_ref(y_{i,l} | s_{i,l}) is the likelihood ratio between the current policy π_θ and the reference policy π_ref at the l-th token of response y_i, with s_{i,l} = (x, y_{i,<l}) denoting the decoding state. The term A_{i,l} represents the advantage of the l-th token in the i-th response, computed by normalizing rewards within the roll-out.

GRPO can be analyzed in gradient space under the learning dynamics framework of Ren & Sutherland (2025). The dominant contribution to parameter updates comes from ∇_θ A_{i,l} γ_{i,l}. Here, the advantage A_{i,l} can be viewed as a sequence-level adaptive learning rate that scales gradients from different responses y_i.
Ignoring the contributions of the KL regularization and clipping terms (both primarily introduced for stability), the main optimization direction is:

∇_θ A_{i,l} γ_{i,l} = A_{i,l} · (π_θ(y_{i,l}|s_{i,l}) / π_ref(y_{i,l}|s_{i,l})) · ∇_θ log π_θ(y_{i,l}|s_{i,l}) = A_{i,l} · sg(γ_{i,l}) · ∇_θ log π_θ(y_{i,l}|s_{i,l}),   (1)

where sg(·) is the stop-gradient operator.

3 OVER-CONCENTRATED DISTRIBUTION DEGRADES PASS@K SCORE

To understand the mechanism driving current RLVR methods to favor exploitation over exploration, we begin by examining the connection between the model's token-level probability distribution and its pass@K performance. We consider the illustrative example in Figure 2-(a), where the model generates the l-th token given the context s_{i,l} = (x, y_{i,<l}). Suppose two valid reasoning paths begin with y^(1)_{i,l} and y^(2)_{i,l}, where y^(k)_{i,l} denotes the rank-k candidate under π_θ(· | s_{i,l}). A model with strong exploration ability will distribute probability more evenly between π_θ(y^(1)_{i,l} | s_{i,l}) and π_θ(y^(2)_{i,l} | s_{i,l}), as illustrated in the upper panel of Figure 2-(b).

To track the token probability distribution among the top-K candidates, we propose to compute

Λ^(k) := (1/G) Σ_{i=1}^{G} (1/|y_i|) Σ_{l=1}^{|y_i|} log π_θ(y^(k)_{i,l} | s_{i,l}),   k ∈ {1, . . . , K}.   (2)

This metric represents the average log-probability of the rank-k candidate during generation. Despite the large vocabulary size, Figure 1 shows that the probability mass is highly concentrated in the top few candidates. Empirically, we find that tracking the top-3 candidates (i.e., K=3) is sufficient and efficient for approximating the distribution.

In addition to the probability of the rank-k candidate, we track the evolution of the average log-probability of the token sequence that is actually sampled, which is denoted as Λ.¹ Since updates occur directly on these sampled tokens, examining which Λ^(k) is closer to Λ allows us to infer which ranked candidate primarily dominates the model's generation process.

¹Simply replace the y^(k)_{i,l} term in Equation 2 by the sampled candidate.

Figure 3: (a)-(c) Training dynamics of the average log-probability Λ and the top-K probabilities Λ^(k) derived by GRPO, NSR, and PSR. (d) The corresponding pass@1 and pass@K results of the RLVR-trained models. Following the setups of Zhu et al. (2025), we train a Llama3.2-3B-Instruct on a mixture of GSM8K and MATH (Level 1) and train Qwen2.5-Math-7B on the MATH dataset. Complete training dynamics are shown in Figure 9.

To better disentangle the roles of positive and negative gradients, central to our algorithm design, we also conduct two ablations where the model is updated using only one type of response. We refer to these as Positive Sample Reinforce (PSR) and Negative Sample Reinforce (NSR) (Zhu et al., 2025). We present the learning dynamics in Figure 3.
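As a concrete reference for Equation 2, the following is a minimal sketch (ours, assuming per-step logits of a response are available as a tensor) of how Λ and Λ^(k) can be tracked for a single response; averaging the returned values over the G responses of a roll-out gives the quantities plotted in Figure 3.

import torch

def lambda_stats(logits, sampled_ids, K=3):
    # logits: [T, V] per-step logits of one response; sampled_ids: [T]
    log_probs = torch.log_softmax(logits, dim=-1)               # [T, V]
    # Lambda: average log-probability of the actually sampled tokens
    lam = log_probs.gather(-1, sampled_ids.unsqueeze(-1)).mean()
    # Lambda^(k): average log-probability of the rank-k candidate (Eq. 2)
    lam_k = log_probs.topk(K, dim=-1).values.mean(dim=0)        # [K]
    return lam, lam_k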
The results in the first column reveal a clear trend under GRPO training: the probability of the rank-1 candidate, Λ^(1), approaches Λ and saturates near 1, while the probabilities of alternatives (Λ^(2), Λ^(3)) collapse to negligible values (10^-8 to 10^-10 in Qwen2.5-Math-7B). This demonstrates that GRPO aggressively concentrates probability mass onto a single candidate. While NSR partially mitigates this effect, PSR substantially exacerbates it.

Crucially, we observe a clear inverse relationship between the degree of probability concentration and pass@K performance. As shown in Figure 3-(d), as the probability gap between the rank-1 candidate and its alternatives increases, the model's pass@K accuracy declines. This suggests that over-concentration of posterior probability on the top-1 candidate suppresses the exploration of other plausible reasoning paths. This observation naturally motivates a key question: If we can mitigate this over-concentration, can we improve pass@K performance by encouraging broader exploration? Our algorithmic design is precisely driven by this intuition and presented in the next section.

Summary of Findings
• RLVR Induces Over-Concentration: Standard RLVR methods, such as GRPO, tend to aggressively concentrate the token probability distribution. Specifically, the probability of the rank-1 candidate saturates near 1, while the probabilities of other viable alternatives collapse to negligible values.
• Over-Concentration Inversely Correlates with Pass@K: We observe a significant inverse correlation between the degree of probability concentration on the rank-1 candidate and the model's pass@K performance. Specifically, as the disparity between the probability of the rank-1 candidate and its alternatives increases, the pass@K accuracy decreases.

4 SIMKO: SIMPLE PASS@K OPTIMIZATION

In this section, we present the three key components of our SimKO method: (i) identifying "forking" tokens for reasoning, (ii) applying top-K label smoothing to redistribute positive gradients among the top-K candidates, and (iii) strengthening rank-1 candidate updates for negative gradients. These steps, illustrated in Figure 4, work together to improve reasoning diversity.

Figure 4: Intuition of the proposed method. (a) We begin by identifying the "forking" tokens, which are high-entropy tokens that diverge into multiple reasoning paths. (b) For positive samples, we redistribute the probability mass from the top-1 candidate to the top-K candidates, mitigating over-concentration. (c) For negative samples, we apply a strong penalty to the top-1 candidate and a weaker penalty to non-top-1 candidates to prevent the squeezing effect, thereby avoiding sharp distributions and facilitating model exploration.

4.1 IDENTIFYING INFORMATIVE TOKENS TO OPTIMIZE

While standard on-policy RLVR methods compute gradients over the entire generated sequence, not all tokens contribute equally to logical inference.
Recent work (Wang et al., 2025; Deng et al., 2025b) shows that only a small subset of tokens drives the core reasoning process, whereas the majority (over 80%) primarily ensure grammatical fluency and formatting. Our analysis supports this finding by showing that key reasoning tokens, often marked as "forking" tokens, tend to exhibit higher entropy and shape the reasoning trajectory (Figure 4-(a)). Building on this insight, SimKO selectively operates on a critical subset of tokens whose entropy is greater than a threshold τ. In short, by replacing the γ_{i,l} in the vanilla GRPO loss with a newly defined γ̃_{i,l}, the proposed method can be written as:

J_SimKO(θ) = J_GRPO(θ; γ_{i,l} → γ̃_{i,l});
γ̃_{i,l} := { γ^pos_{i,l},  if H(π_θ(·|s_{i,l})) > τ and A_{i,l} > 0;
             γ^neg_{i,l},  if H(π_θ(·|s_{i,l})) > τ and A_{i,l} < 0;
             γ_{i,l},      if H(π_θ(·|s_{i,l})) ≤ τ, for any A_{i,l} },   (3)

where H(π_θ(·|s_{i,l})) = −Σ_{y∈V} π_θ(y | s_{i,l}) log π_θ(y | s_{i,l}) denotes the entropy of the policy distribution at state s_{i,l}, and A_{i,l} is the advantage calculated in vanilla GRPO.

4.2 REDISTRIBUTING POSITIVE GRADIENTS AMONG TOP-K CANDIDATES

We now design regularization mechanisms for tokens with positive gradients, i.e., tokens from correct responses. To understand how the model's predictions evolve after an update, we analyze its behavior in gradient space and denote the derivative of the loss with respect to the logits z as the G-term:

G(i, l) := ∇_z(−log π_θ(y_{i,l}|s_{i,l})) = π_θ(·|s_{i,l}) − e_{y_{i,l}},   (4)

where e_{y_{i,l}} is the one-hot vector for the label y_{i,l}. We now analyze the one-step effect of learning the l-th token in the i-th correct response. As shown in Figure 4-(b), a one-step update increases the probability of the y_{i,l}-th candidate of π_θ(· | s_{i,l}) while simultaneously decreasing all other candidates. When y_{i,l} is the rank-1 candidate, which is highly likely under on-policy sampling, the distribution becomes sharper, and the probability gap between the rank-1 candidate and its alternatives grows. Continued training under this dynamic causes the rank-1 candidate to absorb nearly all probability mass, leaving the model unable to generate diverse yet correct responses that begin with rank-k candidates. This effect is empirically validated by our results in Figure 3.

To address this issue, we need to design a mechanism that can reallocate probability mass absorbed by the rank-1 candidate back to other plausible ones, thereby restoring diversity. This resonates with the classical solution for over-concentration: label smoothing (Müller et al., 2019), in which the one-hot target e_{y_{i,l}} is replaced by a convex combination of the one-hot and a uniform distribution: (1 − α) e_{y_{i,l}} + (α/(V−1)) u, where α ∈ [0, 1] controls the smoothing strength.

However, directly applying vanilla label smoothing in LLM fine-tuning is problematic because the vocabulary size V is extremely large. Spreading probability mass uniformly risks generating ungrammatical or irrelevant tokens, which can destabilize training. To address this problem, we take inspiration from the design of top-K sampling (Fan et al., 2018) and propose top-K label smoothing, which redistributes probability only across the several most plausible candidates. Concretely, by replacing the e_{y_{i,l}} in Eqn. 4, we propose the G̃-term:

G̃(i, l) = π_θ(·|s_{i,l}) − ẽ_topK = π_θ(·|s_{i,l}) − ( (1 − α) e_{y_{i,l}} + (α/K) Σ_{k∈I_topK} e_k ),   (5)

where I_topK denotes the indices of the top-K tokens under the current model distribution. Importantly, we retain this definition even if y_{i,l} ∈ I_topK, ensuring consistent treatment of the target token.
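To make Equation 5 concrete, the following is a minimal sketch (ours, not the authors' code) of the smoothed target ẽ_topK and the resulting G̃-term; the function name and the toy distribution are illustrative assumptions.

import torch

def topk_smoothed_target(probs, label, alpha=0.3, K=2):
    # probs: [V] current policy distribution; label: sampled token id
    target = torch.zeros_like(probs)
    target[label] = 1.0 - alpha              # (1 - alpha) * e_y
    topk_ids = probs.topk(K).indices
    target[topk_ids] += alpha / K            # + (alpha / K) * sum_k e_k
    return target

probs = torch.tensor([0.63, 0.23, 0.14])
g_tilde = probs - topk_smoothed_target(probs, label=0)
# g_tilde is approx. [-0.22, 0.08, 0.14], versus probs - e_0 = [-0.37, 0.23, 0.14]:
# the push-up on the rank-1 logit shrinks and the push-down on the rank-2
# logit weakens, so the top of the distribution flattens into a plateau.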
We now present an equivalent loss with the re-designed one-step gradient provided in Equation 5 as

\[
\gamma^{\mathrm{pos}}_{i,l} = (1-\alpha)\, \gamma_{i,l} + \frac{\alpha}{|\mathcal{I}_{\mathrm{top}K}|} \sum_{k \in \mathcal{I}_{\mathrm{top}K}} \operatorname{sg}\!\left( \frac{\gamma_{i,l}}{\gamma^{(k)}_{i,l}} \right) \gamma^{(k)}_{i,l}, \qquad \alpha \in [0, 1],
\tag{6}
\]

where $\gamma^{(k)}_{i,l} = \pi_\theta(y^{(k)}_{i,l} \mid s_{i,l}) / \pi_{\theta_{\mathrm{ref}}}(y^{(k)}_{i,l} \mid s_{i,l})$ is the likelihood ratio of the model's probability on the rank-k candidate. The stop-gradient term $\operatorname{sg}(\cdot)$ is designed to make sure that $\operatorname{sg}(\gamma^{\mathrm{pos}}_{i,l}) = \operatorname{sg}(\gamma_{i,l})$, which is crucial for importance sampling in GRPO-based methods (see Equation 1). More design details can be found in Appendix B. Intuitively, optimizing with $\gamma^{\mathrm{pos}}_{i,l}$ increases the probabilities of the top-K candidates, causing the output distribution to form a plateau rather than a sharp peak. This flatter distribution promotes exploration and enhances response diversity, as illustrated in Figure 4.

4.3 RESTRAINING NEGATIVE GRADIENTS ON TOP-1 CANDIDATE

We design a separate mechanism for negative responses because their gradients affect the probability distribution asymmetrically compared to positive gradients. As in the previous subsection, we analyze the one-step influence of negative gradients ($A_{i,l} < 0$) through their G-term. Expanding $G(i, l)$ element-wise (Equation 4) leads to $[G(i, l)]_{y_{i,l}} = \pi_\theta(y_{i,l} \mid s_{i,l}) - 1$ for the target candidate and $[G(i, l)]_{y'} = \pi_\theta(y' \mid s_{i,l})$ for every other candidate $y' \neq y_{i,l}$. This formulation highlights two key effects. First, candidates with already high probabilities experience minimal push-down pressure when selected, since $\pi_\theta(y_{i,l} \mid s_{i,l}) - 1 \approx 0$. Second, the probability mass removed from the target candidate is redistributed proportionally across all other candidates based on their current probabilities. These mechanisms align with the "squeezing effect" described in Ren & Sutherland (2025), where the rank-1 candidate benefits the most during redistribution.

This dual influence also clarifies why simply amplifying negative gradients, e.g., by multiplying the advantages with $A_{i,l} < 0$ by a large scalar $c$, is ineffective in mitigating the decay of exploration. While uniformly stronger push-down pressure can suppress probability growth of the rank-1 candidate, the induced squeezing effect on non-rank-1 candidates paradoxically makes the distribution sharper, as demonstrated in the upper panel of Figure 4-(c). In summary, for negative gradients, we must control the relative strength between the negative gradients imposed on rank-1 and non-rank-1 candidates. Motivated by this, we propose to replace the original $\gamma_{i,l}$ by $\lambda \cdot \gamma_{i,l}$ when $y_{i,l}$ is the rank-1 candidate, where $\lambda > 1$ is a hyper-parameter. The effect of applying this $\gamma^{\mathrm{neg}}_{i,l}$ is demonstrated by the lower panel in Figure 4-(c). We present the pseudo-code in Figure 5.

    def compute_policy_loss():
        ratio = torch.exp(log_prob - old_log_prob)
    +   # 1. Identify forking tokens.
    +   w = (entropy > percentile(entropy, tau))
    +   # 2. Use the top-K ratio for positive tokens.
    +   topk_ratio = torch.exp(topk_log_probs - old_topk_log_probs)
    +   topk_ratio = ((ratio.detach() / topk_ratio.detach()) * topk_ratio).sum(dim=-1)
    +   ratio = torch.where(advantage > 0, (1 - alpha*w)*ratio + (alpha*w/K)*topk_ratio, ratio)
    +   # 3. Apply a strong penalty to top-1 negative tokens.
    +   mask = (advantage < 0) & is_top1 & w
    +   ratio[mask] *= lam
        pg_losses = -advantage * ratio
        # ... clip and compute loss

Figure 5: The pseudo-code of the policy loss computation with SimKO; lines marked "+" are the additions. SimKO only requires modifying a few lines of a standard policy gradient implementation.
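Beyond the diff-style pseudo-code of Figure 5, the following is a self-contained sketch of the same computation written against pre-computed tensors. All tensor names and shapes are our assumptions (per-token quantities of shape [B, T], top-K quantities of shape [B, T, K]), and the defaults follow Appendix D.1; treat this as an illustrative reading of the method, not the authors' released code.

    import torch

    def simko_policy_loss(log_prob, old_log_prob,           # [B, T] sampled-token log-probs
                          topk_log_prob, old_topk_log_prob, # [B, T, K] top-K candidate log-probs
                          is_top1, entropy, advantage,      # [B, T] bool mask / entropy / advantage
                          alpha=0.01, q=0.8, lam=1.1, K=4, eps=0.2):
        ratio = torch.exp(log_prob - old_log_prob)  # gamma_{i,l}
        # Entropy gate of Eq. 3: tau(q) is the q-quantile of the token entropies.
        tau = torch.quantile(entropy.flatten(), q)
        w = (entropy > tau).float()
        # Positive branch (Eq. 6); the detach-based stop-gradient trick keeps
        # sg(gamma_pos) = sg(gamma), preserving valid importance sampling.
        topk_ratio = torch.exp(topk_log_prob - old_topk_log_prob)  # gamma^{(k)}
        mix = ((ratio.detach().unsqueeze(-1) / topk_ratio.detach()) * topk_ratio).sum(-1)
        pos_ratio = (1 - alpha * w) * ratio + (alpha * w / K) * mix
        # Negative branch: amplify the penalty on the rank-1 candidate by lam > 1.
        neg_ratio = torch.where(is_top1 & (w > 0), lam * ratio, ratio)
        tilde_ratio = torch.where(advantage > 0, pos_ratio, neg_ratio)
        # Standard clipped surrogate on the modified ratio.
        unclipped = tilde_ratio * advantage
        clipped = torch.clamp(tilde_ratio, 1 - eps, 1 + eps) * advantage
        return -torch.min(unclipped, clipped).mean()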
[Figure 6: four panels (GRPO, KL-Cov, Entropy-Adv, SimKO), each plotting the log10 probabilities of Λ, Λ^(1), Λ^(2), and Λ^(3) over 100 training steps.]

Figure 6: Comparison of SimKO with GRPO, KL-Cov, and Entropy-Adv on Qwen2.5-Math-7B. SimKO effectively controls probability concentration on Λ^(1) while preserving diversity among Λ^(2) and Λ^(3).

[Figure 7: token-level entropy histograms (number of tokens on a log10 scale vs. entropy) for KL-Cov, Entropy-Adv, and SimKO, each overlaid against GRPO.]

Figure 7: Token-level entropy distributions from the Qwen2.5-Math-7B backbone trained with SimKO, GRPO, KL-Cov, and Entropy-Adv, demonstrating SimKO's ability to maintain the entropy of the "forking" tokens.

5 EXPERIMENTS AND RESULTS

5.1 EXPERIMENTAL SETUP

Models and Datasets. We experiment with a diverse set of models, including Qwen2.5-7B (Team, 2024), Qwen2.5-Math-7B (Yang et al., 2024), and Llama3.2-3B-Instruct (AI@Meta, 2024). Following the setups in Zhu et al. (2025); Zeng et al. (2025), the Qwen models are trained on the MATH dataset (Hendrycks et al., 2021), while the Llama model is trained on a combined dataset of GSM8K (Cobbe et al., 2021) and MATH (Level 1). For logical reasoning tasks, we train Qwen2.5-7B on the Synlogic-easy dataset (training split). Training details and hyper-parameters are given in Appendix D.1.

Evaluation Protocol. We compare SimKO against several competitive baselines, including GRPO, PSR, NSR, W-REINFORCE (Zhu et al., 2025), KL-Cov (Cui et al., 2025), P@k T. (Chen et al., 2025a) and Entropy-Adv (Cheng et al., 2025). Evaluations are conducted on a variety of reasoning benchmarks: MATH-500 (Hendrycks et al., 2021), Minerva Math (Lewkowycz et al., 2022), Olympiad-Bench (He et al., 2024), AMC, AIME, Synlogic-easy (validation split) (Liu et al., 2025a), and BBH (Suzgun et al., 2022). To obtain a comprehensive evaluation, we adopt the unbiased pass@K metric with K up to 256, computed as

\[
\mathrm{pass@}K := \mathbb{E}_{x \sim \mathcal{D}} \left[ 1 - \binom{n-c}{K} \Big/ \binom{n}{K} \right],
\]

where $c$ denotes the number of correct completions out of $n$ generated responses. To reduce evaluation variance on small datasets (e.g., AIME and AMC), we set n = 300; for other math datasets, we use n = 256, and for logic datasets, n = 128.

5.2 EFFECTS OF SIMKO ON TRAINING DYNAMICS

We analyze the training dynamics of SimKO in comparison with GRPO, KL-Cov, and Entropy-Adv. Figure 6 presents the changes of the top-K log-probabilities (Λ^(1), Λ^(2), and Λ^(3)) across training steps. As can be seen, GRPO leads to severe over-concentration: Λ^(1)_GRPO increases to nearly 1, while Λ^(2)_GRPO and Λ^(3)_GRPO sharply drop below 10^-8 and 10^-10, respectively. This indicates that nearly all probability mass collapses onto the top-1 token. KL-Cov exhibits a moderate concentration effect due to the KL penalty, while Entropy-Adv collapses even more rapidly, likely because of its stronger emphasis on high-entropy tokens. In contrast, SimKO achieves the most effective deconcentration among all methods, evidenced by a lower Λ^(1)_SimKO and higher Λ^(2)_SimKO and Λ^(3)_SimKO. These results suggest that SimKO effectively mitigates probability mass collapse and can potentially encourage exploration during training. To further validate this, we visualize the histogram of token-level entropy in Figure 7.
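As an aside on how the token-level entropy in Figure 7 (and the gate in Eq. 3) can be computed, here is a minimal, numerically stable sketch; the function name and the histogram range are our choices, mirroring the axes of Figure 7.

    import torch

    def token_entropy(logits):
        # H(pi_theta(.|s)) of Eq. 3, computed from raw logits of shape [..., V].
        log_probs = torch.log_softmax(logits, dim=-1)
        return -(log_probs.exp() * log_probs).sum(dim=-1)

    # A histogram such as Figure 7 then bins these values, e.g.:
    # counts = torch.histc(token_entropy(logits).flatten(), bins=40, min=0.0, max=4.0)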
GRPO drives most tokens toward near-zero entropy. SimKO, however, can preserve token entropy, particularly at "semantic forks", where high entropy is desirable for exploration. This preservation of entropy further confirms SimKO's role in promoting exploration.

Method          K=1   K=2   K=4   K=8   K=16  K=32  K=64  K=128 K=256
-- Qwen2.5-Math-7B --
Base Model      25.8  35.9  45.1  52.8  59.1  64.3  68.8  72.7  76.4
GRPO            41.7  47.9  53.3  58.1  62.6  66.4  69.7  72.9  76.1
PSR             39.5  45.1  49.9  54.3  58.5  62.4  66.0  69.3  72.5
NSR             39.5  47.0  53.6  59.2  64.0  68.2  72.2  76.1  80.3
W-REINFORCE     41.5  48.6  54.4  59.5  64.1  68.4  72.4  76.3  80.2
KL-Cov          42.5  49.5  55.4  60.4  64.8  68.6  72.0  75.5  79.0
P@k T.          39.8  47.7  54.1  59.5  64.2  68.5  72.5  76.3  80.1
Entropy-Adv     42.1  47.7  52.6  56.7  60.3  63.9  67.7  71.9  76.1
SimKO           43.4  50.7  56.7  61.4  65.6  69.4  73.1  76.7  80.5
Δ(SimKO-GRPO)   +1.7  +2.8  +3.4  +3.3  +3.0  +3.0  +3.4  +3.8  +4.4
-- Qwen2.5-7B --
Base Model      26.6  35.0  42.7  49.5  55.6  61.1  66.2  71.3  76.4
GRPO            38.4  44.4  49.8  54.5  58.7  62.5  66.0  69.3  72.3
PSR             36.2  41.6  46.5  51.2  55.7  59.8  63.4  66.9  70.7
NSR             35.2  42.1  48.2  53.7  58.4  62.5  66.3  69.8  72.8
W-REINFORCE     35.9  42.7  48.8  54.0  58.8  63.3  67.3  71.1  75.2
SimKO           38.9  45.5  50.8  55.2  59.2  63.1  67.0  70.8  74.3
Δ(SimKO-GRPO)   +0.5  +1.1  +1.0  +0.7  +0.5  +0.6  +1.0  +1.5  +2.0
-- Llama3.2-3B-Instruct --
Base Model      14.2  20.7  28.0  35.6  43.1  50.2  57.2  63.6  68.9
GRPO            23.3  29.4  35.7  41.4  46.9  52.4  57.9  63.7  69.5
PSR             20.6  26.1  32.0  37.9  43.9  49.8  55.8  61.8  67.8
NSR             22.5  29.1  35.6  41.6  47.4  53.1  58.6  64.1  69.7
W-REINFORCE     22.4  28.8  34.9  40.8  46.4  52.0  57.5  63.1  68.1
SimKO           24.0  30.3  36.4  42.0  47.6  53.3  58.9  64.8  70.8
Δ(SimKO-GRPO)   +0.7  +0.9  +0.7  +0.6  +0.7  +0.9  +1.0  +1.2  +1.3

Table 1: Average pass@K results (K from 1 to 256) for Qwen2.5-Math-7B, Qwen2.5-7B, and Llama3.2-3B-Instruct on the MATH500, AIME 2024/25, Minerva Math, OlympiadBench, and AMC23 datasets.

5.3 MAIN RESULTS ON MATH BENCHMARKS

We evaluate SimKO on six widely used math benchmarks across different model backbones. Table 1 reports average pass@K results for K ranging from 1 to 256. Detailed results on the separate benchmarks are provided in Appendix Table 4.

Compared to the base models, SimKO significantly improves the pass@1 score by 17.6% on Qwen2.5-Math-7B and 9.8% on Llama3.2-3B-Instruct, indicating improved exploitation. At the same time, it also boosts the pass@256 score by 4.1% and 1.9% on the same backbones respectively, demonstrating improved exploration and overall reasoning quality. Although the base model of Qwen2.5-7B achieves the highest pass@256 score (76.4% vs. 74.3% from SimKO), its pass@1 performance is notably low (26.6% vs. 38.9% from SimKO), indicating an imbalance between exploration and exploitation.

Compared to GRPO and its variants (KL-Cov, Entropy-Adv and P@k T.), SimKO consistently outperforms them across all model backbones and values of K. More importantly, SimKO delivers these gains without sacrificing exploration (pass@256) and with even stronger exploitation (pass@1). Relative to GRPO, SimKO improves pass@256 by 4.4%, 2.0%, and 1.3% on Qwen2.5-Math-7B, Qwen2.5-7B, and Llama3.2-3B-Instruct, respectively, while also achieving higher pass@1.
Although P@k T. achieves a higher pass@256 (80.1%), its pass@1 performance drops to 39.8%, indicating that it fails to balance exploration and exploitation effectively.

For NSR and W-REINFORCE, strong pass@256 performance is maintained, but often at the expense of much lower pass@1. In contrast, SimKO achieves a better balance on most backbones. On Qwen2.5-Math-7B, SimKO reaches a slightly higher pass@256 score (80.5% vs. 80.3% for NSR and 80.2% for W-REINFORCE) while clearly outperforming both in pass@1 (43.4% vs. 39.5% and 41.5%). A similar trend is observed on Llama3.2-3B-Instruct, where SimKO improves both pass@256 (70.8% vs. 69.7% and 68.1%) and pass@1 (24.0% vs. 22.5% and 22.4%). For Qwen2.5-7B, however, the trade-offs differ: SimKO outperforms NSR on pass@256 (74.3% vs. 72.8%) but lags slightly behind W-REINFORCE (75.2%), while on pass@1 it significantly surpasses both baselines (38.9% vs. 35.2% for NSR and 35.9% for W-REINFORCE).

These results support our hypothesis that alleviating probability over-concentration (Figure 6) improves pass@K performance, indicating a better balance between exploitation and exploration.

5.4 GENERALIZATION TO LOGICAL TASKS

We evaluate SimKO's generalization ability on two logic reasoning benchmarks, as shown in Table 2. These benchmarks cover two scenarios: (1) Synlogic, an in-distribution task where the base model performs poorly in pass@K on both the training and test datasets, which come from the same distribution (Synlogic-easy), and (2) BBH, an out-of-distribution task where the base model performs better in pass@K, but the test data differs from the training data distribution.

                Synlogic                                        BBH
Method          K=1   K=2   K=4   K=8   K=16  K=32  K=64  K=128  K=1   K=2   K=4   K=8   K=16  K=32  K=64  K=128
Base Model      3.1   4.9   7.5   11.1  15.4  20.0  24.5  28.7   42.4  59.3  74.4  84.6  89.9  92.4  93.6  94.2
GRPO            35.3  38.2  40.8  42.9  44.7  46.3  47.9  49.4   56.4  64.6  71.3  76.6  80.7  83.9  86.3  88.2
PSR             27.3  29.3  31.7  34.3  36.9  39.4  41.6  43.6   54.9  62.7  68.8  73.4  76.8  79.4  81.4  82.8
NSR             1.1   1.9   3.2   5.3   8.5   12.5  17.2  21.6   26.1  41.6  59.3  74.9  85.3  90.5  92.7  93.5
W-REINFORCE     0.8   1.3   1.8   2.4   2.9   3.4   3.8   3.9    15.4  21.9  27.8  32.3  35.4  37.2  38.3  38.8
SimKO           34.7  38.4  42.0  45.5  48.5  51.0  53.2  55.0   58.4  69.5  77.7  83.2  86.8  89.2  90.9  92.0

Table 2: Pass@K results for Qwen2.5-7B on the Synlogic and BBH datasets.

On Synlogic, SimKO significantly outperforms the base model, with a +31.6% gain in pass@1 and +26.3% at pass@128. Methods like GRPO and PSR show improvements but lag behind SimKO by 4.6% and 11.4% at pass@128. NSR and W-REINFORCE, however, fail to train effectively, with pass@1 scores of only 1.1% and 0.8%. Similar observations can also be made on the BBH dataset. On BBH, SimKO boosts the base model's pass@1 to 58.4% (+16.0%), and maintains stability at higher sampling rates, with just a 2.2% decrease in pass@128. GRPO and PSR, by comparison, drop 6.0% and 11.4% at pass@128 compared to the base model, showing difficulties in sustaining performance. NSR and W-REINFORCE perform poorly, achieving only 26.1% and 15.4% at pass@1.

These results demonstrate that relying solely on negative samples is insufficient to improve pass@K on challenging tasks. In contrast, SimKO exhibits strong generalization, effectively trains on difficult tasks, and improves pass@K performance by mitigating probability over-concentration.

5.5 ABLATION STUDIES

We conduct an in-depth analysis of how various parameters, such as τ, α, and K, as well as the impacts of γ^pos and γ^neg, affect SimKO.
The full ablation results are summarized in Table 3, and the performance variations with respect to τ, α and K are shown in Figure 8.

Method       AIME24      AIME25      AMC         MATH500     Minerva     Olympiad    Avg.
Base Model   13.2/66.0   5.4/51.8    38.2/98.5   55.8/96.0   16.5/68.8   25.6/77.0   25.8/76.4
GRPO         28.1/72.3   11.5/52.1   61.2/97.1   76.6/96.2   33.4/64.0   39.1/74.7   41.7/76.1
α=0.02       31.1/79.9   13.3/58.9   62.6/99.3   77.6/97.0   35.2/66.2   39.4/76.9   43.2/79.7
α=0.03       30.9/72.7   12.4/64.6   62.4/97.5   77.2/97.6   34.9/66.9   38.9/76.9   42.8/79.4
α=0.05       30.8/78.7   12.2/67.8   63.0/97.5   77.2/96.8   34.8/65.1   39.3/76.3   42.9/80.4
α=0.1        29.1/75.5   12.4/65.0   61.8/99.9   77.2/96.8   35.2/67.6   38.9/77.8   42.4/80.4
τ(0)         10.7/74.1   6.2/54.0    51.7/96.7   70.5/95.8   32.2/67.3   33.5/74.5   34.1/77.1
τ(40)        20.8/70.9   7.2/57.5    58.3/94.6   74.9/95.2   33.8/68.0   36.4/75.0   38.6/76.9
τ(60)        27.5/77.6   10.0/56.0   62.2/99.9   76.5/96.8   35.3/68.8   38.2/76.4   41.6/79.3
K=1          29.5/83.7   12.2/55.1   62.1/97.1   77.1/96.8   35.1/65.8   39.5/75.9   42.6/79.1
K=2          30.3/78.4   12.2/57.9   62.5/99.6   77.2/96.4   34.9/65.8   39.5/76.9   42.8/79.2
K=4          32.8/84.6   12.5/54.6   62.6/97.5   77.7/97.2   35.3/68.4   39.2/78.4   43.4/80.1
K=5          31.3/78.5   11.7/58.0   62.2/99.6   77.5/96.2   35.7/68.0   39.0/76.3   42.9/79.4
w/o γneg     31.5/80.7   11.5/57.9   62.7/97.4   77.1/96.2   34.1/65.8   39.0/75.6   42.8/78.9
w/o γpos     30.4/75.2   12.7/64.9   62.3/99.6   77.4/96.4   34.7/65.8   39.5/77.3   42.8/79.9
SimKO        32.8/78.0   12.9/64.6   62.4/97.5   77.6/96.8   35.0/68.4   39.8/77.8   43.4/80.5

Table 3: Ablations on α, τ, K, γ^pos, and γ^neg. Pass@1/Pass@256 scores are evaluated using Qwen2.5-Math-7B.

[Figure 8: three panels plotting pass@1 (left axis, 35-50) and pass@256 (right axis, 65-80) as functions of α, τ, and K, respectively.]

Figure 8: Ablations on α, τ and K. Pass@1 and pass@256 scores are evaluated using the Qwen2.5-Math-7B backbone on math benchmarks.

Specifically, we evaluate α values from 0 to 0.1, with α = 0 representing the performance of GRPO. Increasing α results in a monotonic improvement in pass@256, with gains ranging from 3.3% to 4.4% compared to GRPO. In contrast, pass@1 performance peaks at α = 0.01 and then slightly degrades, though it remains superior to GRPO. For τ, we test values from 0 to 100, with τ = 100 corresponding to GRPO. Notably, SimKO outperforms GRPO across all τ values in pass@256. However, when τ = 0, where SimKO is applied to all tokens, pass@1 drops significantly, by 9.3% compared to applying SimKO only at "forking" tokens. This indicates that restricting SimKO to the "semantic forks" of the task is essential for maintaining optimal performance. For K, we test values from 1 to 5. As K increases, both pass@256 and pass@1 show an initial increase followed by a decrease. This trend suggests that restricting optimization to a small subset of the most probable tokens is sufficient. Specifically, pass@256 fluctuates between 79.1% and 80.5%, while pass@1 fluctuates between 42.6% and 43.4%, both outperforming GRPO. Additionally, as shown in Table 3, applying SimKO exclusively to either correct or incorrect examples leads to a drop in pass@K performance. This highlights the importance of asymmetric regularization applied to both correct and incorrect examples, as it yields the best results.
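For reference, the unbiased pass@K estimator used throughout Section 5 (defined in Section 5.1) can be computed per problem as follows; this is a minimal sketch, and the function name and the use of math.comb are our choices rather than the authors' evaluation code.

    from math import comb

    def pass_at_k(n, c, k):
        # Unbiased pass@K: 1 - C(n - c, k) / C(n, k), where n responses were
        # sampled and c of them are correct. If fewer than k responses are
        # incorrect, every size-k draw must contain a correct one.
        if n - c < k:
            return 1.0
        return 1.0 - comb(n - c, k) / comb(n, k)

    # Example: with n = 256 samples and c = 32 correct, pass_at_k(256, 32, 8) ≈ 0.66.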
6 CONCLUDING REMARKS

This paper addresses a key limitation in current RLVR methods, where pass@K performance drops due to reduced output diversity. Through analyzing token-level posterior probabilities, we find that RLVR training causes the model to overly concentrate probability mass on the top-1 candidate, leading to a deterministic policy and limited exploration. To overcome this issue, we propose Simple Pass@K Optimization (SimKO), a method that mitigates this effect by redistributing gradient updates across the top-K candidates. Extensive evaluations demonstrate that SimKO effectively preserves output diversity and consistently outperforms the GRPO baseline on both pass@1 and pass@256 metrics. These results highlight that SimKO achieves a superior balance between exploitation and exploration, thereby enhancing the model's overall reasoning capabilities.

REFERENCES

Madhu S Advani, Andrew M Saxe, and Haim Sompolinsky. High-dimensional dynamics of generalization error in neural networks. Neural Networks, 132:428–446, 2020.

AI@Meta. Llama 3 model card. 2024. URL https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md.

Zhipeng Chen, Xiaobo Qin, Youbin Wu, Yue Ling, Qinghao Ye, Wayne Xin Zhao, and Guang Shi. Pass@k training for adaptively balancing exploration and exploitation of large reasoning models. arXiv preprint arXiv:2508.10751, 2025a.

Zhipeng Chen, Xiaobo Qin, Youbin Wu, Yue Ling, Qinghao Ye, Wayne Xin Zhao, and Guang Shi. Pass@k training for adaptively balancing exploration and exploitation of large reasoning models. arXiv preprint arXiv:2508.10751, 2025b.

Daixuan Cheng, Shaohan Huang, Xuekai Zhu, Bo Dai, Wayne Xin Zhao, Zhenliang Zhang, and Furu Wei. Reasoning with exploration: An entropy perspective. arXiv preprint arXiv:2506.14758, 2025.

Tianzhe Chu, Yuexiang Zhai, Jihan Yang, Shengbang Tong, Saining Xie, Dale Schuurmans, Quoc V Le, Sergey Levine, and Yi Ma. Sft memorizes, rl generalizes: A comparative study of foundation model post-training. arXiv preprint arXiv:2501.17161, 2025.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.

Ganqu Cui, Yuchen Zhang, Jiacheng Chen, Lifan Yuan, Zhi Wang, Yuxin Zuo, Haozhan Li, Yuchen Fan, Huayu Chen, Weize Chen, et al. The entropy mechanism of reinforcement learning for reasoning language models. arXiv preprint arXiv:2505.22617, 2025.

DeepSeek-AI. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning, 2025. URL https://arxiv.org/abs/2501.12948.

Wenlong Deng, Yi Ren, Muchen Li, Danica J Sutherland, Xiaoxiao Li, and Christos Thrampoulidis. On the effect of negative gradient in group relative deep reinforcement optimization. arXiv preprint arXiv:2505.18830, 2025a.

Wenlong Deng, Yi Ren, Yushu Li, Boying Gong, Danica J Sutherland, Xiaoxiao Li, and Christos Thrampoulidis. Token hidden reward: Steering exploration-exploitation in group relative deep reinforcement learning. arXiv preprint arXiv:2510.03669, 2025b.

Yihong Dong, Xue Jiang, Yongding Tao, Huanyu Liu, Kechi Zhang, Lili Mou, Rongyu Cao, Yingwei Ma, Jue Chen, Binhua Li, et al. Rl-plus: Countering capability boundary collapse of llms in reinforcement learning with hybrid-policy optimization. arXiv preprint arXiv:2508.00222, 2025.

Angela Fan, Mike Lewis, and Yann Dauphin.
Hierarchical neural story generation. arXiv preprint arXiv:1805.04833, 2018.

Andre He, Daniel Fried, and Sean Welleck. Rewarding the unlikely: Lifting grpo beyond distribution sharpening. arXiv preprint arXiv:2506.02355, 2025a.

Chaoqun He, Renjie Luo, Yuzhuo Bai, Shengding Hu, Zhen Leng Thai, Junhao Shen, Jinyi Hu, Xu Han, Yujie Huang, Yuxiang Zhang, et al. Olympiadbench: A challenging benchmark for promoting agi with olympiad-level bilingual multimodal scientific problems. arXiv preprint arXiv:2402.14008, 2024.

Jujie He, Jiacai Liu, Chris Yuhao Liu, Rui Yan, Chaojie Wang, Peng Cheng, Xiaoyu Zhang, Fuxiang Zhang, Jiacheng Xu, Wei Shen, et al. Skywork open reasoner 1 technical report. arXiv preprint arXiv:2505.22312, 2025b.

Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. arXiv preprint arXiv:2103.03874, 2021.

Zhenyu Hou, Xin Lv, Rui Lu, Jiajie Zhang, Yujiang Li, Zijun Yao, Juanzi Li, Jie Tang, and Yuxiao Dong. Advancing language model reasoning through reinforcement learning and inference scaling. arXiv preprint arXiv:2501.11651, 2025.

Jian Hu. Reinforce++: A simple and efficient approach for aligning large language models. arXiv preprint arXiv:2501.03262, 2025.

Jingcheng Hu, Yinmin Zhang, Qi Han, Daxin Jiang, Xiangyu Zhang, and Heung-Yeung Shum. Open-reasoner-zero: An open source approach to scaling up reinforcement learning on the base model, 2025. URL https://arxiv.org/abs/2503.24290.

Hugging Face. Open r1: A fully open reproduction of deepseek-r1, January 2025. URL https://github.com/huggingface/open-r1.

Yuxian Jiang, Yafu Li, Guanxu Chen, Dongrui Liu, Yu Cheng, and Jing Shao. Rethinking entropy regularization in large reasoning models. arXiv preprint arXiv:2509.25133, 2025.

Katie Kang, Amrith Setlur, Dibya Ghosh, Jacob Steinhardt, Claire Tomlin, Sergey Levine, and Aviral Kumar. What do learning dynamics reveal about generalization in llm reasoning? arXiv preprint arXiv:2411.07681, 2024.

Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, et al. Solving quantitative reasoning problems with language models. Advances in Neural Information Processing Systems, 35:3843–3857, 2022.

Jiazheng Li, Hong Lu, Kaiyue Wen, Zaiwen Yang, Jiaxuan Gao, Hongzhou Lin, Yi Wu, and Jingzhao Zhang. Questa: Expanding reasoning capacity in llms via question augmentation. arXiv preprint arXiv:2507.13266, 2025.

Xiao Liang, Zhongzhi Li, Yeyun Gong, Yelong Shen, Ying Nian Wu, Zhijiang Guo, and Weizhu Chen. Beyond pass@1: Self-play with variational problem synthesis sustains rlvr. arXiv preprint arXiv:2508.14029, 2025.

Junteng Liu, Yuanxiang Fan, Zhuo Jiang, Han Ding, Yongyi Hu, Chi Zhang, Yiqi Shi, Shitong Weng, Aili Chen, Shiqi Chen, et al. Synlogic: Synthesizing verifiable reasoning data at scale for learning logical reasoning and beyond. arXiv preprint arXiv:2505.19641, 2025a.

Zichen Liu, Changyu Chen, Wenjun Li, Penghui Qi, Tianyu Pang, Chao Du, Wee Sun Lee, and Min Lin. Understanding r1-zero-like training: A critical perspective. arXiv preprint arXiv:2503.20783, 2025b.

Rafael Müller, Simon Kornblith, and Geoffrey E Hinton. When does label smoothing help? Advances in Neural Information Processing Systems, 32, 2019.
Arka Pal, Deep Karkhanis, Samuel Dooley, Manley Roberts, Siddartha Naidu, and Colin White. Smaug: Fixing failure modes of preference optimisation with dpo-positive. arXiv preprint arXiv:2402.13228, 2024.

Noam Razin, Sadhika Malladi, Adithya Bhaskar, Danqi Chen, Sanjeev Arora, and Boris Hanin. Unintentional unalignment: Likelihood displacement in direct preference optimization. In The Thirteenth International Conference on Learning Representations, 2025.

Yi Ren. Learning dynamics of deep learning–force analysis of deep neural networks. arXiv preprint arXiv:2509.19554, 2025.

Yi Ren and Danica J. Sutherland. Learning dynamics of LLM finetuning. In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=tPNHOoZFl9.

Andrew M Saxe, James L McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120, 2013.

John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.

Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Yang Wu, et al. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300, 2024.

Yuda Song, Julia Kempe, and Remi Munos. Outcome-based exploration for llm reasoning. arXiv preprint arXiv:2509.06941, 2025.

Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V Le, Ed H Chi, Denny Zhou, et al. Challenging big-bench tasks and whether chain-of-thought can solve them. arXiv preprint arXiv:2210.09261, 2022.

Remi Tachet, Mohammad Pezeshki, Samira Shabanian, Aaron Courville, and Yoshua Bengio. On the learning dynamics of deep neural networks. arXiv preprint arXiv:1809.06848, 2018.

Qwen Team. Qwen2.5: A party of foundation models, September 2024. URL https://qwenlm.github.io/blog/qwen2.5/.

Christian Walder and Deep Karkhanis. Pass@k policy optimization: Solving harder reinforcement learning problems. arXiv preprint arXiv:2505.15201, 2025.

Shenzhi Wang, Le Yu, Chang Gao, Chujie Zheng, Shixuan Liu, Rui Lu, Kai Dang, Xionghui Chen, Jianxin Yang, Zhenru Zhang, et al. Beyond the 80/20 rule: High-entropy minority tokens drive effective reinforcement learning for llm reasoning. arXiv preprint arXiv:2506.01939, 2025.

Fang Wu, Weihao Xuan, Ximing Lu, Zaid Harchaoui, and Yejin Choi. The invisible leash: Why rlvr may not escape its origin. arXiv preprint arXiv:2507.14843, 2025.

An Yang, Beichen Zhang, Binyuan Hui, Bofei Gao, Bowen Yu, Chengpeng Li, Dayiheng Liu, Jianhong Tu, Jingren Zhou, Junyang Lin, et al. Qwen2.5-math technical report: Toward mathematical expert model via self-improvement. arXiv preprint arXiv:2409.12122, 2024.

Zhicheng Yang, Zhijiang Guo, Yinya Huang, Yongxin Wang, Dongchun Xie, Yiwei Wang, Xiaodan Liang, and Jing Tang. Depth-breadth synergy in rlvr: Unlocking llm reasoning gains with adaptive exploration. arXiv preprint arXiv:2508.13755, 2025.

Qiying Yu, Zheng Zhang, Ruofei Zhu, Yufeng Yuan, Xiaochen Zuo, Yu Yue, Weinan Dai, Tiantian Fan, Gaohong Liu, Lingjun Liu, et al. Dapo: An open-source llm reinforcement learning system at scale. arXiv preprint arXiv:2503.14476, 2025.

Yang Yue, Zhiqi Chen, Rui Lu, Andrew Zhao, Zhaokai Wang, Shiji Song, and Gao Huang.
Does reinforcement learning really incentivize reasoning capacity in llms beyond the base model? arXiv preprint arXiv:2504.13837, 2025.

Weihao Zeng, Yuzhen Huang, Qian Liu, Wei Liu, Keqing He, Zejun Ma, and Junxian He. Simplerl-zoo: Investigating and taming zero reinforcement learning for open base models in the wild. In Second Conference on Language Modeling, 2025.

Chujie Zheng, Shixuan Liu, Mingze Li, Xiong-Hui Chen, Bowen Yu, Chang Gao, Kai Dang, Yuqiong Liu, Rui Men, An Yang, et al. Group sequence policy optimization. arXiv preprint arXiv:2507.18071, 2025.

Xinyu Zhu, Mengzhou Xia, Zhepei Wei, Wei-Lin Chen, Danqi Chen, and Yu Meng. The surprising effectiveness of negative reinforcement in llm reasoning. arXiv preprint arXiv:2506.01347, 2025.

Appendix Table of Contents

A Related Work
  A.1 Reinforcement Learning with Verifiable Rewards in LLMs
  A.2 Effective Exploration for RLVR in LLMs
  A.3 Analysis of Learning Dynamics in LLMs
B More Details about the Design of γ^pos_{i,l}
C Additional Analysis of Distribution Changes During Training
  C.1 Over-Concentrated Distribution across Multiple Models
  C.2 Top-6 Candidates Probabilities Distribution
D Experimental Details
  D.1 Training Detail
  D.2 More Experiment Result

A RELATED WORK

A.1 REINFORCEMENT LEARNING WITH VERIFIABLE REWARDS IN LLMS

Reinforcement learning with verifiable rewards (RLVR) for large language models (LLMs) has demonstrated significant potential (DeepSeek-AI, 2025; Hugging Face, 2025; Zeng et al., 2025; He et al., 2025b), especially when directly applied to a base model using GRPO (Shao et al., 2024) for RL training. This approach has notably enhanced the base model's performance, particularly in improving its reasoning abilities for mathematical and coding tasks. Subsequent works have focused on improving GRPO to further enhance the algorithm's performance. For instance, DAPO (Yu et al., 2025) adjusts GRPO's clipping thresholds and removes KL regularization to encourage larger updates on correct answers. Dr.GRPO (Liu et al., 2025b) eliminates the normalization term when computing advantages to prevent length bias. GSPO (Zheng et al., 2025) modifies the importance sampling from the token level to the sequence level, which proves to be more stable in training mixture-of-experts (MoE) models. These modifications have contributed to improvements in the model's pass@1 performance, but they have not specifically addressed pass@K performance, which relates to the model's exploration ability.

A.2 EFFECTIVE EXPLORATION FOR RLVR IN LLMS

A central challenge in RLVR tasks lies in moving beyond the exploitation of a pretrained model's implicit knowledge to actively exploring diverse reasoning paths. Current methods tend to converge on a limited set of solutions, as evidenced by poor performance on the pass@K metric, which evaluates the coverage of multiple reasoning paths and thus reflects exploration effectiveness (Yue et al., 2025; Wu et al., 2025). To address this exploration deficit, the community has pursued several strategies.
Data-centric methods use data augmentation to enhance the model's exposure to diverse reasoning environments, thereby encouraging the exploration of a broader range of solution paths. One such approach involves using off-policy data from more capable models to expand the model's knowledge and promote solution diversity (Dong et al., 2025; Li et al., 2025). Additional strategies include generating varied responses for challenging samples (Yang et al., 2025) or paraphrasing questions to stimulate different reasoning trajectories for the same problem (Liang et al., 2025). As a complementary approach, reward-centric methods redesign the objective function to directly incentivize diversity by calculating a group-wise reward based on a set of candidate solutions, providing an unbiased gradient for optimizing pass@K (Walder & Karkhanis, 2025; Chen et al., 2025b). While these methods are effective to some extent, both treat the model as a black box, manipulating its inputs and final supervisory signals without understanding the internal mechanisms driving exploration.

To address this limitation, a more recent line of work has shifted focus inward. Entropy-based methods use entropy as a proxy for exploration control (Cui et al., 2025; Cheng et al., 2025; Wang et al., 2025; Hou et al., 2025; Hu et al., 2025), but policy entropy is a rough measure that does not provide fine-grained insight into the model's exploration behavior. More recently, Jiang et al. (2025) improved the coarse entropy metric by considering only candidates with cumulative probabilities exceeding a threshold p. However, it still lacks a detailed analysis of how the training dynamics of the model's predicted candidates affect exploration. This highlights the need for a mechanism that directly monitors changes in the distribution of next-token predictions during training, which is the central focus of our work. Another relevant work to ours is Token-Hidden Reward (THR) (Deng et al., 2025b), which restricts confidence increases for positive tokens to leave more probability mass for alternative reasoning paths. However, it only addresses the prevention of excessive confidence for correct tokens; the influence of negative tokens is embedded in the definition of THR. Additionally, it introduces the need for inner-product calculations between token embeddings, which increases computational cost.

A.3 ANALYSIS OF LEARNING DYNAMICS IN LLMS

Analyzing the learning dynamics of deep neural networks provides valuable insight into how training shapes model behavior (Saxe et al., 2013; Tachet et al., 2018; Advani et al., 2020). This analytical perspective has recently been extended to Large Language Models (LLMs), where prior work has widely examined the dynamics of supervised fine-tuning (SFT) (Kang et al., 2024; Chu et al., 2025), off-policy preference optimization methods such as DPO (Razin et al., 2025; Pal et al., 2024), or both (Ren & Sutherland, 2025; Ren, 2025).

Several recent studies have begun exploring the learning dynamics of on-policy RL. Cui et al. (2025) adopt entropy-based metrics to track model changes during training. However, such metrics provide only an indirect signal by averaging over the entire vocabulary, thereby failing to capture meaningful shifts among high-probability candidates. In contrast, Deng et al. (2025a) examine probability shifts induced by individual gradient updates to analyze inter-sample effects.
While these analyses offer valuable fine-grained insights into probability changes, they fail to capture the cumulative evolution of the model's policy. To overcome these limitations, we propose a top-K probability dynamics framework that directly tracks how probability mass redistributes among the most likely candidates throughout training. This approach provides a scalable and interpretable lens for understanding how on-policy RL shapes model behavior.

B MORE DETAILS ABOUT THE DESIGN OF γ^pos_{i,l}

This appendix provides more details about how we design Equation (6),

\[
\gamma^{\mathrm{pos}}_{i,l} = (1-\alpha)\, \gamma_{i,l} + \frac{\alpha}{|\mathcal{I}_{\mathrm{top}K}|} \sum_{k \in \mathcal{I}_{\mathrm{top}K}} \operatorname{sg}\!\left( \frac{\gamma_{i,l}}{\gamma^{(k)}_{i,l}} \right) \gamma^{(k)}_{i,l}, \qquad \alpha \in [0, 1],
\]

based on the smoothed $\tilde{G}(i, l)$ term in Equation (5):

\[
\tilde{G}(i, l) = \pi_\theta(\cdot \mid s_{i,l}) - \tilde{\mathbf{e}}_{\mathrm{top}K} = \pi_\theta(\cdot \mid s_{i,l}) - \Big( (1-\alpha)\,\mathbf{e}_{y_{i,l}} + \frac{\alpha}{K} \sum_{k \in \mathcal{I}_{\mathrm{top}K}} \mathbf{e}_k \Big).
\]

Following the AKG decomposition in Ren & Sutherland (2025), we know

\[
\nabla_\theta \gamma_{i,l} = \underbrace{A_{i,l} \cdot \operatorname{sg}(\gamma_{i,l})}_{\text{constant}} \cdot \underbrace{\nabla_{\mathbf{z}} \log \pi_\theta(y_{i,l} \mid s_{i,l})}_{\text{defined as } G(i,l)} = A_{i,l} \cdot \operatorname{sg}(\gamma_{i,l}) \cdot \nabla_\theta \mathbf{z}\; G(i, l).
\]

Now we replace the $G(i, l)$ term with $\tilde{G}(i, l)$; since $A_{i,l} \cdot \operatorname{sg}(\gamma_{i,l})$ is a constant w.r.t. $\theta$, the gradient part becomes:

\[
\begin{aligned}
\nabla_\theta \mathbf{z}\; \tilde{G}(i, l) &= \nabla_\theta \mathbf{z} \left[ \pi_\theta(\cdot \mid s_{i,l}) - \Big( (1-\alpha)\,\mathbf{e}_{y_{i,l}} + \frac{\alpha}{K} \sum_{k \in \mathcal{I}_{\mathrm{top}K}} \mathbf{e}_k \Big) \right] \\
&= \nabla_\theta \mathbf{z} \left[ (1-\alpha)\,\pi_\theta(\cdot \mid s_{i,l}) + \alpha\,\pi_\theta(\cdot \mid s_{i,l}) - \Big( (1-\alpha)\,\mathbf{e}_{y_{i,l}} + \frac{\alpha}{K} \sum_{k \in \mathcal{I}_{\mathrm{top}K}} \mathbf{e}_k \Big) \right] \\
&= \nabla_\theta \mathbf{z} \left[ (1-\alpha)\big(\pi_\theta(\cdot \mid s_{i,l}) - \mathbf{e}_{y_{i,l}}\big) + \frac{\alpha}{K} \sum_{k \in \mathcal{I}_{\mathrm{top}K}} \big(\pi_\theta(\cdot \mid s_{i,l}) - \mathbf{e}_k\big) \right] \\
&= (1-\alpha)\, \nabla_\theta \mathbf{z}\; G(i, y) + \frac{\alpha}{K} \sum_{k \in \mathcal{I}_{\mathrm{top}K}} \nabla_\theta \mathbf{z}\; G(i, k).
\end{aligned}
\tag{7}
\]

From the equation above, we know that if we change $\mathbf{e}_{y_{i,l}}$ in the original $\gamma$ to $\tilde{\mathbf{e}}_{\mathrm{top}K}$, the gradient of the new loss simplifies to the combination above. Note that $A_{i,l} \cdot \operatorname{sg}(\gamma_{i,l}) \cdot \nabla_\theta \mathbf{z}\; G(i, k)$ is exactly the decomposition of $\gamma^{(k)}_{i,l} = \pi_\theta(y^{(k)}_{i,l} \mid s_{i,l}) / \pi_{\theta_{\mathrm{ref}}}(y^{(k)}_{i,l} \mid s_{i,l})$, i.e., of updating the model using $y^{(k)}_{i,l}$. In other words, our new loss might have a form like

\[
(1-\alpha)\, \gamma_{i,l} + \frac{\alpha}{K} \sum_{k \in \mathcal{I}_{\mathrm{top}K}} \gamma^{(k)}_{i,l}.
\]

However, this combination would make the new $\gamma$ a biased estimator (the RL theoretical guarantee needs a correct importance sampling). We therefore use the following stop-gradient trick to fix this. Specifically, by multiplying the second term in Equation (7) by $\operatorname{sg}\big( \gamma_{i,l} / \gamma^{(k)}_{i,l} \big)$, we can ensure that $\operatorname{sg}(\gamma^{\mathrm{pos}}_{i,l}) = \operatorname{sg}(\gamma_{i,l})$. This design is only one line of code, using .detach() in PyTorch.

C ADDITIONAL ANALYSIS OF DISTRIBUTION CHANGES DURING TRAINING

C.1 OVER-CONCENTRATED DISTRIBUTION ACROSS MULTIPLE MODELS

In Figure 9, we present the training dynamics for all three models, which validate our findings in Section 3 across multiple models.
[Figure 9, panels (a)-(c): log10 probabilities of Λ, Λ^(1), Λ^(2), Λ^(3) over 100 training steps under GRPO, NSR, and PSR for Llama3.2-3B-Instruct, Qwen2.5-Math-7B, and Qwen2.5-7B; panel (d): the corresponding pass@1 and pass@256 bars for the Base, GRPO, NSR, and PSR models.]

Figure 9: (a)-(c) Training dynamics of the average log probability Λ and top-K probabilities Λ^(k) derived by GRPO, NSR, and PSR. (d) The corresponding pass@1 and pass@K results of the RLVR-trained models. Following the setups of Zhu et al. (2025), we train Llama3.2-3B-Instruct on a mixture of GSM8K and MATH (Level 1) and train Qwen2.5-Math-7B/Qwen2.5-7B on the MATH dataset.

[Figure 10: six histograms (Top-1 through Top-6) of candidate probability vs. proportion for the Base Model, GRPO, and SimKO.]

Figure 10: The probability distribution of each top-K candidate. It shows that the probabilities of candidates from top-2 to top-6 largely fall within the lowest probability range, indicating that monitoring the top-K candidates' probability distribution is sufficient.

C.2 TOP-6 CANDIDATES PROBABILITIES DISTRIBUTION

Figure 10 illustrates the probability distributions of the top-6 candidates. The rank-1 candidate distribution for all models is concentrated in the highest probability bin, with the GRPO-trained model showing a more pronounced concentration, where over 90% of the candidates are focused on the highest probability. In contrast, the rank-2 to rank-6 candidates are concentrated in the lowest probability bin, with over 95% of the probability for the rank-6 candidate across all models being below 0.05. This concentration of probability mass in the top-K candidates suggests that their collective distribution serves as a sufficient proxy for the full vocabulary distribution of the model.

D EXPERIMENTAL DETAILS

D.1 TRAINING DETAIL

All models are trained with a learning rate of 10^-6, a batch size of 1024, and a PPO mini-batch size of 256. Each input problem is sampled with 8 responses using temperature 1.0. We set α in SimKO to 0.01, and define the entropy threshold τ(q) as the q-quantile of the token-level entropy distribution, such that a fraction q of all tokens have entropy values lower than τ(q). Unless otherwise specified, we use τ(0.8) in our experiments. We also set λ_top1 = 1.1, except for Qwen2.5-7B where we use λ_top1 = 1.05. For the logic task, we apply a warm-up of 50 steps and use a smaller α = 0.005 along with λ_top1 = 1.05.

D.2 MORE EXPERIMENT RESULT

In this section, we present more detailed experimental results, as shown in Table 4.
Method               AIME24      AIME25      AMC23        MATH500     Minerva     Olympiad    Avg.
-- Qwen2.5-Math-7B --
Base Model           13.2/66.0   5.4/51.8    38.2/98.5    55.8/96.0   16.5/68.8   25.6/77.0   25.8/76.4
GRPO                 28.1/72.3   11.5/52.1   61.2/97.1    76.6/96.2   33.4/64.0   39.1/74.7   41.7/76.1
PSR                  19.3/68.5   11.2/48.9   62.1/94.9    74.0/91.4   32.8/63.6   37.6/67.7   39.5/72.5
NSR                  22.8/80.3   9.7/61.2    59.4/100.0   74.6/97.0   32.9/65.1   37.8/78.4   39.5/80.3
W-REINFORCE          29.2/86.5   10.8/55.7   61.1/97.4    76.4/96.4   33.4/67.6   38.1/77.6   41.5/80.2
KL-Cov               30.9/81.2   11.7/55.2   62.2/97.4    76.5/97.0   34.4/66.2   39.2/76.9   42.5/79.0
P@k T.               26.7/77.9   10.2/61.3   58.8/97.5    73.3/96.8   33.2/68.8   36.6/78.2   39.8/80.1
GRPO w/ Entropy-Adv  29.1/81.7   10.9/55.0   62.5/92.1    77.1/95.0   33.5/60.7   39.7/71.9   42.1/76.1
SimKO                32.8/78.0   12.9/64.6   62.4/97.5    77.6/96.8   35.0/68.4   39.8/77.8   43.4/80.5
Δ(SimKO-GRPO)        +4.7/+5.7   +1.4/+12.5  +1.2/+0.4    +1.0/+0.6   +1.6/+4.4   +0.7/+3.1   +1.7/+4.4
-- Qwen2.5-7B --
Base Model           7.4/64.0    3.6/48.8    36.1/99.6    61.4/97.2   23.0/70.2   28.1/78.5   26.6/76.4
GRPO                 15.6/59.0   8.8/55.0    56.0/92.5    75.7/95.2   35.7/61.8   38.8/70.4   38.4/72.3
PSR                  14.3/61.8   9.3/54.2    51.6/96.7    73.6/92.0   32.9/52.2   35.6/67.0   36.2/70.7
NSR                  9.9/48.9    7.1/52.3    49.8/97.1    73.9/95.4   33.7/65.8   36.6/77.0   35.2/72.8
W-REINFORCE          11.2/57.5   6.1/54.2    54.3/99.6    73.9/95.8   33.4/66.2   36.6/77.9   35.9/75.2
SimKO                16.3/58.4   9.4/55.2    57.3/97.1    76.7/94.8   35.2/66.2   38.7/74.4   38.9/74.3
Δ(SimKO-GRPO)        +0.7/-0.6   +0.6/+0.2   +1.3/+4.6    +1.0/-0.4   -0.5/+4.4   -0.1/+4.0   +0.5/+2.0
-- Llama3.2-3B-Instruct --
Base Model           3.4/51.7    0.7/46.7    20.3/94.9    37.8/93.6   10.1/59.2   12.7/67.1   14.2/68.9
GRPO                 12.7/55.1   1.1/44.1    32.5/96.7    53.1/91.6   17.3/62.5   20.1/67.0   23.3/69.5
PSR                  7.8/57.4    1.0/35.1    27.2/98.8    50.3/91.0   18.5/61.0   18.9/63.7   20.6/67.8
NSR                  11.1/53.7   1.5/47.4    30.3/94.6    53.3/94.0   19.0/60.3   20.0/68.0   22.5/69.7
W-REINFORCE          13.3/51.7   1.1/42.1    31.4/96.3    52.4/92.8   16.7/59.9   19.6/65.8   22.4/68.1
SimKO                13.8/54.6   1.0/45.4    35.2/98.8    54.6/93.4   18.5/63.2   21.0/69.6   24.0/70.8
Δ(SimKO-GRPO)        +1.1/-0.5   -0.1/+1.3   +2.7/+2.1    +1.5/+1.8   +1.2/+0.7   +0.9/+2.6   +0.7/+1.3

Table 4: Pass@1 / Pass@256 results for Qwen2.5-Math-7B, Qwen2.5-7B, and Llama3.2-3B-Instruct on the MATH500, AIME 2024/25, Minerva Math, OlympiadBench, and AMC23 datasets.
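To collect the Appendix D.1 settings in one place, a hypothetical configuration sketch is given below. The dictionary keys are our own naming, and K is not stated in D.1 (K = 4 is inferred from the main-result row of Table 3).

    # Hypothetical consolidation of the Appendix D.1 hyper-parameters.
    SIMKO_CONFIG = {
        "learning_rate": 1e-6,
        "batch_size": 1024,
        "ppo_mini_batch_size": 256,
        "responses_per_prompt": 8,
        "sampling_temperature": 1.0,
        "alpha": 0.01,              # 0.005 for the logic task
        "entropy_quantile_q": 0.8,  # gate tokens above the tau(q) entropy quantile
        "lambda_top1": 1.1,         # 1.05 for Qwen2.5-7B and the logic task
        "K": 4,                     # not stated in D.1; inferred from Table 3
    }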
6 Technical Report 0 50 100 Training Step -10.0 -7.5 -5.0 -2.5 0.0 Log(10) Probability GRPO Λ Λ(1) Λ(2) Λ(3) -0.8 -0.6 -0.4 -0.2 0.0 0 50 100 Training Step -10.0 -7.5 -5.0 -2.5 0.0 KL-Cov -0.8 -0.6 -0.4 -0.2 0.0 0 50 100 Training Step -10.0 -7.5 -5.0 -2.5 0.0 Entropy Adv -0.8 -0.6 -0.4 -0.2 0.0 0 50 100 Training Step -10.0 -7.5 -5.0 -2.5 0.0 SimKO -0.8 -0.6 -0.4 -0.2 0.0 Log(10) Probability Figure 6: Comparison of SimKO with GRPO, KL-Cov, and Entropy-Adv on Qwen2.5-Math-7B. SimKO effectively controls probability concentration on the Λ(1) while preserving diversity among Λ(2) and Λ(3). 0 1 2 3 4 Entropy 0 2 4 6 # of tokens (Log10) KL-Cov GRPO 0 1 2 3 4 Entropy 0 2 4 6 # of tokens (Log10) Entropy-Adv GRPO 0 1 2 3 4 Entropy 0 2 4 6 # of tokens (Log10) SimKO GRPO Figure 7: Token-level entropy distributions from the Qwen2.5-Math-7B backbone trained with SimKO, GRPO, KL-Cov, and Entropy-Adv, demonstrating SimKO's ability to maintain the entropy of the "forking" tokens. 5 EXPERIMENTS AND RESULTS 5.1 EXPERIMENTAL SETUP Models and Datasets. We experiment with a diverse set of models, including Qwen2.5-7B (Team, 2024), Qwen2.5-Math-7B (Yang et al., 2024), and Llama3.2-3B-Instruct (AI@Meta, 2024). Following the setups in Zhu et al. (2025); Zeng et al. (2025), the Qwen models are trained on the MATH dataset (Hendrycks et al., 2021), while the Llama model is trained on a combined dataset of GSM8K (Cobbe et al., 2021) and MATH (Level 1). For logical reasoning tasks, we train Qwen2.5-7B on the Synlogic-easy dataset (training split). Training details and hypers in Appendix D.1. Evaluation Protocol. We compare SimKO against several competitive baselines, including GRPO, PSR, NSR, W-REINFORCE (Zhu et al., 2025), KL-Cov (Cui et al., 2025), P@k T. (Chen et al., 2025a) and Entropy-Adv (Cheng et al., 2025). Evaluations are conducted on a variety of reasoning benchmarks: MATH-500 (Hendrycks et al., 2021), Minerva Math (Lewkowycz et al., 2022), Olympiad-Bench (He et al., 2024), AMC, AIME, Synlogic-easy (validation split) (Liu et al., 2025a), and BBH (Suzgun et al., 2022). To obtain comprehensive evaluation, we adopt the unbiased pass@K metric with K up to 256, computed as pass@K := Ex∼D 1 - n-c K / n K , where c denotes the number of correct completions out of n generated responses. To reduce evaluation variance on small datasets (e.g., AIME and AMC), we set n = 300; for other math datasets, we use n = 256, and for logic datasets, n = 128. 5.2 EFFECTS OF SIMKO ON TRAINING DYNAMICS We analyze the training dynamics of SimKO in comparison with GRPO, KL-Cov, and Entropy-Adv. Figure 6 presents the changes of top-K log-probabilities (Λ(1), Λ(2), and Λ(3)) across training steps. As can be seen, GRPO leads to severe over-concentration: Λ(1) GRPO increases to nearly 1, while Λ(2) GRPO and Λ(3) GRPO sharply drop below 10-8 and 10-10, respectively. This indicates that nearly all probability mass collapses onto the top-1 token. KL-Cov exhibits a moderate concentration effect due to the KL penalty, while Entropy-Adv collapses even more rapidly, likely because of its stronger emphasis on high-entropy tokens. In contrast, SimKO achieves the most effective deconcentration among all methods. This is evidenced by a lower Λ(1) SimKO and higher Λ(2) SimKO and Λ(3) SimKO. These results suggest that SimKO effectively mitigates probability mass collapse and can potentially encourages exploration during training. To further validate this, we visualize the histogram of token-level entropy in Figure 7. 
GRPO drives most tokens toward near-zero entropy. SimKO, however, can preserve token entropy, particularly 7 Technical Report Method 1 2 4 8 16 32 64 128 256 Qwen2.5-Math-7B Base Model 25.8 35.9 45.1 52.8 59.1 64.3 68.8 72.7 76.4 GRPO 41.7 47.9 53.3 58.1 62.6 66.4 69.7 72.9 76.1 PSR 39.5 45.1 49.9 54.3 58.5 62.4 66.0 69.3 72.5 NSR 39.5 47.0 53.6 59.2 64.0 68.2 72.2 76.1 80.3 W-REINFORCE 41.5 48.6 54.4 59.5 64.1 68.4 72.4 76.3 80.2 KL-Cov 42.5 49.5 55.4 60.4 64.8 68.6 72.0 75.5 79.0 P@k T. 39.8 47.7 54.1 59.5 64.2 68.5 72.5 76.3 80.1 Entropy-Adv 42.1 47.7 52.6 56.7 60.3 63.9 67.7 71.9 76.1 SimKO 43.4 50.7 56.7 61.4 65.6 69.4 73.1 76.7 80.5 ∆(SimKO-GRPO) +1.7 +2.8 +3.4 +3.3 +3.0 +3.0 +3.4 +3.8 +4.4 Qwen2.5-7B Base Model 26.6 35.0 42.7 49.5 55.6 61.1 66.2 71.3 76.4 GRPO 38.4 44.4 49.8 54.5 58.7 62.5 66.0 69.3 72.3 PSR 36.2 41.6 46.5 51.2 55.7 59.8 63.4 66.9 70.7 NSR 35.2 42.1 48.2 53.7 58.4 62.5 66.3 69.8 72.8 W-REINFORCE 35.9 42.7 48.8 54.0 58.8 63.3 67.3 71.1 75.2 SimKO 38.9 45.5 50.8 55.2 59.2 63.1 67.0 70.8 74.3 ∆(SimKO-GRPO) +0.5 +1.1 +1.0 +0.7 +0.5 +0.6 +1.0 +1.5 +2.0 Llama3.2-3B-Instruct Base Model 14.2 20.7 28.0 35.6 43.1 50.2 57.2 63.6 68.9 GRPO 23.3 29.4 35.7 41.4 46.9 52.4 57.9 63.7 69.5 PSR 20.6 26.1 32.0 37.9 43.9 49.8 55.8 61.8 67.8 NSR 22.5 29.1 35.6 41.6 47.4 53.1 58.6 64.1 69.7 W-REINFORCE 22.4 28.8 34.9 40.8 46.4 52.0 57.5 63.1 68.1 SimKO 24.0 30.3 36.4 42.0 47.6 53.3 58.9 64.8 70.8 ∆(SimKO-GRPO) +0.7 +0.9 +0.7 +0.6 +0.7 +0.9 +1.0 +1.2 +1.3 Table 1: Average pass@256 results for Qwen2.5-Math-7B, Qwen2.5-7B, and Llama3.2-3B-Instruct on MATH500, AIME 2024/25, Minerva math, Olympiadbench, and AMC23 Datasets. at "semantic forks", where high entropy is desirable for exploration. This preservation of entropy further further confirms SimKO's role in promoting exploration. 5.3 MAIN RESULTS ON MATH BENCHMARKS We evaluate SimKO on six widely used math benchmarks across different model backbones. Table 1 reports average pass@K results for K ranging from 1 to 256. Detailed results on separate benchmarks are provided in the Appendix Table 4. Compared to the base models, SimKO significantly improves the pass@1 score by 17.6% on Qwen2.5-Math-7B and 9.8% on Llama3.2-3B-Instruct, indicating improved exploitation. At the same time, it also boosts the pass@256 score by 4.1% and 1.9% on the same backbones respectively, demonstrating improved exploration and overall reasoning quality. Although the base model of Qwen2.5-7B achieves the highest pass@256 score (76.4% vs. 74.3% from SimKO), its pass@1 performance is notably low (26.6% vs. 38.9% from SimKO), indicating an imbalance between exploration and exploitation. Compared to GRPO and its variants (KL-Cov, Entropy-Adv and P@k T.), SimKO consistently outperforms them across all model backbones and values of K. More importantly, SimKO delivers these gains without sacrificing exploration (pass@256) and with even stronger exploitation (pass@1). Relative to GRPO, SimKO improves pass@256 by 4.4%, 2.0%, and 1.3% on Qwen2.5-Math-7B, Qwen2.5-7B, and Llama3.2-3B-Instruct, respectively, while also achieving higher pass@1. 
Table 2: Pass@K results for Qwen2.5-7B on the Synlogic and BBH datasets.

Synlogic         1     2     4     8     16    32    64    128
Base Model       3.1   4.9   7.5   11.1  15.4  20.0  24.5  28.7
GRPO             35.3  38.2  40.8  42.9  44.7  46.3  47.9  49.4
PSR              27.3  29.3  31.7  34.3  36.9  39.4  41.6  43.6
NSR              1.1   1.9   3.2   5.3   8.5   12.5  17.2  21.6
W-REINFORCE      0.8   1.3   1.8   2.4   2.9   3.4   3.8   3.9
SimKO            34.7  38.4  42.0  45.5  48.5  51.0  53.2  55.0

BBH              1     2     4     8     16    32    64    128
Base Model       42.4  59.3  74.4  84.6  89.9  92.4  93.6  94.2
GRPO             56.4  64.6  71.3  76.6  80.7  83.9  86.3  88.2
PSR              54.9  62.7  68.8  73.4  76.8  79.4  81.4  82.8
NSR              26.1  41.6  59.3  74.9  85.3  90.5  92.7  93.5
W-REINFORCE      15.4  21.9  27.8  32.3  35.4  37.2  38.3  38.8
SimKO            58.4  69.5  77.7  83.2  86.8  89.2  90.9  92.0

Although P@k T. achieves a higher pass@256 (80.1%), its pass@1 performance drops to 39.8%, indicating that it fails to balance exploration and exploitation effectively. For NSR and W-REINFORCE, strong pass@256 performance is maintained, but often at the expense of much lower pass@1. In contrast, SimKO achieves a better balance on most backbones. On Qwen2.5-Math-7B, SimKO reaches a slightly higher pass@256 score (80.5% vs. 80.3% for NSR and 80.2% for W-REINFORCE) while clearly outperforming both in pass@1 (43.4% vs. 39.5% and 41.5%). A similar trend is observed on Llama3.2-3B-Instruct, where SimKO improves both pass@256 (70.8% vs. 69.7% and 68.1%) and pass@1 (24.0% vs. 22.5% and 22.4%). For Qwen2.5-7B, however, the trade-offs differ: SimKO outperforms NSR on pass@256 (74.3% vs. 72.8%) but lags slightly behind W-REINFORCE (75.2%), while on pass@1 it clearly surpasses both baselines (38.9% vs. 35.2% for NSR and 35.9% for W-REINFORCE). These results support our hypothesis that alleviating probability over-concentration (Figure 6) improves pass@K performance, indicating a better balance between exploitation and exploration.

5.4 GENERALIZATION TO LOGICAL TASKS

We evaluate SimKO's generalization ability on two logic reasoning benchmarks, as shown in Table 2. These benchmarks cover two scenarios: (1) Synlogic, an in-distribution task on which the base model performs poorly in pass@K, where the training and test data come from the same distribution (Synlogic-easy); and (2) BBH, an out-of-distribution task on which the base model performs better in pass@K, but where the test data differs from the training distribution.

On Synlogic, SimKO significantly outperforms the base model, with a +31.6% gain in pass@1 and +26.3% at pass@128. Methods like GRPO and PSR show improvements but lag behind SimKO by 5.6% and 11.4% at pass@128. NSR and W-REINFORCE, however, fail to train effectively, with pass@1 scores of only 1.1% and 0.8%. Similar observations hold on the BBH dataset: SimKO boosts the base model's pass@1 to 58.4% (+16.0%) and maintains stability at higher sampling rates, with just a 2.2% decrease in pass@128. GRPO and PSR, by comparison, drop 6.0% and 11.4% at pass@128 compared to the base model, showing difficulties in sustaining performance. NSR and W-REINFORCE perform poorly, achieving only 26.1% and 15.4% at pass@1. These results demonstrate that relying solely on negative samples is insufficient to improve pass@K on challenging tasks. In contrast, SimKO exhibits strong generalization, trains effectively on difficult tasks, and improves pass@K performance by mitigating probability over-concentration.

5.5 ABLATION STUDIES

We conduct an in-depth analysis of how the parameters τ, α, and K, as well as the impacts of γpos and γneg, affect SimKO.
The full ablation results are summarized in Table 3, and the performance variations with respect to τ, α, and K are shown in Figure 8.

Table 3: Ablations on α, τ, K, γpos, and γneg. Pass@1/Pass@256 scores are evaluated using Qwen2.5-Math-7B.

Method      AIME24      AIME25      AMC         MATH500     Minerva     Olympiad    Avg.
Base Model  13.2/66.0   5.4/51.8    38.2/98.5   55.8/96.0   16.5/68.8   25.6/77.0   25.8/76.4
GRPO        28.1/72.3   11.5/52.1   61.2/97.1   76.6/96.2   33.4/64.0   39.1/74.7   41.7/76.1
α=0.02      31.1/79.9   13.3/58.9   62.6/99.3   77.6/97.0   35.2/66.2   39.4/76.9   43.2/79.7
α=0.03      30.9/72.7   12.4/64.6   62.4/97.5   77.2/97.6   34.9/66.9   38.9/76.9   42.8/79.4
α=0.05      30.8/78.7   12.2/67.8   63.0/97.5   77.2/96.8   34.8/65.1   39.3/76.3   42.9/80.4
α=0.1       29.1/75.5   12.4/65.0   61.8/99.9   77.2/96.8   35.2/67.6   38.9/77.8   42.4/80.4
τ(0)        10.7/74.1   6.2/54.0    51.7/96.7   70.5/95.8   32.2/67.3   33.5/74.5   34.1/77.1
τ(40)       20.8/70.9   7.2/57.5    58.3/94.6   74.9/95.2   33.8/68.0   36.4/75.0   38.6/76.9
τ(60)       27.5/77.6   10.0/56.0   62.2/99.9   76.5/96.8   35.3/68.8   38.2/76.4   41.6/79.3
K = 1       29.5/83.7   12.2/55.1   62.1/97.1   77.1/96.8   35.1/65.8   39.5/75.9   42.6/79.1
K = 2       30.3/78.4   12.2/57.9   62.5/99.6   77.2/96.4   34.9/65.8   39.5/76.9   42.8/79.2
K = 4       32.8/84.6   12.5/54.6   62.6/97.5   77.7/97.2   35.3/68.4   39.2/78.4   43.4/80.1
K = 5       31.3/78.5   11.7/58.0   62.2/99.6   77.5/96.2   35.7/68.0   39.0/76.3   42.9/79.4
w/o γneg    31.5/80.7   11.5/57.9   62.7/97.4   77.1/96.2   34.1/65.8   39.0/75.6   42.8/78.9
w/o γpos    30.4/75.2   12.7/64.9   62.3/99.6   77.4/96.4   34.7/65.8   39.5/77.3   42.8/79.9
SimKO       32.8/78.0   12.9/64.6   62.4/97.5   77.6/96.8   35.0/68.4   39.8/77.8   43.4/80.5

Figure 8: Ablations on α, τ, and K. Pass@1 and pass@256 scores are evaluated using the Qwen2.5-Math-7B backbone on math benchmarks.

Specifically, we evaluate α values from 0 to 0.1, with α = 0 representing the performance of GRPO. Increasing α results in a monotonic improvement in pass@256, with gains ranging from 3.3% to 4.4% over GRPO. In contrast, pass@1 performance peaks at α = 0.01 and then slightly degrades, though it remains superior to GRPO. For τ, we test values from 0 to 100, with τ = 100 corresponding to GRPO. Notably, SimKO outperforms GRPO across all τ values in pass@256. However, when τ = 0, where SimKO is applied to all tokens, pass@1 drops significantly, by 9.3% compared to applying SimKO only at "forking" tokens. This indicates that restricting SimKO to the "semantic forks" of the task is essential for maintaining optimal performance. For K, we test values from 1 to 5. As K increases, both pass@256 and pass@1 show an initial increase followed by a decrease. This trend suggests that restricting optimization to a small subset of the most probable tokens is sufficient. Specifically, pass@256 fluctuates between 79.1% and 80.5%, while pass@1 fluctuates between 42.6% and 43.4%, both outperforming GRPO. Additionally, as shown in Table 3, applying SimKO exclusively to either correct or incorrect examples leads to a drop in pass@K performance. This highlights the importance of the asymmetric regularization being applied to both correct and incorrect examples, as it yields the best results.
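The "forking token" restriction that these τ ablations probe is an entropy-quantile threshold (Appendix D.1 defines τ(q) as the q-quantile of the token-level entropy distribution). The following is a minimal sketch of that selection rule as we understand it; the function and tensor names are our own, not the authors' code:

```python
import torch

def forking_token_mask(logits: torch.Tensor, q: float = 0.8) -> torch.Tensor:
    """Select high-entropy ("forking") tokens: those above the tau(q) threshold.

    logits: (batch, seq_len, vocab). tau(q) is the q-quantile of the token-level
    entropy distribution, so a fraction q of tokens fall below it; with q near 0,
    essentially every token is selected, recovering the tau = 0 ablation above.
    """
    logp = torch.log_softmax(logits, dim=-1)
    entropy = -(logp.exp() * logp).sum(dim=-1)      # (batch, seq_len)
    tau = torch.quantile(entropy.flatten(), q)      # entropy threshold tau(q)
    return entropy > tau                            # True at forking tokens
```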
6 CONCLUDING REMARKS

This paper addresses a key limitation of current RLVR methods, where pass@K performance drops due to reduced output diversity. By analyzing token-level posterior probabilities, we find that RLVR training causes the model to overly concentrate probability mass on the top-1 candidate, leading to a near-deterministic policy and limited exploration. To overcome this issue, we propose Simple Pass@K Optimization (SimKO), a method that mitigates this effect by redistributing gradient updates across the top-K candidates. Extensive evaluations demonstrate that SimKO effectively preserves output diversity and consistently outperforms the GRPO baseline on both the pass@1 and pass@256 metrics. These results highlight that SimKO achieves a superior balance between exploitation and exploration, thereby enhancing the model's overall reasoning capabilities.

REFERENCES

Madhu S Advani, Andrew M Saxe, and Haim Sompolinsky. High-dimensional dynamics of generalization error in neural networks. Neural Networks, 132:428-446, 2020.

AI@Meta. Llama 3 model card. 2024. URL https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md.

Zhipeng Chen, Xiaobo Qin, Youbin Wu, Yue Ling, Qinghao Ye, Wayne Xin Zhao, and Guang Shi. Pass@k training for adaptively balancing exploration and exploitation of large reasoning models. arXiv preprint, 2025a.

Zhipeng Chen, Xiaobo Qin, Youbin Wu, Yue Ling, Qinghao Ye, Wayne Xin Zhao, and Guang Shi. Pass@k training for adaptively balancing exploration and exploitation of large reasoning models. arXiv preprint, 2025b.

Daixuan Cheng, Shaohan Huang, Xuekai Zhu, Bo Dai, Wayne Xin Zhao, Zhenliang Zhang, and Furu Wei. Reasoning with exploration: An entropy perspective. arXiv preprint, 2025.

Tianzhe Chu, Yuexiang Zhai, Jihan Yang, Shengbang Tong, Saining Xie, Dale Schuurmans, Quoc V Le, Sergey Levine, and Yi Ma. SFT memorizes, RL generalizes: A comparative study of foundation model post-training. arXiv preprint, 2025.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint, 2021.

Ganqu Cui, Yuchen Zhang, Jiacheng Chen, Lifan Yuan, Zhi Wang, Yuxin Zuo, Haozhan Li, Yuchen Fan, Huayu Chen, Weize Chen, et al. The entropy mechanism of reinforcement learning for reasoning language models. arXiv preprint, 2025.

DeepSeek-AI. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning, 2025. URL https://arxiv.org/abs/2501.12948.

Wenlong Deng, Yi Ren, Muchen Li, Danica J Sutherland, Xiaoxiao Li, and Christos Thrampoulidis. On the effect of negative gradient in group relative deep reinforcement optimization. arXiv preprint, 2025a.

Wenlong Deng, Yi Ren, Yushu Li, Boying Gong, Danica J Sutherland, Xiaoxiao Li, and Christos Thrampoulidis. Token hidden reward: Steering exploration-exploitation in group relative deep reinforcement learning. arXiv preprint, 2025b.

Yihong Dong, Xue Jiang, Yongding Tao, Huanyu Liu, Kechi Zhang, Lili Mou, Rongyu Cao, Yingwei Ma, Jue Chen, Binhua Li, et al. RL-PLUS: Countering capability boundary collapse of LLMs in reinforcement learning with hybrid-policy optimization. arXiv preprint, 2025.

Angela Fan, Mike Lewis, and Yann Dauphin. Hierarchical neural story generation. arXiv preprint, 2018.

Andre He, Daniel Fried, and Sean Welleck.
Rewarding the unlikely: Lifting GRPO beyond distribution sharpening. arXiv preprint, 2025a.

Chaoqun He, Renjie Luo, Yuzhuo Bai, Shengding Hu, Zhen Leng Thai, Junhao Shen, Jinyi Hu, Xu Han, Yujie Huang, Yuxiang Zhang, et al. OlympiadBench: A challenging benchmark for promoting AGI with olympiad-level bilingual multimodal scientific problems. arXiv preprint, 2024.

Jujie He, Jiacai Liu, Chris Yuhao Liu, Rui Yan, Chaojie Wang, Peng Cheng, Xiaoyu Zhang, Fuxiang Zhang, Jiacheng Xu, Wei Shen, et al. Skywork Open Reasoner 1 technical report. arXiv preprint, 2025b.

Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the MATH dataset. arXiv preprint, 2021.

Zhenyu Hou, Xin Lv, Rui Lu, Jiajie Zhang, Yujiang Li, Zijun Yao, Juanzi Li, Jie Tang, and Yuxiao Dong. Advancing language model reasoning through reinforcement learning and inference scaling. arXiv preprint, 2025.

Jian Hu. REINFORCE++: A simple and efficient approach for aligning large language models. arXiv preprint, 2025.

Jingcheng Hu, Yinmin Zhang, Qi Han, Daxin Jiang, Xiangyu Zhang, and Heung-Yeung Shum. Open-Reasoner-Zero: An open source approach to scaling up reinforcement learning on the base model, 2025. URL https://arxiv.org/abs/2503.24290.

Hugging Face. Open R1: A fully open reproduction of DeepSeek-R1, January 2025. URL https://github.com/huggingface/open-r1.

Yuxian Jiang, Yafu Li, Guanxu Chen, Dongrui Liu, Yu Cheng, and Jing Shao. Rethinking entropy regularization in large reasoning models. arXiv preprint, 2025.

Katie Kang, Amrith Setlur, Dibya Ghosh, Jacob Steinhardt, Claire Tomlin, Sergey Levine, and Aviral Kumar. What do learning dynamics reveal about generalization in LLM reasoning? arXiv preprint, 2024.

Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, et al. Solving quantitative reasoning problems with language models. Advances in Neural Information Processing Systems, 35:3843-3857, 2022.

Jiazheng Li, Hong Lu, Kaiyue Wen, Zaiwen Yang, Jiaxuan Gao, Hongzhou Lin, Yi Wu, and Jingzhao Zhang. QuestA: Expanding reasoning capacity in LLMs via question augmentation. arXiv preprint, 2025.

Xiao Liang, Zhongzhi Li, Yeyun Gong, Yelong Shen, Ying Nian Wu, Zhijiang Guo, and Weizhu Chen. Beyond pass@1: Self-play with variational problem synthesis sustains RLVR. arXiv preprint, 2025.

Junteng Liu, Yuanxiang Fan, Zhuo Jiang, Han Ding, Yongyi Hu, Chi Zhang, Yiqi Shi, Shitong Weng, Aili Chen, Shiqi Chen, et al. SynLogic: Synthesizing verifiable reasoning data at scale for learning logical reasoning and beyond. arXiv preprint, 2025a.

Zichen Liu, Changyu Chen, Wenjun Li, Penghui Qi, Tianyu Pang, Chao Du, Wee Sun Lee, and Min Lin. Understanding R1-Zero-like training: A critical perspective. arXiv preprint, 2025b.

Rafael Müller, Simon Kornblith, and Geoffrey E Hinton. When does label smoothing help? Advances in Neural Information Processing Systems, 32, 2019.

Arka Pal, Deep Karkhanis, Samuel Dooley, Manley Roberts, Siddartha Naidu, and Colin White. Smaug: Fixing failure modes of preference optimisation with DPO-Positive. arXiv preprint, 2024.

Noam Razin, Sadhika Malladi, Adithya Bhaskar, Danqi Chen, Sanjeev Arora, and Boris Hanin. Unintentional unalignment: Likelihood displacement in direct preference optimization.
In The Thirteenth International Conference on Learning Representations, 2025.

Yi Ren. Learning dynamics of deep learning - force analysis of deep neural networks. arXiv preprint, 2025.

Yi Ren and Danica J. Sutherland. Learning dynamics of LLM finetuning. In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=tPNHOoZFl9.

Andrew M Saxe, James L McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint, 2013.

John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint, 2017.

Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Yang Wu, et al. DeepSeekMath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint, 2024.

Yuda Song, Julia Kempe, and Remi Munos. Outcome-based exploration for LLM reasoning. arXiv preprint, 2025.

Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V Le, Ed H Chi, Denny Zhou, et al. Challenging BIG-Bench tasks and whether chain-of-thought can solve them. arXiv preprint, 2022.

Remi Tachet, Mohammad Pezeshki, Samira Shabanian, Aaron Courville, and Yoshua Bengio. On the learning dynamics of deep neural networks. arXiv preprint, 2018.

Qwen Team. Qwen2.5: A party of foundation models, September 2024. URL https://qwenlm.github.io/blog/qwen2.5/.

Christian Walder and Deep Karkhanis. Pass@k policy optimization: Solving harder reinforcement learning problems. arXiv preprint, 2025.

Shenzhi Wang, Le Yu, Chang Gao, Chujie Zheng, Shixuan Liu, Rui Lu, Kai Dang, Xionghui Chen, Jianxin Yang, Zhenru Zhang, et al. Beyond the 80/20 rule: High-entropy minority tokens drive effective reinforcement learning for LLM reasoning. arXiv preprint, 2025.

Fang Wu, Weihao Xuan, Ximing Lu, Zaid Harchaoui, and Yejin Choi. The invisible leash: Why RLVR may not escape its origin. arXiv preprint, 2025.

An Yang, Beichen Zhang, Binyuan Hui, Bofei Gao, Bowen Yu, Chengpeng Li, Dayiheng Liu, Jianhong Tu, Jingren Zhou, Junyang Lin, et al. Qwen2.5-Math technical report: Toward mathematical expert model via self-improvement. arXiv preprint, 2024.

Zhicheng Yang, Zhijiang Guo, Yinya Huang, Yongxin Wang, Dongchun Xie, Yiwei Wang, Xiaodan Liang, and Jing Tang. Depth-breadth synergy in RLVR: Unlocking LLM reasoning gains with adaptive exploration. arXiv preprint, 2025.

Qiying Yu, Zheng Zhang, Ruofei Zhu, Yufeng Yuan, Xiaochen Zuo, Yu Yue, Weinan Dai, Tiantian Fan, Gaohong Liu, Lingjun Liu, et al. DAPO: An open-source LLM reinforcement learning system at scale. arXiv preprint, 2025.

Yang Yue, Zhiqi Chen, Rui Lu, Andrew Zhao, Zhaokai Wang, Shiji Song, and Gao Huang. Does reinforcement learning really incentivize reasoning capacity in LLMs beyond the base model? arXiv preprint, 2025.

Weihao Zeng, Yuzhen Huang, Qian Liu, Wei Liu, Keqing He, Zejun Ma, and Junxian He. SimpleRL-Zoo: Investigating and taming zero reinforcement learning for open base models in the wild. In Second Conference on Language Modeling, 2025.

Chujie Zheng, Shixuan Liu, Mingze Li, Xiong-Hui Chen, Bowen Yu, Chang Gao, Kai Dang, Yuqiong Liu, Rui Men, An Yang, et al. Group sequence policy optimization. arXiv preprint, 2025.
Xinyu Zhu, Mengzhou Xia, Zhepei Wei, Wei-Lin Chen, Danqi Chen, and Yu Meng. The surprising effectiveness of negative reinforcement in LLM reasoning. arXiv preprint, 2025.

Appendix

Table of Contents
A Related Work
  A.1 Reinforcement Learning with Verifiable Rewards in LLMs
  A.2 Effective Exploration for RLVR in LLMs
  A.3 Analysis of Learning Dynamics in LLMs
B More Details about the Design of γ^pos_{i,l}
C Additional Analysis of Distribution Changes During Training
  C.1 Over-Concentrated Distributions across Multiple Models
  C.2 Top-6 Candidate Probability Distributions
D Experimental Details
  D.1 Training Details
  D.2 More Experimental Results

A RELATED WORK

A.1 REINFORCEMENT LEARNING WITH VERIFIABLE REWARDS IN LLMS

Reinforcement learning with verifiable rewards (RLVR) for large language models (LLMs) has demonstrated significant potential (DeepSeek-AI, 2025; Hugging Face, 2025; Zeng et al., 2025; He et al., 2025b), especially when applied directly to a base model using GRPO (Shao et al., 2024) for RL training. This approach has notably enhanced base-model performance, particularly reasoning ability on mathematical and coding tasks. Subsequent works have focused on improving GRPO to further enhance the algorithm's performance. For instance, DAPO (Yu et al., 2025) adjusts GRPO's clipping thresholds and removes KL regularization to encourage larger updates on correct answers. Dr.GRPO (Liu et al., 2025b) eliminates the normalization term when computing advantages to prevent length bias. GSPO (Zheng et al., 2025) moves importance sampling from the token level to the sequence level, which proves more stable when training mixture-of-experts (MoE) models. These modifications have contributed to improvements in pass@1 performance, but they do not specifically address pass@K performance, which reflects the model's exploration ability.

A.2 EFFECTIVE EXPLORATION FOR RLVR IN LLMS

A central challenge in RLVR tasks lies in moving beyond the exploitation of a pretrained model's implicit knowledge to actively exploring diverse reasoning paths. Current methods tend to converge on a limited set of solutions, as evidenced by poor performance on the pass@K metric, which evaluates the coverage of multiple reasoning paths and thus reflects exploration effectiveness (Yue et al., 2025; Wu et al., 2025). To address this exploration deficit, the community has pursued several strategies. Data-centric methods use data augmentation to increase the model's exposure to diverse reasoning environments, thereby encouraging the exploration of a broader range of solution paths. One such approach uses off-policy data from more capable models to expand the model's knowledge and promote solution diversity (Dong et al., 2025; Li et al., 2025). Additional strategies include generating varied responses for challenging samples (Yang et al., 2025) or paraphrasing questions to stimulate different reasoning trajectories for the same problem (Liang et al., 2025).
As a complementary approach, reward-centric methods redesign the objective function to directly incentivize diversity by calculating a group-wise reward over a set of candidate solutions, providing an unbiased gradient for optimizing pass@K (Walder & Karkhanis, 2025; Chen et al., 2025b). While these methods are effective to some extent, both treat the model as a black box, manipulating its inputs and final supervisory signals without examining the internal mechanisms driving exploration. To address this limitation, a more recent line of work has shifted focus inward. Entropy-based methods use entropy as a proxy to control exploration (Cui et al., 2025; Cheng et al., 2025; Wang et al., 2025; Hou et al., 2025; Hu et al., 2025), but policy entropy is a coarse measure that does not provide fine-grained insight into the model's exploration behavior. More recently, Jiang et al. (2025) refined the coarse entropy metric by considering only candidates whose cumulative probabilities exceed a threshold p. However, this still lacks a detailed analysis of how the training dynamics of the model's predicted candidates affect exploration. This highlights the need for a mechanism that directly monitors changes in the distribution of next-token predictions during training, which is the central focus of our work. Another relevant work is Token-Hidden Reward (THR) (Deng et al., 2025b), which restricts confidence increases for positive tokens to leave more probability mass for alternative reasoning paths. However, it only addresses the prevention of excessive confidence on correct tokens; the influence of negative tokens is embedded in the definition of THR. Additionally, it requires inner-product calculations between token embeddings, which increases computational cost.

A.3 ANALYSIS OF LEARNING DYNAMICS IN LLMS

Analyzing the learning dynamics of deep neural networks provides valuable insight into how training shapes model behavior (Saxe et al., 2013; Tachet et al., 2018; Advani et al., 2020). This analytical perspective has recently been extended to Large Language Models (LLMs), where prior work has widely examined the dynamics of supervised fine-tuning (SFT) (Kang et al., 2024; Chu et al., 2025), off-policy preference optimization methods such as DPO (Razin et al., 2025; Pal et al., 2024), or both (Ren & Sutherland, 2025; Ren, 2025).

Several recent studies have begun exploring the learning dynamics of on-policy RL. Cui et al. (2025) adopt entropy-based metrics to track model changes during training. However, such metrics provide only an indirect signal by averaging over the entire vocabulary, thereby failing to capture meaningful shifts among high-probability candidates. In contrast, Deng et al. (2025a) examine probability shifts induced by individual gradient updates to analyze inter-sample effects. While these analyses offer valuable fine-grained insight into probability changes, they fail to capture the cumulative evolution of the model's policy. To overcome these limitations, we propose a top-K probability dynamics framework that directly tracks how probability mass redistributes among the most likely candidates throughout training. This approach provides a scalable and interpretable lens for understanding how on-policy RL shapes model behavior.

B MORE DETAILS ABOUT THE DESIGN OF γ^pos_{i,l}

This appendix provides more details about how we design Equation (6):

    γ^pos_{i,l} = (1 − α) γ_{i,l} + (α / |I_topK|) Σ_{k ∈ I_topK} sg(γ_{i,l} / γ^(k)_{i,l}) · γ^(k)_{i,l},    α ∈ [0, 1].    (6)
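Before walking through the derivation, here is how Equation (6) looks in code; this is our own PyTorch sketch (the tensor names and top-K bookkeeping are assumptions, not the authors' released implementation):

```python
import torch

def gamma_pos(logp, logp_ref, y, topk_idx, alpha=0.01):
    """Sketch of Equation (6): mix the sampled-token importance ratio with
    stop-gradient-weighted top-K candidate ratios.

    logp, logp_ref: (seq_len, vocab) log-probs under the policy and reference.
    y: (seq_len,) sampled token ids; topk_idx: (seq_len, K) top-K candidate ids.
    """
    ratio = torch.exp(logp - logp_ref)                      # gamma for every token
    gamma = ratio.gather(-1, y.unsqueeze(-1)).squeeze(-1)   # gamma_{i,l}
    gamma_k = ratio.gather(-1, topk_idx)                    # gamma^{(k)}_{i,l}, (seq_len, K)
    # sg(gamma / gamma_k) keeps sg(gamma_pos) == sg(gamma), so importance
    # sampling stays correct while gradients still flow through all top-K
    # candidates -- the single .detach() discussed at the end of this appendix.
    weight = (gamma.unsqueeze(-1) / gamma_k).detach()
    return (1 - alpha) * gamma + alpha * (weight * gamma_k).mean(dim=-1)
```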
The derivation starts from the smoothed G̃(i, l) term in Equation (5):

    G̃(i, l) = π_θ(· | s_{i,l}) − ẽ_topK = π_θ(· | s_{i,l}) − [ (1 − α) e_{y_{i,l}} + (α/K) Σ_{k ∈ I_topK} e_k ].

Following the AKG decomposition in Ren & Sutherland (2025), we know

    ∇_θ γ_{i,l} = A_{i,l} · sg(γ_{i,l}) · ∇_z log π_θ(y_{i,l} | s_{i,l}) = A_{i,l} · sg(γ_{i,l}) · ∇_θ z G(i, l),

where the prefactor A_{i,l} · sg(γ_{i,l}) is a constant and ∇_z log π_θ(y_{i,l} | s_{i,l}) is defined as G(i, l). Now, replacing the G(i, l) term with G̃(i, l) (the prefactor A_{i,l} · sg(γ_{i,l}) is constant w.r.t. θ):

    ∇_θ z G̃(i, l) = ∇_θ z [ π_θ(· | s_{i,l}) − ( (1 − α) e_{y_{i,l}} + (α/K) Σ_{k ∈ I_topK} e_k ) ]
                  = ∇_θ z [ (1 − α) π_θ(· | s_{i,l}) + α π_θ(· | s_{i,l}) − ( (1 − α) e_{y_{i,l}} + (α/K) Σ_{k ∈ I_topK} e_k ) ]
                  = ∇_θ z [ (1 − α) ( π_θ(· | s_{i,l}) − e_{y_{i,l}} ) + (α/K) Σ_{k ∈ I_topK} ( π_θ(· | s_{i,l}) − e_k ) ]
                  = (1 − α) ∇_θ z G(i, y) + (α/K) Σ_{k ∈ I_topK} ∇_θ z G(i, k).    (7)

From the equation above, we see that if we change e_{y_{i,l}} in the original γ to ẽ_topK, the gradient of the new loss simplifies to the combination above. Note that A_{i,l} · sg(γ_{i,l}) · ∇_θ z G(i, k) is exactly the decomposition of γ^(k)_{i,l} = π_θ(y^(k)_{i,l} | s_{i,l}) / π_θref(y^(k)_{i,l} | s_{i,l}), i.e., of updating the model using y^(k)_{i,l}. In other words, the new loss might take the form

    (1 − α) γ_{i,l} + (α/K) Σ_{k ∈ I_topK} γ^(k)_{i,l}.

However, this combination would make the new γ a biased estimator (the RL theoretical guarantee requires correct importance sampling). We use a stop-gradient trick to fix this: multiplying the second term in Equation (7) by sg(γ_{i,l} / γ^(k)_{i,l}) ensures that sg(γ^pos_{i,l}) = sg(γ_{i,l}). This design is a single line of code, using .detach() in PyTorch, as in the sketch after Equation (6) above.

C ADDITIONAL ANALYSIS OF DISTRIBUTION CHANGES DURING TRAINING

C.1 OVER-CONCENTRATED DISTRIBUTIONS ACROSS MULTIPLE MODELS

In Figure 9, we present the training dynamics for all three models, which validate the findings of Section 3 across multiple models.

Figure 9: (a)-(c) Training dynamics of the average log-probability Λ and top-K probabilities Λ(k) under GRPO, NSR, and PSR, for Llama3.2-3B-Instruct, Qwen2.5-Math-7B, and Qwen2.5-7B, respectively. (d) The corresponding pass@1 and pass@K results of the RLVR-trained models. Following the setups of Zhu et al. (2025), we train Llama3.2-3B-Instruct on a mixture of GSM8K and MATH (Level 1) and train Qwen2.5-Math-7B/Qwen2.5-7B on the MATH dataset.
Figure 10: The probability distribution of each top-K candidate. The probabilities of candidates from top-2 to top-6 largely fall within the lowest probability range, indicating that monitoring the top-K candidates' probability distribution is sufficient.

C.2 TOP-6 CANDIDATE PROBABILITY DISTRIBUTIONS

Figure 10 illustrates the probability distributions of the top-6 candidates. The rank-1 candidate distribution for all models is concentrated in the highest-probability bin, with the GRPO-trained model showing the most pronounced concentration: over 90% of its rank-1 candidates sit in the highest-probability bin. In contrast, the rank-2 to rank-6 candidates are concentrated in the lowest-probability bin, with over 95% of the rank-6 candidates across all models falling below a probability of 0.05. This concentration of probability mass in the top-K candidates suggests that their collective distribution serves as a sufficient proxy for the model's full vocabulary distribution.

D EXPERIMENTAL DETAILS

D.1 TRAINING DETAILS

All models are trained with a learning rate of 10⁻⁶, a batch size of 1024, and a PPO mini-batch size of 256. Each input problem is sampled with 8 responses at temperature 1.0. We set α in SimKO to 0.01, and define the entropy threshold τ(q) as the q-quantile of the token-level entropy distribution, such that a fraction q of all tokens have entropy values below τ(q). Unless otherwise specified, we use τ(0.8) in our experiments. We also set λtop1 = 1.1, except for Qwen2.5-7B, where we use λtop1 = 1.05. For the logic task, we apply a warm-up of 50 steps and use a smaller α = 0.005 along with λtop1 = 1.05.

D.2 MORE EXPERIMENTAL RESULTS

In this section, we present more detailed experimental results, as shown in Table 4.
Table 4: Pass@1 / Pass@256 results for Qwen2.5-Math-7B, Qwen2.5-7B, and Llama3.2-3B-Instruct on the MATH500, AIME 2024/25, Minerva Math, OlympiadBench, and AMC23 datasets.

Qwen2.5-Math-7B
Method                AIME24      AIME25      AMC23       MATH500     Minerva     Olympiad    Avg.
Base Model            13.2/66.0   5.4/51.8    38.2/98.5   55.8/96.0   16.5/68.8   25.6/77.0   25.8/76.4
GRPO                  28.1/72.3   11.5/52.1   61.2/97.1   76.6/96.2   33.4/64.0   39.1/74.7   41.7/76.1
PSR                   19.3/68.5   11.2/48.9   62.1/94.9   74.0/91.4   32.8/63.6   37.6/67.7   39.5/72.5
NSR                   22.8/80.3   9.7/61.2    59.4/100.0  74.6/97.0   32.9/65.1   37.8/78.4   39.5/80.3
W-REINFORCE           29.2/86.5   10.8/55.7   61.1/97.4   76.4/96.4   33.4/67.6   38.1/77.6   41.5/80.2
KL-Cov                30.9/81.2   11.7/55.2   62.2/97.4   76.5/97.0   34.4/66.2   39.2/76.9   42.5/79.0
P@k T.                26.7/77.9   10.2/61.3   58.8/97.5   73.3/96.8   33.2/68.8   36.6/78.2   39.8/80.1
GRPO w/ Entropy-Adv   29.1/81.7   10.9/55.0   62.5/92.1   77.1/95.0   33.5/60.7   39.7/71.9   42.1/76.1
SimKO                 32.8/78.0   12.9/64.6   62.4/97.5   77.6/96.8   35.0/68.4   39.8/77.8   43.4/80.5
Δ(SimKO−GRPO)         +4.7/+5.7   +1.4/+12.5  +1.2/+0.4   +1.0/+0.6   +1.6/+4.4   +0.7/+3.1   +1.7/+4.4

Qwen2.5-7B
Base Model            7.4/64.0    3.6/48.8    36.1/99.6   61.4/97.2   23.0/70.2   28.1/78.5   26.6/76.4
GRPO                  15.6/59.0   8.8/55.0    56.0/92.5   75.7/95.2   35.7/61.8   38.8/70.4   38.4/72.3
PSR                   14.3/61.8   9.3/54.2    51.6/96.7   73.6/92.0   32.9/52.2   35.6/67.0   36.2/70.7
NSR                   9.9/48.9    7.1/52.3    49.8/97.1   73.9/95.4   33.7/65.8   36.6/77.0   35.2/72.8
W-REINFORCE           11.2/57.5   6.1/54.2    54.3/99.6   73.9/95.8   33.4/66.2   36.6/77.9   35.9/75.2
SimKO                 16.3/58.4   9.4/55.2    57.3/97.1   76.7/94.8   35.2/66.2   38.7/74.4   38.9/74.3
Δ(SimKO−GRPO)         +0.7/−0.6   +0.6/+0.2   +1.3/+4.6   +1.0/−0.4   −0.5/+4.4   −0.1/+4.0   +0.5/+2.0

Llama3.2-3B-Instruct
Base Model            3.4/51.7    0.7/46.7    20.3/94.9   37.8/93.6   10.1/59.2   12.7/67.1   14.2/68.9
GRPO                  12.7/55.1   1.1/44.1    32.5/96.7   53.1/91.6   17.3/62.5   20.1/67.0   23.3/69.5
PSR                   7.8/57.4    1.0/35.1    27.2/98.8   50.3/91.0   18.5/61.0   18.9/63.7   20.6/67.8
NSR                   11.1/53.7   1.5/47.4    30.3/94.6   53.3/94.0   19.0/60.3   20.0/68.0   22.5/69.7
W-REINFORCE           13.3/51.7   1.1/42.1    31.4/96.3   52.4/92.8   16.7/59.9   19.6/65.8   22.4/68.1
SimKO                 13.8/54.6   1.0/45.4    35.2/98.8   54.6/93.4   18.5/63.2   21.0/69.6   24.0/70.8
Δ(SimKO−GRPO)         +1.1/−0.5   −0.1/+1.3   +2.7/+2.1   +1.5/+1.8   +1.2/+0.7   +0.9/+2.6   +0.7/+1.3
The Nearby Evolved Stars Survey III: First data release of JCMT CO-line observations

S. H. J. Wallström1,2, P. Scicluna3,4,5,2, S. Srinivasan6,2⋆, J. G. A. Wouterloot7, I. McDonald8,9, L. Decock1, M. Wijshoff1, R. Chen10,2, D. Torres11, L. Umans1, B. Willebrords1, F. Kemper12,13,14, G. Rau15,16,17, S. Feng18,2, M. Jeste19, T. Kaminski20, D. Li21, F. C. Liu22, A. Trejo-Cruz6,2, H. Chawner23, S. Goldman24,25, H. MacIsaac26,27, J. Tang21, S. T. Zeegers28, T. Danilovich29,1, M. Matsuura30, K. M. Menten19, J. Th. van Loon31, J. Cami26,27,32, C. J. R. Clark24, T. E. Dharmawardena33,34, J. Greaves23, Jinhua He35,36,37, H. Imai38,39, O. C. Jones40, H. Kim41, J. P. Marshall2, H. Shinnaga42,38, R. Wesson30,43, and the NESS Collaboration44

(Affiliations can be found after the references)

Received / Accepted

ABSTRACT

Low- to intermediate-mass (∼0.8−8 M⊙) evolved stars contribute significantly to the chemical enrichment of the interstellar medium in the local Universe. It is therefore crucial to accurately measure the mass return in their final evolutionary stages. The Nearby Evolved Stars Survey (NESS) is a large multi-telescope project targeting a volume-limited sample of ∼850 stars within 3 kpc in order to derive the dust and gas return rates in the Solar Neighbourhood, and to constrain the physics underlying these processes. We present an initial analysis of the CO-line observations, including detection statistics, carbon isotopic ratios, initial mass-loss rates, and gas-to-dust ratios. We describe a new data reduction pipeline, built for homogeneity, which we use to analyse the available NESS CO data from the James Clerk Maxwell Telescope, measuring line parameters and calculating empirical gas mass-loss rates. We present the first release of the available data on 485 sources, one of the largest homogeneous samples of CO data to date. Comparison with a large combined literature sample finds that high mass-loss-rate and especially carbon-rich sources are over-represented in the literature, while NESS probes significantly more sources at low mass-loss rates, detecting 59 sources in CO for the first time and providing useful upper limits on non-detections. CO line detection rates are 81% for the CO (2–1) line and 75% for CO (3–2). The majority (82%) of detected lines conform to the expected soft parabola shape, while eleven sources show a double wind. Calculated mass-loss rates show power-law relations with both the dust-production rates and expansion velocities, up to a mass-loss-rate saturation value of ∼5 × 10⁻⁶ M⊙ yr⁻¹. Median gas-to-dust ratios of 250 and 680 are found for oxygen-rich and carbon-rich sources, respectively. Our analysis of CO observations in this first data release highlights the importance of our volume-limited approach in characterizing the local AGB population as a whole.

Key words. stars: AGB and post-AGB – stars: mass-loss – stars: winds, outflows – surveys

1. Introduction

Cool evolved stars contribute to the chemical enrichment of the interstellar medium (ISM) by synthesising new elements, which are then expelled in massive stellar winds driven by newly formed dust. Population models indicate that low- to intermediate-mass (∼0.8−8 M⊙) asymptotic giant branch (AGB) stars dominate this process today (Karakas & Lattanzio 2014), aided by their more massive (≥8 M⊙) and therefore less numerous red supergiant (RSG) cousins, which eventually explode as supernovae.
Accordingly, AGB stars are of great interest, and many studies have been carried out on various aspects of their winds, including Loup et al. (1993); Schöier & Olofsson (2001); Olofsson et al. (2002); Gonzalez Delgado et al. (2003); Ramstedt et al. (2009) and De Beck et al. (2010), which are described in Appendix B. Nevertheless, it has been difficult to draw firm conclusions about many key aspects of the population of AGB stars, including the total mass returned to the ISM by these stars, the physical processes driving the onset of mass loss, the fraction of the ejected mass that condenses into dust, and variations in the mass-loss rate (MLR) over time (e.g., Höfner & Olofsson 2018).

Repeated third dredge-up events during the AGB phase bring carbon-enriched material to the surface of the star, increasing its C/O ratio over time and changing the dominant chemistry of the wind. AGB stars are accordingly divided into three groups that likely form an evolutionary sequence. The majority are oxygen-rich at solar metallicity and typically have C/O ratios ≲0.9; their winds consist mainly of oxygen-bearing molecules and silicate dust. Carbon-rich AGB stars are at the other extreme (C/O > 1), producing mainly carbonaceous molecules and amorphous carbon dust. S-type AGB stars have C/O ratios close to unity (0.9 ≲ C/O < 1.1, with the exact lower limit being temperature-dependent; e.g. Scalo & Ross 1976; Van Eck et al. 2017) and ZrO bands stronger than TiO in low-resolution spectra (Keenan 1954).

As AGB stars are major dust producers, we can use mid-IR observations to reveal their dust-forming regions and study their mass loss. While distance uncertainties have hampered such studies in the Galaxy, the AGB populations of the Magellanic Clouds are well studied. However, due to their low metallicity, the dominant dust producers in these galaxies are evolved carbon-rich stars (e.g., Riebel et al. 2012; Boyer et al. 2012; Srinivasan et al. 2016). This limits our ability to extrapolate to the Galactic population, where oxygen-rich AGB stars are more common. Furthermore, using dust to estimate the total return of enriched material to the ISM requires assumptions about the expansion velocity, dust-to-gas ratio, dust-grain properties, and dust emissivity, which may differ between individual stars and between stellar populations, and may be a function of metallicity. We therefore also want direct observations of the gas, which forms the majority of the lost mass.

CO observations are a good tracer of AGB gas mass loss (e.g., Kemper et al. 2003; Decin et al. 2007): CO is both ubiquitous and chemically stable in AGB circumstellar shells, and models show that its abundance relative to the main constituent of the stellar wind, H2, varies by less than a factor of 2 across a range of C/O ratios (Cherchneff 2006). In addition, it is very robust to photodissociation, while its rotational transitions are excited at low temperatures (e.g. Mamon et al. 1988). Together, these ensure that CO traces the bulk of the gas in the outflow within the photodissociation radius, without significant changes from source to source. CO also provides a way to measure the 12C/13C ratio through its 13CO lines. This ratio is modified by the nucleosynthesis in AGB stars (e.g., Kobayashi et al. 2011) and hence, on Galactic scales, it can be used to trace star-formation histories.

⋆ E-mail: s.srinivasan@irya.unam.mx
Excepting the brightest Magellanic Cloud objects (Groenewegen et al. 2016; Matsuura et al. 2016), measurements of CO can only be made for Galactic sources, and its rotational transitions can only be systematically determined for Solar Neighbourhood objects (within a few kpc). In-depth studies of individual nearby sources continue to drive great progress in our understanding of the final stages of stellar evolution (e.g., Agúndez et al. 2017; Bujarrabal et al. 2021; Hoai et al. 2022), and broader studies have been done on limited samples or certain types of AGB star (e.g. Danilovich et al. 2015; Massalkhi et al. 2018; Wallström et al. 2024, and the six studies in the literature sample described in Appendix B). However, no large unbiased study of Galactic AGB stars yet exists. As we cannot observe all Galactic AGB stars, a sample delineated only by geometry is the best way to circumvent potential biases and accurately characterize the population.

The Nearby Evolved Stars Survey (NESS, Scicluna et al. 2022) is a large, multi-telescope observing programme targeting a volume-limited sample of ∼850 mass-losing AGB and RSG stars within 3 kpc for CO J=(2–1) and (3–2) spectral-line and sub-millimetre (submm) continuum observations. The NESS sources are shown in Figure 1 and have been divided into the five tiers listed below, based on distance and dust-production rate (DPR), which is calculated by matching photometry with models from the Grid of Red supergiant and AGB ModelS (GRAMS; Sargent et al. 2011; Srinivasan et al. 2011), as detailed in Scicluna et al. (2022). Note that some Galactic plane sources are excluded from the more distant tiers due to potential confusion with interstellar lines; they instead form a sample that must be observed separately with interferometry to filter out the interstellar emission. The tiers are defined as follows (a code sketch of these cuts is given at the end of this section):

(i) "very low": sources with no detectable DPR, distances d < 250 pc, and luminosities L > 1600 L⊙, as measured by McDonald et al. (2012) and McDonald et al. (2017);
(ii) "low": sources with DPR < 10⁻¹⁰ M⊙ yr⁻¹ and d < 300 pc;
(iii) "intermediate": sources with 10⁻¹⁰ < DPR < 3 × 10⁻⁹ M⊙ yr⁻¹ and d < 600 pc, excluding the Galactic plane for d > 400 pc (i.e., only including sources with Galactic latitude |b| > 1.5°);
(iv) "high": sources with 3 × 10⁻⁹ < DPR < 10⁻⁷ M⊙ yr⁻¹ and d < 1200 pc, excluding the Galactic plane for d > 800 pc; and
(v) "extreme": sources with DPR > 10⁻⁷ M⊙ yr⁻¹ and d < 3000 pc, excluding the Galactic plane for d > 2000 pc.

In this paper, we present the initial CO results from observations of the northern part of the NESS sample with the James Clerk Maxwell Telescope (JCMT). Although incomplete, this initial dataset of 485 sources represents the largest sample of CO lines for nearby AGB stars to date, and includes 59 sources with no prior CO observations. Our highly homogeneous observations extend to systematically lower mass-loss rates than have typically been explored in the literature (McDonald et al. 2025). Section 2 describes the observations, data reduction pipeline, and data analysis; Section 3 gives initial results on the detection rates, line profiles, 12CO/13CO ratios, mass-loss rates, gas-to-dust ratios, and comparisons with literature data and models; and Section 4 contains the conclusions.
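For readers scripting against the released catalogue, the tier cuts above reduce to a few comparisons. The following is a minimal sketch (our own illustration; the luminosity criterion of the "very low" tier and the Galactic-plane exclusions of tiers (iii)-(v) are deliberately omitted):

```python
def ness_tier(dpr, d_pc):
    """Assign a NESS tier from dust-production rate (Msun/yr) and distance (pc).

    Sketch of the published cuts only: sources with no detectable DPR
    (dpr is None) belong to the "very low" tier, whose luminosity criterion
    is not checked here, and Galactic-plane exclusions are ignored.
    """
    if dpr is None:
        return "very low" if d_pc < 250 else None
    if dpr < 1e-10:
        return "low" if d_pc < 300 else None
    if dpr < 3e-9:
        return "intermediate" if d_pc < 600 else None
    if dpr < 1e-7:
        return "high" if d_pc < 1200 else None
    return "extreme" if d_pc < 3000 else None
```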
2. Observations and analysis

The NESS project is based around a JCMT Large Program to observe the ∼500 sources in the northern part of the sample, for which we placed a declination cut at −30° to avoid observing at airmass > 2. To this sample, we add all archival data for NESS targets taken with RxA3 and HARP since the ACSIS correlator was installed (from 2006 onwards). We note that this includes archival data of 71 sources that have Dec. < −30°. Because these sources are so far south, they have also been observed with APEX (along with other sources, to ensure full-sky coverage), and sources observed by both telescopes will be compared in a paper analysing the APEX observations (Jeste et al., in prep).

The JCMT data presented here cover CO (2–1) and CO (3–2), as well as their 13CO isotopologues, observed in staring mode. We used double beam switching with a uniform chop throw of 180′′ at a position angle of 90° East of North. There is also a small mapping subsample, and continuum observations at 850 and 450 µm, which will not be discussed in this work. See Scicluna et al. (2022) for more details on the observations. The NESS observations at the JCMT began in June 2017, and this paper includes all heterodyne data taken up to February 2023 using the RxA3/RxA3m¹ and HARP (Buckle et al. 2009) receivers. Note that the RxA3 receiver was removed from the JCMT in June 2018. Further observations, including with the new Nāmakanui receivers (Mizuno et al. 2020), are underway and will complement the findings presented here.

2.1. Removed sources

We have removed from the analysis a small number of sources which, while meeting the original NESS sample criteria, are deemed not to be cool evolved stars. These sources are: IRAS 05251-1244, IRAS 06491-0654, IRAS 17150-3224, IRAS 17328-3327, IRAS 18458-0213, IRAS 19327+3024, IRAS 20002+3322, and IRAS 21282+5050. Most are planetary nebulae, and one, IRAS 06491-0654, is not an evolved star at all but rather a Herbig Ae/Be star. Further details on the removed sources and the updated NESS sample are presented in McDonald et al. (2025).

2.2. Main-beam efficiencies

In order to convert each observation from the T*_A to the T_mb temperature scale, we have derived values for the main-beam efficiency (η_mb) for both RxA3 and HARP, using the full datasets of planet observations (Mars, Jupiter, and Uranus) from the JCMT. These data are presented graphically on the JCMT webpages for HARP² and RxA3³. Note that we do not use observations taken during the day (09–19h local time = 19–05h UT), as they tend to have larger uncertainties and systematically lower values of η_mb. There have also been periods of misalignment of the RxA3 receiver, for each of which we calculate a separate best-fit η_mb value. An observational uncertainty, σ_η, was only available for the HARP observations, so we use the mean σ_η value for observations with all receivers: σ_η = 0.103 ± 0.005.

As the individual values of η_mb can vary widely, down to almost zero, due to various factors (including pointing errors, the amount of atmospheric water vapor, and uncertainties in the planet models), it can be difficult to ascertain the best values for η_mb.

¹ RxA3 was the 3rd A-band receiver at the JCMT. It was upgraded after a mixer change in January 2016 to RxA3m; throughout the rest of this paper we refer to both simply as RxA3.
For consistency, we elected to model the η_mb distribution as a function of time (t) as a straight line with an exponential bias term acting to reduce the measured values. The fitted function is

    η_mb = m t + b − B W + ε,    (1)

where m is the (shallow) slope of the line, b is the intercept, ε is Gaussian random noise such that ε ∼ N(0, σ_η) (where σ_η is the measured uncertainty on the η_mb values), B is the bias term drawn from an exponential distribution whose scale parameter σ_B = 1/λ we infer, and W is a weighting of the bias term (between 0 and 1). The use of an exponential distribution for the bias term results in asymmetry; this accounts for the systematic effects in observations that can reduce the measured main-beam efficiency, e.g. bad pointing, which may produce underestimates of the true value on any given day. The values of m, b, σ_B, and W were optimised by choosing a reasonable range for each parameter and then, for each of 1000 combinations of parameter values, randomly drawing 1000 distributions and comparing them to the distribution of measured η_mb values using the two-sample Kolmogorov–Smirnov (KS) test. The set of parameters with the lowest KS test statistic, corresponding to the smallest absolute difference between the empirical distribution functions of the model and the measured η_mb values, were taken to be the best-fit parameters. The uncertainty on each parameter was calculated by finding the range of parameters which fell within the median absolute deviation of the KS test statistics, although note that for the misalignment periods there were not enough data points for this analysis. The slope was found to be 0, with an uncertainty of ∼10⁻⁵, for all sets of observations, so the intercept b is taken to be the value of η_mb. The best-fit values of η_mb, σ_B, and W are given in Table 1.

The best-fit η_mb values are used to correct each observational spectrum before the spectra are combined into one spectrum (per frequency range) for each source. For later error propagation, a time-dependent, average error for the η_mb correction is applied to spectral parameters derived from RxA3 observations. This error factor is equivalent to a fractional error of 0.21 in η_mb.

² https://www.eaobservatory.org/jcmt/instrumentation/heterodyne/calibration/harp-planets/
³ https://www.eaobservatory.org/jcmt/instrumentation/heterodyne/calibration/rxa3-planets/
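In outline, the optimisation just described can be reproduced with a grid search scored by scipy's two-sample KS test. The following is a minimal sketch under our own simplifying choices (coarse parameter grids, slope fixed to m = 0); it is not the NESS pipeline code:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

def simulate_eta(b, sigma_b, w, sigma_eta, n):
    """Model eta_mb draws: intercept b minus a weighted exponential bias,
    plus Gaussian measurement noise (Equation (1) with m = 0)."""
    bias = rng.exponential(scale=sigma_b, size=n)
    noise = rng.normal(0.0, sigma_eta, size=n)
    return b - w * bias + noise

def fit_eta(measured, sigma_eta=0.103, n_draws=1000):
    """Grid search over (b, sigma_B, W), scored by the KS statistic between
    simulated and measured eta_mb distributions."""
    best = (np.inf, None)
    for b in np.linspace(0.4, 0.7, 10):
        for sigma_b in np.linspace(0.2, 0.5, 10):
            for w in np.linspace(0.1, 0.6, 10):
                sim = simulate_eta(b, sigma_b, w, sigma_eta, n_draws)
                stat = ks_2samp(sim, measured).statistic
                if stat < best[0]:
                    best = (stat, (b, sigma_b, w))
    return best  # (KS statistic, best-fit parameters)
```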
Table 1: Best-fit values of the main-beam efficiency (η_mb, equal to b in Equation 1), width of the bias function (σ_B), and weight of the bias function (W) for the HARP (325−375 GHz) and RxA3 (212−274 GHz) receivers, including separate values for periods of misalignment (m), the dates of which are given as footnotes. Also given is the number of observations for each receiver/period. Uncertainties are given in parentheses, where they could be calculated; the uncertainties on η_mb also include the measurement uncertainty.

Receiver        η_mb         σ_B          W            N_obs
HARP            0.57 (0.11)  0.35 (0.01)  0.45 (0.05)  551
RxA3            0.60 (0.11)  0.30 (0.01)  0.30 (0.01)  124
RxA3 m1 (a)     0.45 (0.1)   0.3          0.3          3
RxA3 m2 (b)     0.53 (0.1)   0.3          0.3          16
RxA3m (c)       0.55 (0.14)  0.35 (0.04)  0.35 (0.04)  64
RxA3m m (d)     0.53 (0.1)   0.4          0.4          6

(a) 20120413 – 20121201; (b) 20140508 – 20150605; (c) from 20160101; (d) 20170407 – 20170810.

2.3. Data reduction

In order to obtain a homogeneous dataset, we have created an automated pipeline to reduce this large quantity of data and perform an initial analysis. The aim of the NESS pipeline⁴ is to reduce all existing heterodyne JCMT data of a given source, output a FITS image (a single pixel for RxA3 data and 16 pixels for HARP) and a spectrum extracted from the primary pixel, fit any lines in the spectrum, and output a table of measured values. It is written in Python and makes use of the Starlink software (Currie et al. 2014). For a given source, the pipeline will:

(i) Query the JCMT archive for all observations matching the right ascension (R.A.) and Dec. of the source, with a given instrument, molecule, line, and observing mode, discarding observations marked as "failed" or taken during the day (when the atmosphere is less stable).
(ii) Convert from the T*_A to the T_mb temperature scale, using the new η_mb determinations for both HARP and RxA3 (see Section 2.2), and perform a side-band correction for RxA3m data⁵.
(iii) Reduce all raw files together in a group reduction using ORAC-DR (Jenness et al. 2015), binning the data to channel widths of 1, 2, and 4 km s⁻¹.
(iv) Output the group-reduced file as a FITS image and extract the spectrum (in the case of HARP observations, from the primary pixel).
(v) Fit a soft parabola function (e.g. Olofsson et al. 1993; De Beck et al. 2010; see the sketch at the end of this section) to the CO line (which is assumed to be the only bright line between −100 and +100 km s⁻¹), using the Markov-chain Monte Carlo (MCMC) implementation EMCEE (Foreman-Mackey et al. 2013) to quantify the uncertainties on the line peak, central velocity, width, and line-shape parameters. All three spectral resolutions are fitted simultaneously with equal weighting.
(vi) Calculate the root-mean-square (RMS) noise of each spectrum using two regions, retaining the lower of the two values: i) the region between −400 and −150 km s⁻¹, which does not contain any other commonly detected lines or likely ISM contamination and hence is assumed to be line-free; and ii) the region between −200 and 200 km s⁻¹ after removing the fitted CO line, as this is the part of the spectrum that is manually inspected for, e.g., ISM contamination.
(vii) Output a data table containing the source coordinates, total integration time, RMS noise at each velocity resolution, and the best-fit line parameters, including positive and negative uncertainties.

⁴ Available at https://github.com/swallstrom/JCMTpipeline
⁵ See https://www.eaobservatory.org/jcmt/wp-content/uploads/sites/2/2019/05/RxA3m-SB-Notes-2018.pdf
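The soft parabola profile fitted in step (v) has a simple closed form; below is a minimal sketch of the model function (our own parameterisation of the standard profile, not code from the NESS pipeline):

```python
import numpy as np

def soft_parabola(v, t_peak, v_centre, v_exp, beta):
    """Soft parabola profile: T(v) = t_peak * [1 - ((v - v_centre)/v_exp)^2]^(beta/2),
    set to zero outside |v - v_centre| >= v_exp. beta = 2 gives a parabola;
    small beta gives the flat-topped shape characteristic of optically thin,
    spatially unresolved winds."""
    v = np.asarray(v, dtype=float)
    x = (v - v_centre) / v_exp
    profile = np.zeros_like(v)
    inside = np.abs(x) < 1.0
    profile[inside] = t_peak * (1.0 - x[inside] ** 2) ** (beta / 2.0)
    return profile
```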
2.4. Data analysis

Each spectrum and line fit is manually checked to see whether the line is fit reasonably well. A second pass of the MCMC line fitter, given input parameters from other lines in the same source, is carried out on spectra with a poor line fit. In the absence of other lines, the initial guesses for the parameters are estimated visually. For the remainder of this manuscript, we use the 1 km s⁻¹ spectra for display purposes. For each well-fit spectrum, the radial velocity and velocity width of the line are taken from the soft parabola fit parameters. These are used to define the velocity range over which to integrate the line (using the fit of the corresponding CO line to define the integration region for each 13CO line). We note that this may slightly underestimate the width of some lines which are less well fit by a soft parabola function; for lines with clear line wings we instead take the line edges to be where the line emission falls below 3 × RMS.

The integrated line intensities are used to calculate a 12CO/13CO ratio, and an empirical MLR using the Ramstedt et al. (2008) formula (sketched in code at the end of this section):

    MLR = s_J (I_CO θ_b² D²)^(a_J) v_e^(b_J) f_CO^(−c_J),    (2)

where I_CO is the integrated line intensity in K km s⁻¹, θ_b is the telescope beam size in arcseconds, D is the distance in pc, v_e is the expansion velocity in km s⁻¹, and f_CO is the CO abundance relative to H2. The s_J, a_J, b_J, and c_J parameters, and their uncertainties, are given for fits to the different CO transitions in Table A.1 of that work. In cases where no 12CO line is detected (peak temperature < 3 × RMS), we calculate an upper limit on the MLR, assuming an expansion velocity of 10 km s⁻¹ unless its value is known from other lines in the same source.

The integrated line intensities include uncertainties calculated from the RMS noise of the spectrum, the uncertainty on η_mb, and the uncertainty in the line width, added in quadrature. The distance uncertainties (for a discussion of their determination and uncertainties, see Scicluna et al. 2022) are also propagated. The expansion velocity is taken to be half the line width, with an uncertainty equal to the channel width. f_CO is assumed to be 1 × 10⁻³ for carbon-rich stars and 2 × 10⁻⁴ for oxygen-rich stars (Ramstedt et al. 2008). The chemical type of the stars is determined as oxygen-rich or carbon-rich from mid-infrared spectra (see Scicluna et al. 2022) as a first approximation, and assumed to be oxygen-rich if no clear determination can be made. This causes S stars to be grouped with O-rich stars, since they do not have strong SiC features. From the literature sample (see Section 3.6 and Appendix B), there are 221 sources with unambiguous chemical classifications. Only 3 of these 221 sources had classifications that conflicted with the mid-infrared spectroscopic classes: IRAS 20077-0625, IRAS 20141-2128, and IRAS 23438+0312. The chemical classifications for these sources have been fixed (they are now classified as O-rich, C-rich, and C-rich, respectively), and we are confident the rest of the sample has similarly low error rates. However, further inspection of the chemical types is left to a future paper.

To display distributions of MLR and gas-to-dust ratios in later figures, we show both histograms and kernel density estimates (KDEs). These KDEs are based on a Gaussian kernel, whose bandwidth is calculated using cross-validation, by systematically testing a range of possible bandwidths. This calculated bandwidth is then added in quadrature to the median uncertainty on the plotted parameter (MLR or gas-to-dust ratio), so the resulting KDE is representative of the full uncertainty in the distribution.

Fig. 1: Distance vs. dust-production rate (DPR) for the full NESS sample, in grey. Overlaid are the sources which have been observed and are included in the current analysis, coloured by tier. Cyan crosses show the locations of the carbon-rich sources.
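Equation (2) is straightforward to evaluate. A minimal sketch follows, in which the coefficient defaults are placeholders only; the transition-specific s_J, a_J, b_J, and c_J must be taken from Table A.1 of Ramstedt et al. (2008):

```python
def empirical_mlr(i_co, theta_b, d_pc, v_exp, f_co,
                  s_j=1.0e-12, a_j=1.0, b_j=1.0, c_j=1.0):
    """Empirical mass-loss rate (Msun/yr) from Equation (2), after
    Ramstedt et al. (2008).

    i_co: integrated intensity (K km/s); theta_b: beam size (arcsec);
    d_pc: distance (pc); v_exp: expansion velocity (km/s); f_co: CO/H2
    abundance (e.g. 1e-3 for C-rich, 2e-4 for O-rich stars). The default
    s_j, a_j, b_j, c_j values are placeholders, not the published fits.
    """
    return s_j * (i_co * theta_b**2 * d_pc**2) ** a_j * v_exp**b_j * f_co ** (-c_j)
```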
3. Results and discussion

A total of 485 sources from the full NESS sample have been included in this analysis: 259 sources with CO (2–1) observations, and 428 sources with CO (3–2) observations. The full analysis results, with measured parameters and calculated values, are available online at the CDS via VizieR. Table C.1 lists the columns available in the online dataset.

3.1. Detection statistics

Figure 1 shows the full NESS sample and the sources that are included in the current analysis. The three middle tiers ("low", "intermediate", and "high") are well sampled (∼60–75% observed), while fewer sources in the "extreme" tier have been observed (∼30%) so far. A larger fraction of the less numerous and generally brighter carbon-rich sources in NESS have been observed so far, compared to oxygen-rich sources.

The observation and detection statistics for the NESS sample so far, both in total and divided by tier, are shown in Table 2 and Fig. A.1, respectively. In total, we detect about 80% of targeted sources in CO (2–1) and 75% in CO (3–2). Of these, 59 sources appear to have no previously published CO observations; these sources are marked in the online table. Detection rates for 13CO are around 40% for 13CO (2–1) and 30% for 13CO (3–2). We note that 13CO has not been targeted towards all sources with the JCMT, as observing priority has so far been given to sources with 12CO (2–1) or (3–2) detections brighter than 0.3 K.

The "low" tier has lower detection rates than the other tiers. This is unsurprising, as it includes sources with no measurable DPR, which may not be able to launch a significant dust-driven wind at all. Detection statistics for the three highest tiers are comparable, despite relatively fewer sources having been observed so far in the most distant tiers ("high" and "extreme"), indicating that our observing strategy is not biased towards a particular type of source. We also note that, overall, the C-rich sources show higher detection rates, consistent with their generally higher MLRs.
Within each chemistry, the distributions of MLRs for different line profiles are comparable. There is a possible indication of lower typical MLRs for O-rich stars with non-soft-parabola line shapes, but the number of such sources is too low to support a definitive conclusion. The most common deviations from the soft parabola shape are asymmetry and/or line wings. Furthermore, 11 of the 485 sources clearly show multiple velocity components centred on the same velocity: double winds. The majority (7/11) of these double-wind sources are O-rich, and they cluster around a low MLR of a few × 10−7 M⊙ yr−1. The C-rich double-wind sources have a range of MLRs, between 2.2 × 10−7 and 1.7 × 10−5 M⊙ yr−1.

Our results are similar to what was found by Knapp et al. (1998) for a sample of 43 CO-bright AGB stars: 30 (70%) of their sources show a parabolic line shape, six are asymmetric, and seven show double winds. Of their 13 C-rich sources, one is asymmetric and one shows an uncertain double wind, so 85% show a parabolic line shape. They also find that their stars with double winds tend to have relatively low MLRs, largely due to the slower wind component, which also has a lower expansion velocity than most AGB winds, and speculate that this corresponds to a slow wind following a period of increased mass loss.

3.3. Carbon isotopic ratios

The 12C/13C ratio is a tracer of the evolutionary state and nucleosynthesis in AGB stars (e.g. Milam et al. 2007; Ramstedt & Olofsson 2014). The previous evolution on the red giant branch tends to produce low 12C/13C ratios, around 5−10 (Pavlenko et al. 2003), especially for low-mass stars. Then, during the AGB phase, dredge-ups will increase the amount of carbon (and specifically 12C) in the surface layers of the star, which also increases the 12C/13C ratio. A further effect takes place in the more massive AGB stars, above ∼4 M⊙, where hot-bottom burning will consume 12C and lower the 12C/13C ratio (Karakas & Lattanzio 2014). From these evolutionary processes, we would naïvely expect a correlation between 12C/13C ratio and MLR. Based on the above processes and a reasonable IMF, we expect that most early AGB stars have low 12C/13C ratios and low MLRs, and that both quantities tend to increase during the AGB phase. Hot-bottom burning lowers the 12C/13C ratios in the rarer intermediate-mass AGB stars, which generally have higher MLRs.

Our data contain 80 sources with detections in both 12CO and 13CO so far. We measure approximate 12CO/13CO ratios by dividing the integrated intensities of both lines and multiplying by a factor of 0.874, equal to the frequency ratio cubed, to correct for differences in line strength (as is also done in, e.g., De Beck et al. 2010). In most cases this provides a lower limit on the ratio, as most 12CO lines are optically thick and hence their integrated intensities do not scale linearly with abundance, unlike their optically thin counterparts. Saberi et al. (2020) examine this question in more detail, showing that chemical and excitation effects compound these optical-depth differences. This means that inference of reliable 12CO/13CO isotopologue ratios requires the use of line radiative-transfer models. As a result, we limit our discussion here to observed line ratios, and do not attempt to directly infer the isotope ratios themselves, although in an ideal case these two parameters should be related.
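The 0.874 factor quoted above follows directly from the rest frequencies of the two isotopologues; a quick check for the J = 2–1 pair:

```python
# Rest frequencies of the J = 2-1 transitions (GHz)
nu_12co = 230.538   # 12CO(2-1)
nu_13co = 220.399   # 13CO(2-1)

# Line-strength correction factor: the frequency ratio cubed
factor = (nu_13co / nu_12co) ** 3
print(f"(nu_13CO / nu_12CO)^3 = {factor:.3f}")   # -> 0.874
```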
The measured 12CO/13CO ratios are plotted against MLR in Figure 4 (excluding the C-rich IRAS 19008+0726, which has a ratio of 69 and an MLR of 4.6 × 10−6 M⊙ yr−1, and is optically thin). We find a range of ratios from 0.62 to 69, with a mean value of 9.0 ± 0.7 and a median value of 7.3. For O-rich sources, we find slightly lower values, with a mean value of 8.6 ± 0.6 and a median of 6.9, while for C-rich sources we find slightly higher values, with a mean of 10 ± 2 and a median of 8.1. This is similar to the results from De Beck et al. (2010) who, for a sample of 27 sources, find a mean value of 10 ± 2 and a median value of 8.1 in both C-rich and O-rich sources.

Fig. 2: Three typical examples of the different CO line shapes, with the best-fit soft parabola shown with a red dotted line. From left to right: LP And (IRAS 23320+4316) shows a soft parabola profile, V360 And (IRAS 01556+4511) shows a double-wind profile, and ST Her (IRAS 15492+4837) shows an asymmetric profile with line wings.

Fig. 3: Distributions of MLRs derived from CO lines with a soft parabola shape, a double wind, or another non-soft-parabola shape, shown separately for O-rich and C-rich sources.

Fig. 4: Plot of the 12CO/13CO ratio as a function of mass-loss rate. Symbols distinguish the CO(2-1) and CO(3-2) lines of O-rich and C-rich sources; optically thin sources are marked. Note that the optically thin C-rich IRAS 19008+0726, with a ratio of 69 and an MLR of 4.6 × 10−6 M⊙ yr−1, has been excluded for clarity.

Limiting the analysis of our data to sources with optically thin 12CO lines should provide better estimates of the 12CO/13CO ratios. We have visually identified ten sources (two of which are carbon-rich) where at least one of the 12CO lines shows a clear flat-topped or double-peaked profile, indicative of a low optical depth (Habing & Olofsson 2003). An Anderson–Darling test (Anderson & Darling 1952; sketched below) shows with high confidence (p = 0.001) that the optically thin sources are systematically different (that is, their isotope ratios are drawn from a different underlying distribution) from the rest of the isotopic ratios. The 12CO/13CO ratios from these few optically thin lines have a range of 3 to 79, a mean value of 18 ± 6, and a median value of 10.5. This is slightly higher than the values from the full sample, as expected, since the CO line ratio should be less underestimated than it is for optically thick lines, though we note that having only eleven sources limits the reliability of these statistics.
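A minimal sketch of the two-sample comparison behind the Anderson–Darling statement above, using scipy with mock ratio values (the numbers are placeholders; only the test call mirrors our analysis):

```python
import numpy as np
from scipy.stats import anderson_ksamp

rng = np.random.default_rng(1)
# Mock 12CO/13CO ratios standing in for the measured values
thick = rng.lognormal(np.log(7.0), 0.5, 70)   # optically thick majority
thin = rng.lognormal(np.log(12.0), 0.7, 11)   # visually identified thin lines

res = anderson_ksamp([thin, thick])
print(res.statistic, res.significance_level)
```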
We also compare our results to those of Ramstedt & Olofsson (2014) and Milam et al. (2009), who use radiative-transfer modelling to determine abundances of 12CO and 13CO, and hence are better able to account for optical-depth effects. Ramstedt & Olofsson (2014) have a sample of ∼60 stars, evenly split between oxygen-rich, carbon-rich, and S-type AGB stars. Overall they find a range of 12CO/13CO ratios of 2 to 100, a mean value of 22 ± 3, and a median value of 17. The carbon-rich sources include the full range of ratio values, and have a larger mean value (27 ± 1) but the same median value of 17. Milam et al. (2009) have a sample of 15 AGB stars, of which 11 are carbon-rich. The carbon-rich sources have a higher mean 12CO/13CO ratio of 38 ± 2, with a median of 29, while the oxygen-rich sources have a mean value of 27 ± 3 but a higher median value of 32. Both studies find larger values than this paper, even when we limit ourselves to the optically thin sources, reflecting their mitigation of the optical-depth effects on the CO line ratio. Overall, across both our data and previous studies, carbon-rich sources are found to have somewhat higher 12CO/13CO ratios, as expected from the repeated dredge-ups that are the cause of their carbon-rich nature (Karakas & Lugaro 2016), but the differences with oxygen-rich sources are not as significant as might be expected, indicating that other effects have a larger impact on these measurements.

Comparing our 12CO/13CO ratios with MLR, we find a weak negative correlation (Spearman correlation coefficient of −0.4) between isotopic ratio and MLR (Fig. 4). This correlation is more likely due to higher-MLR sources having more optically thick 12CO lines, which will decrease their 12CO/13CO ratios. The large scatter – i.e., the difference in isotopic ratios derived from the CO(2-1) and CO(3-2) lines for individual sources – even at low MLRs, seems to indicate that optical depth has a stronger effect on the 12CO/13CO ratio than evolutionary factors such as dredge-up. In contrast, the full radiative-transfer analysis by Ramstedt & Olofsson (2014) found no evidence for such a correlation.

3.4. Mass-loss rates

Empirical gas MLRs were calculated using the formula from Ramstedt et al. (2008) (Equation 2), as described in more detail in Section 2.4. There are significant uncertainties associated with this formula, mainly from the formula parameters themselves, which depend on the observed CO transition, but also from the measured input parameters such as integrated intensity and distance. The formula may also produce unreliable estimates at high MLRs (Ṁ ≥ 10−5 M⊙ yr−1; De Beck et al. 2010). Altogether, this results in uncertainties on the calculated MLRs of about an order of magnitude, but the aggregate results are still useful as a first approximation in the absence of full radiative-transfer modelling.

Calculated MLRs range from 1.9 × 10−8 to 1.1 × 10−4 M⊙ yr−1, whereas DPRs range from 1.2 × 10−11 to 1.1 × 10−6 M⊙ yr−1 (Scicluna et al. 2022). We find that the MLRs tend to increase with the NESS tiers, as expected, since the tiers' definition includes an increase in DPR, which is related to MLR. The median MLR values for the different tiers are: "low": 7.5 × 10−8, "intermediate": 3.2 × 10−7, "high": 3.2 × 10−6, and "extreme": 8.4 × 10−6 M⊙ yr−1.

We also find differences between the O-rich and C-rich sources, as expected from AGB evolution. For O-rich sources, the mean MLR is 2.7 ± 0.2 × 10−6 and the median MLR is 8.3 × 10−7 M⊙ yr−1. The C-rich sources have slightly higher average MLRs, with a mean value of 7 ± 1 × 10−6 and a median of 4.0 × 10−6 M⊙ yr−1.

We find a best fit relating the MLR and DPR (including the upper limits) with a broken power-law distribution, using the Markov-Chain Monte Carlo (MCMC) implementation EMCEE (Foreman-Mackey et al. 2013), as shown in Figure 5. In log(MLR)-log(DPR) space, this straight line follows the equation
log(MLR / M⊙ yr−1) = 0.82^{+0.06}_{−0.06} log(DPR / M⊙ yr−1) + 0.82^{+0.51}_{−0.52} ,   (3)

until the break at log(DPR) = −7.45^{+0.15}_{−0.13}, or DPR ∼ 3.5 × 10−8 M⊙ yr−1. This "saturation value" for the MLR is at MLR = 5.2^{+1.0}_{−0.8} × 10−6 M⊙ yr−1. The given uncertainties are the 68% credible interval on each parameter, and a corner plot of the full MCMC distributions is shown in Figure A.2 (a sketch of this type of fit is given below). Despite the large uncertainties on each data point and the inherent scatter, the fit is well constrained and clearly implies there is no single gas-to-dust ratio for AGB stars, as discussed further in Section 3.5. We also note here that the DPR estimates are based on SED fits that only take the warm dust emission into account. In contrast, the MLR is based on CO emission, which integrates over larger radii in the circumstellar envelope. Therefore, the MLR and DPR may be probing material ejected at different epochs.

Interestingly, the position of the break in the power law coincides with a relative paucity of sources in both the MLR and DPR distributions, as seen in the histograms in Figure 5, at values which correspond to the high-DPR end of the "high" tier of NESS sources. We note this paucity of sources is also seen in the full NESS sample, so it is not due to a bias in the data observed so far. It is possible this is an evolutionary effect, where the majority of low-mass stars have lower MLRs than ∼10−6 M⊙ yr−1, whereas intermediate-mass stars will instead tend to cluster around the highest possible MLR, which is expected from the theory of dust-driven winds to be ∼10−5−10−4 M⊙ yr−1 (Lamers & Cassinelli 1999). It is also possible this is partly an observational effect, as the saturation of the CO lines for higher MLRs will make it more difficult to derive accurate MLRs from their integrated intensities. Finally, it could also be a temporal or external effect, such as from coincidences in the timings of thermal pulses, or the impact of binarity.

We find that MLRs calculated from CO (3–2) lines are systematically higher than MLRs from CO (2–1) lines, by a median factor of ∼1.8, though they also have larger uncertainties. A systematic offset of similar order was noted in De Beck et al. (2010), with Eq. 2 predicting higher MLRs for higher-J transitions (their figure 10). They explain this as due to differences in the handling of cooling between Ramstedt et al. (2008) and De Beck et al. (2010). Better radiative-transfer modelling of the CO lines is required to clarify the cause of this offset.

Fig. 5: Plot of log(MLR), including error bars, vs log(DPR), with a best-fit broken power law in dashed black. The grey lines show random draws from the posterior distribution, as an indication of the uncertainty in the fit. The marginal distributions for the MLR and DPR are shown as histograms; a histogram (dashed) is also shown for the cases where the MLRs are upper limits.
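The broken power-law fit can be reproduced in outline with emcee; the sketch below fits a broken-linear model in log space to mock data with Gaussian errors. It is a simplified stand-in for the actual fit, which additionally treats the upper limits.

```python
import numpy as np
import emcee

def model(logdpr, m, b, xb):
    """Broken-linear model in log space: slope m up to the break xb,
    constant (saturated) beyond it, cf. Eq. (3)."""
    return np.where(logdpr < xb, m * logdpr + b, m * xb + b)

def log_prob(theta, x, y, yerr):
    m, b, xb = theta
    if not (0.0 < m < 3.0 and -10.0 < b < 10.0 and -12.0 < xb < -5.0):
        return -np.inf                       # flat priors
    resid = y - model(x, m, b, xb)
    return -0.5 * np.sum((resid / yerr) ** 2)

# Mock data standing in for log(DPR), log(MLR), and their errors
rng = np.random.default_rng(2)
x = rng.uniform(-10.5, -6.5, 200)
y = model(x, 0.82, 0.82, -7.45) + rng.normal(0.0, 0.4, x.size)
yerr = np.full_like(y, 0.4)

nwalkers, ndim = 32, 3
p0 = np.array([0.8, 0.8, -7.5]) + 1e-3 * rng.normal(size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(x, y, yerr))
sampler.run_mcmc(p0, 2000, progress=False)
flat = sampler.get_chain(discard=500, flat=True)
print(np.percentile(flat, [16, 50, 84], axis=0))   # per-parameter intervals
```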
3.5. Gas-to-dust ratios

Despite indications that there is not a constant AGB gas-to-dust ratio across the whole parameter space, it is still useful to characterise its distribution from our large sample using summary statistics. A histogram and KDE of the gas-to-dust ratios are shown in Figure 6. Dividing the MLR by the DPR, we find a very large range of gas-to-dust ratios, between 2 and 16 600. The extreme values of this range are probably not accurate, and we reiterate that there are large uncertainties on both the MLR and DPR values. To quantify the average gas-to-dust ratio, we have taken the 16th, 50th, and 84th percentiles of the gas-to-dust ratio distribution to find a median value of 290, and a 68% confidence interval ranging from 80 to 790. There are also large differences between the O-rich and C-rich sources. The O-rich sources have a median value of 250 and a 68% confidence interval ranging from 70 to 580, while for C-rich sources the median value is 680 and the 68% confidence interval ranges from 160 to 1990.

Our gas-to-dust ratio estimates are higher than what has been found in the literature, but broadly consistent when the large uncertainties are taken into account. Knapp (1985) find gas-to-dust ratios of 160 for O-rich sources and 400 for C-rich sources in their sample of 40 AGB stars, as compared with 250 and 680 for our O-rich and C-rich sources, respectively. Groenewegen et al. (1999) find a range of gas-to-dust ratios from 7.9 to 1570, with a median value of 152, for a sample of 72 sources, of which ∼60% are O-rich; our overall median gas-to-dust value of 290 is about a factor of two higher, from a much larger sample that is ∼87% O-rich.

We have also calculated non-parametric Spearman rank correlations between the gas-to-dust ratio and the MLR, DPR, and expansion velocity (a minimal sketch is given at the end of this subsection). We find a significant correlation only for the DPR: a negative correlation coefficient of −0.53 with the gas-to-dust ratio. Given that the MLR and DPR are correlated, the lack of correlation between MLR and gas-to-dust ratio, even though the gas-to-dust ratio is correlated with DPR, is unexpected: why should these two correlations cancel out so effectively? One possibility is that as the density at the base of the wind increases, more dust condenses. This then translates into a higher radiative momentum available to accelerate the wind; however, the sub-linear scaling of MLR with DPR shows that this increased momentum does not translate directly into more efficient mass loss. This in turn explains the increase in velocity at higher MLR (see below): as the condensation efficiency increases, a larger fraction of momentum must be translated to velocity as the gas-to-dust ratio decreases. In contrast with our results, neither Knapp (1985) nor Groenewegen et al. (1999) find any significant correlation between gas-to-dust ratio and MLR or DPR.

Fig. 6: Histogram and KDE of the gas-to-dust ratios, for O-rich and C-rich sources, respectively.

The power-law relationship between MLR and DPR found in Section 3.4 implies that the gas-to-dust ratio decreases with increasing MLR and DPR, such that sources with higher mass-loss rates have a wind with a larger proportion of dust. This is broadly consistent with the scenario of a dust-driven wind. While the C-rich sources follow this trend for higher gas-to-dust ratio at higher MLR in Figure 5, their population is offset towards higher mass-loss rates. This could be a result of the fact that carbon-rich dust is more efficient at driving the wind, and therefore less dust is required to achieve a higher MLR. However, it is possible that systematics in the assumptions made in modelling the dust SED contribute to this difference.
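The rank correlations above reduce to calls like the following scipy sketch (mock values; with the real measurements, the gas-to-dust ratio vs. DPR yields ρ ≈ −0.53):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
# Mock values standing in for the measured DPRs and gas-to-dust ratios
dpr = 10.0 ** rng.uniform(-11.0, -6.0, 300)    # Msun/yr
g2d = 10.0 ** rng.normal(2.5, 0.5, 300)        # gas-to-dust ratio

rho, p = spearmanr(g2d, dpr)
print(f"Spearman rho = {rho:.2f}, p = {p:.2g}")
```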
3.6. Comparison with combined literature sample

We compare our empirical MLRs and expansion velocities against a literature sample, which combines the results of Loup et al. (1993); Schöier & Olofsson (2001); Olofsson et al. (2002); Gonzalez Delgado et al. (2003); Ramstedt et al. (2009); and De Beck et al. (2010). These studies are described in Appendix B and plotted against the NESS results in Figure 7. Overall, our results are similar to the literature results, mainly probing sources with MLR ∼10−7−10−5 M⊙ yr−1 and expansion velocities ∼5–20 km s−1, with some outliers at higher velocities. The current NESS data show some dearth of low-MLR and low-velocity sources, and fewer of the high-velocity outliers, but also show some excess of lower-MLR sources at all velocities and some high-MLR outliers. Some of these outliers are most likely not AGB stars; for example, the O-rich source with MLR ∼10−5 M⊙ yr−1 and velocity ∼5 km s−1 is IRAS 19597+3327A, an infrared source that has not previously been studied in CO, with a very bright but narrow CO line. However, others seem to be AGB stars that have never been observed in CO before; for instance, the C-rich IRAS 00084-1851 (AC Cet), which has a very low MLR of ∼2 × 10−8 M⊙ yr−1 and an expansion velocity around 6 km s−1, is classified as a long-period variable (Samus' et al. 2017). There are only two additional C-rich stars in the NESS data that have not been previously observed in CO: IRAS 17565-2035 and IRAS 19321+2757 (IRC +30374).

We have used the same MCMC implementation as in Section 3.4 to fit a broken power law to the MLR as a function of expansion velocity, as shown in Figure 7. We find

log(MLR / M⊙ yr−1) = 0.120^{+0.008}_{−0.008} (v_inf / km s−1) − 7.4^{+0.9}_{−0.9} ,   (4)

until the break at v_inf = 16.8^{+0.8}_{−0.7} km s−1. This corresponds to an MLR saturation value of 3.9^{+0.5}_{−0.4} × 10−6 M⊙ yr−1. The given uncertainties are the 68% credible interval on each parameter, and a corner plot of the full MCMC distributions is shown in Figure A.3 (the relation is evaluated numerically below). To determine whether this slope is due entirely to the dependence of MLR on expansion velocity in Eq. 2, we perform a t-test with the null hypothesis equal to the slope expected given the values of b_J in Ramstedt et al. (2008); the slope estimates obtained in this way are around 2.3. The p-value of the null hypothesis is ∼10−23, so it is clearly rejected at high significance.

There are only 18 NESS sources with v_inf > 23 km s−1, of which 12 are carbon stars with velocities up to 33 km s−1. Except for one O-rich Mira (v_inf ≈ 24 km s−1), the remaining sources are either RSGs or post-AGB/binary stars, or they show clear deviations from the soft-parabola profile in at least one of the two lines. OH/IR stars in our sample are restricted to v_inf < 22 km s−1. These results are consistent with the expectation from hydrodynamic models – single M-type models tend to have v_inf < 25−30 km s−1, while single C-star models extend beyond 30 km s−1 (Bladh et al. 2019a,b). Our results are consistent with those of De Beck et al. (2010) – once we eliminate RSGs, sources that have evolved beyond the AGB, and sources with line profiles that deviate from a soft parabola, the highest expansion velocity in their sample (their Table A.1) belongs to an OH/IR star (25 km s−1).
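Using the central values of Eq. (4), the MLR–velocity relation and its saturation can be evaluated as below; small differences from the quoted saturation MLR (3.9 × 10−6 M⊙ yr−1) arise from rounding of the published coefficients.

```python
import numpy as np

def log_mlr_from_vinf(v, slope=0.120, intercept=-7.4, v_break=16.8):
    """Broken log-linear MLR(v_inf) relation, Eq. (4) central values."""
    sat = slope * v_break + intercept        # saturation level in log10
    return np.where(v < v_break, slope * v + intercept, sat)

for v in (5.0, 10.0, 16.8, 25.0):
    mlr = float(10.0 ** log_mlr_from_vinf(v))
    print(f"v_inf = {v:4.1f} km/s -> MLR ~ {mlr:.1e} Msun/yr")
```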
Vassiliadis & Wood (1993) noted that the MLR for Galactic Miras increases exponentially with pulsation period for periods ≲500 d. Beyond this period, the MLR seems to saturate at values consistent with the superwind phase, where the MLR is at the radiation-pressure limit given by L/(c v_inf) with v_inf ≈ 15 km s−1, which is comparable to the location of the knee (v_inf ≈ 17 km s−1) in Figure 7. De Beck et al. (2010) found a similar trend, with a somewhat higher period before saturation (∼850 d, their Figure 14). In Figure 8, we plot our mean MLRs (averaged over both CO lines) against the median values calculated from the pulsational periods compiled by McDonald et al. (2025) for the NESS sample. Sources with v_inf > 17 km s−1, corresponding to the break found in the MLR vs. v_inf plot, are highlighted in the figure and grouped by SIMBAD object type. Our data replicate the increasing trend followed by saturation around ∼750 d. The rising trend of MLR with pulsation period at low MLRs is well reproduced by the relation from De Beck et al. (2010) (dashed curve in the figure). The MLR saturation values quoted by Vassiliadis & Wood (1993) and De Beck et al. (2010) are roughly an order of magnitude higher than our value. This difference is not unexpected, given the scatter in their data, the tendency of the Ramstedt et al. (2008) relation to underestimate high MLRs, and the bias of literature samples toward higher-MLR stars (see below).

To compare our results with the literature sample, we have divided the NESS results into two subsamples: the 221 sources that are part of the literature sample (NESS-lit) and the 272 that are not (NESS-nonlit). Both subsamples have similar median MLR values: 1.7 × 10−6 M⊙ yr−1 and 7.5 × 10−7 M⊙ yr−1 for NESS-lit and NESS-nonlit, respectively. The range of MLRs is also comparable between NESS-lit (3.1 × 10−8−4.4 × 10−5 M⊙ yr−1) and NESS-nonlit (1.9 × 10−8−1.9 × 10−5 M⊙ yr−1).

Fig. 7: MLR (including error bars) vs expansion velocity, compared with samples from the literature (Loup et al. 1993; Schöier & Olofsson 2001; Olofsson et al. 2002; Gonzalez Delgado et al. 2003; Ramstedt et al. 2009; De Beck et al. 2010). Oxygen-rich (and S-type) sources are plotted in blue while carbon-rich sources are in orange. A broken power-law fit to the NESS data is shown in dashed black, and the grey lines show random draws from the posterior distribution as an indication of the uncertainty in the fit.
Most (60%) of the sources in the NESS-nonlit sample with uncertain chemical classifications are in the low tier, whereas the known carbon stars from the NESS-lit sample are typically found in the high and in- termediate tiers. The uncertain classifications therefore do not significantly alter our conclusions. Furthermore, for NESS-lit, 97% of sources are detected in CO (2–1) and 90% are detected in CO (3–2), as compared with only ∼60% in both lines for the NESS-nonlit subsample. Many of the detected NESS-nonlit sources have no previously published CO data. Another differ- ence is the proportion of sources with a soft-parabola-shaped CO line: 80% of the NESS-lit sources but nearly 100% of the NESS-nonlit sources, showing that unusual sources are over- represented in the literature sample. This shows the value in building a volume-limited sample: NESS includes a lot of under- studied sources, forming a more complete picture of the local population of AGB stars and, while many of these observations are non-detections and hence provide only upper limits, this is still a vital aspect of characterizing the entire AGB population, especially given our homogeneous observing setup. 0 500 1000 1500 2000 Period (d) 8 7 6 5 4 log (MLR / M yr 1) C-rich AGB (24) OH/IR (12) Mira (12) RSG (5) S-type AGB (1) Long-period var. (1) Saturation MLR De Beck et al. (2010) Fig. 8: Mean MLR vs median period for our sample (grey cir- cles). Sources with vinf > 17 km s−1are coloured according to their SIMBAD object type. The fit from De Beck et al. (2010) (dashed curve) reproduces the increasing trend in our data well. The saturation MLR from our broken-power law fit (Equation 4, solid line) is also shown for comparison. 10 7 10 6 10 5 MLR (M /yr) 0 2 4 6 8 10 12 14 Number of sources Chemical type = O 10 7 10 6 10 5 MLR (M /yr) Chemical type = C NESS-lit NESS-nonlit Fig. 9: Histogram and KDE of the MLR derived for NESS sources which are in the literature sample (NESS-lit), and those which are not (NESS-nonlit). We also compare our derived MLR and expansion velocity values with those calculated in the literature papers described in Appendix B for the 162 individual sources with CO detections in both NESS and the literature sample. Dividing our MLR with the mean of literature values for each source yields a range between 0.02 −8.7 with a median ratio of 0.88, while for the expansion velocity we find a range between 0.5 −1.7 with a median ra- tio of 0.96, shown in Figure 10. While this spread is large, the empirical MLR calculations have uncertainties of about an order of magnitude, and most values are within these uncertainties. We also note that the literature MLRs are calculated in a wide variety of ways (see Appendix B), for which uncertainties are not always quantified. The ratio of the expansion velocities has a narrower distribution, centred on unity as expected, though some ratios show discrepancies of up to a factor ∼2. Some of the literature Article number, page 9 of 18 A&A proofs: manuscript no. articlev3 10 7 10 6 10 5 NESS value 10 8 10 7 10 6 10 5 10 4 Literature value Mass-loss rate 10 20 30 40 NESS value 0 10 20 30 40 Literature value Velocity O-rich C-rich Fig. 10: Mass-loss rates and expansion velocities derived in this paper, as compared with the values from literature, for sources common to both samples. The dashed line corresponds to a ratio of 1 and the shaded region covers the central 68% of values. 
Some of the literature observations are of CO (1–0) or SiO lines, rather than CO (2–1) or (3–2), which may explain some of the discrepancy; for example, the higher excitation of SiO lines may result in a lower velocity, since SiO tends to be emitted in the inner wind, while CO (1–0) may produce broader lines, since it probes the outermost parts of the envelope. Furthermore, low signal-to-noise observations can underestimate line widths. We note that, of the spectra in our dataset with inferred expansion velocities outside the 68th percentile, almost 80% have soft parabola shapes. This indicates that the NESS velocity estimates for these spectra are reliable despite their disagreement with the literature values.

4. Conclusions

This paper has presented initial CO results from the NESS survey, observed and analysed in a homogeneous way using a new JCMT data reduction pipeline. We have demonstrated the advantages of a volume-limited sample, like NESS, for probing a large range of CO mass-loss rates. This first data release contains CO observations for 485 sources, which are divided into four tiers with increasing distance and dust production rate (DPR). We summarize our findings as follows:

– We find overall detection rates of 81% for CO (2–1) and 75% for CO (3–2), including 59 sources with no previously published CO detections.
– 82% of CO lines conform to a soft parabola shape, while 11 sources show a double wind. The majority of these double-wind sources are oxygen-rich, and they tend to have lower-than-average mass-loss rates, around a few × 10−7 M⊙ yr−1.
– Estimated 12CO/13CO ratios have a median of 7.3 for the full sample and a median of 10.5 for the few sources where the 12CO line appears to be optically thin. Carbon-rich sources have overall slightly higher values than oxygen-rich ones, but the small differences indicate that other effects, such as optical depth, have a larger impact on the estimated ratios. We also find a weak negative correlation between the 12CO/13CO ratio and mass-loss rate, which is also likely due to optical-depth effects.
– We calculate gas mass-loss rates (MLRs) using the empirical formula from Ramstedt et al. (2008), resulting in uncertainties of about an order of magnitude. Overall, these estimates are similar to values found in the literature and from models.
– We find a power-law relation between the MLR and DPR, up to an MLR saturation value of 5.3^{+1.0}_{−0.8} × 10−6 M⊙ yr−1, implying there is no single gas-to-dust ratio for the population of AGB stars.
– We show the distributions of gas-to-dust ratios for both oxygen-rich and carbon-rich AGB stars, which have median values of 250 and 680, respectively. The gas-to-dust ratio is found to be negatively correlated with the DPR, indicating that the dust-production process is more efficient at higher DPR, lowering the gas-to-dust ratio. While this correlation is at least in part due to the definition of the gas-to-dust ratio in terms of the MLR and DPR, the lack of correlation with MLR may indicate a change in the distribution of radiative momentum towards greater acceleration at higher DPR, explaining the increase in velocity at higher DPR.
– We find a power-law relationship between MLR and expansion velocity, up to an MLR saturation value of 3.9^{+0.5}_{−0.4} × 10−6 M⊙ yr−1, which corresponds to a velocity of ∼17 km s−1. This is similar to the MLR saturation value found for the MLR-DPR relation, though the two values are not within each other's credible intervals.
– Comparing the NESS results with a large combined literature sample shows that high mass-loss-rate sources are over-represented in the literature sample, especially among carbon-rich sources. The literature sources also have higher expansion velocities.
– The most striking difference between the NESS results and the literature sample is in the detection rates. Over 90% of sources that are in the literature sample are detected in our data, while only ∼60% of sources not in the literature sample are detected, and many of these have no previously published CO data. The proportion of sources showing a soft parabola line shape also differs: about 80% of sources in the literature sample show a soft parabola shape, compared with over 98% of sources not in the literature sample. These statistics reflect the under-representation of low-MLR sources and over-representation of extreme or unusual sources in literature samples. NESS detects significant numbers of low-MLR sources despite only being designed to sample them in a small volume, reflecting a potentially large bias in the literature.
– We also compare the calculated MLRs for individual sources with literature values, which are found to be consistent within the (large) uncertainties.

Overall, the initial analysis of 485 NESS sources highlights the importance of our volume-limited approach in characterizing the local AGB population as a whole, and of including upper limits derived from non-detections, especially given our homogeneous observing strategy. As illustrated by the discrepancy in detection rates between the NESS sources included in and excluded from the literature sample, NESS is probing more low-MLR and under-observed sources.

Acknowledgements

SHJW acknowledges support from the Research Foundation Flanders (FWO) through grant 1285221N, from the ERC consolidator grant 646758 AEROSOL, from the Ministry of Science and Technology of Taiwan under grants MOST104-2628-M-001-004-MY3 and MOST107-2119-M-001-031-MY3, and from Academia Sinica under AS-IA-106-M03. SS acknowledges support from UNAM-PAPIIT Programs IA104822 and IA104824. FK acknowledges support from the Spanish Ministry of Science, Innovation and Universities, under grant number PID2023-149918NB-I00. This work was also partly supported by the Spanish program Unidad de Excelencia María de Maeztu CEX2020-001058-M, financed by MCIN/AEI/10.13039/501100011033. TD is supported in part by the Australian Research Council through a Discovery Early Career Researcher Award (DE230100183). M.M. and R.W. acknowledge support from the STFC Consolidated grant (ST/W000830/1). JH thanks the support of NSFC project 11873086. HK thanks the support of the National Research Foundation of Korea (NRF) grant (RS-2021-NR058398) and the Korea Astronomy and Space Science Institute (KASI) grant (Project No. 2025184102), both funded by the Korean Government (MSIT). JPM acknowledges research support by the National Science and Technology Council of Taiwan under grant NSTC 112-2112-M-001-032-MY3.

The James Clerk Maxwell Telescope is operated by the East Asian Observatory on behalf of The National Astronomical Observatory of Japan; Academia Sinica Institute of Astronomy and Astrophysics; the Korea Astronomy and Space Science Institute; and the Center for Astronomical Mega-Science (as well as the National Key R&D Program of China with No. 2017YFA0402700).
Additional funding support is provided by the Science and Technology Facilities Council of the United Kingdom and participating universities in the United Kingdom and Canada. This paper made use of JCMT observations under program IDs M17BL002 and M20AL014. The James Clerk Maxwell Telescope has historically been operated by the Joint Astronomy Centre on behalf of the Science and Technology Facilities Council of the United Kingdom, the National Research Council of Canada and the Netherlands Organisation for Scientific Research. The Starlink software (Currie et al. 2014) is currently supported by the East Asian Observatory. This research used the Canadian Advanced Network For Astronomy Research (CANFAR) operated in partnership by the Canadian Astronomy Data Centre and The Digital Research Alliance of Canada with support from the National Research Council of Canada, the Canadian Space Agency, CANARIE and the Canadian Foundation for Innovation. This work is sponsored (in part) by the Chinese Academy of Sciences (CAS), through a grant to the CAS South America Center for Astronomy (CASSACA) in Santiago, Chile.

This work has made use of the Python packages Pyspeckit (Ginsburg et al. 2022), PyVO (Graham et al. 2014), Astropy (Robitaille et al. 2013; Price-Whelan et al. 2018, 2022), SciPy (Virtanen et al. 2020), pandas (McKinney 2010), NumPy (Harris et al. 2020), and Matplotlib (Hunter 2007). This research has made use of the SIMBAD database (Wenger et al. 2000), operated at CDS, Strasbourg, France. This research has made use of NASA's Astrophysics Data System Bibliographic Services.

Data Availability

The data underlying this article are available in the article and in its online supplementary material, and on the NESS website https://evolvedstars.space.

References

Agúndez, M., Cernicharo, J., Quintana-Lacaci, G., et al. 2017, A&A, 601, A4
Anderson, T. W. & Darling, D. A. 1952, The Annals of Mathematical Statistics, 23, 193
Bladh, S., Eriksson, K., Marigo, P., Liljegren, S., & Aringer, B. 2019a, A&A, 623, A119
Bladh, S., Liljegren, S., Höfner, S., Aringer, B., & Marigo, P. 2019b, A&A, 626, A100
Boyer, M. L., Srinivasan, S., Riebel, D., et al. 2012, ApJ, 748, 40
Buckle, J. V., Hills, R. E., Smith, H., et al. 2009, MNRAS, 399, 1026
Bujarrabal, V., Agúndez, M., Gómez-Garrido, M., et al. 2021, A&A, 651, A4
Cherchneff, I. 2006, A&A, 456, 1001
Currie, M. J., Berry, D. S., Jenness, T., et al. 2014, Astronomical Data Analysis Software and Systems XXIII, 485, 391
Danilovich, T., Teyssier, D., Justtanont, K., et al. 2015, A&A, 581, A60
De Beck, E., Decin, L., de Koter, A., et al. 2010, A&A, 523, A18
Decin, L., Hony, S., de Koter, A., et al. 2007, A&A, 475, 233
Foreman-Mackey, D., Hogg, D. W., Lang, D., & Goodman, J. 2013, PASP, 125, 306
Ginsburg, A., Sokolov, V., Val-Borro, M. d., et al. 2022, AJ, 163, 291
Gonzalez Delgado, D., Olofsson, H., Kerschbaum, F., et al. 2003, A&A, 411, 123
Graham, M., Plante, R., Tody, D., & Fitzpatrick, M. 2014, PyVO: Python access to the Virtual Observatory, Astrophysics Source Code Library, record ascl:1402.004
Groenewegen, M. A. T., Baas, F., Blommaert, J. A. D. L., et al. 1999, A&AS, 140, 197
Groenewegen, M. A. T., Vlemmings, W. H. T., Marigo, P., et al. 2016, A&A, 596, A50
Habing, H. & Olofsson, H., eds. 2003, Asymptotic Giant Branch Stars (A&A Library, Springer)
Harris, C. R., Millman, K. J., van der Walt, S. J., et al. 2020, Nature, 585, 357
Hoai, D. T., Nhung, P. T., Darriulat, P., et al. 2022, MNRAS, 510, 2363
Höfner, S. & Olofsson, H. 2018, A&ARv, 26, 1
Hunter, J. D. 2007, Computing in Science & Engineering, 9, 90
Jenness, T., Currie, M. J., Tilanus, R. P. J., et al. 2015, MNRAS, 453, 73
Jorissen, A. & Knapp, G. R. 1998, A&AS, 129, 363
Karakas, A. I. & Lattanzio, J. C. 2014, PASA, 31
Karakas, A. I. & Lugaro, M. 2016, ApJ, 825, 26
Keenan, P. C. 1954, ApJ, 120, 484
Kemper, F., Stark, R., Justtanont, K., et al. 2003, A&A, 407, 609
Kerschbaum, F. & Olofsson, H. 1999, A&AS, 138, 299
Knapp, G. R. 1985, ApJ, 293, 273
Knapp, G. R. & Morris, M. 1985, ApJ, 292, 640
Knapp, G. R., Young, K., Lee, E., & Jorissen, A. 1998, ApJS, 117, 209
Kobayashi, C., Karakas, A. I., & Umeda, H. 2011, MNRAS, 414, 3231
Lamers, H. J. G. L. M. & Cassinelli, J. P. 1999, Introduction to Stellar Winds
Loup, C., Forveille, T., Omont, A., & Paul, J. F. 1993, A&AS, 99, 291
Mamon, G. A., Glassgold, A. E., & Huggins, P. J. 1988, ApJ, 328, 797
Massalkhi, S., Agúndez, M., Cernicharo, J., et al. 2018, A&A, 611, A29
Massalkhi, S., Agúndez, M., Cernicharo, J., & Velilla Prieto, L. 2020, A&A, 641, A57
Matsuura, M., Sargent, B., Swinyard, B., et al. 2016, MNRAS, 462, 2995
McDonald, I., Srinivasan, S., Scicluna, P., et al. 2025, MNRAS, 541, 516
McDonald, I., Zijlstra, A. A., & Boyer, M. L. 2012, MNRAS, 427, 343
McDonald, I., Zijlstra, A. A., & Watson, R. A. 2017, MNRAS, 471, 770
McKinney, W. 2010, Proceedings of the 9th Python in Science Conference, 56
Milam, S. N., Apponi, A. J., Woolf, N. J., & Ziurys, L. M. 2007, ApJ, 668, L131
Milam, S. N., Woolf, N. J., & Ziurys, L. M. 2009, ApJ, 690, 837
Mizuno, I., Friberg, P., Berthold, R., et al. 2020, in Millimeter, Submillimeter, and Far-Infrared Detectors and Instrumentation for Astronomy X, Vol. 11453, ed. J. Zmuidzinas & J.-R. Gao (SPIE), 628–646
Nyman, L. A., Booth, R. S., Carlstrom, U., et al. 1992, A&AS, 93, 121
Olofsson, H., Delgado, D. G., Kerschbaum, F., & Schöier, F. L. 2002, A&A, 391, 1053
Olofsson, H., Eriksson, K., Gustafsson, B., & Carlstrom, U. 1993, ApJS, 87, 267
Pavlenko, Y. V., Jones, H. R. A., & Longmore, A. J. 2003, MNRAS, 345, 311
Price-Whelan, A. M., Lim, P. L., Earl, N., et al. 2022, ApJ, 935, 167
Price-Whelan, A. M., Sipőcz, B. M., Günther, H. M., et al. 2018, AJ, 156, 123
Ramstedt, S. & Olofsson, H. 2014, A&A, 566, A145
Ramstedt, S., Schöier, F. L., & Olofsson, H. 2009, A&A, 499, 515
Ramstedt, S., Schöier, F. L., Olofsson, H., & Lundgren, A. A. 2008, A&A, 487, 645
Ramstedt, S., Vlemmings, W. H. T., Doan, L., et al. 2020, A&A, 640, A133
Riebel, D., Srinivasan, S., Sargent, B., & Meixner, M. 2012, ApJ, 753, 71
Robitaille, T. P., Tollerud, E. J., Greenfield, P., et al. 2013, A&A, 558, A33
Saberi, M., Olofsson, H., Vlemmings, W. H. T., et al. 2020, A&A, 638, A99
Samus', N. N., Kazarovets, E. V., Durlevich, O. V., Kireeva, N. N., & Pastukhova, E. N. 2017, Astronomy Reports, 61, 80
Sargent, B. A., Srinivasan, S., & Meixner, M. 2011, ApJ, 728, 93
Scalo, J. M. & Ross, J. E. 1976, A&A, 48, 219
Schöier, F. L. & Olofsson, H. 2001, A&A, 368, 969
Scicluna, P., Kemper, F., McDonald, I., et al. 2022, MNRAS, 512, 1091
Srinivasan, S., Boyer, M. L., Kemper, F., et al. 2016, MNRAS, 457, 2814
Srinivasan, S., Sargent, B. A., & Meixner, M. 2011, A&A, 532, A54
Teyssier, D., Alcolea, J., Bujarrabal, V., et al. 2011, in The Molecular Universe, eds. J. Cernicharo & R. Bachiller, IAU Symp., 280, 353
Van Eck, S., Neyskens, P., Jorissen, A., et al. 2017, A&A, 601, A10
Vassiliadis, E. & Wood, P. R. 1993, ApJ, 413, 641
Virtanen, P., Gommers, R., Oliphant, T. E., et al. 2020, Nature Methods, 17, 261
Wallström, S. H. J., Danilovich, T., Müller, H. S. P., et al. 2024, A&A, 681, A50
Wenger, M., Ochsenbein, F., Egret, D., et al. 2000, A&AS, 143, 9

1 Institute of Astronomy, KU Leuven, Celestijnenlaan 200D bus 2401, 3001 Leuven, Belgium
2 Institute of Astronomy and Astrophysics, Academia Sinica, 11F of Astronomy-Mathematics Building, No. 1, Sec. 4, Roosevelt Rd., Taipei 106319, Taiwan
3 Centre for Astrophysics Research, Department of Physics, Astronomy and Mathematics, College Lane Campus, University of Hertfordshire, Hatfield AL10 9AB, UK
4 European Southern Observatory, Alonso de Cordova 3107, Santiago RM, Chile
5 Space Science Institute, 4750 Walnut Street, Suite 205, Boulder, CO 80301, USA
6 Instituto de Radioastronomía y Astrofísica, Universidad Nacional Autónoma de México, Antigua Carretera a Pátzcuaro #8701, Ex-Hda. San José de la Huerta, 58089 Morelia, Michoacán, México
7 East Asian Observatory (JCMT), 660 N. A'ohoku Place, Hilo, HI 96720, USA
8 JBCA, Department of Physics and Astronomy, University of Manchester, Manchester M13 9PL, UK
9 School of Physical Sciences, The Open University, Walton Hall, Milton Keynes, MK7 6AA, UK
10 Department of Physics, Duke University, Durham, NC 27708, USA
11 Departamento de Física, Matemáticas y Materiales, Universidad Autónoma de Ciudad Juárez, Ciudad Juárez, Chihuahua, Mexico
12 Institute of Space Science (ICE), CSIC, Can Magrans, E-08193 Cerdanyola del Vallès, Barcelona, Spain
13 ICREA, Pg. Lluís Companys 23, E-08010 Barcelona, Spain
14 Institut d'Estudis Espacials de Catalunya (IEEC), E-08860 Castelldefels, Barcelona, Spain
15 Schmidt Sciences, New York, NY 10011, USA
16 National Science Foundation, 2415 Eisenhower Avenue, Alexandria, Virginia 22314, USA
17 NASA Goddard Space Flight Center, Exoplanets and Stellar Astrophysics Laboratory, Code 667, Greenbelt, MD 20771, USA, and US National Science Foundation, Alexandria, VA
18 Department of Astronomy, Xiamen University, Zengcuo'an West Road, Xiamen, 361005, People's Republic of China
19 Max-Planck-Institut für Radioastronomie, 53121 Bonn, Germany
20 Nicolaus Copernicus Astronomical Center, Polish Academy of Sciences, Rabiańska 8, 87-100 Toruń, Poland
21 Xinjiang Astronomical Observatory, Chinese Academy of Sciences, Urumqi, 830011, People's Republic of China
22 National Taiwan Normal University, Earth Sciences, 88 Section 4, Ting-Chou Road, Taipei 11677, Taiwan
23 School of Physics and Astronomy, Cardiff University, Queen's Buildings, The Parade, Cardiff CF24 3AA, UK
24 Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA
25 SOFIA-USRA, NASA Ames Research Center, MS 232-12, Moffett Field, CA 94035, USA
26 Department of Physics and Astronomy, The University of Western Ontario, London, ON, N6A 3K7, Canada
27 Institute for Earth and Space Exploration, The University of Western Ontario, London, ON, N6A 3K7, Canada
28 European Space Agency, ESTEC/SRE-SA, Keplerlaan 1, 2201 AZ Noordwijk, The Netherlands
29 School of Physics and Astronomy, Monash University, Clayton 3800, Victoria, Australia
30 Cardiff Hub for Astrophysics Research and Technology (CHART), School of Physics and Astronomy, Cardiff University, The Parade, Cardiff CF24 3AA, UK
31 Lennard-Jones Laboratories, Keele University, ST5 5BG, UK
32 SETI Institute, 189 Bernardo Avenue, Suite 100, Mountain View, CA 94043, USA
33 Center for Computational Astrophysics, Flatiron Institute, 162 5th Ave., New York, NY 10010, USA
34 Center for Cosmology and Particle Physics, Department of Physics, New York University, 726 Broadway, New York, NY 10003, USA
35 Yunnan Observatories, Chinese Academy of Sciences, 396 Yangfangwang, Guandu District, Kunming 650216, China
36 Chinese Academy of Sciences South America Center for Astronomy, National Astronomical Observatories, CAS, Beijing 100101, China
37 Departamento de Astronomía, Universidad de Chile, Casilla 36-D, Santiago, Chile
38 Amanogawa Galaxy Astronomy Research Center, Graduate School of Science and Engineering, Kagoshima University, 1-21-35 Korimoto, Kagoshima, 890-0065, Japan
39 Center for General Education, Institute for Comprehensive Education, Kagoshima University, 1-21-30 Korimoto, Kagoshima, 890-0065, Japan
40 UK Astronomy Technology Centre, Royal Observatory, Blackford Hill, Edinburgh EH9 3HJ, UK
41 Korea Astronomy and Space Science Institute (KASI), 776 Daedeokdae-ro, Yuseong-gu, Daejeon 34055, Republic of Korea
42 Department of Physics and Astronomy, Kagoshima University, 1-21-35 Korimoto, Kagoshima, Japan
43 Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, UK
44 https://evolvedstars.space/members

Appendix A: Additional plots
A.1. Detection statistics

Table A.1: Detection statistics for the NESS data in total, and divided into two subsamples: the sources in the literature sample (NESS-lit), and the sources not in the literature sample (NESS-nonlit).

Sample               | All NESS (%)   | NESS-lit (%)   | NESS-nonlit (%)
Number of sources
  Total              | 485 (100%)     | 217 (45%)      | 268 (55%)
  O-rich             | 421 (100%)     | 156 (37%)      | 265 (63%)
  C-rich             | 64 (100%)      | 61 (95%)       | 3 (5%)
CO(2-1) detections
  Total              | 210/259 (81%)  | 147/151 (97%)  | 63/108 (58%)
  O-rich             | 168/216 (78%)  | 106/110 (96%)  | 62/106 (58%)
  C-rich             | 42/43 (98%)    | 41/41 (100%)   | 1/2 (50%)
13CO(2-1) detections
  Total              | 57/136 (42%)   | 57/123 (46%)   | 0/13 (0%)
  O-rich             | 36/94 (38%)    | 36/82 (44%)    | 0/12 (0%)
  C-rich             | 21/42 (50%)    | 21/41 (51%)    | 0/1 (0%)
CO(3-2) detections
  Total              | 320/428 (75%)  | 164/183 (90%)  | 156/245 (64%)
  O-rich             | 268/374 (72%)  | 115/132 (87%)  | 153/242 (63%)
  C-rich             | 52/54 (96%)    | 49/51 (96%)    | 3/3 (100%)
13CO(3-2) detections
  Total              | 55/178 (31%)   | 54/126 (43%)   | 1/52 (2%)
  O-rich             | 37/147 (25%)   | 36/95 (38%)    | 1/52 (2%)
  C-rich             | 18/31 (58%)    | 18/31 (58%)    | 0/0 (–)
Soft parabola shaped
  Total              | 432/485 (89%)  | 169/217 (78%)  | 263/268 (98%)
  O-rich             | 387/421 (92%)  | 125/156 (80%)  | 262/265 (99%)
  C-rich             | 45/64 (70%)    | 44/61 (72%)    | 1/3 (33%)

Fig. A.1: Heatmaps showing detections in the JCMT heterodyne data processed in this paper, by tier (very low, low, intermediate, high, extreme) and transition (CO(2-1), 13CO(2-1), CO(3-2), 13CO(3-2)), for the O-rich (top), C-rich (centre), and full sample of observed NESS sources.

A.2. Corner plots

Fig. A.2: Corner plot of the MCMC fit to the log(MLR)-log(DPR) plot in Figure 5. The 16th, 50th, and 84th percentiles are marked with dashed lines, and the contour plots show 1σ and 2σ contours.

Fig. A.3: Corner plot of the MCMC fit to the log(MLR)-expansion velocity plot in Figure 7. The 16th, 50th, and 84th percentiles are marked with dashed lines, and the contour plots show 1σ and 2σ contours.
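Statistics like those in Table A.1 and Fig. A.1 can be recomputed from the online catalogue (columns as in Table C.1). A minimal pandas sketch, assuming a local CSV copy with a hypothetical filename, and treating a filled nchan column as "observed":

```python
import pandas as pd

# Hypothetical local copy of the online results table (see Table C.1)
df = pd.read_csv("ness_co_results.csv")

# Assumption: nchan_CO(2-1) is filled only for observed sources;
# the upper-limit flag is 0 for detections
obs = df[df["nchan_CO(2-1)"].notna()]
detected = obs["CO(2-1)_upperlimit"] == 0

# CO(2-1) detection fraction by chemical type and tier
rates = detected.groupby([obs["Chem_type"], obs["Tier"]]).mean()
print(rates)
```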
Appendix B: Description of the literature sample for comparison with NESS results

We compare our observations to a combined literature sample of relatively large previous studies of AGB stars that have calculated gas MLRs, which includes the results of Loup et al. (1993); Schöier & Olofsson (2001); Olofsson et al. (2002); Gonzalez Delgado et al. (2003); Ramstedt et al. (2009); and De Beck et al. (2010). This sample has a total of 616 observational data points across ∼350 sources, comparable to our 1013 data points across 493 sources so far. This section gives an overview of the samples chosen in these studies, and summarizes their results.

Loup et al. (1993) collect a sample of 444 evolved stars with previous detections in either CO or HCN. They calculate MLRs for 284 AGB sources, using an empirical formula based on CO (1–0) intensity from Knapp & Morris (1985), and taking into account the CO dissociation radii calculated by Mamon et al. (1988). In cases where they have several observations of a source, they report a mean calculated MLR and measured expansion velocity, but without an indication of the spectral quality of the various observations. As such, there are some cases of large discrepancies between individual values for a single source. For oxygen-rich AGB stars they find MLRs ranging from 1 × 10−7 to 5 × 10−5 M⊙ yr−1, and expansion velocities generally between 5–20 km s−1, with very few sources above 20–25 km s−1. For carbon-rich AGB stars they find MLRs ranging from 3 × 10−7 to 5 × 10−5 M⊙ yr−1, and expansion velocities generally between 5–30 km s−1, with a fairly continuous distribution up to 35 km s−1. They caution that their sample has "no sound statistical basis but merely reflects the personal biases of the various observers in the field." It is obviously biased towards stronger sources (which tend to have higher MLRs), as they require a CO or HCN detection. The sample is also biased towards peculiar sources that previous observers have been interested in, such as bipolar outflows, which will make the MLR calculations less reliable. Their sample is also strongly biased against galactic-plane sources and towards sources in the northern sky, though they note their inclusion of the Nyman et al. (1992) SEST study helps mitigate this.

Schöier & Olofsson (2001) observe a sample of 68 carbon-rich AGB stars, consisting of all sources with CO detections from an earlier study by Olofsson et al. (1993), which targeted the brightest (K < 2 mag) carbon stars in the sky. Schöier & Olofsson (2001) note that their sample includes all sources with distances up to 500 pc, and is probably only missing about a third of sources out to the maximum distance of ∼1 kpc. They use observations of CO (1–0), (2–1), and (3–2), together with radiative transfer modelling, to determine MLRs. 61 sources are well fit with a 1D model; of the remaining 7 sources, 5 show detached shells and 2 are not spherically symmetric. From the well-fit sources they derive MLRs between 5 × 10−9 and 2 × 10−5 M⊙ yr−1, with a large fraction of sources around 3 × 10−7 M⊙ yr−1 and very few below 5 × 10−8 M⊙ yr−1. They argue that the lack of MLRs below 5 × 10−8 M⊙ yr−1 seems to be real, and probably indicates a lower limit to what is required to drive a dusty wind. They find that, in general, at high MLR the most important feature in determining the MLR is the temperature structure, while a wider range of parameters is important at low MLR. This makes sense, as the CO emission tends to be saturated for high MLRs, making it harder to derive good model results from a few CO observations. They also find the MLR to be well correlated with the measured expansion velocity, and estimate that the studied types of carbon stars return ∼0.05 M⊙ yr−1 of gas to the Galaxy, while more extreme carbon stars (with MLRs above 2 × 10−5 M⊙ yr−1) may provide an order of magnitude more.

Olofsson et al. (2002) present a sample of 69 oxygen-rich AGB stars, which are either semi-regular or irregular variables. These are the sources with CO detections from an earlier sample by Kerschbaum & Olofsson (1999), chosen based on IRAS colors indicating a dusty AGB envelope. They derive distances by assuming a luminosity of 4000 L⊙.
They use observations of CO (1–0), (2–1), (3–2), and (4–3), and 1D radiative transfer modelling, to derive MLRs ranging from 2 × 10−8 to 8 × 10−7 M⊙ yr−1, and find expansion velocities between 2.2 and 14.4 km s−1. 30% of their sources have expansion velocities below 5 km s−1, so this sample seems to be biased towards slower winds compared to M stars in the NESS sample. 5 sources show expansion velocities below 3 km s−1, which corresponds to the escape velocity at 100 R⋆, far beyond the normally accepted acceleration zone, which only extends to ∼20 R⋆. They speculate that this may be due to low radiative acceleration efficiency, leading to gas moving at a constant velocity from a few R⋆ and eventually escaping, yielding both low MLRs and low expansion velocities. In general they find a good correlation between MLR and expansion velocity. They also compare their results with the Schöier & Olofsson (2001) results described above, which were derived using the same methods, finding that the median MLRs are very similar. However, Olofsson et al. (2002) see a sharp cutoff around MLRs of 10−6 M⊙ yr−1, which seems to be the maximum for these types of stars. They note, however, that their sample is biased by the IRAS colors selecting for dusty stars, and that they of course do not include Mira variables, which tend to have higher MLRs. Their range of MLRs also does not extend to values as low as those found for the carbon-rich sample.

Gonzalez Delgado et al. (2003) have a sample of 71 oxygen-rich AGB stars, which have all been detected in CO emission by Kerschbaum & Olofsson (1999) and Olofsson et al. (2002). Using observations of several low-J transitions of SiO, for which they find a detection rate of ∼60%, and radiative transfer modelling, they derive MLRs and SiO radial abundance distributions for 44 sources. Additionally, they model CO (1–0), (2–1), (3–2), and (4–3) emission from the 12 Mira variables in their sample to derive MLRs. For these 12 sources they find a very high median MLR of 1.3 × 10−5 M⊙ yr−1, and only two sources have low MLRs, around 10−7 M⊙ yr−1. The median expansion velocity for the Miras is 15.3 km s−1, also significantly higher than the 7 km s−1 found by Olofsson et al. (2002) for their sample of irregular and semiregular variables. Again, the two low-MLR Miras are the only ones with expansion velocities below 10 km s−1. Overall they find MLRs ranging from 2 × 10−8 to 4 × 10−5 M⊙ yr−1, with a median value of 4 × 10−7 M⊙ yr−1. For expansion velocities they find a range from 2.3 to 19.3 km s−1, with a median value of 7.5 km s−1. They find that the SiO lines are generally narrower than CO, but have wide line wings, meaning the measured expansion velocities from both SiO and CO lines are similar.

Ramstedt et al. (2009) have a sample of 40 S-type AGB stars with previous CO detections, largely taken from the sample of Jorissen & Knapp (1998) of IRAS PSC S-type sources with good-quality IRAS fluxes. They note their sample is likely biased towards higher MLRs, but they believe it to be representative of mass-losing S-type stars and complete to a distance of 600 pc (while the largest distance in their sample is 1210 pc). Using observations of CO (1–0) and (2–1), as well as one or more low-J transitions of SiO (which they detect in 26 sources), and 1D radiative transfer modelling, they derive MLRs and CO and SiO radial abundance distributions.
They find median MLRs of 4.5 × 10−7 M⊙yr−1 and 1.75 × 10−7 M⊙yr−1 for their Mira and SRV samples respectively. These numbers are comparable to the median value of 3 × 10−7 M⊙yr−1 found for carbon-rich stars by Schöier & Olofsson (2001) and for oxygen-rich stars by Olofsson et al. (2002) and Gonzalez Delgado et al. (2003). They find a median expansion velocity of 8 km s−1, similar to the results for oxygen-rich sources, and slightly lower than the 11 km s−1 found for carbon-rich sources in previous studies. De Beck et al. (2010) have a sample of 69 sources, including mostly AGB stars but also some RSGs, hypergiants, post-AGB stars, and YSOs. The data for this sample, consisting of 12CO and 13CO transitions up to J=6-5, has been assembled over many years. They use several CO transitions and 1D radiative transfer modelling to derive analytical expressions to estimate MLRs. They use this procedure to determine the MLRs for 50 of the evolved stars in the sample to which they could fit a soft parabola profile, of which 39 are AGB stars. They find AGB MLRs ranging from 4×10−8 to 6×10−5 M⊙yr−1, with a median value of 4.1×10−6 M⊙yr−1. This is a higher median value than the ∼3 × 10−7 M⊙yr−1 found by the previously mentioned studies, indicating a significant bias towards bright sources in this sample. They find a correlation between MLR and pulsation period for periods below ∼850 days, representing Mira and semiregular AGB pulsators, as well as short-period OH/IR stars. For 29 stars in their sample they have both 12CO and 13CO observations, and hence are able to estimate 12CO/13CO ratios by dividing their respective integrated intensities (with a correction factor for differences in line strength). They find 12CO/13CO ratios ranging from 3.6 to 30.7 for their subsample of AGB stars (their Table 8), and note that these estimates are actually lower limits as the 12CO lines are often optically thick. Many other studies of samples of AGB stars also draw from the aforementioned surveys for their source selection (e.g., Teyssier et al. 2011; Massalkhi et al. 2020; Ramstedt et al. 2020). Overall this literature sample consists largely of sources with previous CO detections, so it is biased towards the brighter AGB stars with relatively high MLRs. The NESS sample thus complements these literature samples (Figure 1). Some of the samples of carbon-rich or S-type stars are said to be complete out to a few hundred pc, but there has been little attempt to form a complete sample of the much more abundant oxygen-rich stars, and this significantly hinders our ability to draw firm physical conclusions about them. Article number, page 17 of 18 A&A proofs: manuscript no. articlev3 Appendix C: Columns available in online table of results Table C.1: Description of columns available in the online table Column Description Unit IRASPSC IRAS PSC identifier SIMBAD_ID SIMBAD identifier Chem_type Chemical type based on mid-IR spectra [‘O’ or ‘C’]; see Section 2.4 Tier Grouping from Scicluna et al. 
(2022) based on location in distance-DPR space (‘very low’, ‘low’, ‘intermediate’, ‘high’, or ‘extreme’) Peak_CO(2-1)a Peak intensity of the CO(2-1) line K Peak_CO(2-1)_error Uncertainty in the peak intensity of the CO(2-1) line K v_inf_CO(2-1) Expansion velocity of the shell derived from the CO(2-1) line (uncertainty 1 km s−1 km/s) Int_CO(2-1) Integrated intensity of the CO(2-1) line K km s−1 Int_CO(2-1)_error Uncertainty in the integrated intensity of the CO(2-1) line K km s−1 nchan_CO(2-1) Number of velocity channels in the CO(2-1) spectral band CO(2-1)_rms RMS noise in the CO(2-1) line K Int_13CO(2-1) Integrated intensity of the 13CO(2-1) line K km s−1 Int_13CO(2-1)_error Uncertainty in the integrated intensity of the 13CO(2-1) line K km s−1 Peak_CO(3-2) Peak intensity of the CO(3-2) line K Peak_CO(3-2)_error Uncertainty in the peak intensity of the CO(3-2) line K v_inf_CO(3-2) Expansion velocity of the shell derived from the CO(3-2) line (uncertainty 1 km s−1 km/s) Int_CO(3-2) Integrated intensity of the CO(3-2) line K km s−1 Int_CO(3-2)_error Uncertainty in the integrated intensity of the CO(3-2) line K km s−1 nchan_CO(3-2) Number of velocity channels in the CO(3-2) spectral band CO(3-2)_rms RMS noise in the CO(3-2) line K Int_13CO(3-2) Integrated intensity of the 13CO(3-2) line K km s−1 Int_13CO(3-2)_error Uncertainty in the integrated intensity of the 13CO(3-2) line K km s−1 MLR_CO(2-1) Empirical mass-loss rate derived from the CO(2-1) line using the Ramstedt et al. M⊙yr−1 (2008) formula MLR_CO(2-1)_error Uncertainty in the mass-loss rate derived from the CO(2-1) line M⊙yr−1 CO(2-1)_upperlimit Flag designating whether the CO(2-1) line is detected (0) or is below 3 times the RMS (1) MLR_CO(3-2) Empirical mass-loss rate derived from the CO(3-2) line using the Ramstedt et al. M⊙yr−1 (2008) formula MLR_CO(3-2)_error Uncertainty in the mass-loss rate derived from the CO(3-2) line M⊙yr−1 CO(3-2)_upperlimit Flag denoting whether the CO(3-2) line is detected (0) or is below 3 times the RMS (1) IsoRatio(2-1) Isotopic ratio 12CO/13CO derived from the (2-1) lines IsoRatio(3-2) Isotopic ratio 12CO/13CO derived from the (3-2) lines DPR Dust-production rate from SED fit M⊙yr−1 DPR_error Uncertainty in the dust-production rate M⊙yr−1 GasDust_CO(2-1) Gas-to-dust ratio derived from the CO(2-1) line GasDust_CO(2-1)_error Uncertainty in the gas-to-dust ratio derived from the CO(2-1) line GasDust_CO(3-2) Gas-to-dust ratio derived from the CO(3-2) line GasDust_CO(3-2)_error Uncertainty in the gas-to-dust ratio derived from the CO(3-2) line SoftParabola_CO(2-1) Flag denoting whether the CO(2-1) line shape is a soft parabola (1) or not (0) SoftParabola_CO(3-2) Flag denoting whether the CO(3-2) line shape is a soft parabola (1) or not (0) OptThin_12CO(2-1) Flag denoting whether the CO(2-1) line is optically thin (1) or not (0) OptThin_12CO(3-2) Flag denoting whether the CO(3-2) line is optically thin (1) or not (0) First_det Flag denoting whether the NESS observation is the first CO detection for the source (1) or not (0) Notes: a All peak intensities, expansion velocities, and the related uncertainties in the table are estimated from soft-parabola fits. Integrated intensities are derived from the spectrum, not from the models. Article number, page 18 of 18
Astronomy & Astrophysics manuscript no. articlev3 October 17, 2025

The Nearby Evolved Stars Survey III: First data release of JCMT CO-line observations

S. H. J. Wallström1, 2, P. Scicluna3, 4, 5, 2, S. Srinivasan6, 2⋆, J. G. A. Wouterloot7, I. McDonald8, 9, L. Decock1, M. Wijshoff1, R. Chen10, 2, D. Torres11, L. Umans1, B. Willebrords1, F. Kemper12, 13, 14, G. Rau15, 16, 17, S. Feng18, 2, M. Jeste19, T. Kaminski20, D. Li21, F. C. Liu22, A. Trejo-Cruz6, 2, H. Chawner23, S. Goldman24, 25, H. MacIsaac26, 27, J. Tang21, S. T. Zeegers28, T. Danilovich29, 1, M. Matsuura30, K. M. Menten19, J. Th. van Loon31, J. Cami26, 27, 32, C. J. R. Clark24, T. E. Dharmawardena33, 34, J. Greaves23, Jinhua He35, 36, 37, H. Imai38, 39, O. C. Jones40, H. Kim41, J. P. Marshall2, H. Shinnaga42, 38, R. Wesson30, 43, and the NESS Collaboration44 (Affiliations can be found after the references)

Received / Accepted

ABSTRACT

Low- to intermediate-mass (∼0.8–8 M⊙) evolved stars contribute significantly to the chemical enrichment of the interstellar medium in the local Universe. It is therefore crucial to accurately measure the mass return in their final evolutionary stages. The Nearby Evolved Stars Survey (NESS) is a large multi-telescope project targeting a volume-limited sample of ∼850 stars within 3 kpc in order to derive the dust and gas return rates in the Solar Neighbourhood, and to constrain the physics underlying these processes. We present an initial analysis of the CO-line observations, including detection statistics, carbon isotopic ratios, initial mass-loss rates, and gas-to-dust ratios. We describe a new data reduction pipeline for homogeneity, which we use to analyse the available NESS CO data from the James Clerk Maxwell Telescope, measuring line parameters and calculating empirical gas mass-loss rates. We present the first release of the available data on 485 sources, one of the largest homogeneous samples of CO data to date. Comparison with a large combined literature sample finds that high mass-loss rate and especially carbon-rich sources are overrepresented in the literature, while NESS is probing significantly more sources at low mass-loss rates, detecting 59 sources in CO for the first time and providing useful upper limits on non-detections. CO line detection rates are 81% for the CO (2-1) line and 75% for CO (3-2). The majority (82%) of detected lines conform to the expected soft parabola shape, while eleven sources show a double wind. Calculated mass-loss rates show power-law relations with both the dust-production rates and expansion velocities, up to a mass-loss rate saturation value ∼5 × 10−6 M⊙ yr−1. Median gas-to-dust ratios of 250 and 680 are found for oxygen-rich and carbon-rich sources, respectively. Our analysis of CO observations in this first data release highlights the importance of our volume-limited approach in characterizing the local AGB population as a whole.

Key words. stars: AGB and post-AGB - stars: mass-loss - stars: winds, outflows - surveys

1. Introduction

Cool evolved stars contribute to the chemical enrichment of the interstellar medium (ISM) by synthesising new elements which are then expelled in massive stellar winds driven by newly formed dust. Population models indicate that low- to intermediate-mass (∼0.8–8 M⊙) asymptotic giant branch (AGB) stars dominate this process today (Karakas & Lattanzio 2014), aided by their more massive (≥8 M⊙) and therefore less numerous red supergiant (RSG) cousins, which eventually explode as supernovae.
Accordingly, AGB stars are of great interest and many studies have been carried out on various aspects of their winds, including Loup et al. (1993); Schöier & Olofsson (2001); Olofsson et al. (2002); Gonzalez Delgado et al. (2003); Ramstedt et al. (2009) and De Beck et al. (2010), which are described in Appendix B. Nevertheless, it has been difficult to draw firm conclusions about many key aspects of the population of AGB stars, including the total mass returned to the ISM by these stars, the physical processes driving the onset of mass loss, the fraction of the ejected mass that condenses into dust, and variations in the mass-loss rate (MLR) over time (e.g., Höfner & Olofsson 2018). Repeated third dredge-up events during the AGB phase bring carbon-enriched material to the surface of the star, increasing its C/O ratio over time, and changing the dominant chemistry of their winds. AGB stars are accordingly divided into three groups that likely form an evolutionary sequence. The majority are oxygen-rich at solar metallicity and typically have C/O ratios ≲0.9; their winds consist mainly of oxygen-bearing molecules and silicate dust. Carbon-rich AGB stars are at the other extreme (C/O > 1), producing mainly carbonaceous molecules and amorphous carbon dust. S-type AGB stars have C/O ratios close to unity (0.9 ≲ C/O ≲ 1). [...] (i) "very low" sources with [...] 1600 L⊙, as measured by McDonald et al. (2012) and McDonald et al. (2017); (ii) "low" sources with DPR [...] 400 pc (i.e., only including sources with Galactic latitude |b| > 1.5); [...] (iv) "high" sources with 3 × 10−9 [...] 800 pc; and (v) "extreme" sources with DPR > 10−7 M⊙ yr−1 and d < 2000 pc.

In this paper, we present the initial CO results from observations of the northern part of the NESS sample with the James Clerk Maxwell Telescope (JCMT). Although incomplete, this initial dataset of 485 sources represents the largest sample of CO lines for nearby AGB stars to date, and includes 59 sources with no prior CO observations. Our highly homogeneous observations extend to systematically lower mass-loss rates than have typically been explored in the literature (McDonald et al. 2025). Section 2 describes the observations, data reduction pipeline, and data analysis; Section 3 gives some initial results on the detection rates, line profiles, 12CO/13CO ratios, mass-loss rates, gas-to-dust ratios, and comparisons with literature data and models; and finally Section 4 contains the conclusions.

2. Observations and analysis

The NESS project is based around a JCMT Large Program to observe the ∼500 sources in the northern parts of the sample, for which we placed a declination cut at −30° to avoid observing at airmass > 2. To this sample, we add all archival data for NESS targets taken with RxA3 and HARP since the ACSIS correlator was installed (from 2006 onwards). We note that this includes archival data of 71 sources that have Dec. [...]

[...] 23 km s−1, of which 12 are carbon stars with velocities up to 33 km s−1. Except for one O-rich Mira (vinf ≈ 24 km s−1), the remaining are either RSGs or post-AGB/binary stars, or they show clear deviations from the soft-parabola profile in at least one of the two lines. OH/IR stars in our sample are restricted to vinf [...]. Sources with vinf > 17 km s−1, corresponding to the break found in the MLR vs. vinf plot, are highlighted in the figure and grouped by SIMBAD object type. Our data replicate the increasing trend followed by saturation around ∼750 d. The rising trend of MLR with expansion velocity at low MLRs is well reproduced by the relation from De Beck et al. (2010) (solid line in the figure).
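The peak intensities and expansion velocities used throughout this paper are estimated from soft-parabola fits to the line profiles (see the notes to Table C.1). As a rough illustration, a minimal fit of this kind could look as follows in Python, assuming the commonly used soft-parabola form T(v) = T0 [1 − ((v − vc)/vinf)²]^(β/2) and a synthetic spectrum; the pipeline's actual implementation is not reproduced here:

```python
import numpy as np
from scipy.optimize import curve_fit

def soft_parabola(v, T0, v_c, v_inf, beta):
    # T(v) = T0 * [1 - ((v - v_c)/v_inf)^2]^(beta/2), zero outside the line.
    x = (v - v_c) / v_inf
    core = np.clip(1.0 - x**2, 0.0, None)   # avoid a negative base
    return T0 * core**(beta / 2.0)

rng = np.random.default_rng(42)
v = np.linspace(-60.0, 60.0, 512)        # synthetic velocity axis (km/s)
T = soft_parabola(v, 0.8, 5.0, 15.0, 1.5) + rng.normal(0.0, 0.02, v.size)

# p0 is an initial guess: (peak, centre, half-width, shape).
popt, pcov = curve_fit(soft_parabola, v, T, p0=[0.5, 0.0, 10.0, 1.0])
T0_fit, vc_fit, vinf_fit, beta_fit = popt
print(f"fitted v_inf = {vinf_fit:.1f} km/s, peak = {T0_fit:.2f} K")
```

A fit of this kind returns vinf directly as the half-width of the profile, which is how a single expansion velocity per line can be tabulated.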
The MLR saturation values quoted by Vassiliadis & Wood (1993) and De Beck et al. (2010) are roughly an order of magnitude higher than our value. This difference is not unexpected, given the scatter in their data, the tendency of the Ramstedt et al. (2008) relation to underestimate high MLRs, and the bias of literature samples toward higher-MLR stars (see below). To compare our results with the literature sample, we have divided the NESS results into two subsamples: the 221 sources that are part of the literature sample (NESS-lit) and the 272 that are not (NESS-nonlit). Both subsamples have similar median MLR values: 1.7 × 10−6 M⊙ yr−1 and 7.5 × 10−7 M⊙ yr−1 for NESS-lit and NESS-nonlit, respectively. The range of MLRs is also comparable between NESS-lit (3.1 × 10−8 to 4.4 × 10−5 M⊙ yr−1) and NESS-nonlit (1.9 × 10−8 to 1.9 × 10−5 M⊙ yr−1). However, the NESS-lit subsample extends to larger expansion velocities (38.8 km s−1) than NESS-nonlit (26.9 km s−1), and has a higher median expansion velocity of 13.4 km s−1, as compared to a median expansion velocity of 10.3 km s−1 in the NESS-nonlit subsample.

Fig. 7: MLR (including error bars) vs expansion velocity, compared with samples from literature. Oxygen-rich (and S-type) sources are plotted in blue while carbon-rich sources are in orange. A broken power law fit to the NESS data is shown in dashed black, and the grey lines show random draws from the posterior distribution as an indication of the uncertainty in the fit.

In the MLR histogram in Figure 9, we can see that overall higher-MLR sources are over-represented in the NESS-lit subsample. This is expected, as the literature sample is drawn from previously detected sources and hence biased towards brighter and higher-MLR stars. There are also stark differences in chemical composition and detection rates between the two subsamples, as seen in Table A.1. The NESS-nonlit subsample contains only three C-rich sources, while the other 95% of the C-rich sources in our data belong to the NESS-lit subsample. Most (60%) of the sources in the NESS-nonlit sample with uncertain chemical classifications are in the low tier, whereas the known carbon stars from the NESS-lit sample are typically found in the high and intermediate tiers. The uncertain classifications therefore do not significantly alter our conclusions. Furthermore, for NESS-lit, 97% of sources are detected in CO (2-1) and 90% are detected in CO (3-2), as compared with only ∼60% in both lines for the NESS-nonlit subsample. Many of the detected NESS-nonlit sources have no previously published CO data. Another difference is the proportion of sources with a soft-parabola-shaped CO line: 80% of the NESS-lit sources but nearly 100% of the NESS-nonlit sources, showing that unusual sources are overrepresented in the literature sample.
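The broken power-law behaviour in Fig. 7 can be pictured with a simple least-squares stand-in. The rise-then-plateau parametrisation and the data points below are illustrative assumptions; the actual fit in this paper is an MCMC posterior (see the corner plots in Appendix A):

```python
import numpy as np
from scipy.optimize import curve_fit

def broken_power_law(v, log_mdot_sat, v_break, slope):
    # log10(MLR) rises linearly with v below v_break, then saturates.
    return np.where(v < v_break, log_mdot_sat + slope * (v - v_break),
                    log_mdot_sat)

# Invented (illustrative) expansion velocities [km/s] and log10(MLR [Msun/yr]).
v_exp = np.array([4.0, 6.0, 9.0, 12.0, 15.0, 18.0, 22.0, 27.0])
log_mdot = np.array([-7.6, -7.1, -6.6, -6.2, -5.7, -5.4, -5.3, -5.4])

popt, _ = curve_fit(broken_power_law, v_exp, log_mdot, p0=[-5.4, 17.0, 0.15])
print(f"saturation MLR ~ 10^{popt[0]:.1f} Msun/yr, break at ~ {popt[1]:.0f} km/s")
```

The plateau level recovered by such a fit plays the role of the MLR saturation value quoted above.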
These differences show the value in building a volume-limited sample: NESS includes many understudied sources, forming a more complete picture of the local population of AGB stars. While many of these observations are non-detections and hence provide only upper limits, this is still a vital aspect of characterizing the entire AGB population, especially given our homogeneous observing setup.

Fig. 8: Mean MLR vs median period for our sample (grey circles). Sources with vinf > 17 km s−1 are coloured according to their SIMBAD object type. The fit from De Beck et al. (2010) (dashed curve) reproduces the increasing trend in our data well. The saturation MLR from our broken power-law fit (Equation 4, solid line) is also shown for comparison.

Fig. 9: Histogram and KDE of the MLR derived for NESS sources which are in the literature sample (NESS-lit), and those which are not (NESS-nonlit).

We also compare our derived MLR and expansion velocity values with those calculated in the literature papers described in Appendix B for the 162 individual sources with CO detections in both NESS and the literature sample. Dividing our MLR by the mean of literature values for each source yields a range between 0.02 and 8.7 with a median ratio of 0.88, while for the expansion velocity we find a range between 0.5 and 1.7 with a median ratio of 0.96, shown in Figure 10. While this spread is large, the empirical MLR calculations have uncertainties of about an order of magnitude, and most values are within these uncertainties. We also note that the literature MLRs are calculated in a wide variety of ways (see Appendix B), for which uncertainties are not always quantified. The ratio of the expansion velocities has a narrower distribution, centred on unity as expected, though some ratios show discrepancies of up to a factor ∼2.

Fig. 10: Mass-loss rates and expansion velocities derived in this paper, as compared with the values from literature, for sources common to both samples. The dashed line corresponds to a ratio of 1 and the shaded region covers the central 68% of values.

Some of the literature observations are of CO (1-0) or SiO lines, rather than CO (2-1) or (3-2), which may explain some of the discrepancy; for example, the higher excitation of SiO lines may result in a lower velocity since it tends to be emitted in the inner wind, while CO (1-0) may produce broader lines since it probes the outermost parts of the envelope. Furthermore, low signal-to-noise observations can underestimate line widths. We note that, of the spectra in our dataset with inferred expansion velocities outside the 68th percentile, almost 80% have soft parabola shapes. This indicates that the NESS velocity estimates for these spectra are reliable despite their disagreement with the literature values.

4. Conclusions

This paper has presented initial CO results from the NESS survey, observed and analysed in a homogeneous way using a new JCMT data reduction pipeline.
We have demonstrated the advantages of a volume-limited sample, like NESS, for probing a large range of CO mass-loss rates. This first data release contains CO observations for 485 sources which are divided into four tiers with increasing distance and dust production rate (DPR). We summarize our findings as follows:

- We find overall detection rates of 81% for CO (2-1) and 75% for CO (3-2), including 59 sources with no previously published CO detections.
- 82% of CO lines conform to a soft parabola shape, while 11 sources show a double wind. The majority of these double-wind sources are oxygen-rich, and tend to have lower-than-average mass-loss rates around a few ×10−7 M⊙ yr−1.
- Estimated 12CO/13CO ratios have a median of 7.3 for the full sample and a median of 10.5 for the few sources where the 12CO line appears to be optically thin. Carbon-rich sources have overall slightly higher values than oxygen-rich, but the small differences indicate that other effects such as optical depth have a larger impact on the estimated ratios. We also find a weak negative correlation between the 12CO/13CO ratio and mass-loss rate, which is also likely due to optical-depth effects.
- We calculate gas mass-loss rates (MLRs) using the empirical formula from Ramstedt et al. (2008), resulting in uncertainties of about an order of magnitude. Overall, these estimates are similar to values found in the literature and from models.
- We find a power-law relation between the MLR and DPR, up to a MLR saturation value of 5.3 (+1.0/−0.8) × 10−6 M⊙ yr−1, implying there is no single gas-to-dust ratio for the population of AGB stars.
- We show the distributions of gas-to-dust ratios for both oxygen-rich and carbon-rich AGB stars, which have median values of 250 and 680, respectively. The gas-to-dust ratio is found to be negatively correlated with the DPR, indicating that the dust-production process is more efficient at higher DPR, lowering the gas-to-dust ratio. While this correlation is at least in part due to the definition of the gas-to-dust ratio in terms of the MLR and DPR, the lack of correlation with MLR may indicate a change in the distribution of radiative momentum towards greater acceleration at higher DPR, explaining the increase in velocity at higher DPR.
- We find a power-law relationship between MLR and expansion velocity, up to a MLR saturation value of 3.9 (+0.5/−0.4) × 10−6 M⊙ yr−1, which corresponds to a velocity of ∼17 km s−1. This is similar to the MLR saturation value found for the MLR-DPR relation, though the two values are not within each other's credible intervals.
- Comparing the NESS results with a large combined literature sample finds that high mass-loss-rate sources are overrepresented in the literature sample, especially among carbon-rich sources. The literature sources also have higher expansion velocities.
- The most striking difference between the NESS results and the literature sample is the detection rates. Over 90% of sources that are in the literature sample are detected in our data, while only ∼60% of sources not in the literature sample are detected, and many of these have no previously published CO data. The proportion of sources showing a soft parabola line shape also differs: about 80% of sources in the literature sample show a soft parabola shape compared with over 98% of sources not in the literature sample. These statistics reflect the under-representation of low-MLR sources and over-representation of extreme or unusual sources in literature samples.
NESS detects significant numbers of low-MLR sources despite only being designed to sample them in a small volume, reflecting a potentially large bias in the literature.
- We also compare the calculated MLRs for individual sources with literature values, which are found to be consistent within the (large) uncertainties.

Overall, the initial analysis of 485 NESS sources highlights the importance of our volume-limited approach in characterizing the local AGB population as a whole, and of including upper limits derived from non-detections, especially given our homogeneous observing strategy. As illustrated by the discrepancy in detection rates between the NESS sources included in and excluded from the literature sample, NESS is probing more low-MLR and under-observed sources.

Acknowledgements

SHJW acknowledges support from the Research Foundation Flanders (FWO) through grant 1285221N, from the ERC consolidator grant 646758 AEROSOL, from the Ministry of Science and Technology of Taiwan under grants MOST104-2628-M-001-004-MY3 and MOST107-2119-M-001-031-MY3, and from Academia Sinica under AS-IA-106-M03. SS acknowledges support from UNAM-PAPIIT Programs IA104822 and IA104824. FK acknowledges support from the Spanish Ministry of Science, Innovation and Universities, under grant number PID2023-149918NB-I00. This work was also partly supported by the Spanish program Unidad de Excelencia María de Maeztu CEX2020-001058-M, financed by MCIN/AEI/10.13039/501100011033. TD is supported in part by the Australian Research Council through a Discovery Early Career Researcher Award (DE230100183). M.M. and R.W. acknowledge support from the STFC Consolidated grant (ST/W000830/1). JH thanks the support of NSFC project 11873086. HK thanks the support of the National Research Foundation of Korea (NRF) grant (RS-2021-NR058398) and the Korea Astronomy and Space Science Institute (KASI) grant (Project No. 2025184102), both funded by the Korean Government (MSIT). JPM acknowledges research support by the National Science and Technology Council of Taiwan under grant NSTC 112-2112-M-001-032-MY3. The James Clerk Maxwell Telescope is operated by the East Asian Observatory on behalf of The National Astronomical Observatory of Japan; Academia Sinica; the Korea Astronomy and Space Science Institute; and the Center for Astronomical Mega-Science (as well as the National Key R&D Program of China with No. 2017YFA0402700). Additional funding support is provided by the Science and Technology Facilities Council of the United Kingdom and participating universities in the United Kingdom and Canada. This paper made use of JCMT observations under program IDs M17BL002 and M20AL014. The James Clerk Maxwell Telescope has historically been operated by the Joint Astronomy Centre on behalf of the Science and Technology Facilities Council of the United Kingdom, the National Research Council of Canada and the Netherlands Organisation for Scientific Research. The Starlink software (Currie et al. 2014) is currently supported by the East Asian Observatory. This research used the Canadian Advanced Network For Astronomy Research (CANFAR) operated in partnership by the Canadian Astronomy Data Centre and The Digital Research Alliance of Canada with support from the National Research Council of Canada, the Canadian Space Agency, CANARIE and the Canadian Foundation for Innovation.
This work is sponsored (in part) by the Chinese Academy of Sciences (CAS), through a grant to the CAS South America Center for Astronomy (CASSACA) in Santiago, Chile. This work has made use of the Python packages Pyspeckit (Ginsburg et al. 2022), PyVO (Graham et al. 2014), Astropy (Robitaille et al. 2013; Price-Whelan et al. 2018, 2022), SciPy (Virtanen et al. 2020), pandas (McKinney 2010), NumPy (Harris et al. 2020), and Matplotlib (Hunter 2007). This research has made use of the SIMBAD (Wenger et al. 2000) database, operated at CDS, Strasbourg, France. This research has made use of NASA's Astrophysics Data System Bibliographic Services.

Data Availability

The data underlying this article are available in the article and in its online supplementary material, and on the NESS website https://evolvedstars.space.

References

Agúndez, M., Cernicharo, J., Quintana-Lacaci, G., et al. 2017, A&A, 601, A4
Anderson, T. W. & Darling, D. A. 1952, The Annals of Mathematical Statistics, 23, 193
Bladh, S., Eriksson, K., Marigo, P., Liljegren, S., & Aringer, B. 2019a, A&A, 623, A119
Bladh, S., Liljegren, S., Höfner, S., Aringer, B., & Marigo, P. 2019b, A&A, 626, A100
Boyer, M. L., Srinivasan, S., Riebel, D., et al. 2012, ApJ, 748, 40
Buckle, J. V., Hills, R. E., Smith, H., et al. 2009, MNRAS, 399, 1026
Bujarrabal, V., Agúndez, M., Gómez-Garrido, M., et al. 2021, A&A, 651, A4
Cherchneff, I. 2006, A&A, 456, 1001
Currie, M. J., Berry, D. S., Jenness, T., et al. 2014, Astronomical Data Analysis Software and Systems XXIII, 485, 391
Danilovich, T., Teyssier, D., Justtanont, K., et al. 2015, A&A, 581, A60
De Beck, E., Decin, L., de Koter, A., et al. 2010, A&A, 523, A18
Decin, L., Hony, S., de Koter, A., et al. 2007, A&A, 475, 233
Foreman-Mackey, D., Hogg, D. W., Lang, D., & Goodman, J. 2013, PASP, 125, 306
Ginsburg, A., Sokolov, V., Val-Borro, M. d., et al. 2022, AJ, 163, 291
Gonzalez Delgado, D., Olofsson, H., Kerschbaum, F., et al. 2003, A&A, 411, 123
Graham, M., Plante, R., Tody, D., & Fitzpatrick, M. 2014, PyVO: Python access to the Virtual Observatory, Astrophysics Source Code Library, record ascl:1402.004
Groenewegen, M. A. T., Baas, F., Blommaert, J. A. D. L., et al. 1999, A&AS, 140, 197
Groenewegen, M. A. T., Vlemmings, W. H. T., Marigo, P., et al. 2016, A&A, 596, A50
Habing, H. & Olofsson, H., eds. 2003, Asymptotic Giant Branch Stars (A&A Library; Springer)
Harris, C. R., Millman, K. J., van der Walt, S. J., et al. 2020, Nature, 585, 357
Hoai, D. T., Nhung, P. T., Darriulat, P., et al. 2022, MNRAS, 510, 2363
Höfner, S. & Olofsson, H. 2018, A&ARv, 26, 1
Hunter, J. D. 2007, Computing in Science & Engineering, 9, 90
Jenness, T., Currie, M. J., Tilanus, R. P. J., et al. 2015, MNRAS, 453, 73
Jorissen, A. & Knapp, G. R. 1998, A&AS, 129, 363
Karakas, A. I. & Lattanzio, J. C. 2014, PASA, 31
Karakas, A. I. & Lugaro, M. 2016, ApJ, 825, 26
Keenan, P. C. 1954, ApJ, 120, 484
Kemper, F., Stark, R., Justtanont, K., et al.
2003, A&A, 407, 609
Kerschbaum, F. & Olofsson, H. 1999, A&AS, 138, 299
Knapp, G. R. 1985, ApJ, 293, 273
Knapp, G. R. & Morris, M. 1985, ApJ, 292, 640
Knapp, G. R., Young, K., Lee, E., & Jorissen, A. 1998, ApJS, 117, 209
Kobayashi, C., Karakas, A. I., & Umeda, H. 2011, MNRAS, 414, 3231
Lamers, H. J. G. L. M. & Cassinelli, J. P. 1999, Introduction to Stellar Winds
Loup, C., Forveille, T., Omont, A., & Paul, J. F. 1993, A&AS, 99, 291
Mamon, G. A., Glassgold, A. E., & Huggins, P. J. 1988, ApJ, 328, 797
Massalkhi, S., Agúndez, M., Cernicharo, J., et al. 2018, A&A, 611, A29
Massalkhi, S., Agúndez, M., Cernicharo, J., & Velilla Prieto, L. 2020, A&A, 641, A57
Matsuura, M., Sargent, B., Swinyard, B., et al. 2016, MNRAS, 462, 2995
McDonald, I., Srinivasan, S., Scicluna, P., et al. 2025, MNRAS, 541, 516
McDonald, I., Zijlstra, A. A., & Boyer, M. L. 2012, MNRAS, 427, 343
McDonald, I., Zijlstra, A. A., & Watson, R. A. 2017, MNRAS, 471, 770
McKinney, W. 2010, Proceedings of the 9th Python in Science Conference, 56
Milam, S. N., Apponi, A. J., Woolf, N. J., & Ziurys, L. M. 2007, ApJ, 668, L131
Milam, S. N., Woolf, N. J., & Ziurys, L. M. 2009, ApJ, 690, 837
Mizuno, I., Friberg, P., Berthold, R., et al. 2020, in Proc. SPIE, Vol. 11453, Millimeter, Submillimeter, and Far-Infrared Detectors and Instrumentation for Astronomy X, ed. J. Zmuidzinas & J.-R. Gao, 114533T
Nyman, L. A., Booth, R. S., Carlstrom, U., et al. 1992, A&AS, 93, 121
Olofsson, H., Delgado, D. G., Kerschbaum, F., & Schöier, F. L. 2002, A&A, 391, 1053
Olofsson, H., Eriksson, K., Gustafsson, B., & Carlstrom, U. 1993, ApJS, 87, 267
Pavlenko, Y. V., Jones, H. R. A., & Longmore, A. J. 2003, MNRAS, 345, 311
Price-Whelan, A. M., Lim, P. L., Earl, N., et al. 2022, ApJ, 935, 167
Price-Whelan, A. M., Sipőcz, B. M., Günther, H. M., et al. 2018, AJ, 156, 123
Ramstedt, S. & Olofsson, H. 2014, A&A, 566, A145
Ramstedt, S., Schöier, F. L., & Olofsson, H. 2009, A&A, 499, 515
Ramstedt, S., Schöier, F. L., Olofsson, H., & Lundgren, A. A. 2008, A&A, 487, 645
Ramstedt, S., Vlemmings, W. H. T., Doan, L., et al. 2020, A&A, 640, A133
Riebel, D., Srinivasan, S., Sargent, B., & Meixner, M. 2012, ApJ, 753, 71
Robitaille, T. P., Tollerud, E. J., Greenfield, P., et al. 2013, A&A, 558, A33
Saberi, M., Olofsson, H., Vlemmings, W. H. T., et al. 2020, A&A, 638, A99
Samus', N. N., Kazarovets, E. V., Durlevich, O. V., Kireeva, N. N., & Pastukhova, E. N. 2017, Astronomy Reports, 61, 80
Sargent, B. A., Srinivasan, S., & Meixner, M. 2011, ApJ, 728, 93
Scalo, J. M. & Ross, J. E. 1976, A&A, 48, 219
Schöier, F. L. & Olofsson, H. 2001, A&A, 368, 969
Scicluna, P., Kemper, F., McDonald, I., et al.
2022, MNRAS, 512, 1091
Srinivasan, S., Boyer, M. L., Kemper, F., et al. 2016, MNRAS, 457, 2814
Srinivasan, S., Sargent, B. A., & Meixner, M. 2011, A&A, 532, A54
Teyssier, D., Alcolea, J., Bujarrabal, V., et al. 2011, in The Molecular Universe, eds. J. Cernicharo & R. Bachiller, IAU Symp., 280, 353
Van Eck, S., Neyskens, P., Jorissen, A., et al. 2017, A&A, 601, A10
Vassiliadis, E. & Wood, P. R. 1993, ApJ, 413, 641
Virtanen, P., Gommers, R., Oliphant, T. E., et al. 2020, Nature Methods, 17, 261
Wallström, S. H. J., Danilovich, T., Müller, H. S. P., et al. 2024, A&A, 681, A50
Wenger, M., Ochsenbein, F., Egret, D., et al. 2000, A&AS, 143, 9

1 200D bus 2401, 3001 Leuven, Belgium
2 11F of Astronomy-Mathematics Building, No.1, Sec. 4, Roosevelt Rd., Taipei 106319, Taiwan
3 Centre for Astrophysics Research, Department of Physics, Astronomy and Mathematics, University of Hertfordshire, College Lane Campus, Hatfield AL10 9AB, UK
4 European Southern Observatory, Alonso de Cordova 3107, Santiago RM, Chile
5 Space Science Institute, 4750 Walnut Street, Suite 205, Boulder, CO 80301, USA
6 Instituto de Radioastronomía y Astrofísica, Universidad Nacional Autónoma de México, Antigua Carretera a Pátzcuaro #8701, Ex-Hda. San José de la Huerta 58089, Morelia, Michoacán, México
7 East Asian Observatory (JCMT), 660 N. A'ohoku Place, Hilo, HI 96720, USA
8 JBCA, Department of Physics and Astronomy, The University of Manchester, Manchester M13 9PL, UK
9 7 6AA, UK
10 27708, USA
11 Departamento de Física, Matemáticas y Materiales, Universidad Autónoma de Ciudad Juárez, Ciudad Juárez, Chihuahua, Mexico
12 (ICE), CSIC, Can Magrans, E-08193 Cerdanyola del Vallès, Barcelona, Spain
13 ICREA, Pg. Lluís Companys 23, E-08010 Barcelona, Spain
14 Institut d'Estudis Espacials de Catalunya (IEEC), E-08860 Castelldefels, Barcelona, Spain
15 Schmidt Sciences, New York, NY 10011, USA
16 National Science Foundation, 2415 Eisenhower Avenue, Alexandria, Virginia 22314, USA
17 NASA Goddard Space Flight Center, Exoplanets and Stellar Astrophysics Laboratory, Code 667, Greenbelt, MD 20771, USA and US National Science Foundation, Alexandria, VA
18 'an West Road, Xiamen, 361005, People's Republic of China
19 Max-Planck-Institut für Radioastronomie, 53121 Bonn, Germany
20 Nicolaus Copernicus Astronomical Center, Polish Academy of Sciences, Rabiańska 8, 87-100 Toruń, Poland
21 Xinjiang Astronomical Observatory, Chinese Academy of Sciences, Urumqi 830011, People's Republic of China
22 National Taiwan Normal University, Earth Sciences, 88 Section 4, Ting-Chou Road, Taipei 11677, Taiwan
23 's Buildings, The Parade, Cardiff CF24 3AA, UK
24 Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA
25 SOFIA-USRA, NASA Ames Research Center, MS 232-12, Moffett Field, CA 94035, USA
26 6A 3K7, Canada
27 Institute for Earth and Space Exploration, The 6A 3K7, Canada
28 European Space Agency, ESTEC/SRE-SA, Keplerlaan 1, 2201 AZ Noordwijk, The Netherlands
29 3800, Victoria, Australia
30 Cardiff Hub for Astrophysics Research and Technology (CHART), CF24 3AA, UK
31 Lennard-Jones Laboratories, Keele University, ST5 5BG, UK
32 SETI Institute, 189 Bernardo Avenue, Suite 100, Mountain View, CA 94043, USA
33 Center for Computational Astrophysics, Flatiron Institute, 162 5th Ave., New York, NY 10010, USA
34 Center for Cosmology and Particle Physics, 726 Broadway, New York, NY 10003, USA
35 Yunnan Observatories, Chinese Academy of Sciences, 396
Yangfangwang, Guandu District, Kunming 650216, China
36 Chinese Academy of Sciences South America Center for Astronomy, National Astronomical Observatories, CAS, Beijing 100101, China
37 Departamento de Astronomía, Universidad de Chile, Casilla 36-D, Santiago, Chile
38 Amanogawa Galaxy Astronomy Research Center, Graduate 1-21-35 Korimoto, Kagoshima 890-0065, Japan
39 Center for General Education, Institute for Comprehensive Education, Kagoshima University, 1-21-30 Korimoto, Kagoshima 890-0065, Japan
40 UK Astronomy Technology Centre, Royal Observatory, Blackford Hill, Edinburgh EH9 3HJ, UK
41 Korea Astronomy and Space Science Institute (KASI), 776 Daedeokdae-ro, Yuseong-gu, Daejeon 34055, Republic of Korea
42 1-21-35 Korimoto, Kagoshima, Japan
43 1E 6BT, UK
44 https://evolvedstars.space/members

Appendix A: Additional plots

A.1. Detection statistics

Table A.1: Detection statistics for the NESS data in total, and divided into two subsamples: the sources in the literature sample (NESS lit), and the sources not in the literature sample (NESS non-lit). Columns give All NESS; NESS lit; NESS non-lit.

Number of sources: Total 485 (100%); 217 (45%); 268 (55%)
  O-rich: 421 (100%); 156 (37%); 265 (63%)
  C-rich: 64 (100%); 61 (95%); 3 (5%)
CO(2-1) detections: Total 210/259 (81%); 147/151 (97%); 63/108 (58%)
  O-rich: 168/216 (78%); 106/110 (96%); 62/106 (58%)
  C-rich: 42/43 (98%); 41/41 (100%); 1/2 (50%)
13CO(2-1) detections: Total 57/136 (42%); 57/123 (46%); 0/13 (0%)
  O-rich: 36/94 (38%); 36/82 (44%); 0/12 (0%)
  C-rich: 21/42 (50%); 21/41 (51%); 0/1 (0%)
CO(3-2) detections: Total 320/428 (75%); 164/183 (90%); 156/245 (64%)
  O-rich: 268/374 (72%); 115/132 (87%); 153/242 (63%)
  C-rich: 52/54 (96%); 49/51 (96%); 3/3 (100%)
13CO(3-2) detections: Total 55/178 (31%); 54/126 (43%); 1/52 (2%)
  O-rich: 37/147 (25%); 36/95 (38%); 1/52 (2%)
  C-rich: 18/31 (58%); 18/31 (58%); 0/0
Soft parabola shaped: Total 432/485 (89%); 169/217 (78%); 263/268 (98%)
  O-rich: 387/421 (92%); 125/156 (80%); 262/265 (99%)
  C-rich: 45/64 (70%); 44/61 (72%); 1/3 (33%)

Fig. A.1: Heatmaps showing detections in the JCMT heterodyne data processed in this paper for the O-rich (top), C-rich (centre), and the full sample of observed NESS sources, per line (CO and 13CO (2-1) and (3-2)) and per tier (very low to extreme).

A.2. Corner plots

Fig. A.2: Corner plot of the MCMC fit to the log(MLR)-log(DPR) plot in Figure 5. The 16th, 50th, and 84th percentiles are marked with dashed lines, and the contour plots show 1σ and 2σ contours.

Fig. A.3: Corner plot of the MCMC fit to the log(MLR)-expansion velocity plot in Figure 7.
The 16th, 50th, and 84th percentiles are marked with dashed lines, and the contour plots show 1σ and 2σ contours.

Appendix B: Description of the literature sample for comparison with NESS results

We will compare our observations to a combined literature sample of relatively large previous studies of AGB stars that have calculated gas MLRs, which includes the results of Loup et al. (1993); Schöier & Olofsson (2001); Olofsson et al. (2002); Gonzalez Delgado et al. (2003); Ramstedt et al. (2009); De Beck et al. (2010). This sample has a total of 616 observational data points across ∼350 sources, comparable to our 1013 data points across 493 sources so far. This section will give an overview of the chosen samples of these studies, and summarize their results.

Loup et al. (1993) collect a sample of 444 evolved stars with previous detections in either CO or HCN. They calculate MLRs for 284 AGB sources, using an empirical formula based on CO (1-0) intensity from Knapp & Morris (1985), and taking into account the CO dissociation radii calculated by Mamon et al. (1988). In cases where they have several observations of a source, they report a mean calculated MLR and measured expansion velocity, but without an indication of the spectral quality of the various observations. As such, there are some cases of large discrepancies between individual values for a single source. For oxygen-rich AGB stars they find MLRs ranging from 1 × 10−7 to 5 × 10−5 M⊙ yr−1, and expansion velocities generally between 5-20 km s−1, with very few sources above 20-25 km s−1. For carbon-rich AGB stars they find MLRs ranging from 3 × 10−7 to 5 × 10−5 M⊙ yr−1, and expansion velocities generally between 5-30 km s−1, with a fairly continuous distribution up to 35 km s−1. They caution that their sample has "no sound statistical basis but merely reflects the personal biases of the various observers in the field." It is obviously biased towards stronger sources (which tend to have higher MLRs) as they require a CO or HCN detection. The sample is also biased towards peculiar sources that previous observers have been interested in, such as bipolar outflows, which will make the MLR calculations less reliable. Their sample is also strongly biased against galactic plane sources and towards sources in the northern sky, though they note their inclusion of the Nyman et al. (1992) SEST study helps mitigate this.

Schöier & Olofsson (2001) observe a sample of 68 carbon-rich AGB stars, consisting of all sources with CO detections from an earlier study by Olofsson et al. (1993) which targeted the brightest (K < 2 mag) carbon stars in the sky. Schöier & Olofsson (2001) note that their sample includes all sources with distances up to 500 pc, and is probably only missing about a third of sources out to the maximum distance of ∼1 kpc. They use observations of CO (1-0), (2-1), and (3-2), and radiative transfer modelling to determine MLRs. 61 sources are well fit with a 1D model; of the remaining 7 sources, 5 show detached shells and 2 are not spherically symmetric. From the well-fit sources they derive MLRs between 5 × 10−9 and 2 × 10−5 M⊙ yr−1, with a large fraction of sources around 3 × 10−7 M⊙ yr−1 and very few below 5 × 10−8 M⊙ yr−1. They say the lack of MLRs below 5 × 10−8 M⊙ yr−1 seems to be real, and probably indicates a lower limit to what is required to drive a dusty wind.
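Empirical recipes of the Knapp & Morris (1985) or Ramstedt et al. (2008) type map a distance-scaled, velocity-integrated line intensity onto an MLR through a power law calibrated on radiative-transfer models. The sketch below only illustrates the shape of such a recipe; the constants s, a, b and c are placeholders, not the published calibrations:

```python
def empirical_mlr(I_co, theta_b, d, v_exp, f_co,
                  s=1.0e-6, a=0.8, b=1.0, c=0.5):
    """Schematic empirical gas MLR estimate in Msun/yr.

    I_co    : velocity-integrated main-beam intensity (K km/s)
    theta_b : beam FWHM (arcsec)
    d       : distance (pc)
    v_exp   : wind expansion velocity (km/s)
    f_co    : CO/H2 abundance ratio
    s,a,b,c : placeholder calibration constants; the published recipes
              tabulate these per transition, and the values here are
              NOT those numbers.
    """
    return s * (I_co * theta_b**2 * (d / 100.0)**2)**a \
             * v_exp**b / (f_co / 2.0e-4)**c
```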
They find that in general at high MLR the most important feature in determining the MLR is the temperature structure, while a wider range of parameters are important at low MLR. This makes sense as the CO emission tends to be saturated for high MLRs, making it harder to derive good model results from a few CO observations. They also find the MLR to be well correlated with the measured expansion velocity, and estimate that the studied types of carbon stars return ∼0.05 M⊙ yr−1 of gas to the galaxy, while more extreme carbon stars (with MLRs above 2 × 10−5 M⊙ yr−1) may provide an order of magnitude more.

Olofsson et al. (2002) present a sample of 69 oxygen-rich AGB stars, which are either semi-regular or irregular variables. These are the sources with CO detections from an earlier sample by Kerschbaum & Olofsson (1999), chosen based on IRAS colors indicating a dusty AGB envelope. They derive distances by assuming a luminosity of 4000 L⊙. They use observations of CO (1-0), (2-1), (3-2), and (4-3), and 1D radiative transfer modelling to derive MLRs ranging from 2 × 10−8 to 8 × 10−7 M⊙ yr−1, and find expansion velocities between 2.2 and 14.4 km s−1. 30% of their sources have expansion velocities below 5 km s−1, so this sample seems to be biased towards slower winds compared to M stars in the NESS sample. 5 sources show expansion velocities below 3 km s−1, which corresponds to the escape velocity at 100 R⋆, far beyond the normally accepted acceleration zone, which only extends to ∼20 R⋆ (a quick numerical check appears below). They speculate that this may be due to low radiative acceleration efficiency, leading to gas moving at a constant velocity from a few R⋆ and eventually escaping, yielding both low MLRs and low expansion velocities. In general they find a good correlation between MLR and expansion velocity. They also compare their results with the Schöier & Olofsson (2001) results described above, which were derived using the same methods, finding that the median MLRs are very similar. However, Olofsson et al. (2002) see a sharp cutoff around MLRs of 10−6 M⊙ yr−1, which seems to be the maximum for these types of stars. They note, however, that their sample is biased by the IRAS colors selecting for dusty stars, and they of course do not include Mira variables, which tend to have higher MLRs. Their range of MLRs also does not extend to values as low as those found for the carbon-rich sample.

Gonzalez Delgado et al. (2003) have a sample of 71 oxygen-rich AGB stars, which have all been detected in CO emission by Kerschbaum & Olofsson (1999) and Olofsson et al. (2002). Using observations of several low-J transitions of SiO, for which they find a detection rate of ∼60%, and radiative transfer modelling, they derive MLRs and SiO radial abundance distributions for 44 sources. Additionally, they model CO (1-0), (2-1), (3-2), and (4-3) emission from the 12 Mira variables in their sample to derive MLRs. For these 12 sources they find a very high median MLR of 1.3 × 10−5 M⊙ yr−1, and only two sources have low MLRs around 10−7 M⊙ yr−1. The median expansion velocity for the Miras is 15.3 km s−1, also significantly higher than the 7 km s−1 found by Olofsson et al. (2002) for their sample of irregular and semiregular variables. Again, the two low-MLR Miras are the only ones with expansion velocities below 10 km s−1. Overall they find MLRs ranging from 2 × 10−8 to 4 × 10−5 M⊙ yr−1, with a median value of 4 × 10−7 M⊙ yr−1. For expansion velocities they find a range from 2.3 to 19.3 km s−1, with a median value of 7.5 km s−1.
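The ∼3 km s−1 escape-velocity figure quoted above is easy to verify to order of magnitude; the stellar parameters assumed below (1 M⊙ and R⋆ = 300 R⊙) are illustrative choices, not values taken from Olofsson et al. (2002):

```python
import numpy as np

G = 6.674e-11       # gravitational constant [m^3 kg^-1 s^-2]
M_sun = 1.989e30    # [kg]
R_sun = 6.957e8     # [m]

M_star = 1.0 * M_sun       # assumed stellar mass
R_star = 300.0 * R_sun     # assumed AGB stellar radius

r = 100.0 * R_star                      # 100 stellar radii
v_esc = np.sqrt(2.0 * G * M_star / r)   # escape velocity at r
print(f"v_esc(100 R*) ~ {v_esc / 1e3:.1f} km/s")   # ~3.6 km/s
```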
They find that the SiO lines are generally narrower than CO, but have wide line wings, meaning the measured expansion velocities from both SiO and CO lines are similar.

Ramstedt et al. (2009) have a sample of 40 S-type AGB stars with previous CO detections, largely taken from the sample of Jorissen & Knapp (1998) of IRAS PSC S-type sources with good quality IRAS fluxes. They note their sample is likely biased towards higher MLRs, but they believe it to be representative of mass-losing S-type stars and complete to a distance of 600 pc (while the largest distance in their sample is 1210 pc). Using observations of CO (1-0) and (2-1), as well as one or more low-J transitions of SiO (which they detect in 26 sources), and 1D radiative transfer modelling, they derive MLRs and CO and SiO radial abundance distributions. They find median MLRs of 4.5 × 10−7 M⊙ yr−1 and 1.75 × 10−7 M⊙ yr−1 for their Mira and SRV samples respectively. These numbers are comparable to the median value of 3 × 10−7 M⊙ yr−1 found for carbon-rich stars by Schöier & Olofsson (2001) and for oxygen-rich stars by Olofsson et al. (2002) and Gonzalez Delgado et al. (2003). They find a median expansion velocity of 8 km s−1, similar to the results for oxygen-rich sources, and slightly lower than the 11 km s−1 found for carbon-rich sources in previous studies.

De Beck et al. (2010) have a sample of 69 sources, including mostly AGB stars but also some RSGs, hypergiants, post-AGB stars, and YSOs. The data for this sample, consisting of 12CO and 13CO transitions up to J=6-5, have been assembled over many years. They use several CO transitions and 1D radiative transfer modelling to derive analytical expressions to estimate MLRs. They use this procedure to determine the MLRs for 50 of the evolved stars in the sample to which they could fit a soft parabola profile, of which 39 are AGB stars. They find AGB MLRs ranging from 4 × 10−8 to 6 × 10−5 M⊙ yr−1, with a median value of 4.1 × 10−6 M⊙ yr−1. This is a higher median value than the ∼3 × 10−7 M⊙ yr−1 found by the previously mentioned studies, indicating a significant bias towards bright sources in this sample. They find a correlation between MLR and pulsation period for periods below ∼850 days, representing Mira and semiregular AGB pulsators, as well as short-period OH/IR stars. For 29 stars in their sample they have both 12CO and 13CO observations, and hence are able to estimate 12CO/13CO ratios by dividing their respective integrated intensities (with a correction factor for differences in line strength). They find 12CO/13CO ratios ranging from 3.6 to 30.7 for their subsample of AGB stars (their Table 8), and note that these estimates are actually lower limits as the 12CO lines are often optically thick.

Many other studies of samples of AGB stars also draw from the aforementioned surveys for their source selection (e.g., Teyssier et al. 2011; Massalkhi et al. 2020; Ramstedt et al. 2020). Overall this literature sample consists largely of sources with previous CO detections, so it is biased towards the brighter AGB stars with relatively high MLRs. The NESS sample thus complements these literature samples (Figure 1).
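The De Beck et al. (2010) intensity-ratio estimate can be written compactly. The ν² scaling below is one plausible reading of their "correction factor for differences in line strength", and is an assumption rather than their exact prescription:

```python
def isotopologue_ratio(I12, I13, nu12=230.538, nu13=220.399):
    """12CO/13CO from integrated intensities of the same rotational line.

    Assumes both lines are optically thin; the (nu13/nu12)^2 factor is one
    plausible correction for the difference in line strength. Frequencies
    default to the (2-1) rest values in GHz.
    """
    return (I12 / I13) * (nu13 / nu12) ** 2

print(isotopologue_ratio(I12=45.0, I13=5.0))   # illustrative intensities
```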
Some of the samples of carbon-rich or S-type stars are said to be complete out to a few hundred pc, but there has been little attempt to form a complete sample of the much more abundant oxygen-rich stars, and this significantly hinders our ability to draw firm physical conclusions about them.

Appendix C: Columns available in online table of results

Table C.1: Description of columns available in the online table. Units, where applicable, are given in square brackets.

IRASPSC: IRAS PSC identifier
SIMBAD_ID: SIMBAD identifier
Chem_type: Chemical type based on mid-IR spectra ['O' or 'C']; see Section 2.4
Tier: Grouping from Scicluna et al. (2022) based on location in distance-DPR space ('very low', 'low', 'intermediate', 'high', or 'extreme')
Peak_CO(2-1) (a): Peak intensity of the CO(2-1) line [K]
Peak_CO(2-1)_error: Uncertainty in the peak intensity of the CO(2-1) line [K]
v_inf_CO(2-1): Expansion velocity of the shell derived from the CO(2-1) line (uncertainty 1 km s−1) [km/s]
Int_CO(2-1): Integrated intensity of the CO(2-1) line [K km s−1]
Int_CO(2-1)_error: Uncertainty in the integrated intensity of the CO(2-1) line [K km s−1]
nchan_CO(2-1): Number of velocity channels in the CO(2-1) spectral band
CO(2-1)_rms: RMS noise in the CO(2-1) line [K]
Int_13CO(2-1): Integrated intensity of the 13CO(2-1) line [K km s−1]
Int_13CO(2-1)_error: Uncertainty in the integrated intensity of the 13CO(2-1) line [K km s−1]
Peak_CO(3-2): Peak intensity of the CO(3-2) line [K]
Peak_CO(3-2)_error: Uncertainty in the peak intensity of the CO(3-2) line [K]
v_inf_CO(3-2): Expansion velocity of the shell derived from the CO(3-2) line (uncertainty 1 km s−1) [km/s]
Int_CO(3-2): Integrated intensity of the CO(3-2) line [K km s−1]
Int_CO(3-2)_error: Uncertainty in the integrated intensity of the CO(3-2) line [K km s−1]
nchan_CO(3-2): Number of velocity channels in the CO(3-2) spectral band
CO(3-2)_rms: RMS noise in the CO(3-2) line [K]
Int_13CO(3-2): Integrated intensity of the 13CO(3-2) line [K km s−1]
Int_13CO(3-2)_error: Uncertainty in the integrated intensity of the 13CO(3-2) line [K km s−1]
MLR_CO(2-1): Empirical mass-loss rate derived from the CO(2-1) line using the Ramstedt et al. (2008) formula [M⊙ yr−1]
MLR_CO(2-1)_error: Uncertainty in the mass-loss rate derived from the CO(2-1) line [M⊙ yr−1]
CO(2-1)_upperlimit: Flag designating whether the CO(2-1) line is detected (0) or is below 3 times the RMS (1)
MLR_CO(3-2): Empirical mass-loss rate derived from the CO(3-2) line using the Ramstedt et al. (2008) formula [M⊙ yr−1]
MLR_CO(3-2)_error: Uncertainty in the mass-loss rate derived from the CO(3-2) line [M⊙ yr−1]
CO(3-2)_upperlimit: Flag denoting whether the CO(3-2) line is detected (0) or is below 3 times the RMS (1)
IsoRatio(2-1): Isotopic ratio 12CO/13CO derived from the (2-1) lines
IsoRatio(3-2): Isotopic ratio 12CO/13CO derived from the (3-2) lines
DPR: Dust-production rate from SED fit [M⊙ yr−1]
DPR_error: Uncertainty in the dust-production rate [M⊙ yr−1]
GasDust_CO(2-1): Gas-to-dust ratio derived from the CO(2-1) line
GasDust_CO(2-1)_error: Uncertainty in the gas-to-dust ratio derived from the CO(2-1) line
GasDust_CO(3-2): Gas-to-dust ratio derived from the CO(3-2) line
GasDust_CO(3-2)_error: Uncertainty in the gas-to-dust ratio derived from the CO(3-2) line
SoftParabola_CO(2-1): Flag denoting whether the CO(2-1) line shape is a soft parabola (1) or not (0)
SoftParabola_CO(3-2): Flag denoting whether the CO(3-2) line shape is a soft parabola (1) or not (0)
OptThin_12CO(2-1): Flag denoting whether the CO(2-1) line is optically thin (1) or not (0)
OptThin_12CO(3-2): Flag denoting whether the CO(3-2) line is optically thin (1) or not (0)
First_det: Flag denoting whether the NESS observation is the first CO detection for the source (1) or not (0)

Notes: (a) All peak intensities, expansion velocities, and the related uncertainties in the table are estimated from soft-parabola fits. Integrated intensities are derived from the spectrum, not from the models.
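As a usage illustration for the columns above, one could filter the released table as follows (the filename is hypothetical; the column names are those listed in Table C.1):

```python
import pandas as pd

# Hypothetical local copy of the online results table (filename assumed).
tbl = pd.read_csv("ness_dr1_co_results.csv")

# Keep detected CO(2-1) lines only (upper-limit flag = 0, per Table C.1).
det = tbl[tbl["CO(2-1)_upperlimit"] == 0]

# Recompute a gas-to-dust ratio from the tabulated MLR and DPR columns.
gas_to_dust = det["MLR_CO(2-1)"] / det["DPR"]
print(gas_to_dust.median())
```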
arXiv:2510.14804v1 [math.CO] 16 Oct 2025
On cliques in hypergraphs

Jun Gao∗
Mathematics Institute and DIMAP, University of Warwick, Coventry, UK
October 17, 2025

Abstract

We prove that for any k ≥ 3, every k-uniform hypergraph on n vertices contains at most n − o(n) different sizes of cliques (maximal complete subgraphs). In particular, the 3-uniform case answers a question of Erdős.

1 Introduction

Given an integer k ≥ 2 and a k-uniform hypergraph H, we say a vertex set S of H forms a complete graph if every k-vertex subset of S is an edge in H. A complete subgraph of H is called a clique if it is not contained in any other complete subgraph of H. In this paper we study how many distinct clique sizes can occur in a k-uniform hypergraph on n vertices. We denote by g(n, k) the largest possible number of distinct clique sizes in a k-uniform hypergraph on n vertices. For graphs (i.e., when k = 2), such problems were first studied by Moon and Moser [4] in 1965. They proved (throughout this paper log will denote the logarithm to base 2) that
$$n - \log n - 2\log\log n < g(n, 2) \le n - \lfloor \log n \rfloor.$$
Subsequently, Erdős [2] improved the lower bound to $g(n, 2) > n - \log n - \log^* n - O(1)$, where $\log^* n$ is the number of iterated logarithms such that $\log \log \ldots \log n < 1$. Erdős conjectured that this represented the correct order of magnitude. Spencer [5] later disproved this by proving that $g(n, 2) > n - \log n - O(1)$. For hypergraphs, Erdős (see [3], or Erdős Problem #775 [1]) showed that there is a 3-uniform hypergraph on n vertices which contains $n - \log^* n$ cliques of distinct size and asked whether it can be shown that there is no 3-uniform hypergraph with n − C cliques of distinct sizes, for some constant C and large n. In this paper, we prove that such a hypergraph does not exist for any uniformity k, thereby answering this question.

Theorem 1.1. For any integers k ≥ 3 and C ≥ 0, there exists a constant N = N(k, C) such that for all n ≥ N, every n-vertex k-uniform hypergraph contains no more than n − C different sizes of cliques.

∗Research supported by ERC Advanced Grant 101020255. Email: jun.gao@warwick.ac.uk

2 Proof of Theorem 1.1

For any hypergraph H, we denote by V(H) and E(H) the vertex set and edge set, respectively. For a vertex v in H, denote by $N_H(v)$ and $\deg_H(v)$ the neighborhood and the degree of v in H, respectively. Given a tree T rooted at vertex u, let x, y be two vertices in T. We use dist(x, y) to denote the length of the path between x and y in T, and define dist(x, x) = 0. Let T(x) denote the subtree of T rooted at x, i.e., T(x) is the subtree of T induced by x and all vertices v such that the unique path from v to u passes through x. For any positive integer n, let [n] denote the set {1, 2, ..., n}. Before proving the theorem, we introduce a definition that will be used in the argument.

Definition 2.1 ((k, C)-layered tree). We say a tree T is a (k, C)-layered tree if we can order the vertices of the tree as $v_0, v_1, v_2, \cdots, v_t$ such that the following holds:
(1) Each vertex $v_i$ is a neighbor of some $v_j$ with j < i.
(2) For any $i \in [t]$, $\mathrm{dist}(v_0, v_i) \le k$.
(3) For each vertex $v_i \in V(T)$, the degree of $v_i$ in T is at most $2^{C+i}$.

The next lemma shows that any (k, C)-layered tree has only a bounded number of vertices in terms of k and C.

Lemma 2.2. There exists a constant $N_0 = N_0(C, k)$ such that any (k, C)-layered tree has at most $N_0$ vertices.

Proof. Let T be a (k, C)-layered tree with the vertices $v_0, v_1, v_2, \cdots, v_t$ satisfying the conditions in Definition 2.1. The proof proceeds by induction on k.
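Before the induction, it may help to spell out what the definition gives in the simplest case; the following is our own illustration of the k = 1 base case verified below:

```latex
% k = 1: condition (2) forces dist(v_0, v_i) <= 1 for every i in [t], so each
% v_i is adjacent to v_0, while condition (3) gives deg_T(v_0) <= 2^{C+0} = 2^C.
% Hence
\[
  |V(T)| \;=\; t + 1 \;\le\; \deg_T(v_0) + 1 \;\le\; 2^{C} + 1,
\]
% and the star K_{1,2^C} (with the centre ordered first) attains this bound.
```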
2 Proof of Theorem 1.1

For any hypergraph H, we denote by V(H) and E(H) the vertex set and edge set, respectively. For a vertex v in H, denote by N_H(v) and deg_H(v) the neighborhood and the degree of v in H, respectively. Given a tree T rooted at a vertex u, let x, y be two vertices in T. We use dist(x, y) to denote the length of the path between x and y in T, and define dist(x, x) = 0. Let T(x) denote the subtree of T rooted at x, i.e., T(x) is the subtree of T induced by x and all vertices v such that the unique path from v to u passes through x. For any positive integer n, let [n] denote the set {1, 2, . . . , n}.

Before proving the theorem, we introduce a definition that will be used in the argument.

Definition 2.1 ((k, C)-layered tree). We say a tree T is a (k, C)-layered tree if we can order the vertices of the tree as v_0, v_1, v_2, . . . , v_t such that the following holds:

(1) Each vertex v_i is a neighbor of some v_j with j < i.
(2) For any i ∈ [t], dist(v_0, v_i) ≤ k.
(3) For each vertex v_i ∈ V(T), the degree of v_i in T is at most 2^{C+i}.

The next lemma shows that any (k, C)-layered tree has only a bounded number of vertices in terms of k and C.

Lemma 2.2. There exists a constant N_0 = N_0(C, k) such that any (k, C)-layered tree has at most N_0 vertices.

Proof. Let T be a (k, C)-layered tree with the vertices v_0, v_1, v_2, . . . , v_t satisfying the condition in Definition 2.1. The proof proceeds by induction on k. When k = 1, for any i ∈ [t], v_i is a neighbor of v_0 as dist(v_0, v_i) ≤ 1. It follows from deg_T(v_0) ≤ 2^C that |V(T)| = t + 1 ≤ 2^C + 1, so Lemma 2.2 holds for k = 1.

Assume Lemma 2.2 holds for k − 1. Let v_{x_1}, v_{x_2}, . . . , v_{x_m} be the neighbors of v_0 in T such that x_1 < x_2 < · · · < x_m. Set x_{m+1} = t + 1. For each i ∈ [m], let T_i be the subtree of T with vertex set {v_0, v_1, v_2, . . . , v_{x_{i+1}−2}, v_{x_{i+1}−1}}. Let T'_i be the tree obtained from T_i by merging the vertices v_{x_1}, v_{x_2}, . . . , v_{x_i} into a single new vertex w_{i,0}, replacing each edge with one endpoint in {v_{x_1}, v_{x_2}, . . . , v_{x_i}} by an edge incident to w_{i,0}, and then deleting the vertex v_0. Then we have |V(T'_i)| = x_{i+1} − i.

Claim 2.3. If deg_T(v_{x_j}) ≤ C_j for any j ∈ [i], then T'_i is a (k − 1, 2^C + C + \sum_{j=1}^{i} C_j)-layered tree.

Proof of the claim. Given an integer i, we set w_{i,0} = w_0. The remaining vertices in the tree T'_i are ordered as w_1, w_2, . . . , w_{x_{i+1}−i−1} in such a way that their relative order is the same as in the original sequence, i.e., if w_{i_1} = v_{j_1} and w_{i_2} = v_{j_2}, then i_1 < i_2 if and only if j_1 < j_2. Next, we prove that T'_i is a (k − 1, 2^C + C + \sum_{j=1}^{i} C_j)-layered tree by verifying the three conditions in the definition with the ordering w_0, w_1, w_2, . . . , w_{x_{i+1}−i−1}. The first two conditions are automatically satisfied, as we do not change the order of the vertices except for w_0, and the vertex v_0 is removed. Note that

deg_{T'_i}(w_0) = \sum_{j=1}^{i} (deg_{T_i}(v_{x_j}) − 1) ≤ \sum_{j=1}^{i} (deg_T(v_{x_j}) − 1) < \sum_{j=1}^{i} C_j ≤ 2^{\sum_{j=1}^{i} C_j},

thus the third condition holds for w_0. For each w_j ≠ w_0, there exists j' ≤ j + 2^C such that w_j = v_{j'}, as the size of the vertex set we are contracting is i, which satisfies i ≤ m ≤ 2^C. Note that for each w_j ∈ T'_i with w_j ≠ w_0, the degree of w_j in T'_i is at most the degree of w_j = v_{j'} in T. We derive that

deg_{T'_i}(w_j) ≤ deg_T(v_{j'}) ≤ 2^{j'+C} ≤ 2^{j+2^C+C} ≤ 2^{j+2^C+C+\sum_{j=1}^{i} C_j}.

So T'_i is a (k − 1, 2^C + C + \sum_{j=1}^{i} C_j)-layered tree. ■

Claim 2.4. For each j ∈ [m], there exists a constant C_j = C_j(C, k) such that deg_T(v_{x_j}) ≤ C_j.

Proof of the claim. Let m' be the maximum integer such that there exists a constant C_{m'} = C_{m'}(C, k) with deg_T(v_{x_{m'}}) ≤ C_{m'}. It follows from the fact that v_1 = v_{x_1} is a neighbor of v_0 that we can take C_1 = 2^{C+1}, which implies that such an m' exists. Suppose to the contrary that m' < m. By Claim 2.3, T'_{m'} is a (k − 1, 2^C + C + \sum_{j=1}^{m'} C_j)-layered tree. By the inductive hypothesis, we know that there exists a constant N_{m'} = N_0(2^C + C + \sum_{i=1}^{m'} C_i, k − 1) such that |V(T'_{m'})| = x_{m'+1} − m' ≤ N_{m'}. Take C_{m'+1} = 2^{N_{m'}+2^C+C}; then C_{m'+1} is a constant in terms of k and C with

deg_T(v_{x_{m'+1}}) ≤ 2^{x_{m'+1}+C} ≤ 2^{N_{m'}+m'+C} ≤ C_{m'+1},

a contradiction. ■

By Claim 2.3, T'_m is a (k − 1, 2^C + C + \sum_{i=1}^{m} C_i)-layered tree. By the inductive hypothesis and Claim 2.4, there exists a constant N_m in terms of k and C such that |V(T'_m)| = t + 1 − m ≤ N_m, which implies that |V(T)| = t + 1 ≤ N_m + m ≤ N_m + 2^C. So Lemma 2.2 holds for k.

The following fact will be useful in our proof.

Fact 2.5. Let H be a k-uniform hypergraph and S_1, S_2, . . . , S_{k+1} be k + 1 vertex sets of H. If for each i ∈ [k + 1], (\bigcup_{j=1}^{k+1} S_j) \ S_i forms a complete graph in H, then \bigcup_{j=1}^{k+1} S_j forms a complete graph in H.

Now we are ready to prove Theorem 1.1.

Proof of Theorem 1.1. Let H be a k-uniform hypergraph on n vertices. Suppose that H contains at least n − C different sizes of cliques. Let X_0, X_1, X_2, . . . , X_t be a collection of cliques in H of pairwise distinct sizes such that |V(X_0)| ≥ |V(X_1)| ≥ · · · ≥ |V(X_t)|.
Since H contains at least n − C different sizes of cliques, we derive that t + 1 ≥ n − C and |V(X_i)| ≥ n − C − i for i = 0, 1, 2, . . . , t. We will show that there exists a (k − 1, C)-layered tree T with V(T) = {x_0, x_1, . . . , x_t}. If such a tree exists, then by Lemma 2.2, there exists a constant N_0 = N_0(C, k − 1) such that

n − C ≤ t + 1 = |V(T)| ≤ N_0,

which implies that n ≤ N_0 + C. So if we take N = N_0 + C + 1, then every k-uniform hypergraph on n vertices with n ≥ N contains no more than n − C different sizes of cliques. So we only need to show that such a tree exists.

Let A_0 = V(X_0), B_0 = V(H) \ A_0, and let T_0 be the tree with the single vertex x_0. We now construct a sequence of trees T_0 ⊆ T_1 ⊆ T_2 ⊆ · · · ⊆ T_t = T rooted at x_0, for each i ∈ {0, 1, 2, . . . , t}, as follows:

• Assume that for some integer i with 0 ≤ i < t, we have constructed a tree T_i on the vertex set {x_0, x_1, . . . , x_i}, together with two sequences of sets A_0, A_1, . . . , A_i and B_0, B_1, . . . , B_i. Set T* = T_i, u = x_0. For ease of notation, for each 0 ≤ j ≤ t, we define X_{x_j} = X_j, A_{x_j} = A_j and B_{x_j} = B_j, respectively. We construct the new tree T_{i+1} by adding the vertex x_{i+1} following the rule described below:

1. If for every vertex v adjacent to u in T*, V(X_v) ∩ B_u ≠ V(X_{i+1}) ∩ B_u, or if u has no neighbor in T*, then let T_{i+1} be the tree obtained from T_i by adding the edge u x_{i+1}, and set A_{i+1} = A_u ∩ V(X_{i+1}) and B_{i+1} = A_u \ A_{i+1}.

2. Otherwise, there exists a vertex w adjacent to u in T* such that V(X_w) ∩ B_u = V(X_{i+1}) ∩ B_u. Then we reset T* = T_i(w), the subtree of T_i rooted at w, and reset u = w. Then return to Step 1.

• Since after each iteration the distance between x_0 and u increases by one, the process must terminate after finitely many steps. Consequently, we obtain the tree T_{i+1} and the two sets A_{i+1}, B_{i+1}.

Now we claim that T is a (k − 1, C)-layered tree with the vertex ordering x_0, x_1, . . . , x_t satisfying the condition in Definition 2.1. By the construction of T, we know that Condition (1) holds automatically. Before proving Conditions (2) and (3), we first establish some properties of the paths in T. Let P = y_0 y_1 y_2 . . . y_r be a path of length r with x_0 = y_0 as an endpoint. Let S_i = V(X_{y_i}) ∩ B_{y_{i−1}} for i ∈ [r] and S_0 = ∅. By the construction of T, the following holds.

Figure 1: Properties of the path in T.

• For any i ≤ r, A_{y_{i−1}} = A_{y_i} ⊔ B_{y_i}, which implies that V(H) = A_0 ⊔ B_0 = · · · = A_{y_r} ⊔ B_{y_r} ⊔ B_{y_{r−1}} ⊔ · · · ⊔ B_0, where we use ⊔ to denote the disjoint union.

• For distinct i and j, S_i = V(X_{y_i}) ∩ B_{y_{i−1}} and S_j = V(X_{y_j}) ∩ B_{y_{j−1}} are disjoint, as B_{y_{i−1}} and B_{y_{j−1}} are disjoint.

• For each pair i, j with 1 ≤ i < j ≤ r, since y_j is not a neighbor of y_{i−1} and y_j ∈ T(y_i), by the process of adding the vertex y_j to the tree, we derive that V(X_{y_j}) ∩ B_{y_{i−1}} = V(X_{y_i}) ∩ B_{y_{i−1}} = S_i. This means that for any v, w ∈ T(u) with w ∈ T(v) and v ≠ u, V(X_v) ∩ B_u = V(X_w) ∩ B_u.

• For each j ≥ 1, by the definition of A_{y_j}, we have V(X_{y_j}) ∩ A_{y_{j−1}} = A_{y_j}. Then we derive that

V(X_{y_j}) = V(X_{y_j}) ∩ V(H) = V(X_{y_j}) ∩ (A_{y_{j−1}} ⊔ \bigsqcup_{i=0}^{j−1} B_{y_i}) = A_{y_j} ⊔ \bigsqcup_{i=1}^{j} S_i.

• For any j ∈ [r], since V(X_{y_j}) ⊄ V(X_{y_{j−1}}) and

V(X_{y_j}) = A_{y_j} ⊔ \bigsqcup_{i=1}^{j} S_i ⊆ A_{y_{j−1}} ∪ S_j ∪ \bigcup_{i=1}^{j−1} S_i = S_j ∪ V(X_{y_{j−1}}),

we have S_j ≠ ∅, which implies that if v ∈ T(u) with v ≠ u, then V(X_v) ∩ B_u ≠ ∅.

To prove (2), we have the following claim.

Claim 2.6. For any i ∈ [t], dist(x_0, x_i) ≤ k − 1.

Proof of the claim. Suppose, for contradiction, that there exists a path of length k with x_0 as an endpoint. Without loss of generality, we assume that y_0 y_1 . . . y_k is a path of length k in T with x_0 = y_0.
Let U = A_{y_k} ∪ S_k and W = A_{y_{k−1}}; then we have

U ∪ S_{k−1} ∪ S_{k−2} ∪ · · · ∪ S_1 = V(X_{y_k}),
W ∪ S_{k−1} ∪ S_{k−2} ∪ · · · ∪ S_1 = V(X_{y_{k−1}}).

For each j ∈ [k − 1], it follows from S_i ⊆ B_{y_{i−1}} and A_{y_{i−1}} = A_{y_i} ∪ B_{y_i} for any i ∈ [k] that

(U ∪ W ∪ S_{k−1} ∪ S_{k−2} ∪ · · · ∪ S_1) \ S_j = A_{y_{k−1}} ∪ \bigcup_{i=j+1}^{k} S_i ∪ \bigcup_{i=1}^{j−1} S_i ⊆ A_{y_{k−1}} ∪ \bigcup_{i=j+1}^{k} B_{y_{i−1}} ∪ \bigcup_{i=1}^{j−1} S_i = A_{y_{j−1}} ∪ \bigcup_{i=1}^{j−1} S_i = V(X_{y_{j−1}}).

By Fact 2.5, V(X_{y_k}) ∪ V(X_{y_{k−1}}) = U ∪ W ∪ S_{k−1} ∪ S_{k−2} ∪ · · · ∪ S_1 forms a complete graph, which contradicts the maximality of X_{y_k} and X_{y_{k−1}}. ■

Finally, we prove that Condition (3) holds. By the construction of T, for any vertex x_i, the new vertices added adjacent to x_i correspond to distinct subsets of B_i. Moreover, each of these subsets is nonempty. Therefore, the degree of x_0 in T is at most 2^{|B_0|} − 1 < 2^{|B_0|}, and the degree of x_i for i ≠ 0 is at most 1 + 2^{|B_i|} − 1 = 2^{|B_i|}. Since B_i ∩ V(X_i) = ∅, we have |B_i| ≤ n − |V(X_i)| ≤ C + i, which implies that deg_T(x_i) ≤ 2^{C+i} for each i ∈ {0, 1, 2, . . . , t}. Thus, T is a (k − 1, C)-layered tree. This completes the proof of Theorem 1.1.

Acknowledgements

The author would like to thank Shumin Sun for reading a draft of this paper and for helpful comments.

References

[1] T. F. Bloom. Erdős problem #775. https://www.erdosproblems.com/775, 2025. Accessed: 2025-10-11.
[2] P. Erdős. On cliques in graphs. Israel J. Math., 4:233–234, 1966.
[3] R. K. Guy. A miscellany of Erdős problems. American Mathematical Monthly, 90:118–120, 1983.
[4] J. W. Moon and L. Moser. On cliques in graphs. Israel J. Math., 3:23–28, 1965.
[5] J. H. Spencer. On cliques in graphs. Israel J. Math., 9:419–421, 1971.
Augmented Lagrangian Method based adjoint space framework for sparse reconstruction of acoustic source with boundary measurements

Nirui Tan∗  Hongpeng Sun†‡

October 17, 2025

∗School of Mathematics, Renmin University of China, 100872 Beijing, China. Email: niruitan@ruc.edu.cn.
†School of Mathematics, Renmin University of China, 100872 Beijing, China. Email: hpsun@amss.ac.cn.
‡Corresponding author.

Abstract

We propose a semismooth Newton-based augmented Lagrangian method for reconstructing sparse sources in inverse acoustic scattering problems. The semismooth Newton method can be iterated in the space of measurements instead of the space of the unknown source to be reconstructed. It is highly efficient, especially when the measurement data are much fewer than the degrees of freedom of the acoustic source. The source can then be recovered through Fenchel–Rockafellar duality theory. This yields substantial acceleration and reduces the computational cost. The numerical examples show the high efficiency of the proposed semismooth Newton-based methods.

1 Introduction

Inverse acoustic scattering is crucial in various applications, including sonar imaging, oil prospecting, and non-destructive testing, among others [17]. In this paper, we focus on the reconstruction of acoustic sources, which is a typical inverse acoustic scattering problem. It is well known that the inverse source problem lacks uniqueness [8, 30] due to the presence of non-radiating sources, resulting in high ill-posedness. Appropriate regularization that incorporates prior information can alleviate the ill-posedness, such as L^2 regularization [19]. For convenience, we assume that Ω ⊆ R^d with d = 2 or d = 3 is a bounded and compact domain with boundary ∂Ω of class C^3 which contains the sources. Sparse regularization is a popular choice in signal processing, including L^1 regularization and total variation, which can promote sparsity of the solutions or of their gradients. However, for acoustic scattering involving the Helmholtz equation, which is an infinite-dimensional problem, the existence of a reconstructed solution in the space L^1(Ω) cannot be guaranteed, due to the lack of weak completeness of L^1(Ω) (see Chapter 4 of [10]). In [9, 15, 16], a larger space, the Radon measure space M(Ω), is introduced for inverse problems and sparse optimal control. We also refer to [2] for inverse problems, including electrical impedance tomography with finite measurements, and to [1] for a linearized and locally optimized strategy and algorithms for sparse point acoustic sources. It is known that M(Ω) is a Banach space and can be characterized by its predual space C_0(Ω) through the Riesz representation theorem (see Chapter 4 of [10]),

‖µ‖_{M(Ω)} = sup { ∫_Ω u dµ : u ∈ C_0(Ω), ‖u‖_{C_0(Ω)} ≤ 1 }.   (1.1)

This is equivalent to M(Ω) = C_0(Ω)', which implies that bounded sets of M(Ω) are weakly-∗ compact by the Banach–Alaoglu theorem, since C_0(Ω) is a separable Banach space [10]. The existence of a reconstructed sparse solution in the Radon measure space M(Ω) can then be guaranteed. Moreover, since L^1(Ω) can be embedded in M(Ω), the regularization with M(Ω) can also promote sparsity. Inspired by the developments of Radon measure space regularization [15, 16], [32] considered the sparse reconstruction of an acoustic source in the Radon measure space M(Ω). Full scattering data in Ω are employed, and a semismooth Newton method is developed for the reconstruction algorithms [32].
There are several iterative methods for inverse scattering with smooth functionals such as L^2 regularization, including the gradient descent method [47], recursive linearization [5, 6], and Newton-type methods or distorted Born iterations [11, 24, 26, 27, 29, 39]. For non-smooth regularization, the primal-dual method for the total variation regularized inverse medium scattering problem is developed in [12, 13, 14].

Let us turn to a discrete setting for discussing the efficiency. Suppose that N is the number of unknowns modeling the scatterer and M is the number of measurements. It is claimed that "For most practical situations, the number of unknowns is much larger than the number of measurements, i.e., N ≫ M" in [11]. It was further pointed out that M ∝ √N generally in the two-dimensional case [29]. Currently, these Newton-type or distorted Born-type methods are iterated directly in the domain of the unknowns, which means that a linear system such as Ax = b with A ∈ C^{N×N} has to be solved during each Newton update. It can be much more efficient if each Newton update can be carried out in the measurement domain by solving a linear system such as A_b z = c with A_b ∈ C^{M×M}. One then recovers the N unknowns from the M computed dual variables directly through the optimality conditions. This technique can reduce the computational cost substantially.

In this paper, we build a framework in which the Newton or semismooth Newton iteration executes on the measurement domain. We call the corresponding solution, which has the same dimensions as the measurements, a "dual solution" or "adjoint field" [21]. We then recover the unknown acoustic source through Fenchel–Rockafellar duality theory. Our contributions can be summarized as follows. First, we develop a semismooth Newton-based augmented Lagrangian method, where we can employ semismooth Newton methods in the adjoint space, so that the linear update of the Newton method solves problems with the same dimensions as the boundary measurements. Such ideas for inverse electromagnetic scattering can be found in [21, 22], where incremental iterative methods are developed that involve a linear equation with the same dimensions as the measurements. Compared to [21, 22], our framework can handle nonsmooth regularization, whereas L^2 Tikhonov regularization is employed in [21, 22]. The semismooth Newton method thus can be highly efficient when the boundary measurements are much fewer than the unknowns of the acoustic source to be reconstructed. We would like to point out that a similar dual framework has been studied for statistical or signal data processing [37, 48]. Second, we also design a semismooth Newton method with the Moreau–Yosida regularization [16, section 3.1], [28, Chapter 9], or [32] for directly reconstructing the acoustic source in case the scale of the measurement data is comparable with that of the unknowns.

The paper is organized as follows. In section 2, we introduce the direct problem. In section 3, we present the semismooth Newton-based augmented Lagrangian method in the adjoint space. In section 4, we introduce a semismooth Newton method and a first-order primal-dual method. In section 5, we present detailed numerical experiments in both two and three dimensions. Finally, in section 6, we give a conclusion.
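Before introducing the direct problem, the following toy example (our own illustration, independent of the references) shows the dimension reduction for the smooth L^2 part alone: the ridge-regularized normal equations can be solved either in the N-dimensional unknown space or, equivalently, in the M-dimensional measurement space, through the identity (AᵀA + α₀I)⁻¹Aᵀ = Aᵀ(AAᵀ + α₀I)⁻¹.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 2000, 50                 # N unknowns >> M measurements
A = rng.normal(size=(M, N))
ub = rng.normal(size=M)
alpha0 = 0.1

# unknown-space solve: an N x N linear system
mu_primal = np.linalg.solve(A.T @ A + alpha0 * np.eye(N), A.T @ ub)
# measurement-space (dual/adjoint) solve: only an M x M linear system
mu_dual = A.T @ np.linalg.solve(A @ A.T + alpha0 * np.eye(M), ub)

assert np.allclose(mu_primal, mu_dual)
```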
2 Inverse source problem with boundary measurements

In this paper, we will focus on the analysis and reconstruction for the following inverse scattering problem: reconstruct a sparse source µ in the Radon measure space M(Ω) from a given scattered field on the boundary ∂Ω.

The direct scattering problem in the frequency domain with an inhomogeneous medium in R^d with d = 2 or d = 3 is as follows:

−∆u − k²n(x)u = µ, x ∈ R^d,
lim_{|x|→∞} |x|^{(d−1)/2} (∂u/∂|x| − iku) = 0,   (2.1)

where µ ∈ M(Ω) is a Radon measure and n(x) is the refraction index; see Figure 1 for an illustration. Henceforth, we assume n(x) is real, smooth, and compactly supported, i.e., ℑn = 0 and there exists a bounded domain Ω ⊆ R^d such that supp(n(x) − 1) ⋐ Ω. Throughout this paper, we assume µ is a real Radon measure, which is reasonable in physics, and is also compactly supported. We assume the bounded domain Ω is large enough that µ is compactly supported in Ω, i.e., supp(µ) ⋐ Ω. The Helmholtz equation (2.1) can also be recovered with n(x) ≡ 1. The solution of (2.1) is defined in the "very weak" sense [32, Definition 2.1] with test functions in C^{2,α}. Here, for convenience, we directly work with the following volume potential representation,

(Vµ)(x) := ∫_Ω G(x, y) dµ(y),   (2.2)

Figure 1: Illustration of the acoustic source scattering: the red squares represent the receivers recording the scattered wave, and the trapezoid A represents the acoustic sources.

where the inhomogeneous background Green's function G(x, y) is defined as the radiating solution [18] of

−∆_x G(x, y) − k²n(x)G(x, y) = δ(x − y), x, y ∈ R^d,
lim_{|x|→∞} |x|^{(d−1)/2} (∂G(x, y)/∂|x| − ikG(x, y)) = 0.   (2.3)

Denoting m(x) = 1 − n(x), we can thus construct G(x, y) by the Lippmann–Schwinger integral equation [18]

G(x, y) = Φ(x, y) − k² ∫_Ω Φ(x, z)m(z)G(z, y) dz,   (2.4)

where Φ(x, y) is the fundamental solution of the Helmholtz equation, i.e., Φ(x, y) = (i/4)H_0^{(1)}(k|x − y|) in R² and Φ(x, y) = e^{ik|x−y|}/(4π|x − y|) in R³. For the W^{1,p} estimate of the problem (2.1), we have the following proposition [32, Theorem 2.8].

Proposition 1. For the solution of (2.1) with the representation (2.2), we have the following regularity estimate,

‖u‖_{W^{1,p}(Ω)} ≤ C‖µ‖_{M(Ω)}, 1 ≤ p < d/(d − 1), d = 2 or d = 3.   (2.5)

Here C is a positive constant that does not depend on µ.

According to the trace theorem for W^{1,p}(Ω) in [4, Theorem 5.36], we have

W^{1,p}(Ω) ↪ L^s(∂Ω), p ≤ s ≤ s* = (d − 1)p/(d − p) = (d − 1)/(d/p − 1).   (2.6)

Since p < d/(d − 1) as in (2.5), the admissible range of s is

s = s_0 for any s_0 with p ≤ s_0 < ∞ if d = 2, and s = s_0 for any s_0 with p ≤ s_0 < 2 if d = 3.   (2.7)

With the above trace property (2.6), to reconstruct the sparse source µ ∈ M(Ω) we will make use of the following sparse regularization functional,

min_µ (1/2)‖Vµ − u_b‖²_{L^s(∂Ω)} + α‖µ‖_{M(Ω)}, with p ≤ s < ∞ if d = 2 and p ≤ s < 2 if d = 3,   (2.8)

where α is the regularization parameter and u_b is the measured scattered field on ∂Ω; Vµ satisfies the equation (2.1) as discussed.

Theorem 1. There exists a solution µ ∈ M(Ω) of the regularization functional (2.8).

Proof. The proof is similar to [16, Proposition 2.2] and [9]. Since the energy in (2.8) equals (1/2)‖u_b‖²_{L^s(∂Ω)} when µ = 0, we can find a minimizing sequence {µ_n} in M(Ω) whose M(Ω) norms are bounded by (1/(2α))‖u_b‖²_{L^s(∂Ω)}. Since bounded sets of M(Ω) are weakly-∗ sequentially compact [10] (see Chapter 4), there exists a weakly-∗ convergent subsequence µ_{n,k} converging to some µ* ∈ M(Ω). Denoting u_{n,k} = V(µ_{n,k}), we see that u_{n,k} ∈ W^{1,p}(Ω) with p < d/(d − 1). By Proposition 1, u_{n,k} converges weakly to V(µ*), and V(µ*) is a solution of (2.1) in the sense of the very weak solution [32, Definition 2.1] defined by duality with C^{2,α}. By the weak lower semicontinuity of the norms in L^s(∂Ω) and M(Ω), we conclude that µ* is a minimizer of (2.8) and the existence follows.
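For reference, the fundamental solutions Φ used in (2.4) can be evaluated numerically as follows; a minimal sketch assuming numpy and scipy (the function names are ours).

```python
import numpy as np
from scipy.special import hankel1

def Phi_2d(x, y, k):
    """Fundamental solution (i/4) * H_0^(1)(k|x-y|) of the Helmholtz equation in R^2."""
    r = np.linalg.norm(np.asarray(x, float) - np.asarray(y, float))
    return 0.25j * hankel1(0, k * r)

def Phi_3d(x, y, k):
    """Fundamental solution exp(ik|x-y|) / (4*pi*|x-y|) in R^3."""
    r = np.linalg.norm(np.asarray(x, float) - np.asarray(y, float))
    return np.exp(1j * k * r) / (4.0 * np.pi * r)

print(Phi_2d((0.0, 0.0), (1.0, 0.0), k=6.0), Phi_3d((0, 0, 0), (1, 0, 0), k=6.0))
```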
For the non-smooth minimization problem (2.8), it is convenient to consider the predual problem through the powerful Fenchel–Rockafellar duality theory; see [9, 15, 16, 28] for its various applications in inverse problems and optimal control problems. The semismooth Newton method can then be employed to compute the dual problems efficiently. However, the problem (2.8) involves complex-valued functions. For the application of Fenchel–Rockafellar duality theory, we need to reformulate the complex-valued operators and functions in terms of real matrix operators and real vector functions. Let us write V = V_R + iV_I and denote

\mathbb{V} = \begin{pmatrix} V_R & -V_I \\ V_I & V_R \end{pmatrix}, \quad \mu := \begin{pmatrix} \mu_R \\ \mu_I \end{pmatrix}, \quad u_b := \begin{pmatrix} \Re u_b \\ \Im u_b \end{pmatrix},   (2.9)

where V_R = ℜ(V), V_I = ℑ(V). We still write µ and u_b for their complex realizations.

3 Semismooth Newton-based Augmented Lagrangian Method

Henceforth, we consider the discrete setting of the functional in (2.8) and denote V_b = \mathbb{V}|_{∂Ω}, i.e., the trace of the volume potential after complex realization.
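A minimal numerical check (our own) of the complex realization (2.9): the real block matrix acting on stacked real and imaginary parts reproduces the complex matrix-vector product.

```python
import numpy as np

def realize(V):
    """Real 2x2 block realization of a complex matrix, cf. (2.9)."""
    return np.block([[V.real, -V.imag], [V.imag, V.real]])

rng = np.random.default_rng(1)
V = rng.normal(size=(3, 5)) + 1j * rng.normal(size=(3, 5))
mu = rng.normal(size=5) + 1j * rng.normal(size=5)
# the block matrix on stacked (real, imag) parts reproduces V @ mu
out = realize(V) @ np.concatenate([mu.real, mu.imag])
assert np.allclose(out, np.concatenate([(V @ mu).real, (V @ mu).imag]))
```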
By Fenchel–Rockafellar duality, we obtain the dual problem of (3.6) min y∈L2(∂Ω) D(y) := p∗(−VT b y) + h∗(y), (3.10) which is equivalent to maxy∈L2(∂Ω)(−p∗(−VT b y) −h∗(y)) by (3.4). Furthermore, letting −VT b y = z, the above minimization problem becomes the following linear constrained optimization problem min y∈L2(∂Ω), z∈L2(Ω) p∗(z) + h∗(y), s.t., VT b y + z = 0. (3.11) We now introduce the augmented Lagrangian function Lσ(y, z; λ) := p∗(z) + h∗(y) + ⟨λ, Vb∗y + z⟩L2(Ω) + σ 2 ∥VT b y + z∥2 L2(Ω). (3.12) We arrive at the following equivalent inf-sup problem for (3.6) under certain conditions, inf y∈L2(∂Ω), z∈L2(Ω) sup λ∈L2(Ω) Lσ(y, z; λ). (3.13) The augmented Lagrangian method thus follows, with nondecreasing update of σk →c∞< +∞, e.g., σk+1 = c0σk, where c0 ∈[2, 10] is a constant, (yk+1, zk+1) = arg min y∈L2(∂Ω), z∈L2(Ω) Lσk(y, z; λk), (3.14a) λk+1 = λk + σk(VT b yk+1 + zk+1). (3.14b) The Augmented Lagrangian method (shortened ALM) differs from ADMM (alternating direction method of multipliers), as ALM solves (y, z) variables together, whereas ADMM solves y and z separately. ALM can be obtained from the proximal point algorithm [41] to 7 the dual problem [42], and increasing the step sizes σk can obtain superlinear convergence for certain functions. For the developments of ALM, we refer to [35, 38] and [33, 34]. Semismooth Newton methods [28, 49] are widely employed for the nonlinear and non- smooth updates of ALM [37, 50] and [25, 36, 44]. For its application in total variation (or total generalized variation) regularized imaging processing problems, we refer to [45, 46]. Let us turn to semismooth Newton methods for solving the coupled nonlinear system of (y, z) in (3.14). The first order optimality conditions on yk+1 and zk+1 respectively of (3.14) tell that ∇h∗(yk+1) + Vbλk + σVb(VT b yk+1 + zk+1) = 0, (3.15) ∂p∗(zk+1) + λk + σk(zk+1 + VT b yk+1) ∋0. (3.16) Usually, for discretizations, the dimensions of L2(∂Ω) is much less than the dimensions of L2(Ω) [11]. It can be much more efficient if only the Newton update of yk+1 is solved. Letting us solving zk+1 with yk+1 first. By (σkI + ∂p∗)(zk+1) ∋−σVT b yk+1 −λk, we now solve zk+1 from (3.15) zk+1 = (I + 1 σk ∂p∗)−1(−VT b yk+1 −λk σk ). (3.17) Substituting zk+1 in (3.17) into (3.14), we obtain ∇h∗(yk+1) + Vbλk + σVbVT b yk+1 + σkVb(I + 1 σk ∂p∗)−1(−VT b yk+1 −λk σk ) ∋0. With Moreau’s identity x = (I + σ∂G)−1(x) + σ(I + 1 σ∂G∗)−1(x σ), and by letting x = −σkVT b yk+1 −λk, and G(x) = p(x), we get σkzk+1 as follows σk(I + 1 σk ∂p∗)−1(−VT b yk+1 −λk σk ) = −σkVT b yk+1 −λk −(I + σk∂p)−1(−σkVT b yk+1 −λk). (3.18) Substituting the above formula which is σkzk+1 actually into (3.15), we arrive at ∇h∗(yk+1)+Vbλk+σkVbVT b yk+1+Vb (−σkVT b yk+1 −λk −(I + σk∂p)−1(−σkVT b yk+1 −λk)) | {z } σkzk+1 = 0. This leads to the following nonlinear equation of yk+1 on L2(∂Ω) finally ∂h∗(y) −Vb(I + σ∂p)−1(−λk −σkVT b y) = 0. (3.19) 8 Since ∇h∗(y) = y + ub with (3.7), the nonlinear equation (3.19) on yk+1 becomes Fk(y) = 0, Fk(y) := y + ub −Vb(I + σk∂p)−1(−λk −σkVT b y) = 0. (3.20) Note that y ∈L2(∂Ω) and the nonlinear equation (3.20) is on the boundary ∂Ω, which can be of much smaller scale compared to the source µ to be reconstructed. Since (I + σ∂p)−1(·) is a semismooth function for any positive σ, we have the following lemma for the semismooth Newton derivative of F. Lemma 1. 
Since (I + σ∂p)^{−1}(·) is a semismooth function for any positive σ, we have the following lemma for the semismooth Newton derivative of F_k.

Lemma 1. The semismooth Newton derivative of F_k(y) is

(∂F_k(y))(z) = (I + (σ_k/(1 + σ_kα_0)) V_b X_y^k V_b^T)(z),   (3.21)

where X_y^k can be chosen componentwise as

[X_y^k]_i := {1} if |[λ_k + σ_k V_b^T y]_i| > σ_kα;  {0} if |[λ_k + σ_k V_b^T y]_i| < σ_kα;  [0, 1] if |[λ_k + σ_k V_b^T y]_i| = σ_kα.   (3.22)

Proof. For the resolvent (I + σ_k∂p)^{−1} in (3.20), let us introduce

w := (I + σ_k∂p)^{−1}(µ) = argmin_{w̃} (1/2)‖w̃ − µ‖²_2 + (σ_kα_0/2)‖w̃‖²_{L^2(Ω)} + σ_kα‖w̃‖_{L^1(Ω)}.

By the optimality condition, we have

0 ∈ w − µ + σ_kα_0 w + σ_kα∂‖w‖_1 ⇔ µ ∈ (1 + σ_kα_0)w + σ_kα∂‖w‖_1   (3.23)
⇒ w = (I + (σ_kα/(1 + σ_kα_0))∂‖·‖_1)^{−1}(µ/(1 + σ_kα_0)) = S_{σ_kα/(1+σ_kα_0)}(µ/(1 + σ_kα_0)).   (3.24)

Now, introducing l(y) := (I + σ_k∂p)^{−1}(−λ_k − σ_k V_b^T y), we can write its expression componentwise:

[l(y)]_i = ([−λ_k − σ_k V_b^T y]_i − σ_kα)/(1 + σ_kα_0) if [−λ_k − σ_k V_b^T y]_i > σ_kα;  0 if |[−λ_k − σ_k V_b^T y]_i| ≤ σ_kα;  ([−λ_k − σ_k V_b^T y]_i + σ_kα)/(1 + σ_kα_0) if [−λ_k − σ_k V_b^T y]_i < −σ_kα.   (3.25)

It can be checked that l(y) is a PC^1 function (piecewise differentiable function) [43, Proposition 4.3.1]. Since [l(y)]_i is continuously differentiable in y when |[−λ_k − σ_k V_b^T y]_i| > σ_kα, we have [(∂l)(z)]_i = −(σ_k/(1 + σ_kα_0))[V_b^T z]_i. When |[−λ_k − σ_k V_b^T y]_i| < σ_kα, we obtain [(∂l)(z)]_i = 0. For the case |[−λ_k − σ_k V_b^T y]_i| = σ_kα, by [43, Theorem 4.3.1], we have s(−(σ_k/(1 + σ_kα_0))[V_b^T z]_i) ∈ [(∂l)(z)]_i for any s ∈ [0, 1]. Since the soft-thresholding operator S_{σ_kα/(1+σ_kα_0)} acts componentwise (anisotropically) in y, we finally arrive at the semismooth Newton derivative ∂F_k as in (3.21).

The Newton update for solving y_{k+1} becomes: for l = 0, 1, 2, . . ., with y_0 = y_k,

N(y_l)(y_{l+1} − y_l) = −F_k(y_l),  N(y_l) ∈ ∂F_k(y_l).   (3.26)

Remark 1. It can be seen from (3.21) that the semismooth Newton derivative is a positive definite and bounded operator. Thus, for the Newton update in (3.26) and any N(y_l) ∈ ∂F_k(y_l), we have N(y_l) ⪰ I and N(y_l)^{−1} is uniformly bounded. The Newton update (3.26) is therefore well defined.

After obtaining y_{l+1}, we can calculate the corresponding z by (3.18) as follows:

z_{l+1} = M_k(y_{l+1}),  M_k(y_{l+1}) := (1/σ_k)[−σ_k V_b^T y_{l+1} − λ_k − (I + σ_k∂p)^{−1}(−σ_k V_b^T y_{l+1} − λ_k)].   (3.27)

Since Newton methods, including semismooth Newton methods, are in general only locally convergent, a line search is usually employed for globalization and convergence. We use the following Armijo line search strategy. Defining d_l = y_{l+1} − y_l, we proceed with an aggressive Armijo-type line search involving the parameters β^t > 0, t = 0, 1, 2, . . ., with β ∈ (0, 1) and c > 0. The objective is to find the smallest integer t satisfying

L_{σ_k}(y_l + β^t d_l, M_k(y_l + β^t d_l); λ_k) ≤ L_{σ_k}(y_l, M_k(y_l); λ_k) − cβ^t‖d_l‖²_2.   (3.28)

The final step size t_l := β^t obtained from the line search in (3.28) dictates the update of y as follows:

y_{l+1} = y_l + t_l d_l.   (3.29)

Once some stopping criterion for the inner Newton iteration is satisfied, say at l = l_0, we set y_{k+1} = y_{l_0} and obtain z_{k+1} = M_k(y_{k+1}). After finishing the inner semismooth Newton iterations, we then update λ_{k+1} as in (3.14b) for the outer ALM updates. Once the stopping criterion of the outer ALM iterations is satisfied with (y_K, z_K, λ_K), the solution µ of the primal problem can be approximated by

µ_K = (I + (α/α_0)∂‖·‖_1)^{−1}(−V_b^T y_K/α_0),   (3.30)

which has an explicit solution, since by the primal-dual optimality conditions of (3.10) at saddle points (y*, µ*) we have

−V_b^T y* ∈ ∂p(µ*) ⇔ −V_b^T y* ∈ α∂‖µ*‖_1 + α_0µ* ⇔ µ* = (I + (α/α_0)∂‖·‖_1)^{−1}(−V_b^T y*/α_0).
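Putting the pieces together, the following dense-matrix toy sketch (our own simplification: fixed iteration counts, full Newton steps without the Armijo search (3.28), and plain numpy in place of the actual discretization) illustrates the scheme; note that each Newton solve is only M × M, where M is the number of boundary measurements.

```python
import numpy as np

def soft(w, tau):
    """Soft thresholding S_tau(w), cf. (3.9)."""
    return np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)

def prox_p(w, sigma, alpha, alpha0):
    """(I + sigma*dp)^{-1}(w), cf. (3.24)."""
    return soft(w, sigma * alpha) / (1 + sigma * alpha0)

def alm_bd_ssn(Vb, ub, alpha, alpha0, sigma=1.0, c0=6.0,
               outer=8, inner=10, tol=1e-10):
    """Toy dense sketch of the ALM-bd-SSN scheme.
    Vb: (M, N) real matrix, M measurements << N unknowns.
    Returns the recovered source mu of length N via (3.30)."""
    M, N = Vb.shape
    y = np.zeros(M)
    lam = np.zeros(N)
    for _ in range(outer):
        # inner semismooth Newton iteration on the M-dimensional variable y
        for _ in range(inner):
            arg = -lam - sigma * (Vb.T @ y)
            F = y + ub - Vb @ prox_p(arg, sigma, alpha, alpha0)     # (3.20)
            if np.linalg.norm(F) < tol:
                break
            X = (np.abs(arg) > sigma * alpha).astype(float)         # (3.22)
            Nmat = np.eye(M) + (sigma / (1 + sigma * alpha0)) * (Vb * X) @ Vb.T  # (3.21)
            y = y + np.linalg.solve(Nmat, -F)   # full step; Armijo (3.28) omitted
        # z- and multiplier updates, cf. (3.27) and (3.14b)
        arg = -sigma * (Vb.T @ y) - lam
        z = (arg - prox_p(arg, sigma, alpha, alpha0)) / sigma
        lam = lam + sigma * (Vb.T @ y + z)
        sigma *= c0
    return soft(-(Vb.T @ y) / alpha0, alpha / alpha0)               # (3.30)
```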
Finally, let us conclude this section with the proposed semismooth Newton-based ALM, stated as Algorithm 1; we then discuss its convergence in the next section.

Algorithm 1 Boundary-measurements-based dual augmented Lagrangian method with semismooth Newton solver for (2.8) (ALM-bd-SSN)

Require: the linear operator V_b and data u_b, regularization parameters (α_0, α), parameter settings σ_0, c_0, β, ρ for the ALM, and initial vectors (y_0, z_0, λ_0); set y = y_0, σ = σ_0, λ = λ_0
while 0 ≤ k ≤ K_max and the stopping criterion for the outer ALM iterations is not met do
  while l ≤ L and the stopping criterion for the inner SSN is not met do
    Calculate X_y^k in (3.22) and the Newton derivative (3.21), and solve (3.26) for the SSN update.
    Do the Armijo line search (3.28) and update y_{l+1} with (3.29).
  end while
  Compute z_{k+1} = M_k(y_{k+1}) by (3.27).
  Update λ_{k+1} by (3.14b).
  Update σ_{k+1} = c_0σ_k.
end while
Obtain the reconstructed µ with (3.30).

3.2 Convergence of the SSN-based ALM on the dual space

Now we introduce some basic definitions and properties of multivalued mappings from convex analysis [20, 37]. Let F : X ⇒ Y be a multivalued mapping. The graph of F is defined as the set gph F := {(x, y) ∈ X × Y | y ∈ F(x)}. The inverse of F, i.e., F^{−1} : Y ⇒ X, is defined as the multivalued mapping whose graph is {(y, x) | (x, y) ∈ gph F}. The distance of x from a set C ⊂ X is defined by dist(x, C) := inf{‖x − x'‖ | x' ∈ C}. For the local convergence rate of the ALM, we need metric subregularity [20].

Definition 1 (Metric Subregularity [20]). A mapping F : X ⇒ Y is called metrically subregular at x̄ for ȳ if (x̄, ȳ) ∈ gph F and there exist a modulus κ ≥ 0 along with neighborhoods U of x̄ and V of ȳ such that

dist(x, F^{−1}(ȳ)) ≤ κ dist(ȳ, F(x) ∩ V) for all x ∈ U.   (3.31)

Now, let us turn to the primal problem (3.1), whose dual problem is (3.10). For the L^1 norm, we see that

‖µ‖_1 = \sum_{i=1}^{M} |µ_i|,   (3.32)

which is a polyhedral function. Introduce the Lagrangian function

l(y, z, λ) = p*(z) + h*(y) + ⟨λ, V_b^T y + z⟩_{L^2(Ω)}.   (3.33)

It is well known that l is a convex-concave function in (y, z, λ). Define the maximal monotone operator T_l by

T_l(y, z, λ) = {(y', z', λ') | (y', z', −λ') ∈ ∂l(y, z, λ)},   (3.34)

and the corresponding inverse is given by

T_l^{−1}(y', z', λ') = {(y, z, λ) | (y', z', −λ') ∈ ∂l(y, z, λ)}.   (3.35)

Here, the negative sign in −λ' appears because l is concave in λ.

Theorem 2. For the dual problem (3.10), assuming the KKT system has at least one solution, T_l is metrically subregular at (y*, z*, λ*)^T for the origin. Similarly, assuming (∂P)^{−1}(0) ≠ ∅ with P as in (3.1), ∂P is metrically subregular at λ* for the origin.

Proof. By direct calculation, we have

T_l(y, z, λ) = (∂h*(y) + V_bλ, ∂p*(z) + λ, −z − V_b^T y)^T := A(x) + B(x),

where

A := \begin{pmatrix} ∂h* & 0 & 0 \\ 0 & ∂p* & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad B := \begin{pmatrix} 0 & 0 & V_b \\ 0 & 0 & I \\ −V_b^T & −I & 0 \end{pmatrix},   (3.36)

both applied to (y, z, λ)^T. It is known that the ℓ^1 norm (3.32) is a polyhedral convex function. We can see that ∂h* is a polyhedral mapping of y and ∂p* is a polyhedral mapping of z. We thus conclude that the monotone operator A is polyhedral. Besides, the operator B is a maximal monotone linear operator. Thus T_l is a polyhedral mapping [40]. By the corollary in [40], we see that T_l is metrically subregular at (y*, z*, λ*)^T for the origin.

Let us now turn to the metric subregularity of ∂P, where P is the dual function of the dual problem (3.10). Supposing (∂P)^{−1}(0) ≠ ∅, since

(∂P)(µ) = V_b^T(V_bµ − u_b) + α_0µ + α∂‖µ‖_1,   (3.37)

we conclude that ∂P is a polyhedral mapping of µ. With the corollary in [40], ∂P is metrically subregular.
Here, we follow the standard stopping criteria for the inexact augmented Lagrangian method originating from [41, 42]:

L_{σ_k}(y_{k+1}, z_{k+1}; λ_k) − inf_{y,z} L_{σ_k}(y, z; λ_k) ≤ ε_k²/(2σ_k),  \sum_{k=0}^{∞} ε_k < ∞,   (A)

L_{σ_k}(y_{k+1}, z_{k+1}; λ_k) − inf_{y,z} L_{σ_k}(y, z; λ_k) ≤ (δ_k²/(2σ_k))‖λ_{k+1} − λ_k‖²,  \sum_{k=0}^{∞} δ_k < +∞,   (B1)

dist(0, ∂L_{σ_k}(y, z; λ_k)|_{y,z}) ≤ (δ'_k/σ_k)‖λ_{k+1} − λ_k‖,  0 ≤ δ'_k → 0.   (B2)

Before discussing the local convergence rate, let us turn to the relation between µ and the Lagrangian multiplier λ. It is known that the saddle points of the augmented Lagrangian functional (3.12) and of the Lagrangian functional (3.33) are the same. Let us focus on (3.33). By direct calculation, we have

inf_{y,z} sup_λ l(y, z, λ) = inf_{y,z} sup_λ {p*(z) + h*(y) + ⟨V_b^T y + z, λ⟩}   (3.38)
⇔ sup_λ −[sup_y (⟨y, −V_bλ⟩ − h*(y)) + sup_z (⟨−λ, z⟩ − p*(z))]   (3.39)
⇔ sup_λ −(h(−V_bλ) + p(−λ)) ⇔ −inf_λ (h(−V_bλ) + p(−λ)).   (3.40)

Compared to (3.6), we conclude that the optimal solutions µ* of (3.6) and λ* of (3.40) satisfy the relation µ* = −λ*. The convergence of µ_k is thus dominated by that of λ_k. We can also obtain the convergence rate of µ_k from the convergence rate of y_k by (3.30). Since (I + (α/α_0)∂‖·‖_1)^{−1} is a firmly nonexpansive operator, we have

‖µ_k − µ_{k+1}‖ ≤ c_0‖y_k − y_{k+1}‖,  ‖µ_k − µ*‖ ≤ c_0‖y_k − y*‖,  c_0 := ‖V_b^T‖/α_0.

Henceforth, we only focus on the convergence rate of (y_k, z_k, λ_k). We denote by X_P the solution set of the problem (3.1). With these preparations, we have the following global convergence and local convergence rate for the ALM method (3.14), under the mild condition that the KKT system of (3.33) has at least one solution, as in Theorem 2.

Theorem 3. For the dual problem (3.10) and the corresponding ALM (3.14), denote by (y_k, z_k, λ_k) the iteration sequence generated by ALM-bd-SSN with the stopping criterion (A). The sequence (y_k, z_k, λ_k) is bounded and converges to (y*, z*, λ*) globally. T_d = ∂P is metrically subregular for the origin with modulus κ_d, and with the additional stopping criterion (B1), the sequence {λ_k}_k converges to λ* ∈ X_P. For sufficiently large k, we have the following local linear convergence:

dist(λ_{k+1}, X_P) ≤ θ_k dist(λ_k, X_P),   (3.41)

where θ_k = [κ_d(κ_d² + σ_k²)^{−1/2} + δ_k](1 − δ_k)^{−1} and, as k → ∞, θ_k → θ_∞ = κ_d(κ_d² + σ_∞²)^{−1/2} < 1. Furthermore, T_l is metrically subregular at (y*, z*, λ*) for the origin with modulus κ_l. When the additional stopping criterion (B2) is employed, for sufficiently large k, we have

‖(y_{k+1}, z_{k+1}) − (y*, z*)‖ ≤ θ'_k‖λ_{k+1} − λ_k‖,   (3.42)

where θ'_k = κ_l(1 + δ'_k)/σ_k with lim_{k→∞} θ'_k = κ_l/σ_∞.

Proof. Since the discrete L^2(∂Ω) is a finite-dimensional reflexive space and the dual function (3.10) is lower semicontinuous, proper, and strongly convex due to the strong convexity of h*, it is coercive. Thus, the existence of the solution can be guaranteed [28, Theorem 4.25]. Furthermore, since dom D = L^2(∂Ω), by Fenchel–Rockafellar theory [28, Chapter 4.3], the solution set of the dual problem (3.10) is not empty and min_µ P(µ) = max_y −D(y). By [42, Theorem 4] (or [41, Theorem 1]; the augmented Lagrangian method (3.14) essentially comes from the proximal point method applied to its dual problem, i.e., the primal problem (3.1)), with criterion (A), we get the boundedness of {λ_k}_k. The uniqueness of (y*, z*) follows from the strong convexity of h* in y and the relation z* = −V_b^T y*, which is one of the KKT conditions. The boundedness of (y_k, z_k) and the convergence of (y_k, z_k, λ_k) then follow by [42, Theorem 4]. By Theorem 2, we have the metric subregularity of T_d = ∂P. With the stopping criteria (A) and (B1), the local convergence rate (3.41) can thus be obtained from [42, Theorem 5] (or [41, Theorem 2]). Now we turn to the local convergence rate of (y_k, z_k).
By the metric subregularity of T_l as in Theorem 2, for sufficiently large k, we have

‖(y_{k+1}, z_{k+1}) − (y*, z*)‖ + dist(λ_k, X_P) ≤ κ_l dist(0, T_l(y_{k+1}, z_{k+1}, λ_{k+1})).

Together with the stopping criterion (B2) and [42] (Theorem 5 and the Corollary in Section 4), we arrive at

‖(y_{k+1}, z_{k+1}) − (y*, z*)‖ ≤ κ_l \sqrt{δ'_k² σ_k^{−2}‖λ_{k+1} − λ_k‖² + σ_k^{−2}‖λ_{k+1} − λ_k‖²} = κ_l \sqrt{δ'_k² + 1}\, σ_k^{−1}‖λ_{k+1} − λ_k‖ ≤ θ'_k‖λ_{k+1} − λ_k‖,

which leads to (3.42).

4 Semismooth Newton Method and First-Order Primal-Dual Method

In this section, we will introduce a semismooth Newton method and a first-order primal-dual method for the inverse source scattering problem. Let us first discuss the semismooth Newton method. For the primal formulation (3.1), choose

f(z) := (1/2)⟨B^{−1}z, z⟩_{L^2(Ω)} − ⟨z, B^{−1}V_b^T u_b⟩_{L^2(Ω)} + d_0,  g(z) := α‖z‖_{L^1(Ω)},   (4.1)

where B := V_b^T V_b + α_0 I and d_0 is a constant. Then the primal formulation can be written as min_µ f(Bµ) + g(µ). By the Fenchel–Rockafellar duality, the predual problem is min_y f*(y) + g*(−B^T y), which is

min_y ⟨y, By + V_b^T u_b⟩_{L^2(Ω)} − (1/2)⟨y + B^{−1}V_b^T u_b, By + V_b^T u_b⟩_{L^2(Ω)} + ⟨y + B^{−1}V_b^T u_b, V_b^T u_b⟩_{L^2(Ω)} − d_0 + I_{{‖B^T y‖_∞ ≤ α}}(y).   (4.2)

We then use the Moreau–Yosida regularization to deal with the constraint ‖B^T y‖_∞ ≤ α as follows:

min_y E(y), E(y) := ⟨y, By + V_b^T u_b⟩_{L^2(Ω)} − (1/2)⟨y + B^{−1}V_b^T u_b, By + V_b^T u_b⟩_{L^2(Ω)} + ⟨y + B^{−1}V_b^T u_b, V_b^T u_b⟩_{L^2(Ω)} − d_0 + (1/(2γ))‖max(0, γ(B^T y − α))‖²_2 + (1/(2γ))‖min(0, γ(B^T y + α))‖²_2.   (4.3)

Although B^{−1} appears in the functional (4.3), fortunately, by direct calculation with the first-order optimality condition of (4.3), no B^{−1} is involved in the resulting nonlinear equation:

F(y) = ∇E(y) = By + V_b^T u_b + γB max(0, B^T y − α) + γB min(0, B^T y + α) = 0.   (4.4)

The expressions max(0, B^T y − α) and min(0, B^T y + α) are understood in the pointwise sense. For their subgradients, we introduce the active sets A_k^+ = {x ∈ Ω : B^T y_k(x) ≥ α}, A_k^− = {x ∈ Ω : B^T y_k(x) ≤ −α}, and A_k = A_k^+ ∪ A_k^−, along with the diagonal matrices χ_{A^+} = Diag([χ_{A^+}]_1, [χ_{A^+}]_2, . . .), χ_{A^−} = Diag([χ_{A^−}]_1, [χ_{A^−}]_2, . . .), and χ_A = Diag([χ_A]_1, [χ_A]_2, . . .), which depend on y_k and whose diagonal elements are defined by

[χ_{A^+}]_i = 1 if [B^T y_k]_i ≥ α and 0 if [B^T y_k]_i < α;  [χ_{A^−}]_i = 1 if [B^T y_k]_i ≤ −α and 0 if [B^T y_k]_i > −α;  [χ_A]_i = [χ_{A^+}]_i + [χ_{A^−}]_i.   (4.5)

With these preparations, the semismooth Newton method for solving the nonlinear system F(y) = 0 reads

N(y_k)y_{k+1} = N(y_k)y_k − F(y_k),   (4.6)

where N(y_k) = B + γBχ_{A_k}B^T ∈ ∂F(y_k), so we get

(B + γBχ_{A_k}B^T)y_{k+1} = −V_b^T u_b + γαB(χ_{A_k^+} − χ_{A_k^−})\vec{1}.   (4.7)

We let γ → ∞ with a path-following strategy [25] (or [28, Section 9.1]). Finally, we can recover µ from y by (3.5):

Bµ* = ∇f*(y*) ⇒ µ* = B^{−1}∇f*(y*) = y* + B^{−1}(V_b^T u_b).   (4.8)

We conclude the above discussion with the following semismooth Newton algorithm, Algorithm 2.

Algorithm 2 Semismooth Newton method (shortened as SSN)
Require: y_0 ∈ L^2(Ω), strictly increasing path-following parameters [γ_0, γ_1, . . . , γ_{K_max}] with γ_i > 0, i = 0, 1, . . . , K_max.
Ensure: y, µ
1: Initialization: y^0_{γ_0} = y_0
2: while 0 ≤ i ≤ K_max, γ = γ_i do
3:   while k ≤ K do
4:     Calculate the active sets and the corresponding matrices as in (4.5) for γ = γ_i.
5:     Solve for y_k ∈ L^2(Ω) by (4.7) and denote it y^k_{γ_i}.
6:     Update A_k^+, A_k^−, A_k.
7:     Until A_k^+ = A_{k−1}^+ and A_k^− = A_{k−1}^−; set y^0_{γ_{i+1}} = y^k_{γ_i}.
8:   end while
9: end while
10: y = y^k_{γ_{K_max}}
11: µ = B^{−1}∇f*(y) = y + B^{−1}(V_b^T u_b).
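For comparison with the sketch after (3.30), a dense toy version of Algorithm 2 could look as follows (again our own simplification: a fixed γ-path, full Newton steps, and a crude stopping test standing in for checking that the active sets are unchanged).

```python
import numpy as np

def ssn_primal(Vb, ub, alpha, alpha0, gammas=10.0 ** np.arange(9), newton_max=50):
    """Dense toy sketch of the primal SSN with Moreau-Yosida path-following,
    cf. (4.2)-(4.8); Vb is an (M, N) real matrix, returns mu of length N."""
    M, N = Vb.shape
    B = Vb.T @ Vb + alpha0 * np.eye(N)          # B = Vb^T Vb + alpha0 I (symmetric)
    g = Vb.T @ ub
    y = np.zeros(N)
    for gamma in gammas:                         # path-following: gamma -> infinity
        for _ in range(newton_max):
            By = B @ y                           # B^T y = B y by symmetry
            chi_p = (By >= alpha).astype(float)  # active sets, cf. (4.5)
            chi_m = (By <= -alpha).astype(float)
            Nmat = B + gamma * (B * (chi_p + chi_m)) @ B       # B + gamma B X_A B^T
            rhs = -g + gamma * alpha * (B @ (chi_p - chi_m))   # cf. (4.7)
            y_new = np.linalg.solve(Nmat, rhs)
            if np.allclose(y_new, y):            # crude proxy for fixed active sets
                y = y_new
                break
            y = y_new
    return y + np.linalg.solve(B, g)             # mu = y + B^{-1} Vb^T ub, cf. (4.8)
```

Note the contrast with the adjoint-space scheme: here every Newton step solves an N × N system in the unknown space, which is exactly the cost the ALM-bd-SSN framework avoids.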
4.1 First-order primal-dual method

Finally, let us turn to first-order methods. The Chambolle–Pock first-order primal-dual algorithm [14] is employed and studied for the inverse medium scattering problem in [12, 13]. Here, for comparison, we also present the first-order primal-dual algorithm. With

(1/2)‖V_bµ − u_b‖²_{L^2(∂Ω)} = max_{p∈L^2(∂Ω)} ⟨V_bµ − u_b, p⟩_{L^2(∂Ω)} − (1/2)‖p‖²_{L^2(∂Ω)},

we get the following primal-dual problem

min_µ max_p ⟨V_bµ − u_b, p⟩_{L^2(∂Ω)} − (1/2)‖p‖²_{L^2(∂Ω)} + (α_0/2)‖µ‖²_{L^2(Ω)} + α‖µ‖_{L^1(Ω)}   (4.9)
⇔ min_µ max_p ⟨V_bµ, p⟩_{L^2(∂Ω)} + (α_0/2)‖µ‖²_{L^2(Ω)} + α‖µ‖_{L^1(Ω)} − ((1/2)‖p‖²_{L^2(∂Ω)} + ⟨u_b, p⟩_{L^2(∂Ω)}).   (4.10)

Setting F*(p) = (1/2)‖p‖²_{L^2(∂Ω)} + ⟨u_b, p⟩_{L^2(∂Ω)} and G(µ) = (α_0/2)‖µ‖²_{L^2(Ω)} + α‖µ‖_{L^1(Ω)}, we have

(I + σ∂F*)^{−1}(p̄) = (p̄ − σu_b)/(1 + σ),  (I + τ∂G)^{−1}(µ̄) = sign(µ̄) max{(|µ̄| − τα)/(1 + τα_0), 0}.

We can now employ the first-order primal-dual algorithm [14, Algorithm 1] to solve (2.8) in the saddle-point formulation (4.10), as stated in Algorithm 3.

Algorithm 3 First-order primal-dual algorithm (PDA)
Require: initial vectors (p_0, µ_0) ∈ L^2(∂Ω) × L^2(Ω), step sizes (σ, τ)
Ensure: p, µ
1: set p = p_0, µ = µ_0, µ̄ = µ_0
2: for 0 ≤ k ≤ K_max do
3:   p_{k+1} = (I + σ∂F*)^{−1}(p_k + σV_bµ̄_k) = (p_k + σV_bµ̄_k − σu_b)/(1 + σ)
4:   µ_{k+1} = (I + τ∂G)^{−1}(µ_k − τV_b^T p_{k+1}) = sign(µ_k − τV_b^T p_{k+1}) max{(|µ_k − τV_b^T p_{k+1}| − τα)/(1 + τα_0), 0}
5:   µ̄_{k+1} = µ_{k+1} + θ(µ_{k+1} − µ_k)
6: end for

5 Numerical Experiments

In this section, we will first give a brief discussion of the discretization of the Lippmann–Schwinger volume potential, followed by the presentation of the numerical examples. We first give the linear mapping from the acoustic source µ to the boundary measurement u^s_b for the inhomogeneous medium. Since we have

−∆u^s − k²n(x)u^s = µ ⇒ −∆u^s − k²u^s = k²((1/k²)µ + (n − 1)u^s),   (5.1)

we obtain the following Lippmann–Schwinger integral equation

u^s = V_k((1/k²)µ + (n(x) − 1)u^s).   (5.2)

Now let us introduce q(x) = n(x) − 1 and V_k h := k² ∫_Ω Φ(x, y)h(y) dy. We have

u^s(x) = (1/k²)(I − V_k q)^{−1}V_kµ.   (5.3)

Substituting (5.3) into (5.2), we have

u^s(x) = (1/k²)V_k(I + q(I − V_k q)^{−1}V_k)µ.   (5.4)

We finally obtain the linear mapping from the source µ to the measurement,

u^s_b(x) = Tr ∘ ((1/k²)V_k(I + q(I − V_k q)^{−1}V_k))µ,  Tr u := u|_{∂Ω},   (5.5)

where Tr denotes the trace mapping. The source-to-measurement mapping is thus

T := Tr ∘ ((1/k²)V_k(I + q(I − V_k q)^{−1}V_k)) : M(Ω) → Tr(W^{1,p}(Ω)).   (5.6)

The volume potential V_k and (I − V_k q)^{−1} can be computed by the Fourier-based collocation method via periodization of the Lippmann–Schwinger volume potential [31]. We adapted the code developed in [12, 13], where the discretization of T is given, and we refer to [12, 13] for more details.

All algorithms are run with MATLAB R2024b on an Ubuntu 24.04 workstation with dual Intel(R) Xeon(R) E5-2697A v4 CPUs, 2.60 GHz, and 288 GB memory. The parameters of the compared algorithms are as follows.

• ALM: ALM-bd-SSN as in Algorithm 1. We choose σ_0 = 1 and σ_{k+1} = 6σ_k. For the Armijo line search, we choose β = 0.3 and c = 10^{-4} as in (3.28).
• SSN: SSN as in Algorithm 2. We choose γ_0 = 1 and γ_i = 10^i, i = 1, . . . , 8.
• PDA: the first-order primal-dual algorithm as in Algorithm 3 [14]. We choose σ = 0.5, τ = 1/((‖V_b‖² + 10^{-6})σ), and θ ≡ 1.

Gaussian noise is added to obtain the noisy boundary measurement u^δ_b from the simulated data u_b, essentially the same as in [12, (73)]:

(u^δ_b − u_b)/‖u_b‖_2 = δ(N_re + iN_im),

where N_re and N_im are two real matrices sampled from the standard normal distribution. We measure the reconstruction quality by the relative error

N-Error := ‖µ_{K_max} − µ_exa‖_2/‖µ_exa‖_2,

where µ_exa is the exact acoustic source and K_max is the corresponding maximum outer iteration number, which varies for different algorithms. To avoid an "inverse crime", we employ the code of [12, 13].
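A small sketch (ours) of the noise model and the error metric just described:

```python
import numpy as np

def add_noise(ub, delta, rng):
    """(ub_noisy - ub) / ||ub||_2 = delta * (N_re + i N_im), as in the text."""
    noise = rng.standard_normal(ub.shape) + 1j * rng.standard_normal(ub.shape)
    return ub + delta * np.linalg.norm(ub) * noise

def n_error(mu_rec, mu_exa):
    """Relative reconstruction error ||mu_rec - mu_exa||_2 / ||mu_exa||_2."""
    return np.linalg.norm(mu_rec - mu_exa) / np.linalg.norm(mu_exa)
```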
The strategy is that the discretization (in particular, the grid size) used to generate the synthetic data of the direct problem is different from the discretization used to reconstruct the unknown sources [12] (see also [47, Chapter 2.3.6]). For the homogeneous medium, we choose the velocity c ≡ 1 in both R² and R³. For the inhomogeneous medium, we employ the inhomogeneous medium from [26],

q(x) = χ(x − 0),  χ(t) = exp(−1/(1 − |t|²)) if |t| < 1, and χ(t) = 0 if |t| ≥ 1,   (5.7)

where 0 = (0, 0)^T in R² and 0 = (0, 0, 0)^T in R³.

We compute the primal solution µ_{K_max} by (3.30), except for the numerical examples in Figures 6 and 7, which are computed via −λ_{K_max} due to (3.40) and the explanations there. Now, let us turn to the numerical examples.

Figures 2 and 3 present the reconstructions of different point sources with 1% relative Gaussian noise for the homogeneous and inhomogeneous medium, respectively. Table 1 collects the running times and relative errors of the three algorithms. In our experiments, we did not observe significant differences in running time between the homogeneous and the inhomogeneous medium; we thus only present the running times for the homogeneous case.

Figures 4 and 5 present the reconstructions of different strip sources with 0.1% relative Gaussian noise for the homogeneous and inhomogeneous medium, respectively. Table 2 collects the corresponding running times (again only for the homogeneous cases, since the running times for the inhomogeneous cases are nearly the same) and the relative errors of the different algorithms for the homogeneous and inhomogeneous media.

Figures 6 and 7 present the reconstructions of different 3-dimensional sources with 1% relative Gaussian noise for the homogeneous and inhomogeneous medium, respectively. Table 3 collects the running times and relative errors of ALM and SSN. It can be seen that ALM can be 10 times faster than SSN. We did not compare with PDA, since we found PDA to be too slow. We computed the solution via −λ_k, since µ* = −λ* by (3.40); numerically we found that ‖µ_{K_max} + λ_{K_max}‖ is generally less than 10^{-5}, with K_max the maximal outer iteration number and µ_{K_max} computed by (3.30).

Figures 8 and 9 present the reconstructions of different 3-dimensional sources with 0.01% relative Gaussian noise for the homogeneous and inhomogeneous medium, respectively. Table 4 collects the relative errors of ALM and SSN. Since the running times are similar to the 1% noise case, we omit them in Table 4.

6 Conclusions

We proposed a semismooth Newton-based augmented Lagrangian method on the "adjoint" space of measurements. The reconstruction can be greatly accelerated when the dimension of the measurement data is much smaller than that of the unknown acoustic sources; the method can be more than 10 times faster, as shown in the 3-dimensional numerical examples. It would be interesting to incorporate multifrequency scattering data as was done in [3, 5, 6, 23]. We believe this framework can also benefit inverse medium problems and inverse electromagnetic wave scattering problems.
The images in the first, second, third, and fourth rows are the sources with one, four, six, and eight peaks, respectively.

Figure 3: Reconstruction of sparse sources with multiple peaks in inhomogeneous media with k = 6. The information of the acoustic sources and the corresponding reconstruction algorithms is the same as in Figure 2.

Table 1: Comparison of the running times and relative errors for the ALM, SSN, and PDA algorithms with noise level 1%. This table corresponds to Figures 2 and 3. The running times ("Time (s)") are only presented for the homogeneous cases as in Figure 2. "N-Error" and "N-Error (in)" are the relative errors for the corresponding homogeneous and inhomogeneous cases. The notations "one", "four", "six", and "eight" correspond to the sources with one, four, six, and eight peaks, as in Figures 2 and 3. (ALM and SSN with α = 9e-4, α0 = 1e-7; PDA with α = 9e-5, α0 = 1e-12.)

Methods  Sources  Time (s)  N-Error   N-Error (in)
ALM      one       8.77     6.30e-02  4.98e-02
         four      9.67     2.08e-01  7.69e-01
         six      11.19     1.01e+00  1.00e+00
         eight     9.13     1.09e+00  1.07e+00
SSN      one       8.62     1.93e-02  2.27e-02
         four     10.64     4.32e-01  1.37e-01
         six       8.09     9.57e-01  9.50e-01
         eight    10.11     1.10e+00  1.07e+00
PDA      one      17.12     6.99e-02  5.11e-02
         four     15.66     2.49e-01  1.47e-01
         six      16.71     9.09e-01  8.64e-01
         eight    16.31     1.19e+00  1.12e+00

Figure 4: Reconstructions of strip-shaped sparse sources in homogeneous media with k = 4, noise level 0.1%, and α = 2e-6, α0 = 2e-8. The figures in the leftmost column are the original acoustic sources. The images in the second, the third from the left, and the rightmost columns are the reconstructed results of ALM, SSN, and PDA, respectively. The images in the first and second rows are the sources with "skew diagonal" (first row) and "diagonal" (second row) types of strips, respectively.

Figure 5: Reconstruction of strip-shaped sparse sources in inhomogeneous media with k = 4, noise level 0.1%, and α = 2e-6, α0 = 2e-8. The information of the acoustic sources and the corresponding reconstruction algorithms is the same as in Figure 4.

Table 2: Comparison of the running times and relative errors for the ALM, SSN, and PDA algorithms with noise level 0.1%, α = 2e-6, and α0 = 2e-8. This table corresponds to Figures 4 and 5. The running times ("Time (s)") are only presented for the homogeneous cases as in Figure 4. "N-Error" and "N-Error (in)" are the relative errors for the corresponding homogeneous and inhomogeneous cases. The notations "skew" and "diag" represent the "skew diagonal" and "diagonal" strips in Figures 4 and 5.

Methods  Sources  Time (s)  N-Error   N-Error (in)
ALM      skew     16.96     4.19e-02  2.30e-02
         diag     17.23     1.75e-02  4.18e-02
SSN      skew     11.07     9.18e-02  4.04e-02
         diag     11.72     1.79e-02  3.81e-02
PDA      skew     18.15     2.40e-02  2.49e-02
         diag     18.82     1.51e-02  1.70e-02

Figure 6: Reconstruction of 3-dimensional sparse sources in homogeneous media with k = 6, noise level 1%, and α = 5e-7 together with α0 = 2e-8. The solution is computed by $-\lambda_{K_{\max}}$. The figures in the leftmost column are the original acoustic sources. The images in the middle and the rightmost columns are the reconstructed results of ALM and SSN, respectively. The images in the first, second, third, and fourth rows are the sources with the two balls, the right up tripod, the left down tripod, and the two tripods, respectively.
Figure 7: Reconstruction of 3-dimensional sparse sources in inhomogeneous media with k = 6, noise level 1%, and α = 5e-7 together with α0 = 2e-8. The solution is computed by $-\lambda_{K_{\max}}$. The information of the acoustic sources and the corresponding reconstruction algorithms is the same as in Figure 6.

Table 3: Experimental results for the ALM and SSN methods under different scenarios, noise level 1%, and α = 5e-7 together with α0 = 2e-8. Here "N-Error" is the relative error for both the homogeneous and inhomogeneous cases, corresponding to "homo" and "inhomo" in the table. The notations "two balls", "two tripods", "right up", and "left down" refer to the acoustic sources shown in Figures 6 and 7.

Methods  Sources      Medium   Time (s)  N-Error
ALM      two balls    homo       79.86   3.97e-01
         two balls    inhomo     87.71   3.99e-01
         two tripods  homo       82.29   4.26e-01
         two tripods  inhomo     88.84   4.96e-01
         right up     homo       79.49   3.14e-01
         right up     inhomo     87.01   2.78e-01
         left down    homo       78.36   3.03e-01
         left down    inhomo     88.69   2.55e-01
SSN      two balls    homo     1224.96   3.60e-01
         two balls    inhomo   1304.16   3.26e-01
         two tripods  homo     1195.87   4.47e-01
         two tripods  inhomo   1149.60   2.90e-01
         right up     homo     1139.21   4.35e-01
         right up     inhomo   1296.09   3.55e-01
         left down    homo     1228.42   4.01e-01
         left down    inhomo   1301.16   3.16e-01

Table 4: Relative reconstruction errors for the ALM and SSN methods with noise level 0.01% and α = 1e-7 together with α0 = 1e-9. "N-Error" and "N-Error (in)" are the relative errors for the corresponding homogeneous and inhomogeneous cases. The notations "two balls", "right up", and "left down" represent the acoustic sources in the first to the third rows in Figures 8 and 9, respectively. The running times are nearly the same as in the 1% noise level cases in Table 3 and are omitted here.

Methods  Sources    N-Error   N-Error (in)
ALM      two balls  1.27e-01  6.75e-02
         right up   1.40e-01  1.43e-02
         left down  7.33e-02  2.09e-02
SSN      two balls  3.72e-01  2.12e-01
         right up   1.35e-01  2.29e-01
         left down  4.07e-02  5.36e-03

Figure 8: Reconstruction of 3-dimensional sparse sources in homogeneous media with k = 6, noise level 0.01%, and α = 1e-7 together with α0 = 1e-9. The figures in the leftmost column are the original acoustic sources. The images in the middle and the rightmost columns are the reconstructed results of ALM and SSN, respectively. The images in the first, second, and third rows are the sources with the two balls, the right up tripod, and the left down tripod, respectively.

Figure 9: Reconstruction of 3-dimensional sparse sources in inhomogeneous media with k = 6, noise level 0.01%, and α = 1e-7 together with α0 = 1e-9. The information of the acoustic sources and the corresponding reconstruction algorithms is the same as in Figure 8.

Acknowledgements

Nirui Tan and Hongpeng Sun acknowledge the support of the National Natural Science Foundation of China under grant No. 12271521, the National Key R&D Program of China (2022ZD0116800), and the Beijing Natural Science Foundation No. Z210001.

References

[1] G. S. Alberti, R. Petit, and M. Santacesaria, Localization of point scatterers via sparse optimization on measures, SIAM J. Imaging Sciences, 17(3), 2024, pp. 1619–1649.
[2] G. S. Alberti, M. Santacesaria, Infinite-dimensional inverse problems with finite measurements, Arch. Ration. Mech. Anal., 243 (2022), pp. 1–31.
[3] A. Alzaalig, G. Hu, X. Liu, and J. Sun, Fast acoustic source imaging using multi-frequency sparse data, Inverse Problems, 36 (2020), 025009 (18pp).
[4] R. A. Adams, J. J. F. Fournier, Sobolev Spaces, 2nd Edition, Elsevier Ltd, 2003.
[5] G. Bao, J. Lin, F. Triki, A multi-frequency inverse source problem, J. Differential Equations, 249 (2010), pp. 3443–3465.
[6] G. Bao, P. Li, J. Lin, F. Triki, Inverse scattering problems with multi-frequencies, Inverse Problems, 31 (2015), no. 9, 093001, 21 pp.
[7] H. H. Bauschke, P. L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, Springer Science+Business Media, LLC, second edition, 2017.
[8] N. Bleistein, J. K. Cohen, Nonuniqueness in the inverse source problem in acoustics and electromagnetics, J. Math. Phys., 18 (1977), pp. 194–201.
[9] K. Bredies, H. K. Pikkarainen, Inverse problems in spaces of measures, ESAIM: COCV, 19 (2013), pp. 190–218.
[10] H. Brezis, Functional Analysis, Sobolev Spaces and Partial Differential Equations, Springer Science+Business Media, LLC, 2011.
[11] W. C. Chew, Y. M. Wang, Reconstruction of two-dimensional permittivity distribution using the distorted Born iterative method, IEEE Transactions on Medical Imaging, 9(2), pp. 218–225, 1990.
[12] F. Bürgel, K. S. Kazimierski, and A. Lechleiter, A sparsity regularization and total variation based computational framework for the inverse medium problem in scattering, Journal of Computational Physics, 339 (2017), pp. 1–30.
[13] F. Bürgel, K. S. Kazimierski, and A. Lechleiter, Algorithm 1001: IPscatt – a MATLAB toolbox for the inverse medium problem in scattering, ACM Transactions on Mathematical Software (TOMS), 45(4), pp. 1–20.
[14] A. Chambolle, T. Pock, A first-order primal-dual algorithm for convex problems with applications to imaging, J. Math. Imaging Vis., 40(1), 2011.
[15] E. Casas, C. Clason, K. Kunisch, Approximation of elliptic control problems in measure spaces with sparse solutions, SIAM J. Control Optim., 50(4), pp. 1735–1752, 2012.
[16] C. Clason, K. Kunisch, A duality-based approach to elliptic control problems in non-reflexive Banach spaces, ESAIM: COCV, 17 (2011), pp. 243–266.
[17] D. Colton, R. Kress, Inverse Acoustic and Electromagnetic Scattering Theory, Springer Science+Business Media New York, Third Edition, 2013.
[18] D. Colton, P. Monk, A linear sampling method for the detection of leukemia using microwaves, SIAM J. Appl. Math., 58(3), pp. 926–941, 1998.
[19] A. J. Devaney, E. A. Marengo, M. Li, Inverse source problem in nonhomogeneous background media, SIAM J. Appl. Math., 67(5), (2007), pp. 1353–1378.
[20] A. L. Dontchev, R. T. Rockafellar, Implicit Functions and Solution Mappings: A View from Variational Analysis, Second Edition, Springer Science+Business Media, New York, 2014.
[21] O. Dorn, H. Bertete-Aguirre, G. C. Papanicolaou, A nonlinear inversion method for 3D electromagnetic imaging using adjoint fields, Inverse Problems, 15 (1999), 1523.
[22] O. Dorn, H. Bertete-Aguirre, G. C. Papanicolaou, Adjoint fields and sensitivities for 3D electromagnetic imaging in isotropic and anisotropic media, in: L. L. Bonilla (ed.), Inverse Problems and Imaging, Lecture Notes in Mathematics, vol. 1943, Springer, Berlin, Heidelberg, 2008.
[23] M. Eller, N. P. Valdivia, Acoustic source identification using multiple frequency information, Inverse Problems, 25 (2009), 115005 (20pp).
[24] A. J. Hesford, W. C. Chew, A frequency-domain formulation of the Fréchet derivative to exploit the inherent parallelism of the distorted Born iterative method, Waves in Random and Complex Media, 16(4), pp. 495–508, 2006.
[25] M. Hintermüller, K. Kunisch, Total bounded variation regularization as a bilaterally constrained optimization problem, SIAM J. Appl. Math., 64(4), pp. 1311–1333.
[26] T. Hohage, On the numerical solution of a three-dimensional inverse medium scattering problem, Inverse Problems, 17 (2001), pp. 1743–1763.
[27] T. Hohage, Fast numerical solution of the electromagnetic medium scattering problem and applications to the inverse problem, J. Comp. Phys., 214 (2006), pp. 224–238.
[28] K. Ito, K. Kunisch, Lagrange Multiplier Approach to Variational Problems and Applications, SIAM, Philadelphia, 2008.
[29] M. Moghaddam, W. C. Chew, M. Oristaglio, Comparison of the Born iterative method and Tarantola's method for an electromagnetic time-domain inverse problem, International Journal of Imaging Systems and Technology, 3(4), pp. 318–333, 1991.
[30] J. Sylvester, Notions of support for far fields, Inverse Problems, 22 (2006), pp. 1273–1288.
[31] G. Vainikko, Fast solvers of the Lippmann-Schwinger equation, in: R. P. Gilbert, J. Kajiwara, and Y. S. Xu (eds.), Direct and Inverse Problems of Mathematical Physics, Springer, Boston, 2000, pp. 423–440.
[32] X. Xiang, H. Sun, Sparse reconstructions of acoustic source for inverse scattering problems in measure space, Inverse Problems, 36 (2020), 035004.
[33] M. Fortin, R. Glowinski (eds.), Augmented Lagrangian Methods: Applications to the Solution of Boundary Value Problems, North-Holland, Amsterdam, 1983.
[34] R. Glowinski, S. Osher, W. Yin (eds.), Splitting Methods in Communication, Imaging, Science, and Engineering, Springer, 2016.
[35] M. R. Hestenes, Multiplier and gradient methods, J. Optim. Theory Appl., 4, pp. 303–320, 1968.
[36] K. Ito, K. Kunisch, An active set strategy based on the augmented Lagrangian formulation for image restoration, RAIRO, Math. Mod. and Num. Analysis, 33(1), pp. 1–21, 1999.
[37] X. Li, D. Sun, C. Toh, A highly efficient semismooth Newton augmented Lagrangian method for solving lasso problems, SIAM J. Optim., 28(1), pp. 433–458, 2018.
[38] M. J. D. Powell, A method for nonlinear constraints in minimization problems, in: R. Fletcher (ed.), Optimization, Academic Press, New York, pp. 283–298, 1968.
[39] R. F. Remis, P. M. van den Berg, On the equivalence of the Newton-Kantorovich and distorted Born methods, Inverse Problems, 16 (2000), L1–L4.
[40] S. M. Robinson, Some continuity properties of polyhedral multifunctions, in: Mathematical Programming at Oberwolfach, Math. Program. Stud., Springer, Berlin, Heidelberg, pp. 206–214, 1981.
[41] R. T. Rockafellar, Monotone operators and the proximal point algorithm, SIAM J. Control Optim., 14(5), pp. 877–898, 1976.
[42] R. T. Rockafellar, Augmented Lagrangians and applications of the proximal point algorithm in convex programming, Math. Oper. Res., 1(2), pp. 97–116, 1976.
[43] S. Scholtes, Introduction to Piecewise Differentiable Equations, SpringerBriefs in Optimization, Springer, New York, 2012.
[44] G. Stadler, Semismooth Newton and augmented Lagrangian methods for a simplified friction problem, SIAM J. Optim., 15(1), pp. 39–62, 2004.
[45] H. Sun, An investigation on semismooth Newton based augmented Lagrangian method for image restoration, J. Sci. Comput., 92(82), 2022.
[46] H. Sun, An efficient augmented Lagrangian method with semismooth Newton solver for total generalized variation, Inverse Problems and Imaging, 17(2), pp. 381–405, 2023.
[47] A. Tarantola, Inverse Problem Theory and Methods for Model Parameter Estimation, SIAM, 2005.
[48] R. Tomioka, M. Sugiyama, Dual-augmented Lagrangian method for efficient sparse reconstruction, IEEE Signal Processing Letters, 16(12), pp. 1067–1070, Dec. 2009.
[49] M. Ulbrich, Semismooth Newton Methods for Variational Inequalities and Constrained Optimization Problems in Function Spaces, MOS-SIAM Series on Optimization, 2011.
[50] X. Zhao, D. Sun, K. Toh, A Newton-CG augmented Lagrangian method for semidefinite programming, SIAM J. Optim., 20(4), pp. 1737–1765, 2010.
arXiv:2510.14802v1 [eess.SP] 16 Oct 2025
A Scalable MVDR Beamforming Algorithm That is Linear in the Number of Antennas

Sanjaya Herath∗, Armin Gerami∗, Kevin Wagner†, Ramani Duraiswami∗, Christopher A. Metzler∗
Email: sanjayah@umd.edu, agerami@umd.edu, kevin.wagner@nrl.navy.mil, ramanid@umd.edu, metzler@umd.edu
∗University of Maryland, College Park, MD
†Naval Research Laboratory, Washington, DC

Abstract—The Minimum Variance Distortionless Response (MVDR) beamforming technique is widely applied in array systems to mitigate interference. However, applying MVDR to large arrays is computationally challenging; its computational complexity scales cubically with the number of antenna elements. In this paper, we introduce a scalable MVDR beamforming method tailored for massive arrays. Our approach, which is specific to scenarios where the signal of interest is below the noise floor (e.g., GPS), leverages the Sherman-Morrison formula, low-rank Singular Value Decomposition (SVD) approximations, and algebraic manipulation. Using our approach, we reduce the computational complexity from cubic to linear in the number of antennas. We evaluate the proposed method through simulations, comparing its computational efficiency and beamforming accuracy with the conventional MVDR approach. Our method significantly reduces the computational load while maintaining high beamforming accuracy for large-scale arrays. This solution holds promise for real-time applications of MVDR beamforming in fields like radar, sonar, and wireless communications, where massive antenna arrays are proliferating.

Index Terms—large arrays, MVDR, scalable beamforming

Sponsored by the Office of Naval Research via the NRL Base Program. CAM was supported in part by AFOSR Young Investigator Program Award #FA9550-22-1-0208 and ONR award N000142312752. AG and RD were supported in part by ONR Award N000142312086, and by a gift from Dolby. Distribution Statement A: Approved for public release. Distribution unlimited.

I. INTRODUCTION

Large-scale arrays are widely deployed in applications such as radar, sonar, and wireless communications [1]. These arrays consist of a large number of antennas, ranging from hundreds to thousands. The large number of antennas in the array provides high spatial resolution and the ability to suppress interference, which is particularly crucial in applications requiring precise localization and tracking of targets in the presence of interference. However, as arrays scale up, the associated signal processing tasks, especially beamforming, become more complex and computationally demanding [2].

Beamforming is a signal processing technique used in array systems to direct the reception or transmission of signals towards a specific direction [3]. By adjusting the phase and amplitude of the signals received or transmitted by each antenna, beamforming focuses the array's sensitivity towards a desired direction while suppressing interference from other directions [4]. Beamforming is utilized in narrowband applications such as radar target detection, as well as in broadband applications such as sound scene analysis [5]. Several beamforming techniques have been developed over the years, each with its own advantages and limitations [6]. Among these, MVDR beamforming is popular for its ability to enhance the signal-to-interference-plus-noise ratio (SINR) [7]. MVDR achieves this by minimizing the total output power of the array while maintaining a distortionless response to the signal of interest.
This makes MVDR beamforming particularly useful in applications where the signal of interest is weak and the interference is strong, such as in cluttered or interference-rich environments.

While MVDR beamforming is effective in smaller arrays, its application to large-scale arrays introduces significant challenges. The primary challenge is the computational complexity associated with estimating and inverting the covariance matrix. For large arrays, the covariance matrix grows quadratically with the number of antennas, and its inversion grows cubically [8]. As a result, real-time implementation of MVDR beamforming in large arrays becomes computationally infeasible, especially in scenarios where the covariance matrix needs to be updated frequently, such as in dynamic environments with moving targets and interferers.

In this paper, we propose a scalable approach to MVDR beamforming for large arrays, specifically when the desired signal is below the noise floor. The proposed method leverages the Sherman-Morrison formula [9] and low-rank Singular Value Decomposition (SVD) approximations [10] to reduce the computational complexity of MVDR beamforming. The proposed method allows for real-time implementation of MVDR beamforming in large arrays by updating the inverse of the covariance matrix in O(MK²) operations per time step, where M is the number of antennas and K is a number much smaller than M. We evaluate the performance of the proposed method in terms of computational complexity and beamforming accuracy using simulations and compare it with the conventional MVDR beamforming method.

II. RELATED WORK

Several methods have been proposed to address the computational complexity of beamforming in large arrays. They can be broadly categorized into three groups: algorithmic approaches, distributed or parallel approaches, and deep learning-based approaches.

Algorithmic approaches aim to reduce the computational complexity of MVDR beamforming by approximating the covariance matrix or by using iterative algorithms that avoid its explicit inversion. A Nyström-based low-rank unitary MVDR scheme [11] approximates the covariance matrix using a low-rank approximation and reduces the computational complexity and storage overhead. Another approach [12] uses QR decomposition to track speech and noise variations dynamically. In [8], an SMI-MVDR beamformer is proposed that uses a recursive implementation for large arrays based on Cholesky factorization and the Householder transformation.

Distributed MVDR beamforming approaches have been developed to handle large-scale systems more effectively. A message passing algorithm [13] enables distributed beamforming by iteratively updating weights through local node communication. Similarly, parallel algorithms [14] and ADMM-based methods [15] distribute the computational load across multiple processors, improving robustness and scalability while reducing latency.

Deep learning (DL) has made significant contributions to beamforming, particularly in massive Multiple-Input Multiple-Output (MIMO) systems [16]. Notable examples include deep adversarial reinforcement learning to enhance the capacity and performance of massive MIMO beamforming [17], as well as to predict beamforming direction in mmWave Wireless Body Area Networks [18].
Additionally, in [19], Convolutional Neural Networks (CNNs) have been employed to reduce training complexity by combining supervised and unsupervised learning, and for calibration state diagnosis of massive antenna arrays [18].

III. PROBLEM DEFINITION

Consider an array consisting of M isotropic antennas. Assume the array receives a signal from a target at an angle θ_0 with respect to the array boresight and a collection of L interfering signals from the directions θ_1, θ_2, ..., θ_L over an observation period of m snapshots (number of temporal samples). The received signal vector x_t ∈ C^M at time step t ∈ {1, 2, ..., m} can be expressed as

x_t = a(\theta_0) s_0(t) + \sum_{l=1}^{L} a(\theta_l) s_l(t) + s_n(t),    (1)

where s_0(t) ∈ C is the signal of interest, s_l(t) ∈ C is the interfering signal from the direction θ_l, s_n(t) ∈ C^M is zero-mean, spatially white Gaussian noise with variance σ^2, and a(θ_i) ∈ C^M for i ∈ {0, 1, 2, ..., L} is the steering vector. The output of the beamformer can then be expressed as

y_t = w^H x_t,    (2)

where w ∈ C^M is the vector of beamforming weights. Under this model, the classic MVDR beamformer problem can be formulated as

\min_w \; \tfrac{1}{2} w^H R w \quad \text{s.t.} \quad w^H a(\theta_0) = 1,    (3)

where R ∈ C^{M×M} is the covariance matrix of the received signal. The solution to the above problem is given by

w = \frac{R^{-1} a(\theta_0)}{a(\theta_0)^H R^{-1} a(\theta_0)}.    (4)

In practice, the covariance matrix is not known and needs to be estimated from the received signal. The sample covariance matrix at time step n (n > m) can be expressed as

R_n = \frac{1}{m} \sum_{i=n-m+1}^{n} x_i x_i^H,    (5)

which is the average covariance over the window of the m snapshots. Equation (4) can then be modified as

w = \frac{R_n^{-1} a(\theta_0)}{a(\theta_0)^H R_n^{-1} a(\theta_0)}.    (6)

IV. PROPOSED METHOD

Once we receive the signal at time step n + 1, recalculating the inverse of the sample covariance matrix is computationally expensive. Instead, consider the following recursive update rule for the sample covariance matrix,

R_{n+1} = \alpha R_n + (1 - \alpha) x_n x_n^H,    (7)

where 0 < α < 1 is the forgetting factor. Assuming R_n^{-1} has been calculated from the first n samples, the inverse of the covariance matrix can be updated using the Sherman-Morrison formula as

R_{n+1}^{-1} = \frac{R_n^{-1}}{\alpha} - \frac{(1-\alpha)}{\alpha} \, \frac{R_n^{-1} x_n x_n^H R_n^{-1}}{\alpha + (1-\alpha)\, x_n^H R_n^{-1} x_n},    (8)

which reduces the computational complexity to O(M^2) per time step. To reduce the computational complexity further, we need a method that does not explicitly form the covariance matrix, since explicitly forming an M × M matrix already incurs O(M^2) complexity. Instead, we propose the following steps, which avoid forming the covariance matrix except at the initialization step.

First, we calculate the Singular Value Decomposition (SVD) of R. As R is Hermitian, the SVD can be expressed as

R = U D U^H,    (9)

where U ∈ C^{M×M} is a unitary matrix containing the eigenvectors of R and D ∈ C^{M×M} is a diagonal matrix containing the eigenvalues of R in decreasing order. We can form a low-rank approximation of R by selecting the first K eigenvalues and the corresponding eigenvectors. K should be selected such that K ≥ L + 1; for large arrays K ≪ M. The low-rank approximation can be expressed as

R \approx U_K D_K U_K^H,    (10)

where U_K ∈ C^{M×K} contains the first K columns of U and D_K ∈ C^{K×K} contains the first K eigenvalues of D. We can then modify Equation (8) as

R_{n+1}^{-1} \approx U_K D_{K,n+1}^{-1} U_K^H,    (11)

where D_{K,n+1}^{-1} can be updated using the following recursive update rule,

D_{K,n+1}^{-1} = \frac{D_{K,n}^{-1}}{\alpha} - \frac{(1-\alpha)}{\alpha} \, \frac{D_{K,n}^{-1} U_K^H x_n x_n^H U_K D_{K,n}^{-1}}{\alpha + (1-\alpha)\, x_n^H U_K D_{K,n}^{-1} U_K^H x_n}.    (12)

Since K ≪ M for large arrays, this update rule reduces the computational complexity to O(MK^2) per time step. Finally, the beamforming weights at time step n + 1 can be computed by substituting the updated inverse covariance matrix R_{n+1}^{-1} into Equation (6) as

w = \frac{U_K D_{K,n+1}^{-1} U_K^H a(\theta_0)}{a(\theta_0)^H U_K D_{K,n+1}^{-1} U_K^H a(\theta_0)}.    (13)

The proposed method is summarized in Algorithm 1.

Algorithm 1 Proposed Algorithm
Require: Initial covariance matrix R_n ∈ C^{M×M}; data vectors x_n, x_{n+1}, ..., x_N ∈ C^{M×1}; forgetting factor α; low-rank dimension K
1: R ← R_n
2: Compute the rank-K approximation of the covariance matrix: U_K D_K U_K^H ← R
3: for i = n to N do
4:   x ← x_i
5:   Update the inverse of the submatrix D:
     D_K^{-1} ← D_K^{-1}/α − ((1−α)/α) · D_K^{-1} U_K^H x x^H U_K D_K^{-1} / (α + (1−α) x^H U_K D_K^{-1} U_K^H x)
6:   Compute the beamforming weights:
     w ← U_K D_K^{-1} U_K^H a(θ_0) / (a(θ_0)^H U_K D_K^{-1} U_K^H a(θ_0))
7: end for
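As a reference point for Algorithm 1, the following minimal NumPy sketch covers both the one-time rank-K initialization (Eqs. (9)-(10)) and the per-snapshot update and weight computation (Eqs. (12)-(13)). It is a sketch under stated assumptions rather than the authors' implementation; the function and variable names are illustrative, and it assumes the initial covariance has at least K strictly positive eigenvalues.

```python
import numpy as np

def init_low_rank(X_init, K):
    """Rank-K factors of the initial sample covariance (Eqs. 5, 9-10).
    X_init: (m, M) complex array whose rows are snapshots x_i^T."""
    m, M = X_init.shape
    R = (X_init.T @ X_init.conj()) / m           # average of x_i x_i^H over the window
    # R is Hermitian, so its SVD coincides with its eigendecomposition.
    vals, vecs = np.linalg.eigh(R)               # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:K]             # keep the K largest (assumed > 0)
    U_K = vecs[:, idx]                           # (M, K)
    D_K_inv = np.diag(1.0 / vals[idx])           # (K, K) inverse of D_K
    return U_K, D_K_inv

def low_rank_mvdr_step(U_K, D_K_inv, x, a0, alpha=0.99):
    """One pass of Algorithm 1: Eq. (12) update, then Eq. (13) weights.
    Costs O(MK + K^2) per snapshot; the M x M inverse is never formed."""
    z = U_K.conj().T @ x                         # project the snapshot into the subspace
    v = D_K_inv @ z                              # (K,)
    denom = alpha + (1 - alpha) * np.vdot(z, v).real
    D_K_inv = D_K_inv / alpha - ((1 - alpha) / alpha) * np.outer(v, v.conj()) / denom
    b = U_K.conj().T @ a0                        # projected target steering vector
    Db = D_K_inv @ b
    w = (U_K @ Db) / (b.conj() @ Db)             # Eq. (13)
    return w, D_K_inv
```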
A. SVD Initialization

For conventional MVDR beamforming, forming the covariance matrix requires O(M^2) operations per time step and inverting it requires O(M^3) operations, for a total of O(M^2 + M^3) operations per time step.

Note that in the proposed method, the low-rank SVD approximation of the covariance matrix is computed only once, during initialization, with a computational complexity of O(M^3). For all subsequent time steps, the MVDR beamformer is efficiently computed with a complexity of O(MK^2) using the proposed algorithm.

V. EXPERIMENTAL RESULTS

A. Simulation Setup

For the simulations, we consider a Uniform Linear Array (ULA) with M antennas, where M is varied according to the experiment. The antennas are spaced half a wavelength apart, so the steering vector can be expressed as

a(\theta) = \left[\, 1,\; e^{-j\pi \sin\theta},\; e^{-j2\pi \sin\theta},\; \ldots,\; e^{-j(M-1)\pi \sin\theta} \,\right]^T.

For the signal of interest, we consider an echo signal from a target located 10 km away from the array and moving tangentially to the array at a velocity of 500 m/s. The transmitted signal is a linear FM chirp with a bandwidth of 300 MHz and a pulse duration of 100 ms. The interfering signal is generated from a similar target located 10 km from the array and moving at a tangential velocity of 500 m/s. The noise is generated from a zero-mean Gaussian distribution with a variance of 1. The SINR is set to -10 dB. Sampling is performed at 1 MHz. At this speed, the target moves roughly one-hundredth of a degree in about 3.5 ms, during which approximately 3500 samples are collected. To maintain manageable processing, we select a 1 ms observation window, during which about 1,000 samples are collected, corresponding to a smaller angular displacement of roughly 0.003°. Therefore, we set the snapshot number to 1000. The forgetting factor α is set to 0.99 and the low-rank dimension K is set to 10.
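For reference, a minimal NumPy helper for this half-wavelength ULA steering vector is sketched below; the name steering_vector is an illustrative assumption, not part of the paper.

```python
import numpy as np

def steering_vector(theta_deg, M):
    """a(theta) for a half-wavelength ULA of M elements (see the formula above)."""
    theta = np.deg2rad(theta_deg)
    return np.exp(-1j * np.pi * np.arange(M) * np.sin(theta))
```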
We initialize the DoA of the target and the interfering signals to random values between -30 and 30 degrees. The DoAs of the target and the interfering signals are updated by 0.01 degrees per 1000 samples. The beamforming weights are updated at each time step. The performance of the proposed method is evaluated in terms of computational complexity and beamforming accuracy. The computational complexity is measured by the number of operations required per time step.

B. Beamforming Accuracy

The beamforming accuracy is evaluated in terms of the beampattern of the array. The beampattern represents the spatial response of the array to a signal arriving from different directions, with a fixed set of weights. To compute the beampattern, the array is steered across various directions, and the output power is measured for each direction. The beampattern of the array is compared with the beampattern of the conventional MVDR beamformer. The beampatterns of the proposed method and the conventional MVDR beamformer are shown in Figure 1 for M = 50, 75, 100 and number of interferers L = 1, 2, 3.

Fig. 1. Comparison of beampatterns. The number of antennas changes from 50, 75, and 100 from top to bottom, and the number of interferers changes from 1, 2, and 3 from left to right. The proposed method closely matches the beampattern of the conventional MVDR beamformer in placing the main lobe in the direction of the target and nulls in the directions of the interfering signals.

It can be observed that the beampattern of the proposed method closely matches the beampattern of the conventional MVDR beamformer. The proposed algorithm correctly places the main lobe of the beampattern in the direction of the target and the nulls in the directions of the interfering signals.

To quantify the beamforming accuracy, we use the main lobe width (MLW), side lobe level (SLL), and null depth. The MLW is the angular width of the main lobe of the beampattern. The SLLs are the power levels of the side lobes relative to the main lobe. The null depth is the power level of the nulls relative to the main lobe. The MLW, SLLs, and null depths of the proposed method and the conventional MVDR beamformer for the configurations shown in Figure 1 are shown in Table I.

TABLE I
Comparison of beamforming metrics between the proposed algorithm and MVDR. The proposed algorithm performs similarly to the MVDR beamformer in MLW and achieves comparable SLLs and null depths.

          MLW (°) ↓       Null depth (dB) ↓    SLL (dB) ↓
M    L    MVDR    Prop    MVDR      Prop       MVDR      Prop
50   1    2.27    2.32    -49.56    -42.44     -13.02    -9.14
50   2    2.38    2.31    -51.31    -40.10     -13.48    -9.06
50   3    2.09    2.03    -49.55    -35.29     -13.19    -10.20
75   1    1.37    1.37    -47.54    -29.32     -13.12    -12.32
75   2    1.36    1.33    -41.02    -34.84     -12.75    -11.05
75   3    1.34    1.41    -44.70    -39.53     -11.11    -12.55
100  1    1.10    1.08    -44.01    -37.13     -13.05    -10.76
100  2    1.07    1.07    -55.54    -44.56     -12.27    -11.47
100  3    1.06    1.08    -45.83    -41.78     -11.37    -12.47

The proposed algorithm performs comparably to the MVDR beamformer in terms of main lobe width, with only minor variations across configurations. While the MVDR beamformer achieves deeper nulls, the proposed method consistently achieves SLLs comparable to those of the MVDR beamformer.
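As a reference for how such beampatterns are computed, here is a minimal sketch that sweeps the look direction and records the normalized output power. It reuses the steering_vector helper sketched in Section V-A; both the helper and the dB normalization are illustrative choices, not the authors' evaluation code.

```python
import numpy as np
# Assumes steering_vector(theta_deg, M) from the sketch in Section V-A.

def beampattern_db(w, M, angles_deg):
    """Spatial response |w^H a(theta)|^2 over a grid of look directions,
    normalized to the main lobe and returned in dB."""
    p = np.array([abs(np.vdot(w, steering_vector(t, M))) ** 2
                  for t in angles_deg])
    return 10 * np.log10(p / p.max())
```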
C. Computational Complexity

The computational complexity of the proposed method is compared with that of the conventional MVDR beamformer. The computational complexity of the conventional MVDR beamformer is O(M^3) per time step, whereas that of the proposed method is O(MK^2) per time step. For this experiment, we measured the execution time of the proposed method and the conventional MVDR beamformer over 10000 time steps for M = 10, 11, 12, ..., 500 and K = 10. The experiment was conducted on an AMD EPYC 7443 24-Core Processor [20]. The execution times of the proposed method and the conventional MVDR beamformer are shown in Figure 2. It can be observed that the execution time of the proposed method is significantly lower than that of the conventional MVDR beamformer for large arrays. The execution time of the proposed method scales linearly with the number of antennas, while the execution time of the conventional MVDR beamformer scales cubically with the number of antennas.

Fig. 2. Execution times. Comparison of the execution time for M = 10, 11, 12, ..., 500 and K = 10. The execution time of the proposed method scales linearly with the number of antennas, while the execution time of the conventional MVDR beamformer scales cubically with the number of antennas. This demonstrates the computational efficiency of the proposed method for large arrays.

VI. DISCUSSION AND LIMITATIONS

The proposed method is specifically designed for scenarios where the desired signal is below the noise floor. In scenarios where the desired signal is above the noise floor, the proposed method may not perform as well as the conventional MVDR beamformer. An example is shown in Figure 3, where the desired signal is above the noise floor with an SNR of 10 dB. The proposed method does not place the nulls in the directions of the interfering signals as effectively as the conventional MVDR beamformer.

Fig. 3. Failure mode. Beampatterns of the proposed method and the conventional MVDR beamformer for M = 50 and K = 10 when the desired signal is above the noise floor: the proposed method does not place the nulls in the directions of the interfering signals as effectively as the conventional MVDR beamformer.

Another limitation of the proposed method is the reduction of SINR gain when the algorithm is run over long periods. The SINR gain is the ratio of the output SINR to the input SINR. For both MVDR and the proposed method, the output SINR can be calculated as

\mathrm{SINR}_{\mathrm{out}} = \frac{\sigma_{s_0}^2 \, |w^H a(\theta_0)|^2}{w^H R w},    (14)

where σ_{s_0}^2 is the power of the desired signal. To illustrate this limitation, we calculated the average SINR gain of the proposed method over the conventional MVDR beamformer across 10 realizations, for M = 100 and K = 10, over 100,000 time steps. The results are shown in Figure 4. It can be observed that the SINR gain of the proposed method decreases over time. However, a possible solution to this limitation is to reinitialize the proposed method at regular intervals to maintain the SINR gain. In the above experiment, the proposed method was reinitialized at the 50,000th time step. It can be observed that the SINR gain is restored to its initial value after reinitialization.

Fig. 4. SINR gain of the proposed method over the conventional MVDR beamformer for M = 100 and K = 10 over 100,000 time steps. The SINR gain of the proposed method decreases over time. However, the SINR gain can be restored by reinitializing the proposed method at regular intervals.

Even with reinitialization, the proposed method offers reduced computational complexity compared to the conventional MVDR beamformer. The reinitialization step is quadratic in complexity with respect to the number of antennas, compared to the cubic complexity of the conventional MVDR beamformer. To further reduce the computational complexity during initialization, instead of computing the full SVD of R and then truncating it to form the rank-K approximation, we can directly compute a low-rank approximation of R. This reduces the computational complexity from O(M^3) to O(KM^2), where K is the rank of the approximation and K ≪ M. There are several methods to compute a low-rank approximation of R without explicitly forming the covariance matrix. One such method is the randomized SVD [21], which uses random projections to identify the subspace that captures the dominant singular values and vectors of R. By utilizing such methods, we can further reduce the computational complexity of the initialization step while maintaining the accuracy of the low-rank approximation.
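As one possible instantiation of this idea, the sketch below uses a Halko-style randomized range finder in the spirit of [21] to approximate the top-K eigenpairs of the Hermitian matrix R at O(KM^2) cost. This is an assumption-laden illustration, not the authors' implementation; the oversampling amount and all names are illustrative, and the leading K eigenvalues are assumed strictly positive.

```python
import numpy as np

def randomized_low_rank(R, K, oversample=10, rng=None):
    """Approximate top-K eigenpairs of a Hermitian R without a full SVD.
    Dominant cost is R @ Omega, i.e. O(M^2 (K + oversample))."""
    rng = np.random.default_rng() if rng is None else rng
    M = R.shape[0]
    Omega = (rng.standard_normal((M, K + oversample))
             + 1j * rng.standard_normal((M, K + oversample)))
    Q, _ = np.linalg.qr(R @ Omega)           # orthonormal basis for an approximate range of R
    B = Q.conj().T @ R @ Q                   # small (K+p) x (K+p) compression of R
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:K]
    U_K = Q @ vecs[:, idx]                   # approximate leading eigenvectors
    D_K_inv = np.diag(1.0 / vals[idx])       # assumes vals[idx] > 0
    return U_K, D_K_inv
```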
VII. CONCLUSION

In this paper, we proposed a scalable approach to MVDR beamforming for large arrays when the desired signal is below the noise floor. By incorporating the Sherman-Morrison formula and leveraging low-rank SVD approximations, we significantly reduced the complexity of updating the covariance matrix. The proposed method offers O(MK^2) complexity, making it feasible for real-time applications, even with arrays consisting of hundreds or thousands of antennas.

Simulation results demonstrate that the proposed approach maintains high beamforming accuracy, effectively suppressing interference and placing nulls at the desired locations. Furthermore, the computational complexity of our method scales linearly with the number of antennas, in contrast to the cubic scaling of conventional MVDR methods. The results indicate that this approach is particularly useful in dynamic environments requiring frequent updates to the covariance matrix, as it avoids the computationally expensive direct inversion.

Future work will focus on extending the proposed method to settings where the desired signal is above the noise floor. Additionally, we plan to investigate more efficient initialization methods for the proposed algorithm to further reduce the computational complexity and improve real-time performance. Nonetheless, the proposed method offers a promising solution for real-time MVDR beamforming with large arrays.

REFERENCES

[1] A. M. Elbir, K. V. Mishra, S. A. Vorobyov, and R. W. Heath, "Twenty-five years of advances in beamforming: From convex and nonconvex optimization to learning techniques," IEEE Signal Processing Magazine, vol. 40, no. 4, pp. 118-131, 2023.
[2] Y. Xu, E. G. Larsson, E. A. Jorswieck, X. Li, S. Jin, and T.-H. Chang, "Distributed signal processing for extremely large-scale antenna array systems: State-of-the-art and future directions," arXiv preprint arXiv:2407.16121, 2024.
[3] B. D. Van Veen and K. M. Buckley, "Beamforming: a versatile approach to spatial filtering," IEEE ASSP Magazine, vol. 5, pp. 4-24, Apr. 1988. [Online]. Available: https://ieeexplore.ieee.org/abstract/document/665/
[4] W. Liu and S. Weiss, Wideband Beamforming: Concepts and Techniques. John Wiley & Sons, 2010.
[5] N. A. Gumerov, B. Zhi, and R. Duraiswami, "Sequential direction detection for sound scene analysis," in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Apr. 2018, pp. 6807-6811. [Online]. Available: http://dx.doi.org/10.1109/ICASSP.2018.8462314
[6] A. J. van der Veen, "Algebraic methods for deterministic blind beamforming," Proceedings of the IEEE, vol. 86, pp. 1987-2008, 1998. [Online]. Available: https://ieeexplore.ieee.org/abstract/document/720249/
[7] H. L. Van Trees, Optimum Array Processing: Part IV of Detection, Estimation, and Modulation Theory. John Wiley & Sons, 2002.
[8] V. V. Zaharov and M. Teixeira, "SMI-MVDR beamformer implementations for large antenna array and small sample size," IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 55, no. 10, pp. 3317-3327, 2008.
[9] J. Sherman and W. J. Morrison, "Adjustment of an inverse matrix corresponding to a change in one element of a given matrix," The Annals of Mathematical Statistics, vol. 21, no. 1, pp. 124-127, 1950.
[10] G. Strang, Linear Algebra and Its Applications, 4th ed., 2012.
[11] S. Jiang, M. Jin, S. Liu, and Z. Lin, "A Nyström-based low-rank unitary MVDR beamforming scheme," Signal Processing, vol. 220, p. 109433, 2024.
[12] A. Barnov, V. B. Bracha, and S. Markovich-Golan, "QRD based MVDR beamforming for fast tracking of speech and noise dynamics," in 2017 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA). IEEE, 2017, pp. 369-373.
[13] R. Heusdens, G. Zhang, R. C. Hendriks, Y. Zeng, and W. B. Kleijn, "Distributed MVDR beamforming for (wireless) microphone networks using message passing," in IWAENC 2012; International Workshop on Acoustic Signal Enhancement. VDE, 2012, pp. 1-4.
[14] P. Sinha, A. D. George, and K. Kim, "Parallel algorithms for robust broadband MVDR beamforming," Journal of Computational Acoustics, vol. 10, no. 01, pp. 69-96, 2002.
[15] W. Zhang, J. Lin, X. Wu, and Y. Pan, "A distributed approach to robust minimum variance distortionless response beamforming in large-scale arrays," IET Communications, vol. 17, no. 8, pp. 950-959, 2023.
[16] H. Al Kassir, Z. D. Zaharis, P. I. Lazaridis, N. V. Kantartzis, T. V. Yioultsis, and T. D. Xenos, "A review of the state of the art and future challenges of deep learning-based beamforming," IEEE Access, vol. 10, pp. 80869-80882, 2022.
[17] T. Maksymyuk, J. Gazda, O. Yaremko, and D. Nevinskiy, "Deep learning based massive MIMO beamforming for 5G mobile network," in 2018 IEEE 4th International Symposium on Wireless Systems within the International Conferences on Intelligent Data Acquisition and Advanced Computing Systems (IDAACS-SWS). IEEE, 2018, pp. 241-244.
[18] H. Ngo, H. Fang, and H. Wang, "Deep learning-based adaptive beamforming for mmWave wireless body area network," in GLOBECOM 2020 - 2020 IEEE Global Communications Conference. IEEE, 2020, pp. 1-6.
[19] S. Lu, S. Zhao, and Q. Shi, "Learning-based massive beamforming," in GLOBECOM 2020 - 2020 IEEE Global Communications Conference. IEEE, 2020, pp. 1-6.
[20] AMD, "AMD EPYC 7443," https://www.amd.com/en/products/processors/server/epyc/7003-series/amd-epyc-7443.html, accessed: 2024-10-28.
[21] N. Halko, P.-G. Martinsson, and J. A. Tropp, "Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions," SIAM Review, vol. 53, no. 2, pp. 217-288, 2011.
Rethinking Hebbian Principle: Low-Dimensional Structural Projection for Unsupervised Learning

Shikuang Deng1∗, Jiayuan Zhang2∗, Yuhang Wu1, Ting Chen1, Shi Gu3,4B
1 School of Computer Science and Engineering, UESTC
2 Glasgow College, UESTC
3 School of Computer Science and Technology, Zhejiang University
4 State Key Lab of Brain-Machine Intelligence, Zhejiang University, China
dengsk@uestc.edu.cn, gus@zju.edu.cn
∗ Equal contribution; B Corresponding author
39th Conference on Neural Information Processing Systems (NeurIPS 2025).

Abstract

Hebbian learning is a biological principle that intuitively describes how neurons adapt their connections through repeated stimuli. However, when applied to machine learning, it suffers serious issues due to the unconstrained updates of the connections and the lack of accounting for feedback mediation. Such shortcomings limit its effective scaling to complex network architectures and tasks. To this end, here we introduce the Structural Projection Hebbian Representation (SPHeRe), a novel unsupervised learning method that integrates orthogonality and structural information preservation through a local auxiliary nonlinear block. The loss for structural information preservation backpropagates to the input through an auxiliary lightweight projection that conceptually serves as feedback mediation, while the orthogonality constraints account for the boundedness of the updating magnitude. Extensive experimental results show that SPHeRe achieves SOTA performance among unsupervised synaptic plasticity approaches on standard image classification benchmarks, including CIFAR-10, CIFAR-100, and Tiny-ImageNet. Furthermore, the method exhibits strong effectiveness in continual learning and transfer learning scenarios, and image reconstruction tasks show the robustness and generalizability of the extracted features. This work demonstrates the competitiveness and potential of Hebbian unsupervised learning rules within modern deep learning frameworks, showing the possibility of efficient and biologically inspired learning algorithms without a strong dependence on strict backpropagation. Our code is available at https://github.com/brain-intelligence-lab/SPHeRe.

1 Introduction

In recent years, deep learning based on error backpropagation has achieved revolutionary developments in the field of artificial intelligence [1, 2]. However, the biological plausibility of backpropagation remains a subject of debate, particularly concerning its requirement for symmetric feedback connections and precise global error signal transmission, mechanisms not readily observed in biological neural circuits [3, 4]. At the same time, neuroanatomical studies have found that the plasticity regulation of cortical synapses mainly follows local rules, with strength changes dynamically dominated by the activities of pre-/post-synaptic neurons, occasionally under the synergistic regulation of higher-order signals such as neuromodulators. Based on these neuroanatomical studies, researchers have proposed various biologically inspired learning algorithms to replace the backpropagation algorithm. The Hebbian rule, a canonical algorithm of this type, was first proposed by Donald Hebb in 1949 [5]. Its core idea is that "neurons that fire together wire together," which means that learning and memory formation are achieved through the cooperative strengthening of synapses. As it does not rely on global signals, it belongs to unsupervised learning.
In order to overcome the limitations of the original Hebbian rules in terms of stability, competitiveness, and feature extraction capabilities, researchers have developed various important variants, such as threshold mechanisms, Bienenstock-Cooper-Munro (BCM) rules [6], and Oja's rule [7]. A significant theoretical leap was made by Pehlevan, Chklovskii, and collaborators, who established a principled connection between Hebbian learning and similarity-matching objectives [8, 9, 10]. Their foundational work demonstrated that optimizing objectives of the form \|X^\top X - Y^\top Y\|_F^2, which aim to preserve the geometric structure (Gram matrix) of the input, can be achieved through fully local Hebbian and anti-Hebbian synaptic updates. However, extending these elegant theories to deep, nonlinear networks often relies, in theory, on feedback connections for inter-layer credit assignment. More recent works, such as combining Oja's rule with Winner-Take-All (WTA) mechanisms [11, 12], have achieved impressive results on complex datasets, yet many Hebbian-inspired methods still lack a clear, optimizable objective that seamlessly integrates with modern deep learning paradigms, hindering their scalability.

In this paper, we revisit this problem from a different architectural perspective. We propose the Structural Projection Hebbian Representation (SPHeRe), a novel, Hebbian-inspired, greedy layer-wise unsupervised learning framework. We begin by simplifying the equivalent loss of Oja's rule, arriving at a stable objective that, coincidentally, aligns with the similarity-matching goals pioneered by Pehlevan and Chklovskii. Then we provide a theoretical proof demonstrating that minimizing this simplified loss corresponds to finding an optimal low-dimensional projection of the input data (Lemma 4.2.1), akin to principal subspace analysis as performed by PCA. Our core innovation lies in how we apply this objective. Instead of deriving synaptic-level rules or relying on feedback, we introduce a lightweight auxiliary projection module (ϕ), creating a purely feedforward, block-wise training architecture. This "main-pathway learning, side-pathway supervision" design decouples the main network's high-dimensional feature learning from the low-dimensional structural preservation objective. This allows the main block (f) to learn rich representations (Y′) while the auxiliary block (ϕ) efficiently computes the structural loss on a low-dimensional projection (Z). Complemented by an orthogonality constraint (L_orth) to encourage feature decorrelation, SPHeRe provides a practical and effective bridge between Hebbian principles and modern deep learning. Our experiments show that SPHeRe achieves SOTA performance among Hebbian-inspired unsupervised methods and demonstrates strong generalization in continual and transfer learning, positioning it as a competitive greedy layer-wise pre-training strategy.

The following summarizes our main contributions:

• We propose SPHeRe, a novel Hebbian-inspired unsupervised learning framework, whose core innovation is a purely feedforward, block-wise training architecture. By introducing a lightweight auxiliary module (ϕ), SPHeRe decouples high-dimensional feature learning in the main pathway from a low-dimensional structural preservation objective, enabling the effective application of Hebbian principles to deep, non-linear networks.
• We provide a theoretical analysis (Lemma 4.2.1 and Appendix B) showing that optimizing the core loss L_SPHeRe is equivalent to finding the optimal low-dimensional projection (principal subspace) of the input data. Through experiments on reconstruction and transfer learning, we further verify that SPHeRe can extract robust and generalizable image features.

• We demonstrate through extensive experiments that SPHeRe achieves state-of-the-art performance among Hebbian-inspired unsupervised methods on CIFAR-10/100 and Tiny-ImageNet. Furthermore, its strong performance in continual learning, transfer learning, and feature reconstruction validates its effectiveness as a greedy layer-wise pre-training strategy that learns robust and generalizable representations.

2 Related Work

Synaptic Plasticity Rules. Synaptic plasticity, the modulation of connection strength between neurons, is recognized as a fundamental biological substrate for learning, memory consolidation, and behavioral shaping [13]. The classical Hebbian principle, which posits that correlated activity strengthens synapses [5], laid the foundation for understanding activity-dependent modifications. But neuroscience experiments reveal that the rules of synaptic plasticity in the brain are much more complex, featuring the timing-dependent effects of STDP [14], network-wide balancing through homeostatic plasticity [15], interactions between synapses through heterosynaptic plasticity [16], and even the control of plasticity itself, termed metaplasticity [17]. Recognizing the power of these biological adaptation strategies, researchers are increasingly working to integrate similar neuroplastic concepts into neural networks. Notable examples include investigations into the functional impact of homeostatic regulation in artificial systems [15], the synergistic use of Hebbian and non-Hebbian rules within recurrent architectures [18], and the development of unsupervised learning paradigms based on complementary Hebbian and anti-Hebbian updates [12], demonstrating a growing trend towards biologically inspired learning mechanisms in modern artificial intelligence.

A significant advancement in connecting Hebbian learning to principled optimization was made by [8, 9]. They demonstrated that optimizing a similarity-matching objective, which minimizes the discrepancy between input and output Gram matrices, can be mathematically decomposed into a set of purely local, online synaptic update rules with Hebbian and anti-Hebbian forms. Subsequent work extended this framework to deep networks [10], but these extensions often rely, in theory, on feedback connections to propagate error or target signals for credit assignment across layers. Starting from Oja's rule, we arrive at the same core mathematical objective, but we propose a fundamentally different architectural solution: a purely feedforward, block-based approach that applies this objective to deep networks and avoids the need for explicit feedback paths.

Unsupervised learning. Unsupervised learning (UL) aims to learn data representations from unlabeled data. Classical methods such as K-means [19] and Principal Component Analysis (PCA) [20] exploit the statistical structure of the data to perform dimensionality reduction or clustering. In the context of image classification, recent advances in UL for neural networks are typically categorized into two main approaches: self-supervised learning (SSL) and the reconstruction method [21].
SSL methods, including contrastive learning approaches such as SimCLR [22], BYOL [23], and SimSiam [24], rely on pretext tasks and often employ pseudo-labels. The reconstruction method, on the other hand, trains an encoder and a decoder simultaneously through an image reconstruction loss, so that the encoder extracts low-dimensional representations of the images and the decoder reconstructs the images from these low-dimensional representations. The autoencoder [25], VAE [26], and MAE [27] are representative of this type of method. However, an inherent limitation of these mainstream SSL and reconstruction methods is that they fundamentally rely on end-to-end gradient computation through backpropagation, a process that lacks biological plausibility. In this paper, we attempt to establish a biologically inspired new path for unsupervised learning.

3 Preliminaries

3.1 Hebbian Rule

The Hebbian learning rule satisfies the locality property: each synaptic connection is determined solely by the spike states of its pre- and post-synaptic neurons, without the influence of other neurons. In the classical Hebbian rule, the update is \Delta w_{ij} = \eta \, v_i v_j, the product of the firing rates of the pre- and post-synaptic neurons. Consider a simple linear matrix multiplication, where X ∈ R^{B×N} is the input, Y ∈ R^{B×M} is the output, W ∈ R^{N×M} contains the synaptic weights, and the relationship between them is Y = X · W. The original Hebbian rule for W is then

\Delta W = \eta \, (X^\top Y).    (1)

3.2 Oja's rule

The classical Hebbian rule has certain deficiencies, such as the potential for overtraining and instability due to consistently positive firing rates. Oja's rule is a well-known variant of the Hebbian rule that incorporates an additional decay term: the greater the post-synaptic activity, the more significant the decay. Under Oja's rule, the weight update is \Delta w_{ij} = \eta (v_i - w_{ij} v_j) v_j, and in matrix form the rule for W can be expressed as

\Delta W = \eta \, (X^\top Y - W \, Y^\top Y).    (2)

To some extent, Oja's rule approximates principal component analysis (PCA) for dimensionality reduction and has the effect of extracting features. However, Oja's rule does not account for the complex nonlinear transformations and feature-dimension adjustments of deep learning, making it difficult to apply directly to multilayer networks.

4 Methodology

This section introduces the Structural Projection Hebbian Representation (SPHeRe) method, an unsupervised learning approach inspired by Hebbian principles but adapted for modern deep learning architectures. SPHeRe aims to learn meaningful representations by preserving the structural relationships within the input data, utilizing a specific loss function, an orthogonality constraint, and a lightweight auxiliary block.

4.1 Analyzing Hebbian rules from the perspective of loss functions

In this section, we analyze how the Hebbian rule extracts information from the perspective of the loss function. Since the Hebbian rule is agnostic to the choice of activation function, we start from the simplest linear layer, where X ∈ R^{B×N} is the input, Y ∈ R^{B×M} is the output, W ∈ R^{N×M} contains the synaptic weights, and Y = X · W.

Original Hebbian rule. For the simple linear layer trained using the classical Hebbian rule, the equivalent loss function is L_hebb = -\tfrac{1}{2}\|Y\|_F^2.
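As a concrete illustration, here is a minimal NumPy sketch of the updates in Eqs. (1)-(2); note that the step ΔW = η X^⊤Y is exactly one gradient-descent step on L_hebb above, since for Y = XW we have ∇_W L_hebb = -X^⊤Y. This is a sketch for intuition, not the released SPHeRe code, and the function names and step size are illustrative.

```python
import numpy as np

def hebbian_step(X, W, eta=1e-3):
    """Classical Hebbian rule, Eq. (1): dW = eta * X^T Y with Y = X W.
    Equivalent to gradient descent on L_hebb = -1/2 * ||Y||_F^2."""
    Y = X @ W
    return W + eta * (X.T @ Y)

def oja_step(X, W, eta=1e-3):
    """Oja's rule, Eq. (2): dW = eta * (X^T Y - W Y^T Y).
    The decay term -W Y^T Y keeps the weights bounded."""
    Y = X @ W
    return W + eta * (X.T @ Y - W @ (Y.T @ Y))
```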
Minimizing the loss Lhebb will increase the overall magnitude of the output matrix Y, which means that the values of the elements in Y will grow. This optimization might lead to a larger variance in the distribution of Y, making it easier to classify. However, without constraints, it could result in excessively large output values, potentially causing numerical instability. In contrast, the equivalent loss function of the anti-Hebbian rule is Lanti-hebb = (1/2)‖Y‖²_F, and optimizing Lanti-hebb is equivalent to driving all elements of the output Y as close to zero as possible; this is a form of L2 regularization on the output.

Oja's rule. Oja's rule improves upon the classical Hebbian rule by introducing a suppressive term that prevents the weight matrix from growing indefinitely. The equivalent loss function of Oja's rule is

Loja = (1/4) Tr[(YY⊤ − XX⊤)(XX⊤)⁻¹(YY⊤ − XX⊤)], (3)

where Tr denotes the trace of a matrix. Optimizing this loss function is, to some extent, equivalent to minimizing the discrepancy between the Gram matrix (sample relationship matrix) KX = XX⊤ of the input X and the Gram matrix KY = YY⊤ of the output Y. When M ≥ N, the weight W tends to satisfy WW⊤ ≈ I_N; when M < N, the output Y becomes a representation of X that mimics the geometric structure of X in the N-dimensional space as closely as possible in the M-dimensional space, thus preserving the pairwise relationships between samples as much as possible. In the loss function, (XX⊤)⁻¹ acts as a weighting mechanism, weighting the errors according to the characteristics of the input data. However, the presence of (XX⊤)⁻¹ requires XX⊤ to be invertible, and it may lead to numerical instability.

4.2 Structural Projection Hebbian Representation (SPHeRe)

4.2.1 Preserving Data Structure

SPHeRe starts from the equivalent loss function of Oja's rule. First, we simplify the loss function by removing the inverse term (XX⊤)⁻¹ and the constant factor. This new objective directly seeks to learn an output representation Y that preserves the pairwise structural information inherent in the input, as captured by the respective Gram matrices XX⊤ and YY⊤. While derived from a Hebbian learning rule (Oja's), this simplified loss also connects to dimensionality reduction techniques aiming for optimal data projection. We thus define the core SPHeRe loss function as

LSPHeRe = Tr[(YY⊤ − XX⊤)(YY⊤ − XX⊤)] = ‖YY⊤ − XX⊤‖²_F. (4)

From the above equation, it can be seen intuitively that the goal of the simplified loss function is to make the output Gram matrix approach the input Gram matrix. Some works use this approach to achieve multidimensional scaling (MDS) [28] or non-negative matrix factorization (NMF) [29]. Notably, while we derive this objective from a Hebbian perspective, it is mathematically identical to the similarity-matching objectives studied extensively by [8]. The theoretical justification for this loss in capturing essential data structure comes from its connection to principal subspace projection, as formalized in the following lemma.

Lemma 4.2.1. Let X ∈ R^{B×N} be the input data matrix and let Y = XW ∈ R^{B×M} be the output, where W ∈ R^{N×M} is the weight matrix and M < N. Consider the loss function L = ‖YY⊤ − XX⊤‖²_F. Minimizing L with respect to W yields an optimal output Y* given by Y* = XV_M, where V_M consists of the first M right singular vectors of X. This Y* represents the projection of the rows of X onto the M-dimensional principal subspace, and Y*Y*⊤ is the best rank-M approximation of XX⊤ in the Frobenius norm.

The relevant proof is provided in Appendix A.
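The lemma is also easy to check numerically. The sketch below (a NumPy illustration with arbitrary dimensions, not code from the paper) builds Y* = XV_M from the SVD of X, confirms that its loss equals the closed-form minimum derived in Appendix A, and probes that random linear projections do no better:

```python
import numpy as np

rng = np.random.default_rng(0)
B, N, M = 128, 32, 8
X = rng.standard_normal((B, N))

U, s, Vt = np.linalg.svd(X, full_matrices=False)  # X = U diag(s) V^T
Y_star = X @ Vt[:M].T                             # project onto first M right singular vectors

def loss(Y):
    """Squared Frobenius norm of Y Y^T - X X^T (the lemma's objective)."""
    D = Y @ Y.T - X @ X.T
    return np.sum(D * D)

# The attained minimum matches the closed form: the sum of sigma_i^4 for i > M.
assert np.isclose(loss(Y_star), np.sum(s[M:] ** 4))

# Random linear projections W never beat the principal-subspace projection.
for _ in range(5):
    W = rng.standard_normal((N, M))
    assert loss(X @ W) >= loss(Y_star)
```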
From Lemma 4.2.1, it can be concluded that the objective of the new equivalent loss function is to find the optimal projection of the input X onto a low-dimensional space, and that the minimum value of the equivalent loss function is ∑_{i=M+1}^{N} σ_i^4, where σ_i is the i-th largest singular value of the input X. In addition, when the relationship between Y and X is nonlinear, such as Y = f(X) with f a nonlinear neural network (a universal approximator), the powerful fitting ability of f means that minimizing the loss function L will still drive Y towards the optimal solution Y* = XV_M that minimizes L.

After obtaining the equivalent loss function, we can compute its derivative with respect to W as ∇W = 4X⊤(YY⊤ − XX⊤)Y. The outer term X⊤(·)Y still reflects the fundamental Hebbian rule: the change in weights is related to the correlation between pre-synaptic activity (X) and post-synaptic activity (Y). The factor (YY⊤ − XX⊤) is the weighting term on the fundamental Hebbian rule: it drives the statistical structure of the output space (YY⊤) towards the statistical structure of the input space (XX⊤) and modulates the Hebbian update through the higher-order relationships between sample pairs in the input and output.

4.2.2 Enhancing Features: Orthogonality constraint

To enhance the quality and interpretability of the extracted features, we introduce an additional orthogonality constraint term, called the orthogonal loss. This loss encourages the different features, represented by the columns of the output matrix Y, to be mutually orthogonal. Decorrelated or orthogonal features often lead to less redundant representations and can improve the effectiveness of downstream tasks. The orthogonal loss function is defined as

Lorth = ‖Y⊤Y − I_M‖²_F. (5)

The matrix Y⊤Y ∈ R^{M×M} contains the dot products between all pairs of columns (features) of Y, so minimizing Lorth pushes the columns of Y towards orthogonality. By promoting orthogonality among the features, we aim to reduce redundancy in the learned representation, ensuring that each feature captures distinct aspects of the input data. Under the orthogonal loss, the gradient with respect to W is ∇W = 4X⊤Y(Y⊤Y − I_M). Interestingly, the gradient associated with this loss also exhibits a Hebbian modulation structure: it adds a weighting term Y⊤Y − I_M to the standard Hebbian rule and drives the output features of Y to be as orthogonal as possible. As a result, the total loss combines the new Hebbian loss and the orthogonality loss and can be written as

LTotal = LSPHeRe + λLorth, (6)

where λ is a hyperparameter that balances the contribution of the orthogonality constraint against the low-rank projection.

4.2.3 Integrating SPHeRe into Neural Networks: The Auxiliary block

The simplified Hebbian loss function LSPHeRe and the orthogonal loss Lorth presented earlier are defined based on a linear transformation Y = XW. However, deep learning heavily relies on non-linear layers to capture complex data patterns, and directly applying the linear Hebbian loss to train deep neural networks is often insufficient. Therefore, we extend our Hebbian method to non-linear settings.

Linear to nonlinear. We replace the linear transformation with a nonlinear mapping represented by a neural network. Let Y′ = f(X, W) denote the output of a neural network layer (or block) f with parameters W, given the input X. According to Lemma 4.2.1, minimizing the simplified Hebbian loss LSPHeRe drives the output Y towards the optimal low-dimensional projection of the input X.
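As a concrete illustration of this nonlinear setting, the following minimal PyTorch sketch trains a small nonlinear block with the combined objective of Eqs. (4)-(6); the architecture of f, the sizes, and λ are illustrative assumptions rather than the paper's exact configuration:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
B, N, M, lam = 128, 64, 16, 0.8
X = torch.randn(B, N)
K_X = X @ X.T                                       # input Gram matrix (fixed target)

f = nn.Sequential(nn.Linear(N, 128), nn.LeakyReLU(), nn.Linear(128, M))
opt = torch.optim.AdamW(f.parameters(), lr=1e-3)

for step in range(200):
    Y = f(X)
    L_sphere = ((Y @ Y.T - K_X) ** 2).sum()         # Eq. (4): structure preservation
    L_orth = ((Y.T @ Y - torch.eye(M)) ** 2).sum()  # Eq. (5): feature orthogonality
    loss = L_sphere + lam * L_orth                  # Eq. (6): total objective
    opt.zero_grad()
    loss.backward()
    opt.step()
```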
Meanwhile, the neural network f is a universal approximator: it has the capacity to learn complex functions, including the optimal linear projection identified by Lemma 4.2.1 if that minimizes the objective. Thus, even with a non-linear function f, optimizing the new Hebbian loss that compares input and output statistical structures can guide f to learn meaningful representations. We provide the corresponding verification experiments in Appendix B.

Auxiliary block. To further enhance flexibility and computational efficiency, we introduce an additional lightweight non-linear projection layer ϕ with parameters θ. This layer takes the intermediate representation Y′ as input and produces the auxiliary output Z = ϕ(Y′, θ) = ϕ(f(X, W), θ). We provide a comparison between SPHeRe with the auxiliary block and other similar methods in Appendix G. This two-stage non-linear mapping offers several advantages:

• Computational Efficiency. The dimension MZ of the auxiliary output Z can be chosen to be small (e.g., MZ ≪ M′), significantly reducing the cost of computing ZZ⊤ and the LSPHeRe loss, making it practical for large batches and deep networks.

• Flexibility for the Main Block (f). The main block f can learn a potentially high-dimensional (M′) and complex intermediate representation Y′ without being directly constrained by the low-rank structure imposed by LSPHeRe. The auxiliary network ϕ handles the projection to the low-dimensional space Z where the structural comparison occurs.

• Effective Nonlinear Learning. Even though f and ϕ are nonlinear, minimizing LSPHeRe on Z still effectively guides the network. As indicated by Lemma 4.2.1, the target for Z is the principal subspace projection XV_{MZ}. Since f and ϕ are universal approximators, they possess the capacity to learn the complex mapping from X to an approximation of XV_{MZ} through Y′ and Z during optimization. Experimental validation (detailed in Appendix B) confirms this: using this non-linear auxiliary setup, the learned auxiliary features Z exhibit high structural similarity (measured by Centered Kernel Alignment (CKA) and alignment of Singular Value Decomposition (SVD) components) to features obtained from the ideal linear projection defined by Lemma 4.2.1. This demonstrates that LSPHeRe effectively drives the non-linear system to capture the principal structural information of the input via the auxiliary branch.

• Locality. The objective for W is determined locally by comparing the block's input X to a representation Z derived (via ϕ) from its output Y′. Although gradients still flow through f and ϕ during optimization, the objective itself is defined relative to the block's input and (projected) output structure, differing from end-to-end supervision by a final task loss.

4.3 Distinguishing SPHeRe from PCA and Oja's Rule

While inspired by principles related to Oja's rule and PCA, SPHeRe differs significantly as a practical deep learning method: (1) Nonlinearity: SPHeRe is explicitly designed for training deep, nonlinear neural networks layer-by-layer or block-by-block, whereas standard PCA is a linear technique and Oja's rule is typically analyzed in linear or simple non-linear settings. (2) Auxiliary block: The use of the auxiliary block ϕ is a core architectural innovation of SPHeRe.
It decouples the dimensionality required for efficient structural comparison (MZ) from the dimensionality of the main block's feature representation (M′), allowing flexibility and scalability not present in direct PCA/Oja implementations. (3) Motivation: The primary goal is unsupervised pre-training or layer-wise training of deep networks to learn features beneficial for downstream tasks (such as classification and transfer learning), rather than solely dimensionality reduction as in standard PCA. We provide a comparison of SPHeRe with more unsupervised dimensionality reduction methods in Appendix G.

4.4 Overall Method

The core idea of SPHeRe is to leverage an auxiliary lightweight nonlinear projection ϕ to compute a low-dimensional representation Z, whose relationship structure (captured by a kernel matrix KZ) is driven to match the relationship structure of the input X (captured by KX). This is achieved by minimizing the simplified Hebbian loss LSPHeRe defined on these kernel matrices. An orthogonality constraint Lorth can then be applied to encourage decorrelation of the features. The auxiliary branch allows the main block f to operate with potentially higher-dimensional intermediate features Y′ while keeping the Hebbian loss computation efficient via the low-dimensional Z. The process can be summarized by the following equations and Fig. 1:

Y = f(X, W), Z = ϕ(Y, θ)
X̂ = X/‖X‖₂, Ẑ = Z/‖Z‖₂
KX = XX⊤, KZ = ZZ⊤
LSPHeRe = ‖KZ − KX‖²_F, Lorth = ‖Z⊤Z − I‖²_F
LTotal = LSPHeRe + λLorth

Figure 1: The concept of SPHeRe.

5 Experiments

5.1 Experimental Setup

Our Hebbian network architecture consists of three Hebbian-trained convolutional layers (384, 768, and 1536 channels) with kernel size 3 and a fully connected output layer trained with backpropagation. Each convolutional layer is followed by a max-pooling layer that halves the image resolution. A skip connection is employed in the final layer, in which an AvgPool(2, 2) is used for downsampling, but gradients are not propagated backward through it. Unless stated otherwise, we use Leaky-ReLU as the activation function. A lightweight auxiliary network ϕ is introduced for the downsampling of convolutional features. Let the input feature map of ϕ have k channels. The network ϕ comprises three sequential stages: (i) a 1 × 1 convolution halving the channel dimension to k/2; (ii) an adaptive pooling operation collapsing the spatial dimensions to 1 × 1; and (iii) a fully connected layer projecting the intermediate k/2-dimensional features onto a final 256-dimensional representation. We trained the network using AdamW (learning rate 0.001, weight decay 0.05) with a batch size of 128 and a learning rate scheduler; the hyperparameter λ for Lorth is set to 0.8; no data augmentation was applied except for standard normalization. For a fair comparison, the SoftHebb baseline adopted the same network structure but otherwise retained its standard optimal hyperparameters.

5.2 Comparison to Existing Works

In this section, we compare our method with other existing unsupervised synaptic plasticity learning methods on the CIFAR-10, CIFAR-100, and Tiny-ImageNet datasets. For the SoftHebb method, we use the open-source code and verify it with our network architecture. SoftHebb may be sensitive to the network structure, which could explain the slight decrease in performance relative to the results reported in its paper when our structure is used.
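For reference, the auxiliary network ϕ described in the setup above can be written as a small PyTorch module. This is a hedged sketch: the use of average pooling for the adaptive stage and all names are our assumptions, not the paper's exact implementation.

```python
import torch.nn as nn

class AuxiliaryBlock(nn.Module):
    """Lightweight projection phi: (B, k, H, W) feature maps -> (B, 256) outputs Z."""
    def __init__(self, k: int, out_dim: int = 256):
        super().__init__()
        self.reduce = nn.Conv2d(k, k // 2, kernel_size=1)  # (i) 1x1 conv halving channels
        self.pool = nn.AdaptiveAvgPool2d(1)                # (ii) collapse spatial dims to 1x1
        self.proj = nn.Linear(k // 2, out_dim)             # (iii) FC to a 256-d representation

    def forward(self, y):
        z = self.pool(self.reduce(y)).flatten(1)           # (B, k/2)
        return self.proj(z)
```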
As shown in Table 1, our method consistently outperforms these existing methods on all three benchmark datasets. On CIFAR-10, our method achieves a test accuracy of 81.11%, exceeding the best previous result (80.3%). This advantage is maintained on CIFAR-100 (0.79% higher than SoftHebb) and becomes even more pronounced on the more complex Tiny-ImageNet dataset, where our method achieves 40.33% compared to 34.12% for SoftHebb.

Table 1: Comparison with existing synaptic plasticity works. * denotes results reported in the source paper. † denotes self-implemented results on our network structure with the open-source code. All accuracy values are presented as mean ± standard deviation.

Approaches     | CIFAR-10     | CIFAR-100    | Tiny-ImageNet
D-CSNN [30]    | 73.7         | 45.17        | 14.36
Hard WTA [31]  | 72.2         | 32.56        | –
Hard WTA [11]  | 74.6         | –            | –
SoftHebb [12]* | 80.3         | 56.0         | –
SoftHebb [12]† | 78.86        | 54.18        | 34.12
SPHeRe         | 81.11 ± 0.11 | 56.79 ± 0.69 | 40.33 ± 0.24

5.3 Analysis Experiments

In this section, we conduct a series of analysis experiments to evaluate different aspects of our proposed method. Due to space limitations in the main text, we present a selection of key results here. For a more comprehensive evaluation, we refer the reader to Appendices B, H, I, J, K, and L, which provide further analysis experiments and results.

5.3.1 Ablation Study

In this section, we conduct an ablation study on the different components of our method to verify their contribution to the final performance. Specifically, we decompose our method into the original Oja Hebbian loss Loja, the simplified Hebbian loss LSPHeRe, the orthogonality constraint loss Lorth, and the auxiliary block ϕ. We then measure the unsupervised classification performance on CIFAR-10 for different combinations. The results are summarized in Table 2. Using only Loja achieves an accuracy of 73.8%, indicating that the original Oja's rule is still effective. It is worth noting that the ablation shows that both the orthogonality constraint loss Lorth and the auxiliary block ϕ effectively improve performance. However, Lorth requires a significant amount of computation, leading to memory overflow when dimensionality reduction is not performed with ϕ. Although LSPHeRe alone does not perform as well as the original Loja, its combination with the other components (Lorth and ϕ) is superior to the original Oja loss Loja. Ultimately, with the combination of LSPHeRe, Lorth, and ϕ, the network achieves 81.18% accuracy on the test set.

Table 2: Accuracy comparison for different loss function combinations (columns: Loja, LSPHeRe, Lorth, ϕ; a check marks the components used in each run).
✓       | 73.8
✓ ✓     | 76.2
✓ ✓ ✓   | 76.3
✓       | 65.4
✓ ✓     | 78.9
✓ ✓     | 80.7
✓ ✓ ✓   | 81.18

Table 3: Continual learning results. * denotes self-implemented. All accuracy values are presented as mean ± standard deviation.

Approaches     | Split-CIFAR100 | Split-TinyImageNet
EWC [32]       | 71.96 ± 0.37   | 62.87 ± 0.31
HAT [33]       | 75.70 ± 0.50   | 54.97 ± 0.84
OWM [34]       | 70.89 ± 0.13   | –
GPM [35]       | 74.99 ± 0.12   | 66.00 ± 0.24
SoftHebb [12]* | 51.1           | –
SPHeRe         | 72.72 ± 1.14   | 63.46 ± 0.79
SPHeRe-EWC     | 76.53 ± 0.64   | 67.05 ± 1.16

5.3.2 Performance on Continual Learning

Continual learning (CL) is a significant challenge in machine learning: it requires the network to learn new knowledge sequentially without catastrophic forgetting of previously acquired knowledge. We evaluated our proposed method, SPHeRe, on the standard CL benchmarks Split-CIFAR100 and Split-TinyImageNet, with the results presented in Table 3.
Our approach achieves competitive performance, demonstrating its potential in the CL setting. A likely reason is that our unsupervised learning method focuses on general representations of the input images, making it more robust to the distribution changes encountered between tasks and thus mitigating knowledge forgetting. Furthermore, when combined with a well-established CL technique (EWC), the SPHeRe-EWC method significantly improves performance, achieving state-of-the-art results on both datasets.

5.4 Performance on Transfer Learning

Since our method is unsupervised, it can capture general representations of the underlying image data, which may be applicable to different target tasks or datasets. In this section, we conduct cross-dataset transfer learning experiments, training the three convolutional layers on one dataset (source) through SPHeRe and training only the classification head on another dataset (target) to evaluate network performance. The results, detailed in Table 4, show promising transferability. When transferring knowledge from Tiny-ImageNet to CIFAR-10, the model achieved a test accuracy of 80.03%, which is remarkably close to the 81.11% accuracy obtained when training SPHeRe directly on CIFAR-10 (a performance gap of only −1.08%). Similarly, transferring from CIFAR-10 to Tiny-ImageNet yielded a test accuracy of 37.7%, compared to the 40.33% baseline achieved with direct training on Tiny-ImageNet (a gap of −2.63%). Although there is a slight decrease in performance compared to training directly on the target dataset, these results indicate that the feature extraction rules learned by SPHeRe on the source dataset remain effective when applied to different target datasets, confirming the method's transfer learning capability.

Table 4: Cross-dataset transfer learning performance of SPHeRe.

Transfer Direction        | Transfer Learning (%) (Train / Test) | Non-Transfer (%) (SPHeRe alone) | Gap (vs. Non-Transfer)
Tiny-ImageNet → CIFAR-10  | 97.3 / 80.03                         | 81.11                           | −1.08%
CIFAR-10 → Tiny-ImageNet  | 99.9 / 37.7                          | 40.33                           | −2.63%

5.4.1 Reconstruction Experiments

In the continual learning and transfer learning experiments, SPHeRe achieved good results, indicating that it tends to extract general information from images. To verify this claim, we designed an experiment to test whether the image information extracted by SPHeRe better compresses the complete information of an image. We first pretrained both the baseline model and our SPHeRe model on the CIFAR-10 dataset. These pretrained models were then used as encoders to transform input images into feature maps; Gaussian noise could optionally be added to the input images. Next, we fed the feature maps into a decoder, which was trained via backpropagation to reconstruct the original images. More details on the structure, training settings, and additional results can be found in Appendix H. The reconstruction loss (pixel-wise mean squared error) between the original and decoded images served as our evaluation metric. As shown in Fig. 2, when reconstructing the original clean images, the decoder utilizing features from our SPHeRe encoder achieved a remarkably low loss of 3.37 × 10⁻³. This outperforms reconstruction based on the features of the other methods. Visually, our method also provides the best image restoration, indicating that it better compresses and retains the complete image information.
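For concreteness, the evaluation side of this protocol (a frozen pretrained encoder, a decoder trained separately by backpropagation, and a pixel-wise MSE metric against the clean images) can be sketched as follows; `encoder` and `decoder` are hypothetical stand-ins for the models described above, not the paper's code:

```python
import torch
import torch.nn.functional as F

def reconstruction_mse(encoder, decoder, images, noise_std=0.0):
    """Pixel-wise MSE between decoded images and the clean originals."""
    inputs = images + noise_std * torch.randn_like(images)  # optional Gaussian noise
    with torch.no_grad():
        feats = encoder(inputs)      # encoder is pretrained and kept frozen
    recon = decoder(feats)           # decoder was trained by backpropagation
    return F.mse_loss(recon, images)
```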
Furthermore, even under noisy conditions, the decoder using SPHeRe features maintained outstanding reconstruction performance, achieving an MSE of just 3.59 × 10⁻³ and effectively reconstructing the original clean image from the noisy input's features. Additionally, we notice that the SoftHebb method tends to discard the color information of the image, which may degrade performance in tasks where color information is crucial. These reconstruction results strongly support our hypothesis that SPHeRe's unsupervised Hebbian learning mechanism captures more fundamental, complete, and robust visual information than both standard backpropagation and the SoftHebb baseline.

Figure 2: A sample of reconstruction results for a clean and a noisy input (columns: SoftHebb after 1 epoch, SoftHebb after 100 epochs, BP, and Ours; losses 7.79 / 12.75 / 4.75 / 3.37 for the clean image and 19.75 / 17.67 / 5.11 / 3.59 for the noisy image, whose own loss against the original is 10.00). The loss values are scaled by a factor of 10³.

6 Conclusion

This work revisits Hebbian learning's connection to optimal low-dimensional projection via equivalent loss functions. We introduce SPHeRe, a novel unsupervised loss function derived by simplifying the equivalent loss of Oja's rule and designed to minimize structural differences between input and output representations. To enhance feature extraction, SPHeRe incorporates lightweight nonlinear modules and a feature orthogonality constraint. SPHeRe achieves state-of-the-art results among unsupervised plasticity methods on multiple classification datasets and demonstrates robustness and generalizability across continual learning, transfer learning, and image reconstruction tasks. Our findings highlight the feasibility and potential of Hebbian learning in modern deep learning. However, certain limitations invite further investigation and delineate promising avenues for future work. One notable observation from our experiments is that performance gains diminish slightly as the number of convolutional layers increases. This suggests that noise accumulates with depth and that purely local unsupervised SPHeRe learning lacks sufficient contextual signals to learn to eliminate it. This challenge resonates with biological reality, where neural circuits employ a sophisticated synergy between local synaptic plasticity and global neuromodulatory signals; SPHeRe currently emphasizes the former. Therefore, a compelling next step involves exploring the integration of multiscale regulatory or contextual signals, perhaps inspired by biological reward/punishment pathways or attention mechanisms. We believe that pursuing such multiscale approaches will not only advance the capabilities and scalability of Hebbian and other biologically inspired algorithms within modern deep learning frameworks but also offer valuable computational perspectives that could deepen our understanding of biological neural learning processes.

7 Acknowledgment

This project is supported by the NSFC Key Program (No. 62236009), the Youth Program (No. 62506065), and the Postdoctoral Fellowship Program of CPSF under Grant Number GZC20251045.

References

[1] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.

[2] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.
[3] Francis Crick. The recent excitement about neural networks. Nature, 337(6203):129–132, 1989.

[4] Timothy P Lillicrap, Daniel Cownden, Douglas B Tweed, and Colin J Akerman. Random synaptic feedback weights support error backpropagation for deep learning. Nature Communications, 7(1):13276, 2016.

[5] Donald Olding Hebb. The Organization of Behavior: A Neuropsychological Theory. Psychology Press, 2005.

[6] Elie L Bienenstock, Leon N Cooper, and Paul W Munro. Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. Journal of Neuroscience, 2(1):32–48, 1982.

[7] Erkki Oja. Simplified neuron model as a principal component analyzer. Journal of Mathematical Biology, 15:267–273, 1982.

[8] Cengiz Pehlevan, Tao Hu, and Dmitri B Chklovskii. A Hebbian/anti-Hebbian neural network for linear subspace learning: A derivation from multidimensional scaling of streaming data. Neural Computation, 27(7):1461–1495, 2015.

[9] Cengiz Pehlevan, Anirvan M Sengupta, and Dmitri B Chklovskii. Why do similarity matching objectives lead to Hebbian/anti-Hebbian networks? Neural Computation, 30(1):84–124, 2017.

[10] Dina Obeid, Hugo Ramambason, and Cengiz Pehlevan. Structured and deep similarity matching via structured and deep Hebbian networks. Advances in Neural Information Processing Systems, 32, 2019.

[11] Leopold Grinberg, John Hopfield, and Dmitry Krotov. Local unsupervised learning for image analysis. arXiv preprint arXiv:1908.08993, 2019.

[12] Adrien Journé, Hector Garcia Rodriguez, Qinghai Guo, and Timoleon Moraitis. Hebbian deep learning without feedback. arXiv preprint arXiv:2209.11883, 2022.

[13] Sarah C. Simmons, Greg G. Grecco, Brady K. Atwood, and Fereshteh S. Nugent. Effects of prenatal opioid exposure on synaptic adaptations and behaviors across development. Neuropharmacology, 222:109312, 2023.

[14] Guo-qiang Bi and Mu-ming Poo. Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type. Journal of Neuroscience, 18(24):10464–10472, 1998.

[15] Johannes Zierenberg, Jens Wilting, and Viola Priesemann. Homeostatic plasticity and external input shape neural network dynamics. Physical Review X, 8(3):031018, 2018.

[16] Marina Chistiakova, Nicholas M Bannon, Maxim Bazhenov, and Maxim Volgushev. Heterosynaptic plasticity: multiple mechanisms and multiple roles. The Neuroscientist, 20(5):483–498, 2014.

[17] Wickliffe C Abraham and Mark F Bear. Metaplasticity: the plasticity of synaptic plasticity. Trends in Neurosciences, 19(4):126–130, 1996.

[18] Priyadarshini Panda and Kaushik Roy. Learning to generate sequences with combination of Hebbian and non-Hebbian plasticity in recurrent spiking neural networks. Frontiers in Neuroscience, 11:693, 2017.

[19] Stuart Lloyd. Least squares quantization in PCM. IEEE Transactions on Information Theory, 28(2):129–137, 1982.

[20] Harold Hotelling. Analysis of a complex of statistical variables into principal components. Journal of Educational Psychology, 24(6):417, 1933.

[21] Yanbei Chen, Massimiliano Mancini, Xiatian Zhu, and Zeynep Akata. Semi-supervised and unsupervised deep visual learning: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 46(3):1327–1347, 2022.

[22] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning, pages 1597–1607. PMLR, 2020.
[23] Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent: A new approach to self-supervised learning. Advances in Neural Information Processing Systems, 33:21271–21284, 2020.

[24] Xinlei Chen and Kaiming He. Exploring simple siamese representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15750–15758, 2021.

[25] Geoffrey E Hinton and Ruslan R Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.

[26] Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.

[27] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16000–16009, 2022.

[28] Arild Nøkland and Lars Hiller Eidnes. Training neural networks with local error signals. In International Conference on Machine Learning, pages 4839–4850. PMLR, 2019.

[29] Da Kuang, Chris Ding, and Haesun Park. Symmetric nonnegative matrix factorization for graph clustering. In Proceedings of the 2012 SIAM International Conference on Data Mining, pages 106–117. SIAM, 2012.

[30] Bonifaz Stuhr and Jürgen Brauer. CSNNs: Unsupervised, backpropagation-free convolutional neural networks for representation learning. In 2019 18th IEEE International Conference on Machine Learning and Applications (ICMLA), pages 1613–1620. IEEE, 2019.

[31] Thomas Miconi. Hebbian learning with gradients: Hebbian convolutional neural networks with modern deep learning frameworks. arXiv preprint arXiv:2107.01729, 2021.

[32] James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114(13):3521–3526, 2017.

[33] Joan Serrà, Dídac Surís, Marius Miron, and Alexandros Karatzoglou. Overcoming catastrophic forgetting with hard attention to the task. In International Conference on Machine Learning, pages 4548–4557. PMLR, 2018.

[34] Guanxiong Zeng, Yang Chen, Bo Cui, and Shan Yu. Continual learning of context-dependent processing in neural networks. Nature Machine Intelligence, 1(8):364–372, 2019.

[35] Gobinda Saha, Isha Garg, and Kaushik Roy. Gradient projection memory for continual learning. arXiv preprint arXiv:2103.09762, 2021.

A Proof of Lemma 4.2.1

Lemma 4.2.1. Let X ∈ R^{B×N} be the input data matrix and let Y = XW ∈ R^{B×M} be the output, where W ∈ R^{N×M} is the weight matrix and M < N. Consider the loss function L = ‖YY⊤ − XX⊤‖²_F. Minimizing L with respect to W yields an optimal output Y* given by Y* = XV_M, where V_M consists of the first M right singular vectors of X. This Y* represents the projection of the rows of X onto the M-dimensional principal subspace, and Y*Y*⊤ is the best rank-M approximation of XX⊤ in the Frobenius norm.

The proof is similar to that of the Eckart–Young–Mirsky theorem and is given below.

Proof. The objective is to minimize the loss function L(W) = ‖YY⊤ − XX⊤‖²_F. Substituting Y = XW, we get

L(W) = ‖(XW)(XW)⊤ − XX⊤‖²_F = ‖XWW⊤X⊤ − XX⊤‖²_F.

Let P = WW⊤. The matrix P ∈ R^{N×N} is symmetric positive semidefinite (PSD).
The rank of P is bounded by the dimensions of W: rank(P) = rank(W) ≤ min(N, M) = M, since M < N. The optimization problem can therefore be reformulated as finding a PSD matrix P that minimizes

L(P) = ‖XPX⊤ − XX⊤‖²_F. (7)

First, we perform a singular value decomposition (SVD) of the input matrix X:

X = UΣV⊤, (8)

where:

• U ∈ R^{B×B} is an orthogonal matrix (U⊤U = I_B).
• V ∈ R^{N×N} is an orthogonal matrix (V⊤V = I_N). The columns v_1, v_2, ..., v_N of V are the right singular vectors of X.
• Σ ∈ R^{B×N} is a rectangular diagonal matrix containing the singular values σ_1 ≥ σ_2 ≥ ... ≥ σ_r > 0 on its main diagonal, where r = rank(X).

We can express the terms in Eq. (7) using Eq. (8):

• XX⊤ = (UΣV⊤)(UΣV⊤)⊤ = UΣV⊤VΣ⊤U⊤ = U(ΣΣ⊤)U⊤. Let Λ = ΣΣ⊤ ∈ R^{B×B}; Λ is a diagonal matrix with diagonal entries σ_1^2, ..., σ_r^2, 0, ..., 0.
• XPX⊤ = (UΣV⊤)P(UΣV⊤)⊤ = UΣV⊤PVΣ⊤U⊤.

Then Eq. (7) becomes

L(P) = ‖UΣV⊤PVΣ⊤U⊤ − UΛU⊤‖²_F = ‖ΣV⊤PVΣ⊤ − Λ‖²_F,

where the second equality follows from the unitary invariance of the Frobenius norm (i.e., ‖QAZ‖_F = ‖A‖_F for orthogonal matrices Q, Z). Let Q = V⊤PV. Since V is orthogonal and P is symmetric PSD with rank(P) ≤ M, Q is also symmetric PSD with rank(Q) = rank(P) ≤ M. The optimization problem transforms into finding a symmetric PSD matrix Q ∈ R^{N×N} with rank(Q) ≤ M that minimizes

L(Q) = ‖ΣQΣ⊤ − Λ‖²_F. (9)

Let Σ_r = diag(σ_1, ..., σ_r). We can partition Σ and Q appropriately. Without loss of generality regarding the dimensions B, N, the product takes the form

ΣQΣ⊤ = [Σ_r 0; 0 0]_{B×N} Q [Σ_r 0; 0 0]⊤_{N×B} = [Σ_r Q_11 Σ_r 0; 0 0]_{B×B}, (10)

where Q_11 is the top-left r × r principal submatrix of Q, and Λ = ΣΣ⊤ = [Σ_r^2 0; 0 0]_{B×B}. Eq. (9) then simplifies to

L(Q) = ‖Σ_r Q_11 Σ_r − Σ_r^2‖²_F. (11)

Let A_11 = Σ_r Q_11 Σ_r. Since σ_i > 0 for i = 1, ..., r, Σ_r is invertible; thus Q_11 = Σ_r^{-1} A_11 Σ_r^{-1}. A_11 must be symmetric PSD with rank(A_11) = rank(Q_11) ≤ M. The problem is therefore equivalent to finding the best rank-M symmetric PSD approximation A_11 to the diagonal matrix Σ_r^2 = diag(σ_1^2, ..., σ_r^2) in the Frobenius norm. The optimal rank-k approximation of a diagonal matrix (in the Frobenius norm) is obtained by keeping the k diagonal entries with the largest magnitudes and setting the others to zero. Since σ_1^2 ≥ ... ≥ σ_r^2 > 0 and A_11 must be PSD, the best rank-M symmetric PSD approximation A_11* of Σ_r^2 is obtained by keeping the first M largest diagonal entries (assuming r ≥ M; otherwise keep all r):

A_11* = diag(σ_1^2, ..., σ_M^2, 0, ..., 0)_{r×r}.

The corresponding optimal Q_11* is

Q_11* = Σ_r^{-1} A_11* Σ_r^{-1} = diag(1, ..., 1, 0, ..., 0)_{r×r},

with M ones. The optimal N × N matrix Q* can be constructed as

Q* = [Q_11* 0; 0 0]_{N×N} = [I_M 0; 0 0]_{N×N}.

Q* is symmetric PSD with rank(Q*) = M. Then

P* = V [I_M 0; 0 0]_{N×N} V⊤ = [V_M, V_{N−M}] [I_M 0; 0 0] [V_M⊤; V_{N−M}⊤] = V_M I_M V_M⊤ = V_M V_M⊤.

P* is the projection matrix onto the subspace spanned by the first M right singular vectors of X. We need to find W* ∈ R^{N×M} such that W*(W*)⊤ = P* = V_M V_M⊤. A valid solution is W* = V_M. (Other solutions W* = V_M O for any M × M orthogonal matrix O exist, but they lead to the same YY⊤.) As a result, the optimal output Y* is

Y* = XW* = XV_M = UΣV⊤V_M = UΣ_{B×M} = U_M Σ_M, (12)

so the optimal output Y* = XV_M gives the coordinates of the rows of X projected onto this principal subspace. Furthermore,

Y*(Y*)⊤ = (U_M Σ_M)(U_M Σ_M)⊤ = ∑_{i=1}^{M} σ_i^2 u_i u_i⊤,

which is the best rank-M approximation of XX⊤ = UΛU⊤ = ∑_{i=1}^{r} σ_i^2 u_i u_i⊤ in the Frobenius norm.
The minimal loss is therefore L_min = ∑_{i=M+1}^{N} σ_i^4.

B SPHeRe Loss in the Nonlinear Condition

In practice, instead of directly using the linear projection, we apply a nonlinear transformation to X; we use f to represent this nonlinear transformation. The loss function then becomes

L = ‖f(XW; θ_f) f(XW; θ_f)⊤ − XX⊤‖²_F, (13)

where θ_f are the parameters of a universal approximator f (a neural network). When Z = f(XW; θ_f) ∈ R^{B×M′} (M′ < N), Lemma 4.2.1 gives the optimal Z* = XV_{M′} that minimizes the loss L. Since f is a universal approximator, there theoretically exists a set of parameters θ_f* and W* such that f(XW*; θ_f*) = XV_{M′}. Therefore, during the training process, the output Z of the auxiliary branch will gradually approach the dimensionality-reduction projection XV_{M′} of X, and the output Y of the main branch layer will gradually summarize and extract the features of the input X through this training process.

To verify this viewpoint, we conducted the corresponding experiments. We trained two different network branches: 1) a linear branch, where f is a linear mapping with parameter W; and 2) a nonlinear branch, where f is a neural network with 3 layers. Both branches receive exactly the same input data X, and we train them through Eq. (13). During the training process, we record the output features at different training epochs and use the centered kernel alignment (CKA) similarity to quantify and compare the similarity between the output features of the two branches (Fig. 3 A). From the CKA similarity curve in Fig. 3 B, it can be seen that after only 20 training epochs the CKA similarity exceeded 0.90 and stabilized at a high level as training progressed, for all datasets. This indicates that regardless of whether an explicit nonlinear transformation f is used, the SPHeRe loss drives the network to learn very similar feature representations.

Secondly, to further explore the structure of the feature space, we performed a singular value decomposition (SVD) on the feature outputs of the two branches. We focus on the first 36 principal components (that is, right singular vectors) and calculate the similarity between the feature vectors obtained by linear SPHeRe optimization (linear f) and those obtained by nonlinear SPHeRe optimization (nonlinear f). We visualize these similarities as a 36 × 36 similarity matrix (Fig. 3 C). The diagonal region of the similarity matrix shows very high similarity, especially for the first few principal components with lower indices (the first 20). This indicates that the most important principal directions (the directions explaining the most variance) in the feature spaces learned by the two optimization methods are highly aligned. Principal components with higher indices (the last 10) do not strictly attain the maximum similarity on the diagonal, but most singular vectors still have closely matching counterparts.

Figure 3: (A) The experiment's network architecture; (B) CKA similarity comparison of the linear and nonlinear branches on MNIST, CIFAR-10, and CIFAR-100; (C) SVD component correlation between the linear and nonlinear branches.
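For reference, the linear CKA similarity used above can be computed from centered Gram matrices. The following is a minimal sketch of this standard formulation (not code from the paper), where X1 and X2 are feature matrices with one row per sample:

```python
import numpy as np

def linear_cka(X1, X2):
    """Linear centered kernel alignment between two feature matrices."""
    B = X1.shape[0]
    H = np.eye(B) - np.ones((B, B)) / B      # centering matrix
    Kc = H @ (X1 @ X1.T) @ H                 # centered Gram matrix of X1
    Lc = H @ (X2 @ X2.T) @ H                 # centered Gram matrix of X2
    return np.sum(Kc * Lc) / (np.linalg.norm(Kc) * np.linalg.norm(Lc))
```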
C Ablation Study on the Auxiliary Block Architecture

To further evaluate the function of the auxiliary block, we conduct an ablation study on it. We systematically varied two key components of this design:

1. Depth: We changed the number of convolutional layers within ϕ (0, 1, or 2 layers).
2. Projection Dimension: We changed the output dimension of the final FC projection layer (128, 256, or 512).

The results of this ablation study, measured by test accuracy on CIFAR-10, are summarized in Table 5 and Table 6.

Table 5: Ablation on the number of convolutional layers in ϕ.
Number of Conv Layers | CIFAR-10 Test Accuracy (%)
0                     | 78.65
1 (Default)           | 81.11
2                     | 79.93

Table 6: Ablation on the projection dimension of the FC layer in ϕ.
Projection Dimension | CIFAR-10 Test Accuracy (%)
128                  | 80.50
256 (Default)        | 81.11
512                  | 81.18

D Comparison of Transfer Learning Performance

To provide strong baselines for comparison, we conducted identical experiments using features from a standard backpropagation (BP) trained model and from the current state-of-the-art Hebbian method, SoftHebb [12].

Table 7: Cross-dataset transfer learning performance comparison. Non-Transfer refers to the accuracy achieved by each method when trained and tested on the target dataset (results from Table 1). The gap is the difference between the Transfer Learning (Test) and Non-Transfer accuracies.

Method        | Transfer Direction       | Transfer Learning (%) (Train / Test) | Non-Transfer (%)
BP            | Tiny-ImageNet → CIFAR-10 | 100.0 / 81.7                         | 88.7
BP            | CIFAR-10 → Tiny-ImageNet | 99.9 / 39.2                          | 45.3
SoftHebb      | Tiny-ImageNet → CIFAR-10 | 78.0 / 74.84                         | 78.86
SoftHebb      | CIFAR-10 → Tiny-ImageNet | 60.0 / 33.26                         | 34.12
SPHeRe (Ours) | Tiny-ImageNet → CIFAR-10 | 97.3 / 80.03                         | 81.11
SPHeRe (Ours) | CIFAR-10 → Tiny-ImageNet | 99.9 / 37.7                          | 40.33

E Comparison with Backpropagation-Based Unsupervised/Self-supervised Learning

To further contextualize the performance of SPHeRe within the broader landscape of unsupervised learning, we compare it against a modern self-supervised learning (SSL) method, SimCLR [22]. This comparison is insightful yet requires careful interpretation due to fundamental differences in learning philosophy and optimal network architecture. Modern SSL methods like SimCLR leverage strong, explicit pseudo-supervisory signals derived from sophisticated data augmentations to learn invariant and discriminative features. In contrast, SPHeRe operates on a different principle: it uses a simple, unsupervised objective (LSPHeRe) that aims to preserve the intrinsic structural information of the input without relying on such explicit invariance-inducing signals. An empirical investigation was conducted on CIFAR-10 using an official SimCLR implementation. Both methods were evaluated under their own optimal hyperparameter settings. The results are summarized in Table 8.

Table 8: Comparison between SPHeRe and SimCLR under different training configurations on CIFAR-10. "Best Settings" refers to the optimal hyperparameters for each respective method. The model structure follows the SPHeRe optimal setting.

Method | Configuration        | Train Acc. (%) | Test Acc. (%)
SimCLR | SimCLR Best Settings | 87.83          | 71.94
SPHeRe | SimCLR Best Settings | 75.03          | 70.61
SimCLR | SPHeRe Best Settings | 56.36          | 49.01
SPHeRe | SPHeRe Best Settings | 97.91          | 81.13

F Application to Deeper Architectures

A key question for any unsupervised learning method is its scalability to modern, deep architectures.
As noted in the conclusion, our initial observations indicated that the performance gains of purely local SPHeRe learning diminish as network depth increases, due to the accumulation of representation noise across layers. To investigate this quantitatively and explore a practical application, we designed an experiment in which a SPHeRe-trained module serves as a biologically inspired "stem" for a standard deep ResNet. To evaluate the potential of SPHeRe as an initial feature extractor, we conducted a series of experiments integrating it with a standard ResNet-18 architecture on the CIFAR-10 dataset. We first established a baseline by training a standard ResNet-18 model end-to-end with backpropagation. We then pre-trained the first convolutional layer of the ResNet using the SPHeRe objective. This pre-trained stem was integrated into the ResNet-18 in two ways: 1) by freezing the SPHeRe-trained stem parameters and training only the subsequent ResNet layers, and 2) by using the SPHeRe-trained stem as an initialization and fine-tuning the entire network. The results of these experiments are summarized below.

Table 9: Performance of ResNet-18 on CIFAR-10 with different training strategies for the initial stem layers.
Model & Training Strategy            | Test Accuracy (%)
ResNet-18 (Standard Backpropagation) | 79.46
SPHeRe Stem (frozen) + ResNet-18     | 78.78
SPHeRe Stem (fine-tuned) + ResNet-18 | 79.93

G Comparison to Related Unsupervised Methods

This section provides a comparative overview of SPHeRe and several related unsupervised learning and matrix factorization methods. Table 10 summarizes the typical optimization objectives and primary goals of SPHeRe along with typical unsupervised methods such as PCA, Kernel PCA, MDS, SymNMF, and Laplacian eigenmaps. This comparison helps to situate SPHeRe within the broader landscape of dimensionality reduction and structure preservation algorithms, highlighting the similarities and distinctions in their underlying mathematical formulations and objectives.

Table 10: Comparison of SPHeRe with related unsupervised learning / matrix factorization methods.

SPHeRe
  Objective: min_{W,θ} ‖KZ − KX‖²_F, with Z = ϕ(Y, θ), Y = f(X, W), KX = XX⊤, KZ = ZZ⊤.
  Primary goal: preserve the input data structure (pairwise relationships defined by KX) in the output representation Z; find the optimal low-dimensional projection.

PCA
  Objective: max_W Tr(W⊤X⊤XW) s.t. W⊤W = I.
  Primary goal: find orthogonal directions (principal components) capturing maximum variance; dimensionality reduction preserving the global covariance structure.

Kernel PCA
  Objective: max_V Tr(V⊤C_ϕ V) s.t. V⊤V = I_p, with C_ϕ = (1/N) Φ̃⊤Φ̃ and ϕ̃(x_i) = ϕ(x_i) − (1/N) ∑_{j=1}^{N} ϕ(x_j).
  Primary goal: find principal components in a nonlinear feature space defined by the kernel k; preserve kernel-based structure.

MDS
  Objective: min_Y ∑_{i,j} w_ij (d_ij(Y) − δ_ij)², where d_ij(Y) is the distance in the embedding Y and δ_ij is the original distance.
  Primary goal: find a low-dimensional embedding Y that preserves the given pairwise distances or dissimilarities δ_ij.

SymNMF
  Objective: min_{H≥0} ‖A − HH⊤‖²_F, where A is a given symmetric matrix and H must be non-negative.
  Primary goal: find a non-negative low-rank matrix H such that HH⊤ approximates A; often used for clustering.

Laplacian Eigenmaps
  Objective: min_Y Tr(Y⊤LY) s.t. Y⊤DY = I (equivalently min_Y ∑_{i,j} w_ij ‖y_i − y_j‖²), with L = D − W.
  Primary goal: preserve the local neighborhood structure in a low-dimensional embedding; dimensionality reduction for manifold data.

H Supplementary Information on the Reconstruction Autoencoder

Fig. 4 shows the framework of the image reconstruction experiments. The encoder shares the same structure as in the unsupervised learning setting. The decoder has three upsampling convolutional layers ({512, 256, 3} channels) with kernel size 3, stride 2, padding 1, and output padding 1. To better visualize the result of the reconstruction, the decoder is trained on the specific image for 100 iterations. We use the pixel-wise mean squared error to quantify the difference between the reconstructed image and the original image. To guarantee generalization, we also tested on the whole CIFAR-10 test set, where our method again shows superiority. In this experiment, we trained the decoder on the training dataset for 10 epochs and then tested on the test dataset. The result is shown in Table 11.

Figure 4: Auto-encoder framework. Stage 1: pretrain the encoder with SPHeRe on CIFAR and freeze its parameters. Stage 2: train the decoder with BP to reconstruct the original image from the feature map of the (optionally noise-corrupted) input.

Table 11: Reconstruction results on CIFAR-10.
     | SoftHebb (100 epochs) | SoftHebb | BP   | SPHeRe
Loss | 1.81                  | 2.39     | 1.68 | 1.62

We provide more results of the image reconstruction tasks in Fig. 5 to support the experimental results and the analysis in the main text. Note that the reconstruction performance varies across different random initializations. To account for this variability, we repeated the experiment with 10 different random seeds and report the mean reconstruction loss along with the standard error of the mean (SEM) in Fig. 6.

Figure 5: Reconstruction experiment results for two further samples (columns: SoftHebb after 1 epoch, SoftHebb after 100 epochs, BP, and Ours; clean-image losses 16.75 / 9.37 / 2.68 / 1.99 and 58.87 / 9.03 / 2.56 / 1.93; noisy-image losses 44.81 / 9.43 / 3.60 / 2.37 and 64.06 / 12.39 / 4.29 / 2.34, with the noisy inputs themselves at loss 10). The loss values are scaled by a factor of 10³.

Figure 6: Reconstruction loss across 10 random initializations (mean ± SEM). The y-axis is log-scaled, and the loss values are scaled by a factor of 10³.

I t-SNE Visualization

To qualitatively assess the feature representations learned by the three convolutional layers trained with our method, we visualized the output features for the CIFAR-10 dataset using t-SNE. Fig. 7 presents the t-SNE embeddings for both the training and test set samples, comparing the feature space before training (random weights) and after training. As expected, after training, distinct clusters begin to emerge in the feature space for both the training and test sets. This demonstrates that the proposed unsupervised Hebbian learning process successfully organizes the feature representations based on the underlying structure of the CIFAR-10 data, and that these learned structures generalize well from the training set to the unseen test set. Consistent with observations made for SoftHebb [12], our SPHeRe approach shows a clear propensity to separate classes characterized by relatively well-defined shapes and edges, such as airplane, automobile, ship, and truck. However, it is more challenging to separate classes representing animals that often feature fur, feathers, and more variable textures, such as birds, cats, deer, and horses. These classes exhibit considerable overlap in the t-SNE embedding, suggesting that the features captured by SPHeRe, while effective for edge-based discrimination, may be less sensitive to finer textural details or complex shape variations.
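The visualization protocol itself is standard; a minimal sketch with scikit-learn and matplotlib is given below, where `features` (one row per sample) and `labels` are the layer outputs and CIFAR-10 class labels (hypothetical variable names, not the paper's code):

```python
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_tsne(features, labels):
    emb = TSNE(n_components=2, init="pca", random_state=0).fit_transform(features)
    plt.scatter(emb[:, 0], emb[:, 1], c=labels, s=2, cmap="tab10")
    plt.colorbar(label="class")
    plt.show()
```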
Figure 7: t-SNE results on the CIFAR-10 dataset for the train and test sets, comparing random weights with weights trained by SPHeRe (classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck).

J Experiments with Different Activation Functions

To validate the impact of the choice of activation function on the performance of our model, we conducted experiments with several common activation functions: ReLU, Leaky-ReLU, tanh, sigmoid, and the binary step function. The results are summarized in Table 12. The performance achieved with different activation functions varies significantly, indicating that our approach is still influenced by the choice of activation function. The training set accuracies of ReLU, Leaky-ReLU, and tanh are above 95%, indicating that they are all good choices. Among them, Leaky-ReLU performs best, possibly because it retains the most pre-activation information while still introducing nonlinearity.

Table 12: Comparison of different activation functions.
               | ReLU | Leaky-ReLU | tanh | sigmoid | binary
Train accuracy | 96.9 | 97.7       | 96.6 | 74.3    | 83.9
Test accuracy  | 79.9 | 81.2       | 78.5 | 72.7    | 71.2

K KNN Experiment

To better monitor the training process, we replace the final classifier with K-Nearest Neighbors (KNN). As shown in Figure 8, the clustering process converges quickly during training. However, after convergence, a slight decrease in accuracy is observed. We assume that this drop is mainly due to overfitting on the training dataset.

Figure 8: KNN validation accuracy.

L Spiking Neural Network Adaptation

To further demonstrate the biologically plausible nature of SPHeRe, we show that our method also works decently on spiking neural networks. The overall network structure is the same as in the CNN setting, with the activation function replaced by Leaky Integrate-and-Fire (LIF) dynamics (with 4 timesteps). We use a surrogate gradient, as spikes are non-differentiable. We choose the piecewise-exponential surrogate function, whose derivative can be written as g′(x) = (α/2) e^{−α|x|}. We verify our method and SoftHebb, comparing both against random initial weights. The results in Table 13 show that our method reaches decent accuracy, while SoftHebb training has a negative effect on the final result. This may be because SoftHebb directly implements the Hebbian rule operation on the gradient without considering the impact of different activation functions on the distribution of the results, an issue that is also mentioned in the appendix of the SoftHebb paper.

Table 13: Spiking neural network comparison.
Approach | Random weight | SoftHebb | SPHeRe
Accuracy | 60.8          | 55.8     | 75.28

M Experiment Compute Resources

All experiments were carried out on one NVIDIA RTX 3090 graphics card with 24 GB of VRAM. Detailed information is given below.

Table 14: Computational resources used for experiments. Training was conducted on an NVIDIA RTX 3090 GPU (24 GB VRAM) with a batch size of 128 and the number of dataloader workers set to 2.

Dataset       | Training Time | GPU VRAM Usage | CPU Workers | Total Epochs
CIFAR-10      | ∼30 min       | ∼2.5 GB        | 2           | 100
CIFAR-100     | ∼30 min       | ∼2.5 GB        | 2           | 100
Tiny-ImageNet | ∼3.5 hours    | ∼8.2 GB        | 2           | 100
• We provide a theoretical analysis (Lemma 4.2.1 and Appendix B) showing that optimizing the core loss LSPHeRe is equivalent to finding the optimal low-dimensional projection (principal subspace) of the input data. Through experiments on reconstruction and transfer learning, we further verify that SPHeRe extracts robust and generalizable image features.
• We demonstrate through extensive experiments that SPHeRe achieves state-of-the-art performance among Hebbian-inspired unsupervised methods on CIFAR-10/100 and Tiny-ImageNet. Furthermore, its strong performance in continual learning, transfer learning, and feature reconstruction validates its effectiveness as a greedy layer-wise pre-training strategy that learns robust and generalizable representations.

2 Related Work

Synaptic Plasticity Rules. Synaptic plasticity, the modulation of connection strength between neurons, is recognized as a fundamental biological substrate for learning, memory consolidation, and behavioral shaping [13]. The classical Hebbian principle, which posits that correlated activity strengthens synapses [5], laid the foundation for understanding activity-dependent modifications. However, neuroscience experiments reveal that the rules of synaptic plasticity in the brain are much more complex, including the timing-dependent effects of STDP [14], network-wide balancing through homeostatic plasticity [15], interactions between synapses through heterosynaptic plasticity [16], and even the control of plasticity itself, termed metaplasticity [17]. Recognizing the power of these biological adaptation strategies, researchers are increasingly working to integrate similar neuroplastic concepts into neural networks. Notable examples include investigations into the functional impact of homeostatic regulation in artificial systems [15], the synergistic use of Hebbian and non-Hebbian rules within recurrent architectures [18], and the development of unsupervised learning paradigms based on complementary Hebbian and anti-Hebbian updates [12], demonstrating a growing trend towards biologically inspired learning mechanisms in modern artificial intelligence. A significant advancement in connecting Hebbian learning to principled optimization was made by [8, 9], who demonstrated that optimizing a similarity-matching objective, which minimizes the discrepancy between input and output Gram matrices, can be mathematically decomposed into a set of purely local, online synaptic update rules with Hebbian and anti-Hebbian forms. Subsequent work extended this framework to deep networks [10], but these extensions often theoretically rely on feedback connections to propagate error or target signals for credit assignment across layers. Starting from Oja's rule, we obtain the same core mathematical objective, but we propose a fundamentally different architectural solution: a purely feedforward, block-based approach that applies this objective to deep networks and avoids the need for explicit feedback paths.

Unsupervised learning. Unsupervised learning (UL) aims to learn data representations from unlabeled data. Classical methods such as K-means [19] and Principal Component Analysis (PCA) [20] exploit the statistical structure of the data to perform dimensionality reduction or clustering. In the context of image classification, recent advances in UL for neural networks are typically categorized into two main approaches: self-supervised learning (SSL) and reconstruction methods [21].
SSL methods, including contrastive learning approaches such as SimCLR [22], BYOL [23], and SimSiam [24], rely on pretext tasks and often employ pseudo-labels. Reconstruction methods, on the other hand, train an encoder and a decoder simultaneously through an image reconstruction loss: the encoder learns to extract low-dimensional representations of the images, and the decoder learns to reconstruct the images from those representations. The autoencoder [25], VAE [26], and MAE [27] are representative of this type of method. However, an inherent limitation of these mainstream SSL and reconstruction methods is that they fundamentally rely on end-to-end gradient computation through backpropagation, a process that lacks biological plausibility. In this paper, we attempt to establish a biologically inspired new path for unsupervised learning.

3 Preliminaries

3.1 Hebbian Rule

The Hebbian learning rule satisfies the locality property: the update of a synaptic connection is determined solely by the activities of its pre- and post-synaptic neurons, without the influence of other neurons. The classical Hebbian rule is ∆wij = η · vivj, the product of the firing rates of the pre- and post-synaptic neurons. Now consider a simple linear layer: we use X ∈ R^{B×N} to represent the input, Y ∈ R^{B×M} to represent the output, and W ∈ R^{N×M} to represent the synaptic weights, with Y = X · W. The original Hebbian rule for W is then:

∆W = η · (X⊤Y). (1)

3.2 Oja's rule

The classical Hebbian rule has certain deficiencies, such as the potential for overtraining and instability due to consistently positive firing rates. Oja's rule is a well-known variant of the Hebbian rule that incorporates an additional decay term: the greater the post-synaptic activity, the more significant the decay. Under Oja's rule, the weight update equation is ∆wij = η(vi − wij vj)vj, and the matrix form for W is:

∆W = η(X⊤ · Y − W · Y⊤ · Y). (2)

To some extent, Oja's rule approximates principal component analysis (PCA) for dimensionality reduction and has the effect of extracting features. However, Oja's rule does not consider the complex nonlinear transformations and feature-width adjustments of deep learning, making it difficult to apply directly to multilayer networks.
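Both update rules are easy to state in matrix form. The following NumPy sketch (our illustration, not code from the SPHeRe repository) implements Eqns. 1 and 2 and shows the stabilizing effect of Oja's decay term:

```python
import numpy as np

def hebbian_update(X, W, lr=1e-3):
    """Classical Hebbian update (Eqn. 1): dW = lr * X^T Y, with Y = X W."""
    Y = X @ W
    return W + lr * (X.T @ Y)

def oja_update(X, W, lr=1e-4):
    """Oja's rule (Eqn. 2): dW = lr * (X^T Y - W Y^T Y)."""
    Y = X @ W
    return W + lr * (X.T @ Y - W @ (Y.T @ Y))

rng = np.random.default_rng(0)
X = rng.normal(size=(128, 32))   # batch B=128, input dim N=32
W = rng.normal(size=(32, 8))     # output dim M=8

for _ in range(500):
    W = oja_update(X, W)

# The decay term bounds the weights: W^T W approaches the identity,
# whereas the pure Hebbian update (hebbian_update) grows without bound.
print(np.round(W.T @ W, 2))
```

With a small learning rate, the Oja-trained W approximately converges to an orthonormal basis of a principal subspace of X, which is the behavior analyzed in Section 4 below.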
4 Methodology

This section introduces the Structural Projection Hebbian Representation (SPHeRe) method, an unsupervised learning approach inspired by Hebbian principles but adapted for modern deep learning architectures. SPHeRe aims to learn meaningful representations by preserving the structural relationships within the input data, utilizing a specific loss function, an orthogonality constraint, and a lightweight auxiliary block.

4.1 Analyzing Hebbian rules from the perspective of loss functions

In this section, we analyze how the Hebbian rule extracts information from the perspective of the loss function. Since the Hebbian rule is agnostic to the activation function, we start from the simplest linear layer, with input X ∈ R^{B×N}, output Y ∈ R^{B×M}, synaptic weights W ∈ R^{N×M}, and Y = X · W.

Original Hebbian rule. For a simple linear layer trained with the classical Hebbian rule, the equivalent loss function is Lhebb = −(1/2)∥Y∥²_F. Minimizing Lhebb increases the overall magnitude of the output matrix Y, which means that the values of the elements in Y will grow. This optimization may lead to a larger variance in the distribution of Y, making it easier to classify. However, without constraints, this could result in excessively large output values, potentially causing numerical instability. In contrast, the equivalent loss function of the anti-Hebbian rule is Lanti-hebb = (1/2)∥Y∥²_F, and optimizing Lanti-hebb is equivalent to driving all elements of the output Y as close to zero as possible; this is a form of L2 regularization on the output.

Oja's rule. Oja's rule improves upon the classical Hebbian rule by introducing a suppressive term that prevents the weight matrix from growing indefinitely. The equivalent loss function of Oja's rule is:

Loja = (1/4) Tr[(YY⊤ − XX⊤)(XX⊤)⁻¹(YY⊤ − XX⊤)], (3)

in which Tr denotes the trace of a matrix. Optimizing this loss function is roughly equivalent to minimizing the discrepancy between the Gram matrix (sample relationship matrix) KX = XX⊤ of the input X and the Gram matrix KY = YY⊤ of the output Y. When M ≥ N, the weight W tends to WW⊤ ≈ I_N; when M < N, minimizing the loss drives YY⊤ toward a low-rank approximation of XX⊤, i.e., toward an optimal low-dimensional projection of the input (Lemma 4.2.1; proved below).

A Proof of Lemma 4.2.1

Let P = WW⊤ ∈ R^{N×N}, so that the objective can be written as

L(P) = ∥XPX⊤ − XX⊤∥²_F, (7)

and let

X = UΣV⊤ (8)

be the singular value decomposition (SVD) of X, where U ∈ R^{B×B} and V ∈ R^{N×N} are orthogonal and Σ ∈ R^{B×N} carries the singular values σ1 ≥ · · · ≥ σr > 0 on its main diagonal, where r = rank(X). We can express the terms in Eqn. 7 using Eqn. 8:

• XX⊤ = (UΣV⊤)(UΣV⊤)⊤ = UΣV⊤VΣ⊤U⊤ = U(ΣΣ⊤)U⊤. Let Λ = ΣΣ⊤ ∈ R^{B×B}; Λ is a diagonal matrix with diagonal entries σ²1, ..., σ²r, 0, ..., 0.
• XPX⊤ = (UΣV⊤)P(UΣV⊤)⊤ = UΣV⊤PVΣ⊤U⊤.

Then Eqn. 7 becomes:

L(P) = ∥UΣV⊤PVΣ⊤U⊤ − UΛU⊤∥²_F = ∥ΣV⊤PVΣ⊤ − Λ∥²_F,

where the second equality is due to the unitary invariance of the Frobenius norm (i.e., ∥QAZ∥_F = ∥A∥_F for orthogonal matrices Q, Z). Let Q = V⊤PV. Since V is orthogonal and P is symmetric PSD with rank(P) ≤ M, Q is also symmetric PSD with rank(Q) = rank(P) ≤ M. The optimization problem transforms into finding a symmetric PSD matrix Q ∈ R^{N×N} with rank(Q) ≤ M that minimizes:

L(Q) = ∥ΣQΣ⊤ − Λ∥²_F. (9)

Let Σr = diag(σ1, ..., σr) and partition Σ and Q accordingly. Without loss of generality regarding the dimensions B, N, the product takes the block form:

ΣQΣ⊤ = [Σr 0; 0 0]_{B×N} · Q · [Σr 0; 0 0]⊤_{N×B} = [ΣrQ11Σr 0; 0 0]_{B×B}, (10)

where Q11 is the top-left r×r principal submatrix of Q, and Λ = ΣΣ⊤ = [Σ²r 0; 0 0]_{B×B}. Eqn. 9 therefore simplifies to:

L(Q) = ∥ΣrQ11Σr − Σ²r∥²_F. (11)

Let A11 = ΣrQ11Σr. Since σi > 0 for i = 1, ..., r, Σr is invertible; thus Q11 = Σr⁻¹A11Σr⁻¹, and A11 must be symmetric PSD with rank(A11) = rank(Q11) ≤ M. The problem is therefore equivalent to finding the best rank-M symmetric PSD approximation A11 to the diagonal matrix Σ²r = diag(σ²1, ..., σ²r) in the Frobenius norm. The optimal rank-k approximation of a diagonal matrix (in Frobenius norm) is obtained by keeping the k diagonal entries with the largest magnitudes and setting the others to zero. Since σ²1 ≥ · · · ≥ σ²r > 0 and A11 must be PSD, the best rank-M symmetric PSD approximation A∗11 to Σ²r is obtained by keeping the first M largest diagonal entries (assuming r ≥ M; otherwise keep all r):

A∗11 = diag(σ²1, ..., σ²M, 0, ..., 0)_{r×r}.

The corresponding optimal Q∗11 is:

Q∗11 = Σr⁻¹A∗11Σr⁻¹ = diag(1, ..., 1, 0, ..., 0)_{r×r},

where there are M ones. The optimal N×N matrix Q∗ can be constructed as:

Q∗ = [Q∗11 0; 0 0]_{N×N} = [IM 0; 0 0]_{N×N}.

Q∗ is symmetric PSD and rank(Q∗) = M. Then we have

P∗ = V [IM 0; 0 0]_{N×N} V⊤ = [VM, V_{N−M}] [IM 0; 0 0] [V⊤M; V⊤_{N−M}] = VM IM V⊤M = VM V⊤M.

P∗ is the projection matrix onto the subspace spanned by the first M right singular vectors of X. We need to find W∗ ∈ R^{N×M} such that W∗(W∗)⊤ = P∗ = VMV⊤M. A valid solution is W∗ = VM. (Other solutions W∗ = VMO for any M×M orthogonal matrix O exist, but they lead to the same YY⊤.) As a result, the optimal output Y∗ is:

Y∗ = XW∗ = XVM = UΣV⊤VM = UMΣM, (12)

where UM holds the first M columns of U and ΣM = diag(σ1, ..., σM). So the optimal output Y∗ = XVM gives the coordinates of the rows of X projected onto this principal subspace. Furthermore,

Y∗(Y∗)⊤ = (UMΣM)(UMΣM)⊤ = Σ_{i=1}^{M} σ²i uiu⊤i,

which is the best rank-M approximation of XX⊤ = UΛU⊤ = Σ_{i=1}^{r} σ²i uiu⊤i in the Frobenius norm, and the minimal loss is Lmin = Σ_{i=M+1}^{N} σ⁴i.
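The closed-form optimum makes the lemma easy to check numerically. Below is a small NumPy verification (our illustration, not the authors' code): the loss at Y∗ = XVM matches the predicted minimum Σ_{i>M} σ⁴i, and random projections do worse.

```python
import numpy as np

rng = np.random.default_rng(0)
B, N, M = 64, 16, 4
X = rng.normal(size=(B, N))

# Optimum from the lemma: W* = V_M, the top-M right singular vectors of X.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
W_star = Vt[:M].T
Y_star = X @ W_star

def loss(Y, X):
    """Similarity-matching loss ||Y Y^T - X X^T||_F^2."""
    return np.linalg.norm(Y @ Y.T - X @ X.T, ord="fro") ** 2

# Predicted minimum: sum of fourth powers of the discarded singular values.
print(loss(Y_star, X), (s[M:] ** 4).sum())   # the two values agree
print(loss(X @ rng.normal(size=(N, M)), X))  # a random W does much worse
```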
B SPHeRe Loss in the Nonlinear Condition

In practice, instead of using the linear projection directly, we apply a nonlinear transformation f to XW, so the loss function becomes:

L = ∥f(XW; θf) f(XW; θf)⊤ − XX⊤∥²_F, (13)

where θf are the parameters of a universal approximator f (a neural network). When Z = f(XW; θf) ∈ R^{B×M′} (M′ < N), Lemma 4.2.1 gives the optimal Z∗ = XVM′ that minimizes the loss L. Since f is a universal approximator, there theoretically exists a set of parameters θ∗f and W∗ such that f(XW∗; θ∗f) = XVM′. Therefore, during training the output Z of the auxiliary branch gradually approaches the dimensionality-reduced projection XVM′ of X, and the output Y of the main branch gradually summarizes and extracts the features of the input X through this process.

To verify this viewpoint, we conducted the corresponding experiments. We trained two different network branches: 1) a linear branch, where f is a linear mapping with parameter W; 2) a nonlinear branch, where f is a 3-layer neural network. Both branches receive exactly the same input data X, and we train them through Eqn. 13. During training, we record the output features at different epochs and use centered kernel alignment (CKA) similarity to quantify and compare the similarity between the output features of the two branches (Fig. 3A). The CKA similarity curve in Fig. 3B shows that, after only 20 training epochs, the CKA similarity exceeded 0.90 and stabilized at a high level as training progressed, for all datasets. This indicates that, regardless of whether an explicit nonlinear transformation f is used, the SPHeRe loss drives the network to learn very similar feature representations. Second, to further explore the structure of the feature space, we performed a singular value decomposition (SVD) on the feature outputs of the two branches. We focus on the 36 main components (that is, right singular vectors) and calculate the similarity between the singular vectors obtained by linear SPHeRe optimization (linear f) and those obtained by nonlinear SPHeRe optimization (nonlinear f). We visualize these similarities as a 36 × 36 similarity matrix (Fig. 3C). The diagonal region of the similarity matrix shows very high similarity, especially for the principal components with lower indices (the first 20). This indicates that the most important principal directions (those explaining the most variance) in the feature spaces learned by the two optimization methods are highly aligned and consistent. Principal components with higher indices (the last 10) do not strictly follow the maximum similarity on the diagonal, but most singular vectors still have closely matching counterparts.
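For reference, the CKA similarity used above can be computed in a few lines. The paper does not state which CKA variant it uses, so this sketch assumes the common linear CKA:

```python
import numpy as np

def linear_cka(F1, F2):
    """Linear CKA between two feature matrices of shape (samples, features)."""
    F1 = F1 - F1.mean(axis=0, keepdims=True)   # center each feature column
    F2 = F2 - F2.mean(axis=0, keepdims=True)
    hsic = np.linalg.norm(F2.T @ F1, ord="fro") ** 2
    norm1 = np.linalg.norm(F1.T @ F1, ord="fro")
    norm2 = np.linalg.norm(F2.T @ F2, ord="fro")
    return hsic / (norm1 * norm2)

# Identical features give similarity 1; independent ones give values near 0.
F = np.random.default_rng(0).normal(size=(512, 36))
print(linear_cka(F, F))  # 1.0
```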
Figure 3: (A) The experiment's network architecture; (B) CKA similarity comparison of the linear and nonlinear branches; (C) SVD component correlation between the linear and nonlinear branches.

C Ablation Study on the Auxiliary Block Architecture

To further evaluate the function of the auxiliary block, we conduct an ablation study on it. We systematically varied two key components of this design:
1. Depth: we changed the number of convolutional layers within φ (0, 1, or 2 layers).
2. Projection dimension: we changed the output dimension of the final FC projection layer (128, 256, or 512).
The results of this ablation study, measured by test accuracy on CIFAR-10, are summarized in Table 5 and Table 6.

Table 5: Ablation on the number of convolutional layers in φ
Number of Conv Layers   CIFAR-10 Test Accuracy (%)
0                       78.65
1 (Default)             81.11
2                       79.93

Table 6: Ablation on the projection dimension of the FC layer in φ
Projection Dimension    CIFAR-10 Test Accuracy (%)
128                     80.50
256 (Default)           81.11
512                     81.18

D Comparison of Transfer Learning Performance

To provide a strong baseline for comparison, we conducted identical experiments using features from a standard backpropagation (BP) trained model and from the current state-of-the-art Hebbian method, SoftHebb [12].

Table 7: Cross-dataset transfer learning performance comparison. Non-Transfer refers to the accuracy achieved by each method when trained and tested on the target dataset (results from Table 1). Gap is the difference between the Transfer Learning (Test) and Non-Transfer accuracies.
Method          Transfer Direction           Train (%)   Test (%)   Non-Transfer (%)
BP              Tiny-ImageNet → CIFAR-10     100.0       81.7       88.7
BP              CIFAR-10 → Tiny-ImageNet     99.9        39.2       45.3
SoftHebb        Tiny-ImageNet → CIFAR-10     78.0        74.84      78.86
SoftHebb        CIFAR-10 → Tiny-ImageNet     60.0        33.26      34.12
SPHeRe (Ours)   Tiny-ImageNet → CIFAR-10     97.3        80.03      81.11
SPHeRe (Ours)   CIFAR-10 → Tiny-ImageNet     99.9        37.7       40.33

E Comparison with Backpropagation-Based Unsupervised/Self-Supervised Learning

To further contextualize the performance of SPHeRe within the broader landscape of unsupervised learning, we compare it against a modern self-supervised learning (SSL) method, SimCLR [22]. This comparison is insightful yet requires careful interpretation due to fundamental differences in learning philosophy and optimal network architecture. Modern SSL methods like SimCLR leverage strong, explicit pseudo-supervisory signals derived from sophisticated data augmentations to learn invariant and discriminative features. In contrast, SPHeRe operates on a different principle: it uses a simple, unsupervised objective (LSPHeRe) that aims to preserve the intrinsic structural information of the input without relying on such explicit invariance-inducing signals. An empirical investigation was conducted on CIFAR-10 using an official SimCLR implementation. Both methods were evaluated under their own optimal hyperparameter settings. The results are summarized in Table 8.
Table 8: Comparison between SPHeRe and SimCLR under different training configurations on CIFAR-10. Best Settings refers to the optimal hyperparameters for each respective method. The model structure follows the SPHeRe optimal setting.
Method   Configuration          Train Acc. (%)   Test Acc. (%)
SimCLR   SimCLR Best Settings   87.83            71.94
SPHeRe   SimCLR Best Settings   75.03            70.61
SimCLR   SPHeRe Best Settings   56.36            49.01
SPHeRe   SPHeRe Best Settings   97.91            81.13

F Application to Deeper Architectures

A key question for any unsupervised learning method is its scalability to modern, deep architectures. As noted in the conclusion, our initial observations indicated that the performance gains of purely local SPHeRe learning diminish as network depth increases, due to the accumulation of representation noise across layers. To investigate this quantitatively and to explore a practical application, we designed an experiment in which a SPHeRe-trained module serves as a biologically inspired "stem" for a standard deep ResNet. To evaluate the potential of SPHeRe as an initial feature extractor, we conducted a series of experiments integrating it with a standard ResNet-18 architecture on the CIFAR-10 dataset. We first established a baseline by training a standard ResNet-18 model end-to-end with backpropagation. We then pre-trained the first convolutional layer of the ResNet using the SPHeRe objective. This pre-trained stem was integrated into the ResNet-18 in two ways: 1) by freezing the SPHeRe-trained stem parameters and training only the subsequent ResNet layers, and 2) by using the SPHeRe-trained stem as an initialization and fine-tuning the entire network. The results of these experiments are summarized below.

Table 9: Performance of ResNet-18 on CIFAR-10 with different training strategies for the initial stem layers.
Model & Training Strategy                 Test Accuracy (%)
ResNet-18 (Standard Backpropagation)      79.46
SPHeRe Stem (frozen) + ResNet-18          78.78
SPHeRe Stem (fine-tuned) + ResNet-18      79.93

G Comparison to Related Unsupervised Methods

This section provides a comparative overview of SPHeRe and several related unsupervised learning and matrix factorization methods. Table 10 summarizes the typical optimization objectives and primary goals of SPHeRe along with typical unsupervised methods such as PCA, Kernel PCA, MDS, SymNMF, and Laplacian eigenmaps. This comparison helps to situate SPHeRe within the broader landscape of dimensionality reduction and structure-preservation algorithms, highlighting the similarities and distinctions in their underlying mathematical formulations and objectives.

H Supplementary Information on the Reconstruction Autoencoder

Fig. 4 shows the framework of the image reconstruction experiments. The encoder shares the same structure as in the unsupervised learning setting. The decoder has 3 upsampling convolution layers ({512, 256, 3} channels) with kernel size 3, stride 2, padding 1 and output padding 1. To better visualize the reconstruction results, the decoder is trained on the specific image for 100 iterations. We use the pixel-wise mean squared error to quantify the difference between the reconstructed image and the original image. To assess generalization, we also tested on the whole CIFAR-10 test set, where our method again shows superiority. In this experiment, we trained the decoder on the training set for 10 epochs, then tested on the test set. The result is shown in Table 11.

Figure 4: Auto-Encoder framework.
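A minimal PyTorch sketch of the decoder described above follows. The encoder's output width and the nonlinearities between layers are not stated in the text, so in_ch=1024 and the ReLUs are assumptions:

```python
import torch
import torch.nn as nn

# Decoder with 3 upsampling conv layers ({512, 256, 3} channels), kernel size 3,
# stride 2, padding 1, output padding 1, per Appendix H. in_ch and the ReLU
# placements are our assumptions.
decoder = nn.Sequential(
    nn.ConvTranspose2d(1024, 512, kernel_size=3, stride=2, padding=1, output_padding=1),
    nn.ReLU(),
    nn.ConvTranspose2d(512, 256, kernel_size=3, stride=2, padding=1, output_padding=1),
    nn.ReLU(),
    nn.ConvTranspose2d(256, 3, kernel_size=3, stride=2, padding=1, output_padding=1),
)

feat = torch.randn(1, 1024, 4, 4)   # frozen-encoder feature map (assumed shape)
out = decoder(feat)                 # each layer doubles spatial size: 4 -> 8 -> 16 -> 32
print(out.shape)                    # torch.Size([1, 3, 32, 32]), CIFAR-sized

# Pixel-wise mean squared error, as used to score reconstructions:
target = torch.rand(1, 3, 32, 32)
loss = nn.functional.mse_loss(out, target)
```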
Table 10: Comparison of SPHeRe with related unsupervised learning / matrix factorization methods.

SPHeRe
  Objective: min_{W,θ} ∥KZ − KX∥²_F, with Z = φ(Y, θ), Y = f(X, W), KX = XX⊤, KZ = ZZ⊤.
  Primary goal: preserve the input data structure (pairwise relationships defined by KX) in the output representation Z; find an optimal low-dimensional projection.

PCA
  Objective: max_W Tr(W⊤X⊤XW) s.t. W⊤W = I.
  Primary goal: find orthogonal directions (principal components) capturing maximum variance; dimensionality reduction preserving global covariance structure.

Kernel PCA
  Objective: max_V Tr(V⊤CφV) s.t. V⊤V = Ip, with Cφ = (1/N) Φ̃⊤Φ̃ and φ̃(xi) = φ(xi) − (1/N) Σ_{j=1}^{N} φ(xj).
  Primary goal: find principal components in a nonlinear feature space defined by the kernel k; preserve kernel-based structure.

MDS
  Objective: min_Y Σ_{i,j} wij (dij(Y) − δij)², where dij(Y) is the distance in the embedding Y and δij is the original distance.
  Primary goal: find a low-dimensional embedding Y that preserves the given pairwise distances or dissimilarities δij.

SymNMF
  Objective: min_{H≥0} ∥A − HH⊤∥²_F, where A is a given symmetric matrix and H must be non-negative.
  Primary goal: find a non-negative low-rank matrix H such that HH⊤ approximates A; often used for clustering.

Laplacian Eigenmaps
  Objective: min_Y Tr(Y⊤LY) s.t. Y⊤DY = I (equivalently min_Y Σ_{i,j} wij ∥yi − yj∥²), with L = D − W.
  Primary goal: preserve local neighborhood structure in a low-dimensional embedding; dimensionality reduction for manifold data.

Table 11: Reconstruction result on CIFAR-10 (test-set loss)
       SoftHebb (100 epoch)   SoftHebb   BP     SPHeRe
Loss   1.81                   2.39       1.68   1.62

We provide more results of the image reconstruction tasks in Fig. 5 to support the experimental results and the analysis in the main text. Note that the reconstruction performance varies across different random initializations. To account for this variability, we repeated the experiment with 10 different random seeds and report the mean reconstruction loss along with the standard error of the mean (SEM) in Fig. 6.

Figure 5: Reconstruction experiment results. The loss values are scaled by a factor of 10³.

Figure 6: Reconstruction loss across 10 random initializations (mean ± SEM). The y-axis is log-scaled, and the loss values are scaled by a factor of 10³.

I t-SNE Visualization

To qualitatively assess the feature representations learned by the three convolutional layers trained with our method, we visualized the output features for the CIFAR-10 dataset using t-SNE. Fig. 7 presents the t-SNE embeddings for both the training and test set samples, comparing the feature space before training (random weights) and after training. As expected, distinct clusters begin to emerge in the feature space after training, for both the training and test sets. This demonstrates that the proposed unsupervised Hebbian learning process successfully organizes the feature representations based on the underlying structure of the CIFAR-10 data, and that these learned structures generalize well from the training set to the unseen test set. Consistent with observations made for SoftHebb [12], our SPHeRe approach shows a clear propensity to separate classes characterized by relatively well-defined shapes and edges, such as airplane, automobile, ship and truck. However, it is more challenging to separate
classes representing animals that often feature fur, feathers, and more variable textures, such as birds, cats, deer, and horses. These classes exhibit considerable overlap in the t-SNE embedding, suggesting that the features captured by SPHeRe, while effective for edge-based discrimination, may be less sensitive to finer textural details or complex shape variations.

Figure 7: The t-SNE results on the CIFAR-10 dataset.

J Experiments with Different Activation Functions

To validate the impact of the choice of activation function on the performance of our model, we conducted experiments with several common activation functions: ReLU, Leaky-ReLU, tanh, sigmoid, and the binary step function. The results are summarized in Table 12. The performance achieved by different activation functions varies significantly, indicating that our approach is still influenced by the choice of activation function. The training set accuracy of ReLU, Leaky-ReLU, and tanh is above 95%, indicating that they are all good choices for the activation function. Among them, Leaky-ReLU performs best, possibly because it retains the most information before activation while introducing nonlinearity.

Table 12: Comparison of different activation functions
                 ReLU   Leaky-ReLU   tanh   sigmoid   binary
Train accuracy   96.9   97.7         96.6   74.3      83.9
Test accuracy    79.9   81.2         78.5   72.7      71.2

K KNN Experiment

To better monitor the training process, we replace the final classifier with K-Nearest Neighbors (KNN). As shown in Figure 8, the clustering process converges quickly during training. However, after convergence, a slight decrease in accuracy is observed. We attribute this drop mainly to overfitting on the training dataset.

Figure 8: KNN validation accuracy.

L Spiking Neural Network Adaptation

To further demonstrate the biologically plausible nature of SPHeRe, we show that our method works decently on spiking neural networks. The overall network structure is the same as in the CNN setting, with the activation function replaced by Leaky-Integrate-and-Fire (LIF) dynamics (with 4 timesteps). We use a surrogate gradient, as spikes are non-differentiable. We choose the piecewise-exponential surrogate function, whose derivative is g′(x) = (α/2) e^{−α|x|}; a minimal sketch of this surrogate appears after Table 14. We verify our method and SoftHebb, comparing against random initial weights. The results in Table 13 show that our method reaches decent accuracy, while SoftHebb training has a negative effect on the final result. This may be because SoftHebb directly applies the Hebbian rule operation on the gradient, without considering the impact of different activation functions on the result distribution, which is also mentioned in the appendix of the SoftHebb paper.

Table 13: Spiking neural network comparison
Approach   Random weight   SoftHebb   SPHeRe
Accuracy   60.8            55.8       75.28

M Experiment Compute Resources

All experiments were carried out on one NVIDIA RTX 3090 graphics card with 24 GB VRAM. The detailed information is below:

Table 14: Computational resources used for experiments. Training was conducted on an NVIDIA RTX 3090 GPU (24 GB VRAM) with a batch size of 128 and dataloader workers set to 2.
Dataset         Training Time   GPU VRAM Usage   CPU Workers   Total Epochs
CIFAR-10        ∼30 min         ∼2.5 GB          2             100
CIFAR-100       ∼30 min         ∼2.5 GB          2             100
Tiny-ImageNet   ∼3.5 hours      ∼8.2 GB          2             100
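As referenced in Appendix L, here is a minimal PyTorch sketch of the piecewise-exponential surrogate gradient g′(x) = (α/2) e^{−α|x|}; the value of α is our assumption, since the paper does not report it:

```python
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike in the forward pass; piecewise-exponential
    surrogate g'(x) = (alpha/2) * exp(-alpha * |x|) in the backward pass."""
    alpha = 2.0  # sharpness of the surrogate (assumed, not reported in the paper)

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return (x > 0).float()  # non-differentiable spike

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        sg = (SpikeFn.alpha / 2) * torch.exp(-SpikeFn.alpha * x.abs())
        return grad_out * sg

spike = SpikeFn.apply
v = torch.linspace(-1.0, 1.0, 5, requires_grad=True)  # membrane potentials
spike(v).sum().backward()
print(v.grad)  # surrogate gradients instead of the zero/undefined true gradient
```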
arXiv:2510.14803v1 [cs.CV] 16 Oct 2025

Scaling Artificial Intelligence for Multi-Tumor Early Detection with More Reports, Fewer Masks

Pedro R. A. S. Bassi1,2,3, Xinze Zhou1†, Wenxuan Li1†, Szymon Płotka4†, Jieneng Chen1, Qi Chen1, Zheren Zhu5,14, Jakub Prządo6, Ibrahim E. Hamaci7,8, Sezgin Er9, Yuhan Wang10, Ashwin Kumar11, Bjoern Menze9, Jarosław B. Ćwikła12, Yuyin Zhou10, Akshay S. Chaudhari11, Curtis P. Langlotz11, Sergio Decherchi3, Andrea Cavalli2,3,13, Kang Wang14, Yang Yang14, Alan L. Yuille1, Zongwei Zhou1*

1Johns Hopkins University, Baltimore, MD, USA. 2University of Bologna, Bologna, Italy. 3Istituto Italiano di Tecnologia, Genova, Italy. 4Jagiellonian University, Kraków, Poland. 5University of California, Berkeley, CA, USA. 6Warmian-Masurian Cancer Center, Olsztyn, Poland. 7University of Zurich, Zurich, Switzerland. 8ETH AI Center, Zurich, Switzerland. 9Istanbul Medipol University, Istanbul, Turkey. 10University of California, Santa Cruz, CA, USA. 11Stanford University, Stanford, CA, USA. 12University of Warmia and Mazury, Olsztyn, Poland. 13École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland. 14University of California, San Francisco, CA, USA.

*Corresponding author(s). E-mail(s): zzhou82@jh.edu; †These authors contributed equally to this work.

Abstract

Early tumor detection saves lives. Each year, more than 300 million computed tomography (CT) scans are performed worldwide, offering a vast opportunity for effective cancer screening. However, detecting small or early-stage tumors on these CT scans remains challenging, even for experts. Artificial intelligence (AI) models can assist by highlighting suspicious regions, but training such models typically requires extensive tumor masks: detailed, voxel-wise outlines of tumors manually drawn by radiologists. Drawing these masks is costly, requiring years of effort and millions of dollars. In contrast, nearly every CT scan in clinical practice is already accompanied by medical reports describing the tumor's size, number, appearance, and sometimes pathology results: information that is rich, abundant, and often underutilized for AI training. We introduce R-Super, which trains AI to segment tumors that match their descriptions in medical reports. This approach scales AI training with large collections of readily available medical reports, substantially reducing the need for manually drawn tumor masks. When trained on 101,654 reports, AI models achieved performance comparable to those trained on 723 masks. Combining reports and masks further improved sensitivity by +13% and specificity by +8%, surpassing radiologists in detecting five of the seven tumor types. Notably, R-Super enabled segmentation of tumors in the spleen, gallbladder, prostate, bladder, uterus, and esophagus, for which no public masks or AI models previously existed. This study challenges the long-held belief that large-scale, labor-intensive tumor mask creation is indispensable, establishing a scalable and accessible path toward early detection across diverse tumor types. We plan to release our trained models, code, and dataset at https://github.com/MrGiovanni/R-Super.

1 Main

Cancer is a leading cause of death worldwide [1, 2]. Early detection is crucial. Five-year survival rates often exceed 90% when tumors are detected at an early stage but can drop below 20% once the disease becomes advanced or metastatic [3]. There is no effective, widely adopted screening for these diseases, even among high-risk populations.
Computed tomography (CT) is already part of routine care, with more than 300 million CT scans performed globally each year and 85 million in the United States alone [4]. These scans represent a vast, untapped opportunity for detecting tumors sooner. However, detecting tumors at an early stage from CT scans is extremely difficult, and even experienced radiologists can miss them. For instance, in a study of CT scans taken before pancreatic cancer diagnosis, about 50% of the tumors were present but overlooked by radiologists [5].

Artificial intelligence (AI) has the potential to help radiologists detect early tumors [6–8]. AI offers several advantages: AI does not get tired or suffer from attentional effects; AI can see CT scans in 3D, while radiologists analyze them slice by slice; AI can train on large datasets (e.g., 101,654 scans in this study), surpassing the number of CT scans that a radiologist analyzes annually (est. 5,000 scans [9]); and AI can see disease signs usually invisible to radiologists, such as signs of pancreatic tumors on non-contrast CT [7]. State-of-the-art AI models for tumor detection are typically formulated as semantic segmentation [10–12], a type of AI that localizes tumors and outlines them on the CT scan, accurately indicating tumor locations and boundaries and allowing radiologists to easily verify the AI's findings.

A major challenge in developing these segmentation models is the need for tumor masks: precise outlines drawn by radiologists. Creating accurate masks is labor-intensive, costly, and not part of the standard clinical workflow. Drawing a mask for each tumor can take up to 30 minutes, and one study required 8 radiologists, five years, and millions of dollars to produce 3,125 pancreatic tumor masks [6]. Public CT datasets contain tumor masks only for a few organs, such as the kidney, liver, and pancreas, with a very small number of annotated tumor scans [13–16]. For many clinically important organs, such as the spleen, gallbladder, prostate, bladder, uterus, and esophagus, no public tumor masks exist, creating a significant barrier to developing multi-tumor segmentation models.

Unlike drawing tumor masks, radiologists write medical reports as part of their standard clinical workflow. These reports describe tumor characteristics observed in the CT scans, including the number, approximate size, location within organs, and attenuation (whether the tumor appears bright or dark), and sometimes include pathology results from biopsy or surgery. As a result, paired CT-Report datasets are naturally much larger than CT-Mask datasets (Figure 1). Public datasets [17, 18] already provide around 25,000 CT-Report pairs, and a single hospital can easily accumulate over 500,000 CT-Report pairs (Section 4.1). In contrast, public datasets rarely exceed 1,000 CT-Mask pairs [13–16]. This striking difference raises an important question: can medical reports supplement, or even replace, tumor masks in training AI for tumor segmentation?

Recent advances in vision-language models (VLMs) have shown capability in generating descriptive captions [17–21]. For example, models like Google's MedGemma [19] and Stanford's Merlin [18] can generate medical reports from CT scans. However, these models are not designed for segmentation, which requires precise tumor localization and boundary delineation.
As a result, they frequently produce errors such as missing existing tumors, detecting non-existent ones, or failing to describe small and subtle lesions accurately: the very cases that are most clinically important [10, 22]. A major limitation lies in their training paradigm: current VLMs rely on contrastive language-image pre-training (CLIP) [23], which was designed to learn from generic image-text pairs like social media captions, not from the rich, structured information in radiology reports. Our approach addresses this limitation by explicitly modeling the tumor's location as a hidden variable, similar in spirit to the Expectation-Maximization (EM) framework [24]. The precise and descriptive nature of medical reports thus becomes a powerful supervisory signal for training.

In this paper, we introduce R-Super (Report Supervision), which trains AI to segment tumors using radiology reports, and pathology reports when available (Figure 1). We then examine how much these reports can reduce the need for manual tumor masks. R-Super enables tumor segmentation not only in organs with many available masks but also in those with few or no tumor masks. Using R-Super, we trained the first open AI model capable of segmenting tumors across seven organs1. The key innovation of R-Super lies in report-supervised loss functions that directly teach the AI to segment tumors consistent with the tumor descriptions in reports, in terms of tumor count, size, location2, and attenuation. Conceptually, this involves learning from incomplete data: reports describe many tumor characteristics but not the exact tumor outline, so R-Super teaches the segmentation model to estimate outlines consistent with the available tumor characteristics in reports. By using reports to guide tumor segmentation, unlike prior approaches (e.g., VLMs), R-Super learns efficiently and achieves superior tumor detection performance (Table 2). R-Super extracts the tumor characteristics from reports using large language models (LLMs) with radiologist-designed prompts, and stores them before training. Importantly, reports are used only in training, not in inference. R-Super can train any segmentation model architecture, and it can learn from just CT-Report pairs. To further improve accuracy, R-Super can also learn from CT-Mask pairs together with the CT-Report pairs. Thus, R-Super can segment tumors without public masks, and it can further scale the largest CT-Mask datasets (e.g., PanTS [16]) with many CT-Report pairs.

1These include six tumor types with no public tumor mask in CT: spleen, gallbladder, prostate, bladder, uterus, and esophagus. We also include adrenal tumors, which have only 53 public tumor masks [25]. No public AI model can segment these seven tumor types.

To train R-Super to segment multiple tumor types lacking public tumor masks3, we created the largest CT-Report training dataset to date: 101,654 CT-Report pairs (Section 4.1 and Table 1). These CT scans were performed at the University of California San Francisco (UCSF) hospital and affiliated institutions during the last 28 years. Our dataset also includes the public Merlin dataset [18] (25,494 CTs, Stanford Hospital, from 2012 to 2018). To the best of our knowledge, no previous study used 100,000+ CT-Report pairs to train AI. First, we used these 101,654 CT-Report pairs to train R-Super. Then, to further improve performance, we created tumor masks for our dataset, and trained R-Super on both the CT-Report and CT-Mask pairs.
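To make the LLM extraction step concrete, the record stored for each report might look like the following sketch; the class and field names are our illustration, not R-Super's released schema:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative schema for the tumor characteristics the LLM extracts from
# each report before training (count, size, location, attenuation).
@dataclass
class TumorRecord:
    organ: str                    # e.g., "spleen"
    count: int                    # number of tumors reported
    diameter_mm: Optional[float]  # reported diameter, if given
    location: Optional[str]       # organ sub-segment and/or slice, if given
    attenuation: Optional[str]    # "hypo", "hyper", or "iso", if given

# "3 cm hypodense lesion in the spleen" (Figure 1) would be stored roughly as:
record = TumorRecord(organ="spleen", count=1, diameter_mm=30.0,
                     location=None, attenuation="hypo")
```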
To create these tumor masks efficiently, we introduced a report-guided active learning cycle (Section 4.1): (I) R-Super automatically created tumor masks for our dataset; (II) we identified the most incorrect tumor masks by comparing them to reports; (III) these incorrect tumor masks were revised by 31 radiologists; (IV) we trained R-Super using the revised tumor masks and all CT-Report pairs. We repeated this cycle until reaching 723 radiologist-corrected tumor masks. The radiologists reported that our report-guided active learning cycle reduced the average time to create each mask from about 30 to five minutes. Learning jointly from CT-Report and CT-Mask pairs, R-Super achieves substantially higher performance than standard segmentation training with just CT-Mask pairs, both when few masks are available (first active learning cycles) and when many are available (final cycles).

We evaluated R-Super through internal and external validation on three datasets. Internal validation used unseen patients from the hospitals in our training dataset, while external validation tested R-Super on a hospital never seen during training. R-Super accurately detected tumors in seven organs lacking public tumor masks. In five of these tumor types, R-Super surpassed radiologist tumor detection performances reported in the literature (Table 2)4.

2A tumor location in a report is provided as the organ, organ sub-segment, and/or slice where the tumor is. A slice is a plane localizing the tumor in the 3D CT scan.
3Our dataset includes benign, primary (malignant) and metastatic tumors.
4We compare our results with radiologist tumor detection performance reported in the literature. Our AI was evaluated on a dataset containing both healthy patients and those with malignant tumors, and we searched the literature for studies that also assessed radiologists on datasets with healthy and malignant tumor patients. However, this comparison remains limited, as the AI and radiologists were tested on different datasets, including distinct patient populations, CT scanners, and tumor characteristics (see Appendix B for an analysis of the selected studies and the limitations of our comparison between radiologists and AI). Consequently, these comparisons offer a qualitative sense of the detection difficulty across tumor types. A more rigorous comparison would require a reader study in which radiologists and AI are assessed on the same dataset.

In tumor detection, R-Super consistently outperformed VLMs such as Merlin [18] (Stanford University) and MedGemma [19] (Google) by double-digit margins (Table 2). Trained with 101,654 CT-Report pairs and 723 CT-Mask pairs, R-Super exceeded standard segmentation (trained only on the 723 CT-Mask pairs our radiologists created) by margins of +13%/+8% in sensitivity/specificity (Figure 3). Importantly, these margins were also large for detecting small tumors (< 2 cm): +7%/+5.3%. R-Super improved performance both when few (e.g., 52, Section 2.2) and when many (e.g., 900, Section 2.5) tumor masks were available for training. Remarkably, when trained only with CT-Report pairs (620 to 10,980 per tumor type), R-Super surpassed segmentation trained with few masks (52 to 185 per tumor type, Section 2.4). This shows that large-scale weak supervision (reports) can outperform small-scale strong supervision (tumor masks) in tumor segmentation, echoing strong trends in computer vision [26] and natural language processing [21].

Our main contributions are:
1. R-Super: a new AI training method that enforces consistency between tumors segmented by AI and report descriptions of these tumors (tumor number, size, location, and attenuation).
It can train any segmentation architecture using CT-Report pairs alone or CT-Report & CT-Mask pairs. R-Super has the first loss functions that directly supervise CT tumor segmentation using reports (Figure 1).
2. Early tumor detection: by learning from 101,654 readily available radiology reports, R-Super improves the detection of small tumors by double-digit margins (Table 3), showing potential to enhance early cancer detection, which is critical for survival.
3. Enabling multi-tumor segmentation and open science: we release the first public segmentation model capable of segmenting seven tumor types lacking public segmentation masks in CT. It surpasses reported radiologist performance in four tumor types (Table 2). We also release CTs, tumor masks and reports for these tumors, giving the community methods and data to segment understudied tumor types and advance opportunistic, multi-organ tumor detection in real-world CT scans.

This paper builds on our prior conference paper [27], providing several improvements: (1) the R-Super loss functions now also use the tumor slice and attenuation information from reports, exploiting all tumor characteristics in most reports; (2) we now segment seven tumor types without public masks, whereas previously we segmented only pancreatic and kidney tumors, which exist in public CT-Mask datasets; and (3) we scaled our training dataset from 6,718 to 101,654 CT-Report pairs, and 31 radiologists created 723 tumor masks for it. This study is about tumor segmentation, but as an addendum, we also trained our AI to produce CLIP embeddings, giving the community the first public AI trained on 100,000+ CTs to create such embeddings, which are used for tasks such as report generation [18].

Fig. 1: (a) CT-Report datasets are much larger than CT-Mask datasets. Our dataset has 117K CT-Report pairs, 98K with tumors. Merlin (public) [18] has 25K CT-Report pairs, 16K with tumors. In contrast, the largest CT-Mask datasets have 3K CT-Mask pairs with tumors. No tumor mask is available for many tumor types in CT. The figure also shows an example CT scan with a pancreatic tumor (PDAC), part of its report, and its tumor mask (red). (b) Overview of the R-Super training method. R-Super transforms reports into per-voxel supervision for tumor segmentation, through new loss functions. It can train on both CT-Mask pairs and CT-Report pairs. For CT-Mask pairs, R-Super uses the usual dice and cross-entropy segmentation losses. For CT-Report pairs, R-Super uses the new Volume Loss (Section 4.3.2), Ball Loss (Section 4.3.3) and Attenuation Loss (Section 4.3.4).
They optimize the segmentation output of the AI, enforcing consistency between the segmented tumors and the tumor characteristics in the report: tumor count, diameters, locations, estimated volumes and attenuation. This information is extracted from the reports by an LLM and stored before training. R-Super is applicable to any segmentation architecture with minimal extra computational cost (the zero-shot LLM runs once, before training). The figure shows a spleen tumor in a CT scan and its segmentation by R-Super (red).
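R-Super's actual loss definitions are given in Sections 4.3.2-4.3.4 and are not reproduced here. Purely to illustrate the idea of report-derived supervision, a volume-consistency term could be sketched as follows; the function, its signature, and the spherical volume estimate are our assumptions, not the paper's Volume Loss:

```python
import torch

def volume_consistency(logits, organ_mask, report_volume_mm3, voxel_mm3):
    """Penalize mismatch between the soft tumor volume the model predicts
    inside the reported organ and a volume estimated from the report.
    Illustrative sketch only; R-Super's Volume Loss is defined in Sec. 4.3.2."""
    probs = torch.sigmoid(logits) * organ_mask   # restrict to the reported organ
    pred_volume = probs.sum() * voxel_mm3        # expected tumor volume in mm^3
    return (pred_volume - report_volume_mm3).abs() / report_volume_mm3

# A 30 mm tumor reported in the spleen, approximated as a sphere:
report_volume = 4.0 / 3.0 * torch.pi * (30.0 / 2) ** 3
logits = torch.randn(1, 64, 64, 64, requires_grad=True)  # tumor-class logits
organ = torch.zeros(1, 64, 64, 64)
organ[0, 20:40, 20:40, 20:40] = 1.0                      # toy spleen mask
loss = volume_consistency(logits, organ, report_volume, voxel_mm3=1.0)
loss.backward()   # gradients flow only through voxels inside the organ
```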
Table 1 explains which datasets were used in which parts of the following sections. Report-based validation allows for large-scale test datasets, because it does not require ground-truth tumor masks. Moreover, it allows for comparisons between segmentation models and other types of AI, such as VLMs5. On the other hand, mask- based validation is more precise than report-based validation, using DSC and NSD to evaluate how well segmentation models outline tumors. However, it incurs smaller 5To evaluate VLMs, we ask them to write radiology reports, and we automatically analyze whether these reports indicate tumor presence or absence in each organ, following [10]. 7 sen. spec. f1 0 50 100 77698366 +14 91 707076 657366 +8 spleen tumor sen. spec. f1 67798067 +1 859081 +5 737861 +5 bladder tumor sen. spec. f1 40 738471 +11 849293 +8 648072 +16 gallbladder tumor sen. spec. f1 86808485 +4 9174 9489 +20 668480 +18 uterus tumor sen. spec. f1 0 50 100 44 798488 +5 74757867 +3 727673 +4 prostate tumor sen. spec. f1 28 707876 +8 76838385 616668 +5 esophagus tumor sen. spec. f1 41 648076 +16 85828583 +3 708278 +12 adrenal tumor sen. spec. f1 55738276 +9 83798582 +6 677771 +10 average Radiologists (results from literature) standard segmentation (29 to 183 masks / tumor type) R-Super (29 to 183 masks & 620 to 11K reports / tumor type) R-Super No Mask (0 masks & 620 to 11K reports / tumor type) Fig. 2: Trained on 101K CT-Report pairs, R-Super segments seven under- studied tumor types. For adrenal tumors, 53 public tumor masks exist [25]. For the other six tumor types, no mask exists. By learning from reports, R-Super can segment these tumors—becoming the first public AI that segments them in CT. R- Super surpasses radiologist tumor detection performance for five of the seven tumor types. Radiologist performance was extracted from the literature, see Appendix B for an analysis of the selected studies and limitations of the comparison. Training with 101K CT-Report pairs surpassed training with 723 masks, showing that large- scale weak supervision (many reports) can surpass small-scale strong supervision (few masks). Training with both the 101K CT-Report pairs and the 723 CT-Mask pairs provided +9%/+6%/+10% sensitivity/ specificity/F1-Score improvement over stan- dard segmentation (no report). Here, we train on UCSF and Merlin, and test on UCSF test—N=1,220 (see dataset descriptions in Table 1). Radiologist performances in tumor detection were acquired from several studies: test datasets, because it needs ground-truth tumor masks. Also, we can only calcu- late DSC and NSD for segmentation models, not for VLMs. Therefore, we use both mask-based and report-based validation. R-Super can train any segmentation architecture. We used MedFormer (a U-Net- based convolutional neural network and transformer hybrid [28]) as the segmentation architecture for R-Super and the standard segmentation model. MedFormer was cho- sen because of its strong performance—top 1 position in the Touchstone Segmentation Benchmark [29]. Public AI models use their original architectures: medgemma-4b-it for MedGemma, RadLlama-7B for Merlin, and nnU-Net [30] for ULS. 2.2 R-Super Detects seven Tumor Types w/o Public Mask R-Super segments tumors in the spleen, gallbladder, prostate, bladder, uterus, esoph- agus and adrenal glands. No public tumor masks for these organs exist, except for adrenal gland tumors, which have only 53 public masks [25]. 
Table 2 shows that 8 Table 2: With R-Super, reports enable tumor detection across seven types of tumors unavailable in public datasets. R-Super surpasses radiologist tumor detection performance for five of the seven tumor types. Radiologist performance was extracted from the literature, see Appendix B for an analysis of the selected stud- ies and limitations of the comparison. Results include R-Super trained with reports (101K) and masks (723), and R-Super trained with reports only. We also compare it to standard segmentation (trained with only masks) and to public AI models: ULS, a nnU-Net [30] segmentation model trained for universal lesion segmentation on lesions in unspecified organs; MedGemma, the flagship medical VLM from Google; and Mer- lin, the latest medical VLM from Stanford University. We test on 1,220 CT scans from UCSF (internal validation), using reports as ground truth (no masks available). See dataset details in Table 1. For DSC and NSD, see Section 2.5. bladder esophagus gallbladder uterus AI sen. spe. sen. spe. sen. spe. sen. spe. radiologists (literature) 67.0 n/a 28.0 76.0 40.0 n/a 86.0 91.0 Public Vision-Language Models Merlin [18] 1.9 99.6 0.0 100.0 1.4 98.5 0.0 100.0 MedGemma [19] 2.9 100.0 0.0 100.0 0.0 100.0 1.0 99.6 Public Universal Lesion Segmentation Models ULS [31] 57.7 72.5 28.6 92.4 23.6 96.6 39.3 85.9 standard segmentation 78.8 85.2 70.1 82.9 72.6 84.4 80.4 74.1 R-Super No Mask 66.7 80.9 76.2 85.3 71.1 92.8 85.2 88.8 R-Super 79.5 89.5 77.8 83.3 84.4 91.8 84.0 94.2 prostate adrenal spleen average AI sen. spe. sen. spe. sen. spe. sen. spe. radiologists (literature) 44.0 74.0 41.1 84.5 76.9 90.9 54.7 83.3 Public Vision-Language Models Merlin [18] 0.6 99.2 0.0 99.6 0.6 98.5 0.6 99.3 MedGemma [19] 0.0 100.0 1.6 99.6 7.2 96.0 1.8 99.3 Public Universal Lesion Segmentation Models ULS [31] 33.7 84.7 0.0 100.0 20.1 85.9 29.0 88.3 standard segmentation 79.1 75.2 63.8 81.7 69.1 70.2 73.4 79.1 R-Super No Mask 88.4 66.9 75.6 83.1 65.8 76.3 75.6 82.0 R-Super 83.5 77.5 79.5 85.2 82.9 69.6 81.7 84.4 R-Super, trained with 723 CT-Mask pairs & 101,654 CT-Report pairs, surpasses pub- lic VLMs such as Google’s MedGemma and Stanford’s Merlin by large margins. All VLMs struggled to find tumors, generating radiology reports of low tumor detection sensitivity. VLMs did not surpass a standard segmentation model (trained with only CT-Mask pairs), as seen in previous studies [10]. The VLM results here are worse than in [10], possibly for two reasons: the tumors we consider here are rarer than liver, kid- ney and pancreas tumors, considered in [10], and/or they are more difficult to detect. The Universal Lesion Segmentation (ULS) model also underperformed (33.7% aver- age tumor detection F1-Score). This likely reflects limitations in training data: the ULS dataset does not distinguish tumor types, and the rare tumors considered here 9 were possibly underrepresented, hindering accurate segmentation. R-Super has the best performance in Table 2, surpassing the standard segmentation model by large margins: +9%/+6%/+10% in sensitivity/specificity/F1-Score (Figure 2). Therefore, unlike VLMs, R-Super effectively used reports to improve tumor detection. Even when trained with only CT-Report pairs (no masks), R-Super surpassed standard segmentation trained with only CT-Mask pairs (no reports). Therefore, many reports (541 to 11.7K per tumor type, Table 1) can offer more training value than few masks (52 to 185 per tumor type, Table 1). 
This discovery may seem surprising in the field of tumor segmentation. However, in other AI fields, like Natural Language Processing (NLP) and computer vision, weaker supervision in large-scale can also surpass stronger supervision at smaller-scale [32, 33]. In NLP, powerful LLMs like ChatGPT were only possible after the transition from a small text dataset with precise labels to massive (billion-scale) text datasets with weaker labels (self-supervision). Similarly, in computer vision, VLMs that can understand images and generalize to multiple tasks and domains were only possible after the transition from small image datasets with precise classification labels to massive image datasets with weaker labels (captions). Our results suggest that the transition from small CT-Mask datasets to massive CT-Report datasets may also transform the tumor segmentation field. The best performance in Table 2 is achieved by R-Super trained on CT-Mask pairs (723) plus CT-Report pairs (101,654). R-Super can segment tumors with zero masks, and it gets better when more masks become available. This result makes R-Super an efficient tool to accelerate mask creation with active learning—a cycle where AI creates masks, radiologists correct the worst AI-made masks, and AI retrains on the corrected masks (getting better). R-Super helps at every step: it can train on only CT-Report pairs to help radiologists produce the initial masks, and it can generate increasingly better masks as more masks become available for training, helping the radiologists more. Indeed, we used R-Super in a active learning loop to help radiologists create the 723 masks in our dataset (see Section 4.1). For five of the seven tumor types in Table 2, R-Super surpassed the tumor detection performance of radiologists (for tumors in the bladder, gallbladder, uterus, prostate, esophagus, and adrenal glands). For these tumor types, CT scans are not the primary diagnostic tool—since these tumor types usually are difficult to see in CT. However, our results show that AI may be able too see tumors signs that are not easily per- ceptible to humans. This echoes with previous studies, which show that AI can see pancreatic tumors in non-contrast CT with high-accuracy, but humans cannot [34]. This finding is especially relevant for opportunistic detection: with >300 million CT scans performed annually for diverse clinical reasons, segmentation models have the potential to scan images in the background, flag suspicious studies and regions, and prompt radiologists to review those areas and refer patients for targeted follow-up when needed. We drew radiologist performances from published studies that—where possi- ble—tested tumor detection on datasets containing both healthy and malignant tumor patients, such as our test dataset. Some caveats limit head-to-head comparisons: the bladder [35] and gallbladder [36] studies lacked normal controls (so only sensitivity is available); the esophagus study [37] considered non-contrast CT, but our test set 10 Table 3: With R-Super, reports improve small tumor detection across seven types of tumors unavailable in public datasets. The detection of small tumors is crucial because it can improve early cancer detection and patient survival. Results include R-Super trained with reports (101K) and masks (723), and R-Super trained with reports only. 
We compare it to standard segmentation (trained with only masks) and to public AI models: ULS, an nnU-Net [30] segmentation model trained for universal lesion segmentation on lesions in unspecified organs; MedGemma, the flagship medical VLM from Google; and Merlin, the latest medical VLM from Stanford University. We test on 470 CT scans from UCSF (internal validation), 257 healthy and 213 with small tumors (< 2 cm diameter). We use pathology reports as ground truth (no masks available). See data details in Table 1. For DSC and NSD, see Section 2.5.

                               bladder       esophagus     gallbladder   uterus
AI                             sen.   spe.   sen.   spe.   sen.   spe.   sen.   spe.
Public Vision-Language Models
Merlin [18]                     0.0   99.6    0.0  100.0    0.0   98.5    0.0  100.0
MedGemma [19]                   7.1  100.0    0.0  100.0    0.0  100.0    0.0   99.6
Public Universal Lesion Segmentation Models
ULS [31]                       66.7   72.5    0.0   92.4   33.3   96.6   33.3   85.9
standard segmentation          46.7   85.2   80.0   82.9   28.6   84.4  100.0   74.1
R-Super No Mask                26.7   80.9  100.0   85.3   42.9   92.8   66.7   88.8
R-Super                        68.8   89.5   80.0   83.3   37.5   91.8   66.7   94.2

                               prostate      adrenal       spleen        average
AI                             sen.   spe.   sen.   spe.   sen.   spe.   sen.   spe.
Public Vision-Language Models
Merlin [18]                     7.1   99.2    0.0   99.6    0.0   98.5    1.0   99.3
MedGemma [19]                   0.0  100.0    2.0   99.6    7.4   96.0    2.4   99.3
Public Universal Lesion Segmentation Models
ULS [31]                       21.4   84.7    0.0  100.0    6.8   85.9   23.1   88.3
standard segmentation          64.3   75.2   60.7   81.7   54.2   70.2   62.1   79.1
R-Super No Mask                64.3   66.9   70.6   83.1   49.2   76.3   60.1   82.0
R-Super                        71.4   77.5   79.4   85.2   80.0   69.6   69.1   84.4

2.3 R-Super Detects Small Tumors

R-Super also surpassed the state of the art in the detection of tumors smaller than 2 cm in diameter (see Table 3). The detection of small tumors is especially important for early cancer detection and better patient survival. However, it is very challenging, as small tumors occupy as little as 0.0001% of a CT scan volume [39–41]. To address this challenge, the R-Super loss functions transform reports into per-voxel supervision concentrated on the organ where the tumor is, or even on a small part of this organ (Section 4.3.3). This strategy was successful: in comparison to standard segmentation (trained with CT-Mask pairs, no reports), R-Super (trained with CT-Report and CT-Mask pairs) yielded an improvement of +7%/+5.3% in tumor detection sensitivity/specificity.

[Figure 3: grouped bar charts of sensitivity, specificity, and F1-Score per tumor type (spleen, bladder, gallbladder, uterus, prostate, esophagus, adrenal, and average), comparing radiologists (results from literature), standard segmentation (29 to 183 masks per tumor type), R-Super (29 to 183 masks & 620 to 11K reports per tumor type), and R-Super No Mask (0 masks & 620 to 11K reports per tumor type).]

Fig. 3: In external validation, R-Super outperforms standard segmentation (trained only with CT-Mask pairs) by large margins. R-Super surpasses radiologist tumor detection performance for six of the seven tumor types. Radiologist performance was extracted from the literature; see Appendix B for an analysis of the selected studies and the limitations of the comparison.
Even when trained with CT-Report pairs (620 to 11K per tumor type) and zero CT-Mask pairs, R-Super surpassed standard segmentation (trained with 29 to 183 CT-Mask pairs per tumor type, Table 1). R-Super trained with both CT-Report pairs and CT-Mask pairs achieved the best results, surpassing standard segmentation by +12% F1-Score. We test on a hospital never seen during training, the Stanford Hospital (Merlin Test Set, N=1,133). All segmentation models were trained on the UCSF dataset and tested on Merlin. The esophagus tumor F1-Score seems low due to a large imbalance in the test set: only 21 esophagus tumor cases for 170 normals. See Table 1 for dataset details.

2.4 R-Super Generalizes to Unseen Hospitals

When tested on a hospital never seen during training, R-Super outperforms standard segmentation (trained without reports) by a large margin (Figure 3). External validation of medical AI on hospitals outside the training data is essential to demonstrate that the AI model can perform well across institutions, patient demographics, clinical procedures, and CT scanners [29, 42–44]. To perform external validation, we excluded the Merlin dataset from training. We trained R-Super on all UCSF CT-Report and CT-Mask pairs, and tested only on Merlin. Merlin comes from the Stanford Hospital, which is not in the UCSF dataset—making Merlin out-of-distribution.

Fig. 4: R-Super scales the largest public pancreatic tumor segmentation dataset, improving AI performance—especially for small tumors (< 2 cm). PanTS [16] is the largest public CT-Mask dataset for pancreatic tumor segmentation (1.1K pancreatic tumor masks). We scale it by merging PanTS and Merlin [18], a public CT-Report dataset with 2K pancreatic tumor reports. By learning from CT-Mask and CT-Report pairs (PanTS & Merlin), R-Super substantially outperformed a standard segmentation model trained only on CT-Mask pairs (PanTS). We evaluated on a Merlin test set (400 CTs, 200 with pancreatic tumors) and on the PanTS test set (901 CTs, 151 with pancreatic tumors). Notably, R-Super had the largest advantage for small pancreatic tumors (e.g., +19% sensitivity for small tumors in PanTS)—critical for early detection. We evaluated both tumor detection (sensitivity and specificity) and segmentation (DSC and NSD). DSC and NSD can only be calculated on PanTS, because it has ground-truth tumor masks. Also, we calculate DSC and NSD only for CT scans with tumors. For small tumors, R-Super produced a strong improvement in DSC and NSD. For larger tumors, the improvement was smaller, possibly indicating overfitting of the standard segmentation model (trained on PanTS only) to the PanTS masks.

As in internal validation (Tab. 2), our method surpassed radiologist performance in external validation (Fig. 3) for tumors in the bladder, gallbladder, prostate, esophagus, and adrenal glands. Additionally, in external validation R-Super also surpassed radiologist performance for spleen tumors. The radiologist performances were extracted from published studies, and Appendix B describes the studies and the limitations of this comparison.
2.5 R-Super Also Improves Segmentation of Tumors with Many Masks

R-Super not only enables the segmentation of tumors when few or no masks exist (Sections 2.2 to 2.4), it also improves the segmentation of tumors that are the focus of the largest public CT-Mask datasets (Figure 4). The PanTS dataset [16] is the largest public dataset with pancreatic tumor masks. It includes 9,000 public CT-Mask pairs, 1,077 with pancreatic tumors. Therefore, AI trained on PanTS represents the state of the art of what standard segmentation training can achieve with public CT-Mask pairs. We show that, using public data only, R-Super advances this state of the art substantially. To train on public data only, we do not use our 101,654 CT-Report pairs here. Instead, we train R-Super on PanTS-train (900 pancreatic tumor CT-Mask pairs) plus Merlin-train (1,800 pancreatic tumor CT-Report pairs). Figure 4 shows that R-Super substantially outperformed standard segmentation (trained only on the PanTS-train CT-Mask pairs) both on the PanTS test set and on the Merlin test set. R-Super was especially helpful for small tumors, providing up to +19% in sensitivity at matched specificity, and +10% DSC. Thus, R-Super can use reports to scale the largest public segmentation datasets, improving AI performance and the early detection of tumors. Notably, whereas our previous experiments trained R-Super at a ratio of 100 CT-Report pairs per CT-Mask pair, this experiment used a ratio of only 2 CT-Report pairs per CT-Mask pair—yet R-Super still yielded substantial gains. Thus, one does not need an enormous number of CT-Report pairs to benefit from R-Super.

3 Discussion

This study introduces R-Super, a novel AI training method that converts reports into supervision signals that directly guide the segmentation task—constraining segmented tumors to match the tumor counts, diameters, volumes, attenuations, and locations in reports. We used R-Super to train on a dataset with an unprecedented number of CT-Report pairs—101,654. In five test datasets, encompassing both internal and external validation, R-Super substantially surpassed other AI training methods and state-of-the-art VLMs in detecting tumors—such as Merlin, from Stanford University, and MedGemma, from Google. By effectively learning from reports, R-Super surpassed standard segmentation training (without reports) by very substantial margins: +13% sensitivity, +8% specificity, and +11% F1-Score in external validation. Additionally, R-Super can train any segmentation architecture. It does not significantly increase training time6, and it does not change inference time.

6 We trained R-Super in five days with 2 NVIDIA H100 GPUs.

We will release the first public AI capable of segmenting tumors in the spleen, esophagus, adrenal glands, bladder, gallbladder, uterus, and prostate in CT scans. We plan to keep expanding the size and the types of cancer in our dataset. We are contacting multiple medical institutions, in diverse countries, to collaborate by sharing CT scans and reports. Furthermore, we are gathering other types of medical images, such as MRI—where we hypothesize R-Super could be directly applied.

This study reveals the importance of reports for tumor segmentation. By introducing a novel training method that can effectively learn tumor segmentation from reports, we demonstrated that report-based training can yield double-digit improvements in tumor detection and segmentation metrics in 3 test datasets (UCSF, Merlin, and PanTS).
Our results demonstrate that reports allow tumor segmentation with few or zero tumor masks—training with many reports (101,654) and zero masks surpassed training with zero reports and few masks (723; Table 2). Thus, R-Super allowed the segmentation of seven tumor types missing from public CT-Mask datasets. Moreover, R-Super can use CT-Report pairs to scale the largest public CT-Mask training datasets (e.g., PanTS, the largest public pancreatic tumor CT-Mask dataset) and substantially improve results. In summary, we show that reports can strongly improve tumor segmentation and detection—given an AI training method that effectively learns from reports, like R-Super. We hope this result encourages more researchers to develop report-based training methods.

Overall, we hope our findings will advance the tumor segmentation field, helping in the transition from small CT-Mask datasets to large CT-Mask & CT-Report datasets. To benefit the research community, we will make public the R-Super code, the first public AI model that can segment and detect seven understudied tumor types in CT, and the first public tumor masks for these tumor types. Until now, to segment these seven tumor types unavailable in public CT-Mask datasets, radiologists needed to spend months or years drawing tumor masks. In previous studies, eight radiologists spent five years to produce 3,125 masks for pancreatic tumors. These same radiologists would need 300 years to create masks for the 101,654 CT scans in our dataset. Mask creation represents an enormous cost and time barrier, which has been preventing broader research on multi-tumor segmentation. R-Super removes this barrier by allowing AI to train with CT scans and reports—readily available in hospitals and public datasets. In summary, this study provides the community with data and an efficient training method to segment understudied tumor types. We hope this contribution helps democratize AI research and fosters further advances in the detection of understudied tumor types. In the end, we expect this research to translate into better cancer detection.

4 Methods

4.1 Assembling a Training Dataset of 101,654 CT-Report Pairs

This study is built upon three datasets: UCSF, Merlin, and PanTS.

(1) UCSF Dataset. LLMs searched over 410,000 CT reports from the University of California San Francisco (UCSF) Picture Archiving and Communication System (PACS). These reports are from 1997 to 2024, and they encompass the UCSF hospital and multiple affiliated institutions in California, USA. The LLM read the reports and selected normal patients and those with tumors in the esophagus, bladder, gallbladder, spleen, uterus, prostate, and adrenal glands. For efficiency, we used a small LLM, Llama 3.1 8B AWQ. Then, a large LLM, Llama 3.1 70B AWQ, read the selected reports again, confirming the small LLM's findings. The LLMs used radiologist-designed prompts, available in our public code. To certify LLM accuracy, radiologists read 447 of the reports selected by the LLM and certified that it has 96% accuracy in identifying patients with tumors7—a level of accuracy on par with the labelers of established datasets like CheXpert [45] and ChestX-ray14 [46]. In total, the LLMs selected 85,899 reports of interest. The UCSF dataset covers the pelvis, abdomen, and chest. It includes non-contrast and contrast cases—84% are venous phase, 10% arterial phase, and 6% non-contrast.
Of all CT scans, 68% were for outpatients (same-day visits), 17% for inpatients (admitted patients), and 15% were done in the emergency department (urgent care). In total, the dataset has 33,248 patients and 85,899 CT-Report pairs. Prior to this study, all data was de-identified.

7 In the verified reports, 182 patients had tumors and 265 were normal. The LLM correctly identified all tumor reports, and correctly identified 247/265 of the normal reports.

Fig. 5: Dataset summary. (a) Distribution of tumor diameters and patient sex in the training dataset (UCSF & Merlin train) and test datasets (UCSF test and Merlin test). (b) Distribution of tumor counts per CT scan and patient age in the training and test datasets. (c) Comparison of the age and sex distributions for healthy and unhealthy (tumor) patients. The UCSF test set was randomly selected, matching the age and sex distributions of healthy and unhealthy patients—avoiding bias in our results.

(2) Merlin Dataset [18]. Merlin is the largest public abdominal-focused CT-Report dataset. It was collected from the Stanford Hospital, and it includes 25,494 scans from 18,317 patients, acquired from 2012 to 2018. Every CT scan is paired with its report. Exams were selected via the Stanford Medicine Research Data Repository (STARR), using Current Procedural Terminology (CPT) codes 72192, 72193, 72194, 74150, 74160, 74170, 74176, 74177, and 74178. Of the CT scans in Merlin, 97% are portal venous, 2.4% delayed, 0.45% arterial, and 0.26% non-contrast. CT scans include the abdomen, chest, and pelvis. All data was de-identified by the dataset creators.

(3) PanTS Dataset [16]. PanTS is the largest public CT-Mask dataset focused on pancreatic tumors. It includes 9,901 public CT scans from 143 medical institutions in 17 countries, including 1,077 CT-Mask pairs with pancreatic tumors. All CT scans have masks for 27 organs and anatomical structures (including the pancreas and its head, body, and tail). Contrast phases are: non-contrast 7.9%, venous 64.9%, arterial 26.6%, and delayed 0.6%. All data was de-identified by the dataset creators.

4.1.1 Training and Testing Splits

Table 1 summarizes the training and testing datasets used in each of our experiments. Figure 5 provides tumor and demographic information on our training and testing datasets. In our main experiments (Section 2.2 and Section 2.3), we trained R-Super on 101,654 CT-Report pairs: 82,130 from the UCSF dataset and 25,494 from Merlin. In Section 2.2, we tested on 1,220 CT scans from unseen patients at UCSF. Test CTs were randomly selected, ensuring similar age and sex demographics between normal and tumor patients. In Section 2.3, we tested on the CT scans with small tumors (213 CT scans with tumors smaller than 2 cm) and the normal cases (257) inside this UCSF test dataset. In Section 2.4, for external validation, we trained on the UCSF dataset and tested on a Merlin test set. Finally, in Section 2.5, we trained on Merlin and PanTS, and tested on the official PanTS test split (901 CT scans, 151 with pancreatic tumors) and on 400 CT scans randomly selected from Merlin (200 with pancreatic tumors, 200 normals). All training datasets (including PanTS) contain normal cases, benign tumors, and malignant tumors. The Merlin and UCSF test sets include only malignant tumors (primary or metastatic) and healthy controls. Malignancy is confirmed through explicit mentions in radiology reports or, for the UCSF test set, by pathology reports.
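To make the two-stage LLM report triage of Section 4.1 concrete, the sketch below shows one plausible implementation. It is illustrative only: `generate_small` and `generate_large` are hypothetical callables wrapping the Llama 3.1 8B and 70B models, and `PROMPT` is a stand-in for the radiologist-designed prompts released in our public code.

```python
# Hypothetical two-stage LLM triage: a small LLM flags candidate reports
# (normal, or tumor in a target organ); a large LLM re-reads and must agree.

TARGET_ORGANS = {"esophagus", "bladder", "gallbladder", "spleen",
                 "uterus", "prostate", "adrenal glands"}

PROMPT = ("Read the CT report below. Answer 'normal' or list the organs "
          "among {organs} that contain tumors.\n\nReport:\n{report}")

def triage_reports(reports, generate_small, generate_large):
    """Keep reports the small LLM flags and the large LLM confirms."""
    selected = []
    for report in reports:
        prompt = PROMPT.format(organs=sorted(TARGET_ORGANS), report=report)
        small_answer = generate_small(prompt).strip().lower()
        is_relevant = small_answer.startswith("normal") or any(
            organ in small_answer for organ in TARGET_ORGANS)
        # Second pass: the larger model re-reads and must give the same answer.
        if is_relevant and generate_large(prompt).strip().lower() == small_answer:
            selected.append((report, small_answer))
    return selected
```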
4.2 Report-based Active Learning

Drawing a single tumor mask in CT (which is three-dimensional) can take up to 30 minutes for a radiologist, making large-scale mask creation costly and slow [47, 48]. Our dataset includes 723 tumor masks, created by 31 radiologists using a new report-based active learning strategy built on R-Super. Radiologists reported that this strategy reduced annotation time from around 30 to five minutes per mask.

Figure 6 illustrates our active learning strategy. We first train R-Super on CT-Report pairs only. Then, we iteratively create tumor masks through a loop: R-Super generates AI-made tumor masks; we identify those least consistent with reports; radiologists correct them; and we re-train R-Super with the radiologist-corrected tumor masks plus the remaining CT-Report pairs. We stopped at 723 radiologist-corrected tumor masks. Consistency between AI-made tumor masks and reports is quantified using the Ball Loss (Section 4.3.3), which increases when the AI-made tumor mask disagrees with the tumor descriptions in reports—in tumor number, size, and location. We prioritize the most inaccurate AI-made tumor masks for correction. Retraining R-Super on corrected masks teaches it to avoid its past mistakes.

Unlike traditional active learning—where radiologists begin by creating masks without AI assistance—our strategy starts with a strong segmentation model to assist radiologists: R-Super trained from CT-Report pairs alone. It provides effective AI assistance from the start, accelerating early mask creation. As more radiologist-corrected masks are added to the training dataset, R-Super continuously improves. It maintains superior accuracy to standard segmentation models trained without reports throughout the whole active learning process, whether few or many masks are available (Sections 2.2, 2.5). Overall, this report-guided active learning speeds up tumor mask creation by sixfold and delivers a more accurate, continuously improving AI assistant for radiologists.

[Figure 6: flowchart of the active learning loop: CT-Report Training (R-Super learns tumor segmentation from CT-Report pairs alone) → Mask Creation (R-Super creates tumor masks for the training dataset) → Report Score (masks are compared to reports to find the most inconsistent ones) → Radiologists in-the-loop (radiologists correct the inconsistent masks) → R-Super Training (R-Super trains with all available CT-Report and CT-Mask pairs).]

Fig. 6: Our report-based active learning enables radiologists to create tumor masks six times faster. Instead of drawing masks from scratch, radiologists correct AI-generated masks and have access to the original radiology reports—reducing annotation time from 30 to five minutes. The process prioritizes the most inaccurate masks, identified automatically using the Ball Loss, which increases when AI predictions disagree with report details (tumor number, size, or location). Retraining on these corrected masks helps R-Super avoid previous errors. Standard active learning begins with radiologists creating masks from scratch, without AI assistance—the cold-start problem. In contrast, our strategy starts with R-Super, already trained from CT-Report pairs, providing AI assistance from the start. As radiologist-corrected masks are created, R-Super continuously improves and consistently outperforms standard segmentation models trained without reports, making it a more effective assistant throughout the entire active learning process.
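One round of this loop can be sketched as follows. The sketch is an assumption about the mechanics, not our exact implementation: `model`, `ball_loss`, and the data structures are hypothetical placeholders, where `ball_loss` stands for any callable scoring disagreement between an AI-made mask and the tumor description extracted from its report.

```python
# Illustrative sketch of one active-learning round (Figure 6): rank AI-made
# masks by report inconsistency and send the worst ones to radiologists.

def select_masks_for_correction(model, cases, ball_loss, budget=50):
    """cases: iterable of (ct_volume, report_info) pairs, where report_info
    is the LLM-extracted tumor table. Returns the `budget` worst cases."""
    scored = []
    for ct, report_info in cases:
        pred_mask = model.predict(ct)               # AI-made tumor mask
        score = ball_loss(pred_mask, report_info)   # high = disagrees with report
        scored.append((score, ct, pred_mask))
    scored.sort(key=lambda item: item[0], reverse=True)
    return scored[:budget]   # radiologists correct these first, then we retrain
```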
4.3 R-Super

Figure 1 is an overview of the R-Super training method, designed to enforce consistency between AI-segmented tumors and the tumor descriptions in reports. Section 4.3.1 explains how R-Super uses an LLM to extract tumor characteristics from reports—tumor count, locations8, diameters, attenuation, and estimated volumes. Sections 4.3.2, 4.3.3, and 4.3.4 explain the three novel loss functions that use the LLM-extracted tumor characteristics as ground truth: the Volume Loss, the Ball Loss, and the Attenuation Loss, respectively. The Volume Loss, used as deep supervision, is less strict, enforcing only volume and location consistency between the segmented tumors and reports. The Ball Loss directly supervises the final AI segmentation output, and it enforces consistency in tumor volumes, locations, count, and diameters. The Attenuation Loss is applied both as deep supervision and at the output. It enforces consistency in attenuation—if a report states that a tumor is hypoattenuating, the segmented tumor should be darker than the surrounding organ; if hyperattenuating, it should be brighter.

8 Locations refer to the organ or organ sub-segment where the tumor is. In our experiments, we used sub-segments for pancreatic tumors (pancreatic head, body, or tail), and organs for other tumors. When available, we also extract the tumor slice—the horizontal plane where the tumor is in the CT.

4.3.1 LLM for Extracting Report Information

We use an LLM to extract tumor characteristics from reports. It directly extracts tumor counts, locations (organ/organ sub-segment/tumor slice), attenuation, and diameters. Tumor volumes are estimated from the diameters (Section 4.3.2). Since LLMs can interpret semantics and context, they can adapt to the diverse writing styles and word choices of reports—written in diverse medical institutions by diverse radiologists. To facilitate the application of R-Super to any hospital and avoid any risk of overfitting the LLM to the styles of the reports in our training dataset, we use a zero-shot LLM, Llama 3.1 70B AWQ [49]. Notably, zero-shot LLMs extract tumor characteristics from reports accurately, according to manual evaluation by radiologists (Section 4.1). To reduce computational cost, we run the LLM only once per report and store its answer. Our LLM prompt (available in our code) was designed by radiologists in an iterative procedure9. This prompt provides the LLM with medical knowledge and detailed guidelines for understanding reports. The prompt also asks the LLM to thoroughly justify its answers according to the report, and to fill templates with the tumor characteristics (tumor diameters, organ, organ sub-segment, slice, attenuation). The LLM-filled templates are automatically converted into a table for later use as ground truth for the Volume Loss, Ball Loss, and Attenuation Loss. Sometimes, reports miss some tumor characteristics (e.g., diameters). Our loss functions also work for these reports—they leverage all information available in each report (Sections 4.3.2 and 4.3.3).

9 Radiologists and computer scientists prepared a prompt, tried it, checked for LLM errors, and improved the prompt to avoid these errors.
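For reference, a plausible record for one row of the LLM-extracted table could look like the sketch below. The field names are illustrative, not our exact schema; they simply mirror the characteristics listed above.

```python
# Hypothetical per-tumor record produced from the LLM-filled templates.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ReportedTumor:
    organ: str                            # e.g., "spleen"
    sub_segment: Optional[str] = None     # e.g., "pancreatic head", if applicable
    slice_index: Optional[int] = None     # axial slice of the tumor, if reported
    diameters_mm: Tuple[float, ...] = ()  # 0 to 3 reported diameters
    attenuation: Optional[str] = None     # "hypo", "hyper", "iso", or None

# Example: "3 cm hypodense lesion in the spleen"
tumor = ReportedTumor(organ="spleen", diameters_mm=(30.0,), attenuation="hypo")
```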
4.3.2 Volume Loss

We apply the Volume Loss to a deep layer of the segmentation model, as deep supervision10. The Volume Loss is designed to be non-strict—it enforces only two constraints: tumors must be segmented inside the locations (organs or organ sub-segments; for simplicity we say "organ" in the explanations below) where the report mentions tumors, and the combined volume of all segmented tumors must match the combined volume of all reported tumors in each location. The non-strict loss allows exploration in deeper layers, while the strict Ball Loss (Section 4.3.3) enforces accurate final predictions. When reports inform the tumor slices (the vertical height of the tumor in the CT), the Volume Loss also enforces the segmented tumors to be at the informed slices.

10 Before applying the loss, we use a 1 × 1 × 1 convolution with sigmoid activation to reduce the number of channels in the deep layer output, making them match the number of segmentation classes. We also use nearest-neighbor interpolation to make the deep layer output match the input size and voxel spacing.

Usually, reports do not directly provide tumor volumes, but they provide tumor diameters. We extract the diameters with the LLM and use them to estimate volumes. Reports can provide 1, 2, or 3 diameters for a tumor11. With a single diameter (d1), we estimate the tumor volume as a ball: d1^3 π/6. With 3 diameters (d1, d2, d3), we use an ellipsoid estimation: d1 d2 d3 π/6. With 2 diameters, d1 and d2, we estimate d3 as (d1 + d2)/2 and use the ellipsoid volume estimation. After estimating the volume of each tumor the report describes in an organ o, we sum them, giving Vr,o—the total reported tumor volume in o.

11 One-diameter measurements are common for small, rounder tumors, and they are used in the RECIST (Response Evaluation Criteria in Solid Tumors) guideline [50]. Two diameters are used in the World Health Organization (WHO) tumor measurement standard [51], where the first diameter is the largest tumor diameter in any CT axial slice, and the second diameter is measured perpendicularly to the first, in the same slice. Some reports have a third diameter, perpendicular to the other two.

The Volume Loss optimizes the total segmented tumor volume, Vs,o, to match Vr,o, for each organ o. To calculate Vs,o, we divide the CT into organs (and organ sub-segments) using pre-saved, AI-made organ masks. These organ masks do not need to be manually created. We created them with an nnU-Net [30] trained on public data12, and we will also publicly release this nnU-Net. To compensate for errors in the organ masks and to account for tumors that grow beyond organ boundaries, we expand the organ masks with binary dilation (by about 2 cm). When tumor slices are informed in the report, we simply edit the organ mask, O = [o_{h,w,l}], making it zero in regions away from the informed tumor slices13. The Volume Loss will automatically discourage tumor segmentations in these zeroed regions.

12 Organ segmentation is usually more accurate than tumor segmentation—DSC scores above 80% are common in organ segmentation [29], but state-of-the-art tumor segmentation models rarely reach 70% DSC [12, 52]. Public CT datasets have few masks for few types of tumors, but these datasets have many masks for many organs [48, 53]. Moreover, there are many accurate and public organ segmentation models, such as TotalSegmentator [53] and Touchstone [29]. Our organ segmentation model was trained on the public dataset AbdomenAtlas [10]. It segments 39 organs and structures, including all seven organs where our dataset has tumors, and the pancreas sub-segments.

13 Consider tumor slices z_i, for tumors i with maximum diameter d_i; z is the vertical axis of the CT. We make the organ mask zero in all z coordinates whose distance to every tumor slice z_i is larger than the corresponding tumor diameter d_i, i.e., o_{h,w,l} = 0 if |h - z_i| > d_i for all tumors i.

Then, for each organ with tumors in the report, o, we multiply (element-wise) the organ mask, O = [o_{h,w,l}], with the tumors segmented by the AI for the organ o (after the softmax/sigmoid activation function), T_o = [t^o_{h,w,l}]. The multiplication selects only the tumors segmented inside the organ o. To estimate the total tumor volume in o, Vs,o, we sum the multiplication result over the spatial dimensions and multiply it by the volume of one voxel, v:

    V_{s,o} = v \sum_{h,w,l}^{H,W,L} t^{o}_{h,w,l} \, o_{h,w,l}    (1)

The Volume Loss minimizes the difference between Vs,o and Vr,o (ground truth). To this end, we experimented with many loss functions, like the L1 and L2 losses, but we achieved better convergence with the function in Eq. 2. The reasons for this better convergence are: (1) a strong but finite gradient when Vs,o = 0 and Vr,o ≠ 0, strongly penalizing the AI when it misses tumors while keeping numerical stability; and (2) a soft gradient when Vs,o > Vr,o, since a strong gradient in that regime can increase the number of tumors missed by the AI (by pushing it towards the Vs,o = 0 solution).

    L'_{forg,o}(V_{s,o}, V_{r,o}) = \frac{|V_{s,o} - V_{r,o}|}{V_{s,o} + V_{r,o} + E}    (2)
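As a concrete reference for the volume estimation above, the short sketch below follows the ball and ellipsoid formulas in the text; the function name is illustrative.

```python
# Diameter-to-volume estimation (Section 4.3.2): ball for one diameter,
# ellipsoid for two or three diameters.
import math

def estimated_volume_mm3(diameters_mm):
    """Estimate tumor volume (mm^3) from 1-3 reported diameters (mm)."""
    d = list(diameters_mm)
    if len(d) == 1:                    # ball: d^3 * pi / 6
        return d[0] ** 3 * math.pi / 6
    if len(d) == 2:                    # estimate the third diameter
        d.append((d[0] + d[1]) / 2)
    return d[0] * d[1] * d[2] * math.pi / 6   # ellipsoid: d1*d2*d3*pi/6

# V_{r,o}: sum over all tumors the report describes in organ o.
v_r_o = sum(estimated_volume_mm3(t) for t in [(30.0,), (12.0, 8.0)])
```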
The constant E (set to 500 mm³) provides numerical stability for small Vr,o. Importantly, the tumor volumes estimated from reports (Vr,o) are not perfect. They are subject to human error, inter-observer variance, and approximation errors in our volume estimation from diameters. Thus, we added a tolerance margin (0 < τ < 1) to the Volume Loss: if the difference between Vs,o and Vr,o is small (i.e., |Vs,o − Vr,o| ≤ τ Vr,o), the Volume Loss does not penalize the AI—the loss and its gradient become zero. Eq. 3 displays the loss with tolerance, and Figure 7 plots it. We set τ = 10%.

[Figure 7: diagram of the Volume Loss pipeline for the example report "3 cm hypodense lesion in the spleen" — (I) an LLM extracts the tumor organ (spleen) and the estimated total tumor volume Vr,o (14K mm³); (II) the AI-estimated tumor probabilities are multiplied by the spleen mask and summed over all voxels, giving the total segmented tumor volume Vs,o; (III) the Volume Loss L_{forg,o} compares Vs,o to Vr,o. A right panel plots the loss.]

Fig. 7: The Volume Loss enforces the volume of segmented tumors to match the tumor volume estimated from the report. The loss is applied to deep layers of the segmentation model, as deep supervision. (I) An LLM extracts tumor diameters and locations (organ/organ sub-segment/slice) from reports. From the diameters, we estimate the total tumor volume from the radiology reports, Vr,o, within each organ/organ sub-segment o. (II) We sum the AI tumor segmentation output (softmax/sigmoid) for all voxels inside the organ/sub-segment o, estimating the total segmented tumor volume in the organ/sub-segment, Vs,o. Pre-saved, AI-made organ/sub-segment masks identify the voxels inside the organ/sub-segment. (III) We use a custom regression loss (Eq. 3, right panel of the figure) to enforce the segmented tumor volume, Vs,o, to match the tumor volume in the radiology report, Vr,o. This loss includes a tolerance margin to account for human and estimation errors in Vr,o. The figure plots the loss for Vr,o = 1000 mm³ and varying Vs,o. In case the report informs the tumor slices, the Volume Loss also ensures that the segmented tumors are near the informed slices.
    L_{forg,o}(V_{s,o}, V_{r,o}) = \max\{ L'_{forg,o}(V_{s,o}, V_{r,o}) - L'_{forg,o}((1-\tau)V_{r,o}, V_{r,o}),\ 0 \}    (3)

For each organ o with tumors in the report, the Volume Loss also penalizes any tumor segmented outside the organ, using cross-entropy and the organ segmentation mask (Eq. 4). For organs with no tumor in the report, we use cross-entropy to penalize all tumor segmentation output voxels, pushing them towards 0. Equation 5 displays the final Volume Loss: L_{forg,o}(Vs,o, Vr,o) makes Vs,o match Vr,o inside the organ o with tumors, and the term L_{bkg,o}(T_o) minimizes tumor segmentation outside this organ.

    L_{bkg,o}(T_o) = -\frac{1}{H \cdot W \cdot L} \sum_{h=1}^{H} \sum_{w=1}^{W} \sum_{l=1}^{L} \ln\big(1 - t^{o}_{h,w,l}(1 - o_{h,w,l})\big)    (4)

    L_{vol,o} = L_{forg,o}(V_{s,o}, V_{r,o}) + L_{bkg,o}(T_o)    (5)

Following its non-strict design, the Volume Loss allows flexibility in how tumors are segmented inside organs. Also, it does not push the estimated tumor probabilities towards 1 or 0, allowing uncertainty and exploration in deep layers. Mainly, the Volume Loss enforces that the AI must not: (I) segment tumors in organs where the report mentions no tumor (false positives), (II) miss tumors in organs where the report mentions tumors (false negatives), (III) segment tumors much larger or smaller than the tumors in the report, or (IV) segment tumors away from the informed tumor slices (when reports inform them).

When a report mentions that an organ has tumors but does not inform the tumor diameters or the number of tumors, we cannot estimate the total reported tumor volume in the organ, Vr,o. In this case, we resort to a prior-based, high-tolerance version of the loss: we consider that the tumor must be larger than 5 mm in diameter (Vr,o > 65 mm³) and smaller than 120 mm (Vr,o < 904,779 mm³). These numbers are based on the analysis of tumor sizes in our dataset. To implement this requirement, when the real Vr,o is unknown (i.e., the report does not inform tumor number or size), we substitute Vr,o by V̂r,o and calculate the loss as defined below:

    \hat{V}_{r,o} = \begin{cases} 65 & \text{if } V_{s,o} < 65, \\ V_{s,o} & \text{if } 65 \le V_{s,o} \le 904{,}779, \\ 904{,}779 & \text{if } V_{s,o} > 904{,}779, \end{cases}    (6)

    L_{vol,o} = L_{forg,o}(V_{s,o}, \hat{V}_{r,o}) + L_{bkg,o}(T_o)    (7)
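Putting Eqs. 1–5 together, a minimal PyTorch sketch of the Volume Loss follows. It is a sketch under stated assumptions, not our exact implementation: it handles one CT and one tumor class, `tumor_probs` is the post-activation tumor output and `organ_mask` the pre-saved binary organ mask (both shaped H × W × L), and the hyperparameters follow the text (E = 500 mm³, τ = 0.10).

```python
import torch

def volume_loss(tumor_probs, organ_mask, v_report, voxel_volume=1.0,
                tau=0.10, eps_mm3=500.0):
    def rel_err(v_seg, v_rep):                       # Eq. 2
        return (v_seg - v_rep).abs() / (v_seg + v_rep + eps_mm3)

    v_seg = voxel_volume * (tumor_probs * organ_mask).sum()   # Eq. 1
    v_rep = torch.as_tensor(v_report, dtype=tumor_probs.dtype)

    # Eq. 3: subtract the loss value at the edge of the tolerance band, so
    # differences within tau * v_rep give (approximately) zero loss.
    tolerance = rel_err((1 - tau) * v_rep, v_rep)
    l_forg = torch.clamp(rel_err(v_seg, v_rep) - tolerance, min=0.0)

    # Eq. 4: cross-entropy pushing tumor probability to 0 outside the organ;
    # log1p(-x) = ln(1 - x), with a small constant for numerical stability.
    l_bkg = -torch.log1p(-tumor_probs * (1 - organ_mask) + 1e-6).mean()

    return l_forg + l_bkg                            # Eq. 5
```

For reports without tumor size or count, the same function can be called with `v_report` clamped per Eq. 6 before the call.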
4.3.3 Ball Loss

Unlike the Volume Loss, the Ball Loss is applied to the final output of the segmentation model (last layer), and it enforces strict constraints. First, it enforces that segmented tumors must be in the locations (organs/organ sub-segments) where the report mentions tumors. Then, for each location, it enforces that the number of segmented tumors must match the number of tumors in the report, and that each of these segmented tumors must match the diameter, volume, and slice (when informed) of one tumor described in the report. Overall, the Ball Loss uses multiple pieces of information from reports to guide tumor segmentation.

First, like the Volume Loss, the Ball Loss uses the cross-entropy loss to penalize tumor segmentations in organs with no tumor in the reports. For organs with tumors in the report, Figure 8 displays the Ball Loss procedure. First, we multiply (element-wise) the tumor segmentation output (T_o) with the corresponding pre-saved organ segmentation mask, T_o ⊗ O. This multiplication selects only the tumors inside the organ. Second, we apply sequential ball convolutions to locate each individual tumor the report mentions, starting with the largest. A ball convolution is a standard convolution using a non-learnable binary kernel shaped like a ball, with the same diameter as the tumor diameter in the report (for tumors with multiple diameters, we use the largest). The convolution moves the ball inside the tumor segmentation output.

[Figure 8: the Ball Loss pipeline for the example report "3 cm hypodense lesion in the spleen" — (I) an LLM extracts and saves the tumor organ (spleen), diameter (30 mm), and count (1); (II) organ localization with a pre-saved, AI-made spleen mask, and a ball convolution that moves a 30 mm spherical kernel inside the spleen to find the highest-probability ball; (III) the 14K most probable output voxels inside that ball are selected for maximization; (IV) the process repeats for all tumors reported in the organ, from large to small, ignoring voxels already assigned to other tumors, and output voxels not assigned to any tumor are minimized.]

Fig. 8: The Ball Loss converts reports into voxel-level supervision. (I) A zero-shot large language model (LLM) extracts and saves tumor count, locations (organs or sub-regions), slices, attenuation, and diameters from reports. (II) During segmentation training, the tumors in the reports are located in the CT by ball convolutions—standard convolutions with fixed, spherical binary kernels matching the reported tumor diameter (plus a small margin). We apply the convolution to the segmentation model's outputs (tumor probabilities), ignoring locations outside the organ/organ sub-segment containing the tumor (with the help of pre-saved, AI-made organ masks). When the tumor slice is informed by the report, we also ignore locations far from the slice. The convolution output is maximum when the ball is at the most probable location for the tumor—the highest probability ball. (III) By setting the top-N most probable voxels inside the highest-probability ball to 1, and the remaining voxels to 0, we create a segmentation mask for the tumor. N is the number of voxels the tumor is expected to occupy, according to its volume estimated from the report. If the report shows multiple tumors, we use a sequence of ball convolutions to locate them one by one. After each tumor is located and added to the segmentation mask, we remove it from the segmentation output, to avoid reuse. (IV) We use the mask to optimize the segmentation model, using the Dice loss and a custom cross-entropy loss, which has higher weights near the tumor center. The masks created by the Ball Loss have tumors with the correct size, count, and locations (organs/sub-segments), and they get better as the segmentation model trains and improves. The Ball Loss includes tolerances for uncertain tumor borders.

In each ball position, the convolution output is the sum of all tumor probabilities (softmax/sigmoid) within the ball. The position where this sum is highest—the highest probability ball—indicates the most likely location for a tumor with the same diameter as the ball. The preliminary multiplication between the tumor segmentation output and the organ segmentation mask, T_o ⊗ O, prevents the highest probability balls from falling outside the organ. If the report informs the tumor slice, we modify the organ mask, O, by making it zero on CT slices away from the informed tumor slice (more than one tumor diameter away). This forces the ball convolution to find a highest probability ball that intersects the informed tumor slice.
Additionally, to improve tumor localization, we weight the ball convolution kernel slightly higher around the ball center, so the convolution responds more strongly if the tumor's center (where probabilities tend to be highest) aligns with the ball kernel's center14.

Our first ball convolution uses the diameter of the largest tumor reported in the organ, d0. The convolution output is a 3D volume whose maximum is the most likely tumor center, c0, for a tumor of diameter d0. We place the highest probability ball (diameter d0, center c0) in the tumor segmentation output, and select the N0 voxels with the highest tumor probability inside the ball. N0 represents how many voxels the tumor should have—we derive N0 from the tumor volume estimated from the report (see Section 4.3.2). Then, we create an empty segmentation mask (all zeros) and set these N0 voxels to 1. Before using another ball convolution to locate the next tumor, we zero out these N0 voxels in the tumor segmentation output. This ensures that the next ball convolution will not locate the tumor we just located15. We repeat this process—ball convolution, add tumor to segmentation mask, remove tumor from segmentation output—until all tumors mentioned in the report are added to the segmentation mask. The final segmentation mask matches the report in tumor count, locations (organ), volumes, and diameters. We use it as ground truth for a Dice loss and a custom weighted cross-entropy loss, which optimize the AI tumor segmentation output to match the mask. Our cross-entropy gives higher weights to voxels where the AI has higher tumor confidence, and it does not penalize a margin around the tumor borders—compensating for errors in the diameters in the report.

In case the report mentions a tumor but does not provide its size, we use a relaxed version of the Ball Loss. We assume that the tumor has a diameter of at least 5 mm, because fewer than 0.1% of the tumors in our reports are smaller than 5 mm. Then, we apply the ball convolutions as before, considering a 5 mm diameter. This will generate a small tumor inside the segmentation mask, possibly near the center of a real, larger tumor (centers usually have higher tumor probability). We use this segmentation mask to train the segmentation model with the cross-entropy and Dice losses. However, we do not apply the cross-entropy and Dice losses to the mask's zero voxels inside the organ with the tumor. The reason is that we do not know the real tumor size; thus, it is not possible to estimate how many voxels are tumor voxels. We just enforce that at least a 5 mm tumor exists, but we do not penalize the segmentation model if it finds a larger tumor. In case we do not know how many tumors an organ has, we use the Ball Loss to localize and create a mask for each tumor described in the report, but we again do not penalize the mask's zero voxels inside the organ with tumors.

Not all reports are equal, but the Ball Loss (like the Volume Loss) adapts to different types of reports: when reports are more precise, the Ball Loss is more precise, leveraging all available information. The most precise reports include size and slice for all tumors (about 15% of our reports). This limits the ball convolution to searching for the tumor in a very small region—a few slices inside one organ—making it very precise.

14 Ball kernels are 1 at the kernel center and decay towards the ball border, following a 3D Gaussian with a standard deviation of 0.75× the ball diameter, d_i. Outside the ball, the kernel is zero. Ball convolutions use stride 1, zero padding, and odd kernel sizes to ensure input-output alignment.

15 It is important to iterate from the largest to the smallest tumor. Consider a segmentation output with a big tumor and a small tumor. A ball convolution with a small diameter can have high outputs over the small tumor or anywhere inside the large tumor. Thus, before localizing the small tumor, we first localize the large tumor and remove it from the segmentation output.
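A minimal sketch of one such localization step is shown below. It is an illustration, not our exact implementation: it uses a plain binary ball, whereas the paper additionally weights the kernel towards its center (footnote 14) and enlarges the reported diameter by a margin; diameters are assumed to already be in voxels.

```python
import torch
import torch.nn.functional as F

def ball(shape, center, diameter):
    """Binary ball of the given diameter (voxels) inside a volume of `shape`."""
    grids = torch.meshgrid(*[torch.arange(s, dtype=torch.float32) for s in shape],
                           indexing="ij")
    dist2 = sum((g - c) ** 2 for g, c in zip(grids, center))
    return (dist2 <= (diameter / 2) ** 2).float()

def locate_one_tumor(tumor_probs, organ_mask, diameter, n_voxels):
    """tumor_probs, organ_mask: (H, W, L) tensors. Returns a 0/1 pseudo-mask."""
    probs = tumor_probs * organ_mask                  # ignore tumors outside organ
    k = diameter + 1 - diameter % 2                   # odd kernel size
    kernel = ball((k, k, k), (k // 2, k // 2, k // 2), diameter)
    scores = F.conv3d(probs[None, None], kernel[None, None], padding=k // 2)[0, 0]
    idx = int(scores.argmax())                        # highest-probability ball
    H, W, L = scores.shape
    center = (idx // (W * L), (idx // L) % W, idx % L)

    region = ball(probs.shape, center, diameter)      # ball placed at best center
    inside = probs * region
    n = min(n_voxels, int(region.sum()))              # N from the report volume
    cutoff = inside.flatten().topk(n).values[-1]
    return ((inside >= cutoff) & (region > 0)).float()

# For multiple tumors, iterate from the largest to the smallest reported
# diameter, zeroing out each located tumor (footnote 15):
# probs = probs * (1 - pseudo_mask)
```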
Despite its name, the Ball Loss does not assume that tumors are spherical, nor does it teach the segmentation model to segment only spherical tumors. Instead, it assumes that tumors fit inside a ball whose diameter matches the largest tumor diameter in the report. Tumors can assume any shape that fits inside this ball. This assumption can be relaxed by increasing the diameter used in the Ball Loss (e.g., we increase the diameters in the reports by 30% when applying the Ball Loss).

One may wonder whether the Ball Loss can guide the segmentation model to segment the wrong thing. Indeed, in a single image it can: if the segmentation model segments a wrong tumor (a false positive) inside the right organ, the Ball Loss may enforce this wrong segmentation. However, when a similar false positive appears in a healthy patient, the loss penalizes it. Thus, while some wrong tumor segmentations may be reinforced in individual cases, they are canceled out across patients. To consistently minimize the Ball Loss across the whole training dataset, the segmentation model must learn to find the correct tumor organs, tumor counts, and tumor sizes. In other words, the only way to minimize the Ball Loss is to truly segment the tumors. Our experiments show that, by minimizing the Ball Loss and the Volume Loss, R-Super becomes substantially better at segmenting tumors—proving that the net effect of our losses is to reinforce correct segmentations, not false positives.

4.3.4 Attenuation Loss

Reports commonly inform tumor attenuation. A hypoattenuating tumor is darker than the surrounding organ, a hyperattenuating tumor is brighter, and an isoattenuating tumor has a brightness similar to the organ's. The Attenuation Loss leverages this information to improve tumor segmentation. Brightness in CT scans is expressed in HU values—the values of the voxels in the CT. Even after we normalize the CT, relative attenuation remains: if a tumor is hypoattenuating, its average voxel value will be lower than the average voxel value of the organ. We apply the Attenuation Loss both as deep supervision and at the segmentation model's final output.

To calculate the Attenuation Loss, we define the tumor voxels as the voxels where the segmentation model predicts more than 50% tumor probability. We define the organ voxels using the pre-saved, AI-made organ mask, but we do not count tumor voxels as organ voxels. We calculate the mean and standard deviation of the tumor voxels and of the organ voxels. These means and standard deviations are sent to an MLP16, which is an attenuation classifier. It classifies whether tumors are all hyperattenuating, all hypoattenuating, or of mixed attenuation/isoattenuation. The label for the attenuation classifier is extracted from the report by the LLM. We train the attenuation classifier with a standard cross-entropy classification loss. The gradient of this loss is used to train the attenuation classifier, but it also back-propagates to the segmentation model and improves it—the segmentation model should delineate tumors better, to allow the attenuation classifier to predict the tumor attenuation better.

16 128 neurons in the hidden layer.
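A minimal sketch of this loss, under stated assumptions, is shown below. The 3-class target and the 128-unit hidden layer follow the text; the feature order is an illustrative choice, and we keep the probabilities as soft weights after thresholding (an assumption) so that gradients can reach the segmentation model.

```python
import torch
import torch.nn as nn

attenuation_classifier = nn.Sequential(
    nn.Linear(4, 128), nn.ReLU(), nn.Linear(128, 3))  # hypo / hyper / mixed-iso

def attenuation_loss(ct_hu, tumor_probs, organ_mask, label, eps=1e-6):
    """ct_hu, tumor_probs, organ_mask: (H, W, L); label: 0-dim long tensor
    (0 = hypo, 1 = hyper, 2 = mixed/iso), extracted from the report by the LLM."""
    hard = (tumor_probs > 0.5).float()          # tumor voxels: >50% probability
    tumor = tumor_probs * hard                  # soft weights keep gradients
    organ = organ_mask * (1 - hard)             # organ voxels exclude the tumor

    def stats(region):
        n = region.sum() + eps
        mean = (ct_hu * region).sum() / n
        var = (((ct_hu - mean) ** 2) * region).sum() / n
        return mean, torch.sqrt(var + eps)

    feats = torch.stack([*stats(tumor), *stats(organ)])   # means and stds
    logits = attenuation_classifier(feats[None])
    # Cross-entropy trains the classifier; its gradient also flows back into
    # the segmentation model through tumor_probs.
    return nn.functional.cross_entropy(logits, label[None])
```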
Acknowledgments. This work was supported by the National Institutes of Health (NIH) under Award Number R01EB037669, the Lustgarten Foundation for Pancreatic Cancer Research, and the Center for Biomolecular Nanotechnologies, Istituto Italiano di Tecnologia (73010, Arnesano, LE, Italy). We would like to thank the Johns Hopkins Research IT team in IT@JH for their support and infrastructure resources where some of these analyses were conducted, especially DISCOVERY HPC; we also thank the HPC infrastructure and the Support Team at Fondazione Istituto Italiano di Tecnologia. We thank Jaimie Patterson for writing a news article about this project.

Appendix A Training Details

We train using CT patches. For CT-Report pairs, each training patch is designed to fully cover one target organ. This target organ is randomly chosen, with a high probability of choosing organs with tumors in the report (e.g., 90%). The training patch must fully cover the target organ; otherwise, a tumor mentioned in the report could fall outside the patch, and the report-based losses would wrongly push the AI to find a tumor that is not visible to it. Training parameters followed the defaults set by MedFormer [28], the segmentation architecture we used inside R-Super. The only new parameters we introduced are loss weights. We set a loss weight of 1 for the segmentation losses (cross-entropy and Dice, used for CT-Mask pairs), 0.1 for the Volume Loss and the Ball Loss, and 0.01 for the Attenuation Loss. There is no need to carefully tune these weights; we used the same weights in all our experiments. We train with AdamW, gradient-norm clipping (1.0), 50 epochs of 1,000 batches each, a batch size of 4, a patch size of 128 × 128 × 128, isotropic voxel spacing of 1 mm, a weight decay of 5.00E-2, and a learning rate of 1.00E-4 (5 epochs of warmup, followed by polynomial decay). CT intensity was clipped to [-991, 500] HU, then normalized. Data augmentation includes rotation, brightness, gamma, contrast, Gaussian blur, and Gaussian noise [28]. We super-sampled the CT-Mask pairs, making them 50% of the samples the segmentation model saw in each epoch. Segmentation models were initialized pre-trained for organ segmentation on AbdomenAtlas 2.0 [10, 54]. Pre-training followed the same configuration as training (described above), but without the R-Super losses or reports.
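The optimization setup in Appendix A can be summarized in the sketch below. It is a hedged reconstruction: the model is a stand-in for MedFormer, the polynomial-decay power (0.9) is an assumption (a common segmentation default, not stated in the text), and the loss assembly is schematic.

```python
import torch

W_SEG, W_VOL_BALL, W_ATT = 1.0, 0.1, 0.01    # loss weights from Appendix A
EPOCHS, BATCHES, WARMUP = 50, 1000, 5

model = torch.nn.Conv3d(1, 2, 3, padding=1)  # stand-in for MedFormer [28]
opt = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=5e-2)

def lr_scale(step, total=EPOCHS * BATCHES, warmup=WARMUP * BATCHES, power=0.9):
    if step < warmup:                        # linear warmup over 5 epochs
        return step / max(warmup, 1)
    frac = (step - warmup) / max(total - warmup, 1)
    return (1 - frac) ** power               # polynomial decay (power assumed)

sched = torch.optim.lr_scheduler.LambdaLR(opt, lr_scale)

# Schematic step:
# loss = W_SEG * seg_loss + W_VOL_BALL * (vol_loss + ball_loss) + W_ATT * att_loss
# loss.backward()
# torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
# opt.step(); sched.step(); opt.zero_grad()
```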
Appendix B Comparison to Radiologists

The radiologist tumor detection performance is not very high for many of the tumor types in this study, because these tumor types are very difficult to detect in CT scans. Due to this difficulty, CT is not the primary diagnostic tool for tumors in the bladder, esophagus, prostate, uterus, and gallbladder. However, more than 300 million CT scans are performed annually in the world, for diverse reasons. This large number of CT scans creates a large opportunity for opportunistic early detection of tumors (when tumors are found in CT scans performed for another reason, not to search for tumors). AI can help this opportunistic detection, because it can often see tumor signs that are not visible to humans. For example, PANDA detects pancreatic tumors on non-contrast CT that radiologists typically cannot [7].

Here, we compared the performance of our AI to that of radiologists reported in the literature. We searched for studies in which radiologists analyzed a dataset of patients with malignant tumors and normal patients. We extracted from these studies the sensitivity and specificity reported for the radiologists. For the bladder and gallbladder, we could only find studies without healthy patients; therefore, we report only the radiologist sensitivity in these cases. A limitation of our comparison between radiologists and AI is that they were evaluated on different test sets, with different patient populations, different CT scanners, possibly different proportions of the types of malignant tumors, different contrast protocols, and different hospitals. Some clear differences are: for esophagus tumors, the radiologist study used non-contrast CT, while our AI was tested on contrast-enhanced CT (easier); and for adrenal tumors, our AI was evaluated on both primary and metastatic tumors, while the radiologist study considered only metastatic tumors. Besides the esophagus study, the other studies used contrast-enhanced CT, as we did in our test datasets. Below, we provide a brief summary of each study.

• Bladder tumors [35]: CT scans were selected for patients later diagnosed with bladder cancer (99 patients; 226 CTs). These scans were acquired up to five years before the pathologic diagnosis (pre-diagnostic). Radiologists achieved 67% tumor detection sensitivity. The study lacked normal patients, so specificity was not estimable. Since these CT scans are pre-diagnostic, some may have a very small tumor, or truly no tumor, reducing the reported radiologist sensitivity.

• Esophagus tumors [37]: Non-contrast CT scans came from 52 esophageal cancer patients and 48 normal patients. Radiologists achieved 25–31% sensitivity at 74–78% specificity. Unlike this study, our test dataset used contrast-enhanced CT (easier).

• Gallbladder tumors [36]: This study, published in 1997, is a retrospective analysis. It covers gallbladder carcinoma patients at Howard University over the previous 28 years. Radiologist performance for gallbladder tumor detection in CT was reported as 40% sensitivity. No normal patients were included, and specificity is not reported.

• Prostate tumors [55]: The study included 139 clinically significant prostate cancer CTs and 432 healthy CTs. Radiologists achieved 44% sensitivity and 74% specificity in detecting these tumors.

• Spleen tumors [56]: A 2024 meta-analysis synthesized spleen tumor detection performance across different imaging modalities. On CT, radiologists achieved 77% sensitivity and 91% specificity in detecting spleen tumors.

• Uterus tumors [57]: In asymptomatic postmenopausal women on CT (22 cancers; 22 controls), radiologists measured the endometrial thickness to detect endometrial cancer. An 8 mm thickness threshold yielded 86% sensitivity and 91% specificity for detecting the tumors. This study considers only endometrial cancers, but our test dataset may include other types of malignant uterine tumors.

• Adrenal tumors [38]: The study considered 91 lung cancer patients, 53 of whom had adrenal metastases (autopsy-validated). Radiologists could detect these metastases on CT with a sensitivity of 20 to 41%, but high specificity (84 to 99%). This study considers only metastases, but our test dataset includes both metastatic and primary adrenal malignant tumors.
References

[1] Mathers, C.D., Boerma, T., Ma Fat, D.: Global and regional causes of death. British Medical Bulletin 92(1), 7–32 (2009)

[2] Sung, H., Ferlay, J., Siegel, R.L., Laversanne, M., Soerjomataram, I., Jemal, A., Bray, F.: Global cancer statistics 2020: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA: A Cancer Journal for Clinicians 71(3), 209–249 (2021)

[3] Crosby, D., Bhatia, S., Brindle, K.M., Coussens, L.M., Dive, C., Emberton, M., Esener, S., Fitzgerald, R.C., Gambhir, S.S., Kuhn, P., et al.: Early detection of cancer. Science 375(6586), 9040 (2022)

[4] McCollough, C.H., Bushberg, J.T., Fletcher, J.G., Eckel, L.J.: Answers to common questions about the use and safety of CT scans. In: Mayo Clinic Proceedings, vol. 90, pp. 1380–1392 (2015). Elsevier

[5] Hoogenboom, S.A., Engels, M.M., Chuprin, A.V., Hooft, J.E., LeGout, J.D., Wallace, M.B., Bolan, C.W.: Prevalence, features, and explanations of missed and misinterpreted pancreatic cancer on imaging: a matched case–control study. Abdominal Radiology 47(12), 4160–4172 (2022)

[6] Xia, Y., Yu, Q., Chu, L., Kawamoto, S., Park, S., Liu, F., Chen, J., Zhu, Z., Li, B., Zhou, Z., et al.: The FELIX project: Deep networks to detect pancreatic neoplasms. medRxiv (2022)

[7] Cao, K., Xia, Y., Yao, J., Han, X., Lambert, L., Zhang, T., Tang, W., Jin, G., Jiang, H., Fang, X., et al.: Large-scale pancreatic cancer detection via non-contrast CT and deep learning. Nature Medicine 29(12), 3033–3043 (2023)

[8] Hu, C., Xia, Y., Zheng, Z., Cao, M., Zheng, G., Chen, S., Sun, J., Chen, W., Zheng, Q., Pan, S., et al.: AI-based large-scale screening of gastric cancer from noncontrast CT imaging. Nature Medicine, 1–9 (2025)

[9] Markotić, V., Pojužina, T., Radančević, D., Miljko, M., Pokrajčić, V.: The radiologist workload increase; where is the limit?: mini review and case study. Psychiatria Danubina 33(suppl 4), 768–770 (2021)

[10] Bassi, P.R., Yavuz, M.C., Wang, K., Chen, X., Li, W., Decherchi, S., Cavalli, A., Yang, Y., Yuille, A., Zhou, Z.: RadGPT: Constructing 3D image-text tumor datasets. arXiv preprint arXiv:2501.04678 (2025)

[11] Liu, J., Zhang, Y., Wang, K., Yavuz, M.C., Chen, X., Yuan, Y., Li, H., Yang, Y., Yuille, A., Tang, Y., et al.: Universal and extensible language-vision models for organ segmentation and tumor detection from abdominal computed tomography. Medical Image Analysis, 103226 (2024)

[12] Chen, J., Xia, Y., Yao, J., Yan, K., Zhang, J., Lu, L., Wang, F., Zhou, B., Qiu, M., Yu, Q., et al.: CancerUniT: Towards a single unified model for effective detection, segmentation, and diagnosis of eight major cancers using a large collection of CT scans. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 21327–21338 (2023)

[13] Antonelli, M., Reinke, A., Bakas, S., Farahani, K., Kopp-Schneider, A., Landman, B.A., Litjens, G., Menze, B., Ronneberger, O., Summers, R.M., et al.: The medical segmentation decathlon. Nature Communications 13(1), 1–13 (2022)

[14] Bilic, P., Christ, P., Li, H.B., Vorontsov, E., Ben-Cohen, A., Kaissis, G., Szeskin, A., Jacobs, C., Mamani, G.E.H., Chartrand, G., et al.: The liver tumor segmentation benchmark (LiTS). Medical Image Analysis 84, 102680 (2023)

[15] Heller, N., Isensee, F., Maier-Hein, K.H., Hou, X., Xie, C., Li, F., Nan, Y., Mu, G., Lin, Z., Han, M., et al.: The state of the art in kidney and kidney tumor segmentation in contrast-enhanced CT imaging: Results of the KiTS19 challenge.
Medical Image Analysis, 101821 (2020)

[16] Li, W., Zhou, X., Chen, Q., Lin, T., Bassi, P.R., Plotka, S., Cwikla, J.B., Chen, X., Ye, C., Zhu, Z., et al.: PanTS: The pancreatic tumor segmentation dataset. arXiv preprint arXiv:2507.01291 (2025)

[17] Hamamci, I.E., Er, S., Menze, B.: CT2Rep: Automated radiology report generation for 3D medical imaging. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 476–486 (2024). Springer

[18] Blankemeier, L., Cohen, J.P., Kumar, A., Van Veen, D., Gardezi, S.J.S., Paschali, M., Chen, Z., Delbrouck, J.-B., Reis, E., Truyts, C., et al.: Merlin: A vision language foundation model for 3D computed tomography. Research Square, 3 (2024)

[19] Sellergren, A., Kazemzadeh, S., Jaroensri, T., Kiraly, A., Traverse, M., Kohlberger, T., Xu, S., Jamil, F., Hughes, C., Lau, C., et al.: MedGemma technical report. arXiv preprint arXiv:2507.05201 (2025)

[20] Hamamci, I.E., Er, S., Almas, F., Simsek, A.G., Esirgun, S.N., Dogan, I., Dasdelen, M.F., Wittmann, B., Simsar, E., Simsar, M., et al.: A foundation model utilizing chest CT volumes and radiology reports for supervised-level zero-shot detection of abnormalities. CoRR (2024)

[21] Achiam, J., Adler, S., Agarwal, S., Ahmad, L., Akkaya, I., Aleman, F.L., Almeida, D., Altenschmidt, J., Altman, S., Anadkat, S., et al.: GPT-4 technical report. arXiv preprint arXiv:2303.08774 (2023)

[22] Chen, Y., Xiao, W., Bassi, P.R., Zhou, X., Er, S., Hamamci, I.E., Zhou, Z., Yuille, A.: Are vision language models ready for clinical diagnosis? A 3D medical benchmark for tumor-centric visual question answering. arXiv preprint arXiv:2505.18915 (2025)

[23] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763 (2021). PMLR

[24] Moon, T.K.: The expectation-maximization algorithm. IEEE Signal Processing Magazine 13(6), 47–60 (1996)

[25] Moawad, A.W., Ahmed, A.A., ElMohr, M., Eltaher, M., Habra, M.A., Fisher, S., Perrier, N., Zhang, M., Fuentes, D., Elsayes, K.: Voxel-level segmentation of pathologically-proven adrenocortical carcinoma with Ki-67 expression (Adrenal-ACC-Ki67-Seg) [data set]. The Cancer Imaging Archive 8 (2023)

[26] Radford, A., Metz, L., Chintala, S.: Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434 (2015)

[27] Bassi, P.R., Li, W., Chen, J., Zhu, Z., Lin, T., Decherchi, S., Cavalli, A., Wang, K., Yang, Y., Yuille, A.L., et al.: Learning segmentation from radiology reports. arXiv preprint arXiv:2507.05582 (2025)

[28] Gao, Y., Zhou, M., Liu, D., Yan, Z., Zhang, S., Metaxas, D.N.: A data-scalable transformer for medical image segmentation: architecture, model efficiency, and benchmark.
arXiv preprint arXiv:2203.00131 (2022) [29] Bassi, P.R., Li, W., Tang, Y., Isensee, F., Wang, Z., Chen, J., Chou, Y.-C., Kirch- hoff, Y., Rokuss, M., Huang, Z., Ye, J., He, J., Wald, T., Ulrich, C., Baumgartner, M., Roy, S., Maier-Hein, K.H., Jaeger, P., Ye, Y., Xie, Y., Zhang, J., Chen, Z., Xia, Y., Xing, Z., Zhu, L., Sadegheih, Y., Bozorgpour, A., Kumari, P., Azad, R., Merhof, D., Shi, P., Ma, T., Du, Y., Bai, F., Huang, T., Zhao, B., Wang, H., Li, X., Gu, H., Dong, H., Yang, J., Mazurowski, M.A., Gupta, S., Wu, L., Zhuang, J., Chen, H., Roth, H., Xu, D., Blaschko, M.B., Decherchi, S., Cavalli, A., Yuille, A.L., Zhou, Z.: Touchstone benchmark: Are we on the right way for evaluat- ing ai algorithms for medical segmentation? Conference on Neural Information Processing Systems (2024) [30] Isensee, F., Jaeger, P.F., Kohl, S.A., Petersen, J., Maier-Hein, K.H.: nnu-net: a self-configuring method for deep learning-based biomedical image segmentation. Nature Methods 18(2), 203–211 (2021) [31] Grauw, M., Scholten, E.T., Smit, E.J., Rutten, M.J., Prokop, M., Ginneken, B., Hering, A.: The uls23 challenge: A baseline model and benchmark dataset for 3d universal lesion segmentation in computed tomography. Medical Image Analysis 30 102, 103525 (2025) [32] Zhuang, J., Wu, L., Wang, Q., Fei, P., Vardhanabhuti, V., Luo, L., Chen, H.: Mim: Mask in mask self-supervised pre-training for 3d medical image analysis. IEEE Transactions on Medical Imaging (2025) [33] Zhuang, J., Luo, L., Wang, Q., Wu, M., Luo, L., Chen, H.: Advancing volu- metric medical image segmentation via global-local masked autoencoders. IEEE Transactions on Medical Imaging (2025) [34] Reiss, T., Cohen, N., Bergman, L., Hoshen, Y.: Panda: Adapting pretrained fea- tures for anomaly detection and segmentation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2806–2814 (2021) [35] Malik, R.F., Berry, R., Lau, B.D., Busireddy, K.R., Patel, P., Patel, S.H., Fishman, E.K., Bivalacqua, T.J., Johnson, P.T., Sedaghat, F.: Systematic eval- uation of imaging features of early bladder cancer using computed tomography performed before pathologic diagnosis. Tomography 9(5), 1734–1744 (2023) [36] Frezza, E., Mezghebe, H.: Gallbladder carcinoma: a 28 year experience. Interna- tional surgery 82(3), 295–300 (1997) [37] Sui, H., Ma, R., Liu, L., Gao, Y., Zhang, W., Mo, Z.: Detection of incidental esophageal cancers on chest ct by deep learning. Frontiers in Oncology 11, 700210 (2021) [38] Allard, P., Yankaskas, B.C., Fletcher, R.H., Alden Parkery, L., Halvorsen Jr, R.A.: Sensitivity and specificity of computed tomography for the detection of adrenal metastatic lesions among 91 autopsied lung cancer patients. Cancer 66(3), 457– 462 (1990) [39] Ardila, D., Kiraly, A.P., Bharadwaj, S., Choi, B., Reicher, J.J., Peng, L., Tse, D., Etemadi, M., Ye, W., Corrado, G., et al.: End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography. Nature medicine 25(6), 954–961 (2019) [40] Mikhael, P.G., Wohlwend, J., Yala, A., Karstens, L., Xiang, J., Takigami, A.K., Bourgouin, P.P., Chan, P., Mrah, S., Amayri, W., et al.: Sybil: a validated deep learning model to predict future lung cancer risk from a single low-dose chest computed tomography. 
Journal of Clinical Oncology 41(12), 2191–2200 (2023) [41] Cao, W., Zhang, J., Shui, Z., Wang, S., Chen, Z., Li, X., Lu, L., Ye, X., Liang, T., Zhang, Q., et al.: Boosting vision semantic density with anatomy normality mod- eling for medical vision-language pre-training. arXiv preprint arXiv:2508.03742 (2025) 31 [42] Bassi, P.R., Dertkigil, S.S., Cavalli, A.: Improving deep neural network general- ization and robustness to background bias via layer-wise relevance propagation optimization. Nature Communications 15(1), 291 (2024) [43] Geirhos, R., Jacobsen, J.-H., Michaelis, C., Zemel, R., Brendel, W., Bethge, M., Wichmann, F.A.: Shortcut learning in deep neural networks. Nature Machine Intelligence 2(11), 665–673 (2020) [44] DeGrave, A.J., Janizek, J.D., Lee, S.-I.: Ai for radiographic covid-19 detection selects shortcuts over signal. Nature Machine Intelligence 3(7), 610–619 (2021) [45] Irvin, J., Rajpurkar, P., Ko, M., Yu, Y., Ciurea-Ilcus, S., Chute, C., Marklund, H., Haghgoo, B., Ball, R., Shpanskaya, K., et al.: Chexpert: A large chest radiograph dataset with uncertainty labels and expert comparison. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 590–597 (2019) [46] Wang, X., Peng, Y., Lu, L., Lu, Z., Bagheri, M., Summers, R.M.: Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classi- fication and localization of common thorax diseases. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2097–2106 (2017) [47] Qu, C., Zhang, T., Qiao, H., Liu, J., Tang, Y., Yuille, A., Zhou, Z.: Abdomenatlas- 8k: Annotating 8,000 abdominal ct volumes for multi-organ segmentation in three weeks. In: Conference on Neural Information Processing Systems, vol. 21 (2023). https://github.com/MrGiovanni/AbdomenAtlas [48] Li, W., Qu, C., Chen, X., Bassi, P.R., Shi, Y., Lai, Y., Yu, Q., Xue, H., Chen, Y., Lin, X., et al.: Abdomenatlas: A large-scale, detailed-annotated, & multi- center dataset for efficient transfer learning and open algorithmic benchmarking. Medical Image Analysis, 103285 (2024) [49] Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al.: Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023) [50] Eisenhauer, E.A., Therasse, P., Bogaerts, J., Schwartz, L.H., Sargent, D., Ford, R., Dancey, J., Arbuck, S., Gwyther, S., Mooney, M., et al.: New response eval- uation criteria in solid tumours: revised recist guideline (version 1.1). European journal of cancer 45(2), 228–247 (2009) [51] Miller, A.B., Hoogstraten, B., Staquet, M., Winkler, A.: Reporting results of cancer treatment. Cancer 47(1), 207–214 (1981) [52] Płotka, S., Mert, G., Chrabaszcz, M., Szczurek, E., Sitek, A.: Mamba goes home: Hierarchical soft mixture-of-experts for 3d medical image segmentation. arXiv preprint arXiv:2507.06363 (2025) 32 [53] Wasserthal, J., Breit, H.-C., Meyer, M.T., Pradella, M., Hinck, D., Sauter, A.W., Heye, T., Boll, D.T., Cyriac, J., Yang, S., et al.: Totalsegmentator: robust segmen- tation of 104 anatomic structures in ct images. Radiology: Artificial Intelligence 5(5) (2023) [54] Li, W., Yuille, A., Zhou, Z.: How well do supervised models transfer to 3d image segmentation? In: International Conference on Learning Representations (2024). https://github.com/MrGiovanni/SuPreM [55] Korevaar, S., et al.: Incidental detection of prostatecancerwithcomputedtomog- raphyscans. 
SciRep11 (1) 7956 (2021) [56] Valizadeh, P., Jannatdoust, P., Tahamtan, M., Ghorani, H., Dorcheh, S.S., Farnoud, K., Salahshour, F.: Diagnostic performance of different imaging modal- ities for splenic malignancies: A comparative meta-analysis. European Journal of Radiology Open 12, 100566 (2024) [57] Franconeri, A., Fang, J., Brook, A., Brook, O.R.: Asymptomatic endometrial thickening of 8 mm or greater on postcontrast computed tomography in post- menopausal women is a predictor of endometrial cancer. Journal of computer assisted tomography 43(1), 136–142 (2019) 33
Scaling Artificial Intelligence for Multi-Tumor Early Detection with More Reports, Fewer Masks

Pedro R. A. S. Bassi1,2,3, Xinze Zhou1†, Wenxuan Li1†, Szymon Płotka4†, Jieneng Chen1, Qi Chen1, Zheren Zhu5,14, Jakub Prządo6, Ibrahim E. Hamamci7,8, Sezgin Er9, Yuhan Wang10, Ashwin Kumar11, Bjoern Menze9, Jarosław B. Ćwikła12, Yuyin Zhou10, Akshay S. Chaudhari11, Curtis P. Langlotz11, Sergio Decherchi3, Andrea Cavalli2,3,13, Kang Wang14, Yang Yang14, Alan L. Yuille1, Zongwei Zhou1*

1Johns Hopkins University, Baltimore, MD, USA. 2 . 3Istituto Italiano di Tecnologia, Genova, Italy. 4Jagiellonian University, Kraków, Poland. 5 . 6Warmian-Masurian Cancer Center, Olsztyn, Poland. 7 . 8ETH AI Center, Zurich, Switzerland. 9Istanbul Medipol University, Istanbul, Turkey. 10 . 11Stanford University, Stanford, CA, USA. 12 . 13École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland. 14 . *Corresponding author(s). E-mail(s): ; †These authors contributed equally to this work.

Abstract

Early tumor detection saves lives. Each year, more than 300 million computed tomography (CT) scans are performed worldwide, offering a vast opportunity for effective cancer screening. However, detecting small or early-stage tumors on these CT scans remains challenging, even for experts. Artificial intelligence (AI) models can assist by highlighting suspicious regions, but training such models typically requires extensive tumor masks: detailed, voxel-wise outlines of tumors manually drawn by radiologists. Drawing these masks is costly, requiring years of effort and millions of dollars. In contrast, nearly every CT scan in clinical practice is already accompanied by medical reports describing the tumor's size, number, appearance, and sometimes pathology results: information that is rich, abundant, and often underutilized for AI training. We introduce R-Super, which trains AI to segment tumors that match their descriptions in medical reports. This approach scales AI training with large collections of readily available medical reports, substantially reducing the need for manually drawn tumor masks. When trained on 101,654 reports, AI models achieved performance comparable to those trained on 723 masks. Combining reports and masks further improved sensitivity by +13% and specificity by +8%, surpassing radiologists in detecting five of the seven tumor types. Notably, R-Super enabled segmentation of tumors in the spleen, gallbladder, prostate, bladder, uterus, and esophagus, for which no public masks or AI models previously existed. This study challenges the long-held belief that large-scale, labor-intensive tumor mask creation is indispensable, establishing a scalable and accessible path toward early detection across diverse tumor types. We plan to release our trained models, code, and dataset at https://github.com/MrGiovanni/R-Super.

1 Main

Cancer is a leading cause of death worldwide [1, 2]. Early detection is crucial. Five-year survival rates often exceed 90% when tumors are detected at an early stage but can drop below 20% once the disease becomes advanced or metastatic [3]. There is no effective, widely adopted screening for these diseases, even among high-risk populations. Computed tomography (CT) is already part of routine care, with more than 300 million CT scans performed globally each year and 85 million in the United States alone [4]. These scans represent a vast, untapped opportunity for detecting tumors sooner.
However, detecting tumors at an early stage from CT scans is extremely difficult, and even experienced radiologists can miss them. For instance, in a study of CT scans taken before pancreatic cancer diagnosis, about 50% of the tumors were present but overlooked by radiologists [5]. Artificial Intelligence (AI) has the potential to help radiologists detect early tumors [6-8]. AI offers several advantages: AI does not get tired or suffer from attentional effects; AI can see CT scans in 3D, while radiologists analyze them slice by slice; AI can train on large datasets (e.g., 101,654 scans in this study), surpassing the number of CT scans that a radiologist analyzes annually (est. 5,000 scans [9]); and AI can see disease signs usually invisible to radiologists, such as signs of pancreatic tumors on non-contrast CT [7]. State-of-the-art AI models in tumor detection are typically formulated as semantic segmentation [10-12], a type of AI that localizes tumors and outlines them on the CT scan, accurately indicating tumor locations and boundaries, allowing radiologists to easily verify the AI's findings.

A major challenge in developing these segmentation models is the need for tumor masks, precise outlines drawn by radiologists. Creating accurate masks is labor-intensive, costly, and not part of the standard clinical workflow. Drawing a mask for each tumor can take up to 30 minutes, and one study required 8 radiologists, five years, and millions of dollars to produce 3,125 pancreatic tumor masks [6]. Public CT datasets contain tumor masks only for a few organs, such as the kidney, liver, and pancreas, with a very small number of annotated tumor scans [13-16]. For many clinically important organs, such as the spleen, gallbladder, prostate, bladder, uterus, and esophagus, no public tumor masks exist, creating a significant barrier to developing multi-tumor segmentation models.

Unlike drawing tumor masks, radiologists write medical reports as part of their standard clinical workflow. These reports describe tumor characteristics observed in the CT scans, including the number, approximate size, location within organs, and attenuation (whether the tumor appears bright or dark), and sometimes include pathology results from biopsy or surgery. As a result, paired CT-Report datasets are naturally much larger than CT-Mask datasets (Figure 1). Public datasets [17, 18] already provide around 25,000 CT-Report pairs, and a single hospital can easily accumulate over 500,000 CT-Report pairs (Section 4.1). In contrast, public datasets rarely exceed 1,000 CT-Mask pairs [13-16]. This striking difference raises an important question: Can medical reports supplement, or even replace, tumor masks in training AI for tumor segmentation?

Recent advances in vision-language models (VLMs) have shown capability in generating descriptive captions [17-21]. For example, models like Google's MedGemma [19] and Stanford's Merlin [18] can generate medical reports from CT scans. However, these models are not designed for segmentation, which requires precise tumor localization and boundary delineation. As a result, they frequently produce errors such as missing existing tumors, detecting non-existent ones, or failing to describe small and subtle lesions accurately, the very cases that are most clinically important [10, 22].
A major limitation lies in their training paradigm: current VLMs rely on contrastive language-image pre-training (CLIP) [23], which was designed to learn from generic image-text pairs like social media captions, not from the rich, structured information in radiology reports. Our approach addresses this limitation by explicitly modeling the tumor's location as a hidden variable, similar in spirit to the Expectation-Maximization (EM) framework [24]. The precise and descriptive nature of medical reports thus becomes a powerful supervisory signal for training.

In this paper, we introduce R-Super (Report Supervision), which trains AI to segment tumors using radiology reports, and pathology reports when available (Figure 1). We then examine how much these reports can reduce the need for manual tumor masks. R-Super enables tumor segmentation not only in organs with many available masks but also in those with few or no tumor masks. Using R-Super, we trained the first open AI model capable of segmenting tumors across seven organs (footnote 1). The key innovation of R-Super lies in report-supervised loss functions that directly teach the AI to segment tumors consistent with the tumor descriptions in reports, in terms of tumor count, size, location (footnote 2), and attenuation. Conceptually, this involves learning from incomplete data: reports describe many tumor characteristics but not the exact tumor outline, so R-Super teaches the segmentation model to estimate outlines consistent with the available tumor characteristics in reports. By using reports to guide tumor segmentation, unlike prior approaches (e.g., VLMs), R-Super learns efficiently and achieves superior tumor detection performance (Table 2). R-Super extracts the tumor characteristics from reports using large language models (LLMs) with radiologist-designed prompts and stores them before training. Importantly, reports are only used in training, not in inference. R-Super can train any segmentation model architecture, and it can learn from just CT-Report pairs. To further improve accuracy, R-Super can also learn from CT-Mask pairs together with the CT-Report pairs. Thus, R-Super can segment tumors without public masks, and it can further scale the largest CT-Mask datasets (e.g., PanTS [16]) with many CT-Report pairs.

Footnote 1: These include six tumor types with no public tumor mask in CT: spleen, gallbladder, prostate, bladder, uterus, and esophagus. We also include adrenal tumors, which have only 53 public tumor masks [25]. No public AI model can segment these seven tumor types.

To train R-Super to segment multiple tumor types lacking public tumor masks (footnote 3), we created the largest CT-Report training dataset to date: 101,654 CT-Report pairs (Section 4.1 and Table 1). These CT scans were performed at the University of California San Francisco (UCSF) hospital and affiliated institutions during the last 28 years. Our dataset also includes the public Merlin dataset [18] (25,494 CTs, Stanford Hospital, from 2012 to 2018). To the best of our knowledge, no previous study used 100,000+ CT-Report pairs to train AI. First, we used these 101,654 CT-Report pairs to train R-Super. Then, to further improve performance, we created tumor masks for our dataset, and trained R-Super on both the CT-Report and CT-Mask pairs.
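The sketch below illustrates one plausible way to cache these LLM-extracted tumor characteristics before training. It is an illustration only, not the authors' released code; all class and field names are hypothetical.

```python
# Minimal sketch (not the authors' code) of a per-report label record that an
# LLM-extraction step could produce and cache before segmentation training.
# All class and field names are hypothetical.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TumorFinding:
    organ: str                           # e.g., "spleen"
    diameters_mm: List[float]            # 1-3 reported diameters (RECIST/WHO style)
    sub_segment: Optional[str] = None    # e.g., "pancreas head", if reported
    slice_index: Optional[int] = None    # axial slice localizing the tumor, if reported
    attenuation: Optional[str] = None    # "hypo", "hyper", or "iso", if reported

@dataclass
class ReportLabel:
    ct_id: str
    findings: List[TumorFinding] = field(default_factory=list)

    def tumors_in(self, organ: str) -> List[TumorFinding]:
        """All findings the report places in a given organ."""
        return [f for f in self.findings if f.organ == organ]

# Example record for the report "3 cm hypodense lesion in the spleen".
label = ReportLabel(
    ct_id="case_0001",
    findings=[TumorFinding(organ="spleen", diameters_mm=[30.0], attenuation="hypo")],
)
print(label.tumors_in("spleen"))
```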
To create these tumor masks efficiently, we introduced a report-guided active learning cycle (Section 4.1): (I) R-Super automatically created tumor masks for our dataset; (II) we identified the most incorrect tumor masks by comparing them to reports; (III) these incorrect tumor masks were revised by 31 radiologists; (IV) we trained R-Super using the revised tumor masks and all CT-Report pairs. We repeated this cycle until reaching 723 radiologist-corrected tumor masks. The radiologists reported that our report-guided active learning cycle reduced the average time to create each mask from about 30 to five minutes. Learning jointly from CT-Report and CT-Mask pairs, R-Super achieves substantially higher performance than standard segmentation training with just CT-Mask pairs, both when few masks are available (first active learning cycles) and when many are available (final cycles).

We evaluated R-Super through internal and external validation on three datasets. Internal validation used unseen patients from the hospitals in our training dataset, while external validation tested R-Super in a hospital never seen during training. R-Super accurately detected tumors in seven organs lacking public tumor masks. In five of these tumor types, R-Super surpassed radiologist tumor detection performances reported in the literature (Table 2) (footnote 4). In tumor detection, R-Super consistently outperformed VLMs such as Merlin [18] (Stanford University) and MedGemma [19] (Google) by double-digit margins (Table 2). Trained with 101,654 CT-Report pairs and 723 CT-Mask pairs, R-Super exceeded standard segmentation (trained only on the 723 CT-Mask pairs our radiologists created) by margins of +13/+8% in sensitivity/specificity (Figure 3). Importantly, these outperformance margins were also large for detecting small tumors.

Footnote 2: A tumor location in a report is provided as the organ, organ sub-segment, and/or slice where the tumor is. A slice is a plane localizing the tumor in the 3D CT scan.

Footnote 3: Our dataset includes benign, primary (malignant) and metastatic tumors.

Footnote 4: We compare our results with radiologist tumor detection performance reported in the literature. Our AI was evaluated on a dataset containing both healthy patients and those with malignant tumors, and we searched the literature for studies that also assessed radiologists on datasets with healthy and malignant tumor patients. However, this comparison remains limited, as the AI and radiologists were tested on different datasets, including distinct patient populations, CT scanners, and tumor characteristics (see Appendix B for an analysis of the selected studies and limitations of our comparison between radiologists and AI).

With more than 300 million CT scans performed annually for diverse clinical reasons, segmentation models have the potential to scan images in the background, flag suspicious studies and regions, and prompt radiologists to review those areas and refer patients for targeted follow-up when needed. We drew radiologist performances from published studies that, where possible, tested tumor detection on datasets containing both healthy and malignant tumor patients, such as our test dataset. Some caveats limit head-to-head comparisons: the bladder [35] and gallbladder [36] studies lacked normal controls (so only sensitivity is available); the esophagus study [37] considered non-contrast CT, but our test set used contrast-enhanced CT.
Table 3: With R-Super, reports improve small tumor detection across seven types of tumors unavailable in public datasets. The detection of small tumors is crucial because it can improve early cancer detection and patient survival. Results include R-Super trained with reports (101K) and masks (723), and R-Super trained with reports only. We compare it to standard segmentation (trained with only masks) and to public AI models: ULS, an nnU-Net [30] segmentation model trained for universal lesion segmentation on lesions in unspecified organs; MedGemma, the flagship medical VLM from Google; and Merlin, the latest medical VLM from Stanford University. We test on 470 CT scans from UCSF (internal validation), 257 healthy and 213 with small tumors.

The foreground term of the Volume Loss is designed to avoid a strong gradient when V_{s,o} > V_{r,o}, since a strong gradient in that regime can increase the number of tumors missed by the AI (by pushing it towards a V_{s,o} = 0 solution):

L'_{forg,o}(V_{s,o}, V_{r,o}) = |V_{s,o} - V_{r,o}| / (V_{s,o} + V_{r,o} + E)   (2)

Footnote 11: One-diameter measurements are common for small, rounder tumors, and they are used in the RECIST (Response Evaluation Criteria in Solid Tumors) guideline [50]. Two diameters are used in the World Health Organization (WHO) tumor measurement standard [51], where the first diameter is the largest tumor diameter in any CT axial slice, and the second diameter is measured perpendicularly to the first, in the same slice. Some reports have a third diameter, perpendicular to the other two.

Footnote 12: Organ segmentation is usually more accurate than tumor segmentation: DSC scores above 80% are common in organ segmentation [29], but state-of-the-art tumor segmentation models rarely reach 70% DSC [12, 52]. Public CT datasets have few masks for few types of tumors, but these datasets have many masks for many organs [48, 53]. Moreover, there are many accurate and public organ segmentation models, such as TotalSegmentator [53] and Touchstone [29]. Our organ segmentation model was trained on the public dataset AbdomenAtlas [10]. It segments 39 organs and structures, including all seven organs where our dataset has tumors, and pancreas sub-segments.

Footnote 13: Consider tumor slices z_i, for tumors i with maximum diameter d_i; z is the vertical axis of the CT. We make the organ mask zero in all z coordinates where the distance to every tumor slice z_i is larger than the corresponding tumor diameter d_i. I.e., o_{h,w,l} = 0 if |h - z_i| > d_i for all tumors i.

Fig. 7: The Volume Loss enforces the volume of segmented tumors to match the tumor volume estimated from the report. The loss is applied to deep layers of the segmentation model, as deep supervision. (I) An LLM extracts tumor diameters and locations (organ/organ sub-segment/slice) from reports. From the diameters, we estimate the total tumor volume from the radiology reports, V_{r,o}, within each organ/organ sub-segment o. (II) We sum the AI tumor segmentation output (softmax/sigmoid) for all voxels inside the organ/sub-segment o, estimating the total segmented tumor volume in the organ/sub-segment, V_{s,o}. Pre-saved, AI-made organ/sub-segment masks identify the voxels inside the organ/sub-segment. (III) We use a custom regression loss (Eq. 3, right panel of the figure) to enforce the segmented tumor volume, V_{s,o}, to match the tumor volume in the radiology report, V_{r,o}. This loss includes a tolerance margin to account for human and estimation errors of V_{r,o}. The figure plots the loss for V_{r,o} = 1000 mm3 and varying V_{s,o}. In case the report informs the tumor slices, the Volume Loss also ensures that the segmented tumors are near the informed slices. The constant E (set to 500 mm3) provides numerical stability for small V_{r,o}.
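As a concrete reading of Eq. (2), the following minimal sketch (a PyTorch rendering under stated assumptions, not the authors' implementation) computes the foreground Volume Loss term from a predicted tumor-probability map and a pre-saved organ mask. It assumes isotropic 1 mm voxels, so that summed probabilities are directly in mm3.

```python
# Minimal sketch (assumed PyTorch rendering, not the authors' code) of the
# foreground Volume Loss term of Eq. (2):
#   L'_forg,o = |V_s,o - V_r,o| / (V_s,o + V_r,o + E)
import torch

E = 500.0  # stabilizing constant, in mm3 (value stated in the Fig. 7 caption)

def volume_loss_forg(tumor_prob: torch.Tensor,
                     organ_mask: torch.Tensor,
                     v_report_mm3: float) -> torch.Tensor:
    """tumor_prob: (D,H,W) softmax/sigmoid tumor probabilities.
    organ_mask: (D,H,W) binary, pre-saved AI-made organ/sub-segment mask.
    v_report_mm3: total tumor volume in the organ, estimated from the report.
    Assumes 1 mm isotropic voxels, so one voxel = 1 mm3."""
    v_seg = (tumor_prob * organ_mask).sum()            # V_s,o
    return (v_seg - v_report_mm3).abs() / (v_seg + v_report_mm3 + E)

# Toy usage: a 64^3 patch with an organ region and a reported 14,000 mm3 burden.
prob = torch.rand(64, 64, 64) * 0.1
organ = torch.zeros(64, 64, 64)
organ[16:48, 16:48, 16:48] = 1.0
print(volume_loss_forg(prob, organ, 14_000.0))
```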
Importantly, the tumor volumes estimated from reports (V_{r,o}) are not perfect. They are subject to human errors, inter-observer variance, and approximation errors in our volume estimation from diameters. Thus, we added a tolerance margin to the report-estimated volume (Eqs. 5-6), applied to tumors smaller than 120 mm in diameter (V_{r,o} < 904,779 mm3), yielding an adjusted target \hat{V}_{r,o}. The full Volume Loss combines the foreground term with a background term:

L_{vol,o} = L_{forg,o}(V_{s,o}, \hat{V}_{r,o}) + L_{bkg,o}(T_o).   (7)

4.3.3 Ball Loss

Unlike the Volume Loss, the Ball Loss is applied to the final segmentation output of the segmentation model (last layer), and it enforces strict constraints. First, it enforces that segmented tumors must be in the locations (organs/organ sub-segments) where the report mentions tumors. Then, for each location, it enforces that the number of segmented tumors must match the number of tumors in the report, and that each of these segmented tumors must match the diameter, volume and slice (when informed) of one tumor described in the report. Overall, the Ball Loss uses multiple pieces of information from reports to guide tumor segmentation.

First, like the Volume Loss, the Ball Loss uses the cross-entropy loss to penalize tumor segmentations for organs with no tumor in the reports. For organs with tumors in the report, Figure 8 displays the Ball Loss procedure. First, we multiply (elementwise) the tumor segmentation output (T_o) with the corresponding pre-saved organ segmentation mask, T_o ⊗ O. This multiplication selects only the tumors inside the organ. Second, we apply sequential ball convolutions to locate each individual tumor the report mentions, starting with the largest. A ball convolution is a standard convolution using a non-learnable binary kernel shaped like a ball, with the same diameter as the tumor diameter in the report (for tumors with multiple diameters, we use the largest). The convolution moves the ball inside the tumor segmentation output.

Fig. 8: The Ball Loss converts reports into voxel-level supervision. (I) A zero-shot large language model (LLM) extracts and saves tumor count, locations (organs or sub-regions), slices, attenuation, and diameters from reports.
(II) During segmentation training, the tumors in the reports are localized in the CT by ball convolutions: standard convolutions with fixed, spherical binary kernels matching the reported tumor diameter (plus a small margin). We apply the convolution to the segmentation model's outputs (tumor probabilities), ignoring locations outside the organ/organ sub-segment containing the tumor (with the help of pre-saved, AI-made organ masks). When the tumor slice is informed by the report, we also ignore locations far from the slice. The convolution output is maximum when the ball is at the most probable location for the tumor, the highest probability ball. (III) By setting the top-N most probable voxels inside the highest-probability ball to 1, and the remaining voxels to 0, we create a segmentation mask for the tumor. N is the number of voxels the tumor is expected to occupy, according to its volume estimated from the report. If the report shows multiple tumors, we use a sequence of ball convolutions to locate them one by one. After each tumor is located and added to the segmentation mask, we remove it from the segmentation output, to avoid reuse. (IV) We use the mask to optimize the segmentation model, using the Dice loss and a custom cross-entropy loss, which has higher weights near the tumor center. The masks created by the Ball Loss have tumors with correct tumor size, count and locations (organs/sub-segments), and they get better as the segmentation model trains and improves. The Ball Loss includes tolerances for uncertain tumor borders.

In each ball position, the convolution output is the sum of all tumor probabilities (softmax/sigmoid) within the ball. The position where this sum is highest, the highest probability ball, indicates the most likely location for the tumor with the same diameter as the ball. The preliminary multiplication between the tumor segmentation output and the organ segmentation mask, T_o ⊗ O, prevents the highest probability balls from falling outside the organ. If the report informs the tumor slice, we modify the organ mask, O, by making it zero on CT slices away from the informed tumor slice (more than 1 tumor diameter away). This forces the ball convolution to find a highest probability ball that intersects the informed tumor slice. Additionally, to improve tumor localization, we weight the ball convolution kernel slightly higher around the ball center, so the convolution responds more strongly if the tumor's center (where probabilities tend to be highest) aligns with the ball kernel's center (footnote 14).

Our first ball convolution uses the diameter of the largest tumor reported in the organ, d_0. The convolution output is a 3D volume, whose maximum is the most likely tumor center, c_0, for a tumor of diameter d_0. We place the highest probability ball (diameter d_0, center c_0) in the tumor segmentation output, and select the N_0 voxels with the highest tumor probability inside the ball. N_0 represents how many voxels the tumor should have; we derive N_0 from the tumor volume estimated from the report (see Section 4.3.2). Then, we create an empty segmentation mask (all zero) and set these N_0 voxels to 1. Before using another ball convolution to locate the next tumor, we zero out these N_0 voxels inside the tumor segmentation output. This ensures that the next ball convolution will not locate the tumor we just located (footnote 15). We repeat this process (ball convolution, add tumor to segmentation mask, remove tumor from segmentation output) until all tumors mentioned in the report are added to the segmentation mask.
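The following is a minimal sketch of the ball-convolution localization just described. It is not the authors' code, and it simplifies several details stated in the text (a binary kernel instead of the Gaussian-weighted one, no slice constraint, no diameter margin).

```python
# Minimal sketch (not the authors' code) of ball-convolution localization at the
# core of the Ball Loss. Simplifications vs. the paper: binary kernel (no Gaussian
# center weighting), no slice constraint, single tumor-probability channel.
import torch
import torch.nn.functional as F

def ball_kernel(diameter_vox: int) -> torch.Tensor:
    """Binary spherical kernel of a given (odd) diameter in voxels."""
    r = diameter_vox // 2
    g = torch.arange(diameter_vox) - r
    z, y, x = torch.meshgrid(g, g, g, indexing="ij")
    return ((z**2 + y**2 + x**2) <= r**2).float()

def locate_reported_tumors(tumor_prob, organ_mask, diams_vox, voxels_per_tumor):
    """tumor_prob, organ_mask: (D,H,W); diams_vox sorted large -> small.
    Returns a pseudo ground-truth mask matching the reported count and sizes."""
    prob = (tumor_prob * organ_mask).clone()          # restrict to the organ
    pseudo_mask = torch.zeros_like(prob)
    for d, n_vox in zip(diams_vox, voxels_per_tumor):
        d = d | 1                                     # force an odd kernel size
        k = ball_kernel(d)
        # Highest-probability ball: convolution sums probabilities inside the ball.
        score = F.conv3d(prob[None, None], k[None, None], padding=d // 2)[0, 0]
        c = torch.nonzero(score == score.max())[0]    # ball center with max sum
        zz, yy, xx = torch.meshgrid(*[torch.arange(s) for s in prob.shape],
                                    indexing="ij")
        ball = ((zz - c[0])**2 + (yy - c[1])**2 + (xx - c[2])**2) <= (d // 2) ** 2
        # Top-N voxels inside the ball become this tumor's pseudo mask.
        vals = torch.where(ball, prob, torch.tensor(-1.0))
        top = torch.topk(vals.flatten(), n_vox).indices
        pseudo_mask.view(-1)[top] = 1.0
        prob.view(-1)[top] = 0.0      # remove located tumor before the next one
    return pseudo_mask
```

In the paper's procedure, the returned pseudo mask is then used as the target of a Dice loss and a center-weighted cross-entropy loss.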
The final segmentation mask matches the report in tumor count, locations (organ), volumes, and diameters. We use it as ground truth for a Dice loss and a custom weighted cross-entropy loss, which optimize the AI tumor segmentation output to match the mask. Our cross-entropy gives higher weights to voxels where the AI has higher tumor confidence, and it does not penalize a margin around the tumor borders, compensating for errors in the diameters in the report.

In case the report mentions a tumor but does not provide its size, we use a relaxed version of the Ball Loss. We assume that the tumor is at least 5 mm in diameter, because fewer than 0.1% of tumors in our reports are smaller than 5 mm. Then, we apply the ball convolutions as before, considering a 5 mm diameter. This will generate a small tumor inside the segmentation mask, possibly near the center of a real, larger tumor (usually centers have higher tumor probability). We use this segmentation mask to train the segmentation model with the cross-entropy and Dice loss. However, we do not apply the cross-entropy and Dice loss to the mask's zero voxels inside the organ with tumor. The reason is that we do not know the real tumor size, so it is not possible to estimate how many voxels are tumor voxels. We just enforce that, at least, a 5 mm tumor exists, but we do not penalize the segmentation model if it finds a larger tumor. In case we do not know how many tumors an organ has, we use the Ball Loss to localize and create a mask for each tumor described in the report, but we again do not penalize the mask's zero voxels inside the organ with tumor.

Not all reports are equal, but the Ball Loss (like the Volume Loss) adapts to different types of reports: when reports are more precise, the Ball Loss is more precise, leveraging all available information. The most precise reports include size and slice for all tumors (about 15% of our reports). This limits the ball convolution to search for the tumor in a very small region, a few slices inside one organ, making it very precise.

Footnote 14: Ball kernels are 1 at the kernel center, and decay towards the ball border, following a 3D Gaussian with standard deviation of 0.75x the ball diameter, d_i. Outside of the ball, the kernel is zero. Ball convolutions use stride 1, zero padding and odd kernel sizes to ensure input-output alignment.

Footnote 15: It is important to iterate from the largest to the smallest tumor. Consider a segmentation output with a big tumor and a small tumor. A ball convolution with a small diameter can have high outputs over the small tumor or anywhere inside the large tumor. Thus, before localizing the small tumor, we first localize the large tumor and remove it from the segmentation output.

Despite its name, the Ball Loss does not assume that tumors are spherical, nor does it teach the segmentation model to segment spherical tumors only. Instead, it assumes that tumors fit inside a ball whose diameter matches the largest tumor diameter in the report. Tumors can assume any shape that fits inside this ball. This assumption can be relaxed by increasing the diameter used in the Ball Loss (e.g., we increase the diameters in the reports by 30% when applying the Ball Loss).

One may wonder whether the Ball Loss can guide the segmentation model to segment the wrong thing. Indeed, in a single image it can: if the segmentation model segments a wrong tumor (false positive) inside the right organ, the Ball Loss may enforce this wrong segmentation.
However, when a similar false positive appears in a healthy patient, the loss penalizes it. Thus, while some wrong tumor segmentations may be reinforced in individual cases, they are canceled out across patients. To consistently minimize the Ball Loss across the whole training dataset, the segmentation model must learn to find the correct tumor organs, tumor counts and tumor sizes. In other words, the only way to minimize the Ball Loss is to truly segment the tumors. Our experiments show that, by minimizing the Ball Loss and Volume Loss, R-Super becomes substantially better at segmenting tumors, proving that the net effect of our losses is reinforcing correct segmentations, not false positives.

4.3.4 Attenuation Loss

Reports commonly inform tumor attenuation. A hypoattenuating tumor is darker than the surrounding organ, a hyperattenuating tumor is brighter, and an isoattenuating tumor has a brightness similar to the organ. The Attenuation Loss leverages this information to improve tumor segmentation. Brightness in CT scans is expressed as HU values, the values of the voxels in the CT. Even after we normalize the CT, relative attenuation remains: if a tumor is hypoattenuating, its average voxel value will be lower than the average voxel value of the organ. We apply the Attenuation Loss both as deep supervision and at the segmentation model's final output.

To calculate the Attenuation Loss, we define the tumor voxels as the voxels where the segmentation model predicts more than 50% tumor probability. We define the organ voxels using the pre-saved, AI-made organ mask, but we do not consider tumor voxels as organ voxels. We calculate the mean and standard deviation of the tumor voxels and of the organ voxels. These means and standard deviations are sent to an MLP (footnote 16), which is an attenuation classifier. It classifies whether tumors are all hyperattenuating, all hypoattenuating, or of mixed attenuation/isoattenuation. The label for the attenuation classifier is extracted from the report by the LLM. We train the attenuation classifier with a standard cross-entropy classification loss. The gradient of this loss is used to train the attenuation classifier, but it also back-propagates to the segmentation model and improves it: the segmentation model should delineate tumors better, to allow the attenuation classifier to predict the tumor attenuation better.

Footnote 16: 128 neurons in the hidden layer.
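A minimal sketch of the Attenuation Loss above follows, with one loudly flagged assumption: to let gradients reach the segmentation model, it uses probability-weighted intensity statistics instead of the hard 50% threshold described in the text. Everything else follows the description (four summary statistics, an MLP with 128 hidden neurons, cross-entropy against the report label).

```python
# Minimal sketch (assumptions noted, not the authors' code) of the Attenuation
# Loss: summary statistics of predicted tumor vs. organ intensities feed a small
# MLP that must recover the attenuation class stated in the report.
import torch
import torch.nn as nn

attn_classifier = nn.Sequential(          # 3 classes: hyper / hypo / mixed-iso
    nn.Linear(4, 128), nn.ReLU(), nn.Linear(128, 3))   # 128 hidden units

def weighted_stats(ct: torch.Tensor, w: torch.Tensor, eps: float = 1e-6):
    """Mean and std of CT intensities under soft voxel weights w (assumption:
    soft weighting replaces the paper's hard 50% threshold for differentiability)."""
    w = w / (w.sum() + eps)
    mean = (w * ct).sum()
    var = (w * (ct - mean) ** 2).sum()
    return mean, torch.sqrt(var + eps)

def attenuation_loss(ct, tumor_prob, organ_mask, report_class: int):
    organ_w = organ_mask * (1.0 - tumor_prob)    # organ voxels, excluding tumor
    t_mean, t_std = weighted_stats(ct, tumor_prob * organ_mask)
    o_mean, o_std = weighted_stats(ct, organ_w)
    feats = torch.stack([t_mean, t_std, o_mean, o_std])
    logits = attn_classifier(feats)[None]        # (1, 3)
    target = torch.tensor([report_class])        # label extracted from the report
    return nn.functional.cross_entropy(logits, target)
```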
Acknowledgments. This work was supported by the National Institutes of Health (NIH) under Award Number R01EB037669, the Lustgarten Foundation for Pancreatic Cancer Research, and the Center for Biomolecular Nanotechnologies, Istituto Italiano di Tecnologia (73010, Arnesano, LE, Italy). We would like to thank the Johns Hopkins Research IT team in IT@JH for their support and infrastructure resources where some of these analyses were conducted, especially DISCOVERY HPC; we also thank the HPC infrastructure and the Support Team at Fondazione Istituto Italiano di Tecnologia. We thank Jaimie Patterson for writing a news article about this project.

Appendix A Training Details

We train using CT patches. For CT-Report pairs, each training patch is designed to fully cover one target organ. This target organ is randomly chosen, with a high probability of choosing organs with tumors in the report (e.g., 90%). The training patch must fully cover the target organ. Otherwise, a tumor mentioned in the report could fall outside the patch, and the report-based losses would wrongly push the AI to find a tumor that is not visible to the AI.

Training parameters followed the defaults set by MedFormer [28], the segmentation architecture we used inside R-Super. The only new parameters we introduce are loss weights. We set a loss weight of 1 for the segmentation losses (cross-entropy and Dice, used for CT-Mask pairs), 0.1 for the Volume Loss and Ball Loss, and 0.01 for the Attenuation Loss. There is no need to carefully tune these weights; we used the same weights in all our experiments. We train with AdamW, gradient norm clipping (1), 50 epochs of 1000 batches each, batch size of 4, patch size of 128 x 128 x 128, isotropic voxel spacing of 1 mm, weight decay of 5.00E-2, and a learning rate of 1.00E-4 (5 epochs of warmup, followed by polynomial decay). CT intensity was clipped between -991 and 500 HU, then normalized. Data augmentation includes rotation, brightness, gamma, contrast, Gaussian blur and Gaussian noise [28]. We super-sampled the CT-Mask pairs, making them 50% of the samples that the segmentation model saw in each epoch. Segmentation models were initialized pre-trained for organ segmentation on AbdomenAtlas 2.0 [10, 54]. Pre-training followed the same configuration as training (described above), but without using the R-Super losses or reports.
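For convenience, the hyperparameters of Appendix A can be summarized in a single configuration object. This is a restatement sketch, not the authors' configuration file, and the key names are hypothetical.

```python
# Compact restatement (sketch, not the authors' config file) of the training
# hyperparameters listed in Appendix A. Key names are hypothetical.
TRAIN_CONFIG = {
    "architecture": "MedFormer",           # [28]
    "optimizer": "AdamW",
    "learning_rate": 1e-4,                 # 5 warmup epochs, then polynomial decay
    "weight_decay": 5e-2,
    "grad_norm_clip": 1.0,
    "epochs": 50, "batches_per_epoch": 1000, "batch_size": 4,
    "patch_size": (128, 128, 128), "voxel_spacing_mm": (1, 1, 1),
    "hu_clip": (-991, 500),                # then intensity normalization
    "loss_weights": {"cross_entropy_dice": 1.0,   # CT-Mask pairs
                     "volume_loss": 0.1, "ball_loss": 0.1,
                     "attenuation_loss": 0.01},
    "ct_mask_supersampling": 0.5,          # CT-Mask pairs = 50% of samples/epoch
    "init": "organ-segmentation pretraining on AbdomenAtlas 2.0",  # [10, 54]
}
```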
The study lacked normal patients, so specificity was not estimable. Since these CT scans are pre-diagnostic, some may have a very small tumor, or truly no tumor, reducing the reported radiologist sensitivity. • Esophagus tumors [37]: Non-contrast CT scans come form 52 esophagus cancer patients and 48 normal patients. Radiologists achieved 25-31% sensitivity at 74-78% specificity. Unlike this study, our test dataset used contrast-enhanced CT (easier). • Gallbladder tumors [36]: This study, published in 1997, is a retrospective analysis. It covers gallbladder carcinoma patients at the Howard University, for the previous 28 years. Radiologist performance for gallbladder tumor detection in CT was reported as 40% sensitivity. No normal patient was included, and specificity is not reported. • Prostate tumors [55]: The study included 139 clinically significant prostate cancer CTs and 432 healthy CTs. Radiologists achieved 44% sensitivity and 74% specificity in detecting these tumors. • Spleen tumors [56]: A 2024 meta-analysis synthesized spleen tumor detection performance across different imaging modalities. On CT, radiologists achieved 77% sensitivity and 91% specificity in detecting spleen tumors. • Uterus tumors [57]: In asymptomatic postmenopausal women on CT (22 cancers; 22 controls), the endometrium thickness was measured by radiologists to detect endometrial cancer. An 8 mm thickness threshold yielded 86% sensitivity and 91% specificity for detecting the tumors. This study considers only endometrial cancers, but our test dataset may include other types of uterus malignant tumors. • Adrenal tumors [38]: The study considered 91 lung cancer patients. Of them, 53 had adrenal metastases (autopsy-validated). Radiologists could detect these metastasis on CT with sensitivity of 20 to 41%, but high specificity (84 to 99%). This study considers only metastasis, but our test dataset includes both metastasis and primary adrenal malignant tumors. 27 References [1] Mathers, C.D., Boerma, T., Ma Fat, D.: Global and regional causes of death. British medical bulletin 92(1), 7-32 (2009) [2] Sung, H., Ferlay, J., Siegel, R.L., Laversanne, M., Soerjomataram, I., Jemal, A., Bray, F.: Global cancer statistics 2020: Globocan estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA: a cancer journal for clinicians 71(3), 209-249 (2021) [3] Crosby, D., Bhatia, S., Brindle, K.M., Coussens, L.M., Dive, C., Emberton, M., Esener, S., Fitzgerald, R.C., Gambhir, S.S., Kuhn, P., et al.: Early detection of cancer. Science 375(6586), 9040 (2022) [4] McCollough, C.H., Bushberg, J.T., Fletcher, J.G., Eckel, L.J.: Answers to common questions about the use and safety of ct scans. In: Mayo Clinic Proceedings, vol. 90, pp. 1380-1392 (2015). Elsevier [5] Hoogenboom, S.A., Engels, M.M., Chuprin, A.V., Hooft, J.E., LeGout, J.D., Wallace, M.B., Bolan, C.W.: Prevalence, features, and explanations of missed and misinterpreted pancreatic cancer on imaging: a matched case-control study. Abdominal Radiology 47(12), 4160-4172 (2022) [6] Xia, Y., Yu, Q., Chu, L., Kawamoto, S., Park, S., Liu, F., Chen, J., Zhu, Z., Li, B., Zhou, Z., et al.: The felix project: Deep networks to detect pancreatic neoplasms. medRxiv (2022) [7] Cao, K., Xia, Y., Yao, J., Han, X., Lambert, L., Zhang, T., Tang, W., Jin, G., Jiang, H., Fang, X., et al.: Large-scale pancreatic cancer detection via non-contrast ct and deep learning. 
[8] Hu, C., Xia, Y., Zheng, Z., Cao, M., Zheng, G., Chen, S., Sun, J., Chen, W., Zheng, Q., Pan, S., et al.: AI-based large-scale screening of gastric cancer from noncontrast CT imaging. Nature Medicine, 1-9 (2025)
[9] Markotić, V., Pojužina, T., Radančević, D., Miljko, M., Pokrajčić, V.: The radiologist workload increase; where is the limit?: mini review and case study. Psychiatria Danubina 33(suppl 4), 768-770 (2021)
[10] Bassi, P.R., Yavuz, M.C., Wang, K., Chen, X., Li, W., Decherchi, S., Cavalli, A., Yang, Y., Yuille, A., Zhou, Z.: RadGPT: Constructing 3D image-text tumor datasets. arXiv preprint arXiv:2501.04678 (2025)
[11] Liu, J., Zhang, Y., Wang, K., Yavuz, M.C., Chen, X., Yuan, Y., Li, H., Yang, Y., Yuille, A., Tang, Y., et al.: Universal and extensible language-vision models for organ segmentation and tumor detection from abdominal computed tomography. Medical Image Analysis, 103226 (2024)
[12] Chen, J., Xia, Y., Yao, J., Yan, K., Zhang, J., Lu, L., Wang, F., Zhou, B., Qiu, M., Yu, Q., et al.: CancerUniT: Towards a single unified model for effective detection, segmentation, and diagnosis of eight major cancers using a large collection of CT scans. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 21327-21338 (2023)
[13] Antonelli, M., Reinke, A., Bakas, S., Farahani, K., Kopp-Schneider, A., Landman, B.A., Litjens, G., Menze, B., Ronneberger, O., Summers, R.M., et al.: The medical segmentation decathlon. Nature Communications 13(1), 1-13 (2022)
[14] Bilic, P., Christ, P., Li, H.B., Vorontsov, E., Ben-Cohen, A., Kaissis, G., Szeskin, A., Jacobs, C., Mamani, G.E.H., Chartrand, G., et al.: The liver tumor segmentation benchmark (LiTS). Medical Image Analysis 84, 102680 (2023)
[15] Heller, N., Isensee, F., Maier-Hein, K.H., Hou, X., Xie, C., Li, F., Nan, Y., Mu, G., Lin, Z., Han, M., et al.: The state of the art in kidney and kidney tumor segmentation in contrast-enhanced CT imaging: Results of the KiTS19 challenge. Medical Image Analysis, 101821 (2020)
[16] Li, W., Zhou, X., Chen, Q., Lin, T., Bassi, P.R., Plotka, S., Cwikla, J.B., Chen, X., Ye, C., Zhu, Z., et al.: PanTS: The pancreatic tumor segmentation dataset. arXiv preprint arXiv:2507.01291 (2025)
[17] Hamamci, I.E., Er, S., Menze, B.: CT2Rep: Automated radiology report generation for 3D medical imaging. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 476-486 (2024). Springer
[18] Blankemeier, L., Cohen, J.P., Kumar, A., Van Veen, D., Gardezi, S.J.S., Paschali, M., Chen, Z., Delbrouck, J.-B., Reis, E., Truyts, C., et al.: Merlin: A vision language foundation model for 3D computed tomography. Research Square, 3 (2024)
[19] Sellergren, A., Kazemzadeh, S., Jaroensri, T., Kiraly, A., Traverse, M., Kohlberger, T., Xu, S., Jamil, F., Hughes, C., Lau, C., et al.: MedGemma technical report. arXiv preprint arXiv:2507.05201 (2025)
[20] Hamamci, I.E., Er, S., Almas, F., Simsek, A.G., Esirgun, S.N., Dogan, I., Dasdelen, M.F., Wittmann, B., Simsar, E., Simsar, M., et al.: A foundation model utilizing chest CT volumes and radiology reports for supervised-level zero-shot detection of abnormalities. CoRR (2024)
[21] Achiam, J., Adler, S., Agarwal, S., Ahmad, L., Akkaya, I., Aleman, F.L., Almeida, D., Altenschmidt, J., Altman, S., Anadkat, S., et al.: GPT-4 technical report. arXiv preprint arXiv:2303.08774 (2023)
[22] Chen, Y., Xiao, W., Bassi, P.R., Zhou, X., Er, S., Hamamci, I.E., Zhou, Z., Yuille, A.: Are vision language models ready for clinical diagnosis? A 3D medical benchmark for tumor-centric visual question answering. arXiv preprint arXiv:2505.18915 (2025)
[23] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748-8763 (2021). PMLR
[24] Moon, T.K.: The expectation-maximization algorithm. IEEE Signal Processing Magazine 13(6), 47-60 (1996)
[25] Moawad, A.W., Ahmed, A.A., ElMohr, M., Eltaher, M., Habra, M.A., Fisher, S., Perrier, N., Zhang, M., Fuentes, D., Elsayes, K.: Voxel-level segmentation of pathologically-proven adrenocortical carcinoma with Ki-67 expression (Adrenal-ACC-Ki67-Seg) [data set]. The Cancer Imaging Archive 8 (2023)
[26] Radford, A., Metz, L., Chintala, S.: Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434 (2015)
[27] Bassi, P.R., Li, W., Chen, J., Zhu, Z., Lin, T., Decherchi, S., Cavalli, A., Wang, K., Yang, Y., Yuille, A.L., et al.: Learning segmentation from radiology reports. arXiv preprint arXiv:2507.05582 (2025)
[28] Gao, Y., Zhou, M., Liu, D., Yan, Z., Zhang, S., Metaxas, D.N.: A data-scalable transformer for medical image segmentation: architecture, model efficiency, and benchmark. arXiv preprint arXiv:2203.00131 (2022)
[29] Bassi, P.R., Li, W., Tang, Y., Isensee, F., Wang, Z., Chen, J., Chou, Y.-C., Kirchhoff, Y., Rokuss, M., Huang, Z., Ye, J., He, J., Wald, T., Ulrich, C., Baumgartner, M., Roy, S., Maier-Hein, K.H., Jaeger, P., Ye, Y., Xie, Y., Zhang, J., Chen, Z., Xia, Y., Xing, Z., Zhu, L., Sadegheih, Y., Bozorgpour, A., Kumari, P., Azad, R., Merhof, D., Shi, P., Ma, T., Du, Y., Bai, F., Huang, T., Zhao, B., Wang, H., Li, X., Gu, H., Dong, H., Yang, J., Mazurowski, M.A., Gupta, S., Wu, L., Zhuang, J., Chen, H., Roth, H., Xu, D., Blaschko, M.B., Decherchi, S., Cavalli, A., Yuille, A.L., Zhou, Z.: Touchstone benchmark: Are we on the right way for evaluating AI algorithms for medical segmentation? Conference on Neural Information Processing Systems (2024)
[30] Isensee, F., Jaeger, P.F., Kohl, S.A., Petersen, J., Maier-Hein, K.H.: nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nature Methods 18(2), 203-211 (2021)
[31] Grauw, M., Scholten, E.T., Smit, E.J., Rutten, M.J., Prokop, M., Ginneken, B., Hering, A.: The ULS23 challenge: A baseline model and benchmark dataset for 3D universal lesion segmentation in computed tomography. Medical Image Analysis 102, 103525 (2025)
[32] Zhuang, J., Wu, L., Wang, Q., Fei, P., Vardhanabhuti, V., Luo, L., Chen, H.: MiM: Mask in mask self-supervised pre-training for 3D medical image analysis. IEEE Transactions on Medical Imaging (2025)
[33] Zhuang, J., Luo, L., Wang, Q., Wu, M., Luo, L., Chen, H.: Advancing volumetric medical image segmentation via global-local masked autoencoders. IEEE Transactions on Medical Imaging (2025)
[34] Reiss, T., Cohen, N., Bergman, L., Hoshen, Y.: PANDA: Adapting pretrained features for anomaly detection and segmentation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2806-2814 (2021)
[35] Malik, R.F., Berry, R., Lau, B.D., Busireddy, K.R., Patel, P., Patel, S.H., Fishman, E.K., Bivalacqua, T.J., Johnson, P.T., Sedaghat, F.: Systematic evaluation of imaging features of early bladder cancer using computed tomography performed before pathologic diagnosis. Tomography 9(5), 1734-1744 (2023)
[36] Frezza, E., Mezghebe, H.: Gallbladder carcinoma: a 28 year experience. International Surgery 82(3), 295-300 (1997)
[37] Sui, H., Ma, R., Liu, L., Gao, Y., Zhang, W., Mo, Z.: Detection of incidental esophageal cancers on chest CT by deep learning. Frontiers in Oncology 11, 700210 (2021)
[38] Allard, P., Yankaskas, B.C., Fletcher, R.H., Alden Parkery, L., Halvorsen Jr, R.A.: Sensitivity and specificity of computed tomography for the detection of adrenal metastatic lesions among 91 autopsied lung cancer patients. Cancer 66(3), 457-462 (1990)
[39] Ardila, D., Kiraly, A.P., Bharadwaj, S., Choi, B., Reicher, J.J., Peng, L., Tse, D., Etemadi, M., Ye, W., Corrado, G., et al.: End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography. Nature Medicine 25(6), 954-961 (2019)
[40] Mikhael, P.G., Wohlwend, J., Yala, A., Karstens, L., Xiang, J., Takigami, A.K., Bourgouin, P.P., Chan, P., Mrah, S., Amayri, W., et al.: Sybil: a validated deep learning model to predict future lung cancer risk from a single low-dose chest computed tomography. Journal of Clinical Oncology 41(12), 2191-2200 (2023)
[41] Cao, W., Zhang, J., Shui, Z., Wang, S., Chen, Z., Li, X., Lu, L., Ye, X., Liang, T., Zhang, Q., et al.: Boosting vision semantic density with anatomy normality modeling for medical vision-language pre-training. arXiv preprint arXiv:2508.03742 (2025)
[42] Bassi, P.R., Dertkigil, S.S., Cavalli, A.: Improving deep neural network generalization and robustness to background bias via layer-wise relevance propagation optimization. Nature Communications 15(1), 291 (2024)
[43] Geirhos, R., Jacobsen, J.-H., Michaelis, C., Zemel, R., Brendel, W., Bethge, M., Wichmann, F.A.: Shortcut learning in deep neural networks. Nature Machine Intelligence 2(11), 665-673 (2020)
[44] DeGrave, A.J., Janizek, J.D., Lee, S.-I.: AI for radiographic COVID-19 detection selects shortcuts over signal. Nature Machine Intelligence 3(7), 610-619 (2021)
[45] Irvin, J., Rajpurkar, P., Ko, M., Yu, Y., Ciurea-Ilcus, S., Chute, C., Marklund, H., Haghgoo, B., Ball, R., Shpanskaya, K., et al.: CheXpert: A large chest radiograph dataset with uncertainty labels and expert comparison. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 590-597 (2019)
[46] Wang, X., Peng, Y., Lu, L., Lu, Z., Bagheri, M., Summers, R.M.: ChestX-ray8: Hospital-scale chest X-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2097-2106 (2017)
[47] Qu, C., Zhang, T., Qiao, H., Liu, J., Tang, Y., Yuille, A., Zhou, Z.: AbdomenAtlas-8K: Annotating 8,000 abdominal CT volumes for multi-organ segmentation in three weeks. In: Conference on Neural Information Processing Systems, vol. 21 (2023). https://github.com/MrGiovanni/AbdomenAtlas
[48] Li, W., Qu, C., Chen, X., Bassi, P.R., Shi, Y., Lai, Y., Yu, Q., Xue, H., Chen, Y., Lin, X., et al.: AbdomenAtlas: A large-scale, detailed-annotated, & multi-center dataset for efficient transfer learning and open algorithmic benchmarking. Medical Image Analysis, 103285 (2024)
[49] Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al.: LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023)
[50] Eisenhauer, E.A., Therasse, P., Bogaerts, J., Schwartz, L.H., Sargent, D., Ford, R., Dancey, J., Arbuck, S., Gwyther, S., Mooney, M., et al.: New response evaluation criteria in solid tumours: revised RECIST guideline (version 1.1). European Journal of Cancer 45(2), 228-247 (2009)
[51] Miller, A.B., Hoogstraten, B., Staquet, M., Winkler, A.: Reporting results of cancer treatment. Cancer 47(1), 207-214 (1981)
[52] Płotka, S., Mert, G., Chrabaszcz, M., Szczurek, E., Sitek, A.: Mamba goes HoME: Hierarchical soft mixture-of-experts for 3D medical image segmentation. arXiv preprint arXiv:2507.06363 (2025)
[53] Wasserthal, J., Breit, H.-C., Meyer, M.T., Pradella, M., Hinck, D., Sauter, A.W., Heye, T., Boll, D.T., Cyriac, J., Yang, S., et al.: TotalSegmentator: robust segmentation of 104 anatomic structures in CT images. Radiology: Artificial Intelligence 5(5) (2023)
[54] Li, W., Yuille, A., Zhou, Z.: How well do supervised models transfer to 3D image segmentation? In: International Conference on Learning Representations (2024). https://github.com/MrGiovanni/SuPreM
[55] Korevaar, S., et al.: Incidental detection of prostate cancer with computed tomography scans. Scientific Reports 11(1), 7956 (2021)
[56] Valizadeh, P., Jannatdoust, P., Tahamtan, M., Ghorani, H., Dorcheh, S.S., Farnoud, K., Salahshour, F.: Diagnostic performance of different imaging modalities for splenic malignancies: A comparative meta-analysis. European Journal of Radiology Open 12, 100566 (2024)
[57] Franconeri, A., Fang, J., Brook, A., Brook, O.R.: Asymptomatic endometrial thickening of 8 mm or greater on postcontrast computed tomography in postmenopausal women is a predictor of endometrial cancer. Journal of Computer Assisted Tomography 43(1), 136-142 (2019)
2510.14799
Error analysis of Abate–Whitt methods for Inverse Laplace Transforms and a new algorithm for queuing theory applications

Nikita Deniskin^1* and Federico Poloni^2*

^1 Classe di Scienze, Scuola Normale Superiore, Pisa, Italy.
^2 Dipartimento di Informatica, Università di Pisa, Pisa, Italy.

*Corresponding author(s). E-mail(s): nikita.deniskin@sns.it; federico.poloni@unipi.it

Abstract

We study the accuracy of a class of methods to compute the Inverse Laplace Transform, the so-called Abate–Whitt methods [Abate, Whitt 2006], which are based on a linear combination of evaluations of $\hat{f}$ in a few points. We provide error bounds which relate the accuracy of a method to the rational approximation of the exponential function. We specialize our analysis to applications in queuing theory, a field in which Abate–Whitt methods are often used; in particular, we study phase-type distributions and Markov-modulated fluid models (or fluid queues). We use a recently developed algorithm for rational approximation, the AAA algorithm [Nakatsukasa, Sète, Trefethen 2018], to produce a new family of methods, which we call TAME. The parameters of these methods are constructed depending on a function-specific domain $\Omega$; we provide a quasi-optimal choice for certain families of functions. We discuss numerical issues related to floating-point computation, and we validate our results through numerical experiments, which show that the new methods require significantly fewer function evaluations to achieve an accuracy comparable to (or better than) that of the classical methods.

Keywords: Inverse Laplace Transform, Abate–Whitt framework, Rational Approximation, Markov-modulated fluid models

MSC Classification: 65R10, 41A20, 65C40

1 Introduction

The Laplace Transform of a function $f : (0, \infty) \to \mathbb{C}$ is defined as
\[
\hat{f}(s) = \int_0^\infty e^{-st} f(t)\, dt. \tag{1}
\]
We study methods to compute its inverse, i.e., to reconstruct the value $f(t)$ at a given point $t$ from evaluations of $\hat{f}$ at several points $\beta_1, \dots, \beta_N \in \mathbb{C}$.

Unlike the better-behaved Fourier transform, inverting the Laplace transform is an ill-posed problem: for instance, there are pairs of functions such that $|f(1) - g(1)| = 1$ but $\|\hat{f} - \hat{g}\|_\infty \le \varepsilon$ for arbitrarily small $\varepsilon$. This property makes numerical computation challenging, since small errors in the computation of $\hat{f}$ can turn into large errors on $f(t)$.

Because of its ill-posedness, the Inverse Laplace Transform (ILT) problem has attracted ample attention in the mathematical and numerical community. It would be impossible to cite all relevant works, but we refer the reader to the book [37] for theoretical properties and to the book [15] for numerical methods. In this work, we make two specific assumptions to restrict our interest to special cases:

• We are interested in algorithms that compute $f(t)$ in one given point $t$, based on evaluating $\hat{f}$ in a small number of points that we are allowed to choose; these algorithms usually take the form
\[
f(t) \approx \sum_{n=1}^{N} w_n \hat{f}(\beta_n),
\]
which resembles the general form of a quadrature formula. This setup is known as the Abate–Whitt framework in certain fields, after [3]. It excludes several algorithms, such as those based on approximating $f$ with a truncated series of functions [15, Chapter 3] or with a rational function with known coefficients [15, Chapter 5], those based on the derivatives of $f$ (e.g., [18] or the Post–Widder formula [15, Chapter 7]), and methods where $f(t)$ is evaluated in multiple points simultaneously (e.g., [17, 27]).
We describe this framework and some of the classical methods in Section 2.

• We are interested only in the case in which $f$ has a very special (“tame”) form: it belongs to one of certain classes of functions that are constructed from weighted sums of exponentials. Namely:

SE class (sum of exponentials): functions of the form
\[
f(t) = \sum_{m=1}^{M} c_m e^{\alpha_m t},
\]
where $c_1, \dots, c_M, \alpha_1, \dots, \alpha_M \in \mathbb{C}$ are given constants. (A toy SE function and its transform are sketched in code at the end of this introduction.)

ME class (matrix exponential): functions of the form
\[
f(t) = v^* \exp(tQ)\, u, \qquad Q \in \mathbb{C}^{d \times d},\ u, v \in \mathbb{C}^{d}, \tag{2}
\]
where $\exp(A) = I + A + \frac{1}{2!}A^2 + \frac{1}{3!}A^3 + \cdots$ is the matrix exponential. Using the Jordan form, one sees that these functions can be equivalently written as
\[
f(t) = \sum_{m=1}^{M} c_m e^{\alpha_m t} t^{b_m},
\]
with $b_1, \dots, b_M \in \mathbb{Z}_+$. It is easy to see that this class is exactly the class of all functions $f$ whose transform $\hat{f}$ is a rational function. We chose the acronym ME after the one used for matrix exponential distributions [30], but we note that some authors use the same acronym for other classes named mixtures of exponentials, e.g., [38, Definition 9.4]. The density of a matrix exponential distribution [30] has the form (2), but with additional conditions: $f(t)$ has to be non-negative with total mass 1. In our definition of a ME function, we do not require such conditions.

LS class (Laplace–Stieltjes): functions of the form
\[
f(t) = \int e^{-xt}\, d\mu(x), \tag{3}
\]
where $\mu$ is a non-negative measure on $\mathbb{R}_+ = (0, \infty)$. In other words, $f$ in (3) is itself the Laplace transform of a measure $\mu$. A SE function with negative real exponents $\alpha_m$ is in this class: it corresponds to the discrete measure $\mu = \sum_{m=1}^{M} c_m \delta_{|\alpha_m|}$. The LS class is a generalization with an integral rather than a discrete sum. Bernstein's Theorem [38, Theorem 1.4] states that the class of Laplace–Stieltjes functions is equivalent to the class of completely monotone functions, i.e., functions $f \in C^\infty((0,\infty))$ such that $(-1)^n f^{(n)}(t) \ge 0$ for all $n \ge 0$ and all $t > 0$.

These three classes are very favorable cases: in most of the literature on the inverse Laplace transform, the main focus is instead on functions with jump discontinuities (e.g., the Heaviside step function, the square wave) or jumps in the derivative (e.g., formulas involving the absolute-value function $|\cdot|$).

Under these two assumptions, we relate the accuracy of an Abate–Whitt method to a rational approximation problem, and prove several quantitative bounds on its error, revealing the role of quantities such as the evaluation point $t$, the range of the $\alpha_m$, and the field of values of the matrix $Q$. This is done in Section 3. Another approach is based on the moments of a function related to the approximation problem; we present some bounds and an estimate in Section 4.

Despite the strong assumptions that we make, this restricted setup is useful in practice in several applications in queuing theory, which we describe in more detail in Section 5. In particular, we deal with distributions whose density function belongs to a subclass of the ME class, the so-called phase-type distributions. They are obtained when $Q$ in (2) is the subgenerator of a continuous-time Markov chain and $u = -Q\mathbf{1}$. Phase-type distributions predate ME distributions, as they arise naturally in queuing theory as the probability distribution of the hitting time of an absorbing Markov chain. See [13] for an exposition of the theory of ME and phase-type distributions.
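As a minimal illustration of the SE and ME definitions above (a toy example of ours, not taken from the paper), the following Python sketch defines a SE function together with its rational transform:

```python
import numpy as np

# Toy SE function f(t) = sum_m c_m exp(alpha_m t); its transform
# fhat(s) = sum_m c_m/(s - alpha_m) is a rational function, as stated above.
c = np.array([1.0, 0.5])
alpha = np.array([-1.0, -3.0])

def f(t):
    return np.sum(c * np.exp(alpha * t))

def fhat(s):
    return np.sum(c / (s - alpha))
```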
Another application in queuing theory is the computation of the so-called time-dependent first-return matrix of Markov-modulated fluid models, or fluid queues, a class of stochastic processes used to model continuous-time queues and buffers [4, 28, 33, 44]. Indeed, there are several algorithms to compute directly the Laplace transform of this matrix [9], while the direct computation of the time-domain function is more involved.

After reducing the problem to rational approximation, we describe the AAA algorithm [32], a recently developed algorithm that produces high-quality rational approximants. With a few modifications, we can use this algorithm to produce the weights and poles $(w_n, \beta_n)_{n=1}^N$ of a family of Abate–Whitt methods, which we dub the TAME method (Triple-A for the Matrix Exponential). We describe the AAA algorithm and its modifications in Section 6.

From the computational point of view, one needs to ensure not only that the parameters $(w_n, \beta_n)$ of an Abate–Whitt method produce an accurate rational approximation, but also that the magnitude of the weights $w_n$ is moderate: weights of large magnitude may lead to precision loss in floating-point arithmetic. We discuss these issues, as well as other numerical aspects of our method, in Section 7, and we conclude our paper with numerical experiments that prove the effectiveness of our method, in Section 8. The experiments show that the TAME method achieves accuracy comparable to the best Abate–Whitt methods, requires significantly fewer evaluations of the function $\hat{f}$, and has better stability properties.

1.1 Notation

In the following, $\|A\|_2$ and $\|A\|_\infty$ denote the Euclidean and $\infty$ operator norms of a matrix $A$. The spectrum of the matrix $A$ is $\Lambda(A)$. For a complex-valued function $f$, we define $\|f\|_{\infty,\Omega} = \max\{|f(z)| : z \in \Omega\}$. We denote by $B(c, R) = \{z \in \mathbb{C} : |z - c| \le R\}$ the ball with center $c$ and radius $R$ in the complex plane.

2 Inverse Laplace Transforms and the Abate–Whitt framework

We assume that the function $f$ is defined (at least) on $\mathbb{R}_+ = (0, \infty)$ and that it has real values. Most of our results are also applicable to functions $f : \mathbb{R}_+ \to \mathbb{C}$, but we shall see that in the real case we can use conjugate poles to speed up the computation of the Inverse Laplace Transform.

For Equation (1) to be well-defined, the integral must converge. For the function $f(t) = e^{\alpha t}$, this means that $\hat{f}(s)$ is defined for all $s \in \mathbb{C}$ with $\mathrm{Re}(s - \alpha) > 0$. However, $\hat{f}(s) = \frac{1}{s - \alpha}$ is defined algebraically for all $s \ne \alpha$, irrespective of the convergence of the integral (1). This is a form of analytic continuation. When we deal with linear combinations and integral averages of exponential functions, we can use this extended definition of $\hat{f}$: we only need to ensure that $\hat{f}$ (which in most of our paper is a rational function) can be computed at the required points $\frac{\beta_n}{t}$, even if those points do not lie in the domain of convergence of (1). In particular, we note that one of the classical Abate–Whitt methods, namely the Talbot method, also has poles $\beta_n$ in the left half-plane (see e.g. Figure 4); so we deal with points where (1) does not converge even when using the Talbot method for functions of the form $f(t) = e^{\alpha t}$ with $\alpha < 0$.

A widely used formula for the Inverse Laplace Transform is the Bromwich Integral, which is a contour integral on the vertical line in the complex plane $\gamma(u) = b + iu$, with $u$ ranging from $-\infty$ to $\infty$, namely
\[
f(t) = \frac{1}{2\pi i} \int_\gamma e^{st} \hat{f}(s)\, ds. \tag{4}
\]
As discussed in the Introduction, the Inverse Laplace Transform is an ill-posed problem.
We can gain insight on this phenomenon from the above formula (4): the multiplying weight $e^{st}$ has constant modulus on $\gamma$, so we have to integrate, on an unbounded domain, a function that may not vanish as $\mathrm{Im}\, s \to \infty$. This difficulty does not appear in the direct Laplace Transform, where the multiplying weight is decreasing, nor in the Fourier Transform, where the integral is on a finite domain.

We focus on a class of methods that have been put into a common structure by Abate and Whitt in a series of works [1, 2, 3]. This class includes some well-known classical methods, as well as some more recent ones.

Definition 2.1. An Abate–Whitt method with $N$ weights $(w_n)_{n=1}^N$ and $N$ distinct nodes $(\beta_n)_{n=1}^N$ is the formula
\[
f_N(t) = \sum_{n=1}^{N} \frac{w_n}{t}\, \hat{f}\!\left(\frac{\beta_n}{t}\right). \tag{5}
\]
This formula allows one to approximate $f(t) \approx f_N(t)$ with $N$ evaluations of the function $\hat{f}(s)$. In particular, for $t = 1$ this formula becomes
\[
f_N(1) = \sum_{n=1}^{N} w_n \hat{f}(\beta_n). \tag{6}
\]
Abate–Whitt methods are useful when $\hat{f}$ can be computed (by a black-box function) on a desired set of points $\{\beta_1, \dots, \beta_N\}$ and we are interested in recovering $f(t)$ for one value of $t$. One typically is interested in the case when the computation of $\hat{f}$ is expensive (as in the fluid queue case, Section 5), hence we want to keep $N$ small. Indeed, we will see that in many common cases the TAME method works by evaluating $\hat{f}$ in just 4 points.

Remark 2.2. Suppose that $f$ satisfies the property that $\hat{f}(\bar{s}) = \overline{\hat{f}(s)}$, where $\bar{s}$ is the complex conjugate of $s$: this form of symmetry with respect to the real axis is common for functions that map $\mathbb{R}_+$ to $\mathbb{R}$. If there is a pair of conjugate nodes and weights, i.e., $(w_n, \beta_n)$ and $(w_k, \beta_k)$ with $w_k = \overline{w_n}$, $\beta_k = \overline{\beta_n}$, then
\[
w_n \hat{f}(\beta_n) + w_k \hat{f}(\beta_k) = w_n \hat{f}(\beta_n) + \overline{w_n \hat{f}(\beta_n)} = 2\, \mathrm{Re}\bigl( w_n \hat{f}(\beta_n) \bigr).
\]
Therefore, we can compute this sum with just one evaluation of $\hat{f}$ instead of two. More generally, for a real-valued function $f$, an Abate–Whitt method can be computed in the reduced form
\[
f_N(t) = \sum_{n=1}^{N'} \mathrm{Re}\!\left( \frac{w'_n}{t}\, \hat{f}\!\left(\frac{\beta_n}{t}\right) \right), \tag{7}
\]
where we have reordered the indices such that each weight and node with index in $\{N'+1, \dots, N\}$ is the conjugate of one with index in $\{1, \dots, N'\}$. To keep the sum unchanged, we set $w'_n = 2w_n$ if $n$ is part of a conjugate pair, and $w'_n = w_n$ otherwise.

The Abate–Whitt methods that we present below use this speed-up trick and are already given in reduced form. All their weights and nodes come in conjugate pairs, plus possibly an unpaired real one when $N$ is odd, so $N'$ is either $N/2$ or $(N+1)/2$. Choosing conjugate pairs speeds up the computation by a factor 2, and ensures that the computed $f_N(t)$ is real.

There are many ways to construct an Abate–Whitt method. For example, a quadrature of the Bromwich Integral (4) is a weighted sum of evaluations of $\hat{f}$, so it has the same form as Equation (6). The Euler method [1, 3] uses equispaced nodes $\beta_n$ on the vertical line $\gamma$. Abate and Whitt [3, Section 5] propose the following choice of parameters, with odd $N'$ and for $1 \le n \le N'$:
\[
\beta_n = \frac{\ln(10)}{6}(N'-1) + i\pi(n-1), \qquad w'_n = 10^{\frac{N'-1}{6}} (-1)^n \xi_n, \tag{8}
\]
where
\[
\xi_1 = \tfrac{1}{2}, \qquad \xi_n = 1 \ \ \text{for } 2 \le n \le \tfrac{N'+1}{2}, \qquad
\xi_{\frac{N'+3}{2}+j} = 1 - 2^{-\frac{N'-1}{2}} \sum_{k=0}^{j} \binom{\frac{N'-1}{2}}{k} \ \ \text{for } 0 \le j \le \tfrac{N'-3}{2}.
\]

Talbot [40] proposed to deform the contour, to avoid the oscillation of $e^{st}$ due to its imaginary part. The new contour is the curve $\gamma(\theta) = r\theta(\cot(\theta) + ai)$, with parameters $r > 0$, $a > 0$ and $-\pi < \theta < \pi$, whose imaginary part is bounded in the interval $[-\pi r a, \pi r a]$. Abate and Valkó [2] describe a way to fit the Talbot method into the Abate–Whitt framework using this contour.
The parameters are [3, Section 6]
\[
\beta_1 = \frac{2N'}{5}, \qquad \beta_n = \frac{2(n-1)\pi}{5}\bigl(\cot(\theta_n) + i\bigr) \ \ \text{for } 2 \le n \le N',
\]
\[
w'_1 = \tfrac{1}{5} e^{\beta_1}, \qquad w'_n = \tfrac{2}{5}\Bigl[ 1 + i\theta_n\bigl(1 + \cot(\theta_n)^2\bigr) - i\cot(\theta_n) \Bigr] e^{\beta_n} \ \ \text{for } 2 \le n \le N',
\]
with $\theta_n = (n-1)\pi/N'$. Further optimization of the contour was also investigated in [46].

The Gaver–Stehfest method [21, 39] is based instead on the Post–Widder formula [15, Theorem 2.4]. Its parameters are ([3, Section 4], [24])
\[
\beta_n = n \ln(2), \qquad
w'_n = (-1)^{N'/2+n} \ln(2) \sum_{j=\lfloor (n+1)/2 \rfloor}^{\min(n, N'/2)} \frac{j^{N'/2+1}}{(N'/2)!} \binom{N'/2}{j} \binom{2j}{j} \binom{j}{n-j}, \qquad 1 \le n \le N'.
\]
In this method all poles and weights are real, but the drawback is that it is prone to numerical cancellation, since it implicitly computes higher-order derivatives through finite differences.

These three methods are described extensively by Abate and Whitt in [3]. They also mention the work of Zakian [48, 49], noting that his method can be included in the framework, but they do not focus on it. The Concentrated Matrix Exponential (CME) method has been introduced by Telek and collaborators in a series of papers [24, 25, 26]. Both Zakian's and the CME method are based on the following idea. Let $(w_n, \beta_n)_{n=1}^N$ be the parameters of an Abate–Whitt method. Substituting the definition of $\hat{f}$ into the defining formula (5) of the Abate–Whitt method gives
\[
f_N(t) = \sum_{n=1}^{N} \frac{w_n}{t} \hat{f}\!\left(\frac{\beta_n}{t}\right)
= \sum_{n=1}^{N} \frac{w_n}{t} \int_0^\infty e^{-\frac{\beta_n}{t} x} f(x)\, dx
= \int_0^\infty f(x) \left( \sum_{n=1}^{N} w_n e^{-\beta_n \frac{x}{t}} \right) \frac{1}{t}\, dx
= \int_0^\infty f(ty) \left( \sum_{n=1}^{N} w_n e^{-\beta_n y} \right) dy. \tag{9}
\]

Definition 2.3. The Dirac approximant of the Abate–Whitt method $(w_n, \beta_n)_{n=1}^N$ is the function
\[
\rho_N(y) = \sum_{n=1}^{N} w_n e^{-\beta_n y}.
\]
If in (9) we replace $\rho_N(y)$ with the Dirac delta $\delta_1(y)$ at the point $y = 1$, we recover $f(t)$ exactly:
\[
f(t) = \int_0^\infty f(ty)\, \delta_1(y)\, dy.
\]
We can create an effective Abate–Whitt method by choosing the parameters $(w_n, \beta_n)_{n=1}^N$ in such a way that the Dirac approximant $\rho_N(y)$ is “close” to the Dirac delta $\delta_1(y)$ in a suitable sense. The two methods use different approaches.

For the CME method [24], the authors construct a bell-shaped Dirac approximant; that is, a (continuous) function $\rho_N(y)$ that assumes a large value at $y = 1$, with most of its mass concentrated near $y = 1$ (hence the name Concentrated Matrix Exponential), and that tends rapidly to zero away from 1. The authors choose the nodes $\beta_n$ equispaced on a vertical line (just as in the Euler method), and find the appropriate weights $w_n$ by an optimization procedure, aiming to minimize the normalized variance of the function $\rho_N(y)$ (see Definition 4.2).

Zakian [48, 49] instead takes Laplace Transforms,
\[
\hat{\delta}_1(s) = \int_0^\infty e^{-sy} \delta_1(y)\, dy = e^{-s},
\qquad
\hat{\rho}_N(s) = \int_0^\infty e^{-sy} \left( \sum_{n=1}^{N} w_n e^{-\beta_n y} \right) dy = \sum_{n=1}^{N} \int_0^\infty w_n e^{-(s+\beta_n)y}\, dy = \sum_{n=1}^{N} \frac{w_n}{\beta_n + s},
\]
and chooses the parameters so as to make the two transformed functions close. If we set $z = -s$, this becomes a classical problem: approximating the exponential $e^z$ with a rational function $\hat{\rho}_N(-z)$.

Definition 2.4. The rational approximant of the Abate–Whitt method $(w_n, \beta_n)_{n=1}^N$ is
\[
\hat{\rho}_N(-z) = \sum_{n=1}^{N} \frac{w_n}{\beta_n - z}.
\]
Zakian suggests choosing $\hat{\rho}_N(-z)$ to be an accurate rational approximation of the exponential $e^z$, in order to produce an accurate Abate–Whitt method. However, recall that the Inverse Laplace Transform is ill-posed: small perturbations to $\hat{g}$ can result in big perturbations to $g$. Thus, even if $\|\hat{\rho}_N(-z) - \hat{\delta}_1(-z)\|$ is small in a suitable norm, this does not mean that $\rho_N(y) - \delta_1(y)$ is small as well. Indeed, we cannot even use a classical norm to measure this error, since $\delta_1(y)$ is a distribution and not a function in the classical sense. In the next section, we make this approach more rigorous.
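To make formula (5) concrete, here is a minimal Python sketch (an illustration of ours, not code from the paper) that builds the Gaver–Stehfest parameters from the expressions quoted above and applies the method to $\hat{f}(s) = 1/(s+1)$, whose inverse transform is $e^{-t}$:

```python
import numpy as np
from math import comb, factorial, log

def gaver_stehfest_params(Np):
    """Nodes beta_n = n*ln(2) and weights w'_n of the Gaver-Stehfest
    method (all real), following the formulas from [3, Section 4]."""
    assert Np % 2 == 0
    beta = np.array([n * log(2.0) for n in range(1, Np + 1)])
    w = np.zeros(Np)
    for n in range(1, Np + 1):
        s = sum(j ** (Np // 2 + 1) / factorial(Np // 2)
                * comb(Np // 2, j) * comb(2 * j, j) * comb(j, n - j)
                for j in range((n + 1) // 2, min(n, Np // 2) + 1))
        w[n - 1] = (-1) ** (Np // 2 + n) * log(2.0) * s
    return w, beta

def abate_whitt(fhat, t, w, beta):
    """Formula (5): f_N(t) = sum_n (w_n/t) * fhat(beta_n/t)."""
    return sum(wn / t * fhat(bn / t) for wn, bn in zip(w, beta))

w, beta = gaver_stehfest_params(10)
approx = abate_whitt(lambda s: 1.0 / (s + 1.0), 1.0, w, beta)
print(abs(approx - np.exp(-1.0)))  # small, but limited by cancellation
```

As the text remarks, pushing $N'$ much higher in binary64 degrades the result, because the weights grow and cancellation sets in.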
Zakian suggests using the $(N-1, N)$-th Padé approximant of the exponential $e^z$ as the rational approximant $\hat{\rho}_N(-z)$. With this choice, $\hat{\rho}_N(-z)$ is an excellent approximation of $e^z$ in a neighbourhood of $z = 0$, but it gets progressively worse as $|z|$ grows. As noted in [49], the method is exact when $f$ is a polynomial of degree at most $2N - 1$. In our experiments we found that Zakian's method exhibits fast convergence, although it suffers from numerical instability, evident from $N' = 5$ onward.

The use of Padé approximants is also discussed by Wellekens [47], refining an approach due to Vlach [45] (see also [3]). The idea is to approximate the exponential in (4) with a rational function, whose poles and residues are the nodes and weights of the Abate–Whitt method. While this structure is similar to the method we propose later, they differ in a key aspect: Wellekens approximates the exponential on the contour of the Bromwich integral, while the approximation domain $\Omega$ of the TAME method is tailored to the function $f$.

3 Accuracy of AW methods through rational approximants

In this section we provide theoretical background for Zakian's approach, proving that an accurate rational approximation of $\exp(z)$ gives an accurate Abate–Whitt method. The relation between rational approximation and the accuracy of a method for the inverse Laplace transform is not new: a discussion appears for instance in [43], though formulated in terms of integrals.

Definition 3.1. We say that an Abate–Whitt method $(w_n, \beta_n)_{n=1}^N$ is $\varepsilon$-accurate on $\Omega \subseteq \mathbb{C}$ if
\[
\|\exp(z) - \hat{\rho}_N(-z)\|_{\infty,\Omega} = \left\| \exp(z) - \sum_{n=1}^{N} \frac{w_n}{\beta_n - z} \right\|_{\infty,\Omega} \le \varepsilon. \tag{10}
\]

Based on this definition, we obtain accuracy results for functions of class SE and ME; we start with the SE case, since its proof is simpler.

Theorem 3.2. Let $f(t)$ be a SE function
\[
f(t) = \sum_{m=1}^{M} c_m e^{\alpha_m t}.
\]
Let $(w_n, \beta_n)_{n=1}^N$ be an Abate–Whitt method that is $\varepsilon$-accurate on a region $\Omega$ which contains $\alpha_1 t, \dots, \alpha_M t$. Then, the error of the method at the point $t$ is bounded by
\[
|f(t) - f_N(t)| \le \left( \sum_{m=1}^{M} |c_m| \right) \varepsilon.
\]

Proof. Since
\[
\hat{f}(s) = \sum_{m=1}^{M} \frac{c_m}{s - \alpha_m},
\]
we have
\[
|f(t) - f_N(t)| = \left| \sum_{m=1}^{M} c_m e^{\alpha_m t} - \sum_{n=1}^{N} \frac{w_n}{t} \sum_{m=1}^{M} \frac{c_m}{\frac{\beta_n}{t} - \alpha_m} \right|
= \left| \sum_{m=1}^{M} c_m \left( e^{\alpha_m t} - \sum_{n=1}^{N} \frac{w_n}{\beta_n - \alpha_m t} \right) \right|
\le \sum_{m=1}^{M} |c_m|\, \varepsilon.
\]
In the last step, we have used the $\varepsilon$-accurateness condition (10) in the points $\alpha_m t$. □

Remark 3.3. When $M = 1$, the inequality in this theorem becomes an equality. So we also have a converse negative result: let $z^* \in \mathbb{C}$ be a point for which $|\exp(z^*) - \hat{\rho}_N(-z^*)| = C > \varepsilon$; then, if we choose $\alpha, t^*$ such that $\alpha t^* = z^*$, we have $|f(t^*) - f_N(t^*)| = C > \varepsilon$ for the function $f(t) = e^{\alpha t}$. Or, informally: given a point $z^*$ in which the rational approximant of an Abate–Whitt method is inaccurate, we can find an exponential function $f$ and a point $t^*$ for which the method is inaccurate. Since no rational approximation of the exponential can be accurate on the whole complex plane, this means that no Abate–Whitt method can be accurate for all exponential functions and all points $t$.

Therefore, the quality of the Inverse Laplace Transform computed with an Abate–Whitt method depends on the approximation error of $e^z$ by the rational function $\hat{\rho}_N(-z)$ at the points $\{\alpha_m t\}_{m=1}^M$. Given $f$, the coefficients $c_m$ are fixed, but we can increase the order $N$ and choose the parameters $(w_n, \beta_n)_{n=1}^N$ opportunely, to obtain rational approximations of $e^z$ which get better on the points $\{\alpha_m t\}_{m=1}^M$ as $N$ increases. This observation ensures that a suitable family of Abate–Whitt approximations can achieve convergence when $N \to \infty$ (at least in exact arithmetic).
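Definition 3.1 suggests a direct numerical check: sample $\Omega$ (or its boundary) and take the maximum deviation between $e^z$ and the rational approximant. A small sketch we add for illustration (function names are ours):

```python
import numpy as np

def eps_accuracy(w, beta, omega_pts):
    """Approximate the left-hand side of (10): the maximum over the
    sample points of |exp(z) - sum_n w_n/(beta_n - z)|."""
    w = np.asarray(w, dtype=complex)
    beta = np.asarray(beta, dtype=complex)
    z = np.asarray(omega_pts, dtype=complex)
    rho = (1.0 / (beta[None, :] - z[:, None])) @ w   # rho_hat(-z) at each z
    return np.max(np.abs(np.exp(z) - rho))
```

By Theorem 3.2, this sampled $\varepsilon$ multiplied by $\sum_m |c_m|$ bounds the error for every SE function whose scaled exponents $\alpha_m t$ lie among the sampled points.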
To extend the result to the class of ME functions, we need a few technical tools.

Definition 3.4. The field of values, also called numerical range, of a matrix $A \in \mathbb{C}^{d \times d}$ is the complex set
\[
W(A) = \{ x^* A x : x \in \mathbb{C}^d, \|x\| = 1 \}.
\]

The field of values is a well-known tool in linear algebra; here we recall only a few classical results (see, e.g., [10, Section I.2] or [22]).

Lemma 3.5. The following properties hold.
1. We have the inclusions $\mathrm{hull}(\Lambda(A)) \subseteq W(A) \subseteq B(0, \|A\|_2)$, where $\mathrm{hull}(\Lambda(A))$ is the convex hull of the eigenvalues of $A$. The left inclusion is an equality when $A$ is a normal matrix.
2. Translation and rescaling of a matrix change its field of values in the same way: $W(\alpha A + \beta I) = \{\alpha z + \beta : z \in W(A)\}$.
3. If $B$ is a principal submatrix of $A$, then $W(B) \subseteq W(A)$.
4. Let $J_0 \in \mathbb{R}^{d \times d}$ be the size-$d$ Jordan block with eigenvalue 0. We have $W(J_0) = B\bigl(0, \cos\frac{\pi}{d+1}\bigr) \subseteq B(0, 1)$.

An important result relating the field of values and matrix functions is the following.

Theorem 3.6 (Crouzeix–Palencia, [16]). Let $A$ be a square matrix, and let $\phi(z)$ be a holomorphic function on $W(A)$. Then,
\[
\|\phi(A)\|_2 \le (1 + \sqrt{2})\, \|\phi\|_{\infty, W(A)},
\]
where $\phi(A)$ denotes the extension of $\phi$ to square matrices. It is conjectured that the constant $1 + \sqrt{2}$ can be replaced by 2 (Crouzeix's conjecture).

With these tools, we extend our error bound to matrices.

Theorem 3.7. Let $Q$ be a $d \times d$ matrix and $f(t) = \exp(tQ)$. Let $(w_n, \beta_n)_{n=1}^N$ be an Abate–Whitt method that is $\varepsilon$-accurate on a region $\Omega$ which contains $W(tQ)$. Then, the error of the ILT at the point $t$ is bounded by
\[
\|f(t) - f_N(t)\|_2 \le (1 + \sqrt{2})\varepsilon.
\]

Proof. The Laplace Transform of $f$ is $\hat{f}(s) = (sI - Q)^{-1}$. This can be proven by using the spectral representation of a matrix function [31, Section 7.9]. The Laplace Transform is well-defined when $\mathrm{Re}(s) > \mathrm{Re}(\Lambda(Q))$, but as discussed in Section 2, it can be extended to $s \notin \Lambda(Q)$. The Abate–Whitt approximant of $f$ is
\[
f_N(t) = \sum_{n=1}^{N} \frac{w_n}{t} \hat{f}\!\left(\frac{\beta_n}{t}\right) = \sum_{n=1}^{N} \frac{w_n}{t} \left( \frac{\beta_n}{t} I - Q \right)^{-1} = \sum_{n=1}^{N} w_n (\beta_n I - tQ)^{-1}.
\]
We apply the Crouzeix–Palencia Theorem 3.6 to the function
\[
\phi(z) = e^z - \sum_{n=1}^{N} \frac{w_n}{\beta_n - z},
\]
obtaining
\[
\left\| \exp(tQ) - \sum_{n=1}^{N} w_n (\beta_n I - tQ)^{-1} \right\|_2 \le (1 + \sqrt{2})\varepsilon. \qquad \square
\]

Now we can obtain any ME function by choosing an appropriate matrix $Q$ and taking a left and right vector product.

Corollary 3.8. Let $v, u \in \mathbb{C}^d$ and $Q \in \mathbb{C}^{d \times d}$. Let $f(t) = v^* \exp(tQ)\, u$. Let $(w_n, \beta_n)_{n=1}^N$ be an Abate–Whitt method that is $\varepsilon$-accurate on a region $\Omega$ which contains $W(tQ)$. Then,
\[
|f(t) - f_N(t)| \le (1 + \sqrt{2})\varepsilon \|v\|_2 \|u\|_2.
\]

Corollary 3.9. Let $f(t) = \frac{t^b}{b!} e^{\alpha t}$, with $b \in \mathbb{N}$ and $\alpha \in \mathbb{C}$. Let $(w_n, \beta_n)_{n=1}^N$ be an Abate–Whitt method that is $\varepsilon$-accurate on a region $\Omega$ which contains $B(\alpha t, t)$. Then,
\[
|f(t) - f_N(t)| \le (1 + \sqrt{2})\varepsilon.
\]

Proof. Let $J$ be the $(b+1) \times (b+1)$ Jordan block with eigenvalue $\alpha$; then we have
\[
J = \begin{pmatrix} \alpha & 1 & & \\ & \alpha & \ddots & \\ & & \ddots & 1 \\ & & & \alpha \end{pmatrix},
\qquad
\exp(tJ) = e^{\alpha t} \begin{pmatrix} 1 & t & \frac{t^2}{2} & \cdots & \frac{t^b}{b!} \\ & 1 & t & \ddots & \vdots \\ & & \ddots & \ddots & \frac{t^2}{2} \\ & & & 1 & t \\ & & & & 1 \end{pmatrix},
\]
and $W(tJ) \subset B(\alpha t, t)$ thanks to the properties in Lemma 3.5. The result now follows from Corollary 3.8, by setting $v = e_1$, $u = e_{b+1}$, and noting that $f(t) = v^* \exp(tJ)\, u$. □

If a function $f$ is close to a function $g$ for which we know the error of an Abate–Whitt method (for example, if $g$ is in the SE class), then we can bound the error of the ILT with the Abate–Whitt method for $f$.

Theorem 3.10. Let $f(t)$ be a function, and suppose that $g(t)$ is an approximation of $f$ with error $\eta$ on $(0, \infty)$, i.e.,
\[
\|f - g\|_{\infty, \mathbb{R}_+} \le \eta.
\]
Let $(w_n, \beta_n)_{n=1}^N$ be an Abate–Whitt method, and let $\rho_N$ be the Dirac approximant of the method.
Then, the error of the method at the point $t$ is bounded by
\[
|f(t) - f_N(t)| \le (1 + \|\rho_N\|_1)\, \eta + |g(t) - g_N(t)|.
\]

Proof. We write
\[
f(t) - f_N(t) = f(t) - \int_0^\infty f(ty)\, \rho_N(y)\, dy
= \int_0^\infty \bigl( g(ty) - f(ty) \bigr) \rho_N(y)\, dy + g(t) - \int_0^\infty g(ty)\, \rho_N(y)\, dy + \bigl( f(t) - g(t) \bigr).
\]
We estimate the first term as
\[
\left| \int_0^\infty \bigl( g(ty) - f(ty) \bigr) \rho_N(y)\, dy \right|
\le \int_0^\infty |g(ty) - f(ty)|\, |\rho_N(y)|\, dy
\le \max_{y \in (0,\infty)} |g(ty) - f(ty)| \cdot \int_0^\infty |\rho_N(y)|\, dy
= \|g - f\|_{\infty,\mathbb{R}_+} \cdot \|\rho_N\|_1 \le \eta\, \|\rho_N\|_1.
\]
The second term is the Abate–Whitt approximation error $g(t) - g_N(t)$, while the third term is bounded by $|f(t) - g(t)| \le \eta$. Putting everything together, we obtain
\[
|f(t) - f_N(t)| \le \eta\, (1 + \|\rho_N\|_1) + |g(t) - g_N(t)|. \qquad \square
\]

Combining Theorem 3.10 and Theorem 3.2 we get the following result.

Theorem 3.11. Let $f(t)$ be a function, and suppose that $g(t) = \sum_{m=1}^{M} c_m e^{\alpha_m t}$ is an approximation of $f$ with error $\eta$,
\[
\left\| f(t) - \sum_{m=1}^{M} c_m e^{\alpha_m t} \right\|_{\infty, \mathbb{R}_+} \le \eta.
\]
Let $(w_n, \beta_n)_{n=1}^N$ be an Abate–Whitt method that is $\varepsilon$-accurate on a region $\Omega$ which contains $\alpha_1 t, \dots, \alpha_M t$, and let $\rho_N$ be the Dirac approximant of the method. Then, the error of the method at the point $t$ is bounded by
\[
|f(t) - f_N(t)| \le (1 + \|\rho_N\|_1)\, \eta + \left( \sum_{m=1}^{M} |c_m| \right) \varepsilon. \tag{11}
\]

Let us comment on this result. It is not immediate how to choose parameters that make the right-hand side of (11) small. To approximate $f(t)$ with a small error $\eta$, we may have to choose a SE function with large weights $c_m$. Symmetrically, to approximate $\hat{\rho}_N(s)$ with a small error $\varepsilon$, we may have to choose an Abate–Whitt method with large $\|\rho_N\|_1$. But if $\rho_N \approx \delta_1$, we can expect $\|\rho_N\|_1$ not to be much larger than $\|\delta_1\|_1 = 1$; say, $\|\rho_N\|_1 \le 10$. If this bound holds uniformly for a family of Abate–Whitt methods, we can first choose $\eta$ to make the first summand in (11) small, and then select an Abate–Whitt method in the family to make $\varepsilon$ (and hence the second summand in (11)) small.

We use Theorem 3.11 to obtain a bound for a LS-class function with a finite measure $\mu$, i.e., one such that $\mu(\mathbb{R}_+) < \infty$. Note that in this case $f(0) = \int_0^\infty d\mu(x) = \mu(\mathbb{R}_+)$ is also defined, using formula (3), and finite; hence an alternative characterization is: LS functions that do not diverge in 0.

Proposition 3.12. Let $f$ be a LS function with a finite measure $\mu$. Then, for each $\varepsilon > 0$ there exists a SE-class function $g(t) = \sum_{m=1}^{M} c_m e^{-x_m t}$ such that $\|f - g\|_{\infty,\mathbb{R}_+} \le \varepsilon$. We can take $g(t)$ to have $c_m > 0$, $\sum_{m=1}^{M} c_m = \mu(\mathbb{R}_+)$, and $x_1, \dots, x_M \in (0, L]$, where $L > 0$ is such that
\[
\frac{\mu((L, \infty))}{\mu(\mathbb{R}_+)} \le \varepsilon.
\]

Proof. We assume, up to scaling, that $\mu$ is a probability measure, i.e., $\mu(\mathbb{R}_+) = 1$. Let $F(x)$ be its cumulative distribution function (CDF). If $X$ is a random variable with distribution $F(x)$, then the distribution of the clamped random variable $\min(X, L)$ is
\[
F_L(x) = \begin{cases} F(x) & x < L, \\ 1 & x \ge L. \end{cases}
\]
By the Glivenko–Cantelli theorem (uniform strong law of large numbers, [30]), if the reals $x_1, x_2, \dots$ are sampled from the distribution $F_L$, then the CDF $G_M(t)$ of the measure $\frac{1}{M} \sum_{m=1}^{M} \delta_{x_m}$ satisfies, almost surely, $\lim_{M \to \infty} \|F_L - G_M\|_{\infty,\mathbb{R}_+} = 0$. In particular, this implies that for each $\varepsilon$ there exist choices of $x_1, \dots, x_M \in (0, L]$ such that $\|F_L - G_M\|_{\infty,\mathbb{R}_+} \le \varepsilon$. Then $\|F - G_M\|_{\infty,\mathbb{R}_+} \le \varepsilon$ holds too, since for any $x > L$ we have $G_M(x) = 1$ and $F(x) \ge 1 - \varepsilon$.

We set $g(t) = \int_0^\infty e^{-xt}\, dG_M(x) = \frac{1}{M} \sum_{m=1}^{M} e^{-t x_m}$, a SE function. Integrating by parts, we have
\[
|f(t) - g(t)| = \left| \int_0^\infty e^{-xt} \bigl( dF(x) - dG_M(x) \bigr) \right|
= \left| \int_0^\infty \left( \frac{d}{dx} e^{-xt} \right) \bigl( F(x) - G_M(x) \bigr)\, dx \right|
\le \int_0^\infty \left| \frac{d}{dx} e^{-xt} \right| dx \cdot \varepsilon = \varepsilon.
\]
To justify rigorously the use of integration by parts even when the measures are not Lebesgue-continuous, we can use for instance [11, Theorem 18.4]. □
The convergence speed (as $M$ grows) of the approximation of LS functions with SE functions is also studied in detail in [14]. Combining Theorem 3.11 and Proposition 3.12, we obtain the following result.

Theorem 3.13. Let $f$ be a LS function with a finite measure $\mu$. Let $L > 0$ and $\eta > 0$ be such that $\mu((L, \infty)) \le \eta\, \mu(\mathbb{R}_+)$. Let $(w_n, \beta_n)_{n=1}^N$ be an Abate–Whitt method that is $\varepsilon$-accurate on $\Omega = [-L, 0)$, and let $\rho_N$ be the Dirac approximant of the method. Then, the error of the method at the point $t$ is bounded by
\[
|f(t) - f_N(t)| \le (1 + \|\rho_N\|_1)\, \eta + \mu(\mathbb{R}_+)\, \varepsilon.
\]
Moreover, if the Abate–Whitt method is $\varepsilon$-accurate on the whole half-line $(-\infty, 0)$, we can let $L \to \infty$ and $\eta \to 0$, obtaining
\[
|f(t) - f_N(t)| \le \mu(\mathbb{R}_+)\, \varepsilon.
\]

Remark 3.14. Our previous theorems are valid for a function $f(t)$ in the SE or ME class that can take, in the most general case, complex values. Functions of the LS class are positive (since they are integrals of a positive function with respect to a positive measure). However, the Laplace Transform, its inverse, and the Abate–Whitt methods are linear: if we can recover both functions $g_1$ and $g_2$ with a small error, we can also recover any linear combination $c_1 g_1 + c_2 g_2$. Therefore, if a function $f$ can be written as $f(t) = g_1(t) - g_2(t)$ where $g_1, g_2$ are LS functions with measures $\mu_1$ and $\mu_2$, then Theorem 3.13 is valid also for $f$, with $\mu_1(\mathbb{R}_+) + \mu_2(\mathbb{R}_+)$ in place of $\mu(\mathbb{R}_+)$.

4 Bounds based on moments

We now present bounds (and an estimate) based on the moments of a Dirac approximant $\rho_N$, which are valid under mild assumptions on the regularity of $f$.

Definition 4.1. Let $\rho : \mathbb{R}_+ \to \mathbb{R}$ be a function. The moments of $\rho$ are
\[
\mu_0 = \int_0^\infty \rho(y)\, dy, \qquad \mu_1 = \int_0^\infty y\, \rho(y)\, dy, \qquad \mu_2 = \int_0^\infty y^2 \rho(y)\, dy.
\]
The shifted moments are
\[
\nu_2 = \int_0^\infty (y-1)^2 \rho(y)\, dy, \qquad \tilde{\nu}_2 = \int_0^\infty (y-1)^2 |\rho(y)|\, dy.
\]
The shifted moment $\nu_2$ can be expressed through $\mu_0, \mu_1, \mu_2$ as
\[
\nu_2 = \int_0^\infty (y^2 - 2y + 1)\, \rho(y)\, dy = \mu_2 - 2\mu_1 + \mu_0.
\]
If $\rho$ is non-negative, then $\nu_2(\rho) = \tilde{\nu}_2(\rho)$.

It is a classical result that the moments of a function (when they exist) can be expressed through the derivatives of its Laplace transform in 0:
\[
\mu_0 = \hat{\rho}(0), \qquad -\mu_1 = \left. \frac{d}{ds} \hat{\rho}(s) \right|_{s=0}, \qquad \mu_2 = \left. \frac{d^2}{ds^2} \hat{\rho}(s) \right|_{s=0}; \tag{12}
\]
see, e.g., [11, Section 21], which shows this fact more generally for probability distributions. Another quantity related to moments appears prominently in [25].

Definition 4.2. Let $\rho : \mathbb{R}_+ \to \mathbb{R}_+$ be a function with finite moments $\mu_0, \mu_1, \mu_2$. The Squared Coefficient of Variation (SCV) is
\[
\mathrm{SCV}(\rho) = \frac{\mu_0 \mu_2}{\mu_1^2} - 1.
\]
If $\rho$ is the pdf of a random variable $X$, then the SCV is the normalized variance of $X$: $\mathrm{SCV}(\rho) = \mathrm{Var}(X)/\mathbb{E}(X)^2$. In general, the relation between the SCV and the second shifted moment is
\[
\mathrm{SCV}(\rho) = \frac{\mu_0 \mu_2 - \mu_1^2}{\mu_1^2} = \frac{\mu_0(\nu_2 + 2\mu_1 - \mu_0) - \mu_1^2}{\mu_1^2} = \nu_2\, \frac{\mu_0}{\mu_1^2} - \left( 1 - \frac{\mu_0}{\mu_1} \right)^2.
\]
Note that if $\mu_0(\rho) = \mu_1(\rho) = 1$, then $\mathrm{SCV}(\rho) = \nu_2$.

In [24], the CME method is computed with an optimization procedure that minimizes $\mathrm{SCV}(\rho_N)$, and the following bound is obtained. That article assumes $\mu_0 = 1$ for the definition of the SCV and for Theorem 4.3; however, the result can easily be extended to a generic non-negative function $\rho$ by rescaling it as $\frac{1}{\mu_0(\rho)} \rho$.

Theorem 4.3 ([24, Theorem 4]). Let $\rho : \mathbb{R}_+ \to \mathbb{R}_+$ be a non-negative function with finite moments $\mu_0, \mu_1, \mu_2$, and assume that $\mu_0 = \mu_1 = 1$. Let $f : \mathbb{R}_+ \to \mathbb{R}$ be bounded by a constant $H$, and Lipschitz-continuous with constant $L$ at the point $t$, i.e., $|f(t)| \le H$ and $|f(t) - f(t_1)| \le L|t - t_1|$ for $t_1 \ge 0$. If $f_N(t) = \int_0^\infty f(tx)\, \rho(x)\, dx$, then the error of this approximation is bounded by
\[
|f_N(t) - f(t)| \le \frac{3}{2} \bigl( H L^2 t^2 \bigr)^{1/3} \bigl( \mathrm{SCV}(\rho) \bigr)^{1/3}.
\]
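For a Dirac approximant, formula (12) gives closed forms: since $\hat{\rho}_N(s) = \sum_n \frac{w_n}{\beta_n + s}$, we get $\mu_0 = \sum_n \frac{w_n}{\beta_n}$, $\mu_1 = \sum_n \frac{w_n}{\beta_n^2}$, and $\mu_2 = 2\sum_n \frac{w_n}{\beta_n^3}$. A small sketch of ours (not from the paper) computing these quantities from the full, non-reduced parameter lists:

```python
import numpy as np

def dirac_moments(w, beta):
    """Moments of rho_N via derivatives of rho_hat at s = 0, cf. (12):
    rho_hat(s) = sum_n w_n/(beta_n + s), so each mu_k has a closed form."""
    w = np.asarray(w, dtype=complex)
    beta = np.asarray(beta, dtype=complex)
    mu0 = np.sum(w / beta).real
    mu1 = np.sum(w / beta**2).real
    mu2 = 2.0 * np.sum(w / beta**3).real
    nu2 = mu2 - 2.0 * mu1 + mu0        # second shifted moment
    scv = mu0 * mu2 / mu1**2 - 1.0     # Definition 4.2
    return mu0, mu1, mu2, nu2, scv
```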
We are interested in giving similar bounds for functions $\rho_N$ which are approximations of the Dirac delta distribution at 1. As the Dirac delta is a probability distribution with mean 1, we expect to have $\mu_0(\rho_N) \approx 1$ and $\mu_1(\rho_N) \approx 1$. For the Dirac approximants of the Abate–Whitt methods these are not exact equalities, so we give bounds in which $\mu_0(\rho_N)$ and $\mu_1(\rho_N)$ can differ from 1.

Theorem 4.4. Let $\rho : \mathbb{R}_+ \to \mathbb{R}$ be a function, with moments as in Definition 4.1. Let $f : \mathbb{R}_+ \to \mathbb{R}$ be a $C^2$ function and $f_N(t) = \int_0^\infty f(ty)\, \rho(y)\, dy$. Then
\[
|f_N(t) - f(t)| \le |\mu_0 - 1|\, |f(t)| + t\, |f'(t)|\, |\mu_1 - \mu_0| + \tfrac{1}{2}\, t^2\, \|f''\|_{\infty,\mathbb{R}_+}\, \tilde{\nu}_2.
\]

Proof. We have $\int_0^\infty \rho(y)\, dy = \mu_0$, therefore $\int_0^\infty f(t)\, \rho(y)\, dy = \mu_0 f(t)$. Then
\[
f_N(t) - f(t) = f_N(t) - \mu_0 f(t) + (\mu_0 - 1) f(t)
= \int_0^\infty f(ty)\, \rho(y)\, dy - \int_0^\infty f(t)\, \rho(y)\, dy + (\mu_0 - 1) f(t)
\]
\[
= (\mu_0 - 1) f(t) + \int_0^\infty \bigl( f(ty) - f(t) \bigr) \rho(y)\, dy
= (\mu_0 - 1) f(t) + \int_0^\infty \Bigl( f'(t)(ty - t) + \tfrac{1}{2} f''(\zeta_y)(ty - t)^2 \Bigr) \rho(y)\, dy
\]
\[
= (\mu_0 - 1) f(t) + t f'(t) \int_0^\infty (y - 1)\, \rho(y)\, dy + \tfrac{1}{2} t^2 \int_0^\infty f''(\zeta_y)(y - 1)^2 \rho(y)\, dy.
\]
The above expression is an equality; between the fourth and fifth steps we used the Taylor expansion of $f$ centered at $t$, evaluated at the point $ty$, with $\zeta_y$ the point of the Lagrange remainder. To obtain an upper bound, we take absolute values and get
\[
|f_N(t) - f(t)| \le |\mu_0 - 1|\, |f(t)| + t\, |f'(t)| \left| \int_0^\infty y\, \rho(y)\, dy - \int_0^\infty \rho(y)\, dy \right| + \tfrac{1}{2}\, t^2\, \|f''\|_{\infty,\mathbb{R}_+} \int_0^\infty (y-1)^2 |\rho(y)|\, dy
\]
\[
= |\mu_0 - 1|\, |f(t)| + t\, |f'(t)|\, |\mu_1 - \mu_0| + \tfrac{1}{2}\, t^2\, \|f''\|_{\infty,\mathbb{R}_+}\, \tilde{\nu}_2. \qquad \square
\]

We apply this theorem to an Abate–Whitt method, with $\rho = \rho_N$. For the CME method, $\rho_N$ is positive and concentrated near $y = 1$, so $\tilde{\nu}_2(\rho_N) = \nu_2(\rho_N)$ is small and the error bound is small as well. For the other methods, $\rho_N$ has both positive and negative components, and unfortunately this makes $\tilde{\nu}_2(\rho_N)$ orders of magnitude larger than $\nu_2(\rho_N)$ in practical cases. Thus, this error bound is too large to be useful. However, we can truncate the Taylor expansion at the first-order term, and compute just an approximation instead of an upper bound. We obtain the following estimate.

Definition 4.5. Let $\rho_N$ be the Dirac approximant of an Abate–Whitt method, with moments $\mu_0, \mu_1$ as in Definition 4.1, and let $f : \mathbb{R}_+ \to \mathbb{R}$ be a $C^1$ function. The first-order moment estimate of the Abate–Whitt approximation error is
\[
|f_N(t) - f(t)| \approx |\mu_0 - 1|\, |f(t)| + t\, |f'(t)|\, |\mu_1 - \mu_0|. \tag{13}
\]
We compare this estimate to the actual error in Figure 8.

Note that the coefficients $|\mu_0 - 1|$ and $|\mu_1 - \mu_0|$ can be related to the rational approximation of $e^z$ with $\hat{\rho}_N(-z)$. Recalling (12), we have $\mu_0(\rho_N) = \int_0^\infty \rho_N(y)\, dy = \hat{\rho}_N(0)$ and $e^0 = 1$, so
\[
1 - \mu_0 = \bigl( e^z - \hat{\rho}_N(-z) \bigr) \big|_{z=0}.
\]
Hence, if $0 \in \Omega$ and the Abate–Whitt method is $\varepsilon$-accurate on $\Omega$, then $|\mu_0 - 1| \le \varepsilon$. Similarly,
\[
\mu_1 - \mu_0 = \mu_1 - 1 + 1 - \mu_0 = \left. \frac{d}{dz} \bigl( \hat{\rho}_N(-z) - e^z \bigr) \right|_{z=0} + \bigl( e^z - \hat{\rho}_N(-z) \bigr) \big|_{z=0}.
\]
Hence $|\mu_1 - \mu_0|$ is small when both the value and the first derivative at zero of the rational approximation $\hat{\rho}_N(-z)$ are close to those of $e^z$.

5 Laplace transforms in queuing theory

In this section, we specialize our general bounds to the functions appearing in some applications in queuing theory.

5.1 Continuous-time Markov chains

The generator matrix (or rate matrix) of a continuous-time Markov chain (CTMC) is a matrix $Q \in \mathbb{R}^{d \times d}$ such that $Q_{ij} \ge 0$ whenever $i \ne j$ and $Q\mathbf{1} = 0$, where $\mathbf{1}$ is the vector with all entries equal to 1. In particular, $Q$ is a singular $-M$-matrix. The time-dependent distribution of a CTMC with initial probability distribution $\pi_0 \in \mathbb{R}^d$ is given by
\[
\pi(t) = \pi_0 \exp(Qt). \tag{14}
\]
On the other hand, its Laplace transform
\[
\hat{\pi}(s) = \pi_0 (sI - Q)^{-1}
\]
has a simpler algebraic expression, which is more convenient to work with. One of the reasons for this convenience is an appealing probabilistic interpretation for $s > 0$: $\hat{\pi}(s)$ is the expected state of the Markov chain at time $\tau$, where $\tau \sim \mathrm{Exp}(s)$ is an exponentially distributed random variable [30]. Often one can resort to algorithms and arguments that exploit the connection between this time $\tau$ and the many other exponentially distributed random variables that appear in the theory of CTMCs; see for instance [41].

This connection is useful also when working with other quantities defined in terms of a CTMC. An important example are phase-type distributions, which model the time it takes for a CTMC to reach a specified set of states. A phase-type distribution is a probability distribution on $[0, \infty)$ with probability density function (pdf)
\[
f(t) = \alpha^T \exp(tQ)\, q, \qquad q = -Q\mathbf{1} \ge 0, \tag{15}
\]
where $\alpha \in \mathbb{R}^d$ is a stochastic vector, $\mathbf{1} \in \mathbb{R}^d$ is the vector of all ones, and $Q$ is a subgenerator matrix, i.e., a matrix such that $Q_{ij} \ge 0$ for all $i \ne j$ and $-Q\mathbf{1} \ge 0$. For a probability distribution, one usually deals with the Laplace transform of its pdf, which is known as the Laplace–Stieltjes transform. Phase-type distributions appear together with their Laplace–Stieltjes transforms in various contexts in queuing theory; see for instance the examples in [1].

Clearly both (14) and (15) are ME functions, hence the bounds in Section 3 can be applied. With some manipulations, we obtain the following result.

Theorem 5.1. Let $f(t)$ be the pdf of a phase-type distribution (15), and let $F(t) = \int_0^t f(\tau)\, d\tau$ be the corresponding cumulative distribution function (CDF). Let $(w_n, \beta_n)_{n=1}^N$ be an Abate–Whitt method that is $\varepsilon$-accurate on a region $\Omega$ which contains $W(tQ)$. Then, the following bound holds for the inverse Laplace transform of $f(t)$:
\[
|f(t) - f_N(t)| \le (1 + \sqrt{2})\varepsilon \|q\|_1,
\]
and the following two bounds hold for that of $F(t)$, assuming $0 \in \Omega$:
\[
|F(t) - F_N(t)| \le \varepsilon + (1 + \sqrt{2})\varepsilon \sqrt{d},
\qquad
|F(t) - F_N(t)| \le \varepsilon + (1 + \sqrt{2})\varepsilon \bigl( -\alpha^T Q^{-1}\mathbf{1} \bigr) \|q\|_1.
\]
Note that $-\alpha^T Q^{-1}\mathbf{1}$ is the first moment of the phase-type distribution [30, Page 6105].

Proof. For the first bound, it is enough to use Corollary 3.8 and note that $\|\alpha\|_2 \le \|\alpha\|_1 = \alpha^T \mathbf{1} = 1$, and $\|q\|_2 \le \|q\|_1$.

For the bounds on $F(t)$, we integrate $f(t)$ to get the well-known expression for the CDF of a phase-type distribution, $F(t) = 1 - \alpha^T \exp(tQ)\mathbf{1}$. The function $F(t)$ is the sum of the constant function $F^{(1)}(t) = 1$ and of $F^{(2)}(t) = -\alpha^T \exp(tQ)\mathbf{1}$; hence we have
\[
|F(t) - F_N(t)| \le |F^{(1)}(t) - F^{(1)}_N(t)| + |F^{(2)}(t) - F^{(2)}_N(t)| \le \varepsilon + |F^{(2)}(t) - F^{(2)}_N(t)|,
\]
where we can bound the first term with $\varepsilon$ since the constant function 1 is SE with $\alpha = 0$. The second term is a ME function; to bound it, we can use Corollary 3.8, leading to
\[
|F^{(2)}(t) - F^{(2)}_N(t)| \le \varepsilon (1 + \sqrt{2}) \|\alpha\|_2 \|\mathbf{1}\|_2 \le \varepsilon (1 + \sqrt{2}) \|\alpha\|_1 \|\mathbf{1}\|_2 = \varepsilon (1 + \sqrt{2}) \sqrt{d}.
\]
Alternatively, since $Q$, $Q^{-1}$ and $\exp(tQ)$ commute, we can write $F^{(2)}(t) = -\alpha^T \exp(tQ)\mathbf{1} = -\alpha^T Q^{-1} \exp(tQ)\, Q\mathbf{1}$. This expression leads to the second bound, since $\|(-\alpha^T Q^{-1})^T\|_2 \le \|(-\alpha^T Q^{-1})^T\|_1 = -\alpha^T Q^{-1}\mathbf{1}$. Indeed, $Q^{-1} \le 0$, so the vector $-\alpha^T Q^{-1}$ has non-negative entries and its 1-norm reduces to the sum of its entries. □

Unfortunately, there is no simple expression for the field of values $W(Q)$ of a generator or subgenerator matrix, but the following results give inclusions.
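As a concrete illustration (a toy example of ours, not taken from the paper), the following sketch builds a small subgenerator $Q$, the phase-type density (15), and its transform $\hat{f}(s) = \alpha^T (sI - Q)^{-1} q$, which can then be fed to any Abate–Whitt method:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 3-phase subgenerator: off-diagonal >= 0, row sums <= 0.
Q = np.array([[-3.0, 1.0, 1.0],
              [0.5, -2.0, 0.5],
              [0.0, 1.0, -1.5]])
alpha = np.array([0.6, 0.3, 0.1])   # stochastic initial vector
q = -Q @ np.ones(3)                  # exit rates, q = -Q*1 >= 0

def f(t):
    """Phase-type density (15): alpha^T exp(tQ) q."""
    return alpha @ expm(t * Q) @ q

def fhat(s):
    """Its Laplace(-Stieltjes) transform: alpha^T (sI - Q)^{-1} q."""
    return alpha @ np.linalg.solve(s * np.eye(3) - Q, q)
```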
Let us recall that a uniformization rate for a (sub)generator matrix $Q \in \mathbb{R}^{d \times d}$ is any $\lambda \in \mathbb{R}$ such that $\lambda \ge \max_{i=1,\dots,d} |Q_{ii}|$; the definition comes from the concept of uniformization, a popular tool to discretize CTMCs [35, Section 6.7].

Theorem 5.2. Let $Q \in \mathbb{R}^{d \times d}$ be a generator or subgenerator matrix, and let $\lambda$ be a uniformization rate for it. Then, $W(Q) \subseteq B(-\lambda, \lambda\sqrt{d})$.

Proof. A subgenerator matrix can always be written as a principal submatrix of a $(d+1) \times (d+1)$ generator matrix, so in view of Lemma 3.5 we reduce to the case in which $Q$ is a generator. If $Q$ is a generator matrix and $\lambda$ is a uniformization rate, then $P = I + \frac{1}{\lambda} Q$ is a stochastic matrix. In particular, $\|P\|_\infty = 1$, where $\|\cdot\|_\infty$ is the operator norm induced by the max-norm on vectors. It is a classical inequality [23, Page 50-5] that $\|P\|_2 \le \sqrt{d}\, \|P\|_\infty$; in particular we have $W(P) \subseteq B(0, \|P\|_2) \subseteq B(0, \sqrt{d})$. This inclusion implies the result, thanks again to Lemma 3.5. □

A stronger bound is the following.

Theorem 5.3. The smallest rectangle in the complex plane (with its sides parallel to the axes) such that $W(Q) \subset \Omega_d$ for each generator or subgenerator matrix $Q \in \mathbb{R}^{d \times d}$ with uniformization rate $\lambda$ is
\[
\Omega_d =
\begin{cases}
\left[ -2\lambda,\ \frac{\lambda}{2}(\sqrt{2}-1) \right] + i\left[ -\frac{\lambda}{2},\ \frac{\lambda}{2} \right], & d = 2; \\[4pt]
\left[ -\bigl(1 + \frac{\sqrt{5}}{2}\bigr)\lambda,\ \frac{\lambda}{2}(\sqrt{3}-1) \right] + i\left[ -\frac{\sqrt{3}}{2}\lambda,\ \frac{\sqrt{3}}{2}\lambda \right], & d = 3; \\[4pt]
\left[ -\bigl(1 + \frac{\sqrt{6}}{2}\bigr)\lambda,\ \frac{\lambda}{2} \right] + i\left[ -\lambda,\ \lambda \right], & d = 4; \\[4pt]
\left[ -\bigl(1 + \frac{\sqrt{d+2}}{2}\bigr)\lambda,\ \frac{\lambda}{2}(\sqrt{d}-1) \right] + i\left[ -\frac{\sqrt{2}\lambda}{4}\sqrt{d + \sqrt{d^2 - 4d + 12}},\ \frac{\sqrt{2}\lambda}{4}\sqrt{d + \sqrt{d^2 - 4d + 12}} \right], & d \ge 5.
\end{cases}
\]

Proof. This result follows from the argument in Theorem 5.2, paired with the bound on the field of values $W(P)$ of a stochastic matrix given in [20, Theorem 6.9]. □

These bounds are valid for a generic generator matrix of given size. If we have information on the eigenvalues of the real and imaginary parts of a matrix, sharper bounds can be obtained for its field of values. The next theorem follows from the results in [22, Section 5.6], taking the very coarse discretization $\theta \in \{0, \frac{\pi}{2}, \pi, \frac{3\pi}{2}\}$.

Theorem 5.4. Let $A \in \mathbb{C}^{d \times d}$ be a matrix. Let $X = \frac{A + A^*}{2}$ be the real part of $A$ and $Y = \frac{A - A^*}{2i}$ be the imaginary part of $A$; $X$ and $Y$ are Hermitian matrices and thus have real eigenvalues. We have
\[
W(A) \subseteq [\lambda_{\min}(X), \lambda_{\max}(X)] + i\, [\lambda_{\min}(Y), \lambda_{\max}(Y)].
\]

5.2 Fluid queues

A setting where we can obtain useful results is that of Markov-modulated fluid models, or fluid queues [4, 28, 33, 44]. A fluid queue with transition matrix $Q \in \mathbb{R}^{d \times d}$ and rates $r_1, \dots, r_d$ is an infinite-dimensional continuous-time Markov process $(\varphi(t), \ell(t))$, where the random variable $\varphi(t) \in \{1, \dots, d\}$ (phase) is a CTMC with transition matrix $Q$, and the random variable $\ell(t) \in \mathbb{R}$ (level) is a continuous function of $t$ that evolves according to
\[
\frac{d}{dt} \ell(t) = r_{\varphi(t)}.
\]
This model is often paired with boundary conditions, for instance to enforce $\ell(t) \ge 0$. Fluid queues model buffers, telecommunication queues, or performance measures associated to being in a state $\varphi(t)$ for a certain period of time.

A fundamental quantity to analyze their transient distribution is the so-called first-return matrix. Let $\varphi(0) = i$ be a phase with $r_i > 0$, so that $\ell(t)$ is increasing at $t = 0$. Set
\[
\tau = \min\{ t > 0 : \ell(t) = \ell(0) \},
\]
the first-return time to the initial level, with the convention that $\tau = \infty$ if the level never returns to $\ell(0)$. The (time-dependent) first-return matrix $\Psi(t)$ is then defined by
\[
[\Psi(t)]_{ij} = \mathbb{P}[\tau \le t,\ \varphi(\tau) = j \mid \varphi(0) = i],
\]
i.e., the probability that the first return happens before time $t$ and, at the same time, the phase changes from $i$ to $j$.
Usually, one includes in $\Psi(t)$ only phases $i$ and $j$ with $r_i > 0$ and $r_j < 0$, since these conditions must hold for a return to the initial level to be possible; hence $\Psi(t) \in \mathbb{R}^{d_+ \times d_-}$, where $d_+, d_- \le d$ are the numbers of positive and negative rates, respectively.

The matrix-valued function $\Psi(t)$ is the cumulative distribution function (CDF) of the return time: $\Psi(t)$ is increasing in $t$, and converges for $t \to \infty$ to a finite substochastic matrix $\Psi(\infty)$. Its derivative $\frac{d}{dt} \Psi(t) = \psi(t)$ is the corresponding probability density function (pdf), which is non-negative for each $t$ and converges to 0 for $t \to \infty$. One can compute the Laplace–Stieltjes transform $\hat{\psi}(s)$ (and with it $\hat{\Psi}(s) = \frac{1}{s} \hat{\psi}(s)$) as the solution of a certain nonsymmetric algebraic Riccati equation; see [8, 9] for more detail and several algorithms. On the other hand, algorithms to compute $\Psi(t)$ directly [5, 6] are less common and more complicated. Once again, the convenience of working with Laplace transforms is related to the physical interpretation (for $s > 0$) of $\hat{\Psi}(s)$ as the probability that the first-return time is smaller than a random variable with exponential distribution $\mathrm{Exp}(s)$.

In order to apply the techniques introduced earlier, we recall the following expression for $\Psi(t)$.

Theorem 5.5 ([6, Lemma 2]). Consider a fluid queue model, and let $\lambda$ be a uniformization rate for its generator matrix $Q$. Then, there exist nonnegative matrices $\Psi^-_k \in \mathbb{R}^{d_+ \times d_-}$, for $k = 1, 2, \dots$ (dependent on $\lambda$), such that
\[
\Psi(t) = e^{-\lambda t} \sum_{h=1}^{\infty} \frac{(\lambda t)^h}{h!} \sum_{k=1}^{h} \Psi^-_k = e^{-\lambda t} \sum_{k=1}^{\infty} \Psi^-_k \sum_{h=k}^{\infty} \frac{(\lambda t)^h}{h!}. \tag{16}
\]
Moreover, $\lim_{t \to \infty} \Psi(t) = \Psi(\infty) = \sum_{k=1}^{\infty} \Psi^-_k$.

The matrices $\Psi^-_k$ have a physical interpretation using uniformization: they represent the probability that the first return to level $\ell(0)$ happens between the $k$-th and the $(k+1)$-st uniformization event; see [6] for more detail. Differentiating (16) term by term, we can obtain an expression for $\psi(t)$, i.e.,
\[
\psi(t) = \frac{d}{dt} \Psi(t) = \lambda e^{-\lambda t} \sum_{k=1}^{\infty} \Psi^-_k \frac{(\lambda t)^{k-1}}{(k-1)!}. \tag{17}
\]
Following the same approach as in Corollary 3.8, we can get the following result.

Theorem 5.6. Consider a fluid queue with uniformization rate $\lambda$, and let $f(t) = [\psi(t)]_{ij}$. Let $(w_n, \beta_n)_{n=1}^N$ be an Abate–Whitt method that is $\varepsilon$-accurate on a region $\Omega$ which contains $B(-t\lambda, t\lambda)$. Then, the error of the method at the point $t$ is bounded by
\[
|f(t) - f_N(t)| < \varepsilon \lambda (1 + \sqrt{2})\, [\Psi(\infty)]_{ij}.
\]

Proof. Let
\[
Q = \begin{pmatrix} -\lambda & \lambda & & \\ & -\lambda & \ddots & \\ & & \ddots & \lambda \\ & & & -\lambda \end{pmatrix},
\qquad
\exp(tQ) = e^{-t\lambda} \begin{pmatrix} 1 & t\lambda & \frac{(t\lambda)^2}{2} & \cdots & \frac{(t\lambda)^{K-1}}{(K-1)!} \\ & 1 & t\lambda & \ddots & \vdots \\ & & \ddots & \ddots & \frac{(t\lambda)^2}{2} \\ & & & 1 & t\lambda \\ & & & & 1 \end{pmatrix} \tag{18}
\]
be a scaled Jordan block of size $K \times K$ and its matrix exponential. Note that $W(tQ) \subseteq B(-t\lambda, t\lambda)$, thanks to the properties in Lemma 3.5. We truncate (17) after the first $K$ terms; thanks to the expression above for $\exp(tQ)$, we see that
\[
f^{(K)}(t) = \lambda \sum_{k=1}^{K} e^{-\lambda t} \frac{(\lambda t)^{k-1}}{(k-1)!}\, [\Psi^-_k]_{ij} = \alpha^T \exp(tQ)\, q,
\quad \text{with} \quad
\alpha = \begin{pmatrix} [\Psi^-_K]_{ij} \\ [\Psi^-_{K-1}]_{ij} \\ \vdots \\ [\Psi^-_2]_{ij} \\ [\Psi^-_1]_{ij} \end{pmatrix},
\quad
q = -Q\mathbf{1} = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ \lambda \end{pmatrix}. \tag{19}
\]
This is, essentially, a phase-type distribution, but up to a scaling factor, because in general
\[
\alpha^T \mathbf{1} = \sum_{k=1}^{K} [\Psi^-_k]_{ij} \ne 1.
\]
Now Corollary 3.8 gives
\[
|f^{(K)}(t) - f^{(K)}_N(t)| \le (1 + \sqrt{2})\varepsilon\, \|\alpha\|_2 \|q\|_2 \le (1 + \sqrt{2})\varepsilon\, \|\alpha\|_1 \|q\|_1 = (1 + \sqrt{2})\varepsilon \lambda \sum_{k=1}^{K} [\Psi^-_k]_{ij} \le (1 + \sqrt{2})\varepsilon \lambda\, [\Psi(\infty)]_{ij}.
\]
Since this bound is uniform in $K$, we can pass to the limit and obtain a bound for $f(t) - f_N(t)$. □

With a little more work, one can obtain an accuracy bound for $\Psi(t)$ as well.

Theorem 5.7. Consider a fluid queue with uniformization rate $\lambda$, and let $F(t) = [\Psi(t)]_{ij}$. Then, the error of the method at the point $t$ is bounded by
\[
|F(t) - F_N(t)| < \varepsilon F(\infty) + (1 + \sqrt{2})\varepsilon \lambda\, \mathbb{E}[\psi(t)]_{ij}.
\]
Here, $\mathbb{E}[\psi(t)] = \int_0^\infty t\, \psi(t)\, dt$ is the first moment of the function $\psi(t)$.

Proof. We compute a formula for $\Psi(t)$ analogous to the expression of the CDF of a phase-type distribution. First, we make some algebraic manipulations in the summations to obtain
\[
\Psi(t) = e^{-\lambda t} \sum_{h=1}^{\infty} \frac{(\lambda t)^h}{h!} \sum_{k=1}^{h} \Psi^-_k
= \left( e^{-\lambda t} \sum_{h=0}^{\infty} \frac{(\lambda t)^h}{h!} \right) \left( \sum_{k=1}^{\infty} \Psi^-_k \right) - e^{-\lambda t} \sum_{h=0}^{\infty} \frac{(\lambda t)^h}{h!} \sum_{k=h+1}^{\infty} \Psi^-_k
= \left( \sum_{k=1}^{\infty} \Psi^-_k \right) - e^{-\lambda t} \sum_{h=0}^{\infty} \frac{(\lambda t)^h}{h!} \sum_{k=h+1}^{\infty} \Psi^-_k.
\]
Then, we truncate the summations after the term $\Psi^-_K$, to get
\[
\Psi^{(K)}_{ij}(t) = \left( \sum_{k=1}^{K} [\Psi^-_k]_{ij} \right) - e^{-\lambda t} \sum_{h=0}^{\infty} \frac{(\lambda t)^h}{h!} \sum_{k=h+1}^{K} [\Psi^-_k]_{ij} = \alpha^T \mathbf{1} - \alpha^T \exp(tQ)\mathbf{1},
\]
with $Q$, $\alpha$ as in (18) and (19) above. Arguing as in the proof of Theorem 5.1, with the only difference that $\alpha^T \mathbf{1} \ne 1$, we have
\[
|\Psi^{(K)}(t) - \Psi^{(K)}_N(t)| \le \varepsilon\, \alpha^T \mathbf{1} + (1 + \sqrt{2})\varepsilon \lambda \bigl( -\alpha^T Q^{-1}\mathbf{1} \bigr) \le \varepsilon F(\infty) + (1 + \sqrt{2})\varepsilon \lambda \bigl( -\alpha^T Q^{-1}\mathbf{1} \bigr).
\]
It remains to show that $-\alpha^T Q^{-1}\mathbf{1}$ converges to the first moment of $[\Psi(t)]_{ij}$, when $K \to \infty$. To this purpose, we compute the remainder
\[
\mathbb{E}[f(t)] - \mathbb{E}[f^{(K)}(t)] = \int_0^\infty \bigl( t f(t) - t f^{(K)}(t) \bigr)\, dt
= \int_0^\infty (\lambda t) \sum_{k=K+1}^{\infty} e^{-\lambda t} \frac{(\lambda t)^{k-1}}{(k-1)!}\, [\Psi^-_k]_{ij}\, dt
= \sum_{k=K+1}^{\infty} [\Psi^-_k]_{ij} \int_0^\infty (\lambda t)\, e^{-\lambda t} \frac{(\lambda t)^{k-1}}{(k-1)!}\, dt
= \sum_{k=K+1}^{\infty} [\Psi^-_k]_{ij}\, \frac{k}{\lambda}.
\]
In particular, when $K = 0$, this computation gives an expression for the first moment of $\psi$ as a non-negative series,
\[
\mathbb{E}[f(t)] = \int_0^\infty t f(t)\, dt = \sum_{k=1}^{\infty} [\Psi^-_k]_{ij}\, \frac{k}{\lambda}.
\]
If $\mathbb{E}[\psi(t)]_{ij} < \infty$, the series must be summable, and this implies that
\[
\lim_{K \to \infty} \sum_{k=K+1}^{\infty} [\Psi^-_k]_{ij}\, \frac{k}{\lambda} = 0. \qquad \square
\]

6 AAA approximation

We have seen that the error of the Abate–Whitt approximant depends on the $\varepsilon$-accurate approximation of $e^z$ on $\Omega$ with the rational function $\hat{\rho}_N(-z)$. Hence, we can construct accurate Abate–Whitt methods by choosing suitable approximations. Zakian used the Padé approximant of $e^z$ at the point $z = 0$, which can be useful if $\Omega$ is close to $z = 0$, but for generic $\Omega$ it provides a worse approximation.

Therefore, we need an approach that can find a good $\varepsilon$-accurate rational approximation on an arbitrary region $\Omega$. There are many algorithms in the literature; the state of the art is the AAA algorithm, proposed by Nakatsukasa, Sète and Trefethen [32]. We refer to that paper for more details; below, we briefly present the algorithm.

6.1 AAA algorithm

The AAA algorithm computes a rational function which approximates a prescribed $f(z)$ on a given finite set of points $Z \subseteq \mathbb{C}$. One of the key tools is the so-called barycentric representation
\[
r(z) = \frac{n(z)}{d(z)} = \frac{\displaystyle \sum_{k=1}^{K} \frac{u_k f_k}{z - z_k}}{\displaystyle \sum_{k=1}^{K} \frac{u_k}{z - z_k}}. \tag{20}
\]
Here $u_1, \dots, u_K, z_1, \dots, z_K, f_1, \dots, f_K \in \mathbb{C}$ are parameters that will be chosen appropriately. Both $n(z)$ and $d(z)$ are rational functions of degree $(K-1, K)$. Since $z - z_k$ appears in denominators, one may think that the $z_k$ must be poles of $r(z)$. However, clearing denominators we obtain the equivalent expression
\[
r(z) = \frac{\displaystyle \sum_{k=1}^{K} u_k f_k \prod_{i \ne k} (z - z_i)}{\displaystyle \sum_{k=1}^{K} u_k \prod_{i \ne k} (z - z_i)}. \tag{21}
\]
By evaluating (21) at $z = z_k$, in both $n(z)$ and $d(z)$ all summands except one vanish, and we obtain $r(z_k) = f_k$. Therefore we see that $r(z)$ is a rational function of degree at most $(K-1, K-1)$ that takes the values $f_1, \dots, f_K$ in the support points $z_1, \dots, z_K$.

Evaluating a rational function in the barycentric representation (20) has surprisingly good numerical stability properties: even though the terms $\frac{1}{z - z_k}$ can become arbitrarily large, the ratio $\frac{n(z)}{d(z)}$ stays bounded, and the floating-point errors in computing $\frac{1}{z - z_k}$ for $z \approx z_k$ cancel out, since that sub-expression appears in both $n(z)$ and $d(z)$; see [32] for more discussion.
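A minimal sketch (ours; full implementations exist, e.g., the aaa code in Chebfun accompanying [32]) of evaluating the barycentric form (20), including the patch at support points where the formula reads 0/0:

```python
import numpy as np

def bary_eval(z, zk, fk, uk):
    """Evaluate r(z) = sum_k u_k f_k/(z - z_k) / sum_k u_k/(z - z_k),
    the barycentric form (20), at the 1-D array of points z."""
    z = np.asarray(z, dtype=complex)
    with np.errstate(divide="ignore", invalid="ignore"):
        C = 1.0 / (z[:, None] - zk[None, :])   # Cauchy matrix 1/(z - z_k)
        r = (C @ (uk * fk)) / (C @ uk)
    # at a support point, return the interpolated value f_k instead
    for j, zj in enumerate(zk):
        r[z == zj] = fk[j]
    return r
```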
The AAA algorithm takes as its input a function $f(z)$ that we wish to approximate with a rational function, and a finite set of points $Z \subseteq \mathbb{C}$, with $|Z| = L$, on which $f(z) \approx r(z)$ should hold. The algorithm iteratively chooses inside $Z$ a set of points to use as support points, adding one new point greedily at each iteration. We describe the iteration step $K+1$, assuming that the points $\{z_1, \dots, z_K\} \subseteq Z$ have already been selected, and set $f_k = f(z_k)$ for $k = 1, \dots, K$. In this way,
\[
f(z_k) = f_k = r(z_k) = \frac{n(z_k)}{d(z_k)},
\]
so the approximation is exact in the support points.

These interpolation conditions are not sufficient to determine uniquely the function $r(z)$ in (20): we also need to choose the weights $u_1, \dots, u_K$. We choose these weights so that $f(z) \approx r(z) = \frac{n(z)}{d(z)}$ on the points of $Z \setminus \{z_1, \dots, z_K\}$, which we shall label $Z_1, Z_2, \dots, Z_{L-K}$. To this purpose, we take
\[
(u_1, \dots, u_K) = \arg\min \left\{ \sum_{i=1}^{L-K} |f(Z_i) d(Z_i) - n(Z_i)|^2 : u \in \mathbb{C}^K, \|u\| = 1 \right\}. \tag{22}
\]
(Note that we can restrict to $\|u\| = 1$, since $r(z)$ does not depend on the scaling of $u$.) The problem (22) is, in effect, a linear least-squares problem in the weights $u$:
\[
(u_1, \dots, u_K) = \arg\min \{ \|Au\|_2^2 : u \in \mathbb{C}^K, \|u\| = 1 \}.
\]
With some computations, one sees that the associated matrix is $A = D_1 C - C D_2$, with
\[
C = \begin{pmatrix} \frac{1}{Z_1 - z_1} & \cdots & \frac{1}{Z_1 - z_K} \\ \vdots & \ddots & \vdots \\ \frac{1}{Z_{L-K} - z_1} & \cdots & \frac{1}{Z_{L-K} - z_K} \end{pmatrix} \in \mathbb{C}^{(L-K) \times K},
\]
and $D_1 = \mathrm{diag}(f(Z_1), \dots, f(Z_{L-K})) \in \mathbb{C}^{(L-K) \times (L-K)}$, $D_2 = \mathrm{diag}(f(z_1), \dots, f(z_K)) \in \mathbb{C}^{K \times K}$. The problem (22) can be solved with standard linear algebra tools: in the typical case where $L - K \ge K$, the optimal $u$ is the right singular vector associated with the minimum singular value of $A$. By computing $u$, we fix all the parameters in the rational function $r(z)$.

For the next step of the iteration, we add a new point $z_{K+1}$ to the set of support points: we use a greedy strategy, and select the point $Z_i$ on which the current approximation is worst:
\[
z_{K+1} = \arg\max \{ |f(z) - r(z)| : z \in \{Z_1, \dots, Z_{L-K}\} \}.
\]
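A sketch of ours for this weight-solve step via the SVD; the right singular vector belonging to the smallest singular value of $A = D_1 C - C D_2$ minimizes $\|Au\|_2$ over unit vectors:

```python
import numpy as np

def aaa_weights(Zrest, fZrest, zk, fk):
    """Solve (22): build A = D1*C - C*D2 over the non-support points
    Zrest, then return the right singular vector of the smallest
    singular value, which minimizes ||A u|| subject to ||u|| = 1."""
    C = 1.0 / (Zrest[:, None] - zk[None, :])
    A = fZrest[:, None] * C - C * fk[None, :]
    _, _, Vh = np.linalg.svd(A)
    return Vh[-1].conj()      # right singular vector for sigma_min
```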
However, c ρN(−z) has degree (N −1, N), while the AAA algorithm produces a rational function of degree (K −1, K −1). A modifica- tion of the algorithm to produce rational approximations of a general degree (M, N) is described in [19]; here, we adopt a simpler approach instead. We can regard the condition deg n(z) < deg d(z) as requiring that r(∞) = 0. This relation is similar to the interpolation conditions r(zk) = fk that are imposed in the support points. We would like to impose this condition already from the first step, hence starting our iter- ation from a “0th support point” z0 = ∞, f0 = f(z0) = 0. However, we cannot run the algorithm without modification with a support point equal to ∞, but we have to 26 modify slightly the barycentric representation (20). Namely, we set r(z) = K X k=1 ukfk z −zk u0 + K X k=1 uk z −zk . (24) Clearing denominators, we see that r(z) in (24) has degree (K −1, K). With this representation, the analogue of (22) is (u0, . . . , uK) = arg min{∥eAu∥2 2 : u ∈CK+1, ∥u∥= 1}. (25) The associated matrix is eA = D1 eC −eC eD2 ∈C(L−K)×(K+1), with eC =       1 1 Z1 −z1 · · · 1 Z1 −zK ... ... ... ... 1 1 ZL−K −z1 · · · 1 ZL−K −zK       , (26) and eD2 = diag(0, f(z1), . . . , f(zK)). Hence, the only modification required is including an initial column of ones in the matrix C, and a corresponding zero in D2; the rest of the iteration can proceed without modifications. Remark 6.1. With the same strategy, for any given f0 ∈C we can construct a rational approximant such that r(∞) = f0: it is sufficient to add the term u0f0 to the numerator n(z) of (24). We also make another modification to the algorithm to ensure that the non-real weights wn and nodes βn come in conjugate pairs: inside the main loop of the algo- rithm, whenever we add a non-real zk to the set of support points, we also add in the same iteration its conjugate zk+1 = zk. The computed weights uk in AAA then also come in conjugate pairs, in exact arithmetic; to compensate for machine arithmetic errors, we replace the pair (uk, uk+1) with  uk+uk+1 2 , uk+uk+1 2  . 27 We need a small modification also to the eigenvalue problem (23) to compute the poles; it becomes det          0 u0 u1 u2 · · · uK 1 −1 1 z1 −λ 1 z2 −λ ... ... 1 zK −λ          = 0. (27) Indeed, we can carry out row operations that preserve the determinant to transform this problem into the equivalent one det          u0 + PK k=1 uk λ−zk 0 0 0 · · · 0 1 −1 1 z1 −λ 1 z2 −λ ... ... 1 zK −λ          = 0. (28) Solving this eigenvalue problem provides the parameters of an Abate–Whitt method: since r(z) = c ρK(−z) = K X k=1 wk βk −z , we recover βk as the poles of r(z), and wk as the corresponding residues. The AAA algorithm with these modifications is described in Algorithm 1. We note that the number of computed support points N may be either Nmax or Nmax −1; the second case happens when we would like to add a pair of conjugate support points, but there is no room to do it. The number N ′ is also not known a priori: the poles are computed only at the end of the loop, and we do not know in advance how many are real. 7 Computing TAME parameters 7.1 Floating-point precision issues While some previous works focus on arbitrary precision arithmetic [2, 3], we deal with the case in which the computations in an Abate–Whitt method (5) are performed in the standard IEEE754 binary64 (double precision), possibly using poles and weights (wn, βn) precomputed in higher precision. 
We have seen that, in exact arithmetic, the error of the ILT approximation is bounded as |f(t) −fN(t)| ≤Cε, where ε is the rational approximation error and C is a constant depending on the class of f. We now assess the impact of inaccuracies in the computation of bf, such as the ones due to floating-point arithmetic. 28 Algorithm 1: The modified AAA algorithm Data: Finite set of points Z ⊆C, function f, maximum order Nmax, tolerance ε Result: Poles and weights (wn, βn)N n=1, either real in or conjugate pairs, such that ∥f(z) −PN n=1 wn βn−z∥∞,Z ≤ε and r(∞) = 0 with N ≤Nmax. eC ←1|Z|; u0 ←0; K ←0; while ∥f −r∥∞,Z > ε and K < Nmax do zK+1 = arg max{|f(z) −r(z)| : z ∈Z \ {z1, . . . , zK}}; Update eC according to (26): remove row corresponding to zK+1, add column corresponding to it; if zK+1 is real then K ←K + 1; else if K + 2 > Nmax then // No room to add a conjugate pair: // ignore zK+1 and terminate with K support points break; end zK+2 ←zK+1; Update eC according to (26): remove row corresponding to zK+2, add column corresponding to it; K ←K + 2; end (u0, u1, . . . , uK) ←arg min{∥eAu∥2 2 : u ∈CK+1, ∥u∥= 1}; end Compute β1, . . . , βK by solving (27) (in higher precision); Compute residues wk = n(βk) d′(βk), k = 1, . . . , K; Let us assume for simplicity that t = 1. Let g(βn) be the value obtained by computing bf(βi) numerically, and suppose that it has relative precision δ, i.e., g(βn) −bf(βn) bf(βn) ≤δ. In floating-point arithmetic, we can only ensure a δ at least as large as the machine precision, if not larger. We can write the computed value of the Abate–Whitt approximant as efN(1) = N X n=1 wng (βn) = N X n=1 wn bf (βn) (1 + δn), |δn| ≤δ. 29 0 10 20 10−46 10−30 10−14 102 N ε 0 10 20 10−1 103 107 1011 N max|wn| binary64 vpa Fig. 1: Approximation error ε and magnitude of weights max|wn| when running AAA with Ω= B(−5, 5) in either binary64 or VPA with 100 significant digits. Then we have the bound | efN(1) −fN(1)| ≤ N X n=1 |wn bf (βn)|δ and hence | efN(1) −f(1)| ≤ N X n=1 |wn bf (βn)|δ + Cε (29) This bound reveals that limiting the magnitude of the weights wn is important when we evaluate bf in limited precision, as already noted in [24, Section 5.1]. In practice, we observe that increasing N leads to a smaller ε but also to larger weights; hence increasing n after a certain point (which depends on the machine precision) is no longer beneficial. Remark 7.1. In the TAME method, the weights are computed by optimizing a function that is itself computed numerically, by evaluating the rational approximant c ρN(−z). If the computations in the main AAA loop (Algorithm 1) are performed with suffi- ciently many significant digits (e.g., 100 decimal digits of precision), then we observe the approximation error ε decreasing even below 10−16, but at the same time the mag- nitude of the weights increasing steadily. This is detrimental if we plan to compute the ILT in binary64, since the large weights cause a large numerical error irrespective of ε: in (29) the summation becomes the dominant term, even if ε is small. If we run the AAA algorithm in binary64 instead, then the error stagnates around the machine precision 2.2×10−16, and at that points increasing N further only adds spurious poles with weights that are of the order of the machine precision; in particular, the mag- nitude of the weights does not increase anymore. So running the AAA algorithm in binary64 acts as a safeguard against increasing weights. 
We observe this behavior in an example in Figure 1, and also its consequences later in Section 8.3. 30 7.2 Choice of Ω We have seen in Sections 3 and 5 that an Abate–Whitt method which is ε-accurate on Ωcan recover the original function f with an error proportional to ε (Theorems 3.2, 3.7, 5.1, 5.6, 3.13). We can use the AAA algorithm to construct a TAME method with a small ε. The first step is to select the region Ω, depending on the information we have on f. • Fluid queues. If f arises from a fluid queue model with uniformization rate λ (i.e., f(t) = Ψ(t) or f(t) = ψ(t)), then use Ω= B(−r, r), with r = λt. • ME class. If f is in the ME class (e.g. f is a phase type distribution), then Ωshould contain W(Q). With no information on Q apart from its dimension, Theorem 5.3 can be used. If the user has more information about W(Q), tighter bounds can be used, for instance the one in Theorem 5.4. • LS class. If f(t) is in the LS class, we use Ω= [−L, 0] with L chosen according to Theorem 3.13. • If f(t) can be approximated by a function of the above classes, use the corresponding Ω. Otherwise, if nothing is known about f, then we recommend trying three possible domains: the circle B(−r, r), the segment on the real half-line [−r, 0], the segment on the imaginary line i[−r, r]. However, as we note in Remark 3.3, no Abate–Whitt method can give good results for all functions f. Remark 7.2. One may be tempted to use a large region Ωto cover as many functions as possible; however, usually the magnitude of the weights |wn| is larger for a bigger region Ω. This observation discourages using overly large sets Ω: if we need to compute an ILT for which we know that the domain Ω= B(−1, 1) is sufficient, then using Ω= B(−10, 10) instead would cause loss of accuracy, at least when the computations are done in double-precision arithmetic. 7.3 Choice of Z Once Ωis chosen, we wish to construct a method that is ε-accurate on Ωusing Algo- rithm 1. However, the domain for AAA is a discrete set of points Z, while in many of our earlier examples Ωis a bounded region such as a disc or a rectangle. Hence we need a way to approximate Ωwith a finite set of points. The simplest approach would be to use a regular grid of points inside Ω, thus requiring a number of points that scales quadratically with the diameter of Ω. We use another more efficient approach instead, following [32, Section 6]. Assume that Ω is a simply connected domain with boundary ∂Ω. Consider the function g(z) = ez − c ρN(−z), which is holomorphic on C \ {β1, . . . , βn}. If g has no poles inside Ω, then by the maximum modulus principle [36, Theorem 10.24] maxz∈Ω|g(z)| = maxz∈∂Ω|g(z)|. That is, it is enough to require that |g(z)| be small on the boundary of Ω. Therefore we can take Z as a discretization of ∂Ω. In particular, when Ωis a disc, we take Z to be a set of equispaced points on a circle. We then apply the modified AAA algorithm on the support set Z, obtaining a rational approximation bρ(−z) = PN n=1 wn βn−z to ez. This rational approximation is 31 0 5 10 15 20 25 30 35 10−16 10−12 10−8 10−4 100 r ε N ′ = 2, N = 3 N ′ = 2, N = 4 N ′ = 3, N = 5 N ′ = 3, N = 6 N ′ = 4, N = 7 N ′ = 4, N = 8 N ′ = 5, N = 9 N ′ = 5, N = 10 N ′ = 6, N = 11 N ′ = 6, N = 12 N ′ = 7, N = 13 N ′ = 7, N = 14 N ′ = 8, N = 15 N ′ = 8, N = 16 N ′ = 9, N = 17 N ′ = 9, N = 18 N ′ = 10, N = 19 N ′ = 10, N = 20 N ′ = 11, N = 21 N ′ = 11, N = 22 N ′ = 12, N = 23 N ′ = 12, N = 24 Fig. 2: Errors ε obtained by TAME on Ω= B(−r, r), with the corresponding values of n and n′. 
This rational approximation is guaranteed to be ε-accurate only on Z, not on the whole Ω, but in practice this is sufficient if the discretization Z is fine enough. Finally, we recover the parameters $(w_n, \beta_n)_{n=1}^N$ and obtain the TAME method. We have to check that the poles βn lie outside the domain Ω; otherwise the maximum modulus principle may be violated, and furthermore we could not have an ε-accurate method on Ω, since $\widehat{\rho_N}(-z)$ would have a pole inside Ω. This condition is satisfied in the TAME methods we computed.

7.4 TAME with optimal N′ for a given B(−r, r)

As discussed above, increasing N (or N′) beyond a certain value does not improve accuracy if the transforms are evaluated in binary64. In this section, we give a strategy to select a quasi-optimal N′, for the common case of a region of the form Ω = B(−r, r) and several values of r.

First, we display in Figure 2 the errors ε obtained for several values of r, with the corresponding values of N and N′. Several comments on this figure are in order.

Since Z is the discretization of a circle, all points in it apart from two are non-real. Hence, our AAA variant almost always adds support points in conjugate pairs. As a consequence, for any given r one obtains only about half of the possible values of N. For instance, for r = 30, TAME produces rational approximants with N ∈ {3, 5, 7, 9, 11, 13, 15, 16, 18, 20, 22, 24}, since a real support point is added as the 16th point. The iterations at which real support points are added vary depending on r: this explains why the colored lines are sometimes broken in the figure.

Fig. 3: Error estimator η obtained by TAME on Ω = B(−r, r), with the corresponding values of N and N′.

We note that each value of N′ is reached with more than one value of N: typically, one has N = 2N′ − 1 and N = 2N′, i.e., all poles come in conjugate pairs apart from at most one real pole.

The AAA algorithm was run in binary64, as suggested in Remark 7.1, with only the eigenvalue problem (27) solved in higher precision. For each value of N′ (represented by a different color in the figure), the method reaches a value of ε close to the machine precision 2.2 · 10⁻¹⁶ for all values of r up to a certain threshold, but then the error increases steadily. For instance, for N′ = 6, the error starts drifting away from machine precision at r ≈ 5. There is a considerable amount of numerical noise due to floating-point errors, clearly visible at the bottom of the graph, so that "close to machine precision" may mean a value larger than 10⁻¹³ in certain cases.

It follows from the discussion in Section 7.1 that ε is not the only factor that affects the final accuracy of the method when the ILT is evaluated in binary64. We take as a proxy for the error (29) the quantity η = ε + u·max|wn|, where u ≈ 2.2 · 10⁻¹⁶ is the machine precision.
This choice is motivated by the heuristic that (a) $\hat f(\beta_n)$ has the same order of magnitude as C for our classes of functions and for typical values of βn, and that (b) the maximum of |wn| is a better estimate for the error than the worst-case bound Σ|wn|, because the errors δn do not all have the same phase, and the wn do not all have the same magnitude.

We plot this error estimator in Figure 3. Based on the plot, we selected an array of methods with different values of N′. Some relevant quantities for these methods are reported in Table 1, while their poles and weights are available for download at https://github.com/numpi/tame-ilt.

Table 1: Relevant quantities for a family of quasi-optimal precomputed TAME methods. The method in each row was generated with r = rmax.

    N′   N    rmax   ε             max|wn|       error proxy η
    3    6    0.6    1.637909e-14  5.462668e+02  1.376747e-13
    4    8    1.8    8.229408e-14  3.402665e+03  8.378375e-13
    5    10   4.0    3.973128e-13  7.669538e+03  2.100292e-12
    6    12   7.0    1.420889e-12  1.684302e+04  5.160790e-12
    7    14   11.2   1.983229e-12  3.981692e+04  1.082436e-11
    8    16   16.8   1.140301e-11  5.302010e+04  2.317583e-11
    9    18   22.7   6.075754e-12  1.296546e+05  3.486487e-11
    10   20   31.6   3.192135e-11  1.495150e+05  6.512034e-11

When one needs a method for a certain Ω = B(−r, r), we suggest using the method with the smallest rmax ≥ r. In this way we obtain a method that is ε-accurate on Ω ⊆ B(−rmax, rmax) and, at the same time, does not have unnecessarily large weights.

8 Experiments

8.1 Comparison of different Abate–Whitt methods

To gain additional insight into how the different methods relate to each other, we present a comparison of their parameters. In Figure 4 the nodes βn (along with their conjugates) are plotted in the complex plane. The TAME method uses N′ = 5 and r = 4, chosen according to Table 1. The domain Ω = B(−r, r) is shown as well; none of the nodes lies inside the domain, so the choice of the discretization Z is justified (see Section 7.3). Some methods position the nodes according to a chosen geometry: the nodes of the CME and Euler methods are aligned on vertical lines, while the nodes of the Talbot method follow the deformed integration contour. On the other hand, the positions of the TAME and Zakian nodes are determined by solving, respectively, a minimization problem and a system of equations. It is interesting to note that the obtained nodes are arranged on curves which are close to the Talbot contour; we stress, however, that having close nodes does not make two methods similar, because the weights wn can be vastly different.

The Dirac approximants ρN(y) are plotted in Figure 5. The CME method has a peaked distribution at y = 1, and is close to zero elsewhere. The other methods have a smaller and wider peak, as well as oscillations around zero in other parts of the domain; this is clearly visible for the Euler method. The presence of oscillations in ρN does not necessarily imply that the method is ineffective. Indeed, for (9) we only need convergence of distributions in the weak sense, and this kind of convergence is possible even for wildly oscillating functions, for example sin(ny) ⇀ 0. The Talbot method is excluded because it has nodes with Re(βn) < 0, which make the addends wn e^{βn y} extraordinarily large for y > 1; the interpretation of ρN(y) as a Dirac approximant is not clear for Talbot.

Fig. 4: Plot of the nodes βn for different methods (Euler N′ = 35, Gaver N′ = 16, Talbot N′ = 20, CME N′ = 60, Zakian N′ = 4, TAME N′ = 5). The TAME method uses Ω = B(−4, 4).
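As a usage note for the precomputed methods in Table 1, the following Matlab sketch selects the method with the smallest rmax ≥ r and evaluates the Abate–Whitt approximant $f_N(t) = \sum_n \frac{w_n}{t} \hat f(\beta_n/t)$ (the form used in the proof of Theorem 3.7). The struct array 'methods' and all names here are hypothetical, not part of the authors' published interface.

    % Evaluate a TAME inverse Laplace transform at time t (a sketch).
    % methods: hypothetical struct array with fields rmax, w, beta,
    % sorted by increasing rmax as in Table 1.
    function fN = tame_ilt(fhat, t, r, methods)
        idx = find([methods.rmax] >= r, 1, 'first');    % smallest rmax >= r
        w = methods(idx).w; beta = methods(idx).beta;
        fN = 0;
        for n = 1:numel(w)
            fN = fN + (w(n) / t) * fhat(beta(n) / t);   % Abate-Whitt sum
        end
        fN = real(fN);   % for real f, conjugate pairs cancel the imaginary parts
    end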
Fig. 5: Plot of the Dirac approximants ρN(y) for different methods (Euler N′ = 35, Gaver N′ = 16, CME N′ = 60, Zakian N′ = 4, TAME N′ = 5). The TAME method uses Ω = B(−4, 4).

In Figure 6, we display the error of the rational approximant $\widehat{\rho_N}(-z)$ for different Abate–Whitt methods on a rectangular domain. The TAME method is constructed using B(−r, r) with r = 4, according to Table 1; we also display a circle with radius 4 in the picture.

In the rest of this section, we present a numerical evaluation of the performance of the various Abate–Whitt methods on several problems; the first three involve functions from the classes studied in this paper, while the last two involve functions outside these classes, to determine whether the numerical properties observed and proved extend to more general problems. Our code is publicly available at https://github.com/numpi/tame-ilt.

Fig. 6: Base-10 logarithm of the approximation error log₁₀|exp(z) − Σₙ wₙ/(βₙ − z)|, with the poles β in red. Panels: a) Euler N′ = 35; b) Gaver–Stehfest N′ = 16; c) Talbot N′ = 20; d) CME N′ = 75; e) Zakian N′ = 4; f) TAME N′ = 5, Ω = B(−4, 4).

Fig. 7: Experiment A. Plot of the error of the six Abate–Whitt methods. Above: ψ(t) (pdf); below: Ψ(t) (CDF). λ = 1, t = 1. The TAME method uses Ω = B(−1, 1).

8.2 Experiment A

We consider a fluid queue model of size d+ = 5, d− = 10 and uniformization rate λ = 1. The matrix Q of the underlying Markov chain has size d = 15. To construct Q, we generate a random matrix (with abs(randn(d)) in Matlab) and then modify the diagonal entries to have zero row sums. The positive rates r1, ..., r_{d+} are uniformly distributed in [0, 1], while the negative rates r_{d+ + 1}, ..., r_d are uniformly distributed in [−1, 0]. To allow reproducibility, the seed of the random number generator is set with rng(0). The transform ψ̂(s) of the pdf is calculated by solving a nonsymmetric algebraic Riccati equation (see [7]). Using the properties of derivatives of the Laplace transform ([15, Section 1.3]), the transform of the CDF is calculated as Ψ̂(s) = (1/s) ψ̂(s).

Table 2: Experiment A. Minimum of the approximation error for each Abate–Whitt method and corresponding value of N.

              Ψ(t) (CDF)           ψ(t) (pdf)
    Method    min error     N      min error     N
    Euler     4.0 · 10⁻¹²   35     2.0 · 10⁻¹¹   31
    Gaver     5.5 · 10⁻⁸    16     2.1 · 10⁻⁷    14
    Talbot    1.2 · 10⁻¹⁴   18     1.2 · 10⁻¹³   20
    CME       1.7 · 10⁻⁵    50     1.2 · 10⁻⁵    50
    Zakian    4.3 · 10⁻¹⁴   4      3.8 · 10⁻¹³   4
    TAME      3.3 · 10⁻¹⁴   4      8.0 · 10⁻¹⁴   3
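For reference, here is a Matlab sketch of the random data of Experiment A as described above. It follows the text but is not guaranteed to be bit-for-bit identical to the authors' script, and the Riccati-equation solve producing ψ̂ (see [7]) is not reproduced.

    % Experiment A data (a sketch following the description in the text).
    rng(0);                                       % seed stated in the text
    dplus = 5; dminus = 10; d = dplus + dminus;
    Q = abs(randn(d));
    Q = Q - diag(diag(Q));                        % keep only off-diagonal rates
    Q = Q - diag(sum(Q, 2));                      % set the diagonal so rows sum to zero
    rates = [rand(dplus, 1); -rand(dminus, 1)];   % r_i in [0,1], then in [-1,0]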
Fig. 8: Experiment A. Upper bound ("Rat. appr."), moment estimate ("Moments"), and approximation error for the pdf ψ(t), shown separately for the Euler, Gaver, Talbot, Zakian, CME, and TAME methods.

We recover both Ψ(t) and ψ(t) by means of the inverse Laplace transform at time t = 1. Following Theorem 5.6, we choose the domain Ω = B(−1, 1) for the TAME method and compare it to the classical Abate–Whitt methods.

We plot the errors ∥ψ(1) − ψN(1)∥∞ and ∥Ψ(1) − ΨN(1)∥∞ as N increases in Figure 7. All methods except CME exhibit a linear rate of convergence in the first part of the graph. The error of the CME method decreases approximately as (N′)^{−2.21}. This result is consistent with the findings in [25, Section 4.2], where it was observed that the SCV of the CME method decreases approximately as (N′)^{−2.14}; therefore, by Theorem 4.3, the error of the CME method decreases at least as fast as O((N′)^{−0.71}).

In the CDF case, for N′ = 50 the error is 1.7 · 10⁻⁵; the other methods reach this level of precision much sooner: the Talbot method at N′ = 5 and the TAME method at N′ = 2. However, the classical methods soon encounter a problem: the error starts growing instead of decreasing. This is caused by an increase in the magnitude of the weights wn, which causes numerical instability, as noted in Section 7.1. Zakian's method is the fastest, but it is also the most prone to instability: after reaching its minimum at N′ = 4, the error starts rising rapidly again. For other functions the threshold N′ may be different, so one risks overshooting the value of N′ which gives a satisfying approximation. Talbot's method reaches the smallest error among the classical methods, and the growth of its error due to instability is not as fast as for Zakian's.

The TAME method combines both positive aspects. It reaches an error similar to Talbot's, and it is as fast as the Zakian method: just three evaluations of f̂(s) are sufficient to recover the original function f. Moreover, it does not suffer from numerical instability, because Algorithm 1 constructs a good rational approximation already with N′ = 4 (i.e., with degree N = 8), and increasing N′ further produces weights with an absolute value comparable to machine precision, as noted in Remark 7.1. While one would like to avoid using unnecessary nodes, the TAME method does not punish the user who chooses N′ too large.

In Figure 8 we show, for each method, its error, the upper bound given by Theorem 5.6, and the moment estimate given by Definition 4.5. We see that, up to the start of numerical instability, both the upper bound and the moment estimate are good approximations of the actual error for the Euler, Gaver, Talbot, and Zakian methods. For the CME method, the moment estimate is of the order of machine precision, and the same happens for TAME for N′ ≥ 6. We highlight that the upper bound in Theorem 5.6 depends on N′ only through ε, while the moment estimate in Definition 4.5 depends on N′ through |1 − μ0| and |μ1 − μ0|, which are quantities that depend only on the Abate–Whitt method and not on the function.

As further confirmed in Figure 6, the upper bound is valid to measure the precision even of an Abate–Whitt method created with a different perspective in mind. Numerically, the moment estimates are sharp for all methods apart from CME; they are a surprisingly good approximation for the Talbot method.
Fig. 9: Experiment B. Error of the pdf ψ(t), computed at different times t ∈ {1, 3, 10, 30, 100}. Comparison of TAME methods on a circular domain Ω with different radii ("optimal" and r ∈ {0.5, 1, 3, 10, 100}).

8.3 Experiment B

With the same fluid queue as in Experiment A, we compute the pdf ψ(t) at different times t ∈ {1, 3, 10, 30, 100}. By Theorem 5.6, the radius of the domain Ω = B(−r, r) is r = tλ (here λ = 1); we construct different TAME methods for each r ∈ {0.5, 1, 3, 10, 100}. We also consider the "optimal TAME" method (see Table 1), where the radius r is not constant, but depends on the number of nodes.

The comparison of these methods is shown in Figure 9. In general, TAME methods with r > t are quite accurate, while for r < t they are not. For a fixed t, increasing r makes the convergence slower in N′; this is due to the fact that a larger r leads to larger weights wn (see Section 7.1). The TAME method with r = 100 has the slowest convergence, but it has almost the same convergence speed for every value of t. Conversely, methods with smaller r have faster convergence, but they provide bad approximations when t is much larger than r.

Fig. 10: Experiment C. The target function is f(t) = e^{tQ} at t = 1. Plot of the error of the six Abate–Whitt methods and four TAME methods (from Table 1 and from Theorems 5.2, 5.3, 5.4) as N′ increases.

Fig. 11: Experiment C. Plot of the field of values W(Q) and its bounds from Theorems 5.2, 5.3, and 5.4.

8.4 Experiment C

We consider the same generator matrix Q of the previous two experiments, but instead of constructing the fluid queue we analyze the underlying continuous-time Markov chain directly. The size of Q is d = 15 and the target function is f(t) = e^{tQ}, computed at time t = 1. We use TAME methods constructed with the following domains Ω:
• the quasi-optimal ones in Table 1,
• the circle Ω = B(−1, √15), based on Theorem 5.2,
• the rectangle Ω = [−3.06, 1.43] + i[−1.88, 1.88], based on Theorem 5.3,
• the smaller rectangle Ω = [−1.15, 0] + i[−0.19, 0.19], based on Theorem 5.4.

Fig. 12: Experiment D, triangular wave. Upper plot: the functions f(t) and fN(t). Lower plot: error in linear scale.

The three domain bounds for W(Q) are plotted in Figure 11. The approximation errors are displayed in Figure 10. The four TAME methods exhibit the steepest convergence, along with the Zakian method. As the region Ω gets smaller, the error of the TAME method diminishes, consistently with the discussion in Section 7.1.

8.5 Experiment D

Consider two non-smooth signals: the square wave and the triangular wave. We rescale and shift them so that they take values in the interval [0, 1]. They are periodic functions, so they admit a Fourier series expansion on all of R+. The triangular wave is defined as
$$ f_\triangle(t) = \begin{cases} t - \lfloor t\rfloor & \text{if } \lfloor t\rfloor \bmod 2 = 0, \\ 1 - t + \lfloor t\rfloor & \text{if } \lfloor t\rfloor \bmod 2 = 1. \end{cases} $$
Its Laplace transform is
$$ \widehat{f_\triangle}(s) = \frac{1}{s^2}\,\frac{1 - e^{-s}}{1 + e^{-s}}. $$
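For concreteness, here is a two-line Matlab sketch of this pair (f, f̂); the closed forms match the definitions above, and the names are ours.

    % Triangular wave of period 2 with values in [0,1], and its transform.
    f_tri    = @(t) abs(t - 2*floor((t + 1)/2));
    fhat_tri = @(s) (1 ./ s.^2) .* (1 - exp(-s)) ./ (1 + exp(-s));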
The triangular wave is a continuous function, but it is not differentiable everywhere. The k-th coefficient of its Fourier series is c_k = 4/((2k+1)²π²). We note that the coefficients are summable and Σ_{k=0}^∞ |c_k| = (4/π²) Σ_{k=0}^∞ 1/(2k+1)² = 1/2. Therefore, the Fourier series converges absolutely to f△. The truncated Fourier series is in the SE class, so we have the necessary hypotheses to apply Theorem 3.11. The exponents of the summands of the truncated series are purely imaginary numbers, so we use Ω = i[−r, r] for a suitably chosen r.

Fig. 13: Experiment D, square wave. The function f(t) and the approximants fN(t) are displayed. Upper plot: Euler and TAME compared to the worse-performing methods (Talbot N′ = 20, Gaver N′ = 16, Zakian N′ = 4). Lower plot: Euler N′ = 33 and TAME N′ = 20 compared to the better-performing CME (N′ = 20 and N′ = 75).

The square wave and its Laplace transform are, respectively,
$$ f_\square(t) = \lfloor t\rfloor \bmod 2, \qquad \widehat{f_\square}(s) = \frac{1}{s(1 + e^{s})}. $$
The k-th coefficient of its Fourier series is c_k = 1/(2k+1). The square wave is discontinuous, so it cannot be uniformly approximated by any continuous function, and we cannot apply Theorem 3.11. Nevertheless, the Fourier series converges at the points where f□ is continuous, so it is reasonable to use a TAME method that acts accurately on the truncated Fourier series; therefore we use Ω = i[−r, r] also for the square wave. However, observe that the absolute series Σ_{k=0}^∞ |c_k| diverges, so the approximation bound of Theorem 3.2 grows worse as K grows.

For both problems, we have constructed a TAME method with N′ = 20 and r = 80. The triangular wave is shown in Figure 12. We can see that Talbot, Gaver, and Zakian do not provide a good approximation. The Talbot method has numerical issues at some values of t, reaching values of the order of 10¹⁴. This happens because the contour curve of the Talbot integral intersects the domain Ω, so the nodes βn/t may happen to be close to singularities of f̂△. This leaves us with the Euler, CME, and TAME methods. We see in the plot that the TAME method with N′ = 20 provides a better approximation than the Euler method with N′ = 33 and the CME method with N′ = 20. Unfortunately, increasing N′ further in the TAME method does not reduce this error, while CME with N′ = 75 is more accurate and avoids the oscillations that can be seen in the other plots.

The square wave is shown in Figure 13. As for the triangular wave, the results of the Talbot, Gaver, and Zakian methods are not close to the target function. The Euler and TAME methods provide better approximations, but they suffer from Gibbs phenomena and oscillations around the correct value. In this case, the CME method is the one that provides the best approximation by far, even with the same number of nodes (N′ = 20); increasing it to N′ = 75 yields an even better approximation. Indeed, f□ is outside our framework of "tame" functions and, as noted, the Fourier coefficients 1/(2k+1) are not summable. The experiment shows that CME performs better than TAME on discontinuous functions.

8.6 Experiment E

We consider the European call option pricing problem. For a detailed exposition of the topic, see the books [12, 34]. The goal of the option pricing problem is to determine the value of a contract (i.e., a call) C(t, Q), depending on time t and the price of a given asset Q.
Under standard hypotheses, C(t, Q) satisfies the Black–Scholes PDE; one of the methods for its computation is through Ĉ(s) and the inverse Laplace transform. For the vanilla European call option, both C(t) and Ĉ(s) are known explicitly, allowing us to use them to test the TAME method. For some exotic options, however, only Ĉ(s) is known; see e.g. [29]. The analytical solution for the European call option is [42]
$$ C(t) = Q\,\Phi(d_+) - K e^{-Rt}\,\Phi(d_-), \qquad (30) $$
where Q = Q(t) is the asset price at the current time, and R and σ are parameters: R is the interest rate, σ is the volatility. Φ(x) is the CDF of the normal distribution, and
$$ d_\pm(t) = \frac{1}{\sigma\sqrt{t}} \left( \log\!\left(\frac{Q}{K}\right) + \left(R \pm \tfrac{1}{2}\sigma^2\right) t \right). $$
The Laplace transform of C(t) is (see [42])
$$ \widehat{C}(s) = \begin{cases} \dfrac{K}{\gamma_+ - \gamma_-} \left(\dfrac{S}{K}\right)^{\gamma_-} \left( \dfrac{\gamma_+}{R+s} - \dfrac{\gamma_+ - 1}{s} \right) + \dfrac{Q}{s} - \dfrac{K}{R+s} & \text{if } Q \ge K, \\[2mm] \dfrac{K}{\gamma_+ - \gamma_-} \left(\dfrac{S}{K}\right)^{\gamma_+} \left( \dfrac{\gamma_-}{R+s} - \dfrac{\gamma_- - 1}{s} \right) & \text{if } Q < K, \end{cases} $$
where γ+(s) ≥ γ−(s) are
$$ \gamma_\pm(s) = \frac{1}{\sigma^2} \left( -\left(R - \tfrac{1}{2}\sigma^2\right) \pm \sqrt{\left(R - \tfrac{1}{2}\sigma^2\right)^2 + 2\sigma^2 (R+s)} \right). $$
The function C(t) is in none of the three classes we described. Nevertheless, the summands Φ(d±(t)) of (30) are integrals of e^{−x²} on a domain determined by d±(t), which is similar to the structure of an LS function. For this reason, we construct TAME methods using Ω = [−r, 0].

Fig. 14: Experiment E. Upper plot: the function C(t). Lower plot: absolute value of the error in log scale (Talbot N′ = 20, Euler N′ = 33, CME N′ = 75, Gaver N′ = 16, Zakian N′ = 4, TAME N′ = 12 with r = 50, TAME N′ = 33 with r = 100).

In Figure 14 we show the function C(t) and the approximation errors for K = 100, Q = 80, σ = 0.1, R = 0.05. The function is regular and all methods manage to provide a good enough approximation, with error smaller than 5 · 10⁻². The most accurate methods are Talbot with N′ = 20 and TAME with N′ = 33 nodes and r = 100, followed by the TAME method with N′ = 12 and r = 50, and then by the Euler method with N′ = 33.

9 Conclusions

We developed a theoretical framework for the accuracy analysis of Abate–Whitt methods, based on a rational approximation problem. We focused on functions from the SE, ME, and LS classes, as well as on the first-return-time matrix Ψ(t) for fluid queues. For each class, we provided theoretical bounds for the approximation error of an Abate–Whitt method, and a description of how to choose an appropriate domain Ω. We showed how to adapt the AAA algorithm to construct a TAME Abate–Whitt method for each domain Ω. We provide precomputed parameters for the application to fluid queues. We hope that both the analysis and the new method can be of use to practitioners in queuing theory.

In this paper, we also discussed the numerical issues related to the computation of Abate–Whitt methods in floating-point arithmetic; we plan to focus our future research on further optimizing the selection of weights to avoid excessive growth. Another open problem is extending these results to more classes of functions and to numerical methods for the ILT outside the Abate–Whitt class.

Acknowledgments. The authors are thankful to Bernhard Beckermann for literature pointers, to Michele Benzi for useful comments, to Andrea Macrì for help and discussion regarding Experiment E, and to Francesco Zigliotto for LaTeX help.

Supplementary information. The authors are members of INDAM (Istituto Nazionale di Alta Matematica) / GNCS.
FP has received financial support from the National Centre for HPC, Big Data and Quantum Computing–HPC, CUP B83C22002940006, funded by the European Union–NextGenerationEU, and from the Italian Ministry of University and Research (MUR) through the PRIN 2022 project "MOLE: Manifold constrained Optimization and LEarning", code 2022ZK5ME7, MUR D.D. financing decree no. 20428 of November 6th, 2024 (CUP B53C24006410006). The authors have no competing interests to disclose.

References

[1] J. Abate, G. L. Choudhury, and W. Whitt, "An introduction to numerical transform inversion and its application to probability models," in Computational Probability, W. K. Grassmann, Ed. Boston, MA: Springer US, 2000, pp. 257–323. doi: 10.1007/978-1-4757-4828-4_8.
[2] J. Abate and P. P. Valkó, "Multi-precision Laplace transform inversion," International Journal for Numerical Methods in Engineering, vol. 60, no. 5, pp. 979–993, 2004. doi: 10.1002/nme.995.
[3] J. Abate and W. Whitt, "A unified framework for numerically inverting Laplace transforms," INFORMS Journal on Computing, vol. 18, no. 4, pp. 408–421, Nov. 2006. doi: 10.1287/ijoc.1050.0137.
[4] S. Asmussen, "Stationary distributions for fluid flow models with or without Brownian noise," Communications in Statistics: Stochastic Models, vol. 11, no. 1, pp. 21–49, 1995.
[5] N. Barbot, B. Sericola, and M. Telek, "Distribution of busy period in stochastic fluid models," Stoch. Models, vol. 17, no. 4, pp. 407–427, 2001. doi: 10.1081/STM-120001216.
[6] N. Bean, G. T. Nguyen, and F. Poloni, "A new algorithm for time-dependent first-return probabilities of a fluid queue," in Matrix-Analytic Methods in Stochastic Models, S. Hautphenne, M. O'Reilly, and F. Poloni, Eds., 2019, pp. 18–22.
[7] N. G. Bean, M. Fackrell, and P. Taylor, "Characterization of matrix-exponential distributions," Stochastic Models, vol. 24, no. 3, pp. 339–363, 2008. doi: 10.1080/15326340802232186.
[8] N. G. Bean, M. M. O'Reilly, and P. G. Taylor, "Hitting probabilities and hitting times for stochastic fluid flows," Stochastic Processes and their Applications, vol. 115, no. 9, pp. 1530–1556, 2005. doi: 10.1016/j.spa.2005.04.002.
[9] N. G. Bean, M. M. O'Reilly, and P. G. Taylor, "Algorithms for the Laplace–Stieltjes transforms of first return times for stochastic fluid flows," Methodology and Computing in Applied Probability, vol. 10, no. 3, pp. 381–408, Sep. 2008. doi: 10.1007/s11009-008-9077-3.
[10] R. Bhatia, Matrix Analysis (Grad. Texts Math.), vol. 169. New York, NY: Springer, 1996.
[11] P. Billingsley, Probability and Measure, 3rd ed. Chichester: John Wiley & Sons Ltd., 1995.
[12] T. Björk, Arbitrage Theory in Continuous Time. Oxford University Press, Mar. 2004. doi: 10.1093/0199271267.001.0001.
[13] M. Bladt and B. F. Nielsen, Matrix-Exponential Distributions in Applied Probability. Springer New York, NY, May 2017. doi: 10.1007/978-1-4939-7049-0.
[14] D. Braess and W. Hackbusch, "The approximation of Cauchy–Stieltjes and Laplace–Stieltjes functions," in Multiscale, Nonlinear and Adaptive Approximation II, Cham: Springer, 2024, pp. 115–143. doi: 10.1007/978-3-031-75802-7_7.
[15] A. M. Cohen, Numerical Methods for Laplace Transform Inversion (Numerical Methods and Algorithms). Springer-Verlag US, 2007. doi: 10.1007/978-0-387-68855-8.
[16] M. Crouzeix and C. Palencia, "The numerical range is a (1 + √2)-spectral set," SIAM J. Matrix Anal. Appl., vol. 38, no. 2, pp. 649–655, 2017. doi: 10.1137/17M1116672.
[17] S. Cuomo, L. D'Amore, A. Murli, and M. Rizzardi, "Computation of the inverse Laplace transform based on a collocation method which uses only real values," J. Comput. Appl. Math., vol. 198, no. 1, pp. 98–115, 2007. doi: 10.1016/j.cam.2005.11.017.
[18] S. Cuomo, L. D'Amore, M. Rizzardi, and A. Murli, "A modification of Weeks' method for numerical inversion of the Laplace transform in the real case based on automatic differentiation," in Advances in Automatic Differentiation. Selected papers based on the presentations at the 5th international conference on automatic differentiation, Bonn, Germany, August 11–15, 2008, Berlin: Springer, 2008, pp. 45–54.
[19] S.-I. Filip, Y. Nakatsukasa, L. N. Trefethen, and B. Beckermann, "Rational minimax approximation via adaptive barycentric representations," SIAM J. Sci. Comput., vol. 40, no. 4, pp. A2427–A2455, 2018. doi: 10.1137/17M1132409.
[20] H.-L. G. Gau, K.-Z. Wang, and P. Y. Wu, "Numerical ranges of row stochastic matrices," Linear Algebra and its Applications, vol. 506, pp. 478–505, 2016. doi: 10.1016/j.laa.2016.06.010.
[21] D. P. Gaver, "Observing stochastic processes, and approximate transform inversion," Operations Research, vol. 14, no. 3, pp. 444–459, 1966.
[22] K. E. Gustafson and D. K. M. Rao, Numerical Range. The Field of Values of Linear Operators and Matrices (Universitext). New York, NY: Springer, 1996.
[23] L. Hogben, Ed., Handbook of Linear Algebra (Discrete Math. Appl. (Boca Raton)), 2nd enlarged ed. Boca Raton, FL: Chapman & Hall/CRC, 2014. doi: 10.1201/b16113.
[24] G. Horváth, I. Horváth, S. A.-D. Almousa, and M. Telek, "Numerical inverse Laplace transformation using concentrated matrix exponential distributions," Performance Evaluation, vol. 137, p. 102067, 2020. doi: 10.1016/j.peva.2019.102067.
[25] G. Horváth, I. Horváth, and M. Telek, "High order concentrated matrix-exponential distributions," Stochastic Models, vol. 36, no. 2, pp. 176–192, 2020. doi: 10.1080/15326349.2019.1702058.
[26] I. Horváth, O. Sáfár, M. Telek, and B. Zámbó, "Concentrated matrix exponential distributions," in Computer Performance Engineering, D. Fiems, M. Paolieri, and A. N. Platis, Eds., Cham: Springer International Publishing, 2016, pp. 18–31.
[27] P. den Iseger, "Numerical transform inversion using Gaussian quadrature," Probability in the Engineering and Informational Sciences, vol. 20, no. 1, pp. 1–44, 2006. doi: 10.1017/S0269964806060013.
[28] R. L. Karandikar and V. Kulkarni, "Second-order fluid flow models: Reflected Brownian motion in a random environment," Oper. Res., vol. 43, pp. 77–88, 1995.
[29] T. Kimura, "American fractional lookback options: Valuation and premium decomposition," SIAM Journal on Applied Mathematics, vol. 71, no. 2, pp. 517–539, 2011. doi: 10.1137/090771831.
[30] S. Kotz, C. B. Read, N. Balakrishnan, and B. Vidakovic, Eds., Encyclopedia of Statistical Sciences, 16 vols., 2nd ed. Hoboken, NJ: John Wiley & Sons, 2006.
[31] C. D. Meyer, Matrix Analysis and Applied Linear Algebra. USA: Society for Industrial and Applied Mathematics, 2000.
[32] Y. Nakatsukasa, O. Sète, and L. N. Trefethen, "The AAA algorithm for rational approximation," SIAM Journal on Scientific Computing, vol. 40, no. 3, pp. A1494–A1522, 2018. doi: 10.1137/16M1106122.
[33] L. C. G. Rogers, "Fluid models in queueing theory and Wiener–Hopf factorization of Markov chains," Ann. Appl. Probab., vol. 4, pp. 390–413, 1994.
[34] S. M. Ross, An Elementary Introduction to Mathematical Finance, 3rd ed. Cambridge: Cambridge University Press, 2011.
[35] S. M. Ross, Introduction to Probability Models, 13th ed. Amsterdam: Elsevier/Academic Press, 2023.
[36] W. Rudin, Real and Complex Analysis, 3rd ed. New York, NY: McGraw-Hill, 1987.
[37] J. L. Schiff, The Laplace Transform: Theory and Applications. Springer New York, NY, Oct. 1999. doi: 10.1007/978-0-387-22757-3.
[38] R. L. Schilling, R. Song, and Z. Vondraček, Bernstein Functions: Theory and Applications. Berlin, Boston: De Gruyter, 2012. doi: 10.1515/9783110269338.
[39] H. Stehfest, "Algorithm 368: Numerical inversion of Laplace transforms [D5]," Commun. ACM, vol. 13, no. 1, pp. 47–49, Jan. 1970. doi: 10.1145/361953.361969.
[40] A. Talbot, "The accurate numerical inversion of Laplace transforms," IMA Journal of Applied Mathematics, vol. 23, no. 1, pp. 97–120, Jan. 1979. doi: 10.1093/imamat/23.1.97.
[41] M. Telek, "Transient analysis of Markov modulated processes with Erlangization, ME-fication and inverse Laplace transformation," Stochastic Models, vol. 38, no. 4, pp. 638–664, 2022. doi: 10.1080/15326349.2022.2081206.
[42] M. Tomas and R. Shukla, "Complete derivation of Black–Scholes option pricing formula," in Financial Derivatives, S. Kumar, Ed., New Delhi: PHI Learning Pvt. Ltd., 2007, Appendix, pp. 227–237.
[43] L. N. Trefethen, J. A. C. Weideman, and T. Schmelzer, "Talbot quadratures and rational approximations," BIT, vol. 46, no. 3, pp. 653–670, 2006. doi: 10.1007/s10543-006-0077-9.
[44] V. Ramaswami, "Matrix analytic methods for stochastic fluid flows," in Teletraffic Engineering in a Competitive World (Proceedings of the 16th International Teletraffic Congress), D. Smith and P. Hey, Eds., Edinburgh, UK: Elsevier Science B.V., 1999, pp. 1019–1030.
[45] J. Vlach, "Numerical method for transient responses of linear networks with lumped, distributed or mixed parameters," Journal of the Franklin Institute, vol. 288, no. 2, pp. 99–113, 1969. doi: 10.1016/0016-0032(69)90172-0.
[46] J. A. C. Weideman, "Optimizing Talbot's contours for the inversion of the Laplace transform," SIAM Journal on Numerical Analysis, vol. 44, no. 6, pp. 2342–2362, 2006. doi: 10.1137/050625837.
[47] C. J. Wellekens, "Generalisation of Vlach's method for the numerical inversion of the Laplace transform," Electronics Letters, vol. 6, pp. 742–744, 1970. doi: 10.1049/el:19700514.
[48] V. Zakian, "Numerical inversion of Laplace transform," Electronics Letters, vol. 5, pp. 120–121, Feb. 1969.
[49] V. Zakian, "Optimisation of numerical inversion of Laplace transforms," Electronics Letters, vol. 6, pp. 677–679, 1970.
Error analysis of Abate–Whitt methods for Inverse Laplace Transforms and a new algorithm for queuing theory applications

Nikita Deniskin¹* and Federico Poloni²*

¹Classe di Scienze, Scuola Normale Superiore, Pisa, Italy.
²Dipartimento di Informatica, Università di Pisa, Pisa, Italy.
*Corresponding author(s).

Abstract

We study the accuracy of a class of methods to compute the Inverse Laplace Transform, the so-called Abate–Whitt methods [Abate, Whitt 2006], which are based on a linear combination of evaluations of $\hat f$ in a few points. We provide error bounds which relate the accuracy of a method to the rational approximation of the exponential function. We specialize our analysis to applications in queuing theory, a field in which Abate–Whitt methods are often used; in particular, we study phase-type distributions and Markov-modulated fluid models (or fluid queues). We use a recently developed algorithm for rational approximation, the AAA algorithm [Nakatsukasa, Sète, Trefethen 2018], to produce a new family of methods, which we call TAME. The parameters of these methods are constructed depending on a function-specific domain Ω; we provide a quasi-optimal choice for certain families of functions. We discuss numerical issues related to floating-point computation, and we validate our results through numerical experiments, which show that the new methods require significantly fewer function evaluations to achieve an accuracy comparable to (or better than) that of the classical methods.

Keywords: Inverse Laplace Transform, Abate–Whitt framework, Rational Approximation, Markov-modulated fluid models

MSC Classification: 65R10, 41A20, 65C40

1 Introduction

The Laplace transform of a function f : (0, ∞) → ℂ is defined as
$$ \hat f(s) = \int_0^\infty e^{-st} f(t)\,dt. \qquad (1) $$
We study methods to compute its inverse, i.e., to reconstruct the value f(t) in a given point t from evaluations of f̂ in several points β1, ..., βN ∈ ℂ.

Unlike the better-behaved Fourier transform, inverting the Laplace transform is an ill-posed problem: for instance, there are examples of pairs of functions such that |f(1) − g(1)| = 1 but ∥f̂ − ĝ∥∞ ≤ ε for arbitrarily small ε. This property makes numerical computation challenging, since small errors in the computation of f̂ can turn into large errors on f(t). Because of its ill-posedness, the inverse Laplace transform (ILT) problem has attracted ample attention in the mathematical and numerical community. It would be impossible to cite all relevant works, but we refer the reader to the book [37] for theoretical properties and to the book [15] for numerical methods.

In this work, we make two specific assumptions to restrict our interest to special cases:
• We are interested in algorithms that compute f(t) in one given point t, based on evaluating f̂ in a small number of points that we are allowed to choose; these algorithms usually take the form
$$ f(t) \approx \sum_{n=1}^{N} w_n \hat f(\beta_n), $$
which resembles the general form of a quadrature formula. This setup is known as the Abate–Whitt framework in certain fields, after [3]. It excludes several algorithms, such as those based on approximating f with a truncated series of functions [15, Chapter 3] or a rational function with known coefficients [15, Chapter 5], those based on the derivatives of f (e.g., [18] or the Post–Widder formula [15, Chapter 7]), and methods where f(t) is evaluated in multiple points simultaneously (e.g., [17, 27]). We describe this framework and some of the classical methods in Section 2.
• We are interested only in the case in which f has a very special ("tame") form: it belongs to one of certain classes of functions that are constructed based on weighted sums of exponentials. Namely:

SE class (sum of exponentials): functions of the form
$$ f(t) = \sum_{m=1}^{M} c_m e^{\alpha_m t}, $$
where c1, ..., cM, α1, ..., αM ∈ ℂ are given constants.

ME class (matrix exponential): functions of the form
$$ f(t) = v^* \exp(tQ)\,u, \qquad Q \in \mathbb{C}^{d \times d},\ u, v \in \mathbb{C}^d, \qquad (2) $$
where exp(A) = I + A + (1/2!)A² + (1/3!)A³ + ⋯ is the matrix exponential. Using the Jordan form, one sees that these functions can equivalently be written as
$$ f(t) = \sum_{m=1}^{M} c_m e^{\alpha_m t} t^{b_m}, $$
with b1, ..., bM ∈ ℤ₊. It is easy to see that this class is the one of all functions f whose transform f̂ is a rational function. We chose the acronym ME after the one used for matrix exponential distributions [30], but we note that some authors use the same acronym for other classes named mixtures of exponentials, e.g., [38, Definition 9.4]. The density of a matrix exponential distribution [30] has the form (2), but with additional conditions: f(t) has to be non-negative with total mass 1. In our definition of an ME function, we do not require such conditions.

LS class (Laplace–Stieltjes): functions of the form
$$ f(t) = \int e^{-xt}\,d\mu(x), \qquad (3) $$
where μ is a non-negative measure on R+ = (0, ∞). In other words, f in (3) is itself the Laplace transform of a measure μ. A SE function with negative real exponents αm (and non-negative weights cm) is in this class: it corresponds to the discrete measure μ = Σ_{m=1}^M cm δ_{|αm|}. The LS class is a generalization with an integral rather than a discrete sum. Bernstein's theorem [38, Theorem 1.4] states that the class of Laplace–Stieltjes functions is equivalent to the class of completely monotone functions, i.e., functions f(t) ∈ C∞((0, ∞)) such that (−1)ⁿ f⁽ⁿ⁾(t) ≥ 0 for all n ≥ 0 and all t > 0.

These three classes are very favorable cases: in most of the other literature on the inverse Laplace transform, the main focus is on functions with jump discontinuities (e.g., the Heaviside step function, the square wave) or jumps in the derivative (e.g., formulas involving the absolute-value function |·|).

Under these two assumptions, we relate the accuracy of an Abate–Whitt method to a rational approximation problem, and prove several quantitative bounds on its error, revealing the role of quantities such as the evaluation point t, the range of the αm, and the field of values of the matrix Q. This is done in Section 3. Another approach is based on the moments of a function related to the approximation problem; we present some bounds and an estimate in Section 4.

Despite the strong assumptions that we make, this restricted setup is useful in practice in several applications in queuing theory, which we describe in more detail in Section 5. In particular, we deal with distributions whose density function belongs to a subclass of the ME class, the so-called phase-type distributions. They are obtained when Q in (2) is the subgenerator of a continuous-time Markov chain and u = −Q𝟙. Phase-type distributions predate ME distributions, as they arise naturally in queuing theory as the probability distribution of the hitting time of an absorbing Markov chain. See [13] for an exposition of the theory of ME and phase-type distributions. Another application in queuing theory is the computation of the so-called time-dependent first-return matrix of Markov-modulated fluid models, or fluid queues, a class of stochastic processes that is used to model continuous-time queues and buffers [4, 28, 33, 44].
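As a small illustration of the ME class (2), the following Matlab sketch builds a pair (f, f̂) with data of our own choosing; the key point is that the transform of v*exp(tQ)u is the rational function v*(sI − Q)⁻¹u.

    % An ME-class function and its rational transform (illustrative data).
    d = 4;
    Q = triu(ones(d), 1) - d * eye(d);     % any square matrix gives an ME function
    u = ones(d, 1);
    v = (1:d)' / d;
    f    = @(t) v' * expm(t * Q) * u;      % f(t) = v'*exp(tQ)*u
    fhat = @(s) v' * ((s * eye(d) - Q) \ u);   % fhat(s) = v'*(sI - Q)^{-1}*u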
Indeed, there are several algorithms to compute directly the Laplace transform of this matrix [9], while the direct computation of the time-domain function is more involved. After reducing the problem to rational approximation, we describe the AAA algorithm [32], a recently developed algorithm that produces high-quality rational approximants. With a few modifications, we can use this algorithm to produce the weights and poles $(w_n, \beta_n)_{n=1}^N$ of a family of Abate–Whitt methods, which we dub the TAME method (Triple-A for the Matrix Exponential). We describe the AAA algorithm and its modifications in Section 6.

From the computational point of view, one needs to ensure not only that the parameters (wn, βn) of an Abate–Whitt method produce an accurate rational approximation, but also that the magnitude of the weights wn is moderate. Weights of large magnitude may lead to precision loss in floating-point arithmetic. We discuss these issues, as well as other numerical aspects of our method, in Section 7, and we conclude the paper with numerical experiments that show the effectiveness of our method, in Section 8. The experiments show that the TAME method achieves accuracy comparable to the best Abate–Whitt methods, requires significantly fewer evaluations of the function f̂, and has better stability properties.

1.1 Notation

In the following, ∥A∥₂ and ∥A∥∞ denote the Euclidean and ∞ operator norms of a matrix A. The spectrum of the matrix A is Λ(A). For a complex-valued function f, we define ∥f∥∞,Ω = max{|f(z)| : z ∈ Ω}. We denote by B(c, R) = {z ∈ ℂ : |z − c| ≤ R} the ball with center c and radius R in the complex plane.

2 Inverse Laplace Transforms and the Abate–Whitt framework

We assume that the function f is defined (at least) on R+ = (0, ∞) and that it takes real values. Most of our results are also applicable to functions f : R+ → ℂ, but we shall see that in the real case we can use conjugate poles to speed up the computation of the inverse Laplace transform. For equation (1) to be well-defined, the integral must converge. For the function f(t) = e^{αt}, this means that f̂(s) is defined for all s ∈ ℂ with Re(s − α) > 0. However, f̂(s) = 1/(s − α) is defined algebraically for all s ≠ α, irrespective of the convergence of the integral (1). This is a form of analytic continuation. When we deal with linear combinations and integral averages of exponential functions, we can use this extended definition of f̂: we only need to ensure that f̂ (which in most of our paper is a rational function) can be computed in the required points βn/t, even if those points do not lie in the domain of convergence of (1). In particular, we note that one of the classical Abate–Whitt methods, namely the Talbot method, has poles βn in the left half-plane (see e.g. Figure 4); so we deal with points where (1) does not converge even when using the Talbot method, for functions of the form f(t) = e^{αt} with α ≥ 0.

Suppose that $|e^{z_*} - \widehat{\rho_N}(-z_*)| = C > \varepsilon$ for some point z∗; then, if we choose α, t∗ such that αt∗ = z∗, we have that |f(t∗) − fN(t∗)| = C > ε for the function f(t) = e^{αt}. Or, informally: given a point z∗ in which the rational approximant of an Abate–Whitt method is inaccurate, we can find an exponential function f and a point t∗ for which the method is inaccurate. Since no rational approximation of the exponential can be accurate on the whole complex plane, this means that no Abate–Whitt method can be accurate for all exponential functions and all points t.
Therefore, the quality of the inverse Laplace transform with an Abate–Whitt method depends on the approximation error of $e^z$ with the rational function $\widehat{\rho_N}(-z)$ at the points $\{\alpha_m t\}_{m=1}^M$. Given f, the coefficients cm are fixed, but we can increase the order N and choose the parameters $(w_n, \beta_n)_{n=1}^N$ opportunely to obtain rational approximations of $e^z$ which get better on the points $\{\alpha_m t\}_{m=1}^M$ as N increases. This observation ensures that a suitable family of Abate–Whitt approximations can achieve convergence as N → ∞ (at least in exact arithmetic).

To extend the result to the class of ME functions, we need a few technical tools.

Definition 3.4. The field of values, also called numerical range, of a matrix A ∈ ℂ^{d×d} is the complex set W(A) = {x*Ax : x ∈ ℂ^d, ∥x∥ = 1}.

The field of values is a well-known tool in linear algebra; here we recall only a few classical results (see, e.g., [10, Section I.2] or [22]).

Lemma 3.5. The following properties hold.
1. We have the inclusions hull(Λ(A)) ⊆ W(A) ⊆ B(0, ∥A∥₂), where hull(Λ(A)) is the convex hull of the eigenvalues of A. The left inclusion is an equality when A is a normal matrix.
2. Translation and rescaling of a matrix change its field of values in the same way: W(αA + βI) = {αz + β : z ∈ W(A)}.
3. If B is a principal submatrix of A, then W(B) ⊆ W(A).
4. Let J₀ ∈ ℝ^{d×d} be the size-d Jordan block with eigenvalue 0. We have W(J₀) = B(0, cos(π/(d+1))) ⊆ B(0, 1).

An important result relating the field of values and matrix functions is the following.

Theorem 3.6 (Crouzeix–Palencia, [16]). Let A be a square matrix, and let φ(z) be a holomorphic function on W(A). Then
$$ \|\varphi(A)\|_2 \le (1 + \sqrt{2})\,\|\varphi\|_{\infty, W(A)}, $$
where φ(A) denotes the extension of φ to square matrices.

It is conjectured that the constant 1 + √2 can be replaced by 2 (Crouzeix's conjecture). With these tools, we extend our error bound to matrices.

Theorem 3.7. Let Q be a d × d matrix and f(t) = exp(tQ). Let $(w_n, \beta_n)_{n=1}^N$ be an Abate–Whitt method that is ε-accurate on a region Ω which contains W(tQ). Then, the error of the ILT at point t is bounded by ∥f(t) − fN(t)∥₂ ≤ (1 + √2)ε.

Proof. The Laplace transform of f is f̂(s) = (sI − Q)⁻¹. This can be proven using the spectral representation of a matrix function [31, Section 7.9]. The Laplace transform is well-defined when Re(s) > max Re(Λ(Q)), but, as discussed in Section 2, it can be extended to all s ∉ Λ(Q). The Abate–Whitt approximant of f is
$$ f_N(t) = \sum_{n=1}^{N} \frac{w_n}{t}\,\hat f\!\left(\frac{\beta_n}{t}\right) = \sum_{n=1}^{N} \frac{w_n}{t} \left( \frac{\beta_n}{t} I - Q \right)^{-1} = \sum_{n=1}^{N} w_n (\beta_n I - tQ)^{-1}. $$
We apply the Crouzeix–Palencia Theorem 3.6 to the function
$$ \varphi(z) = e^z - \sum_{n=1}^{N} \frac{w_n}{\beta_n - z}, $$
obtaining
$$ \left\| \exp(tQ) - \sum_{n=1}^{N} w_n (\beta_n I - tQ)^{-1} \right\|_2 \le (1 + \sqrt{2})\,\varepsilon. $$

Now we can obtain any ME function by choosing an appropriate matrix Q and multiplying by vectors on the left and right.

Corollary 3.8. Let v, u ∈ ℂ^d and Q ∈ ℂ^{d×d}, and let f(t) = v* exp(tQ) u. Let $(w_n, \beta_n)_{n=1}^N$ be an Abate–Whitt method that is ε-accurate on a region Ω which contains W(tQ). Then |f(t) − fN(t)| ≤ (1 + √2) ε ∥v∥₂ ∥u∥₂.

Corollary 3.9. Let f(t) = (t^b/b!) e^{αt}, with b ∈ ℕ and α ∈ ℂ. Let $(w_n, \beta_n)_{n=1}^N$ be an Abate–Whitt method that is ε-accurate on a region Ω which contains B(αt, t). Then |f(t) − fN(t)| ≤ (1 + √2) ε.

Proof. Let J be the (b+1) × (b+1) Jordan block with eigenvalue α; then we have
$$ J = \begin{bmatrix} \alpha & 1 & & & \\ & \alpha & 1 & & \\ & & \ddots & \ddots & \\ & & & \alpha & 1 \\ & & & & \alpha \end{bmatrix}, \qquad \exp(tJ) = e^{\alpha t} \begin{bmatrix} 1 & t & \frac{t^2}{2} & \cdots & \frac{t^b}{b!} \\ & 1 & t & \ddots & \vdots \\ & & 1 & \ddots & \frac{t^2}{2} \\ & & & \ddots & t \\ & & & & 1 \end{bmatrix}, $$
and W(tJ) ⊂ B(αt, t) thanks to the properties in Lemma 3.5. The result now follows from Corollary 3.8 by setting v = e₁, u = e_{b+1}, and noting that f(t) = v* exp(tJ) u. □
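Since the field of values W(A) of Definition 3.4 plays a central role in these bounds, here is a Matlab sketch (a standard technique, our implementation and names) for sampling its boundary numerically: for each angle θ, the maximizer of Re(e^{−iθ} x*Ax) over unit vectors x is an eigenvector of the Hermitian part of e^{−iθ}A.

    % Sample npts boundary points of W(A) (a sketch).
    function wb = fov_boundary(A, npts)
        wb = zeros(npts, 1);
        for j = 1:npts
            th = 2 * pi * (j - 1) / npts;
            H = (exp(-1i*th) * A + exp(1i*th) * A') / 2;  % Hermitian part of e^{-i*th}*A
            [X, D] = eig(H);
            [~, k] = max(real(diag(D)));                  % largest eigenvalue of H
            x = X(:, k);                                  % unit-norm eigenvector
            wb(j) = x' * A * x;                           % a boundary point of W(A)
        end
    end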
If a function f is close to a function g for which we know the error of an Abate–Whitt method (for example, if g is in the SE class), then we can bound the error of the ILT with the Abate–Whitt method for f.

Theorem 3.10. Let f(t) be a function, and suppose that g(t) is an approximation of f with error η on (0, ∞), i.e., ∥f − g∥∞,R+ ≤ η. Let $(w_n, \beta_n)_{n=1}^N$ be an Abate–Whitt method, and let ρN be its Dirac approximant. Then, the error of the method at point t is bounded by
$$ |f(t) - f_N(t)| \le (1 + \|\rho_N\|_1)\,\eta + |g(t) - g_N(t)|. $$

Proof. We have
$$ f(t) - f_N(t) = f(t) - \int_0^\infty f(ty)\,\rho_N(y)\,dy = \int_0^\infty \bigl(g(ty) - f(ty)\bigr)\rho_N(y)\,dy + \left( g(t) - \int_0^\infty g(ty)\,\rho_N(y)\,dy \right) + \bigl(f(t) - g(t)\bigr). $$
We estimate the first term as
$$ \left| \int_0^\infty (g(ty) - f(ty))\rho_N(y)\,dy \right| \le \int_0^\infty |g(ty) - f(ty)|\,|\rho_N(y)|\,dy \le \max_{y \in (0,\infty)} |g(ty) - f(ty)| \cdot \int_0^\infty |\rho_N(y)|\,dy = \|g - f\|_{\infty,\mathbb{R}^+} \cdot \|\rho_N\|_1 \le \eta\,\|\rho_N\|_1. $$
The second term is the Abate–Whitt approximation error g(t) − gN(t), while the third term is bounded by |f(t) − g(t)| ≤ η. Putting everything together, we obtain |f(t) − fN(t)| ≤ η (1 + ∥ρN∥₁) + |g(t) − gN(t)|. □

Combining Theorem 3.10 and Theorem 3.2, we get the following result.

Theorem 3.11. Let f(t) be a function, and suppose that $g(t) = \sum_{m=1}^{M} c_m e^{\alpha_m t}$ is an approximation of f with error η, i.e.,
$$ \left\| f(t) - \sum_{m=1}^{M} c_m e^{\alpha_m t} \right\|_{\infty,\mathbb{R}^+} \le \eta. $$
Let $(w_n, \beta_n)_{n=1}^N$ be an Abate–Whitt method that is ε-accurate on a region Ω which contains α₁t, ..., α_Mt, and let ρN be its Dirac approximant. Then, the error of the method at point t is bounded by
$$ |f(t) - f_N(t)| \le (1 + \|\rho_N\|_1)\,\eta + \left( \sum_{m=1}^{M} |c_m| \right) \varepsilon. \qquad (11) $$

Let us comment on this result. It is not immediate how to choose parameters that make the right-hand side of (11) small. To approximate f(t) with a small error η, we may have to choose a SE function with large weights cm. Symmetrically, to approximate $e^z$ with $\widehat{\rho_N}(-z)$ with a small error ε, we may have to choose an Abate–Whitt method with large ∥ρN∥₁. But if ρN ≈ δ₁, we can expect ∥ρN∥₁ not to be much larger than ∥δ₁∥₁ = 1; say, ∥ρN∥₁ ≤ 10. If this bound holds uniformly for a family of Abate–Whitt methods, we can first choose η to make the first summand in (11) small, and then select an Abate–Whitt method in the family to make ε (and hence the second summand in (11)) small.

We use Theorem 3.11 to obtain a bound for an LS-class function with finite measure μ, i.e., one such that μ(R+) < ∞.

Proposition 3.12. Let f be an LS function (3) with finite measure μ. Then for every ε > 0 there exists a SE-class function $g(t) = \sum_{m=1}^{M} c_m e^{-x_m t}$ such that ∥f − g∥∞,R+ ≤ ε. We can take g(t) to have cm > 0, $\sum_{m=1}^{M} c_m = \mu(\mathbb{R}^+)$, and x₁, ..., x_M ∈ (0, L], where L > 0 is such that μ((L, ∞))/μ(R+) ≤ ε.

Proof. We assume, up to scaling, that μ is a probability measure, i.e., μ(R+) = 1. Let F(x) be its cumulative distribution function (CDF). If X is a random variable with distribution F(x), then the distribution of the clamped random variable min(X, L) is
$$ F_L(x) = \begin{cases} F(x) & x < L, \\ 1 & x \ge L. \end{cases} $$
Let G_M be a staircase approximation of F_L with M jumps of height 1/M at points x₁, ..., x_M ∈ (0, L], chosen so that ∥F_L − G_M∥∞ ≤ 1/M ≤ ε. For x > L we have G_M(x) = 1 and F(x) ≥ 1 − ε. We set $g(t) = \int_0^\infty e^{-xt}\,dG_M(x) = \frac{1}{M}\sum_{m=1}^{M} e^{-t x_m}$, a SE function. Integrating by parts, we have
$$ |f(t) - g(t)| = \left| \int_0^\infty e^{-xt}\,(dF(x) - dG_M(x)) \right| = \left| \int_0^\infty \frac{d}{dx} e^{-xt}\,(F(x) - G_M(x))\,dx \right| \le \int_0^\infty \left| \frac{d}{dx} e^{-xt} \right| dx \cdot \varepsilon = \varepsilon. $$
To justify rigorously the use of integration by parts even when the measures are not Lebesgue-continuous, we can use for instance [11, Theorem 18.4]. □

The convergence speed (as N grows) of the approximation of LS functions with SE functions is also studied in detail in [14]. Combining Theorem 3.11 and Proposition 3.12, we obtain the following result.

Theorem 3.13. Let f be an LS function with a finite measure μ. Let L > 0 and η > 0 be such that μ((L, ∞)) ≤ η μ(R+).
Let $(w_n, \beta_n)_{n=1}^N$ be an Abate–Whitt method that is ε-accurate on Ω = [−L, 0), and let ρN be its Dirac approximant. Then, the error of the method at point t is bounded by
$$ |f(t) - f_N(t)| \le (1 + \|\rho_N\|_1)\,\eta + \mu(\mathbb{R}^+)\,\varepsilon. $$
Moreover, if the Abate–Whitt method is ε-accurate on the whole negative half-line, we can let L → ∞ and η → 0, obtaining
$$ |f(t) - f_N(t)| \le \mu(\mathbb{R}^+)\,\varepsilon. $$

Remark 3.14. Our previous theorems are valid for a function f(t) in the SE or ME class that can take, in the most general case, complex values. Functions of the LS class are positive (since they are the integral of a positive function with respect to a positive measure). However, the Laplace transform, its inverse, and the Abate–Whitt methods are linear. If we can recover both functions g₁ and g₂ with a small error, we can also recover any linear combination c₁g₁ + c₂g₂. Therefore, if a function f can be written as f(t) = g₁(t) − g₂(t), where g₁, g₂ are LS functions with measures μ₁ and μ₂, then Theorem 3.13 is valid also for f, with μ₁(R+) + μ₂(R+) in place of μ(R+).

4 Bounds based on moments

We now present bounds (and an estimate) based on the moments of a Dirac approximant ρN, which are valid under mild regularity assumptions on f.

Definition 4.1. Let ρ : R+ → R be a function. The moments of ρ are
$$ \mu_0 = \int_0^\infty \rho(y)\,dy, \qquad \mu_1 = \int_0^\infty y\,\rho(y)\,dy, \qquad \mu_2 = \int_0^\infty y^2 \rho(y)\,dy. $$
The shifted moments are
$$ \nu_2 = \int_0^\infty (y-1)^2 \rho(y)\,dy, \qquad \tilde\nu_2 = \int_0^\infty (y-1)^2\,|\rho(y)|\,dy. $$
The shifted moment ν₂ can be expressed through μ₀, μ₁, μ₂ as
$$ \nu_2 = \int_0^\infty (y^2 - 2y + 1)\,\rho(y)\,dy = \mu_2 - 2\mu_1 + \mu_0. $$
If ρ is non-negative, then ν₂(ρ) = ν̃₂(ρ).

It is a classical result that the moments of a function (when they exist) can be expressed through the derivatives of its Laplace transform at 0:
$$ \mu_0 = \hat\rho(0), \qquad -\mu_1 = \frac{d}{ds}\hat\rho(s)\Big|_{s=0}, \qquad \mu_2 = \frac{d^2}{ds^2}\hat\rho(s)\Big|_{s=0}; \qquad (12) $$
see, e.g., [11, Section 21], which shows this fact more generally for probability distributions. Another quantity related to moments appears prominently in [25].

Definition 4.2. Let ρ : R+ → R+ be a function with finite moments μ₀, μ₁, μ₂. The squared coefficient of variation (SCV) is
$$ \mathrm{SCV}(\rho) = \frac{\mu_0 \mu_2}{\mu_1^2} - 1. $$
If ρ is the pdf of a random variable X, then the SCV is the normalized variance of X: SCV(ρ) = Var(X)/E(X)². In general, the relation between the SCV and the second shifted moment is
$$ \mathrm{SCV}(\rho) = \frac{\mu_0\mu_2 - \mu_1^2}{\mu_1^2} = \frac{\mu_0(\nu_2 + 2\mu_1 - \mu_0) - \mu_1^2}{\mu_1^2} = \nu_2\,\frac{\mu_0}{\mu_1^2} - \left( 1 - \frac{\mu_0}{\mu_1} \right)^2. $$
Note that if μ₀(ρ) = μ₁(ρ) = 1, then SCV(ρ) = ν₂. In [24], the CME method is computed with an optimization procedure that minimizes SCV(ρN), and the following bound is obtained. That article assumes μ₀ = 1 in the definition of the SCV and in Theorem 4.3; however, the result can easily be extended to a generic non-negative function ρ by rescaling it as (1/μ₀(ρ)) ρ.

Theorem 4.3 ([24, Theorem 4]). Let ρ : R+ → R+ be a non-negative function with finite moments μ₀, μ₁, μ₂, and assume that μ₀ = μ₁ = 1. Let f : R+ → R be bounded by a constant H and Lipschitz-continuous with constant L at the point t, i.e., |f(t)| ≤ H and |f(t) − f(t₁)| ≤ L|t − t₁| for all t₁ ≥ 0. If $f_N(t) = \int_0^\infty f(tx)\,\rho(x)\,dx$, then the error of this approximation is bounded by
$$ |f_N(t) - f(t)| \le \frac{3}{2}\,\bigl(HL^2t^2\bigr)^{1/3}\,\bigl(\mathrm{SCV}(\rho)\bigr)^{1/3}. $$

We are interested in giving similar bounds for functions ρN which are approximations of the Dirac delta distribution at 1. As the Dirac delta is a probability distribution with mean 1, we expect μ₀(ρN) ≈ 1 and μ₁(ρN) ≈ 1. For the Dirac approximants of the Abate–Whitt methods these are not exact equalities, so we give bounds in which μ₀(ρN) and μ₁(ρN) may differ from 1.

Theorem 4.4.
Theorem 4.4. Let ρ(y) : R+ → R be a function, with moments as in Definition 4.1. Let f : R+ → R be a C² function and $f_N(t) = \int_0^\infty f(ty)\rho(y)\,dy$. Then
$$|f_N(t) - f(t)| \le |\mu_0 - 1|\,|f(t)| + t\,|f'(t)|\,|\mu_1 - \mu_0| + \tfrac12\,t^2\,\|f''\|_{\infty,\mathbb{R}_+}\,\tilde\nu_2.$$

Proof. We have $\int_0^\infty \rho(y)\,dy = \mu_0$, therefore $\int_0^\infty f(t)\rho(y)\,dy = \mu_0 f(t)$. Then
$$f_N(t) - f(t) = f_N(t) - \mu_0 f(t) + (\mu_0 - 1)f(t) = \int_0^\infty f(ty)\rho(y)\,dy - \int_0^\infty f(t)\rho(y)\,dy + (\mu_0 - 1)f(t)$$
$$= (\mu_0 - 1)f(t) + \int_0^\infty \big(f(ty) - f(t)\big)\rho(y)\,dy = (\mu_0 - 1)f(t) + \int_0^\infty \Big(f'(t)(ty - t) + \tfrac12 f''(\zeta_y)(ty - t)^2\Big)\rho(y)\,dy$$
$$= (\mu_0 - 1)f(t) + t f'(t)\int_0^\infty (y - 1)\rho(y)\,dy + \tfrac12 t^2 \int_0^\infty f''(\zeta_y)(y - 1)^2\rho(y)\,dy.$$
The above expression is an equality, where between the fourth and fifth lines we used the Taylor series expansion of f centered on t, computed at point ty, with ζy being the point of the Lagrange remainder. To obtain an upper bound, we take absolute values and get
$$|f_N(t) - f(t)| \le |(\mu_0 - 1)f(t)| + \left|t f'(t)\int_0^\infty (y - 1)\rho(y)\,dy\right| + \left|\tfrac12 t^2\int_0^\infty f''(\zeta_y)(y - 1)^2\rho(y)\,dy\right|$$
$$\le |\mu_0 - 1|\,|f(t)| + t\,|f'(t)|\left|\int_0^\infty y\rho(y)\,dy - \int_0^\infty \rho(y)\,dy\right| + \tfrac12 t^2\,\|f''\|_{\infty,\mathbb{R}_+}\int_0^\infty (y - 1)^2|\rho(y)|\,dy$$
$$= |\mu_0 - 1|\,|f(t)| + t\,|f'(t)|\,|\mu_1 - \mu_0| + \tfrac12 t^2\,\|f''\|_{\infty,\mathbb{R}_+}\,\tilde\nu_2. \qquad \square$$

We apply this theorem to an Abate-Whitt method, with ρ = ρN. For the CME method, ρN is positive and concentrated near x = 1, so $\tilde\nu_2(\rho_N) = \nu_2(\rho_N)$ is small and the error bound is small also. For the other methods, ρN has both positive and negative components, and unfortunately this makes $\tilde\nu_2(\rho_N)$ orders of magnitude larger than ν2(ρN) in practical cases. Thus, this error bound is too large to be useful. However, we can truncate the Taylor series expansion at the first-order term, and compute just an approximation instead of an upper bound. We obtain the following estimate.

Definition 4.5. Let ρN be the Dirac approximant of an Abate-Whitt method. Let μ0, μ1 be the moments of ρN as in Definition 4.1. Let f : R+ → R be a C¹ function. The first-order moment estimate of the Abate-Whitt approximation error is
$$|f_N(t) - f(t)| \approx |\mu_0 - 1|\,|f(t)| + t\,|f'(t)|\,|\mu_1 - \mu_0|. \qquad (13)$$
We compare this estimate to the actual error in Figure 8.

Note that the coefficients |μ0 − 1| and |μ1 − μ0| can be related to the rational approximation of e^z with $\hat\rho_N(-z)$. Recalling (12), we have $\mu_0(\rho_N) = \int_0^\infty \rho_N(y)\,dy = \hat\rho_N(0)$ and e⁰ = 1, so $1 - \mu_0 = \big(e^z - \hat\rho_N(-z)\big)\big|_{z=0}$. Hence if 0 ∈ Ω and the Abate-Whitt method is ε-accurate on Ω, then |μ0 − 1| ≤ ε. Similarly,
$$\mu_1 - \mu_0 = \mu_1 - 1 + 1 - \mu_0 = \frac{d}{dz}\big(\hat\rho_N(-z) - e^z\big)\Big|_{z=0} + \big(e^z - \hat\rho_N(-z)\big)\Big|_{z=0}.$$
Hence |μ1 − μ0| is small when both the value and the first derivative in zero of the rational approximation $\hat\rho_N(-z)$ are close to those of e^z.

5 Laplace transforms in queuing theory

In this section, we specialize our general bounds to the functions appearing in some applications in queuing theory.

5.1 Continuous-time Markov chains

The generator matrix (or rate matrix) of a continuous-time Markov chain (CTMC) is a matrix Q ∈ R^{d×d} such that Qij ≥ 0 whenever i ≠ j and Q1 = 0, where 1 is the vector with all entries equal to 1. In particular, Q is a singular −M-matrix. The time-dependent distribution of a CTMC with initial probability distribution π0 ∈ R^d is given by
$$\pi(t) = \pi_0 \exp(Qt). \qquad (14)$$
On the other hand, its Laplace transform $\hat\pi(s) = \pi_0(sI - Q)^{-1}$ has a simpler algebraic expression, which is more convenient to work with. One of the reasons for this convenience is an appealing probabilistic interpretation for s > 0: $\hat\pi(s)$ is the expected state of the Markov chain at time τ, where τ ∼ Exp(s) is an exponentially distributed random variable [30].
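The resolvent form of $\hat\pi(s)$ makes numerical inversion a sequence of shifted linear solves. A hedged MATLAB sketch, using the standard Abate-Whitt evaluation $f_N(t) = \frac1t\sum_n w_n \hat f(\beta_n/t)$, which at t = 1 reduces to the sum used in Section 7:

```matlab
% Transient distribution of a CTMC recovered from pi_hat(s) = pi0*inv(s*I - Q)
% with an Abate-Whitt method (w, beta); nodes are scaled by 1/t as usual.
function piN = ctmc_transient(Q, pi0, t, w, beta)
    d = size(Q, 1);
    piN = zeros(1, d);
    for n = 1:numel(w)
        piN = piN + w(n) * (pi0 / ((beta(n)/t) * eye(d) - Q));  % pi0*inv(sI - Q)
    end
    piN = real(piN) / t;   % real part: nodes/weights come in conjugate pairs
end
```

A sanity check is norm(piN - pi0*expm(Q*t)), which should be of the order of the accuracy ε of the method.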
Often one can resort to algorithms and arguments that exploit the connection between this time τ and the many other exponentially-distributed random variables that appear in the theory of CTMCs; see for instance [41]. This connection is useful also when working with other quantities defined in terms of a CTMC. An important example are phase-type distributions, which model the time it takes for a CTMC to reach a specified set of states. A phase-type distribution is a probability distribution on [0, ∞) with probability density function (pdf)
$$f(t) = \alpha^T \exp(tQ)\,q, \qquad q = -Q\mathbf{1} \ge 0, \qquad (15)$$
where α ∈ R^d is a stochastic vector, 1 ∈ R^d is the vector of all ones, and Q is a subgenerator matrix, i.e., a matrix such that Qij ≥ 0 for all i ≠ j and −Q1 ≥ 0. For a probability distribution, one usually deals with the Laplace transform of its pdf, which is known as the Laplace-Stieltjes transform. Phase-type distributions appear together with their Laplace-Stieltjes transforms in various contexts in queuing theory; see for instance the examples in [1].

Clearly both (14) and (15) are ME functions, hence the bounds in Section 3 can be applied. With some manipulations, we obtain the following result.

Theorem 5.1. Let f(t) be the pdf of a phase-type distribution (15), and $F(t) = \int_0^t f(\tau)\,d\tau$ be its corresponding cumulative distribution function (CDF). Let $(w_n, \beta_n)_{n=1}^N$ be an Abate-Whitt method that is ε-accurate on a region Ω which contains W(tQ). Then, the following bound holds for the inverse Laplace transform of f(t):
$$|f(t) - f_N(t)| \le (1 + \sqrt2)\,\varepsilon\,\|q\|_1,$$
and the following two bounds for that of F(t), assuming 0 ∈ Ω:
$$|F(t) - F_N(t)| \le \varepsilon + (1 + \sqrt2)\,\varepsilon\,\sqrt d,$$
$$|F(t) - F_N(t)| \le \varepsilon + (1 + \sqrt2)\,\varepsilon\,\big(-\alpha^T Q^{-1}\mathbf{1}\big)\,\|q\|_1.$$
Note that $-\alpha^T Q^{-1}\mathbf{1}$ is the first moment of the phase-type distribution [30, Page 6105].

Proof. For the first bound, it is enough to use Corollary 3.8 and note that ∥α∥2 ≤ ∥α∥1 = α^T 1 = 1, and ∥q∥2 ≤ ∥q∥1. For the two bounds on F(t), we integrate f(t) to get the well-known expression for the CDF of a phase-type distribution, $F(t) = 1 - \alpha^T \exp(tQ)\mathbf{1}$. The function F(t) is the sum of the constant function F⁽¹⁾(t) = 1 and F⁽²⁾(t) = −α^T exp(tQ)1; hence we have
$$|F(t) - F_N(t)| \le |F^{(1)}(t) - F^{(1)}_N(t)| + |F^{(2)}(t) - F^{(2)}_N(t)| \le \varepsilon + |F^{(2)}(t) - F^{(2)}_N(t)|,$$
where we can bound the first term with ε since the constant function 1 is SE with α = 0. The second term is a ME function; to bound it, we can use Corollary 3.8, leading to
$$|F^{(2)}(t) - F^{(2)}_N(t)| \le \varepsilon(1 + \sqrt2)\,\|\alpha\|_2\,\|\mathbf{1}\|_2 \le \varepsilon(1 + \sqrt2)\,\|\alpha\|_1\,\|\mathbf{1}\|_2 = \varepsilon(1 + \sqrt2)\,\sqrt d.$$
Alternatively, since Q, Q⁻¹ and exp(tQ) commute, we can write $F^{(2)}(t) = -\alpha^T \exp(tQ)\mathbf{1} = -\alpha^T Q^{-1}\exp(tQ)\,Q\mathbf{1}$. This expression leads to the second bound, since $\|(-\alpha^T Q^{-1})^T\|_2 \le \|(-\alpha^T Q^{-1})^T\|_1 = -\alpha^T Q^{-1}\mathbf{1}$. Indeed, Q⁻¹ ≤ 0, so the vector −α^T Q⁻¹ has non-negative entries and its 1-norm reduces to the sum of its entries. □
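In the same spirit, the phase-type pdf can be recovered from the Laplace transform of (15), which is $\hat f(s) = \alpha^T(sI - Q)^{-1}q$. A sketch (MATLAB; alpha is stored as a column vector, and Q, w, beta, t are assumed given):

```matlab
% Phase-type density via numerical ILT of f_hat(s) = alpha'*inv(s*I - Q)*q.
d    = size(Q, 1);
q    = -Q * ones(d, 1);                       % exit rate vector, q >= 0
fhat = @(s) alpha' * ((s*eye(d) - Q) \ q);    % transform of the pdf (15)
fN = 0;
for n = 1:numel(w)
    fN = fN + w(n) * fhat(beta(n)/t);
end
fN = real(fN) / t;
% Ground truth for comparison: alpha' * expm(t*Q) * q
```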
Unfortunately, there is no simple expression for the field of values W(Q) of a generator or subgenerator matrix, but the following results give inclusions. Let us recall that a uniformization rate for a (sub)generator matrix Q ∈ R^{d×d} is any λ ∈ R such that λ ≥ max_{i=1,...,d}|Qii|; the definition comes from the concept of uniformization, a popular tool to discretize CTMCs [35, Section 6.7].

Theorem 5.2. Let Q ∈ R^{d×d} be a generator or sub-generator matrix, and λ be a uniformization rate for it. Then, $W(Q) \subseteq B(-\lambda, \lambda\sqrt d)$.

Proof. A sub-generator matrix can always be written as the principal submatrix of a (d + 1) × (d + 1) generator matrix, so in view of Lemma 3.5 we reduce to the case in which Q is a generator. If Q is a generator matrix and λ is a uniformization rate, then $P = I + \frac1\lambda Q$ is a stochastic matrix. In particular, ∥P∥∞ = 1, where ∥·∥∞ is the operator norm induced by the max-norm on vectors. It is a classical inequality [23, Page 50-5] that $\|P\|_2 \le \sqrt d\,\|P\|_\infty$; in particular we have $W(P) \subseteq B(0, \|P\|_2) \subseteq B(0, \sqrt d)$. This inclusion implies the result, thanks again to Lemma 3.5. □

A stronger bound is the following:

Theorem 5.3. The smallest rectangle in the complex plane (with its sides parallel to the axes) such that W(Q) ⊂ Ωd for each generator or subgenerator matrix Q ∈ R^{d×d} with uniformization rate λ is
$$\Omega_d = \begin{cases} \left[-2\lambda,\ \tfrac\lambda2(\sqrt2 - 1)\right] + i\left[-\tfrac12\lambda,\ \tfrac12\lambda\right], & d = 2;\\[1mm] \left[-\big(1 + \tfrac{\sqrt5}{2}\big)\lambda,\ \tfrac\lambda2(\sqrt3 - 1)\right] + i\left[-\tfrac{\sqrt3}{2}\lambda,\ \tfrac{\sqrt3}{2}\lambda\right], & d = 3;\\[1mm] \left[-\big(1 + \tfrac{\sqrt6}{2}\big)\lambda,\ \tfrac\lambda2\right] + i\left[-\lambda,\ \lambda\right], & d = 4;\\[1mm] \left[-\big(1 + \tfrac{\sqrt{d+2}}{2}\big)\lambda,\ \tfrac\lambda2(\sqrt d - 1)\right] + i\left[-\tfrac{\sqrt2\lambda}{4}\sqrt{d + \sqrt{d^2 - 4d + 12}},\ \tfrac{\sqrt2\lambda}{4}\sqrt{d + \sqrt{d^2 - 4d + 12}}\right], & d \ge 5.\end{cases}$$

Proof. This result follows from the argument in Theorem 5.2, paired with the bound on the field of values of a stochastic matrix W(P) given in [20, Theorem 6.9]. □

These bounds are valid for a generic generator matrix of given size. If we have information on the eigenvalues of the real and imaginary parts of a matrix, sharper bounds can be obtained for its field of values. The next theorem follows from the results in [22, Section 5.6], taking a very coarse discretization θ ∈ {0, π/2, π, 3π/2}.

Theorem 5.4. Let A ∈ C^{d×d} be a matrix. Let $X = \frac{A + A^*}{2}$ be the real part of A and $Y = \frac{A - A^*}{2i}$ be the imaginary part of A. X and Y are Hermitian matrices and thus have real eigenvalues. We have
$$W(A) \subseteq [\lambda_{\min}(X),\ \lambda_{\max}(X)] + i\,[\lambda_{\min}(Y),\ \lambda_{\max}(Y)].$$
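Theorem 5.4 is immediate to apply numerically; a short MATLAB sketch for a square matrix A:

```matlab
% Rectangular enclosure of the field of values W(A) from Theorem 5.4.
X = (A + A') / 2;                  % Hermitian (real) part
Y = (A - A') / (2i);               % Hermitian (imaginary) part
ex = real(eig(X));  ey = real(eig(Y));       % eigenvalues are real
box = [min(ex), max(ex), min(ey), max(ey)];  % [Re_min Re_max Im_min Im_max]
```

W(A) is then contained in the rectangle [box(1), box(2)] + i[box(3), box(4)].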
5.2 Fluid queues

A setting where we can obtain useful results is the one of Markov-modulated fluid models, or fluid queues [4, 28, 33, 44]. A fluid queue with transition matrix Q ∈ R^{d×d} and rates r1, ..., rd is an infinite-dimensional continuous-time Markov process (φ(t), l(t)), where the random variable φ(t) ∈ {1, ..., d} (phase) is a CTMC with transition matrix Q, and the random variable l(t) ∈ R (level) is a continuous function of t that evolves according to $\frac{d}{dt}l(t) = r_{\varphi(t)}$. This model is often paired with boundary conditions, for instance to enforce l(t) ≥ 0. Fluid queues model buffers, telecommunication queues, or performance measures associated with being in a state φ(t) for a certain period of time. A fundamental quantity to analyze their transient distribution is the so-called first-return matrix. Let φ(0) = i be a phase with ri > 0, so that l(t) is increasing at t = 0. Set
$$\tau = \min\{t > 0 \colon l(t) = l(0)\},$$
the first-return time to the initial level, with the convention that τ = ∞ if the level never returns to l(0). The (time-dependent) first-return matrix Ψ(t) is then defined by
$$[\Psi(t)]_{ij} = \mathbb{P}[\tau \le t,\ \varphi(\tau) = j \mid \varphi(0) = i],$$
i.e., the probability that the first return happens before time t, and at the same time the phase changes from i to j. Usually, one includes in Ψ(t) only phases i and j with ri > 0 and rj < 0. There is again a probabilistic interpretation (for s > 0) of $\hat\Psi(s)$ as the probability that the first-return time is smaller than a random variable with exponential distribution Exp(s).

In order to apply the techniques introduced earlier, we recall the following expression for Ψ(t).

Theorem 5.5 ([6, Lemma 2]). Consider a fluid queue model, and let λ be a uniformization rate for its generator matrix Q. Then, there exist nonnegative matrices $\Psi_k \in \mathbb{R}^{d_+ \times d_-}$, for k = 1, 2, ... (dependent on λ), such that
$$\Psi(t) = e^{-\lambda t}\sum_{h=1}^\infty \frac{(\lambda t)^h}{h!}\sum_{k=1}^h \Psi_k = e^{-\lambda t}\sum_{k=1}^\infty \Psi_k \sum_{h=k}^\infty \frac{(\lambda t)^h}{h!}. \qquad (16)$$
Moreover, $\lim_{t\to\infty}\Psi(t) = \Psi(\infty) = \sum_{k=1}^\infty \Psi_k$.

The matrices Ψk have a physical interpretation using uniformization: they represent the probability that the first return to level l(0) happens between the n-th and (n + 1)-st uniformization event; see [6] for more detail. Differentiating (16) term by term, we can obtain an expression for ψ(t), i.e.,
$$\psi(t) = \frac{d}{dt}\Psi(t) = \lambda e^{-\lambda t}\sum_{k=1}^\infty \Psi_k \frac{(\lambda t)^{k-1}}{(k-1)!}. \qquad (17)$$
Following the same approach as Corollary 3.8, we can get the following result.

Theorem 5.6. Consider a fluid queue with uniformization rate λ, and let f(t) = [ψ(t)]ij. Let $(w_n, \beta_n)_{n=1}^N$ be an Abate-Whitt method that is ε-accurate on a region Ω which contains B(−tλ, tλ). Then, the error of the method at point t is bounded by
$$|f(t) - f_N(t)| \le C\,\varepsilon,$$
for a constant C that depends on the model but not on the Abate-Whitt method.

Algorithm 1 (modified AAA):

    if K + 2 > Nmax then
        // No room to add a conjugate pair:
        // ignore z_{K+1} and terminate with K support points
        break;
    end
    z_{K+2} ← conj(z_{K+1});
    Update C̃ according to (26): remove the row corresponding to z_{K+2}, add the column corresponding to it;
    K ← K + 2;
    end
    (u0, u1, ..., uK) ← arg min{ ∥Ãu∥₂² : u ∈ C^{K+1}, ∥u∥ = 1 };
    end
    Compute β1, ..., βK by solving (27) (in higher precision);
    Compute residues wk = n(βk)/d′(βk), k = 1, ..., K;

7.1 Evaluation in floating-point arithmetic

Let us assume for simplicity that t = 1. Let g(βn) be the value obtained by computing $\hat f(\beta_n)$ numerically, and suppose that it has relative precision δ, i.e.,
$$\left|\frac{g(\beta_n) - \hat f(\beta_n)}{\hat f(\beta_n)}\right| \le \delta.$$
In floating-point arithmetic, we can only ensure a δ at least as large as the machine precision, if not larger. We can write the computed value of the Abate-Whitt approximant as
$$\tilde f_N(1) = \sum_{n=1}^N w_n\,g(\beta_n) = \sum_{n=1}^N w_n\,\hat f(\beta_n)\,(1 + \delta_n), \qquad |\delta_n| \le \delta.$$

Fig. 1: Approximation error ε and magnitude of weights max|wn| when running AAA with Ω = B(−5, 5) in either binary64 or VPA with 100 significant digits.

Then we have the bound
$$|\tilde f_N(1) - f_N(1)| \le \sum_{n=1}^N |w_n \hat f(\beta_n)|\,\delta$$
and hence
$$|\tilde f_N(1) - f(1)| \le \sum_{n=1}^N |w_n \hat f(\beta_n)|\,\delta + C\varepsilon. \qquad (29)$$
This bound reveals that limiting the magnitude of the weights wn is important when we evaluate $\hat f$ in limited precision, as already noted in [24, Section 5.1]. In practice, we observe that increasing N leads to a smaller ε but also to larger weights; hence increasing N after a certain point (which depends on the machine precision) is no longer beneficial.

Remark 7.1. In the TAME method, the weights are computed by optimizing a function that is itself computed numerically, by evaluating the rational approximant $\hat\rho_N(-z)$. If the computations in the main AAA loop (Algorithm 1) are performed with sufficiently many significant digits (e.g., 100 decimal digits of precision), then we observe the approximation error ε decreasing even below 10⁻¹⁶, but at the same time the magnitude of the weights increasing steadily. This is detrimental if we plan to compute the ILT in binary64, since the large weights cause a large numerical error irrespective of ε: in (29) the summation becomes the dominant term, even if ε is small. If we run the AAA algorithm in binary64 instead, then the error stagnates around the machine precision 2.2 × 10⁻¹⁶, and at that point increasing N further only adds spurious poles with weights that are of the order of the machine precision; in particular, the magnitude of the weights does not increase anymore. So running the AAA algorithm in binary64 acts as a safeguard against increasing weights. We observe this behavior in an example in Figure 1, and also its consequences later in Section 8.3.
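The error floor described by (29) can be reproduced in a few lines (a MATLAB sketch assuming fhat, w, beta are already defined, with t = 1):

```matlab
% Perturb the transform values by a relative error delta and observe
% that the resulting error stays below sum |w_n*f_hat(beta_n)|*delta.
delta = 1e-12;                                     % assumed relative precision
fb    = arrayfun(fhat, beta);                      % exact values f_hat(beta_n)
g     = fb .* (1 + delta*(2*rand(size(fb)) - 1));  % perturbed evaluations
fN       = real(sum(w .* fb));
fN_tilde = real(sum(w .* g));
floor29  = sum(abs(w .* fb)) * delta;              % first summand of (29)
assert(abs(fN_tilde - fN) <= floor29);             % holds by construction
```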
7.2 Choice of Ω

We have seen in Sections 3 and 5 that an Abate-Whitt method which is ε-accurate on Ω can recover the original function f with an error proportional to ε (Theorems 3.2, 3.7, 5.1, 5.6, 3.13). We can use the AAA algorithm to construct a TAME method with a small ε. The first step is to select the region Ω, depending on the information we have on f.

• Fluid queues. If f arises from a fluid queue model with uniformization rate λ (i.e., f(t) = Ψ(t) or f(t) = ψ(t)), then use Ω = B(−r, r), with r = λt.
• ME class. If f is in the ME class (e.g., f is a phase-type distribution), then Ω should contain W(Q). With no information on Q apart from its dimension, Theorem 5.3 can be used. If the user has more information about W(Q), tighter bounds can be used, for instance the one in Theorem 5.4.
• LS class. If f(t) is in the LS class, we use Ω = [−L, 0] with L chosen according to Theorem 3.13.
• If f(t) can be approximated by a function of the above classes, use the corresponding Ω.

Otherwise, if nothing is known about f, then we recommend trying three possible domains: the circle B(−r, r), the segment on the real half-line [−r, 0], the segment on the imaginary line i[−r, r]. However, as we note in Remark 3.3, no Abate-Whitt method can give good results for all functions f.

Remark 7.2. One may be tempted to use a large region Ω to cover as many functions as possible; however, usually the magnitude of the weights |wn| is larger for a bigger region Ω. This observation discourages using overly large sets Ω: if we need to compute an ILT for which we know that the domain Ω = B(−1, 1) is sufficient, then using Ω = B(−10, 10) instead would cause loss of accuracy, at least when the computations are done in double-precision arithmetic.

7.3 Choice of Z

Once Ω is chosen, we wish to construct a method that is ε-accurate on Ω using Algorithm 1. However, the domain for AAA is a discrete set of points Z, while in many of our earlier examples Ω is a bounded region such as a disc or a rectangle. Hence we need a way to approximate Ω with a finite set of points. The simplest approach would be to use a regular grid of points inside Ω, thus requiring a number of points that scales quadratically with the diameter of Ω. We use another more efficient approach instead, following [32, Section 6]. Assume that Ω is a simply connected domain with boundary ∂Ω. Consider the function $g(z) = e^z - \hat\rho_N(-z)$, which is holomorphic on C \ {β1, ..., βN}. If g has no poles inside Ω, then by the maximum modulus principle [36, Theorem 10.24] $\max_{z\in\Omega}|g(z)| = \max_{z\in\partial\Omega}|g(z)|$. That is, it is enough to require that |g(z)| be small on the boundary of Ω. Therefore we can take Z as a discretization of ∂Ω. In particular, when Ω is a disc, we take Z to be a set of equispaced points on a circle.
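For a disc Ω = B(−r, r), for instance, the support set is just a set of equispaced points on the circle |z + r| = r (a sketch with an illustrative number of points m):

```matlab
% Discretization Z of the boundary of Omega = B(-r, r) (Section 7.3).
r = 4;  m = 500;
theta = 2*pi*(0:m-1)/m;
Z = -r + r*exp(1i*theta);   % support set to be passed to the AAA algorithm
```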
We then apply the modified AAA algorithm on the support set Z, obtaining a rational approximation $\hat\rho(-z) = \sum_{n=1}^N \frac{w_n}{\beta_n - z}$ to $e^z$. This rational approximation is guaranteed to be ε-accurate only on Z, not on the whole Ω, but in practice this is sufficient if the points in Z are sufficiently tight. Finally, we recover the parameters $(w_n, \beta_n)_{n=1}^N$ and obtain the TAME method. We have to check that the poles βn are outside the domain Ω, otherwise the maximum modulus principle may be violated; furthermore, we could not have a ε-accurate method on Ω, since $\hat\rho_N(-z)$ would have a pole inside Ω. This condition is satisfied in the TAME methods we computed.

Fig. 2: Errors ε obtained by TAME on Ω = B(−r, r), with the corresponding values of N and N′.

7.4 TAME with optimal N′ for a given B(−r, r)

As we discussed above, increasing N (or N′) beyond a certain value does not improve accuracy, if the transforms are evaluated in binary64. In this section, we wish to give a strategy to select a quasi-optimal N′ to use, for the common case of a region of the form Ω = B(−r, r) and several values of r.

First, we display in Figure 2 the errors ε obtained for several values of r, and the corresponding values of N and N′. Several comments on this figure are in order. Since Z is the discretization of a circle, all points in it apart from two are nonreal. Hence, our AAA variant almost always adds support points in conjugate pairs. As a consequence, for any given r one obtains only about half of the possible values of N. For instance, for r = 30, TAME produces rational approximants with N ∈ {3, 5, 7, 9, 11, 13, 15, 16, 18, 20, 22, 24}, since a real support point is added as the 16th point. The iterations at which real support points are added vary depending on r: this explains why the colored lines are sometimes broken in the figure.

Fig. 3: Error estimator η obtained by TAME on Ω = B(−r, r), with the corresponding values of N and N′.

We note that each value of N′ is reached with more than one value of N: typically, one has N = 2N′ − 1 and N = 2N′, i.e., all poles come in conjugate pairs apart from at most one real pole. The AAA algorithm was run in binary64, as suggested in Remark 7.1, with only the eigenvalue problem (27) solved in higher precision. For each value of N′ (represented by a different color in the figure), the method reaches a value of ε close to the machine precision 2.2 · 10⁻¹⁶ for all values of r up to a certain threshold, but then the error increases steadily. For instance, for N′ = 6, the error starts drifting away from machine precision at r ≈ 5. There is a considerable amount of numerical noise due to floating-point errors, clearly visible at the bottom of the graph, so that "close to machine precision" may mean a value larger than 10⁻¹³, in certain cases.

It follows from the discussion in Section 7.1 that ε is not the only factor that affects the final accuracy of the method when the ILT is evaluated in binary64. We take as a proxy for the error (29) the quantity η = ε + u max|wn|, where u ≈ 2.2 · 10⁻¹⁶ is the machine precision.
This choice is motivated by the heuristic that (a) $\hat f(\beta_n)$ has the same order of magnitude as C for our classes of functions and for typical values of βn, and that (b) the maximum of |wn| is a better estimate for the error than the worst-case bound Σ|wn|, because the errors δn do not all have the same phase, and the wn do not all have the same magnitude. We plot this error estimator in Figure 3.

Based on the plot, we selected an array of methods with different values of N′. Some relevant quantities for these methods are reported in Table 1, while their poles and weights are available for download on https://github.com/numpi/tame-ilt.

Table 1: Relevant quantities for a family of quasi-optimal precomputed TAME methods. The method in each row was generated with r = rmax.

    N′   N    rmax   ε             max|wn|       Error proxy η
    3    6    0.6    1.637909e-14  5.462668e+02  1.376747e-13
    4    8    1.8    8.229408e-14  3.402665e+03  8.378375e-13
    5    10   4.0    3.973128e-13  7.669538e+03  2.100292e-12
    6    12   7.0    1.420889e-12  1.684302e+04  5.160790e-12
    7    14   11.2   1.983229e-12  3.981692e+04  1.082436e-11
    8    16   16.8   1.140301e-11  5.302010e+04  2.317583e-11
    9    18   22.7   6.075754e-12  1.296546e+05  3.486487e-11
    10   20   31.6   3.192135e-11  1.495150e+05  6.512034e-11

When one needs a method for a certain Ω = B(−r, r), we suggest using the method with the smallest rmax ≥ r. In this way we have a method that is ε-accurate on Ω ⊆ B(−rmax, rmax), and at the same time it does not have unnecessarily large weights.
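In code, this selection rule is a small lookup over the rmax column of Table 1 (the poles and weights themselves must be loaded from the repository above):

```matlab
% Pick the precomputed TAME method for Omega = B(-r, r): smallest rmax >= r.
rmax   = [0.6 1.8 4.0 7.0 11.2 16.8 22.7 31.6];   % from Table 1
Nprime = [3 4 5 6 7 8 9 10];
idx = find(rmax >= r, 1, 'first');   % valid for r <= 31.6
Np  = Nprime(idx);                   % reduced number of nodes to use
```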
8 Experiments

8.1 Comparison of different Abate-Whitt methods

To gain additional insight on how the different methods relate to each other, we present a comparison of their parameters. In Figure 4 the nodes βn (along with their conjugates) are plotted in the complex plane. The TAME method uses N′ = 5 and r = 4, chosen according to Table 1. The domain Ω = B(−r, r) is shown as well; none of the nodes lie inside the domain, so the choice of the discretization Z is justified (see Section 7.3). Some methods position the nodes according to a chosen geometry: the nodes of the CME and the Euler methods are aligned on vertical lines, while the nodes of the Talbot method follow the deformed integration contour. On the other hand, the position of TAME and Zakian nodes is determined by solving respectively a minimization problem and a system of equations. It is interesting to note that the obtained nodes are arranged on curves which are close to the Talbot contour; we stress, however, that having close nodes does not make two methods similar, because the weights wn can be vastly different.

Fig. 4: Plot of nodes βn for different methods (Euler N′ = 35, Gaver N′ = 16, Talbot N′ = 20, CME N′ = 60, Zakian N′ = 4, TAME N′ = 5). The TAME method uses Ω = B(−4, 4).

The Dirac approximants ρN(y) are plotted in Figure 5. The CME method has a peaked distribution at y = 1, and is close to zero elsewhere. The other methods have a smaller and wider peak, as well as oscillation around zero in other parts of the domain; this is clearly visible for the Euler method. The presence of oscillation in ρN does not necessarily imply that the method is ineffective. Indeed, for (9) we only need convergence of distributions in the weak sense, and this kind of convergence is possible even for wildly oscillating functions, for example sin(ny) ⇀ 0. The Talbot method is excluded because it has nodes with Re(βn) < 1; the interpretation of ρN(y) as a Dirac approximant is not clear for Talbot.

Fig. 5: Plot of the Dirac approximants ρN(y) for different methods (Euler N′ = 35, Gaver N′ = 16, CME N′ = 60, Zakian N′ = 4, TAME N′ = 5). The TAME method uses Ω = B(−4, 4).

In Figure 6, we display the error of the rational approximant $\hat\rho_N(-z)$ for different Abate-Whitt methods on a rectangular domain. The TAME method is constructed using B(−r, r) with r = 4, according to Table 1; we also display a circle with radius 4 on the picture.

Fig. 6: Base-10 logarithm of the approximation error $\log_{10}\big|\exp(z) - \sum_{n=1}^N \frac{w_n}{\beta_n - z}\big|$, and the poles β in red. Panels: a) Euler N′ = 35; b) Gaver-Stehfest N′ = 16; c) Talbot N′ = 20; d) CME N′ = 75; e) Zakian N′ = 4; f) TAME N′ = 5, Ω = B(−4, 4).

In the rest of this section, we present a numerical evaluation of the performance of the various Abate-Whitt methods on several problems; the first three are from the classes of functions studied in this paper, while the last two are from functions outside these classes, to determine whether the numerical properties observed and proved extend to more general problems. Our code is publicly available on https://github.com/numpi/tame-ilt.

8.2 Experiment A

We consider a fluid queue model of size d+ = 5, d− = 10 and uniformization rate λ = 1. The matrix Q of the underlying Markov chain has size d = 15. To construct Q, we generate a random matrix (with abs(randn(d)) in Matlab) and then modify the diagonal entries to have zero row sums. The positive rates r1, ..., rd+ are uniformly distributed in [0, 1], while the negative rates rd++1, ..., rd in [−1, 0]. To allow reproducibility, the seed of the random number generator is set to rng(0). The transform $\hat\psi(s)$ of the pdf is calculated by solving a nonsymmetric algebraic Riccati equation (see [7]). Using the properties of derivatives of the Laplace transform ([15, Section 1.3]), the transform of the CDF is calculated as $\hat\Psi(s) = \frac1s\hat\psi(s)$. We recover both Ψ(t) and ψ(t) by means of the inverse Laplace transform at time t = 1. Following Theorem 5.6 we choose the domain Ω = B(−1, 1) for the TAME method and compare it to the classical Abate-Whitt methods.
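A plausible reading of this construction in MATLAB is sketched below; the exact sequence of random draws may differ from the authors' script, so it need not reproduce their matrix bit for bit:

```matlab
% Random generator matrix and rates of Experiment A (our reading of the text).
rng(0);                             % reproducibility
d = 15;                             % d_plus = 5, d_minus = 10
Q = abs(randn(d));
Q = Q - diag(diag(Q));              % clear the diagonal
Q = Q - diag(sum(Q, 2));            % diagonal set so that row sums are zero
rates = [rand(5, 1); -rand(10, 1)]; % r_i in [0,1] and in [-1,0]
```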
Fig. 7: Experiment A. Plot of the error of the six Abate-Whitt methods (Euler, Gaver, Talbot, CME, Zakian, TAME). Above: ψ(t) (pdf); below: Ψ(t) (CDF). λ = 1, t = 1. The TAME method uses Ω = B(−1, 1).

We plot the errors ∥ψ(1) − ψN(1)∥∞ and ∥Ψ(1) − ΨN(1)∥∞ as N increases in Figure 7. All methods except CME exhibit a linear rate of convergence in the first part of the graph. The error of the CME method decreases approximately as ∼(N′)⁻²·²¹. This result is consistent with the findings in [25, Section 4.2], where it was observed that the SCV of the CME method decreases approximately as ∼(N′)⁻²·¹⁴; therefore by Theorem 4.3 the error of the CME method decreases at least as fast as O((N′)⁻⁰·⁷¹).

Table 2: Experiment A. Minimum of the approximation error for the given Abate-Whitt method and corresponding value of N.

    Ψ(t) (CDF)                         ψ(t) (pdf)
    Method   min error    N            Method   min error    N
    Euler    4.0 · 10⁻¹²  35           Euler    2.0 · 10⁻¹¹  31
    Gaver    5.5 · 10⁻⁸   16           Gaver    2.1 · 10⁻⁷   14
    Talbot   1.2 · 10⁻¹⁴  18           Talbot   1.2 · 10⁻¹³  20
    CME      1.7 · 10⁻⁵   50           CME      1.2 · 10⁻⁵   50
    Zakian   4.3 · 10⁻¹⁴  4            Zakian   3.8 · 10⁻¹³  4
    TAME     3.3 · 10⁻¹⁴  4            TAME     8.0 · 10⁻¹⁴  3

In the CDF case, for N′ = 50 the error is 1.7 · 10⁻⁵; the other methods reach this level of precision much sooner: the Talbot method at N′ = 5 and the TAME method at N′ = 2. However, the classical methods soon encounter a problem: the error starts growing instead of decreasing. This is caused by an increase in the magnitude of the weights wn, which causes numerical instability, as noted in Section 7.1. Zakian's method is the fastest, but it is also the most prone to instability: after reaching the minimum at N′ = 4, the error starts rapidly rising again. For other functions the threshold N′ may be different, so one risks overshooting the value of N′ which gives a satisfying approximation. Talbot's method is the one that reaches the smallest error among classical methods, and the growth of the error due to instability is not as fast as for Zakian. The TAME method combines both positive aspects. It reaches an error similar to Talbot's, and it is as fast as the Zakian method: just three evaluations of $\hat f(s)$ are sufficient to recover the original function f. Moreover, it does not suffer from numerical instability, because Algorithm 1 constructs a good rational approximation already with N′ = 4 (i.e., with degree N = 8), and increasing N′ further produces weights with an absolute value comparable to machine precision, as noted in Remark 7.1. While one would like to avoid using unnecessary nodes, the TAME method does not punish the user if they choose N′ too large.

In Figure 8 we show, for each method, its error, the upper bound given by Theorem 5.6, and the moment estimate given by Definition 4.5.

Fig. 8: Experiment A. Upper bound, moment estimate, and approximation error for the pdf ψ(t). Panels: Euler, Gaver, Talbot, Zakian, CME, TAME; each compares the rational-approximation bound ("Rat. appr.") and the moment estimate ("Moments") as N′ increases.

We see that up to the start of numerical instability, both the upper bound and the moment estimate are good approximations of the actual error for the Euler, Gaver, Talbot, and Zakian methods. For the CME method, the moment estimate is of the order of machine precision, and the same happens for TAME for N′ ≥ 6. We highlight that the upper bound in Theorem 5.6 depends on N′ only through ε, while the moment estimate in Definition 4.5 depends on N′ through |1 − μ0| and |μ1 − μ0|, which are quantities that depend only on the Abate-Whitt method and not on the function. As further confirmed in Figure 6, the upper bound is valid to measure the precision even of an Abate-Whitt method created with a different perspective in mind. Numerically, the moment estimates are sharp for all methods apart from CME; they are a surprisingly good approximation for the Talbot method.
Fig. 9: Experiment B. Error of the pdf ψ(t), computed at different times t ∈ {1, 3, 10, 30, 100}. Comparison of TAME methods on a circular domain Ω with different r (optimal TAME, and TAME with r = 0.5, 1, 3, 10, 100).

8.3 Experiment B

With the same fluid queue as in Experiment A, we compute the pdf ψ(t) at different times t ∈ {1, 3, 10, 30, 100}. By Theorem 5.6, the radius of the domain Ω = B(−r, r) is r = tλ (here λ = 1); we construct different TAME methods for each r ∈ {0.5, 1, 3, 10, 100}. We also consider the "optimal TAME" method (see Table 1), where the radius r is not constant, but depends on the number of nodes. The comparison of these methods is shown in Figure 9. In general, TAME methods with r > t are quite accurate, while for r < t they are not. For a fixed t, increasing r makes the convergence slower in N′; this is due to the fact that larger r leads to larger weights wn (see Section 7.1). The TAME method with r = 100 has the slowest convergence, but has almost the same convergence speed for any value of t. Conversely, methods with smaller r have faster convergence, but they provide bad approximations when t is much larger than r.

Fig. 10: Experiment C. Target function is f(t) = e^{tQ} at t = 1. Plot of the error of the six Abate-Whitt methods and four TAME methods (Table 1, Thm. 5.2, Thm. 5.3, Thm. 5.4) as N′ increases.

Fig. 11: Experiment C. Plot of the field of values W(Q) and its bounds from Theorems 5.2, 5.3, and 5.4.

8.4 Experiment C

We consider the same generator matrix Q of the previous two experiments, but instead of constructing the fluid queue we just analyze the underlying continuous-time Markov chain. The size of Q is d = 15 and the target function is f(t) = e^{tQ}, computed at time t = 1. We use TAME methods constructed with the following domains Ω:
• the quasi-optimal ones in Table 1,
• the circle Ω = B(−1, √15), based on Theorem 5.2,
• the rectangle Ω = [−3.06, 1.43] + i[−1.88, 1.88], based on Theorem 5.3,
• the smaller rectangle Ω = [−1.15, 0] + i[−0.19, 0.19], based on Theorem 5.4.
The three domain bounds for W(Q) are plotted in Figure 11. The approximation errors are displayed in Figure 10. The four TAME methods exhibit the steepest convergence, along with the Zakian method. As the region Ω gets smaller, the error of the TAME method diminishes, consistently with the discussion in Section 7.1.

Fig. 12: Experiment D, triangular wave. Upper plot: functions f(t) and fN(t) (Talbot N′ = 20, Gaver N′ = 16, Zakian N′ = 4, Euler N′ = 33, TAME N′ = 20, CME N′ = 75). Lower plot: error in linear scale (Euler N′ = 33, TAME N′ = 20, CME N′ = 20, CME N′ = 75).

8.5 Experiment D

Consider two non-smooth signals: the square wave and the triangular wave. We rescale and shift them so that they assume values in the interval [0, 1]. They are periodic functions, so they admit a Fourier series expansion on all R+. The triangular wave is defined as
$$f_\triangle(t) = \begin{cases} t - \lfloor t\rfloor & \text{if } \lfloor t\rfloor \bmod 2 = 0,\\ 1 - t + \lfloor t\rfloor & \text{if } \lfloor t\rfloor \bmod 2 = 1.\end{cases}$$
Its Laplace transform is
$$\hat f_\triangle(s) = \frac{1}{s^2}\,\frac{1 - e^{-s}}{1 + e^{-s}}.$$
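The transforms of the two waves are straightforward to evaluate; a hedged MATLAB sketch for the triangular wave, with (w, beta) the parameters of any Abate-Whitt method and the usual scaling of the nodes by 1/t:

```matlab
% Triangular-wave transform and its Abate-Whitt inversion at time t.
fhat = @(s) (1 ./ s.^2) .* (1 - exp(-s)) ./ (1 + exp(-s));
ft   = @(t) real(sum(w .* fhat(beta / t))) / t;
% e.g. ft(0.5) should be close to f_tri(0.5) = 0.5
```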
The triangular wave is a continuous function, but it is not differentiable everywhere. The k-th coefficient of its Fourier series is $c_k = \frac{4}{(2k+1)^2\pi^2}$. We note that the coefficients are summable and $\sum_{k=0}^\infty |c_k| = \frac{4}{\pi^2}\sum_{k=0}^\infty \frac{1}{(2k+1)^2} = \frac12$. Therefore, the Fourier series converges absolutely to f△. The truncated Fourier series is in the SE class, so we have the necessary hypotheses to apply Theorem 3.11. The exponents of the summands of the truncated series are purely imaginary numbers, so we use Ω = i[−r, r] for a suitably chosen r.

The square wave and its Laplace transform are respectively
$$f_\square(t) = \lfloor t\rfloor \bmod 2, \qquad \hat f_\square(s) = \frac{1}{s(1 + e^{s})}.$$
The k-th coefficient of the Fourier series is $c_k = \frac{1}{2k+1}$. The square wave is discontinuous, so it cannot be uniformly approximated by any continuous function and we cannot apply Theorem 3.11. Nevertheless, the Fourier series converges at the points where f□ is continuous, so it is reasonable to use a TAME method that acts accurately on the truncated Fourier series; therefore we use Ω = i[−r, r] also for the square wave. However, observe that the absolute series $\sum_{k=0}^\infty |c_k|$ diverges, so the approximation bound of Theorem 3.2 grows worse as K grows.

For both problems, we have constructed a TAME method with N′ = 20 and r = 80. The triangular wave is shown in Figure 12. We can see that Talbot, Gaver, and Zakian do not provide a good approximation. The Talbot method has numerical issues at some values of t, reaching values of order 10¹⁴. This happens because the contour curve of the Talbot integral intersects the domain Ω, so the nodes βn/t may happen to be close to singularities of $\hat f_\triangle$. This leaves us the Euler, CME, and TAME methods. We see in the plot that the TAME method with N′ = 20 provides a better approximation than the Euler method with N′ = 33 and the CME method with N′ = 20. Unfortunately, increasing N′ further in the TAME method does not reduce this error, while CME with N′ = 75 is more accurate and avoids the oscillation that can be seen in the other plots.

Fig. 13: Experiment D, square wave. The function f(t) and the approximants fN(t) are displayed. Upper plot: Euler and TAME compared to the worse-performing methods (Talbot N′ = 20, Gaver N′ = 16, Zakian N′ = 4). Lower plot: Euler and TAME compared to the better-performing CME (N′ = 20 and N′ = 75).

The square wave is shown in Figure 13. As for the triangular wave, the results of the Talbot, Gaver, and Zakian methods are not close to the target function. The Euler and TAME methods provide better approximations, but they suffer from Gibbs phenomena and oscillation around the correct value. In this case, the CME method is the one that provides the best approximation by far, even with the same number of nodes (N′ = 20). Increasing it to N′ = 75 yields an even better approximation with the CME method. Indeed f□ is outside of our framework of "tame" functions and, as noted, the Fourier coefficients 1/(2k+1) are not summable. The experiment shows that CME performs better than TAME on discontinuous functions.

8.6 Experiment E

We consider the European call option pricing problem. For a detailed exposition of the topic, see the books [12, 34]. The goal of the option pricing problem is to determine the value of a contract (i.e., a call) C(t, Q) depending on time t and the price of a given asset Q.
Under standard hypotheses, C(t, Q) satisfies the Black-Scholes PDE; one of the methods for its computation is through $\hat C(s)$ and the inverse Laplace transform. For the vanilla European call option, both C(t) and $\hat C(s)$ are known explicitly, allowing us to use them to test the TAME method. For some exotic options, however, only $\hat C(s)$ is known; see e.g. [29]. The analytical solution of the European call option is [42]
$$C(t) = Q\,\Phi(d_+) - K e^{-Rt}\,\Phi(d_-), \qquad (30)$$
where Q = Q(t) is the asset price at the current time, and R and σ are parameters: R is the rate of interest, σ is the volatility. Φ(x) is the CDF of the normal distribution and
$$d_\pm(t) = \frac{1}{\sigma\sqrt t}\left[\log\frac{Q}{K} + \left(R \pm \frac12\sigma^2\right)t\right].$$
The Laplace transform of C(t) is (see [42])
$$\hat C(s) = \begin{cases} \dfrac{K}{\gamma_+ - \gamma_-}\left(\dfrac{Q}{K}\right)^{\gamma_-}\left(\dfrac{\gamma_+}{R+s} - \dfrac{\gamma_+ - 1}{s}\right) + \dfrac{Q}{s} - \dfrac{K}{R+s} & \text{if } Q \ge K,\\[2mm] \dfrac{K}{\gamma_+ - \gamma_-}\left(\dfrac{Q}{K}\right)^{\gamma_+}\left(\dfrac{\gamma_-}{R+s} - \dfrac{\gamma_- - 1}{s}\right) & \text{if } Q < K,\end{cases}$$
where γ+(s) ≥ γ−(s) are
$$\gamma_\pm(s) = \frac{1}{\sigma^2}\left(-\Big(R - \tfrac12\sigma^2\Big) \pm \sqrt{\Big(R - \tfrac12\sigma^2\Big)^2 + 2\sigma^2(R+s)}\right).$$
The function C(t) is in none of the three classes we described. Nevertheless, the summands Φ(d±(t)) of (30) are integrals of $e^{-x^2}$ on a domain determined by d±(t), which is similar to the structure of a LS function. For this reason, we construct TAME methods using Ω = [−r, 0].

Fig. 14: Experiment E. Upper plot: function C(t). Lower plot: absolute value of the error in log scale (Talbot N′ = 20, Euler N′ = 33, CME N′ = 75, Gaver N′ = 16, Zakian N′ = 4, TAME N′ = 12 with r = 50, TAME N′ = 33 with r = 100).

In Figure 14 we show the function C(t) and the approximation errors for K = 100, Q = 80, σ = 0.1, R = 0.05. The function is regular and all methods manage to provide a good enough approximation, with error smaller than 5 · 10⁻². The most accurate methods are Talbot with N′ = 20 and TAME with N′ = 33 nodes and r = 100, followed by the TAME method with N′ = 12 and r = 50, and then by the Euler method with N′ = 33.
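For reference, the transform above (as reconstructed here) can be evaluated with a few lines of MATLAB; gp and gm are our names for γ+ and γ−:

```matlab
% Laplace transform of the European call (Experiment E parameters).
K = 100;  Q = 80;  sigma = 0.1;  R = 0.05;
mu = R - sigma^2/2;
gp = @(s) (-mu + sqrt(mu^2 + 2*sigma^2*(R + s))) / sigma^2;   % gamma_plus
gm = @(s) (-mu - sqrt(mu^2 + 2*sigma^2*(R + s))) / sigma^2;   % gamma_minus
if Q >= K
    Chat = @(s) K/(gp(s) - gm(s)) * (Q/K)^gm(s) ...
                * (gp(s)/(R + s) - (gp(s) - 1)/s) + Q/s - K/(R + s);
else
    Chat = @(s) K/(gp(s) - gm(s)) * (Q/K)^gp(s) ...
                * (gm(s)/(R + s) - (gm(s) - 1)/s);
end
```

Chat can then be fed to any Abate-Whitt method and compared against the closed form (30).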
9 Conclusions

We developed a theoretical framework for the accuracy analysis of Abate-Whitt methods, based on a rational approximation problem. We focused on functions from the SE, the ME, and the LS classes, as well as on the first-return time matrix Ψ(t) for fluid queues. For each class, we provided theoretical bounds for the approximation error of an Abate-Whitt method, and a description of how to choose an appropriate domain Ω. We showed how to adapt the AAA algorithm to construct a TAME Abate-Whitt method for each domain Ω. We provide precomputed parameters for the application in fluid queues. We hope that both the analysis and the new method can be of use for practitioners in queuing theory.

In this paper, we also discussed the numerical issues related to the computation of Abate-Whitt methods in floating-point arithmetic; we plan to focus our future research on optimizing further the selection of weights to avoid excessive growth. Another open problem is extending these results to more classes of functions and to numerical methods for the ILT outside the Abate-Whitt class.

Acknowledgments. The authors are thankful to Bernhard Beckermann for literature pointers, to Michele Benzi for useful comments, to Andrea Macrì for help and discussion regarding Experiment E, and to Francesco Zigliotto for LaTeX help.

Supplementary information. The authors are members of INDAM (Istituto Nazionale di Alta Matematica) / GNCS. FP has received financial support by the National Centre for HPC, Big Data and Quantum Computing-HPC, CUP B83C22002940006, funded by the European Union-NextGenerationEU, and by the Italian Ministry of University and Research (MUR) through the PRIN project 2022 "MOLE: Manifold constrained Optimization and LEarning", code: 2022ZK5ME7, MUR D.D. financing decree n. 20428 of November 6th, 2024 (CUP B53C24006410006). The authors have no competing interests to disclose.

References

[1] J. Abate, G. L. Choudhury, and W. Whitt, "An introduction to numerical transform inversion and its application to probability models," in Computational Probability, W. K. Grassmann, Ed. Boston, MA: Springer US, 2000, pp. 257-323.
[2] J. Abate and P. P. Valkó, "Multi-precision Laplace transform inversion," International Journal for Numerical Methods in Engineering, vol. 60, no. 5, pp. 979-993, 2004.
[3] J. Abate and W. Whitt, "A unified framework for numerically inverting Laplace transforms," INFORMS Journal on Computing, vol. 18, no. 4, pp. 408-421, Nov. 2006.
[4] S. Asmussen, "Stationary distributions for fluid flow models with or without Brownian noise," Communications in Statistics: Stochastic Models, vol. 11, no. 1, pp. 21-49, 1995.
[5] N. Barbot, B. Sericola, and M. Telek, "Distribution of busy period in stochastic fluid models," Stoch. Models, vol. 17, no. 4, pp. 407-427, 2001.
[6] N. Bean, G. T. Nguyen, and F. Poloni, "A new algorithm for time-dependent first-return probabilities of a fluid queue," in Matrix-Analytic Methods in Stochastic Models, S. Hautphenne, M. O'Reilly, and F. Poloni, Eds., 2019, pp. 18-22.
[7] N. G. Bean, M. Fackrell, and P. Taylor, "Characterization of matrix-exponential distributions," Stochastic Models, vol. 24, no. 3, pp. 339-363, 2008. doi: 10.1080/15326340802232186.
[8] N. G. Bean, M. M. O'Reilly, and P. G. Taylor, "Hitting probabilities and hitting times for stochastic fluid flows," Stochastic Processes and their Applications, vol. 115, no. 9, pp. 1530-1556, 2005.
[9] N. G. Bean, M. M. O'Reilly, and P. G. Taylor, "Algorithms for the Laplace-Stieltjes transforms of first return times for stochastic fluid flows," Methodology and Computing in Applied Probability, vol. 10, no. 3, pp. 381-408, Sep. 2008.
[10] R. Bhatia, Matrix Analysis (Grad. Texts Math., vol. 169). New York, NY: Springer, 1996.
[11] P. Billingsley, Probability and Measure, 3rd ed. Chichester: John Wiley & Sons Ltd., 1995.
[12] T. Björk, Arbitrage Theory in Continuous Time. Oxford University Press, Mar. 2004.
[13] M. Bladt and B. F. Nielsen, Matrix-Exponential Distributions in Applied Probability. Springer New York, NY, May 2017.
[14] D. Braess and W. Hackbusch, "The approximation of Cauchy-Stieltjes and Laplace-Stieltjes functions," in Multiscale, Nonlinear and Adaptive Approximation II, Cham: Springer, 2024, pp. 115-143.
[15] A. M. Cohen, Numerical Methods for Laplace Transform Inversion (Numerical Methods and Algorithms). Springer-Verlag US, 2007.
[16] M. Crouzeix and C. Palencia, "The numerical range is a (1 + √2)-spectral set," SIAM J. Matrix Anal. Appl., vol. 38, no. 2, pp. 649-655, 2017. doi: 10.1137/17M1116672.
[17] S. Cuomo, L. D'Amore, A. Murli, and M. Rizzardi, "Computation of the inverse Laplace transform based on a collocation method which uses only real values," J. Comput. Appl. Math., vol. 198, no. 1, pp. 98-115, 2007.
[18] S. Cuomo, L. D'Amore, M. Rizzardi, and A. Murli, "A modification of Weeks' method for numerical inversion of the Laplace transform in the real case based on automatic differentiation," in Advances in Automatic Differentiation. Selected papers based on the presentations at the 5th international conference on automatic differentiation, Bonn, Germany, August 11-15, 2008, Berlin: Springer, 2008, pp. 45-54.
Murli, "A modification of Weeks' method for numerical inversion of the Laplace transform in the real case based on automatic differentiation," in Advances in Automatic Differentiation. Selected papers based on the presentations at the 5th international conference on automatic differentiation, Bonn, Germany, August 11-15, 2008, Berlin: Springer, 2008, pp. 45-54. [19] S.-I. Filip, Y. Nakatsukasa, L. N. Trefethen, and B. Beckermann, "Rational minimax approximation via adaptive barycentric representations," SIAM J. Sci. Comput., vol. 40, no. 4, a2427-a2455, 2018. [20] H.-L. G. Gau, K.-Z. Wang, and P. Y. Wu, "Numerical ranges of row stochastic matrices," Linear Algebra and its Applications, vol. 506, pp. 478-505, 2016. [21] D. P. Gaver, "Observing stochastic processes, and approximate transform inversion," Operations Research, vol. 14, no. 3, pp. 444-459, 1966. [22] K. E. Gustafson and D. K. M. Rao, Numerical Range. The Field of Values of Linear Operators and Matrices (Universitext). New York, NY: Springer, 1996. 47 [23] L. Hogben, Ed., Handbook of Linear Algebra (Discrete Math. Appl. (Boca Raton)), 2nd enlarged ed. Boca Raton, FL: Chapman & Hall/CRC, 2014. [24] G. Horv ́ath, I. Horv ́ath, S. A.-D. Almousa, and M. Telek, "Numerical inverse Laplace transformation using concentrated matrix exponential distributions," Performance Evaluation, vol. 137, p. 102 067, 2020. 102067. [25] G. Horv ́ath, I. Horv ́ath, and M. Telek, "High order concentrated matrixexponential distributions," Stochastic Models, vol. 36, no. 2, pp. 176-192, 2020. [26] I. Horv ́ath, O. S ́af ́ar, M. Telek, and B. Z ́amb ́o, "Concentrated matrix exponential distributions," in Computer Performance Engineering, D. Fiems, M. Paolieri, and A. N. Platis, Eds., Cham: Springer International Publishing, 2016, pp. 1831. [27] P. den Iseger, "Numerical transform inversion using gaussian quadrature," Probability in the Engineering and Informational Sciences, vol. 20, no. 1, pp. 1-44, 2006. [28] R. L. Karandikar and V. Kulkarni, "Second-order fluid flow models: Reflected Brownian motion in a random environment," Oper. Res, vol. 43, pp. 77-88, 1995. [29] T. Kimura, "American fractional lookback options: Valuation and premium decomposition," SIAM Journal on Applied Mathematics, vol. 71, no. 2, pp. 517539, 2011. [30] S. Kotz, C. B. Read, N. Balakrishnan, and B. Vidakovic, Eds., Encyclopedia of Statistical Sciences. 16 Vols. 2nd ed. Hoboken, NJ: John Wiley & Sons, 2006. [31] C. D. Meyer, Matrix Analysis and Applied Linear Algebra. USA: Society for Industrial and Applied Mathematics, 2000. [32] Y. Nakatsukasa, O. S`ete, and L. N. Trefethen, "The AAA algorithm for rational approximation," SIAM Journal on Scientific Computing, vol. 40, no. 3, A1494A1522, 2018. [33] L. C. G. Rogers, "Fluid models in queueing theory and Wiener-Hopf factorization of Markov chains," Ann. Appl. Probab., vol. 4, pp. 390-413, 1994. [34] S. M. Ross, An Elementary Introduction to Mathematical Finance, 3rd ed. Cambridge: Cambridge University Press, 2011. [35] S. M. Ross, Introduction to Probability Models, 13th edition. Amsterdam: Elsevier/Academic Press, 2023. [36] W. Rudin, Real and Complex Analysis, 3rd ed. New York, NY: McGraw-Hill, 1987. [37] J. L. Schiff, The Laplace Transform, Theory and Applications. Springer New York, NY, Oct. 1999, pp. XIV, 236. [38] R. L. Schilling, R. Song, and Z. Vondracek, Bernstein Functions: Theory and Applications. Berlin, Boston: De Gruyter, 2012. : 10 . 1515 / 9783110269338. [39] H. 
Stehfest, "Algorithm 368: Numerical inversion of Laplace transforms [D5]," Commun. ACM, vol. 13, no. 1, pp. 47-49, Jan. 1970. 361969. 48 [40] A. Talbot, "The accurate Numerical Inversion of Laplace Transforms," IMA Journal of Applied Mathematics, vol. 23, no. 1, pp. 97-120, Jan. 1979. [41] M. Telek, "Transient analysis of Markov modulated processes with Erlangization, ME-fication and inverse Laplace transformation," Stochastic Models, vol. 38, no. 4, pp. 638-664, 2022. [42] M. Tomas and R. Shukla, "Complete derivation of black-scholes option pricing formula," in Financial Derivatives, S. Kumar, Ed., New Delhi 110001: PHI Learning Pvt. Ltd., 2007, ch. Appendix, pp. 227-237. [43] L. N. Trefethen, J. A. C. Weideman, and T. Schmelzer, "Talbot quadratures and rational approximations," BIT, vol. 46, no. 3, pp. 653-670, 2006. 1007/s10543-006-0077-9. [44] V. Ramaswami, "Matrix analytic methods for stochastic fluid flows," in Teletraffic Engineering in a Competitive World (Proceedings of the 16th International Teletraffic Congress), D. Smith and P. Hey, Eds., Edinburgh, UK: Elsevier Science B.V., 1999, pp. 1019-1030. [45] J. Vlach, "Numerical method for transient responses of linear networks with lumped, distributed or mixed parameters," Journal of the Franklin Institute, vol. 288, no. 2, pp. 99-113, 1969. [46] J. A. C. Weideman, "Optimizing Talbot's contours for the inversion of the Laplace transform," SIAM Journal on Numerical Analysis, vol. 6, pp. 23422362, 44 2006. [47] C. J. Wellekens, "Generalisation of Vlach's method for the numerical inversion of the Laplace transform," Electronics Letters, vol. 6, pp. 742-744, 1970. [48] V. Zakian, "Numerical inversion of Laplace transform," Electronic Letters, vol. 5, pp. 120-121, Feb. 1969. [49] V. Zakian, "Optimisation of numerical inversion of Laplace transforms," Electronic Letters, vol. 6, pp. 677-679, 1970. 49
2510.14800
Morphology-Aware Prognostic Model for Five-Year Survival Prediction in Colorectal Cancer from H&E Whole Slide Images

Usama Sajjad1*, Abdul Rehman Akbar1, Ziyu Su1, Deborah Knight1, Wendy L. Frankel1, Metin N. Gurcan2, Wei Chen1, Muhammad Khalid Khan Niazi1

1Department of Pathology, The Ohio State University Wexner Medical Center, Columbus, OH, USA. 2Center for Artificial Intelligence Research, Wake Forest University School of Medicine, Winston-Salem, NC, USA.

Correspondence: Usama Sajjad, Email: Usama.Sajjad@osumc.edu

Abstract

Colorectal cancer (CRC) remains the third most prevalent malignancy globally, with approximately 154,000 new cases and 54,000 deaths projected for 2025. The recent advancement of foundation models in computational pathology has been largely propelled by task-agnostic methodologies that can overlook crucial organ-specific morphological patterns; these patterns represent distinct biological processes that can fundamentally influence tumor behavior, therapeutic response, and patient outcomes. The aim of this study is to develop a novel, interpretable AI model, PRISM (Prognostic Representation of Integrated Spatial Morphology), that incorporates a continuous variability spectrum within each distinct morphology to characterize phenotypic diversity, reflecting the principle that malignant transformation occurs through incremental evolutionary processes rather than abrupt phenotypic shifts. PRISM is trained on 8.74 million histological images extracted from surgical resection specimens of 424 patients with stage III CRC. PRISM achieved superior prognostic performance for five-year OS (AUC = 0.70 ± 0.04; accuracy = 68.37% ± 4.75%; HR = 3.34, 95% CI = 2.28-4.90; p < 0.0001), outperforming existing CRC-specific methods by 15% and AI foundation models by ~23% accuracy. It showed sex-agnostic robustness (AUC Δ = 0.02; accuracy Δ = 0.15%) and stable performance across clinicopathological subgroups, with minimal accuracy fluctuation (Δ = 1.44%) between 5FU/LV and CPT-11/5FU/LV regimens, replicating the Alliance cohort finding of no survival difference between treatments.

Introduction

Colorectal cancer (CRC) is the third most widespread cancer, with approximately 154,000 new cases and 54,000 deaths estimated for 2025 [3]. This epidemiological burden is characterized by pronounced stage-dependent prognostic stratification, with 5-year survival rates of 94.7% for Stage I, 88.4% for Stage II, 74.3% for Stage III, and 31.5% for Stage IV [4]. CRC is morphologically heterogeneous, yet current pathological assessment does not capture this morphological variability in a quantitative manner and lacks multivariate (multi-feature) assessments. For example, there are associations of grade, necrosis, and stroma with CRC prognosis [5-8], illustrating that these underutilized morphological features, if analyzed together through a more quantitative and multivariate approach, could provide deeper insights into patient outcomes and improve prognostic accuracy. Recently, tumor budding (TB) has been recognized as a marker of epithelial-mesenchymal transition and linked to poor prognosis [9]. It has also been shown that necrosis is an independent prognostic variable with respect to progression-free survival [5]. Finally, stroma-rich tumors and those with immature desmoplastic stroma are associated with worse outcomes [10].
These findings further underscore the prognostic value embedded in morphological characteristics; yet these morphological features remain underutilized in routine clinical assessments due to the lack of scalable, quantitative, and multivariate analysis tools. To address this gap, artificial intelligence (AI) systems have gained prominence, particularly with the digitization of tissue specimens into whole slide images (WSIs) [11, 12]. These technologies demonstrate robust performance in tumor classification, tumor subtyping, gene expression profiling, and automated analysis of nuclear/cellular features [12-17]. More recently, foundation models, trained using self-supervised learning on millions of WSIs, have emerged as a dominant paradigm in computational pathology [2, 18-21]. These models learn to identify generic visual patterns by minimizing feature distances between augmented versions of the same image (e.g., through color or rotation changes) [22]. However, this domain-agnostic training approach presents a critical limitation for CRC prognostication: it often overlooks organ-specific morphological features that reflect distinct biological processes and are essential for predicting tumor behavior, treatment response, and patient outcomes. As a result, despite their generalizability, these models may fall short in capturing the nuanced, morphology-driven prognostic signals crucial for CRC.

Given the morphological heterogeneity of CRC [23], a tool that fails to incorporate morphologically relevant heterogeneous features would be unable to adequately quantify and analyze the variability within distinct morphological regions [24]. While existing computational tools such as QuantCRC [24] have made significant contributions to CRC prognostication, they primarily focus on discrete quantification of morphological features without fully capturing the continuous spectrum of phenotypic variability present within distinct histological microenvironments within WSIs. This limitation is significant because cancer develops through gradual changes rather than sudden transformations. These continuous biological changes create a spectrum of cellular abnormalities rather than distinct categories, and these gradual morphological transitions directly influence tumor aggressiveness, metastatic potential, and therapeutic sensitivity. Therefore, an AI prognostic model is needed that integrates the diverse variability spectrum within each distinct morphology for prognostication.

To address these limitations, we propose PRISM (Prognostic Representation of Integrated Spatial Morphology), an AI model that integrates CRC morphologically relevant heterogeneous features for five-year overall survival (OS) prognostication (Figure 1). PRISM advances beyond binary morphological detection and quantification by incorporating a continuous variability spectrum that characterizes the phenotypic diversity within each distinct morphological pattern. This approach captures subtle gradations in cellular architecture and tissue organization that exist within nominally similar morphological categories. Unlike pathologists, who may label a region simply as "cancer," PRISM goes further by capturing the nuanced variations among neoplastic cells (for example, differences in nuclear morphology, gland formation, and architectural patterns) and leverages this detailed phenotypic diversity to enhance prognostication.
PRISM starts by training a morphology-informed classifier and extracting histological features for the following: (i) epithelial grade spectrum (adenocarcinoma high grade, adenocarcinoma low grade, adenoma high grade, adenoma low grade); (ii) serrated pathway precursors (sessile serrated lesion); (iii) tumor microenvironment (stroma, vessels, inflammation, necrosis); and (iv) evidence of submucosal invasion (muscle). PRISM captures biologically relevant information through a domain-specific branch and complements it with generic histopathological features extracted through a parallel branch, enabling a comprehensive and nuanced prognostic representation. This precise modeling of survival outcomes may enable more informed therapeutic decision-making, personalized treatment stratification, and improved resource allocation in clinical settings. Our study also reveals a fundamental limitation in the development of AI models for prognosis: conventional validation strategies such as K-fold cross-validation are demonstrably inadequate for robustly evaluating OS prediction models in histopathology. PRISM incorporates a comprehensive strategy that integrates clinical and pathological attributes prior to training and evaluation to minimize confounding effects. It also employs stratified evaluation across clinicopathological subgroups to ensure robust and equitable model performance. This rigorous approach enables PRISM to maintain prognostic accuracy across diverse clinical contexts and treatment scenarios.
Figure 1: An overview of our PRISM framework. (a) We first tessellate WSIs into n non-overlapping patches, with each patch undergoing dual feature extraction. (b) We perform cross-feature interaction between universal pathology features from UNI and morphology-aware features that encode tissue architecture and histopathological patterns. We then fuse these complementary feature representations f_{i,j} at the patch level to create comprehensive morphological embeddings. An attention mechanism computes importance scores for each patch feature f_{i,j} based on its prognostic relevance, enabling the model to focus on histologically relevant regions. We aggregate attention-weighted patch embeddings into a slide-level representation that captures the overall morphological landscape for five-year survival prediction. (c) During patch feature aggregation, we project features using two different neural networks (W_g, W_m) and aggregate the results to obtain morphology-aware features for each patch using W_fusion. (d) Based on the predicted probability, we train a time-to-event Cox proportional hazards model or perform risk stratification evaluated by the concordance index.
2. Results:
2.1. Deep Learning Predicts Colorectal Morphological Phenotypes and Enables Extraction of Morphology-Informed Features
The foundation of PRISM relies on accurate automated identification and quantification of distinct morphological phenotypes within colorectal histopathology. To establish this capability, we systematically validated our morphology-informed phenotyping on comprehensively annotated tissue patches from the Hist-AI colorectal dataset, ensuring robust performance across the diverse spectrum of morphological patterns encountered in CRC histopathology. Our analysis demonstrated that training across this comprehensive spectrum of tissue morphologies yields a highly robust deep learning model for morphological phenotyping, achieving an overall accuracy of 90.43% and an AUC of 0.890.
Critically, performance remained consistently high across all histological phenotypes, confirming the model's capacity to resolve the intrinsic morphological heterogeneity characteristic of CRC histopathology. This high-fidelity morphological phenotyping indicates that the learned feature representations encode biologically meaningful histopathological patterns. The resulting morphologically derived features capture essential organizational principles within the tumor microenvironment, enabling automated quantification of key histomorphometry parameters, including tumor burden, stromal composition, vascular density, and the spatial distribution of prognostically significant morphological components.
2.2. Prognostic Stratification of Stage III CRC Patients Using a Morphology-Aware Deep Learning Network
To evaluate the clinical utility and prognostic accuracy of PRISM, we conducted a comprehensive evaluation on the Alliance cohort [35]. Our assessment focused on quantifying improvements in survival prediction accuracy while examining the model's ability to maintain balanced performance across key clinical metrics. We compared PRISM against state-of-the-art multiple instance learning approaches (CLAM, TransMIL, ABMIL, RRT-MIL, Nakanishi et al.) [1, 29-32] and existing CRC prognostication methods to establish its relative performance and clinical significance. Additionally, we conducted a detailed analysis of sensitivity-specificity trade-offs and cross-validation stability to assess PRISM's readiness for clinical use.
First, we applied PRISM to the Alliance cohort (N=424 patients), demonstrating significantly enhanced prognostic stratification for five-year post-resection survival (Figure 2). Overall, PRISM achieved an AUC of 0.70 ± 0.04 and an accuracy of 68.37% ± 4.75%, a >9% improvement in accuracy over existing approaches (second-best: 58.84% ± 4.50%). We also compared PRISM with Nakanishi et al. [32], a method proposed specifically for predicting CRC recurrence, and demonstrated an improvement of 9% over their approach. Additionally, PRISM's superior performance was statistically significant compared to all baseline methods (p-values: ABMIL = 0.012, CLAM = 0.018, TransMIL < 0.0001, Nakanishi et al. < 0.0001, RRT-MIL < 0.0003). We also evaluated PRISM's performance using both sensitivity (accurate identification of patients who died within five years) and specificity (accurate detection of five-year survivors). As shown in Figure 2, PRISM achieved an optimal balance between these metrics, while comparison methods (CLAM [29], TransMIL [30], Ilse et al. [1], RRT-MIL [31], Nakanishi et al. [32]) exhibited a 10-20% difference between sensitivity and specificity. While threshold adjustment could improve this balance for individual methods, it inevitably compromises the opposing metric. The consistently lower standard deviation of our model compared to alternatives demonstrates superior robustness, highlighting the importance of incorporating relevant histopathological features.
Figure 2: Five-year survival prediction results in the Alliance cohort using five-fold cross-validation. (a) Area Under the Curve (AUC) values with standard deviations, (b) accuracy percentages with standard deviations, and (c) grouped comparison of sensitivity and specificity with other models.
Our model achieves the highest accuracy and balanced sensitivity and specificity, demonstrating superior performance compared to existing state-of-the-art methods (CLAM [29], TransMIL [30], ABMIL [1], RRT-MIL [31], Nakanishi et al. [32]).
2.3. Comparison of Five-Year Survival Prediction Using PRISM Across Different Treatments
The potential impact of different chemotherapeutic regimens on tumor biology necessitates examining prognostic model performance across treatment-specific cohorts. While the original Alliance trial found no significant survival difference between the FL and IFL treatment regimens, variations in model performance across regimens could indicate irrelevant learned features rather than intrinsic biological ones. We compared the performance of benchmark methods with PRISM by stratifying on treatment (FL vs IFL). The results are presented in Table 1. PRISM demonstrated robust performance with a high AUC (0.7160 ± 0.061) and the highest accuracy (68.21% ± 3.40), maintaining balanced sensitivity and specificity (64.82% ± 3.40 / 71.60% ± 8.50) on the Alliance cohort treated with FL only. In contrast, benchmark models exhibited critical limitations: TransMIL showed sensitivity collapse, while the high sensitivity variability and accuracy volatility of ABMIL, Nakanishi et al. [32], and RRT-MIL (±20.00 SD, ±11.43 SD) revealed a fundamental instability in learning morphologically meaningful features. On the Alliance cohort treated with IFL, PRISM maintained a consistent accuracy (66.77% ± 5.10) and competitive AUC (0.6846 ± 0.108), with a degradation of only ~0.030 AUC and 1.45% accuracy across the FL and IFL treatment groups, whereas benchmarks displayed substantial performance degradation: ABMIL's AUC decreased by ~7%, CLAM's accuracy dropped 10%, the accuracy of Nakanishi et al. [32] differed by 5% across FL/IFL treatment groups, and RRT-MIL's variability in correctly identifying patients who survived five years surged to ±25.00 SD.
Table 1: Five-year survival prediction results of PRISM stratified by FL/IFL treatments in the Alliance cohort using five-fold cross-validation. Each row reports the average performance with standard deviation.
Model | Treatment | AUC | Accuracy (%) | Sensitivity (%) | Specificity (%)
ABMIL [1] | FL | 0.65 ± 0.11 | 62.02 ± 13.76 | 51.93 ± 22.92 | 72.12 ± 08.70
ABMIL [1] | IFL | 0.58 ± 0.07 | 54.13 ± 05.61 | 45.34 ± 08.90 | 62.92 ± 06.89
CLAM [29] | FL | 0.61 ± 0.09 | 50.04 ± 05.96 | 14.12 ± 08.97 | 85.96 ± 08.47
CLAM [29] | IFL | 0.63 ± 0.15 | 60.89 ± 06.01 | 36.71 ± 13.97 | 85.07 ± 09.81
TransMIL [30] | FL | 0.75 ± 0.10 | 54.75 ± 08.37 | 10.33 ± 16.13 | 99.16 ± 01.66
TransMIL [30] | IFL | 0.62 ± 0.09 | 51.81 ± 03.63 | 3.63 ± 07.27 | 100.00 ± 00.00
Nakanishi et al. [32] | FL | 0.65 ± 0.16 | 61.12 ± 12.85 | 48.38 ± 21.23 | 73.85 ± 07.18
Nakanishi et al. [32] | IFL | 0.55 ± 0.07 | 56.02 ± 05.29 | 56.23 ± 10.32 | 55.80 ± 08.49
RRT-MIL [31] | FL | 0.59 ± 0.15 | 55.42 ± 08.54 | 43.45 ± 25.65 | 67.38 ± 25.52
RRT-MIL [31] | IFL | 0.55 ± 0.04 | 55.62 ± 03.61 | 46.90 ± 24.09 | 64.35 ± 25.62
PRISM | FL | 0.72 ± 0.06 | 68.21 ± 03.40 | 64.82 ± 03.40 | 71.60 ± 08.50
PRISM | IFL | 0.68 ± 0.11 | 66.77 ± 05.10 | 68.76 ± 13.90 | 64.77 ± 10.90
2.4. Morphology-Informed Features Correlate with Time-to-Event Survival
Beyond predicting the five-year classification, we evaluated PRISM's capacity for continuous risk stratification and survival prediction using established survival analysis metrics. This assessment provides critical insights into PRISM's clinical utility for patient counseling and treatment planning decisions.
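The hazard-ratio and concordance-index analysis reported below can be reproduced with standard survival tooling. The following is a minimal sketch assuming a hypothetical per-patient table with columns "time" (follow-up, months), "event" (1 = death observed), and "risk_score" (model output); it illustrates the metrics, not the released PRISM evaluation code.

```python
# Minimal sketch of continuous risk stratification metrics; column names
# ("time", "event", "risk_score") are illustrative assumptions.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

def risk_stratification_metrics(df: pd.DataFrame):
    df = df.copy()
    # Dichotomize at the median score to form the high-/low-risk groups
    # compared in the Kaplan-Meier analysis.
    df["high_risk"] = (df["risk_score"] >= df["risk_score"].median()).astype(int)

    # Cox proportional hazards on the binary group -> hazard ratio
    # (confidence bounds are available on the log scale via
    # cph.confidence_intervals_).
    cph = CoxPHFitter()
    cph.fit(df[["time", "event", "high_risk"]],
            duration_col="time", event_col="event")
    hazard_ratio = float(cph.hazard_ratios_["high_risk"])

    # Concordance index on the continuous score; lifelines expects higher
    # predictions to mean longer survival, hence the negation.
    c_index = concordance_index(df["time"], -df["risk_score"], df["event"])
    return hazard_ratio, c_index
```

Dichotomizing at the median is one common convention for two-arm Kaplan-Meier comparisons; the c-index, by contrast, is computed on the continuous score and so does not depend on any threshold.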
PRISM demonstrated remarkable performance in stratifying CRC patients into distinct prognostic risk categories using a Cox proportional hazards model, achieving a hazard ratio of 3.34 (95% CI: 2.28–4.90) and a concordance index (c-index) of 0.67 (Figure 4; Supplementary Table S6). This represents a significant improvement over all benchmark methods: CLAM (HR: 1.96, 95% CI: 1.30–2.98; c-index: 0.62), Nakanishi et al. (HR: 1.84, 95% CI: 1.27–2.65; c-index: 0.60), ABMIL (HR: 1.67, 95% CI: 1.16–2.41; c-index: 0.61), TransMIL (HR: 6.30, 95% CI: 3.05–13.03; c-index: 0.65), and RRT-MIL (HR: 1.47, 95% CI: 1.01–2.12; c-index: 0.56). While TransMIL showed a higher nominal hazard ratio, its broad confidence interval and reduced accuracy limit its utility for individualized risk stratification. Additionally, we visualized the Kaplan–Meier curves of overall survival probability over the five-year follow-up in Figure 3. PRISM achieved the highest hazard ratio of 3.34 (95% CI: 2.28–4.90) and a concordance index of 0.67. While benchmark methods such as CLAM [29] and Nakanishi et al. [32] show initial separation between risk groups, their curves demonstrate earlier convergence and less pronounced differences in survival probabilities; ABMIL [1] displays moderate separation but notable fluctuations suggesting less stable risk predictions, TransMIL [30] exhibits erratic curve behavior with wide confidence intervals despite a high nominal hazard ratio (6.30), and RRT-MIL [31] demonstrates the poorest performance, with minimal curve separation and the lowest hazard ratio (1.47).
Figure 3. Kaplan–Meier plots of overall survival probability over the five-year follow-up (one panel per model: ABMIL, CLAM, TransMIL, Nakanishi et al., RRT-MIL, PRISM).
Figure 4: Hazard ratios and concordance index (c-index) values in the Alliance cohort using five-fold cross-validation. (a) Hazard ratios with 95% confidence intervals for each model, and (b) concordance index (C-Index) values demonstrating each model's ability to rank patients by risk for survival prediction. Our model achieves the highest C-Index (0.6720) and a hazard ratio comparable to that of TransMIL [30] but with a smaller confidence interval, indicating superior prognostic performance compared to other state-of-the-art methods.
3. Discussion
This study presents PRISM, a novel morphology-informed AI prognostic model that significantly advances the field of AI-driven prognostication in CRC. Our findings demonstrate that incorporating specific morphological features into deep learning models yields substantial improvements over conventional approaches, while simultaneously revealing critical limitations in current model validation practices and highlighting the importance of addressing algorithmic bias in clinical AI applications (Supplementary Results 7.1-7.3).
3.1. Enhanced Prognostic Performance Through Novel, Morphology-Aware Feature Integration
The superior performance of PRISM, achieving an AUC of 0.70 ± 0.04 and accuracy of 68.37% ± 4.75%, represents a meaningful advance over existing methodologies. The ~9% improvement in accuracy over the second-best performing method translates to more precise risk stratification for approximately 1 in 10 patients, which could significantly impact clinical decision-making at population scales. More importantly, the hazard ratio of 3.34 (95% CI: 2.28–4.90) demonstrates robust discrimination between high- and low-risk patient populations, providing clinicians with actionable prognostic information that extends beyond traditional TNM staging.
The success of PRISM validates the hypothesis that CRC's inherent morphological heterogeneity contains prognostically relevant information that is inadequately captured by conventional histopathological evaluation. By explicitly training on the Hist-AI colorectal dataset [25] to recognize specific tissue morphologies—including the glandular carcinogenesis spectrum, serrated pathway precursors, tumor microenvironment characteristics, and invasion biomarkers—our model learns biologically meaningful features that align with established pathological knowledge. This targeted feature extraction addresses a fundamental limitation of foundation model pretraining, where self-supervised objectives are decoupled from the morphological patterns most relevant for prognostication.
Our findings also reproduce the Alliance cohort clinical trial results, confirming no significant survival differences between treatment groups (5FU/LV versus CPT-11/5FU/LV). Treatment-driven performance differences, by contrast, highlight critical flaws in benchmark feature representations (Table 1). Despite clinical evidence of equivalent survival outcomes, models trained solely on foundation-model-extracted features showed differences in AUC and accuracy, accompanied by high standard deviations. Such regimen-specific instability suggests these models capture irrelevant features. PRISM's minimal accuracy fluctuation (Δ = 1.44%) demonstrates alignment with the trial's finding of no survival benefit difference between treatment regimens.
The Kaplan–Meier survival curves provide compelling visual evidence of PRISM's superior prognostic stratification capabilities, demonstrating the most pronounced and consistent separation between high-risk and low-risk survival curves throughout the five-year follow-up period with minimal convergence, which directly corresponds to our quantitative metrics (Figure 3). The sustained separation observed in PRISM's curves has important clinical implications. The early differences in survival probabilities within the first year post-resection suggest that morphology-informed features can identify high-risk patients who may benefit from more aggressive adjuvant therapy or closer surveillance protocols, while the maintained separation throughout the five-year follow-up period indicates that these morphological patterns capture fundamental tumor biology characteristics that influence long-term outcomes rather than short-term artifacts. The superior survival curve separation achieved by PRISM can be attributed to its integration of CRC-specific morphological features with histopathological patterns, enabling quantification of tumor morphology and invasiveness, tumor microenvironment characteristics, and stromal patterns that capture biologically relevant information translating into meaningful prognostic differences. The smooth, consistent nature of PRISM's survival curves, without the erratic fluctuations observed in benchmark methods, indicates robust feature extraction and stable risk prediction crucial for clinical implementation; it suggests that the model's prognostic assessments are less susceptible to noise or artifacts in histopathological images, supporting its potential for reliable clinical decision-making in diverse patient populations. These findings collectively indicate that effective clinical implementation requires models that simultaneously optimize for: (1) balanced sensitivity/specificity, (2) cross-validation stability, and (3) biologically relevant feature learning.
3.2.
Our New Standards for Model Validation and Bias Mitigation
The balanced sensitivity (67.14% ± 8.26%) and specificity (68.86% ± 8.62%) achieved by PRISM address a critical challenge in clinical implementation, where extreme performance on one metric at the expense of the other can limit practical utility. Unlike benchmark methods, which exhibited 10-15% differences between these metrics, our AI prognostic model provides reliable predictions across both outcomes, correctly identifying patients at high risk of death within five years while minimizing false alarms that could lead to overtreatment.
The robustness of PRISM across diverse clinical subgroups strengthens its translational potential. The minimal performance variation across sex subgroups (AUC difference of only 0.02 between male and female patients; Supplementary Results: 7.2) and consistent performance across histological grades demonstrate that the model captures fundamental biological features rather than demographic artifacts. Similarly, the reproduction of the Alliance cohort clinical trial results, with only a 1.44% accuracy difference between the FL and IFL regimens, aligns with clinical trial evidence showing equivalent survival outcomes and suggests that our model learns treatment-independent morphological features.
Our analysis reveals a nuanced relationship between tumor location, sample size, and predictive performance that has important implications for clinical deployment (Supplementary Results: 7.1). The superior performance in sigmoid colon cases (n=149; AUC: 0.77 ± 0.06) compared to smaller anatomical subgroups suggests that both adequate sample representation and location-specific morphological patterns contribute to model reliability. The poor performance in hepatic flexure cases (n=28; AUC: 0.56 ± 0.30) likely reflects both limited training examples and potentially distinct biological characteristics of tumors arising in different anatomical locations. These findings highlight the need for location-stratified model development or the incorporation of anatomical location as an explicit feature in future iterations. From a practical standpoint, this suggests that initial clinical deployment might prioritize the most common tumor locations where adequate training data exist, with subsequent expansion as larger datasets become available for rarer anatomical sites.
Our study also exposes critical flaws in conventional validation approaches for survival prediction models in histopathology. The inadequacy of standard K-fold cross-validation for this application stems from its failure to account for the complex demographic and socioeconomic factors that can introduce subtle biases into histopathological data. The proposed stratified sampling approach based on demographic clustering represents a significant methodological advance that should be adopted more broadly in the field (Supplementary Results: 7.3). The discovery that AI models can inadvertently learn and amplify socioeconomic biases present in WSIs—factors often imperceptible to pathologists—represents a critical finding with broad implications for clinical AI development. These biases, linked to race, age, BMI, and income, can lead to disparate performance across patient populations, potentially exacerbating healthcare inequities. Our stratified K-fold validation approach provides a rigorous framework for identifying and mitigating these biases before clinical deployment.
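To make the idea concrete, the sketch below outlines one way to build such demographically clustered, outcome-stratified folds (the full procedure is detailed in Section 5.1). Column names ("age", "bmi", "income", "event") are illustrative assumptions, as is the requirement that each cluster contain at least n_splits patients of each outcome class; this is a sketch of the principle, not the released implementation.

```python
# Hypothetical sketch: cluster patients on standardized demographic and
# socioeconomic attributes, then stratify folds on outcome within each
# cluster so every fold preserves both demographic mix and event rate.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.model_selection import StratifiedKFold

def demographic_stratified_folds(df: pd.DataFrame, n_clusters: int = 4,
                                 n_splits: int = 5, seed: int = 0):
    # Cluster on age/BMI/income after standardization.
    z = StandardScaler().fit_transform(df[["age", "bmi", "income"]])
    clusters = KMeans(n_clusters=n_clusters, n_init=10,
                      random_state=seed).fit_predict(z)

    # Build outcome-stratified folds inside each cluster, then concatenate.
    folds = [[] for _ in range(n_splits)]
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for _, grp in df.assign(cluster=clusters).groupby("cluster"):
        for k, (_, test_idx) in enumerate(skf.split(grp, grp["event"])):
            folds[k].extend(grp.index[test_idx].tolist())
    return folds  # folds[k] holds the patient indices held out in fold k
```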
3.3. Limitations and Challenges
Our study introduces an interpretable AI prognostic model that incorporates morphology-informed features for five-year survival prediction and addresses a critical gap in the interpretability of computational pathology models. PRISM's interpretability stems from its ability to capture the continuous variability spectrum within distinct morphological patterns. This enables clinicians to understand the biological basis underlying model predictions, as each extracted feature correlates with specific tissue morphologies that pathologists can visually validate. The interpretable nature of our morphology-informed features provides insights into how patient subgroups differ at the time of surgical resection, potentially serving as prognostic biomarkers for survival prediction. These morphological signatures reflect underlying tumor biology, microenvironmental characteristics, and disease progression patterns that influence long-term outcomes.
Several limitations constrain our ability to establish definitive causal relationships between resection-time morphological features and survival outcomes, chief among them the temporal disconnect between surgical resection and treatment administration. Since surgical resection precedes extended adjuvant therapy protocols, the morphological features captured at resection may not fully account for treatment-related biological changes that occur over the subsequent months to years. This temporal gap introduces uncertainty regarding whether the identified morphological patterns represent stable prognostic features or ones that may be modified by subsequent therapeutic interventions.
The relationship between training data scale and model performance in computational pathology presents a compelling question that deserves thoughtful examination through the lens of therapeutic dynamics. The observation that training data scale may become less consequential when models lack treatment awareness is an intriguing consideration rooted in fundamental cancer biology. The temporal disconnect between surgical resection and subsequent adjuvant therapy administration creates a critical biological gap that even massive datasets cannot bridge. During this interval, profound cellular and molecular changes occur: chemotherapy-induced DNA damage responses alter cellular morphology, stromal remodeling modifies the tumor microenvironment architecture, and selective pressure from cytotoxic agents drives clonal evolution and the emergence of resistant subpopulations. These treatment-induced biological transformations include changes in tumor-infiltrating lymphocyte populations, vascular remodeling, fibroblast activation states, and epithelial-mesenchymal transition dynamics that fundamentally alter the morphological landscape that ultimately determines patient outcomes. This raises meaningful questions about whether a single, treatment-naive foundation model, regardless of its training scale, may face fundamental constraints in prognostic capacity given the dynamic nature of cancer biology. Such models primarily capture static morphological patterns representing a single temporal snapshot of tumor heterogeneity, cellular architecture, and microenvironmental organization, while remaining blind to the evolving biological processes that drive therapeutic resistance, metastatic progression, and treatment response.
The complex interplay between tumor-cell-intrinsic factors (oncogene addiction, DNA repair capacity, metabolic plasticity), extrinsic microenvironmental influences (immune cell infiltration, stromal composition, hypoxic gradients), and treatment-induced selective pressures creates a dynamic biological system that cannot be adequately represented by pre-treatment morphological features alone. This suggests that future developments might benefit from thoughtfully exploring multi-modal, temporally aware models that integrate morphological patterns with molecular signatures (genomic alterations, transcriptomic profiles, proteomic landscapes), treatment response biomarkers (circulating tumor DNA dynamics, immune activation markers), and longitudinal biological data across the entire therapeutic continuum, thereby capturing the full spectrum of cancer biology rather than simply scaling the size of foundation models.
Several limitations warrant consideration. First, our study focuses exclusively on stage III CRC patients from a single clinical trial, which limits generalizability to other stages and practice settings. The Alliance cohort [35], while well-characterized, represents a selected population that met specific trial inclusion criteria and may differ from routine clinical practice populations. Second, the morphology-aware classifier was trained on the Hist-AI colorectal dataset, which, while comprehensive, may not capture all morphological variations encountered in diverse clinical settings. Future work should explore training on larger, more diverse morphological datasets to enhance feature extraction capabilities. Third, our analysis reveals performance limitations in the less common anatomical subgroups of our patient population, suggesting the need for larger datasets or alternative modeling approaches for comprehensive coverage of all tumor locations. The development of meta-learning approaches that can effectively learn from limited data in rare anatomical locations represents an important future direction. Lastly, establishing robust interpretable biomarkers will require a more comprehensive data collection approach in future investigations. This would necessitate molecular profiling at multiple timepoints throughout the treatment continuum, enabling researchers to distinguish between morphological features that remain stable predictors and those that evolve with treatment response. Such holistic data collection strategies would strengthen the foundation for developing truly interpretable and clinically actionable prognostic biomarkers in CRC.
3.4. Broader Impact and Future Work
The integration of morphology-aware, high-level AI features with conventional histopathological characteristics represents a paradigm shift toward more sophisticated, biologically informed AI systems in oncology. Our AI prognostic model moves beyond the current limitations of "black box" deep learning models by incorporating domain-specific knowledge in a principled manner. The success of our AI prognostic model suggests broader applications across other cancer types where morphological heterogeneity plays a critical role in prognosis. The rigorous bias detection and mitigation strategies developed in this study provide a template for responsible AI development in healthcare. As AI systems become increasingly integrated into clinical workflows, the proactive identification and correction of algorithmic biases will be essential for ensuring equitable healthcare delivery.
Overall, PRISM presents a significant advancement in AI-driven CRC prognostication, demonstrating that morphology-aware, high-level feature extraction can substantially improve survival prediction accuracy. Beyond its technical contributions, this study establishes new standards for model validation and bias mitigation in clinical AI applications. PRISM's robust performance across diverse patient subgroups, combined with its principled approach to addressing algorithmic bias, positions it as a promising tool for clinical translation. Future work will focus on extension to other cancer stages and types, and on continued refinement of bias mitigation strategies to ensure equitable deployment across diverse patient populations.
4. Datasets and Preprocessing
4.1. Ethics Statement and Patient Cohorts
For our main analysis, we used data from Cancer and Leukemia Group B (CALGB) 89803, a randomized phase III trial that compared adjuvant weekly bolus IFL versus FL alone in 1,264 patients with completely resected stage III CRC (Identifier: NCT00003835; Registry Identifier: NCI-2012-01844; First Submitted: November 1, 1999; First Posted: April 27, 2004; Study Start: May 1999). CALGB is now part of the Alliance for Clinical Trials in Oncology, so this dataset is referred to as the Alliance cohort. Eligible patients had no prior chemotherapy/radiotherapy history, performance status 0–2, and initiated treatment within 21–56 days post-resection. Stratification factors included lymph node involvement (1–3 vs ≥4 nodes), histology grade, and preoperative CEA. The FL arm received the Roswell Park regimen (Leucovorin (LV) 500 mg/m² + 5-Fluorouracil (5FU) 500 mg/m² weekly × 6, every 8 weeks for 4 cycles). The IFL arm received Irinotecan (CPT-11) 125 mg/m² + LV 20 mg/m² + 5FU 500 mg/m² weekly × 4, every 6 weeks for 5 cycles. Primary endpoints were overall survival (OS) and disease-free survival (DFS). The trial was approved by institutional review boards at all participating centers in the cooperative groups CALGB, NCCTG, NCIC CTG, ECOG, and SWOG, and all patients provided written informed consent. Detailed characteristics of the dataset are provided in Table 2.
Table 2. Demographic characteristics of the Alliance cohort
Characteristic | Value
Number of slides | 431
Number of patients | 424
Mean age (yrs) | 60.47
Mean of household income median (USD) | 43194.59
Race: White / Black | 398 / 33
Sex: Male / Female | 240 / 191
Treatment: 5FU/LV / CPT-11/5FU/LV | 219 / 212
Zubrod performance scale: 0 / 1 / 2 | 328 / 98 / 2
Tumor location: Cecum | 101
Tumor location: Ascending colon | 64
Tumor location: Hepatic flexure | 28
Tumor location: Transverse colon | 46
Tumor location: Splenic flexure | 19
Tumor location: Descending colon | 19
Tumor location: Sigmoid colon | 149
Grade: I / II / III / IV | 20 / 300 / 108 / 0
Stage: I / II / III / IV / V | 6 / 42 / 350 / 8 / 20
Small blood/lymphatic vessel invasion: No / Yes | 285 / 138
Extramural vascular invasion: No / Yes | 387 / 29
Infiltrating border: No / Yes | 274 / 141
Five-year survival | Yes 29, Survived 328
4.2. Tissue Scanning and Quality Assurance Protocol
All tissue samples in the Alliance cohort were resected from one of the following tumor locations: cecum, ascending colon, hepatic flexure, transverse colon, splenic flexure, descending colon, or sigmoid colon. Following resection, all samples underwent formalin fixation and were subsequently stained with H&E. WSIs of the Alliance cohort were then acquired through digital scanning at 40× magnification (0.25 μm/pixel resolution) using an Aperio Digital Pathology Scanning system (Leica Biosystems) at The Ohio State University Wexner Medical Center, Columbus, Ohio.
This study received approval from the Ohio State University Institutional Review Board (IRB 2018C0098) with a waiver for informed consent. These WSIs were individually and manually reviewed by a trained expert pathologist to ensure that tumor was present. Over 98% of cases had one representative tumor slide/WSI for evaluation. Exclusion criteria included: WSIs with no tumor, lymph node tissue only, or mucinous histology (tumor composed of pools of mucin with floating tumor cells only), and, within the five-year deceased group, cases with death attributed to causes other than disease. Cases were also reviewed for quality to ensure correct tissue detection and coverage, color fidelity, focus/sharpness, and absence of scanning artifacts; WSIs with inadequate staining were also removed. This comprehensive quality control process resulted in the exclusion of 211 cases from the original Alliance cohort, leaving 424 cases for PRISM training and evaluation. Throughout the entire scanning and tissue quality assessment process, the reviewing pathologists remained blinded to all patient clinical information and outcomes. This rigorous filtering ensures that the dataset used for training PRISM is high-quality, relevant, and consistent, which is crucial for reliable prognostication.
5. Methods
5.1. Identification of Subgroups with Varied Survival Rates and Stratified Data Splitting for Effective Generalization
A critical aspect of developing robust prognostic models involves understanding the heterogeneity within patient populations and ensuring that model training and evaluation account for clinically relevant subgroups. Traditional approaches to model validation often overlook the inherent diversity in patient demographics and clinical characteristics, which can lead to biased performance estimates and reduced generalizability across different patient populations. To address this fundamental limitation, we conducted a comprehensive analysis of patient subgroups within our Alliance cohort to identify distinct populations with varying survival outcomes. We then implemented a stratified approach to data splitting that preserves the representation of these diverse patient characteristics. Our quantitative analysis of clinical data revealed distinct patient subgroups with different survival outcomes across multiple demographic and clinical parameters. We systematically grouped patients based on several key criteria and assessed five-year survival rates within each subgroup. Age stratification demonstrated significant prognostic value, with patients aged ≤65 years exhibiting substantially different five-year survival rates compared to those >65 years. This age-based survival disparity reflects the complex interplay between chronological age, comorbidity burden, treatment tolerance, and overall physiological reserve in CRC patients. Similarly, we identified significant survival variations based on socioeconomic factors, particularly income levels, which serve as a proxy for healthcare access, treatment compliance, and overall health status. The income-based stratification revealed distinct survival patterns that likely reflect disparities in healthcare quality, early detection rates, and access to optimal treatment regimens. Body mass index (BMI) stratification also demonstrated prognostic significance, with patients categorized into three distinct groups: BMI <25 kg/m², BMI 25-30 kg/m², and BMI ≥30 kg/m².
These BMI-based subgroups exhibited different five-year survival rates, potentially reflecting the complex relationship between nutritional status, metabolic health, treatment toxicity, and surgical outcomes in CRC patients. To ensure robust model development and evaluation, we implemented a stratified data splitting approach that maintained proportional representation of these clinically relevant subgroups across training and validation sets (Figure 5). As a result, PRISM prevents the inadvertent creation of training sets that over-represent or under-represent specific patient populations, thereby ensuring that PRISM's prognostic performance is evaluated against the full spectrum of patient diversity present in clinical practice. This approach addresses a critical limitation of conventional validation strategies, which may produce overly optimistic performance estimates by failing to account for population heterogeneity and potential subgroup-specific biases embedded within histopathological images.
Figure 5: Study population and cohort design. A flowchart illustrating the patient selection process for colon carcinoma cases, including exclusion criteria: whole-slide images (WSIs) with no tumor, lymph node tissue only, mucinous adenocarcinoma, non-disease-related deaths within five years, and cases lost to follow-up within five years. The final cohort included 424 patients with 431 WSIs, stratified into deceased (n=103) and survivor (n=321) groups based on five-year follow-up. To ensure balanced representation across varying survival rate groups, patients were clustered based on age, BMI, and income, and five folds were constructed within each cluster. These were then concatenated to form training, validation, and test sets.
5.2. Morphology-Informed Classifier and Feature Extractor
Current foundation models in computational pathology have demonstrated strong generalization across tasks [2, 18], but they do not explicitly encode morphological semantics. These models typically rely on self-supervised learning to capture broad visual representations, often overlooking the domain-specific morphological features (biomarkers) that are critical for accurate prognostication. In contrast, pathologists do recognize these patterns but tend to describe them in discrete terms—labeling regions as "neoplastic," "stromal," or "inflammatory"—without accounting for the continuous phenotypic variation that exists within these categories. Existing computational systems that attempt to extract morphological features often follow a two-step process: first, they classify tissue regions into predefined categories; then, they compute basic statistics (e.g., area, density) within those identified regions. While useful, this approach treats morphology as static and compartmentalized, failing to capture the gradual transitions and phenotypic diversity that reflect tumor evolution and biological complexity. PRISM addresses this critical gap by explicitly modeling the continuous morphological variability within and across histological regions. Rather than relying solely on categorical labels or generic feature extraction, PRISM learns to represent nuanced phenotypic differences—such as subtle changes in nuclear morphology, glandular architecture, and stromal composition—and integrates these with generic histopathological features to construct a more biologically faithful and prognostically powerful representation.
To extract these morphology-informed features, we developed a deep learning model trained on the publicly available Histopathology AI (Hist-AI) colorectal dataset [25]. The Hist-AI dataset contains 13 distinct tissue morphologies: high-grade adenocarcinoma, low-grade adenocarcinoma, high-grade adenoma, low-grade adenoma, fat, hyperplastic polyp, inflammation, mucin, smooth muscle, necrosis, sessile serrated lesion, stroma, and vascular structures. We used 224 × 224-pixel patches extracted at 20× magnification to train a morphology classification network. During training, PRISM not only learns to classify these morphologies but also captures the intra-class variability within each category. This enables the model to extract a diverse set of high-dimensional features that reflect the phenotypic spectrum within each morphological type. The network was optimized using cross-entropy loss and evaluated using five-fold cross-validation to ensure generalization. For downstream prognostication, the final classification layer was removed, and the penultimate fully connected layer was used as a feature extraction backbone (Figure 6). These learned features—rich in tissue-specific morphological information—serve as the foundation for PRISM's survival prediction model, enabling it to capture clinically relevant patterns that are often missed by traditional approaches.
Figure 6. Overview of PRISM's morphology classifier training pipeline. Whole slide images (WSIs) are first annotated by expert pathologists to identify 13 distinct morphological tissue regions (Hist-AI dataset). Histological patches are then extracted from these annotated regions and used to train a morphology classification module. Once trained, this module generates specialized morphology-informed features, which PRISM integrates with generic histopathological features to construct a robust, multi-faceted representation for five-year survival prediction in stage III CRC.
5.3. Morphology-Aware Survival Prediction
Building upon the morphology-informed feature extraction capabilities described above, PRISM integrates these domain-specific features with foundation model embeddings to construct a comprehensive AI prognostic model. Foundation models [2], trained on large-scale histological datasets using self-supervised learning, capture broad visual patterns but lack explicit encoding of biologically meaningful morphological semantics. In contrast, PRISM's morphology-informed features are derived from expert-annotated tissue regions and capture nuanced phenotypic variations—such as differences in nuclear morphology, glandular architecture, and stromal composition—that are critical for prognostication. By combining the generalizability of foundation models with the specificity of morphology-aware representations, PRISM creates a more biologically faithful and prognostically powerful feature space. To effectively model survival outcomes, PRISM employs a multiple instance learning (MIL) paradigm [1], which accommodates the variable number of tissue patches per WSI and learns to weigh the relative importance of different morphological patterns. This approach addresses the critical need for prognostic models that can capture the complex interplay between diverse tissue morphologies while accounting for the inherent spatial and phenotypic heterogeneity present within CRC slides. The resulting model not only improves predictive performance but also enhances interpretability by linking prognostic signals to specific morphological contexts.
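Before the formal definitions, the following minimal PyTorch sketch illustrates the patch-level cross-feature fusion and gated-attention MIL aggregation formalized in Eqs. (1)-(2) below. Dimensions and module names are illustrative assumptions (kept small, since the d × d interaction is memory-heavy at realistic embedding sizes), and the single linear fusion layer stands in for the learned projection T; this is a sketch of the mechanism, not the released implementation.

```python
# Hypothetical sketch of PRISM-style fusion + gated-attention MIL
# (gated attention per Ilse et al. [1]); small dims for illustration only.
import torch
import torch.nn as nn

class PrismAggregator(nn.Module):
    def __init__(self, d: int = 128, l: int = 64):
        super().__init__()
        self.fuse = nn.Linear(d * d, d)        # stand-in for T: R^{d x d} -> R^d
        self.V = nn.Linear(d, l, bias=False)   # V in R^{d x l}
        self.U = nn.Linear(d, l, bias=False)   # U in R^{d x l}
        self.W = nn.Linear(l, 1, bias=False)   # W in R^{l x 1}
        self.head = nn.Linear(d, 1)            # slide-level survival logit

    def forward(self, g: torch.Tensor, morph: torch.Tensor):
        # g: (num_patches, d) foundation-model features (e.g., from UNI);
        # morph: (num_patches, d) morphology-informed features for one WSI.
        inter = torch.einsum("pd,pe->pde", morph, g)   # f'_ij in R^{d x d}
        f = self.fuse(inter.flatten(1))                # fused f_ij in R^d
        gate = torch.tanh(self.V(f)) * torch.sigmoid(self.U(f))
        a = torch.softmax(self.W(gate), dim=0)         # attention a_ij, Eq. (2)
        z = (a * f).sum(dim=0)                         # slide embedding Z_i, Eq. (1)
        return self.head(z), a
```

The softmax over the patch axis makes the attention weights sum to one per slide, so slides with different patch counts yield comparably scaled slide-level embeddings.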
Here are the implementation details of PRISM. Let X = {X_1, X_2, ..., X_n} represent the n patient WSIs with five-year survival labels Y = {y_1, y_2, ..., y_n}, where y_i ∈ {0, 1} for i = 1, ..., n; y_i = 1 means the patient died within five years and y_i = 0 means the patient survived five years post-resection. We divide each WSI X_i into m patches, X_i = {x_{i1}, x_{i2}, ..., x_{im}}, with each patch x_{ij} representing a 224 × 224 area at 20× magnification; patches with less than 25% tissue were excluded using TRIDENT [26, 27]. Then, a vision-transformer-based feature extractor trained on histopathology (UNI) [2, 28] is used to extract features g_{ij} ∈ R^d from each patch x_{ij} of the WSI X_i. We also feed each patch x_{ij} through the morphology-informed feature extractor to obtain morphology-aware features m_{ij} ∈ R^d. Once both features (g_{ij} and m_{ij}) are extracted from each patch x_{ij}, we compute the cross-feature interaction of each morphology-aware feature with each foundation model feature to generate f'_{ij} ∈ R^{d×d} (Figure 1). Then, we use a neural network to project this interaction back to f_{ij} ∈ R^d, merging the morphology-informed features with the foundation model features. Specifically, we learn a transformation T: R^{d×d} → R^d that maximizes the mutual information I(f_{ij}; f'_{ij}) = I(T(f'_{ij}); f'_{ij}) while enforcing the dimensionality constraint d ≪ d × d. Since each slide X_i contains a different number of patches, and each patch may have varying prognostic significance, we employ multiple instance learning (MIL) aggregation [1] to combine the morphology-informed patch features and generate the slide-level representation Z_i (Eq. 1),

Z_i = \sum_{j=1}^{m} a_{ij} f_{ij}   (1)

where

a_{ik} = \frac{\exp\{ W^{T} (\tanh(V^{T} f_{ik}) \odot \mathrm{sigm}(U^{T} f_{ik})) \}}{\sum_{j=1}^{m} \exp\{ W^{T} (\tanh(V^{T} f_{ij}) \odot \mathrm{sigm}(U^{T} f_{ij})) \}}   (2)

is the importance score a_{ik} computed for each patch x_{ik}, which quantifies the relative prognostic relevance of patch x_{ik} with respect to the other patches within the WSI X_i (Eq. 2). Here, V ∈ R^{d×l}, U ∈ R^{d×l}, and W ∈ R^{l×1} are the learnable parameters of the attention network, l is the number of neurons in the hidden layer, and ⊙ denotes element-wise multiplication. This morphology-informed slide representation Z_i is subsequently used by a separate neural network to predict five-year survival. During training, all cross-modal interactions between generic histopathological features and morphology-informed features for each patch x_{ij} are learned and optimized in an end-to-end fashion, which enables optimization of the feature combinations most relevant for prognostic prediction.
5.4. Evaluation Protocols and Implementation Details
To train our morphology-informed feature extraction module, we utilized the publicly available Hist-AI colorectal dataset [25]. This dataset comprises 77,182 annotated histological patches derived from 1,719 H&E-stained WSIs, systematically categorized into 13 distinct morphological classes. For training, we used 70% of the histological patches from each morphological class, with 15% allocated for validation and the remaining 15% for testing. This stratified sampling approach ensured balanced representation across all tissue types in each data split. For training PRISM, we partitioned each WSI into 224 × 224 × 3 patches at 20× magnification using TRIDENT [26, 27] and excluded patches containing less than 25% tissue content.
Then, we extracted generic histopathological features using a foundation model (UNI) [2] to ensure consistent feature representation across all comparative approaches [1, 29-32]. For training the morphology-informed classifier, we used the Hibou foundation model [21] as a feature extractor, coupled with a multi-layer perceptron (MLP) head with fully connected layers of 512, 128, and 13 dimensions, using ReLU and SoftMax activation functions. Once the morphology-informed classifier was trained, we obtained morphology-informed features by truncating the model after the 512-dimensional layer to capture the hidden feature representations. Subsequently, we configured PRISM training with the following hyperparameters: learning rate of 2×10⁻⁵, Adam optimizer [33], Xavier uniform weight initialization [34], batch size of 1, and L1 regularization coefficient of 5×10⁻⁴. For comparative evaluation, we implemented baseline methods using the default parameters from their respective GitHub repositories, with the exception of RRT-MIL [31], for which we computed classification thresholds on the validation set to optimize performance evaluation. We omitted dropout regularization because our model incorporates L1 regularization to promote feature sparsity and prevent overfitting to irrelevant morphological patterns. We employed Xavier uniform initialization [34] to ensure appropriate gradient flow and stable training dynamics throughout the deep architecture, and maintained training reproducibility through a fixed random seed, enabling consistent results across multiple experimental iterations. We evaluated the performance of PRISM on: (i) the Alliance cohort; and (ii) a publicly available dataset, The Cancer Genome Atlas (TCGA) CRC cohort, a multicenter study encompassing patients with stage I–IV disease, predominantly from institutions across the United States. All histopathological images and associated clinical data from the TCGA study are publicly accessible through the Genomic Data Commons portal (https://portal.gdc.cancer.gov). For each cohort, we used 70% of patients for training, 10% for validation, and 20% for testing using stratified K-fold cross-validation within each cluster. Specifically, we trained our model separately on each cohort using cohort-specific training data and the morphology-informed feature extractor to evaluate performance. To compare the statistical significance between methods, a two-sided Wilcoxon signed-rank test was used.
Clinicaltrials.gov Identifier: NCT00003835 (CALGB 89803); Registry: CTRP (Clinical Trial Reporting Program); Registry Identifier: NCI-2012-01844
Support: The data from CALGB 89803 were obtained directly from the Alliance for Clinical Trials in Oncology, a National Clinical Trials Network cooperative group; however, all analyses and conclusions in this manuscript are the sole responsibility of the authors and do not necessarily reflect the opinions or views of the clinical trial investigators, the NCTN, the NCORP, or the NCI.
Author contributions
U.S. performed data preprocessing, experimental design, validation, and manuscript writing. A.R. and Z.S. provided feedback on study design, validation, and manuscript editing. W.F., W.C., and D.K. contributed clinical assessment, while W.F. and M.N.G. assisted with manuscript editing. W.F., A.R., W.C., D.K., U.S., M.K.K., and M.N.G. performed result analysis. W.C. and M.K.K.N.
conceptualized and designed the study, oversaw validation, supervised the research, and contributed to manuscript editing.
Acknowledgments
The authors gratefully acknowledge the Ohio Supercomputer Center for providing high-performance computing resources under its contract with The Ohio State University College of Medicine. We also thank the Department of Pathology and the Comprehensive Cancer Center at The Ohio State University for their valuable support.
Funding
The project described was supported in part by R01 CA276301 (PIs: Niazi and Chen) from the National Cancer Institute. The project was also supported in part by Pelotonia under IRP CC13702 (PIs: Niazi, Vilgelm, and Roy), The Ohio State University Department of Pathology, and the Comprehensive Cancer Center. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Cancer Institute, the National Institutes of Health, or The Ohio State University.
Competing Interests
All authors declare no financial or non-financial competing interests.
Code Availability
The underlying code for this study is available in the AI4Path PRISM repository and can be accessed via the following link: https://github.com/AI4Path-Lab/PRISM.
Ethics Approval and Consent to Participate
This study was reviewed and approved by the Institutional Review Board of Ohio State University (IRB #2018C0098). All procedures involving human data were conducted in accordance with the ethical standards of the institutional research committee, the national research regulations, and the 1964 Declaration of Helsinki and its later amendments. Given the retrospective nature of the study and because data were archival and de-identified, the requirement for informed consent to participate was formally waived by the Institutional Review Board.
References
1. Ilse, M., J. Tomczak and M. Welling. "Attention-based deep multiple instance learning." Presented at International Conference on Machine Learning, 2018. PMLR, 2127-36.
2. Chen, R. J., T. Ding, M. Y. Lu, D. F. Williamson, G. Jaume, A. H. Song, B. Chen, A. Zhang, D. Shao and M. Shaban. "Towards a general-purpose foundation model for computational pathology." Nature Medicine 30 (2024): 850-62.
3. Siegel, R. L., T. B. Kratzer, A. N. Giaquinto, H. Sung and A. Jemal. "Cancer statistics, 2025." CA: A Cancer Journal for Clinicians 75 (2025): 10.
4. Hong, Y., J. Kim, Y. J. Choi and J. G. Kang. "Clinical study of colorectal cancer operation: Survival analysis." Korean Journal of Clinical Oncology 16 (2020): 3.
5. Pollheimer, M. J., P. Kornprat, R. A. Lindtner, L. Harbaum, A. Schlemmer, P. Rehak and C. Langner. "Tumor necrosis is a new promising prognostic factor in colorectal cancer." Human Pathology 41 (2010): 1749-57.
6. Hugen, N., R. Verhoeven, S. Radema, I. De Hingh, J. Pruijt, I. Nagtegaal, V. Lemmens and J. De Wilt. "Prognosis and value of adjuvant chemotherapy in stage III mucinous colorectal carcinoma." Annals of Oncology 24 (2013): 2819-24.
7. Huh, J. W., J. H. Lee and H. R. Kim. "Prognostic significance of tumor-infiltrating lymphocytes for patients with colorectal cancer." Archives of Surgery 147 (2012): 366-72.
8. Mesker, W. E., J. M. Junggeburt, K. Szuhai, P. de Heer, H. Morreau, H. J. Tanke and R. A. Tollenaar. "The carcinoma–stromal ratio of colon carcinoma is an independent factor for survival compared to lymph node status and tumor stage." Analytical Cellular Pathology 29 (2007): 387-98.
9. Lugli, A., R. Kirsch, Y. Ajioka, F. Bosman, G. Cathomas, H. Dawson, H. El Zimaity, J.-F. Fléjou, T. P.
Hansen and A. Hartmann. "Recommendations for reporting tumor budding in colorectal cancer based on the International Tumor Budding Consensus Conference (ITBCC) 2016." Modern Pathology 30 (2017): 1299-311.
10. Fan, S., X. Cui, L. Zheng, W. Ma, S. Zheng, J. Wang, L. Qi and Z. Ye. "Prognostic value of desmoplastic stromal reaction, tumor budding and tumor-stroma ratio in stage II colorectal cancer." Journal of Gastrointestinal Oncology 13 (2022): 2903.
11. Niazi, M. K. K., A. V. Parwani and M. N. Gurcan. "Digital pathology and artificial intelligence." The Lancet Oncology 20 (2019): e253-e61.
12. Tavolara, T. E., Z. Su, M. N. Gurcan and M. K. K. Niazi. "One label is all you need: Interpretable AI-enhanced histopathology for oncology." Presented at Seminars in Cancer Biology, 2023. Elsevier, 97, 70-85.
13. Graham, S., Q. D. Vu, S. E. A. Raza, A. Azam, Y. W. Tsang, J. T. Kwak and N. Rajpoot. "Hover-Net: Simultaneous segmentation and classification of nuclei in multi-tissue histology images." Medical Image Analysis 58 (2019): 101563.
14. Hörst, F., M. Rempe, L. Heine, C. Seibold, J. Keyl, G. Baldini, S. Ugurel, J. Siveke, B. Grünwald and J. Egger. "CellViT: Vision transformers for precise cell segmentation and classification." Medical Image Analysis 94 (2024): 103143.
15. Tavolara, T. E., M. Niazi, A. C. Gower, M. Ginese, G. Beamer and M. N. Gurcan. "Deep learning predicts gene expression as an intermediate data modality to identify susceptibility patterns in Mycobacterium tuberculosis infected diversity outbred mice." EBioMedicine 67 (2021).
16. Su, Z., M. Rezapour, U. Sajjad, S. Niu, M. N. Gurcan and M. K. K. Niazi. "Cross-attention-based saliency inference for predicting cancer metastasis on whole slide images." IEEE Journal of Biomedical and Health Informatics (2024).
17. Sajjad, U., M. Rezapour, Z. Su, G. H. Tozbikian, M. N. Gurcan and M. K. K. Niazi. "NRK-ABMIL: Subtle metastatic deposits detection for predicting lymph node metastasis in breast cancer whole-slide images." Cancers 15 (2023): 3428.
18. Vorontsov, E., A. Bozkurt, A. Casson, G. Shaikovski, M. Zelechowski, S. Liu, K. Severson, E. Zimmermann, J. Hall and N. Tenenholtz. "Virchow: A million-slide digital pathology foundation model." arXiv preprint arXiv:2309.07778 (2023).
19. Filiot, A., P. Jacob, A. Mac Kain and C. Saillard. "Phikon-v2, a large and public feature extractor for biomarker prediction." arXiv preprint arXiv:2409.09173 (2024).
20. Wang, X., S. Yang, J. Zhang, M. Wang, J. Zhang, W. Yang, J. Huang and X. Han. "Transformer-based unsupervised contrastive learning for histopathological image classification." Medical Image Analysis 81 (2022): 102559.
21. Nechaev, D., A. Pchelnikov and E. Ivanova. "Hibou: A family of foundational vision transformers for pathology." arXiv preprint arXiv:2406.05074 (2024).
22. He, K., H. Fan, Y. Wu, S. Xie and R. Girshick. "Momentum contrast for unsupervised visual representation learning." Presented at Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020. 9729-38.
23. Budinská, E., M. Hrivňáková, T. C. Ivkovic, M. Madrzyk, R. Nenutil, B. Bencsiková, D. Al Tukmachi, M. Ručková, L. Z. Dubská and O. Slabý. "Molecular portraits of colorectal cancer morphological regions." eLife 12 (2023): RP86655.
24. Pai, R. K., I. Banerjee, S. Shivji, S. Jain, D. Hartman, D. D. Buchanan, M. A. Jenkins, D. F. Schaeffer, C. Rosty and J. Como. "Quantitative pathologic analysis of digitized images of colorectal carcinoma improves prediction of recurrence-free survival."
Gastroenterology 163 (2022): 1531-46.e8.
25. Nechaev, D., A. Pchelnikov and E. Ivanova. "HISTAI: An open-source, large-scale whole slide image dataset for computational pathology." arXiv preprint arXiv:2505.12120 (2025).
26. Zhang, A., G. Jaume, A. Vaidya, T. Ding and F. Mahmood. "Accelerating data processing and benchmarking of AI models for pathology." arXiv preprint arXiv:2502.06750 (2025).
27. Vaidya, A., A. Zhang, G. Jaume, A. H. Song, T. Ding, S. J. Wagner, M. Y. Lu, P. Doucet, H. Robertson and C. Almagro-Perez. "Molecular-driven foundation model for oncologic pathology." arXiv preprint arXiv:2501.16652 (2025).
28. Dosovitskiy, A., L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold and S. Gelly. "An image is worth 16x16 words: Transformers for image recognition at scale." arXiv preprint arXiv:2010.11929 (2020).
29. Lu, M. Y., D. F. Williamson, T. Y. Chen, R. J. Chen, M. Barbieri and F. Mahmood. "Data-efficient and weakly supervised computational pathology on whole-slide images." Nature Biomedical Engineering 5 (2021): 555-70.
30. Shao, Z., H. Bian, Y. Chen, Y. Wang, J. Zhang and X. Ji. "TransMIL: Transformer based correlated multiple instance learning for whole slide image classification." Advances in Neural Information Processing Systems 34 (2021): 2136-47.
31. Tang, W., F. Zhou, S. Huang, X. Zhu, Y. Zhang and B. Liu. "Feature re-embedding: Towards foundation model-level performance in computational pathology." Presented at Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024. 11343-52.
32. Nakanishi, R., K. i. Morooka, K. Omori, S. Toyota, Y. Tanaka, H. Hasuda, N. Koga, K. Nonaka, Q. Hu and Y. Nakaji. "Artificial intelligence-based prediction of recurrence after curative resection for colorectal cancer from digital pathological images." Annals of Surgical Oncology 30 (2023): 3506-14.
33. Kingma, D. P. and J. Ba. "Adam: A method for stochastic optimization." arXiv preprint arXiv:1412.6980 (2014).
34. Glorot, X. and Y. Bengio. "Understanding the difficulty of training deep feedforward neural networks." Presented at Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 2010. JMLR Workshop and Conference Proceedings, 249-56.
35. Saltz, L. B., D. Niedzwiecki, D. Hollis, R. M. Goldberg, A. Hantel, J. P. Thomas, A. L. Fields and R. J. Mayer. "Irinotecan fluorouracil plus leucovorin is not superior to fluorouracil plus leucovorin alone as adjuvant treatment for stage III colon cancer: Results of CALGB 89803." Journal of Clinical Oncology 25 (2007): 3456-61.
Supplementary Materials
7. Additional Results:
7.1. Tumor Location-Dependent Survival Prediction Using PRISM: Larger Patient Numbers Offset Batch Effects and Enhance Performance
To assess the generalizability and clinical applicability of PRISM across different colorectal anatomical sites, we conducted a comprehensive subgroup analysis stratified by tumor resection location. This evaluation is critical for understanding how morphological feature patterns and model performance vary across the diverse biological microenvironments within the colon, informing clinical deployment strategies and identifying potential limitations in site-specific prognostication.
Prognostic performance across the tumor-location cohorts correlated strongly with sample size (n): larger location-stratified patient groups yielded more reliable prognostic performance, although inherent biological differences in feature patterns across anatomical sites may also contribute to the variation. As detailed in Table S1 and Figure S1, PRISM performed best in sigmoid colon cancers (n=149; AUC: 0.77 ± 0.06; C-index: 0.75; HR: 3.88). In cecal cancers (n=101), PRISM posted moderate but stable metrics (AUC: 0.71 ± 0.11; C-index: 0.69; HR: 2.97), a performance drop despite adequate sampling. Performance in ascending colon cancers (n=64) remained nominally strong (AUC: 0.74 ± 0.21; C-index: 0.71; HR: 3.10) but exhibited high standard deviation. In transverse colon cancers (n=46), PRISM showed high sensitivity (84.52 ± 15.50) but critically low specificity (50.01 ± 17.46), suggesting that location-specific feature importance may skew predictions. Conversely, smaller cohorts revealed significant limitations: splenic flexure cancers (n=19) had the poorest AUC (0.42 ± 0.43) and C-index (0.49), while hepatic flexure (n=28) and descending colon cancers (n=19) showed near-random sensitivity (50.0 ± 50.0) and marginal hazard ratios (3.29 and 2.39) with wide 95% confidence intervals (Figure S1), indicating that both limited samples and possibly site-specific features may compromise model generalizability. Collectively, these results demonstrate that the reliability of deep learning prognostic models depends critically on cohort size and on tumor-microenvironment differences across locations, motivating location-aware feature extraction for clinically viable models.

Table S1: Five-year OS results of PRISM stratified by tumor location in the Alliance cohort using five-fold cross-validation. Each row reports the average performance with standard deviation. For each metric, the best result is shown in bold, and the second best is underlined.

Tumor location | AUC | Accuracy (%) | Sensitivity (%) | Specificity (%) | No. of samples (n)
Right colon:
Cecum | 0.71 ± 0.11 | 67.58 ± 6.70 | 67.85 ± 0.20 | 67.32 ± 11.00 | 101
Ascending Colon | 0.74 ± 0.21 | 72.46 ± 20.32 | 74.28 ± 0.24 | 70.64 ± 17.39 | 64
Hepatic Flexure | 0.56 ± 0.30 | 60.77 ± 22.22 | 60.95 ± 25.71 | 60.60 ± 22.81 | 28
Transverse Colon | 0.74 ± 0.19 | 67.26 ± 3.80 | 84.52 ± 15.50 | 50.01 ± 17.46 | 46
Right Colon (all) | 0.69 ± 0.05 | 68.08 ± 3.59 | 71.62 ± 10.12 | 64.53 ± 10.56 | 239
Left colon:
Splenic Flexure | 0.42 ± 0.43 | 65.83 ± 15.87 | 50.00 ± 50.00 | 81.66 ± 18.48 | 19
Descending Colon | 0.46 ± 0.27 | 64.58 ± 14.87 | 50.00 ± 50.00 | 79.16 ± 21.65 | 19
Sigmoid Colon | 0.77 ± 0.06 | 70.09 ± 3.70 | 64.69 ± 8.40 | 75.49 ± 12.29 | 149
Left Colon (all) | 0.71 ± 0.08 | 67.69 ± 5.69 | 61.39 ± 12.66 | 73.99 ± 10.03 | 187

Figure S1: Subgroup analysis of our model's performance stratified by colon tumor location in the Alliance cohort using five-fold cross-validation. (a) Hazard ratios with 95% confidence intervals for each anatomical site, and (b) concordance index (C-index) values demonstrating the model's ability to risk-stratify patients across different colon segments. PRISM achieved the highest C-index (0.7483) and a robust hazard ratio (3.88) in sigmoid colon cancers, while smaller cohorts from the splenic flexure and descending colon showed reduced performance, indicating that model reliability correlates with sample size and possibly anatomical site-specific biological features.
7.2. Robustness of PRISM Across Demographics and Histology

A critical requirement for the clinical deployment of AI-based prognostic models is demonstrated robustness across diverse populations and clinical contexts. To address concerns about generalization and bias, we conducted comprehensive subgroup analyses of PRISM within the Alliance cohort, evaluating performance consistency across demographic and pathological stratifications. These analyses confirm PRISM's superior stability and reliability compared to the benchmark methods (ABMIL, CLAM, Nakanishi et al., RRT-MIL).

PRISM exhibited strong consistency across sexes (Table S2), a key indicator of clinical robustness. Performance metrics remained stable between male (AUC: 0.69 ± 0.06; accuracy: 68.06% ± 5.00) and female (AUC: 0.71 ± 0.07; accuracy: 67.91% ± 3.20) cohorts, with minimal AUC variation (Δ = 0.02) and negligible accuracy fluctuation (0.15% difference). In contrast, the benchmarks showed significant sex-based instability: CLAM had a 5.5% accuracy drop between sexes, while Nakanishi et al. and RRT-MIL exhibited 1.5-2% accuracy reductions and extreme sensitivity variability (e.g., RRT-MIL sensitivity SD: ±30.91). This consistency underscores PRISM's resilience to sex-based distribution shifts, suggesting reliance on generalizable biological features rather than spurious correlations. Notably, PRISM achieved superior performance in female patients (highest AUC: 0.71 ± 0.07; highest accuracy: 67.91% ± 3.20), surpassing CLAM by 15% in accuracy while delivering better-balanced sensitivity/specificity (64.00% ± 11.00 / 71.81% ± 10.00 vs. 20.33% ± 12.13 / 85.16% ± 13.61) (Table S2). Its higher sensitivity is clinically critical for identifying high-risk female patients requiring aggressive intervention.

In male patients, PRISM outperformed the benchmarks in practical utility: although TransMIL posted a competitive AUC (0.67 ± 0.06), that figure is inflated by predicting the majority of samples as five-year survivors, whereas PRISM achieved roughly 8% higher accuracy than the next-best method (68.06% ± 5.00) with balanced sensitivity/specificity (69.70% ± 9.00 / 66.38% ± 11.00) and lower metric variability, avoiding the trade-offs observed in competitors.

Table S2: Five-year survival prediction results of PRISM stratified by sex in the Alliance cohort using five-fold cross-validation. Each row reports the average performance with standard deviation. For each metric, the best result is shown in bold, and the second-best is underlined.

Model | Sex | AUC | Accuracy (%) | Sensitivity (%) | Specificity (%)
ABMIL [1] | Female | 0.63 ± 0.11 | 57.86 ± 10.13 | 51.48 ± 17.68 | 64.25 ± 7.50
ABMIL [1] | Male | 0.61 ± 0.05 | 57.68 ± 5.75 | 46.04 ± 12.87 | 69.32 ± 7.46
CLAM [29] | Female | 0.61 ± 0.16 | 52.75 ± 9.29 | 20.33 ± 12.13 | 85.16 ± 13.61
CLAM [29] | Male | 0.63 ± 0.03 | 58.35 ± 3.95 | 31.38 ± 8.05 | 85.33 ± 5.95
TransMIL [30] | Female | 0.69 ± 0.12 | 54.18 ± 5.99 | 8.37 ± 11.99 | 100.00 ± 0.00
TransMIL [30] | Male | 0.67 ± 0.06 | 52.67 ± 6.19 | 6.00 ± 12.00 | 99.35 ± 1.20
Nakanishi et al. [32] | Female | 0.60 ± 0.11 | 58.32 ± 9.70 | 53.43 ± 20.25 | 63.21 ± 7.87
Nakanishi et al. [32] | Male | 0.61 ± 0.06 | 59.99 ± 5.78 | 56.01 ± 12.59 | 63.97 ± 3.65
RRT-MIL [31] | Female | 0.56 ± 0.07 | 54.11 ± 5.35 | 45.98 ± 30.91 | 62.24 ± 24.09
RRT-MIL [31] | Male | 0.59 ± 0.08 | 56.17 ± 6.29 | 44.19 ± 21.72 | 68.16 ± 25.79
PRISM | Female | 0.71 ± 0.07 | 67.91 ± 3.20 | 64.00 ± 11.00 | 71.81 ± 10.00
PRISM | Male | 0.69 ± 0.06 | 68.06 ± 5.00 | 69.70 ± 9.00 | 66.38 ± 11.00

PRISM also generalized robustly across histologic heterogeneity (Table S3).
Despite known inter-observer variability in grading, performance remained consistent across glandular differentiation grades:
• Grade I (well differentiated, n=20): high accuracy (81.25% ± 6.25) and AUC (0.75 ± 0.25), though the variability reflects the small sample.
• Grade II (moderately differentiated, n=297): stable accuracy (67.73% ± 3.31) and AUC (0.71 ± 0.03) with low standard deviation.
• Grade III (poorly differentiated, n=104): robust accuracy (67.25% ± 4.72) and AUC (0.70 ± 0.10).

This consistency across biologically distinct grades, particularly the stability in Grade II (the largest subgroup), demonstrates PRISM's capacity to learn generalizable pathologic features. PRISM establishes a new standard for robust prognostication, consistently matching or outperforming benchmarks across sexes and histologic grades while avoiding sensitivity-specificity trade-offs. Its minimal performance fluctuation in key subgroups (ΔAUC < 0.03 across sexes; < 4% accuracy variability in Grades II and III), superior accuracy in males, leading performance in the female cohort relative to the comparison methods, and adaptability to histologic heterogeneity directly address clinical concerns about generalization and bias. These attributes position PRISM as a uniquely reliable model for clinical translation in CRC prognosis.

Table S3: Five-year survival prediction results of PRISM stratified by grade in the Alliance cohort using five-fold cross-validation. Each row reports the average performance with standard deviation. For each metric, the best result is shown in bold, and the second-best is underlined.

Grade | AUC | Accuracy (%) | Sensitivity (%) | Specificity (%) | No. of patients (n)
I | 0.75 ± 0.25 | 81.25 ± 6.25 | 75.00 ± 25.00 | 87.50 ± 12.50 | 20
II | 0.71 ± 0.03 | 67.73 ± 3.31 | 65.11 ± 6.92 | 70.35 ± 8.99 | 297
III | 0.70 ± 0.10 | 67.25 ± 4.72 | 72.85 ± 18.90 | 61.64 ± 14.63 | 104

7.3. Enhanced Performance of Baseline Methods Using Add-on Module Integration and PRISM Superiority

To rigorously evaluate the impact of our proposed patient heterogeneity module, we applied this add-on enhancement to existing state-of-the-art methods and compared their performance against PRISM. Integrating our module consistently improved the prognostic accuracy of baseline models across multiple clinical subgroups, though PRISM maintained superior performance in all comparisons.

When applied to CLAM, our add-on module substantially improved its five-year survival prediction accuracy by 7% over baseline, by ensuring that model training incorporates clinically relevant subgroups (Supplementary Figure S2). In the FL (fluorouracil plus leucovorin [35]) treatment subgroup, CLAM's accuracy increased from 50.04 ± 5.96% to 63.29 ± 4.45%, a 13.25% absolute improvement with reduced variability (Supplementary Table S4). Similarly, in female patients, CLAM's accuracy improved from 52.75 ± 9.29% to 65.10 ± 8.70% (a 12.35% gain) (Supplementary Table S5). These enhancements did not, however, surpass PRISM's performance, which reached an accuracy of 68.21 ± 3.40% in the FL subgroup and 67.91 ± 3.20% in females, a further 4.92% and 2.81% improvement, respectively. ABMIL also demonstrated notable gains with our module. For FL-treated patients, accuracy improved from 62.02 ± 13.76% to 63.86 ± 6.30%, a 1.84% increase with a sharply reduced standard deviation (from 13.76% to 6.30%). Sex-stratified analysis also revealed a 3.85% absolute accuracy improvement in female patients (from 57.86 ± 10.13% to 61.71% ± 8.20).
Despite these improvements, PRISM still outperformed the enhanced ABMIL by 4.35% in the FL subgroup and 8.77% in the IFL (irinotecan plus fluorouracil and leucovorin [35]) subgroup. Nakanishi et al.'s method showed more modest gains, with FL accuracy improving from 61.12 ± 12.85 to 62.21 ± 4.70 and IFL accuracy increasing from 56.02 ± 5.29 to 58.83 ± 8.60. RRT-MIL exhibited variable behavior: FL accuracy improved from 55.42 ± 8.54 to 59.51 ± 11.11 (a 4.09% gain), but IFL accuracy decreased from 55.62 ± 3.61 to 51.18 ± 9.42 (a 4.44% drop), indicating instability.

The hazard ratio analysis further validated PRISM's advantage: it achieved a robust HR of 3.34 (95% CI: 2.28-4.90), roughly 1.5× greater risk discrimination than the state-of-the-art methods (Supplementary Figure S3). CLAM improved marginally (HR: 1.96 → 2.25) but failed to approach PRISM's precision, as the baseline architectures could not fully leverage the continuous morphological spectrum essential for robust risk stratification. PRISM's tight confidence interval (spanning 2.62, vs. TransMIL's 9.98) enables clinically actionable identification of high-risk patients with a 3.34× greater likelihood of death within five years.

Critically, the add-on module reduced performance variability across all baselines, as evidenced by lower standard deviations in key subgroups (Supplementary Table S4). For instance:
CLAM's standard deviation in the FL subgroup decreased from ±5.96% to ±4.45%.
ABMIL's FL-subgroup variability dropped sharply from ±13.76% to ±6.30%.
Nakanishi et al.'s FL-subgroup variability fell from ±12.85% to ±4.70%.
This confirms that accounting for population heterogeneity mitigates algorithmic bias and enhances model stability.

Figure S2: Five-year survival prediction results in the Alliance cohort using our proposed five-fold cross-validation method. (a) Area under the curve (AUC) values with standard deviations, (b) accuracy percentages with standard deviations, and (c) grouped comparison of sensitivity and specificity with other models. Our model achieves the highest accuracy and balanced sensitivity and specificity, demonstrating superior performance compared to existing state-of-the-art methods.

Table S4: Five-year survival prediction results of PRISM stratified by FL/IFL treatments in the Alliance cohort using our proposed validation strategy. Each row reports the average performance with standard deviation. For each metric within a treatment, the best result is shown in bold, and the second best is underlined.

Model | Treatment | AUC | Accuracy (%) | Sensitivity (%) | Specificity (%)
ABMIL [1] | FL | 0.7118 ± 0.058 | 63.86 ± 6.30 | 65.26 ± 17.11 | 62.46 ± 12.28
ABMIL [1] | IFL | 0.6576 ± 0.141 | 58.00 ± 10.85 | 61.76 ± 27.66 | 54.24 ± 15.40
CLAM [29] | FL | 0.6820 ± 0.044 | 63.29 ± 4.45 | 48.63 ± 9.40 | 77.96 ± 7.96
CLAM [29] | IFL | 0.6444 ± 0.112 | 59.85 ± 11.17 | 49.62 ± 22.04 | 70.09 ± 3.92
TransMIL [30] | FL | 0.7132 ± 0.057 | 51.89 ± 3.70 | 4.40 ± 8.80 | 99.35 ± 1.29
TransMIL [30] | IFL | 0.6091 ± 0.098 | 51.09 ± 2.10 | 3.30 ± 6.60 | 98.80 ± 2.20
Nakanishi et al. [32] | FL | 0.6914 ± 0.080 | 62.21 ± 4.70 | 56.24 ± 17.12 | 68.18 ± 11.43
Nakanishi et al. [32] | IFL | 0.6843 ± 0.128 | 58.83 ± 8.60 | 48.16 ± 15.27 | 69.51 ± 15.43
RRT-MIL [31] | FL | 0.6961 ± 0.036 | 59.51 ± 11.11 | 59.82 ± 32.59 | 59.19 ± 36.46
RRT-MIL [31] | IFL | 0.5962 ± 0.159 | 51.18 ± 9.42 | 46.30 ± 33.37 | 56.07 ± 42.83
PRISM | FL | 0.7160 ± 0.061 | 68.21 ± 3.40 | 64.82 ± 3.40 | 71.60 ± 8.50
PRISM | IFL | 0.6846 ± 0.108 | 66.77 ± 5.10 | 68.76 ± 13.90 | 64.77 ± 10.90

Figure S3: Hazard ratios and concordance index (C-index) values in the Alliance cohort using our proposed validation strategy.
(a) Hazard ratios with 95% confidence intervals for each model, and (b) concordance index (C-index) values demonstrating each model's ability to rank patients by risk for survival prediction. Our model achieves the highest C-index (0.6720) and a hazard ratio comparable to TransMIL's with a smaller confidence interval, indicating superior prognostic performance compared to other state-of-the-art methods.

Table S5: Five-year survival prediction results of PRISM stratified by sex in the Alliance cohort using our proposed validation strategy. Each row reports the average performance with standard deviation. For each metric, the best result is shown in bold, and the second-best is underlined.

Model | Sex | AUC | Accuracy (%) | Sensitivity (%) | Specificity (%)
ABMIL [1] | Female | 0.6816 ± 0.079 | 61.71 ± 8.20 | 60.87 ± 19.10 | 62.54 ± 13.14
ABMIL [1] | Male | 0.6906 ± 0.064 | 59.26 ± 5.10 | 64.51 ± 22.47 | 54.73 ± 15.35
CLAM [29] | Female | 0.7046 ± 0.070 | 65.10 ± 8.70 | 56.34 ± 16.78 | 73.85 ± 8.72
CLAM [29] | Male | 0.6377 ± 0.053 | 59.90 ± 5.50 | 45.87 ± 13.08 | 73.93 ± 5.80
TransMIL [30] | Female | 0.6699 ± 0.110 | 52.85 ± 5.70 | 5.70 ± 11.42 | 100.00 ± 0.00
TransMIL [30] | Male | 0.6541 ± 0.054 | 50.57 ± 1.10 | 2.80 ± 5.70 | 98.28 ± 3.40
Nakanishi et al. [32] | Female | 0.6796 ± 0.080 | 59.33 ± 6.20 | 46.70 ± 9.60 | 71.96 ± 14.18
Nakanishi et al. [32] | Male | 0.6672 ± 0.043 | 59.96 ± 2.60 | 54.05 ± 14.40 | 65.86 ± 13.73
RRT-MIL [31] | Female | 0.6299 ± 0.055 | 55.95 ± 7.08 | 52.46 ± 33.69 | 59.45 ± 37.99
RRT-MIL [31] | Male | 0.6535 ± 0.056 | 54.37 ± 8.67 | 51.38 ± 31.69 | 57.36 ± 40.05
PRISM | Female | 0.7120 ± 0.070 | 67.91 ± 3.20 | 64.00 ± 11.00 | 71.81 ± 10.00
PRISM | Male | 0.6869 ± 0.060 | 68.06 ± 5.00 | 69.70 ± 9.00 | 66.38 ± 11.00

Table S6: C-index values obtained on the TCGA-COADREAD dataset. All methods were trained using five-fold cross-validation.

Model | C-index
ABMIL | 0.6034 ± 0.0500
PANTHER | 0.5832 ± 0.0735
DSMIL | 0.5000 ± 0.0000
RRT-MIL | 0.5991 ± 0.0809
PRISM | 0.6268 ± 0.0612
Morphology-Aware Prognostic Model for Five-Year Survival Prediction in Colorectal Cancer from H&E Whole Slide Images

Usama Sajjad1*, Abdul Rehman Akbar1, Ziyu Su1, Deborah Knight1, Wendy L. Frankel1, Metin N. Gurcan2, Wei Chen1, Muhammad Khalid Khan Niazi1

1 . 2Center for Artificial Intelligence Research, Wake Forest University, Winston-Salem, NC, USA.
Correspondence: Usama Sajjad, Email:

Abstract

Colorectal cancer (CRC) remains the third most prevalent malignancy globally, with approximately 154,000 new cases and 54,000 deaths projected for 2025. The recent advancement of foundation models in computational pathology has been propelled largely by task-agnostic methodologies that can overlook crucial organ-specific morphological patterns; these patterns represent distinct biological processes that fundamentally influence tumor behavior, therapeutic response, and patient outcomes. The aim of this study is to develop a novel, interpretable AI model, PRISM (Prognostic Representation of Integrated Spatial Morphology), that incorporates a continuous variability spectrum within each distinct morphology to characterize phenotypic diversity, reflecting the principle that malignant transformation proceeds through incremental evolutionary processes rather than abrupt phenotypic shifts. PRISM is trained on 8.74 million histological images extracted from surgical resection specimens of 424 patients with stage III CRC. PRISM achieved superior prognostic performance for five-year OS (AUC = 0.70 ± 0.04; accuracy = 68.37% ± 4.75%; HR = 3.34, 95% CI = 2.28-4.90), a 9% improvement in accuracy over existing approaches (second best: 58.84% ± 4.50%). We also compared PRISM with Nakanishi et al. [32], which was proposed specifically for predicting CRC recurrence, and demonstrated a 9% improvement over their approach. Additionally, PRISM's superior performance was statistically significant compared to all baseline methods (p-values: ABMIL = 0.012, CLAM = 0.018, TransMIL ...).

In the study cohort, 98% of cases had one representative tumor slide/WSI for evaluation. Exclusion criteria included: WSIs with no tumor; lymph node tissue only; mucinous cases (tumor composed of pools of mucin with floating tumor cells only); and, within the five-year deceased group, deaths attributed to causes other than disease. Cases were also reviewed for quality to ensure correct tissue detection and coverage, color fidelity, focus/sharpness, and absence of scanning artifacts; WSIs with inadequate staining were likewise removed. This comprehensive quality control process resulted in the exclusion of 211 cases from the original Alliance cohort, leaving 424 cases for PRISM training and evaluation. Throughout the entire scanning and tissue quality assessment process, reviewing pathologists remained blinded to all patient clinical information and outcomes. This rigorous filtering ensures that the dataset used for training PRISM is high-quality, relevant, and consistent, which is crucial for reliable prognostication.

5. Methods

5.1. Identification of Subgroups with Varied Survival Rates and Stratified Data Splitting for Effective Generalization

A critical aspect of developing robust prognostic models involves understanding the heterogeneity within patient populations and ensuring that model training and evaluation account for clinically relevant subgroups.
Traditional approaches to model validation often overlook the inherent diversity in patient demographics and clinical characteristics, which can lead to biased performance estimates and reduced generalizability across patient populations. To address this fundamental limitation, we conducted a comprehensive analysis of patient subgroups within our Alliance cohort to identify distinct populations with varying survival outcomes, and we implemented a stratified approach to data splitting that preserves the representation of these diverse patient characteristics.

Our quantitative analysis of clinical data revealed distinct patient subgroups with different survival outcomes across multiple demographic and clinical parameters. We systematically grouped patients on several key criteria and assessed five-year survival rates within each subgroup. Age stratification demonstrated significant prognostic value, with patients aged ≤65 years exhibiting substantially different five-year survival rates compared to those >65 years. This age-based disparity reflects the complex interplay between chronological age, comorbidity burden, treatment tolerance, and overall physiological reserve in CRC patients. Similarly, we identified significant survival variations based on socioeconomic factors, particularly income level, which serves as a proxy for healthcare access, treatment compliance, and overall health status. The income-based stratification revealed distinct survival patterns that likely reflect disparities in healthcare quality, early detection rates, and access to optimal treatment regimens. Body mass index (BMI) stratification also demonstrated prognostic significance, with patients categorized into three groups: BMI <25 kg/m2, BMI 25-30 kg/m2, and BMI ≥30 kg/m2. These BMI-based subgroups exhibited different five-year survival rates, potentially reflecting the complex relationship between nutritional status, metabolic health, treatment toxicity, and surgical outcomes in CRC patients.

To ensure robust model development and evaluation, we implemented a stratified data-splitting approach that maintained proportional representation of these clinically relevant subgroups across training and validation sets (Figure 5; a splitting sketch follows the figure caption below). As a result, PRISM avoids training sets that over- or under-represent specific patient populations, ensuring that its prognostic performance is evaluated against the full spectrum of patient diversity present in clinical practice. This addresses a critical limitation of conventional validation strategies, which may produce overly optimistic performance estimates by failing to account for population heterogeneity and potential subgroup-specific biases embedded within histopathological images.

Figure 5: Study population and cohort design. A flowchart illustrating the patient selection process for colon carcinoma cases, including exclusion criteria: whole-slide images (WSIs) with no tumor, lymph node tissue only, mucinous adenocarcinoma, non-disease-related deaths within five years, and cases lost to follow-up within five years. The final cohort included 424 patients with 431 WSIs, stratified into deceased (n=103) and survivor (n=321) groups based on five-year follow-up. To ensure balanced representation across varying survival-rate groups, patients were clustered based on age, BMI, and income, and five folds were constructed within each cluster. These were then concatenated to form training, validation, and test sets.
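A minimal sketch of this splitting protocol is given below, under stated assumptions: KMeans with four clusters stands in for the paper's rule-based age/BMI/income grouping, and each cluster is assumed to contain at least five members of each survival class (otherwise StratifiedKFold raises an error).

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.model_selection import StratifiedKFold

def cluster_stratified_folds(clinical: pd.DataFrame, survived_5y: np.ndarray,
                             n_folds: int = 5, seed: int = 0):
    """clinical: per-patient age/BMI/income columns; survived_5y: 0/1 labels.
    Build label-stratified folds inside each clinical cluster, then concatenate."""
    z = (clinical - clinical.mean()) / clinical.std()          # normalize before clustering
    clusters = KMeans(n_clusters=4, random_state=seed, n_init=10).fit_predict(z)
    folds = [[] for _ in range(n_folds)]
    for c in np.unique(clusters):
        idx = np.flatnonzero(clusters == c)
        skf = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=seed)
        for k, (_, test) in enumerate(skf.split(idx, survived_5y[idx])):
            folds[k].extend(idx[test].tolist())                # cluster-local -> global folds
    return folds
```

Cross-validation then rotates the concatenated folds, so every global fold carries the same mix of age, BMI, and income subgroups.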
5.2. Morphology-Informed Classifier and Feature Extractor

Current foundation models in computational pathology have demonstrated strong generalization across tasks [2, 18], but they do not explicitly encode morphological semantics. These models typically rely on self-supervised learning to capture broad visual representations, often overlooking the domain-specific morphological features (biomarkers) that are critical for accurate prognostication. Pathologists, in contrast, do recognize these patterns but tend to describe them in discrete terms, labeling regions as "neoplastic," "stromal," or "inflammatory," without accounting for the continuous phenotypic variation that exists within these categories. Existing computational systems that attempt to extract morphological features often follow a two-step process: first they classify tissue regions into predefined categories, then they compute basic statistics (e.g., area, density) within those regions. While useful, this approach treats morphology as static and compartmentalized, failing to capture the gradual transitions and phenotypic diversity that reflect tumor evolution and biological complexity. PRISM addresses this critical gap by explicitly modeling the continuous morphological variability within and across histological regions. Rather than relying solely on categorical labels or generic feature extraction, PRISM learns to represent nuanced phenotypic differences, such as subtle changes in nuclear morphology, glandular architecture, and stromal composition, and integrates these with generic histopathological features to construct a more biologically faithful and prognostically powerful representation.

To extract these morphology-informed features, we developed a deep learning model trained on the publicly available HistAI colorectal dataset [25]. The HistAI dataset contains 13 distinct tissue morphologies: high-grade adenocarcinoma, low-grade adenocarcinoma, high-grade adenoma, low-grade adenoma, fat, hyperplastic polyp, inflammation, mucin, smooth muscle, necrosis, sessile serrated lesion, stroma, and vascular structures. We used 224 × 224-pixel patches extracted at 20× magnification to train a morphology classification network. During training, PRISM not only learns to classify these morphologies but also captures the intra-class variability within each category, enabling it to extract a diverse set of high-dimensional features that reflect the phenotypic spectrum within each morphological type. The network was optimized using cross-entropy loss and evaluated with five-fold cross-validation to ensure generalization. For downstream prognostication, the final classification layer was removed and the penultimate fully connected layer was used as a feature-extraction backbone (Figure 6); a training sketch follows the figure caption below. These learned features, rich in tissue-specific morphological information, serve as the foundation for PRISM's survival prediction model, enabling it to capture clinically relevant patterns that are often missed by traditional approaches.

Figure 6. Overview of PRISM's morphology classifier training pipeline. Whole slide images (WSIs) are first annotated by expert pathologists to identify 13 distinct morphological tissue regions (HistAI dataset). Histological patches are then extracted from these annotated regions and used to train a morphology classification module. Once trained, this module generates specialized morphology-informed features, which PRISM integrates with generic histopathological features to construct a robust, multi-faceted representation for five-year survival prediction in stage III CRC.
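The sketch below illustrates the classifier's training loop under stated assumptions: the hypothetical `backbone` callable and `feat_dim` stand in for the frozen patch-feature extractor (the paper uses the Hibou foundation model [21], whose loading details are omitted here), and, following the text, the 512-dimensional hidden activation serves as the morphology-informed feature after training.

```python
import torch
import torch.nn as nn

class MorphologyHead(nn.Module):
    """MLP head (feat_dim -> 512 -> 128 -> 13) on top of frozen patch features.
    After training, the 512-d hidden activation is the morphology-informed feature."""
    def __init__(self, feat_dim: int, n_classes: int = 13):
        super().__init__()
        self.fc1 = nn.Linear(feat_dim, 512)    # truncate here for morphology features
        self.fc2 = nn.Linear(512, 128)
        self.out = nn.Linear(128, n_classes)   # SoftMax is folded into CrossEntropyLoss

    def forward(self, feats, return_morph: bool = False):
        h = torch.relu(self.fc1(feats))
        if return_morph:
            return h                            # 512-d morphology-informed feature
        return self.out(torch.relu(self.fc2(h)))

def train_step(backbone, head, optimizer, patches, labels):
    with torch.no_grad():                       # keep the foundation backbone frozen
        feats = backbone(patches)               # (B, feat_dim) patch embeddings
    loss = nn.functional.cross_entropy(head(feats), labels)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```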
5.3. Morphology-Aware Survival Prediction

Building upon the morphology-informed feature extraction capabilities described above, PRISM integrates these domain-specific features with foundation model embeddings to construct a comprehensive AI prognostic model. Foundation models [2], trained on large-scale histological datasets using self-supervised learning, capture broad visual patterns but lack explicit encoding of biologically meaningful morphological semantics. In contrast, PRISM's morphology-informed features are derived from expert-annotated tissue regions and capture nuanced phenotypic variations, such as differences in nuclear morphology, glandular architecture, and stromal composition, that are critical for prognostication. By combining the generalizability of foundation models with the specificity of morphology-aware representations, PRISM creates a more biologically faithful and prognostically powerful feature space. To model survival outcomes, PRISM employs a multiple instance learning (MIL) paradigm [1], which accommodates the variable number of tissue patches per WSI and learns to weigh the relative importance of different morphological patterns. This addresses the critical need for prognostic models that capture the complex interplay between diverse tissue morphologies while accounting for the inherent spatial and phenotypic heterogeneity within CRC slides. The resulting model not only improves predictive performance but also enhances interpretability by linking prognostic signals to specific morphological contexts.

The implementation details of PRISM are as follows. Let X = {X_1, X_2, ..., X_n} denote the n patient WSIs with five-year survival labels Y = {y_1, y_2, ..., y_n}, where y_i ∈ {0, 1} for i = 1, ..., n; y_i = 1 means the patient died within five years and y_i = 0 means the patient survived five years post-resection. Each WSI X_i is divided into m patches, X_i = {x_i1, x_i2, ..., x_im}, with each patch x_ij covering a 224 × 224 area at 20× magnification; patches with less than 25% tissue were excluded using TRIDENT [26, 27]. A vision-transformer-based feature extractor trained on histopathology (UNI) [2, 28] extracts features g_ij ∈ R^d from each patch x_ij of WSI X_i. In parallel, each patch x_ij is fed through the morphology-informed feature extractor to obtain morphology-aware features m_ij ∈ R^d. Given both features (g_ij and m_ij) for each patch x_ij, we compute the cross-feature interaction of each morphology-aware feature with each foundation-model feature to generate f'_ij ∈ R^{d×d} (Figure 1). A neural network then projects these back to f_ij ∈ R^d, merging the morphology-informed features with the foundation-model features. Specifically, we learn a transformation T: R^{d×d} → R^d that maximizes the mutual information I(f'_ij; f_ij) = I(f'_ij; T(f'_ij)) while compressing the d × d interaction space to d dimensions (d ≪ d × d); a sketch of this interaction step is given below.
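A minimal sketch of the cross-feature interaction follows, assuming f'_ij is the outer product of the two d-dimensional embeddings and that T is a single learned linear projection; the paper does not pin down T's exact architecture, so this is one plausible instantiation.

```python
import torch
import torch.nn as nn

class CrossFeatureInteraction(nn.Module):
    """Outer-product interaction of foundation (g) and morphology (m) features,
    projected back to d dimensions (a stand-in for the transformation T)."""
    def __init__(self, d: int):
        super().__init__()
        # For large d, the d*d input may need a bottleneck first to stay tractable.
        self.project = nn.Linear(d * d, d)                      # T: R^{d x d} -> R^d

    def forward(self, g, m):                                    # g, m: (num_patches, d)
        f_prime = torch.einsum("pi,pj->pij", g, m)              # (num_patches, d, d)
        return self.project(f_prime.flatten(start_dim=1))       # f: (num_patches, d)
```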
Since each slide X_i contains a different number of patches, and each patch may have varying prognostic significance, we employ multiple instance learning (MIL) aggregation [1] to combine the morphology-informed patch features into the slide-level representation Z_i (Eq. 1):

Z_i = \sum_{j=1}^{m} a_{ij} f_{ij}   (1)

a_{ik} = \exp\big( W^\top (\tanh(V^\top f_{ik}) \odot \mathrm{sigm}(U^\top f_{ik})) \big) \Big/ \sum_{j=1}^{m} \exp\big( W^\top (\tanh(V^\top f_{ij}) \odot \mathrm{sigm}(U^\top f_{ij})) \big)   (2)

Here a_{ik} is the importance score computed for each patch x_{ik}, quantifying its prognostic relevance relative to the other patches within WSI X_i (Eq. 2); V ∈ R^{d×l}, U ∈ R^{d×l}, and W ∈ R^{l×1} are the learnable parameters of the attention network, l is the number of neurons in the hidden layer, and ⊙ denotes element-wise multiplication. This morphology-informed slide representation Z_i is subsequently passed to a separate neural network to predict five-year survival. During training, all cross-modal interactions between generic histopathological and morphology-informed features for each patch x_ij are learned and optimized end to end, which steers the model toward the feature combinations most relevant for prognostic prediction. A sketch of this gated-attention pooling follows.
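The block below is a minimal PyTorch rendering of Eqs. (1)-(2), i.e., the gated-attention MIL pooling of Ilse et al. [1]; dimension names follow the text (d features, l hidden units), while the final risk head is a hypothetical single linear layer standing in for the "separate neural network" mentioned above.

```python
import torch
import torch.nn as nn

class GatedAttentionMIL(nn.Module):
    """Eqs. (1)-(2): attention-weighted pooling of patch features into a slide vector."""
    def __init__(self, d: int, l: int):
        super().__init__()
        self.V = nn.Linear(d, l, bias=False)    # tanh branch
        self.U = nn.Linear(d, l, bias=False)    # sigmoid (gate) branch
        self.W = nn.Linear(l, 1, bias=False)    # scores the gated hidden vector
        self.risk = nn.Linear(d, 1)             # hypothetical survival head on Z_i

    def forward(self, f):                       # f: (m, d) patch features of one WSI
        gated = torch.tanh(self.V(f)) * torch.sigmoid(self.U(f))   # (m, l)
        a = torch.softmax(self.W(gated).squeeze(-1), dim=0)        # (m,), Eq. (2)
        z = (a.unsqueeze(-1) * f).sum(dim=0)                       # (d,), Eq. (1)
        return self.risk(z), a                  # survival logit, attention map
```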
5.4. Evaluation Protocols and Implementation Details

To train our morphology-informed feature extraction module, we utilized the publicly available HistAI colorectal dataset [25], which comprises 77,182 annotated histological patches derived from 1,719 H&E-stained WSIs, systematically categorized into 13 distinct morphological classes. We used 70% of the histological patches from each morphological class for training, 15% for validation, and the remaining 15% for testing; this stratified sampling ensured balanced representation of all tissue types in each data split. For training PRISM, we partitioned each WSI into 224 × 224 × 3 patches at 20× magnification using TRIDENT [26, 27] and excluded patches containing less than 25% tissue. We then extracted generic histopathological features using a foundation model (UNI) [2] to ensure consistent feature representation across all comparative approaches [1, 29-32]. For the morphology-informed classifier, we implemented the Hibou foundation model [21] as a feature extractor, coupled with a multi-layer perceptron (MLP) head whose fully connected layers have 512, 128, and 13 dimensions, using ReLU activations and a SoftMax output. Once the classifier was trained, we obtained morphology-informed features by truncating the model after the 512-dimensional layer to capture the hidden feature representations. PRISM was trained with the following hyperparameters: learning rate 2×10^-5, Adam optimizer [33], Xavier uniform weight initialization [34], batch size 1, and L1 regularization coefficient 5×10^-4. For comparative evaluation, we implemented baseline methods using default parameters from their respective GitHub repositories, with the exception of RRT-MIL [31], for which we computed classification thresholds on the validation set to optimize performance evaluation. We omitted dropout regularization because L1 regularization already promotes feature sparsity and prevents overfitting to irrelevant morphological patterns; Xavier uniform initialization [34] ensured appropriate gradient flow and stable training dynamics throughout the deep architecture, and a fixed random seed maintained training reproducibility, enabling consistent results across experimental iterations.

We evaluated the performance of PRISM on (i) the Alliance cohort and (ii) a publicly available dataset from The Cancer Genome Atlas (TCGA) CRC cohort, a multicenter study encompassing patients with stage I-IV disease, predominantly from institutions across the United States. All histopathological images and associated clinical data from the TCGA study are publicly accessible through the Genomic Data Commons portal (https://portal.gdc.cancer.gov). For each cohort, we used 70% of patients for training, 10% for validation, and 20% for testing with stratified K-fold cross-validation within each cluster. Specifically, we trained our model separately on each cohort, using cohort-specific training data and the morphology-informed feature extractor, to evaluate performance. To compare statistical significance between methods, a two-sided Wilcoxon signed-rank test was used.

ClinicalTrials.gov identifier: NCT00003835 (CALGB 89803); Registry: CTRP (Clinical Trial Reporting Program); Registry identifier: NCI-2012-01844.

Support: The data from CALGB 89803 were obtained directly from the Alliance for Clinical Trials in Oncology, a National Clinical Trials Network cooperative group; however, all analyses and conclusions in this manuscript are the sole responsibility of the authors and do not necessarily reflect the opinions or views of the clinical trial investigators, the NCTN, the NCORP, or the NCI.

Author contributions
U.S. performed data preprocessing, experimental design, validation, and manuscript writing. A.R. and Z.S. provided feedback on study design, validation, and manuscript editing. W.F., W.C., and D.K. contributed clinical assessment, while W.F. and M.N.G. assisted with manuscript editing. W.F., A.R., W.C., D.K., U.S., M.K.K.N., and M.N.G. performed result analysis. W.C. and M.K.K.N. conceptualized and designed the study, oversaw validation, supervised the research, and contributed to manuscript editing.

Acknowledgments
The authors gratefully acknowledge the Ohio Supercomputer Center for providing high-performance computing resources under its contract with The Ohio State University.

Funding
The project described was supported in part by R01 CA276301 (PIs: Niazi and Chen) from the National Cancer Institute. The project was also supported in part by Pelotonia under IRP CC13702 (PIs: Niazi, Vilgelm, and Roy) at The Ohio State University. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Cancer Institute, the National Institutes of Health, or The Ohio State University.

Competing Interests
All authors declare no financial or non-financial competing interests.

Code Availability
The underlying code for this study is available in the AI4Path PRISM repository and can be accessed via the following link: https://github.com/AI4Path-Lab/PRISM.

Ethics Approval and Consent to Participate
This study was reviewed and approved by the Institutional Review Board of The Ohio State University (IRB #2018C0098). All procedures involving human data were conducted in accordance with the ethical standards of the institutional research committee, national research regulations, and the 1964 Declaration of Helsinki and its later amendments. Given the retrospective nature of the study and because data were archival and de-identified, the requirement for informed consent to participate was formally waived by the Institutional Review Board.
Presented at International conference on machine learning, 2018. PMLR, 2127-36. 2. Chen, R. J., T. Ding, M. Y. Lu, D. F. Williamson, G. Jaume, A. H. Song, B. Chen, A. Zhang, D. Shao and M. Shaban. "Towards a general-purpose foundation model for computational pathology." Nature Medicine 30 (2024): 850-62. 3. Siegel, R. L., T. B. Kratzer, A. N. Giaquinto, H. Sung and A. Jemal. "Cancer statistics, 2025." Ca 75 (2025): 10. 4. Hong, Y., J. Kim, Y. J. Choi and J. G. Kang. "Clinical study of colorectal cancer operation: Survival analysis." Korean Journal of Clinical Oncology 16 (2020): 3. 5. Pollheimer, M. J., P. Kornprat, R. A. Lindtner, L. Harbaum, A. Schlemmer, P. Rehak and C. Langner. "Tumor necrosis is a new promising prognostic factor in colorectal cancer." Human pathology 41 (2010): 1749-57. 6. Hugen, N., R. Verhoeven, S. Radema, I. De Hingh, J. Pruijt, I. Nagtegaal, V. Lemmens and J. De Wilt. "Prognosis and value of adjuvant chemotherapy in stage iii mucinous colorectal carcinoma." Annals of oncology 24 (2013): 2819-24. 7. Huh, J. W., J. H. Lee and H. R. Kim. "Prognostic significance of tumor-infiltrating lymphocytes for patients with colorectal cancer." Archives of surgery 147 (2012): 366-72. 8. Mesker, W. E., J. M. Junggeburt, K. Szuhai, P. de Heer, H. Morreau, H. J. Tanke and R. A. Tollenaar. "The carcinoma-stromal ratio of colon carcinoma is an independent factor for survival compared to lymph node status and tumor stage." Analytical Cellular Pathology 29 (2007): 387-98. 9. Lugli, A., R. Kirsch, Y. Ajioka, F. Bosman, G. Cathomas, H. Dawson, H. El Zimaity, J.-F. Fléjou, T. P. Hansen and A. Hartmann. "Recommendations for reporting tumor budding in colorectal cancer based on the international tumor budding consensus conference (itbcc) 2016." Modern pathology 30 (2017): 1299-311. 10. Fan, S., X. Cui, L. Zheng, W. Ma, S. Zheng, J. Wang, L. Qi and Z. Ye. "Prognostic value of desmoplastic stromal reaction, tumor budding and tumor-stroma ratio in stage ii colorectal cancer." Journal of Gastrointestinal Oncology 13 (2022): 2903. 11. Niazi, M. K. K., A. V. Parwani and M. N. Gurcan. "Digital pathology and artificial intelligence." The lancet oncology 20 (2019): e253-e61. 12. Tavolara, T. E., Z. Su, M. N. Gurcan and M. K. K. Niazi. "One label is all you need: Interpretable ai-enhanced histopathology for oncology." Presented at Seminars in Cancer Biology, 2023. Elsevier, 97, 70-85. 13. Graham, S., Q. D. Vu, S. E. A. Raza, A. Azam, Y. W. Tsang, J. T. Kwak and N. Rajpoot. "Hover-net: Simultaneous segmentation and classification of nuclei in multi-tissue histology images." Medical image analysis 58 (2019): 101563. 14. Hörst, F., M. Rempe, L. Heine, C. Seibold, J. Keyl, G. Baldini, S. Ugurel, J. Siveke, B. Grünwald and J. Egger. "Cellvit: Vision transformers for precise cell segmentation and classification." Medical Image Analysis 94 (2024): 103143. 15. Tavolara, T. E., M. Niazi, A. C. Gower, M. Ginese, G. Beamer and M. N. Gurcan. "Deep learning predicts gene expression as an intermediate data modality to identify susceptibility patterns in mycobacterium tuberculosis infected diversity outbred mice." EBioMedicine 67 (2021): 16. Su, Z., M. Rezapour, U. Sajjad, S. Niu, M. N. Gurcan and M. K. K. Niazi. "Cross-attention-based saliency inference for predicting cancer metastasis on whole slide images." IEEE Journal of Biomedical and Health Informatics (2024): 17. Sajjad, U., M. Rezapour, Z. Su, G. H. Tozbikian, M. N. Gurcan and M. K. K. Niazi. 
"Nrk-abmil: Subtle metastatic deposits detection for predicting lymph node metastasis in breast cancer whole-slide images." Cancers 15 (2023): 3428. 18. Vorontsov, E., A. Bozkurt, A. Casson, G. Shaikovski, M. Zelechowski, S. Liu, K. Severson, E. Zimmermann, J. Hall and N. Tenenholtz. "Virchow: A million-slide digital pathology foundation model." arXiv preprint (2023): 19. Filiot, A., P. Jacob, A. Mac Kain and C. Saillard. "Phikon-v2, a large and public feature extractor for biomarker prediction." arXiv preprint (2024): 20. Wang, X., S. Yang, J. Zhang, M. Wang, J. Zhang, W. Yang, J. Huang and X. Han. "Transformer-based unsupervised contrastive learning for histopathological image classification." Medical image analysis 81 (2022): 102559. 21. Nechaev, D., A. Pchelnikov and E. Ivanova. "Hibou: A family of foundational vision transformers for pathology." arXiv preprint (2024): 22. He, K., H. Fan, Y. Wu, S. Xie and R. Girshick. "Momentum contrast for unsupervised visual representation learning." Presented at Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2020. 9729-38. 23. Budinská, E., M. Hrivňáková, T. C. Ivkovic, M. Madrzyk, R. Nenutil, B. Bencsiková, D. Al Tukmachi, M. Ručková, L. Z. Dubská and O. Slabý. "Molecular portraits of colorectal cancer morphological regions." Elife 12 (2023): RP86655. 24. Pai, R. K., I. Banerjee, S. Shivji, S. Jain, D. Hartman, D. D. Buchanan, M. A. Jenkins, D. F. Schaeffer, C. Rosty and J. Como. "Quantitative pathologic analysis of digitized images of colorectal carcinoma improves prediction of recurrence-free survival." Gastroenterology 163 (2022): 1531-46. e8. 25. Nechaev, D., A. Pchelnikov and E. Ivanova. "Histai: An open-source, large-scale whole slide image dataset for computational pathology." arXiv preprint (2025): 26. Zhang, A., G. Jaume, A. Vaidya, T. Ding and F. Mahmood. "Accelerating data processing and benchmarking of ai models for pathology." arXiv preprint (2025): 27. Vaidya, A., A. Zhang, G. Jaume, A. H. Song, T. Ding, S. J. Wagner, M. Y. Lu, P. Doucet, H. Robertson and C. Almagro-Perez. "Molecular-driven foundation model for oncologic pathology." arXiv preprint (2025): 28. Dosovitskiy, A., L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold and S. Gelly. "An image is worth 16x16 words: Transformers for image recognition at scale." arXiv preprint (2020): 29. Lu, M. Y., D. F. Williamson, T. Y. Chen, R. J. Chen, M. Barbieri and F. Mahmood. "Data-efficient and weakly supervised computational pathology on whole-slide images." Nature biomedical engineering 5 (2021): 555-70. 30. Shao, Z., H. Bian, Y. Chen, Y. Wang, J. Zhang and X. Ji. "Transmil: Transformer based correlated multiple instance learning for whole slide image classification." Advances in neural information processing systems 34 (2021): 213647. 31. Tang, W., F. Zhou, S. Huang, X. Zhu, Y. Zhang and B. Liu. "Feature re-embedding: Towards foundation modellevel performance in computational pathology." Presented at Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024. 11343-52. 32. Nakanishi, R., K. i. Morooka, K. Omori, S. Toyota, Y. Tanaka, H. Hasuda, N. Koga, K. Nonaka, Q. Hu and Y. Nakaji. "Artificial intelligence-based prediction of recurrence after curative resection for colorectal cancer from digital pathological images." Annals of Surgical Oncology 30 (2023): 3506-14. 33. Diederik, K. "Adam: A method for stochastic optimization." (No Title) (2014): 34. Glorot, X. and Y. 
Bengio. "Understanding the difficulty of training deep feedforward neural networks." Presented at Proceedings of the thirteenth international conference on artificial intelligence and statistics, 2010. JMLR Workshop and Conference Proceedings, 249-56. 35. Saltz, L. B., D. Niedzwiecki, D. Hollis, R. M. Goldberg, A. Hantel, J. P. Thomas, A. L. Fields and R. J. Mayer. "Irinotecan fluorouracil plus leucovorin is not superior to fluorouracil plus leucovorin alone as adjuvant treatment for stage iii colon cancer: Results of calgb 89803." Journal of Clinical Oncology 25 (2007): 3456-61. Supplementary Materials 7. Additional Results: 7.1. Tumor Location-Dependent Survival Prediction Using PRISM: Larger Patient Numbers Offset Batch Effects and Enhance Performance To assess the generalizability and clinical applicability of PRISM across different colorectal anatomical sites, we conducted comprehensive subgroup analysis stratified by tumor resection location. This evaluation is critical for understanding how morphological feature patterns and model performance vary across the diverse biological microenvironments within the colon, informing clinical deployment strategies and identifying potential limitations in site-specific prognostication. The prognostic performance across colon tumor resection cohorts exhibited a strong correlation with total sample size (n), where larger patient groups stratified by tumor location yield more reliable prognostic performance, though inherent biological differences in feature patterns across anatomical locations may also contribute to performance variations. As detailed in Table S1 and Figure S1, in patients with sigmoid colon cancers (n=149), PRISM achieved optimal performance (AUC: 0.77 ± 0.06; C-index: 0.75; HR: 3.88). Following this, in cecal cancers (n=101), PRISM demonstrated moderate but stable metrics (AUC: 0.71 ± 0.11; C-index: 0.69; HR: 2.97), showing the performance drop despite adequate sampling. This performance decline accelerated with ascending colon cancers (n=64), where results remained robust (AUC: 0.74 ± 0.21; C-index: 0.71; HR: 3.10) but exhibited high standard deviation. In transverse colon cancers (n=46), PRISM demonstrated high sensitivity (84.52 ± 15.5) but critically low specificity (50.01 ± 17.46), suggesting location-specific feature importance may skew predictions. Conversely, smaller cohorts revealed significant limitations: splenic flexure cancers (n=19) had the poorest AUC (0.42 ± 0.43) and C-index (0.49), while hepatic flexure (n=28) and descending colon cancers (n=19) showed near-random sensitivity (50.0 ± 50.0) and marginal hazard ratios (3.29 and 2.39) with large 95% confidence intervals (Figure S1), indicating that both limited samples and possibly local specific features may compromise model generalizability. These results collectively demonstrate that reliability of deep learning models depends critically on cohort size, and tumor microenvironment differences across locations, necessitating location-specific feature extraction for clinically viable models. Table S1: Five-year OS results of PRISM stratified by tumor location in the Alliance cohort using five-fold cross-validation. Each row reports the average performance with standard deviation. For each metric, the best result is shown in bold, and the second best is underlined. 
Right Colon Tumor location AUC Accuracy (%) Sensitivity (%) Specificity (%) No of samples (n) Cecum 0.71 ± 0.11 67.58 ± 6.70 67.85 ± 0.20 67.32 ± 11.00 101 Ascending Colon 0.74 ± 0.21 72.46 ± 20.32 74.28 ± 0.24 70.64 ± 17.39 64 Hepatic Flexure 0.56 ± 0.30 60.77 ± 22.22 60.95 ± 25.71 60.60 ± 22.81 28 Transverse Colon 0.74 ± 0.19 67.26 ± 3.80 84.52 ± 15.5 50.01 ± 17.46 46 Right Colon 0.69 ± 0.05 68.08 ± 3.59 71.62 ± 10.12 64.53 ± 10.56 239 Left Colon Splenic Flexure 0.42 ± 0.43 65.83 ± 15.87 50.00 ± 50.00 81.66 ± 18.48 19 Descending Colon 0.46 ± 0.27 64.58 ± 14.87 50.00 ± 50.00 79.16 ± 21.65 19 Sigmoid Colon 0.77 ± 0.06 70.09 ± 3.70 64.69 ± 8.40 75.49 ± 12.29 149 Left Colon 0.71 ± 0.08 67.69 ± 5.69 61.39 ± 12.66 73.99 ± 10.03 187 Figure S1: Subgroup analysis of our model's performance stratified by colon tumor location in the Alliance cohort using five-fold cross-validation. (a) Hazard ratios with 95% confidence intervals for each anatomical site, and (b) Concordance Index (C-Index) values demonstrating model's ability to risk-stratify patients across different colon segments. PRISM achieved the highest C-Index (0.7483) and robust hazard ratio (3.88) in sigmoid colon cancers, while smaller cohorts from splenic flexure and descending colon showed reduced performance, indicating that model reliability correlates with sample size and possibly anatomical site-specific biological features. 7.2.Robustness of PRISM Across Demographics and Histology A critical requirement for clinical deployment of AI-based prognostic models is demonstrated robustness across diverse populations and clinical contexts. To address concerns about generalization and bias, we conducted comprehensive subgroup analyses of PRISM within the Alliance cohort, evaluating performance consistency across demographic and pathological stratifications. These analyses confirm PRISM's superior stability and reliability compared to benchmark methods (ABMIL, CLAM, Nakanishi et al., RRT-MIL). PRISM exhibited strong consistency across sexes (Table S2), a key indicator of clinical robustness. Performance metrics remained stable between male (AUC: 0.69 ± 0.06; accuracy: 68.06% ± 5.00) and female (AUC: 0.71 ± 0.08; accuracy: 67.91% ± 3.20) cohorts, with minimal AUC variation (Δ = 0.02) and negligible accuracy fluctuation (0.15% difference). In contrast, benchmarks showed significant sex-based instability: CLAM had a 5.5% accuracy drop between sexes, while Nakanishi et al. and RRT-MIL exhibited 1.5-2% accuracy reductions and extreme sensitivity variability (e.g., RRT-MIL sensitivity SD: ±30.91). This consistency underscores PRISM's resilience to sex-based distribution shifts, suggesting reliance on generalizable biological features rather than spurious correlations. Notably, PRISM achieved superior performance in female patients (highest AUC: 0.71 ± 0.078; highest accuracy: 67.91% ± 3.20), surpassing CLAM by 15% in accuracy while delivering better-balanced sensitivity/specificity (64.00% ± 11.0/71.81% ± 10.00 vs. 20.33% ± 12.13/85.16% ± 13.61) (Table S2). Its higher sensitivity is clinically critical for identifying high-risk female patients requiring aggressive intervention. 
In male patients, PRISM outperformed benchmarks in practical utility: while TransMIL had a marginally higher AUC (0.67 ± 0.06), its AUC is influenced by predicting majority of the samples as five-year survivors, whereas PRISM achieved ~8% higher accuracy (68.06% ± 5.00) with balanced sensitivity/specificity (69.7% ± 9.00/66.58% ± 11.00) and lower metric variability, avoiding trade-offs observed in competitors. Table S2: Five-year survival prediction results of PRISM stratified by sex in the Alliance cohort using five-fold cross-validation. Each row reports the average performance with standard deviation. For each metric, the best result is shown in bold, and the second-best is underlined. Model Sex AUC Accuracy (%) Sensitivity (%) Specificity (%) ABMIL [29] Female 0.63 ± 0.11 57.86 ± 10.13 51.48 ± 17.68 64.25 ± 07.50 Male 0.61 ± 0.05 57.68 ± 05.75 46.04 ± 12.87 69.32 ± 07.46 CLAM [1] Female 0.61 ± 0.16 52.75 ± 09.29 20.33 ± 12.13 85.16 ± 13.61 Male 0.63 ± 0.03 58.35 ± 03.95 31.38 ± 08.05 85.33 ± 05.95 TransMIL [30] Female 0.69 ± 0.12 54.18 ± 05.99 08.37± 11.99 01.00 ± 00.00 Male 0.67 ± 0.06 52.67 ± 06.19 06.00 ± 12.00 99.35 ± 01.20 Nakanishi et. Al [32] Female 0.60 ± 0.11 58.32 ± 09.70 53.43 ± 20.25 63.21 ± 07.87 Male 0.61 ± 0.06 59.99 ± 05.78 56.01 ± 12.59 63.97 ± 03.65 RRT-MIL [31] Female 0.56 ± 0.07 54.11 ± 05.35 45.98 ± 30.91 62.24 ± 24.09 Male 0.59 ± 0.08 56.17 ± 06.29 44.19 ± 21.72 68.16 ± 25.79 PRISM Female 0.71 ± 0.07 67.91± 03.20 64.00 ± 11.00 71.81 ± 10.00 Male 0.69 ± 0.06 68.06 ± 05.00 69.70 ± 09.00 66.38 ± 11.00 PRISM also generalized robustly across histologic heterogeneity (Table S3). Despite known inter-observer variability in grading, performance remained consistent across glandular differentiation grades: • Grade I (well differentiated, n=20): High accuracy (81.25% ± 6.25) and AUC (0.75 ± 0.25), though variability reflected sample limitations. • Grade II (moderately differentiated, n=297): Stable accuracy (67.73% ± 3.31) and AUC (0.71 ± 0.03) with low standard deviation. • Grade III (poorly differentiated, n=104): Robust accuracy (67.25 ± 4.72) and AUC (0.70 ± 0.10) This consistency across biologically distinct grades-particularly the stability in Grade II (the largest subgroup)- demonstrates PRISM's capacity to learn generalizable pathologic features. PRISM establishes a new standard for robust prognostication, consistently matching or outperforming benchmarks across genders and histologic grades while avoiding sensitivity-specificity trade-offs. Its minimal performance fluctuation in key subgroups (ΔAUC <0.03 across sexes; <4% accuracy variability in Grade II and III), superior accuracy in males and leading female-cohort performance in comparison with comparison methods, and adaptability to histologic heterogeneity directly address clinical concerns about generalization and bias. These attributes position PRISM as a uniquely reliable model for clinical translation in CRC prognosis. Table S3: Five-year survival prediction results of PRISM stratified by Grade in the Alliance cohort using five-fold cross-validation. Each row reports the average performance with standard deviation. For each metric, the best result is shown in bold, and the secondbest is underlined. Grade AUC Accuracy (%) Sensitivity (%) Specificity (%) No of patients (n) I 0.75 ± 0.25 81.25 ± 6.25 75.00 ± 25.00 87.50 ± 12.5 20 II 0.71 ± 0.03 67.73 ± 3.31 65.11 ± 6.92 70.35 ± 8.99 297 III 0.70 ± 0.10 67.25 ± 4.72 72.85 ± 18.90 61.64 ± 14.63 104 7.3. 
Enhanced Performance of Baseline Methods using Add-on Module Integration and PRISM Superiority To rigorously evaluate the impact of our proposed patient heterogeneity module, we applied this add-on enhancement to existing state-of-the-art methods and compared their performance against PRISM. The integration of our module consistently improved the prognostic accuracy of baseline models across multiple clinical subgroups, though PRISM maintained superior performance in all comparisons. When applied to CLAM, our add-on module substantially improved its five-year survival prediction accuracy by 7% compared to baseline, ensuring that model training incorporates clinically relevant subgroups (Supplementary Figure S2). In the FL treatment subgroup, CLAM's accuracy increased from 50.04 ± 5.96% to 63.29 ± 4.45%, representing a 13.25% absolute improvement and reduced variability (Supplementary Table S4). Similarly, in female patients, CLAM's accuracy improved from 52.75 ± 9.29% to 65.10 ± 8.70% (a 12.35% gain) (Supplementary Table S5). However, these enhancements did not surpass PRISM's performance, which achieved an accuracy of 68.21 ± 3.40% in the FL subgroup and 67.91 ± 3.20% in females, representing a 4.92% and 2.81% improvement, respectively. ABMIL also demonstrated notable gains with our module. For FL-treated patients, accuracy improved from 62.02 ± 13.76% to 63.86 ± 6.30, a 1.84% increase with significantly reduced standard deviation (from 13.76% to 6.30%). Sex-stratified analysis also revealed an 8% accuracy improvement in female patients (from 57.86 ± 10.13% to 61.71 % ± 8.20). Despite these improvements, PRISM still outperformed enhanced ABMIL by 4.35% in the FL subgroup and 8.77% in the IFL subgroup. Nakanishi et al.'s method showed more modest gains, with FL treatment accuracy improving from 61.12 ± 12.85 to 62.21 ± 4.70 and IFL accuracy increasing from 56.02 ± 5.29 to 58.83 ± 8.60. RRT-MIL exhibited variable performance: FL accuracy improved from 55.42 ± 8.54 to 59.51 ± 11.11 (a 4.09% gain), but IFL accuracy decreased from 55.62 ± 3.61 to 51.18 ± 9.42 (a 4.44% drop), indicating instability. The HR analysis further validated PRISM's dominance, achieving a robust HR of 3.34 (95% CI: 2.28-4.90)-1.5× greater risk discrimination than the state-of-the-art methods (Supplementary Figure S3). While CLAM improved marginally (HR: 1.96 → 2.25) but failed to approach PRISM's precision, as their underlying architectures could not fully leverage the continuous morphological spectrum essential for robust risk stratification. PRISM's tight confidence interval (spanning 2.62 vs. TransMIL's 9.98) enabled clinically actionable identification of high-risk patients with 3.34× greater mortality likelihood within five years. Critically, the add-on module reduced performance variability across all baselines, as evidenced by lower standard deviations in key subgroups (Supplementary Table S4). For instance: CLAM's standard deviation in the FL subgroup decreased from ±5.96% to ±4.45%. ABMIL's FL subgroup variability dropped sharply from ±13.76 to ± 6.30%. Nakanishi et al.'s FL subgroup variability reduced from ±12.85 to ± 4.70%. This confirms that accounting for population heterogeneity mitigates algorithmic bias and enhances model stability. a b c Figure S2: Five-year survival prediction results in the Alliance cohort using our proposed five-fold cross-validation method. 
Figure S2: Five-year survival prediction results in the Alliance cohort using our proposed five-fold cross-validation method. (a) Area under the curve (AUC) values with standard deviations, (b) accuracy percentages with standard deviations, and (c) grouped comparison of sensitivity and specificity with other models. Our model achieves the highest accuracy and balanced sensitivity and specificity, demonstrating superior performance compared to existing state-of-the-art methods.

Table S4: Five-year survival prediction results of PRISM stratified by FL/IFL treatments in the Alliance cohort using our proposed validation strategy. Each row reports the average performance with standard deviation. For each metric within a treatment, the best result is shown in bold, and the second-best is underlined.

Model                  Treatment  AUC             Accuracy (%)   Sensitivity (%)  Specificity (%)
ABMIL [30]             FL         0.7118 ± 0.058  63.86 ± 6.30   65.26 ± 17.11    62.46 ± 12.28
ABMIL [30]             IFL        0.6576 ± 0.141  58.00 ± 10.85  61.76 ± 27.66    54.24 ± 15.40
CLAM [26]              FL         0.6820 ± 0.044  63.29 ± 4.45   48.63 ± 9.40     77.96 ± 7.96
CLAM [26]              IFL        0.6444 ± 0.112  59.85 ± 11.17  49.62 ± 22.04    70.09 ± 3.92
TransMIL [31]          FL         0.7132 ± 0.057  51.89 ± 3.70   4.40 ± 8.80      99.35 ± 1.29
TransMIL [31]          IFL        0.6091 ± 0.098  51.09 ± 2.10   3.30 ± 6.60      98.80 ± 2.20
Nakanishi et al. [33]  FL         0.6914 ± 0.080  62.21 ± 4.70   56.24 ± 17.12    68.18 ± 11.43
Nakanishi et al. [33]  IFL        0.6843 ± 0.128  58.83 ± 8.60   48.16 ± 15.27    69.51 ± 15.43
RRT-MIL [32]           FL         0.6961 ± 0.036  59.51 ± 11.11  59.82 ± 32.59    59.19 ± 36.46
RRT-MIL [32]           IFL        0.5962 ± 0.159  51.18 ± 9.42   46.30 ± 33.37    56.07 ± 42.83
PRISM                  FL         0.7160 ± 0.061  68.21 ± 3.40   64.82 ± 3.40     71.60 ± 8.50
PRISM                  IFL        0.6846 ± 0.108  66.77 ± 5.10   68.76 ± 13.90    64.77 ± 10.90

Figure S3: Hazard ratios and concordance index (c-index) values in the Alliance cohort using our proposed validation strategy. (a) Hazard ratios with 95% confidence intervals for each model, and (b) concordance index (C-index) values demonstrating each model's ability to rank patients by risk for survival prediction. Our model achieves the highest C-index (0.6720) and a hazard ratio comparable to TransMIL's with a smaller confidence interval, indicating superior prognostic performance compared to other state-of-the-art methods.

Table S5: Five-year survival prediction results of PRISM stratified by sex in the Alliance cohort using our proposed validation strategy. Each row reports the average performance with standard deviation. For each metric, the best result is shown in bold, and the second-best is underlined.

Model                  Sex     AUC             Accuracy (%)  Sensitivity (%)  Specificity (%)
ABMIL [30]             Female  0.6816 ± 0.079  61.71 ± 8.20  60.87 ± 19.10    62.54 ± 13.14
ABMIL [30]             Male    0.6906 ± 0.064  59.26 ± 5.10  64.51 ± 22.47    54.73 ± 15.35
CLAM [26]              Female  0.7046 ± 0.070  65.10 ± 8.70  56.34 ± 16.78    73.85 ± 8.72
CLAM [26]              Male    0.6377 ± 0.053  59.90 ± 5.50  45.87 ± 13.08    73.93 ± 5.80
TransMIL [31]          Female  0.6699 ± 0.110  52.85 ± 5.70  5.70 ± 11.42     1.0 ± 0.0
TransMIL [31]          Male    0.6541 ± 0.054  50.57 ± 1.10  2.80 ± 5.70      98.28 ± 3.40
Nakanishi et al. [33]  Female  0.6796 ± 0.080  59.33 ± 6.20  46.70 ± 9.60     71.96 ± 14.18
Nakanishi et al. [33]  Male    0.6672 ± 0.043  59.96 ± 2.60  54.05 ± 14.40    65.86 ± 13.73
RRT-MIL [32]           Female  0.6299 ± 0.055  55.95 ± 7.08  52.46 ± 33.69    59.45 ± 37.99
RRT-MIL [32]           Male    0.6535 ± 0.056  54.37 ± 8.67  51.38 ± 31.69    57.36 ± 40.05
PRISM                  Female  0.7120 ± 0.070  67.91 ± 3.20  64.00 ± 11.00    71.81 ± 10.00
PRISM                  Male    0.6869 ± 0.060  68.06 ± 5.00  69.70 ± 9.00     66.38 ± 11.00

Table S6: C-index values obtained on the TCGA-COADREAD dataset. All methods were trained using five-fold cross-validation.

Model    C-index
ABMIL    0.6034 ± 0.0500
PANTHER  0.5832 ± 0.0735
DSMIL    0.5 ± 0.0
RRT-MIL  0.5991 ± 0.08094
PRISM    0.6268 ± 0.0612
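For reference, the concordance index reported in Table S6 can be computed from event times, censoring indicators and model risk scores; a minimal sketch with lifelines follows, with illustrative stand-in arrays.

```python
# Hypothetical sketch: concordance index (c-index) as in Table S6. A higher
# predicted risk should correspond to shorter survival; arrays are toy values.
import numpy as np
from lifelines.utils import concordance_index

event_times = np.array([12.0, 60.0, 34.0, 60.0, 8.0])
event_observed = np.array([1, 0, 1, 0, 1])          # 1 = death observed
risk_scores = np.array([0.9, 0.2, 0.7, 0.1, 0.8])   # model-predicted risk

# concordance_index expects predicted *survival* scores (higher = longer
# survival), so negate the risk so that high risk maps to short survival.
c_index = concordance_index(event_times, -risk_scores, event_observed)
print(f"c-index = {c_index:.4f}")
```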
2510.14806
JOINT CHANNEL AND CFO ESTIMATION FROM BEAM-SWEPT SYNCHRONIZATION SIGNAL UNDER STRONG INTER-CELL INTERFERENCE

Bowen Li1,2,3, Junting Chen1,2 and Nikolaos Pappas3
1School of Science and Engineering and 2Shenzhen Future Network of Intelligence Institute, The Chinese University of Hong Kong, Shenzhen, China
3Department of Computer and Information Science, Linköping University, Linköping, Sweden
Junting Chen is the corresponding author. The work was supported in part by xxx.

ABSTRACT
Complete awareness of the wireless environment, crucial for future intelligent networks, requires sensing all transmitted signals, not just the strongest. A fundamental barrier is estimating the target signal when it is buried under strong co-channel interference from other transmitters, a failure of which renders the signal unusable. This work proposes a maximum likelihood (ML)-based cross-preamble estimation framework that exploits carrier frequency offset (CFO) constancy across a beam-swept synchronization signal (SS), coherently aggregating information across multiple observations to reinforce the desired signal against overwhelming interference. Cramer-Rao lower bound (CRLB) analysis and simulation demonstrate reliable estimation even when the signal is over a thousand times weaker than the interference. A low-altitude radio-map case study further verifies the framework's practical effectiveness.

Index Terms: CFO estimation, strong interference, beam-swept SS, multi-transmitter systems.

1. INTRODUCTION
The paradigm of wireless networks is evolving towards complex, multi-point architectures where awareness of the entire radio environment, not just the strongest signal, is critical [1–6]. For instance, cell-free systems require channel information from numerous access points to enable cooperation [1–3]. Likewise, high-resolution radio map construction needs to characterize all signals to provide a complete environmental picture [4–6]. A unifying theme across these systems is the necessity of processing weak signals in the presence of strong ones. This creates a challenge: weak signals of interest are often buried under severe co-channel interference, and the critical first step of CFO estimation fails, rendering them invisible to the network.

To estimate CFO, blind methods exploit inherent signal redundancy, such as the cyclic prefix (CP), for estimation without dedicated overhead [7,8]. However, their reliance on short data segments provides limited averaging gain, rendering them unreliable in low signal-to-interference-and-noise ratio (SINR) and strong interference conditions. Training-based methods offer improved robustness by utilizing known preambles or pilots as a clean reference [9–11]. However, practical synchronization signals are never perfectly orthogonal [12,13], leading to cross-correlation leakage that allows strong interfering signals to corrupt the estimation of weaker ones. Even dedicated iterative techniques like successive interference cancellation (SIC) [14], designed specifically for this challenge, often fall short, as residual estimation errors from strong signals are typically still powerful enough to completely mask the weak signals.

We tackle this challenge by exploiting a structural property of modern training bursts. Within a short coherence block, multiple preambles exhibit heterogeneous per-sequence (beam-dependent) gains, while the CFO is common across the burst.
For example, in the 5G new radio (NR) SS block (SSB) structure [11, 15, 16], each SSB has a distinct, beam-dependent power gain, yet all SSBs share a common CFO over a short duration (≈5 ms); in cell-free networks [1–3], the same pilot from a user is received by multiple base stations (BSs) through different channels, so the received gains differ across sites, whereas the user-induced CFO is identical at all receivers. Our approach leverages this structure, proposing a cross-preamble estimation across the entire burst to reinforce the weak signal against strong co-channel interference.

This paper develops a principled framework for weak-signal synchronization. It establishes the performance limits, showing that it is not the instantaneous SINR that matters, but the aggregate SINR across all observations. Our key contributions are:
• We develop an ML-based cross-preamble CFO estimator that exploits a shared CFO across multiple preambles and, via CRLB analysis, show that the estimation error is inversely proportional to the aggregate SINR over all preambles and proportional to the CFO magnitude. Therefore, an iterative refinement algorithm (pre-compensation and SIC, followed by re-estimation) is proposed to estimate the channel and CFO.
• Simulations demonstrate reliable CFO estimation down to −30 dB SINR and an approximate 6 dB improvement in channel estimation at low SINR. A low-altitude radio-map field test further confirms the framework's practical effectiveness.

2. SYSTEM MODEL
Consider a multi-transmitter system where a single receiver is within the coverage of $K$ BSs, indexed by $k \in \mathcal{K} = \{0, 1, \dots, K-1\}$. Each BS periodically broadcasts a signal burst containing $P$ distinct preamble frames, indexed by $p \in \mathcal{P} = \{0, 1, \dots, P-1\}$. All preamble frames contain two identical training sequences, as shown in Fig. 1. In addition, all preambles within one coherence block share the same timing and frequency offsets, while each preamble frame experiences a distinct channel.

[Fig. 1: Preamble structure.]

2.1. Preamble Structure
Let $\mathcal{C}$ denote the set of all length-$N$ training sequences. The preamble transmitted by the $k$th BS can be expressed as
$$\mathbf{c}_k = \mathbf{c}_{k,0} \oplus \mathbf{0} \oplus \mu_k \mathbf{c}_{k,1} \quad (1)$$
where $\mathbf{c}_{k,0}, \mathbf{c}_{k,1} \in \mathcal{C}$ are the selected training sequences; individual sequences may be reused across different BSs, but the ordered pair $(\mathbf{c}_{k,0}, \mathbf{c}_{k,1})$, and thus the preamble $\mathbf{c}_k$, is unique to BS $k$; $\mu_k \in \mathbb{R}^+$ is a scaling factor that represents the power difference between the two training sequences; and $\mathbf{0}$ is a zero-padding vector of fixed length $\tau_0$, corresponding to the timing offset between the two training signals. The cross-correlation between two training sequences $\mathbf{c}_k, \mathbf{c}_l \in \mathcal{C}$ with timing offset $\tau$ and frequency offset $\omega$ is given by
$$r_{k,l}(\tau, \omega) \triangleq \frac{1}{N} \sum_{m=0}^{N-1} c_k[m]\, c_l^{*}[m-\tau]\, e^{-j\omega m}. \quad (2)$$
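As an illustration of (2), a small numpy sketch of the cross-correlation follows. The Zadoff-Chu construction and all parameter values are illustrative assumptions, not tied to a particular standard.

```python
# A minimal numpy sketch of the cross-correlation r_{k,l}(tau, omega) in (2).
import numpy as np

def xcorr(ck, cl, tau, omega):
    """(1/N) sum_m ck[m] * conj(cl[m - tau]) * exp(-j omega m), zero outside."""
    N = len(ck)
    m = np.arange(N)
    cl_shift = np.zeros(N, dtype=complex)
    valid = (m - tau >= 0) & (m - tau < N)
    cl_shift[valid] = cl[(m - tau)[valid]]
    return np.sum(ck * np.conj(cl_shift) * np.exp(-1j * omega * m)) / N

# Example with a Zadoff-Chu sequence: |r_k(0)| = 1 at the matched lag.
N, root = 63, 25
n = np.arange(N)
zc = np.exp(-1j * np.pi * root * n * (n + 1) / N)
print(abs(xcorr(zc, zc, tau=0, omega=0.0)))   # ~1.0
```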
Because training sequences are not perfectly orthogonal, we model the cross-correlation for any mismatch ($k \neq l$) or nonzero delay ($\tau \neq 0$) as a zero-mean circular complex Gaussian with a variance that decays with the sequence length $N$. For the matched case ($k = l$ and $\tau = 0$), the statistic reduces to the sequence's autocorrelation $r_k(\omega)$. As a result, the cross-correlation can be expressed as
$$r_{k,l}(\tau, \omega) = \begin{cases} r_k(\omega) & k = l,\ \tau = 0 \\ \mathcal{CN}(0, \sigma_c^2/N) & \text{otherwise} \end{cases} \quad (3)$$
where $\sigma_c^2$ captures sequence-family sidelobes and implementation impairments (e.g., filtering, residual multipath), and the magnitude of the autocorrelation $|r_k(\omega)|$ follows a Dirichlet profile; $|r_k(0)| = 1$ and, although oscillatory, its overall trend decreases as $|\omega|$ increases; e.g., for Zadoff-Chu sequences, $|r_k(\omega)| = |\sin(N\omega/2)/(N\sin(\omega/2))|$ [13].

2.2. Transmission Model
For the $k$th BS, denote $\tau_k$ and $\omega_k$ as the group timing and frequency offsets, respectively, and denote by $\alpha_k^{(p)}$ the complex channel gain in the $p$th frame. Then, the received signal at time $m$ can be expressed as
$$y[m] = \sum_{k=0}^{K-1} \sum_{p=0}^{P-1} \alpha_k^{(p)}\, c_k[m - pN_f - \tau_k]\, e^{-j\omega_k m} + \nu[m] \quad (4)$$
for all $m \in \{0, 1, \dots, N_o - 1\}$, where $N_f$ is the frame length and $\nu[m] \sim \mathcal{CN}(0, \sigma_n^2)$ denotes additive white Gaussian noise. The parameter $N_o$ represents the total observation length under consideration. Without loss of generality, we assume that the received signal length is sufficiently large to accommodate all reference signals, and that training sequences from different frames do not overlap. Moreover, $c_k[m] = 0$ for $m \notin \{0, \dots, N-1\}$. For notational convenience, we collect the received samples into a vector $\mathbf{y} \triangleq [y[0], y[1], \dots, y[N_o - 1]]^T$.

In this work, the initial tasks of active BS detection and delay estimation are assumed to be complete, consistent with established generalized likelihood ratio test (GLRT) methods [17]. The scope is therefore narrowed to estimating the remaining parameters for the active set $\mathcal{K}$: the CFOs $\boldsymbol{\omega} \triangleq [\omega_k]_{k\in\mathcal{K}}$, the scaling factors $\boldsymbol{\mu} \triangleq [\mu_k]_{k\in\mathcal{K}}$, and the per-frame channel gains $\boldsymbol{\alpha} \triangleq [\alpha_k^{(p)}]_{k\in\mathcal{K},\, p\in\mathcal{P}}$:
$$\underset{\boldsymbol{\omega}, \boldsymbol{\alpha}, \boldsymbol{\tau}, \boldsymbol{\mu}}{\text{maximize}}\quad P(\mathbf{y} \mid \boldsymbol{\omega}, \boldsymbol{\alpha}, \boldsymbol{\mu}).$$
This joint estimation problem is challenging due to strong parameter coupling, the inherent nonlinearity of CFO estimation, and the imperfect orthogonality of the training sequences.

3. CROSS-PREAMBLE ESTIMATION
In this section, we first reformulate channel estimation as a linear problem and solve it via the classical linear least squares (LLS) estimator [18]. Second, we perform a single-preamble analysis and derive a separate CFO estimator. Then, we extend the analysis across preambles, leveraging the burst-wise shared CFO to obtain a cross-preamble CFO estimator.

3.1. Channel Estimator
Denote each frame of the received signal as $\mathbf{y}^{(p)} \triangleq [y[m + pN_f]]_{m\in\{0,1,\dots,N_f-1\}} \in \mathbb{C}^{N_f\times 1}$, where $p \in \mathcal{P}$. Then the $p$th received signal can be written as
$$\mathbf{y}^{(p)} = \underbrace{e^{j\omega(p-1)N_f}\, \boldsymbol{\Omega}(\boldsymbol{\omega}) \odot \mathbf{C}(\boldsymbol{\tau})}_{\mathbf{A}^{(p)}}\, \boldsymbol{\alpha}^{(p)} + \mathbf{v}^{(p)}$$
where $\mathbf{C}(\boldsymbol{\tau}) = [\mathbf{0}_{\tau\times 1}; \mathbf{c}_k; \mathbf{0}]_{k\in\mathcal{K}} \in \mathbb{C}^{N_f\times K}$, $\boldsymbol{\Omega}(\boldsymbol{\omega}) = [\boldsymbol{\Omega}_k(\omega_k)]_{k\in\mathcal{K}} \in \mathbb{C}^{N_f\times K}$ with $\boldsymbol{\Omega}_k(\omega_k) = [1; e^{j\omega_k}; \dots; e^{j\omega_k(N_f-1)}]$, $\boldsymbol{\alpha}^{(p)} = [\alpha_k^{(p)}]_{k\in\mathcal{K}} \in \mathbb{C}^{K\times 1}$, and $\mathbf{v}^{(p)} = [v[m]]_{m\in\{0,\dots,N_f-1\}} \in \mathbb{C}^{N_f\times 1}$. As a result, the received signals can be expressed as
$$\mathbf{y} = \underbrace{\mathrm{blkdiag}\big(\mathbf{A}^{(0)}, \dots, \mathbf{A}^{(P-1)}\big)}_{\mathbf{A}(\boldsymbol{\tau},\boldsymbol{\omega})}\, \boldsymbol{\alpha} + \mathbf{v}.$$
Then, according to the LLS solution, the channel gain $\boldsymbol{\alpha}$ can be estimated by
$$\hat{\boldsymbol{\alpha}} = \mathbf{A}(\boldsymbol{\tau}, \boldsymbol{\omega})^{\dagger}\, \mathbf{y}. \quad (5)$$
In addition, the received signal $\mathbf{y}$ can be expressed as
$$\mathbf{y} = \mathbf{A}_0 \boldsymbol{\alpha} + \underbrace{\big[\mathbf{A}^{(0)}\mathrm{diag}(\boldsymbol{\alpha}^{(0)});\ \cdots;\ \mathbf{A}^{(P-1)}\mathrm{diag}(\boldsymbol{\alpha}^{(P-1)})\big]}_{\mathbf{B}}\, \boldsymbol{\mu} + \mathbf{v},$$
where $\mathbf{A}_i = \mathrm{blkdiag}(\mathbf{A}_i^{(0)}, \dots, \mathbf{A}_i^{(P-1)})$, $\mathbf{A}_i^{(p)} = e^{j\omega(p-1)N_f}\, \boldsymbol{\Omega}(\boldsymbol{\omega}) \odot \mathbf{C}_i(\boldsymbol{\tau})$, $\mathbf{C}_i(\boldsymbol{\tau}) = [\mathbf{0}_{(\tau+\tau_c i)\times 1}; \mathbf{c}_{k,i}; \mathbf{0}]_{k\in\mathcal{K}}$, and $\tau_c = N + \tau_0$. Then, according to the LLS solution, the scaling factor $\boldsymbol{\mu}$ can be estimated by
$$\hat{\boldsymbol{\mu}} = \mathbf{B}^{\dagger}\,\big(\mathbf{y} - \mathbf{A}_0 \hat{\boldsymbol{\alpha}}\big). \quad (6)$$
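The following is a simplified numpy sketch of the LLS channel estimate (5) for a single BS: it builds the per-frame regressors, stacks them block-diagonally, and solves with a pseudoinverse. It compresses the model above (one training copy per frame, known delay, no scaling factor) and is an illustration, not the full estimator.

```python
# A single-BS, multi-frame sketch of the LLS channel estimate (5).
# All parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, P, Nf, tau, omega = 63, 4, 200, 17, 1e-2
c = rng.choice([1.0, -1.0], size=N) + 0j           # stand-in training sequence

m = np.arange(Nf)
col = np.zeros(Nf, dtype=complex)
col[tau:tau + N] = c
a_p = np.exp(1j * omega * m) * col                  # one frame's regressor A^(p)

A = np.kron(np.eye(P), a_p[:, None])                # blkdiag(A^(0), ..., A^(P-1))
alpha = rng.normal(size=P) + 1j * rng.normal(size=P)     # per-frame gains
noise = 0.05 * (rng.normal(size=P * Nf) + 1j * rng.normal(size=P * Nf))
y = A @ alpha + noise

alpha_hat = np.linalg.pinv(A) @ y                   # the LLS solution (5)
print(np.max(np.abs(alpha_hat - alpha)))            # small estimation error
```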
3.2. Separate Analysis and Separate CFO Estimator
For an arbitrary frame $p$ of the received signal, we define the cross-correlation between the received signal $\mathbf{y}$ and the two training sequences $\mathbf{c}_{k,0}$ and $\mathbf{c}_{k,1}$ as
$$r_{y,k}^{p,i}[\tau] \triangleq \frac{1}{N} \sum_{m=\tau}^{\tau+N-1} y[m + pN_f + i\tau_c]\, c_{k,i}^{H}[m - \tau]$$
where $k \in \mathcal{K}$, $p \in \mathcal{P}$, and $i \in \{0, 1\}$. Based on the cross-correlation of the training sequences, the distribution of $r_{y,k}^{p,i}[\tau]$ is summarized in Lemma 1.

Lemma 1. For any $k \in \mathcal{K}$, $p \in \mathcal{P}$, and $i \in \{0, 1\}$, if $\tau = \tau_k$, the cross-correlation $r_{y,k}^{p,i}[\tau_k]$ follows
$$r_{y,k}^{p,i}[\tau_k] \sim \alpha_k^{(p)}\, r_k(\omega_k)\,\big(\mu_k e^{-j\omega_k \tau_c}\big)^{i} + \mathcal{CN}\big(0, \sigma_{k,p,i}^2\big)$$
where $\sigma_{k,p,i}^2$ is the averaged interference plus noise,
$$\sigma_{k,p,i}^2 = \frac{\sum_{q\neq k,\, q\in\mathcal{K}} \big|\mu_k^i\, \alpha_q^{(p)}\, \sigma_c\big|^2 + \sigma_n^2}{N}.$$

Lemma 1 indicates that the ratio between the two correlation outputs, $r_{y,k}^{p,0}$ and $r_{y,k}^{p,1}$, reveals the exponential term associated with the CFO, which leads to the following separate CFO estimator:
$$\hat{\omega}_k^{\mathrm{sep}} = -\frac{1}{\tau_c}\,\angle\Big(\mathbb{E}\big[r_{y,k}^{p,1}\big] \big/ \mu_k \big/ \mathbb{E}\big[r_{y,k}^{p,0}\big]\Big). \quad (7)$$
However, its variance depends on the per-sequence SINR and is easily overwhelmed by strong co-channel signals, which motivates the cross-preamble estimator below.

3.3. Cross-Preamble CFO Estimator
Collect the correlation variables associated with the CFO $\omega_k$ as $\mathbf{r}_{y,k} = [r_{y,k}^{p,i}[\tau_k]]_{p\in\mathcal{P},\, i\in\{0,1\}}$, and define the cross-preamble likelihood as $P(\mathbf{r}_{y,k}|\boldsymbol{\omega})$. Since preambles from different frames are non-overlapping, the correlation outputs are conditionally independent given $\omega_k$, yielding
$$P(\mathbf{r}_{y,k}|\boldsymbol{\omega}) = \prod_{p\in\mathcal{P}} \prod_{i\in\{0,1\}} P\big(r_{y,k}^{p,i}[\tau_k] \,\big|\, \omega_k\big).$$
According to Lemma 1, each correlation output $r_{y,k}^{p,i}[\tau_k]$ is Gaussian distributed, and thus the log-likelihood reduces to
$$\sum_{p\in\mathcal{P}} \sum_{i\in\{0,1\}} -\frac{\big|r_{y,k}^{p,i}[\tau_k] - \alpha_k^{(p)} r_k(\omega_k)\big(\mu_k e^{-j\omega_k\tau_c}\big)^{i}\big|^2}{\sigma_{k,p,i}^2}. \quad (8)$$
The resulting cross-preamble CFO estimator is given in the following proposition.

Proposition 1. The cross-preamble ML CFO estimator is
$$\hat{\omega}_k^{\mathrm{cross}} \triangleq \arg\max_{\omega_k} P(\mathbf{r}_{y,k}|\boldsymbol{\omega}) = \frac{1}{\tau_c}\,\angle\big(\psi_1 \psi_0^{H}\big) \quad (9)$$
where
$$\psi_0 \triangleq \sum_{p\in\mathcal{P}} \frac{\alpha_k^{(p)}\,\big(r_{y,k}^{p,0}[\tau_k]\big)^{H}}{\sigma_{k,p,0}^2}, \qquad \psi_1 \triangleq \sum_{p\in\mathcal{P}} \frac{\alpha_k^{(p)} \mu_k\,\big(r_{y,k}^{p,1}[\tau_k]\big)^{H}}{\sigma_{k,p,1}^2}. \quad (10)$$

Proposition 2. The cross-preamble ML CFO estimator satisfies $\hat{\omega}_k^{\mathrm{cross}} \overset{a}{\sim} \mathcal{N}(\omega, [\mathbf{I}^{-1}]_{1,1})$, where $[\mathbf{I}^{-1}]_{1,1}$ is the CRLB
$$[\mathbf{I}^{-1}]_{1,1} = \frac{\Gamma_0^{-1} + \Gamma_1^{-1}}{2\tau_c^2\, |r_k(\omega_k)|^2} \quad (11)$$
and $\Gamma_0$ and $\Gamma_1$ are the aggregated SINRs of the two training signals,
$$\Gamma_0 = \sum_{p\in\mathcal{P}} \frac{\big|\alpha_k^{(p)}\big|^2}{\sigma_{k,p,0}^2}, \qquad \Gamma_1 = \sum_{p\in\mathcal{P}} \frac{\big|\mu_k \alpha_k^{(p)}\big|^2}{\sigma_{k,p,1}^2}. \quad (12)$$

Algorithm 1: Joint CFO and Channel Estimation
Initialization: $\hat{\boldsymbol{\omega}} \leftarrow \mathbf{0}$, $\hat{\boldsymbol{\alpha}} \leftarrow \mathbf{0}$, $\hat{\boldsymbol{\mu}} \leftarrow \mathbf{1}$, $\tilde{\mathbf{y}} \leftarrow \mathbf{y}$.
1: Construct $\mathbf{A}$ based on $\hat{\boldsymbol{\omega}}$ and $\hat{\boldsymbol{\mu}}$ and update $\hat{\boldsymbol{\alpha}}$ based on (5); then construct $\mathbf{A}_0$ and $\mathbf{B}$ based on $\hat{\boldsymbol{\omega}}$ and $\hat{\boldsymbol{\alpha}}$ and update $\hat{\boldsymbol{\mu}}$ based on (6).
2: For each BS $k$, perform SIC and compensation: $\tilde{\mathbf{y}}_k \leftarrow \big(\mathbf{y} - \mathbf{A}(\hat{\boldsymbol{\tau}}_{-k}, \hat{\boldsymbol{\omega}}_{-k})\,\hat{\boldsymbol{\alpha}}_{-k}\big) \odot \boldsymbol{\Omega}_k(\hat{\omega}_k)$. Then estimate the remaining CFO $\delta_k$ based on the cross-preamble estimator (9), and update $\hat{\omega}_k \leftarrow \hat{\omega}_k + \delta_k$.
3: Go to step 1 until $\sum_k |\delta_k| < \epsilon$.

Our analysis of the CRLB reveals two key insights into CFO estimation. First, accurate CFO estimation for a weak signal is achievable. This follows from the estimation error being inversely proportional to the aggregate SINR across all beams (i.e., $\Gamma_0^{-1} + \Gamma_1^{-1}$). In addition, a coarse-to-fine (iterative) refinement (pre-compensation and SIC, followed by re-estimation) can progressively reduce the effective offset, because the estimation variance increases with the interference and with the magnitude of the CFO, $|\omega_k|$.
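A numerical sketch of the cross-preamble estimator (9)-(10) under Lemma 1's model follows. For brevity it assumes equal noise variances and synthesizes the correlator outputs directly; all values are illustrative.

```python
# Sketch of the cross-preamble ML CFO estimator (9)-(10): SINR-weighted
# aggregation of per-frame correlator outputs, then a single angle extraction.
import numpy as np

rng = np.random.default_rng(1)
P, tau_c, mu = 12, 80, 1.0
omega_true = 2e-3
alpha = rng.normal(size=P) + 1j * rng.normal(size=P)   # per-frame gains
sigma2 = 0.05 * np.ones(P)                             # interference-plus-noise

def cn(var):
    return np.sqrt(var / 2) * (rng.normal(size=P) + 1j * rng.normal(size=P))

# Correlator outputs per Lemma 1 (autocorrelation factor r_k ~ 1 for small CFO).
r0 = alpha + cn(sigma2)
r1 = alpha * mu * np.exp(-1j * omega_true * tau_c) + cn(sigma2)

psi0 = np.sum(alpha * np.conj(r0) / sigma2)            # (10)
psi1 = np.sum(alpha * mu * np.conj(r1) / sigma2)
omega_hat = np.angle(psi1 * np.conj(psi0)) / tau_c     # (9)
print(f"true {omega_true:.2e}, estimated {omega_hat:.2e}")
```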
3.4. Joint CFO and Channel Estimation Algorithm
First, we estimate the channels based on (5) and (6). Then, for every $k$, we cancel the contributions of the other BSs, pre-compensate the CFO of BS $k$, and estimate its residual CFO increment $\delta_k$. The residual signal to be estimated for BS $k$ is
$$\tilde{\mathbf{y}}_k = \big(\mathbf{y} - \mathbf{A}(\hat{\boldsymbol{\tau}}_{-k}, \hat{\boldsymbol{\omega}}_{-k})\,\hat{\boldsymbol{\alpha}}_{-k}\big) \odot \boldsymbol{\Omega}_k(\hat{\omega}_k) \quad (13)$$
where "$-k$" denotes the collection of parameters for all BSs except $k$ (i.e., the entries associated with BS $k$ are set to zero). The residual CFO is then updated as $\hat{\omega}_k \leftarrow \hat{\omega}_k + \delta_k$. The iteration stops when the residual CFO corrections are small. Intuitively, improving the CFO estimate sharpens the matched preamble and reduces the bias and variance of the channel estimates; the refined channel estimates, in turn, yield smaller CFO increments. Under Proposition 2, this yields a monotone decrease of the residual CFO, and $\delta_k \to 0$.
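For concreteness, a self-contained toy of the Algorithm 1 loop for K = 2 BSs follows. It substitutes simple matched-filter correlations for the full LLS steps (5)-(6), processes the stronger BS first, and uses illustrative parameters throughout; it is a sketch of the SIC / pre-compensation / re-estimation structure, not the paper's implementation.

```python
# Toy SIC + pre-compensation + cross-preamble re-estimation loop (Algorithm 1).
import numpy as np

rng = np.random.default_rng(3)
K, P, N, Nf, tau_c = 2, 8, 63, 400, 200
n = np.arange(N)
c = [np.exp(-1j * np.pi * r * n * (n + 1) / N) for r in (25, 29)]  # ZC sequences
omega = np.array([2e-3, -1.5e-3])                # true CFOs
power = np.array([1.0, 30.0])                    # BS 0 is 30x weaker than BS 1
g = (rng.normal(size=(K, P)) + 1j * rng.normal(size=(K, P))) * power[:, None]

m = np.arange(P * Nf)

def synth(k, gains, w):
    """Contribution of BS k: two copies of its sequence per frame, CFO w."""
    s = np.zeros(P * Nf, complex)
    for p in range(P):
        for i in (0, 1):
            start = p * Nf + i * tau_c
            s[start:start + N] += gains[p] * c[k]
    return s * np.exp(-1j * w * m)

y = synth(0, g[0], omega[0]) + synth(1, g[1], omega[1])
y += 0.1 * (rng.normal(size=P * Nf) + 1j * rng.normal(size=P * Nf))

w_hat = np.zeros(K)
g_hat = np.zeros((K, P), complex)
for _ in range(20):
    total = 0.0
    for k in np.argsort(-power):                          # strongest BS first
        other = 1 - k
        yk = y - synth(other, g_hat[other], w_hat[other])  # SIC
        yk = yk * np.exp(1j * w_hat[k] * m)                # pre-compensation
        r = np.array([[np.vdot(c[k], yk[p * Nf + i * tau_c:
                                        p * Nf + i * tau_c + N]) / N
                       for i in (0, 1)] for p in range(P)])
        delta = np.angle(np.sum(r[:, 0] * np.conj(r[:, 1]))) / tau_c  # as in (9)
        w_hat[k] += delta
        # Channel update: undo the residual per-frame phase before storing.
        g_hat[k] = r[:, 0] * np.exp(1j * delta * Nf * np.arange(P))
        total += abs(delta)
    if total < 1e-8:                                      # increments vanished
        break
print("estimated:", np.round(w_hat, 6), " true:", omega)
```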
4. SIMULATION RESULTS
We develop a simulation platform that generates 5G NR waveforms [15] and use it to assess the robustness of the proposed cross-preamble estimation framework. For comparison, we consider: i) the CP-based blind estimator [8]; ii) an auto-correlation (separate) estimator that splits each training sequence into two segments and estimates the CFO from their phase difference [11]; iii) the cross-correlation (separate) estimator in (7); iv) a weighted average of the separately estimated CFOs according to channel power.

Fig. 2(a) illustrates the mean absolute error (MAE) over SINR. As expected, the error decreases with SINR. The ranking is consistent with the length of the reference sequence used: cross-preamble < separate < CP-based. Cross-correlation outperforms auto-correlation because the separation $\tau_c$ between the two training sequences provides a larger phase baseline. The proposed cross-preamble estimator further improves accuracy by pooling information across frames under the shared-CFO constraint. Notably, the cross-preamble estimator at SINR = −30 dB still outperforms the weighted-average baseline operating at SINR = −5 dB, confirming robustness in low-SINR regimes. Consequently, improving CFO accuracy reduces the variance in channel estimation; the channel estimation error (normalized mean squared error, NMSE) versus SINR follows the same trend, as shown in Fig. 2(b).

[Fig. 2: CFO ω estimation MAE and channel α estimation NMSE over SINR, obtained from 200 Monte-Carlo simulations, with K = 12 BSs and P = 12 preambles.]

Fig. 3 shows the radio-map estimation for a low-altitude setting. The two clearly resolved beams (pointing in different directions) corroborate the effectiveness of the proposed framework in practice.

[Fig. 3: Field test: radio maps for two beams transmitted by a ground BS; the BS location is indicated by a red triangle.]

5. CONCLUSION
Leveraging CFO constancy within a coherence block, we propose an ML cross-preamble estimation framework that aggregates preamble correlations. Simulations show estimation gains exceeding 30 dB for the CFO and 5 dB for the channel. Field tests demonstrate the ability to identify beams in low-altitude airspace.

6. REFERENCES
[1] B. W. Zarikoff and J. K. Cavers, "Multiple frequency offset estimation for the downlink of coordinated MIMO systems," IEEE J. Sel. Areas Commun., vol. 26, no. 6, pp. 901–912, 2008.
[2] D. Gesbert, S. Hanly, H. Huang, S. Shamai Shitz, O. Simeone, and W. Yu, "Multi-cell MIMO cooperative networks: A new look at interference," IEEE J. Sel. Areas Commun., vol. 28, no. 9, pp. 1380–1408, 2010.
[3] H. Q. Ngo, A. Ashikhmin, H. Yang, E. G. Larsson, and T. L. Marzetta, "Cell-free massive MIMO versus small cells," IEEE Trans. on Wireless Commun., vol. 16, no. 3, pp. 1834–1850, 2017.
[4] S. J. Maeng, O. Ozdemir, I. Guvenc, M. L. Sichitiu, M. Mushi, and R. Dutta, "LTE I/Q data set for UAV propagation modeling, communication, and navigation research," IEEE Commun. Mag., vol. 61, no. 9, pp. 90–96, 2023.
[5] Y. Zeng, J. Chen, J. Xu, D. Wu, X. Xu, S. Jin, X. Gao, D. Gesbert, S. Cui, and R. Zhang, "A tutorial on environment-aware communications via channel knowledge map for 6G," IEEE Commun. Surveys Tuts., vol. 26, no. 3, pp. 1478–1519, 2024.
[6] H. Sun and J. Chen, "Integrated interpolation and block-term tensor decomposition for spectrum map construction," IEEE Trans. on Signal Processing, vol. 72, pp. 3896–3911, 2024.
[7] T.-C. Lin and S.-M. Phoong, "A new cyclic-prefix based algorithm for blind CFO estimation in OFDM systems," IEEE Trans. on Wireless Commun., vol. 15, no. 6, pp. 3995–4008, 2016.
[8] S. Singh, S. Kumar, S. Majhi, U. Satija, and C. Yuen, "Blind carrier frequency offset estimation techniques for next-generation multicarrier communication systems: Challenges, comparative analysis, and future prospects," IEEE Commun. Surveys Tuts., vol. 27, no. 1, pp. 1–36, 2025.
[9] Y.-R. Tsai, H.-Y. Huang, Y.-C. Chen, and K.-J. Yang, "Simultaneous multiple carrier frequency offsets estimation for coordinated multi-point transmission in OFDM systems," IEEE Trans. on Wireless Commun., vol. 12, no. 9, pp. 4558–4568, 2013.
[10] O. H. Salim, A. A. Nasir, H. Mehrpouyan, and W. Xiang, "Multi-relay communications in the presence of phase noise and carrier frequency offsets," IEEE Trans. on Commun., vol. 65, no. 1, pp. 79–94, 2017.
[11] R. Tuninato, D. G. Riviello, R. Garello, B. Melis, and R. Fantini, "A comprehensive study on the synchronization procedure in 5G NR with 3GPP-compliant link-level simulator," EURASIP J. Wireless Commun. Netw., vol. 2023, no. 1, p. 111.
[12] P. Stoica and O. Besson, "Training sequence design for frequency offset and frequency-selective channel estimation," IEEE Trans. on Commun., vol. 51, no. 11, pp. 1910–1917, 2003.
[13] M. Hua, M. Wang, K. W. Yang, and K. J. Zou, "Analysis of the frequency offset effect on Zadoff-Chu sequence timing performance," IEEE Trans. on Commun., vol. 62, no. 11, pp. 4024–4039, 2014.
[14] M. Morelli, C.-C. J. Kuo, and M.-O. Pun, "Synchronization techniques for orthogonal frequency division multiple access (OFDMA): A tutorial review," Proc. IEEE, vol. 95, no. 7, pp. 1394–1427, 2007.
[15] 3rd Generation Partnership Project, "Technical Specification Group Radio Access Network; NR; Physical channels and modulation (Release 18)," 3GPP, TS 38.211, May 2024, v18.2.0.
[16] S. Yoneda, M. Sawahashi, S. Nagata, and S. Suyama, "PRACH transmission employing carrier frequency offset pre-compensation based on measurement at UE for NR uplink," IEICE Trans. on Commun., vol. E108-B, no. 3, pp. 330–338, 2025.
[17] S. M. Kay, Fundamentals of Statistical Signal Processing, Volume II: Detection Theory. Upper Saddle River, NJ, USA: Prentice Hall, 1998.
[18] S. M. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory. Prentice-Hall, Inc., 1993.
2510.14801
A new photometric ephemeris for the 2M1510 AB double brown dwarf eclipsing binary system

Seb T. Millward1★, Vedad Kunovac1,2
1Department of Physics, University of Warwick, Coventry, CV4 7AL, UK
2Centre for Exoplanets and Habitability, University of Warwick, Coventry, CV4 7AL, UK
★Contact e-mail: Seb.Millward@warwick.ac.uk

Accepted XXX. Received YYY; in original form ZZZ

ABSTRACT
Eclipsing brown dwarfs are important calibrators of sub-stellar evolution models used to infer the characteristics of directly imaged brown dwarfs and giant exoplanets. Only two double brown dwarf eclipsing binary systems are known, among them 2MASS J15104786-2818174 (2M1510 AB), published in 2020 with a poorly constrained orbital period. Here we analyse TESS full-frame image (FFI) photometry of this faint (Tmag = 15.9) binary and detect a significant (>10σ) periodic signal spanning TESS Cycles 1–7, consistent with previous data. We refine the orbital period to 20.897782 ± 0.000036 d, reducing its present-day uncertainty from 18 h to 8 min. Our work is crucial for scheduling follow-up observations of this system for detailed study with other photometric facilities. We also find that a recent orbital solution from Doppler data is inconsistent with existing photometry. A timing offset in the Doppler data may have produced a spurious signal mimicking retrograde apsidal precession, from which the claimed circumbinary planet 2M1510 ABb was inferred. From our best attempt at correcting the data we were unable to reconcile the radial velocity data with the photometry, suggesting that the radial velocity uncertainties are underestimated, and that the circumbinary planet 2M1510 ABb may be a false positive.

Key words: binaries: eclipsing – stars: brown dwarfs – techniques: photometric – methods: data analysis

1 INTRODUCTION
Brown dwarfs are substellar objects with masses between those of giant planets and stars, typically 13 to 80 MJup, such that internal temperatures allow for deuterium burning but not hydrogen fusion (Chabrier et al. 2014). The masses, radii and ages of brown dwarfs and giant planets whose orbital inclination is not known – such as directly imaged objects or double-line binaries – are typically inferred from sub-stellar evolution models (e.g. LP-413-53, Hsu et al. 2023). The calibration of such models relies on direct measurements of these properties for brown dwarfs. However, the list of systems where such measurements can be made remains small.

For a double-line binary system with a known orbital inclination, the masses of the binary components can be directly measured. One such method to find the orbital inclination is through astrometry; see for example Franson et al. (2022), Xuan et al. (2024) and Brandt et al. (2021), who all investigate double-line binary systems containing brown dwarfs. Alternatively, for double-line binary systems with observable eclipses, the shape of the eclipse in the light curve allows for a measurement of the orbital inclination, as well as the component radii. Only three such double-line ultra-cool (spectral type M8 or later) binary systems have been identified: EPIC 2037103875 AB (David et al. 2019), 2M0535-05 AB (Stassun et al. 2006, 2007), and 2M1510 AB, the system investigated in this paper. Only the latter two are bona-fide double brown dwarf binaries.
The discovery of 2M1510 AB was announced by Triaud et al. (2020) following the observation of the first eclipse on UT night 2017 July 26 with the SPECULOOS Southern Observatory (Delrez et al. 2018). The eclipse was soon followed up with high-resolution spectroscopy in the red optical and near-infrared with UVES on the VLT and HIRES on Keck, from which line splitting was observed, confirming the system as a double-lined binary. However, no further eclipses were observed, and so the main constraint on the orbital period came from the UVES and HIRES radial velocities. This left an uncertainty on the orbital period of roughly 8.5 min at the time the first set of data was gathered, increasing with each subsequent eclipse of the system by 2.5 h per year. At the time of writing, the uncertainty is roughly 18 h.

In this Letter, we analyse the TESS (Ricker et al. 2015) full-frame image photometry of 2M1510 AB, identifying 4 new eclipses which reduce the uncertainty on the orbital period. This work will allow for follow-up observations to study this rare system in more detail. The Letter is organised as follows: In Section 2 we summarise the system properties and the extraction of the TESS photometry. In Section 3 we detail our box-least-squares (BLS) search for periodic signals in the TESS data, and the modelling of the light curves using an eclipsing binary model with parameter estimation using MCMC. We outline our main results in Section 4. In Section 5 we compare our results to previous findings from Triaud et al. (2020), and the new Doppler solutions found by Baycroft et al. (2025). We also discuss differences between the depth of the eclipses identified in SPECULOOS and TESS. Finally, we conclude in Section 6.

2 DATA
The 2M1510 system is a hierarchical triple brown dwarf system consisting of an unresolved near equal-mass binary 2M1510 AB (2MASS J15104786-2818174, Gaia DR3 6212595980928732032), orbited by a spatially resolved third component, 2M1510 C (2MASS J15104761-2818234, Gaia DR3 6212595980924278144), separated by 6.8″ on the sky, or a projected separation of 250 astronomical units (Gizis 2002). The system is located at a distance of 36.6 ± 0.3 pc (Gaia Collaboration et al. 2023) and is a kinematic member of the 45 ± 5 Myr Argus moving group (Gagné et al. 2015). The inner binary components orbit each other in a 20.9 d eccentric orbit, and a ∼4.3 % deep secondary eclipse was observed with SPECULOOS in 2017. Because the system is viewed along the line of apsides (ω = −90°), no primary eclipse is visible (Triaud et al. 2020).

The 2M1510 system was observed in TESS full-frame images (FFI) in Sectors 11, 38, 65 and 91, at exposure times of 30 min and 10 min in Sectors 11 and 38, and 200 s in Sectors 65 and 91. According to the TESS Input Catalog (TIC), the 2M1510 AB binary system is designated TIC 61253912 with Tmag = 15.9, while the tertiary has the TIC identifier 61253915 with Tmag = 17.1. We use the publicly available software tglc (https://github.com/TeHanHunter/TESS_Gaia_Light_Curve) provided by the TESS-Gaia Light Curve project (Han & Brandt 2023) to download the available light curves for the binary system, which can produce light curves down to 16th TESS magnitude. The tglc software produces light curves by forward modelling the FFIs with the effective point-spread function (ePSF) based on position and magnitude data from Gaia DR3. As a consequence, light curves are automatically corrected for contamination from nearby stars.
Using their FFI forward model, light curves can be generated from either PSF photometry or aperture photometry. For faint stars in sparse fields like ours, the PSF method generally yields better precision (Han & Brandt 2023, Figure 10). To verify this we compute a metric similar to the combined differential photometric precision (CDPP, Christiansen et al. 2012). In effect, after correcting for low-frequency trends with the biweight filter in wotan (Hippke et al. 2019) and removing outliers with iterative 5σ clipping, we compute the rolling mean in chunks of 90 minutes, equal to the eclipse duration. We adopt the standard deviation of the rolling means as our CDPP proxy (Gilliland et al. 2011; Van Cleve et al. 2016). Indeed, we find that the PSF light curves led to an improvement of between 4 % and 25 % in the CDPP relative to the aperture photometry in all sectors except 38, where the CDPP was 6 % worse.

Han & Brandt (2023) advocate a weighted approach using both light curves to get the best precision. Following their work, we calculated the CDPP of the averaged light curve by increasing the fraction of PSF photometry in steps of 10 percentage points. On average, the weighted light curve improved the CDPP by a further 10 % compared to pure PSF photometry, and we found that the best weights for the PSF photometry were 0.5, 0.4, 0.8 and 0.5 for Sectors 11, 38, 65 and 91, respectively, with the remainder made up from aperture photometry. The CDPP of our final light curves ranged from 1.3 % to 1.4 % in a 90 minute window equal to the eclipse duration, which is a factor of >3 smaller than the expected eclipse depth. The light curves are shown in Figure 1.

[Figure 1: Light curves of 2M1510 AB observed by TESS in Sectors 11, 38, 65 and 91 (relative brightness against time, BJD − 2457000). Solid red lines indicate the positions of the transits predicted from Triaud et al. (2020); the red shaded regions indicate the 1σ uncertainty.]

Initially we removed all points flagged by both TESS and tglc with a quality flag larger than 0. After an initial analysis we noticed that an eclipse occurred at the end of Sector 65, indicated by the red vertical region at BJD ≈ 2460072.2. This region carries a non-zero quality flag in the tglc data product, but not in the TESS FFI data. To maximize the number of eclipses in our dataset we decided to include this region in our analysis.

3 METHODS
To identify approximate eclipse positions within each sector, we extrapolate forwards the eclipse times and errors from the period and secondary transit time of Triaud et al. (2020) to each TESS sector. This identifies five transit regions within the TESS data: one in Sector 11, one in Sector 38, two in Sector 65 and one in Sector 91. However, the last transit in Sector 91 contains only 32 exposures, covering roughly 4.8 % of the 1σ region of interest.

To search for periodic signals present in the TESS data we implement a box-least-squares search (BLS, Kovács et al. 2002; we use the astropy implementation, https://docs.astropy.org/en/stable/timeseries/bls.html) in the range of 5 to 25 days, searching for eclipses of durations from 60 to 120 minutes. This search includes the eclipse observed by SPECULOOS as well as all four TESS sectors. To ensure the detection was not purely the result of the accurate SPECULOOS data being combined with spurious signals in the TESS data, we perform a second BLS search on the TESS data alone.

We calculate a variation of the signal detection efficiency (SDE) by subtracting a smoothed mean from the raw BLS power spectrum, and then dividing the result by the standard deviation. To account for the significant noise increase at higher frequencies, we slice the periodogram into 20 different bins and calculate the standard deviation for each bin. The smooth trend of the standard deviation as a function of frequency was used as σBLS.
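A minimal sketch of this BLS search and SDE normalization follows, using the astropy implementation linked above. The synthetic arrays, smoothing window and binning choices are illustrative assumptions, not the paper's exact settings.

```python
# Sketch: BLS search over 5-25 d with 60-120 min durations, then an SDE
# computed by subtracting a smoothed mean and normalizing by a binned,
# frequency-dependent standard deviation.
import numpy as np
from astropy.timeseries import BoxLeastSquares

rng = np.random.default_rng(0)
time = np.sort(rng.uniform(0, 27, 2000))          # days, stand-in for one sector
flux = 1.0 + 0.013 * rng.normal(size=2000)        # ~1.3% scatter, as in the text

bls = BoxLeastSquares(time, flux)
periods = np.linspace(5.0, 25.0, 20000)           # 5-25 d search range
durations = np.linspace(60, 120, 5) / (24 * 60)   # 60-120 min, in days
result = bls.power(periods, durations)

power = result.power
smooth = np.convolve(power, np.ones(501) / 501, mode="same")   # smoothed mean
resid = power - smooth
sigma = np.concatenate([np.full(len(chunk), chunk.std())
                        for chunk in np.array_split(resid, 20)])  # 20 bins
sde = resid / sigma
print(periods[np.argmax(sde)], sde.max())
```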
We perform a least-squares fit to the data around the period identified from the BLS search, using model light curves generated by the software ellc (Maxted 2016). Following Triaud et al. (2020), we fix the surface brightness ratio J = 0.83 and the radius ratio k = 1, as they are degenerate for a grazing secondary eclipse. The latter was set because the mass ratio of the system is indistinguishable from unity, while the former is based on the flux ratio measured in the UVES spectra at 819 nm. The eccentricity e and argument of periastron ω – necessary to compute the inferior conjunction times from the time of secondary eclipse T_sec – were fixed to the values of Triaud et al. (2020) (see Table 1). We vary the cosine of the sky inclination cos i, the sum of radii (R1 + R2)/a, the orbital period P, and T_sec. We fix the limb darkening parameters, assuming a quadratic limb darkening law, and compute the coefficients for the SPECULOOS I + z′ and TESS bands with ldtk (https://github.com/hpparvi/ldtk; Parviainen & Aigrain 2015) using stellar parameters from Triaud et al. (2020).

We then use the MCMC sampler emcee to explore the parameter space (Foreman-Mackey et al. 2013). Walkers are initialised in a normal distribution centred on the values found from the least-squares fit. We use uniform priors, listed in Table 1. We include white noise terms and offset scaling factors for each transit, and account for the finite exposure time of the TESS data. In our fit we include the SPECULOOS eclipse as well as a 1σ uncertainty region around each TESS eclipse from the Triaud et al. (2020) solution, which encompasses several hours around the eclipse times found by the BLS search. We omit Sector 91 as it did not contain any data in the predicted eclipse window from our maximum likelihood solution. After an initial fit it became clear that the TESS data showed a deeper eclipse than SPECULOOS. Therefore, in the final MCMC, we add a separate brightness ratio for the TESS bandpass to act as a depth parameter, but caution that the value of this parameter is not to be interpreted physically, as the discrepancy in the depth is currently unexplained (Section 5.3). The final run uses 400 walkers. We test for convergence every 100 iterations, and stop once the autocorrelation factors have changed by <1 %, which results in 54 600 iterations. We discard the first 52 600 iterations, and the chains are thinned by a factor of 545 due to autocorrelation, leaving 1200 independent samples for each parameter.
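A structural sketch of such an emcee setup follows: walkers in a tight ball around the least-squares solution, uniform priors as bounds, and autocorrelation-based convergence checks. The log-likelihood here is a stand-in; in the actual analysis it would compare an ellc eclipse model to the SPECULOOS and TESS fluxes, and the bounds and scales are illustrative.

```python
# Sketch of the MCMC configuration described above (stand-in likelihood).
import numpy as np
import emcee

rng = np.random.default_rng(42)

# Free parameters: cos i, (R1+R2)/a, P [d], T_sec [BJD]; bounds illustrative.
bounds = np.array([[0.0, 0.1], [0.2, 0.4], [20.5, 21.3], [2457961.0, 2457962.0]])
theta_ls = np.array([0.027, 0.31, 20.8978, 2457961.535])   # least-squares fit

def log_prob(theta):
    if np.any(theta < bounds[:, 0]) or np.any(theta > bounds[:, 1]):
        return -np.inf                     # uniform priors
    # Stand-in likelihood: in the real analysis, the chi^2 of an ellc
    # light-curve model evaluated against the photometry goes here.
    return -0.5 * np.sum(((theta - theta_ls) / (0.1 * np.ptp(bounds, axis=1))) ** 2)

nwalkers, ndim = 400, 4
scales = np.array([1e-3, 1e-3, 1e-4, 1e-4])
p0 = theta_ls + scales * rng.normal(size=(nwalkers, ndim))

sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000, progress=False)
tau = sampler.get_autocorr_time(tol=0)             # convergence diagnostic
flat = sampler.get_chain(discard=500, thin=max(1, int(2 * tau.max())), flat=True)
print(np.percentile(flat, [16, 50, 84], axis=0))   # as reported in Table 1
```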
We test it for convergence every 100 iterations, and stop once the autocorrelation estimates have changed by <1 %, which results in 54 600 iterations. We discard the first 52 600 iterations, and the chains were thinned by a factor of 545 due to autocorrelation, leaving 1200 independent samples for each parameter.

4 RESULTS

The BLS periodogram from the joint search of SPECULOOS and TESS is shown in the top panel of Figure 2. The highest peak in the BLS periodogram corresponds to a period of P = 20.89776 d with an SDE value of 5.7. This period is within 1σ of the orbital solution based on radial velocity data from Triaud et al. (2020). In the bottom panel of Figure 2 we show the power spectrum of the BLS search on the TESS data alone, excluding SPECULOOS. This search found six peaks with an SDE value >7, attributed to the noisier TESS data. The most significant peak was again identified at P = 20.89776 d, with an SDE value of 11.5.

Table 1 presents the 16th, 50th and 84th percentiles of the resultant samples from the MCMC search. The orbital period is 20.897782 ± 0.000036 d, T_sec is 2457961.53520 ± 0.00026 d BJD_TDB, the orbital inclination is 88.47 ± 0.03° and the sum of radii is 0.3145 ± 0.0057. The median model from our posterior is shown in Figure 3. We calculate the S/N of the phase-folded light curve as S/N = δ√N/σ, with δ being the model depth, N the number of binned exposures inside the eclipse, and σ the individual uncertainty on a single binned 15 min exposure. We find a strong detection with S/N = 9.5.

The transit depth in the phase-folded data is deeper than SPECULOOS, at 8 % compared to 4.2 %, at a significance of 2.6σ. The potential causes of this are discussed in Section 5.3. To account for this the TESS data was fit with a separate surface brightness ratio value, which acts as a depth rescaling factor for the purposes of our work. It is not to be interpreted as a physical value. This resulted in a surface brightness ratio of 6.2 (+7.3/-3.5) for the TESS bandpass, compared to the fixed 0.83 (Triaud et al. 2020) for SPECULOOS I + z′. Despite its posterior being somewhat broad, the deeper model (red solid line, Figure 3) is a better fit to the data than the shallower model produced by using the SPECULOOS value for the surface brightness ratio (black dotted line). Similarly, the fit to the SPECULOOS data, which uses the surface brightness ratio from Triaud et al. (2020), produces a light curve in strong agreement.
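The quoted S/N follows directly from the formula above. A small worked example with purely illustrative numbers (the actual N and σ depend on how the folded light curve is binned):

import numpy as np

def eclipse_snr(depth, n_bins, sigma_bin):
    """S/N = depth * sqrt(N) / sigma for N binned exposures inside eclipse."""
    return depth * np.sqrt(n_bins) / sigma_bin

# Illustrative only: an 8 % deep eclipse sampled by six 15-min bins with a
# per-bin scatter of 2 % gives S/N ~ 9.8, of the same order as the 9.5
# reported from the real folded light curve.
print(eclipse_snr(0.08, 6, 0.02))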
5 DISCUSSION

5.1 Comparison to Triaud et al. (2020)

The new ephemeris is in strong agreement with Triaud et al. (2020), with the orbital period, T_sec, orbital inclination, and the sum of radii all within 1σ of the solution presented by Triaud et al. (2020). Given the higher S/N of the SPECULOOS data compared to TESS, the shape of the eclipse is strongly constrained by the former, and as a result the orbital inclination and the sum of radii are in strong agreement with Triaud et al. (2020). However, the inclusion of TESS data has reduced the errors on all four of the free parameters - the new T_sec has an error 2.5 times smaller, the new orbital inclination has an error 3.3 times smaller, and the sum of radii has an error 2.8 times smaller than the errors on the values from Triaud et al. (2020). Finally, the new period we present, at P = 20.897782 ± 0.000036 d, has an error of only 3 s, reduced from 8.5 min in Triaud et al. (2020).

5.2 Comparison to Baycroft et al. (2025)

Recently, Baycroft et al. (2025) published 22 new radial velocity (RV) observations of the system using the UVES spectrograph, extending the observed Doppler baseline of the system by up to four years. Using the new data and the 12 previously published epochs from the same instrument, the authors derived significantly more precise RVs by modelling the double-lined spectra using a Gaussian process regression framework (Sairam et al. 2024). Compared to the published data, the new analysis reduced the formal uncertainties on the RVs from ∼1.5 km s−1 to ∼50 m s−1, an improvement of a factor of ∼30. Baycroft et al. (2025) reported an orbital period of P = 20.907495 ± 0.000088 d, and found that including apsidal precession at a rate of ω̇ = −343 ± 126 ″/yr significantly improved the fit. The authors interpreted the retrograde direction of the precession as evidence of a planet orbiting the binary in a polar configuration.

The orbital period we have derived in this work is 14 min shorter than the recent Doppler result from Baycroft et al. (2025), a difference of 110σ. This significant discrepancy prompted us to study the results in more detail. First we compared the binary orbital solution to the published values of Triaud et al. (2020). While the orbital periods agree within errors, e and ω differ by 2.3σ and 4.2σ respectively, which affects the predicted eclipse timings. We adopted their orbital solution and back-propagated it in time to see if it correctly predicts the secondary eclipse at time T_sec = 2457961.53520 BJD_TDB that was observed with SPECULOOS in the original discovery paper (Triaud et al. 2020). We found that this orbital solution is not consistent with an eclipse at T_sec; the predicted secondary eclipse occurs 13.5 h after T_sec.

Next we replicated the analysis using the RV data and epochs (timestamps) included in Table 2 and Table 3 of Baycroft et al. (2025), and reached 1σ agreement with most of the reported Keplerian fit solution, confirming a negative value of ω̇. The two most discrepant parameters are the orbital period (2σ) and ω (1.2σ). The differences may arise from different treatment of nuisance parameters, but we are unable to confirm this based on the available information in the paper. We carried out our analysis with separate RV offsets and white noise terms for each component, and the white noise terms reached a modest ≤30 m s−1 for the two components, suggesting the formal uncertainties are fairly accurately estimated based on the best-fit model. However, we note that 22 out of 33 epochs are recorded in MJD despite being labelled BJD - meaning a 0.5 day offset has not been applied to these affected timestamps. We verified this by downloading the raw UVES data from the ESO archive and checking the timestamps directly. We find that this temporal shift may have important consequences for the fit.

We converted all MJD epochs (in the UTC time scale) from the raw UVES data to BJD_TDB so that they are consistent, and shifted the epochs to the mid-point of the exposures. We also re-applied the barycentric correction using the corrected epochs, but verified that whether or not we include this step in our analysis does not affect our final conclusions. We used these new data to fit the same Keplerian model as before, and find no evidence of apsidal precession.
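Since MJD = JD − 2400000.5, a timestamp that is actually MJD but treated as a truncated BJD is off by exactly half a day. The conversion we describe can be sketched with astropy as follows (the target coordinates are taken from the 2MASS designation; EarthLocation.of_site needs astropy's site registry available):

import astropy.units as u
from astropy.coordinates import EarthLocation, SkyCoord
from astropy.time import Time

paranal = EarthLocation.of_site("paranal")              # UVES is at the VLT
target = SkyCoord("15h10m47.9s -28d18m17s")             # from 2MASS J15104786-2818174

def mjd_utc_to_bjd_tdb(mjd_utc, exptime=0.0 * u.s):
    """MJD(UTC) at exposure start -> BJD(TDB) at mid-exposure."""
    t = Time(mjd_utc, format="mjd", scale="utc", location=paranal)
    t = t + 0.5 * exptime                               # shift to mid-exposure
    ltt = t.light_travel_time(target, kind="barycentric")
    return (t.tdb + ltt).jd                             # barycentric JD in TDB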
This result indicates that the negative value of ω̇ = −343 ± 126 ″/yr reported in Baycroft et al. (2025) materialised due to inconsistent timestamps, which suggests the purported circumbinary planet 2M1510 ABb is a false positive. However, our model resulted in a poor fit where the uncertainties on the data had to be inflated by 0.6 km s−1 in order for the residuals to behave like a normal distribution; an order of magnitude larger than the quoted uncertainties on the data.

Figure 3. Fits for the TESS and SPECULOOS data. a), b), c) and d) show the four identified eclipses in the TESS data, binned to 15 minutes. e) shows the eclipse in the SPECULOOS data, and f) shows the phase-folded TESS data of the 4 eclipses, binned to 15 minutes and centred on the mid-eclipse. The red line in each panel indicates the best fit solution from the MCMC run, with the red shaded region indicating its 1σ uncertainty. The black dotted line in f) represents the best fit solution whilst fixing the surface brightness ratio at 0.827 (i.e. the same as SPECULOOS).

Table 1. Priors and solutions for the MCMC search, and the solutions found by Triaud et al. (2020).

Variables and Units | Priors | MCMC results | Triaud et al. (2020)
Orbital Period (days) | U(20.892, 20.903) | 20.897782 ± 0.000036 | 20.9022 (+0.0059/-0.0056)
T_sec (BJD_TDB - 2457000) | U(961.533, 961.537) | 961.53520 ± 0.00026 | 961.53518 (+0.00064/-0.00061)
Inclination (°) | U(88, 90) | 88.466 ± 0.030 | 88.5 ± 0.1
Sum of Radii (R⊙) | U(0.178, 0.583) | 0.3145 ± 0.0057 | 0.3147 (+0.0159/-0.0157)
Surface Brightness Ratio J2/J1 = k² f2/f1 | Fixed | 0.827 | 0.827 ± 0.013
Radius Ratio k = R2/R1 | Fixed | 1 | 1
Eccentricity | Fixed | 0.309 | 0.309 ± 0.022
Argument of Periastron (°) | Fixed | -89.9 | -89.9 ± 3.3

This solution, however, also did not agree with our work nor with Triaud et al. (2020). We repeated the analysis by imposing a prior on the orbital period and secondary eclipse time in Table 1 from our photometric solution. In this case the fit got even worse; the RV "jitter" term required to reconcile the two datasets is 4 km s−1. We experimented with a range of non-Keplerian effects such as a linear and quadratic trend, changes in inclination (di/dt), orbital period (dP/dt), and apsidal precession (dω/dt), none of which resulted in a satisfactory solution.
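The role of the jitter term in these fits can be made concrete: it is an extra white-noise component added in quadrature to the formal errors inside a Gaussian log-likelihood, as in the generic sketch below (not tied to any particular RV package).

import numpy as np

def log_likelihood(resid, err, jitter):
    """Gaussian log-likelihood of RV residuals with a white-noise
    'jitter' term added in quadrature to the formal uncertainties."""
    s2 = err**2 + jitter**2
    return -0.5 * np.sum(resid**2 / s2 + np.log(2.0 * np.pi * s2))

# When the preferred jitter (0.6-4 km/s here) exceeds the formal ~50 m/s
# errors by an order of magnitude or more, the quoted uncertainties
# cannot be taken at face value.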
Some potential causes for the high jitter value may be incorrect assumptions in our correction of the published RVs, or systematic noise sources not accounted for in the RV computation, such as wavelength calibration drifts over long baselines (e.g. Sacco et al. 2014), or the sensitivity of the derived RVs to the choice and treatment of spectral lines used in the analysis. A deeper investigation into whether the RV data can be explained by any other Keplerian signal or secular effect is outside the scope of this Letter. We conclude that the uncertainties on the RV data presented in Baycroft et al. (2025) are underestimated by 0.6-4 km s−1 in their current form, and therefore the results derived from them cannot be compared to the photometric data in a meaningful way. Our conclusions are based on the assumption that our correction to the radial velocities is accurate, but they should be confirmed by recomputing the RVs with consistent timestamps and with treatment of other sources of uncertainty.

5.3 Eclipse depth difference between SPECULOOS and TESS

The phase-folded eclipse in TESS is deeper than the SPECULOOS eclipse by about 4 percentage points, at a significance of 2.6σ, as shown in Figure 3. Here we discuss a few possibilities for this apparent difference.

Contamination in TESS: The eclipses in Sector 11 and the latter half of Sector 65 occur near the edges of the light curves, which coincide with the end of a TESS orbit. These regions are frequently associated with increased noise due to stray light from the Earth and the Moon. Light curves provided by tglc are corrected for the background and for contamination from nearby stars; however, the correction may suffer as the background becomes brighter (Han & Brandt 2023). This could result in an over-correction of the contamination flux, which could in principle lead to deeper eclipses. Due to the low S/N of individual eclipses we are not able to study their variation across sectors, but variation in flux contamination is thought to be behind a range of depth discrepancies for both transiting exoplanets (Bryant et al. 2020; Han et al. 2025) and eclipsing binaries (Bryant et al. 2023) observed with TESS.

Contamination in SPECULOOS: It is not currently known whether the original SPECULOOS light curve was corrected for the flux from 2M1510 C, or whether this was necessary given the final aperture size. Given an atmospheric seeing of 1 ″ and SPECULOOS' pixel scale of 0.35 ″/pix, the FWHM of the PSF is about 2.9 pixels. Using a conservative 3 × FWHM aperture radius gives a 3 ″ aperture, making it unlikely that 2M1510 C, at a separation of 6.8 ″, contributes any significant flux into the aperture. On the off-chance that the real aperture was larger, such that all of the light from 2M1510 C fell into the aperture of 2M1510 AB, we can estimate what the true eclipse depth would be. We assume that the Gaia RP band is similar to the SPECULOOS I + z′, and estimate that, given the Gaia RP magnitudes for the binary and tertiary (G_rp = 15.90 and 17.3), the flux ratio is f3/(f1 + f2) = 0.28. The true eclipse depth of the binary would then be roughly 5.4 % instead of the currently observed 4.2 %. This value does reconcile the two depths to within ∼1σ, although it is an upper limit of the effect.
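The numbers in this estimate follow from the magnitude difference alone:

# Tertiary-to-binary flux ratio from the Gaia RP magnitudes, and the
# undiluted depth if all of 2M1510 C's light fell inside the aperture.
f_ratio = 10 ** (-0.4 * (17.3 - 15.90))   # ~0.28
depth_true = 0.042 * (1.0 + f_ratio)      # ~0.054, i.e. roughly 5.4 %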
Filter response differences: We investigated whether the difference in the response functions between I + z′ and the TESS filter could account for the depth discrepancy, as the two brown dwarfs have slightly different effective temperatures. For this we calculated the total transmission for the SPECULOOS observations as a product of the I + z′ filter, the quantum efficiency curve of the detector, and the atmospheric extinction using the ESO Sky Model Calculator (https://www.eso.org/observing/etc/bin/gen/form?INS.MODE=swspectr+INS.NAME%3DSKYCALC). We retrieved BT Settl atmospheric models of the two brown dwarfs assuming effective temperatures of 2600 K and 2700 K, with log g = 4.5 and solar metallicity. We calculated the expected flux within the TESS filter and the SPECULOOS total throughput and find that the flux ratio of the two brown dwarfs in each filter differs by only 1 % in relative terms, and can therefore not explain the difference in depth.

6 CONCLUSION

We have studied TESS full-frame image (FFI) photometry spanning 7 years of the double brown dwarf eclipsing binary 2M1510 AB and detected a significant periodic signal using a box-least-squares (BLS) search. We find a new value for the period, P = 20.897782 ± 0.000036 d, which is in agreement with the period from Triaud et al. (2020), whilst reducing its uncertainty from 8.5 min to 3 s. The TESS data favour a deeper eclipse (8 %) than SPECULOOS (4.2 %), which we cannot explain by filter differences nor by orbital evolution due to the tertiary 2M1510 C. However, we cannot rule out that the discrepancy arises from imperfect background correction of the original SPECULOOS or TESS images. Further, this only affects the shape of the light curve and not its position, and therefore does not affect the result for the period. Current data available at the time of writing do not indicate the presence of a circumbinary planet. Our work is crucial for scheduling follow-up observations of this system to allow for further study of the eclipse timing variations or secular evolution of the orbit, for example with JWST.

ACKNOWLEDGEMENTS

We thank the referee, Keivan Stassun, for their careful reading of the manuscript and for the constructive comments and suggestions which have helped improve the clarity and quality of the paper. SM is grateful to the Department of Physics at the University of Warwick for funding through the Undergraduate Research Support Scheme (URSS). VK acknowledges funding from the Royal Society through a Newton International Fellowship with grant number NIF\R1\232229, and is grateful to Dan Bayliss, Ed Bryant, Tom Killestein, Tom Baycroft, Lalitha Sairam and Amaury Triaud for insightful discussions.

DATA AVAILABILITY

The TESS FFIs presented in this Letter can be accessed directly from the online Mikulski Archive for Space Telescopes (MAST) portal (https://mast.stsci.edu/portal/Mashup/Clients/Mast/Portal.html), while the processed light curves for 2M1510 are more easily accessed using the publicly available tglc software (https://github.com/TeHanHunter/TESS_Gaia_Light_Curve). The SPECULOOS photometry was first published in Triaud et al. (2020) and is available upon request.

REFERENCES

Baycroft T. A., Sairam L., Triaud A. H. M. J., Correia A. C. M., 2025, Science Advances, 11, eadu0627
Brandt G. M., et al., 2021, AJ, 162, 301
Bryant E. M., et al., 2020, Monthly Notices of the Royal Astronomical Society, 499, 3139
Bryant E. M., Bayliss D., Van Eylen V., 2023, Monthly Notices of the Royal Astronomical Society, 521, 3663
Chabrier G., Johansen A., Janson M., Rafikov R., 2014, in Beuther H., Klessen R. S., Dullemond C. P., Henning T., eds, Protostars and Planets VI. pp 619-642 (arXiv:1401.7559), doi:10.2458/azu_uapress_9780816531240-ch027
Christiansen J. L., et al., 2012, Publications of the Astronomical Society of the Pacific, 124, 1279
Gaia Collaboration, et al., 2023, Astronomy and Astrophysics, 674, A1
David T. J., Hillenbrand L. A., Gillen E., Cody A. M., Howell S. B., Isaacson H. T., Livingston J. H., 2019, ApJ, 872, 161
Delrez L., et al., 2018, in Marshall H. K., Spyromilio J., eds, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series Vol. 10700, Ground-based and Airborne Telescopes VII. p. 107001I (arXiv:1806.11205), doi:10.1117/12.2312475
Foreman-Mackey D., Hogg D. W., Lang D., Goodman J., 2013, Publications of the Astronomical Society of the Pacific, 125, 306
Franson K., Bowler B. P., Brandt T. D., Dupuy T. J., Tran Q. H., Brandt G. M., Li Y., Kraus A. L., 2022, AJ, 163, 50
Gagné J., et al., 2015, The Astrophysical Journal Supplement Series, 219, 33
Gilliland R. L., et al., 2011, The Astrophysical Journal Supplement Series, 197, 6
Gizis J. E., 2002, The Astrophysical Journal, 575, 484
Han T., Brandt T. D., 2023, The Astronomical Journal, 165, 71
Han T., Robertson P., Brandt T. D., Kanodia S., Cañas C., Shporer A., Ricker G., Beard C., 2025, The Astrophysical Journal Letters, 988, L4
Hippke M., David T. J., Mulders G. D., Heller R., 2019, The Astronomical Journal, 158, 143
Hsu C.-C., Burgasser A. J., Theissen C. A., 2023, ApJ, 945, L6
Kovács G., Zucker S., Mazeh T., 2002, Astronomy and Astrophysics, 391, 369
Maxted P. F. L., 2016, Astronomy and Astrophysics, 591, A111
Parviainen H., Aigrain S., 2015, Monthly Notices of the Royal Astronomical Society, 453, 3821
Ricker G. R., et al., 2015, Journal of Astronomical Telescopes, Instruments, and Systems, 1, 014003
Sacco G. G., et al., 2014, Astronomy and Astrophysics, 565, A113
Sairam L., et al., 2024, Monthly Notices of the Royal Astronomical Society, 527, 2261
Stassun K. G., Mathieu R. D., Valenti J. A., 2006, Nature, 440, 311
Stassun K. G., Mathieu R. D., Valenti J. A., 2007, The Astrophysical Journal, 664, 1154
Triaud A. H. M. J., et al., 2020, Nature Astronomy, pp 1-8
Van Cleve J. E., et al., 2016, Publications of the Astronomical Society of the Pacific, 128, 075002
Xuan J. W., et al., 2024, Nature, 634, 1070

APPENDIX A: PREDICTED ECLIPSE TIMINGS

Table A1. Predicted mid-points of secondary eclipses until the end of 2026. Uncertainties on each prediction range from 9-10 minutes. (a) Since BJD_UTC 2457961.53255.

Epoch(a) | Datetime (UTC) | BJD (UTC)
144 | 22 Oct 2025 07:30:56 | 2460970.81316
145 | 12 Nov 2025 05:03:45 | 2460991.71094
146 | 03 Dec 2025 02:36:33 | 2461012.60872
147 | 24 Dec 2025 00:09:21 | 2461033.50650
148 | 13 Jan 2026 21:42:10 | 2461054.40429
149 | 03 Feb 2026 19:14:58 | 2461075.30207
150 | 24 Feb 2026 16:47:47 | 2461096.19985
151 | 17 Mar 2026 14:20:35 | 2461117.09763
152 | 07 Apr 2026 11:53:23 | 2461137.99541
153 | 28 Apr 2026 09:26:12 | 2461158.89320
154 | 19 May 2026 06:59:00 | 2461179.79098
155 | 09 Jun 2026 04:31:48 | 2461200.68876
156 | 30 Jun 2026 02:04:37 | 2461221.58654
157 | 20 Jul 2026 23:37:25 | 2461242.48432
158 | 10 Aug 2026 21:10:13 | 2461263.38211
159 | 31 Aug 2026 18:43:02 | 2461284.27989
160 | 21 Sep 2026 16:15:50 | 2461305.17767
161 | 12 Oct 2026 13:48:39 | 2461326.07545
162 | 02 Nov 2026 11:21:27 | 2461346.97323
163 | 23 Nov 2026 08:54:15 | 2461367.87102
164 | 14 Dec 2026 06:27:04 | 2461388.76880
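The entries of Table A1 follow from a linear ephemeris on the BJD_UTC reference epoch given in the table note; a sketch:

import numpy as np

P = 20.897782                 # orbital period (days)
T0 = 2457961.53255            # reference secondary eclipse, BJD_UTC (table note)

epochs = np.arange(144, 165)
bjd_pred = T0 + epochs * P    # epoch 144 -> 2460970.81316, matching row one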
Table A2. Orbital parameters and the physical characteristics used for the emcee fits, as well as their priors.

Nuisance parameter | Prior | Result (TESS and SPECULOOS)
Limb darkening parameters (u1, u2), SPECULOOS | Fixed | [0.26, 0.44]
Limb darkening parameters (u1, u2), TESS | Fixed | [0.36, 0.45]
log(f), SPECULOOS | U(-20.0, 0.0) | -6.19 ± 0.13
log(f), Sector 11 | U(-20.0, 0.0) | < -1.2
log(f), Sector 38 | U(-20.0, 0.0) | < -1.2
log(f), Sector 65 (1st transit) | U(-20.0, 0.0) | < -1.2
log(f), Sector 65 (2nd transit) | U(-20.0, 0.0) | < -1.2
Mean offset, SPECULOOS | U(0.8, 1.2) | 1.00004 ± 0.00030
Mean offset, Sector 11 | U(0.8, 1.2) | 0.9969 ± 0.0071
Mean offset, Sector 38 | U(0.8, 1.2) | 1.0074 ± 0.0043
Mean offset, Sector 65 (1st transit) | U(0.8, 1.2) | 1.0008 ± 0.0033
Mean offset, Sector 65 (2nd transit) | U(0.8, 1.2) | 0.9998 ± 0.0035
Surface brightness, TESS | U(0.0, 20.0) | 6.2 (+7.3/-3.5)

This paper has been typeset from a TEX/LATEX file prepared by the author.
arXiv:2510.14795v1 [math.AG] 16 Oct 2025

ON THE FINITENESS OF LOG SURFACES

DANIIL SEREBRENNIKOV

Abstract. We prove that a log surface has only finitely many weakly log canonical projective models with klt singularities up to log isomorphism, by reducing the problem to the boundedness of their polarization.

Contents: 1. Introduction. 2. Preliminaries. 3. Isotrivial families. 4. Main results. 5. Applications. References.

1. Introduction

Let X be a projective variety over the field of complex numbers C. We study a classical problem about the number of normal (Q-factorial) projective varieties X_α such that each X_α is birational to X, and the canonical divisor K_{X_α} is nef, that is, K_{X_α} · C ≥ 0 for all curves C ⊂ X_α. In general, this number is not expected to be finite unless we consider only mild singularities of X_α. For instance, one may restrict to terminal singularities (see Definition 2.1). In this case the varieties X_α are called minimal models of X. More precisely, the following conjecture was proposed by Kawamata and Matsuki [Mat02, Conjecture 12.3.6]:

Conjecture 1.1. The number of projective minimal models in a fixed birational class is finite up to isomorphism.

It is natural to extend this conjecture to a class of crepant birationally equivalent log pairs, namely a 0-class (see Definition 2.3).

Conjecture 1.2. The number of projective wlc klt models in a fixed 0-class is finite up to log isomorphism.

In the present paper we show that this conjecture holds for log surfaces.

Theorem 1.3. Conjecture 1.2 holds for log surfaces.

Let (X, B) be a projective log surface. The most delicate case of Conjecture 1.2 for log surfaces is when K_X + B ≡ 0, namely, Calabi–Yau pairs. Our proof reduces the finiteness problem for Calabi–Yau pairs to the boundedness of polarizations on their models. To be precise, consider the class C of all projective wlc klt models (X_α, B_α) that are crepant birationally equivalent to (X, B). Let D be the class of all varieties X_α corresponding to the pairs (X_α, B_α) in C. The following theorem is the main result of this work.

Theorem 1.4. If the class D has bounded polarization, then the class C contains only finitely many log surfaces up to log isomorphism.

Remark 1.5. The restriction on singularities in Conjecture 1.2 cannot be weakened to log canonical singularities. Consider the log pair (X, B), where B is a smooth curve of genus 1, and X = P^2_C. Since the blow-up of every point p ∈ B adds to the boundary a divisor with coefficient 0, the Calabi–Yau pair (X, B) has infinitely many pairwise non-isomorphic projective wlc models in its 0-class.
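For concreteness, the computation behind Remark 1.5 is the standard crepant pullback: if f : Y → X is the blow-up of a point p ∈ B with exceptional curve E, then
\[
  K_Y = f^*K_X + E, \qquad
  f^*B = \widetilde{B} + \operatorname{mult}_p(B)\,E = \widetilde{B} + E,
\]
so the boundary B_Y determined by K_Y + B_Y = f^*(K_X + B) is
\[
  B_Y = f^*B - E = \widetilde{B} + (\operatorname{mult}_p(B) - 1)\,E = \widetilde{B} + 0 \cdot E,
\]
since mult_p(B) = 1 at a smooth point of the curve B. As B is a plane cubic, K_X + B ≡ 0, so each (Y, B_Y) is again a projective wlc model, and blow-ups at distinct points of B yield pairwise non-isomorphic models.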
Acknowledgements. The author is grateful to Professor Shokurov for the ideas underlying this work, guidance, and support.

2. Preliminary

Throughout, we work over the field of complex numbers C and follow the standard terminology of [IS05] unless stated otherwise. By a variety we mean an integral separated scheme of finite type over C. On a normal variety, a prime divisor is understood in the Weil sense, i.e. an integral closed subscheme of codimension 1. In every part of the paper, for an R-divisor D we write D = Σ_i d_iD_i, where the D_i are distinct prime divisors and d_i ∈ R.

A pair (X, B) consists of a normal variety X and an R-divisor B on X. We say that (X, B) is a log pair if K_X + B is an R-Cartier divisor. Suppose B = Σ_i b_iB_i, and Γ ⊆ R is a set. If b_i ∈ Γ for all i then we write B ∈ Γ. Denote by [0, 1] the unit segment of real numbers. The R-divisor B is called a boundary if B ∈ [0, 1]. A pair (X, B) is called projective (resp. of dimension d) if X is projective (resp. of dimension d). The pairs (X₁, B₁) and (X₂, B₂) are said to be log isomorphic if there exists an isomorphism φ: X₁ → X₂ such that φ_*(B₁) = B₂; we then write (X₁, B₁) ≈ (X₂, B₂). We use the notation ≅ instead of ≈ if the isomorphism is natural.

Definition 2.1. Let (X, B) be a log pair with B = Σ b_iB_i. We say that (X, B) has the following singularities when the corresponding inequalities hold:
• terminal (trm) ⟺ dis(X, B) > 0;
• canonical ⟺ dis(X, B) ≥ 0;
• Kawamata log terminal (klt) ⟺ dis(X, B) > −1 and b_i < 1 for all i;
• ϵ-Kawamata log terminal (ϵ-klt) ⟺ dis(X, B) > −1 + ϵ and b_i < 1 − ϵ for all i;
• log canonical ⟺ dis(X, B) ≥ −1 and b_i ≤ 1 for all i.
We refer the reader to [IS05] for the details of these definitions.

Definition 2.2. A log pair (X, B) is called a weakly log canonical model (wlc model) if the following hold:
• X is a proper variety, and B is a boundary.
• (X, B) has log canonical singularities.
• K_X + B is nef.
If (X, B) has klt (resp. terminal) singularities, then (X, B) is called a wlc klt model (resp. a wlc trm model). If, in addition, K_X + B ≡ 0, then (X, B) is called a 0-pair (in the literature, 0-pairs are also called Calabi–Yau pairs).

Definition 2.3. We say that a log pair (Xα, Bα) is crepant birationally equivalent to (X, B) if the varieties Xα and X are birationally equivalent, and for each common log resolution (Y, D) of these pairs the following equalities hold:
K_Y + D = f^*(K_X + B) = f_α^*(K_{Xα} + Bα), B = f_*D, Bα = (f_α)_*D,
where f: Y → X and f_α: Y → Xα are the corresponding log resolutions. The class of all log pairs crepant birationally equivalent to (X, B) is called the 0-class of the log pair (X, B).

Remark 2.4. If (X, B) is a wlc klt model (resp. 0-pair), then each log pair (Xα, Bα) crepant birationally equivalent to (X, B) with effective R-divisor Bα is also a wlc klt model (resp. 0-pair).

Example 2.5. Let X be a smooth projective K3 surface containing infinitely many distinct (−2)-curves C_i, that is, smooth rational curves with C_i² = −2. Let f_i: X → X_i be the contraction of C_i. Then each log pair (X_i, 0) is a klt 0-pair in the 0-class of (X, 0).

Example 2.6. Let X = P²_C and B = ½ Σ_{i=1}^{6} L_i, where {L_i}_{i=1}^{6} is the set of six lines passing through four points in general position. Then (X, B) is a klt 0-pair. Let f_{ij}: X_{ij} → X be the blow-up of a point p ∈ L_i ∩ L_j, and let B_{ij} be the R-divisor defined by K_{X_{ij}} + B_{ij} = f_{ij}^*(K_X + B). Then for all i ≠ j the log pair (X_{ij}, B_{ij}) is a klt 0-pair in the 0-class of (X, B); a numerical check is sketched below.
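Here is the numerical check just referred to (our sketch; it uses the discrepancy conventions of Definition 2.1):

\[
\deg(K_X + B) = -3 + 6\cdot\tfrac{1}{2} = 0, \qquad\text{so}\quad K_X + B \equiv 0,
\]
\[
\operatorname{mult}_p(B) =
\begin{cases}
2\cdot\tfrac{1}{2} = 1, & p \text{ lies on exactly two of the lines},\\
3\cdot\tfrac{1}{2} = \tfrac{3}{2}, & p \text{ is one of the four base points},
\end{cases}
\qquad
\operatorname{coeff}_E(B_{ij}) = \operatorname{mult}_p(B) - 1 \in \{0, \tfrac{1}{2}\}.
\]

In both cases every coefficient of B_{ij} is strictly less than 1 and K_{X_{ij}} + B_{ij} = f_{ij}^*(K_X + B) ≡ 0, so (X_{ij}, B_{ij}) is indeed a klt 0-pair.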
Let f: X → S be a morphism of schemes and s ∈ S a point. We denote the scheme-theoretic fiber by X_s = X ×_S s. If L is a line bundle on X, then its restriction to the fiber X_s is denoted by L_s = L|_{X_s}. When no ambiguity arises, we denote a morphism f: X → S simply by X/S.

Definition 2.7. Suppose f: 𝒳 → S is a proper flat morphism between varieties, and the base S is non-singular. We say that 𝒳/S is a family of varieties if every fiber 𝒳_s is a variety. If, in addition, the morphism f is projective, then 𝒳/S is called a projective family of varieties.

Consider a family of varieties 𝒳/S and an R-divisor ℬ = Σ b_iℬ_i on 𝒳. In general, the restriction of ℬ to a fiber 𝒳_s is not defined, and even when it is, the definition is not straightforward. For our purposes it is convenient to work with elementary families on which ℬ_s is defined.

Definition 2.8. Let (𝒳, ℬ) be a pair and f: 𝒳 → S be a family of varieties. Suppose ℬ = Σ_{i=1}^{n} b_iℬ_i is a decomposition of the R-divisor into distinct prime divisors. We say that (𝒳/S, ℬ) is an elementary family if for every point s ∈ S the following hold:
(0) The fiber 𝒳_s is a normal variety.
(1) The restriction f|_{ℬ_i}: ℬ_i → S is flat for all i.
(2) The closed subscheme (ℬ_i)_s = ℬ_i ×_S s is a prime divisor on 𝒳_s for all i.
(3) The prime divisors (ℬ_i)_s and (ℬ_j)_s are distinct whenever i ≠ j.
In this situation, we put ℬ_s = Σ_i b_i(ℬ_i)_s. If, in addition, the morphism f is projective, then (𝒳/S, ℬ) is called a projective elementary family.

Definition 2.9. A class D of projective varieties Xα is called bounded if there exist finitely many projective families 𝒳^{(j)}/S^{(j)} such that:
(1) For every Xα in D there is an index j and a closed point sα ∈ S^{(j)} with 𝒳^{(j)}_{sα} ≈ Xα.
(2) For each index j and every closed point s ∈ S^{(j)} there exists Xα in D with 𝒳^{(j)}_s ≈ Xα.

Definition 2.10. A class C of projective pairs (Xα, Bα) is called bounded if there exist finitely many projective elementary families (𝒳^{(j)}/S^{(j)}, ℬ^{(j)}) such that:
(1) For every (Xα, Bα) in C there is an index j and a closed point sα ∈ S^{(j)} such that (Xα, Bα) ≈ (𝒳^{(j)}_{sα}, ℬ^{(j)}_{sα}).
(2) For each index j and every closed point s ∈ S^{(j)} there exists a pair (Xα, Bα) in C such that (𝒳^{(j)}_s, ℬ^{(j)}_s) ≈ (Xα, Bα).

Definition 2.11. A class C of projective pairs (Xα, Bα) has bounded polarization if there exist a positive integer N ∈ N and a finite subset Γ ⊂ R such that Bα ∈ Γ, and each Xα admits a very ample Cartier divisor Hα satisfying the following inequalities:
(Bα)_red · Hα^{dim Xα − 1} ≤ N, Hα^{dim Xα} ≤ N,
where (Bα)_red = Σ_{V ∈ supp(Bα)} V, supp(Bα) = {B_{iα} : b_{iα} ≠ 0}, and Bα = Σ_i b_{iα}B_{iα}. Similarly, a class of projective (not necessarily normal) varieties Xα has bounded polarization if there exist a positive integer N ∈ N and very ample Cartier divisors Hα on Xα such that Hα^{dim Xα} ≤ N.

Remark 2.12. As the intersection (Bα)_red · Hα^{dim Xα − 1} is bounded, the set {# supp(Bα)} ⊆ N is also bounded. We write #A for the number of elements of a finite set A.
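As a simple illustration of Definition 2.11 (our example, not taken from the paper), the class of smooth cubic surfaces has bounded polarization:

\[
D = \{\, S \subset \mathbb{P}^3_{\mathbb{C}} \;:\; S \text{ a smooth cubic surface} \,\}, \qquad
H_S = \mathcal{O}_S(1), \qquad
H_S^{\dim S} = H_S^2 = \deg S = 3,
\]

so N = 3 works; for the corresponding pairs (S, 0) the boundary conditions hold vacuously, since (B_S)_red · H_S = 0 ≤ N.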
For ease of reference, we recall the following general result proven by Grothendieck.

Proposition 2.13. [GW20, Appendix E] Let X → S be a flat proper morphism between Noetherian schemes. Then the following sets are open in S:
• {s ∈ S : X_s is geometrically integral};
• {s ∈ S : X_s is geometrically normal}.

For the reader's convenience, we prove the following statement, which often appears in the literature without proof.

Lemma 2.14. Let D be a class of projective varieties of fixed dimension d ∈ N. If D has bounded polarization, then D is a subclass of a bounded class of projective varieties.

Proof. We construct a projective morphism f: 𝒳 → S between quasi-projective schemes such that for every variety Xα in D there exists a closed point sα ∈ S together with a closed embedding φα: Xα ↪ P^{N+d}_C such that φα(Xα) = 𝒳_{sα}. By the definition of a class with bounded polarization, there exist a positive integer N and very ample divisors Hα on Xα such that Hα^d ≤ N. From Matsusaka's inequalities [Kol96, Chapter VI, Exercise 2.15.8.5] it follows that
dim_C H⁰(Xα, Hα) ≤ Hα^d + d ≤ N + d < +∞.
Hence for every Xα ∈ D there is an embedding Xα ↪ P^{dim |Hα|}_C ⊆ P^{N+d}_C such that deg(Xα) = (Hα)^d ≤ N. Without loss of generality, we may assume that all varieties Xα in D have degree N in P^{N+d}_C. According to [Kol96, Chapter I, Theorem 3.21] there is a universal family
Univ_{d,N}(P^{N+d}_C) → Chow_{d,N}(P^{N+d}_C)
parametrizing d-dimensional effective cycles of degree N in P^{N+d}_C. Let S be the closure of the set {[Xα]} ⊂ Chow_{d,N}(P^{N+d}_C), endowed with the structure of a reduced closed subscheme. Then the projection
f: 𝒳 = Univ_{d,N}(P^{N+d}_C) ×_{Chow_{d,N}(P^{N+d}_C)} S → S
is a projective morphism. Without loss of generality we may assume that the base S is irreducible. By the generic flatness theorem [FGI+05, Theorem 5.12] for a morphism of finite type (over an integral Noetherian scheme), the base can be written as a finite union of locally closed subschemes S = ∪ S_i such that the restriction 𝒳 ×_S S_i → S_i is flat. After replacing the base by ⊔ S_i → ∪ S_i we obtain a flat projective morphism f: 𝒳 → S. By Proposition 2.13, the set {s ∈ S : 𝒳_s is geometrically integral} is open in S. Therefore, after shrinking the base, we may assume that 𝒳_s is a variety for every closed point s ∈ S. As the singular locus of S is closed, after a further stratification and base change we may assume that S is non-singular. Set 𝒳_j = 𝒳 ×_S S_j, where {S_j} is the finite set of irreducible components of the base. Then each 𝒳_j/S_j is a projective family of varieties, and every variety Xα in D is isomorphic to some fiber of one of these families. □

Remark 2.15. In general, a subclass of a bounded class of projective varieties need not be bounded in the sense of Definition 2.9. For example, consider the class of all elliptic curves with rational j-invariant.

At the beginning of the main section (see Section 4), we prove the following proposition, which is a modification of Lemma 2.14.

Proposition 2.16. Let C be a class of projective pairs of fixed dimension d ∈ N. If C has bounded polarization, then C is a subclass of a bounded class of projective pairs.

3. Isotrivial families

Let (𝒳/S, ℬ) be a projective elementary family, and let (X, B) be a pair. Define the sets
Iso_X(𝒳/S) = {s ∈ S : 𝒳_s ≈ X}, Iso_{(X,B)}(𝒳/S, ℬ) = {s ∈ S : (𝒳_s, ℬ_s) ≈ (X, B)}.

Throughout this section, we work with a fixed log pair (X, B) of dimension 2, namely a log surface. Suppose that (X, B) is a projective trm 0-pair. Since X has only terminal singularities, X is a smooth surface [KM98, Theorem 4.5]. We also fix a projective elementary family (𝒳/S, ℬ) such that
• f: 𝒳 → S is a smooth projective family of varieties;
• (𝒳, ℬ) is a terminal log pair, and K_𝒳 + ℬ ∼_R 0;
• Iso_{(X,B)}(𝒳/S, ℬ) is dense in S.
In this section, we show that (𝒳/S, ℬ) is an isotrivial family over an open dense subset of S. The analogous statement [Xu25, Theorem 4.12] is proven for certain 0-pairs in dimension 3 with boundary having rational coefficients. The proof in loc. cit. relies on techniques from [Amb05, Section 2].

Definition 3.1. We say that a morphism τ: U → S is an étale base change if the image τ(U) is an open dense subset of S, and τ: U → τ(U) is an étale covering, that is, an étale, finite, surjective morphism.

For an elementary family (𝒳/S, ℬ) with ℬ = Σ b_iℬ_i we set (𝒳, ℬ) ×_S U = (𝒳 ×_S U, Σ b_i(ℬ_i ×_S U)). It is clear that (𝒳, ℬ) ×_S U is an elementary family over U.

Theorem 3.2. There is an étale base change U → S such that (𝒳, ℬ) ×_S U ≈ (X, B) ×_C U over U.

Proof. We split the proof into two cases according to whether the boundary ℬ has rational or real coefficients.

Case 1. The boundary ℬ is a Q-divisor. We introduce the setup from [Amb05, Section 2] and adapt it to our situation. For an R-divisor D = Σ_i d_iD_i we define ⌊D⌋ = Σ_i ⌊d_i⌋D_i, and {D} = D − ⌊D⌋. Note that the variety 𝒳 is nonsingular, as 𝒳/S is a smooth family with nonsingular base S.
(1) Let µ: 𝒴 → 𝒳 be an equivariant resolution of 𝒳 with respect to ℬ_red (cf.
[Amb05, Lemma 1.1]), let K_𝒴 + 𝒟 = µ^*(K_𝒳 + ℬ) be the log pullback, and set ℰ = {𝒟}_red. We can assume that the log pair (𝒴, 𝒟) is log smooth over an open dense subset in S by generic smoothness.
(2) Let b be the minimal positive integer such that b(K_𝒳 + ℬ) ∼ 0 over the generic point of S, and choose a rational function φ ∈ k(𝒳) with b(K_𝒳 + ℬ) = (φ). Let π: Ỹ → 𝒴 be the normalization of 𝒴 in k(𝒴)(φ^{1/b}), that is, the index-one cover.
(3) Let ν: 𝒱 → Ỹ be an equivariant resolution with respect to Sing(Ỹ), i.e. the singular locus of Ỹ. Let h: 𝒱 → S be the induced morphism. Without loss of generality, it can be assumed that the morphism h is smooth, and R²h_*Z_𝒱 has a global section inducing a polarization on each fiber of h.
(4) Let P be the period domain, and let Γ be the monodromy group associated with the variation of polarized Hodge structure (R²h_*C_𝒱)_prim ⊗ O_S. Define Φ: S → Γ\P to be the associated period map.
(5) Let κ_s: T_{S,s} → H¹(𝒴_s, T_{𝒴_s}⟨−ℰ_s⟩) and κ_{𝒱_s}: T_{S,s} → H¹(𝒱_s, T_{𝒱_s}) be the induced Kodaira–Spencer classes. These maps have the same kernel for general closed points s ∈ S, equal to T_{Φ^{-1}(Φ(s)),s} ⊆ T_{S,s} [Amb05, Proposition 2.1].
(The objects above fit into a commutative diagram of morphisms ν: 𝒱 → Ỹ, π: Ỹ → 𝒴, µ: 𝒴 → 𝒳, f: 𝒳 → S, h: 𝒱 → S.)

The following lemma is a summary of results in [Xu25, Section 4] with some modifications.

Lemma 3.3. Assume that Ker(κ_s) = 0 for general points s ∈ S. Then the base S is a single point.

Proof. To be definite, we put (X, B) = (𝒳_{s₀}, ℬ_{s₀}) for a general point s₀ ∈ S. Since (𝒳/S, ℬ) is an elementary family, we may assume that (𝒴/S, 𝒟) is a projective elementary family after shrinking the base. After passing to an open dense subset, we can assume that the induced morphism µ_s: 𝒴_s → 𝒳_s is an equivariant resolution with respect to (ℬ_s)_red for every s ∈ S. Hence Iso_{(X,B)}(𝒳/S, ℬ) ⊆ Iso_{(𝒴_{s₀},𝒟_{s₀})}(𝒴/S, 𝒟).

Now we show that Iso_{(𝒴_{s₀},𝒟_{s₀})}(𝒴/S, 𝒟) ⊆ Iso_{Ỹ_{s₀}}(Ỹ/S) after passing to an appropriate open dense subset in S. Recall that b is the minimal positive integer such that b(K_𝒳 + ℬ) ∼ 0 over the generic point of S. Since 𝒳/S is a smooth family, we have b(K_𝒳 + ℬ)|_{𝒳_s} = b(K_{𝒳_s} + ℬ_s) for every point s ∈ S. By the semicontinuity theorem [Har77, Theorem 12.8], the set
I_{b′} = {s ∈ S : h⁰(𝒳_s, O_{𝒳_s}(b′(K_{𝒳_s} + ℬ_s))) = 1}
is locally closed (constructible) for every positive integer b′ ≤ b. Moreover, I_{b′} contains the generic point if and only if b′ = b. Hence after shrinking the base, we can assume that b is the minimal positive integer such that b(K_{𝒳_s} + ℬ_s) ∼ 0 for every point s ∈ S. By the flat stratification theorem, the induced morphism Ỹ → S is flat over an open dense subset. Since normality is an open condition (Proposition 2.13), we conclude that (Ỹ/S, 0) is a projective elementary family after replacing the base by an open dense subset. Then the variety Ỹ_s is the index-one cover of (𝒴_s, 𝒟_s) for every point s ∈ S. It follows that Iso_{(𝒴_{s₀},𝒟_{s₀})}(𝒴/S, 𝒟) ⊆ Iso_{Ỹ_{s₀}}(Ỹ/S) due to uniqueness of normalization.

Next, as (𝒴/S, 𝒟) is an elementary projective family, each irreducible component of Sing(Ỹ) is horizontal over an open dense subset in S. We may shrink S, so that 𝒱_s → Ỹ_s is an equivariant resolution of singularities with respect to Sing(Ỹ_s) for every s ∈ S. This yields Iso_{Ỹ_{s₀}}(Ỹ/S) ⊆ Iso_{𝒱_{s₀}}(𝒱/S). This implies that the set Iso_{𝒱_{s₀}}(𝒱/S) is dense in S, because it includes Iso_{(X,B)}(𝒳/S, ℬ).

By pt we denote the period point corresponding to the Hodge structure of H²_prim(𝒱_{s₀}, C). Then Φ(s) = pt for every s ∈ Iso_{𝒱_{s₀}}(𝒱/S). In accordance with property (5), we have Ker(κ_{𝒱_s}) = T_{Φ^{-1}(Φ(s)),s} ⊆ T_{S,s} for general points s ∈ S.
But the assumption Ker(κ_{𝒱_s}) = 0 implies that Φ is injective. Thus Iso_{𝒱_{s₀}}(𝒱/S) = {s₀}. The density of Iso_{𝒱_{s₀}}(𝒱/S) implies S = {s₀}, as claimed. □

To conclude the proof of the first case we apply the following theorem (cf. [Kaw85, Lemma 7.1]):

Theorem 3.4. [Amb05, Theorem 2.2] There exist dominant morphisms τ: S̄ → S and ϱ: S̄ → S^! with τ generically finite and S̄, S^! nonsingular, and there exist a log pair (X^!, B^!) with K_{X^!} + B^! ∼_Q 0 and a projective contraction f^!: X^! → S^! satisfying the following properties:
(1) there exist an open dense subset U ⊆ S̄ and a log isomorphism (𝒳, ℬ) ×_S U ≈ (X^!, B^!) ×_{S^!} U over U;
(2) the Kodaira–Spencer map κ_{s^!} is injective for general points s^! ∈ S^!.
(The morphisms fit into the diagram f: 𝒳 → S, f^!: X^! → S^!, τ: S̄ → S, ϱ: S̄ → S^!.)

By Lemma 3.3, we obtain (X^!, B^!) = (X, B) and S^! = Spec C. It follows from Theorem 3.4 and its proof that, after shrinking the base and passing to an étale covering U → S, there is a log isomorphism (𝒳, ℬ) ×_S U ≈ (X, B) ×_C U over U, as required.

Before proving the second case, we introduce some definitions.

Definition 3.5. [Ser06, Definition 2.6.9] Let 𝒳/S be a family of varieties.
(1) 𝒳/S is called a trivial family if there exist a variety X and an isomorphism X ×_C S ≅ 𝒳 over S.
(2) 𝒳/S is called an isotrivial family if there exists an étale covering U → S such that 𝒳 ×_S U → U is a trivial family.
(3) 𝒳/S is called a locally isotrivial family if every closed point s ∈ S has a neighborhood U (in the Zariski topology) such that the family 𝒳 ×_S U → U is isotrivial.

Proposition 3.6. [Ser06, Proposition 2.6.10] Let 𝒳/S be a projective family of varieties and X a projective variety. If Iso_X(𝒳/S) = {all closed points of S}, then the family 𝒳/S is locally isotrivial.

Case 2. The boundary ℬ is an R-divisor. First, we show that the family 𝒳/S is isotrivial over an appropriate open dense subset of S. To be definite, we put (X, B) = (𝒳_{s₀}, ℬ_{s₀}) for a general point s₀ ∈ S. In accordance with [Bir11, Proposition 3.2(3)], the R-divisor ℬ can be written as an R-linear combination of Q-divisors ∆_i such that K_𝒳 + ∆_i ∼_{f,Q} 0. After removing some closed subsets of codimension 1 in S, we have K_𝒳 + ∆_i ∼_Q 0. Then, for each index i there is an étale base change U_i → S such that (𝒳, ∆_i) ×_S U_i ≈ (X, (∆_i)_{s₀}) ×_C U_i over U_i by Case 1. However, this is not enough to conclude that there exists an étale base change U → S together with a log isomorphism (𝒳, ℬ) ×_S U ≈ (X, B) ×_C U over U. We therefore proceed by a different approach. Nevertheless, the first case guarantees that the set
Iso_X(𝒳/S) ⊇ ∩_i Iso_{(X,(∆_i)_{s₀})}(𝒳/S, ∆_i)
contains an open dense subset. After shrinking the base and passing to an étale covering, we can assume that 𝒳 = X ×_C S by Proposition 3.6.

Secondly, we derive a version of Zariski decomposition for the R-divisor B. By assumption −K_X ≡ B; hence −K_X is pseudo-effective and κ(−K_X) ≥ 0. By [Băd01, Theorem 14.14] there exists a (unique) Zariski decomposition −K_X = P_K + N, where P_K is a nef Q-divisor, N is an effective Q-divisor whose components have a negative-definite intersection matrix, and P_K · N = 0. From the properties of the Zariski decomposition it follows that N ≤ B and P = B − N is an effective R-divisor such that P ≡ P_K. By the semi-ampleness theorem [Fuj12, Theorem 8.1] we have ν(K_X + B) = κ(K_X + B), that is, K_X + B ∼_R 0. Again by semi-ampleness, for the klt pair (X, B + ϵP) with 0 < ϵ ≪ 1, the effective nef R-divisor ϵP is semiample. Fix a sufficiently large n ∈ N so that the linear system |⌊nP⌋| is basepoint-free and nP_K is Cartier. Since nP ∼_R nP_K, we have ⌊nP⌋ ∼ nP_K.
After replacing the representative of the canonical class K_X, we have P_K = (1/n)⌊nP⌋.

Next, we construct a Zariski decomposition for ℬ. For that, we prove that the R-divisor ℬ is numerically constant over S. To be precise, the following lemma holds:

Lemma 3.7. We have ℬ_s ≡ B for every closed point s ∈ S.

Proof. Write ℬ = Σ_i b_iℬ_i as a sum of distinct prime components, and set Γ = {b_i : b_i ≠ 0} with r = #Γ. For j = 1, …, r define the reduced divisor ℬ^j = Σ_{i: b_i = a_j} ℬ_i, where a_j ∈ Γ and a_j < a_{j+1}. Then we write ℬ = Σ_{j=1}^{r} a_jℬ^j, slightly abusing the notation (since ℬ^j is not necessarily prime). Similarly, B = Σ_{j=1}^{r} a_jB^j. For every closed point s ∈ S we have
(ℬ^j)_s − B^j = Σ_{V ∈ supp(ℬ^j)} (V_s − V_{s₀}).
Thus the divisors (ℬ^j)_s and B^j are algebraically equivalent [Ful98, Example 10.3.2], since the base is a nonsingular variety. As algebraic equivalence implies numerical equivalence, (ℬ^j)_s ≡ B^j. It follows that
ℬ_s = Σ_{j=1}^{r} a_j(ℬ^j)_s ≡ Σ_{j=1}^{r} a_jB^j = B. □

Therefore, we have ℬ_s ≡ B for every closed point s ∈ S according to Lemma 3.7. From the properties of the Zariski decomposition and the elementarity of the family (X ×_C S/S, ℬ) it follows that ℬ = 𝒫 + N × S, where 𝒫 is an effective divisor such that 𝒫_s ∼_R P for every closed point s ∈ S. Then ⌊n𝒫⌋ ∼_S ⌊nP⌋ × S, that is, ⌊n𝒫⌋ ∼ ⌊nP⌋ × S + pr^*(D) for some Cartier divisor D on S, where pr: X ×_C S → S is the projection. After shrinking the base, we can assume that D = 0.

Finally, we use the global Zariski decomposition for ℬ to show that the positive part of ℬ defines a constant linear system over S. Recall that P_K = (1/n)⌊nP⌋. If κ(P_K) = 0, then P_K ≡ 0 and B ≡ N. But the inequality N ≤ B implies B = N, and Theorem 3.2 follows. We may assume κ(P_K) ≥ 1. Set
Z = Proj_C ⊕_{m≥0} H⁰(X, O_X(m⌊nP⌋)) and 𝒵 = Proj_S ⊕_{m≥0} f_*O_{X×_C S}(m⌊n𝒫⌋).
Since ⌊n𝒫⌋ ∼ ⌊nP⌋ × S, there is an isomorphism of O_S-algebras
⊕_{m≥0} f_*O_{X×_C S}(m⌊nP⌋ × S) → ⊕_{m≥0} f_*O_{X×_C S}(m⌊n𝒫⌋)
inducing the isomorphism of projective varieties Z ×_C S → 𝒵 over S. Note that (X ×_C S) ×_𝒵 (Z ×_C S) ≅ X ×_C S. Hence we obtain a Cartesian square with upper row X ×_C S → X ×_C S, lower row Z ×_C S → 𝒵, and vertical arrows the natural maps defined by the linear systems above. By construction, the upper horizontal arrow induces the log isomorphism (X, ⌊nP⌋) ×_C S ≈ (X ×_C S, ⌊n𝒫⌋) over S. It follows from the properties of Zariski decomposition that φ^*(N) = N for every φ ∈ Aut(X). Thus (X, ⌊nP⌋ + N) ×_C S ≈ (X ×_C S, ⌊n𝒫⌋ + N × S). Moreover, by elementarity of the family (X ×_C S, ℬ) we have (X, B) ×_C S ≈ (X ×_C S, ℬ). This concludes the proof of Theorem 3.2. □

4. Main results

In this section we prove Theorem 1.4.

Proposition 4.1. Let C be a class of projective pairs (Xα, Bα) of fixed dimension d ∈ N. If C has bounded polarization, then C is a subclass of a bounded class of projective pairs.

Proof. Step 1. We first prove the proposition in the case when the R-divisors Bα are reduced, that is, Bα = Σ_i B_{iα}. We also assume that the number of prime components is the same for all Bα and equals some n ∈ N. Note that the expression Bα = Σ_{i=1}^{n} b_{iα}B_{iα} imposes an ordering on the prime divisors B_{iα}. By the same arguments as in the proof of Lemma 2.14, we conclude that there exist flat projective morphisms 𝒳′ → S′ and ℬ′_i → T_i together with closed embeddings φα: Xα ↪ P^M_C such that φα(Xα) = 𝒳′_{sα} and φα(B_{iα}) = (ℬ′_i)_{t_{iα}} for some closed points sα ∈ S′ and t_{iα} ∈ T_i. Set S = S′ ×_C Π_i T_i and 𝒳 = 𝒳′ ×_C Π_i T_i. Define
ℬ_i = (ℬ′_i ×_C S′ ×_C Π_{j≠i} T_j) ∩ 𝒳 ⊆ P^M_S
to be the scheme-theoretic intersection endowed with the structure of a reduced closed subscheme.
By Proposition 2.13 the set {s ∈ S : 𝒳_s is geometrically integral and geometrically normal} is open. Therefore, after shrinking the base, all fibers of 𝒳 → S are normal projective varieties of dimension d. After a base change we may assume that each closed subscheme ℬ_i is flat over S. Since the set {s ∈ S : (ℬ_i)_s is a geometrically integral scheme} is open according to Proposition 2.13, after shrinking the base we may assume that (ℬ_i)_s is a prime divisor on 𝒳_s for all s ∈ S. After shrinking the base further we may assume (ℬ_i)_s ≠ (ℬ_j)_s for all s ∈ S whenever i ≠ j. Put ℬ = Σ_i ℬ_i. Then (𝒳/S, ℬ) is an elementary projective family, as required.

Step 2. Now, we prove the proposition in general. By the definition of a class with bounded polarization there exists a finite set Γ = {a₁, …, a_r} ⊂ R such that Bα ∈ Γ, and the number of prime components of the divisors Bα is bounded by some n ∈ N (see Remark 2.12). Then C splits into finitely many subclasses C_{n₁,n₂,…,n_r} corresponding to the pairs (Xα, Bα) such that
n_j = #{B_{iα} ∈ supp(Bα) : b_{iα} = a_j}, 0 ≤ Σ_{j=1}^{r} n_j ≤ n.
Hence it suffices to prove the proposition for one fixed class C_{n₁,n₂,…,n_r}.
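The splitting really involves only finitely many subclasses; by an elementary stars-and-bars count (added here for completeness):

\[
\#\bigl\{(n_1,\dots,n_r) \in \mathbb{Z}_{\ge 0}^{r} : \textstyle\sum_{j=1}^{r} n_j \le n \bigr\} \;=\; \binom{n+r}{r},
\]

so there are at most \binom{n+r}{r} subclasses C_{n₁,…,n_r}.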
For each index j = 1, …, r and every pair (Xα, Bα) in C_{n₁,n₂,…,n_r} define the reduced divisor B^{(j)}_α = Σ_{i: b_{iα} = a_j} B_{iα}. Let C_j be the class consisting of all pairs (Xα, B^{(j)}_α). According to Step 1, the classes C_j of pairs are subclasses of bounded classes of projective pairs. It follows that for each index j = 1, …, r there exists a projective elementary family (𝒳^{(j)}/S^{(j)}, ℬ^{(j)}) with ℬ^{(j)} = Σ_{i=1}^{n_j} ℬ^{(j)}_i such that every pair in C_j is isomorphic to a fiber of (𝒳^{(j)}/S^{(j)}, ℬ^{(j)}), and 𝒳^{(j)}/S^{(j)} ⊆ P^M_{S^{(j)}}. We use the convention that if n_j = 0 then ℬ^{(j)} = 0. Define
S = Π_{j=1}^{r} S^{(j)}, 𝒳 = ∩_{j=1}^{r} (𝒳^{(j)} ×_C Π_{k≠j} S^{(k)}) ⊆ P^M_S, ℬ = Σ_{j=1}^{r} a_j Σ_{i=1}^{n_j} ((ℬ^{(j)}_i ×_C Π_{k≠j} S^{(k)}) ∩ 𝒳).
By the same argument as in the first step, we conclude that, after shrinking and stratifying the base, the family (𝒳/S, ℬ) satisfies the conditions of Definition 2.10. □

Proposition 4.2. Let (𝒳/S, ℬ) be an elementary projective family. Suppose (𝒳, ℬ) is a klt (resp. trm) log pair. Then for every closed point s ∈ S the log pair (𝒳_s, ℬ_s) has klt (resp. trm) singularities.

Proof. Using Noetherian induction, we prove the proposition over a dense open subset of S. We treat the case when (𝒳, ℬ) is klt, as the terminal case is analogous. Let µ: 𝒳′ → 𝒳 be a log resolution and write K_{𝒳′} + ℬ′ = µ^*(K_𝒳 + ℬ). We may assume that (𝒳′/S, ℬ′) is a projective elementary family. After shrinking the base, (𝒳′, ℬ′) is a log smooth pair over S by generic smoothness. Hence for every closed point s ∈ S the induced morphism µ_s: (𝒳′_s, ℬ′_s) → (𝒳_s, ℬ_s) is a log resolution. As (𝒳, ℬ) is a klt log pair, all coefficients (in the decomposition into prime components) of the R-divisor ℬ′ are less than 1. Since (𝒳′/S, ℬ′) is an elementary family, the coefficients of ℬ′_s are also less than 1. In other words, (𝒳_s, ℬ_s) is klt for all closed s ∈ S. □

Proposition 4.3. A log surface (X, B) has a unique projective wlc trm model in its 0-class up to log isomorphism.

Proof. Let (X₁, B₁) and (X₂, B₂) be two projective wlc trm models of (X, B). Let (Y, D) be a common log resolution for (X₁, B₁) and (X₂, B₂), and let f_i: Y → X_i be the corresponding morphisms. Then
K_Y + D = f_i^*(K_{X_i} + B_i) and D = (f_i^{-1})_*B_i + E_i, Exc(f_i) = ∪_{V ∈ supp(E_i)} V.
By terminality, the R-divisor −E_i is strictly effective on the exceptional components of f_i. Therefore f₂ contracts E₁ and E₂, since the boundary B₂ is effective. As the pair (X₂, B₂) is terminal, the morphism f₂ cannot contract the components of (f₁^{-1})_*B₁. Hence the birational map φ = f₂ ∘ f₁^{-1} neither contracts divisors nor blows up points. Then φ is an isomorphism in codimension 1. For surfaces this is equivalent to being an isomorphism. Moreover, B₂ = φ_*(B₁) by the definition of crepant birationally equivalent log pairs. □

Suppose (X, B) is a fixed projective trm 0-pair of dimension 2. Let C be the class of all projective wlc klt models (Xα, Bα) in the 0-class of (X, B). Consider the class D of all varieties Xα corresponding to the pairs (Xα, Bα) in C.

Theorem 4.4. If the class D has bounded polarization, then the class C contains only finitely many log surfaces up to log isomorphism.

Lemma 4.5. The class C has bounded polarization, and all Bα ∈ [0, 1].

Proof. We first show that there exists a finite set Γ ⊂ [0, 1] such that Bα ∈ Γ. By Proposition 4.3 the log pair (X, B) is the unique projective wlc trm model in the 0-class of (X, B) up to log isomorphism. Set Γ = {b_i : i = 1, 2, …, n} ⊂ [0, 1], where B = Σ_{i=1}^{n} b_iB_i. Then Bα ∈ Γ, as every log pair (Xα, Bα) is obtained by contracting prime components of the boundary B. Moreover, # supp(Bα) ≤ # supp(B) ≤ n.

Next we prove that there exist N ∈ N and very ample divisors Hα on Xα such that Bα · Hα ≤ N for all log pairs (Xα, Bα) in C. By Lemma 2.14 there exist finitely many projective families 𝒳^{(j)}/S^{(j)} such that all Xα are isomorphic to fibers of the families 𝒳^{(j)}/S^{(j)}. Without loss of generality, we may assume there is a single family 𝒳/S and Xα = 𝒳_{sα} for some closed points sα ∈ S. Choose a very ample invertible sheaf ℒ on 𝒳/S and set Hα = ℒ_{sα}. By flatness of 𝒳/S there exists a polynomial p(m) ∈ Q[m] such that the Euler characteristic χ(mHα) equals p(m) for all sα ∈ S. As klt singularities are rational, we have χ(mf_α^*Hα) = χ(mHα), where f_α: Yα → Xα is a resolution of singularities. Using the Riemann–Roch formula on the smooth surface Yα and the projection formula, we obtain
χ(mHα) = (Hα²/2)m² − (K_{Xα} · Hα/2)m + χ(O_{Yα}).
Hence the intersection number Bα · Hα = −K_{Xα} · Hα is the same for all Xα and equals N = 2p′(0).
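For the record, the arithmetic extracting N from the Hilbert polynomial (a sanity check on the constants, not an extra hypothesis):

\[
p(m) = \frac{H_\alpha^2}{2}\,m^2 - \frac{K_{X_\alpha}\cdot H_\alpha}{2}\,m + \chi(\mathcal{O}_{Y_\alpha})
\quad\Longrightarrow\quad
p'(0) = -\frac{K_{X_\alpha}\cdot H_\alpha}{2},
\]

and K_{Xα} + Bα ≡ 0 gives Bα · Hα = −K_{Xα} · Hα = 2p′(0) = N, independent of α because p is the common Hilbert polynomial of the fibers of 𝒳/S.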
It remains to bound (Bα)_red · Hα. Let b_min = min(Γ ∖ {0}), and write Bα = Σ_i b_{iα}B_{iα} with b_{iα} ≠ 0. Then
b_min B_{iα} · Hα ≤ Σ_i b_min B_{iα} · Hα ≤ Bα · Hα ≤ N.
Recall that # supp(Bα) ≤ n. Therefore, (Bα)_red · Hα ≤ nN/b_min. □

Proof of Theorem 4.4. By Lemma 4.5 and Proposition 4.1, we have finitely many elementary projective families (𝒳^{(j)}/S^{(j)}, ℬ^{(j)}) such that all log pairs in C are isomorphic to some fibers of these families. Without loss of generality, we assume there is a single elementary projective family (𝒳/S, ℬ). Let {sα : α ∈ I} ⊆ S be the set of closed points parametrizing all pairs in C. Replacing the base by the closure of {sα}_{α∈I}, we may assume that {sα}_{α∈I} is dense in S. It suffices to show that there exist a log pair (Y, D) and a dense open subset U ⊆ S such that (𝒳_s, ℬ_s) ≈ (Y, D) for all closed points s ∈ U. Then the finiteness of log pairs in the family follows by Noetherian induction.

By [HX15, Proposition 2.4], after shrinking the base we may assume that (𝒳, ℬ) is a klt log pair. Let π: (𝒳′, ℬ′) → (𝒳, ℬ) be a terminalization [BCHM10, Corollary 1.4.3], so that K_{𝒳′} + ℬ′ = π^*(K_𝒳 + ℬ). Here (𝒳′, ℬ′) has only terminal singularities and ℬ′ is a boundary. We may assume (𝒳′/S, ℬ′) is an elementary projective family. By Proposition 4.2, for all s ∈ S the log pair (𝒳′_s, ℬ′_s) has terminal singularities. Moreover, each log pair (Xα, Bα) in C is isomorphic to (𝒳_{sα}, ℬ_{sα}), and its terminalization is isomorphic to (𝒳′_{sα}, ℬ′_{sα}). By Proposition 4.3, all log pairs (𝒳′_{sα}, ℬ′_{sα}) are log isomorphic to a fixed log pair (X, B). By Theorem 3.2, after shrinking the base and passing to an étale cover we may suppose that 𝒳′ = X ×_C S and ℬ′ = B ×_C S.

Recall that π: (𝒳′, ℬ′) → (𝒳, ℬ) is the terminalization morphism. We show that the exceptional locus Exc(π) is a divisor.

Proposition 4.6. Let φ: X ×_C S 99K 𝒴 be a birational map over a connected nonsingular base S which contracts a curve C ⊂ X in some fiber X ×_C s₀. Then φ contracts C ×_C S.

Proof. By assumption, the map φ contracts the class of the curve [C ×_C s₀] ∈ N₁(X ×_C S/S), but [C ×_C s₀] = [C ×_C s] for every closed point s ∈ S. □

Hence Exc(π) = ∪_i E_i × S for some prime divisors E_i on X. By [Tot10, Lemma 4.3] there exist positive real coefficients a_i such that (Σ_i a_iE_i) · E_i = −1 for all i. Set ℰ = Σ_i a_i(E_i × S) and E = Σ_i a_iE_i.

Next, we run the MMP for K_{𝒳′} + ℬ′ + ϵℰ over S. We prove that an extremal contraction preserves the triviality of the projective elementary family.

Proposition 4.7. Let π₁: 𝒳′ → 𝒴 be an extremal contraction for K_{𝒳′} + ℬ′ + ϵℰ over S. Then Exc(π₁) ∈ supp ℰ. Moreover, there exists a log pair (Y, D) such that (𝒴, (π₁)_*(ℬ′ + ϵℰ)) ≈ (Y, D) ×_C S over S.

Proof. By Proposition 4.6 the contraction π₁ is divisorial, and by extremality Exc(π₁) = C × S for some curve C ⊂ X. Since K_{𝒳′} + ℬ′ + ϵℰ ≡_S ϵℰ and ϵℰ · C < 0, we have C × S ∈ supp(ℰ). By the cone theorem for K_X + B + ϵE, there exists an extremal contraction π₀: X → Y of the curve C. Put D = (π₀)_*(B + ϵE). Let A be a general ample divisor on Y and H = (π₀)^*A. Then the divisor ℋ = H × S defines a contraction π₂: 𝒳′ → Y ×_C S over S with Exc(π₂) = Exc(π₁) = C × S. Suppose the birational map φ = π₁ ∘ (π₂)^{-1} is an isomorphism over S. Then φ induces a log isomorphism (Y, D) ×_C S ≈ (𝒴, (π₁)_*(ℬ′ + ϵℰ)), since Exc(π₂) = Exc(π₁). Thus, it remains to verify the following lemma.

Lemma 4.8. Suppose that (X ×_C S/S, ℰ), (Y ×_C S/S, 0), and (𝒴/S, 0) are projective elementary families, and Y is a Q-factorial surface. Let π₁: X ×_C S → 𝒴 and π₂: X ×_C S → Y ×_C S be birational morphisms over S such that Exc(π₁) = Exc(π₂) = ∪_{V ∈ supp(ℰ)} V. Then the map π₁ ∘ π₂^{-1}: Y ×_C S 99K 𝒴 is an isomorphism over S.

Proof. Let A be an ample Cartier divisor on 𝒴/S. Set D = (π₂)_*(π₁)^*A = φ_*A, where φ = π₂ ∘ π₁^{-1}. We may assume that D is a Cartier divisor on Y ×_C S/S, since Y is Q-factorial. We show that D is ample on Y ×_C S/S. As dim Y = 2, we have dim π₂(Exc((π₂)_s)) = 0 for every closed point s ∈ S. Hence the proper transform C̃ = (π₂)^{-1}_*(C) ⊄ Exc(π₂) is defined for every curve C ⊂ Y ×_C s. Using Exc(π₁) = Exc(π₂) and the projection formula, we obtain
D · C = π₂^*(D) · C̃ = π₁^*(A) · C̃ = A · (π₁)_*C̃ > 0.
This implies that the Cartier divisor D is ample on Y ×_C S/S. We can suppose that
𝒴 = Proj_S ⊕_{m≥0} pr_*O_{Y×_C S}(mD),
and φ is the natural map. Since D is ample, φ is an isomorphism. □

Therefore, Proposition 4.7 is proved. □

Remark 4.9. More generally [Kol25, Theorem 30], Kollár established that even small modifications preserve the triviality of the family.

Thus, by Proposition 4.7, at each step of the MMP for K_{𝒳′} + ℬ′ + ϵℰ over S, an irreducible component of supp ℰ is contracted, and the triviality of the family is preserved. Moreover, the MMP contracts all prime components of supp ℰ. In other words, the MMP terminates in finitely many steps.
The outcome of the MMP is a projective elementary family (Y, D) ×_C S over S. Let π_MMP: (X, B) ×_C S → (Y, D) ×_C S be the morphism (over S) induced by the MMP. Recall that π: (X, B) ×_C S → (𝒳, ℬ) is the terminalization such that Exc(π) = Exc(π_MMP) = ∪_{V ∈ supp(ℰ)} V. By Lemma 4.8, the birational map π ∘ π_MMP^{-1}: (Y ×_C S/S, D × S) 99K (𝒳/S, ℬ) is an isomorphism. This completes the proof of Theorem 4.4. □

5. Applications

Finally, we prove Theorem 1.3 by applying Theorem 1.4.

Theorem 5.1. The number of projective wlc klt models in a fixed 0-class of log surfaces is finite up to log isomorphism.

Proof. Let (X₀, B₀) be a projective log surface, and let C be the class of all its projective wlc klt models (Xα, Bα). Let f_α: (X′_α, B′_α) → (Xα, Bα) be a terminalization morphism [BCHM10, Corollary 1.4.3]. By Proposition 4.3, there is a unique (up to log isomorphism) projective wlc trm log surface (X, B) in the class C. Therefore, we may assume (X′_α, B′_α) = (X, B) for all α. By [Fuj12, Theorem 8.1], the R-Cartier divisor K_X + B is semiample.

Case 1: κ(K_X + B) = 2. Set Y = Proj ⊕_{m≥0} H⁰(X, ⌊m(K_X + B)⌋) and let g: X → Y be the natural map. Similarly, set Yα = Proj ⊕_{m≥0} H⁰(Xα, ⌊m(K_{Xα} + Bα)⌋) and gα: Xα → Yα. Put φα = g ∘ f_α^{-1} ∘ g_α^{-1}: Yα 99K Y, and let Aα be a general ample divisor on Yα. By construction, (φα)_*(Aα) is ample, hence φα is an isomorphism. Therefore Exc(f_α) ⊆ Exc(g) for all α, so the set {Exc(f_α)} is finite. If Exc(f_α) = Exc(f_β), then Xα ≈ Xβ by Lemma 4.8. Since (f_β ∘ f_α^{-1})_*(Bα) = Bβ by the definition of a 0-class, we have (Xα, Bα) ≈ (Xβ, Bβ). Hence finiteness of the set {Exc(f_α)} implies that C contains only finitely many pairs up to log isomorphism.

Case 2: κ(K_X + B) = 1. For m ≫ 1 (resp. mα ≫ 1) the R-Cartier divisor m(K_X + B) (resp. mα(K_{Xα} + Bα)) defines a fibration π: X → C (resp. πα: Xα → Cα) over a smooth curve C (resp. Cα). It can be proved that there is an isomorphism φα: C → Cα such that φα ∘ π = πα ∘ f_α. Hence Exc(f_α) ⊆ {C′ ⊂ X : C′ is a contractible vertical curve} for all α. By Lemma 4.8 and the finiteness of the set of contractible vertical curves, the class C contains only finitely many pairs up to log isomorphism.

Case 3: κ(K_X + B) = 0. Then K_X + B ≡ 0, and all (Xα, Bα) are 0-pairs. Consider the class D of all varieties Xα corresponding to the pairs (Xα, Bα) in C.

Case 3.1: B ≠ 0. Then the class D has bounded polarization by the following result due to Alexeev.

Theorem 5.2. [Ale94, Theorem 6.9] Fix ϵ > 0. Consider the class D of projective normal surfaces Xα such that each Xα admits a boundary Bα satisfying:
(1) The pair (Xα, Bα) is an MR ϵ-klt log pair.
(2) The R-Cartier divisor −(K_{Xα} + Bα) is nef.
(3) The case when Bα = 0, K_{Xα} ≡ 0, and Xα has Du Val singularities is excluded.
Then the class D has bounded polarization.

Condition (1) means that the inequalities in the definition of an ϵ-klt log pair (Definition 2.1) need not hold for all log resolutions, but hold at least for the minimal resolution of Xα. In our situation all log pairs (Xα, Bα) satisfy this condition. By Theorem 4.4, the main result above, the class C contains only finitely many log surfaces up to log isomorphism.

Case 3.2: B = 0. Hence K_X ≡ 0. By [Kaw97, Theorem 2.1], the surface X admits only finitely many birational contractions up to automorphism. □

A key input in [Kaw97, Theorem 2.1] is Sterk's cone theorem for K3 surfaces [Ste85], proved via the Torelli theorem.
In future work we plan to develop an algebraic approach, independent of the Torelli theorem, to bounding the polarization of Du Val K3 surfaces in a fixed birational class. Then their finiteness up to isomorphism will follow from Theorem 4.4.

By [CL14, Theorem 2.14], the relative cone conjecture in dimension ≤ d together with the MMP in dimension d implies finiteness of minimal models up to isomorphism. Since the log version of the cone conjecture [Tot10] is proved for log surfaces, Theorem 5.1 is known to experts. However, the principal result of this paper is Theorem 4.4: a method reducing the finiteness problem for models to the boundedness of their polarization. A generalization of Theorem 4.4 to higher dimensions will be treated in a forthcoming paper.

References

[Ale94] V. Alexeev, Boundedness and K² for log surfaces, Internat. J. Math. 5 (1994), no. 6, 779–810.
[Amb05] F. Ambro, The moduli b-divisor of an lc-trivial fibration, Compos. Math. 141 (2005), no. 2, 385–403.
[Bir11] C. Birkar, On existence of log minimal models II, Journal für die reine und angewandte Mathematik 658 (2011), 99–113.
[BCHM10] C. Birkar, P. Cascini, C. D. Hacon, and J. McKernan, Existence of minimal models for varieties of log general type, J. Amer. Math. Soc. 23 (2010), no. 2, 405–468.
[Băd01] L. Bădescu, Algebraic Surfaces, Universitext, Springer New York, NY, 2001.
[CL14] P. Cascini and V. Lazić, On the number of minimal models of a log smooth threefold, Journal de Mathématiques Pures et Appliquées 102 (2014), no. 3, 597–616.
[FGI+05] B. Fantechi, L. Göttsche, L. Illusie, S. L. Kleiman, N. Nitsure, and A. Vistoli (eds.), Fundamental Algebraic Geometry: Grothendieck's FGA Explained, Mathematical Surveys and Monographs, vol. 123, American Mathematical Society, 2005.
[Fuj12] O. Fujino, Minimal model theory for log surfaces, Publ. Res. Inst. Math. Sci. 48 (2012), no. 2, 339–371.
[Ful98] W. Fulton, Intersection Theory, 2nd ed., Springer, New York, NY, 1998.
[GW20] U. Görtz and T. Wedhorn, Algebraic Geometry I: Schemes, 2nd ed., Springer Studium Mathematik – Master, Springer Spektrum, Wiesbaden, 2020.
[Har77] R. Hartshorne, Algebraic Geometry, Graduate Texts in Mathematics, No. 52, Springer-Verlag, New York–Heidelberg, 1977.
[HX15] C. D. Hacon and C. Xu, Boundedness of log Calabi–Yau pairs of Fano type, Math. Res. Lett. 22 (2015), no. 6, 1699–1716.
[IS05] V. A. Iskovskikh and V. V. Shokurov, Birational models and flips, Russian Mathematical Surveys 60 (2005), no. 1, 27–94.
[Kaw85] Y. Kawamata, Minimal models and the Kodaira dimension of algebraic fiber spaces, Journal für die reine und angewandte Mathematik 363 (1985), 1–46.
[Kaw97] Y. Kawamata, On the cone of divisors of Calabi–Yau fiber spaces, Int. J. Math. 8 (1997), no. 5, 665–687.
[Kol96] J. Kollár, Rational Curves on Algebraic Varieties, Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge, vol. 32, Springer-Verlag, Berlin–Heidelberg, 1996.
[Kol25] J. Kollár, Néron models, minimal models, and birational group actions, 2025. ArXiv e-print, arXiv:2502.13800.
[KM98] J. Kollár and S. Mori, Birational geometry of algebraic varieties, Cambridge Tracts in Mathematics, vol. 134, Cambridge University Press, Cambridge, 1998. With the collaboration of C. H. Clemens and A. Corti.
[Mat02] K. Matsuki, Introduction to the Mori Program, Universitext, Springer New York, NY, 2002.
[Ser06] E. Sernesi, Deformations of Algebraic Schemes, Grundlehren der mathematischen Wissenschaften, Springer, Berlin–Heidelberg, 2006.
[Ste85] H. Sterk, Finiteness results for algebraic K3 surfaces, Mathematische Zeitschrift 189 (1985), 507–514.
[Tot10] B. Totaro, The cone conjecture for Calabi–Yau pairs in dimension 2, Duke Math. J. 154 (2010), no. 2, 241–263.
[Xu25] F. Xu, On finiteness of fiber space structures of klt Calabi–Yau pairs in dimension 3, 2025. ArXiv e-print, arXiv:2501.10239.

Johns Hopkins University
Email address: dserebr1@jhu.edu
NOVIKOV COHOMOLOGY, FINITE DOMINATION, AND COHOMOLOGICAL DIMENSION

SAM P. FISHER

Abstract. We introduce the Σ*-invariant of a group of finite type, which is defined to be the subset of non-zero characters χ ∈ H¹(G; R) with vanishing associated top-dimensional Novikov cohomology. We prove an analogue of Sikorav's Theorem for this invariant, namely that cd(ker χ) = cd(G) − 1 if and only if ±χ ∈ Σ*(G) for integral characters χ. This implies that cohomological dimension drop is an open property among integral characters. We also study the cohomological dimension of arbitrary co-Abelian subgroups. The techniques yield a short new proof of Ranicki's criterion for finite domination of infinite cyclic covers, and in a different direction, we prove that the algebra of affiliated operators U(G) of a RFRS group G has weak dimension at most one if and only if G is an iterated (cyclic or finite) extension of a free group.

1. Introduction

In [BNS87], Bieri, Neumann, and Strebel introduced the Σ-invariant of a finitely generated group G. The invariant, denoted Σ(G), is a subset of H¹(G; R) ∖ {0}, and controls the finiteness properties of co-Abelian subgroups of G. We call elements of H¹(G; R) characters; these are just homomorphisms χ: G → R. If χ(G) ⊆ Q, then we say that χ is integral (because in this case χ(G) ≅ Z). The set of all non-zero characters which vanish on a given subgroup N is denoted by S(G, N). The following celebrated result from [BNS87] establishes the fundamental properties of the Σ-invariant.

Theorem 1.1 (Bieri–Neumann–Strebel, [BNS87]). Let G be a finitely generated group. The following properties hold:
(1) Σ(G) is open in H¹(G; R) ∖ {0};
(2) if N ⊴ G is a normal subgroup such that G/N is Abelian, then N is finitely generated if and only if S(G, N) ⊆ Σ(G).

In particular, an integral character χ: G → Z has finitely generated kernel if and only if ±χ ∈ Σ(G). Recall that a map to Z with finitely generated kernel is called an algebraic fibration of G. Theorem 1.1 has the following well-known corollary.

Corollary 1.2. Algebraic fibring is an open property. More precisely, given a finitely generated group G, the set
{χ: G → Q : ker χ is finitely generated} ∖ {0}
is an open subset of H¹(G; Q) ∖ {0}.

The Σ-invariant is well-understood in certain cases (e.g. for right-angled Artin groups [MMV98, BG99]), but for most classes of groups it is notoriously difficult to compute. A useful tool is provided by Sikorav's Theorem, which gives an elegant homological criterion for a character to lie in the Σ-invariant.

Theorem 1.3 (Sikorav, [Sik87]). If G is a finitely generated group, then χ ∈ Σ(G) if and only if H₁(G; Ẑ[G]^χ) = 0.

The object Ẑ[G]^χ appearing in the theorem is the Novikov ring associated to χ; it is a certain completion of the group ring Z[G] along χ, and can be thought of as the ring of infinite series of elements in G which go to infinity in the direction of χ (see Example 3.1 for a precise definition).
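For orientation, one common form of the definition is the following (a standard construction, stated here with one fixed sign convention; the paper's own Example 3.1 is not contained in this excerpt):

\[
\widehat{\mathbb{Z}[G]}^{\chi} \;=\; \Bigl\{\, \lambda = \sum_{g \in G} \lambda_g\, g \;:\; \lambda_g \in \mathbb{Z}, \ \ \#\{\, g \in \operatorname{supp}(\lambda) : \chi(g) \le t \,\} < \infty \ \text{for every } t \in \mathbb{R} \,\Bigr\}.
\]

For example, for G = ⟨t⟩ ≅ Z and χ(t) = 1, this recovers the ring of integral Laurent series: Ẑ[Z]^χ ≅ Z((t)) = Z[[t]][t⁻¹].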
Sikorav's Theorem plays a central role in Kielak's proof that a finitely generated RFRS group virtually algebraically fibres if and only if it has vanishing first L²-Betti number [Kie20]. His strategy was to relate the L²-homology of a RFRS group G with the homologies of its finite-index subgroups with coefficients in Novikov rings associated to various characters.

Sikorav's Theorem has many generalisations and variations. For instance, the kernel of an integral character χ: G → Z being of type FP_n is equivalent to the vanishing of H_i(G; Ẑ[G]^{±χ}) for all i ≤ n (see [BR88], as well as Schweitzer's appendix to [Bie07]). This result also holds if one replaces the coefficient ring Z by an arbitrary ring R, and type FP_n by type FP_n(R) (see, e.g., [Fis24a, Theorem 5.3]). A closely related result is Ranicki's criterion [Ran95, Theorem 1], which states that an infinite cyclic cover of a compact CW complex X is finitely dominated if and only if H_i(π₁(X); Ẑ[π₁(X)]^{±χ}) = 0 for all i, where χ: π₁(X) → Z is the map associated to the cyclic cover (Ranicki's original result was for spaces X such that π₁(X) = G × Z and χ being the projection onto the Z factor, but it holds in the generality stated here). Yet another result in this direction is Latour's theorem, which, for a manifold M of dimension at least 6, characterises the existence of closed non-singular 1-forms in a class χ ∈ H¹(M; R) in terms of the vanishing of Novikov homology associated to χ and of a K-theoretic invariant [Lat94].

All the results mentioned above directly or indirectly relate vanishing Novikov homology of a character χ to the finiteness properties of its kernel. In [Fis24b], the author showed that Novikov cohomology controls a different property of the kernel, namely its cohomological dimension. This was done with the goal of proving that RFRS groups of cohomological dimension two are virtually free-by-cyclic if and only if they have vanishing second L²-Betti number. The main new input was a criterion stating that if a group G of type FP has H^{cd(G)}(G; Ẑ[G]^{±χ}) = 0, then cd(ker χ) = cd(G) − 1. One of the main purposes of this article is to prove a converse, thus showing that vanishing top-dimensional Novikov cohomology of integral characters completely determines the cohomological dimension of the kernel. In fact, the result holds in the generality of chain complexes of projective modules.

Theorem A. Let G be a group, let χ: G → Z be an integral character, and let C• be a chain complex of projective Z[G]-modules such that C_i = 0 for all i > n, for some n ∈ Z, and assume that C_n is finitely generated. The following are equivalent:
(1) H^n(C•; Ẑ[G]^{±χ}) = 0;
(2) C• is chain homotopy equivalent over Z[ker χ] to a chain complex D• of projective Z[ker χ]-modules such that D_i = 0 for all i > n − 1.

Theorem A is proved by analysing the long exact sequence in cohomology induced by a special short exact sequence involving the Novikov ring (see Observation 3.2). This short exact sequence was communicated to the author by Andrei Jaikin-Zapirain as a tool to simplify the original proof of [Fis24b, Theorem A]. It was also used in [FIK25] to study the algebraic fibres of Poincaré-duality groups. The most general form of Theorem A will be given in Corollary 3.11; in particular, the theorem still holds if one replaces the coefficient ring Z with any ring R (we always consider maps G → Z, of course). The crux of the implication (2) ⇒ (1) is an observation about general modules over Novikov rings: if M is a module over Ẑ[G]^χ (and χ ≠ 0), then either M = 0 or M is uncountable (Lemma 3.5).

Theorem A has the following immediate consequence for groups, proving that top-dimensional Novikov cohomology is a perfect obstruction for cohomological dimension drop. Motivated by Sikorav's Theorem, we define the Σ*-invariant of a group G of type FP by
Σ*(G) := {χ ∈ H¹(G; R) ∖ {0} : H^{cd(G)}(G; Ẑ[G]^χ) = 0}.

Corollary B. Let G be a group of type FP. If χ: G → Z is an integral character, then cd(ker χ) = cd(G) − 1 if and only if ±χ ∈ Σ*(G).
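As a toy check of Corollary B (our illustration, in the simplest possible case): take G = Z = ⟨t⟩, so cd(G) = 1, let χ = id_Z, and use the free resolution 0 → Z[Z] → Z[Z] → Z → 0 whose first map is multiplication by t − 1. Then

\[
H^1\bigl(\mathbb{Z};\, \widehat{\mathbb{Z}[\mathbb{Z}]}^{\chi}\bigr)
\;=\; \mathbb{Z}(\!(t)\!)\,\big/\,(t-1)\,\mathbb{Z}(\!(t)\!) \;=\; 0,
\qquad\text{since}\quad (1-t)^{-1} = \sum_{n \ge 0} t^{n} \in \mathbb{Z}[[t]] \subset \mathbb{Z}(\!(t)\!).
\]

The same computation with t⁻¹ in place of t handles −χ, so ±χ ∈ Σ*(Z), matching cd(ker χ) = cd({1}) = 0 = cd(Z) − 1.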
Z is an integral character, then cd(ker χ) = cd(G) −1 if and only if ±χ ∈Σ∗(G). It is well-known that vanishing Novikov homology is an open property for groups of finite type; this applies just as well to Novikov cohomology, so we obtain the following corollary (see Corollary 4.8), which is an analogue of Corollary 1.2 for cohomological dimension drop. Corollary C. If G is a group of type FP, then cohomological dimension drop is an open property among integral characters. More precisely, the set {χ: G −! Q : cd(ker χ) = cd(G) −1} is an open subset of H1(G; Q). In particular, if cd(G) = 2, then the set of integral characters with free kernel is open in H1(G; Q). From a slightly stronger version of Theorem A, we obtain a result on the L2- invariants of RFRS groups which generalises [Fis24b, Theorem A]. Let U(G) be the algebra of operators affiliated to the group von Neumann algebra of G. The L2- Betti numbers of G are defined by b(2) i (G) = dimU(G) Hi(G; U(G)), where dimU(G) denotes the von Neumann dimension (see [Lüc02] for details). The weak dimension of an R-module M is the supremal integer n for which there exists an R-module N such that TorR n (M, N) ̸= 0. Theorem D. Let G be a RFRS group of type FP(Q). The following are equivalent: (1) U(G) is of weak dimension at most one as a Q[G]-module; (2) for every subgroup H ⩽G, we have b(2) i (H) = 0 for all i > 1; (3) G is an iterated (cyclic or finite) extension of a free group. Theorem D will be used to prove that RFRS groups admitting an Abelian hier- archy are iterated (cyclic or finite) extensions of free groups, extending a result of Hagen and Wise [HW10] (see Corollary 5.11). The weak dimension one property is a crucial step in Jaikin-Zapirain and Linton’s proof that group algebras of one- relator groups are coherent [JZL25]. In a similar vein, a conjecture of Wise predicts that groups of cohomological dimension at most two are coherent if and only if they have vanishing second L2-Betti number. If G is RFRS and cd(G) ⩽2, then an easy consequence of [Fis24b, Theorem A] is that the vanishing of the second L2-Betti number is equivalent to U(G) being of weak dimension at most one. Motivated by all this, we make the following prediction for coherence in the class of finite type RFRS groups. 4 SAM P. FISHER Conjecture 1.4. If G is a RFRS group of type FP, then the following are equiva- lent: (1) G is coherent; (2) Q[G] is coherent; (3) U(G) is of weak dimension one as a Q[G]-module. The method of proving Theorem A also yields a new proof of Ranicki’s Criterion [Ran95] for finite domination of cyclic covers (see Theorem 3.6), as well as a new proof of the implication H1(G; ‘ Z[G] ±χ ) = 0 =⇒ ker χ is finitely generated (see Proposition 3.8). This is the more subtle direction of Sikorav’s Theorem, and is usually the one used in practice (such as in Kielak’s fibring theorem). The new proofs are more conceptual, as there is no longer the need to deal with individual Novikov cycles. One instead works with long exact sequences of homology mod- ules and other standard results from homological algebra, and uses the vanishing of Novikov homology to show that the homology of the kernel commutes with direct products. By Brown’s homological characterisation of finite domination [Bro75], this yields finite generation of the kernel. Thus, the method links two well-known criteria for finite domination of chain complexes: Brown’s and Ranicki’s. 
Another satisfying aspect of this proof is that it unifies the points of view on Novikov cohomology controlling cohomological dimension and Novikov homology controlling finiteness properties of the cyclic covers; these phenomena follow from dual arguments. A drawback, however, is that it is crucial to work with integral characters and the vanishing of Novikov homology associated to both χ and −χ simultaneously. We do not obtain a new proof of the fact that χ ∈ Σ(G) when H1(G; ‘ Z[G] χ ) = 0 for general characters χ: G → R.

From the Lyndon–Hochschild–Serre spectral sequence, it is easy to see that the cohomological dimension of ker(χ: G → Z) is at least cd(G) − 1, so vanishing of Novikov cohomology in low codimensions cannot imply further cohomological dimension drop of the kernel (we will actually see that vanishing of low-codimensional Novikov cohomology implies that the functors Hi(ker χ; −) commute with direct limits in the same low codimensions, which can be seen as a type of finiteness property). However, it still makes sense to ask whether Novikov cohomology can control further cohomological dimension drop of the kernel of a non-integral character. We define the higher Σ∗-invariants of a group G of finite type by Σ∗m(G) := {χ ∈ H1(G; R) ∖ {0} : Hi(G; ‘ Z[G] χ ) = 0 for all i > cd(G) − m} for m ⩾ 0. Hence, Σ∗0(G) = H1(G; R) ∖ {0} and Σ∗1(G) = Σ∗(G). We obtain the following sufficient condition for a map to a free Abelian group to have kernel of the minimal theoretically allowed cohomological dimension.

Theorem E. Let G be a group of type FP. Let N ⊴ G be a normal subgroup such that G/N ≅ Zd. If S(G, N) ⊆ Σ∗d(G), then cd(N) = cd(G) − d.

Given the similarity with Theorem 1.1(2), this raises the obvious question of whether the converse statement holds (Corollary B is the important special case d = 1); it does not, see Example 4.17. Theorem E has the following corollary for Poincaré-duality groups.

Corollary F. Let G be a Poincaré-duality group of dimension n and let χ: G → Zd be an epimorphism. If ker χ is of type FPd−1, then cd(ker χ) = n − d.

The condition FP0 is vacuous, so in the case where d = 1, the corollary says that all kernels of maps onto Z drop in cohomological dimension. This is a very special case of Strebel's theorem, which states that all infinite-index subgroups of PDn-groups have cohomological dimension at most n − 1. There are various situations where cohomological dimension is additive under group extensions. One of the most general results is Fel'dman's theorem [Fel71, Theorem 2.4], which states that if 1 → N → G → Q → 1 is an extension of groups and k is a field such that N is of type FP(k) and cdk(Q) < ∞, then cdk(N) + cdk(Q) = cdk(G). An interesting feature of Corollary F is that the finiteness property required on the kernel is much weaker than in Fel'dman's theorem. We will also see in Corollary 4.18 that the result holds over arbitrary rings, whereas it is important in Fel'dman's theorem that k be a field.

1.1. Structure of the paper. Section 2 introduces the necessary general homological algebra preliminaries for the article. In Section 3, we will prove Theorem A in its full generality. This means working with a general chain complex of finitely generated R-modules, and arbitrary Laurent series instead of Novikov rings. We will also give a new proof of Ranicki's Criterion and of the fact that vanishing Novikov homology detects finiteness properties.
In Section 4, we define the Σ∗-invariants of a group over a general ring, and prove Corollary B and Corollary C, as well as Theorem E and Corollary F. In Section 5, we use Kielak's connection between Novikov and L2-cohomology to prove Theorem D. Finally, in Section 6 we look at the example of one-relator groups to show that the top-dimensional Novikov homology does not seem to be a useful invariant, in contrast to top-dimensional Novikov cohomology. We will see that torsion-free one-relator groups have vanishing second Novikov homology with respect to all characters, though this is far from the case for the second Novikov cohomology.

1.2. Acknowledgments. The author is grateful to Andrei Jaikin-Zapirain, Dawid Kielak, Ian Leary, and Boris Okun for useful comments. The author also thanks ICMAT, Andrei, Javier, Marco, and Sara, for their hospitality.

2. Homological algebra preliminaries

Convention 2.1. All rings are assumed to be associative and unital, with 1 ≠ 0. Unless specified otherwise, R always denotes an arbitrary ring and G always denotes an arbitrary group.

Let R be a ring. A chain complex · · · → Ci+1 → Ci → · · · of R-modules will usually be denoted by C• and its boundary maps by ∂i : Ci → Ci−1 (or just by ∂ when the dimension is understood). Suppose that C• is a chain complex of left R-modules. If M is a right R-module, then the ith homology of C• with coefficients in M is the ith homology of the chain complex M ⊗R C• and is denoted by Hi(C•; M). Similarly, if M is a left R-module, then the ith cohomology of C• with coefficients in M is the ith cohomology of the cochain complex HomR(C•, M) and is denoted by Hi(C•; M). There are analogous definitions when C• consists of right R-modules. If C• and D• are chain complexes, then a chain map f• : C• → D• is a sequence of morphisms fi : Ci → Di such that fi−1∂i = ∂ifi for all i. Two chain maps f•, g• : C• → D• are said to be chain homotopic if there is a sequence of morphisms hi : Ci → Di+1 such that fi − gi = hi−1∂i + ∂i+1hi for all i. A chain map f• : C• → D• is a chain homotopy equivalence if there is a chain map g• : D• → C• such that f•g• and g•f• are each chain homotopic to the identity maps. In this case, we say that C• and D• are chain homotopic or of the same chain homotopy type.

Suppose that C• is a chain complex of projective R-modules. We say that C• is finitely dominated or of finite type if C• is chain homotopic to a chain complex of finitely generated projective R-modules. If Ci = 0 for all i < 0 and n is an integer, we say that C• is of finite n-type if C• is chain homotopic to a chain complex D• of projective modules such that Di is finitely generated for i ⩽ n and Di = 0 for all i < 0. A group G is said to be of type FPn(R) (resp. FP∞(R), resp. FP(R)) if the trivial R[G]-module R admits a projective resolution which is finitely generated in degrees at most n (resp. finitely generated in all degrees, resp. of finite length and finitely generated in all degrees). Brown gives the following characterisation of finite domination in terms of the (co)homology functors Hi(C•; −) and Hi(C•; −).

Theorem 2.2 ([Bro75, Theorem 2]). Let C• be a chain complex of projective R-modules such that Ci = 0 for all i < 0. The following are equivalent: (1) C• is of finite n-type; (2) for all directed systems of R-modules {Mj}j∈J, the natural map lim−→ Hi(C•; Mj) → Hi(C•; lim−→ Mj) is an isomorphism for i < n and a monomorphism for i = n; (3) for all directed systems {Mj}j∈J of R-modules with lim−→ Mj = 0, we have lim−→
Hi(C•; Mj) = 0 for i ⩽ n; (4) for any family {Mj}j∈J of R-modules, the natural map Hi(C•; ∏ Mj) → ∏ Hi(C•; Mj) is an isomorphism for i < n and an epimorphism for i = n; (5) for any index set J, the natural map Hi(C•; ∏J R) → ∏J Hi(C•; R) is an isomorphism for i < n and an epimorphism for i = n. In particular, C• is finitely dominated if and only if the homology functors Hi(C•; −) preserve direct limits if and only if the cohomology functors Hi(C•; −) preserve direct products.

Definition 2.3 (Cohomological dimension). The cohomological dimension cd(C•) of a chain complex C• of R-modules is the supremal integer n such that there exists an R-module M with Hn(C•; M) ≠ 0.

We will always use this definition of cohomological dimension of a chain complex, but it is nice to know that it also has the following characterisation when C• consists of projective modules.

Lemma 2.4. If C• is a chain complex of projective R-modules, then C• is chain homotopy equivalent to a complex D• such that Di = 0 for all i > n if and only if cd(C•) ⩽ n.

Proof. If C• is chain homotopy equivalent to D• as in the statement, then it is clear that Hi(C•; M) = 0 for all R-modules M and all i > n, since chain homotopy equivalences induce isomorphisms on homology. Hence cd(C•) ⩽ n. Conversely, suppose that cd(C•) ⩽ n. Let M = im(∂n+1) ⩽ Cn. By assumption, the cocycle z : Cn+1 → M given by ∂ is zero in Hn+1(C•; M), so there is a cochain c: Cn → M such that c∂ = z. This implies that M is a direct summand of Cn; write Cn = M ⊕ P. Then C• splits as the direct sum of the chain complexes {· · · → Cn+2 → Cn+1 → M → 0 → · · · } ⊕ {· · · → 0 → P → Cn−1 → Cn−2 → · · · } of projective modules, where M and P both lie in degree n. We claim that the complex on the left (call it E•) is null-homotopic, which will prove the lemma. It is an easy exercise to check that Hi(E•; L) = 0 for all integers i and all R-modules L. Let N = im(∂n+2) and consider the short exact sequence 0 → N → Cn+1 → M → 0. By assumption, the cocycle w: Cn+2 → N given by ∂ is zero in cohomology, so there is some d: Cn+1 → N such that w = d∂. Hence, the short exact sequence above is split, so E• = {· · · → Cn+2 → N → 0 → · · · } ⊕ {· · · → 0 → M → M → 0 → · · · }. Again, the complex {· · · → Cn+2 → N → 0 → · · · } has vanishing cohomology with all coefficients, so we can split off {· · · → 0 → N → N → 0 → · · · } as a direct summand, whose complement has vanishing cohomology with all coefficients. Continuing in this way shows that E• is a direct sum of complexes of the form {0 → L → L → 0} and is therefore null-homotopic. □

The Eilenberg–Watts theorem will be useful to us in induction arguments, including in the proof of Lemma 2.6 just below. Morally, it says that tensoring is the only right-exact additive functor that preserves direct sums.

Theorem 2.5 ([Wat60]). Let R and S be rings, and let F : ModR → ModS be an additive functor. If F is right-exact and preserves arbitrary direct sums, then there is a natural isomorphism F(−) ≅ − ⊗R F(R), where R is viewed as a right module over itself via multiplication, and F(R) is viewed as a left R-module via r · x := F(mr)(x) (and mr is the left-multiplication by r map).

Lemma 2.6. Let R be a ring, let C• be a chain complex of projective left R-modules and let m and n be integers. (1) Suppose Ci = 0 for all i < 0. If Hi(C•; R) = 0 for all i ⩽ n, then Hi(C•; M) = 0 for all i ⩽ n and all R-modules M. (2) Suppose Ci = 0 for all i > n, that Hi(C•; −) commutes with direct limits for all i ⩾ m, and that Hi(C•; R) = 0 for all i ⩾ m.
Then Hi(C•; M) = 0 for all i ⩾ m and all R-modules M.

Proof. For (1), see [Bro75, Lemma 2]. For (2), we will assume that n ⩾ m, since otherwise the statement is vacuous. We will prove that Hi(C•; M) = 0 for all R-modules M by reverse induction on i, starting with i = n + 1 and terminating with i = m. There is nothing to prove in the base case, since Cn+1 = 0 and therefore Hn+1(C•; M) trivially vanishes for all modules M. Suppose Hi(C•; M) = 0 for all R-modules M and for some i > m. Let 0 → M0 → M1 → M2 → 0 be a short exact sequence of modules. The associated long exact sequence in cohomology contains the portion Hi−1(C•; M1) → Hi−1(C•; M2) → Hi(C•; M0) = 0, so Hi−1(C•; −) is right exact. The functor Hi−1(C•; −) also preserves direct limits by assumption, so the Eilenberg–Watts theorem implies that Hi−1(C•; −) is naturally isomorphic to the functor Hi−1(C•; R) ⊗R −, which is zero by assumption. □

3. Twisted Laurent series and finite domination

Let R be a ring and let α: R → R be a ring automorphism. The α-twisted Laurent polynomial ring in one variable is denoted by Rα[t−1, t]. This is the Laurent polynomial ring over R, where we impose the rule rtn = tnαn(r) for all r ∈ R and all n ∈ Z. Let Rα[t−1, tK denote the ring of Laurent series in the variable t, i.e. the ring of expressions of the form Σn⩾k rn tn with rn ∈ R for all n ⩾ k, where k is some integer. Multiplication is defined on Rα[t−1, tK by formally extending the multiplication on Rα[t−1, t]. Similarly, let RαJt−1, t] be the ring of formal series of the form Σn⩽k rn tn for some k ∈ Z, i.e. the ring of Laurent series in the variable t−1.

Example 3.1 (The Novikov ring). The main example of the construction above will come from groups and maps to Z. Let χ: G → R be a homomorphism with kernel N and let R be a ring. The Novikov ring associated to this data is ‘ R[G] χ = { Σg∈G rg g : |{g : rg ≠ 0} ∩ χ−1((−∞, t])| < ∞ for all t ∈ R }. Now suppose the image of χ is infinite cyclic and let t ∈ G map to a generator of χ(G). Denoting by α the automorphism of R[N] induced by the conjugation action of t, we have ‘ R[G] χ ≅ R[N]α[t−1, tK.

In view of Example 3.1, we will often call the (co)homology of a complex C• of Rα[t−1, t]-modules with coefficients in Rα[t−1, tK the Novikov cohomology of C•. Given a right R-module M, let MαJt−1, tK = ∏i∈Z M ti. The elements of MαJt−1, tK are thought of as bi-infinite series Σi∈Z mi ti, where mi ∈ M for each i ∈ Z. We endow MαJt−1, tK with the structure of a right Rα[t−1, t]-module by defining (Σi∈Z mi ti) · rtn = Σi∈Z (mi α−i(r)) ti+n. We define Mα[t−1, tK to be the submodule of MαJt−1, tK consisting of series of the form Σi⩾k mi ti for some k ∈ Z. Similarly, let MαJt−1, t] be the submodule of series of the form Σi⩽k mi ti for some k ∈ Z. Crucially, Mα[t−1, tK is a right Rα[t−1, tK-module, and MαJt−1, t] is a right RαJt−1, t]-module. Finally, let Mα[t−1, t] be the right Rα[t−1, t]-module of finitely supported sums Σi mi ti.

Observation 3.2. There is a short exact sequence 0 → Mα[t−1, t] → MαJt−1, t] ⊕ Mα[t−1, tK → MαJt−1, tK → 0 of right Rα[t−1, t]-modules, where the first map is x ↦ (x, x) and the second map is (x, y) ↦ x − y.

Let ι: R → Rα[t−1, t] be the natural inclusion. If M is an Rα[t−1, t]-module, denote the restricted R-module by ι!M.
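The exactness of the sequence in Observation 3.2 is not spelled out above; the following check (ours, writing the modules rendered MαJt−1, t] and Mα[t−1, tK in the text as \(M_\alpha[\![t^{-1},t]\) and \(M_\alpha[t^{-1},t]\!]\)) may help. Any bi-infinite series splits as
\[
m \;=\; \sum_{i \le 0} m_i t^i \;+\; \sum_{i > 0} m_i t^i \;=\; m_- + m_+,
\qquad
m_- \in M_\alpha[\![t^{-1},t],\quad m_+ \in M_\alpha[t^{-1},t]\!],
\]
so m = m_- − (−m_+) and the difference map (x, y) ↦ x − y is surjective. Its kernel consists of pairs (x, x) with x lying in both submodules, i.e. with t-degrees bounded both above and below, which is exactly the finitely supported module Mα[t−1, t]. Injectivity of x ↦ (x, x) is clear.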
Lemma 3.3 (Shapiro's Lemma). Let C• be a chain complex of Rα[t−1, t]-modules. For any right R-module M, we have (1) Hi(C•; Mα[t−1, t]) ≅ Hi(ι!C•; M) when C• is a complex of left modules, and (2) Hi(C•; MαJt−1, tK) ≅ Hi(ι!C•; M) when C• is a complex of right modules, for all i ∈ Z.

Proof. The isomorphism in (1) is an immediate consequence of the isomorphism Mα[t−1, t] ≅ M ⊗R Rα[t−1, t]. For (2), there is an isomorphism Φ: HomRα[t−1,t](L, MαJt−1, tK) → HomR(ι!L, M) for any right Rα[t−1, t]-module L; it is given by Φ(φ)(l) = φ(l)0, where l ∈ L is arbitrary. Its inverse Ψ is given by Ψ(ψ)(l) = Σi∈Z ψ(lti) t−i. We leave it to the reader to verify that these isomorphisms are well-defined and that they are functorial in the first entry, i.e. that given a map L0 → L1 of right Rα[t−1, t]-modules, the square formed by the induced maps HomRα[t−1,t](L1, MαJt−1, tK) → HomRα[t−1,t](L0, MαJt−1, tK) and HomR(ι!L1, M) → HomR(ι!L0, M), with vertical maps Φ, commutes. The claimed isomorphism is then an immediate consequence of the isomorphism Φ. □

The power of Observation 3.2 is that it relates the (co)homology of C• with that of ι!C• via the following immediate consequence of Shapiro's Lemma.

Corollary 3.4. Let C• be a chain complex of projective Rα[t−1, t]-modules. For any right R-module M, the short exact sequence of Observation 3.2 induces the following long exact sequences in homology and cohomology: · · · → Hi(C•; MαJt−1, t] ⊕ Mα[t−1, tK) → Hi(C•; MαJt−1, tK) → Hi−1(ι!C•; M) → Hi−1(C•; MαJt−1, t] ⊕ Mα[t−1, tK) → · · · and · · · → Hi(C•; MαJt−1, t] ⊕ Mα[t−1, tK) → Hi(ι!C•; M) → Hi+1(C•; Mα[t−1, t]) → Hi+1(C•; MαJt−1, t] ⊕ Mα[t−1, tK) → · · · .

Before establishing some of the main results, we need the following key lemma.

Lemma 3.5. Any non-zero Rα[t−1, tK-module is uncountable.

Proof. Let S ⊂ Rα[t−1, tK be the set of non-zero expressions of the form Σi⩾0 εi ti, where εi ∈ {0, 1} for each i ⩾ 0. Observe that if s0 ≠ s1 are distinct elements of S, then s1 − s0 is a unit of Rα[t−1, tK. This is because s1 − s0 = ±(1 − p)tk for some k ⩾ 0 and some element p of the form p = Σi⩾1 δi ti. Hence (s1 − s0)−1 = ±t−k(1 + p + p2 + · · · ). Let M be a non-zero module over Rα[t−1, tK and let m ∈ M ∖ {0}. We claim that the map S → M sending s to m·s is injective. Suppose m·s0 = m·s1 for some s0, s1 ∈ S. If s0 ≠ s1, then m = m(s1−s0)(s1−s0)−1 = 0, which is a contradiction. Hence, the map is injective, and since S is uncountable, M is too. □

We are now ready to give a proof of Ranicki's Criterion based on the long exact sequences of Corollary 3.4. The result differs from Ranicki's in two ways: first, it is in terms of vanishing Novikov cohomology (instead of homology) and second, it does not require that the chain complex be bounded above or below. Note also that Ranicki's original proof was for chain complexes of free modules and with trivial twisting α.

Theorem 3.6 (Ranicki's Criterion, cohomology version). Let R be a countable ring and let C• be a chain complex of finitely generated projective Rα[t−1, t]-modules. The following are equivalent: (1) ι!C• is finitely dominated; (2) Hi(C•; RαJt−1, t]) = Hi(C•; Rα[t−1, tK) = 0 for all i.

Proof. Assume that (1) holds. The long exact sequence in cohomology of Corollary 3.4 yields exact sequences Hi(C•; Rα[t−1, t]) → Hi(C•; RαJt−1, t] ⊕ Rα[t−1, tK) → Hi(ι!C•; R) for all i ⩾ 0. Since C• is a chain complex of finitely generated Rα[t−1, t]-modules and Rα[t−1, t] is countable, it follows that Hi(C•; Rα[t−1, t]) is countable. Moreover, since ι!C• is chain homotopy equivalent to a chain complex of finitely generated projective R-modules, we also have that Hi(ι!C•; R) is countable.
Since the sequence above is exact, we conclude that Hi(C•; RαJt−1, t] ⊕ Rα[t−1, tK) is countable, and therefore vanishes by Lemma 3.5. Now we assume that (2) holds. Let M be an arbitrary right R-module. The long exact sequence in cohomology of Corollary 3.4 yields isomorphisms Hi(ι!C•; M) ≅ Hi+1(C•; Mα[t−1, t]) for all i ⩾ 0, by Lemma 2.6. Let {Mj}j∈J be any directed system of R-modules such that lim−→ Mj = 0. Then lim−→ (Mj)α[t−1, t] ≅ lim−→ ((Mj)α ⊗R Rα[t−1, t]) = 0 because tensor products commute with direct limits. Since C• is a complex of finitely generated projective modules, we have lim−→ Hi(ι!C•; Mj) ≅ lim−→ Hi+1(C•; (Mj)α[t−1, t]) ≅ Hi+1(C•; lim−→ (Mj)α[t−1, t]) = 0. By Theorem 2.2, ι!C• is finitely dominated. □

Remark 3.7. Let S be an Rα[t−1, t]-algebra. If C• is a bounded chain complex (meaning Ci = 0 for |i| sufficiently large), then the homology Hi(C•; S) vanishes for all i if and only if the cohomology Hi(C•; S) vanishes for all i (this is an easy consequence of [Bro75, Lemma 2]). Hence, the proof of Theorem 3.6 gives a short new proof of Ranicki's Criterion. We decided to give the proof of Theorem 3.6 in the case that R is countable, since it is more straightforward and this covers most cases of interest (such as when R = k[N], where k is a countable ring and N = ker(G → Z) for some countable group G). As explained below, the case of a general ring R follows from the countable case. Note that only the proof of the implication (1) ⇒ (2) used the countability assumption.

Proof (of Theorem 3.6, for a general ring R). We first set up some general notation: if F is a free S-module for some ring S and with a fixed basis B, then let F|S′ denote the free S′-submodule ⊕b∈B S′b ⩽ F. By assumption, there is a chain complex D• of finitely generated projective R-modules chain homotopy equivalent to ι!C•. We may assume that C• and D• are complexes of finitely generated free modules, at the potential cost of them being nonzero in infinitely many degrees. Fix bases for all the modules in C• and D•. Since Rα[t−1, t] is free as an R-module with basis given by the powers of t, this choice induces a basis for ι!C• as a free R-module. Note that all the bases are countable. We consider the following module maps: • the boundary maps of the chain complexes ι!C• and D•; • a homotopy equivalence h• : D• → ι!C• with its chain homotopy inverse g• : ι!C• → D•; • a chain map s• : D• → D•+1 witnessing the fact that g•h• is chain homotopic to the identity (i.e. a map satisfying idDi − gihi = si−1∂i + ∂i+1si), as well as a chain map r• : ι!C• → ι!C•+1 witnessing the fact that h•g• is chain homotopic to the identity. Let E be the union of the R-bases of all the modules ι!Ci and Di. Let R′ ⩽ R be a countable subring such that f(e) ∈ M|R′, where e ∈ E is arbitrary and f is any one of the maps above whose domain contains e and whose codomain is M. Then ι!C•|R′ and D•|R′ are well defined chain complexes of free R′-modules, and the restrictions of the maps above witness the fact that they are chain homotopic. We will now prove that Hi(C•; Rα[t−1, tK) = 0 for all i. Let z : Ci → Rα[t−1, tK be a cocycle, and suppose that {e1, . . . , em} is the fixed finite generating set of Ci as an Rα[t−1, t]-module. By potentially enlarging the subring R′ from the previous paragraph, we may assume it is countable and contains all the coefficients of each element z(ej). We may moreover assume that R′ is α-invariant (and countable) by replacing it with the subring generated by ⋃i∈Z αi(R′).
Then z restricts to a cochain z′ : Ci|R′ −! R′ α[t−1, t]. By Theorem 3.6, we have Hi(C•|R′; R′ α[t−1, tK) = 0, so there exists a cochain c′ : Ci−1|R′ ! R′ α[t−1, tK such that z′ = c′∂. But c′ defines an R-linear map c: Ci−1 −! Rα[t−1, tK 12 SAM P. FISHER by setting c(e) = c′(e) on all basis elements e. We then also have z = c∂, proving that Hi(C•; Rα[t−1, tK) = 0, as desired. □ We will need the next two propositions later on. The first is a refinement of Ranicki’s Criterion, which is often called Sikorav’s Theorem, and has already been obtained in this form by Hillman and Kochloukova [HK07, Theorem 5]. We will give a new proof the direction (2) ⇒(1). Together with Hillman and Kochloukova’s proof of the implication (1) ⇒(2), this gives a proof of Sikorav’s Theorem that does not involve analysing individual Novikov cycles in the complex of Novikov chains, and instead uses standard methods from homological algebra (such as mapping cones of chain complexes and long exact sequences in homology). Proposition 3.8 (Sikorav’s Theorem). Let C• be a chain complex of projective Rα[t−1, t]-modules such that Ci is finitely generated for all i ⩽n and Ci = 0 for all i < 0. The following are equivalent: (1) ι!C• is of finite n-type; (2) Hi(C•; RαJt−1, t]) = Hi(C•; Rα[t−1, tK) = 0 for all i ⩽n. Proof. Assume that (2) holds. There is a finitely generated direct summand C′ n+1 of Cn+1 such that the modified chain complex C′ • = {0 −! C′ n+1 −! Cn −! · · · −! C0 −! 0} still has Hn(C′ •; RαJt−1, t]) = Hn(C′ •; Rα[t−1, tK) = 0. This is because RαJt−1, t] is a ring, and therefore the module Zi(RαJt−1, t]⊗Rα[t−1,t] C•) of cycles degrees i ⩽n is a direct summand of RαJt−1, t] ⊗Rα[t−1,v] Ci for all i ⩽n. A similar remark applies to Rα[t−1, tK. Now, Hi(C′ •; MαJt−1, t]) = Hi(C′ •; Mα[t−1, tK) = 0 for any right R-module M by Lemma 2.6 and all i ⩽n. The long exact sequence in homology of Corollary 3.4 yields isomorphisms Hi+1(C′ •; MαJt−1, tK) ∼= Hi(ι!C•; M) for all i < n. Note that Q RαJt−1, tK ∼= (Q R)α Jt−1, tK, where the product is taken over an arbitrary index set. Since Ci+1 is finitely generated for i < n, the homology of C• commutes with products, and therefore Hi(ι!C•; Y R) = Hi(ι!C′ •; Y R) ∼= Hi+1(C′ •; ( Y R)αJt−1, tK) ∼= Hi+1(C′ •; Y RαJt−1, tK) ∼= Y Hi+1(C′ •; RαJt−1, tK) ∼= Y Hi(ι!C′ •; R) = Y Hi(ι!C•; R) for i < n. We have a commuting diagram Hn+1(C′ •; Q RαJt−1, tK) Hn(ι!C′ •; Q R) 0 Q Hn+1(C′ •; RαJt−1, tK) Q Hn(ι!C′ •; R) 0 ∼ = NOVIKOV COHOMOLOGY 13 with exact rows. Hence, Hn(ι!C′ •; Q R) ! Q Hn(ι!C′ •; R) is an epimorphism. But there is another commuting diagram Hn(ι!C′ •; Q R) Hn(ι!C•; Q R) 0 Q Hn(ι!C′ •; R) Q Hn(ι!C•; Q R) 0 with exact rows. Hence Q Hn(ι!C′ •; R) ! Hn(ι!C•; Q R) is an epimorphism, and hence (1) follows from Theorem 2.2. □ The following result is the dual of the previous theorem and is proved similarly. It will be used in Theorem 4.14. Theorem 3.9. Let C• be a chain complex of projective Rα[t−1, t]-modules with Ci = 0 for all i > n. Suppose that the functors Hi(C•; −) commute with direct limits of Rα[t−1, t]-modules for all i ⩾m. If Hi(C•; RαJt−1, t]) = Hi(C•; Rα[t−1, tK) = 0 for all i ⩾m, then Hi(ι!C•; −) commutes with direct limits for all i ⩾m. If n ⩾m, then ι!C• is of cohomological dimension at most n −1. Remark 3.10. It is unclear to us whether the converse holds, i.e. whether Hi(ι!C•; −) commuting with direct limits for i ⩾m implies vanishing Novikov cohomology in degrees at least m. 
It holds if m = n (see Corollary 3.11), and if we additionally as- sume that C• is a chain complex of finitely generated projectives, then the converse also holds at m = n −1. Proof (of Theorem 3.9). The proof that Hi(ι!C•; −) commutes with direct limits is similar to the proof of Proposition 3.8, where it was shown that Hi(ι!C•; −) commutes with direct products in a certain range. We leave the details to the reader. By Lemma 2.6, Hi(C•; MαJt−1, t]) = Hi(C•; Mα[t−1, tK) = 0 for all i ⩾m and any right R-module M. The long exact sequence in cohomology of Corollary 3.4 then yields Hn(ι!C•; M) = 0. Since M was arbitrary, this shows that ι!C• has cohomological dimension at most n −1. □ As a corollary, we find that the top-dimensional Novikov cohomology of C• completely controls the cohomological dimension of ι!C•, proving Theorem A from the introduction in a more general form. Corollary 3.11. Let C• be a chain complex of projective Rα[t−1, t]-modules of cohomological dimension n and such that Cn is finitely generated. The following are equivalent: (1) ι!C is of cohomological dimension n −1; (2) Hn(C•; RαJt−1, t]) = Hn(C•; Rα[t−1, tK) = 0. Proof. If ι!C• is of cohomological dimension n −1, then the long exact sequence in cohomology of Corollary 3.4 yields the exact sequence Hn(C•; Rα[t−1, t]) −! Hn(C•; RαJt−1, t] ⊕Rα[t−1, tK) −! 0. 14 SAM P. FISHER If R is a countable ring, then Hn(C•; Rα[t−1, t]) is countable (since Cn) is finitely generated. If not, we can use the fact that Cn is finitely generated to reduce to the countable case, much in the same way as in the proof of Theorem 3.6 in the general case. We sketch the argument below. We may assume that the modules Ci are all free. The fact that ι!C• is of cohomological dimension n −1 means that there is a map ρ: ι!Cn−1 ! ι!Cn such that ρ∂= idι!Cn. Since Cn is finitely generated, there is a finitely generated direct summand A of Cn−1 containing ∂(Cn). Writing Cn−1 = A ⊕B, we can take ρ to be zero on B. Representing ∂and ρ as matrices over R, they contain countably many entries, and there is an α-invariant countable subring R′ of R containing all of these entries. By the argument above, Hn(C•; R′ αJt−1, t]) = Hn(C•; R′ α[t−1, tK) = 0. We conclude that Hn(C•; RαJt−1, t]) = Hn(C•; Rα[t−1, tK) = 0 in the same way as in the proof of the general case of Theorem 3.6. □ 4. The Σ∗-invariant Let G be a group and let R be a ring such that cdR(G) = n. If the trivial R[G]-module R admits a projective resolution P• ! R ! 0 such that Pi is finitely generated for all i > n −m, then we will say that G is of type FP∗ m(R). In other words, the trivial module admits a resolution that is finitely generated in the top m degrees. Note that the finiteness property FP∗ m(R) only makes sense for groups with finite cohomological dimension over R, and that if a group is of type FP(R) then it is of type FP∗ m(R) for all m. Definition 4.1. Let G be of type FP∗ m(R). The mth Σ∗-invariant of G (over R) is Σ∗ m(G; R) = {χ ∈H1(G, R) ∖{0} : Hi(G; ‘ R[G] χ ) = 0 for all i > n −m}. In particular, Σ∗ 1(G; R) is the set of characters with respect to which the top- dimensional Novikov homology of G vanishes. It follows immediately from the definition that Σ∗ m+1(G; R) ⊆Σ∗ m(G; R) (whenever G is of type FP∗ m+1(R)). Proposition 4.2. Let G be a finitely generated group of type FP∗ m(R) for some ring R and integer m. Then Σ∗ m(G; R) is open in H1(G; R). The proposition will follow quickly from Lemma 4.5 below, which will be well known to experts. 
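Before turning to the proof, here is a minimal example of the invariant itself (ours; the computation is a standard one in Novikov homology). For G = Zn we have Σ∗m(G; R) = H1(G; R) ∖ {0} for every m and every ring R. Indeed, given χ ≠ 0, pick g ∈ Zn with χ(g) > 0; then
\[
(1-g)^{-1} \;=\; \sum_{j \ge 0} g^{\,j} \;\in\; \widehat{R[\mathbb{Z}^n]}^{\chi},
\]
so g − 1 acts invertibly on the Novikov ring. On the other hand, since g is central in G, the endomorphism of Hi(G; M) induced by g is the identity for every coefficient module M. Taking M to be the Novikov ring, g − 1 therefore induces both zero and an isomorphism on Hi(G; \widehat{R[\mathbb{Z}^n]}^{\chi}), which forces these groups to vanish for all i. For integral χ this is consistent with Theorem 4.7 below: ker χ ≅ Zn−1 has cohomological dimension n − 1 over any ring.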
The proof given here relies on chain contractions, and the following simple fact: if χ: G → R is positive on the support of x ∈ R[G], then there is an open neighbourhood U of χ such that 1 − x is invertible in ‘ R[G] ψ for all ψ ∈ U. We first give a couple of definitions.

Definition 4.3 (Truncation). Let χ: G → R be a character. The truncation of an element Σg∈G rg g ∈ ‘ R[G] χ at height t ∈ R is the element Σg∈G r̄g g ∈ R[G], where r̄g = rg if χ(g) < t, and r̄g = 0 otherwise.

Denote by RG the set of all formal series of elements of G with coefficients in R (which can also be thought of as the set of all functions G → R). For every character χ: G → R, there is a natural inclusion ‘ R[G] χ ⊆ RG.

Definition 4.4 (Definability over Novikov rings). Let χ, ψ: G → R be two characters. Say that x ∈ ‘ R[G] χ is definable over ‘ R[G] ψ if x ∈ ‘ R[G] χ ∩ ‘ R[G] ψ when viewing both Novikov rings as subsets of RG.

Definitions 4.3 and 4.4 extend immediately to matrices over Novikov rings. Let A ∈ Mat( ‘ R[G] χ ). The truncation of A at height t ∈ R is obtained by truncating all entries of A at height t, and we say that A is definable over ‘ R[G] ψ if all its entries are definable over ‘ R[G] ψ .

Lemma 4.5. Let C• be a chain complex of finitely generated projective R[G]-modules such that Ci = 0 for i < 0. For any integer m, the set of non-zero characters χ such that Hi(C•; ‘ R[G] χ ) = 0 for all i ⩽ m is open in H1(G; R) ∖ {0}.

Proof. By taking direct sums, we may assume that the modules Ci are finitely generated and free. Let χ be a character such that Hi(C•; ‘ R[G] χ ) = 0 for all i ⩽ m. Fixing bases for the modules Ci induces representations of maps between modules ‘ R[G] χ ⊗R[G] Ci as matrices over ‘ R[G] χ . By induction on i, we will show that ‘ R[G] χ ⊗R[G] C• admits a partial chain contraction of length i whose maps are defined over ‘ R[G] ψ for all ψ in some open neighbourhood U ⊆ H1(G; R) ∖ {0} of χ (recall that a partial chain contraction of length l is a sequence of maps si : ‘ R[G] χ ⊗R[G] Ci → ‘ R[G] χ ⊗R[G] Ci+1 such that id = si−1∂i + ∂i+1si for all i ⩽ l). Set s−1 = 0, which is defined over every Novikov ring, so there is nothing to prove in the base case. Suppose that m ⩾ i > −1 and that we have found maps sj for all j < i defining a length i − 1 partial chain contraction of ‘ R[G] χ ⊗R[G] C•, which is defined over ‘ R[G] ψ for all ψ in some open neighbourhood U of χ. By the Novikov acyclicity assumption, there is a map σi : ‘ R[G] χ ⊗R[G] Ci → ‘ R[G] χ ⊗R[G] Ci+1 extending the partial chain contraction to one of length i. We will now modify σi so that it is defined over an open set of characters. Let σ̄i be a truncation of σi at a height sufficiently large so that every entry of the matrix M = ∂i+1(σi − σ̄i) has positive support under χ. We then have M = id − si−1∂i − ∂i+1σ̄i, so M is defined over ‘ R[G] ψ for ψ in some open neighbourhood V of χ. Moreover, since M has only finitely many non-zero entries, we may shrink V so that the entries of M are positively supported with respect to all ψ ∈ V . It follows that id − M is invertible with inverse Σi⩾0 Mi defined over ‘ R[G] ψ for all ψ ∈ V . Let si = σ̄i(id − M)−1. One easily checks that this extends the partial chain contraction, and that it is defined in the open neighbourhood U ∩ V of χ. This completes the induction. The existence of a length m partial chain contraction defined over ‘ R[G] ψ for all ψ in an open neighbourhood of χ implies that Hi(C•; ‘ R[G] ψ ) = 0 for all i ⩽ m and all ψ in said neighbourhood. □
Remark 4.6. The proof above would be shorter if definability over the Novikov ring was an open condition on characters. This is not the case. Indeed, it is easy to construct examples of elements in ‘ R[G] χ (with G = Z2, for instance) that are not defined over any ‘ R[G] ψ with ψ ≠ χ.

Proof (of Proposition 4.2). Let P• → R → 0 be a length n resolution witnessing the FP∗m(R) property for G. Let P′n−m be a finitely generated direct summand of Pn−m containing the image of Pn−m+1 under the boundary map. The complex 0 → Pn → · · · → Pn−m+1 → P′n−m has vanishing cohomology (with any coefficients) in degrees above n − m if and only if P• does. Hence, we will work with this modified complex, and continue to denote it by P•. The cohomology Hi(P•; ‘ R[G] χ ) in degrees i > n − m is the cohomology of the cochain complex HomR[G](Pi, ‘ R[G] χ ) ≅ HomR[G](Pi, R[G]) ⊗R[G] ‘ R[G] χ . Since Pi is finitely generated, HomR[G](Pi, R[G]) is projective and finitely generated as an R[G]-module. The result then immediately follows from Lemma 4.5. □

As we saw in Example 3.1, the Novikov ring associated to an integral character χ: G → Z is a special case of the construction Rα[t−1, tK. Thus, the results of the previous section readily apply to give theorems about group cohomology. The first such result is an immediate consequence of Corollary 3.11, and establishes the converse to [Fis24b, Theorem 3.5] in the case of integral characters.

Theorem 4.7. Let G be a group and let R be a ring such that cdR(G) < ∞ and G is of type FP∗1(R). Let χ: G → Z be an integral character. Then cdR(ker χ) = cdR(G) − 1 if and only if ±χ ∈ Σ∗1(G; R).

Corollary C follows at once from Theorem 4.7 and Proposition 4.2.

Corollary 4.8. Let G be a group of type FP(R) for some ring R. The set of characters {χ: G → Q : cdR(ker χ) = cdR(G) − 1} is open in H1(G; Q).

The remainder of the section will be devoted to proving Theorem E. We first need a result about the finiteness properties of kernels of maps to poly-Z groups.

Notation 4.9. Recall that a group P is poly-Z if it admits a subnormal series {1} = Pd ⩽ Pd−1 ⩽ · · · ⩽ P1 ⩽ P0 = P such that Pi/Pi+1 ≅ Z for all 0 ⩽ i < d. The integer d is the length of the poly-Z group P; it coincides with its cohomological dimension, and therefore is uniquely determined. We will always assume that our poly-Z groups come with a fixed subnormal series as above. For an epimorphism χ: G → P with P poly-Z, there is a distinguished sequence of integral characters χi : Gi → Gi/Gi+1 ≅ Z where Gi := χ−1(Pi). Note that if P is of length d, then Gd = ker χ.

The following two results are easy consequences of Proposition 3.8 and Theorem 3.9; we will need them when studying kernels of characters with free Abelian image. We use the following notation: if M is an R[G]-module and N ⩽ G is a pair of groups, then the R[N]-module obtained by restricting M will be denoted by resG N M. We follow Notation 4.9 in the statement of the next proposition.

Proposition 4.10. Let χ: G → P be an epimorphism, where P is a poly-Z group of length d. Let C• be a chain complex of projective R[G]-modules of finite n-type. The following are equivalent: (1) resG Gd C• is of finite n-type; (2) Hi(resG Gj C•; ‘ R[Gj] ±χj) = 0 for all i ⩽ n and all 0 ⩽ j < d.

Proof. We prove the result by induction on d. If d = 1, then the statement of the proposition is identical to that of Proposition 3.8. Suppose (1) holds. Then resG G1 C• is of finite n-type. Indeed, there is a convergent spectral sequence Hp(G1/Gd; Hq(resG Gd C•; M)) ⇒ Hp+q(resG G1 C•; M) for any R[G1]-module M.
This is just the Lyndon–Hochschild–Serre spectral sequence, except one replaces a resolution of the trivial R[G]-module R by the chain complex resG Gd C•; the proof of its existence is identical (see [Bro94, Chapter VII]). By Theorem 2.2, lim−→ Hq(resG Gd C•; Mj) = 0 for all q ⩽ n and all directed systems {Mj} with vanishing direct limit. Moreover, Hp(G1/Gd; −) preserves direct limits in all degrees, since G1/Gd is a poly-Z group and therefore of type FP. Thus, lim−→ Hp(resG G1 C•; Mj) = 0 for all p ⩽ n and all directed systems {Mj} with vanishing direct limit. Again by Theorem 2.2, resG G1 C• has the appropriate finiteness conditions. By Proposition 3.8, we conclude that Hj(resG G1 C•; ‘ R[G] ±χ0) = 0 for all j ⩽ n. By induction, we also have Hj(resG Gi+1 C•; ‘ R[Gi] ±χi) ≅ Hj(resG1 Gi+1 C•; ‘ R[Gi] ±χi) = 0 for all j ⩽ n and all 0 < i < d. If (2) holds, then resG G1 C• is of finite n-type by Proposition 3.8. By induction, resG1 Gd C• is of finite n-type. □

Proposition 4.11. Let χ: G → P be an epimorphism, where P is a poly-Z group of length d. Let C• be a chain complex of projective R[G]-modules such that Ci = 0 for all i > n and such that Hi(C•; −) commutes with direct limits of R[G]-modules for all i > n − d. If Hi(resG Gj C•; ‘ R[Gj] ±χj) = 0 for all i > n − d and all 0 ⩽ j < d, then resG Gd C• is of cohomological dimension at most n − d.

Proof. We prove the result by induction on d. If d = 1, then it follows immediately from Corollary 3.11. Suppose that d > 1. Again by Corollary 3.11, resG G1 C• is chain homotopy equivalent to a chain complex of projective modules D• which is of cohomological dimension at most n − 1 and such that Hi(D•; −) commutes with direct limits for all i > n − d. The claim now follows by induction. □

For the proof of Theorem E, we will need the following result of Farber, Geoghegan, and Schütz, which gives a version of Sikorav's Theorem for chain complexes.

Theorem 4.12 ([FGS10, Theorem 8]). Let C• be a chain complex of projective R[G]-modules such that Ci is finitely generated for i ⩽ n and zero for i < 0. If N ⊴ G is such that G/N is Abelian, then the following are equivalent: (1) resG N C• is of finite n-type; (2) for all χ ∈ S(G, N), we have Hi(C•; ‘ R[G] χ ) = 0 for all i ⩽ n.

Remark 4.13. In [FGS10], the result is stated for R = Z. Just as in the case of Sikorav's Theorem for groups, the proof goes through to the case of a general unital associative ring R without modification.

We are now ready to prove Theorem E for chain complexes of projectives. Recall that if N ⩽ G is a pair of groups and M is a left R[N]-module, then the corresponding induced module is IndG N M = R[G] ⊗R[N] M, and is a left R[G]-module.

Theorem 4.14. Let C• be a chain complex of finitely generated projective R[G]-modules such that Ci = 0 for all i > n, and let χ: G → R be a character with free Abelian image of rank d. If Hi(C•; ‘ R[G] φ ) = 0 for all φ ∈ S(G, ker χ) and all i > n − d, then resG ker χ C• has cohomological dimension at most n − d.

Proof. Since the modules Ci are finitely generated, HomR[G](C•, ‘ R[G] φ ) ≅ HomR[G](C•, R[G]) ⊗R[G] ‘ R[G] φ . The Novikov acyclicity assumption and Theorem 4.12 imply that resG ker χ HomR[G](C•, R[G]) is chain homotopy equivalent to a chain complex D• of projective R[ker χ]-modules such that Di is finitely generated for all i > n − d. For the remainder of the proof, we follow Notation 4.9. Let P = χ(G) ≅ Zd and fix a series of subgroups {1} = Pd ⩽ Pd−1 ⩽ · · · ⩽ P1 ⩽ P0 = P such that Pi/Pi+1 ≅ Z for all 0 ⩽ i < d.
Let the maps χi : Gi = χ−1(Pi) → Pi/Pi+1 be the associated integral characters. By Proposition 4.10, for all 0 ⩽ j < d, the cochain complexes resG Gj HomR[G](C•, R[G]) ⊗R[Gj] ‘ R[Gj] ±χj ≅ HomR[G](C•, R[G]) ⊗R[G] IndG Gj ‘ R[Gj] ±χj ≅ HomR[G](C•, IndG Gj ‘ R[Gj] ±χj) are acyclic in degrees greater than n − d, so we have Hi(C•; IndG Gj ‘ R[Gj] ±χj) = 0 for all i > n − d and 0 ⩽ j < d. The result will follow quickly from the following claim.

Claim 4.15. Let χ: G → P be an epimorphism, where P is a poly-Z group of length d. Let F• be a chain complex of projective R[G]-modules such that Fi = 0 for all i > n and such that Hi(F•; −) commutes with direct limits for all i > n − d. If Hi(F•; IndG Gj ‘ R[Gj] ±χj) = 0 for all i > n − d and 0 ⩽ j < d, then Hi(resG Gj F•; ‘ R[Gj] ±χj) = 0 for all i > n − d and 0 ⩽ j < d.

Proof. We will prove this by induction on d. If d = 1, then there is nothing to show, so we assume that d > 1. Fix some j < d. Let M = IndG1 Gj ‘ R[Gj] ±χj, and let t ∈ G map to the generator of G/G1 such that χ0(t) = 1. As in Observation 3.2, there is a short exact sequence 0 → IndG G1 M → MαJt−1, t] ⊕ Mα[t−1, tK → coIndG G1 M → 0, where α is induced by the conjugation action of t. We know Hi(F•; IndG G1 M) = Hi(F•; IndG Gj ‘ R[Gj] ±χj) = 0 for all i > n − d. We also know that Hi(F•; MαJt−1, t]) = Hi(F•; Mα[t−1, tK) = 0 by Lemma 2.6, since Mα[t−1, tK is a right ‘ R[G] χ0 = R[G1]α[t−1, tK-module and Hi(F•; ‘ R[G] χ0) = 0 (and similarly for MαJt−1, t]). It then follows that Hi(F•; coIndG G1 M) = Hi(resG G1 F•; IndG1 Gj ‘ R[Gj] ±χj) = 0 for all i > n − d. Since Hi(resG G1 F•; −) commutes with direct limits for all i > n − d by Proposition 4.11, the claim holds by induction. ⋄

By the claim, Hi(resG Gj C•; ‘ R[Gj] ±χj) = 0 for all i > n − d and all 0 ⩽ j < d. By Proposition 4.11, resG ker χ C• is of cohomological dimension at most n − d. □

Theorem E follows immediately by taking C• to be a projective resolution of G witnessing the FP∗d(R) property.

Corollary 4.16. Let G be a group of type FP∗d(R) with cdR(G) = n, and suppose that N ⊴ G is such that G/N ≅ Zd. If S(G, N) ⊆ Σ∗d(G; R), then cdR(N) = n − d.

While Corollary 4.16 has a converse for d = 1, the converse fails in general, as the following example shows. The author thanks Grigori Avramidi for telling him that many closed hyperbolic 3-manifolds are free-by-Z2.

Example 4.17. Let M be a closed fibred hyperbolic 3-manifold with b1(M) ⩾ 2, and let χ: π1(M) → Z be an algebraic fibration induced by a fibration of M over S1 with fibre a hyperbolic surface Sg. By openness of the Σ-invariant and the fact that b1(M) ⩾ 2, we may perturb χ to obtain a new algebraic fibration χ′ : π1(M) → Z not equal to ±χ. The kernel of the map f : π1(M) → Z2 given by α ↦ (χ(α), χ′(α)) is an infinite-index subgroup of ker χ ≅ π1(Sg) and therefore is non-finitely generated free. Hence, π1(M) is free-by-Z2. Let F = ker f. By Theorem 1.1, there is a character ψ ∈ S(π1(M), F) such that the Novikov homology H1(π1(M); ‘ Z[π1(M)] ψ ) is non-zero (because F is not finitely generated), so the Novikov cohomology H2(π1(M); ‘ Z[π1(M)] ψ ) is non-zero by Poincaré-duality. But the converse of Corollary 4.16 would imply that Hi(π1(M); ‘ Z[π1(M)] ψ ) = 0 for i = 2, 3, since cd(F) = 1, so it cannot hold.

We conclude the section by proving Corollary F. A group G is an n-dimensional Poincaré-duality group over R (or, more briefly, a PDnR-group) if it satisfies Hi(G; M) ≅ Hn−i(G; M) for every R[G]-module M, where the left-hand side denotes cohomology and the right-hand side homology.
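As a quick illustration of the duality (our example, not from the paper): G = Z is a PD1R-group via the resolution \(0 \to R[t^{\pm 1}] \xrightarrow{\,t-1\,} R[t^{\pm 1}] \to R \to 0\), and since the chain and cochain complexes of this resolution are the same two-term complex, one computes directly
\[
H^0(\mathbb{Z}; M) \;=\; \ker(t-1) \;=\; H_1(\mathbb{Z}; M),
\qquad
H^1(\mathbb{Z}; M) \;=\; M/(t-1)M \;=\; H_0(\mathbb{Z}; M).
\]
Closed orientable surface groups are PD2-groups over Z, and for them Corollary F with d = 1 recovers a familiar fact: the kernel of any epimorphism to Z is of cohomological dimension one, being an infinite-index subgroup of a PD2-group (hence of cohomological dimension at most one by Strebel's theorem, and free).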
Homology commutes with direct limits, which implies that the cohomology of a Poincaré-duality group commutes with direct limits. Hence, by Theorem 2.2, PDnR-groups are of type FP(R).

Corollary 4.18. Let G be a PDnR-group and let χ: G → Zd be an epimorphism. If ker χ is of type FPd−1(R), then cdR(ker χ) = n − d.

Proof. By the higher version of Theorem 1.1 (see [BR88], or [Fis24a, Theorem 5.3] for the case of an arbitrary ring R), S(G, N) ⊆ Σd−1(G; R), where N = ker χ. By Poincaré-duality, S(G, N) ⊆ Σ∗d(G; R). By Corollary 4.16, cdR(N) = n − d. □

5. An application to RFRS groups

Definition 5.1. A group G is said to be residually finite rationally solvable, or RFRS, if (1) there is a chain of finite-index normal subgroups G = G0 ⩾ G1 ⩾ G2 ⩾ . . . such that ⋂i⩾0 Gi = {1}, and (2) ker(Gi → Q ⊗Z Gi/[Gi, Gi]) is a subgroup of Gi+1 for all i ⩾ 0. A chain satisfying these properties will be called a witnessing chain for the RFRS group.

Definition 5.2. It is easy to see that finitely generated RFRS groups are residually (torsion-free solvable), and therefore satisfy the strong Atiyah conjecture by [Sch02]. By a result of Linnell [Lin93], this implies that the division closure of Q[G] in the algebra of affiliated operators U(G) is a division ring, which we call DQ[G]. The nth L2-Betti number of G is b(2)n(G) := dimDQ[G] Hn(G; DQ[G]), where dimDQ[G] M is the rank of a (necessarily free of unique rank) DQ[G]-module M. For more details, we refer the reader to [Lüc02, Chapter 10]. In fact, for RFRS groups G, Jaikin-Zapirain shows that k[G] embeds into a division ring Dk[G] for any field k, which shares many properties with DQ[G] [JZ21, Corollary 1.3]. We may use this division ring to define the L2-Betti numbers of a RFRS group over a field k by b(2)n(G; k) := dimDk[G] Hn(G; Dk[G]). In particular, b(2)n(G; Q) = b(2)n(G).

For us, the important common feature of all the division rings Dk[G] (as the ground field k varies) is that they can be expressed as directed unions of rings which are closely related to the Novikov rings (over k) of G and its finite-index subgroups. For k = Q, this is the crux of Kielak's fibring theorem [Kie20]. This was extended by Jaikin-Zapirain to all fields k in the appendix of [JZ21]. We give a rough sketch of how this works, emphasising the details that will be necessary for us. Fix a finitely generated RFRS group G and a witnessing chain (Gi)i⩾0. In [Kie20, Definition 4.3], Kielak introduces the notion of a rich set of characters in H1(Gi; R) ∖ {0}, a notion which depends on the chosen witnessing chain (Gi)i⩾0. The definition of a rich set is somewhat technical, but we will only need the following properties: • rich sets are open; • rich sets are nonempty; • the intersection of rich sets is rich [Kie20, Lemma 4.4].

Remark 5.3. In [Kie20, Definition 4.3], it is not required that rich sets be open. This is however simply a misprint, and indeed it is important that rich sets be open for the proofs of [Kie20, Lemma 4.4] and [Kie20, Lemma 4.13].

If U ⊆ H1(Gi; R) is any set, then Kielak defines [Kie20, Definition 3.15] an associated subring Dk[Gi],U ⊆ Dk[G] (see also [JZ21, Section 5.2]).
Again the definition of Dk[Gi],U is somewhat technical, but we will really only use some basic properties (references are provided for the interested reader): • k[Gi] is contained in Dk[Gi],U for every choice of U (follows directly from [Kie20, Definition 3.15]); • if U ⊆V ⊆H1(Gi; R), then Dk[Gi],V ⊆Dk[Gi],U (follows directly from [Kie20, Definition 3.15]); • for any choice of U, the conjugation action of G on k[Gi] extends to an action on Dk[Gi],U, and we may thus form the twisted group ring Dk[Gi],U ∗G/Gi ∼= Dk[Gi],U ⊗k[Gi] k[G], and there is a natural embedding Dk[Gi],U ∗G/Gi ⊆Dk[G] (see [JZ21, Proposition 5.3]); • for every χ ∈U, there is a map Dk[Gi],U ! ’ k[Gi] χ , or in other words, ’ k[Gi] χ is a Dk[Gi],U-algebra (see [Kie20, Lemma 3.13]). We continue with the notation above: G is a RFRS group with witnessing chain (Gi)i⩾0. Let n be a non-negative integer and let U = (Un, Un+1, . . . ), where Ui ⩽ H1(Gi; R) is a rich set for each i ⩾n. We call such a U a sequence of rich sets. Define Dk[G],U = \ i⩾n Dk[Gi],Ui ∗G/Gi. The main technical theorem of Kielak is that Dk[G] is covered by the rings Dk[G],U. Theorem 5.4 ([Kie20, Theorem 4.13]). Dk[G] = S U Dk[G],U, where the union is taken over all sequences of rich sets U. We only need the following additional fact. Lemma 5.5. The rings {Dk[G],U}U form a directed system under inclusion. Proof. Let m and n be non-negative integers, and let U = (Um, Un+1, . . . ) and V = (Vn, Vn+1, . . . ) be sequences of rich sets. Let M = max{m, n}. Then Dk[G],U 22 SAM P. FISHER and Dk[G],V are each contained in Dk[G],U∩V, where U ∩V := (UM ∩VM, UM+1 ∩ VM+1, . . . ). By the facts listed above, U ∩V is a sequence of rich sets, and Dk[G],U ⊆Dk[G],U∩V ⊇Dk[G],V. This proves that the system of rings Dk[G],U is directed. □ Lemma 5.6. Let G be a group with cdk(G) = n for some field k. Let S = lim −! Sj, where {Sj}j∈J is a directed system of k[G]-algebras. Suppose that Hi(G; −) com- mutes with direct limits of k[G]-modules for all i ⩾m. If Hi(G; S) = 0 for all i ⩾m, then there is some j ∈J such that Hi(G; Sj) = 0 for all i ⩾m. Proof. Suppose that, for a fixed l ⩾m, there exists j ∈J with Hi(G; Sj) = 0 for all i > l (we trivially know this holds for l = n and all j ∈J). Then Hl(G; −) is a right exact functor on the category of Sj-modules. By Theorem 2.5, there is a natural isomorphism Hl(G; −) ∼= Hl(G; Sj) ⊗Sj − as functors on the category of Sj-modules. The functor Hl(G; −) commutes with direct products of Sj-modules, and there- fore so does the functor Hl(G; Sj) ⊗Sj −. It is a well-known fact (and an easy exercise) that Hl(G; Sj) must be finitely generated as an Sj-module. Let z1, . . . , zs be a finite generating set for Hl(G; Sj). Since Hl(G; S) ∼= Hl(G; Sj) ⊗Sj S ∼= lim −! j′⩾j Hl(G; Sj) ⊗Sj Sj′ = 0, we see that there is some j′ ∈J such that zi ⊗1 = 0 in Hl(G; Sj) ⊗Sj Sj′ for each i = 1, . . . , s. But these elements also generate Hl(G; Sj) ⊗Sj Sj′, so we conclude that Hl(G; Sj′) ∼= Hl(G; Sj) ⊗Sj Sj′ = 0. Since Sj′ is an Sj-module, we also have Hi(G; Sj′) = 0 for all i > l by Lemma 2.6. The lemma then follows by induction. □ We can now prove the main result of this section, which implies Theorem D by taking d = 1 and using the Stallings–Swan Theorem [Sta68, Swa69], which characterises free groups as the groups of cohomological dimension at most one. Theorem 5.7. Let G be a RFRS group with cdk(G) = n for some field k. Suppose Hi(G; −) commutes with direct limits of k[G]-modules for all i > d. 
The following are equivalent: (1) Dk[G] is of weak dimension at most d as a k[G]-module; (2) for every subgroup H ⩽G, we have b(2) i (H; k) = 0 for all i > d; (3) there is a subnormal series Γl < Γl−1 < · · · < Γ0 = G, where each quotient Γi/Γi+1 is either finite or cyclic, and cdk(Γl) = d. The proof will show that in (3), the quotients Γi/Γi+1 will alternate between being finite and infinite cyclic, and when Γi/Γi+1 ∼= Z we have cdk(Γi+1) < cdk(Γi). Proof (of Theorem 5.7). The implication (1) ⇒(2) follows quickly from the fact that Hi(H; Dk[G]) ∼= Hi(H; Dk[H]) ⊗Dk[H] Dk[G] for all i, by flatness of Dk[G] over Dk[H]. NOVIKOV COHOMOLOGY 23 Suppose that (2) holds. By Lemmas 5.5 and 5.6, there is a sequence of rich sets U = (Us, Us+1, . . . ) such that Hi(G; Dk[G],U) = 0 for all i > d. But Dk[G],U ⊆ Dk[Gs],Us ∗G/Gs, and therefore 0 = Hi(G; Dk[Gs],Us ∗G/Gs) ∼= Hi(Gs; Dk[Gs],Us) for all i > d by Lemma 2.6. We will take Γ1 = Gs. Recall that ’ k[Gs] χ is a Dk[Gs],Us- algebra for every χ ∈Us. Since −Us ∩Us is also a rich (and hence open) set, there is an integral character χ: Gs ! Z such that Hi(Gs; ’ k[Gs] ±χ ) = 0 for all i > d. By Corollary 3.11, cdk(ker χ) = n −1 and Hi(ker χ; −) commutes with direct limits for all i > d. We take Γ2 = ker χ, and repeat this argument with Γ2 instead of G to extend the chain to Γ2 > Γ3 > Γ4, where Γ2/Γ3 is finite, and Γ3/Γ4 is cyclic with cdk(Γ4) = n −2. Continuing in this way proves item (3). To prove (3) implies (1), it suffices to prove the following claim: If 1 −! N −! G −! Q −! 1 is an extension with G RFRS, Q amenable, and Dk[N] of weak dimension at most d as a k[N]-module, then Dk[G] is of weak dimension at most d as a k[G]-module. The argument is similar to that of [JZL25, Theorem 3.4]. Indeed, let M be any k[G]-module. Then, for all i > d, Tork[G] i (Dk[N] ∗G/N, M) ∼= Tork[N] i (Dk[N], M) = 0 by Shapiro’s lemma. But Dk[G] is the Ore localisation of Dk[N] ∗G/N by a result of Tamari [Tam54], and since Ore localisation is flat, we also get Tork[G] i (Dk[G], M) = 0 for all i > d, as claimed. □ Corollary 5.8. Let G be a RFRS group of type FP(k). Then Dk[G] is of weak dimension one as a k[G]-module if and only if G is an iterated extension of a free group by (cyclic or finite) groups. Remark 5.9. In the introduction, Corollary 5.8 was stated in terms of U(G), the algebra of operators affiliated to the von Neumann algebra of G. Let k ⊆C be a field. Since Dk[G] is the division closure of k[G] in U(G), and moreover Dk[G] is a division ring, U(G) is faithfully flat over Dk[G]. It follows that U(G) has weak dimension at most one as a k[G]-module if and only if Dk[G] does. It is an interesting problem to determine the difference between the invariants b(2) i (G; k) for different fields k. In the context of RFRS groups (of finite type, say), it is an easy consequence of the results main results of [Kie20] and [Fis24a] that b(2) 1 (G; k) = 0 for one field k if and only if b(2) 1 (G; k′) = 0 for all fields k′. It is open whether the value of b(2) 1 (G; k) depends on the field k, but for every i > 1 there are examples of right-angled Artin groups G (which are, in particular, RFRS) such that b(2) i (G; k) depends on k [AOS21]. Here we show that Dk[G] being of weak dimension one (or equivalently, that all subgroups of G have vanishing k-L2-Betti numbers) does not depend on k. Corollary 5.10. Let G be a RFRS group of type FP. If Dk[G] is of weak dimension one as a k[G]-module for some field k, then Dk′[G] is of weak dimension one as a k′[G]-module for all fields k′. 
We conclude the section with applications to groups with elementary hierarchies. Following Hagen and Wise [HW10], we say that the trivial group has an elementary hierarchy of length 0, and that a group G has an elementary hierarchy of length n if it splits as a graph of groups whose vertex groups have an elementary hierarchy of length n − 1 and whose edge groups are cyclic. The main theorem of [HW10] is that any finitely generated virtually special group with an elementary hierarchy is virtually free-by-cyclic (a group is special if and only if it is a subgroup of a RAAG, so special groups are RFRS). This class of groups includes, e.g., graphs of free groups with cyclic edge groups not containing unbalanced Baumslag–Solitar groups and limit groups not containing Z3-subgroups. We extend the Hagen–Wise theorem in the following way: say that G has an Abelian hierarchy of length 0 if it is trivial and that it has an Abelian hierarchy of length n if it virtually splits as a graph of groups whose vertex groups have an Abelian hierarchy of length n − 1 and whose edge groups are finitely generated virtually Abelian. While all groups with an elementary hierarchy are of cohomological dimension at most two, groups with an Abelian hierarchy can be of arbitrarily high cohomological dimension (the class contains all limit groups, for example).

Corollary 5.11. Let G be a virtually RFRS group of type FP(k) for some field k admitting a virtually Abelian hierarchy. Then G is an iterated extension of a free group by (cyclic or finite) groups.

Proof. It suffices to prove that every subgroup of G has vanishing k-L2-Betti numbers above degree one. Every subgroup of G admits an Abelian hierarchy, so it suffices to prove the claim for a fixed group Γ with an Abelian hierarchy of length n. If n = 0, then Γ is trivial, and therefore b(2)i(Γ; k) = 0 for all i > 0. Now assume n > 0; since k-L2-Betti numbers scale under passage to finite-index subgroups, we will assume that Γ admits a graph of groups splitting (Γv, Γe), where each Γv has an Abelian hierarchy of length n − 1 and Γe is virtually Abelian. Then Chiswell's long exact sequence in the homology of a graph of groups [Chi76] and the fact that virtually Abelian groups have vanishing k-L2-Betti numbers above degree zero imply that b(2)i(Γ; k) = 0 for all i > 1. The result now follows from Corollary 5.8. □

6. Novikov cohomology versus Novikov homology

Let G be a group of finite type and of cohomological dimension n and let χ: G → Z be an integral character. If for some ring R the Novikov cohomology Hn(G; ‘ R[G] χ ) vanishes, then the Novikov homology Hn(G; ‘ R[G] χ ) vanishes as well. A natural question is whether the converse holds. If the converse fails, it still makes sense to ask whether the vanishing of top-dimensional Novikov homology controls some property of the kernel. A natural guess is that if the Novikov homology Hn(G; ‘ R[G] ±χ ) vanishes, then the homological dimension of ker χ over R is n − 1. In this brief section, we will see that these questions have negative answers.

Lemma 6.1. Let G be a torsion-free one-relator group and let R be any ring without zero divisors containing Z[G]. Then the homology H2(G; R) vanishes.

Proof. Since G is torsion-free, Z admits a projective resolution of the form 0 → Z[G] → Z[G]n → Z[G] → Z → 0, with first map denoted ∂ (this is a consequence of the Lyndon identity theorem [Lyn50]). We claim that the extended map ∂: R → Rn is injective. Indeed, suppose ∂(r) = r∂(1) = 0. Since
Corollary 6.2. If G is a torsion-free one-relator group and χ: G → R is any homomorphism, then H_2(G; \widehat{Z[G]}_χ) = 0.
Proof. One-relator groups are locally indicable by a result of Brodskiĭ [Bro84], and therefore the group ring Z[G] has no zero divisors by a result of Higman [Hig40]. By looking at terms of minimal χ-height, it is easy to show that \widehat{Z[G]}_χ also has no zero divisors. The result then follows immediately from Lemma 6.1. □
Example 6.3. Consider the Baumslag–Solitar group G = BS(2, 3) = ⟨a, t | t⁻¹a²t = a³⟩. It is well known that G is not residually finite, and therefore it cannot be virtually free-by-cyclic [Bau71]. So for any finite-index subgroup H ⩽ G and any epimorphism χ: H → Z, either H²(H; \widehat{Z[H]}_χ) ≠ 0 or H²(H; \widehat{Z[H]}_{−χ}) ≠ 0 by Corollary B. In particular, let χ: G → Z be the map defined by χ(a) = 0 and χ(t) = 1. Together with Corollary 6.2, this gives an example of a group of finite type and a map with non-vanishing top-dimensional Novikov cohomology but vanishing Novikov homology. Moreover, ker χ is not of homological dimension one. Indeed, there is a well-defined homomorphism ⟨x, y | x³ = y²⟩ → G, x ↦ a, y ↦ t⁻¹at (it is well defined since (t⁻¹at)² = t⁻¹a²t = a³), which is injective (we leave checking injectivity as an exercise for the reader). Since its image M ≅ ⟨x, y | x³ = y²⟩ is a non-free one-relator group, it is of homological dimension two, which shows that ker χ is of homological dimension two. Hence, vanishing top-dimensional Novikov homology does not imply that the homological dimension of the kernel drops.
References
[AOS21] Grigori Avramidi, Boris Okun, and Kevin Schreve. Mod p and torsion homology growth in nonpositive curvature. Invent. Math., 226(3):711–723, 2021.
[Bau71] Gilbert Baumslag. Finitely generated cyclic extensions of free groups are residually finite. Bull. Austral. Math. Soc., 5:87–94, 1971.
[BG99] Kai-Uwe Bux and Carlos Gonzalez. The Bestvina–Brady construction revisited: geometric computation of Σ-invariants for right-angled Artin groups. J. London Math. Soc. (2), 60(3):793–801, 1999.
[Bie07] Robert Bieri. Deficiency and the geometric invariants of a group. J. Pure Appl. Algebra, 208(3):951–959, 2007. With an appendix by Pascal Schweitzer.
[BNS87] Robert Bieri, Walter D. Neumann, and Ralph Strebel. A geometric invariant of discrete groups. Invent. Math., 90(3):451–477, 1987.
[BR88] Robert Bieri and Burkhardt Renz. Valuations on free resolutions and higher geometric invariants of groups. Comment. Math. Helv., 63(3):464–497, 1988.
[Bro75] Kenneth S. Brown. Homological criteria for finiteness. Comment. Math. Helv., 50:129–135, 1975.
[Bro84] Sergei D. Brodskiĭ. Equations over groups, and groups with one defining relation. Sibirsk. Mat. Zh., 25(2):84–103, 1984.
[Bro94] Kenneth S. Brown. Cohomology of groups, volume 87 of Graduate Texts in Mathematics. Springer-Verlag, New York, 1994. Corrected reprint of the 1982 original.
[Chi76] I. M. Chiswell. Exact sequences associated with a graph of groups. J. Pure Appl. Algebra, 8(1):63–74, 1976.
[Fel71] G. L. Fel'dman. The homological dimension of group algebras of solvable groups. Izv. Akad. Nauk SSSR Ser. Mat., 35:1225–1236, 1971.
[FGS10] M. Farber, R. Geoghegan, and D. Schütz. Closed 1-forms in topology and geometric group theory. Uspekhi Mat. Nauk, 65(1(391)):145–176, 2010.
[FIK25] Sam P. Fisher, Giovanni Italiano, and Dawid Kielak. Virtual fibring of Poincaré-duality groups, 2025. arXiv:2506.14666.
[Fis24a] Sam P. Fisher. Improved algebraic fibrings. Compos. Math., 160(9):2203–2227, 2024.
[Fis24b] Sam P. Fisher. On the cohomological dimension of kernels of maps to Z, 2024.
arXiv:2403.18758. To appear in Geom. Topol.
[Hig40] Graham Higman. Units in group rings. DPhil thesis, University of Oxford, 1940.
[HK07] J. A. Hillman and D. H. Kochloukova. Finiteness conditions and PD_r-group covers of PD_n-complexes. Math. Z., 256(1):45–56, 2007.
[HW10] Mark Hagen and Daniel T. Wise. Special groups with an elementary hierarchy are virtually free-by-Z. Groups Geom. Dyn., 4(3):597–603, 2010.
[JZ21] Andrei Jaikin-Zapirain. The universality of Hughes-free division rings. Selecta Math. (N.S.), 27(4):Paper No. 74, 33 pp., 2021.
[JZL25] Andrei Jaikin-Zapirain and Marco Linton. On the coherence of one-relator groups and their group algebras. Ann. of Math. (2), 201(3):909–959, 2025.
[Kie20] Dawid Kielak. Residually finite rationally solvable groups and virtual fibring. J. Amer. Math. Soc., 33(2):451–486, 2020.
[Lat94] François Latour. Existence de 1-formes fermées non singulières dans une classe de cohomologie de de Rham. Inst. Hautes Études Sci. Publ. Math., (80):135–194 (1995), 1994.
[Lin93] Peter A. Linnell. Division rings and group von Neumann algebras. Forum Math., 5(6):561–576, 1993.
[Lüc02] Wolfgang Lück. L²-invariants: theory and applications to geometry and K-theory. Springer-Verlag, Berlin, 2002.
[Lyn50] Roger C. Lyndon. Cohomology theory of groups with a single defining relation. Ann. of Math. (2), 52:650–665, 1950.
[MMV98] John Meier, Holger Meinert, and Leonard VanWyk. Higher generation subgroup sets and the Σ-invariants of graph groups. Comment. Math. Helv., 73(1):22–44, 1998.
[Ran95] Andrew Ranicki. Finite domination and Novikov rings. Topology, 34(3):619–632, 1995.
[Sch02] Thomas Schick. Erratum: "Integrality of L²-Betti numbers". Math. Ann., 322(2):421–422, 2002.
[Sik87] Jean-Claude Sikorav. Homologie de Novikov associée à une classe de cohomologie de degré un. Thèse d'État, Université Paris-Sud (Orsay), 1987.
[Sta68] John R. Stallings. On torsion-free groups with infinitely many ends. Ann. of Math. (2), 88:312–334, 1968.
[Swa69] Richard G. Swan. Groups of cohomological dimension one. J. Algebra, 12:585–610, 1969.
[Tam54] Dov Tamari. A refined classification of semi-groups leading to generalised polynomial rings with a generalised degree concept. In Proceedings of the International Congress of Mathematicians, 1954, Amsterdam, volume III, pages 439–440. Erven P. Noordhoff N. V., Groningen; North-Holland Publishing Co., Amsterdam, 1954.
[Wat60] Charles E. Watts. Intrinsic characterizations of some additive functors. Proc. Amer. Math. Soc., 11:5–8, 1960.
Email address: samuel.p.fisher@gmail.com
arXiv:2510.14798v1 [cs.DC] 16 Oct 2025
Balls and Bins and the Infinite Process with Random Deletions
Petra Berenbrink∗ Tom Friedetzky† Peter Kling‡ Lars Nagel§
∗University of Hamburg, Germany (petra.berenbrink@uni-hamburg.de).
†Durham University, U.K. (tom.friedetzky@durham.ac.uk).
‡Darmstadt University of Applied Sciences, Germany (peter.kling@h-da.de).
§Loughborough University, U.K. (L.Nagel@lboro.ac.uk).
Abstract
We consider an infinite balls-into-bins process with deletions where in each discrete step t a coin is tossed as to whether, with probability β(t) ∈ (0, 1), a new ball is allocated using the Greedy[2] strategy (which places the ball in the lower loaded of two bins sampled uniformly at random) or, with the remaining probability 1 − β(t), a ball is deleted from a non-empty bin chosen uniformly at random. Let n be the number of bins and m(t) the total load at time t. We are interested in bounding the discrepancy x_max(t) − m(t)/n (current maximum load relative to the current average) and the overload x_max(t) − m_max(t)/n (current maximum load relative to the highest average observed so far). We prove that at an arbitrarily chosen time t the total number of balls above the average is O(n) and that the discrepancy is O(log(n)). For the discrepancy, we provide a matching lower bound. Furthermore, we prove that at an arbitrarily chosen time t the overload is log log(n) + O(1). For "good" insertion probability sequences (in which the average load of time intervals with polynomial length increases in expectation) we show that even the discrepancy is bounded by log log(n) + O(1).
One of our main analytical tools is a layered induction, as per [ABKU99]. Since our model allows for rather more general scenarios than what was previously considered, the formal analysis requires some extra ingredients as well, in particular a detailed potential analysis. Furthermore, we simplify the setup by applying probabilistic couplings to obtain certain "recovery" properties, which eliminate much of the need for intricate and careful conditioning elsewhere in the analysis.
1 Introduction
Balls-into-bins processes have been formalized in the mathematical community since at least the early 18th century [dM18]. Their objective is, e.g., to allocate at random a set of balls into a number of bins such that the maximum number of balls per bin is minimized. These processes have been very well studied, with many results being largely folklore. However, even seemingly minor modifications of the protocol or the model tend to result in vastly different and often far more complex analyses as well as, occasionally, very surprising results. Apart from their undoubted usefulness as probabilistic paradigms, balls-into-bins processes have many applications, often under the bonnet as primitive black-box operations. Among others, this includes areas like (the obvious) load balancing and allocation of jobs to servers [CL97], routing in networks [CMMadH+98], hashing [CRSW13], dictionary data structures [DW05], design of file systems [LCB13], peer-to-peer networks [SMK+01], and many more.
The most simple balls-into-bins process, which allocates m balls independently and uniformly at random to n bins, is very well understood. For example, if m = n then the maximum number of balls (load) in any bin is, w.h.p., log(n)/log log(n) · (1 + o(1)) [RS98]. (We say an event E happens with high probability, w.h.p., if Pr[E] ≥ 1 − n^{−Ω(1)}.) In [ABKU99] the authors introduced the Greedy[d] process. Here, balls are allocated sequentially; each ball chooses d ∈ N bins uniformly at random and is allocated to a least loaded of them.
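To fix ideas, the following minimal Python sketch implements the Greedy[d] rule just described; the function name and the tie-breaking by smallest bin index are our own choices, not taken from [ABKU99]:

import random

def greedy_d(n, m, d=2, rng=None):
    """Allocate m balls into n bins: each ball samples d bins uniformly at
    random (with replacement) and joins a least loaded of them."""
    rng = rng or random.Random()
    load = [0] * n
    for _ in range(m):
        choices = [rng.randrange(n) for _ in range(d)]
        target = min(choices, key=lambda i: load[i])  # ties: smallest index
        load[target] += 1
    return load

For d = 1 this is the plain single-choice process, while d = 2 yields the Greedy[2] strategy studied throughout this paper.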
Greedy[1] is identical to the simple process described above. But if d ≥ 2, the maximum load is, w.h.p., reduced to log log(n)/log(d) + O(1) [ABKU99] for m = n. This exponential decrease in the maximum load is often referred to as the power of two choices. Greedy[2] was later analyzed for m ≥ n (the heavily loaded case), which led to the surprising result that, in contrast to Greedy[1], the difference between the maximum and average load is independent of m [BCSV06] (namely, w.h.p. it is log log(n)/log(d) + O(1)).
Balls-into-Bins with Deletions. We study Greedy[2] in the heavily loaded case where balls are not only inserted but also deleted. Specifically, we ask whether allocation via Greedy[2] can overcome the negative effect of random deletions (effectively "holes" allocated according to Greedy[1], which without insertions would cause a rift of depth √((m/n) · log n) for large m). One can differentiate balls-into-bins with deletions depending on whether, each time a ball is inserted, the ball gets d fresh choices (insertion/deletion model) or whether the ball might be a previously deleted ball that is reinserted with its original d choices (reinsertion/deletion model).
Recently, the authors of [BK22] showed that, somewhat surprisingly, even in the insertion/deletion model, an oblivious adversary can make sure that Greedy[2] does not overcome the negative effect of deletions: already for n = 4 bins, it can cause a maximum load of m/4 + Ω(√m) with probability Ω(1). (Here the adversary knows only the d choices and insertion time of each ball. Each time step it can either (a) insert a new ball with d fresh random bin choices or (b) delete the r-th most recently inserted ball, with r chosen freely. In particular, it is oblivious to the exact load situation, since it cannot see to which of the d choices the allocation strategy, like Greedy[d], assigned a given ball.) Intuitively, the lower bound uses the fact that if there is ever a bin i that contains far fewer (say k) balls than the other bins, Greedy[2] is biased toward i for the next k throws. After filling the system to m balls with at least k insertions, the adversary exploits the insertion bias to perform k deletions that are now biased towards bin i. Adding replacement balls for the just deleted balls, one can show that enough of these land on bins ≠ i to cause the desired overload.
Results in a Nutshell. We consider the insertion/deletion model for n ∈ N bins with random but dynamic deletions, starting from an initially empty system. Each time step t, with probability β(t) a new ball with two fresh choices is inserted by Greedy[2]. Otherwise, with probability 1 − β(t), a ball is deleted from a bin chosen uniformly at random among all non-empty bins. (All our results also hold for the protocol that deletes not from a random bin but a ball chosen uniformly at random among all balls; see Appendix A.) Our main result can be seen as complementing this unfortunate lower bound of [BK22] for Greedy[2] with a positive result: In a model with random deletions (but still dynamic, that is, resulting in a fluctuating average load), Greedy[2]'s allocation overcomes the negative deletion effect as long as insertions dominate deletions ever so slightly in the recent past of a given time step t. This highlights a fundamental difference between our model and the one of [BK22]. Our result suggests that in many practical applications, where deletions (if at all) occur non-adversarially, Greedy[2] easily overcomes random Greedy[1]-type deletions. This holds true even if the system utilization (ratio between insertions and deletions) fluctuates significantly. See Section 1.1 for a full and formal description of our results.
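For reference, a single step of the insertion/deletion process just defined can be simulated as follows; this is a sketch with our own naming, where beta_t stands for the insertion probability β(t) of the current step:

import random

def step(load, beta_t, rng):
    """One step of the process: with probability beta_t insert a new ball via
    Greedy[2]; otherwise delete a ball from a uniformly random non-empty bin."""
    n = len(load)
    if rng.random() < beta_t:
        i, j = rng.randrange(n), rng.randrange(n)
        load[i if load[i] <= load[j] else j] += 1  # lower loaded of two samples
    else:
        nonempty = [i for i in range(n) if load[i] > 0]
        if nonempty:  # in an empty system the deletion step is a no-op
            load[rng.choice(nonempty)] -= 1

The variant mentioned above that deletes a ball chosen uniformly at random among all balls would instead pick bin i for deletion with probability proportional to load[i].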
1.1 Our Contribution
The system state at any time can be modeled by a load vector x = (x_1, . . . , x_n) ∈ N_0^n, where x_i is the load of the i-th fullest bin. We use m(x) := ∥x∥_1 to denote the total load of x and define, for clarity, x_max := x_1 and x_min := x_n as the maximum and minimum load of x, respectively. We say x is perfectly balanced if x_min, x_max ∈ {⌊m(x)/n⌋, ⌈m(x)/n⌉}. The discrepancy disc(x) := x_max − m(x)/n of x is the difference between the current maximum and average load. In some cases, we also use the stronger absolute discrepancy adisc(x) := max{x_max − m(x)/n, m(x)/n − x_min} of x (the largest difference of any load to the current average load). With this, our process can be described as a random sequence (x(t))_{t∈N_0} of load vectors with total load m(t) := m(x(t)), discrepancy disc(t) := disc(x(t)), and absolute discrepancy adisc(t) := adisc(x(t)) at time t. We use m_max(t) := max{m(0), . . . , m(t)} to denote the maximum total load up to time t. The overload ovld(t) := x_max(t) − m_max(t)/n at time t is the difference between the current maximum load and the highest average load up to time t. The height of a ball is its position in the bin, modelled as a stack of balls counting from bottom to top. We assume a dynamic insertion probability β(t) that may change from step to step to any value in [0, 1]. In other words, insertions and deletions are governed by an infinite insertion probability sequence (β(t))_{t∈N} giving the (independent) probabilities of an insertion/deletion in each time step. We say a value x ∈ (0, 1) is bounded away from 0 (from 1) if x ≥ 0 + Ω(1) (if x ≤ 1 − Ω(1)).
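Assuming a caller that tracks the running maximum total load m_max(t), the three quantities above can be computed from a load vector as in this short sketch (the helper names are ours):

def disc(load):
    """Discrepancy: current maximum load minus current average load."""
    return max(load) - sum(load) / len(load)

def adisc(load):
    """Absolute discrepancy: largest deviation of any bin from the average."""
    avg = sum(load) / len(load)
    return max(max(load) - avg, avg - min(load))

def ovld(load, m_max):
    """Overload: current maximum load minus the highest average seen so far,
    where m_max = max(m(0), ..., m(t)) is maintained by the caller."""
    return max(load) - m_max / len(load)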
While balls-into-bins processes typically aim to bound the discrepancy, this cannot work in general when deletions are involved. For example, the insertion probability sequence could contain a stretch that deletes m(t)/2 balls after some time t. Translating a folklore example (also mentioned in [BK22]) to our setting, deletions can be regarded as random "holes" allocated via Greedy[1]. This would cause a discrepancy of order Θ̃(√(m(t)/n)), a bound trivially achieved by allocations via Greedy[1]; this holds even if the system starts out perfectly balanced. Thus, in this paper we aim to bound the overload. We characterize which (possibly local) properties an insertion probability sequence requires for Greedy[2] to achieve a small discrepancy. Also, in an infinite process it is quite impossible to always have a small discrepancy (or even overload). Thus, our main results (given below) all hold with high probability at an arbitrary (also super-exponentially large) point t in time:
(a) Assume all insertion probabilities up to time t are bounded away from 0. Then the discrepancy at time t is O(log(n)) (Theorem 2.1) and the number of balls above average at time t is O(n) (Theorem 2.2). The bound on the discrepancy is tight if the insertion probabilities are bounded away from 1/2 from above in the previous O(n log(n)) time steps (see Appendix E).
(b) Assume we are in a well (e.g., perfectly) balanced situation at time t. Then we can maintain an overload of log log(n) + O(1) for poly(n) steps as long as the insertion probabilities during that time are bounded away from 0 and 1. If, additionally, the insertion probabilities ensure an expected load increase for polynomially long subintervals (see the definition of c-good time intervals in Section 3), we can maintain a discrepancy (instead of only an overload) of log log(n) + O(1) (see Theorem 3.1 and Lemma 3.4 in Section 3 for the full, formal statements).
(c) Our final result shows that we achieve an overload of log log(n) + O(1) at an arbitrary time t ∈ N_0 as long as the insertion probability sequence up to that time is bounded away from 0 and 1 (Theorem 4.1). Similar to the previous result, if the insertion probabilities ensure an expected (possibly quite small) load increase for long subintervals during the previous n^4 time steps, we even get a discrepancy of log log(n) + O(1) at time t.
To the best of our knowledge, these are the first results for the discrete, heavily-loaded insertion/deletion setting that provide characterizations for when Greedy[2] achieves strong guarantees on the discrepancy instead of only on the overload.
Main Technical Contributions and Proof Idea. To prove that the discrepancy is O(log(n)) (see (a) above) we use a potential function argument similar to [PTW15, TW14]. To reduce the discrepancy to double-logarithmic (see (c) above) we use the excellent recovery properties of Greedy[2]: In [Czu00], Czumaj studied the recovery of Greedy[2] for a process that alternates between ball insertions and deletions, keeping the total load invariant. He showed that the process is able to recover, i.e., reach a steady state distribution, in polynomial time. However, this does not reveal any properties (e.g., the discrepancy) of the steady state. In contrast to [Czu00] we cannot have a unique steady state, as the total load varies over time. However, we can show via a path coupling argument that when we start two copies of our process in any two configurations of logarithmic discrepancy with the same number of balls (which we have at any time via (a)), both configurations converge to each other quickly (in o(n^4) steps). This can be interpreted as a "typical" state to which all configurations with the same number of balls converge. Again, as in [Czu00] this does not yet reveal any further information about such a typical state. This is where (b) comes into play, which uses a layered induction to show that, starting from a certain class of configurations (with discrepancy at most log log(n) + c), the process remains in such a well-balanced configuration for a long time, Ω(n^4) steps. That is, we maintain well-balancedness for longer than it takes to reach the typical state, implying that such typical states lie in this class of well-balanced configurations. Note that this deviates significantly from previous usages of layered induction in this area: For example, in [TW14] the authors used layered induction to reduce the discrepancy from O(log n) to O(log log n). Our layered induction is "merely" used to establish that a typical state (reached quickly due to the good recovery) has a low discrepancy. Apart from being flexible enough to deal with a rather general deletion model, our approach also allows for stronger probability guarantees (of order 1 − 1/poly(n) instead of 1 − 1/polylog(n) in [TW14]).
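For orientation, the exponential potential of [PTW15] referred to above for result (a) has the following shape; the precise constants and normalization used in Section 2 of this paper are not reproduced here, so this is only an illustrative sketch:

\[
\Gamma(t) \;=\; \sum_{i=1}^{n} e^{\alpha\,(x_i(t) - m(t)/n)} \;+\; \sum_{i=1}^{n} e^{\alpha\,(m(t)/n - x_i(t))},
\qquad \alpha > 0 \text{ a suitable constant.}
\]

If one can show E[Γ(t)] = O(n) for all t, then Markov's inequality yields that, w.h.p., every load deviates from the current average by at most O(log(n)/α), i.e., the absolute discrepancy is O(log n).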
The classical layered induction proof of [ABKU99] heavily relies on the base case, which by means of a simple pigeonhole argument states that there are at most n/c bins with at least c balls. This holds because in their setting the total load cannot exceed n. In our case, however, the total load is unbounded. We deal with this first problem by performing our layered induction only on the highest log log(n) + O(1) many load levels. This leads us to the second challenge: the changing total load m(t) also implies that the (absolute) height of the considered levels changes over time. To deal with this, we carefully formulate our layered induction relative to the maximum average and use a flexible random walk argument to prove strong bounds on the number of balls above a certain level over the maximum average. This yields a bound on the overload measure, from which we derive the discrepancy bound, under the assumption that there was no large, recent drop of the total load. Note that in the case where such a large drop occurred, our lower bound mentioned in (a) shows that we cannot hope for a good discrepancy. To the best of our knowledge, this is the first layered induction for the case m ≫ n as well as for a process with a changing load.

1.2 Related Work.

There is a vast amount of literature on balls-into-bins games in a multitude of settings. For an overview see [Wie17], and for results with one random choice per ball see [BCN+15, BFK+18, LS23]. After a very brief overview of results with multiple choices we focus on processes with deletions.

Multiple Choice. The Greedy[d] process was introduced and analyzed in [ABKU99], although a similar result in a different setting had been shown in [KLMadH92]. In [BCSV06] the authors generalize the results to the heavily loaded case m ≫ n. [TW14] presents a simpler analysis for the heavily loaded case, albeit showing a slightly worse upper bound on the maximum load. The author of [Vöc03] considers the always-go-left process Left[d], where bins are divided into d clusters of n/d bins each. Every ball randomly chooses one bin per cluster and is allocated to a least loaded among the chosen bins; ties are broken in favor of the leftmost cluster. This results, w.h.p., in a maximum load of log log(n)/(d · log(ϕ_d)) + O(1), where ϕ_d ∈ (1, 2) is the generalized golden ratio, a surprising improvement over [ABKU99]. For more recent results see [LSS22, LSS23, LSS24]. The potential function analysis for Part (a) is inspired by methods introduced in [PTW15], where the authors consider a very interesting variant of Greedy[d], which they call the (1 + β)-process. Here each ball is allocated using Greedy[1] with probability 1 − β, and using Greedy[2] with the remaining probability β. The authors showed that the difference between maximum and average load is Θ(log(n)/β) for each β < 1. Hence, as long as the number of allocations with Greedy[2] is sufficiently large, the load is rebalanced and the "damage" caused by Greedy[1] is undone. In this paper we use the same potential function and generalize the analysis to deletions (see Section 2).

Processes with Deletions, m ≤ n. In [ABKU99] the authors analyze the Greedy[d] allocation strategy in an insertion/deletion model where, after n initial insertions, random deletions and insertions happen alternately. They show that for a polynomial number of steps the maximum load is, w.h.p., log log(n)/log(d) + O(1).
The authors of [CFM+98] provide an alternative proof for the same bound that can handle arbitrary insertion/deletion sequences, as long as the total system load is at most n. Under the same restriction they provide bounds for the reinsertion/deletion model with adversarial deletions, showing that the maximum load is, w.h.p., O(log log(n)).

Processes with Deletions, m > n. The infinite insertion/deletion model with adversarial deletions for the moderately loaded case is considered by [Vöc03] for the Left[d] process, where the maximum system load is always bounded by h · n. He proves that the maximum (bin) load is, w.h.p., ln ln(n)/(d · ln(ϕ_d)) + O(h) (again, ϕ_d ∈ (1, 2) is the generalized golden ratio). Note that for h = Ω(log(n)), this is not better than the bound achieved by the trivial Greedy[1] allocation. The authors of [BCF+23] consider the moderately loaded case but for the reinsertion/deletion model. They introduce the Iceberg[d] process, combining Greedy[1] and Left[d]. Slightly simplified, the Iceberg[d] process allocates a ball via Greedy[1] if the target bin contains at most h + Õ(√h) balls that were allocated by Greedy[1]. Otherwise, either Left[d] is used (if there are currently fewer than n/d balls allocated by Left[d]) or the ball is allocated to bin 1. This process achieves a maximum load of h + log log(n)/(d · log(ϕ_d)) + Õ(√h). Note that this process must know h and keep track of the number of balls placed with specific strategies, making it difficult to use in distributed settings. In contrast, Greedy[d] places balls solely based on the selected bins' loads.

The focus of [Czu00] is the analysis of the recovery time (the time it takes a process to move from an arbitrary to a "typical" state) in the heavily loaded case (m arbitrary). The author considers a setting starting with m arbitrarily placed balls, followed by alternating deletions (of either a random ball or a ball from a random bin) and insertions. The recovery time under Greedy[d] allocation is proven to be polynomial in m and n. While parts of our analysis (Section 4) are inspired by this result, note that in our case the system load m fluctuates over time and that we do not only show fast recovery but actually characterize what the "typical" state looks like.

As mentioned at the beginning of the introduction, [BK22] give lower bounds for Greedy[2] in the heavily loaded, adversarial insertion/deletion model. They also provide a more general lower bound for any oblivious allocation strategy (like Greedy[2] or Left[2]) in the reinsertion/deletion model (with the system load capped at m): an oblivious adversary can force a maximum load of m/4 + m^{Ω(1)} for n = 4 bins within the first poly(m) steps. On the positive side, they introduce the allocation strategy ModulatedGreedy for the insertion/deletion model, which achieves an overload of O(log(m)). To overcome their lower bound for Greedy[2], ModulatedGreedy decides probabilistically between two bin candidates, based on their current load. The algorithm must know global guarantees on the load situation (in particular, the maximum number of balls that will ever be present and the current maximum and minimum load).

Queuing Systems. A prominent example of a queuing system is the supermarket model considered in, e.g., [VDK96, BCFV00, Mit01, LN05, LM05]. It can be seen as a variant of the balls-into-bins process in continuous time. Here, balls typically arrive in a Poisson stream of rate λn and are allocated according to Greedy[d].
Bins process (i.e., delete) balls in first-in first-out order, with a processing time that is exponentially distributed with mean 1. These systems and their results can often be readily mapped to the balls-into-bins setting, yielding results similar to those discussed above. However, the focus in queuing systems typically lies on properties like the balls' service times (the average time a ball spends in the system), how long it takes to reach an equilibrium state, or how the system behaves in the equilibrium. Note that the (expected) unit processing time together with λ < 1 means that we are dealing here with the lightly/moderately loaded case (m ≤ poly(n)) in the equilibrium. Later results (e.g., [BLP10, MBvLW16, BFL18]) considered settings where d and/or λ may depend on n or where the balls' service times adhere to more general distributions.

Further Related Work in Load Balancing. There are also results such as [ABS98, BFK+18] that consider parallel load balancing settings, where balls are not sequentially allocated but thrown in batches of size Θ(n) and where, similar to the supermarket model, bins have a processing time (in these cases simply 1). One is interested in which batch sizes and allocation strategies yield small waiting times. [ABS98] shows for Greedy[2] that, if the batches are not too large, the maximum waiting time is, w.h.p., at most ln ln(n)/ln(d) + O(1). For a batch size of λn, [BFK+18] prove a maximum waiting time of O((1/(1 − λ)) · log(n/(1 − λ))) for Greedy[1] and O(log(n/(1 − λ))) for Greedy[2]. Another line of work considers a mixture of load balancing and averaging processes on general graphs (in contrast to the balls-into-bins setting, which can be thought of as load balancing on a clique of n nodes). Two recent results are [ANS22, BHH+23]. The major difficulty here typically stems from dealing with the given graph's structure, which changes the considered processes and analysis techniques considerably compared to our setting.

2 Max Load & Above-average Ball Count via Potential Functions.

In this section we extend the analysis of [PTW15] to allow for ball deletions (with variable deletion probabilities). The following Theorem 2.1 uses the potential function Γα(x(t)) from [PTW15]. We show that, at an arbitrary point of time, the expected potential is linear in n. The potential is defined as Γα(x(t)) = Φα(x(t)) + Ψα(x(t)), where

Φα(x(t)) = Σ_{i=1}^n e^{α·(x_i(t) − m(t)/n)}  and  Ψα(x(t)) = Σ_{i=1}^n e^{−α·(x_i(t) − m(t)/n)}.   (1)

The potential Φα(x(t)) quantifies the impact of bins with loads above the average, while Ψα(x(t)) does the same for the bins below the average. Although bounds on the maximum load are derived only from Φα(x(t)), both parts are needed to ensure a potential drop in every load situation where the potential is large. Theorem 2.1, whose proof can be found in Appendix B, demonstrates that the potential function is robust enough to tolerate deletions if the β(t) are lower-bounded by a constant β > 0. It states that the absolute discrepancy at any time t is, w.h.p., at most O(log n) higher than it was at the start. In particular, if the process starts with an absolute discrepancy of O(log n), it will almost certainly be O(log n) at time t.

Theorem 2.1. Consider an insertion probability sequence (β(t))_{t∈N} bounded away from 0 by a constant β > 0. If α ≤ β/16, then for any t ∈ N_0 we have E[Γα(x(t))] = O(n) · e^{α·adisc(0)}. Moreover, for any a ≥ 0 we have Pr[ adisc(t) − adisc(0) ≥ ((a + 2)/α) · ln n ] ≤ n^{−a}.

In particular, if we start in a perfectly balanced situation with discrepancy 0 (e.g., an empty system), the expected potential at an arbitrary time t is only linear.
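As a concrete illustration (a sketch of ours, not from the paper), the following snippet evaluates the potentials of Equation (1) for a given load vector; since Φα ≥ e^{α·disc(x)} and Ψα ≥ e^{α·(m/n − xmin)}, a Γα that is linear in n corresponds to an absolute discrepancy of O(log(n)/α).

import math

def potentials(x, alpha):
    # Phi/Psi/Gamma from Equation (1) for a load vector x.
    avg = sum(x) / len(x)
    phi = sum(math.exp(alpha * (xi - avg)) for xi in x)
    psi = sum(math.exp(-alpha * (xi - avg)) for xi in x)
    return phi, psi, phi + psi

x = [14, 11, 10, 10, 9, 9, 9, 8]          # example load vector, sorted non-increasingly
phi, psi, gamma = potentials(x, alpha=1 / 16)
print(phi, psi, gamma)
print(math.log(gamma) / (1 / 16))         # ln(Gamma)/alpha upper-bounds adisc(x)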
Equipped with the above, we can prove a strong upper bound on the number of balls that lie above the average at a given time t:

Theorem 2.2. Fix an arbitrary step t. Let β ∈ (0, 1] be a constant. Assume β(τ) ≥ β for all 1 ≤ τ ≤ t and α ≤ β/16. For any fixed non-negative ℓ and any fixed µ ∈ (0, 1), there is a γ = γ(µ) so that, with probability at least 1 − n^{−(3+ℓ)}, the number of balls above height ⌈m(t)/n⌉ + γ is at most µn.

The proof is based on Lemma C.1 in Appendix C, an adaptation of a result in [LS22, LS21], which gives us a high-probability guarantee for the potential Γα(x(t)) being linear. In fact, the purpose of adapting the results of [LS22, LS21] is two-fold: (i) to incorporate our deletions and their respective probability bounds, and (ii) to improve the high-probability guarantee beyond the original 1 − 1/n^3 (which was sufficient for the purposes of [LS22, LS21]). Note that the probability stated in Theorem 2.2 is inherited from Lemma C.1; apart from that, the theorem's actual proof is an entirely deterministic counting argument. The proof details may be found in Appendix C.

3 Maintaining a Well-balanced State.

Before we state this section's main result (Theorem 3.1), we introduce the notion of c-good time intervals I for a given insertion probability sequence. Intuitively, such an interval guarantees that the total system load increases in expectation over any sufficiently long subinterval of I.

Definition 3.1 (c-good interval). Given a sequence β(t) (t ∈ N) of insertion probabilities, we call a discrete time interval I c-good if for each subinterval (t_1, t_2] ⊆ I with t_2 − t_1 ≥ c · n we have
(Σ_{t=t_1+1}^{t_2} β(t)) / (t_2 − t_1) ≥ (1/2) · (1 + ϵ)
for a constant ϵ > 0.
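The following naive checker (our sketch; quadratic in the interval length and meant only for small examples) makes Definition 3.1 concrete for an explicitly given finite probability sequence.

def is_c_good(betas, c, n, eps):
    # Check every subinterval (t1, t2] of length >= c*n (Definition 3.1).
    T = len(betas)
    prefix = [0.0]
    for b in betas:
        prefix.append(prefix[-1] + b)
    for t1 in range(T):
        for t2 in range(t1 + c * n, T + 1):
            if (prefix[t2] - prefix[t1]) / (t2 - t1) < 0.5 * (1 + eps):
                return False
    return True

print(is_c_good([0.6] * 500, c=2, n=10, eps=0.1))   # True: 0.6 >= 0.55
print(is_c_good([0.5] * 500, c=2, n=10, eps=0.1))   # False: average never exceeds 0.55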
Theorem 3.1 states that, w.h.p., our process, when started in a perfectly balanced configuration, remains well-balanced for at least n^4 steps. Here, well-balanced (this is formalized later in Definition 3.2 and Lemma 3.4) is measured relative to the maximum average observed so far for general insertion probability sequences, and relative to the current average if the considered time window is O(1)-good.

Theorem 3.1. Consider an insertion probability sequence (β(t))_{t∈N} bounded away from 0 and 1. Let x(0) be a perfectly balanced configuration. Then, w.h.p., xmax(t) ≤ mmax(t)/n + log log n + O(1) for any t ≤ n^4. Moreover, if there is a constant c ∈ N such that the time interval [0, n^4] is c-good, then, w.h.p., xmax(t) ≤ m(t)/n + log log n + O(1) for any t ≤ n^4. (Footnote 3: In fact, our proof shows that starting from any well-balanced configuration, the situation remains well-balanced for polynomially many steps.)

To prove Theorem 3.1, we conduct a layered induction. A major difficulty stems from the fact that we have to deal with a permanently changing average load. For this, an important ingredient is Theorem 2.2, which gives us a high probability bound on the number of balls above the current average. This enables us to derive a base case for our layered induction, independent of the changing average load. The layered induction works roughly as follows: For each level ℓ ∈ { 0, 1, . . . , log log(n) + Θ(1) } we consider the number of balls at height greater than average + ℓ. For each level we define a critical threshold α_ℓ that decreases doubly exponentially with ℓ. Moreover, each level has a safe threshold α_ℓ/2. We show that, as long as all levels remain below their critical threshold, any level that is above its safe threshold has a strong drift back towards the safe threshold. A standard result on random walks then shows that it is unlikely that, during the poly(n) many considered time steps, any of the log log(n) + Θ(1) many levels (which all start below their respective safe threshold) crosses the critical interval [α_ℓ/2, α_ℓ).

Formalizing Well-balanced Configurations. In the following, let β̂ ∈ (0, 1) be a constant upper bound on the insertion probability of our process, such that β(t) ≤ β̂ for all considered time steps t. For a height h ∈ N_0 and a configuration x, define m_h(x) as the number of balls that have height at least h in configuration x. For our process starting in an initial configuration x(0), we use the shorthand m_h(t) := m_h(x(t)). In our analysis, it will be helpful to measure the number of balls at or above some level ℓ which, in turn, is above a given base height h (which might vary over time). To this end, we introduce the notation m^(h)_ℓ(x) := m_{h+ℓ}(x) and m^(h)_ℓ(t) := m_{h+ℓ}(t). We consider ℓ* + 2 = log log(n) + Θ(1) many levels ℓ ∈ { 0, 1, . . . , ℓ* + 1 } with their critical thresholds α_ℓ defined as follows:

α_ℓ :=  ((1 − β̂)/(128 β̂)) · n              if ℓ = 0,
        (32 β̂/(1 − β̂)) · α_{ℓ−1}^2 / n      if α_{ℓ−1} > √( (3(1 − β̂)/(2 β̂)) · n log(n) ),
        12 log(n)                             if ℓ = ℓ* := min{ ℓ | α_{ℓ−1} ≤ √( (3(1 − β̂)/(2 β̂)) · n log(n) ) },
        24                                    if ℓ = ℓ* + 1.   (2)

Note that the recursive definition ensures ℓ* = log log(n) + Θ(1). Moreover, it implies the following relations between the critical thresholds α_ℓ and α_{ℓ−1}:

Observation 3.1. For any ℓ ∈ { 1, 2, . . . , ℓ* + 1 } and large enough n we have
(8 β̂/(1 − β̂)) · α_{ℓ−1}^2 / n ≤ α_ℓ ≤ (1/4) · α_{ℓ−1}.   (3)
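The recursion of Equation (2) is easy to evaluate numerically. The sketch below (ours; the choices n = 10^9 and β̂ = 0.5 are illustrative, and we use natural logarithms, which only affects constants) prints the thresholds and the resulting ℓ*.

import math

def thresholds(n, beta_hat):
    # With r = (1 - beta_hat)/beta_hat, Equation (2) reads alpha_0 = r*n/128 and
    # alpha_l = (32/r) * alpha_{l-1}^2 / n while alpha_{l-1} stays above the cutoff.
    r = (1 - beta_hat) / beta_hat
    cutoff = math.sqrt(1.5 * r * n * math.log(n))
    alphas = [r / 128 * n]                  # alpha_0
    while alphas[-1] > cutoff:
        alphas.append(32 / r * alphas[-1] ** 2 / n)
    alphas.append(12 * math.log(n))         # level ell*
    alphas.append(24)                       # level ell* + 1
    return alphas

a = thresholds(n=10**9, beta_hat=0.5)
print("ell* =", len(a) - 2)
print([round(v, 1) for v in a])             # the doubly exponential decay is visible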
Using the critical threshold α_ℓ, we define the valid interval of level ℓ ∈ { 0, 1, . . . , ℓ* + 1 } as V_ℓ := [0, α_ℓ). Similarly, we define the level's safe interval as S_ℓ := [0, α_ℓ/2) and its critical interval as C_ℓ := [α_ℓ/2, α_ℓ).

Definition 3.2. For a base height h ∈ N_0 and level ℓ ∈ { 0, 1, . . . , ℓ* + 1 } we say level ℓ of configuration x is h-valid if m^(h)_ℓ(x) ∈ V_ℓ, h-safe if m^(h)_ℓ(x) ∈ S_ℓ, and h-critical if m^(h)_ℓ(x) ∈ C_ℓ. Configuration x is h-valid/-safe/-critical if all levels 0 to ℓ* + 1 are h-valid/-safe/-critical. In proofs, we sometimes omit the base height if it is clear from the context.

Analysis. We start with a simple lemma that provides the probabilities for the increase and decrease of the number of balls at or above a given height h.

Lemma 3.1. For any configuration x and any height h ∈ N we have
(a) Pr[ m_h(t+1) > m_h(t) | x(t) = x ] = β(t) · (m_{h−1}(t) − m_h(t))^2 / n^2 and
(b) Pr[ m_h(t+1) < m_h(t) | x(t) = x ] ≥ (1 − β(t)) · (m_h(t) − m_{h+1}(t)) / n.

Proof. (a) For m_h(t) to increase during the next step, a ball must be inserted by Greedy[2] (probability β(t)) and both choices of Greedy[2] must fall into a bin that has load at least h − 1. The number of such bins is exactly m_{h−1}(t) − m_h(t), yielding the probability (m_{h−1}(t) − m_h(t))^2 / n^2 for the two choices. (b) For m_h(t) to decrease during the next step, a ball must be deleted (probability 1 − β(t)) and the chosen bin must have load at least h. The number of such bins is exactly m_h(t) − m_{h+1}(t), and the total number of non-empty bins is at most n, yielding a probability of at least (1 − β(t)) · (m_h(t) − m_{h+1}(t)) / n.

Lemma 3.2. Consider an h-valid configuration x and let x′ denote the (random) configuration after one step of our process with insertion probability β(t) ≤ β̂ < 1.
(a) If level ℓ ∈ { 1, 2, . . . , ℓ* } of configuration x is h-critical, then
Pr[ m^(h)_ℓ(x′) < m^(h)_ℓ(x) ] / Pr[ m^(h)_ℓ(x′) > m^(h)_ℓ(x) ] ≥ 2.   (4)
(b) If level ℓ = ℓ* + 1 of configuration x is h-critical and if n is large enough, then
Pr[ m^(h)_ℓ(x′) < m^(h)_ℓ(x) ] / Pr[ m^(h)_ℓ(x′) > m^(h)_ℓ(x) ] ≥ √n.   (5)

The proof of Lemma 3.2 may be found in Appendix D. Next we formulate an auxiliary lemma about the probability that a random walk on the integers reaches position b > 0 before position −a < 0 when started in 0, assuming the random walk is biased towards the left by a factor r > 0. We will apply this lemma to the values m^(h)_ℓ(·) (basically corresponding to the random walks) to show that these values are unlikely to cross the critical interval C_ℓ (on which, by Lemma 3.2, m^(h)_ℓ(·) is biased by a factor of at least 2 towards the safe left side).

Lemma 3.3. Consider a (not necessarily i.i.d.) random walk (S_t)_{t∈N_0} on Z with initial position S_0 = 0 and position S_t = Σ_{i=1}^t X_i after t steps, with step width X_i ∈ { −1, 0, +1 } at time i. Let F = (F_t)_{t∈N_0} denote the random walk's natural filtration. Assume there is r ∈ R_{>0} \ { 1 } such that for all t ∈ N, Pr[ X_t = −1 | F_{t−1} ] / Pr[ X_t = +1 | F_{t−1} ] = r. For x ∈ Z define the stopping time τ_x := inf{ t ∈ N_0 | S_t = x }, indicating when the random walk reaches x for the first time. Then, for any a, b > 0,
Pr[ τ_{+b} < τ_{−a} ] = (r^a − 1)/(r^{a+b} − 1).

Proof. Consider the martingale defined by M_0 := 1 and M_i := r^{X_i} · M_{i−1}, i.e., M_i = r^{S_i}. Define the stopping time τ := min{ τ_{−a}, τ_{+b} }. By the optional stopping theorem, we have E[M_τ] = E[M_0] = 1. This yields
1 = E[M_0] = E[M_τ] = Pr[ τ_{+b} < τ_{−a} ] · r^b + (1 − Pr[ τ_{+b} < τ_{−a} ]) · r^{−a}.
Rearranging for Pr[ τ_{+b} < τ_{−a} ] gives the desired result.
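A quick Monte Carlo sanity check of Lemma 3.3 (our sketch, using an i.i.d. walk with laziness 0.4 and bias r = 2; the lemma itself also covers non-i.i.d. steps):

import random

def prob_hit_b_first(a, b, r, p_plus=0.2, trials=100_000):
    # Walk with Pr[+1] = p_plus, Pr[-1] = r * p_plus, lazy otherwise; returns
    # the empirical probability of reaching +b before -a when started at 0.
    hits = 0
    for _ in range(trials):
        s = 0
        while -a < s < b:
            u = random.random()
            if u < p_plus:
                s += 1
            elif u < p_plus * (1 + r):
                s -= 1
        hits += (s == b)
    return hits / trials

a, b, r = 3, 4, 2.0
print(prob_hit_b_first(a, b, r))          # empirical, roughly 0.055
print((r**a - 1) / (r**(a + b) - 1))      # exact: 7/127 = 0.0551...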
The next lemma is our main tool in proving Theorem 3.1. It states that if started in a safe configuration x, our process maintains safety (with respect to the base height h(t) := ⌈mmax(t)/n⌉ + O(1)) for a polynomial time. The basic idea is to consider any time t′ in which some previously safe level ℓ just became critical. Using Lemma 3.2, we then couple the random variable m_ℓ(·) with a random walk that is always at least as large as m_ℓ(·) and is biased by a factor of exactly 2 (or √n, in the case of level ℓ = ℓ* + 1) towards the safe left side of the critical interval C_ℓ. Applying Lemma 3.3 to all possible such random walks together with a union bound then implies that, w.h.p., no level crosses the critical interval and becomes invalid for poly(n) many steps.

Lemma 3.4. Let h(t) := ⌈mmax(t)/n⌉ + γ denote the base height at time t. Consider an initial configuration x(0) = x that is h(0)-safe. Fix the time T = n^4 and assume n to be large enough. If β(t) ≤ β̂ < 1 for all t ∈ { 0, 1, . . . , T − 1 }, then
Pr[ ∀t ∈ { 0, 1, . . . , T − 1 } : x(t) is h(t)-valid ] ≥ 1 − n^{−1}.   (6)

The proof of the lemma may be found in Appendix D. It remains to prove Theorem 3.1, which follows by applying Lemma 3.4 and observing that any valid configuration has the desired maximum load.

Proof of Theorem 3.1. Define the base height h(t) := ⌈mmax(t)/n⌉ + γ at time t as in Lemma 3.4. Consider an initial configuration x = x(0) that is perfectly balanced (i.e., a configuration with maximum load ⌈mmax(0)/n⌉). Note that x is trivially h(0)-safe, since (choosing γ ≥ 1) there are no balls of height ≥ h(0). Then, by Lemma 3.4, all configurations x(t) with t ∈ [0, n^4) are (w.h.p.) h(t)-valid. Thus, it is sufficient to show that the maximum load in any h-valid configuration is h + log log n + O(1). This follows immediately from the definition of the critical thresholds: Indeed, if x is h-valid, so is level ℓ* + 1. Thus, there are at most α_{ℓ*+1} = O(1) many balls on or above level ℓ* + 1. In particular, the maximum load in x is xmax ≤ h + ℓ* + 1 + α_{ℓ*+1} = h + log log n + O(1). This proves the first part of the theorem.

For the second part, assume that the insertion probability sequence in the interval [0, n^4] is c-good for a constant c. That is, for any discrete subinterval (t_1, t_2] ⊆ [0, n^4] of length t_2 − t_1 ≥ c · n we have (Σ_{t=t_1+1}^{t_2} β(t))/(t_2 − t_1) ≥ (1/2) · (1 + ϵ) for a constant ϵ > 0. Consider the random variable Y = Σ_{t=t_1+1}^{t_2} Y_t, where Y_t = 1 with probability β(t) and Y_t = 0 otherwise. Note that Y counts the number of balls added in the time interval (t_1, t_2]. By c-goodness we have E[Y] ≥ (1 + ϵ) · (t_2 − t_1)/2. A simple application of a Chernoff bound yields
Pr[ Y < (1 + ϵ/2) · (t_2 − t_1)/2 ] ≤ e^{−Θ(1)·(t_2−t_1)} = n^{−ω(1)}.   (7)
Note that if Y balls are added during the interval (t_1, t_2], at most t_2 − t_1 − Y balls are deleted. Thus, using Equation (7), the total change of the system load during the interval (t_1, t_2] is, with very high probability,
m(t_2) − m(t_1) ≥ Y − (t_2 − t_1 − Y) = 2Y − (t_2 − t_1) ≥ ϵ · (t_2 − t_1)/2 > 0.   (8)
Thus, we can take a union bound over the poly(n) many subintervals (t_1, t_2] ⊆ [0, n^4] of length at least c · n to get that, w.h.p., the average load at the end of each such subinterval is at least the average load at its beginning. All other subintervals have length < c · n, so the average load can decrease by at most c during any of them. Together, we get that, w.h.p., for any t ∈ [0, n^4] we have m(t)/n ≥ mmax(t)/n − c. Thus, by using the theorem's first statement we can, w.h.p., bound the maximum load at any time t ∈ [0, n^4] by
xmax(t) ≤ ⌈mmax(t)/n⌉ + log log n + O(1) ≤ ⌈m(t)/n⌉ + c + log log n + O(1) = ⌈m(t)/n⌉ + log log n + O(1).
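The estimate behind Equations (7) and (8) can be checked empirically; the sketch below (ours) samples the net load change 2Y − (t_2 − t_1) over a subinterval whose insertion probabilities average (1 + ϵ)/2 with ϵ = 0.1.

import random

def net_change(betas):
    # Y counts insertions; every other step is a deletion, so the net load
    # change over the subinterval is 2Y - (t2 - t1).
    Y = sum(random.random() < b for b in betas)
    return 2 * Y - len(betas)

betas = [0.55] * 10_000                 # average insertion probability (1 + 0.1)/2
samples = [net_change(betas) for _ in range(1_000)]
print(min(samples), min(samples) >= 500)  # Eq. (8) bound: eps*(t2 - t1)/2 = 500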
4 Quick Recovery & Steady State Characterization.

With the tools from Sections 2 and 3 at hand, we are now ready to prove our main result (Theorem 4.1). It generalizes the guarantees given by Theorem 3.1 (there only for the first poly(n) many steps) to an arbitrary time t ∈ N_0.

Theorem 4.1. Consider an insertion probability sequence (β(t))_{t∈N} bounded away from 0 and 1. Then, w.h.p., xmax(t) ≤ mmax(t)/n + log log n + O(1) for any t ∈ N_0. Moreover, if there is a constant c ∈ N such that the time interval [t − n^4, t] is c-good, then, w.h.p., xmax(t) ≤ m(t)/n + log log n + O(1).

We conjecture that the requirement of a constant ϵ > 0 in Definition 3.1 is an artifact of our analysis technique and can be relaxed to o(1) or even eliminated. The same holds for our assumption that the insertion probability must be bounded away from 1 (note that for β = 1 the result follows from [BCSV06]). We note that the first of the two results in the theorem also holds when first m ≥ n balls are inserted and the process then alternates between insertions and deletions.

Analysis Overview. The proof idea of Theorem 4.1 is as follows: For any time t ∈ N_0, Theorem 2.1 implies (w.h.p.) a logarithmic absolute discrepancy at time t′ := t − n^4. We then use a path coupling argument similar to [Czu00] to show that our process causes any load situation with logarithmic absolute discrepancy to recover quickly to a "typical" state. More precisely, any two (different) such load situations at time t′ can be coupled such that (w.h.p.) they become identical at time t. At this point, however, we do not yet know what such a "typical" state looks like (and whether it has a small maximum load). This is where Theorem 3.1 comes into play. It tells us that if we start in a perfectly balanced load situation at time t′, our process maintains (w.h.p.) a double-logarithmic maximum load for at least n^4 steps. A simple union bound over these two high probability bounds then implies that the "typical" load situation at time t must also have a double-logarithmic maximum load.

Analysis. We now formalize the above idea. To this end, we first introduce a measure for the similarity of two given load situations.

Definition 4.1. Consider two load vectors x, y over n ∈ N bins with identical total loads ∥x∥_1 = ∥y∥_1. The transformation distance between x and y is ∆(x, y) := ∥x − y∥_1 / 2.

Note that the transformation distance obeys the identity ∆(x, y) = Σ_{i=1}^n max{ 0, x_i − y_i } and corresponds to the minimal number of ball movements needed to transform x into y (and vice versa). Moreover, any load vector x can be transformed into the perfectly balanced load vector by moving the at most n · disc(x) ≤ n · adisc(x) balls above average. As an immediate consequence, we get the following observation.

Observation 4.1. For two load vectors x, y over n ∈ N bins with identical total loads ∥x∥_1 = ∥y∥_1, we have ∆(x, y) ≤ 2n · max{ adisc(x), adisc(y) }.
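A two-line sketch (ours) of Definition 4.1 for integer load vectors; the output matches the minimal number of ball movements.

def transformation_distance(x, y):
    # Delta(x, y) = ||x - y||_1 / 2; requires identical total loads.
    assert sum(x) == sum(y)
    return sum(abs(a - b) for a, b in zip(x, y)) // 2

x, y = [5, 3, 2, 2], [4, 4, 2, 2]
print(transformation_distance(x, y))   # 1: moving one ball transforms x into y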
The following lemma leverages a result from [Czu00] to show that one step of our process tends to reduce the transformation distance between two given load vectors. In [Czu00], the author considers essentially the same setting as ours, but with guaranteed alternating insertions/deletions. Lemma 4.1 basically shows that their proof transfers directly to our more general setting with insertion probability sequences.

Lemma 4.1. Consider two load vectors x, y over n ∈ N bins with identical total loads ∥x∥_1 = ∥y∥_1. Assume ∆(x, y) = 1 and let x′ and y′ denote the load vectors after applying one step of our process with insertion probability β ∈ [0, 1] to x and y, respectively. There is a coupling of x′ and y′ such that (a) E[∆(x′, y′)] ≤ ∆(x, y) = 1 and (b) Pr[ ∆(x′, y′) ≠ ∆(x, y) ] ≥ (1 − β)/n.

Proof. For the random decision whether to insert or delete a ball in x and y, we use the identity coupling that either inserts a ball in both processes (with probability β(t)) or deletes a ball in both processes (with probability 1 − β(t)). It remains to couple the random bin choice in a deletion step and the two random bin choices of Greedy[2] in an insertion step. Assume the considered step is a deletion step. Then we use the first coupling described in [Czu00, Section 6] (which considers alternating insertions/deletions instead of our more flexible insertion probability sequences). In this case, [Czu00, Claims 6.1 and 6.2] together yield E[∆(x′, y′) | deletion step] ≤ ∆(x, y) = 1 and Pr[ ∆(x′, y′) ≠ ∆(x, y) | deletion step ] ≥ 1/n. Note that this already yields the lemma's second statement, since a deletion step occurs with probability 1 − β. Similarly, for an insertion step, we use the second coupling described in [Czu00, Section 6], where the exact same insertion rule is used (again, in a setting with alternating insertions/deletions). In this case, the proof of [Czu00, Lemma 5.1] implies E[∆(x′, y′) | insertion step] ≤ ∆(x, y) = 1. Together, we get the desired result.

The next lemma characterizes the expected and high-probability crossing time of a simple, α-lazy reflecting random walk on { 0, 1, . . . , D } (i.e., a random walk that moves left and right with the same probability, and that stays put with probability α or if it tries to move out of bounds). We use the lemma for our coupling result in Lemma 4.3.

Lemma 4.2. Let D ∈ N_0 and α ∈ [0, 1). Consider a simple, α-lazy reflecting random walk (W_t)_{t∈N_0} on { 0, 1, . . . , D } started in W_0 = D. Let T := min{ t ∈ N | W_t = 0 }. Then E[T] = D · (D + 1)/(1 − α). Moreover, for any n ∈ N and a > 0 we have Pr[ T ≥ a · log(n) · 2E[T] ] ≤ n^{−a}.

Proof. Let E_i denote the expected time for the random walk to reach 0 if started at position i ∈ { 0, 1, . . . , D }. Clearly, E_0 = 0 and E_D = E[T]. Note that we have the recurrence relation E_i = 1 + E_i · α + (E_{i−1} + E_{i+1}) · (1 − α)/2 for i ∈ { 1, 2, . . . , D − 1 } and E_D = 1 + E_{D−1} · (1 − α)/2 + E_D · (1 + α)/2. From this one can first deduce E_i = 2(D − i + 1)/(1 − α) + E_{i−1} for i ∈ { 1, 2, . . . , D } and, then, E[T] = E_D = D · (D + 1)/(1 − α). For the high probability bound, we observe that Markov's inequality implies Pr[ T ≥ 2E[T] ] ≤ 1/2. Thus, after at most a · log(n) repetitions of such phases of length 2E[T], the random walk has reached 0 except with probability at most (1/2)^{a·log(n)} = n^{−a}.

The following lemma shows that it is possible to couple two instances of our process that start with a logarithmic absolute discrepancy and the same total load such that they become identical after polynomially many steps.

Lemma 4.3. Consider two instances (x(t))_{t∈N_0} and (y(t))_{t∈N_0} of our random process with ∥x(0)∥_1 = ∥y(0)∥_1 and adisc(x(0)), adisc(y(0)) ≤ (3/α) · log n. Assume both processes use the same insertion probability sequence (β(t))_{t∈N_0}, bounded away from 0 and 1. Then there is a coupling of (x(t))_{t∈N_0} and (y(t))_{t∈N_0} whose coupling time τ := min{ t ∈ N_0 | ∆(x(t), y(t)) = 0 } satisfies, w.h.p., τ = O(n^3 · (log n)^3).

Proof. Consider any time t ∈ N_0 and define ∆(t) := ∆(x(t), y(t)). There is a sequence x(t) = x^(0)(t), x^(1)(t), . . . , x^(∆(t))(t) = y(t) of load vectors such that for all indices i ∈ { 1, 2, . . . , ∆(t) } we have ∆^(i)(t) := ∆(x^(i−1)(t), x^(i)(t)) = 1. By Lemma 4.1(a), there is a coupling of each pair (x^(i−1)(t+1), x^(i)(t+1)) such that E[∆^(i)(t+1)] ≤ 1. This implies a coupling of x(t+1) and y(t+1) with
E[∆(t+1)] ≤ E[ Σ_{i=1}^{∆(t)} ∆^(i)(t+1) ] = Σ_{i=1}^{∆(t)} E[∆^(i)(t+1)] ≤ ∆(t).   (9)
Similarly, Lemma 4.1(b) implies
Pr[ ∆(t+1) ≠ ∆(t) | ∆(t) > 0 ] ≥ (1 − β(t))/n.   (10)
Consider the random process (∆(t))_{t∈N_0}. Let D := 2n · (10/α) · log n = O(n · log n) and define the stopping time
T_1 := min{ t ∈ N_0 | ∆(t) > D }.   (11)
Let β̄ := sup{ β(t) | t ∈ N_0 } < 1 and set the laziness parameter ᾱ := 1 − (1 − β̄)/n. Because of Equations (9) and (10), during the time steps t ∈ { 0, 1, . . . , T_1 − 1 } the distance ∆(t) is majorized by the position W(t) of a simple, ᾱ-lazy reflecting random walk (W(t))_{t∈N_0} on { 0, 1, . . . , D } starting in W(0) = D. Define the stopping time
T_2 := min{ t ∈ N_0 | W(t) = 0 }.   (12)
Note that if T_1 > T_2, the majorization implies ∆(T_2) ≤ W(T_2) = 0 and, thus, τ ≤ T_2. By applying Theorem 2.1 with a = 5 to the first n^4 time steps and using a union bound, w.h.p. both (x(t))_{t∈N_0} and (y(t))_{t∈N_0} maintain an absolute discrepancy of at most adisc(0) + (7/α) · log n = (10/α) · log n during the first n^4 steps. By Observation 4.1, this implies a transformation distance of at most 2n · (10/α) · log n = D and, thus, T_1 > n^4. By Lemma 4.2, w.h.p., T_2 = O(n^3 · (log n)^3). Thus, by a union bound we have, w.h.p., T_1 ≥ T_2 and T_2 = O(n^3 · (log n)^3). As argued above, together these imply τ ≤ T_2 = O(n^3 · (log n)^3).
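The closed form of Lemma 4.2, used to bound T_2 above, is easy to verify empirically; in this sketch (ours), both printed values should be close to D·(D+1)/(1 − α) = 220.

import random

def crossing_time(D, alpha):
    # alpha-lazy reflecting walk on {0, ..., D} started at D; returns the first
    # time it hits 0. Moving up at position D is suppressed (reflection).
    w, t = D, 0
    while w > 0:
        t += 1
        u = random.random()
        if u < (1 - alpha) / 2:
            w = min(D, w + 1)
        elif u < 1 - alpha:
            w -= 1
    return t

D, alpha, trials = 10, 0.5, 20_000
est = sum(crossing_time(D, alpha) for _ in range(trials)) / trials
print(est, D * (D + 1) / (1 - alpha))     # empirical vs. exact expectation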
With the coupling at hand we are ready to prove our main result.

Proof of Theorem 4.1. Note that for t ≤ n^4, the statement follows immediately from Theorem 3.1. For larger t, Theorem 2.1 gives us, w.h.p., xmax(t − n^4) − xmin(t − n^4) ≤ 2c · log n. By this and by shifting each time step forward by n^4 steps, it is sufficient to prove the theorem's statement for t = n^4 starting from an initial load vector x(0) with xmax(0) − xmin(0) ≤ 2c · log n. Define an initial load vector y(0) with total load ∥y(0)∥_1 = m(0) that is perfectly balanced. By Lemma 4.3 there is a coupling of (x(t))_{t∈N_0} and (y(t))_{t∈N_0} such that, w.h.p., x(t) = y(t) for all t = Ω̃(n^3) (so, in particular, for t = n^4). Note that Theorem 3.1 applies to (y(t))_{t∈N_0}. This gives us the corresponding high probability guarantees from Theorem 3.1 for y(n^4) which, by leveraging the coupling, hold w.h.p. for x(n^4), finishing the proof.

Acknowledgments. Petra Berenbrink's research was funded by the DFG project Distributed and Collaborative Systems of Agents (project number 411362735) as part of the Research Unit (Forschungsgruppe) Algorithms, Dynamics, and Information Flow in Networks (FOR 2975).

References

[ABKU99] Yossi Azar, Andrei Z. Broder, Anna R. Karlin, and Eli Upfal. Balanced allocations. SIAM Journal on Computing, 29(1):180–200, 1999.

[ABS98] Micah Adler, Petra Berenbrink, and Klaus Schröder. Analyzing an infinite parallel job allocation process. In Proceedings of the European Symposium on Algorithms (ESA), pages 417–428, 1998.

[ANS22] Dan Alistarh, Giorgi Nadiradze, and Amirmojtaba Sabour. Dynamic averaging load balancing on cycles. Algorithmica, 84(4):1007–1029, 2022.

[BCF+23] Michael A. Bender, Alex Conway, Martin Farach-Colton, William Kuszmaul, and Guido Tagliavini. Tiny pointers. In Proceedings of the Symposium on Discrete Algorithms (SODA), pages 477–508, 2023.

[BCFV00] Petra Berenbrink, Artur Czumaj, Tom Friedetzky, and Nikita D. Vvedenskaya. Infinite parallel job allocation (extended abstract). In Proceedings of the Symposium on Parallel Algorithms and Architectures (SPAA), pages 99–108, 2000.

[BCN+15] Luca Becchetti, Andrea E. F. Clementi, Emanuele Natale, Francesco Pasquale, and Gustavo Posta. Self-stabilizing repeated balls-into-bins. In Proceedings of the 27th Symposium on Parallelism in Algorithms and Architectures (SPAA), pages 332–339, 2015.

[BCSV06] Petra Berenbrink, Artur Czumaj, Angelika Steger, and Berthold Vöcking. Balanced allocations: The heavily loaded case. SIAM Journal on Computing, 35(6):1350–1385, 2006.

[BFK+18] Petra Berenbrink, Tom Friedetzky, Peter Kling, Frederik Mallmann-Trenn, Lars Nagel, and Chris Wastell. Self-stabilizing balls and bins in batches - the power of leaky bins. Algorithmica, 80(12):3673–3703, 2018.

[BFL18] G. Brightwell, M. Fairthorne, and M. J. Luczak. The supermarket model with bounded queue lengths in equilibrium. Journal of Statistical Physics, 173:1149–1194, 2018.

[BHH+23] Petra Berenbrink, Lukas Hintze, Hamed Hosseinpour, Dominik Kaaser, and Malin Rau. Dynamic averaging load balancing on arbitrary graphs. In Proceedings of the International Colloquium on Automata, Languages, and Programming (ICALP), pages 18:1–18:18, 2023.

[BK22] Nikhil Bansal and William Kuszmaul. Balanced allocations: The heavily loaded case with deletions. In Proceedings of the 63rd Symposium on Foundations of Computer Science (FOCS), pages 801–812, 2022.

[BLP10] Maury Bramson, Yi Lu, and Balaji Prabhakar. Randomized load balancing with general service time distributions. In Proceedings of the International Conference on Measurement and Modeling of Computer Systems, pages 275–286, 2010.

[CFM+98] Richard Cole, Alan M. Frieze, Bruce M. Maggs, Michael Mitzenmacher, Andréa W. Richa, Ramesh K. Sitaraman, and Eli Upfal. On balls and bins with deletions. In Proceedings of Randomization and Approximation Techniques in Computer Science (RANDOM), pages 145–158, 1998.

[CL97] C. Xu and F. C. M. Lau. Load Balancing in Parallel Computers: Theory and Practice. Springer, New York, NY, 1997.
[CMMadH+98] Richard Cole, Bruce M. Maggs, Friedhelm Meyer auf der Heide, Michael Mitzenmacher, Andréa W. Richa, Klaus Schröder, Ramesh K. Sitaraman, and Berthold Vöcking. Randomized protocols for low-congestion circuit routing in multistage interconnection networks. In Proceedings of the Symposium on Theory of Computing (STOC), pages 378–388, 1998.

[CRSW13] L. Elisa Celis, Omer Reingold, Gil Segev, and Udi Wieder. Balls and bins: Smaller hash families and faster evaluation. SIAM Journal on Computing, 42(3):1030–1050, 2013.

[Czu00] Artur Czumaj. Recovery time of dynamic allocation processes. Theory of Computing Systems, 33(5/6):465–487, 2000.

[dM18] A. de Moivre. Doctrine of Chances. Woodfall, London, United Kingdom, 1718.

[DW05] Martin Dietzfelbinger and Christoph Weidling. Balanced allocation and dictionaries with tightly packed constant size bins. In Automata, Languages and Programming, pages 166–178, 2005.

[KLMadH92] Richard M. Karp, Michael Luby, and Friedhelm Meyer auf der Heide. Efficient PRAM simulation on a distributed memory machine. In Proceedings of the 24th Symposium on Theory of Computing (STOC), pages 318–326, 1992.

[LCB13] Paul Hermann Lensing, Toni Cortes, and André Brinkmann. Direct lookup and hash-based metadata placement for local file systems. In Proceedings of the 6th International Systems and Storage Conference, 2013.

[LM05] Malwina Luczak and Colin McDiarmid. On the power of two choices: Balls and bins in continuous time. The Annals of Applied Probability, 15(3):1733–1764, 2005.

[LN05] Malwina Luczak and James Norris. Strong approximation for the supermarket model. The Annals of Applied Probability, 13(3):2038–2061, 2005.

[LS21] Dimitrios Los and Thomas Sauerwald. Balanced allocations with incomplete information: The power of two queries. CoRR, abs/2107.03916, 2021.

[LS22] Dimitrios Los and Thomas Sauerwald. Balanced allocations with incomplete information: The power of two queries. In Proceedings of the Innovations in Theoretical Computer Science Conference (ITCS), 2022.

[LS23] Dimitrios Los and Thomas Sauerwald. Tight bounds for repeated balls-into-bins. In Proceedings of the Symposium on Theoretical Aspects of Computer Science (STACS), pages 1–22, 2023.

[LSS22] Dimitrios Los, Thomas Sauerwald, and John Sylvester. Balanced allocations: Caching and packing, twinning and thinning. In Proceedings of the 33rd Symposium on Discrete Algorithms (SODA), pages 1847–1874, 2022.

[LSS23] Dimitrios Los, Thomas Sauerwald, and John Sylvester. Balanced allocations with heterogeneous bins: The power of memory. In Proceedings of the Symposium on Discrete Algorithms (SODA), pages 4448–4477, 2023.

[LSS24] Dimitrios Los, Thomas Sauerwald, and John Sylvester. The power of filling in balanced allocations. SIAM Journal on Discrete Mathematics, 38(1):529–565, 2024.

[MBvLW16] Debankur Mukherjee, Sem C. Borst, Johan van Leeuwaarden, and Phil Whiting. Universality of power-of-d load balancing schemes. SIGMETRICS Performance Evaluation Review, 44(2):36–38, 2016.

[Mit01] Michael Mitzenmacher. The power of two choices in randomized load balancing. IEEE Transactions on Parallel and Distributed Systems, 12(10):1094–1104, 2001.

[PTW15] Yuval Peres, Kunal Talwar, and Udi Wieder. Graphical balanced allocations and the (1 + β)-choice process. Random Structures & Algorithms, 47(4):760–775, 2015.

[RS98] M. Raab and A. Steger. "Balls into Bins": a simple and tight analysis. In Proceedings of the 2nd International Workshop on Randomization and Approximation Techniques in Computer Science (RANDOM), pages 159–170, 1998.
[SMK+01] Ion Stoica, Robert Morris, David Karger, M. Frans Kaashoek, and Hari Balakrishnan. Chord: A scalable peer-to-peer lookup service for internet applications. In Proceedings of the Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications, pages 149–160, 2001.

[TW14] Kunal Talwar and Udi Wieder. Balanced allocations: A simple proof for the heavily loaded case. In Proceedings of the 41st International Colloquium on Automata, Languages, and Programming (ICALP), Part I, volume 8572 of Lecture Notes in Computer Science, pages 979–990. Springer, 2014. Full version available on arXiv.

[Vöc03] Berthold Vöcking. How asymmetry helps load balancing. Journal of the ACM, 50(4):568–589, 2003.

[VDK96] Nikita Vvedenskaya, Roland Dobrushin, and Fridrikh Karpelevich. Queueing system with selection of the shortest of two queues: An asymptotic approach. Problemy Peredachi Informatsii, 32(1):20–34, 1996.

[Wie17] Udi Wieder. Hashing, load balancing and multiple choice. Foundations and Trends in Theoretical Computer Science, 12(3-4):275–379, 2017.

Appendix.

A Deletion of a Random Ball.

Consider an insertion probability sequence (β(t))_{t∈N}. Let X = (x(t))_{t∈N_0} be the process where balls are deleted from randomly chosen non-empty bins during a deletion step. Remember that configuration x(t) is the load vector after step t, with x_i(t) denoting the load of the i-th fullest bin. Similarly, let Y = (y(t))_{t∈N_0} be the process in which every ball is deleted with the same probability during a deletion step. Our goal is to show that process Y is not worse (w.r.t. discrepancy/overload) than process X. We formalize this using the standard notion of majorization: For a configuration x, let S_k(x) = Σ_{i=1}^k x_i be the total load of the k fullest bins. We say a configuration x majorizes configuration y if S_k(y) ≤ S_k(x) for all k ∈ { 1, 2, . . . , n }. Note that this immediately implies that the maximum load of y is no larger than that of x. Thus, we can formalize the desired result via the following theorem:

Theorem A.1. Starting from an empty system, there exists a coupling between processes X and Y such that x(t) majorizes y(t) for all t ∈ N_0.

Coupling. Let us define a suitable coupling between processes X and Y. To this end, we use the identity coupling for the decision whether a ball is allocated (with probability β(t)) or deleted (with probability 1 − β(t)). As a result, the total load in both processes remains the same at any time; we denote it by m(t). It remains to couple the random choices for allocation and for deletion steps from configuration x(t) to configuration x(t + 1):

Allocation Step: Here we use the identity coupling. That is, both processes use the same two random bin choices for Greedy[2].

Deletion Step: Here we number the m(t) balls in both configurations x(t) and y(t) from left to right (and arbitrarily within each bin). We then draw a uniformly random number z(t) from [0, 1) and use it to determine the ball deletions as follows:

(a) Y deletes ball i ∈ { 1, . . . , m(t) } if and only if
(i − 1)/m(t) ≤ z(t) < i/m(t).   (13)

(b) X deletes ball j ∈ { 1, . . . , m(t) }, where j = Σ_{a=1}^{ℓ−1} x_a(t) + k (i.e., the k-th ball in bin ℓ), if and only if
(ℓ − 1)/n̂(t) + (k − 1)/(n̂(t) · x_ℓ(t)) ≤ z(t) < (ℓ − 1)/n̂(t) + k/(n̂(t) · x_ℓ(t)),   (14)
where n̂(t) denotes the number of non-empty bins in configuration x(t).

Note that our coupling of the deletions maintains the marginal distributions of X and Y. Indeed, process Y deletes ball i with probability exactly 1/m(t). Similarly, process X deletes ball j with probability exactly 1/(n̂(t) · x_ℓ(t)) (a uniformly random ball chosen from a uniformly random non-empty bin).
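The deletion coupling of Equations (13) and (14) is driven by a single uniform z(t); the following sketch (ours, 0-indexed, with invented helper names) performs both coupled deletions for a small example.

import random

def delete_from_random_bin(x, z):
    # Process X, Equation (14): z selects a uniform non-empty bin l and then a
    # uniform ball k inside it (all indices 0-based here).
    nonempty = [i for i, load in enumerate(x) if load > 0]
    nhat = len(nonempty)
    pos = z * nhat
    l = nonempty[int(pos)]
    k = int((pos - int(pos)) * x[l])
    x[l] -= 1
    return l, k

def delete_random_ball(y, z):
    # Process Y, Equation (13): z selects a uniform ball in the left-to-right
    # numbering, and the ball's bin loses one unit of load.
    i = int(z * sum(y))
    for b, load in enumerate(y):
        if i < load:
            y[b] -= 1
            return b
        i -= load

x, y, z = [3, 2, 1, 0], [3, 2, 1, 0], random.random()
print(delete_from_random_bin(x, z), delete_random_ball(y, z))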
Analysis. In the proof of Theorem A.1, we make use of the following simple observation:

Observation A.1. Consider the above coupling for processes X and Y during a deletion step t + 1. Assume that y(t) is majorized by x(t). Let r and s denote the bins from which X and Y delete during step t + 1, respectively. Similarly, let j and i denote the numbers of the balls deleted by X and Y during step t + 1, respectively. Then
S_{s−1}(y(t)) < i ≤ j ≤ S_r(x(t)).   (15)

The first inequality follows from our ball numbering scheme (from left to right), since i ≤ S_{s−1}(y(t)) would imply that ball i lies in a bin < s. Similarly, the last inequality follows since j > S_r(x(t)) would imply that ball j lies in a bin > r. The inequality i ≤ j is a simple consequence of our coupling, since the probabilities for ball i being deleted in Y are constant (w.r.t. i) while the probabilities for ball j being deleted in X are monotonically non-decreasing (w.r.t. j). With this, we are ready to prove the main result of this section.

Proof of Theorem A.1. We use the coupling defined above and prove the statement via induction over t, with the trivial base case S_k(x(0)) = 0 = S_k(y(0)) for all k. For the inductive step we show that the coupling maintains majorization when going from x(t) to x(t + 1) during time step t + 1. If a ball is inserted during time step t + 1, this follows immediately from [ABKU99, Lemma 3.4]. Thus, in the rest of this proof we consider a deletion during time step t + 1. Moreover, as in Observation A.1, we use (a) r and s to denote the bins from which X and Y delete a ball in step t + 1, respectively; and (b) j and i to denote the balls deleted by X and Y, respectively.

A plateau of a configuration x is a maximal set of consecutive bins having the same load. Note that when going from x(t) to x(t + 1) by deleting a ball from bin r, we have to re-sort the bins after the deletion. This will swap bin r with the rightmost bin r′ of the plateau in which r lies. In other words, we can think of X as deleting a ball from bin r′ ≥ r instead of bin r. Formally, we get x(t + 1) = x(t) − e_{r′}, where e_i denotes the i-th unit vector in R^n. Similarly, Y deletes a ball from the rightmost bin s′ ≥ s of the plateau containing bin s, yielding y(t + 1) = y(t) − e_{s′}.

So assume x(t) majorizes y(t). That is, S_k(y(t)) ≤ S_k(x(t)) for all k ∈ { 1, 2, . . . , n }. We show by case distinction that S_k(y(t + 1)) ≤ S_k(x(t + 1)) for all k ∈ { 1, 2, . . . , n }:

Case 1: k < min{ r′, s′ }. Then the first k bins remain untouched by the deletion in both processes, yielding
S_k(y(t + 1)) = S_k(y(t)) ≤ S_k(x(t)) = S_k(x(t + 1)).   (16)

Case 2: k ≥ max{ r′, s′ }. Then the first k bins lose exactly one ball in both processes, yielding
S_k(y(t + 1)) = S_k(y(t)) − 1 ≤ S_k(x(t)) − 1 = S_k(x(t + 1)).   (17)

Case 3: s′ ≤ k < r′. Then X and Y delete exactly zero and one ball in the first k bins, respectively, yielding
S_k(y(t + 1)) = S_k(y(t)) − 1 ≤ S_k(x(t)) − 1 = S_k(x(t + 1)) − 1 < S_k(x(t + 1)).   (18)

Case 4: r′ ≤ k < s ≤ s′. Then we use Observation A.1 to get
S_k(y(t + 1)) = S_k(y(t)) ≤ S_{s−1}(y(t)) ≤ S_r(x(t)) − 1 ≤ S_{r′}(x(t)) − 1 = S_{r′}(x(t + 1)) ≤ S_k(x(t + 1)).   (19)

Case 5: max{ r′, s } ≤ k < s′. For the sake of a contradiction, let k be the minimal value in this range that violates the majorization after the deletion (and normalization).
That is, we have S_k(y(t + 1)) > S_k(x(t + 1)) and S_{k−1}(y(t + 1)) ≤ S_{k−1}(x(t + 1)). (Footnote 4: For k = s this holds by the previous case.) This implies y_k(t + 1) > x_k(t + 1). Now note that k lies in the plateau of y(t) between s and s′, such that for all q ∈ { k + 1, k + 2, . . . , s′ − 1 } we have
y_q(t + 1) = y_k(t + 1) > x_k(t + 1) ≥ x_q(t + 1)   (20)
(the last inequality using that x(t + 1) is sorted non-increasingly) and, similarly,
y_{s′}(t + 1) = y_k(t + 1) − 1 ≥ x_k(t + 1) ≥ x_{s′}(t + 1).   (21)
This implies S_{s′}(y(t)) = S_{s′}(y(t + 1)) + 1 > S_{s′}(x(t + 1)) + 1 = S_{s′}(x(t)), contradicting the induction hypothesis. Together, this shows that x(t + 1) majorizes y(t + 1).

B Omitted Material from Proof of Theorem 2.1.

In this section we prove Theorem 2.1, which we restate for convenience:

Theorem 2.1. Consider an insertion probability sequence (β(t))_{t∈N} bounded away from 0 by a constant β > 0. If α ≤ β/16, then for any t ∈ N_0 we have E[Γα(x(t))] = O(n) · e^{α·adisc(0)}. Moreover, for any a ≥ 0 we have Pr[ adisc(t) − adisc(0) ≥ ((a + 2)/α) · ln n ] ≤ n^{−a}.

We use the potential functions from [PTW15] and show that, at an arbitrary point of time, the expected potential is linear in n. Since the authors of [PTW15] considered a process without ball deletions, we extend the potential function analysis to capture ball deletions. Recall that the process works as follows: In every time step a ball is inserted with probability β(t) using two choices. Otherwise, with probability 1 − β(t), a random non-empty bin is picked and a ball from that bin is deleted. The three potential functions defined in [PTW15] are
Φα(x(t)) = Σ_{i=1}^n e^{α·(x_i(t) − m(t)/n)};  Ψα(x(t)) = Σ_{i=1}^n e^{−α·(x_i(t) − m(t)/n)};  Γα(x(t)) = Φα(x(t)) + Ψα(x(t)),
where x(t) = (x_1(t), . . . , x_n(t)) is the normalized load vector. For convenience, we use y_i(t) := x_i(t) − m(t)/n for the difference between the load of the i-th bin and the average.

The remainder of this section is dedicated to proving Lemma B.8, which directly implies Theorem 2.1. For this proof, we require insertion and deletion probabilities. Given that a ball is deleted in round t, we denote by q_i(t) the probability that it is removed from the i-th bin. And given that a ball is inserted, we denote by p_i(t) the probability that it is inserted into the i-th bin. If no bin is empty, then q_i(t) = 1/n, because the protocol deletes a ball from a non-empty bin chosen uniformly at random. If there are s empty bins, then the empty bins, which come last in the normalized array, have probability q_i(t) = 0, and the remaining bins have probability q_i(t) = 1/(n − s). The insertion probability p_i(t), on the other hand, is independent of t:
p_i(t) = p_i = (i/n)^2 − ((i − 1)/n)^2,
because for the i-th bin of the normalized array to receive the ball, both choices must fall among the first i bins, but not both among the first i − 1 bins. (W.l.o.g. we assume that if bins of equal load are chosen, the bin with the larger index receives the ball.) From this we can derive the following simple observation:

Observation B.1. Σ_{i≤n/4} p_i ≤ 1/4 − ϵ and Σ_{i≥3n/4} p_i ≥ 1/4 + ϵ for any ϵ ≤ 3/16.

Proof. The probability that any of the first n/4 bins receives the new ball is
Σ_{i≤n/4} p_i = Σ_{i≤n/4} ((i/n)^2 − ((i − 1)/n)^2) = ((n/4)/n)^2 = 1/16.
The probability that any of the last n/4 bins receives the new ball is
Σ_{i>3n/4} p_i = Σ_{i>3n/4} ((i/n)^2 − ((i − 1)/n)^2) = (n/n)^2 − ((3n/4)/n)^2 = 1 − (3/4)^2 = 7/16.
Both claims follow since 1/16 ≤ 1/4 − ϵ and 7/16 ≥ 1/4 + ϵ for any ϵ ≤ 3/16.

Let ŷ(t) = (ŷ_1(t), . . . , ŷ_n(t)) denote the load vector above average before normalization. By combining the probabilities β, p_i and q_i, we can express ŷ_i(t + 1), i.e.
the ith bin’s load above average before normalization at t + 1, as a function of its previous load yi(t): ˆyi(t + 1) =            yi + 1 −1/n, with probability β(t) · pi (ball is allocated to ith bin), yi −1/n, with probability β(t) · (1 −pi) (ball is allocated to other bin), yi −1 + 1/n, with probability (1 −β(t)) · qi (ball is deleted from ith bin), yi + 1/n, with probability (1 −β(t)) · (1 −qi) (ball is deleted from other bin) Let Φi α(x(t)) = eα·yi(t) and Ψi α(x(t)) = e−α·yi(t) be the local potentials in bin i. And let ∆ˆΦi α := Φi α(ˆx(t+1))−Φi α(x(t)) and ∆ˆΨi α := Ψi α(ˆx(t+1))−Ψi α(x(t)) be the local potential changes in the ith bin (before normalization). Then: ∆ˆΦi α =            eα(yi+1−1/n) −eαyi = eαyi · (eα(1−1/n) −1), with probability β(t) · pi, eα(yi−1/n) −eαyi = eαyi · (e−α/n −1), with probability β(t) · (1 −pi), eα(yi−1+1/n) −eαyi = eαyi · (eα(−1+1/n) −1), with probability (1 −β(t)) · qi, eα(yi+1/n) −eαyi = eαyi · (eα/n −1), with probability (1 −β(t)) · (1 −qi) ∆ˆΨi α =            e−α(yi+1−1/n) −eαyi = e−αyi · (e−α(1−1/n) −1), with probability β(t) · pi, e−α(yi−1/n) −eαyi = e−αyi · (eα/n −1), with probability β(t) · (1 −pi), e−α(yi−1+1/n) −eαyi = e−αyi · (e−α(−1+1/n) −1), with probability (1 −β(t)) · qi, e−α(yi+1/n) −eαyi = e−αyi · (e−α/n −1), with probability (1 −β(t)) · (1 −qi) We use these local changes to prove different bounds on the overall potential changes ∆Φα = Φα(x(t + 1)) −Φα(x(t)) (in Lemma B.1, Lemma B.2, Corollary B.1 and Lemma B.5) and ∆Ψα = Ψα(x(t + 1)) −Ψα(x(t)) (in Lemma B.3, Lemma B.4, Corollary B.2 and Lemma B.6). Lemma B.1 (similar to a part of Lemma A.5 in [TW14]). If, for a constant β, β(t) ≥β for all t and if α ≤ϵ·β 3 ≤1 16, then E[∆Φα | x(t)] ≤ n X i=1 eαyi  β(t)  pi(eα −1) −α n  + (1 −β(t)) 1 n(e−α −1 + α) + 2α2 n2  . 20 Proof. We compute the expected potential difference: E[∆Φα | x(t)] = n X i=1 β(t)pi · (eα(1−1/n) −1) · eαyi + β(t)(1 −pi) · (eα(−1/n) −1) · eαyi +(1 −β(t))qi · (eα(−1+1/n) −1) · eαyi + (1 −β(t))(1 −qi) · (eα(1/n) −1) · eαyi = n X i=1 β(t)eαyi  pi · e−α/n · (eα −1) + e−α/n −1  +(1 −β(t))eαyi  qieα/n(e−α −1) + eα/n −1  ≤ n X i=1 β(t)eαyi  pi(eα −1) + e−α/n −1  + (1 −β(t))eαyi  qi(e−α −1) + eα/n −1  = n X i=1 eαyi  β(t)pi(eα −1) + (1 −β(t))qi(e−α −1) + eα/n −1 + β(t)(e−α/n −eα/n)  ≤ n X i=1 eαyi  β(t)pi(eα −1) + (1 −β(t))qi(e−α −1) + α n + 2α2 n2 −2β(t)α n  ≤ n X i=1 eαyi  β(t)pi(eα −1) + (1 −β(t)) 1 n(e−α −1) + α n + 2α2 n2 −2β(t)α n  (22) = n X i=1 eαyi  β(t)  pi(eα −1) −α n  + (1 −β(t)) 1 n(e−α −1 + α) + 2α2 n2  Inequality (22) holds because, if no bin is empty, all qi = 1/n, and if bins are empty, then the qi are decreasing while the term (1 −β(t)) · (e−α −1) · eαyi is increasing as e−α −1 < 0. Lemma B.2 (similar to a part of Lemma A.5 in [TW14]). If, for a constant β, β(t) ≥β for all t, if α ≤ϵ·β 3 ≤1 16 and if y3n/4(t) ≤0, then E[∆Φα | x(t)] ≤−β·α·ϵ n · Φα + α. 21 Proof. We start from Lemma B.1 and continue the calculation: E[∆Φα | x(t)] ≤ n X i=1 eαyi  β(t)  pi(eα −1) −α n  + (1 −β(t)) 1 n(e−α −1 + α) + 2α2 n2  ≤ X i<3n/4 eαyi  β(t)  pi(eα −1) −α n  + (1 −β(t)) 1 n(e−α −1 + α) + 2α2 n2  + X i≥3n/4 e0  β(t)  pi(eα −1) −α n  + (1 −β(t)) 1 n(e−α −1 + α) + 2α2 n2  ≤ X i<3n/4 4Φα 3n  β(t)  pi(eα −1) −α n  + (1 −β(t)) 1 n(e−α −1 + α) + 2α2 n2  (23) +β(t)  eα −1 −α 4  + (1 −β(t))1 4(e−α −1 + α) + α2 2n (Obs. 
B.1) ≤ 4Φα 3n  β(t) 3 4 −ϵ  (eα −1) −3α 4  + (1 −β(t))3 4(e−α −1 + α) + 3α2 2n  + α (24) = Φα n  β(t)  1 −4 3ϵ  (eα −1) −α  + (1 −β(t))(e−α −1 + α) + 2α2 n  + α = Φα n  −4 3β(t)ϵ(eα −1) + β(t) (eα −1 −α) + (1 −β(t))(e−α −1 + α) + 2α2 n  + α ≤ Φα n  −4 3β(t)ϵα + 2 3α2  + α ≤ Φα n (−β(t)ϵα) + α ≤ Φα n (−βϵα) + α In Inequality (23), we use that Pn i=1 pi(eα−1)eαyi is maximal when eαyi = 4·Φα 3·n for all i < 3n/4. This is because the pi are non-decreasing and the yi are non-increasing. In Inequality (24), we use Observation B.1 and the fact that Pn i=1 pi(eα −1)eαyi is maximal if the pi are uniform. The second to last inequality uses α ≤β · ϵ/3 ≤β(t) · ϵ/3. Corollary B.1. If, for a constant β, β(t) ≥β for all t, if α ≤ϵ·β 3 ≤ 1 16 and if y3n/4(t) ≤0, then E[∆Φα | x(t)] ≤Φα · α2 n . Proof. We use Lemma B.1 and continue the calculation: E[∆Φα | x(t)] ≤ n X i=1 eαyi  β(t)  pi(eα −1) −α n  + (1 −β(t)) 1 n(e−α −1 + α) + 2α2 n2  ≤ n X i=1 eαyi n  β(t)(eα −1 −α) + (1 −β(t))(e−α −1 + α) + 2α2 n  ≤ Φα · α2 n 22 The second inequality, for which all pi are set to 1/n, holds because the pi are non-decreasing and the eαyi are non-increasing. The last inequality makes use of the Taylor series. Lemma B.3 (similar to a part of Lemma A.6 in [TW14]). If, for a constant β, β(t) ≥β for all t and if α ≤ϵ·β 3 ≤1 16, then E[∆Ψα | x(t)] ≤ n X i=1 e−αyi  β(t)(pi(e−α −1) + α/n) + (1 −β(t)) 1 n(eα −1 −α) + 2α2 n2  . Proof. We upper-bound the expected potential difference. E[∆Ψα | x(t)] = n X i=1 β(t)pi · (e−α(1−1/n) −1) · e−αyi + β(t)(1 −pi) · (e−α(−1/n) −1) · e−αyi +(1 −β(t))qi · (e−α(−1+1/n) −1) · e−αyi + (1 −β(t))(1 −qi) · (e−α(1/n) −1) · e−αyi = n X i=1 β(t)e−αyi  pi · eα/n · (e−α −1) + eα/n −1  +(1 −β(t))e−αyi  qi · e−α/n · (eα −1) + e−α/n −1  ≤ n X i=1 β(t)e−αyi  pi(e−α −1) + eα/n −1  + (1 −β(t))e−αyi  qi(eα −1) + e−α/n −1  = n X i=1 e−αyi  β(t)pi(e−α −1) + (1 −β(t))qi(eα −1) + e−α/n −1 + β(t)(eα/n −e−α/n)  ≤ n X i=1 e−αyi  β(t)pi(e−α −1) + (1 −β(t))qi(eα −1) −α n + 2β(t)α n + 2α2 n2  ≤ n X i=1 e−αyi  β(t)pi(e−α −1) + (1 −β(t)) 1 n(eα −1) −α n + 2β(t)α n + 2α2 n2  (25) = n X i=1 e−αyi  β(t)(pi(e−α −1) + α/n) + (1 −β(t)) 1 n (eα −1 −α) + 2α2 n2  We use the Taylor series of the exponential function, and Inequality (25), replacing qi by 1/n, holds because the qi are non-increasing and the q−αyi are non-decreasing. Lemma B.4 (similar to part of Lemma A.6 in [TW14]). If, for a constant β, β(t) ≥β for all t, if α ≤ϵ·β 3 ≤1 16 and if yn/4 ≥0, then E[∆Ψα | x(t)] ≤−β·α·ϵ n · Ψα + α. 23 Proof. We use Lemma B.3 and continue the calculation: E[∆Ψα | x(t)] ≤ n X i=1 e−αyi  β(t)(pi(e−α −1) + α/n) + (1 −β(t)) 1 n (eα −1 −α) + 2α2 n2  ≤ n X i=1 e−αyi  β(t)  pi(e−α −1) + α n  + (1 −β(t))α2 n + 2α2 n2  ≤ X i≥n/4 e−αyi  β(t)  pi(e−α −1) + α n  + (1 −β(t))α2 n + 2α2 n2  + X i<n/4 e0  β(t)  pi(e−α −1) + α n  + (1 −β(t))α2 n + 2α2 n2  (Obs. B.1) ≤ X i≥n/4 e−αyi n  β(t)  1 + 4 3ϵ  (e−α −1) + β(t)α + (1 −β(t))α2 + 2α2 n  (26) +β(t) 4 (e−α −1 + α) + 1 −β(t) 4 α2 + α2 2n ≤ Ψα −n/4 n  β(t)4 3ϵ(e−α −1) + α2 + α3 2  + 3 4α2 (27) ≤ Ψα n  β(t)4 3ϵ(e−α −1) + α2  + α ≤ Ψα n (−β(t)ϵα) + α ≤ Ψα n (−βϵα) + α The tricky part is Eq. (26) and (27). Note that (e−α −1) is negative, and pi and e−αyi are weakly increasing. So, to bound the expression, we can assume that the pi are uniform, and from P i≥n/4 pi ≥3/4 + ϵ it follows that then pi ≥(3/4 + ϵ)/(3/4 · n) = (1 + 4/3 · ϵ) · 1/n which implies Eq. (26). The resulting term 1 n  β(t) 1 + 4 3ϵ  (e−α −1) + β(t)α + (1 −β(t))(eα −1 −α) + 2α2 n  is negative as well. 
Since P i<n/4 e−αyi < n/4 (because of yn/4 ≥0), it follows that P i≥n/4 e−αyi ≥ Ψα −n/4 and thus Eq. (27). The second to last inequality uses ϵ ≥3α/β ≥3α/β(t). Corollary B.2. If, for a constant β, β(t) ≥β for all t, if α ≤ϵ·β 3 ≤ 1 16 and if yn/4 ≥0, then E[∆Ψα | x(t)] ≤Ψα α2 n . Proof. We use Lemma B.3 and continue the calculation: E[∆Ψα | x(t)] ≤ n X i=1 e−αyi  β(t)(pi(e−α −1) + α/n) + (1 −β(t)) 1 n(eα −1 −α) + 2α2 n2  ≤ Ψα α2 n 24 Lemma B.5 (similar to Lemma A.7 in [TW14]). Suppose β(t) ≥β, y3n/4 > 0 and E[∆Φα | x(t)] ≥ −α·β·ϵ 4·n · Φα. Then either Φα < ϵ 4 · Ψα or Γα ≤360·n ϵ8·β4 . Proof. In the following we use that pi ≤1−4·ϵ n for i ≤n/3 and that P pi · eα·yi is maximized when p is uniform. We start from Lemma B.1. E[∆Φα | x(t)] ≤ n X i=1 eαyi  β(t)(pi(eα −1) −α/n) + (1 −β(t)) 1 n(e−α −1 + α) + 2α2 n2  ≤ X i≤n/3 eαyi  β(t) 1 −4ϵ n (eα −1) −α/n  + (1 −β(t)) 1 n(e−α −1 + α) + 2α2 n2  + X i>n/3 eαyi  β(t)  3 2n(eα −1) −α/n  + (1 −β(t)) 1 n(e−α −1 + α) + 2α2 n2  ≤ Φα,≤n/3 n  β(t) ((1 −4ϵ)(eα −1) −α) + (1 −β(t))(e−α −1 + α) + 2α2 n  +Φα,>n/3 n  β(t) 3 2(eα −1) −α  + (1 −β(t))(e−α −1 + α) + 2α2 n  ≤ −3 · β(t) · α · ϵ n · Φα + α n · Φα,>n/3 ≤ −3 · β · α · ϵ n · Φα + α n · Φα,>n/3 Recall that we assume E[∆Φα | x(t)] ≥−α·β·ϵ 4·n · Φα. −α · β · ϵ 4 · n · Φα ≤ −3 · α · β · ϵ n · Φα + α n · Φα,>n/3 ⇒ 2 · β · ϵ · Φα ≤ Φα,>n/3 If Φα < ϵ 4·Ψα, we are done (see statement). So, assume Φα ≥ϵ 4·Ψα. Since Φα,≥n/3 ≤2·n 3 ·e3·α·B/n for B = P i max{0, yi} = ||y||1 2 and since y3n/4 > 0 implies Ψα ≥n 4 · e4·α·B/n, we get: n · ϵ 16 · e4·α·B/n ≤ϵ 4 · Ψα ≤Φα ≤ Φα,>n/3 · 1 2 · β · ϵ ≤ n 3 · β · ϵ · e3·α·B/n =⇒ eα·B/n ≤ 6 β · ϵ2 Finally, it follows that: Γα = Φα + Ψα ≤5 ϵ · Φα ≤5 ϵ · n 3 · β · ϵ · e3·α·B/n ≤5 ϵ · n 3 · β · ϵ ·  6 β · ϵ2 3 = 360 · n ϵ8 · β4 Lemma B.6 (similar to Lemma A.8 in [TW14]). Suppose β(t) ≥β, yn/4 < 0 and E[∆Ψα | x(t)] ≥ −α·β·ϵ 4·n · Ψα. Then either Ψα < ϵ 4 · Φα or Γα ≤1040000·n ϵ8·β4 . 25 Proof. We use Lemma B.3. In Eq. (28) we use pi > 1+ϵ n for i > 2n/3 and e−α −1 < 0. E[∆Ψα | x(t)] ≤ n X i=1 e−αyi  β(t)  pi(e−α −1) + α n  + (1 −β(t)) 1 n(eα −1 −α) + 2α2 n2  ≤ Ψα,≤2n/3 · α n (28) + X i>2n/3 e−αyi  β(t) 1 + ϵ n (e−α −1) + α n  + (1 −β(t)) 1 n(eα −1 −α) + 2α2 n2  ≤ Ψα,≤2n 3 · α n + Ψα,> 2n 3 n  β(t)ϵ(e−α −1) + α2 + α3 2  ≤ Ψα,≤2n 3 · α n + Ψα,> 2n 3 n −αβ(t)ϵ + α2 ≤ Ψα,≤2n 3 · α n + Ψα,> 2n 3 · −αβ(t)ϵ 2n ≤ Ψα,≤2n 3 · α n + Ψα,> 2n 3 · −αβϵ 2n Using E[∆Ψα | x(t)] ≥−α·β·ϵ 4·n · Ψα, we get: −α · β · ϵ 4 · n · Ψα ≤ Ψα,≤2n 3 · α n −Ψα · αβϵ 2n Ψα ≤ 4 β · ϵ · Ψα,≤2n 3 If Ψα < ϵ 4 · Φα, we would be done; so we assume Ψα ≥ϵ 4 · Φα. Since Ψα,≤2n 3 ≤2 3 · n · e3·α·B/n for B = P i max{0, yi} and since yn/4 < 0 implies Φα ≥n 4 · e4·α·B/n, we get: ϵ · n 16 · e4·α·B/n ≤ϵ 4 · Φα ≤Ψα ≤ 4 β · ϵ · Ψα,≤2n 3 ≤ 8 3 · β · ϵ · n · e3·α·B/n ⇒ eα·B/n ≤ 128 3 · β · ϵ2 Finally, it follows that: Γα ≤ 5 ϵ · Ψα ≤5 ϵ · 8 3 · β · ϵ · n · e3·α·B/n ≤ 40 3 · β · ϵ2 · n ·  128 3 · β · ϵ2 3 ≤1040000 · n ϵ8 · β4 The next Lemma B.7 utilizes all previous lemmas and corollaries to prove a bound on the potential change ∆Γα = Γα(x(t + 1)) −Γα(x(t)). This bound will then be used in Lemma B.8 to finally prove that the expected potential E[Γα(x(t))] is linear in n. Lemma B.7 (similar to Theorem A.9 in [TW14]). E[Γα(t+1) | x(t)] ≤(1−α·β·ϵ 4·n )·Γα(t)+ 1040000 ϵ8·β4 . 26 Proof. Case 1: yn/4 ≥0 and y3n/4 ≤0. The statement follows from Lemma B.2 and Lemma B.4: E[∆Γα | x(t)] = E[∆Φα | x(t)] + E[∆Ψα | x(t)] ≤ −α · β · ϵ n · Φα + α −α · β · ϵ n · Ψα + α = −α · β · ϵ n · Γα + 2 · α Case 2: yn/4 ≥y3n/4 > 0. 
If E[∆Φα | x(t)] ≤−ϵ·β·α 4·n · Φα, apply Lemma B.4: E[∆Γα | x(t)] = E[∆Φα | x(t)] + E[∆Ψα | x(t)] ≤ −ϵ · β · α 4 · n · Φα + α −Ψα · α · β · ϵ n + α ≤ −α · β · ϵ 4 · n · Γα + α If this is not the case, apply Lemma B.5 which distinguishes two cases: If Φα < ϵ 4 · Ψα, apply Lemma B.4 and Corollary B.1 and use Φα < ϵ 4 · Ψα ≤1 16 · Ψα as well as β ≥α (which follows from ϵ ≥2 · α/β): E[∆Γα | x(t)] = E[∆Φα | x(t)] + E[∆Ψα | x(t)] ≤ α2 n · Φα −Ψα · α · β · ϵ n + α ≤ α · β · ϵ n · 4 · Ψα −Ψα · α · β · ϵ n + α = −Ψα · 3 · α · β · ϵ 4 · n + α ≤ −Γα · 3 · α · β · ϵ 4 · n + α If Γα < 360·n ϵ8·β4 , apply Corollary B.1 and Corollary B.2: E[∆Γα | x(t)] = E[∆Φα | x(t)] + E[∆Ψα | x(t)] (29) ≤ α2 n · Φα + α2 n · Ψα = α2 n · Γα ≤ −α · β · ϵ n · Γα + (1 + ϵ) · α · β n · Γα ≤ −α · β · ϵ n · Γα + 360 ϵ8 · β2 (30) Case 3: y3n/4 ≤yn/4 < 0. If E[∆Ψα | x(t)] ≤−α·β·ϵ 4·n · Ψα, apply Lemma B.2: E[∆Γα | x(t)] = E[∆Φα | x(t)] + E[∆Ψα | x(t)] ≤ −α · β · ϵ n · Φα + α −α · β · ϵ 4 · n · Ψα ≤ −α · β · ϵ 4 · n · Γα + α 27 If this is not the case, apply Lemma B.6 which distinguishes two cases: If Ψα < ϵ 4 · Φα, apply Lemma B.2 and Corollary B.2 and use ϵ < 1/4: E[∆Γα | x(t)] = E[∆Φα | x(t)] + E[∆Ψα | x(t)] ≤ −α · β · ϵ n · Φα + α + α2 n · Ψα ≤ −3 · α · β · ϵ 4 · n · Γα + α (The proof for the last inequality is very similar to Case 2.) If Γα < c·n β4 , Inequality (29)-(30) can be reused with the constant from Lemma B.6. E[∆Γα | x(t)] ≤ −α · β · ϵ n · Γα + 1040000 ϵ8 · β2 Lemma B.8. Fix an arbitrary step t. Let ϵ ≤ 3 16, and assume that β ∈(0, 1] is a constant probability and that α ≤ϵ·β 3 . Then E[Γα(x(t))] ≤4160002 α · β3 · ϵ9 · eα·adisc(0) · n. Proof. The proof of this lemma follows from Lemma B.7 which gives us E[Γα(x(t + 1)) | x(t)] ≤  1 −α · β · ϵ 4 · n  · Γα(x(t)) + 1040000 ϵ8 · β2 . Taking the expected value on both sides, we obtain E[Γα(x(t + 1))] ≤  1 −α · β · ϵ 4 · n  · E[Γα(x(t))] + 1040000 ϵ8 · β2 . Solving this recursion yields E[Γα(x(t))] ≤  1 −α · β · ϵ 4 · n t ·  E[Γα(x(0))] −4160000 α · β3 · ϵ9 · n  + 4160000 α · β3 · ϵ9 · n ≤E[Γα(x(0))] + 4160000 α · β3 · ϵ9 · n ≤2n · eα·adisc(0) + 4160000 α · β3 · ϵ9 · n =  2eα·adisc(0) + 4160000 α · β3 · ϵ9  · n ≤4160002 α · β3 · ϵ9 · eα·adisc(0) · n. Lemma B.9. Fix an arbitrary step t. Let ϵ ≤ 3 16, and assume that β ∈(0, 1] is a constant probability and that α ≤ϵ·β 3 . Then Pr  adisc(t) −adisc(0) ≥(a + 2) · ln n α  ≤n−a. 28 Proof. Lemma B.8 together with Markov’s inequality yields Pr  Γα(x(t)) ≥na+1 · 4160002 α · β3 · ϵ9 · eα·adisc(0)  ≤n−a. By rearranging the inequality on the left-hand side, we see that the corresponding event is equivalent to ln(Γα(x(t))) α −adisc(0) ≥ (a + 1) · ln(n) + ln(4160002 α·β3·ϵ9 ) α . For large n, the right-hand side of this is at most (a+2)·ln(n) α . For the left-hand side, the potential function definition yields a lower bound of adisc(t) −adisc(0). Combining the above insights, we get Pr  adisc(t) −adisc(0) ≥(a + 2) · ln n α  ≤Pr " ln(Γα(x(t))) α −adisc(0) ≥ (a + 1) · ln(n) + ln(4160002 α·β3·ϵ9 ) α # = Pr  Γα(x(t)) ≥na+1 · 4160002 α · β3 · ϵ9 · eα·adisc(0)  ≤n−a. C Omitted Material from Proof of Theorem 2.2. We restate Theorem 2.2 for convenience. Theorem 2.2. Fix an arbitrary step t. Let β ∈(0, 1] be a constant. Assume β(τ) ≥β for all 1 ≤τ ≤t and α ≤β/16. For any fixed non-negative ℓand any fixed µ ∈(0, 1), there is a γ = γ(µ) so that, with probability at least 1 −n−(3+ℓ), the number of balls above height ⌈m(t) n ⌉+ γ is at most µn. Proof. 
From Lemma C.1 we may assume that Γα(x(t)) = O(n) with probability at least 1−n−(3+ℓ) for any fixed non-negative ℓ, where the constant factor hidden in the big-Oh depends on ℓ. We assign a height hb(t) to each ball b at time t in such a way that in each bin i the Xi(t) balls contained in that bin at time t are numbered 1, . . . , Xi(t). Further let h′ b(t) = hb(t) −⌈m(t) n ⌉ if hb(t) ≥⌈m(t) n ⌉and h′ b(t) = 0 otherwise. The standard potential function Φα(t) = P i eαX+ i (t) with X+ i (t) being bin i’s load above ⌈m(t) n ⌉ at time t and zero otherwise is expressed in terms of individual bins’ potentials. We will rearrange this function so as to express the potential in terms of individual balls’ potentials, and then make subsequent arguments based upon this new potential function. We assign potential πb(t) to ball b at height hb(t) as follows: πb(t) =      0 if hb(t) ≤⌈m(t) n ⌉, eα if hb(t) = ⌈m(t) n ⌉+ 1, eαh′ b(t) −eα(h′ b(t)−1) if hb(t) ≥⌈m(t) n ⌉+ 2. Furthermore let Π(t) = P ball b πb(t). Now Π(t) = Φ(t), which may be seen as follows: Consider a bin i with k ≥1 balls above ⌈m(t) n ⌉. Then those balls contribute a total potential of Πi(t) := X ball b: hb(t)>⌈m(t) n ⌉ πb(t) = eα + k X j=2 eα·j −eα·(j−1) = eα + (e2α −eα) + (e3α −e2α) + · · · + (e(k−1)α −e(k−2)α) + (ekα −e(k−1)α). 29 This is a telescoping sum evaluating to Πi(t) = ekα. This however is precisely this bin’s potential contribution in the potential function Φ, so that we can conclude that Π(t) = X ball b πb(t) = X bin i X ball b in bin i πb(t) = X bin i Πi(t) = X bin i eαX+ i (t) = Φ(t). Consider a height ⌈m(t) n ⌉+ k for some k ≥2; the precise value will be fixed below. Assume that there is a set B of at least µn balls at height ⌈m(t) n ⌉+ k or higher (if no such set exists the proof is finished); the precise distribution of the balls in bins is irrelevant. Each of those at least µn many balls contributes a potential of at least eαk −eα(k−1) = eαk ·  1 −1 eα  ≥ϵ · eαk for some constant ϵ = ϵ(α) > 0, for a total potential contribution of those balls of at least ΠB(t) ≥ϵ · µn · eαk. Now ΠB(t) > c · n whenever k > 1 α ln c ϵµ −1, so letting d = 1 α ln c ϵµ proves the theorem (recall that the probability is inherited from Lemma C.1 – the actual argument hee in the proof building on top of it is entirely deterministic). Lemma C.1. Assume the premise of Theorem 2.2 holds. Then Γα(x(t)) = O(n) with probability at least 1 −n−(3+ℓ) for any fixed non-negative ℓ, where the constant factor hidden in the big-Oh depends on ℓ. Proof. The proof is a minor adaptation of Theorem 5.3 on page 18 in Los and Sauerwald’s [LS21], which is a more detailed version in arXiv of their [LS22]. All references in Appendix C to their lemmas etc. are with respect to the numbering in their arXiv version [LS21]. As mention before, the purpose of the adaptation of [LS21]’s results is two-fold: (i) to incorporate our deletions and their respective probability bounds, and (ii) to improve the high-probability guarantee from the original 1 −1/n3 (which was sufficient for the purposes of [LS22, LS21]) to 1 −1/n3+ℓfor any arbitrary but fixed ℓ> 0. To show this result we adapt Theorem 5.3 in [LS21] as follows. The ℓin the exponent of the statement comes from Theorem 2.2. Theorem C.1 (cf. Theorem 5.3 in [LS21]). Consider any probability vector p that is (i) non- decreasing in i, i.e., pi ≤pi+1 and (ii) for constant ϵ, P i≥3n/4 pi ≥1/4+ϵ and P i≤n/4 pi ≥1/4−ϵ. 
Then, for any t ≥0 and α2 ≤β·ϵ 110, c = cϵ,α2 = 2 · 40 · 1283 · ϵ−7 · 4 · α−1 2 , Pr h ∩s∈[t,t+n log5(n)]{Γ(s) α2 ≤2cn} i ≥1 −n−(3+ℓ). Proof. The proof is almost identical to that of Theorem 5.3 in [LS21]. The chain of modifications starts with Lemma 5.5 in [LS21] where the restriction Γ(t) 1 ≤cn9 occurs. An application of Markov’s inequality turns the n9 into a probability 1/n8, which subsequent lemmas etc. further decay to 1/n3. The main idea to make the result work for us is to modify the restriction on Γ(t) 1 to Γ(t) 1 ≤cn9+ℓ, which accordingly will turn into a probability 1/n8+ℓ, only to subsequently be eroded to 1/n3+ℓ. Basically we have the exact same results with the exact same proofs as in in [LS21] but with some explicit constant parameters replaced with their respective original values plus ℓ. We will need to 30 make some equally straightforward modification of [LS21]’s results to make them compatible with our potential function analysis. We will use two instances of the potential function Γα which only differ in α. The first one, Γα1, uses α1 ≤β·ϵ 3 , the second one, Γα2, uses α2 ≤ α1 4·9.1. (Explanation: α1 and α2 must fulfil the requirements of the Potential Function Analysis in the previous section and of Lemma C.2.) From Lemma B.8 we can derive, for n and c large enough, i.e. c ≥4160002·n α·β3·ϵ9 , that Pr  Γα1(x(t)) ≥c · n9+ℓ < n−(8+ℓ). The next lemma proves a stronger statement for Γα2. Lemma C.2 (cf. Lemma 5.5 in [LS21]). For any t ≥0, if Γ(t) α1 ≤c·n9+ℓ, then, (i) |xi(t)| ≤9.1 α1 ·log(n) for all i ∈[n], (ii) Γ(t) α2 ≤n4/3, and (iii) |Γ(t+1) α2 −Γ(t) α2| ≤n1/3. Proof. First statement: For all i ∈[n], Γ(t) α1 = eα1·xi(t) + e−α1·xi(t) ≤c · n9+ℓ=⇒|xi(t)| ≤9.1 α1 · log(n). Second statement: Γ(t) α2 < 2 · n X i=1 exp  α2 · 9.1 α1 · log(n)  ≤2 · n · n1/4 < n4/3. Third statement: One of the xi(t) is increased or decreased by 1, and all xi(t) are (additionally) decreased or increased by 1/n as the average load is altered by the allocation or deletion in step t + 1. The change affects both parts, Φα and Ψα, of potential function Γα. In total, the absolute ℓ1-change in all xi(t) is upper-bounded by 4. Since exp(.) is convex, the hypothetical worst case is that the largest exponent increases by 4 while all others remain the same: |Γ(t+1) α2 −Γ(t) α2| ≤ exp(α2 · max{x(t) max + 4, −x(t) min −4}) ≤ e4α2 · exp(α2 · 9.1 α1 · log(n)) ≤n1/3 If we define c′ = 104·106 ϵ8·β4 and ϵ′ α = αβϵ 4 then we obtain the following claim by applying Theorem B.7 in the proof from [LS21]. Claim C.1 (cf. Claim 5.7 in [LS21]). For any t ≥0, E[Γ(t+1) α2 | Γ(t) α2, Γ(t) α2 ≥2c′ ϵ′α2 · n] ≤(1 − ϵ′ α2 2n ) · Γ(t) α2. Proof. The proof is identical to Claim 5.7 in [LS21]. Lemma C.3 (cf. Lemma 5.6 in [LS21]). For any t ≥n · log2(n), for constants c′ > 0, ϵ′ α2 > 0 defined as above, Pr  ∪s∈[t−n·log2(n),t]  Γ(s) α2 ≤2c′ ϵ′α2 · n   ≥1 −2 · c · n−(8+ℓ). Proof. The proof is almost identical to that of Lemma 5.6 in [LS21]. In fact the modifications immediately follow from those made to Lemma 5.5 (here: Lemma C.2). 31 We are now in a position to finish the proof of Theorem C.1, a part that essentially mimics Section 5.2, with just some few slightly adapted coefficients. We do of course not wish to copy a page or two from another paper, and therefore just briefly highlight the modifications. The application of Lemma 5.6 in [LS21] (here Lemma C.3) gives a lower bound of 1 −2cn−(8+ℓ) instead of 1 −2cn−8 for the probability. 
Further down, the application of the union bound gives Pr   \ u∈[t−n log2 n,t+n log5 n] {Γ(u) 1 ≤cn9+ℓ}  ≥1 −2 log5 n n7+ℓ. Then, the application of Theorem A.4 in [LS21] gives Pr h X(u) r ≥X(r) r + cn 2 i ≤exp  − c2n2/4 2 · (2n log5 n) · (2n1/3)  + 2 log5 n n7+ℓ ≤3 log5 n n7+ℓ. Further down we find that if a red phase starts at time r, then with probability 1 −3 log5 n n7+ℓ, Γ(u) 2 will always be ≤3cn 2 . Taking union bounds as in Section 5.2 in [LS21] yields Pr   [ r∈[s,t+n log5 n] [ u∈[r,t+n log5 n] {X(u) r > 3cn 2 }  ≤3 · log5 n n7+ℓ· (4n2 log10 n) ≤1 2n−4+ℓ. Concluding as in [LS21], with probability 1 −1 2n−4+ℓ−2cn−8+ℓ≥1 −n−4+ℓ, it holds that Γ(r) 2 ≤ 3cn 2 for all time-steps r which are within a red phase in [s, t + n log5 n] ⊆[t, t + n log5 n]. Since Γ(r) 2 ≤T ≤cn holds (deterministically) by definition for all time-steps r within a green phase, the theorem follows. This concludes the proof sketch of the Lemma. D Omitted Material from Section 3. We first present the proof of Lemma 3.2. We restate it for convenience: Lemma 3.2. Consider an h-valid configuration x and let x′ denote the (random) configuration after one step of our process with the insertion probability β(t) ≤ˆβ < 1. (a) If level ℓ∈{ 1, 2, . . . , ℓ∗} of configuration x is h-critical, then Pr h m(h) ℓ(x′) < m(h) ℓ(x) i . Pr h m(h) ℓ(x′) > m(h) ℓ(x) i ≥2. (4) (b) If level ℓ= ℓ∗+ 1 of configuration x is h-critical and if n is large enough, then Pr h m(h) ℓ(x′) < m(h) ℓ(x) i . Pr h m(h) ℓ(x′) > m(h) ℓ(x) i ≥√n. (5) Proof. For the sake of readability, we abuse notation in this proof by omitting the base height h when talking about the number of balls that have at least some level ℓ(i.e., we write mℓ(•) instead of m(h) ℓ(•)). Similarly, we refer to the notations h-valid and h-critical simply as valid/critical within the proof. 32 (a) Consider a critical level ℓ∈{ 1, 2, . . . , ℓ∗} of configuration x. Since x is valid, level ℓ−1 and ℓ+ 1 are valid. Using Lemma 3.1 and Observation 3.1 we calculate Pr [ mℓ(x′) < mℓ(x) ] Pr [ mℓ(x′) > mℓ(x) ] ≥1 −β(t) β(t) · mℓ(x) −mℓ+1(x) mℓ−1(x) −mℓ(x) 2 · n ≥1 −β β · αℓ/2 −αℓ+1 (αℓ−1 −αℓ/2)2 · n ≥1 −β β · αℓ/4 α2 ℓ−1 · n ≥2. The first inequality follows from Lemma 3.1. The second inequality uses the bound on β(t), that level ℓis critical, and that levels ℓ−1 and ℓ+1 are valid. The third and final inequalities follow from Observation 3.1. (b) For level ℓ∗+ 1 we similarly calculate Pr [ mℓ∗+1(x′) < mℓ∗+1(x) ] Pr [ mℓ∗+1(x′) > mℓ∗+1(x) ] ≥1 −β(t) β(t) · mℓ∗+1(x) −mℓ∗+1+1(x) mℓ∗+1−1(x) −mℓ∗+1(x) 2 · n ≥1 −β β · 1 α2 ℓ∗ · n = 1 −β β · n 144 log2(n) = Ω  n log2(n)  ≥√n. Compared to the proof of the previous statement, we bound the number of balls at level at least ℓ∗+1 in the numerator by 1 (level ℓ∗+1 being critical means it is non-empty, so there is at least one bin with a ball on level ℓ∗+ 1). The rest of the calculation applies the definition of αℓ∗(see Equation (2)) and simplifies. We next present the proof of Lemma 3.4. We restate it for convenience: Lemma 3.4. Let h(t) := ⌈mmax(t)/n⌉+ γ denote the base height at time t. Consider an initial configuration x(0) = x that is h(0)-safe. Fix the time T = n4 and assume n to be large enough. If β(t) ≤ˆβ < 1 for all t ∈{ 0, 1, . . . , T −1 }, then Pr [ ∀t ∈{ 0, 1, . . . , T −1 } : x(t) is h(t)-valid ] ≥1 −n−1. (6) Proof. Define the stopping time τ := min({ T } ∪{ t ∈N | x(t) is not h(t)-valid }). With this, we clearly have Pr [ ∀t ∈{ 0, 1, . . . 
, T −1 } : x(t) is h(t)-valid ] = Pr [ τ ≥T ], so it is sufficient to prove that Pr [ τ ≥T ] ≥1 −n−1. In the following, we say level ℓis valid at time t (critical/safe) if level ℓof configuration x(t) is h(t)-valid (-critical/-safe). We use the shorthand ˜mℓ(t) := m(h(t)) ℓ (x(t)). Since the base height h(t) is monotonously non-decreasing in t, we have ˜mℓ(t+1) = mh(t+1) ℓ (x(t+ 1)) ≤mh(t) ℓ (x(t + 1)) for any t ∈N0. Thus, if x(t) is h(t)-valid and level ℓis critical at time t, Lemma 3.2 implies Pr [ ˜mℓ(t + 1) < ˜mℓ(t) ] Pr [ ˜mℓ(t + 1) > ˜mℓ(t) ] ≥ Pr h m(h(t)) ℓ (x(t + 1)) < m(h(t)) ℓ (x(t)) i Pr h m(h(t)) ℓ (x(t + 1)) > m(h(t)) ℓ (x(t)) i ≥ ( 2 if ℓ∈{ 0, 1, . . . , ℓ∗} and √n if ℓ= ℓ∗+ 1. (31) If τ < T, there must be some level ℓthat is not valid at time τ and a minimal time t′ < τ such that level ℓis critical at any integral time t ∈[t′, τ). In particular, using the minimality of t′, we have for all integral t ∈[t′, τ), ˜mℓ(t′) = αℓ/2 ≤˜mℓ(t) < ˜mℓ(τ) = αℓ. That is, between time step t′ 33 and τ, the number of balls at level at least ℓstarted at αℓ/2 and increased to αℓwithout decreasing below αℓ/2. If this happens, we say level ℓcrosses [αℓ/2, αℓ) during [t′, τ), and denote this event as Cℓ,[t′,τ). We will show that for any level ℓand any time t′ event Cℓ,[t′,τ) is unlikely to happen. Using a union bound, this yields that, w.h.p., τ ≥T, as desired. To bound Pr  Cℓ,[t′,τ)  , we define a random walk W (ℓ,t′) that has an even larger probability to cross [αℓ/2, αℓ) during [t′, τ) and show that this larger probability is still small enough. To this end, fix a level ℓ∈{ 0, 1, . . . , ℓ∗+ 1 } and integral time t′ ∈[0, τ] ⊆[0, T]. Let p(ℓ,t) := Pr [ ˜mℓ(t + 1) = ˜mℓ(t) | x(t) ] denote the probability that the number of balls at level at least ℓ does not change from time t to time t+1 (note that this probability is a random variable depending on the random configuration x(t)). We define the random walk W (ℓ,t′) = (W (ℓ,t′) t )t∈[t′,T) over the discrete time interval [t′, T). The random walk’s start position is W (ℓ,t′) t′ := ˜mℓ(t′) = αℓ/2 and its movement from time t ∈[t′, T) to time t + 1 is defined via W (ℓ,t′) t+1 =      W (ℓ,t′) t with probability p(ℓ,t), W (ℓ,t′) t −1 with probability 1 −p(ℓ,t) · 2/3, and W (ℓ,t′) t + 1 with probability 1 −p(ℓ,t) · 1/3. (32) In other words, the random walk W (ℓ,t′) starts at time t′ at position ˜mℓ(t′) = αℓ/2, maintains its position whenever ˜mℓ(•) does not change, and the probability that it moves left is twice the probability that it moves right. For a time t ≥t′, we say W (ℓ,t′) crosses [αℓ/2, αℓ) during [t′, t) if W (ℓ,t′) reaches αℓbefore decreasing below αℓ/2, and denote this event as CW ℓ,[t′,t). Given the current configuration x(t) = x at time t ≥t′ and depending on how ˜mℓ(•) changes from time t to time t + 1, we can easily couple the random walk W (ℓ,t′) with our process as follows: • if ˜mℓ(•) does not change, W (ℓ,t′) remains stationary; • if ˜mℓ(•) increases, W (ℓ,t′) moves right; and • if ˜mℓ(•) decreases, W (ℓ,t′) moves left with prob. min { 1, 2 3/Pr [ ˜mℓ(t + 1) < ˜mℓ(t) | x(t) = x ] } and right otherwise. By this coupling, the random walk is never “left of” ˜mℓ(•) (i.e., for all t ≥t′ we have W (ℓ,t′) t ≥˜mℓ(t)). We now consider the crossing probabilities for the different levels ℓ∈{ 0, 1, . . . , ℓ∗+ 1 } for some integral time t′ ∈[0, τ) at which level 0 just became critical. For level ℓ= 0, Theorem 2.2 can be used to see that, w.h.p, this level will not become invalid at any time during the discrete interval [0, τ). 
For any other level ℓ, we use the fact that the ˜mℓ-value is upper-bounded by the position of the random walk W (ℓ,t′) during [t′, τ) (by the coupling above) together with the fact that the left-biased random walk W (ℓ,t′) is very unlikely to reach the right border before becoming again safe (by Lemma 3.3). Case 1: For level ℓ= 0 we do not need to considered the random walk. Instead, note that Theorem 2.2 allows us to choose a constant γ = γ(α0) such that the number of balls above height ⌈m(t)/n⌉+ γ is, with (arbitrarily) high probability, at most α0 = 1−β 128β · n. Thus, by choosing the probability high enough and by using a union bound over the T = poly(n) many time steps, we can ensure that, with probability at least 1 −n−6, level 0 does not become invalid at any time ∈{ 1, 2, . . . , T }. In particular, this yields Pr  C0,[t′,τ)  ≤Pr  C0,[t′,T)  ≤n−6. (33) 34 Case 2: For any level ℓ∈{ 1, 2, . . . , ℓ∗} and large enough n we get Pr  Cℓ,[t′,τ)  ≤Pr h CW ℓ,[t′,τ) i ≤Pr h CW ℓ,[t′,T) i Lem. 3.3 ≤ 21 −1 21+αℓ/2 −1 ≤ 2 21+αℓ/2 = 1 2αℓ/2 ≤ 1 2αℓ∗/2 = n−6. (34) Case 3: For level ℓ= ℓ∗+ 1 we get Pr  Cℓ∗,[t′,τ)  ≤Pr h CW ℓ∗,[t′,τ) i ≤Pr h CW ℓ∗,[t′,T) i Lem. 3.3 ≤ √n1 −1 √n1+αℓ∗+1/2 −1 ≤ √n √n1+αℓ∗+1/2 = n−6. (35) By this case distinction, we know that the probability that any single level that just became critical at time t′ ∈[0, τ) has probability at most n−6 to become critical within [0, τ). Taking a union bound over the ℓ∗+ 2 = log log(n) + Θ(1) many levels and T = n4 many time steps finishes the proof of Lemma 3.4. E Lower Discrepancy Bound when Deletions are Dominant. Theorem E.1 gives a lower bound that shows that our result from Theorem 2.1 is tight in the sense that there are insertion probability sequences that result in a logarithmic discrepancy. Our lower bound statement assumes that the potential Φα (see Section 2) is linear in n, which holds with high probability (see Lemma C.1). Similar to the results of [PTW15] the discrepancy is heavily dependent on the one-choice operations, ball deletions in our case. A long sequence of ball deletions creates deep holes in the bins and the frequency of ball insertions must be high enough to re-balance the load. Theorem E.1. Fix a step t and assume Φα(t) = O(n) for some constant c. Let T = t + n ln(n) 2+4ϵ and assume that for every τ with t ≤τ ≤T we have β(τ) = 1/2 −ϵ with ϵ < 1/2 being an arbitrary positive constant. Then the expected number of bins at the end of step T with load at least ⌊m(T)/n⌋+ ln(n)/2 is Ω(√n). We note that the above result also holds (with basically the same proof) for varying insertion probabilities as long as fewer than (T −t)·(1/2−ϵ) balls are inserted during the time interval [t, T]. The proof idea for Theorem E.1 is as follows: We first show that, as long as in step t we have Φ(t) ≤c·n for some constant c (which holds with high probability), there exists a constant fraction of bins with a load which is at least m(t)/n (Lemma E.1). If now during the next O(n log n) steps much more balls are deleted than inserted, the average load will go down and one of the bins with load at least m(t)/n will receive no ball deletions at all. This bin will have a discrepancy of O(log n). This holds even under the assumption that the bin does not receive any additional ball. Lemma E.1. Assume Φ ≤c · n for some constant c and let r = r(n) denote the fraction of bins with a load of at least ⌊m(t)/n⌋. Then r = Θ(1), i.e., a constant fraction of bins has load at least ⌊m(t) n ⌋. Proof. 
Let B1 denote the set of bins with a load of at least ⌊m(t)/n⌋and B2 the set of bins with fewer than ⌊m(t)/n⌋balls. Since r is defined as the fraction of bins in B1, we have |B1| = r · n and 35 |B2| = (1 −r) · n. This implies that there are at least (1 −r) · n holes below the level ⌊m(t) n ⌋and, in turn, at least (1 −r) · n balls above the average load of m(t)/n. Let m(t)/n+ℓbe defined as the average load of the bins in B1. Note that ℓ≥0 because m(t)/n is the average load and the bins in B1 have a higher load than the ones in B2. From the definition, it follows that ℓ≥n · (1 −r) r · n = 1 −r r . Since the potential function Φ is minimal when all bins in B1 have load ℓ, we obtain r·n X i=1 eα· 1−r r ≤Φ ≤c · n which implies r · eα· 1−r r ≤c. If we view r = r(n) as function in n and assume r(n) = o(1) then in r(n) · eα  1 r(n)−1  = r(n) · f(n) with f(n) = eα  1 r(n)−1  the exponent is a function growing as 1/r(n) = ω(1). This implies that regardless of the size of α (as long as it is positive) the f(n) grows exponentially quicker than the r(n) = o(1) decays, and so the whole r(n) · f(n) is in ω(1) and in particular not upper-bounded by the constant c. Since r(n) ≤1 ∈O(1), we conclude that r(n) = Θ(1). With this, we can prove Theorem E.1: Proof of Theorem E.1. First of all, a simple application of Chernoff bounds proves that, w.h.p., ⌊m(t)/n⌋≥⌊m(T)/n⌋+ ln(n)/2. Fix a bin i and define ℓ= T −t = n ln(n)/(2 + 4ϵ). Then for large n, the probability that during all steps τ with t ≤τ ≤T no ball is deleted from bin i can be bounded by (using the inequality 1 −x ≥e−2x for x ∈[0, 0.795])  1 −1 n (1−(1/2−ϵ))ℓ =  1 −1 n (1/2+ϵ)ℓ ≥exp  −(1 + 2ϵ) ℓ n  = exp (−ln(n)/2) = n−1/2. (36) From Lemma E.1 we get that, at time t, a constant fraction of bins have load at least ⌊m(t)/n⌋. And by the above calculation, the expected number of those bins from which no ball is deleted is Ω(√n). Thus, at time T, in expectation there are Ω(√n) bins that have a load of at least ⌊m(t)/n⌋≥⌊m(T)/n⌋+ ln(n)/2. 36
Balls and Bins and the Infinite Process with Random Deletions Petra Berenbrink∗ Tom Friedetzky† Peter Kling‡ Lars Nagel§ Abstract We consider an infinite balls-into-bins process with deletions where in each discrete step t a coin is tossed as to whether, with probability β(t) ∈(0, 1), a new ball is allocated using the Greedy[2] strategy (which places the ball in the lower loaded of two bins sampled uniformly at random) or, with remaining probability 1 -β(t), a ball is deleted from a non-empty bin chosen uniformly at random. Let n be the number of bins and m(t) the total load at time t. We are interested in bounding the discrepancy xmax(t) -m(t)/n (current maximum load relative to current average) and the overload xmax(t) -mmax(t)/n (current maximum load relative to highest average observed so far). We prove that at an arbitrarily chosen time t the total number of balls above the average is O(n) and that the discrepancy is O(log(n)). For the discrepancy, we provide a matching lower bound. Furthermore we prove that at an arbitrarily chosen time t the overload is log log(n) + O(1). For "good" insertion probability sequences (in which the average load of time intervals with polynomial length increases in expectation) we show that even the discrepancy is bounded by log log(n) + O(1). One of our main analytical tools is a layered induction, as per [ABKU99]. Since our model allows for rather more general scenarios than what was previously considered, the formal analysis requires some extra ingredients as well, in particular a detailed potential analysis. Furthermore, we simplify the setup by applying probabilistic couplings to obtain certain "recovery" properties, which eliminate much of the need for intricate and careful conditioning elsewhere in the analysis. 1 Introduction. Balls-into-bins processes have been formalized in the mathematical community since at least the early 18th century [dM18]. Their objective is, e.g., to allocate at random a set of balls into a number of bins such that the maximum number of balls per bin is minimized. These processes have been very well studied, with many results being largely folklore. However, even seemingly minor modifications of the protocol or the model tend to result in vastly different and often far more complex analyses as well as, occasionally, very surprising results. Apart from the undoubted usefulness as probabilistic paradigms, balls-into-bins processes have many applications, often under the bonnet as primitive black-box operations. Among others, this includes areas like (the obvious) load balancing and allocation of jobs to servers [CL97], routing in networks [CMMadH+98], hashing [CRSW13], dictionary data structures [DW05], design of file systems [LCB13], peer-to-peer networks [SMK+01], and many more. The most simple balls-into-bins process, which allocates m balls independently and uniformly at random to n bins, is very well understood. For example, if m = n then the maximum number of ∗ ( ). †Durham University, U.K. ( ). ‡Darmstadt ( ). §Loughborough University, U.K. ( ). 1 16 Oct 2025 balls (load) in any bin is, w.h.p.1, log(n)/ log log(n) · (1 + o(1)) [RS98]. In [ABKU99] the authors introduced the Greedy[d] process. Here, balls are allocated sequentially; each ball chooses d ∈N bins uniformly at random and is allocated to a least loaded of them. Greedy[1] is identical to the simple process described above. But if d ≥2, the maximum load is, w.h.p., reduced to log log(n)/ log(d) + O(1) [ABKU99] for m = n. 
This exponential decrease in the maximum load is often referred to as the power of two choices. Greedy[2] was later analyzed for m ≥n (the heavily loaded case), which led to the surprising result that, in contrast to Greedy[1], the difference between the maximum and average load is independent of m [BCSV06] (namely, w.h.p. it is log log(n)/ log(d) + O(1)). Balls-into-Bins with Deletions. We study Greedy[2] in the heavily loaded case where balls are not only inserted but also deleted. Specifically, we ask whether allocation via Greedy[2] can overcome the negative effect of random deletions (effectively "holes" allocated according to Greedy[1], which without insertions would cause a rift of depth p m n log n for large m). One can differentiate balls-into-bins with deletions depending on whether, each time a ball is inserted, the ball gets d fresh choices (insertion/deletion model) or whether the ball might be a previously deleted ball that is reinserted with its original d choices (reinsertion/deletion model). Recently, the authors of [BK22] showed that, somewhat surprisingly, even in the insertion/deletion model, an oblivious adversary2 can make sure that Greedy[2] does not overcome the negative effect of deletions: already for n = 4 bins, it can cause a maximum load of m/4+Ω(√m) with probability Ω(1). Intuitively, the lower bound uses the fact that if there is ever a bin i that contains far fewer (say k) balls than other bins, Greedy[2] is biased toward i for the next k throws. After filling the system to m balls with at least k insertions, the adversary exploits the insertion bias to perform k deletions that are now biased towards bin i. Adding replacement balls for the just deleted balls, one can show that enough of these land on bins ̸= i to cause the desired overload. Results in a Nutshell. We consider the insertion/deletion model for n ∈N bins with random but dynamic deletions, starting from an initially empty system. Each time step t, with probability β(t) a new ball with fresh 2 choices is inserted by Greedy[2]. Otherwise, with probability 1-β(t), a ball is deleted from a bin chosen uniformly at random among all non-empty bins. (All our results also hold for the protocol that does not delete from a random bin, but a ball chosen uniformly at random among all balls, see Appendix A.) Our main result can be seen as complementing this unfortunate lower bound of [BK22] for Greedy[2] with a positive result: In a model with random deletions (but still dynamic, that is, resulting in a fluctuating average load), Greedy[2]'s allocation overcomes the negative deletion effect as long as insertions dominate deletions ever so slightly in the recent past of a given time step t. This highlights a fundamental difference between our model and the one of [BK22]. Our result suggests that in many practical applications, where deletions (if at all) occur non-adversarially, Greedy[2] easily overcomes random Greedy[1]-type deletions. This holds true even if the system utilization (ratio between insertions and deletions) fluctuates significantly. See Section 1.1 for a full and formal description of our results. 1We say an event E happens with high probability (w.h.p.) if Pr [ E ] ≥1 -n-Ω(1). 2The adversary knows only the d choices and insertion time of each ball. Each time step it can either (a) insert a new ball with d fresh random bin choices or (b) delete the r-th most recently inserted ball (r chosen freely). 
In particular, it is oblivious to the exact load situation, since it cannot see to which of the d choices the allocation strategy (like Greedy[d]) assigned a given ball. 2 1.1 Our Contribution. The system state at any time can be modeled by a load vector x = (x1, . . . , xn) ∈Nn 0, where xi is the load of the i-th fullest bin. We use m(x) := ∥x∥1 to denote the total load of x and define, for clarity, xmax := x1 and xmin := xn as the maximum and minimum load of x, respectively. We say x is perfectly balanced if xmin, xmax ∈{ ⌊m(x)/n⌋, ⌈m(x)/n⌉}. The discrepancy disc(x) := xmax-m(x)/n of x is the difference between the current maximum and average load. In some cases, we also use the stronger absolute discrepancy adisc(x) := max { xmax -m(x)/n, m(x)/n -xmin } of x (the largest difference of any load to the current average load). With this, our process can be described as a random sequence x(t) t∈N0 of load vectors with total load m(t) := m x(t) , discrepancy disc(t) := disc x(t) , and absolute discrepancy adisc(t) := adisc x(t) at time t. We use mmax(t) := max { m(0), . . . , m(t) } to denote the maximum total load up to time t. The overload ovld(t) := xmax(t) -mmax(t)/n at time t is the difference between the current maximum and the highest average load up to time t. The height of a ball is its position in the bin modelled as a stack of balls counting from bottom to top. We assume a dynamic insertion probability β(t) that may change from step to step to any value in [0, 1]. In other words, insertions and deletions are governed by an infinite insertion probability sequence β(t) t∈N giving the (independent) probabilities of an insertion/deletion in each time step. We say value x ∈(0, 1) is bounded away from 0 (from 1) if x ≥0 + Ω(1) (x ≤1 -Ω(1)). While balls-into-bins processes typically aim to bound the discrepancy, this cannot work in general when deletions are involved. For example, the insertion probability sequence could contain a stretch that deletes m(t)/2 balls after some time t. Translating a folklore example (also mentioned in [BK22]) to our setting, deletions can be regarded as random "holes" allocated via Greedy[1]. This would cause a discrepancy of order ̃Θ( p m(t)/n), a bound trivially achieved by allocations via Greedy[1], even holds if the system starts out perfectly balanced. Thus, in this paper we aim to bound the overload. We characterize which (possibly local) properties an insertion probability sequence requires for Greedy[2] to achieve a small discrepancy. Also, in an infinite process it is quite impossible to always have a small discrepancy (or even overload). Thus, our main results (given below) all hold with high probability at an arbitrary (also super-exponentially large) point t in time: (a) Assume all insertion probabilities up to time t are bounded away from 0. Then the discrepancy at time t is O(log(n)) (Theorem 2.1) and the number of balls above average at time t is O(n) (Theorem 2.2). The bound on the discrepancy is tight if the insertion probabilities are bounded away from 1/2 from above in the previous O(n log(n)) time steps (see Appendix E). (b) Assume we are in a well (e.g., perfectly) balanced situation at time t. Then we can maintain an overload of log log(n) + O(1) for poly(n) steps as long as the insertion probabilities during that time are bounded away from 0 and 1. 
If, additionally, the insertion probabilities ensure an expected load increase for polynomially long subintervals (see definition of c-good time intervals in Section 3), we can maintain a discrepancy (instead of only overload) of log log(n) + O(1) (see Theorem 3.1 and Lemma 3.4 in Section 3 for the full, formal statements). (c) Our final result shows that we achieve an overload of log log(n) + O(1) at an arbitrary time t ∈N0 as long as the insertion probability sequence till that time is bounded away from 0 and 1 (Theorem 4.1). Similar to the previous result, if the insertion probabilities ensure an expected (possibly quite small) load increase for long subintervals during the previous n4 time steps, we even get a discrepancy of log log(n) + O(1) at time t. To the best of our knowledge, these are the first results for the discrete, heavily-loaded insertion/deletion setting that provide characterizations for when Greedy[2] achieves strong guarantees 3 on the discrepancy instead of only on the overload. Main Technical Contributions and Proof Idea. To prove that the discrepancy is O(log(n)) (see (a) above) we use a potential function argument similar to [PTW15, TW14]. To reduce the discrepancy to double-logarithmic (see (c) above) we use the excellent recovery properties of Greedy[2]: In [Czu00], Czumaj studied the recovery of Greedy[2] for a process that alternates between ball insertions and deletions, maintaining the total load invariant. He showed that the process is able to recover, i.e., reach a steady state distribution, in polynomial time. However, this does not reveal any properties (e.g., the discrepancy) of the steady state. In contrast to [Czu00] we cannot have a unique steady state as the total load varies over time. However, we can show via a path coupling argument that when we start two copies of our process in any two configurations of logarithmic discrepancy with the same number of balls - which we have at any time via (a) - it causes both configurations to converge to each other quickly (in o(n4) steps). This can be interpreted as a "typical" state to which all configurations with the same number of balls converge. Again, as in [Czu00] this does not yet reveal any further information about such a typical state. This is where (b) comes into play, which uses a layered induction to show that, starting from a certain class of configurations (with discrepancy at most log log(n)+c), the process remains in such a well-balanced configuration for a long time, Ω(n4) steps. That is, we maintain well-balancedness for longer than it takes to reach the typical state, implying that such typical states lie in this class of well-balanced configurations. Note that this deviates significantly from previous usages of layered induction in this area: For example, in [TW14] the authors used layered induction to reduce the discrepancy from O(log n) to O(log log n). Our layered induction is "merely" used to establish that a typical state (reached quickly due to the good recovery) has a low discrepancy. Apart from being flexible enough to deal with a rather general deletion model, our approach also allows for stronger probability guarantees (of order 1 -1/poly(n) instead of 1 -1/polylog(n) in [TW14]). In our layered induction we have to cope with two new problems in comparison with previous work: The total load m(t) changes over time, and we may not assume it to be linear in n. 
The classical layered induction proof of [ABKU99] heavily relies on the base case, which by means of a simple pigeon-hole argument states that there are at most n/c bins with c balls. This holds because in their setting the total load cannot exceed n. In our case, however, the total load is unbounded. We deal with this first problem by performing our layered induction only on the highest log log(n)+O(1) many load levels. This leads us to the second challenge, since the changing total load m(t) also implies that the (absolute) height of the considered levels changes over time. To deal with this, we carefully formulate our layered induction relative to the maximum average and use a flexible random walk argument to prove strong bounds on the number of balls above a certain level over the maximum average. This yields a bound on the overload measure, from which we derive the discrepancy bound - under the assumption that there was no large, recent drop of the total load. Note that in the case where such a large drop occurred, our lower bound mentioned in (a) shows that we cannot hope for a good discrepancy. To the best of our knowledge, this is the first layered induction for the case m ≫n as well as for a process with a changing load. 1.2 Related Work. There is a vast amount of literature on balls-into-bins games in a multitude of settings. For an overview see [Wie17] and for results with one random choice per ball see [BCN+15, BFK+18, LS23]. After a very brief overview of results with multiple choices we focus on processes with deletions. 4 Multiple Choice. The Greedy[d] process was introduced and analyzed in [ABKU99], although a similar result in a different setting had been shown in [KLMadH92]. In [BCSV06] the authors generalize the results to the heavily loaded case m ≫n. [TW14] presents a simpler analysis for the heavily loaded case, albeit showing a slightly worse upper bound on the maximum load. The author of [V ̈03] considers the always-go-left process Left[d] where bins are divided into d clusters of n/d bins each. Every ball randomly chooses one bin per clusters and is allocated to a least loaded among the chosen bins; ties are broken in favor of the leftmost cluster. This results in a maximum load of log log(n)/(d·φd) with φd ≤2 (φd ∈(1, 2) is the generalized golden ratio), w.h.p., a surprising improvement over [ABKU99]. For more recent results see [LSS22, LSS23, LSS24]. The potential function analysis for Part (a) is inspired by methods introduced in [PTW15] where the authors consider a very interesting variant of Greedy[d], which they call (1 + β)-process. Here each ball is allocated using Greedy[1] with probability 1 -β, and using Greedy[2] with the remaining probability β. The authors showed that the difference between maximum and average load is Θ(log(n)/β) for each β n. The infinite insertion/deletion model with adversarial deletions for the moderately loaded case is considered by [V ̈03] for the Left[d] process where the maximum system load is always bounded by h · n. He proves that the maximum (bin) load is ln ln(n)/(d · ln(φd)) + O(h) w.h.p. (again, φd ∈(1, 2) is the generalized golden ratio). Note that for h = Ω(log(n)), this is not better than the bound achieved by the trivial Greedy[1] allocation. The authors of [BCF+23] consider the moderately loaded case but for the reinsertion/deletion model. They introduce the Iceberg[d] process, combining Greedy[1] and Left[d]. 
Slightly simplified, the Iceberg[d] process allocates a ball via Greedy[1] if the target bin contains at most h + ̃O( √ h) balls that were allocated by Greedy[1]. Otherwise, either Left[d] is used (if there are currently fewer than n/d balls allocated by Left[d]) or the ball is allocated to bin 1. This process achieves a maximum load of h + log log(n)/(d log φd)) + ̃O( √ h). Note that this process must know h and keep track of the number of balls placed with specific strategies, making it difficult to use in distributed settings. In contrast, Greedy[d] places balls solely based on the selected bins' loads. The focus of [Czu00] is the analysis of the recovery time (the time it takes a process to move from an arbitrary to a "typical" state) in the heavily loaded case (m arbitrary). The author considers a setting starting with m arbitrarily placed balls followed by alternating deletions (either a random ball or from a random bin) and insertions. The recovery time under Greedy[d] allocation is proven to be polynomial in m and n. While parts of our analysis (Section 4) are inspired by this result, note that in our case the system load m fluctuates over time and that we do not only show fast recovery but actually characterize what the "typical" state looks like. As mentioned at the beginning of the introduction, [BK22] give lower bounds for Greedy[2] in the heavily loaded, adversarial insertion/deletion model. They also provide a more general lower 5 bound for any oblivious allocation strategy (like Greedy[2] or Left[2]) in the reinsertion/deletion model (with the system load capped at m): an oblivious adversary can force a maximum load of m/4 + mΩ(1) for n = 4 bins within the first poly(m) steps. On the positive side, they introduce the allocation strategy ModulatedGreedy for the insertion/deletion model, which achieves an overload of O(log(m)). To overcome their lower bound for Greedy[2], ModulatedGreedy decides probabilistically between two bin candidates, based on their current load. The algorithm must know global guarantees on the load situation (in particular, the maximum number of balls that will ever be present and the current maximum and minimum load). Queuing Systems. A prominent example of a queuing system is the supermarket model considered in, e.g., [VDK96, BCFV00, Mit01, LN05, LM05]. It can be seen as a variant of balls-into-bins process in continuous time. Here, balls typically arrive in a Poisson stream of rate λn and are allocated according to Greedy[d]. Bins process (delete) balls in first-in first-out order, with a processing time that is exponentially distributed with mean 1. These systems and their results can often be readily mapped to the balls-into-bins settings yielding results similar to those discussed above. However, the focus in queuing systems lies typically on properties like the balls' service times (the average time a ball spends in the system), how long it takes to reach an equilibrium state, or how the system behaves in the equilibrium. Note that the (expected) unit processing time together with λ 0. It states that the absolute discrepancy at any time t is, w.h.p., at most O(log n) higher than it was at the start. In particular, if the process starts with an absolute discrepancy of O(log n), it will almost certainly be O(log n) at time t. Theorem 2.1. Consider an insertion probability sequence β(t) t∈N bounded away from 0 by a constant β > 0. If α ≤β/16, then for any t ∈N0 we have E[Γα(x(t))] = O(n) · eα·adisc(0). 
Moreover, for any a ≥0 we have Pr adisc(t) -adisc(0) ≥a+2 α · ln n ≤n-a. In particular, if we start in a perfectly balanced situation with discrepancy 0 (e.g., empty system), the expected potential at an arbitrary time t is only linear. Equipped with the above, we can prove a strong upper bound on the number of balls that lie above the average at a given time t: Theorem 2.2. Fix an arbitrary step t. Let β ∈(0, 1] be a constant. Assume β(τ) ≥β for all 1 ≤τ ≤t and α ≤β/16. For any fixed non-negative land any fixed μ ∈(0, 1), there is a γ = γ(μ) so that, with probability at least 1 -n-(3+l), the number of balls above height ⌈m(t) n ⌉+ γ is at most μn. The proof is based on Lemma C.1 in Appendix C, an adaptation of a result in [LS22, LS21], which gives us a high-probability guarantee for the potential Γα(x(t)) being linear. In fact, the purpose of the adaptation of [LS22, LS21]'s results is two-fold: (i) to incorporate our deletions and their respective probability bounds, and (ii) to improve the high-probability guarantee from the original 1 -1/n3 (which was sufficient for the purposes of [LS22, LS21]). Note that the probability stated in Theorem 2.2 is inherited from Lemma C.1; apart from that, the theorem's actual proof is an entirely deterministic counting argument. The proof details may be found in Appendix C. 3 Maintaining a Well-balanced State. Before we state this section's main result (Theorem 3.1), we introduce the notion of c-good time intervals I for a given insertion probability sequence. Intuitively, such an interval guarantees us that the total system load increases in expectation in any sufficiently long subinterval of I. Definition 3.1 (c-good interval). Given a sequence β(t) (t ∈N) of insertion probabilities, we call a discrete time interval I c-good if for each subinterval (t1, t2] ⊆I with t2 -t1 ≥c · n we have Pt2 t=t1+1 β(t) /(t2 -t1) ≥1 2(1 + ε) for a constant ε > 0. Theorem 3.1 states that, w.h.p., our process, when started in a perfectly balanced configuration, remains well-balanced for at least n4 steps.3 Here, well-balanced (this is formalized later in Definition 3.2 and Lemma 3.4) is measured relative to the maximum average observed so far for general insertion probability sequences and relative to the current average if the considered time window is O(1)-good. Theorem 3.1. Consider an insertion probability sequence β(t) t∈N bounded away from 0 and 1. Let x(0) be a perfectly balanced configuration. Then, w.h.p., xmax(t) ≤mmax(t)/n+log log n+O(1) for any t ≤n4. Moreover, if there is a constant c ∈N such that the time interval [0, n4] is c-good, then, w.h.p., xmax(t) ≤m(t)/n + log log n + O(1) for any t ≤n4. 3In fact, our proof shows that starting from any well-balanced configuration, the situation remains well-balanced for polynomial many steps. 7 To prove Theorem 3.1, we conduct a layered induction. A major difficulty stems from the fact that we have to deal with a permanently changing average load. For this an important ingredient is Theorem 2.2, which gives us a high probability bound on the number of balls above the current average. This enables us to derive a base case for our layered induction, independent of the changing average load. The layered induction works roughly as follows: For each level l∈{ 0, 1, . . . , log log(n) + Θ(1) } we consider the number of balls at height greater than average +l. For each level we define a critical threshold αlthat decreases doubly exponentially with l. Moreover, each level has a safe threshold αl/2. 
We show that, as long as all levels remain below their critical threshold, any level that is above its safe threshold has a strong drift back towards the safe threshold. A standard result on random walks then shows that it is unlikely that during the poly(n) many considered time steps, any of the log log(n) + Θ(1) many levels (that all start below their respective safe threshold) crosses the critical interval [αl/2, αl). Formalizing Well-balanced Configurations. In the following, let ˆβ ∈(0, 1) be a constant upper bound on the insertion probability of our process, such that β(t) ≤ˆβ for all considered time steps t. For a height h ∈N0 and a configuration x define mh(x) as the number of balls that have height at least h in configuration x. For our process starting in an initial configuration x(0), we use the shorthand mh(t) := mh(x(t)). In our analysis, it will be helpful to measure the number of balls at or above some level l, which, in turn, is above a given base height h (which might vary over time). To this end, we introduce the notation m(h) l(x) := mh+l(x) and m(h) l(t) := mh+l(t). We consider l∗+ 2 = log log(n) + Θ(1) many levels l∈{ 0, 1, . . . , l∗+ 1 } with their critical thresholds αldefined as follows: αl:=                  1-ˆβ 128ˆβ · n if l= 0, 32ˆβ 1-ˆβ · α2 l-1/n if αl-1 > r 3(1-ˆβ) 2ˆβ · n log(n), 12 log(n) if l= l∗:= min l αl-1 ≤ r 3(1-ˆβ) 2ˆβ · n log(n) , and 24 if l= l∗+ 1. (2) Note that the recursive definition ensures l∗= log log(n) + Θ(1). Moreover, it implies the following relations between the critical thresholds αland αl-1: Observation 3.1. For any l∈{ 1, 2, . . . , l∗+ 1 } and large enough n we have 8ˆβ 1 -ˆβ · α2 l-1 n ≤αl≤1 4 · αl-1. (3) Using the critical threshold αl, we define the valid interval of level l∈{ 0, 1, . . . , l∗+ 1 } as Vl:= [0, αl). Similarly, we define the level's safe interval as Sl:= [0, αl/2) and its critical interval as Cl:= [αl/2, αl). Definition 3.2. For a base height h ∈N0 and level l∈{ 0, 1, . . . , l∗+ 1 } we say level lof configuration x is h-valid if m(h) l(x) ∈Vl, h-safe if m(h) l(x) ∈Sl, and h-critical if m(h) l(x) ∈Cl. Configuration x is h-valid/ -safe/ -critical if all levels 0 to l∗+ 1 are h-valid/-safe/-critical. In proofs, we sometimes omit the base height if it is clear from the context. 8 Analysis. We start with a simple lemma that provides the probabilities for the increase and decrease of the number of balls at or above a given height h. Lemma 3.1. For any configuration x and any height h ∈N we have (a) Pr [ mh(t + 1) > mh(t) | x(t) = x ] = β(t) · mh-1(t) -mh(t) 2/n2 and (b) Pr [ mh(t + 1) m(h) l(x) i ≥2. (4) (b) If level l= l∗+ 1 of configuration x is h-critical and if n is large enough, then Pr h m(h) l(x′) m(h) l(x) i ≥√n. (5) The proof of Lemma 3.2 may be found in Appendix D. Next we formulate an auxiliary lemma about the probability that a random walk on integers reaches position b > 0 before position -a 0. We will apply this lemma to the values m(h) l(•) (basically corresponding to the random walks) to show that these values are unlikely to cross the critical interval Cl(on which, by Lemma 3.2, m(h) l(•) is biased by a factor of at least 2 towards the safe left side). Lemma 3.3. Consider a (not necessarily i.i.d.) random walk (St)t∈N0 on Z with initial position S0 = 0 and position St = Pt i=1 Xi after t steps with step width Xi ∈{ -1, 0, +1 } at time i. Let F = (Ft)t∈N0 denote the random walk's natural filtration. 
Assume there is r ∈R>0 \ { 1 } such that for all t ∈N, Pr [ Xt = -1 | Ft-1 ] Pr [ Xt = +1 | Ft-1 ] = r. For x ∈Z define the stopping time τx := inf { t ∈N0 | St = x } indicating when the random walk reaches x for the first time. Then, for any a, b > 0, Pr [ τ+b 0. Consider the random variable Y = Pt2 t=t1+1 Yt where Yt = 1 with probability β(t) and Yt = 0 otherwise. Note that Y counts the number of balls added in the time interval (t1, t2]. By c-goodness we have E[Y ] ≥(1 + ε) · t2-t1 2 . A simple application of Chernoff yields Pr Y 0. (8) Thus, we can take a union bound over the poly(n) many subintervals (t1, t2] ⊆[0, n4] of length at least c·n to get that, w.h.p., the average load at the end of the subinterval is at least the average load at its beginning. All other subintervals have length 0 in Definition 3.1 is an artifact of our analysis technique and can be reduced to o(1) or even eliminated. The same holds for our assumption that the insertion probability must be bounded away from 1 (note that for β = 1 the result follows from [BCSV06]). We note that the first of the two results in the theorem also holds when first m ≥n balls are inserted and the process then alternates between insertions and deletions. Analysis Overview. The proof idea of Theorem 4.1 is as follows: For any time t ∈N0, Theorem 2.1 implies (w.h.p.) a logarithmic absolute discrepancy at time t′ := t -n4. We then use a path coupling argument similar to [Czu00] to show that our process causes any load situation with logarithmic absolute discrepancy to recover quickly to a "typical" state. More exactly, any two (different) such load situations at time t′ can be coupled such that (w.h.p.) they become identical at time t. At this point however, we don't know how such a "typical" looks like (and whether it has small maximal load). This is where Theorem 3.1 comes into play. It tells us that if we start in a perfectly balanced load situation at time t′, our process maintains (w.h.p.) a double-logarithmic maximal load for at least n4 steps. A simple union bound over these two high probability bounds then implies that the "typical" load situation at time t must also have a double-logarithmic maximal load. Analysis. We now formalize the above idea. To this end, we first introduce a measure for the similarity of two given load situations. Definition 4.1. Consider two load vectors x, y over n ∈N bins with identical total loads ∥x∥1 = ∥y∥1. The transformation distance between x and y is ∆(x, y) := ∥x -y∥1/2. Note that the transformation distance obeys the identity ∆(x, y) = Pn i=1 max { 0, xi -yi } and corresponds to the minimal number of ball movements to transform x into y (and vice versa). Moreover, any load vector x can be transformed into the perfectly balanced load vector by moving the at most n · disc(x) ≤n · adisc(x) balls above average. As an immediate consequence, we get the following observation. Observation 4.1. For two load vectors x, y over n ∈N bins with identical total loads ∥x∥1 = ∥y∥1, we have ∆(x, y) ≤2n · max { adisc(x), adisc(y) }. The following lemma leverages a result from [Czu00] to show that one step of our process tends to reduce transformation distance between two given load vectors. In [Czu00], the author considers basically the same setting as ours, but with guaranteed alternating insertions/deletions. Lemma 4.1 basically shows that their proof transfers directly to our more general setting with insertion probability sequences. 11 Lemma 4.1. 
Consider two load vectors x, y over n ∈N bins with identical total loads ∥x∥1 = ∥y∥1. Assume ∆(x, y) = 1 and let x′ and y′ denote the load vectors after applying one step of our process with insertion probability β ∈[0, 1] to x and y, respectively. There is a coupling of x′ and y′ such that (a) E[∆(x′, y′)] ≤∆(x, y) = 1 and (b) Pr [ ∆(x′, y′) ̸= ∆(x, y) ] ≥(1 -β)/n. Proof. For the random decision whether to insert or delete a ball in x and y, we use the identity coupling that either inserts a ball in both processes (with probability β(t)) or deletes a ball in both processes (with probability 1-β(t)). It remains to couple the random bin choices in a deletion step and the two random bin choices of Greedy[2] in an insertion step. Assume the considered step is a deletion step. Then we use the first coupling described in [Czu00, Section 6] (who consider alternating insertions/deletions instead of our more flexible insertion probability sequences). In this case, [Czu00, Claims 6.1 and 6.2] together yield E[∆(x′, y′) | Deletion Step] ≤∆(x, y) = 1 and Pr [ ∆(x′, y′) ̸= ∆(x, y)|Deletion Step ] ≥1/n. Note that this already yields the lemma's second statement, since a deletion step occurs with probability 1 -β. Similarly, for an insertion step, we use the second coupling described in [Czu00, Section 6], where the exact same insertion rule is used (again, in a setting with alternating insertions/deletions). In this case, the proof of [Czu00, Lemma 5.1] implies E[∆(x′, y′) | Insertion Step] ≤∆(x, y) = 1. Together, we get the desired result. The next lemma characterizes the expected and high probability crossing time of a simple, αlazy reflecting random walk on { 0, 1, . . . , D } (i.e., a random walk that has the same probability to move left/right and that stays put with probability α or if it tries to move out of bounds). We use the lemma for our coupling result in Lemma 4.3. Lemma 4.2. Let D ∈N0 and α ∈[0, 1). Consider a simple, α-lazy reflecting random walk (Wt)t∈N0 on { 0, 1, . . . , D } started in W0 = D. Let T := min { t ∈N | Wt = 0 }. Then E[T] = D·(D+1)/(1α). Moreover, for any n ∈N and a > 0 we have Pr [ T ≥a · log(n) · 2E[T] ] ≤1/n-a. Proof. Let Ei denote the expected time for the random walk to reach 0 if started at position i ∈{ 0, 1, . . . , D }. Clearly, E0 = 0 and ED = E[T]. Note that we have the recurrence relation Ei = 1+Ei ·α+(Ei-1 +Ei+1)·(1-α)/2 for i ∈{ 1, 2, . . . , D -1 } and ED = 1+ED-1 ·(1-α)/2+ ED ·(1+α)/2. From this one can first deduce Ei = 2(D-i+1)/(1-α)+Ei-1 for i ∈{ 1, 2, . . . , D } and, then, E[T] = ED = D · (D + 1)/(1 -α). For the high probability bound, we observe that Markov's inequality implies Pr [ T ≥2E[T] ] ≤ 1/2. Thus, after at most a · log(n) repetitions of such phases of length 2E[T], the random walk has reached 0 with probability at least (1/2)a·log(n) = n-a. The following lemma shows that it is possible to couple two instances of our process with a logarithmic absolute discrepancy; they will be in the same distribution after polynomial time. Lemma 4.3. Consider two instances of our random processes x(t) t∈N0 and y(t) t∈N0 with ∥x(0)∥1 = ∥y(0)∥1 and adisc(x(0)), adisc(y(0)) ≤3 α · log n. Assume both processes use the same insertion probability sequence β(t) t∈N0 bounded away from 0 and 1. Then there is a coupling of x(t) t∈N0 and y(t) t∈N0 for whose coupling time τ := min t ∈N0 ∆ x(t), y(t) = 0 we have, w.h.p., τ = O n3 · (log n)3 . 12 Proof. Consider any time t ∈N0 and define ∆(t) := ∆ x(t), y(t) . There is a sequence x(t) = x(0)(t), x(1)(t), . . . 
The following lemma shows that two instances of our process with logarithmic absolute discrepancy can be coupled such that they become identical after polynomial time.

Lemma 4.3. Consider two instances (x(t))_{t∈N_0} and (y(t))_{t∈N_0} of our random process with ∥x(0)∥_1 = ∥y(0)∥_1 and adisc(x(0)), adisc(y(0)) ≤ (3/α)·log n. Assume both processes use the same insertion probability sequence (β(t))_{t∈N_0} bounded away from 0 and 1. Then there is a coupling of (x(t))_{t∈N_0} and (y(t))_{t∈N_0} for whose coupling time τ := min{t ∈ N_0 | ∆(x(t), y(t)) = 0} we have, w.h.p., τ = O(n³·(log n)³).

Proof. Consider any time t ∈ N_0 and define ∆(t) := ∆(x(t), y(t)). There is a sequence x(t) = x^(0)(t), x^(1)(t), ..., x^(∆(t))(t) = y(t) of load vectors such that for all indices i ∈ {1, 2, ..., ∆(t)} we have ∆^(i)(t) := ∆(x^(i−1)(t), x^(i)(t)) = 1. By Lemma 4.1(a), there is a coupling of each pair (x^(i−1)(t + 1), x^(i)(t + 1)) such that E[∆^(i)(t + 1)] ≤ 1. This implies a coupling of x(t + 1) and y(t + 1) with

E[∆(t + 1)] ≤ E[Σ_{i=1}^{∆(t)} ∆^(i)(t + 1)] = Σ_{i=1}^{∆(t)} E[∆^(i)(t + 1)] ≤ ∆(t). (9)

Similarly, Lemma 4.1(b) implies

Pr[∆(t + 1) ≠ ∆(t) | ∆(t) > 0] ≥ (1 − β(t))/n. (10)

Consider the random process (∆(t))_{t∈N_0}. Let D := n·(10/α)·log n = O(n·log n) and define the stopping time

T_1 := min{t ∈ N_0 | ∆(t) > D}. (11)

Let β̄ := sup{β(t) | t ∈ N_0} < 1. By (9) and (10), up to time T_1 the process (∆(t)) is majorized by a lazy reflecting random walk (W(t)) on {0, 1, ..., D} as in Lemma 4.2, started at D and with laziness parameter 1 − (1 − β̄)/n. Let T_2 denote the first time this walk reaches 0. If T_1 > T_2, the majorization implies ∆(T_2) ≤ W(T_2) = 0 and, thus, τ ≤ T_2.

By applying Theorem 2.1 with a = 5 to the first n^4 time steps and using a union bound, w.h.p. both (x(t))_{t∈N_0} and (y(t))_{t∈N_0} maintain an absolute discrepancy of at most adisc(0) + (7/α)·log n = (10/α)·log n during the first n^4 steps. By Observation 4.1, this implies a transformation distance of at most n·(10/α)·log n = D and, thus, T_1 > n^4. By Lemma 4.2, w.h.p., T_2 = O(n³·(log n)³). Thus, by a union bound we have, w.h.p., T_1 ≥ T_2 and T_2 = O(n³·(log n)³). As argued above, together these imply τ ≤ T_2 = O(n³·(log n)³).

With the coupling at hand, we are ready to prove our main result.

Proof of Theorem 4.1. Note that for t ≤ n^4, the statement follows immediately from Theorem 3.1. For larger t, Theorem 2.1 gives us, w.h.p., x_max(t − n^4) − x_min(t − n^4) ≤ 2c·log n. By this and by shifting each time step forward by n^4 steps, it is sufficient to prove the theorem's statement for t = n^4 starting from an initial load vector x(0) with x_max(0) − x_min(0) ≤ 2c·log n. Define an initial load vector y(0) with total load ∥y(0)∥_1 = m(0) that is perfectly balanced. By Lemma 4.3 there is a coupling of (x(t))_{t∈N_0} and (y(t))_{t∈N_0} such that, w.h.p., x(t) = y(t) for all t = Ω̃(n³) (so, in particular, for t = n^4). Note that Theorem 3.1 applies to (y(t))_{t∈N_0}. This gives us the corresponding high-probability guarantees from Theorem 3.1 for y(n^4) which, by leveraging the coupling, hold w.h.p. for x(n^4), finishing the proof.

Acknowledgments. Petra Berenbrink's research was funded by the DFG project Distributed and Collaborative Systems of Agents (project number 411362735) as part of the Research Unit (Forschungsgruppe) Algorithms, Dynamics, and Information Flow in Networks (FOR 2975).

References

[ABKU99] Yossi Azar, Andrei Z. Broder, Anna R. Karlin, and Eli Upfal. Balanced allocations. SIAM Journal on Computing, 29(1):180-200, 1999.

[ABS98] Micah Adler, Petra Berenbrink, and Klaus Schröder. Analyzing an infinite parallel job allocation process. In Proceedings of the European Symposium on Algorithms (ESA), pages 417-428, 1998.

[ANS22] Dan Alistarh, Giorgi Nadiradze, and Amirmojtaba Sabour. Dynamic averaging load balancing on cycles. Algorithmica, 84(4):1007-1029, 2022.

[BCF+23] Michael A. Bender, Alex Conway, Martin Farach-Colton, William Kuszmaul, and Guido Tagliavini. Tiny pointers. In Proceedings of the Symposium on Discrete Algorithms (SODA), pages 477-508, 2023.

[BCFV00] Petra Berenbrink, Artur Czumaj, Tom Friedetzky, and Nikita D. Vvedenskaya. Infinite parallel job allocation (extended abstract). In Proceedings of the Symposium on Parallel Algorithms and Architectures (SPAA), pages 99-108, 2000.

[BCN+15] Luca Becchetti, Andrea E. F. Clementi, Emanuele Natale, Francesco Pasquale, and Gustavo Posta. Self-stabilizing repeated balls-into-bins. In Proceedings of the 27th Symposium on Parallelism in Algorithms and Architectures (SPAA), pages 332-339, 2015.
[BCSV06] Petra Berenbrink, Artur Czumaj, Angelika Steger, and Berthold Vöcking. Balanced allocations: The heavily loaded case. SIAM Journal on Computing, 35(6):1350-1385, 2006.

[BFK+18] Petra Berenbrink, Tom Friedetzky, Peter Kling, Frederik Mallmann-Trenn, Lars Nagel, and Chris Wastell. Self-stabilizing balls and bins in batches: The power of leaky bins. Algorithmica, 80(12):3673-3703, 2018.

[BFL18] G. Brightwell, M. Fairthorne, and M. J. Luczak. The supermarket model with bounded queue lengths in equilibrium. Journal of Statistical Physics, 173:1149-1194, 2018.

[BHH+23] Petra Berenbrink, Lukas Hintze, Hamed Hosseinpour, Dominik Kaaser, and Malin Rau. Dynamic averaging load balancing on arbitrary graphs. In Proceedings of the International Colloquium on Automata, Languages, and Programming (ICALP), pages 18:1-18:18, 2023.

[BK22] Nikhil Bansal and William Kuszmaul. Balanced allocations: The heavily loaded case with deletions. In Proceedings of the 63rd Symposium on Foundations of Computer Science (FOCS), pages 801-812, 2022.

[BLP10] Maury Bramson, Yi Lu, and Balaji Prabhakar. Randomized load balancing with general service time distributions. In Proceedings of the International Conference on Measurement and Modeling of Computer Systems, pages 275-286, 2010.

[CFM+98] Richard Cole, Alan M. Frieze, Bruce M. Maggs, Michael Mitzenmacher, Andréa W. Richa, Ramesh K. Sitaraman, and Eli Upfal. On balls and bins with deletions. In Proceedings of Randomization and Approximation Techniques in Computer Science (RANDOM), pages 145-158, 1998.

[CL97] C. Xu and F. C. M. Lau. Load Balancing in Parallel Computers: Theory and Practice. Springer New York, NY, 1997.

[CMMadH+98] Richard Cole, Bruce M. Maggs, Friedhelm Meyer auf der Heide, Michael Mitzenmacher, Andréa W. Richa, Klaus Schröder, Ramesh K. Sitaraman, and Berthold Vöcking. Randomized protocols for low-congestion circuit routing in multistage interconnection networks. In Proceedings of the Symposium on Theory of Computing (STOC), pages 378-388, 1998.

[CRSW13] L. Elisa Celis, Omer Reingold, Gil Segev, and Udi Wieder. Balls and bins: Smaller hash families and faster evaluation. SIAM Journal on Computing, 42(3):1030-1050, 2013.

[Czu00] Artur Czumaj. Recovery time of dynamic allocation processes. Theory of Computing Systems, 33(5/6):465-487, 2000.

[dM18] A. de Moivre. The Doctrine of Chances. Woodfall, London, United Kingdom, 1718.

[DW05] Martin Dietzfelbinger and Christoph Weidling. Balanced allocation and dictionaries with tightly packed constant size bins. In Automata, Languages and Programming, pages 166-178, 2005.

[KLMadH92] Richard M. Karp, Michael Luby, and Friedhelm Meyer auf der Heide. Efficient PRAM simulation on a distributed memory machine. In Proceedings of the 24th Symposium on Theory of Computing (STOC), pages 318-326, 1992.

[LCB13] Paul Hermann Lensing, Toni Cortes, and André Brinkmann. Direct lookup and hash-based metadata placement for local file systems. In Proceedings of the 6th International Systems and Storage Conference, 2013.

[LM05] Malwina Luczak and Colin McDiarmid. On the power of two choices: Balls and bins in continuous time. The Annals of Applied Probability, 15(3):1733-1764, 2005.

[LN05] Malwina Luczak and James Norris. Strong approximation for the supermarket model. The Annals of Applied Probability, 15(3):2038-2061, 2005.

[LS21] Dimitrios Los and Thomas Sauerwald. Balanced allocations with incomplete information: The power of two queries. CoRR, abs/2107.03916, 2021.

[LS22] Dimitrios Los and Thomas Sauerwald. Balanced allocations with incomplete information: The power of two queries.
In Proceedings of the Innovations in Theoretical Computer Science Conference (ITCS), 2022.

[LS23] Dimitrios Los and Thomas Sauerwald. Tight bounds for repeated balls-into-bins. In Proceedings of the Symposium on Theoretical Aspects of Computer Science (STACS), pages 1-22, 2023.

[LSS22] Dimitrios Los, Thomas Sauerwald, and John Sylvester. Balanced allocations: Caching and packing, twinning and thinning. In Proceedings of the 33rd Symposium on Discrete Algorithms (SODA), pages 1847-1874, 2022.

[LSS23] Dimitrios Los, Thomas Sauerwald, and John Sylvester. Balanced allocations with heterogeneous bins: The power of memory. In Proceedings of the Symposium on Discrete Algorithms (SODA), pages 4448-4477, 2023.

[LSS24] Dimitrios Los, Thomas Sauerwald, and John Sylvester. The power of filling in balanced allocations. SIAM Journal on Discrete Mathematics, 38(1):529-565, 2024.

[MBvLW16] Debankur Mukherjee, Sem C. Borst, Johan van Leeuwaarden, and Phil Whiting. Universality of power-of-d load balancing schemes. SIGMETRICS Performance Evaluation Review, 44(2):36-38, 2016.

[Mit01] Michael Mitzenmacher. The power of two choices in randomized load balancing. IEEE Transactions on Parallel and Distributed Systems, 12(10):1094-1104, 2001.

[PTW15] Yuval Peres, Kunal Talwar, and Udi Wieder. Graphical balanced allocations and the (1 + β)-choice process. Random Structures & Algorithms, 47(4):760-775, 2015.

[RS98] M. Raab and A. Steger. "Balls into Bins": A simple and tight analysis. In Proceedings of the 2nd International Workshop on Randomization and Approximation Techniques in Computer Science (RANDOM), pages 159-170, 1998.

[SMK+01] Ion Stoica, Robert Morris, David Karger, M. Frans Kaashoek, and Hari Balakrishnan. Chord: A scalable peer-to-peer lookup service for internet applications. In Proceedings of the Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications, pages 149-160, 2001.

[TW14] Kunal Talwar and Udi Wieder. Balanced allocations: A simple proof for the heavily loaded case. In Automata, Languages, and Programming: 41st International Colloquium (ICALP 2014), Part I, volume 8572 of Lecture Notes in Computer Science, pages 979-990. Springer, 2014. Full version available on arXiv.

[Vöc03] Berthold Vöcking. How asymmetry helps load balancing. Journal of the ACM, 50(4):568-589, 2003.

[VDK96] Nikita Vvedenskaya, Roland Dobrushin, and Fridrikh Karpelevich. Queueing system with selection of the shortest of two queues: An asymptotic approach. Problemy Peredachi Informatsii, 32(1):20-34, 1996.

[Wie17] Udi Wieder. Hashing, load balancing and multiple choice. Foundations and Trends in Theoretical Computer Science, 12(3-4):275-379, 2017.

Appendix

A Deletion of a Random Ball.

Consider an insertion probability sequence (β(t))_{t∈N}. Let X = (x(t))_{t∈N_0} be the process where balls are deleted from randomly chosen non-empty bins during a deletion step. Remember that configuration x(t) is the load vector after step t, with x_i(t) denoting the load of the i-th fullest bin. Similarly, let Y = (y(t))_{t∈N_0} be the process in which every ball is deleted with the same probability during a deletion step. Our goal is to show that process Y is not worse (w.r.t. discrepancy/overload) than process X. We formalize this using the standard notion of majorization: For a configuration x let S_k(x) = Σ_{i=1}^k x_i be the total load of the k fullest bins. We say a configuration x majorizes configuration y if S_k(y) ≤ S_k(x) for all k ∈ {1, 2, ..., n}.
Note that this immediately implies that the maximal load of y is at most the maximal load of x. Thus, we can formalize the desired result via the following theorem:

Theorem A.1. Starting from an empty system, there exists a coupling between processes X and Y such that x(t) majorizes y(t) for all t ∈ N_0.

Coupling. Let us define a suitable coupling between processes X and Y. To this end, we use the identity coupling for the decision whether a ball is allocated (with probability β(t)) or deleted (with probability 1 − β(t)). As a result, the total load in both processes remains the same at any time; we denote it by m(t). It remains to couple the random choices for allocation and for deletion steps from configuration x(t) to configuration x(t + 1):

Allocation Step: Here we use the identity coupling. That is, both processes use the same two random bin choices for Greedy[2].

Deletion Step: Here we number the m(t) balls in both configurations x(t) and y(t) from left to right (and arbitrarily within each bin). We then draw a uniformly random number z(t) from [0, 1) and use it to determine the ball deletions as follows:

(a) Y deletes the ball i ∈ {1, ..., m(t)} if and only if (i − 1)/m(t) ≤ z(t) < i/m(t).

(b) X deletes the ball j ∈ {1, ..., m(t)} if and only if Σ_{j′<j} q_{j′}(t) ≤ z(t) < Σ_{j′≤j} q_{j′}(t), where q_{j′}(t) denotes the probability that X deletes ball j′ (a ball in a bin of load ℓ is deleted with probability 1/(k(t)·ℓ), where k(t) is the number of non-empty bins; in particular, these probabilities are monotonously non-decreasing from left to right).

Observation A.1. Consider a coupled deletion step as above. Let r and s denote the bins from which X and Y delete a ball, respectively, and let j and i denote the balls deleted by X and Y, respectively. Then i ≤ j.

Proof. Note that ball j indeed lies in bin r, since z(t) ≥ Σ_{j′≤S_r(x(t))} q_{j′}(t) would imply that ball j lies in a bin > r. The inequality i ≤ j is a simple consequence of our coupling, since the probabilities for ball i being deleted in Y are constant (w.r.t. i) while the probabilities for ball j being deleted in X are monotonously non-decreasing (w.r.t. j).

With this, we are ready to prove the main result of this section.

Proof of Theorem A.1. We use the coupling defined above and prove the statement via induction over t, with the trivial base case S_k(x(0)) = 0 = S_k(y(0)) for all k. For the inductive step we show that the coupling maintains majorization when going from x(t) to x(t + 1) during time step t + 1. If a ball is inserted during time step t + 1, this follows immediately from [ABKU99, Lemma 3.4]. Thus, in the rest of this proof we consider a deletion during time step t + 1. Moreover, as in Observation A.1, we use (a) r and s to denote the bins from which X and Y delete a ball in step t, respectively; and (b) j and i to denote the balls deleted by X and Y, respectively.

A plateau of a configuration x is a maximal set of consecutive bins having the same load. Note that when going from x(t) to x(t + 1) by deleting a ball from bin r, we have to re-sort the bins after the deletion. This will swap bin r with the rightmost bin r′ of the plateau in which r lies. In other words, we can think of X as deleting a ball from some bin r′ ≥ r instead of bin r. Formally, we get x(t + 1) = x(t) − e_{r′}, where e_i denotes the i-th unit vector in R^n. Similarly, Y deletes a ball from the rightmost bin s′ ≥ s in the plateau containing bin s, yielding y(t + 1) = y(t) − e_{s′}.

So assume x(t) majorizes y(t). That is, S_k(y(t)) ≤ S_k(x(t)) for all k ∈ {1, 2, ..., n}. We show by case distinction that S_k(y(t + 1)) ≤ S_k(x(t + 1)) for all k ∈ {1, 2, ..., n}:

Case 1: k < r′ or k ≥ s′. If k < r′ and k < s′, neither prefix sum changes; if k ≥ r′ and k ≥ s′, both prefix sums decrease by one; and if s′ ≤ k < r′, only S_k(y(·)) decreases. In all three situations, S_k(y(t + 1)) ≤ S_k(x(t + 1)) follows directly from the induction hypothesis.

Case 2: r′ ≤ k < s′. Assume for contradiction that there is such a k with S_k(y(t + 1)) > S_k(x(t + 1)), and let k be minimal with this property, so that S_{k−1}(y(t + 1)) ≤ S_{k−1}(x(t + 1)) (for k = s this holds by the previous case). This implies y_k(t + 1) > x_k(t + 1). Now note that k lies in the plateau of y(t) between s and s′, such that for all q ∈ {k + 1, k + 2, ..., s′ − 1} we have

y_q(t + 1) = y_k(t + 1) > x_k(t + 1) ≥ x_q(t + 1) (20)

(the last inequality using that x(t + 1) is sorted non-increasingly) and, similarly,

y_{s′}(t + 1) = y_k(t + 1) − 1 ≥ x_k(t + 1) ≥ x_{s′}(t + 1). (21)

This implies S_{s′}(y(t)) = S_{s′}(y(t + 1)) + 1 > S_{s′}(x(t + 1)) + 1 = S_{s′}(x(t)), contradicting the induction hypothesis.

Together, this shows that x(t + 1) majorizes y(t + 1).
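The shared-randomness deletion coupling can be made concrete in a few lines. The following minimal sketch is ours; it assumes, as discussed above, that X removes a uniformly random ball from a uniformly chosen non-empty bin. It draws one z and returns the indices of the balls deleted by Y and X; the assertion checks the i ≤ j guarantee of Observation A.1.

import random

def coupled_deletion(x, y, z=None):
    # x, y: non-increasing load vectors with equal positive total load.
    m = sum(x)
    assert m == sum(y) > 0
    z = random.random() if z is None else z
    i = int(z * m) + 1                    # Y: every ball equally likely
    nonempty = [load for load in x if load > 0]
    k = len(nonempty)
    cum, j = 0.0, 0
    for load in nonempty:                 # X: bin uniform, then ball uniform
        for _ in range(load):
            j += 1
            cum += 1.0 / (k * load)       # deletion probability of this ball
            if z < cum:
                return i, j
    return i, j                           # guard against rounding at z -> 1

for _ in range(10_000):
    i, j = coupled_deletion([4, 2, 1, 0], [3, 2, 1, 1])
    assert i <= j                         # Observation A.1

Since the per-ball deletion probabilities of X are sorted non-decreasingly, every prefix of them sums to at most the corresponding uniform prefix i/m, which is exactly why the same z always selects a ball in X no earlier than in Y.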
B Omitted Material from Proof of Theorem 2.1.

In this section we prove Theorem 2.1, which we restate for convenience:

Theorem 2.1. Consider an insertion probability sequence (β(t))_{t∈N} bounded away from 0 by a constant β > 0. If α ≤ β/16, then for any t ∈ N_0 we have E[Γ_α(x(t))] = O(n)·e^{α·adisc(0)}. Moreover, for any a ≥ 0 we have Pr[adisc(t) − adisc(0) ≥ ((a + 2)/α)·ln n] ≤ n^{−a}.

We use the potential functions from [PTW15] and show that, at an arbitrary point of time, the expected potential is linear in n. Since the authors of [PTW15] considered a process without ball deletions, we extend the potential function analysis to capture ball deletions. Recall that the process works as follows: In every time step a ball is inserted with probability β(t) using two choices. Otherwise, with probability 1 − β(t), a random non-empty bin is picked and a ball from that bin is deleted. The three potential functions defined in [PTW15] are

Φ_α(x(t)) = Σ_{i=1}^n e^{α·(x_i(t) − m(t)/n)},  Ψ_α(x(t)) = Σ_{i=1}^n e^{−α·(x_i(t) − m(t)/n)},  Γ_α(x(t)) = Φ_α(x(t)) + Ψ_α(x(t)),

where x(t) = (x_1(t), ..., x_n(t)) is the normalized load vector. For convenience, we use y_i(t) := x_i(t) − m(t)/n for the difference between the load of the i-th bin and the average.

The remainder of the section is dedicated to proving Lemma B.8, which directly implies Theorem 2.1. For this proof, we require insertion and deletion probabilities. Given that a ball is deleted in round t, we denote by q_i(t) the probability that it is removed from the i-th bin. And given that a ball is inserted, we denote by p_i(t) the probability that it is inserted into the i-th bin. If no bin is empty, then q_i(t) = 1/n, because the protocol deletes a ball from a non-empty bin chosen uniformly at random. If there are s empty bins, then the empty bins, which come last in the normalized array, have probability q_i(t) = 0, and the remaining bins have probability q_i(t) = 1/(n − s). The insertion probability p_i(t), on the other hand, is independent of t:

p_i(t) = p_i = (i/n)² − ((i − 1)/n)²,

because Greedy[2] places the ball into the i-th bin of the normalized array exactly if bin i is among the two choices and the other choice is a bin with the same or a smaller index. (W.l.o.g. we assume that if bins of equal load are chosen, the bin with the larger index receives the ball.) From this we can derive the following simple observation:

Observation B.1. Σ_{i≤n/4} p_i ≤ 1/4 − ε and Σ_{i>3n/4} p_i ≥ 1/4 + ε for any ε ≤ 3/16.

Proof. The probability of any of the first n/4 bins receiving the new ball is

Σ_{i≤n/4} p_i = Σ_{i≤n/4} [(i/n)² − ((i − 1)/n)²] = ((n/4)/n)² = 1/16.

The probability of any of the last n/4 bins receiving the new ball is

Σ_{i>3n/4} p_i = Σ_{i>3n/4} [(i/n)² − ((i − 1)/n)²] = (n/n)² − ((3n/4)/n)² = 1 − (3/4)² = 7/16.
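As a concrete reference, the following minimal sketch (ours, not from the paper) computes the three potentials and the insertion probabilities p_i for a given load vector; the assertion checks that the p_i sum to 1.

import math

def potentials(x, alpha):
    # Return (Phi, Psi, Gamma) of [PTW15] for load vector x.
    avg = sum(x) / len(x)
    phi = sum(math.exp(alpha * (xi - avg)) for xi in x)
    psi = sum(math.exp(-alpha * (xi - avg)) for xi in x)
    return phi, psi, phi + psi

def insertion_probs(n):
    # p_i = (i/n)^2 - ((i-1)/n)^2 for i = 1..n; grows with i, i.e.,
    # Greedy[2] favors the lightly loaded bins at the end of the array.
    return [(i / n) ** 2 - ((i - 1) / n) ** 2 for i in range(1, n + 1)]

p = insertion_probs(8)
assert abs(sum(p) - 1.0) < 1e-12
print(potentials([3, 2, 2, 2, 2, 2, 2, 1], alpha=0.25))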
Let ŷ(t) = (ŷ_1(t), ..., ŷ_n(t)) denote the load vector above average before normalization. By combining the probabilities β(t), p_i and q_i, we can express ŷ_i(t + 1), i.e., the i-th bin's load above average before normalization at time t + 1, as a function of its previous load y_i(t):

ŷ_i(t + 1) =
  y_i + 1 − 1/n  with probability β(t)·p_i (ball is allocated to the i-th bin),
  y_i − 1/n      with probability β(t)·(1 − p_i) (ball is allocated to another bin),
  y_i − 1 + 1/n  with probability (1 − β(t))·q_i (ball is deleted from the i-th bin),
  y_i + 1/n      with probability (1 − β(t))·(1 − q_i) (ball is deleted from another bin).

Let Φ^i_α(x(t)) = e^{α·y_i(t)} and Ψ^i_α(x(t)) = e^{−α·y_i(t)} be the local potentials in bin i, and let ∆Φ̂^i_α := Φ^i_α(x̂(t + 1)) − Φ^i_α(x(t)) and ∆Ψ̂^i_α := Ψ^i_α(x̂(t + 1)) − Ψ^i_α(x(t)) be the local potential changes in the i-th bin (before normalization). Then:

∆Φ̂^i_α =
  e^{α(y_i + 1 − 1/n)} − e^{αy_i} = e^{αy_i}·(e^{α(1 − 1/n)} − 1)    with probability β(t)·p_i,
  e^{α(y_i − 1/n)} − e^{αy_i} = e^{αy_i}·(e^{−α/n} − 1)             with probability β(t)·(1 − p_i),
  e^{α(y_i − 1 + 1/n)} − e^{αy_i} = e^{αy_i}·(e^{α(−1 + 1/n)} − 1)  with probability (1 − β(t))·q_i,
  e^{α(y_i + 1/n)} − e^{αy_i} = e^{αy_i}·(e^{α/n} − 1)              with probability (1 − β(t))·(1 − q_i);

∆Ψ̂^i_α =
  e^{−α(y_i + 1 − 1/n)} − e^{−αy_i} = e^{−αy_i}·(e^{−α(1 − 1/n)} − 1)    with probability β(t)·p_i,
  e^{−α(y_i − 1/n)} − e^{−αy_i} = e^{−αy_i}·(e^{α/n} − 1)               with probability β(t)·(1 − p_i),
  e^{−α(y_i − 1 + 1/n)} − e^{−αy_i} = e^{−αy_i}·(e^{−α(−1 + 1/n)} − 1)  with probability (1 − β(t))·q_i,
  e^{−α(y_i + 1/n)} − e^{−αy_i} = e^{−αy_i}·(e^{−α/n} − 1)              with probability (1 − β(t))·(1 − q_i).

We use these local changes to prove different bounds on the overall potential changes ∆Φ_α = Φ_α(x(t + 1)) − Φ_α(x(t)) (in Lemma B.1, Lemma B.2, Corollary B.1 and Lemma B.5) and ∆Ψ_α = Ψ_α(x(t + 1)) − Ψ_α(x(t)) (in Lemma B.3, Lemma B.4, Corollary B.2 and Lemma B.6).

Lemma B.1 (similar to a part of Lemma A.5 in [TW14]). If, for a constant β, β(t) ≥ β for all t and if α ≤ ε·β/3 ≤ 1/16, then

E[∆Φ_α | x(t)] ≤ Σ_{i=1}^n e^{αy_i} [ β(t)·(p_i(e^α − 1) − α/n) + (1 − β(t))·(1/n)·(e^{−α} − 1 + α) + 2α²/n² ].

Proof. We compute the expected potential difference:

E[∆Φ_α | x(t)]
= Σ_{i=1}^n [ β(t)p_i·(e^{α(1−1/n)} − 1) + β(t)(1 − p_i)·(e^{−α/n} − 1) + (1 − β(t))q_i·(e^{α(−1+1/n)} − 1) + (1 − β(t))(1 − q_i)·(e^{α/n} − 1) ]·e^{αy_i}
= Σ_{i=1}^n β(t)e^{αy_i}·[ p_i·e^{−α/n}·(e^α − 1) + e^{−α/n} − 1 ] + (1 − β(t))e^{αy_i}·[ q_i·e^{α/n}·(e^{−α} − 1) + e^{α/n} − 1 ]
≤ Σ_{i=1}^n β(t)e^{αy_i}·[ p_i(e^α − 1) + e^{−α/n} − 1 ] + (1 − β(t))e^{αy_i}·[ q_i(e^{−α} − 1) + e^{α/n} − 1 ]
= Σ_{i=1}^n e^{αy_i}·[ β(t)p_i(e^α − 1) + (1 − β(t))q_i(e^{−α} − 1) + e^{α/n} − 1 + β(t)(e^{−α/n} − e^{α/n}) ]
≤ Σ_{i=1}^n e^{αy_i}·[ β(t)p_i(e^α − 1) + (1 − β(t))q_i(e^{−α} − 1) + α/n + 2α²/n² − 2β(t)α/n ]
≤ Σ_{i=1}^n e^{αy_i}·[ β(t)p_i(e^α − 1) + (1 − β(t))·(1/n)·(e^{−α} − 1) + α/n + 2α²/n² − 2β(t)α/n ]  (22)
= Σ_{i=1}^n e^{αy_i}·[ β(t)·(p_i(e^α − 1) − α/n) + (1 − β(t))·(1/n)·(e^{−α} − 1 + α) + 2α²/n² ].

Inequality (22) holds because, if no bin is empty, all q_i = 1/n; and if bins are empty, then the q_i are non-increasing in i while the term (1 − β(t))·(e^{−α} − 1)·e^{αy_i} is increasing in i (as e^{−α} − 1 < 0), so replacing q_i by 1/n can only increase the sum.

Lemma B.2. Suppose β(t) ≥ β for all t, let α and ε be as above, and assume E[∆Φ_α | x(t)] ≥ −(α·β·ε)/(4n)·Φ_α. Then Φ_{α,>n/3} ≥ 2·β·ε·Φ_α, where Φ_{α,≤n/3} and Φ_{α,>n/3} denote the contributions to Φ_α of the bins with index at most n/3 and larger than n/3, respectively.

Proof. Bounding the insertion probabilities of the first n/3 bins by (1 − 4ε)/n and those of the remaining bins by 3/(2n), Lemma B.1 yields

E[∆Φ_α | x(t)]
≤ Σ_{i≤n/3} e^{αy_i}·[ β(t)·((1 − 4ε)/n·(e^α − 1) − α/n) + (1 − β(t))·(1/n)·(e^{−α} − 1 + α) + 2α²/n² ]
  + Σ_{i>n/3} e^{αy_i}·[ β(t)·(3/(2n)·(e^α − 1) − α/n) + (1 − β(t))·(1/n)·(e^{−α} − 1 + α) + 2α²/n² ]
≤ (Φ_{α,≤n/3}/n)·[ β(t)·((1 − 4ε)(e^α − 1) − α) + (1 − β(t))·(e^{−α} − 1 + α) + 2α²/n ]
  + (Φ_{α,>n/3}/n)·[ β(t)·((3/2)·(e^α − 1) − α) + (1 − β(t))·(e^{−α} − 1 + α) + 2α²/n ]
≤ −(3·β(t)·α·ε/n)·Φ_α + (α/n)·Φ_{α,>n/3}
≤ −(3·β·α·ε/n)·Φ_α + (α/n)·Φ_{α,>n/3}.

Recall that we assume E[∆Φ_α | x(t)] ≥ −(α·β·ε)/(4n)·Φ_α. Hence

−(α·β·ε)/(4n)·Φ_α ≤ −(3·α·β·ε/n)·Φ_α + (α/n)·Φ_{α,>n/3}  ⟹  2·β·ε·Φ_α ≤ Φ_{α,>n/3}.

This yields the following bound on Γ_α (used as part of Lemma B.5): suppose, in the situation of Lemma B.2, that additionally Φ_α ≥ (ε/4)·Ψ_α, and let B denote the number of balls above the average load. Then at most n/3 bins can have y_i > 3B/n, so Φ_{α,>n/3} ≤ (2n/3)·e^{3αB/n}; and since the n/4 emptiest bins have y_i ≤ −4B/n, which implies Ψ_α ≥ (n/4)·e^{4αB/n}, we get

(n·ε/16)·e^{4αB/n} ≤ (ε/4)·Ψ_α ≤ Φ_α ≤ Φ_{α,>n/3}·1/(2βε) ≤ (n/(3βε))·e^{3αB/n}  ⟹  e^{αB/n} ≤ 6/(β·ε²).

Finally, it follows that

Γ_α = Φ_α + Ψ_α ≤ (5/ε)·Φ_α ≤ (5/ε)·(n/(3βε))·e^{3αB/n} ≤ (5/ε)·(n/(3βε))·(6/(βε²))³ = 360·n/(ε⁸·β⁴).

Lemma B.6 (similar to Lemma A.8 in [TW14]). Suppose β(t) ≥ β, that q_i ≥ (1 + ε)/n for i > 2n/3, and that E[∆Ψ_α | x(t)] ≥ −(α·β·ε)/(4n)·Ψ_α. Then Ψ_α ≤ (4/(β·ε))·Ψ_{α,≤2n/3}.

Proof. Splitting the sum and using q_i ≥ (1 + ε)/n for i > 2n/3 as well as e^{−α} − 1 ≤ −α + α²/2, we compute

E[∆Ψ_α | x(t)]
≤ Ψ_{α,≤2n/3}·(α/n) + Σ_{i>2n/3} e^{−αy_i}·[ β(t)·((1 + ε)/n)·(e^{−α} − 1) + α/n + (1 − β(t))·(1/n)·(e^α − 1 − α) + 2α²/n² ]
≤ Ψ_{α,≤2n/3}·(α/n) + (Ψ_{α,>2n/3}/n)·[ β(t)·ε·(e^{−α} − 1) + α² + α³/2 ]
≤ Ψ_{α,≤2n/3}·(α/n) + (Ψ_{α,>2n/3}/n)·[ −α·β(t)·ε + α² ]
≤ Ψ_{α,≤2n/3}·(α/n) − Ψ_{α,>2n/3}·(α·β(t)·ε)/(2n)
≤ Ψ_{α,≤2n/3}·(α/n) − Ψ_{α,>2n/3}·(α·β·ε)/(2n).

Using E[∆Ψ_α | x(t)] ≥ −(α·β·ε)/(4n)·Ψ_α, we get

−(α·β·ε)/(4n)·Ψ_α ≤ Ψ_{α,≤2n/3}·(α/n) − Ψ_α·(α·β·ε)/(2n)  ⟹  Ψ_α ≤ (4/(β·ε))·Ψ_{α,≤2n/3}.
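For reference, the following minimal sketch (ours, not from the paper) implements one step of the process analyzed throughout this section: a Greedy[2] insertion with probability β and otherwise a deletion from a uniformly random non-empty bin.

import random

def step(load, beta):
    # Apply one insertion/deletion step to the (unsorted) load list.
    if random.random() < beta:
        a = random.randrange(len(load))
        b = random.randrange(len(load))
        load[a if load[a] <= load[b] else b] += 1   # Greedy[2]
    else:
        nonempty = [i for i, l in enumerate(load) if l > 0]
        if nonempty:                                # no-op on an empty system
            load[random.choice(nonempty)] -= 1

n = 32
x = [0] * n
for _ in range(n * n):
    step(x, beta=0.6)
print(max(x) - min(x))                              # empirical discrepancy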
Analogous arguments (Lemma B.3, Lemma B.4 and the Corollaries B.1 and B.2) bound E[∆Ψ_α | x(t)]. Combining them yields the drift bound on Γ_α = Φ_α + Ψ_α of Lemma B.8: If E[∆Φ_α | x(t)] ≤ −(ε·β·α)/(4n)·Φ_α, apply Lemma B.4:

E[∆Γ_α | x(t)] = E[∆Φ_α | x(t)] + E[∆Ψ_α | x(t)] ≤ −(ε·β·α)/(4n)·Φ_α − Ψ_α·(α·β·ε)/n + α ≤ −(α·β·ε)/(4n)·Γ_α + α.

If this is not the case, apply Lemma B.5, which distinguishes two cases: if Φ_α < (ε/4)·Ψ_α, then Lemma B.6 bounds Ψ_α (and with it Γ_α); otherwise Γ_α ≤ 360·n/(ε⁸·β⁴) as derived above. In either case, the expected change of Γ_α is at most −(α·β·ε)/(4n)·Γ_α + α whenever Γ_α is sufficiently large, which establishes Lemma B.8 and, with it, Theorem 2.1.

C Omitted Material from Proof of Theorem 2.2.

For the proof of Theorem 2.2, we assign a potential to each individual ball: a ball b at height ⌈m(t)/n⌉ + j contributes π_b(t) := e^α if j = 1 and π_b(t) := e^{α·j} − e^{α·(j−1)} if j ≥ 2 (balls at height at most ⌈m(t)/n⌉ contribute nothing). A bin i with k = X⁺_i(t) balls above height ⌈m(t)/n⌉ hence contributes

Π_i(t) = Σ_{ball b in bin i} π_b(t) = e^α + Σ_{j=2}^k (e^{α·j} − e^{α·(j−1)}) = e^α + (e^{2α} − e^α) + (e^{3α} − e^{2α}) + ··· + (e^{kα} − e^{(k−1)α}).

This is a telescoping sum evaluating to Π_i(t) = e^{kα}. This, however, is precisely this bin's potential contribution in the potential function Φ, so that we can conclude that

Π(t) = Σ_{ball b} π_b(t) = Σ_{bin i} Σ_{ball b in bin i} π_b(t) = Σ_{bin i} Π_i(t) = Σ_{bin i} e^{α·X⁺_i(t)} = Φ(t).

Consider a height ⌈m(t)/n⌉ + k for some k ≥ 2; the precise value will be fixed below. Assume that there is a set B of at least μn balls at height ⌈m(t)/n⌉ + k or higher (if no such set exists, the proof is finished); the precise distribution of the balls into bins is irrelevant. Each of those at least μn many balls contributes a potential of at least

e^{αk} − e^{α(k−1)} = e^{αk}·(1 − 1/e^α) ≥ ε·e^{αk}

for some constant ε = ε(α) > 0, for a total potential contribution of those balls of at least Π_B(t) ≥ ε·μn·e^{αk}. Now Π_B(t) > c·n whenever k > (1/α)·ln(c/(εμ)) − 1, so letting d = (1/α)·ln(c/(εμ)) proves the theorem (recall that the probability is inherited from Lemma C.1; the actual argument here in the proof building on top of it is entirely deterministic).

Lemma C.1. Assume the premise of Theorem 2.2 holds. Then Γ_α(x(t)) = O(n) with probability at least 1 − n^{−(3+l)} for any fixed non-negative l, where the constant factor hidden in the big-Oh depends on l.

Proof. The proof is a minor adaptation of Theorem 5.3 on page 18 in Los and Sauerwald's [LS21], which is a more detailed arXiv version of their [LS22]. All references in Appendix C to their lemmas etc. are with respect to the numbering in their arXiv version [LS21]. As mentioned before, the purpose of the adaptation of [LS21]'s results is two-fold: (i) to incorporate our deletions and their respective probability bounds, and (ii) to improve the high-probability guarantee from the original 1 − 1/n³ (which was sufficient for the purposes of [LS22, LS21]) to 1 − 1/n^{3+l} for any arbitrary but fixed l > 0. To show this result we adapt Theorem 5.3 in [LS21] as follows; the l in the exponent of the statement comes from Theorem 2.2.

Theorem C.1 (cf. Theorem 5.3 in [LS21]). Consider any probability vector p that is (i) non-decreasing in i, i.e., p_i ≤ p_{i+1}, and (ii) satisfies, for a constant ε, Σ_{i≥3n/4} p_i ≥ 1/4 + ε and Σ_{i≤n/4} p_i ≤ 1/4 − ε. Then, for any t ≥ 0 and α_2 ≤ β·ε/110, c = c_{ε,α_2} = 2·40·128³·ε⁻⁷·4·α_2⁻¹,

Pr[ ∩_{s∈[t, t+n·log⁵(n)]} {Γ^(s)_{α_2} ≤ 2cn} ] ≥ 1 − n^{−(3+l)}.

Proof. The proof is almost identical to that of Theorem 5.3 in [LS21]. The chain of modifications starts with Lemma 5.5 in [LS21], where the restriction Γ^(t)_1 ≤ cn⁹ occurs. An application of Markov's inequality turns the n⁹ into a probability 1/n⁸, which subsequent lemmas etc. further decay to 1/n³. The main idea to make the result work for us is to modify the restriction on Γ^(t)_1 to Γ^(t)_1 ≤ cn^{9+l}, which accordingly will turn into a probability 1/n^{8+l}, only to subsequently be eroded to 1/n^{3+l}. Basically, we have the exact same results with the exact same proofs as in [LS21], but with some explicit constant parameters replaced with their respective original values plus l. We will also need to make some equally straightforward modifications of [LS21]'s results to make them compatible with our potential function analysis.
We will use two instances of the potential function Γ_α which only differ in α. The first one, Γ_{α_1}, uses α_1 ≤ β·ε/3; the second one, Γ_{α_2}, uses α_2 ≤ α_1/(4·9.1). (Explanation: α_1 and α_2 must fulfil the requirements of the potential function analysis in the previous section and of Lemma C.2.) From Lemma B.8 we can derive, for n and c large enough, i.e., c ≥ 4160002·n/(α·β³·ε⁹), that

Pr[ Γ_{α_1}(x(t)) ≥ c·n^{9+l} ] ≤ n^{−(8+l)};

this is our analogue of Lemma 5.5 in [LS21] (here: Lemma C.2).

Lemma C.3 (cf. Lemma 5.6 in [LS21]). For c′ > 0 and ε′_{α_2} > 0 defined as above,

Pr[ ∪_{s∈[t−n·log²(n), t]} {Γ^(s)_{α_2} ≤ 2c′_{ε′_{α_2}}·n} ] ≥ 1 − 2·c·n^{−(8+l)}.

Proof. The proof is almost identical to that of Lemma 5.6 in [LS21]. In fact, the modifications immediately follow from those made to Lemma 5.5 (here: Lemma C.2).

We are now in a position to finish the proof of Theorem C.1, a part that essentially mimics Section 5.2 of [LS21], with just a few slightly adapted coefficients. We do of course not wish to copy a page or two from another paper, and therefore just briefly highlight the modifications. The application of Lemma 5.6 in [LS21] (here: Lemma C.3) gives a lower bound of 1 − 2cn^{−(8+l)} instead of 1 − 2cn^{−8} for the probability. Further down, the application of the union bound gives

Pr[ ∩_{u∈[t−n·log²n, t+n·log⁵n]} {Γ^(u)_1 ≤ cn^{9+l}} ] ≥ 1 − (2·log⁵n)/n^{7+l}.

Then, the application of Theorem A.4 in [LS21] gives

Pr[ X^(u)_r ≥ X^(r)_r + cn/2 ] ≤ exp( −(c²n²/4) / (2·(2n·log⁵n)·(2n^{1/3})) ) + (2·log⁵n)/n^{7+l} ≤ (3·log⁵n)/n^{7+l}.

Further down we find that if a red phase starts at time r, then with probability 1 − (3·log⁵n)/n^{7+l}, Γ^(u)_2 will always be ≤ 3cn/2. Taking union bounds as in Section 5.2 in [LS21] yields

Pr[ ∪_{r∈[s, t+n·log⁵n]} ∪_{u∈[r, t+n·log⁵n]} {X^(u)_r > 3cn/2} ] ≤ (3·log⁵n)/n^{7+l}·(4n²·log¹⁰n) ≤ (1/2)·n^{−(4+l)}.

Concluding as in [LS21], with probability 1 − (1/2)·n^{−(4+l)} − 2c·n^{−(8+l)} ≥ 1 − n^{−(4+l)}, it holds that Γ^(r)_2 ≤ 3cn/2 for all time steps r which are within a red phase in [s, t + n·log⁵n] ⊆ [t, t + n·log⁵n]. Since Γ^(r)_2 ≤ T ≤ cn holds (deterministically) by definition for all time steps r within a green phase, the theorem follows. This concludes the proof sketch of the lemma.

D Omitted Material from Section 3.

We first present the proof of Lemma 3.2. We restate it for convenience:

Lemma 3.2. Consider an h-valid configuration x and let x′ denote the (random) configuration after one step of our process with insertion probability β(t) ≤ β̂.

(a) If a level l ∈ {1, 2, ..., l*} of configuration x is h-critical, then

Pr[ m^(h)_l(x′) < m^(h)_l(x) ] / Pr[ m^(h)_l(x′) > m^(h)_l(x) ] ≥ 2. (4)

(b) If level l = l* + 1 of configuration x is h-critical and if n is large enough, then

Pr[ m^(h)_l(x′) < m^(h)_l(x) ] / Pr[ m^(h)_l(x′) > m^(h)_l(x) ] ≥ √n. (5)

Proof. For the sake of readability, we abuse notation in this proof by omitting the base height h when talking about the number of balls that have at least some level l (i.e., we write m_l(·) instead of m^(h)_l(·)). Similarly, we refer to the notions h-valid and h-critical simply as valid/critical within the proof.

(a) Consider a critical level l ∈ {1, 2, ..., l*} of configuration x. Since x is valid, levels l − 1 and l + 1 are valid. Using Lemma 3.1 and Observation 3.1 we calculate

Pr[ m_l(x′) < m_l(x) ] / Pr[ m_l(x′) > m_l(x) ]
≥ ((1 − β(t))/β(t)) · (m_l(x) − m_{l+1}(x))·n / (m_{l−1}(x) − m_l(x))²
≥ ((1 − β̂)/β̂) · (α_l/2 − α_{l+1})·n / (α_{l−1} − α_l/2)²
≥ ((1 − β̂)/β̂) · (α_l/4)·n / α²_{l−1}
≥ 2.

The first inequality follows from Lemma 3.1. The second inequality uses the bound on β(t), that level l is critical, and that levels l − 1 and l + 1 are valid. The third and final inequalities follow from Observation 3.1.

(b) For level l* + 1 we similarly calculate

Pr[ m_{l*+1}(x′) < m_{l*+1}(x) ] / Pr[ m_{l*+1}(x′) > m_{l*+1}(x) ]
≥ ((1 − β(t))/β(t)) · (m_{l*+1}(x) − m_{l*+2}(x))·n / (m_{l*}(x) − m_{l*+1}(x))²
≥ ((1 − β̂)/β̂) · n/α²_{l*} = ((1 − β̂)/β̂) · n/(144·log²(n)) = Ω(n/log²(n)) ≥ √n.
Compared to the proof of the previous statement, we bound the number of balls at level at least l* + 1 in the numerator by 1 (level l* + 1 being critical means it is non-empty, so there is at least one bin with a ball on level l* + 1). The rest of the calculation applies the definition of α_{l*} (see Equation (2)) and simplifies.

We next present the proof of Lemma 3.4. We restate it for convenience:

Lemma 3.4. Let h(t) := ⌈m_max(t)/n⌉ + γ denote the base height at time t. Consider an initial configuration x(0) = x that is h(0)-safe. Fix the time T = n^4 and assume n to be large enough. If β(t) ≤ β̂ for all t ≤ T, then, w.h.p., x(t) is h(t)-valid for all t ≤ T.

Proof. Let τ := min{t ∈ N_0 | x(t) is not h(t)-valid} and write m̃_l(t) := m^{(h(t))}_l(x(t)). By Lemma 3.2, for any time t < τ and any level l that is critical at time t we have

Pr[ m̃_l(t + 1) < m̃_l(t) ] / Pr[ m̃_l(t + 1) > m̃_l(t) ] ≥ 2 if l ∈ {0, 1, ..., l*} and ≥ √n if l = l* + 1. (31)

If τ < T, there must be some level l that is not valid at time τ and a minimal time t′ < τ such that level l is critical at any integral time t ∈ [t′, τ). In particular, using the minimality of t′, we have for all integral t ∈ [t′, τ) that m̃_l(t′) = α_l/2 ≤ m̃_l(t) < m̃_l(τ) = α_l. That is, between time step t′ and τ, the number of balls at level at least l started at α_l/2 and increased to α_l without decreasing below α_l/2. If this happens, we say level l crosses [α_l/2, α_l) during [t′, τ), and denote this event as C_{l,[t′,τ)}. We will show that for any level l and any time t′ the event C_{l,[t′,τ)} is unlikely to happen. Using a union bound, this yields that, w.h.p., τ ≥ T, as desired.

To bound Pr[C_{l,[t′,τ)}], we define a random walk W^(l,t′) that has an even larger probability to cross [α_l/2, α_l) during [t′, τ) and show that this larger probability is still small enough. To this end, fix a level l ∈ {0, 1, ..., l* + 1} and an integral time t′ ∈ [0, τ] ⊆ [0, T]. Let p^(l,t) := Pr[m̃_l(t + 1) = m̃_l(t) | x(t)] denote the probability that the number of balls at level at least l does not change from time t to time t + 1 (note that this probability is a random variable depending on the random configuration x(t)). We define the random walk W^(l,t′) = (W^(l,t′)_t)_{t∈[t′,T)} over the discrete time interval [t′, T). The random walk's start position is W^(l,t′)_{t′} := m̃_l(t′) = α_l/2 and its movement from time t ∈ [t′, T) to time t + 1 is defined via

W^(l,t′)_{t+1} =
  W^(l,t′)_t      with probability p^(l,t),
  W^(l,t′)_t − 1  with probability (1 − p^(l,t))·2/3, and
  W^(l,t′)_t + 1  with probability (1 − p^(l,t))·1/3. (32)

In other words, the random walk W^(l,t′) starts at time t′ at position m̃_l(t′) = α_l/2, maintains its position whenever m̃_l(·) does not change, and the probability that it moves left is twice the probability that it moves right. For a time t ≥ t′, we say W^(l,t′) crosses [α_l/2, α_l) during [t′, t) if W^(l,t′) reaches α_l before decreasing below α_l/2, and denote this event as C^W_{l,[t′,t)}. Given the current configuration x(t) = x at time t ≥ t′ and depending on how m̃_l(·) changes from time t to time t + 1, we can easily couple the random walk W^(l,t′) with our process as follows:

• if m̃_l(·) does not change, W^(l,t′) remains stationary;
• if m̃_l(·) increases, W^(l,t′) moves right; and
• if m̃_l(·) decreases, W^(l,t′) moves left with probability min{ 1, (1 − p^(l,t))·(2/3) / Pr[m̃_l(t + 1) < m̃_l(t) | x(t) = x] } and right otherwise.

By this coupling, the random walk is never "left of" m̃_l(·) (i.e., for all t ≥ t′ we have W^(l,t′)_t ≥ m̃_l(t)). We now consider the crossing probabilities for the different levels l ∈ {0, 1, ..., l* + 1} for some integral time t′ ∈ [0, τ) at which level l just became critical. For level l = 0, Theorem 2.2 can be used to see that, w.h.p., this level will not become invalid at any time during the discrete interval [0, τ). For any other level l, we use the fact that the m̃_l-value is upper-bounded by the position of the random walk W^(l,t′) during [t′, τ) (by the coupling above) together with the fact that the left-biased random walk W^(l,t′) is very unlikely to reach the right border before becoming again safe (by Lemma 3.3).

Case 1: For level l = 0 we do not need to consider the random walk. Instead, note that Theorem 2.2 allows us to choose a constant γ = γ(α_0) such that the number of balls above height ⌈m(t)/n⌉ + γ is, with (arbitrarily) high probability, at most α_0 = ((1 − β̂)/(128·β̂))·n. Thus, by choosing the probability high enough and by using a union bound over the T = poly(n) many time steps, we can ensure that, with probability at least 1 − n⁻⁶, level 0 does not become invalid at any time t ∈ {1, 2, ..., T}. In particular, this yields

Pr[C_{0,[t′,τ)}] ≤ Pr[C_{0,[t′,T)}] ≤ n⁻⁶. (33)
For any other level l, we use the fact that the ̃ml-value is upper-bounded by the position of the random walk W (l,t′) during [t′, τ) (by the coupling above) together with the fact that the left-biased random walk W (l,t′) is very unlikely to reach the right border before becoming again safe (by Lemma 3.3). Case 1: For level l= 0 we do not need to considered the random walk. Instead, note that Theorem 2.2 allows us to choose a constant γ = γ(α0) such that the number of balls above height ⌈m(t)/n⌉+ γ is, with (arbitrarily) high probability, at most α0 = 1-β 128β · n. Thus, by choosing the probability high enough and by using a union bound over the T = poly(n) many time steps, we can ensure that, with probability at least 1 -n-6, level 0 does not become invalid at any time ∈{ 1, 2, . . . , T }. In particular, this yields Pr C0,[t′,τ) ≤Pr C0,[t′,T) ≤n-6. (33) 34 Case 2: For any level l∈{ 1, 2, . . . , l∗} and large enough n we get Pr Cl,[t′,τ) ≤Pr h CW l,[t′,τ) i ≤Pr h CW l,[t′,T) i Lem. 3.3 ≤ 21 -1 21+αl/2 -1 ≤ 2 21+αl/2 = 1 2αl/2 ≤ 1 2αl∗/2 = n-6. (34) Case 3: For level l= l∗+ 1 we get Pr Cl∗,[t′,τ) ≤Pr h CW l∗,[t′,τ) i ≤Pr h CW l∗,[t′,T) i Lem. 3.3 ≤ √n1 -1 √n1+αl∗+1/2 -1 ≤ √n √n1+αl∗+1/2 = n-6. (35) By this case distinction, we know that the probability that any single level that just became critical at time t′ ∈[0, τ) has probability at most n-6 to become critical within [0, τ). Taking a union bound over the l∗+ 2 = log log(n) + Θ(1) many levels and T = n4 many time steps finishes the proof of Lemma 3.4. E Lower Discrepancy Bound when Deletions are Dominant. Theorem E.1 gives a lower bound that shows that our result from Theorem 2.1 is tight in the sense that there are insertion probability sequences that result in a logarithmic discrepancy. Our lower bound statement assumes that the potential Φα (see Section 2) is linear in n, which holds with high probability (see Lemma C.1). Similar to the results of [PTW15] the discrepancy is heavily dependent on the one-choice operations, ball deletions in our case. A long sequence of ball deletions creates deep holes in the bins and the frequency of ball insertions must be high enough to re-balance the load. Theorem E.1. Fix a step t and assume Φα(t) = O(n) for some constant c. Let T = t + n ln(n) 2+4ε and assume that for every τ with t ≤τ ≤T we have β(τ) = 1/2 -ε with ε < 1/2 being an arbitrary positive constant. Then the expected number of bins at the end of step T with load at least ⌊m(T)/n⌋+ ln(n)/2 is Ω(√n). We note that the above result also holds (with basically the same proof) for varying insertion probabilities as long as fewer than (T -t)·(1/2-ε) balls are inserted during the time interval [t, T]. The proof idea for Theorem E.1 is as follows: We first show that, as long as in step t we have Φ(t) ≤c·n for some constant c (which holds with high probability), there exists a constant fraction of bins with a load which is at least m(t)/n (Lemma E.1). If now during the next O(n log n) steps much more balls are deleted than inserted, the average load will go down and one of the bins with load at least m(t)/n will receive no ball deletions at all. This bin will have a discrepancy of O(log n). This holds even under the assumption that the bin does not receive any additional ball. Lemma E.1. Assume Φ ≤c · n for some constant c and let r = r(n) denote the fraction of bins with a load of at least ⌊m(t)/n⌋. Then r = Θ(1), i.e., a constant fraction of bins has load at least ⌊m(t) n ⌋. Proof. 
Proof. Let B_1 denote the set of bins with a load of at least ⌊m(t)/n⌋ and B_2 the set of bins with fewer than ⌊m(t)/n⌋ balls. Since r is defined as the fraction of bins in B_1, we have |B_1| = r·n and |B_2| = (1 − r)·n. This implies that there are at least (1 − r)·n holes below the level ⌊m(t)/n⌋ and, in turn, at least (1 − r)·n balls above the average load of m(t)/n. Let m(t)/n + l̄ be the average load of the bins in B_1. Note that l̄ ≥ 0 because m(t)/n is the average load and the bins in B_1 have a higher load than the ones in B_2. From the definition, it follows that

l̄ ≥ (n·(1 − r))/(r·n) = (1 − r)/r.

Since the contribution of the bins in B_1 to the potential function Φ is minimal when all of them have exactly load m(t)/n + l̄, we obtain

Σ_{i=1}^{r·n} e^{α·(1−r)/r} ≤ Φ ≤ c·n,  which implies  r·e^{α·(1−r)/r} ≤ c.

If we view r = r(n) as a function of n and assume r(n) = o(1), then in r(n)·e^{α·(1/r(n) − 1)} = r(n)·f(n), with f(n) = e^{α·(1/r(n) − 1)}, the exponent is a function growing as 1/r(n) = ω(1). This implies that, regardless of the size of α (as long as it is positive), f(n) grows exponentially quicker than r(n) = o(1) decays, and so r(n)·f(n) = ω(1); in particular, it is not upper-bounded by the constant c. Since r(n) ≤ 1 ∈ O(1), we conclude that r(n) = Θ(1).

With this, we can prove Theorem E.1:

Proof of Theorem E.1. First of all, a simple application of Chernoff bounds proves that, w.h.p., ⌊m(t)/n⌋ ≥ ⌊m(T)/n⌋ + ln(n)/2. Fix a bin i and define L := T − t = n·ln(n)/(2 + 4ε). Then for large n, the probability that during all steps τ with t ≤ τ ≤ T no ball is deleted from bin i can be bounded by (using the inequality 1 − x ≥ e^{−2x} for x ∈ [0, 0.795])

(1 − 1/n)^{(1−(1/2−ε))·L} = (1 − 1/n)^{(1/2+ε)·L} ≥ exp(−(1 + 2ε)·L/n) = exp(−ln(n)/2) = n^{−1/2}. (36)

From Lemma E.1 we get that, at time t, a constant fraction of bins have load at least ⌊m(t)/n⌋. And by the above calculation, the expected number of those bins from which no ball is deleted is Ω(√n). Thus, at time T, in expectation there are Ω(√n) bins that have a load of at least ⌊m(t)/n⌋ ≥ ⌊m(T)/n⌋ + ln(n)/2.
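As a quick numeric illustration (ours, not from the paper), the bound in (36) can be checked directly for concrete parameters:

import math

n, eps = 10**6, 0.1
L = n * math.log(n) / (2 + 4 * eps)          # T - t
survive = (1 - 1 / n) ** ((0.5 + eps) * L)   # no deletion ever hits bin i
print(survive, n ** -0.5)                    # survive stays above n^(-1/2)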
Reflections of quantum educators on strategies to diversify the second quantum revolution

Apekshya Ghimire* and Chandralekha Singh
Department of Physics and Astronomy, University of Pittsburgh, 3941 O'Hara St, Pittsburgh, PA, 15260
*apg61@pitt.edu

Introduction

The second quantum revolution focuses on quantum information science and technology (QIST). It promises to bring about unprecedented improvements in computing, communication and sensing due to our ability to exquisitely control and manipulate quantum systems [1, 2]. This fast-growing field presents equally unprecedented opportunities and challenges for preparing students to be future leaders in this area [3, 4]. One major challenge pertains to how to diversify the second quantum revolution. The first quantum revolution, which began nearly a century ago and laid the foundation for the technological breakthroughs of the mid-20th century, reflects the broader lack of diversity that characterized the scientific landscape of that era. Since the second quantum revolution is in its infancy, it is critical to contemplate these issues to ensure that people from historically marginalized groups in physics and related fields can contribute equitably [5, 6].

This paper focuses on the reflections and suggestions of five college quantum educators from four different institutions (two from the same institution) regarding what can be done to diversify the second quantum revolution. They are leading QIST researchers and very passionate about improving quantum education. The educators were also asked for their thoughts on whether the interdisciplinary nature of the field, in which nobody can claim to be an expert in all aspects of QIST, may make it easier to create a better culture from the beginning, one supportive of equitable participation of diverse groups, unlike physics. This is because disciplines such as physics have an ingrained inequitable culture based on brilliance attribution [7] that is a major impediment to diversity, equity and inclusion. Educators were interviewed on Zoom using a semi-structured think-aloud protocol about various issues related to QIST education, including those pertaining to how to diversify the second quantum revolution discussed here. Below, we discuss some excerpts that exemplify the interviewed quantum educators' reflections and suggestions in their own words. They are insightful and can help other educators adapt and implement strategies to diversify QIST. We mention below that one of the educators was a woman because her gender identity may be useful for understanding her point of view.

Quantum educators in their own words

Some educators emphasized the importance of teaching quantum concepts early, e.g., in high school and early in college, to students from diverse backgrounds. This exposure is important because targeting diverse high schools has the potential to introduce exciting quantum concepts and careers to students from diverse backgrounds, considering diversity is severely lacking among college STEM majors. Early introduction is also valuable because learning quantum concepts early may make the subject matter less mysterious. Students may also better learn to discern the contexts in which classical and quantum concepts are applicable. This is similar to the facility children have in discerning which language to use in which contexts and switching between them as needed, when they simultaneously learn two different languages early.
Emphasis on potential QIST applications that focus on solving societal problems that resonate with students, along with discussion of career pathways, can be useful. For example, one male educator emphasized the importance of teaching quantum and relevant mathematical concepts early, saying, “If I had the magic wand that could reform, you know, high school curriculum, I would probably change a few [parts] of the mathematics teaching. Like, I think, data science and statistics at this point should have at least as much space as calculus does. Well, once you understand statistics…then quantum is a small step from there and [can even be taught with] linear algebra. I would teach…coding, programming, statistics... And once you've got that, you can teach any quantum you want…even in high school…My course, that I am teaching second year [college students] here, with minor modifications, I could teach in high school.” He also emphasized the importance of helping students from diverse backgrounds develop confidence along with competence from early on while learning QIST concepts, saying, “in every field, no matter how developed or under-developed [like QIST] it is, you will find people who have a lot of confidence, you know, becoming the most visible ones…It's something that students should learn a little in high school…”.

Another male educator, who is passionate about early teaching as well as early research experiences related to QIST for first- and second-year college students in science and engineering, reflected on the importance of supporting all students in foundational courses, especially those not particularly interested in physics, saying, “let's say that we [with physics background] are teaching a college course on foundations of quantum computing…we found some advisors who said to students in electrical engineering or computer science that this might be a good thing if they were thinking about…going into the workforce in this area. Now when they landed in those courses…they're like, oh, this sounds too much like physics to me. So what can you do?...We want all these students if they're going to take an intro course…to succeed…there should be something for everybody...They shouldn't be punished for not being a physicist”. He acknowledged that there are many ways of structuring interdisciplinary QIST courses for early learners, but felt educators need to emphasize applications in quantum computing, communication and sensing and how they could impact society through examples that resonate with students from diverse backgrounds, as well as discuss career pathways. He also noted that while these early courses should mainly focus on learners with diverse background preparations and interests, students can be provided enrichment options through projects/special modules as part of the course that dive more deeply into the sub-field of QIST, e.g., engineering or computer science, they may be more interested in. This educator also noted that he currently leads a first-experiences-in-quantum-research program for first-year college students that he hopes will play a role in diversifying QIST. He also noted that issues with diversifying exist not only in QIST but in many STEM disciplines including physics, stating that “simply acknowledging this…and trying to make sure that people have support that they need throughout their…education to be able to…walk into a room where people don't look like them and succeed” is important.
He did not think that QIST being a new field makes it easier to diversify than sub-fields such as physics, saying, “I do think that there's in some sense…we could just like start over afresh, because this [QIST] is a new field, and we don't carry over the baggage…I'm not sure that it's succeeding so far”. He felt that the current diversity in QIST is dismal, saying, “I don't see things as automatically getting fixed just because we rebranded. And now it's quantum 2.0. I think that it's…very important to push back against these…And I think that the earlier the positive experiences people have in doing that [learning quantum concepts] successfully, the better the outcome could be, that you have to…realize that, you know, imposter syndrome is like, almost everybody feels that…has that kind of feeling at some stage or another and that you…can embrace it and say, well, I'm new to this field, but I still belong here…[do] some type of self-affirmation”. He also reflected on the highly interdisciplinary nature of QIST playing a role in sense of belonging and emphasized the role of educators and researchers in contemplating ways to integrate students in QIST who are trained differently. He felt that since the preparation of students is siloed in physics, engineering, computer science etc., some students may feel uncomfortable being part of this new interdisciplinary field of QIST, saying, “what's interesting about the quantum is…there are dimensions to [it] where people feel like they don't belong…they have to do with their professional upbringing, right? So, somebody can feel that they don't belong…because they weren't trained in a certain way and I think that it's important to address all of these issues, right? Whatever it is that is inhibiting people from learning and doing [QIST]”.

A female educator reflected on these issues saying, “I think, in general…physics has been very sort of depressingly bad…I'd say that quantum does not have good representation, you know it's not so inclusive…”. Unless the culture of each subfield that makes up QIST, e.g., physics, changes, she disagreed with “this idea that there are these different fields with somewhat different cultures and maybe that could provide an avenue to improving [QIST]”, emphasizing that a major barrier to diversity is “this thing of like, if you get a guy, and he knows, you know, half of something, [he'll say] oh, yeah, I know that. And you get a woman, and she knows more, [she'll say] oh, I don't know…everybody's faking it. But somehow people's internalization of what they know relative to everyone else is very culturally dependent”. She felt that unless we can do something to eradicate these types of dichotomies that are systemic in the culture, e.g., overly confident men bragging about more than what they know and women undermining themselves, the new field of QIST will continue to be intimidating for many women and other underrepresented people, similar to physics, and they would decide not to be part of a field with such an inequitable culture. She gave several concrete examples of men being overconfident and acting as if they were pinnacles of brilliance based upon her own interactions with them in physics. She exemplified specific cases in which they knew significantly less than her but pretended to be on top of everything and took the opportunity to promote themselves.
She reiterated that she is worried that this kind of inequitable culture that pervades physics will seep into this new field of QIST since the same people who have dominated the space in physics will continue to do so in QIST, “I'm saying just like having that confidence of, you know, I belong here...[I disagree that] one of the great things about a field that's as broad as this one [QIST] is that nobody knows everything. But you don't feel that way [if you are an underrepresented person such as a woman], you feel like everybody knows it but you [because of the way many men portray themselves]”. She felt that the only way to diversify QIST is to change the systemic cultural issues in physics and other related disciplines that make up QIST in which brilliance is over-emphasized and many individuals from the dominant group are pretending and portraying to know more than they do, thereby intimidating others and even driving them out of the discipline. Another male educator strongly believed that early outreach to students from diverse backgrounds in various settings can play a critical role in diversifying QIST saying, “we've got a massive issue with diversity in physics and…in electrical engineering and other engineering disciplines…in computer science…and I think we, as physicists, have to be very active in going out to the general public”. He also emphasized the importance of increasing the visibility and impact of underrepresented people in QIST saying, “And I think that it's important to emphasize and underline the contributions of those role models and to show to people that this is a field for everyone”. He thought it was important that “when we organize events, we represent the diversity of people who are involved…there are people from a lot of minority ethnic groups involved in this area. There are a lot of women involved in this area really doing internationally leading research”. To diversify QIST, he stressed the need to change the culture of physics and related disciplines comprising QIST that currently favors the dominant group, “we need to keep looking at our own research teams, our ways of interacting with people. We need to look at our recruiting practices. We need to look at the ways in which we organize meetings, and we need to be constantly questioning whether we have tried to understand all of this best practice from what's been done in this field around the world, making sure that we implement it in our own teams and make sure that we essentially spread the message and take leadership roles in saying that diversity, equity, and inclusion, these things are extremely important to us, and extremely important to the health of the field, going forth”. He added, “we lose out still a lot. A lot of very talented physicists choose to go into other areas…they have bad experiences with a workplace culture...I also think that when you get a diverse range of people, people with different ethnic backgrounds, people of different nationalities, people from different gender backgrounds, people from…all different ways you can be, I find that teams tend to work much better. They tend to be more creative because people coming in with different ideas, different ways [of] thinking lead to different types of creativity, but also the culture of teams change…most inefficient committees I've ever been on, we're essentially all made up of white men and the most productive committees I was on, we're typically always characterized by having a very diverse range of people. 
And I think for the health of our field, it's important that we take leadership roles and that we do everything we can in this area”.

Another male educator, who acknowledged that the physics culture is a detriment to diversifying the field, was concerned that this burgeoning QIST field could morph into a non-diverse field like physics, saying, “physics, of course, has just long problems with underrepresented minorities, which because quantum technology is physics led, is a problem that [can] propagate into that”. However, he expressed optimism that “attitude in physics might not hopefully propagate to wider quantum technology, and so might be a way to kind of get…minorities in…so there's that sense of optimism there that with the invention of a new field [QIST], you can invent a new culture hopefully, free from some of the mistakes of physics”. He also felt that whatever strategies are currently being used to increase diversity, equity and inclusion in physics should be employed in QIST since they may be useful for QIST as well.

Discussion and Summary

The interdisciplinary field of QIST is fast-growing, with major implications for the future workforce. This paper focuses on reflections and suggestions from five college quantum educators, who are passionate about teaching quantum concepts, regarding how to diversify the second quantum revolution. Their suggestions can be invaluable for other educators interested in similar pursuits and can be summarized as follows:

• Focus on changing the culture of disciplines such as physics, since the inequitable culture in these sub-disciplines comprising QIST can impact outcomes in QIST, which is interdisciplinary.
• Acknowledge the need for diversity, equity, and inclusion in QIST and support students from diverse backgrounds as appropriate.
• Conduct quantum outreach in both formal (K-12) and informal settings to make students from diverse backgrounds aware of, and interested in, QIST concepts and career opportunities.
• Teach quantum concepts early, e.g., in high school and early in college, to students from diverse backgrounds.
  o More focus in high school on statistics, data science, and programming may be helpful for introducing quantum early.
  o Focus on developing student confidence along with competence.
  o Focus on not teaching early QIST courses as physics courses.
  o Tailor courses to benefit diverse backgrounds and interests, emphasizing applications in quantum computing, communication and sensing that have the potential to solve major societal problems, as well as career pathways.
• Give students early experience in QIST research.
• Increase the number of role models in QIST.
• Increase the visibility and impact of existing underrepresented QIST researchers, e.g., by including them as invited speakers and on committees that make major decisions.
• Try approaches similar to those that have proved useful for diversifying physics.

Based upon these suggestions, the involvement of early learners, e.g., in high school, in college and via outreach, is important for ensuring that students from diverse backgrounds can learn about exciting quantum concepts and careers, particularly because college STEM majors lack diversity. There is a need to ensure that students develop an appropriate level of confidence and competence commensurate with these courses.
Early exposure at the appropriate level can make quantum concepts less spooky and ensure that students recognize the differences between situations in which classical and quantum frameworks are useful to understand physical phenomena. It would help to emphasize QIST applications and how they can help solve major societal problems, such as making nitrogen fixation more energy efficient for agricultural innovation and drug discovery. In addition, discussion of career pathways and the relevance of QIST to national security can be valuable. Giving students opportunities for early experiences in QIST research, commensurate with their prior preparation, can also get students excited about pursuing QIST related careers.

Educators have begun to develop educational tools for early quantum learners. For example, tools have been developed to visualize QIST concepts [8-10] and explore physical realizations of qubits [11], as well as approaches that allow for kinesthetic [12] and game-based learning [13-15] to understand entanglement, decoherence, Bell's inequality and quantum cryptography. Educators can engage students by using other interactive approaches that use "compare and contrast" activities involving quantum and classical concepts [16, 17], diagrammatic tools [18], evidence-based teaching-learning sequences [19] and optics experiments [20, 21] for early learners to get students excited about QIST concepts [22].

Some organizations, such as the National Q-12 Partnership [23] and Quantum for All [24, 25], are providing a community to K-12 educators interested in incorporating quantum concepts early. For example, the National Q-12 Partnership [23] is promoting and supporting quantum education by connecting those with interest in K-12 quantum education with each other, as well as hosting a framework for K-12 quantum education that can be used to develop QIST curricula. They also host a variety of QuanTime activities, which are ready-to-use activities that K-12 educators can use in their classes without having prior experience with teaching QIST concepts. For example, in 2024, there were 27 activities available to K-12 teachers [23], and teachers were encouraged to devote at least one class period around World Quantum Day on April 14 to an activity they selected. Quantum for All [24, 25] has been a leader in providing an inclusive community for K-12 educators interested in incorporating quantum education in their classes in a conceptual manner. They provide intensive professional development activities to K-12 educators to ensure that they feel like they are part of a community with similar goals and feel confident infusing quantum concepts into their classes [24, 25]. Other organizations, such as Womanium [26], are for all levels (high school through graduate school and postdoc). Womanium is founded and run by women, who are QIST researchers and educators, and it is working on bringing in students of all genders, nationalities, and levels to the QIST field [26]. Qubitbyqubit [27] and IBM [28] focus on K-16 QIST education and conduct workshops and courses suited for various levels. While the efforts of these organizations and others to support QIST education are important, particularly due to the increased career opportunities in both quantum-proficient and quantum-aware/adjacent jobs [3], further efforts to diversify QIST are critical for harnessing the talents of demographic groups that have historically been marginalized in the first quantum revolution.
Increasing the visibility and impact of existing underrepresented QIST researchers as well as increasing the number of role models would help with diversification. Changing the culture of disciplines such as physics would help since the inequitable culture in these sub-disciplines can impact the culture of QIST and the extent to which it ends up being diverse, equitable and inclusive as the field matures. Finally, the interdisciplinary nature of QIST, in which nobody can claim to be an expert, can be an opportunity for diversifying this new field whose culture can be shaped and made more equitable than disciplines such as physics, electrical engineering, or computer science that already have a set culture, which is an impediment to diversity, equity and inclusion. However, the interdisciplinary nature of QIST can also present challenges since student preparation is often siloed into different disciplines, and people with a particular type of training may not feel comfortable being part of an interdisciplinary community in which it is difficult to communicate with people with different types of training. Without adequate support, these issues, e.g., related to sense of belonging [29] in such a community consisting of those with training in physics, engineering, computer science, chemistry, etc., may be particularly challenging for people from historically marginalized groups. Coming up with a shared QIST language that is not heavy on quantum physics jargon (unimportant for many stakeholders with non-physics training) can be useful. Ultimately, forging an empathetic and supportive community culture in which those with different types of training feel comfortable and safe sharing their ideas can go a long way to diversify QIST. Acknowledgment We thank the NSF for award PHY-2309260. References [1] Raymer M G and Monroe C 2019 The US national quantum initiative. Quantum Science and Technology 4 (2) 020504. [2] https://qt.eu/. [3] Asfaw A, et al. 2022 Building a quantum engineering undergraduate program. IEEE Transactions on Education 65 (2) 220-242.10.1109/TE.2022.3144943. [4] Fox M F J, et al. 2020 Preparing for the quantum revolution: What is the role of higher education? Physical Review Physics Education Research 16 (2) 020131. [5] Maries A and Singh C 2024 Towards meaningful diversity, equity and inclusion in physics learning environments. Nature Physics 20 (3) 367-375.10.1038/s41567-024-02391-6. [6] Meyer J C, et al. 2024 Disparities in access to U.S. quantum information education. Physical Review Physics Education Research 20 (1) 010131.10.1103/PhysRevPhysEducRes.20.010131. [7] Leslie S-J, et al. 2015 Expectations of brilliance underlie gender distributions across academic disciplines. Science 347 (6219) 262-265.10.1126/science.1261375. [8] Dür W and Heusler S 2014 Visualization of the invisible: the qubit as key to quantum physics. The Physics Teacher 52 (8) 489-492.10.1119/1.4897588. [9] McKagan S B, et al. 2008 Developing and researching PhET simulations for teaching quantum mechanics. Am. J. Phys. 76 (4) 406-417. [10] Kohnle A, et al. 2014 A new introductory quantum mechanics curriculum. European Journal of Physics 35 (1) 015001.10.1088/0143-0807/35/1/015001. [11] Dür W and Heusler S 2016 The Qubit as Key to Quantum Physics Part II: Physical Realizations and Applications. The Physics Teacher 54 (3) 156-159.10.1119/1.4942137. [12] Hahn K and Gire E 2022 Waving arms around to teach quantum mechanics. Am. J. Phys. 90 778- 786.10.1119/5.0073946. 
[13] López-Incera A and Dür W 2019 Entangle me! A game to demonstrate the principles of quantum mechanics. Am. J. Phys. 87 (2) 95-101.10.1119/1.5086275. [14] López-Incera A, et al. 2020 Encrypt me! A game-based approach to Bell inequalities and quantum cryptography. European Journal of Physics 41 (6) 065702.10.1088/1361-6404/ab9a67. [15] Marckwordt J, et al. 2021 Entanglement ball: using dodgeball to introduce quantum entanglement. The Physics Teacher 59 613-616.10.1119/5.0019871. [16] Oss S and Rosi T 2015 A bit of quantum mechanics. The Physics Teacher 53 (4) 230-233.10.1119/1.4914565. [17] Singh C, et al. 2022 Preparing precollege students for the second quantum revolution with core concepts in quantum information science. The Physics Teacher 60 (8) 639-641.10.1119/5.0027661. [18] Rudolph T (2017). Q is for Quantum, Terence Rudolph.https://www.amazon.com/Q-Quantum-Terry- Rudolph/dp/0999063502. [19] Hu P, et al. 2024 Investigating and improving student understanding of the basics of quantum computing. Physical Review Physics Education Research 20 (2) 020108.10.1103/PhysRevPhysEducRes.20.020108. [20] Walsh J A, et al. 2022 Piloting a full-year, optics-based high school course on quantum computing. Physics Education 57 (2) 025010.10.1088/1361-6552/ac3dc2. [21] Bitzenbauer P and Meyn J-P 2020 A new teaching concept on quantum physics in secondary schools. Physics Education 55 (5) 055031.10.1088/1361-6552/aba208. [22] Darienzo M, et al. 2024 Student attitudes toward quantum information science and technology in a high school outreach program. Phys. Rev. PER 20 020126. https://doi.org/10.1103/PhysRevPhysEducRes.20.020126. [23] https://q12education.org/quantime. [24] https://quantumforall.org/. [25] Matsler K, et al. 2024 Applying Classroom Practices Learned from Virtual Professional Development During a Pandemic. The Physics Teacher 62 41-46.10.1119/5.0107084. [26] https://www.womanium.org/. [27] https://www.qubitbyqubit.org/. [28] Singh C, et al. 2021 Preparing students to be leaders of the quantum information revolution. Physics Today https://doi.org/10.1063/PT.6.5.20210927a. [29] Binning K R, et al. 2020 Changing social contexts to foster equity in college science courses: An ecological- belonging intervention. Psychological Science 31 (9) 1059-1070.10.1177/0956797620929984.
Reflections of quantum educators on strategies to diversify the second quantum revolution

Apekshya Ghimire* and Chandralekha Singh
3941 O'Hara St, Pittsburgh, PA, 15260

Introduction

The second quantum revolution focuses on quantum information science and technology (QIST). It promises to bring about unprecedented improvements in computing, communication and sensing due to our ability to exquisitely control and manipulate quantum systems [1, 2]. This fast-growing field presents equally unprecedented opportunities and challenges for preparing students to be future leaders in this area [3, 4]. One major challenge pertains to how to diversify the second quantum revolution. The first quantum revolution, which began nearly a century ago and laid the foundation for the technological breakthroughs of the mid-20th century, reflects the broader lack of diversity that characterized the scientific landscape of that era. Since the second quantum revolution is in its infancy, it is critical to contemplate these issues to ensure that people from historically marginalized groups in physics and related fields can contribute equitably [5, 6].

This paper focuses on reflections and suggestions of five college quantum educators from four different institutions (two from the same institution) regarding what can be done to diversify the second quantum revolution. They are leading QIST researchers and are very passionate about improving quantum education. The educators were also asked about their thoughts on whether the interdisciplinary nature of the field, in which nobody can claim to be an expert in all aspects of QIST, may make it easier to create a better culture from the beginning, supportive of equitable participation of diverse groups, unlike physics. This is because disciplines such as physics have an ingrained inequitable culture based on brilliance attribution [7] that is a major impediment to diversity, equity and inclusion. Educators were interviewed on Zoom using a semi-structured think-aloud protocol about various issues related to QIST education, including those pertaining to how to diversify the second quantum revolution discussed here. Below, we discuss some excerpts that exemplify the interviewed quantum educators' reflections and suggestions in their own words. They are insightful and can help other educators adapt and implement strategies to diversify QIST. We mention below that one of the educators was a woman because her gender identity may be useful to understand her point of view.

Quantum educators in their own words

Some educators emphasized the importance of teaching quantum concepts early, e.g., in high school and early in college, to students from diverse backgrounds. This exposure is important because targeting diverse high schools has the potential to introduce exciting quantum concepts and careers to students from diverse backgrounds, considering that diversity is severely lacking among college STEM majors. Early introduction is also valuable because learning quantum concepts early may make the subject matter less mysterious. Students may also better learn to discern the contexts in which classical and quantum concepts are applicable. This is similar to the facility children who learn two different languages early have in discerning which language to use in which context and switching between them as needed.
Emphasis on potential QIST applications focusing on solving societal problems that resonate with students, and discussing career pathways with them, can be useful. For example, one male educator emphasized the importance of teaching quantum and relevant mathematical concepts early saying, "If I had the magic wand that could reform, you know, high school curriculum, I would probably change a few [parts] of the mathematics teaching. Like, I think, data science and statistics at this point should have at least as much space as calculus does. Well, once you understand statistics...then quantum is a small step from there and [can even be taught with] linear algebra. I would teach...coding, programming, statistics... And once you've got that, you can teach any quantum you want...even in high school...My course, that I am teaching second year [college students] here, with minor modifications, I could teach in high school." He also emphasized the importance of helping students from diverse backgrounds develop confidence along with competence from early on while learning QIST concepts saying, "in every field, no matter how developed or under-developed [like QIST] it is, you will find people who have a lot of confidence, you know, becoming the most visible ones...It's something that students should learn a little in high school...".

Another male educator, who is passionate about early teaching as well as early QIST-related research experiences for first- and second-year college students in science and engineering, reflected on the importance of supporting all students in foundational courses, especially those not particularly interested in physics, saying, "let's say that we [with physics background] are teaching a college course on foundations of quantum computing...we found some advisors who said to students in electrical engineering or computer science that this might be a good thing if they were thinking about...going into the workforce in this area. Now when they landed in those courses...they're like, oh, this sounds too much like physics to me. So what can you do?...We want all these students if they're going to take an intro course...to succeed...there should be something for everybody...They shouldn't be punished for not being a physicist". He acknowledged that there are many ways of structuring interdisciplinary QIST courses for early learners, but felt educators need to emphasize applications in quantum computing, communication and sensing and how they could impact society through examples that resonate with students from diverse backgrounds, as well as discuss career pathways. He also noted that while these early courses should mainly focus on learners with diverse background preparations and interests, students can be provided enrichment options through projects/special modules as part of the course that dive more deeply into the sub-field of QIST, e.g., engineering or computer science, they may be more interested in. This educator also noted that he currently leads a first experiences in quantum research program for first-year college students, which he hopes will play a role in diversifying QIST. He added that diversity issues exist not only in QIST but in many STEM disciplines including physics, stating that "simply acknowledging this...and trying to make sure that people have support that they need throughout their...education to be able to...walk into a room where people don't look like them and succeed" is important.
He did not think that QIST being a new field makes it easier to diversify than sub-fields such as physics: "I do think that there's in some sense...we could just like start over afresh, because this [QIST] is a new field, and we don't carry over the baggage...I'm not sure that it's succeeding so far". He felt that the current diversity in QIST is dismal saying, "I don't see things as automatically getting fixed just because we rebranded. And now it's quantum 2.0. I think that it's...very important to push back against these...And I think that the earlier the positive experiences people have in doing that [learning quantum concepts] successfully, the better the outcome could be, that you have to...realize that, you know, imposter syndrome is like, almost everybody feels that...has that kind of feeling at some stage or another and that you...can embrace it and say, well, I'm new to this field, but I still belong here...[do] some type of self-affirmation". He also reflected on the highly interdisciplinary nature of QIST playing a role in sense of belonging and emphasized the role of educators and researchers in contemplating ways to integrate students in QIST who are trained differently. He felt that since the preparation of students is siloed in physics, engineering, computer science etc., some students may feel uncomfortable being part of this new interdisciplinary field of QIST saying, "what's interesting about the quantum is...there are dimensions to [it] where people feel like they don't belong...they have to do with their professional upbringing, right? So, somebody can feel that they don't belong...because they weren't trained in a certain way and I think that it's important to address all of these issues, right? Whatever it is that is inhibiting people from learning and doing [QIST]".

A female educator reflected on these issues saying, "I think, in general...physics has been very sort of depressingly bad...I'd say that quantum does not have good representation, you know it's not so inclusive...". Unless the culture of each subfield that makes up QIST, e.g., physics, changes, she disagreed with "this idea that there are these different fields with somewhat different cultures and maybe that could provide an avenue to improving [QIST]", emphasizing that a major barrier to diversity is "this thing of like, if you get a guy, and he knows, you know, half of something, [he'll say] oh, yeah, I know that. And you get a woman, and she knows more, [she'll say] oh, I don't know...everybody's faking it. But somehow people's internalization of what they know relative to everyone else is very culturally dependent". She felt that unless we can do something to eradicate these types of dichotomies that are systemic in the culture, e.g., overly confident men bragging about more than what they know and women undermining themselves, the new field of QIST will continue to be intimidating for many women and other underrepresented people, similar to physics, and they would decide not to be part of a field with such an inequitable culture. She gave several concrete examples of men being overconfident and acting as if they were pinnacles of brilliance based upon her own interactions with them in physics. She exemplified specific cases in which they knew significantly less than her but pretended to be on top of everything and took the opportunity to promote themselves.
She reiterated that she is worried that this kind of inequitable culture that pervades physics will seep into the new field of QIST since the same people who have dominated the space in physics will continue to do so in QIST: "I'm saying just like having that confidence of, you know, I belong here...[I disagree that] one of the great things about a field that's as broad as this one [QIST] is that nobody knows everything. But you don't feel that way [if you are an underrepresented person such as a woman], you feel like everybody knows it but you [because of the way many men portray themselves]". She felt that the only way to diversify QIST is to change the systemic cultural issues in physics and other related disciplines that make up QIST, in which brilliance is over-emphasized and many individuals from the dominant group pretend and portray to know more than they do, thereby intimidating others and even driving them out of the discipline.

Another male educator strongly believed that early outreach to students from diverse backgrounds in various settings can play a critical role in diversifying QIST saying, "we've got a massive issue with diversity in physics and...in electrical engineering and other engineering disciplines...in computer science...and I think we, as physicists, have to be very active in going out to the general public". He also emphasized the importance of increasing the visibility and impact of underrepresented people in QIST saying, "And I think that it's important to emphasize and underline the contributions of those role models and to show to people that this is a field for everyone". He thought it was important that "when we organize events, we represent the diversity of people who are involved...there are people from a lot of minority ethnic groups involved in this area. There are a lot of women involved in this area really doing internationally leading research". To diversify QIST, he stressed the need to change the culture of physics and related disciplines comprising QIST that currently favors the dominant group: "we need to keep looking at our own research teams, our ways of interacting with people. We need to look at our recruiting practices. We need to look at the ways in which we organize meetings, and we need to be constantly questioning whether we have tried to understand all of this best practice from what's been done in this field around the world, making sure that we implement it in our own teams and make sure that we essentially spread the message and take leadership roles in saying that diversity, equity, and inclusion, these things are extremely important to us, and extremely important to the health of the field, going forth". He added, "we lose out still a lot. A lot of very talented physicists choose to go into other areas...they have bad experiences with a workplace culture...I also think that when you get a diverse range of people, people with different ethnic backgrounds, people of different nationalities, people from different gender backgrounds, people from...all different ways you can be, I find that teams tend to work much better. They tend to be more creative because people coming in with different ideas, different ways [of] thinking lead to different types of creativity, but also the culture of teams change...most inefficient committees I've ever been on were essentially all made up of white men and the most productive committees I was on were typically always characterized by having a very diverse range of people.
And I think for the health of our field, it's important that we take leadership roles and that we do everything we can in this area".

Another male educator, who acknowledged that the physics culture is a detriment to diversification, was concerned that this burgeoning QIST field can morph into a non-diverse field like physics saying, "physics, of course, has just long problems with underrepresented minorities, which because quantum technology is physics led, is a problem that [can] propagate into that". However, he expressed optimism that "attitude in physics might not hopefully propagate to wider quantum technology, and so might be a way to kind of get...minorities in...so there's that sense of optimism there that with the invention of a new field [QIST], you can invent a new culture hopefully, free from some of the mistakes of physics". He also felt that whatever strategies are currently being used to increase diversity, equity and inclusion in physics should be employed in QIST since they may be useful for QIST as well.

Discussion and Summary

The interdisciplinary field of QIST is fast-growing, with major implications for the future workforce. This paper focuses on reflections and suggestions from five college quantum educators, who are passionate about teaching quantum concepts, regarding how to diversify the second quantum revolution. Their suggestions can be invaluable for other educators interested in similar pursuits and can be summarized as follows:
• Focus on changing the culture of disciplines such as physics, since the inequitable culture in these sub-disciplines comprising QIST can impact outcomes in QIST, which is interdisciplinary.
• Acknowledge the need for diversity, equity, and inclusion in QIST and support students from diverse backgrounds as appropriate.
• Conduct quantum outreach in both formal (K-12) and informal settings to make students from diverse backgrounds aware of and interested in QIST concepts and career opportunities.
• Teach quantum concepts early, e.g., in high school and early in college, to students from diverse backgrounds.
o More focus in high school on statistics, data science, and programming may be helpful for introducing quantum early.
o Focus on developing student confidence along with competence.
o Focus on not teaching early QIST courses as physics courses.
o Tailor courses to benefit diverse backgrounds and interests, emphasizing applications in quantum computing, communication and sensing that have the potential to solve major societal problems, as well as career pathways.
• Give students early experience in QIST research.
• Increase the number of role models in QIST.
• Increase the visibility and impact of existing underrepresented QIST researchers, e.g., by including them as invited speakers and on committees that make major decisions.
• Try approaches similar to those that have proved useful for diversifying physics.

Based upon these suggestions, the involvement of early learners, e.g., in high school, in college, and via outreach, is important for ensuring that students from diverse backgrounds can learn about exciting quantum concepts and careers, particularly because college STEM majors lack diversity. There is a need to ensure that students develop an appropriate level of confidence and competence commensurate with these courses.
Early exposure at the appropriate level can make quantum concepts less spooky and ensure that students recognize the differences between situations in which classical and quantum frameworks are useful to understand physical phenomena. It would help to emphasize QIST applications and how they can help solve major societal problems, such as making nitrogen fixation more energy efficient for agricultural innovation, and drug discovery. In addition, discussion of career pathways and the relevance of QIST to national security can be valuable. Giving students opportunities for early experiences in QIST research, as appropriate and commensurate with their prior preparation, can also get students excited about pursuing QIST-related careers.

Educators have begun to develop educational tools for early quantum learners. For example, tools have been developed to visualize QIST concepts [8-10] and explore physical realizations of qubits [11], as well as approaches that allow for kinesthetic [12] and game-based learning [13-15] to understand entanglement, decoherence, Bell's inequality and quantum cryptography. Educators can engage students by using other interactive approaches that use "compare and contrast" activities involving quantum and classical concepts [16, 17], diagrammatic tools [18], evidence-based teaching-learning sequences [19] and optics experiments [20, 21] for early learners to get students excited about QIST concepts [22]. Some organizations, such as the National Q-12 partnership [23] and Quantum for All [24, 25], are providing a community for K-12 educators interested in incorporating quantum concepts early. For example, the National Q-12 partnership [23] is promoting and supporting quantum education by connecting those with interest in K-12 quantum education with each other, as well as hosting a framework for K-12 quantum education that can be used to develop QIST curricula. They also host a variety of QuanTime activities, ready-to-use activities that K-12 educators can use in their classes without having prior experience with teaching QIST concepts. For example, in 2024, there were 27 activities available to K-12 teachers [23], and teachers were encouraged to devote at least one class period around World Quantum Day (April 14) to one of the activities they selected. Quantum for All [24, 25] has been a leader in providing an inclusive community for K-12 educators interested in incorporating quantum education into their classes in a conceptual manner. They provide intensive professional development activities to K-12 educators to ensure that they feel like they are part of a community with similar goals and feel confident infusing quantum concepts into their classes [24, 25]. Other organizations, such as Womanium [26], are for all levels (high school through graduate school and postdoc). Womanium is founded and run by women who are QISE researchers and educators, and it is working on bringing students of all genders, nationalities, and levels into the QIST field [26]. Qubitbyqubit [27] and IBM [28] focus on K-16 QIST education and conduct workshops and courses suited for various levels. While the efforts of these organizations and others to support QIST education are important, particularly due to the increased career opportunities in both quantum-proficient and quantum-aware/adjacent jobs [3], further efforts to diversify QIST are critical for harnessing the talents of demographic groups that have historically been marginalized in the first quantum revolution.
Increasing the visibility and impact of existing underrepresented QIST researchers, as well as increasing the number of role models, would help with diversification. Changing the culture of disciplines such as physics would help, since the inequitable culture in these sub-disciplines can impact the culture of QIST and the extent to which it ends up being diverse, equitable and inclusive as the field matures. Finally, the interdisciplinary nature of QIST, in which nobody can claim to be an expert, can be an opportunity for diversifying this new field, whose culture can still be shaped and made more equitable than that of disciplines such as physics, electrical engineering, or computer science, which already have a set culture that is an impediment to diversity, equity and inclusion. However, the interdisciplinary nature of QIST can also present challenges, since student preparation is often siloed into different disciplines, and people with a particular type of training may not feel comfortable being part of an interdisciplinary community in which it is difficult to communicate with people with different types of training. Without adequate support, these issues, e.g., those related to sense of belonging [29] in such a community consisting of those with training in physics, engineering, computer science, chemistry, etc., may be particularly challenging for people from historically marginalized groups. Coming up with a shared QIST language that is not heavy on quantum physics jargon (unimportant for many stakeholders with non-physics training) can be useful. Ultimately, forging an empathetic and supportive community culture in which those with different types of training feel comfortable and safe sharing their ideas can go a long way toward diversifying QIST.

Acknowledgment

We thank the NSF for award PHY-2309260.

References

[1] Raymer M G and Monroe C 2019 The US national quantum initiative. Quantum Science and Technology 4 (2) 020504.
[2] https://qt.eu/.
[3] Asfaw A, et al. 2022 Building a quantum engineering undergraduate program. IEEE Transactions on Education 65 (2) 220-242. 10.1109/TE.2022.3144943.
[4] Fox M F J, et al. 2020 Preparing for the quantum revolution: What is the role of higher education? Physical Review Physics Education Research 16 (2) 020131.
[5] Maries A and Singh C 2024 Towards meaningful diversity, equity and inclusion in physics learning environments. Nature Physics 20 (3) 367-375. 10.1038/s41567-024-02391-6.
[6] Meyer J C, et al. 2024 Disparities in access to U.S. quantum information education. Physical Review Physics Education Research 20 (1) 010131. 10.1103/PhysRevPhysEducRes.20.010131.
[7] Leslie S-J, et al. 2015 Expectations of brilliance underlie gender distributions across academic disciplines. Science 347 (6219) 262-265. 10.1126/science.1261375.
[8] Dür W and Heusler S 2014 Visualization of the invisible: the qubit as key to quantum physics. The Physics Teacher 52 (8) 489-492. 10.1119/1.4897588.
[9] McKagan S B, et al. 2008 Developing and researching PhET simulations for teaching quantum mechanics. Am. J. Phys. 76 (4) 406-417.
[10] Kohnle A, et al. 2014 A new introductory quantum mechanics curriculum. European Journal of Physics 35 (1) 015001. 10.1088/0143-0807/35/1/015001.
[11] Dür W and Heusler S 2016 The Qubit as Key to Quantum Physics Part II: Physical Realizations and Applications. The Physics Teacher 54 (3) 156-159. 10.1119/1.4942137.
[12] Hahn K and Gire E 2022 Waving arms around to teach quantum mechanics. Am. J. Phys. 90 778-786. 10.1119/5.0073946.
[13] López-Incera A and Dür W 2019 Entangle me! A game to demonstrate the principles of quantum mechanics. Am. J. Phys. 87 (2) 95-101. 10.1119/1.5086275.
[14] López-Incera A, et al. 2020 Encrypt me! A game-based approach to Bell inequalities and quantum cryptography. European Journal of Physics 41 (6) 065702. 10.1088/1361-6404/ab9a67.
[15] Marckwordt J, et al. 2021 Entanglement ball: using dodgeball to introduce quantum entanglement. The Physics Teacher 59 613-616. 10.1119/5.0019871.
[16] Oss S and Rosi T 2015 A bit of quantum mechanics. The Physics Teacher 53 (4) 230-233. 10.1119/1.4914565.
[17] Singh C, et al. 2022 Preparing precollege students for the second quantum revolution with core concepts in quantum information science. The Physics Teacher 60 (8) 639-641. 10.1119/5.0027661.
[18] Rudolph T 2017 Q is for Quantum. https://www.amazon.com/Q-Quantum-Terry-Rudolph/dp/0999063502.
[19] Hu P, et al. 2024 Investigating and improving student understanding of the basics of quantum computing. Physical Review Physics Education Research 20 (2) 020108. 10.1103/PhysRevPhysEducRes.20.020108.
[20] Walsh J A, et al. 2022 Piloting a full-year, optics-based high school course on quantum computing. Physics Education 57 (2) 025010. 10.1088/1361-6552/ac3dc2.
[21] Bitzenbauer P and Meyn J-P 2020 A new teaching concept on quantum physics in secondary schools. Physics Education 55 (5) 055031. 10.1088/1361-6552/aba208.
[22] Darienzo M, et al. 2024 Student attitudes toward quantum information science and technology in a high school outreach program. Phys. Rev. PER 20 020126. https://doi.org/10.1103/PhysRevPhysEducRes.20.020126.
[23] https://q12education.org/quantime.
[24] https://quantumforall.org/.
[25] Matsler K, et al. 2024 Applying Classroom Practices Learned from Virtual Professional Development During a Pandemic. The Physics Teacher 62 41-46. 10.1119/5.0107084.
[26] https://www.womanium.org/.
[27] https://www.qubitbyqubit.org/.
[28] Singh C, et al. 2021 Preparing students to be leaders of the quantum information revolution. Physics Today. https://doi.org/10.1063/PT.6.5.20210927a.
[29] Binning K R, et al. 2020 Changing social contexts to foster equity in college science courses: An ecological-belonging intervention. Psychological Science 31 (9) 1059-1070. 10.1177/0956797620929984.
Grain volume distribution alters the critical phenomena in complex granular systems

Teng Man,1,∗ Yimin Lu,2 Zhongrong Wang,3 Herbert Huppert,4 Alessio Zaccone,5 and Honglei Sun1,†

1 College of Civil Engineering, Zhejiang University of Technology, Hangzhou, Zhejiang 310023, China
2 Department of Civil, Environmental, and Construction Engineering, Texas Tech University, Lubbock, Texas 79409, United States
3 School of Engineering, Royal Melbourne Institute of Technology, Victoria 3001, Australia
4 Institute of Theoretical Geophysics, King's College, University of Cambridge, King's Parade, Cambridge CB2 1ST, United Kingdom
5 Department of Physics "A. Pontremoli," University of Milan, via Celoria 16, 20133 Milan, Italy
∗ manteng0520@zjut.edu.cn
† sunhonglei@zju.edu.cn

(Dated: October 17, 2025)

The grain size distribution (GSD) plays an important role in the mechanical properties of amorphous disordered systems and complex granular materials. Varying the GSD causes segregation issues and alters critical behaviors. This work used the discrete element method (DEM) to investigate the rheological and critical behaviors of sheared granular flows with various GSDs. The results show that, while a unified rheological relation can be obtained, a characteristic length scale, which is associated with the contact probability and can be obtained from any GSD, is embedded within such a polydisperse disordered system. We further acquire a correlation function between critical solid fractions and dimensionless grain volume distributions. This work elucidates the effect of particle volumes on the rheology and micromechanics of dry granular systems and provides further insights into better incorporating the influence of other particle properties into a unified framework, which is helpful and critical for the corresponding engineering and geophysical problems.

Granular materials and disordered systems are commonly encountered in natural systems and industrial processes, such as landslides, pyroclastic flows, fresh concrete, and pharmaceutical and chemical particulates [1-4]. In recent decades, extensive investigations have been conducted to describe the rheology [5, 6], compaction [7, 8], and jamming transition [9, 10] of granular materials. The proposal of the inertial number $I_c$ and the viscous number $I_v$ suggests that the dynamics of granular flows is governed by the combination of three time scales: $t_i = d_p/\sqrt{\sigma_n/\rho_p}$, $t_v = \eta_f/\sigma_n$, and $t_M = 1/\dot{\gamma}$, where $t_i$, $t_v$, and $t_M$ are inertial, viscous, and macroscopic time scales, $d_p$ and $\rho_p$ are the particle diameter and density, $\sigma_n$ is the pressure, and $\eta_f$ denotes the fluid viscosity if the system is submerged or fully saturated. However, the current rheological framework cannot fully capture the influence of either particle shapes or size distributions.

The grain size distribution plays an important role in the rheology of granular systems, but previous studies often provide seemingly contradictory opinions: some studies suggested that the existence of small particles initiates a lubrication effect that helps the system increase its mobility [11], while others believed that a wider GSD induces a denser packing fraction that increases the shear strength of granular systems and makes the system easier to transition to a jammed state [12].
Hill and Yohannes [13] studied the rheology of bidispersed granular flows using the discrete element model (DEM) [13, 14] and suggested that the pressure of a granular system is determined not only by its size distribution, but also by the effective free volume per particle, which was adopted from the jammed packing of monosized particles. They stated that the dimensionless pressure, $(1/I_c)^2$, scales as a function of the coordination number, $Z_c$. Polanía et al. [15] studied the rheological behavior of polydispersed granular systems and concluded that such systems still behave similarly to monodispersed systems if the right length scale is chosen, but the solid fraction can be quite different due to polydispersity. Recently, Ding et al. [11] also studied the behavior of granular systems with fractal distributions and found that the shear strength of confined flow decreases at large fractal dimensions, which can be further linked to help decipher the mechanisms within the highly localized basal zones and the high mobility of general geological avalanches.

In this Letter, we focus on disordered physical systems with various continuous GSDs, where sheared granular materials with three different cumulative distribution functions (CDF) are examined using DEM. We show that, unlike the length scale proposed in Polanía et al. [15], a simply defined "average" particle size, which has a clear physical root in the contact probability, already unifies the rheological behavior of dry granular systems with a wide range of different particle size distributions. Further analyses are devoted to better understanding the critical phenomena when a granular system with different GSDs and frictional properties approaches the "jamming" transition.

Methodology and simulation setup. In this work, the classical DEM is used to simulate sheared granular systems, where the particles are spherical and follow a linear spring-dashpot contact model. The normal contact force $F^n_{ij}$ is calculated as a Hookean contact with energy damping.

FIG. 1. Rheological behaviors: (a-c) The µeff ∼ Ica and ϕs ∼ Ica relationships plotted for systems with size distributions D1, D2, and D3, where Ica is the conventional inertial number that uses the system-averaged particle diameter as the characteristic length. The inset of (b) shows the relationship between two length scales, dac and daw, where dac is the average particle diameter calculated based on the contact number of each particle and daw is the average particle diameter calculated based on the volume of each particle. (e-f) The µeff ∼ Icw and ϕs ∼ Icw relationships plotted for systems with size distributions D1, D2, and D3, where Icw is the new dimensionless number calculated with daw. The inset of (d) presents a simulation snapshot, and the inset of (f) is the relationship between ϕs/ϕc and the proposed new inertial number Icw (see Eq. 1).

The magnitude of the tangential contact force $|k_t \delta^t_{ij}|$ cannot exceed its friction limit, $\mu_p |F^n_{ij}|$, where $\mu_p$ is the particle frictional coefficient, and for most cases we set $\mu_p = 0.3$. The system has no rolling resistance. The granular system is located between two sawteeth plates and is sheared with constant pressure and shear rate.
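For readers who wish to experiment with this contact law, the following minimal Python sketch evaluates the normal and tangential forces described above (the damping coefficient follows Eq. A1 of Appendix A). The function and variable names are ours and purely illustrative; this is a sketch of the stated force law, not the MECHSYS implementation used for the actual simulations.

import numpy as np

def normal_contact_force(kn, en, m_i, m_j, delta_n, delta_n_dot):
    # Reduced mass m_eff = (1/m_i + 1/m_j)^(-1)
    m_eff = 1.0 / (1.0 / m_i + 1.0 / m_j)
    # Damping ratio from the restitution coefficient (Eq. A1)
    xi_n = -np.log(en) / np.sqrt(np.pi**2 + np.log(en)**2)
    cn = 2.0 * xi_n * np.sqrt(m_eff * kn)
    # Hookean spring plus velocity-proportional damping: F^n = -kn*delta - cn*delta_dot
    return -kn * delta_n - cn * delta_n_dot

def tangential_contact_force(kt, delta_t, mu_p, Fn_mag):
    # Elastic tangential spring, capped by the Coulomb limit mu_p * |F^n|
    Ft = -kt * delta_t
    return np.clip(Ft, -mu_p * Fn_mag, mu_p * Fn_mag)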
For different simulations, we vary the pressure σ from 50 to 1000 Pa and the shear rate γ̇ from 0.01 to 1.0 s⁻¹, and measure the mean values of the key variables once the sheared system is in steady state (as shown in Appendix A), where its solid fraction ϕs, pressure σ, shear stress τ, and velocity profile become stable. We can obtain the effective frictional coefficient µeff = τ/σ, the effective viscosity ηeff = τ/γ̇, and the average inertial number Ic from each simulation for further investigation. A plot of the shear cell is shown in the inset of FIG. 1(d). The particle size distributions used in this work, i.e., the Fuller distribution and two power-law distributions, are elaborated in detail in Appendix B.

Rheological behaviors. According to the classical µ(I) rheology of dry granular systems, both µeff = τ/σ and ϕs should scale with the inertial number $I_{ca} = \dot{\gamma} d_p/\sqrt{\sigma/\rho_p}$, where $d_p$ is the particle diameter and also a characteristic length scale. Ge et al. [6] proposed that the length scale plays an important role in determining the rheological behavior of granular systems, and we further argue that this length scale should be adjusted to reflect changes in particle size distributions. FIGs. 1(a-b) demonstrate the µeff ∼ Ica relationship for systems with GSDs D1, D2, and D3, which are clarified in Appendix B, whereas FIG. 1(c) shows the ϕs ∼ Ica relationship, where Ica represents the conventional inertial number that uses the average particle diameter of the system as the characteristic length. FIGs. 1(a-c) show that, even though the µeff ∼ Ica and ϕs ∼ Ica relationships collapse for systems with any specific GSD, changing the particle gradation leads to deviations in both µeff and ϕs. FIGs. 1(a) and (b) show that systems with different particle gradations result in different transitional Ica, which denotes a transition from quasi-static to inertial flow. The change in this transitional Ica implies that the characteristic length of the system may also change due to changes of GSDs. Hill and Yohannes [13] reported that the average particle size should be calculated based on each particle's volume. Thus, we define the characteristic length of the system as $d_{aw} = [\sum_{i=1}^{N_p} d_{pi} V_{pi}]/[\sum_{j=1}^{N_p} V_{pj}]$, where particles with different volumes are assumed to have different weights in calculating this "average" particle size. Thus, a new inertial number can be defined as

$I_{cw} = \dot{\gamma}\, d_{aw} (\rho_p/\sigma)^{1/2}$.   (1)

After adopting the new inertial number, as shown in FIGs. 1(d,e), the relationships between µeff and Icw collapse onto a single master curve, indicating that the characteristic length, daw, reflects some physical and topological nature of the sheared granular systems, and that choosing an appropriate length scale helps unify the frictional rheology of dry granular systems with any size distribution. However, we admit that Icw fails to unify the ϕs ∼ Icw relationship, although FIG. 1(f) looks much better than FIG. 1(c). For systems with different GSDs, the transitional Icw seems to be unified, and the only difference lies in the ϕs-axis, i.e., moving the ϕs ∼ Icw relationships upward or downward can also unify this relation.

FIG. 2. Relationship between the effective viscosity of granular systems, ηeff = τ/γ̇, and the solid fraction, ϕs. In the inset of this figure, we change the horizontal and vertical axes into |ϕs − ϕc| and $\eta_{\rm eff}^{-1/1.3}$, respectively. The dashed line represents a linear relationship.
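As a concrete illustration of the volume-weighted length scale and Eq. (1), a short Python sketch follows; the function names and the sample values are ours and for illustration only, not taken from the simulations.

import numpy as np

def d_aw(d):
    # Volume-weighted average diameter: d_aw = sum(d_i * V_i) / sum(V_j)
    V = np.pi / 6.0 * d**3          # sphere volumes from diameters
    return np.sum(d * V) / np.sum(V)

def I_cw(gamma_dot, d, sigma, rho_p):
    # Eq. (1): I_cw = gamma_dot * d_aw * (rho_p / sigma)**0.5
    return gamma_dot * d_aw(d) * np.sqrt(rho_p / sigma)

# Illustrative numbers only (uniformly distributed diameters in meters):
d = np.random.default_rng(1).uniform(0.01, 0.10, 5000)
print(I_cw(gamma_dot=0.5, d=d, sigma=200.0, rho_p=2650.0))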
For a series of simulations with the same GSD, the relationship between ϕs and Icw can be written as ϕs = ϕc/(1 + αϕ Icw), where αϕ ≈ 0.32. Thus, if we change the vertical coordinate into ϕs/ϕc, where ϕc is the critical solid fraction for each GSD, we can obtain a master curve ϕs/ϕc = 1/(1 + αϕ Icw), shown as the blue dashed curve in the inset of FIG. 1(f).

Critical length scale and critical solid fraction. Our choices of characteristic length and critical solid fraction have deep physical and geometrical reasons, even though they may seem deliberately fitted. On one hand, daw can be related to the coordination number of each polydispersed granular system. The characteristic length should be linked to particle volumes because they are directly related to the number of contacts. Intuitively, a particle with larger size has a greater chance to contact more particles, indicating that such a particle plays a more important role in bearing strong force networks and determining the macroscopic behavior of the whole system. To validate this, we calculated another length scale, dac, based on the contact numbers of each particle, where $d_{ac} = [\sum_i d_{pi} N_{ci}]/[\sum_j N_{cj}]$ and $N_{ci}$ is the contact number of particle i, and plotted its relationship with daw in the inset of FIG. 1(b). This shows that daw and dac are linearly related, which verifies that daw, which can be easily obtained from the GSD, has a deep root in contact statistics.

ϕc varies with the GSD and frictional properties of particles. It is clearly neither the jamming solid fraction, ϕJ [10], nor the random close packing fraction, ϕRCP [16]. However, Zaccone [16] and his follow-up research on the random close packing of polydisperse granular systems [17] encourage us to argue that ϕc is a function of ϕRCP and µp. Since we kept µp constant, ϕc is predominantly controlled by ϕRCP, which should be related to the GSD of a granular system. Thus, we could obtain a functional relationship between ϕc and the GSD.

We note that ϕc for each GSD should not be obtained from fitting the ϕs ∼ Icw relationship, since the functional form of that relationship is phenomenological rather than physics-based. Fortunately, Zaccone [16] derived the relationship between the effective viscosity ηeff = τ/γ̇ and ϕs from first principles, which yields $\eta_{\rm eff} \sim |\phi_c - \phi_s|^{-1.3}$. Thus, if we plot $\eta_{\rm eff}^{-1/1.3}$ against |ϕc − ϕs|, we should be able to obtain a linear relationship. In FIG. 2, we plot ηeff against ϕs for all simulations and find that ηeff always diverges at relatively high solid fractions (which differ for systems with different GSDs). The solid fraction at which a granular system approaches infinite viscosity indicates a critical solid fraction, ϕc. For each GSD, we can obtain a specific ϕc, and $\eta_{\rm eff}^{-1/1.3}$ and |ϕc − ϕs| are linearly related, as shown in the inset of FIG. 2.

Our above analyses indicate that the length scale daw, which encodes contact statistics, helps to unify the µeff ∼ Icw relationship but leaves the problem of ϕc for us to solve. This implies that, for systems with different GSDs but exactly the same daw, the stress-strain rate relationships behave similarly, but the solid fractions differ from each other. Since stresses in granular materials are strongly related to force networks (especially the strong force network), granular systems with different GSDs may have different amounts of rattler particles, which do not alter the strong force network but occupy different amounts of volume, resulting in different solid fractions.
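Since $\eta_{\rm eff}^{-1/1.3}$ is predicted to vanish linearly at ϕs = ϕc, the critical solid fraction can be located from steady-state data by straight-line extrapolation. A minimal Python sketch of this procedure, assuming arrays of measured ϕs and ηeff are available (the fitting details are ours, not taken from the paper):

import numpy as np

def estimate_phi_c(phi_s, eta_eff):
    # eta_eff ~ |phi_c - phi_s|^(-1.3)  =>  eta_eff**(-1/1.3) is linear
    # in phi_s and crosses zero at phi_s = phi_c
    y = eta_eff ** (-1.0 / 1.3)
    slope, intercept = np.polyfit(phi_s, y, 1)
    return -intercept / slope     # root of the fitted line gives phi_c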
We define particles with zero contacts or only one contact as rattlers, and particles with more than one contact as non-rattlers. While calculating the contact number of each particle, Nc, and the coordination number, Zc ≡ ⟨Nc⟩, of each simulation, we can also obtain the volumetric percentage of rattlers, Vp01/Vp, and the coordination number of non-rattlers, Zc≥2. FIG. 3(a) plots the histograms for a set of simulations with the GSD D3 and η3 = 1.0, where the legend provides the corresponding values of Icw. Increasing Icw leads to a decrease in the number of particles with large contact numbers and increases the chance of finding particles with only one contact, indicating that increasing the inertial number results in more binary collisions. With such a wide range of particle sizes, a sheared granular system naturally has many "small" particles, which do not play a role in the force network. FIG. 3(b) indicates that, for some cases, Zc is even below 1.0, but for each set of simulations with the same GSD, the data collapse onto a Zc ∼ ϕs master curve. However, it is difficult to figure out how changing the GSD alters Zc.

Neglecting particles with only zero or one contact and changing the vertical axis into the average coordination number of particles with more than one contact, Zc≥2, in FIG. 3(c), the relationship Zc≥2 ∼ ϕs shows power-law features with systematic changes introduced by changing the GSD. The inset of FIG. 3(c) shows that dividing ϕs by ϕc helps to get a unified Zc≥2 ∼ ϕs/ϕc relationship, which is similar to the ηeff ∼ ϕs/ϕc relationship. We argue that it is mostly particles with Nc ≥ 2 that determine the effective viscosity of the system. All of these imply that changing the GSD leads to a different percentage of particles that participate in bearing forces in granular systems. However, FIG. 3(d) contradicts our expectation and shows that changing the GSD does not change the volume percentage of particles with Nc ≤ 1 much.

FIG. 3. (a) Histogram of the number of contacts while changing Icw. This figure only plots data from the system with D3 and η3 = 1.0. (b) The Zc ∼ ϕs relationship for systems with different GSDs, where Zc is the average coordination number. (c) The volume percentage of particles with only 0 or 1 contact plotted against Icw. (d) Relationship between the dimensionless granular temperature, Tg/(γ̇daw)², and ϕs (inset: Tg/(γ̇daw)² ∼ |ϕc − ϕs|). (e) The relationship between ϕc and the dimensionless index of the grain volume distribution, IGVD.

We further plot the relationship between the normalized granular temperature Tg/(γ̇daw)² and ϕs in FIG. 3(e), where $T_g \equiv \langle (\vec{u} - \langle \vec{u} \rangle)^2 \rangle$ and $\vec{u}$ is the particle velocity. Again, the critical solid fraction ϕc, which is heavily influenced by the GSD, plays a key role in the scaling of Tg/(γ̇daw)² ∼ ϕs, and Tg/(γ̇daw)² scales as $\sim |\phi_c - \phi_s|^{-0.6}$. We argue that ϕc, as a critical solid fraction, marks the transition from a system with finite viscosity to one with infinite viscosity. Physically, it has similar characteristics to random close packing fractions ϕRCP, which are affected by both µp and the GSD.

Zaccone [17] and Anzivino et al. [18] suggested that ϕRCP can be expressed as a function of the standard deviation of the GSD, while Desmond and Weeks [12] argued that they could draw a unified relation between ϕRCP and a combination of skewness and polydispersity of the GSD with data obtained from simulations. However, we find that neither method works for our system.
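The contact and temperature statistics used above amount to a few lines of post-processing. A hypothetical Python sketch, assuming per-particle contact counts Nc, volumes Vp, and velocities u are available from the DEM output (names are ours):

import numpy as np

def contact_statistics(Nc, Vp):
    Zc = Nc.mean()                            # coordination number over all particles
    mask = Nc >= 2                            # non-rattlers: more than one contact
    Zc_ge2 = Nc[mask].mean()                  # coordination number of non-rattlers
    V_rattler = Vp[~mask].sum() / Vp.sum()    # volumetric fraction V_p01 / V_p
    return Zc, Zc_ge2, V_rattler

def granular_temperature(u):
    # T_g = <(u - <u>)^2> for an (N, 3) array of particle velocities
    du = u - u.mean(axis=0)
    return (du * du).sum(axis=1).mean()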
We adopted the idea of Desmond and Weeks [12] and calculated both the skewness Sv and the polydispersity δv of the normalized grain volume distribution (GVD), instead of the GSD, so that

$\tilde{V}^i_p = \left[ V^i_p - \min(V_p) \right] / \left[ \max(V_p) - \min(V_p) \right]$,   (2a)

$S_v = \langle \Delta\tilde{V}_p^3 \rangle / \langle \Delta\tilde{V}_p^2 \rangle^{3/2}$, $\quad \delta_v = \sqrt{\langle \Delta\tilde{V}_p^2 \rangle} / \langle \tilde{V}_p \rangle$,   (2b)

where $\tilde{V}_p$ is the normalized particle volume, $\Delta\tilde{V}_p = \tilde{V}_p - \langle \tilde{V}_p \rangle$, $\langle \tilde{V}^n_p \rangle = \int \tilde{V}^n_p\, P(\tilde{V}_p)\, d\tilde{V}_p$, and $\langle \Delta\tilde{V}^n_p \rangle = \int \Delta\tilde{V}^n_p\, P(\tilde{V}_p)\, d\tilde{V}_p$. To obtain a universal relationship, we varied both the size ranges ([1, 10] cm, [2, 10] cm, and [3.16, 10] cm) and the particle frictional coefficient (µp = 0.0001, 0.1, 0.3, and 0.9). With a combined dimensionless index of the GVD, $I_{GVD} = \delta_v + S_v \delta_v^2$, we obtained an empirical relationship between ϕc and IGVD so that

$\phi_c = 1 - (1 - \phi_{c0}) / \left[ 1 + \beta \ln\left( 1 + I_{GVD}/I_{g0} \right) \right]$,   (3)

where β ≈ 0.087 and Ig0 = 4.5 are fitting constants, and ϕc0 is the base ϕc, which varies with µp; the functional form of Eq. 3 is inspired by the time evolution of the solid fraction of a granular system under tapping compaction in Knight et al. [19]. FIG. 3(f) suggests that we only need to adjust ϕc0 to obtain a reasonable ϕc ∼ IGVD relationship for systems with different µp, and Eq. 3, although empirical, introduces a logarithmic dependence that ensures the eventual "flattening" of ϕc with increasing IGVD. Ultimately, ϕc should flatten out to a "maximum" packing fraction that is lower than or equal to 1 when IGVD → +∞, as also suggested in previous publications [12, 18, 20]. In Appendix C, we plot the ϕRCP ∼ IGVD relationship for data extracted from previous publications to show that Eq. 3 also works for predicting random close packing fractions.
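Equations (2) and (3) are straightforward to evaluate numerically. A minimal Python sketch, assuming a sampled set of particle volumes Vp; sample moments stand in for the distribution integrals above, and ϕc0 must be supplied for the chosen µp:

import numpy as np

def phi_c_prediction(Vp, phi_c0, beta=0.087, I_g0=4.5):
    Vt = (Vp - Vp.min()) / (Vp.max() - Vp.min())      # Eq. (2a), normalized volumes
    dV = Vt - Vt.mean()
    S_v = (dV**3).mean() / (dV**2).mean()**1.5        # skewness, Eq. (2b)
    delta_v = np.sqrt((dV**2).mean()) / Vt.mean()     # polydispersity, Eq. (2b)
    I_gvd = delta_v + S_v * delta_v**2                # combined dimensionless index
    # Eq. (3): empirical phi_c as a function of I_GVD
    return 1.0 - (1.0 - phi_c0) / (1.0 + beta * np.log(1.0 + I_gvd / I_g0))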
This Letter focuses on the rheology of granular systems with different grain size distributions and identifies a characteristic length scale that can be associated with the contact probability. We find that varying the GSD changes the criticality behavior, i.e., the critical solid fraction ϕc that marks the transition from a finite ηeff to an infinite one. The analyses of Zc and Tg verify the importance of determining ϕc based on the GVD, and we have introduced a dimensionless index, IGVD, to successfully acquire a unified solution for granular systems with different particle gradations, which has the potential to be adopted for other athermal disordered systems with various polydispersities.

ACKNOWLEDGMENTS

We wish to acknowledge the financial support of Grant No. 12202367 from NSFC and the start-up grant from Zhejiang University of Technology. T.M. thanks Prof. K.M. Hill, Prof. J.-L. Le, Dr. Z. Ge, and Prof. S.A. Galindo-Torres for helpful discussions. The simulations in this work were conducted with an open-access multiphysics simulation library, MECHSYS, which can be acquired from GitHub (https://github.com/Axtal/mechsys.git).

Appendix A: Simulations and stress calculations

The discrete element method (DEM) solves the momentum balance equation for both translational and rotational motions. The governing equations of DEM can be written in the form of Newton's second law, where the calculation of contact forces is of vital importance. In each simulation, the governing equations are integrated using a Verlet method [21], which is commonly used in molecular dynamics. Particles are spherical and follow a linear spring-dashpot contact model, where the normal contact force $F^n_{ij}$ is calculated as a Hookean contact with energy damping, so that $F^n_{ij} = -k_n \delta^n_{ij} - c_n \dot{\delta}^n_{ij}$, where $k_n$ is the contact stiffness, $c_n$ is the damping coefficient, and $\delta^n_{ij}$ is the contact overlap. $F^n_{ij}$ has a viscous term for energy dissipation, where $c_n$ is calculated based on the coefficient of restitution, $e_n$, so that

$c_n = 2\xi_n \sqrt{m_{\rm eff} k_n}$, $\quad \xi_n = -\ln e_n / \sqrt{\pi^2 + (\ln e_n)^2}$,   (A1)

where $m_{\rm eff} = (1/m_i + 1/m_j)^{-1}$ is the reduced mass and we set $e_n = 0.5$. The magnitude of the tangential contact force $|k_t \delta^t_{ij}|$ cannot exceed its frictional limit, $\mu_p |F^n_{ij}|$, where $\mu_p$ is the friction coefficient of particles.

The granular system is located within and sheared by two sawteeth plates at constant pressure and shear rate, as shown in the inset of FIG. 4(b). At each timestep, the sheared granular system reaches a steady state when ϕs, τ, and σn become stable. Meanwhile, instead of measuring stresses from the upper and lower plates, we calculated the average stress tensor for each output timestep. The average stress tensor $\sigma_{ij}$ can be obtained by

$\sigma_{ij} = (1/V) \sum_c^{N_{ct}} f^i_c l^j_c$,   (A2)

where V is the volume of the granular system, $f^i_c$ is the ith component of a contact force vector (a summation of normal and tangential contact forces of the cth contact), and $l^j_c$ is the jth component of the contact branch vector that links the centroids of two contacting particles. FIG. 4(a) presents typical time-lapse evolutions of ϕs, τ, and σn, showing a clear steady state for determining the representative average values. Since we utilized sawteeth plates as upper and lower boundaries, the velocity profile across the height is always linear [shown in FIG. 4(b)], indicating a constant shear rate.

FIG. 4. (a) The time evolution of the shear stress, τ, the pressure, σn, and the solid fraction, ϕs. (b) The profile of particle velocities in the x-direction, Vpx. The inset of (b) shows the configuration of a simple shear simulation of the granular system.

Appendix B: Particle size distributions

In this Letter, we use two different types of particle size distributions, i.e., a Fuller distribution (D1) and power-law distributions (D2 and D3). The cumulative distribution function (CDF) of D1 can be written as

$C_1(d_p) = \left[ (d_p - d_{\min})/(d_{\max} - d_{\min}) \right]^{\eta_1}$.   (B1)

Then, a random particle diameter that follows this CDF can be obtained by imposing a uniformly distributed random number within [0, 1], Rand(0, 1), on the left-hand side of Eq. B1, so that

$d_p = d_{\min} + \left[ {\rm Rand}(0, 1) \right]^{1/\eta_1} \cdot (d_{\max} - d_{\min})$.   (B2)

FIG. 5. (a-d) show both the PDF and CDF of different GSDs for particle sizes ranging from 1 to 10 cm, while (c) and (f) show the PDF and CDF for systems with dp ∈ [3.16, 10] cm. In (g, h), we plot the rheological properties of systems with dp ∈ [3.16, 10] cm.

Considering the possible slopes of the PDF of power-law distributions, we take two different functional forms:

$C_2(d_p) = (d_p/d_{\max})^{\eta_2}$,   (B3a)
$d_p = d_{\max} \cdot \left[ {\rm Rand}(0, 1) \right]^{1/\eta_2}$,   (B3b)
$C_3(d_p) = 1 - (d_p/d_{\min})^{-\eta_3}$,   (B3c)
$d_p = d_{\min} \cdot \left[ 1 - {\rm Rand}(0, 1) \right]^{-1/\eta_3}$,   (B3d)

where we impose the $d_p \in [d_{\min}, d_{\max}]$ condition every time we generate a random particle diameter. Figures 5(a-f) show both the PDF and CDF of systems with different GSDs, and FIGs. 5(g, h) show the rheological properties of granular systems with a different size range, which indicates that changing the particle size range would not change the rheological behavior, especially when we consider Icw and eliminate the effect of ϕc.
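The three CDFs above can be sampled by the inverse-transform rules of Eqs. (B2) and (B3). A Python sketch follows; enforcing dp ∈ [dmin, dmax] by resampling out-of-range draws is our assumption, since the text only states that the condition is imposed:

import numpy as np

rng = np.random.default_rng(0)

def sample_D1(n, d_min, d_max, eta1):
    # Eq. (B2): Fuller-type CDF, supported on [d_min, d_max] by construction
    return d_min + rng.random(n)**(1.0 / eta1) * (d_max - d_min)

def sample_D2(n, d_min, d_max, eta2):
    # Eq. (B3b); resample until all diameters satisfy d >= d_min
    d = d_max * rng.random(n)**(1.0 / eta2)
    while (bad := d < d_min).any():
        d[bad] = d_max * rng.random(bad.sum())**(1.0 / eta2)
    return d

def sample_D3(n, d_min, d_max, eta3):
    # Eq. (B3d); resample until all diameters satisfy d <= d_max
    d = d_min * (1.0 - rng.random(n))**(-1.0 / eta3)
    while (bad := d > d_max).any():
        d[bad] = d_min * (1.0 - rng.random(bad.sum()))**(-1.0 / eta3)
    return d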
Appendix C: ϕRCP extracted from previous publications

We mentioned that previous research has focused on predicting the random close packing solid fraction, ϕRCP, which is closely related to ϕc in our present work. However, we make clear that ϕRCP is different from ϕc. Studies of ϕRCP are mostly focused on frictionless systems. Furthermore, DEM simulations, where particles are in loading conditions, allow for the existence of overlaps among contacting particles, which results in relatively larger solid fractions. To validate the ϕc ∼ IGVD relationship, we turn to published works to also link ϕRCP to IGVD. In FIG. 6, we plot the data extracted from both Farr and Groot [22], with a lognormal size distribution, and Anzivino et al. [18], with a Γ distribution. We converted their particle size distributions into particle volume distributions and obtained the corresponding statistical parameters to calculate IGVD. FIG. 6 shows that our empirical relationship of Eq. 3 also works for predicting ϕRCP for frictionless spheres with a single fitting parameter of ϕc0 ≈ 0.645.

FIG. 6. Data extracted from published papers, where the particle size in Farr and Groot [22] follows a lognormal distribution and that in Anzivino et al. [18] follows a Gamma distribution. The published data are fitted with Eq. 3.

[1] T. Man, Mathematical modeling of pavement gyratory compaction: A perspective on granular-fluid assemblies, Mathematics 11, 2096 (2023).
[2] T. Man, H. E. Huppert, and S. A. Galindo-Torres, Run-out scaling of granular column collapses on inclined planes, J. Fluid Mech. 1002, A50 (2025).
[3] Y. Lu, W. Jin, J. Klinger, T. L. Westover, and S. Dai, Flow characterization of compressible biomass particles using multiscale experiments and a hypoplastic model, Pow. Technol. 383, 396 (2021).
[4] Y. Lu, W. Jin, J. Klinger, N. Saha, Y. Xia, and S. Dai, Shear rate dependency on flowing granular biomass material, Pow. Technol. 442, 119834 (2024).
[5] G. MiDi, On dense granular flows, Eur. Phys. J. E 14, 341 (2004).
[6] Z. Ge, T. Man, H. E. Huppert, K. M. Hill, and S. A. Galindo-Torres, Unifying length-scale-based rheology of dense suspensions, Phys. Rev. Fluids 9, L012302 (2024).
[7] J. Fiscina, G. Lumay, F. Ludewig, and N. Vandewalle, Compaction dynamics of wet granular assemblies, Phys. Rev. Lett. 105, 048001 (2010).
[8] T. Man, Compaction evolution and mechanisms of granular materials due to gyratory shearing, Materials 17, 10.3390/ma17225525 (2024).
[9] C. Song, P. Wang, and H. A. Makse, A phase diagram for jammed matter, Nature 453, 629-632 (2008).
[10] D. Bi, J. Zhang, B. Chakraborty, and R. P. Behringer, Jamming by shear, Nature 480, 355 (2011).
[11] Z. Ding, W. Hu, C. Chang, Y. Li, and G. Wang, Shear behaviors of confined flow: Insights for understanding the influences of fractal particle size distribution on high mobility of granular flows, Geophys. Res. Lett. 51, e2024GL108956 (2024).
[12] K. W. Desmond and E. R. Weeks, Influence of particle size distribution on random close packing of spheres, Phys. Rev. E 90, 022204 (2014).
[13] K. Hill and B. Yohannes, Rheology of dense granular mixtures: Boundary pressures, Phys. Rev. Lett. 106, 058302 (2011).
[14] B. Yohannes and K. Hill, Rheology of dense granular mixtures: Particle-size distributions, boundary conditions, and collisional time scales, Phys. Rev. E 82, 061301 (2010).
[15] O. Polanía, M. Renouf, M. Cabrera, N. Estrada, and E. Azéma, Monodisperse behavior of polydisperse flows, Phys. Rev. E 111, L043401 (2025).
E 111, L043401 (2025). [16] A. Zaccone, Complete mathematical theory of the jam- ming transition: A perspective, J. Appl. Phys. 137, 050901 (2025). [17] A. Zaccone, Analytical solution for the polydisperse ran- dom close packing problem in 2d, Pow. Technol. 459, 121008 (2025). [18] C. Anzivino, M. Casiulis, T. Zhang, A. S. Moussa, S. Martiniani, and A. Zaccone, Estimating random close packing in polydisperse and bidisperse hard spheres via an equilibrium model of crowding, J. Chem. Phys. 158, 044901 (2023). [19] J. B. Knight, C. G. Fandrich, C. N. Lau, H. M. Jaeger, and S. R. Nagel, Density relaxation in a vibrated granular material, Phys. Rev. E 51, 3957 (1995). [20] R. S. Hoy, Ultrastable jammed sphere packings with a wide range of particle dispersities, Phys. Rev. E , (2025). [21] S. A. Galindo-Torres and D. M. Pedroso, Molecular dy- namics simulations of complex-shaped particles using voronoi-based spheropolyhedra, Phys. Rev. E 81, 061303 (2010). [22] R. S. Farr and R. D. Groot, Close packing density of polydisperse hard spheres, J. Chem. Phys. 131, 244104 (2009).
Grain volume distribution alters the critical phenomena in complex granular systems

Teng Man,1,∗ Yimin Lu,2 Zhongrong Wang,3 Herbert Huppert,4 Alessio Zaccone,5 and Honglei Sun1,†
1 Zhejiang University of Science and Technology, Hangzhou 310023, China
2 Texas Tech University, Lubbock, Texas 79409, United States
3 Melbourne, VIC 3001, Australia
4 King's College, King's Parade, Cambridge CB2 1ST, United Kingdom
5 Department of Physics "A. Pontremoli," University of Milan, via Celoria 16, 20133 Milan, Italy
(Dated: October 17, 2025)

The grain size distribution (GSD) plays an important role in the mechanical properties of amorphous disordered systems and complex granular materials. Varying the GSD causes segregation issues and alters critical behaviors. This work uses the discrete element method (DEM) to investigate the rheological and critical behaviors of sheared granular flows with various GSDs. The results show that, while a unified rheological relation can be obtained, a characteristic length scale, which is associated with the contact probability and can be obtained from any GSD, is embedded within such a polydisperse disordered system. We further acquire a correlation function between critical solid fractions and dimensionless grain volume distributions. This work elucidates the effect of particle volumes on the rheology and micromechanics of dry granular systems and provides further insight into incorporating the influence of other particle properties into a unified framework, which is helpful and critical for the corresponding engineering and geophysical problems.

Granular materials and disordered systems are commonly encountered in natural systems and industrial processes, such as landslides, pyroclastic flows, fresh concrete, and pharmaceutical and chemical particulates [1-4]. In recent decades, extensive investigations have been conducted to describe the rheology [5, 6], compaction [7, 8], and jamming transition [9, 10] of granular materials. The proposal of the inertial number $I_c$ and the viscous number $I_v$ suggests that the dynamics of granular flows is governed by the combination of three time scales: $t_i = d_p/\sqrt{\sigma_n/\rho_p}$, $t_v = \eta_f/\sigma_n$, and $t_M = 1/\dot{\gamma}$, where $t_i$, $t_v$, and $t_M$ are the inertial, viscous, and macroscopic time scales, $d_p$ and $\rho_p$ are the particle diameter and density, $\sigma_n$ is the pressure, and $\eta_f$ denotes the fluid viscosity if the system is submerged or fully saturated. However, the current rheological framework cannot fully capture the influence of either particle shapes or size distributions.

The grain size distribution plays an important role in the rheology of granular systems, but previous studies often provide seemingly contradictory opinions: some studies suggested that the existence of small particles initiates a lubrication effect that helps the system increase its mobility [11], while others believed that a wider GSD induces a denser packing fraction that increases the shear strength of granular systems and makes the system easier to transition to a jammed state [12]. Hill and Yohannes [13] studied the rheology of bidisperse granular flows using the discrete element method (DEM) [13, 14] and suggested that the pressure of a granular system is determined not only by its size distribution, but also by the effective free volume per particle, which was adopted from the jammed packing of monosized particles. They stated that the dimensionless pressure, $(1/I_c)^2$, scales as a function of the coordination number, $Z_c$.
Polanía et al. [15] studied the rheological behavior of polydisperse granular systems and concluded that such systems still behave similarly to monodisperse systems if the right length scale is chosen, although the solid fraction can be quite different due to polydispersity. Recently, Ding et al. [11] also studied the behavior of granular systems with fractal distributions and found that the shear strength of confined flow decreases at large fractal dimensions, which can be further linked to help decipher the mechanisms within the highly localized basal zones and the high mobility of general geological avalanches.

In this Letter, we focus on disordered physical systems with various continuous GSDs, where sheared granular materials with three different cumulative distribution functions (CDFs) are examined using DEM. We show that, unlike the length scale proposed in Polanía et al. [15], a simply defined "average" particle size, which has a clear physical root in the contact probability, already unifies the rheological behavior of dry granular systems with a wide range of different particle size distributions. Further analyses are devoted to better understanding the critical phenomena when a granular system with different GSDs and frictional properties approaches the "jamming" transition.

Methodology and simulation setup. In this work, the classical DEM is used to simulate sheared granular systems, where the particles are spherical and follow a linear spring-dashpot contact model. The normal contact force $F_{ij}^n$ is calculated as a Hookean contact with energy damping. The magnitude of the tangential contact force $|k_t \delta_{ij}^t|$ cannot exceed its frictional limit, $\mu_p |F_{ij}^n|$, where $\mu_p$ is the particle frictional coefficient, and for most cases we set $\mu_p = 0.3$. The system has no rolling resistance. The granular system is located between two sawteeth plates and is sheared at constant pressure and shear rate. For different simulations, we vary the pressure $\sigma$ from 50 to 1000 Pa and the shear rate $\dot{\gamma}$ from 0.01 to 1.0 s$^{-1}$, and measure the mean value of the key variables once the sheared system is in steady state (shown in Appendix A), where its solid fraction $\phi_s$, pressure $\sigma$, shear stress $\tau$, and velocity profile become stable. We can obtain the effective frictional coefficient $\mu_{\rm eff} = \tau/\sigma$, the effective viscosity $\eta_{\rm eff} = \tau/\dot{\gamma}$, and the average inertial number $I_c$ from each simulation for further investigation. The plot of a shear cell is shown in the inset of FIG. 1(d). The particle size distributions used in this work, i.e., the Fuller distribution and two power-law distributions, are elaborated in detail in Appendix B.

FIG. 1. Rheological behaviors: (a-c) The $\mu_{\rm eff} \sim I_{ca}$ and $\phi_s \sim I_{ca}$ relationships plotted for systems with size distributions D1, D2, and D3, where $I_{ca}$ is the conventional inertial number that uses the system-averaged particle diameter as the characteristic length. The inset of (b) shows the relationship between two length scales, $d_{ac}$ and $d_{aw}$, where $d_{ac}$ is the average particle diameter calculated based on the contact number of each particle and $d_{aw}$ is the average particle diameter calculated based on the volume of each particle. (d-f) The $\mu_{\rm eff} \sim I_{cw}$ and $\phi_s \sim I_{cw}$ relationships plotted for systems with size distributions D1, D2, and D3, where $I_{cw}$ is the new dimensionless number calculated with $d_{aw}$. The inset of (d) presents a simulation snapshot, and the inset of (f) shows the relationship between $\phi_s/\phi_c$ and the proposed new inertial number $I_{cw}$ (see Eq. (1)).
Rheological behaviors. According to the classical $\mu(I)$ rheology of dry granular systems, both $\mu_{\rm eff} = \tau/\sigma$ and $\phi_s$ should scale with the inertial number $I_{ca} = \dot{\gamma} d_p/\sqrt{\sigma/\rho_p}$, where $d_p$ is the particle diameter and also a characteristic length scale. Ge et al. [6] proposed that the length scale plays an important role in determining the rheological behavior of granular systems, and we further argue that this length scale should be adjusted to reflect changes in particle size distributions. FIGs. 1(a-b) demonstrate the $\mu_{\rm eff} \sim I_{ca}$ relationship for systems with GSDs of D1, D2, and D3, which are clarified in Appendix B, whereas FIG. 1(c) shows the $\phi_s \sim I_{ca}$ relationship, where $I_{ca}$ represents the conventional inertial number that uses the average particle diameter of the system as the characteristic length. FIGs. 1(a-c) show that, even though the $\mu_{\rm eff} \sim I_{ca}$ and $\phi_s \sim I_{ca}$ relationships collapse for systems with any specific GSD, changing the particle gradation leads to deviations in both $\mu_{\rm eff}$ and $\phi_s$. FIGs. 1(a) and (b) show that systems with different particle gradations result in different transitional $I_{ca}$, which denotes the transition from quasi-static to inertial flow. The change in this transitional $I_{ca}$ implies that the characteristic length of the system may also change due to changes of GSDs.

Hill and Yohannes [13] reported that the average particle size should be calculated based on each particle's volume. Thus, we define the characteristic length of the system as $d_{aw} = \sum_{i=1}^{N_p} d_{pi} V_{pi} \,/\, \sum_{j=1}^{N_p} V_{pj}$, so that particles with different volumes carry different weights in this "average" particle size. A new inertial number can then be defined as

$I_{cw} = \dot{\gamma}\, d_{aw} \,(\rho_p/\sigma)^{1/2}$.   (1)

After adopting the new inertial number, as shown in FIGs. 1(d,e), the relationships between $\mu_{\rm eff}$ and $I_{cw}$ collapse onto a single master curve, indicating that the characteristic length, $d_{aw}$, reflects some physical and topological nature of the sheared granular systems, and that choosing an appropriate length scale helps unify the frictional rheology of dry granular systems with any size distribution. However, we admit that $I_{cw}$ fails to unify the $\phi_s \sim I_{cw}$ relationship, although FIG. 1(f) looks much better than FIG. 1(c). For systems with different GSDs, the transitional $I_{cw}$ seems to be unified, and the only difference lies in the $\phi_s$-axis, i.e., moving the $\phi_s \sim I_{cw}$ relationships upward or downward can also unify this relation. For a series of simulations with the same GSD, the relationship between $\phi_s$ and $I_{cw}$ can be written as $\phi_s = \phi_c/(1 + \alpha_\phi I_{cw})$, where $\alpha_\phi \approx 0.32$. Thus, if we change the vertical coordinate into $\phi_s/\phi_c$, where $\phi_c$ is the critical solid fraction for each GSD, we obtain a master curve of $\phi_s/\phi_c = 1/(1 + \alpha_\phi I_{cw})$, shown as the blue dashed curve in the inset of FIG. 1(f).

FIG. 2. Relationship between the effective viscosity of granular systems, $\eta_{\rm eff} = \tau/\dot{\gamma}$, and the solid fraction, $\phi_s$. In the inset of this figure, we change the horizontal and vertical axes into $|\phi_s - \phi_c|$ and $\eta_{\rm eff}^{-1/1.3}$, respectively. The dashed line represents a linear relationship.

Critical length scale and critical solid fraction. Our choices of characteristic length and critical solid fraction have deep physical and geometrical reasons, even though they may seem deliberately fitted. On one hand, $d_{aw}$ can be related to the coordination number of each polydisperse granular system. The characteristic length should be linked to particle volumes because they are directly related to the number of contacts.
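To make these definitions concrete, the following minimal Python sketch (not from the original paper; the particle sample, pressure, and density values are illustrative assumptions) computes the volume-weighted diameter $d_{aw}$ and the inertial number $I_{cw}$ of Eq. (1) from an array of sphere diameters.

```python
import numpy as np

def volume_weighted_diameter(dp):
    """d_aw = sum_i(d_i * V_i) / sum_j(V_j), with V_i = (pi/6) d_i^3 for spheres."""
    vp = (np.pi / 6.0) * dp**3
    return np.sum(dp * vp) / np.sum(vp)

def inertial_number_icw(gamma_dot, dp, sigma, rho_p):
    """I_cw = gamma_dot * d_aw * sqrt(rho_p / sigma), Eq. (1)."""
    return gamma_dot * volume_weighted_diameter(dp) * np.sqrt(rho_p / sigma)

# Illustrative values: uniform diameters between 1 and 10 cm,
# rho_p = 2650 kg/m^3 and sigma = 200 Pa (assumed, not from the paper).
rng = np.random.default_rng(0)
dp = rng.uniform(0.01, 0.10, size=5000)   # diameters in metres
print(inertial_number_icw(0.1, dp, 200.0, 2650.0))
```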
Intuitively, a particle with a larger size has a greater chance to contact more particles, indicating that such a particle plays a more important role in bearing strong force networks and determining the macroscopic behavior of the whole system. To validate this, we calculated another length scale, $d_{ac}$, based on the contact numbers of each particle, where $d_{ac} = \sum_i d_{pi} N_{ci} / \sum_j N_{cj}$ and $N_{ci}$ is the contact number of particle $i$, and plotted its relationship with $d_{aw}$ in the inset of FIG. 1(b). This shows that $d_{aw}$ and $d_{ac}$ are linearly related, which verifies that $d_{aw}$, which can be easily obtained from the GSD, has a deep root in contact statistics.

$\phi_c$ varies with the GSD and the frictional properties of particles. It is clearly neither the jamming solid fraction, $\phi_J$ [10], nor the random close packing fraction, $\phi_{RCP}$ [16]. However, Zaccone [16] and the follow-up research on the random close packing of polydisperse granular systems [17] encourage us to argue that $\phi_c$ is a function of $\phi_{RCP}$ and $\mu_p$. Since we kept $\mu_p$ constant, $\phi_c$ is predominantly controlled by $\phi_{RCP}$, which should be related to the GSD of a granular system. Thus, we could obtain a functional relationship between $\phi_c$ and the GSD. We note that $\phi_c$ for each GSD should not be obtained from fitting of the $\phi_s \sim I_{cw}$ relationship, since the functional form of the $\phi_s \sim I_{cw}$ relationship is based on phenomena, not physics. Fortunately, Zaccone [16] derived the relationship between the effective viscosity $\eta_{\rm eff} = \tau/\dot{\gamma}$ and $\phi_s$ from first principles, which yields $\eta_{\rm eff} \sim |\phi_c - \phi_s|^{-1.3}$. Thus, if we plot $\eta_{\rm eff}^{-1/1.3}$ against $|\phi_c - \phi_s|$, we should obtain a linear relationship. In FIG. 2, we plot $\eta_{\rm eff}$ against $\phi_s$ for all simulations and find that $\eta_{\rm eff}$ always diverges at relatively high solid fractions (which are different for systems with different GSDs). The solid fraction at which a granular system approaches infinite viscosity defines a critical solid fraction, $\phi_c$. For each GSD, we can obtain a specific $\phi_c$, and $\eta_{\rm eff}^{-1/1.3}$ and $|\phi_c - \phi_s|$ are linearly related, as shown in the inset of FIG. 2.
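This extraction of $\phi_c$ can be sketched in a few lines of Python. The idea, following the linearity described above, is that $\eta_{\rm eff}^{-1/1.3}$ vanishes at $\phi_s = \phi_c$, so a linear fit gives $\phi_c$ directly; the synthetic data below are an assumed self-check, not simulation output from the paper.

```python
import numpy as np

def estimate_phi_c(phi_s, eta_eff, exponent=1.3):
    """Estimate phi_c from the divergence eta_eff ~ |phi_c - phi_s|^(-exponent):
    eta_eff**(-1/exponent) is then linear in phi_s and vanishes at phi_s = phi_c."""
    y = eta_eff ** (-1.0 / exponent)
    m, b = np.polyfit(phi_s, y, 1)   # y ~= m*phi_s + b
    return -b / m                    # root of the linear fit

# Synthetic self-check: data generated with phi_c = 0.62 is recovered.
phi_s = np.linspace(0.50, 0.61, 20)
eta_eff = np.abs(0.62 - phi_s) ** (-1.3)
print(estimate_phi_c(phi_s, eta_eff))   # ~0.62
```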
Our above analyses indicate that the length scale $d_{aw}$, which implies contact statistics, helps to unify the $\mu_{\rm eff} \sim I_{cw}$ relationship but leaves the problem of $\phi_c$ for us to solve. This implies that, for systems with different GSDs but exactly the same $d_{aw}$, the stress-strain rate relationships behave similarly, but the solid fractions differ from each other. Since stresses in granular materials are strongly related to force networks (especially the strong force network), granular systems with different GSDs may have different amounts of rattler particles, which do not alter the strong force network but occupy different amounts of volume, resulting in different solid fractions. We define particles with zero or only one contact as rattlers, and particles with more than one contact as non-rattlers. While calculating the contact number of each particle, $N_c$, and the coordination number, $Z_c \equiv \langle N_c \rangle$, of each simulation, we can also obtain the volumetric percentage of rattlers, $V_{p01}/V_p$, and the coordination number of non-rattlers, $Z_{c\geq 2}$. FIG. 3(a) plots the histograms for a set of simulations with a GSD of D3 and $\eta_3 = 1.0$, where the legend provides the corresponding $I_{cw}$. Increasing $I_{cw}$ leads to a decrease in the number of particles with large contact numbers and increases the chances of finding particles with only one contact, indicating that increasing the inertial number results in more binary collisions.

With such a wide range of particle sizes, a sheared granular system naturally has many "small" particles, which do not play a role in the force network. FIG. 3(b) indicates that, for some cases, $Z_c$ is even below 1.0, but for each set of simulations with the same GSD, the data collapse onto a $Z_c \sim \phi_s$ master curve. However, it is difficult to figure out how changing the GSD alters $Z_c$. Neglecting particles with zero or one contact and changing the vertical axis into the average coordination number of particles with more than one contact, $Z_{c\geq 2}$, in FIG. 3(c), the relationship $Z_{c\geq 2} \sim \phi_s$ shows power-law features with systematic changes introduced by changing the GSD. The inset of FIG. 3(c) shows that dividing $\phi_s$ by $\phi_c$ helps to obtain a unified $Z_{c\geq 2} \sim \phi_s/\phi_c$ relationship, which is similar to the $\eta_{\rm eff} \sim \phi_s/\phi_c$ relationship. We argue that it is mostly particles with $N_c \leq 2$ that determine the effective viscosity of the system. All of this implies that changing the GSD leads to a different percentage of particles participating in bearing forces in granular systems. However, FIG. 3(d) contradicts our expectation and shows that changing the GSD does not change the volume percentage of particles with $N_c \leq 1$ much. We further plot the relationship between the normalized granular temperature $T_g/(\dot{\gamma} d_{aw})^2$ and $\phi_s$ in FIG. 3(e), where $T_g \equiv \langle(\vec{u} - \langle\vec{u}\rangle)^2\rangle$ and $\vec{u}$ is the particle velocity. Again, the critical solid fraction $\phi_c$, which is heavily influenced by the GSD, plays a key role in the scaling of $T_g/(\dot{\gamma} d_{aw})^2 \sim \phi_s$, and $T_g/(\dot{\gamma} d_{aw})^2$ scales as $\sim |\phi_c - \phi_s|^{-0.6}$.

FIG. 3. (a) Histogram of the contact numbers while changing $I_{cw}$; this panel only plots data from the system with D3 and $\eta_3 = 1.0$. (b) The $Z_c \sim \phi_s$ relationship for systems with different GSDs, where $Z_c$ is the average coordination number. (c) The $Z_{c\geq 2} \sim \phi_s$ relationship for non-rattler particles. (d) The volume percentage of particles with only 0 or 1 contact, plotted against $I_{cw}$. (e) Relationship between the dimensionless granular temperature, $T_g/(\dot{\gamma} d_{aw})^2$, and $\phi_s$ (inset: $T_g/(\dot{\gamma} d_{aw})^2 \sim |\phi_c - \phi_s|$). (f) The relationship between $\phi_c$ and the dimensionless index of the grain volume distribution, $I_{GVD}$.

We argue that $\phi_c$, as a critical solid fraction, marks the transition from a system with finite viscosity to one with infinite viscosity. Physically, it has similar characteristics to random close packing fractions $\phi_{RCP}$, which are affected by both $\mu_p$ and the GSD. Zaccone [17] and Anzivino et al. [18] suggested that $\phi_{RCP}$ can be expressed as a function of the standard deviation of the GSD, while Desmond and Weeks [12] argued that they could draw a unified relation between $\phi_{RCP}$ and a combination of the skewness and polydispersity of the GSD with data obtained from simulations. However, we find that both methods do not work for our system. We adopted the idea of Desmond and Weeks [12] and calculated both the skewness $S_v$ and the polydispersity $\delta_v$ of the normalized grain volume distribution (GVD), instead of the GSD, so that

$\tilde{V}_p^i = [V_p^i - \min(V_p)]\,/\,[\max(V_p) - \min(V_p)]$,   (2a)

$S_v = \langle \Delta\tilde{V}_p^3 \rangle / \langle \Delta\tilde{V}_p^2 \rangle^{3/2}$,  $\delta_v = \sqrt{\langle \Delta\tilde{V}_p^2 \rangle}\,/\,\langle \tilde{V}_p \rangle$,   (2b)

where $\tilde{V}_p$ is the normalized particle volume, $\Delta\tilde{V}_p = \tilde{V}_p - \langle \tilde{V}_p \rangle$, $\langle \tilde{V}_p^n \rangle = \int \tilde{V}_p^n \, P(\tilde{V}_p)\, d\tilde{V}_p$, and $\langle \Delta\tilde{V}_p^n \rangle = \int \Delta\tilde{V}_p^n \, P(\tilde{V}_p)\, d\tilde{V}_p$. To obtain a universal relationship, we varied both the size ranges ([1, 10] cm, [2, 10] cm, and [3.16, 10] cm) and the particle frictional coefficient ($\mu_p$ = 0.0001, 0.1, 0.3, and 0.9). With a combined dimensionless index of the GVD, $I_{GVD} = \delta_v + S_v \delta_v^2$, we obtained an empirical relationship between $\phi_c$ and $I_{GVD}$:

$\phi_c = 1 - (1 - \phi_{c0})\,/\,[1 + \beta \ln(1 + I_{GVD}/I_{g0})]$,   (3)

where $\beta \approx 0.087$ and $I_{g0} = 4.5$ are fitting constants, $\phi_{c0}$ is the base $\phi_c$ and varies with $\mu_p$, and the functional form of Eq. (3) is inspired by the time evolution of the solid fraction of a granular system under tapping compaction in Knight et al. [19]. FIG. 3(f) suggests that we only need to adjust $\phi_{c0}$ to get a reasonable $\phi_c \sim I_{GVD}$ relationship for systems with different $\mu_p$, and Eq. (3), although empirical, introduces a logarithmic dependence that ensures the eventual "flattening" of $\phi_c$ with increasing $I_{GVD}$. Ultimately, $\phi_c$ should flatten out to a "maximum" packing fraction that is lower than or equal to 1 when $I_{GVD} \to +\infty$, as also suggested in previous publications [12, 18, 20]. In Appendix C, we plot the $\phi_{RCP} \sim I_{GVD}$ relationship for data extracted from previous publications to show that Eq. (3) also works for predicting random close packing fractions.
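A compact sketch of this pipeline is given below: the min-max normalised GVD moments of Eq. (2), the combined index $I_{GVD}$, and the empirical fit of Eq. (3). The diameter sample is an illustrative assumption, and $\phi_{c0} = 0.645$ is borrowed from the frictionless-sphere fit reported in Appendix C rather than from any specific frictional case.

```python
import numpy as np

def gvd_statistics(dp):
    """Skewness S_v and polydispersity delta_v of the min-max normalised
    grain volume distribution, Eq. (2), for sphere diameters dp."""
    vp = (np.pi / 6.0) * dp**3
    v = (vp - vp.min()) / (vp.max() - vp.min())     # Eq. (2a)
    dv = v - v.mean()
    s_v = np.mean(dv**3) / np.mean(dv**2) ** 1.5    # Eq. (2b), skewness
    delta_v = np.sqrt(np.mean(dv**2)) / v.mean()    # Eq. (2b), polydispersity
    return s_v, delta_v

def phi_c_of_igvd(i_gvd, phi_c0, beta=0.087, i_g0=4.5):
    """Empirical fit of Eq. (3); phi_c0 depends on mu_p (about 0.645 for
    frictionless spheres, see Appendix C)."""
    return 1.0 - (1.0 - phi_c0) / (1.0 + beta * np.log(1.0 + i_gvd / i_g0))

rng = np.random.default_rng(0)
dp = rng.uniform(0.01, 0.10, size=20000)            # assumed sample
s_v, delta_v = gvd_statistics(dp)
i_gvd = delta_v + s_v * delta_v**2                  # combined index I_GVD
print(i_gvd, phi_c_of_igvd(i_gvd, phi_c0=0.645))
```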
This Letter focuses on the rheology of granular systems with different grain size distributions and identifies a characteristic length scale that can be associated with the contact probability. We find that varying the GSD changes the criticality behavior, i.e., the critical solid fraction $\phi_c$ that marks the transition from a finite $\eta_{\rm eff}$ to an infinite one. The analyses of $Z_c$ and $T_g$ verify the importance of determining $\phi_c$ based on the GVD, and we have introduced a dimensionless index, $I_{GVD}$, to successfully acquire a unified solution for granular systems with different particle gradations, which has the potential to be adopted for other athermal disordered systems with various polydispersities.

FIG. 4. (a) The time evolution of the shear stress, $\tau$, pressure, $\sigma_n$, and solid fraction, $\phi_s$. (b) The profile of particle velocities in the x-direction, $V_{px}$. The inset of (b) shows the configuration of a simple shear simulation of the granular system.

Acknowledgments. We wish to acknowledge the financial support of Grant No. 12202367 from NSFC and the start-up grant from Zhejiang University of Science and Technology. T.M. thanks Prof. K.M. Hill, Prof. J.-L. Le, Dr. Z. Ge, and Prof. S.A. Galindo-Torres for helpful discussions. The simulations in this work were conducted with an open-access multiphysics simulation library, MECHSYS, which can be acquired from GitHub (https://github.com/Axtal/mechsys.git).

Appendix A: Simulations and stress calculations

The discrete element method (DEM) solves the momentum balance equations for both translational and rotational motions. The governing equations of DEM can be written in the form of Newton's second law, where the calculation of contact forces is of vital importance. In each simulation, the governing equations are integrated using a Verlet method [21], which is commonly used in molecular dynamics. Particles are spherical and follow a linear spring-dashpot contact model, where the normal contact force $F_{ij}^n$ is calculated as a Hookean contact with energy damping, so that $F_{ij}^n = -k_n \delta_{ij}^n - c_n \dot{\delta}_{ij}^n$, where $k_n$ is the contact stiffness, $c_n$ is the damping coefficient, and $\delta_{ij}^n$ is the contact overlap. The viscous term provides energy dissipation, with $c_n$ calculated from the coefficient of restitution, $e_n$, so that

$c_n = 2\xi_n \sqrt{m_{\rm eff} k_n}$,  $\xi_n = -\ln e_n \,/\, \sqrt{\pi^2 + (\ln e_n)^2}$,   (A1)

where $m_{\rm eff} = (1/m_i + 1/m_j)^{-1}$ is the reduced mass and we set $e_n = 0.5$.
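A minimal sketch of this contact law follows; the grain masses and stiffness in the example call are assumed values for illustration, not parameters taken from the simulations.

```python
import numpy as np

def damping_coefficient(kn, mi, mj, en=0.5):
    """c_n of Eq. (A1) for the linear spring-dashpot contact model."""
    xi_n = -np.log(en) / np.sqrt(np.pi**2 + np.log(en)**2)
    m_eff = 1.0 / (1.0 / mi + 1.0 / mj)   # reduced mass
    return 2.0 * xi_n * np.sqrt(m_eff * kn)

def normal_force(kn, cn, overlap, overlap_rate):
    """F_n = -kn*delta - cn*delta_dot (Hookean contact with energy damping)."""
    return -kn * overlap - cn * overlap_rate

# Illustrative numbers: two 1 kg grains with kn = 1e5 N/m.
cn = damping_coefficient(kn=1e5, mi=1.0, mj=1.0)
print(cn, normal_force(1e5, cn, overlap=1e-4, overlap_rate=0.01))
```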
The magnitude of the tangential contact force $|k_t \delta_{ij}^t|$ cannot exceed its frictional limit, $\mu_p |F_{ij}^n|$, where $\mu_p$ is the friction coefficient of the particles. The granular system is located within and sheared by two sawteeth plates at constant pressure and shear rate, as shown in the inset of FIG. 4(b). The sheared granular system is considered to have reached a steady state once $\phi_s$, $\tau$, and $\sigma_n$ become stable. Meanwhile, instead of measuring stresses from the upper and lower plates, we calculated the average stress tensor at each output timestep. The average stress tensor $\sigma_{ij}$ can be obtained by

$\sigma_{ij} = \frac{1}{V} \sum_{c}^{N_{ct}} f_c^i \, l_c^j$,   (A2)

where $V$ is the volume of the granular system, $f_c^i$ is the $i$th component of a contact force vector (the sum of the normal and tangential contact forces of the $c$th contact), and $l_c^j$ is the $j$th component of the contact branch vector that links the centroids of the two contacting particles.
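Eq. (A2) is a single dyadic sum over contacts, as the following sketch shows (the random force and branch arrays are placeholders for actual DEM contact data):

```python
import numpy as np

def average_stress(f_contact, l_branch, volume):
    """Average stress tensor of Eq. (A2): sigma_ij = (1/V) sum_c f_c^i l_c^j,
    the dyadic product of each contact force with its branch vector."""
    f = np.asarray(f_contact)   # shape (N_contacts, 3)
    l = np.asarray(l_branch)    # shape (N_contacts, 3)
    return np.einsum('ci,cj->ij', f, l) / volume

# With shear along x and the velocity gradient along y, the pressure and
# shear stress follow as sigma_n = stress[1, 1] and tau = stress[0, 1].
rng = np.random.default_rng(0)
stress = average_stress(rng.normal(size=(100, 3)),
                        rng.normal(size=(100, 3)), volume=1.0)
print(stress.shape)   # (3, 3)
```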
FIG. 4(a) presents typical time-lapse evolutions of $\phi_s$, $\tau$, and $\sigma_n$, showing a clear steady state for determining the representative average values. Since we utilized sawteeth plates as the upper and lower boundaries, the velocity profile across the height is always linear [shown in FIG. 4(b)], indicating a constant shear rate.

Appendix B: Particle size distributions

In this Letter, we use two different types of particle size distributions, i.e., the Fuller distribution (D1) and power-law distributions (D2 and D3). The cumulative distribution function (CDF) of D1 can be written as

$C_1(d_p) = [(d_p - d_{\min})/(d_{\max} - d_{\min})]^{\eta_1}$.   (B1)

Then, a random particle diameter that follows this CDF can be obtained by imposing a uniformly distributed random number within [0, 1], Rand(0, 1), on the left-hand side of Eq. (B1), so that

$d_p = d_{\min} + [\mathrm{Rand}(0, 1)]^{1/\eta_1} \cdot (d_{\max} - d_{\min})$.   (B2)

Considering the possible slopes of the PDF of power-law distributions, we take two different functional forms:

$C_2(d_p) = (d_p/d_{\max})^{\eta_2}$,   (B3a)
$d_p = d_{\max} \cdot [\mathrm{Rand}(0, 1)]^{1/\eta_2}$,   (B3b)
$C_3(d_p) = 1 - (d_p/d_{\min})^{-\eta_3}$,   (B3c)
$d_p = d_{\min} \cdot [1 - \mathrm{Rand}(0, 1)]^{-1/\eta_3}$,   (B3d)

where we impose the $d_p \in [d_{\min}, d_{\max}]$ condition every time we generate a random particle diameter.

FIG. 5. (a-d) show both the PDF and CDF of different GSDs for particle sizes ranging from 1 to 10 cm, while (e) and (f) show the PDF and CDF for systems with $d_p \in [3.16, 10]$ cm. In (g, h), we plot the rheological properties of systems with $d_p \in [3.16, 10]$ cm.

FIG. 6. Data extracted from published papers, where the particle size in Farr and Groot [22] follows a lognormal distribution and that in Anzivino et al. [18] follows a Gamma distribution. The published data are fitted with Eq. (3).

Figures 5(a-f) show both the PDF and CDF of systems with different GSDs, and FIGs. 5(g, h) show the rheological properties of granular systems with a different size range, which indicate that changing the particle size range does not change the rheological behavior, especially when we consider $I_{cw}$ and eliminate the effect of $\phi_c$.
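The inverse-CDF sampling rules of Eqs. (B2) and (B3), with the stated resampling of out-of-range draws, can be sketched as follows (parameter values in the example calls are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_d1(n, dmin, dmax, eta1):
    """Fuller-type distribution: inverse of Eq. (B1), i.e., Eq. (B2)."""
    return dmin + rng.random(n) ** (1.0 / eta1) * (dmax - dmin)

def sample_powerlaw(n, dmin, dmax, eta, form="D2"):
    """Eqs. (B3b)/(B3d); draws violating dp in [dmin, dmax] are redrawn,
    as stated below Eq. (B3)."""
    d = np.empty(n)
    for i in range(n):
        while True:
            u = rng.random()
            d[i] = dmax * u ** (1.0 / eta) if form == "D2" \
                   else dmin * (1.0 - u) ** (-1.0 / eta)
            if dmin <= d[i] <= dmax:
                break
    return d

print(sample_d1(5, 0.01, 0.10, eta1=0.5))
print(sample_powerlaw(5, 0.01, 0.10, eta=1.0, form="D3"))
```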
Appendix C: $\phi_{RCP}$ extracted from previous publications

We mentioned that previous research has focused on predicting the random close packing solid fraction, $\phi_{RCP}$, which is closely related to $\phi_c$ in the present work. However, we make clear that $\phi_{RCP}$ is different from $\phi_c$. Studies of $\phi_{RCP}$ have mostly focused on frictionless systems. Furthermore, DEM simulations, where particles are in loading conditions, allow for overlaps among contacting particles, which results in relatively larger solid fractions. To validate the $\phi_c \sim I_{GVD}$ relationship, we turn to published works to also link $\phi_{RCP}$ to $I_{GVD}$. In FIG. 6, we plot the data extracted from both Farr and Groot [22], with a lognormal size distribution, and Anzivino et al. [18], with a Gamma distribution. We converted their particle size distributions into particle volume distributions and obtained the corresponding statistical parameters to calculate $I_{GVD}$. FIG. 6 shows that our empirical relationship of Eq. (3) also works for predicting $\phi_{RCP}$ for frictionless spheres with a single fitting parameter of $\phi_{c0} \approx 0.645$.

[1] T. Man, Mathematical modeling of pavement gyratory compaction: A perspective on granular-fluid assemblies, Mathematics 11, 2096 (2023).
[2] T. Man, H. E. Huppert, and S. A. Galindo-Torres, Runout scaling of granular column collapses on inclined planes, J. Fluid Mech. 1002, A50 (2025).
[3] Y. Lu, W. Jin, J. Klinger, T. L. Westover, and S. Dai, Flow characterization of compressible biomass particles using multiscale experiments and a hypoplastic model, Pow. Technol. 383, 396 (2021).
[4] Y. Lu, W. Jin, J. Klinger, N. Saha, Y. Xia, and S. Dai, Shear rate dependency on flowing granular biomass material, Pow. Technol. 442, 119834 (2024).
[5] GDR MiDi, On dense granular flows, Eur. Phys. J. E 14, 341 (2004).
[6] Z. Ge, T. Man, H. E. Huppert, K. M. Hill, and S. A. Galindo-Torres, Unifying length-scale-based rheology of dense suspensions, Phys. Rev. Fluids 9, L012302 (2024).
[7] J. Fiscina, G. Lumay, F. Ludewig, and N. Vandewalle, Compaction dynamics of wet granular assemblies, Phys. Rev. Lett. 105, 048001 (2010).
[8] T. Man, Compaction evolution and mechanisms of granular materials due to gyratory shearing, Materials 17, 5525 (2024).
[9] C. Song, P. Wang, and H. A. Makse, A phase diagram for jammed matter, Nature 453, 629 (2008).
[10] D. Bi, J. Zhang, B. Chakraborty, and R. P. Behringer, Jamming by shear, Nature 480, 355 (2011).
[11] Z. Ding, W. Hu, C. Chang, Y. Li, and G. Wang, Shear behaviors of confined flow: Insights for understanding the influences of fractal particle size distribution on high mobility of granular flows, Geophys. Res. Lett. 51, e2024GL108956 (2024).
[12] K. W. Desmond and E. R. Weeks, Influence of particle size distribution on random close packing of spheres, Phys. Rev. E 90, 022204 (2014).
[13] K. Hill and B. Yohannes, Rheology of dense granular mixtures: Boundary pressures, Phys. Rev. Lett. 106, 058302 (2011).
[14] B. Yohannes and K. Hill, Rheology of dense granular mixtures: Particle-size distributions, boundary conditions, and collisional time scales, Phys. Rev. E 82, 061301 (2010).
[15] O. Polanía, M. Renouf, M. Cabrera, N. Estrada, and E. Azéma, Monodisperse behavior of polydisperse flows, Phys. Rev. E 111, L043401 (2025).
[16] A. Zaccone, Complete mathematical theory of the jamming transition: A perspective, J. Appl. Phys. 137, 050901 (2025).
[17] A. Zaccone, Analytical solution for the polydisperse random close packing problem in 2d, Pow. Technol. 459, 121008 (2025).
[18] C. Anzivino, M. Casiulis, T. Zhang, A. S. Moussa, S. Martiniani, and A. Zaccone, Estimating random close packing in polydisperse and bidisperse hard spheres via an equilibrium model of crowding, J. Chem. Phys. 158, 044901 (2023).
[19] J. B. Knight, C. G. Fandrich, C. N. Lau, H. M. Jaeger, and S. R. Nagel, Density relaxation in a vibrated granular material, Phys. Rev. E 51, 3957 (1995).
[20] R. S. Hoy, Ultrastable jammed sphere packings with a wide range of particle dispersities, Phys. Rev. E (2025).
[21] S. A. Galindo-Torres and D. M. Pedroso, Molecular dynamics simulations of complex-shaped particles using Voronoi-based spheropolyhedra, Phys. Rev. E 81, 061303 (2010).
[22] R. S. Farr and R. D. Groot, Close packing density of polydisperse hard spheres, J. Chem. Phys. 131, 244104 (2009).
Bridging Theory and Practice in Reconfigurable Fluid Antenna Systems

Halvin Yang, Member, IEEE, Yizhe Zhao, Member, IEEE, Kai-Kit Wong, Fellow, IEEE, Hsiao-Hwa Chen, Life Fellow, IEEE, and Chan-Byoung Chae, Fellow, IEEE

Abstract—Fluid antennas, including those based on liquid, mechanical, and pixel-based technologies, are poised to significantly enhance next-generation wireless systems by adaptively optimizing their radiation characteristics. Many theoretical analyses assume near-instant reconfiguration, perfect channel knowledge, static or slowly varying propagation environments, and ideal material properties that rarely hold in practice. In this article, we dissect these common assumptions and contrast them with the realities of finite actuation time, limited and imperfect channel state information, rapidly changing fading conditions, electromagnetic coupling, and mechanical constraints. Through illustrative examples and simulations, we demonstrate how ignoring these factors can lead to overestimated gains in capacity, coverage, etc. We then propose modeling refinements, experimental validation methods, and emerging control algorithms that better account for real-world constraints. Our findings highlight that, while reconfigurable antennas remain highly promising for B5G/6G and Internet of Things (IoT) applications, their full potential can only be realized by incorporating practical considerations into system design and performance evaluation.

Index Terms—Fluid antenna, fluid antenna system (FAS), movable antenna, modeling, resource allocation, 6G.

I. INTRODUCTION

Wireless networks beyond fifth-generation (5G) are required to keep up with the exponential growth in mobile data traffic while also providing ubiquitous connectivity. To achieve this, new wireless technologies need to be developed. In recent years, multiple-input multiple-output (MIMO) technology has been viewed as a primary solution to drastically improve the spectral efficiency (SE) and energy efficiency (EE) of wireless communications via space-time coding, spatial multiplexing, and beamforming. However, this performance improvement comes at a high cost, namely the addition of more radio-frequency (RF) chains to support additional antennas and higher computational complexity when tackling more complex channels. The increasing system complexity, hardware size, and energy consumption pose significant limitations for mobile or edge devices, which lack the computational capacity and energy supply of base stations (BS).

H. Yang (email: halvin.yang@imperial.ac.uk) is with the Department of Electrical and Electronic Engineering, Imperial College London, UK. Y. Zhao (email: yzzhao@uestc.edu.cn) is with the School of Information and Communication Engineering, University of Electronic Science and Technology of China, China. K.-K. Wong (email: kai-kit.wong@ucl.ac.uk) is with the Department of Electronic and Electrical Engineering, University College London, UK, and Yonsei Frontier Lab, Yonsei University, South Korea. H.-H. Chen (email: hshwchen@mail.ncku.edu.tw) (the corresponding author) is with the Department of Engineering Science, National Cheng Kung University, Taiwan. C.-B. Chae (email: cbchae@yonsei.ac.kr) is with the School of Integrated Technology, Yonsei University, South Korea.

With recent breakthroughs in reconfigurable antenna technology, it is possible to design an antenna that can switch its position in a fixed space along predetermined points, each referred to as a port [1].
Owing to this unique ability to change between multiple spatial locations, this article refers to such a communication system as a fluid antenna system (FAS). Note that a physical antenna used in FAS is not necessarily liquid in nature; the term fluid describes the smooth or dynamic nature of the antenna rather than its physical composition. FAS introduces the novel concept of a position-reconfigurable antenna that can access the nulls of the interference created by natural fading phenomena in a multipath-rich environment, significantly reducing the computational complexity of the system, as complex beamforming and channel estimation for precoding at the transmitter are no longer required [2].

The concept of FAS was initially introduced by Wong et al. in [1], sparking further interest in utilizing reconfigurable antenna technologies for wireless communication system design. Following [1], Zhu et al. introduced the movable antenna in [3] to describe a specific class of FAS that can physically relocate the antenna within a confined space using mechanically movable radiating elements. Recent studies have even suggested 3-D positioning and 3-D orientation of the antenna surface for a 6-D movable antenna architecture. In this article, we adopt the term FAS in a broad sense to encompass various forms of spatially reconfigurable antenna systems, including fluidic, mechanically movable, and other implementations.

Recent advances have further envisioned scaling FAS into large reconfigurable surfaces, effectively engineering the wireless environment itself [4]. This concept proposes that entire building facades or urban infrastructure could be transformed into massive FAS deployments. By dynamically shaping signal paths, these enormous FAS could maximize coverage and enhance capacity in real time, extending the principles of reconfigurable intelligent surfaces (RIS) into fully adaptive communication environments. While such a deployment introduces new challenges, it represents a substantial step towards 6G-enabled smart environments, where communication optimization is embedded directly into physical infrastructure.

Previous literature has shown the viability of fluid antennas deployed in communication systems with a single-antenna FAS, not only significantly outperforming a traditional fixed-position antenna system [1], but also achieving high multiplexing and diversity gains despite being a single antenna [2]. As FAS technology has advanced over the years, different systems have been proposed, each with a different interpretation of fluid antennas, different terminologies, physical architectures, and thus different FAS technical approaches, possibly causing some confusion. Readers are advised to refer to [5] for more information.

Current FAS research often assumes idealized material, channel, and movement conditions, which do not hold in practice. This article discusses those realistic conditions and their impact on communication systems. Corresponding solutions are also recommended at the end of the article, which may serve as future research directions of this emerging and exciting research area.

The rest of the article is organized as follows. Section II summarizes the key concepts, characteristics, and advantages and disadvantages of existing FASs, followed by a fundamental illustration of how a typical FAS operates.
Then, Section III examines the theoretical assumptions made in FAS research and compares them with practical realities, highlighting their implications on system performance. Section IV discusses the broader impact of these assumptions on communication systems. After that, Section V proposes solutions and outlines future research directions. Finally, Section VI concludes the article.

II. OVERVIEW ON FAS TECHNOLOGIES

This section provides an overview of how FASs are implemented practically by looking at different fluid antenna designs, and then discusses the fundamental operating principles of FAS and related enabling technologies.

A. Physical Design

A fluid antenna refers to any radiating structure in which software-controlled conductive or dielectric elements dynamically alter their shape, position, and/or geometry to reconfigure key metrics such as operating frequency, polarization, or radiation pattern. These structures may involve fluidic elements, such as conductive liquids, namely eutectic gallium-indium, Galinstan, or ionized solutions (e.g., NaCl) [6], [7]. Depending on the design, such fluid antennas can enable frequency or spatial reconfigurability, but they exhibit slow switching speeds due to fluid inertia and often require bulky pumps or reservoirs. In contrast, pixel-based antennas use electronically switched static radiating elements, e.g., PIN diodes or MEMS, to form reconfigurable patterns without mechanical movement [8], enabling microsecond- or even nanosecond-level switching and supporting both shape and position reconfiguration. Similarly, metamaterial-based antennas employ tunable electromagnetic surfaces composed of reconfigurable unit cells, thus achieving agile and fully electronic beam control without moving parts and offering comparable switching speeds [9].

By comparison, mechanical antennas physically reposition radiating elements, often in 2D or 6D space (3D translation and 3D rotation), using motors along predefined tracks. While they may offer fine spatial resolution, they often suffer from slow actuation, large energy overhead, and wear-and-tear. Contrary to some claims, mechanical antennas are not simple to implement, requiring structural supports, motion control, and synchronization. Moreover, antenna size is not architecture-dependent, and large arrays can be realized using any technology if deployment constraints permit. Table I summarizes the main fluid antenna architectures, highlighting their mechanisms, benefits, and limitations.

TABLE I
SUMMARY OF DIFFERENT FLUID ANTENNA ARCHITECTURES

• Pixel-Based Antenna [8] — Architecture/operating principle: A grid of static radiating elements is reconfigured electronically by toggling integrated switches (e.g., PIN diodes or MEMS) on/off to form different radiation patterns. No physical movement is involved, allowing ultra-fast and reliable switching. Scalability depends on the number and layout of switches. Advantages/disadvantages: extremely high-speed switching (in the μs range, even reaching ns); limited by the number of switches; large unused sections; design complexity increases with reconfiguration capability.

• Mechanical Antenna [3] — Architecture/operating principle: The antenna element is repositioned mechanically in 2D or 3D space using motors and a structural frame with predefined paths or tracks. Advanced variants (e.g., 6DMA) support simultaneous translation and rotation. It requires physical support infrastructure, power control, and real-time coordination; the signal is routed through an RF chain for processing. Advantages/disadvantages: 2D or 6D spatial coverage depending on configuration; slow switching response (similar to liquid-based fluid antennas); requires a structural frame, control logic, and mechanical actuation; power consumption and mechanical wear are often overlooked in analysis; weight and size limitations depend on deployment.

• Surface-Wave Position-Flexible Antenna [6], [7] — Architecture/operating principle: Combines fluid monopole and surface-wave designs under a shared category. Liquid-based antennas control RF behavior by relocating conductive fluid inside a channel. Frequency-reconfigurable designs (monopole type) vary the fluid's length, and surface-wave types adjust the fluid position to steer or receive guided waves. Typically actuated using pumps or pressure gradients. Advantages/disadvantages: enables frequency or position reconfigurability depending on design; moderate spatial control precision; slow switching speed due to fluid inertia; may be bulky due to pumps and reservoirs; no solid-state switching, with limited responsiveness compared to pixel or metamaterial designs.

• Metamaterial-Based Antenna [9] — Architecture/operating principle: Utilizes programmable metasurfaces or tunable unit cells to manipulate electromagnetic wave propagation without mechanical movement. These structures can dynamically adjust their properties to control beam direction, shape, and polarization, enabling agile and efficient beamforming. Advantages/disadvantages: no mechanical movement and fully electronic reconfiguration; ultra-fast switching speeds; compact and integrable into various form factors; requires complex control circuitry; high design and fabrication complexity.

Fig. 1. An illustration of a FAS's browsing capabilities [1].

B. Operating Principles

Fig. 1 exemplifies a FAS. By enabling radiating elements to change positions in space, a fluid antenna allows users to browse through the different fading envelopes located at each spatial location. Not only does this benefit diversity and capacity, but users can also select the position in space where the signal power is the highest and the noise or interference is suppressed by a deep fade.

By exploiting the capability of FAS to resolve interference by selecting the optimal port (location in space), multiuser communication can be achieved without any complex beamforming or channel feedback. In a traditional MIMO system, where time-frequency resources are shared, signal processing and beamforming are required to create peaks and nulls of the fading envelopes with channel state information (CSI) known beforehand.
In a FAS, the interference at each user is instead expected to be mitigated by the fluid antenna via a process known as port selection, in which the port with the highest signal-to-interference-plus-noise ratio (SINR) is selected. A more sophisticated scheme that selects more than one port is also possible to enhance interference immunity [5, Section V-D].
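A minimal sketch of this max-SINR port selection follows. The i.i.d. Rayleigh channel draws and the noise power are illustrative assumptions; real FAS ports are spatially correlated, as discussed next, so this is a simplification for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_port(h_desired, h_interf, noise_power=0.01):
    """Return the index of the FAS port with the highest instantaneous SINR.
    h_desired: (N_ports,) complex gains of the desired link;
    h_interf:  (N_ports, N_interferers) complex gains of interfering links."""
    signal = np.abs(h_desired) ** 2
    interference = np.sum(np.abs(h_interf) ** 2, axis=1)
    sinr = signal / (interference + noise_power)
    return int(np.argmax(sinr)), sinr

# 64 ports, 3 interferers, i.i.d. Rayleigh fading (assumed for illustration).
n_ports, n_int = 64, 3
h_d = (rng.standard_normal(n_ports) + 1j * rng.standard_normal(n_ports)) / np.sqrt(2)
h_i = (rng.standard_normal((n_ports, n_int)) + 1j * rng.standard_normal((n_ports, n_int))) / np.sqrt(2)
best, sinr = select_port(h_d, h_i)
print(best, 10 * np.log10(sinr[best]), "dB")
```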
C. State of the Art

• Channel Characteristics: Due to the proximity of ports, which can be arbitrarily close, correlation between ports is a key consideration in FAS. Thus, to get an accurate representation of a FAS channel, the correlation needs to be modeled accurately, because an underestimation of the correlation would result in unrealistic performance evaluation. Several correlation models have been proposed recently, including the block correlation model, which assumes a large number of correlated random variables expressed in a matrix, and the single-port correlation model, in which each port is assumed to be correlated with a single reference port so that there is one mutual random variable at each port. The block correlation model exhibits the highest accuracy, while the single-port correlation model has a lower analysis complexity [5, Section II].

• Modeling and Performance Analysis: Performance analysis is essential in understanding the viability of FAS in a communications environment. To this end, the majority of previous works focused on the derivation of such performance indicators as outage probability, multiplexing gain, bit error rate, and data rate. The accuracy of the channel model will impact the performance analysis of different indicators in different scenarios, while more accurate models tend to be significantly more complex.

• Resource Allocation and Optimization: Resource allocation and optimization strategies also depend on the underlying FAS architecture. Pixel-based and metamaterial antennas, due to their ultra-fast switching, are particularly suited for optimization problems that require frequent reconfiguration, such as fast port selection, user scheduling, and multiuser interference suppression. By contrast, mechanical and liquid-based designs face slower actuation and higher energy costs, making them less effective for real-time optimization but still valuable in long-term tasks such as static port assignment, spectrum planning, or energy efficiency optimization. Liquid and surface-wave designs further enable frequency and position agility, which can be exploited in spectrum allocation and interference avoidance, albeit with slower responsiveness. Hence, the choice of optimization strategy should reflect both the communication objective and the physical limitations of the chosen architecture.

III. THEORETICAL ASSUMPTIONS VS REALITY

Fluid antennas, often described as reconfigurable, mechanical, or pixel-based solutions, promise to dynamically modify antenna parameters to maximize the signal-to-noise ratio (SNR), data rates, or coverage. Despite this potential, many studies rely on simplified assumptions that may lead to inflated expectations or overly idealized system models. In what follows, we examine these assumptions and contrast them with practical constraints. We also discuss the implications of these idealized assumptions on communication system performance.

Fig. 2. Performance degradation of three antenna types over time. Outage probability trends were simulated with gradual signal loss and small random variations to reflect environmental impacts such as wear-and-tear, fluid exposure, and temperature changes.

A. Ideal Material Properties

Assumption: Existing fluid-based antenna models treat the fluid as having constant, frequency-independent conductivity and negligible temperature dependence.
Similarly, structural materials may be taken as perfect dielectrics or as perfectly flexible. Analytical models often assume that once an optimal configuration is found, the antenna can reliably return to that exact state with minimal error each time.

Reality: Two issues need to be considered carefully:
1) Frequency Dependence: Conductive liquids, gels, and alloys often exhibit dispersive properties, where conductivity and permittivity vary with frequency.
2) Thermal and Aging Effects: Temperature shifts can alter fluid viscosity or cause metal oxidation, leading to performance drifts. Over time, repeated mechanical or chemical stress can degrade materials as well [10].

Impact: Ignoring real materials' frequency dependence and aging effects leads to overly optimistic predictions of antenna gain and efficiency. Designers may find their prototypes underperforming in practical temperature ranges or long-term deployment. More experimental data on these issues can be found in [10]. In practice, the 'best configuration' might vary slightly each time. This inconsistency impacts link reliability and may require additional measurements or dynamic calibration, increasing overhead, and may call for calibration loops to ensure performance reliability [10]. Fig. 2 gives a rough indication of the impact of wear-and-tear on antenna performance, showing antenna wear due to mechanical movement, antenna fluid degradation from environmental exposure, and the degradation of control mechanisms.

Fig. 2 illustrates how three different antenna types degrade in performance over a ten-year period, expressed in terms of an increasing outage probability, i.e., the likelihood that the antenna fails to maintain a reliable connection. Each curve was generated by simulating a gradual decline in signal quality over time, with added random variation to reflect environmental impacts like weather, dust, or material inconsistencies. The mechanical antenna assumes a steady performance loss from physical wear, corrosion, and moving-part fatigue. The fluid exposure antenna degrades more slowly, capturing the effects of fluid leakage or contamination over extended use. The directional fluid antenna shows the most stable behavior, reflecting its reliance on solid-state control components rather than moving parts or exposed fluids. The rate at which performance degrades varies across the antennas, depending on how sensitive each design is to material or control deterioration.
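The kind of simulation behind Fig. 2 can be reproduced with a short drift-plus-noise sketch. The per-year degradation rates below are purely illustrative assumptions chosen to mimic the qualitative ordering in the figure, not measured values.

```python
import numpy as np

rng = np.random.default_rng(0)

def outage_trajectory(years=10.0, rate=0.03, jitter=0.01, p0=0.02, steps=200):
    """Outage probability drifting upward at a constant mean rate with small
    random environmental fluctuations, mimicking the setup behind Fig. 2."""
    t = np.linspace(0.0, years, steps)
    p = p0 + rate * t + jitter * rng.standard_normal(steps)
    return t, np.clip(p, 0.0, 1.0)

# Assumed per-year degradation rates (illustrative only):
for label, rate in [("wear-based mechanical", 0.045),
                    ("fluid exposure", 0.030),
                    ("controlled precision fluid", 0.012)]:
    t, p = outage_trajectory(rate=rate)
    print(f"{label}: outage after 10 years ~ {p[-1]:.2f}")
```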
Negligible Mechanical and Electromagnetic Coupling Assumption: Some studies analyzed a single ‘element’ or assumed perfect isolation between antenna elements, ignoring the mechanical constraints or electromagnetic interactions that naturally arise in closely packed arrays. The change in antenna geometry in any form can have a significant impact on the RF performance of the whole system, or in the case of multiple antennas affect adjacent radiating elements [11]. 5 0 100 200 300 400 500 Time Steps 10 5 0 5 10 Channel Gain (dB) Impact of Incorrect Slow Fading Assumption on FAS Performance Fast Fading (Frequent Switching) Slow Fading (Slow Switching) (a) Fast Fading Slow Fading Fading Assumption 0 2 4 6 8 10 12 14 Average SNR (dB) 0 1 2 3 4 5 6 7 8 Outage Probability (%) Comparison of Average SNR and Outage Probability for Fading Assumptions Average SNR (dB) Outage Probability (%) (b) Fig. 3. The effect of an inaccurate fading assumption on the performance of the system: (a) the channel gain of slow and fast fading assumptions in a fast fading environment, and (b) the SNR and outage performance of fading assumptions in a fast fading channel. These results illustrate the importance of modeling realistic channel dynamics for FAS. Reality: The following effects matter. 1) Mutual Coupling: In multi-element reconfigurable sys- tems, shifting or resizing one antenna can detune its neighbors, altering the collective radiation pattern in unpredictable ways [11]. 2) Mechanical Friction and Alignment: Real mechanical actuators encounter gear backlash, friction, and align- ment tolerances, meaning that the intended final position often differs from the actual one. 3) Actuator Precision: Mechanical and fluidic systems each have tolerances, and small deviations may accumulate over multiple cycles, which could become unacceptable. Impact: Overlooking these coupling effects can invalidate the assumed radiation pattern, mismatch, and overall system performance. Reconfiguration might improve one metric (e.g., gain in a certain direction) but degrade the others (e.g., interference to neighboring elements). D. Instantaneous Reconfiguration Assumption: Many analyses assumed that antennas can be reconfigured with negligible latency and without additional energy overhead. Under such assumptions, the system can always ‘hop’ to the best configuration whenever channel conditions change. However, this is often unrealistic across multiple antenna types. For instance, [6], [7] showed that liquid-based antennas require pumps or valves and involve non-negligible delays. Similarly, mechanical antennas must physically reposition the radiating elements using motors, which also incurs latency and energy cost. Even electronically reconfigurable designs, such as pixel or metamaterial-based antennas, can involve control signaling, timing constraints, or driver delays. Reality: The following factors must be considered. 1) Latency and Energy: Actuation hardware, no matter whether involving fluid redistribution, motorized move- ment, or complex switching logic, consumes both time and power. Frequent reconfiguration can drain battery 6 life in portable devices and create communication gaps during the transition. 2) Control Overhead: Synchronizing reconfiguration com- mands with fast-changing channels requires additional processing and signaling, introducing further delays in system responsiveness. Impact: Ignoring reconfiguration latency risks exaggerating theoretical throughput or coverage gains. 
In practice, systems may lose critical time adapting to channel conditions that might have already changed. Scheduling algorithms that as- sume instantaneous reconfiguration often overlook switching delays or transitional downtimes, leading to overly optimistic performance predictions that are difficult to reproduce in real deployments. Low-latency or mission-critical applications, such as industrial IoT and autonomous systems, are par- ticularly sensitive to such overhead. Latency introduced by antenna switching may violate strict quality-of-service (QoS) requirements. Moreover, misaligned timing or rapid channel variation can degrade SNR and compromise system reliability if reconfiguration is not executed with a sufficient precision. E. Static or Slowly Varying Channels Assumption: Another common assumption is that the chan- nel remains largely unchanged during the reconfiguration process, or it only changes really slowly that once the ‘best’ antenna state is found, or it remains optimal for long periods [12]. This assumption may not be realistic, as [13] showed that millimeter-wave channels can fluctuate rapidly. Reality: In real-world conditions, we have to take into account the following factors. 1) Mobility and Fast Fading: In urban or vehicular settings, multipath components vary rapidly. By the time an antenna arrives at its new state, the optimal direction or frequency response may have shifted. 2) Short Coherence Times: In high-frequency bands (e.g., millimeter-wave or terahertz bands), coherence time can be on the order of milliseconds, challenging any reconfiguration that takes longer time than that. 3) Channel Dynamics: Mobility and environmental changes can cause rapid channel fluctuations. Impact: Models assuming quasi-static channels can largely overestimate achievable performance in real dynamic sce- narios. Protocols that rely on repeated reconfiguration may struggle to keep pace with channel variations, leading to under- utilized potential or wasted energy. The work in [13] offered further insight into the performance of FAS specifically under these fast channel variations and the detriment to performance from assuming a slow fading channel. Fig. 3 illustrates the impact of inaccurate fading assumptions. These inaccuracies and a larger channel frequency fluctuations may result in the optimal port having to change more frequently, requiring more processing power for port selection. It is worth noting that the results in Fig. 3 are based on a generic FAS channel model rather than a specific hardware implementation. The intention is to illustrate the universal impact of inaccurate fading assumptions, which all FAS architectures are subject to. Nevertheless, the severity of degradation can vary: pixel- based and metamaterial antennas, with their fast electronic switching, can adapt more effectively to fast-fading environ- ments, whereas mechanical and liquid-based designs are more susceptible to performance loss due to their slower actuation. F. Perfect Knowledge of Channel Assumption: It is often assumed that the transmitter (or a central controller) has perfect knowledge of the instantaneous CSI for all possible antenna states, enabling optimal selection among them. This is a very strong assumption, as shown in [14], which highlighted the measurement complexity in capturing channel responses for numerous possible antenna states. Reality: Acquiring CSI comes with practical issues. 
F. Perfect Knowledge of Channel

Assumption: It is often assumed that the transmitter (or a central controller) has perfect knowledge of the instantaneous CSI for all possible antenna states, enabling optimal selection among them. This is a very strong assumption, as shown in [14], which highlighted the measurement complexity of capturing channel responses for numerous possible antenna states.

Reality: Acquiring CSI comes with practical issues.
1) Measurement Overhead: Probing all candidate antenna states to obtain CSI can be prohibitively time-consuming and complex, especially in dense multipath or high-frequency (millimeter-wave) bands.
2) Estimation Errors: Noise, limited training symbols, and pilot contamination (in multiuser scenarios) may degrade CSI estimation accuracy.

Impact: The mismatch between assumed perfect CSI and real, noisy, partial, or outdated CSI can greatly reduce the system's ability to consistently select the best antenna configuration. Ultimately, capacity gains predicted by idealized models often shrink significantly when estimation errors or feedback delays are included. The challenges of CSI overhead were further discussed in [14], which also explored the concept of adaptive selection from multiple antenna states. The impact of CSI errors on FAS can be observed in Fig. 4.

Fig. 4. Impact of imperfect CSI on FAS capacity. Simulations used a flat Rayleigh fading channel with Gaussian signaling and CSI errors modeled as Gaussian noise. Higher error variance significantly reduces capacity at a high SNR.

Fig. 4 shows the impact of imperfect CSI on wireless link capacity across an SNR range from −10 dB to 30 dB. The simulation assumed a flat Rayleigh fading channel with Gaussian signaling and no feedback. CSI errors were modeled as zero-mean complex Gaussian noise with variances ranging from 0 (perfect CSI) to an error of 40%. Each curve represents the average of 1,000 Monte Carlo trials. The results reveal that while CSI errors have minimal effect at a low SNR, their impact grows significantly in the high-SNR region, in which even a modest estimation error can severely limit the achievable capacity. This underscores the critical role of accurate channel estimation in high-performance communication systems.
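A compact sketch of such a Monte Carlo experiment follows. Treating the estimation error as additional Gaussian noise is a standard capacity lower-bound device and one plausible reading of the Fig. 4 setup, not necessarily the authors' exact method; it reproduces the qualitative high-SNR saturation described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def avg_capacity(snr_db, err_var, trials=1000):
    """Average rate over flat Rayleigh fading when only a noisy estimate
    h_hat = h + e, e ~ CN(0, err_var), is available; the estimation error
    is folded into the noise (a standard lower-bound treatment)."""
    snr = 10.0 ** (snr_db / 10.0)
    h = (rng.standard_normal(trials) + 1j * rng.standard_normal(trials)) / np.sqrt(2)
    sinr_eff = np.abs(h) ** 2 * snr / (1.0 + err_var * snr)
    return np.mean(np.log2(1.0 + sinr_eff))

# At -10 dB the error variance barely matters; at 30 dB it caps the rate.
for err_var in (0.0, 0.1, 0.4):
    print(err_var, [round(avg_capacity(s, err_var), 2) for s in (-10, 10, 30)])
```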
IV. IMPLICATIONS IN COMMUNICATION SYSTEMS

The ideal assumptions discussed above can collectively lead to multiple implications for communication systems in real-world scenarios, as illustrated in Fig. 5. Some important implications are discussed below. These physical-layer impacts underscore the need for joint mechanical-electrical modeling, ensuring that dynamic antenna behavior is captured in link-level simulations. By anticipating shifts in resonance, planning for demodulation disruptions, and incorporating offset-tracking loops, designers can mitigate transient degradations and preserve robust physical-layer performance.

Fig. 4. Impact of imperfect CSI on FAS capacity. Simulations used a flat Rayleigh fading channel with Gaussian signaling and CSI errors modeled as Gaussian noise. Higher error variance significantly reduces capacity at high SNR.

Fig. 5. Impacts of a non-ideal fluid antenna on communication systems.

A. Impedance and Matching Networks

Fluid movement can change the antenna's impedance profile, impacting the match to the transmission line or power amplifier [15]. A mismatch leads to reflected power and potential amplifier distortion, necessitating either real-time tuning (e.g., via variable matching networks) or the use of conservative design margins that reduce the benefits of reconfiguration.

B. Frequency Shift and Resonance Stability

Fluidic antennas typically rely on shape or volume changes to alter their resonance. If the antenna's effective electrical length shifts unpredictably during or after reconfiguration, the system can experience frequency offsets and mismatches in the expected resonant band [15]. This may cause unexpected errors in the RF-to-baseband and orthogonal frequency division multiplexing (OFDM) demodulation modules, which is particularly significant for higher-frequency operation (e.g., in millimeter-wave bands), where small dimensional changes can result in large resonance shifts.

C. Doppler and Mobility Considerations

Although fluidic antennas are not necessarily moving at high speeds themselves, rapid mechanical actuation or fluid flow can momentarily mimic Doppler-like effects in the air interface. This can lead to apparent frequency offsets during reconfiguration, requiring updated carrier frequency offset (CFO) tracking at the receiver. In high-mobility scenarios, such as vehicular communications, fluid antennas must not only cope with channel fading but also ensure that any antenna reconfiguration does not significantly worsen the already challenging Doppler spread. The effect on a communication system is similar to that of the aforementioned frequency shift.

D. Inaccurate and Outdated CSI

The massive number of candidate antenna positions makes it impractical to estimate the wireless channel for all antenna states, resulting in imperfect CSI at the transmitter. Meanwhile, fast fading in rapidly varying channels may occasionally outpace the channel estimation rate, making the acquired CSI outdated. Consequently, beamforming efficiency can be readily degraded, since transmit precoding and receive combining both place strict requirements on CSI accuracy.

E. Demodulation Complexity and Synchronization

Real-time changes in the antenna gain pattern can introduce additional phase noise and time-varying channel responses, complicating receiver synchronization and channel estimation. For example, as the antenna reorients or reshapes, partial CSI can become obsolete much more quickly [6]. Consequently, demodulation algorithms may need frequent pilot re-insertion or adaptive equalization to handle dynamic interference or varying signal strengths during reconfiguration events.

F. QoS Degradation

Dynamic FAS positioning inevitably incurs additional latency and energy consumption, which degrades the energy efficiency of a communication link. Moreover, the time spent on antenna re-positioning also contributes to a loss of throughput. Together, these factors result in QoS degradation of the end-to-end communication link.

V. RECOMMENDATIONS AND FUTURE DIRECTIONS

Bridging the gap between theoretical promise and commercial deployment of reconfigurable antenna systems requires fresh perspectives and rigorous research efforts across several aspects. Below are several key directions from the perspective of wireless communications. Innovations in other fields such as materials can also benefit FAS. For instance, there is a need for stable, high-performance fluids. New conductive fluids with lower viscosity and reduced temperature sensitivity can enhance reliability. Metallurgical advances in gallium-based alloys or ion-rich solutions can improve conductivity and longevity. Durable mechanical components are also necessary. Shape-memory alloys, flexible plastics, or 3D-printed waveguides may enable smoother and more repeatable motion, mitigating mechanical wear-and-tear issues. Apart from these, there are some promising directions that deserve further investigation, which we briefly discuss as follows.
A. Stochastic or Hybrid Channel Models

Because channels and reconfiguration processes are inherently uncertain and time-varying, purely deterministic models are insufficient. Researchers are beginning to adopt stochastic approaches, where each antenna configuration transition is assigned a probability of success, a mean delay, and an associated CSI uncertainty distribution. Hybrid models that blend measurement-based data with theoretical frameworks can more accurately reflect practical performance envelopes, guiding realistic system-level optimizations.

B. Medium Access Control Protocols

Medium access control (MAC) protocols that rely on fine-grained, on-the-fly antenna configuration changes must incorporate latency and channel-estimation overhead into their scheduling mechanisms. Slot-based protocols, for example, may require longer guard intervals or advanced predictive algorithms if the antenna reconfiguration time is not negligible. In addition, resource allocation strategies that assume perfect CSI for all antenna states need to incorporate robust feedback loops, which in turn reduce the effective spectral efficiency. In multiuser scenarios (e.g., MIMO networks), fluid antennas might coordinate with each other to perform cooperative positioning in order to avoid interference or optimize coverage collectively.

C. Cross-Layer Design

Reconfigurable antennas can no longer be viewed solely as a physical-layer phenomenon. Cross-layer frameworks, where changes at the physical layer interact with MAC scheduling and even application-layer demands, will be essential. For instance, when reconfiguration overhead is high, an application-layer decision to buffer data during reconfiguration intervals may improve overall system efficiency.

D. Advanced Control and Optimization Methods

More creative and practical approaches need to be sought in order to make FAS mature for large-scale deployment. Some of these ideas are listed as follows.
• Predictive and Machine Learning Techniques: Instead of reacting to instantaneous channel measurements, systems can utilize machine learning models to anticipate channel variations based on historical data, reducing unnecessary reconfigurations.
• Limited-Codebook Approaches: To reduce the overhead of searching a continuum of possible antenna states, designers can adopt a finite codebook of well-chosen configurations. This strategy reduces complexity and channel-measurement overhead.
• Energy-Efficient and Time-Limited Port Selection: Practical fluid or reconfigurable antennas often have multiple ports or states, each with different power-consumption and performance trade-offs. To address energy and latency constraints, researchers have proposed time-limited port selection algorithms, which optimize port usage only within pre-defined time windows. By scheduling reconfiguration intervals more judiciously, the system avoids excessive energy draw from continuous scanning of all possible antenna states. In tandem, energy-efficient metrics (e.g., bits per joule or energy per reconfiguration) can guide selection policies, ensuring that any performance gain from reconfiguration outweighs its associated cost; a minimal sketch combining these ideas follows this list.
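As noted above, the sketch below combines a limited codebook with time-limited (windowed) port selection and tracks a bits-per-energy-unit efficiency metric. The AR(1) channel model, the energy figures (e_probe, e_tx), the codebook spacing, and the window length are all illustrative assumptions rather than values from any cited algorithm.

```python
# Limited-codebook, windowed port selection with an energy-efficiency metric.
import numpy as np

rng = np.random.default_rng(2)
n_ports, n_slots, rho = 64, 1000, 0.98
codebook = np.arange(0, n_ports, 8)   # probe 8 well-spread ports, not all 64
window = 20                           # re-select once per 20 slots
e_probe, e_tx = 0.05, 1.0             # assumed energy units per probe / per slot

h = (rng.standard_normal(n_ports) + 1j * rng.standard_normal(n_ports)) / np.sqrt(2)
port, bits, energy = 0, 0.0, 0.0
for t in range(n_slots):
    w = (rng.standard_normal(n_ports) + 1j * rng.standard_normal(n_ports)) / np.sqrt(2)
    h = rho * h + np.sqrt(1 - rho**2) * w            # channel evolves every slot
    if t % window == 0:                              # time-limited selection instant
        energy += e_probe * len(codebook)            # pay only for codebook probes
        port = int(codebook[np.argmax(np.abs(h[codebook]) ** 2)])
    bits += np.log2(1 + 10.0 * np.abs(h[port]) ** 2) # transmit at 10 dB mean SNR
    energy += e_tx

print(f"throughput ~ {bits / n_slots:.2f} bps/Hz, "
      f"efficiency ~ {bits / energy:.2f} bits per energy unit")
```

Sweeping the window length and codebook size trades throughput against probing energy, which is exactly the trade-off the energy-efficiency metrics above are meant to expose.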
E. Hardware-In-the-Loop and Field Trials

These include:
• Realistic Prototyping: Small-scale prototypes, complete with actuation hardware, fluid channels, or mechanical pivot arms, should be tested in realistic indoor/outdoor environments.
• Extended Network Trials: Multi-node testbeds (e.g., in an anechoic chamber or urban corridor) can expose the interplay of mobility, interference, and real packaging constraints, factors often absent in lab-based experiments.

F. Standardization Efforts

Standardization efforts are also essential. As reconfigurable antennas mature, industry and regulatory bodies may need new guidelines for testing, compliance, and performance evaluation, similar to existing antenna standards (e.g., ETSI, 3GPP).

VI. CONCLUSION

While FAS hold great promise for dynamically optimizing performance, the assumptions listed above, often made for analytical tractability or simplicity, can lead to inflated theoretical gains. In practice, real-world constraints such as time and energy costs, material properties, limited resolution, measurement and alignment errors, dynamic channels, and multiuser interference can significantly diminish these benefits. When models overlook factors such as reconfiguration latency and energy consumption, imperfect or limited channel knowledge, mechanical and electromagnetic coupling, non-ideal materials, packaging limitations, and misalignment issues, they risk being overly optimistic. To better predict actual performance in real-world deployments, future studies should incorporate more realistic models that account for hardware constraints, reconfiguration overhead, channel dynamics, and network-level interactions.

Key Takeaways
• Many FAS studies relied on overly ideal assumptions (e.g., perfect CSI, instantaneous switching, ideal materials).
• Practical constraints such as material aging, packaging limits, and switching delays can significantly reduce the expected gains.
• Realistic stochastic and hardware-aware models are needed to evaluate system performance accurately.
• Cross-layer design is essential to manage reconfiguration overheads and fast-changing channels.
• Prototyping, field trials, and standardization are critical steps to translate FAS theory into practical applications.

REFERENCES
[1] K.-K. Wong, A. Shojaeifard, K.-F. Tong, and Y. Zhang, "Fluid antenna systems," IEEE Trans. Wireless Commun., vol. 20, no. 3, pp. 1950–1962, Mar. 2021.
[2] K. K. Wong and K. F. Tong, "Fluid antenna multiple access," IEEE Trans. Wireless Commun., vol. 21, no. 7, pp. 4801–4815, Jul. 2022.
[3] L. Zhu, W. Ma, and R. Zhang, "Movable antennas for wireless communication: Opportunities and challenges," IEEE Commun. Mag., vol. 62, no. 6, pp. 114–120, Jun. 2024.
[4] K. K. Wong, K. F. Tong, Z. Chu, and Y. Zhang, "A vision to smart radio environment: Surface wave communication superhighways," IEEE Wireless Commun., vol. 28, no. 1, pp. 112–119, Feb. 2021.
[5] W. K. New et al., "A tutorial on fluid antenna system for 6G networks: Encompassing communication theory, optimization methods and hardware designs," IEEE Commun. Surv. & Tut., doi:10.1109/COMST.2024.3498855, 2024.
[6] H. Abu Bakar, R. A. Rahim, P. J. Soh, and P. Akkaraekthalin, "Liquid-based reconfigurable antenna technology: Recent developments, challenges and future," Sensors, vol. 21, no. 3, p. 827, Jan. 2021.
[7] K.-F. Tong, B. Liu, and K.-K. Wong, "Designs and challenges in fluid antenna system hardware," Electronics, Special Issue on Futuristic Antennas: Sustainable, Efficient, Reconfigurable, and Intelligent Design, vol. 14, no. 7, p. 1458, Apr. 2025.
[8] S. Song and R. D. Murch, "An efficient approach for optimizing frequency reconfigurable pixel antennas using genetic algorithms," IEEE Trans. Antennas Propag., vol. 62, no. 2, pp. 609–620, Feb. 2014.
[9] B. Liu, K.-F. Tong, K.-K. Wong, C.-B. Chae, and H. Wong, "Be water, my antennas: Riding on radio wave fluctuation in nature for spatial multiplexing using programmable meta-fluid antenna," arXiv preprint arXiv:2502.04693, 2025.
[10] M. Kubo et al., "Stretchable microfluidic radiofrequency antennas," Adv. Mater., vol. 22, no. 25, pp. 2749–2752, Jul. 2010.
[11] J. Zhang, M. O. Akinsolu, B. Liu, and G. A. E. Vandenbosch, "Automatic AI-driven design of mutual coupling reducing topologies for frequency reconfigurable antenna arrays," IEEE Trans. Antennas Propag., vol. 69, no. 3, pp. 1831–1836, Mar. 2021.
[12] J. Zou, S. Sun, and C. Wang, "Online learning-induced port selection for fluid antenna in dynamic channel environment," IEEE Wireless Commun. Lett., vol. 13, no. 2, pp. 313–317, Feb. 2024.
[13] C. Gupta et al., "Accurate and computationally efficient modeling of non-quasi-static effects in MOSFETs for millimeter-wave applications," IEEE Trans. Electron Devices, vol. 66, no. 1, pp. 44–51, Jan. 2019.
[14] Y. Wang, H. Shen, C. Han, and M. Tao, "Movable antennas: Channel measurement, modeling, and performance evaluation," arXiv preprint arXiv:2409.03386, 2024.
[15] T. Jang, C. Zhang, H. Youn, J. Zhou, and L. J. Guo, "Semitransparent and flexible mechanically reconfigurable electrically small antennas based on tortuous metallic micromesh," IEEE Trans. Antennas Propag., vol. 65, no. 1, pp. 150–158, Jan. 2017.
arXiv:2510.14790v1 [cs.LG] 16 Oct 2025
ACTIVE JAMMER LOCALIZATION VIA ACQUISITION-AWARE PATH PLANNING

Luis González-Gudiño1, Mariona Jaramillo-Civill2, Pau Closas2, Tales Imbiriba1
1Dept. of Computer Science, University of Massachusetts Boston, Boston, MA, USA
2Dept. of Electrical & Computer Engineering, Northeastern University, Boston, MA, USA

This work was partially supported by the National Science Foundation under Awards 1845833, 2326559 and 2530870.

ABSTRACT

We propose an active jammer localization framework that combines Bayesian optimization with acquisition-aware path planning. Unlike passive crowdsourced methods, our approach adaptively guides a mobile agent to collect high-utility Received Signal Strength measurements while accounting for urban obstacles and mobility constraints. To this end, we modify the A* algorithm into A-UCB* by incorporating acquisition values into trajectory costs, leading to high-acquisition planned paths. Simulations on realistic urban scenarios show that the proposed method achieves accurate localization with fewer measurements compared to uninformed baselines, demonstrating consistent performance across different environments.

Index Terms— Jammer localization, GNSS interference, Bayesian optimization, Gaussian processes, Path planning

1. INTRODUCTION

Global Navigation Satellite Systems (GNSS) such as GPS, Galileo, GLONASS and BeiDou provide critical position, navigation, and timing (PNT) services for a wide array of applications, from intelligent transportation and precision agriculture to timing-dependent infrastructures like banking systems and cellular networks [1]. However, the strong dependence on GNSS makes these systems vulnerable to both unintentional and intentional interference [2, 3]. Unintentional interference may originate from out-of-band sources such as terrestrial digital video broadcasting (DVB-T) or amateur radios, as well as from in-band sources like distance measuring equipment (DME) or civilian radars. Additionally, GNSS signals are susceptible to jamming by personal privacy devices (PPDs). These devices, which are inexpensive and readily available online despite being illegal, emit high-power signals in the L-band (the frequency band used by GNSS) and can disrupt reception over distances ranging from tens of meters to several kilometers. These interferences can overwhelm receiver front-ends and result in service denial, with incidents reported in ports and air traffic control zones [4].

Detecting and localizing such interferers is essential for resilient PNT operations. A cost-effective and scalable solution is crowdsourced data [5, 6, 7], especially in densely populated or high-traffic areas with many GNSS users. Crowdsourcing leverages existing GNSS-enabled devices (e.g., smartphones) to collect environmental data in a distributed way. A key measurement here is Received Signal Strength (RSS), which quantifies signal power at a given location and frequency. Under jamming, RSS can deviate markedly from expected GNSS levels, indicating abnormal activity. By analyzing spatially distributed RSS readings, it becomes possible to infer the presence and even approximate location of interference sources without requiring a dense deployment of dedicated monitoring stations. This principle has given rise to a variety of localization techniques.

Many approaches in crowdsourced jammer localization typically rely on fitting power measurements, such as Carrier-to-Noise-density ratio C/N0 or Automatic Gain Control (AGC) values, to a simple physical model [7, 8, 9, 10, 11]. These methods often assume a known path-loss propagation model that, while effective in open-sky scenarios, struggles in complex urban environments where multipath, shadowing and occlusions introduce significant deviations from the ideal path-loss function. To address these limitations, more recent work has explored data-driven approaches. Instead of assuming a fixed physical model, these methods use tools like neural networks to learn the complex, non-linear relationship between location and RSS directly from the data [6, 12].

A common thread in the aforementioned research is its passive nature. These methods rely on data collected by users pursuing their own objectives, meaning samples are gathered incidentally. This leads to inefficient localization, as measurements may be sparse, clustered in redundant areas, or fail to cover regions of highest uncertainty. As a result, many samples may be needed for a confident jammer estimate. Several recent works have explored alternative approaches to overcome these limitations. For instance, [13] proposes a UAV-based system that scans the environment by hovering at preplanned waypoints and rotating a directional antenna to measure signal strength. However, the UAV follows a static plan and does not adapt its path to the data, limiting efficiency. In contrast, [14] deploys a UAV tailored for jamming scenarios that performs greedy bearing-based triangulation. This adds partial adaptivity by iteratively steering toward estimated jammer directions, but remains limited to short-horizon heuristics without global reasoning about promising regions.

These limitations reveal a gap in the literature: the absence of adaptive strategies that guide data collection in a sample-efficient and environment-aware manner. To address this, we propose an active localization framework that combines Bayesian optimization with acquisition-aware path planning, enabling an autonomous agent to navigate complex environments while wisely selecting high-value measurements. To isolate the performance of our proposed framework, we focus on the localization of a single static jammer. However, with an appropriately designed multimodal surrogate model, the framework could be extended to handle multiple jammers. In summary, our main contributions are:
• A novel Bayesian optimization framework for active jammer localization.
• An acquisition-aware path planning strategy that balances movement cost and acquisition gain.
• A sample-efficient strategy that accurately localizes the jammer with minimal measurements.

2. PROBLEM FORMULATION

In this work, we address the problem of localizing a single stationary jamming source in urban environments, using crowdsourced RSS measurements from static agents together with adaptive active sensing by an autonomous mobile agent. Our key assumption is that the jammer's true position x_J lies in a bounded two-dimensional area X ⊂ R^2 and induces an (unknown) interference-power field f_true(x; x_J), whose global maximum corresponds to the location where the RSS from the jammer is strongest.
Many approaches in crowdsourced jammer localization rely on fitting power measurements, such as Carrier-to-Noise-density ratio C/N0 or Automatic Gain Control (AGC) values, to a simple physical model [7, 8, 9, 10, 11]. These methods often assume a known path-loss propagation model that, while effective in open-sky scenarios, struggles in complex urban environments where multipath, shadowing and occlusions introduce significant deviations from the ideal path-loss function. To address these limitations, more recent work has explored data-driven approaches. Instead of assuming a fixed physical model, these methods use tools like neural networks to learn the complex, non-linear relationship between location and RSS directly from the data [6, 12].

A common thread in the aforementioned research is its passive nature. These methods rely on data collected by users pursuing their own objectives, meaning samples are gathered incidentally. This leads to inefficient localization, as measurements may be sparse, clustered in redundant areas, or fail to cover regions of highest uncertainty. As a result, many samples may be needed for a confident jammer estimate. Several recent works have explored alternative approaches to overcome these limitations. For instance, [13] propose a UAV-based system that scans the environment by hovering at preplanned waypoints and rotating a directional antenna to measure signal strength. However, the UAV follows a static plan and does not adapt its path to the data, limiting efficiency. In contrast, [14] deploy a UAV tailored for jamming scenarios that performs greedy bearing-based triangulation. This adds partial adaptivity by iteratively steering toward estimated jammer directions, but remains limited to short-horizon heuristics without global reasoning about promising regions.

These limitations reveal a gap in the literature: the absence of adaptive strategies that guide data collection in a sample-efficient and environment-aware manner. To address this, we propose an active localization framework that combines Bayesian optimization with acquisition-aware path planning, enabling an autonomous agent to navigate complex environments while wisely selecting high-value measurements. To isolate the performance of our proposed framework, we focus on the localization of a single static jammer. However, with an appropriately designed multimodal surrogate model, the framework could be extended to handle multiple jammers. In summary, our main contributions are:

• A novel Bayesian optimization framework for active jammer localization.
• An acquisition-aware path planning strategy that balances movement cost and acquisition gain.
• A sample-efficient strategy that accurately localizes the jammer with minimal measurements.

2. PROBLEM FORMULATION

In this work, we address the problem of localizing a single stationary jamming source in urban environments, using crowdsourced RSS measurements from static agents together with adaptive active sensing by an autonomous mobile agent. Our key assumption is that the jammer's true position xJ lies in a bounded two-dimensional area X ⊂ R2 and induces an (unknown) interference-power field ftrue(x; xJ), whose global maximum corresponds to the location where the RSS from the jammer is strongest.
Thus, our goal is to estimate it by finding the global maximizer of this field:

x̂J = arg max_{x ∈ X} ftrue(x; xJ)   (1)

Obtaining a closed-form analytical model for ftrue in urban scenarios is extremely challenging due to multipath, non-line-of-sight conditions, and shadowing effects that make the field highly non-convex and dependent on city-specific characteristics like building layout and materials. As a consequence, ftrue is treated as a black-box function. In this context, an autonomous agent is deployed to sequentially visit probing locations and collect data with the ultimate goal of solving (1). At each location x ∈ X, the agent records a noisy RSS measurement:

yn = ftrue(x; xJ) + ξn,   n = 1, . . . , N   (2)

with ξn representing additive measurement noise. Importantly, the agent operates under navigation constraints: some regions of X are inaccessible due to static obstacles such as buildings, walls, or restricted zones. We assume that these obstacles (and hence the feasible subset of X) are known a priori. Therefore, the challenge is not only to search for the global maximizer, but to decide where to measure next so that the maximizer can be identified with as few samples as possible. This motivates an approach that smartly balances exploration and exploitation.

3. FRAMEWORK FOR JAMMER LOCALIZATION

To tackle this problem effectively, we require a strategy capable of navigating complex, noisy, and partially observable environments in a data-efficient manner. Bayesian optimization (BO) provides a natural fit for this setting: it is specifically designed for optimizing expensive, black-box functions with limited samples, while explicitly modeling uncertainty. In our context, BO enables the agent to reason about both the expected interference power and the confidence of that estimate across the environment, allowing it to actively trade off, through an acquisition function, between exploring uncertain regions and exploiting promising ones. For a detailed overview of the BO framework, we refer the reader to [15].

Inspired by BO, we adopt an iterative framework (see Fig. 1) where at each iteration, the agent performs four main steps: (i) acquires a new measurement at a selected location, (ii) updates a probabilistic model of the interference field, (iii) computes an acquisition function that determines the next sensing location, and (iv) plans an acquisition-guided path to this location. This loop continues until convergence criteria are met, such as reaching a target confidence in the estimated jammer position or a time constraint.

Fig. 1. Overview of the proposed active localization framework. Top: GP posterior mean μn(x) and uncertainty σn(x) over F. Bottom-right: UCB acquisition αUCB(x) used to select the next sensing target xc_n (⋆). The black curve is the past trajectory and the green dashed line is the planned path toward xc_n. Gray polygons are buildings (non-traversable) and black dots are initial crowdsourced samples D0.

In our approach, we discretize the continuous domain X ⊂ R2 into a finite set of uniformly spaced grid points denoted as G = {x(1), . . . , x(M)} ⊂ X, where each x(i) corresponds to a grid cell. For notational simplicity, we will write x(i) simply as x. This grid-based representation serves two purposes: first, it simplifies the fitting of the surrogate model and the evaluation of the acquisition function, and second, it enables efficient path planning via graph-based algorithms. Some cells in G are obstructed by static obstacles. Let each individual obstacle be defined as a subset Ok ⊂ G for k = 1, . . . , K, and define the complete set of obstacle cells as O = ∪_{k=1}^{K} Ok ⊂ G. The feasible set of grid locations where the agent can safely navigate and take measurements is then F = G \ O. All agent trajectories and sensing decisions are constrained to lie within this set. We now describe each component of the framework in more detail.
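To fix ideas before the components are detailed, the snippet below simulates the sensing model of Eq. (2) in Python. The log-distance field is only an illustrative stand-in for the ray-traced ftrue of Sec. 4.1, and the jammer position and decay constant are made up; the noise variance matches the σ2 = 2.5 used in the experiments of Sec. 4.2.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical jammer position x_J inside the bounded area X (made up).
x_jammer = np.array([120.0, 80.0])

def f_true(x):
    """Toy stand-in for the unknown interference field f_true(x; x_J):
    a simple log-distance decay, not the ray-traced field of Sec. 4.1."""
    d = np.linalg.norm(np.asarray(x, dtype=float) - x_jammer) + 1.0
    return -20.0 * np.log10(d)  # RSS in dB, up to an arbitrary constant

def measure(x, sigma2=2.5):
    """Noisy RSS measurement of Eq. (2): y_n = f_true(x; x_J) + xi_n,
    with Gaussian noise xi_n ~ N(0, sigma2)."""
    return f_true(x) + rng.normal(0.0, np.sqrt(sigma2))
```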
Sensing. At each iteration n = 1, . . . , N of the BO loop, the agent selects a batch of bn feasible locations {xn,1, . . . , xn,bn} ⊂ F (see Sec. 3) and records a corresponding set of noisy RSS measurements according to (2). In this article, noise is assumed to be Gaussian, ξn ∼ N(0, σ2). These new measurements are appended to the existing dataset, which evolves over time as Dn = Dn−1 ∪ {(xn,j, yn,j)} for j = 1, . . . , bn. Here, D0 contains b0 initial crowdsourced measurements available at time step zero, each also obtained according to (2) by static agents.

Predictive Model. We adopt a Gaussian Process (GP) [16] regressor as the surrogate model of the interference-power field. GPs provide a nonparametric Bayesian prior over functions that yields, after conditioning on data, closed-form posterior predictive means and variances. In short, we assume the prior fsurr(x̃) ∼ GP(0, kθ(x̃, x̃′)), where at each location x = (px, py) we build the feature vector x̃ = [px, py, zx], with px, py the normalized 2D coordinates and zx the normalized building height at location x. This lightweight augmentation helps the GP capture systematic power variations induced by urban morphology without sampling inside obstacles. Given the dataset Dn, the surrogate model follows a posterior distribution fsurr,n | Dn, x̃ ∼ N(μn(x̃), σ2n(x̃)) at any query location x̃, with mean μn(x̃) and uncertainty σn(x̃):

μn(x̃) = kθ,n(x̃)⊤ (Kn + σ2η I)−1 yn   (3)
σ2n(x̃) = kθ(x̃, x̃) − kθ,n(x̃)⊤ (Kn + σ2η I)−1 kθ,n(x̃)   (4)

where yn = [y1, . . . , y|Dn|]⊤ denotes the observation vector and Kn ∈ R|Dn|×|Dn| is the Gram matrix with entries [Kn]pq = kθ(x̃p, x̃q). For a new query x̃, the associated kernel vector is kθ,n(x̃) = [kθ(x̃, x̃1), . . . , kθ(x̃, x̃|Dn|)]⊤. These two quantities are jointly exploited by the acquisition function (see Sec. 3).

To flexibly capture both short-range fluctuations (e.g., local multipath) and broader trends of the field, we use an additive, multi-scale kernel with a learnable noise term:

kθ(x̃, x̃′) = kℓθ(x̃, x̃′) + ksθ(x̃, x̃′) + σ2η δx̃,x̃′   (5)

with kιθ(x̃, x̃′) = σ2ι exp(−(1/2) Σd (x̃d − x̃′d)2 / ℓ2ι,d) and ι ∈ {ℓ, s} corresponding to the long and short length scales, explaining the smooth and high-frequency components of the field. The white-noise term σ2η δx̃,x̃′ accounts for measurement noise and residual model error, with δx̃,x̃′ = 1 if x̃ = x̃′ and 0 otherwise. We found this two-scale structure a good compromise between expressiveness and robustness for urban fields. Our formulation is agnostic to this choice and can swap kernels without altering the rest of the pipeline. Hyperparameters θ = (σ2s, σ2ℓ, ℓs,d, ℓℓ,d, σ2η) are learned by maximizing the GP log marginal likelihood with multiple random restarts. Moreover, targets are normalized (zero mean, unit variance) and de-normalized at prediction time. Given Dn, the posterior predictive is computed in closed form and supplied to the acquisition function (see Sec. 3). Finally, the active sensing framework does not rely on GP-specific structure: any probabilistic surrogate able to produce calibrated uncertainty (e.g., Bayesian neural networks) could replace the GP. We choose GPs here for their data efficiency in the low-to-moderate sample regime and their well-calibrated uncertainties, which are pivotal for acquisition-aware planning.
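As a concrete reference for Eqs. (3)-(5), here is a minimal NumPy sketch of the GP posterior with the two-scale kernel. Hyperparameter fitting by marginal-likelihood maximization, target normalization, and the building-height feature zx are omitted, and the example values at the end are made up.

```python
import numpy as np

def two_scale_kernel(A, B, theta):
    """Additive two-scale squared-exponential kernel of Eq. (5), without
    the white-noise term (added on the diagonal separately below).
    theta = (var_l, ls_l, var_s, ls_s); each ls_* may be a scalar or a
    per-dimension array of length scales."""
    var_l, ls_l, var_s, ls_s = theta
    def rbf(var, ls):
        diff = (A[:, None, :] - B[None, :, :]) / ls  # scaled differences
        return var * np.exp(-0.5 * (diff ** 2).sum(axis=-1))
    return rbf(var_l, ls_l) + rbf(var_s, ls_s)

def gp_posterior(Xq, X, y, theta, noise_var):
    """Posterior mean and variance of Eqs. (3)-(4) at query points Xq."""
    K = two_scale_kernel(X, X, theta) + noise_var * np.eye(len(X))
    kq = two_scale_kernel(Xq, X, theta)             # kernel vectors k_{theta,n}
    weights = np.linalg.solve(K, y)                 # (K_n + s^2 I)^{-1} y_n
    mu = kq @ weights                               # Eq. (3)
    V = np.linalg.solve(K, kq.T)                    # (K_n + s^2 I)^{-1} k_q
    prior_var = two_scale_kernel(Xq, Xq, theta).diagonal()
    var = prior_var - np.einsum("ij,ji->i", kq, V)  # Eq. (4)
    return mu, var

# Tiny usage example with made-up observations and hyperparameters:
X = np.array([[0.1, 0.2], [0.4, 0.9], [0.8, 0.3]])  # observed locations
y = np.array([-3.0, -1.0, -2.5])                    # (normalized) RSS values
theta = (1.0, 0.5, 0.3, 0.05)                       # long/short variances, scales
mu, var = gp_posterior(np.array([[0.5, 0.5]]), X, y, theta, noise_var=0.1)
```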
Acquisition Function. To select the next candidate location, we adopt the Upper Confidence Bound (UCB) [17] acquisition strategy. UCB selects points that maximize a weighted combination of the surrogate model's predictive mean and standard deviation:

αUCB(x) = μ(x) + κ · σ(x)   (6)

where μ(x) and σ(x) are the predictive mean and standard deviation at location x, and κ > 0 is a tunable parameter that controls the exploration-exploitation trade-off. Higher values of κ encourage more exploration by favoring regions with high uncertainty, while lower values bias the search toward locations with high predicted signal strength. At each iteration, the next candidate point is chosen as xc_n = arg max_{x ∈ F} αUCB(x).

Path Planning Strategy. While Bayesian optimization identifies the next most valuable location xc_n via an acquisition function, navigating directly to this target is often infeasible or suboptimal in real environments. Urban landscapes impose hard constraints due to buildings, and moving in straight lines can lead to redundant paths that fail to extract new information. What is needed is a path planning strategy that remains goal-oriented but also exploits the acquisition landscape along the way. In other words, at each BO iteration we set xc_n as the intended destination, and instead of moving directly there, we compute a bounded-length path from the agent's current position that also prioritizes acquisition gain. To this end, we modify the A* algorithm [18, 19], a graph-based search method for computing minimum-cost paths. For each node x, the algorithm evaluates

f(x) = g(x) + h(x),   (7)

where g(x) is the accumulated path cost from the start node (sum of edge costs cx,x′) and h(x) is an admissible heuristic that estimates the remaining cost to the goal. By always expanding the node with minimum f(x), A* guarantees an optimal collision-free path in grid environments with static obstacles. While classical A* minimizes only travel cost, yielding efficient but uninformed paths, we modify the arc cost definition to incorporate values from the acquisition function. Specifically, for each edge connecting nodes x and x′, we define

cx,x′ = (λlen − λinfo · ᾱ) ∥x − x′∥,   (8)

where ᾱ = (1/2)(αUCB(x) + αUCB(x′)) is the mean acquisition value across the edge. Here, λlen weighs travel cost, while λinfo biases the planner toward promising regions. This modification transforms A* into an acquisition-aware path planner, A-UCB*, that not only reaches the selected target xc_n but also collects valuable samples along the way. To respect mobility constraints, each path is limited to a maximum length budget δ (implicitly accounting for a maximum allowable velocity) and uniformly subsampled into bn waypoints per iteration.
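The following sketch combines Eq. (6) with the modified A* search. It assumes a 4-connected grid with unit edge lengths and acquisition values rescaled so that the edge costs of Eq. (8) stay positive (a practical assumption on λlen, λinfo and α, not stated explicitly above); the path-length budget δ and the waypoint subsampling are omitted for brevity.

```python
import heapq

def ucb(mu, sigma, kappa=2.0):
    """UCB acquisition of Eq. (6): alpha(x) = mu(x) + kappa * sigma(x)."""
    return mu + kappa * sigma

def a_ucb_star(start, goal, alpha, feasible, lam_len=1.0, lam_info=0.5):
    """A* over a 4-connected grid with the acquisition-aware edge cost of
    Eq. (8): c(x, x') = (lam_len - lam_info * mean_alpha) * ||x - x'||.
    `alpha` maps each feasible cell (a tuple) to its acquisition value;
    `feasible` is the set F of traversable cells."""
    # Cheapest possible edge cost, used for an admissible heuristic.
    c_min = max(lam_len - lam_info * max(alpha.values()), 0.0)

    def h(x):  # never overestimates the remaining cost to the goal
        return c_min * (abs(x[0] - goal[0]) + abs(x[1] - goal[1]))

    frontier = [(h(start), 0.0, start, [start])]
    best_g = {start: 0.0}
    while frontier:
        _, g, x, path = heapq.heappop(frontier)
        if x == goal:
            return path
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (x[0] + dx, x[1] + dy)
            if nxt not in feasible:
                continue
            # Eq. (8) with unit edge length and mean acquisition on the edge.
            step = lam_len - lam_info * 0.5 * (alpha[x] + alpha[nxt])
            g2 = g + step
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None  # goal unreachable within the feasible set
```

With acquisition values normalized to [0, 1], the defaults lam_len = 1 and lam_info = 0.5 keep all edge costs positive, so the heuristic remains admissible and the optimality guarantee of A* carries over.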
4. EXPERIMENTS

4.1. Data Generation

We synthesize RSS fields following the crowdsourced-jamming setups in prior work [6, 12], using MATLAB's deterministic 3D ray-tracing engine [20, 21]. Unlike prior work, we adopt a denser sampling strategy, placing receivers on a uniform grid with 2 m spacing between adjacent sampling points across the walkable workspace. Building maps are also preprocessed: interior courtyards without street access are removed so that each building becomes a compact obstacle polygon. We considered two representative urban layouts:

• Chicago Downtown (dense urban core): a downtown scenario characterized by narrow streets and tall buildings, creating strong multipath and shadowing effects.
• Boston Common (urban park with open sky): a mixed environment where open park areas provide long line-of-sight corridors, while surrounding façades still generate significant reflections and occlusions.

4.2. Quantitative Results

We evaluate the performance of our jammer localization framework under various sampling and planning strategies. The goal of this analysis is twofold: first, to assess how accurately each method can localize the jammer using a limited number of iterations, and second, to investigate the role of acquisition-aware path planning in improving sample efficiency.

Besides our method with a finite path budget, A-UCB* (δ = 50), we consider: (i) A-UCB* with unlimited path length (δ = ∞) as an upper bound; (ii) Random Motion (RM), which moves uniformly at random along the four cardinal directions with δ = 50 steps per iteration; this respects motion feasibility but ignores acquisition information, serving as a conservative lower bound; and (iii) Random i.i.d. Sampling (RIS), which draws queries uniformly from F without motion continuity or acquisition awareness.

All methods are evaluated over 100 independent trials. At each trial, the agent starts from a randomly selected initial position in the grid; this position is fixed across all methods within the same trial to ensure a fair comparison. In every BO iteration, all methods collect bn = 2 RSS measurements, and all experiments use b0 = 35 initial crowdsourced samples uniformly drawn from F. The UCB exploration parameter is set to κ = 2, and the measurement noise variance to σ2 = 2.5. For A-UCB*, the maximum path length budget is set to δ = 50. We fixed these values for all methods and trials.

To evaluate localization accuracy, we report two complementary error metrics. The Surrogate Model Error (SME) measures the distance between the true jammer location and the maximizer of the GP posterior mean field, reflecting the accuracy of the surrogate prediction. The Bayesian Optimization Error (BOE) instead measures the distance to the grid point with the highest RSS value actually sampled, which corresponds to the output of the proposed algorithm. The results are summarized in Table 1, where we report the median and interquartile range (25%-75%) of each error metric across the 100 trials. Furthermore, we illustrate the evolution of the BOE across the 80 iterations in Fig. 2. Specifically, these plots depict the distribution of BOE values at each iteration for both scenarios.

Table 1. Localization error (median [25%-75%]) across 100 independent trials for the Chicago Downtown and Boston Common datasets. Errors are reported in meters for SME and BOE. Lower (↓) values indicate better localization performance.

                      Chicago Downtown                        Boston Common
Method                SME (↓)             BOE (↓)             SME (↓)            BOE (↓)
RIS                   57.4 [27.6-120.1]   65.6 [28.1-103.8]   50.7 [25.4-93.6]   35.9 [20.6-57.2]
RM                    100.3 [36.9-226.5]  102.5 [62.3-186.1]  82.4 [46.0-132.4]  83.9 [44.6-103.6]
A-UCB* (δ = ∞)        14.1 [6.5-24.5]     14.1 [7.7-25.0]     8.3 [5.3-12.4]     8.3 [5.6-14.1]
A-UCB* (δ = 50)       13.4 [7.8-28.0]     12.9 [7.8-24.0]     12.8 [8.3-32.2]    11.4 [6.9-23.2]
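For concreteness, the summary statistic reported in Table 1 can be computed as below; this is a trivial sketch, with the per-trial error arrays assumed to come from the 100 runs described above.

```python
import numpy as np

def median_iqr(errors):
    """Format a set of localization errors (meters) as median [25%-75%],
    the statistic reported in Table 1."""
    q25, med, q75 = np.percentile(errors, [25, 50, 75])
    return f"{med:.1f} [{q25:.1f}-{q75:.1f}]"
```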
Fig. 2. Evolution of the BOE in the Chicago Downtown (top panel) and Boston Common (bottom panel) scenarios across 80 Bayesian optimization iterations. Boxplots at each iteration show the distribution of localization errors across 100 independent trials. The insets zoom into the last 30 iterations to highlight the late convergence behavior.

The proposed acquisition-aware planning strategy consistently achieves faster convergence and lower localization errors compared to the uninformed baselines. Random motion is particularly ineffective, as its tendency to revisit redundant regions leads to persistent errors even after many iterations. Uniform random sampling performs slightly better due to broader spatial coverage, but it still lacks the adaptivity needed to consistently reduce error. In contrast, A-UCB* rapidly drives the optimization process toward the jammer location, with its error distribution narrowing significantly after about 30 iterations, corresponding to fewer than 100 total measurements when accounting for the initial 35 crowdsourced samples and the two agent-collected samples per iteration. This outcome underscores the effectiveness of acquisition-aware path planning in prioritizing high-utility measurements: rather than relying on exhaustive exploration, the method focuses on high-value measurements, producing more reliable field estimates while minimizing wasted effort. Moreover, even when constrained by a finite path-length budget, the method retains most of its efficiency. Finally, the comparison between the two datasets emphasizes the role of the environment: Boston Common, with its open-sky areas, yields lower error levels overall, while the dense and obstructed Chicago Downtown setting poses greater challenges. Despite this, A-UCB* maintains robust performance in both scenarios, highlighting its adaptability in complex urban environments.

4.3. Sensitivity of κ

Varying κ and measuring the BOE at iteration 30 shows a shallow U-shaped trend in both datasets, as illustrated in Fig. 3. Very small κ (e.g., 0.1) is too exploitative, yielding large median errors and wide IQRs, where the agent's initialization strongly influences performance. Very large κ (≥ 5) over-explores, increasing error and variability at the fixed number of iterations. The lowest, most stable errors occur for κ ∈ [1, 3]. We therefore find κ = 2 a robust choice within the low-error plateau.

Fig. 3. Sensitivity analysis of the exploration-exploitation parameter κ in the UCB acquisition function. The plot shows the BOE after 30 iterations for both datasets. Each point corresponds to the median BOE across 100 trials, with error bars denoting the IQR.

5. CONCLUSION

We introduced an acquisition-aware Bayesian optimization framework for jammer localization that integrates probabilistic modeling with adaptive path planning. Results in realistic urban environments confirm that our method localizes interference sources with high accuracy using limited measurements, outperforming random baselines in both convergence speed and robustness. Even under mobility constraints, the framework retains efficiency, highlighting its suitability for real-world deployments. Future work will extend this approach to multi-jammer scenarios and explore alternative probabilistic surrogates beyond GPs.
6. REFERENCES

[1] Y. Jade Morton, Frank van Diggelen, James J. Spilker Jr, Bradford W. Parkinson, Sherman Lo, and Grace Gao, Position, Navigation, and Timing Technologies in the 21st Century: Integrated Satellite Navigation, Sensor Systems, and Civil Applications, volume 1, John Wiley & Sons, 2021.
[2] Rigas Themistoklis Ioannides, Thomas Pany, and Glen Gibbons, "Known Vulnerabilities of Global Navigation Satellite Systems, Status, and Potential Mitigation Techniques," Proceedings of the IEEE, vol. 104, no. 6, pp. 1174-1194, 2016.
[3] Moeness G. Amin, Pau Closas, Ali Broumandan, and John L. Volakis, "Vulnerabilities, threats, and authentication in satellite-based navigation systems [scanning the issue]," Proceedings of the IEEE, vol. 104, no. 6, pp. 1169-1173, 2016.
[4] J. Coffed, "The Threat of GPS Jamming: The Risk to an Information Utility," Tech. Rep., Exelis, Inc., Feb. 2015. Online: https://rntfnd.org/wp-content/uploads/Exelis-GPS-Vulnerability-Assessment-February2014.pdf.
[5] Luka Strizic, Dennis M. Akos, and Sherman Lo, "Crowdsourcing GNSS Jammer Detection and Localization," in Proceedings of the 2018 International Technical Meeting of The Institute of Navigation, Reston, Virginia, Jan. 2018, pp. 626-641.
[6] Andrea Nardin, Tales Imbiriba, and Pau Closas, "Jamming Source Localization Using Augmented Physics-Based Model," in ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2023, pp. 1-5.
[7] D. Borio, C. Gioia, A. Stern, F. Dimc, and G. Baldini, Jammer Localization: from Crowdsourcing to Synthetic Detection, Institute of Navigation, 2016.
[8] Glädje Karl Olsson, Sara Nilsson, Erik Axell, Erik G. Larsson, and Panos Papadimitratos, "Using Mobile Phones for Participatory Detection and Localization of a GNSS Jammer," 2023.
[9] Hangyu Bai, Tao Zhang, and Wenxian Yu, "A Map Reconstruction Based Method for Localizing Multiple GNSS Jammers with Unknown Number," in 2024 43rd Chinese Control Conference (CCC), 2024, pp. 3839-3844.
[10] Sriramya Bhamidipati and Grace Xingxin Gao, "Simultaneous localization of multiple jammers and receivers using probability hypothesis density," in 2018 IEEE/ION Position, Location and Navigation Symposium (PLANS), 2018, pp. 940-944.
[11] Benon Gattis, Ethan Pyke, Dennis Akos, Mark Crews, and Stephen Robertson, "Stress-Testing Flagship Smartphone Models With Real-World GNSS RFI to Determine Real-Time Emitter Localization Capabilities," in 2025 IEEE/ION Position, Location and Navigation Symposium (PLANS), 2025, pp. 1566-1574.
[12] Mariona Jaramillo-Civill, Peng Wu, Andrea Nardin, Tales Imbiriba, and Pau Closas, "Jammer Source Localization with Federated Learning," in 2025 IEEE/ION Position, Location and Navigation Symposium (PLANS), 2025, pp. 362-371.
[13] Marco Spanghero, Filip Geib, Ronny Panier, and Panos Papadimitratos, "GNSS Jammer Localization and Identification With Airborne Commercial GNSS Receivers," IEEE Transactions on Information Forensics and Security, vol. 20, pp. 3550-3565, 2025.
[14] Adrien Perkins, Louis Dressel, Sherman Lo, Tyler Reid, Kazuma Gunning, and Per Enge, "Demonstration of UAV Based GPS Jammer Localization During a Live Interference Exercise," in Proceedings of the 29th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+), Portland, Oregon, Sept. 2016, pp. 3094-3106.
[15] Eric Brochu, Vlad M. Cora, and Nando de Freitas, "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning," 2010.
[16] Carl Edward Rasmussen and Christopher K. I. Williams, Gaussian Processes for Machine Learning, The MIT Press, Nov. 2005.
[17] Peter Auer, "Using confidence bounds for exploitation-exploration trade-offs," J. Mach. Learn. Res., vol. 3, pp. 397-422, Mar. 2003.
[18] Peter E. Hart, Nils J. Nilsson, and Bertram Raphael, "A Formal Basis for the Heuristic Determination of Minimum Cost Paths," IEEE Transactions on Systems Science and Cybernetics, vol. 4, no. 2, pp. 100-107, 1968.
[19] Rina Dechter and Judea Pearl, "Generalized best-first search strategies and the optimality of A*," J. ACM, vol. 32, no. 3, pp. 505-536, July 1985.
[20] Zhengqing Yun and Magdy F. Iskander, "Ray Tracing for Radio Propagation Modeling: Principles and Applications," IEEE Access, vol. 3, pp. 1089-1100, 2015.
[21] K. R. Schaubach, N. J. Davis, and T. S. Rappaport, "A ray tracing method for predicting path loss and delay spread in microcellular environments," in [1992 Proceedings] Vehicular Technology Society 42nd VTS Conference - Frontiers of Technology, 1992, pp. 932-935, vol. 2.
arXiv:2510.14791v1 [astro-ph.SR] 16 Oct 2025
MNRAS 000, 1-15 (2025)   Preprint 17 October 2025   Compiled using MNRAS LATEX style file v3.3

Stellar population astrophysics (SPA) with the TNG. The Phosphorus abundance on the young side of Milky Way★

Mingjie Jian (简明杰),1† Xiaoting Fu (符晓婷),2,3 Valentina D'Orazi,4,5 Angela Bragaglia,3 S. Bijavara Seshashayana,6,7 He Zhao (赵赫),8 Ziyi Guo (郭子怡),9,10 Karin Lind,1 Noriyuki Matsunaga (松永典之),11 Antonino Nunnari,12,4 Giuseppe Bono,4 Nicoletta Sanna,13 Donatella Romano,3 and Marina Dal Ponte5

1 Department of Astronomy, Stockholm University, AlbaNova University Center, Roslagstullsbacken 21, 114 21 Stockholm, Sweden
2 Purple Mountain Observatory, Chinese Academy of Sciences, Nanjing 210023, China
3 INAF - Osservatorio di Astrofisica e Scienza dello Spazio di Bologna, via P. Gobetti 93/3, 40129 Bologna, Italy
4 Department of Physics, University of Rome Tor Vergata, via della Ricerca Scientifica 1, 00133, Rome, Italy
5 INAF - Osservatorio Astronomico di Padova, Vicolo dell'Osservatorio 5, 35122 Padova, Italy
6 Materials Science and Applied Mathematics, Malmö University, SE-205 06 Malmö, Sweden
7 Nordic Optical Telescope, Rambla José Ana Fernández Pérez 7, ES-38711 Breña Baja, Spain
8 Departamento de Ciencias Fisicas, Universidad Andres Bello, Republica 220, 8320000 Santiago, Chile
9 School of Astronomy and Space Science, Nanjing University, Nanjing 210093, China
10 Key Laboratory of Modern Astronomy and Astrophysics (Nanjing University), Ministry of Education, Nanjing 210093, China
11 Department of Astronomy, School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
12 INAF - Astronomic Observatory of Rome, Via Frascati 33, 00078 Monte Porzio Catone, Italy
13 INAF - Osservatorio Astrofisico di Arcetri, Largo E. Fermi 5, 50125 Firenze, Italy

★ Based on observations made with the Italian Telescopio Nazionale Galileo (TNG) operated on the island of La Palma by the Fundación Galileo Galilei of the INAF (Istituto Nazionale di Astrofisica) at the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias.
† E-mail: jian-mingjie@outlook.com

Accepted XXX. Received YYY; in original form ZZZ

ABSTRACT
We present phosphorus abundance measurements for a total of 102 giant stars, including 82 stars in 24 open clusters and 20 Cepheids, based on high-resolution near-infrared spectra obtained with GIANO-B. The evolution of the phosphorus abundance, despite its astrophysical and biological significance, remains poorly understood due to a scarcity of observational data. By combining precise stellar parameters from the optical with a robust line selection and measurement method, we measure phosphorus abundances using the available P i lines. Our analysis confirms a declining trend in [P/Fe] with increasing [Fe/H] around solar metallicity for clusters and Cepheids, consistent with previous studies. We also report a [P/Fe]-age relation among open clusters older than 1 Gyr, indicating a time-dependent enrichment pattern. Such a pattern can be explained by the different star formation histories of their parental gas, with more efficient star formation in the gas of older clusters (and thus higher phosphorus abundances). [P/Fe] shows a flat trend among Cepheids and clusters younger than 1 Gyr (along with three Cepheids inside open clusters), possibly hinting at a phosphorus contribution from previous-generation low-mass stars. Such a trend suggests that the young clusters share a nearly common chemical history, with a mild increase in phosphorus production by low-mass stars.

Key words: stars: abundances - stars: late-type - open clusters and associations: general - stars: variables: Cepheids

1 INTRODUCTION

Spectroscopic analysis of stars allows us to determine the abundances of various elements in their atmospheres, effectively revealing their chemical compositions.
These elemental signatures reflect the composition of the molecular cloud from which the stars were born, capturing the initial conditions of star formation. Consequently, measuring the chemical compositions of stars across different elements provides crucial constraints on the evolutionary history of the Milky Way and constitutes one of the central goals of Galactic Archaeology (or Galactic Paleontology; see e.g. Tolstoy 2011). For instance, observations of α-elements (e.g., O, Mg, Si, S, Ca and Ti) in the Milky Way help distinguish different components of the Galactic disk (e.g., the thin and thick disk; Hayden et al. 2015; Anders et al. 2017). When combined with our understanding of the nucleosynthetic origins of these elements (e.g., Kobayashi et al. 2020), such measurements offer key insights into the chemical evolution of our host galaxy, the solar system, or even life (Fernández-García et al. 2017).

★ Based on observations made with the Italian Telescopio Nazionale Galileo (TNG) operated on the island of La Palma by the Fundación Galileo Galilei of the INAF (Istituto Nazionale di Astrofisica) at the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias.
† E-mail: jian-mingjie@outlook.com

However, the chemical evolution of some elements remains elusive due to a lack of observational constraints. Phosphorus is one such element. The observable spectral lines of phosphorus are mostly located in the ultraviolet and infrared wavelength regions, making it impossible to measure its abundance using optical spectra. Given that ultraviolet observations are heavily affected by interstellar extinction, the infrared becomes the most suitable wavelength range for studying the phosphorus abundance and thus provides the most useful information for Galactic archaeology. In contrast, many spectral lines of other elements are found in the optical, which makes it easier to determine stellar parameters using optical spectra. These parameters can then be used to infer the abundances of elements whose lines are only available in the infrared. Simultaneously obtained optical and infrared spectra are particularly well-suited for this approach (see, e.g., Jian et al. 2024).

Observational studies of phosphorus began relatively recently, with the pioneering work of Caffau et al. (2011), who measured the P abundances of 20 F-type dwarf stars. They found that the [P/Fe] ratio decreases from approximately +0.4 to 0 as metallicity increases from [Fe/H] = −1 to solar. This trend contrasts with that of other light odd-Z elements, such as Na and Al, and also deviates from the theoretical predictions of Kobayashi et al. (2006). Subsequently, Roederer et al. (2014) extended the observational data towards the metal-poor regime. Using the Hubble Space Telescope Imaging Spectrograph, they measured phosphorus abundances in 14 dwarf stars with metallicities ranging from [Fe/H] = −4 to −0.1 dex. [P/Fe] remains approximately solar in the metallicity range [Fe/H] ∼ −4 to −2 dex, then increases to [P/Fe] ∼ 0.5 at [Fe/H] ≈ −1.1, before decreasing back to near-solar levels at higher metallicities. Most subsequent studies have confirmed this general trend. Hawkins et al. (2016) performed the first large-scale measurement of phosphorus abundances using data from the APOGEE survey, significantly expanding the sample around solar metallicity. Later Hayes et al.
(2022) measured phosphorus abundances for more than 120 000 stars using APOGEE DR17, although only upper limits could be estimated for the majority of the stars (∼87 000), so the completeness in metallicity is largely affected. Over 40 additional measurements were later contributed by Caffau et al. (2016) and Caffau et al. (2019), using the high-resolution spectrograph GIANO. Nandakumar et al. (2022) used the IGRINS spectrograph to reproduce the [P/Fe] trend within the metallicity range [Fe/H] = −1.2 to +0.3 dex. In addition, Maas et al. (2022) found that thick-disc stars with metallicities ranging from [Fe/H] = −1 to −0.4 dex tend to exhibit higher phosphorus abundances. Interestingly, Weinberg et al. (2019) reported that the [P/Mg] ratio varies among stars with different magnesium abundances. Since Mg is almost a "pure" α element that is mainly produced by core-collapse supernovae (CCSN), the results of Weinberg et al. (2019) contradict the theoretical prediction that phosphorus is predominantly produced by CCSN.

A number of giant stars with peculiar phosphorus abundances have also been identified. Masseron et al. (2020a) discovered 15 stars exhibiting extremely high phosphorus enhancements ([P/Fe] ≳ 1.5) with unusual overabundances of O, Mg, Si, Al, and Ce, and further examined the detailed abundance patterns of three of these stars in a follow-up study (Masseron et al. 2020b). This sample was later expanded to 78 stars by Brauner et al. (2023). The unusual abundance patterns observed in these objects continue to pose a challenge to current Galactic chemical evolution (GCE) models.

GCE models suggest that phosphorus is primarily produced in massive stars. Figure 1 presents the stellar total yield of phosphorus (the amount of the element newly produced plus that present in the star at birth, in contrast with the net yield, which counts only what is newly produced by stellar nucleosynthesis) after being weighted by the Kroupa stellar initial mass function (IMF, Kroupa 2001). We consider three types of yields here: low-mass stars (from Karakas 2010), massive stars (from Nomoto et al. 2013) and Type Ia supernovae (SNIa, from Iwamoto et al. 1999).

[Figure 1. Stellar total yield of phosphorus for three different initial metallicities (iniZ = 0.0001, 0.004, 0.01), as a function of stellar mass (0–100 M⊙). The yield has been weighted by the Kroupa IMF, and the stellar population has been normalized to 1 M⊙.]

The massive stars contribute over 85% of the phosphorus in a single stellar population at all three metallicities we consider, which are representative initial metallicities for Population I and II progenitor stars in galactic chemical evolution models. Also, the yield for metal-rich stars is higher than that for metal-poor stars over most of the mass range, since in general stellar winds strengthen as metallicity increases (Vink et al. 2001). Cescutti et al. (2012) compared their phosphorus measurements of stars with [Fe/H] > −1 with GCE models, and concluded that the major producer of P should be CCSN, although their adopted yields need to be multiplied by 3 to match the [P/Fe] values, and their trend does not match the metal-poor observations from Roederer et al. (2014). Recently Bekki & Tsujimoto (2024) suggested that the P production of oxygen–neon novae (ONe novae) needs to be included to explain the solar [P/Fe] values in the range [Fe/H] < −2.5 dex as well as the increase between −2.5 < [Fe/H] < −1.
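The IMF weighting behind Figure 1 is straightforward to reproduce. The following minimal sketch, in Python, normalises a Kroupa (2001) IMF to a stellar population of 1 M⊙ and weights a phosphorus total-yield table by it; the yield numbers below are invented placeholders standing in for the Karakas (2010) and Nomoto et al. (2013) tables, so only the mechanics, not the values, should be taken from this example.

import numpy as np

# Kroupa (2001) broken power-law IMF, xi(m) = dN/dm (un-normalised);
# the prefactors 0.08 and 0.04 keep xi continuous at the break points.
def kroupa_imf(m):
    m = np.asarray(m, dtype=float)
    return np.where(m < 0.08, m**-0.3,
           np.where(m < 0.5, 0.08 * m**-1.3,
                             0.04 * m**-2.3))

# Normalise so the population has a total mass of 1 Msun
# (integrate m * xi(m) over 0.01-100 Msun).
m_grid = np.logspace(np.log10(0.01), 2.0, 4000)
norm = np.trapz(m_grid * kroupa_imf(m_grid), m_grid)

# Placeholder total-yield table y_P(m), in Msun per star of initial
# mass m -- NOT the published yields, purely illustrative numbers.
m_tab = np.array([1.0, 2.0, 4.0, 8.0, 13.0, 20.0, 40.0])
yP_tab = np.array([2e-7, 5e-7, 1e-6, 3e-6, 1e-5, 3e-5, 8e-5])

# IMF-weighted yield contributed per mass bin in the 1-Msun population.
dm = np.gradient(m_tab)
weighted = yP_tab * kroupa_imf(m_tab) / norm * dm
for m, w in zip(m_tab, weighted):
    print(f"m = {m:5.1f} Msun -> IMF-weighted P yield {w:.2e} Msun")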
To unfold the evolutionary history of phosphorus in the Milky Way, stellar age is a key parameter. Despite recent observational progress, the age–phosphorus relation remains poorly understood. Determining the ages of individual field stars in the Milky Way with high precision is challenging. Maas et al. (2019) attempted to explore this relation by constructing a P–age diagram, but the stellar ages in their sample suffer from large uncertainties. They emphasised that more accurate age determinations are essential to unveil the evolutionary behaviour of phosphorus in the thin disc of the Milky Way. Cluster member stars, on the other hand, provide more reliable age estimates through fitting isochrones to the cluster colour–magnitude diagram (CMD). This offers a promising avenue for constructing a more precise P–age relation. In this work, we present phosphorus abundance measurements for members of 24 open clusters as well as 20 Cepheids in the field, located in different directions across the Galaxy and at Galactocentric distances (Rgc) between 7 and 10 kpc, aiming to place new constraints on the chemical evolution of phosphorus in the Milky Way.

This paper is organised as follows. Section 2 describes the data and the reduction process. Section 3 outlines the method used to determine phosphorus abundances. The results are presented in Section 4, followed by a discussion in Section 5 and conclusions in Section 6.

2 DATA AND REDUCTION

The data used in this study consist of near-infrared (NIR) spectra covering the wavelength range 0.9–2.45 μm, obtained at a spectral resolution of R = 50,000 with the GIANO-B spectrograph (Oliva et al. 2012a,b; Origlia et al. 2014). These NIR spectra were acquired simultaneously with optical spectra spanning 0.383–0.690 μm at a resolution of R = 115,000, obtained with the HARPS-N spectrograph (Cosentino et al. 2012) in GIARPS mode (Tozzi et al. 2016; Claudi et al. 2017) at the Telescopio Nazionale Galileo (TNG), a 3.58-m optical/infrared telescope located at the Roque de los Muchachos Observatory in La Palma, Canary Islands. This configuration effectively eliminates temporal variations between the NIR and optical spectral lines. The dataset is part of the Large Programme Stellar Population Astrophysics (SPA), which aims to derive detailed, age-resolved chemical abundances across the Milky Way disc (programme ID A37TAC_31, PI: L. Origlia). The programme began in 2018 and was awarded observing time with both the HARPS-N and GIANO-B high-resolution echelle spectrographs at the TNG.

Our dataset consists of 82 giant stars located in open clusters, supplemented by 20 field Cepheids and two standard stars: the Sun and Arcturus. Table 1 provides the observational log of the target stars, and Figure 2 displays the CMDs of our sample. We adopted the ages of the clusters from Cantat-Gaudin et al. (2020) in this study. For the Cepheids, we derived the ages from their periods using the Z = 0.02 period–age relation presented in Bono et al. (2005), with their pulsation mode (fundamental or first overtone) distinguished. Three of the Cepheids, DL Cas, SV Vul and X Vul, are classified as open cluster members (e.g., in Hao et al. 2022). SV Vul and X Vul have similar ages from their pulsation periods and host clusters (with differences < 12 Myr). DL Cas, however, has an age from the period–age relation of 51 ± 9 Myr, while its host cluster, NGC 129, is ∼130 Myr old.
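For reference, converting a Cepheid period into an age takes only a few lines. The coefficients below are our reading of the Z = 0.02 fundamental-mode and first-overtone period–age relations of Bono et al. (2005), quoted from memory; they reproduce the DL Cas value above, but they should be verified against the original paper before quantitative use, and the DL Cas period is itself quoted approximately.

import numpy as np

# log t = a + b * log P (t in yr, P in days); illustrative coefficients
# for the Z = 0.02 relations of Bono et al. (2005) -- verify before use.
PA_COEFF = {"F": (8.31, -0.67),   # fundamental mode
            "FO": (8.08, -0.39)}  # first overtone

def cepheid_age_gyr(period_days, mode="F"):
    """Age in Gyr from the pulsation period and mode ('F' or 'FO')."""
    a, b = PA_COEFF[mode]
    return 10.0**(a + b * np.log10(period_days)) / 1e9

# DL Cas pulsates in the fundamental mode with P ~ 8.0 d:
print(f"DL Cas: {cepheid_age_gyr(8.0):.3f} Gyr")  # ~0.051 Gyr, cf. Section 2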
Considering that only three of our Cepheids have ages from their host clusters, we adopt the ages from the period–age relation to increase the number of stars with age measurements.

Although the SPA project also includes many dwarf stars in these clusters, measuring phosphorus abundances for them is particularly challenging. This is because, in dwarfs, phosphorus lines are significantly weaker than in other types of stars (see Figure 2), while their typically higher v sin i values broaden the lines. This combination greatly reduces the detectability of P lines in dwarfs. In addition, dwarfs are intrinsically fainter, and the P lines are often smeared out by noise in their spectra. Thus we limit our target stars to giants in this study.

2.1 Telluric correction and normalization

The GIANO-B spectra were reduced using the GOFIO data reduction pipeline (Rainer et al. 2018). For each spectral order, the pipeline performs bad pixel and cosmic ray removal, sky and dark subtraction, and corrections for flat-field and blaze effects, followed by the extraction of one-dimensional spectra. We further processed the extracted spectra using the pre-analysis pipeline giano_ct¹, which carries out continuum normalisation and telluric absorption correction.

¹ https://github.com/MingjieJian/giano_ct

Continuum normalisation is performed using the "alpha-roll" method described by Xu et al. (2019) and Cretignier et al. (2020). In this method, a circle with radius α is rolled along the top of the spectrum, and the contact points between the circle and the flux curve are selected as continuum points. The value of α is adaptively determined based on the difference between the smoothed and observed spectra, increasing in regions where absorption lines are present (a schematic implementation is sketched at the end of this subsection). Telluric correction is then applied using the TelFit package (Gullikson et al. 2014), by fitting the model telluric spectrum to the observed spectrum, and the individual spectral orders are merged into a single, continuum-normalised spectrum. The best-fit telluric model is also saved for later analysis. We note that certain wavelength regions (13530–14350 Å and 18020–19400 Å) are heavily affected by telluric absorption and are therefore excluded during the correction process.
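The rolling-circle operation at the heart of the alpha-roll method can be sketched as follows. This is a schematic re-implementation under simplifying assumptions: α is held fixed (giano_ct adapts it to the local line density), α mixes wavelength and normalised-flux units, so the two axes must be scaled sensibly, and the function returns the upper envelope from which the contact points (pixels where the envelope touches the flux) would be taken as continuum points.

import numpy as np

def alpha_roll_envelope(wave, flux, alpha):
    """Upper envelope traced by a circle of radius alpha rolled along
    the top of the spectrum (a morphological closing with a half-disc);
    alpha is in the same mixed (wave, flux) units as the inputs."""
    n = len(wave)
    centres = np.empty(n)
    for i in range(n):  # lowest admissible circle-centre height above pixel i
        sel = np.abs(wave - wave[i]) <= alpha
        dx = wave[sel] - wave[i]
        centres[i] = np.max(flux[sel] + np.sqrt(alpha**2 - dx**2))
    env = np.full(n, np.inf)
    for i in range(n):  # lower envelope of all the resting circles
        sel = np.abs(wave - wave[i]) <= alpha
        dx = wave[sel] - wave[i]
        env[sel] = np.minimum(env[sel], centres[i] - np.sqrt(alpha**2 - dx**2))
    return env

# usage: continuum-normalise with
#   norm_flux = flux / alpha_roll_envelope(wave, flux, alpha=2.0)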
3 MEASUREMENT OF P ABUNDANCE

3.1 Stellar parameters

Stellar parameters (i.e., effective temperature Teff, surface gravity log g, iron abundance [Fe/H] as a proxy for metallicity, and microturbulence velocity Vmic) for our target stars were independently determined by Dal Ponte et al. (2025) based on HARPS-N spectra, and by Bijavara-Seshashayana et al. (in prep.) using GIANO-B spectra. The two sets of parameters are generally consistent, although some discrepancies in log g are present for individual stars. In this work, we adopt the stellar parameters from Dal Ponte et al. (2025).

The spectral synthesis in the following analysis is performed using the PySME code (Wehrhahn et al. 2023). Spectral synthesis is carried out by the SME library (Piskunov & Valenti 2017; Valenti & Piskunov 1996), based on a one-dimensional stellar atmosphere model, under the assumption of local thermodynamic equilibrium (LTE). We note that although non-LTE corrections are available for 16 elements (Amarsi et al. 2020; Mallinson et al. 2024), there is currently no grid of departure coefficients for phosphorus. PySME is capable of synthesising model spectra and determining stellar parameters and abundances by χ² minimisation between the observed and synthetic spectra within user-defined masks.

The current version of PySME takes all input line lists into account during synthesis, which can be time- and memory-consuming when dealing with spectra that span a wide wavelength range. Our recent developments in PySME include a more efficient synthesis via the dynamic removal of weak spectral lines, measurement of stellar parameters using pre-defined masks, and abundance measurements for multiple elements, all performed in an automated manner. These improvements will serve as part of the pipeline for future large-scale spectroscopic surveys such as 4MOST (de Jong et al. 2019). The first two developments will be discussed and tested in detail in Jian et al. (in prep.), while we focus on the abundance measurement aspect in this article.

The line list, covering the wavelength range from 9360 to 24,240 Å, is extracted from the VALD3 database version 3750M (Piskunov et al. 1995; Ryabchikova et al. 2015). We used the "Extract All" mode with hyperfine structure included. All molecular lines across the full GIANO wavelength range are included, except for TiO lines. The enormous number of TiO lines makes it nearly impossible both to download them and to include them in the synthesis. However, for target stars with Teff ⪅ 4500 K (14 stars in our sample), the blending of TiO lines may not be negligible. Thus, we included the TiO lines within ±20 Å of the P lines used for abundance measurements for our target giants (i.e., those in Table 2 with Ngiant not marked as '-'). We adopt MARCS model atmospheres (Gustafsson et al. 2008) as input, and solar abundances are set according to Grevesse et al. (2007, except for phosphorus; see Section 4.1).

The broadening velocity, including macroturbulence and projected rotational velocity, Vbroad, is not provided in Dal Ponte et al. (2025). We therefore measure Vbroad using the GIANO-B spectra. For each star, we first fix the other stellar parameters and select isolated Fe I lines based on the synthetic spectrum (the details of the line selection method are described in Section 3.2). The Vbroad value is then fitted for each selected Fe I line, and the weighted mean and standard deviation across all lines are adopted as the final broadening velocity and its associated uncertainty.

The adopted stellar parameters are listed in Table 3 and plotted in Figure 3. The theoretical equivalent width (EW) of the P i line at 10529.524 Å – one of the strongest phosphorus lines in the near-infrared – is also shown as a contour in the diagram. Most infrared P lines exhibit EW contour shapes similar to this line, with maximum strengths occurring at approximately Teff ≈ 6000 K and low log g, and decreasing as Teff moves away from this peak or as log g increases. As a result, most of our Cepheids exhibit relatively strong P lines (see also Elgueta et al. 2024). In contrast, our cluster giants lie far from the EW peak region, and the EWs of the 10529.524 Å line are typically around 15 mÅ. Therefore, high signal-to-noise ratio spectra are required to derive reliable phosphorus abundances for these stars.

Table 1. Observation log of our target stars, with the columns (left to right): star name, Gaia DR3 ID, observation date in reduced heliocentric Julian dates (RHJD = HJD − 2400000), exposure time, and the mean signal-to-noise ratio per pixel. Only the first six stars from the cluster giant and Cepheid samples and the standard stars are listed; the full table is available at the CDS.

Star name | Gaia DR3 ID | RHJD | t_exp (s) | S/N_mean
Alessi1-2 | 402506369136008832 | 58471.820 | 300 × 12 | 212
Alessi1-3 | 402505991180022528 | 58471.872 | 300 × 12 | 225
Alessi1-5 | 402867593065772288 | 58470.864 | 300 × 8 | 233
Alessi1-6 | 402880684126058880 | 58471.926 | 300 × 10 | 199
Alessi Teutsch11-1 | 2184332753719499904 | 58710.863 | 300 × 2 | 377
Basel11b-1 | 3424056131485038592 | 58513.903 | 300 × 10 | 196
... | ... | ... | ... | ...
DL Cas (NGC 129) | 428620663657823232 | 58354.228 | 300 × 4 | 253
SV Vul (UBC 130) | 2027951173435143680 | 58425.869 | 300 × 2 | 545
X Vul (UBC 129) | 2027263738133623168 | 58349.853 | 300 × 4 | 375
CO Aur | 3451987987438434944 | 58426.248 | 300 × 4 | 398
RX Aur | 200708636406382720 | 58429.227 | 300 × 4 | 425
RW Cam | 473043922712140928 | 58354.187 | 300 × 4 | 388
... | ... | ... | ... | ...
Sun (Ganymede) | – | 58325.880 | 30 × 2 | 130
Arcturus | – | 58301.894 | 60 × 2 | 612
Table 2. List of P i lines used in this study, including their air wavelengths (λ_air), lower excitation potentials (EP), and oscillator strengths (log gf). Columns N_giant and N_Cepheid indicate the number of stars using the corresponding line for the P measurement (excluding cases where the line is blended with telluric features or only provides an upper limit); the number in parentheses (if present) indicates the number of stars removed (see the description in Section 3.3). The final column ('Sun') marks whether the line was also used in the solar abundance determination.

λ_air (Å) | EP (eV) | log gf | N_giant | N_Cepheid | Sun
9609.036 | 6.9356 | −1.05 | - | 0 (1) |
9676.222 | 8.0785 | 0.00 | - | 4 |
9734.755 | 6.9543 | −0.36 | - | 3 |
9750.748 | 6.9543 | −0.18 | 12 | 1 | Y
9790.194 | 7.1758 | −0.69 | - | 2 |
9796.828 | 6.9852 | 0.27 | - | 1 | Y
9903.671 | 7.1758 | −0.30 | - | 7 | Y
9976.681 | 6.9852 | −0.29 | - | 1 (5) | Y
10084.277 | 7.2127 | 0.14 | - | 4 |
10204.716 | 7.2127 | −0.52 | - | 4 |
10511.588 | 6.9356 | −0.13 | 11 | 1 |
10529.524 | 6.9543 | 0.24 | 22 | 1 | Y
10581.577 | 6.9852 | 0.45 | 34 | 1 | Y
10596.903 | 6.9356 | −0.21 | 29 (1) | 4 | Y
10681.406 | 6.9543 | −0.19 | 6 | 5 | Y
10769.511 | 6.9543 | −1.07 | - | 2 |
10813.141 | 6.9852 | −0.41 | 0 (12) | 5 |
10932.724 | 8.0785 | 0.31 | - | 1 |
10967.373 | 8.0783 | 0.12 | - | 7 |
11183.240 | 7.2127 | 0.40 | 8 | - |
16254.749 | 8.2255 | 0.00 | - | 7 |
16482.932 | 7.2127 | −0.29 | 42 | 5 | Y
16590.045 | 8.2276 | 0.50 | - | 0 (6) |
16738.681 | 8.2864 | 0.00 | - | 0 (4) |
17112.447 | 8.2504 | 0.50 | 0 (51) | 0 (7) |
17286.920 | 8.2864 | 0.00 | - | 3 |
17423.670 | 8.2864 | 0.00 | 0 (2) | 4 |

3.2 Line selection

Given the relatively large sample size of stars in this study, as well as their wide range of stellar parameters, it is essential to select suitable P i lines using the synthetic spectra. The VALD database contains 229 P i lines within the relevant wavelength range. However, whether a given line can be used for phosphorus abundance determination depends on several factors: whether the line appears in the observed spectrum, the degree of blending with nearby lines, and whether the line can be reliably measured given the quality of the observed spectra. Therefore, a careful line selection process is required prior to performing the abundance analysis.

The goal of the line selection process is to identify all P lines that are present in the spectra and exhibit minimal blending, based on synthetic spectra. All P i lines are first extracted from the line list.
For each star, we define a total broadening velocity, Vbroad, which accounts for the broadening by the microturbulence velocity (Vmic), the macroturbulence velocity (Vmac), projected rotation (v sin i), and the instrumental resolution (R), as:

V_broad = [ V_mic² + V_mac² + (v sin i)² + (c/R)² ]^(1/2),    (1)

with c the speed of light. A wavelength region with a width of 4V_broad/c is assigned to each target line, defining a feature. If two features overlap in wavelength, they are merged and treated as a single feature. Synthetic spectra based on the stellar parameters mentioned in Section 3.1 are then generated for each feature, and three variables are used to evaluate the suitability of each P line for abundance analysis:

• Feature-dominance: the ratio of the EW of the phosphorus line(s) to the total EW of all lines within the feature;
• Feature-depth: the maximum depth of the feature in the normalised flux;
• Telluric-depth: the maximum depth of telluric absorption in the corresponding wavelength region.

We require that the selected features satisfy the following thresholds: feature-dominance greater than 0.4, feature-depth greater than 0.006, and telluric-depth less than 0.1. The adopted feature-dominance threshold does not require the feature to be completely dominated by phosphorus lines. We found that the primary contaminants in P features are CN (at wavelengths ⪅ 12000 Å) and CO (⪆ 12000 Å) molecular lines. To minimize the blending of these molecular lines, we altered the C abundance to fit the spectra within ±2 Å of the P line and fixed it during the fit for the P abundance. These C abundances are presented in Appendix A and the tables therein. The feature-depth threshold is determined based on the default synthesis precision of PySME. The telluric-depth threshold of 0.1 is relatively strict.

[Figure 2. CMDs of the target clusters. The grey dots represent stars with membership probabilities greater than 0.5 from Cantat-Gaudin et al. (2020), while the blue circles highlight our selected cluster giant stars.]

Table 3. Stellar parameters of the target stars. Only the first six stars from the cluster giant and Cepheid samples and the standard stars are listed; the full table is available at the CDS.

Star name | Teff (K) | log g | [Fe/H] | Vmic (km s⁻¹) | Vbroad (km s⁻¹)
Alessi1-2 | 4986 ± 30 | 2.83 ± 0.07 | −0.001 ± 0.010 | 1.297 ± 0.013 | 6.7 ± 0.9
Alessi1-3 | 4996 ± 30 | 2.84 ± 0.07 | −0.007 ± 0.010 | 1.276 ± 0.014 | 6.6 ± 0.9
Alessi1-5 | 4939 ± 31 | 2.73 ± 0.08 | −0.028 ± 0.010 | 1.353 ± 0.013 | 6.7 ± 0.9
Alessi1-6 | 4985 ± 32 | 2.87 ± 0.08 | −0.013 ± 0.010 | 1.278 ± 0.014 | 6.4 ± 0.8
Alessi Teutsch 11-1 | 4517 ± 35 | 2.15 ± 0.11 | −0.040 ± 0.013 | 1.756 ± 0.014 | 7.2 ± 1.1
Basel 11b-1 | 4950 ± 35 | 2.51 ± 0.09 | 0.007 ± 0.010 | 1.808 ± 0.010 | 8.4 ± 1.1
... | ... | ... | ... | ... | ...
DL Cas | 5622 ± 18 | 1.73 ± 0.04 | 0.050 ± 0.010 | 3.920 ± 0.040 | 25.92 ± 0.17
SV Vul | 5676 ± 13 | 1.00 ± 0.03 | 0.190 ± 0.010 | 4.240 ± 0.020 | 15.19 ± 0.06
X Vul | 6294 ± 12 | 1.95 ± 0.02 | 0.170 ± 0.010 | 4.410 ± 0.020 | 18.04 ± 0.06
CO Aur | 7076 ± 9 | 2.77 ± 0.01 | 0.180 ± 0.010 | 2.670 ± 0.020 | 13.52 ± 0.05
RX Aur | 6466 ± 10 | 1.52 ± 0.01 | −0.040 ± 0.010 | 4.040 ± 0.020 | 26.62 ± 0.09
RW Cam | 5672 ± 16 | 1.63 ± 0.04 | 0.170 ± 0.010 | 3.180 ± 0.020 | 17.51 ± 0.09
... | ... | ... | ... | ... | ...
Sun | 5772 | 4.44 | 0.00 | 1.0 | 2.8
Arcturus | 4286 | 1.6 | −0.52 | 1.74 | 4.6

[Figure 3. The Kiel diagram of our sample stars. The underlying contours show the EW (mÅ) of the P i line at 10529.524 Å at solar metallicity.]
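Equation (1) and the feature window are simple arithmetic, as the short sketch below shows; the numbers fed in are illustrative (a giant-like star with the macroturbulent and rotational broadening folded into a single term), not taken from any specific entry in Table 3.

import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def v_broad(v_mic, v_mac, vsini, R):
    """Total broadening velocity of equation (1), inputs in km/s."""
    return np.sqrt(v_mic**2 + v_mac**2 + vsini**2 + (C_KMS / R)**2)

def feature_window(lambda0, vb):
    """Window of width 4*V_broad/c centred on a line at lambda0 (Å)."""
    half = 2.0 * vb / C_KMS * lambda0
    return lambda0 - half, lambda0 + half

# Illustrative values: V_mic = 1.3 km/s, 6.7 km/s of combined
# macroturbulence + rotation, and GIANO-B resolution R = 50000.
vb = v_broad(1.3, 6.7, 0.0, 50_000)
print(feature_window(10529.524, vb))  # ~ (10528.9, 10530.2) Å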
Given that P lines are intrinsically weak, the profiles of P lines blended with deep telluric absorption are often significantly distorted.

A total of 11 and 26 P i lines passed the automatic selection criteria for cluster giants and Cepheids, respectively, but no P ii line passed our selection. Table 2 lists the line parameters and indicates which group of stars each line is used for. The phosphorus lines that meet the threshold requirements for each target are subsequently included to derive its phosphorus abundance. Minor refinements to the line selection are made following the individual abundance measurements for each line.

The oscillator strength, or log gf value, is a key parameter that determines the strength of a spectral line in synthetic spectra. Accurate abundance measurements therefore rely critically on the precision of the adopted log gf values. While Elgueta et al. (2024) recalibrated the log gf values for seven of the P lines included in our selection, all of the selected lines already have laboratory-measured log gf values. To maintain consistency across all line parameters, we adopt the log gf values provided by the VALD database for all P lines used in this work.

3.3 Abundance measurement

For each phosphorus line, the pixels within the full feature region—defined as a width of 4V_broad/c centred on the line—are used to fit the phosphorus abundance with PySME. Figures 4 and 5 show an example of the line-fitting procedure, for our standard star the Sun and for Alessi 1-2, respectively. Synthetic spectra spanning a broader width of 4V_broad/c + 4 Å are generated, as illustrated in the upper panel. Within the fitting region (vertical orange shaded band), if the blue curve (representing synthetic spectra generated using the P line only) lies between the shaded blue area (synthetic spectra with all lines, with varied [P/Fe]) and the blending by other species (the spectra with thin solid lines) is small, the feature is considered to be dominated by the P line. The lower panel displays the configuration and result of the actual fitting.

To obtain a more accurate continuum level, we extend the input observed spectra to a total width of 4V_broad/c + 10 Å, centred on the P line. Pixels are classified into three types of masks:

• Continuum mask: pixels that satisfy all of the following conditions: (1) their depth in the synthetic spectrum is less than 0.025; (2) their depth in the telluric model is less than 0.1; and (3) the absolute difference between the observed and synthetic fluxes is less than twice the standard deviation of the residuals;
• Line mask: pixels within 4V_broad/c of the P line centre;
• Bad mask: all remaining pixels.

This masking strategy ensures that pixels affected by other spectral lines or strong telluric absorption are excluded from continuum estimation. As the continuum normalisation is generally well performed in Section 2, PySME applies only a constant scaling to the synthetic spectrum, determined by the mean flux level of the continuum pixels. The phosphorus abundance is then derived by minimising the χ² over the line-mask region only.

We adopt the best-fit phosphorus abundance and the corresponding fit_uncertainties (the statistical uncertainties of the fitting) output from PySME as our final measurement and its associated uncertainty. Upper limits are identified following a method similar to the one described in Wang et al. (2024): if the EW of a fitted line at its best-fit abundance A(P)_λ is smaller than three times the EW obtained at A(P)_λ + σ_A(P)_λ, the measurement is classified as an upper limit and excluded from further analysis. When measurement uncertainties properly reflect the noise level in the spectrum, this criterion helps mitigate selection bias by avoiding the preferential inclusion of stronger lines influenced by noise.

We expect the phosphorus abundances derived from all selected lines for a given star to be consistent within their respective uncertainties. However, we find that for a few lines, the derived abundances systematically deviate from the mean value obtained from the other lines. This discrepancy may arise from several factors, such as inaccurate log gf values, improper continuum placement, or undetected blending features that are not captured by the synthetic spectra. These problematic lines are marked with N_giant or N_Cepheid equal to 0 followed by a number in parentheses in Table 2 and are excluded from subsequent analysis. Most of these lines are used in only a few stars; we defer discussion of specific cases where the affected lines are present in the majority of stars to Appendix B.

Finally, the weighted average and standard deviation of the A(P) values from all available P i lines are taken as the final A(P) and its corresponding error.
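The per-star combination can be written compactly. The paper does not state the exact weighting scheme, so the sketch below assumes inverse-variance weights with a weighted standard deviation; it is run on four of the Alessi 1-2 line values from Table 4 (the published mean of 5.49 ± 0.12 also includes a fifth line), so the output is close to, but not exactly, the tabulated value.

import numpy as np

def combine_lines(a_p, sigma):
    """Weighted mean and weighted standard deviation of per-line A(P).
    Inverse-variance weighting is an assumption, not stated in the text."""
    a_p = np.asarray(a_p, dtype=float)
    w = 1.0 / np.asarray(sigma, dtype=float)**2
    mean = np.sum(w * a_p) / np.sum(w)
    std = np.sqrt(np.sum(w * (a_p - mean)**2) / np.sum(w))
    return mean, std

mean, std = combine_lines([5.56, 5.43, 5.51, 5.77],
                          [0.14, 0.07, 0.11, 0.13])
print(f"A(P) = {mean:.2f} +/- {std:.2f}")  # -> A(P) = 5.51 +/- 0.12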
4 RESULTS

We present the results of the phosphorus abundance measurements for individual stars and clusters in this section.

4.1 Phosphorus abundance for the standard stars

Our measurement of the solar spectrum from GIANO shows good consistency with previous determinations, as shown in Figure 6. The same line selection procedure as for the other stars was applied to the standard stars. A total of 13 phosphorus lines were initially selected. Among them, one line is too weak to provide reliable measurements and is therefore treated as an upper limit, while another three are significantly blended with telluric absorption features or do not show a clear absorption profile. It is evident that the lines affected by telluric contamination tend to deviate from the remaining measurements (i.e., the 9525.741 Å and 11183.24 Å lines in Figure 6), which justifies their exclusion from the final abundance calculation. The final phosphorus abundance, 5.46, is obtained as the weighted mean of the remaining reliable P lines. This value is in good agreement with the solar phosphorus abundance derived using 1D LTE spectral synthesis in Scott et al. (2015), 5.38. The estimated uncertainty of 0.2 dex likely arises from a combination of factors, including the lower resolution and signal-to-noise ratio compared to the solar spectrum used in Scott et al. (2015), uncertainties in the adopted log gf values, and imperfect continuum placement.

For Arcturus, no phosphorus lines pass our line selection criteria. Compared to the Sun, Arcturus is cooler and has significantly lower surface gravity (log g), which leads to weaker P lines (see Figure 3). In addition, its relatively low metallicity ([Fe/H] = −0.51) further reduces the line strengths, rendering all available phosphorus lines too weak for reliable abundance measurement.

Our results for the Sun and Arcturus help to define an approximate detection boundary for phosphorus abundance measurements at the resolution and signal-to-noise ratio (S/N) of our GIANO-B spectra. Specifically, a metallicity of [Fe/H] ∼ −0.5 appears to be the lower limit for detecting P lines in cool giant stars, as demonstrated by the non-detection in Arcturus. For warmer giants and dwarfs, our test using the solar stellar parameters suggests that only a single P line, at 10529.524 Å, remains detectable at [Fe/H] = −1.0, indicating a rough detection limit under such conditions. This conclusion is consistent with previous high-resolution studies, particularly those by Caffau et al. (2011), whose sample consists of dwarf stars with the most metal-poor star having a metallicity just above [Fe/H] = −1.0. To extend phosphorus measurements to lower metallicities, stars within the optimum Teff and log g range (see Figure 3) should be observed at very high S/N, or observations should be done in the ultraviolet (see Roederer et al. 2014).

4.2 Phosphorus abundance for our sample stars

Figure 7 shows an example of the final phosphorus abundance determination for the star Alessi 1-2. Eight lines were selected for the measurement, two of which exhibit weak absorption features and are therefore treated as upper limits. One additional line was excluded based on the criteria described in Section 3.3. The final phosphorus abundance is calculated as the weighted average of the abundances derived from the remaining lines.

Tables 4 and 5 summarise the phosphorus abundance measurements for cluster giants and Cepheids, respectively. The uncertainties in the averaged P abundances for the giants are typically around 0.1 dex, while those for the Cepheids are mostly lower, indicating that P lines are generally stronger in Cepheid spectra. NLTE effects on the phosphorus lines may also contribute, at least in part, to the uncertainties for our target stars. However, there is currently no available NLTE departure coefficient for phosphorus, and thus it is not yet possible to quantify or correct this effect.

Table 4. P measurements for cluster giants, giving the star name, the mean P abundance from all normal-flag lines, the number of normal-flag lines, and the abundances from the first six lines. Abundances with a superscript u or t are upper limits or blends with telluric lines, respectively. Only the first 10 stars and the first 6 lines are listed; the full table is available at the CDS.

Star name | A(P)_mean | N | A(P)_9750.748 | A(P)_10511.588 | A(P)_10529.524 | A(P)_10581.577 | A(P)_10596.903 | A(P)_11183.24
Alessi 1-2 | 5.49 ± 0.12 | 5 | 5.68 ± 0.24u | 5.01 ± 0.18u | 5.56 ± 0.14 | 5.43 ± 0.07 | 5.51 ± 0.11 | 5.77 ± 0.13
Alessi 1-3 | 5.50 ± 0.14 | 7 | 5.80 ± 0.12 | 5.56 ± 0.07 | 5.20 ± 0.09 | 5.52 ± 0.05 | 5.55 ± 0.12 | 5.56 ± 0.22
Alessi 1-5 | 5.46 ± 0.21 | 5 | 5.62 ± 0.16 | 4.98 ± 0.18u | 5.37 ± 0.09 | - | 5.39 ± 0.08 | 5.72 ± 0.09
Alessi 1-6 | 5.51 ± 0.23 | 4 | 5.62 ± 0.25u | 5.16 ± 0.19u | 5.28 ± 0.18 | - | 5.37 ± 0.17 | 5.84 ± 0.15
Alessi Teutsch 11-1 | 5.49 ± 0.05 | 2 | 1.28 ± 5.77t | 5.04 ± 0.22u | 5.45 ± 0.14 | - | 5.49 ± 0.06 | -
Basel 11b-1 | 5.41 ± 0.13 | 4 | 4.76 ± 0.82u | 4.95 ± 0.31u | 5.45 ± 0.17 | - | 5.44 ± 0.13 | 5.53 ± 0.16
Basel 11b-2 | 5.35 ± 0.19 | 6 | 5.92 ± 0.14 | 5.36 ± 0.14 | 5.27 ± 0.22u | 5.30 ± 0.04 | 5.31 ± 0.10 | 5.73 ± 0.16
Basel 11b-3 | 5.46 ± 0.14 | 2 | - | - | - | 5.51 ± 0.09 | - | -
COIN-Gaia 30-1 | - | 0 | −1.36 ± 37.53t | −2.10 ± 8.58 | 5.42 ± 0.28u | - | 4.70 ± 0.41r | 5.82 ± 0.11t
Collinder 350-1 | - | 0 | - | - | - | - | - | -
Collinder 350-2 | 5.30 ± 0.04 | 2 | - | - | - | 5.30 ± 0.03 | - | -
... | ... | ... | ... | ... | ... | ... | ... | ...

Table 5. Similar to Table 4, but for the Sun and the Cepheid sample, with ages coming from the period–age relation. Only the first 11 Cepheids are listed; the full table is available at the CDS.

Star name | Age (Gyr) | A(P)_mean | N_line | A(P)_9734.755 | A(P)_9750.748 | A(P)_9790.194 | A(P)_9796.828 | A(P)_9903.671 | A(P)_9976.681
SV Vul | 0.016 ± 0.003 | 5.88 ± 0.09 | 1 | 6.24 ± 0.04r | - | - | - | - | -
DL Cas | 0.051 ± 0.009 | 5.52 ± 0.23 | 4 | - | - | 5.78 ± 0.07 | - | 5.61 ± 0.11t | -
CO Aur | 0.096 ± 0.009 | 5.73 ± 0.17 | 4 | - | 4.63 ± 0.93 | - | - | - | -
RX Aur | 0.039 ± 0.007 | 5.61 ± 0.07 | 4 | - | - | 5.69 ± 0.03 | - | - | -
V351 Cep | 0.080 ± 0.008 | 5.97 ± 0.08 | 2 | - | −1.76 ± 6.86 | - | - | - | -
CD Cyg | 0.031 ± 0.006 | 5.65 ± 0.08 | 2 | - | - | 5.70 ± 0.21t | - | 5.75 ± 0.11 | -
V1334 Cyg | 0.075 ± 0.008 | 5.61 ± 0.13 | 5 | - | - | 5.74 ± 0.10t | - | - | -
VZ Cyg | 0.071 ± 0.013 | 5.65 ± 0.25 | 4 | 6.19 ± 0.22t | - | 5.44 ± 0.45t | - | - | -
X Cyg | 0.031 ± 0.006 | 5.43 ± 0.07 | 2 | - | - | 5.63 ± 0.13t | - | 5.98 ± 0.08t | -
W Gem | 0.051 ± 0.009 | 5.39 ± 0.18 | 7 | - | - | - | - | - | -
RR Lac | 0.059 ± 0.011 | 5.54 ± 0.09 | 5 | - | - | - | 3.11 ± 29.35t | - | -
... | ... | ... | ... | ... | ... | ... | ... | ... | ...
Sun | - | 5.46 ± 0.20 | 9 | - | - | - | 5.50 ± 0.12 | - | 5.74 ± 0.09

Table 6. The mean phosphorus abundance of our target clusters, with the number of member stars used (N) listed.

Cluster | Age (Gyr) | [Fe/H]_mean | A(P)_mean | [P/Fe]_mean | N | Alt Name
Gulliver 18 | 0.04 | −0.01 ± 0.02 | 5.59 ± 0.11 | 0.25 ± 0.11 | 1 | Collinder 416
Collinder 463 | 0.11 | −0.06 ± 0.01 | 5.46 ± 0.19 | 0.16 ± 0.19 | 2 |
UPK 219 | 0.15 | 0.07 ± 0.01 | 5.33 ± 0.06 | −0.10 ± 0.06 | 1 |
Tombaugh 5 | 0.19 | 0.02 ± 0.06 | 5.31 ± 0.14 | −0.08 ± 0.15 | 3 |
NGC 7086 | 0.19 | −0.08 ± 0.03 | 5.61 ± 0.17 | 0.33 ± 0.17 | 2 | Collinder 437
UBC 194 | 0.23 | 0.06 ± 0.01 | 5.44 ± 0.12 | 0.02 ± 0.13 | 1 |
Basel 11b | 0.23 | 0.03 ± 0.03 | 5.41 ± 0.05 | 0.02 ± 0.06 | 3 | FSR 877
NGC 2437 | 0.30 | 0.01 ± 0.06 | 5.45 ± 0.28 | 0.08 ± 0.29 | 4 | M 46, Melotte 75
UBC 169 | 0.30 | 0.05 ± 0.03 | 5.40 ± 0.15 | −0.01 ± 0.16 | 2 |
NGC 2548 | 0.40 | 0.05 ± 0.05 | 5.30 ± 0.07 | −0.11 ± 0.09 | 3 | M 48, Melotte 85
Stock 2 | 0.40 | −0.00 ± 0.04 | 5.30 ± 0.12 | −0.06 ± 0.12 | 7 |
NGC 7209 | 0.43 | 0.04 ± 0.01 | 5.63 ± 0.05 | 0.22 ± 0.05 | 1 | Melotte 238, Collinder 444
Collinder 350 | 0.59 | 0.08 ± 0.01 | 5.30 ± 0.04 | −0.15 ± 0.04 | 1 |
NGC 2632 | 0.68 | 0.13 ± 0.01 | 5.62 ± 0.18 | 0.13 ± 0.19 | 1 | M 44, Praesepe
NGC 752 | 1.17 | 0.08 ± 0.02 | 5.30 ± 0.08 | −0.13 ± 0.09 | 3 | Melotte 12, Theia 1214
IC 4756 | 1.29 | −0.00 ± 0.04 | 5.39 ± 0.05 | 0.03 ± 0.06 | 6 | Collinder 386, Melotte 210
Alessi 1 | 1.45 | −0.01 ± 0.01 | 5.49 ± 0.01 | 0.14 ± 0.02 | 4 | Casado-Alessi 1
NGC 6991 | 1.55 | 0.03 ± 0.06 | 5.42 ± 0.15 | 0.03 ± 0.16 | 5 |
UBC 141 | 2.09 | −0.02 ± 0.01 | 5.38 ± 0.23 | 0.05 ± 0.23 | 1 |
UBC 577 | 2.75 | −0.04 ± 0.04 | 5.41 ± 0.12 | 0.09 ± 0.13 | 3 | Alessi 191
Ruprecht 171 | 2.75 | −0.04 ± 0.05 | 5.71 ± 0.08 | 0.39 ± 0.09 | 4 |
NGC 2682 | 4.27 | 0.02 ± 0.01 | 5.66 ± 0.53 | 0.28 ± 0.53 | 1 | M67

[Figure 4. Phosphorus abundance determination from the P i 10529.524 Å line in the solar spectrum. Top panel: synthetic spectra after varying the abundance of P by ±0.1 dex (blue shaded profiles), together with spectra that contain only the P i transition (solid blue) and the other species (grey). The orange dotted vertical line marks the line centre, while the orange shaded strip marks the wavelength range used in the fit. Bottom panel: observed spectrum (blue points) compared with the best-fitting synthetic spectrum (solid orange) and models corresponding to the ±1σ uncertainty in A(P) (orange dashed). The light-grey dashed "stair-step" curve (right-hand y-axis) shows the pixel mask: cont (continuum windows), line (pixels included in the fit), and bad (pixels excluded). The orange shaded region is the same fitting window as in the top panel. Similar plots for the other analysed P lines are available in the Supporting Information.]

We then compute the average phosphorus abundance for each cluster using its member stars, as listed in Table 6. The [P/Fe]–[Fe/H] distribution of our cluster and Cepheid sample is broadly consistent with previous studies, as shown in Figure 8. Our targets span a relatively narrow metallicity range, from [Fe/H] = −0.1 to +0.15. Given this limited coverage, we adopt the results from Nandakumar et al. (2022) as the primary comparison, and refer the reader to their figure 4 for a comprehensive compilation of earlier studies. Both our field Cepheids and cluster stars exhibit a modest decreasing trend in [P/Fe], from ∼0.3 to ∼0, over the range [Fe/H] = −0.1 to 0. At the metal-rich end, [P/Fe] appears to flatten near zero, although this trend is not well constrained due to the small number of stars in this regime. Overall, our observations fall within the same [P/Fe] range as reported by Nandakumar et al. (2022), supporting the consistency of our results with previous measurements.

The [P/Fe]–age relation observed in our cluster sample shows general consistency with previous studies, while also providing new insights at the younger end. As shown in Figure 9, the phosphorus abundance (both the absolute P abundance A(P) and the relative [P/Fe]) of clusters with ages greater than 1 Gyr increases with age. This trend is consistent with the findings of Maas et al. (2019) and Feuillet et al. (2018), although we note that the stellar ages from Maas et al. (2019) (with a mean error of ∼1.8 Gyr) were derived from isochrone fitting of field stars, a method notoriously subject to larger uncertainties than when it is applied to stellar clusters (with relative errors around 25%; Cantat-Gaudin et al. 2020). In contrast, the trend at younger ages seems to be flat. We performed linear fits to the clusters with ages younger and older than 1 Gyr separately in Figure 9. The linear fit for the older clusters shows a positive slope and significant Pearson p-values (less than 0.05) in both A(P) and [P/Fe]. Those for the young clusters present a negative slope with p-values larger than 0.05.
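The two-regime fit and its Pearson p-values can be reproduced with standard tools; the sketch below runs scipy.stats.linregress on a subset of the cluster means from Table 6 and illustrates the procedure only, so the slopes and p-values it prints are not the published ones.

import numpy as np
from scipy.stats import linregress

# A subset of the (age, A(P)) cluster means from Table 6.
age = np.array([0.04, 0.19, 0.40, 0.68, 1.17, 1.45, 2.75, 4.27])  # Gyr
a_p = np.array([5.59, 5.31, 5.30, 5.62, 5.30, 5.49, 5.71, 5.66])

for label, sel in [("young (<1 Gyr)", age < 1.0),
                   ("old   (>1 Gyr)", age >= 1.0)]:
    fit = linregress(age[sel], a_p[sel])
    print(f"{label}: slope = {fit.slope:+.3f} dex/Gyr, p = {fit.pvalue:.3f}")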
Most clusters with ages less than 1 Gyr have near-solar phosphorus abundances, while the clusters and Cepheids with even younger ages show either solar or enhanced [P/Fe] values. They reduce the slope of the linear fit and further decrease the p-values to less than 0.05. The Cepheid measurements fill in the blank on the youngest side of the trend. They extend the negative trend down to an age of 0.015 Gyr, and raise the fitted phosphorus abundance to ∼5.7.

4.3 Spatial distribution of phosphorus

One of the goals of Galactic archaeology is to map the spatial distribution of elements in our Galaxy. Since OCs represent well-defined single stellar populations, their spatial distribution allows us to investigate the phosphorus distribution among different groups of stars in various directions across the Galactic plane. The classical Cepheids in our sample can also provide similar information, as they are relatively young and their period–luminosity relation enables precise distance measurements.

Figure 10 displays the spatial distribution of our target clusters and Cepheids in heliocentric Cartesian coordinates, overlaid with a dust extinction map (Dharmawardena et al. 2024) to represent the current structure of the interstellar medium (ISM). We adopted solar X and Z values of 8122 and 20.8 pc, respectively (GRAVITY Collaboration et al. 2018; Bennett & Bovy 2019).

Despite the clear inhomogeneity of the present-day ISM, we find no statistically significant correlation between the absolute phosphorus abundance, A(P), and the clusters' current locations, be it their Galactocentric radius or their position relative to local dust structures. This apparent lack of correlation is not a deficiency in our data but is, in fact, a key piece of evidence for the dynamic nature of the Milky Way's disk. Over timescales of tens of millions to several billion years, processes such as radial migration and vertical heating act to redistribute stars and clusters from their birthplaces (see e.g. Viscasillas Vázquez et al. 2023). Consequently, a cluster's current location is often decoupled from the chemical composition of the gas from which it formed.

[Figure 5. Similar to Figure 4, but for the line at 10581.577 Å of Alessi 1-2. Here several CN lines appear around the target P line, and their synthetic spectrum is also plotted in the upper panel. Similar plots for other lines and other stars are available in the Supporting Information.]

[Figure 6. The phosphorus abundance measurements from all the lines for the Sun. Blue filled circles give the abundance A(P) derived for each line (wavelength range shown on the x-axis); their vertical blue error bars mark the 1σ fitting uncertainty. Deep-red plus symbols plot the line-by-line abundances reported by Scott et al. (2015) for the lines used in both studies, enabling a direct comparison.
Red crosses indicate lines discarded from our final mean because of severe telluric blending or manual exclusion, while blue downward arrows denote lines for which only an upper limit on A(P) could be set. The horizontal blue dashed line is the weighted mean abundance from all accepted ("normal-flag") lines, with its 1σ dispersion shown by the light-blue band. For reference, the red dashed line marks the solar value A(P) = 5.38 adopted by Scott et al. (2015). The panel title lists the mean abundance and its standard deviation obtained in this work.]

5 DISCUSSION

Our analysis of OC ages reveals a distinct "V-shape" in the phosphorus enrichment history of the Milky Way's thin disk (see Figure 9). While previous studies have established a general trend of [P/Fe] with [Fe/H] (Caffau et al. 2011, 2016; Maas et al. 2019, 2022), the high precision and wide range of ages at a similar [Fe/H] of our cluster sample allow us to dissect the phosphorus evolution in time, uncovering two distinct evolutionary trends.

The "V-shape" in the A(P)–age plane is not a single evolutionary track, but rather the superposition of two different regimes of GCE. The clusters younger than ∼1 Gyr appear to follow a path consistent with local, recent enrichment, while the older clusters unveil a fossil record of the diverse and dynamic star formation histories of the Galaxy's past. We note that similar but milder [Mg/Fe]–age and [Mg/H]–age trends can also be found using the Mg abundances from Dal Ponte et al. (2025), where only open clusters are analysed. A systematic comparison of age–abundance trends for other elements with similar nucleosynthetic origins would be highly informative; however, such an analysis is beyond the scope of the present study and is deferred to future work.

[Figure 7. Similar to Figure 6, but for Alessi 1-2: per-line A(P) values across the eight selected wavelength windows (9750–17113 Å), with the weighted mean A(P) = 5.49 ± 0.12. Similar figures for other stars are available in the Supporting Information.]

[Figure 8. [P/Fe] versus [Fe/H] for the OC giants in our sample (left panel; the average [P/Fe] for each cluster) and the Cepheids (right panel). For comparison, the [P/Fe] measurements of the giants from Nandakumar et al. (2022) are also plotted.]

5.1 Old OCs as a fossil record of diverse star formation histories and P enrichment

For OCs older than ∼1 Gyr, we observe that the absolute phosphorus abundance A(P) decreases as stellar age decreases from ∼4 Gyr to ∼1 Gyr. This trend is contrary to the predictions of a simple, single-zone GCE model, in which the metal abundance should monotonically increase with evolution time. The explanation lies in the fact that the Milky Way is not a single, uniform entity. The solar neighbourhood today contains a mix of stellar populations born in different locations and at different times, each with a unique star formation history. This is a key feature of multi-zone GCE.

The clusters in our sample, despite having similar near-solar metallicities (probed by [Fe/H]), reached this metallicity at vastly different cosmic epochs (see Table 6). For a cluster like M67, which had achieved solar metallicity by about 4 Gyr ago, the parent gas cloud must have undergone a very rapid and intense period of enrichment. In contrast, for another cluster in the old-OC group, NGC 752, the parent gas cloud reached a similar metallicity much more gradually, about 1 Gyr ago. This difference in enrichment speed implies a fundamentally different star formation history.
The rapid enrichment required for the older clusters necessitates either a significantly higher star formation rate or a more "top-heavy" IMF that produces more massive stars per generation. In either scenario, the integrated contribution from massive stars—the primary producers of phosphorus—is far greater (see Figure 1). Therefore, it is an inevitable consequence that the 4 Gyr-old M67 formed from gas that was more abundant in absolute phosphorus A(P) than the gas that formed the 1 Gyr-old NGC 752. The declining A(P) trend for clusters older than ∼1 Gyr is thus a direct manifestation of observing a sequence of clusters born from environments with progressively less intense star formation histories.

5.2 Young OCs and Cepheids as evidence for local, quiescent P enrichment

Open clusters younger than ∼1 Gyr and the Cepheids in Figure 9 exhibit a flat or slightly increasing A(P) (or [P/Fe]) with decreasing age. This behaviour aligns well with the expectations for a single-zone GCE model. These clusters and stars are young and located in the solar neighbourhood, meaning they were likely born locally and share a nearly common, recent chemical history. They represent the current, more quiescent phase of evolution in our part of the Galaxy. In this local environment, we see a slow enrichment of phosphorus over the last ∼1 Gyr.

[Figure 9. Cluster or stellar age versus phosphorus abundance for our targets. The A(P) and [P/Fe] measurements from Maas et al. (2019) are plotted in grey as a reference. The blue lines and transparent strips represent separate linear fits and their 95% confidence intervals for the cluster data, applied independently to clusters younger and older than 1 Gyr, while the orange ones show the linear fit to the combined cluster and Cepheid sample on the young side. The [P/Fe]–age trend from Feuillet et al. (2018) is plotted as a reference.]

Such gentle phosphorus enrichment may be explained by the contribution from low-mass stars. Figure 11 shows the evolution of phosphorus in a single stellar population at two different initial metallicity (iniZ) values, following the same set-up as the yields in Figure 1.

[Figure 11. Integrated ejecta mass of a single stellar population as a function of age (0.001–10 Gyr), for initial metallicities iniZ = 0.0001 (left panel) and 0.01 (right panel), split into the contributions of all stars, massive stars, SNe Ia, and AGB stars.]

Overall, the main phosphorus production at both lower and higher metallicities originates from massive stars, which is consistent with the conclusions of previous studies (e.g. Cescutti et al. 2012). The contribution from lower-mass stars, i.e. AGB stars, reaches a level of 10⁻⁸ M⊙ at later evolution times at iniZ = 0.0001. However, the contribution of AGB stars to phosphorus production at iniZ = 0.01 is an order of magnitude higher than in the metal-poor case, making it more significant at later times in the evolution of the single stellar population. This increased phosphorus production by AGB stars is reflected in the peak of the IMF-weighted total yield at the lowest stellar masses, as seen for iniZ = 0.004 and 0.01 in Figure 1. These slightly sub-solar metallicities also correspond approximately to the previous generation of the OCs and Cepheids in our study. In the relative absence of vigorous massive-star formation in the immediate solar vicinity, the enrichment from these lower-mass stars can become a noticeable contributor to the ongoing chemical evolution, explaining the slight upward trend in A(P) towards the present day.
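The timing argument, massive-star phosphorus arriving within tens of Myr and the AGB contribution only after roughly a Gyr, can be illustrated with a toy delayed-enrichment calculation. Everything in the sketch is schematic: the lifetime formula is a rough power-law scaling and the per-bin yields are invented, standing in for the stellar models behind Figure 11.

import numpy as np

def lifetime_gyr(m):
    """Very rough main-sequence lifetime, tau ~ 10 * m^-2.5 Gyr;
    an order-of-magnitude scaling, not the models used in the paper."""
    return 10.0 * np.asarray(m, dtype=float)**-2.5

# Invented IMF-weighted P yields per mass bin (Msun), split by channel.
m = np.array([1.5, 2.5, 4.0, 6.0, 13.0, 20.0, 40.0])
yP = np.array([3e-8, 5e-8, 6e-8, 4e-8, 3e-7, 4e-7, 2e-7])
channel = np.where(m >= 8.0, "massive", "AGB")

for t in [0.01, 0.1, 1.0, 10.0]:  # Gyr, spanning Figure 11's time axis
    dead = lifetime_gyr(m) < t    # stars that have ejected their P by t
    for ch in ("massive", "AGB"):
        tot = yP[(channel == ch) & dead].sum()
        print(f"t = {t:5.2f} Gyr  {ch:7s}: {tot:.2e} Msun")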
We note that this interpretation is sensitive to the uncertainties in the nucleosynthetic yields adopted here, highlighting the need for more precise yield determinations. Also, the contribution of ONe novae, recently discussed by Bekki & Tsujimoto (2024), is not considered here, but it may contribute to some extent to phosphorus production.

6 SUMMARY

In this work, we present a detailed abundance analysis of phosphorus for 82 giant stars in 24 open clusters, together with 20 Cepheids (17 in the Galactic field and 3 in open clusters). The analysis is based on high-resolution near-infrared spectra obtained with the GIANO-B spectrograph. To achieve this, we developed and implemented a robust line selection and fitting procedure, which carefully accounts for blending, telluric contamination, and line-strength variations, ensuring reliable abundance measurements. Our key findings include:

• Our results confirm the previously observed modest decreasing trend of [P/Fe] with increasing [Fe/H] for stars around solar metallicity.
• Using the high-precision ages of open clusters, we uncover a distinct "V-shape" in the phosphorus–age relation, revealing two separate chemical evolution pathways and mixtures for the Milky Way's thin disk:
  – For clusters older than ∼1 Gyr in our sample, we find that the phosphorus abundance increases with stellar age. We interpret this trend as a fossil record of diverse star formation histories and phosphorus enrichments. At a similar metallicity, older clusters formed from gas that was more rapidly enriched, a consequence of more intense star formation in the earlier epochs of the Galactic disk.
  – For clusters younger than ∼1 Gyr and the Cepheids in our sample, the [P/Fe] and A(P) trends with age are nearly flat, with a small negative slope. This points to a more quiescent, local enrichment history, where the gentle increase in phosphorus in the younger clusters may be attributable to the contribution from low-mass stars at solar metallicities.
• We find no correlation between the A(P) abundance and the current spatial locations of the open clusters and Cepheids.

Future observations of more young open-cluster giant stars or Cepheids may further clarify the [P/Fe]–age trend in the very young population of our Milky Way.

ACKNOWLEDGEMENTS

A.B., M.J., and V.D. acknowledge funding from INAF Mini-Grant 2022 (High resolution spectroscopy of open clusters). X.F. acknowledges the support of the National Natural Science Foundation of China (NSFC) No. 12203100. G.B., G.F. and A.N. thank the support from Project PRIN MUR 2022 (code 2022ARWP9C) 'Early Formation and Evolution of Bulge and HalO (EFEBHO)' (PI: M. Marconi), funded by the European Union—Next Generation EU, and from the Large grant INAF 2023 MOVIE (PI: M. Marconi).

[Figure 10. Spatial distribution of the target clusters (dots) and Cepheids (stars) in heliocentric Cartesian X, Y, Z coordinates, colour-coded by their phosphorus abundance A(P). The overlaid extinction maps for each plane are derived from Dharmawardena et al. (2024). A maximum A0 of 0.8 mag was set (represented by the darkest black), with a contour (black dashed line) at 0.2 mag for the XY plane, while a maximum A0 of 3 mag was applied for the XZ and YZ planes. The coloured curves represent the Galactic log-periodic spiral arms, defined by the parameters from Reid et al. (2019): the Carina–Sagittarius arm in purple, the Local arm in black, and the Perseus arm in green. Additionally, the spur between the Local and Sagittarius–Carina arms is indicated by the blue curve.]
Part of the research activities described in this paper were carried out with the contribution of NextGenerationEU funds within the National Recovery and Resilience Plan (PNRR), Mission 4–Education and Research, Component 2–From Research to Business (M4C2), Investment Line 3.1–Strengthening and creation of Research Infrastructures, Project IR0000034—"STILES–Strengthening the Italian Leadership in ELT and SKA", CUP C33C22000640006. This work has made use of the VALD database, operated at Uppsala University, the Institute of Astronomy RAS in Moscow, and the University of Vienna.

DATA AVAILABILITY

Full versions of Figures 4, 5 and 7 are available in the Supporting Information, and Tables 1, 3–5, A1 and A2 are available at the CDS via anonymous ftp to cdsarc.cds.unistra.fr (130.79.128.5), or via https://cdsarc.cds.unistra.fr. The data underlying this article will be shared on reasonable request to the corresponding author.

REFERENCES

Amarsi A. M., et al., 2020, A&A, 642, A62
Anders F., et al., 2017, A&A, 600, A70
Bekki K., Tsujimoto T., 2024, ApJ, 967, L1
Bennett M., Bovy J., 2019, MNRAS, 482, 1417
Bono G., Marconi M., Cassisi S., Caputo F., Gieren W., Pietrzynski G., 2005, ApJ, 621, 966
Brauner M., Masseron T., García-Hernández D. A., Pignatari M., Womack K. A., Lugaro M., Hayes C. R., 2023, A&A, 673, A123
Caffau E., Bonifacio P., Faraggiana R., Steffen M., 2011, A&A, 532, A98
Caffau E., Andrievsky S., Korotin S., Origlia L., Oliva E., Sanna N., Ludwig H. G., Bonifacio P., 2016, A&A, 585, A16
Caffau E., et al., 2019, A&A, 622, A68
Cantat-Gaudin T., et al., 2020, A&A, 640, A1
Cescutti G., Matteucci F., Caffau E., François P., 2012, A&A, 540, A33
Claudi R., et al., 2017, European Physical Journal Plus, 132, 364
Cosentino R., et al., 2012, in McLean I. S., Ramsay S. K., Takami H., eds, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series Vol. 8446, Ground-based and Airborne Instrumentation for Astronomy IV. p. 84461V, doi:10.1117/12.925738
Cretignier M., Francfort J., Dumusque X., Allart R., Pepe F., 2020, A&A, 640, A42
Dal Ponte M., et al., 2025, arXiv e-prints, p. arXiv:2507.15122
Dharmawardena T. E., Bailer-Jones C. A. L., Fouesneau M., Foreman-Mackey D., Coronica P., Colnaghi T., Müller T., Wilson A. G., 2024, MNRAS, 532, 3480
Elgueta S. S., et al., 2024, MNRAS, 532, 3694
Fernández-García C., Coggins A. J., Powner M. W., 2017, Life, 7, 23
Feuillet D. K., et al., 2018, MNRAS, 477, 2326
GRAVITY Collaboration et al., 2018, A&A, 615, L15
Grevesse N., Asplund M., Sauval A. J., 2007, Space Sci. Rev., 130, 105
Gullikson K., Dodson-Robinson S., Kraus A., 2014, AJ, 148, 53
Gustafsson B., Edvardsson B., Eriksson K., Jørgensen U. G., Nordlund Å., Plez B., 2008, A&A, 486, 951
Hao C. J., Xu Y., Wu Z. Y., Lin Z. H., Bian S. B., Li Y. J., Liu D. J., 2022, A&A, 668, A13
Hawkins K., Masseron T., Jofré P., Gilmore G., Elsworth Y., Hekker S., 2016, A&A, 594, A43
Hayden M. R., et al., 2015, ApJ, 808, 132
Hayes C. R., et al., 2022, ApJS, 262, 34
Iwamoto K., Brachwitz F., Nomoto K., Kishimoto N., Umeda H., Hix W. R., Thielemann F.-K., 1999, ApJS, 125, 439
Jian M., et al., 2024, A&A, 687, A189
Karakas A. I., 2010, MNRAS, 403, 1413
Kobayashi C., Umeda H., Nomoto K., Tominaga N., Ohkubo T., 2006, ApJ, 653, 1145
Kobayashi C., Karakas A. I., Lugaro M., 2020, ApJ, 900, 179
Kroupa P., 2001, MNRAS, 322, 231
Maas Z. G., Cescutti G., Pilachowski C. A., 2019, AJ, 158, 219
Maas Z. G., Hawkins K., Hinkel N. R., Cargile P., Janowiecki S., Nelson T., 2022, AJ, 164, 61
Mallinson J. W. E., Lind K., Amarsi A. M., Youakim K., 2024, A&A, 687, A5
Masseron T., García-Hernández D. A., Santoveña R., Manchado A., Zamora O., Manteiga M., Dafonte C., 2020a, Nature Communications, 11, 3759
Masseron T., García-Hernández D. A., Zamora O., Manchado A., 2020b, ApJ, 904, L1
Nandakumar G., Ryde N., Montelius M., Thorsbro B., Jönsson H., Mace G., 2022, A&A, 668, A88
Nomoto K., Kobayashi C., Tominaga N., 2013, ARA&A, 51, 457
Oliva E., et al., 2012a, in McLean I. S., Ramsay S. K., Takami H., eds, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series Vol. 8446, Ground-based and Airborne Instrumentation for Astronomy IV. p. 84463T, doi:10.1117/12.925274
Oliva E., Biliotti V., Baffa C., Giani E., Gonzalez M., Sozzi M., Tozzi A., Origlia L., 2012b, in Holland A. D., Beletic J. W., eds, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series Vol. 8453, High Energy, Optical, and Infrared Detectors for Astronomy V. p. 84532T, doi:10.1117/12.925293
Origlia L., et al., 2014, in Ramsay S. K., McLean I. S., Takami H., eds, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series Vol. 9147, Ground-based and Airborne Instrumentation for Astronomy V. p. 91471E, doi:10.1117/12.2054743
Piskunov N., Valenti J. A., 2017, A&A, 597, A16
Piskunov N. E., Kupka F., Ryabchikova T. A., Weiss W. W., Jeffery C. S., 1995, A&AS, 112, 525
Rainer M., et al., 2018, in Evans C. J., Simard L., Takami H., eds, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series Vol. 10702, Ground-based and Airborne Instrumentation for Astronomy VII. p. 1070266, doi:10.1117/12.2312130
Reid M. J., et al., 2019, ApJ, 885, 131
Roederer I. U., Jacobson H. R., Thanathibodee T., Frebel A., Toller E., 2014, ApJ, 797, 69
Ryabchikova T., Piskunov N., Kurucz R. L., Stempels H. C., Heiter U., Pakhomov Y., Barklem P. S., 2015, Phys. Scr., 90, 054005
Scott P., et al., 2015, A&A, 573, A25
Tolstoy E., 2011, Science, 333, 176
Tozzi A., et al., 2016, in Evans C. J., Simard L., Takami H., eds, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series Vol. 9908, Ground-based and Airborne Instrumentation for Astronomy VI. p. 99086C, doi:10.1117/12.2231898
Valenti J. A., Piskunov N., 1996, A&AS, 118, 595
Vink J. S., de Koter A., Lamers H. J. G. L. M., 2001, A&A, 369, 574
Viscasillas Vázquez C., Magrini L., Spina L., Tautvaišienė G., Van der Swaelmen M., Randich S., Sacco G. G., 2023, A&A, 679, A122
Wang E. X., et al., 2024, MNRAS, 528, 5394
Wehrhahn A., Piskunov N., Ryabchikova T., 2023, A&A, 671, A171
Weinberg D. H., et al., 2019, ApJ, 874, 102
Xu X., Cisewski-Kehe J., Davis A. B., Fischer D. A., Brewer J. M., 2019, AJ, 157, 243
de Jong R. S., et al., 2019, The Messenger, 175, 3

APPENDIX A: CARBON ABUNDANCES FROM BLENDING MOLECULAR FEATURES

This section presents Tables A1 and A2 of the C abundances used to fit the molecular features around the phosphorus lines, for cluster giants and Cepheids respectively.
APPENDIX B: THE P LINES WITH BIASED ABUNDANCE

As described in Section 3.3 and summarised in Table 2, four lines selected for cluster giants and four lines for Cepheids were excluded from the analysis due to systematic biases in their derived phosphorus abundances. Among these, two lines for the giants and one for the Cepheids were used in more than five stars. Here, we discuss these lines individually.

The P i line at 10813.141 Å is intrinsically very weak, with a typical line depth of only ∼0.01 in most stars. Its abundance sensitivity is highly dependent on accurate continuum placement. However, the continuum in the fitting region is often depressed below unity, and since PySME applies only a constant scaling over a wide region during abundance fitting, local misplacement of the continuum across a narrow wavelength range cannot be corrected. As a result, the phosphorus abundances derived from this line tend to be overestimated. The situation for the P i line at 17423.670 Å is similar, but with underestimated abundances, and it is therefore also removed.

The line at 17112.447 Å consistently yields phosphorus abundances that are approximately 0.3 dex higher than the average derived from other lines. Inspection of the line fitting reveals the presence of an unidentified absorption feature on the red side of the P line. This feature is absent from the synthetic spectra (otherwise it would have been accounted for), but it appears consistently in all observed spectra at a similar relative position, suggesting that it originates from the stellar spectrum. Due to the severity of this blending, the fitting routine attempts to reproduce the composite feature by increasing the phosphorus abundance, leading to a systematic overestimation.

Six P i lines were excluded from the Cepheid sample. Their behaviour is similar to that of the 10813.141 Å line, for which accurate continuum placement is challenging in this wavelength region, compromising the reliability of the abundance fitting. As a result, these lines were excluded from the analysis.

This paper has been typeset from a TEX/LATEX file prepared by the author.

Table A1. Carbon abundance used to fit molecular features around the phosphorus lines for cluster giants. Only the first 10 stars and first 6 lines are listed, and the full table is available at the CDS.

star name            A(C)9750.748  A(C)10511.588  A(C)10529.524  A(C)10581.577  A(C)10596.903  A(C)11183.24
Alessi 1-2           8.30 ± 0.05   8.14 ± 0.38    8.56 ± 0.10    8.33 ± 0.24    8.06 ± 0.22    8.31 ± 0.08
Alessi 1-3           8.27 ± 0.06   8.25 ± 0.28    8.64 ± 0.09    8.40 ± 0.22    8.42 ± 0.08    8.25 ± 0.08
Alessi 1-5           8.38 ± 0.05   8.04 ± 0.37    8.61 ± 0.08    -              8.40 ± 0.06    8.30 ± 0.07
Alessi 1-6           8.26 ± 0.07   7.68 ± 0.78    8.71 ± 0.08    -              7.63 ± 0.16    8.20 ± 0.09
Alessi Teutsch 11-1  8.25 ± 0.05   8.35 ± 0.11    8.52 ± 0.05    -              8.32 ± 0.03    -
Basel 11b-1          8.35 ± 0.07   8.47 ± 0.14    8.79 ± 0.07    -              8.15 ± 0.25    8.33 ± 0.09
Basel 11b-2          8.42 ± 0.06   8.37 ± 0.19    8.79 ± 0.07    8.67 ± 0.12    8.51 ± 0.08    8.38 ± 0.08
Basel 11b-3          -             -              -              8.54 ± 0.09    -              -
COIN-Gaia 30-1       7.87 ± 0.17   8.12 ± 0.29    8.52 ± 0.09    -              8.38 ± 0.07    8.45 ± 0.05
Collinder 350-1      -             -              -              -              -              -
Collinder 350-2      -             -              -              8.55 ± 0.14    -              -
...                  ...           ...            ...            ...            ...            ...

Table A2. Carbon abundance used to fit the molecular features around the phosphorus lines for Cepheids. Only the first 10 stars and first 6 lines are listed, and the full table is available at the CDS.
star name   A(C)9493.571  A(C)9609.036  A(C)10932.724  A(C)10967.373  A(C)11186.753  A(C)15711.522
SV Vul      8.08 ± 0.18   8.64 ± 0.04   8.52 ± 0.11    8.63 ± 0.02    8.41 ± 0.10    10.27 ± 0.09
DL Cas      -             -             -              -              -              -
CO Aur      6.35 ± 2.36   -             -              8.35 ± 3.05    -              -
RX Aur      -             -             -              -              -              -
V 351 Cep   -             -             -              8.33 ± 2.73    10.33 ± 0.31   -
CD Cyg      -             -             8.25 ± 0.02    -              -              -
V 1334 Cyg  6.09 ± 5.05   -             -              -              -              -
VZ Cyg      8.07 ± 20.49  10.06 ± 0.61  -              8.07 ± 4.22    -              -
X Cyg       -             -             -              -              -              -
W Gem       -             -             -              -              -              -
RR Lac      -             -             -              -              -              -
...         ...           ...           ...            ...            ...            ...
MNRAS 000, 1-15 (2025) Preprint 17 October 2025 Compiled using MNRAS LATEX style file v3.3

Stellar population astrophysics (SPA) with the TNG. The Phosphorus abundance on the young side of the Milky Way★

Mingjie Jian (简明杰),1† Xiaoting Fu (符晓婷),2,3 Valentina D'Orazi,4,5 Angela Bragaglia,3 S. Bijavara Seshashayana,6,7 He Zhao (赵赫),8 Ziyi Guo (郭子怡),9,10 Karin Lind,1 Noriyuki Matsunaga (松永典之),11 Antonino Nunnari,12,4 Giuseppe Bono,4 Nicoletta Sanna,13 Donatella Romano,3 and Marina Dal Ponte5
1 21, 114 21 Stockholm, Sweden
2 Purple Mountain Observatory, Chinese Academy of Sciences, Nanjing 210023, China
3 INAF - Osservatorio di Astrofisica e Scienza dello Spazio di Bologna, via P. Gobetti 93/3, 40129 Bologna, Italy
4 1, 00133, Rome, Italy
5 INAF - Osservatorio Astronomico di Padova, Vicolo dell' Osservatorio 5, 35122 Padova, Italy
6 Materials Science and Applied Mathematics, Malmö University, SE-205 06 Malmö, Sweden
7 Nordic Optical Telescope, Rambla José Ana Fernández Pérez 7, ES-38711 Breña Baja, Spain
8 Departamento de Ciencias Fisicas, Universidad Andres Bello, Republica 220, 8320000 Santiago, Chile
9 210093, China
10 Key Laboratory of Modern Astronomy and Astrophysics (Nanjing University), Ministry of Education, Nanjing 210093, China
11 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
12 INAF - Astronomic Observatory of Rome, Via Frascati 33, 00078 Monte Porzio Catone, Italy
13 INAF - Osservatorio Astrofisico di Arcetri, Largo E. Fermi 5, 50125 Firenze, Italy

★ Based on observations made with the Italian Telescopio Nazionale Galileo (TNG) operated on the island of La Palma by the Fundación Galileo Galilei of the INAF (Istituto Nazionale di Astrofisica) at the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias.
† E-mail:

Accepted XXX. Received YYY; in original form ZZZ

ABSTRACT
We present phosphorus abundance measurements for a total of 102 giant stars, including 82 stars in 24 open clusters and 20 Cepheids, based on high-resolution near-infrared spectra obtained with GIANO-B. The evolution of the phosphorus abundance, despite its astrophysical and biological significance, remains poorly understood due to a scarcity of observational data. By combining precise stellar parameters from the optical spectra with a robust line selection and measurement method, we measure phosphorus abundances using the available P i lines. Our analysis confirms a declining trend in [P/Fe] with increasing [Fe/H] around solar metallicity for clusters and Cepheids, consistent with previous studies. We also report a [P/Fe]-age relation among open clusters older than 1 Gyr, indicating a time-dependent enrichment pattern. Such a pattern can be explained by the different star formation histories of their parental gas, with more efficient star formation in the gas of the older clusters (and thus higher phosphorus abundances). [P/Fe] shows a flat trend among Cepheids and clusters younger than 1 Gyr (along with three Cepheids inside open clusters), possibly hinting at the phosphorus contribution from previous-generation low-mass stars. Such a trend suggests that the young clusters share a nearly common chemical history, with a mild increase in phosphorus production by low-mass stars.

Key words: stars: abundances - stars: late-type - open clusters and associations: general - stars: variables: Cepheids

1 INTRODUCTION

Spectroscopic analysis of stars allows us to determine the abundances of various elements in their atmospheres, effectively revealing their chemical compositions. These elemental signatures reflect the composition of the molecular cloud from which the stars were born, capturing the initial conditions of star formation.
Consequently, measuring the chemical compositions of stars across different elements provides crucial constraints on the evolutionary history of the Milky Way and constitutes one of the central goals of Galactic Archaeology (or Galactic Paleontology; see e.g. Tolstoy 2011). For instance, observations of α-elements (e.g., O, Mg, Si, S, Ca and Ti) in the Milky Way help distinguish different components of the Galactic disk (e.g., the thin and thick disk; Hayden et al. 2015; Anders et al. 2017). When combined with our understanding of the nucleosynthetic origins of these elements (e.g., Kobayashi et al. 2020), such measurements offer key insights into the chemical evolution of our host galaxy, the solar system, or even life (Fernández-García et al. 2017).

However, the chemical evolution of some elements remains elusive due to a lack of observational constraints. Phosphorus is one such element. The observable spectral lines of phosphorus are mostly located in the ultraviolet and infrared wavelength regions, making it impossible to measure its abundance using optical spectra. Given that ultraviolet observations are heavily affected by interstellar extinction, the infrared becomes the most suitable wavelength range for studying the phosphorus abundance and thus provides the most useful information for Galactic archaeology. In contrast, many spectral lines of other elements are found in the optical, which makes it easier to determine stellar parameters using optical spectra. These parameters can then be used to infer the abundances of elements whose lines are only available in the infrared. Simultaneously obtained optical and infrared spectra are particularly well-suited for this approach (see, e.g., Jian et al. 2024).

Observational studies of phosphorus began relatively recently, with the pioneering work of Caffau et al. (2011), who measured the P abundances of 20 F-type dwarf stars. They found that the [P/Fe] ratio decreases from approximately +0.4 to 0 as metallicity increases from [Fe/H] = -1 to solar. This trend contrasts with that of other light odd-Z elements, such as Na and Al, and also deviates from the theoretical predictions of Kobayashi et al. (2006). Subsequently, Roederer et al. (2014) extended the observational data towards the metal-poor regime. Using the Space Telescope Imaging Spectrograph on the Hubble Space Telescope, they measured phosphorus abundances in 14 dwarf stars with metallicities ranging from [Fe/H] = -4 to -0.1 dex. [P/Fe] remains approximately solar in the metallicity range [Fe/H] ∼ -4 to -2 dex, then increases to [P/Fe] ∼ 0.5 at [Fe/H] ≈ -1.1, before decreasing back to near-solar levels at higher metallicities.

Most subsequent studies have confirmed this general trend. Hawkins et al. (2016) performed the first large-scale measurement of phosphorus abundances using data from the APOGEE survey, significantly expanding the sample around solar metallicity. Later, Hayes et al. (2022) measured phosphorus abundances for more than 120 000 stars using APOGEE DR17, though only upper limits could be estimated for the majority of the stars (∼87 000), which largely affects the completeness as a function of metallicity.
Over 40 additional measurements were later contributed by Caffau et al. (2016) and Caffau et al. (2019), using the high-resolution spectrograph GIANO. Nandakumar et al. (2022) used the IGRINS spectrograph to reproduce the [P/Fe] trend within the metallicity range of [Fe/H] = -1.2 to +0.3 dex. In addition, Maas et al. (2022) found that thick-disc stars with metallicities ranging from [Fe/H] = -1 to -0.4 dex tend to exhibit higher phosphorus abundances. Interestingly, Weinberg et al. (2019) reported that the [P/Mg] ratio varies among stars with different magnesium abundances. Since Mg is almost a "pure" α-element that is mainly produced by core-collapse supernovae (CCSN), the results of Weinberg et al. (2019) contradict the theoretical prediction that phosphorus is predominantly produced by CCSN.

A number of giant stars with peculiar phosphorus abundances have also been identified. Masseron et al. (2020a) discovered 15 stars exhibiting extremely high phosphorus enhancements ([P/Fe] ≳ 1.5) with unusual overabundances of O, Mg, Si, Al, and Ce, and further examined the detailed abundance patterns of three of these stars in a follow-up study (Masseron et al. 2020b). This sample was later expanded to 78 stars by Brauner et al. (2023). The unusual abundance patterns observed in these objects continue to pose a challenge to current Galactic chemical evolution (GCE) models.

GCE models suggest that phosphorus is primarily produced in massive stars. Figure 1 presents the stellar total yield of phosphorus (the amount of the element newly produced plus that already present in the star at birth, in contrast with the net yield, which counts only the material newly produced by stellar nucleosynthesis) after being weighted by the Kroupa stellar initial mass function (IMF; Kroupa 2001). We consider three types of yields here: low-mass stars (from Karakas 2010), massive stars (from Nomoto et al. 2013) and Type Ia supernovae (SN Ia; from Iwamoto et al. 1999).

Figure 1. Stellar total yield of phosphorus at three different initial metallicities (iniZ). The yield has been weighted by the Kroupa IMF, and the stellar population has been normalized to 1 M⊙.

Massive stars contribute over 85% of the phosphorus in a single stellar population at all three metallicities we consider, which are representative initial metallicities for Population I and II progenitor stars in Galactic chemical evolution models. Also, the yield for metal-rich stars is higher than that for metal-poor stars over most of the mass range, since in general stellar winds strengthen as metallicity increases (Vink et al. 2001). Cescutti et al. (2012) compared their phosphorus measurements of stars with [Fe/H] > -1 with GCE models, and concluded that the major producer of P should be CCSN, though their adopted yields need to be multiplied by 3 to match the [P/Fe] values, and their trend does not match the metal-poor observations from Roederer et al. (2014). Recently, Bekki & Tsujimoto (2024) suggested that the P production of oxygen-neon novae (ONe novae) needs to be included to explain the solar [P/Fe] values in the range of [Fe/H] < -2.5 dex as well as the increase between -2.5 < [Fe/H] < -1.
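To make the IMF weighting behind Figure 1 concrete, the minimal sketch below folds a total-yield curve y_P(m) with the Kroupa (2001) IMF for a population normalised to 1 M⊙. The yield grid is a hypothetical placeholder of our own, not the Karakas (2010), Nomoto et al. (2013), or Iwamoto et al. (1999) tables used for the figure.

import numpy as np

def trapz(y, x):
    # simple trapezoidal integral, to stay independent of numpy version
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def kroupa_xi(m):
    """Kroupa (2001) IMF (unnormalised): slope -1.3 below 0.5 Msun and
    -2.3 above, the two segments joined continuously at 0.5 Msun."""
    m = np.asarray(m, dtype=float)
    return np.where(m < 0.5, m**-1.3, 0.5 * m**-2.3)

m = np.logspace(np.log10(0.08), 2, 4000)            # stellar mass, Msun
xi = kroupa_xi(m) / trapz(m * kroupa_xi(m), m)      # SSP normalised to 1 Msun

# Hypothetical total P yields (Msun); stars below ~1 Msun have not evolved
# yet and return nothing.
grid_m  = np.array([1.0, 3.0, 8.0, 13.0, 20.0, 40.0, 100.0])
grid_yp = np.array([2e-7, 8e-7, 3e-6, 1e-5, 4e-5, 1.5e-4, 6e-4])
y_p = np.where(m < grid_m[0], 0.0, np.interp(m, grid_m, grid_yp))

weighted = y_p * xi                                  # quantity plotted in Fig. 1
massive = m >= 8.0                                   # CCSN progenitors
frac = trapz(weighted[massive], m[massive]) / trapz(weighted, m)
print(f"massive-star share of P: {frac:.0%}")        # cf. the >85% quoted above

With any yield set that rises steeply with mass, the integral is dominated by stars above ~8 M⊙ even though the IMF strongly favours low masses, which is the point made in the text.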
To unfold the evolutionary history of phosphorus in the Milky Way, stellar age is a key parameter. Despite recent observational progress, the age-phosphorus relation remains poorly understood. Determining the ages of individual field stars in the Milky Way with high precision is challenging. Maas et al. (2019) attempted to explore this relation by constructing a P-age diagram, but the stellar ages in their sample suffer from large uncertainties. They emphasised that more accurate age determinations are essential to unveil the evolutionary behaviour of phosphorus in the thin disc of the Milky Way. Cluster member stars, on the other hand, provide more reliable age estimates by fitting isochrones to the cluster colour-magnitude diagram (CMD). This offers a promising avenue for constructing a more precise P-age relation.

In this work, we present phosphorus abundance measurements for members of 24 open clusters as well as 20 Cepheids in the field, located in different directions across the Galaxy and at Galactocentric distances (Rgc) between 7 and 10 kpc, aiming to place new constraints on the chemical evolution of phosphorus in the Milky Way. This paper is organised as follows. Section 2 describes the data and the reduction process. Section 3 outlines the method used to determine phosphorus abundances. The results are presented in Section 4, followed by a discussion in Section 5 and conclusions in Section 6.

2 DATA AND REDUCTION

The data used in this study consist of near-infrared (NIR) spectra covering the wavelength range 0.9-2.45 μm, obtained with a spectral resolution of R = 50,000 using the GIANO-B spectrograph (Oliva et al. 2012a,b; Origlia et al. 2014). These NIR spectra were acquired simultaneously with optical spectra spanning 0.383-0.690 μm at a resolution of R = 115,000, obtained with the HARPS-N spectrograph (Cosentino et al. 2012) in GIARPS mode (Tozzi et al. 2016; Claudi et al. 2017) at the Telescopio Nazionale Galileo (TNG), a 3.58-m optical/infrared telescope located at the Roque de los Muchachos Observatory in La Palma, Canary Islands. This configuration effectively eliminates temporal variations between the NIR and optical spectral lines. The dataset is part of the Large Programme Stellar Population Astrophysics (SPA), which aims to derive detailed, age-resolved chemical abundances across the Milky Way disc (programme ID A37TAC_31, PI: L. Origlia). The programme began in 2018 and was awarded observing time with both the HARPS-N and GIANO-B high-resolution echelle spectrographs at the TNG.

Our dataset consists of 82 giant stars located in open clusters, supplemented by 20 field Cepheids and two standard stars: the Sun and Arcturus. Table 1 provides the observational log of the target stars, and Figure 2 displays the CMDs of our sample. We adopted the ages of the clusters from Cantat-Gaudin et al. (2020) in this study. For the Cepheids, we derived the ages from their periods using the Z = 0.02 period-age relation presented in Bono et al. (2005), with their pulsation mode (fundamental or first overtone) distinguished. Three of the Cepheids, DL Cas, SV Vul and X Vul, are classified as open cluster members (e.g., in Hao et al. 2022). SV Vul and X Vul have similar ages from their pulsation periods and from their host clusters (with differences < 12 Myr). DL Cas, however, has an age from the period-age relation of 51 ± 9 Myr, while its host cluster, NGC 129, is ∼130 Myr old. Considering that only three of our Cepheids have ages from their host clusters, we adopt the ages from the period-age relation for all Cepheids, to maximise the number of stars with age measurements.
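For reference, a minimal sketch of this period-to-age conversion is given below. The fundamental-mode coefficients are our reading of the Z = 0.02 relation of Bono et al. (2005) and reproduce the ages quoted in Table 5 (e.g. DL Cas, P ≈ 8 d, gives ∼51 Myr); they, and the separate first-overtone relation, should be checked against that paper before quantitative use.

import numpy as np

# log t[yr] = a + b * log P[d] for fundamental-mode classical Cepheids
# at Z = 0.02 (coefficients to be verified against Bono et al. 2005).
A_FU, B_FU = 8.31, -0.67

def cepheid_age(period_days, sigma_period=0.0):
    """Age in Gyr, with first-order error propagation from the period."""
    age_gyr = 10.0**(A_FU + B_FU * np.log10(period_days)) / 1e9
    sigma_gyr = abs(B_FU) * age_gyr * sigma_period / period_days
    return age_gyr, sigma_gyr

print(cepheid_age(8.0))    # ~0.051 Gyr, close to DL Cas in Table 5
print(cepheid_age(45.0))   # ~0.016 Gyr, close to SV Vul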
Although the SPA project also includes many dwarf stars in these clusters, measuring phosphorus abundances for them is particularly challenging. This is because, in dwarfs, phosphorus lines are significantly weaker than in other types of stars (see Figure 3), while their typically higher v sin i values broaden the lines. This combination greatly reduces the detectability of the P lines in dwarfs. In addition, dwarfs are intrinsically fainter, and the P lines are often smeared out by noise in their spectra. Thus we limit our target stars to giants in this study.

2.1 Telluric correction and normalization

The GIANO-B spectra were reduced using the GOFIO data reduction pipeline (Rainer et al. 2018). For each spectral order, the pipeline performs bad pixel and cosmic ray removal, sky and dark subtraction, and corrections for flat-field and blaze effects, followed by the extraction of one-dimensional spectra. We further processed the extracted spectra using the pre-analysis pipeline giano_ct1, which carries out continuum normalisation and telluric absorption correction.

Continuum normalisation is performed using the "alpha-roll" method described by Xu et al. (2019) and Cretignier et al. (2020). In this method, a circle with radius α is rolled along the top of the spectrum, and the contact points between the circle and the flux curve are selected as continuum points. The value of α is adaptively determined based on the difference between the smoothed and observed spectra, increasing in regions where absorption lines are present. Telluric correction is then applied using the TelFit package (Gullikson et al. 2014), by fitting the model telluric spectrum to the observed spectrum, and the individual spectral orders are merged into a single, continuum-normalised spectrum. The best-fit telluric model is also saved for later analysis. We note that certain wavelength regions (13530-14350 Å and 18020-19400 Å) are heavily affected by telluric absorption and are therefore excluded during the correction process.

1 https://github.com/MingjieJian/giano_ct
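As an illustration of the rolling-circle idea behind the alpha-roll normalisation, here is a toy, fixed-radius sketch working in pixel units; the actual giano_ct implementation follows Xu et al. (2019) and Cretignier et al. (2020), adapts α locally, and works in physical wavelength-flux units.

import numpy as np

def alpha_roll_continuum(flux, radius):
    """Boolean mask of continuum points: pixels touched by a circle of
    integer radius (in pixels) rolled along the top of a flux array.
    A non-adaptive stand-in for the alpha-roll method."""
    n = len(flux)
    offsets = np.arange(-radius, radius + 1)
    chord = np.sqrt(radius**2 - offsets**2)          # circle lower profile

    # Height of the circle centre when it rests on the spectrum at x = i.
    centre = np.empty(n)
    for i in range(n):
        j = np.clip(i + offsets, 0, n - 1)
        centre[i] = np.max(flux[j] + chord)

    # Lowest circle surface above each pixel (the rolling upper envelope).
    envelope = np.full(n, np.inf)
    for i in range(n):
        j = np.clip(i + offsets, 0, n - 1)
        np.minimum.at(envelope, j, centre[i] - chord)

    return np.isclose(flux, envelope, atol=1e-10)    # contact points

The continuum itself is then obtained by interpolating the flux through the masked contact points, e.g. np.interp over the pixels where the mask is True.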
3 MEASUREMENT OF P ABUNDANCE

3.1 Stellar parameters

Stellar parameters (i.e., effective temperature Teff, surface gravity log g, iron abundance [Fe/H] as a proxy of metallicity, and microturbulence velocity Vmic) for our target stars were independently determined by Dal Ponte et al. (2025) based on the HARPS-N spectra, and by Bijavara-Seshashayana et al. (in prep.) using the GIANO-B spectra. The two sets of parameters are generally consistent, although some discrepancies in log g are present for individual stars. In this work, we adopt the stellar parameters from Dal Ponte et al. (2025).

The spectral synthesis in the following analysis is performed using the PySME code (Wehrhahn et al. 2023). Spectral synthesis is carried out by the SME library (Piskunov & Valenti 2017; Valenti & Piskunov 1996), based on a one-dimensional stellar atmosphere model, under the assumption of local thermodynamic equilibrium (LTE). We note that although non-LTE corrections are available for 16 elements (Amarsi et al. 2020; Mallinson et al. 2024), there is currently no grid of departure coefficients for phosphorus. PySME is capable of synthesising model spectra and determining stellar parameters and abundances by χ2 minimisation between the observed and synthetic spectra within user-defined masks. The current version of PySME takes all input line lists into account during synthesis, which can be time- and memory-consuming when dealing with spectra that span a wide wavelength range. Our recent developments in PySME include a more efficient synthesis via the dynamic removal of weak spectral lines, measurement of stellar parameters using pre-defined masks, and abundance measurements for multiple elements, all performed in an automated manner. These improvements will serve as part of the pipeline for future large-scale spectroscopic surveys such as 4MOST (de Jong et al. 2019). The first two developments will be discussed and tested in detail in Jian et al. (in prep.), while we focus on the abundance measurement aspect in this article.

The line list, covering the wavelength range from 9360 to 24,240 Å, is extracted from the VALD3 database version 3750M (Piskunov et al. 1995; Ryabchikova et al. 2015). We used the "Extract All" mode with hyperfine structure included. All molecular lines across the full GIANO wavelength range are included, except for TiO lines. The enormous number of TiO lines makes it nearly impossible both to download them and to include them in the synthesis. However, for target stars with Teff ⪅ 4500 K (14 stars in our sample), the blending of TiO lines may not be negligible. Thus, we included the TiO lines within ±20 Å of the P lines used for abundance measurements for our target giants (i.e., those in Table 2 with Ngiant not marked as '-'). We adopt MARCS model atmospheres (Gustafsson et al. 2008) as input, and solar abundances are set according to Grevesse et al. (2007), except for phosphorus (see Section 4.1).

The broadening velocity, including macroturbulence and projected rotational velocity, Vbroad, is not provided in Dal Ponte et al. (2025). We therefore measure Vbroad using the GIANO-B spectra.

Table 1. Observation log of our target stars, with the columns of (left to right) star name, Gaia DR3 ID, observation date in reduced heliocentric Julian dates (RHJD = HJD - 2400000), exposure time, and the mean signal-to-noise ratio per pixel. Only the first six stars from the cluster giant and Cepheid samples and the standard stars are listed, and the full table is available at the CDS.

Star name           Gaia DR3 ID          RHJD       texp (s)  S/Nmean
Alessi1-2           402506369136008832   58471.820  300 × 12  212
Alessi1-3           402505991180022528   58471.872  300 × 12  225
Alessi1-5           402867593065772288   58470.864  300 × 8   233
Alessi1-6           402880684126058880   58471.926  300 × 10  199
Alessi Teutsch11-1  2184332753719499904  58710.863  300 × 2   377
Basel11b-1          3424056131485038592  58513.903  300 × 10  196
...                 ...                  ...        ...       ...
DL Cas (NGC 129)    428620663657823232   58354.228  300 × 4   253
SV Vul (UBC 130)    2027951173435143680  58425.869  300 × 2   545
X Vul (UBC 129)     2027263738133623168  58349.853  300 × 4   375
CO Aur              3451987987438434944  58426.248  300 × 4   398
RX Aur              200708636406382720   58429.227  300 × 4   425
RW Cam              473043922712140928   58354.187  300 × 4   388
...                 ...                  ...        ...       ...
Sun (Ganymede)      -                    58325.880  30 × 2    130
Arcturus            -                    58301.894  60 × 2    612

Table 2. List of P i lines used in this study, including their air wavelengths (λair), lower excitation potentials (EP), and oscillator strengths (log gf). Columns Ngiant and NCepheid indicate the number of stars using the corresponding line for the P measurement (excluding cases where the line is blended with telluric features or only provides an upper limit), and the number in parentheses (if present) indicates the number of stars being removed (see the description in Section 3.3). The final column ('Sun') marks whether the line was also used in the solar abundance determination.
λair (Å)   EP (eV)  log gf  Ngiant  NCepheid  Sun
9609.036   6.9356   -1.05   -       0 (1)
9676.222   8.0785    0.00   -       4
9734.755   6.9543   -0.36   -       3
9750.748   6.9543   -0.18   12      1         Y
9790.194   7.1758   -0.69   -       2
9796.828   6.9852    0.27   -       1         Y
9903.671   7.1758   -0.30   -       7         Y
9976.681   6.9852   -0.29   -       1 (5)     Y
10084.277  7.2127    0.14   -       4
10204.716  7.2127   -0.52   -       4
10511.588  6.9356   -0.13   11      1
10529.524  6.9543    0.24   22      1         Y
10581.577  6.9852    0.45   34      1         Y
10596.903  6.9356   -0.21   29 (1)  4         Y
10681.406  6.9543   -0.19   6       5         Y
10769.511  6.9543   -1.07   -       2
10813.141  6.9852   -0.41   0 (12)  5
10932.724  8.0785    0.31   -       1
10967.373  8.0783    0.12   -       7
11183.240  7.2127    0.40   8       -
16254.749  8.2255    0.00   -       7
16482.932  7.2127   -0.29   42      5         Y
16590.045  8.2276    0.50   -       0 (6)
16738.681  8.2864    0.00   -       0 (4)
17112.447  8.2504    0.50   0 (51)  0 (7)
17286.920  8.2864    0.00   -       3
17423.670  8.2864    0.00   0 (2)   4

Figure 2. CMD of the target clusters. The grey dots represent stars with membership probabilities greater than 0.5 from Cantat-Gaudin et al. (2020), while the blue circles highlight our selected cluster giant stars.

For each star, we first fix the other stellar parameters and select isolated Fe I lines based on the synthetic spectrum (the details of the line selection method are described in Section 3.2). The Vbroad value is then fitted for each selected Fe I line, and the weighted mean and standard deviation across all lines are adopted as the final broadening velocity and its associated uncertainty.

The adopted stellar parameters are listed in Table 3 and plotted in Figure 3. The theoretical equivalent width (EW) of the P i line at 10529.524 Å, one of the strongest phosphorus lines in the near-infrared, is also shown as a contour in the diagram. Most infrared P lines exhibit EW contour shapes similar to this line, with maximum strengths occurring at approximately Teff ≈ 6000 K and low log g, and decreasing as Teff moves away from this peak or as log g increases. As a result, most of our Cepheids exhibit relatively strong P lines (see also Elgueta et al. 2024). In contrast, our cluster giants lie far from the EW peak region, and the EWs of the 10529.524 Å line are typically around 15 mÅ. Therefore, high signal-to-noise ratio spectra are required to derive reliable phosphorus abundances for these stars.

3.2 Line selection

Given the relatively large sample size of stars in this study, as well as their wide range of stellar parameters, it is essential to select suitable P i lines using the synthetic spectra. The VALD database contains 229 P i lines within the relevant wavelength range. However, whether a given line can be used for phosphorus abundance determination depends on several factors: whether the line appears in the observed spectrum, the degree of blending with nearby lines, and whether the line can be reliably measured given the quality of the observed spectra. Therefore, a careful line selection process is required prior to performing the abundance analysis.

The goal of the line selection process is to identify all P lines that are present in the spectra and exhibit minimal blending, based on synthetic spectra. All P i lines are first extracted from the line list. For each star, we define a total broadening velocity, Vbroad, which accounts for the broadening of the microturbulence velocity (Vmic), the macroturbulence velocity (Vmac), the projected rotation (v sin i), and the instrumental resolution (R), as:

V_broad = \sqrt{V_{mic}^2 + V_{mac}^2 + (v \sin i)^2 + (c/R)^2},   (1)

with c the speed of light.
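A one-line implementation of eq. (1), for concreteness (the function and the example values are ours, not the SPA pipeline's):

import math

C_KMS = 299792.458  # speed of light, km/s

def v_broad(v_mic, v_mac, vsini, R):
    """Total broadening velocity of eq. (1); velocities in km/s,
    R is the resolving power, so c/R is the instrumental term."""
    return math.sqrt(v_mic**2 + v_mac**2 + vsini**2 + (C_KMS / R)**2)

# For GIANO-B (R = 50,000) the instrumental term is c/R ~ 6 km/s, which
# dominates the total for a typical slowly rotating cluster giant:
print(v_broad(1.3, 5.0, 3.0, 50_000))   # ~8.5 km/s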
A wavelength region with a width of 4Vbroad/c is assigned to each target line, defining a feature. If two features overlap in wavelength, they are merged and treated as a single feature. Synthetic spectra based on the stellar parameters mentioned in Section 3.1 are then generated for each feature, and three variables are used to evaluate the suitability of each P line for abundance analysis:

• Feature-dominance: the ratio of the EW of the phosphorus line(s) to the total EW of all lines within the feature;
• Feature-depth: the maximum depth of the feature in the normalised flux;
• Telluric-depth: the maximum depth of telluric absorption in the corresponding wavelength region.

We require that the selected features satisfy the following thresholds: feature-dominance greater than 0.4, feature-depth greater than 0.006, and telluric-depth less than 0.1 (a minimal sketch of this bookkeeping is given after Table 3 below). The adopted feature-dominance threshold does not require the feature to be completely dominated by phosphorus lines. We found that the primary contaminants in P features are CN (at wavelengths ⪅12000 Å) and CO (⪆12000 Å) molecular lines. To minimize the blending of these molecular lines, we altered the C abundance to fit the spectra within ±2 Å around each P line and fixed it during the fit for the P abundance. These C abundances are presented in Appendix A and the tables therein. The feature-depth threshold is determined based on the default synthesis precision of PySME. The telluric-depth threshold of 0.1 is relatively strict. Given that P lines are intrinsically weak, the profiles of P lines blended with deep telluric absorption are often significantly distorted.

A total of 11 and 26 P i lines passed the automatic selection criteria for cluster giants and Cepheids, respectively, but no P ii line passed our selection. Table 2 lists the line parameters and indicates which group of stars each line is used for. The phosphorus lines that meet the threshold requirements for each target are subsequently included to derive its phosphorus abundance. Minor refinements to the line selection are made following individual abundance measurements for each line.

Table 3. Stellar parameters of the target stars. Only the first six stars from the cluster giant and Cepheid samples and the standard stars are listed, and the full table is available at the CDS.

Star name            Teff       log g        [Fe/H]          Vmic           Vbroad
Alessi1-2            4986 ± 30  2.83 ± 0.07  -0.001 ± 0.010  1.297 ± 0.013  6.7 ± 0.9
Alessi1-3            4996 ± 30  2.84 ± 0.07  -0.007 ± 0.010  1.276 ± 0.014  6.6 ± 0.9
Alessi1-5            4939 ± 31  2.73 ± 0.08  -0.028 ± 0.010  1.353 ± 0.013  6.7 ± 0.9
Alessi1-6            4985 ± 32  2.87 ± 0.08  -0.013 ± 0.010  1.278 ± 0.014  6.4 ± 0.8
Alessi Teutsch 11-1  4517 ± 35  2.15 ± 0.11  -0.040 ± 0.013  1.756 ± 0.014  7.2 ± 1.1
Basel 11b-1          4950 ± 35  2.51 ± 0.09  0.007 ± 0.010   1.808 ± 0.010  8.4 ± 1.1
...                  ...        ...          ...             ...            ...
DL Cas               5622 ± 18  1.73 ± 0.04  0.050 ± 0.010   3.920 ± 0.040  25.92 ± 0.17
SV Vul               5676 ± 13  1.00 ± 0.03  0.190 ± 0.010   4.240 ± 0.020  15.19 ± 0.06
X Vul                6294 ± 12  1.95 ± 0.02  0.170 ± 0.010   4.410 ± 0.020  18.04 ± 0.06
CO Aur               7076 ± 9   2.77 ± 0.01  0.180 ± 0.010   2.670 ± 0.020  13.52 ± 0.05
RX Aur               6466 ± 10  1.52 ± 0.01  -0.040 ± 0.010  4.040 ± 0.020  26.62 ± 0.09
RW Cam               5672 ± 16  1.63 ± 0.04  0.170 ± 0.010   3.180 ± 0.020  17.51 ± 0.09
...                  ...        ...          ...             ...            ...
Sun                  5772       4.44         0.00            1.0            2.8
Arcturus             4286       1.6          -0.52           1.74           4.6

Figure 3. The Kiel diagram of our sample stars. The underlying contours show the EW (mÅ) of the P i line at 10529.524 Å at solar metallicity.
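The sketch promised above: overlapping line windows are merged into features, and a feature is kept only if it passes the three thresholds. Names and structure are ours, not the SPA pipeline's.

def merge_features(windows):
    """Merge overlapping (lo, hi) wavelength windows into single features."""
    merged = []
    for lo, hi in sorted(windows):
        if merged and lo <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], hi))
        else:
            merged.append((lo, hi))
    return merged

def feature_is_usable(ew_phosphorus, ew_total, feature_depth, telluric_depth):
    """Selection thresholds of Sect. 3.2."""
    dominance = ew_phosphorus / ew_total
    return dominance > 0.4 and feature_depth > 0.006 and telluric_depth < 0.1

# e.g. two P lines whose windows overlap become one feature:
print(merge_features([(10580.9, 10582.2), (10581.8, 10583.0), (16482.0, 16484.0)]))
# -> [(10580.9, 10583.0), (16482.0, 16484.0)]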
The oscillator strength, or log gf value, is a key parameter that determines the strength of a spectral line in synthetic spectra. Accurate abundance measurements therefore rely critically on the precision of the adopted log gf values. While Elgueta et al. (2024) recalibrated the log gf values for seven of the P lines included in our selection, all of the selected lines already have laboratory-measured log gf values. To maintain consistency across all line parameters, we adopt the log gf values provided by the VALD database for all P lines used in this work.

3.3 Abundance measurement

For each phosphorus line, the pixels within the full feature region, defined as a width of 4Vbroad/c centred on the line, are used to fit the phosphorus abundance with PySME. Figures 4 and 5 show an example of the line fitting procedure, for our standard star the Sun and for Alessi 1-2, respectively. Synthetic spectra spanning a broader width of 4Vbroad/c + 4 Å are generated, as illustrated in the upper panel. Within the fitting region (vertical orange shaded band), if the blue curve (representing synthetic spectra generated using the P line only) lies between the shaded blue area (synthetic spectra with all lines, with varied [P/Fe]) and the blending by other species (the spectra with thin solid lines) is small, the feature is considered to be dominated by the P line. The lower panel displays the configuration and result of the actual fitting.

To obtain a more accurate continuum level, we extend the input observed spectra to a total width of 4Vbroad/c + 10 Å, centred on the P line. Pixels are classified into three types of masks:

• Continuum mask: pixels that satisfy all of the following conditions: (1) their depth in the synthetic spectrum is less than 0.025; (2) their depth in the telluric model is less than 0.1; and (3) the absolute difference between the observed and synthetic fluxes is less than twice the standard deviation of the residuals;
• Line mask: pixels within 4Vbroad/c of the P line centre;
• Bad mask: all remaining pixels.

This masking strategy ensures that pixels affected by other spectral lines or strong telluric absorption are excluded from the continuum estimation. As the continuum normalisation is generally well performed in Section 2, PySME applies only a constant scaling to the synthetic spectrum, determined by the mean flux level of the continuum pixels. The phosphorus abundance is then derived by minimising the χ2 over the line mask region only. We adopt the best-fit phosphorus abundance and the corresponding fit_uncertainties (the statistical uncertainties of the fitting) output from PySME as our final measurement and its associated uncertainty.
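The three-mask scheme above can be summarised in a few lines; this is a sketch under our own variable names, not the PySME interface.

import numpy as np

C_KMS = 299792.458

def classify_pixels(wave, line_centre, v_broad, synth_depth,
                    telluric_depth, resid, resid_std):
    """Assign each pixel to 'line', 'cont', or 'bad' following Sect. 3.3.
    The line window is taken as a full width of 4*Vbroad/c (in velocity,
    converted to wavelength at the line centre)."""
    half_width = 2.0 * (v_broad / C_KMS) * line_centre
    mask = np.full(wave.shape, "bad", dtype=object)

    cont = (synth_depth < 0.025) & (telluric_depth < 0.1) \
           & (np.abs(resid) < 2.0 * resid_std)
    mask[cont] = "cont"
    mask[np.abs(wave - line_centre) <= half_width] = "line"
    return mask

The line window is written last so that it takes precedence over the continuum condition where the two overlap, matching the description that the χ2 is minimised over the line mask only.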
Upper limits are identified following a method similar to the one described in Wang et al. (2024): if the EW of a fitted line at its best-fit abundance A(P)λ is smaller than three times the EW obtained at A(P)λ + σA(P)λ, the measurement is classified as an upper limit and excluded from further analysis. When the measurement uncertainties properly reflect the noise level in the spectrum, this criterion helps mitigate selection bias by avoiding the preferential inclusion of stronger lines influenced by noise.

We expect the phosphorus abundances derived from all selected lines for a given star to be consistent within their respective uncertainties. However, we find that for a few lines, the derived abundances systematically deviate from the mean value obtained from the other lines. This discrepancy may arise from several factors, such as inaccurate log gf values, improper continuum placement, or undetected blending features that are not captured by the synthetic spectra. These problematic lines are marked with Ngiant or NCepheid of 0 followed by parentheses in Table 2 and are excluded from the subsequent analysis. Most of these lines are used in only a few stars, and we defer the discussion of specific cases where the affected lines are present in the majority of stars to Appendix B. Finally, the weighted average and standard deviation of the A(P) from all available P i lines are taken as the final A(P) and its corresponding error.

4 RESULTS

We present the results of the phosphorus abundance measurements for individual stars and clusters in this section.

4.1 Phosphorus Abundance for the standard stars

Our measurement of the solar spectrum from GIANO shows good consistency with previous determinations, as shown in Figure 6. The same line selection procedure as for the other stars was applied to the standard stars. A total of 13 phosphorus lines were initially selected. Among them, one line is too weak to provide reliable measurements and is therefore treated as an upper limit, while another three are significantly blended with telluric absorption features or do not show a clear absorption profile. It is evident that the lines affected by telluric contamination tend to deviate from the remaining measurements (i.e., the 9525.741 Å and 11183.24 Å lines in Figure 6), which justifies their exclusion from the final abundance calculation. The final phosphorus abundance, 5.46, is obtained as the weighted mean of the remaining reliable P lines. This value is in good agreement with the solar phosphorus abundance derived using 1D LTE spectral synthesis in Scott et al. (2015), 5.38. The estimated uncertainty of 0.2 dex likely arises from a combination of factors, including the lower resolution and signal-to-noise ratio compared to the solar spectrum used in Scott et al. (2015), uncertainties in the adopted log gf values, and imperfect continuum placement.

For Arcturus, no phosphorus lines pass our line selection criteria. Compared to the Sun, Arcturus is cooler and has a significantly lower surface gravity (log g), which leads to weaker P lines (see Figure 3). In addition, its relatively low metallicity ([Fe/H] = -0.51) further reduces the line strengths, rendering all available phosphorus lines too weak for a reliable abundance measurement.

Our results from the Sun and Arcturus help to define an approximate detection boundary for phosphorus abundance measurements at the resolution and signal-to-noise ratio (S/N) of our GIANO-B spectra. Specifically, a metallicity of [Fe/H] ∼ -0.5 appears to be the lower limit for detecting P lines in cool giant stars, as demonstrated by the non-detection in Arcturus. For warmer giants and dwarfs, our test using the solar stellar parameters suggests that only one single P line, at 10529.524 Å, remains detectable at [Fe/H] = -1.0, indicating a rough detection limit under such conditions. This conclusion is consistent with previous high-resolution studies, particularly those by Caffau et al. (2011), whose sample consists of dwarf stars with the most metal-poor star having a metallicity just above [Fe/H] = -1.0. To extend phosphorus measurements to lower metallicities, stars within the optimum Teff and log g range (see Figure 3) should be observed at very high S/N, or observations should be done in the ultraviolet (see Roederer et al. 2014).
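The final per-star abundances quoted here and in Section 4.2 are inverse-variance weighted means over the surviving lines (Sect. 3.3); a minimal sketch with illustrative inputs:

import numpy as np

def combine_lines(a_p, sigma):
    """Weighted average of per-line A(P) values with 1/sigma^2 weights,
    plus the weighted standard deviation used as the quoted error."""
    a_p, sigma = np.asarray(a_p, float), np.asarray(sigma, float)
    w = 1.0 / sigma**2
    mean = np.sum(w * a_p) / np.sum(w)
    std = np.sqrt(np.sum(w * (a_p - mean)**2) / np.sum(w))
    return mean, std

# Four lines of an Alessi 1-2-like star (values illustrative, cf. Table 4):
print(combine_lines([5.56, 5.43, 5.51, 5.77], [0.14, 0.07, 0.11, 0.13]))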
4.2 Phosphorus Abundance for our sample stars

Figure 7 shows an example of the final phosphorus abundance determination for the star Alessi 1-2. Eight lines were selected for the measurement, two of which exhibit weak absorption features and are therefore treated as upper limits. One additional line was excluded based on the criteria described in Section 3.3. The final phosphorus abundance is calculated as the weighted average of the abundances derived from the remaining lines.

Tables 4 and 5 summarise the phosphorus abundance measurements for cluster giants and Cepheids, respectively. The uncertainties in the averaged P abundances for the giants are typically around 0.1 dex, while those for the Cepheids are mostly lower, indicating that P lines are generally stronger in Cepheid spectra. NLTE effects on the phosphorus lines may also contribute, at least in part, to the uncertainties in our target stars. However, there are currently no available NLTE departure coefficients for phosphorus, and thus it is not yet possible to quantify or correct this effect. We then compute the average phosphorus abundance for each cluster using its member stars, as listed in Table 6.

The [P/Fe]-[Fe/H] distribution of our cluster and Cepheid sample is broadly consistent with previous studies, as shown in Figure 8. Our targets span a relatively narrow metallicity range, from [Fe/H] = -0.1 to +0.15. Given this limited coverage, we adopt the results from Nandakumar et al. (2022) as the primary comparison, and refer the reader to their Figure 4 for a comprehensive compilation of earlier studies. Both our field Cepheids and cluster stars exhibit a modest decreasing trend in [P/Fe] from ∼0.3 to ∼0 over the range [Fe/H] = -0.1 to 0. At the metal-rich end, [P/Fe] appears to flatten near zero, although this trend is not well constrained due to the small number of stars in this regime. Overall, our observations fall within the same [P/Fe] range as reported by Nandakumar et al. (2022), supporting the consistency of our results with previous measurements.

The [P/Fe]-age relation observed in our cluster sample shows general consistency with previous studies, while also providing new insights at the younger end. As shown in Figure 9, the phosphorus abundance (both in the form of the absolute P abundance A(P) and the relative [P/Fe]) of clusters with ages greater than 1 Gyr increases with age. This trend is consistent with the findings of Maas et al. (2019) and Feuillet et al. (2018), although we note that the stellar ages from Maas et al. (2019) (with a mean error of ∼1.8 Gyr) were derived from isochrone fitting of field stars, a method notoriously subject to larger uncertainties than when it is applied to stellar clusters (with relative errors around 25%; Cantat-Gaudin et al. 2020).

Table 4. P measurements for cluster giants, with the star name, the mean P abundance from all the normal-flag lines, the number of normal-flag lines, and the abundances from the first six lines. The abundances with a superscript of u or t are those with an upper limit or blended by telluric lines, respectively. Only the first 10 stars and first 6 lines are listed, and the full table is available at the CDS.
star name            A(P)mean     N  A(P)9750.748   A(P)10511.588  A(P)10529.524  A(P)10581.577  A(P)10596.903  A(P)11183.24
Alessi 1-2           5.49 ± 0.12  5  5.68 ± 0.24u   5.01 ± 0.18u   5.56 ± 0.14    5.43 ± 0.07    5.51 ± 0.11    5.77 ± 0.13
Alessi 1-3           5.50 ± 0.14  7  5.80 ± 0.12    5.56 ± 0.07    5.20 ± 0.09    5.52 ± 0.05    5.55 ± 0.12    5.56 ± 0.22
Alessi 1-5           5.46 ± 0.21  5  5.62 ± 0.16    4.98 ± 0.18u   5.37 ± 0.09    -              5.39 ± 0.08    5.72 ± 0.09
Alessi 1-6           5.51 ± 0.23  4  5.62 ± 0.25u   5.16 ± 0.19u   5.28 ± 0.18    -              5.37 ± 0.17    5.84 ± 0.15
Alessi Teutsch 11-1  5.49 ± 0.05  2  1.28 ± 5.77t   5.04 ± 0.22u   5.45 ± 0.14    -              5.49 ± 0.06    -
Basel 11b-1          5.41 ± 0.13  4  4.76 ± 0.82u   4.95 ± 0.31u   5.45 ± 0.17    -              5.44 ± 0.13    5.53 ± 0.16
Basel 11b-2          5.35 ± 0.19  6  5.92 ± 0.14    5.36 ± 0.14    5.27 ± 0.22u   5.30 ± 0.04    5.31 ± 0.10    5.73 ± 0.16
Basel 11b-3          5.46 ± 0.14  2  -              -              -              5.51 ± 0.09    -              -
COIN-Gaia 30-1       -            0  -1.36 ± 37.53t -2.10 ± 8.58   5.42 ± 0.28u   -              4.70 ± 0.41r   5.82 ± 0.11t
Collinder 350-1      -            0  -              -              -              -              -              -
Collinder 350-2      5.30 ± 0.04  2  -              -              -              5.30 ± 0.03    -              -
...                  ...          .. ...            ...            ...            ...            ...            ...

Table 5. Similar to Table 4, but for the Sun and the Cepheid sample, with ages coming from the period-age relation. Only the first 11 Cepheids are listed, and the full table is available at the CDS.

star name  Age (Gyr)      A(P)mean     Nline  A(P)9734.755  A(P)9750.748  A(P)9790.194  A(P)9796.828  A(P)9903.671  A(P)9976.681
SV Vul     0.016 ± 0.003  5.88 ± 0.09  1      6.24 ± 0.04r  -             -             -             -             -
DL Cas     0.051 ± 0.009  5.52 ± 0.23  4      -             -             5.78 ± 0.07   -             5.61 ± 0.11t  -
CO Aur     0.096 ± 0.009  5.73 ± 0.17  4      -             4.63 ± 0.93   -             -             -             -
RX Aur     0.039 ± 0.007  5.61 ± 0.07  4      -             -             5.69 ± 0.03   -             -             -
V351 Cep   0.080 ± 0.008  5.97 ± 0.08  2      -             -1.76 ± 6.86  -             -             -             -
CD Cyg     0.031 ± 0.006  5.65 ± 0.08  2      -             -             5.70 ± 0.21t  -             5.75 ± 0.11   -
V1334 Cyg  0.075 ± 0.008  5.61 ± 0.13  5      -             -             5.74 ± 0.10t  -             -             -
VZ Cyg     0.071 ± 0.013  5.65 ± 0.25  4      6.19 ± 0.22t  -             5.44 ± 0.45t  -             -             -
X Cyg      0.031 ± 0.006  5.43 ± 0.07  2      -             -             5.63 ± 0.13t  -             5.98 ± 0.08t  -
W Gem      0.051 ± 0.009  5.39 ± 0.18  7      -             -             -             -             -             -
RR Lac     0.059 ± 0.011  5.54 ± 0.09  5      -             -             -             3.11 ± 29.35t -             -
...        ...            ...          ...    ...           ...           ...           ...           ...           ...
Sun        -              5.46 ± 0.20  9      -             -             -             5.50 ± 0.12   -             5.74 ± 0.09

Table 6. The mean phosphorus abundance of our target clusters, with the number of member stars for each cluster listed.
Cluster        Age (Gyr)  [Fe/H]mean    A(P)mean     [P/Fe]mean    N  Alt Name
Gulliver 18    0.04       -0.01 ± 0.02  5.59 ± 0.11  0.25 ± 0.11   1  Collinder 416
Collinder 463  0.11       -0.06 ± 0.01  5.46 ± 0.19  0.16 ± 0.19   2
UPK 219        0.15       0.07 ± 0.01   5.33 ± 0.06  -0.10 ± 0.06  1
Tombaugh 5     0.19       0.02 ± 0.06   5.31 ± 0.14  -0.08 ± 0.15  3
NGC 7086       0.19       -0.08 ± 0.03  5.61 ± 0.17  0.33 ± 0.17   2  Collinder 437
UBC 194        0.23       0.06 ± 0.01   5.44 ± 0.12  0.02 ± 0.13   1
Basel 11b      0.23       0.03 ± 0.03   5.41 ± 0.05  0.02 ± 0.06   3  FSR 877
NGC 2437       0.30       0.01 ± 0.06   5.45 ± 0.28  0.08 ± 0.29   4  M 46, Melotte 75
UBC 169        0.30       0.05 ± 0.03   5.40 ± 0.15  -0.01 ± 0.16  2
NGC 2548       0.40       0.05 ± 0.05   5.30 ± 0.07  -0.11 ± 0.09  3  M 48, Melotte 85
Stock 2        0.40       -0.00 ± 0.04  5.30 ± 0.12  -0.06 ± 0.12  7
NGC 7209       0.43       0.04 ± 0.01   5.63 ± 0.05  0.22 ± 0.05   1  Melotte 238, Collinder 444
Collinder 350  0.59       0.08 ± 0.01   5.30 ± 0.04  -0.15 ± 0.04  1
NGC 2632       0.68       0.13 ± 0.01   5.62 ± 0.18  0.13 ± 0.19   1  M 44, Praesepe
NGC 752        1.17       0.08 ± 0.02   5.30 ± 0.08  -0.13 ± 0.09  3  Melotte 12, Theia 1214
IC 4756        1.29       -0.00 ± 0.04  5.39 ± 0.05  0.03 ± 0.06   6  Collinder 386, Melotte 210
Alessi 1       1.45       -0.01 ± 0.01  5.49 ± 0.01  0.14 ± 0.02   4  Casado-Alessi 1
NGC 6991       1.55       0.03 ± 0.06   5.42 ± 0.15  0.03 ± 0.16   5
UBC 141        2.09       -0.02 ± 0.01  5.38 ± 0.23  0.05 ± 0.23   1
UBC 577        2.75       -0.04 ± 0.04  5.41 ± 0.12  0.09 ± 0.13   3  Alessi 191
Ruprecht 171   2.75       -0.04 ± 0.05  5.71 ± 0.08  0.39 ± 0.09   4
NGC 2682       4.27       0.02 ± 0.01   5.66 ± 0.53  0.28 ± 0.53   1  M 67

Figure 4. Phosphorus abundance determination from the P i 10529.524 Å line in the solar spectrum. Top panel: synthetic spectra after varying the abundance of P by ±0.1 dex (blue shaded profiles), together with spectra that contain only the P i transition (solid blue) and the other species (grey). The orange dotted vertical line marks the line centre, while the orange shaded strip marks the wavelength range used in the fit. Bottom panel: observed spectrum (blue points) compared with the best-fitting synthetic spectrum (solid orange) and models corresponding to the ±1σ uncertainty in A(P) (orange dashed). The light-grey dashed "stair-step" curve (right-hand y-axis) shows the pixel mask: cont (continuum windows), line (pixels included in the fit), and bad (pixels excluded). The orange shaded region is the same fitting window as in the top panel. Similar plots for the other analysed P lines are available in the Supporting Information.

In contrast, the trend at younger ages seems to be flat. We performed linear fits to the clusters with ages younger and older than 1 Gyr separately in Figure 9. The linear fit for the older clusters shows a positive slope and significant Pearson p-values of less than 0.05, in both A(P) and [P/Fe]. Those for the young clusters present a negative slope with p-values larger than 0.05. Most clusters with ages less than 1 Gyr have near-solar phosphorus abundances, while the clusters and Cepheids with even younger ages show either solar or enhanced [P/Fe] values. Including them reduces the slope of the linear fit and further decreases the p-values to below 0.05. The measurements of the Cepheids fill the gap on the youngest side of the trend. They extend the negative trend down to an age of 0.015 Gyr, and raise the fitted phosphorus abundance there to ∼5.7.
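The two-sided fit just described is a pair of ordinary least-squares regressions with Pearson p-values; a minimal sketch using a subset of the Table 6 values (the full analysis in Figure 9 also uses the Cepheids and confidence strips):

import numpy as np
from scipy.stats import linregress

# illustrative cluster subset (age in Gyr, A(P)); cf. Table 6
age = np.array([0.04, 0.19, 0.40, 0.68, 1.17, 1.45, 2.75, 4.27])
a_p = np.array([5.59, 5.31, 5.30, 5.62, 5.30, 5.49, 5.71, 5.66])

for label, sel in (("young", age < 1.0), ("old", age >= 1.0)):
    res = linregress(age[sel], a_p[sel])
    print(f"{label}: slope = {res.slope:+.3f} dex/Gyr, p = {res.pvalue:.3f}")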
4.3 Spatial distribution of Phosphorus

One of the goals of Galactic archaeology is to map the spatial distribution of elements in our Galaxy. Since OCs represent well-defined single stellar populations, their spatial distribution allows us to investigate the phosphorus distribution among different groups of stars in various directions across the Galactic plane. The classical Cepheids in our sample can also provide similar information, as they are relatively young and their period-luminosity relation enables precise distance measurements.

Figure 10 displays the spatial distribution of our target clusters and Cepheids in heliocentric Cartesian coordinates, overlaid with a dust extinction map (Dharmawardena et al. 2024) to represent the current structure of the interstellar medium (ISM). We adopted Solar X and Z values of 8122 and 20.8 pc, respectively (GRAVITY Collaboration et al. 2018; Bennett & Bovy 2019). Despite the clear inhomogeneity in the present-day ISM, we find no statistically significant correlation between the absolute phosphorus abundance, A(P), and the clusters' current locations, be it their Galactocentric radius or their position relative to local dust structures. This apparent lack of correlation is not a deficiency in our data but is, in fact, a key piece of evidence for the dynamic nature of the Milky Way's disk. Over timescales of tens of millions to several billion years, processes such as radial migration and vertical heating act to redistribute stars and clusters from their birthplaces (see e.g. Viscasillas Vázquez et al. 2023). Consequently, a cluster's current location is often decoupled from the chemical composition of the gas from which it formed.

Figure 5. Similar to Figure 4, but for the line at 10581.577 Å of Alessi 1-2. Here several CN lines appear around the target P line, and their synthetic spectrum is also plotted in the upper panel. Similar plots for other lines and other stars are available in the Supporting Information.

Figure 6. The phosphorus abundance measurement results from all the lines for the Sun. Blue filled circles give the abundance A(P) derived for each line (the wavelength range is shown on the x-axis); their vertical blue error bars mark the 1σ fitting uncertainty. Deep-red plus symbols plot the line-by-line abundances reported by Scott et al. (2015) for the lines that are used in both studies, enabling a direct comparison.
Red crosses indicate lines discarded from our final mean because of severe telluric blending or manual exclusion, while blue downward arrows denote lines for which only an upper limit on A(P) could be set. The horizontal blue dashed line is the weighted mean abundance from all accepted ("normal-flag") lines, with its 1σ dispersion shown by the light-blue band. For reference, the red dashed line marks the solar value A(P) = 5.38 adopted by Scott et al. (2015). The panel title lists the mean abundance and its standard deviation obtained in this work.

Figure 7. Similar to Figure 6, but for Alessi 1-2. Similar figures for other stars are available in the Supporting Information.

Figure 8. [P/Fe] versus [Fe/H] for the OC giants in our sample (left panel, the average [P/Fe] for each cluster) and the Cepheid stars (right panel). For comparison, the [P/Fe] measurements of the giants from Nandakumar et al. (2022) are also plotted.

5 DISCUSSION

Our analysis of the OC ages reveals a distinct "V-shape" in the phosphorus enrichment history of the Milky Way's thin disk (see Figure 9). While previous studies have established a general trend of [P/Fe] with [Fe/H] (Caffau et al. 2011, 2016; Maas et al. 2019, 2022), the high precision and wide range of ages at a similar [Fe/H] of our cluster sample allow us to dissect the phosphorus evolution in time, uncovering two distinct evolutionary trends. The "V-shape" in the A(P)-age plane is not a single evolutionary track, but rather the superposition of two different regimes of GCE. The clusters younger than ∼1 Gyr appear to follow a path consistent with local, recent enrichment, while the older clusters unveil a fossil record of the diverse and dynamic star formation histories of the Galaxy's past. We note that similar but milder [Mg/Fe]-age and [Mg/H]-age trends can also be found using the Mg abundances from Dal Ponte et al. (2025), where only open clusters are analysed. A systematic comparison of age-abundance trends for other elements with similar nucleosynthetic origins would be highly informative; however, such an analysis is beyond the scope of the present study and is deferred to future work.

5.1 Old OCs as a fossil record of diverse star formation histories and P enrichment

For OCs older than ∼1 Gyr, we observe that the absolute phosphorus abundance A(P) decreases as the stellar age decreases from ∼4 Gyr to ∼1 Gyr. This trend is contrary to the predictions of a simple, single-zone GCE model, in which the metal abundance should increase monotonically with evolutionary time. The explanation lies in the fact that the Milky Way is not a single, uniform entity. The solar neighbourhood today contains a mix of stellar populations born in different locations and at different times, each with a unique star formation history. This is a key feature of multi-zone GCE. The clusters in our sample, despite having a similar near-solar metallicity (probed by [Fe/H]), reached this metallicity at vastly different cosmic epochs (see Table 6). For a cluster like M67, which had achieved solar metallicity by about 4 Gyr ago, its parent gas cloud must have undergone a very rapid and intense period of enrichment. In contrast, for another cluster in the old OC group, NGC 752, its parent gas cloud reached a similar metallicity much more gradually, about 1 Gyr ago. This difference in enrichment speed implies a fundamentally different star formation history.
The rapid enrichment required for the older clusters necessitates either a significantly higher star formation rate or a more "top-heavy" IMF that produces more massive stars per generation. In either scenario, the integrated contribution from massive stars, the primary producers of phosphorus, is far greater (see Figure 1). Therefore, it is an inevitable consequence that the 4 Gyr-old M67 formed from gas that was more abundant in absolute phosphorus A(P) than the gas that formed the 1 Gyr-old NGC 752. The declining A(P) trend for clusters older than ∼1 Gyr is thus a direct manifestation of observing a sequence of clusters born from environments with progressively less intense star formation histories.

5.2 Young OCs and Cepheids as evidence for local, quiescent P enrichment

Open clusters younger than ∼1 Gyr and Cepheids in Figure 9 exhibit a flat or slightly increasing A(P) (or [P/Fe]) with decreasing age. This behaviour aligns well with the expectations for a single-zone GCE model.

Figure 9. Cluster or stellar age versus phosphorus abundance for our targets. The A(P) and [P/Fe] measurements from Maas et al. (2019) are plotted in gray as a reference. The blue lines and transparent strips represent separate linear fits, with their 95% confidence intervals, to the cluster data, applied independently to clusters younger and older than 1 Gyr, while the orange ones show the linear fit to the combined cluster and Cepheid sample on the young side. The [P/Fe]-age trend from Feuillet et al. (2018) is plotted as a reference.

These clusters and stars are young and located in the solar neighbourhood, meaning they were likely born locally and share a nearly common, recent chemical history. They represent the current, more quiescent phase of evolution in our part of the Galaxy. In this local environment, we see a slow enrichment of phosphorus over the last ∼1 Gyr. Such gentle phosphorus enrichment may be explained by the contribution from low-mass stars. Figure 11 shows the evolution of phosphorus in a single stellar population at two different initial metallicity (iniZ) values, following the same set-up as the yields in Figure 1. Overall, the main phosphorus production at both lower and higher metallicities originates from massive stars, which is consistent with the conclusions of previous studies (e.g. Cescutti et al. 2012). The contribution from lower-mass stars, i.e. AGB stars, reached a level of 10⁻⁸ M⊙ at later evolution times at iniZ = 0.0001. However, the contribution of AGB stars to the phosphorus production at iniZ = 0.01 is an order of magnitude higher than in the metal-poor case, making it more significant at later times in the evolution of the single stellar population. This increased phosphorus production by AGB stars is reflected in the peak of the IMF-weighted total yield at the lowest stellar masses, as seen for iniZ = 0.004 and 0.01 in Figure 1. These slightly sub-solar metallicities also approximately correspond to the previous stellar generation of the OCs and Cepheids in our study. In the relative absence of vigorous massive star formation in the immediate solar vicinity, the enrichment from these lower-mass stars can become a noticeable contributor to the ongoing chemical evolution, explaining the slight upward trend in A(P) towards the present day. We note that this interpretation is sensitive to the uncertainties in the nucleosynthetic yields adopted here, highlighting the need for more precise yield determinations.
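As a concrete illustration of the IMF weighting invoked above, the following sketch integrates a stellar phosphorus-yield table over a Kroupa (2001) IMF normalized to 1 M⊙ of stars formed. The numbers in p_yield are placeholder values chosen for illustration only, not the yield tables adopted in this work.

```python
import numpy as np

def trapz(f, x):
    """Simple trapezoidal integral, to avoid numpy version differences."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

def kroupa_imf(m):
    """Kroupa (2001) IMF, dN/dm up to normalization (m in solar masses)."""
    return np.where(m < 0.5, (m / 0.5) ** -1.3, (m / 0.5) ** -2.3)

# Placeholder net P yields (Msun of P ejected per star) on a coarse mass grid;
# real values would come from the adopted nucleosynthesis tables.
m_grid = np.array([1.0, 2.0, 4.0, 8.0, 15.0, 25.0, 40.0])
p_yield = np.array([1e-7, 3e-7, 8e-7, 2e-6, 1e-5, 3e-5, 6e-5])

m = np.logspace(np.log10(0.08), np.log10(100.0), 4000)
phi = kroupa_imf(m)
phi /= trapz(m * phi, m)          # normalize to 1 Msun of stars formed
y = np.interp(m, m_grid, p_yield, left=0.0, right=p_yield[-1])

total = trapz(phi * y, m)         # IMF-weighted P yield per Msun formed
hi = m >= 8.0                     # massive-star share of the total
frac = 100 * trapz((phi * y)[hi], m[hi]) / total
print(f"P yield: {total:.2e} Msun per Msun formed ({frac:.0f}% from m >= 8 Msun)")
```

Boosting the m ≥ 8 M⊙ yields, or flattening the high-mass IMF slope, raises the integrated output; this is the quantitative sense in which a higher star formation rate or a "top-heavy" IMF drives the rapid enrichment discussed in Section 5.1, while the low-mass entries of the table control the late-time AGB contribution discussed here.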
The contribution of ONe novae, recently discussed by Bekki & Tsujimoto (2024), is likewise not considered here, but they may contribute to some extent to phosphorus production.

6 SUMMARY

In this work, we present a detailed abundance analysis of phosphorus for 82 giant stars in 24 open clusters, together with 20 Cepheids (17 in the Galactic field and 3 in open clusters). The analysis is based on high-resolution near-infrared spectra obtained with the GIANO-B spectrograph. To achieve this, we developed and implemented a robust line selection and fitting procedure, which carefully accounts for blending, telluric contamination, and line-strength variations, ensuring reliable abundance measurements. Our key findings include:
• Our results confirm the previously observed modest decreasing trend of [P/Fe] with increasing [Fe/H] for stars around solar metallicity.
• Using the high-precision ages of open clusters, we uncover a distinct "V-shape" in the phosphorus-age relation, revealing two separate chemical evolution pathways and population mixtures for the Milky Way's thin disk:
- For clusters older than ∼1 Gyr in our sample, we find that phosphorus abundance increases with stellar age. We interpret this trend as a fossil record of diverse star formation histories and phosphorus enrichments. At a similar metallicity, older clusters formed from gas that was more rapidly enriched, a consequence of more intense star formation in the earlier epochs of the Galactic disk.
- For clusters younger than ∼1 Gyr and Cepheids in our sample, the [P/Fe] and A(P) trends with age are nearly flat, with a small negative slope. This points to a more quiescent, local enrichment history, where the gentle increase in phosphorus of the younger clusters may be attributable to the contribution from low-mass stars at solar metallicities.
• We find no correlation between the A(P) abundance and the current spatial location of the open clusters and Cepheids.
Future observations of more giant stars in young open clusters, or of more Cepheids, may further clarify the [P/Fe]-age trend in the very young population of our Milky Way.

ACKNOWLEDGEMENTS

A.B., M.J., and V.D. acknowledge funding from INAF Mini-Grant 2022 (High resolution spectroscopy of open clusters). X.F. acknowledges the support of the National Natural Science Foundation of China (NSFC) No. 12203100. G.B., G.F. and A.N. acknowledge support from Project PRIN MUR 2022 (code 2022ARWP9C) 'Early Formation and Evolution of Bulge and HalO (EFEBHO)' (PI: M. Marconi), funded by the European Union-Next Generation EU, and from the Large grant INAF 2023 MOVIE (PI: M. Marconi).

[Figure 10 panels: X-Z, X-Y, and Y-Z projections (kpc), with the Sun marked and a colour bar spanning A(P) = 5.2-5.8]
Figure 10. Spatial distribution of the target clusters (dots) and Cepheids (stars), colour-coded by their phosphorus abundance. The overlaid extinction maps for each plane are derived from Dharmawardena et al. (2024). A maximum A0 of 0.8 mag was set (represented by darkest black), with a contour (black dashed line) at 0.2 mag for the XY plane, while a maximum A0 of 3 mag was applied for the XZ and YZ planes. The coloured curves represent the Galactic log-periodic spiral arms, defined by the parameters from Reid et al. (2019): the Carina-Sagittarius arm in purple, the Local arm in black, and the Perseus arm in green. Additionally, the spur between the Local and Sagittarius-Carina arms is indicated by the blue curve.
Part of the research activities described in this paper were carried out with the contribution of NextGenerationEU funds within the National Recovery and Resilience Plan (PNRR), Mission 4-Education and Research, Component 2-From Research to Business (M4C2), Investment Line 3.1-Strengthening and creation of Research Infrastructures, Project IR0000034 "STILES-Strengthening the Italian Leadership in ELT and SKA", CUP C33C22000640006. This work has made use of the VALD database, operated at Uppsala University, the Institute of Astronomy RAS in Moscow, and the University of Vienna.

DATA AVAILABILITY

Full versions of Figures 4, 5 and 7 are available in the Supporting Information, and Tables 1, 3-5, A1 and A2 are available at the CDS via anonymous ftp to cdsarc.cds.unistra.fr (130.79.128.5), or via https://cdsarc.cds.unistra.fr. The data underlying this article will be shared on reasonable request to the corresponding author.

[Figure 11 panels: integrated ejecta mass (M⊙) versus age (Gyr) for iniZ = 0.0001 (left) and iniZ = 0.01 (right); curves show all sources, massive stars, SNe Ia, and AGB stars]
Figure 11. Integrated ejecta mass of a single stellar population, with the initial metallicity iniZ of 0.0001 in the left panel and 0.01 in the right panel.

REFERENCES

Amarsi A. M., et al., 2020, A&A, 642, A62
Anders F., et al., 2017, A&A, 600, A70
Bekki K., Tsujimoto T., 2024, ApJ, 967, L1
Bennett M., Bovy J., 2019, MNRAS, 482, 1417
Bono G., Marconi M., Cassisi S., Caputo F., Gieren W., Pietrzynski G., 2005, ApJ, 621, 966
Brauner M., Masseron T., García-Hernández D. A., Pignatari M., Womack K. A., Lugaro M., Hayes C. R., 2023, A&A, 673, A123
Caffau E., Bonifacio P., Faraggiana R., Steffen M., 2011, A&A, 532, A98
Caffau E., Andrievsky S., Korotin S., Origlia L., Oliva E., Sanna N., Ludwig H. G., Bonifacio P., 2016, A&A, 585, A16
Caffau E., et al., 2019, A&A, 622, A68
Cantat-Gaudin T., et al., 2020, A&A, 640, A1
Cescutti G., Matteucci F., Caffau E., François P., 2012, A&A, 540, A33
Claudi R., et al., 2017, European Physical Journal Plus, 132, 364
Cosentino R., et al., 2012, in McLean I. S., Ramsay S. K., Takami H., eds, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series Vol. 8446, Ground-based and Airborne Instrumentation for Astronomy IV. p. 84461V
Cretignier M., Francfort J., Dumusque X., Allart R., Pepe F., 2020, A&A, 640, A42
Dal Ponte M., et al., 2025, arXiv e-prints
Dharmawardena T. E., Bailer-Jones C. A. L., Fouesneau M., Foreman-Mackey D., Coronica P., Colnaghi T., Müller T., Wilson A. G., 2024, MNRAS, 532, 3480
Elgueta S. S., et al., 2024, MNRAS, 532, 3694
Fernández-García C., Coggins A. J., Powner M. W., 2017, Life, 7, 23
Feuillet D. K., et al., 2018, MNRAS, 477, 2326
GRAVITY Collaboration et al., 2018, A&A, 615, L15
Grevesse N., Asplund M., Sauval A. J., 2007, Space Sci. Rev., 130, 105
Gullikson K., Dodson-Robinson S., Kraus A., 2014, AJ, 148, 53
Gustafsson B., Edvardsson B., Eriksson K., Jørgensen U. G., Nordlund Å., Plez B., 2008, A&A, 486, 951
Hao C. J., Xu Y., Wu Z. Y., Lin Z. H., Bian S. B., Li Y. J., Liu D. J., 2022, A&A, 668, A13
Hawkins K., Masseron T., Jofré P., Gilmore G., Elsworth Y., Hekker S., 2016, A&A, 594, A43
Hayden M. R., et al., 2015, ApJ, 808, 132
Hayes C. R., et al., 2022, ApJS, 262, 34
Iwamoto K., Brachwitz F., Nomoto K., Kishimoto N., Umeda H., Hix W. R., Thielemann F.-K., 1999, ApJS, 125, 439
Jian M., et al., 2024, A&A, 687, A189
Karakas A. I., 2010, MNRAS, 403, 1413
Kobayashi C., Umeda H., Nomoto K., Tominaga N., Ohkubo T., 2006, ApJ, 653, 1145
Kobayashi C., Karakas A. I., Lugaro M., 2020, ApJ, 900, 179
Kroupa P., 2001, MNRAS, 322, 231
Maas Z. G., Cescutti G., Pilachowski C. A., 2019, AJ, 158, 219
Maas Z. G., Hawkins K., Hinkel N. R., Cargile P., Janowiecki S., Nelson T., 2022, AJ, 164, 61
Mallinson J. W. E., Lind K., Amarsi A. M., Youakim K., 2024, A&A, 687, A5
Masseron T., García-Hernández D. A., Santoveña R., Manchado A., Zamora O., Manteiga M., Dafonte C., 2020a, Nature Communications, 11, 3759
Masseron T., García-Hernández D. A., Zamora O., Manchado A., 2020b, ApJ, 904, L1
Nandakumar G., Ryde N., Montelius M., Thorsbro B., Jönsson H., Mace G., 2022, A&A, 668, A88
Nomoto K., Kobayashi C., Tominaga N., 2013, ARA&A, 51, 457
Oliva E., et al., 2012a, in McLean I. S., Ramsay S. K., Takami H., eds, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series Vol. 8446, Ground-based and Airborne Instrumentation for Astronomy IV. p. 84463T
Oliva E., Biliotti V., Baffa C., Giani E., Gonzalez M., Sozzi M., Tozzi A., Origlia L., 2012b, in Holland A. D., Beletic J. W., eds, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series Vol. 8453, High Energy, Optical, and Infrared Detectors for Astronomy V. p. 84532T
Origlia L., et al., 2014, in Ramsay S. K., McLean I. S., Takami H., eds, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series Vol. 9147, Ground-based and Airborne Instrumentation for Astronomy V. p. 91471E
Piskunov N., Valenti J. A., 2017, A&A, 597, A16
Piskunov N. E., Kupka F., Ryabchikova T. A., Weiss W. W., Jeffery C. S., 1995, A&AS, 112, 525
Rainer M., et al., 2018, in Evans C. J., Simard L., Takami H., eds, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series Vol. 10702, Ground-based and Airborne Instrumentation for Astronomy VII. p. 1070266
Reid M. J., et al., 2019, ApJ, 885, 131
Roederer I. U., Jacobson H. R., Thanathibodee T., Frebel A., Toller E., 2014, ApJ, 797, 69
Ryabchikova T., Piskunov N., Kurucz R. L., Stempels H. C., Heiter U., Pakhomov Y., Barklem P. S., 2015, Phys. Scr., 90, 054005
Scott P., et al., 2015, A&A, 573, A25
Tolstoy E., 2011, Science, 333, 176
Tozzi A., et al., 2016, in Evans C. J., Simard L., Takami H., eds, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series Vol. 9908, Ground-based and Airborne Instrumentation for Astronomy VI. p. 99086C
Valenti J. A., Piskunov N., 1996, A&AS, 118, 595
Vink J. S., de Koter A., Lamers H. J. G. L. M., 2001, A&A, 369, 574
Viscasillas Vázquez C., Magrini L., Spina L., Tautvaišienė G., Van der Swaelmen M., Randich S., Sacco G. G., 2023, A&A, 679, A122
Wang E. X., et al., 2024, MNRAS, 528, 5394
Wehrhahn A., Piskunov N., Ryabchikova T., 2023, A&A, 671, A171
Weinberg D. H., et al., 2019, ApJ, 874, 102
Xu X., Cisewski-Kehe J., Davis A. B., Fischer D. A., Brewer J. M., 2019, AJ, 157, 243
de Jong R. S., et al., 2019, The Messenger, 175, 3

APPENDIX A: CARBON ABUNDANCES FROM BLENDING MOLECULAR FEATURES

This section presents Tables A1 and A2 of the C abundances used to fit the molecular features around the phosphorus lines, for cluster giants and Cepheids, respectively.

APPENDIX B: THE P LINES WITH BIASED ABUNDANCE

As described in Section 3.3 and summarised in Table 2, four lines selected for cluster giants and four lines for Cepheids were excluded from the analysis due to systematic biases in their derived phosphorus abundances.
Among these, two lines for the giants and one for the Cepheids were used in more than five stars. Here, we discuss these lines individually.

The P i line at 10813.141 Å is intrinsically very weak, with a typical line depth of only ∼0.01 in most stars. Its abundance sensitivity is therefore highly dependent on accurate continuum placement. However, the continuum in the fitting region is often depressed below unity, and since PySME applies only a constant scaling over a wide region during abundance fitting, local misplacement of the continuum across a narrow wavelength range cannot be corrected. As a result, the phosphorus abundances derived from this line tend to be overestimated. The situation for the P i line at 17423.670 Å is similar, but with underestimated abundances; thus, it is also removed.

The line at 17112.447 Å consistently yields phosphorus abundances that are approximately 0.3 dex higher than the average derived from other lines. Inspection of the line fitting reveals the presence of an unidentified absorption feature on the red side of the P line. This feature is absent from the synthetic spectra (otherwise it would have been accounted for), but it appears consistently in all observed spectra with a similar relative position, suggesting it originates from the stellar spectrum. Due to the severity of this blending, the fitting routine attempts to reproduce the composite feature by increasing the phosphorus abundance, leading to a systematic overestimation.

Six P i lines were excluded from the Cepheid sample. Their behaviour is similar to that of the 10813.141 Å line: accurate continuum placement is challenging in this wavelength region, compromising the reliability of the abundance fitting. As a result, these lines were excluded from the analysis.

This paper has been typeset from a TEX/LATEX file prepared by the author.

Table A1. Carbon abundances used to fit molecular features around the phosphorus lines for cluster giants. Only the first 10 stars and first 6 lines are listed, and the full table is available in CDS.

star name           | A(C)9750.748 | A(C)10511.588 | A(C)10529.524 | A(C)10581.577 | A(C)10596.903 | A(C)11183.24
Alessi 1-2          | 8.30 ± 0.05  | 8.14 ± 0.38   | 8.56 ± 0.10   | 8.33 ± 0.24   | 8.06 ± 0.22   | 8.31 ± 0.08
Alessi 1-3          | 8.27 ± 0.06  | 8.25 ± 0.28   | 8.64 ± 0.09   | 8.40 ± 0.22   | 8.42 ± 0.08   | 8.25 ± 0.08
Alessi 1-5          | 8.38 ± 0.05  | 8.04 ± 0.37   | 8.61 ± 0.08   | -             | 8.40 ± 0.06   | 8.30 ± 0.07
Alessi 1-6          | 8.26 ± 0.07  | 7.68 ± 0.78   | 8.71 ± 0.08   | -             | 7.63 ± 0.16   | 8.20 ± 0.09
Alessi Teutsch 11-1 | 8.25 ± 0.05  | 8.35 ± 0.11   | 8.52 ± 0.05   | -             | 8.32 ± 0.03   | -
Basel 11b-1         | 8.35 ± 0.07  | 8.47 ± 0.14   | 8.79 ± 0.07   | -             | 8.15 ± 0.25   | 8.33 ± 0.09
Basel 11b-2         | 8.42 ± 0.06  | 8.37 ± 0.19   | 8.79 ± 0.07   | 8.67 ± 0.12   | 8.51 ± 0.08   | 8.38 ± 0.08
Basel 11b-3         | -            | -             | -             | 8.54 ± 0.09   | -             | -
COIN-Gaia 30-1      | 7.87 ± 0.17  | 8.12 ± 0.29   | 8.52 ± 0.09   | -             | 8.38 ± 0.07   | 8.45 ± 0.05
Collinder 350-1     | -            | -             | -             | -             | -             | -
Collinder 350-2     | -            | -             | -             | 8.55 ± 0.14   | -             | -
...                 | ...          | ...           | ...           | ...           | ...           | ...

Table A2. Carbon abundances used to fit the molecular features around the phosphorus lines for Cepheids. Only the first 10 stars and first 6 lines are listed, and the full table is available in CDS.
star name   | A(C)9493.571 | A(C)9609.036 | A(C)10932.724 | A(C)10967.373 | A(C)11186.753 | A(C)15711.522
SV Vul      | 8.08 ± 0.18  | 8.64 ± 0.04  | 8.52 ± 0.11   | 8.63 ± 0.02   | 8.41 ± 0.10   | 10.27 ± 0.09
DL Cas      | -            | -            | -             | -             | -             | -
CO Aur      | 6.35 ± 2.36  | -            | -             | 8.35 ± 3.05   | -             | -
RX Aur      | -            | -            | -             | -             | -             | -
V 351 Cep   | -            | -            | -             | 8.33 ± 2.73   | 10.33 ± 0.31  | -
CD Cyg      | -            | -            | 8.25 ± 0.02   | -             | -             | -
V 1334 Cyg  | 6.09 ± 5.05  | -            | -             | -             | -             | -
VZ Cyg      | 8.07 ± 20.49 | 10.06 ± 0.61 | -             | 8.07 ± 4.22   | -             | -
X Cyg       | -            | -            | -             | -             | -             | -
W Gem       | -            | -            | -             | -             | -             | -
RR Lac      | -            | -            | -             | -             | -             | -
...         | ...          | ...          | ...           | ...           | ...           | ...
2510.14787
A Human-Vector Susceptible–Infected–Susceptible Model for Analyzing and Controlling the Spread of Vector-Borne Diseases

Lorenzo Zino, Alessandro Casu, and Alessandro Rizzo

Abstract— We propose an epidemic model for the spread of vector-borne diseases. The model, which is built by extending the classical susceptible–infected–susceptible model, accounts for two populations —humans and vectors— and for cross-contagion between the two species, whereby humans become infected upon interaction with carrier vectors, and vectors become carriers after interaction with infected humans. We formulate the model as a system of ordinary differential equations and leverage monotone systems theory to rigorously characterize the epidemic dynamics. Specifically, we characterize the global asymptotic behavior of the disease, determining conditions for quick eradication of the disease (i.e., for which all trajectories converge to a disease-free equilibrium), or convergence to a (unique) endemic equilibrium. Then, we incorporate two control actions: namely, vector control and incentives to adopt protection measures. Using the derived mathematical tools, we assess the impact of these two control actions and determine the optimal control policy.

I. INTRODUCTION

In the last decade, mathematical models of epidemic diseases have gained traction within the systems and control community [1]–[5]. In fact, the development of increasingly refined models has made it possible to accurately predict the course of an epidemic outbreak and, ultimately, to design and assess intervention policies [1], [4], [6]. In particular, the latest epidemiological threats, such as the outbreaks of Ebola, COVID-19, and seasonal flu, have provided further motivation to pursue these studies, yielding tailored versions of these general epidemic models [7]–[11].

Typical modeling setups deal with human-to-human contagion mechanisms [1]–[5]. However, according to the World Health Organization, more than 17% of all infectious diseases are vector-borne [12]. This means that they are not transmitted through human-to-human interactions, but by arthropod vectors (such as mosquitoes, fleas, or ticks) that can carry pathogens and transmit them to humans [12]. Vector-borne diseases (including dengue, malaria, and West Nile fever) pose a significant threat to our society, causing more than 700,000 deaths annually [12]. Moreover, the ongoing climate change crisis exacerbates concerns about the prevention of these diseases, as vectors adapt to new habitats [13]. This is the case, e.g., of Aedes aegypti (responsible for the transmission of several diseases, including dengue, Zika, and yellow fever), which is predicted to infest many regions of Europe if the temperature increases by 2°C [14].

The authors are with the Department of Electronics and Telecommunications, Politecnico di Torino, Torino, Italy (lorenzo.zino@polito.it, alessandro.casu@studenti.polito.it, alessandro.rizzo@polito.it).

Numerous mathematical models of vector-borne diseases have been proposed and studied, particularly in response to the increasing concern for dengue fever, leading to a rich body of research [15], [16]. However, most of these models, developed by computational epidemiologists as complex simulation tools, offer limited analytical tractability [17]. Conversely, there is a scarcity of parsimonious models that efficiently balance accuracy and interpretability. Here, we fill this gap by proposing a novel mathematical model for vector-borne diseases.
Our model, grounded in dynamical systems theory, considers two interacting populations of humans and vectors. Through such interactions, the pathogen is transmitted from carrier vectors to susceptible humans and from infectious humans to vectors, establishing a positive feedback loop of contagion. Formally, we cast our model as a system of nonlinear ordinary differential equations (ODEs), in which we couple i) an epidemic model for humans, inspired by the Susceptible–Infected–Susceptible (SIS) model [2], ii) a contagion model for vectors, inspired by the Susceptible–Infected (SI) model [2], and iii) a vital dynamics for vectors, which is modeled using a birth-death process [18]. We refer to the model obtained as the human-vector SIS (HV-SIS) epidemic model.

In addition to the formulation of the model, the main contribution of this paper is twofold. First, by leveraging monotone dynamical systems theory [19], we perform a thorough analysis of the asymptotic behavior of the HV-SIS model, characterizing two regimes: one where the epidemic outbreak is quickly eradicated, leading to global convergence to a disease-free equilibrium; and one where the disease becomes endemic, and the system converges to a (unique) endemic equilibrium. Second, we introduce two control actions: namely, vector control —which focuses on reducing the vector population (e.g., using pesticides) [20]— and the use of personal protection measures against contagion [21]. By studying the controlled HV-SIS model and formulating an optimization problem, we investigate the optimal control policies to prevent outbreaks of vector-borne diseases, as a function of the model parameters and the cost associated with implementing interventions.

II. HUMAN-VECTOR SIS EPIDEMIC MODEL

We consider a large population of humans that interact with a population of vectors. Similar to most epidemic models [2], we observe that the duration of an epidemic outbreak is typically negligible with respect to the life-span of humans. Hence, we approximate the size of the human population as constant. Moreover, the population being large, we approximate it as a continuum of individuals with total mass equal to 1 [2].

Fig. 1: Schematic of the human-vector epidemic model. Solid arrows represent possible transitions of the state of humans (S and I, for susceptible and infected, respectively) and vectors (N and C, for non-carrier and carrier, respectively). Dashed arrows are associated with the vital dynamics of vectors. Dotted colored arrows indicate transitions that are triggered by interactions with humans or vectors with a specific state.

On the contrary, the life-span of a vector is typically comparable with the infection propagation dynamics [22]. Hence, we assume that the total quantity of vectors v(t) ≥ 0 (normalized with respect to the unit-mass human population) evolves in continuous time t ≥ 0 according to a classical ODE associated with a birth-death process, typically used in mathematical biology models [18]:

v̇(t) = ω − µv(t), (1)

where ω > 0 and µ > 0 are two constants representing the birth and death rate, respectively. Note that Eq. (1) is linear, so v(t) = ω/µ + (v(0) − ω/µ)e^{−µt}, and the total quantity of vectors always converges to ω/µ at rate µ. Humans can be healthy and susceptible to the disease or infected with the disease. We assume that there is no natural immunity: after recovery, individuals are again susceptible to the disease.
This is a good proxy for many diseases, e.g., dengue fever, for which natural immunity wanes quickly and only protects against the virus serotype specific to the previous infection [23]. We denote by x(t) ∈ [0, 1] and s(t) ∈ [0, 1] the fractions of infected and susceptible individuals at time t ≥ 0, respectively. Since there is no immunity, it holds that s(t) = 1 − x(t). Similarly, vectors can be either carriers of the pathogen or non-carriers. We denote by y(t) ≥ 0 the quantity of non-carrier vectors and by z(t) ≥ 0 the quantity of carrier vectors. Since v(t) is the total quantity of vectors at time t, we have y(t) + z(t) = v(t). We assume that the two populations are well-mixed, and we define a human-vector compartmental model that describes the evolution of the fractions of susceptible and infected individuals and the quantities of carriers and non-carriers in the two populations. The compartmental model, illustrated in Fig. 1, yields the following 3-dimensional system of nonlinear ODEs:

ẋ(t) = −γx(t) + βh(1 − x(t))z(t) (2a)
ẏ(t) = ω − µy(t) − βv x(t)y(t) (2b)
ż(t) = βv x(t)y(t) − µz(t), (2c)

with initial condition in the domain D := {(x, y, z) : x, y, z ≥ 0, x ≤ 1}. In the following paragraphs, we discuss these equations in detail.

In Eq. (2a), the fraction of infected individuals evolves according to two contrasting mechanisms: the negative contribution −γx(t) accounts for infected individuals who recover at a rate γ > 0; the positive contribution βh(1 − x(t))z(t) accounts for new infections, whose number is proportional to the quantity of susceptible humans, the quantity of carriers, and a parameter βh > 0 that captures the human contagion rate (i.e., the likelihood that the pathogen is transmitted from a carrier vector to a human through a human-vector interaction). This equation resembles the classical SIS epidemic model [2], but here new contagions, instead of being proportional to the quantity of infected humans, are proportional to the quantity of carriers. For this reason, we shall refer to the model with dynamics in Eq. (2) and initial condition in D as the human-vector SIS model, abbreviated as HV-SIS model.

The other two equations, Eqs. (2b)–(2c), govern the dynamics of vectors. In particular, the term βv x(t)y(t) captures new carriers and gives a positive contribution to the dynamics of carriers and a negative contribution to non-carriers. This term is proportional to the quantity of non-carrier vectors, the quantity of infected humans, and a parameter βv ≥ 0 that captures the vector contagion rate (i.e., the likelihood that the pathogen is transmitted from a human to a vector through a human-vector interaction). The other two terms come from Eq. (1): newborn vectors are not carriers of the pathogen (so the rate ω appears in Eq. (2b)), while the death rate is independent of the pathogen, since vectors are only carriers and not infected with the disease, leading to the terms −µy(t) and −µz(t), respectively.

III. MAIN RESULTS ON THE HV-SIS EPIDEMIC MODEL

In this section, we present our main results on the analysis of the HV-SIS epidemic model. First, we prove that the equations are well-defined.

Lemma 1. The domain of the HV-SIS model D := {(x, y, z) : x, y, z ≥ 0, x ≤ 1} can be split into two domains D1 := {(x, y, z) : x, y, z ≥ 0, x ≤ 1, y + z ≤ ω/µ} and D2 := {(x, y, z) : x, y, z ≥ 0, x ≤ 1, y + z ≥ ω/µ}, which are positively invariant under Eq. (2).

Proof. The domain D is closed and convex and the vector field in Eq. (2) is Lipschitz-continuous. Hence, Nagumo's Theorem can be applied [24].
We need to verify that, on the boundary of the domain, the vector field does not point outward. We immediately observe that, if any of the variables is equal to 0, then the corresponding derivative is always non-negative (hence, the field does not point outward there). Similarly, at x = 1, we get that Eq. (2a) is always non-positive. Finally, when y + z = ω/µ, summing Eq. (2b) and Eq. (2c) gives ẏ + ż = 0. Hence, the vector field never points outward across any boundary, yielding the first claim. The second claim follows from the same arguments, observing that ẏ + ż ≤ 0 always holds in D2.

Then, we provide a complete characterization of the asymptotic behavior of the HV-SIS model, determining its equilibria. Specifically, we will prove that, depending on the model parameters, there is always one equilibrium that is (almost) globally asymptotically stable, characterizing two distinct regimes: either the disease is eradicated and all trajectories converge to a disease-free equilibrium (DFE), or the disease becomes endemic and (almost) all trajectories converge to an endemic equilibrium (EE), where a fraction of the population is infected (and a fraction of the vectors are carriers). The phase transition between these two regimes, which is a typical phenomenon of many epidemic models [2], [4], is shaped by the values of the model parameters, which determine the so-called epidemic threshold [4]. We start our analysis by determining the equilibria of Eq. (2) and their (local) stability.

Proposition 1. The HV-SIS model in Eq. (2) has at most two equilibria: i) the DFE

\[ (x^*, y^*, z^*) = \left(0, \tfrac{\omega}{\mu}, 0\right), \tag{3} \]

and ii) the EE

\[ (\bar{x}, \bar{y}, \bar{z}) = \left( \frac{\omega\beta_h\beta_v - \mu^2\gamma}{\omega\beta_h\beta_v + \mu\gamma\beta_v},\ \frac{\gamma\mu + \beta_h\omega}{\beta_h(\beta_v + \mu)},\ \frac{\omega\beta_h\beta_v - \mu^2\gamma}{\mu\beta_h\beta_v + \mu^2\beta_h} \right). \tag{4} \]

Specifically, let us define the epidemic threshold

\[ \sigma_0 := \frac{\beta_h\beta_v\omega}{\gamma\mu^2}. \tag{5} \]

The DFE in Eq. (3) always exists and is locally exponentially stable if σ0 < 1 and unstable if σ0 > 1. The EE in Eq. (4) exists and is distinct from the DFE if and only if σ0 > 1 and (if it exists) it is always locally exponentially stable.

Proof. First, we compute the equilibria of Eq. (2) by equating the right-hand sides to 0, obtaining a system of three nonlinear equations, which yields the two solutions in Eq. (3) and Eq. (4). Then, we observe that the DFE is always in the domain D. On the contrary, the EE is in the domain D if and only if the numerators of x̄ and z̄ are non-negative, i.e., if ωβhβv − µ²γ ≥ 0, which yields the condition ωβhβv/(µ²γ) ≥ 1. Finally, we observe that, when ωβhβv/(µ²γ) = 1, the DFE and the EE coincide, yielding the strict inequality for the existence of a second equilibrium of Eq. (2).

At this stage, we compute the Jacobian matrix of Eq. (2) at a generic point (x, y, z), that is,

\[ J(x, y, z) = \begin{pmatrix} -\gamma - \beta_h z & 0 & \beta_h(1 - x) \\ -\beta_v y & -\mu - \beta_v x & 0 \\ \beta_v y & \beta_v x & -\mu \end{pmatrix}. \tag{6} \]

By evaluating Eq. (6) at the DFE in Eq. (3), we get

\[ J(x^*, y^*, z^*) = \begin{pmatrix} -\gamma & 0 & \beta_h \\ -\beta_v \tfrac{\omega}{\mu} & -\mu & 0 \\ \beta_v \tfrac{\omega}{\mu} & 0 & -\mu \end{pmatrix}, \tag{7} \]

whose eigenvalues are λ1 = −µ and

\[ \lambda_{2,3} = \frac{1}{2\mu}\left(-\gamma\mu - \mu^2 \mp \sqrt{\gamma^2\mu^2 - 2\gamma\mu^3 + \mu^4 + 4\beta_h\beta_v\omega\mu}\right). \]

The real parts of λ1 and λ2 are always negative, while λ3 is negative if and only if γ²µ² − 2γµ³ + µ⁴ + 4βhβvωµ < (γµ + µ²)², which simplifies to the condition σ0 = βhβvω/(γµ²) < 1. Hence, the DFE is locally exponentially stable if σ0 < 1 and unstable if σ0 > 1.
Similarly, we evaluate the Jacobian matrix in Eq. (6) at the EE in Eq. (4), obtaining

\[ J(\bar{x}, \bar{y}, \bar{z}) = \begin{pmatrix} -\frac{\gamma\mu\beta_v + \omega\beta_v\beta_h}{\mu(\mu + \beta_v)} & 0 & \frac{\mu\gamma\beta_v\beta_h + \mu^2\gamma\beta_h}{\omega\beta_v\beta_h + \mu\gamma\beta_v} \\ -\frac{\beta_v\gamma\mu + \beta_v\beta_h\omega}{\beta_h(\beta_v + \mu)} & -\mu - \frac{\omega\beta_v\beta_h - \mu^2\gamma}{\omega\beta_h + \mu\gamma} & 0 \\ \frac{\beta_v\gamma\mu + \beta_v\beta_h\omega}{\beta_h(\beta_v + \mu)} & \frac{\omega\beta_v\beta_h - \mu^2\gamma}{\omega\beta_h + \mu\gamma} & -\mu \end{pmatrix}, \tag{8} \]

whose eigenvalues are λ1 = −µ, which is always negative, and another pair of eigenvalues, which are not reported due to their cumbersome expression. Again, by imposing that the larger of the two has negative real part, we obtain a complicated condition, which can be simplified to σ0 > 1; the computations are omitted due to space constraints.

Remark 1. From the expression of the epidemic threshold in Eq. (5), we observe that, as expected, increasing the infection rates βh and βv favors the spread of the disease. A similar effect is observed by increasing the vector birth rate ω. On the other hand, increasing the vector death rate µ and/or the human recovery rate γ favors the eradication of the disease. Interestingly, the vector death rate µ has a larger impact, since it appears squared in the denominator, suggesting that vector control is a potentially effective strategy to avoid outbreaks of vector-borne diseases.

Proposition 1 characterizes the local behavior of Eq. (2) about the two equilibria of the system. In order to prove global convergence, we now leverage monotone systems theory [19]. However, since the Jacobian of Eq. (2) in Eq. (6) is evidently not a Metzler matrix, we cannot directly apply monotone systems theory to Eq. (2), and we need to introduce a change of variables, as detailed in the proof of the following result.

Theorem 1. Let σ0 be the epidemic threshold from Eq. (5). If σ0 ≤ 1, then all trajectories of the HV-SIS model in Eq. (2) converge to the DFE in Eq. (3). If σ0 > 1, then all trajectories with initial condition such that x(0) ≠ 0 or z(0) ≠ 0 converge to the EE in Eq. (4).

Proof. We operate a change of variables, introducing an auxiliary 3-dimensional system formed by x(t), z(t), and v(t) = y(t) + z(t), where v(t) is governed by Eq. (1). Since y(t) = v(t) − z(t), we obtain

ẋ(t) = −γx(t) + βh(1 − x(t))z(t) (9a)
ż(t) = βv x(t)(v(t) − z(t)) − µz(t) (9b)
v̇(t) = ω − µv(t). (9c)

First, from Lemma 1, we derive that the two invariant sets, written in terms of the new variables, are D1 := {(x, z, v) : x, z, v ≥ 0, x ≤ 1, v ≤ ω/µ, z ≤ v} and D2 := {(x, z, v) : x, z, v ≥ 0, x ≤ 1, v ≥ ω/µ, z ≤ v}. Second, we prove that all trajectories in D2 are bounded (those in D1 are necessarily bounded, D1 being compact). From Eq. (9c), we observe that v̇(t) ≤ 0 for any v(t) ≥ ω/µ, which implies v(t) ≤ max{v(0), ω/µ}, which in turn implies that trajectories cannot diverge.

Fig. 2: Trajectories of the human-vector epidemic model. In (a), with µ = 0.2, σ0 < 1 and the trajectory converges to the DFE; in (b), with µ = 0.1, σ0 > 1 and the trajectory converges to the EE. The equilibrium predicted by Theorem 1 in the two cases is depicted with gray dashed horizontal lines. Common parameters are ω = βh = βv = 0.2 and γ = 0.4.

Third, we compute the Jacobian matrix of Eq. (9) at the generic point (x, z, v), obtaining

\[ \tilde{J}(x, z, v) = \begin{pmatrix} -\gamma - \beta_h z & \beta_h(1 - x) & 0 \\ \beta_v(v - z) & -\beta_v x - \mu & \beta_v x \\ 0 & 0 & -\mu \end{pmatrix}. \tag{10} \]

We observe that the matrix in Eq. (10) is Metzler, since all its off-diagonal entries are non-negative for states belonging to the two invariant sets.
Hence, the dynamical system in Eq. (9) is monotone [19], which implies that all its trajectories converge to a fixed point [19]. Fourth, since v(t) and z(t) converge, their difference y(t) = v(t) − z(t) necessarily converges as well, yielding that the trajectories of Eq. (2) also converge. Fifth, the analysis of the local stability of the equilibria in Proposition 1 yields the claim for σ0 < 1 and σ0 > 1. Finally, we observe that when σ0 = 1, Proposition 1 does not provide any information on the stability of the equilibria, but it states that the system has a unique equilibrium: the DFE. Combining this with the system's monotonicity (which implies convergence to an equilibrium), we obtain convergence to the DFE also in the case σ0 = 1, yielding the claim.

Figure 2 illustrates the results of Theorem 1. If σ0 < 1, then the disease is quickly eradicated and the system converges to the DFE. It is interesting to note from Fig. 2a that, unlike in classical SIS models [2], the convergence to the DFE may be non-monotone, with an initial increase in the epidemic prevalence x(t) until a peak is reached and the fraction of infected individuals starts decreasing to 0. On the contrary, if we increase σ0 to a value larger than 1 (e.g., by decreasing µ as in Fig. 2b), we enter the endemic regime and trajectories converge to the EE.

IV. CONTROL OF THE HV-SIS MODEL

In this section, we consider two distinct control actions that can be implemented in the prevention of vector-borne diseases, and we encapsulate them within the dynamical system in Eq. (2) by introducing two additional terms that capture these actions:
• Vector control interventions, which focus on reducing the vector population (e.g., using insecticide-based tools or integrated pest management) [20];
• Incentives to adopt personal protection measures, such as promoting the use of insect repellent, wearing long clothing, and limiting outdoor activity [21].
Both these interventions have been proven effective in preventing vector-borne diseases [20], [21]. In the following, we use the mathematical model developed in Section II to analytically assess the effectiveness of these control actions and use them to design an optimal control strategy.

In order to incorporate vector control in the model, we introduce a control parameter u1 ≥ 0 that captures the efficacy of this control action: we assume that vector control increases the vector death rate by u1. Then, we introduce a control parameter u2 ∈ [0, βh] that represents the efficacy of personal protection measures in reducing the human contagion rate. Hence, the controlled HV-SIS model is captured by the following system of ODEs:

ẋ(t) = −γx(t) + (βh − u2)(1 − x(t))z(t) (11a)
ẏ(t) = ω − (µ + u1)y(t) − βv x(t)y(t) (11b)
ż(t) = βv x(t)y(t) − (µ + u1)z(t). (11c)

Repeating the same analysis performed for the uncontrolled model in Section III, we obtain the following result, whose proof is a corollary of Proposition 1 and Theorem 1.

Corollary 1. Let

\[ \sigma_c := \frac{(\beta_h - u_2)\beta_v\omega}{\gamma(\mu + u_1)^2}. \tag{12} \]

If σc ≤ 1, then all trajectories of the controlled HV-SIS model in Eq. (11) converge to the DFE

\[ (x_c^*, y_c^*, z_c^*) = \left(0, \frac{\omega}{\mu + u_1}, 0\right). \tag{13} \]

If σc > 1, then all trajectories of the controlled HV-SIS model in Eq. (11) with initial condition such that x(0) ≠ 0 or z(0) ≠ 0 converge to the EE

\[ \bar{x}_c = \frac{\omega\beta_v(\beta_h - u_2) - (\mu + u_1)^2\gamma}{\omega\beta_v(\beta_h - u_2) + (\mu + u_1)\gamma\beta_v} \tag{14a} \]
\[ \bar{y}_c = \frac{\gamma(\mu + u_1) + (\beta_h - u_2)\omega}{(\beta_h - u_2)(\beta_v + \mu + u_1)} \tag{14b} \]
\[ \bar{z}_c = \frac{\omega\beta_v(\beta_h - u_2) - (\mu + u_1)^2\gamma}{(\mu + u_1)(\beta_h - u_2)\beta_v + (\mu + u_1)^2(\beta_h - u_2)}. \tag{14c} \]
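As a quick numerical sanity check of Corollary 1 (and, setting u1 = u2 = 0, of Theorem 1), the following sketch (an illustration assuming NumPy and SciPy, not code accompanying the paper) integrates Eq. (11) and compares the endpoint of a long trajectory with the equilibrium predicted by Eqs. (13)-(14):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameters as in Fig. 2b (endemic regime when uncontrolled)
omega, beta_h, beta_v, gamma, mu = 0.2, 0.2, 0.2, 0.4, 0.1
u1, u2 = 0.0, 0.0  # set u1, u2 > 0 to test the controlled model

def hv_sis(t, s):
    """Right-hand side of the controlled HV-SIS model, Eq. (11)."""
    x, y, z = s
    dx = -gamma * x + (beta_h - u2) * (1 - x) * z
    dy = omega - (mu + u1) * y - beta_v * x * y
    dz = beta_v * x * y - (mu + u1) * z
    return [dx, dy, dz]

sigma_c = (beta_h - u2) * beta_v * omega / (gamma * (mu + u1) ** 2)
sol = solve_ivp(hv_sis, (0.0, 2000.0), [0.05, 1.0, 0.0], rtol=1e-9, atol=1e-12)

bh, m = beta_h - u2, mu + u1
if sigma_c > 1:  # endemic equilibrium, Eq. (14)
    eq = ((omega * beta_v * bh - m**2 * gamma) / (omega * beta_v * bh + m * gamma * beta_v),
          (gamma * m + bh * omega) / (bh * (beta_v + m)),
          (omega * beta_v * bh - m**2 * gamma) / (m * bh * (beta_v + m)))
else:            # disease-free equilibrium, Eq. (13)
    eq = (0.0, omega / m, 0.0)

print(f"sigma_c = {sigma_c:.2f}, endpoint = {sol.y[:, -1]}, predicted = {eq}")
```

With the Fig. 2b parameters and no control, it reports σc = 2 and an endpoint matching (x̄, ȳ, z̄) = (0.25, 1.33, 0.67); raising u1 or u2 until σc ≤ 1 drives the endpoint to the DFE instead.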
The expressions derived in Corollary 1 can be used to assess the performance of the two different control strategies. In Fig. 3, we report the fraction of infected individuals at the EE in Eq. (14) for different values of the control inputs u1 and u2.

Fig. 3: (a) Infected individuals x̄c and (b) carrier vectors z̄c at the EE for different values of the control inputs u1 and u2. Model parameters are ω = βh = βv = 0.2, γ = 0.4, and µ = 0.1.

The figure suggests that vector control is more effective not only in reducing the epidemic threshold (as observed in Remark 1), but also in reducing the fraction of infected individuals at the EE. In fact, from Fig. 3a, we observe that when u1 = 0.01, which means an increase in the vector death rate by just 10%, the prevalence at the EE decreases by more than 25%. To obtain the same result using only protective measures, one needs a u2 that reduces the human infection rate by almost 20%. The same relation is observed for the vectors in Fig. 3b.

From Corollary 1 and the discussion above, a question naturally arises: how can one design an optimal control policy, in terms of vector control and protection measures, that achieves eradication of the outbreak while minimizing the cost of the control strategy? Formally, we can define a cost function C(u1, u2), associated with implementing levels u1 and u2 of vector control and protection measures, respectively, for which it is reasonable to make the following assumption.

Assumption 1. The cost function C(u1, u2) : [0, ∞) × [0, βh] → [0, ∞) is a non-negative differentiable function and is monotonically increasing in u1 and u2.

Then, we formulate the following optimization problem:

\[ \begin{aligned} (u_1^*, u_2^*) = \arg\min\ & C(u_1, u_2) \\ \text{subject to}\ & (\beta_h - u_2)\beta_v\omega - \gamma(\mu + u_1)^2 \le 0 \\ & u_1, u_2 \ge 0,\quad u_2 \le \beta_h, \end{aligned} \tag{15} \]

where the constraint (βh − u2)βvω − γ(µ + u1)² ≤ 0 is obtained from Eq. (12) by imposing that the DFE is globally asymptotically stable, i.e., imposing σc ≤ 1. From the analysis of the optimization problem in Eq. (15), we obtain the following result, which provides an explicit way to compute the optimal control policy for the HV-SIS epidemic model.

Theorem 2. Under Assumption 1, the optimal solution (u1*, u2*) of Eq. (15) solves the following system of equations:

\[ \begin{aligned} \frac{\partial}{\partial u_1} C(u_1, u_2) - 2\lambda\gamma(\mu + u_1) &= 0 \\ \frac{\partial}{\partial u_2} C(u_1, u_2) - \lambda\beta_v\omega &= 0 \\ (\beta_h - u_2)\beta_v\omega - \gamma(\mu + u_1)^2 &= 0. \end{aligned} \tag{16} \]

Proof. First, we observe that the problem is always feasible. In fact, u1 = 0 and u2 = βh is a solution that satisfies all the constraints. Then, we prove that the minimum of Eq. (15) is attained for values of the control inputs u1 and u2 that either are both equal to 0, or satisfy the equality constraint (βh − u2)βvω − γ(µ + u1)² = 0. To prove this statement, let us define g(u1, u2) = (βh − u2)βvω − γ(µ + u1)². If g(0, 0) ≤ 0 (which is equivalent to σ0 ≤ 1), then the monotonicity of C implies that the minimum is attained at u1* = u2* = 0. If g(0, 0) > 0, assume that (u1*, u2*) is the optimal solution of Eq. (15) and that g(u1*, u2*) < 0. The cost function at the optimal solution is equal to C(u1*, u2*). If u1* > 0, we can define ũ1*(ζ) = u1* − ζ. By continuity, since g(u1*, u2*) < 0, there exists ∆u > 0 such that g(ũ1*(∆u), u2*) ≤ 0.
Clearly, (ũ1*(∆u), u2*) is then a feasible solution of Eq. (15), and C(ũ1*(∆u), u2*) < C(u1*, u2*) due to the monotonicity of the cost function, which contradicts the assumption that (u1*, u2*) is the optimal solution of Eq. (15), yielding the claim. If u1* = 0, the same argument holds letting ũ2*(ζ) = u2* − ζ. Once we know that the optimal solution is attained at the boundary g(u1, u2) = 0, we use Lagrange multipliers to solve the optimization problem [25], writing the Lagrangian function

\[ L(u_1, u_2, \lambda) = C(u_1, u_2) + \lambda g(u_1, u_2) = C(u_1, u_2) + \lambda\left((\beta_h - u_2)\beta_v\omega - \gamma(\mu + u_1)^2\right). \tag{17} \]

Finally, a necessary condition for optimality is that the solution solves the nonlinear system obtained by setting the partial derivatives of Eq. (17) to 0 [25], yielding Eq. (16).

In general, Eq. (16) can have multiple solutions, and thus being a solution of Eq. (16) is only a necessary condition for optimality. However, in the special case in which the cost function is linear, we can derive a closed-form expression for the optimal solution of the control problem in Eq. (15).

Corollary 2. Assume that C(u1, u2) = c1u1 + c2u2, where c1 > 0 and c2 > 0 are positive constants that weight the cost of implementing vector control and personal protection measures, respectively. Then, the optimal solution (u1*, u2*) of Eq. (15) is given by

\[ u_1^* = \begin{cases} \sqrt{\frac{\beta_h\beta_v\omega}{\gamma}} - \mu & \text{if } \sigma_0 > 1 \text{ and } (c_1, c_2) \notin \mathcal{C} \\ 0 & \text{otherwise,} \end{cases} \tag{18a} \]
\[ u_2^* = \begin{cases} \frac{\beta_h\beta_v\omega - \gamma\mu^2}{\beta_v\omega} & \text{if } \sigma_0 > 1 \text{ and } (c_1, c_2) \in \mathcal{C} \\ 0 & \text{otherwise,} \end{cases} \tag{18b} \]

where

\[ \mathcal{C} := \left\{ (c_1, c_2) : \frac{c_1}{c_2} > \frac{2\gamma\mu}{\beta_v\omega},\ \left(\frac{c_1}{c_2}\beta_v\omega - \gamma\mu\right)^2 \ge \beta_h\beta_v\gamma\omega \right\}. \tag{19} \]

Proof. Eq. (18) is obtained as the unique solution of Eq. (16) for C(u1, u2) = c1u1 + c2u2.

The results in Corollary 2 suggest that, if the cost of implementing intervention policies grows linearly with the effectiveness of the intervention, then it is always beneficial to focus on implementing only one of the two types of policies. Which policy to implement depends on the model parameters and on the ratio between the costs of implementing the two control actions, as illustrated in Fig. 4.

Fig. 4: In the blue area, vector control is preferable; in the yellow area, personal protection measures are preferable, according to Corollary 2. Panel (a): βh = 0.4, βv = 0.1; panel (b): βh = 0.1, βv = 0.4. Common model parameters are ω = 0.2, γ = 0.4, and µ = 0.1.

Note that, as the vector infection rate βv becomes larger, the region in which incentivizing protection measures is preferable becomes larger. However, even when βv is four times larger than βh, the region in which vector control is more effective is larger.
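The decision rule in Corollary 2 is straightforward to implement. A minimal sketch (illustrative; the function name is ours, and the parameter values reuse those of Fig. 2 for concreteness):

```python
import math

def optimal_policy(c1, c2, beta_h, beta_v, omega, gamma, mu):
    """Closed-form optimal controls from Corollary 2, Eqs. (18)-(19)."""
    sigma0 = beta_h * beta_v * omega / (gamma * mu**2)
    if sigma0 <= 1:
        return 0.0, 0.0  # DFE already globally stable: no control needed
    r = c1 / c2
    in_C = (r > 2 * gamma * mu / (beta_v * omega)
            and (r * beta_v * omega - gamma * mu) ** 2 >= beta_h * beta_v * gamma * omega)
    if in_C:   # protection measures are cheaper: Eq. (18b)
        return 0.0, (beta_h * beta_v * omega - gamma * mu**2) / (beta_v * omega)
    else:      # vector control is cheaper: Eq. (18a)
        return math.sqrt(beta_h * beta_v * omega / gamma) - mu, 0.0

u1_opt, u2_opt = optimal_policy(c1=1.0, c2=1.0, beta_h=0.2, beta_v=0.2,
                                omega=0.2, gamma=0.4, mu=0.1)
print(f"u1* = {u1_opt:.4f}, u2* = {u2_opt:.4f}")
```

For βh = βv = ω = 0.2, γ = 0.4, µ = 0.1 and c1 = c2 = 1, the ratio c1/c2 lies outside C, so the script returns u1* ≈ 0.041 and u2* = 0: raising the vector death rate by about 41% is the cheapest way to bring σc down to 1.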
V. CONCLUSION

In this paper, we have proposed and analyzed a novel model for the spread of vector-borne diseases. The model, built using a system of ODEs, accounts for human and vector contagion, as well as for the vital dynamics of the vectors. Using systems-theoretic tools, we have studied the asymptotic behavior of the system, characterizing a phase transition between a regime where the DFE is globally asymptotically stable and a regime where the system converges to a (unique) EE. Then, by introducing two control actions in the equations, we have analytically assessed the effectiveness of vector control interventions and of incentives for humans to adopt protection measures.

These preliminary results pave the way for several lines of future research. First, the control strategies proposed in Section IV are assumed constant. Future research should focus on designing dynamical control strategies, following the approaches developed in [1], [4]. Moreover, although Theorem 2 applies to quite general cost functions, we have focused our discussion on the linear scenario, for which a closed-form expression for the optimal control can be easily derived. Nonlinear cost functions that account, e.g., for diminishing returns in the effectiveness of interventions should be investigated. Second, embedding the model on a network structure is a key future step to gain insights into the impact of the geographical displacement of humans and vectors on vector-borne diseases. Finally, the HV-SIS model can be coupled with more realistic models of human behavior, e.g., using game theory [26]–[29], to develop a more realistic framework to study interventions.

REFERENCES

[1] C. Nowzari, V. M. Preciado, and G. J. Pappas, "Analysis and control of epidemics: a survey of spreading processes on complex networks," IEEE Control Syst. Mag., vol. 36, no. 1, pp. 26–46, 2016.
[2] W. Mei, S. Mohagheghi, S. Zampieri, and F. Bullo, "On the dynamics of deterministic epidemic propagation over networks," Annu. Rev. Contr., vol. 44, pp. 116–128, 2017.
[3] P. E. Paré, C. L. Beck, and T. Başar, "Modeling, estimation, and analysis of epidemics over networks: An overview," Annu. Rev. Control, vol. 50, pp. 345–360, 2020.
[4] L. Zino and M. Cao, "Analysis, prediction, and control of epidemics: A survey from scalar to dynamic network models," IEEE Circuits Syst. Mag., vol. 21, no. 4, pp. 4–23, 2021.
[5] M. Ye and B. D. O. Anderson, "Competitive epidemic spreading over networks," IEEE Control Syst. Lett., vol. 7, pp. 545–552, 2023.
[6] V. M. Preciado, M. Zargham, C. Enyioha, A. Jadbabaie, and G. Pappas, "Optimal vaccine allocation to control epidemic outbreaks in arbitrary networks," in 52nd IEEE Conf. Decis. Control, 2013, pp. 7486–7491.
[7] A. Rizzo, B. Pedalino, and M. Porfiri, "A network model for Ebola spreading," J. Theor. Biol., vol. 394, no. 7, pp. 212–222, 2016.
[8] G. Giordano et al., "Modelling the COVID-19 epidemic and implementation of population-wide interventions in Italy," Nat. Med., vol. 26, pp. 855–860, 2020.
[9] R. Carli, G. Cavone, N. Epicoco, P. Scarabaggio, and M. Dotoli, "Model predictive control to mitigate the COVID-19 outbreak in a multi-region scenario," Annu. Rev. Control, vol. 50, pp. 373–393, 2020.
[10] F. Parino, L. Zino, M. Porfiri, and A. Rizzo, "Modelling and predicting the effect of social distancing and travel restrictions on COVID-19 spreading," J. R. Soc. Interface, vol. 18, no. 175, p. 20200875, 2021.
[11] S. Fiandrino et al., "Collaborative forecasting of influenza-like illness in Italy: The Influcast experience," Epidemics, vol. 50, p. 100819, 2025.
[12] World Health Organization, "Vector-borne diseases," https://www.who.int/news-room/fact-sheets/detail/vector-borne-diseases, 2024.
[13] J. Rocklöv and R. Dubrow, "Climate change: an enduring challenge for vector-borne disease prevention and control," Nat. Immunol., vol. 21, no. 5, pp. 479–483, 2020.
[14] J. Liu-Helmersson, J. Rocklöv, M. Sewe, and Å. Brännström, "Climate change may enable Aedes aegypti infestation in major European cities by 2100," Environ. Res., vol. 172, pp. 693–699, 2019.
[15] M. Aguiar et al., "Mathematical models for dengue fever epidemiology: A 10-year systematic review," Phys. Life Rev., vol. 40, pp. 65–92, 2022.
[16] S. T. Ogunlade, M. T. Meehan, A. I. Adekunle, and E. S. McBryde, "A systematic review of mathematical models of dengue transmission and vector control: 2010–2020," Viruses, vol. 15, no. 1, p. 254, 2023.
[17] M. J. Keeling and P. Rohani, Modeling Infectious Diseases in Humans and Animals. Princeton University Press, 2011.
[18] J. D. Murray, Continuous Population Models for Single Species. Springer New York, 1993, pp. 1–43.
[19] M. Hirsch and H. Smith, Chapter 4: Monotone Dynamical Systems. Elsevier, 2006, pp. 239–357.
[20] A. L. Wilson et al., "The importance of vector control for the control and elimination of vector-borne diseases," PLOS Negl. Trop. Dis., vol. 14, no. 1, p. e0007831, 2020.
[21] J. D. Alpern, S. J. Dunlop, B. J. Dolan, W. M. Stauffer, and D. R. Boulware, "Personal protection measures against mosquitoes, ticks, and other arthropods," Med. Clin. N. Am., vol. 100, no. 2, pp. 303–316, 2016.
[22] J. Lourenço and M. Recker, "Viral and epidemiological determinants of the invasion dynamics of novel dengue genotypes," PLOS Negl. Trop. Dis., vol. 4, no. 11, p. e894, 2010.
[23] World Health Organization, "Dengue and severe dengue," https://www.who.int/health-topics/dengue-and-severe-dengue, 2024.
[24] F. Blanchini, "Set invariance in control," Automatica, vol. 35, no. 11, pp. 1747–1767, 1999.
[25] D. P. Bertsekas, Nonlinear Programming. Belmont, MA, US: Athena Scientific, 1995.
[26] M. Ye, L. Zino, A. Rizzo, and M. Cao, "Game-theoretic modeling of collective decision making during epidemics," Phys. Rev. E, vol. 104, no. 2, p. 024314, 2021.
[27] K. Frieswijk, L. Zino, M. Ye, A. Rizzo, and M. Cao, "A mean-field analysis of a network behavioral–epidemic model," IEEE Control Syst. Lett., vol. 6, pp. 2533–2538, 2022.
[28] A. R. Hota, T. Sneh, and K. Gupta, "Impacts of game-theoretic activation on epidemic spread over dynamical networks," SIAM J. Control Optim., vol. 60, no. 2, pp. S92–S118, 2022.
[29] K. Paarporn and C. Eksin, "SIS epidemics coupled with evolutionary social distancing dynamics," in 2023 Am. Control Conf., 2023, pp. 4308–4313.
A Human-Vector Susceptible-Infected-Susceptible Model for Analyzing and Controlling the Spread of Vector-Borne Diseases Lorenzo Zino, Alessandro Casu, and Alessandro Rizzo Abstract- We propose an epidemic model for the spread of vector-borne diseases. The model, which is built extending the classical susceptible-infected-susceptible model, accounts for two populations -humans and vectors- and for crosscontagion between the two species, whereby humans become infected upon interaction with carrier vectors, and vectors become carriers after interaction with infected humans. We formulate the model as a system of ordinary differential equations and leverage monotone systems theory to rigorously characterize the epidemic dynamics. Specifically, we characterize the global asymptotic behavior of the disease, determining conditions for quick eradication of the disease (i.e., for which all trajectories converge to a disease-free equilibrium), or convergence to a (unique) endemic equilibrium. Then, we incorporate two control actions: namely, vector control and incentives to adopt protection measures. Using the derived mathematical tools, we assess the impact of these two control actions and determine the optimal control policy. I. INTRODUCTION In the last decade, mathematical models of epidemic diseases have gained traction within the systems and control community [1]-[5]. In fact, the development of increasingly refined models has allowed to accurately predict the course of an epidemic outbreak and, ultimately, to design and assess intervention policies [1], [4], [6]. In particular, the latest epidemiological threats, such as the outbreaks of Ebola, COVID-19, and seasonal flu have provided further motivation to pursue these studies, yielding tailored versions of these general epidemic models [7]-[11]. Typical modeling setups deal with human-to-human contagion mechanisms [1]-[5]. However, according to the World Health Organization, more than 17% of all infectious diseases are vector-borne [12]. This means that they are not transmitted through human-to-human interactions, but by arthropod vectors (such as mosquitoes, fleas, or ticks) that can carry pathogens and transmit them to humans [12]. Vector-borne diseases (including dengue, malaria, and West Nile fever) pose a significant threat to our society, being causing more than 700,000 deaths annually [12]. Moreover, the ongoing climate change crisis exacerbates concerns on the prevention of these diseases, as vectors adapt to new habitats [13]. This is the case, e.g., of Aedes aegypti (responsible for the transmission of several diseases, including dengue, Zika, and yellow fever), which is predicted to infest many regions of Europe if the temperature increases by 2°C [14]. The authors are with the - tions, Politecnico di Torino, Torino, Italy ( , , ). Numerous mathematical models of vector-borne diseases have been proposed and studied, particularly in response to the increasing concern for dengue fever, leading to a rich body of research [15], [16]. However, most of these models, developed by computational epidemiologists as complex simulation tools, offer limited analytical tractability [17]. Conversely, there is a scarcity of parsimonious models that efficiently balance accuracy and interpretability. Here, we fill in this gap by proposing a novel mathematical model for vector-borne diseases. Our model, grounded in dynamical systems theory, considers two interacting populations of humans and vectors. 
Through such interactions, the pathogen is transmitted from carrier vectors to susceptible humans and from infectious humans to vectors, establishing a positive feedback loop of contagion. Formally, we cast our model as a system of nonlinear ordinary differential equations (ODEs), in which we couple i) an epidemic model for humans, inspired by the Susceptible-Infected-Susceptible (SIS) model [2], ii) a contagion model for vectors, inspired by the Susceptible-Infected (SI) model [2], and iii) a vital dynamics for vectors, which is modeled using a birth-death process [18]. We refer to the model obtained as the humanvector SIS (HV-SIS) epidemic model. In addition to the formulation of the model, the main contribution of this paper is twofold. First, by leveraging monotone dynamical systems theory [19], we perform a thorough analysis of the asymptotic behavior of the HV-SIS model, characterizing two regimes: one where the epidemic outbreak is quickly eradicated, leading to global convergence to a disease-free equilibrium; and one where the disease becomes endemic, and the system converges to a (unique) endemic equilibrium. Second, we introduce two control actions: namely, vector control -which focuses on reducing the vector population of vectors (e.g., using pesticides) [20]- and the use of personal protection measures against contagion [21]. By studying the controlled HV-SIS model and formulating an optimization problem, we investigate the optimal control policies to prevent outbreaks of vector-borne diseases, as a function of the model parameters and the cost associated with implementing interventions. II. HUMAN-VECTOR SIS EPIDEMIC MODEL We consider a large population of humans that interact with a population of vectors. Similar to most epidemic models [2], we observe that the duration of an epidemic outbreak is typically negligible with respect to the life-span of humans. Hence, we approximate the size of the human population as constant. Moreover, being the population large, 16 Oct 2025 human dynamics vector dynamics S I N C γ μ ω μ βh βv vector dynamics vector dynamics Fig. 1: Schematic of the human-vector epidemic model. Solid arrows represent possible transitions of the state of humans (S and I for susceptible and infected, respectively) and vectors (N and C for non-carrier and carrier, respectively). Dashed arrows are associated with the vital dynamics of vectors. Dotted colored arrows indicate transitions that are triggered by interactions with humans or vectors with a specific state. we approximate it as a continuum of individuals with total mass equal to 1 [2]. On the contrary, the life-span of a vector is typically comparable with the infection propagation dynamics [22]. Hence, we assume that the total quantity of vectors v(t) ≥0 (normalized with respect to the unit mass human population) evolves in continuous-time t ≥0 according to a classical ODE associated with a birth-death process, typically used in mathematical biological models [18]: ̇v(t) = ω -μv(t), (1) where ω > 0 and μ > 0 are two constants representing the birth and death rate, respectively. Humans can be healthy and susceptible to the disease or infected with the disease. We assume that there is no natural immunity: after recovery, individuals are again susceptible to the disease. This is a good proxy for many diseases, e.g., dengue fever, for which natural immunity wanes quickly and it only protects against the virus serotype specific of the previous infection [23]. 
We denote by x(t) ∈ [0, 1] and s(t) ∈ [0, 1] the fraction of infected individuals and susceptible individuals at time t ≥ 0, respectively. Since there is no immunity, it holds s(t) = 1 − x(t). Similarly, vectors can be either carriers of the pathogen or non-carriers. We denote by y(t) ≥ 0 the quantity of non-carrier vectors and by z(t) ≥ 0 the quantity of carrier vectors. Being v(t) the total quantity of vectors at time t, then y(t) + z(t) = v(t). We assume that the two populations are well-mixed, and we define a human-vector compartmental model that describes the evolution of the fraction of susceptible and infected individuals and the quantity of carriers and non-carriers in the two populations. The compartmental model, illustrated in Fig. 1, yields the following 3-dimensional system of nonlinear ODEs:

$\dot x(t) = -\gamma x(t) + \beta_h (1 - x(t))\, z(t)$, (2a)
$\dot y(t) = \omega - \mu y(t) - \beta_v x(t)\, y(t)$, (2b)
$\dot z(t) = \beta_v x(t)\, y(t) - \mu z(t)$, (2c)

with initial condition in the domain D := {(x, y, z) : x, y, z ≥ 0, x ≤ 1}. In the following paragraphs, we extensively discuss these equations. In Eq. (2a), the fraction of infected individuals evolves according to two contrasting mechanisms: the negative contribution −γx(t) accounts for infected individuals who recover at a rate γ > 0; the positive contribution βh(1 − x(t))z(t) accounts for new infections, whose number is proportional to the quantity of susceptible humans, the quantity of carriers, and a parameter βh > 0 that captures the human contagion rate (i.e., the likelihood that the pathogen is transmitted from a carrier vector to a human through a human-vector interaction). This equation resembles the classical SIS epidemic model [2], but here new contagions, instead of being proportional to the quantity of infected humans, are proportional to the quantity of carriers. For this reason, we shall refer to the model with dynamics in Eq. (2) and initial condition in D as the human-vector SIS model, abbreviated as HV-SIS model. The other two equations, Eqs. (2b)-(2c), govern the dynamics of vectors. In particular, the term βv x(t) y(t) captures new carriers and gives a positive contribution to the dynamics of carriers and a negative contribution to non-carriers. This term is proportional to the quantity of non-carrier vectors, the quantity of infected humans, and a parameter βv ≥ 0 that captures the vector contagion rate (i.e., the likelihood that the pathogen is transmitted from a human to a vector through a human-vector interaction). The other two terms come from Eq. (1): newborn vectors are not carriers of the pathogen (so the rate ω appears in Eq. (2b)), while the death rate is independent of the pathogen, since vectors are only carriers and not infected with the disease, leading to the terms −μy(t) and −μz(t), respectively.
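As a quick numerical illustration of the dynamics in Eq. (2), the following sketch (ours, not part of the paper; function and variable names are our own choices) integrates the HV-SIS model with SciPy, using the parameter values quoted later for Fig. 2.

```python
# Minimal sketch: integrating the HV-SIS system of Eq. (2) with SciPy.
import numpy as np
from scipy.integrate import solve_ivp

def hv_sis_rhs(t, state, gamma, beta_h, beta_v, omega, mu):
    """Right-hand side of Eq. (2): x = infected humans, y/z = non-carrier/carrier vectors."""
    x, y, z = state
    dx = -gamma * x + beta_h * (1.0 - x) * z    # Eq. (2a)
    dy = omega - mu * y - beta_v * x * y        # Eq. (2b)
    dz = beta_v * x * y - mu * z                # Eq. (2c)
    return [dx, dy, dz]

# Parameters of Fig. 2(b): omega = beta_h = beta_v = 0.2, gamma = 0.4, mu = 0.1.
args = (0.4, 0.2, 0.2, 0.2, 0.1)
sol = solve_ivp(hv_sis_rhs, (0.0, 200.0), [0.05, 1.0, 0.0], args=args, rtol=1e-8)

sigma0 = (0.2 * 0.2 * 0.2) / (0.4 * 0.1**2)     # epidemic threshold, Eq. (5) below
print(f"sigma_0 = {sigma0:.1f}, state at t = 200: {sol.y[:, -1]}")
# sigma_0 = 2.0 > 1: the trajectory approaches the endemic equilibrium of Eq. (4).
```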
III. MAIN RESULTS ON THE HV-SIS EPIDEMIC MODEL

In this section, we present our main results on the analysis of the HV-SIS epidemic model. First, we prove that the equations are well-defined.

Lemma 1. The domain of the HV-SIS model D := {(x, y, z) : x, y, z ≥ 0, x ≤ 1} can be split into two domains D1 := {(x, y, z) : x, y, z ≥ 0, x ≤ 1, y + z ≤ ω/μ} and D2 := {(x, y, z) : x, y, z ≥ 0, x ≤ 1, y + z ≥ ω/μ}, which are positively invariant under Eq. (2).

Proof. The domain D is closed and convex and the vector field in Eq. (2) is Lipschitz-continuous. Hence, Nagumo's Theorem can be applied [24]. We need to verify that the vector field at the boundaries of the domain does not point towards the boundary. We immediately observe that, if any of the variables is equal to 0, then the corresponding derivative is always non-negative (hence, it does not point towards the boundary). Similarly, at x = 1, we get that Eq. (2a) is always non-positive. Finally, when y + z = ω/μ, summing Eq. (2b) and Eq. (2c) gives $\dot y + \dot z = 0$. Hence, the vector field does not point towards any boundary, yielding the first claim. The second claim follows the same arguments, where we observe that it always holds that $\dot y + \dot z \le 0$ in D2.

Then, we provide a complete characterization of the asymptotic behavior of the HV-SIS model, determining its equilibria. Specifically, we will prove that, depending on the model parameters, there is always one equilibrium that is (almost) globally asymptotically stable, characterizing two distinct regimes: either the disease is eradicated and all trajectories converge to a disease-free equilibrium (DFE), or the disease becomes endemic and (almost) all trajectories converge to an endemic equilibrium (EE), where a fraction of the population is infected (and a fraction of the vectors are carriers). The phase transition between these two regimes, which is a typical phenomenon of many epidemic models [2], [4], is shaped by the value of the model parameters that determine the so-called epidemic threshold [4]. We start our analysis by determining the equilibria of Eq. (2) and determining their (local) stability.

Proposition 1. The HV-SIS model in Eq. (2) has at most two equilibria: i) the DFE

$(x^*, y^*, z^*) = \left(0,\ \frac{\omega}{\mu},\ 0\right)$, (3)

and ii) the EE

$(\bar x, \bar y, \bar z) = \left(\frac{\omega\beta_h\beta_v - \mu^2\gamma}{\omega\beta_h\beta_v + \mu\gamma\beta_v},\ \frac{\gamma\mu + \beta_h\omega}{\beta_h(\beta_v + \mu)},\ \frac{\omega\beta_h\beta_v - \mu^2\gamma}{\mu\beta_h\beta_v + \mu^2\beta_h}\right)$. (4)

Specifically, let us define the epidemic threshold

$\sigma_0 := \frac{\beta_h\beta_v\omega}{\gamma\mu^2}$. (5)

The DFE in Eq. (3) always exists and is locally exponentially stable if σ0 < 1. The EE in Eq. (4) exists and is distinct from the DFE if and only if σ0 > 1 and (if it exists) it is always locally exponentially stable.

Proof. First, we compute the equilibria of Eq. (2) by equating the right-hand sides to 0, obtaining a system of three nonlinear equations, which yields the two solutions in Eq. (3) and Eq. (4). Then, we observe that the DFE is always in the domain D. On the contrary, the EE is in the domain D if and only if the numerators of x̄ and z̄ are non-negative, i.e., if ωβhβv − μ²γ ≥ 0, which yields the condition ωβhβv/(μ²γ) ≥ 1. Finally, we observe that, when ωβhβv/(μ²γ) = 1, the DFE and the EE coincide, yielding the strict inequality for the existence of a second equilibrium of Eq. (2). At this stage, we compute the Jacobian matrix of Eq. (2) at a generic point (x, y, z), that is,

$J(x, y, z) = \begin{pmatrix} -\gamma - \beta_h x & 0 & \beta_h(1 - x) \\ -\beta_v y & -\beta_v x - \mu & 0 \\ \beta_v y & \beta_v x & -\mu \end{pmatrix}$. (6)

By evaluating Eq. (6) at the DFE in Eq. (3), we get

$J(x^*, y^*, z^*) = \begin{pmatrix} -\gamma & 0 & \beta_h \\ -\beta_v \frac{\omega}{\mu} & -\mu & 0 \\ \beta_v \frac{\omega}{\mu} & 0 & -\mu \end{pmatrix}$, (7)

whose eigenvalues are λ1 = −μ and $\lambda_2 = \frac{1}{2\mu}\left(-\gamma\mu - \mu^2 - \sqrt{\gamma^2\mu^2 - 2\gamma\mu^3 + \mu^4 + 4\beta_h\beta_v\omega\mu}\right)$, whose real parts are always negative, and $\lambda_3 = \frac{1}{2\mu}\left(-\gamma\mu - \mu^2 + \sqrt{\gamma^2\mu^2 - 2\gamma\mu^3 + \mu^4 + 4\beta_h\beta_v\omega\mu}\right)$, which is negative if and only if γ²μ² − 2γμ³ + μ⁴ + 4βhβvωμ < (γμ + μ²)², i.e., if and only if σ0 < 1. Similarly, we evaluate the Jacobian matrix in Eq. (6) at the EE in Eq. (4), obtaining

$J(\bar x, \bar y, \bar z) = \begin{pmatrix} -\gamma - \beta_h\bar x & 0 & \beta_h(1 - \bar x) \\ -\beta_v\bar y & -\beta_v\bar x - \mu & 0 \\ \beta_v\bar y & \beta_v\bar x & -\mu \end{pmatrix}$, with $\beta_v\bar x = \frac{\omega\beta_v\beta_h - \mu^2\gamma}{\omega\beta_h + \mu\gamma}$ and $\beta_v\bar y = \frac{\beta_v(\gamma\mu + \beta_h\omega)}{\beta_h(\beta_v + \mu)}$, (8)

whose eigenvalues are λ1 = −μ, which is always negative, and another pair of eigenvalues, which are not reported due to their cumbersome expression. Again, by imposing that the larger of the two has negative real part, we obtain a complicated condition which can be simplified to σ0 > 1, where computations are omitted due to space constraints.
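A small numerical cross-check of Proposition 1 (our sketch, not from the paper): build the EE of Eq. (4) and the Jacobian of Eq. (6), then confirm that the equilibrium residuals vanish and that all eigenvalues at the EE have negative real part when σ0 > 1.

```python
# Sketch: verifying Proposition 1 for one parameter set (those of Fig. 2(b)).
import numpy as np

gamma, beta_h, beta_v, omega, mu = 0.4, 0.2, 0.2, 0.2, 0.1
sigma0 = beta_h * beta_v * omega / (gamma * mu**2)          # Eq. (5); equals 2 here

# Endemic equilibrium, Eq. (4).
x_e = (omega*beta_h*beta_v - mu**2*gamma) / (omega*beta_h*beta_v + mu*gamma*beta_v)
y_e = (gamma*mu + beta_h*omega) / (beta_h*(beta_v + mu))
z_e = (omega*beta_h*beta_v - mu**2*gamma) / (mu*beta_h*beta_v + mu**2*beta_h)

def rhs(x, y, z):
    return np.array([-gamma*x + beta_h*(1 - x)*z,
                     omega - mu*y - beta_v*x*y,
                     beta_v*x*y - mu*z])

def jacobian(x, y, z):                                       # Eq. (6)
    return np.array([[-gamma - beta_h*x, 0.0,             beta_h*(1 - x)],
                     [-beta_v*y,        -beta_v*x - mu,   0.0           ],
                     [ beta_v*y,         beta_v*x,       -mu            ]])

print(np.max(np.abs(rhs(x_e, y_e, z_e))))                    # ~ 0: it is an equilibrium
print(sigma0, np.max(np.linalg.eigvals(jacobian(x_e, y_e, z_e)).real))  # < 0: stable EE
```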
Remark 1. From the expression of the epidemic threshold in Eq. (5), we observe that, as predictable, increasing the infection rates βh and βv favors the spread of the disease. A similar effect is observed by increasing the vector birth rate ω. On the other hand, increasing the vector death rate μ and/or the human recovery rate γ favors the eradication of the disease. Interestingly, the vector death rate μ has a larger impact, since it appears squared in the denominator, suggesting that vector control is a potentially effective strategy to avoid outbreaks of vector-borne diseases.

Proposition 1 characterizes the local behavior of Eq. (2) about the two equilibria of the system. In order to prove global convergence, we now leverage monotone systems theory [19]. However, since the Jacobian of Eq. (2) in Eq. (6) is evidently not a Metzler matrix, we cannot directly apply monotone systems theory to Eq. (2), and we need to introduce a change of variables, as detailed in the proof of the following result.

Theorem 1. Let σ0 be the epidemic threshold from Eq. (5). If σ0 ≤ 1, then all trajectories of the HV-SIS model in Eq. (2) converge to the DFE in Eq. (3). If σ0 > 1, then all trajectories with initial condition such that x(0) ≠ 0 or z(0) ≠ 0 converge to the EE in Eq. (4).

Proof. We operate a change of variables, where we introduce an auxiliary 3-dimensional system formed by x(t), z(t), and v(t) = y(t) + z(t), which is governed by Eq. (1). Being y(t) = v(t) − z(t), we obtain

$\dot x(t) = -\gamma x(t) + \beta_h(1 - x(t))\, z(t)$, (9a)
$\dot z(t) = \beta_v x(t)\,(v(t) - z(t)) - \mu z(t)$, (9b)
$\dot v(t) = \omega - \mu v(t)$. (9c)

First, from Lemma 1, we derive that the two invariant sets, written in terms of the new variables, are D1 := {(x, z, v) : x, z, v ≥ 0, x ≤ 1, v ≤ ω/μ, z ≤ v} and D2 := {(x, z, v) : x, z, v ≥ 0, x ≤ 1, v ≥ ω/μ, z ≤ v}. Second, we prove that all trajectories in D2 are bounded (those in D1 are necessarily bounded, being D1 compact). From Eq. (9c), we observe that $\dot v(t) \le 0$ for any v(t) ≥ ω/μ, which implies v(t) ≤ max{v(0), ω/μ}, which in turn implies that trajectories cannot diverge.

Fig. 2: Trajectories of the human-vector epidemic model for (a) μ = 0.2 and (b) μ = 0.1. In (a), σ0 < 1 and the trajectory converges to the DFE; in (b), σ0 > 1 and the trajectory converges to the EE. The equilibrium predicted by Theorem 1 in the two cases is depicted with gray dashed horizontal lines. Common parameters are ω = βh = βv = 0.2 and γ = 0.4.

Third, we compute the Jacobian matrix of Eq. (9) at the generic point (x, z, v), obtaining

$\tilde J(x, z, v) = \begin{pmatrix} -\gamma - \beta_h z & \beta_h(1 - x) & 0 \\ \beta_v(v - z) & -\beta_v x - \mu & \beta_v x \\ 0 & 0 & -\mu \end{pmatrix}$. (10)

We observe that the matrix in Eq. (10) is Metzler, since all its off-diagonal entries are non-negative for all states belonging to the two invariant sets. Hence, the dynamical system in Eq. (9) is monotone [19], which implies that all its trajectories converge to a fixed point [19]. Fourth, since v(t) and z(t) converge, also their difference y(t) = v(t) − z(t) necessarily converges to a fixed point, yielding that also the trajectories of Eq. (2) converge. Fifth, the analysis of the local stability of the equilibria in Proposition 1 yields the claim for σ0 ≠ 1. Finally, we observe that when σ0 = 1, Proposition 1 does not provide any information on the stability of the equilibria, but it states that the system has a unique equilibrium: the DFE. Combining this with the system's monotonicity (which implies convergence to an equilibrium), we obtain convergence to the DFE also in the case σ0 = 1, yielding the claim.
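The Metzler property invoked in the proof is easy to probe numerically. The sketch below (ours) samples the invariant domain and checks that every off-diagonal entry of the Jacobian in Eq. (10) is non-negative.

```python
# Sketch: sampling check of the Metzler property of Eq. (10) on {0<=x<=1, 0<=z<=v}.
import numpy as np

gamma, beta_h, beta_v, mu = 0.4, 0.2, 0.2, 0.1

def jacobian_xzv(x, z, v):                      # Jacobian of Eq. (9), cf. Eq. (10)
    return np.array([[-gamma - beta_h*z,  beta_h*(1 - x),  0.0      ],
                     [ beta_v*(v - z),   -beta_v*x - mu,   beta_v*x ],
                     [ 0.0,               0.0,            -mu       ]])

rng = np.random.default_rng(0)
off_diag = ~np.eye(3, dtype=bool)
for _ in range(10_000):
    x = rng.uniform(0.0, 1.0)
    v = rng.uniform(0.0, 4.0)                   # any v >= 0 works; z must satisfy z <= v
    z = rng.uniform(0.0, v)
    assert (jacobian_xzv(x, z, v)[off_diag] >= 0).all()
print("Eq. (10) is Metzler on the sampled domain.")
```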
Figure 2 illustrates the results of Theorem 1.

The controlled HV-SIS model in Eq. (11) is obtained from Eq. (2) by introducing two control inputs: vector control u1 ≥ 0, which increases the vector death rate from μ to μ + u1, and protection measures u2 ∈ [0, βh], which decrease the human infection rate from βh to βh − u2. The associated controlled epidemic threshold in Eq. (12) is $\sigma_c := \frac{(\beta_h - u_2)\beta_v\omega}{\gamma(\mu + u_1)^2}$. Corollary 1 characterizes the asymptotic behavior of the controlled model: if σc > 1, then all trajectories of the controlled HV-SIS model in Eq. (11) with initial condition such that x(0) ≠ 0 or z(0) ≠ 0 converge to the EE

$\bar x_c = \frac{\omega\beta_v(\beta_h - u_2) - (\mu + u_1)^2\gamma}{\omega\beta_v(\beta_h - u_2) + (\mu + u_1)\gamma\beta_v}$, (14a)
$\bar y_c = \frac{\gamma(\mu + u_1) + (\beta_h - u_2)\omega}{(\beta_h - u_2)(\beta_v + \mu + u_1)}$, (14b)
$\bar z_c = \frac{\omega\beta_v(\beta_h - u_2) - (\mu + u_1)^2\gamma}{(\mu + u_1)(\beta_h - u_2)\beta_v + (\mu + u_1)^2(\beta_h - u_2)}$. (14c)

The expressions derived in Corollary 1 can be used to assess the performance of the two different control strategies. In Fig. 3, we report the value of the fraction of infected individuals at the EE in Eq. (14) for different values of the control inputs u1 and u2.

Fig. 3: (a) Infected individuals x̄c and (b) carrier vectors z̄c at the EE for different values of the control inputs u1 and u2. Model parameters are ω = βh = βv = 0.2, γ = 0.4, and μ = 0.1.

The figure suggests that vector control is more effective not only in reducing the epidemic threshold (as observed in Remark 1), but also in reducing the fraction of infected individuals at the EE. In fact, from Fig. 3a, we observe that when u1 = 0.01, which means an increase in the vector death rate by just 10%, the prevalence at the EE decreases by more than 25%. To obtain the same result using only protective measures, one needs a u2 that reduces the human infection rate by almost 20%. The same relation is observed for the vectors in Fig. 3b.

From Corollary 1 and the following discussion, a natural question arises. How can one design an optimal control policy in terms of vector control and protection measures to achieve eradication of the outbreak, minimizing the cost of the control strategy? Formally, we can define a cost function C(u1, u2), associated with implementing levels u1 and u2 of vector control and protection measures, respectively, for which it is reasonable to make the following assumptions.

Assumption 1. The cost function C(u1, u2) : [0, ∞) × [0, βh] → [0, ∞) is a non-negative differentiable function and it is monotonically increasing in u1 and u2.

Then, we formulate the following optimization problem:

$(u_1^*, u_2^*) = \arg\min\ C(u_1, u_2)$
subject to $(\beta_h - u_2)\beta_v\omega - \gamma(\mu + u_1)^2 \le 0$, $u_1, u_2 \ge 0$, $u_2 \le \beta_h$, (15)

where the constraint (βh − u2)βvω − γ(μ + u1)² ≤ 0 is obtained from Eq. (12), by imposing that the DFE is globally asymptotically stable, i.e., imposing σc ≤ 1. From the analysis of the optimization problem in Eq. (15), we obtain the following result, which provides an explicit way to compute the optimal control policy for the HV-SIS epidemic model.

Theorem 2. Under Assumption 1, the optimal solution (u1*, u2*) of Eq. (15) solves, together with a multiplier λ, the following system of equations:

$\frac{\partial}{\partial u_1}C(u_1, u_2) - 2\lambda\gamma(\mu + u_1) = 0$,
$\frac{\partial}{\partial u_2}C(u_1, u_2) - \lambda\beta_v\omega = 0$,
$(\beta_h - u_2)\beta_v\omega - \gamma(\mu + u_1)^2 = 0$. (16)

Proof. First, we observe that the problem is always feasible. In fact, u1 = 0 and u2 = βh is a solution that satisfies all the constraints. Then, we prove that the minimum of Eq. (15) is attained for values of the control inputs u1 and u2 that either are both equal to 0, or satisfy the equality constraint (βh − u2)βvω − γ(μ + u1)² = 0. To prove this statement, let us define g(u1, u2) = (βh − u2)βvω − γ(μ + u1)². If g(0, 0) ≤ 0 (which is equivalent to σ0 ≤ 1), then the monotonicity of C implies that the minimum is attained at u1* = u2* = 0. If g(0, 0) > 0, assume that (u1*, u2*) is the optimal solution of Eq. (15) and that g(u1*, u2*) < 0. We can define ũ1*(ζ) = u1* − ζ. By continuity, being g(u1*, u2*) < 0, there exists ∆u > 0 such that g(ũ1*(∆u), u2*) ≤ 0. Clearly, (ũ1*(∆u), u2*) is then a feasible solution of Eq. (15), and C(ũ1*(∆u), u2*) < C(u1*, u2*) by the monotonicity of C, contradicting the optimality of (u1*, u2*). Hence, at a nontrivial optimum the constraint is active, and Eq. (16) follows from the first-order optimality conditions on the active constraint.
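The optimality system in Eq. (16) can be solved numerically for smooth costs. The sketch below (ours; the quadratic cost is a purely hypothetical choice used for illustration) finds a root of the three equations with scipy.optimize.fsolve.

```python
# Sketch: solving the system in Eq. (16) for a hypothetical quadratic cost
# C(u1, u2) = c1*u1**2 + c2*u2**2. Unknowns: (u1, u2, lam).
import numpy as np
from scipy.optimize import fsolve

gamma, beta_h, beta_v, omega, mu = 0.4, 0.2, 0.2, 0.2, 0.1
c1, c2 = 1.0, 1.0

def optimality_system(unknowns):
    u1, u2, lam = unknowns
    return [2*c1*u1 - 2*lam*gamma*(mu + u1),                  # dC/du1 - 2*lam*gamma*(mu+u1)
            2*c2*u2 - lam*beta_v*omega,                       # dC/du2 - lam*beta_v*omega
            (beta_h - u2)*beta_v*omega - gamma*(mu + u1)**2]  # active constraint: sigma_c = 1

u1, u2, lam = fsolve(optimality_system, x0=[0.05, 0.01, 1.0])
print(u1, u2)   # controls placing the system exactly at the eradication threshold
print((beta_h - u2)*beta_v*omega - gamma*(mu + u1)**2)        # residual ~ 0
```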
Corollary 2. Let C(u1, u2) = c1u1 + c2u2, where c1 > 0 and c2 > 0 are positive constants that weight the cost for implementing vector control and personal protection measures, respectively. Then, the optimal solution (u1*, u2*) of Eq. (15) is given by

$u_1^* = \begin{cases} \sqrt{\frac{\beta_h\beta_v\omega}{\gamma}} - \mu & \text{if } \sigma_0 > 1 \text{ and } (c_1, c_2) \notin \mathcal{C}, \\ 0 & \text{otherwise,} \end{cases}$ (18a)

$u_2^* = \begin{cases} \frac{\beta_h\beta_v\omega - \gamma\mu^2}{\beta_v\omega} & \text{if } \sigma_0 > 1 \text{ and } (c_1, c_2) \in \mathcal{C}, \\ 0 & \text{otherwise,} \end{cases}$ (18b)

where

$\mathcal{C} := \left\{(c_1, c_2) : \frac{c_1}{c_2} > \frac{2\gamma\mu}{\beta_v\omega},\ \left(\frac{c_1}{c_2}\beta_v\omega - \gamma\mu\right)^2 \ge \beta_h\beta_v\gamma\omega\right\}$. (19)

Proof. Eq. (18) is obtained as the unique solution of Eq. (16) for C(u1, u2) = c1u1 + c2u2.

The results in Corollary 2 suggest that, if the cost for implementing intervention policies grows linearly in the effectiveness of the intervention, then it is always beneficial to focus on implementing only one of the two types of policies. Which policy to implement depends on the model parameters and on the ratio between the costs for implementing the two control actions, as illustrated in Fig. 4.

Fig. 4: In the blue area, vector control is preferable; in the yellow area, personal protection measures are preferable, according to Corollary 2. Panels: (a) βh = 0.4, βv = 0.1; (b) βh = 0.1, βv = 0.4. Common model parameters are ω = 0.2, γ = 0.4, and μ = 0.1.

Note that, as the vector infection rate βv becomes larger, the region in which incentivizing protection measures is preferable becomes larger. However, even when βv is four times larger than βh, the region in which vector control is more effective is larger.
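A direct evaluation of the closed-form policy in Eq. (18) (our sketch) makes the corner structure concrete: depending on the cost ratio c1/c2, the optimum is either pure vector control or pure protection.

```python
# Sketch: the optimal policy of Corollary 2, Eqs. (18)-(19), for the Fig. 4(a) setup.
import numpy as np

gamma, omega, mu = 0.4, 0.2, 0.1
beta_h, beta_v = 0.4, 0.1
sigma0 = beta_h * beta_v * omega / (gamma * mu**2)            # = 2 > 1 here

def optimal_policy(c1, c2):
    r = c1 / c2
    in_C = (r > 2*gamma*mu/(beta_v*omega) and
            (r*beta_v*omega - gamma*mu)**2 >= beta_h*beta_v*gamma*omega)   # Eq. (19)
    if sigma0 <= 1:
        return 0.0, 0.0
    if in_C:                                                   # protection measures only
        return 0.0, (beta_h*beta_v*omega - gamma*mu**2)/(beta_v*omega)
    return np.sqrt(beta_h*beta_v*omega/gamma) - mu, 0.0        # vector control only

print(optimal_policy(1.0, 1.0))   # cheap vector control:  (~0.041, 0)
print(optimal_policy(8.0, 1.0))   # costly vector control: (0, 0.2)
```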
V. CONCLUSION

In this paper, we have proposed and analyzed a novel model for the spread of vector-borne diseases. The model, built using a system of ODEs, accounts for human and vector contagion, as well as for the vital dynamics of the vectors. Using systems-theoretic tools, we have studied the asymptotic behavior of the system, characterizing a phase transition between a regime where the DFE is globally asymptotically stable and a regime where the system converges to a (unique) EE. Then, by introducing two control actions in the equations, we have analytically assessed the effectiveness of vector control interventions and incentives for humans to adopt protection measures. These preliminary results pave the way for several lines of future research. First, the control strategies proposed in Section IV are assumed constant. Future research should focus on designing dynamical control strategies, following the approaches developed in [1], [4]. Moreover, although Theorem 2 applies to quite general cost functions, we have focused our discussion on the linear scenario, for which a closed-form expression for the optimal control can be easily derived. Nonlinear cost functions that account, e.g., for diminishing returns in the effectiveness of interventions should be investigated. Second, embedding the model on a network structure is a key future step to gain insights into the impact of the geographical displacement of humans and vectors on vector-borne diseases. Finally, the HV-SIS model can be coupled with more realistic models of human behavior, e.g., using game theory [26]-[29], to develop a more realistic framework to study interventions.

REFERENCES

[1] C. Nowzari, V. M. Preciado, and G. J. Pappas, "Analysis and control of epidemics: a survey of spreading processes on complex networks," IEEE Control Syst. Mag., vol. 36, no. 1, pp. 26-46, 2016.
[2] W. Mei, S. Mohagheghi, S. Zampieri, and F. Bullo, "On the dynamics of deterministic epidemic propagation over networks," Annu. Rev. Control, vol. 44, pp. 116-128, 2017.
[3] P. E. Paré, C. L. Beck, and T. Başar, "Modeling, estimation, and analysis of epidemics over networks: An overview," Annu. Rev. Control, vol. 50, pp. 345-360, 2020.
[4] L. Zino and M. Cao, "Analysis, prediction, and control of epidemics: A survey from scalar to dynamic network models," IEEE Circuits Syst. Mag., vol. 21, no. 4, pp. 4-23, 2021.
[5] M. Ye and B. D. O. Anderson, "Competitive epidemic spreading over networks," IEEE Control Syst. Lett., vol. 7, pp. 545-552, 2023.
[6] V. M. Preciado, M. Zargham, C. Enyioha, A. Jadbabaie, and G. Pappas, "Optimal vaccine allocation to control epidemic outbreaks in arbitrary networks," in 52nd IEEE Conf. Decis. Control, 2013, pp. 7486-7491.
[7] A. Rizzo, B. Pedalino, and M. Porfiri, "A network model for Ebola spreading," J. Theor. Biol., vol. 394, no. 7, pp. 212-222, 2016.
[8] G. Giordano et al., "Modelling the COVID-19 epidemic and implementation of population-wide interventions in Italy," Nat. Med., vol. 26, pp. 855-860, 2020.
[9] R. Carli, G. Cavone, N. Epicoco, P. Scarabaggio, and M. Dotoli, "Model predictive control to mitigate the COVID-19 outbreak in a multi-region scenario," Annu. Rev. Control, vol. 50, pp. 373-393, 2020.
[10] F. Parino, L. Zino, M. Porfiri, and A. Rizzo, "Modelling and predicting the effect of social distancing and travel restrictions on COVID-19 spreading," J. R. Soc. Interface, vol. 18, no. 175, p. 20200875, 2021.
[11] S. Fiandrino et al., "Collaborative forecasting of influenza-like illness in Italy: The Influcast experience," Epidemics, vol. 50, p. 100819, 2025.
[12] World Health Organization, "Vector-borne diseases," https://www.who.int/news-room/fact-sheets/detail/vector-borne-diseases, 2024.
[13] J. Rocklöv and R. Dubrow, "Climate change: an enduring challenge for vector-borne disease prevention and control," Nat. Immunol., vol. 21, no. 5, pp. 479-483, 2020.
[14] J. Liu-Helmersson, J. Rocklöv, M. Sewe, and Å. Brännström, "Climate change may enable Aedes aegypti infestation in major European cities by 2100," Environ. Res., vol. 172, pp. 693-699, 2019.
[15] M. Aguiar et al., "Mathematical models for dengue fever epidemiology: A 10-year systematic review," Phys. Life Rev., vol. 40, pp. 65-92, 2022.
[16] S. T. Ogunlade, M. T. Meehan, A. I. Adekunle, and E. S. McBryde, "A systematic review of mathematical models of dengue transmission and vector control: 2010-2020," Viruses, vol. 15, no. 1, p. 254, 2023.
[17] M. J. Keeling and P. Rohani, Modeling Infectious Diseases in Humans and Animals. Princeton University Press, 2011.
[18] J. D. Murray, Continuous Population Models for Single Species. Springer New York, 1993, pp. 1-43.
[19] M. Hirsch and H. Smith, Chapter 4: Monotone Dynamical Systems. Elsevier, 2006, pp. 239-357.
[20] A. L. Wilson et al., "The importance of vector control for the control and elimination of vector-borne diseases," PLOS Negl. Trop. Dis., vol. 14, no. 1, p. e0007831, 2020.
[21] J. D. Alpern, S. J. Dunlop, B. J. Dolan, W. M. Stauffer, and D. R. Boulware, "Personal protection measures against mosquitoes, ticks, and other arthropods," Med. Clin. N. Am., vol. 100, no. 2, pp. 303-316, 2016.
[22] J. Lourenço and M. Recker, "Viral and epidemiological determinants of the invasion dynamics of novel dengue genotypes," PLOS Negl. Trop. Dis., vol. 4, no. 11, p. e894, 2010.
[23] World Health Organization, "Dengue and severe dengue," https://www.who.int/health-topics/dengue-and-severe-dengue, 2024.
[24] F. Blanchini, "Set invariance in control," Automatica, vol. 35, no. 11, pp. 1747-1767, 1999.
[25] D. P. Bertsekas, Nonlinear Programming. Belmont, MA, USA: Athena Scientific, 1995.
[26] M. Ye, L. Zino, A. Rizzo, and M. Cao, "Game-theoretic modeling of collective decision making during epidemics," Phys. Rev. E, vol. 104, no. 2, p. 024314, 2021.
[27] K. Frieswijk, L. Zino, M. Ye, A. Rizzo, and M. Cao, "A mean-field analysis of a network behavioral-epidemic model," IEEE Control Syst. Lett., vol. 6, pp. 2533-2538, 2022.
[28] A. R. Hota, T. Sneh, and K. Gupta, "Impacts of game-theoretic activation on epidemic spread over dynamical networks," SIAM J. Control Optim., vol. 60, no. 2, pp. S92-S118, 2022.
[29] K. Paarporn and C. Eksin, "SIS epidemics coupled with evolutionary social distancing dynamics," in 2023 Am. Control Conf., 2023, pp. 4308-4313.
arXiv:2510.14789v1 [gr-qc] 16 Oct 2025

Quantum confinement of scalar bosons in the Bonnor-Melvin spacetime: uniform magnetic field and rainbow gravity effects

Omar Mustafa (omar.mustafa@emu.edu.tr, Corr. Author)
Department of Physics, Eastern Mediterranean University, 99628, G. Magusa, north Cyprus, Mersin 10, Türkiye

Abdullah Guvendi (abdullah.guvendi@erzurum.edu.tr)
Department of Basic Sciences, Erzurum Technical University, 25050, Erzurum, Türkiye

(Dated: October 17, 2025)

We present an exact analytical study of Klein-Gordon (KG) scalar bosons and antibosons confined in the Bonnor-Melvin (BM) spacetime under a uniform magnetic field, incorporating rainbow gravity (RG) corrections with a positive cosmological constant. The cosmological constant partitions spacetime into an infinite sequence of confinement domains bounded by impenetrable barriers. Within the first allowed domain, the KG equation reduces to a hypergeometric differential equation, yielding closed-form expressions for both the energy spectra and the radial wavefunctions in terms of hypergeometric polynomials. Two representative RG models, inspired by the Magueijo-Smolin framework and loop quantum gravity (LQG), produce Planck-scale bounded, symmetric particle-antiparticle spectra. A distinctive feature of the curved magnetized geometry is the collapse of all magnetic quantum states m ≠ 0 onto the m = 0 level for each radial excitation, a degeneracy absent in flat spacetime. Increasing the cosmological constant partially lifts this collapse, establishing a direct link between the global spacetime curvature and the local quantum structure. Radial probability density analysis further shows that stronger magnetic fields enhance spatial localization, confining bosons into static or rotating ring-like configurations with nodal architectures that evolve systematically with quantum numbers. These findings reveal how gravitational confinement, topology, magnetic fields, and Planck-scale corrections jointly govern the spectral and spatial properties of relativistic quantum fields in curved and magnetized backgrounds.

I. INTRODUCTION

Einstein's theory of general relativity (GR) [1] describes gravity as a manifestation of the curvature of spacetime. The attempt to unify GR with quantum mechanics, often referred to as quantum gravity (QG), offers fundamental insights into the behavior of quantum systems in curved backgrounds. This connection has motivated many studies examining how spacetime curvature influences quantum spectroscopic properties, especially for scalar and spinor particles described by the KG and Dirac equations, respectively [2]. The present study focuses on KG particles in such settings. Magnetic fields are central to many astrophysical and cosmological phenomena, including stellar structures, accretion disks, galactic centers, magnetars [3, 4], and heavy-ion collisions [5-7]. The simultaneous presence of strong gravitational and magnetic fields near compact objects has led to considerable interest in general relativistic models that incorporate both effects. One prominent example is the BM universe, an exact solution of the Einstein-Maxwell equations that describes a static spacetime with cylindrical symmetry, sourced by a magnetic field aligned with the symmetry axis and self-consistently embedded in its own gravitational field [8-12]. In this model, the magnetic field contributes to the energy-momentum tensor and thereby affects the geometry of spacetime.
A positive cosmological constant Λ > 0 is introduced to stabilize the spacetime and preserve its symmetry. The BM spacetime metric is given in [8] as

$ds^2 = -dt^2 + dr^2 + \tilde\sigma^2 \sin^2\!\left(\sqrt{2\Lambda}\, r\right) d\phi^2 + dz^2$, (1)

where z ∈ (−∞, ∞), ϕ ∈ [0, 2π], r ∈ [0, ∞), and Λ is in units of inverse length squared. Following the coordinate transformation $\sqrt{2\Lambda}\, r \to \rho$ and $\tilde\sigma\sqrt{2\Lambda}\,\phi \to \varphi$, as in Žofka's treatment [8], the line element (1) becomes

$ds^2 = -dt^2 + \frac{1}{2\Lambda}\left(d\rho^2 + \sin^2\rho\, d\varphi^2\right) + dz^2$, (2)

where $H = \frac{1}{\sqrt{2}}\sin\rho$ denotes the magnetic field [8]. Topological defects such as domain walls are expected in various unified field theories [13-16], and their effects on quantum dynamics have been examined in [17-34]. The relationship between gravitational geometry and quantum systems remains a key motivation for studying such configurations, and the BM spacetime provides a suitable model for this purpose [8-10, 35-38]. In the high-energy (ultraviolet) regime, the energy of a test particle can influence the structure of spacetime, leading to a modification of the standard dispersion relation [39-45]:

$E^2 f_0(u)^2 - p^2 f_1(u)^2 = m_\circ^2$. (3)

This form appears in various QG theories, such as string theory, LQG, and non-commutative geometry [46-48], as well as in some recent research framed within RG, including: energy-momentum distributions of a cylindrical black hole in rainbow gravity [49]; generalized uncertainty principle corrections in Rastall-Rainbow Casimir wormholes [50]; fermions moving around strong gravitational sources [51]; and the exploration of the physical properties of the strange star SAX J1808.4-3658 in rainbow gravity [52]. Consequently, the BM metric (2) under RG becomes

$ds^2 = -\frac{dt^2}{f_0(u)^2} + \frac{1}{f_1(u)^2}\left[\frac{1}{2\Lambda}\left(d\rho^2 + \sin^2\rho\, d\varphi^2\right) + dz^2\right]$, (4)

where $f_i(u)$, i = 0, 1, are the so-called rainbow functions that obey the infrared limit $\lim_{u\to 0} f_i(u) = 1$, and $u = |E|/E_p$ satisfies 0 ≤ u ≤ 1. This ensures a consistent transition to the classical regime and an equal treatment of particles and antiparticles [45, 53-56]. The functions must preserve Ep, the Planck energy, as an invariant scale. We refer to the metric in (4) as the BM-RG spacetime. In our recent studies [55, 56], we showed that the BM spacetime contains singularities at $\rho = \sqrt{2\Lambda}\, r = \tau\pi$, with τ = 0, 1, 2, ···. These singularities correspond to infinite potential barriers, acting effectively as impenetrable walls. The surfaces at $r = \tau\pi/\sqrt{2\Lambda}$ do not constitute domain walls in the strict field-theoretic sense, since a domain wall normally separates distinct vacuum phases. Instead, they arise as geometric or topological defects due to degeneration of the angular metric component $g_{\varphi\varphi}$. At these radial locations, the angular component degenerates, producing singularities in the spacetime structure. Notably, this array of impenetrable surfaces imposes effective confinement for fields within each radial domain, resembling domain-wall-like behavior phenomenologically, while their origin is purely geometric rather than associated with any phase transition of the field. The confinement mechanism in the BM spacetime is therefore emphasized. As a result, quantum particles are confined between two successive infinite walls, such as in the region $\rho = \sqrt{2\Lambda}\, r \in [0, \pi]$. This confinement agrees with Žofka's result [8], confirming the presence of a boundary at $\sqrt{2\Lambda}\, r = \pi$, and makes any treatment that assumes r → ∞ (ρ → ∞) invalid [12, 57].
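As a worked step (ours, for orientation), the barrier locations follow directly from the vanishing of the angular metric component in Eq. (2):

$g_{\varphi\varphi} = \frac{\sin^2\rho}{2\Lambda} = 0 \iff \rho = \sqrt{2\Lambda}\, r = \tau\pi \iff r = \frac{\tau\pi}{\sqrt{2\Lambda}},\qquad \tau = 0, 1, 2, \dots$

so the first confinement domain is $0 < r < \pi/\sqrt{2\Lambda}$, which widens as Λ decreases.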
In [55], we analyzed KG particles in a nonuniform magnetic field in the BM-RG background and showed that the resulting problem is conditionally exactly solvable using the general Heun function $H_G(a, q, \alpha, \beta, \gamma, \delta, z)$. In the present study, we consider KG particles in a uniform external magnetic field in the BM-RG background. In Section II, we formulate the KG equation in this setting and show that it reduces to a hypergeometric differential equation. This leads to exact solutions in terms of hypergeometric polynomials of order n, unlike the case of a nonuniform magnetic field in [55], where we were only able to find a conditionally exact solution using general Heun polynomials. We derive the energy spectrum and wave functions by analyzing the associated hypergeometric series. Here, our analyses interestingly show that, contrary to standard quantum mechanics, where one would expect a magnetic field to at least partially lift the degeneracy associated with the magnetic quantum number m = ±|m|, we observe no such effect. Moreover, we find that as the magnetic field strength increases, all m-states appear to collapse onto the m = 0 state for a given radial quantum number n. Nevertheless, the cosmological constant Λ is observed to lift the degeneracy associated with m, even within its standard range 0 < Λ ≪ 1. In our recent study [55] of KG particles in a nonuniform external magnetic field within the BM-RG framework, such surprising degeneracy behaviors were not observed. In Section III, we examine the effect of RG using two sets of rainbow functions. The first set, proposed by Magueijo and Smolin, is given by $f_0(u) = 1/(1 - \tilde\epsilon|E|)$ and $f_1(u) = 1$, with $\tilde\epsilon = \epsilon/E_p$, motivated by varying speed of light models [58]. The second set, inspired by LQG [59, 60], uses $f_0(u) = 1$ and $f_1(u) = \sqrt{1 - \tilde\epsilon|E|^\nu}$, with ν = 1, 2. Both sets comply with the principles of RG and maintain Ep as an upper energy bound [53-56, 61]. Our concluding remarks are presented in Section IV.

II. KG PARTICLES IN A UNIFORM MAGNETIC FIELD IN THE BM-RG SPACETIME BACKGROUND

In this section, we formulate a non-perturbative wave equation by incorporating the effects of the RG framework. For the BM-RG spacetime, the non-zero components of the contravariant metric tensor $g^{\mu\nu}$ associated with the line element (4) are given by

$g^{00} = -f_0(u)^2,\quad g^{11} = 2\Lambda f_1(u)^2,\quad g^{22} = \frac{2\Lambda f_1(u)^2}{\sin^2\rho},\quad g^{33} = f_1(u)^2$. (5)

The determinant of the covariant metric tensor is $\det(g_{\mu\nu}) = g = -\frac{\sin^2\rho}{4\Lambda^2 f_1(u)^6 f_0(u)^2}$. We proceed by considering a KG particle and antiparticle in the presence of an external magnetic field within the BM background. The governing equation reads

$\frac{1}{\sqrt{-g}}\,(\partial_\mu - ieA_\mu)\left[\sqrt{-g}\, g^{\mu\nu}(\partial_\nu - ieA_\nu)\right]\psi(t, \rho, \varphi, z) = m_\circ^2\, \psi(t, \rho, \varphi, z)$, (6)

where m◦ denotes the rest mass energy (or rest energy) of the particle. We consider the electromagnetic four-vector potential in the form $A_\mu = (0, 0, A_\varphi, 0)$, with

$A_\varphi = -\frac{B_\circ}{2\Lambda}\cos\rho \ \Rightarrow\ \mathbf{B} = B_\circ f_1(u)^2\, \hat z$,

as obtained from the electromagnetic field invariant $F_{\mu\nu}F^{\mu\nu}/2 = \|\mathbf{B}\|^2 - \|\mathbf{E}\|^2/c^2$. It is evident that in the absence of RG corrections, where f1(u) = 1, the magnetic field is strictly uniform. However, the presence of RG effects modifies the field, rendering it energy-dependent. To determine the influence of the BM spacetime on KG bosons under a uniform magnetic field, we employ the ansatz $\psi(t, \rho, \varphi, z) = e^{i(m\varphi + kz - Et)}R(\rho)$ in Eq. (6), which yields the following radial differential equation:

$R''(\rho) + \frac{1}{\tan\rho}R'(\rho) + \left[\mathcal{E} - \frac{\left(m + \tilde B\cos\rho\right)^2}{\sin^2\rho}\right]R(\rho) = 0$, (7)

where

$\tilde B = \frac{eB_\circ}{2\Lambda},\qquad \mathcal{E} = \frac{f_0(u)^2 E^2 - m_\circ^2}{2\Lambda f_1(u)^2} - \frac{k^2}{2\Lambda}$, (8)

and the integer m = 0, ±1, ±2, ··· corresponds to the magnetic quantum number. This expression can be further simplified into the form

$R''(\rho) + \frac{1}{\tan\rho}R'(\rho) + \left[\tilde{\mathcal{E}} - \frac{\eta^2}{\sin^2\rho} - \frac{2m\tilde B\cos\rho}{\sin^2\rho}\right]R(\rho) = 0$, (9)

where we have introduced $\tilde{\mathcal{E}} = \mathcal{E} + \tilde B^2$ and $\eta = \sqrt{m^2 + \tilde B^2}$.
(6), which yields the following radial differential equation: R′′ (ρ) + 1 tan ρR′ (ρ) +   E − h m + ˜B cos ρ i2 sin2 ρ   R (ρ) = 0, (7) where ˜B = eB◦ 2Λ , E = f0 (u)2 E2 −m2 ◦ 2Λf1 (u)2 −k2 2Λ , (8) and the integer m = 0, ±1, ±2, · · · corresponds to the magnetic quan- tum number. This expression can be further simplified into the form R′′ (ρ) + 1 tan ρR′ (ρ) + ˜E − η2 sin2 ρ −2 m ˜B cos ρ sin2 ρ ! R (ρ) = 0, (9) where we have introduced ˜E = E + ˜B2 and η2 = (m2 + ˜B2) ⇒ η = q (m2 + ˜B2) . To recast this equation into a one-dimensional Schr¨odinger-like form, which facilitates the identification of the effective 3 gravitational potential, we define R(ρ) = U(ρ)/ p sin(ρ). Substituting into Eq. (9) leads to U ′′(ρ) + " ˜λ −(η2 −1/4) sin2 ρ −2 m ˜B cos ρ sin2 ρ # U(ρ) = 0, (10) with ˜λ = ˜E +1/4. Accordingly, the effective gravitational potential can be identified as Veff(ρ) = (η2 −1/4) sin2 ρ + 2 m ˜B cos ρ sin2 ρ . (11) It should be noted that, due to the small but non-zero cosmological con- stant 0 < Λ ≪1, and the condition ˜B ̸= 0, the effective potential does not impose restrictions on the values of the magnetic quantum number m. Instead, it introduces potential barriers at ρ = 0, π, 2π, . . ., con- fining the quantum particle/antiparticle within the interval ρ ∈[0, π]. This would suggest the boundary conditions U(0) = 0 = U(π) for the radial wave function. In the subsequent section, we will derive the exact solution to the corresponding one-dimensional Schr¨odinger-type equation given in (10). A. Geometric Confinement of Klein-Gordon Particles We begin by analyzing the KG test particles in a uniform magnetic field within BM spacetime induced confinement and described by the metric in (10), using the substitutions x = cos ρ →ρ = arccos(x), and U(x) = (x −1)α (x + 1)β F (x), (12) where α = 1 4 p 4 η2 + 4 γ + 1 + 1  , β = 1 4 p 4 η2 −4 γ + 1 + 1  , γ = 2 m ˜B, and η2 = m2 + ˜B2. Note that for x = 1 (ρ = 0) and x = −1 (ρ = π), the function U(x) = 0, analogous to the standard textbook infinite potential well, represented by the first two impenetrable walls in the BM spacetime scenario. Next, we apply the change of variables y = (x + 1)/2, leading to (y2 −y)F ′′(x) + [(a + b + 1)y −c] F ′(y) + ab F (y) = 0, (13) where a = α + β + p ˜λ, b = α + β − p ˜λ, c = 2β + 1 2. (14) It is evident that equation (13) is the hypergeometric differential equa- tion, which admits the solution F (y) = 2F1(a, b, c, y), where a = −n and/or b = −n, would ensure that the hypergeometric power series truncates to a polynomial of order n ≥0. Furthermore, note that α, β ≥1/2, as can be seen from α = 1 4 q 4(m + ˜B)2 + 1 + 1  , β = 1 4 q 4(m −˜B)2 + 1 + 1  . (15) It is important to recognize that the term α + β in a and b of (14) remains unchanged for m = ±|m|, where α|±|m| = β|∓|m|. To verify the uniqueness of F (y) = 2F1(a, b, c, y) as a solution of (13), we consider a power series expansion of the form F (y) = ∞ X j=0 Cj yj+σ, (16) which yields ∞ X j=0 Cj [ab + (j + σ)(j + σ + a + b)] yj+σ = ∞ X j=0 Cj+1 [(j + σ + 1)(j + σ + c)] yj+σ + C0 σ [σ + c −1] yσ−1. (17) Since C0 ̸= 0, it follows that σ [σ + c −1] = 0. Consequently, σ = 0 or σ = −c + 1 = −2β + 1/2. The latter case would cause F (y) →∞as y →0 (i.e., ρ = π ∈[0, π]), and must therefore be excluded. We thus adopt σ = 0, leading to the recurrence relation Cj+1 [(j + 1)(j + c)] = Cj [ab + j(j + a + b)] . (18) To truncate the power series into a polynomial of degree n ≥0, we im- pose the condition Cn+1 = 0 while ensuring Cn ̸= 0. 
Consequently, the energy expression becomes

$f_0(u)^2 E^2 = f_1(u)^2 G_{nm} + m_\circ^2;\qquad G_{nm} = 2\Lambda\left[(n + \alpha + \beta)^2 - \tilde B^2 - \frac{1}{4}\right] + k^2$. (19)

We observe that the invariance of α + β under the transformation m = ±|m| is reflected in the spectroscopic structure of KG particles and antiparticles in (19). Specifically, the spectrum exhibits degeneracy with respect to the magnetic quantum number m = ±|m|, arising from the identity $\alpha|_{\pm|m|} = \beta|_{\mp|m|}$. Furthermore, since $F(y) = {}_2F_1(a, b, c; y) = \sum_{j=0}^{n} C_j\, y^j$, the full solution reads

$U_{nm}(x) \sim (x - 1)^\alpha (x + 1)^\beta \sum_{j=0}^{n} C_j\left(\frac{x + 1}{2}\right)^j \ \Rightarrow\ U_{nm}(\rho) = \mathcal{N}\,(\cos\rho - 1)^\alpha (\cos\rho + 1)^\beta \sum_{j=0}^{n} C_j\left(\cos\frac{\rho}{2}\right)^{2j}$,
$R_{nm}(\rho) = \mathcal{N}\,(\cos\rho - 1)^{\alpha - 1/4} (\cos\rho + 1)^{\beta - 1/4} \sum_{j=0}^{n} C_j\left(\cos\frac{\rho}{2}\right)^{2j}$. (20)

Here, the coefficients Cj are given by the recurrence relation in (18), with α and β defined in (15). This result will be used in the following analysis to examine the effects of RG corrections on the spectroscopic structure of KG particles and antiparticles in BM spacetime subjected to a uniform external magnetic field. It is clear that both radial functions R(ρ) and U(ρ) vanish at the surfaces ρ = 0 and ρ = π, in agreement with the standard behavior of quantum wave functions for particles confined within an infinite potential well of width π.

III. RAINBOW GRAVITY EFFECTS

In this section, we investigate the influence of RG on KG particles and antiparticles by employing two distinct classes of rainbow functions. The first class, originally proposed within the framework of the varying speed of light hypothesis [58], is defined by

$f_0(u) = \frac{1}{1 - \epsilon u} = \frac{1}{1 - \tilde\epsilon|E|},\qquad f_1(u) = 1$,

where u = |E|/Ep denotes a dimensionless energy scale normalized by the Planck energy Ep, and ε̃ = ε/Ep is the corresponding dimensionless rainbow parameter. The second class, inspired by considerations of LQG [59, 60], is given by

$f_0(u) = 1,\qquad f_1(u) = \sqrt{1 - \tilde\epsilon|E|^\nu},\qquad \nu = 1, 2$,

with the admissible energy range limited to 0 ≤ u = |E|/Ep ≤ 1. These functional forms introduce energy-dependent deformations to the spacetime geometry, encapsulating potential quantum gravitational corrections at trans-Planckian scales.

FIG. 1. The figure displays the energy levels of KG particles and antiparticles given by (21). Specifically, we plot: (a) E versus ε̃ for n = 0, m = 0, 1, 2, 3, 4, Λ = 0.5, and B◦ = 1 = e; (b) E versus B◦ for n = 0, m = 0, 1, 2, 3, 4, ε̃ = 0.5, and Λ = 0.1; (c) E versus B◦ for n = 2, m = 0, 1, 2, 3, 4, ε̃ = 0.5, and Λ = 0.1; (d) E versus the cosmological constant Λ for n = 2, m = 0, 1, 2, 3, 4, ε̃ = 0.5, and B◦ = 4; (e) E versus Λ for n = 0, 1, 2, 3, 4, m = 2, ε̃ = 0.5, and B◦ = 4; and (f) E versus Λ for n = 2, m = 0, 1, 2, 3, 4, ε̃ = 0.5, and B◦ = 1.

A. Magueijo-Smolin Rainbow Functions

This set of rainbow functions, $f_0(u) = 1/(1 - \tilde\epsilon|E|)$, $f_1(u) = 1$ (with ε̃ = ε/Ep), is motivated by the hypothesis of a varying speed of light [58]. Substituting this set into the result (19) leads to

$E^2 = \tilde G_{nm}\left(1 - \tilde\epsilon|E|\right)^2;\qquad \tilde G_{nm} = G_{nm} + m_\circ^2$. (21)

It is important to recall that |E| = E+ is used for test particles, while |E| = −E− is used for antiparticles. This distinction leads to two separate equations:

$E_+^2\left(1 - \tilde\epsilon^2 \tilde G_{nm}\right) + 2\tilde\epsilon\, \tilde G_{nm}\, E_+ - \tilde G_{nm} = 0$, (22)

for particles, and

$E_-^2\left(1 - \tilde\epsilon^2 \tilde G_{nm}\right) - 2\tilde\epsilon\, \tilde G_{nm}\, E_- - \tilde G_{nm} = 0$, (23)

for antiparticles. These can be combined to yield the general result:

$E_{n,m} = \pm\frac{\sqrt{\tilde G_{nm}}}{1 + \tilde\epsilon\sqrt{\tilde G_{nm}}}$. (24)

This result suggests a clear symmetrization of the KG particle and antiparticle energies with respect to E = 0, as also observed in Figure 1.
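The collapse of the m-levels discussed below is already visible from Eqs. (19) and (24): for B̃ ≫ |m| one has α + β ≈ B̃ + 1/2, so Gnm loses its m-dependence to leading order. A small sketch (ours) quantifies this:

```python
# Sketch: spread of the m = 0..4 levels of Eq. (24) shrinks as B0 grows.
import numpy as np

def energy(n, m, B0, Lam=0.1, e=1.0, eps=0.5, m0=1.0, k=0.0):
    Bt = e * B0 / (2 * Lam)                                   # B~ = e*B0/(2*Lambda)
    alpha = 0.25 * (np.sqrt(4*(m + Bt)**2 + 1) + 1)           # Eq. (15)
    beta  = 0.25 * (np.sqrt(4*(m - Bt)**2 + 1) + 1)
    G = 2*Lam*((n + alpha + beta)**2 - Bt**2 - 0.25) + k**2   # Eq. (19)
    Gt = G + m0**2
    return np.sqrt(Gt) / (1 + eps*np.sqrt(Gt))                # Eq. (24), E_+ branch

for B0 in (1.0, 10.0, 50.0):
    levels = [energy(n=0, m=m, B0=B0) for m in range(5)]
    print(B0, np.ptp(levels))   # peak-to-peak spread -> 0: collapse onto m = 0
# All levels also saturate below 1/eps = 2, i.e., below E_p for eps = 1.
```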
However, it should be noted that the asymptotic behavior of |E±| as B◦ → ∞ approaches a limiting value |E±| ∼ 2 = 1/ε̃ = Ep/ε, as seen in Figs. 1(b) and 1(c). Consequently, it follows that |E±| ≤ Ep, so that |E±|max = Ep when ε = 1. Although it is known that the cosmological constant typically satisfies 0 < Λ ≪ 1 (i.e., Λ = 1.4657 × 10⁻⁵² m⁻² = 3.827 × 10⁻¹²³ l_p⁻², where l_p is the Planck length), we have extended this range and considered larger values of Λ purely for hypothetical exploration. Although it is hypothetical, it allows us to probe theoretical possibilities and emphasizes that the Planck energy Ep is the maximum possible energy for particles and antiparticles (a natural consequence of the RG proposal) and is yet another invariant, alongside the speed of light, as clearly illustrated in Figs. 1(d), 1(e), and 1(f). According to standard quantum mechanics, one would expect that a magnetic field lifts the degeneracy associated with the magnetic quantum number m = ±|m|, at least partially. Interestingly, contrary to this expectation, we observe no such effect. In fact, as the magnetic field strength increases, all m-states appear to collapse into the m = 0 state for a given n. However, the cosmological constant Λ lifts the degeneracy associated with m, even within its standard range 0 < Λ < 1, as clearly shown in Figs. 1(d), 1(e), and 1(f). We attribute this to the uniform nature of the magnetic field employed here. In our recent study [55] of KG particles in a non-uniform external magnetic field within the BM-RG framework, such surprising degeneracy behaviors were not observed.

B. Rainbow function pair $f_0(u) = 1$, $f_1(u) = \sqrt{1 - \tilde\epsilon|E|^\nu}$; ν = 1, 2

This pair of rainbow functions is motivated by LQG [59, 60] and has been shown to be fully consistent with the RG principles, thus ensuring invariance of the Planck energy Ep. This energy scale represents the maximum attainable energy for both particles and antiparticles (see, e.g., [53, 54, 61]). Accordingly, it is of particular interest to explore the behavior of such rainbow function pairs when applied to KG particles and antiparticles in a magnetized BM spacetime.

FIG. 2. The figure illustrates the energy levels of KG particles and antiparticles as given by Eq. (25). Specifically, the plots show: (a) E as a function of ε̃ for n = 0, m = 0, 1, 2, 3, 4, Λ = 0.5, and B◦ = 1 = e; (b) E as a function of B◦ for n = 0, m = 0, 1, 2, 3, 4, Λ = 0.1, and ε̃ = 0.5; (c) E as a function of B◦ for n = 1, m = 0, 1, 2, 3, 4, Λ = 0.1, and ε̃ = 0.5; (d) E as a function of the cosmological constant Λ for n = 0, m = 0, 1, 2, B◦ = 4, and ε̃ = 0.5.

FIG. 3. The figure shows the energy levels for KG particles and antiparticles given by (28): (a) E against ε̃ for n = 0, m = 0, 1, 2, 3, 4, Λ = 0.5, and B◦ = 1 = e; (b) E against B◦ for n = 0 and m = 0, 1, 2, 3, 4, Λ = 0.1, and ε̃ = 0.5; (c) E against B◦ for n = 1 and m = 0, 1, 2, 3, 4, Λ = 0.1, and ε̃ = 0.5; and (d) E against Λ for n = 0 and m = 0, 1, 2, B◦ = 4, and ε̃ = 0.5.

We begin by considering the case ν = 1, which, through (19), leads to the following expression:

$E^2 + \tilde\epsilon\, G_{nm}|E| - \tilde G_{nm} = 0$. (25)

The corresponding energy solutions are given by:

$E_+ = -\frac{\tilde\epsilon\, G_{nm}}{2} + \frac{1}{2}\sqrt{\tilde\epsilon^2 G_{nm}^2 + 4\tilde G_{nm}}$, (26)

and

$E_- = \frac{\tilde\epsilon\, G_{nm}}{2} - \frac{1}{2}\sqrt{\tilde\epsilon^2 G_{nm}^2 + 4\tilde G_{nm}}$. (27)

These energy levels are illustrated in Figure 2.
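As a consistency check (ours), the two roots in Eqs. (26)-(27) can be substituted back into Eq. (25); both branches leave a vanishing residual:

```python
# Sketch: verifying Eqs. (26)-(27) against the nu = 1 relation of Eq. (25).
import numpy as np

eps, G, m0 = 0.5, 3.0, 1.0           # illustrative values for eps~, G_nm, rest mass
Gt = G + m0**2                        # G~_nm of Eq. (21)
E_plus  = -eps*G/2 + 0.5*np.sqrt(eps**2*G**2 + 4*Gt)   # Eq. (26)
E_minus =  eps*G/2 - 0.5*np.sqrt(eps**2*G**2 + 4*Gt)   # Eq. (27)
for E in (E_plus, E_minus):
    print(E, E**2 + eps*G*abs(E) - Gt)                 # residual ~ 0 on both branches
```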
It is evident that, as the cosmological constant Λ → ∞, equation (25) implies that the maximum energy |E|max tends to 1/ε̃ = 2. This indicates an asymptotic saturation of the energy levels of KG particles and antiparticles in the BM spacetime, as clearly observed in Figure 2. Next, we consider the case ν = 2, which yields the following result:

$E^2 = \frac{\tilde G_{nm}}{1 + \tilde\epsilon\, G_{nm}} \ \Rightarrow\ E_\pm = \pm\sqrt{\frac{\tilde G_{nm}}{1 + \tilde\epsilon\, G_{nm}}}$. (28)

The corresponding energy levels are shown in Figure 3. This plot further confirms the asymptotic behavior of the energy spectrum as the cosmological constant Λ → ∞. As indicated by (28), the maximum energy in this case is given by $|E|_{\max} \sim \sqrt{1/\tilde\epsilon} = \sqrt{2} \approx 1.41$. This behavior is in full agreement with the analytical expression and is visually represented in Figure 3. Notably, the same behavior observed for the Magueijo-Smolin rainbow functions in Figure 1 reemerges in Figures 2 and 3. Remarkably, an increase in the uniform magnetic field strength causes all m-states to collapse into the m = 0 state for a given radial quantum number n. However, as the cosmological constant Λ (of the magnetized BM spacetime) increases, the m-related degeneracies are partially lifted, although the degeneracy between states with m = ±|m| remains preserved as Λ → 1. Interestingly, such behavior is absent in the case of scalar KG bosonic particles in a non-uniform magnetic field within BM spacetime, as discussed in [55]. The three sets of rainbow functions examined here, a Magueijo-Smolin set and two sets motivated by LQG, are all found to be consistent with the RG-inherited Planck energy invariance.

IV. CONCLUDING REMARKS

In this comprehensive investigation, we have conducted an exact analytical study of KG bosons and antibosons subjected to an external uniform magnetic field in the background of the BM spacetime, modified by the framework of RG and endowed with a positive cosmological constant. This work builds upon recent advances in quantum field theory in curved spacetimes, providing a detailed exploration of how gravitational confinement, topological structures, magnetic interactions, and QG-inspired modifications affect the energy-momentum dispersion relations. A key aspect of our analysis is that the geometry of the BM spacetime naturally induces an infinite sequence of infinite potential barriers, appearing at discrete surfaces determined by the expression $\rho = \sqrt{2\Lambda}\, r = \tau\pi$, with τ = 0, 1, 2, .... These potential barriers act as impenetrable boundaries, dividing spacetime into an infinite number of confinement regions, each similar to an ideal infinite quantum well with rigid boundaries.

FIG. 4. Radial wave functions Rn,m(ρ) in (20) for quantum numbers n = 0, 1, 2 and m = 0, 1, 2 under two magnetic field strengths, B◦ = 1 (left column) and B◦ = 10 (right column). Each subplot depicts the behavior of Rn,m(ρ) over the domain ρ ∈ [0, π], illustrating how the magnetic field intensity affects the spatial distribution of scalar field modes in the magnetized BM universe.

FIG. 5. Radial probability density distributions Pn,m(ρ) for quantum states with n = 0, 1, 2 and m = 0, 1, 2, where $P_{n,m}(\rho) = \int_0^\rho |R_{n,m}(\rho')|^2\, \rho'\, d\rho'$. The wavefunctions Rn,m(ρ) are computed under a transverse magnetic field with effective coupling $\tilde B = \frac{eB_\circ}{2\Lambda}$, using e = 1, B◦ = 1, and Λ = 0.1. The surface plots are mapped to Cartesian coordinates via x = ρ cos φ, y = ρ sin φ, where both the vertical axis and color scale represent the magnitude of Pn,m(ρ).

Within the first of these confinement domains,
we have obtained exact, closed-form solutions to the KG equation, expressed using hypergeometric polynomials for the radial wave functions. This framework allowed us to precisely determine the energy spectrum and examine the effects of external fields and RG corrections.

Our results reveal several notable features regarding the spectroscopic structure of the system. The inclusion of RG effects, through both Magueijo-Smolin rainbow functions and those inspired by LQG, significantly influences the energy spectrum. In particular, the uniform external magnetic field, which normally lifts the degeneracies associated with the magnetic quantum number m, behaves in an unexpected way within this curved spacetime setting. Specifically, the combined effect of the magnetic field and the BM spacetime geometry causes all magnetic quantum states m to collapse onto the m = 0 state for a fixed radial quantum number n. This behavior differs from what is seen in flat-spacetime quantum mechanics and highlights how spacetime curvature and topology can dominate the spectral properties of quantum fields. Furthermore, our analysis shows that the cosmological constant Λ, which is usually important at large (cosmological) scales, also has a direct effect on the local quantum behavior of the system. When Λ increases beyond physically realistic values, a partial lifting of the degeneracies associated with the magnetic quantum number is observed. This indicates a subtle connection between the global geometry of spacetime and the local quantum characteristics of relativistic particles.

The study also confirms the consistency of the rainbow functions used with the RG principles. In both the Magueijo-Smolin and LQG-inspired cases, the Planck energy Ep appears clearly as the highest possible energy for particles and antiparticles. This maintains the theoretical framework of QG phenomenology and keeps the symmetry of the particle and antiparticle energy spectra around E = 0.

FIG. 6. Radial probability density distributions Pn,m(ρ) for quantum states labeled by (n, m), representing the eigenstates of a charged particle confined to a curved surface under a transverse magnetic field. The system parameters are fixed as e = 1, Λ = 0.1, and B◦ = 10, corresponding to a strong dimensionless magnetic coupling $\tilde B = \frac{eB_\circ}{2\Lambda} \gg 1$. The plots display the spatial distributions of Pn,m(ρ) for discrete states with n = 0, 1, 2 and m = 0, 1, 2, rendered as three-dimensional surfaces in Cartesian coordinates via the polar mapping x = ρ cos φ, y = ρ sin φ, with height and colormap indicating the magnitude of the probability density.

It is important to note that the unusual degeneracy behavior caused by the uniform magnetic field is not observed in cases with non-uniform magnetic fields in the same BM-RG spacetime [55]. This difference emphasizes that both the magnetic field configuration and the geometry of spacetime influence the spectral and confinement properties of quantum fields. With reference to the radial wave functions given in equation (20), we plot in Figure 4 the radial wave functions Rn,m(ρ) for quantum numbers n = 0, 1, 2 and m = 0, 1, 2 at two magnetic field strengths, B◦ = 1 and 10.
As the magnetic field strength increases, the radial wave functions, representing KG particles and antiparticles, become increasingly localized. For B◦ = 1, the particles are mainly distributed between ρ ∼ 0.5 and ρ ∼ 3, while for B◦ = 10, they are restricted to the region from ρ ∼ 1 to ρ ∼ 2. This localization effect is further illustrated in the corresponding radial probability densities shown in Figures 5 and 6. Radial probability densities are defined as

$P_{n,m}(\rho) = \int_0^\rho |R_{n,m}(\rho')|^2\, \rho'\, d\rho'$,

and are visualized in two dimensions via x = ρ cos φ and y = ρ sin φ, where the color intensity represents the magnitude of Pn,m. The parameter $\tilde B = \frac{eB_\circ}{2\Lambda}$ characterizes the effect of the external magnetic field on the quantum states (B◦ = 1 in Figure 5 and B◦ = 10 in Figure 6). As the radial quantum number n increases, the radial probability distributions develop additional nodal rings, corresponding to higher energy states with more radial nodes. For n = 0, the density is mainly concentrated near the origin, while for n = 1, 2, more complex ring-like structures emerge. For fixed n, increasing the azimuthal quantum number m pushes the probability density outward due to the centrifugal barrier, broadening the spatial extent of the wave function. Importantly, increasing the magnetic field strength B◦ enhances the magnetic confinement, effectively compressing the probability densities toward smaller radial distances and strengthening localization near the origin. The magnetic field acts as a confining potential that counteracts the outward push caused by larger m values, resulting in sharper and more localized radial peaks. Stronger magnetic fields also slightly shift the nodal positions inward, modifying the spatial probability distribution. These results illustrate how tuning the magnetic field allows control over quantum confinement and the spatial characteristics of the system's eigenstates. Throughout all states, rotational symmetry is preserved, consistent with conservation of angular momentum. The observed evolution of probability densities with increasing n, m, and B◦ provides a quantitative understanding of how quantum numbers and magnetic field strength shape spatial quantum states under magnetic confinement.

This work provides an exact and detailed description of the KG boson dynamics in the BM-RG spacetime, showing how gravitational confinement, topological features, magnetic fields, and RG corrections influence the energy spectrum and wave functions. These results contribute to the theoretical understanding of quantum fields in curved and magnetized backgrounds and suggest potential areas for further study. Future work could include extending the analysis to fermionic or higher-spin fields, exploring analogous phenomena in optical or condensed matter systems (in principle), and examining how these effects might appear in astrophysical or cosmological contexts. Although primarily theoretical, the findings help clarify how QG-inspired modifications could affect particle behavior in curved spacetimes.

Moreover, it should be noted that the BM spacetime considered here is an exact solution of the Einstein-Maxwell equations, describing the self-consistent interaction between a magnetic field and spacetime geometry. With its idealized cylindrical symmetry and infinitely extended magnetic flux tube, it should be regarded primarily as an analytical model rather than a physically realized configuration.
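A compact way to reproduce such probability profiles is to evaluate Rn,m directly from the recurrence of Eq. (18) and the closed form of Eq. (20). The sketch below (ours; the normalization constant is left arbitrary, and the sign-free factor (1 − x)^(α−1/4) is used in place of (cos ρ − 1)^(α−1/4)) accumulates the cumulative density Pn,m(ρ):

```python
# Sketch: radial wavefunction of Eq. (20) via the recurrence of Eq. (18),
# and the cumulative radial probability P_nm(rho), up to normalization.
import numpy as np

def R_nm(rho, n, m, Bt):
    alpha = 0.25*(np.sqrt(4*(m + Bt)**2 + 1) + 1)      # Eq. (15)
    beta  = 0.25*(np.sqrt(4*(m - Bt)**2 + 1) + 1)
    a, b, c = -n, 2*(alpha + beta) + n, 2*beta + 0.5   # a = -n truncates the series
    C = [1.0]
    for j in range(n):                                 # recurrence, Eq. (18)
        C.append(C[-1]*(a*b + j*(j + a + b))/((j + 1)*(j + c)))
    x = np.cos(rho)
    F = sum(Cj*((x + 1)/2)**j for j, Cj in enumerate(C))
    return (1 - x)**(alpha - 0.25) * (1 + x)**(beta - 0.25) * F

rho = np.linspace(1e-4, np.pi - 1e-4, 4000)
R = R_nm(rho, n=1, m=1, Bt=5.0)                        # B~ = e*B0/(2*Lambda) = 5
P = np.cumsum(np.abs(R)**2 * rho) * (rho[1] - rho[0])  # P_nm(rho), Riemann sum
P /= P[-1]                                             # normalize the total to 1
print(P[2000])    # cumulative probability up to rho ~ pi/2
```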
Nevertheless, it is useful for capturing several qualitative effects in strongly magnetized, curved environments, such as the geometric confinement of scalar bosons, curvature-dependent energy spectra, and the collapse of magnetic sublevels. Near magnetars, neutron stars, or magnetized accretion disks, similar interactions between strong magnetic fields and spacetime curvature can influence particle localization and energy levels. The inclusion of rainbow gravity corrections further introduces Planck-scale deformations of the dispersion relation, providing a framework to explore high-energy effects. Thus, although not physically observable, the BM spacetime serves as a theoretical laboratory for studying how gravity, magnetism, and rainbow gravity effects jointly influence quantum fields in curved, magnetized backgrounds.

[1] A. Einstein, "Die Grundlage der allgemeinen Relativitätstheorie," Ann. Phys. (Berlin), 49 769 (1916)
[2] L. Parker, "One-electron atom in curved space-time," Physical Review Letters, 44 1559 (1980)
[3] C. Thompson and R. C. Duncan, "The soft gamma repeaters as very strongly magnetized neutron stars-I. Radiative mechanism for outbursts," Monthly Notices of the Royal Astronomical Society, 275 255-300 (1995)
[4] C. Kouveliotou, S. Dieters, T. Strohmayer, J. Van Paradijs, G. J. Fishman, C. A. Meegan, K. Hurley, J. Kommers, I. Smith, D. Frail and others, "An X-ray pulsar with a superstrong magnetic field in the soft γ-ray repeater SGR1806-20," Nature, 393 235-237 (1998)
[5] U. Gürsoy, D. Kharzeev, and K. Rajagopal, "Magnetohydrodynamics, charged currents, and directed flow in heavy ion collisions," Physical Review C, 89 054905 (2014)
[6] A. Bzdak and V. Skokov, "Event-by-event fluctuations of magnetic and electric fields in heavy ion collisions," Physics Letters B, 710 171-174 (2012)
[7] V. Voronyuk, V. D. Toneev, W. Cassing, E. L. Bratkovskaya, V. P. Konchakovski, and S. A. Voloshin, "Electromagnetic field evolution in relativistic heavy-ion collisions," Physical Review C, 83 054911 (2011)
[8] M. Žofka, "Bonnor-Melvin universe with a cosmological constant," Physical Review D, 99 044058 (2019)
[9] W. B. Bonnor, "Static magnetic fields in general relativity," Proceedings of the Physical Society. Section A, 67 225 (1954)
[10] M. A. Melvin, "Pure magnetic and electric geons," Physics Letters, 8 65-68 (1964)
[11] F. Ahmed and A. Bouzenada, "Effects of gravity rainbow on scalar bosonic and oscillator fields in Bonnor-Melvin space-time with a cosmological constant," European Physical Journal C, 84 1045 (2024)
[12] F. Ahmed and A. Bouzenada, "Scalar fields in Bonnor-Melvin-Lambda universe with potential: a study of dynamics of spin-zero particles-antiparticles," Physica Scripta, 99 065033 (2024)
[13] T. W. B. Kibble, "Some implications of a cosmological phase transition," Physics Reports, 67 183-199 (1980)
[14] A. Vilenkin, "Gravitational field of vacuum domain walls and strings," Physical Review D, 23 852 (1981)
[15] A. Vilenkin, "Cosmic strings and domain walls," Physics Reports, 121 263-315 (1985)
[16] E. A. F. Bragança, R. L. L. Vitória, H. Belich, and E. R. B. de Mello, "Relativistic quantum oscillators in the global monopole spacetime," European Physical Journal C, 80 1-11 (2020)
[17] S. Ölmez, V. Mandic, and X. Siemens, "Gravitational-wave stochastic background from kinks and cusps on cosmic strings," Physical Review D, 81 104028 (2010)
Barros Jr, “Scalar bosons under the in- fluence of noninertial effects in the cosmic string spacetime” European Physical Journal C, 77 186 (2017) [19] A.L. Cavalcanti de Oliveira, E.R. Bezerra de Mello, “Exact so- lutions of the Klein–Gordon equation in the presence of a dyon, magnetic flux and scalar potential in the spacetime of gravitational defects” Classical and Quantum Gravity, 23 5249 (2006) [20] M. Hosseinpour, F.M. Andrade, E.O. Silva, H. Hassanabadi, “Scattering and bound states for the Hulth´en potential in a cosmic string background” European Physical Journal C, 77 1–6 (2017) [21] L.C.N. Santos, C.C. Barros, “Relativistic quantum motion of spin- 0 particles under the influence of noninertial effects in the cosmic string spacetime” European Physical Journal C, 78 1–8 (2018) [22] F.A.C. Neto, F.M. Da Silva, L.C.N Santos, L.B. Cas- tro, “Scalar bosons with Coulomb potentials in a cos- mic string background: scattering and bound states” European Physical Journal Plus, 135 1–11 (2020) [23] M.O. Katanaev, I.V. Volovich, “Theory of defects in solids and three-dimensional gravity” Annals of Physics, 216 1–28 (1992) [24] R.A. Puntigam, H.H. Soleng, “Volterra dis- tortions, spinning strings, and cosmic defects” Classical and Quantum Gravity, 14 1129 (1997) [25] W.C.F. da Silva, K. Bakke, R.L.L. Vit´oria, “Non-relativistic quantum effects on the harmonic oscillator in a spacetime with a distortion of a vertical line into a vertical spiral” European Physical Journal C, 79 1–5 (2019) [26] R.L.L. Vit´oria, “Noninertial effects on a scalar field in a spacetime with a magnetic screw dislocation” European Physical Journal C, 79 844 (2019) [27] A. Guvendi, S. Zare, H. Hassanabadi, “Exact solution for a fermion-antifermion system with Cornell type nonmini- mal coupling in the topological defect-generated spacetime” Physics of the Dark Universe, 38 101133 (2022) [28] A. Guvendi, “Evolution of an interacting fermion–antifermion pair in the near-horizon of the BTZ black hole” European Physical Journal C, 84 185 (2024) [29] A. Guvendi, O. Mustafa, “Energy Symmetry Breaking of Dirac and Weyl Fermions in Magnetized Spinning Conical Geometries” Advanced Theory and Simulations, e00451 (2025) [30] S. Gurtas Dogan, O. Mustafa, A. Guvendi, “Charged fermions and vector bosons in magnetic fields within a spacetime generated by a spinning point source” Nuclear Physics B, 1016 116918 (2025) [31] A. Guvendi, O. Mustafa, “Fermion-antifermion pairs in magnetized spacetime generated by a point source” Nuclear Physics B, 1011 116803 (2025) [32] A. Vilenkin, “Gravitational field of vacuum domain walls” Physics Letters B, 133 177–179 (1983) [33] B. Linet, “The static metrics with cylindrical symmetry describing a model of cosmic strings” 9 General Relativity and Gravitation, 17 1109–1115 (1985) [34] M. Barriola, A. Vilenkin, “Gravitational field of a global monopole” Physical Review Letters, 63 341 (1989) [35] M. Astorino, “Charging axisymmetric space-times with cosmolog- ical constant” Journal of High Energy Physics, 2012 1–15 (2012) [36] J. Vesel`y, M. ˇZofka, “Cosmological magnetic field: The boost- symmetric case” Physical Review D, 100 044059 (2019) [37] A. Guvendi, O. Mustafa, “Fermion-antifermion pairs in a magnetized space-time with non-zero cosmological constant” Nuclear Physics B, 1004 116571 (2024) [38] A. Guvendi, F. Ahmed, S.G. Dogan, “Relativistic fermions and vector bosons in magnetized three-dimensional space-time with a cosmological constant” Nuclear Physics B, 1004 116569 (2024) [39] J. Magueijo, L. 
Smolin, “Lorentz invariance with an invariant en- ergy scale” Physical Review Letters, 88 190403 (2002) [40] P. Gal´an, G.A.M. Marug´an, “Quantum time uncertainty in a grav- ity’s rainbow formalism” Physical Review D, 70 124003 (2004) [41] G. Amelino-Camelia, “Relativity in spacetimes with short-distance structure governed by an observer-independent (Planckian) length scale” International Journal of Modern Physics D, 11 35–59 (2002) [42] G. Amelino-Camelia, “Doubly-special relativ- ity: first results and key open problems” International Journal of Modern Physics D, 11 1643–1669 (2002) [43] M. Hosseinpour, H. Hassanabadi, J. Kriz, S. Has- sanabadi, B.C. L¨utf¨uo˘glu, “Interaction of the gener- alized Duffin–Kemmer–Petiau equation with a non- minimal coupling under the cosmic rainbow gravity” International Journal of Geometric Methods in Modern Physics, 18 2150224 (2021) [44] G. Amelino-Camelia, J. Ellis, N.E. Mavromatos, D.V. Nanopou- los, S. Sarkar, “Tests of quantum gravity from observations of γ-ray bursts” Nature, 393 763–765 (1998) [45] O. Mustafa, “PDM KG-Coulomb particles in cosmic string rainbow gravity spacetime and a uniform magnetic field” Physics Letters B, 839 137793 (2023) [46] V.A. Kosteleck`y, S. Samuel, “Spontaneous breaking of Lorentz symmetry in string theory” Physical Review D, 39 683 (1989) [47] R. Gambini, J. Pullin, “Nonstandard optics from quantum space- time” Physical Review D, 59 124021 (1999) [48] S.M. Carroll, J.A. Harvey, V.A. Kosteleck`y, C.D. Lane, T. Okamoto, “Noncommutative field theory and Lorentz violation” Physical Review Letters, 87 141601 (2001) [49] M. Korunur, S. Korunur, “Energy–Momentum distri- butions of cylindrical black hole in rainbow gravity” Int. J. Geom. Meth. Mod. Phys. 21 (2024) 2450118 [50] E. Battista, S. Capozziello, A. Errehymy , “Generalized uncer- tainty principle corrections in Rastall-Rainbow Casimir worm- holes” Eur. Phys. J. C 84 (2024) 1314 [51] B. Dogan, M. Salti, O. Aydogdu, P. Rej ”Fermions moving around strong gravitational sources”, Int. J. Geom. Meth. Mod Phys. 22 (2025) 2450316 [52] W. Ali, U. Sheikh, S. Ali, M. J. Amir, ” Exploring the physical properties of strange star SAXJ1808.4–3658 in rainbow gravity” Int. J. Geom. Meth. Mod Phys. 21 (2024) 2450199 [53] O. Mustafa, “KG-particles in a cosmic string rain- bow gravity spacetime in mixed magnetic fields” European physical Journal C, 84 362 (2024) [54] O. Mustafa, “Massless KG-oscillators in Som-Raychaudhuri cosmic string spacetime in a fine tuned rainbow gravity” Nuclear Physics B, 995 116334 (2023) [55] O. Mustafa, A. Guvendi, “Klein-Gordon particles in a nonuniform external magnetic field in Bonnor-Melvin rainbow gravity back- ground” Nuclear Physics B, 1018 116998 (2025) [56] O. Mustafa, A. Guvendi, “On the Klein-Gordon bosonic fields in the Bonnor-Melvin spacetime with a cosmological constant in rainbow gravity: Bonnor-Melvin Domain Walls” European physical Journal C, 85 509 (2025) [57] L.C. Barbosa, C.C. Barros, “Scalar bosons in Bonnor-Melvin-Λ universe: exact solution, Landau levels and Coulomb-like poten- tial” Physica Scripta, 100 035302 (2025) [58] J. Magueijo, L. Smolin, “Generalized Lorentz invariance with an invariant energy scale” Physical Review D, 67 044017 (2003) [59] G. Amelino-Camelia, J. Ellis, N.E. Mavromatos, D.V. Nanopoulos, “Distance measurement and wave disper- sion in a Liouville-string approach to quantum gravity” International Journal of Modern Physics A, 12 607–623 (1997) [60] G. Amelino-Camelia, J. Ellis, N.E. Mavromatos, D.V. 
Nanopoulos, “Quantum-spacetime phenomenology” Living Reviews in Relativity, 16 1–137 (2013) [61] O. Mustafa, A. Guvendi, “Klein-Gordon oscillators in traversable wormhole rainbow gravity spacetime: Conditional exact solv- ability via a throat radius and oscillator frequency correlation” Int. J. Geom. Meth. Mod. Phys., 22 (2025) 2550091
Quantum confinement of scalar bosons in the Bonnor-Melvin spacetime: uniform magnetic field and rainbow gravity effects

Omar Mustafa (corresponding author), 99628, G. Magusa, north Cyprus, Mersin 10, Türkiye
Abdullah Guvendi, 25050, Erzurum, Türkiye
(Dated: October 17, 2025)

We present an exact analytical study of Klein-Gordon (KG) scalar bosons and antibosons confined in the Bonnor-Melvin (BM) spacetime under a uniform magnetic field, incorporating rainbow gravity (RG) corrections with a positive cosmological constant. The cosmological constant partitions spacetime into an infinite sequence of confinement domains bounded by impenetrable barriers. Within the first allowed domain, the KG equation reduces to a hypergeometric differential equation, yielding closed-form expressions for both the energy spectra and the radial wave functions in terms of hypergeometric polynomials. Two representative RG models, inspired by the Magueijo-Smolin framework and loop quantum gravity (LQG), produce Planck-scale bounded, symmetric particle-antiparticle spectra. A distinctive feature of the curved magnetized geometry is the collapse of all magnetic quantum states m ≠ 0 onto the m = 0 level for each radial excitation, a degeneracy absent in flat spacetime. Increasing the cosmological constant partially lifts this collapse, establishing a direct link between the global spacetime curvature and the local quantum structure. Radial probability density analysis further shows that stronger magnetic fields enhance spatial localization, confining bosons into static or rotating ring-like configurations with nodal architectures that evolve systematically with quantum numbers. These findings reveal how gravitational confinement, topology, magnetic fields, and Planck-scale corrections jointly govern the spectral and spatial properties of relativistic quantum fields in curved and magnetized backgrounds.

I. INTRODUCTION

Einstein's theory of general relativity (GR) [1] describes gravity as a manifestation of the curvature of spacetime. The attempt to unify GR with quantum mechanics, often referred to as quantum gravity (QG), offers fundamental insights into the behavior of quantum systems in curved backgrounds. This connection has motivated many studies examining how spacetime curvature influences quantum spectroscopic properties, especially for scalar and spinor particles described by the KG and Dirac equations, respectively [2]. The present study focuses on KG particles in such settings.

Magnetic fields are central to many astrophysical and cosmological phenomena, including stellar structures, accretion disks, galactic centers, magnetars [3, 4], and heavy-ion collisions [5-7]. The simultaneous presence of strong gravitational and magnetic fields near compact objects has led to considerable interest in general relativistic models that incorporate both effects. One prominent example is the BM universe, an exact solution of the Einstein-Maxwell equations that describes a static spacetime with cylindrical symmetry, sourced by a magnetic field aligned with the symmetry axis and self-consistently embedded in its own gravitational field [8-12]. In this model, the magnetic field contributes to the energy-momentum tensor and thereby affects the geometry of spacetime. A positive cosmological constant Λ > 0 is introduced to stabilize the spacetime and preserve its symmetry.
The BM spacetime metric is given in [8] as

$ds^2 = -dt^2 + dr^2 + \tilde{\sigma}^2 \sin^2\!\left(\sqrt{2\Lambda}\, r\right) d\varphi^2 + dz^2$,  (1)

where $z \in (-\infty, \infty)$, $\varphi \in [0, 2\pi]$, $r \in [0, \infty)$, and Λ is in units of inverse length squared. Following the coordinate transformation $\sqrt{2\Lambda}\, r \to \rho$ and $\tilde{\sigma}\sqrt{2\Lambda}\, \varphi \to \varphi$, as in Žofka's treatment [8], the line element (1) becomes

$ds^2 = -dt^2 + \frac{1}{2\Lambda}\left(d\rho^2 + \sin^2(\rho)\, d\varphi^2\right) + dz^2$,  (2)

where $H = \frac{1}{\sqrt{2}}\sin(\rho)$ denotes the magnetic field [8].

Topological defects such as domain walls are expected in various unified field theories [13-16], and their effects on quantum dynamics have been examined in [17-34]. The relationship between gravitational geometry and quantum systems remains a key motivation for studying such configurations, and the BM spacetime provides a suitable model for this purpose [8-10, 35-38].

In the high-energy (ultraviolet) regime, the energy of a test particle can influence the structure of spacetime, leading to a modification of the standard dispersion relation [39-45]:

$E^2 f_0(u)^2 - p^2 f_1(u)^2 = m_\circ^2$.  (3)

This form appears in various QG theories, such as string theory, LQG, and non-commutative geometry [46-48], as well as in some recent research framed within RG, including: energy-momentum distributions of a cylindrical black hole in rainbow gravity [49]; generalized uncertainty principle corrections in Rastall-Rainbow Casimir wormholes [50]; fermions moving around strong gravitational sources [51]; and the exploration of the physical properties of the strange star SAX J1808.4-3658 in rainbow gravity [52]. Consequently, the BM metric (2) under RG becomes

$ds^2 = -\frac{dt^2}{f_0(u)^2} + \frac{1}{f_1(u)^2}\left[\frac{1}{2\Lambda}\left(d\rho^2 + \sin^2(\rho)\, d\varphi^2\right) + dz^2\right]$,  (4)

where $f_i(u)$, $i = 0, 1$, are the so-called rainbow functions that obey the infrared limit $\lim_{u\to 0} f_i(u) = 1$, and $u = |E|/E_p$ satisfies $0 \le u \le 1$. This ensures a consistent transition to the classical regime and an equal treatment of particles and antiparticles [45, 53-56]. The functions must preserve $E_p$, the Planck energy, as an invariant scale. We refer to the metric in (4) as the BM-RG spacetime.

In our recent studies [55, 56], we showed that the BM spacetime contains singularities at $\rho = \sqrt{2\Lambda}\, r = \tau\pi$, with $\tau = 0, 1, 2, \ldots$. These singularities correspond to infinite potential barriers, acting effectively as impenetrable walls. The surfaces at $r = \tau\pi/\sqrt{2\Lambda}$ do not constitute domain walls in the strict field-theoretic sense, since a domain wall normally separates distinct vacuum phases. Instead, they arise as geometric or topological defects due to degeneration of the angular metric component $g_{\varphi\varphi}$. At these radial locations, the angular component degenerates, producing singularities in the spacetime structure. Notably, this array of impenetrable surfaces imposes effective confinement for fields within each radial domain, resembling domain-wall-like behavior phenomenologically, while their origin is purely geometric rather than associated with any phase transition of the field. The confinement mechanism in the BM spacetime is therefore emphasized. As a result, quantum particles are confined between two successive infinite walls, such as the region $\rho = \sqrt{2\Lambda}\, r \in [0, \pi]$. This confinement agrees with Žofka's result [8], confirming the presence of a boundary at $\sqrt{2\Lambda}\, r = \pi$, and makes any treatment that assumes $r \to \infty$ ($\rho \to \infty$) invalid [12, 57]. In [55], we analyzed KG particles in a nonuniform magnetic field in the BM-RG background and showed that the resulting problem is conditionally exactly solvable using the general Heun function $H_G(a, q, \alpha, \beta, \gamma, \delta, z)$.
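Before specializing to the BM geometry, the effect of the rainbow functions on the dispersion relation (3) can be checked in a few lines. The sketch below is our own illustration, not part of the original analysis: it solves Eq. (3) for $|E|$ with the Magueijo-Smolin pair $f_0 = 1/(1-\tilde\epsilon|E|)$, $f_1 = 1$, which gives $|E| = \sqrt{p^2+m_\circ^2}\,/\,(1+\tilde\epsilon\sqrt{p^2+m_\circ^2})$; the units ($E_p = 1$) and parameter values are assumptions made here for the demonstration.

    import numpy as np

    # Units: E_p = 1, so eps_tilde = eps / E_p = eps (assumed illustrative values)
    eps, m0 = 1.0, 0.1

    p = np.linspace(0.0, 1.0e3, 100001)
    w = np.sqrt(p**2 + m0**2)    # undeformed relativistic energy
    E = w / (1.0 + eps * w)      # Magueijo-Smolin-deformed |E| from Eq. (3)

    # |E| stays below E_p/eps for all momenta, as the RG proposal requires:
    print(E.max() < 1.0 / eps)   # True

The bound $|E| < E_p/\epsilon$ appears here already at the level of the flat dispersion relation; the body of the paper shows that the same saturation survives in the full BM-RG spectrum.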
In the present study, we consider KG particles in a uniform external magnetic field in the BM-RG background. In Section II, we formulate the KG equation in this setting and show that it reduces to a hypergeometric differential equation. This leads to exact solutions in terms of hypergeometric polynomials of order n, unlike the case of a nonuniform magnetic field in [55], where we were only able to find a conditionally exact solution using general Heun polynomials. We derive the energy spectrum and wave functions by analyzing the associated hypergeometric series. Here, our analyses interestingly show that, contrary to standard quantum mechanics, where one would expect a magnetic field to at least partially lift the degeneracy associated with the magnetic quantum number $m = \pm|m|$, we observe no such effect. Moreover, we find that as the magnetic field strength increases, all m-states appear to collapse onto the m = 0 state for a given radial quantum number n. Nevertheless, the cosmological constant Λ is observed to lift the degeneracy associated with m, even within its standard range $0 < \Lambda \ll 1$. In our recent study [55] of KG particles in a nonuniform external magnetic field within the BM-RG framework, such surprising degeneracy behaviors were not observed. In Section III, we examine the effect of RG using two sets of rainbow functions. The first set, proposed by Magueijo and Smolin, is given by $f_0(u) = 1/(1 - \tilde{\epsilon}|E|)$ and $f_1(u) = 1$, with $\tilde{\epsilon} = \epsilon/E_p$, motivated by varying speed of light models [58]. The second set, inspired by LQG [59, 60], uses $f_0(u) = 1$ and $f_1(u) = \sqrt{1 - \tilde{\epsilon}|E|^\nu}$, with $\nu = 1, 2$. Both sets comply with the principles of RG and maintain $E_p$ as an upper energy bound [53-56, 61]. Our concluding remarks are presented in Section IV.

II. KG-PARTICLES IN A UNIFORM MAGNETIC FIELD IN BM-RG-SPACETIME BACKGROUND

In this section, we formulate a non-perturbative wave equation by incorporating the effects of the RG framework. For the BM-RG spacetime, the non-zero components of the contravariant metric tensor $g^{\mu\nu}$ associated with the line element (4) are given by

$g^{00} = -f_0(u)^2, \quad g^{11} = 2\Lambda f_1(u)^2, \quad g^{22} = \frac{2\Lambda f_1(u)^2}{\sin^2(\rho)}, \quad g^{33} = f_1(u)^2$.  (5)

The determinant of the covariant metric tensor is

$\det(g_{\mu\nu}) = g = -\frac{\sin^2(\rho)}{4\Lambda^2 f_1(u)^6 f_0(u)^2}$.

We proceed by considering a KG particle and antiparticle in the presence of an external magnetic field within the BM background. The governing equation reads

$\frac{1}{\sqrt{-g}}\left(\partial_\mu - ieA_\mu\right)\left[\sqrt{-g}\, g^{\mu\nu}\left(\partial_\nu - ieA_\nu\right)\right]\psi(t, \rho, \varphi, z) = m_\circ^2\, \psi(t, \rho, \varphi, z)$,  (6)

where $m_\circ$ denotes the rest mass energy (or rest energy) of the particle. We consider the electromagnetic four-vector potential in the form $A_\mu = (0, 0, A_\varphi, 0)$, with

$A_\varphi = -\frac{B_\circ}{2\Lambda}\cos(\rho) \;\Rightarrow\; \mathbf{B} = B_\circ f_1(u)^2\, \hat{z}$,

as obtained from the electromagnetic field invariant $F_{\mu\nu}F^{\mu\nu}/2 = \|\mathbf{B}\|^2 - \|\mathbf{E}\|^2/c^2$. It is evident that in the absence of RG corrections, where $f_1(u) = 1$, the magnetic field is strictly uniform. However, the presence of RG effects modifies the field, rendering it energy-dependent. To determine the influence of the BM spacetime on KG bosons under a uniform magnetic field, we employ the ansatz $\psi(t, \rho, \varphi, z) = e^{i(m\varphi + kz - Et)} R(\rho)$ in Eq. (6), which yields the following radial differential equation:

$R''(\rho) + \frac{1}{\tan\rho} R'(\rho) + \left(\mathcal{E} - \frac{\left[m + \tilde{B}\cos\rho\right]^2}{\sin^2\rho}\right) R(\rho) = 0$,  (7)

where

$\tilde{B} = \frac{eB_\circ}{2\Lambda}, \qquad \mathcal{E} = \frac{f_0(u)^2 E^2 - m_\circ^2}{2\Lambda f_1(u)^2} - \frac{k^2}{2\Lambda}$,  (8)

and the integer $m = 0, \pm 1, \pm 2, \ldots$ corresponds to the magnetic quantum number. This expression can be further simplified into the form

$R''(\rho) + \frac{1}{\tan\rho} R'(\rho) + \left(\tilde{\mathcal{E}} - \frac{\eta^2}{\sin^2\rho} - \frac{2m\tilde{B}\cos\rho}{\sin^2\rho}\right) R(\rho) = 0$,  (9)
where we have introduced $\tilde{\mathcal{E}} = \mathcal{E} + \tilde{B}^2$ and $\eta^2 = m^2 + \tilde{B}^2 \Rightarrow \eta = \sqrt{m^2 + \tilde{B}^2}$. To recast this equation into a one-dimensional Schrödinger-like form, which facilitates the identification of the effective gravitational potential, we define $R(\rho) = U(\rho)/\sqrt{\sin(\rho)}$. Substituting into Eq. (9) leads to

$U''(\rho) + \left[\tilde{\lambda} - \frac{\eta^2 - 1/4}{\sin^2\rho} - \frac{2m\tilde{B}\cos\rho}{\sin^2\rho}\right] U(\rho) = 0$,  (10)

with $\tilde{\lambda} = \tilde{\mathcal{E}} + 1/4$. Accordingly, the effective gravitational potential can be identified as

$V_{\rm eff}(\rho) = \frac{\eta^2 - 1/4}{\sin^2\rho} + \frac{2m\tilde{B}\cos\rho}{\sin^2\rho}$.  (11)

It should be noted that, due to the small but non-zero cosmological constant $0 < \Lambda \ll 1$, and the condition $\tilde{B} \neq 0$, the effective potential does not impose restrictions on the values of the magnetic quantum number m. Instead, it introduces potential barriers at $\rho = 0, \pi, 2\pi, \ldots$, confining the quantum particle/antiparticle within the interval $\rho \in [0, \pi]$. This suggests the boundary conditions $U(0) = 0 = U(\pi)$ for the radial wave function. In the subsequent section, we derive the exact solution of the corresponding one-dimensional Schrödinger-type equation given in (10).

A. Geometric Confinement of Klein-Gordon Particles

We begin by analyzing KG test particles in a uniform magnetic field within the BM-spacetime-induced confinement, governed by Eq. (10), using the substitutions $x = \cos\rho \Rightarrow \rho = \arccos(x)$ and

$U(x) = (x - 1)^\alpha\, (x + 1)^\beta\, F(x)$,  (12)

where

$\alpha = \frac{1}{4}\left(\sqrt{4\eta^2 + 4\gamma + 1} + 1\right), \quad \beta = \frac{1}{4}\left(\sqrt{4\eta^2 - 4\gamma + 1} + 1\right), \quad \gamma = 2m\tilde{B}$,

and $\eta^2 = m^2 + \tilde{B}^2$. Note that for $x = 1$ ($\rho = 0$) and $x = -1$ ($\rho = \pi$), the function $U(x) = 0$, analogous to the standard textbook infinite potential well, represented by the first two impenetrable walls in the BM spacetime scenario. Next, we apply the change of variables $y = (x + 1)/2$, leading to

$(y^2 - y)\, F''(y) + \left[(a + b + 1)\, y - c\right] F'(y) + ab\, F(y) = 0$,  (13)

where

$a = \alpha + \beta + \sqrt{\tilde{\lambda}}, \qquad b = \alpha + \beta - \sqrt{\tilde{\lambda}}, \qquad c = 2\beta + \frac{1}{2}$.  (14)

It is evident that equation (13) is the hypergeometric differential equation, which admits the solution $F(y) = {}_2F_1(a, b, c; y)$, where $a = -n$ and/or $b = -n$ would ensure that the hypergeometric power series truncates to a polynomial of order $n \ge 0$. Furthermore, note that $\alpha, \beta \ge 1/2$, as can be seen from

$\alpha = \frac{1}{4}\left(\sqrt{4(m + \tilde{B})^2 + 1} + 1\right), \qquad \beta = \frac{1}{4}\left(\sqrt{4(m - \tilde{B})^2 + 1} + 1\right)$.  (15)

It is important to recognize that the term $\alpha + \beta$ in a and b of (14) remains unchanged for $m = \pm|m|$, where $\alpha|_{\pm|m|} = \beta|_{\mp|m|}$. To verify the uniqueness of $F(y) = {}_2F_1(a, b, c; y)$ as a solution of (13), we consider a power series expansion of the form

$F(y) = \sum_{j=0}^{\infty} C_j\, y^{j+\sigma}$,  (16)

which yields

$\sum_{j=0}^{\infty} C_j \left[ab + (j + \sigma)(j + \sigma + a + b)\right] y^{j+\sigma} = \sum_{j=0}^{\infty} C_{j+1} \left[(j + \sigma + 1)(j + \sigma + c)\right] y^{j+\sigma} + C_0\, \sigma \left[\sigma + c - 1\right] y^{\sigma - 1}$.  (17)

Since $C_0 \neq 0$, it follows that $\sigma\left[\sigma + c - 1\right] = 0$. Consequently, $\sigma = 0$ or $\sigma = -c + 1 = -2\beta + 1/2$. The latter case would cause $F(y) \to \infty$ as $y \to 0$ (i.e., at $\rho = \pi \in [0, \pi]$), and must therefore be excluded. We thus adopt $\sigma = 0$, leading to the recurrence relation

$C_{j+1}\left[(j + 1)(j + c)\right] = C_j\left[ab + j(j + a + b)\right]$.  (18)

To truncate the power series into a polynomial of degree $n \ge 0$, we impose the condition $C_{n+1} = 0$ while ensuring $C_n \neq 0$. This requirement yields $ab + n(n + a + b) = 0 \Rightarrow a = -n$ and/or $b = -n$. As a result, we obtain $\tilde{\lambda} = (n + \alpha + \beta)^2$. Consequently, the energy expression becomes

$f_0(u)^2 E^2 = f_1(u)^2\, G_{nm} + m_\circ^2; \qquad G_{nm} = 2\Lambda\left[(n + \alpha + \beta)^2 - \tilde{B}^2 - \frac{1}{4}\right] + k^2$.  (19)
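As an independent consistency check on Eqs. (10), (15), and (19), the following sketch (our own, with illustrative parameter values m = 1, $\tilde B$ = 2 and a modest grid) compares the analytic eigenvalue $\tilde\lambda = (n+\alpha+\beta)^2$ with a finite-difference diagonalization of the Schrödinger-like operator on $\rho \in [0, \pi]$ with $U(0) = U(\pi) = 0$, and prints $G_{nm}$, whose near m-independence at large $\tilde B$ is the collapse onto the m = 0 level discussed below.

    import numpy as np

    def alpha_beta(m, Bt):
        """alpha and beta of Eq. (15)."""
        al = 0.25 * (np.sqrt(4.0 * (m + Bt)**2 + 1.0) + 1.0)
        be = 0.25 * (np.sqrt(4.0 * (m - Bt)**2 + 1.0) + 1.0)
        return al, be

    def lam_exact(n, m, Bt):
        """Analytic eigenvalue of Eq. (10): lambda = (n + alpha + beta)^2."""
        al, be = alpha_beta(m, Bt)
        return (n + al + be)**2

    def G_nm(n, m, Bt, Lam, k=0.0):
        """G_{nm} of Eq. (19)."""
        al, be = alpha_beta(m, Bt)
        return 2.0 * Lam * ((n + al + be)**2 - Bt**2 - 0.25) + k**2

    # finite-difference check of Eq. (10) with Dirichlet walls at rho = 0, pi
    m, Bt = 1, 2.0
    eta2 = m**2 + Bt**2
    N = 2000
    rho = np.linspace(0.0, np.pi, N + 2)[1:-1]    # interior grid points
    h = rho[1] - rho[0]
    V = (eta2 - 0.25) / np.sin(rho)**2 + 2.0 * m * Bt * np.cos(rho) / np.sin(rho)**2
    H = np.diag(2.0 / h**2 + V)
    H -= np.diag(np.ones(N - 1) / h**2, 1) + np.diag(np.ones(N - 1) / h**2, -1)
    print([lam_exact(n, m, Bt) for n in range(3)])   # analytic values
    print(np.linalg.eigvalsh(H)[:3])                 # numerical, close agreement

    # large-Btilde collapse: alpha + beta ~ Btilde + 1/2 independently of m,
    # so G_{nm} becomes nearly m-independent
    print([round(G_nm(0, mm, 50.0, 0.1), 3) for mm in range(5)])

The last line makes the degeneracy mechanism transparent: for $\tilde B \gg |m|$ both square roots in Eq. (15) are dominated by $\tilde B$, so $\alpha + \beta \approx \tilde B + 1/2$ regardless of m.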
We observe that the invariance of $\alpha + \beta$ under the transformation $m = \pm|m|$ is reflected in the spectroscopic structure of KG particles and antiparticles in (19). Specifically, the spectrum exhibits degeneracy with respect to the magnetic quantum number $m = \pm|m|$, arising from the identity $\alpha|_{\pm|m|} = \beta|_{\mp|m|}$. Furthermore, since $F(y) = {}_2F_1(a, b, c; y) = \sum_{j=0}^{n} C_j\, y^j$, the full solution reads

$U_{nm}(x) \sim (x - 1)^\alpha (x + 1)^\beta \sum_{j=0}^{n} C_j \left(\frac{x + 1}{2}\right)^j \;\Rightarrow\; U_{nm}(\rho) = \mathcal{N}\, (\cos\rho - 1)^\alpha (\cos\rho + 1)^\beta \sum_{j=0}^{n} C_j \left(\cos\frac{\rho}{2}\right)^{2j}$,

$R_{nm}(\rho) = \mathcal{N}\, (\cos\rho - 1)^{\alpha - 1/4} (\cos\rho + 1)^{\beta - 1/4} \sum_{j=0}^{n} C_j \left(\cos\frac{\rho}{2}\right)^{2j}$.  (20)

Here, the coefficients $C_j$ are given by the recurrence relation in (18), with α and β defined in (15). This result will be used in the following analysis to examine the effects of RG corrections on the spectroscopic structure of KG particles and antiparticles in BM spacetime subjected to a uniform external magnetic field. It is clear that both radial functions $R(\rho)$ and $U(\rho)$ vanish at the surfaces $\rho = 0$ and $\rho = \pi$, in agreement with the standard behavior of quantum wave functions for particles confined within an infinite potential well of width π.

III. RAINBOW GRAVITY EFFECTS

In this section, we investigate the influence of RG on KG particles and antiparticles by employing two distinct classes of rainbow functions. The first class, originally proposed within the framework of the varying speed of light hypothesis [58], is defined by

$f_0(u) = \frac{1}{1 - \epsilon u} = \frac{1}{1 - \tilde{\epsilon}|E|}, \qquad f_1(u) = 1$,

where $u = |E|/E_p$ denotes a dimensionless energy scale normalized by the Planck energy $E_p$, and $\tilde{\epsilon} = \epsilon/E_p$ is the corresponding dimensionless rainbow parameter. The second class, inspired by considerations of LQG [59, 60], is given by

$f_0(u) = 1, \qquad f_1(u) = \sqrt{1 - \tilde{\epsilon}|E|^\nu}, \qquad \nu = 1, 2$,

with the admissible energy range limited to $0 \le u = |E|/E_p \le 1$. These functional forms introduce energy-dependent deformations to the spacetime geometry, encapsulating potential quantum gravitational corrections at trans-Planckian scales.

FIG. 1. The figure displays the energy levels of KG particles and antiparticles given by (21). Specifically, we plot: (a) E versus $\tilde\epsilon$ for n = 0, m = 0, 1, 2, 3, 4, Λ = 0.5, and $B_\circ$ = 1 = e; (b) E versus $B_\circ$ for n = 0, m = 0, 1, 2, 3, 4, $\tilde\epsilon$ = 0.5, and Λ = 0.1; (c) E versus $B_\circ$ for n = 2, m = 0, 1, 2, 3, 4, $\tilde\epsilon$ = 0.5, and Λ = 0.1; (d) E versus the cosmological constant Λ for n = 2, m = 0, 1, 2, 3, 4, $\tilde\epsilon$ = 0.5, and $B_\circ$ = 4; (e) E versus Λ for n = 0, 1, 2, 3, 4, m = 2, $\tilde\epsilon$ = 0.5, and $B_\circ$ = 4; and (f) E versus Λ for n = 2, m = 0, 1, 2, 3, 4, $\tilde\epsilon$ = 0.5, and $B_\circ$ = 1.

A. Magueijo-Smolin Rainbow Functions

This set of rainbow functions, $f_0(u) = 1/(1 - \tilde{\epsilon}|E|)$, $f_1(u) = 1$ (with $\tilde{\epsilon} = \epsilon/E_p$), is motivated by the varying speed of light hypothesis [58]. Substituting this set into the result (19) leads to

$E^2 = \tilde{G}_{nm}\left(1 - \tilde{\epsilon}|E|\right)^2; \qquad \tilde{G}_{nm} = G_{nm} + m_\circ^2$.  (21)

It is important to recall that $|E| = E_+$ is used for test particles, while $|E| = -E_-$ is used for antiparticles. This distinction leads to two separate equations:

$E_+^2\left(1 - \tilde{\epsilon}^2 \tilde{G}_{nm}\right) + 2\tilde{\epsilon}\, \tilde{G}_{nm}\, E_+ - \tilde{G}_{nm} = 0$  (22)

for particles, and

$E_-^2\left(1 - \tilde{\epsilon}^2 \tilde{G}_{nm}\right) - 2\tilde{\epsilon}\, \tilde{G}_{nm}\, E_- - \tilde{G}_{nm} = 0$  (23)

for antiparticles. These can be combined to yield the general result:

$E_{n,m} = \pm\frac{\sqrt{\tilde{G}_{nm}}}{1 + \tilde{\epsilon}\sqrt{\tilde{G}_{nm}}}$.  (24)

This result suggests a clear symmetrization of the KG particle and antiparticle energies with respect to E = 0, as also observed in Figure 1. However, it should be noted that the asymptotic behavior of $|E_\pm|$ as $B_\circ \to \infty$ approaches a limiting value $|E_\pm| \sim 2 = 1/\tilde{\epsilon} = E_p/\epsilon$, as seen in Figs. 1(b) and 1(c). Consequently, it follows that $|E_\pm| \le E_p \Rightarrow |E_\pm|_{max} = E_p$ when $\epsilon = 1$.
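A direct implementation of Eq. (24), a minimal sketch of ours reusing alpha_beta and G_nm from the previous block (parameter values are again our illustrative choices, with e = 1), makes the Planck-scale saturation explicit:

    import numpy as np

    def E_MS(n, m, Bt, Lam, m0=1.0, eps=0.5, k=0.0):
        """Magueijo-Smolin levels, Eq. (24): E = +/- sqrt(Gt)/(1 + eps*sqrt(Gt))."""
        Gt = G_nm(n, m, Bt, Lam, k) + m0**2      # Gtilde_{nm} of Eq. (21)
        r = np.sqrt(Gt)
        return r / (1.0 + eps * r), -r / (1.0 + eps * r)

    # |E| increases toward 1/eps = 2 (i.e. E_p for eps = 1) as B0 grows; Lam = 0.1
    for B0 in (1.0, 10.0, 100.0, 1000.0):
        E_particle, _ = E_MS(0, 0, Bt=B0 / (2 * 0.1), Lam=0.1)
        print(B0, E_particle)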
Although it is known that the cosmological constant typically satisfies $0 < \Lambda \ll 1$ (i.e., $\Lambda = 1.4657 \times 10^{-52}\,\mathrm{m}^{-2} = 3.827 \times 10^{-123}\, l_p^{-2}$, where $l_p$ is the Planck length), we have extended this range and considered larger values of Λ purely for hypothetical exploration. Although hypothetical, this allows us to probe theoretical possibilities and emphasizes that the Planck energy $E_p$ is the maximum possible energy for particles and antiparticles (a natural consequence of the RG proposal) and is yet another invariant, alongside the speed of light, as clearly illustrated in Figs. 1(d), 1(e), and 1(f). According to standard quantum mechanics, one would expect that a magnetic field lifts the degeneracy associated with the magnetic quantum number $m = \pm|m|$, at least partially. Interestingly, contrary to this expectation, we observe no such effect. In fact, as the magnetic field strength increases, all m-states appear to collapse into the m = 0 state for a given n. However, the cosmological constant Λ lifts the degeneracy associated with m, even within its standard range $0 < \Lambda < 1$, as clearly shown in Figs. 1(d), 1(e) and 1(f). We attribute this to the uniform nature of the magnetic field employed here. In our recent study [55] of KG particles in a non-uniform external magnetic field within the BM-RG framework, such surprising degeneracy behaviors were not observed.

B. Rainbow function pair $f_0(u) = 1$, $f_1(u) = \sqrt{1 - \tilde{\epsilon}|E|^\nu}$; $\nu = 1, 2$

This pair of rainbow functions is motivated by LQG [59, 60] and has been shown to be fully consistent with the RG principles, thus ensuring invariance of the Planck energy $E_p$. This energy scale represents the maximum attainable energy for both particles and antiparticles (see, e.g., [53, 54, 61]). Accordingly, it is of particular interest to explore the behavior of such rainbow function pairs when applied to KG particles and antiparticles in a magnetized BM spacetime.

FIG. 2. The figure illustrates the energy levels of KG particles and antiparticles as given by Eq. (25). Specifically, the plots show: (a) E as a function of $\tilde\epsilon$ for n = 0, m = 0, 1, 2, 3, 4, Λ = 0.5, and $B_\circ$ = 1 = e; (b) E as a function of $B_\circ$ for n = 0, m = 0, 1, 2, 3, 4, Λ = 0.1, and $\tilde\epsilon$ = 0.5; (c) E as a function of $B_\circ$ for n = 1, m = 0, 1, 2, 3, 4, Λ = 0.1, and $\tilde\epsilon$ = 0.5; (d) E as a function of the cosmological constant Λ for n = 0, m = 0, 1, 2, $B_\circ$ = 4, and $\tilde\epsilon$ = 0.5.

FIG. 3. The figure shows the energy levels for KG particles and antiparticles given by (28); we plot (a) E against $\tilde\epsilon$ for n = 0, m = 0, 1, 2, 3, 4, Λ = 0.5, and $B_\circ$ = 1 = e, (b) E against $B_\circ$ for n = 0 and m = 0, 1, 2, 3, 4, Λ = 0.1, and $\tilde\epsilon$ = 0.5, (c) E against $B_\circ$ for n = 1 and m = 0, 1, 2, 3, 4, Λ = 0.1, and $\tilde\epsilon$ = 0.5, and (d) E against Λ for n = 0 and m = 0, 1, 2, $B_\circ$ = 4, and $\tilde\epsilon$ = 0.5.

We begin by considering the case ν = 1, which, through (19), leads to the following expression:

$E^2 + \tilde{\epsilon}\, G_{nm}\, |E| - \tilde{G}_{nm} = 0$.  (25)

The corresponding energy solutions are given by

$E_+ = -\frac{\tilde{\epsilon}\, G_{nm}}{2} + \frac{1}{2}\sqrt{\tilde{\epsilon}^2 G_{nm}^2 + 4\tilde{G}_{nm}}$  (26)

and

$E_- = \frac{\tilde{\epsilon}\, G_{nm}}{2} - \frac{1}{2}\sqrt{\tilde{\epsilon}^2 G_{nm}^2 + 4\tilde{G}_{nm}}$.  (27)

These energy levels are illustrated in Figure 2. It is evident that, as the cosmological constant $\Lambda \to \infty$, equation (25) implies that the maximum energy, $|E|_{max}$, tends to $1/\tilde{\epsilon} = 2$ (for $\tilde{\epsilon} = 0.5$). This indicates an asymptotic saturation of the energy levels of KG particles and antiparticles in the BM spacetime, as clearly observed in Figure 2. Next, we consider the case ν = 2, which yields the following result:

$E^2 = \frac{\tilde{G}_{nm}}{1 + \tilde{\epsilon}\, G_{nm}} \;\Rightarrow\; E_\pm = \pm\sqrt{\frac{\tilde{G}_{nm}}{1 + \tilde{\epsilon}\, G_{nm}}}$.  (28)
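Both LQG-inspired branches admit an equally compact numerical sketch (ours, reusing G_nm from the earlier block; $\tilde\epsilon$ = 0.5 and the other parameter values match Figs. 2 and 3), which also exhibits the expected bounds $1/\tilde\epsilon = 2$ for ν = 1 and $\sqrt{1/\tilde\epsilon} = \sqrt{2}$ for ν = 2:

    import numpy as np

    def E_LQG(n, m, Bt, Lam, nu=1, m0=1.0, eps=0.5, k=0.0):
        """Eqs. (26)-(27) for nu = 1 and Eq. (28) for nu = 2."""
        G = G_nm(n, m, Bt, Lam, k)
        Gt = G + m0**2
        if nu == 1:
            d = np.sqrt(eps**2 * G**2 + 4.0 * Gt)
            return (-eps * G + d) / 2.0, (eps * G - d) / 2.0
        r = np.sqrt(Gt / (1.0 + eps * G))
        return r, -r

    # saturation with growing Lambda (B0 = 4, e = 1, hence Btilde = 4/(2*Lam))
    for Lam in (0.1, 1.0, 10.0, 100.0):
        e1, _ = E_LQG(0, 0, Bt=4.0 / (2 * Lam), Lam=Lam, nu=1)
        e2, _ = E_LQG(0, 0, Bt=4.0 / (2 * Lam), Lam=Lam, nu=2)
        print(Lam, e1, e2)   # e1 -> 1/eps = 2, e2 -> sqrt(1/eps) ~ 1.41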
The corresponding energy levels are shown in Figure 3. This plot further confirms the asymptotic behavior of the energy spectrum as the cosmological constant $\Lambda \to \infty$. As indicated by (28), the maximum energy in this case is given by $|E|_{max} \sim \sqrt{1/\tilde{\epsilon}} = \sqrt{2} \approx 1.41$. This behavior is in full agreement with the analytical expression and is visually represented in Figure 3. Notably, the same behavior observed for the Magueijo-Smolin rainbow functions in Figure 1 re-emerges in Figures 2 and 3. Remarkably, an increase in the uniform magnetic field strength causes all m-states to collapse into the m = 0 state for a given radial quantum number n. However, as the cosmological constant Λ (of the magnetized BM spacetime) increases, the m-related degeneracies are partially lifted, although the degeneracy between states with $m = \pm|m|$ remains preserved as $\Lambda \to 1$. Interestingly, such behavior is absent in the case of scalar KG bosonic particles in a non-uniform magnetic field within BM spacetime, as discussed in [55]. The three sets of rainbow functions examined here, a Magueijo-Smolin set and two sets motivated by LQG, are all found to be consistent with the RG-inherited Planck energy invariance.

IV. CONCLUDING REMARKS

In this comprehensive investigation, we have conducted an exact analytical study of KG bosons and antibosons subjected to an external uniform magnetic field in the background of the BM spacetime, modified by the framework of RG and endowed with a positive cosmological constant. This work builds upon recent advances in quantum field theory in curved spacetimes, providing a detailed exploration of how gravitational confinement, topological structures, magnetic interactions, and QG-inspired modifications affect the energy-momentum dispersion relations. A key aspect of our analysis is that the geometry of the BM spacetime naturally induces an infinite sequence of infinite potential barriers, appearing at discrete surfaces determined by the expression $\rho = \sqrt{2\Lambda}\, r = \tau\pi$, with $\tau = 0, 1, 2, \ldots$. These potential barriers act as impenetrable boundaries, dividing spacetime into an infinite number of confinement regions, each similar to an ideal infinite quantum well with rigid boundaries. Within the first of these confinement domains, we have obtained exact, closed-form solutions to the KG equation, expressed using hypergeometric polynomials for the radial wave functions.

FIG. 4. Radial wave functions $R_{n,m}(\rho)$ in (20) for quantum numbers n = 0, 1, 2 and m = 0, 1, 2 under two magnetic field strengths, $B_\circ$ = 1 (left column) and $B_\circ$ = 10 (right column). Each subplot depicts the behavior of $R_{n,m}(\rho)$ over the domain $\rho \in [0, \pi]$, illustrating how the magnetic field intensity affects the spatial distribution of scalar field modes in the magnetized BM universe.

FIG. 5. Radial probability density distributions $P_{n,m}(\rho)$ are presented for quantum states with n = 0, 1, 2 and m = 0, 1, 2, where $P_{n,m}(\rho) = \int_0^\rho |R_{n,m}(\rho')|^2\, \rho'\, d\rho'$. The wave functions $R_{n,m}(\rho)$ are computed under a transverse magnetic field with effective coupling $\tilde{B} = eB_\circ/(2\Lambda)$, using e = 1, $B_\circ$ = 1, and Λ = 0.1. The surface plots are mapped to Cartesian coordinates via x = ρ cos φ, y = ρ sin φ, where both the vertical axis and color scale represent the magnitude of $P_{n,m}(\rho)$.
This framework allowed us to precisely determine the energy spectrum and examine the effects of external fields and RG corrections. Our results reveal several notable features regarding the spectroscopic structure of the system. The inclusion of RG effects, through both Magueijo-Smolin rainbow functions and those inspired by LQG, significantly influences the energy spectrum. In particular, the uniform external magnetic field, which normally lifts the degeneracies associated with the magnetic quantum number m, behaves in an unexpected way within this curved spacetime setting. Specifically, the combined effect of the magnetic field and the BM spacetime geometry causes all magnetic quantum states m to collapse onto the m = 0 state for a fixed radial quantum number n. This behavior differs from what is seen in flat-spacetime quantum mechanics and highlights how spacetime curvature and topology can dominate the spectral properties of quantum fields. Furthermore, our analysis shows that the cosmological constant Λ, which is usually important at large (cosmological) scales, also has a direct effect on the local quantum behavior of the system. When Λ increases beyond physically realistic values, a partial lifting of the degeneracies associated with the magnetic quantum number is observed. This indicates a subtle connection between the global geometry of spacetime and the local quantum characteristics of relativistic particles.

The study also confirms the consistency of the rainbow functions used with the RG principles. In both the Magueijo-Smolin and LQG-inspired cases, the Planck energy $E_p$ appears clearly as the highest possible energy for particles and antiparticles. This maintains the theoretical framework of QG phenomenology and keeps the symmetry of the particle and antiparticle energy spectra around E = 0. It is important to note that the unusual degeneracy behavior caused by the uniform magnetic field is not observed in cases with non-uniform magnetic fields in the same BM-RG spacetime [55]. This difference emphasizes that both the magnetic field configuration and the geometry of spacetime influence the spectral and confinement properties of quantum fields.

FIG. 6. Radial probability density distributions $P_{n,m}(\rho)$ are shown for quantum states labeled by (n, m), representing the eigenstates of a charged particle confined to a curved surface under a transverse magnetic field. The system parameters are fixed as e = 1, Λ = 0.1, and $B_\circ$ = 10, corresponding to a strong dimensionless magnetic coupling $\tilde{B} = eB_\circ/(2\Lambda) \gg 1$. The plots display the spatial distributions of $P_{n,m}(\rho)$ for discrete states with n = 0, 1, 2 and m = 0, 1, 2, rendered as three-dimensional surfaces in Cartesian coordinates via the polar mapping x = ρ cos φ, y = ρ sin φ, with height and colormap indicating the magnitude of the probability density.

With reference to the radial wave functions given in equation (20), we plot in Figure 4 the radial wave functions $R_{n,m}(\rho)$ for quantum numbers n = 0, 1, 2 and m = 0, 1, 2 at two magnetic field strengths, $B_\circ$ = 1 and 10. As the magnetic field strength increases, the radial wave functions, representing KG particles and antiparticles, become increasingly localized. For $B_\circ$ = 1, the particles are mainly distributed between ρ ∼ 0.5 and ρ ∼ 3, while for $B_\circ$ = 10, they are restricted to the region ρ ∼ 1 to ρ ∼ 2. This localization effect is further illustrated in the corresponding radial probability densities shown in Figures 5 and 6.
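The profiles in Figure 4 can be reproduced directly from Eqs. (18)-(20). The sketch below is ours (reusing alpha_beta from the earlier block): it builds the truncated series with b = -n and a = 2(α+β)+n, and evaluates $(1-\cos\rho)^{\alpha-1/4}$ rather than $(\cos\rho-1)^{\alpha-1/4}$ so that the result stays real, the overall sign and normalization being absorbed into the constant N.

    import numpy as np

    def radial_R(rho, n, m, Bt):
        """Unnormalized R_{nm}(rho) of Eq. (20); alpha_beta as defined earlier."""
        al, be = alpha_beta(m, Bt)
        a, b, c = 2.0 * (al + be) + n, -float(n), 2.0 * be + 0.5
        C = [1.0]
        for j in range(n):                  # recurrence (18); C_{n+1} = 0 follows
            C.append(C[j] * (a * b + j * (j + a + b)) / ((j + 1.0) * (j + c)))
        y = np.cos(rho / 2.0)**2            # y = (1 + cos rho)/2
        series = sum(Cj * y**j for j, Cj in enumerate(C))
        return ((1.0 - np.cos(rho))**(al - 0.25)
                * (1.0 + np.cos(rho))**(be - 0.25) * series)

    rho = np.linspace(0.0, np.pi, 801)
    R = radial_R(rho, n=2, m=1, Bt=1.0 / (2 * 0.1))   # e = 1, B0 = 1, Lam = 0.1
    print(R[0], R[-1])                                # both vanish at the walls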
Radial probability densities are defined as

$P_{n,m}(\rho) = \int_0^{\rho} |R_{n,m}(\rho')|^2\, \rho'\, d\rho'$,

and are visualized in two dimensions via x = ρ cos φ and y = ρ sin φ, where the color intensity represents the magnitude of $P_{n,m}$. The parameter $\tilde{B} = eB_\circ/(2\Lambda)$ characterizes the effect of the external magnetic field on the quantum states ($B_\circ$ = 1 in Figure 5 and $B_\circ$ = 10 in Figure 6). As the radial quantum number n increases, the radial probability distributions develop additional nodal rings, corresponding to higher energy states with more radial nodes. For n = 0, the density is mainly concentrated near the origin, while for n = 1, 2, more complex ring-like structures emerge. For fixed n, increasing the azimuthal quantum number m pushes the probability density outward due to the centrifugal barrier, broadening the spatial extent of the wave function. Importantly, increasing the magnetic field strength $B_\circ$ enhances the magnetic confinement, effectively compressing the probability densities toward smaller radial distances and strengthening localization near the origin. The magnetic field acts as a confining potential that counteracts the outward push caused by larger m values, resulting in sharper and more localized radial peaks. Stronger magnetic fields also slightly shift the nodal positions inward, modifying the spatial probability distribution. These results illustrate how tuning the magnetic field allows control over quantum confinement and the spatial characteristics of the system's eigenstates. Throughout all states, rotational symmetry is preserved, consistent with conservation of angular momentum. The observed evolution of probability densities with increasing n, m, and $B_\circ$ provides a quantitative understanding of how quantum numbers and magnetic field strength shape spatial quantum states under magnetic confinement.
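The cumulative densities of Figures 5 and 6 follow from the definition above by quadrature; a minimal sketch of ours, reusing radial_R from the previous block and normalizing so that $P_{n,m}(\pi) = 1$:

    import numpy as np

    rho = np.linspace(0.0, np.pi, 4001)
    dens = np.abs(radial_R(rho, n=1, m=1, Bt=10.0 / (2 * 0.1)))**2 * rho
    # cumulative trapezoid for P_{n,m}(rho) = int_0^rho |R|^2 rho' drho'
    P = np.concatenate(([0.0],
                        np.cumsum(0.5 * (dens[1:] + dens[:-1]) * np.diff(rho))))
    P /= P[-1]    # normalize the total radial probability to unity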
This work provides an exact and detailed description of the KG boson dynamics in the BM-RG spacetime, showing how gravitational confinement, topological features, magnetic fields, and RG corrections influence the energy spectrum and wave functions. These results contribute to the theoretical understanding of quantum fields in curved and magnetized backgrounds and suggest potential areas for further study. Future work could include extending the analysis to fermionic or higher-spin fields, exploring analogous phenomena in optical or condensed-matter systems, and examining how these effects might appear in astrophysical or cosmological contexts. Although primarily theoretical, the findings help clarify how QG-inspired modifications could affect particle behavior in curved spacetimes.

Moreover, it should be noted that the BM spacetime considered here is an exact solution of the Einstein-Maxwell equations, describing the self-consistent interaction between a magnetic field and spacetime geometry. With its idealized cylindrical symmetry and infinitely extended magnetic flux tube, it should be regarded primarily as an analytical model rather than a physically realized configuration. Nevertheless, it is useful for capturing several qualitative effects in strongly magnetized, curved environments, such as the geometric confinement of scalar bosons, curvature-dependent energy spectra, and the collapse of magnetic sublevels. Near magnetars, neutron stars, or magnetized accretion disks, similar interactions between strong magnetic fields and spacetime curvature can influence particle localization and energy levels. The inclusion of rainbow gravity corrections further introduces Planck-scale deformations of the dispersion relation, providing a framework to explore high-energy effects. Thus, although not itself physically realized, the BM spacetime serves as a theoretical laboratory for studying how gravity, magnetism, and rainbow gravity effects jointly influence quantum fields in curved, magnetized backgrounds.

[1] A. Einstein, "Die Grundlage der allgemeinen Relativitätstheorie," Ann. Phys. (Berlin) 49, 769 (1916)
[2] L. Parker, "One-electron atom in curved space-time," Physical Review Letters 44, 1559 (1980)
[3] C. Thompson, R.C. Duncan, "The soft gamma repeaters as very strongly magnetized neutron stars - I. Radiative mechanism for outbursts," Monthly Notices of the Royal Astronomical Society 275, 255-300 (1995)
[4] C. Kouveliotou, S. Dieters, T. Strohmayer, J. Van Paradijs, G.J. Fishman, C.A. Meegan, K. Hurley, J. Kommers, I. Smith, D. Frail and others, "An X-ray pulsar with a superstrong magnetic field in the soft γ-ray repeater SGR1806-20," Nature 393, 235-237 (1998)
[5] U. Gürsoy, D. Kharzeev, K. Rajagopal, "Magnetohydrodynamics, charged currents, and directed flow in heavy ion collisions," Physical Review C 89, 054905 (2014)
[6] A. Bzdak, V. Skokov, "Event-by-event fluctuations of magnetic and electric fields in heavy ion collisions," Physics Letters B 710, 171-174 (2012)
[7] V. Voronyuk, V.D. Toneev, W. Cassing, E.L. Bratkovskaya, V.P. Konchakovski, S.A. Voloshin, "Electromagnetic field evolution in relativistic heavy-ion collisions," Physical Review C 83, 054911 (2011)
[8] M. Žofka, "Bonnor-Melvin universe with a cosmological constant," Physical Review D 99, 044058 (2019)
[9] W.B. Bonnor, "Static magnetic fields in general relativity," Proceedings of the Physical Society, Section A 67, 225 (1954)
[10] M.A. Melvin, "Pure magnetic and electric geons," Physics Letters 8, 65-68 (1964)
[11] F. Ahmed, A. Bouzenada, "Effects of gravity rainbow on scalar bosonic and oscillator fields in Bonnor-Melvin space-time with a cosmological constant," European Physical Journal C 84, 1045 (2024)
[12] F. Ahmed, A. Bouzenada, "Scalar fields in Bonnor-Melvin-Lambda universe with potential: a study of dynamics of spin-zero particles-antiparticles," Physica Scripta 99, 065033 (2024)
[13] T.W.B. Kibble, "Some implications of a cosmological phase transition," Physics Reports 67, 183-199 (1980)
[14] A. Vilenkin, "Gravitational field of vacuum domain walls and strings," Physical Review D 23, 852 (1981)
[15] A. Vilenkin, "Cosmic strings and domain walls," Physics Reports 121, 263-315 (1985)
[16] E.A.F. Bragança, R.L.L. Vitória, H. Belich, E.R.B. de Mello, "Relativistic quantum oscillators in the global monopole spacetime," European Physical Journal C 80, 1-11 (2020)
[17] S. Ölmez, V. Mandic, X. Siemens, "Gravitational-wave stochastic background from kinks and cusps on cosmic strings," Physical Review D 81, 104028 (2010)
[18] L.C.N. Santos, C.C. Barros Jr, "Scalar bosons under the influence of noninertial effects in the cosmic string spacetime," European Physical Journal C 77, 186 (2017)
[19] A.L. Cavalcanti de Oliveira, E.R. Bezerra de Mello, "Exact solutions of the Klein-Gordon equation in the presence of a dyon, magnetic flux and scalar potential in the spacetime of gravitational defects," Classical and Quantum Gravity 23, 5249 (2006)
[20] M. Hosseinpour, F.M. Andrade, E.O. Silva, H. Hassanabadi, "Scattering and bound states for the Hulthén potential in a cosmic string background," European Physical Journal C 77, 1-6 (2017)
[21] L.C.N. Santos, C.C. Barros, "Relativistic quantum motion of spin-0 particles under the influence of noninertial effects in the cosmic string spacetime," European Physical Journal C 78, 1-8 (2018)
[22] F.A.C. Neto, F.M. Da Silva, L.C.N. Santos, L.B. Castro, "Scalar bosons with Coulomb potentials in a cosmic string background: scattering and bound states," European Physical Journal Plus 135, 1-11 (2020)
[23] M.O. Katanaev, I.V. Volovich, "Theory of defects in solids and three-dimensional gravity," Annals of Physics 216, 1-28 (1992)
[24] R.A. Puntigam, H.H. Soleng, "Volterra distortions, spinning strings, and cosmic defects," Classical and Quantum Gravity 14, 1129 (1997)
[25] W.C.F. da Silva, K. Bakke, R.L.L. Vitória, "Non-relativistic quantum effects on the harmonic oscillator in a spacetime with a distortion of a vertical line into a vertical spiral," European Physical Journal C 79, 1-5 (2019)
[26] R.L.L. Vitória, "Noninertial effects on a scalar field in a spacetime with a magnetic screw dislocation," European Physical Journal C 79, 844 (2019)
[27] A. Guvendi, S. Zare, H. Hassanabadi, "Exact solution for a fermion-antifermion system with Cornell type nonminimal coupling in the topological defect-generated spacetime," Physics of the Dark Universe 38, 101133 (2022)
[28] A. Guvendi, "Evolution of an interacting fermion-antifermion pair in the near-horizon of the BTZ black hole," European Physical Journal C 84, 185 (2024)
[29] A. Guvendi, O. Mustafa, "Energy Symmetry Breaking of Dirac and Weyl Fermions in Magnetized Spinning Conical Geometries," Advanced Theory and Simulations, e00451 (2025)
[30] S. Gurtas Dogan, O. Mustafa, A. Guvendi, "Charged fermions and vector bosons in magnetic fields within a spacetime generated by a spinning point source," Nuclear Physics B 1016, 116918 (2025)
[31] A. Guvendi, O. Mustafa, "Fermion-antifermion pairs in magnetized spacetime generated by a point source," Nuclear Physics B 1011, 116803 (2025)
[32] A. Vilenkin, "Gravitational field of vacuum domain walls," Physics Letters B 133, 177-179 (1983)
[33] B. Linet, "The static metrics with cylindrical symmetry describing a model of cosmic strings," General Relativity and Gravitation 17, 1109-1115 (1985)
[34] M. Barriola, A. Vilenkin, "Gravitational field of a global monopole," Physical Review Letters 63, 341 (1989)
[35] M. Astorino, "Charging axisymmetric space-times with cosmological constant," Journal of High Energy Physics 2012, 1-15 (2012)
[36] J. Veselý, M. Žofka, "Cosmological magnetic field: The boost-symmetric case," Physical Review D 100, 044059 (2019)
[37] A. Guvendi, O. Mustafa, "Fermion-antifermion pairs in a magnetized space-time with non-zero cosmological constant," Nuclear Physics B 1004, 116571 (2024)
[38] A. Guvendi, F. Ahmed, S.G. Dogan, "Relativistic fermions and vector bosons in magnetized three-dimensional space-time with a cosmological constant," Nuclear Physics B 1004, 116569 (2024)
[39] J. Magueijo, L. Smolin, "Lorentz invariance with an invariant energy scale," Physical Review Letters 88, 190403 (2002)
[40] P. Galán, G.A.M. Marugán, "Quantum time uncertainty in a gravity's rainbow formalism," Physical Review D 70, 124003 (2004)
[41] G. Amelino-Camelia, "Relativity in spacetimes with short-distance structure governed by an observer-independent (Planckian) length scale," International Journal of Modern Physics D 11, 35-59 (2002)
[42] G. Amelino-Camelia, "Doubly-special relativity: first results and key open problems," International Journal of Modern Physics D 11, 1643-1669 (2002)
[43] M. Hosseinpour, H. Hassanabadi, J. Kriz, S. Hassanabadi, B.C. Lütfüoğlu, "Interaction of the generalized Duffin-Kemmer-Petiau equation with a nonminimal coupling under the cosmic rainbow gravity," International Journal of Geometric Methods in Modern Physics 18, 2150224 (2021)
[44] G. Amelino-Camelia, J. Ellis, N.E. Mavromatos, D.V. Nanopoulos, S. Sarkar, "Tests of quantum gravity from observations of γ-ray bursts," Nature 393, 763-765 (1998)
[45] O. Mustafa, "PDM KG-Coulomb particles in cosmic string rainbow gravity spacetime and a uniform magnetic field," Physics Letters B 839, 137793 (2023)
[46] V.A. Kostelecký, S. Samuel, "Spontaneous breaking of Lorentz symmetry in string theory," Physical Review D 39, 683 (1989)
[47] R. Gambini, J. Pullin, "Nonstandard optics from quantum space-time," Physical Review D 59, 124021 (1999)
[48] S.M. Carroll, J.A. Harvey, V.A. Kostelecký, C.D. Lane, T. Okamoto, "Noncommutative field theory and Lorentz violation," Physical Review Letters 87, 141601 (2001)
[49] M. Korunur, S. Korunur, "Energy-momentum distributions of cylindrical black hole in rainbow gravity," International Journal of Geometric Methods in Modern Physics 21, 2450118 (2024)
[50] E. Battista, S. Capozziello, A. Errehymy, "Generalized uncertainty principle corrections in Rastall-Rainbow Casimir wormholes," European Physical Journal C 84, 1314 (2024)
[51] B. Dogan, M. Salti, O. Aydogdu, P. Rej, "Fermions moving around strong gravitational sources," International Journal of Geometric Methods in Modern Physics 22, 2450316 (2025)
[52] W. Ali, U. Sheikh, S. Ali, M.J. Amir, "Exploring the physical properties of strange star SAX J1808.4-3658 in rainbow gravity," International Journal of Geometric Methods in Modern Physics 21, 2450199 (2024)
[53] O. Mustafa, "KG-particles in a cosmic string rainbow gravity spacetime in mixed magnetic fields," European Physical Journal C 84, 362 (2024)
[54] O. Mustafa, "Massless KG-oscillators in Som-Raychaudhuri cosmic string spacetime in a fine tuned rainbow gravity," Nuclear Physics B 995, 116334 (2023)
[55] O. Mustafa, A. Guvendi, "Klein-Gordon particles in a nonuniform external magnetic field in Bonnor-Melvin rainbow gravity background," Nuclear Physics B 1018, 116998 (2025)
[56] O. Mustafa, A. Guvendi, "On the Klein-Gordon bosonic fields in the Bonnor-Melvin spacetime with a cosmological constant in rainbow gravity: Bonnor-Melvin Domain Walls," European Physical Journal C 85, 509 (2025)
[57] L.C. Barbosa, C.C. Barros, "Scalar bosons in Bonnor-Melvin-Λ universe: exact solution, Landau levels and Coulomb-like potential," Physica Scripta 100, 035302 (2025)
[58] J. Magueijo, L. Smolin, "Generalized Lorentz invariance with an invariant energy scale," Physical Review D 67, 044017 (2003)
[59] G. Amelino-Camelia, J. Ellis, N.E. Mavromatos, D.V. Nanopoulos, "Distance measurement and wave dispersion in a Liouville-string approach to quantum gravity," International Journal of Modern Physics A 12, 607-623 (1997)
[60] G. Amelino-Camelia, J. Ellis, N.E. Mavromatos, D.V. Nanopoulos, "Quantum-spacetime phenomenology," Living Reviews in Relativity 16, 1-137 (2013)
[61] O. Mustafa, A. Guvendi, "Klein-Gordon oscillators in traversable wormhole rainbow gravity spacetime: Conditional exact solvability via a throat radius and oscillator frequency correlation," International Journal of Geometric Methods in Modern Physics 22, 2550091 (2025)
arXiv:2510.14784v1 [cond-mat.other] 16 Oct 2025
Topological bands in metals

Yu. B. Kudasov*
Sarov Physics and Technology Institute NRNU "MEPhI", 6, str. Dukhov, Sarov, 607186, Russia
* yu kudasov@yahoo.com

In crystalline systems with a superstructure, the electron dispersion can form a nontrivial covering of the Brillouin zone. It is proved that the number of sheets in this covering and its monodromy are topological invariants under ambient isotopy. As a concrete manifestation of this nontrivial topology, we analyze three-sublattice models for 120°-ordered helimagnets in one, two, and three dimensions. The two-dimensional system exhibits unconventional f-wave magnetism and a specific topological metal state characterized by a spin-textured, one-sheeted Fermi surface. The observable transport signatures of the topological metal and its potential experimental realization are briefly discussed.

The topology of electronic band structures has been a central topic in solid-state physics in recent decades.[1, 2] Research in this area has largely focused on topological insulators and their edge states,[2, 3] as well as on systems with topological defects, such as Dirac and Weyl semimetals.[4] It has been shown that nontrivial topological properties give rise to observable transport phenomena, including highly mobile surface charge carriers, anomalous and nonlinear Hall effects.[5, 6] The Berry (geometric) phase [7] and derived quantities, such as the Berry vector potential and curvature, are key theoretical tools for describing these phenomena. Combined with symmetry analysis, they enable the derivation of topological invariants and the comprehensive classification of topological phases.[8]

Magnetic topological insulators and semimetals, including helimagnets, have also been studied.[9, 10] At the same time, specific nontrivial band structures in metallic helimagnets have been identified that are distinct from these established classes. In this Letter, we demonstrate that they are unrelated to the Berry phase or topological defects, and we illustrate this finding with simple examples of helimagnetic systems.

The electronic band structure of metallic helimagnets has been studied extensively.[11-16] Both approximate methods [11, 16, 17] and exact solution [18] have established that an effective helical magnetic field produces a characteristic gapless multiband structure.

Helical magnetic systems typically comprise two periodic structures. Let t be a primitive vector of the crystal lattice such that a translation by this vector rotates the magnetic (spin) system by an angle of $\alpha_m = 2\pi/m$ (m > 1). Here and below, we discuss only commensurate magnetic structures, i.e., $m \in \mathbb{N}$. The primitive translation vectors of the system as a whole are then determined by the magnetic superstructure (e.g., T = mt). Bloch's theorem allows for the construction of a magnetic Brillouin zone and yields periodic dispersion relations in reciprocal space. However, within the framework of spin space group theory,[12] a translation by vector t combined with a rotation of the magnetic system by an angle $\alpha_m$, being a symmetry operator ($\hat{t}\hat{r}_m$), leads to a generalized Bloch theorem [19] and an extended Brillouin zone. As shown below, the presence of the two commensurate periodic structures has profound consequences for the topology of the band structure.
It has been proved recently that the electron dispersion in commensurate helimagnets has a symmetry related to time reversal:[20]

$\varepsilon_{\mathbf{k},\langle\sigma\rangle} = \varepsilon_{-\mathbf{k},-\langle\sigma\rangle}$,  (1)

where k is the wave vector and ⟨σ⟩ is the expectation value of the spin projection. A Kramers-like degeneracy exists throughout the Brillouin zone, except at special points defined by the following conditions: $\exp(i\mathbf{k}\cdot\mathbf{T}) = -1$ for even m and $\exp(2i\mathbf{k}\cdot\mathbf{T}) = 1$ for odd m.[20]

The realization that spin-orbit-free compensated magnets with strong band spin splitting, such as altermagnets in collinear structures [21, 22] and unconventional magnets in noncollinear ones [23, 24], constitute a new, promising class of functional materials has renewed interest in these systems.[25, 26] The helimagnets with even m possess the symmetry operator $\hat{T}_{1/2}\hat{\theta}$, where $\hat{T}_{1/2}$ is the translation by T/2 = mt/2 and $\hat{\theta}$ is the time-reversal operator.[20] This symmetry gives rise to unconventional magnetism (e.g., of a p-wave type [23]).

Let us consider the states of a single electron in a crystal lattice, i.e., in a periodic potential U(r + t) = U(r), where t is a primitive vector of the Bravais lattice. The potential U(r) is assumed to be a scalar (real) or spinor function. The Schrödinger equation in this case has the form

$\hat{H}\phi(\mathbf{r}) = \left[-\frac{\Delta}{2} + U(\mathbf{r})\right]\phi(\mathbf{r}) = \varepsilon\,\phi(\mathbf{r})$.  (2)

Eigenvalues $\varepsilon_i(\mathbf{k})$ and eigenvectors $\phi_i(\mathbf{k})$ of the Hamiltonian are defined by the wave vectors k. In the limit of an infinite crystal, these functions are continuous and, according to Bloch's theorem, periodic in k-space:[27]

$\varepsilon_i(\mathbf{k} + \mathbf{K}) = \varepsilon_i(\mathbf{k})$,  (3)
$\phi_i(\mathbf{k} + \mathbf{K}) = \phi_i(\mathbf{k})$,  (4)

where K is a vector of the reciprocal lattice. Here, the index i enumerates the solutions of Eq. (2).

Taking into account the boundary conditions Eqs. (3) and (4), the Brillouin zone can be represented as a closed smooth manifold B, which is topologically equivalent to a torus $T^n$ with n = 1, 2, 3 for 1D, 2D, and 3D systems, respectively. The Bloch wave function can be expanded as follows: $\phi_i(\mathbf{k}) = \sum_{j=1}^{n} c_{ij}(\mathbf{k})\,\psi_{ij}$, where $\psi_{ij}$ form an orthonormal basis in the coordinate space, $c_{ij}(\mathbf{k})$ are smooth complex functions of k, and n > 1. Assuming a finite number of terms in the sum and taking into account the normalization condition $\sum_{j=1}^{n} |c_{ij}|^2 = 1$, one can associate $\phi_i(\mathbf{k})$ at fixed k and i with a point on a (2n-1)-sphere ($S^{2n-1}$). In the case of an infinite number of terms, $S^{2n-1}$ is replaced by the infinite-dimensional Hilbert space $\ell^2$. Consequently, $\phi_i(\mathbf{k})$ defines a closed smooth manifold ϕ in a space $X \cong B \times G$, where $B \cong T^n$ and G is topologically equivalent to $S^{2n-1}$ or $\ell^2$. ϕ and B are obviously of the same dimension. The eigenvectors $\phi_i(\mathbf{k})$ are mutually orthogonal. Therefore, for any given k, they correspond to distinct points in the fiber G, ensuring that the sheets $\phi_i$ and $\phi_j$ ($i \neq j$) do not intersect. This allows us to assume that the map $f : \phi \to B$ is regular, i.e., has a nowhere vanishing Jacobian:[28]

$\det\left(\frac{\partial k_\alpha}{\partial x_\beta}\right) \neq 0$,  (5)

where $k_\alpha$ and $x_\beta$ are the local coordinates defined in B (the Brillouin zone) and ϕ, respectively. The energy dispersion $\varepsilon_i(\mathbf{k})$ can also be represented as a manifold ε in the space B × I, where I is the unit interval [0, 1] (under the assumption that $\varepsilon_i(\mathbf{k})$ is bounded). However, the map $f : \varepsilon \to B$ can be singular because the eigenvalues can be degenerate. Under the conditions stated above, the map $f : \phi \to B$ is a covering map.
In solid-state physics, usually only trivial coverings are considered; that is, $\phi \cong B \times F$, where F is a countable or finite discrete topological space. In this case, each sheet $\varepsilon_i(\mathbf{k})$ is a closed manifold topologically equivalent to the base B. Consequently, the index i in Eqs. (3) and (4) specifies the indecomposable coverings, as shown in Fig. 1a.

The complete inverse image $f^{-1}(\mathbf{k})$ (the fiber of the covering) of any point $\mathbf{k} \in B$ consists of m points. This number is referred to as the number of sheets. An indecomposable covering with m > 1 is nontrivial. Since the map f is regular, the number of sheets for an indecomposable covering must be finite.[29]

Let γ be a loop in B starting and ending at $\mathbf{k}_0$, i.e., $\mathbf{k}_0 \to \mathbf{k}_0 + \mathbf{K}$ in terms of k-space, and let $\{x_1, \ldots, x_m\}$ be the fiber over $\mathbf{k}_0$ for an indecomposable covering. For each starting point $x_i$ in this fiber, there is a unique covering path $\mu(x_i)$ in ϕ that covers γ. The endpoint of $\mu(x_i)$ is some $x_j$ in the same fiber; however, $\mu(x_i)$ is not necessarily a loop (if $i \neq j$). A schematic 1D example of the dispersion curve corresponding to a nontrivial ϕ is shown in Fig. 1b.

FIG. 1. 1D band structures and their topological classification. The right panels provide a schematic topological representation, depicting the Brillouin zone as a circle ($S^1$).

Since any loop within the Brillouin zone B is homotopic to a null path (i.e., contractible to a point), the choice of the starting point is inessential. For this reason, we omit the starting point in the notation, where possible.

$\mathbb{R}^2$ is the universal covering space of the torus; therefore, it is convenient to use a plane representation for 2D systems, as shown in Fig. 2a. The horizontal and inclined lines in this figure schematically denote the boundaries of the Brillouin zone. Gluing them together in the usual way forms a torus.[30] The path from the center of any cell to that of an adjacent one is a non-contractible loop on the torus. A pair of such paths, which are generators of the fundamental group of the torus, are shown (P and Q) in Fig. 2a. As an example, consider the three-sheeted covering of the torus, which corresponds to the model discussed below. The different sheets are indicated by colors. It is clear that the lifts of paths P and Q are not closed, because there is a transition to another sheet. It should be emphasized that the dashed lines and coloring in Fig. 2a,b are guides for the eye; in effect, there are no singular points or lines. The plane representation of the covering is also periodic. The paths PQ and $P^2Q^{-1}$ are generators of the fundamental group of the covering space.[31] This group is isomorphic to the fundamental group of the torus, i.e., $\pi_1(\phi) \approx \pi_1(B)$.

$\mathbb{R}^3$ is the universal covering space of $T^3$. Let the 3D Brillouin zone be a parallelepiped. A schematic representation of a three-sheeted covering over this base is shown in Fig. 2c, where coloring again denotes the different sheets of the covering space. The edges of the unit parallelepiped correspond to the generators of $\pi_1(B)$, and the grey arrows indicate the generators of $\pi_1(\phi)$.

The curve in the right panel of Fig. 1b resembles a knot, particularly considering that ϕ has no self-intersections. It is well known that the embedding of a circle into $\mathbb{R}^3$ as a knot leads to the question of its nontriviality.[32] Therefore, the notion of topological equivalence for the coverings discussed above requires precise clarification.

FIG. 2. Schematic views of 2D and 3D coverings: (a) plane representation of the torus (base), (b) plane representation of a 3-sheeted covering over $T^2$, and (c) solid representation of a 3-sheeted covering over $T^3$. The sheets of the covering spaces are distinguished by color and texture.
Proposition. Let φ be a subspace of X ≅ B × G, where B and G are path-connected topological spaces, and let the restriction of the projection π : X → B be a regular covering map p : φ → B. Then, if φ̃ is ambient isotopic to φ within X, φ̃ is also a covering space (under the restriction of π) which is equivalent to φ.

Proof. An ambient isotopy is a continuous mapping F : X × I → X such that F_t is a homeomorphism for every t ∈ I, with F₀ = id_X and F₁(φ) = φ̃.[30] Define the map p̃ : φ̃ → B as the composition

p̃ = p ∘ F̄₁|_{φ̃},    (6)

where F̄ is the inverse ambient isotopy, i.e., F̄_t F_t = id_X. A composition of a homeomorphism and a covering map is again a covering map. Therefore, p̃ is a covering map. Furthermore, p̃ and p are equivalent coverings [33] by the definition Eq. (6). ∎

Corollary 1. The monodromy representations of p and p̃ are isomorphic.

Corollary 2. Since the covering map p induces a monomorphism p* : π₁(φ, x₀) → π₁(B, k₀), an indecomposable covering can be nontrivial only if the fundamental group of the base π₁(B, k₀) is nontrivial. This condition is met in the case of the Brillouin zone: π₁(T^n) ≈ Z^n.

Thus, the topology of the base space, e.g. B = T^n, is the source of the nontriviality of the covering map.

Let us translate the above results into band-structure theory. If φ(k) is an m-sheeted regular covering over the Brillouin zone, an individual dispersion sheet is non-periodic within the Brillouin zone, and the conditions Eqs. (3) and (4) are satisfied through permutations of the dispersion sheets, that is, by monodromy, as schematically shown in Fig. 1b. The number of sheets m, as well as the sequence of the sheet permutations, are topological invariants. Since m is finite, a superstructure must exist in the system, i.e., the covering space itself forms a periodic structure in k-space with a period commensurate with that of the Brillouin zone.

The nontrivial band structure discussed above can be demonstrated with a tight-binding model of a helimagnetic metal. The Hamiltonian has the same general form for 1D, 2D, and 3D lattices:

Ĥ_{3sl} = −Σ_{⟨i,j⟩,σ} (â†_{iσ} â_{jσ} + h.c.) − Σ_{i,σ,σ'} â†_{iσ} (h_i · σ̂)_{σσ'} â_{iσ'}    (7)

where â†_{iσ} (â_{iσ}) is the creation (annihilation) operator for an electron with spin projection σ = ↑, ↓ at the i-th site, σ̂ are the Pauli matrices, and h_i is the (effective) magnetic field at the i-th site. The notation ⟨. . .⟩ denotes the sum over nearest-neighbor pairs. We consider a three-sublattice model with 120° magnetic order. In this configuration, all h_i vectors are coplanar with constant magnitude |h_i| = h₀, and the angle between the vectors on different sublattices is ±2π/3. Furthermore, all nearest-neighbor sites belong to different sublattices. Since spin-orbit coupling is absent in the model, the magnetic plane, in which the effective field lies, can be chosen arbitrarily. The 120° magnetic order is schematically shown in Fig. 3 for the 1D chain, 2D hexagonal, and 3D simple hexagonal lattices. A primitive crystallographic translation rotates the entire magnetic structure by ±2π/3. An explicit matrix form of the Hamiltonian Eq. (7) is presented in the Supplemental Material.
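As a concrete illustration, the sketch below builds a 6×6 Bloch matrix for the 1D three-sublattice chain of Eq. (7) in one standard gauge (intra-cell bonds carry no phase, the wrap-around bond carries e^{ik}). The paper's own explicit matrix form is given in its Supplemental Material, so this construction is our assumption, written to reproduce Fig. 4a qualitatively.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def bloch_h(k, h0=0.25, t=1.0):
    """Bloch matrix of the 1D three-sublattice chain of Eq. (7).
    Basis: (sublattice 0,1,2) x (spin up,down); k lives in the magnetic BZ [-pi, pi]."""
    H = np.zeros((6, 6), dtype=complex)
    for j in range(3):                       # Zeeman term: coplanar 120-degree order
        theta = 2 * np.pi * j / 3
        H[2*j:2*j+2, 2*j:2*j+2] = -h0 * (np.cos(theta) * sx + np.sin(theta) * sy)
    hop = np.zeros((6, 6), dtype=complex)    # spin-independent nearest-neighbour hopping
    hop[0:2, 2:4] = -t * np.eye(2)           # bond between sublattices 0 and 1
    hop[2:4, 4:6] = -t * np.eye(2)           # bond between sublattices 1 and 2
    hop[4:6, 0:2] = -t * np.exp(1j * k) * np.eye(2)  # bond 2 -> 0 of the next magnetic cell
    return H + hop + hop.conj().T

ks = np.linspace(-np.pi, np.pi, 301)
bands = np.array([np.linalg.eigvalsh(bloch_h(k)) for k in ks])
# Six bands; the lower three form the non-periodic 3-sheeted structure of Fig. 4a.
# Sanity check: at h0 = 0 they reduce to the folded free bands -2*cos((k + 2*pi*j)/3),
# each doubly spin-degenerate.
```

Tracking any one of the lower bands from k = −π to k = π lands on a different band, realizing the sheet permutation found in the folding sketch above.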
The dispersion curves obtained from the 1D tight-binding model are shown in Fig. 4a within the ordinary (magnetic) Brillouin zone (k ∈ [−π, π]) and the extended (crystallographic) one (k ∈ [−3π, 3π]). The lower band, marked by the thick line, unambiguously corresponds to a continuous φ_i(k). It is periodic within the extended Brillouin zone. Band folding results in three non-periodic dispersion sheets within the magnetic (ordinary) one. The magnetic structure is shown by color and arrows within the magnetic Brillouin zone. It should be noted that the commensurate helical magnetic field lifts the spin degeneracy of the band structure but conserves the Kramers-like symmetry Eq. (1).

If the Fermi level falls into the energy range between the horizontal dashed lines in Fig. 4a, a special state emerges: a single non-periodic band crosses the Fermi level. We refer to this as a topological metal because such a structure is a consequence of a nontrivial covering over the Brillouin zone. One can see that backward scattering without spin flip is forbidden and that a persistent spin current exists in this case.[16]

FIG. 3. Helical structures with the 120°-order on (a) 1D, (b) 2D, and (c) 3D lattices. The magnetic field at the sites is indicated by color and arrows.

FIG. 4. The band structure in the tight-binding model for (a) the 1D chain (h₀ = 0.25) and (b) the 2D hexagonal lattice (h₀ = 1). The average spin projection on the axis perpendicular to the magnetic plane is indicated by color and arrows: red and blue if |⟨σ̂_z⟩| > 1/2, green otherwise.

FIG. 5. Fermi surface in the 2D tight-binding model for Fermi levels corresponding to (a) E₁ and (b) E₂ in Fig. 4b. The average spin projection is indicated in the same manner as in Fig. 4.

The band structure of the 2D tight-binding model is shown in Fig. 4b. The lower bands also exhibit a 3-sheeted structure. Along the M'-Γ-M direction, the lower dispersion curves are similar to those of the 1D model (Fig. 4a). The shapes of the Fermi surface for the Fermi level at E₁ and E₂ are presented in panels (a) and (b) of Fig. 5, respectively. The spin splitting of the Fermi surface in Fig. 5a leads to a 2D f-wave magnet.[24] If the Fermi level lies at E₂ (Fig. 5b), a single spin-textured Fermi surface appears, with the texture obeying Eq. (1). This contrasts with altermagnets and unconventional magnets (e.g. of p- and f-wave type) with trivial topology, where the spin splitting leads to pairs of bands and a Fermi surface with an even number of sheets.[22, 24] We refer to this state as a 2D topological metal. It demonstrates unusual transport behavior: the suppression of both backscattering without spin flip and umklapp electron-phonon scattering leads to a drastic increase in conductivity.[16]

The 3D tight-binding model also features nontrivial bands. The band structure along the a* direction, corresponding to M'-Γ-M in the 2D model [34], is similar to Fig. 4b (see Supplemental Material). However, a detailed discussion of the 3D model lies beyond the scope of the present work.

To practically implement the concept of a topological metal in a real material, several conditions must be met. For instance, in a layered crystal, highly conductive layers must alternate with magnetic layers possessing 120° ordering to induce the effective magnetic field. Furthermore, the Fermi level must be located in an energy region with a single non-periodic band.
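The Kramers-like relation Eq. (1) can also be checked numerically on the toy Bloch matrix above. The snippet reuses bloch_h from the previous sketch; the momentum value is an arbitrary generic (non-degenerate) choice.

```python
import numpy as np

sz6 = np.kron(np.eye(3), np.diag([1.0, -1.0]))   # sigma_z on the 6-dim Bloch space
k = 0.7                                          # any generic momentum
for kk in (k, -k):
    e, v = np.linalg.eigh(bloch_h(kk))           # bloch_h from the sketch above
    sz_avg = np.real(np.diag(v.conj().T @ sz6 @ v))
    print(np.round(e, 4), np.round(sz_avg, 3))
# The two spectra coincide while every <sigma_z> flips sign between k and -k,
# i.e. eps_{k,<s>} = eps_{-k,-<s>} as in Eq. (1).
```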
PdCrO₂, a metallic delafossite with a complex magnetic order and anomalously high conductivity,[35] has been considered a candidate for the topological metal. Non-reciprocal transport [36] and an unusual anomalous Hall effect [37] have also been observed in this compound. Evidence of the topological-metal state in this substance could be provided by observing the spin texture of the reconstructed Fermi surface, for example using spin-resolved angle-resolved photoemission spectroscopy.[38] Another way to realize the topological metal is by creating an artificial structure with alternating magnetic and conductive layers, for example in van der Waals systems.[39]

While a nontrivial band structure can also exist in nonmagnetic systems (e.g., in crystals with helical symmetry [40]), the topological metal state appears only in helimagnets due to the combination of lifted spin degeneracy and the Kramers-like symmetry Eq. (1).

SUPPLEMENTARY MATERIAL

An explicit form of the tight-binding models used in the text and the band structure for the 3D model are provided.

ACKNOWLEDGMENTS

This work was supported by the National Center for Physics and Mathematics (Project No. 7 "Investigations in high and ultrahigh magnetic fields").

[1] A. Bansil, H. Lin, and T. Das, Rev. Mod. Phys. 88, 021004 (2016).
[2] M. Z. Hasan and C. L. Kane, Rev. Mod. Phys. 82, 3045 (2010).
[3] G. Tkachov, Topological Insulators: The Physics of Spin Helicity in Quantum Transport (Taylor & Francis, 2016).
[4] N. P. Armitage, E. J. Mele, and A. Vishwanath, Rev. Mod. Phys. 90, 015001 (2018).
[5] Z. Du, C. M. Wang, S. Li, H.-Z. Lu, and X. Xie, Nat. Commun. 10, 3047 (2024).
[6] S. Nandy and I. Sodemann, Phys. Rev. B 100, 195117 (2019).
[7] M. V. Berry, Proc. R. Soc. Lond. A 392, 45 (1984).
[8] C.-K. Chiu, J. C. Teo, A. P. Schnyder, and S. Ryu, Rev. Mod. Phys. 88, 035005 (2016).
[9] B. A. Bernevig, C. Felser, and H. Beidenkopf, Nature 603, 41 (2022).
[10] X. Yao, J. Gaudet, R. Verma, D. E. Graf, H.-Y. Yang, F. Bahrami, R. Zhang, A. A. Aczel, S. Subedi, D. H. Torchinsky, J. Sun, A. Bansil, S.-M. Huang, B. Singh, P. Blaha, P. Nikolić, and F. Tafti, Phys. Rev. X 13, 011035 (2023).
[11] I. E. Dzyaloshinskii, JETP 20, 223 (1965).
[12] W. Brinkman and R. J. Elliott, Proc. Roy. Soc. A 294, 343 (1966).
[13] M. D. R. Frésard and P. Wolfle, Europhys. Lett. 15, 325 (1991).
[14] A. A. Fraerman and O. G. Udalov, Phys. Rev. B 77, 094401 (2008).
[15] J. Kishine and A. Ovchinnikov, Theory of monoaxial chiral helimagnet, in Solid State Physics, Vol. 66, edited by R. E. Camley and R. L. Stamps (Academic Press, 2015) pp. 1–130.
[16] Y. B. Kudasov, JETP Lett. 113, 155 (2021).
[17] M. Calvo, Phys. Rev. B 19, 5507 (1979).
[18] M. Calvo, Phys. Rev. B 18, 5073 (1978).
[19] L. M. Sandratskii, J. Phys.: Condens. Matter 3, 8565 (1991).
[20] Y. B. Kudasov, Phys. Rev. B 109, L140402 (2024).
[21] L.-D. Yuan, Z. Wang, J.-W. Luo, E. I. Rashba, and A. Zunger, Phys. Rev. B 102, 014422 (2020).
[22] L. Šmejkal, J. Sinova, and T. Jungwirth, Phys. Rev. X 12, 040501 (2022).
[23] B. Brekke, P. Sukhachov, H. G. Giil, A. Brataas, and J. Linder, Phys. Rev. Lett. 133, 236703 (2024).
[24] M. Ezawa, Phys. Rev. B 111, 125420 (2025).
[25] Q. Song, S. Stavrić, P. Barone, A. Droghetti, D. S. Antonenko, J. W. F. Venderbos, C. A. Occhialini, B. Ilyas, E. Ergeçen, N. Gedik, S.-W. Cheong, R. M. Fernandes, S. Picozzi, and R. Comin, Nature 642, 64 (2025).
[26] J. Sears, J. Yao, Z. Hu, W. Tian, N. Aryal, W. Yin, Q. Li, and J. M. Tranquada, Phys. Rev. B 112, 094455 (2025).
[27] N. W. Ashcroft and N. D. Mermin, Solid State Physics (Harcourt, Inc., 1976).
[28] This is almost always true (apart from special cases, e.g. topological defects in the Brillouin zone, as in Dirac and Weyl systems).
[29] B. A. Dubrovin, A. T. Fomenko, and S. P. Novikov, Modern Geometry: Methods and Applications, Vol. 2 (Springer-Verlag, 1985).
[30] S. Kalajdzievski, An Illustrated Introduction to Topology and Homotopy (Taylor & Francis, 2015).
[31] Y. B. Kudasov, JETP Lett. 120, 416 (2024).
[32] A. Hatcher, Algebraic Topology (Cambridge University Press, 2002).
[33] M. A. Armstrong, Basic Topology (Springer-Verlag, 1983).
[34] L. S. Hart, J. L. Webb, S. Dale, S. J. Bending, M. Mucha-Kruczynski, D. Wolverson, C. Chen, J. Avila, and M. C. Asensio, Sci. Rep. 7, 5145 (2017).
[35] A. P. Mackenzie, Rep. Prog. Phys. 80, 032501 (2017).
[36] M. Akaike, Y. Nii, H. Masuda, and Y. Onose, Phys. Rev. B 103, 184428 (2021).
[37] H. Takatsu, S. Yonezawa, S. Fujimoto, and Y. Maeno, Phys. Rev. Lett. 105, 137201 (2010).
[38] J. H. Dil, J. Phys.: Condens. Matter 21, 403001 (2009).
[39] C.-Z. Chang, Nat. Mater. 19, 484 (2020).
[40] M. Damnjanović and I. Milošević, Line Groups in Physics (Springer, 2010).
arXiv:2510.14786v1 [math.PR] 16 Oct 2025
SCALING LIMITS FOR THE CRITICAL LEVEL-SET PERCOLATION OF THE GAUSSIAN FREE FIELD ON REGULAR TREES

JIŘÍ ČERNÝ, RAMON LOCHER

Abstract. We continue the study of the level-set percolation of the discrete Gaussian free field (GFF) on regular trees in the critical regime, initiated in [ČL25]. First, we derive a sharp asymptotic estimate for the probability that the connected component of the critical level set containing the root of the tree reaches generation n. In particular, we show that the one-arm exponent satisfies ρ = 1. Next, we establish a Yaglom-type limit theorem for the values of the GFF at generation n within this component. Finally, we show that, after a correct rescaling, this component conditioned on reaching generation n converges, as n → ∞, to Aldous' continuum random tree.

1. Introduction

The Gaussian free field's level-set percolation, especially on Z^d, is a significant model in percolation theory that is characterised by its long-range dependencies. Initial investigations into this model trace back to the 1980s, with the pioneering studies [BLM87, MS83, LS86]. Over the last decade, renewed interest has been ignited by the findings in [RS13], demonstrating that on Z^d, the model undergoes a distinctive percolative phase transition at a critical threshold h* = h*(d) for any dimension d ≥ 3. Follow-up research, including the papers [DRS14, PR15, DPR18, Szn19, CN20, GRS22, PS22], has provided a comprehensive understanding of the model's behaviour in both the subcritical and supercritical phases, often making use of additional natural critical points in order to work in the strongly sub-/supercritical regime. Notably, [DCGRS23] confirmed the alignment of these critical points with h*, indicating a precise phase transition.

In this paper, we continue the study of level-set percolation of the discrete Gaussian free field (GFF) on regular trees in the critical regime, building on the work initiated in [ČL25]. Our main contributions are threefold: First, we derive a sharp asymptotic estimate for the one-arm probability. Second, we establish a Yaglom-type limit theorem for the field at vertices located at distance n from the root. Finally, we show that the connected component of the critical level set, conditioned to be large, converges to Aldous' Continuum Random Tree. These results are stated precisely in Theorems 2.1, 2.2, and 2.3, respectively.

The considered model was initially studied in [Szn16], where the critical value h* was identified in terms of the largest eigenvalue of a specific integral operator. Additionally, a comparison with random interlacements was employed to establish bounds on h*, notably demonstrating that 0 < h* < ∞. Subsequently, in [AČ20], the sub- and supercritical phases of the model were studied in detail. The main results of that paper include the continuity of the percolation probability outside the critical level h*, and accurate estimates on the size of connected components of the level sets in both phases. Later, in our previous paper [ČL25], properties of the critical and the near-critical model were investigated. It was proved there that there is no percolation at the critical point h*, and that the percolation probability is continuous also at this point, with a precise asymptotic formula for this probability in the regime h ↑ h*. Further, we provided rather precise estimates on the tail of the size of the connected component at criticality.
Here, we complement these findings with further results concerning the model at the critical level h*. As in [Szn16, AČ20, ČL25], we will strongly rely on the fact that the model admits a representation as a branching process with an uncountable and unbounded type space.

For the critical single-type Galton-Watson process, analogous versions of Theorems 2.1 and 2.2 go back to Kolmogorov [Kol38] and Yaglom [Yag47]. In the more general case of branching processes with a finite type space, similar results are also well known. General scaling limits in the spirit of Theorem 2.3 were first introduced for critical Galton-Watson processes in [Ald91], and later extended to branching processes with finitely or countably infinitely many types [Mie08, dR17]. More recently, [Pow19] shows a similar scaling limit result in the setting of critical branching diffusion on bounded domains.

Recent advances in branching process theory have extended these classical results in several important directions. [CTJP24] obtained refined convergence rates for Yaglom limits in varying environments, providing Wasserstein-metric bounds that may also be useful for analyses in our unbounded type-space setting. [BDIM23] established Yaglom-type theorems for branching processes in sparse random environments, an intermediate framework that connects the classical Galton-Watson model with fully random environments. [BFRS24] proved a Yaglom-type theorem for near-critical branching processes in random environments and further showed that, under survival conditioning, the genealogical structure of the population at a fixed time horizon converges to a time-changed Brownian coalescent point process.

Due to the nature of the branching process appearing in our model (in particular because its type space is uncountable and unbounded), no previous results are directly applicable to our setting. In this article, we adapt and extend the strategy of [Pow19], addressing the fundamental challenge of unbounded type spaces.

2. Model and results

We start with the definition of the model. Let T be the infinite (d + 1)-regular tree, d ≥ 2, rooted at an arbitrary fixed vertex o ∈ T, endowed with the usual graph distance d(·, ·). On T, we consider the Gaussian free field φ = (φ_v)_{v∈T}, which is a centred Gaussian process whose covariance function agrees with the Green function of the simple random walk on T (see (3.2) for the precise definition). We use P to denote the law of this process on R^T. For x ∈ R, we write P_x for the conditional distribution of φ given that φ_o = x,

(2.1)    P_x[ · ] := P[ · | φ_o = x].

(For an explicit construction of P_x, see (3.4) and the paragraph below it.) Furthermore, let ō ∈ T be an arbitrary fixed neighbour of the root o, and define the forward tree T+ by

(2.2)    T+ := {v ∈ T : ō is not contained in the geodesic path from o to v}.

We analyse the percolation properties of the (super-)level sets of φ above level h ∈ R,

(2.3)    E^h_φ := {v ∈ T : φ_v ≥ h}.

In particular, we are interested in the connected component of this set containing the root o,

(2.4)    C^h_o := {v ∈ T : v is connected to o in E^h_φ},   h ∈ R.

The critical height h* of the level-set percolation is defined by

(2.5)    h* = h*(d) := inf{h ∈ R : P[|C^h_o| = ∞] = 0}.

It is well known that h* is non-trivial and strictly positive (see [Szn16, Corollary 4.5]).
Moreover, as proved in [Szn16], h* can be characterized with the help of the operator norms of a certain family of non-negative operators (L_h)_{h∈R} acting on the space L²(ν), where ν is a centred Gaussian measure with variance σ²_ν = d/(d−1). We provide more details on this characterization in Section 3 below. Here, we only define λ_h to be the largest eigenvalue of L_h and χ_h the corresponding normed eigenfunction, and recall that h* is the unique solution to

(2.6)    λ_{h*} = 1.

The scalar product on L²(ν) will be denoted by ⟨·, ·⟩. We use N^h_n to denote the set of vertices in C^h_o that are at distance n from the root,

(2.7)    N^h_n = {v ∈ C^h_o : d(v, o) = n},

and set

(2.8)    N^{h,+}_n = N^h_n ∩ T+.

Since we almost exclusively deal with the critical case, we abbreviate χ := χ_{h*}, C_o := C^{h*}_o, L := L_{h*}, N_n := N^{h*}_n and N^+_n := N^{h*,+}_n.

Our first result describes the exact asymptotic behaviour of the probabilities (conditional and unconditional) that N_n and N^+_n are non-empty, that is, that C_o has diameter at least n.

Theorem 2.1. For every x ≥ h*, as n → ∞,

(2.9)    P_x[N^+_n ≠ ∅] = C₁ χ(x) n^{−1} (1 + o(1)),
(2.10)   P[N^+_n ≠ ∅] = C₁ ⟨1, χ⟩ n^{−1} (1 + o(1)),

where

(2.11)   C₁ = (2d/(d−1)) · (1/⟨χ², χ⟩).

If the event {N^+_n ≠ ∅} is replaced by {N_n ≠ ∅}, the same results hold with C₁ replaced by C̃₁ = C₁(d+1)/d.

In particular, Theorem 2.1 proves that the one-arm exponent of the critical level set, defined by ρ = −lim_{n→∞} log n / log(P[N_n ≠ ∅]) (see, e.g., [Gri99, Section 9.1]), satisfies

(2.12)   ρ = 1.

This complements the values of two other important critical exponents for our model given in [ČL25, (2.22)], where it was shown that δ = 2 and β = 1.

The one-arm probability and the associated critical exponent are actively studied quantities in several prominent percolation models. Notably, in the context of level-set percolation of the GFF on the metric graph of Z^d, recent work has led to a detailed understanding of the one-arm probability across all dimensions. This includes the derivation of bounds for d = 3 in [DPR25], for d > 6 in [CD25], and for the intermediate regime 3 ≤ d ≤ 6 in [CD24].

Our remaining main results consider the critical component conditioned on being large, more precisely conditioned on the rare event {N^+_n ≠ ∅}. The first such result is a Yaglom-type limit theorem for the GFF restricted to N^+_n.
Our third result concerns a scaling limit for the critical component Co ∩T+, viewed as a metric space, under the conditional law Px[ · |N + n ̸= ∅]. To this end, let (Tn,x, dn,x) be a random compact metric space whose law coincides with that of (Co ∩T+, n−1d) under Px[ · |N + n ̸= ∅] (recall that d denotes the distance on T). We show that the sequence (Tn,x, dn,x) converges in distribution to a conditioned Brownian continuum random tree (Te, de), whose contour function e is a Brownian excursion conditioned to reach height at least 1. Theorem 2.3. For every x ≥h∗, as n →∞, (2.14) (Tn,x, dn,x) →(Te, de) in distribution, with respect to the Gromov-Hausdorff topology. General scaling limits of critical Galton-Watson processes, in the spirit of Theorem 2.3, were first introduced in [Ald91]. A corresponding result for critical multi-type processes with finitely many types was established in [Mie08], and later extended to processes with a countably infinite type space in [dR17]. More recently, [CKKM24] extended the classical continuous random tree convergence to Galton-Watson trees evolving in a random environment, where each generation has a random offspring distribution with mean one and finite expected variance. In the context of critical branching diffusions, [Pow19] proves an invariance principle under the assumptions of a bounded domain, finite second moment of the offspring distribution, and an elliptic diffusion generator. However, for branching diffusions in general (unbounded) domains, analogous results are not yet available (see [Pow19, Question 1.8]). Remark 2.4. Theorems 2.2 and 2.3 hold without any further change if the conditioning therein is changed from Px[ · |N + n ̸= ∅] to Px[ · |Nn ̸= ∅]. For the sake of brevity, we refrain from providing detailed proofs of these results. We briefly discuss the organisation of this article. In Section 3, we collect relevant back- ground on the GFF on regular trees along with the framework of branching processes with spines. Section 4 presents the proof of Theorem 2.1. In Section 5, we prove Theorem 2.2 and, in a dedicated subsection, establish additional results for the model conditioned on the event {N + n ̸= ∅}. Section 6 introduces an auxiliary martingale Sn and establishes scaling limit results for this martingale and some related processes. Section 7 focuses on SCALING LIMITS FOR THE CRITICAL GFF 5 the “height process” Hn, defined via the distance to the origin in a depth-first traversal of Co ∩T+, and investigates its connection to the martingale Sn. Finally, Section 8 concludes the article with the proof of Theorem 2.3, which combines topological arguments with the results from Sections 6 and 7. 3. Notation and useful results In this section we introduce the notation used throughout the paper and recall some known facts about the level set percolation of the Gaussian free field on trees. We then briefly present the formalism of branching processes with spines and apply it to our model. As already stated in the introduction, we use T to denote the (d+1)-regular tree, d ≥2, that is an infinite tree whose every vertex has exactly d + 1 neighbours. For two vertices v, w ∈T we use d(v, w) to denote the usual graph distance. The tree is rooted at an arbitrary fixed vertex o ∈T, ¯o ∈T denotes a fixed neighbour of o, and T+ stands for the forward tree, see (2.2). We set |v| = d(o, v) and write (3.1) Sn = {v ∈T : |v| = n}, S+ n = Sn ∩T+ for the spheres with radius n centred at o. 
For every v ∈ T \ {o} we use p(v) to denote its parent in T, that is, the only vertex on the geodesic path from v to o with |p(v)| = |v| − 1. We write desc(v) for the set of direct descendants of v, and sib(v) = desc(p(v)) for the set of its siblings, including itself. Finally, if w is an ancestor of v, that is, if w lies on the geodesic from o to v, we write w ⪯ v.

Throughout the paper we use the usual notation for the asymptotic relation of two functions f and g: We write f(s) ∼ g(s) as s → ∞ if lim_{s→∞} f(s)/g(s) = 1, f(s) = o(g(s)) as s → ∞ if lim_{s→∞} |f(s)|/g(s) = 0, and f(s) = O(g(s)) as s → ∞ if limsup_{s→∞} |f(s)|/g(s) < ∞. We use c, c', c₁, . . . to denote finite positive constants whose value may change from place to place and which can only depend on d. The dependence of these constants on additional parameters is explicitly mentioned.

3.1. Properties of the GFF. We consider the Gaussian free field φ = (φ_v)_{v∈T}, which is the centred Gaussian process on T whose covariance function is the Green function of the simple random walk on T,

(3.2)    E[φ_v φ_w] = g(v, w) := (1/(d+1)) E_v[Σ_{k=0}^∞ 1_{X_k = w}],   v, w ∈ T,

where E_v stands for the expectation with respect to the simple random walk (X_k)_{k≥0} on T starting at v ∈ T.

We frequently use the fact that the Gaussian free field on T can be viewed as a multi-type branching process with a continuous type space (see [Szn16, Section 3] and [AČ20, Section 2.1]). To this end, we define

(3.3)    σ²_ν := d/(d−1)   and   σ²_Y := (d+1)/d,

and let (Y_v)_{v∈T} be a collection of independent centred Gaussian random variables on some auxiliary probability space such that Y_o ∼ N(0, σ²_ν) and Y_v ∼ N(0, σ²_Y) for v ≠ o. We then define another field φ̃ on T by

(3.4)    (a) φ̃_o := Y_o,   (b) for v ≠ o we recursively set φ̃_v := d^{−1} φ̃_{p(v)} + Y_v.

As explained, e.g., in [AČ20, (2.9)], the law of (φ̃_v)_{v∈T} agrees with the law P of the Gaussian free field φ. Therefore, we will always assume that the considered Gaussian free field is constructed in this way and will not distinguish between φ and φ̃ from now on.

Representation (3.4) of φ can be used to give a concrete construction of the conditional probability P_x introduced in (2.1): It suffices to replace (a) in (3.4) by φ̃_o = x. In addition, (3.4) can directly be used to construct a monotone coupling of P_x and P_y. As a consequence:

(3.5)    If x < y, then P_y stochastically dominates P_x, that is, E_x[f(φ)] ≤ E_y[f(φ)] for every bounded increasing function f : R^T → R.

From the construction (3.4) it follows that the GFF on T can be viewed as a multi-type branching process where the type corresponds to the value of the field φ. The type of the initial individual o of this branching process is distributed as Y_o. Every individual v in this branching process then has d descendants (d + 1 if v = o) whose types are independently given by d^{−1}φ_v + Y, with Y ∼ N(0, σ²_Y). The branching-process point of view can be adapted to the connected component C^h_o (defined in (2.4)) by considering the same multi-type branching process but immediately killing all individuals with type smaller than h (and not allowing them to have descendants themselves).
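The killed branching description translates directly into a simulation. The sketch below (our illustration, not part of the paper) samples the field on the forward tree T+ via the recursion (3.4), keeps only the individuals of the component C^h_o, and records the generation sizes |N^{h,+}_k|; on T+ every retained vertex, including the root, has exactly d potential descendants.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2
sigma_nu = np.sqrt(d / (d - 1))      # std of phi_o, cf. (3.3)
sigma_Y = np.sqrt((d + 1) / d)       # std of the innovations Y_v

def generation_sizes(h, n_max, x=None):
    """Explore C_o^h on T+ up to generation n_max and return the list
    [|N_0^{h,+}|, ..., |N_{n_max}^{h,+}|]. With x=None the root value is
    sampled (law P); otherwise phi_o = x (law P_x)."""
    phi_o = sigma_nu * rng.standard_normal() if x is None else x
    front = np.array([phi_o]) if phi_o >= h else np.array([])
    sizes = [front.size]
    for _ in range(n_max):
        # each survivor has d children of type phi/d + Y, cf. (3.4)(b)
        kids = front.repeat(d) / d + sigma_Y * rng.standard_normal(d * front.size)
        front = kids[kids >= h]      # kill all types below h
        sizes.append(front.size)
    return sizes

print(generation_sizes(h=0.5, n_max=12))
```

Only the component of the root is ever generated, so the cost is proportional to |C^h_o ∩ T+| rather than to the size of the full tree.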
We now recall in more detail the spectral machinery introduced in [Szn16] in order to characterise the critical value h*. Let ν be a centred Gaussian measure on R with variance σ²_ν (as defined in (3.3)), and let Y be a centred Gaussian random variable with variance σ²_Y. The expectation with respect to this random variable is denoted by E_Y. We consider the Hilbert space L²(ν) := L²(R, B(R), ν), and for every h ∈ R, define the operator L_h acting on L²(ν) by

(3.6)    L_h[f](x) := 1_{[h,∞)}(x) d E_Y[1_{[h,∞)}(Y + x/d) f(Y + x/d)] = 1_{[h,∞)}(x) d ∫_{[h,∞)} f(y) ρ_Y(y − x/d) dy,

where ρ_Y denotes the density of Y. From the branching-process construction (3.4) of φ, it follows that

(3.7)    L_h[f](x) = E_x[Σ_{v∈N^{h,+}_1} f(φ_v)],

where N^{h,+}_n is defined in (2.8). Iterating this expression, one also obtains

(3.8)    L^n_h[f](x) = E_x[Σ_{v∈N^{h,+}_n} f(φ_v)],   n ∈ N.

Finally, we let λ_h stand for the operator norm of L_h on L²(ν),

(3.9)    λ_h := ∥L_h∥_{L²(ν)→L²(ν)}.

The following proposition summarises some known properties of the operator L_h, as well as the connection between L_h and the critical height h*.

Proposition 3.1 ([Szn16], Propositions 3.1, 3.3, Corollary 4.5). For all h ∈ R, L_h is a self-adjoint non-negative Hilbert-Schmidt operator on L²(ν), λ_h is a simple eigenvalue of L_h, and there exists a unique χ_h ≥ 0 with unit L²(ν)-norm, which is continuous, strictly positive on [h, ∞), vanishing on (−∞, h), and such that

(3.10)   L_h[χ_h] = λ_h χ_h.

Additionally, the map h ↦ λ_h is a decreasing homeomorphism from R to (0, d), and h* is the unique value in R such that λ_{h*} = 1. Finally, for every d ≥ 2,

(3.11)   0 < h* < ∞.

Combining Proposition 3.1 with (3.8) gives that for every n ∈ N,

(3.12)   E_x[Σ_{w∈N^{h,+}_n} χ_h(φ_w)] = λ^n_h χ_h(x).

We will require estimates on the norms of L_h[f], which follow from the hypercontractivity of the Ornstein-Uhlenbeck semigroup. For part (a) of the following proposition, we refer to (3.14) in [Szn16] or (2.14) in [AČ20]. Part (b) then follows directly from part (a) in combination with the generalized Hölder inequality.

Proposition 3.2. (a) For every f ∈ L²(ν), h ∈ R, 1 < p < ∞ and q ≤ (p−1)d² + 1,

(3.13)   ∥L_h[f]∥_{L^q(ν)} ≤ d ∥f∥_{L^p(ν)}.

(b) For every 1 ≤ k ≤ (d² + 1)/2 and f₁, . . . , f_k ∈ L²(ν),

(3.14)   ∥Π_{i=1}^k L_h[f_i]∥_{L²(ν)} ≤ d^k Π_{i=1}^k ∥f_i∥_{L²(ν)}.

The next proposition recalls known properties of the critical component C_o from [ČL25].

Proposition 3.3 (Theorem 2.1 and Theorem 2.3 in [ČL25]). There is C ∈ (0, ∞) such that for every x ∈ R, as n → ∞,

(3.15)   P_x[|C_o ∩ T+| > n] = C χ(x) n^{−1/2} (1 + o(1)).

In particular, for every x ∈ R,

(3.16)   P_x[|C_o ∩ T+| = ∞] = 0.

We will also need the following three properties of χ; the first one is proved in Remark 2.5 of [ČL25], the second is a direct consequence of Proposition 3.1 in [AČ20], and the last one is proved in Appendix A:

(3.17)   x ↦ χ(x) is non-decreasing,
(3.18)   c₁ x ≤ χ(x) ≤ c₂ x for all x ≥ h* and some c₁, c₂ ∈ (0, ∞),
(3.19)   x ↦ χ(x) is Lipschitz on [h*, ∞).
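Since (3.6) exhibits L_h as an explicit integral operator with a Gaussian kernel, its leading eigenvalue λ_h, the eigenfunction χ_h, and the critical height h* of (2.6) can be approximated numerically. The following sketch is ours (the grid size, the truncation window M, and the bisection bracket are ad-hoc choices): it discretises L_h on a grid over [h, h + M] and then solves λ_{h*} = 1 by bisection, using the monotonicity of h ↦ λ_h from Proposition 3.1.

```python
import numpy as np

d = 2
var_Y = (d + 1) / d                  # variance of Y, cf. (3.3)

def discretised_L(h, M=8.0, N=400):
    """Matrix approximation of L_h from (3.6) on a grid x_0, ..., x_{N-1} in [h, h+M]."""
    x = np.linspace(h, h + M, N)
    dx = x[1] - x[0]
    u = x[None, :] - x[:, None] / d                  # entries y_j - x_i / d
    rho = np.exp(-u**2 / (2 * var_Y)) / np.sqrt(2 * np.pi * var_Y)
    return x, dx, d * rho * dx                       # kernel d * rho_Y(y - x/d) * dy

def lambda_h(h):
    _, _, K = discretised_L(h)
    return np.max(np.real(np.linalg.eigvals(K)))     # Perron root approximates lambda_h

lo, hi = 0.0, 3.0                    # bracket chosen by inspection; widen if necessary
for _ in range(40):                  # bisection: lambda_h decreases in h, lambda_{h*} = 1
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if lambda_h(mid) > 1.0 else (lo, mid)
h_star = 0.5 * (lo + hi)
print("h* ≈", h_star)
```

The eigenvector for the Perron eigenvalue at h = h*, once normalised in L²(ν), approximates χ on the grid; it is reused in the sketches below.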
3.2. Branching processes with spines. We now recall the machinery of branching processes with spines, which is frequently used in the theory of branching processes, and specialize it to our model in order to study the critical component C_o. We then state many-to-few formulas that express certain moments of the original branching process in terms of the dynamics along the spines, see Proposition 3.5 below. Later, in Section 5.1, we will see that the processes with spines can be used to describe the distribution of C_o conditioned on being infinite. The content of this section is mostly based on [HR17].

The main idea of the machinery is to designate several lines of descent in the branching process, called spines. These spines are then used to introduce a certain change of measure, under which the vertices on the spine exhibit modified branching behaviour, while the non-spine vertices behave as in the original process.

For the construction, we need some notation. For x ∈ R and k ∈ N, we first introduce measures P^k_x under which the behaviour of the field is the same as under P_x, but in addition there are k distinguished spines. Formally, the measure P^k_x is the distribution of an (R ∪ {†}) × {0, 1, . . . , k}-valued stochastic process (φ_v, l_v)_{v∈T+}. This process assigns to every v ∈ T+ a field value φ_v ∈ R ∪ {†} (where † is a cemetery state) and a number l_v ∈ {0, . . . , k} that represents the number of spine marks on v. Under P^k_x, the law of (φ_v)_{v∈T+} is a straightforward modification of the original measure P_x; the only difference is that we set φ_v = † for every v ∉ C_o. The random variables (l_v)_{v∈T+} are independent of the field values (φ_v)_{v∈T+}. The root node o has exactly k marks, l_o = k. The remaining random variables (l_v)_{v≠o} are constructed recursively as follows: If a node v ∈ T+ carries j marks, then each of its j marks 'moves' to one of its d direct descendants independently, uniformly at random. (Note that nodes in the cemetery state can carry marks.) As a consequence, under P^k_x, in every generation n there are exactly k marks present, that is, Σ_{v∈S^+_n} l_v = k. We use P^k_x also for the corresponding expectations.

For i = 1, . . . , k, we denote by σ^i_n the node that carries the i-th mark in generation n (so that |σ^i_n| = n), and set ξ^i_n = φ_{σ^i_n} to be its type. We also define skel(n) = {v ∈ T+ : |v| ≤ n, l_v ≥ 1} to be the set of nodes of generation at most n having at least one mark. We let F_n stand for the natural filtration of the branching process, and F^k_n for the filtration containing in addition the information about the k spine marks,

(3.20)   F_n = σ(φ_v : v ∈ T+, |v| ≤ n)   and   F^k_n = σ(φ_v, l_v : v ∈ T+, |v| ≤ n).

Any f : R → R is extended to R ∪ {†} by setting f(†) = 0. Then, by the definition of P^k_x,

(3.21)   E_x[Σ_{v∈N^+_n} f(φ_v)] = P^k_x[Σ_{v∈S^+_n} f(φ_v)].

We now define another measure Q^k_x, under which the nodes without a spine mark behave as under P^k_x, but the nodes with a mark have a modified branching behaviour. Under Q^k_x the movement of the marks, and thus the distribution of (l_v)_{v∈T+}, is exactly the same as under P^k_x: If a node v carries k marks, each of the marks is given to one of its d direct descendants independently, uniformly at random. To describe the distribution of the field φ under Q^k_x, we first define a transition kernel (recall ρ_Y from (3.6))

(3.22)   K(x, dy) = d (χ(y)/χ(x)) ρ_Y(y − x/d) dy,   x ≥ h*, y ∈ R.

Note that, by (3.6) and Proposition 3.1, for every x ≥ h*, K(x, ·) is a probability measure with support [h*, ∞). Conditionally on the marks (l_v)_{v∈T+}, the field (φ_v)_{v∈T+} under Q^k_x is recursively constructed by

(3.23)   (a) φ_o := x;
         (b) if v ≠ o and l_v = 0, then φ_v = d^{−1} φ_{p(v)} + Y_v (as under P^k_x);
         (c) if v ≠ o and l_v ≥ 1, then φ_v is K(φ_{p(v)}, ·)-distributed, independently of all previous randomness.

To simplify notation, we write Q_x instead of Q¹_x; in this case we also set σ_n = σ¹_n and ξ_n = ξ¹_n. Note that, unlike under P^k_x, under the measure Q^k_x the nodes in the cemetery state cannot carry any mark. Consequently, Q^k_x-a.s., there are nodes not in the cemetery state in every generation.

By construction, under Q^k_x, the process (ξ^i_n)_{n∈N} recording the value of the field along the i-th spine is a Markov chain with transition kernel K. This chain never enters the cemetery state. The following lemma determines its invariant distribution.
The Markov chain (ξ_n)_{n∈N} with the transition kernel K has a unique invariant distribution π given by

(3.24) π(dx) = χ(x)² ν(dx).

Proof. We first show that π is invariant for K. Comparing (3.6) and (3.22) yields that K(x, A) = L[1_A χ](x)/χ(x) for every x ≥ h*. Therefore, writing the action of K on π as an inner product on L²(ν), and using that L is self-adjoint and that χ is its eigenfunction with eigenvalue 1 (see Proposition 3.1),

(3.25) (πK)(A) = ⟨K(·, A), χ²⟩_ν = ⟨L[1_A χ], χ⟩_ν = ⟨1_A χ, L[χ]⟩_ν = ⟨1_A χ, χ⟩_ν = π(A),

which shows that π is an invariant measure. The uniqueness follows from the irreducibility of (ξ_n)_{n∈N}. □

Next, we state several moment formulas that are used frequently throughout the paper. Such formulas are well understood in the theory of branching processes, see, e.g., [HR17] and references therein. The proof of the following proposition is based on Lemma 8 of that paper and can be found in Appendix B.

Proposition 3.5. For all functions f, g : [h*, ∞) → R for which the expectations below are well defined,

(3.26) E_x[ Σ_{v∈N⁺_n} f(φ_v) ] = Q_x[ f(ξ_n) χ(x)/χ(ξ_n) ],

(3.27) E_x[ Σ_{v,w∈N⁺_n} f(φ_v) g(φ_w) ] = χ(x) ((d−1)/d) Σ_{k=0}^{n−1} Q_x[ χ(ξ_k) Q_{ξ_k}[ f(ξ_{n−k})/χ(ξ_{n−k}) ] Q_{ξ_k}[ g(ξ_{n−k})/χ(ξ_{n−k}) ] ] + χ(x) Q_x[ f(ξ_n) g(ξ_n)/χ(ξ_n) ].

3.3. Asymptotic behaviour of the moments. The main result of this section is Proposition 3.8, which gives precise asymptotic estimates on the quantities appearing in (3.26), (3.27). To prove them we could, in principle, use Proposition 3.5 together with known results on the convergence of Markov chains. However, it is easier, and for our purposes slightly more practical, to use formula (3.8) together with L²-estimates on the operator L. Since L is a self-adjoint Hilbert–Schmidt operator (see Proposition 3.1), L²(ν) has an orthonormal basis consisting of eigenfunctions {e_k}_{k≥1} of L corresponding to the eigenvalues {λ_k}_{k≥1}. By Proposition 3.1 we may assume that 1 = λ₁ > |λ₂| ≥ |λ₃| ≥ …, and e₁ = χ. We set γ = |λ₂| ∈ (0, 1). Decomposing f ∈ L²(ν) as

(3.28) f = Σ_{k≥1} ⟨e_k, f⟩ e_k = ⟨χ, f⟩χ + Σ_{k≥2} ⟨e_k, f⟩ e_k =: ⟨χ, f⟩χ + β[f],

it holds for every n ∈ N that

(3.29) Lⁿ[f] = Σ_{k≥1} λ_kⁿ ⟨e_k, f⟩ e_k = ⟨χ, f⟩χ + Lⁿ[β[f]],

with

(3.30) ∥Lⁿ[β[f]]∥ ≤ γⁿ ∥β[f]∥ ≤ γⁿ ∥f∥.

The following simple lemma will later be used to deduce pointwise convergence from convergence in L²(ν).

Lemma 3.6. There is a function q : [h*, ∞) → [0, ∞) such that for every x ≥ h* and f ∈ L²(ν),

(3.31) |L[f](x)| ≤ ∥f∥ q(x).

Proof. Using the definition (3.6) of L and the Cauchy–Schwarz inequality,

(3.32) |L[f](x)| ≤ d ∫_R |f(y)| ρ_Y(y − x/d) dy = d ∫_R |f(y)| (ρ_Y(y − x/d)/ρ_ν(y)) ν(dy) ≤ d ∥f∥_{L²(ν)} ∥ρ_Y(· − x/d)/ρ_ν(·)∥_{L²(ν)} =: ∥f∥_{L²(ν)} q(x).

By (3.3), σ_ν > σ_Y. Using this, one easily checks by a direct computation that q(x) < ∞ for every x ≥ h*. □

Remark 3.7. The same computation also shows that q ∈ L²(ν), but we will not use this fact.

We can now give the asymptotic estimates on the moments appearing in Proposition 3.5.

Proposition 3.8. (a) For every f ∈ L²(ν), x ≥ h*, and n ∈ N,

(3.33) E_x[ Σ_{v∈N⁺_n} f(φ_v) ] = χ(x)⟨χ, f⟩ + ε^f_n(x),

where the error term ε^f_n satisfies (with q as in Lemma 3.6)

(3.34) ∥ε^f_n∥ ≤ γⁿ ∥f∥ and |ε^f_n(x)| ≤ γ^{n−1} ∥f∥ q(x).

(b) For every f, g ∈ L²(ν), x ≥ h*, and n ∈ N,

(3.35) E_x[ Σ_{v,w∈N⁺_n} f(φ_v) g(φ_w) ] − E_x[ Σ_{v∈N⁺_n} f(φ_v) g(φ_v) ] = χ(x) ((d−1)/d) n ⟨χ, f⟩⟨χ, g⟩⟨χ², χ⟩ + ε^{f,g}_n(x),

where the error term ε^{f,g}_n satisfies

(3.36) ∥ε^{f,g}_n∥ ≤ C ∥f∥ ∥g∥ and |ε^{f,g}_n(x)| ≤ ∥f∥ ∥g∥ q̃(x), with q̃(x) ≤ C(q(x) + χ(x)) < ∞.

Proof. (a) By (3.8), the left-hand side of (3.33) equals Lⁿ[f].
Therefore, by (3.29), εf n = Ln[β[f]]. (3.30) then implies the first claim in (3.34). The second one follows from the first one and Lemma 3.6. (b) Combining (3.27) with (3.26) and (3.8), we obtain that (3.37) Ex h X v,w∈N+ n f(φv)g(φw) i = d −1 d n−1 X k=0 Lkh Ln−k[f]Ln−k[g] i (x) + Ln[fg](x). By (3.33), Ln−k[f]Ln−k[g] = (⟨χ, f⟩χ + εf n−k)(⟨χ, g⟩χ + εg n−k) and thus, again by (3.33), Lkh Ln−k[f]Ln−k[g] i = ⟨χ, Ln−k[f]Ln−k[g]⟩χ + εLn−k[f]Ln−k[g] k =  ⟨χ, f⟩⟨χ, g⟩⟨χ, χ2⟩+ ⟨χ, f⟩⟨χ, χεg n−k⟩+ ⟨χ, g⟩⟨χ, χεf n−k⟩ + ⟨χ, εf n−kεg n−k⟩  χ + εLn−k[f]Ln−k[g] k . (3.38) SCALING LIMITS FOR THE CRITICAL GFF 11 Note also that the Ln[fg](x) summand in (3.37) equals Ex[P v∈N+ n f(φv)g(φv)]. Therefore, the error term in (3.35) satisfies εf,g n (x) = d −1 d n−1 X k=0 ⟨χ, f⟩⟨χ, χεg n−k⟩+ ⟨χ, g⟩⟨χ, χεf n−k⟩+ ⟨χ, εf n−kεg n−k⟩  χ(x) + εLn−k[f]Ln−k[g] k (x)  . (3.39) To bound this expression, note that |⟨χ, f⟩| ≤∥f∥and |⟨χ, χεg n−k⟩| ≤∥χ2∥∥εg n−k∥≤ Cγn−k∥g∥, by part (a). Therefore, the first summand in (3.39) satisfies |⟨χ, f⟩⟨χ, χεg n−k⟩| ≤ Cγn−k∥f∥∥g∥. Analogously, |⟨χ, g⟩⟨χ, χεf n−k⟩| ≤Cγn−k∥f∥∥g∥. To estimate the third summand, we observe that by (3.33) εf n−k = Ln−k[β[f]]. Therefore, using Proposi- tion 3.2(b), ∥εf n−kεg n−k∥≤d2∥εf n−k−1∥∥εg n−k−1∥and thus (3.40) ⟨χ, εf n−kεg n−k⟩≤d2∥εf n−k−1∥∥εg n−k−1∥≤d2γ2(n−k−1)∥f∥∥g∥. Finally, for the fourth summand, using again Proposition 3.2(b) and part (a), since the operator norm of L equals 1, ∥εLn−k[f]Ln−k[g] k ∥≤γk∥Ln−k[f]Ln−k[g]∥ ≤d2γk∥Ln−k−1[f]∥∥Ln−k−1[g]∥≤Cγk∥f∥∥g∥. (3.41) Taking the norm in (3.39) and making use of the above estimates yields (3.42) ∥εf,g n ∥≤C∥f∥∥g∥ n−1 X k=0 (γn−k + γ2(n−k−1) + γk) < C∥f∥∥g∥, since γ < 1. This establishes the bound on ∥εf,g n ∥. To establish the claimed pointwise bound for |εf,g n (x)|, observe that by part (a), we have εLn−k[f]Ln−k[g] k (x) ≤γk∥Ln−k[f]Ln−k[g]∥q(x) ≤Cγk∥f∥∥g∥q(x). Combining this with (3.39) and the previously derived bounds on the inner products appearing there, we easily conclude. □ 4. Proof of Theorem 2.1 In this section we prove Theorem 2.1 which describes the tail behaviour of the diameter of the critical cluster. We also show several estimates that will later be used in the proof of Theorem 2.2. Let F be the set of all non-increasing functions f : R →[0, 1). For every f ∈F and n ∈N, we introduce (4.1) uf n(x) := ( Ex  1{N+ n ̸=∅} 1 −Q u∈N+ n f(φu)  , for x ≥h∗, 0, for x < h∗. Note that uf n(x) ∈[0, 1] and by (3.5) it is increasing in x. Taking f ≡0, (4.2) u0 n(x) = Px[N + n ̸= ∅], which explains the relevance of this definition for the proof of Theorem 2.1. We will use the following inequality, which often allows us to consider the special case f ≡0 only: For every f ∈F (4.3) u0 n(x)(1 −f(h∗)) ≤uf n(x) ≤u0 n(x). SCALING LIMITS FOR THE CRITICAL GFF 12 Indeed, the second inequality follows directly from the definition of uf n, since f ≥0. To see the first one, note that since f is non-increasing, Q v∈N+ n f(φv) ≤f(h∗) when N + n ̸= ∅, and thus Ex[1{N+ n ̸=∅}(1 −Q v∈N+ n f(φv))] ≥Ex[1{N+ n ̸=∅}(1 −f(h∗))] = (1 −f(h∗))u0 n(x). Similarly to (3.28) we decompose uf n as (4.4) uf n = ⟨χ, uf n⟩χ + β[uf n] and define af n := ⟨χ, uf n⟩and bf n := ∥β[uf n]∥, so that (4.5) ∥uf n∥2 = (af n)2 + (bf n)2. Due to (4.2), in order to show Theorem 2.1, we need precise asymptotic estimates on af n and bf n. These will be proved step by step in the following several lemmas. The theorem is then shown at the end of the section. By Proposition 3.3, Px[|Co ∩T+| = ∞] = 0. Therefore, Px[N + n ̸= ∅] →0 as n →∞. 
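Both this vanishing and its precise n⁻¹ rate are easy to observe numerically. The following continuation of the sketch from Section 3 (it reuses x, d, K, chi, inner and h_star from there; the run lengths are illustrative) iterates the recursion (4.8) of Lemma 4.1 below for f ≡ 0, so that u_n^0(x) = P_x[N_n^+ ≠ ∅] by (4.2), and watches n⟨χ, u_n^0⟩ approach the constant C₁ of (2.11), for which the proof of Lemma 4.7 below gives C₁⁻¹ = binom(d,2) d⁻² ⟨χ, χ²⟩.

```python
# Continuation of the sketch from Section 3 (reuses x, d, K, chi, inner,
# h_star); the number of iterations is illustrative.
C1 = 1.0 / ((d * (d - 1) / 2) * d**-2 * inner(chi, chi**2))  # proof of Lemma 4.7
u = (x >= h_star).astype(float)        # u_0^0 = 1 on [h*, oo), 0 below
for n in range(1, 1501):
    u = 1.0 - (1.0 - (K @ u) / d) ** d           # the recursion (4.8) below
    if n % 500 == 0:
        print(n, round(n * inner(chi, u), 4), "  vs  C1 =", round(C1, 4))
```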
As a consequence, using also (4.3), for every f ∈F, (4.6) uf n(x) →0 as n →∞, pointwise and thus in L2(ν). As a consequence, for every f ∈F, (4.7) lim n→∞af n = lim n→∞bf n = 0. The following lemma provides a basic recursive relation for uf n, based on the branching process representation. This relation will be important to obtain a more precise descrip- tion of the asymptotic behaviour of af n and bf n as n →∞. Lemma 4.1. For every non-increasing f : R →[0, 1) and n ∈N, (4.8) uf n(x) = 1 −  1 −1 dL[uf n−1](x) d for x ∈R. As a consequence, (4.9) uf n(x) = L[uf n−1](x) −g(L[uf n−1](x)), with g(x) = 1 −x −(1 −d−1x)d. This function satisfies, for c1, c2 ∈(0, ∞), (4.10) c1x2 ≤g(x) ≤c2x2 for all x ∈[0, 1]. Proof. For x < h∗, both sides of (4.8) are trivially zero by definition of uf n and L. For x ≥h∗, by the branching process construction of φ, recalling Fn from (3.20), for n0 < n, Ex h Y v∈N+ n f(φv) Fn0 i = Y v∈N+ n0 Eφv h Y w∈N+ n−n0 f(φw) i = Y v∈N+ n0  1 −Eφv h 1 − Y w∈N+ n−n0 f(φw) i = Y v∈N+ n0 1 −uf n−n0(φv)  . (4.11) Therefore, setting n0 = 1, (4.12) uf n(x) = Ex h 1 −Ex h Y v∈N+ n f(φv) F1 ii = 1 −Ex h Y v∈N+ 1 1 −uf n−1(φv) i . SCALING LIMITS FOR THE CRITICAL GFF 13 Using the conditional independence of the (φv : v ∈S+ 1 ) given φo = x, and the fact that uf n(x) = 0 for x < h∗, (4.13) Ex h Y v∈N+ 1 1 −uf n−1(φv) i = Ex h Y v∈S+ 1 1 −uf n−1(φv) i = Y v∈S+ 1 Ex[1 −uf n−1(φv)]. Using (3.7), Ex[uf n−1(φv)] = d−1L[uf n−1](x) for every v ∈S+ 1 . Together with (4.12) and (4.13), this proves (4.8) and (4.9). Inequality (4.10) is proved in [ˇCL25, Lemma 5.3]. □ The following lemma provides a rough lower bound for u0 n and a0 n. Lemma 4.2. There exists a constant c > 0 such that for all n ≥1 (4.14) ∥u0 n∥≥a0 n ≥cn−1. Proof. The statement will follow if we show (4.15) 0 ≤g(x)n−1 ≤u0 n(x) for all n ≥1 for some non-trivial function g : R →[0, 1]. To show (4.15) we use the estimate on the volume of Co from Proposition 3.3. By this proposition and (3.5) there is a positive constant c1 such that, for all m ≥1 and x ≥h∗, (4.16) Px[|Co ∩T+| > m] ≥c1m−1/2. Since |Co ∩T+| = P k≥0|N + k |, this implies for all m, n ≥1 c1m−1/2 ≤Px h X k≥0 |N + k | > m, N + n = ∅ i + Px h X k≥0 |N + k | > m, N + n ̸= ∅ i (4.17) ≤Px h n−1 X k=0 |N + k | > m i + Px[N + n ̸= ∅]. (4.18) By the Markov inequality, Px[Pn−1 k=0|N + k | > m] ≤ 1 m Pn−1 k=0 Ex[|N + k |], where, by Proposi- tion 3.8, Ex[|N + k |] ≤c2(x) for some x-dependent constant c2(x) > 0. Thus, (4.19) c1m−1/2 ≤c2(x) n m + Px[N + n ̸= ∅]. Recalling (4.2) and choosing m = ⌊δ2n2⌋with δ = 2c1c2(x)−1, it follows that (4.20) u0 n(x) ≥ c2 1 2c2(x)n−1 for all n ≥1, showing (4.15) and thus the lemma. □ The next lemma gives upper bounds on af n and bf n. Lemma 4.3. There is c < ∞such that for every f ∈F and n ≥1 (4.21) af n ≤cn−1 and bf n ≤cn−2. Proof. To prove the first statement in (4.21) we recall the recursive relation (4.9) and project it on span{χ}. Using ⟨χ, L[uf n]⟩= ⟨χ, uf n⟩= af n, and the lower bound on g from (4.10), this yields (4.22) af n+1 = ⟨χ, uf n+1⟩= ⟨χ, L[uf n]⟩−⟨χ, g(L[uf n])⟩≤af n −c1⟨χ, L[uf n]2⟩. By Proposition 3.1 and (3.17), χ(x) ≥c > 0 for all x ∈[h∗, ∞). Hence, ⟨χ, L[uf n]2⟩≥ c⟨1, L[uf n]2⟩= c∥L[uf n]∥2 ≥c(af n)2. Applied to (4.22), this gives (4.23) af n+1 ≤af n −c(af n)2. SCALING LIMITS FOR THE CRITICAL GFF 14 This implies that af n is decreasing in n and, after rearranging, also (4.24) c ≤af n −af n+1 (af n)2 ≤af n −af n+1 af n+1af n = 1 af n+1 −1 af n . 
Summing this over n running from 0 to n −1 yields (4.25) cn ≤ n−1 X k=0  1 af k+1 −1 af k  = 1 af n −1 af 0 ≤1 af n , proving the first part of (4.21). To prove the second part we project (4.9) onto the orthogonal complement of span {χ} and take norms. With the triangle inequality, this gives (4.26) bf n+1 ≤∥β[L[uf n]]∥+ ∥β[g(L[uf n])]∥. By (3.30), ∥β[L[uf n]]∥= ∥L[β[uf n]]∥≤γ∥β[uf n]∥= γbf n. By the upper bound of g from (4.10) and Proposition 3.2, since β is a projection, ∥β[g(L[uf n])]∥≤∥g(L[uf n])∥≤ c2∥L[uf n]2∥≤c∥uf n∥2. Applied to (4.26), this gives (4.27) bf n+1 ≤γbf n + c∥uf n∥2 = γbf n + c(af n)2 + c(bf n)2. Taking now f = 0, since b0 n →0 as n →∞, there is γ′ ∈(γ, 1) and n′ < ∞such that γb0 n + c(b0 n)2 ≤γ′b0 n for all n ≥n′. Together with the first part of (4.21), this implies (4.28) b0 n+1 ≤γ′b0 n + cn−2 for n ≥n′. Applying this recursively yields (4.29) b0 n+1 ≤γ′γ′b0 n−1 + c(n −1)−2 + cn−2 ≤· · · ≤(γ′)n+1−n′b0 n′ + c n−n′ X k=0 (γ′)k(n −k)−2. For n ≥2n′, by splitting the sum at k = n/2, using also that b0 n ≤1 for all n, this can be bounded by c(γ′)n/2 + c(n/2)−2 ≤cn−2, proving the second half of (4.21) for f = 0. For a general f ∈F, by (4.3) and the already proven statements, ∥uf n∥2 ≤∥u0 n∥2 = (a0 n)2 + (b0 n)2 ≤cn−2. Inserting this into (4.27) gives (4.30) bf n+1 ≤γbf n + cn−2, which looks like (4.28). The second part of (4.21) for general f ∈F then follows by the same arguments as for f = 0. □ When f ∈F is fixed, Lemmas 4.2 and 4.3 show that limn→∞uf n/∥uf n∥= χ in L2(ν) and c < n∥uf n∥< c′. In the special case f = 0, this is almost sufficient to prove Theorem 2.1, we only need to improve the estimate on ∥uf n∥. In contrast, in the proof of Theorem 2.2 we will need to consider functions f varying with n. There our estimates are not sufficient, since the lower bound in Lemma 4.2 holds only for f = 0 and cannot be true uniformly over f ∈F. We now provide tools allowing us to deal with this case as well. As it turns out, it is enough to prove the uniformity over a certain family of non- increasing functions. Specifically, let (4.31) ˆF = {fλ : λ ∈[0, ∞)} ⊂F, where (4.32) f0 ≡0 and fλ(x) := exp  −χ(x) λ  for λ > 0. SCALING LIMITS FOR THE CRITICAL GFF 15 To simplify the notation, we define (recall (4.1), (4.4)) (4.33) uλ n = ufλ n , aλ n = afλ n = ⟨χ, uλ n⟩, bλ n = bfλ n = ∥β[uλ n]∥. The first preliminary step in proving the uniformity over ˆF is the following lemma. In the special case λ = 0, this already follows from Lemmas 4.2 and 4.3. Lemma 4.4. There is a constant c < ∞so that (4.34) bλ n ≤caλ n and ∥uλ n∥≤caλ n for all n ∈N0 and λ ≥0. Proof. As the case λ = 0 already follows from Lemmas 4.2 and 4.3, it is enough to show (4.34) with λ ≥0 replaced by λ > 0. By (4.27) from the proof of Lemma 4.3, (4.35) bλ n+1 ≤γbλ n + c∥uλ n∥2, with c that is uniform over f ∈F. An iterative application of this inequality yields (4.36) bλ n+1 ≤γnbλ 0 + c n X l=0 γn−l∥uλ l ∥2. To continue, we argue that (4.37) ∥uλ l ∥≤λ−1 for every l ∈N0, λ > 0. Indeed, by Lemma 4.1, uλ n+1(x) ≤L[uλ n](x). Applying this recursively, taking norms, and using that the operator norm of L is one, we obtain (4.38) ∥uλ l ∥≤∥Ll[uλ 0]∥≤∥uλ 0∥. Moreover, since by definition 0 ≤uλ 0 = 1 −exp(−λ−1χ) ≤λ−1χ, we know that ∥uλ 0∥≤ λ−1∥χ∥= λ−1. Together with (4.38), this shows (4.37). Since also bλ 0 ≤∥uλ 0∥≤λ−1 and γ < 1, (4.36) and (4.37) imply (4.39) bλ n+1 ≤γnλ−1 + cλ−2. On the other hand, by Lemma 4.3, bλ n ≤cn−2, and thus (4.40) bλ n ≤c min  γnλ−1 + λ−2, 1 n2  . 
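The quantities a_n^λ and b_n^λ can likewise be followed numerically. The sketch below (again a continuation, reusing x, d, K, chi, inner and h_star; the iteration count is illustrative) starts from u_0^λ = 1_{[h*,∞)}(1 − e^{−χ/λ}) and iterates (4.8); the normalized iterates approach χ with no visible degradation as λ varies, anticipating Proposition 4.5 below.

```python
# Continuation (reuses x, d, K, chi, inner, h_star); iteration counts are
# illustrative.  The |chi| guard absorbs discretization noise off [h*, oo).
for lam_ in [0.0, 0.5, 5.0, 50.0]:
    u = np.where(x >= h_star,
                 (1.0 - np.exp(-np.abs(chi) / lam_)) if lam_ > 0 else 1.0, 0.0)
    for _ in range(300):
        u = 1.0 - (1.0 - (K @ u) / d) ** d       # the recursion (4.8)
    r = u / inner(chi, u) - chi
    print("lambda =", lam_, "  L2(nu) error:", round(np.sqrt(inner(r, r)), 4))
```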
Since 1 −fλ(h∗) ≥cλ−1, (4.3) implies that (4.41) aλ n = ⟨uλ n, χ⟩≥(1 −fλ(h∗))⟨u0 n, χ⟩≥cλ−1a0 n ≥cλ−1n−1, where the last inequality follows from Lemma 4.2. Therefore, (4.42) bλ n aλ n ≤c min γnλ−1 + λ−2, n−2 λ−1n−1 ≤c min  cγnn + n λ, λ n  ≤cγnn + c, which is bounded from above by some constant c independent of λ or n. This proves the first statement. The second follows from the first one and ∥uλ n∥2 = (aλ n)2 + (bλ n)2. □ We now improve the result of Lemma 4.4 and show that bλ n = o(aλ n), uniformly in λ. Proposition 4.5. Uniformly over λ ≥0, (4.43) lim n→∞ uλ n ⟨χ, uλ n⟩= χ in L2(ν). SCALING LIMITS FOR THE CRITICAL GFF 16 Proof. The proof consists of two steps. First we show that, for a suitable choice of n0(n), which slowly diverges with n, (4.44) uλ n ⟨χ, uλ n−n0(n)⟩→χ in L2(ν) as n →∞, uniformly in λ ≥0. In a second step, we then show that ⟨χ, uλ n−n0(n)⟩/⟨χ, uλ n⟩→1 as n →∞uniformly in λ ≥0, implying the claim of the proposition. By (4.11) from the proof of Lemma 4.1, uλ n(x) = Ex  Ex h 1 − Y v∈N+ n fλ(φv) Fn0 i = Ex h 1 − Y v∈N+ n0 1 −uλ n−n0(φv) i = Ex h X v∈N+ n0 uλ n−n0(φv) i −Ex h Y v∈N+ n0 1 −uλ n−n0(φv)  −1 + X v∈N+ n0 uλ n−n0(φv) i . (4.45) By Proposition 3.8(a), the first expectation on the right-hand side satisfies (4.46) Ex h X u∈N+ n0 uλ n−n0(φu) i = χ(x)⟨χ, uλ n−n0⟩+ ελ,n−n0 n0 (x) with ∥ελ,n−n0 n0 ∥≤γn0∥uλ n−n0∥. To estimate the second expectation we need the following inequality: For any finite index set I and ai ∈[0, 1], i ∈I, (4.47) 0 ≤ Y i∈I (1 −ai) −1 + X i∈I ai ≤1 2 X i̸=j∈I aiaj. This inequality follows easily from Bonferroni inequalities for independent events Ai with P(Ai) = ai. Indeed, (4.48) 1 − Y i∈I (1 −ai) = P(∪i∈IAi) ≤ X i∈I P(Ai) = X i∈I ai implies the lower bound in (4.47), and (4.49) 1− Y i∈I (1−ai) = P(∪i∈IAi) ≥ X i∈I P(Ai)−1 2 X i̸=j∈I P(Ai∩Aj) = X i∈I ai−1 2 X i̸=j∈I aiaj implies the upper bound. Inequality (4.47) and Proposition 3.8(b) imply 0 ≤Ex h Y u∈N+ n0 1 −uλ n−n0(φu)  −1 + X u∈N+ n0 uλ n−n0(φu) i ≤cχ(x)⟨χ, uλ n−n0⟩2⟨χ, χ2⟩n0 + ¯ελ,n−n0 n0 (x), (4.50) with ∥¯ελ,n−n0 n0 ∥≤C∥uλ n−n0∥2. SCALING LIMITS FOR THE CRITICAL GFF 17 Dividing (4.45) by ⟨χ, uλ n−n0⟩and combining it with (4.46) and (4.50), we obtain uλ n ⟨χ, uλ n−n0⟩−χ ≤∥ελ,n−n0 n0 ∥ ⟨χ, uλ n−n0⟩+ c∥χ⟨χ, uλ n−n0⟩2⟨χ, χ2⟩n0∥ ⟨χ, uλ n−n0⟩ + ∥¯ελ,n−n0 n0 (x)∥ ⟨χ, uλ n−n0⟩ ≤γn0 ∥uλ n−n0∥ ⟨χ, uλ n−n0⟩+ c⟨χ, χ2⟩⟨χ, uλ n−n0⟩n0 + c ∥uλ n−n0∥ ⟨χ, uλ n−n0⟩∥uλ n−n0∥ ≤cγn0 + c⟨χ, χ2⟩n0 n −n0 + c n −n0 , (4.51) where in the last inequality we used that ∥uλ n∥/⟨χ, uλ n⟩is uniformly bounded by Lemma 4.4, and both ⟨χ, uλ n⟩and ∥uλ n∥are uniformly bounded above by cn−1 by Lemma 4.3. This establishes (4.44) with, for example, n0(n) = ⌊log(n)⌋. Next we show that (4.51) also implies (4.52) ⟨χ, uλ n⟩ ⟨χ, uλ n−n0(n)⟩−1 →0 as n →∞, uniformly in λ ≥0. To see this, we use that 1 = ⟨χ, χ⟩and |⟨χ, f⟩| ≤∥f∥to get ⟨χ, uλ n⟩ ⟨χ, uλ n−n0(n)⟩−1 = ⟨χ, uλ n⟩ ⟨χ, uλ n−n0(n)⟩−⟨χ, χ⟩ =  χ, uλ n ⟨χ, uλ n−n0(n)⟩−χ  ≤ uλ n ⟨χ, uλ n−n0(n)⟩−χ , (4.53) which by (4.51) converges to zero uniformly in λ. This shows (4.52), which together with (4.44) finishes the proof. □ Using the hypercontractivity of L from Proposition 3.2, it is easy to slightly improve Proposition 4.5. This will be useful to ensure that certain products still lie in L2(ν). Lemma 4.6. Uniformly over λ ≥0, (4.54) lim n→∞ uλ n ⟨χ, uλ n⟩= χ in L5/2(ν) and pointwise over [h∗, ∞). Proof. We will first show (4.54). We set p = 5/2. By Lemma 4.1, uλ n = L[uλ n−1] + Pd l=2 clL[uλ n−1]l. 
Therefore, also using that L[χ] = χ, (4.55) uλ n ⟨χ, uλ n⟩−χ Lp(ν) ≤ L h uλ n−1 ⟨χ, uλ n⟩−χ i Lp(ν) + c ⟨χ, uλ n⟩ d X l=2 ∥L[uλ n−1]l∥Lp(ν). Since 0 ≤uλ n ≤1, it holds that 0 ≤L[uλ n] ≤d. Therefore, (4.56) ∥L[uλ n−1]l∥Lp(ν) ≤dl−2∥L[uλ n−1]2∥Lp(ν) = dl−2∥L[uλ n−1]∥2 L2p(ν) ≤c∥uλ n−1∥2 L2(ν), where in the last inequality we used Proposition 3.2(a) with q = 2p = 5 and p = 2. Therefore, using also (4.52) (with n0 = 1), Lemma 4.4, and the fact that ∥uλ n∥≤∥u0 n∥→0 as n →∞, we get (4.57) ∥L[uλ n−1]l∥Lp(ν) ⟨χ, uλ n⟩ ≤ ∥uλ n−1∥2 L2(ν) c⟨χ, uλ n−1⟩→0 as n →∞uniformly in λ ≥0. SCALING LIMITS FOR THE CRITICAL GFF 18 Using Proposition 3.2(a) again for the remaining term on the right-hand side of (4.55), (4.58) L h uλ n−1 ⟨χ, uλ n⟩−χ i Lp(ν) ≤d uλ n−1 ⟨χ, uλ n⟩−χ L2(ν) →0 as n →∞uniformly in λ ≥0. Here, the convergence follows from Proposition 4.5. Combining (4.55) with (4.57) and (4.58) shows (4.54). We now show the pointwise convergence. By similar steps as in (4.55), using Lemma 3.6, for x ≥h∗, uλ n(x) ⟨χ, uλ n⟩−χ(x) ≤ L h uλ n−1 ⟨χ, uλ n⟩−χ i (x) + 1 ⟨χ, uλ n⟩ d X l=2 |L[uλ n−1](x)|l ≤ uλ n−1 ⟨χ, uλ n⟩−χ q(x) + 1 ⟨χ, uλ n⟩ d X l=2 ∥uλ n−1∥lq(x)l. (4.59) Using (4.57) and (4.58), it is immediate that the right-hand side of (4.59) converges to 0 as n →∞, uniformly in λ ≥0. This shows the pointwise convergence and finishes the proof. □ The following lemma provides the final ingredient for the proof of Theorem 2.1. Its result allows us to determine the exact asymptotic behaviour of aλ n as n →∞. Lemma 4.7. Let C1 be as in (2.11). Then, (4.60) lim n→∞ 1 n  1 aλ n −1 aλ 0  = C−1 1 uniformly in λ ≥0. Proof. We show that (4.61) lim n→∞  1 aλ n − 1 aλ n−1  = lim n→∞ aλ n−1 −aλ n aλ n−1aλ n = C−1 1 uniformly in λ ≥0. The claim of the lemma then follows from (4.61) by telescoping the difference therein. By Lemma 4.1 again, this time writing the prefactor of the quadratic term explicitly, aλ n = ⟨χ, uλ n⟩= ⟨χ, L[uλ n−1]⟩− d 2  1 d2⟨χ, L[uλ n−1]2⟩− d X l=3 cl⟨χ, L[uλ n−1]l⟩. (4.62) Using that L is self-adjoint and L[χ] = χ, we get ⟨χ, L[uλ n−1]⟩= ⟨χ, uλ n−1⟩= aλ n−1. Hence, after dividing by (aλ n−1)2, (4.62) implies aλ n−1 −aλ n (aλ n−1)2 = d 2  d−2D χ, L[uλ n−1]2 (aλ n−1)2 E + d X l=3 cl D χ, L[uλ n−1]l (aλ n−1)2 E = d 2  d−2D χ, L huλ n−1 aλ n−1 i2E + d X l=3 cl D χ, L huλ n−1 aλ n−1 i2 L[uλ n−1]l−2E . (4.63) By Propositions 4.5 and 3.2, L[uλ n−1/aλ n−1]2 →χ2 in L2(ν), uniformly in λ. By (4.6) and (4.3), 0 ≤uλ n ≤u0 n →0 in L2(ν) as n →∞. Therefore, using that L[uλ n−1] ≤d and the SCALING LIMITS FOR THE CRITICAL GFF 19 boundedness of L, L[uλ n−1]l−2 ≤dl−3L[uλ n−1] →0 in L2(ν) uniformly in λ. Using this in (4.63) then yields (4.64) lim n→∞ aλ n−1 −aλ n (aλ n−1)2 = d 2  d−2⟨χ, χ2⟩= C−1 1 uniformly in λ ≥0. From this (4.61) follows, if we show (4.65) lim n→∞ aλ n aλ n−1 = 1 uniformly in λ ≥0. To see this, note that by multiplying (4.64) by aλ n−1, (4.66) 1 −aλ n/aλ n−1 = aλ n−1C−1 1 + o(aλ n−1) = o(1), where the o(1) term is independent of λ since 0 ≤aλ n ≤a0 n →0 for n →∞. This shows (4.65), and together with (4.64) completes the proof of (4.61). □ We are now ready to prove Theorem 2.1. Proof of Theorem 2.1. Recall from (4.2) that u0 n(x) = Px[N + n ̸= ∅]. Therefore, by Propo- sition 4.5 and Lemma 4.6, (4.67) lim n→∞ P·[N + n ̸= ∅] ⟨χ, u0 n⟩ = χ in L2(ν) and pointwise. Moreover, by Lemma 4.7, (4.68) ⟨χ, u0 n⟩= a0 n = C1n−1(1 + o(1)). Combining these two facts one obtains (2.9), and by integrating over φ(o) which is ν- distributed also (2.10). To show the last statement of the theorem, let x1, . . . 
, xd+1 = ¯o be the neighbours of o in T, and let An,i be the event “the subtree of xi intersects Nn”. By the branching process construction, for every n ≥1, the events An,1, . . . , An,d+1 are independent and have the same probability p = p(n, x). Further, Px[Nn ̸= ∅] = Px[∪d+1 i=1 An,i] = 1 −(1 −p)d+1 and similarly Px[N + n ̸= ∅] = 1 −(1 −p)d and thus (4.69) Px[Nn ̸= ∅] = 1 −(1 −Px[N + n ̸= ∅]) d+1 d = d + 1 d Px[N + n ̸= ∅] + O(Px[N + n ̸= ∅]2). This directly implies the statement for Px[Nn ̸= ∅]. The statement for P[Nn ̸= ∅] is again obtained by integration over φo. □ 5. Proof of Theorem 2.2 The goal of this section is twofold: We show Theorem 2.2, and then, in Section 5.1, provide further results concerning the behaviour of Co conditioned on {N + n ̸= ∅}. These results are, more or less, direct consequences of Theorems 2.1 and 2.2, and will later be used to show Theorem 2.3. We start with a preparatory lemma which is a special case of Theorem 2.2 for functions f orthogonal to χ. Lemma 5.1. For every f ∈L2(ν) such that ⟨χ, f⟩= 0 there is a sequence εf n ∈L2(ν) converging to zero in L2(ν) and pointwise, such that for every δ > 0, x ≥h∗, and n ≥1, (5.1) Px h 1 n X v∈N+ n f(φv) > δ N + n ̸= ∅ i ≤cδ−2εf n(x). SCALING LIMITS FOR THE CRITICAL GFF 20 Proof. By the conditional Markov inequality (5.2) Px h 1 n X v∈N+ n f(φv) > δ N + n ̸= ∅ i ≤δ−2Ex h1 n X v∈N+ n f(φv) 2 N + n ̸= ∅ i . The conditional expectation on the right-hand side satisfies (5.3) Ex h1 n X v∈N+ n f(φv) 2 N + n ̸= ∅ i =  1 nPx[N + n ̸= ∅] 1 nEx h X v∈N+ n f(φv) 2i . By Proposition 3.8(a,b), since ⟨χ, f⟩= 0, (5.4) 1 nEx h X v∈N+ n f(φv) 2i = εf n(x), with εf n →0 in L2 and pointwise. Further, by the stochastic domination (3.5), using Theorem 2.1, (nPx[N + n ̸= ∅])−1 ≤(nPh∗[N + n ̸= ∅])−1 ≤c and the statement of the lemma follows. □ Proof of Theorem 2.2. We first consider the special case f = χ; the general case will then easily follow using Lemma 5.1. We start by showing that for every α > 0, (5.5) E  e−αZχ,x n  = C1 C1 + α + εα n(x), with εα n →0 pointwise and in L2(ν). Using the definitions (4.1), (4.32) of uf n and fλ, setting as before uλ n = ufλ n , and recalling (4.2), we obtain E  e−αZχ,x n  = Ex h exp  −α n X v∈N+ n χ(φv)  N + n ̸= ∅ i = 1 −un/α n (x) u0 n(x) . (5.6) By Lemma 4.6 and (4.68), (5.7) u0 n(x) = Cn−1χ(x)(1 + ε(x)), where ε →0 pointwise. Since Lemma 4.6 holds uniformly in λ, we can apply it with λ = n/α to obtain (5.8) un/α n (x) = an/α n χ(x)(1 + ˜ε(x)), again with ˜ε →0 pointwise. Moreover, by Lemma 4.7, again using the uniformity in λ, (5.9) (nan/α n )−1 = (nan/α 0 )−1 + C−1 1 + o(1) as n →∞. By the definitions of aλ n, uλ n and fλ, and by the monotone convergence theorem, nan/α 0 = n⟨χ, un/α 0 ⟩= n χ, E·  1 −fn/α(φo)  = n⟨χ, 1 −fn/α⟩= χ, n 1 −e−αχ(·)/n n→∞ −−−→α⟨χ, χ⟩= α. (5.10) Inserting this into (5.9) yields (5.11) an/α n = αC1(1 + o(1)) n(α + C1) as n →∞. Combining (5.6)–(5.8), (5.11) gives (5.12) 1 −E  e−αZχ,x n  = α α + C1 (1 + o(1)), SCALING LIMITS FOR THE CRITICAL GFF 21 which shows the pointwise convergence in (5.5). The L2(ν)-convergence follows by the dominated convergence theorem, since the left-hand side of (5.5) is bounded by 1. By L´evy’s continuity theorem for the Laplace transform, (5.5) implies the statement (2.13) of the theorem in the case Zf,x n with f = χ. For general f ∈L2(ν) we write f = ⟨χ, f⟩χ+β[f] as usual. Then Zf,x n = ⟨χ, f⟩Zχ,x n +Zβ[f],x n . 
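As an aside, the identity (5.6) makes the exponential limit directly computable. The following continuation of the earlier sketches (reusing x, d, K, chi, inner, h_star and C1 from them; the horizon n and the evaluation point x₀ are illustrative) compares 1 − u_n^{n/α}(x₀)/u_n^0(x₀) with the limit C₁/(C₁ + α) from (5.5), the Laplace transform of an exponential variable with mean 1/C₁.

```python
# Continuation (reuses x, d, K, chi, inner, h_star, C1).  The horizon n and
# the evaluation point x0 = h* + 1 are illustrative.
def u_lam_n(lam_, n):
    """u_n^lambda on the grid, iterating (4.8) from the initial data (4.1), (4.32)."""
    u = np.where(x >= h_star,
                 (1.0 - np.exp(-np.abs(chi) / lam_)) if lam_ > 0 else 1.0, 0.0)
    for _ in range(n):
        u = 1.0 - (1.0 - (K @ u) / d) ** d
    return u

n = 400
i0 = int(np.searchsorted(x, h_star + 1.0))       # grid index of x0
u0 = u_lam_n(0.0, n)
for alpha in [0.5, 1.0, 2.0]:
    lhs = 1.0 - u_lam_n(n / alpha, n)[i0] / u0[i0]   # = E[exp(-alpha Z)] by (5.6)
    print(alpha, round(float(lhs), 4), "  limit C1/(C1+alpha):",
          round(C1 / (C1 + alpha), 4))
```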
The statement for Zf,x n then directly follows, since the first summand converges to an exponential random variable with mean C−1 1 ⟨χ, f⟩, by the first step of the proof, and the second summand converges to 0 in probability, by Lemma 5.1, since ⟨χ, β[f]⟩= 0. We now show the statement of the theorem for Zf n. Let νn be the law of φo under P[ · |N + n ̸= ∅]. By integrating (5.5) over νn(dx) (5.13) E  e−Zχ n = C1 C1 + α + Z εα n(x)νn(dx). We need to show that the last integral is o(1). Observe that νn(dx) = P[φo ∈dx|N + n ̸= ∅] = P[φo ∈dx, N + n ̸= ∅]P[N + n ̸= ∅]−1 = Px[N + n ̸= ∅]P[N + n ̸= ∅]−1ν(dx). Therefore, using the Cauchy–Schwarz inequality and that εα n →0 in L2(ν), the integral in (5.13) is bounded above by (5.14) o(1)  Z Px[N + n ̸= ∅]2 P[N + n ̸= ∅]2 ν(dx) 1/2 . By (4.67), Px[N + n ̸= ∅]P[N + n ̸= ∅]−1 = χ(x)⟨1, χ⟩−1 + εn(x), with εn(x) →0 in L2(ν) as n →∞. This implies that the integral in (5.14) is O(1) and together with (5.13) and again L´evy’s continuity theorem shows statement (2.13) for Zf n with f = χ. For Zf n and general f ∈L2(ν) the statement then follows by integrating (5.1) over νn(dx) and applying similar arguments as above. □ 5.1. Further results for the conditioned model. The following few results which all concern the model conditioned to {N + n ̸= ∅} will later be used in the proof of Theorem 2.3. The first one explains the role of the measure Qx introduced using the spine construction in Section 3.2. Similar results are well established in branching processes literature, see for instance [HH07, Theorem 5] or [CR90, Theorem 4]. Proposition 1.5 in [Pow19] can be seen as the analogous result for branching diffusion on bounded domains. Proposition 5.2. For all K ∈N, x ≥h∗and B ∈FK (see (3.20)) (5.15) lim n→∞Px[B|N + n ̸= ∅] = Qx[B]. Proof. We follow similar steps as in the proof of [Pow19, Proposition 1.5]. For every n ≥K, (5.16) Px[B|N + n ̸= ∅] = Ex h1BPx[N + n ̸= ∅|FK] Px[N + n ̸= ∅] i . Writing N + K = {v1, . . . , v|N+ K|} and defining the events Ai = {vi is root of a subtree in Co with height of at least n −K}, (5.17) {N + n ̸= ∅} = |N+ K| [ k=1 Ak = |N+ K| [ k=1  Ak ∩ \ j<k Ac j  , SCALING LIMITS FOR THE CRITICAL GFF 22 where the last union is disjoint. Since the Ai are independent conditionally on FK, (5.18) Px[N + n ̸= ∅|FK] = |N+ K| X i=1 Pφvi[N + n−K ̸= ∅] Y j<i Pφvj [N + n−K = ∅]. By Theorem 2.1, Pφvi[N + n−K ̸= ∅] ∼C1(n−K)−1χ(φvi) →0 and Px[N + n ̸= ∅] ∼C1n−1χ(x) as n →∞. Therefore, (5.19) lim n→∞ 1BPx[N + n ̸= ∅|FK] Px[N + n ̸= ∅] = 1B P v∈NK χ(φv) χ(x) , Px-a.s. In order to insert this into (5.16), we use the generalised dominated convergence theorem. To this end we bound the fraction on the left-hand side of (5.19) by a function gn satisfying gn →g in L1(Px) and Px-a.s.: Using (5.17) and Theorem 2.1 (or (4.67), (4.68) from its proof), (5.20) Px[N + n ̸= ∅|FK] ≤ X v∈N+ K Pφv[N + n−K ̸= ∅] = X v∈N+ K C1(n −K)−1(χ(φv) + ¯εn−K(φv)) with ¯εn →0 pointwise and in L2(ν) as n →∞. For the denominator, again by Theo- rem 2.1, Px[N + n ̸= ∅] ≥Ph∗[N + n ̸= ∅] ≥cn−1 for some constant c. Thus, for n > K, 1BPx[N + n ̸= ∅|FK] Px[N + n ̸= ∅] ≤ cn n −K X v∈N+ K (χ(φv) + ¯εn−K(φv)) ≤c′ X v∈N+ K (χ(φv) + ¯εn−K(φv)) =: gn, (5.21) which converges a.s. to g := c′ P v∈N+ K χ(φv). Moreover, by Proposition 3.8, (5.22) Ex[|gn −g|] ≤Ex  c′ X v∈N+ K |¯εn−K(φv)|  = ⟨χ, |¯εn−K|⟩χ(x) + ε|¯εn−K| K (x). Since ∥¯εn∥→0, the bounds established in (3.34) imply that the right-hand side of (5.22) converges to 0, that is gn →g in L1(P x). 
Therefore, by the generalised dominated convergence theorem, using (5.16), (5.19), (5.23) lim n→∞Px[B|N + n ̸= ∅] = Ex h 1B P v∈NK χ(φv) χ(x) i = Qx[B], where the last equality follows from Lemma B.2 with k = 1 and Y (v1) = 1Bχ(φv1). □ The following lemma and its corollary follow almost directly from the results established in the proof of Theorem 2.2. Lemma 5.3. For every f ∈L2(ν), x ≥h∗and δ > 0, (5.24) lim n→∞Px  P v∈N+ n f(φv) P v∈N+ n χ(φv) −⟨χ, f⟩ > δ N + n ̸= ∅  = 0. Proof. Observe that P v∈N+ n f(φv) P v∈N+ n χ(φv) −⟨χ, f⟩= P v∈N+ n (f(φv) −⟨χ, f⟩χ(φv)) P v∈N+ n χ(φv) = n−1 P v∈N+ n β[f](φv) n−1 P v∈N+ n χ(φv) . (5.25) SCALING LIMITS FOR THE CRITICAL GFF 23 For any A, B ∈R and ε, δ > 0, {|A/B| > ε} ⊂{|A| > δ} ∪{|B| < δ/ε}, which together with (5.25) allows us to bound the left-hand side of (5.24) by (5.26) Px h n−1 X v∈N+ n β[f](φv) > ¯δn N + n ̸= ∅ i + Px h n−1 X v∈N+ n χ(φv) < ¯δn/δ N + n ̸= ∅ i . We now show that there is a sequence ¯δn →0 such that both summands converge to zero. By Lemma 5.1, the first summand is bounded by ¯δ−2 n ¯εf n(x), where the ¯εf n-term is independent of ¯δn and converges to zero. Therefore, if ¯δn →0 sufficiently slowly, then also ¯δ−2 n ¯εf n(x) →0. For the second summand, fix ε > 0. Then for n large enough so that ¯δn/δ ≤ε, (5.27) Px h n−1 X v∈N+ n χ(φv) < ¯δn/δ N + n ̸= ∅ i ≤Px h n−1 X v∈N+ n χ(φv) < ε N + n ̸= ∅ i . By Theorem 2.2, the right-hand side converges to P(Z < C1ε) which can be made ar- bitrarily small by choosing ε small. This implies that the second summand in (5.26) converges to zero and completes the proof. □ Corollary 5.4. For every x ≥h∗, δ > 0 and f, g ∈L2(ν) with g > 0, (5.28) lim n→∞Px  P v∈N+ n f(φv) P v∈N+ n g(φv) −⟨χ, f⟩ ⟨χ, g⟩ > δ N + n ̸= ∅  = 0. In particular, setting g ≡1, (5.29) lim n→∞Px  1 |N + n | X v∈N+ n f(φv) −⟨χ, f⟩ ⟨χ, 1⟩ > δ N + n ̸= ∅  = 0. Proof. It is easy to see that there is δ′ = δ′(δ, f, g) > 0 such that the event on the left- hand side of (5.28) is contained in the union of {|P f(φv)/ P χ(φv) −⟨χ, f⟩| ≥δ′} and {|P g(φv)/ P χ(φv) −⟨χ, g⟩| ≥δ′}. The statement then follows by Lemma 5.3. □ The final result of this section is the following technical lemma which can be seen as a somewhat stronger version of (5.29) in Corollary 5.4. It is tailored to be used in the proof of Proposition 6.7. Lemma 5.5. Write N + n = {v1, v2, . . . , v|N+ n |} and set N + n,M = {v1, v2, . . . , vM}. For δ, ρ > 0 and f : R →R bounded, define the event (5.30) Bn(f, δ, ρ) := {ρn ≤|N + n |} ∩  [ ρn≤M≤|N+ n |  P v∈N+ n,M f(φv) M −⟨χ, f⟩ ⟨χ, 1⟩ > δ  . Then for every x ≥h∗, (5.31) lim n→∞Px  Bn(f, δ, ρ) N + n ̸= ∅  = 0. Proof. We start by outlining the strategy of the proof. We will define events An = An(n0) (see (5.34)) and A′ n = A′ n(n0, q) (see (5.40)), so that when n0 = n0(n) and q = q(n) are chosen correctly, (5.32) lim n→∞Px  An(n0(n)) N + n ̸= ∅  = 0, lim n→∞Px  A′ n(n0(n), q(n)) N + n ̸= ∅  = 0, Bn(f, δ, ρ) ⊆An(n0(n)) ∪A′ n(n0(n), q(n)) for n large enough. SCALING LIMITS FOR THE CRITICAL GFF 24 The statement of the lemma follows directly from these claims by a union bound. To define An = An(n0), we fix 0 ≤n0 ≤n, write N + n0 = {w1, . . . , w|N+ n0|}, and set N i n = {v ∈N + n : wi is ancestor of v}. For 1 ≤i ≤|N + n0|, we set (5.33) mi = ( |N i n|−1 P v∈Nin f(φv), if |N i n| > 0, ⟨χ, f⟩/⟨χ, 1⟩, otherwise, and define the events (5.34) Ai n = n mi −⟨χ, f⟩ ⟨χ, 1⟩ > δ 2 o and An = An(n0) = |N+ n0| [ i=1 Ai n. We now prove an upper bound on P[An|N + n ̸= ∅]. 
By a union bound, using that Ai n ⊂{N i n ̸= ∅} ⊂{N + n ̸= ∅}, Px[An|N + n ̸= ∅] = Px[N + n ̸= ∅]−1Px[An ∩{N + n ̸= ∅}] ≤Px[N + n ̸= ∅]−1Ex  |N+ n0| X i=1 Px[Ai n ∩{N i n ̸= ∅}|Fn0]  . (5.35) Using the branching process properties of the GFF, Px[Ai n ∩{N i n ̸= ∅}|Fn0] = Pφwi  1 |N + n−n0| X v∈N+ n−n0 f(φv) −⟨χ, f⟩ ⟨χ, 1⟩ > δ 2 N + n−n0 ̸= ∅  Pφwi[N + n−n0 ̸= ∅]. (5.36) By Corollary 5.4(b), the first probability on the right-hand side is bounded by some εn−n0(φwi) satisfying 1 ≥εn →0 pointwise, and thus in any Lq(ν) by the bounded convergence theorem. By Proposition 4.5 and Lemmas 4.6, 4.7, the second probability equals (5.37) Pφwi[N + n−n0 ̸= ∅] = C1(n −n0)−1χ(φwi) + ¯εn−n0(φwi)  , where ¯ε →0 pointwise and in L5/2(ν). Combining these statements with (5.35) and (5.36), and using then Proposition 3.8 in the numerator, we obtain Px[An|{N + n ̸= ∅}] ≤ Ex h P w∈N+ n0 εn−n0(φw)C1(n −n0)−1(χ(φw) + ¯εn−n0(φw)) i C1n−1(χ(x) + ¯εn(x)) = c(x)n n −n0 ⟨χ, εn−n0(χ + ¯εn−n0)⟩χ(x) + ˜εn0(x), (5.38) where ˜εn →0 pointwise. Finally, using H¨older’s inequality on the inner product, since χ, χ2 ∈L2(ν) (by (3.18)), εn →0 in L5(ν) and ¯ε →0 in L5/2(ν), it follows that for every x ≥h∗, (5.39) if n0 →∞and n −n0 →∞, then Px[An(n0)|{N + n ̸= ∅}] →0. We now turn to the events A′ n. For given n0 ≤n and q, let (5.40) A′ n = A′ n(n0, q) = |N+ n0| [ i=1 {|N i n| > q}. SCALING LIMITS FOR THE CRITICAL GFF 25 To bound the probability P[A′ n|N + n ̸= ∅], we write (5.41) Px[A′ n ∩{N + n ̸= ∅}] ≤Px[A′ n] ≤Ex h |N+ n0| X i=1 Pφwi[|N + n−n0| > q] i . By the Markov inequality and Proposition 3.8(b) (with f = g = 1), (5.42) Pφwi[|N + n−n0| > q] ≤q−2Eφwi[|N + n−n0|2] = Cq−2(χ(φwi)(n −n0) + εn−n0(φwi)), where ∥εn−n0∥≤c. Thus, by Proposition 3.8(a), Px[A′ n ∩{N + n ̸= ∅}] ≤q−2Ex h X v∈N+ n0 (cχ(φv)(n −n0) + εn−n0(φv)) i ≤cq−2(n −n0 + 1)χ(x) + ε′ n0(x)  . (5.43) with ε′ n →0 pointwise. Together with Px[N + n ̸= ∅] ≥cn−1, by Theorem 2.1, this implies (5.44) Px[A′ n|N + n ̸= ∅] ≤n q2  (n −n0 + C)χ(x) + ε′ n0(x)  . We will now give sufficient conditions on the functions n0(n) and q(n) so that for large enough n, (5.45) Bn(f, δ, ρ) ⊂An ∪A′ n, or equivalently Bn(f, δ, ρ)c ⊃(An)c ∩(A′ n)c. By definition, {ρn ≤|N + n |}c ⊆Bn(f, δ, ρ)c. Therefore, (5.45) is implied by (5.46) (An)c ∩(A′ n)c ∩{ρn ≤|N + n |} ⊆Bn(f, δ, ρ)c. To prove this, we assume that (An)c ∩(A′ n)c ∩{ρn ≤|N + n |} holds. For M ≤|N + n |, let (5.47) k(M) := inf n ℓ X i=1 |N i n| : ℓ∈{0, . . . , |N + n0|} such that ℓ X i=1 |N i n| ≥M o ≥M. On (An)c, |N i n|−1 P v∈Nin f(φv) −⟨χ, f⟩/⟨χ, 1⟩ ≤δ/2 for all i = 1, . . . , |N + n0|. Therefore, for all M with ρn ≤M ≤|N + n | (5.48) P v∈N+ n,k(M) f(φv) k(M) −⟨χ, f⟩ ⟨χ, 1⟩ ≤δ 2. Writing P v∈N+ n,M f(φv) = P v∈N+ n,k(M) f(φv) −P v∈N+ n,k(M)\N+ n,M f(φv), and using that the absolute value of the second sum is bounded by (k(M) −M) sup|f|, we obtain that for every M ≥ρn P v∈N+ n,M f(φv) M − P v∈N+ n,k(M) f(φv) k(M) = P v∈N+ n,k(M) f(φv) M − P v∈N+ n,k(M)\N+ n,M f(φv) M − P v∈N+ n,k(M) f(φv) k(M) ≤ X v∈N+ n,k(M) f(φv)  1 M − 1 k(M)  + k(M) −M M sup|f| = k(M) −M M  P v∈N+ n,k(M) f(φv) k(M) + sup|f|  ≤c(f, δ) q ρn, (5.49) SCALING LIMITS FOR THE CRITICAL GFF 26 where in the last inequality we used (5.48) and the fact that k(M) −M ≤q on (A′ n)c. If the right-hand side of (5.49) is smaller than δ/2, then together with (5.48), this implies that Bn(f, δ, ρ)c holds, implying (5.46) and thus (5.45). To finish the proof of (5.32), we must choose n0 = n0(n) and q = q(n) so that (5.39) applies and the left-hand sides of (5.43), (5.49) tend to zero. 
This is easily done by setting, e.g., q(n) = n3/4 and n0 = n −n1/4. □ 6. The Sn martingale The remaining three sections of this paper are dedicated to proving Theorem 2.3. In this section, we will introduce a martingale based on the depth-first traversal of a sequence of copies of Co ∩T+, and prove that it satisfies an invariance principle, see Proposition 6.1. This martingale can be seen as an analogue to the Lukasiewicz path used to study critical Galton-Watson trees. The second part of the section then demonstrates another two scaling limit results, Proposition 6.8 and Proposition 6.10 which are both consequences of Proposition 6.1. To define the martingale, we consider an i.i.d. sequence T = ((T 1, φ1), (T 2, φ2), . . . ) where every (T i, φi) is distributed as (Co ∩T+, φ|Co∩T+) under Px. To keep the notation simple, we keep using Px for the probability measure associated with the whole sequence T, and for v ∈T i we write φv instead of φi v. We further use oi to denote the root of T i, and set N +,i n = {v ∈T i : d(oi, v) = n}. Note that |N +,i n | has the same distribution as |N + n | which was studied in detail in the previous sections. In particular, we know that all T i are a.s. finite. Throughout this section, we use the notation for the parent p(v), direct descendants desc(v), and siblings sib(v) of v ∈T i relative to the rooted tree T i (not T), w ⪯v means that w is an ancestor of v, cf. below (3.1). We now describe the depth-first traversal v = (v1, v2, . . . ) of T. It starts at the root of T 1, that is v1 = o1, and then explores the tree T 1 in a depth-first manner. After visiting all vertices of T 1, it proceeds to o2, explores T 2 in a depth-first manner, and so forth. vi denotes the i-th vertex visited during this traversal. The notation v <v w indicates that v precedes w in v (with ≤v representing the reflexive version). For any v ∈∪iT i we write Λ(v) to denote the index of the tree which v belongs to (that is v ∈T Λ(v)). We define (6.1) Λn = Λ(vn) and Hn = |vn| = d(vn, oΛn), that is Λn is the index of the tree which is explored at step n, and Hn is the “height” of the n-th explored vertex. For every v ∈∪iT i, we define the set Y (v) as the union of all siblings of some ancestor of v that appear later in v than this ancestor itself, that is, (6.2) Y (v) = [ w⪯v {u ∈sib(w) : w <v u}. With this notation we can introduce the key object of this section, the process (6.3) Sn = χ(φvn) − X i≤Λn χ(φoi) + X w∈Y (vn) χ(φw), n ∈N, which is adapted to the filtration (6.4) Hn = σ  (v, φv) : v ∈ n[ i=1 sib(vi)  , n ∈N. SCALING LIMITS FOR THE CRITICAL GFF 27 The next proposition shows that Sn converges to a Brownian motion with variance (6.5) σ2 = ⟨χ, V⟩/⟨χ, 1⟩, where (6.6) V(x) := Px h X w∈N+ 1 χ(φw) 2i −Px h X w∈N+ 1 χ(φw) i2 = Varx  X w∈N+ 1 χ(φw)  . Here and below (Bt)t≥0 denotes the standard Brownian motion. Proposition 6.1. For every x ≥h∗, in Px-distribution w.r.t. the Skorokhod topology, (6.7) lim n→∞  1 √nS⌊nt⌋  t≥0 = (σBt)t≥0. We need a few preparatory steps to show this proposition. We start by proving that S is a martingale. Lemma 6.2. The process S is a H-martingale under Px. Proof. We first show that (6.8) Sk+1 −Sk = −χ(φvk) + X w∈desc(vk) χ(φw). To this end we need to distinguish between several possible scenarios: (1) If vk is not a leaf, that is desc(vk) ̸= ∅, then vk+1 is the first child of vk, and thus Y (vk+1) = (desc(vk)\{vk+1})∪Y (vk) where the union is disjoint. From this (6.8) follows. 
(2) If vk is a leaf, that is desc(vk) = ∅, then there are three possible cases for vk+1: Either it is the next sibling (with respect to <v) of vk, or the next sibling of some ancestor of vk, or it is the root of the next tree in the tree sequence. In the first two cases, Y (vk) = Y (vk+1) ∪{vk+1} where the union is disjoint, and thus Sk+1 −Sk = −χ(φvk) and thus (6.8) holds. In the last case, when vk+1 is the root of the next tree, both sets Y (vk) and Y (vk+1) must be empty and Λk+1 = Λk + 1, which implies (6.8) also in this case. From (6.8) and the branching process properties of φ it follows that Ex[Sk+1 −Sk|Hk] = Ex h −χ(φvk) + X w∈desc(vk) χ(φw) Hk i = −χ(φvk) + Eφvk h X w∈N+ 1 χ(φw) i = −χ(φvk) + Lχ(φvk) = 0, (6.9) since χ is the eigenfunction of L (see (3.12)). This finishes the proof. □ The next four simple lemmas will be used to control the quadratic variation of S. Lemma 6.3. For every k ≥1, with V as in (6.6), (6.10) Ex[(Sk+1 −Sk)2|Hk] = V(φvk). Proof. By (6.8) the conditional expectation Ex[(Sk+1 −Sk)2|Hk] equals χ(φvk)2 −2χ(φvk)Ex h X w∈desc(vk) χ(φw) Hk i + Ex h X w∈desc(vk) χ(φw) 2 Hk i = χ(φvk)2 −2χ(φvk)Eφvk h X w∈N+ 1 χ(φw) i + Eφvk h X w∈N+ 1 χ(φw) 2i . (6.11) From this the statement follows using (3.12) again. □ SCALING LIMITS FOR THE CRITICAL GFF 28 Lemma 6.4. There is c < ∞such that V(x) < c for all x ≥h∗. Proof. Let {w1, . . . , wd} be the children of the root in T+. Since (φwi)d i=1 are indepen- dent under Px and χ(x) = 0 for x < h∗, Varx(P v∈N+ 1 χ(φv)) = Varx(Pd i=1 χ(φwi)) = Pd i=1 Varx(χ(φwi)) = d VarY (χ(Y + x/d)), where Y ∼N(0, σ2 Y ), see (3.4). Since χ is Lipschitz on [h∗, ∞) by (3.19), the statement follows. □ To state the next result, let Un be a random variable distributed uniformly on {1, . . . , n}, defined on the same probability space as the sequence T, independent of T. We write (6.12) H∗ n = HUn and Λ∗ n = ΛUn, for the height and the tree index of a vertex chosen uniformly amongst the first n explored vertices. Lemma 6.5. For all x ≥h∗, (6.13) lim u→∞sup n≥1 Px[H∗ n ≥u√n] = 0. Proof. By Theorem 2.1 and Proposition 3.3, there is c(x) < ∞such that for all n ≥1, (6.14) Px[N + n ̸= ∅] ≤c(x) n and Ex[Λn] ≤c(x)√n. To see the second inequality in (6.14), we note that by the independence of the trees T i and Proposition 3.3, Px[Λn ≥k] ≤Px[|T i| < n, i = 1, . . . , k −1] = Px[|T 1| < n]k−1 ≤ (1 −c(x)n−1/2)k−1 and thus Ex[Λn] = P∞ k=1 Px[Λn ≥k] ≤c(x)−1n1/2 as claimed. Since {H∗ n ≥u√n} ⊆{∃i ≤Λn : N +,i ⌈u√n⌉̸= ∅}, by a union bound, (6.15) Px[H∗ n ≥u√n] ≤ X i≥1 Px  i ≤Λn, N +,i ⌈u√n⌉̸= ∅  . The events {i ≤Λn} and {|N +,i ⌈u√n⌉| > 0} are independent. Hence, using the first half of (6.14), this is bounded by c(x)u−1n−1/2 P i≥1 Px[i ≤Λn]. Since P i≥1 Px[i ≤Λn] = Ex[Λn] ≤c(x)√n, by the second half of (6.14), the claim follows. □ Lemma 6.6. For every δ > 0 and x ≥h∗there exists a sequence R(n) such that limn→∞R(n) = ∞and (6.16) sup n≥1 Px[H∗ n ≤R(n)] ≤δ. Proof. We will show that for every K < ∞ (6.17) lim n→∞Px[H∗ n ≤K] = 0, which implies the statement of the lemma. For fixed K > 0, by definition of H∗ n, (6.18) Px[H∗ n ≤K] = n−1 n X l=1 Px[Hl ≤K] = n−1Ex h K X h=0 n X l=1 1Hl=h i . Since Pn l=1 1Hl=h ≤PΛn i=1|N +,i h | and thus PK h=0 Pn l=1 1Hl=h ≤PΛn i=1 PK h=0|N +,i h |. More- over, (PK h=0|N +,i h |)Λn i=1 are i.i.d. and with finite mean. Noting that Λn is a stopping time SCALING LIMITS FOR THE CRITICAL GFF 29 with respect to Gi = σ(T j : j ≤i) and using Wald’s equation (see e.g. 
[Dur19, Theorem 2.6.2]), (6.19) Px[H∗ n ≤K] ≤1 nEx h Λn X i=1 K X h=0 |N +,i h | i = 1 nEx[Λn]Ex h K X h=0 |N +,1 h | i , which by (6.14) converges to 0 as n →∞. □ The following law of large numbers will be important for the proof of Proposition 6.1. Proposition 6.7. Let f be a bounded function and mf n := n−1 Pn i=1 f(φvi). Then (6.20) lim n→∞mf n = mf ∞:= ⟨χ, f⟩ ⟨χ, 1⟩, in Px-probability. Proof. Let (6.21) N +,i k,n = N +,i k ∩{v1, . . . , vn} be the part of N +,i k traversed in the first n steps, and let N ∗ n = N +,Λ∗ n H∗n,n . We set (6.22) m∗ n = |N ∗ n|−1 X v∈N∗n f(φv). Denoting by FT the σ-algebra generated by the sequence T = (Ti, φi)i≥1, it holds that (6.23) mf n = Ex[m∗ n|FT] (effectively, the expectation here is over Un only). To show the proposition it will thus be sufficient to show that (6.24) lim n→∞m∗ n = mf ∞ in Px-probability. To see that this is indeed enough, note that since f is bounded m∗ n is dominated by a constant. Hence, by the dominated convergence theorem, m∗ n →mf ∞also in L2(Px). By (6.23), since the conditional expectation is a contraction on L2(Px), mf n →mf ∞in L2(Px), and thus in Px-probability as claimed. To prove (6.24), we fix ε, δ > 0 and set (6.25) An = {|m∗ n −mf ∞| > ε}. We then fix C < ∞such that Px[H∗ n ≥Cn1/2] ≤δ/3 for all n ≥1, which is possible by Lemma 6.5, and take R(n) as in Lemma 6.6, so that Px[H∗ n ≤R(n)] ≤δ/3. Setting Bn = {R(n) ≤H∗ n ≤Cn1/2}, it follows that for all n ≥1, (6.26) Px[An] ≤2δ/3 + Px[An ∩Bn] SCALING LIMITS FOR THE CRITICAL GFF 30 We further set mi h,n = |N +,i h,n|−1 P v∈N+,i h,n f(φv) and Ai h,n = {|mi h,n −mf ∞| > ε}. Using the definitions of Un and m∗ n, and then decomposing by possible values of Hk, Λk, we obtain Px[An ∩Bn] = n X k=1 Px[An ∩Bn|Un = k]Px[Un = k] = n X k=1 1 nPx[AΛk Hk,n, R(n) ≤Hk ≤Cn1/2] = 1 n Cn1/2 X l=R(n) n X i=1 Ex h 1Ai l,n n X k=1 1{Hk=l,Λk=i} i . = 1 n Cn1/2 X l=R(n) n X i=1 Ex  1Ai l,n N +,i l,n  , (6.27) where in the last step we used Pn k=1 1{Hk=l,Λk=i} = |N +,i l,n |. To analyse the expectation on the right-hand side, we define σ-algebras Gi = σ(T 1, T 2, . . . , T i) and set τi = |T 1| + · · · + |T i−1| (which is Gi−1-measurable). Then, using that (T i, φi)i≥1 is an i.i.d. sequence, Ex  1Ai l,n|N +,i l,n | Gi−1  = 1{τi≤n}Ex  1A1 l,n−τi N +,1 l,n−τi  = 1{τi≤n}Ex  1Al,n−τi|N + l,n−τi| N + l ̸= ∅  Px[N + l ̸= ∅], (6.28) where on the last line we omitted the superscript ‘1’ since N +,1 l,k has the same distribution as N + l,k. By Theorem 2.1, Proposition 3.8 and (6.14) there is a c = c(x) such that (6.29) Px  N + l ̸= ∅  ≤cl−1, Ex  N + l 2 N + l ̸= ∅ 1/2 ≤cl, Ex  Λn  ≤cn1/2. Further, for Bn(f, ε, ρ) as in Lemma 5.5, it holds that Al,k ∩{|N + l,k| ≥lρ} ⊆Bl(f, ε, ρ). Therefore, choosing ρ = δ/(6Cc2) and decomposing on whether |N + l,n−τi| is larger than lδ/(6Cc2), we obtain Ex  1A1 l,n−τi|N +,1 l,n−τi| N +,1 l ̸= ∅  ≤ lδ 6Cc2 + Ex  1Bn(f,ε,ρ)|N + l,n−τi| N + l ̸= ∅  ≤ lδ 6Cc2 + Ex  |N + l |2 N + l ̸= ∅ 1/2Px  Bl(f, ε, ρ) N + l ̸= ∅ 1/2, (6.30) where in the last step we used the Cauchy–Schwarz inequality and |N + l,n−τi| ≤|N + l |. Combining (6.28) and (6.30) with the first two claims in (6.29) we obtain (6.31) Ex  1Ai l,n|N +,i l,n |  ≤  δ 6Cc + c2Px  Bl(f, ε, ρ) N + l ̸= ∅ 1/2 Px[τi ≤n]. Inserting this into (6.26), (6.27), we obtain that Px[An] is bounded by (6.32) 2δ 3 + 1 n  δ 6Cc + c2 sup l≥R(n) Px  Bl(f, ε, ρ) N + l ̸= ∅ 1/2 ⌊C√n⌋ X l=R(n) n X i=1 Px[Λn ≥i], Since R(n) diverges, Lemma 5.5 implies that the supremum in this formula tends to zero as n →∞. 
By the last property in (6.29), P⌊C√n⌋ l=R(n) Pn i=1 Px[Λn ≥i] ≤C√nEx[Λn] ≤Ccn. SCALING LIMITS FOR THE CRITICAL GFF 31 Therefore (6.33) Px[An] ≤2δ 3 + δ 6 + Cc3o(1)  ≤δ for n large enough. Since ε and δ are arbitrary, this shows (6.24) and with the initial comment completes the proof. □ We are now ready to prove Proposition 6.1. Proof of Proposition 6.1. We apply a martingale functional central limit theorem, see e.g. [Dur19, Theorem 8.2.8]. To check its assumptions we need to show that (a) limn→∞n−1 Pn k=1 Ex[(Sk −Sk−1)2|Hk−1] = ⟨χ,V⟩ ⟨χ,1⟩in Px-probability, (b) limn→∞n−1 Pn k=1 Ex[(Sk −Sk−1)21{|Sk−Sk−1|>ε√n}] = 0. Condition (a) follows from Lemma 6.3 and Proposition 6.7, using that V is bounded by Lemma 6.4. We now show (b). By the Cauchy–Schwarz inequality, Ex[(Sk−Sk−1)21{|Sk−Sk−1|>ε√n}] ≤ Ex[(Sk −Sk−1)4]1/2Px[(Sk −Sk−1)2 > ε2n]1/2. By Lemmas 6.3, 6.4, Ex[(Sk −Sk−1)2] ≤c uniformly in k and thus Px[(Sk −Sk−1)2 > ε2n] ≤cε−2n−1. Thus, to prove (b) it is enough to show (6.34) Ex[(Sk −Sk−1)4] ≤c uniformly in k. By (6.8), Sk −Sk−1 = P v∈desc(vk−1) χ(φv) −χ(φvk−1), and thus Ex[(Sk −Sk−1)4|Hk−1] = Eφvk−1 h X v∈N+ 1 χ(φv) −χ(φvk−1) 4i = Eφvk−1 h X v∈N+ 1 χ(φv) −Eφvk−1  X v∈N+ 1 χ(φv) 4i , (6.35) where we used that χ(φvk−1) = Eφvk−1[P v∈N+ 1 χ(φv)] in the last line. We write N + 1 = {w1, . . . , wd} and follow similar arguments as in the proof of Lemma 6.4, using that χ(x) = 0 for x < h∗and exploiting the independence of the φwi, to obtain that (6.35) is bounded above by cdEφvk−1[(χ(φw1) −Eφvk−1[χ(φw1)])4]. Since χ is Lipschitz on [h∗, ∞), χ(φw1)−Ex[χ(φw1)] is sub-Gaussian, and thus Ex[(χ(φw1)−Ex[χ(φw1)])4] ≤c, where the constant c only depends on d and the Lipschitz constant of χ but not on x. Statement (6.34) then follows by taking the expectation of (6.35). □ 6.1. Further invariance principles. We now prove several further convergence results that are a consequence of Proposition 6.1 and which will be useful later. Proposition 6.8. Let Bt be a standard Brownian motion, let L0 t be its local time at 0. Then, as n →∞, in Px-distribution with respect to the Skorokhod topology, (6.36) P w∈Y (v⌊nt⌋) χ(φw) √n , Λ(v⌊nt⌋) √n  t≥0 →  σ|Bt|, σ χ(x)L0 t  t≥0. To show Proposition 6.8 from Proposition 6.1, we need the following lemma. It states that (under scaling) the first summand in the definition (6.3) of Sn is negligible. Lemma 6.9. In Px-distribution with respect to the Skorokhod topology, (6.37) lim n→∞ χ(φv⌊nt⌋) √n  t≥0 = 0. SCALING LIMITS FOR THE CRITICAL GFF 32 Proof. As χ grows at most linearly (see (3.18)), it is enough to show that for any fixed t > 0 and ε > 0, (6.38) lim n→∞Px h max i≤nt φvi/√n > ε i = 0. Let H(T i) = maxv∈T i H(v) denote the height of tree T i. For every j, k ∈N, the probability in (6.38) is bounded from above by Px h max i≤nt φvi > ε√n, Λ⌊nt⌋≤j, H(Ti) ≤k for i = 1, . . . , j i + Px[∃i ∈{1, . . . , j} : H(T i) > k, Λ⌊nt⌋≤j] + Px[Λ⌊nt⌋> j]. (6.39) To show (6.38), we thus need to choose jn →∞and kn →∞so that all three summands converge to zero. We start with the first summand. Restricted to {Λ⌊nt⌋≤j} and {H(T i) ≤k for i = 1, . . . , j}, maxk≤nt φvk is dominated by the maximum of all φv with v such that Λ(v) ≤j and H(v) ≤k. Considering not only the maximum over the connected components of the level set, but over the first k generations in j copies of the whole forward tree T+, this is dominated by the maximum of jdk non-negatively correlated Gaussian random variables with mean at most x and bounded variance. 
By Gaussian comparison techniques, the mean of this maximum is of order x + c p log jdk ≤c√k log j for j, k large enough. Thus, by the Markov inequality, (6.40) Px h max i≤nt φvi > ε√n, Λ⌊nt⌋≤j, H(Ti) ≤k for i = 1, . . . , j i ≤ C εn1/2 p k log(j). By a union bound, using Theorem 2.1, the second summand in (6.39) satisfies (6.41) Px[∃i ∈{1, . . . , j} : H(T i) > k, Λ⌊nt⌋≤j] ≤c(x)j/k. Finally, by the Markov inequality and (6.14), the third summand can be bounded by (6.42) Px[Λ⌊nt⌋> j] ≤c(x) √ tn/j. Setting now, e.g., jn = n2/3 and kn = n5/6, all three summands in (6.39) converge to 0 as required. □ Proof of Proposition 6.8. We use the continuous mapping theorem together with the al- ready established convergence statements. By Proposition 6.1, n−1/2S⌊n·⌋→σB·, and by Lemma 6.9, n−1/2χ(φv⌊n·⌋) →0 as n →∞, in Px-distribution, in the Skorokhod topology. Therefore (6.43) n−1/2(S⌊n·⌋−χ(φv⌊n·⌋)) = n−1/2 X w∈Y (v⌊n·⌋) χ(φw) − X i≤Λ⌊n·⌋ χ(φoi)  also converges to σB· as n →∞. Next, note that (6.44) inf k≤n{Sk −χ(φvk)} = inf k≤n n X w∈Y (vk) χ(φw) − X i≤Λk χ(φoi) o = − X i≤Λn χ(φoi). Therefore, setting Bt = infs≤t Bs and using the continuous mapping theorem with the map g(Xt) = (Xt −infs≤t Xs, −infs≤t Xs), (6.45) n−1/2 X w∈Y (v⌊n·⌋) χ(φw), X i≤Λ⌊n·⌋ χ(φoi)  → σ(B· −B·), −σB·  SCALING LIMITS FOR THE CRITICAL GFF 33 in distribution as n →∞. By L´evy’s theorem (see, e.g., Theorem VI.2.3 in [RY99]), the right-hand side of (6.45) has the same distribution as (σ|B·|, σL0 · (B)). Together with the fact that under Px, P i≤Λ⌊nt⌋χ(φoi) = χ(x)Λ⌊nt⌋, this implies the proposition. □ Another consequence of Proposition 6.1 is the following scaling limit result for Sn conditioned to reach a certain height on the first tree T1. To this end we define (6.46) ¯Sn = (P w∈Y (vn) χ(φw) if n ≤|T 1|, 0 if n > |T 1|. Note that, up to an additive correction χ(φvn) −χ(x), ¯S· is equal to S· restricted to T1. Proposition 6.10. For y > 0, let (e≥y/σ)t≥0 be a Brownian excursion conditioned to reach at least height y/σ. Then, in distribution under Px[ · | supk ¯Sk ≥√ny], with respect to the Skorokhod topology, (6.47) lim n→∞ n−1/2 ¯S⌊nt⌋  t≥0 = (σe≥y/σ)t≥0. We omit the proof of this proposition as it would be identical to the proof of Proposi- tion 6.13 in [Pow19], which itself follows [DLG02, Proposition 2.5.2]. 7. Relation of Sn and Hn The aim of this section will be to establish a connection between the martingale Sn and the height process Hn (see (6.3) and (6.1) for definitions). This will be useful in the proof of Theorem 2.3 in Section 8. Throughout the whole section, we only consider the field on the first tree (T, φ) = (T 1, φ1) in the infinite tree sequence T = ((T 1, φ1), (T 2, φ2), . . . ) introduced in Section 6. As a consequence, we only consider ¯S introduced in (6.46) instead of S. We will see that, approximately, ¯S(v) ≈H(v)/C1 with C1 as in (2.11). Motivated by this, for η > 0 we say that v ∈T is η-bad if (7.1) ¯S(v) H(v) −C−1 1 > η. Fixing in addition R > 0, we say that v ∈T is (η, R)-bad if there exists a w ≺v such that H(w) ≥R and w is η-bad. This means that v is (η, R)-good (i.e. not (η, R)-bad) if all its ancestors in generations at least R are η-good. We set (7.2) N (η,R) n = {v ∈N + n : v is (η, R)-bad}. The first main result of this section shows that this set is relatively small. Proposition 7.1. For every ε > 0 and x ≥h∗, (7.3) lim R→∞sup n≥R Px h|N (η,R) n | |N + n | > ε N + n ̸= ∅ i = 0. Proof. 
The proof follows a strategy similar to the proof of Proposition 6.17 in [Pow19], with adaptations that are necessary in order to handle the unbounded domain in our setting. For ε > 0, R > 0 and n ≥0 we define the event (7.4) Eε R,n = nP v∈N+ n χ(φv)1{v is (η, R)-bad} P v∈N+ n χ(φv) > ε o . We first claim that it suffices to show that (7.5) lim R→∞sup n≥R Px[Eε R,n|N + n ̸= ∅] = 0. SCALING LIMITS FOR THE CRITICAL GFF 34 To see that this indeed implies the lemma, we write (7.6) |N (η,R) n | |N + n | = |N (η,R) n | P v∈N+ n χ(φv) · P v∈N+ n χ(φv) |N + n | . Therefore, for every δ > 0, the probability in (7.3) is bounded from above by (7.7) Px h |N (η,R) n | P v∈N+ n χ(φv) > δ N + n ̸= ∅ i + Px h |N + n | P v∈N+ n χ(φv) < δ/ε N + n ̸= ∅ i . By Lemma 5.3 (with f = 1), the second probability tends to 0 as n →∞if δ = δ(ε) is small enough. Since χ(x) ≥χ(h∗) > 0 for every x ≥h∗, it holds that |N (η,R) n | ≤ c P v∈N+ n χ(φv)1{v is (η, R)-bad}, and thus (7.8) Px h |N +,(η,R) n | P v∈N+ n χ(φv) > δ N + n ̸= ∅ i ≤Px  Eδ/c R,n N + n ̸= ∅  , which together with (7.5) implies the claim of the lemma. We now prove (7.5). By first using the Markov inequality, and then Lemma B.2 with k = 1 and Y (v1) = χ(φv1)1{v1 is (η, R)-bad}1{N+ n ̸=∅}/(P w∈N+ n χ(φw)), it holds Px[Eε R,n|N + n ̸= ∅] ≤ε−1Ex hP v∈N+ n χ(φv)1{v is (η, R)-bad} P v∈N+ n χ(φv) N + n ̸= ∅ i = ε−1Ex h1{N+ n ̸=∅} P v∈N+ n χ(φv)1{v is (η, R)-bad} P v∈N+ n χ(φv) i Px[N + n ̸= ∅]−1 = ε−1Qx hχ(x)Px[N + n ̸= ∅]−1 P v∈N+ n χ(φv) 1{σn is (η, R)-bad} i , (7.9) where, as defined in Section 3, σn is the vertex on the spine in the n-th generation. To continue, we first show the following two claims: (i) supn≥R Qx[σn is (η, R)-bad] →0 as R →∞. (ii) Set Zn = χ(x)Px[N+ n ̸=∅]−1 P v∈N+ n χ(φv) . Then for all δ > 0, there exist R′, K > 0 such that Qx[Zn1{Zn>K}] ≤δ for all n ≥R′. Claim (i) will follow from the ergodic behaviour of the field along the spine (σn)n≥0 under Qx. Let sib<(v) = {w ∈sib(v) : v <v w}. Then ¯S(σn) = P k≤n P v∈sib<(σk) χ(φv). Recall that ξk := φ(σk). By conditioning on ξk−1, due to the fact that under Qx the spine mark at generation k is uniformly distributed on descendants of its position at generation k −1 and that non-spine vertices behave as under Px, Qx h X v∈sib<(σk) χ(φv) i = Qx h Qξk−1 h X v∈sib<(σ1) χ(φv) ii = Qx h1 2Qξk−1 h X v∈N+ 1 \{σ1} χ(φv) ii = Qx hd −1 2d Pξk−1 h X v∈N+ 1 χ(φv) ii = d −1 2d Qx[χ(ξk−1)]. (7.10) SCALING LIMITS FOR THE CRITICAL GFF 35 By Lemma 3.4 the invariant measure of the Markov chain (ξk)k≥0 under Qx is given by χ(y)2ν(dy), so the last expression converges to C−1 1 as k →∞. Claim (i) then follows from an ergodicity argument. To see (ii), note that again by Lemma B.2, Qx[Zn1{Zn>K}] = Px[1{Zn>K}1{N+ n ̸=∅}] P[N + n ̸= ∅] = Px[Zn > K|N + n ̸= ∅]. ≤Px hP v∈N+ n χ(φv) n < 1 Kc N + n ̸= ∅ i , (7.11) where in the last inequality we used the fact that χ(x)Px[N + n ̸= ∅]−1 ≥c, by Theorem 2.1. By Theorem 2.2, this converges to the probability that an exponentially distributed ran- dom variable is smaller than (Kcx)−1. It is thus straightforward to find K and R′ such that the last probability is smaller than δ for all n ≥R′, proving (ii). Returning back to (7.9), its right-hand side is bounded by Qx[Zn1{σn is (η, R)-bad}] = Qx[Zn1{Zn>K}1{σn is (η, R)-bad}] + Q[Zn1{Zn≤K}1{σn is (η, R)-bad}] ≤Qx[Zn1{Zn>K}] + KQ[σn is (η, R)-bad], (7.12) which together with (i) and (ii) implies (7.5) and concludes the proof. 
□ To state the second main result of this section, we introduce (enlarging the probability space if necessary) random variables τ 1, . . . , τ k which under Px, conditionally on T, are independent and uniformly distributed on {1, 2, . . . , |T|}. Using these random variables, we define two k × k random matrices (D ¯S n)i,j = n−1 ¯S(vτ i) + ¯S(vτ j) −2 ¯S(vτ i ∧vτ j)  (DH n )i,j = n−1H(vτ i) + H(vτ j) −2H(vτ i ∧vτ j)  , (7.13) where, as usual, (v1, v2, . . . ) denotes the depth-first traversal of T, and v ∧w denotes the most recent common ancestor of v and w in T. Proposition 7.2. For every k ≥1 and ε > 0, (7.14) lim n→∞Px  ∥C−1 1 DH n −D ¯S n∥> ε N + n ̸= ∅  = 0, where ∥·∥denotes the Frobenius norm of k × k matrices. To prove this proposition, we need two lemmas. The first one estimates from below the size of T conditionally on {N + n ̸= ∅}. Lemma 7.3. For every x ≥h∗, (7.15) lim q→0 sup n≥0 Px  |T| ≤qn2|N + n ̸= ∅  = 0. Proof. It will be sufficient to show that for every δ > 0 there exist q and n0 such that for every n ≥n0, (7.16) Px[|T| ≤qn2|N + n ̸= ∅] ≤δ. Indeed, since there are only finitely many n < n0, by decreasing the value of q, the inequality (7.16) can be made true for all n ≥0. This then directly implies the claim of the lemma. SCALING LIMITS FOR THE CRITICAL GFF 36 To show (7.16), observe that for every η > 0 Px[|T| ≤qn2|N + n ̸= ∅] ≤Px  |T| ≤qn2, |N + n | ≥ηn|N + n ̸= ∅  + Px  |N + n | < ηn|N + n ̸= ∅  . (7.17) By Theorem 2.2, there is η > 0 small such that the second probability on the right-hand side of (7.17) is bounded by δ/2 for all n large enough. It is thus sufficient to show that for every δ > 0 and η > 0 there is n0 so that for all n ≥n0 (7.18) Px  |T| ≤qn2, |N + n | ≥ηn N + n ̸= ∅  ≤δ/2. Note first that (7.19) Px  |T| ≤qn2, |N + n | ≥ηn N + n ̸= ∅  ≤Px  |T| ≤qn2 |N + n | ≥ηn  . Given {|N + n | ≥ηn}, let w1, . . . , w⌊ηn⌋be the first ⌊ηn⌋vertices in N + n , and let Tw be the subtree of T rooted at w. Obviously, |T| ≤P⌊ηn⌋ i=1 |Twi|. Under P[ · ||N + n | ≥ηn], the random variables |Twi| are independent. Moreover, since w ∈N + n implies φw ≥h∗, by stochastic domination (3.5), for any u > 0, (7.20) Px  |Twi| ≥u |N + n | ≥ηn  ≥Ph∗ |Co ∩T+| ≥u  . Denoting T ′ i, i ≥1, i.i.d. random variables distributed as |Co ∩T+| under Ph∗, it follows that (7.21) Px  |T| ≤qn2 |N + n | ≥ηn  ≤P h ⌊ηn⌋ X i=1 T ′ i ≤qn2i . By Theorem 3.3, T ′ i are in the domain of attraction of the 1/2-stable random distribution. Therefore, n−2 P i≤⌊ηn⌋T ′ i converges in distribution to a non-negative 1/2-stable random variable (see e.g. [Fel71, Theorem XIII.6.2]). As a consequence, since the distribution of this random variable has no atom at 0, for any δ, η > 0 there exists q small so that for all n large enough the right-hand side of (7.21) is bounded by δ/2. This shows (7.18) and completes the proof. □ The second lemma needed to show Proposition 7.2 studies the probability that a ran- domly chosen vertex vτ1 is (η, R)-bad. Lemma 7.4. Let V := vτ1 be a uniformly chosen vertex of T. Then, for every x ≥h∗ and η > 0, (7.22) lim R→∞sup n≥R Px  V is (η, R)-bad N + n ̸= ∅  = 0. Proof. For arbitrary positive constants q, h1 < h2, it holds Px  V is (η, R)-bad N + n ̸= ∅  ≤Px  |T| ≤qn2 N + n ̸= ∅  + Px  |T| > qn2, H(V ) < h1n N + n ̸= ∅  + Px  |T| > qn2, H(V ) > h2n N + n ̸= ∅  + Px  |T| > qn2, H(V ) ∈[h1n, h2n], V is (η, R)-bad N + n ̸= ∅  . 
(7.23) We will show that for every δ > 0 there are q, h1, h2 such that for every n ≥0 the first three summands on the right-hand side are smaller than δ/3, and that for every fixed q, h1, h2 the fourth one satisfies (7.24) lim R→∞sup n≥R Px  |T| > qn2, H(V ) ∈[h1n, h2n], V is (η, R)-bad N + n ̸= ∅  = 0. SCALING LIMITS FOR THE CRITICAL GFF 37 This will imply the statement of the lemma. Concerning the first summand, the fact that it is possible to choose q > 0 small so that Px[|T| ≤qn2 N + n ̸= ∅] ≤δ/3 for all n ≥0 follows immediately from Lemma 7.3. From now on we keep q fixed in this way. For the second summand, it holds Px  |T| > qn2, H(V ) < h1n N + n ̸= ∅  ≤ 1 Px[N + n ̸= ∅]Px  |T| > qn2, H(V ) < h1n  ≤ 1 Px[N + n ̸= ∅] 1 qn2Ex h h1n X k=0 |N + k | i , (7.25) where the last inequality follows from the fact that V is a uniformly chosen vertex of T, and thus on {|T| ≥qn2}, (7.26) Px[HV ≤h1n|σ(T)] = |T|−1 h1n X k=0 |N + k | ≤ 1 qn2 h1n X k=0 |N + k |. By Theorem 2.1, Px[N + n ̸= ∅]−1 ≤c(x)n, and, by Proposition 3.8, Ex[|N + k |] ≤c′(x) for all k ≥0. Therefore, the right-hand side of (7.25) can be made smaller than δ/3 by choosing h1 small. For the third summand in (7.23), note that {H(V ) ≥h2n} ⊂{N + h2n ̸= ∅}, and thus (7.27) Px  |T| > qn2, H(V ) > h2n N + n ̸= ∅  ≤ 1 Px[N + n ̸= ∅]Px[N + h2n ̸= ∅], which can be made arbitrarily small by using Theorem 2.1 and choosing h2 large. Finally, we show (7.24). Recall the notation from (7.2). Since V is a uniformly chosen vertex of T, for arbitrary q, h1, h2, n, R > 0 and ε ∈(0, 1), on the event {|T| ≥qn2} it holds that Px  H(V ) ∈[h1n, h2n], V is (η, R)-bad σ(T, φ)  = 1 |T| h2n X k=h1n |N +,(η,R) k | = 1 |T| h2n X k=h1n |N +,(η,R) k | |N + k | |N + k | ≤ε + 1 qn2 h2n X k=h1n 1{|N+,(η,R) k |/|N+ k |≥ε}|N + k |. (7.28) As a consequence, the probability in (7.24) satisfies Px  |T| > qn2, H(V ) ∈[h1n, h2n], V is (η, R)-bad N + n ̸= ∅  ≤ε + 1 qn2 h2n X k=h1n Ex  1{|N+,(η,R) k |/|N+ k |≥ε}|N + k | N + n ̸= ∅  . (7.29) By the Cauchy–Schwarz inequality, every summand in (7.29) is bounded by (7.30) Ex  |N + k |2 N + n ̸= ∅ 1/2Px  |N +,(η,R) k |/|N + k | > ε, N + k ̸= ∅ N + n ̸= ∅ 1/2. By Theorem 2.1 and Proposition 3.8, Ex[|N + k |2 | N + n ̸= ∅]1/2 ≤c(x)n. The second term satisfies, (7.31) Px h|N +,(η,R) k | |N + k | > ε, N + k ̸= ∅ N + n ̸= ∅ i ≤Px h|N +,(η,R) k | |N + k | > ε N + k ̸= ∅ iPx[N + k ̸= ∅] Px[N + n ̸= ∅]. SCALING LIMITS FOR THE CRITICAL GFF 38 By Theorem 2.1, for k ≥h1n, the fraction on the right-hand side can be bounded by a constant c(x). Noting also that |N +,(η,R) k | = 0 for R > k, we obtain that (7.29) is bounded by (7.32) ε + 1 qn2c(x)n2(h2 −h1) sup k≥R Px h|N +,(η,R) k | |N + k | > ε N + k ̸= ∅ i1/2 . By Proposition 7.1, the supremum here converges to 0 as R →∞. Since ε > 0 is arbitrary, this shows (7.24) and completes the proof. □ We can now prove Proposition 7.2. The proof resembles the one of Proposition 6.21 in [Pow19], with modifications required to account for the unbounded domain in our setting. Proof of Proposition 7.2. It is enough to show the statement for k = 2. The general case where k > 2 is then obtained by a union bound. In the case k = 2, since the matrices are symmetric and the diagonal entries are zero, it is sufficient to control one off-diagonal entry. To this end, for ε > 0 let Gn be the event (7.33) Gn :=  |C−1 1 (DH n )1,2 −(D ¯S n)1,2| > ε . 
In order to bound the probability of $G_n$, we will define events $A_n(\delta)$ (see (7.38)) satisfying $P_x[A_n(\delta)\mid N_n^+\neq\emptyset]\le\delta$ and show that, for $n$ large enough and a suitable choice of $\delta_n\to 0$, $G_n$ is a subset of $A_n(\delta_n)$. This will imply the statement of the proposition. In order to define the events $A_n(\delta)$ for $\delta>0$, we first fix $M=M(\delta)\ge 1$ so that

(7.34) $\lim_{n\to\infty} P_x[N_{Mn}^+\neq\emptyset \mid N_n^+\neq\emptyset]\le \delta/3$,

which is possible by Theorem 2.1. We then set

(7.35) $\eta=\eta(\delta):=\varepsilon/(4M)$,

and, using Lemma 7.4 and the independence of $\tau^1$ and $\tau^2$, we fix $R=R(\delta)$ so that

(7.36) $\lim_{n\to\infty} P_x\bigl[v_{\tau^1}\text{ or }v_{\tau^2}\text{ is }(\eta,R)\text{-bad}\bigm| N_n^+\neq\emptyset\bigr]\le \delta/3$.

Next, we fix $K=K(\delta)$ such that

(7.37) $\lim_{n\to\infty} P_x\bigl[\sup\{\chi(\varphi_u):u\in\cup_{k=0}^{R}N_k^+\}\ge K\bigm| N_n^+\neq\emptyset\bigr]\le\delta/3$.

This is possible since the event in (7.37) is $\mathcal F_R$-measurable, and thus, by Proposition 5.2, the left-hand side of (7.37) equals $Q_x[\sup\{\chi(\varphi_u):u\in\cup_{k=0}^{R}N_k^+\}\ge K]$ (with $Q_x$ as in Section 3.2). It is then straightforward to choose $K$ large enough so that (7.37) is satisfied. Finally, we define the events $A_n$ by

(7.38) $A_n=A_n(\delta)=\{N_{Mn}^+\neq\emptyset\}\cup\{v_{\tau^1}\text{ or }v_{\tau^2}\text{ is }(\eta,R)\text{-bad}\}\cup\bigl\{\sup\{\chi(\varphi_u):u\in\cup_{k=0}^{R}N_k^+\}\ge K\bigr\}$.

By the choice of $M$, $\eta$, $R$, and $K$, it holds that $\lim_{n\to\infty}P_x[A_n(\delta)\mid N_n^+\neq\emptyset]\le\delta$. Therefore, for sequences $\delta_n$ converging to zero sufficiently slowly, it also holds that

(7.39) $\lim_{n\to\infty}P_x[A_n(\delta_n)\mid N_n^+\neq\emptyset]=0$.

Next, we show that $G_n\subseteq A_n(\delta_n)$ for a well-chosen $\delta_n$ and large $n$. Let $B(\delta)=\{H(v_{\tau^1}\wedge v_{\tau^2})>R\}$. We will show that for large enough $n$ and a suitable choice of $\delta_n$,

(7.40) $G_n\cap A_n^c(\delta_n)\cap B(\delta_n)=\emptyset$ and $G_n\cap A_n^c(\delta_n)\cap B^c(\delta_n)=\emptyset$.

From this, it easily follows that $G_n\cap A_n^c(\delta_n)=\emptyset$, and thus $G_n\subseteq A_n(\delta_n)$. Then, if $\delta_n\to0$ sufficiently slowly, by (7.39),

(7.41) $\lim_{n\to\infty}P_x[G_n\mid N_n^+\neq\emptyset]\le\lim_{n\to\infty}P_x[A_n(\delta_n)\mid N_n^+\neq\emptyset]=0$,

which is sufficient to prove the statement of the proposition.

It remains to show the existence of $\delta_n$ such that (7.39) and (7.40) hold. By definition, on $A_n^c$ it holds that

(7.42) $H(v_{\tau^i})\le Mn$ and $\bigl|\bar S(v_{\tau^i})/H(v_{\tau^i})-C_1^{-1}\bigr|\le\eta$ for $i=1,2$.

Thus, on $A_n^c\cap B$ it holds that $R<H(v_{\tau^1}\wedge v_{\tau^2})\le Mn$. Since $v_{\tau^1}$ is not $(\eta,R)$-bad, $\bigl|\bar S(v_{\tau^1}\wedge v_{\tau^2})/H(v_{\tau^1}\wedge v_{\tau^2})-C_1^{-1}\bigr|\le\eta$. Together with (7.42), the definition of the matrices in (7.13), and the definition of $\eta$, this implies

(7.43) $\bigl|(D_n^{\bar S})_{1,2}-C_1^{-1}(D_n^{H})_{1,2}\bigr|\le \tfrac1n\,4Mn\,\eta\le\varepsilon$,

and thus $G_n\cap B\cap A_n^c=\emptyset$, proving the first half of (7.40). Next, on the event $B^c\cap A_n^c$ it holds that $H(v_{\tau^1}\wedge v_{\tau^2})\le R$ and $\sup\{\chi(\varphi_u):u\in\cup_{k\le R}N_k^+\}<K$. Therefore, $\bar S(v_{\tau^1}\wedge v_{\tau^2})\le KdR$, and as a consequence $\bigl|\bar S(v_{\tau^1}\wedge v_{\tau^2})-C_1^{-1}H(v_{\tau^1}\wedge v_{\tau^2})\bigr|\le KdR+RC_1^{-1}$. We choose a sequence $\delta_n$ converging to $0$ sufficiently slowly so that (7.39) is satisfied and $K(\delta_n)dR(\delta_n)+1+R(\delta_n)C_1^{-1}\le\varepsilon n$, and thus

(7.44) $\bigl|(D_n^{\bar S})_{1,2}-C_1^{-1}(D_n^{H})_{1,2}\bigr|\le\dfrac{\varepsilon}{2}+\dfrac{K(\delta_n)dR(\delta_n)+R(\delta_n)C_1^{-1}}{n}\le\varepsilon$

for $n$ large. Hence, $G_n\cap B^c(\delta_n)\cap A_n^c(\delta_n)=\emptyset$ for such $n$, completing the proof of (7.40). □

8. Proof of Theorem 2.3

In this section, we prove Theorem 2.3, relying on Lemmas 8.1 and 8.2. The argument follows the framework of random metric measure spaces, with key results originating from [GPW09] and [ADH13]. The proof closely mirrors that of Theorem 1.1 in [Pow19, Section 6.3.2], which itself builds on the aforementioned results. Recall from Section 2 that $e$ denotes a Brownian excursion conditioned to reach height $1$. Also define $\hat e$ to be $(\sigma C_1)$ times a Brownian excursion conditioned to reach height $(C_1\sigma)^{-1}$, where the constants $\sigma$ and $C_1$ are given in (6.5) and (2.11), and write $(T_{\hat e}, d_{\hat e})$ for the real tree with contour function $\hat e$. By Brownian scaling, $(T_{\hat e}, d_{\hat e})$ is isometrically equivalent to $(T_e, d_e)$.
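For completeness, we sketch the short scaling computation behind the last sentence; recall that the real tree coded by a continuous excursion $g$ carries the distance $d_g(s,t)=g(s)+g(t)-2\min_{u\in[s\wedge t,\,s\vee t]}g(u)$. Let $a:=\sigma C_1$ and let $e'$ be a Brownian excursion conditioned to reach height $a^{-1}$, so that $\hat e=a\,e'$. By the scaling invariance of Brownian motion, $\tilde e(t):=a\,e'(t/a^{2})$ is a Brownian excursion conditioned to reach height $a\cdot a^{-1}=1$, that is, $\tilde e\overset{d}{=}e$. Moreover,
\[
d_{\tilde e}(s,t)=\tilde e(s)+\tilde e(t)-2\min_{u\in[s\wedge t,\,s\vee t]}\tilde e(u)=d_{\hat e}\bigl(s/a^{2},\,t/a^{2}\bigr)\qquad\text{for all } s,t\ge0,
\]
so the two contour functions differ only by an increasing time change, which does not affect the coded metric space. Hence $(T_{\hat e},d_{\hat e})$ is isometric to $(T_{\tilde e},d_{\tilde e})$, which has the law of $(T_e,d_e)$.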
Therefore, to show Theorem 2.3, it is enough to prove that if (Tx,n, dx,n) is a random metric space whose law coincides with that of (Ch∗ o ∩T+, d/n) under Px[·|N + n ̸= ∅] then (8.1) (Tx,n, dx,n) →(Tˆe, dˆe) as n →∞ in distribution with respect to the Gromov-Hausdorff topology. It will prove useful to additionally equip (Tx,n, dx,n) and (Tˆe, dˆe) with measures and work in the space of metric measure spaces which we will call X. For this purpose, we equip (Tx,n, dx,n) with the measure µx,n which is the uniform measure among the vertices in Ch∗ o ∩T+. We equip (Tˆe, dˆe) with the measure µˆe which is the uniform measure on [0, ˆτ) and ˆτ is the length of ˆe. We write P n x for the law of (Tx,n, dx,n, µx,n) and Pˆe for the law of (Tˆe, dˆe, µˆe), and use En x and Eˆe for the corresponding expectations. Lemma 8.1. For fixed x ≥h∗, the family (Tx,n, dx,n, µx,n)n≥0 is tight with respect to the Gromov-Hausdorff-Prokhorov metric. SCALING LIMITS FOR THE CRITICAL GFF 40 Proof. Fix x ≥h∗. Our aim is to show that for every ε > 0 there exists a relatively compact subset K ⊂X (with respect to the Gromov-Hausdorff-Prokhorov metric) such that (8.2) inf n≥0 P n x [(Tx,n, dx,n, µx,n) ∈K] ≥1 −ε. To do this, we define the relatively compact set (where the relative compactness follows from Theorem 2.6 in [ADH13]) (8.3) KR,M =  (X, r, µ) ∈X µ(X) = 1, diam(X) ≤2R, for all k ≥1, X can be covered by fewer than 24kM balls of radius 2−k  and show that for ε > 0 there are R(ε) and M(ε) such that for K = KR,M, (8.2) is satisfied. Define the events Ax,n(R) = {diam(Tx,n) ≤2R} and (8.4) Bδ x,n(M) = {Tx,n can be covered with < δ−4M balls of radius δ}. (8.5) Then, since µx,n(Tx,n) = 1, {(Tx,n, dx,n, µx,n) ∈KR,M} = Ax,n(R) ∩(∩k≥1B2−k x,n (M)), and thus P n x [(Tx,n, dx,n, µx,n) ̸∈KR,M] ≤P n x [Ax,n(R)c]+P k≥1 P n x [B2−k x,n (M)c ∩Ax,n(R)]. To see (8.2), it is therefore enough to show that for M = M(ε) and R = R(ε) large enough (8.6) P n x [Ax,n(R)c] ≤ε/2 and P n x [Bδ x,n(M)c ∩Ax,n(R)] ≤δε/2, for all δ > 0 and n ≥0. We will first find a R(ε) such that the first equation of (8.6) is satisfied. Recall that the law of (Tx,n, dx,n) under P n x is that of (T, d/n) under Px[ · |N + n ̸= ∅]. Thus, P n x [diam(Tx,n) ≥ 2R] ≤Px[|N + Rn| > 0|N + n ̸= ∅], which by Theorem 2.1, is smaller than ε/2 for R(ε) large enough. For such R(ε) the first part of (8.6) is valid. Next, we will find a M(ε) such that also the second equation of (8.6) is met. We will distinguish thereby between the cases where nδ < 1 and nδ ≥1. Assume first that nδ < 1. Then, every δ-ball around a vertex v ∈Tx,n only contains the vertex v itself. This means that in this case Tx,n can be covered with < δ−4M balls of radius δ if and only if the cardinality of Tx,n is < δ−4M. Therefore (8.7) P n x [Bδ x,n(M)c ∩Ax,n(R)] ≤Px[|T| ≥δ−4M|N + n ̸= ∅] ≤Px[|T| ≥δ−4M] Px[N + n ̸= ∅] . Using Theorem 2.1 and Proposition 3.3 this can be further bounded from above by (8.8) C δ2n √ M ≤C δ √ M , for some constant C, where we also used that by assumption δn < 1. Choosing some M = M(ε) ≥(2C/ε)2, the second equation of (8.6) is true for all n, δ with nδ < 1. Next we consider the case nδ ≥1. Set bn,δ = ⌊nδ⌋and define the sets of vertices Vj = {v ∈N + jbn,δ : ∃w ∈N + (j+1)bn,δ with v ≺w} for j ≥0. We will show that on (8.9) {diam(Tx,n) ≤2R} ∩ 2Rn/bn,δ−1 \ j=0  |Vj| ≤Mbn,δ 2Rnδ4  (Tx,n, dx,n) can be covered with fewer than δ−4M balls of radius δ. Specifically, Tx,n is covered by {Bδ(v)}v∈V , where V = ∪ 2Rn/bn,δ−1 j=0 Vj. 
To see this, note that since on {diam(Tx,n) ≤2R} the height of Tx,n is trivially bounded by 2Rn and thus, for every SCALING LIMITS FOR THE CRITICAL GFF 41 v ∈Tx,n there is a w ∈Vj for some 0 ≤j ≤2Rn/bn,δ −1 so that dx,n(v, w) ≤δ. Further, it is clear that on (8.9) the cardinality of V is bounded by (2Rn/bn,δ)(Mbn,δ/2Rnδ4) = δ−4M. This shows the claimed covering property. To prove the second part of (8.6), it thus suffices to show that for M = M(R, ε) large enough, (8.10) Px h 2Rn/bn,δ−1 [ j=0  |Vj| > Mbn,δ 2Rnδ4 N + n ̸= ∅ i ≤δε/2. By conditioning on Fjbn,δ (recalling the definition of Fn from (3.20)) and using Theo- rem 2.1 and Proposition 3.8, Ex  |Vj|  = Ex  Ex  |Vj| Fjbn,δ  = Ex h X v∈N+ jbn,δ Pφv[N + bn,δ ̸= ∅] i (8.11) = Ex h X v∈N+ jbn,δ Cχ(φv)b−1 n,δ(1 + εbn,δ(φv)) i ≤q(x)b−1 n,δ (8.12) for some function q(x) independent of n and δ. Therefore, by conditioning on {N + n ̸= ∅} and using the Markov inequality and Theorem 2.1, (8.13) Px h |Vj| > Mbn,δ 2Rnδ4 N + n ̸= ∅ i ≤2Rnδ4 Mbn,δ Ex[|Vj|] Px[N + n ̸= ∅] ≤q′(x)2Rn2δ4 Mb2 n,δ , where q′ is some other function independent of n and δ. This means that by a union bound, the left-hand side of (8.10) is bounded from above by (8.14) q′(x)2Rn2δ4 Mb2 n,δ (2Rn/bn,δ) = q′(x)4R2n3δ4 Mb3 n,δ = q′(x)4R2δ M  nδ ⌊nδ⌋ 3 . Using that for x ≥1, 1 ≤x/⌊x⌋≤2, we see that by choosing M ≥64R2q′(x)/ε, the right-hand side of (8.14) is bounded by δε/2. For such choices of M, (8.10) is satisfied. Finally, first choosing R as described above and then M as the maximum of the two M-values obtained for the cases nδ < 1 and nδ ≥1, gives (R, M) such that (8.6) is satisfied. For such (R, M), (8.2) holds true for K = KR,M and the proof is finished. □ Lemma 8.2. For fixed x ≥h∗, (8.15) (Tx,n, dx,n, µx,n) →(Tˆe, dˆe, µˆe) as n →∞ in distribution with respect to the Gromov-Prokhorov metric. Proof. By Corollary 3.1 in [GPW09], the convergence (8.15) holds true if and only if (a) The family {P n x }n≥0 is relatively compact in the space of probability measures on X (with respect to the Gromov-weak topology). (b) For every function Ψ : X →R of the form, (8.16) Ψ((X, r, µ)) = Z ψ((r(xi, xj))1≤i<j≤k)µ⊗k(d(x1, . . . , xk)), where ψ : [0, ∞)(k 2) →R is a continuous and bounded function, En x[Ψ] →Eˆe[Ψ] as n →∞. Since convergence in the Gromov-Hausdorff-Prokhorov sense implies convergence in the Gromov-weak sense, part (a) is implied by Lemma 8.1 and only part (b) is left to be shown. SCALING LIMITS FOR THE CRITICAL GFF 42 To see (b), fix a function Ψ as described in (8.16) and recall the definition of DH n from (7.13). Note that dx,n(v, w) = n−1(H(v) + H(w) −2H(v ∧w)), and thus (8.17) En x[Ψ] = Ex  ψ(DH n ) N + n ̸= ∅  . By Proposition 7.2, (8.18) lim n→∞ Ex  ψ(DH n ) N + n ̸= ∅  −Ex  ψ(D ¯S n/C−1 1 ) N + n ̸= ∅  = 0. Assume for now that also (8.19) lim n→∞ Ex  ψ(D ¯S n/C−1 1 ) N + n ̸= ∅  −Ex  ψ(D ¯S n/C−1 1 ) sup m ¯Sm ≥C−1 1 n  = 0. By Proposition 6.10, limn→∞Ex[ψ(D ¯S n/C−1 1 )| supm ¯Sm ≥C−1 1 n] = Eˆe[Ψ], and thus in combination with (8.17), (8.18) and (8.19), it follows limn→∞En x[Ψ] = Eˆe[Ψ], concluding the proof. It remains to prove (8.19). Since ψ is bounded, it is enough to show that (8.20) lim n→∞Px  N + n ̸= ∅ sup m ¯Sm ≥C−1 1 n  = 1 and lim n→∞Px  sup m ¯Sm ≥C−1 1 n N + n ̸= ∅  = 1. Using that P[A|B] = P[B|A]P[A]/P[B], it is enough to show that (8.21) lim n→∞Px[sup m ¯Sm ≥C−1 1 n|N + n ̸= ∅] = 1 and lim n→∞ Px[supm ¯Sm ≥C−1 1 n] Px[N + n ̸= ∅] = 1. 
Observe that for every δ > 0, Px  sup m ¯Sm ≥C−1 1 n N + n ̸= ∅  ≥Px  sup m ¯Sm ≥C−1 1 n |N + ⌊n(1+δ)⌋| > 0 Px[|N + ⌊n(1+δ)⌋| > 0] Px[N + n ̸= ∅] . (8.22) By Proposition 7.1, the first term in the product on the right-hand side converges to 1 as n →∞, and the second term converges to (1 + δ)−1 by Theorem 2.1. This shows the first convergence of (8.21). By Theorem 2.1, Px[N + n ̸= ∅] ∼χ(x)C1n−1 as n →∞. Therefore, we are left to prove that (8.23) lim n→∞ Px[supm ¯Sm ≥C−1 1 n] χ(x)C1n−1 = 1 to conclude the second limit of (8.21). We will do this in a similar way as, e.g., in [LG05, Section 1.4, p. 263]. Consider the depth-first traversal (v1, v2, . . . ) of the sequence of trees (T 1, T 2, . . . ). We set S′ n = P w∈Y (vn) χ(φw) and Si = supn S′ n1vn∈T i for i ≥1 (so that supn ¯Sn = S1). Recall that by Proposition 6.8, (8.24) lim n→∞ S′ ⌊nt⌋ √n , Λ⌊nt⌋ √n  t≥0 =  σ|Bt|, σ χ(x)L0 t  t≥0, where the limit is in Px-distribution with respect to the Skorokhod topology. Defining γr = inf{t ≥0 : L0 t > r} and τn = inf{t ≥0 : n−1Λ⌊n2t⌋> 1} and using Brownian scaling, this gives the joint convergence (8.25) lim n→∞ 1 nS′ ⌊n2t⌋  t≥0, τn  = σ|Bt|  t≥0, γχ(x)/σ  , SCALING LIMITS FOR THE CRITICAL GFF 43 and therefore also (8.26) lim n→∞ 1 nS′ ⌊n2(t∧τn)⌋  t≥0 = σ|Bt∧γχ(x)/σ|  t≥0. From this we deduce that for every y > 0, as n →∞, (8.27) lim n→∞Px h sup 1≤i≤n Si > ny i = Px h sup t≤γχ(x)/σ σ|Bt| > y i = 1 −exp  −χ(x) y  , where the last equality is a consequence of excursion theory for Brownian motion (see e.g. Chapter XII in [RY99]). By the independence of the trees T i (and thus the variables Si), we also have that (8.28) Px h sup 1≤i≤n Si > ny i = 1 − 1 −Px  S1 > ny n = 1 −  1 −Px h sup m ¯Sm > ny in . Combining (8.27) and (8.28) then gives (8.29) Px h sup m ¯Sm > ny i ∼χ(x) y n−1 as n →∞, which by setting y = C−1 1 concludes (8.23) and finishes the proof. □ We now present the proof of Theorem 2.3, which is identical to that of Theorem 1.1 in [Pow19]. Proof of Theorem 2.3. Since convergence in the Gromov-Hausdorff-Prokhorov metric im- plies convergence in the Gromov-Prokhorov metric, Lemma 8.2 characterizes subsequen- tial limits with respect to the Gromov-Hausdorff-Prokhorov topology. Hence, we obtain the convergence in distribution (8.30) (Tx,n, dx,n, µx,n) →(Tˆe, dˆe, µˆe) as n →∞ with respect to the Gromov-Hausdorff-Prokhorov metric. Now, since Gromov-Hausdorff- Prokhorov convergence further implies convergence in the Gromov-Hausdorff sense, claim (8.1) follows, completing the proof. □ Appendix A. Properties of χ We prove here that χ is Lipschitz continuous on [h∗, ∞), as stated in (3.19). The proof uses similar ideas as the proof of Proposition 3.1 of [AˇC20]. Proof of (3.19). We will show that the derivative χ′ is bounded on [h∗, ∞). Recall that for x ≥h∗it holds χ(x) = d R [h∗,∞) χ(z)ρY (z −x/d) dz, where ρY is the density of the Gaussian random variable Y . Differentiating this expression, using an integration by parts and the fact that χ′(z) = 0 for z < h∗, we obtain that for x ≥h∗ χ′(x) = − Z ∞ h∗χ(z)ρ′ Y  z −x d  dz = Z [h∗,∞) χ′(z)ρY  z −x d  dz + χ(h∗)ρY  h∗−x d  =: EY h χ′ Y + x d i + e(x). (A.1) Defining e(x) = 0 for x < h∗, using that χ is increasing and thus χ′ ≥0, this equality can be extended to inequality (A.2) χ′(x) ≤EY h χ′ Y + x d i + e(x) for all x ∈R. SCALING LIMITS FOR THE CRITICAL GFF 44 Similarly as in the proof of Proposition 3.1 in [AˇC20], we obtain an upper bound on χ′ by iterating (A.2) an appropriate amount of times. For this, let Y1, . . . 
, Yk be independent Gaussian random variables having the same distribution as Y , and define Zi = Y1/di−1 + Y2/di−2 + · · · + Yi. Note that Zi is a centred Gaussian random variable whose variance, denoted σ2 i , is bounded uniformly in i. Then, by applying (A.2) k-times, χ′(x) ≤EY1 h EY2 h χ′ Y2 + Y1 + x/d d  + e(Y1 + x/d) + e(x) ii ≤· · · ≤E h χ′ x dk + Y1 dk−1 + Y2 dk−2 + · · · + Yk i + E h k−1 X i=0 e  x di + Zi i ≤E h χ′ x dk + Zk i + k−1 X i=0 E h e  x di + Zi i . (A.3) We now choose k = k(x) = ⌊logd(x)⌋. Then the boundedness of the first summand can be proved exactly in the same way as the boundedness of E[χ(x/dk + Zk)] in the proof of Proposition 3.1 in [AˇC20] (note that (d −1) in [AˇC20] corresponds to d in our setting), one only needs to verify that χ′ ∈L2(ν). This can be easily proved using (A.1) and the facts ρ′ Y (x) = −cxρY (x), L[χ] = χ and (3.18) which imply |χ′(x)| ≤cL[χ2](x) + cx2 for x ≥h∗. From this χ′ ∈L2(ν) follows from Proposition 3.2(b). To bound the second summand on the right hand side of (A.3), we first observe that by the choice of k(x) it holds that x0 := x/dk(x) ∈[1, d]. With this notation, by rearranging the sum, (A.4) k(x)−1 X i=0 E[e(x/di + Zi)] = k(x)−1 X i=0 E[e(x0di + Zk(x)−i)]. Moreover, it is easy to see that for some constants c, c′ (A.5) 0 ≤e(x) := 1x≥h∗χ(h∗)ρY  h∗−x d  ≤c′1x≥h∗e−cx. Therefore, (A.6) k(x)−1 X i=0 E[e(x0di + Zk(x)−i)] ≤c′ ∞ X i=0 E  e−c(x0di+Zk(x)−i) ≤c′ ∞ X i=0 e−cx0diec2σ2 k(x)−i/2, which, since σ2 i are bounded and x0 ∈[1, d], is clearly bounded uniformly in x ≥h∗. □ Appendix B. Proof of the many-to-few formulas We give here a proof of Proposition 3.5. Throughout the proof we use the notation from Section 3.2. We start by introducing a useful standard martingale related to φ when viewed as branching process. Lemma B.1. For x ∈R, n ∈N, let (B.1) ζ(x, n) := dnχ(x) and ζn := ζ(ξ1 n, n). Then, for every k ∈N, (ζn)n∈N is a P k x -martingale with respect to Fk n. Proof. By definition of ζn, for n ≥1, (B.2) d−(n−1)Ek x[ζn | Fk n−1] = dEk x[χ(φσ1n) | Fk n−1], SCALING LIMITS FOR THE CRITICAL GFF 45 where σ1 n is the vertex carrying the first spine mark at level n. Conditionally on Fk n−1, σ1 n is uniformly distributed on desc(σ1 n−1) and independent of φ. Therefore, this equals (B.3) X v∈desc(σ1 n−1) Ek x[χ(φv) | Fk n−1] = L[χ](φσ1 n−1) = χ(φσ1 n−1) = χ(ξ1 n−1). where in for the first equality we used (3.7) and the fact that χ(x) = 0 on (−∞, h∗), and where the second equality follows from (3.10). The martingale property of ζn then follows directly from this computation. □ Due to Lemma B.1, Lemma 8 of [HR17] can directly be applied to our process. We restate it here for reader’s convenience, with very minor adaptations coming from the fact that in our process every node has always d descendants (some of them might be in the cemetery state). Lemma B.2 (Lemma 8 in [HR17]). For any k ≥1, let Y be a Fk n-measurable random variable which can be written as (B.4) Y = X v1,...,vk∈S+ n Y (v1, . . . , vk)1{σ1n=v1,...,σkn=vk}, where, for every v1, . . . , vk ∈S+ n , Y (v1, . . . , vk) is a Fn-measurable random variable. Then (B.5) Ex h X v1,...,vk∈N+ n Y (v1, . . . , vk) i = Qk x h Y Y v∈skel(n)\{o} ζ(φp(v), |v| −1) ζ(φv, |v|) dlp(v) i . We are now ready to give the proof of Proposition 3.5. Proof of Proposition 3.5. Statement (3.26) follows directly from Lemma B.2 with k = 1 and Y (v1) = f(φv1). Indeed, since k = 1, there is only one mark on every level and thus Y = f(ξ1 n), skel(n) \ {o} = {σ1 1, . . . , σ1 n}, and lp(v) = 1 for all v ∈skel(n) \ {o}. 
Therefore, by (B.5) Ex h X v∈N+ n f(φv) i = Qx  f(ξn) Y v∈{σ1,...,σn} ζ(φp(v), |v| −1) ζ(φv, |v|) d  = Qx  f(ξn) n Y i=1 χ(ξi−1)di−1 χ(ξi)di d  = Qx  f(ξ1 n) χ(x) χ(ξn)  , (B.6) as claimed in (3.26). Similarly, statement (3.27) is a consequence of Lemma B.2 with k = 2. We set Y (v1, v2) = f(φv1)g(φv2). Then Y = f(ξ1 n)g(ξ2 n) and by (B.5), (B.7) Ex h X v,w∈N+ n f(φv)g(φw) i = Q2 x  f(ξ1 n)g(ξ2 n) Y v∈skel(n)\{o} ζ(φp(v), |v| −1) ζ(φv, |v|) dlp(v)  . To consider the possible structures of skel(n) \ {o}, set s = max{k : σ1 k = σ2 k} to be the last time where the two spines agree. Note that due to the dynamics of marks under Q2 x, (B.8) Q2 x[s = k] = ( (d −1)d−(k+1), if k ∈{0, . . . , n −1}, d−n, if k = n, SCALING LIMITS FOR THE CRITICAL GFF 46 and, by a simple computation, Y v∈skel(n)\{o} ζ(φp(v), |v| −1) ζ(φv, |v|) dlp(v) = Y v∈skel(n)\{o} χ(φp(v)) χ(φv) 1 d  dlp(v) = χ(ξ1 s)χ(x) χ(ξ1 n)χ(ξ2 n)ds. (B.9) Therefore, (B.7) can be written as Ex  X v∈N+ n f(φv) X v∈N+ n g(φv)  = n X k=0 dkQ2 x[s = k]Q2 x  f(ξ1 n)g(ξ2 n) χ(ξ1 k)χ(x) χ(ξ1 n)χ(ξ2 n) s = k  = d −1 d n−1 X k=0 Q2 x f(ξ1 n) χ(ξ1 n) g(ξ2 n) χ(ξ2 n)χ(ξ1 s)χ(x) s = k  + Q2 x  f(ξ1 n)g(ξ1 n) χ(x) χ(ξ1 n) s = n  . (B.10) By construction, under Q2 x, conditional on s = k, for times i = 1, . . . , k the processes ξ1 i and ξ2 i are Markov chains which follow the same trajectory and have the same dynamics as ξi under Q1 x. Further, for later times i = k + 1, . . . , n, they are independent Markov chains distributed according to Qξ1 k. Therefore, (B.11) Q2 x hf(ξ1 n) χ(ξ1 n) g(ξ2 n) χ(ξ2 n)χ(ξ1 s)χ(x) s = k i = χ(x)Qx  χ(ξk)Q2 ξ1 k hf(ξ1 n−k) χ(ξ1 n−k) i Q2 ξ1 k h g(ξ2 n−k) χ(ξ2 n−k) i , and similarly (B.12) Q2 x  f(ξ1 n)g(ξ1 n) χ(x) χ(ξ1 n) s = n  = χ(x)Qx hf(ξn)g(ξn) χ(ξn) i . Combining (B.10)–(B.12) directly implies (3.27). □ References [AˇC20] Angelo Ab¨acherli and Jiˇr´ı ˇCern´y, Level-set percolation of the Gaussian free field on regular graphs I: regular trees, Electron. J. Probab. 25 (2020), Paper No. 65, 24. MR4115734 [ADH13] Romain Abraham, Jean-Fran¸cois Delmas, and Patrick Hoscheit, A note on the Gromov- Hausdorff-Prokhorov distance between (locally) compact metric measure spaces, Electron. J. Probab. 18 (2013), no. 14, 21. MR3035742 [Ald91] David Aldous, The continuum random tree. I, Ann. Probab. 19 (1991), no. 1, 1–28. MR1085326 [BDIM23] Dariusz Buraczewski, Congzao Dong, Alexander Iksanov, and Alexander Marynych, Critical branching processes in a sparse random environment, Mod. Stoch. Theory Appl. 10 (2023), no. 4, 397–411. MR4655407 [BFRS24] Florin Boenkost, F´elix Foutel-Rodier, and Emmanuel Schertzer, The genealogy of nearly critical branching processes in varying environment, Preprint, available at arXiv:2207.11612, 2024. [BLM87] Jean Bricmont, Joel L. Lebowitz, and Christian Maes, Percolation in strongly correlated sys- tems: the massless Gaussian field, J. Statist. Phys. 48 (1987), no. 5-6, 1249–1268. MR914444 [CD24] Zhenhao Cai and Jian Ding, One-arm probabilities for metric graph gaussian free fields below and at the critical dimension, Preprint, available at arXiv:2406.02397, 2024. [CD25] Zhenhao Cai and Jian Ding, One-arm exponent of critical level-set for metric graph Gaussian free field in high dimensions, Probability Theory and Related Fields 191 (2025), no. 3, 1035– 1120. MR4898098 [CKKM24] Guillaume Conchon-Kerjan, Daniel Kious, and C´ecile Mailler, Scaling limit of critical ran- dom trees in random environment, Electron. J. Probab. 29 (2024), Paper No. 112, 53. 
MR4779872
[ČL25] Jiří Černý and Ramon Locher, Critical and near-critical level-set percolation of the Gaussian free field on regular trees, 2025, pp. 746–767. MR4863059
[CN20] Alberto Chiarini and Maximilian Nitzschner, Entropic repulsion for the Gaussian free field conditioned on disconnection by level-sets, Probab. Theory Related Fields 177 (2020), no. 1-2, 525–575. MR4095021
[CR90] B. Chauvin and A. Rouault, Supercritical branching Brownian motion and K-P-P equation in the critical speed-area, Math. Nachr. 149 (1990), 41–59. MR1124793
[CTJP24] Natalia Cardona-Tobón, Arturo Jaramillo, and Sandra Palau, Rates on Yaglom's limit for Galton-Watson processes in a varying environment, ALEA Lat. Am. J. Probab. Math. Stat. 21 (2024), no. 1, 1–23. MR4703767
[DCGRS23] Hugo Duminil-Copin, Subhajit Goswami, Pierre-François Rodriguez, and Franco Severo, Equality of critical parameters for percolation of Gaussian free field level sets, Duke Math. J. 172 (2023), no. 5, 839–913. MR4568695
[DLG02] Thomas Duquesne and Jean-François Le Gall, Random trees, Lévy processes and spatial branching processes, Astérisque, no. 281, Société Mathématique de France, 2002. MR1954248
[DPR18] Alexander Drewitz, Alexis Prévost, and Pierre-François Rodriguez, The sign clusters of the massless Gaussian free field percolate on $\mathbb Z^d$, $d\ge3$ (and more), Comm. Math. Phys. 362 (2018), no. 2, 513–546. MR3843421
[DPR25] Alexander Drewitz, Alexis Prévost, and Pierre-François Rodriguez, Critical one-arm probability for the metric Gaussian free field in low dimensions, Probability Theory and Related Fields (2025).
[dR17] Loïc de Raphélis, Scaling limit of multitype Galton-Watson trees with infinitely many types, Ann. Inst. Henri Poincaré Probab. Stat. 53 (2017), no. 1, 200–225. MR3606739
[DRS14] Alexander Drewitz, Balázs Ráth, and Artëm Sapozhnikov, On chemical distances and shape theorems in percolation models with long-range correlations, J. Math. Phys. 55 (2014), no. 8, 083307, 30. MR3390739
[Dur19] Rick Durrett, Probability—theory and examples, fifth ed., Cambridge Series in Statistical and Probabilistic Mathematics, vol. 49, Cambridge University Press, Cambridge, 2019. MR3930614
[Fel71] William Feller, An introduction to probability theory and its applications. Vol. II, second ed., John Wiley & Sons, Inc., New York, 1971. MR0270403
[GLLP22] Ion Grama, Ronan Lauvergnat, and Émile Le Page, Limit theorems for critical branching processes in a finite-state-space Markovian environment, Adv. in Appl. Probab. 54 (2022), no. 1, 111–140. MR4397862
[GPW09] Andreas Greven, Peter Pfaffelhuber, and Anita Winter, Convergence in distribution of random metric measure spaces (Λ-coalescent measure trees), Probab. Theory Related Fields 145 (2009), no. 1-2, 285–322. MR2520129
[Gri99] Geoffrey Grimmett, Percolation, Grundlehren der Mathematischen Wissenschaften, vol. 321, Springer-Verlag, Berlin, 1999. MR1707339
[GRS22] Subhajit Goswami, Pierre-François Rodriguez, and Franco Severo, On the radius of Gaussian free field excursion clusters, Ann. Probab. 50 (2022), no. 5, 1675–1724. MR4474499
[HH07] J. W. Harris and S. C. Harris, Survival probabilities for branching Brownian motion with absorption, Electron. Comm. Probab. 12 (2007), 81–92. MR2300218
[HHKW22] Simon C. Harris, Emma Horton, Andreas E. Kyprianou, and Minmin Wang, Yaglom limit for critical nonlocal branching Markov processes, Ann. Probab. 50 (2022), no. 6, 2373–2408. MR4499840
[HR17] Simon C. Harris and Matthew I. Roberts, The many-to-few lemma and multiple spines, Ann. Inst. Henri Poincaré Probab. Stat. 53 (2017), no. 1, 226–242. MR3606740
[Kol38] Andrey Nikolaevich Kolmogorov, Zur Lösung einer biologischen Aufgabe, Comm. Math. Mech. Chebyshev Univ. Tomsk 2 (1938), no. 1, 1–12.
[LG05] Jean-François Le Gall, Random trees and applications, Probab. Surv. 2 (2005), 245–311. MR2203728
[LS86] Joel L. Lebowitz and H. Saleur, Percolation in strongly correlated systems, Phys. A 138 (1986), no. 1-2, 194–205. MR865243
[Mie08] Grégory Miermont, Invariance principles for spatial multitype Galton-Watson trees, Ann. Inst. Henri Poincaré Probab. Stat. 44 (2008), no. 6, 1128–1161. MR2469338
[Mod71] Charles J. Mode, Multitype branching processes. Theory and applications, Modern Analytic and Computational Methods in Science and Mathematics, no. 34, American Elsevier Publishing Co., Inc., New York, 1971. MR0279901
[MS83] S. A. Molchanov and A. K. Stepanov, Percolation in random fields. I, Teoret. Mat. Fiz. 55 (1983), no. 2, 246–256. MR734878
[MS22] Pascal Maillard and Jason Schweinsberg, Yaglom-type limit theorems for branching Brownian motion with absorption, Ann. H. Lebesgue 5 (2022), 921–985. MR4526243
[Pow19] Ellen Powell, An invariance principle for branching diffusions in bounded domains, Probab. Theory Related Fields 173 (2019), no. 3-4, 999–1062. MR3936150
[PR15] Serguei Popov and Balázs Ráth, On decoupling inequalities and percolation of excursion sets of the Gaussian free field, J. Stat. Phys. 159 (2015), no. 2, 312–320. MR3325312
[PS22] Christoforos Panagiotis and Franco Severo, Analyticity of Gaussian free field percolation observables, Comm. Math. Phys. 396 (2022), no. 1, 187–223. MR4499015
[RS13] Pierre-François Rodriguez and Alain-Sol Sznitman, Phase transition and level-set percolation for the Gaussian free field, Comm. Math. Phys. 320 (2013), no. 2, 571–601. MR3053773
[RY99] Daniel Revuz and Marc Yor, Continuous martingales and Brownian motion, third ed., Grundlehren der mathematischen Wissenschaften, vol. 293, Springer-Verlag, Berlin, 1999. MR1725357
[Szn16] Alain-Sol Sznitman, Coupling and an application to level-set percolation of the Gaussian free field, Electron. J. Probab. 21 (2016), 1–26. MR3492939
[Szn19] Alain-Sol Sznitman, On macroscopic holes in some supercritical strongly dependent percolation models, Ann. Probab. 47 (2019), no. 4, 2459–2493. MR3980925
[Yag47] A. M. Yaglom, Certain limit theorems of the theory of branching random processes, Doklady Akad. Nauk SSSR (N.S.) 56 (1947), 795–798. MR22045
SCALING LIMITS FOR THE CRITICAL LEVEL-SET PERCOLATION OF THE GAUSSIAN FREE FIELD ON REGULAR TREES

JIŘÍ ČERNÝ, RAMON LOCHER

Abstract. We continue the study of the level-set percolation of the discrete Gaussian free field (GFF) on regular trees in the critical regime, initiated in [ČL25]. First, we derive a sharp asymptotic estimate for the probability that the connected component of the critical level set containing the root of the tree reaches generation $n$. In particular, we show that the one-arm exponent satisfies $\rho=1$. Next, we establish a Yaglom-type limit theorem for the values of the GFF at generation $n$ within this component. Finally, we show that, after a correct rescaling, this component conditioned on reaching generation $n$ converges, as $n\to\infty$, to Aldous' continuum random tree.

1. Introduction

The Gaussian free field's level-set percolation, especially on $\mathbb Z^d$, is a significant model in percolation theory that is characterised by its long-range dependencies. Initial investigations into this model trace back to the 1980s, with the pioneering studies [BLM87, MS83, LS86]. Over the last decade, renewed interest has been ignited by the findings in [RS13], demonstrating that on $\mathbb Z^d$ the model undergoes a distinctive percolative phase transition at a critical threshold $h_*=h_*(d)$ for any dimension $d\ge3$. Follow-up research, including the papers [DRS14, PR15, DPR18, Szn19, CN20, GRS22, PS22], has provided a comprehensive understanding of the model's behaviour in both the subcritical and supercritical phases, often making use of additional natural critical points in order to work in a strongly sub-/super-critical regime. Notably, [DCGRS23] confirmed the alignment of these critical points with $h_*$, indicating a precise phase transition.

In this paper, we continue the study of level-set percolation of the discrete Gaussian free field (GFF) on regular trees in the critical regime, building on the work initiated in [ČL25]. Our main contributions are threefold: First, we derive a sharp asymptotic estimate for the one-arm probability. Second, we establish a Yaglom-type limit theorem for the field at vertices located at distance $n$ from the root. Finally, we show that the connected component of the critical level set, conditioned to be large, converges to Aldous' continuum random tree. These results are stated precisely in Theorems 2.1, 2.2, and 2.3, respectively.

The considered model was initially studied in [Szn16], where the critical value $h_*$ was identified in terms of the largest eigenvalue of a specific integral operator. Additionally, a comparison with random interlacements was employed to establish bounds on $h_*$, notably demonstrating that 0 6 in [CD25], and for the intermediate regime $3\le d\le6$ in [CD24]. Our remaining main results consider the critical component conditioned on being large, more precisely conditioned on the rare event $\{N_n^+\neq\emptyset\}$. The first such result is a Yaglom-type limit theorem for the GFF restricted to $N_n^+$.

Theorem 2.2. For $f\in L^2(\nu)$, $n\ge1$, and $x\ge h_*$, let $Z_n^{f,x}$, resp. $Z_n^f$, be a random variable distributed as $n^{-1}\sum_{v\in N_n^+}f(\varphi_v)$ under the conditional measure $P_x[\,\cdot\mid N_n^+\neq\emptyset]$, resp. $P[\,\cdot\mid N_n^+\neq\emptyset]$. Let further $Z$ be an exponential random variable with mean one. Then, with $C_1$ as in (2.11),

(2.13) $\lim_{n\to\infty}Z_n^{f,x}=\lim_{n\to\infty}Z_n^f=C_1^{-1}\langle\chi,f\rangle\, Z$ in distribution.

In the case of the critical single-type Galton-Watson process, results analogous to Theorem 2.2 trace back to the classical work of Yaglom [Yag47].
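The classical statement is easy to visualise numerically. The following minimal sketch (an illustration only, using Poisson(1) offspring, so offspring variance $\sigma^2=1$; it is not the tree model of this paper) exhibits both Kolmogorov's survival asymptotics $P[Z_n>0]\sim 2/(\sigma^2 n)$ [Kol38] and Yaglom's exponential limit, with mean $\sigma^2/2$, for $Z_n/n$ conditioned on survival [Yag47]:

```python
import numpy as np

rng = np.random.default_rng(0)

def generation_size(n, rng):
    """Generation-n size of a critical Galton-Watson process started
    from a single individual, with Poisson(1) offspring distribution."""
    z = 1
    for _ in range(n):
        if z == 0:
            break
        z = rng.poisson(z)  # sum of z i.i.d. Poisson(1) offspring counts
    return z

n, trials = 100, 20_000
samples = np.array([generation_size(n, rng) for _ in range(trials)])
survivors = samples[samples > 0]

print("n * P[Z_n > 0]       =", n * len(survivors) / trials)  # theory: 2
print("E[Z_n / n | Z_n > 0] =", survivors.mean() / n)         # theory: 1/2
```

A histogram of the values survivors / n likewise approximates the exponential density with mean 1/2.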
For more general branching processes with a finite type space, similar theorems are also well established; see, for instance, [Mod71, Theorem 10.1] and other foundational texts in the branching process literature. In recent years, Yaglom-type limits have been proven in various extended settings. These include critical non-local branching Markov processes [HHKW22], branching Brownian motion with absorption [MS22], and branching diffusions in bounded domains [Pow19]. Additionally, [GLLP22] establishes a Yaglom-type result for critical branching processes in a random Markovian environment with finite state space. Our third result concerns a scaling limit for the critical component Co ∩T+, viewed as a metric space, under the conditional law Px[ · |N + n ̸= ∅]. To this end, let (Tn,x, dn,x) be a random compact metric space whose law coincides with that of (Co ∩T+, n-1d) under Px[ · |N + n ̸= ∅] (recall that d denotes the distance on T). We show that the sequence (Tn,x, dn,x) converges in distribution to a conditioned Brownian continuum random tree (Te, de), whose contour function e is a Brownian excursion conditioned to reach height at least 1. Theorem 2.3. For every x ≥h∗, as n →∞, (2.14) (Tn,x, dn,x) →(Te, de) in distribution, with respect to the Gromov-Hausdorff topology. General scaling limits of critical Galton-Watson processes, in the spirit of Theorem 2.3, were first introduced in [Ald91]. A corresponding result for critical multi-type processes with finitely many types was established in [Mie08], and later extended to processes with a countably infinite type space in [dR17]. More recently, [CKKM24] extended the classical continuous random tree convergence to Galton-Watson trees evolving in a random environment, where each generation has a random offspring distribution with mean one and finite expected variance. In the context of critical branching diffusions, [Pow19] proves an invariance principle under the assumptions of a bounded domain, finite second moment of the offspring distribution, and an elliptic diffusion generator. However, for branching diffusions in general (unbounded) domains, analogous results are not yet available (see [Pow19, Question 1.8]). Remark 2.4. Theorems 2.2 and 2.3 hold without any further change if the conditioning therein is changed from Px[ · |N + n ̸= ∅] to Px[ · |Nn ̸= ∅]. For the sake of brevity, we refrain from providing detailed proofs of these results. We briefly discuss the organisation of this article. In Section 3, we collect relevant background on the GFF on regular trees along with the framework of branching processes with spines. Section 4 presents the proof of Theorem 2.1. In Section 5, we prove Theorem 2.2 and, in a dedicated subsection, establish additional results for the model conditioned on the event {N + n ̸= ∅}. Section 6 introduces an auxiliary martingale Sn and establishes scaling limit results for this martingale and some related processes. Section 7 focuses on SCALING LIMITS FOR THE CRITICAL GFF 5 the "height process" Hn, defined via the distance to the origin in a depth-first traversal of Co ∩T+, and investigates its connection to the martingale Sn. Finally, Section 8 concludes the article with the proof of Theorem 2.3, which combines topological arguments with the results from Sections 6 and 7. 3. Notation and useful results In this section we introduce the notation used throughout the paper and recall some known facts about the level set percolation of the Gaussian free field on trees. 
We then briefly present the formalism of branching processes with spines and apply it to our model. As already stated in the introduction, we use T to denote the (d+1)-regular tree, d ≥2, that is an infinite tree whose every vertex has exactly d + 1 neighbours. For two vertices v, w ∈T we use d(v, w) to denote the usual graph distance. The tree is rooted at an arbitrary fixed vertex o ∈T, ̄o ∈T denotes a fixed neighbour of o, and T+ stands for the forward tree, see (2.2). We set |v| = d(o, v) and write (3.1) Sn = {v ∈T : |v| = n}, S+ n = Sn ∩T+ for the spheres with radius n centred at o. For every v ∈T \ {o} we use p(v) to denote its parent in T, that is the only vertex on the geodesic path from v to o with |p(v)| = |v| -1. We write desc(v) for the set of direct descendants of v, and sib(v) = desc(p(v)) for the set of its siblings, including itself. Finally, if w is an ancestor of v, that is w lies on the geodesics from o to v, we write w ⪯v. Throughout the paper we use the usual notation for the asymptotic relation of two functions f and g: We will write f(s) ∼g(s) as s →∞if lims→∞f(s)/g(s) = 1, f(s) = o(g(s)) as s →∞if lims→∞|f(s)|/g(s) = 0, and f(s) = O(g(s)) as s →∞if lim sups→∞|f(s)|/g(s) n] = Cχ(x)n-1/2(1 + o(1)). In particular, for every x ∈R, (3.16) Px[|Co ∩T+| = ∞] = 0. We will also need the following three properties of χ, the first one is proved in Remark 2.5 of [ˇCL25], the second is a direct consequence of Proposition 3.1 in [AˇC20], and the last one is proved in Appendix A: x 7→χ(x) is non-decreasing, (3.17) c1x ≤χ(x) ≤c2x for all x ≥h∗and some c1, c2 ∈(0, ∞), (3.18) x 7→χ(x) is Lipschitz on [h∗, ∞). (3.19) 3.2. Branching processes with spines. We now recall the machinery of branching processes with spines which is frequently used in the theory of branching processes, and specialize it to our model, in order to study the critical component Co. We then state many-to-few formulas that express certain moments for the original branching process in terms of the dynamics along the spines, see Proposition 3.5 below. Later, in Section 5.1, we will see that the processes with spines can be used to describe the distribution of Co conditioned on being infinite. The content of this section is mostly based on [HR17]. The main idea of the machinery is to designate several lines of descent in the branching process, called spines. These spines are then used to introduce a certain change of measure, under which the vertices on the spine exhibit modified branching behaviour, while the non-spine vertices behave as in the original process. For the construction, we need some notation. For x ∈R and k ∈N, we first introduce measures P k x under which the behaviour of the field is the same as under Px, but in SCALING LIMITS FOR THE CRITICAL GFF 8 addition there are k distinguished spines. Formally, the measure P k x is the distribution of a (R ∪{†}) × {0, 1, . . . , k}-valued stochastic process (φv, lv)v∈T+. This process assigns to every v ∈T+ a field value φv ∈R ∪{†} (where † is a cemetery state) and a number lv ∈{0, . . . , k} that represents the number of spine marks on v. Under P k x , the law of (φv)v∈T+ is a straightforward modification of the original measure Px, the only difference is that we set φv = † for every v ̸∈Co. The random variables (lv)v∈T+ are independent of the field values (φv)v∈T+. The root node o has exactly k marks, lo = k. 
The remaining random variables (lv)v̸=o are constructed recursively as follows: If a node v ∈T+ carries j marks, then each of its j marks 'moves' to one of its d direct descendants independently uniformly at random. (Note that nodes in the cemetery state can carry marks.) As consequence, under P k x , in every generation n, there are exactly k marks present, that is P v∈S+ n lv = k. We use P k x also for the corresponding expectations. For i = 1, . . . , k, we denote by σi n the node that carries the i-th mark in generation n (that is |σi n| = n), and set ξi n = φσin to be its type. We also define skel(n) = {v ∈ T+ : |v| ≤n, lv ≥1} to be the set of nodes of generation at most n having at least one mark. We let Fn stand for the natural filtration of the branching process, and Fk n for the filtration containing in addition the information about the k spine marks, (3.20) Fn = σ(φv : v ∈T+, |v| ≤n) and Fk n = σ(φv, lv : v ∈T+, |v| ≤n). Any f : R →R is extended to R ∪{†} by setting f(†) = 0. Then, by definition of P k x , (3.21) Ex X v∈N+ n f(φv) = P k x X v∈S+ n f(φv) . We now define another measure Qk x, where the nodes without a spine mark behave as under P k x but the nodes with a mark have a modified branching behaviour: Under Qk x the movement of the marks, and thus the distribution of (lv)v∈T+, is exactly the same as under P k x : If a node v carries k marks, each of the marks is given to one of its d direct descendants independently uniformly at random. To describe the distribution of the field φ under Qk x, we first define a transition kernel (recall ρY from (3.6)) (3.22) K(x, dy) = dχ(y) χ(x)ρY y -x d dy, x ≥h∗, y ∈R. Note that, by (3.6) and Proposition 3.1, for every x ≥h∗, K(x, ·) is a probability measure with support [h∗, ∞). Conditionally on the marks (lv)v∈T+, the field (φv)v∈T+ under Qk x is recursively constructed by (3.23) (a) φo := x, (b) If v ̸= o and lv = 0, then φv = d-1φp(v) + Yv (as under P k x ). (c) If v ̸= o and lv ≥1, then φv is K(φp(v), ·)-distributed, independently of previous randomness. To simplify notation, we write Qx instead of Q1 x; in this case we also set σn = σ1 n and ξn = ξ1 n. Note that, unlike under P k x , under the measure Qk x the nodes in the cemetery state cannot carry any mark. Consequently, Qk x-a.s., there are nodes not in the cemetery state in every generation. By construction, under Qk x, the process (ξi n)n∈N recording the value of the field along the i-th spine is a Markov chain with transition kernel K. This chain never enters the cemetery state. The following lemma determines its invariant distribution. SCALING LIMITS FOR THE CRITICAL GFF 9 Lemma 3.4. The Markov chain (ξn)n∈N with the transition kernel K has a unique invariant distribution π given by (3.24) π(dx) = χ(x)2ν(dx). Proof. We first show that π is invariant for K. Comparing (3.6) and (3.22) yields that K(x, A) = L[1Aχ](x)/χ(x) for every x ≥h∗. Therefore, writing the action of K on π as inner product on L2(ν), using that L is self-adjoint and χ is its eigenfunction with eigenvalue 1 (see Proposition 3.1), (3.25) (πK)(A) = ⟨K(·, A), χ2⟩ν = ⟨L[1Aχ], χ⟩ν = ⟨1Aχ, L[χ]⟩ν = ⟨1Aχ, χ⟩ν = π(A), which shows that π is an invariant measure. The uniqueness follows from the irreducibility of (ξn)n∈N. □ Next, we state several moment formulas that are frequently used throughout the paper. Such formulas are well understood in the theory of branching processes, see, e.g., [HR17] and references therein. 
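Before stating them, note that the mark dynamics described above are simple enough to simulate directly. The following sketch (an illustration; the field values are omitted, since the marks move independently of them) follows two marks down the $d$-ary tree of descendants and records the last generation at which they coincide, recovering the split-time law $(d-1)d^{-(k+1)}$ that appears as (B.8) in Appendix B:

```python
import random

def split_time(d, n, rng=random):
    """Follow two spine marks for n generations.  While on a common
    vertex, each mark moves to an independently, uniformly chosen one
    of the d children; once the marks disagree they sit on distinct
    vertices and can never meet again."""
    for k in range(1, n + 1):
        if rng.randrange(d) != rng.randrange(d):
            return k - 1  # the marks chose different children at step k
    return n

d, n, trials = 3, 12, 100_000
counts = [0] * (n + 1)
for _ in range(trials):
    counts[split_time(d, n)] += 1

for k in range(4):  # empirical law vs. (d - 1) * d**-(k + 1)
    print(k, counts[k] / trials, (d - 1) * d ** -(k + 1))
```

These are exactly the mark dynamics underlying the multiple-spine formalism of [HR17].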
The proof of the following proposition is based on Lemma 8 of that paper and can be found in Appendix B. Proposition 3.5. For all functions f, g : [h∗, ∞) →R for which the expectations below are well defined, Ex h X v∈N+ n f(φv) i = Qx h f(ξn) χ(x) χ(ξn) i , (3.26) Ex h X v,w∈N+ n f(φv)g(φw) i = χ(x)d -1 d n-1 X k=0 Qx χ(ξk)Qξk hf(ξn-k) χ(ξn-k) i Qξk h g(ξn-k) χ(ξn-k) i + χ(x)Qx hf(ξn)g(ξn) χ(ξn) i . (3.27) 3.3. Asymptotic behaviour of the moments. The main result of this section is Proposition 3.8 giving precise asymptotic estimates on the quantities appearing in (3.26), (3.27). To prove them, we could, in principle, use this proposition together with the known results on the convergence of Markov chains. However, it is easier, and for our purposes slightly more practical, to use formula (3.8) together with L2-estimates on the operator L. Since L is a self-adjoint Hilbert-Schmidt operator (see Proposition 3.1), L2(ν) has an orthonormal basis consisting of the eigenfunctions {ek}k≥1 of L corresponding to the eigenvalues {λk}k≥1. By Proposition 3.1 we may assume that 1 = λ1 > |λ2| ≥|λ3| ≥. . . , and e1 = χ. We set γ = |λ2| ∈(0, 1). By decomposing f ∈L2(ν) as (3.28) f = X k≥1 ⟨ek, f⟩ek = ⟨χ, f⟩χ + X k≥2 ⟨ek, f⟩ek =: ⟨χ, f⟩χ + β[f], for every n ∈N, it holds (3.29) Ln[f] = X k≥1 λn k⟨ek, f⟩ek = ⟨χ, f⟩χ + X k≥2 λn k⟨ek, f⟩ek = ⟨χ, f⟩χ + Ln[β[f]], with (3.30) ∥Ln[β[f]]∥≤γn∥β[f]∥≤γn∥f∥. The following simple lemma will later be used to deduce the pointwise convergence from the L2(ν)-one. SCALING LIMITS FOR THE CRITICAL GFF 10 Lemma 3.6. There is a function q : [h∗, ∞) →[0, ∞) such that for every x ≥h∗and f ∈L2(ν) (3.31) |L[f](x)| ≤∥f∥q(x). Proof. Using the definition (3.6) of L and the Cauchy-Schwarz inequality, |L[f](x)| ≤d Z R |f(y)|ρY y -x d dy = d Z R |f(y)|ρY (y -x/d) ρν(y) ν(dy) ≤d∥f∥L2(ν) ρY (· -x/d) ρν(·) L2(ν) =: ∥f∥L2(ν)q(x). (3.32) By (3.3), σν > σY . Using this one can easily check by a direct computation that q(x) 0 such that for all n ≥1 (4.14) ∥u0 n∥≥a0 n ≥cn-1. Proof. The statement will follow if we show (4.15) 0 ≤g(x)n-1 ≤u0 n(x) for all n ≥1 for some non-trivial function g : R →[0, 1]. To show (4.15) we use the estimate on the volume of Co from Proposition 3.3. By this proposition and (3.5) there is a positive constant c1 such that, for all m ≥1 and x ≥h∗, (4.16) Px[|Co ∩T+| > m] ≥c1m-1/2. Since |Co ∩T+| = P k≥0|N + k |, this implies for all m, n ≥1 c1m-1/2 ≤Px h X k≥0 |N + k | > m, N + n = ∅ i + Px h X k≥0 |N + k | > m, N + n ̸= ∅ i (4.17) ≤Px h n-1 X k=0 |N + k | > m i + Px[N + n ̸= ∅]. (4.18) By the Markov inequality, Px[Pn-1 k=0|N + k | > m] ≤ 1 m Pn-1 k=0 Ex[|N + k |], where, by Proposition 3.8, Ex[|N + k |] ≤c2(x) for some x-dependent constant c2(x) > 0. Thus, (4.19) c1m-1/2 ≤c2(x) n m + Px[N + n ̸= ∅]. Recalling (4.2) and choosing m = ⌊δ2n2⌋with δ = 2c1c2(x)-1, it follows that (4.20) u0 n(x) ≥ c2 1 2c2(x)n-1 for all n ≥1, showing (4.15) and thus the lemma. □ The next lemma gives upper bounds on af n and bf n. Lemma 4.3. There is c 0 for all x ∈[h∗, ∞). Hence, ⟨χ, L[uf n]2⟩≥ c⟨1, L[uf n]2⟩= c∥L[uf n]∥2 ≥c(af n)2. Applied to (4.22), this gives (4.23) af n+1 ≤af n -c(af n)2. SCALING LIMITS FOR THE CRITICAL GFF 14 This implies that af n is decreasing in n and, after rearranging, also (4.24) c ≤af n -af n+1 (af n)2 ≤af n -af n+1 af n+1af n = 1 af n+1 -1 af n . Summing this over n running from 0 to n -1 yields (4.25) cn ≤ n-1 X k=0 1 af k+1 -1 af k = 1 af n -1 af 0 ≤1 af n , proving the first part of (4.21). 
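As a quick aside, the decay rate extracted by this telescoping step is easily checked numerically in the extremal case $a_{n+1}=a_n-c\,a_n^2$ of (4.23) (a toy check, not part of the argument):

```python
c, a = 0.5, 0.9
for n in range(1, 100_001):
    a -= c * a * a  # extremal case of the recursion (4.23)
print(n * a)  # approaches 1/c = 2.0, up to a correction of order log(n)/n
```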
To prove the second part we project (4.9) onto the orthogonal complement of span {χ} and take norms. With the triangle inequality, this gives (4.26) bf n+1 ≤∥β[L[uf n]]∥+ ∥β[g(L[uf n])]∥. By (3.30), ∥β[L[uf n]]∥= ∥L[β[uf n]]∥≤γ∥β[uf n]∥= γbf n. By the upper bound of g from (4.10) and Proposition 3.2, since β is a projection, ∥β[g(L[uf n])]∥≤∥g(L[uf n])∥≤ c2∥L[uf n]2∥≤c∥uf n∥2. Applied to (4.26), this gives (4.27) bf n+1 ≤γbf n + c∥uf n∥2 = γbf n + c(af n)2 + c(bf n)2. Taking now f = 0, since b0 n →0 as n →∞, there is γ′ ∈(γ, 1) and n′ 0. SCALING LIMITS FOR THE CRITICAL GFF 15 To simplify the notation, we define (recall (4.1), (4.4)) (4.33) uλ n = ufλ n , aλ n = afλ n = ⟨χ, uλ n⟩, bλ n = bfλ n = ∥β[uλ n]∥. The first preliminary step in proving the uniformity over ˆF is the following lemma. In the special case λ = 0, this already follows from Lemmas 4.2 and 4.3. Lemma 4.4. There is a constant c 0. By (4.27) from the proof of Lemma 4.3, (4.35) bλ n+1 ≤γbλ n + c∥uλ n∥2, with c that is uniform over f ∈F. An iterative application of this inequality yields (4.36) bλ n+1 ≤γnbλ 0 + c n X l=0 γn-l∥uλ l ∥2. To continue, we argue that (4.37) ∥uλ l ∥≤λ-1 for every l ∈N0, λ > 0. Indeed, by Lemma 4.1, uλ n+1(x) ≤L[uλ n](x). Applying this recursively, taking norms, and using that the operator norm of L is one, we obtain (4.38) ∥uλ l ∥≤∥Ll[uλ 0]∥≤∥uλ 0∥. Moreover, since by definition 0 ≤uλ 0 = 1 -exp(-λ-1χ) ≤λ-1χ, we know that ∥uλ 0∥≤ λ-1∥χ∥= λ-1. Together with (4.38), this shows (4.37). Since also bλ 0 ≤∥uλ 0∥≤λ-1 and γ 0, x ≥h∗, and n ≥1, (5.1) Px h 1 n X v∈N+ n f(φv) > δ N + n ̸= ∅ i ≤cδ-2εf n(x). SCALING LIMITS FOR THE CRITICAL GFF 20 Proof. By the conditional Markov inequality (5.2) Px h 1 n X v∈N+ n f(φv) > δ N + n ̸= ∅ i ≤δ-2Ex h 1 n X v∈N+ n f(φv) 2 N + n ̸= ∅ i . The conditional expectation on the right-hand side satisfies (5.3) Ex h 1 n X v∈N+ n f(φv) 2 N + n ̸= ∅ i = 1 nPx[N + n ̸= ∅] 1 nEx h X v∈N+ n f(φv) 2i . By Proposition 3.8(a,b), since ⟨χ, f⟩= 0, (5.4) 1 nEx h X v∈N+ n f(φv) 2i = εf n(x), with εf n →0 in L2 and pointwise. Further, by the stochastic domination (3.5), using Theorem 2.1, (nPx[N + n ̸= ∅])-1 ≤(nPh∗[N + n ̸= ∅])-1 ≤c and the statement of the lemma follows. □ Proof of Theorem 2.2. We first consider the special case f = χ; the general case will then easily follow using Lemma 5.1. We start by showing that for every α > 0, (5.5) E e-αZχ,x n = C1 C1 + α + εα n(x), with εα n →0 pointwise and in L2(ν). Using the definitions (4.1), (4.32) of uf n and fλ, setting as before uλ n = ufλ n , and recalling (4.2), we obtain E e-αZχ,x n = Ex h exp -α n X v∈N+ n χ(φv) N + n ̸= ∅ i = 1 -un/α n (x) u0 n(x) . (5.6) By Lemma 4.6 and (4.68), (5.7) u0 n(x) = Cn-1χ(x)(1 + ε(x)), where ε →0 pointwise. Since Lemma 4.6 holds uniformly in λ, we can apply it with λ = n/α to obtain (5.8) un/α n (x) = an/α n χ(x)(1 + ̃ε(x)), again with ̃ε →0 pointwise. Moreover, by Lemma 4.7, again using the uniformity in λ, (5.9) (nan/α n )-1 = (nan/α 0 )-1 + C-1 1 + o(1) as n →∞. By the definitions of aλ n, uλ n and fλ, and by the monotone convergence theorem, nan/α 0 = n⟨χ, un/α 0 ⟩= n χ, E· 1 -fn/α(φo) = n⟨χ, 1 -fn/α⟩= χ, n 1 -e-αχ(·)/n n→∞ ---→α⟨χ, χ⟩= α. (5.10) Inserting this into (5.9) yields (5.11) an/α n = αC1(1 + o(1)) n(α + C1) as n →∞. Combining (5.6)-(5.8), (5.11) gives (5.12) 1 -E e-αZχ,x n = α α + C1 (1 + o(1)), SCALING LIMITS FOR THE CRITICAL GFF 21 which shows the pointwise convergence in (5.5). 
The L2(ν)-convergence follows by the dominated convergence theorem, since the left-hand side of (5.5) is bounded by 1. By L ́evy's continuity theorem for the Laplace transform, (5.5) implies the statement (2.13) of the theorem in the case Zf,x n with f = χ. For general f ∈L2(ν) we write f = ⟨χ, f⟩χ+β[f] as usual. Then Zf,x n = ⟨χ, f⟩Zχ,x n +Zβ[f],x n . The statement for Zf,x n then directly follows, since the first summand converges to an exponential random variable with mean C-1 1 ⟨χ, f⟩, by the first step of the proof, and the second summand converges to 0 in probability, by Lemma 5.1, since ⟨χ, β[f]⟩= 0. We now show the statement of the theorem for Zf n. Let νn be the law of φo under P[ · |N + n ̸= ∅]. By integrating (5.5) over νn(dx) (5.13) E e-Zχ n = C1 C1 + α + Z εα n(x)νn(dx). We need to show that the last integral is o(1). Observe that νn(dx) = P[φo ∈dx|N + n ̸= ∅] = P[φo ∈dx, N + n ̸= ∅]P[N + n ̸= ∅]-1 = Px[N + n ̸= ∅]P[N + n ̸= ∅]-1ν(dx). Therefore, using the Cauchy-Schwarz inequality and that εα n →0 in L2(ν), the integral in (5.13) is bounded above by (5.14) o(1) Z Px[N + n ̸= ∅]2 P[N + n ̸= ∅]2 ν(dx) 1/2 . By (4.67), Px[N + n ̸= ∅]P[N + n ̸= ∅]-1 = χ(x)⟨1, χ⟩-1 + εn(x), with εn(x) →0 in L2(ν) as n →∞. This implies that the integral in (5.14) is O(1) and together with (5.13) and again L ́evy's continuity theorem shows statement (2.13) for Zf n with f = χ. For Zf n and general f ∈L2(ν) the statement then follows by integrating (5.1) over νn(dx) and applying similar arguments as above. □ 5.1. Further results for the conditioned model. The following few results which all concern the model conditioned to {N + n ̸= ∅} will later be used in the proof of Theorem 2.3. The first one explains the role of the measure Qx introduced using the spine construction in Section 3.2. Similar results are well established in branching processes literature, see for instance [HH07, Theorem 5] or [CR90, Theorem 4]. Proposition 1.5 in [Pow19] can be seen as the analogous result for branching diffusion on bounded domains. Proposition 5.2. For all K ∈N, x ≥h∗and B ∈FK (see (3.20)) (5.15) lim n→∞Px[B|N + n ̸= ∅] = Qx[B]. Proof. We follow similar steps as in the proof of [Pow19, Proposition 1.5]. For every n ≥K, (5.16) Px[B|N + n ̸= ∅] = Ex h1BPx[N + n ̸= ∅|FK] Px[N + n ̸= ∅] i . Writing N + K = {v1, . . . , v|N+ K|} and defining the events Ai = {vi is root of a subtree in Co with height of at least n -K}, (5.17) {N + n ̸= ∅} = |N+ K| [ k=1 Ak = |N+ K| [ k=1 Ak ∩ \ j K, 1BPx[N + n ̸= ∅|FK] Px[N + n ̸= ∅] ≤ cn n -K X v∈N+ K (χ(φv) + ̄εn-K(φv)) ≤c′ X v∈N+ K (χ(φv) + ̄εn-K(φv)) =: gn, (5.21) which converges a.s. to g := c′ P v∈N+ K χ(φv). Moreover, by Proposition 3.8, (5.22) Ex[|gn -g|] ≤Ex c′ X v∈N+ K | ̄εn-K(φv)| = ⟨χ, | ̄εn-K|⟩χ(x) + ε| ̄εn-K| K (x). Since ∥ ̄εn∥→0, the bounds established in (3.34) imply that the right-hand side of (5.22) converges to 0, that is gn →g in L1(P x). Therefore, by the generalised dominated convergence theorem, using (5.16), (5.19), (5.23) lim n→∞Px[B|N + n ̸= ∅] = Ex h 1B P v∈NK χ(φv) χ(x) i = Qx[B], where the last equality follows from Lemma B.2 with k = 1 and Y (v1) = 1Bχ(φv1). □ The following lemma and its corollary follow almost directly from the results established in the proof of Theorem 2.2. Lemma 5.3. For every f ∈L2(ν), x ≥h∗and δ > 0, (5.24) lim n→∞Px P v∈N+ n f(φv) P v∈N+ n χ(φv) -⟨χ, f⟩ > δ N + n ̸= ∅ = 0. Proof. Observe that P v∈N+ n f(φv) P v∈N+ n χ(φv) -⟨χ, f⟩= P v∈N+ n (f(φv) -⟨χ, f⟩χ(φv)) P v∈N+ n χ(φv) = n-1 P v∈N+ n β[f](φv) n-1 P v∈N+ n χ(φv) . 
(5.25) SCALING LIMITS FOR THE CRITICAL GFF 23 For any A, B ∈R and ε, δ > 0, {|A/B| > ε} ⊂{|A| > δ} ∪{|B| ̄δn N + n ̸= ∅ i + Px h n-1 X v∈N+ n χ(φv) 0. Then for n large enough so that ̄δn/δ ≤ε, (5.27) Px h n-1 X v∈N+ n χ(φv) 0 and f, g ∈L2(ν) with g > 0, (5.28) lim n→∞Px P v∈N+ n f(φv) P v∈N+ n g(φv) -⟨χ, f⟩ ⟨χ, g⟩ > δ N + n ̸= ∅ = 0. In particular, setting g ≡1, (5.29) lim n→∞Px 1 |N + n | X v∈N+ n f(φv) -⟨χ, f⟩ ⟨χ, 1⟩ > δ N + n ̸= ∅ = 0. Proof. It is easy to see that there is δ′ = δ′(δ, f, g) > 0 such that the event on the lefthand side of (5.28) is contained in the union of {|P f(φv)/ P χ(φv) -⟨χ, f⟩| ≥δ′} and {|P g(φv)/ P χ(φv) -⟨χ, g⟩| ≥δ′}. The statement then follows by Lemma 5.3. □ The final result of this section is the following technical lemma which can be seen as a somewhat stronger version of (5.29) in Corollary 5.4. It is tailored to be used in the proof of Proposition 6.7. Lemma 5.5. Write N + n = {v1, v2, . . . , v|N+ n |} and set N + n,M = {v1, v2, . . . , vM}. For δ, ρ > 0 and f : R →R bounded, define the event (5.30) Bn(f, δ, ρ) := {ρn ≤|N + n |} ∩ [ ρn≤M≤|N+ n | P v∈N+ n,M f(φv) M -⟨χ, f⟩ ⟨χ, 1⟩ > δ . Then for every x ≥h∗, (5.31) lim n→∞Px Bn(f, δ, ρ) N + n ̸= ∅ = 0. Proof. We start by outlining the strategy of the proof. We will define events An = An(n0) (see (5.34)) and A′ n = A′ n(n0, q) (see (5.40)), so that when n0 = n0(n) and q = q(n) are chosen correctly, (5.32) lim n→∞Px An(n0(n)) N + n ̸= ∅ = 0, lim n→∞Px A′ n(n0(n), q(n)) N + n ̸= ∅ = 0, Bn(f, δ, ρ) ⊆An(n0(n)) ∪A′ n(n0(n), q(n)) for n large enough. SCALING LIMITS FOR THE CRITICAL GFF 24 The statement of the lemma follows directly from these claims by a union bound. To define An = An(n0), we fix 0 ≤n0 ≤n, write N + n0 = {w1, . . . , w|N+ n0|}, and set N i n = {v ∈N + n : wi is ancestor of v}. For 1 ≤i ≤|N + n0|, we set (5.33) mi = ( |N i n|-1 P v∈Nin f(φv), if |N i n| > 0, ⟨χ, f⟩/⟨χ, 1⟩, otherwise, and define the events (5.34) Ai n = n mi -⟨χ, f⟩ ⟨χ, 1⟩ > δ 2 o and An = An(n0) = |N+ n0| [ i=1 Ai n. We now prove an upper bound on P[An|N + n ̸= ∅]. By a union bound, using that Ai n ⊂{N i n ̸= ∅} ⊂{N + n ̸= ∅}, Px[An|N + n ̸= ∅] = Px[N + n ̸= ∅]-1Px[An ∩{N + n ̸= ∅}] ≤Px[N + n ̸= ∅]-1Ex |N+ n0| X i=1 Px[Ai n ∩{N i n ̸= ∅}|Fn0] . (5.35) Using the branching process properties of the GFF, Px[Ai n ∩{N i n ̸= ∅}|Fn0] = Pφwi 1 |N + n-n0| X v∈N+ n-n0 f(φv) -⟨χ, f⟩ ⟨χ, 1⟩ > δ 2 N + n-n0 ̸= ∅ Pφwi[N + n-n0 ̸= ∅]. (5.36) By Corollary 5.4(b), the first probability on the right-hand side is bounded by some εn-n0(φwi) satisfying 1 ≥εn →0 pointwise, and thus in any Lq(ν) by the bounded convergence theorem. By Proposition 4.5 and Lemmas 4.6, 4.7, the second probability equals (5.37) Pφwi[N + n-n0 ̸= ∅] = C1(n -n0)-1 χ(φwi) + ̄εn-n0(φwi) , where ̄ε →0 pointwise and in L5/2(ν). Combining these statements with (5.35) and (5.36), and using then Proposition 3.8 in the numerator, we obtain Px[An|{N + n ̸= ∅}] ≤ Ex h P w∈N+ n0 εn-n0(φw)C1(n -n0)-1(χ(φw) + ̄εn-n0(φw)) i C1n-1(χ(x) + ̄εn(x)) = c(x)n n -n0 ⟨χ, εn-n0(χ + ̄εn-n0)⟩χ(x) + ̃εn0(x), (5.38) where ̃εn →0 pointwise. Finally, using H ̈older's inequality on the inner product, since χ, χ2 ∈L2(ν) (by (3.18)), εn →0 in L5(ν) and ̄ε →0 in L5/2(ν), it follows that for every x ≥h∗, (5.39) if n0 →∞and n -n0 →∞, then Px[An(n0)|{N + n ̸= ∅}] →0. We now turn to the events A′ n. For given n0 ≤n and q, let (5.40) A′ n = A′ n(n0, q) = |N+ n0| [ i=1 {|N i n| > q}. 
SCALING LIMITS FOR THE CRITICAL GFF 25 To bound the probability P[A′ n|N + n ̸= ∅], we write (5.41) Px[A′ n ∩{N + n ̸= ∅}] ≤Px[A′ n] ≤Ex h |N+ n0| X i=1 Pφwi[|N + n-n0| > q] i . By the Markov inequality and Proposition 3.8(b) (with f = g = 1), (5.42) Pφwi[|N + n-n0| > q] ≤q-2Eφwi[|N + n-n0|2] = Cq-2(χ(φwi)(n -n0) + εn-n0(φwi)), where ∥εn-n0∥≤c. Thus, by Proposition 3.8(a), Px[A′ n ∩{N + n ̸= ∅}] ≤q-2Ex h X v∈N+ n0 (cχ(φv)(n -n0) + εn-n0(φv)) i ≤cq-2 (n -n0 + 1)χ(x) + ε′ n0(x) . (5.43) with ε′ n →0 pointwise. Together with Px[N + n ̸= ∅] ≥cn-1, by Theorem 2.1, this implies (5.44) Px[A′ n|N + n ̸= ∅] ≤n q2 (n -n0 + C)χ(x) + ε′ n0(x) . We will now give sufficient conditions on the functions n0(n) and q(n) so that for large enough n, (5.45) Bn(f, δ, ρ) ⊂An ∪A′ n, or equivalently Bn(f, δ, ρ)c ⊃(An)c ∩(A′ n)c. By definition, {ρn ≤|N + n |}c ⊆Bn(f, δ, ρ)c. Therefore, (5.45) is implied by (5.46) (An)c ∩(A′ n)c ∩{ρn ≤|N + n |} ⊆Bn(f, δ, ρ)c. To prove this, we assume that (An)c ∩(A′ n)c ∩{ρn ≤|N + n |} holds. For M ≤|N + n |, let (5.47) k(M) := inf n l X i=1 |N i n| : l∈{0, . . . , |N + n0|} such that l X i=1 |N i n| ≥M o ≥M. On (An)c, |N i n|-1 P v∈Nin f(φv) -⟨χ, f⟩/⟨χ, 1⟩ ≤δ/2 for all i = 1, . . . , |N + n0|. Therefore, for all M with ρn ≤M ≤|N + n | (5.48) P v∈N+ n,k(M) f(φv) k(M) -⟨χ, f⟩ ⟨χ, 1⟩ ≤δ 2. Writing P v∈N+ n,M f(φv) = P v∈N+ n,k(M) f(φv) -P v∈N+ n,k(M) + n,M f(φv), and using that the absolute value of the second sum is bounded by (k(M) -M) sup|f|, we obtain that for every M ≥ρn P v∈N+ n,M f(φv) M - P v∈N+ n,k(M) f(φv) k(M) = P v∈N+ n,k(M) f(φv) M - P v∈N+ n,k(M) + n,M f(φv) M - P v∈N+ n,k(M) f(φv) k(M) ≤ X v∈N+ n,k(M) f(φv) 1 M - 1 k(M) + k(M) -M M sup|f| = k(M) -M M P v∈N+ n,k(M) f(φv) k(M) + sup|f| ≤c(f, δ) q ρn, (5.49) SCALING LIMITS FOR THE CRITICAL GFF 26 where in the last inequality we used (5.48) and the fact that k(M) -M ≤q on (A′ n)c. If the right-hand side of (5.49) is smaller than δ/2, then together with (5.48), this implies that Bn(f, δ, ρ)c holds, implying (5.46) and thus (5.45). To finish the proof of (5.32), we must choose n0 = n0(n) and q = q(n) so that (5.39) applies and the left-hand sides of (5.43), (5.49) tend to zero. This is easily done by setting, e.g., q(n) = n3/4 and n0 = n -n1/4. □ 6. The Sn martingale The remaining three sections of this paper are dedicated to proving Theorem 2.3. In this section, we will introduce a martingale based on the depth-first traversal of a sequence of copies of Co ∩T+, and prove that it satisfies an invariance principle, see Proposition 6.1. This martingale can be seen as an analogue to the Lukasiewicz path used to study critical Galton-Watson trees. The second part of the section then demonstrates another two scaling limit results, Proposition 6.8 and Proposition 6.10 which are both consequences of Proposition 6.1. To define the martingale, we consider an i.i.d. sequence T = ((T 1, φ1), (T 2, φ2), . . . ) where every (T i, φi) is distributed as (Co ∩T+, φ|Co∩T+) under Px. To keep the notation simple, we keep using Px for the probability measure associated with the whole sequence T, and for v ∈T i we write φv instead of φi v. We further use oi to denote the root of T i, and set N +,i n = {v ∈T i : d(oi, v) = n}. Note that |N +,i n | has the same distribution as |N + n | which was studied in detail in the previous sections. In particular, we know that all T i are a.s. finite. 
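To make the Lukasiewicz-path analogy mentioned above concrete, here is a minimal Python sketch (ours, purely illustrative and not part of the paper; the Geometric(1/2) offspring law is an arbitrary choice of critical example, and the function names are hypothetical). It generates an i.i.d. forest of critical Galton-Watson trees and records the classical Lukasiewicz path, whose role is played in this section by S_n, with the offspring counts replaced by the increments sum_{v in desc(v_k)} chi(phi_v) - chi(phi_{v_k}) (see (6.8) below).

import random

def gw_offspring_counts(max_nodes=10**4):
    # Offspring counts of one critical Galton-Watson tree, listed in
    # depth-first order; Geometric(1/2) offspring has mean one (critical).
    counts, unexplored = [], 1
    while unexplored > 0 and len(counts) < max_nodes:
        unexplored -= 1
        k = 0
        while random.random() < 0.5:   # Geometric(1/2) number of children
            k += 1
        counts.append(k)
        unexplored += k
    return counts

def lukasiewicz_path(n_steps):
    # Lukasiewicz path of an i.i.d. forest: S_0 = 0 and
    # S_k = S_{k-1} + (#children of the k-th DFS vertex) - 1; the path
    # first hits -i exactly when the i-th tree has been exhausted.
    counts = []
    while len(counts) < n_steps:
        counts.extend(gw_offspring_counts())
    path, S = [0], 0
    for k in counts[:n_steps]:
        S += k - 1
        path.append(S)
    return path

Under the classical invariance principle, n^{-1/2} times this path converges to a reflected Brownian motion, which is the role played by Proposition 6.1 for S_n in the present setting.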
Throughout this section, we use the notation for the parent p(v), direct descendants desc(v), and siblings sib(v) of v ∈T i relative to the rooted tree T i (not T), w ⪯v means that w is an ancestor of v, cf. below (3.1). We now describe the depth-first traversal v = (v1, v2, . . . ) of T. It starts at the root of T 1, that is v1 = o1, and then explores the tree T 1 in a depth-first manner. After visiting all vertices of T 1, it proceeds to o2, explores T 2 in a depth-first manner, and so forth. vi denotes the i-th vertex visited during this traversal. The notation v 0} are independent. Hence, using the first half of (6.14), this is bounded by c(x)u-1n-1/2 P i≥1 Px[i ≤Λn]. Since P i≥1 Px[i ≤Λn] = Ex[Λn] ≤c(x)√n, by the second half of (6.14), the claim follows. □ Lemma 6.6. For every δ > 0 and x ≥h∗there exists a sequence R(n) such that limn→∞R(n) = ∞and (6.16) sup n≥1 Px[H∗ n ≤R(n)] ≤δ. Proof. We will show that for every K 0, by definition of H∗ n, (6.18) Px[H∗ n ≤K] = n-1 n X l=1 Px[Hl ≤K] = n-1Ex h K X h=0 n X l=1 1Hl=h i . Since Pn l=1 1Hl=h ≤PΛn i=1|N +,i h | and thus PK h=0 Pn l=1 1Hl=h ≤PΛn i=1 PK h=0|N +,i h |. Moreover, (PK h=0|N +,i h |)Λn i=1 are i.i.d. and with finite mean. Noting that Λn is a stopping time SCALING LIMITS FOR THE CRITICAL GFF 29 with respect to Gi = σ(T j : j ≤i) and using Wald's equation (see e.g. [Dur19, Theorem 2.6.2]), (6.19) Px[H∗ n ≤K] ≤1 nEx h Λn X i=1 K X h=0 |N +,i h | i = 1 nEx[Λn]Ex h K X h=0 |N +,1 h | i , which by (6.14) converges to 0 as n →∞. □ The following law of large numbers will be important for the proof of Proposition 6.1. Proposition 6.7. Let f be a bounded function and mf n := n-1 Pn i=1 f(φvi). Then (6.20) lim n→∞mf n = mf ∞:= ⟨χ, f⟩ ⟨χ, 1⟩, in Px-probability. Proof. Let (6.21) N +,i k,n = N +,i k ∩{v1, . . . , vn} be the part of N +,i k traversed in the first n steps, and let N ∗ n = N +,Λ∗ n H∗n,n . We set (6.22) m∗ n = |N ∗ n|-1 X v∈N∗n f(φv). Denoting by FT the σ-algebra generated by the sequence T = (Ti, φi)i≥1, it holds that (6.23) mf n = Ex[m∗ n|FT] (effectively, the expectation here is over Un only). To show the proposition it will thus be sufficient to show that (6.24) lim n→∞m∗ n = mf ∞ in Px-probability. To see that this is indeed enough, note that since f is bounded m∗ n is dominated by a constant. Hence, by the dominated convergence theorem, m∗ n →mf ∞also in L2(Px). By (6.23), since the conditional expectation is a contraction on L2(Px), mf n →mf ∞in L2(Px), and thus in Px-probability as claimed. To prove (6.24), we fix ε, δ > 0 and set (6.25) An = {|m∗ n -mf ∞| > ε}. We then fix C ε}. Using the definitions of Un and m∗ n, and then decomposing by possible values of Hk, Λk, we obtain Px[An ∩Bn] = n X k=1 Px[An ∩Bn|Un = k]Px[Un = k] = n X k=1 1 nPx[AΛk Hk,n, R(n) ≤Hk ≤Cn1/2] = 1 n Cn1/2 X l=R(n) n X i=1 Ex h 1Ai l,n n X k=1 1{Hk=l,Λk=i} i . = 1 n Cn1/2 X l=R(n) n X i=1 Ex 1Ai l,n N +,i l,n , (6.27) where in the last step we used Pn k=1 1{Hk=l,Λk=i} = |N +,i l,n |. To analyse the expectation on the right-hand side, we define σ-algebras Gi = σ(T 1, T 2, . . . , T i) and set τi = |T 1| + · · · + |T i-1| (which is Gi-1-measurable). Then, using that (T i, φi)i≥1 is an i.i.d. sequence, Ex 1Ai l,n|N +,i l,n | Gi-1 = 1{τi≤n}Ex 1A1 l,n-τi N +,1 l,n-τi = 1{τi≤n}Ex 1Al,n-τi|N + l,n-τi| N + l ̸= ∅ Px[N + l ̸= ∅], (6.28) where on the last line we omitted the superscript '1' since N +,1 l,k has the same distribution as N + l,k. 
By Theorem 2.1, Proposition 3.8 and (6.14) there is a c = c(x) such that (6.29) Px N + l ̸= ∅ ≤cl-1, Ex N + l 2 N + l ̸= ∅ 1/2 ≤cl, Ex Λn ≤cn1/2. Further, for Bn(f, ε, ρ) as in Lemma 5.5, it holds that Al,k ∩{|N + l,k| ≥lρ} ⊆Bl(f, ε, ρ). Therefore, choosing ρ = δ/(6Cc2) and decomposing on whether |N + l,n-τi| is larger than lδ/(6Cc2), we obtain Ex 1A1 l,n-τi|N +,1 l,n-τi| N +,1 l ̸= ∅ ≤ lδ 6Cc2 + Ex 1Bn(f,ε,ρ)|N + l,n-τi| N + l ̸= ∅ ≤ lδ 6Cc2 + Ex |N + l |2 N + l ̸= ∅ 1/2Px Bl(f, ε, ρ) N + l ̸= ∅ 1/2, (6.30) where in the last step we used the Cauchy-Schwarz inequality and |N + l,n-τi| ≤|N + l |. Combining (6.28) and (6.30) with the first two claims in (6.29) we obtain (6.31) Ex 1Ai l,n|N +,i l,n | ≤ δ 6Cc + c2Px Bl(f, ε, ρ) N + l ̸= ∅ 1/2 Px[τi ≤n]. Inserting this into (6.26), (6.27), we obtain that Px[An] is bounded by (6.32) 2δ 3 + 1 n δ 6Cc + c2 sup l≥R(n) Px Bl(f, ε, ρ) N + l ̸= ∅ 1/2 ⌊C√n⌋ X l=R(n) n X i=1 Px[Λn ≥i], Since R(n) diverges, Lemma 5.5 implies that the supremum in this formula tends to zero as n →∞. By the last property in (6.29), P⌊C√n⌋ l=R(n) Pn i=1 Px[Λn ≥i] ≤C√nEx[Λn] ≤Ccn. SCALING LIMITS FOR THE CRITICAL GFF 31 Therefore (6.33) Px[An] ≤2δ 3 + δ 6 + Cc3o(1) ≤δ for n large enough. Since ε and δ are arbitrary, this shows (6.24) and with the initial comment completes the proof. □ We are now ready to prove Proposition 6.1. Proof of Proposition 6.1. We apply a martingale functional central limit theorem, see e.g. [Dur19, Theorem 8.2.8]. To check its assumptions we need to show that (a) limn→∞n-1 Pn k=1 Ex[(Sk -Sk-1)2|Hk-1] = ⟨χ,V⟩ ⟨χ,1⟩in Px-probability, (b) limn→∞n-1 Pn k=1 Ex[(Sk -Sk-1)21{|Sk-Sk-1|>ε√n}] = 0. Condition (a) follows from Lemma 6.3 and Proposition 6.7, using that V is bounded by Lemma 6.4. We now show (b). By the Cauchy-Schwarz inequality, Ex[(Sk-Sk-1)21{|Sk-Sk-1|>ε√n}] ≤ Ex[(Sk -Sk-1)4]1/2Px[(Sk -Sk-1)2 > ε2n]1/2. By Lemmas 6.3, 6.4, Ex[(Sk -Sk-1)2] ≤c uniformly in k and thus Px[(Sk -Sk-1)2 > ε2n] ≤cε-2n-1. Thus, to prove (b) it is enough to show (6.34) Ex[(Sk -Sk-1)4] ≤c uniformly in k. By (6.8), Sk -Sk-1 = P v∈desc(vk-1) χ(φv) -χ(φvk-1), and thus Ex[(Sk -Sk-1)4|Hk-1] = Eφvk-1 h X v∈N+ 1 χ(φv) -χ(φvk-1) 4i = Eφvk-1 h X v∈N+ 1 χ(φv) -Eφvk-1 X v∈N+ 1 χ(φv) 4i , (6.35) where we used that χ(φvk-1) = Eφvk-1[P v∈N+ 1 χ(φv)] in the last line. We write N + 1 = {w1, . . . , wd} and follow similar arguments as in the proof of Lemma 6.4, using that χ(x) = 0 for x 0 and ε > 0, (6.38) lim n→∞Px h max i≤nt φvi/√n > ε i = 0. Let H(T i) = maxv∈T i H(v) denote the height of tree T i. For every j, k ∈N, the probability in (6.38) is bounded from above by Px h max i≤nt φvi > ε√n, Λ⌊nt⌋≤j, H(Ti) ≤k for i = 1, . . . , j i + Px[∃i ∈{1, . . . , j} : H(T i) > k, Λ⌊nt⌋≤j] + Px[Λ⌊nt⌋> j]. (6.39) To show (6.38), we thus need to choose jn →∞and kn →∞so that all three summands converge to zero. We start with the first summand. Restricted to {Λ⌊nt⌋≤j} and {H(T i) ≤k for i = 1, . . . , j}, maxk≤nt φvk is dominated by the maximum of all φv with v such that Λ(v) ≤j and H(v) ≤k. Considering not only the maximum over the connected components of the level set, but over the first k generations in j copies of the whole forward tree T+, this is dominated by the maximum of jdk non-negatively correlated Gaussian random variables with mean at most x and bounded variance. By Gaussian comparison techniques, the mean of this maximum is of order x + c p log jdk ≤c√k log j for j, k large enough. Thus, by the Markov inequality, (6.40) Px h max i≤nt φvi > ε√n, Λ⌊nt⌋≤j, H(Ti) ≤k for i = 1, . . . 
, j i ≤ C εn1/2 p k log(j). By a union bound, using Theorem 2.1, the second summand in (6.39) satisfies (6.41) Px[∃i ∈{1, . . . , j} : H(T i) > k, Λ⌊nt⌋≤j] ≤c(x)j/k. Finally, by the Markov inequality and (6.14), the third summand can be bounded by (6.42) Px[Λ⌊nt⌋> j] ≤c(x) √ tn/j. Setting now, e.g., jn = n2/3 and kn = n5/6, all three summands in (6.39) converge to 0 as required. □ Proof of Proposition 6.8. We use the continuous mapping theorem together with the already established convergence statements. By Proposition 6.1, n-1/2S⌊n·⌋→σB·, and by Lemma 6.9, n-1/2χ(φv⌊n·⌋) →0 as n →∞, in Px-distribution, in the Skorokhod topology. Therefore (6.43) n-1/2(S⌊n·⌋-χ(φv⌊n·⌋)) = n-1/2 X w∈Y (v⌊n·⌋) χ(φw) - X i≤Λ⌊n·⌋ χ(φoi) also converges to σB· as n →∞. Next, note that (6.44) inf k≤n{Sk -χ(φvk)} = inf k≤n n X w∈Y (vk) χ(φw) - X i≤Λk χ(φoi) o = - X i≤Λn χ(φoi). Therefore, setting Bt = infs≤t Bs and using the continuous mapping theorem with the map g(Xt) = (Xt -infs≤t Xs, -infs≤t Xs), (6.45) n-1/2 X w∈Y (v⌊n·⌋) χ(φw), X i≤Λ⌊n·⌋ χ(φoi) → σ(B· -B·), -σB· SCALING LIMITS FOR THE CRITICAL GFF 33 in distribution as n →∞. By L ́evy's theorem (see, e.g., Theorem VI.2.3 in [RY99]), the right-hand side of (6.45) has the same distribution as (σ|B·|, σL0 · (B)). Together with the fact that under Px, P i≤Λ⌊nt⌋χ(φoi) = χ(x)Λ⌊nt⌋, this implies the proposition. □ Another consequence of Proposition 6.1 is the following scaling limit result for Sn conditioned to reach a certain height on the first tree T1. To this end we define (6.46) ̄Sn = (P w∈Y (vn) χ(φw) if n ≤|T 1|, 0 if n > |T 1|. Note that, up to an additive correction χ(φvn) -χ(x), ̄S· is equal to S· restricted to T1. Proposition 6.10. For y > 0, let (e≥y/σ)t≥0 be a Brownian excursion conditioned to reach at least height y/σ. Then, in distribution under Px[ · | supk ̄Sk ≥√ny], with respect to the Skorokhod topology, (6.47) lim n→∞ n-1/2 ̄S⌊nt⌋ t≥0 = (σe≥y/σ)t≥0. We omit the proof of this proposition as it would be identical to the proof of Proposition 6.13 in [Pow19], which itself follows [DLG02, Proposition 2.5.2]. 7. Relation of Sn and Hn The aim of this section will be to establish a connection between the martingale Sn and the height process Hn (see (6.3) and (6.1) for definitions). This will be useful in the proof of Theorem 2.3 in Section 8. Throughout the whole section, we only consider the field on the first tree (T, φ) = (T 1, φ1) in the infinite tree sequence T = ((T 1, φ1), (T 2, φ2), . . . ) introduced in Section 6. As a consequence, we only consider ̄S introduced in (6.46) instead of S. We will see that, approximately, ̄S(v) ≈H(v)/C1 with C1 as in (2.11). Motivated by this, for η > 0 we say that v ∈T is η-bad if (7.1) ̄S(v) H(v) -C-1 1 > η. Fixing in addition R > 0, we say that v ∈T is (η, R)-bad if there exists a w ≺v such that H(w) ≥R and w is η-bad. This means that v is (η, R)-good (i.e. not (η, R)-bad) if all its ancestors in generations at least R are η-good. We set (7.2) N (η,R) n = {v ∈N + n : v is (η, R)-bad}. The first main result of this section shows that this set is relatively small. Proposition 7.1. For every ε > 0 and x ≥h∗, (7.3) lim R→∞sup n≥R Px h|N (η,R) n | |N + n | > ε N + n ̸= ∅ i = 0. Proof. The proof follows a strategy similar to the proof of Proposition 6.17 in [Pow19], with adaptations that are necessary in order to handle the unbounded domain in our setting. For ε > 0, R > 0 and n ≥0 we define the event (7.4) Eε R,n = nP v∈N+ n χ(φv)1{v is (η, R)-bad} P v∈N+ n χ(φv) > ε o . 
We first claim that it suffices to show that (7.5) lim R→∞sup n≥R Px[Eε R,n|N + n ̸= ∅] = 0. SCALING LIMITS FOR THE CRITICAL GFF 34 To see that this indeed implies the lemma, we write (7.6) |N (η,R) n | |N + n | = |N (η,R) n | P v∈N+ n χ(φv) · P v∈N+ n χ(φv) |N + n | . Therefore, for every δ > 0, the probability in (7.3) is bounded from above by (7.7) Px h |N (η,R) n | P v∈N+ n χ(φv) > δ N + n ̸= ∅ i + Px h |N + n | P v∈N+ n χ(φv) 0 for every x ≥h∗, it holds that |N (η,R) n | ≤ c P v∈N+ n χ(φv)1{v is (η, R)-bad}, and thus (7.8) Px h |N +,(η,R) n | P v∈N+ n χ(φv) > δ N + n ̸= ∅ i ≤Px Eδ/c R,n N + n ̸= ∅ , which together with (7.5) implies the claim of the lemma. We now prove (7.5). By first using the Markov inequality, and then Lemma B.2 with k = 1 and Y (v1) = χ(φv1)1{v1 is (η, R)-bad}1{N+ n ̸=∅}/(P w∈N+ n χ(φw)), it holds Px[Eε R,n|N + n ̸= ∅] ≤ε-1Ex hP v∈N+ n χ(φv)1{v is (η, R)-bad} P v∈N+ n χ(φv) N + n ̸= ∅ i = ε-1Ex h1{N+ n ̸=∅} P v∈N+ n χ(φv)1{v is (η, R)-bad} P v∈N+ n χ(φv) i Px[N + n ̸= ∅]-1 = ε-1Qx hχ(x)Px[N + n ̸= ∅]-1 P v∈N+ n χ(φv) 1{σn is (η, R)-bad} i , (7.9) where, as defined in Section 3, σn is the vertex on the spine in the n-th generation. To continue, we first show the following two claims: (i) supn≥R Qx[σn is (η, R)-bad] →0 as R →∞. (ii) Set Zn = χ(x)Px[N+ n ̸=∅]-1 P v∈N+ n χ(φv) . Then for all δ > 0, there exist R′, K > 0 such that Qx[Zn1{Zn>K}] ≤δ for all n ≥R′. Claim (i) will follow from the ergodic behaviour of the field along the spine (σn)n≥0 under Qx. Let sib K}] = Px[1{Zn>K}1{N+ n ̸=∅}] P[N + n ̸= ∅] = Px[Zn > K|N + n ̸= ∅]. ≤Px hP v∈N+ n χ(φv) n K}1{σn is (η, R)-bad}] + Q[Zn1{Zn≤K}1{σn is (η, R)-bad}] ≤Qx[Zn1{Zn>K}] + KQ[σn is (η, R)-bad], (7.12) which together with (i) and (ii) implies (7.5) and concludes the proof. □ To state the second main result of this section, we introduce (enlarging the probability space if necessary) random variables τ 1, . . . , τ k which under Px, conditionally on T, are independent and uniformly distributed on {1, 2, . . . , |T|}. Using these random variables, we define two k × k random matrices (D ̄S n)i,j = n-1 ̄S(vτ i) + ̄S(vτ j) -2 ̄S(vτ i ∧vτ j) (DH n )i,j = n-1 H(vτ i) + H(vτ j) -2H(vτ i ∧vτ j) , (7.13) where, as usual, (v1, v2, . . . ) denotes the depth-first traversal of T, and v ∧w denotes the most recent common ancestor of v and w in T. Proposition 7.2. For every k ≥1 and ε > 0, (7.14) lim n→∞Px ∥C-1 1 DH n -D ̄S n∥> ε N + n ̸= ∅ = 0, where ∥·∥denotes the Frobenius norm of k × k matrices. To prove this proposition, we need two lemmas. The first one estimates from below the size of T conditionally on {N + n ̸= ∅}. Lemma 7.3. For every x ≥h∗, (7.15) lim q→0 sup n≥0 Px |T| ≤qn2|N + n ̸= ∅ = 0. Proof. It will be sufficient to show that for every δ > 0 there exist q and n0 such that for every n ≥n0, (7.16) Px[|T| ≤qn2|N + n ̸= ∅] ≤δ. Indeed, since there are only finitely many n 0 Px[|T| ≤qn2|N + n ̸= ∅] ≤Px |T| ≤qn2, |N + n | ≥ηn|N + n ̸= ∅ + Px |N + n | 0 small such that the second probability on the right-hand side of (7.17) is bounded by δ/2 for all n large enough. It is thus sufficient to show that for every δ > 0 and η > 0 there is n0 so that for all n ≥n0 (7.18) Px |T| ≤qn2, |N + n | ≥ηn N + n ̸= ∅ ≤δ/2. Note first that (7.19) Px |T| ≤qn2, |N + n | ≥ηn N + n ̸= ∅ ≤Px |T| ≤qn2 |N + n | ≥ηn . Given {|N + n | ≥ηn}, let w1, . . . , w⌊ηn⌋be the first ⌊ηn⌋vertices in N + n , and let Tw be the subtree of T rooted at w. Obviously, |T| ≤P⌊ηn⌋ i=1 |Twi|. Under P[ · ||N + n | ≥ηn], the random variables |Twi| are independent. 
Moreover, since w ∈N + n implies φw ≥h∗, by stochastic domination (3.5), for any u > 0, (7.20) Px |Twi| ≥u |N + n | ≥ηn ≥Ph∗ |Co ∩T+| ≥u . Denoting T ′ i, i ≥1, i.i.d. random variables distributed as |Co ∩T+| under Ph∗, it follows that (7.21) Px |T| ≤qn2 |N + n | ≥ηn ≤P h ⌊ηn⌋ X i=1 T ′ i ≤qn2i . By Theorem 3.3, T ′ i are in the domain of attraction of the 1/2-stable random distribution. Therefore, n-2 P i≤⌊ηn⌋T ′ i converges in distribution to a non-negative 1/2-stable random variable (see e.g. [Fel71, Theorem XIII.6.2]). As a consequence, since the distribution of this random variable has no atom at 0, for any δ, η > 0 there exists q small so that for all n large enough the right-hand side of (7.21) is bounded by δ/2. This shows (7.18) and completes the proof. □ The second lemma needed to show Proposition 7.2 studies the probability that a randomly chosen vertex vτ1 is (η, R)-bad. Lemma 7.4. Let V := vτ1 be a uniformly chosen vertex of T. Then, for every x ≥h∗ and η > 0, (7.22) lim R→∞sup n≥R Px V is (η, R)-bad N + n ̸= ∅ = 0. Proof. For arbitrary positive constants q, h1 qn2, H(V ) qn2, H(V ) > h2n N + n ̸= ∅ + Px |T| > qn2, H(V ) ∈[h1n, h2n], V is (η, R)-bad N + n ̸= ∅ . (7.23) We will show that for every δ > 0 there are q, h1, h2 such that for every n ≥0 the first three summands on the right-hand side are smaller than δ/3, and that for every fixed q, h1, h2 the fourth one satisfies (7.24) lim R→∞sup n≥R Px |T| > qn2, H(V ) ∈[h1n, h2n], V is (η, R)-bad N + n ̸= ∅ = 0. SCALING LIMITS FOR THE CRITICAL GFF 37 This will imply the statement of the lemma. Concerning the first summand, the fact that it is possible to choose q > 0 small so that Px[|T| ≤qn2 N + n ̸= ∅] ≤δ/3 for all n ≥0 follows immediately from Lemma 7.3. From now on we keep q fixed in this way. For the second summand, it holds Px |T| > qn2, H(V ) qn2, H(V ) qn2, H(V ) > h2n N + n ̸= ∅ ≤ 1 Px[N + n ̸= ∅]Px[N + h2n ̸= ∅], which can be made arbitrarily small by using Theorem 2.1 and choosing h2 large. Finally, we show (7.24). Recall the notation from (7.2). Since V is a uniformly chosen vertex of T, for arbitrary q, h1, h2, n, R > 0 and ε ∈(0, 1), on the event {|T| ≥qn2} it holds that Px H(V ) ∈[h1n, h2n], V is (η, R)-bad σ(T, φ) = 1 |T| h2n X k=h1n |N +,(η,R) k | = 1 |T| h2n X k=h1n |N +,(η,R) k | |N + k | |N + k | ≤ε + 1 qn2 h2n X k=h1n 1{|N+,(η,R) k |/|N+ k |≥ε}|N + k |. (7.28) As a consequence, the probability in (7.24) satisfies Px |T| > qn2, H(V ) ∈[h1n, h2n], V is (η, R)-bad N + n ̸= ∅ ≤ε + 1 qn2 h2n X k=h1n Ex 1{|N+,(η,R) k |/|N+ k |≥ε}|N + k | N + n ̸= ∅ . (7.29) By the Cauchy-Schwarz inequality, every summand in (7.29) is bounded by (7.30) Ex |N + k |2 N + n ̸= ∅ 1/2Px |N +,(η,R) k |/|N + k | > ε, N + k ̸= ∅ N + n ̸= ∅ 1/2. By Theorem 2.1 and Proposition 3.8, Ex[|N + k |2 | N + n ̸= ∅]1/2 ≤c(x)n. The second term satisfies, (7.31) Px h|N +,(η,R) k | |N + k | > ε, N + k ̸= ∅ N + n ̸= ∅ i ≤Px h|N +,(η,R) k | |N + k | > ε N + k ̸= ∅ iPx[N + k ̸= ∅] Px[N + n ̸= ∅]. SCALING LIMITS FOR THE CRITICAL GFF 38 By Theorem 2.1, for k ≥h1n, the fraction on the right-hand side can be bounded by a constant c(x). Noting also that |N +,(η,R) k | = 0 for R > k, we obtain that (7.29) is bounded by (7.32) ε + 1 qn2c(x)n2(h2 -h1) sup k≥R Px h|N +,(η,R) k | |N + k | > ε N + k ̸= ∅ i1/2 . By Proposition 7.1, the supremum here converges to 0 as R →∞. Since ε > 0 is arbitrary, this shows (7.24) and completes the proof. □ We can now prove Proposition 7.2. 
The proof resembles the one of Proposition 6.21 in [Pow19], with modifications required to account for the unbounded domain in our setting. Proof of Proposition 7.2. It is enough to show the statement for k = 2. The general case where k > 2 is then obtained by a union bound. In the case k = 2, since the matrices are symmetric and the diagonal entries are zero, it is sufficient to control one off-diagonal entry. To this end, for ε > 0 let Gn be the event (7.33) Gn := |C-1 1 (DH n )1,2 -(D ̄S n)1,2| > ε . In order to bound the probability of Gn, we will define events An(δ) (see (7.38)) satisfying P[An(δ)|N + n ̸= ∅] ≤δ and show that for n large enough and the right choice of δn →0, Gn is a subset of An(δn). This will imply the statement of the lemma. In order to define the events An(δ) for δ > 0, we first fix M = M(δ) ≥1 so that (7.34) lim n→∞Px[N + Mn ̸= ∅|N + n ̸= ∅] ≤δ/3, which is possible by Theorem 2.1. We then set (7.35) η = η(δ) := ε/(4M), and, using Lemma 7.4 and the independence of τ 1 and τ 2, we fix R = R(δ) so that (7.36) lim n→∞Px vτ 1 or vτ 2 is (η, R)-bad N + n ̸= ∅ ≤δ/3. Next, we fix K = K(δ) such that (7.37) lim n→∞Px[sup{χ(φu) : u ∈∪R k=0N + k } ≥K|N + n ̸= ∅] ≤δ/3. This is possible since the event in (7.37) is FR-measurable, and thus, by Proposition 5.2, the left-hand side of (7.37) equals Qx[sup{χ(φu) : u ∈∪R k=0N + k } ≥K] (with Qx as in Section 3.2). It is then straightforward to choose K large enough so that (7.37) is satisfied. Finally, we define the events An by An = An(δ) = {N + Mn ̸= ∅} ∪{vτ 1 or vτ 2 is (η, R)-bad} ∪{sup{χ(φu) : u ∈∪R k=0N + k } ≥K}. (7.38) By the choice of M, η, R, and K, it holds that limn→∞Px[An(δ)|N + n ̸= ∅] ≤δ. Therefore, for sequences δn converging to zero sufficiently slowly, it also holds that (7.39) lim n→∞Px[An(δn)|N + n ̸= ∅] = 0. Next, we show that Gn ⊆An(δn) for a well-chosen δn and large n. Let B(δ) = {H(vτ 1 ∧ vτ 2) > R}. We will show that for large enough n and a suitable choice of δn, (7.40) Gn ∩Ac n(δn) ∩B(δn) = ∅ and Gn ∩Ac n(δn) ∩Bc(δn) = ∅. SCALING LIMITS FOR THE CRITICAL GFF 39 From this, it easily follows that Gn ∩Ac n(δn) = ∅, and thus, Gn ⊆An(δn). Then, if δn →0 sufficiently slowly, by (7.39), (7.41) lim n→∞Px[Gn|N + n ̸= ∅] ≤lim n→∞Px[An(δn)|N + n ̸= ∅] = 0, which is sufficient to prove the statement of the proposition. It remains to show the existence of δn such that (7.39) and (7.40) hold. By definition, on Ac n it holds (7.42) H(vτ i) ≤Mn and ̄S(vτ i) H(vτ i) -C-1 1 ≤η for i = 1, 2. Thus, on Ac n ∩B it holds that R 0 there exists a relatively compact subset K ⊂X (with respect to the Gromov-Hausdorff-Prokhorov metric) such that (8.2) inf n≥0 P n x [(Tx,n, dx,n, μx,n) ∈K] ≥1 -ε. To do this, we define the relatively compact set (where the relative compactness follows from Theorem 2.6 in [ADH13]) (8.3) KR,M = (X, r, μ) ∈X μ(X) = 1, diam(X) ≤2R, for all k ≥1, X can be covered by fewer than 24kM balls of radius 2-k and show that for ε > 0 there are R(ε) and M(ε) such that for K = KR,M, (8.2) is satisfied. Define the events Ax,n(R) = {diam(Tx,n) ≤2R} and (8.4) Bδ x,n(M) = {Tx,n can be covered with 0 and n ≥0. We will first find a R(ε) such that the first equation of (8.6) is satisfied. Recall that the law of (Tx,n, dx,n) under P n x is that of (T, d/n) under Px[ · |N + n ̸= ∅]. Thus, P n x [diam(Tx,n) ≥ 2R] ≤Px[|N + Rn| > 0|N + n ̸= ∅], which by Theorem 2.1, is smaller than ε/2 for R(ε) large enough. For such R(ε) the first part of (8.6) is valid. Next, we will find a M(ε) such that also the second equation of (8.6) is met. 
We will distinguish thereby between the cases where nδ Mbn,δ 2Rnδ4 N + n ̸= ∅ i ≤δε/2. By conditioning on Fjbn,δ (recalling the definition of Fn from (3.20)) and using Theorem 2.1 and Proposition 3.8, Ex |Vj| = Ex Ex |Vj| Fjbn,δ = Ex h X v∈N+ jbn,δ Pφv[N + bn,δ ̸= ∅] i (8.11) = Ex h X v∈N+ jbn,δ Cχ(φv)b-1 n,δ(1 + εbn,δ(φv)) i ≤q(x)b-1 n,δ (8.12) for some function q(x) independent of n and δ. Therefore, by conditioning on {N + n ̸= ∅} and using the Markov inequality and Theorem 2.1, (8.13) Px h |Vj| > Mbn,δ 2Rnδ4 N + n ̸= ∅ i ≤2Rnδ4 Mbn,δ Ex[|Vj|] Px[N + n ̸= ∅] ≤q′(x)2Rn2δ4 Mb2 n,δ , where q′ is some other function independent of n and δ. This means that by a union bound, the left-hand side of (8.10) is bounded from above by (8.14) q′(x)2Rn2δ4 Mb2 n,δ (2Rn/bn,δ) = q′(x)4R2n3δ4 Mb3 n,δ = q′(x)4R2δ M nδ ⌊nδ⌋ 3 . Using that for x ≥1, 1 ≤x/⌊x⌋≤2, we see that by choosing M ≥64R2q′(x)/ε, the right-hand side of (8.14) is bounded by δε/2. For such choices of M, (8.10) is satisfied. Finally, first choosing R as described above and then M as the maximum of the two M-values obtained for the cases nδ 0, Px sup m ̄Sm ≥C-1 1 n N + n ̸= ∅ ≥Px sup m ̄Sm ≥C-1 1 n |N + ⌊n(1+δ)⌋| > 0 Px[|N + ⌊n(1+δ)⌋| > 0] Px[N + n ̸= ∅] . (8.22) By Proposition 7.1, the first term in the product on the right-hand side converges to 1 as n →∞, and the second term converges to (1 + δ)-1 by Theorem 2.1. This shows the first convergence of (8.21). By Theorem 2.1, Px[N + n ̸= ∅] ∼χ(x)C1n-1 as n →∞. Therefore, we are left to prove that (8.23) lim n→∞ Px[supm ̄Sm ≥C-1 1 n] χ(x)C1n-1 = 1 to conclude the second limit of (8.21). We will do this in a similar way as, e.g., in [LG05, Section 1.4, p. 263]. Consider the depth-first traversal (v1, v2, . . . ) of the sequence of trees (T 1, T 2, . . . ). We set S′ n = P w∈Y (vn) χ(φw) and Si = supn S′ n1vn∈T i for i ≥1 (so that supn ̄Sn = S1). Recall that by Proposition 6.8, (8.24) lim n→∞ S′ ⌊nt⌋ √n , Λ⌊nt⌋ √n t≥0 = σ|Bt|, σ χ(x)L0 t t≥0, where the limit is in Px-distribution with respect to the Skorokhod topology. Defining γr = inf{t ≥0 : L0 t > r} and τn = inf{t ≥0 : n-1Λ⌊n2t⌋> 1} and using Brownian scaling, this gives the joint convergence (8.25) lim n→∞ 1 nS′ ⌊n2t⌋ t≥0, τn = σ|Bt| t≥0, γχ(x)/σ , SCALING LIMITS FOR THE CRITICAL GFF 43 and therefore also (8.26) lim n→∞ 1 nS′ ⌊n2(t∧τn)⌋ t≥0 = σ|Bt∧γχ(x)/σ| t≥0. From this we deduce that for every y > 0, as n →∞, (8.27) lim n→∞Px h sup 1≤i≤n Si > ny i = Px h sup t≤γχ(x)/σ σ|Bt| > y i = 1 -exp -χ(x) y , where the last equality is a consequence of excursion theory for Brownian motion (see e.g. Chapter XII in [RY99]). By the independence of the trees T i (and thus the variables Si), we also have that (8.28) Px h sup 1≤i≤n Si > ny i = 1 - 1 -Px S1 > ny n = 1 - 1 -Px h sup m ̄Sm > ny i n . Combining (8.27) and (8.28) then gives (8.29) Px h sup m ̄Sm > ny i ∼χ(x) y n-1 as n →∞, which by setting y = C-1 1 concludes (8.23) and finishes the proof. □ We now present the proof of Theorem 2.3, which is identical to that of Theorem 1.1 in [Pow19]. Proof of Theorem 2.3. Since convergence in the Gromov-Hausdorff-Prokhorov metric implies convergence in the Gromov-Prokhorov metric, Lemma 8.2 characterizes subsequential limits with respect to the Gromov-Hausdorff-Prokhorov topology. Hence, we obtain the convergence in distribution (8.30) (Tx,n, dx,n, μx,n) →(Tˆe, dˆe, μˆe) as n →∞ with respect to the Gromov-Hausdorff-Prokhorov metric. 
Now, since Gromov-HausdorffProkhorov convergence further implies convergence in the Gromov-Hausdorff sense, claim (8.1) follows, completing the proof. □ Appendix A. Properties of χ We prove here that χ is Lipschitz continuous on [h∗, ∞), as stated in (3.19). The proof uses similar ideas as the proof of Proposition 3.1 of [AˇC20]. Proof of (3.19). We will show that the derivative χ′ is bounded on [h∗, ∞). Recall that for x ≥h∗it holds χ(x) = d R [h∗,∞) χ(z)ρY (z -x/d) dz, where ρY is the density of the Gaussian random variable Y . Differentiating this expression, using an integration by parts and the fact that χ′(z) = 0 for z < h∗, we obtain that for x ≥h∗ χ′(x) = - Z ∞ h∗χ(z)ρ′ Y z -x d dz = Z [h∗,∞) χ′(z)ρY z -x d dz + χ(h∗)ρY h∗-x d =: EY h χ′ Y + x d i + e(x). (A.1) Defining e(x) = 0 for x < h∗, using that χ is increasing and thus χ′ ≥0, this equality can be extended to inequality (A.2) χ′(x) ≤EY h χ′ Y + x d i + e(x) for all x ∈R. SCALING LIMITS FOR THE CRITICAL GFF 44 Similarly as in the proof of Proposition 3.1 in [AˇC20], we obtain an upper bound on χ′ by iterating (A.2) an appropriate amount of times. For this, let Y1, . . . , Yk be independent Gaussian random variables having the same distribution as Y , and define Zi = Y1/di-1 + Y2/di-2 + · · · + Yi. Note that Zi is a centred Gaussian random variable whose variance, denoted σ2 i , is bounded uniformly in i. Then, by applying (A.2) k-times, χ′(x) ≤EY1 h EY2 h χ′ Y2 + Y1 + x/d d + e(Y1 + x/d) + e(x) ii ≤· · · ≤E h χ′ x dk + Y1 dk-1 + Y2 dk-2 + · · · + Yk i + E h k-1 X i=0 e x di + Zi i ≤E h χ′ x dk + Zk i + k-1 X i=0 E h e x di + Zi i . (A.3) We now choose k = k(x) = ⌊logd(x)⌋. Then the boundedness of the first summand can be proved exactly in the same way as the boundedness of E[χ(x/dk + Zk)] in the proof of Proposition 3.1 in [AˇC20] (note that (d -1) in [AˇC20] corresponds to d in our setting), one only needs to verify that χ′ ∈L2(ν). This can be easily proved using (A.1) and the facts ρ′ Y (x) = -cxρY (x), L[χ] = χ and (3.18) which imply |χ′(x)| ≤cL[χ2](x) + cx2 for x ≥h∗. From this χ′ ∈L2(ν) follows from Proposition 3.2(b). To bound the second summand on the right hand side of (A.3), we first observe that by the choice of k(x) it holds that x0 := x/dk(x) ∈[1, d]. With this notation, by rearranging the sum, (A.4) k(x)-1 X i=0 E[e(x/di + Zi)] = k(x)-1 X i=0 E[e(x0di + Zk(x)-i)]. Moreover, it is easy to see that for some constants c, c′ (A.5) 0 ≤e(x) := 1x≥h∗χ(h∗)ρY h∗-x d ≤c′1x≥h∗e-cx. Therefore, (A.6) k(x)-1 X i=0 E[e(x0di + Zk(x)-i)] ≤c′ ∞ X i=0 E e-c(x0di+Zk(x)-i) ≤c′ ∞ X i=0 e-cx0diec2σ2 k(x)-i/2, which, since σ2 i are bounded and x0 ∈[1, d], is clearly bounded uniformly in x ≥h∗. □ Appendix B. Proof of the many-to-few formulas We give here a proof of Proposition 3.5. Throughout the proof we use the notation from Section 3.2. We start by introducing a useful standard martingale related to φ when viewed as branching process. Lemma B.1. For x ∈R, n ∈N, let (B.1) ζ(x, n) := dnχ(x) and ζn := ζ(ξ1 n, n). Then, for every k ∈N, (ζn)n∈N is a P k x -martingale with respect to Fk n. Proof. By definition of ζn, for n ≥1, (B.2) d-(n-1)Ek x[ζn | Fk n-1] = dEk x[χ(φσ1n) | Fk n-1], SCALING LIMITS FOR THE CRITICAL GFF 45 where σ1 n is the vertex carrying the first spine mark at level n. Conditionally on Fk n-1, σ1 n is uniformly distributed on desc(σ1 n-1) and independent of φ. Therefore, this equals (B.3) X v∈desc(σ1 n-1) Ek x[χ(φv) | Fk n-1] = L[χ](φσ1 n-1) = χ(φσ1 n-1) = χ(ξ1 n-1). 
where in for the first equality we used (3.7) and the fact that χ(x) = 0 on (-∞, h∗), and where the second equality follows from (3.10). The martingale property of ζn then follows directly from this computation. □ Due to Lemma B.1, Lemma 8 of [HR17] can directly be applied to our process. We restate it here for reader's convenience, with very minor adaptations coming from the fact that in our process every node has always d descendants (some of them might be in the cemetery state). Lemma B.2 (Lemma 8 in [HR17]). For any k ≥1, let Y be a Fk n-measurable random variable which can be written as (B.4) Y = X v1,...,vk∈S+ n Y (v1, . . . , vk)1{σ1n=v1,...,σkn=vk}, where, for every v1, . . . , vk ∈S+ n , Y (v1, . . . , vk) is a Fn-measurable random variable. Then (B.5) Ex h X v1,...,vk∈N+ n Y (v1, . . . , vk) i = Qk x h Y Y v∈skel(n)\{o} ζ(φp(v), |v| -1) ζ(φv, |v|) dlp(v) i . We are now ready to give the proof of Proposition 3.5. Proof of Proposition 3.5. Statement (3.26) follows directly from Lemma B.2 with k = 1 and Y (v1) = f(φv1). Indeed, since k = 1, there is only one mark on every level and thus Y = f(ξ1 n), skel(n) \ {o} = {σ1 1, . . . , σ1 n}, and lp(v) = 1 for all v ∈skel(n) \ {o}. Therefore, by (B.5) Ex h X v∈N+ n f(φv) i = Qx f(ξn) Y v∈{σ1,...,σn} ζ(φp(v), |v| -1) ζ(φv, |v|) d = Qx f(ξn) n Y i=1 χ(ξi-1)di-1 χ(ξi)di d = Qx f(ξ1 n) χ(x) χ(ξn) , (B.6) as claimed in (3.26). Similarly, statement (3.27) is a consequence of Lemma B.2 with k = 2. We set Y (v1, v2) = f(φv1)g(φv2). Then Y = f(ξ1 n)g(ξ2 n) and by (B.5), (B.7) Ex h X v,w∈N+ n f(φv)g(φw) i = Q2 x f(ξ1 n)g(ξ2 n) Y v∈skel(n)\{o} ζ(φp(v), |v| -1) ζ(φv, |v|) dlp(v) . To consider the possible structures of skel(n) \ {o}, set s = max{k : σ1 k = σ2 k} to be the last time where the two spines agree. Note that due to the dynamics of marks under Q2 x, (B.8) Q2 x[s = k] = ( (d -1)d-(k+1), if k ∈{0, . . . , n -1}, d-n, if k = n, SCALING LIMITS FOR THE CRITICAL GFF 46 and, by a simple computation, Y v∈skel(n)\{o} ζ(φp(v), |v| -1) ζ(φv, |v|) dlp(v) = Y v∈skel(n)\{o} χ(φp(v)) χ(φv) 1 d dlp(v) = χ(ξ1 s)χ(x) χ(ξ1 n)χ(ξ2 n)ds. (B.9) Therefore, (B.7) can be written as Ex X v∈N+ n f(φv) X v∈N+ n g(φv) = n X k=0 dkQ2 x[s = k]Q2 x f(ξ1 n)g(ξ2 n) χ(ξ1 k)χ(x) χ(ξ1 n)χ(ξ2 n) s = k = d -1 d n-1 X k=0 Q2 x f(ξ1 n) χ(ξ1 n) g(ξ2 n) χ(ξ2 n)χ(ξ1 s)χ(x) s = k + Q2 x f(ξ1 n)g(ξ1 n) χ(x) χ(ξ1 n) s = n . (B.10) By construction, under Q2 x, conditional on s = k, for times i = 1, . . . , k the processes ξ1 i and ξ2 i are Markov chains which follow the same trajectory and have the same dynamics as ξi under Q1 x. Further, for later times i = k + 1, . . . , n, they are independent Markov chains distributed according to Qξ1 k. Therefore, (B.11) Q2 x hf(ξ1 n) χ(ξ1 n) g(ξ2 n) χ(ξ2 n)χ(ξ1 s)χ(x) s = k i = χ(x)Qx χ(ξk)Q2 ξ1 k hf(ξ1 n-k) χ(ξ1 n-k) i Q2 ξ1 k h g(ξ2 n-k) χ(ξ2 n-k) i , and similarly (B.12) Q2 x f(ξ1 n)g(ξ1 n) χ(x) χ(ξ1 n) s = n = χ(x)Qx hf(ξn)g(ξn) χ(ξn) i . Combining (B.10)-(B.12) directly implies (3.27). □ References [AˇC20] Angelo Ab ̈acherli and Jiˇr ́ı ˇCern ́y, Level-set percolation of the Gaussian free field on regular graphs I: regular trees, Electron. J. Probab. 25 (2020), Paper No. 65, 24. MR4115734 [ADH13] Romain Abraham, Jean-Fran ̧cois Delmas, and Patrick Hoscheit, A note on the GromovHausdorff-Prokhorov distance between (locally) compact metric measure spaces, Electron. J. Probab. 18 (2013), no. 14, 21. MR3035742 [Ald91] David Aldous, The continuum random tree. I, Ann. Probab. 19 (1991), no. 1, 1-28. 
MR1085326
[BDIM23] Dariusz Buraczewski, Congzao Dong, Alexander Iksanov, and Alexander Marynych, Critical branching processes in a sparse random environment, Mod. Stoch. Theory Appl. 10 (2023), no. 4, 397-411. MR4655407
[BFRS24] Florin Boenkost, Félix Foutel-Rodier, and Emmanuel Schertzer, The genealogy of nearly critical branching processes in varying environment, Preprint, 2024.
[BLM87] Jean Bricmont, Joel L. Lebowitz, and Christian Maes, Percolation in strongly correlated systems: the massless Gaussian field, J. Statist. Phys. 48 (1987), no. 5-6, 1249-1268. MR914444
[CD24] Zhenhao Cai and Jian Ding, One-arm probabilities for metric graph Gaussian free fields below and at the critical dimension, Preprint, 2024.
[CD25] Zhenhao Cai and Jian Ding, One-arm exponent of critical level-set for metric graph Gaussian free field in high dimensions, Probability Theory and Related Fields 191 (2025), no. 3, 1035-1120. MR4898098
[CKKM24] Guillaume Conchon-Kerjan, Daniel Kious, and Cécile Mailler, Scaling limit of critical random trees in random environment, Electron. J. Probab. 29 (2024), Paper No. 112, 53. MR4779872
[ČL25] Jiří Černý and Ramon Locher, Critical and near-critical level-set percolation of the Gaussian free field on regular trees, 2025, pp. 746-767. MR4863059
[CN20] Alberto Chiarini and Maximilian Nitzschner, Entropic repulsion for the Gaussian free field conditioned on disconnection by level-sets, Probab. Theory Related Fields 177 (2020), no. 1-2, 525-575. MR4095021
[CR90] B. Chauvin and A. Rouault, Supercritical branching Brownian motion and K-P-P equation in the critical speed-area, Math. Nachr. 149 (1990), 41-59. MR1124793
[CTJP24] Natalia Cardona-Tobón, Arturo Jaramillo, and Sandra Palau, Rates on Yaglom's limit for Galton-Watson processes in a varying environment, ALEA Lat. Am. J. Probab. Math. Stat. 21 (2024), no. 1, 1-23. MR4703767
[DCGRS23] Hugo Duminil-Copin, Subhajit Goswami, Pierre-François Rodriguez, and Franco Severo, Equality of critical parameters for percolation of Gaussian free field level sets, Duke Math. J. 172 (2023), no. 5, 839-913. MR4568695
[DLG02] Thomas Duquesne and Jean-François Le Gall, Random trees, Lévy processes and spatial branching processes, Astérisque, no. 281, Société mathématique de France, 2002. MR1954248
[DPR18] Alexander Drewitz, Alexis Prévost, and Pierre-François Rodriguez, The sign clusters of the massless Gaussian free field percolate on Z^d, d ≥ 3 (and more), Comm. Math. Phys. 362 (2018), no. 2, 513-546. MR3843421
[DPR25] Alexander Drewitz, Alexis Prévost, and Pierre-François Rodriguez, Critical one-arm probability for the metric Gaussian free field in low dimensions, Probability Theory and Related Fields (2025).
[dR17] Loïc de Raphélis, Scaling limit of multitype Galton-Watson trees with infinitely many types, Ann. Inst. Henri Poincaré Probab. Stat. 53 (2017), no. 1, 200-225. MR3606739
[DRS14] Alexander Drewitz, Balázs Ráth, and Artëm Sapozhnikov, On chemical distances and shape theorems in percolation models with long-range correlations, J. Math. Phys. 55 (2014), no. 8, 083307, 30. MR3390739
[Dur19] Rick Durrett, Probability: theory and examples, fifth ed., Cambridge Series in Statistical and Probabilistic Mathematics, vol. 49, Cambridge University Press, Cambridge, 2019. MR3930614
[Fel71] William Feller, An introduction to probability theory and its applications. Vol. II, Second edition, John Wiley & Sons Inc., New York, 1971. MR0270403
[GLLP22] Ion Grama, Ronan Lauvergnat, and Émile Le Page, Limit theorems for critical branching processes in a finite-state-space Markovian environment, Adv. in Appl. Probab. 54 (2022), no. 1, 111-140. MR4397862
[GPW09] Andreas Greven, Peter Pfaffelhuber, and Anita Winter, Convergence in distribution of random metric measure spaces (Λ-coalescent measure trees), Probab. Theory Related Fields 145 (2009), no. 1-2, 285-322. MR2520129
[Gri99] Geoffrey Grimmett, Percolation, Grundlehren der Mathematischen Wissenschaften, vol. 321, Springer-Verlag, Berlin, 1999. MR1707339
[GRS22] Subhajit Goswami, Pierre-François Rodriguez, and Franco Severo, On the radius of Gaussian free field excursion clusters, Ann. Probab. 50 (2022), no. 5, 1675-1724. MR4474499
[HH07] J. W. Harris and S. C. Harris, Survival probabilities for branching Brownian motion with absorption, Electron. Comm. Probab. 12 (2007), 81-92. MR2300218
[HHKW22] Simon C. Harris, Emma Horton, Andreas E. Kyprianou, and Minmin Wang, Yaglom limit for critical nonlocal branching Markov processes, Ann. Probab. 50 (2022), no. 6, 2373-2408. MR4499840
[HR17] Simon C. Harris and Matthew I. Roberts, The many-to-few lemma and multiple spines, Ann. Inst. Henri Poincaré Probab. Stat. 53 (2017), no. 1, 226-242. MR3606740
[Kol38] Andrey Nikolaevich Kolmogorov, Zur Lösung einer biologischen Aufgabe, Comm. Math. Mech. Chebyshev Univ. Tomsk 2 (1938), no. 1, 1-12.
[LG05] Jean-François Le Gall, Random trees and applications, Probab. Surv. 2 (2005), 245-311. MR2203728
[LS86] Joel L. Lebowitz and H. Saleur, Percolation in strongly correlated systems, Phys. A 138 (1986), no. 1-2, 194-205. MR865243
[Mie08] Grégory Miermont, Invariance principles for spatial multitype Galton-Watson trees, Ann. Inst. Henri Poincaré Probab. Stat. 44 (2008), no. 6, 1128-1161. MR2469338
[Mod71] Charles J. Mode, Multitype branching processes. Theory and applications, Modern Analytic and Computational Methods in Science and Mathematics, No. 34, American Elsevier Publishing Co., Inc., New York, 1971. MR0279901
[MS83] S. A. Molchanov and A. K. Stepanov, Percolation in random fields. I, Teoret. Mat. Fiz. 55 (1983), no. 2, 246-256. MR734878
[MS22] Pascal Maillard and Jason Schweinsberg, Yaglom-type limit theorems for branching Brownian motion with absorption, Ann. H. Lebesgue 5 (2022), 921-985. MR4526243
[Pow19] Ellen Powell, An invariance principle for branching diffusions in bounded domains, Probab. Theory Related Fields 173 (2019), no. 3-4, 999-1062. MR3936150
[PR15] Serguei Popov and Balázs Ráth, On decoupling inequalities and percolation of excursion sets of the Gaussian free field, J. Stat. Phys. 159 (2015), no. 2, 312-320. MR3325312
[PS22] Christoforos Panagiotis and Franco Severo, Analyticity of Gaussian free field percolation observables, Comm. Math. Phys. 396 (2022), no. 1, 187-223. MR4499015
[RS13] Pierre-François Rodriguez and Alain-Sol Sznitman, Phase transition and level-set percolation for the Gaussian free field, Comm. Math. Phys. 320 (2013), no. 2, 571-601. MR3053773
[RY99] Daniel Revuz and Marc Yor, Continuous martingales and Brownian motion, third ed., Grundlehren der mathematischen Wissenschaften, vol. 293, Springer-Verlag, Berlin, 1999. MR1725357
[Szn16] Alain-Sol Sznitman, Coupling and an application to level-set percolation of the Gaussian free field, Electron. J. Probab. 21 (2016), 1-26. MR3492939
[Szn19] Alain-Sol Sznitman, On macroscopic holes in some supercritical strongly dependent percolation models, Ann. Probab. 47 (2019), no. 4, 2459-2493. MR3980925
[Yag47] A. M. Yaglom, Certain limit theorems of the theory of branching random processes, Doklady Akad. Nauk SSSR (N.S.) 56 (1947), 795-798. MR22045
arXiv:2510.14785v1 [math.OC] 16 Oct 2025

Generalized Reduced Jacobian Method

M. El Maghri*(1) and Y. Elboulqe†(2)

(1) Department of Mathematics and Computer, Faculty of Sciences Aïn Chock, Hassan II University, Casablanca, BP. 5366, Morocco
(2) Department of Mathematics, Faculty of Sciences Semlalia, Cadi Ayyad University, Marrakech, BP. 2390, Morocco

* Corresponding author. Email: elmaghri@yahoo.com
† Email: elboulqeyoussef@gmail.com

October 17, 2025

Abstract

In a recent work, we presented the reduced Jacobian method (RJM) as an extension of Wolfe's reduced gradient method to multicriteria (multiobjective) optimization problems with linear constraints. That approach shows that a reduction technique applied to the Jacobian matrix of the objective avoids scalarization. In the present work, we generalize RJM to handle nonlinear constraints as well. Specifically, we propose a generalized reduced Jacobian (GRJ) method that extends Abadie-Carpentier's approach for single-objective programs. To this end, we adopt a global reduction strategy based on the fundamental theorem of implicit functions. In this perspective, only a reduced descent direction common to all the criteria is computed, by solving a simple convex program. After establishing an Armijo-type line search condition that ensures feasibility, the resulting algorithm is shown to be globally convergent, under mild assumptions, to a Pareto critical (KKT-stationary) point. Finally, experimental results are presented, including comparisons with other deterministic and evolutionary approaches.

Keywords Multicriteria optimization · Pareto optima · Nonlinear programming · Pareto KKT-stationarity · Reduced Jacobian method · Nonlinear constraints

Mathematics Subject Classification (2020) 90C29 · 90C30 · 90C52

1 Introduction

In many areas, such as economics, medicine, design and transportation, we are faced with multicriteria (or multiobjective) optimization problems (MOP), in which several functions must be optimized at the same time. Because of the conflicts that may arise between these functions, a single point will almost never minimize or maximize all of them at once, so the Pareto optimality concept has to be considered. A point is said to be a Pareto optimum, or efficient solution, if it cannot be improved with respect to one criterion without degrading at least one of the others. Under this principle, Pareto solutions are incomparable and no single point can represent all the others. The problem then consists in determining the set of all Pareto solutions, or the set of their objective values, called the Pareto front, or more generally the set of non-dominated solutions. In recent years, this task has become more tractable with the advent of a new generation of algorithms grounded in the principle of multiobjective search descent directions, which aim to identify a direction that decreases all the criteria simultaneously (e.g. [5, 13, 16, 18, 20, 21, 23, 29, 31, 32, 33, 34]). In most cases, the resulting algorithms are direct extensions of well-known nonlinear descent methods, which tackle the given problem without resorting to intermediate transformations or introducing artificial parameters that may be sensitive to the original problem.
From a theoretical perspective, convergence toward Pareto-KKT stationarity is well established, and numerically these approaches have also proven very promising, both in terms of convergence to the true Pareto front and of the diversity of the non-dominated approximate solutions. For instance, in [16] we proposed a direct extension of Wolfe's reduced gradient method (RGM) [42] (see also [15] for a new RGM variant) to the so-called reduced Jacobian method (RJM) for solving linearly constrained MOPs. We also refer the reader to [18], where we introduced an RJM variant that possesses the full convergence property.

In the present study, we build upon our previous work by focusing on the so-called generalized reduced gradient (GRG) method, originally introduced by Abadie and Carpentier [1], which extends Wolfe's reduction strategy to problems with nonlinear constraints. Accordingly, we propose a novel approach, which we term the generalized reduced Jacobian (GRJ) method, to effectively handle nonlinear constraints in multicriteria optimization. The GRG method is widely recognized as one of the fundamental and practical approaches to nonlinear mathematical programming. Its convergence properties were rigorously established in subsequent works by Smeers [37], Mokhtar-Kharroubi [30], and Schittkowski [36]. Our investigations cover both the theoretical and algorithmic aspects of this method. Abadie-Carpentier's reduction strategy, based on the well-known implicit function theorem, is adopted in this work, thereby enabling a complete characterization of Pareto-KKT stationarity through multiobjective reduced descent directions. It is subsequently proven that such directions guarantee simultaneous descent of all criteria and ensure the existence of Armijo-like steplengths, while preserving feasibility throughout the process. From a computational point of view, a simple direction-finding subproblem is introduced to iteratively compute the reduced directions. Under mild assumptions, the resulting GRJ algorithm is proven to be globally convergent to a Pareto KKT-stationary point in the sense of accumulation points. Finally, numerical experiments are reported, illustrating the effectiveness of the GRJ method compared with other methods, including a Zoutendijk-like method [31], an SQP-type approach [22], and the well-known evolutionary method NSGA-II [10].

2 Efficiency and multiobjective descent

We are dealing with the following nonlinearly constrained multicriteria optimization problem:
\[
\text{(MOP)} \qquad \operatorname{Min}\ F(x) \quad \text{subject to} \quad G(x) = 0, \ \ a \le x \le b,
\]
where
\[
(F, G) : x \in \mathbb{R}^n \mapsto \big(F(x), G(x)\big) = \Big(\big(f_1(x), \ldots, f_r(x)\big), \big(g_1(x), \ldots, g_m(x)\big)\Big) \in \mathbb{R}^r \times \mathbb{R}^m
\]
are two vector-valued functions, continuously differentiable on the feasible set denoted by $S$, and $a = (a_1, \ldots, a_n)$, $b = (b_1, \ldots, b_n) \in \mathbb{R}^n$. For $y = (y_1, \ldots, y_p) \in \mathbb{R}^p$ and $y' = (y'_1, \ldots, y'_p) \in \mathbb{R}^p$, we write $y \le y'$ (resp. $y < y'$) iff $y_i \le y'_i$ (resp. $y_i < y'_i$) for all $i$, while $y \lneq y'$ means $y \le y'$ and $y \ne y'$. The usual norm and inner product in $\mathbb{R}^p$ are denoted respectively by $\|y\| = \sqrt{y \cdot y}$ and $y \cdot y' = y^T y'$, where $y^T$ stands for the transpose of $y$.
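As a small illustration (ours, not the authors'; the function names are hypothetical), the three componentwise orderings just introduced can be coded directly:

import numpy as np

def leq(y, yp):
    # y <= y': componentwise less-than-or-equal
    return bool(np.all(y <= yp))

def lt(y, yp):
    # y < y': strict componentwise inequality
    return bool(np.all(y < yp))

def lneq(y, yp):
    # y ⪇ y': y <= y' with y != y'
    return leq(y, yp) and not np.array_equal(y, yp)

y, yp = np.array([1.0, 2.0]), np.array([1.0, 3.0])
assert leq(y, yp) and lneq(y, yp) and not lt(y, yp)

These are exactly the relations used to define weak efficiency (via $<$) and efficiency (via $\lneq$) in Definition 2.1 below.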
The (MOP) consists in minimizing the vector function $F$ over $S$ in the following Pareto senses:

Definition 2.1 A point $x^* \in S$ is said to be
(a) weakly efficient for (MOP), if there is no $x \in S$ with $F(x) < F(x^*)$;
(b) efficient (a Pareto optimum) for (MOP), if there is no $x \in S$ with $F(x) \lneq F(x^*)$;
(c) properly efficient (in Henig's sense) for (MOP), if there exists a pointed(1) convex cone $C \subsetneq \mathbb{R}^r$ whose topological interior satisfies $\operatorname{int} C \supset \mathbb{R}^r_+ \setminus \{0\}$, and there is no $x \in S$ with $F(x) \lneq_C F(x^*)$.(2)

(1) A pointed cone $C$ means that $C \cap (-C) = \{0\}$.
(2) The relation $y \lneq_C y'$ means that $y' - y \in C \setminus \{0\}$.

The sets of weakly efficient points, efficient points and properly efficient points will be denoted respectively by $E_w$, $E_e$ and $E_p$. To unify the notation, we briefly denote by $E_\sigma$ the set of all $\sigma$-efficient solutions, depending on the choice of $\sigma \in \{w, e, p\}$. Recall the following relationships:
\[
E_p \subseteq E_e \subseteq E_w. \tag{1}
\]
The concept of local $\sigma$-efficiency is defined by replacing $S$ by $S \cap N(x^*)$ in Definition 2.1, where $N(x^*)$ is some neighbourhood of $x^*$. The set of all locally $\sigma$-efficient points will be denoted by $E^{\mathrm{loc}}_\sigma$. The image set $F(E_\sigma)$ (resp. $F(E^{\mathrm{loc}}_\sigma)$) is called the $\sigma$-Pareto front (resp. local $\sigma$-Pareto front). Taking into account the inclusions (1), the determination of the weakly efficient set will include all the other efficient solutions.

Let us denote by $JF(x) = \big(\partial f_j(x)/\partial x_i\big)_{j,i}$ the Jacobian matrix of $F$ at $x$; the $j$th row of $JF(x)$ is of course the gradient $\nabla f_j(x)$ of the $j$th objective $f_j$ at $x$.

Definition 2.2 Let $S$ be a convex set. The vector mapping $F : S \subset \mathbb{R}^n \to \mathbb{R}^r$ is said to be
(a) convex on $S$, if $\forall x, x' \in S$, $\forall \alpha \in [0,1]$, $F(\alpha x + (1-\alpha)x') \le \alpha F(x) + (1-\alpha)F(x')$;
(b) strictly convex on $S$, if $\forall x, x' \in S$ with $x \ne x'$, $\forall \alpha \in\, ]0,1[$, $F(\alpha x + (1-\alpha)x') < \alpha F(x) + (1-\alpha)F(x')$;
(c) pseudoconvex on $S$, if it is pseudoconvex at any $x \in S$, i.e., $\forall x' \in S$, $F(x') < F(x) \Longrightarrow JF(x)(x'-x) < 0$;
(d) strictly pseudoconvex on $S$, if it is strictly pseudoconvex at any $x \in S$, i.e., $\forall x' \in S$ with $x' \ne x$, $F(x') \le F(x) \Longrightarrow JF(x)(x'-x) < 0$.

Remark 2.1 It is obvious that $F$ is convex iff it is componentwise convex, but the corresponding equivalence does not hold for pseudoconvexity (see [24, Theorem 9.2.3 and Remark 5.3]). The same should hold true for the strict concepts. As in the scalar case, pseudoconvexity generalizes convexity.

A multiobjective search direction is defined as follows.

Definition 2.3 A vector $d \in \mathbb{R}^n$ is said to be
(a) a feasible direction at $x \in S$, if $\exists t_f > 0$, $\forall t \in\, ]0, t_f]$, $x + td \in S$;
(b) a tangent direction to $S$ at $x \in S$, if there exist $(d^k)_k \subset \mathbb{R}^n$ and $(t_k)_k \subset\, ]0, +\infty[$ such that $\lim_{k\to+\infty} d^k = d$, $\lim_{k\to+\infty} t_k = 0$ and $x + t_k d^k \in S$ for all $k$;
(c) a descent direction of $F$ at $x \in S$, if $\exists t_d > 0$, $\forall t \in\, ]0, t_d]$, $F(x + td) < F(x)$.

The sets of feasible directions, tangent directions and descent directions are cones, denoted respectively by $A_S(x)$, $T_S(x)$ and $D_S(x)$. A sufficient condition for $d \in D_S(x)$ is (see [20])
\[
JF(x)d < 0. \tag{2}
\]
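As a quick numerical illustration of condition (2) (our own sketch, not from the paper), a candidate direction can be tested by checking that every objective has a strictly negative directional derivative:

import numpy as np

def is_multiobjective_descent(JF, d):
    # Sufficient condition (2): JF(x) d < 0 componentwise, i.e. d
    # decreases every objective to first order.
    return bool(np.all(JF @ d < 0))

# Two objectives on R^2: f1(x) = x1 + x2 and f2(x) = x1 - x2.
JF = np.array([[1.0, 1.0],
               [1.0, -1.0]])
print(is_multiobjective_descent(JF, np.array([-1.0, 0.0])))  # True
print(is_multiobjective_descent(JF, np.array([-1.0, 2.0])))  # False: f1 increases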
On the other hand, by differentiability, we have that fj0 xk = fj0 x∗+ tkdk = fj0 (x∗) + tk∇fj0 (x∗)T dk + o tkdk . It follows, from (5), that for k large enough, tk∇fj0 (x∗)T dk + o tkdk ≥0. Passing to the limit, when k ↗+∞, after dividing by tk∥dk∥, we obtain ∇fj0 (x∗)T d ≥0, which shows the result of the theorem. □ As is well known, the tangent cone TS(x) is introduced to handle nonlinear equality constraints, since the cone AS(x) may be empty in such cases (see, e.g., the case where S is a circle). On the other hand, the tangent cone TS(x) is always closed and coincides with the closure of AS(x) when S is convex. This helps to explain the suitability of using TS(x) instead of AS(x). Furthermore, although the tangent cone plays a theoretically crucial role for a general feasible set S, its explicit determination often remains challenging in practice. However, when S is explicitly described by equality and/or inequality constraints, the tangent cone can always be expressed in terms of these constraints via the so-called linearized cone. For our (MOP), the latter is formulated at a given x ∈S as follows: CS(x) = {d ∈Rn : JG(x)d = 0, ∀i ∈Ia(x), di ≥0, ∀i ∈Ib(x), di ≤0} , where Ia(x) = {i ∈{1, . . . , n} : xi = ai} and Ib(x) = {i ∈{1, . . . , n} : xi = bi} are the index sets of active variables, and, JG(x) is the Jacobian matrix of G at x. In general, one only has that TS(x) ⊆CS(x) (see, e.g., [4, Chapter 5]). When equality holds, we say that Abadie’s constraint qualification (ACQ) is satisfied at x: (ACQ) TS(x) = CS(x). With the assumption (ACQ), we can state the following explicit conditions of weak efficiency that will be useful in the sequel. Theorem 2.2 Assume that (ACQ) holds at x∗∈S. If x∗∈Eloc w , then the following two equivalent conditions are satisfied: (i) There is no tangent descent direction to S at x∗∈S satisfying (2), i.e., ∄d ∈Rn :          JF(x∗)d < 0, JG(x∗)d = 0, di ≥0, ∀i ∈Ia(x∗), di ≤0, ∀i ∈Ib(x∗). (ii) The point x∗∈S is Pareto KKT-stationary, i.e., ∃(λ, u, v, w) ∈Rr +\ {0} × Rm × Rn + × Rn + :      JF (x∗)T λ + JG (x∗)T u −v + w = 0, v · (x∗−a) = 0, w · (b −x∗) = 0. Conversely, if condition (i) or (ii) is satisfied at x∗∈S, F is pseudoconvex (resp. strictly pseudo- convex) at x∗and the kernel Ker JG(x∗) contains N(0) ∩(S −x∗) for some neighbourhood N(0) of 0, then x∗∈Eloc w (resp. x∗∈Eloc e ). generalized reduced jacobian method 5 Proof Necessity: condition (i) is the result of Theorem 2.1 expressed under (ACQ). Equivalence between (i) and (ii) comes directly from Motzkin’s alternative theorem (see, e.g., [28]). Sufficiency: Since it is assumed that N(0) ∩(S −x∗) ⊆Ker JG(x∗), then there exists N(x∗) a neighbourhood of x∗such that JG(x∗)(x −x∗) = 0 for all x ∈N(x∗) ∩S. Now, by way of contraposition, suppose that x∗̸∈Eloc w (resp. x∗̸∈Eloc e ). Then, there exists x0 ∈S ∩N(x∗) such that F(x0) < F(x∗) (resp. F(x0) ⪇F(x∗)). By putting d = x0 −x∗, pseudoconvexity (resp. strict pseudoconvexity) of F at x∗implies that JF(x∗)d = JF(x∗) x0 −x∗ < 0. On the other hand, we also have that JG(x∗)d = JG(x∗)(x0 −x∗) = 0 because x0 ∈N(x∗) ∩S ⊆Ker JG(x∗). Moreover, di = ( x0 i −ai, if i ∈Ia(x∗), x0 i −bi, if i ∈Ib(x∗), satisfying di ≥0, if i ∈Ia(x∗) and di ≤0, if i ∈Ib(x∗), which under the constraint qualification condition (ACQ), shows the existence of a tangent descent direction of F at x∗. □ Remark 2.2 If, in Theorem 2.2, we replace N(0) by Rn, then it is easy to see by the same proof that the local efficiency becomes global, in particular, when G is affine. 
3 GRJ strategy

Let $A(x) := JG(x) \in \mathbb{R}^{m \times n}$ be the Jacobian matrix of $G$ at $x \in S$, assumed to be of full rank $m < n$. Then there exists a subset $B \subset \{1, \ldots, n\}$, called a basis of $A(x)$, such that the submatrix $A_B(x)$ is invertible, where (rearranging the columns if necessary)
\[
A(x) = [A_B(x) \ \ A_N(x)], \quad A_B(x) = JG_B(x), \quad A_N(x) = JG_N(x), \quad N = \{1, \ldots, n\} \setminus B.
\]
Rearranging also the components of $x$, we can write $x = (x_B, x_N)$, where $x_B$ (resp. $x_N$) is the well-known vector of basic (resp. nonbasic) variables. Since $G$ is assumed to be continuously differentiable and $A_B(x)$ is invertible, the implicit function theorem yields a neighbourhood $V \subset \mathbb{R}^m$ of $x_B$, a neighbourhood $W \subset \mathbb{R}^{n-m}$ of $x_N$ and a unique continuously differentiable function $\psi : W \to V$ such that, for all $x'_N \in W$,
\[
G\big(\psi(x'_N), x'_N\big) = 0, \qquad \frac{\partial \psi}{\partial x_N}(x'_N) = -A_B^{-1}(x')A_N(x'), \tag{6}
\]
where $x' = (\psi(x'_N), x'_N)$. Hence, the feasible set $S$ may be expressed only in terms of the nonbasic variables:
\[
x \in S \iff a \le x = \big(\psi(x_N), x_N\big) \le b.
\]
A point $x \in S$ is said to be nondegenerate if there exists a basis $B$ such that $a_B < x_B < b_B$. In this case, $B$ is also said to be a nondegenerate basis for $x$; otherwise $x$ is degenerate. The feasible set $S$ is in turn called nondegenerate if every $x \in S$ is nondegenerate.

Given a vector $d \in \mathbb{R}^n$ partitioned in the same way as $d = (d_B, d_N)$, it is clear that
\[
JG(x)d = 0 \iff d_B = -A_B^{-1}(x)A_N(x)d_N.
\]
So, by also partitioning the Jacobian matrix $JF(x) = [JF_B(x) \ \ JF_N(x)]$, for any $d$ such that $d_B = -A_B^{-1}(x)A_N(x)d_N$ it follows that $JF(x)d = U_N(x)d_N$, where $U_N(x)$ is an $r \times (n-m)$ matrix, which will be called the generalized reduced Jacobian matrix of $F$ at $x$, explicitly given by
\[
U_N(x) := JF_N(x) - JF_B(x)A_B^{-1}(x)A_N(x). \tag{7}
\]
Observe that the $j$th row of $U_N(x)$ is nothing more than the well-known reduced gradient of the criterion $f_j$ at $x$, which is given by
\[
u^j_N(x) = \frac{\partial f_j}{\partial x_N}(x) - \big(A_B^{-1}(x)A_N(x)\big)^T \frac{\partial f_j}{\partial x_B}(x).
\]
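A minimal numerical sketch of (7) (our illustration; the variable names are ours): given the partitioned Jacobians, the reduced Jacobian is obtained from a single linear solve with $A_B$ rather than an explicit inverse.

import numpy as np

def reduced_jacobian(JF_B, JF_N, A_B, A_N):
    # Generalized reduced Jacobian (7): U_N = JF_N - JF_B A_B^{-1} A_N.
    # Shapes: JF_B (r, m), JF_N (r, n-m), A_B (m, m) invertible, A_N (m, n-m).
    X = np.linalg.solve(A_B, A_N)   # X = A_B^{-1} A_N, shape (m, n-m)
    return JF_N - JF_B @ X          # shape (r, n-m)

# r = 2 criteria, m = 1 equality constraint, n = 3 variables:
U_N = reduced_jacobian(np.array([[1.0], [0.5]]),
                       np.array([[2.0, 0.0], [0.0, 2.0]]),
                       np.array([[1.0]]),
                       np.array([[1.0, 1.0]]))
# Each row of U_N is the reduced gradient of one criterion.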
Now, if di > 0, then ai ≤ xi + tdi and (bi − xi)/di ≥ tN ≥ t, so that bi ≥ xi + tdi. The case di < 0 is similar. Conversely, since dN ≠ 0, if di < 0 (i ∈ N), the hypothesis ai ≤ xi + tdi implies that t ≤ (ai − xi)/di; and if di > 0, the hypothesis xi + tdi ≤ bi implies that t ≤ (bi − xi)/di, which shows that t ≤ tN.

(iii) By definition of ψ : W → V (see (6)) and the continuity of t ↦ xN + tdN at 0, there exists t1 > 0 such that xN + tdN ∈ W for all t ∈ [0, t1]. It follows by (6) that

∀t ∈ [0, t1],  G(ψ(xN + tdN), xN + tdN) = 0.

On the other hand, by assertion (ii), we have that

∀t ∈ [0, tN],  aN ≤ xN + tdN ≤ bN.

So, it remains to show that the box constraint also holds for the basic variables. Indeed, by the nondegeneracy assumption and continuity, it is immediate that

∃t2 > 0, ∀t ∈ [0, t2],  aB < ψ(xN + tdN) < bB.

Thus, by putting tf = min{t1, t2, tN}, we obtain the desired result.

(iv) To prove the Armijo-like inequality (11), let us consider the reduced function F̃(·) := F(ψ(·), ·), which is continuously differentiable on W ⊂ Rn−m. By using the derivative of ψ given by (6), we immediately deduce the Jacobian of F̃:

JF̃(xN) = JFN(x) − JFB(x)AB^{-1}(x)AN(x) = UN(x).

By virtue of (iii), xN + tdN ∈ W for all t ∈ ]0, tf]. So, by differentiability, it follows that

F̃(xN + tdN) = F̃(xN) + tUN(x)dN + o(t) = F̃(xN) + tβUN(x)dN + t[(1 − β)UN(x)dN + o(t)/t].

Since (1 − β)UN(x)dN < 0 and lim_{t→0+} o(t)/t = 0, we have

∃ta ∈ ]0, tf], ∀t ∈ ]0, ta],  (1 − β)UN(x)dN + o(t)/t < 0,

thus implying that F̃(xN + tdN) < F̃(xN) + tβUN(x)dN. As F̃(xN + tdN) = F(ψ(xN + tdN), xN + tdN) and ψ(xN) = xB, the result follows straightforwardly. □

After having shown that it is possible to obtain feasible steplengths along reduced directions, it remains, to complete our GRJ scheme, to show how to determine such reduced descent directions.

4 GRJ search direction

Let us consider first a positive functional ϕ : R → R+ satisfying ϕ(t) = 0 iff t = 0; e.g., ϕp(t) = |t|^p/p for some p ∈ ]0, 1], or ϕ0(t) = 1_{R∗}, the characteristic function of R∗ defined by 1_{R∗}(t) = 1 if t ≠ 0 and 1_{R∗}(0) = 0. Let ⌊a⌋+ := max(0, a) and ⌊a⌋− := max(0, −a) denote, respectively, the positive and negative parts of the scalar a. Then, for x ∈ S, we introduce the following direction-finding subproblem:

(Px)  Min_{λ∈Λ} f(λ, x) := (1/2) Σ_{i∈N} [ ϕ(bi − xi) ⌊(UN(x)^T λ)i⌋−^2 + ϕ(xi − ai) ⌊(UN(x)^T λ)i⌋+^2 ],

where Λ = {(λ1,...,λr) ∈ Rr+ : Σ_{j=1}^r λj = 1}, xi (resp. ai and bi) is the ith component of xN (resp. aN and bN), and (UN(x)^T λ)i is the ith component of UN(x)^T λ. It is obvious that (Px) is a convex, continuously differentiable problem and always has an optimal solution in the compact set Λ. Moreover, (Px) has simplicial constraints and it is easy to verify that its objective function has a computable gradient explicitly given by ∇λ f(λ, x) = −UN(x) δN(λ, x), where

δN(λ, x) = ( ϕ(bi − xi) ⌊(UN(x)^T λ)i⌋− − ϕ(xi − ai) ⌊(UN(x)^T λ)i⌋+ )_{i∈N}.   (12)

The descent GRJ scheme will consist in taking dN := δN(λ(x), x), where λ(x) ∈ argmin(Px). Then, according to (12),

∀i ∈ N,  di = −ϕ(xi − ai)(UN(x)^T λ(x))i, if (UN(x)^T λ(x))i > 0;  di = −ϕ(bi − xi)(UN(x)^T λ(x))i, otherwise.   (13)

The lemma below characterizes the optimal solutions of (Px), which will be crucial not only for proving the descent properties of the vector dN, but also for the convergence analysis of the resulting algorithm. Its proof is omitted, as it is similar to the one of [16, Lemma 3.2].
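All the ingredients above are straightforward to prototype. The following sketch is ours (not the authors' SCILAB code): it assumes NumPy/SciPy, takes ϕ = ϕ1 = |·|, and solves (Px) with a generic simplex-constrained solver, whereas the paper solves this subproblem with a reduced gradient scheme (see Section 6).

```python
import numpy as np
from scipy.optimize import minimize

def reduced_jacobian(JF, JG, B, N):
    """Generalized reduced Jacobian UN(x) of eq. (7): JFN - JFB AB^{-1} AN."""
    W = np.linalg.solve(JG[:, B], JG[:, N])      # AB^{-1} AN, without inversion
    return JF[:, N] - JF[:, B] @ W

def solve_Px(UN, xN, aN, bN, phi=np.abs):
    """Solve the convex subproblem (Px) over the simplex Lambda (sketch).

    The floor-bracket parts are written via min/max; returns (lam(x), min(Px))."""
    r = UN.shape[0]

    def f(lam):
        s = UN.T @ lam
        return 0.5 * np.sum(phi(bN - xN) * np.minimum(s, 0.0) ** 2
                            + phi(xN - aN) * np.maximum(s, 0.0) ** 2)

    res = minimize(f, np.full(r, 1.0 / r), bounds=[(0, 1)] * r,
                   constraints={"type": "eq",
                                "fun": lambda lam: lam.sum() - 1.0})
    return res.x, res.fun

def reduced_direction(UN, lam, xN, aN, bN, phi=np.abs):
    """Direction dN = deltaN(lam(x), x) of eqs. (12)-(13)."""
    s = UN.T @ lam                                # (UN^T lam)_i for i in N
    return np.where(s > 0, -phi(xN - aN) * s, -phi(bN - xN) * s)
```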
Lemma 4.1 λ(x) ∈ Λ is an optimal solution of (Px) iff

δN(λ(x), x) · (UN(x)^T λ) ≤ −2f(λ(x), x)  (∀λ ∈ Λ),

or equivalently,

δN(λ(x), x) · u^j_N(x) ≤ −2f(λ(x), x)  (j = 1,...,r).

The descent properties are stated as follows:

Proposition 4.1 (i) dN = 0 ⟺ f(λ(x), x) = 0.
(ii) dN ≠ 0 ⟹ dN is a multiobjective reduced descent direction of F at x. If moreover (ACQ) holds and x is nondegenerate, then x is not Pareto KKT-stationary for (MOP).
(iii) dN = 0 ⟹ x is Pareto KKT-stationary for (MOP).
(iv) The functional x ↦ f(λ(x), x) is continuous on S, provided ϕ is continuous.

Proof (i) To show the direct implication, observe that, as ϕ ≥ 0, dN = 0 implies that, ∀i ∈ N,

⌊ϕ(bi − xi)(UN(x)^T λ(x))i⌋− = ⌊ϕ(xi − ai)(UN(x)^T λ(x))i⌋+.

If we suppose that these two terms are not equal to zero, we would have ϕ(bi − xi)(UN(x)^T λ(x))i < 0 and ϕ(xi − ai)(UN(x)^T λ(x))i > 0, which would mean that ϕ(bi − xi) and ϕ(xi − ai) have opposite signs, contradicting ϕ ≥ 0. Thus, ∀i ∈ N,

⌊ϕ(bi − xi)(UN(x)^T λ(x))i⌋− = ⌊ϕ(xi − ai)(UN(x)^T λ(x))i⌋+ = 0,

and this shows that f(λ(x), x) = 0. The reverse implication follows straightforwardly from the positivity of ϕ and (13).

(ii) If dN ≠ 0, then by assertion (i), f(λ(x), x) > 0. It follows, by Lemma 4.1, that

dN · u^j_N(x) ≤ −2f(λ(x), x) < 0  (j = 1,...,r).

Hence, UN(x)dN < 0. On the other hand, for i ∈ Ia(x) ∩ N, we have that di = ϕ(bi − xi)⌊(UN(x)^T λ(x))i⌋− ≥ 0, and, for i ∈ Ib(x) ∩ N, we have that di = −ϕ(xi − ai)⌊(UN(x)^T λ(x))i⌋+ ≤ 0. This shows, according to definition (8), that dN is indeed a multiobjective reduced descent direction. The second part of this assertion follows straightforwardly from Corollary 3.1.

(iii) Suppose that dN = 0, and put

v∗_N := (⌊(UN(x)^T λ(x))i⌋+)_{i∈N}  and  w∗_N := (⌊(UN(x)^T λ(x))i⌋−)_{i∈N}.

Then, by (13) and the property that ϕ(t) = 0 iff t = 0, we obtain (v∗_N, w∗_N) ≥ 0, v∗_N · (xN − aN) = 0 and w∗_N · (bN − xN) = 0. Clearly, for λ∗ := λ(x), u∗ := −(AB^{-1}(x))^T JFB(x)^T λ∗, v∗ := (0, v∗_N) and w∗ := (0, w∗_N), it holds that (λ∗, u∗, v∗, w∗) ∈ Rr+\{0} × Rm × Rn+ × Rn+,

JF(x)^T λ∗ + JG(x)^T u∗ − v∗ + w∗ = 0  and  v∗ · (x − a) = w∗ · (b − x) = 0,

which express the Pareto KKT-stationarity for (MOP).

(iv) Since f(λ(x), x) is the optimal value of (Px), its continuity with respect to x follows from the compactness of the feasible set Λ and the continuity of the function (λ, x) ↦ f(λ, x). □

5 GRJ algorithm and its convergence

5.1 The algorithm

The generalized reduced Jacobian algorithm we propose to solve the multicriteria problem (MOP) is described below:

GRJ pseudocode with Armijo line search
Step 0: Initialization.
- Select x ∈ S, fix the Armijo constant β ∈ ]0, 1[ and choose the functional ϕ.
Step 1: Nondegenerate basis selection.
- Identify a basis B such that aB < xB < bB. Set N = {1,...,n}\B.
Step 2: Generalized reduced Jacobian.
- Compute UN(x) according to (7).
Step 3: Reduced direction.
- Solve (Px) to determine dN according to (13).
Step 4: Stopping criterion.
- If min(Px) = 0, then STOP: x is Pareto KKT-stationary for (MOP).
Step 5: Feasible Armijo line search.
- Compute tN according to (9).
- Determine, according to (10)-(11), a steplength t ∈ ]0, tN] that satisfies

t = max{ 1/2^p : p ∈ N, (ψ(xN + (1/2^p)dN), xN + (1/2^p)dN) ∈ S and F(ψ(xN + (1/2^p)dN), xN + (1/2^p)dN) < F(x) + β(1/2^p)UN(x)dN },

where ψ is the implicit function defined in (6).
Step 6: Updated point.
- Set x := (ψ(xN + tdN), xN + tdN).
Step 7: Degeneracy test.
- If B is a degenerate basis for x, go to Step 1; otherwise go to Step 2.
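For concreteness, one iteration of this scheme might be organized as follows. This is a sketch built on the helpers reduced_jacobian, solve_Px and reduced_direction sketched earlier, together with a routine newton_psi for evaluating ψ (one possible implementation is sketched after the line-search pseudocode in Section 6.1); as a simplification, the backtracking here starts from tN rather than from pure powers of 1/2.

```python
import numpy as np

def grj_step(F, G, JF, JG, x, B, a, b, beta=0.25, tol=1e-6):
    """One GRJ iteration (sketch). Returns (new point, stationarity flag)."""
    n = len(x)
    B = np.asarray(sorted(B))
    N = np.asarray(sorted(set(range(n)) - set(B.tolist())))
    UN = reduced_jacobian(JF(x), JG(x), B, N)            # Step 2, eq. (7)
    lam, fmin = solve_Px(UN, x[N], a[N], b[N])           # Step 3, subproblem (Px)
    if fmin < tol:
        return x, True                                   # Step 4: KKT-stationary
    dN = reduced_direction(UN, lam, x[N], a[N], b[N])    # eq. (13)
    with np.errstate(divide="ignore"):                   # Step 5: tN of eq. (9)
        ratios = np.where(dN < 0, (a[N] - x[N]) / dN,
                          np.where(dN > 0, (b[N] - x[N]) / dN, np.inf))
    tN = ratios.min()
    for p in range(60):                                  # Armijo backtracking
        t = tN / 2.0 ** p
        xN_new = x[N] + t * dN
        xB_new = newton_psi(G, JG, x[B], xN_new, B, N)   # psi(xN + t dN)
        if xB_new is None:
            continue                                     # Newton failed: halve t
        x_new = np.empty(n)
        x_new[B], x_new[N] = xB_new, xN_new
        if (np.all(a <= x_new) and np.all(x_new <= b)
                and np.all(F(x_new) < F(x) + beta * t * (UN @ dN))):
            return x_new, False                          # Step 6: updated point
    raise RuntimeError("no Armijo steplength found")
```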
5.2 Convergence analysis

If the algorithm terminates after a finite number of iterations, then according to Proposition 4.1, it terminates at a Pareto KKT-stationary point. So we will assume that the sequence produced by the algorithm is infinite. We also assume that the bases are chosen in such a way that the following hypothesis is satisfied:

(H) If xk →_{k∈K⊆N} x∗ with x∗ not Pareto KKT-stationary, and there exists a basis B such that aB < xk_B < bB for all k ∈ K, then aB < x∗_B < bB.

Known as the basis property, this hypothesis proves to be essential for the global convergence of reduced gradient methods (e.g., [16, 18, 30, 37]; as pointed out in [18], hypothesis (H) was inadvertently omitted in the proof of [16, Theorem 4.1]). An example provided in [30] shows that, without this hypothesis, the algorithm may fail to converge even if the constraints are linear. However, there are basis selection procedures that ensure the basis property for both the linear and nonlinear constraint cases, such as those mentioned in [18].

Theorem 5.1 Assume that S is nondegenerate and that ϕ is chosen to be continuous and differentiable at 0. Let (xk)_{k∈N} be the sequence produced by the GRJ algorithm. Then,

(i) xk ∈ S and F(xk+1) < F(xk) for all k ∈ N;
(ii) under hypothesis (H), any accumulation point x∗ of (xk)_k is a Pareto KKT-stationary point for (MOP), and lim_{k→+∞} F(xk) = F(x∗).

Proof (i) This assertion follows directly from Proposition 3.1.

(ii) We first establish the following lemma.

Lemma 5.1 If, in addition to the hypotheses of Theorem 5.1, a subsequence (xk)_{k∈K} produced by the GRJ algorithm converges to x∗ with constant bases Bk = B for all k ∈ K, and (λ(xk))_{k∈K} converges too, then

(a) dk_N := δN(λ(xk), xk) →_{k∈K} d∗_N := δN(λ(x∗), x∗), where N = {1,...,n}\B;
(b) t̄ := inf_{k∈K} tk_N > 0, where tk_N := min({(ai − xk_i)/dk_i : dk_i < 0, i ∈ N} ∪ {(bi − xk_i)/dk_i : dk_i > 0, i ∈ N}).

Proof (a) Let λ∗ = lim_{k∈K} λ(xk); then λ∗ ∈ Λ, since (λ(xk))_{k∈K} ⊂ Λ, which is closed. According to the hypothesis and Lemma 4.1, for all k ∈ K, we have

δN(λ(xk), xk) · u^j_N(xk) ≤ −2f(λ(xk), xk)  (j = 1,...,r).

Taking into account that the mappings (λ, x) ↦ f(λ, x) and (λ, x) ↦ δN(λ, x) are continuous, by passing to the limit as k ↗ +∞ in the previous inequality, we obtain

δN(λ∗, x∗) · u^j_N(x∗) ≤ −2f(λ∗, x∗)  (j = 1,...,r).

Since, by closedness of S, x∗ is feasible like the xk, applying Lemma 4.1 once again shows that λ∗ ∈ argmin(Px∗), i.e., λ∗ = λ(x∗), which proves the assertion.

(b) We proceed by contradiction, assuming that t̄ = 0 (recall that t̄ ≥ 0). Then there would exist an infinite subset K0 ⊆ K such that tk_N →_{k∈K0} 0. Since the index set N is finite, we can assume (without loss of generality) that one of the two following cases occurs:

∃i0 ∈ N, ∀k ∈ K0, tk_N = (a_{i0} − xk_{i0})/dk_{i0} with dk_{i0} < 0,  or  ∃i0 ∈ N, ∀k ∈ K0, tk_N = (b_{i0} − xk_{i0})/dk_{i0} with dk_{i0} > 0.

In the first case, following (13), we would have

∀k ∈ K0,  tk_N = [1/(UN(xk)^T λ(xk))_{i0}] · [(xk_{i0} − a_{i0})/ϕ(xk_{i0} − a_{i0})].   (14)

Now, by assertion (a), dk_{i0} := −ϕ(xk_{i0} − a_{i0})(UN(xk)^T λ(xk))_{i0} →_{k∈K0} d∗_{i0}, and, by the contradiction hypothesis, tk_N →_{k∈K0} 0. So the sequence (xk_{i0} − a_{i0})_{k∈K0} would also converge to 0; i.e., xk_{i0} →_{k∈K0} a_{i0}.
But this would imply that ϕ(xk_{i0} − a_{i0}) →_{k∈K0} ϕ(x∗_{i0} − a_{i0}) = ϕ(0) = 0 and, by letting k ↗ +∞ in (14), we would obtain the contradiction

tk_N →_{k∈K0} [1/(UN(x∗)^T λ∗)_{i0}] × [1/ϕ′(0)] ≠ 0.

The second case is similar. Hence, t̄ > 0. □

Let us go back to the proof of Theorem 5.1(ii). By hypothesis, there exists a subsequence (xk)_{k∈K} → x∗ with K ⊆ N an infinite subset. By closedness of S, x∗ is feasible like the xk. Now, by continuity of F and assertion (i), the sequence (F(xk))_{k∈N} converges to F(x∗). On the other hand, since the index sets Bk lie in the finite set {1,...,n}, we can assume (without loss of generality) that Bk = B (hence Nk = N) for all k ∈ K. Now, by Armijo's condition:

∀k ∈ N,  F(xk+1) − F(xk) < β tk UNk(xk) dk_{Nk} < 0.

It follows that

lim_{k∈K, k→+∞} tk UN(xk) dk_N = lim_{k∈K, k→+∞} tk UNk(xk) dk_{Nk} = 0.   (15)

Similarly, as the sequence (λ(xk))_k lies in the compact set Λ, we can assume (without loss of generality) that λ(xk) →_{k∈K} λ∗ ∈ Λ. Then, by virtue of Lemma 5.1(a), dk_N →_{k∈K} d∗_N. If we suppose, by contradiction, that x∗ is not Pareto KKT-stationary, then according to Proposition 4.1(ii)-(iii), d∗_N would be a multiobjective reduced descent direction of F at x∗ ∈ S. Hence, according to definition (8),

UN(x∗)d∗_N < 0.   (16)

We can see from (15) that this would imply tk →_{k∈K} 0. In particular, for any p ∈ N, tk < 1/2^p for any k ∈ K sufficiently large. Recall that, by the implicit function theorem applied at x∗, there exists a unique mapping ψ∗ : W∗ → V∗, where W∗ (resp. V∗) is a neighbourhood of x∗_N (resp. of x∗_B = ψ∗(x∗_N)), such that G(ψ∗(xN), xN) = 0 for all xN ∈ W∗. This applies to each xk_N + (1/2^p)dk_N as (k, p) → +∞, since xk_N + (1/2^p)dk_N → x∗_N; hence, for all p ∈ N and k ∈ K, both sufficiently large,

G(ψ∗(xk_N + (1/2^p)dk_N), xk_N + (1/2^p)dk_N) = 0.   (17)

On the other hand, since x∗ is assumed not to be Pareto KKT-stationary, using hypothesis (H), x∗ would be nondegenerate, so that aB < x∗_B = ψ∗(x∗_N) < bB. Thus, by the continuity of ψ∗, we would also have, for all p ∈ N and k ∈ K, both sufficiently large,

aB < ψ∗(xk_N + (1/2^p)dk_N) < bB.   (18)

Now, using Lemma 5.1(b), we have 0 < 1/2^p ≤ t̄ for all p large enough, and this shows, according to Proposition 3.1(ii), that for any p ∈ N sufficiently large and k ∈ K,

aN ≤ xk_N + (1/2^p)dk_N ≤ bN.   (19)

Subsequently, following (17)-(19), we would have, for all p ∈ N and k ∈ K, both sufficiently large,

(ψ∗(xk_N + (1/2^p)dk_N), xk_N + (1/2^p)dk_N) ∈ S.

However, given that tk < 1/2^p, this would mean that, in Step 5 of the GRJ algorithm, for all p ∈ N and k ∈ K, both sufficiently large, the Armijo inequality fails to hold; that is,

F(ψ∗(xk_N + (1/2^p)dk_N), xk_N + (1/2^p)dk_N) ≮ F(xk) + β(1/2^p)UN(xk)dk_N.   (20)

Fixing p ∈ N sufficiently large and passing to the limit as k ↗ +∞, we would obtain

F(ψ∗(x∗_N + (1/2^p)d∗_N), x∗_N + (1/2^p)d∗_N) ≮ F(x∗) + β(1/2^p)UN(x∗)d∗_N.

Since p ∈ N was arbitrarily large, this contradicts Proposition 3.1(iv), which proves that x∗ is indeed Pareto KKT-stationary for (MOP). □

6 Numerical experiments

6.1 Test problems and implementation

In this section, we present computational results obtained with the proposed GRJ method. We also provide comparisons with three other multicriteria methods: ZMO (a Zoutendijk-like method [31]), MOSQP (an SQP-type approach [22]) and the evolutionary NSGA-II method (the nondominated sorting genetic algorithm [10]).
All codes were implemented on a machine equipped with a 1.90 GHz Intel(R) Core(TM) i5 CPU and 16 GB of memory. GRJ, ZMO and MOSQP were coded in SCILAB-2024.1.0, while NSGA-II was executed using MATLAB's predefined function "gamultiobj" (MATLAB R2024b). Thirty constrained multicriteria optimization test problems, selected from the literature, were considered in this experiment and are summarized in Table 1, where 'OV', 'L' and 'NL' indicate, respectively, the number of original variables, the number of linear constraints and the number of nonlinear constraints. We selected the 'Disc Brake' and 'Welded Beam' problems, which are derived from real-world engineering design applications [35]. Detailed descriptions of these two models, along with further analysis of the results, are provided at the end of this section. We also introduce the problem 'EL3' below, which shows the importance of handling the tangent cone TS(x) numerically in the case where no feasible direction exists, i.e., AS(x) = ∅:

(EL3) Minimize ( x2^3 + log(x1^2 + 1), sin(x1/(x2 + 2)) ),
subject to x1^2 + x2^2 = 1, 0 ≤ xi ≤ 1, i = 1, 2.   (21)

Problem      Ref.  r  OV   L  NL      Problem      Ref.  r  OV   L  NL
ABC comp     [26]  2   2   2   1      LAP1         [5]   2   2   1   2
BNH          [8]   2   2   0   2      LAP2         [5]   2  30   0   1
CF11         [44]  2   3   0   2      LIR-CMOP1    [19]  2  30   0   2
CPT1         [9]   2   2   0  30      LIR-CMOP2    [19]  2  30   0   2
Disc Brake   [35]  2   4   2   3      LIR-CMOP3    [19]  2  30   0   3
DTLZ0        [11]  3   3   0   2      liswetm      [27]  2   7   5   0
DTLZ9        [11]  3  30   0   2      MOLPg 001    [38]  3   8   8   0
EL3          (21)  2   2   0   1      MOLPg 002    [38]  3  12  13   0
ex001        [7]   2   5   1   2      MOLPg 003    [38]  3  10  12   0
ex002        [41]  2   5   0   4      OSY          [8]   2   6   4   2
ex003        [8]   2   2   0   2      SRN          [8]   2   2   1   1
GE1          [14]  2   2   0   1      Tamaki       [35]  3   3   0   1
GE4          [14]  3   3   0   1      TLK1         [39]  2   2   4   0
Hanne4       [6]   2   2   0   1      TNK          [8]   2   2   0   2
hs05x        [25]  3   5   6   0      Welded Beam  [35]  2   4   1   3

Table 1: Linearly and nonlinearly constrained multicriteria test problems

All these problems were solved starting from the same population of 200 selected individuals. To ensure that all compared solvers generate the same number of 200 final solutions, and since MATLAB's NSGA-II solver may provide fewer than 200, we set the "PopulationSize" option to 571 for NSGA-II, while all other NSGA-II options were left at their default settings. In our implementation, the three codes GRJ, ZMO and MOSQP use the value 0.25 as Armijo's constant, and min(Px) < 10^−6 as the stopping criterion, where Px denotes the direction-finding subproblem specific to each of the three solvers. The latter was solved, for GRJ, by coding a standard RGM based on Wolfe's continuous scheme (see, e.g., [18] for details), while for ZMO and MOSQP it was solved using SCILAB's predefined function "qld" for linear-quadratic programming problems. Step 5 of the GRJ algorithm, which involves computing feasible Armijo steplengths, relies on the results presented in Proposition 3.1. This step is carried out using Newton's method, following the procedure outlined below:

Feasible Armijo line search pseudocode
Step 0: Initialization.
- Fix a tolerance ε > 0 (e.g., ε = 10^−6) and a maximum number of iterations L ∈ N (e.g., L = 200).
- Set t = tN, y1_B = xB and l = 1.
Step 1: Newton rule.
- Set y^{l+1}_B = y^l_B − AB^{-1}(y^l_B, xN + tdN) G(y^l_B, xN + tdN).
Step 2: Feasible Armijo test.
- If ∥G(y^{l+1}_B, xN + tdN)∥ < ε, aB ≤ y^{l+1}_B ≤ bB, and F(y^{l+1}_B, xN + tdN) < F(x) + βtUN dN, then STOP: the current implicit basic vector is set as ψ(xN + tdN) = y^{l+1}_B.
Step 3: Update.
- If l = L, set t = t/2, y1_B = xB and l = 1; otherwise, set l = l + 1.
- Return to Step 1.
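The Newton rule of Step 1 simply solves the basic block of G for the fixed nonbasic variables. Below is a minimal sketch of this evaluation of ψ, consistent with the newton_psi helper assumed in the iteration sketch of Section 5 (our code and calling convention, under the assumption that G and JG accept the full vector x):

```python
import numpy as np

def newton_psi(G, JG, yB0, xN, B, N, eps=1e-6, L=200):
    """Evaluate psi(xN) by Newton's method: y <- y - AB(y, xN)^{-1} G(y, xN).

    Returns the basic vector with ||G|| < eps, or None if Newton does not
    converge within L iterations (the caller then halves the steplength)."""
    n = len(B) + len(N)
    y = np.array(yB0, dtype=float)
    for _ in range(L):
        xfull = np.empty(n)
        xfull[B], xfull[N] = y, xN
        g = G(xfull)
        if np.linalg.norm(g) < eps:
            return y
        AB = JG(xfull)[:, B]                 # basic Jacobian block
        y = y - np.linalg.solve(AB, g)       # Newton rule (Step 1)
    return None
```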
6.2 Performance measures and interpretation

To thoroughly analyse and compare the results obtained by the solvers, a set of performance metrics, as suggested in the literature (see, e.g., [2]), may be employed. These metrics essentially assess two key aspects: the convergence of the approximate solutions towards the Pareto front and their diversity. In this work, we have chosen three performance measures: Purity (P) [3], Spread (Δ∗) [43] and Generational Distance (GD) [40]. These three measures use, as a proxy for the true Pareto front, the so-called reference Pareto front, defined as

Fp = { y∗ ∈ ∪_{s∈S} Fp,s : ∄y ∈ ∪_{s∈S} Fp,s, y < y∗ },

where Fp,s stands for the approximated Pareto front of problem p ∈ P provided by method s ∈ S; P denotes the set of tested problems and S the set of considered solvers.

− Purity (P). This metric measures the proportion of points of Fp,s admitted into Fp:

Pp,s = |Fp,s ∩ Fp| / |Fp,s|.

Clearly Pp,s ∈ [0, 1], and the extreme values are significant in the sense that a value of Pp,s close to 1 indicates better performance, while a value equal to 0 implies that the solver is unable to generate any point of Fp.

− Spread (Δ∗). This metric measures the diversity and dispersion of the points of Fp,s with respect to Fp:

Δ∗_{p,s} = [ Σ_{j=1}^r d(y∗_j, Fp,s) + Σ_{y∈Fp} |d(y, Fp,s\{y}) − d̄| ] / [ Σ_{j=1}^r d(y∗_j, Fp,s) + |Fp| d̄ ],

where d(y, A) = min_{a∈A} ∥y − a∥ is the Euclidean distance from the vector y to the set A; here, y∗_j denotes the minimum value of the criterion fj in Fp, and d̄ represents the average distance between each solution y ∈ Fp and the set Fp,s\{y}. Note that lower values of Δ∗_{p,s} reflect more uniform distributions of the generated solutions.

− Generational Distance (GD). Measuring convergence, this metric represents how far Fp,s is from Fp:

GDp,s = (1/|Fp,s|) √( Σ_{y∈Fp,s} d^2(y, Fp) ).

Obviously, lower values of GDp,s are preferred.

Table 2 lists the numerical results obtained by the four considered methods. The column 'CPU' indicates the average execution time (in seconds). The best scores are in bold. To analyse these results and look for significant differences between the compared solvers, the performance profile is used (see, e.g., [12]). Recall that a profile is a graphical representation of the (cumulative) distribution function ρ whose outputs are a solver's scores on the set of test problems against a performance metric. More precisely, given the measure mp,s (set here to 1/P, Δ∗, GD or CPU) for solver s ∈ S on problem p ∈ P, the corresponding distribution function is given formally by

ρs(α) = |{p ∈ P : rp,s ≤ α}| / |P|,  where rp,s = mp,s / min{mp,s : s ∈ S}.

The purity measure mp,s is set to 1/P so that all the considered metrics exhibit the same asymptotic behaviour, in the sense that the smaller the measure, the better the solver. Note that at the threshold α = 1, ρs(α) gives the proportion of problems on which solver s achieves the best score for the analysed performance measure, whereas a value ρs(α) reaching 1 means that all the problems p ∈ P have been solved by solver s at the threshold α. Thus, the best overall performance is achieved by the solver that reaches the value 1 for the smallest value of α.
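Once the fronts are stored as arrays of objective vectors, these quantities take only a few lines to compute. Below is a small sketch (ours) of Purity, GD and the profile value ρs(α); the membership test for Purity is done up to a tolerance rather than exactly:

```python
import numpy as np

def purity_and_gd(Fps, Fp, tol=1e-8):
    """Purity and Generational Distance of an approximate front Fps against
    the reference front Fp (both arrays, one objective vector per row)."""
    # d(y, Fp) = min over the reference front of the Euclidean distance
    d = np.linalg.norm(Fps[:, None, :] - Fp[None, :, :], axis=2).min(axis=1)
    purity = np.mean(d <= tol)                 # |Fps inter Fp| / |Fps|
    gd = np.sqrt(np.sum(d ** 2)) / len(Fps)    # (1/|Fps|) sqrt(sum d^2)
    return purity, gd

def profile_value(ratios, alpha):
    """rho_s(alpha) from the performance ratios r_{p,s} of solver s."""
    return np.mean(np.asarray(ratios) <= alpha)
```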
Problems      GRJ: CPU   P      Δ∗        GD         |  ZMO: CPU  P      Δ∗        GD
ABC comp      0.015935   0.99   0.902345  0.0000076  |  0.021454  1      0.953210  0
BNH           1.436124   0.915  0.810000  0.0000086  |  0.004504  1      0.995432  0
CF11          0.415047   0.885  0.920000  0          |  0.058855  0.61   0.902345  0
CPT1          1.712818   0.97   0.765432  0.000003   |  0.055902  0.98   0.895432  0.000003
Disc Brake    1.242267   0.69   0.310123  0.0031797  |  1.817904  0.035  0.860987  0.004630
DTLZ0         0.410519   0.805  0.705678  0.000615   |  0.000827  0      0.765432  0.001699
DTLZ9         0.209932   0.9    0.254321  0          |  1.004320  0.08   0.989765  0.000092
EL3           0.005883   1      0.285432  0          |  0.005889  1      0.925678  0
ex001         0.112501   1      0.415098  0          |  0.020554  0.33   0.350987  0.276242
ex002         0.027517   0.46   0.785432  0.0027654  |  0.010278  0      0.825678  0.042952
ex003         0.044351   0.535  0.811023  0.0003054  |  0.038164  0.565  0.985432  0.000682
GE1           0.018449   0.995  0.870987  0          |  0.002708  0.995  0.990987  0
GE4           0.048618   0.985  0.632098  0.0000041  |  0.037197  0.99   0.991098  0.000004
Hanne4        0.024853   0.56   0.707098  0.0004533  |  0.030180  0.46   0.998098  0.000614
hs05x         1.008168   0.975  0.980987  0.0006024  |  0.006076  0.01   0.970987  0.019143
LAP1          1.822010   0.755  0.505432  0.000108   |  0.028570  0.965  0.640987  0.000128
LAP2          0.087737   0.87   0.180123  0.0058377  |  0.082840  1      0.995432  0
LIR-CMOP1     0.389043   0.837  0.840987  0.000219   |  0.035736  0      0.965098  0.003135
LIR-CMOP2     0.135677   0.9    0.750987  0.000038   |  0.135736  0.02   0.997453  0.003135
LIR-CMOP3     0.031187   0.995  0.509871  0          |  0.010200  0.62   0.740987  0.000065
liswetm       0.154443   0.61   0.150987  0.000048   |  0.015055  0      0.930987  0.000815
MOLPg 001     0.184877   0.91   0.955432  0.000004   |  0.550163  0.81   0.980987  0.000005
MOLPg 002     1.200010   0.625  0.740987  0.000758   |  0.015298  0      0.720987  0.040700
MOLPg 003     1.005378   0.99   0.700987  0          |  0.042475  0      0.560987  0.021736
OSY           1.539581   0.47   0.923444  0.003442   |  0.061214  0      0.781341  0.019456
SRN           1.937231   0.945  0.860987  0          |  0.010315  0.9    0.940987  0.006855
Tamaki        1.541475   0.83   0.700987  0          |  0.002469  1      0.530987  0
TLK1          0.013301   0.42   0.840987  0.000101   |  0.033947  0.39   0.950097  0.000080
TNK           0.459437   0.395  0.830987  0.0003194  |  0.013540  0.275  0.760987  0.004407
Welded Beam   1.137963   0.91   0.740987  0.0033562  |  13.20645  0      0.960987  1.226983

Problems      MOSQP: CPU  P     Δ∗        GD        |  NSGA-II: CPU  P      Δ∗        GD
ABC comp      0.020315   0.99   0.980987  0.000017  |  0.857500   0.41   0.970000  0.000906
BNH           0.163576   0.105  0.992109  0.010358  |  0.013100   0.84   0.780000  0.001654
CF11          0.011990   0.59   1.000000  0         |  0.021800   1      0.994567  0
CPT1          0.017768   0.74   0.900000  0.000025  |  0.191800   1      1.000000  0
Disc Brake    0.021562   0      0.360000  0.006729  |  0.0084     0.84   0.840012  0.001951
DTLZ0         0.014875   0.495  0.960000  0.000528  |  0.0094     0.485  0.610000  0.003421
DTLZ9         0.395620   0.93   0.840000  0.005980  |  0.3825431  0.08   0.070981  0.000092
EL3           0.012636   0      0.659765  0.010208  |  0.0143     0.99   0.591220  0.000028
ex001         0.087176   0      1.000000  1.875443  |  4.9669     0      0.991197  0
ex002         0.029159   0.55   0.967787  0.001515  |  0.038400   0.085  1.000000  0.021188
ex003         0.065472   0.395  0.793245  0.000364  |  0.147100   0      0.985023  0.000061
GE1           0.031198   0.995  1.000000  0         |  0.010200   0.62   0.760000  0.000040
GE4           0.019889   0.97   0.992501  0         |  0.01200    0.87   0.570000  0.000458
Hanne4        0.022001   0.355  0.980000  0.000308  |  0.014400   0.775  0.900000  0.000643
hs05x         0.056837   1      0.990000  0         |  3.059300   0.941  0.905401  0.010439
LAP1          0.023713   0.255  0.970350  0.000203  |  0.010900   0.78   0.870000  0.000176
LAP2          0.024061   0.27   0.790000  0.006000  |  0.016500   0.075  0.960000  0.059720
LIR-CMOP1     0.059540   0.8    0.806701  0.000053  |  0.1824925  0.22   1.000000  0.000002
LIR-CMOP2     0.236534   0.53   0.950276  0.000022  |  2.1357355  0.1    0.993204  0.041562
LIR-CMOP3     0.323865   0.76   0.805467  0.000376  |  0.010200   0.62   0.760000  0.000065
liswetm       0.016368   0.385  1.000000  0.000410  |  1.119700   0.515  0.830000  0.000244
MOLPg 001     0.1888712  0.96   0.934124  0.000234  |  0.210634   0.9    0.993204  0
MOLPg 002     0.017967   0.68   0.995119  0         |  1.95200    0.075  0.703005  0.075590
MOLPg 003     0.032949   0.79   0.968578  0.001239  |  1.693900   0.19   0.503204  0.042700
OSY           1.582346   0.525  0.998123  0.001645  |  1.837200   0.25   0.934551  0.001930
SRN           0.036814   0.945  0.967634  0         |  2.393800   0.685  0.773497  0.004229
Tamaki        0.02418    0.9    0.843457  0         |  0.017700   0.99   0.952078  0.000007
TLK1          0.019948   0.225  0.996780  0.000042  |  0.959700   0.985  0.990840  0.000004
TNK           1.014764   0.09   1.000000  0.000245  |  0.97000    0.33   1.000000  0.003962
Welded Beam   0.024213   0.05   0.999732  0.178934  |  0.0875     0.97   0.950000  0.000621

Table 2: Multiobjective performance measurements

As illustrated in Fig. 1, the GRJ method consistently demonstrates good performance with respect to the P metric compared to the other methods. Indeed, the best value ρ(α) = 1 is achieved by GRJ at a relatively small threshold α, whereas the other methods fail to reach this value. This observation is further supported by the results in Tab. 2, where the purity values reported for the ZMO, MOSQP and NSGA-II methods are in some cases very close or equal to zero. This suggests that these methods fail to reach the reference Pareto front on certain problems. In terms of the Δ∗ and GD metrics, the four methods appear to be generally comparable. Although NSGA-II is widely recognized for its effectiveness in spread, the observed profile indicates good overall performance across all solvers, with a slight advantage for GRJ and NSGA-II. Similar observations can be made regarding the GD metric, noting this time a slight advantage of the three deterministic methods over NSGA-II, which is well known for its limited convergence toward the Pareto front. Regarding CPU time, as shown in Fig. 1 and Tab. 2, the three other methods exhibit a slight speed advantage over GRJ. This can be attributed to the inherent characteristics of the GRJ algorithm and its operational mechanism. More specifically, the algorithm's iterative process and auxiliary procedures, such as the basis selection in Step 1 and the line search in Step 5 (as previously described), can be time-consuming. Nevertheless, these components enable efficient exploration of the solution space, which helps to explain the performance differences relative to the other methods. The graphical representations of the approximated Pareto fronts illustrated in Figs. 2-3 also support these interpretations. Note also that ZMO generally appears to be faster than MOSQP based on the scores given in Tab. 2, except on the 'Welded Beam' design problem, where ZMO took a relatively long time. This may explain the influence of MOSQP's CPU performance profile on that of ZMO observed in Fig. 1. However, as reported in Tab. 2, the execution time remains generally very small and reasonable for all the considered solvers.

Figure 1: Performance profiles

6.3 Real-world applications

As mentioned earlier, we conclude this section by providing additional details on the two real-world engineering problems: 'Disc Brake' and 'Welded Beam'.

6.3.1 'Disc Brake' design problem

This problem consists in minimizing simultaneously the mass of the brake and the stopping time. The variables represent the inner radius of the disc, the outer radius, the engaging force, and the number of friction surfaces.
The constraints for the design include a minimum distance between the inner and outer radii, a maximum allowable brake length, and limitations related to pressure, temperature, and torque. The problem is a nonlinearly constrained bicriteria problem (see [35] for further details), formally defined as follows:

(Disc Brake) Minimize ( 4.9 × 10^−5 (x2^2 − x1^2)(x4 − 1), 9.82 × 10^6 (x2^2 − x1^2) / (x3 x4 (x2^3 − x1^3)) ),
subject to
20 − (x2 − x1) ≤ 0,
2.5 (x4 + 1) − 30 ≤ 0,
x3 / (3.14 (x2^2 − x1^2)) − 0.4 ≤ 0,
2.22 x3 (x2^3 − x1^3) / (10^3 (x2^2 − x1^2)^2) − 1 ≤ 0,
2.66 x3 x4 (x1^3 − x2^3) / (10^2 (x2^2 − x1^2)^2) + 900 ≤ 0,
55 ≤ x1 ≤ 80, 75 ≤ x2 ≤ 110, 10^3 ≤ x3 ≤ 3 × 10^3, 2 ≤ x4 ≤ 20.

Figure 2: Best Pareto front approximations for the 'Disc Brake' design problem by GRJ, ZMO, MOSQP and NSGA-II

The graphical representation of the approximate Pareto fronts in Fig. 2 obtained by the four methods highlights the superior ability of GRJ and NSGA-II to explore the Pareto front of the 'Disc Brake' problem, in contrast to ZMO and MOSQP, which face serious difficulties. Nevertheless, it is worth noting that MOSQP slightly outperforms ZMO on this example, as it manages to find some relatively interesting solutions.

6.3.2 'Welded Beam' design problem

This problem involves minimizing both the cost of fabrication and the deflection of the beam under an applied load, subject to some constraints. The two objectives are inherently conflicting, since reducing deflection generally leads to higher manufacturing costs, primarily due to setup, material, and welding labour costs. The design involves four decision variables and four nonlinear constraints: shear stress, normal stress, weld length, and buckling limitations (see [35] for more details). Its formal definition is as follows:

(Welded Beam) Minimize ( 1.10471 x1^2 x2 + 0.04811 x3 x4 (14 + x3), 2.1952/(x3^3 x4) ),
subject to
√( τ′(x)^2 + τ″(x)^2 + x2 τ′(x)τ″(x) / √(0.25(x2^2 + (x1 + x3)^2)) ) ≤ 13600,
1/(x3^3 x4) ≤ 3/504,
x3 x4^3 (0.0282346 x3 − 1) ≤ −(6 × 10^3)/64746.022,
x1 ≤ x4,
0.125 ≤ x1, x4 ≤ 5, 0.1 ≤ x2, x3 ≤ 10,

where τ′(x) = (6 × 10^3)/(√2 x1 x2) and

τ″(x) = 3 × 10^3 (14 + 0.5 x2) √(0.25(x2^2 + (x1 + x3)^2)) / [ 0.707 x1 x2 (x2^2/12 + 0.25 (x1 + x3)^2) ].

Figure 3: Best Pareto front approximations for the 'Welded Beam' design problem by GRJ, ZMO, MOSQP and NSGA-II

In the context of the 'Welded Beam' problem, as shown in Fig. 3, all methods succeed in approximating significant regions of the Pareto front. However, the resulting dispersions differ from one method to another, so that none can be objectively favoured in this regard. In terms of convergence, it is also observed that the ZMO method is dominated by the other methods, which remain in strong competition with each other. This clearly demonstrates the effectiveness of our GRJ method in solving real-world problems. Moreover, the consistent performance of GRJ across various metrics highlights its robustness and reliability as a practical optimization approach.

Statements and Declarations. The authors have no pertinent declarations concerning conflicts of interest, financial or non-financial interests, competing interests or other statements to disclose.

References

[1] Abadie, J., Carpentier, J.: Generalization of the Wolfe reduced gradient method to the case of nonlinear constraints. In: Fletcher, R. (ed.) Optimization, pp. 37-47. Academic Press, London and New York (1969)
[2] Audet, C., Bigeon, J., Cartier, D., et al.: Performance indicators in multiobjective optimization. Eur. J. Oper. Res. 292, 397-422 (2021)
[3] Bandyopadhyay, S., Pal, S.K., Aruna, B.: Multiobjective GAs, quantitative indices, and pattern classification. IEEE Trans. Syst. Man Cybern. B Cybern. 34, 2088-2099 (2004)
[4] Bazaraa, M.S., Sherali, H.D., Shetty, C.M.: Nonlinear Programming: Theory and Algorithms. John Wiley and Sons, New York (2006)
[5] Cocchi, G., Lapucci, M.: An augmented Lagrangian algorithm for multi-objective optimization. Comput. Optim. Appl. 77, 29-56 (2020)
[6] Collette, Y., Siarry, P.: Optimisation Multiobjectif. Eyrolles, Paris (2002)
[7] Das, I., Dennis, J.E.: A closer look at drawbacks of minimizing weighted sums of objectives for Pareto set generation in multicriteria optimization problems. Structural Optimization 14, 63-69 (1997). https://doi.org/10.1007/BF01197559
[8] Deb, K.: Multi-objective genetic algorithms: problem difficulties and construction of test problems. Evolutionary Computation 7, 205-230 (1999)
[9] Deb, K., Pratap, A., Meyarivan, T.: Constrained test problems for multi-objective evolutionary optimization. In: Zitzler, E., et al. (eds.) Proceedings of the First International Conference on Evolutionary Multi-Criterion Optimization (EMO-01), 284-298 (2001)
[10] Deb, K., Pratap, A., Agarwal, S., Meyarivan, T.: A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 6, 182-197 (2002)
[11] Deb, K., Thiele, L., Laumanns, M., Zitzler, E.: Scalable Test Problems for Evolutionary Multiobjective Optimization. Springer, London (2005)
[12] Dolan, E.D., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. 91, 201-213 (2002)
[13] Drummond, L.M.G., Iusem, A.N.: A projected gradient method for vector optimization problems. Comput. Optim. Appl. 28, 5-29 (2004)
[14] Eichfelder, G.: An adaptive scalarization method in multiobjective optimization. SIAM J. Optim. 19(4), 1694-1718 (2009)
[15] El Maghri, M.: A free reduced gradient scheme. Asian Journal of Mathematics and Computer Research 9, 228-239 (2016)
[16] El Maghri, M., Elboulqe, Y.: Reduced Jacobian method. Journal of Optimization Theory and Applications 179, 917-943 (2018)
[17] El Maghri, M., Elboulqe, Y.: Correction to: Reduced Jacobian method. Journal of Optimization Theory and Applications 187, 304-304 (2020)
[18] El Maghri, M., Elboulqe, Y.: A reduced Jacobian method with full convergence property. Optimization Letters 18, 1647-1671 (2024)
[19] Fan, Z., Li, W., Cai, X., et al.: An improved epsilon constraint-handling method in MOEA/D for CMOPs with large infeasible regions. Soft Comput. 23, 12491-12510 (2019). https://doi.org/10.1007/s00500-019-03794-x
[20] Fliege, J., Svaiter, B.F.: Steepest descent methods for multicriteria optimization. Mathematical Methods of Operations Research 51, 479-494 (2000)
[21] Fliege, J., Drummond, L.M.G., Svaiter, B.F.: Newton's method for multiobjective optimisation. SIAM Journal on Optimization 20, 602-626 (2009)
[22] Fliege, J., Vaz, A.I.F.: A method for constrained multiobjective optimization based on SQP techniques. SIAM J. Optim. 24, 2091-2119 (2016)
[23] García-Palomares, U.M., Burguillo-Rial, J.C., González-Castaño, F.J.: Explicit gradient information in multiobjective optimization. Operations Research Letters 36, 722-725 (2008)
[24] Goh, C.J., Yang, X.Q.: Duality in Optimization and Variational Inequalities. Taylor and Francis, London (2002)
[25] Hock, W., Schittkowski, K.: Test Examples for Nonlinear Programming Codes. Springer-Verlag, Berlin (1981)
[26] Hwang, C.L., Masud, A.S.M.: Multiple Objective Decision Making Methods and Applications: A State-of-the-Art Survey. Springer Science & Business Media, Berlin (1979)
[27] Leyffer, S.: A note on multiobjective optimization and complementarity constraints. Preprint ANL/MCS-P1290-0905, Mathematics and Computer Science Division, Argonne National Laboratory, Illinois (2005)
[28] Mangasarian, O.L.: Nonlinear Programming. SIAM, Philadelphia (1994)
[29] Miglierina, E., Molho, E., Recchioni, M.: Box-constrained multi-objective optimization: A gradient-like method without "a priori" scalarization. Eur. J. Oper. Res. 188, 662-682 (2008)
[30] Mokhtar-Kharroubi, H.: Sur la convergence théorique de la méthode du gradient réduit généralisé. Numerische Mathematik 31, 13-35 (1980)
[31] Morovati, V., Pourkarimi, L.: Extension of Zoutendijk method for solving constrained multiobjective optimization problems. Eur. J. Oper. Res. 273, 44-57 (2019)
[32] Mukai, H.: Algorithms for multicriterion optimization. IEEE Transactions on Automatic Control 25, 177-186 (1980)
[33] Pérez, L.R.L., Prudente, L.F.: Nonlinear conjugate gradient methods for vector optimization. SIAM Journal on Optimization 28, 2690-2720 (2018)
[34] Qu, S., Goh, M., Chan, F.T.S.: Quasi-Newton methods for solving multiobjective optimization. Operations Research Letters 39, 397-399 (2011)
[35] Ray, T., Liew, K.M.: A swarm metaphor for multiobjective design optimization. Engineering Optimization 34, 141-153 (2002)
[36] Schittkowski, K.: On the convergence of a generalized reduced gradient algorithm for nonlinear programming problems. Optimization 17, 731-755 (1986)
[37] Smeers, Y.: Generalized reduced gradient method as an extension of feasible direction methods. Journal of Optimization Theory and Applications 22, 209-226 (1977)
[38] Steuer, R.E.: Multiple Criteria Optimization: Theory, Computation, and Application. Wiley Series in Probability and Mathematical Statistics (1986)
[39] Thang, T.N., Luc, D.T., Kim, N.T.B.: Solving generalized convex multiobjective programming problems by a normal direction method. Optimization 65, 2269-2292 (2016)
[40] Van Veldhuizen, D.A., Lamont, G.B.: On measuring multiobjective evolutionary algorithm performance. Proc. 2000 IEEE Congress on Evolutionary Computation (2000). https://doi.org/10.1109/CEC.2000.870296
[41] Wang, J.-F., Renaud, J.E.: Automatic differentiation in multi-objective collaborative optimization. In: Proceedings of the 3rd WCSMO, Buffalo, New York, May (1999)
[42] Wolfe, P.: Methods of nonlinear programming. In: Graves, R.L., Wolfe, P. (eds.) Recent Advances in Mathematical Programming, pp. 67-86. McGraw-Hill, New York (1963)
[43] Zhou, A., Jin, Y., Zhang, Q., Sendhoff, B., Tsang, E.: Combining model-based and genetics-based offspring generation for multi-objective optimization using a convergence criterion. In: IEEE International Conference on Evolutionary Computation, 892-899 (2006). https://doi.org/10.1109/CEC.2006.1688406
[44] Zhou, Y., Xiang, Y., He, X.: Constrained multiobjective optimization: Test problem construction and performance evaluations. IEEE Transactions on Evolutionary Computation 25, 172-186 (2021)
For instance, in [16], we proposed a direct extension of the Wolfe reduced gradient method (RGM) [42] (see also [15] for a new RGM variant) to the so-called reduced Jacobian method (RJM) for solving linearly constrained MOPs. We also refer the reader to [18], where we introduced an RJM variant that possesses the full convergence property. In the present study, we build upon our previous work by focusing on the so-called generalized reduced gradient (GRG) method, originally introduced by Abadie and Carpentier [1], which extends Wolfe's reduction strategy to problems with nonlinear constraints. Accordingly, we propose a novel approach, which we term the generalized reduced Jacobian (GRJ) method, to effectively handle nonlinear constraints in multicriteria optimization. The GRG method is widely recognized as one of the fundamental and practical approaches to nonlinear mathematical programming. Its convergence properties were rigorously established in subsequent works by Smeers [37], MokhtarKharroubi [30], and Schittkowski [36]. As far as we are concerned, our investigations cover both the theoretical and algorithmic aspects of this method. Abadie-Carpentier's reduction strategy, based on the well-known implicit function theorem, is adopted in this work, thereby enabling a complete characterization of Pareto-KKT stationarity through multiobjective reduced descent directions. It is subsequently proven that such directions guarantee simultaneous descent of all criteria and ensure the existence of Armijo-like steplengths, while preserving feasibility throughout the process. From a computational point of view, a simple direction-finding subproblem is introduced to iteratively compute the reduced directions. Under mild assumptions, the resulting GRJ algorithm is proven to be globally convergent to a Pareto KKT-stationary point in the sense of accumulation points. Finally, numerical experiments are reported, illustrating the effectiveness of the GRJ method compared with other methods, including a Zoutendijk-like method [31], an SQP-type approach [22], and the well-known evolutionary method NSGA-II [10]. 2 Efficiency and multiobjective descent We are dealing with the following nonlinearly constrained multicriteria optimization problem: (MOP) Min F(x) subject to G(x) = 0, a ≤x ≤b, where F, G : x ∈Rn 7→ F(x), G(x) = f1(x), . . . , fr(x) , g1(x), . . . , gm(x) ∈Rr × Rm are two vector-valued functions continuously differentiable on the feasible set denoted by S, and, a = (a1, . . . , an), b = (b1, . . . , bn) ∈Rn. For y = (y1, . . . , yp) ∈Rp and y′ = (y′ 1, . . . , y′ p) ∈Rp, y ≤y′ (resp. y 0, ∀t ∈]0, tf] , x + td ∈S; (b) a tangent direction to S at x ∈S, if ∃(dk)k ⊂Rn, ∃(tk)k ⊂]0, +∞[ such that lim k→+∞dk = d, lim k→+∞tk = 0 and x + tkdk ∈S (∀k); (c) a descent direction of F at x ∈S, if ∃td > 0, ∀t ∈]0, td] , F(x + td) 0, i ∈N . (9) Then, (i) tN > 0. (ii) t ≤tN ⇐⇒aN ≤xN + tdN ≤bN. If furthermore aB 0; otherwise, tN = (bi -xi)/di, which in accordance with (8), entails that bi -xi > 0, thereby ensuring tN > 0. (ii) Let us show the first implication. Let i ∈N. The assertion is obvious if di = 0. Now, if di > 0, then ai ≤xi + tdi and (bi -xi)/di ≥tN ≥t, so that bi ≥xi + tdi. The case di 0, then the hypothesis xi + tdi ≤bi implies that t ≤(bi -xi)/di, which shows that t ≤tN. (iii) By definition of ψ : W -→V (see (6)) and the continuity of t 7-→xN + tdN at 0, there exists t1 > 0 such that xN + tdN ∈W for all t ∈[0, t1]. It follows by (6) that ∀t ∈[0, t1] , G (ψ(xN + tdN), xN + tdN) = 0. 
On the other hand, by assertion (ii), we have that ∀t ∈[0, tN] , aN ≤xN + tdN ≤bN. generalized reduced jacobian method 7 So, it remains to show that the box-constraint also holds for the basic variables. Indeed, by nondegeneracy assumption and continuity, it is immediate that ∃t2 > 0, ∀t ∈[0, t2] , aB 0, -φ(bi -xi) UN(x)T λ(x) i , else. (13) The lemma below characterizes the optimal solutions of (Px), which will be crucial not only for proving the descent properties of the vector dN, but also for the convergence analysis of the resulting algorithm. Its proof is omitted, as it is similar to the one of [16, Lemma 3.2]. Lemma 4.1 λ(x) ∈Λ is an optimal solution of (Px), iff δN(λ(x), x) · UN(x)T λ ≤-2f (λ(x), x) (∀λ ∈Λ), or equivalently, δN(λ(x), x) · uj N(x) ≤-2f (λ(x), x) (j = 1, . . . , r). The descent properties are stated as follows: Proposition 4.1 (i) dN = 0 ⇐⇒f(λ(x), x) = 0. (ii) dN ̸= 0 =⇒dN is a multiobjective reduced descent direction of F at x. If moreover (ACQ) holds and x is nondegenerate, then x is not Pareto KKT-stationary for (MOP). (iii) dN = 0 =⇒x is Pareto KKT-stationary for (MOP). (iv) The functional x 7→f(λ(x), x) is continuous on S provided φ is continuous. Proof (i) To show the direct implication, observe that as φ ≥0, then dN = 0 implies that, ∀i ∈N, j φ (bi -xi) UN(x)T λ(x) i k -= j φ (xi -ai) UN(x)T λ(x) i k +. If we suppose that the last two terms are not equal to zero, we would have φ (bi -xi) UN(x)T λ(x) i 0, which would mean that φ (bi -xi) and φ (xi -ai) have not the same sign contradicting φ ≥0. Thus, ∀i ∈N, j φ (bi -xi) UN(x)T λ(x) i k -= j φ (xi -ai) UN(x)T λ(x) i k + = 0, and this shows that f(λ(x), x) = 0. The reverse implication follows straightforwardly from the positivity of φ and (13). (ii) If dN ̸= 0, then by assertion (i), f(λ(x), x) > 0. It follows, by Lemma 4.1, that dN · uj N(x) ≤-2f(λ(x), x) 0, where tk N := min ai -xk i dk i : dk i 0, i ∈N . Proof (a) Let λ∗= lim k∈K k→+∞ λ(xk), then λ∗∈Λ since λ(xk) k∈K ⊂Λ which is closed. According to the hypothesis and Lemma 4.1, for all k ∈K, we have δN(λ(xk), xk) · uj N(xk) ≤-2f(λ(xk), xk) (j = 1, . . . , r), Taking into account that the mappings (λ, x) 7-→f(λ, x) and (λ, x) 7-→δN(λ, x) are continuous, by passing onto the limit, when k ↗+∞in the previous inequality, we obtain that δN(λ∗, x∗) · uj N(x∗) ≤-2f(λ∗, x∗) (j = 1, . . . , r), Since by closeness of S, x∗remains feasible as (xk)k, this shows applying once again Lemma 4.1 that λ∗∈argmin(Px∗), i.e., λ∗= λ(x∗), which proves the assertion. (b) To prove this assertion, we shall proceed by a contradiction way by assuming that ̄t = 0 (because ̄t ≥0). Then, it would exist K0 ⊆K an infinite subset such that tk N -→ k∈K0 0. Since the 3As pointed out in [18], the hypothesis (H) was inadvertently omitted in the proof of [16, Theorem 4.1]. generalized reduced jacobian method 11 index set N is finite, we can assume (without loss of generality) that one of the two following cases would happen: ∃i0 ∈N, ∀k ∈K0, tk N = ai0 -xk i0 dk i0 , dk i0 0. In the first case, following (13), we would have ∀k ∈K0, tk N = 1 (UN(xk)T λ(xk))i0 " xk i0 -ai0 φ(xk i0 -ai0) # . (14) Now, by assertion (a), dk i0 := -φ(xk i0 -ai0) UN(xk)T λ(xk) i0 -→ k∈K0 d∗ i0, and, by the contradiction hypothesis, tk N -→ k∈K0 0. So we would have that the sequence xk i0 -ai0 k∈K0 also converges to 0; i.e., xk i0 -→ k∈K0 ai0. 
But this would imply that φ(xk i0 -ai0) -→ k∈K0 φ(x∗ i0 -ai0) = φ(0) = 0, and, by letting k ↗+∞in (14), we would obtain the contradiction tk N -→ k∈K0 1 (UN(x∗)T λ∗)i0 × 1 φ′(0) ̸= 0, The second case is similar. Hence, ̄t > 0. □ Let us go back to the proof of Theorem 5.1(ii). By hypothesis, there exists a subsequence (xk)k∈K -→x∗with K ⊆N an infinite subset. By closeness of S, x∗remains feasible as (xk)k. Now, by continuity of F and the assertion (i), it follows that the sequence F(xk) k∈N -→F(x∗). On the other hand, since the index set Bk is in the finite set {1, . . . , n}, we can assume (without loss of generality) that Bk = B (hence Nk = N) for all k ∈K. Now, by Armijo's condition: ∀k ∈N, F(xk+1) -F(xk) 0 (e.g., ε = 10-6) and a maximum number of iterations L ∈N (e.g., L = 200). - Set t = tN, y1 B = xB and l = 1. Step 1: Newton rule. - Set yl+1 B = yl B -A-1 B yl B, xN + tdN G yl B, xN + tdN . Step 2: Feasible Armijo test. - If G yl+1 B , xN + tdN < ε, aB ≤yl+1 B ≤bB, and F yl+1 B , xN + tdN < F(x) + βtUNdN, then STOP: the current implicit basic vector is set as ψ (xN + tdN) = yl+1 B . Step 3: Update. - If l = L, set t = t 2, y1 B = xB and l = 1; otherwise, set l = l + 1. - Return to Step 1. 14 m. el maghri, y. elboulqe 6.2 Performance measures and interpretation To thoroughly analyse and compare the results obtained by the solvers, a set of performance metrics, as suggested in the literature (see, e.g., [2]), may be employed. These metrics essentially assess two key aspects: the convergence of the approximate solutions towards the Pareto front and their diversity. In this work, we have chosen three performance measures: Purity (P) [3], Spread (∆∗) [43] and Generational distance (GD) [40]. These three measures use, as a proxy for the true Pareto front, the so-called reference Pareto front, defined as: Fp = ( y∗∈ [ s∈S Fp,s : ∄y ∈ [ s∈S Fp,s, y < y∗ ) , where Fp,s stands for the approximated Pareto front of problem p ∈P provided by method s ∈S; P denotes the set of tested problems and S the set of considered solvers. -Purity (P). This metric consists in measuring the proportion of points in Fp,s admitted in Fp: Pp,s = |Fp,s ∩Fp| |Fp,s| . Clearly Pp,s ∈[0, 1] and the extreme values are significant in the sense that a value Pp,s close to 1 indicates better performance, while a value equal to 0 implies that the solver is unable to generate any point of Fp. -Spread (∆∗). This metric measures the diversity and the dispersion of the points in Fp,s with respect to Fp: ∆∗ p,s = rP j=1 d y∗ j , Fp,s + P y∈Fp d (y, Fp,s \ {y}) - ̄d rP j=1 d y∗ j , Fp,s + |Fp| ̄d , where d(y, A) = mina∈A ∥y -a∥is the Euclidean distance from the vector y to the set A; here, y∗ j denotes the minimum value of the criterion fj in Fp, and ̄d represents the average distance between each solution y ∈Fp and the set Fp,s \ {y}. Note that lower values of ∆∗ p,s reflects more uniform distributions of the generated solutions. -Generational Distance (GD). Measuring the convergence, this metric represents how far Fp,s is from Fp: GDp,s = 1 |Fp,s| s X y∈Fp,s d2 (y, Fp). Obviously, lower values of GDp,s are requested. Table 2 lists the numerical results obtained by the four considered methods. The column 'CPU' indicates the average execution time (in seconds). The best scores are in bold. To analyse these results and look for significant differences between the compared solvers, the performance profile is used (see, e.g., [12]). 
Recall that a profile is a graphical representation of the (cumulative) distribution function ρ whose outputs are a solver's scores on the set of test problems against a performance metric. More precisely, given the measure mp,s (set here to 1/P, ∆∗, GD or CPU) by solver s ∈S for solving p ∈P, the corresponding distribution function is given formally by ρs = |{p ∈P : rp,s ≤α}| |P| , where rp,s = mp,s/ min {mp,s : s ∈S}. The purity measure mp,s is set to 1/P in order that all the considered metrics exhibit the same asymptotic behaviour, in a sense that the smaller the measure, the better the solver. Note that at the threshold α = 1, ρs(α) gives us the largest number of problems among the best solved by s according to the analysed performance. However, a value ρs(α) attaining 1 means that all the problems p ∈P have been solved by solver s at the threshold α. Thus, the best overall performance of a solver is that which reaches the value 1 for the smallest value of α. generalized reduced jacobian method 15 Problems GRJ ZMO CPU P ∆⋆ GD CPU P ∆⋆ GD ABC comp 0.015935 0.99 0.902345 0.0000076 0.021454 1 0.953210 0 BNH 1.436124 0.915 0.810000 0.0000086 0.004504 1 0.995432 0 CF11 0.415047 0.885 0.920000 0 0.058855 0.61 0.902345 0 CPT1 1.712818 0.97 0.765432 0.000003 0.055902 0.98 0.895432 0.000003 Disc Brake 1.242267 0.69 0.310123 0.0031797 1.817904 0.035 0.860987 0.004630 DTLZ0 0.410519 0.805 0.705678 0.000615 0.000827 0 0.765432 0.001699 DTLZ9 0.209932 0.9 0.254321 0 1.004320 0.08 0.989765 0.000092 EL3 0.005883 1 0.285432 0 0.005889 1 0.925678 0 ex001 0.112501 1 0.415098 0 0.020554 0.33 0.350987 0.276242 ex002 0.027517 0.46 0.785432 0.0027654 0.010278 0 0.825678 0.042952 ex003 0.044351 0.535 0.811023 0.0003054 0.038164 0.565 0.985432 0.000682 GE1 0.018449 0.995 0.870987 0 0.002708 0.995 0.990987 0 GE4 0.048618 0.985 0.632098 0.0000041 0.037197 0.99 0.991098 0.000004 Hanne4 0.024853 0.56 0.707098 0.0004533 0.030180 0.46 0.998098 0.000614 hs05x 1.008168 0.975 0.980987 0.0006024 0.006076 0.01 0.970987 0.019143 LAP1 1.822010 0.755 0.505432 0.000108 0.028570 0.965 0.640987 0.000128 LAP2 0.087737 0.87 0.180123 0.0058377 0.082840 1 0.995432 0 LIR-CMOP1 0.389043 0.837 0.840987 0.000219 0.035736 0 0.965098 0.003135 LIR-CMOP2 0.135677 0.9 0.750987 0.000038 0.135736 0.02 0.997453 0.003135 LIR-CMOP3 0.031187 0.995 0.509871 0 0.010200 0.62 0.740987 0.000065 liswetm 0.154443 0.61 0.150987 0.000048 0.015055 0 0.930987 0.000815 MOLPg 001 0.184877 0.91 0.955432 0.000004 0.550163 0.81 0.980987 0.000005 MOLPg 002 1.200010 0.625 0.740987 0.000758 0.015298 0 0.720987 0.040700 MOLPg 003 1.005378 0.99 0.700987 0 0.042475 0 0.560987 0.021736 OSY 1.539581 0.47 0.923444 0.003442 0.061214 0 0.781341 0.019456 SRN 1.937231 0.945 0.860987 0 0.010315 0.9 0.940987 0.006855 Tamaki 1.541475 0.83 0.700987 0 0.002469 1 0.530987 0 TLK1 0.013301 0.42 0.840987 0.000101 0.033947 0.39 0.950097 0.000080 TNK 0.459437 0.395 0.830987 0.0003194 0.013540 0.275 0.760987 0.004407 Welded Beam 1.137963 0.91 0.740987 0.0033562 13.20645 0 0.960987 1.226983 Problems MOSQP NSGA-II CPU P ∆⋆ GD CPU P ∆⋆ GD ABC comp 0.020315 0.99 0.980987 0.000017 0.857500 0.41 0.970000 0.000906 BNH 0.163576 0.105 0.992109 0.010358 0.013100 0.84 0.780000 0.001654 CF11 0.011990 0.59 1.000000 0 0.021800 1 0.994567 0 CPT1 0.017768 0.74 0.900000 0.000025 0.191800 1 1.000000 0 Disc Brake 0.021562 0 0.360000 0.006729 0.0084 0.84 0.840012 0.001951 DTLZ0 0.014875 0.495 0.960000 0.000528 0.0094 0.485 0.610000 0.003421 DTLZ9 0.395620 0.93 0.840000 0.005980 0.3825431 0.08 0.070981 
0.000092 EL3 0.012636 0 0.659765 0.010208 0.0143 0.99 0.591220 0.000028 ex001 0.087176 0 1.000000 1.875443 4.9669 0 0.991197 0 ex002 0.029159 0.55 0.967787 0.001515 0.038400 0.085 1.000000 0.021188 ex003 0.065472 0.395 0.793245 0.000364 0.147100 0 0.985023 0.000061 GE1 0.031198 0.995 1.000000 0 0.010200 0.62 0.760000 0.000040 GE4 0.019889 0.97 0.992501 0 0.01200 0.87 0.570000 0.000458 Hanne4 0.022001 0.355 0.980000 0.000308 0.014400 0.775 0.900000 0.000643 hs05x 0.056837 1 0.990000 0 3.059300 0.941 0.905401 0.010439 LAP1 0.023713 0.255 0.970350 0.000203 0.010900 0.78 0.870000 0.000176 LAP2 0.024061 0.27 0.790000 0.006000 0.016500 0.075 0.960000 0.059720 LIR-CMOP1 0.059540 0.8 0.806701 0.000053 0.1824925 0.22 1.000000 0.000002 LIR-CMOP2 0.236534 0.53 0.950276 0.000022 2.1357355 0.1 0.993204 0.041562 LIR-CMOP3 0.323865 0.76 0.805467 0.000376 0.010200 0.62 0.760000 0.000065 liswetm 0.016368 0.385 1.000000 0.000410 1.119700 0.515 0.830000 0.000244 MOLPg 001 0.1888712 0.96 0.934124 0.000234 0.210634 0.9 0.993204 0 MOLPg 002 0.017967 0.68 0.995119 0 1.95200 0.075 0.703005 0.075590 MOLPg 003 0.032949 0.79 0.968578 0.001239 1.693900 0.19 0.503204 0.042700 OSY 1.582346 0.525 0.998123 0.001645 1.837200 0.25 0.934551 0.001930 SRN 0.036814 0.945 0.967634 0 2.393800 0.685 0.773497 0.004229 Tamaki 0.02418 0.9 0.843457 0 0.017700 0.99 0.952078 0.000007 TLK1 0.019948 0.225 0.996780 0.000042 0.959700 0.985 0.990840 0.000004 TNK 1.014764 0.09 1.000000 0.000245 0.97000 0.33 1.000000 0.003962 Welded Beam 0.024213 0.05 0.999732 0.178934 0.0875 0.97 0.950000 0.000621 Table 2: Multiobjective performance measurements 16 m. el maghri, y. elboulqe As illustrated in Fig. 1, the GRJ method consistently demonstrates good performance with respect to the P metric, compared to the other methods. Indeed, the best value ρ(α) = 1 is achieved by GRJ at a relatively small threshold α, whereas the other methods fail to reach this value. This observation is further supported by the results in Tab. 2, where the purity values reported for the ZMO, MOSQP and NSGA-II methods are in some cases very close or equal to zero. This suggests that these methods fail to reach the reference Pareto front for certain problems. In terms of the ∆∗and GD metrics, the four methods appear to be generally comparable. Although NSGA-II is widely recognized for its effectiveness in spread, the observed profile indicates good overall performance across all solvers, with a slight advantage noted for GRJ and NSGA-II. Similar observations can be made regarding the GD metric, noting this time a slight advantage of the three methods over NSGA-II, which is well known for its limited convergence toward the Pareto front. Regarding CPU time, as shown in Fig. 1 and Tab. 2, the three methods exhibit a slight speed advantage over GRJ. This can be attributed to the inherent characteristics of the GRJ algorithm and its operational mechanism. More specifically, the algorithm's iterative process and heuristic strategies, such as the basis selection procedure in Step 1 and the line search procedure in Step 5 (as previously described), can be time-consuming. Nevertheless, these components enable efficient exploration of the solution space, which helps to explain its performance differences relative to the other methods. The graphical representation of the approximated Pareto fronts illustrated in Fig. 2-3 also supports these interpretations. Note also that ZMO generally appears to be faster than MOSQP based on the scores given in Tab. 
This single outlier may explain why MOSQP's CPU performance profile overtakes ZMO's in Fig. 1. However, as reported in Tab. 2, the execution times remain generally very small and reasonable for all the considered solvers.

Figure 1: Performance profiles

6.3 Real-world applications

As mentioned earlier, we conclude this section by providing additional details on the two real-world engineering problems: 'Disc Brake' and 'Welded Beam'.

6.3.1 'Disc Brake' design problem

This problem consists in minimizing simultaneously the mass of the brake and the stopping time. The variables represent the inner radius of the disc, the outer radius, the engaging force, and the number of friction surfaces. The constraints for the design include a minimum distance between the inner and outer radii, a maximum allowable brake length, and limitations related to pressure, temperature, and torque. The problem is a nonlinearly constrained bicriteria problem (see [35] for further details), formally defined as follows:
\[
\begin{aligned}
\text{(Disc Brake)}\quad \text{Minimize}\ & \left( 4.9\times 10^{-5}\,(x_2^2 - x_1^2)(x_4 - 1),\ \frac{9.82\times 10^{6}\,(x_2^2 - x_1^2)}{x_3 x_4\,(x_2^3 - x_1^3)} \right),\\
\text{subject to}\ & 20 - (x_2 - x_1) \le 0,\\
& 2.5\,(x_4 + 1) - 30 \le 0,\\
& \frac{x_3}{3.14\,(x_2^2 - x_1^2)} - 0.4 \le 0,\\
& \frac{2.22\,x_3\,(x_2^3 - x_1^3)}{10^{3}\,(x_2^2 - x_1^2)^2} - 1 \le 0,\\
& 900 - \frac{2.66\,x_3 x_4\,(x_2^3 - x_1^3)}{10^{2}\,(x_2^2 - x_1^2)^2} \le 0,\\
& 55 \le x_1 \le 80,\ \ 75 \le x_2 \le 110,\ \ 10^3 \le x_3 \le 3\times 10^3,\ \ 2 \le x_4 \le 20.
\end{aligned}
\]

Figure 2: Best Pareto front approximations for the 'Disc Brake' design problem by GRJ, ZMO, MOSQP and NSGA-II

The graphical representation of the approximate Pareto fronts in Fig. 2 obtained by the four methods highlights the superior ability of GRJ and NSGA-II to explore the Pareto front for the 'Disc Brake' problem, in contrast to ZMO and MOSQP, which face serious challenges. Nevertheless, it is worth noting that MOSQP slightly outperforms ZMO in this example, as it manages to find some relatively interesting solutions.

6.3.2 'Welded Beam' design problem

This problem involves minimizing both the cost of fabrication and the deflection of the beam under an applied load subject to some constraints. The two objectives are inherently conflicting, since reducing deflection generally leads to higher manufacturing costs, primarily due to setup, material, and welding labor costs. The design involves four decision variables and four nonlinear constraints: shear stress, normal stress, weld length, and buckling limitations (see [35] for more details). Its formal definition is given as follows:
\[
\begin{aligned}
\text{(Welded Beam)}\quad \text{Minimize}\ & \left( 1.10471\,x_1^2 x_2 + 0.04811\,x_3 x_4 (14 + x_3),\ \frac{2.1952}{x_3^3 x_4} \right),\\
\text{subject to}\ & \sqrt{\tau'(x)^2 + \tau''(x)^2 + \frac{x_2\,\tau'(x)\,\tau''(x)}{\sqrt{0.25\,(x_2^2 + (x_1 + x_3)^2)}}} \le 13600,\\
& \frac{1}{x_3^2 x_4} \le \frac{30}{504},\\
& x_3 x_4^3\,(0.0282346\,x_3 - 1) \le -\frac{6\times 10^{3}}{64746.022},\\
& x_1 \le x_4,\ \ 0.125 \le x_1, x_4 \le 5,\ \ 0.1 \le x_2, x_3 \le 10,
\end{aligned}
\]
where
\[
\tau'(x) = \frac{6\times 10^{3}}{\sqrt{2}\,x_1 x_2}, \qquad
\tau''(x) = \frac{3\times 10^{3}\,(14 + 0.5\,x_2)\,\sqrt{0.25\,(x_2^2 + (x_1 + x_3)^2)}}{0.707\,x_1 x_2\left(\dfrac{x_2^2}{12} + 0.25\,(x_1 + x_3)^2\right)}.
\]

Figure 3: Best Pareto front approximations for the 'Welded Beam' design problem by GRJ, ZMO, MOSQP and NSGA-II
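For readers who wish to experiment with this test problem, the following minimal Python sketch evaluates the two objectives and the constraint residuals of the formulation as reconstructed above (the function name, sample point, and the sign convention "g ≤ 0 means feasible" are our own assumptions):

```python
import math

def welded_beam(x1, x2, x3, x4):
    """Objectives and constraint residuals (g <= 0 feasible) for the
    bi-objective welded beam problem as written above."""
    f1 = 1.10471 * x1**2 * x2 + 0.04811 * x3 * x4 * (14 + x3)  # fabrication cost
    f2 = 2.1952 / (x3**3 * x4)                                  # beam deflection

    tau_p = 6e3 / (math.sqrt(2) * x1 * x2)
    r = math.sqrt(0.25 * (x2**2 + (x1 + x3)**2))
    tau_pp = (3e3 * (14 + 0.5 * x2) * r
              / (0.707 * x1 * x2 * (x2**2 / 12 + 0.25 * (x1 + x3)**2)))
    tau = math.sqrt(tau_p**2 + tau_pp**2 + x2 * tau_p * tau_pp / r)

    g = [tau - 13600,                                        # shear stress
         1 / (x3**2 * x4) - 30 / 504,                        # normal stress
         x3 * x4**3 * (0.0282346 * x3 - 1) + 6e3 / 64746.022,  # buckling
         x1 - x4]                                            # weld length
    return (f1, f2), g

(f1, f2), g = welded_beam(1.0, 5.0, 5.0, 1.0)
feasible = all(gi <= 0 for gi in g)
```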
In the context of the 'Welded Beam' problem, as shown in Fig. 3, all methods succeed in approximating significant regions of the Pareto front. However, the resulting dispersions differ from one method to another, so that none can be objectively favoured in this regard. In terms of convergence, it is also observed that the ZMO method is dominated by the other methods, which remain in strong competition with each other. This clearly demonstrates the effectiveness of our GRJ method in solving real-world problems. Moreover, the consistent performance of GRJ across various metrics highlights its robustness and reliability as a practical optimization approach.

Statements and Declarations. The authors have no pertinent declarations concerning conflicts of interest, financial or non-financial interests, competing interests or other statements to disclose.

References

[1] Abadie, J., Carpentier, J.: Generalization of the Wolfe reduced gradient method to the case of nonlinear constraints. In: Fletcher, R. (ed.) Optimization, pp. 37-47. Academic Press, London and New York (1969)
[2] Audet, C., Bigeon, J., Cartier, D., et al.: Performance indicators in multiobjective optimization. Eur. J. Oper. Res. 292, 397-422 (2021)
[3] Bandyopadhyay, S., Pal, S.K., Aruna, B.: Multiobjective GAs, quantitative indices, and pattern classification. IEEE Trans. Syst. Man Cybern. B Cybern. 34, 2088-2099 (2004)
[4] Bazaraa, M.S., Sherali, H.D., Shetty, C.M.: Nonlinear Programming: Theory and Algorithms. John Wiley and Sons, New York (2006)
[5] Cocchi, G., Lapucci, M.: An augmented Lagrangian algorithm for multi-objective optimization. Comput. Optim. Appl. 77, 29-56 (2020)
[6] Collette, Y., Siarry, P.: Optimisation Multiobjectif. Eyrolles, Paris (2002)
[7] Das, I., Dennis, J.E.: A closer look at drawbacks of minimizing weighted sums of objectives for Pareto set generation in multicriteria optimization problems. Structural Optimization 14, 63-69 (1997). https://doi.org/10.1007/BF01197559
[8] Deb, K.: Multi-objective genetic algorithms: problem difficulties and construction of test problems. Evolutionary Computation 7, 205-230 (1999)
[9] Deb, K., Pratap, A., Meyarivan, T.: Constrained test problems for multi-objective evolutionary optimization. In: Zitzler, E., et al. (eds.) Proceedings of the First International Conference on Evolutionary Multi-Criterion Optimization (EMO-01), pp. 284-298 (2001)
[10] Deb, K., Pratap, A., Agarwal, S., Meyarivan, T.: A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 6, 182-197 (2002)
[11] Deb, K., Thiele, L., Laumanns, M., Zitzler, E.: Scalable Test Problems for Evolutionary Multiobjective Optimization. Springer, London (2005)
[12] Dolan, E.D., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. 91, 201-213 (2002)
[13] Drummond, L.M.G., Iusem, A.N.: A projected gradient method for vector optimization problems. Comput. Optim. Appl. 28, 5-29 (2004)
[14] Eichfelder, G.: An adaptive scalarization method in multiobjective optimization. SIAM J. Optim. 19(4), 1694-1718 (2009)
[15] El Maghri, M.: A free reduced gradient scheme. Asian Journal of Mathematics and Computer Research 9, 228-239 (2016)
[16] El Maghri, M., Elboulqe, Y.: Reduced Jacobian method. Journal of Optimization Theory and Applications 179, 917-943 (2018)
[17] El Maghri, M., Elboulqe, Y.: Correction to: Reduced Jacobian method. Journal of Optimization Theory and Applications 187, 304-304 (2020)
[18] El Maghri, M., Elboulqe, Y.: A reduced Jacobian method with full convergence property. Optimization Letters 18, 1647-1671 (2024)
[19] Fan, Z., Li, W., Cai, X., et al.: An improved epsilon constraint-handling method in MOEA/D for CMOPs with large infeasible regions. Soft Comput. 23, 12491-12510 (2019). https://doi.org/10.1007/s00500-019-03794-x
[20] Fliege, J., Svaiter, B.F.: Steepest descent methods for multicriteria optimization. Mathematical Methods of Operations Research 51, 479-494 (2000)
[21] Fliege, J., Drummond, L.M.G., Svaiter, B.F.: Newton's method for multiobjective optimization. SIAM Journal on Optimization 20, 602-626 (2009)
[22] Fliege, J., Vaz, A.I.F.: A method for constrained multiobjective optimization based on SQP techniques. SIAM J. Optim. 24, 2091-2119 (2016)
[23] García-Palomares, U.M., Burguillo-Rial, J.C., González-Castaño, F.J.: Explicit gradient information in multiobjective optimization. Operations Research Letters 36, 722-725 (2008)
[24] Goh, C.J., Yang, X.Q.: Duality in Optimization and Variational Inequalities. Taylor and Francis, London (2002)
[25] Hock, W., Schittkowski, K.: Test Examples for Nonlinear Programming Codes. Springer-Verlag, Berlin (1981)
[26] Hwang, C.L., Masud, A.S.M.: Multiple Objective Decision Making Methods and Applications: A State-of-the-Art Survey. Springer Science & Business Media, Berlin (1979)
[27] Leyffer, S.: A note on multiobjective optimization and complementarity constraints. Preprint ANL/MCS-P1290-0905, Mathematics and Computer Science Division, Argonne National Laboratory, Illinois (2005)
[28] Mangasarian, O.L.: Nonlinear Programming. SIAM, Philadelphia (1994)
[29] Miglierina, E., Molho, E., Recchioni, M.: Box-constrained multi-objective optimization: A gradient-like method without "a priori" scalarization. Eur. J. Oper. Res. 188, 662-682 (2008)
[30] Mokhtar-Kharroubi, H.: Sur la convergence théorique de la méthode du gradient réduit généralisé. Numerische Mathematik 31, 13-35 (1980)
[31] Morovati, V., Pourkarimi, L.: Extension of Zoutendijk method for solving constrained multiobjective optimization problems. Eur. J. Oper. Res. 273, 44-57 (2019)
[32] Mukai, H.: Algorithms for multicriterion optimization. IEEE Transactions on Automatic Control 25, 177-186 (1980)
[33] Pérez, L.R.L., Prudente, L.F.: Nonlinear conjugate gradient methods for vector optimization. SIAM Journal on Optimization 28, 2690-2720 (2018)
[34] Qu, S., Goh, M., Chan, F.T.S.: Quasi-Newton methods for solving multiobjective optimization. Operations Research Letters 39, 397-399 (2011)
[35] Ray, T., Liew, K.M.: A swarm metaphor for multiobjective design optimization. Engineering Optimization 34, 141-153 (2002)
[36] Schittkowski, K.: On the convergence of a generalized reduced gradient algorithm for nonlinear programming problems. Optimization 17, 731-755 (1986)
[37] Smeers, Y.: Generalized reduced gradient method as an extension of feasible direction methods. Journal of Optimization Theory and Applications 22, 209-226 (1977)
[38] Steuer, R.E.: Multiple Criteria Optimization: Theory, Computation, and Application. Wiley Series in Probability and Mathematical Statistics (1986)
[39] Thang, T.N., Luc, D.T., Kim, N.T.B.: Solving generalized convex multiobjective programming problems by a normal direction method. Optimization 65, 2269-2292 (2016)
[40] Van Veldhuizen, D.A., Lamont, G.B.: On measuring multiobjective evolutionary algorithm performance. In: Proc. 2000 IEEE Congress on Evolutionary Computation (2000). https://doi.org/10.1109/CEC.2000.870296
[41] Wang, J.-F., Renaud, J.E.: Automatic differentiation in multi-objective collaborative optimization. In: Proceedings of the 3rd WCSMO, Buffalo, New York, May (1999)
[42] Wolfe, P.: Methods of nonlinear programming. In: Graves, R.L., Wolfe, P. (eds.)
Recent Advances in Mathematical Programming, pp. 67-86. McGraw-Hill, New York (1963)
[43] Zhou, A., Jin, Y., Zhang, Q., Sendhoff, B., Tsang, E.: Combining model-based and genetics-based offspring generation for multi-objective optimization using a convergence criterion. In: IEEE International Conference on Evolutionary Computation, pp. 892-899 (2006). https://doi.org/10.1109/CEC.2006.1688406
[44] Zhou, Y., Xiang, Y., He, X.: Constrained multiobjective optimization: Test problem construction and performance evaluations. IEEE Transactions on Evolutionary Computation 25, 172-186 (2021)
Causal Discovery for Linear DAGs with Dependent Latent Variables via Higher-order Cumulants

Ming Cai(a), Penggang Gao(a), Hisayuki Hara(a,b)

(a) Graduate School of Informatics, Kyoto University, Yoshida Konoe-cho, Kyoto, 606-8501, Japan
(b) Institute for Liberal Arts and Sciences, Kyoto University, Yoshida Nihonmatsu-cho, Kyoto, 606-8501, Japan

Abstract

This paper addresses the problem of estimating causal directed acyclic graphs in linear non-Gaussian acyclic models with latent confounders (LvLiNGAM). Existing methods assume mutually independent latent confounders or cannot properly handle models with causal relationships among observed variables. We propose a novel algorithm that identifies causal DAGs in LvLiNGAM, allowing causal structures among latent variables, among observed variables, and between the two. The proposed method leverages higher-order cumulants of observed data to identify the causal structure. Extensive simulations and experiments with real-world data demonstrate the validity and practical utility of the proposed algorithm.

Keywords: canonical model, causal discovery, cumulants, DAG, latent confounder, Triad constraints

1. Introduction

Estimating causal directed acyclic graphs (DAGs) in the presence of latent confounders has been a major challenge in causal analysis. Conventional causal discovery methods, such as the Peter-Clark (PC) algorithm [1], Greedy Equivalence Search (GES) [2], and the Linear Non-Gaussian Acyclic Model (LiNGAM) [3, 4], focus solely on causal models without latent confounders.

Fast Causal Inference (FCI) [1] extends the PC algorithm to handle latent variables, recovering a partial ancestral graph (PAG) under the faithfulness assumption. However, FCI is computationally intensive and, moreover, often fails to determine the causal directions. Really Fast Causal Inference (RFCI) [5] trades off some independence tests for speed, at the cost of estimation accuracy. Greedy Fast Causal Inference (GFCI) [6] hybridizes GES and FCI but inherits the limitations of FCI.

The assumption of linearity and non-Gaussian disturbances in the causal model enables the identification of causal structures beyond the PAG. The linear non-Gaussian acyclic model with latent confounders (LvLiNGAM) is an extension of LiNGAM that incorporates latent confounders. Hoyer et al. [7] demonstrated that LvLiNGAM can be transformed into a canonical model in which all latent variables are mutually independent and causally precede the observed variables. They proposed estimating the canonical models using overcomplete ICA [8], assuming that the number of latent variables is known. Overcomplete ICA can identify the causal DAG only up to permutations and scaling of the variables, so substantial computational effort is required to identify the true causal DAG from the many candidate models. Another limitation of overcomplete ICA is its tendency to converge to local optima. Salehkaleybar et al. [9] improved the algorithm by reducing the candidate models. Other methods for estimating LvLiNGAM, based on linear regression analysis and independence testing, have also been developed [10, 11, 12, 13]. Furthermore, Multiple Latent Confounders LiNGAM (ML-CLiNGAM) [14] and FRITL [15] initially identify the causal skeleton using a constraint-based method, and then estimate the causal directions of the undirected edges in the skeleton using linear regression and independence tests.
While these methods can identify structures among observed variables that are not confounded by latent variables, they cannot necessarily determine the causal direction between two variables confounded by latent variables. More recently, methods using higher-order cumulants have led to new developments in the identification of canonical LvLiNGAMs. Cai et al. [16] assume that each latent variable has at least three observed children, and that there exists a subset of these children that are not connected by any other observed or latent variables. Cumulants are then employed to identify one-latent-component structures, and latent influences are recursively removed to recover the underlying causal relationships. Chen et al. [17] show that if two observed variables share one latent confounder, the causal direction between them can be identified by leveraging higher-order cumulants. Schkoda et al. [18] introduced ReLVLiNGAM, a recursive approach that leverages higher-order cumulants to estimate canonical LvLiNGAM with multiple latent parents. One strength of ReLVLiNGAM is that it does not require prior knowledge of the number of latent variables.

The methods reviewed so far are estimation methods for the canonical LvLiNGAM. A few methods, however, have been proposed to estimate the causal DAG of LvLiNGAM when latent variables exhibit causal relationships. A variable is said to be pure if it is conditionally independent of other observed variables given its latent parents; otherwise, it is called impure. Silva et al. [19] showed that the latent DAG is identifiable under the assumption that each latent variable has at least three pure children, by employing tetrad conditions on the covariance of the observed variables. Cai et al. [20] proposed a two-phase algorithm, LSTC (learning the structure of latent variables based on Triad Constraints), to identify the causal DAG where each latent variable has at least two children, all of which are pure, and each observed variable has a single latent parent. Xie et al. [21] generalized LSTC and defined the linear non-Gaussian latent variable model (LiNGLaM), where observed variables may have multiple latent parents but no causal edges among them, and proved its identifiability. In [20] and [21], causal clusters are defined as follows:

Definition 1.1 (Causal cluster [20, 21]). A set of observed variables that share the same latent parents is called a causal cluster.

Their methods consist of two main steps: identifying causal clusters and then recovering the causal order of latent variables. LSTC and the algorithm for LiNGLaM estimate clusters of observed variables by leveraging the Triad constraints or the generalized independence noise (GIN) conditions. It is also possible to define clusters in the same manner as Definition 1.1 for models where causal edges exist among observed variables. However, when impure observed variables exist, their method might fail to identify the clusters, resulting in an incorrect estimation of both the number of latent variables and the latent DAGs. Several recent studies have shown that LvLiNGAM remains identifiable even when some observed variables are impure [22, 23, 24]. However, these methods still rely on the existence of at least some pure observed variables in each cluster.

1.1. Contributions

In this paper, we relax the pure observed children assumption of Cai et al. [20] and investigate the identifiability of the causal DAG for an extended model that allows causal structures both among latent variables and among observed variables.
Using higher-order cumulants of the observed data, we show the identifiability of the causal DAG of a class of LvLiNGAM and propose a practical algorithm for estimating this class. The proposed method first estimates clusters using the approaches of [20, 21]. When causal edges exist among observed variables, the clusters estimated by using Triad constraints or GIN conditions may be over-segmented compared to the true clusters. The proposed method leverages higher-order cumulants of observed variables to refine these clusters, estimates causal edges within clusters, determines the causal order among latent variables, and finally estimates the exact causal structure among latent variables. In summary, our main contributions are as follows:

1. Demonstrate identifiability of causal DAGs in a class of LvLiNGAM, allowing causal relationships among latent and observed variables.
2. Extend the causal cluster estimation methods of [20] and [21] to handle cases where directed edges exist among observed variables within clusters.
3. Propose a top-down algorithm using higher-order cumulants to infer the causal order of latent variables.
4. Develop a bottom-up recursive procedure to reconstruct the latent causal DAG from latent causal orders.

The rest of this paper is organized as follows. Section 2 defines the class of LvLiNGAM considered in this study and also summarizes some basic facts on higher-order cumulants. Section 3 describes the proposed method in detail. Section 4 presents numerical simulations to demonstrate the effectiveness of the proposed method. Section 5 evaluates the usefulness of the proposed method by applying it to the Political Democracy dataset [25]. Finally, Section 6 concludes the paper. All proofs of theorems, corollaries, and lemmas in the main text are provided in the Appendices.

2. Preliminaries

2.1. LvLiNGAM

Let X = (X1, . . . , Xp)⊤ and L = (L1, . . . , Lq)⊤ be vectors of observed and latent variables, respectively. In this paper, we identify these vectors with the corresponding sets of variables. Define V = X ∪ L = {V1, . . . , Vp+q}. Let G = (V, E) be a causal DAG. Vi → Vj denotes a directed edge from Vi to Vj. Anc(Vi), Pa(Vi), and Ch(Vi) are the sets of ancestors, parents, and children of Vi, respectively. We use Vi ≺ Vj to indicate that Vi precedes Vj in a causal order. The LvLiNGAM considered in this paper is formulated as
\[
\begin{pmatrix} L \\ X \end{pmatrix}
=
\begin{pmatrix} A & 0 \\ \Lambda & B \end{pmatrix}
\begin{pmatrix} L \\ X \end{pmatrix}
+
\begin{pmatrix} \epsilon \\ e \end{pmatrix},
\tag{2.1}
\]
where A = {a_ji}, B = {b_ji}, and Λ = {λ_ji} are matrices of causal coefficients, while ϵ and e denote vectors of independent non-Gaussian disturbances associated with L and X, respectively. Here, a_ji, λ_ji, and b_ji are the causal coefficients from Li to Lj, from Li to Xj, and from Xi to Xj, respectively. Owing to the arbitrariness of the scale of the latent variables, we may, without loss of generality, set one of the coefficients λ_ji to 1 for some Xj ∈ Ch(Li); this normalization will often be used hereafter. A and B can be transformed into lower triangular matrices by row and column permutations. We assume that the elements of ϵ and e are mutually independent and follow non-Gaussian continuous distributions. Let MG denote the LvLiNGAM defined by G. As shown in (2.1), we assume in this paper that no observed variable is an ancestor of any latent variable.
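To make the data-generating process (2.1) concrete, the following Python sketch draws samples from the model by solving the structural equations, using L = (I_q - A)^{-1}ϵ and X = (I_p - B)^{-1}(ΛL + e). The function name, the toy graph, and the zero-mean exponential disturbance are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_lvlingam(A, Lam, B, n,
                    noise=lambda size: rng.exponential(size=size) - 1.0):
    """Draw n samples of X from the model (2.1) with independent,
    zero-mean, non-Gaussian disturbances."""
    q, p = A.shape[0], B.shape[0]
    eps = noise((n, q))
    e = noise((n, p))
    L = eps @ np.linalg.inv(np.eye(q) - A).T            # L = (I_q - A)^{-1} eps
    X = (L @ Lam.T + e) @ np.linalg.inv(np.eye(p) - B).T  # X = (I_p - B)^{-1}(Lam L + e)
    return X

# toy DAG: one latent variable with three observed children
A = np.zeros((1, 1))
Lam = np.array([[1.0], [1.2], [1.4]])
B = np.zeros((3, 3))
X = sample_lvlingam(A, Lam, B, n=10_000)
```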
Consider the following reduced form of (2.1),
\[
\begin{pmatrix} L \\ X \end{pmatrix}
=
\begin{pmatrix}
(I_q - A)^{-1} & 0 \\
(I_p - B)^{-1}\Lambda (I_q - A)^{-1} & (I_p - B)^{-1}
\end{pmatrix}
\begin{pmatrix} \epsilon \\ e \end{pmatrix}.
\]
Let α^ll_ji, α^ol_ji, and α^oo_ji represent the total effects from Li to Lj, from Li to Xj, and from Xi to Xj, respectively. Thus, (I_q - A)^{-1} = {α^ll_ji}, (I_p - B)^{-1}Λ(I_q - A)^{-1} = {α^ol_ji}, and (I_p - B)^{-1} = {α^oo_ji}. The total effect from Vi to Vj is denoted by α_ji, with the superscript omitted. M := [(I_p - B)^{-1}Λ(I_q - A)^{-1}, (I_p - B)^{-1}] is called the mixing matrix of the model (2.1). Denote u = (ϵ⊤, e⊤)⊤. Then, X is written as
\[
X = Mu, \tag{2.2}
\]
which conforms to the formulation of the overcomplete ICA problem [8, 26, 7]. M is said to be irreducible if every pair of its columns is linearly independent. G is said to be minimal if and only if M is irreducible. If G is not minimal, some latent variables can be absorbed into other latent variables, resulting in a minimal graph [9]. MG is called the canonical model when A = 0 and M is irreducible. Hoyer et al. [7] showed that any LvLiNGAM can be transformed into an observationally equivalent canonical model. For example, the LvLiNGAM defined by the DAG in Figure 2.1 (a) is the canonical model of the LvLiNGAM defined by the DAG in Figure 2.1 (b). Hoyer et al. [7] also demonstrated that, when the number of latent variables is known, the canonical model can be identified up to observationally equivalent models using overcomplete ICA.

Salehkaleybar et al. [9] showed that, even when A ≠ 0, the irreducibility of M is a necessary and sufficient condition for the identifiability of the number of latent variables. However, they did not provide an algorithm for estimating this number. Schkoda et al. [18] proposed ReLVLiNGAM to estimate the canonical model with generic coefficients even when the number of latent variables is unknown. However, the canonical model derived from an LvLiNGAM with A ≠ 0 lies in a measure-zero subset of the parameter space, which prevents ReLVLiNGAM from accurately identifying the number of latent confounders between two observed variables in such cases. For example, ReLVLiNGAM may not identify the canonical model in Figure 2.1 (a) from data generated by the LvLiNGAM in Figure 2.1 (b).

Cai et al. [20] and Xie et al. [21] demonstrated that, within LvLiNGAMs where all the observed children of latent variables are pure, there exists a class, such as the models shown in Figure 2.1 (b), in which the causal order among latent variables is identifiable, and they proposed algorithms for estimating this causal order. However, the complete causal structure cannot be identified solely from the causal order, and their algorithm cannot be generalized to cases where causal edges exist among observed variables or where latent variables do not have sufficient pure children.

Figure 2.1: Examples of LvLiNGAMs. (a) An example of a canonical LvLiNGAM; (b) an LvLiNGAM that can be identified by [20, 21].

In this paper, we introduce the following class of models, which generalizes the class of models in Cai et al. [20] by allowing causal edges among the observed variables, and consider the problem of identifying the causal order among observed variables within each cluster as well as the causal structure among the latent variables.

A1. Each observed variable has only one latent parent.
A2. Each latent variable has at least two children, at least one of which is observed.
A3. There are no direct causal paths between causal clusters.
A4. The model satisfies the faithfulness assumption.
A5. The higher-order cumulant of each component of the disturbance u is nonzero.

In Section 3, we demonstrate that the causal structure of latent variables and the causal order of observed variables for the LvLiNGAM satisfying Assumptions A1-A5 are identifiable, and we provide an algorithm for estimating the causal DAG for this class. The proposed method enables the identification not only of the causal order among latent variables but also of their complete causal structure.

Under Assumption A1, every observed variable is assumed to have one latent parent. However, even if there exist observed variables without latent parents, the estimation problem can sometimes be reduced to a model satisfying Assumption A1 by applying ParceLiNGAM [11] or repetitive causal discovery (RCD) [12, 13] as a preprocessing step of the proposed method. Details are provided in Appendix E.

2.2. Cumulants

The proposed method leverages higher-order cumulants of observed data to identify the causal structure among latent variables. In this subsection, we summarize some facts on higher-order cumulants. First, we introduce the definition of a higher-order cumulant.

Definition 2.1 (Cumulants [27]). Let i1, . . . , ik ∈ {1, . . . , p}. The k-th order cumulant of the random vector (X_{i1}, . . . , X_{ik}) is
\[
c^{(k)}_{i_1,\dots,i_k} = \mathrm{cum}^{(k)}(X_{i_1},\dots,X_{i_k})
= \sum_{(I_1,\dots,I_h)} (-1)^{h-1}(h-1)!\, E\Big[\prod_{j\in I_1} X_j\Big]\cdots E\Big[\prod_{j\in I_h} X_j\Big],
\]
where the sum is taken over all partitions (I1, . . . , Ih) of (i1, . . . , ik). If i1 = · · · = ik = i, we write cum^{(k)}(Xi) to denote cum^{(k)}(Xi, . . . , Xi).

The k-th order cumulants of the observed variables of LvLiNGAM satisfy
\[
c^{(k)}_{i_1,i_2,\dots,i_k} = \mathrm{cum}^{(k)}(X_{i_1},\dots,X_{i_k})
= \sum_{j=1}^{q} \alpha^{ol}_{i_1 j}\cdots\alpha^{ol}_{i_k j}\,\mathrm{cum}^{(k)}(\epsilon_j)
+ \sum_{j=1}^{p} \alpha^{oo}_{i_1 j}\cdots\alpha^{oo}_{i_k j}\,\mathrm{cum}^{(k)}(e_j).
\]

We consider an LvLiNGAM in which all variables except Xi and Xj are regarded as latent variables. We refer to the canonical model that is observationally equivalent to this model as the canonical model over Xi and Xj. Let Conf(Xi, Xj) = {L′1, L′2, . . . , L′ℓ} be the set of latent confounders in the canonical model over Xi and Xj, where all L′h ∈ Conf(Xi, Xj) are mutually independent. Without loss of generality, we assume that Xj ∉ Anc(Xi). Then, Xi and Xj are expressed as
\[
X_i = \sum_{h=1}^{\ell} \alpha'_{ih} L'_h + v_i, \qquad
X_j = \sum_{h=1}^{\ell} \alpha'_{jh} L'_h + \alpha^{oo}_{ji} v_i + v_j, \tag{2.3}
\]
where vi and vj are disturbances, and α′_ih and α′_jh are total effects from L′h to Xi and Xj, respectively, in the canonical model over them. We note that the model (2.3) is a canonical model with generic parameters, and that ℓ is equal to the number of confounders in the original model MG.

Schkoda et al. [18] proposed an algorithm for estimating the canonical model with generic parameters by leveraging higher-order cumulants. Several of their theorems concerning higher-order cumulants are also applicable to the canonical model over Xi and Xj. They define a \(\big(\sum_{i=0}^{k_2-k_1+1} i\big) \times k_1\) matrix \(A^{(k_1,k_2)}_{(X_i\to X_j)}\) as follows:
\[
A^{(k_1,k_2)}_{(X_i\to X_j)} =
\begin{pmatrix}
c^{(k_1)}_{i,i,\dots,i} & c^{(k_1)}_{i,i,\dots,j} & \dots & c^{(k_1)}_{i,j,\dots,j} \\
c^{(k_1+1)}_{i,i,i,\dots,i} & c^{(k_1+1)}_{i,i,i,\dots,j} & \dots & c^{(k_1+1)}_{i,i,j,\dots,j} \\
c^{(k_1+1)}_{j,i,i,\dots,i} & c^{(k_1+1)}_{j,i,i,\dots,j} & \dots & c^{(k_1+1)}_{j,i,j,\dots,j} \\
\vdots & \vdots & & \vdots \\
c^{(k_2)}_{i,\dots,i,i,i,\dots,i,i} & c^{(k_2)}_{i,\dots,i,i,i,\dots,i,j} & \dots & c^{(k_2)}_{i,\dots,i,i,j,\dots,j,j} \\
\vdots & \vdots & & \vdots \\
c^{(k_2)}_{j,\dots,j,i,i,\dots,i,i} & c^{(k_2)}_{j,\dots,j,i,i,\dots,i,j} & \dots & c^{(k_2)}_{j,\dots,j,i,j,\dots,j,j}
\end{pmatrix},
\tag{2.4}
\]
where k1 < k2. \(A^{(k_1,k_2)}_{(X_j\to X_i)}\) is defined similarly by swapping the indices i and j in \(A^{(k_1,k_2)}_{(X_i\to X_j)}\).
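As a concrete illustration, the smallest nontrivial instance of this matrix can be estimated from sample cumulants. The sketch below builds the (k1, k2) = (3, 4) case of (2.4) for zero-mean data, where the low-order joint cumulants of Definition 2.1 reduce to simple moment expressions, and estimates the rank by thresholding singular values, the device used later in Section 4.2. All names are our own, and this is a sketch rather than the authors' implementation:

```python
import numpy as np

def rank_A34(xi, xj, tau_s=1e-3):
    """Estimated rank of the (k1,k2)=(3,4) instance of (2.4) for
    zero-mean samples xi, xj, via the relative singular-value
    threshold tau_s of Section 4.2."""
    c3 = lambda a, b, c: np.mean(a * b * c)          # 3rd joint cumulant
    def c4(a, b, c, d):                              # 4th joint cumulant
        return (np.mean(a * b * c * d)
                - np.mean(a * b) * np.mean(c * d)
                - np.mean(a * c) * np.mean(b * d)
                - np.mean(a * d) * np.mean(b * c))
    A = np.array([
        [c3(xi, xi, xi),     c3(xi, xi, xj),     c3(xi, xj, xj)],
        [c4(xi, xi, xi, xi), c4(xi, xi, xi, xj), c4(xi, xi, xj, xj)],
        [c4(xj, xi, xi, xi), c4(xj, xi, xi, xj), c4(xj, xi, xj, xj)],
    ])
    s = np.linalg.svd(A, compute_uv=False)
    return int(np.sum(s / s[0] > tau_s))   # singular values above threshold
```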
Proposition 2.2 enables the identification of ℓ in (2.3) and the causal order between Xi and Xj.

Proposition 2.2 (Theorem 3 in [18]). Let Xi and Xj be two observed variables with Xj ∉ Anc(Xi), and let m := min(Σ_{i=1}^{k2−k1+1} i, k1). Then:
1. \(A^{(k_1,k_2)}_{(X_i\to X_j)}\) generically has rank min(ℓ + 1, m).
2. If α^oo_ji ≠ 0, \(A^{(k_1,k_2)}_{(X_j\to X_i)}\) generically has rank min(ℓ + 2, m).
3. If α^oo_ji = 0, \(A^{(k_1,k_2)}_{(X_j\to X_i)}\) generically has rank min(ℓ + 1, m).

Define \(A^{(\ell)}_{(X_i\to X_j)}\) as \(A^{(k_1,k_2)}_{(X_i\to X_j)}\) for the case where k1 = ℓ + 2 and k2 is the smallest possible choice, and let \(\tilde A^{(\ell)}_{(X_i\to X_j)}\) be the matrix obtained by adding the row vector (1, α, . . . , α^{ℓ+1}) as the first row of \(A^{(\ell)}_{(X_i\to X_j)}\).

Proposition 2.3 (Theorem 4 in [18]). Consider the determinant of an (ℓ + 2) × (ℓ + 2) minor of \(\tilde A^{(\ell)}_{(X_i\to X_j)}\) that contains the first row and treat it as a polynomial in α. Then, the roots of this polynomial are α^oo_ji, α^ol_j1, . . . , α^ol_jℓ.

Proposition 2.3 enables the identification of α^oo_ji, α^ol_j1, . . . , α^ol_jℓ up to permutation. The following proposition plays a crucial role in this paper in identifying both the number of latent variables and the true clusters.

Proposition 2.4 (Lemma 5 in [18]). For two observed variables Xi and Xj, let α^oo_ji, α^ol_j1, . . . , α^ol_jℓ be the roots of the polynomial in Proposition 2.3. Then
\[
\begin{pmatrix}
1 & 1 & \dots & 1 \\
\alpha^{oo}_{ji} & \alpha^{ol}_{j1} & \dots & \alpha^{ol}_{j\ell} \\
\vdots & \vdots & & \vdots \\
(\alpha^{oo}_{ji})^{k-1} & (\alpha^{ol}_{j1})^{k-1} & \dots & (\alpha^{ol}_{j\ell})^{k-1}
\end{pmatrix}
\begin{pmatrix}
\mathrm{cum}^{(k)}(v_i) \\ \mathrm{cum}^{(k)}(L'_1) \\ \vdots \\ \mathrm{cum}^{(k)}(L'_\ell)
\end{pmatrix}
=
\begin{pmatrix}
c^{(k)}_{i,i,\dots,i} \\ c^{(k)}_{i,i,\dots,j} \\ \vdots \\ c^{(k)}_{i,j,\dots,j}
\end{pmatrix}.
\tag{2.5}
\]
The system (2.5) is generically uniquely solvable if k ≥ ℓ + 1.

In the following, let \(c^{(k)}_{(X_i\to X_j)}(L'_h)\), where h = 1, . . . , ℓ, denote the solution of cum^{(k)}(L′h) in (2.5).

3. Proposed Method

In this section, we propose a three-stage algorithm for identifying LvLiNGAMs that satisfy Assumptions A1-A5. In the first stage, leveraging Cai et al. [20]'s Triad constraints and Proposition 2.2, the method estimates over-segmented causal clusters and assigns a latent parent to each cluster. In this stage, the ancestral relationships among observed variables are also estimated. In the second stage, Proposition 2.3 is employed to identify latent sources recursively and, as a result, the causal order among the latent variables is estimated. When multiple latent variables are found to have identical cumulants, their corresponding clusters are merged, enabling the identification of the true clusters. In general, even if the causal order among latent variables can be estimated, the causal structure among them cannot be determined. The final stage identifies the exact causal structure among latent variables in a bottom-up manner.

3.1. Stage I: Estimating Over-segmented Clusters

First, we introduce the Triad constraint proposed by Cai et al. [20], which also serves as a key component of our method in this stage.

Definition 3.1 (Triad constraint [20]). Let Xi, Xj, and Xk be observed variables in the LvLiNGAM and assume that Cov(Xj, Xk) ≠ 0. Define the Triad statistic e(Xi,Xj|Xk) by
\[
e_{(X_i,X_j|X_k)} := X_i - \frac{\mathrm{Cov}(X_i, X_k)}{\mathrm{Cov}(X_j, X_k)} X_j. \tag{3.1}
\]
If e(Xi,Xj|Xk) ⊥⊥ Xk, we say that {Xi, Xj} and Xk satisfy the Triad constraint.

Figure 3.1: Two examples of LvLiNGAM with impure children. (a) An example of LvLiNGAM with impure children (1); (b) an example of LvLiNGAM with impure children (2).
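A direct numpy rendering of the Triad statistic (3.1) is sketched below. The independence check is delegated to an HSIC-style test, as used in Section 4.2; the `hsic_test` callable returning a p-value is our placeholder assumption, not an API from the paper:

```python
import numpy as np

def triad_stat(xi, xj, xk):
    """e_{(Xi, Xj | Xk)} = Xi - (Cov(Xi, Xk) / Cov(Xj, Xk)) * Xj."""
    return xi - (np.cov(xi, xk)[0, 1] / np.cov(xj, xk)[0, 1]) * xj

def satisfies_triad(xi, xj, xk, hsic_test, alpha_ind=0.05):
    """True if the Triad constraint e _||_ Xk is not rejected;
    hsic_test(a, b) -> p-value is assumed to be supplied by the user."""
    return hsic_test(triad_stat(xi, xj, xk), xk) > alpha_ind
```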
The following propositions are also provided by Cai et al. [20].

Proposition 3.2 ([20]). Assume that all observed variables are pure and that Xi and Xj are dependent. If {Xi, Xj} and all Xk ∈ X \ {Xi, Xj} satisfy the Triad constraint, then Xi and Xj form a cluster.

Proposition 3.3 ([20]). Let Ĉ1 and Ĉ2 be two clusters estimated by using Triad constraints. If Ĉ1 and Ĉ2 satisfy Ĉ1 ∩ Ĉ2 ≠ ∅, then Ĉ1 ∪ Ĉ2 also forms a cluster.

When all observed variables are pure, as in the model shown in Fig. 2.1 (b), the correct clusters can be identified in two steps: first, apply Proposition 3.2 to find pairs of variables in the same cluster; then, merge them using Proposition 3.3. However, when impure observed variables are present, the clusters obtained using this method become over-segmented relative to the true clusters.

The correct clustering for the model in Figure 3.1 (a) is {X1}, {X2}, {X3, X4, X5}, and the correct clustering for the model in Figure 3.1 (b) is {X1}, {X2}, {X3, X4, X5, X6}. However, the above method incorrectly partitions the variables into {X1}, {X2}, {X3, X5}, {X4} for (a), and {X1}, {X2}, {X3}, {X4}, {X5}, {X6} for (b), respectively. As in Figure 3.1 (b), when three or more variables in the same cluster form a complete graph, no pair of these observed variables satisfies the Triad constraint.

However, even for models in which there exist causal edges among observed variables within the same cluster, it can be shown that a pair of variables satisfying the Triad constraint is a sufficient condition for them to belong to the same cluster.

Theorem 3.4. Assume the model satisfies Assumptions A1-A4. If two dependent observed variables Xi and Xj satisfy the Triad constraint for all Xk ∈ X \ {Xi, Xj}, they belong to the same cluster.

Under Assumption A3, the presence of ancestral relationships between two observed variables implies that they belong to the same cluster, and Proposition 2.2 allows us to determine ancestral relationships between two observed variables. Using Proposition 2.2, it is possible to identify X3 ∈ Anc(X5) in the model of Figure 3.1(a) and X4 ∈ Anc(X5), X4 ∈ Anc(X6), and X5 ∈ Anc(X6) in the model of Figure 3.1(b).

Moreover, it follows that Proposition 3.3 also holds for the models considered in this paper. By applying it, the model in Figure 3.1(a) is clustered into {X1}, {X2}, {X3, X5}, {X4}, while the model in Figure 3.1(b) is clustered into {X1}, {X2}, {X3}, and {X4, X5, X6}.

Even when Theorem 3.4 and Proposition 3.3 are applied, the resulting clusters are generally over-segmented. To obtain the correct clusters, it is necessary to merge some of them; the correct clustering is obtained in the subsequent stage. The algorithm for Stage I is presented in Algorithm 1.

Algorithm 1 Estimating over-segmented clusters
Input: X = (X1, . . . , Xp)⊤
Output: Estimated clusters Ĉ and AO = {Anc(Xi) | Xi ∈ X}
1: Initialize Ĉ ← {{X1}, . . . , {Xp}}, Anc(Xi) ← ∅ for i = 1, . . . , p
2: for all pairs (Xi, Xj) do
3:   if Xi, Xj satisfy Theorem 3.4 or have an ancestral relationship by Proposition 2.2 then
4:     Merge {Xi} and {Xj}
5:     Update Ĉ and AO
6:   end if
7: end for
8: Merge clusters in Ĉ and update Ĉ by applying Proposition 3.3
9: return Ĉ, AO

3.2. Stage II: Identifying the Causal Order among Latent Variables

In this section, we provide an algorithm for estimating the correct clusters and the causal order among latent variables. Suppose that, as a result of applying Algorithm 1, K clusters Ĉ = {Ĉ1, . . . , ĈK} are estimated. Associate a latent variable Li with each cluster Ĉi for i = 1, . . . , K, and define L̂ = {L1, . . . , LK}. As stated in the previous section, K ≥ q.
When K > q, some clusters must be merged to recover the true clustering.

X can be partitioned into maximal subsets of mutually dependent variables. Each observed variable in these subsets has a corresponding latent parent. If the causal order of the latent parents within each subset is determined, then the causal order of the entire latent variable set L̂ is uniquely determined. Henceforth, we assume, without loss of generality, that X itself forms one such maximal subset.

3.2.1. Determining the Source Latent Variable

Since we assume that X consists of mutually dependent variables, G contains only one source node among the latent variables. Theorem 3.5 provides the necessary and sufficient condition for a latent variable to be a source node.

Theorem 3.5. Let Xi denote the observed variable with the highest causal order among Ĉi. Then, Li is generically a latent source in G if and only if Conf(Xi, Xj) are identical across all Xj ∈ X \ {Xi} such that Xi ⊥̸⊥ Xj in the canonical model over Xi and Xj, with their common value being {Li}.

Note that in Stage I, the ancestral relationships among the observed variables are determined. Hence, the causal order within each cluster can also be determined. Let Xj be the observed variable with the highest causal order among Ĉj for j = 1, . . . , K and define Xoc = {X1, . . . , XK}. When |Ĉi| ≥ 2, let Xi′ be any element in Ĉi \ {Xi}. Define Xi by
\[
\mathbf{X}_i =
\begin{cases}
\{X_{i'}\}, & \text{if } |\hat C_i| \ge 2 \\
\emptyset, & \text{if } |\hat C_i| = 1.
\end{cases}
\tag{3.2}
\]
Let L(i,j) denote a latent confounder of Xi and Xj in the canonical model over them. In the implementation, we verify whether the conditions of Theorem 3.5 are satisfied by using Corollary 3.6.

Corollary 3.6. Assume k ≥ 3. Li is generically a latent source in G if and only if one of the following two cases holds:
1. Xi = ∅ and |Xoc \ {Xi}| = 1.
2. |(Xoc ∪ Xi) \ {Xi}| ≥ 2 and the following all hold:
(a) In the canonical model over Xi and Xj, |Conf(Xi, Xj)| = 1 for Xj ∈ (Xoc ∪ Xi) \ {Xi} such that Xi ⊥̸⊥ Xj.
(b) \(c^{(k)}_{(X_i\to X_j)}(L_{(i,j)})\) are identical for Xj ∈ (Xoc ∪ Xi) \ {Xi}.

Figure 3.2: An example of merging clusters in Stage II.

When Xi = ∅ and |Xoc \ {Xi}| = 1, it is trivial by Assumption A2 that Li is a latent source. Otherwise, for Li to be a latent source, it is necessary that |Conf(Xi, Xj)| = 1 for all Xj ∈ (Xoc ∪ Xi) \ {Xi}. This can be verified by using Condition 1 of Proposition 2.2. In addition, if \(c^{(k)}_{(X_i\to X_j)}(L_{(i,j)})\) for Xj ∈ (Xoc ∪ Xi) \ {Xi} are identical, Li can be regarded as a latent source. When Xi ∈ Anc(Xi′), the equation (2.5) yields two distinct solutions, \(c^{(k)}_{(X_i\to X_{i'})}(L_{(i,i')}) = c^{(k)}_{(X_i\to X_{i'})}(L_i)\) and \(c^{(k)}_{(X_i\to X_{i'})}(e_i)\), that are identifiable only up to a permutation of the two. If either of these two solutions equals \(c^{(k)}_{(X_i\to X_j)}(L_{(i,j)})\) for all Xj ∈ Xoc \ {Xi}, then Li can be identified as the latent source.

Example 3.7. Consider the models in Figure 3.2. For both models (a) and (b), the clusters estimated in Stage I are Ĉ1 = {X1} and Ĉ2 = {X2, X3}, and let L1 and L2 be the latent parents assigned to Ĉ1 and Ĉ2, respectively. Then, Xoc = {X1, X2}. In the model (a), we can assume λ11 = 1 without loss of generality, so that the model (a) is expressed as
\[
X_1 = \epsilon_1 + e_1, \quad
X_2 = \lambda_{21}\epsilon_1 + e_2, \quad
X_3 = (\lambda_{21} b_{32} + \lambda_{31})\epsilon_1 + b_{32} e_2 + e_3.
\]
By Proposition 2.4 and assuming k ≥ 2, we can obtain
\[
c^{(k)}_{(X_1\to X_2)}(L_{(1,2)}) = \mathrm{cum}^{(k)}(\epsilon_1), \qquad
c^{(k)}_{(X_2\to X_1)}(L_{(2,1)}) = c^{(k)}_{(X_2\to X_3)}(L_{(2,3)}) = \mathrm{cum}^{(k)}(\lambda_{21}\epsilon_1).
\]
Since |Xoc \ {X1}| = |{X2}| = 1 and \(c^{(k)}_{(X_2\to X_1)}(L_{(2,1)}) = c^{(k)}_{(X_2\to X_3)}(L_{(2,3)})\), both L1 and L2 are determined as latent sources. The dependence between X1 and X2 leads to L1 and L2 being regarded as a single latent source, resulting in the merging of Ĉ1 and Ĉ2.

In the model (b), we can assume λ11 = λ22 = 1 without loss of generality. Then, the model (b) is described as
\[
X_1 = \epsilon_1 + e_1, \quad
X_2 = (a_{21}\epsilon_1 + \epsilon_2) + e_2, \quad
X_3 = (b_{32} + \lambda_{31})(a_{21}\epsilon_1 + \epsilon_2) + b_{32} e_2 + e_3.
\]
Then,
\[
c^{(k)}_{(X_1\to X_2)}(L_{(1,2)}) = \mathrm{cum}^{(k)}(\epsilon_1), \qquad
c^{(k)}_{(X_2\to X_1)}(L_{(2,1)}) = \mathrm{cum}^{(k)}(a_{21}\epsilon_1) \ne
c^{(k)}_{(X_2\to X_3)}(L_{(2,3)}) = \mathrm{cum}^{(k)}(a_{21}\epsilon_1 + \epsilon_2).
\]
Therefore, L1 is a latent source, while L2 is not.

As in model (a), multiple latent variables may also be identified as latent sources. In such cases, their observed children are merged into a single cluster. Once Li is established as a latent source, it implies that Li is an ancestor of the other elements in L̂. The procedure of Section 3.2.1 is summarized in Algorithm 2.

Algorithm 2 Finding latent sources
Input: Mutually dependent Xoc, Ĉ, and AL
Output: Xoc, Ĉ, and a set of ancestral relationships between latent variables AL
1: Each cluster is assigned one latent parent; let L̂ be the set of latent parents
2: Apply Corollary 3.6 to find the latent sources Ls
3: Assume Ls ∈ Ls and Ĉs ∈ Ĉ
4: if |Ls| ≥ 2 then
5:   Merge the corresponding clusters into Ĉs and update Ĉ and Xoc
6:   Identify all latent parents in Ls with Ls
7: end if
8: for all Li ∈ L̂ \ Ls do
9:   Anc(Li) ← {Ls}
10: end for
11: Xoc ← Xoc \ {Xs}
12: AL ← AL ∪ {Anc(Li) | Li ∈ L̂ \ Ls} ∪ {Anc(Ls) = ∅}
13: return Xoc, Ĉ, and AL

3.2.2. Determining the Causal Order of Latent Variables

Next, we address the identification of subsequent latent sources after finding L1 in the preceding procedure. If the influence of the latent source can be removed from its observed descendants, the subsequent latent source may be identified through a procedure analogous to the one previously applied. The statistic ẽ(Xi,Xh), defined below, serves as a key quantity for removing such influence.

Definition 3.8. Let Xi and Xh be two observed variables. Define ẽ(Xi,Xh) as ẽ(Xi,Xh) = Xi − ρ(Xi,Xh) Xh, where
\[
\rho_{(X_i,X_h)} =
\begin{cases}
\dfrac{\mathrm{cum}(X_i, X_i, X_h, X_h)}{\mathrm{cum}(X_i, X_h, X_h, X_h)}, & X_i \not\perp\!\!\!\perp X_h, \\
0, & X_i \perp\!\!\!\perp X_h.
\end{cases}
\]

Under Assumption A5, when Xi ⊥̸⊥ Xh, ρ(Xi,Xh) is shown to be generically finite and non-zero. See Lemma A.2 in the Appendix for details. Let Lh be the latent source, and let Xh be its observed child with the highest causal order. When there is no directed path between Xi and Xh, ẽ(Xi,Xh) can be regarded as Xi after removing the influence of Lh.

Figure 3.3: Examples of LvLiNGAMs.

Example 3.9. Consider the model in Figure 3.3 (a). We can assume λ22 = λ33 = 1 without loss of generality. Then, X1, X2 and X3 are described as
\[
X_1 = \epsilon_1 + e_1, \quad
X_2 = a_{21}\epsilon_1 + \epsilon_2 + e_2, \quad
X_3 = a_{31}\epsilon_1 + \epsilon_3 + e_3.
\]
We can easily show that ρ(X2,X1) = a21. Hence, we have ẽ(X2,X1) = −a21 e1 + ϵ2 + e2. It can be seen that ẽ(X2,X1) does not depend on L1 = ϵ1, and that ẽ(X2,X1) and X3 are mutually independent.

Example 3.10. Consider the model in Figure 3.3 (b). We can assume that λ11 = λ22 = λ33 = 1 without loss of generality. Then, the model is described as
\[
\begin{aligned}
X_1 &= \epsilon_1 + e_1, \\
X_2 &= a_{21}\epsilon_1 + \epsilon_2 + e_2, \\
X_3 &= a_{32}a_{21}\epsilon_1 + a_{32}\epsilon_2 + \epsilon_3 + e_3, \\
X_4 &= \lambda_{42}(a_{21}\epsilon_1 + \epsilon_2) + e_4, \\
X_5 &= (\lambda_{53} + b_{53})(a_{32}a_{21}\epsilon_1 + a_{32}\epsilon_2 + \epsilon_3) + b_{53} e_3 + e_5.
\end{aligned}
\]
We can easily show that ρ(X2,X1) = a21 and ρ(X3,X1) = a32 a21. Hence, we have
\[
\tilde e_{(X_2,X_1)} = -a_{21} e_1 + \epsilon_2 + e_2, \qquad
\tilde e_{(X_3,X_1)} = -a_{32}a_{21} e_1 + a_{32}\epsilon_2 + \epsilon_3 + e_3.
\]
It can be seen that ẽ(X2,X1) and ẽ(X3,X1) are obtained by replacing L1 = ϵ1 with −e1. The models for (ẽ(X2,X1), X3) and (ẽ(X2,X1), X5) are described by canonical models with Conf(ẽ(X2,X1), X3) = Conf(ẽ(X2,X1), X5) = {ϵ2}. The models for (ẽ(X3,X1), X2) and (ẽ(X3,X1), X5) are described by canonical models with Conf(ẽ(X3,X1), X2) = {ϵ2} and Conf(ẽ(X3,X1), X5) = {a32ϵ2 + ϵ3, e3}, respectively. X5 contains {ϵ1, ϵ2, ϵ3, e3, e5}, and ẽ(X3,X1) contains {ϵ2, ϵ3, e1, e3}. Since these sets are not in an inclusion relationship, it follows from Lemma 5 of Salehkaleybar et al. [9] that there is no ancestral relationship between ẽ(X3,X1) and X5. It is noteworthy that ẽ(X3,X1) and X5 share two latent confounders, and that no ancestral relationship exists between them even though X3 ∈ Anc(X5) in the original graph.

Let L1 be the current latent source identified by the preceding procedure. Let G−({L1}) be the subgraph of G induced by V \ ({L1} ∪ Ĉ1). By generalizing the discussions in Examples 3.9 and 3.10, we obtain the following theorems.

Theorem 3.11. For Xi, Xj ∈ Xoc \ {X1} and their respective latent parents Li and Lj, Li ⊥⊥ Lj | L1 if and only if ẽ(Xi,X1) ⊥⊥ Xj.
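For intuition, ρ(Xi,Xh) in Definition 3.8 is a ratio of two fourth-order cross-cumulants, which for zero-mean samples reduce to simple moment expressions. A minimal numpy sketch follows (the helper name is ours, and the preliminary independence check that sets ρ = 0 is omitted):

```python
import numpy as np

def rho(xi, xh):
    """rho_{(Xi, Xh)} of Definition 3.8 for zero-mean, dependent samples:
    cum(Xi, Xi, Xh, Xh) / cum(Xi, Xh, Xh, Xh)."""
    m = np.mean
    c_iihh = m(xi*xi*xh*xh) - m(xi*xi)*m(xh*xh) - 2*m(xi*xh)**2
    c_ihhh = m(xi*xh*xh*xh) - 3*m(xi*xh)*m(xh*xh)
    return c_iihh / c_ihhh
```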
Theorem 3.16. Let Li be the latent parent of Xi ∈Xoc \ {X1, . . . , Xs−1}. If | ˆCi| ≥2, let Xi′ be an element of ˆCi \ {Xi}. Xi is defined in the same manner as (3.2). Then, Li is generically a latent source in G−({L1, . . . , Ls−1}) if and only if the following two conditions hold: 1. Conf(˜e(Xi,˜es), Xj) are identical for all Xj ∈Xoc \ {X1, . . . , Xs−1, Xi} such that ˜e(Xi,˜es) ̸⊥⊥Xj, with their common value being {ϵi}. 2. When Xi ̸= ∅, Conf(˜e(Xi,˜es), Xi′) ∩Conf(˜e(Xi,˜es), Xj) are identical for all Xj ∈Xoc \ {X1, . . . , Xs−1, Xi} such that ˜e(Xi,˜es) ̸⊥⊥Xj, with their common value being {ϵi}. As in Theorem 3.11, by applying Theorem 3.15, we can identify the fam- ily of maximal dependent subsets of Xoc \ {X1, . . . , Xs−1} in the conditional distribution given {L1, . . . , Ls−1}. For each maximal dependent subset, we can apply Theorem 3.16 to identify the next latent source. In the implemen- tation, we verify whether the conditions of Theorem 3.16 are satisfied using Corollary 3.17, which generalizes Corollary 3.6. Corollary 3.17. Assume k ≥3. Li is generically a latent source in G−({L1, . . . , Ls−1}) if and only if one of the following two cases holds: 1. Xi = ∅and |Xoc \ {X1, . . . , Xs−1, Xi}| = 1. 2. |(Xoc ∪Xi) \ {X1, . . . , Xs−1, Xi}| ≥2, and the following all hold: (a) In the canonical model over ˜e(Xi,˜es) and Xj, |Conf(˜e(Xi,˜es), Xj)| = 1 for all Xj ∈Xoc \ {X1, . . . , Xs−1, Xi} such that ˜e(Xi,˜es) ̸⊥⊥Xj. (b) c(k) (˜e(Xi,˜es)→Xj)(L(i,j)) are identical for all Xj ∈Xoc\{X1, . . . , Xs−1, Xi} such that ˜e(Xi,˜es) ̸⊥⊥Xj, where L(i,j) is the unique latent confounder in the canonical model over ˜e(Xi,˜es) and Xj. 19 (c) ˜e(Xi,˜es) and Xi′ has a latent confounder L(i,i′) in the canonical model over them that satisfies c(k) (˜e(Xi,˜es)→Xi′)(L(i,i′)) = c(k) (˜e(Xi,˜es)→Xj)(L(i,j)) for all Xj ∈Xoc\{X1, . . . , Xs−1, Xi} such that ˜e(Xi,˜es) ̸⊥⊥Xj, when Xi ̸= ∅. To determine whether Li is a latent source of G−({L1, . . . , Ls−1}), we first examine, using Condition 1 of Proposition 2.2, whether |Conf(˜e(Xi,˜es), Xj)| = 1, as in Section 3.2.1. If c(k) (˜e(Xi,˜es)→Xj)(L(i,j)) are identical for Xj ∈(Xoc ∪ Xi)\{X1, . . . , Xs−1, Xi}, Li is identified as a latent source. As in the previous case, when Xi ̸= ∅and Xi ∈Anc(Xi′), the equation (2.5) yields two distinct solutions for the higher-order cumulants of latent confounders. Here, we de- termine that Li is a latent source in G−({L1, . . . , Ls−1}) if either of two solu- tions of (2.5) equals to c(k) (˜e(Xi,˜es)→Xj)(L(i,j)) for Xj ∈Xoc \{X1, . . . , Xs−1, Xi}. If multiple latent sources are identified for any element in a mutually de- pendent maximal subset of Xoc \ {X1, . . . , Xs−1}, the corresponding clusters must be merged. As latent sources are successively identified, the correct set of latent variables L, the ancestral relationships among L, and the correct clusters are also successively identified. The procedure of Section 3.2.2 is presented in Algorithm 3. Algorithm 4 combines Algorithms 2 and 3 to provide the complete procedure for Stage II. Example 3.18. For the model in Figure 3.1 (a), the estimated clusters ob- tained in Stage I are {X1}, {X2}, {X3, X5}, and {X4}, with their corre- sponding latent parents denoted as L1, L2, L3, and L4, respectively. Set Xoc = {X1, X2, X3, X4}. Only X1 satisfies Corollary 3.6, and thus L1 is identified as the initial latent source. Then, we remove X1 from Xoc and update it to Xoc = {X2, X3, X4}. 
Next, since it can be shown that only L2 satisfies Corol- lary 3.17, i.e., c(3) (˜e(X2,X1)→X3)(L(2,3)) = c(3) (˜e(X2,X1)→X4)(L(2,4)), it follows that L2 is the latent source of G−({L1}). Similarly, we remove X2 from the current Xoc and update it to Xoc = {X3, X4}. Let X3′ = X5. In G−({L1, L2}), we compute ˜e(X3,˜e3) and ˜e(X4,˜e3), and find that c(3) (˜e(X3,˜e3)→X4)(L(3,4)) = c(3) (˜e(X3,˜e3)→X5)(L(3,5)), |Xoc ∪∅\ {X4}| = 1, 20 Algorithm 3 Finding subsequent latent sources Input: Xoc, ˆC, and AL Output: ˆC and AL 1: Apply Corollary 3.17 to find the set of latent sources L0 in G−(Ls) 2: if L0 = ∅then 3: return ˆC and AL 4: end if 5: if |L0| ≥2 then 6: for all pairs Xi, Xj ∈ S k:Lk∈L0 ˆCk  ∩Xoc do 7: if ˜e(Xi,˜es) ̸⊥⊥Xj then 8: Merge ˆCj into ˆCi 9: ˆC ←ˆC \ { ˆCj}, ˆL ←ˆL \ {Lj}, Xoc ←Xoc \ {Xj}, AL ←AL \ {Anc(Lj)} 10: end if 11: end for 12: end if 13: for all Xi ∈ S k:Lk∈L0 ˆCk  ∩Xoc do 14: X(i) oc ←∅ 15: for all Xj ∈Xoc \ {Xi} do 16: if Xj ̸⊥⊥˜e(Xi,˜es) then 17: Anc(Lj) ←Anc(Lj) ∪{Li}, X(i) oc ←X(i) oc ∪{Xj} 18: end if 19: end for 20: ˆC, AL ←Algorithm 3 (X(i) oc , ˆC, AL) 21: end for 22: return ˆC and AL indicating both L3 and L4 are latent sources by Corollary 3.17. Furhtermore, we conclude that {X3, X5} and {X4} should be merged into one cluster con- founded by L3. 3.3. Stage III: Identifying Causal Structure among Latent Variables By the end of Stage II, the clusters of observed variables have been identified, as well as the ancestral relationships among latent variables and among observed variables. The ancestral relationships among L alone do not uniquely determine the complete causal structure of L. Here, we propose a bottom-up algorithm to estimate the causal structure of the latent variables. Note that if the ancestral relationships among L are known, a causal order of L can also be obtained. Theorem 3.19 provides an estimator of the causal coefficients between latent variables. 21 Algorithm 4 Finding the ancestral relationships between latent variables Input: X, AO, and ˆC Output: ˆC and AL 1: AL →∅ 2: for all mutually dependent Xoc do 3: Xoc, ˆC, AL ←Algorithm 2 (Xoc, ˆC, AL) 4: ˆC, AL ←Algorithm 3 (Xoc, ˆC, AL) 5: end for 6: return ˆC and AL Theorem 3.19. Assume that Anc(Li) = {L1, . . . , Li−1} with the causal order L1 ≺· · · ≺Li−1. Let X1, . . . , Xi be the observed children of L1, . . . , Li with the highest causal order, respectively. Define ˜ri,k−1 as ˜ri,k−1 =  Xi, k = 1 Xi −Pi−1 h=i−(k−1) aihXh, k ≥2 When we set λ11 = · · · = λii = 1, ai,i−k = ρ(˜ri,k−1,˜e(Xi−k,˜ei−k)) generically holds. In addition, under Assumption A4, it holds generically that ai,i−k = 0 if and only if ˜ri,k−1 ⊥⊥˜e(Xi−k,˜ei−k). If the only information available is the ancestral relationships among {L1, . . . , Li}, we cannot determine whether there is an edge Li−k →Li in G. However, according to Theorem 3.19, if ˜ri,k−1 ⊥⊥˜e(Xi−k,˜ei−k), then ai,i−k = 0, and thus it follows that Li−k →Li does not exist. Algorithm 5 describes how Theorem 3.19 is applied to estimate the causal structure among L. Example 3.20. For the model in Figure 3.1 (a), the estimated causal order of latent variables is L1 ≺L2 ≺L3 with Xoc = {X1, X2, X3}. Assume initially that L1, L2, and L3 form a complete graph. Then X1, X2, X3, and ˜e(X2,˜e2) are X1 = ϵ1 + e1, X2 = a21ϵ1 + ϵ2 + e2, X3 = (a21a32 + a31)ϵ1 + a32ϵ2 + ϵ3 + e3, ˜e(X2,˜e2) = ˜e(X2,X1) = ϵ2 + e2 −a21e1. We estimate a32 and a31 using Theorem 3.19 as follows: a32 = ρ(X3,˜e(X2,˜e2)), 22 ˜r31 = X3 −a32X2 = a31ϵ1 + ϵ3 −a32e2 + e3, a31 = ρ(˜r31,X1). 
Thus, if ˜r31 ⊥⊥X1, then a31 = 0. In this case, we can conclude that L1 →L3 does not exist. Algorithm 5 Finding causal structure among latent variables Input: Xoc, L, AL Output: An adjacency matrix Aadj of L 1: function Adjacency(Xoc, Li, Lopen, Aadj, ˜ri) 2: if |Lopen| = 0 then 3: return Aadj 4: end if 5: Initialize Lnext ←∅ 6: for all Lj ∈Lopen do 7: ˆaij ←0, Lnext ←Lnext ∪Pa(Lj) 8: if ∃{Lk, Lh} ⊂Lnext s.t. Lk ∈Anc(Lh) then 9: Lnext ←Lnext \ {Lk} 10: end if 11: if ˜ri ̸⊥⊥˜e(Xj,˜ej) then 12: ˆaij ←an empirical counterpart of aij 13: end if 14: ˜ri ←˜ri −ˆaijXj 15: if ˆaji ̸= 0 then 16: Aadj[i, j] ←1 17: end if 18: end for 19: Lopen ←Lnext 20: Aadj ←Adjacency(Xoc, Li, Lopen, Aadj, ˜ri) 21: end function 22: function Main(Xoc, L, AL) 23: Initialize Aadj ←{0}|L|×|L| 24: for all Li ∈L do 25: ˜ri ←Xi, Lopen ←Pa(Li) 26: if ∃{Lk, Lh} ⊂Lopen s.t. Lk ∈Anc(Lh) then 27: Lopen ←Lopen \ {Lk} 28: end if 29: Aadj ←Adjacency(Xoc, Li, Lopen, Aadj, ˜ri) 30: end for 31: return Aadj 32: end function 23 3.4. Summary This section integrates Algorithms 1, 4, and 5 into Algorithm 6, which identifies the clusters of observed variables, the causal structure between latent variables, and the ancestral relationships between observed variables under the assumptions A1-A5. Since the causal clusters ˆC have been correctly identified, the directed edges from L to X are also identified. Although the ancestral relationships among observed variables can be identified, their exact causal structure remains undetermined. In conclusion, we obtain the following result: Theorem 3.21. Given observed data generated from an LvLiNGAM MG in (2.1) that satisfies the assumptions A1-A5, the proposed method can identify the latent causal structure among L, causal edges from L to X, and ancestral relationships among X. Algorithm 6 Identify the Causal Structure among Latent Variables Input: X = (X1, . . . , Xp)⊤ Output: AO, ˆC, and Aadj 1: ˆC, AO ←Algorithm 1 (X) ▷Estimate over-segmented clusters 2: AL, ˆC ←Algorithm 4 (X, AO, ˆC) ▷Identify the causal order among latent variables 3: Aadj ←Algorithm 5 (Xoc, L, AL) ▷Find the causal structure among latent variables 4: return AO, ˆC, and Aadj 4. Simulations In this section, we assess the effectiveness of the proposed method by comparing it with the algorithms proposed by Xie et al. [21] for estimating LiNGLaM and by Xie et al. [23] for estimating LiNGLaH, as well as with ReLVLiNGAM [18], which serves as the estimation method for the canonical model with generic parameters. For convenience, we hereafter refer to both the model class introduced by Xie et al. [21] and its estimation algorithm as LiNGLaM, and likewise use LiNGLaH to denote both the model class and the estimation algorithm proposed by Xie et al. [23]. 4.1. Settings In the simulation, the true models are set to six LvLiNGAMs defined by the DAGs shown in Figures 4.1 (a)-(f). We refer to these models as Models (a)-(f), respectively. All these models satisfy Assumptions A1-A3. 24 All disturbances are assumed to follow a log-normal distribution, ui ∼ Lognormal(−1.1, 0.8), shifted to have zero mean by subtracting its expected value. The coefficient λii from Li to Xi is fixed at 1. Other coefficients in Λ and A are drawn from Uniform(1.1, 1.5), while those in B are drawn from Uniform(0.5, 0.9). When all causal coefficients are positive, the faithfulness condition is satisfied. The higher-order cumulant of a log-normal distribution is non-zero. None of the models (a)-(f) is LiNGLaM or LiNGLaH. 
The models (a) and (b) are generic canonical models, whereas the canonical models derived from Figures 4.1 (c)-(f) do not satisfy the genericity assumption of Schkoda et al. [18]. The sample sizes N are set to 1000, 2000, 4000, 8000, and 16000. The number of iterations is set to 100. We evaluate the performance of the pro- posed method and other methods using the following metrics. • Ncl, Nls, Nos, and Ncs: The counts of iterations in which the resulting clusters, the latent structures, the ancestral relationships among X, and the latent structure and the ancestral relationships among X are correctly estimated, respectively. • PREll, RECll, and F1ll: Averages of Precision, Recall, and F1-score of the estimated edges among latent variables, respectively, when clusters are correctly estimated. • PREoo, RECoo, and F1oo: Averages of Precision, Recall, and F1-score of the estimated causal ancestral relationships among observed variables, respectively, when clusters are correctly estimated. LiNGLaM and LiNGLaH assume that each cluster contains at least two observed variables. When a cluster includes only a single observed variable, these methods may fail to assign it to any cluster, resulting in it being left without an associated latent parent. Here, we treat such variables as indi- vidual clusters and assign each a latent parent. 4.2. Implementation Hilbert–Schmidt independence criterion (HSIC) [28] is employed for the independence tests in the proposed method. As HSIC becomes computation- ally expensive for large sample sizes, we randomly select 2,000 samples for HSIC when N ≥2000. The significance level of HSIC is set to αind = 0.05. When estimating the number of latent variables and the ancestral rela- tionships among the observed variables, we apply Proposition 2.2. Following 25 L1 X1 X2 X3 (a) L1 X1 X2 X3 (b) L1 L2 X1 X2 X3 (c) L1 L2 X1 X2 X3 X4 (d) L1 L2 L3 X1 X2 X3 X4 (e) L1 L2 L3 X1 X2 X3 X4 (f) Figure 4.1: Six models for simulations Schkoda et al. [18], the rank of A(k1,k2) (Xi→Xj) is determined from its singular val- ues. Let σr denote the r-th largest singular value of A(k1,k2) (Xi→Xj) and let τs be a predefined threshold. If σr/σ1 ≤τs, we set σr to zero. To ensure termina- tion in the estimation of the number of confounders between two observed variables, we impose an upper bound on the number of latent variables, fol- lowing Schkoda et al. [18]. In this experiment, we set the upper bound on the number of latent variables to two in both our proposed method and ReLVLiNGAM. When estimating latent sources, we use Corollaries 3.6 and 3.17. To check whether |Conf(Xi, Xj)| = 1 in the canonical model over Xi and Xj, one possible approach is to apply Proposition 2.2. Theorem A.7 in the Appendix shows that |Conf(Xi, Xj)| = 1 is equivalent to (c(6) i,i,i,j,j,j)2 = c(6) i,i,i,i,j,jc(6) i,i,j,j,j,j. Based on this fact, one can alternatively check whether Conf(Xi, Xj) = 1 by using the criterion |(c(6) i,i,i,j,j,j)2 −c(6) i,i,i,i,j,jc(6) i,i,j,j,j,j| max (c(6) i,i,i,j,j,j)2, |c(6) i,i,i,i,j,jc(6) i,i,j,j,j,j|  < τo, (4.1) 26 where τo is a predefined threshold. In this experiment, we compared these two approaches. To check condition (b) of Corollary 3.6 and conditions (b) and (c) of Corollary 3.17, we use the empirical counterpart of c(k) (Xi→Xj)(L(i,j)). In this experiment, we set k = 3. We consider the situation of estimating the first latent source using Corollary 3.6. Let c(3) Xi be the set of c(3) (Xi→Xj)(L(i,j)) for Xj ∈(Xoc ∪Xi) \ {Xi}. 
To check condition (b) of Corollary 3.6 and conditions (b) and (c) of Corollary 3.17, we use the empirical counterpart of $c^{(k)}_{(X_i \to X_j)}(L_{(i,j)})$. In this experiment, we set $k = 3$.

We consider the situation of estimating the first latent source using Corollary 3.6. Let $c^{(3)}_{X_i}$ be the set of $c^{(3)}_{(X_i \to X_j)}(L_{(i,j)})$ for $X_j \in (X_{oc} \cup \mathbf{X}_i) \setminus \{X_i\}$. To show that $L_i$ is a latent source, it is necessary to demonstrate that all $c^{(3)}_{(X_i \to X_j)}(L_{(i,j)}) \in c^{(3)}_{X_i}$ are identical. Let

$$\bar c_i := \frac{1}{|c^{(3)}_{X_i}|} \sum_{c \in c^{(3)}_{X_i}} c$$

and let $s_i^2$ be the empirical counterpart of

$$\frac{1}{|c^{(3)}_{X_i}|} \sum_{c \in c^{(3)}_{X_i}} (c - \bar c_i)^2.$$

Then, we regard $L_i$ as a latent source if $s_i^2$ is smaller than a given threshold $\tau_{m1}$. As mentioned previously, when $\mathbf{X}_i \neq \emptyset$, $c^{(3)}_{(X_i \to X_{i'})}(L_{(i,i')})$ cannot be determined uniquely, since (2.5) yields two distinct solutions. In this case, we compute $s_i^2$ for the two solutions, and if the smaller one is less than $\tau_{m1}$, we regard $L_i$ as a latent source.

The estimation of the second and subsequent latent sources using Corollary 3.17 proceeds analogously, provided that $c^{(3)}_{X_i}$ is defined as the set of $c^{(3)}_{(\tilde e_{(X_i, \tilde e_i)} \to X_j)}(L_{(i,j)})$ for $X_j \in (X_{oc} \cup \mathbf{X}_i) \setminus \{X_i\}$. However, for the threshold applied to $s_i^2$, we use $\tau_{m2}$, which is larger than $\tau_{m1}$. This is because, as the iterations proceed, $|c^{(3)}_{X_i}|$ decreases, and hence the variance of $s_i^2$ tends to increase. It would be desirable to increase the threshold gradually as the iterations proceed; in this experiment, however, we used the same $\tau_{m2}$ from the second iteration onward.
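The latent-source decision itself is a simple variance test. A minimal sketch (ours), which assumes the values in $c^{(3)}_{X_i}$ have already been estimated via the cumulant relations of Section 2:

    import numpy as np

    def is_latent_source(c3_values, tau_m=0.001):
        # c3_values: the empirical set c^(3)_{X_i}, collected over all X_j.
        c = np.asarray(c3_values, dtype=float)
        s2 = np.mean((c - c.mean()) ** 2)
        return s2 < tau_m

    def is_latent_source_two_solutions(c3_fixed, two_candidates, tau_m=0.001):
        # When (2.5) admits two solutions for c^(3)_{(Xi -> Xi')}, keep the
        # candidate that gives the smaller spread s_i^2.
        spreads = []
        for cand in two_candidates:
            c = np.asarray(list(c3_fixed) + [cand], dtype=float)
            spreads.append(np.mean((c - c.mean()) ** 2))
        return min(spreads) < tau_m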
In this experiment, $(\tau_o, \tau_{m1}, \tau_{m2}) = (0.001, 0.001, 0.01)$. For Models (a)-(c) in Figure 4.1, $\tau_s$ was set to 0.001, and for Models (d)-(f), $\tau_s$ was set to 0.005. All experiments were conducted on a workstation with a 3.0 GHz Core i9 processor and 256 GB of memory.

4.3. Results and Discussions

Table 4.1 reports $N_{cl}$, $N_{ls}$, $N_{os}$, and $N_{cs}$, and Table 4.2 reports $PRE_{ll}$, $REC_{ll}$, $F1_{ll}$, $PRE_{oo}$, $REC_{oo}$, and $F1_{oo}$ for both the proposed and the existing methods. Since Models (a)-(f) do not satisfy the assumptions of LiNGLaM and LiNGLaH, their results in Table 4.2 are omitted. The canonical models derived from Models (c)-(f) are measure-zero exceptions of the generic canonical models addressed by ReLVLiNGAM and thus cannot be identified, so the results of ReLVLiNGAM for Models (c)-(f) are not reported. Models (a) and (b) each involve only a single latent variable without latent-latent edges, so $PRE_{ll}$, $REC_{ll}$, and $F1_{ll}$ are not reported for them.

Table 4.1: The performance in terms of $N_{cl}$, $N_{ls}$, $N_{os}$, and $N_{cs}$. Within each metric, the five columns are $N =$ 1K, 2K, 4K, 8K, 16K.

Model (a)
  Proposed (A.7)   Ncl: 60 56 73 74 78 | Nls: 60 56 73 74 78 | Nos: 45 53 70 68 73 | Ncs: 45 53 70 68 73
  Proposed         Ncl: 60 56 73 74 78 | Nls: 60 56 73 74 78 | Nos: 39 45 67 64 72 | Ncs: 39 45 67 64 72
  LiNGLaM          Ncl: 10 1 0 0 0 | Nls: 10 1 0 0 0 | Nos: 0 0 0 0 0 | Ncs: 0 0 0 0 0
  LiNGLaH          Ncl: 59 29 5 8 7 | Nls: 59 29 5 8 7 | Nos: 0 0 0 0 0 | Ncs: 0 0 0 0 0
  ReLVLiNGAM       Ncl: 47 50 49 55 64 | Nls: 47 50 49 55 64 | Nos: 0 0 0 0 0 | Ncs: 0 0 0 0 0
Model (b)
  Proposed (A.7)   Ncl: 62 75 86 92 93 | Nls: 62 75 86 92 93 | Nos: 11 23 34 53 60 | Ncs: 11 23 34 53 60
  Proposed         Ncl: 61 75 86 92 93 | Nls: 61 75 86 92 93 | Nos: 11 23 34 53 60 | Ncs: 11 23 34 53 60
  LiNGLaM          Ncl: 0 0 0 0 0 | Nls: 0 0 0 0 0 | Nos: 0 0 0 0 0 | Ncs: 0 0 0 0 0
  LiNGLaH          Ncl: 1 0 0 0 0 | Nls: 1 0 0 0 0 | Nos: 0 0 0 0 0 | Ncs: 0 0 0 0 0
  ReLVLiNGAM       Ncl: 54 60 78 79 74 | Nls: 54 60 78 79 74 | Nos: 32 41 55 65 68 | Ncs: 32 41 55 65 68
Model (c)
  Proposed (A.7)   Ncl: 76 78 79 88 93 | Nls: 76 78 79 88 93 | Nos: 53 69 77 87 93 | Ncs: 53 69 77 87 93
  Proposed         Ncl: 76 78 79 88 94 | Nls: 76 78 79 88 94 | Nos: 47 55 63 79 78 | Ncs: 47 55 63 79 78
  LiNGLaM          Ncl: 87 90 90 93 90 | Nls: 0 0 0 0 0 | Nos: 0 0 0 0 0 | Ncs: 0 0 0 0 0
  LiNGLaH          Ncl: 98 99 97 99 99 | Nls: 0 0 0 0 0 | Nos: 0 0 0 0 0 | Ncs: 0 0 0 0 0
Model (d)
  Proposed (A.7)   Ncl: 44 24 38 32 63 | Nls: 44 24 38 32 63 | Nos: 10 22 30 24 58 | Ncs: 10 22 30 24 58
  Proposed         Ncl: 48 26 49 55 71 | Nls: 48 26 49 55 71 | Nos: 8 8 20 19 21 | Ncs: 8 8 20 19 21
  LiNGLaM          Ncl: 38 14 9 8 8 | Nls: 0 0 0 0 0 | Nos: 0 0 0 0 0 | Ncs: 0 0 0 0 0
  LiNGLaH          Ncl: 0 0 0 0 0 | Nls: 0 0 0 0 0 | Nos: 0 0 0 0 0 | Ncs: 0 0 0 0 0
Model (e)
  Proposed (A.7)   Ncl: 37 44 68 75 88 | Nls: 27 39 57 72 83 | Nos: 36 42 62 69 80 | Ncs: 26 37 52 66 75
  Proposed         Ncl: 37 33 51 84 86 | Nls: 21 23 49 73 83 | Nos: 21 17 27 30 32 | Ncs: 12 11 26 25 30
  LiNGLaM          Ncl: 96 90 91 94 87 | Nls: 0 0 0 0 0 | Nos: 0 0 0 0 0 | Ncs: 0 0 0 0 0
  LiNGLaH          Ncl: 0 0 0 0 0 | Nls: 0 0 0 0 0 | Nos: 0 0 0 0 0 | Ncs: 0 0 0 0 0
Model (f)
  Proposed (A.7)   Ncl: 30 47 52 71 76 | Nls: 12 34 38 70 76 | Nos: 30 47 52 67 74 | Ncs: 12 34 38 66 74
  Proposed         Ncl: 18 46 45 57 72 | Nls: 5 35 41 54 72 | Nos: 17 34 39 46 67 | Ncs: 4 27 35 44 67
  LiNGLaM          Ncl: 92 88 93 87 92 | Nls: 0 0 0 0 0 | Nos: 0 0 0 0 0 | Ncs: 0 0 0 0 0
  LiNGLaH          Ncl: 0 0 0 0 0 | Nls: 0 0 0 0 0 | Nos: 0 0 0 0 0 | Ncs: 0 0 0 0 0

Table 4.2: The performances in terms of $PRE_{ll}$, $REC_{ll}$, $F1_{ll}$, $PRE_{oo}$, $REC_{oo}$, and $F1_{oo}$. Within each metric, the five columns are $N =$ 1K, 2K, 4K, 8K, 16K.

Latent-variable edges:
Model (c)
  Proposed (A.7)   PREll: 1.000 1.000 1.000 1.000 1.000 | RECll: 1.000 1.000 1.000 1.000 1.000 | F1ll: 1.000 1.000 1.000 1.000 1.000
  Proposed         PREll: 1.000 1.000 1.000 1.000 1.000 | RECll: 1.000 1.000 1.000 1.000 1.000 | F1ll: 1.000 1.000 1.000 1.000 1.000
Model (d)
  Proposed (A.7)   PREll: 1.000 1.000 1.000 1.000 1.000 | RECll: 1.000 1.000 1.000 1.000 1.000 | F1ll: 1.000 1.000 1.000 1.000 1.000
  Proposed         PREll: 1.000 1.000 1.000 1.000 1.000 | RECll: 1.000 1.000 1.000 1.000 1.000 | F1ll: 1.000 1.000 1.000 1.000 1.000
Model (e)
  Proposed (A.7)   PREll: 0.869 0.962 0.946 0.987 0.981 | RECll: 0.905 1.000 1.000 1.000 1.000 | F1ll: 0.884 0.977 0.968 0.992 0.989
  Proposed         PREll: 0.824 0.884 0.987 0.956 0.988 | RECll: 0.865 0.955 1.000 1.000 1.000 | F1ll: 0.837 0.912 0.992 0.974 0.993
Model (f)
  Proposed (A.7)   PREll: 1.000 1.000 1.000 1.000 1.000 | RECll: 0.800 0.908 0.910 0.995 1.000 | F1ll: 0.880 0.945 0.946 0.997 1.000
  Proposed         PREll: 1.000 1.000 1.000 1.000 1.000 | RECll: 0.759 0.920 0.970 0.982 1.000 | F1ll: 0.856 0.952 0.982 0.989 1.000

Observed ancestral relationships:
Model (a)
  Proposed (A.7)   PREoo: 0.825 0.964 0.970 0.957 0.962 | RECoo: 0.900 0.982 0.986 1.000 1.000 | F1oo: 0.850 0.970 0.975 0.971 0.972
  Proposed         PREoo: 0.733 0.821 0.929 0.903 0.949 | RECoo: 0.817 0.839 0.945 0.946 0.987 | F1oo: 0.761 0.827 0.934 0.917 0.959
  ReLVLiNGAM       PREoo: 0.262 0.273 0.320 0.321 0.323 | RECoo: 0.787 0.820 0.959 0.964 0.969 | F1oo: 0.394 0.410 0.480 0.482 0.484
Model (b)
  Proposed (A.7)   PREoo: 0.895 0.951 0.984 1.000 1.000 | RECoo: 0.586 0.702 0.756 0.855 0.878 | F1oo: 0.687 0.790 0.837 0.912 0.926
  Proposed         PREoo: 0.902 0.951 0.984 1.000 1.000 | RECoo: 0.590 0.702 0.756 0.855 0.878 | F1oo: 0.692 0.790 0.837 0.912 0.926
  ReLVLiNGAM       PREoo: 0.827 0.872 0.880 0.941 0.973 | RECoo: 0.827 0.872 0.880 0.941 0.973 | F1oo: 0.827 0.872 0.880 0.941 0.973
Model (c)
  Proposed (A.7)   PREoo: 0.697 0.885 0.975 0.989 1.000 | RECoo: 0.697 0.885 0.975 0.989 1.000 | F1oo: 0.697 0.885 0.975 0.989 1.000
  Proposed         PREoo: 0.618 0.705 0.797 0.898 0.830 | RECoo: 0.618 0.705 0.797 0.898 0.830 | F1oo: 0.618 0.705 0.797 0.898 0.830
Model (d)
  Proposed (A.7)   PREoo: 0.392 0.931 0.816 0.818 0.944 | RECoo: 0.614 0.958 0.868 0.906 0.968 | F1oo: 0.456 0.938 0.829 0.844 0.952
  Proposed         PREoo: 0.167 0.308 0.408 0.345 0.296 | RECoo: 0.167 0.308 0.408 0.345 0.296 | F1oo: 0.167 0.308 0.408 0.345 0.296
Model (e)
  Proposed (A.7)   PREoo: 0.973 0.955 0.912 0.920 0.909 | RECoo: 0.973 0.955 0.912 0.920 0.909 | F1oo: 0.973 0.955 0.912 0.920 0.909
  Proposed         PREoo: 0.568 0.515 0.529 0.357 0.372 | RECoo: 0.568 0.515 0.529 0.357 0.372 | F1oo: 0.568 0.515 0.529 0.357 0.372
Model (f)
  Proposed (A.7)   PREoo: 1.000 1.000 1.000 0.944 0.974 | RECoo: 1.000 1.000 1.000 0.944 0.974 | F1oo: 1.000 1.000 1.000 0.944 0.974
  Proposed         PREoo: 0.944 0.739 0.867 0.807 0.931 | RECoo: 0.944 0.739 0.867 0.807 0.931 | F1oo: 0.944 0.739 0.867 0.807 0.931

Overall, the proposed method achieves superior accuracy in estimating clusters, causal relationships among latent variables, and ancestral relationships among observed variables, with the accuracy improving as the sample size increases. Only the proposed method correctly estimates both the structure of latent variables and the causal relationships among observed variables for all models. Moreover, the proposed method also correctly distinguishes the difference in latent structures between Models (e) and (f). While Models (a) and (b) are identifiable by ReLVLiNGAM, the proposed method achieves higher accuracy in cluster estimation for both. Although the proposed method shows lower performance than ReLVLiNGAM in estimating ancestral relationships among observed variables for Model (b), its performance gradually approaches that of ReLVLiNGAM as the sample size increases. In addition, when comparing the proposed method with and without Theorem A.7, the version incorporating Theorem A.7 outperforms the one without it in most cases.

Although Models (a) and (b) do not satisfy the assumptions of LiNGLaM and LiNGLaH, and thus, in theory, these methods cannot identify the models, Table 4.1 shows that they occasionally recover the single-cluster structure when the sample size is relatively small. It can also be seen from Table 4.1 that the ancestral relationships among the observed variables are not estimated correctly at all. As mentioned above, in the original LiNGLaM and LiNGLaH, clusters consisting of a single observed variable are not output and are instead treated as ungrouped variables. In this experiment, by regarding such ungrouped variables as clusters, higher clustering accuracy is achieved in Models (c), (e), and (f). Theoretically, it can also be shown that LiNGLaM is able to identify the clusters in Models (c), (e), and (f), while LiNGLaH can identify the clusters in Model (c). However, Table 4.1 clearly shows that neither LiNGLaM nor LiNGLaH can correctly estimate the causal structure among latent variables or the ancestral relationships among observed variables. On the other hand, Table 4.1 also shows that LiNGLaM and LiNGLaH fail to correctly estimate the clusters in Models (a), (b), and (d). This result suggests that the clustering algorithms of LiNGLaM and LiNGLaH are not applicable to all models in this paper.

4.4. Additional Experiments with Small Sample Sizes

In the preceding experiments, the primary objective was to examine the identifiability of the proposed method, and hence the sample size was set to be sufficiently large. However, in practical applications, it is also crucial to evaluate the estimation accuracy when the sample size is limited. When the sample size is not large, the Type II error rate of HSIC increases, which in turn raises the risk of misclassifying clusters. Moreover, with small samples, the variability of the left-hand side of (4.1) becomes larger, thereby affecting the accuracy of Corollaries 3.6 and 3.17. To address this, we investigate whether the estimation accuracy of the model can be improved in small-sample settings by employing relatively larger values of the significance level $\alpha_{ind}$ for HSIC and the threshold $\tau_o$ than those used in the previous experiments.

We conduct additional experiments under small-sample settings using Model (f) in Figure 4.1. The sample sizes $N$ are set to 50, 100, 200, and 400. In these experiments, only $N_{cs}$ is used as the evaluation metric. The parameters $(\tau_s, \tau_{m1}, \tau_{m2})$ are set to $(0.005, 0.001, 0.01)$, while the significance level of HSIC is chosen from $\alpha_{ind} \in \{0.01, 0.05, 0.1, 0.2, 0.3\}$ and $\tau_o \in \{0.001, 0.01, 0.1\}$.
Table 4.3 reports the values of $N_{cs}$ for each combination of $\alpha_{ind}$ and $\tau_o$. The values in bold represent the best performances with fixed $N$ and $\tau_o$, and those in italic represent the best performances with fixed $N$ and $\alpha_{ind}$.

Table 4.3: The performances of the proposed method in $N_{cs}$ with small sample sizes. Within each $N$, the five columns are $\alpha_{ind} =$ 0.01, 0.05, 0.1, 0.2, 0.3.

$\tau_o = 0.001$   N=50: 0 0 1 2 6 | N=100: 0 3 4 10 9 | N=200: 0 6 9 13 18 | N=400: 5 11 16 12 11
$\tau_o = 0.01$    N=50: 0 1 1 4 4 | N=100: 0 1 2 3 6 | N=200: 1 7 7 11 11 | N=400: 2 7 19 16 15
$\tau_o = 0.1$     N=50: 0 1 2 5 3 | N=100: 0 0 4 9 7 | N=200: 1 4 6 11 13 | N=400: 2 15 17 18 10

Although the estimation accuracy is not satisfactory when the sample size is small, the results in Table 4.3 suggest that relatively larger settings of $\alpha_{ind}$ and $\tau_o$ tend to yield higher accuracy. The determination of appropriate threshold values for practical applications remains an important issue for future work.

5. Real-World Example

We applied the proposed method to the Political Democracy dataset [25], a widely used benchmark in structural equation modeling (SEM). Originally introduced by Bollen [25], this dataset was designed to examine the relation between the level of industrialization and the level of political democracy across 75 countries in 1960 and 1965. It includes indicators for both industrialization and political democracy in each year, and is typically modeled using confirmatory factor analysis (CFA) as part of a structural equation model. In the standard SEM formulation, the model consists of three latent variables: ind60, representing the level of industrialization in 1960, and dem60 and dem65, representing the level of political democracy in 1960 and 1965, respectively. ind60 is measured by per capita GNP ($X_1$), per capita energy consumption ($X_2$), and the percentage of the labor force in nonagricultural sectors ($X_3$). dem60 and dem65 are each measured by four indicators: press freedom ($Y_1$, $Y_5$), freedom of political opposition ($Y_2$, $Y_6$), fairness of elections ($Y_3$, $Y_7$), and effectiveness of the elected legislatures ($Y_4$, $Y_8$). The SEM in Bollen [25] specifies paths from ind60 to both dem60 and dem65, and from dem60 to dem65.

The marginal model for $X_1$, $X_2$ and $Y_3, \ldots, Y_6$ in the model in Bollen [25] is shown in Figure 5.1 (a). This marginal model satisfies Assumptions A1-A3, as well as those of LiNGLaM [21] and LiNGLaH [23]. We examined whether the proposed method, LiNGLaM, and LiNGLaH can recover the model in Figure 5.1 (a) from the observational data $X_1$, $X_2$ and $Y_3, \ldots, Y_6$. We set $(\tau_s, \tau_{m1}, \tau_{m2}) = (0.005, 0.001, 0.01)$. Since the sample size is as small as $N = 75$, we set $(\tau_o, \alpha_{ind}) = (0.1, 0.2)$, which are relatively large values, following the discussion in Section 4.4. The upper bound on the number of latent variables is set to 2.

The resulting DAGs obtained by each method are shown in Figure 5.1 (b)-(d). Among them, the proposed method estimates the same DAG as in Bollen [25]. LiNGLaM fails to estimate the correct clusters and the causal structure among the latent variables. LiNGLaH incorrectly clusters all observed variables into two clusters.

Figure 5.1: The application on the Political Democracy dataset. (a) Model in Bollen [25]; (b) the proposed method; (c) LiNGLaM; (d) LiNGLaH.

This result indicates that the proposed method not only outperforms existing methods such as LiNGLaM and LiNGLaH for models to which those methods are applicable, but is also effective even when the sample size is not large.

6. Conclusion

In this paper, we propose a novel algorithm for estimating LvLiNGAM models in which causal structures exist both among latent variables and among observed variables. Causal discovery for such a class of LvLiNGAM has not been completely addressed in any previous study.
Through numerical experiments, we also confirmed the consistency of the proposed method with the theoretical results on its identifiability. Furthermore, by applying the proposed method to the Political Democracy dataset [25], a standard benchmark in structural equation modeling, we confirmed its practical usefulness.

However, the class of models to which our proposed method can be applied remains limited. In particular, the assumptions that each observed variable has at most one latent parent and that there are no edges between clusters are restrictive. As mentioned in Section 2.1, there exist classes of models that can be identified by the proposed method even when some variables have no latent parents; for further details, see Appendix E. Even so, the proposed method cannot be applied to many generic canonical models. Developing a more generalized framework that relaxes these constraints remains an important direction for future research.

Acknowledgement

This work was supported by JST SPRING under Grant Number JPMJSP2110 and JSPS KAKENHI under Grant Numbers 21K11797 and 25K15017.

References

[1] P. Spirtes, C. Glymour, R. Scheines, Causation, Prediction, and Search, MIT Press, 2001.
[2] D. M. Chickering, Optimal structure identification with greedy search, Journal of Machine Learning Research 3 (2003) 507–554.
[3] S. Shimizu, P. O. Hoyer, A. Hyvärinen, A. Kerminen, M. Jordan, A linear non-Gaussian acyclic model for causal discovery, Journal of Machine Learning Research 7 (2006) 2003–2030.
[4] S. Shimizu, T. Inazumi, Y. Sogawa, A. Hyvärinen, Y. Kawahara, T. Washio, P. O. Hoyer, K. Bollen, DirectLiNGAM: A direct method for learning a linear non-Gaussian structural equation model, Journal of Machine Learning Research 12 (2011) 1225–1248.
[5] D. Colombo, M. H. Maathuis, M. Kalisch, T. S. Richardson, Learning high-dimensional directed acyclic graphs with latent and selection variables, The Annals of Statistics 40 (1) (2012) 294–321.
[6] J. M. Ogarrio, P. Spirtes, J. Ramsey, A hybrid causal search algorithm for latent variable models, in: Proceedings of the Eighth International Conference on Probabilistic Graphical Models, Vol. 52 of Proceedings of Machine Learning Research, PMLR, Lugano, Switzerland, 2016, pp. 368–379.
[7] P. O. Hoyer, S. Shimizu, A. J. Kerminen, M. Palviainen, Estimation of causal effects using linear non-Gaussian causal models with hidden variables, International Journal of Approximate Reasoning 49 (2) (2008) 362–378. Special Section on Probabilistic Rough Sets and Special Section on PGM'06.
[8] M. Lewicki, T. J. Sejnowski, Learning nonlinear overcomplete representations for efficient coding, Advances in Neural Information Processing Systems 10 (1998) 815–821.
[9] S. Salehkaleybar, A. Ghassami, N. Kiyavash, K. Zhang, Learning linear non-Gaussian causal models in the presence of latent variables, Journal of Machine Learning Research 21 (39) (2020) 1–24.
[10] D. Entner, P. O. Hoyer, Discovering unconfounded causal relationships using linear non-Gaussian models, in: T. Onada, D. Bekki, E. McCready (Eds.), New Frontiers in Artificial Intelligence, Springer Berlin Heidelberg, Berlin, Heidelberg, 2011, pp. 181–195.
[11] T. Tashiro, S. Shimizu, A. Hyvärinen, T. Washio, ParceLiNGAM: A causal ordering method robust against latent confounders, Neural Computation 26 (1) (2014) 57–83.
[12] T. N. Maeda, S. Shimizu, RCD: Repetitive causal discovery of linear non-Gaussian acyclic models with latent confounders, in: International Conference on Artificial Intelligence and Statistics, PMLR, 2020, pp. 735–745.
[13] T. N. Maeda, I-RCD: An improved algorithm of repetitive causal discovery from data with latent confounders, Behaviormetrika 49 (2) (2022) 329–341.
[14] W. Chen, R. Cai, K. Zhang, Z. Hao, Causal discovery in linear non-Gaussian acyclic model with multiple latent confounders, IEEE Transactions on Neural Networks and Learning Systems 33 (7) (2022) 2816–2827.
[15] W. Chen, K. Zhang, R. Cai, B. Huang, J. Ramsey, Z. Hao, C. Glymour, FRITL: A hybrid method for causal discovery in the presence of latent confounders, arXiv preprint arXiv:2103.14238 (2021).
[16] R. Cai, Z. Huang, W. Chen, Z. Hao, K. Zhang, Causal discovery with latent confounders based on higher-order cumulants, in: International Conference on Machine Learning, PMLR, 2023, pp. 3380–3407.
[17] W. Chen, Z. Huang, R. Cai, Z. Hao, K. Zhang, Identification of causal structure with latent variables based on higher order cumulants, in: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 38 (18), 2024, pp. 20353–20361.
[18] D. Schkoda, E. Robeva, M. Drton, Causal discovery of linear non-Gaussian causal models with unobserved confounding, arXiv preprint arXiv:2408.04907 (2024).
[19] R. Silva, R. Scheines, C. Glymour, P. Spirtes, Learning the structure of linear latent variable models, Journal of Machine Learning Research 7 (8) (2006) 191–246.
[20] R. Cai, F. Xie, C. Glymour, Z. Hao, K. Zhang, Triad constraints for learning causal structure of latent variables, Advances in Neural Information Processing Systems 32 (2019).
[21] F. Xie, R. Cai, B. Huang, C. Glymour, Z. Hao, K. Zhang, Generalized independent noise condition for estimating latent variable causal graphs, Advances in Neural Information Processing Systems 33 (2020) 14891–14902.
[22] F. Xie, Y. Zeng, Z. Chen, Y. He, Z. Geng, K. Zhang, Causal discovery of 1-factor measurement models in linear latent variable models with arbitrary noise distributions, Neurocomputing 526 (2023) 48–61.
[23] F. Xie, B. Huang, Z. Chen, R. Cai, C. Glymour, Z. Geng, K. Zhang, Generalized independent noise condition for estimating causal structure with latent variables, Journal of Machine Learning Research 25 (191) (2024) 1–61.
[24] S. Jin, F. Xie, G. Chen, B. Huang, Z. Chen, X. Dong, K. Zhang, Structural estimation of partially observed linear non-Gaussian acyclic model: A practical approach with identifiability, in: The Twelfth International Conference on Learning Representations, 2024, pp. 1–27.
[25] K. A. Bollen, Structural Equations with Latent Variables, John Wiley & Sons, 1989.
[26] J. Eriksson, V. Koivunen, Identifiability, separability, and uniqueness of linear ICA models, IEEE Signal Processing Letters 11 (7) (2004) 601–604.
[27] D. R. Brillinger, Time Series: Data Analysis and Theory, Society for Industrial and Applied Mathematics, 2001.
[28] A. Gretton, K. Fukumizu, C. Teo, L. Song, B. Schölkopf, A. Smola, A kernel statistical test of independence, in: J. Platt, D. Koller, Y. Singer, S. Roweis (Eds.), Advances in Neural Information Processing Systems, Vol. 20, Curran Associates, Inc., 2007, pp. 1–8.
[29] G. Darmois, Analyse générale des liaisons stochastiques: étude particulière de l'analyse factorielle linéaire, Revue de l'Institut International de Statistique / Review of the International Statistical Institute 21 (1/2) (1953) 2–8.
[30] V. P. Skitovich, On a property of the normal distribution, Doklady Akademii Nauk 89 (1953) 217–219.
[31] M. Cai, P. Gao, H. Hara, Learning linear acyclic causal model including Gaussian noise using ancestral relationships (2024).

A. Some Theorems and Lemmas for Proving Theorems in the Main Text

In this section, we present several theorems and lemmas that are required for the proofs of the theorems in the main text. In the following sections, we assume that the coefficient from each latent variable $L_i$ to its observed child $X_i$ with the highest causal order is normalized to $\lambda_{ii} = 1$.

Theorem A.1 (Darmois-Skitovitch theorem [29, 30]). Define two random variables $X_1$ and $X_2$ as linear combinations of independent random variables $u_1, \ldots, u_m$:

$$X_1 = \sum_{i=1}^{m} \alpha_{1i} u_i, \qquad X_2 = \sum_{i=1}^{m} \alpha_{2i} u_i.$$

Then, if $X_1$ and $X_2$ are independent, all variables $u_i$ for which $\alpha_{1i}\alpha_{2i} \neq 0$ are Gaussian.

Lemma A.2. Let $X_1$ and $X_2$ be mutually dependent observed variables in an LvLiNGAM in (2.2) with mutually independent and non-Gaussian disturbances $u$. Under Assumptions A4 and A5, $\mathrm{cum}^{(4)}(X_1, X_1, X_2, X_2)$ and $\mathrm{cum}^{(4)}(X_1, X_1, X_1, X_2)$ are generically non-zero.

Proof. Let $X_1$ and $X_2$ be linear combinations of $u_1, \ldots, u_{p+q}$:

$$X_1 = \sum_{i=1}^{p+q} \alpha_{1i} u_i, \qquad X_2 = \sum_{i=1}^{p+q} \alpha_{2i} u_i.$$

When $X_1 \not\perp\!\!\!\perp X_2$, there must be some $u_j$ with $\alpha_{1j}\alpha_{2j} \neq 0$ by Theorem A.1. Therefore, generically

$$\mathrm{cum}^{(4)}(X_1, X_1, X_2, X_2) = \sum_{i=1}^{p+q} \alpha_{1i}^2 \alpha_{2i}^2\, \mathrm{cum}^{(4)}(u_i, u_i, u_i, u_i) \neq 0.$$

A similar argument shows that generically $\mathrm{cum}^{(4)}(X_1, X_1, X_1, X_2) \neq 0$.

Lemma A.3. For $V_i, V_j \in V$, let $\alpha_{ji}$ be the total effect from $V_i$ to $V_j$. Assume that $V_i \in \mathrm{Anc}(V_j)$. Then, it holds generically that $V_i$ and $V_j$ are not confounded if and only if $\alpha_{jk} = \alpha_{ji} \cdot \alpha_{ik}$ generically holds for all $V_k \in \mathrm{Anc}(V_i)$.

Proof. Please refer to Lemma A.2 in Cai et al. [31] for the proof of sufficiency. We prove the necessity by contrapositive. Suppose that $V_i$ and $V_j$ are confounded. We can arbitrarily choose a $V_k$ as their backdoor common ancestor and assume $\alpha_{jk} = \alpha_{ji} \cdot \alpha_{ik}$. From the faithfulness condition, it follows that $\alpha_{ji} \neq 0$, $\alpha_{ik} \neq 0$, and $\alpha_{jk} \neq 0$. Let $V_k \prec V_{k+1} \prec \cdots \prec V_{i-1} \prec V_i \prec V_{i+1} \prec \cdots \prec V_{j-1} \prec V_j$ be one possible causal order consistent with the model. Define

$$P_{jk} := \begin{pmatrix}
-b_{k+1,k} & 1 & \cdots & 0 & 0 & 0 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots & & \vdots \\
-b_{i-1,k} & -b_{i-1,k+1} & \cdots & 1 & 0 & 0 & \cdots & 0 \\
-b_{i,k} & -b_{i,k+1} & \cdots & -b_{i,i-1} & 1 & 0 & \cdots & 0 \\
-b_{i+1,k} & -b_{i+1,k+1} & \cdots & -b_{i+1,i-1} & -b_{i+1,i} & 1 & \cdots & 0 \\
\vdots & \vdots & & \vdots & \vdots & \vdots & \ddots & \vdots \\
-b_{j-1,k} & -b_{j-1,k+1} & \cdots & -b_{j-1,i-1} & -b_{j-1,i} & -b_{j-1,i+1} & \cdots & 1 \\
-b_{j,k} & -b_{j,k+1} & \cdots & -b_{j,i-1} & -b_{j,i} & -b_{j,i+1} & \cdots & -b_{j,j-1}
\end{pmatrix}.$$

$P_{ji}$ and $P_{ik}$ are defined in the same way. Then,

$$\alpha_{jk} = (-1)^{k+j}\,|P_{jk}|, \qquad \alpha_{ji} = (-1)^{i+j}\,|P_{ji}|, \qquad \alpha_{ik} = (-1)^{k+i}\,|P_{ik}|.$$

Therefore, $\alpha_{jk} = \alpha_{ji} \cdot \alpha_{ik}$ implies

$$|P_{ji}| \cdot |P_{ik}| = |P_{jk}|. \qquad (A.1)$$

The left-hand side of (A.1) equals the determinant of $P_{jk}$ with its $(i,i)$-entry replaced by 0, which implies that the $(i,i)$ minor of $P_{jk}$ vanishes, that is,

$$\begin{vmatrix}
-b_{k+1,k} & 1 & \cdots & 0 & 0 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots & & \vdots \\
-b_{i-1,k} & -b_{i-1,k+1} & \cdots & 1 & 0 & \cdots & 0 \\
-b_{i+1,k} & -b_{i+1,k+1} & \cdots & -b_{i+1,i-1} & 1 & \cdots & 0 \\
\vdots & \vdots & & \vdots & \vdots & \ddots & \vdots \\
-b_{j-1,k} & -b_{j-1,k+1} & \cdots & -b_{j-1,i-1} & -b_{j-1,i+1} & \cdots & 1 \\
-b_{j,k} & -b_{j,k+1} & \cdots & -b_{j,i-1} & -b_{j,i+1} & \cdots & -b_{j,j-1}
\end{vmatrix} = 0.$$

The space of $b_{rc}$, $r = k+1, \ldots, j$, $c = k, \ldots, j-1$, satisfying the above equation is a real algebraic set and constitutes a measure-zero subset of the parameter space. Hence, generically $|P_{ji}| \cdot |P_{ik}| \neq |P_{jk}|$.

Lemma A.4. Let $L_i$ and $L_j$ be the latent parents of $X_i$ and $X_j$, respectively. Under Assumptions A1 and A4, $X_i \perp\!\!\!\perp X_j \Leftrightarrow L_i \perp\!\!\!\perp L_j$.

Proof. The proof is trivial.
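As a quick numerical companion to Lemma A.2 (a sketch of ours, not part of the paper's implementation), the fourth-order cross-cumulants of two dependent, non-Gaussian observed variables can be estimated directly from zero-mean data:

    import numpy as np

    def cum4(a, b, c, d):
        # Joint 4th-order cumulant of zero-mean samples:
        # E[abcd] - E[ab]E[cd] - E[ac]E[bd] - E[ad]E[bc].
        m = lambda *xs: np.mean(np.prod(xs, axis=0))
        return m(a, b, c, d) - m(a, b) * m(c, d) - m(a, c) * m(b, d) - m(a, d) * m(b, c)

    rng = np.random.default_rng(1)
    u = rng.lognormal(size=(3, 200_000)) - np.exp(0.5)  # zero-mean, non-Gaussian u_i

    X1 = 1.0 * u[0] + 0.8 * u[1]   # X1 and X2 share the disturbance u[0],
    X2 = 0.7 * u[0] + 1.0 * u[2]   # so they are dependent

    print(cum4(X1, X1, X2, X2))    # generically non-zero, as Lemma A.2 states
    print(cum4(X1, X1, X1, X2))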
Lemma A.5. Let $X_i$ and $X_j$ be the observed variables with the highest causal order within the clusters formed by the observed children of their latent parents, $L_i$ and $L_j$, respectively. Under Assumptions A1 and A3, if $\mathrm{Conf}(L_i, L_j) = \emptyset$,

1. $L_i \perp\!\!\!\perp L_j \Rightarrow \mathrm{Conf}(X_i, X_j) = \emptyset$.
2. $L_i \in \mathrm{Anc}(L_j) \Rightarrow \mathrm{Conf}(X_i, X_j) = \{L_i\}$.

Proof. When $L_i \perp\!\!\!\perp L_j$, $X_i \perp\!\!\!\perp X_j$ according to Lemma A.4. Therefore, no latent confounder exists between $X_i$ and $X_j$. Suppose $L_i \in \mathrm{Anc}(L_j)$. Under Assumptions A1 and A3, $\mathrm{Conf}(X_i, X_j) \subset \mathrm{Anc}(X_i) \cap \mathrm{Anc}(X_j) = \mathrm{Anc}(L_i) \cup \{L_i\}$. Every path originating from the variables in $\mathrm{Anc}(L_i)$ to $X_i$ and $X_j$ passes through $L_i$. Therefore, the only possible latent confounder of $X_i$ and $X_j$ is $L_i$.

Lemma A.6. Let $X_i$ and $X_j$ be the observed variables with the highest causal order within the clusters formed by the observed children of their latent parents, $L_i$ and $L_j$, respectively. Under Assumption A1, if $X_i$ and $X_j$ have only one latent confounder, then $L_i$ and $L_j$ do not have multiple confounders.

Proof. We prove this lemma by contrapositive. Assume that $|\mathrm{Conf}(L_i, L_j)| \geq 2$. There exist two distinct nodes $L_k, L_{k'} \in \mathrm{Conf}(L_i, L_j)$ such that there are directed paths from $L_k$ and $L_{k'}$ to $L_i$ and $L_j$, respectively, and the two paths share no node other than their starting points. Therefore, by Assumption A1, two directed paths also exist from $L_k$ to $X_i$ and $X_j$, sharing no node other than $L_k$. Hence, $\mathrm{Conf}(L_i, L_j) \subset \mathrm{Conf}(X_i, X_j)$, which implies that $|\mathrm{Conf}(X_i, X_j)| \geq 2$.

Theorem A.7. Let $V_i, V_j \in V$ be two confounded observed variables. Assume that all sixth cross-cumulants of $u$ are non-zero. Then $(c^{(6)}_{i,i,i,j,j,j})^2 = c^{(6)}_{i,i,i,i,j,j}\, c^{(6)}_{i,i,j,j,j,j}$ if and only if the following two conditions hold simultaneously:

1. There exists no directed path between $V_i$ and $V_j$.
2. $V_i$ and $V_j$ share only one (latent or observed) confounder in the canonical model over $V_i$ and $V_j$.

Proof. Without loss of generality, assume that $V_j \notin \mathrm{Anc}(V_i)$. Define $I_A$, $I_B$, and $I_C$ by

$$I_A = \{k \mid V_k \in \mathrm{Anc}(V_i) \cap \mathrm{Anc}(V_j)\},$$
$$I_B = \{k \mid V_k \in \mathrm{Anc}(V_i) \setminus \mathrm{Anc}(V_j)\},$$
$$I_C = \{k \mid V_k \in \mathrm{Anc}(V_j) \setminus (\mathrm{Anc}(V_i) \cup \{V_i\})\}.$$

Then, $V_i$ and $V_j$ are expressed as

$$V_i = \sum_{k \in I_A} \alpha_{ik} u_k + \sum_{k \in I_B} \alpha_{ik} u_k + u_i,$$
$$V_j = \sum_{k \in I_A} (\alpha_{jk} + \alpha_{ji}\alpha_{ik}) u_k + \sum_{k \in I_B} \alpha_{jk} u_k + \sum_{k \in I_C} \alpha_{jk} u_k + \alpha_{ji} u_i + u_j.$$

Since $V_k \notin \mathrm{Conf}(V_i, V_j)$ for all $k \in I_B$, we have

$$\sum_{k \in I_B} \alpha_{jk} u_k = \alpha_{ji} \sum_{k \in I_B} \alpha_{ik} u_k$$

by Lemma A.3. Let $\tilde u_i$ and $\tilde u_j$ be

$$\tilde u_i = \sum_{k \in I_B} \alpha_{ik} u_k + u_i, \qquad \tilde u_j = \sum_{k \in I_C} \alpha_{jk} u_k + u_j.$$

Then,

$$V_i = \sum_{k \in I_A} \alpha_{ik} u_k + \tilde u_i, \qquad V_j = \sum_{k \in I_A} (\alpha_{jk} + \alpha_{ji}\alpha_{ik}) u_k + \alpha_{ji}\tilde u_i + \tilde u_j. \qquad (A.2)$$

Necessity: We assume that $\alpha_{ji} = 0$ and $|\mathrm{Conf}(V_i, V_j)| = 1$. Denote the disturbance of the unique confounder of $V_i$ and $V_j$ by $u_c$. Then $V_i$ and $V_j$ are expressed as

$$V_i = \tilde u_i + \alpha_{ic} u_c, \qquad V_j = \tilde u_j + \alpha_{jc} u_c.$$

The sixth cross-cumulants of $V_i$ and $V_j$ are obtained by direct computation as follows:

$$c^{(6)}_{i,i,i,j,j,j} = \alpha_{ic}^3 \alpha_{jc}^3\, \mathrm{cum}^{(6)}(u_c), \qquad c^{(6)}_{i,i,j,j,j,j} = \alpha_{ic}^2 \alpha_{jc}^4\, \mathrm{cum}^{(6)}(u_c), \qquad c^{(6)}_{i,i,i,i,j,j} = \alpha_{ic}^4 \alpha_{jc}^2\, \mathrm{cum}^{(6)}(u_c).$$

Therefore we have $(c^{(6)}_{i,i,i,j,j,j})^2 = c^{(6)}_{i,i,j,j,j,j}\, c^{(6)}_{i,i,i,i,j,j}$.

Sufficiency: According to Hoyer et al. [7], the $u_k$, $k \in I_A$, can be merged into one confounder, that is, $|\mathrm{Conf}(V_i, V_j)| = 1$ in the canonical model over $V_i$ and $V_j$. From (A.2), we have

$$c^{(6)}_{i,i,i,j,j,j} = \sum_{k \in I_A} \alpha_{ik}^3 (\alpha_{jk} + \alpha_{ji}\alpha_{ik})^3\, \mathrm{cum}^{(6)}(u_k) + \alpha_{ji}^3\, \mathrm{cum}^{(6)}(\tilde u_i),$$
$$c^{(6)}_{i,i,i,i,j,j} = \sum_{k \in I_A} \alpha_{ik}^4 (\alpha_{jk} + \alpha_{ji}\alpha_{ik})^2\, \mathrm{cum}^{(6)}(u_k) + \alpha_{ji}^2\, \mathrm{cum}^{(6)}(\tilde u_i),$$
$$c^{(6)}_{i,i,j,j,j,j} = \sum_{k \in I_A} \alpha_{ik}^2 (\alpha_{jk} + \alpha_{ji}\alpha_{ik})^4\, \mathrm{cum}^{(6)}(u_k) + \alpha_{ji}^4\, \mathrm{cum}^{(6)}(\tilde u_i).$$

For notational simplicity, we denote the first terms on the right-hand sides of the three equations by $A_{33}$, $A_{42}$, and $A_{24}$, respectively.
When $(c^{(6)}_{i,i,i,j,j,j})^2 = c^{(6)}_{i,i,j,j,j,j}\, c^{(6)}_{i,i,i,i,j,j}$, we have

$$A_{33}^2 + 2A_{33}\alpha_{ji}^3\, \mathrm{cum}^{(6)}(\tilde u_i) + \alpha_{ji}^6\, \mathrm{cum}^{(6)}(\tilde u_i)^2 = A_{42}A_{24} + \alpha_{ji}^2 A_{24}\, \mathrm{cum}^{(6)}(\tilde u_i) + \alpha_{ji}^4 A_{42}\, \mathrm{cum}^{(6)}(\tilde u_i) + \alpha_{ji}^6\, \mathrm{cum}^{(6)}(\tilde u_i)^2,$$

which is equivalent to

$$\bigl(2\alpha_{ji}^3 A_{33} - \alpha_{ji}^2 A_{24} - \alpha_{ji}^4 A_{42}\bigr)\, \mathrm{cum}^{(6)}(\tilde u_i) + \bigl(A_{33}^2 - A_{42}A_{24}\bigr) = 0.$$

This implies

$$2\alpha_{ji}^2 A_{33} - \alpha_{ji} A_{24} - \alpha_{ji}^3 A_{42} = 0, \qquad A_{33}^2 - A_{42}A_{24} = 0. \qquad (A.3)$$

We note that

$$A_{33}^2 = A_{42}A_{24} \iff \Bigl(\sum_{k \in I_A} \alpha_{ik}^3 (\alpha_{jk} + \alpha_{ji}\alpha_{ik})^3\, \mathrm{cum}^{(6)}(u_k)\Bigr)^2 = \Bigl(\sum_{k \in I_A} \alpha_{ik}^4 (\alpha_{jk} + \alpha_{ji}\alpha_{ik})^2\, \mathrm{cum}^{(6)}(u_k)\Bigr) \Bigl(\sum_{k \in I_A} \alpha_{ik}^2 (\alpha_{jk} + \alpha_{ji}\alpha_{ik})^4\, \mathrm{cum}^{(6)}(u_k)\Bigr).$$

By Lagrange's identity, this holds if and only if

$$\forall k \in I_A, \quad \frac{\alpha_{jk} + \alpha_{ji}\alpha_{ik}}{\alpha_{ik}} = c \iff \frac{\alpha_{jk}}{\alpha_{ik}} = c - \alpha_{ji}$$

for a constant $c$. For the first equation in (A.3), we have

$$2\alpha_{ji}^2 A_{33} - \alpha_{ji} A_{24} - \alpha_{ji}^3 A_{42} = \alpha_{ji}\Bigl[\Bigl(\alpha_{ji} - \frac{A_{33}}{A_{42}}\Bigr)^2 - \frac{A_{33}^2}{A_{42}^2} + \frac{A_{24}}{A_{42}}\Bigr] = 0.$$

Since $A_{33}^2 = A_{42}A_{24}$ and $\alpha_{jk} + \alpha_{ji}\alpha_{ik} = c \cdot \alpha_{ik}$,

$$\alpha_{ji}\Bigl[\Bigl(\alpha_{ji} - \frac{A_{33}}{A_{42}}\Bigr)^2 - \frac{A_{33}^2}{A_{42}^2} + \frac{A_{24}}{A_{42}}\Bigr] = \alpha_{ji}\Bigl[\alpha_{ji} - \frac{\sum_{k \in I_A} \alpha_{ik}^3 (\alpha_{jk} + \alpha_{ji}\alpha_{ik})^3\, \mathrm{cum}^{(6)}(u_k)}{\sum_{k \in I_A} \alpha_{ik}^4 (\alpha_{jk} + \alpha_{ji}\alpha_{ik})^2\, \mathrm{cum}^{(6)}(u_k)}\Bigr]^2 = \alpha_{ji}\Bigl[\alpha_{ji} - \frac{c^3 \sum_{k \in I_A} \alpha_{ik}^6\, \mathrm{cum}^{(6)}(u_k)}{c^2 \sum_{k \in I_A} \alpha_{ik}^6\, \mathrm{cum}^{(6)}(u_k)}\Bigr]^2 = \alpha_{ji}(\alpha_{ji} - c)^2 = 0.$$

Thus, $\alpha_{ji} = 0$ or $\alpha_{ji} = c$. But $\alpha_{ji} = c$ implies that $\alpha_{jk} = 0$, which contradicts the faithfulness assumption. Therefore, we conclude that $\alpha_{ji} = 0$, which implies that there is no directed path from $V_i$ to $V_j$.

Lemma A.8. Assume that $X_i$ and $X_j$ belong to distinct clusters, that they are the children with the highest causal order of $L_i$ and $L_j$, respectively, and that $L_j \notin \mathrm{Anc}(L_i)$. Under Assumptions A1 and A3, if $X_i$ and $X_j$ have only one latent confounder $L_c$ in the canonical model over them, one of the following conditions generically holds:

1. $\mathrm{Conf}(L_i, L_j) = \emptyset$. Then $L_i$ and $L_c$ are identical, and
$$c^{(k)}_{(X_i \to X_j)}(L_c) = c^{(k)}_{(X_i \to X_j)}(L_i) = \mathrm{cum}^{(k)}(L_i), \qquad c^{(k)}_{(X_j \to X_i)}(L_c) = c^{(k)}_{(X_j \to X_i)}(L_i) = \mathrm{cum}^{(k)}(\alpha^{ll}_{ji} \cdot L_i).$$

2. $\mathrm{Conf}(L_i, L_j) = \{L_c\}$. Then
$$c^{(k)}_{(X_i \to X_j)}(L_c) = \mathrm{cum}^{(k)}(\alpha^{ll}_{ic} \cdot L_c), \qquad c^{(k)}_{(X_j \to X_i)}(L_c) = \mathrm{cum}^{(k)}(\alpha^{ll}_{jc} \cdot L_c).$$

Proof. According to Lemma A.6, $|\mathrm{Conf}(L_i, L_j)| = 0$ or $1$. Since $X_i$ and $X_j$ are confounded by $L_c$, $X_i \not\perp\!\!\!\perp X_j$, which implies $L_i \not\perp\!\!\!\perp L_j$ by Lemma A.4.

First, consider the case where $\mathrm{Conf}(L_i, L_j) = \emptyset$. According to Lemma A.5, when $L_i \not\perp\!\!\!\perp L_j$, the only possible latent confounder of $X_i$ and $X_j$ is $L_i$. Furthermore, there is at least one causal path from $L_i$ to $L_j$. Define $I_A$ and $I_B$ by

$$I_A = \{k \mid L_k \in \mathrm{Anc}(L_i)\}, \qquad I_B = \{k \mid L_k \in \mathrm{Anc}(L_j) \setminus (\mathrm{Anc}(L_i) \cup \{L_i\})\}.$$

Then, $X_i$ and $X_j$ are written as

$$X_i = \Bigl(\sum_{k \in I_A} \alpha^{ll}_{ik}\epsilon_k + \epsilon_i\Bigr) + e_i, \qquad (A.4)$$
$$X_j = \Bigl(\sum_{k \in I_A} \alpha^{ll}_{jk}\epsilon_k + \sum_{k \in I_B} \alpha^{ll}_{jk}\epsilon_k + \alpha^{ll}_{ji}\epsilon_i + \epsilon_j\Bigr) + e_j.$$

From Lemma A.3, we have

$$\sum_{k \in I_A} \alpha^{ll}_{jk}\epsilon_k = \alpha^{ll}_{ji} \cdot \sum_{k \in I_A} \alpha^{ll}_{ik}\epsilon_k.$$

Letting $v_j = \sum_{k \in I_B} \alpha^{ll}_{jk}\epsilon_k + \epsilon_j + e_j$, $X_j$ is rewritten as

$$X_j = \alpha^{ll}_{ji}\Bigl(\sum_{k \in I_A} \alpha^{ll}_{ik}\epsilon_k + \epsilon_i\Bigr) + v_j. \qquad (A.5)$$

Note that $L_i$, $e_i$, and $v_j$ are mutually independent. From Proposition 2.3 with $\ell = 1$, the roots of the polynomial equation in $\alpha$,

$$\begin{vmatrix} 1 & \alpha & \alpha^2 \\ c^{(3)}_{i,i,i} & c^{(3)}_{i,i,j} & c^{(3)}_{i,j,j} \\ c^{(4)}_{i,i,i,i} & c^{(4)}_{i,i,i,j} & c^{(4)}_{i,i,j,j} \end{vmatrix} = 0,$$

are $\alpha^{oo}_{ji}$ and $\alpha^{ol}_{ji}$. From (A.4) and (A.5), we have

$$\begin{vmatrix} 1 & \alpha & \alpha^2 \\ c^{(3)}_{i,i,i} & c^{(3)}_{i,i,j} & c^{(3)}_{i,j,j} \\ c^{(4)}_{i,i,i,i} & c^{(4)}_{i,i,i,j} & c^{(4)}_{i,i,j,j} \end{vmatrix} = \bigl[(\alpha^{ll}_{ji})^3 \mathrm{cum}^{(3)}(L_i)\mathrm{cum}^{(4)}(L_i) - (\alpha^{ll}_{ji})^3 \mathrm{cum}^{(3)}(L_i)\mathrm{cum}^{(4)}(L_i)\bigr] - \alpha\bigl[(\alpha^{ll}_{ji})^2 \bigl(\mathrm{cum}^{(3)}(e_i)\mathrm{cum}^{(4)}(L_i) - \mathrm{cum}^{(3)}(L_i)\mathrm{cum}^{(4)}(e_i)\bigr)\bigr] + \alpha^2\bigl[\alpha^{ll}_{ji}\bigl(\mathrm{cum}^{(3)}(e_i)\mathrm{cum}^{(4)}(L_i) - \mathrm{cum}^{(3)}(L_i)\mathrm{cum}^{(4)}(e_i)\bigr)\bigr] = 0,$$

which is generically equivalent to

$$-\alpha \cdot (\alpha^{ll}_{ji})^2 + \alpha^2 \cdot \alpha^{ll}_{ji} = 0. \qquad (A.6)$$

The roots of (A.6) are $\alpha = 0$ and $\alpha = \alpha^{ll}_{ji}$. Since $X_i$ and $X_j$ belong to different clusters, $\alpha^{oo}_{ji} = 0$ and hence $\alpha^{ol}_{ji} = \lambda_{jj}\alpha^{ll}_{ji} = \alpha^{ll}_{ji}$. From Proposition 2.4,

$$\begin{pmatrix} 1 & 1 \\ 0 & \alpha^{ll}_{ji} \end{pmatrix} \begin{pmatrix} c^{(k)}_{(X_i \to X_j)}(e_i) \\ c^{(k)}_{(X_i \to X_j)}(L_c) \end{pmatrix} = \begin{pmatrix} c^{(k)}_{i,\ldots,i,i} \\ c^{(k)}_{i,\ldots,i,j} \end{pmatrix} = \begin{pmatrix} \mathrm{cum}^{(k)}(e_i) + \mathrm{cum}^{(k)}(L_i) \\ \mathrm{cum}^{(k)}(\alpha^{ll}_{ji} \cdot L_i) \end{pmatrix}.$$

Then, we have $c^{(k)}_{(X_i \to X_j)}(e_i) = \mathrm{cum}^{(k)}(e_i)$ and $c^{(k)}_{(X_i \to X_j)}(L_c) = \mathrm{cum}^{(k)}(L_i)$.
In the same way, we can obtain $c^{(k)}_{(X_j \to X_i)}(L_c) = \mathrm{cum}^{(k)}(\alpha^{ll}_{ji} \cdot L_i)$.

Next, we consider the case where $\mathrm{Conf}(L_i, L_j) = \{L_c\}$. Then, only $L_c$ has outgoing directed paths to $X_i$ and $X_j$ that share no latent variable other than $L_c$. Define $I_A$ and $I_B$ by

$$I_A = \{k \mid L_k \in \mathrm{Anc}(L_i) \setminus (\mathrm{Anc}(L_c) \cup \{L_c\})\},$$
$$I_B = \{k \mid L_k \in \mathrm{Anc}(L_j) \setminus (\mathrm{Anc}(L_c) \cup \mathrm{Anc}(L_i) \cup \{L_c, L_i\})\}.$$

Following Salehkaleybar et al. [9], $X_i$ and $X_j$ are expressed as

$$X_i = \Bigl(\sum_{k \in I_A} \alpha^{ll}_{ik}\epsilon_k + \alpha^{ll}_{ic}L_c + \epsilon_i\Bigr) + e_i,$$
$$X_j = \Bigl(\sum_{k \in I_A} \alpha^{ll}_{jk}\epsilon_k + \sum_{k \in I_B} \alpha^{ll}_{jk}\epsilon_k + \alpha^{ll}_{jc}L_c + \alpha^{ll}_{ji}\epsilon_i + \epsilon_j\Bigr) + e_j.$$

Since $\sum_{k \in I_A} \alpha^{ll}_{jk}\epsilon_k = \alpha^{ll}_{ji} \cdot \sum_{k \in I_A} \alpha^{ll}_{ik}\epsilon_k$ by Lemma A.3, $X_j$ is rewritten as

$$X_j = \alpha^{ll}_{ji}\Bigl(\sum_{k \in I_A} \alpha^{ll}_{ik}\epsilon_k + \epsilon_i\Bigr) + \sum_{k \in I_B} \alpha^{ll}_{jk}\epsilon_k + \alpha^{ll}_{jc}L_c + \epsilon_j + e_j.$$

The cumulants $c^{(6)}_{i,i,i,j,j,j}$, $c^{(6)}_{i,i,i,i,j,j}$, and $c^{(6)}_{i,i,j,j,j,j}$ are written as follows:

$$c^{(6)}_{i,i,i,j,j,j} = (\alpha^{ll}_{ji})^3 \sum_{k \in I_A} (\alpha^{ll}_{ik})^6\, \mathrm{cum}^{(6)}(\epsilon_k) + (\alpha^{ll}_{ji})^3\, \mathrm{cum}^{(6)}(\epsilon_i) + (\alpha^{ll}_{ic})^3(\alpha^{ll}_{jc})^3\, \mathrm{cum}^{(6)}(L_c),$$
$$c^{(6)}_{i,i,i,i,j,j} = (\alpha^{ll}_{ji})^2 \sum_{k \in I_A} (\alpha^{ll}_{ik})^6\, \mathrm{cum}^{(6)}(\epsilon_k) + (\alpha^{ll}_{ji})^2\, \mathrm{cum}^{(6)}(\epsilon_i) + (\alpha^{ll}_{ic})^4(\alpha^{ll}_{jc})^2\, \mathrm{cum}^{(6)}(L_c),$$
$$c^{(6)}_{i,i,j,j,j,j} = (\alpha^{ll}_{ji})^4 \sum_{k \in I_A} (\alpha^{ll}_{ik})^6\, \mathrm{cum}^{(6)}(\epsilon_k) + (\alpha^{ll}_{ji})^4\, \mathrm{cum}^{(6)}(\epsilon_i) + (\alpha^{ll}_{ic})^2(\alpha^{ll}_{jc})^4\, \mathrm{cum}^{(6)}(L_c).$$

Since $X_i$ and $X_j$ have only one confounder, $(c^{(6)}_{i,i,i,j,j,j})^2 = c^{(6)}_{i,i,i,i,j,j}\, c^{(6)}_{i,i,j,j,j,j}$ holds from Theorem A.7, which implies

$$2(\alpha^{ll}_{ji})^3(\alpha^{ll}_{ic})^3(\alpha^{ll}_{jc})^3 = (\alpha^{ll}_{ji})^2(\alpha^{ll}_{ic})^2(\alpha^{ll}_{jc})^4 + (\alpha^{ll}_{ji})^4(\alpha^{ll}_{ic})^4(\alpha^{ll}_{jc})^2 \iff (\alpha^{ll}_{ji})^2\Bigl(\alpha^{ll}_{ji} - \frac{\alpha^{ll}_{jc}}{\alpha^{ll}_{ic}}\Bigr)^2 = 0.$$

When $\alpha^{ll}_{ji} = \alpha^{ll}_{jc}/\alpha^{ll}_{ic}$, all directed paths from $L_c$ to $L_j$ pass through $L_i$ by Lemma A.3, and then $L_c$ is not a confounder between $L_i$ and $L_j$, which leads to a contradiction. Therefore, $\alpha^{ll}_{ji} = 0$. Letting $v_i = \sum_{k \in I_A} \alpha^{ll}_{ik}\epsilon_k + \epsilon_i + e_i$ and $v_j = \sum_{k \in I_B} \alpha^{ll}_{jk}\epsilon_k + \epsilon_j + e_j$, $X_i$ and $X_j$ are rewritten as

$$X_i = \alpha^{ll}_{ic}L_c + v_i, \qquad X_j = \alpha^{ll}_{jc}L_c + v_j.$$

Therefore, we find that the unique confounder of $X_i$ and $X_j$ is $L_c$. In the same way as (A.6), we generically have

$$\begin{vmatrix} 1 & \alpha & \alpha^2 \\ c^{(3)}_{i,i,i} & c^{(3)}_{i,i,j} & c^{(3)}_{i,j,j} \\ c^{(4)}_{i,i,i,i} & c^{(4)}_{i,i,i,j} & c^{(4)}_{i,i,j,j} \end{vmatrix} = 0 \iff -\alpha \cdot \alpha^{ll}_{jc} + \alpha^2 \cdot \alpha^{ll}_{ic} = 0. \qquad (A.7)$$

Then, $\alpha = 0$ or $\alpha = \alpha^{ll}_{jc}/\alpha^{ll}_{ic}$. Due to Assumption A3, $\alpha^{oo}_{ji} = 0$. Therefore, $\alpha^{ol}_{jc} = \alpha^{ll}_{jc}/\alpha^{ll}_{ic}$ from Proposition 2.3. According to Proposition 2.4,

$$\begin{pmatrix} 1 & 1 \\ 0 & \dfrac{\alpha^{ll}_{jc}}{\alpha^{ll}_{ic}} \end{pmatrix} \begin{pmatrix} c^{(k)}_{(X_i \to X_j)}(e_i) \\ c^{(k)}_{(X_i \to X_j)}(L_c) \end{pmatrix} = \begin{pmatrix} c^{(k)}_{i,\ldots,i,i} \\ c^{(k)}_{i,\ldots,i,j} \end{pmatrix} = \begin{pmatrix} \mathrm{cum}^{(k)}(v_i) + \mathrm{cum}^{(k)}(\alpha^{ll}_{ic}L_c) \\ \dfrac{\alpha^{ll}_{jc}}{\alpha^{ll}_{ic}} \cdot \mathrm{cum}^{(k)}(\alpha^{ll}_{ic}L_c) \end{pmatrix}.$$

Solving this equation yields $c^{(k)}_{(X_i \to X_j)}(e_i) = \mathrm{cum}^{(k)}(v_i)$ and $c^{(k)}_{(X_i \to X_j)}(L_c) = \mathrm{cum}^{(k)}(\alpha^{ll}_{ic} \cdot L_c)$. In the same way, we can obtain $c^{(k)}_{(X_j \to X_i)}(L_c) = \mathrm{cum}^{(k)}(\alpha^{ll}_{jc}L_c)$.

Lemma A.9. Let $X_i$ and $X_j$ be two dependent observed variables. Assume that $\mathrm{Conf}(X_i, X_j) = \{L_c\}$ and that there is no directed path between $X_i$ and $X_j$ in the canonical model over them. Then, $\tilde e_{(X_i, X_j)} \perp\!\!\!\perp L_c$.

Proof. Let $v_i$ and $v_j$ be the disturbances of $X_i$ and $X_j$, respectively, in the canonical model over $X_i$ and $X_j$. $X_i$ and $X_j$ are expressed as

$$X_i = \alpha^{ol}_{ic}L_c + v_i, \qquad X_j = \alpha^{ol}_{jc}L_c + v_j.$$

Then, $\tilde e_{(X_i, X_j)}$ is given by

$$\tilde e_{(X_i, X_j)} = \alpha^{ol}_{ic}L_c + v_i - \frac{(\alpha^{ol}_{ic})^2(\alpha^{ol}_{jc})^2\, \mathrm{cum}^{(4)}(L_c)}{(\alpha^{ol}_{ic})(\alpha^{ol}_{jc})^3\, \mathrm{cum}^{(4)}(L_c)}\,\bigl(\alpha^{ol}_{jc}L_c + v_j\bigr) = v_i - \frac{\alpha^{ol}_{ic}}{\alpha^{ol}_{jc}}\, v_j,$$

which shows that $\tilde e_{(X_i, X_j)} \perp\!\!\!\perp L_c$.

B. Proofs of Theorems in Section 3.1

B.1. The proof of Theorem 3.4

Proof. We prove this theorem by contrapositive. Let $L_i$ and $L_j$ be the respective latent parents of $X_i$ and $X_j$, assuming that $L_j \notin \mathrm{Anc}(L_i)$. We divide the proof into four cases.

1. $X_i$ and $X_j$ are pure:

1-1. The number of observed children of $L_j$ is greater than one: There exists another observed child $X_k$ of $L_j$ such that $\mathrm{Pa}(X_k) = \{L_j\}$.
Then $X_i$, $X_j$ and $X_k$ are expressed as

$$X_i = L_i + e_i, \qquad X_j = L_j + e_j, \qquad X_k = \lambda_{kj}L_j + e_k,$$

and $e_{(X_i, X_j \mid X_k)}$ is given by

$$e_{(X_i, X_j \mid X_k)} = (L_i + e_i) - \frac{\mathrm{Cov}(L_i, \lambda_{kj}L_j)}{\mathrm{Cov}(L_j, \lambda_{kj}L_j)}\,(L_j + e_j).$$

Since $\lambda_{kj} \neq 0$ and $\mathrm{Cov}(L_i, L_j) \neq 0$ generically, both $e_{(X_i, X_j \mid X_k)}$ and $X_k$ contain terms in $\epsilon_j$, implying that $X_k \not\perp\!\!\!\perp e_{(X_i, X_j \mid X_k)}$ from Theorem A.1.

1-2. The number of observed children of $L_j$ is one: According to Assumption A2, $L_j$ must have a latent child, denoted by $L_k$; let $X_k$ be the child of $L_k$ with the highest causal order. Then

$$X_i = L_i + e_i, \qquad X_j = L_j + e_j, \qquad X_k = L_k + e_k = a_{kj}L_j + \sum_{h:\, L_h \in \mathrm{Pa}(L_k) \setminus \{L_j\}} a_{kh}L_h + \epsilon_k + e_k,$$

and $e_{(X_i, X_j \mid X_k)}$ is given by

$$e_{(X_i, X_j \mid X_k)} = X_i - \frac{\mathrm{Cov}(X_i, X_k)}{\mathrm{Cov}(X_j, X_k)}\,(L_j + e_j).$$

Since $a_{kj} \neq 0$ and $\mathrm{Cov}(X_i, X_k)/\mathrm{Cov}(X_j, X_k) \neq 0$ generically, $e_{(X_i, X_j \mid X_k)} \not\perp\!\!\!\perp X_k$ from Theorem A.1.

2. At least one of $X_i$, $X_j$ is impure: Assume that $X_i$ is impure and that a directed edge exists between $X_i$ and $X_k$. The proof proceeds analogously when $X_j$ is impure.

2-1. $X_i \in \mathrm{Pa}(X_k)$: $X_i$, $X_j$ and $X_k$ are expressed as

$$X_i = L_i + e_i, \qquad X_j = L_j + e_j, \qquad X_k = (\lambda_{ki} + b_{ki})L_i + b_{ki}e_i + e_k,$$

respectively, and $e_{(X_i, X_j \mid X_k)}$ is given by

$$e_{(X_i, X_j \mid X_k)} = (L_i + e_i) - \frac{(\lambda_{ki} + b_{ki})\mathrm{Var}(L_i) + b_{ki}\mathrm{Var}(e_i)}{(\lambda_{ki} + b_{ki})\mathrm{Cov}(L_i, L_j)}\,(L_j + e_j).$$

Since both $e_{(X_i, X_j \mid X_k)}$ and $X_k$ contain $e_i$, $e_{(X_i, X_j \mid X_k)} \not\perp\!\!\!\perp X_k$ from Theorem A.1.

2-2. $X_k \in \mathrm{Pa}(X_i)$: $X_i$, $X_j$ and $X_k$ are expressed as

$$X_i = (\lambda_{ii} + b_{ik})L_i + b_{ik}e_k + e_i, \qquad X_j = L_j + e_j, \qquad X_k = L_i + e_k,$$

respectively, and $e_{(X_i, X_j \mid X_k)}$ is given by

$$e_{(X_i, X_j \mid X_k)} = (\lambda_{ii} + b_{ik})L_i + b_{ik}e_k + e_i - \frac{(b_{ik} + \lambda_{ii})\mathrm{Var}(L_i) + b_{ik}\mathrm{Var}(e_k)}{\mathrm{Cov}(L_i, L_j)}\,(L_j + e_j).$$

Since $b_{ik} \neq 0$, and both $e_{(X_i, X_j \mid X_k)}$ and $X_k$ contain terms in $e_k$, $e_{(X_i, X_j \mid X_k)} \not\perp\!\!\!\perp X_k$ follows from Theorem A.1.

C. Proofs of Theorems and Lemmas in Section 3.2

C.1. The proof of Theorem 3.5

Proof. Sufficiency: If $L_i$ is a latent source in $G$, then no confounder exists between $L_i$ and any other latent variable $L_j$. By Lemma A.5, when $X_i$ and $X_j$ belong to distinct clusters, we have $\mathrm{Conf}(X_i, X_j) = \{L_i\}$, since $L_i \not\perp\!\!\!\perp L_j$. If $X_i$ and $X_j$ belong to the same cluster confounded by $L_i$, then again $\mathrm{Conf}(X_i, X_j) = \{L_i\}$.

Necessity: Note that $X_{oc} \subset X \setminus \{X_i\}$. We prove the necessity by showing that if $L_i$ is not a latent source, there exists some $X_j \in X_{oc} \setminus \{X_i\}$ such that $X_i \not\perp\!\!\!\perp X_j$ and $\mathrm{Conf}(X_i, X_j) \neq \{L_i\}$ in the canonical model over $\{X_i, X_j\}$. Let $L_s$ be a latent source and let $X_s$ be the child of $L_s$ with the highest causal order. By Lemma A.5 and the fact that $L_i \not\perp\!\!\!\perp L_s$, we have $\mathrm{Conf}(X_i, X_s) = \{L_s\} \neq \{L_i\}$.
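From the proof of Theorem 3.4 above, the pseudo-residual $e_{(X_i, X_j \mid X_k)}$ takes a covariance-ratio form, so its empirical counterpart is one line of NumPy. A sketch (ours); the resulting independence between the residual and $X_k$ would then be tested, e.g., with HSIC as in Section 4.2:

    import numpy as np

    def e_given(xi, xj, xk):
        # Empirical e_(Xi, Xj | Xk): remove X_j from X_i using the
        # instrument-style coefficient Cov(Xi, Xk) / Cov(Xj, Xk).
        coef = np.cov(xi, xk)[0, 1] / np.cov(xj, xk)[0, 1]
        return xi - coef * xj

    # Per the contrapositive argument in the proof: if L_j is NOT an ancestor
    # of L_i, the residual stays dependent on X_k; observing independence is
    # therefore evidence in the other direction.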
Thus, c(k) (Xi→Xj)(L(i,j)) ̸= c(k) (Xi→Xs)(L(i,s)) generically, and condition (b) is not satisfied. 2. If Li has no latent children, then Xi ̸= ∅, and c(k) (Xi→Xi′)(L(i,i′)) = cum(k)(Li). Also c(k) (Xi→Xi′)(L(i,i′)) ̸= c(k) (Xi→Xs)(L(i,s)) generically, so condition (b) is not satisfied. C.3. The proof of Theorem 3.11 Proof. We define sets IA = {h | Lh ∈Anc(Li) ∩Anc(Lj) \ {L1}}, IB = {h | Lh ∈Anc(Li) \ (IA ∪{L1})}, IC = {h | Lh ∈Anc(Lj) \ (IA ∪{L1})}. 49 Then, X1 = ϵ1 + e1, Li = αll i1ϵ1 + X h∈IA αll ihϵh + X h∈IB αll ihϵh + ϵi, Xi = Li + ei, Lj = αll j1ϵ1 + X h∈IA αll jhϵh + X h∈IC αll jhϵh + ϵj, Xj = Lj + ej, We can easily show that cum(Xi, Xi, X1, X1) cum(Xi, X1, X1, X1) = αll i1, and hence, we have ˜e(Xi,X1) = X h∈IA αll ihϵh + X h∈IB αll ihϵh + ϵi + ei −αll i1e1. Sufficiency: Assume Li ⊥⊥Lj in the submodel induced by G−({L1}), which implies that IA = ∅. Therefore, ˜e(Xi,X1) and Xj can be written as Xj = αll j1ϵ1 + X h∈IC αll jhϵh + ϵj + ej, ˜e(Xi,X1) = X h∈IB αll ihϵh + ϵi + ei −αll i1e1. Thus, we conclude that Xj ⊥⊥˜e(Xi,X1). Similarly, we can also show that Xi ⊥⊥˜e(Xj,X1). Necessity: Assume Li ̸⊥⊥Lj in the submodel induced by G−({L1}), which implies that IA ̸= ∅. Since neither αll ih nor αll jh for h ∈IA equals zero, Xj ̸⊥⊥˜e(Xi,X1) by the contrapositive of Theorem A.1. Similarly, we can also show that Xi ̸⊥⊥˜e(Xj,X1). C.4. The proof of Theorem 3.12 According to Lemma A.9, ˜e(Xi,X1) can be regarded as a statistic obtained by removing the influence of L1 from Xi. Based on this observation, we now provide the proof of Theorem 3.12. 50 Proof. Let L1 and Li be two latent variables, and define the set IA = {h | Lh ∈Anc(Li) \ {L1}}. Then, X1, Xi, and ˜e(Xi,X1) are represented as X1 = ϵ1 + e1, Xi = αll i1ϵ1 + X h∈IA αll ihϵh + ϵi + ei, ˜e(Xi,X1) = X h∈IA αll ihϵh + ϵi + ei −αll i1e1, respectively. Sufficiency: If Li is the latent source in G−({L1}), IA = ∅. Hence, we have Xi = αll i1ϵ1 + ϵi + ei, ˜e(Xi,X1) = ϵi + ei −αll i1e1. Assume that Xj ∈Xoc \ {X1, Xi}. Define IB by IB = {h | Lh ∈Anc(Lj) \ {L1, Li}}. Then, ˜e(Xi,X1) and Xj are written as ˜e(Xi,X1) = ϵi + (ei −αll i1e1), Xj = αll jiϵi + αll j1ϵ1 + X h∈IB αll jhϵh + ϵj + ej, which shows that Conf(˜e(Xi,X1), Xj) = {ϵi}. Next, assume that Xi ̸= ∅and Xi′ ∈Xi. We divide the discussion into the following two cases. 1. Xi /∈Anc(Xi′). Define the set IC = {k | Xk ∈Anc(Xi′) ∩ˆCi}. We note that i /∈IA, and rewrite Xi′ as ˜e(Xi,X1) = ϵi + (ei −αll i1e1), Xi′ = αol i′iLi + X k∈IC αoo i′kek + ei′ = αol i′i(αll i1ϵ1 + ϵi) + X k∈IC αoo i′kek + ei′, hence, Conf(˜e(Xi,X1), Xi′) = {ϵi}. 51 2. Xi ∈Anc(Xi′). Define the set IC = {k | Xk ∈(Anc(Xi′) ∩ˆCi) \ {Xi}}. ˜e(Xi,X1) and Xi′ are written as ˜e(Xi,X1) = ϵi + (ei −αll i1e1), Xi′ = αol i′i(αll i1ϵ1 + ϵi) + αol i′iei + X k∈IC αoo i′kek + ei′. Both ei and ϵi appear in ˜e(Xi,X1) and Xi′. Since only ˜e(Xi,X1) contains e1 while only Xi′ contains ei′, there is no ancestral relation between them in their canonical model, according to Lemma 5 of Salehkaleybar et al. [9]. Hence, Conf(˜e(Xi,X1), Xi′) = {ϵi, ei}, and we have Conf(˜e(Xi,X1), Xi′) ∩Conf(˜e(Xi,X1), Xj) = {ϵi}. Necessity: By contrapositive, we aim to show that if Li is not a latent source, then either condition 1 or 2 does not hold. Assume that Ls is the latent source in G−({L1}), and that Xs is its observed child with the highest causal order. Then, we have ˜e(Xi,X1) = αll isϵs + X h∈IA\{s} αll ihϵh + ϵi + ei −αll i1e1 Xs = as1ϵ1 + ϵs + es, implying that Conf(˜e(Xi,X1), Xs) = {ϵs} ̸= {ϵi}. Thus, condition 1 is not satisfied. C.5. 
C.5. The proof of Lemma 3.14

Proof. Since it is trivial that

$$\tilde e_{(X_i, \tilde e_s)} = \epsilon_i + \sum_{h=s}^{i-1} \alpha^{ll}_{ih}\epsilon_h + e_i$$

when $i > s$ and $s = 1$, we only discuss the remaining two cases.

We first prove the case where $s = i$ by induction on $i$. When $i = 1$,

$$\tilde e_{(X_1, \tilde e_1)} = X_1 = \epsilon_1 + e_1,$$

where $U_{[1]} = e_1$. Assume that the inductive assumption holds up to $i$. Then,

$$\tilde e_{(X_h, \tilde e_h)} = \epsilon_h + U_{[h]}, \qquad 1 \leq h \leq i.$$

Since $X_{i+1}$ is expressed as

$$X_{i+1} = \epsilon_{i+1} + e_{i+1} + \sum_{h=1}^{i} \alpha^{ll}_{i+1,h}\epsilon_h,$$

we have $\rho(X_{i+1}, \tilde e_{(X_h, \tilde e_h)}) = \alpha^{ll}_{i+1,h}$ for $h = 1, \ldots, i$, according to Definition 3.13. Hence, we have

$$\tilde e_{(X_{i+1}, \tilde e_{i+1})} = X_{i+1} - \sum_{h=1}^{i} \rho(X_{i+1}, \tilde e_{(X_h, \tilde e_h)})\, \tilde e_{(X_h, \tilde e_h)} = \epsilon_{i+1} + e_{i+1} + \sum_{h=1}^{i} \alpha^{ll}_{i+1,h}\epsilon_h - \sum_{h=1}^{i} \alpha^{ll}_{i+1,h}(\epsilon_h + U_{[h]}) = \epsilon_{i+1} + e_{i+1} - \sum_{h=1}^{i} \alpha^{ll}_{i+1,h}U_{[h]} = \epsilon_{i+1} + U_{[i+1]},$$

where $U_{[i+1]} = e_{i+1} - \sum_{h=1}^{i} \alpha^{ll}_{i+1,h}U_{[h]}$. Thus, the claim holds for all $i$ by induction.

Next, we discuss the case where $i > s$ and $s > 1$. According to Definition 3.13,

$$\tilde e_{(X_i, \tilde e_s)} = X_i - \sum_{h=1}^{s-1} \rho(X_i, \tilde e_{(X_h, \tilde e_h)})\, \tilde e_{(X_h, \tilde e_h)}, \qquad \rho(X_i, \tilde e_{(X_h, \tilde e_h)}) = \alpha^{ll}_{ih}.$$

Using the conclusion of the case where $i = s$, we obtain

$$\tilde e_{(X_i, \tilde e_s)} = \epsilon_i + \sum_{h=1}^{i-1} \alpha^{ll}_{ih}\epsilon_h + e_i - \sum_{h=1}^{s-1} \alpha^{ll}_{ih}\bigl(\epsilon_h + U_{[h]}\bigr) = \epsilon_i + \sum_{h=s}^{i-1} \alpha^{ll}_{ih}\epsilon_h + U_{[s-1]} + e_i.$$
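Lemma 3.14's recursion is directly computable from data. A small sketch (ours), which assumes that $\rho(X, \tilde e)$ is estimated by the fourth-cumulant ratio $\mathrm{cum}^{(4)}(X, X, \tilde e, \tilde e)/\mathrm{cum}^{(4)}(X, \tilde e, \tilde e, \tilde e)$ — our reading of Definition 3.13, consistent with its use in Lemma A.9 and in the proof of Theorem 3.19:

    import numpy as np

    def cum4(a, b, c, d):
        # Joint 4th-order cumulant of zero-mean samples.
        m = lambda *xs: np.mean(np.prod(xs, axis=0))
        return m(a, b, c, d) - m(a, b) * m(c, d) - m(a, c) * m(b, d) - m(a, d) * m(b, c)

    def rho(x, e):
        # Assumed estimator of rho(X, e~): a 4th-cumulant ratio.
        return cum4(x, x, e, e) / cum4(x, e, e, e)

    def iterated_residuals(X_ordered):
        # X_ordered: arrays X_1, ..., X_q, one child per latent variable,
        # listed in the estimated latent causal order. Returns the residuals
        # e~(X_i, e~_i) = X_i - sum_h rho(X_i, e~_h) e~_h of Lemma 3.14.
        residuals = []
        for xi in X_ordered:
            r = xi.astype(float).copy()
            for e_h in residuals:
                r -= rho(xi, e_h) * e_h
            residuals.append(r)
        return residuals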
C.6. The proof of Theorem 3.15

Proof. The proof of this theorem follows similarly to that of Theorem 3.11.

C.7. The proof of Theorem 3.16

Proof. The proof of this theorem follows similarly to that of Theorem 3.12.

C.8. The proof of Corollary 3.17

According to Theorem 3.16, sufficiency is immediate. We therefore prove only necessity by showing that if $L_i$ is not a latent source in $G_{-}(\{L_1, \ldots, L_{s-1}\})$, then neither case 1 nor case 2 holds.

If $L_i$ is not a latent source, then $\mathbf{X}_i \neq \emptyset$ or $|X_{oc} \setminus \{X_1, \ldots, X_{s-1}, X_i\}| \geq 2$, and therefore case 1 is not satisfied. We will show that one of the conditions (a), (b), and (c) is not satisfied. First, note that condition (a) does not hold whenever there exists $X_j \in X_{oc} \setminus \{X_1, \ldots, X_{s-1}, X_i\}$ with $|\mathrm{Conf}(\tilde e_{(X_i, \tilde e_s)}, X_j)| \neq 1$. Hence, assume that condition (a) holds.

Assume that $L_s$ is the latent source of $G_{-}(\{L_1, \ldots, L_{s-1}\})$ and that $X_s$ is its observed child with the highest causal order. Let $L_j$ be the latent parent of $X_j \in X_{oc} \setminus \{X_1, \ldots, X_{s-1}, X_i\}$. Since condition (a) holds, $|\mathrm{Conf}(\tilde e_{(X_i, \tilde e_s)}, X_s)| = |\mathrm{Conf}(\tilde e_{(X_i, \tilde e_s)}, X_j)| = 1$, where $\tilde e_s = (\tilde e_{(X_1, \tilde e_1)}, \ldots, \tilde e_{(X_{s-1}, \tilde e_{s-1})})$. Then, $X_s$ and $\tilde e_{(X_i, \tilde e_s)}$ are written as

$$X_s = \sum_{h=1}^{s-1} \alpha^{ll}_{sh}\epsilon_h + \epsilon_s + e_s, \qquad \tilde e_{(X_i, \tilde e_s)} = \epsilon_i + \sum_{h=s}^{i-1} \alpha^{ll}_{ih}\epsilon_h + U_{[s-1]} + e_i,$$

according to Lemma 3.14. Hence, we have $\mathrm{Conf}(\tilde e_{(X_i, \tilde e_s)}, X_s) = \{\epsilon_s\}$.

Assume that $L_i$ has a latent child $L_j$ and that none of the descendants of $L_i$ are parents of $L_j$. $X_j$ is expressed by

$$X_j = \sum_{h=1}^{j-1} \alpha^{ll}_{jh}\epsilon_h + \epsilon_j + e_j.$$

Both $\tilde e_{(X_i, \tilde e_s)}$ and $X_j$ involve linear combinations of $\epsilon_s, \ldots, \epsilon_i$. Since $\{\epsilon_s, \ldots, \epsilon_i\}$ are mutually independent and $|\mathrm{Conf}(\tilde e_{(X_i, \tilde e_s)}, X_j)| = 1$, $\alpha^{ll}_{jh} = \alpha^{ll}_{ji}\alpha^{ll}_{ih}$ according to Hoyer et al. [7], and then $X_j$ can be rewritten as

$$X_j = \begin{cases} \displaystyle\sum_{h=1}^{s-1} \alpha^{ll}_{jh}\epsilon_h + \alpha^{ll}_{ji}\Bigl(\epsilon_i + \sum_{h=s}^{i-1} \alpha^{ll}_{ih}\epsilon_h\Bigr) + \epsilon_j + e_j, & j = i + 1, \\[2ex] \displaystyle\sum_{h=1}^{s-1} \alpha^{ll}_{jh}\epsilon_h + \alpha^{ll}_{ji}\Bigl(\epsilon_i + \sum_{h=s}^{i-1} \alpha^{ll}_{ih}\epsilon_h\Bigr) + \sum_{h=i+1}^{j-1} \alpha^{ll}_{jh}\epsilon_h + \epsilon_j + e_j, & j > i + 1. \end{cases}$$

Therefore,

$$c^{(k)}_{(\tilde e_{(X_i, \tilde e_s)} \to X_j)}(L_{(i,j)}) = \mathrm{cum}^{(k)}\Bigl(\epsilon_i + \sum_{h=s}^{i-1} \alpha^{ll}_{ih}\epsilon_h\Bigr), \qquad c^{(k)}_{(\tilde e_{(X_i, \tilde e_s)} \to X_s)}(L_{(i,s)}) = \mathrm{cum}^{(k)}(\alpha^{ll}_{is}\epsilon_s),$$

according to Lemma A.8. Therefore, we conclude that $c^{(k)}_{(\tilde e_{(X_i, \tilde e_s)} \to X_j)}(L_{(i,j)}) \neq c^{(k)}_{(\tilde e_{(X_i, \tilde e_s)} \to X_s)}(L_{(i,s)})$ generically, and condition (b) is not satisfied.

Next, we assume that $L_i$ does not have latent children, so that $\mathbf{X}_i \neq \emptyset$. Assume that $X_{i'} \in \mathbf{X}_i$; it can be expressed as

$$X_{i'} = \sum_{h=1}^{i-1} \alpha^{ll}_{ih}\epsilon_h + \epsilon_i + b_{i'i}e_i + e_{i'}.$$

Then,

$$\mathrm{Conf}(\tilde e_{(X_i, \tilde e_s)}, X_{i'}) = \begin{cases} \Bigl\{\epsilon_i + \sum_{h=s}^{i-1} \alpha^{ll}_{ih}\epsilon_h\Bigr\}, & b_{i'i} = 0, \\[1ex] \Bigl\{\epsilon_i + \sum_{h=s}^{i-1} \alpha^{ll}_{ih}\epsilon_h,\ e_i\Bigr\}, & b_{i'i} \neq 0. \end{cases}$$

In either case, there is one latent $L_{(i,i')}$ satisfying

$$c^{(k)}_{(\tilde e_{(X_i, \tilde e_s)} \to X_{i'})}(L_{(i,i')}) = \mathrm{cum}^{(k)}\Bigl(\epsilon_i + \sum_{h=s}^{i-1} \alpha^{ll}_{ih}\epsilon_h\Bigr),$$

which generically implies $c^{(k)}_{(\tilde e_{(X_i, \tilde e_s)} \to X_{i'})}(L_{(i,i')}) \neq c^{(k)}_{(\tilde e_{(X_i, \tilde e_s)} \to X_s)}(L_{(i,s)})$. Thus, condition (c) is not satisfied.

D. Proofs of Theorems in Section 3.3

D.1. Theorem 3.19

Proof. From Lemma 3.14,

$$\tilde e_{(X_{i-k}, \tilde e_{i-k})} = \epsilon_{i-k} + U_{[i-k]}. \qquad (D.1)$$

By definition, $\tilde r_{i,k-1}$ is written as

$$\tilde r_{i,k-1} = X_i - \sum_{h=i-(k-1)}^{i-1} a_{ih}X_h = \sum_{h=1}^{i-1} a_{ih}L_h + \epsilon_i + e_i - \sum_{h=i-(k-1)}^{i-1} a_{ih}(L_h + e_h) = \sum_{h=1}^{i-(k+1)} a_{ih}L_h + a_{i,i-k}\epsilon_{i-k} + \epsilon_i + e_i - \sum_{h=i-(k-1)}^{i-1} a_{ih}e_h = V_{[i-(k+1)]} + a_{i,i-k}\epsilon_{i-k} + \epsilon_i + U_{[i] \setminus [i-k]}, \qquad (D.2)$$

where $V_{[i-(k+1)]}$ is a linear combination of $\{\epsilon_1, \ldots, \epsilon_{i-(k+1)}\}$ and $U_{[i] \setminus [i-k]}$ is a linear combination of $\{e_{i-(k-1)}, \ldots, e_i\}$. From (D.1) and (D.2), we can show that

$$\tilde r_{i,k-1} \perp\!\!\!\perp \tilde e_{(X_{i-k}, \tilde e_{i-k})} \iff a_{i,i-k} = 0,$$

and otherwise

$$\rho(X_i, \tilde e_{(X_{i-1}, \tilde e_{i-1})}) = \frac{a_{i,i-1}^2\, \mathrm{cum}^{(4)}(\epsilon_{i-1})}{a_{i,i-1}\, \mathrm{cum}^{(4)}(\epsilon_{i-1})} = a_{i,i-1}$$

generically holds.

E. Reducing an LvLiNGAM

In this paper, we have discussed the identifiability of LvLiNGAM under the assumption that each observed variable has exactly one latent parent. However, even when some observed variables do not have latent parents, by iteratively marginalizing out sink nodes and conditioning on source nodes to remove such variables one by one, the model can be progressively reduced to one in which each observed variable has a single latent parent. This can be achieved by first estimating the causal structure involving the observed variables without latent parents.

ParceLiNGAM [11] or RCD [12, 13] can identify the ancestral relationship between two observed variables if at least one of them does not have a latent parent, and can remove the influence of the observed variable without a latent parent. Models 1-3 in Figure E.1 contain observed variables that do not have a latent parent. According to Definition 1.1, $X_1$ and $X_2$ in Models 1 and 3 belong to distinct clusters whose latent parents are $L_1$ and $L_2$, respectively, whereas in Model 2, $X_1$ and $X_2$ share the same latent parent $L_1$. Model 3 contains a directed path between clusters, whereas Models 1 and 2 do not. We consider the model reduction procedure for Models 1-3 individually.

Example E.1 (Model 1). By using ParceLiNGAM or RCD, we can identify $X_4 \to X_1$, $X_4 \to X_2$, $X_1 \to X_5$, and $X_3 \to X_5$. Since $X_5$ is a sink node, the induced subgraph obtained by removing $X_5$ represents the marginal model over the remaining variables. Since $X_4$ is a source node, if we replace $X_1$ and $X_2$ with the residuals $r^{(4)}_1$ and $r^{(4)}_2$ obtained by regressing them on $X_4$, then the induced subgraph obtained by removing $X_4$ represents the conditional distribution given $X_4$. As a result, Model 1 is reduced to the model shown in Figure E.1 (d). This model satisfies Assumptions A1-A3.

Example E.2 (Model 2). $X_1$ and $X_2$ are confounded by $L_1$, and they are mediated through $X_3$. By using ParceLiNGAM or RCD, the ancestral relationships among $X_1$, $X_2$, $X_3$ can be identified. Let $r^{(1)}_3$ be the residual obtained by regressing $X_3$ on $X_1$, and let $\tilde r^{(3)}_2$ be the residual obtained by regressing $X_2$ on $r^{(1)}_3$. According to [11] and [12, 13], the model for $L_1$, $X_1$, and $\tilde r^{(3)}_2$ corresponds to the one shown in Figure E.1 (e). This model satisfies Assumptions A1-A3.

Example E.3 (Model 3). In Model 3, $X_1 \in \mathrm{Anc}(X_3)$, and they are mediated by $X_5$. By using ParceLiNGAM or RCD, the ancestral relationships among $X_1$, $X_3$, $X_5$ can be identified.
Let $r^{(1)}_5$ be the residual obtained by regressing $X_5$ on $X_1$, and let $\tilde r^{(5)}_3$ be the residual obtained by regressing $X_3$ on $r^{(1)}_5$. According to [11] and [12, 13], by reasoning in the same way as for Models 1 and 2, Model 3 is reduced to the model shown in Figure E.1 (f). This model does not satisfy Assumptions A1-A3.

Figure E.1: Three models that can be reduced. (a) Model 1; (b) Model 2; (c) Model 3; (d) reduced model of Model 1; (e) reduced model of Model 2; (f) reduced model of Model 3.

Using [11] and [12, 13], ancestral relations between pairs of observed variables that include at least one variable without a latent parent can be identified. The graph obtained by the model reduction procedure is constructed by iteratively applying the following steps:

(i) iteratively remove observed variables without latent parents that appear as source or sink nodes, updating the induced subgraph at each step so that any new source or sink nodes are subsequently removed;

(ii) when an observed variable without a latent parent serves as a mediator, remove the variable and connect its parent and child with a directed edge.

If no directed path exists between any two observed variables with distinct latent parents, the model obtained through the model reduction procedure satisfies Assumptions A1-A3. Conversely, if there exist two observed variables with distinct latent parents that are connected by a directed path, the model obtained through the model reduction procedure does not satisfy Assumption A3. In summary, Assumption A1 can be generalized to

A1′. Each observed variable has at most one latent parent.

Proposition E.4. Given observed data generated from an LvLiNGAM $M_G$ that satisfies the assumptions A1′ and A2-A5, the latent causal structure among the latent variables, the directed edges from the latent variables to the observed variables, and the ancestral relationships among the observed variables can be identified by using the proposed method in combination with ParceLiNGAM and RCD.
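The reductions in Examples E.1-E.3 amount to repeated least-squares residualization once the relevant ancestral relations have been found with ParceLiNGAM or RCD. A minimal sketch (ours), using Example E.2 as the template; the variables are stand-ins for those in Figure E.1:

    import numpy as np

    def residual(y, x):
        # Residual of the least-squares regression of y on x (zero-mean data).
        return y - (np.dot(x, y) / np.dot(x, x)) * x

    def reduce_model2(x1, x2, x3):
        # Model 2 of Figure E.1: X1 and X2 confounded by L1, mediated by X3.
        r3 = residual(x3, x1)        # r_3^(1): remove X1 from the mediator X3
        r2_tilde = residual(x2, r3)  # r~_2^(3): remove the mediated path from X2
        return x1, r2_tilde          # reduced model over (X1, r~_2^(3)) and L1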
Causal Discovery for Linear DAGs with Dependent Latent Variables via Higher-order Cumulants

Ming Cai(a), Penggang Gao(a), Hisayuki Hara(a,b)

(a) Graduate School, Kyoto University, Kyoto, 606-8501, Japan
(b) Institute for Liberal Arts and Sciences, Kyoto University, Yoshida Nihonmatsu-cho, Kyoto, 606-8501, Japan

Abstract

This paper addresses the problem of estimating causal directed acyclic graphs in linear non-Gaussian acyclic models with latent confounders (LvLiNGAM). Existing methods assume mutually independent latent confounders or cannot properly handle models with causal relationships among observed variables. We propose a novel algorithm that identifies causal DAGs in LvLiNGAM, allowing causal structures among latent variables, among observed variables, and between the two. The proposed method leverages higher-order cumulants of observed data to identify the causal structure. Extensive simulations and experiments with real-world data demonstrate the validity and practical utility of the proposed algorithm.

Keywords: canonical model, causal discovery, cumulants, DAG, latent confounder, Triad constraints

1. Introduction

Estimating causal directed acyclic graphs (DAGs) in the presence of latent confounders has been a major challenge in causal analysis. Conventional causal discovery methods, such as the Peter-Clark (PC) algorithm [1], Greedy Equivalence Search (GES) [2], and the Linear Non-Gaussian Acyclic Model (LiNGAM) [3, 4], focus solely on the causal model without latent confounders. Fast Causal Inference (FCI) [1] extends the PC algorithm to handle latent variables, recovering a partial ancestral graph (PAG) under the faithfulness assumption. However, FCI is computationally intensive and, moreover, often fails to determine the causal directions. Really Fast Causal Inference (RFCI) [5] trades off some independence tests for speed, at the cost of estimation accuracy. Greedy Fast Causal Inference (GFCI) [6] hybridizes GES and FCI but inherits the limitations of FCI.

The assumption of linearity and non-Gaussian disturbances in the causal model enables the identification of causal structures beyond the PAG. The linear non-Gaussian acyclic model with latent confounders (LvLiNGAM) is an extension of LiNGAM that incorporates latent confounders. Hoyer et al. [7] demonstrated that LvLiNGAM can be transformed into a canonical model in which all latent variables are mutually independent and causally precede the observed variables. They proposed estimating the canonical models using overcomplete ICA [8], assuming that the number of latent variables is known. Overcomplete ICA can identify the causal DAG only up to permutations and scaling of the variables; thus, substantial computational effort is required to identify the true causal DAG from the many candidate models. Another limitation of overcomplete ICA is its tendency to converge to local optima. Salehkaleybar et al. [9] improved the algorithm by reducing the candidate models. Other methods for estimating LvLiNGAM, based on linear regression analysis and independence testing, have also been developed [10, 11, 12, 13]. Furthermore, Multiple Latent Confounders LiNGAM (MLCLiNGAM) [14] and FRITL [15] initially identify the causal skeleton using a constraint-based method, and then estimate the causal directions of the undirected edges in the skeleton using linear regression and independence tests.
While these methods can identify structures among observed variables that are not confounded by latent variables, they cannot necessarily determine the causal direction between two variables confounded by latent variables.

More recently, methods using higher-order cumulants have led to new developments in the identification of canonical LvLiNGAMs. Cai et al. [16] assume that each latent variable has at least three observed children and that there exists a subset of these children that are not connected by any other observed or latent variables; cumulants are then employed to identify one-latent-component structures, and latent influences are recursively removed to recover the underlying causal relationships. Chen et al. [17] show that if two observed variables share one latent confounder, the causal direction between them can be identified by leveraging higher-order cumulants. Schkoda et al. [18] introduced ReLVLiNGAM, a recursive approach that leverages higher-order cumulants to estimate canonical LvLiNGAMs with multiple latent parents. One strength of ReLVLiNGAM is that it does not require prior knowledge of the number of latent variables.

The methods reviewed so far are estimation methods for the canonical LvLiNGAM. A few methods, however, have been proposed to estimate the causal DAG of LvLiNGAM when latent variables exhibit causal relationships. A variable is said to be pure if it is conditionally independent of the other observed variables given its latent parents; otherwise, it is called impure. Silva et al. [19] showed that the latent DAG is identifiable under the assumption that each latent variable has at least three pure children, by employing tetrad conditions on the covariance of the observed variables. Cai et al. [20] proposed a two-phase algorithm, LSTC (learning the structure of latent variables based on Triad Constraints), to identify the causal DAG where each latent variable has at least two children, all of which are pure, and each observed variable has a single latent parent. Xie et al. [21] generalized LSTC and defined the linear non-Gaussian latent variable model (LiNGLaM), where observed variables may have multiple latent parents but no causal edges among them, and proved its identifiability. In [20] and [21], causal clusters are defined as follows:

Definition 1.1 (Causal cluster [20, 21]). A set of observed variables that share the same latent parents is called a causal cluster.

Their methods consist of two main steps: identifying causal clusters and then recovering the causal order of latent variables. LSTC and the algorithm for LiNGLaM estimate clusters of observed variables by leveraging the Triad constraints or the generalized independence noise (GIN) conditions. It is also possible to define clusters in the same manner as Definition 1.1 for models where causal edges exist among observed variables. However, when impure observed variables exist, their method might fail to identify the clusters, resulting in an incorrect estimation of both the number of latent variables and the latent DAGs. Several recent studies have shown that LvLiNGAM remains identifiable even when some observed variables are impure [22, 23, 24]. However, these methods still rely on the existence of at least some pure observed variables in each cluster.

1.1. Contributions

In this paper, we relax the pure observed children assumption of Cai et al.
[20] and investigate the identifiability of the causal DAG for an extended model that allows causal structures both among latent variables and among observed variables. Using higher-order cumulants of the observed data, we show the identifiability of the causal DAG for a class of LvLiNGAM and propose a practical algorithm for estimating models in this class. The proposed method first estimates clusters using the approaches of [20, 21]. When causal edges exist among observed variables, the clusters estimated by using Triad constraints or GIN conditions may be over-segmented compared to the true clusters. The proposed method leverages higher-order cumulants of observed variables to refine these clusters, estimates causal edges within clusters, determines the causal order among latent variables, and finally estimates the exact causal structure among latent variables. In summary, our main contributions are as follows:

1. Demonstrate the identifiability of causal DAGs in a class of LvLiNGAM, allowing causal relationships among latent and observed variables.
2. Extend the causal cluster estimation methods of [20] and [21] to handle cases where directed edges exist among observed variables within clusters.
3. Propose a top-down algorithm using higher-order cumulants to infer the causal order of latent variables.
4. Develop a bottom-up recursive procedure to reconstruct the latent causal DAG from latent causal orders.

The rest of this paper is organized as follows. Section 2 defines the class of LvLiNGAM considered in this study. In Section 2, we also summarize some basic facts on higher-order cumulants. Section 3 describes the proposed method in detail. Section 4 presents numerical simulations to demonstrate the effectiveness of the proposed method. Section 5 evaluates the usefulness of the proposed method by applying it to the Political Democracy dataset [25]. Finally, Section 6 concludes the paper. All proofs of theorems, corollaries, and lemmas in the main text are provided in the Appendices.

2. Preliminaries
2.1. LvLiNGAM
Let X = (X1, . . . , Xp)⊤ and L = (L1, . . . , Lq)⊤ be vectors of observed and latent variables, respectively. In this paper, we identify these vectors with the corresponding sets of variables. Define V = X ∪ L = {V1, . . . , Vp+q}. Let G = (V, E) be a causal DAG. Vi → Vj denotes a directed edge from Vi to Vj. Anc(Vi), Pa(Vi), and Ch(Vi) are the sets of ancestors, parents, and children of Vi, respectively. We use Vi ≺ Vj to indicate that Vi precedes Vj in a causal order. The LvLiNGAM considered in this paper is formulated as

\begin{pmatrix} L \\ X \end{pmatrix} = \begin{pmatrix} A & 0 \\ \Lambda & B \end{pmatrix} \begin{pmatrix} L \\ X \end{pmatrix} + \begin{pmatrix} \varepsilon \\ e \end{pmatrix}, \qquad (2.1)

where A = {a_{ji}}, B = {b_{ji}}, and Λ = {λ_{ji}} are matrices of causal coefficients, while ε and e denote vectors of independent non-Gaussian disturbances associated with L and X, respectively. Here, a_{ji}, λ_{ji}, and b_{ji} are the causal coefficients from Li to Lj, from Li to Xj, and from Xi to Xj, respectively. Due to the arbitrariness of the scale of the latent variables, we may, without loss of generality, set one of the coefficients λ_{ji} to 1 for some Xj ∈ Ch(Li). Hereafter, such a normalization will often be used. A and B can be transformed into lower triangular matrices by row and column permutations. We assume that the elements of ε and e are mutually independent and follow non-Gaussian continuous distributions. Let MG denote the LvLiNGAM defined by G. As shown in (2.1), we assume in this paper that no observed variable is an ancestor of any latent variable.
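The structural equations in (2.1) translate directly into a data-generating sketch. The following minimal NumPy snippet simulates an LvLiNGAM with a three-latent chain and two children per latent, loosely following Figure 2.1 (b), plus one within-cluster observed edge to exercise the B matrix; the graph and all coefficient values are illustrative choices, not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Independent, centered, non-Gaussian disturbances for latents and observed.
    eps = rng.exponential(1.0, (3, n)) - 1.0   # latent disturbances (epsilon)
    e = rng.exponential(1.0, (6, n)) - 1.0     # observed disturbances (e)

    # Latent part (matrix A): a chain L1 -> L2 -> L3; coefficients illustrative.
    L = np.empty((3, n))
    L[0] = eps[0]
    L[1] = 0.8 * L[0] + eps[1]
    L[2] = 0.7 * L[1] + eps[2]

    # Observed part (Lambda and B): one latent parent per observed variable,
    # plus one within-cluster edge X1 -> X4 (an entry of B).
    X = np.empty((6, n))
    X[0] = L[0] + e[0]                     # lambda normalized to 1
    X[3] = 1.2 * L[0] + 0.5 * X[0] + e[3]  # b_{41} = 0.5
    X[1] = L[1] + e[1]
    X[4] = 1.3 * L[1] + e[4]
    X[2] = L[2] + e[2]
    X[5] = 1.1 * L[2] + e[5]

    # Only X would be handed to a causal discovery method; L stays latent.
    print(np.corrcoef(X)[0, :].round(2))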
Consider the following reduced form of (2.1):

\begin{pmatrix} L \\ X \end{pmatrix} = \begin{pmatrix} (I_q - A)^{-1} & 0 \\ (I_p - B)^{-1}\Lambda(I_q - A)^{-1} & (I_p - B)^{-1} \end{pmatrix} \begin{pmatrix} \varepsilon \\ e \end{pmatrix}.

Let α^{ll}_{ji}, α^{ol}_{ji}, and α^{oo}_{ji} represent the total effects from Li to Lj, from Li to Xj, and from Xi to Xj, respectively. Thus, (I_q − A)^{−1} = {α^{ll}_{ji}}, (I_p − B)^{−1}Λ(I_q − A)^{−1} = {α^{ol}_{ji}}, and (I_p − B)^{−1} = {α^{oo}_{ji}}. The total effect from Vi to Vj is denoted by α_{ji}, with the superscript omitted. M := [(I_p − B)^{−1}Λ(I_q − A)^{−1}, (I_p − B)^{−1}] is called a mixing matrix of the model (2.1). Denote u = (ε⊤, e⊤)⊤. Then, X is written as

X = Mu, \qquad (2.2)

which conforms to the formulation of the overcomplete ICA problem [8, 26, 7]. M is said to be irreducible if every pair of its columns is linearly independent. G is said to be minimal if and only if M is irreducible. If G is not minimal, some latent variables can be absorbed into other latent variables, resulting in a minimal graph [9]. MG is called the canonical model when A = 0 and M is irreducible. Hoyer et al. [7] showed that any LvLiNGAM can be transformed into an observationally equivalent canonical model. For example, the LvLiNGAM defined by the DAG in Figure 2.1 (a) is the canonical model of the LvLiNGAM defined by the DAG in Figure 2.1 (b). Hoyer et al. [7] also demonstrated that, when the number of latent variables is known, the canonical model can be identified up to observationally equivalent models using overcomplete ICA. Salehkaleybar et al. [9] showed that, even when A ≠ 0, the irreducibility of M is a necessary and sufficient condition for the identifiability of the number of latent variables. However, they did not provide an algorithm for estimating the number of latent variables. Schkoda et al. [18] proposed ReLVLiNGAM to estimate the canonical model with generic coefficients even when the number of latent variables is unknown. However, the canonical model derived from an LvLiNGAM with A ≠ 0 lies in a measure-zero subset of the parameter space, which prevents ReLVLiNGAM from accurately identifying the number of latent confounders between two observed variables in such cases. For example, ReLVLiNGAM may not identify the canonical model in Figure 2.1 (a) from data generated by the LvLiNGAM in Figure 2.1 (b).

[Figure 2.1: Examples of LvLiNGAMs. (a) An example of a canonical LvLiNGAM over L1, L2, L3 and X1, . . . , X6. (b) An LvLiNGAM that can be identified by [20, 21].]

Cai et al. [20] and Xie et al. [21] demonstrated that, within LvLiNGAMs where all the observed children of latent variables are pure, there exists a class, such as the models shown in Figure 2.1 (b), in which the causal order among latent variables is identifiable. They proposed algorithms for estimating the causal order. However, the complete causal structure cannot be identified solely from the causal order, and their algorithms cannot be generalized to cases where causal edges exist among observed variables or where latent variables do not have sufficiently many pure children.

In this paper, we introduce the following class of models, which generalizes the class of models in Cai et al. [20] by allowing causal edges among the observed variables, and consider the problem of identifying the causal order among observed variables within each cluster as well as the causal structure among the latent variables.

A1. Each observed variable has only one latent parent.
A2. Each latent variable has at least two children, at least one of which is observed.
A3. There are no direct causal paths between causal clusters.
A4. The model satisfies the faithfulness assumption.
A5.
The higher-order cumulant of each component of the disturbance u is nonzero.

In Section 3, we demonstrate that the causal structure of the latent variables and the causal order of the observed variables are identifiable for any LvLiNGAM that satisfies Assumptions A1-A5, and we provide an algorithm for estimating the causal DAG for this class. The proposed method enables the identification not only of the causal order among latent variables but also of their complete causal structure. Under Assumption A1, every observed variable is assumed to have one latent parent. However, even if there exist observed variables without latent parents, the estimation problem can sometimes be reduced to a model satisfying Assumption A1 by applying ParceLiNGAM [11] or repetitive causal discovery (RCD) [12, 13] as a preprocessing step of the proposed method. Details are provided in Appendix E.

2.2. Cumulants
The proposed method leverages higher-order cumulants of observed data to identify the causal structure among latent variables. In this subsection, we summarize some facts on higher-order cumulants. First, we introduce the definition of a higher-order cumulant.

Definition 2.1 (Cumulants [27]). Let i1, . . . , ik ∈ {1, . . . , p}. The k-th order cumulant of the random vector (X_{i1}, . . . , X_{ik}) is

c^{(k)}_{i_1,\dots,i_k} = \mathrm{cum}^{(k)}(X_{i_1}, \dots, X_{i_k}) = \sum_{(I_1,\dots,I_h)} (-1)^{h-1} (h-1)! \; E\Big[\prod_{j \in I_1} X_j\Big] \cdots E\Big[\prod_{j \in I_h} X_j\Big],

where the sum is taken over all partitions (I1, . . . , Ih) of (i1, . . . , ik). If i1 = · · · = ik = i, we write cum^{(k)}(X_i) to denote cum^{(k)}(X_i, . . . , X_i).

The k-th order cumulants of the observed variables of an LvLiNGAM satisfy

c^{(k)}_{i_1,\dots,i_k} = \mathrm{cum}^{(k)}(X_{i_1},\dots,X_{i_k}) = \sum_{j=1}^{q} \alpha^{ol}_{i_1 j} \cdots \alpha^{ol}_{i_k j}\, \mathrm{cum}^{(k)}(\varepsilon_j) + \sum_{j=1}^{p} \alpha^{oo}_{i_1 j} \cdots \alpha^{oo}_{i_k j}\, \mathrm{cum}^{(k)}(e_j).

We consider an LvLiNGAM in which all variables except Xi and Xj are regarded as latent variables. We refer to the canonical model that is observationally equivalent to this model as the canonical model over Xi and Xj. Let Conf(Xi, Xj) = {L′1, L′2, . . . , L′l} be the set of latent confounders in the canonical model over Xi and Xj, where all L′h ∈ Conf(Xi, Xj) are mutually independent. Without loss of generality, we assume that Xj ∉ Anc(Xi). Then, Xi and Xj are expressed as

X_i = \sum_{h=1}^{l} \alpha'_{ih} L'_h + v_i, \qquad X_j = \sum_{h=1}^{l} \alpha'_{jh} L'_h + \alpha^{oo}_{ji} v_i + v_j, \qquad (2.3)

where vi and vj are disturbances, and α′_{ih} and α′_{jh} are the total effects from L′h to Xi and Xj, respectively, in the canonical model over them. We note that the model (2.3) is a canonical model with generic parameters, and that l is equal to the number of confounders in the original model MG. Schkoda et al. [18] proposed an algorithm for estimating the canonical model with generic parameters by leveraging higher-order cumulants. Several of their theorems concerning higher-order cumulants are also applicable to the canonical model over Xi and Xj. They define a $\big(\sum_{i=0}^{k_2-k_1+1} i\big) \times k_1$ matrix A^{(k_1,k_2)}_{(X_i \to X_j)} as follows:

A^{(k_1,k_2)}_{(X_i\to X_j)} = \begin{pmatrix}
c^{(k_1)}_{i,i,\dots,i} & c^{(k_1)}_{i,i,\dots,j} & \cdots & c^{(k_1)}_{i,j,\dots,j} \\
c^{(k_1+1)}_{i,i,i,\dots,i} & c^{(k_1+1)}_{i,i,i,\dots,j} & \cdots & c^{(k_1+1)}_{i,i,j,\dots,j} \\
c^{(k_1+1)}_{j,i,i,\dots,i} & c^{(k_1+1)}_{j,i,i,\dots,j} & \cdots & c^{(k_1+1)}_{j,i,j,\dots,j} \\
\vdots & \vdots & & \vdots \\
c^{(k_2)}_{i,\dots,i,i,i,\dots,i,i} & c^{(k_2)}_{i,\dots,i,i,i,\dots,i,j} & \cdots & c^{(k_2)}_{i,\dots,i,i,j,\dots,j,j} \\
\vdots & \vdots & & \vdots \\
c^{(k_2)}_{j,\dots,j,i,i,\dots,i,i} & c^{(k_2)}_{j,\dots,j,i,i,\dots,i,j} & \cdots & c^{(k_2)}_{j,\dots,j,i,j,\dots,j,j}
\end{pmatrix}, \qquad (2.4)

where k_1 < k_2.
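Proposition 2.2 and the discussion of how the numerical rank of (2.4) determines the number of confounders lie outside this excerpt, so the following is only a minimal NumPy sketch of Definition 2.1 (with expectations replaced by sample means) and of the row/column pattern of (2.4) for (k1, k2) = (2, 3); the toy data and all names are illustrative.

    import math
    import numpy as np

    def _partitions(idx):
        # All set partitions of a tuple of positions (fine for small k).
        if not idx:
            yield []
            return
        first, rest = idx[0], idx[1:]
        for part in _partitions(rest):
            for i in range(len(part)):
                yield part[:i] + [part[i] + (first,)] + part[i + 1:]
            yield part + [(first,)]

    def joint_cumulant(cols):
        # Definition 2.1, estimated from samples: cols is a list of 1-d arrays.
        total = 0.0
        for part in _partitions(tuple(range(len(cols)))):
            h = len(part)
            term = (-1.0) ** (h - 1) * math.factorial(h - 1)
            for block in part:
                term *= np.mean(np.prod([cols[i] for i in block], axis=0))
            total += term
        return total

    def A_23(xi, xj):
        # The (2.4) pattern for (k1, k2) = (2, 3): one row of 2nd-order and
        # two rows of 3rd-order cumulants, columns swapping a trailing i for j.
        return np.array([
            [joint_cumulant([xi, xi]),     joint_cumulant([xi, xj])],
            [joint_cumulant([xi, xi, xi]), joint_cumulant([xi, xi, xj])],
            [joint_cumulant([xj, xi, xi]), joint_cumulant([xj, xi, xj])],
        ])

    # Toy check: a single latent confounder between X1 and X2.
    rng = np.random.default_rng(0)
    n = 200_000
    eps = rng.exponential(1.0, n) - 1.0
    x1 = eps + rng.exponential(1.0, n) - 1.0
    x2 = 1.5 * eps + rng.exponential(1.0, n) - 1.0
    print(np.linalg.svd(A_23(x1, x2), compute_uv=False))

How the singular values of this matrix map to the number of latent confounders is given by Proposition 2.2 (and is used in the implementation of Section 4), which falls outside this excerpt.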
When the number of estimated clusters exceeds q, some clusters must be merged to recover the true clustering. X can be partitioned into maximal subsets of mutually dependent variables. Each observed variable in these subsets has a corresponding latent parent. If the causal order of the latent parents within each subset is determined, then the causal order of the entire latent variable set L̂ is uniquely determined. Henceforth, we assume, without loss of generality, that X itself forms one such maximal subset.

3.2.1. Determining the Source Latent Variable
Since we assume that X consists of mutually dependent variables, G contains only one source node among the latent variables. Theorem 3.5 provides the necessary and sufficient condition for a latent variable to be a source node.

Theorem 3.5. Let Xi denote the observed variable with the highest causal order among Ĉi. Then, Li is generically a latent source in G if and only if Conf(Xi, Xj) is identical across all Xj ∈ X \ {Xi} such that Xi ̸⊥⊥ Xj in the canonical model over Xi and Xj, with the common value being {Li}.

Note that in Stage I, the ancestral relationships among the observed variables are determined. Hence, the causal order within each cluster can also be determined. Let Xj be the observed variable with the highest causal order among Ĉj for j = 1, . . . , K, and define Xoc = {X1, . . . , XK}. When |Ĉi| ≥ 2, let Xi′ be any element of Ĉi \ {Xi}. Define Xi by

\mathcal{X}_i = \begin{cases} \{X_{i'}\}, & |\hat C_i| \ge 2, \\ \emptyset, & |\hat C_i| = 1. \end{cases} \qquad (3.2)

Let L(i,j) denote a latent confounder of Xi and Xj in the canonical model over them. In the implementation, we verify whether the conditions of Theorem 3.5 are satisfied by using Corollary 3.6.

Corollary 3.6. Assume k ≥ 3. Li is generically a latent source in G if and only if one of the following two cases holds:
1. Xi = ∅ and |Xoc \ {Xi}| = 1.
2. |(Xoc ∪ Xi) \ {Xi}| ≥ 2 and the following all hold:
(a) In the canonical model over Xi and Xj, |Conf(Xi, Xj)| = 1 for Xj ∈ (Xoc ∪ Xi) \ {Xi} such that Xi ̸⊥⊥ Xj.
(b) c(k)_{(Xi→Xj)}(L(i,j)) are identical for Xj ∈ (Xoc ∪ Xi) \ {Xi}.

[Figure 3.2: An example of merging clusters in Stage II. (a) A single latent L1 over X1, X2, X3. (b) Two latents L1, L2 over X1, X2, X3.]

When Xi = ∅ and |Xoc \ {Xi}| = 1, it is trivial by Assumption A2 that Li is a latent source. Otherwise, for Li to be a latent source, it is necessary that |Conf(Xi, Xj)| = 1 for all Xj ∈ (Xoc ∪ Xi) \ {Xi}. This can be verified by using Condition 1 of Proposition 2.2. In addition, if the c(k)_{(Xi→Xj)}(L(i,j)) for Xj ∈ (Xoc ∪ Xi) \ {Xi} are identical, Li can be regarded as a latent source. When Xi ∈ Anc(Xi′), equation (2.5) yields two distinct solutions, c(k)_{(Xi→Xi′)}(L(i,i′)) = c(k)_{(Xi→Xi′)}(Li) and c(k)_{(Xi→Xi′)}(ei), which are identifiable only up to a permutation of the two. If either of these two solutions equals c(k)_{(Xi→Xj)}(L(i,j)) for all Xj ∈ Xoc \ {Xi}, then Li can be identified as the latent source.
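Example 3.7 below works these quantities out analytically for the models of Figure 3.2. The following is a numerical sketch of the comparison in condition (b), restricted to variable pairs without a direct observed edge; pairs with such an edge require the (2.5)-based estimator, which lies outside this excerpt. The single-confounder cumulant-ratio estimator below and all coefficients are illustrative, not the paper's general procedure.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 500_000

    # Model of Figure 3.2 (a): one latent source eps1 with children X1, X2, X3
    # and an observed edge X2 -> X3; coefficients illustrative.
    eps1 = rng.exponential(1.0, n) - 1.0        # skewed, zero-mean, non-Gaussian
    e = rng.exponential(1.0, (3, n)) - 1.0
    lam21, lam31, b32 = 1.3, 1.1, 0.7
    X1 = eps1 + e[0]
    X2 = lam21 * eps1 + e[1]
    X3 = lam31 * eps1 + b32 * X2 + e[2]

    def cum3(a, b, c):
        # Third-order joint cumulant of zero-mean variables equals E[abc].
        return np.mean(a * b * c)

    def conf_cum3(xi, xj):
        # cum^(3) of the single latent confounder of (xi, xj), normalized so
        # that its loading on xi is 1; valid only for one confounder and
        # independent noises (no direct edge between xi and xj).
        return cum3(xi, xi, xj) ** 2 / cum3(xi, xj, xj)

    # Condition (b) seen from X1: the confounder cumulants agree across partners,
    # so L1 passes the source test.
    print(conf_cum3(X1, X2), conf_cum3(X1, X3), np.mean(eps1**3))  # all close
    # Seen from X2 against the edge-free partner X1: cum^(3)(lam21 * eps1).
    print(conf_cum3(X2, X1), lam21**3 * np.mean(eps1**3))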
Example 3.7. Consider the models in Figure 3.2. For both models (a) and (b), the clusters estimated in Stage I are Ĉ1 = {X1} and Ĉ2 = {X2, X3}; let L1 and L2 be the latent parents assigned to Ĉ1 and Ĉ2, respectively. Then, Xoc = {X1, X2}. In model (a), we can assume λ11 = 1 without loss of generality. Then, model (a) is expressed as

X1 = ε1 + e1, X2 = λ21 ε1 + e2, X3 = (λ21 b32 + λ31) ε1 + b32 e2 + e3.

By Proposition 2.4 and assuming k ≥ 2, we can obtain

c(k)_{(X1→X2)}(L(1,2)) = cum(k)(ε1), c(k)_{(X2→X1)}(L(2,1)) = c(k)_{(X2→X3)}(L(2,3)) = cum(k)(λ21 ε1).

Since |Xoc \ {X1}| = |{X2}| = 1 and c(k)_{(X2→X1)}(L(2,1)) = c(k)_{(X2→X3)}(L(2,3)), both L1 and L2 are determined as latent sources. The dependence between X1 and X2 leads to L1 and L2 being regarded as a single latent source, resulting in the merging of Ĉ1 and Ĉ2. In model (b), we can assume λ11 = λ22 = 1 without loss of generality. Then, model (b) is described as

X1 = ε1 + e1, X2 = (a21 ε1 + ε2) + e2, X3 = (b32 + λ31)(a21 ε1 + ε2) + b32 e2 + e3.

Then,

c(k)_{(X1→X2)}(L(1,2)) = cum(k)(ε1), c(k)_{(X2→X1)}(L(2,1)) = cum(k)(a21 ε1) ≠ c(k)_{(X2→X3)}(L(2,3)) = cum(k)(a21 ε1 + ε2).

Therefore, L1 is a latent source, while L2 is not.

As in model (a), multiple latent variables may be identified as latent sources. In such cases, their observed children are merged into a single cluster. Once Li is established as a latent source, it implies that Li is an ancestor of the other elements in L̂. The procedure of Section 3.2.1 is summarized in Algorithm 2.

Algorithm 2 Finding latent sources
Input: mutually dependent Xoc, Ĉ, and AL
Output: Xoc, Ĉ, and a set of ancestral relationships between latent variables AL
1: Assign each cluster one latent parent and let L̂ be the set of latent parents
2: Apply Corollary 3.6 to find the latent sources Ls
3: Assume Ls ∈ Ls and Ĉs ∈ Ĉ
4: if |Ls| ≥ 2 then
5:   Merge the corresponding clusters into Ĉs and update Ĉ and Xoc
6:   Identify all latent parents in Ls with Ls
7: end if
8: for all Li ∈ L̂ \ Ls do
9:   Anc(Li) ← {Ls}
10: end for
11: Xoc ← Xoc \ {Xs}
12: AL ← AL ∪ {Anc(Li) | Li ∈ L̂ \ Ls} ∪ {Anc(Ls) = ∅}
13: return Xoc, Ĉ, and AL

3.2.2. Determining the Causal Order of Latent Variables
Next, we address the identification of subsequent latent sources after finding L1 in the preceding procedure. If the influence of the latent source can be removed from its observed descendants, the subsequent latent source may be identified through a procedure analogous to the one previously applied. The statistic ẽ(Xi,Xh), defined below, serves as a key quantity for removing such influence.

Definition 3.8. Let Xi and Xh be two observed variables. Define ẽ(Xi,Xh) as ẽ(Xi,Xh) = Xi − ρ(Xi,Xh) Xh, where

\rho_{(X_i,X_h)} = \begin{cases} \dfrac{\mathrm{cum}(X_i, X_i, X_h, X_h)}{\mathrm{cum}(X_i, X_h, X_h, X_h)}, & X_i \not\perp\!\!\!\perp X_h, \\[2mm] 0, & X_i \perp\!\!\!\perp X_h. \end{cases}

Under Assumption A5, when Xi ̸⊥⊥ Xh, ρ(Xi,Xh) is shown to be generically finite and non-zero. See Lemma A.2 in the Appendix for details. Let Lh be the latent source, and let Xh be its observed child with the highest causal order. When there is no directed path between Xi and Xh, ẽ(Xi,Xh) can be regarded as Xi after the influence of Lh has been removed.

Example 3.9. Consider the model in Figure 3.3 (a). We can assume λ22 = λ33 = 1 without loss of generality. Then, X1, X2 and X3 are described as

X1 = ε1 + e1, X2 = a21 ε1 + ε2 + e2, X3 = a31 ε1 + ε3 + e3.

We can easily show that ρ(X2,X1) = a21. Hence, we have ẽ(X2,X1) = −a21 e1 + ε2 + e2. It can be seen that ẽ(X2,X1) does not depend on L1 = ε1, and that ẽ(X2,X1) and X3 are mutually independent.

[Figure 3.3: Examples of LvLiNGAMs. (a) Three latents with children X1, X2, X3. (b) Three latents with children X1, . . . , X5.]
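Definition 3.8 and Example 3.9 can be checked numerically. The following sketch estimates ρ from fourth-order cumulants and verifies that the residual no longer carries L1; correlations (and correlations of squares, as a crude stand-in for the HSIC test used in the implementation) serve as the independence proxy. Coefficients are illustrative.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 500_000

    # Model of Figure 3.3 (a): L1 -> L2 and L1 -> L3; coefficients illustrative.
    eps = rng.exponential(1.0, (3, n)) - 1.0
    e = rng.exponential(1.0, (3, n)) - 1.0
    a21, a31 = 0.8, 0.6
    X1 = eps[0] + e[0]
    X2 = a21 * eps[0] + eps[1] + e[1]
    X3 = a31 * eps[0] + eps[2] + e[2]

    def cum4(a, b, c, d):
        # Fourth-order joint cumulant of zero-mean variables.
        return (np.mean(a*b*c*d) - np.mean(a*b)*np.mean(c*d)
                - np.mean(a*c)*np.mean(b*d) - np.mean(a*d)*np.mean(b*c))

    rho = cum4(X2, X2, X1, X1) / cum4(X2, X1, X1, X1)   # Definition 3.8
    e_tilde = X2 - rho * X1                             # = -a21*e1 + eps2 + e2

    print(rho)                                   # close to a21 = 0.8
    print(np.corrcoef(e_tilde, X3)[0, 1])        # close to 0 (Example 3.9)
    print(np.corrcoef(e_tilde**2, X3**2)[0, 1])  # close to 0, nonlinear check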
Example 3.10. Consider the model in Figure 3.3 (b). We can assume that λ11 = λ22 = λ33 = 1 without loss of generality. Then, the model is described as

X1 = ε1 + e1,
X2 = a21 ε1 + ε2 + e2,
X3 = a32 a21 ε1 + a32 ε2 + ε3 + e3,
X4 = λ42 (a21 ε1 + ε2) + e4,
X5 = (λ53 + b53)(a32 a21 ε1 + a32 ε2 + ε3) + b53 e3 + e5.

We can easily show that ρ(X2,X1) = a21 and ρ(X3,X1) = a32 a21. Hence, we have

ẽ(X2,X1) = −a21 e1 + ε2 + e2, ẽ(X3,X1) = −a32 a21 e1 + a32 ε2 + ε3 + e3.

It can be seen that ẽ(X2,X1) and ẽ(X3,X1) are obtained by replacing L1 = ε1 with −e1. The models for (ẽ(X2,X1), X3) and (ẽ(X2,X1), X5) are described by canonical models with Conf(ẽ(X2,X1), X3) = Conf(ẽ(X2,X1), X5) = {ε2}, respectively. The models for (ẽ(X3,X1), X2) and (ẽ(X3,X1), X5) are described by canonical models with Conf(ẽ(X3,X1), X2) = {ε2} and Conf(ẽ(X3,X1), X5) = {a32 ε2 + ε3, e3}, respectively. X5 contains {ε1, ε2, ε3, e3, e5}, and ẽ(X3,X1) contains {ε2, ε3, e1, e3}. Since these sets are not in an inclusion relationship, it follows from Lemma 5 of Salehkaleybar et al. [9] that there is no ancestral relationship between ẽ(X3,X1) and X5. It is noteworthy that ẽ(X3,X1) and X5 share two latent confounders, and that no ancestral relationship exists between them even though X3 ∈ Anc(X5) in the original graph.

Let L1 be the current latent source identified by the preceding procedure. Let G−({L1}) be the subgraph of G induced by V \ ({L1} ∪ Ĉ1). By generalizing the discussion in Examples 3.9 and 3.10, we obtain the following theorems.

Theorem 3.11. For Xi, Xj ∈ Xoc \ {X1} and their respective latent parents Li and Lj, Li ⊥⊥ Lj | L1 if and only if ẽ(Xi,X1) ⊥⊥ Xj.

Theorem 3.12. Let Li denote the latent parent of Xi ∈ Xoc \ {X1}. If |Ĉi| ≥ 2, let Xi′ be an element of Ĉi \ {Xi}. Xi is defined in the same manner as (3.2). Then, Li is generically a source in G−({L1}) if and only if the following two conditions hold:
1. Conf(ẽ(Xi,X1), Xj) are identical for all Xj ∈ Xoc \ {X1, Xi} such that ẽ(Xi,X1) ̸⊥⊥ Xj, with their common value being {εi}.
2. If Xi ̸= ∅, Conf(ẽ(Xi,X1), Xi′) ∩ Conf(ẽ(Xi,X1), Xj) are identical for all Xj ∈ Xoc \ {X1, Xi}, with their common value being {εi}.

By applying Theorem 3.11, we can obtain the family of maximal dependent subsets of Xoc \ {X1} in the conditional distribution given L1. Theorem 3.12 allows us to verify whether Li is a latent source in G−({L1}). By recursively iterating this procedure, the ancestral relationships among the latent variables can be identified. To achieve this, it is necessary to generalize ẽ(Xi,X1) as in Definition 3.13. Let G−({L1, . . . , Ls−1}) denote the subgraph of G induced by V excluding {L1, . . . , Ls−1} and their observed children, and let L1, . . . , Ls−1 be latent sources in G, G−({L1}), G−({L1, L2}), . . . , G−({L1, . . . , Ls−2}), respectively. Then, {L1, . . . , Ls−1} has a causal order L1 ≺ · · · ≺ Ls−1.

Definition 3.13. For i ≥ s, ẽ(Xi, ẽs) is defined as follows:

\tilde e_{(X_i, \tilde{\mathbf{e}}_s)} = \begin{cases} X_i, & s = 1, \\ X_i - \sum_{h=1}^{s-1} \rho_{(X_i,\, \tilde e_{(X_h, \tilde{\mathbf{e}}_h)})}\; \tilde e_{(X_h, \tilde{\mathbf{e}}_h)}, & s > 1, \end{cases}

where ẽs = (ẽ(X1, ẽ1), . . . , ẽ(Xs−1, ẽs−1)). ẽ(Xi, ẽs) can be regarded as a statistic from which the information of L1, . . . , Ls−1 has been eliminated. The following lemma shows that ẽ(Xi, ẽs) is obtained by replacing the information of ε1, . . . , εs−1 with that of e1, . . . , es−1.

Lemma 3.14. Let X1, . . . , Xs−1, and Xi be the observed children with the highest causal order of L1, . . . , Ls−1, and Li, respectively. ẽ(Xi, ẽs) can be expressed as

\tilde e_{(X_i, \tilde{\mathbf{e}}_s)} = \begin{cases} \varepsilon_i + U_{[i]}, & i = s, \\ \varepsilon_i + \sum_{h=s}^{i-1} \alpha^{ll}_{ih} \varepsilon_h + e_i, & i > s \text{ and } s = 1, \\ \varepsilon_i + \sum_{h=s}^{i-1} \alpha^{ll}_{ih} \varepsilon_h + U_{[s-1]} + e_i, & i > s \text{ and } s > 1, \end{cases}

where U[i] and U[s−1] are linear combinations of {e1, . . . , ei} and {e1, . . . , es−1}, respectively.
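The recursion in Definition 3.13 is short enough to sketch directly. The following NumPy snippet builds ẽ(Xi, ẽs) on a three-latent chain (as in Figure 3.3 (b), restricted to X1, X2, X3) and checks Lemma 3.14: the final residual carries no information about ε1, ε2, while the information of ε2 has been replaced by that of e2. Coefficients are illustrative.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 500_000

    # Latent chain L1 -> L2 -> L3 with one child each; illustrative coefficients.
    eps = rng.exponential(1.0, (3, n)) - 1.0
    e = rng.exponential(1.0, (3, n)) - 1.0
    a21, a32 = 0.8, 0.9
    X = np.empty((3, n))
    X[0] = eps[0] + e[0]
    X[1] = a21 * eps[0] + eps[1] + e[1]
    X[2] = a32 * a21 * eps[0] + a32 * eps[1] + eps[2] + e[2]

    def cum4(a, b, c, d):
        return (np.mean(a*b*c*d) - np.mean(a*b)*np.mean(c*d)
                - np.mean(a*c)*np.mean(b*d) - np.mean(a*d)*np.mean(b*c))

    def rho(x, h):
        return cum4(x, x, h, h) / cum4(x, h, h, h)   # Definition 3.8

    # Definition 3.13: e_tilde[i] stores e~(X_{i+1}, e~_{i+1}) for L1 < L2 < L3.
    e_tilde = []
    for i in range(3):
        r = X[i].copy()
        for h in range(i):
            r -= rho(X[i], e_tilde[h]) * e_tilde[h]
        e_tilde.append(r)

    # e_tilde[2] should equal eps3 - a32*e2 + e3 (Lemma 3.14 with i = s = 3).
    print(np.corrcoef(e_tilde[2], eps[0])[0, 1])  # close to 0
    print(np.corrcoef(e_tilde[2], eps[1])[0, 1])  # close to 0
    print(np.corrcoef(e_tilde[2], e[1])[0, 1])    # clearly non-zero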
By using ẽ(Xi, ẽs) of Definition 3.13, we obtain Theorems 3.15 and 3.16, which generalize Theorems 3.11 and 3.12, respectively.

Theorem 3.15. For Xi, Xj ∈ Xoc \ {X1, . . . , Xs−1} and their respective latent parents Li and Lj, Li ⊥⊥ Lj | {L1, . . . , Ls−1} if and only if ẽ(Xi, ẽs) ⊥⊥ Xj.

Theorem 3.16. Let Li be the latent parent of Xi ∈ Xoc \ {X1, . . . , Xs−1}. If |Ĉi| ≥ 2, let Xi′ be an element of Ĉi \ {Xi}. Xi is defined in the same manner as (3.2). Then, Li is generically a latent source in G−({L1, . . . , Ls−1}) if and only if the following two conditions hold:
1. Conf(ẽ(Xi, ẽs), Xj) are identical for all Xj ∈ Xoc \ {X1, . . . , Xs−1, Xi} such that ẽ(Xi, ẽs) ̸⊥⊥ Xj, with their common value being {εi}.
2. When Xi ̸= ∅, Conf(ẽ(Xi, ẽs), Xi′) ∩ Conf(ẽ(Xi, ẽs), Xj) are identical for all Xj ∈ Xoc \ {X1, . . . , Xs−1, Xi} such that ẽ(Xi, ẽs) ̸⊥⊥ Xj, with their common value being {εi}.

As in Theorem 3.11, by applying Theorem 3.15, we can identify the family of maximal dependent subsets of Xoc \ {X1, . . . , Xs−1} in the conditional distribution given {L1, . . . , Ls−1}. For each maximal dependent subset, we can apply Theorem 3.16 to identify the next latent source. In the implementation, we verify whether the conditions of Theorem 3.16 are satisfied using Corollary 3.17, which generalizes Corollary 3.6.

Corollary 3.17. Assume k ≥ 3. Li is generically a latent source in G−({L1, . . . , Ls−1}) if and only if one of the following two cases holds:
1. Xi = ∅ and |Xoc \ {X1, . . . , Xs−1, Xi}| = 1.
2. |(Xoc ∪ Xi) \ {X1, . . . , Xs−1, Xi}| ≥ 2, and the following all hold:
(a) In the canonical model over ẽ(Xi, ẽs) and Xj, |Conf(ẽ(Xi, ẽs), Xj)| = 1 for all Xj ∈ Xoc \ {X1, . . . , Xs−1, Xi} such that ẽ(Xi, ẽs) ̸⊥⊥ Xj.
(b) c(k)_{(ẽ(Xi,ẽs)→Xj)}(L(i,j)) are identical for all Xj ∈ Xoc \ {X1, . . . , Xs−1, Xi} such that ẽ(Xi, ẽs) ̸⊥⊥ Xj, where L(i,j) is the unique latent confounder in the canonical model over ẽ(Xi, ẽs) and Xj.
(c) When Xi ̸= ∅, ẽ(Xi, ẽs) and Xi′ have a latent confounder L(i,i′) in the canonical model over them that satisfies c(k)_{(ẽ(Xi,ẽs)→Xi′)}(L(i,i′)) = c(k)_{(ẽ(Xi,ẽs)→Xj)}(L(i,j)) for all Xj ∈ Xoc \ {X1, . . . , Xs−1, Xi} such that ẽ(Xi, ẽs) ̸⊥⊥ Xj.

To determine whether Li is a latent source of G−({L1, . . . , Ls−1}), we first examine, using Condition 1 of Proposition 2.2, whether |Conf(ẽ(Xi, ẽs), Xj)| = 1, as in Section 3.2.1. If the c(k)_{(ẽ(Xi,ẽs)→Xj)}(L(i,j)) are identical for Xj ∈ (Xoc ∪ Xi) \ {X1, . . . , Xs−1, Xi}, Li is identified as a latent source. As in the previous case, when Xi ̸= ∅ and Xi ∈ Anc(Xi′), equation (2.5) yields two distinct solutions for the higher-order cumulants of latent confounders. Here, we determine that Li is a latent source in G−({L1, . . . , Ls−1}) if either of the two solutions of (2.5) equals c(k)_{(ẽ(Xi,ẽs)→Xj)}(L(i,j)) for Xj ∈ Xoc \ {X1, . . . , Xs−1, Xi}. If multiple latent sources are identified for any element in a mutually dependent maximal subset of Xoc \ {X1, . . . , Xs−1}, the corresponding clusters must be merged. As latent sources are successively identified, the correct set of latent variables L, the ancestral relationships among L, and the correct clusters are also successively identified. The procedure of Section 3.2.2 is presented in Algorithm 3. Algorithm 4 combines Algorithms 2 and 3 to provide the complete procedure for Stage II.
Example 3.18. For the model in Figure 3.1 (a), the estimated clusters obtained in Stage I are {X1}, {X2}, {X3, X5}, and {X4}, with their corresponding latent parents denoted as L1, L2, L3, and L4, respectively. Set Xoc = {X1, X2, X3, X4}. Only X1 satisfies Corollary 3.6, and thus L1 is identified as the initial latent source. Then, we remove X1 from Xoc and update it to Xoc = {X2, X3, X4}. Next, since it can be shown that only L2 satisfies Corollary 3.17, i.e.,

c(3)_{(ẽ(X2,X1)→X3)}(L(2,3)) = c(3)_{(ẽ(X2,X1)→X4)}(L(2,4)),

it follows that L2 is the latent source of G−({L1}). Similarly, we remove X2 from the current Xoc and update it to Xoc = {X3, X4}. Let X3′ = X5. In G−({L1, L2}), we compute ẽ(X3, ẽ3) and ẽ(X4, ẽ3), and find that

c(3)_{(ẽ(X3,ẽ3)→X4)}(L(3,4)) = c(3)_{(ẽ(X3,ẽ3)→X5)}(L(3,5)), |Xoc ∪ ∅ \ {X4}| = 1,

indicating that both L3 and L4 are latent sources by Corollary 3.17. Furthermore, we conclude that {X3, X5} and {X4} should be merged into one cluster confounded by L3.

Algorithm 3 Finding subsequent latent sources
Input: Xoc, Ĉ, and AL
Output: Ĉ and AL
1: Apply Corollary 3.17 to find the set of latent sources L0 in G−(Ls)
2: if L0 = ∅ then
3:   return Ĉ and AL
4: end if
5: if |L0| ≥ 2 then
6:   for all pairs Xi, Xj ∈ ∪_{k: Lk ∈ L0} Ĉk ∩ Xoc do
7:     if ẽ(Xi, ẽs) ̸⊥⊥ Xj then
8:       Merge Ĉj into Ĉi
9:       Ĉ ← Ĉ \ {Ĉj}, L̂ ← L̂ \ {Lj}, Xoc ← Xoc \ {Xj}, AL ← AL \ {Anc(Lj)}
10:     end if
11:   end for
12: end if
13: for all Xi ∈ ∪_{k: Lk ∈ L0} Ĉk ∩ Xoc do
14:   X(i)oc ← ∅
15:   for all Xj ∈ Xoc \ {Xi} do
16:     if Xj ̸⊥⊥ ẽ(Xi, ẽs) then
17:       Anc(Lj) ← Anc(Lj) ∪ {Li}, X(i)oc ← X(i)oc ∪ {Xj}
18:     end if
19:   end for
20:   Ĉ, AL ← Algorithm 3 (X(i)oc, Ĉ, AL)
21: end for
22: return Ĉ and AL

Algorithm 4 Finding the ancestral relationships between latent variables
Input: X, AO, and Ĉ
Output: Ĉ and AL
1: AL ← ∅
2: for all mutually dependent Xoc do
3:   Xoc, Ĉ, AL ← Algorithm 2 (Xoc, Ĉ, AL)
4:   Ĉ, AL ← Algorithm 3 (Xoc, Ĉ, AL)
5: end for
6: return Ĉ and AL

3.3. Stage III: Identifying the Causal Structure among Latent Variables
By the end of Stage II, the clusters of observed variables have been identified, as well as the ancestral relationships among latent variables and among observed variables. The ancestral relationships among L alone do not uniquely determine the complete causal structure of L. Here, we propose a bottom-up algorithm to estimate the causal structure of the latent variables. Note that if the ancestral relationships among L are known, a causal order of L can also be obtained. Theorem 3.19 provides an estimator of the causal coefficients between latent variables.

Theorem 3.19. Assume that Anc(Li) = {L1, . . . , Li−1} with the causal order L1 ≺ · · · ≺ Li−1. Let X1, . . . , Xi be the observed children of L1, . . . , Li with the highest causal order, respectively. Define r̃i,k−1 as

\tilde r_{i,k-1} = \begin{cases} X_i, & k = 1, \\ X_i - \sum_{h=i-(k-1)}^{i-1} a_{ih} X_h, & k \ge 2. \end{cases}

When we set λ11 = · · · = λii = 1, ai,i−k = ρ(r̃i,k−1, ẽ(Xi−k, ẽi−k)) generically holds. In addition, under Assumption A4, it holds generically that ai,i−k = 0 if and only if r̃i,k−1 ⊥⊥ ẽ(Xi−k, ẽi−k).

If the only information available is the ancestral relationships among {L1, . . . , Li}, we cannot determine whether there is an edge Li−k → Li in G. However, according to Theorem 3.19, if r̃i,k−1 ⊥⊥ ẽ(Xi−k, ẽi−k), then ai,i−k = 0, and thus it follows that Li−k → Li does not exist. Algorithm 5 describes how Theorem 3.19 is applied to estimate the causal structure among L.

Example 3.20. For the model in Figure 3.1 (a), the estimated causal order of latent variables is L1 ≺ L2 ≺ L3 with Xoc = {X1, X2, X3}. Assume initially that L1, L2, and L3 form a complete graph. Then X1, X2, X3, and ẽ(X2, ẽ2) are

X1 = ε1 + e1, X2 = a21 ε1 + ε2 + e2, X3 = (a21 a32 + a31) ε1 + a32 ε2 + ε3 + e3, ẽ(X2, ẽ2) = ẽ(X2,X1) = ε2 + e2 − a21 e1.

We estimate a32 and a31 using Theorem 3.19 as follows:

a32 = ρ(X3, ẽ(X2, ẽ2)), r̃31 = X3 − a32 X2 = a31 ε1 + ε3 − a32 e2 + e3, a31 = ρ(r̃31, X1).
Thus, if ̃r31 ⊥⊥X1, then a31 = 0. In this case, we can conclude that L1 →L3 does not exist. Algorithm 5 Finding causal structure among latent variables Input: Xoc, L, AL Output: An adjacency matrix Aadj of L 1: function Adjacency(Xoc, Li, Lopen, Aadj, ̃ri) 2: if |Lopen| = 0 then 3: return Aadj 4: end if 5: Initialize Lnext ←∅ 6: for all Lj ∈Lopen do 7: ˆaij ←0, Lnext ←Lnext ∪Pa(Lj) 8: if ∃{Lk, Lh} ⊂Lnext s.t. Lk ∈Anc(Lh) then 9: Lnext ←Lnext \ {Lk} 10: end if 11: if ̃ri ̸⊥⊥ ̃e(Xj, ̃ej) then 12: ˆaij ←an empirical counterpart of aij 13: end if 14: ̃ri ← ̃ri -ˆaijXj 15: if ˆaji ̸= 0 then 16: Aadj[i, j] ←1 17: end if 18: end for 19: Lopen ←Lnext 20: Aadj ←Adjacency(Xoc, Li, Lopen, Aadj, ̃ri) 21: end function 22: function Main(Xoc, L, AL) 23: Initialize Aadj ←{0}|L|×|L| 24: for all Li ∈L do 25: ̃ri ←Xi, Lopen ←Pa(Li) 26: if ∃{Lk, Lh} ⊂Lopen s.t. Lk ∈Anc(Lh) then 27: Lopen ←Lopen \ {Lk} 28: end if 29: Aadj ←Adjacency(Xoc, Li, Lopen, Aadj, ̃ri) 30: end for 31: return Aadj 32: end function 23 3.4. Summary This section integrates Algorithms 1, 4, and 5 into Algorithm 6, which identifies the clusters of observed variables, the causal structure between latent variables, and the ancestral relationships between observed variables under the assumptions A1-A5. Since the causal clusters ˆC have been correctly identified, the directed edges from L to X are also identified. Although the ancestral relationships among observed variables can be identified, their exact causal structure remains undetermined. In conclusion, we obtain the following result: Theorem 3.21. Given observed data generated from an LvLiNGAM MG in (2.1) that satisfies the assumptions A1-A5, the proposed method can identify the latent causal structure among L, causal edges from L to X, and ancestral relationships among X. Algorithm 6 Identify the Causal Structure among Latent Variables Input: X = (X1, . . . , Xp)⊤ Output: AO, ˆC, and Aadj 1: ˆC, AO ←Algorithm 1 (X) ▷Estimate over-segmented clusters 2: AL, ˆC ←Algorithm 4 (X, AO, ˆC) ▷Identify the causal order among latent variables 3: Aadj ←Algorithm 5 (Xoc, L, AL) ▷Find the causal structure among latent variables 4: return AO, ˆC, and Aadj 4. Simulations In this section, we assess the effectiveness of the proposed method by comparing it with the algorithms proposed by Xie et al. [21] for estimating LiNGLaM and by Xie et al. [23] for estimating LiNGLaH, as well as with ReLVLiNGAM [18], which serves as the estimation method for the canonical model with generic parameters. For convenience, we hereafter refer to both the model class introduced by Xie et al. [21] and its estimation algorithm as LiNGLaM, and likewise use LiNGLaH to denote both the model class and the estimation algorithm proposed by Xie et al. [23]. 4.1. Settings In the simulation, the true models are set to six LvLiNGAMs defined by the DAGs shown in Figures 4.1 (a)-(f). We refer to these models as Models (a)-(f), respectively. All these models satisfy Assumptions A1-A3. 24 All disturbances are assumed to follow a log-normal distribution, ui ∼ Lognormal(-1.1, 0.8), shifted to have zero mean by subtracting its expected value. The coefficient λii from Li to Xi is fixed at 1. Other coefficients in Λ and A are drawn from Uniform(1.1, 1.5), while those in B are drawn from Uniform(0.5, 0.9). When all causal coefficients are positive, the faithfulness condition is satisfied. The higher-order cumulant of a log-normal distribution is non-zero. None of the models (a)-(f) is LiNGLaM or LiNGLaH. 
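The distributional settings above translate directly into code. The following is a minimal sketch of the disturbance and coefficient generators used in the simulations; only the distributions and ranges are taken from the text, everything else is an implementation choice.

    import numpy as np

    rng = np.random.default_rng(0)

    def disturbance(n, mu=-1.1, sigma=0.8):
        # Lognormal(-1.1, 0.8) shifted to zero mean, as in the settings above.
        u = rng.lognormal(mean=mu, sigma=sigma, size=n)
        return u - np.exp(mu + sigma**2 / 2)   # subtract E[u]

    # Coefficient draws: Lambda and A from U(1.1, 1.5), B from U(0.5, 0.9);
    # lambda_ii is fixed to 1.
    lam = rng.uniform(1.1, 1.5)
    b = rng.uniform(0.5, 0.9)

    u = disturbance(1_000_000)
    print(u.mean())        # close to 0
    print(np.mean(u**3))   # non-zero third cumulant, as required by A5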
The models (a) and (b) are generic canonical models, whereas the canonical models derived from Figures 4.1 (c)-(f) do not satisfy the genericity assumption of Schkoda et al. [18]. The sample sizes N are set to 1000, 2000, 4000, 8000, and 16000. The number of iterations is set to 100. We evaluate the performance of the proposed method and the other methods using the following metrics.

• Ncl, Nls, Nos, and Ncs: the counts of iterations in which the resulting clusters, the latent structures, the ancestral relationships among X, and both the latent structure and the ancestral relationships among X are correctly estimated, respectively.
• PREll, RECll, and F1ll: averages of the precision, recall, and F1-score of the estimated edges among latent variables, respectively, when the clusters are correctly estimated.
• PREoo, RECoo, and F1oo: averages of the precision, recall, and F1-score of the estimated causal ancestral relationships among observed variables, respectively, when the clusters are correctly estimated.

LiNGLaM and LiNGLaH assume that each cluster contains at least two observed variables. When a cluster includes only a single observed variable, these methods may fail to assign it to any cluster, resulting in it being left without an associated latent parent. Here, we treat such variables as individual clusters and assign each a latent parent.

4.2. Implementation
The Hilbert-Schmidt independence criterion (HSIC) [28] is employed for the independence tests in the proposed method. As HSIC becomes computationally expensive for large sample sizes, we randomly select 2,000 samples for HSIC when N ≥ 2000. The significance level of HSIC is set to αind = 0.05. When estimating the number of latent variables and the ancestral relationships among the observed variables, we apply Proposition 2.2.

[Figure 4.1: Six models for simulations. (a), (b): one latent over X1-X3; (c): two latents over X1-X3; (d): two latents over X1-X4; (e), (f): three latents over X1-X4.]

Following Schkoda et al. [18], the rank of A(k1,k2)_{(Xi→Xj)} is determined from its singular values. Let σr denote the r-th largest singular value of A(k1,k2)_{(Xi→Xj)} and let τs be a predefined threshold. If σr/σ1 ≤ τs, we set σr to zero. To ensure termination in the estimation of the number of confounders between two observed variables, we impose an upper bound on the number of latent variables, following Schkoda et al. [18]. In this experiment, we set the upper bound on the number of latent variables to two in both our proposed method and ReLVLiNGAM. When estimating latent sources, we use Corollaries 3.6 and 3.17. To check whether |Conf(Xi, Xj)| = 1 in the canonical model over Xi and Xj, one possible approach is to apply Proposition 2.2. Theorem A.7 in the Appendix shows that |Conf(Xi, Xj)| = 1 is equivalent to (c(6)_{i,i,i,j,j,j})² = c(6)_{i,i,i,i,j,j} c(6)_{i,i,j,j,j,j}. Based on this fact, one can alternatively check whether |Conf(Xi, Xj)| = 1 by using the criterion

\frac{\big| (c^{(6)}_{i,i,i,j,j,j})^2 - c^{(6)}_{i,i,i,i,j,j}\, c^{(6)}_{i,i,j,j,j,j} \big|}{\max\big( (c^{(6)}_{i,i,i,j,j,j})^2,\; |c^{(6)}_{i,i,i,i,j,j}\, c^{(6)}_{i,i,j,j,j,j}| \big)}

C.5. The proof of Lemma 3.14
Proof. Since the claim follows directly from the model in the case i > s and s = 1, we only discuss the remaining two cases. We first prove the case where s = i by induction on i. When i = 1, ẽ(X1, ẽ1) = X1 = ε1 + e1, where U[1] = e1. Assume that the inductive assumption holds up to i. Then, ẽ(Xh, ẽh) = εh + U[h], 1 ≤ h ≤ i. Since Xi+1 is expressed as

X_{i+1} = \varepsilon_{i+1} + e_{i+1} + \sum_{h=1}^{i} \alpha^{ll}_{i+1,h}\, \varepsilon_h,

we have ρ(Xi+1, ẽ(Xh, ẽh)) = α^{ll}_{i+1,h} for h = 1, . . . , i, according to Definition 3.13.
Hence, we have ̃e(Xi+1, ̃ei+1) = Xi+1 - i X h=1 ρ(Xi+1, ̃e(Xh, ̃eh)) ̃e(Xh, ̃eh) = εi+1 + ei+1 + i X h=1 αll i+1,hεh - i X h=1 αll i+1,h(εh + U[h]) = εi+1 + ei+1 - i X h=1 αll i+1,hU[h] = εi+1 + U[i+1], where U[i+1] = ei+1 -Pi h=1 αll i+1,hU[h]. Thus, the claim holds for all i by induction. Next, we discuss the case where i > s and s > 1. According to Definition 3.13, ̃e(Xi, ̃es) = Xi - s-1 X h=1 ρ(Xi, ̃e(Xh, ̃eh)) ̃e(Xh, ̃eh), ρ(Xi, ̃e(Xh, ̃eh)) = αll ih. Using the conclusion of the case where i = s, we obtain ̃e(Xi, ̃es) = εi + i-1 X h=1 αll ihεh + ei - s-1 X h=1 αll ih εh + U[h] = εi + i-1 X h=s αll ihεh + U[s-1] + ei. 53 C.6. The proof of Theorem 3.15 Proof. The proof of this theorem follows similarly to that of Theorem 3.11. C.7. The proof of Theorem 3.16 Proof. The proof of this theorem follows similarly to that of Theorem 3.12. C.8. The proof of Corollary 3.17 According to Theorem 3.16, sufficiency is immediate. We therefore prove only necessity by showing that if Li is not a latent source in G-({L1, . . . , Ls-1}), then neither case 1 nor case 2 holds. If Li is not a latent source, Xi ̸= ∅or |Xoc \ {X1, . . . , Xs-1, Xi}| ≥2, and therefore case 1 is not satisfied. We will show that one of the conditions (a), (b), and (c) is not satisfied. First, note that condition (a) does not hold whenever there exists Xj ∈Xoc \ {X1, . . . , Xs-1, Xi} with |Conf( ̃e(Xi, ̃es), Xj)| ̸= 1. Hence, assume that condition (a) holds. Assume that Ls is the latent source of G-({L1, . . . , Ls-1}), and that Xs is its observed child with the highest causal order. Let Lj be the latent parent of Xj ∈Xoc \ {X1, . . . , Xs-1, Xi}, respectively. Since the condition (a) holds, |Conf( ̃e(Xi, ̃es), Xs)| = |Conf( ̃e(Xi, ̃es), Xj)| = 1, where ̃es = ( ̃e(X1, ̃e1), . . . , ̃e(Xs-1, ̃es-1)). Then, Xs and ̃e(Xi, ̃es) are written as Xs = s-1 X h=1 αll shεh + εs + es, ̃e(Xi, ̃es) = εi + i-1 X h=s αll ihεh + U[s-1] + ei, according to Lemma 3.14. Hence, we have Conf( ̃e(Xi, ̃es), Xs) = {εs}. Assume that Li has a latent child Lj and that none of the descendants of Li are parents of Lj. Xj is expressed by Xj = j-1 X h=1 αll jhεh + εj + ej. 54 Both ̃e(Xi, ̃es) and Xj involve linear combinations of εs, . . . , εi. Since {εs, . . . , εi} are mutually independent and |Conf( ̃e(Xi, ̃es), Xj)| = 1, αll jh = αll jiαll ih according to Hoyer et al. [7], and then Xj can be rewritten as Xj =            s-1 X i=1 αll jhεh + αll ji εi + i-1 X h=s αll ihεh ! + εj + ej, j = i + 1, s-1 X i=1 αll jhεh + αll ji εi + i-1 X h=s αll ihεh ! + j-1 X h=i+1 αll jhεh + εj + ej, j > i + 1. Therefore, c(k) ( ̃e(Xi, ̃es)→Xj)(L(i,j)) = cum(k) εi + i-1 X h=s αll ihεh ! , c(k) ( ̃e(Xi, ̃es)→Xs)(L(i,s)) = cum(k)(αll isεs), according to Lemma A.8. Therefore, we conclude that c(k) ( ̃e(Xi, ̃es)→Xj)(L(i,j)) ̸= c(k) ( ̃e(Xi, ̃es)→Xs)(L(i,s)) generically, and condition (b) is not satisfied. Next, we assume that Li does not have latent children, so that Xi ̸= ∅. Assume that Xi′ ∈Xi and it can be expressed as Xi′ = i-1 X h=1 αll ihεh + εi + bi′iei + ei′. Then, Conf( ̃e(Xi, ̃es), Xi′) =    n εi + Pi-1 h=s αll ihεh o , bi′i = 0, n εi + Pi-1 h=s αll ihεh, ei o , bi′i ̸= 0. In either case, there is one latent L(i,i′) satisfies that c(k) ( ̃e(Xi, ̃es)→Xi′)(L(i,i′)) = cum(k) εi + i-1 X h=s αll ihεh ! , which generically implies c(k) ( ̃e(Xi, ̃es)→Xi′)(L(i,i′)) ̸= c(k) ( ̃e(Xi, ̃es)→Xs)(L(i,s)). Thus, the condition (c) is not satisfied. 55 D. Proofs of Theorems in Section 3.3 D.1. Theorem 3.19 Proof. From Lemma 3.14, ̃e(Xi-k, ̃ei-k) = εi-k + U[i-k]. 
(D.1) By definition, ̃ri,k-1 is written as ̃ri,k-1 = Xi - i-1 X h=i-(k-1) aihXh = i-1 X h=1 aihLh + εi + ei - i-1 X h=i-(k-1) aih(Lh + eh) = i-(k+1) X h=1 aihLh + ai,i-kεi-k + εi + ei - i-1 X h=i-(k-1) aiheh = V[i-(k+1)] + ai,i-kεi-k + εi + U[i]\[i-k], (D.2) where V[i-(k+1)] is a linear combination of {ε1, . . . , εi-(k+1)} and U[i]\[i-k] is a linear combination of {ei-(k-1), . . . , ei}. From (D.1) and (D.2), we can show that ̃ri,k-1 ⊥⊥ ̃e(Xi-k, ̃ei-k) ⇔ai,i-k = 0, and otherwise ρ(Xi, ̃e(Xi-1, ̃ei-1)) = a2 i,i-1 cum(4)(εi-1) ai,i-1 cum(4)(εi-1) = ai,i-1 generically holds. E. Reducing an LvLiNGAM In this paper, we have discussed the identifiability of LvLiNGAM under the assumption that each observed variable has exactly one latent parent. However, even when some observed variables do not have latent parents, by iteratively marginalizing out sink nodes and conditioning on source nodes to remove such variables one by one, the model can be progressively reduced to one in which each observed variable has a single latent parent. This can 56 be achieved by first estimating the causal structure involving the observed variables without latent parents. ParceLiNGAM [11] or RCD [12, 13] can identify the ancestral relationship between two observed variables if at least one of them does not have a latent parent, and remove the influence of the observed variable without a latent parent. Models 1-3 in Figure E.1 contain observed variables that do not have a latent parent. According to Definition 1.1, X1 and X2 in Models 1 and 3 belong to distinct clusters whose latent parents are L1 and L2, respectively, whereas in Model 2, X1 and X2 share the same latent parent L1. Model 3 contains a directed path between clusters, whereas Models 1 and 2 do not. We consider the model reduction procedure for Models 1-3 individually. Example E.1 (Model 1). By using ParceLiNGAM or RCD, we can identify X4 →X1, X4 →X2, X1 →X5, and X3 →X5. Since X5 is a sink node, the induced subgraph obtained by removing X5 represents the marginal model over the remaining variables. Since X4 is a source node, if we replace X1 and X2 with the residuals r(4) 1 and r(4) 2 obtained by regressing them on X4, then the induced subgraph obtained by removing X4 represents the conditional distribution given X4. As a result, Model 1 is reduced to the model shown in Figure E.1 (d). This model satisfies Assumptions A1-A3. Example E.2 (Model 2). X1 and X2 are confounded by L1, and they are mediated through X3. By using ParceLiNGAM or RCD, the ancestral relationship among X1, X2, X3 can be identified. Let r(1) 3 be the residual obtained by regressing X3 on X1. Let ̃r(3) 2 be the residual obtained by regressing X2 on r(1) 3 . According to [11] and [12, 13], the model for L1, X1, and r(3) 2 corresponds to the one shown in Figure E.1 (e). This model satisfies Assumptions A1-A3. Example E.3 (Model 3). In Model 3, X1 ∈Anc(X3), and they are mediated by X5. By using ParceLiNGAM or RCD, the ancestral relationship among X1, X3, X5 can be identified. Let r(1) 5 be the residual obtained by regressing X5 on X1. Let ̃r(5) 3 be the residual obtained by regressing X3 on r(1) 5 . According to [11] and [12, 13], by reasoning in the same way as for Models 1 and 2, Model 3 is reduced to the model shown in Figure E.1 (f). This model doesn't satisfy Assumptions A1-A3. 
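The reduction in Example E.1 is easy to sketch numerically: drop the sink X5 and replace X1, X2 with residuals from regressing on the source X4. The graph and coefficients below are illustrative choices shaped after Model 1 of Figure E.1.

    import numpy as np

    rng = np.random.default_rng(4)
    n = 200_000

    # A Model-1-like graph: X4 has no latent parent and feeds X1 and X2;
    # X5 is a sink fed by X1 and X3. Coefficients illustrative.
    L1 = rng.exponential(1.0, n) - 1.0
    L2 = rng.exponential(1.0, n) - 1.0
    e = rng.exponential(1.0, (5, n)) - 1.0
    X4 = e[3]
    X1 = L1 + 0.7 * X4 + e[0]
    X2 = L2 + 0.6 * X4 + e[1]
    X3 = L2 + e[2]
    X5 = 0.5 * X1 + 0.4 * X3 + e[4]

    # Step 1: drop the sink X5 (marginalization leaves the rest untouched).
    # Step 2: condition on the source X4 via zero-mean OLS residuals.
    def residual(y, x):
        return y - (np.mean(x * y) / np.mean(x * x)) * x

    r1 = residual(X1, X4)   # r1^(4): now a pure child of L1
    r2 = residual(X2, X4)   # r2^(4): now a pure child of L2

    # The reduced model {r1, r2, X3} satisfies A1-A3 (Figure E.1 (d)).
    print(np.corrcoef(r1, X4)[0, 1], np.corrcoef(r2, X4)[0, 1])  # both close to 0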
Using [11] and [12, 13], ancestral relations between pairs of observed variables that include at least one variable without a latent parent can be identified. The graph obtained by the model reduction procedure is constructed by iteratively applying the following steps: (i) iteratively remove observed variables without latent parents that appear as source or sink nodes, updating the induced subgraph at each step so that any new source or sink nodes are subsequently removed; (ii) when an observed variable without a latent parent serves as a mediator, remove the variable and connect its parent and child with a directed edge.

[Figure E.1: Three models that can be reduced. (a) Model 1; (b) Model 2; (c) Model 3; (d) reduced model of Model 1; (e) reduced model of Model 2; (f) reduced model of Model 3.]

If no directed path exists between any two observed variables with distinct latent parents, the model obtained through the model reduction procedure satisfies Assumptions A1-A3. Conversely, if there exist two observed variables with distinct latent parents that are connected by a directed path, the model obtained through the model reduction procedure does not satisfy Assumption A3. In summary, Assumption A1 can be generalized to

A1′. Each observed variable has at most one latent parent.

Proposition E.4. Given observed data generated from an LvLiNGAM MG that satisfies assumptions A1′ and A2-A5, the latent causal structure among the latent variables, the directed edges from the latent variables to the observed variables, and the ancestral relationships among the observed variables can be identified by using the proposed method in combination with ParceLiNGAM and RCD.
SciPost Physics Codebases Submission

ParaToric 1.0-beta: Continuous-time quantum Monte Carlo for the toric code in a parallel field

Simon M. Linsel⋆ and Lode Pollet†

Department of Physics and Arnold Sommerfeld Center for Theoretical Physics (ASC), Ludwig-Maximilians-Universität München, Theresienstr. 37, München D-80333, Germany
Munich Center for Quantum Science and Technology (MCQST), Schellingstr. 4, D-80799 München, Germany

⋆ simon.linsel@lmu.de, † lode.pollet@lmu.de

Abstract
We introduce ParaToric, a C++ package for simulating the toric code in a parallel field (i.e., X- and Z-fields) at finite temperature. We implement and extend the continuous-time quantum Monte Carlo algorithm of Wu, Deng, and Prokof'ev on the square, triangular, honeycomb, and cubic lattices with open and periodic boundaries, respectively. The package is expandable to arbitrary lattice geometries and custom observables diagonal in either the X- or Z-basis. ParaToric also supports snapshot extraction in both bases, making it ideal for generating training/benchmarking data for other methods, such as lattice gauge theories, cold atom or other quantum simulators, quantum spin liquids, artificial intelligence, and quantum error correction. The software provides bindings to C/C++ and Python, and is thus almost universally integrable into other software projects.

Copyright attribution to authors. This work is a submission to SciPost Physics Codebases. License information to appear upon publication. Publication information to appear upon publication.

Contents
1 Introduction
2 The toric code in a parallel field
  2.1 Hamiltonian
  2.2 Lattice geometries
  2.3 Observables
3 Installation & interfaces
  3.1 C++ interface
    3.1.1 Build & Installation
    3.1.2 Public class ExtendedToricCode
    3.1.3 Configuration type
    3.1.4 Return type
    3.1.5 C++ usage examples
  3.2 C++ command-line interface
    3.2.1 Build & Installation
    3.2.2 etc_sample
    3.2.3 etc_hysteresis
    3.2.4 etc_thermalization
    3.2.5 HDF5 structure
  3.3 C interface
    3.3.1 Build & Installation
    3.3.2 Status & error handling
    3.3.3 Opaque handle
    3.3.4 Configuration type
    3.3.5 Return type
    3.3.6 Procedures (mirror the C++ API)
    3.3.7 C usage example
  3.4 Python bindings
    3.4.1 Build & Installation
    3.4.2 Module layout
    3.4.3 Usage example
  3.5 Python command-line interface
    3.5.1 Build & Installation
    3.5.2 T-sweep
    3.5.3 h-sweep
    3.5.4 λ-sweep
    3.5.5 ◦-sweep
    3.5.6 Hysteresis-sweep
    3.5.7 Thermalization
4 Using ParaToric
  4.1 Monte Carlo Updates
  4.2 Monte Carlo Diagnostics
    4.2.1 Thermalization mode
    4.2.2 Integrated autocorrelation time
    4.2.3 Error bars
  4.3 Tips & tricks
    4.3.1 Probing ground state physics
    4.3.2 Probing first-order transitions
    4.3.3 Choosing the basis
    4.3.4 Choosing N_thermalization
    4.3.5 Choosing N_samples
    4.3.6 Choosing N_between_samples
    4.3.7 Choosing N_resamples
    4.3.8 Extracting snapshots
    4.3.9 Adding new observables/lattices/updates
  4.4 Benchmarks
    4.4.1 Thermalization
    4.4.2 Integrated autocorrelation time
    4.4.3 Run-time
    4.4.4 Topological phase transition
5 Conclusion & Outlook
References

1 Introduction
The toric code is one of the most fundamental and most-studied models in modern condensed matter physics.
It was first written down by Kitaev [1] and is the simplest example of a model hosting a topological phase (a gapped Z2 quantum spin liquid) and anyonic excitations. The toric code is also the foundational model for error-correcting codes [2, 3] and has deep connections to the Ising gauge theory [4].

The toric code can be extended with fields which, when strong enough, destroy the topological order. This model is sign-problem-free, thus making quantum Monte Carlo the method of choice. Wu, Deng, and Prokof'ev developed a continuous-time quantum Monte Carlo algorithm [5]. ParaToric implements and extends this algorithm with new updates which enable ergodicity at large temperatures and at zero off-diagonal field, thus significantly improving the applicability of the algorithm.

ParaToric implements a wide range of lattices, boundary conditions, and observables. It is also possible to extend ParaToric with new interactions, observables, and lattices. We provide documented interfaces in C, C++, and Python as well as command-line interfaces, making the integration of ParaToric into other projects and programming languages straightforward. ParaToric saves simulation results to HDF5 files and snapshots to GraphML (XML-based) files, with a focus on interoperability with other packages. ParaToric comes with an MIT license.

2 The toric code in a parallel field
2.1 Hamiltonian
ParaToric implements and extends the continuous-time quantum Monte Carlo (QMC) algorithm by Wu, Deng, and Prokof'ev [5] to simulate the toric code in a parallel field (also called the perturbed toric code or extended toric code),

\hat H = -\mu \sum_v \hat A_v - J \sum_p \hat B_p - h \sum_l \hat\sigma^x_l - \lambda \sum_l \hat\sigma^z_l, \qquad (1)

where J, λ > 0 in the σ̂x-basis and µ, h > 0 in the σ̂z-basis (otherwise the model has a sign problem). σ̂x_l and σ̂z_l are Pauli matrices defined on the links of the underlying lattice. The star term Âv contains all links adjacent to lattice site v; the plaquette term B̂p contains all links that belong to the same elementary plaquette p of the underlying lattice. The temperature T = 1/β is finite.

For readers interested in extending the code, we note that it is relatively straightforward to add interactions that are diagonal in the chosen basis, such as (long-range) Ising interactions. Off-diagonal interactions require a more careful review and extension of the Monte Carlo updates to ensure ergodicity. However, diagonal interactions can also lead to sampling problems, especially when they introduce frustration.

2.2 Lattice geometries
We implement the square, honeycomb, triangular, and cubic lattices, see Fig. 1. On the cubic lattice, the plaquettes contain the four links of cube faces, not the twelve links of the cube (that model has a different m-anyon structure).

[Figure 1: Implemented lattices. We implement the extended toric code (1) on the square (a), honeycomb (b), triangular (c), and cubic lattices (d). For each lattice, we show the star (Âv) and plaquette (B̂p) terms. The cubic lattice importantly features star interactions of six links and plaquette interactions of four links on the faces of cubes.]

We implement open and periodic boundaries, respectively. New lattices can be added in src/lattice/lattice.cpp.
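To make the star and plaquette terms of Eq. (1) concrete, the following Python sketch enumerates their link index sets on an L x L periodic square lattice. This is a hypothetical indexing for illustration only, not ParaToric's internal data layout.

    import numpy as np

    # Link (x, y, d) with d = 0 (horizontal) or d = 1 (vertical);
    # flat index = 2 * (y * L + x) + d.
    L = 4

    def idx(x, y, d):
        return 2 * ((y % L) * L + (x % L)) + d

    def star(x, y):
        # The four links touching vertex (x, y): the A_v term of Eq. (1).
        return [idx(x, y, 0), idx(x - 1, y, 0), idx(x, y, 1), idx(x, y - 1, 1)]

    def plaquette(x, y):
        # The four links bounding the square with lower-left corner (x, y): B_p.
        return [idx(x, y, 0), idx(x, y, 1), idx(x, y + 1, 0), idx(x + 1, y, 1)]

    # With classical link variables s[l] = +-1 in the sigma^z eigenbasis, the
    # diagonal part of Eq. (1) reads -J * sum_p prod(s[plaquette]) - lambda * sum_l s[l].
    s = np.random.default_rng(0).choice([-1, 1], size=2 * L * L)
    E_J = -sum(np.prod(s[plaquette(x, y)]) for x in range(L) for y in range(L))
    print(E_J)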
For each ob- servable ˆO, we calculate the expectation value 〈ˆO〉and the Binder ratio UO = 〈ˆO4〉 〈ˆO2〉2 with error bars obtained from bootstrapping (see below), respectively. anyon_count The number of e-anyons ( ˆ σx-basis) or m-anyons ( ˆ σz-basis) in the system. anyon_density ˆ σx-basis: The number of e-anyons divided by the number of lattice sites. ˆ σz-basis: The number of m-anyons divided by the number of plaquettes. delta The difference of the star and plaquette expectation values: ∆= 〈ˆAv〉−〈ˆBp〉. energy The total energy E = 〈ˆH〉. energy_h The electric field term Eh = 〈−h P l ˆσx l 〉. energy_lmbda The gauge field term Eλ = 〈−λ P l ˆσz l 〉. We write λ as lmbda because some programming languages feature lambda as a keyword. 4 SciPost Physics Codebases Submission energy_J The plaquette term EJ = 〈−J P p ˆBp〉. energy_mu The star term Eµ = 〈−µ P v ˆAv〉. fredenhagen_marcu The equal-time Fredenhagen-Marcu loop operator [6–8]: Ox/z FM = lim L→∞ 〈 Q l∈Cx/z 1/2 ˆσx/z l 〉 Ç |〈 Q l∈Cx/z ˆσx/z l 〉| , (2) C x/z 1/2 is half, C x/z is a full Wilson loop in the ˆ σx-basis (’t Hooft loop in the ˆ σz-basis). The loop is automatically constructed for all supported lattices, and the perimeter scales with O(L), where L is the linear system size. When probing perimeter/area laws, the user should change L. We currently do not support off-diagonal loop operators, e.g., measuring products of ˆσx-operators in the ˆσz-basis. largest_cluster The largest connected cluster of neighboring bonds with ˆσx = −1 (ˆσz = −1) in the ˆσx-basis (ˆσz-basis). This observable is used to calculate the percolation strength, see [9]. percolation_probability Measures the bond percolation probability, i.e. if we can wind around the system while only traversing bonds with ˆσx = −1 (ˆσz = −1) in the ˆσx-basis (ˆσz-basis). Formally, it is the expectation value 〈ˆΠx/z〉of the projector ˆΠx/z = X W(j)̸=0 |{ˆσx/z}j〉〈{ˆσx/z}j|, (3) over all possible configurations {ˆσx/z}j with non-zero winding number W(j) of connected link clusters of neighboring ˆσx/z = −1. These clusters are called percolating clusters. For details, see [9–11]. percolation_strength If a snapshot does not have a percolating cluster, the percolation strength is 0. If a snapshot has a percolating cluster, the percolation strength is defined as the result of largest_cluster divided by the total number of links in the system. For details, see [9,11]. plaquette_percolation_probability Similar to the percolation probability of bonds. Two plaquettes are in the same cluster if they share a link l with ˆ τx l = −1. For details, see [11]. plaquette_z The plaquette expectation value 〈ˆBp〉. sigma_x The electric field expectation value 〈ˆσx〉. sigma_x_susceptibility The static susceptibility χ x = 1 N Z β 0 〈ˆσx(0) ˆσx(τ)〉c dτ, (4) where N is the total number of links (qubits) and the integral is over the imaginary time τ1. Importantly, χ x can be calculated both in the ˆσx- and the ˆσz-basis. 1The dynamical (fidelity) susceptibility is a trivial extension and contains an extra τ dependency in the integral. 5 SciPost Physics Codebases Submission sigma_z The gauge field expectation value 〈ˆσz〉. sigma_z_susceptibility The static susceptibility χz = 1 N Z β 0 〈ˆσz(0) ˆσz(τ)〉c dτ, (5) where N is the total number of links (qubits) and the integral is over the imaginary time τ. Importantly, χz can be calculated both in the ˆσx- and the ˆσz-basis. staggered_imaginary_times Order parameter from [5]. It is defined as Ox/z SIT = 1 β  (τk 1 −0) −(τk 2 −τk 1) + ... 
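The diagonal loop observables can also be estimated offline from extracted snapshots. The following Python sketch evaluates the Fredenhagen-Marcu ratio of Eq. (2) from an array of link eigenvalues; the (n_samples, n_links) data layout and the loop index lists are hypothetical illustrations (ParaToric writes snapshots to GraphML files, from which such an array would first have to be assembled), and the limit L to infinity of Eq. (2) is of course not taken here.

    import numpy as np

    def fredenhagen_marcu(snapshots, half_loop, full_loop):
        # snapshots: (n_samples, n_links) array of +-1 link eigenvalues
        # half_loop, full_loop: link-index lists for C_{1/2} and C in Eq. (2)
        half = np.prod(snapshots[:, half_loop], axis=1).mean()
        full = np.prod(snapshots[:, full_loop], axis=1).mean()
        return half / np.sqrt(abs(full))

    # Toy usage with random configurations and made-up loop indices.
    rng = np.random.default_rng(0)
    snaps = rng.choice([-1, 1], size=(1000, 32))
    print(fredenhagen_marcu(snaps, half_loop=[0, 1, 2],
                            full_loop=[0, 1, 2, 3, 4, 5]))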
+ (−1)N(k)−1 (τk N(k) −τk N(k)−1) + (−1)N(k)(β −τk N(k))  , (6) where τk n is the imaginary time of the n-th tuple spin flip of type k. k is a plaquette p (star s) of links in the ˆσx-basis (ˆσz-basis). This order parameter can neither be evaluated from snapshots, nor from any other method that does not have access to imaginary time. star_x The star expectation value 〈ˆAv〉. string_number The total number of links with ˆσx = −1 in the ˆσx-basis (ˆσz = −1 in the ˆσz-basis). 3 Installation & interfaces There are five ways to use paratoric, directly from within code (C, C++, Python) or via the command-line (C++, Python). All interfaces require compiling C++ code. We tested the compilation with GCC 15 and Clang 20. All interfaces implement three functionalities. Thermalization simulations are used to benchmark the thermalization process of a Markov chain and are primarily a diagnostic tool. Regular sampling routines are used for generating snapshots and measuring observables, e.g., in the context of continuous phase transitions. Hysteresis routines are a variant of the regu- lar sampling routines where not one but an array of Hamiltonian parameters is provided and only one Markov chain is used for all parameters. The order of the Hamiltonian parameters in the input array matters: The last state of the previous parameter is used as an initial state for the thermalization phase of the next parameters. This simulation type should primarily be used when mapping out hysteresis curves in the vicinity of first-order phase transitions, hence the name. Since the hysteresis simulation returns the values of not one but many parameter sets, the output types are generally different from the regular sampling. It is also much slower than regular sampling, because the simulation for different parameters can in general not be parallelized. 3.1 C++ interface The C++ interface enables users to use a ParaToric public header from within another C++ project. 6 SciPost Physics Codebases Submission 3.1.1 Build & Installation The core requires C++23, CMake ≥3.23, and Boost ≥1.87 (older Boost versions may work, but were not tested). To compile it, run: cmake -S . -B build -DCMAKE_BUILD_TYPE=Release \ -DPARATORIC_ENABLE_NATIVE_OPT=ON -DPARATORIC_LINK_MPI=OFF \ -DPARATORIC_BUILD_TESTS=ON cmake --build build -jN ctest --test-dir build -jN --output-on-failure cmake --install build Replace N with the number of cores to use, e.g. -j4 for 4 cores. • -DCMAKE_BUILD_TYPE=Release. Only set to Debug if you’re a developer. • -DCMAKE_INSTALL_PREFIX. By default, executables install to ${CMAKE_SOURCE_DI R}/${CMAKE_INSTALL_BINDIR}/, headers to ${CMAKE_INSTALL_INCLUDEDIR}/p aratoric, and static libraries to ${CMAKE_SOURCE_DIR}/${CMAKE_INSTALL_LIBD IR}/. The Python scripts expect ${CMAKE_SOURCE_DIR}/bin/; this directory always contains the paratoric executable. To install into a custom directory, pass it via -DCM AKE_INSTALL_PREFIX, e.g. -DCMAKE_INSTALL_PREFIX=/your/custom/directo ry/. • -DPARATORIC_EXPORT_COMPILE_COMMANDS=ON. Export compile_commands.json for tooling. • -DPARATORIC_LINK_MPI=OFF. Link the core to MPI, required on some clusters. The core itself does not need MPI. • -DPARATORIC_ENABLE_NATIVE_OPT=OFF. Turn on -march=native on GCC and Clang. • -DPARATORIC_ENABLE_AVX2=OFF. Enable AVX2 (Haswell New Instructions). Requires a CPU which supports AVX2. • -DPARATORIC_BUILD_TESTS=PROJECT_IS_TOP_LEVEL. Compile the tests (recom- mended). 
CMake usage (installed package)

cmake_minimum_required(VERSION 3.23)
project(my_qmc_app CXX)
find_package(paratoric CONFIG REQUIRED) # provides paratoric::core
add_executable(myapp main.cpp)
target_link_libraries(myapp PRIVATE paratoric::core)

CMake usage (as subdirectory)

If the core lives in deps/paratoric, add it and link to the same target:

add_subdirectory(deps/paratoric)
add_executable(myapp main.cpp)
target_link_libraries(myapp PRIVATE paratoric::core)

3.1.2 Public class ExtendedToricCode

The interface class ExtendedToricCode lives in the public header #include <paratoric/mcmc/extended_toric_code.hpp>. All symbols are in the paratoric namespace. All methods are static, take a single Config object, and return a Result object. The required fields in config are documented for each method within the docstrings.

Result ExtendedToricCode::get_thermalization(Config config) Run thermalization only. Required fields: lat_spec.{basis,lattice_type,system_size,beta,boundaries,default_spin}, param_spec.{mu,h,J,lmbda,h_therm,lmbda_therm}, sim_spec.{N_thermalization,N_resamples,custom_therm,observables,seed}, out_spec.{path_out,save_snapshots}.

Result ExtendedToricCode::get_sample(Config config) Run a production measurement pass. Returns the observables selected in config. Required fields: lat_spec.{basis,lattice_type,system_size,beta,boundaries,default_spin}, param_spec.{mu,h,J,lmbda}, sim_spec.{N_samples,N_thermalization,N_between_samples,N_resamples,observables,seed}, out_spec.{path_out,save_snapshots}.

Result ExtendedToricCode::get_hysteresis(Config config) Perform a hysteresis sweep, where the last state of the previous parameter is used as the initial state of the following parameter in h_hys & lmbda_hys. Required fields: lat_spec.{basis,lattice_type,system_size,beta,boundaries,default_spin}, param_spec.{mu,J,h_hys,lmbda_hys}, sim_spec.{N_samples,N_thermalization,N_between_samples,N_resamples,observables,seed}, out_spec.{paths_out,save_snapshots}.

3.1.3 Configuration type

The struct Config (declared in <paratoric/types/types.hpp>) contains multiple nested specifications.

Top-level configuration: Config

Field | Type | Purpose
sim_spec | SimSpec | Simulation / MC controls (backend-consumed).
param_spec | ParamSpec | Model couplings / parameters (backend-consumed).
lat_spec | LatSpec | Lattice geometry and basis.
out_spec | OutSpec | Output folders and snapshot toggles.

Simulation specification (config.sim_spec)

Field | Type | Meaning / Defaults
N_samples | int | Number of recorded snapshots. Default 1000.
N_thermalization | int | Number of warmup steps before sampling. Typically O(L^d), where L is the system size and d is the dimensionality. Default 10000.
N_between_samples | int | Steps between consecutive snapshots. A higher value decreases autocorrelation and improves error bars. Typically O(L^d), where L is the system size and d is the dimensionality. Default 1000.
N_resamples | int | Bootstrap resamples for errors. Default 1000.
custom_therm | bool | Use custom thermalization schedule. Default false.
seed | int | PRNG seed. 0 means "random seed." Default 0.
observables | vector<string> | Names of observables to record each snapshot. For options, see Sec. 2.3.

Parameter specification (config.param_spec)

Field | Type | Meaning / Defaults
mu | double | Star term coefficient. Default 1.0.
h | double | Electric field term. Default 0.0.
J | double | Plaquette term. Default 1.0.
lmbda | double | Gauge-field term. Default 0.0.
h_therm | double | Thermalization value for h when using custom schedules. Default NaN (unused).
lmbda_therm | double | Thermalization value for lmbda when using custom schedules. Default NaN (unused).
h_hys | vector<double> | Sweep values of h for hysteresis runs. Default empty. Length must match lmbda_hys.
lmbda_hys | vector<double> | Sweep values of lmbda for hysteresis runs. Default empty. Length must match h_hys.

Lattice specification (config.lat_spec)

Field | Type | Meaning / Valid values
basis | char | Spin eigenbasis for the simulation. Must be 'x' or 'z'.
lattice_type | string | The lattice ("square", "triangular", "honeycomb" or "cubic").
system_size | int | Linear system size (per dimension).
beta | double | Inverse temperature β > 0.
boundaries | string | Boundary condition: "periodic" or "open".
default_spin | int | Initial link spin, must be +1 or −1.

Output specification (config.out_spec)

Field | Type | Meaning
path_out | string | Primary output folder name.
paths_out | vector<string> | Hysteresis subfolder names. Length must match h_hys.
save_snapshots | bool | Save snapshots toggle. Default false.

3.1.4 Return type

Field | C++ Type | Meaning
series | vector<vector<variant<complex<double>,double>>> | Time series of all requested observables; thermalization is excluded (except for thermalization simulations). Outer index = observable, inner index = time point.
acc_ratio | vector<double> | Time series of Monte Carlo acceptance ratios.
mean | vector<double> | Bootstrap observable means.
mean_std | vector<double> | Bootstrap standard errors of the mean.
binder | vector<double> | Bootstrap Binder ratios.
binder_std | vector<double> | Bootstrap standard errors of the Binder ratios.
tau_int | vector<double> | Estimated integrated autocorrelation times.
series_hys | vector<vector<vector<variant<complex<double>,double>>>> | Hysteresis time series of all requested observables; thermalization is excluded (except for thermalization simulations). Outer vector = hysteresis parameters (order as in h_hys,lmbda_hys), middle vector = observables (order as in observables), inner vector = time series.
mean_hys | vector<vector<double>> | Hysteresis bootstrap observable means. Outer vector = hysteresis parameters (order as in h_hys,lmbda_hys), inner vector = observables (order as in observables).
mean_std_hys | vector<vector<double>> | Hysteresis bootstrap standard errors of the mean. Indices as above.
binder_hys | vector<vector<double>> | Hysteresis bootstrap Binder ratios. Indices as above.
binder_std_hys | vector<vector<double>> | Hysteresis bootstrap standard errors of the Binder ratios. Indices as above.
tau_int_hys | vector<vector<double>> | Hysteresis estimated integrated autocorrelation times. Indices as above.

3.1.5 C++ usage examples

Listing 1: C++ API - Minimal call

// C++23
#include <iostream>
#include <limits>
#include <print>
#include <vector>
#include <string>

#include <paratoric/mcmc/extended_toric_code.hpp>
#include <paratoric/types/types.hpp>

int main() {
    using namespace paratoric;

    Config cfg{};

    // ---- lattice sub-config (required) ----
    cfg.lat_spec.basis = 'z';              // or 'x'
    cfg.lat_spec.lattice_type = "square";  // or "cubic", "honeycomb", ...
    cfg.lat_spec.system_size = 16;
    cfg.lat_spec.beta = 8.0;
    cfg.lat_spec.boundaries = "periodic"; // or "open"
    cfg.lat_spec.default_spin = 1;

    // ---- Hamiltonian parameters ----
    cfg.param_spec.mu = 1.0;     // star term
    cfg.param_spec.J = 1.0;      // plaquette term
    cfg.param_spec.h = 0.20;     // electric field term
    cfg.param_spec.lmbda = 0.00; // gauge-field term

    // Optional thermalization schedule values (used if custom_therm = true)
    cfg.param_spec.h_therm = std::numeric_limits<double>::quiet_NaN();
    cfg.param_spec.lmbda_therm = std::numeric_limits<double>::quiet_NaN();

    // (Optional) Hysteresis sweep grids - only read by get_hysteresis(...)
    cfg.param_spec.h_hys = {};     // e.g. {0.0, 0.1, 0.2, 0.3, 0.2, 0.1, 0.0}
    cfg.param_spec.lmbda_hys = {}; // e.g. {0.0, 0.1, 0.2, 0.3, 0.2, 0.1, 0.0}

    // ---- Simulation (MC) controls ----
    cfg.sim_spec.N_samples = 0;           // 0 => thermalization-only
    cfg.sim_spec.N_thermalization = 5000; // warmup steps
    cfg.sim_spec.N_between_samples = 10;  // thinning between snapshots
    cfg.sim_spec.N_resamples = 1000;      // bootstrap
    cfg.sim_spec.custom_therm = false;    // set true to use *_therm values
    cfg.sim_spec.seed = 12345;            // 0 => random seed

    // Observables to record each snapshot (backend-recognized names)
    cfg.sim_spec.observables = {
        "energy",            // total energy
        "plaquette_z",       // plaquette energy
        "anyon_count",       // number of anyons (x-basis: e-anyons, z-basis: m-anyons)
        "fredenhagen_marcu"  // example: Wilson/'t Hooft loop proxy
    };

    // ---- Output / I/O policy ----
    cfg.out_spec.path_out = "runs/sample"; // single-run output dir
    cfg.out_spec.paths_out = {};           // filled only for hysteresis
    cfg.out_spec.save_snapshots = false;   // set true to dump every snapshot
    cfg.out_spec.full_time_series = true;  // save full time series (FTS)

    // 1) Check thermalization
    Result warmup = ExtendedToricCode::get_thermalization(cfg);
    std::print("Thermalization series: {} observables\n", warmup.series.size());

    // 2) Production sample (set N_samples > 0 and call get_sample)
    cfg.sim_spec.N_samples = 2000;
    Result out = ExtendedToricCode::get_sample(cfg);
    std::print("Production autocorrelations: {}\n", out.tau_int);

    return 0;
}

3.2 C++ command-line interface

ParaToric ships a C++ command-line interface ${CMAKE_INSTALL_PREFIX}/${CMAKE_INSTALL_BINDIR}/paratoric that orchestrates C++ backends, runs sweeps, and writes HDF5 (observables) and XML (snapshots) outputs.

3.2.1 Build & Installation

The command-line interface requires HDF5 ≥ 1.14.3 (older HDF5 versions may work, but were not tested). The core requires C++23, CMake ≥ 3.23, and Boost ≥ 1.87 (older Boost versions may work, but were not tested). To compile it, run:

cmake -S . -B build -DCMAKE_BUILD_TYPE=Release \
    -DPARATORIC_ENABLE_NATIVE_OPT=ON -DPARATORIC_LINK_MPI=OFF \
    -DPARATORIC_BUILD_TESTS=ON -DPARATORIC_BUILD_CLI=ON
cmake --build build -jN
ctest --test-dir build -jN --output-on-failure
cmake --install build

Replace N with the number of cores to use, e.g. -j4 for 4 cores.

• -DCMAKE_BUILD_TYPE=Release. Only set to Debug if you're a developer.
• -DCMAKE_INSTALL_PREFIX. By default, executables install to ${CMAKE_SOURCE_DIR}/${CMAKE_INSTALL_BINDIR}/, headers to ${CMAKE_INSTALL_INCLUDEDIR}/paratoric, and static libraries to ${CMAKE_SOURCE_DIR}/${CMAKE_INSTALL_LIBDIR}/. The Python scripts expect ${CMAKE_SOURCE_DIR}/bin/; this directory always contains the paratoric executable. To install into a custom directory, pass it via -DCMAKE_INSTALL_PREFIX, e.g. -DCMAKE_INSTALL_PREFIX=/your/custom/directory/.
• -DPARATORIC_EXPORT_COMPILE_COMMANDS=ON. Export compile_commands.json for tooling.
• -DPARATORIC_LINK_MPI=OFF. Link the core to MPI, required on some clusters. The core itself does not need MPI.
• -DPARATORIC_ENABLE_NATIVE_OPT=OFF. Turn on -march=native on GCC and Clang.
• -DPARATORIC_ENABLE_AVX2=OFF. Enable AVX2 (Haswell New Instructions). Requires a CPU which supports AVX2.
• -DPARATORIC_BUILD_TESTS=PROJECT_IS_TOP_LEVEL. Compile the tests (recommended).
• -DPARATORIC_BUILD_CLI=PROJECT_IS_TOP_LEVEL. Required for both the C++ and Python command-line interface.

Global options

Long flag | Short | Type | Description
--simulation | -sim | string | Simulation mode: etc_sample, etc_hysteresis, etc_thermalization.
--N_samples | -Ns | int | Number of recorded samples.
--N_thermalization | -Nth | int | Thermalization (warmup) steps.
--N_between_samples | -Nbs | int | Steps between samples (thinning).
--beta | -bet | double | Inverse temperature β = 1/T.
--mu_constant | -muc | double | Star-term coupling µ.
--J_constant | -Jc | double | Plaquette coupling J.
--h_constant | -hc | double | Field h.
--lmbda_constant | -lmbdac | double | Field λ.
--h_constant_therm | -hct | double | Thermalization value for h (used if custom therm).
--lmbda_constant_therm | -lmbdact | double | Thermalization value for λ.
--h_hysteresis | -hhys | list<double> | Hysteresis schedule for h (space-separated). Length must match lmbdahys.
--lmbda_hysteresis | -lmbdahys | list<double> | Hysteresis schedule for λ. Length must match hhys.
--N_resamples | -Nr | int | Bootstrap resamples (error bars).
--custom_therm | -cth | bool | Use thermalization values (0/1).
--observables | -obs | list<string> | Measured observables (space-separated).
--seed | -s | int | PRNG seed; 0 means random seed.
--basis | -bas | char | Spin basis: 'x' or 'z'.
--lattice_type | -lat | string | Lattice type (e.g. square, cubic, ...).
--system_size | -L | int | Linear lattice size (per dimension).
--boundaries | -bound | string | periodic or open.
--default_spin | -dsp | int | Initial link spin (+1 or -1).
--output_directory | -outdir | path | Output directory path.
--folder_name | -fn | string | Subfolder (of output directory) name for single run.
--folder_names | -fns | list<string> | Subfolders (of output directory) for hysteresis steps. Length must match lmbdahys.
--snapshots | -snap | bool | Save snapshots into specified subfolders of output directory.
--full_time_series | -fts | bool | Save full time series toggle.
--process_index | -procid | int | Process identifier (logging/debug).

3.2.2 etc_sample

Runs a production measurement pass with the supplied configuration.

Listing 2: Example usage

./paratoric -sim etc_sample -Ns 2000 -Nth 5000 -Nbs 10 -Nr 1000 -bet 16.0 \
    -muc 1 -Jc 1 -hc 0.2 -lmbdac 0.0 -obs energy plaquette_z anyon_count \
    -bas z -lat square -L 16 -bound periodic -dsp 1 -outdir ./runs/sample \
    -snap=0 -fts=1

3.2.3 etc_hysteresis

Runs a parameter sweep where the last state of step i initializes step i+1. Provide --h_hysteresis and --lmbda_hysteresis as space-separated lists, and --folder_names for per-step outputs.

Listing 3: Example usage

./paratoric -sim etc_hysteresis -Ns 1000 -Nth 2000 -Nbs 50 -Nr 500 -bet 12.0 \
    -muc 1 -Jc 1 -lmbdahys 0.2 0.2 0.2 0.2 0.2 0.2 0.2 \
    -hhys 0.0 0.1 0.2 0.3 0.2 0.1 0.0 -obs energy fredenhagen_marcu \
    -bas x -lat square -L 12 -bound periodic -dsp 1 -outdir ./runs/hys \
    -fns step0 step1 step2 step3 step4 step5 step6

3.2.4 etc_thermalization

Performs thermalization only (no production sampling).
Listing 4: Example usage

./paratoric -sim etc_thermalization -Ns 0 -Nth 5000 -Nbs 10 -Nr 500 -bet 10.0 \
    -muc 1 -Jc 1 -hc 0.3 -lmbdac 0.1 -hct 0.4 -lmbdact 0.2 -cth 1 \
    -obs energy anyon_density -bas z -lat square -L 10 -bound open -dsp 1 \
    -outdir ./runs/therm -snap=1

3.2.5 HDF5 structure

The output HDF5 file has the structure simulation/results/acc_ratio for an array of the acceptance weights (only for thermalization), simulation/results/observable_name/series for the time series (if it was enabled), and simulation/results/observable_name/{mean,mean_error,binder,binder_error,autocorrelation_time}. For regular sampling, mean, mean_error, binder, binder_error and autocorrelation_time contain doubles. For hysteresis, they contain an array of values for the hysteresis parameters (in the order of h_hysteresis and lmbda_hysteresis).
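The layout above can be inspected with h5py. The sketch below is an illustration, assuming a file produced by etc_sample with the energy observable enabled; the file name is an assumption and depends on your output directory settings.

import h5py

# Hypothetical file name written by `./paratoric -sim etc_sample ...`
with h5py.File("runs/sample/results.h5", "r") as f:
    res = f["simulation/results/energy"]
    mean = res["mean"][()]                   # scalar for regular sampling
    mean_error = res["mean_error"][()]
    tau = res["autocorrelation_time"][()]
    if "series" in res:                      # only present if the time series was saved
        series = res["series"][:]
    print(f"E = {mean} +- {mean_error}, tau_int = {tau}")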
3.3 C interface

The C interface enables users to use a ParaToric public header from within another C project or another programming language that supports a C-style interface. The C interface exposes a stable ABI. It mirrors the C++ interface. Include the public header #include <paratoric/mcmc/extended_toric_code_c.h>. All functions return ptc_status_t.

3.3.1 Build & Installation

The code can be compiled in exactly the same fashion as for the C++ interface.

CMake usage (installed package).

cmake_minimum_required(VERSION 3.23)
project(my_qmc_c C)
find_package(paratoric CONFIG REQUIRED) # provides paratoric::core
add_executable(cdemo main.c)
target_link_libraries(cdemo PRIVATE paratoric::core)

CMake usage (as subdirectory).

add_subdirectory(deps/paratoric)
add_executable(cdemo main.c)
target_link_libraries(cdemo PRIVATE paratoric::core)

3.3.2 Status & error handling

Name | C type/values | Meaning
ptc_status_t | {PTC_STATUS_OK=0, PTC_STATUS_INVALID_ARGUMENT=1, PTC_STATUS_RUNTIME_ERROR=2, PTC_STATUS_NO_MEMORY=3, PTC_STATUS_INTERNAL_ERROR=4} | Return code of every API call.
ptc_last_error() | const char* | Thread-local error string. Valid until next call.

3.3.3 Opaque handle

Create and destroy the interface instance. Use ptc_create(ptc_handle_t **out) and ptc_destroy(ptc_handle_t *h).

3.3.4 Configuration type

Top-level ptc_config_t aggregates four nested specs. Field names mirror the C++ Config.

Top-level configuration: ptc_config_t

Field | Type | Purpose
sim | ptc_sim_spec_t | Monte Carlo parameters.
params | ptc_param_spec_t | Hamiltonian parameters.
lat | ptc_lat_spec_t | Lattice parameters.
out | ptc_out_spec_t | Output paths and snapshot toggle.

Simulation specification (config.sim)

Field | Type | Meaning
N_samples | int | Number of snapshots.
N_thermalization | int | Thermalization steps.
N_between_samples | int | Thinning between snapshots.
N_resamples | int | Bootstrap resamples.
custom_therm | bool | Custom thermalization schedule.
seed | int | PRNG seed (0 = random).
observables | const char* const* | Array of observable names (nullable).
N_observables | size_t | Length of observables.

Parameter specification (config.params)

Field | Type | Meaning
mu,h,J,lmbda | double | Couplings (star, electric, plaquette, gauge).
h_therm | double | Thermalization value for h if custom_therm=true.
lmbda_therm | double | Thermalization value for lmbda if custom_therm=true.
h_hys | const double* | Hysteresis schedule for h (nullable).
h_hys_len | size_t | Length of h_hys. Must match lmbda_hys_len.
lmbda_hys | const double* | Hysteresis schedule for lmbda (nullable).
lmbda_hys_len | size_t | Length of lmbda_hys. Must match h_hys_len.

Lattice specification (config.lat)

Field | Type | Meaning / Valid values
basis | char | Spin basis: 'x' or 'z'.
lattice_type | const char* | E.g. "triangular", "square", ...
system_size | int | Linear system size per dimension.
beta | double | Inverse temperature.
boundaries | const char* | "periodic" or "open".
default_spin | int | Initial link spin: +1 or -1.

Output specification (config.out)

Field | Type | Meaning
path_out | const char* | Single output directory (nullable).
paths_out | const char* const* | Output directories for hysteresis steps (nullable).
N_paths_out | size_t | Length of paths_out. Must match h_hys_len and lmbda_hys_len.
save_snapshots | bool | Toggle snapshot dumping.

3.3.5 Return type

All outputs are owned by the caller. Call ptc_result_destroy(&r) to free and zero.

Field | C type | Meaning
series | ptc_series_t | Time series (real/complex) of all requested observables; thermalization is excluded (except for thermalization simulations). Outer index = observable, inner index = time point.
acc_ratio | ptc_dvec_t | MC acceptance ratios for each update (thermalization).
mean, mean_std | ptc_dvec_t | Bootstrap mean and standard error (order as in observables).
binder, binder_std | ptc_dvec_t | Binder ratios and standard error (order as in observables).
tau_int | ptc_dvec_t | Integrated autocorrelation time (order as in observables).
series_hys | ptc_series_blocks_t | Hysteresis time series. Outer index = hysteresis parameters (order as in h_hys,lmbda_hys), middle index = observables (order as in observables), inner vector = time series.
mean_hys, mean_std_hys, binder_hys, binder_std_hys, tau_int_hys | ptc_dmat_t | Outer index = hysteresis parameters (order as in h_hys,lmbda_hys), inner index = observables (order as in observables).

3.3.6 Procedures (mirror the C++ API)

All fill a ptc_result_t *out on success. Return PTC_STATUS_OK on success.

ptc_get_thermalization(ptc_handle_t *h, const ptc_config_t *cfg, ptc_result_t *out) Run thermalization only. Required fields: cfg->lat.{basis,lattice_type,system_size,beta,boundaries,default_spin}, cfg->params.{mu,h,J,lmbda}, cfg->sim.{N_thermalization,N_resamples,observables,N_observables,seed}, cfg->out.{path_out,save_snapshots}.

ptc_get_sample(ptc_handle_t *h, const ptc_config_t *cfg, ptc_result_t *out) Run a production measurement pass. Required fields: cfg->lat.{basis,lattice_type,system_size,beta,boundaries,default_spin}, cfg->params.{mu,h,J,lmbda,h_therm,lmbda_therm}, cfg->sim.{N_samples,N_thermalization,N_between_samples,N_resamples,custom_therm,observables,N_observables,seed}, cfg->out.{path_out,save_snapshots}.

ptc_get_hysteresis(ptc_handle_t *h, const ptc_config_t *cfg, ptc_result_t *out) Run a hysteresis sweep over h_hys and/or lmbda_hys. The last state of step i initializes step i+1. Required fields: cfg->lat.{basis,lattice_type,system_size,beta,boundaries,default_spin}, cfg->params.{mu,h_hys,h_hys_len,J,lmbda_hys,lmbda_hys_len}, cfg->sim.{N_samples,N_thermalization,N_between_samples,N_resamples,observables,N_observables,seed}, cfg->out.{paths_out,N_paths_out,save_snapshots}.
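Because the ABI is plain C, the library can also be driven through a foreign-function interface. As a minimal illustration (not part of the shipped bindings), the following Python ctypes sketch creates and destroys a handle; the shared-library name and location are assumptions and depend on your install prefix.

import ctypes

# Hypothetical library name/path; adjust to your CMake install prefix.
lib = ctypes.CDLL("libparatoric_core.so")

lib.ptc_create.argtypes = [ctypes.POINTER(ctypes.c_void_p)]
lib.ptc_create.restype = ctypes.c_int        # ptc_status_t
lib.ptc_destroy.argtypes = [ctypes.c_void_p]
lib.ptc_last_error.restype = ctypes.c_char_p

handle = ctypes.c_void_p()
if lib.ptc_create(ctypes.byref(handle)) != 0:  # PTC_STATUS_OK == 0
    raise RuntimeError(lib.ptc_last_error().decode())
# ... declare a ptc_config_t struct and call ptc_get_sample(...) here ...
lib.ptc_destroy(handle)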
3.3.7 C usage example

Listing 5: C API - Minimal call

#include <stdio.h>
#include <stdbool.h>
#include <math.h>
#include <paratoric/mcmc/extended_toric_code_c.h>

int main(void) {
    ptc_handle_t* h = NULL;
    if (ptc_create(&h) != PTC_STATUS_OK) { puts("create failed"); return 1; }

    ptc_lat_spec_t lat = {
        .basis = 'z',
        .lattice_type = "square",
        .system_size = 16,
        .beta = 8.0,
        .boundaries = "periodic",
        .default_spin = 1
    };

    ptc_param_spec_t ps = {
        .mu = 1.0, .h = 0.2, .J = 1.0, .lmbda = 0.0,
        .h_therm = NAN, .lmbda_therm = NAN,
        .h_hys = NULL, .h_hys_len = 0,
        .lmbda_hys = NULL, .lmbda_hys_len = 0
    };

    const char* obs[] = {"energy", "plaquette_z", "anyon_count"};
    ptc_sim_spec_t sim = {
        .N_samples = 0, /* thermalization-only initially */
        .N_thermalization = 5000,
        .N_between_samples = 10,
        .N_resamples = 1000,
        .custom_therm = false,
        .seed = 12345,
        .observables = obs,
        .N_observables = sizeof(obs)/sizeof(obs[0])
    };

    ptc_out_spec_t outspec = {
        .path_out = "runs/sample",
        .paths_out = NULL, .N_paths_out = 0,
        .save_snapshots = false
    };

    ptc_config_t cfg = { .sim = sim, .params = ps, .lat = lat, .out = outspec };

    ptc_result_t warm = {0};
    ptc_status_t st = ptc_get_thermalization(h, &cfg, &warm);
    if (st != PTC_STATUS_OK) { puts(ptc_last_error()); ptc_destroy(h); return 2; }
    ptc_result_destroy(&warm);

    cfg.sim.N_samples = 2000;
    ptc_result_t res = {0};
    st = ptc_get_sample(h, &cfg, &res);
    if (st != PTC_STATUS_OK) { puts(ptc_last_error()); ptc_destroy(h); return 3; }

    /* use res.mean, res.tau_int, ... */
    ptc_result_destroy(&res);
    ptc_destroy(h);
    return 0;
}

Memory rules. You own all buffers in ptc_result_t. Call ptc_result_destroy once per successful call.

3.4 Python bindings

ParaToric exposes a compiled Python extension module _paratoric with a submodule extended_toric_code. The bindings convert C++ vectors into NumPy arrays and release the global interpreter lock (GIL) while running the C++ kernels.

3.4.1 Build & Installation

The core requires C++23, CMake ≥ 3.23, and Boost ≥ 1.87 (older Boost versions may work, but were not tested). The Python bindings require a Python installation with NumPy and pybind11 (tested with version 3.0.1). pybind11 is included as a git submodule (you need to pull it!). To compile the Python bindings, run:

cmake -S . -B build -DCMAKE_BUILD_TYPE=Release \
    -DPARATORIC_ENABLE_NATIVE_OPT=ON -DPARATORIC_LINK_MPI=OFF \
    -DPARATORIC_BUILD_TESTS=ON -DPARATORIC_BUILD_PYBIND=ON \
    -DPython3_EXECUTABLE="$(which python)" -DPYBIND11_FINDPYTHON=ON \
    -DPARATORIC_INSTALL_TO_SITE=ON -DPARATORIC_PIP_EDITABLE_INSTALL=ON
cmake --build build -jN
ctest --test-dir build -jN --output-on-failure
cmake --install build

Replace N with the number of cores to use, e.g. -j4 for 4 cores.

• -DCMAKE_BUILD_TYPE=Release. Only set to Debug if you're a developer.
• -DCMAKE_INSTALL_PREFIX. By default, executables install to ${CMAKE_SOURCE_DIR}/${CMAKE_INSTALL_BINDIR}/, headers to ${CMAKE_INSTALL_INCLUDEDIR}/paratoric, and static libraries to ${CMAKE_SOURCE_DIR}/${CMAKE_INSTALL_LIBDIR}/. The Python scripts expect ${CMAKE_SOURCE_DIR}/bin/; this directory always contains the paratoric executable. To install into a custom directory, pass it via -DCMAKE_INSTALL_PREFIX, e.g. -DCMAKE_INSTALL_PREFIX=/your/custom/directory/.
• -DPARATORIC_EXPORT_COMPILE_COMMANDS=ON. Export compile_commands.json for tooling.
• -DPARATORIC_LINK_MPI=OFF. Link the core to MPI, required on some clusters. The core itself does not need MPI.
• -DPARATORIC_ENABLE_NATIVE_OPT=OFF. Turn on -march=native on GCC and Clang.
• -DPARATORIC_ENABLE_AVX2=OFF. Enable AVX2 (Haswell New Instructions). Requires a CPU which supports AVX2.
• -DPARATORIC_BUILD_TESTS=PROJECT_IS_TOP_LEVEL. Compile the tests (recommended).
• -DPARATORIC_BUILD_PYBIND=OFF. Compile the Python bindings.
• -DPARATORIC_INSTALL_TO_SITE=OFF. Install the ParaToric Python module to site-packages.
• -DPARATORIC_PIP_EDITABLE_INSTALL=OFF. Install the ParaToric Python module via pip as an editable module.
• -DPARATORIC_PIP_OFFLINE_INSTALL=OFF. Turn on when installing to pip without internet access. Requires NumPy and setuptools.

3.4.2 Module layout

• paratoric._paratoric: compiled extension (pybind11). Submodule: extended_toric_code.
• paratoric.extended_toric_code: convenient alias.
• Running python -m paratoric enters the package entry point (__main__.py).

NumPy return formats

All time series with potentially complex values are returned as complex128. Real observables appear with zero imaginary part. Shapes are documented in the function references below.

API reference (paratoric.extended_toric_code)

get_thermalization(...) Run only the warmup and return per-snapshot observables and MC acceptance ratios. Internally converts std::variant<complex<double>,double> to complex128 and std::vector<double> to float64 arrays. The GIL is released while the C++ routine executes.

Parameter | Type / default | Meaning
N_thermalization | int | Warmup steps.
N_resamples | int=1000 | Bootstrap resamples.
observables | list[str] | Names per snapshot.
seed | int=0 | PRNG seed (0 ⇒ random).
mu,h,J,lmbda | float | Hamiltonian parameters.
basis | {'x','z'}='x' | Spin eigenbasis.
lattice_type | str | E.g. "triangular", "square", ...
system_size | int | Linear size per dimension.
beta | float | Inverse temperature.
boundaries | str="periodic" | Boundary condition.
default_spin | int=1 | Initial link spin (+1/-1).
save_snapshots | bool=False | Enable snapshot files.
path_out | path|None=None | Output directory (if saving).

Returns: (series, acc_ratio). series: ndarray(complex128) of shape (n_obs, N_thermalization); acc_ratio: ndarray(float64) of shape (N_thermalization,).

get_sample(...) Run thermalization and production sampling; return series and bootstrap statistics. Converts nested C++ containers to NumPy arrays and releases the GIL during computation.

Parameter | Type / default | Meaning
N_samples | int | Stored samples per observable.
N_thermalization | int | Warmup steps before sampling.
N_between_samples | int | Thinning between samples.
N_resamples | int=1000 | Bootstrap resamples.
custom_therm | bool=False | Use h_therm, lmbda_therm during warmup.
observables | list[str] | Names per snapshot.
seed | int=0 | PRNG seed (0 ⇒ random).
mu,h,J,lmbda | float | Hamiltonian parameters.
h_therm,lmbda_therm | float=0 | Warmup parameters if custom therm.
basis | {'x','z'}='x' | Spin eigenbasis.
lattice_type,system_size,beta | str,int,float | Lattice and temperature.
boundaries,default_spin | str,int=("periodic",1) | BC and initial spin.
save_snapshots,path_out | bool=False, path|None=None | Optional I/O.

Returns: tuple of six arrays. series (complex128): (n_obs, N_samples); mean, mean_std, binder, binder_std, tau_int (float64): each (n_obs,).

get_hysteresis(...) Run a sweep where each step uses the previous state as its initial condition. Returns stacked arrays across steps; path handling validates per-step output directories when saving snapshots.

Parameter | Type / default | Meaning
N_samples,N_thermalization,N_between_samples | int,int,int | Cadence per step.
N_resamples | int=1000 | Bootstrap resamples.
observables | list[str] | Names per snapshot.
seed | int=0 | PRNG seed.
mu,J | float | Star and plaquette couplings.
h_hys,lmbda_hys | list[float] | Hysteresis values, lengths must match.
basis | {'x','z'}='x' | Spin basis.
lattice_type,system_size,beta | str,int,float | Lattice and temperature.
boundaries,default_spin | str,int=("periodic",1) | BC and initial spin.
save_snapshots | bool=False | Enable stepwise I/O.
paths_out | list[path]|None=None | Output path per step (size must match h_hys if saving).

Returns: tuple of six arrays. series3d (complex128): (n_steps, n_obs, N_samples); mean2d, std2d, binder2d, binder_std2d, tau2d (float64): each (n_steps, n_obs). The number of steps equals len(h_hys) (and len(lmbda_hys)).
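As an illustration of these shapes, the hysteresis output can be unpacked per step and per observable as sketched below. The keyword names follow the reference table above; the parameter values are arbitrary.

import numpy as np
from paratoric import extended_toric_code as etc

h_hys = [0.0, 0.1, 0.2, 0.3, 0.2, 0.1, 0.0]
obs = ["energy", "plaquette_z"]
out = etc.get_hysteresis(
    N_samples=500, N_thermalization=2000, N_between_samples=10,
    N_resamples=500, observables=obs, seed=0,
    mu=1.0, J=1.0, h_hys=h_hys, lmbda_hys=[0.2] * len(h_hys),
    basis='x', lattice_type="square", system_size=8, beta=8.0,
    boundaries="periodic", default_spin=1,
    save_snapshots=False, paths_out=None)
series3d, mean2d, std2d, binder2d, binder_std2d, tau2d = out

for i, h in enumerate(h_hys):            # outer index = hysteresis step
    for j, name in enumerate(obs):       # inner index = observable
        print(f"h={h}: <{name}> = {mean2d[i, j]} +- {std2d[i, j]}")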
Array dtypes and shapes (summary)

Function | Name / dtype | Shape
get_thermalization | series (complex128) | (n_obs, N_thermalization)
get_thermalization | acc_ratio (float64) | (N_thermalization,)
get_sample | series (complex128) | (n_obs, N_samples)
get_sample | mean, mean_std, binder, binder_std, tau_int (float64) | each (n_obs,)
get_hysteresis | series3d (complex128) | (n_steps, n_obs, N_samples)
get_hysteresis | mean2d, std2d, binder2d, binder_std2d, tau2d (float64) | each (n_steps, n_obs)

Notes on performance

The bindings release the global interpreter lock (GIL) during heavy compute (py::gil_scoped_release), enabling multi-threaded C++ execution if the backend uses threads or when calling from multiprocessing workers. Conversions handle 1D/2D/3D containers and enforce consistent inner lengths before copying to NumPy.

3.4.3 Usage example

Listing 6: Importing and calling from Python

>>> import numpy as np
>>> from paratoric import extended_toric_code as etc
>>> series, acc_ratio = etc.get_thermalization(
...     N_thermalization=2000, N_resamples=500,
...     observables=["energy","plaquette_z","anyon_count"],
...     seed=0, mu=1.0, h=0.2, J=1.0, lmbda=0.0,
...     basis='z', lattice_type="square", system_size=16, beta=8.0,
...     boundaries="periodic", default_spin=1,
...     save_snapshots=False, path_out=None)
>>> series.shape, series.dtype
((3, 2000), dtype('complex128'))
>>> out = etc.get_sample(
...     N_samples=1000, N_thermalization=5000, N_between_samples=10,
...     N_resamples=1000, custom_therm=False,
...     observables=["energy","plaquette_z"],
...     seed=0, mu=1.0, h=0.2, h_therm=0.0,
...     J=1.0, lmbda=0.0, lmbda_therm=0.0,
...     basis='z', lattice_type="square", system_size=16, beta=8.0,
...     boundaries="periodic", default_spin=1,
...     save_snapshots=False, path_out=None)
>>> (series_s, mean, mean_std, binder, binder_std, tau_int) = out
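Because the GIL is released inside the kernels, independent chains can also be run in parallel directly from Python. The sketch below (illustrative; it reuses the keyword API shown above) distributes seeds over a process pool:

from concurrent.futures import ProcessPoolExecutor
from paratoric import extended_toric_code as etc

def run_chain(seed):
    # One independent Markov chain per worker; only the seed differs.
    return etc.get_sample(
        N_samples=1000, N_thermalization=5000, N_between_samples=10,
        N_resamples=1000, custom_therm=False, observables=["energy"],
        seed=seed, mu=1.0, h=0.2, h_therm=0.0, J=1.0, lmbda=0.0,
        lmbda_therm=0.0, basis='z', lattice_type="square", system_size=16,
        beta=8.0, boundaries="periodic", default_spin=1,
        save_snapshots=False, path_out=None)

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=4) as pool:
        # Nonzero seeds keep each chain reproducible (0 would mean "random seed").
        results = list(pool.map(run_chain, [1, 2, 3, 4]))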
3.5 Python command-line interface

ParaToric ships a Python command-line interface /python/cli/paratoric.py that orchestrates C++ backends, runs sweeps, and writes HDF5/XML outputs. It requires NumPy, matplotlib, and h5py.

3.5.1 Build & Installation

The command-line interface requires HDF5 ≥ 1.14.3 (older HDF5 versions may work, but were not tested). The core requires C++23, CMake ≥ 3.23, and Boost ≥ 1.87 (older Boost versions may work, but were not tested). To compile it, run:

cmake -S . -B build -DCMAKE_BUILD_TYPE=Release \
    -DPARATORIC_ENABLE_NATIVE_OPT=ON -DPARATORIC_LINK_MPI=OFF \
    -DPARATORIC_BUILD_TESTS=ON -DPARATORIC_BUILD_CLI=ON
cmake --build build -jN
ctest --test-dir build -jN --output-on-failure
cmake --install build

Replace N with the number of cores to use, e.g. -j4 for 4 cores.

• -DCMAKE_BUILD_TYPE=Release. Only set to Debug if you're a developer.
• -DCMAKE_INSTALL_PREFIX. By default, executables install to ${CMAKE_SOURCE_DIR}/${CMAKE_INSTALL_BINDIR}/, headers to ${CMAKE_INSTALL_INCLUDEDIR}/paratoric, and static libraries to ${CMAKE_SOURCE_DIR}/${CMAKE_INSTALL_LIBDIR}/. The Python scripts expect ${CMAKE_SOURCE_DIR}/bin/; this directory always contains the paratoric executable. To install into a custom directory, pass it via -DCMAKE_INSTALL_PREFIX, e.g. -DCMAKE_INSTALL_PREFIX=/your/custom/directory/.
• -DPARATORIC_EXPORT_COMPILE_COMMANDS=ON. Export compile_commands.json for tooling.
• -DPARATORIC_LINK_MPI=OFF. Link the core to MPI, required on some clusters. The core itself does not need MPI.
• -DPARATORIC_ENABLE_NATIVE_OPT=OFF. Turn on -march=native on GCC and Clang.
• -DPARATORIC_ENABLE_AVX2=OFF. Enable AVX2 (Haswell New Instructions). Requires a CPU which supports AVX2.
• -DPARATORIC_BUILD_TESTS=PROJECT_IS_TOP_LEVEL. Compile the tests (recommended).
• -DPARATORIC_BUILD_CLI=PROJECT_IS_TOP_LEVEL. Required for both the C++ and Python command-line interface.

General options

Long flag | Short | Description
--help | -h | Show help and exit.
--simulation | -sim | Simulation type selector.
--N_thermalization | -Nth | Thermalization steps (proposed updates).
--N_samples | -Ns | Number of samples/snapshots.
--N_between_steps | -Nbs | Steps between successive samples (thinning).
--N_resamples | -Nr | Bootstrap resamples.
--custom_therm | -cth | Use thermalization values for h, λ (0 or 1).
--observables | -obs | Space-separated list, e.g. fredenhagen_marcu percolation_probability energy.
--seed | -seed | PRNG seed; 0 means random seed.
--mu_constant | -muc | Value of µ.
--J_constant | -Jc | Value of J.
--h_constant | -hc | Value of h.
--h_constant_therm | -hct | Thermalization value of h.
--lmbda_constant | -lmbdac | Value of λ.
--lmbda_constant_therm | -lmbdact | Thermalization value of λ.
--output_directory | -outdir | Output directory.
--snapshots | -snap | Save snapshots toggle (0/1).
--full_time_series | -fts | Save full time series toggle (0/1).
--processes | -proc | Logical CPU count for Python multiprocessing. 0 means all available cores. Negative numbers −x mean use all cores minus x. Default is -4.

Lattice-specific options

Long flag | Short | Description
--help | -h | Show help and exit.
--basis | -bas | Spin basis: x or z.
--lattice_type | -lat | square, cubic, triangular, honeycomb, ...
--system_size | -L | Linear size; in 2D, 30 yields a 30×30 lattice (unit cells).
--temperature | -T | Temperature T = 1/β > 0.
--boundaries | -bound | periodic or open.
--default_spin | -dsp | Initial edge spin: 1 or -1.

The command-line interface offers several sweep modes. All are embarrassingly parallel; set --processes close to the number of steps when possible.

3.5.2 T-sweep

Runs T_steps independent Markov chains for evenly spaced temperatures in [T_lower, T_upper] and plots all requested observables.

Listing 7: Example usage

python3 ./python/cli/paratoric.py -sim etc_T_sweep -Ns 1000 -muc 1 -Nth 2000 \
    -Nbs 100 -Tl 0.5 -Tu 5 -Ts 30 -hc 0.1 -Jc 1 -lmbdac 0.1 -Nr 1000 \
    -obs percolation_strength percolation_probability \
    plaquette_percolation_probability largest_cluster string_number energy \
    energy_h energy_mu energy_J energy_lmbda sigma_x sigma_z star_x plaquette_z \
    staggered_imaginary_times delta anyon_count anyon_density fredenhagen_marcu \
    sigma_x_susceptibility sigma_z_susceptibility -s 0 -bas x -lat square -L 4 \
    -bound periodic -dsp 1 -outdir /path/to/out

Sweep-specific flags.

Long flag | Short | Description
--simulation | -sim | Use etc_T_sweep.
--T_lower | -Tl | Lower bound of T.
--T_upper | -Tu | Upper bound of T.
--T_steps | -Ts | Number of temperatures between bounds.
3.5.3 h-sweep

Runs h_steps independent chains in parallel for evenly spaced h in [h_lower, h_upper].

Listing 8: Example usage

python3 ./python/cli/paratoric.py -sim etc_h_sweep -Ns 1000 -muc 1 -Nth 2000 \
    -Nbs 100 -hl 0.1 -hu 0.5 -hs 8 -T 0.03 -Jc 1 -lmbdac 0.2 -Nr 1000 \
    -obs percolation_strength percolation_probability \
    plaquette_percolation_probability largest_cluster string_number energy \
    energy_h energy_mu energy_J energy_lmbda sigma_x sigma_z star_x plaquette_z \
    staggered_imaginary_times delta anyon_count anyon_density fredenhagen_marcu \
    sigma_x_susceptibility sigma_z_susceptibility -s 0 -bas x -lat square -L 6 \
    -bound periodic -dsp 1 -outdir /path/to/out

Sweep-specific flags.

Long flag | Short | Description
--simulation | -sim | Use etc_h_sweep.
--h_lower | -hl | Lower bound of h.
--h_upper | -hu | Upper bound of h.
--h_steps | -hs | Number of field steps between bounds.

3.5.4 λ-sweep

Runs lmbda_steps independent chains in parallel for evenly spaced λ in [λ_lower, λ_upper].

Listing 9: Example usage

python3 ./python/cli/paratoric.py -sim etc_lmbda_sweep -Ns 1000 -muc 1 -Nth 2000 \
    -Nbs 100 -lmbdal 0.01 -lmbdau 1.0 -lmbdas 15 -T 0.1 -hc 0.3 -Jc 1 -Nr 1000 \
    -obs percolation_strength percolation_probability \
    plaquette_percolation_probability largest_cluster string_number energy \
    energy_h energy_mu energy_J energy_lmbda sigma_x sigma_z star_x plaquette_z \
    staggered_imaginary_times delta anyon_count anyon_density fredenhagen_marcu \
    sigma_x_susceptibility sigma_z_susceptibility -s 0 -bas x -lat square -L 4 \
    -bound periodic -dsp 1 -outdir /path/to/out

Sweep-specific flags.

Long flag | Short | Description
--simulation | -sim | Use etc_lmbda_sweep.
--lmbda_lower | -lmbdal | Lower bound of λ.
--lmbda_upper | -lmbdau | Upper bound of λ.
--lmbda_steps | -lmbdas | Number of field steps between bounds.

3.5.5 Θ-sweep

Runs Theta_steps independent chains in parallel along a circle in (λ,h) centered at (lmbda_constant, h_constant) with radius radius, for angles Θ ∈ [Θ_lower, Θ_upper] (angles measured anti-clockwise from the λ-axis).

Listing 10: Example usage

python3 ./python/cli/paratoric.py -sim etc_circle_sweep -Ns 1000 -muc 1 -Nth 2000 \
    -Nbs 100 -lmbdac 0.4 -rad 0.3 -Thl 0 -Thu 3.141 -Ths 15 -T 0.1 -hc 0.4 \
    -Jc 1 -Nr 1000 -obs percolation_strength percolation_probability \
    plaquette_percolation_probability largest_cluster string_number energy \
    energy_h energy_mu energy_J energy_lmbda sigma_x sigma_z star_x plaquette_z \
    staggered_imaginary_times delta anyon_count anyon_density fredenhagen_marcu \
    sigma_x_susceptibility sigma_z_susceptibility -s 0 -bas x -lat square -L 4 \
    -bound periodic -dsp 1 -outdir /path/to/out

Sweep-specific flags.

Long flag | Short | Description
--simulation | -sim | Use etc_circle_sweep.
--lmbda_constant | -lmbdac | Circle center in λ.
--h_constant | -hc | Circle center in h.
--radius | -rad | Circle radius.
--Theta_lower | -Thl | Lower bound of Θ.
--Theta_upper | -Thu | Upper bound of Θ.
--Theta_steps | -Ths | Number of angles between bounds.

3.5.6 Hysteresis-sweep

Uses the hysteresis schedule specified in hhys and lmbdahys. This mode runs two Markov chains, one in the original parameter order specified in hhys and lmbdahys, and one with a reversed parameter order, i.e., it calculates both branches of the hysteresis loop.
Listing 11: Example usage

python3 ./python/cli/paratoric.py -sim etc_hysteresis -Nbs 5000 -Ns 10000 \
    -muc 1 -Nth 20000 -hhys 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 \
    -T 0.1 -Jc -1 -lmbdahys 0.5 0.525 0.55 0.575 0.6 0.625 0.65 0.675 0.7 \
    0.725 0.75 -Nr 1000 -obs plaquette_percolation_probability \
    percolation_strength percolation_probability largest_cluster string_number \
    energy energy_h energy_mu energy_J energy_lmbda sigma_x sigma_z star_x \
    plaquette_z staggered_imaginary_times delta anyon_count anyon_density \
    fredenhagen_marcu sigma_x_susceptibility sigma_z_susceptibility -s 0 \
    -bas z -lat square -L 4 -bound periodic -dsp 1 -outdir /path/to/out

Sweep-specific flags.

Long flag | Short | Description
--simulation | -sim | Use etc_hysteresis.
--lmbda_hysteresis | -lmbdahys | Hysteresis schedule for λ. Length must match hhys.
--h_hysteresis | -hhys | Hysteresis schedule for h. Length must match lmbdahys.

3.5.7 Thermalization

Runs repetitions independent chains in parallel and reports observables and MC acceptance ratios every step, averaged over chains.

Listing 12: Example usage

python3 ./python/cli/paratoric.py -sim etc_thermalization -muc 1 -Nth 2000 \
    -reps 10 -lmbdac 2 -T 0.1 -hc 0.3 -Jc 1 -Nr 1000 \
    -obs percolation_strength percolation_probability \
    plaquette_percolation_probability largest_cluster string_number energy \
    energy_h energy_mu energy_J energy_lmbda sigma_x sigma_z star_x plaquette_z \
    staggered_imaginary_times delta anyon_count anyon_density fredenhagen_marcu \
    sigma_x_susceptibility sigma_z_susceptibility -s 0 -bas x -lat square -L 4 \
    -bound periodic -dsp 1 -outdir /path/to/out

Sweep-specific flags.

Long flag | Short | Description
--simulation | -sim | Use etc_thermalization.
--repetitions | -reps | Number of Markov chains to average.

4 Using ParaToric

4.1 Monte Carlo Updates

There is no need for the user to explicitly call specific updates or interact with internal C++ classes when using the documented interfaces. Internally, we use all five updates described in the original algorithm by Wu, Deng, and Prokof'ev [5]. These must furthermore be supplemented with the following two updates: because for high temperatures and for zero off-diagonal fields the spin at imaginary time 0 = β cannot be flipped, we allow for flipping the spin on the entire imaginary axis on one bond or on a plaquette (star) in the ˆσx-basis (ˆσz-basis). These updates only change the energy terms diagonal in the given basis and are trivial when caching the total integrated diagonal energy (the update locally flips the sign of the total integrated potential energy). Another advantage is that integrated autocorrelation times for observables diagonal in the given basis improve even in regimes that were previously accessible. All seven updates are equally likely to be proposed, and we use a 64-bit Mersenne-Twister for pseudorandom numbers [12] with the ability to externally set the seed. Some updates have early exits for input parameters for which they will always be rejected.

4.2 Monte Carlo Diagnostics

There are two compilation modes, Release and Debug. In production runs, one should always use the Release mode; it still gives the user enough information to diagnose sampling problems without severe performance impacts.

4.2.1 Thermalization mode

We provide thermalization routines which should be used before production runs to ensure proper thermalization (also known as burn-in). Thermalization times can vary drastically between different observables and initial conditions. We provide an example of sufficient and insufficient thermalization in Fig. 2. We recommend using the provided Python command-line interface, which will also plot the thermalization of all measured observables for the user. In thermalization runs, we also return the Monte Carlo acceptance ratio of every update. This can also be used to diagnose freezing (in the measurement phase, use the integrated autocorrelation time instead), e.g., when the acceptance ratio is always identical and/or very low. In case one suspects a serious sampling problem, we recommend recompiling the project in the Debug mode, which provides a wide array of runtime debug information about the proposed steps, acceptance ratios, and intermediate results. However, do not use the Debug mode in production runs, as it negatively impacts performance.
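A quick way to judge thermalization from the returned series is to plot a running average and check that it has flattened. The sketch below is illustrative only (it uses the Python bindings of Sec. 3.4 with arbitrary parameters); the Python command-line interface produces equivalent plots automatically.

import numpy as np
import matplotlib.pyplot as plt
from paratoric import extended_toric_code as etc

series, acc_ratio = etc.get_thermalization(
    N_thermalization=20000, N_resamples=500, observables=["energy_lmbda"],
    seed=0, mu=1.0, h=0.3, J=1.0, lmbda=0.5, basis='x',
    lattice_type="square", system_size=8, beta=8.0,
    boundaries="periodic", default_spin=1, save_snapshots=False, path_out=None)

energy = series[0].real                    # real observable, zero imaginary part
window = 500
running = np.convolve(energy, np.ones(window) / window, mode="valid")

plt.plot(energy, alpha=0.3, label="raw")
plt.plot(np.arange(window - 1, energy.size), running, label=f"running mean ({window})")
plt.xlabel("MC step")
plt.ylabel("gauge field energy")
plt.legend()
plt.show()   # thermalized once the running mean stops drifting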
4.2.2 Integrated autocorrelation time

When measuring observables, we first thermalize the system with N_thermalization steps, then measure N_samples times with N_between_samples steps between measurements. The normalized autocorrelation function ρ_O(k) of an observable O_k (observable O measured at time k) applied to a discrete time series of length N is given by

\rho_O(k) = \frac{C(k)}{C(0)}, \qquad C(k) = \frac{1}{N-k}\sum_{i=0}^{N-k-1}\big(O_i - \bar O\big)\big(O_{i+k} - \bar O\big), \qquad \bar O = \frac{1}{N}\sum_{i=0}^{N-1} O_i.    (7)

It is a statistical measure of the correlations between measurements of observable O at times i and i + k.² We define the integrated autocorrelation time

\tau^O_{\mathrm{int}} = \frac{1}{2} + \sum_{k \geq 1} \rho_O(k).    (8)

Large τint are generally undesirable since they increase error bars and can lead to bias. In case of perfect sampling, we would have ρ_O(0) = 1 and ρ_O(k) = 0 ∀k ≥ 1, i.e., each measurement is only correlated with itself but not with other measurements, and τint = 1/2. In practice, this is usually not feasible, and we have to work with a finite autocorrelation time τint > 1/2. When using ParaToric, we strongly recommend monitoring τint for all simulations and all observables. It is automatically calculated for every observable based on the full time series. As a rule of thumb, the autocorrelation is fine as long as τint ≪ N_samples; otherwise it leads to bias and seriously underestimated error bars. In the vicinity of phase transitions, τint dramatically increases ("critical slowing down") [13]. Importantly, τint can differ vastly between different observables! If the autocorrelation is too high, increase the number of steps between samples. In more complicated cases, one may need to adapt the update proposal distributions and/or the updates themselves as a last resort.

²In ParaToric, the autocorrelation function is calculated efficiently using fast Fourier transforms.

It is also important to mention that ParaToric only computes a statistical estimate of τint. Many factors determine how accurate this estimate is, and crucially, the system needs to be properly thermalized. In principle, one can use τint in the way it is computed above directly for calculating error bars of correlated time series; however, ParaToric uses a more robust bootstrapping approach.
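For reference, the following NumPy sketch (an illustration, not ParaToric's internal estimator) evaluates ρ_O(k) via FFT, as in Eq. (7), and sums it per Eq. (8) with a simple self-consistent truncation window; the window constant c is a common heuristic and an assumption here.

import numpy as np

def tau_int(series, c=6.0):
    """Estimate the integrated autocorrelation time of a 1D time series.

    rho(k) is computed via FFT (Eq. 7); the sum in Eq. (8) is truncated
    self-consistently at k ~ c * tau. Illustrative only; ParaToric's
    estimator may differ in its windowing details.
    """
    x = np.asarray(series, dtype=float) - np.mean(series)
    n = x.size
    f = np.fft.rfft(x, n=2 * n)              # zero-pad to avoid wrap-around
    acf = np.fft.irfft(f * np.conj(f))[:n]   # sum_i x_i x_{i+k}
    acf /= np.arange(n, 0, -1)               # 1/(N-k) normalization
    rho = acf / acf[0]
    tau = 0.5
    for k in range(1, n):
        tau += rho[k]
        if k >= c * tau:                     # self-consistent truncation window
            break
    return tau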
4.2.3 Error bars

ParaToric applies the stationary bootstrap [14–16] for all error bars, thus capturing autocorrelation effects. Large τint will lead to worse error bars. The only parameter that the user can change is the number of bootstrap resamples N_resamples. The default is 1000, which is enough in most cases. Note that a too low value of N_between_samples increases the relative computational cost of performing the measurements, which may negatively affect the code efficiency at no statistical gain. If the error bars are too large, either the number of samples is too low (in which case one should increase N_samples) or the autocorrelation is too large (in which case one could additionally increase N_between_samples).

4.3 Tips & tricks

4.3.1 Probing ground state physics

The algorithm implemented by ParaToric fundamentally requires a finite temperature T > 0. However, in the finite systems accessible to QMC simulations, there is always a finite-size energy gap (the difference between the energy of the ground state and the first excited state). Additionally, some phases, like the topological ground state of the toric code, have a physical bulk gap (even at L → ∞). As long as the temperature is well below the total gap, we are exponentially close to the ground state. Usually, a temperature T ∼ 1/L suffices for the toric code, although other situations may arise.

4.3.2 Probing first-order transitions

ParaToric provides functionalities to probe weak and strong first-order phase transitions. The hysteresis mode can be used to probe hysteresis loops in the vicinity of strong first-order phase transitions, by repeating the simulation twice and mirroring the order of the parameters in h_hysteresis and lmbda_hysteresis. Weak first-order transitions can be detected by plotting a time-series histogram of an observable (it exhibits a double-peak structure). Both approaches have been used in the context of the toric code [11].

4.3.3 Choosing the basis

Sometimes, one can work in both the ˆσx- and the ˆσz-basis. The performance can vary drastically! Generally, the ˆσx-basis is more efficient for h/J > λ/µ, and vice versa.

4.3.4 Choosing N_thermalization

Based on our experience, N_thermalization = 500 L^d / T is a sensible choice for small fields, where d is the dimensionality of the system. Nevertheless, one should make use of the provided tools to benchmark thermalization, see Sec. 4.2, and rather err on the side of safety.

4.3.5 Choosing N_samples

Neglecting autocorrelation effects, the error of an observable ∆O scales as ∆O ∼ 1/√N_samples. More samples are, in principle, always better and lead to lower error bars. Smoothness of a curve of statistical results also requires that error bars be small in relation to the parameter grid size. If one increases the parameter resolution (e.g., in the field h), then one typically also increases N_samples.

4.3.6 Choosing N_between_samples

The optimal choice for N_between_samples is the integrated autocorrelation time. A good guess of the autocorrelation time based on previous simulations for smaller system sizes or nearby parameter points can result in substantial computational cost savings in production runs for large system sizes. Near continuous phase transitions, the integrated autocorrelation time has an additional dependence τint ∼ L^z, where z is the dynamical exponent of the universality class of the transition. For a 2D system in the vicinity of a continuous phase transition, a sensible scaling for N_between_samples could be O(L² × β × L^z) (O(L²) links, each with O(β) off-diagonal spin flips).

4.3.7 Choosing N_resamples

As with N_samples, more is better (but also more costly). Usually N_resamples ≈ 1000 is a sensible choice.
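These rules of thumb are easy to encode. The helper below is a sketch of the heuristics in Secs. 4.3.4 and 4.3.6 with a hypothetical function name; the prefactors follow the benchmark setups in Sec. 4.4 and should be adjusted per problem.

def suggest_mc_parameters(L, d=2, T=0.1, z=0.0):
    """Heuristic step counts following Secs. 4.3.4 and 4.3.6 (illustrative).

    N_thermalization ~ 500 * L**d / T (small fields);
    N_between_samples ~ L**d * beta * L**z near a continuous transition
    (z = 0 far away from criticality).
    """
    beta = 1.0 / T
    return {
        "N_thermalization": int(500 * L**d / T),
        "N_between_samples": int(L**d * beta * L**z),
    }

# Example: L = 20 square lattice at T = 1/L (ground-state regime, Sec. 4.3.1)
print(suggest_mc_parameters(L=20, d=2, T=1 / 20))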
4.3.8 Extracting snapshots

When the option save_snapshots is enabled, ParaToric will write the snapshots into the directory specified in path_out (or in the paths paths_out for hysteresis sweeps). The snapshots are saved in the GraphML format (XML-based), which is supported by many major graph libraries. One snapshot will be saved for every measurement of observables, i.e., N_samples snapshots in total. All snapshots are written into a single file to save disk space and simultaneously offer a structured, self-documenting format. Every edge stores a list of spins: the first spin belongs to the first snapshot, the second one to the second snapshot, and so on. There are no special requirements for disks or memory bandwidth; the snapshots are kept in RAM and are only written to disk after the simulation has finished.
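Since the format is plain GraphML, the snapshots can be loaded with any graph library. A sketch with networkx follows; the file name and the name of the per-edge spin attribute are assumptions and should be checked against your actual output.

import networkx as nx

# Hypothetical file/attribute names; inspect your GraphML output to confirm.
G = nx.read_graphml("runs/sample/snapshots.graphml")

for u, v, data in G.edges(data=True):
    # Each edge stores one spin per snapshot, e.g. as a comma-separated string.
    spins = [int(s) for s in str(data["spins"]).split(",")]
    first_snapshot_spin = spins[0]  # spin of this link in the first snapshot
    break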
4.3.9 Adding new observables/lattices/updates

After adding features to the code, always benchmark them using analytical results, other numerical methods (exact diagonalization, tensor networks, ...), and unit tests. We advise using a fixed seed during development, e.g., when checking whether two methods produce the exact same result. The code has some built-in features to check self-consistency; e.g., at the end of each simulation, the code checks whether the cached total energy is numerically close to the total energy calculated from scratch. Do not turn off these features, as they will point you toward bugs!

4.4 Benchmarks

4.4.1 Thermalization

In Fig. 2 (which we already discussed before and repeat here for completeness) we plot the gauge field energy ∝ λ for two systems: one is sufficiently thermalized, the other one is not. The plots are a direct output of the Python command-line interface. Always make sure that the system is well thermalized.

Figure 2: Good and bad thermalization plots produced by the Python command-line interface. We show the gauge field energy ∝ λ. (a) The system is well thermalized; after its initial drop, the energy fluctuates around the expectation value. (b) The system is not yet thermalized; the floating average is still decreasing.

4.4.2 Integrated autocorrelation time

Here, we demonstrate how the integrated autocorrelation time τint grows with decreasing N_between_samples. We use the following setup:

Listing 13: N_between_samples benchmarking setup

>>> import numpy as np
>>> from paratoric import extended_toric_code as etc
>>> series, mean, std, binder, binder_std, tau_int = etc.get_sample(
...     N_samples=100000, N_thermalization=10000, N_between_samples=1,
...     N_resamples=1000, custom_therm=False, observables=["energy"],
...     seed=0, mu=1.0, h=0.0, h_therm=0.0, J=1.0, lmbda=0, lmbda_therm=0.0,
...     basis='x', lattice_type="square", system_size=4, beta=10,
...     boundaries="periodic", default_spin=1, save_snapshots=False)

We only run the simulation once per N_between_samples. The results are:

N_between_samples | 1 | 10 | 100 | 500 | 1000
τint (energy) | 1895 | 141.4 | 19.7 | 3.24 | 1.64

For very small N_between_samples, τint is very high: in cases where the update is rejected, the configuration is identical to the one measured before! A choice of N_between_samples between 500 and 1000 would be a good tradeoff between τint and runtime for this example. Increasing N_between_samples to well over 1000 would be a waste of CPU time.

4.4.3 Run-time

We benchmark the run-time for two realistic parameter sets on the square lattice and varying system size. The first setup simulates the toric code without fields:³

Listing 14: L benchmarking setup 1

>>> import numpy as np
>>> from paratoric import extended_toric_code as etc
>>> L = 20
>>> series, mean, std, binder, binder_std, tau_int = etc.get_sample(
...     N_samples=10000, N_thermalization=500*L*L*L, N_between_samples=8*L*L*L,
...     N_resamples=1000, custom_therm=False,
...     observables=["energy", "sigma_x", "sigma_z"], seed=0, mu=1.0, h=0.0,
...     h_therm=0.0, J=1.0, lmbda=0, lmbda_therm=0.0, basis='x',
...     lattice_type="square", system_size=L, beta=L, boundaries="periodic",
...     default_spin=1, save_snapshots=False)

³All tests were run on a laptop; some conditions, like the CPU temperature, were not identical for all simulations. The benchmarks are therefore only an approximation.

We only run one test per system size. The results are:

L | 4 | 8 | 12 | 16 | 20
Runtime (s) | 3.1 | 21.3 | 75 | 197 | 379

From our experience, for large L the update complexity is approximately O(L³ log β), owing to the chosen cubic dependency of N_thermalization and N_between_samples and an O(log β) dependence of operations on the imaginary-time axis; see the β benchmark below. The system size itself does not impact the performance, as the interactions are local. On computing clusters, we have realized system sizes of up to L = 80 for the square lattice; this number will only increase in the future as CPUs get faster.

The second setup simulates the toric code with fields in both the ˆσx- and ˆσz-direction:

Listing 15: L benchmarking setup 2

>>> import numpy as np
>>> from paratoric import extended_toric_code as etc
>>> L = 20
>>> series, mean, std, binder, binder_std, tau_int = etc.get_sample(
...     N_samples=10000, N_thermalization=500*L*L*L, N_between_samples=8*L*L*L,
...     N_resamples=1000, custom_therm=False,
...     observables=["energy", "sigma_x", "sigma_z"], seed=0, mu=1.0, h=0.2,
...     J=1.0, lmbda=0.2, basis='x', lattice_type="square", system_size=L,
...     beta=L, boundaries="periodic", default_spin=1, save_snapshots=False)

We only run one test per system size. The results are:

L | 4 | 8 | 12 | 16 | 20
Runtime (s) | 3.9 | 34.1 | 133 | 323 | 689

We also test the run-time dependence on the inverse temperature β, with the following setup:

Listing 16: β benchmarking setup

>>> import numpy as np
>>> from paratoric import extended_toric_code as etc
>>> series, mean, std, binder, binder_std, tau_int = etc.get_sample(
...     N_samples=10000, N_thermalization=20000, N_between_samples=2000,
...     N_resamples=1000, custom_therm=False,
...     observables=["energy", "sigma_x", "sigma_z"], seed=0, mu=1.0, h=0.2,
...     J=1.0, lmbda=0.2, basis='x', lattice_type="square", system_size=10,
...     beta=20, boundaries="periodic", default_spin=1, save_snapshots=False)

We only run one test per β. The results are:

β | 4 | 8 | 12 | 16 | 20
Runtime (s) | 14.9 | 17.2 | 19.1 | 20.0 | 22.1

[Figure 3 about here; the four panels (a)–(d) show UΠx, Πx, Ox_SI and Ox_FM as functions of h for L = 10, 20, 30, 40 on the square lattice at λ = 0.2.]

Figure 3: Topological phase transition of the extended toric code (1) on the square lattice. The critical field is known and located at hc(λ = 0.2) ≈ 0.33 [5,9]. Our results agree with the value published in the literature, within error bars. (a) The percolation probability Binder ratio UΠx [9] features a crossing point around h = 0.33.
(b) The percolation probability Πx is non-zero in the topological phase and zero in the trivial phase. The transition gets sharper with increasing system size. (c) The staggered imaginary-time order parameter Ox_SI is zero in the topological phase and non-zero in the trivial phase. (d) The Fredenhagen-Marcu order parameter Ox_FM is zero in the topological phase and non-zero in the trivial phase. The loop length grows with O(L). Compared to the other order parameters, it is very noisy because it is a multi-body correlator and, on top of that, a ratio of two exponentially small numbers.

This benchmark illustrates an appealing feature of our implementation: there is almost no slowing down when increasing β, implying that very low temperatures are within reach with ParaToric. This seemingly paradoxical result (the number n of off-diagonal star/plaquette and magnetic field operators must physically scale linearly in β) is explained by the fact that most searches within the imaginary-time axis scale as O(log n) by making use of binary searches.

4.4.4 Topological phase transition

We probe the well-known topological phase transition in the ground state of the extended toric code (1) on the square lattice, where we have a gapped Z2 quantum spin liquid for small fields h, λ and a topologically trivial phase for high fields. We set J = µ = 1, λ = 0.2 and sweep h over the known critical value hc(λ = 0.2) ≈ 0.33 [5,9] for L ∈ {10,20,30,40} in the ˆσx-basis. The temperature is set to T = 1/L to capture ground-state physics. We take 30000 snapshots, with 8L³ steps between snapshots and 500L³ thermalization steps.⁴ We confirm that the system is well thermalized and that all integrated autocorrelation times are below 10, i.e., the produced snapshots can safely be considered independent and identically distributed. We show the percolation probability, the Fredenhagen-Marcu string order parameter, and the staggered imaginary-time order parameter in Fig. 3. All of them reproduce the known phase boundary.

⁴If we were interested in quantities like critical exponents and needed to go very close to the critical field, we would have to take into account the dynamical exponent z (τint ∼ L^z) in the number of steps between snapshots to account for critical slowing down. As it stands, the error bars are merely larger near the critical field.

5 Conclusion & Outlook

We have presented ParaToric, a continuous-time quantum Monte Carlo solver for the toric code in a parallel field. ParaToric builds on the existing work by Wu, Deng, and Prokof'ev [5] and is also applicable to high temperatures and low off-diagonal couplings.

ParaToric can store snapshots, which makes it ideally suited to generate training/benchmarking data for applications in other fields, such as lattice gauge theories, cold atom or other quantum simulators, quantum spin liquids, artificial intelligence, and quantum error correction. We believe it also serves a pedagogical purpose. Another strength of ParaToric is its interoperability with other programming languages. The C interface is compatible with virtually all programming languages, so ParaToric can be seamlessly integrated into other projects. ParaToric comes with an MIT license.

For future releases of ParaToric we plan extensions along the following lines:

• Additional lattices, such as the kagome and the ruby lattice. Given the underlying graph structure used in ParaToric, such extensions are straightforward.
• Additional observables: we think here of, for instance, the finite temperature extension of the fidelity susceptibility to diagnose the phase transitions in the absence of a local order parameter. It would be worthwhile to have additional off-diagonal observables such as the off-diagonal Fredenhagen-Marcu string operators, or correlation functions between off-diagonal operators in space and or time. Measurements of the Renyi entropy are also high on the to-do list. The latter two classes require however major changes to the code, and testing. • Additional interaction types. There are many classes of models in which topological order may be emergent instead of explicit as in the toric code. Such models typically have additional interactions than the ones covered in ParaToric, such as longer-range Ising interactions, and miss some others (typically the plaquette type interactions, and sometimes even the star terms). It is in general an open problem how to efficiently simulate such models at the lowest temperatures (even for sign-free models). Extending ParaToric to dealing with other types of interactions can thus serve as an additional tool for benchmarking purposes and algorithmic exploration. Acknowledgements The authors acknowledge fruitful discussions with A. Bohrdt, G. De Paciani, G. Dünnweber, F. Grusdt, L. Homeier, and N. V. Prokof’ev. Author contributions SML did the main coding and planning work with input from LP. All authors contributed to the writing of the manuscript. Funding information This research was funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program – ERC Starting Grant SimUcQuam (Grant Agreement No. 948141), and by the Deutsche Forschungsgemein- schaft (DFG, German Research Foundation) under Germany’s Excellence Strategy – EXC-2111 – project number 390814868. 36 SciPost Physics Codebases Submission References [1] A. Kitaev, Fault-tolerant quantum computation by anyons, Annals of Physics 303(1), 2 (2003), doi:https://doi.org/10.1016/S0003-4916(02)00018-0. [2] E. Dennis, A. Kitaev, A. Landahl and J. Preskill, Topological quantum memory, Journal of Mathematical Physics 43(9), 4452 (2002), doi:10.1063/1.1499754, https://pubs.aip. org/aip/jmp/article-pdf/43/9/4452/19183135/4452_1_online.pdf. [3] A. G. Fowler, M. Mariantoni, J. M. Martinis and A. N. Cleland, Surface codes: To- wards practical large-scale quantum computation, Phys. Rev. A 86, 032324 (2012), doi:10.1103/PhysRevA.86.032324. [4] J. B. Kogut, An introduction to lattice gauge theory and spin systems, Rev. Mod. Phys. 51, 659 (1979), doi:10.1103/RevModPhys.51.659. [5] F. Wu, Y. Deng and N. Prokof’ev, Phase diagram of the toric code model in a parallel magnetic field, Phys. Rev. B 85, 195104 (2012), doi:10.1103/PhysRevB.85.195104. [6] K. Fredenhagen and M. Marcu, Charged states in z2 gauge theories, Communications in Mathematical Physics 92(1), 81 (1983), doi:10.1007/BF01206315. [7] K. Fredenhagen and M. Marcu, Confinement criterion for qcd with dynamical quarks, Phys. Rev. Lett. 56, 223 (1986), doi:10.1103/PhysRevLett.56.223. [8] K. Fredenhagen and M. Marcu, Dual interpretation of order parameters for lattice gauge theories with matter fields, Nuclear Physics B - Proceedings Supplements 4, 352 (1988), doi:https://doi.org/10.1016/0920-5632(88)90124-7. [9] S. M. Linsel, A. Bohrdt, L. Homeier, L. Pollet and F. Grusdt, Percolation as a confine- ment order parameter in z2 lattice gauge theories, Phys. Rev. 
B 110, L241101 (2024), doi:10.1103/PhysRevB.110.L241101. [10] G. Dünnweber, S. M. Linsel, A. Bohrdt and F. Grusdt, Percolation renormalization group analysis of confinement in z2 lattice gauge theories, Phys. Rev. B 111, 024314 (2025), doi:10.1103/PhysRevB.111.024314. [11] S. M. Linsel, L. Pollet and F. Grusdt, Independent e- and m-anyon confinement in the parallel field toric code on non-square lattices, doi:10.48550/arXiv.2504.03512 (2025). [12] M. Matsumoto and T. Nishimura, Mersenne twister: a 623-dimensionally equidistributed uniform pseudo-random number generator, ACM Trans. Model. Comput. Simul. 8, 3 (1998). [13] U. Wolff, Critical slowing down, Nuclear Physics B - Proceedings Supplements 17, 93 (1990), doi:https://doi.org/10.1016/0920-5632(90)90224-I. [14] D. N. Politis and J. P. Romano, The stationary bootstrap, Journal of the American Statis- tical Association 89(428), 1303 (1994), doi:10.1080/01621459.1994.10476870. [15] D. N. Politis and H. White, Automatic block-length selection for the dependent bootstrap, Econometric Reviews 23(1), 53 (2004), doi:10.1081/ETC-120028836. [16] A. Patton, D. N. Politis and H. White, Correction to “automatic block-length selection for the dependent bootstrap” by d. politis and h. white, Econometric Reviews 28(4), 372 (2009), doi:10.1080/07474930802459016. 37
ParaToric 1.0-beta: Continuous-time quantum Monte Carlo for the toric code in a parallel field

Simon M. Linsel⋆ and Lode Pollet†

Arnold Sommerfeld Center for Theoretical Physics (ASC), Ludwig-Maximilians-Universität München, Theresienstr. 37, D-80333 München, Germany
Munich Center for Quantum Science and Technology (MCQST), Schellingstr. 4, D-80799 München, Germany

Abstract

We introduce ParaToric, a C++ package for simulating the toric code in a parallel field (i.e., X- and Z-fields) at finite temperature. We implement and extend the continuous-time quantum Monte Carlo algorithm of Wu, Deng, and Prokof'ev on the square, triangular, honeycomb, and cubic lattices with open and periodic boundaries. The package is expandable to arbitrary lattice geometries and custom observables diagonal in either the X- or Z-basis. ParaToric also supports snapshot extraction in both bases, making it ideal for generating training/benchmarking data for other methods, such as lattice gauge theories, cold atom or other quantum simulators, quantum spin liquids, artificial intelligence, and quantum error correction. The software provides bindings to C/C++ and Python, and is thus almost universally integrable into other software projects.

Copyright attribution to authors. This work is a submission to SciPost Physics Codebases. License information to appear upon publication. Publication information to appear upon publication.

Contents

1 Introduction
2 The toric code in a parallel field
  2.1 Hamiltonian
  2.2 Lattice geometries
  2.3 Observables
3 Installation & interfaces
  3.1 C++ interface
    3.1.1 Build & Installation
    3.1.2 Public class ExtendedToricCode
    3.1.3 Configuration type
    3.1.4 Return type
    3.1.5 C++ usage examples
  3.2 C++ command-line interface
    3.2.1 Build & Installation
    3.2.2 etc_sample
    3.2.3 etc_hysteresis
    3.2.4 etc_thermalization
    3.2.5 HDF5 structure
  3.3 C interface
    3.3.1 Build & Installation
    3.3.2 Status & error handling
    3.3.3 Opaque handle
    3.3.4 Configuration type
    3.3.5 Return type
    3.3.6 Procedures (mirror the C++ API)
    3.3.7 C usage example
  3.4 Python bindings
    3.4.1 Build & Installation
    3.4.2 Module layout
    3.4.3 Usage example
  3.5 Python command-line interface
    3.5.1 Build & Installation
    3.5.2 T-sweep
    3.5.3 h-sweep
    3.5.4 λ-sweep
    3.5.5 ◦-sweep
    3.5.6 Hysteresis-sweep
    3.5.7 Thermalization
4 Using ParaToric
  4.1 Monte Carlo Updates
  4.2 Monte Carlo Diagnostics
    4.2.1 Thermalization mode
    4.2.2 Integrated autocorrelation time
    4.2.3 Error bars
  4.3 Tips & tricks
    4.3.1 Probing ground state physics
    4.3.2 Probing first-order transitions
    4.3.3 Choosing the basis
    4.3.4 Choosing N_thermalization
    4.3.5 Choosing N_samples
    4.3.6 Choosing N_between_samples
    4.3.7 Choosing N_resamples
    4.3.8 Extracting snapshots
    4.3.9 Adding new observables/lattices/updates
  4.4 Benchmarks
    4.4.1 Thermalization
    4.4.2 Integrated autocorrelation time
    4.4.3 Run-time
    4.4.4 Topological phase transition
5 Conclusion & Outlook
References

1 Introduction

The toric code is one of the most fundamental and most-studied models in modern condensed matter physics. It was first written down by Kitaev [1] and is the simplest example of a model hosting a topological phase (a gapped Z2 quantum spin liquid) and anyonic excitations.
The toric code is also the foundational model for error-correcting codes [2,3] and has deep connections to the Ising gauge theory [4]. The toric code can be extended with fields which, when strong enough, destroy the topological order. This model is sign-problem-free, thus making quantum Monte Carlo the method of choice. Wu, Deng, and Prokof'ev developed a continuous-time quantum Monte Carlo algorithm for it [5]. ParaToric implements and extends this algorithm with new updates which enable ergodicity at large temperatures and at zero off-diagonal field, thus significantly improving the applicability of the algorithm. ParaToric implements a wide range of lattices, boundary conditions, and observables. It is also possible to extend ParaToric with new interactions, observables, and lattices. We provide documented interfaces in C, C++, and Python as well as command-line interfaces, making the integration of ParaToric into other projects and programming languages straightforward. ParaToric saves simulation results to HDF5 files and snapshots to GraphML files (XML-based), with a focus on interoperability with other packages. ParaToric comes with an MIT license.

2 The toric code in a parallel field

2.1 Hamiltonian

ParaToric implements and extends the continuous-time quantum Monte Carlo (QMC) algorithm by Wu, Deng, and Prokof'ev [5] to simulate the toric code in a parallel field (also called the perturbed toric code or extended toric code),

\[
\hat{H} = -\mu \sum_v \hat{A}_v - J \sum_p \hat{B}_p - h \sum_l \hat{\sigma}^x_l - \lambda \sum_l \hat{\sigma}^z_l ,
\tag{1}
\]

where J, λ > 0 in the σ̂^x-basis and μ, h > 0 in the σ̂^z-basis (otherwise the model has a sign problem). σ̂^x_l and σ̂^z_l are Pauli matrices defined on the links of the underlying lattice. The star term Â_v = ∏_{l∈v} σ̂^x_l contains all links adjacent to lattice site v; the plaquette term B̂_p = ∏_{l∈p} σ̂^z_l contains all links that belong to the same elementary plaquette p of the underlying lattice. The temperature T = 1/β is finite. For readers interested in extending the code, we note that it is relatively straightforward to add interactions that are diagonal in the chosen basis, such as (long-range) Ising interactions. Off-diagonal interactions require a more careful review and extension of the Monte Carlo updates to ensure ergodicity. However, diagonal interactions can also lead to sampling problems, especially when they introduce frustration.

2.2 Lattice geometries

We implement the square, honeycomb, triangular, and cubic lattices, see Fig. 1. On the cubic lattice, the plaquettes contain the four links of the cube faces, not the twelve links of the cube (that model has a different m-anyon structure). We implement both open and periodic boundaries. New lattices can be added in src/lattice/lattice.cpp.

Figure 1: Implemented lattices. We implement the extended toric code (1) on the square (a), honeycomb (b), triangular (c), and cubic (d) lattices. For each lattice, we show the star (Â_v) and plaquette (B̂_p) terms. The cubic lattice importantly features star interactions of six links and plaquette interactions of four links on the faces of cubes.

2.3 Observables

Here we list all observables that ParaToric implements for the extended toric code. Custom observables can be added in src/mcmc/extended_toric_code_qmc.hpp. For each observable Ô, we calculate the expectation value ⟨Ô⟩ and the Binder ratio U_O = ⟨Ô⁴⟩/⟨Ô²⟩², with error bars obtained from bootstrapping (see below).
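Before listing the observables, here is a minimal NumPy illustration (not part of ParaToric) of these two estimators: the Binder ratio of a measured time series and a naive i.i.d. bootstrap error bar. ParaToric itself uses the stationary bootstrap described in Sec. 4.2.3, which additionally accounts for autocorrelation.

import numpy as np

def binder_ratio(samples):
    """Binder ratio U_O = <O^4> / <O^2>^2 of a 1D series of measurements."""
    samples = np.asarray(samples, dtype=float)
    return np.mean(samples**4) / np.mean(samples**2) ** 2

def bootstrap_error(samples, estimator, n_resamples=1000, seed=0):
    """Standard error of `estimator` from a naive i.i.d. bootstrap."""
    rng = np.random.default_rng(seed)
    samples = np.asarray(samples, dtype=float)
    n = len(samples)
    resampled = [estimator(samples[rng.integers(0, n, size=n)])
                 for _ in range(n_resamples)]
    return float(np.std(resampled, ddof=1))

# Example with synthetic Gaussian data (Binder ratio -> 3 for a Gaussian):
series = np.random.default_rng(1).normal(0.0, 1.0, size=10_000)
print(binder_ratio(series), bootstrap_error(series, binder_ratio))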
anyon_count
The number of e-anyons (σ̂^x-basis) or m-anyons (σ̂^z-basis) in the system.

anyon_density
σ̂^x-basis: the number of e-anyons divided by the number of lattice sites. σ̂^z-basis: the number of m-anyons divided by the number of plaquettes.

delta
The difference of the star and plaquette expectation values: Δ = ⟨Â_v⟩ − ⟨B̂_p⟩.

energy
The total energy E = ⟨Ĥ⟩.

energy_h
The electric field term E_h = ⟨−h Σ_l σ̂^x_l⟩.

energy_lmbda
The gauge field term E_λ = ⟨−λ Σ_l σ̂^z_l⟩. We write λ as lmbda because some programming languages feature lambda as a keyword.

energy_J
The plaquette term E_J = ⟨−J Σ_p B̂_p⟩.

energy_mu
The star term E_μ = ⟨−μ Σ_v Â_v⟩.

fredenhagen_marcu
The equal-time Fredenhagen-Marcu loop operator [6-8]:

\[
O^{x/z}_{\mathrm{FM}} = \lim_{L \to \infty}
\frac{\langle \prod_{l \in C^{x/z}_{1/2}} \hat{\sigma}^{x/z}_l \rangle}
{\sqrt{\big| \langle \prod_{l \in C^{x/z}} \hat{\sigma}^{x/z}_l \rangle \big|}} ,
\tag{2}
\]

where C^{x/z}_{1/2} is half of, and C^{x/z} is, a full Wilson loop in the σ̂^x-basis ('t Hooft loop in the σ̂^z-basis). The loop is automatically constructed for all supported lattices, and the perimeter scales with O(L), where L is the linear system size. When probing perimeter/area laws, the user should change L. We currently do not support off-diagonal loop operators, e.g., measuring products of σ̂^x-operators in the σ̂^z-basis.

largest_cluster
The largest connected cluster of neighboring bonds with σ̂^x = −1 (σ̂^z = −1) in the σ̂^x-basis (σ̂^z-basis). This observable is used to calculate the percolation strength, see [9].

percolation_probability
Measures the bond percolation probability, i.e., whether we can wind around the system while only traversing bonds with σ̂^x = −1 (σ̂^z = −1) in the σ̂^x-basis (σ̂^z-basis). Formally, it is the expectation value ⟨Π̂^{x/z}⟩ of the projector

\[
\hat{\Pi}^{x/z} = \sum_{W(j) \neq 0} |\{\hat{\sigma}^{x/z}\}_j\rangle \langle \{\hat{\sigma}^{x/z}\}_j| ,
\tag{3}
\]

over all possible configurations {σ̂^{x/z}}_j with non-zero winding number W(j) of connected link clusters of neighboring σ̂^{x/z} = −1. These clusters are called percolating clusters. For details, see [9-11].

percolation_strength
If a snapshot does not have a percolating cluster, the percolation strength is 0. If a snapshot has a percolating cluster, the percolation strength is defined as the result of largest_cluster divided by the total number of links in the system. For details, see [9,11].

plaquette_percolation_probability
Similar to the percolation probability of bonds. Two plaquettes are in the same cluster if they share a link l with τ̂^x_l = −1. For details, see [11].

plaquette_z
The plaquette expectation value ⟨B̂_p⟩.

sigma_x
The electric field expectation value ⟨σ̂^x⟩.

sigma_x_susceptibility
The static susceptibility

\[
\chi^x = \frac{1}{N} \int_0^\beta \langle \hat{\sigma}^x(0)\, \hat{\sigma}^x(\tau) \rangle_c \, d\tau ,
\tag{4}
\]

where N is the total number of links (qubits) and the integral is over the imaginary time τ.¹ Importantly, χ^x can be calculated both in the σ̂^x- and the σ̂^z-basis.

¹The dynamical (fidelity) susceptibility is a trivial extension and contains an extra τ dependency in the integral.

sigma_z
The gauge field expectation value ⟨σ̂^z⟩.

sigma_z_susceptibility
The static susceptibility

\[
\chi^z = \frac{1}{N} \int_0^\beta \langle \hat{\sigma}^z(0)\, \hat{\sigma}^z(\tau) \rangle_c \, d\tau ,
\tag{5}
\]

where N is the total number of links (qubits) and the integral is over the imaginary time τ. Importantly, χ^z can be calculated both in the σ̂^x- and the σ̂^z-basis.

staggered_imaginary_times
Order parameter from [5]. It is defined as

\[
O^{x/z}_{\mathrm{SIT}} = \frac{1}{\beta} \Big[ (\tau^k_1 - 0) - (\tau^k_2 - \tau^k_1) + \dots
+ (-1)^{N(k)-1} \big(\tau^k_{N(k)} - \tau^k_{N(k)-1}\big)
+ (-1)^{N(k)} \big(\beta - \tau^k_{N(k)}\big) \Big] ,
\tag{6}
\]

where τ^k_n is the imaginary time of the n-th tuple spin flip of type k. Here, k is a plaquette p (star s) of links in the σ̂^x-basis (σ̂^z-basis).
This order parameter can neither be evaluated from snapshots, nor from any other method that does not have access to imaginary time.

star_x
The star expectation value ⟨Â_v⟩.

string_number
The total number of links with σ̂^x = −1 in the σ̂^x-basis (σ̂^z = −1 in the σ̂^z-basis).

3 Installation & interfaces

There are five ways to use ParaToric: directly from within code (C, C++, Python) or via the command line (C++, Python). All interfaces require compiling C++ code. We tested the compilation with GCC 15 and Clang 20.

All interfaces implement three functionalities. Thermalization simulations are used to benchmark the thermalization process of a Markov chain and are primarily a diagnostic tool. Regular sampling routines are used for generating snapshots and measuring observables, e.g., in the context of continuous phase transitions. Hysteresis routines are a variant of the regular sampling routines where not one but an array of Hamiltonian parameters is provided and only one Markov chain is used for all parameters. The order of the Hamiltonian parameters in the input array matters: the last state of the previous parameter set is used as the initial state for the thermalization phase of the next parameter set. This simulation type should primarily be used when mapping out hysteresis curves in the vicinity of first-order phase transitions, hence the name. Since the hysteresis simulation returns the values of not one but many parameter sets, the output types are generally different from those of regular sampling. It is also much slower than regular sampling, because the simulations for different parameters can in general not be parallelized.

3.1 C++ interface

The C++ interface enables users to use a ParaToric public header from within another C++ project.

3.1.1 Build & Installation

The core requires C++23, CMake ≥ 3.23, and Boost ≥ 1.87 (older Boost versions may work, but were not tested). To compile it, run:

cmake -S . -B build -DCMAKE_BUILD_TYPE=Release \
    -DPARATORIC_ENABLE_NATIVE_OPT=ON -DPARATORIC_LINK_MPI=OFF \
    -DPARATORIC_BUILD_TESTS=ON
cmake --build build -jN
ctest --test-dir build -jN --output-on-failure
cmake --install build

Replace N with the number of cores to use, e.g. -j4 for 4 cores.

• -DCMAKE_BUILD_TYPE=Release. Only set to Debug if you're a developer.
• -DCMAKE_INSTALL_PREFIX. By default, executables install to ${CMAKE_INSTALL_BINDIR}/, headers to ${CMAKE_INSTALL_INCLUDEDIR}/paratoric, and static libraries to ${CMAKE_INSTALL_LIBDIR}/. The Python scripts expect ${CMAKE_SOURCE_DIR}/bin/; this directory always contains the paratoric executable. To install into a custom directory, pass it via -DCMAKE_INSTALL_PREFIX, e.g. -DCMAKE_INSTALL_PREFIX=/your/custom/directory/.
• -DPARATORIC_EXPORT_COMPILE_COMMANDS=ON. Export compile_commands.json for tooling.
• -DPARATORIC_LINK_MPI=OFF. Link the core to MPI, required on some clusters. The core itself does not need MPI.
• -DPARATORIC_ENABLE_NATIVE_OPT=OFF. Turn on -march=native on GCC and Clang.
• -DPARATORIC_ENABLE_AVX2=OFF. Enable AVX2 (Haswell New Instructions). Requires a CPU which supports AVX2.
• -DPARATORIC_BUILD_TESTS=PROJECT_IS_TOP_LEVEL. Compile the tests (recommended).
CMake usage (installed package)

cmake_minimum_required(VERSION 3.23)
project(my_qmc_app CXX)
find_package(paratoric CONFIG REQUIRED) # provides paratoric::core
add_executable(myapp main.cpp)
target_link_libraries(myapp PRIVATE paratoric::core)

CMake usage (as subdirectory)

If the core lives in deps/paratoric, add it and link to the same target:

add_subdirectory(deps/paratoric)
add_executable(myapp main.cpp)
target_link_libraries(myapp PRIVATE paratoric::core)

3.1.2 Public class ExtendedToricCode

The interface class ExtendedToricCode lives in the public ParaToric header; all symbols are in the paratoric namespace. All methods are static, take a single Config object and return a Result object. The required fields in config are documented for each method within the docstrings.

Result ExtendedToricCode::get_thermalization(Config config)
Run thermalization only. Required fields: lat_spec.{basis,lattice_type,system_size,beta,boundaries,default_spin}, param_spec.{mu,h,J,lmbda,h_therm,lmbda_therm}, sim_spec.{N_thermalization,N_resamples,custom_therm,observables,seed}, out_spec.{path_out,save_snapshots}.

Result ExtendedToricCode::get_sample(Config config)
Run a production measurement pass. Returns the observables selected in config. Required fields: lat_spec.{basis,lattice_type,system_size,beta,boundaries,default_spin}, param_spec.{mu,h,J,lmbda}, sim_spec.{N_samples,N_thermalization,N_between_samples,N_resamples,observables,seed}, out_spec.{path_out,save_snapshots}.

Result ExtendedToricCode::get_hysteresis(Config config)
Perform a hysteresis sweep, where the last state of the previous parameter set is used as the initial state of the following parameter set in h_hys & lmbda_hys. Required fields: lat_spec.{basis,lattice_type,system_size,beta,boundaries,default_spin}, param_spec.{mu,J,h_hys,lmbda_hys}, sim_spec.{N_samples,N_thermalization,N_between_samples,N_resamples,observables,seed}, out_spec.{paths_out,save_snapshots}.

3.1.3 Configuration type

The struct Config contains multiple nested specifications.

Top-level configuration: Config

Field       Type       Purpose
sim_spec    SimSpec    Simulation / MC controls (backend-consumed).
param_spec  ParamSpec  Model couplings / parameters (backend-consumed).
lat_spec    LatSpec    Lattice geometry and basis.
out_spec    OutSpec    Output folders and snapshot toggles.

Simulation specification (config.sim_spec)

Field              Type            Meaning / Defaults
N_samples          int             Number of recorded snapshots. Default 1000.
N_thermalization   int             Number of warmup steps before sampling. Typically O(L^d), where L is the system size and d is the dimensionality. Default 10000.
N_between_samples  int             Steps between consecutive snapshots. A higher value decreases autocorrelation and improves error bars. Typically O(L^d). Default 1000.
N_resamples        int             Bootstrap resamples for errors. Default 1000.
custom_therm       bool            Use custom thermalization schedule. Default false.
seed               int             PRNG seed. 0 means "random seed." Default 0.
observables        vector<string>  Names of observables to record each snapshot. For options, see Sec. 2.3.

Parameter specification (config.param_spec)

Field    Type    Meaning / Defaults
mu       double  Star term coefficient. Default 1.0.
h        double  Electric field term. Default 0.0.
J        double  Plaquette term. Default 1.0.
lmbda    double  Gauge-field term. Default 0.0.
h_therm  double  Thermalization value for h when using custom schedules. Default NaN (unused).
lmbda_therm  double          Thermalization value for lmbda when using custom schedules. Default NaN (unused).
h_hys        vector<double>  Sweep values of h for hysteresis runs. Default empty. Length must match lmbda_hys.
lmbda_hys    vector<double>  Sweep values of lmbda for hysteresis runs. Default empty. Length must match h_hys.

Lattice specification (config.lat_spec)

Field         Type    Meaning / Valid values
basis         char    Spin eigenbasis for the simulation. Must be 'x' or 'z'.
lattice_type  string  The lattice ("square", "triangular", "honeycomb" or "cubic").
system_size   int     Linear system size (per dimension).
beta          double  Inverse temperature β > 0.
boundaries    string  Boundary condition: "periodic" or "open".
default_spin  int     Initial link spin, must be +1 or -1.

Output specification (config.out_spec)

Field           Type            Meaning
path_out        string          Primary output folder name.
paths_out       vector<string>  Hysteresis subfolder names. Length must match h_hys.
save_snapshots  bool            Save snapshots toggle. Default false.

3.1.4 Return type

Field           C++ Type                                                  Meaning
series          vector<vector<variant<complex<double>,double>>>           Time series of all requested observables; thermalization is excluded (except for thermalization simulations). Outer index = observable, inner index = time point.
acc_ratio       vector<double>                                            Time series of Monte Carlo acceptance ratios.
mean            vector<double>                                            Bootstrap observable means.
mean_std        vector<double>                                            Bootstrap standard errors of the mean.
binder          vector<double>                                            Bootstrap Binder ratios.
binder_std      vector<double>                                            Bootstrap standard errors of the Binder ratios.
tau_int         vector<double>                                            Estimated integrated autocorrelation times.
series_hys      vector<vector<vector<variant<complex<double>,double>>>>   Hysteresis time series of all requested observables; thermalization is excluded (except for thermalization simulations). Outer vector = hysteresis parameters (order as in h_hys, lmbda_hys), middle vector = observables (order as in observables), inner vector = time series.
mean_hys        vector<vector<double>>                                    Hysteresis bootstrap observable means. Outer vector = hysteresis parameters (order as in h_hys, lmbda_hys), inner vector = observables (order as in observables).
mean_std_hys    vector<vector<double>>                                    Hysteresis bootstrap standard errors of the mean. Indices as above.
binder_hys      vector<vector<double>>                                    Hysteresis bootstrap Binder ratios. Indices as above.
binder_std_hys  vector<vector<double>>                                    Hysteresis bootstrap standard errors of the Binder ratios. Indices as above.
tau_int_hys     vector<vector<double>>                                    Hysteresis estimated integrated autocorrelation times. Indices as above.

3.1.5 C++ usage examples

Listing 1: C++ API, minimal call

// C++23
#include <paratoric/extended_toric_code.hpp> // assumed name; the original include list did not survive extraction
#include <limits>
#include <print>

int main() {
    using namespace paratoric;
    Config cfg{};

    // ---- lattice sub-config (required) ----
    cfg.lat_spec.basis = 'z';              // or 'x'
    cfg.lat_spec.lattice_type = "square";  // or "cubic", "honeycomb", ...
    cfg.lat_spec.system_size = 16;
    cfg.lat_spec.beta = 8.0;
    cfg.lat_spec.boundaries = "periodic";  // or "open"
    cfg.lat_spec.default_spin = 1;

    // ---- Hamiltonian parameters ----
    cfg.param_spec.mu = 1.0;     // star term
    cfg.param_spec.J = 1.0;      // plaquette term
    cfg.param_spec.h = 0.20;     // electric field term
    cfg.param_spec.lmbda = 0.00; // gauge-field term
    // Optional thermalization schedule values (used if custom_therm = true)
    cfg.param_spec.h_therm = std::numeric_limits<double>::quiet_NaN();
    cfg.param_spec.lmbda_therm = std::numeric_limits<double>::quiet_NaN();
    // (Optional) Hysteresis sweep grids - only read by get_hysteresis(...)
    cfg.param_spec.h_hys = {};     // e.g. {0.0, 0.1, 0.2, 0.3, 0.2, 0.1, 0.0}
    cfg.param_spec.lmbda_hys = {}; // e.g. {0.0, 0.1, 0.2, 0.3, 0.2, 0.1, 0.0}
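    // h_hys and lmbda_hys must have the same length; they are only read by
    // ExtendedToricCode::get_hysteresis(), where the last state of one
    // parameter set seeds the thermalization of the next (see Sec. 3.1.2).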
    // ---- Simulation (MC) controls ----
    cfg.sim_spec.N_samples = 0;            // 0 => thermalization-only
    cfg.sim_spec.N_thermalization = 5000;  // warmup steps
    cfg.sim_spec.N_between_samples = 10;   // thinning between snapshots
    cfg.sim_spec.N_resamples = 1000;       // bootstrap
    cfg.sim_spec.custom_therm = false;     // set true to use *_therm values
    cfg.sim_spec.seed = 12345;             // 0 => random seed

    // Observables to record each snapshot (backend-recognized names)
    cfg.sim_spec.observables = {
        "energy",            // total energy
        "plaquette_z",       // plaquette energy
        "anyon_count",       // number of anyons (x-basis: e-anyons, z-basis: m-anyons)
        "fredenhagen_marcu"  // example: Wilson/'t Hooft loop proxy
    };

    // ---- Output / I/O policy ----
    cfg.out_spec.path_out = "runs/sample"; // single-run output dir
    cfg.out_spec.paths_out = {};           // filled only for hysteresis
    cfg.out_spec.save_snapshots = false;   // set true to dump every snapshot
    cfg.out_spec.full_time_series = true;  // save full time series (FCS)

    // 1) Check thermalization
    Result warmup = ExtendedToricCode::get_thermalization(cfg);
    std::print("Thermalization series: {}", warmup.series);

    // 2) Production sample (set N_samples > 0 and call get_sample)
    cfg.sim_spec.N_samples = 2000;
    Result out = ExtendedToricCode::get_sample(cfg);
    std::print("Production autocorrelations: {}", out.tau_int);
    return 0;
}

3.2 C++ command-line interface

ParaToric ships a C++ command-line interface ${CMAKE_INSTALL_BINDIR}/paratoric that orchestrates the C++ backends, runs sweeps, and writes HDF5 (observables) and XML (snapshots) outputs.

3.2.1 Build & Installation

The command-line interface requires HDF5 ≥ 1.14.3 (older HDF5 versions may work, but were not tested). The core requires C++23, CMake ≥ 3.23, and Boost ≥ 1.87 (older Boost versions may work, but were not tested). To compile it, run:

cmake -S . -B build -DCMAKE_BUILD_TYPE=Release \
    -DPARATORIC_ENABLE_NATIVE_OPT=ON -DPARATORIC_LINK_MPI=OFF \
    -DPARATORIC_BUILD_TESTS=ON -DPARATORIC_BUILD_CLI=ON
cmake --build build -jN
ctest --test-dir build -jN --output-on-failure
cmake --install build

Replace N with the number of cores to use, e.g. -j4 for 4 cores.

• -DCMAKE_BUILD_TYPE=Release. Only set to Debug if you're a developer.
• -DCMAKE_INSTALL_PREFIX. By default, executables install to ${CMAKE_INSTALL_BINDIR}/, headers to ${CMAKE_INSTALL_INCLUDEDIR}/paratoric, and static libraries to ${CMAKE_INSTALL_LIBDIR}/. The Python scripts expect ${CMAKE_SOURCE_DIR}/bin/; this directory always contains the paratoric executable. To install into a custom directory, pass it via -DCMAKE_INSTALL_PREFIX, e.g. -DCMAKE_INSTALL_PREFIX=/your/custom/directory/.
• -DPARATORIC_EXPORT_COMPILE_COMMANDS=ON. Export compile_commands.json for tooling.
• -DPARATORIC_LINK_MPI=OFF. Link the core to MPI, required on some clusters. The core itself does not need MPI.
• -DPARATORIC_ENABLE_NATIVE_OPT=OFF. Turn on -march=native on GCC and Clang.
• -DPARATORIC_ENABLE_AVX2=OFF. Enable AVX2 (Haswell New Instructions). Requires a CPU which supports AVX2.
• -DPARATORIC_BUILD_TESTS=PROJECT_IS_TOP_LEVEL. Compile the tests (recommended).
• -DPARATORIC_BUILD_CLI=PROJECT_IS_TOP_LEVEL. Required for both the C++ and Python command-line interface.

Global options

Long flag            Short  Type    Description
--simulation         -sim   string  Simulation mode: etc_sample, etc_hysteresis, etc_thermalization.
--N_samples          -Ns    int     Number of recorded samples.
--N_thermalization   -Nth   int     Thermalization (warmup) steps.
--N_between_samples  -Nbs   int     Steps between samples (thinning).
--beta                  -bet       double  Inverse temperature β = 1/T.
--mu_constant           -muc       double  Star-term coupling μ.
--J_constant            -Jc        double  Plaquette coupling J.
--h_constant            -hc        double  Field h.
--lmbda_constant        -lmbdac    double  Field λ.
--h_constant_therm      -hct       double  Thermalization value for h (used if custom therm).
--lmbda_constant_therm  -lmbdact   double  Thermalization value for λ.
--h_hysteresis          -hhys      list    Hysteresis schedule for h (space-separated). Length must match -lmbdahys.
--lmbda_hysteresis      -lmbdahys  list    Hysteresis schedule for λ. Length must match -hhys.
--N_resamples           -Nr        int     Bootstrap resamples (error bars).
--custom_therm          -cth       bool    Use thermalization values (0/1).
--observables           -obs       list    Measured observables (space-separated).
--seed                  -s         int     PRNG seed; 0 means random seed.
--basis                 -bas       char    Spin basis: 'x' or 'z'.
--lattice_type          -lat       string  Lattice type (e.g. square, cubic, ...).
--system_size           -L         int     Linear lattice size (per dimension).
--boundaries            -bound     string  periodic or open.
--default_spin          -dsp       int     Initial link spin (+1 or -1).
--output_directory      -outdir    path    Output directory path.
--folder_name           -fn        string  Subfolder (of output directory) name for a single run.
--folder_names          -fns       list    Subfolders (of output directory) for hysteresis steps. Length must match -lmbdahys.
--snapshots             -snap      bool    Save snapshots into the specified subfolders of the output directory.
--full_time_series      -fts       bool    Save full time series toggle.
--process_index         -procid    int     Process identifier (logging/debug).

3.2.2 etc_sample

Runs a production measurement pass with the supplied configuration.

Listing 2: Example usage

./paratoric -sim etc_sample -Ns 2000 -Nth 5000 -Nbs 10 -Nr 1000 -bet 16.0 \
    -muc 1 -Jc 1 -hc 0.2 -lmbdac 0.0 -obs energy plaquette_z anyon_count \
    -bas z -lat square -L 16 -bound periodic -dsp 1 -outdir ./runs/sample \
    -snap=0 -fcs=1

3.2.3 etc_hysteresis

Runs a parameter sweep where the last state of step i initializes step i+1. Provide --h_hysteresis and --lmbda_hysteresis as space-separated lists, and --folder_names for per-step outputs.

Listing 3: Example usage

./paratoric -sim etc_hysteresis -Ns 1000 -Nth 2000 -Nbs 50 -Nr 500 -bet 12.0 \
    -muc 1 -Jc 1 -lmbdahys 0.2 0.2 0.2 0.2 0.2 0.2 0.2 \
    -hhys 0.0 0.1 0.2 0.3 0.2 0.1 0.0 -obs energy fredenhagen_marcu \
    -bas x -lat square -L 12 -bound periodic -dsp 1 -outdir ./runs/hys \
    -fns step0 step1 step2 step3 step4 step5 step6

3.2.4 etc_thermalization

Performs thermalization only (no production sampling).

Listing 4: Example usage

./paratoric -sim etc_thermalization -Ns 0 -Nth 5000 -Nbs 10 -Nr 500 -bet 10.0 \
    -muc 1 -Jc 1 -hc 0.3 -lmbdac 0.1 -hct 0.4 -lmbdact 0.2 -cth 1 \
    -obs energy anyon_density -bas z -lat square -L 10 -bound open -dsp 1 \
    -outdir ./runs/therm -snap=1

3.2.5 HDF5 structure

The output HDF5 file has the structure simulation/results/acc_ratio for an array of the acceptance weights (only for thermalization), simulation/results/observable_name/series for the time series (if it was enabled), and simulation/results/observable_name/{mean,mean_error,binder,binder_error,autocorrelation_time}. For regular sampling, mean, mean_error, binder, binder_error and autocorrelation_time contain doubles. For hysteresis, they contain an array of values for the hysteresis parameters (in the order of h_hysteresis and lmbda_hysteresis).
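This layout can be inspected with any HDF5 tool. As a minimal illustration, the following Python sketch reads the bootstrap results of one observable; h5py is not a ParaToric dependency, and the file name runs/sample/results.h5 is only a placeholder for whatever file your run produced:

import h5py

# Placeholder path; substitute the HDF5 file written by your run.
with h5py.File("runs/sample/results.h5", "r") as f:
    grp = f["simulation/results/energy"]           # any measured observable
    mean = grp["mean"][()]                         # bootstrap mean (double)
    err = grp["mean_error"][()]                    # bootstrap standard error
    tau = grp["autocorrelation_time"][()]          # integrated autocorrelation time
    series = grp["series"][:] if "series" in grp else None  # only if enabled
    print(f"energy = {mean} +/- {err} (tau_int = {tau})")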
3.3 C interface

The C interface enables users to use a ParaToric public header from within another C project or another programming language that supports a C-style interface. The C interface exposes a stable ABI and mirrors the C++ interface. Include the public C header; all functions return ptc_status_t.

3.3.1 Build & Installation

The code can be compiled in exactly the same fashion as for the C++ interface.

CMake usage (installed package)

cmake_minimum_required(VERSION 3.23)
project(my_qmc_c C)
find_package(paratoric CONFIG REQUIRED) # provides paratoric::core
add_executable(cdemo main.c)
target_link_libraries(cdemo PRIVATE paratoric::core)

CMake usage (as subdirectory)

add_subdirectory(deps/paratoric)
add_executable(cdemo main.c)
target_link_libraries(cdemo PRIVATE paratoric::core)

3.3.2 Status & error handling

Name              C type/values                                       Meaning
ptc_status_t      {PTC_STATUS_OK=0, PTC_STATUS_INVALID_ARGUMENT=1,    Return code of every API call.
                  PTC_STATUS_RUNTIME_ERROR=2, PTC_STATUS_NO_MEMORY=3,
                  PTC_STATUS_INTERNAL_ERROR=4}
ptc_last_error()  const char*                                         Thread-local error string. Valid until the next call.

3.3.3 Opaque handle

Create and destroy the interface instance. Use ptc_create(ptc_handle_t **out) and ptc_destroy(ptc_handle_t *h).

3.3.4 Configuration type

The top-level ptc_config_t aggregates four nested specs. Field names mirror the C++ Config.

Top-level configuration: ptc_config_t

Field   Type              Purpose
sim     ptc_sim_spec_t    Monte Carlo parameters.
params  ptc_param_spec_t  Hamiltonian parameters.
lat     ptc_lat_spec_t    Lattice parameters.
out     ptc_out_spec_t    Output paths and snapshot toggle.

Simulation specification (config.sim)

Field              Type                Meaning
N_samples          int                 Number of snapshots.
N_thermalization   int                 Thermalization steps.
N_between_samples  int                 Thinning between snapshots.
N_resamples        int                 Bootstrap resamples.
custom_therm       bool                Custom thermalization schedule.
seed               int                 PRNG seed (0 = random).
observables        const char* const*  Array of observable names (nullable).
N_observables      size_t              Length of observables.

Parameter specification (config.params)

Field            Type           Meaning
mu, h, J, lmbda  double         Couplings (star, electric, plaquette, gauge).
h_therm          double         Thermalization value for h if custom_therm=true.
lmbda_therm      double         Thermalization value for lmbda if custom_therm=true.
h_hys            const double*  Hysteresis schedule for h (nullable).
h_hys_len        size_t         Length of h_hys. Must match lmbda_hys_len.
lmbda_hys        const double*  Hysteresis schedule for lmbda (nullable).
lmbda_hys_len    size_t         Length of lmbda_hys. Must match h_hys_len.

Lattice specification (config.lat)

Field         Type         Meaning / Valid values
basis         char         Spin basis: 'x' or 'z'.
lattice_type  const char*  E.g. "triangular", "square", ...
system_size   int          Linear system size per dimension.
beta          double       Inverse temperature.
boundaries    const char*  "periodic" or "open".
default_spin  int          Initial link spin: +1 or -1.

Output specification (config.out)

Field           Type                Meaning
path_out        const char*         Single output directory (nullable).
paths_out       const char* const*  Output directories for hysteresis steps (nullable).
N_paths_out     size_t              Length of paths_out. Must match h_hys_len and lmbda_hys_len.
save_snapshots  bool                Toggle snapshot dumping.

3.3.5 Return type

All outputs are owned by the caller. Call ptc_result_destroy(&r) to free and zero.

Field      C type        Meaning
series     ptc_series_t  Time series (real/complex) of all requested observables; thermalization is excluded (except for thermalization simulations). Outer index = observable, inner index = time point.
acc_ratio  ptc_dvec_t    MC acceptance ratios for each update (thermalization).
mean, mean_std                ptc_dvec_t           Bootstrap mean and standard error (order as in observables).
binder, binder_std            ptc_dvec_t           Binder ratios and standard errors (order as in observables).
tau_int                       ptc_dvec_t           Integrated autocorrelation time (order as in observables).
series_hys                    ptc_series_blocks_t  Hysteresis time series. Outer index = hysteresis parameters (order as in h_hys, lmbda_hys), middle index = observables (order as in observables), inner vector = time series.
mean_hys, mean_std_hys,
binder_hys, binder_std_hys,
tau_int_hys                   ptc_dmat_t           Outer index = hysteresis parameters (order as in h_hys, lmbda_hys), inner index = observables (order as in observables).

3.3.6 Procedures (mirror the C++ API)

All fill a ptc_result_t *out on success and return PTC_STATUS_OK.

ptc_get_thermalization(ptc_handle_t *h, const ptc_config_t *cfg, ptc_result_t *out)
Run thermalization only. Required fields: cfg->lat.{basis,lattice_type,system_size,beta,boundaries,default_spin}, cfg->params.{mu,h,J,lmbda}, cfg->sim.{N_thermalization,N_resamples,observables,N_observables,seed}, cfg->out.{path_out,save_snapshots}.

ptc_get_sample(ptc_handle_t *h, const ptc_config_t *cfg, ptc_result_t *out)
Run a production measurement pass. Required fields: cfg->lat.{basis,lattice_type,system_size,beta,boundaries,default_spin}, cfg->params.{mu,h,J,lmbda,h_therm,lmbda_therm}, cfg->sim.{N_samples,N_thermalization,N_between_samples,N_resamples,custom_therm,observables,N_observables,seed}, cfg->out.{path_out,save_snapshots}.

ptc_get_hysteresis(ptc_handle_t *h, const ptc_config_t *cfg, ptc_result_t *out)
Run a hysteresis sweep over h_hys and/or lmbda_hys. The last state of step i initializes step i+1. Required fields: cfg->lat.{basis,lattice_type,system_size,beta,boundaries,default_spin}, cfg->params.{mu,h_hys,h_hys_len,J,lmbda_hys,lmbda_hys_len}, cfg->sim.{N_samples,N_thermalization,N_between_samples,N_resamples,observables,N_observables,seed}, cfg->out.{paths_out,N_paths_out,save_snapshots}.

3.3.7 C usage example

Listing 5: C API, minimal call

#include <paratoric/paratoric.h> /* assumed name; the original include list did not survive extraction */
#include <math.h>    /* NAN */
#include <stdbool.h> /* bool, false */
#include <stdio.h>   /* puts */

int main(void) {
    ptc_handle_t* h = NULL;
    if (ptc_create(&h) != PTC_STATUS_OK) { puts("create failed"); return 1; }

    ptc_lat_spec_t lat = {
        .basis = 'z', .lattice_type = "square", .system_size = 16,
        .beta = 8.0, .boundaries = "periodic", .default_spin = 1
    };
    ptc_param_spec_t ps = {
        .mu = 1.0, .h = 0.2, .J = 1.0, .lmbda = 0.0,
        .h_therm = NAN, .lmbda_therm = NAN,
        .h_hys = NULL, .h_hys_len = 0, .lmbda_hys = NULL, .lmbda_hys_len = 0
    };
    const char* obs[] = {"energy", "plaquette_z", "anyon_count"};
    ptc_sim_spec_t sim = {
        .N_samples = 0, /* thermalization-only initially */
        .N_thermalization = 5000, .N_between_samples = 10,
        .N_resamples = 1000, .custom_therm = false, .seed = 12345,
        .observables = obs, .N_observables = sizeof(obs)/sizeof(obs[0])
    };
    ptc_out_spec_t outspec = {
        .path_out = "runs/sample", .paths_out = NULL, .N_paths_out = 0,
        .save_snapshots = false
    };
    ptc_config_t cfg = { .sim = sim, .params = ps, .lat = lat, .out = outspec };

    ptc_result_t warm = {0};
    ptc_status_t st = ptc_get_thermalization(h, &cfg, &warm);
    if (st != PTC_STATUS_OK) { puts(ptc_last_error()); ptc_destroy(h); return 2; }
    ptc_result_destroy(&warm);

    cfg.sim.N_samples = 2000;
    ptc_result_t res = {0};
    st = ptc_get_sample(h, &cfg, &res);
    if (st != PTC_STATUS_OK) { puts(ptc_last_error()); ptc_destroy(h); return 3; }

    /* use res.mean, res.tau_int, ... */
    ptc_result_destroy(&res);
    ptc_destroy(h);
    return 0;
}
Memory rules. You own all buffers in ptc_result_t. Call ptc_result_destroy once per successful call.

3.4 Python bindings

ParaToric exposes a compiled Python extension module _paratoric with a submodule extended_toric_code. The bindings convert C++ vectors into NumPy arrays and release the global interpreter lock (GIL) while running the C++ kernels.

3.4.1 Build & Installation

The core requires C++23, CMake ≥ 3.23, and Boost ≥ 1.87 (older Boost versions may work, but were not tested). The Python bindings require a Python installation with NumPy and pybind11 (tested with version 3.0.1). pybind11 is included as a git submodule (you need to pull it!). To compile the Python bindings, run:

cmake -S . -B build -DCMAKE_BUILD_TYPE=Release \
    -DPARATORIC_ENABLE_NATIVE_OPT=ON -DPARATORIC_LINK_MPI=OFF \
    -DPARATORIC_BUILD_TESTS=ON -DPARATORIC_BUILD_PYBIND=ON \
    -DPython3_EXECUTABLE=<path-to-python3>
cmake --build build -jN
cmake --install build

• -DCMAKE_INSTALL_PREFIX. By default, executables install to ${CMAKE_INSTALL_BINDIR}/, headers to ${CMAKE_INSTALL_INCLUDEDIR}/paratoric, and static libraries to ${CMAKE_INSTALL_LIBDIR}/. The Python scripts expect ${CMAKE_SOURCE_DIR}/bin/; this directory always contains the paratoric executable. To install into a custom directory, pass it via -DCMAKE_INSTALL_PREFIX, e.g. -DCMAKE_INSTALL_PREFIX=/your/custom/directory/.
• -DPARATORIC_EXPORT_COMPILE_COMMANDS=ON. Export compile_commands.json for tooling.
• -DPARATORIC_LINK_MPI=OFF. Link the core to MPI, required on some clusters. The core itself does not need MPI.
• -DPARATORIC_ENABLE_NATIVE_OPT=OFF. Turn on -march=native on GCC and Clang.
• -DPARATORIC_ENABLE_AVX2=OFF. Enable AVX2 (Haswell New Instructions). Requires a CPU which supports AVX2.
• -DPARATORIC_BUILD_TESTS=PROJECT_IS_TOP_LEVEL. Compile the tests (recommended).
• -DPARATORIC_BUILD_CLI=PROJECT_IS_TOP_LEVEL. Required for both the C++ and Python command-line interface.

3.5 Python command-line interface

General options

Long flag               Short     Description
--help                  -h        Show help and exit.
--simulation            -sim      Simulation type selector.
--N_thermalization      -Nth      Thermalization steps (proposed updates).
--N_samples             -Ns       Number of samples/snapshots.
--N_between_steps       -Nbs      Steps between successive samples (thinning).
--N_resamples           -Nr       Bootstrap resamples.
--custom_therm          -cth      Use thermalization values for h, λ (0 or 1).
--observables           -obs      Space-separated list, e.g. fredenhagen_marcu percolation_probability energy.
--seed                  -seed     PRNG seed; 0 means random seed.
--mu_constant           -muc      Value of μ.
--J_constant            -Jc       Value of J.
--h_constant            -hc       Value of h.
--h_constant_therm      -hct      Thermalization value of h.
--lmbda_constant        -lmbdac   Value of λ.
--lmbda_constant_therm  -lmbdact  Thermalization value of λ.
--output_directory      -outdir   Output directory.
--snapshots             -snap     Save snapshots toggle (0/1).
--full_time_series      -fts      Save full time series toggle (0/1).
--processes             -proc     Logical CPU count for Python multiprocessing. 0 means all available cores. Negative numbers -x mean use all cores minus x. Default is -4.

Lattice-specific options

Long flag       Short   Description
--help          -h      Show help and exit.
--basis         -bas    Spin basis: x or z.
--lattice_type  -lat    square, cubic, triangular, honeycomb, ...
--system_size   -L      Linear size; in 2D, 30 yields a 30×30 lattice (unit cells).
--temperature   -T      Temperature T = 1/β > 0.
--boundaries    -bound  periodic or open.
--default_spin  -dsp    Initial edge spin: 1 or -1.

The command-line interface offers several sweep modes.
All sweep modes are embarrassingly parallel; set --processes close to the number of steps when possible.

3.5.2 T-sweep

Runs T_steps independent Markov chains for evenly spaced temperatures in [T_lower, T_upper] and plots all requested observables.

Listing 7: Example usage

python3 ./python/cli/paratoric.py -sim etc_T_sweep -Ns 1000 -muc 1 -Nth 2000 \
    -Nbs 100 -Tl 0.5 -Tu 5 -Ts 30 -hc 0.1 -Jc 1 -lmbdac 0.1 -Nr 1000 \
    -obs percolation_strength percolation_probability plaquette_percolation_probability \
    largest_cluster string_number energy energy_h energy_mu energy_J energy_lmbda \
    sigma_x sigma_z star_x plaquette_z staggered_imaginary_times delta anyon_count \
    anyon_density fredenhagen_marcu sigma_x_susceptibility sigma_z_susceptibility \
    -s 0 -bas x -lat square -L 4 -bound periodic -dsp 1 -outdir /path/to/out

Sweep-specific flags

Long flag     Short  Description
--simulation  -sim   Use etc_T_sweep.
--T_lower     -Tl    Lower bound of T.
--T_upper     -Tu    Upper bound of T.
--T_steps     -Ts    Number of temperatures between bounds.

3.5.3 h-sweep

Runs h_steps independent chains in parallel for evenly spaced h in [h_lower, h_upper].

Listing 8: Example usage

python3 ./python/cli/paratoric.py -sim etc_h_sweep -Ns 1000 -muc 1 -Nth 2000 \
    -Nbs 100 -hl 0.1 -hu 0.5 -hs 8 -T 0.03 -Jc 1 -lmbdac 0.2 -Nr 1000 \
    -obs percolation_strength percolation_probability plaquette_percolation_probability \
    largest_cluster string_number energy energy_h energy_mu energy_J energy_lmbda \
    sigma_x sigma_z star_x plaquette_z staggered_imaginary_times delta anyon_count \
    anyon_density fredenhagen_marcu sigma_x_susceptibility sigma_z_susceptibility \
    -s 0 -bas x -lat square -L 6 -bound periodic -dsp 1 -outdir /path/to/out

Sweep-specific flags

Long flag     Short  Description
--simulation  -sim   Use etc_h_sweep.
--h_lower     -hl    Lower bound of h.
--h_upper     -hu    Upper bound of h.
--h_steps     -hs    Number of field steps between bounds.

3.5.4 λ-sweep

Runs lmbda_steps independent chains in parallel for evenly spaced λ in [λ_lower, λ_upper].

Listing 9: Example usage

python3 ./python/cli/paratoric.py -sim etc_lmbda_sweep -Ns 1000 -muc 1 -Nth 2000 \
    -Nbs 100 -lmbdal 0.01 -lmbdau 1.0 -lmbdas 15 -T 0.1 -hc 0.3 -Jc 1 -Nr 1000 \
    -obs percolation_strength percolation_probability plaquette_percolation_probability \
    largest_cluster string_number energy energy_h energy_mu energy_J energy_lmbda \
    sigma_x sigma_z star_x plaquette_z staggered_imaginary_times delta anyon_count \
    anyon_density fredenhagen_marcu sigma_x_susceptibility sigma_z_susceptibility \
    -s 0 -bas x -lat square -L 4 -bound periodic -dsp 1 -outdir /path/to/out

Sweep-specific flags

Long flag      Short    Description
--simulation   -sim     Use etc_lmbda_sweep.
--lmbda_lower  -lmbdal  Lower bound of λ.
--lmbda_upper  -lmbdau  Upper bound of λ.
--lmbda_steps  -lmbdas  Number of field steps between bounds.

3.5.5 ◦-sweep

Runs Theta_steps independent chains in parallel along a circle in the (λ, h) plane centered at (lmbda_constant, h_constant) with radius radius, for angles Θ ∈ [Θ_lower, Θ_upper] measured anti-clockwise from the λ-axis, i.e., λ = lmbda_constant + radius · cos Θ and h = h_constant + radius · sin Θ.
Listing 10: Example usage

python3 ./python/cli/paratoric.py -sim etc_circle_sweep -Ns 1000 -muc 1 -Nth 2000 \
    -Nbs 100 -lmbdac 0.4 -rad 0.3 -Thl 0 -Thu 3.141 -Ths 15 -T 0.1 -hc 0.4 -Jc 1 \
    -Nr 1000 \
    -obs percolation_strength percolation_probability plaquette_percolation_probability \
    largest_cluster string_number energy energy_h energy_mu energy_J energy_lmbda \
    sigma_x sigma_z star_x plaquette_z staggered_imaginary_times delta anyon_count \
    anyon_density fredenhagen_marcu sigma_x_susceptibility sigma_z_susceptibility \
    -s 0 -bas x -lat square -L 4 -bound periodic -dsp 1 -outdir /path/to/out

Sweep-specific flags

Long flag         Short    Description
--simulation      -sim     Use etc_circle_sweep.
--lmbda_constant  -lmbdac  Circle center in λ.
--h_constant      -hc      Circle center in h.
--radius          -rad     Circle radius.
--Theta_lower     -Thl     Lower bound of Θ.
--Theta_upper     -Thu     Upper bound of Θ.
--Theta_steps     -Ths     Number of angles between bounds.

3.5.6 Hysteresis-sweep

Uses the hysteresis schedule specified in -hhys and -lmbdahys. This mode runs two Markov chains, one in the original parameter order specified in -hhys and -lmbdahys, and one with the reversed parameter order, i.e., it calculates both branches of the hysteresis loop.

Listing 11: Example usage

python3 ./python/cli/paratoric.py -sim etc_hysteresis -Nbs 5000 -Ns 10000 -muc 1 \
    -Nth 20000 -hhys 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 -T 0.1 -Jc -1 \
    -lmbdahys 0.5 0.525 0.55 0.575 0.6 0.625 0.65 0.675 0.7 0.725 0.75 -Nr 1000 \
    -obs plaquette_percolation_probability percolation_strength percolation_probability \
    largest_cluster string_number energy energy_h energy_mu energy_J energy_lmbda \
    sigma_x sigma_z star_x plaquette_z staggered_imaginary_times delta anyon_count \
    anyon_density fredenhagen_marcu sigma_x_susceptibility sigma_z_susceptibility \
    -s 0 -bas z -lat square -L 4 -bound periodic -dsp 1 -outdir /path/to/out

Sweep-specific flags

Long flag           Short      Description
--simulation        -sim       Use etc_hysteresis.
--lmbda_hysteresis  -lmbdahys  Hysteresis schedule for λ. Length must match -hhys.
--h_hysteresis      -hhys      Hysteresis schedule for h. Length must match -lmbdahys.

3.5.7 Thermalization

Runs repetitions independent chains in parallel and reports observables and MC acceptance ratios at every step, averaged over chains.

Listing 12: Example usage

python3 ./python/cli/paratoric.py -sim etc_thermalization -muc 1 -Nth 2000 -reps 10 \
    -lmbdac 2 -T 0.1 -hc 0.3 -Jc 1 -Nr 1000 \
    -obs percolation_strength percolation_probability plaquette_percolation_probability \
    largest_cluster string_number energy energy_h energy_mu energy_J energy_lmbda \
    sigma_x sigma_z star_x plaquette_z staggered_imaginary_times delta anyon_count \
    anyon_density fredenhagen_marcu sigma_x_susceptibility sigma_z_susceptibility \
    -s 0 -bas x -lat square -L 4 -bound periodic -dsp 1 -outdir /path/to/out

Sweep-specific flags

Long flag      Short  Description
--simulation   -sim   Use etc_thermalization.
--repetitions  -reps  Number of Markov chains to average.

4 Using ParaToric

4.1 Monte Carlo Updates

There is no need for the user to explicitly call specific updates or interact with internal C++ classes when using the documented interfaces. Internally, we use all five updates described in the original algorithm by Wu, Deng, and Prokof'ev [5].
These must furthermore be supplemented with the following two updates: because at high temperatures and at zero off-diagonal fields the spin at imaginary time τ = 0 ≡ β cannot be flipped, we allow for flipping the spin on the entire imaginary-time axis on one bond or on a plaquette (star) in the σ̂^x-basis (σ̂^z-basis). These updates only change the energy terms diagonal in the given basis and are trivial when caching the total integrated diagonal energy (the update locally flips the sign of the total integrated potential energy). Another advantage is that integrated autocorrelation times for observables diagonal in the given basis improve even in regimes that were previously accessible. All seven updates are equally likely to be proposed, and we use a 64-bit Mersenne Twister for pseudorandom numbers [12] with the ability to externally set the seed. Some updates have early exits for input parameters for which they would always be rejected.

4.2 Monte Carlo Diagnostics

There are two compilation modes, Release and Debug. In production runs, one should always use the Release mode; it still gives the user enough information to diagnose sampling problems without severe performance impacts.

4.2.1 Thermalization mode

We provide thermalization routines which should be used before production runs to ensure proper thermalization (also known as burn-in). Thermalization times can vary drastically between different observables and initial conditions. We provide an example of sufficient and insufficient thermalization in Fig. 2. We recommend using the provided Python command-line interface, which will also plot the thermalization of all measured observables for the user. In thermalization runs, we also return the Monte Carlo acceptance ratio of every update. This can also be used to diagnose freezing (in the measurement phase, use the integrated autocorrelation time instead), e.g., when the acceptance ratio is always identical and/or very low. In case one suspects a serious sampling problem, we recommend recompiling the project in the Debug mode, which provides a wide array of runtime debug information about the proposed steps, acceptance ratios, and intermediate results. However, do not use the Debug mode in production runs, as it negatively impacts performance.

4.2.2 Integrated autocorrelation time

When measuring observables, we first thermalize the system with N_thermalization steps, then measure N_samples times with N_between_samples steps between measurements. The normalized autocorrelation function ρ_O(k) of an observable O_k (observable O measured at time k), applied to a discrete time series of length N, is given by

\[
\rho_O(k) = \frac{C(k)}{C(0)} , \qquad
C(k) = \frac{1}{N-k} \sum_{i=0}^{N-k-1} (O_i - \bar{O})(O_{i+k} - \bar{O}) , \qquad
\bar{O} = \frac{1}{N} \sum_{i=0}^{N-1} O_i .
\tag{7}
\]

It is a statistical measure of the correlations between measurements of observable O at times i and i + k.² We define the integrated autocorrelation time

\[
\tau^O_{\mathrm{int}} = \frac{1}{2} + \sum_{k \geq 1} \rho_O(k) .
\tag{8}
\]

Large τ_int are generally undesirable since they increase error bars and can lead to bias. In case of perfect sampling, we would have ρ_O(0) = 1 and ρ_O(k) = 0 for all k ≥ 1, i.e., each measurement is only correlated with itself but not with other measurements, and τ_int = 1/2. In practice, this is usually not feasible, and we have to work with a finite autocorrelation time τ_int > 1/2. When using ParaToric, we strongly recommend monitoring τ_int for all simulations and all observables. It is automatically calculated for every observable based on the full time series.

²In ParaToric, the autocorrelation function is calculated efficiently using fast Fourier transforms.
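For users who want to cross-check the reported values, Eqs. (7)-(8) can be estimated independently with a few lines of NumPy. The sketch below is not ParaToric's internal implementation; like ParaToric, it uses an FFT for the autocovariance, and it truncates the sum at the first non-positive ρ(k), a common heuristic against statistical noise at large k:

import numpy as np

def tau_int(series):
    """Integrated autocorrelation time, Eqs. (7)-(8), via FFT."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    n = len(x)
    # FFT-based autocovariance; zero-padding to 2n avoids circular wrap-around.
    f = np.fft.rfft(x, 2 * n)
    acov = np.fft.irfft(f * np.conj(f))[:n] / np.arange(n, 0, -1)  # C(k), Eq. (7)
    rho = acov / acov[0]
    tau = 0.5
    for k in range(1, n):
        if rho[k] <= 0:  # truncate where the estimate drowns in noise
            break
        tau += rho[k]
    return tau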
As a rule of thumb, the autocorrelation is fine as long as τ_int ≪ N_samples; otherwise it leads to bias and seriously underestimated error bars. In the vicinity of phase transitions, τ_int dramatically increases ("critical slowing down") [13]. Importantly, τ_int can differ vastly between different observables! If the autocorrelation is too high, increase the number of steps between samples. In more complicated cases, one may need to adapt the update proposal distributions and/or the updates themselves as a last resort.

It is also important to mention that ParaToric only computes a statistical estimate of τ_int. Many factors determine how accurate this estimate is, and crucially, the system needs to be properly thermalized. In principle, one can use τ_int as computed above directly for calculating error bars of correlated time series; however, ParaToric uses a more robust bootstrapping approach.

4.2.3 Error bars

ParaToric applies the stationary bootstrap [14-16] for all error bars, thus capturing autocorrelation effects. Large τ_int will lead to worse error bars. The only parameter that the user can change is the number of bootstrap resamples N_resamples. The default is 1000, which is enough in most cases. Note that a too low value of N_between_samples increases the relative computational cost of performing the measurements, which may negatively affect the code efficiency at no statistical gain. If the error bars are too large, either the number of samples is too low (in which case one should increase N_samples) or the autocorrelation is too large (in which case one could additionally increase N_between_samples).

4.3 Tips & tricks

4.3.1 Probing ground state physics

The algorithm implemented by ParaToric fundamentally requires a finite temperature T > 0. However, in QMC simulations, there is always a finite-size energy gap (the difference between the energy of the ground state and the first excited state). Additionally, some phases, like the topological ground state of the toric code, have a physical bulk gap (even at L → ∞). As long as the temperature is well below the total gap, we are exponentially close to the ground state. Usually, a temperature T ∼ 1/L suffices for the toric code, although other situations may arise.

4.3.2 Probing first-order transitions

ParaToric provides functionalities to probe weak and strong first-order phase transitions. The hysteresis mode can be used to probe hysteresis loops in the vicinity of strong first-order phase transitions, by repeating the simulation two times and mirroring the order of the parameters in h_hysteresis and lmbda_hysteresis. Weak first-order transitions can be detected by plotting a time-series histogram of an observable (it exhibits a double-peak structure). Both approaches have been used in the context of the toric code [11].

4.3.3 Choosing the basis

Sometimes, one can work in both the σ̂^x- and the σ̂^z-basis. The performance can vary drastically! Generally, the σ̂^x-basis is more efficient for h/J > λ/μ and vice versa.

4.3.4 Choosing N_thermalization

Based on our experience, N_thermalization = 500 L^d / T is a sensible choice for small fields, where d is the dimensionality of the system. Nevertheless, one should make use of the provided tools to benchmark thermalization, see Sec. 4.2, and rather err on the side of safety. A small helper illustrating this rule of thumb is sketched below.
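A minimal illustration (not part of ParaToric):

def suggested_n_thermalization(L, d, T):
    """Rule of thumb from Sec. 4.3.4: N_thermalization ~ 500 * L**d / T."""
    return int(500 * L**d / T)

# Example: square lattice (d = 2), L = 20, ground-state regime T = 1/L = 0.05
# gives 500 * 20**2 / 0.05 = 4,000,000 proposed updates, matching the
# 500*L*L*L choice used in the benchmarks of Sec. 4.4 (where beta = L).
print(suggested_n_thermalization(L=20, d=2, T=1/20))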
4.3.5 Choosing N_samples

Neglecting autocorrelation effects, the error of an observable ΔO scales as ΔO ∼ 1/√N_samples. More samples are, in principle, always better and lead to lower error bars. Smoothness of a curve of statistical results also requires that error bars be small in relation to the parameter grid size. If one increases the parameter resolution (e.g., in the field h), then one typically also increases N_samples.

4.3.6 Choosing N_between_samples

The optimal choice for N_between_samples is the integrated autocorrelation time. A good guess of the autocorrelation time, based on previous simulations for smaller system sizes or nearby parameter points, can result in substantial computational savings in production runs for large system sizes. Near continuous phase transitions, the integrated autocorrelation time has an additional dependence τ_int ∼ L^z, where z is the dynamical exponent of the universality class of the transition. For a 2D system in the vicinity of a continuous phase transition, a sensible scaling for N_between_samples could be O(L² × β × L^z) (O(L²) links, each with O(β) off-diagonal spin flips).

4.3.7 Choosing N_resamples

As with N_samples, more is better (but also more costly). Usually N_resamples ≈ 1000 is a sensible choice.

4.3.8 Extracting snapshots

When the option save_snapshots is enabled, ParaToric will write the snapshots into the directory specified in path_out (or in the paths paths_out for hysteresis sweeps). The snapshots are saved in the GraphML format (XML-based), which is supported by many major graph libraries. One snapshot will be saved for every measurement of observables, i.e., N_samples snapshots in total. All snapshots are written into a single file to save disk space and simultaneously offer a structured, self-documenting format. Every edge stores a list of spins: the first spin belongs to the first snapshot, the second one to the second snapshot, and so on. There are no special requirements for disks or memory bandwidth; the snapshots are kept in RAM and are only written to disk after the simulation has finished.

4.3.9 Adding new observables/lattices/updates

After adding features to the code, always benchmark them using analytical results, other numerical methods (exact diagonalization, tensor networks, ...), and unit tests. We advise using a fixed seed during development, e.g., when checking whether two methods produce the exact same result. The code has some built-in features to check self-consistency; e.g., at the end of each simulation, the code checks whether the cached total energy is numerically close to the total energy calculated from scratch. Do not turn off these features, as they will point you toward bugs!

4.4 Benchmarks

4.4.1 Thermalization

In Fig. 2 (which we already discussed before and repeat here for completeness) we plot the gauge field energy ∝ λ for two systems: one is sufficiently thermalized, the other one is not. The plots are a direct output of the Python command-line interface. Always make sure that the system is well thermalized; a simple way to visualize this is sketched below.

Figure 2: Good and bad thermalization plots produced by the Python command-line interface. We show the gauge field energy ∝ λ. (a) The system is well thermalized; after its initial drop, the energy fluctuates around the expectation value. (b) The system is not yet thermalized; the floating average is still decreasing.
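A floating (moving) average like the one shown in Fig. 2 can be produced from any recorded time series with a few lines of NumPy/Matplotlib. This is only a sketch; the synthetic series below merely mimics a thermalizing run:

import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-in for a recorded energy series of a thermalization run:
# an exponentially decaying transient plus noise.
rng = np.random.default_rng(0)
steps = np.arange(5000)
energy_series = -100.0 + 30.0 * np.exp(-steps / 500.0) + rng.normal(0.0, 1.0, steps.size)

# Floating average over a sliding window; it becomes flat once the chain is
# thermalized (cf. Fig. 2a) and keeps drifting otherwise (cf. Fig. 2b).
window = 200
floating_avg = np.convolve(energy_series, np.ones(window) / window, mode="valid")

plt.plot(steps, energy_series, alpha=0.3, label="series")
plt.plot(steps[window - 1:], floating_avg, label=f"floating average ({window} steps)")
plt.xlabel("MC step")
plt.ylabel("energy")
plt.legend()
plt.show()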
4.4.2 Integrated autocorrelation time

Here, we demonstrate how the integrated autocorrelation time τint grows with decreasing N_between_samples. We use the following setup:

Listing 13: N_between_samples benchmarking setup
>>> import numpy as np
>>> from paratoric import extended_toric_code as etc
>>> series, mean, std, binder, binder_std, tau_int = etc.get_sample(
...     N_samples=100000, N_thermalization=10000, N_between_samples=1,
...     N_resamples=1000, custom_therm=False, observables=["energy"],
...     seed=0, mu=1.0, h=0.0, h_therm=0.0, J=1.0, lmbda=0,
...     lmbda_therm=0.0, basis='x', lattice_type="square", system_size=4,
...     beta=10, boundaries="periodic", default_spin=1,
...     save_snapshots=False)

We only run the simulation once per N_between_samples. The results are:

N_between_samples   1      10     100   500   1000
τint (energy)       1895   141.4  19.7  3.24  1.64

For very small N_between_samples, τint is very high: whenever an update is rejected, the configuration is identical to the one measured before! A choice of N_between_samples between 500 and 1000 would be a good tradeoff between τint and runtime for this example. Increasing N_between_samples to well over 1000 would be a waste of CPU time.

4.4.3 Run-time

We benchmark the run-time for two realistic parameter sets on the square lattice with varying system size. The first setup simulates the toric code without fields (note that all tests were run on a laptop, and some conditions, like the CPU temperature, were not identical for all simulations; the benchmarks are therefore only an approximation):

Listing 14: L benchmarking setup 1
>>> import numpy as np
>>> from paratoric import extended_toric_code as etc
>>> L = 20
>>> series, mean, std, binder, binder_std, tau_int = etc.get_sample(
...     N_samples=10000, N_thermalization=500*L*L*L,
...     N_between_samples=8*L*L*L, N_resamples=1000, custom_therm=False,
...     observables=["energy", "sigma_x", "sigma_z"], seed=0, mu=1.0,
...     h=0.0, h_therm=0.0, J=1.0, lmbda=0, lmbda_therm=0.0, basis='x',
...     lattice_type="square", system_size=L, beta=L,
...     boundaries="periodic", default_spin=1, save_snapshots=False)

We only run one test per system size. The results are:

L             4     8      12   16    20
Runtime (s)   3.1   21.3   75   197   379

From our experience, for large L the update complexity is approximately O(L³ log β), owing to the chosen cubic dependency of N_thermalization and N_between_samples and an O(log β) cost of operations on the imaginary-time axis, see the β benchmark below. The system size itself does not impact the performance of individual updates, as the interactions are local. On computing clusters, we have realized system sizes of up to L = 80 for the square lattice; this number will only increase in the future as CPUs get faster.

The second setup simulates the toric code with fields in both the σ̂x- and σ̂z-direction:

Listing 15: L benchmarking setup 2
>>> import numpy as np
>>> from paratoric import extended_toric_code as etc
>>> L = 20
>>> series, mean, std, binder, binder_std, tau_int = etc.get_sample(
...     N_samples=10000, N_thermalization=500*L*L*L,
...     N_between_samples=8*L*L*L, N_resamples=1000, custom_therm=False,
...     observables=["energy", "sigma_x", "sigma_z"], seed=0, mu=1.0,
...     h=0.2, J=1.0, lmbda=0.2, basis='x', lattice_type="square",
...     system_size=L, beta=L, boundaries="periodic", default_spin=1,
...     save_snapshots=False)

We only run one test per system size.
The results are:

L             4     8      12    16    20
Runtime (s)   3.9   34.1   133   323   689

We also test how the run-time depends on the inverse temperature β, with the following setup:

Listing 16: β benchmarking setup
>>> import numpy as np
>>> from paratoric import extended_toric_code as etc
>>> series, mean, std, binder, binder_std, tau_int = etc.get_sample(
...     N_samples=10000, N_thermalization=20000, N_between_samples=2000,
...     N_resamples=1000, custom_therm=False,
...     observables=["energy", "sigma_x", "sigma_z"], seed=0, mu=1.0,
...     h=0.2, J=1.0, lmbda=0.2, basis='x', lattice_type="square",
...     system_size=10, beta=20, boundaries="periodic", default_spin=1,
...     save_snapshots=False)

We only run one test per β. The results are:

β             4      8      12     16     20
Runtime (s)   14.9   17.2   19.1   20.0   22.1

This benchmark illustrates an appealing feature of our implementation: there is almost no slowing down when increasing β, implying that very low temperatures are within reach with ParaToric. This seemingly paradoxical result (the number n of off-diagonal star/plaquette and magnetic-field operators must physically scale linearly in β) is explained by the fact that most searches along the imaginary-time axis cost only O(log n), by making use of binary searches.

4.4.4 Topological phase transition

We probe the well-known topological phase transition in the ground state of the extended toric code (1) on the square lattice, where we have a gapped Z2 quantum spin liquid for small fields h, λ and a topologically trivial phase for high fields. We set J = μ = 1, λ = 0.2 and sweep h over the known critical value hc(λ = 0.2) ≈ 0.33 [5,9] for L ∈ {10, 20, 30, 40} in the σ̂x-basis. The temperature is set to T = 1/L to capture ground-state physics. We take 30000 snapshots, with 8L³ steps in between snapshots and 500L³ thermalization steps. (If we were interested in quantities like critical exponents and needed to go very close to the critical field, we would have to take the dynamical exponent z, via τint ∼ L^z, into account in the number of steps between snapshots to compensate for critical slowing down; as it stands, the error bars are simply larger near the critical field.) We confirm that the system is well thermalized and that all integrated autocorrelation times are below 10, i.e., the produced snapshots can safely be considered independent and identically distributed. We show the percolation probability, the Fredenhagen-Marcu string order parameter, and the staggered imaginary-time order parameter in Fig. 3. All of them reproduce the known phase boundary.

Figure 3: Topological phase transition of the extended toric code (1) on the square lattice (all panels for λ = 0.2 and L ∈ {10, 20, 30, 40}). The critical field is known and located at hc(λ = 0.2) ≈ 0.33 [5,9]; our results agree with the value published in the literature, within error bars. (a) The Binder ratio UΠx of the percolation probability [9] features a crossing point around h = 0.33. (b) The percolation probability Πx is non-zero in the topological phase and zero in the trivial phase; the transition gets sharper with increasing system size. (c) The staggered imaginary-time order parameter Ox_SI is zero in the topological phase and non-zero in the trivial phase. (d) The Fredenhagen-Marcu order parameter Ox_FM is zero in the topological phase and non-zero in the trivial phase. The loop length grows as O(L). Compared to the other order parameters, it is very noisy because it is a multi-body correlator and, on top of that, a ratio of two exponentially small numbers.
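For orientation, a driver for such a sweep could look as follows. This is a sketch under assumptions: the get_sample signature is taken from Listings 13-15 above, but the h-grid, the fixed seed, and the use of the energy as a stand-in observable are ours (the observable keys for the percolation and string order parameters are not reproduced in this section):

Listing: h-sweep driver for the transition (sketch)
>>> import numpy as np
>>> from paratoric import extended_toric_code as etc
>>> results = {}
>>> for L in [10, 20, 30, 40]:
...     for h in np.linspace(0.20, 0.40, 21):
...         # T = 1/L (i.e., beta = L), 500 L^3 thermalization steps and
...         # 8 L^3 steps between each of the 30000 snapshots, cf. Sec. 4.4.4
...         out = etc.get_sample(N_samples=30000,
...             N_thermalization=500 * L**3, N_between_samples=8 * L**3,
...             N_resamples=1000, custom_therm=False,
...             observables=["energy"], seed=0, mu=1.0, h=h, J=1.0,
...             lmbda=0.2, basis='x', lattice_type="square",
...             system_size=L, beta=L, boundaries="periodic",
...             default_spin=1, save_snapshots=False)
...         series, mean, std, binder, binder_std, tau_int = out
...         results[(L, h)] = (mean, std, tau_int)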
5 Conclusion & Outlook

We have presented ParaToric, a continuous-time quantum Monte Carlo solver for the toric code in a parallel field. ParaToric builds on the existing work by Wu, Deng, and Prokof'ev [5] and is also applicable to high temperatures and low off-diagonal couplings. ParaToric can store snapshots, which makes it ideally suited to generate training/benchmarking data for applications in other fields, such as lattice gauge theories, cold-atom and other quantum simulators, quantum spin liquids, artificial intelligence, and quantum error correction. We believe it also serves a pedagogical purpose. Another strength of ParaToric is its interoperability with other programming languages: the C interface is compatible with virtually all programming languages, so ParaToric can be seamlessly integrated into other projects. ParaToric comes with an MIT license.

For future releases of ParaToric we plan extensions along the following lines:

• Additional lattices such as the kagome and the ruby lattice. Given the underlying graph structure used in ParaToric, such extensions are straightforward.

• Additional observables: we think here of, for instance, the finite-temperature extension of the fidelity susceptibility to diagnose phase transitions in the absence of a local order parameter. It would also be worthwhile to have further off-diagonal observables, such as the off-diagonal Fredenhagen-Marcu string operators, or correlation functions between off-diagonal operators in space and/or time. Measurements of the Rényi entropy are also high on the to-do list. The latter two classes, however, require major changes to the code, and testing.

• Additional interaction types. There are many classes of models in which topological order may be emergent instead of explicit (as it is in the toric code). Such models typically have interactions beyond the ones covered in ParaToric, such as longer-range Ising interactions, and miss some others (typically the plaquette-type interactions, and sometimes even the star terms). It is in general an open problem how to efficiently simulate such models at the lowest temperatures (even for sign-free models). Extending ParaToric to deal with other types of interactions can thus serve as an additional tool for benchmarking purposes and algorithmic exploration.

Acknowledgements

The authors acknowledge fruitful discussions with A. Bohrdt, G. De Paciani, G. Dünnweber, F. Grusdt, L. Homeier, and N. V. Prokof'ev.

Author contributions

SML did the main coding and planning work with input from LP. All authors contributed to the writing of the manuscript.

Funding information

This research was funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program - ERC Starting Grant SimUcQuam (Grant Agreement No. 948141), and by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC-2111 - project number 390814868.

References

[1] A. Kitaev, Fault-tolerant quantum computation by anyons, Annals of Physics 303(1), 2 (2003).
[2] E. Dennis, A. Kitaev, A. Landahl and J. Preskill, Topological quantum memory, Journal of Mathematical Physics 43(9), 4452 (2002).

[3] A. G. Fowler, M. Mariantoni, J. M. Martinis and A. N. Cleland, Surface codes: Towards practical large-scale quantum computation, Phys. Rev. A 86, 032324 (2012).

[4] J. B. Kogut, An introduction to lattice gauge theory and spin systems, Rev. Mod. Phys. 51, 659 (1979).

[5] F. Wu, Y. Deng and N. Prokof'ev, Phase diagram of the toric code model in a parallel magnetic field, Phys. Rev. B 85, 195104 (2012).

[6] K. Fredenhagen and M. Marcu, Charged states in Z2 gauge theories, Communications in Mathematical Physics 92(1), 81 (1983).

[7] K. Fredenhagen and M. Marcu, Confinement criterion for QCD with dynamical quarks, Phys. Rev. Lett. 56, 223 (1986).

[8] K. Fredenhagen and M. Marcu, Dual interpretation of order parameters for lattice gauge theories with matter fields, Nuclear Physics B - Proceedings Supplements 4, 352 (1988).

[9] S. M. Linsel, A. Bohrdt, L. Homeier, L. Pollet and F. Grusdt, Percolation as a confinement order parameter in Z2 lattice gauge theories, Phys. Rev. B 110, L241101 (2024).

[10] G. Dünnweber, S. M. Linsel, A. Bohrdt and F. Grusdt, Percolation renormalization group analysis of confinement in Z2 lattice gauge theories, Phys. Rev. B 111, 024314 (2025).

[11] S. M. Linsel, L. Pollet and F. Grusdt, Independent e- and m-anyon confinement in the parallel field toric code on non-square lattices (2025).

[12] M. Matsumoto and T. Nishimura, Mersenne twister: A 623-dimensionally equidistributed uniform pseudo-random number generator, ACM Trans. Model. Comput. Simul. 8, 3 (1998).

[13] U. Wolff, Critical slowing down, Nuclear Physics B - Proceedings Supplements 17, 93 (1990).

[14] D. N. Politis and J. P. Romano, The stationary bootstrap, Journal of the American Statistical Association 89(428), 1303 (1994).

[15] D. N. Politis and H. White, Automatic block-length selection for the dependent bootstrap, Econometric Reviews 23(1), 53 (2004).

[16] A. Patton, D. N. Politis and H. White, Correction to "Automatic block-length selection for the dependent bootstrap" by D. Politis and H. White, Econometric Reviews 28(4), 372 (2009).